From xen-devel-bounces@lists.xenproject.org Sun Jan 01 03:45:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 03:45:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470059.729504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBpHC-0004SP-0A; Sun, 01 Jan 2023 03:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470059.729504; Sun, 01 Jan 2023 03:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBpHB-0004SH-Pm; Sun, 01 Jan 2023 03:44:45 +0000
Received: by outflank-mailman (input) for mailman id 470059;
 Sun, 01 Jan 2023 03:44:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBpHA-0004S7-Op; Sun, 01 Jan 2023 03:44:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBpHA-00013c-La; Sun, 01 Jan 2023 03:44:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBpHA-0002NI-0f; Sun, 01 Jan 2023 03:44:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pBpH9-0005jA-SU; Sun, 01 Jan 2023 03:44:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RguQacY/nKjE++pQVabwjzKV2k/elA3f+unIfSSfgd4=; b=sELqmNvg1iiLjZYnOgXCQLXLCr
	3ssSpNNgRYJ8IuPNvGRCh5XGsxKkUrx+LHhoMGymAyRkh6CCZwbY/MHEVngc3fiarqcXTZwPAvj8s
	J7D31v6itc9F0IDl9nCvFXWVeIwxP0POC8TTcN1PKjASWmr5eWVmzvDYLtgQauxkD6v4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175539-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175539: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e4cf7c25bae5c3b5089a3c23a897f450149caef2
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Jan 2023 03:44:43 +0000

flight 175539 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175539/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                e4cf7c25bae5c3b5089a3c23a897f450149caef2
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   85 days
Failing since        173470  2022-10-08 06:21:34 Z   84 days  175 attempts
Testing same since   175539  2022-12-31 19:43:50 Z    0 days    1 attempts

------------------------------------------------------------
3254 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 496420 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 06:15:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 06:15:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470074.729515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBrcS-0008Ac-Tj; Sun, 01 Jan 2023 06:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470074.729515; Sun, 01 Jan 2023 06:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBrcS-0008AM-NK; Sun, 01 Jan 2023 06:14:52 +0000
Received: by outflank-mailman (input) for mailman id 470074;
 Sun, 01 Jan 2023 06:14:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBrcR-0008AC-67; Sun, 01 Jan 2023 06:14:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBrcR-0005Bi-2r; Sun, 01 Jan 2023 06:14:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBrcQ-0003AD-J4; Sun, 01 Jan 2023 06:14:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pBrcQ-0002Fh-Fq; Sun, 01 Jan 2023 06:14:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B6qAQVTj8LEGUumIcQN7yU9CUSdSYg5pj7zXZWPJuTo=; b=sSEcqiZrTndV9tQ3wfP9cnBjKl
	cs3mh3w8OBCGm/vd+d95kCRT38X+T5I/4gUc4AIUoYZbBamMlOnrKTJI0p2qSZCnIQNGRMuvdjH0B
	N41V+6rnd+/fVvHfbH1moOyHOV6Dc7sPt1PShnz8ozQKSTSCeDSHzDOUiCoDqdzIBF0w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175540-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175540: FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start.2:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Jan 2023 06:14:50 +0000

flight 175540 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175540/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 175407
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 175407
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 175407
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 175407

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 175407 pass in 175540
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 175407 pass in 175540
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 175407 pass in 175540
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 175407 pass in 175540
 test-amd64-amd64-libvirt-vhd 20 guest-start.2    fail in 175536 pass in 175538
 test-armhf-armhf-xl-credit1  14 guest-start      fail in 175536 pass in 175540
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 175538 pass in 175540
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat  fail pass in 175407
 test-amd64-i386-pair         10 xen-install/src_host       fail pass in 175536
 test-armhf-armhf-xl-multivcpu 14 guest-start               fail pass in 175538
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 175538
 test-armhf-armhf-xl-rtds     14 guest-start                fail pass in 175538
 test-armhf-armhf-xl-vhd      13 guest-start                fail pass in 175538
 test-armhf-armhf-libvirt-raw 13 guest-start                fail pass in 175538

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175407 like 175197
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 175536 like 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 175536 like 175197
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 175536 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 175536 never pass
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 175536 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 175536 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 175536 never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-start       fail in 175538 like 175197
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 175538 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 175538 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   17 days
Testing same since   175407  2022-12-19 11:42:26 Z   12 days   30 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken

Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 08:37:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 08:37:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470088.729525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBtq9-0002sY-8L; Sun, 01 Jan 2023 08:37:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470088.729525; Sun, 01 Jan 2023 08:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBtq9-0002sR-5i; Sun, 01 Jan 2023 08:37:09 +0000
Received: by outflank-mailman (input) for mailman id 470088;
 Sun, 01 Jan 2023 08:37:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBtq8-0002sH-3V; Sun, 01 Jan 2023 08:37:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBtq8-0000qv-0C; Sun, 01 Jan 2023 08:37:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBtq7-0000ay-1v; Sun, 01 Jan 2023 08:37:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pBtq7-000331-0o; Sun, 01 Jan 2023 08:37:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=na2H8WO1yr9+JElHKFaihx0o84md8ZQZMdA028n1+S0=; b=cCcgFGw4y2+kE69OeSdOEOEVS8
	5yGoo2YTrm9W5InmyU7g7Jpnfk419H1vDb93iALEGzYMdsUtH+ycGeonss5weBE+ocvHwhHzCm1px
	6EOaojoa5ah+bFZNpALKclfu7j+1MZcj9CjZpnOIIemmvW7bPLPGc3Lg4dVV/hMp2o6w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175541-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175541: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
X-Osstest-Versions-That:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Jan 2023 08:37:07 +0000

flight 175541 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175541/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175526
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175534
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175534
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175534
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175534
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175534
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175534
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175534
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175534
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175534
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175534
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175534
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8
baseline version:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8

Last test of basis   175541  2023-01-01 01:54:34 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jan 01 14:48:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 14:48:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470104.729538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBzdf-0000Pa-Dd; Sun, 01 Jan 2023 14:48:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470104.729538; Sun, 01 Jan 2023 14:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBzdf-0000PT-7a; Sun, 01 Jan 2023 14:48:39 +0000
Received: by outflank-mailman (input) for mailman id 470104;
 Sun, 01 Jan 2023 14:48:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBzde-0000PF-NV; Sun, 01 Jan 2023 14:48:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBzde-0001Z6-L3; Sun, 01 Jan 2023 14:48:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pBzde-00064Y-28; Sun, 01 Jan 2023 14:48:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pBzde-0005XB-1g; Sun, 01 Jan 2023 14:48:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iUGD9b/kwuM4sIt4+/IPfkt86Qh9gf6evsgMCWrsG3c=; b=na3j6fHAJ19ilOgRjjuZ4CQlkF
	2SsGlsXPLMZ5mtMJwIM71XWaSibRlLBwxnNr76gfxVX6Wc2SkEcS511dG7JpLuSC4Nih/e1SQVeNd
	XmltDLJfpwzIbI8jj6da32XObZT/9LtIYgA5IDWOkedww0MM/we63rE2IjUTOM5+5h1w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175542-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175542: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e4cf7c25bae5c3b5089a3c23a897f450149caef2
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Jan 2023 14:48:38 +0000

flight 175542 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175542/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                e4cf7c25bae5c3b5089a3c23a897f450149caef2
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   85 days
Failing since        173470  2022-10-08 06:21:34 Z   85 days  176 attempts
Testing same since   175539  2022-12-31 19:43:50 Z    0 days    2 attempts

------------------------------------------------------------
3254 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 496420 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 15:01:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 15:01:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470113.729548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBzpm-0001A7-FG; Sun, 01 Jan 2023 15:01:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470113.729548; Sun, 01 Jan 2023 15:01:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pBzpm-0001A0-By; Sun, 01 Jan 2023 15:01:10 +0000
Received: by outflank-mailman (input) for mailman id 470113;
 Sun, 01 Jan 2023 15:01:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=11tG=46=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pBzpk-00019u-T3
 for xen-devel@lists.xenproject.org; Sun, 01 Jan 2023 15:01:09 +0000
Received: from wout3-smtp.messagingengine.com (wout3-smtp.messagingengine.com
 [64.147.123.19]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1bc8970d-89e5-11ed-91b6-6bf2151ebd3b;
 Sun, 01 Jan 2023 16:01:04 +0100 (CET)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id F206132002FB;
 Sun,  1 Jan 2023 10:01:00 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Sun, 01 Jan 2023 10:01:01 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 1 Jan 2023 10:00:56 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bc8970d-89e5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1672585260; x=
	1672671660; bh=X7obdlIu9PYYeP+uZsieZNOt5OQU6gNv6h9CZLugzpQ=; b=E
	jOj39WEP4Ya9aiY5z1cNiMq59QPtZ5lSR1Fz7SKlt/qt480vfBRKFNEsItcGF31z
	BU9TxV7WbMyICR1unSeS/EACE/cV4CkVGKmHxAQWPQeZqIVykcGmOLgi57amXdpA
	hYNs3+cg8apMBM3kmtfgliJnO/TPJWxlOceYhila7CL26JMa674XNe1lB5XRtXSs
	OA2CDb1iEJDlc4iy5BKyqRCd8PN9zqRusraZLrrWv0L1xVK/KdY5tpcHZ4J5/Pzy
	DZi6Rta6MuDFvxgS3okcA/x9VHSlxSeKK3JY2RDpDUAtZyXzYHC9XAiYS3uqrXh2
	9APFOUzMNRTeuDviBzx5g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1672585260; x=1672671660; bh=X7obdlIu9PYYeP+uZsieZNOt5OQU
	6gNv6h9CZLugzpQ=; b=hSQokPzcSPcDKmZporw7m5E0UIGhIkehrm/AiA/nRecW
	oTaPm4r0UO5v8lfTGNmnLcRHmhzvzXl9Uh41y8MxCPSl3LUTI4XILY4SPUBI30z9
	3qyBV/j438ksvWRxF+IHVKraC7LO/CM1k+SxxmC7MZfHYRMm3s2foaeZqFSw11jk
	tVemo1wsIflh/bYCJ10aG3CshRnyZSk1R6IzPD6HAb6/mjjGR4NLAYxdWV/Kzfnk
	679LbJbNkwzWVTVjG1GyjK7VO36NOIITTNMp48BCKPqPyKn4gW1Fm0NA5SsIfMuk
	7ICNLmMG/rd+9cBev1+4lKf2ZL+l24DBxq4uz4dM6A==
X-ME-Sender: <xms:LKCxY8WZqMc2eGTAPIOT4YF24c2IkasVNbfuWpMyHsbcyY3yFX4VRQ>
    <xme:LKCxYwmTglwkrGYKGcwCDa0_t6gojO7koEqeIf5hsfFGRr4C-NK0BCK_zIpUuBhGM
    fJcDTMD6MEIZg>
X-ME-Received: <xmr:LKCxYwYJXwVddni7u8DHbOuOx2qrpdJgqWNeOTeE5nMruTFsx4_sq4rdE7nK>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrjedtgdeijecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefgudel
    teefvefhfeehieetleeihfejhfeludevteetkeevtedtvdegueetfeejudenucevlhhush
    htvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhes
    ihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:LKCxY7Wn_XWyX3gE5b8sZ4RA-VihSeZM6x7vNcErCLKL181wpoJOMA>
    <xmx:LKCxY2nKzO6SVRp9WwNcehUFwoGk-kcjU0m_51bcvtFYId6uxIbc-g>
    <xmx:LKCxYwchv9L1yvysLFR5F2_hpBY9JRfE7MSfn9-QOyKqWb60vynorQ>
    <xmx:LKCxY7xbFS_xzW0r2fYjLCHfNJMoa2EVnN0IlE-I88mXOVYJuYe3rw>
Feedback-ID: i1568416f:Fastmail
Date: Sun, 1 Jan 2023 16:00:50 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] x86/cpuid: Infrastructure for leaves 7:1{ecx,edx}
Message-ID: <Y7GgJJ5wyds83Uwn@mail-itl>
References: <20221231003007.26916-1-andrew.cooper3@citrix.com>
 <20221231003007.26916-2-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="tMAtNi7zpm7M/yTn"
Content-Disposition: inline
In-Reply-To: <20221231003007.26916-2-andrew.cooper3@citrix.com>


--tMAtNi7zpm7M/yTn
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sun, 1 Jan 2023 16:00:50 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] x86/cpuid: Infrastructure for leaves 7:1{ecx,edx}

On Sat, Dec 31, 2022 at 12:30:06AM +0000, Andrew Cooper wrote:
> We don't actually need ecx yet, but adding it in now will reduce the amount to
> which leaf 7 is out of order in a featureset.
>
> cpufeatureset.h remains in leaf architectural order for the sanity of anyone
> trying to locate where to insert new rows.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> ---
>  tools/misc/xen-cpuid.c                      | 10 ++++++++++
>  xen/arch/x86/cpu/common.c                   |  3 ++-
>  xen/include/public/arch-x86/cpufeatureset.h |  3 +++
>  xen/include/xen/lib/x86/cpuid.h             | 15 ++++++++++++++-
>  4 files changed, 29 insertions(+), 2 deletions(-)
>
> diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
> index d5833e9ce879..0091a11a67bc 100644
> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -202,6 +202,14 @@ static const char *const str_7b1[32] =
>      [ 0] = "ppin",
>  };
>
> +static const char *const str_7c1[32] =
> +{
> +};
> +
> +static const char *const str_7d1[32] =
> +{
> +};
> +
>  static const char *const str_7d2[32] =
>  {
>      [ 0] = "intel-psfd",
> @@ -229,6 +237,8 @@ static const struct {
>      { "0x80000021.eax",  "e21a", str_e21a },
>      { "0x00000007:1.ebx", "7b1", str_7b1 },
>      { "0x00000007:2.edx", "7d2", str_7d2 },
> +    { "0x00000007:1.ecx", "7b1", str_7c1 },
> +    { "0x00000007:1.edx", "7b1", str_7d1 },

"7c1" and "7d1" ?



-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--tMAtNi7zpm7M/yTn
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmOxoCQACgkQ24/THMrX
1ywWTggAlO6Y2XPzgM/eZh8mSyzx8sApwoYd3Qa8kz/xV1Kj3HSX4kI6tL7z+RMH
yXC6bideqATvzFRzTWWzgQ+3nwiSi/EFe0H3W4EeKy1En2SBjpVNeZDjeLk+uPj4
1cbbv61BDcBd196DLrMG71L0UTJYe+RnJPOwGPtp/gSgzy2/zqAHfsj7707qOJOe
QnqxuZeP7D42+cHJ6ot8e31x7ScCL4V0Aa0LOR7svmBPHm1bPNjpjgguELfdGYQC
UjmJgrAUHIL+00S4r488PeL22YuLnX3PkW8YqNjtB1awLyiOuKtSZxO9sDr5rGPy
onuMim5IYhj9HkxrTWhak1vI0xxvyg==
=W6fD
-----END PGP SIGNATURE-----

--tMAtNi7zpm7M/yTn--


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 15:11:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 15:11:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Sun, 1 Jan 2023 16:10:54 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] x86/shstk: Disable CET-SS on parts susceptible to
 fractured updates
Message-ID: <Y7GifofUaQ8u8ugr@mail-itl>
References: <20221231003007.26916-1-andrew.cooper3@citrix.com>
 <20221231003007.26916-3-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="+a2V8WRjif75DJuw"
Content-Disposition: inline
In-Reply-To: <20221231003007.26916-3-andrew.cooper3@citrix.com>


--+a2V8WRjif75DJuw
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sun, 1 Jan 2023 16:10:54 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] x86/shstk: Disable CET-SS on parts susceptible to
 fractured updates

On Sat, Dec 31, 2022 at 12:30:07AM +0000, Andrew Cooper wrote:
> Refer to Intel SDM Rev 70 (Dec 2022), Vol3 17.2.3 "Supervisor Shadow Stack
> Token".
>
> Architecturally, an event delivery which starts in CPL>3 and switches shadow
> stack will first validate the Supervisor Shstk Token and set the busy bit,
> then pushes LIP/CS/SSP.  One example of this is an NMI interrupting Xen.
>
> Some CPUs suffer from an issue called fracturing, whereby a fault/vmexit/etc
> between setting the busy bit and completing the event injection renders the
> action non-restartable, because when it comes time to restart, the busy bit is
> found to be already set.
>
> This is far more easily encountered under virt, yet it is not the fault of the
> hypervisor, nor the fault of the guest kernel.  The fault lies somewhere
> between the architectural specification, and the uarch behaviour.
>
> Intel have allocated CPUID.7[1].edx[18] CET_SSS to enumerate that supervisor
> shadow stacks are safe to use.  Because of how Xen lays out its shadow stacks,
> fracturing is not expected to be a problem on native.
>
> Detect this case on boot and default to not using shstk if virtualised.
> Specifying `cet=shstk` on the command line will override this heuristic and
> enable shadow stacks irrespective.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
>
> I've got a query out with AMD, but so far it is only Intel CPUs known to be
> impacted.
>
> This ideally wants backporting to Xen 4.14.  I have no idea how likely it is
> to need to backport the prerequisite patch for new feature words, but we've
> already had to do that once for security patches...
> ---
>  docs/misc/xen-command-line.pandoc           |  7 +++++-
>  tools/libs/light/libxl_cpuid.c              |  2 ++
>  tools/misc/xen-cpuid.c                      |  1 +
>  xen/arch/x86/cpu/common.c                   | 11 +++++++--
>  xen/arch/x86/setup.c                        | 37 ++++++++++++++++++++---
>  xen/include/public/arch-x86/cpufeatureset.h |  1 +
>  6 files changed, 53 insertions(+), 6 deletions(-)
>
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index 923910f553c5..19d4d815bdee 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -287,10 +287,15 @@ can be maintained with the pv-shim mechanism.
>      protection.
>
>      The option is available when `CONFIG_XEN_SHSTK` is compiled in, and
> -    defaults to `true` on hardware supporting CET-SS.  Specifying
> +    generally defaults to `true` on hardware supporting CET-SS.  Specifying
>      `cet=no-shstk` will cause Xen not to use Shadow Stacks even when support
>      is available in hardware.
>
> +    Some hardware suffers from an issue known as Supervisor Shadow Stack
> +    Fracturing.  On such hardware, Xen will default to not using Shadow Stacks
> +    when virtualised.  Specifying `cet=shstk` will override this heuristic and
> +    enable Shadow Stacks unilaterally.
> +
>  *   The `ibt=` boolean controls whether Xen uses Indirect Branch Tracking for
>      its own protection.
>
> diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
> index 2aa23225f42c..d97a2f3338bc 100644
> --- a/tools/libs/light/libxl_cpuid.c
> +++ b/tools/libs/light/libxl_cpuid.c
> @@ -235,6 +235,8 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
>          {"fsrs",         0x00000007,  1, CPUID_REG_EAX, 11,  1},
>          {"fsrcs",        0x00000007,  1, CPUID_REG_EAX, 12,  1},
>
> +        {"cet-sss",      0x00000007,  1, CPUID_REG_EDX, 18,  1},
> +
>          {"intel-psfd",   0x00000007,  2, CPUID_REG_EDX,  0,  1},
>          {"mcdt-no",      0x00000007,  2, CPUID_REG_EDX,  5,  1},
>
> diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
> index 0091a11a67bc..ea33b587665d 100644
> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -208,6 +208,7 @@ static const char *const str_7c1[32] =
>
>  static const char *const str_7d1[32] =
>  {
> +    [18] = "cet-sss",
>  };
>
>  static const char *const str_7d2[32] =
> diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
> index b3fcf4680f3a..d962f384a995 100644
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -346,11 +346,18 @@ void __init early_cpu_init(void)
>  	       x86_cpuid_vendor_to_str(c->x86_vendor), c->x86, c->x86,
>  	       c->x86_model, c->x86_model, c->x86_mask, eax);
> =20
> -	if (c->cpuid_level >= 7)
> -		cpuid_count(7, 0, &eax, &ebx,
> +	if (c->cpuid_level >= 7) {
> +		uint32_t max_subleaf;
> +
> +		cpuid_count(7, 0, &max_subleaf, &ebx,
>  			    &c->x86_capability[FEATURESET_7c0],
>  			    &c->x86_capability[FEATURESET_7d0]);
>
> +                if (max_subleaf >= 1)

tabs vs spaces ...

Is this file imported from Linux? It uses tabs for indentation, contrary
to the rest of the Xen code base.

> +			cpuid_count(7, 1, &eax, &ebx, &ecx,
> +				    &c->x86_capability[FEATURESET_7d1]);
> +        }
> +
>  	eax = cpuid_eax(0x80000000);
>  	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
>  		ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 566422600d94..e052b7b748fa 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -96,7 +96,7 @@ size_param("highmem-start", highmem_start);
>  #endif
> =20
>  #ifdef CONFIG_XEN_SHSTK
> -static bool __initdata opt_xen_shstk = true;
> +static int8_t __initdata opt_xen_shstk = -1;
>  #else
>  #define opt_xen_shstk false
>  #endif
> @@ -1101,9 +1101,40 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      /* Choose shadow stack early, to set infrastructure up appropriately. */
>      if ( opt_xen_shstk && boot_cpu_has(X86_FEATURE_CET_SS) )
>      {
> -        printk("Enabling Supervisor Shadow Stacks\n");
> +        /*
> +         * Some CPUs suffer from Shadow Stack Fracturing, an issue whereby a
> +         * fault/VMExit/etc between setting a Supervisor Busy bit and the
> +         * event delivery completing renders the operation non-restartable.
> +         * On restart, event delivery will find the Busy bit already set.
> +         *
> +         * This is a problem on native, but outside of synthetic cases, only
> +         * with #MC against a stack access (in which case we're dead anyway).
> +         * It is a much bigger problem under virt, because we can VMExit for a
> +         * number of legitimate reasons and tickle this bug.
> +         *
> +         * CPUs with this addressed enumerate CET-SSS to indicate that
> +         * supervisor shadow stacks are now safe to use.
> +         */
> +        bool cpu_has_bug_shstk_fracture =
> +            boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
> +            !boot_cpu_has(X86_FEATURE_CET_SSS);
> +
> +        /*
> +         * On native, assume that Xen won't be impacted by shstk fracturing
> +         * problems.  Under virt, be more conservative and disable shstk by
> +         * default.
> +         */
> +        if ( opt_xen_shstk == -1 )
> +            opt_xen_shstk =
> +                cpu_has_hypervisor ? !cpu_has_bug_shstk_fracture
> +                                   : true;
> +
> +        if ( opt_xen_shstk )
> +        {
> +            printk("Enabling Supervisor Shadow Stacks\n");
>
> -        setup_force_cpu_cap(X86_FEATURE_XEN_SHSTK);
> +            setup_force_cpu_cap(X86_FEATURE_XEN_SHSTK);
> +        }
>      }
> =20
>      if ( opt_xen_ibt && boot_cpu_has(X86_FEATURE_CET_IBT) )
> diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
> index 7a896f0e2d92..f6a46f62a549 100644
> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -290,6 +290,7 @@ XEN_CPUFEATURE(INTEL_PPIN,         12*32+ 0) /*   Protected Processor Inventory
>
>  /* Intel-defined CPU features, CPUID level 0x00000007:1.ecx, word 14 */
>  /* Intel-defined CPU features, CPUID level 0x00000007:1.edx, word 15 */
> +XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
>
>  /* Intel-defined CPU features, CPUID level 0x00000007:2.edx, word 13 */
>  XEN_CPUFEATURE(INTEL_PSFD,         13*32+ 0) /*A  MSR_SPEC_CTRL.PSFD */
> -- 
> 2.11.0
>
>

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--+a2V8WRjif75DJuw
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmOxon4ACgkQ24/THMrX
1yzQ8Af/eDRIjmUkuoE9Oc63KDRB5WB+pCi8hFiwWELOfAkGpWiUVjqVtTCwIoba
lZaKVgTb1tNX5FSk7BOEzngH09zATAEk4b61ZVldAq/3LLuiW2YGMAZDRDf5al7l
pkMxoy+Tk9Ze+bEhawl2dC0FXzWgTXkVEUWd12AtFtImLcF8Gx4UILI8/USzi5Om
V1UPMppQlI4+d9MvPw47jwnLniWqxb0235ZRQPUqZkGM10HJCtwzN1H2tXD3TgzP
ANxNDptb+oEHqP4HDi5OyDanjm+ht4ARJVwq5nBJ4Rua/zYg/PK+GgxBEyFRMACe
Q8m577inn76kaVNmQbPx7mgkWLCGTw==
=zDH5
-----END PGP SIGNATURE-----

--+a2V8WRjif75DJuw--


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 15:29:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 15:29:12 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175543-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175543: FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Jan 2023 15:29:03 +0000

flight 175543 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175543/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 175407
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 175407
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 175407
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 175407

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 175407 pass in 175543
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 175407 pass in 175543
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 175407 pass in 175543
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 175407 pass in 175543
 test-amd64-i386-pair     10 xen-install/src_host fail in 175540 pass in 175543
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 175540 pass in 175543
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 175540 pass in 175543
 test-armhf-armhf-xl-rtds     14 guest-start      fail in 175540 pass in 175543
 test-armhf-armhf-xl-vhd      13 guest-start      fail in 175540 pass in 175543
 test-armhf-armhf-libvirt-raw 13 guest-start      fail in 175540 pass in 175543
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat  fail pass in 175407
 test-amd64-i386-pair         11 xen-install/dst_host       fail pass in 175540

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 175197
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175407 like 175197
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   18 days
Testing same since   175407  2022-12-19 11:42:26 Z   13 days   31 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-amd64-xl-credit2 broken

Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 21:21:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 21:21:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470142.729581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC5lz-0000aY-Qn; Sun, 01 Jan 2023 21:21:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470142.729581; Sun, 01 Jan 2023 21:21:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC5lz-0000aR-NU; Sun, 01 Jan 2023 21:21:39 +0000
Received: by outflank-mailman (input) for mailman id 470142;
 Sun, 01 Jan 2023 21:21:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Fqzy=46=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pC5ly-0000aL-Fz
 for xen-devel@lists.xenproject.org; Sun, 01 Jan 2023 21:21:38 +0000
Received: from sonic309-21.consmr.mail.gq1.yahoo.com
 (sonic309-21.consmr.mail.gq1.yahoo.com [98.137.65.147])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 442f083c-8a1a-11ed-91b6-6bf2151ebd3b;
 Sun, 01 Jan 2023 22:21:35 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic309.consmr.mail.gq1.yahoo.com with HTTP; Sun, 1 Jan 2023 21:21:32 +0000
Received: by hermes--production-ne1-7b69748c4d-srx4j (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 7f437a9e58cf454b53c47f515b720546; 
 Sun, 01 Jan 2023 21:21:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 442f083c-8a1a-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v5] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Date: Sun,  1 Jan 2023 16:21:19 -0500
Message-Id: <f600368591f6fafea4b00e1c5205782052e43ddb.1672605633.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <f600368591f6fafea4b00e1c5205782052e43ddb.1672605633.git.brchuckz.ref@aol.com>
Content-Length: 10830

Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
as noted in docs/igd-assign.txt in the Qemu source code.

Currently, when the xl toolstack is used to configure a Xen HVM guest with
Intel IGD passthrough to the guest with the Qemu upstream device model,
a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
a different slot. This problem often prevents the guest from booting.

The only available workaround is unsatisfactory: configure Xen HVM guests to
use the old and no longer maintained Qemu traditional device model, available
from xenbits.xen.org, which does reserve slot 2 for the Intel IGD.

To implement this feature in the Qemu upstream device model for Xen HVM
guests, introduce the following new functions, types, and macros:

* XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
* XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
* typedef XenPTQdevRealize function pointer
* XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
* xen_igd_reserve_slot and xen_igd_clear_slot functions

The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
the xl toolstack with the gfx_passthru option enabled, which sets the
igd-passthru=on option to Qemu for the Xen HVM machine type.

The new xen_igd_reserve_slot function also needs a stub implementation in
hw/xen/xen_pt_stub.c to prevent a build failure (FTBFS) at the link stage
when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough;
the stub does nothing.

The new xen_igd_clear_slot function overrides qdev->realize of the parent
PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
created in hw/i386/pc_piix.c for the case when igd-passthru=on.

Move the call to xen_host_pci_device_get, and the associated error
handling, from xen_pt_realize to the new xen_igd_clear_slot function to
initialize the device class and vendor values, which enables the checks for
the Intel IGD to succeed. The host device is verified to be an Intel IGD
intended for passthrough by checking its domain, bus, slot, and function
values, and also by checking that gfx_passthru is enabled, the device class
is VGA, and the device vendor is Intel.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Notes that might be helpful to reviewers of patched code in hw/xen:

The new functions and types are based on recommendations from Qemu docs:
https://qemu.readthedocs.io/en/latest/devel/qom.html

Notes that might be helpful to reviewers of patched code in hw/i386:

The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
not affect builds that do not have CONFIG_XEN defined.

xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
existing function that is only true when Qemu is built with
xen-pci-passthrough enabled and the administrator has configured the Xen
HVM guest with Qemu's igd-passthru=on option.
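For context, a hypothetical xl domain configuration fragment that exercises
this path might look like the following (illustrative values, not taken from
this patch; 00:02.0 is the usual host address of the Intel IGD):

```
# Illustrative xl guest config fragment (hypothetical values)
gfx_passthru=1
pci=['00:02.0']
device_model_version="qemu-xen"
```

With gfx_passthru=1, xl passes igd-passthru=on to Qemu for the Xen HVM
machine type, which is the condition xen_igd_gfx_pt_enabled() reports.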

v2: Remove From: <email address> tag at top of commit message

v3: Changed the test for the Intel IGD in xen_igd_clear_slot:

    if (is_igd_vga_passthrough(&s->real_device) &&
        (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {

    is changed to

    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    I hoped that I could use the test in v2, since it matches the
    other tests for the Intel IGD in Qemu and Xen, but those tests
    do not work here because the necessary data structures are not yet
    populated at that point. So instead use the test that the administrator
    has enabled gfx_passthru and that the device address on the host is
    02.0. This test does detect the Intel IGD correctly.

v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
    email address to match the address used by the same author in commits
    be9c61da and c0e86b76
    
    Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc

v5: The patch of xen_pt.c was re-worked to allow a more consistent test
    for the Intel IGD that uses the same criteria as in other places.
    This involved moving the call to xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot and updating the checks for the
    Intel IGD in xen_igd_clear_slot:
    
    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    is changed to

    if (is_igd_vga_passthrough(&s->real_device) &&
        s->real_device.domain == 0 && s->real_device.bus == 0 &&
        s->real_device.dev == 2 && s->real_device.func == 0 &&
        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {

    Added an explanation for the move of xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot to the commit message.

    Rebase.

 hw/i386/pc_piix.c    |  3 +++
 hw/xen/xen_pt.c      | 56 +++++++++++++++++++++++++++++++++-----------
 hw/xen/xen_pt.h      | 16 +++++++++++++
 hw/xen/xen_pt_stub.c |  4 ++++
 4 files changed, 65 insertions(+), 14 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index b48047f50c..bc5efa4f59 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
     }
 
     pc_xen_hvm_init_pci(machine);
+    if (xen_igd_gfx_pt_enabled()) {
+        xen_igd_reserve_slot(pcms->bus);
+    }
     pci_create_simple(pcms->bus, -1, "xen-platform");
 }
 #endif
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 0ec7e52183..7e54500b27 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -775,20 +775,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
     int pirq = XEN_PT_UNASSIGNED_PIRQ;
 
     /* register real device */
-    XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
-               " to devfn 0x%x\n",
-               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
-               s->dev.devfn);
-
-    xen_host_pci_device_get(&s->real_device,
-                            s->hostaddr.domain, s->hostaddr.bus,
-                            s->hostaddr.slot, s->hostaddr.function,
-                            errp);
-    if (*errp) {
-        error_append_hint(errp, "Failed to \"open\" the real pci device");
-        return;
-    }
-
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
@@ -950,11 +936,52 @@ static void xen_pci_passthrough_instance_init(Object *obj)
     PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
 }
 
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
+    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
+}
+
+static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
+{
+    ERRP_GUARD();
+    PCIDevice *pci_dev = (PCIDevice *)qdev;
+    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
+    PCIBus *pci_bus = pci_get_bus(pci_dev);
+
+    XEN_PT_LOG(pci_dev, "Assigning real physical device %02x:%02x.%d"
+               " to devfn 0x%x\n",
+               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
+               s->dev.devfn);
+
+    xen_host_pci_device_get(&s->real_device,
+                            s->hostaddr.domain, s->hostaddr.bus,
+                            s->hostaddr.slot, s->hostaddr.function,
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
+        return;
+    }
+
+    if (is_igd_vga_passthrough(&s->real_device) &&
+        s->real_device.domain == 0 && s->real_device.bus == 0 &&
+        s->real_device.dev == 2 && s->real_device.func == 0 &&
+        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
+        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
+        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
+    }
+    xpdc->pci_qdev_realize(qdev, errp);
+}
+
 static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
+    xpdc->pci_qdev_realize = dc->realize;
+    dc->realize = xen_igd_clear_slot;
     k->realize = xen_pt_realize;
     k->exit = xen_pt_unregister_device;
     k->config_read = xen_pt_pci_read_config;
@@ -977,6 +1004,7 @@ static const TypeInfo xen_pci_passthrough_info = {
     .instance_size = sizeof(XenPCIPassthroughState),
     .instance_finalize = xen_pci_passthrough_finalize,
     .class_init = xen_pci_passthrough_class_init,
+    .class_size = sizeof(XenPTDeviceClass),
     .instance_init = xen_pci_passthrough_instance_init,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_CONVENTIONAL_PCI_DEVICE },
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index e7c4316a7d..40b31b5263 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -3,6 +3,7 @@
 
 #include "hw/xen/xen_common.h"
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_bus.h"
 #include "xen-host-pci-device.h"
 #include "qom/object.h"
 
@@ -41,7 +42,20 @@ typedef struct XenPTReg XenPTReg;
 #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
 OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
 
+#define XEN_PT_DEVICE_CLASS(klass) \
+    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
+#define XEN_PT_DEVICE_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
+
+typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
+
+typedef struct XenPTDeviceClass {
+    PCIDeviceClass parent_class;
+    XenPTQdevRealize pci_qdev_realize;
+} XenPTDeviceClass;
+
 uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void xen_igd_reserve_slot(PCIBus *pci_bus);
 void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
                                            XenHostPCIDevice *dev);
@@ -76,6 +90,8 @@ typedef int (*xen_pt_conf_byte_read)
 
 #define XEN_PCI_INTEL_OPREGION 0xfc
 
+#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
+
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
     XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
index 2d8cac8d54..5c108446a8 100644
--- a/hw/xen/xen_pt_stub.c
+++ b/hw/xen/xen_pt_stub.c
@@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
         error_setg(errp, "Xen PCI passthrough support not built in");
     }
 }
+
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Sun Jan 01 23:01:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 23:01:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470152.729592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC7Kh-0002c0-4g; Sun, 01 Jan 2023 23:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470152.729592; Sun, 01 Jan 2023 23:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC7Kh-0002bt-1B; Sun, 01 Jan 2023 23:01:35 +0000
Received: by outflank-mailman (input) for mailman id 470152;
 Sun, 01 Jan 2023 23:01:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pC7Kg-0002bj-86; Sun, 01 Jan 2023 23:01:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pC7Kg-0004rF-53; Sun, 01 Jan 2023 23:01:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pC7Kf-0007H3-Iq; Sun, 01 Jan 2023 23:01:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pC7Kf-0002ZQ-IM; Sun, 01 Jan 2023 23:01:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OJR5MKPEAYB4ZBKU4sgCqFcvXAel/bhKVOsDgRFKlRE=; b=P67nSyjjCBi5lugY6tHf601M/C
	vPssOmn0tBB9AmJbi67w7Sm8wbFIdx8Ad5tmWEpaPapO1yVOlskW5IUrHQdq67h4g05RlzokoqO+y
	iDmOmVIJxaunNZWzNWn0knC4V/sFj1tgyeRpnlXy8PMViNM8ZxaqCChUXD/Ef/BfWcy0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175544-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175544: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e4cf7c25bae5c3b5089a3c23a897f450149caef2
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Jan 2023 23:01:33 +0000

flight 175544 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175544/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                e4cf7c25bae5c3b5089a3c23a897f450149caef2
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   86 days
Failing since        173470  2022-10-08 06:21:34 Z   85 days  177 attempts
Testing same since   175539  2022-12-31 19:43:50 Z    1 days    3 attempts

------------------------------------------------------------
3254 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 496420 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 23:25:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 23:25:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470161.729603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC7hj-0003TV-Ty; Sun, 01 Jan 2023 23:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470161.729603; Sun, 01 Jan 2023 23:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC7hj-0003TO-R0; Sun, 01 Jan 2023 23:25:23 +0000
Received: by outflank-mailman (input) for mailman id 470161;
 Sun, 01 Jan 2023 23:25:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=11tG=46=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pC7hi-0003TI-LJ
 for xen-devel@lists.xenproject.org; Sun, 01 Jan 2023 23:25:23 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8d485b44-8a2b-11ed-8fd4-01056ac49cbb;
 Mon, 02 Jan 2023 00:25:19 +0100 (CET)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 5B4EA5C00C0;
 Sun,  1 Jan 2023 18:25:17 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Sun, 01 Jan 2023 18:25:17 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 1 Jan 2023 18:25:15 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d485b44-8a2b-11ed-8fd4-01056ac49cbb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1672615517; x=
	1672701917; bh=/ixdKicwJ/iycAG6e0GinH8Mqtqh91IJTxQYyBrr4/w=; b=H
	1pgl1lOn+0MlZV8pXl1MhvIWeD5BWbonVS7e+UQCx1ERgtgx4ZiKvJN41YZ9b0bE
	0zrYBiFHHRc3KCyaHBwK9s5PeUxQLexwIM7acb2aYUmLQDSvCwmxZVAP/lxXnAoN
	mO5xSSZCSFiDlcLXmnFt8tdnl/oSiDINKyUwkCcR3K61Nr0d+0et3Z40UcDAzsQc
	UzDOfwEjhVC8QtUov0eWS5Od8TzsMwHDh8UtLzpBb2Bt+1lXu/FiQD9mScCPQf44
	lQ8FVD5ja2QFhZMDE9oWW1TykTyK1YqdOyPexUYgTWT2I2pD4+xSOYPeUon+fmzg
	YmhOUH8Hi9Wc4mttjvIpg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1672615517; x=1672701917; bh=/ixdKicwJ/iycAG6e0GinH8Mqtqh
	91IJTxQYyBrr4/w=; b=LSXapZ0UHiFYgZKAfSw5WsYsso5FM0DBX3rzu/tgHD2S
	q9dyt5+dzJLH3p8aWw5z74Ku8XGQF8x2i1Pbb0cy7+58D/M/U+Z2WK/RlMsb0sfI
	y3MtpBZvcLwiomOHVdim2RRRrwlu8dd0hT+9LyNAZwQq6duza1VhPQyZEzMOQ0Fa
	C0wZE/JAwsxE8JbNi8tzbjZylApBE03N8GyGCHb0OHCBUYFwd0s8VJ6cmN1InZzg
	14y3AhVPkLlp354HtuF+tcLvDqV3KuX+zMO8KXFpbagG+CN/jV1Yj8tpM2tdgwY6
	zPpeob7jjOJ6oPk4FuVnQc0wPo+2agGyQXnHeShFfQ==
X-ME-Sender: <xms:XBayYxrh135QIcykO2qzZPXmNASgpenDhQcF2VsQZislBPTA8eCWPA>
    <xme:XBayYzpQFk6OhMG9eCLu1qeI_Y67xCTpsuIFiT0Cu8zTqPGaa9MKHru6X9TyGKG3q
    VYTGYBUwRZ1Ew>
X-ME-Received: <xmr:XBayY-OcbIREEfomdkuvtXHYbOCauGUysuNAX3mTvxH2g1mQrbf9lN4qkEBz8PJTKHdCHkJGI6SaWoQZkaRo55YDMlDbBY-uWQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrjedugddtkecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeeliedu
    tdfggfekuefhhedtfeevhfetgfdtvdeujeeuteevtdeutdegffeguedufeenucffohhmrg
    hinhepqhhusggvshdqohhsrdhorhhgpdhgihhthhhusgdrtghomhdpkhgvrhhnvghlrdho
    rhhgpdhithhlrdhsphgrtggvnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpe
    hmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgr
    sgdrtghomh
X-ME-Proxy: <xmx:XBayY84qWrcvSI3V5jpnCzAkTgwHRcSlN90AkYsXRABU3Cf5_YMlcQ>
    <xmx:XBayYw4SBWqm7g61m2N3T4mT4DQxjKEpQMYUaJpZXUlmCtqbQfWj2w>
    <xmx:XBayY0jgL_zqlkUttBiYNkUwGBmEskEohqEJB9b0ZUi_Nuc90fwYrw>
    <xmx:XRayY9vAx0_FK4IK-YP1I9F6ssNP5-T_BWCfRH_m7DnYFdub8JZFXg>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 2 Jan 2023 00:24:54 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	"Demi M. Obenour" <demi@invisiblethingslab.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Intel-gfx] [cache coherency bug] i915 and PAT attributes
Message-ID: <Y7IWWFaU54VWn266@mail-itl>
References: <Y5Hst0bCxQDTN7lK@mail-itl>
 <1c326e0c-5812-083a-0739-aa20fab3efc4@citrix.com>
 <Y6QVhRP+voSLi9xm@intel.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="mv1JohAN8Ksm/tl+"
Content-Disposition: inline
In-Reply-To: <Y6QVhRP+voSLi9xm@intel.com>


--mv1JohAN8Ksm/tl+
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 2 Jan 2023 00:24:54 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	"Demi M. Obenour" <demi@invisiblethingslab.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Intel-gfx] [cache coherency bug] i915 and PAT attributes

On Thu, Dec 22, 2022 at 10:29:57AM +0200, Ville Syrjälä wrote:
> On Fri, Dec 16, 2022 at 03:30:13PM +0000, Andrew Cooper wrote:
> > On 08/12/2022 1:55 pm, Marek Marczykowski-Górecki wrote:
> > > Hi,
> > >
> > > There is an issue with i915 on Xen PV (dom0). The end result is a lot of
> > > glitches, like here: https://openqa.qubes-os.org/tests/54748#step/startup/8
> > > (this one is on ADL, Linux 6.1-rc7 as a Xen PV dom0). It's using Xorg
> > > with the "modesetting" driver.
> > >
> > > After some iterations of debugging, we narrowed it down to i915's handling
> > > of caching. The main difference is that PAT is set up differently on Xen PV
> > > than on native Linux. Normally, Linux has an appropriate abstraction
> > > for that, but apparently something related to i915 doesn't play well
> > > with it. The specific difference is:
> > > native linux:
> > > x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT
> > > xen pv:
> > > x86/PAT: Configuration [0-7]: WB  WT  UC- UC  WC  WP  UC  UC
> > >                                   ~~          ~~      ~~  ~~
> > >
> > > The specific impact depends on the kernel version and the hardware. The most
> > > severe issues I see on >=ADL, but some older hardware is affected too -
> > > sometimes only if composition is disabled in the window manager.
> > > Some more information is collected at
> > > https://github.com/QubesOS/qubes-issues/issues/4782 (and a few linked
> > > duplicates...).
> > >
> > > A kind-of-related commit is here:
> > > https://github.com/torvalds/linux/commit/bdd8b6c98239cad ("drm/i915:
> > > replace X86_FEATURE_PAT with pat_enabled()") - it is the place where
> > > i915 explicitly checks for PAT support, so I'm cc-ing the people mentioned
> > > there too.
> > >
> > > Any ideas?
> > >
> > > The issue can be easily reproduced without Xen too, by adjusting PAT in
> > > Linux:
> > > -----8<-----
> > > diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> > > index 66a209f7eb86..319ab60c8d8c 100644
> > > --- a/arch/x86/mm/pat/memtype.c
> > > +++ b/arch/x86/mm/pat/memtype.c
> > > @@ -400,8 +400,8 @@ void pat_init(void)
> > >  		 * The reserved slots are unused, but mapped to their
> > >  		 * corresponding types in the presence of PAT errata.
> > >  		 */
> > > -		pat = PAT(0, WB) | PAT(1, WC) | PAT(2, UC_MINUS) | PAT(3, UC) |
> > > -		      PAT(4, WB) | PAT(5, WP) | PAT(6, UC_MINUS) | PAT(7, WT);
> > > +		pat = PAT(0, WB) | PAT(1, WT) | PAT(2, UC_MINUS) | PAT(3, UC) |
> > > +		      PAT(4, WC) | PAT(5, WP) | PAT(6, UC)       | PAT(7, UC);
> > >  	}
> > > 
> > >  	if (!pat_bp_initialized) {
> > > -----8<-----
> > >
> >
> > Hello, can anyone help please?
> >
> > Intel's CI has taken this reproducer of the bug, and confirmed the
> > regression.
> > https://lore.kernel.org/intel-gfx/Y5Hst0bCxQDTN7lK@mail-itl/T/#m4480c15a0d117dce6210562eb542875e757647fb
> >
> > We're reasonably confident that it is an i915 bug (given the repro with
> > no Xen in the mix), but we're out of any further ideas.
>
> I don't think we have any code that assumes anything about the PAT,
> apart from WC being available (which seems like it should still be
> the case with your modified PAT). I suppose you'll just have to
> start digging from pgprot_writecombine()/noncached() and make sure
> everything ends up using the correct PAT entry.

I tried several approaches to this, without success. Here is an update on
the debugging (also reported live on #intel-gfx):

I ran several tests with different PAT configurations (by modifying the
Xen code that sets the MSR). The full table is at
https://pad.itl.space/sheet/#/2/sheet/view/HD1qT2Zf44Ha36TJ3wj2YL+PchsTidyNTFepW5++ZKM/
Some highlights:
- 1=WC, 4=WT - good
- 1=WT, 4=WC - bad
- 1=WT, 3=WC (4=WC too) - good
- 1=WT, 5=WC - good

So it seems that WC at index 4 is problematic for some reason.

Next, I tried to trap all the places in arch/x86/xen/mmu_pv.c that
write PTEs and verify the requested cache attributes. There, all the
requested WC mappings seem to be properly translated (using index 1, 3,
4, or 5 according to the PAT settings), and reading the PTE back
confirms it is indeed set correctly. I didn't add a read-back after
HYPERVISOR_update_va_mapping, but I verified it isn't used for setting WC.

Using the same method, I also checked that indexes which aren't supposed
to be used (for example index 4 when both 3 and 4 are WC) are indeed not
used. So the hypothesis that specific indexes are hardcoded somewhere
seems unlikely.

This all looks very weird to me. Any ideas?

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--mv1JohAN8Ksm/tl+
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmOyFlcACgkQ24/THMrX
1ywfAgf9HICPEpdpOi9mXoYblTrjrO4RH9tA/NauSwf1bRPaDbUtVVvYhCqFJRAA
bV1OFtI5TsP6Nq4jJdqMCQO39+BodGJDIeX0+gsFkhvzftL0tbf4+QNSi6jlh0Sx
k8kUmMEJRy0wo49DrAg8CemdzV7AdlO6faS3OTWvrGE0bweii/Tj9VwOvzh7KGku
lG/5VuTg8pxfx34re6VNKpMMqlviVkfFAkpnBz/clQrlwZJ5hrn6GApc3PmN9BXT
5XbsJOkGPdNbqSHikGIuYcqB6Xaj5adTaMVn3U0hhpLv8EJLv46NRqgk7UP3TOe5
ciqIqp8tvU/ksvdbWObDOVkgzhn8bw==
=Krhb
-----END PGP SIGNATURE-----

--mv1JohAN8Ksm/tl+--


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 23:44:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 23:44:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470173.729613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC80E-0004LM-Jd; Sun, 01 Jan 2023 23:44:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470173.729613; Sun, 01 Jan 2023 23:44:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC80E-0004LF-Gv; Sun, 01 Jan 2023 23:44:30 +0000
Received: by outflank-mailman (input) for mailman id 470173;
 Sun, 01 Jan 2023 23:44:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pC80D-0004L0-8t; Sun, 01 Jan 2023 23:44:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pC80D-0005rA-42; Sun, 01 Jan 2023 23:44:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pC80C-0008J0-OU; Sun, 01 Jan 2023 23:44:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pC80C-00069A-O0; Sun, 01 Jan 2023 23:44:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mmBCoMmO4KdCmFKGY6qOl7bSwXyhWULMUV1Ly7Vhv64=; b=uh68dHggqlTd5rzzXEKraHlvBC
	Uo/ki57Uup4j6sN54l9FGAuNeDNLqg9IaJUlhSPGbWn+VaLWMiEbWb/BVmGT182kBvnrh+T5zOzVi
	F+vLHfzO9rmgEGerNHStCFWJ6hMvcG4xF1LIomJ6WBVv/HBzNH84lBdmzY1zEqkLqcwk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175545-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175545: FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Jan 2023 23:44:28 +0000

flight 175545 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175545/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 175407
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 175407
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 175407
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 175407

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 175407 pass in 175545
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 175407 pass in 175545
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 175407 pass in 175545
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 175407 pass in 175545
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 175407 pass in 175545
 test-amd64-i386-pair     11 xen-install/dst_host fail in 175543 pass in 175545
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat  fail pass in 175407
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 175543
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 175543

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175407 like 175197
 test-armhf-armhf-xl-rtds     19 guest-start.2 fail in 175543 blocked in 175197
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   18 days
Testing same since   175407  2022-12-19 11:42:26 Z   13 days   32 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-amd64-xl-credit2 broken

Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 01 23:52:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Jan 2023 23:52:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470181.729625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC87u-0004zO-CU; Sun, 01 Jan 2023 23:52:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470181.729625; Sun, 01 Jan 2023 23:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC87u-0004zH-9d; Sun, 01 Jan 2023 23:52:26 +0000
Received: by outflank-mailman (input) for mailman id 470181;
 Sun, 01 Jan 2023 23:52:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Fqzy=46=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pC87s-0004zB-9r
 for xen-devel@lists.xenproject.org; Sun, 01 Jan 2023 23:52:24 +0000
Received: from sonic309-21.consmr.mail.gq1.yahoo.com
 (sonic309-21.consmr.mail.gq1.yahoo.com [98.137.65.147])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 539a5eb8-8a2f-11ed-8fd4-01056ac49cbb;
 Mon, 02 Jan 2023 00:52:20 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic309.consmr.mail.gq1.yahoo.com with HTTP; Sun, 1 Jan 2023 23:52:18 +0000
Received: by hermes--production-bf1-5458f64d4-46wzk (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 0bf49e87fc25c8372e7b6d686a48fa29; 
 Sun, 01 Jan 2023 23:52:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 539a5eb8-8a2f-11ed-8fd4-01056ac49cbb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672617138; bh=skwhbHUkGjznop/QWhj7GG069Cfw1vyGWz5DAdkodwY=; h=From:To:Cc:Subject:Date:References:From:Subject:Reply-To; b=IEJMj5MNCJUJhzkSMnO2XnVNisG1L6mJlpp0C5gDVnxeviRF0kqvnRwxfJmLj+hC0QFBlbTcbeKhkqEkIp226t/XKrDoSsN3wDu5VrIXb2cUvH5a2Vcwyj8SgX05o06G9WgTnCGWI0B3TzLuSxxZwk5tlLxj2MD8skG8zP1PDa8RCqZpkemmac+B+Wu9oRwhBsYgBuhau0Vw0ts8RUB30xaVWBpukpOAHkMH7iXfYHs9kjWtitKFueQ/8ky0r1vX3pPMPTpu/lVuNMi4J9tT9ePilvZ0qBWVgrn6omTTOXDvowMCx2KPRenG/K1y8Lf/G56IZ06fgbJ16W2s9SgJgw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672617138; bh=Gm5Qal8G0/XjlI2GzUDH3lY8DeAqXwfGdjCP6FAlAGr=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=MLkLtG3fDe+9QkP7XrKUzDF3W5oo2DKdXEroHJibxjdodo3SkQwi9xe91r3zpalxKtTnTLCsR+r65qUkbkr98KJ8X7z/Ihs3hS4QMmqQgG+QAn6LIQPfJLm0CztXzNlBqnbbyr5A0IPaLjuN/vgxzM5ZZc5oV1ESWBf9bQB3I5j2AcAy7HKX/2yeBZjTE8xJtQNuWk4Moc5uleMlu0icTC6mTuG4L1bQxquvoitgpY3kZb06+ebAmMIInFu6wCY0qK0FU/+NatbUhzO52O5uIdDmHSdc6GhJx+a3Q9arQjH7zgdrMFhrrbg0RHcToW7bso8tZlF6vSAM41le0Ju2tw==
X-YMail-OSG: XFe56mIVM1mvJtN9hySqOXDspxB_hs1PSyXIqthm1uK3iEFtNgI8dO.eX7Sh1E.
 BJ1_4tOtjK7lE71x5m91dWrNder9ITw6LQQNWL1ZRh04Rkz6C9T5ZqdVzBRxGBUO8c9Bq5OBVHaG
 v1nSCCYiBvXZen7Tv4KLZnEkZQfTBQ56Sk6eauPerh5Qh3wFR8eIJy4VHTPQjKljBFJ1S3.HIvC0
 b6s4BfbDPWb5lQOLQ0veeAFWE6MCM9Jd8OTyofW4JOrT4E8yOAOoqwNfGm2sq_29hUFkBE9DYiuy
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Date: Sun,  1 Jan 2023 18:52:03 -0500
Message-Id: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
Content-Length: 10912

Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
as noted in docs/igd-assign.txt in the Qemu source code.

Currently, when the xl toolstack is used to configure a Xen HVM guest with
Intel IGD passthrough to the guest with the Qemu upstream device model,
a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
a different slot. This problem often prevents the guest from booting.

The only available workaround is not good: Configure Xen HVM guests to use
the old and no longer maintained Qemu traditional device model available
from xenbits.xen.org which does reserve slot 2 for the Intel IGD.

To implement this feature in the Qemu upstream device model for Xen HVM
guests, introduce the following new functions, types, and macros:

* XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
* XEN_PT_DEVICE_GET_CLASS helper macro, the class-getter counterpart of XEN_PT_DEVICE_CLASS
* typedef XenPTQdevRealize function pointer
* XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
* xen_igd_reserve_slot and xen_igd_clear_slot functions

The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
the xl toolstack with the gfx_passthru option enabled, which sets the
igd-passthru=on option to Qemu for the Xen HVM machine type.

The new xen_igd_reserve_slot function also needs a stub implementation in
hw/xen/xen_pt_stub.c, which does nothing, to prevent a build failure (FTBFS)
at the link stage when Qemu is configured with --enable-xen and
--disable-xen-pci-passthrough.

The new xen_igd_clear_slot function overrides qdev->realize of the parent
PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
created in hw/i386/pc_piix.c for the case when igd-passthru=on.

Move the call to xen_host_pci_device_get, and the associated error
handling, from xen_pt_realize to the new xen_igd_clear_slot function so
that the device class and vendor values are initialized early enough for
the Intel IGD checks to succeed. The host device is verified to be an
Intel IGD to be passed through by checking the domain, bus, slot, and
function values, as well as by checking that gfx_passthru is enabled,
the device class is VGA, and the device vendor is Intel.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Notes that might be helpful to reviewers of patched code in hw/xen:

The new functions and types are based on recommendations from Qemu docs:
https://qemu.readthedocs.io/en/latest/devel/qom.html

Notes that might be helpful to reviewers of patched code in hw/i386:

The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
not affect builds that do not have CONFIG_XEN defined.

xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
existing function that is only true when Qemu is built with
xen-pci-passthrough enabled and the administrator has configured the Xen
HVM guest with Qemu's igd-passthru=on option.

v2: Remove From: <email address> tag at top of commit message

v3: Changed the test for the Intel IGD in xen_igd_clear_slot:

    if (is_igd_vga_passthrough(&s->real_device) &&
        (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {

    is changed to

    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    I hoped that I could use the test in v2, since it matches the
    other tests for the Intel IGD in Qemu and Xen, but those tests
    do not work here because the necessary data structures have not
    been populated yet. So instead test that the administrator has
    enabled gfx_passthru and that the device address on the host is
    02.0. This test does detect the Intel IGD correctly.

v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
    email address to match the address used by the same author in commits
    be9c61da and c0e86b76
    
    Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc

v5: The patch to xen_pt.c was reworked to allow a more consistent test
    for the Intel IGD that uses the same criteria as in other places.
    This involved moving the call to xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot and updating the checks for the
    Intel IGD in xen_igd_clear_slot:
    
    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    is changed to

    if (is_igd_vga_passthrough(&s->real_device) &&
        s->real_device.domain == 0 && s->real_device.bus == 0 &&
        s->real_device.dev == 2 && s->real_device.func == 0 &&
        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {

    Added an explanation for the move of xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot to the commit message.

    Rebase.

v6: Fix logging by keeping the following lines in xen_pt_realize instead
    of moving them to xen_igd_clear_slot, as was done in v5:

    XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
               " to devfn 0x%x\n",
               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
               s->dev.devfn);

    This log needs to be in xen_pt_realize because s->dev.devfn is not
    set yet in xen_igd_clear_slot.

    Sorry for the extra noise.

 hw/i386/pc_piix.c    |  3 +++
 hw/xen/xen_pt.c      | 46 +++++++++++++++++++++++++++++++++++---------
 hw/xen/xen_pt.h      | 16 +++++++++++++++
 hw/xen/xen_pt_stub.c |  4 ++++
 4 files changed, 60 insertions(+), 9 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index b48047f50c..bc5efa4f59 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
     }
 
     pc_xen_hvm_init_pci(machine);
+    if (xen_igd_gfx_pt_enabled()) {
+        xen_igd_reserve_slot(pcms->bus);
+    }
     pci_create_simple(pcms->bus, -1, "xen-platform");
 }
 #endif
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 0ec7e52183..7fae1e7a6f 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
                s->dev.devfn);
 
-    xen_host_pci_device_get(&s->real_device,
-                            s->hostaddr.domain, s->hostaddr.bus,
-                            s->hostaddr.slot, s->hostaddr.function,
-                            errp);
-    if (*errp) {
-        error_append_hint(errp, "Failed to \"open\" the real pci device");
-        return;
-    }
-
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
@@ -950,11 +941,47 @@ static void xen_pci_passthrough_instance_init(Object *obj)
     PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
 }
 
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
+    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
+}
+
+static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
+{
+    ERRP_GUARD();
+    PCIDevice *pci_dev = (PCIDevice *)qdev;
+    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
+    PCIBus *pci_bus = pci_get_bus(pci_dev);
+
+    xen_host_pci_device_get(&s->real_device,
+                            s->hostaddr.domain, s->hostaddr.bus,
+                            s->hostaddr.slot, s->hostaddr.function,
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
+        return;
+    }
+
+    if (is_igd_vga_passthrough(&s->real_device) &&
+        s->real_device.domain == 0 && s->real_device.bus == 0 &&
+        s->real_device.dev == 2 && s->real_device.func == 0 &&
+        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
+        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
+        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
+    }
+    xpdc->pci_qdev_realize(qdev, errp);
+}
+
 static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
+    xpdc->pci_qdev_realize = dc->realize;
+    dc->realize = xen_igd_clear_slot;
     k->realize = xen_pt_realize;
     k->exit = xen_pt_unregister_device;
     k->config_read = xen_pt_pci_read_config;
@@ -977,6 +1004,7 @@ static const TypeInfo xen_pci_passthrough_info = {
     .instance_size = sizeof(XenPCIPassthroughState),
     .instance_finalize = xen_pci_passthrough_finalize,
     .class_init = xen_pci_passthrough_class_init,
+    .class_size = sizeof(XenPTDeviceClass),
     .instance_init = xen_pci_passthrough_instance_init,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_CONVENTIONAL_PCI_DEVICE },
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index e7c4316a7d..40b31b5263 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -3,6 +3,7 @@
 
 #include "hw/xen/xen_common.h"
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_bus.h"
 #include "xen-host-pci-device.h"
 #include "qom/object.h"
 
@@ -41,7 +42,20 @@ typedef struct XenPTReg XenPTReg;
 #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
 OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
 
+#define XEN_PT_DEVICE_CLASS(klass) \
+    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
+#define XEN_PT_DEVICE_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
+
+typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
+
+typedef struct XenPTDeviceClass {
+    PCIDeviceClass parent_class;
+    XenPTQdevRealize pci_qdev_realize;
+} XenPTDeviceClass;
+
 uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void xen_igd_reserve_slot(PCIBus *pci_bus);
 void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
                                            XenHostPCIDevice *dev);
@@ -76,6 +90,8 @@ typedef int (*xen_pt_conf_byte_read)
 
 #define XEN_PCI_INTEL_OPREGION 0xfc
 
+#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
+
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
     XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
index 2d8cac8d54..5c108446a8 100644
--- a/hw/xen/xen_pt_stub.c
+++ b/hw/xen/xen_pt_stub.c
@@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
         error_setg(errp, "Xen PCI passthrough support not built in");
     }
 }
+
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 00:03:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 00:03:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470191.729635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC8Ij-0007AP-DL; Mon, 02 Jan 2023 00:03:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470191.729635; Mon, 02 Jan 2023 00:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC8Ij-0007AI-Af; Mon, 02 Jan 2023 00:03:37 +0000
Received: by outflank-mailman (input) for mailman id 470191;
 Mon, 02 Jan 2023 00:03:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JwIb=47=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pC8Ih-0007A8-0L
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 00:03:35 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e33489e9-8a30-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 01:03:31 +0100 (CET)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id 5C91D32002E2;
 Sun,  1 Jan 2023 19:03:27 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute5.internal (MEProxy); Sun, 01 Jan 2023 19:03:28 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 1 Jan 2023 19:03:25 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e33489e9-8a30-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1672617806; x=
	1672704206; bh=QWrm808uO4klYg+6e3E8cUIIB4tFpjr8qW51PGfU32Y=; b=e
	bxkn2aTRgu0l59e/6Oqdv6szPiRBO/78w1Dd4UOM6PIZweGRHIQxl4jxuAEh4G6q
	h6WKeYhEXx87JPUVPW70MDZ6MwXxkpK+cuxYVvN3ErWXuGdRht8aF685LcKKPPBO
	WtDrSlUn4NsjCl9UEVbnzK0ZKaAY5uItq8Cdir1wNZHYOK/tksh3Z49iWoi7kcCQ
	srPNIJwKhcIkabtF9ru1pOXZFefD4udHeEzUR7XX582gDCYYtilPY/enYOq3ggv7
	o+HRBvoaMmj5/r9aPSre1itLobIx45f+lbvYduihtnHb4btv9PmKJ6S34Qdn1RW7
	NHVHx/ijWKZwAMtvmh1yA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1672617806; x=1672704206; bh=QWrm808uO4klYg+6e3E8cUIIB4tF
	pjr8qW51PGfU32Y=; b=GrJqyVNBo1kfGOBnftWq0Fy4NW/bcBSZWe32fh0zB0Ld
	+5mj0k1EGOF19qyj8UPEY1Rt3JJADs6bsVlxdSK9EbmuoI2F19fLtg85bbmld1nV
	ifXciu1X8CJzp4XhovimjUUvRIktBAHyZ0UzYGvp/Ev6vNjT2MvXIFVhwHZwv/+d
	GAo06YHHudENWVehkLtBIBzInPdWVN5CO0nrEQWuKFQjKsmF3/3kWNi83/USi1LN
	5nfldfLSwVpbRVk+6fRZVfdKINXxVh6xWz8Ln5HT8ONiyyx0mxv+3AwJ27UZpmQ4
	Ll5LGd83t3GNvd26+2pbr8q33VyCWkq7D/H6Yx9F1g==
X-ME-Sender: <xms:Th-yY-JYANqI5Br2v7M0dMA_VrNmA9FGkU8WpaHOgSv_JXg5P6NPPw>
    <xme:Th-yY2IAT1b3HE9ld9zZXA5iRlauzHpR24vA5t_19EbdtoAE3D-b8lh9Y4H5Y_NTD
    xcFBiP6NJAEkNs>
X-ME-Received: <xmr:Th-yY-uNswf0ElLgnxnkrruOXpeardYK8e49Tg4MFtQMdtQfDvYpMjnQiBKl>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrjedugdduhecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeffvghmihcu
    ofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetleffvdehleeffeffudffvedutdev
    geffvdffkeevfeevkefhjeegleduteffjeenucffohhmrghinhepqhhusggvshdqohhsrd
    horhhgpdhgihhthhhusgdrtghomhdpkhgvrhhnvghlrdhorhhgpdhithhlrdhsphgrtggv
    necuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepuggvmh
    hisehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:Th-yYzb7e3xkX7s4ZTG6pUKl-Dz7lS616SqSntfkyN43uek1LHI0bA>
    <xmx:Th-yY1Y-iAr_Gs8NX7Nl25GRo3RgapwRr83qmmTPlvZqkOenG53BmQ>
    <xmx:Th-yY_BqMnBhjuRUsPnRSrDeLkIIaFRWLPRRjXRPIIVW-8kSanBRVA>
    <xmx:Th-yY6O6nQyxRPLFMEnb8wdMvfvqy3gJinap4wAg8KxCcG1sr4IgOA>
Feedback-ID: iac594737:Fastmail
Date: Sun, 1 Jan 2023 19:03:18 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ville =?utf-8?B?U3lyasOkbMOk?= <ville.syrjala@linux.intel.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Intel-gfx] [cache coherency bug] i915 and PAT attributes
Message-ID: <Y7IfS91fHQ/8fwXt@itl-email>
References: <Y5Hst0bCxQDTN7lK@mail-itl>
 <1c326e0c-5812-083a-0739-aa20fab3efc4@citrix.com>
 <Y6QVhRP+voSLi9xm@intel.com>
 <Y7IWWFaU54VWn266@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="sna7pTQbM0Qkvz+e"
Content-Disposition: inline
In-Reply-To: <Y7IWWFaU54VWn266@mail-itl>


--sna7pTQbM0Qkvz+e
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sun, 1 Jan 2023 19:03:18 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ville =?utf-8?B?U3lyasOkbMOk?= <ville.syrjala@linux.intel.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Intel-gfx] [cache coherency bug] i915 and PAT attributes

On Mon, Jan 02, 2023 at 12:24:54AM +0100, Marek Marczykowski-Górecki wrote:
> On Thu, Dec 22, 2022 at 10:29:57AM +0200, Ville Syrjälä wrote:
> > On Fri, Dec 16, 2022 at 03:30:13PM +0000, Andrew Cooper wrote:
> > > On 08/12/2022 1:55 pm, Marek Marczykowski-Górecki wrote:
> > > > Hi,
> > > >
> > > > There is an issue with i915 on Xen PV (dom0). The end result is a lot of
> > > > glitches, like here: https://openqa.qubes-os.org/tests/54748#step/startup/8
> > > > (this one is on ADL, Linux 6.1-rc7 as a Xen PV dom0). It's using Xorg
> > > > with "modesetting" driver.
> > > >
> > > > After some iterations of debugging, we narrowed it down to i915 handling
> > > > caching. The main difference is that PAT is setup differently on Xen PV
> > > > than on native Linux. Normally, Linux does have appropriate abstraction
> > > > for that, but apparently something related to i915 doesn't play well
> > > > with it. The specific difference is:
> > > > native linux:
> > > > x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT
> > > > xen pv:
> > > > x86/PAT: Configuration [0-7]: WB  WT  UC- UC  WC  WP  UC  UC
> > > >                                   ~~          ~~      ~~  ~~
> > > >
> > > > The specific impact depends on kernel version and the hardware. The most
> > > > severe issues I see on >=ADL, but some older hardware is affected too -
> > > > sometimes only if composition is disabled in the window manager.
> > > > Some more information is collected at
> > > > https://github.com/QubesOS/qubes-issues/issues/4782 (and few linked
> > > > duplicates...).
> > > >
> > > > Kind-of related commit is here:
> > > > https://github.com/torvalds/linux/commit/bdd8b6c98239cad ("drm/i915:
> > > > replace X86_FEATURE_PAT with pat_enabled()") - it is the place where
> > > > i915 explicitly checks for PAT support, so I'm cc-ing people mentioned
> > > > there too.
> > > >
> > > > Any ideas?
> > > >
> > > > The issue can be easily reproduced without Xen too, by adjusting PAT in
> > > > Linux:
> > > > -----8<-----
> > > > diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> > > > index 66a209f7eb86..319ab60c8d8c 100644
> > > > --- a/arch/x86/mm/pat/memtype.c
> > > > +++ b/arch/x86/mm/pat/memtype.c
> > > > @@ -400,8 +400,8 @@ void pat_init(void)
> > > >  		 * The reserved slots are unused, but mapped to their
> > > >  		 * corresponding types in the presence of PAT errata.
> > > >  		 */
> > > > -		pat = PAT(0, WB) | PAT(1, WC) | PAT(2, UC_MINUS) | PAT(3, UC) |
> > > > -		      PAT(4, WB) | PAT(5, WP) | PAT(6, UC_MINUS) | PAT(7, WT);
> > > > +		pat = PAT(0, WB) | PAT(1, WT) | PAT(2, UC_MINUS) | PAT(3, UC) |
> > > > +		      PAT(4, WC) | PAT(5, WP) | PAT(6, UC)       | PAT(7, UC);
> > > >  	}
> > > >
> > > >  	if (!pat_bp_initialized) {
> > > > -----8<-----
> > > >
> > >
> > > Hello, can anyone help please?
> > >
> > > Intel's CI has taken this reproducer of the bug, and confirmed the
> > > regression.
> > > https://lore.kernel.org/intel-gfx/Y5Hst0bCxQDTN7lK@mail-itl/T/#m4480c15a0d117dce6210562eb542875e757647fb
> > >
> > > We're reasonably confident that it is an i915 bug (given the repro with
> > > no Xen in the mix), but we're out of any further ideas.
> >
> > I don't think we have any code that assumes anything about the PAT,
> > apart from WC being available (which seems like it should still be
> > the case with your modified PAT). I suppose you'll just have to
> > start digging from pgprot_writecombine()/noncached() and make sure
> > everything ends up using the correct PAT entry.
>
> I tried several approaches to this, without success. Here is an update on
> debugging (reported also on #intel-gfx live):
>
> I did several tests with different PAT configurations (by modifying Xen
> that sets the MSR). Full table is at https://pad.itl.space/sheet/#/2/sheet/view/HD1qT2Zf44Ha36TJ3wj2YL+PchsTidyNTFepW5++ZKM/
> Some highlights:
> - 1=WC, 4=WT - good
> - 1=WT, 4=WC - bad
> - 1=WT, 3=WC (4=WC too) - good
> - 1=WT, 5=WC - good
>
> So, for me it seems WC at index 4 is problematic for some reason.
>
> Next, I tried to trap all the places in arch/x86/xen/mmu_pv.c that
> write PTEs and verify requested cache attributes. There, it seems all
> the requested WC are properly translated (using either index 1, 3, 4, or
> 5 according to PAT settings). And then, after reading the PTE back, it indeed
> seems to be correctly set. I didn't add reading back after
> HYPERVISOR_update_va_mapping, but verified it isn't used for setting WC.
>
> Using the same method, I also checked that indexes that aren't supposed
> to be used (for example index 4 when both 3 and 4 are WC) indeed are not
> used. So, the hypothesis that specific indexes are hardcoded somewhere
> is unlikely.
>
> This all looks very weird to me. Any ideas?

Old CPUs have had hardware errata that caused the top bit of the PAT
entry to be ignored in certain cases.  Could modern CPUs be ignoring
this bit when accessing iGPU memory or registers?  With WC at position
4, this would cause WC to be treated as WB, which is consistent with the
observed behavior.  WC at position 3 would not be impacted, and WC at
position 5 would be treated as WT which I expect to be safe.  One way to
test this is to test 1=WB, 5=WC. If my hypothesis is correct, this
should trigger the bug, even if entry 1 in the PAT is unused because
entry 0 is also WB.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--sna7pTQbM0Qkvz+e
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmOyH0sACgkQsoi1X/+c
IsHkExAAwPBiEG7fU2iXwJT+pGOCsgqm0InTt1WC6YwaJi/6s/0qVqm5KZx/mN2/
x/So26ifOHDD/FMfsBGbfodobx7dvLsQwujMiOWgS03p7rh9C3kyT93rlx47y4U6
QSM5JQzkiM0I/7cQw3HTYTG0bUbIIzRPN856pW/7arGXTtOmFWaqKWxPnB8fqNe1
dmpjK5Zz+MCvayA8gUvvIaQ9aFbC7k40BA/7uCKga7T3acyUAfRCVFd0+0nAnfRP
hEDjlQjejLsuaKT2eLYZdweiHJZtOhYoRg8DFzam+BQaNIslI9HRZ8x3cIi+6UYQ
2oR+hTKCtgUQPd9q2aQUWMeq9BXNzfQyUTWw+Vn7fXc7awe4j7bYAfIVV6nXlk6W
6qFqHUlTcjYmmG17484RL/dHWkJPE+B3beuo8kSvXqHoyXp9z18kLLcCJocN1gkX
n4TjWU7TSuOg68kwVruWsg1uOznzZTIktLw83zPmAFYJiIh83eE+yEwttCD4pogf
A9oE/jMqdhoEuyZBjE3Ru8/YXoEk/h7if9wGvBUU58/jATGDMuOtbk90zptVqtcl
IEeoVqLBBe5zDNW1YBlL6EpP4xfTRXbQ+diRUU2J0UhVLmMF0hGYxAPLgsUFpuE+
CtXH/JDlIUWU3up7t6yedp74exPvS5PWExWh7GJ7bGxn38G3Zk8=
=7ciq
-----END PGP SIGNATURE-----

--sna7pTQbM0Qkvz+e--


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 01:01:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 01:01:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470200.729647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC9CG-0001mP-9R; Mon, 02 Jan 2023 01:01:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470200.729647; Mon, 02 Jan 2023 01:01:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pC9CG-0001lr-6f; Mon, 02 Jan 2023 01:01:00 +0000
Received: by outflank-mailman (input) for mailman id 470200;
 Mon, 02 Jan 2023 01:00:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJmt=47=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pC9CF-0001Kp-J6
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 01:00:59 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea054943-8a38-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 02:00:57 +0100 (CET)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 5AEBF5C009C;
 Sun,  1 Jan 2023 20:00:56 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Sun, 01 Jan 2023 20:00:56 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 1 Jan 2023 20:00:53 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea054943-8a38-11ed-91b6-6bf2151ebd3b
Date: Mon, 2 Jan 2023 02:00:51 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Ville =?utf-8?B?U3lyasOkbMOk?= <ville.syrjala@linux.intel.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Intel-gfx] [cache coherency bug] i915 and PAT attributes
Message-ID: <Y7Isw0VxkV91+rP0@mail-itl>
References: <Y5Hst0bCxQDTN7lK@mail-itl>
 <1c326e0c-5812-083a-0739-aa20fab3efc4@citrix.com>
 <Y6QVhRP+voSLi9xm@intel.com>
 <Y7IWWFaU54VWn266@mail-itl>
 <Y7IfS91fHQ/8fwXt@itl-email>

On Sun, Jan 01, 2023 at 07:03:18PM -0500, Demi Marie Obenour wrote:
> On Mon, Jan 02, 2023 at 12:24:54AM +0100, Marek Marczykowski-Górecki wrote:
> > On Thu, Dec 22, 2022 at 10:29:57AM +0200, Ville Syrjälä wrote:
> > > On Fri, Dec 16, 2022 at 03:30:13PM +0000, Andrew Cooper wrote:
> > > > On 08/12/2022 1:55 pm, Marek Marczykowski-Górecki wrote:
> > > > > Hi,
> > > > >
> > > > > There is an issue with i915 on Xen PV (dom0). The end result is a lot of
> > > > > glitches, like here: https://openqa.qubes-os.org/tests/54748#step/startup/8
> > > > > (this one is on ADL, Linux 6.1-rc7 as a Xen PV dom0). It's using Xorg
> > > > > with the "modesetting" driver.
> > > > >
> > > > > After some iterations of debugging, we narrowed it down to i915's handling
> > > > > of caching. The main difference is that the PAT is set up differently on
> > > > > Xen PV than on native Linux. Normally, Linux has an appropriate abstraction
> > > > > for that, but apparently something related to i915 doesn't play well
> > > > > with it. The specific difference is:
> > > > > native linux:
> > > > > x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT
> > > > > xen pv:
> > > > > x86/PAT: Configuration [0-7]: WB  WT  UC- UC  WC  WP  UC  UC
> > > > >                                   ~~          ~~      ~~  ~~
> > > > >
> > > > > The specific impact depends on the kernel version and the hardware. The
> > > > > most severe issues I see on >=ADL, but some older hardware is affected
> > > > > too - sometimes only if compositing is disabled in the window manager.
> > > > > Some more information is collected at
> > > > > https://github.com/QubesOS/qubes-issues/issues/4782 (and a few linked
> > > > > duplicates...).
> > > > >
> > > > > A kind-of related commit is here:
> > > > > https://github.com/torvalds/linux/commit/bdd8b6c98239cad ("drm/i915:
> > > > > replace X86_FEATURE_PAT with pat_enabled()") - it is the place where
> > > > > i915 explicitly checks for PAT support, so I'm cc-ing the people
> > > > > mentioned there too.
> > > > >
> > > > > Any ideas?
> > > > >
> > > > > The issue can easily be reproduced without Xen too, by adjusting the
> > > > > PAT in Linux:
> > > > > -----8<-----
> > > > > diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> > > > > index 66a209f7eb86..319ab60c8d8c 100644
> > > > > --- a/arch/x86/mm/pat/memtype.c
> > > > > +++ b/arch/x86/mm/pat/memtype.c
> > > > > @@ -400,8 +400,8 @@ void pat_init(void)
> > > > >  		 * The reserved slots are unused, but mapped to their
> > > > >  		 * corresponding types in the presence of PAT errata.
> > > > >  		 */
> > > > > -		pat = PAT(0, WB) | PAT(1, WC) | PAT(2, UC_MINUS) | PAT(3, UC) |
> > > > > -		      PAT(4, WB) | PAT(5, WP) | PAT(6, UC_MINUS) | PAT(7, WT);
> > > > > +		pat = PAT(0, WB) | PAT(1, WT) | PAT(2, UC_MINUS) | PAT(3, UC) |
> > > > > +		      PAT(4, WC) | PAT(5, WP) | PAT(6, UC)       | PAT(7, UC);
> > > > >  	}
> > > > > 
> > > > >  	if (!pat_bp_initialized) {
> > > > > -----8<-----
> > > > >
> > > > 
> > > > Hello, can anyone help please?
> > > > 
> > > > Intel's CI has taken this reproducer of the bug and confirmed the
> > > > regression.
> > > > https://lore.kernel.org/intel-gfx/Y5Hst0bCxQDTN7lK@mail-itl/T/#m4480c15a0d117dce6210562eb542875e757647fb
> > > > 
> > > > We're reasonably confident that it is an i915 bug (given the repro with
> > > > no Xen in the mix), but we're out of any further ideas.
> > > 
> > > I don't think we have any code that assumes anything about the PAT,
> > > apart from WC being available (which seems like it should still be
> > > the case with your modified PAT). I suppose you'll just have to
> > > start digging from pgprot_writecombine()/noncached() and make sure
> > > everything ends up using the correct PAT entry.
> > 
> > I tried several approaches to this, without success. Here is an update on
> > the debugging (also reported live on #intel-gfx):
> > 
> > I did several tests with different PAT configurations (by modifying the
> > Xen code that sets the MSR). The full table is at
> > https://pad.itl.space/sheet/#/2/sheet/view/HD1qT2Zf44Ha36TJ3wj2YL+PchsTidyNTFepW5++ZKM/
> > Some highlights:
> > - 1=WC, 4=WT - good
> > - 1=WT, 4=WC - bad
> > - 1=WT, 3=WC (4=WC too) - good
> > - 1=WT, 5=WC - good
> > 
> > So, to me it seems WC at index 4 is problematic for some reason.
> > 
> > Next, I tried to trap all the places in arch/x86/xen/mmu_pv.c that
> > write PTEs and verify the requested cache attributes. There, it seems all
> > the requested WC mappings are properly translated (using index 1, 3, 4, or
> > 5 according to the PAT settings). And after reading the PTE back, it indeed
> > seems to be correctly set. I didn't add reading back after
> > HYPERVISOR_update_va_mapping, but verified it isn't used for setting WC.
> > 
> > Using the same method, I also checked that indexes that aren't supposed
> > to be used (for example index 4 when both 3 and 4 are WC) are indeed not
> > used. So, the hypothesis that specific indexes are hardcoded somewhere
> > is unlikely.
> > 
> > This all looks very weird to me. Any ideas?
> 
> Old CPUs have had hardware errata that caused the top bit of the PAT
> entry to be ignored in certain cases.  Could modern CPUs be ignoring
> this bit when accessing iGPU memory or registers?  With WC at position
> 4, this would cause WC to be treated as WB, which is consistent with the
> observed behavior.  WC at position 3 would not be impacted, and WC at
> position 5 would be treated as WT, which I expect to be safe.  One way to
> test this is to try 1=WB, 5=WC.  If my hypothesis is correct, this
> should trigger the bug, even though entry 1 in the PAT is unused because
> entry 0 is also WB.

This looks like a very probable explanation; indeed 1=WB, 5=WC does
trigger the bug! Specifically this layout:

    WB	WB	UC-	UC	WP	WC	WT	UC

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 01:18:27 2023
Date: Sun, 1 Jan 2023 20:17:52 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Intel-gfx] [cache coherency bug] i915 and PAT attributes
Message-ID: <Y7Iwx14scvdamsSj@itl-email>
References: <Y5Hst0bCxQDTN7lK@mail-itl>
 <1c326e0c-5812-083a-0739-aa20fab3efc4@citrix.com>
 <Y6QVhRP+voSLi9xm@intel.com>
 <Y7IWWFaU54VWn266@mail-itl>
 <Y7IfS91fHQ/8fwXt@itl-email>
 <Y7Isw0VxkV91+rP0@mail-itl>

On Mon, Jan 02, 2023 at 02:00:51AM +0100, Marek Marczykowski-Górecki wrote:
> On Sun, Jan 01, 2023 at 07:03:18PM -0500, Demi Marie Obenour wrote:
> > [...]
> > 
> > Old CPUs have had hardware errata that caused the top bit of the PAT
> > entry to be ignored in certain cases.  Could modern CPUs be ignoring
> > this bit when accessing iGPU memory or registers?  With WC at position
> > 4, this would cause WC to be treated as WB, which is consistent with the
> > observed behavior.  WC at position 3 would not be impacted, and WC at
> > position 5 would be treated as WT, which I expect to be safe.  One way to
> > test this is to try 1=WB, 5=WC.  If my hypothesis is correct, this
> > should trigger the bug, even though entry 1 in the PAT is unused because
> > entry 0 is also WB.
> 
> This looks like a very probable explanation; indeed 1=WB, 5=WC does
> trigger the bug! Specifically this layout:
> 
>     WB	WB	UC-	UC	WP	WC	WT	UC

What about WB WT WB UC WB WP WC UC- and WB WT WT UC WB WP WC UC-?  These
differ only in entry 2, which should not be used, since it duplicates
entry 0 or entry 1.  Architecturally, therefore, the two layouts should
behave identically.  If I am correct, the second will work fine, but the
first will trigger the bug.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 01:48:36 2023
Date: Sun, 1 Jan 2023 20:48:13 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Intel-gfx] [cache coherency bug] i915 and PAT attributes
Message-ID: <Y7I3486sAGEVby1U@itl-email>
References: <Y5Hst0bCxQDTN7lK@mail-itl>
 <1c326e0c-5812-083a-0739-aa20fab3efc4@citrix.com>
 <Y6QVhRP+voSLi9xm@intel.com>
 <Y7IWWFaU54VWn266@mail-itl>
 <Y7IfS91fHQ/8fwXt@itl-email>
 <Y7Isw0VxkV91+rP0@mail-itl>
 <Y7Iwx14scvdamsSj@itl-email>

On Sun, Jan 01, 2023 at 08:17:52PM -0500, Demi Marie Obenour wrote:
> On Mon, Jan 02, 2023 at 02:00:51AM +0100, Marek Marczykowski-Górecki wrote:
> > On Sun, Jan 01, 2023 at 07:03:18PM -0500, Demi Marie Obenour wrote:
> > > On Mon, Jan 02, 2023 at 12:24:54AM +0100, Marek Marczykowski-G=C3=B3r=
ecki wrote:
> > > > On Thu, Dec 22, 2022 at 10:29:57AM +0200, Ville Syrj=C3=A4l=C3=A4 w=
rote:
> > > > > On Fri, Dec 16, 2022 at 03:30:13PM +0000, Andrew Cooper wrote:
> > > > > > On 08/12/2022 1:55 pm, Marek Marczykowski-G=C3=B3recki wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > There is an issue with i915 on Xen PV (dom0). The end result =
is a lot of
> > > > > > > glitches, like here: https://openqa.qubes-os.org/tests/54748#=
step/startup/8
> > > > > > > (this one is on ADL, Linux 6.1-rc7 as a Xen PV dom0). It's us=
ing Xorg
> > > > > > > with "modesetting" driver.
> > > > > > >
> > > > > > > After some iterations of debugging, we narrowed it down to i9=
15 handling
> > > > > > > caching. The main difference is that PAT is setup differently=
 on Xen PV
> > > > > > > than on native Linux. Normally, Linux does have appropriate a=
bstraction
> > > > > > > for that, but apparently something related to i915 doesn't pl=
ay well
> > > > > > > with it. The specific difference is:
> > > > > > > native linux:
> > > > > > > x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT
> > > > > > > xen pv:
> > > > > > > x86/PAT: Configuration [0-7]: WB  WT  UC- UC  WC  WP  UC  UC
> > > > > > >                                   ~~          ~~      ~~  ~~
> > > > > > >
> > > > > > > The specific impact depends on kernel version and the hardwar=
e. The most
> > > > > > > severe issues I see on >=3DADL, but some older hardware is af=
fected too -
> > > > > > > sometimes only if composition is disabled in the window manag=
er.
> > > > > > > Some more information is collected at
> > > > > > > https://github.com/QubesOS/qubes-issues/issues/4782 (and few =
linked
> > > > > > > duplicates...).
> > > > > > >
> > > > > > > Kind-of related commit is here:
> > > > > > > https://github.com/torvalds/linux/commit/bdd8b6c98239cad ("dr=
m/i915:
> > > > > > > replace X86_FEATURE_PAT with pat_enabled()") - it is the plac=
e where
> > > > > > > i915 explicitly checks for PAT support, so I'm cc-ing people =
mentioned
> > > > > > > there too.
> > > > > > >
> > > > > > > Any ideas?
> > > > > > >
> > > > > > > The issue can be easily reproduced without Xen too, by adjust=
ing PAT in
> > > > > > > Linux:
> > > > > > > -----8<-----
> > > > > > > diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> > > > > > > index 66a209f7eb86..319ab60c8d8c 100644
> > > > > > > --- a/arch/x86/mm/pat/memtype.c
> > > > > > > +++ b/arch/x86/mm/pat/memtype.c
> > > > > > > @@ -400,8 +400,8 @@ void pat_init(void)
> > > > > > >  		 * The reserved slots are unused, but mapped to their
> > > > > > >  		 * corresponding types in the presence of PAT errata.
> > > > > > >  		 */
> > > > > > > -		pat = PAT(0, WB) | PAT(1, WC) | PAT(2, UC_MINUS) | PAT(3, UC) |
> > > > > > > -		      PAT(4, WB) | PAT(5, WP) | PAT(6, UC_MINUS) | PAT(7, WT);
> > > > > > > +		pat = PAT(0, WB) | PAT(1, WT) | PAT(2, UC_MINUS) | PAT(3, UC) |
> > > > > > > +		      PAT(4, WC) | PAT(5, WP) | PAT(6, UC)       | PAT(7, UC);
> > > > > > >  	}
> > > > > > > 
> > > > > > >  	if (!pat_bp_initialized) {
> > > > > > >
> > > > > >
> > > > > > Hello, can anyone help please?
> > > > > >
> > > > > > Intel's CI has taken this reproducer of the bug, and confirmed the
> > > > > > regression.
> > > > > > https://lore.kernel.org/intel-gfx/Y5Hst0bCxQDTN7lK@mail-itl/T/#m4480c15a0d117dce6210562eb542875e757647fb
> > > > > >
> > > > > > We're reasonably confident that it is an i915 bug (given the repro with
> > > > > > no Xen in the mix), but we're out of further ideas.
> > > > >
> > > > > I don't think we have any code that assumes anything about the PAT,
> > > > > apart from WC being available (which seems like it should still be
> > > > > the case with your modified PAT). I suppose you'll just have to
> > > > > start digging from pgprot_writecombine()/noncached() and make sure
> > > > > everything ends up using the correct PAT entry.
> > > >
> > > > I tried several approaches to this, without success. Here is an update on
> > > > debugging (reported also on #intel-gfx live):
> > > >
> > > > I did several tests with different PAT configurations (by modifying Xen,
> > > > which sets the MSR). The full table is at https://pad.itl.space/sheet/#/2/sheet/view/HD1qT2Zf44Ha36TJ3wj2YL+PchsTidyNTFepW5++ZKM/
> > > > Some highlights:
> > > > - 1=WC, 4=WT - good
> > > > - 1=WT, 4=WC - bad
> > > > - 1=WT, 3=WC (4=WC too) - good
> > > > - 1=WT, 5=WC - good
> > > >
> > > > So, it seems to me that WC at index 4 is problematic for some reason.
> > > >
> > > > Next, I tried to trap all the places in arch/x86/xen/mmu_pv.c that
> > > > write PTEs and verify the requested cache attributes. There, it seems all
> > > > the requested WC mappings are properly translated (using index 1, 3, 4, or
> > > > 5 according to the PAT settings). And after reading the PTE back, it
> > > > indeed seems to be correctly set. I didn't add reading back after
> > > > HYPERVISOR_update_va_mapping, but verified it isn't used for setting WC.
> > > >
> > > > Using the same method, I also checked that indexes that aren't supposed
> > > > to be used (for example index 4 when both 3 and 4 are WC) indeed are not
> > > > used. So, the hypothesis that specific indexes are hardcoded somewhere
> > > > is unlikely.
> > > >
> > > > This all looks very weird to me. Any ideas?
> > >
> > > Old CPUs have had hardware errata that caused the top bit of the PAT
> > > entry to be ignored in certain cases.  Could modern CPUs be ignoring
> > > this bit when accessing iGPU memory or registers?  With WC at position
> > > 4, this would cause WC to be treated as WB, which is consistent with the
> > > observed behavior.  WC at position 3 would not be impacted, and WC at
> > > position 5 would be treated as WT, which I expect to be safe.  One way to
> > > test this is to test 1=WB, 5=WC.  If my hypothesis is correct, this
> > > should trigger the bug, even if entry 1 in the PAT is unused because
> > > entry 0 is also WB.
> >
> > This looks like a very probable explanation; indeed, 1=WB, 5=WC does
> > trigger the bug! Specifically this layout:
> >
> >     WB  WB  UC-  UC  WP  WC  WT  UC
>
> What about WB WT WB UC WB WP WC UC- and WB WT WT UC WB WP WC UC-?  Those
> only differ in entry 2, which will not be used as it duplicates entry 0
> or 1.  Therefore, architecturally, these should behave identically.  If
> I am correct, the second will work fine, but the first will trigger the
> bug.

Also worth testing:

WB  UC- UC  WB  WB  WP  WT  WC
WB  UC- UC  UC  WB  WP  WT  WC

These differ only in (unused) entry 3.
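[Editor's illustration, not code from the thread: the index a CPU looks up in the 8-entry PAT MSR is formed from three page-table bits - PWT (bit 0), PCD (bit 1), and _PAGE_PAT (bit 2). The hypothesis above is that bit 2 of that index is dropped for the affected accesses, so entry i behaves like entry i & 3. A minimal sketch:]

```python
# Sketch of the hypothesis: PAT index selection from PTE bits, and the
# effect of the hardware dropping the top index bit (entry i -> entry i & 3).

def pat_index(pwt: int, pcd: int, pat: int) -> int:
    """Index into the 8-entry PAT MSR selected by a PTE."""
    return (pat << 2) | (pcd << 1) | pwt

# Xen PV layout from the original report, entries 0..7:
xen_pv_pat = ["WB", "WT", "UC-", "UC", "WC", "WP", "UC", "UC"]

requested = pat_index(pwt=0, pcd=0, pat=1)   # entry 4, i.e. WC on Xen PV
effective = requested & 0b011                # entry 0 if bit 2 is ignored
print(xen_pv_pat[requested], "->", xen_pv_pat[effective])  # prints: WC -> WB
```

[Under this model a WC request through entry 4 silently becomes cached WB, matching the observed glitches, while entries 0-3 are unaffected.]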
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 01:58:38 2023
Date: Mon, 2 Jan 2023 02:58:23 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Intel-gfx] [cache coherency bug] [hw bug?] i915 and PAT
 attributes
Message-ID: <Y7I6P2peo5j/vymz@mail-itl>
References: <Y5Hst0bCxQDTN7lK@mail-itl>
 <1c326e0c-5812-083a-0739-aa20fab3efc4@citrix.com>
 <Y6QVhRP+voSLi9xm@intel.com>
 <Y7IWWFaU54VWn266@mail-itl>
 <Y7IfS91fHQ/8fwXt@itl-email>
 <Y7Isw0VxkV91+rP0@mail-itl>
 <Y7Iwx14scvdamsSj@itl-email>
 <Y7I3486sAGEVby1U@itl-email>
In-Reply-To: <Y7I3486sAGEVby1U@itl-email>

On Sun, Jan 01, 2023 at 08:48:13PM -0500, Demi Marie Obenour wrote:
> On Sun, Jan 01, 2023 at 08:17:52PM -0500, Demi Marie Obenour wrote:
> > On Mon, Jan 02, 2023 at 02:00:51AM +0100, Marek Marczykowski-Górecki wrote:
> > > [...]
> > >
> > > This looks like a very probable explanation; indeed, 1=WB, 5=WC does
> > > trigger the bug! Specifically this layout:
> > >
> > >     WB  WB  UC-  UC  WP  WC  WT  UC
> >
> > What about WB WT WB UC WB WP WC UC- and WB WT WT UC WB WP WC UC-?  Those
> > only differ in entry 2, which will not be used as it duplicates entry 0
> > or 1.  Therefore, architecturally, these should behave identically.  If
> > I am correct, the second will work fine, but the first will trigger the
> > bug.

Bingo! This also behaves as predicted.

So, it indeed looks like the _PAGE_PAT bit is ignored by the hardware,
even though it is set in the relevant PTEs.

> Also worth testing:
>
> WB  UC- UC  WB  WB  WP  WT  WC
> WB  UC- UC  UC  WB  WP  WT  WC
>=20
> These differ only in (unused) entry 3.

I'll skip this, as I think it's pretty clear what the result will be.
But if somebody else thinks it's worth testing anyway, let me know.
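[Editor's cross-check, not code from the thread: the good/bad pattern reported above is consistent with the bit-dropping model. This sketch assumes the kernel uses the first WC entry in the layout, and that WC silently degrading to cached WB is what produces the glitches:]

```python
# Fold each tested PAT layout through the "top index bit ignored" model
# (entry i behaves like entry i & 3) and predict good/bad. Layouts and
# observed outcomes are taken from the thread; the prediction logic is an
# editor's reconstruction of the hypothesis, not tested kernel behavior.

LAYOUTS = {
    "1=WC, 4=WT (observed good)":   ["WB", "WC", "UC-", "UC", "WT", "WP", "UC", "UC"],
    "1=WT, 4=WC (observed bad)":    ["WB", "WT", "UC-", "UC", "WC", "WP", "UC", "UC"],
    "1=WT, 3=WC, 4=WC (good)":      ["WB", "WT", "UC-", "WC", "WC", "WP", "UC", "UC"],
    "1=WB, 5=WC (observed bad)":    ["WB", "WB", "UC-", "UC", "WP", "WC", "WT", "UC"],
    "2=WB variant (observed bad)":  ["WB", "WT", "WB",  "UC", "WB", "WP", "WC", "UC-"],
    "2=WT variant (observed good)": ["WB", "WT", "WT",  "UC", "WB", "WP", "WC", "UC-"],
}

def predict(layout):
    wc = layout.index("WC")         # entry the kernel would pick for WC (assumed: first match)
    effective = layout[wc & 0b011]  # type actually applied if the top index bit is dropped
    return "bad" if effective == "WB" else "good"

for name, layout in LAYOUTS.items():
    print(f"{name}: predicted {predict(layout)}")
```

[For every layout tested in the thread, the prediction matches the observed outcome.]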

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 07:56:32 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175547-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175547: trouble: broken/fail/pass/starved
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Jan 2023 07:56:11 +0000

flight 175547 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175547/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 175407
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 175407
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 175407
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 175407

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 175407 pass in 175547
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 175407 pass in 175547
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 175407 pass in 175547
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 175407 pass in 175547
 test-armhf-armhf-xl-credit1   5 host-install(5)          broken pass in 175545
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 175407 pass in 175547
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail in 175545 pass in 175407
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175545 pass in 175547
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 175545 pass in 175547
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host       fail pass in 175545
 test-amd64-i386-xl-vhd       22 guest-start.2              fail pass in 175545

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175407 like 175197
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail in 175545 never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail in 175545 never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 175545 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 175545 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   18 days
Testing same since   175407  2022-12-19 11:42:26 Z   13 days   33 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit1 broken
broken-step test-armhf-armhf-xl-credit1 host-install(5)
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken

Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 09:26:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 09:26:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470257.729702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCH5N-0001fa-RF; Mon, 02 Jan 2023 09:26:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470257.729702; Mon, 02 Jan 2023 09:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCH5N-0001fT-O8; Mon, 02 Jan 2023 09:26:25 +0000
Received: by outflank-mailman (input) for mailman id 470257;
 Mon, 02 Jan 2023 09:26:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCH5M-0001fJ-3A; Mon, 02 Jan 2023 09:26:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCH5L-0000R8-U1; Mon, 02 Jan 2023 09:26:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCH5L-0003EE-C3; Mon, 02 Jan 2023 09:26:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCH5L-0005q9-BZ; Mon, 02 Jan 2023 09:26:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hDvuI+E2CeBN0YTN+eIuSe1htPbueljzw1Tq2gzEsFo=; b=NbbRAaW7Nf0QiDS7/vlcQdHFpn
	pEalFfwNqSGx/Gd2XSZ9JnlCnxJbLC09OTtedxA363BtMI4HIaGDfoG4/ez0bglYwYde/DfXSr7Ax
	W2gF8mbP7zf0jlKhqWslCmjOZWpHrnq8luT8PamKcJURdlXN4NOg6dIQwmMVrcpEDm/o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175546-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175546: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=88603b6dc419445847923fcb7fe5080067a30f98
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Jan 2023 09:26:23 +0000

flight 175546 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175546/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                88603b6dc419445847923fcb7fe5080067a30f98
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   86 days
Failing since        173470  2022-10-08 06:21:34 Z   86 days  178 attempts
Testing same since   175546  2023-01-01 23:13:03 Z    0 days    1 attempts

------------------------------------------------------------
3255 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 497012 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 10:08:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 10:08:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470266.729713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCHjq-0005zy-1S; Mon, 02 Jan 2023 10:08:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470266.729713; Mon, 02 Jan 2023 10:08:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCHjp-0005zr-Tj; Mon, 02 Jan 2023 10:08:13 +0000
Received: by outflank-mailman (input) for mailman id 470266;
 Mon, 02 Jan 2023 10:08:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCHjo-0005zh-Ud; Mon, 02 Jan 2023 10:08:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCHjo-0001SM-S6; Mon, 02 Jan 2023 10:08:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCHjo-0005vg-4O; Mon, 02 Jan 2023 10:08:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCHjo-0000gO-3u; Mon, 02 Jan 2023 10:08:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2LvjWK3IGqL/Bb5c+K2iyJINb+Qfm0zRQLtCPPuyinI=; b=NtjE/Lg5GNHkV8wVi41X/7pwvX
	P77P71K4AVs3DnNM9k6S8BiZtBmahbcfmcHjZ6YicSSKaiK7KejEbVW6UKKMbqeh1q+oLkStDCaNV
	B3E2Lh13EA3Z8mQ3E6U/GDz+08zO9WR6YAYHMCrVtPujN+0G38kArdv3bPmVwpqYgN7A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175548-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175548: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-livepatch:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
X-Osstest-Versions-That:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Jan 2023 10:08:12 +0000

flight 175548 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175548/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair    <job status>                 broken

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken pass in 175541
 test-amd64-i386-freebsd10-amd64  7 xen-install             fail pass in 175541
 test-amd64-i386-livepatch     7 xen-install                fail pass in 175541

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175541
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175541
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175541
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175541
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175541
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175541
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175541
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175541
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175541
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175541
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175541
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175541
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8
baseline version:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8

Last test of basis   175548  2023-01-02 01:54:04 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-libvirt-pair broken
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 10:58:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 10:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470277.729725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCIW9-0002kx-TB; Mon, 02 Jan 2023 10:58:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470277.729725; Mon, 02 Jan 2023 10:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCIW9-0002kq-Op; Mon, 02 Jan 2023 10:58:09 +0000
Received: by outflank-mailman (input) for mailman id 470277;
 Mon, 02 Jan 2023 10:58:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pCIW8-0002kk-34
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 10:58:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCIW5-0002ro-8k; Mon, 02 Jan 2023 10:58:05 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCIW5-0007Wf-2q; Mon, 02 Jan 2023 10:58:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ObKuiq2kyQgt9UlGOidlvxa/0ITGglmtp5rnTLQ3o8c=; b=OBjT2xDwb5x/plFAci6CErolg3
	8MgG/EfJYMhCSfktf3dbPZispRk4mKQt/YFq0vyqyWv66UUiZ9JzuohnrtKt5YA7te2V2Tge4ycw4
	BxuUFOda0NNLOIFoGyg+FsHMuP3nr04aIIQyCVQ73jfh411E4HTLdkl1LSRR0f2rEO9Q=;
Message-ID: <6c46401b-9fbd-b7e0-5f09-cdaec92beead@xen.org>
Date: Mon, 2 Jan 2023 10:58:02 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.0
Subject: Re: [XEN PATCH v1 1/4] arch/riscv: initial RISC-V support to
 build/run minimal Xen
To: Oleksii <oleksii.kurochko@gmail.com>,
 Julien Grall <julien.grall@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Gianluca Guida <gianluca@rivosinc.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1671789736.git.oleksii.kurochko@gmail.com>
 <5d5ec5fbd8787ed8f86bf84a5ac291d07a20b307.1671789736.git.oleksii.kurochko@gmail.com>
 <3343c927-0f27-e285-5b6e-281c61939bb4@xen.org>
 <43d726761900ba3d8b4919fc29fe7cb1991ac656.camel@gmail.com>
 <CAF3u54A+Qqn+7dyBqh8utnN04b-DYkHbtL5DEEWuw9HQ2EQzTQ@mail.gmail.com>
 <540787f539e2620951ebbc7d0aa6767588e0e3ab.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <540787f539e2620951ebbc7d0aa6767588e0e3ab.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 29/12/2022 08:45, Oleksii wrote:
> Totally agree then.
> I missed that there is .bss.*.
> Actually I reworked xen.lds.S a little. As a basis I took xen.lds.S
> from ARM and removed all the arch-specific sections. So xen.lds.S still
> contains stuff which isn't used for now (for example, *(.data.schedulers))
> but will be used in the future.
> Would it be better to go with the reworked linker script, or to remove
> the unneeded sections for now and accept a linking warning/error once
> those sections are actually used?

I don't have a strong opinion either way. I would say go with whatever is 
easiest for you. In the long term, we will want to have a common linker script.

But that's a separate discussion and not a request for you to do it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 13:42:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 13:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470286.729735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCL4t-000219-PW; Mon, 02 Jan 2023 13:42:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470286.729735; Mon, 02 Jan 2023 13:42:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCL4t-000212-M0; Mon, 02 Jan 2023 13:42:11 +0000
Received: by outflank-mailman (input) for mailman id 470286;
 Mon, 02 Jan 2023 13:42:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4MZA=47=epam.com=prvs=53661eeefc=oleksii_moisieiev@srs-se1.protection.inumbo.net>)
 id 1pCL4r-00020m-Oq
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 13:42:10 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f199e71-8aa3-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 14:42:07 +0100 (CET)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 302Bx46E011991; Mon, 2 Jan 2023 13:41:52 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2054.outbound.protection.outlook.com [104.47.13.54])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3mtd0uv746-3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 02 Jan 2023 13:41:52 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com (2603:10a6:102:ea::23)
 by DB9PR03MB9709.eurprd03.prod.outlook.com (2603:10a6:10:459::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Mon, 2 Jan
 2023 13:41:49 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::2da6:6d63:389b:3552]) by PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::2da6:6d63:389b:3552%9]) with mapi id 15.20.5944.019; Mon, 2 Jan 2023
 13:41:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f199e71-8aa3-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QxO5o76k7CjUEU6rKKrq5RwfzDgbpnnwrNT6z5k537oR14SrE8C61Oycz18oAseHvZ5mfJVPJWAiqGbzvAf6WtNKcqLto4f8HZ9QWw0Aa1Xyay7WuO5n2azLMPYE5n+up7JuVXh61z1ntQL7NpDN1dRN8uX8z99LefuygMdoyhVSSkrOuEFl8PPx58VfIdoQD/PgyTze/YMO4G1TqX9pumEtbErIdRFjYc9tln8xP92nSfCi7ygEVvm44G/zF30Go8W5eS1hLo8dNblHsLBCd3ihFUxweQyzGpTB2r75WciMOaoqm2oe0ET/vo7c5bJ2Rdr3fCIhTrx+IBjaHdtmiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cIV9qnXRfvVgBfrc0HVcsMdFAbZxe0DZ7oyvFfhKedY=;
 b=OyKgCuMQiWAlZ80MzuCxTb5BV5m7N6DxFVfVmJcnrVuzV7yPFAR2DNZg98UNuLIu6II7/bqX78wXnGwAdkqwI3sLPDi8wyrrhLfcpXhKIBjIvZnvlAyqw1E70WsOPMyjP8qFB15hIf/AvEi3SSydbz0Q0qJ08rtBu26mMHaJ42lrFPyuCmJMOlmoc73cDiD7KEoyWm/ZkNXpS0bctlQpUFIqkFsA+dcKbUTeruuz2M3quMg5wH0UHkm422fc9mcSMl7eRRiiXj6j6wpj73+tgciZDtakctFbVGXLkpsGuJcdIlTm6jH5w/LKE/F55yfuY4axoZIQurkYogZ8YG8PTw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cIV9qnXRfvVgBfrc0HVcsMdFAbZxe0DZ7oyvFfhKedY=;
 b=lm6SXYlSebGIE4OJ9EN9GySexcKaGRBtdfEQ/x13J2lS+m1haBrn9HM+JRVBdHcFmMi9NgfCnCYkwS4oRsmfjgM/Pp/BhL9+bl9/+2xRhKX8NbkBQ2JpNT8rEK52Dh9zCo1scOxpA5kSjNzXzOdZbhQmADgvRTNTYZ+c9g9JIwT5De9jwft0GVJmeX+ond7HcQVe++B1jVYDyUQOm3PwBEl8I0rfXIatq82OzrqiGt+1MEV41xJ8vDc/6Yuc7rWpHCYI7z5k2uGGTWFYpbyazqDQCyefpdTl4NBcT1QpxfL44hZyX/0Gzl1wYVxwh4tiIloMJ/CyhIMDW0pDD9Xccg==
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: "jgross@suse.com" <jgross@suse.com>
CC: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>,
        Sumit Semwal <sumit.semwal@linaro.org>,
        =?iso-8859-1?Q?Christian_K=F6nig?= <christian.koenig@amd.com>,
        "linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
        "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>,
        "linaro-mm-sig@lists.linaro.org" <linaro-mm-sig@lists.linaro.org>
Subject: [PATCH v1 2/3] dma-buf: add dma buffer release notifier callback
Thread-Topic: [PATCH v1 2/3] dma-buf: add dma buffer release notifier callback
Thread-Index: AQHZHq/2iAqVu94bUUyzCrbXi66OTA==
Date: Mon, 2 Jan 2023 13:41:48 +0000
Message-ID: 
 <835ecf35d2d2d1ea763fe25837f52297c83c511f.1672666311.git.oleksii_moisieiev@epam.com>
References: <cover.1672666311.git.oleksii_moisieiev@epam.com>
In-Reply-To: <cover.1672666311.git.oleksii_moisieiev@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: PA4PR03MB7136:EE_|DB9PR03MB9709:EE_
x-ms-office365-filtering-correlation-id: 6a772b46-7aa1-4f3f-dd01-08daecc71919
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PA4PR03MB7136.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a772b46-7aa1-4f3f-dd01-08daecc71919
X-MS-Exchange-CrossTenant-originalarrivaltime: 02 Jan 2023 13:41:48.9962
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: TQbH5ZtPMRkf5Lq6Jdo1NpaaN0V7CSgdcxJfYDDOXQM4wc1apBGOrkf7D/qZ/5DSjk5C2bFAufoxpe7okefYoO8K7PdqQLH7QRo332Az9W8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR03MB9709
X-Proofpoint-GUID: OvaobUQFU99AdHxtz2tas7SLtdW6NrxG
X-Proofpoint-ORIG-GUID: OvaobUQFU99AdHxtz2tas7SLtdW6NrxG
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1
 definitions=2023-01-02_08,2022-12-30_01,2022-06-22_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=999 phishscore=0
 lowpriorityscore=0 bulkscore=0 impostorscore=0 spamscore=0
 priorityscore=1501 adultscore=0 suspectscore=0 mlxscore=0 malwarescore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2212070000 definitions=main-2301020124

Add the possibility to register a callback on a dma-buf which is
called before the dma_buf->ops->release call.
This lets an external user of the dma buffer be notified before the
buffer is released, without changing the dma-buf ops. It is needed when
an external dma buffer is used as backing storage for a gntdev refs
export and the grant refs must be unmapped before the dma buffer is
released.

Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
---
 drivers/dma-buf/dma-buf.c | 44 +++++++++++++++++++++++++++++++++++++++
 include/linux/dma-buf.h   | 15 +++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index efb4990b29e1..3e663ef92e1f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -25,6 +25,7 @@
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
 #include <linux/mount.h>
+#include <linux/notifier.h>
 #include <linux/pseudo_fs.h>
 
 #include <uapi/linux/dma-buf.h>
@@ -57,6 +58,46 @@ static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
 			     dentry->d_name.name, ret > 0 ? name : "");
 }
 
+int dma_buf_register_release_notifier(struct dma_buf *dmabuf,
+			ext_release_notifier_cb ext_release_cb, void *priv)
+{
+	int ret = 0;
+
+	spin_lock(&dmabuf->ext_release_lock);
+
+	if (dmabuf->ext_release_cb) {
+		ret = -EEXIST;
+		goto unlock;
+	}
+
+	dmabuf->ext_release_cb = ext_release_cb;
+	dmabuf->ext_release_priv = priv;
+ unlock:
+	spin_unlock(&dmabuf->ext_release_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dma_buf_register_release_notifier);
+
+void dma_buf_unregister_release_notifier(struct dma_buf *dmabuf)
+{
+	spin_lock(&dmabuf->ext_release_lock);
+	dmabuf->ext_release_cb = NULL;
+	spin_unlock(&dmabuf->ext_release_lock);
+}
+EXPORT_SYMBOL_GPL(dma_buf_unregister_release_notifier);
+
+static void dma_buf_call_release_notifier(struct dma_buf *dmabuf)
+{
+	if (!dmabuf->ext_release_cb)
+		return;
+
+	spin_lock(&dmabuf->ext_release_lock);
+	dmabuf->ext_release_cb(dmabuf, dmabuf->ext_release_priv);
+	spin_unlock(&dmabuf->ext_release_lock);
+
+	dma_buf_unregister_release_notifier(dmabuf);
+}
+
 static void dma_buf_release(struct dentry *dentry)
 {
 	struct dma_buf *dmabuf;
@@ -75,6 +116,8 @@ static void dma_buf_release(struct dentry *dentry)
 	BUG_ON(dmabuf->cb_in.active || dmabuf->cb_out.active);
 
 	dma_buf_stats_teardown(dmabuf);
+	dma_buf_call_release_notifier(dmabuf);
+
 	dmabuf->ops->release(dmabuf);
 
 	if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
@@ -642,6 +685,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 	init_waitqueue_head(&dmabuf->poll);
 	dmabuf->cb_in.poll = dmabuf->cb_out.poll = &dmabuf->poll;
 	dmabuf->cb_in.active = dmabuf->cb_out.active = 0;
+	spin_lock_init(&dmabuf->ext_release_lock);
 
 	if (!resv) {
 		resv = (struct dma_resv *)&dmabuf[1];
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 71731796c8c3..6282d56ac040 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -287,6 +287,8 @@ struct dma_buf_ops {
 	void (*vunmap)(struct dma_buf *dmabuf, struct iosys_map *map);
 };
 
+typedef void (*ext_release_notifier_cb)(struct dma_buf *dmabuf, void *priv);
+
 /**
  * struct dma_buf - shared buffer object
  *
@@ -432,6 +434,15 @@ struct dma_buf {
 	 */
 	struct dma_resv *resv;
 
+	/** @ext_release_cb: notifier callback to call on release */
+	ext_release_notifier_cb ext_release_cb;
+
+	/** @ext_release_priv: private data for the callback */
+	void *ext_release_priv;
+
+	/** @ext_release_lock: spinlock protecting the release notifier fields */
+	spinlock_t ext_release_lock;
+
 	/** @poll: for userspace poll support */
 	wait_queue_head_t poll;
 
@@ -632,4 +643,8 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
 		 unsigned long);
 int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map);
 void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
+
+int dma_buf_register_release_notifier(struct dma_buf *dmabuf,
+		 ext_release_notifier_cb ext_release_cb, void *priv);
+void dma_buf_unregister_release_notifier(struct dma_buf *dmabuf);
 #endif /* __DMA_BUF_H__ */
-- 
2.25.1


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 13:42:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 13:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470288.729746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCL4u-00028V-Ac; Mon, 02 Jan 2023 13:42:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470288.729746; Mon, 02 Jan 2023 13:42:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCL4u-00026Y-3h; Mon, 02 Jan 2023 13:42:12 +0000
Received: by outflank-mailman (input) for mailman id 470288;
 Mon, 02 Jan 2023 13:42:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4MZA=47=epam.com=prvs=53661eeefc=oleksii_moisieiev@srs-se1.protection.inumbo.net>)
 id 1pCL4s-00020m-W3
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 13:42:10 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3fe4c446-8aa3-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 14:42:08 +0100 (CET)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 302Bx46D011991; Mon, 2 Jan 2023 13:41:52 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2054.outbound.protection.outlook.com [104.47.13.54])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3mtd0uv746-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 02 Jan 2023 13:41:51 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com (2603:10a6:102:ea::23)
 by DB9PR03MB9709.eurprd03.prod.outlook.com (2603:10a6:10:459::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Mon, 2 Jan
 2023 13:41:48 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::2da6:6d63:389b:3552]) by PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::2da6:6d63:389b:3552%9]) with mapi id 15.20.5944.019; Mon, 2 Jan 2023
 13:41:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3fe4c446-8aa3-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EH7Q1ygdq8hb6QLZQg+FO3oVGM0vm/dgC8rYDMaCGbqIzXkBuWvCcCjEkT9BLFqskHwaumIyq3O3nt6S5Lv7I20UGLq5EbJEuzTyi4uAGvxCefF2eJ+MLBNKzcGQCMf99/O6b+D/cvmd1SOTEBLp7S5aSdrAICK71R/mpn+6dPMsNBX9SvNqhNas0XK59rwP+6JqZz6b8q8+ito9T2Q97MSKzUYHfnpWR1KMBwizuOo6HjMPZZL6snO3eWsUlI+d3zRu+9cGTFheT0qcGXU+/YLlh1I79DJOB+VbOub05RYYlQ3TvBxRSNAp4DZQG2EoyCpsMwrK/QjvIfQcEebgkg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=upLENppnbaOJqNZ7wAgRPaS772ApiRNNpeCnQADukok=;
 b=h+zfyrOORIykAmzlkpyogetoAix8Ex/PKUn+FKR/CFwgSrN2riv/nmkccGuDkFF3iiK9iWQe4JH40PFdW3gVJQEqnW3kaaGslapFEmRK/TraLd/VkjPYvKzhHc9aCfG7l/LljrHRLbsIZN4YNplomA3lqaUPLyMRuTyv39tjSkBm4Ko4k2vNmtjtlXIBF4EDigFUpgo5qiSl30DEz2m9ro3GF2XAHRxccHCCeMtd+eeMWzpe7xFrvksY2hejLBl8G0Lkv2341IL6xZ1hUE+aloTtaisvD3JqTfwBOK8rQ+DZVOtP83gFWo1r5OFzyV+B5k7px/Hh7RzcV7YOA9L8ww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=upLENppnbaOJqNZ7wAgRPaS772ApiRNNpeCnQADukok=;
 b=T3oL8E80Zu6y5uljNdl6ASKZKYjjEA7AvCxRY+i21xtxiBfvjMG6/xT7U1cxnRV8lE86tjF/e6m7K0WSOpKufW2F26xxojhuABWsRb9Rdktjhc9cFPEfgcVpTiLOWy350xpe2P9Tb2eXTIs7kvUmoOuOUXre2jiercXn7OwfUsXdDUa48eBYSKuWR4trfV7gAhRp6gSvM8JJhEtgWUpo/9Y71yLuG4xGOvkI1CvXsSA2dU0+dGAeyRGFM6zAuqb440w9PlqSo0KFmegGnGw7qbB41zxXhotEha0P45ur/8uCbWEGqGTmJVe3xyippsarZ1xnEtawF+otzRYHSeUnlg==
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: "jgross@suse.com" <jgross@suse.com>
CC: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>,
        Sumit Semwal <sumit.semwal@linaro.org>,
        =?iso-8859-1?Q?Christian_K=F6nig?= <christian.koenig@amd.com>,
        "linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
        "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>,
        "linaro-mm-sig@lists.linaro.org" <linaro-mm-sig@lists.linaro.org>
Subject: [PATCH v1 1/3] xen/grant-table: save page_count on map and use it
 during async unmapping
Thread-Topic: [PATCH v1 1/3] xen/grant-table: save page_count on map and use
 it during async unmapping
Thread-Index: AQHZHq/2VrVTMTTaekaw3MeMlNz4/w==
Date: Mon, 2 Jan 2023 13:41:48 +0000
Message-ID: 
 <e58e80d2856cb4656ec76409c2db75652865e2ed.1672666311.git.oleksii_moisieiev@epam.com>
References: <cover.1672666311.git.oleksii_moisieiev@epam.com>
In-Reply-To: <cover.1672666311.git.oleksii_moisieiev@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: PA4PR03MB7136:EE_|DB9PR03MB9709:EE_
x-ms-office365-filtering-correlation-id: 01aafa7a-9d3f-4d4e-013e-08daecc718ea
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PA4PR03MB7136.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 01aafa7a-9d3f-4d4e-013e-08daecc718ea
X-MS-Exchange-CrossTenant-originalarrivaltime: 02 Jan 2023 13:41:48.6825
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: kI1NPR1PtH+s/DzX1W50xaAwDmqfN4R/xQrpV0d+2vI3yUPaymI20bn90qI3J/4UWEJxnW+SL0clikRBIlzGek+KvB6SWXedvjDmoLuBIUU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR03MB9709
X-Proofpoint-GUID: mk6-_2x5bNgRKHNXlG71jmOCVvk0mvbE
X-Proofpoint-ORIG-GUID: mk6-_2x5bNgRKHNXlG71jmOCVvk0mvbE
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1
 definitions=2023-01-02_08,2022-12-30_01,2022-06-22_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=902 phishscore=0
 lowpriorityscore=0 bulkscore=0 impostorscore=0 spamscore=0
 priorityscore=1501 adultscore=0 suspectscore=0 mlxscore=0 malwarescore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2212070000 definitions=main-2301020124

Save the reference count of the page before mapping and use this
value in the gntdev_unmap_refs_async() call.
This enhances commit 3f9f1c67572f5e5e6dc84216d48d1480f3c4fcf6
("xen/grant-table: add a mechanism to safely unmap pages that are in
use"), whose safe unmapping mechanism defers pages that may still be in
use (ref count > 1).

This is needed to allow mapping/unmapping pages which have more than one
reference. For example, DRM_IOCTL_MODE_CREATE_DUMB creates a dma buffer
with page_count = 2, so the unmap call would be deferred for as long as
the buffer exists, because the ref count never drops to 1.
This means the buffer remains mapped during the
DRM_IOCTL_MODE_DESTROY_DUMB call, which causes an error:

Unable to handle kernel paging request at virtual address <addr>
....
Call trace:
  check_move_unevictable_folios+0xb8/0x4d0
  check_move_unevictable_pages+0x8c/0x110
  drm_gem_put_pages+0x118/0x198
  drm_gem_shmem_put_pages_locked+0x4c/0x70
  drm_gem_shmem_unpin+0x30/0x50
  virtio_gpu_cleanup_object+0x84/0x130
  virtio_gpu_cmd_unref_cb+0x18/0x2c
  virtio_gpu_dequeue_ctrl_func+0x124/0x290
  process_one_work+0x1d0/0x320
  worker_thread+0x14c/0x444
  kthread+0x10c/0x110

This enhancement records the expected page_count during the map call,
so the refs can be unmapped properly without needless deferrals.

Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
---
 drivers/xen/grant-table.c | 16 +++++++++++++++-
 include/xen/grant_table.h |  3 +++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index e1ec725c2819..d6576c8b6f0f 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1241,11 +1241,23 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		case GNTST_okay:
 		{
 			struct xen_page_foreign *foreign;
+			int page_cnt;
 
 			SetPageForeign(pages[i]);
 			foreign = xen_page_foreign(pages[i]);
 			foreign->domid = map_ops[i].dom;
 			foreign->gref = map_ops[i].ref;
+			page_cnt = page_count(pages[i]);
+			if (page_cnt > FOREIGN_MAX_PAGE_COUNT) {
+				/* The foreign structure can't hold more than
+				 * FOREIGN_MAX_PAGE_COUNT, so save page_count = 1: the safe
+				 * unmap mechanism will then defer unmapping until all users
+				 * stop using this page, leaving the caller to handle them.
+				 */
+				pr_warn_ratelimited("page has too many users, will wait for 0 on unmap\n");
+				foreign->private = 1;
+			} else
+				foreign->private = page_cnt;
 			break;
 		}
 
@@ -1308,9 +1320,11 @@ static void __gnttab_unmap_refs_async(struct gntab_unmap_queue_data* item)
 {
 	int ret;
 	int pc;
+	struct xen_page_foreign *foreign;
 
 	for (pc = 0; pc < item->count; pc++) {
-		if (page_count(item->pages[pc]) > 1) {
+		foreign = xen_page_foreign(item->pages[pc]);
+		if (page_count(item->pages[pc]) > foreign->private) {
 			unsigned long delay = GNTTAB_UNMAP_REFS_DELAY * (item->age + 1);
 			schedule_delayed_work(&item->gnttab_work,
 					      msecs_to_jiffies(delay));
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index e279be353e3f..8e220edf44ab 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -49,6 +49,7 @@
 #include <linux/mm_types.h>
 #include <linux/page-flags.h>
 #include <linux/kernel.h>
+#include <linux/limits.h>
 
 /*
  * Technically there's no reliably invalid grant reference or grant handle,
@@ -274,9 +275,11 @@ int gnttab_unmap_refs_sync(struct gntab_unmap_queue_data *item);
 void gnttab_batch_map(struct gnttab_map_grant_ref *batch, unsigned count);
 void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count);
 
+#define FOREIGN_MAX_PAGE_COUNT       U16_MAX
 
 struct xen_page_foreign {
 	domid_t domid;
+	uint16_t private;
 	grant_ref_t gref;
 };
 
-- 
2.25.1


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 13:42:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 13:42:20 +0000
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: "jgross@suse.com" <jgross@suse.com>
CC: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>,
        Sumit Semwal <sumit.semwal@linaro.org>,
        Christian König <christian.koenig@amd.com>,
        "linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
        "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>,
        "linaro-mm-sig@lists.linaro.org" <linaro-mm-sig@lists.linaro.org>
Subject: [PATCH v1 3/3] xen/grant-table: add new ioctls to map dmabuf to
 existing fd
Thread-Topic: [PATCH v1 3/3] xen/grant-table: add new ioctls to map dmabuf to
 existing fd
Thread-Index: AQHZHq/2ph/BnX8oTk+YEfKPDbopCA==
Date: Mon, 2 Jan 2023 13:41:49 +0000
Message-ID: 
 <157bd897b4dd50b3c724722090b804440914c3cf.1672666311.git.oleksii_moisieiev@epam.com>
References: <cover.1672666311.git.oleksii_moisieiev@epam.com>
In-Reply-To: <cover.1672666311.git.oleksii_moisieiev@epam.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Add new ioctls to allow gntdev to map a scatter-gather table on
top of an existing dma-buf, referenced by a file descriptor.

When using the dma-buf exporter to create a dma-buf with backing
storage and map it to grant refs provided by the domain, we ran into a
problem: some hardware (the i.MX8 GPU in our case) does not support
external buffers and requires the backing storage to be created with
its native tools. The new ioctls were therefore added so that an
existing dma-buf fd can be passed as an input parameter and used as
the backing storage exported to the refs.

The following calls were added:
IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF - map an existing buffer as the
backing storage and export it to the provided grant refs;
IOCTL_GNTDEV_DMABUF_MAP_RELEASE - detach the buffer from the grant
table and set up a notification to unmap the grant refs before the
external buffer is released. After this call the external buffer
should be destroyed;
IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED - wait, up to a timeout, until
the buffer is completely destroyed and the grant refs are unmapped, so
the domain can free the granted pages.

Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
---
 drivers/xen/gntdev-common.h |   8 +-
 drivers/xen/gntdev-dmabuf.c | 416 +++++++++++++++++++++++++++++++++++-
 drivers/xen/gntdev-dmabuf.h |   7 +
 drivers/xen/gntdev.c        | 101 ++++++++-
 drivers/xen/grant-table.c   |  57 +++--
 include/uapi/xen/gntdev.h   |  62 ++++++
 include/xen/grant_table.h   |   5 +
 7 files changed, 626 insertions(+), 30 deletions(-)

diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
index 9c286b2a1900..3b6980df3f9d 100644
--- a/drivers/xen/gntdev-common.h
+++ b/drivers/xen/gntdev-common.h
@@ -61,6 +61,10 @@ struct gntdev_grant_map {
 	bool *being_removed;
 	struct page **pages;
 	unsigned long pages_vm_start;
+	unsigned int preserve_pages;
+
+	/* Needed to avoid allocation in gnttab_dma_free_pages(). */
+	xen_pfn_t *frames;
 
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 	/*
@@ -73,8 +77,6 @@ struct gntdev_grant_map {
 	int dma_flags;
 	void *dma_vaddr;
 	dma_addr_t dma_bus_addr;
-	/* Needed to avoid allocation in gnttab_dma_free_pages(). */
-	xen_pfn_t *frames;
 #endif
 
 	/* Number of live grants */
@@ -85,6 +87,8 @@ struct gntdev_grant_map {
 
 struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
 					  int dma_flags);
+struct gntdev_grant_map *gntdev_get_alloc_from_fd(struct gntdev_priv *priv,
+					  struct sg_table *sgt, int count, int dma_flags);
 
 void gntdev_add_map(struct gntdev_priv *priv, struct gntdev_grant_map *add);
 
diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index 940e5e9e8a54..71d3bfee72aa 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -10,14 +10,18 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
+#include <linux/delay.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-resv.h>
 #include <linux/slab.h>
 #include <linux/types.h>
 #include <linux/uaccess.h>
 #include <linux/module.h>
+#include <linux/notifier.h>
 
 #include <xen/xen.h>
 #include <xen/grant_table.h>
+#include <xen/mem-reservation.h>
 
 #include "gntdev-common.h"
 #include "gntdev-dmabuf.h"
@@ -46,6 +50,18 @@ struct gntdev_dmabuf {
 			/* dma-buf attachment of the imported buffer. */
 			struct dma_buf_attachment *attach;
 		} imp;
+		struct {
+			/* Scatter-gather table of the mapped buffer. */
+			struct sg_table *sgt;
+			/* dma-buf attachment of the mapped buffer. */
+			struct dma_buf_attachment *attach;
+			/* map table */
+			struct gntdev_grant_map *map;
+			/* frames table for memory reservation */
+			xen_pfn_t *frames;
+
+			struct gntdev_priv *priv;
+		} map;
 	} u;
 
 	/* Number of pages this buffer has. */
@@ -57,6 +73,7 @@ struct gntdev_dmabuf {
 struct gntdev_dmabuf_wait_obj {
 	struct list_head next;
 	struct gntdev_dmabuf *gntdev_dmabuf;
+	int fd;
 	struct completion completion;
 };
 
@@ -72,6 +89,10 @@ struct gntdev_dmabuf_priv {
 	struct list_head exp_wait_list;
 	/* List of imported DMA buffers. */
 	struct list_head imp_list;
+	/* List of mapped DMA buffers. */
+	struct list_head map_list;
+	/* List of wait objects. */
+	struct list_head map_wait_list;
 	/* This is the lock which protects dma_buf_xxx lists. */
 	struct mutex lock;
 	/*
@@ -88,6 +109,64 @@ struct gntdev_dmabuf_priv {
 
 static void dmabuf_exp_release(struct kref *kref);
 
+static struct gntdev_dmabuf_wait_obj *
+dmabuf_map_wait_obj_find(struct gntdev_dmabuf_priv *priv, int fd)
+{
+	struct gntdev_dmabuf_wait_obj *obj, *ret = ERR_PTR(-ENOENT);
+
+	mutex_lock(&priv->lock);
+	list_for_each_entry(obj, &priv->map_wait_list, next)
+		if (obj->fd == fd) {
+			pr_debug("Found gntdev_dmabuf in the wait list\n");
+			ret = obj;
+			break;
+		}
+	mutex_unlock(&priv->lock);
+
+	return ret;
+}
+
+static void dmabuf_exp_wait_obj_free(struct gntdev_dmabuf_priv *priv,
+			struct gntdev_dmabuf_wait_obj *obj);
+
+static int
+dmabuf_map_wait_obj_set(struct gntdev_dmabuf_priv *priv,
+			struct gntdev_dmabuf *gntdev_dmabuf, int fd)
+{
+	struct gntdev_dmabuf_wait_obj *obj;
+
+	obj = dmabuf_map_wait_obj_find(gntdev_dmabuf->priv, fd);
+	if (IS_ERR(obj)) {
+		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+		if (!obj)
+			return -ENOMEM;
+	}
+
+	init_completion(&obj->completion);
+	obj->gntdev_dmabuf = gntdev_dmabuf;
+	obj->fd = fd;
+	mutex_lock(&priv->lock);
+	list_add(&obj->next, &priv->map_wait_list);
+	mutex_unlock(&priv->lock);
+	return 0;
+}
+
+static void dmabuf_map_wait_obj_signal(struct gntdev_dmabuf_priv *priv,
+				       struct gntdev_dmabuf *gntdev_dmabuf)
+{
+	struct gntdev_dmabuf_wait_obj *obj;
+
+	mutex_lock(&priv->lock);
+	list_for_each_entry(obj, &priv->map_wait_list, next)
+		if (obj->gntdev_dmabuf == gntdev_dmabuf) {
+			pr_debug("Found gntdev_dmabuf in the wait list, wake\n");
+			complete_all(&obj->completion);
+			break;
+		}
+
+	mutex_unlock(&priv->lock);
+}
+
 static struct gntdev_dmabuf_wait_obj *
 dmabuf_exp_wait_obj_new(struct gntdev_dmabuf_priv *priv,
 			struct gntdev_dmabuf *gntdev_dmabuf)
@@ -410,6 +489,18 @@ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
 	return ret;
 }
 
+static void dmabuf_map_free_gntdev_dmabuf(struct gntdev_dmabuf *gntdev_dmabuf)
+{
+	if (!gntdev_dmabuf)
+		return;
+
+	kfree(gntdev_dmabuf->pages);
+
+	kvfree(gntdev_dmabuf->u.map.frames);
+	kfree(gntdev_dmabuf);
+	gntdev_dmabuf = NULL;
+}
+
 static struct gntdev_grant_map *
 dmabuf_exp_alloc_backing_storage(struct gntdev_priv *priv, int dmabuf_flags,
 				 int count)
@@ -432,6 +523,113 @@ dmabuf_exp_alloc_backing_storage(struct gntdev_priv *priv, int dmabuf_flags,
 	return map;
 }
 
+static void dmabuf_map_remove(struct gntdev_priv *priv,
+		  struct gntdev_dmabuf *gntdev_dmabuf)
+{
+	dmabuf_exp_remove_map(priv, gntdev_dmabuf->u.map.map);
+	dmabuf_map_free_gntdev_dmabuf(gntdev_dmabuf);
+}
+
+static struct gntdev_dmabuf *
+dmabuf_alloc_gntdev_from_buf(struct gntdev_priv *priv, int fd, int dmabuf_flags,
+						 int count, unsigned int data_ofs)
+{
+	struct gntdev_dmabuf *gntdev_dmabuf;
+	struct dma_buf_attachment *attach;
+	struct dma_buf *dma_buf;
+	struct sg_table *sgt;
+	int ret = 0;
+
+	gntdev_dmabuf = kzalloc(sizeof(*gntdev_dmabuf), GFP_KERNEL);
+	if (!gntdev_dmabuf)
+		return ERR_PTR(-ENOMEM);
+
+	gntdev_dmabuf->pages = kcalloc(count,
+		sizeof(gntdev_dmabuf->pages[0]), GFP_KERNEL);
+
+	if (!gntdev_dmabuf->pages) {
+		ret = -ENOMEM;
+		goto free;
+	}
+
+	gntdev_dmabuf->u.map.frames = kvcalloc(count,
+		sizeof(gntdev_dmabuf->u.map.frames[0]), GFP_KERNEL);
+	if (!gntdev_dmabuf->u.map.frames) {
+		ret = -ENOMEM;
+		goto free;
+	}
+
+	if (gntdev_test_page_count(count)) {
+		ret = -EINVAL;
+		goto free;
+	}
+
+	dma_buf = dma_buf_get(fd);
+	if (IS_ERR_OR_NULL(dma_buf)) {
+		pr_debug("Unable to get dmabuf from fd\n");
+		ret = PTR_ERR(dma_buf);
+		goto free;
+	}
+
+	attach = dma_buf_attach(dma_buf, priv->dma_dev);
+	if (IS_ERR_OR_NULL(attach)) {
+		ret = PTR_ERR(attach);
+		goto fail_put;
+	}
+
+	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR_OR_NULL(sgt)) {
+		ret = PTR_ERR(sgt);
+		goto fail_detach;
+	}
+
+	if (sgt->sgl->offset != data_ofs) {
+		pr_debug("DMA buffer offset %d, user-space expects %d\n",
+			 sgt->sgl->offset, data_ofs);
+		ret = -EINVAL;
+		goto fail_unmap;
+	}
+
+	/* Check that the imported buffer is large enough for @count pages. */
+	if (attach->dmabuf->size < count << PAGE_SHIFT) {
+		pr_debug("DMA buffer size %zu, user-space expects %d\n",
+			 attach->dmabuf->size, count << PAGE_SHIFT);
+		ret = -EINVAL;
+		goto fail_unmap;
+	}
+
+	gntdev_dmabuf->u.map.map = gntdev_get_alloc_from_fd(priv, sgt, count,
+									dmabuf_flags);
+	if (IS_ERR_OR_NULL(gntdev_dmabuf->u.map.map)) {
+		ret = -ENOMEM;
+		goto fail_unmap;
+	}
+
+	gntdev_dmabuf->priv = priv->dmabuf_priv;
+	gntdev_dmabuf->fd = fd;
+	gntdev_dmabuf->u.map.attach = attach;
+	gntdev_dmabuf->u.map.sgt = sgt;
+	gntdev_dmabuf->dmabuf = dma_buf;
+	gntdev_dmabuf->nr_pages = count;
+	gntdev_dmabuf->u.map.priv = priv;
+
+	memcpy(gntdev_dmabuf->pages, gntdev_dmabuf->u.map.map->pages, count *
+			sizeof(gntdev_dmabuf->u.map.map->pages[0]));
+	memcpy(gntdev_dmabuf->u.map.frames, gntdev_dmabuf->u.map.map->frames, count *
+			sizeof(gntdev_dmabuf->u.map.map->frames[0]));
+
+	return gntdev_dmabuf;
+fail_unmap:
+	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+fail_detach:
+	dma_buf_detach(dma_buf, attach);
+fail_put:
+	dma_buf_put(dma_buf);
+free:
+	dmabuf_map_free_gntdev_dmabuf(gntdev_dmabuf);
+	return ERR_PTR(ret);
+}
+
+
 static int dmabuf_exp_from_refs(struct gntdev_priv *priv, int flags,
 				int count, u32 domid, u32 *refs, u32 *fd)
 {
@@ -481,6 +679,117 @@ static int dmabuf_exp_from_refs(struct gntdev_priv *priv, int flags,
 	return ret;
 }
 
+static void dmabuf_release_notifier_cb(struct dma_buf *dmabuf, void *priv)
+{
+	struct gntdev_dmabuf *gntdev_dmabuf = priv;
+
+	if (!gntdev_dmabuf)
+		return;
+
+	dmabuf_map_remove(gntdev_dmabuf->u.map.priv, gntdev_dmabuf);
+	dmabuf_map_wait_obj_signal(gntdev_dmabuf->priv, gntdev_dmabuf);
+}
+
+static int dmabuf_detach_map(struct gntdev_dmabuf *gntdev_dmabuf)
+{
+	struct dma_buf *dma_buf = gntdev_dmabuf->dmabuf;
+	long lret;
+
+	/* Wait on any implicit fences */
+	lret = dma_resv_wait_timeout(dma_buf->resv,
+					dma_resv_usage_rw(true), true,
+					MAX_SCHEDULE_TIMEOUT);
+	if (lret == 0)
+		return -ETIME;
+	else if (lret < 0)
+		return lret;
+
+	if (gntdev_dmabuf->u.map.sgt) {
+		dma_buf_unmap_attachment(gntdev_dmabuf->u.map.attach,
+				gntdev_dmabuf->u.map.sgt, DMA_BIDIRECTIONAL);
+	}
+
+	dma_buf_detach(dma_buf, gntdev_dmabuf->u.map.attach);
+	dma_buf_put(dma_buf);
+
+	return 0;
+}
+
+static int dmabuf_map_release(struct gntdev_dmabuf *gntdev_dmabuf, bool sync)
+{
+	int ret;
+
+	if (!sync) {
+		ret = dmabuf_map_wait_obj_set(gntdev_dmabuf->priv, gntdev_dmabuf,
+					gntdev_dmabuf->fd);
+		if (ret)
+			return ret;
+	}
+
+	ret = dmabuf_detach_map(gntdev_dmabuf);
+	if (ret)
+		return ret;
+
+	if (!sync) {
+		ret = dma_buf_register_release_notifier(gntdev_dmabuf->dmabuf,
+						  &dmabuf_release_notifier_cb, gntdev_dmabuf);
+		if (ret)
+			return ret;
+	} else {
+		dmabuf_map_remove(gntdev_dmabuf->u.map.priv, gntdev_dmabuf);
+	}
+
+	return 0;
+}
+
+static int dmabuf_map_refs_to_fd(struct gntdev_priv *priv, int flags,
+			int count, u32 domid, u32 *refs, u32 fd,
+				unsigned int data_ofs)
+{
+	struct gntdev_dmabuf *gntdev_dmabuf;
+	int i, ret;
+
+	gntdev_dmabuf = dmabuf_alloc_gntdev_from_buf(priv, fd, flags, count,
+						data_ofs);
+
+	if (IS_ERR_OR_NULL(gntdev_dmabuf)) {
+		ret = PTR_ERR(gntdev_dmabuf);
+		goto fail_gntdev;
+	}
+
+	for (i = 0; i < count; i++) {
+		gntdev_dmabuf->u.map.map->grants[i].domid = domid;
+		gntdev_dmabuf->u.map.map->grants[i].ref = refs[i];
+	}
+
+	mutex_lock(&priv->lock);
+	gntdev_add_map(priv, gntdev_dmabuf->u.map.map);
+	mutex_unlock(&priv->lock);
+
+	gntdev_dmabuf->u.map.map->flags |= GNTMAP_host_map;
+#if defined(CONFIG_X86)
+	gntdev_dmabuf->u.map.map->flags |= GNTMAP_device_map;
+#endif
+
+	ret = gntdev_map_grant_pages(gntdev_dmabuf->u.map.map);
+	if (ret < 0)
+		goto fail;
+
+	mutex_lock(&priv->lock);
+	list_add(&gntdev_dmabuf->next, &priv->dmabuf_priv->map_list);
+	mutex_unlock(&priv->lock);
+
+	return 0;
+fail:
+	mutex_lock(&priv->lock);
+	list_del(&gntdev_dmabuf->u.map.map->next);
+	mutex_unlock(&priv->lock);
+	dmabuf_detach_map(gntdev_dmabuf);
+	dmabuf_map_free_gntdev_dmabuf(gntdev_dmabuf);
+fail_gntdev:
+	return ret;
+}
+
 /* DMA buffer import support. */
 
 static int
@@ -673,14 +982,15 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
  * it from the buffer's list.
  */
 static struct gntdev_dmabuf *
-dmabuf_imp_find_unlink(struct gntdev_dmabuf_priv *priv, int fd)
+dmabuf_list_find_unlink(struct gntdev_dmabuf_priv *priv, struct list_head *list,
+		int fd)
 {
 	struct gntdev_dmabuf *q, *gntdev_dmabuf, *ret = ERR_PTR(-ENOENT);
 
 	mutex_lock(&priv->lock);
-	list_for_each_entry_safe(gntdev_dmabuf, q, &priv->imp_list, next) {
+	list_for_each_entry_safe(gntdev_dmabuf, q, list, next) {
 		if (gntdev_dmabuf->fd == fd) {
-			pr_debug("Found gntdev_dmabuf in the import list\n");
+			pr_debug("Found gntdev_dmabuf in the list\n");
 			ret = gntdev_dmabuf;
 			list_del(&gntdev_dmabuf->next);
 			break;
@@ -696,7 +1006,7 @@ static int dmabuf_imp_release(struct gntdev_dmabuf_priv *priv, u32 fd)
 	struct dma_buf_attachment *attach;
 	struct dma_buf *dma_buf;
 
-	gntdev_dmabuf = dmabuf_imp_find_unlink(priv, fd);
+	gntdev_dmabuf = dmabuf_list_find_unlink(priv, &priv->imp_list, fd);
 	if (IS_ERR(gntdev_dmabuf))
 		return PTR_ERR(gntdev_dmabuf);
 
@@ -726,6 +1036,21 @@ static void dmabuf_imp_release_all(struct gntdev_dmabuf_priv *priv)
 		dmabuf_imp_release(priv, gntdev_dmabuf->fd);
 }
=20
+static void dmabuf_map_release_all(struct gntdev_dmabuf_priv *priv)
+{
+	struct gntdev_dmabuf *q, *gntdev_dmabuf;
+	struct gntdev_dmabuf_wait_obj *o, *obj;
+
+	list_for_each_entry_safe(obj, o, &priv->map_wait_list, next) {
+		dmabuf_exp_wait_obj_free(priv, obj);
+	}
+
+	list_for_each_entry_safe(gntdev_dmabuf, q, &priv->map_list, next) {
+		dmabuf_map_release(gntdev_dmabuf, true);
+	}
+
+}
+
 /* DMA buffer IOCTL support. */
 
 long gntdev_ioctl_dmabuf_exp_from_refs(struct gntdev_priv *priv, int use_ptemod,
@@ -769,6 +1094,47 @@ long gntdev_ioctl_dmabuf_exp_from_refs(struct gntdev_priv *priv, int use_ptemod,
 	return ret;
 }
 
+long gntdev_ioctl_dmabuf_map_refs_to_buf(struct gntdev_priv *priv, int use_ptemod,
+					  struct ioctl_gntdev_dmabuf_map_refs_to_buf __user *u)
+{
+	struct ioctl_gntdev_dmabuf_map_refs_to_buf op;
+	u32 *refs;
+	long ret;
+
+	if (use_ptemod) {
+		pr_debug("Cannot provide dma-buf: use_ptemode %d\n",
+			 use_ptemod);
+		return -EINVAL;
+	}
+
+	if (copy_from_user(&op, u, sizeof(op)) != 0)
+		return -EFAULT;
+
+	if (op.count <= 0)
+		return -EINVAL;
+
+	refs = kcalloc(op.count, sizeof(*refs), GFP_KERNEL);
+	if (!refs)
+		return -ENOMEM;
+
+	if (copy_from_user(refs, u->refs, sizeof(*refs) * op.count) != 0) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = dmabuf_map_refs_to_fd(priv, op.flags, op.count,
+				   op.domid, refs, op.fd, op.data_ofs);
+	if (ret)
+		goto out;
+
+	if (copy_to_user(u, &op, sizeof(op)) != 0)
+		ret =3D -EFAULT;
+
+out:
+	kfree(refs);
+	return ret;
+}
+
 long gntdev_ioctl_dmabuf_exp_wait_released(struct gntdev_priv *priv,
 					   struct ioctl_gntdev_dmabuf_exp_wait_released __user *u)
 {
@@ -823,6 +1189,45 @@ long gntdev_ioctl_dmabuf_imp_release(struct gntdev_priv *priv,
 	return dmabuf_imp_release(priv->dmabuf_priv, op.fd);
 }
=20
+long gntdev_ioctl_dmabuf_map_release(struct gntdev_priv *priv,
+				     struct ioctl_gntdev_dmabuf_map_release __user *u)
+{
+	struct ioctl_gntdev_dmabuf_map_release op;
+	struct gntdev_dmabuf *gntdev_dmabuf;
+
+	if (copy_from_user(&op, u, sizeof(op)) != 0)
+		return -EFAULT;
+
+	gntdev_dmabuf = dmabuf_list_find_unlink(priv->dmabuf_priv,
+			&priv->dmabuf_priv->map_list, op.fd);
+	if (IS_ERR(gntdev_dmabuf))
+		return PTR_ERR(gntdev_dmabuf);
+
+	return dmabuf_map_release(gntdev_dmabuf, false);
+}
+
+long gntdev_ioctl_dmabuf_map_wait_released(struct gntdev_priv *priv,
+				     struct ioctl_gntdev_dmabuf_map_wait_released __user *u)
+{
+	struct ioctl_gntdev_dmabuf_map_wait_released op;
+	struct gntdev_dmabuf_wait_obj *obj;
+	int ret = 0;
+
+	if (copy_from_user(&op, u, sizeof(op)) != 0)
+		return -EFAULT;
+
+	obj = dmabuf_map_wait_obj_find(priv->dmabuf_priv, op.fd);
+	if (IS_ERR_OR_NULL(obj))
+		return (PTR_ERR(obj) == -ENOENT) ? 0 : PTR_ERR(obj);
+
+	if (!completion_done(&obj->completion))
+		ret = dmabuf_exp_wait_obj_wait(obj, op.wait_to_ms);
+
+	if (!ret && ret != -ETIMEDOUT)
+		dmabuf_exp_wait_obj_free(priv->dmabuf_priv, obj);
+	return ret;
+}
+
 struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp)
 {
 	struct gntdev_dmabuf_priv *priv;
@@ -835,6 +1240,8 @@ struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp)
 	INIT_LIST_HEAD(&priv->exp_list);
 	INIT_LIST_HEAD(&priv->exp_wait_list);
 	INIT_LIST_HEAD(&priv->imp_list);
+	INIT_LIST_HEAD(&priv->map_list);
+	INIT_LIST_HEAD(&priv->map_wait_list);
 
 	priv->filp = filp;
 
@@ -844,5 +1251,6 @@ struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp)
 void gntdev_dmabuf_fini(struct gntdev_dmabuf_priv *priv)
 {
 	dmabuf_imp_release_all(priv);
+	dmabuf_map_release_all(priv);
 	kfree(priv);
 }
diff --git a/drivers/xen/gntdev-dmabuf.h b/drivers/xen/gntdev-dmabuf.h
index 3d9b9cf9d5a1..07301f12ac52 100644
--- a/drivers/xen/gntdev-dmabuf.h
+++ b/drivers/xen/gntdev-dmabuf.h
@@ -21,6 +21,9 @@ void gntdev_dmabuf_fini(struct gntdev_dmabuf_priv *priv);
 long gntdev_ioctl_dmabuf_exp_from_refs(struct gntdev_priv *priv, int use_ptemod,
 				       struct ioctl_gntdev_dmabuf_exp_from_refs __user *u);
 
+long gntdev_ioctl_dmabuf_map_refs_to_buf(struct gntdev_priv *priv, int use_ptemod,
+					   struct ioctl_gntdev_dmabuf_map_refs_to_buf __user *u);
 
 long gntdev_ioctl_dmabuf_exp_wait_released(struct gntdev_priv *priv,
 					   struct ioctl_gntdev_dmabuf_exp_wait_released __user *u);
 
@@ -30,4 +33,8 @@ long gntdev_ioctl_dmabuf_imp_to_refs(struct gntdev_priv *priv,
 long gntdev_ioctl_dmabuf_imp_release(struct gntdev_priv *priv,
 				     struct ioctl_gntdev_dmabuf_imp_release __user *u);
 
+long gntdev_ioctl_dmabuf_map_release(struct gntdev_priv *priv,
+				     struct ioctl_gntdev_dmabuf_map_release __user *u);
+long gntdev_ioctl_dmabuf_map_wait_released(struct gntdev_priv *priv,
+				     struct ioctl_gntdev_dmabuf_map_wait_released __user *u);
 #endif
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 4d9a3050de6a..677a51244bb2 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -22,6 +22,7 @@
 
 #define pr_fmt(fmt) "xen:" KBUILD_MODNAME ": " fmt
 
+#include <linux/dma-buf.h>
 #include <linux/dma-mapping.h>
 #include <linux/module.h>
 #include <linux/kernel.h>
@@ -43,6 +44,7 @@
 #include <xen/gntdev.h>
 #include <xen/events.h>
 #include <xen/page.h>
+#include <xen/mem-reservation.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 
@@ -96,7 +98,11 @@ static void gntdev_free_map(struct gntdev_grant_map *map)
 		return;
 
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
-	if (map->dma_vaddr) {
+	if (map->pages && map->preserve_pages) {
+		gnttab_dma_clean_page_reservation(map->count, map->pages,
+				map->frames);
+
+	} else if (map->dma_vaddr) {
 		struct gnttab_dma_alloc_args args;
=20
 		args.dev = map->dma_dev;
@@ -216,6 +222,82 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
 	return NULL;
 }
 
=20
+struct gntdev_grant_map *gntdev_get_alloc_from_fd(struct gntdev_priv *priv,
+					  struct sg_table *sgt, int count, int dma_flags)
+{
+	struct gntdev_grant_map *add;
+	int i = 0;
+	struct sg_page_iter sg_iter;
+
+	add = kzalloc(sizeof(*add), GFP_KERNEL);
+	if (!add)
+		return NULL;
+
+	add->grants    = kvcalloc(count, sizeof(add->grants[0]), GFP_KERNEL);
+	add->map_ops   = kvcalloc(count, sizeof(add->map_ops[0]), GFP_KERNEL);
+	add->unmap_ops = kvcalloc(count, sizeof(add->unmap_ops[0]), GFP_KERNEL);
+	add->pages     = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);
+	add->frames    = kvcalloc(count, sizeof(add->frames[0]),
+			       GFP_KERNEL);
+	add->being_removed =
+		kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);
+	add->preserve_pages = 1;
+
+	if (add->grants == NULL    ||
+		add->map_ops == NULL   ||
+		add->unmap_ops == NULL ||
+		add->pages == NULL     ||
+		add->frames == NULL    ||
+		add->being_removed == NULL)
+		goto err;
+
+	if (use_ptemod) {
+		add->kmap_ops   = kvmalloc_array(count, sizeof(add->kmap_ops[0]),
+						 GFP_KERNEL);
+		add->kunmap_ops = kvmalloc_array(count, sizeof(add->kunmap_ops[0]),
+						 GFP_KERNEL);
+		if (NULL == add->kmap_ops || NULL == add->kunmap_ops)
+			goto err;
+	}
+
+	for_each_sgtable_page(sgt, &sg_iter, 0) {
+		struct page *page = sg_page_iter_page(&sg_iter);
+
+		add->pages[i] = page;
+		add->frames[i] = xen_page_to_gfn(page);
+		i++;
+		if (i >= count)
+			break;
+	}
+
+	if (i < count) {
+		pr_debug("Provided buffer is too small\n");
+		goto err;
+	}
+
+	if (gnttab_dma_reserve_pages(count, add->pages, add->frames))
+		goto err;
+
+	for (i = 0; i < count; i++) {
+		add->map_ops[i].handle = -1;
+		add->unmap_ops[i].handle = -1;
+		if (use_ptemod) {
+			add->kmap_ops[i].handle = -1;
+			add->kunmap_ops[i].handle = -1;
+		}
+	}
+
+	add->index = 0;
+	add->count = count;
+	refcount_set(&add->users, 1);
+
+	return add;
+
+err:
+	gntdev_free_map(add);
+	return NULL;
+}
+
 void gntdev_add_map(struct gntdev_priv *priv, struct gntdev_grant_map *add)
 {
 	struct gntdev_grant_map *map;
@@ -610,6 +692,9 @@ static int gntdev_release(struct inode *inode, struct file *flip)
 	struct gntdev_grant_map *map;
 
 	pr_debug("priv %p\n", priv);
+#ifdef CONFIG_XEN_GNTDEV_DMABUF
+	gntdev_dmabuf_fini(priv->dmabuf_priv);
+#endif
 
 	mutex_lock(&priv->lock);
 	while (!list_empty(&priv->maps)) {
@@ -620,10 +705,6 @@ static int gntdev_release(struct inode *inode, struct file *flip)
 	}
 	mutex_unlock(&priv->lock);
 
-#ifdef CONFIG_XEN_GNTDEV_DMABUF
-	gntdev_dmabuf_fini(priv->dmabuf_priv);
-#endif
-
 	kfree(priv);
 	return 0;
 }
@@ -1020,6 +1101,16 @@ static long gntdev_ioctl(struct file *flip,
 
 	case IOCTL_GNTDEV_DMABUF_IMP_RELEASE:
 		return gntdev_ioctl_dmabuf_imp_release(priv, ptr);
+
+	case IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF:
+		return gntdev_ioctl_dmabuf_map_refs_to_buf(priv, use_ptemod, ptr);
+
+	case IOCTL_GNTDEV_DMABUF_MAP_RELEASE:
+		return gntdev_ioctl_dmabuf_map_release(priv, ptr);
+
+	case IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED:
+		return gntdev_ioctl_dmabuf_map_wait_released(priv, ptr);
+
 #endif
 
 	default:
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index d6576c8b6f0f..257e335bc65b 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1036,6 +1036,40 @@ void gnttab_free_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_free_pages);
 
+int gnttab_dma_reserve_pages(int nr_pages, struct page **pages,
+		    xen_pfn_t *frames)
+{
+	int ret, i;
+
+	for (i = 0; i < nr_pages; i++)
+		xenmem_reservation_scrub_page(pages[i]);
+
+	xenmem_reservation_va_mapping_reset(nr_pages, pages);
+
+	ret = xenmem_reservation_decrease(nr_pages, frames);
+	if (ret != nr_pages)
+		return ret;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_dma_reserve_pages);
+
+int gnttab_dma_clean_page_reservation(int nr_pages, struct page **pages,
+		    xen_pfn_t *frames)
+{
+	int ret;
+
+	ret = xenmem_reservation_increase(nr_pages, frames);
+	if (ret != nr_pages) {
+		pr_debug("Failed to increase reservation for DMA buffer\n");
+		return -EFAULT;
+	}
+
+	xenmem_reservation_va_mapping_update(nr_pages, pages, frames);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_dma_clean_page_reservation);
+
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 /**
  * gnttab_dma_alloc_pages - alloc DMAable pages suitable for grant mapping into
@@ -1071,17 +1105,11 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args)
 
 		args->pages[i] = page;
 		args->frames[i] = xen_page_to_gfn(page);
-		xenmem_reservation_scrub_page(page);
 	}
 
-	xenmem_reservation_va_mapping_reset(args->nr_pages, args->pages);
-
-	ret = xenmem_reservation_decrease(args->nr_pages, args->frames);
-	if (ret != args->nr_pages) {
-		pr_debug("Failed to decrease reservation for DMA buffer\n");
-		ret = -EFAULT;
+	ret = gnttab_dma_reserve_pages(args->nr_pages, args->pages, args->frames);
+	if (ret)
 		goto fail;
-	}
 
 	ret = gnttab_pages_set_private(args->nr_pages, args->pages);
 	if (ret < 0)
@@ -1109,17 +1137,8 @@ int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args)
 	for (i = 0; i < args->nr_pages; i++)
 		args->frames[i] = page_to_xen_pfn(args->pages[i]);
 
-	ret = xenmem_reservation_increase(args->nr_pages, args->frames);
-	if (ret != args->nr_pages) {
-		pr_debug("Failed to increase reservation for DMA buffer\n");
-		ret = -EFAULT;
-	} else {
-		ret = 0;
-	}
-
-	xenmem_reservation_va_mapping_update(args->nr_pages, args->pages,
-					     args->frames);
-
+	ret = gnttab_dma_clean_page_reservation(args->nr_pages, args->pages,
+			  args->frames);
 	size = args->nr_pages << PAGE_SHIFT;
 	if (args->coherent)
 		dma_free_coherent(args->dev, size,
diff --git a/include/uapi/xen/gntdev.h b/include/uapi/xen/gntdev.h
index 7a7145395c09..cadc7fd9bc9c 100644
--- a/include/uapi/xen/gntdev.h
+++ b/include/uapi/xen/gntdev.h
@@ -312,4 +312,66 @@ struct ioctl_gntdev_dmabuf_imp_release {
 	__u32 reserved;
 };
 
+/*
+ * Fd mapping ioctls allow mapping @fd to @refs.
+ *
+ * Allows gntdev to map a scatter-gather table to an existing dma-buf
+ * file descriptor. It provides the same functionality as the
+ * DMABUF_EXP_FROM_REFS_V2 ioctls, but maps the sg table on top of the
+ * existing buffer memory instead of allocating memory. This is useful
+ * when the exporter has to work with an external buffer.
+ */
+
+#define IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF \
+	_IOC(_IOC_NONE, 'G', 15, \
+	  sizeof(struct ioctl_gntdev_dmabuf_map_refs_to_buf))
+struct ioctl_gntdev_dmabuf_map_refs_to_buf {
+	/* IN parameters. */
+	/* Specific options for this dma-buf: see GNTDEV_DMA_FLAG_XXX. */
+	__u32 flags;
+	/* Number of grant references in @refs array. */
+	__u32 count;
+	/* Offset of the data in the dma-buf. */
+	__u32 data_ofs;
+	/* File descriptor of the dma-buf. */
+	__u32 fd;
+	/* The domain ID of the grant references to be mapped. */
+	__u32 domid;
+	/* Variable IN parameter. */
+	/* Array of grant references of size @count. */
+	__u32 refs[1];
+};
+
+/*
+ * This will release the gntdev attachment to the buffer with file
+ * descriptor @fd, so it can be released by its owner. This is only valid
+ * for buffers created with IOCTL_GNTDEV_DMABUF_EXP_REFS_TO_BUF.
+ * Returns 0 on success, or -ETIME when waiting for the dma-buf fences
+ * to clear timed out. In that case the release call should be repeated
+ * after the dma-buf fences have been released.
+ */
+#define IOCTL_GNTDEV_DMABUF_MAP_RELEASE \
+	_IOC(_IOC_NONE, 'G', 16, \
+	     sizeof(struct ioctl_gntdev_dmabuf_map_release))
+struct ioctl_gntdev_dmabuf_map_release {
+	/* IN parameters */
+	__u32 fd;
+	__u32 reserved;
+};
+
+/*
+ * This will wait until the gntdev release procedure is finished and the
+ * buffer has been released completely. This is only valid for buffers
+ * created with IOCTL_GNTDEV_DMABUF_EXP_REFS_TO_BUF.
+ */
+#define IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED \
+	_IOC(_IOC_NONE, 'G', 17, \
+	     sizeof(struct ioctl_gntdev_dmabuf_map_wait_released))
+struct ioctl_gntdev_dmabuf_map_wait_released {
+	/* IN parameters */
+	__u32 fd;
+	__u32 wait_to_ms;
+};
+
 #endif /* __LINUX_PUBLIC_GNTDEV_H__ */
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 8e220edf44ab..73b473474ac9 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -250,6 +250,11 @@ int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args);
 int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args);
 #endif
 
+int gnttab_dma_reserve_pages(int nr_pages, struct page **pages,
+		    xen_pfn_t *frames);
+int gnttab_dma_clean_page_reservation(int nr_pages, struct page **pages,
+		    xen_pfn_t *frames);
+
 int gnttab_pages_set_private(int nr_pages, struct page **pages);
 void gnttab_pages_clear_private(int nr_pages, struct page **pages);
 
-- 
2.25.1


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 13:42:20 2023
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: "jgross@suse.com" <jgross@suse.com>
CC: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>,
        Sumit Semwal <sumit.semwal@linaro.org>,
        Christian König <christian.koenig@amd.com>,
        "linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
        "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>,
        "linaro-mm-sig@lists.linaro.org" <linaro-mm-sig@lists.linaro.org>
Subject: [PATCH v1 0/3] Add ioctls to map grant refs on the external backing
 storage
Date: Mon, 2 Jan 2023 13:41:48 +0000
Message-ID: <cover.1672666311.git.oleksii_moisieiev@epam.com>

Hello,

Let me introduce new ioctls, which allow gntdev to map a scatter-gather
table on top of an existing dma-buf referenced by a file descriptor.

When using a dma-buf exporter to create a dma-buf with backing storage
and map it to grant refs provided by the domain, we ran into a problem:
some hardware (the i.MX8 GPU in our case) does not support external
buffers and requires the backing storage to be created with its native
tools. That's why the new ioctls take an existing dma-buf fd as an input
parameter and use it as the backing storage exported to the refs.

The following calls were added:
IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF - map an existing buffer as the
backing storage and export it to the provided grant refs;
IOCTL_GNTDEV_DMABUF_MAP_RELEASE - detach the buffer from the grant table
and set up a notification to unmap the grant refs before the external
buffer is released. After this call the external buffer should be
destroyed;
IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED - wait (with a timeout) until the
buffer is completely destroyed and the grant refs are unmapped, so the
domain can free the grant pages. Should be called after the buffer has
been destroyed.

Our setup is based on an IMX8QM board. We're implementing zero-copy
support for DomU graphics using the Wayland
zwp_linux_dmabuf_v1_interface implementation.

For the dma-buf exporter, we used the i.MX8 GPU's native tools to create
the backing storage for the grant refs received from DomU. The buffer for
the backing storage was allocated with a gbm_bo_create call, because the
GPU does not support external buffers and requires the backing storage to
be created with its native tools (eglCreateImageKHR returns
EGL_NO_IMAGE_KHR for buffers that were not created via gbm_bo_create).

This behaviour was also tested on a QEMU setup, using the
DRM_IOCTL_MODE_CREATE_DUMB call to create the backing storage buffer.

---
Oleksii Moisieiev (3):
  xen/grant-table: save page_count on map and use if during async
    unmapping
  dma-buf: add dma buffer release notifier callback
  xen/grant-table: add new ioctls to map dmabuf to existing fd

 drivers/dma-buf/dma-buf.c   |  44 ++++
 drivers/xen/gntdev-common.h |   8 +-
 drivers/xen/gntdev-dmabuf.c | 416 +++++++++++++++++++++++++++++++++++-
 drivers/xen/gntdev-dmabuf.h |   7 +
 drivers/xen/gntdev.c        | 101 ++++++++-
 drivers/xen/grant-table.c   |  73 +++++--
 include/linux/dma-buf.h     |  15 ++
 include/uapi/xen/gntdev.h   |  62 ++++++
 include/xen/grant_table.h   |   8 +
 9 files changed, 703 insertions(+), 31 deletions(-)

-- 
2.25.1


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 13:57:46 2023
Message-ID: <8f26f8fd-ec65-57c9-4ab3-7cb411580b6c@amd.com>
Date: Mon, 2 Jan 2023 14:57:26 +0100
Subject: Re: [PATCH 1/6] CI: Drop automation/configs/
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony.perard@citrix.com>,
	"Oleksii Kurochko" <oleksii.kurochko@gmail.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-2-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221230003848.3241-2-andrew.cooper3@citrix.com>

Hi Andrew,

On 30/12/2022 01:38, Andrew Cooper wrote:
> Having 3 extra hypervisor builds on the end of a full build is deeply
> confusing to debug if one of them fails, because the .config file presented in
> the artefacts is not the one which caused a build failure.  Also, the log
> tends to be truncated in the UI.
> 
> PV-only is tested as part of PV-Shim in a full build anyway, so doesn't need
> repeating.  HVM-only and neither will come up frequently in randconfig, so
> drop all the logic here to simplify things.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 14:01:41 2023
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 2 Jan
 2023 08:01:33 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 2 Jan
 2023 08:01:00 -0600
Received: from [10.71.193.33] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 2 Jan 2023 08:00:59 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f89bd132-8aa5-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iTOhaNPCQopKF+VeJPvubk3OPDFMBCmITNpImUqg2rJQJ7yCyrHHZ6qusE2moXStFi84dNKUsDhyxuMzM/M3owBLoqoWQvWuhd3qgy+z4AE48E6AncLV8di8z0aOVB3kk0ip1N+PMXMOUpnkiIe8hxZn1VHTC0+UHj4cGpz9zWxUyTt0b57a287o5oMwFBbe1+qudt3W2G6uA0/FEHtME0/ULS1clYbdEULMZEB01EQeHKSQA5VhoHSHaSii7yjZ+cbrvHXyUT0NXDOOcCzHMug1hrDU8ujhnLCfbgm36OBG0lGNYql0r21BjX6cWwwEGGhfuwwRTCpDaXbf07+h9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ABCgx/a3xNCjBPacP85PXQGgbm8BLLReM5BWn7wRal4=;
 b=YTHiozJc7qQygkeKY11CSBPldE2VcMZ1ZfjzxL1GlpGW+5AbxcvPVg7CU2Lz+NpQMJd7zhRfr2JlTwLGEUsnE/5QL22HZJE06xXH9sSR/60xmjysYyFmc3OiHxi/Zovhgff1Y0cWeBA4SarOkRtXZYFRe7NVb8/ZmIR9FL277pVKO/Wqwenks8Uzv2DiGiJY6+IhFnXO/x5gf0XBWacEW98QCa2zYu29EGnNVDvccTHDewdf3dE00vudf+cdKB6osn8QGaqh6DlEfMwlgbC98752ZbdjlaElRL9pObjKmMOTT15VOc0zp8isDAyDSgYWd96ECnOBqLTER5uj++vZPg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=citrix.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ABCgx/a3xNCjBPacP85PXQGgbm8BLLReM5BWn7wRal4=;
 b=QlP3WhmhYXG2zjR4hs5QyBxkdkZcgr5Jz0feaTXiSjtrR3mtTVyfOEpldrLVs5lgrOEy9TTqR9glZwndzip7fZmbi8VH3kxPGhtsTfHfKj/0FVNcCItw2kCM6VJYhpBlSFS91SMPpXKvJIx+PjGa3TlfhL+fcc3oZb2EUxK7S3I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <07b5cc36-e0ea-9b51-036a-9523920dd74a@amd.com>
Date: Mon, 2 Jan 2023 15:00:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/6] CI: Remove guesswork about which artefacts to
 preserve
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony.perard@citrix.com>,
	"Oleksii Kurochko" <oleksii.kurochko@gmail.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-3-andrew.cooper3@citrix.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221230003848.3241-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF00010207:EE_|CH0PR12MB5042:EE_
X-MS-Office365-Filtering-Correlation-Id: d4c969fc-3648-4ae2-67c3-08daecc9db87
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0u6BXlASZ44acsqFlYmT9XE87+1XgE2geVGvdLjglH2ynRwISjm8YRU6RmNmTN03Ij/QqGKwaa7OZxjLWVrqsLRSEnXw5AXk7N1s42F7QwY/ZnVogJRXWLiRR706cE/buRn52VjbCFDY1L/NJwIdbuIjMaBQvR5QzWpg177nblI7GEgIs49wT1QRYrcgvnDYBlCV+wpWv3HQjX4j8MX17Yc51FwB3j3HTUOWDtH+I4/zvkpUpq7F5BMLbWuvrK/6Qj5bwkaveFWVbRzk66duJbUYMor8APfS7z8wIB9qJ64esisaLyOcFFz3joXo251qRWZgib/UdH1uqfKTBeNzO7nveFu7l/2hiJf/q3KO4QsapLHARdkjaw3nLLJUqGZVRjlQ05XNKn/B468jSVwo0Wmm5XAC3edzCxBU4js8pnKjKx9mkVu9QB21G4GE0XKnco3ITVbiha5j+3jcZ9pJ0I8hdSba27q+s5+iUK6v9WcNCopr/0xmF8SW/3+Ggvl7RsB2xmAOh9Gcl8Tyw5Hyy20DdxihZFN9Sry2nKevQ7mLYNvUP62kOkf0+pMxfUQaN/+gSdundOb+o8FD/mmrjOe8WPG/JmBE7l/AOlBsEA6pdebA+CjrDynuaLqUtjNE1xZpJ5yl6N+KG+vV2lX9qJBn56UWSeXId6+pOZ6wRB+0IpOo+k7UdwNgcf1PHLKL+fpbNbn1Cv1KUqbo3j1pfcH8ylEG/QAmL+pLp0tNRFkQA/yHDsE0xZEqWYMriCVdE29aROQH9F4KBsConi2m30njmcw2pPCvwp1+d06UzqI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(136003)(396003)(39860400002)(451199015)(40470700004)(46966006)(36840700001)(36756003)(2906002)(82740400003)(356005)(8936002)(5660300002)(41300700001)(44832011)(4744005)(426003)(47076005)(36860700001)(86362001)(31696002)(83380400001)(70206006)(110136005)(54906003)(31686004)(70586007)(81166007)(40460700003)(82310400005)(336012)(40480700001)(478600001)(316002)(4326008)(8676002)(16576012)(53546011)(186003)(26005)(2616005)(22166006)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jan 2023 14:01:34.0762
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d4c969fc-3648-4ae2-67c3-08daecc9db87
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF00010207.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR12MB5042

Hi Andrew,

On 30/12/2022 01:38, Andrew Cooper wrote:
> 
> 
> Preserve the artefacts based on the `make` rune we actually ran, rather than
> guesswork about which rune we would have run based on other settings.
> 
> Note that the ARM qemu smoke tests depend on finding binaries/xen even from
> full builds.  Also that the Jessie-32 containers build tools but not Xen.
> 
> This means the x86_32 builds now store relevant artefacts.  No change in other
> configurations.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I'd prefer to keep using "artifacts" rather than "artefacts", as the former is what GitLab uses
and what we use in build/test.yaml.

Apart from that:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
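
[A minimal sketch of the approach this patch describes: preserving artefacts based on the
make rune actually run, rather than guessing from other settings. Function names and paths
are illustrative, not the real automation/scripts/build logic.]

```shell
# Hypothetical sketch: map the make target we actually invoked to the
# artefacts worth preserving, instead of inferring the rune from other
# settings after the fact.
collect_for_target() {
    case "$1" in
        xen)   echo "preserve binaries/xen" ;;         # hypervisor-only build
        tools) echo "preserve tools install tree" ;;   # e.g. Jessie-32: tools but not Xen
        dist)  echo "preserve binaries/xen"            # full build keeps both,
               echo "preserve tools install tree" ;;   # so ARM qemu smoke tests
                                                       # still find binaries/xen
    esac
}

collect_for_target dist
```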


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 14:01:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 14:01:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470326.729801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCLNs-00079n-TB; Mon, 02 Jan 2023 14:01:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470326.729801; Mon, 02 Jan 2023 14:01:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCLNs-00079g-P9; Mon, 02 Jan 2023 14:01:48 +0000
Received: by outflank-mailman (input) for mailman id 470326;
 Mon, 02 Jan 2023 14:01:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W7CY=47=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pCLNr-00079J-Ol
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 14:01:47 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on2067.outbound.protection.outlook.com [40.107.212.67])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fd722ad5-8aa5-11ed-b8d0-410ff93cb8f0;
 Mon, 02 Jan 2023 15:01:45 +0100 (CET)
Received: from DM6PR06CA0042.namprd06.prod.outlook.com (2603:10b6:5:54::19) by
 MW4PR12MB7381.namprd12.prod.outlook.com (2603:10b6:303:219::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Mon, 2 Jan
 2023 14:01:42 +0000
Received: from DM6NAM11FT110.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:54:cafe::cb) by DM6PR06CA0042.outlook.office365.com
 (2603:10b6:5:54::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5966.19 via Frontend
 Transport; Mon, 2 Jan 2023 14:01:42 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT110.mail.protection.outlook.com (10.13.173.205) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5966.18 via Frontend Transport; Mon, 2 Jan 2023 14:01:42 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 2 Jan
 2023 08:01:41 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 2 Jan
 2023 06:01:41 -0800
Received: from [10.71.193.33] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 2 Jan 2023 08:01:40 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd722ad5-8aa5-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MyniDtl3pJnC1SFpkJ6hJ3gzbM4/GRo3EDrmn4XlQsIO7oWFuO3U5kBfmAoJN1GzOwPsDdM6OEbskWgUWF/9sfekiaWMW//a47gN72ErKyflFJnHy+g0cFeDkMzD1fLsP6Lnofk+Aa0jLiJjza/rIL5kxvAdTVa9QvsbBEC5lg8PPrJf/mD6eK0Lt3nXWOnhdv/MIJsE8WOdfI5krliveaej+UDSXjkvWYm1CIvUkknPqqPZL+dPGDpbeJjyI0zEnGJ6ekzyijXlXzpy8dLZNnYC+q0uuIH5QMrQmgqCmWtpInl4M/ULxRvlQGK15xwdPVNTtH9+z5X5zm0HgHu0yg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=J0gEafiirFF0bZlDHW4qHs/6kFKGTcRmz9ZoeGuR6fE=;
 b=HGIxDXOcKmGOTgXbBnKsZl9+5bNaYWwMwZ+++kD3HORvBYc1+zfrGqYIg0zKHInjy2bHyQKLctie5oFWnJHkvxqpyw2yEUVNo1mSDyZ/bnU7ez9BVBNmPuvuBMP4euVS2PTvr2D+hPje80I5Aek/pGoIleqtNSP+8RG510EFr94FuRh1q6guJ33t00INVk15726zxwgszBvqJq8S2SsaC4BnU0YmZUaGZDhc18GVbLGGJSsVkV8U/xp2iCurbOTrUv6gl9vb6h8W6LvUlGj9ssBPAojYNt9dVe2jdiqyJ4w09OG4bbry0D6CtxR09RH9Mr12A2gvhFG4jo6PO/e4dQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=citrix.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=J0gEafiirFF0bZlDHW4qHs/6kFKGTcRmz9ZoeGuR6fE=;
 b=04uLRQnN+fyJ9UKHMWMV31P0rG8iZY3ta/83hmFsKwygewmtK0f/cfEt++l9ghfvDdIs3Fz5N+4HDq6z+rs9MCwOwR8HEW+45HpyAooGN7R+rJcI/hVs+Rigdsb5aOmhOqIYYV5QaSgzPz9Dn9ddQ93QktMIKPBEk22vE2cFEro=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <a355516f-1104-8285-9810-ea5de35e817d@amd.com>
Date: Mon, 2 Jan 2023 15:01:39 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 3/6] CI: Only calculate ./configure args if needed
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony.perard@citrix.com>,
	"Oleksii Kurochko" <oleksii.kurochko@gmail.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-4-andrew.cooper3@citrix.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221230003848.3241-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT110:EE_|MW4PR12MB7381:EE_
X-MS-Office365-Filtering-Correlation-Id: 89031ae5-c623-4ab0-a82b-08daecc9e03d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qjPdYPpf47sX6xAl/KihMxpYgUo6xgiBzzmT9EJfxKyj0OQGJDDgFfkicoyJa6QKv7ppGijmftUhBTMJn7RGC6WPHrEpIGUUKmXLF50TFkDwKBXTOePdMXcK1QXohfleQSH8V+GmKeYwfglMWCu1GTeKvTYn+jTUiLQSFMK3waJQwLSE+7ishIura/n2iD+FYrzMhkLMbRzWCGjhe1cWzZxM7QauOHOSBMT+4a8i+yrdmCgJ6sBGpeJYcJCvHoEW17D3EOv7x8pO4bUawpK6Z2cvLq0lBftlSnDMRJgNlRv26XjCrp7Ldi29q2pxiH4RbEcLruCjhW3tZsLqtOLfVlKfRhLfzyggXj8BwfDtPBP+JSfjnCvnc+C6YD8FmoQBCpjPv02203oc94algOD3SfGHAj2VvAFgVi7YE2ISIsV7m9DL0M7TnqHGjbIcVB+uUQvzvWUXRUKFL8PpH8xITdDAjwtL+/6qUeOSZcHAVnf9xahMjdKqkspw8VfCEJMQRei2GfCMqHjGqX4xF2WXuCR7FimrUwNHWCKJm3GXGXROleOPPXy3/8rA5RuOeEnEaHL5rtDnC7+85InnsMiLGPJsa5CySJzfKIKJ6tjcC7jdLCkpO/CZIanyFufkH4db8Mxxgdm3eQ3OLKlWvAr5qUNgCyJwkBi/L2kv2kIQ/ZlJyAT8VNRCzGD5xjRYgnQvJDr9/TGNhuytUsRKg2Sj0nZQxPvdelSiZuLYiFB795dozb/OAxlP6GGLrlb7CI5deVgWwb16CrCO100Z0kll0/moH50L74+6L6E0HLErBno=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(396003)(376002)(39860400002)(136003)(451199015)(36840700001)(46966006)(40470700004)(81166007)(356005)(82740400003)(40480700001)(558084003)(40460700003)(36756003)(82310400005)(86362001)(31696002)(53546011)(36860700001)(70586007)(70206006)(316002)(2616005)(478600001)(26005)(186003)(16576012)(31686004)(54906003)(110136005)(8676002)(2906002)(47076005)(41300700001)(426003)(336012)(8936002)(4326008)(44832011)(5660300002)(22166006)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jan 2023 14:01:42.0284
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 89031ae5-c623-4ab0-a82b-08daecc9e03d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT110.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7381

Hi Andrew,

On 30/12/2022 01:38, Andrew Cooper wrote:
> 
> 
> This is purely code motion of the cfgargs construction, into the case where it
> is used.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 14:05:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 14:05:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470339.729811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCLRQ-000838-Ce; Mon, 02 Jan 2023 14:05:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470339.729811; Mon, 02 Jan 2023 14:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCLRQ-000831-9u; Mon, 02 Jan 2023 14:05:28 +0000
Received: by outflank-mailman (input) for mailman id 470339;
 Mon, 02 Jan 2023 14:05:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W7CY=47=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pCLRO-00082q-Th
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 14:05:26 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2076.outbound.protection.outlook.com [40.107.243.76])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 809d5f19-8aa6-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 15:05:25 +0100 (CET)
Received: from BN9PR03CA0197.namprd03.prod.outlook.com (2603:10b6:408:f9::22)
 by DM4PR12MB6566.namprd12.prod.outlook.com (2603:10b6:8:8d::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Mon, 2 Jan
 2023 14:05:21 +0000
Received: from BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f9:cafe::b) by BN9PR03CA0197.outlook.office365.com
 (2603:10b6:408:f9::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5966.19 via Frontend
 Transport; Mon, 2 Jan 2023 14:05:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT017.mail.protection.outlook.com (10.13.177.93) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5966.18 via Frontend Transport; Mon, 2 Jan 2023 14:05:20 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 2 Jan
 2023 08:05:20 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 2 Jan
 2023 06:05:19 -0800
Received: from [10.71.193.33] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 2 Jan 2023 08:05:18 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 809d5f19-8aa6-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aQfKUCh69TDJtBzRyezYuC43lVhYInQ2xv0WwQo8su7h8EnGkZgzfUXuxZi0lONAQALMigNU8LJ6ZoKd2BHQ/6a6vEtaj6fPerKBOyDU+KhfosLjD6Ev4laHQsdShYlYyGaExUTEm2Hfw9SA/YB1VeYzYPLAHY9yu/bPhOnL3MEgdPUobmr48nuYmQihAbG6E49+hzhAiGV2NHOIhBs1OfrqPESD+BSHmXq1gUz5CdSuUEV+TQ6eNA3X8TUiVn601R8IFSV4gfFV6VvYJlgVPtXst74kvFimBFHc22yI/uA6wmyz1n3CkztSscuGDjabQQ5E/++cWvs3sK4TdZHIew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Yv2nbBORmNi9jMe76KzemtB5/sficMOEg/4qBqBfQog=;
 b=X3b9O2onx+PLmuyOjXNf4hYz+LSJZlVTGFJlzQ2q+rMkCzzMtFVktS5BRHJ8CA2dQMLihWTIUG85Jrvw17a2C9wZsqJZxFRvsRbA2D/lEMMmcmqibd9PtbmYkr3OkMzmZUfX8dF/5xU5L2KzD3Pdl1fHhRu8KhEGEyNhp/Q+RNcgWemhnCGpFXgnfKeSVPOj75XRogZEizdt4BfFkIAuemnqk147y++rcONzvwD6la7b1t6hVM6WwYEcxty+vdLc+3vYkRaOXSdOVJRWx+TjeQrTLM+mxsaQiLL/cXN4i7kSERDBOOEsQUFg6TLm8w123ZCUOxW6Sac4rQcRQKAFYw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=citrix.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Yv2nbBORmNi9jMe76KzemtB5/sficMOEg/4qBqBfQog=;
 b=voQ6nPNErpryNN+qY4HZWb5EPMkYONhd4ovhhC5aLAJQ0xZRPBnLOqtoPaJCZsNAygmX/UjHfaDu8lqfUv9bVAlEKnwTKjJtxF0XWpuHnLCs1KCaUbFMpJeQD3c/kc91mI5ndfO9dlDrRC2RNrEbuMV5oaYNZeu6TDV0XpZmKvk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <4caae81c-84a3-9925-2e9d-0f256feaa6c4@amd.com>
Date: Mon, 2 Jan 2023 15:05:18 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony.perard@citrix.com>,
	"Oleksii Kurochko" <oleksii.kurochko@gmail.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-5-andrew.cooper3@citrix.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221230003848.3241-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT017:EE_|DM4PR12MB6566:EE_
X-MS-Office365-Filtering-Correlation-Id: 3bfa60f8-76b9-4f1d-36d3-08daecca6278
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	M45WnpRSehcSRgLauqT+P57rmVJ64fDbUaOKsKAMzlv+8oL/ovUmyGr/0sDimdYRpPQOLE2qfa7SrH7xGieAM5956g+Lj36pPcAzdTfoOAN9Ub3g+nDpY2nyqRflMb1CfHd3/ENXM9jNxN07XBE7ReH3F+eiFaE/ITmKJtb0IFtBaQ5psbxLcf8PsfP5jfJnPAW/5kf6Zefuy9uEYm+NEkaJNW4/YXnSDi5z8+ZEWNT+JQYhEt2GpWdnNeb638ZcSx14BxwQS3u/Xx5proxTHVeAXJvVl4JdkZ8SVOwg/U5BZNVpwyxdIYvrVLVi4NcI+jLeTkjIXht6LvNicd1QbyAuKj2y+c+fLJkmDnZCg/VdinSvBtYtSCwsRO8KNOGZXIISMkAfFkVzBR+ErKAQ8Dvn5tkeBZI3NhPFVEpwRog9x9kgOGXZpjKjQvF1hV/hFeZessNf5Jk6Bh+9Pcl4mNNlXK57/y6gswvsa3OODFrMc8kK8Q7pFpf0ZaysR+hDkFyxEVu1Mz3/eqCt+TiyBsO9sPOihHbK/G9ewQhjqEiDtT7v1h7hHzQohITllym4J7DQSgs7iMdnLVymmU8bSU10ZJmccJIrjRd2+/8F+2whuwEKBRMG4/rI7dKuT1oBSJoY6UMqbVAqoxsh8hN5g+Qp+mEopG9oAbb0E4//7oAmOrrsGdBOkSwQdwYys7YoU2DXZm6DgVZ4INAa8qmu/IEYrFIadJDxDAZFlCJN4qhGDXqlNefKMRZA0cZVtONpEr8Qa7rB/wgz99BaWvIa43eC2inZNqKRqf08bqkneoo=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(346002)(376002)(39860400002)(136003)(451199015)(46966006)(36840700001)(40470700004)(81166007)(356005)(82740400003)(40480700001)(40460700003)(36756003)(82310400005)(86362001)(31696002)(53546011)(16576012)(316002)(36860700001)(70586007)(70206006)(2616005)(478600001)(26005)(186003)(31686004)(54906003)(110136005)(2906002)(47076005)(41300700001)(426003)(336012)(8676002)(8936002)(4326008)(44832011)(4744005)(5660300002)(22166006)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jan 2023 14:05:20.5472
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3bfa60f8-76b9-4f1d-36d3-08daecca6278
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6566

Hi Andrew,

On 30/12/2022 01:38, Andrew Cooper wrote:
> 
> 
> Whether to build only Xen, or everything, is a property of container,
> toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
> 
> Capitalise HYPERVISOR_ONLY and have it set by the debian-unstable-gcc-arm32-*
> testcases at the point that arm32 get matched with a container that can only
> build Xen.
> 
> For simplicity, retain the RANDCONFIG -> HYPERVISOR_ONLY implication.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

With regards to Oleksii's comment, I do not think we should add HYPERVISOR_ONLY explicitly
for the randconfig debian-unstable-gcc-arm32-* jobs. Doing so would require a similar change
for all the randconfig jobs to be consistent, and that is not needed.

~Michal
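
[A minimal sketch of the semantics discussed above: HYPERVISOR_ONLY as a property of the
testcase rather than of XEN_TARGET_ARCH, with the RANDCONFIG -> HYPERVISOR_ONLY implication
retained. The function and rune names are illustrative, not the actual CI script.]

```shell
# Hypothetical sketch: choose the build rune from the testcase's
# HYPERVISOR_ONLY setting, with RANDCONFIG still implying hypervisor-only.
build_rune() {
    hyp="$1"                  # HYPERVISOR_ONLY, set per-testcase (y/n)
    [ "$2" = "y" ] && hyp=y   # RANDCONFIG -> HYPERVISOR_ONLY implication
    if [ "$hyp" = "y" ]; then
        echo "make xen"       # e.g. arm32 container that can only build Xen
    else
        echo "make dist"      # full build: hypervisor and tools
    fi
}

build_rune n n
```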



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 14:49:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 14:49:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470348.729823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCM7n-0003vb-OS; Mon, 02 Jan 2023 14:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470348.729823; Mon, 02 Jan 2023 14:49:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCM7n-0003vU-Lo; Mon, 02 Jan 2023 14:49:15 +0000
Received: by outflank-mailman (input) for mailman id 470348;
 Mon, 02 Jan 2023 14:49:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W7CY=47=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pCM7m-0003vO-7F
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 14:49:14 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on2044.outbound.protection.outlook.com [40.107.102.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e3e6c39-8aac-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 15:49:12 +0100 (CET)
Received: from DM6PR21CA0017.namprd21.prod.outlook.com (2603:10b6:5:174::27)
 by IA0PR12MB7675.namprd12.prod.outlook.com (2603:10b6:208:433::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Mon, 2 Jan
 2023 14:49:09 +0000
Received: from DM6NAM11FT012.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:174:cafe::b8) by DM6PR21CA0017.outlook.office365.com
 (2603:10b6:5:174::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.2 via Frontend
 Transport; Mon, 2 Jan 2023 14:49:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT012.mail.protection.outlook.com (10.13.173.109) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5966.18 via Frontend Transport; Mon, 2 Jan 2023 14:49:08 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 2 Jan
 2023 08:49:07 -0600
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Mon, 2 Jan 2023 08:49:06 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e3e6c39-8aac-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RuB2gXUQZGR7Cv/1+KZRvB4PM22ZXzMuLPHkpRTtoc5JWCeU0B+zTQskVzFB2k5KSp7H497c6KHFG8M944pPDrxZvlV8yGDCkfVUMREGfP2RTvXBF3PkHdPwMFOwHf/fKNyPYNGfd/jU68TaULwP+Yv9GLbtuh4gdnXCTwNHmYJz3IlplOA1pD1c77X9ob/NHDAuAxeoAwAyBeh3LVVdF3CIGr1VevnE0KxUBLgvsiZ/yriA766pnfuKowfzKdeJiteXVcGw1hnwtmpyTJ+kKtI8b6+5xduKacm3T/IVqdF1fZjeW4Uhty+kDQlOV+2Bzc3FgVSX4SpDrnyVcq8rug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=efh/Yhzkae+fINf3IfQzkBHj+8mOBrO76E4WEVCwp8Y=;
 b=loi+jwbdNCnhgN+rYjVn77BF9bDZluXY+SAKWKn3HY77Vzbhtdnh6f5eA9ZYr572X1is1DEtOGS1e3OqQAtQN7C4Faxd9mDdwIA79G3G7zWqKl0WZKhEgpqknlX2yqnjjpqYxHumYjOb+UHIodxcQep5vDM7Z8ZaT6hunSweEQlKzcbvu0wXZ5ENM6RACwqo8TWSotqZkXdDooZPm3/giibKxru28r+f2ORx8VtoLVn/B+awiaPCzSwPmsTKfVu5eh7kVk9hXMcD8xKLK2uqQaIOXDWImuQ+lr0cA8Dqb1QKUSe8DhQ0JVD2rITkDJm1f2YbPTf5/Mv+7uvOIu7LMw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=efh/Yhzkae+fINf3IfQzkBHj+8mOBrO76E4WEVCwp8Y=;
 b=NkMUBsfU67o6GM97paS6SzxTEnz8Ucj4w95d3oU3dAyIY3CgFukot29ICnrIw5xvl/7JEw/4hhQTj8gnq/I7TiiqG//42HT9GB48NoBbFQ/MulnQG7DBDDbrSrVLcDvvapLWGSPJrK4d49QpgPiqfbwkFERZv9l27KIGJwQR0lM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: Print memory size in decimal in construct_domU
Date: Mon, 2 Jan 2023 15:49:04 +0100
Message-ID: <20230102144904.17619-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT012:EE_|IA0PR12MB7675:EE_
X-MS-Office365-Filtering-Correlation-Id: 21fd9655-9e8f-47f6-6819-08daecd080ed
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MtQBb6uID8MpFXOcmFSRs3kXSzkoOOILoj2Cc/V85xcsWhvH1biTc88zAV52DfOQOo8rBVpINlhTnstUPQYUJkcpGhRo8qNDor4MIwPM4KSyF51KYb3jgSNRJZcFlq98OXnpw/Z46k/FOpHpPmScQsWnbtthdbTE0rVg8rpRyLXuw+0ANO744GAz0PnbaldsnNhqIDZIp18K6pb0HquaDk+iTD9XkzYMULWc/uFWFXnAJXyHiQ0EDqodo442gclSC6Cad8VzWjSw7VySmefJcXRwjEkf8p+gCMmOGqj0RgHSSBH8m6BGef8bJahiisbsQSxJ/tWKan8yiHIDfXiDmpxPg9NhgWSun/uzm1RpXpHj11RiRIZZ1DKtHkAS7fLl99KID7A7sJ4LiMM9wxDuSXJ53wQeAqxMZeUh6EP1IJArbGxqdUtIeTc295w28aAT8VIjzaShTBtSvCuMIT2uY4zYhtkhIzO66V6czh/AZl48+OybY2FDhiiGso+DjKK6gSIG9Q/pfzbuLqZKiiiFKqdxZtvK+fnEoNRqKd2OtxAOToDMGKvbYbS3s7tzaTkjBSXXgV8b5s+T+tQsbcA96mKueVLVa2f3iikSd/MiY5lIJXcspaZP+Q1VDz5E3Oc8cBng7XQDzPsGjsIQFP2r9DsYEPpae7AnCTCU/smpkhzgAit1Bsi343D2bXNGK7YSuitLI5oTKpMQaewlrzLyS8IpAozFEOwwdRY2YYcTFb50aKXy4XxaoGUqdcs1RB4ZXXYciJ+ZkrqseYFYmXkTdA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(39860400002)(376002)(346002)(396003)(451199015)(40470700004)(36840700001)(46966006)(54906003)(26005)(186003)(2616005)(70586007)(36860700001)(1076003)(316002)(70206006)(478600001)(8676002)(8936002)(4326008)(336012)(426003)(47076005)(41300700001)(83380400001)(5660300002)(4744005)(44832011)(2906002)(82740400003)(40480700001)(356005)(81166007)(86362001)(6916009)(36756003)(40460700003)(82310400005)(22166006)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jan 2023 14:49:08.5960
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 21fd9655-9e8f-47f6-6819-08daecd080ed
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT012.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB7675

Printing the domain's memory size in hex without even prepending it
with 0x is not very useful and can be misleading. Switch to decimal
notation.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/domain_build.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 829cea8de84f..7e204372368c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
     if ( rc != 0 )
         return rc;
 
-    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
+    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 15:09:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 15:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470355.729834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCMRX-0006KM-Fo; Mon, 02 Jan 2023 15:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470355.729834; Mon, 02 Jan 2023 15:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCMRX-0006KF-Bq; Mon, 02 Jan 2023 15:09:39 +0000
Received: by outflank-mailman (input) for mailman id 470355;
 Mon, 02 Jan 2023 15:09:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCMRV-0006K5-Nh; Mon, 02 Jan 2023 15:09:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCMRV-0008Qr-Hz; Mon, 02 Jan 2023 15:09:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCMRV-00009x-1X; Mon, 02 Jan 2023 15:09:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCMRV-0004dk-11; Mon, 02 Jan 2023 15:09:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LrcgIju8gsvzBsMrfYfpmnFb4Ao9ySubas6Vb31qVN8=; b=MOGenCN+3j5/B6DXs0mTkIA0T+
	1S8SUDUrqrJz9BBcAQGLaFRVkCBj1WQ+4cYLHrQuzSqVjkhgBmalELThJnFeK7xAx1ECYpF+DPdUk
	kqopsFY2kdvmiG2X8qJxjUREu2aYmFUfEu0Y6Znn0p9/rM//iGcnW5+6lHFMwUgoGf8Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175549-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175549: FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Jan 2023 15:09:37 +0000

flight 175549 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175549/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 175407
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 175407
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 175407
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 175407
 test-armhf-armhf-xl-credit1     <job status>                 broken  in 175547

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 175407 pass in 175549
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 175407 pass in 175549
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 175407 pass in 175549
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 175407 pass in 175549
 test-armhf-armhf-xl-credit1  5 host-install(5) broken in 175547 pass in 175549
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 175407 pass in 175549
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175545 pass in 175549
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 175545 pass in 175549
 test-amd64-i386-xl-vhd       22 guest-start.2    fail in 175547 pass in 175545
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail in 175547 pass in 175549
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat  fail pass in 175407
 test-amd64-i386-libvirt-raw   7 xen-install                fail pass in 175547
 test-armhf-armhf-xl-credit2  14 guest-start                fail pass in 175547
 test-armhf-armhf-xl          18 guest-start/debian.repeat  fail pass in 175547
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 175547

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175407 like 175197
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 175547 like 175197
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 175547 like 175197
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 175547 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 175547 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 175547 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 175547 n/a

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   19 days
Testing same since   175407  2022-12-19 11:42:26 Z   14 days   34 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-amd64-xl-credit2 broken

Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 16:36:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 16:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470366.729845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCNnR-0007CU-Pr; Mon, 02 Jan 2023 16:36:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470366.729845; Mon, 02 Jan 2023 16:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCNnR-0007CN-NB; Mon, 02 Jan 2023 16:36:21 +0000
Received: by outflank-mailman (input) for mailman id 470366;
 Mon, 02 Jan 2023 16:36:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oSDj=47=amd.com=Christian.Koenig@srs-se1.protection.inumbo.net>)
 id 1pCNnR-0007CH-01
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 16:36:21 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on2041.outbound.protection.outlook.com [40.107.236.41])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 940d627b-8abb-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 17:36:17 +0100 (CET)
Received: from BN8PR12MB3587.namprd12.prod.outlook.com (2603:10b6:408:43::13)
 by CH0PR12MB5204.namprd12.prod.outlook.com (2603:10b6:610:bb::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Mon, 2 Jan
 2023 16:36:14 +0000
Received: from BN8PR12MB3587.namprd12.prod.outlook.com
 ([fe80::80d8:934f:caa7:67b0]) by BN8PR12MB3587.namprd12.prod.outlook.com
 ([fe80::80d8:934f:caa7:67b0%3]) with mapi id 15.20.5944.019; Mon, 2 Jan 2023
 16:36:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 940d627b-8abb-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
From: "Koenig, Christian" <Christian.Koenig@amd.com>
To: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>, "jgross@suse.com"
	<jgross@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, Sumit Semwal <sumit.semwal@linaro.org>,
	"linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
	"dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>,
	"linaro-mm-sig@lists.linaro.org" <linaro-mm-sig@lists.linaro.org>
Subject: Re: [PATCH v1 0/3] Add ioctls to map grant refs on the external
 backing storage
Thread-Topic: [PATCH v1 0/3] Add ioctls to map grant refs on the external
 backing storage
Thread-Index: AQHZHq/2qxwFjlIdpkaeJi9xnPE21a6LUk6x
Date: Mon, 2 Jan 2023 16:36:13 +0000
Message-ID:
 <BN8PR12MB3587673F8EF2642D267E943D83F79@BN8PR12MB3587.namprd12.prod.outlook.com>
References: <cover.1672666311.git.oleksii_moisieiev@epam.com>
In-Reply-To: <cover.1672666311.git.oleksii_moisieiev@epam.com>
Accept-Language: de-DE, en-US
Content-Language: de-DE
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BN8PR12MB3587:EE_|CH0PR12MB5204:EE_
x-ms-office365-filtering-correlation-id: d8e81b28-214b-4d4f-fe6b-08daecdf7696
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: multipart/alternative;
	boundary="_000_BN8PR12MB3587673F8EF2642D267E943D83F79BN8PR12MB3587namp_"
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN8PR12MB3587.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d8e81b28-214b-4d4f-fe6b-08daecdf7696
X-MS-Exchange-CrossTenant-originalarrivaltime: 02 Jan 2023 16:36:13.7503
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: idFrnxasVVJCk0pFQ8h95HH9zVeB9lUNVXY57oBtij/oJpQd8jBH9GAc94odtIIr
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR12MB5204

--_000_BN8PR12MB3587673F8EF2642D267E943D83F79BN8PR12MB3587namp_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

[AMD Official Use Only - General]

Sorry for the messed up mail. We currently have mail problems here at AMD.

________________________________
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
Sent: Monday, 2 January 2023 14:41
To: jgross@suse.com <jgross@suse.com>
Cc: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>; Stefano Stabellini <sstabellini@kernel.org>; Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>; xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>; linux-kernel@vger.kernel.org <linux-kernel@vger.kernel.org>; Sumit Semwal <sumit.semwal@linaro.org>; Koenig, Christian <Christian.Koenig@amd.com>; linux-media@vger.kernel.org <linux-media@vger.kernel.org>; dri-devel@lists.freedesktop.org <dri-devel@lists.freedesktop.org>; linaro-mm-sig@lists.linaro.org <linaro-mm-sig@lists.linaro.org>
Subject: [PATCH v1 0/3] Add ioctls to map grant refs on the external backing storage

Hello,

Let me introduce new ioctls that allow gntdev to map a scatter-gather
table on top of an existing dma-buf, referenced by a file descriptor.

When using the dma-buf exporter to create a dma-buf with backing storage
and map it to grant refs provided by the domain, we ran into the problem
that some hardware (the i.MX8 GPU in our case) does not support external
buffers and requires the backing storage to be created with its native
tools. That is why the new ioctls accept an existing dma-buf fd as an
input parameter and use it as the backing storage to export to the refs.

This is a pretty big NAK from my side to this approach.

If you need to replace a file descriptor number local to your process then you can simply use dup2() from userspace.

If your intention here is to replace the backing store of the fd on all processes which currently have it open then please just completely forget that. This will *NEVER* ever work correctly.

Regards,
Christian.

The following calls were added:
IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF - map an existing buffer as the backing
storage and export it to the provided grant refs;
IOCTL_GNTDEV_DMABUF_MAP_RELEASE - detach the buffer from the grant table
and set up a notification to unmap the grant refs before the external
buffer is released. After this call the external buffer should be
destroyed;
IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED - wait, up to a timeout, until the
buffer is completely destroyed and the grant refs are unmapped, so that
the domain can free the grant pages. Should be called after the buffer
has been destroyed.

Our setup is based on an i.MX8QM board. We're trying to implement
zero-copy support for DomU graphics using a Wayland
zwp_linux_dmabuf_v1_interface implementation.

For the dma-buf exporter we used the i.MX8 GPU native tools to create the
backing storage for grant refs received from DomU. The buffer for the
backing storage was allocated using a gbm_bo_create call, because the GPU
does not support external buffers and requires the backing storage to be
created using its native tools (eglCreateImageKHR returns EGL_NO_IMAGE_KHR
for buffers that were not created using gbm_bo_create).

This behaviour was also tested on a Qemu setup, using the
DRM_IOCTL_MODE_CREATE_DUMB call to create the backing storage buffer.

---
Oleksii Moisieiev (3):
  xen/grant-table: save page_count on map and use if during async
    unmapping
  dma-buf: add dma buffer release notifier callback
  xen/grant-table: add new ioctls to map dmabuf to existing fd

 drivers/dma-buf/dma-buf.c   |  44 ++++
 drivers/xen/gntdev-common.h |   8 +-
 drivers/xen/gntdev-dmabuf.c | 416 +++++++++++++++++++++++++++++++++++-
 drivers/xen/gntdev-dmabuf.h |   7 +
 drivers/xen/gntdev.c        | 101 ++++++++-
 drivers/xen/grant-table.c   |  73 +++++--
 include/linux/dma-buf.h     |  15 ++
 include/uapi/xen/gntdev.h   |  62 ++++++
 include/xen/grant_table.h   |   8 +
 9 files changed, 703 insertions(+), 31 deletions(-)

--
2.25.1

--_000_BN8PR12MB3587673F8EF2642D267E943D83F79BN8PR12MB3587namp_--


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 17:47:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 17:47:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470373.729856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCOtq-0005iK-UH; Mon, 02 Jan 2023 17:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470373.729856; Mon, 02 Jan 2023 17:47:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCOtq-0005iD-Pb; Mon, 02 Jan 2023 17:47:02 +0000
Received: by outflank-mailman (input) for mailman id 470373;
 Mon, 02 Jan 2023 17:47:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wEdB=47=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pCOtp-0005i7-K5
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 17:47:02 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 73bfb719-8ac5-11ed-b8d0-410ff93cb8f0;
 Mon, 02 Jan 2023 18:46:58 +0100 (CET)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-563-STQ25Pb-PuC45ur2pQwQaQ-1; Mon, 02 Jan 2023 12:46:55 -0500
Received: by mail-wm1-f71.google.com with SMTP id
 m38-20020a05600c3b2600b003d1fc5f1f80so18795768wms.1
 for <xen-devel@lists.xenproject.org>; Mon, 02 Jan 2023 09:46:55 -0800 (PST)
Received: from redhat.com ([2.52.151.85]) by smtp.gmail.com with ESMTPSA id
 bo19-20020a056000069300b00294176c2c01sm7280930wrb.86.2023.01.02.09.46.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 02 Jan 2023 09:46:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73bfb719-8ac5-11ed-b8d0-410ff93cb8f0
X-MC-Unique: STQ25Pb-PuC45ur2pQwQaQ-1
X-Received: by 2002:adf:9dd1:0:b0:242:165c:95ed with SMTP id q17-20020adf9dd1000000b00242165c95edmr25290017wre.48.1672681614740;
        Mon, 02 Jan 2023 09:46:54 -0800 (PST)
X-Google-Smtp-Source: AMrXdXt0p5m4ylYbBESCUKrBIMR1ZDEWRey8pWymXQEuhRcdaKS8YrAUzdmCxicDkxYUA8Ea68YynA==
X-Received: by 2002:adf:9dd1:0:b0:242:165c:95ed with SMTP id q17-20020adf9dd1000000b00242165c95edmr25290000wre.48.1672681614461;
        Mon, 02 Jan 2023 09:46:54 -0800 (PST)
Date: Mon, 2 Jan 2023 12:46:49 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org, alex.williamson@redhat.com
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230102124605-mutt-send-email-mst@kernel.org>
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
MIME-Version: 1.0
In-Reply-To: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workaround is not good: Configure Xen HVM guests to use
> the old and no longer maintained Qemu traditional device model available
> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values which enables the checks for
> the Intel IGD to succeed. The verification that the host device is an
> Intel IGD to be passed through is done by checking the domain, bus, slot,
> and function values, as well as by checking that gfx_passthru is enabled,
> the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>

I'm not sure why the issue is Xen-specific. Can you explain?
Doesn't it affect KVM too?

> ---
> Notes that might be helpful to reviewers of patched code in hw/xen:
> 
> The new functions and types are based on recommendations from Qemu docs:
> https://qemu.readthedocs.io/en/latest/devel/qom.html
> 
> Notes that might be helpful to reviewers of patched code in hw/i386:
> 
> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> not affect builds that do not have CONFIG_XEN defined.
> 
> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> existing function that is only true when Qemu is built with
> xen-pci-passthrough enabled and the administrator has configured the Xen
> HVM guest with Qemu's igd-passthru=on option.
> 
> v2: Remove From: <email address> tag at top of commit message
> 
> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> 
>     is changed to
> 
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     I hoped that I could use the test in v2, since it matches the
>     other tests for the Intel IGD in Qemu and Xen, but those tests
>     do not work because the necessary data structures are not set with
>     their values yet. So instead use the test that the administrator
>     has enabled gfx_passthru and the device address on the host is
>     02.0. This test does detect the Intel IGD correctly.
> 
> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>     email address to match the address used by the same author in commits
>     be9c61da and c0e86b76
>     
>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> 
> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>     for the Intel IGD that uses the same criteria as in other places.
>     This involved moving the call to xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>     Intel IGD in xen_igd_clear_slot:
>     
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     is changed to
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
>     Added an explanation for the move of xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot to the commit message.
> 
>     Rebase.
> 
> v6: Fix logging by removing these lines from the move from xen_pt_realize
>     to xen_igd_clear_slot that was done in v5:
> 
>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>                " to devfn 0x%x\n",
>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                s->dev.devfn);
> 
>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>     set yet in xen_igd_clear_slot.
> 
>     Sorry for the extra noise.
> 
>  hw/i386/pc_piix.c    |  3 +++
>  hw/xen/xen_pt.c      | 46 +++++++++++++++++++++++++++++++++++---------
>  hw/xen/xen_pt.h      | 16 +++++++++++++++
>  hw/xen/xen_pt_stub.c |  4 ++++
>  4 files changed, 60 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index b48047f50c..bc5efa4f59 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>      }
>  
>      pc_xen_hvm_init_pci(machine);
> +    if (xen_igd_gfx_pt_enabled()) {
> +        xen_igd_reserve_slot(pcms->bus);
> +    }
>      pci_create_simple(pcms->bus, -1, "xen-platform");
>  }
>  #endif
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 0ec7e52183..7fae1e7a6f 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                 s->dev.devfn);
>  
> -    xen_host_pci_device_get(&s->real_device,
> -                            s->hostaddr.domain, s->hostaddr.bus,
> -                            s->hostaddr.slot, s->hostaddr.function,
> -                            errp);
> -    if (*errp) {
> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> -        return;
> -    }
> -
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> @@ -950,11 +941,47 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>  }
>  
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> +}
> +
> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> +{
> +    ERRP_GUARD();
> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> +
> +    xen_host_pci_device_get(&s->real_device,
> +                            s->hostaddr.domain, s->hostaddr.bus,
> +                            s->hostaddr.slot, s->hostaddr.function,
> +                            errp);
> +    if (*errp) {
> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> +        return;
> +    }
> +
> +    if (is_igd_vga_passthrough(&s->real_device) &&
> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> +    }
> +    xpdc->pci_qdev_realize(qdev, errp);
> +}
> +
>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>  
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> +    xpdc->pci_qdev_realize = dc->realize;
> +    dc->realize = xen_igd_clear_slot;
>      k->realize = xen_pt_realize;
>      k->exit = xen_pt_unregister_device;
>      k->config_read = xen_pt_pci_read_config;
> @@ -977,6 +1004,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>      .instance_size = sizeof(XenPCIPassthroughState),
>      .instance_finalize = xen_pci_passthrough_finalize,
>      .class_init = xen_pci_passthrough_class_init,
> +    .class_size = sizeof(XenPTDeviceClass),
>      .instance_init = xen_pci_passthrough_instance_init,
>      .interfaces = (InterfaceInfo[]) {
>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index e7c4316a7d..40b31b5263 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -3,6 +3,7 @@
>  
>  #include "hw/xen/xen_common.h"
>  #include "hw/pci/pci.h"
> +#include "hw/pci/pci_bus.h"
>  #include "xen-host-pci-device.h"
>  #include "qom/object.h"
>  
> @@ -41,7 +42,20 @@ typedef struct XenPTReg XenPTReg;
>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>  
> +#define XEN_PT_DEVICE_CLASS(klass) \
> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> +
> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> +
> +typedef struct XenPTDeviceClass {
> +    PCIDeviceClass parent_class;
> +    XenPTQdevRealize pci_qdev_realize;
> +} XenPTDeviceClass;
> +
>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>                                             XenHostPCIDevice *dev);
> @@ -76,6 +90,8 @@ typedef int (*xen_pt_conf_byte_read)
>  
>  #define XEN_PCI_INTEL_OPREGION 0xfc
>  
> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
> +
>  typedef enum {
>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> index 2d8cac8d54..5c108446a8 100644
> --- a/hw/xen/xen_pt_stub.c
> +++ b/hw/xen/xen_pt_stub.c
> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>          error_setg(errp, "Xen PCI passthrough support not built in");
>      }
>  }
> +
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +}
> -- 
> 2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 20:49:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 20:49:17 +0000
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: "Koenig, Christian" <Christian.Koenig@amd.com>,
 "jgross@suse.com" <jgross@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 Sumit Semwal <sumit.semwal@linaro.org>,
 "linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
 "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>,
 "linaro-mm-sig@lists.linaro.org" <linaro-mm-sig@lists.linaro.org>
Subject: Re: [PATCH v1 0/3] Add ioctls to map grant refs on the external
 backing storage
Date: Mon, 2 Jan 2023 20:48:44 +0000
Message-ID: <3066964b-511f-6314-bc8f-79e387712013@epam.com>
References: <cover.1672666311.git.oleksii_moisieiev@epam.com>
 <BN8PR12MB3587673F8EF2642D267E943D83F79@BN8PR12MB3587.namprd12.prod.outlook.com>
In-Reply-To: <BN8PR12MB3587673F8EF2642D267E943D83F79@BN8PR12MB3587.namprd12.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 02.01.23 18:36, Koenig, Christian wrote:

[AMD Official Use Only - General]

Sorry for the messed up mail. We currently have mail problems here at AMD.

________________________________
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
Sent: Monday, 2 January 2023 14:41
To: jgross@suse.com <jgross@suse.com>
Cc: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>; Stefano Stabellini
 <sstabellini@kernel.org>; Oleksandr Tyshchenko
 <Oleksandr_Tyshchenko@epam.com>; xen-devel@lists.xenproject.org;
 linux-kernel@vger.kernel.org; Sumit Semwal <sumit.semwal@linaro.org>;
 Koenig, Christian <Christian.Koenig@amd.com>; linux-media@vger.kernel.org;
 dri-devel@lists.freedesktop.org; linaro-mm-sig@lists.linaro.org
Subject: [PATCH v1 0/3] Add ioctls to map grant refs on the external
 backing storage

Hello,

Let me introduce the new ioctls, which are intended to allow gntdev to
map a scatter-gather table on top of an existing dmabuf, referenced by
a file descriptor.

When using a dma-buf exporter to create a dma-buf with backing storage
and map it to the grant refs provided from the domain, we met a problem:
some hardware (the i.MX8 GPU in our case) does not support external
buffers and requires the backing storage to be created using its native
tools. That's why new ioctls were added to make it possible to pass an
existing dma-buf fd as an input parameter and use it as the backing
storage to export to refs.

This is a pretty big NAK from my side to this approach.

If you need to replace a file descriptor number local to your process
then you can simply use dup2() from userspace.

If your intention here is to replace the backing store of the fd on all
processes which currently have it open then please just completely
forget that. This will *NEVER* ever work correctly.

Regards,
Christian.


Hello Christian,

Thank you for the quick response.

My goal is to provide a correct buffer for interfaces such as
zwp_linux_dmabuf_v1_interface, so the zero-copy feature will work. My
suggestion is to make it possible to use a specific buffer as the
backing storage, where the caller takes responsibility for the provided
buffer having the correct format.

In our case we are using these calls in the following way:

1) Get grefs from another domain (we're working on a virtualized system
   with different domains running as standalone VMs that share
   resources);

2) Create a buffer using gbm_bo_create and receive an fd (i.MX8 requires
   the EGL API to be used for the buffer allocation, or
   eglCreateImageKHR will return EGL_NO_IMAGE_KHR during param setting
   for zwp_linux_dmabuf);

3) Call IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF to map grefs using the fd as
   the backing storage;

4) Call zwp_linux_dmabuf_v1_interface and use the zero-copy feature with
   Wayland;

5) After the work is finished, call IOCTL_GNTDEV_DMABUF_MAP_RELEASE to
   unmap the grant refs;

6) Call gbm_bo_destroy to release the allocated BO object;

7) Call IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED to wait until the mapping
   is released completely (this includes releasing the grant refs on the
   domain side).

I've tested my changes on an IMX8QM board using gbm_bo_create to
allocate the buffer, and on a QEMU setup using
DRM_IOCTL_MODE_CREATE_DUMB for buffer allocation. I can provide the test
applications I've used for testing purposes.

Best regards,

Oleksii.


The following calls were added:
IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF - map an existing buffer as the
backing storage and export it to the provided grant refs;
IOCTL_GNTDEV_DMABUF_MAP_RELEASE - detach the buffer from the grant table
and set a notification to unmap the grant refs before releasing the
external buffer. After this call the external buffer should be
destroyed.
IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED - wait with a timeout until the
buffer is completely destroyed and the grant refs are unmapped so the
domain can free the grant pages. Should be called after the buffer was
destroyed.

Our setup is based on an IMX8QM board. We're trying to implement
zero-copy support for DomU graphics using a Wayland
zwp_linux_dmabuf_v1_interface implementation.

For the dma-buf exporter we used the i.MX8 GPU native tools to create
the backing storage for the grant refs received from DomU. The buffer
for the backing storage was allocated using a gbm_bo_create call because
the GPU does not support external buffers and requires the backing
storage to be created using its native tools (eglCreateImageKHR returns
EGL_NO_IMAGE_KHR for buffers which were not created using
gbm_bo_create).

This behaviour was also tested on a QEMU setup using a
DRM_IOCTL_MODE_CREATE_DUMB call to create the backing storage buffer.

---
Oleksii Moisieiev (3):
  xen/grant-table: save page_count on map and use it during async
    unmapping
  dma-buf: add dma buffer release notifier callback
  xen/grant-table: add new ioctls to map dmabuf to existing fd

 drivers/dma-buf/dma-buf.c   |  44 ++++
 drivers/xen/gntdev-common.h |   8 +-
 drivers/xen/gntdev-dmabuf.c | 416 ++++++++++++++++++++++++++++++++++++-
 drivers/xen/gntdev-dmabuf.h |   7 +
 drivers/xen/gntdev.c        | 101 ++++++++-
 drivers/xen/grant-table.c   |  73 +++++--
 include/linux/dma-buf.h     |  15 ++
 include/uapi/xen/gntdev.h   |  62 ++++++
 include/xen/grant_table.h   |   8 +
 9 files changed, 703 insertions(+), 31 deletions(-)

--
2.25.1

bGluYXJvLW1tLXNpZ0BsaXN0cy5saW5hcm8ub3JnJmd0OzwvYT48YnI+DQo8Yj5CZXRyZWZmOjwv
Yj4gW1BBVENIIHYxIDAvM10gQWRkIGlvY3RscyB0byBtYXAgZ3JhbnQgcmVmcyBvbiB0aGUgZXh0
ZXJuYWwgYmFja2luZyBzdG9yYWdlPC9mb250Pg0KPGRpdj4mbmJzcDs8L2Rpdj4NCjwvZGl2Pg0K
PGRpdiBjbGFzcz0iQm9keUZyYWdtZW50Ij48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9u
dC1zaXplOjExcHQiPg0KPGRpdiBjbGFzcz0iUGxhaW5UZXh0IGVsZW1lbnRUb1Byb29mIj5IZWxs
byw8YnI+DQo8YnI+DQpMZXQgbWUgaW50cm9kdWNlIHRoZSBuZXcgaW9jdGxzLCB3aGljaCBhcmUg
aW50ZW5kZWQgdG8gYWxsb3cgZ250ZGV2IHRvPGJyPg0KbWFwIHNjYXR0ZXItZ2F0aGVyIHRhYmxl
IG9uIHRvcCBvZiB0aGUgZXhpc3RpbmcgZG1hYnVmLCByZWZlcmVuY2VkIGJ5PGJyPg0KZmlsZSBk
ZXNjcmlwdG9yLjxicj4NCjxicj4NCldoZW4gdXNpbmcgZG1hLWJ1ZiBleHBvcnRlciB0byBjcmVh
dGUgZG1hLWJ1ZiB3aXRoIGJhY2tpbmcgc3RvcmFnZSBhbmQ8YnI+DQptYXAgaXQgdG8gdGhlIGdy
YW50IHJlZnMsIHByb3ZpZGVkIGZyb20gdGhlIGRvbWFpbiwgd2UndmUgbWV0IGEgcHJvYmxlbSw8
YnI+DQp0aGF0IHNldmVyYWwgSFcgKGkuTVg4IGdwdSBpbiBvdXIgY2FzZSkgZG8gbm90IHN1cHBv
cnQgZXh0ZXJuYWwgYnVmZmVyPGJyPg0KYW5kIHJlcXVpcmVzIGJhY2tpbmcgc3RvcmFnZSB0byBi
ZSBjcmVhdGVkIHVzaW5nIGl0J3MgbmF0aXZlIHRvb2xzLjxicj4NClRoYXQncyB3aHkgbmV3IGlv
Y3RscyB3ZXJlIGFkZGVkIHRvIGJlIGFibGUgdG8gcGFzcyBleGlzdGluZyBkbWEtYnVmZmVyPGJy
Pg0KZmQgYXMgaW5wdXQgcGFyYW1ldGVyIGFuZCB1c2UgaXQgYXMgYmFja2luZyBzdG9yYWdlIHRv
IGV4cG9ydCB0byByZWZzLjwvZGl2Pg0KPGRpdiBjbGFzcz0iUGxhaW5UZXh0IGVsZW1lbnRUb1By
b29mIj48YnI+DQo8L2Rpdj4NCjxkaXYgY2xhc3M9IlBsYWluVGV4dCBlbGVtZW50VG9Qcm9vZiI+
VGhpcyBpcyBhIHByZXR0eSBiaWcgTkFLIGZyb20gbXkgc2lkZSB0byB0aGlzIGFwcHJvYWNoLjwv
ZGl2Pg0KPGRpdiBjbGFzcz0iUGxhaW5UZXh0IGVsZW1lbnRUb1Byb29mIj48YnI+DQo8L2Rpdj4N
CjxkaXYgY2xhc3M9IlBsYWluVGV4dCBlbGVtZW50VG9Qcm9vZiI+SWYgeW91IG5lZWQgdG8gcmVw
bGFjZSBhIGZpbGUgZGVzY3JpcHRvciBudW1iZXIgbG9jYWwgdG8geW91ciBwcm9jZXNzIHRoZW4g
eW91IGNhbiBzaW1wbHkgdXNlIGR1cDIoKSBmcm9tIHVzZXJzcGFjZS48L2Rpdj4NCjxkaXYgY2xh
c3M9IlBsYWluVGV4dCBlbGVtZW50VG9Qcm9vZiI+PGJyPg0KPC9kaXY+DQo8ZGl2IGNsYXNzPSJQ
bGFpblRleHQgZWxlbWVudFRvUHJvb2YiPklmIHlvdXIgaW50ZW50aW9uIGhlcmUgaXMgdG8gcmVw
bGFjZSB0aGUgYmFja2luZyBzdG9yZSBvZiB0aGUgZmQgb24gYWxsIHByb2Nlc3NlcyB3aGljaCBj
dXJyZW50bHkgaGF2ZSBpdCBvcGVuIHRoZW4gcGxlYXNlIGp1c3QgY29tcGxldGVseSBmb3JnZXQg
dGhhdC4gVGhpcyB3aWxsICpORVZFUiogZXZlciB3b3JrIGNvcnJlY3RseS48L2Rpdj4NCjxkaXYg
Y2xhc3M9IlBsYWluVGV4dCBlbGVtZW50VG9Qcm9vZiI+PGJyPg0KPC9kaXY+DQo8ZGl2IGNsYXNz
PSJQbGFpblRleHQgZWxlbWVudFRvUHJvb2YiPlJlZ2FyZHMsPC9kaXY+DQo8ZGl2IGNsYXNzPSJQ
bGFpblRleHQgZWxlbWVudFRvUHJvb2YiPkNocmlzdGlhbi48L2Rpdj4NCjxkaXYgY2xhc3M9IlBs
YWluVGV4dCBlbGVtZW50VG9Qcm9vZiI+PGJyPg0KPC9kaXY+DQo8L3NwYW4+PC9mb250PjwvZGl2
Pg0KPC9kaXY+DQo8L2Jsb2NrcXVvdGU+DQo8cD5IZWxsbyBDcmlzdGlhbiw8L3A+DQo8cD48YnI+
DQo8L3A+DQo8cD5UaGFuayB5b3UgZm9yIHRoZSBxdWljayByZXNwb25zZS48L3A+DQo8cD48YnI+
DQo8L3A+DQo8cD5NeSBnb2FsIGlzIHRvIHByb3ZpZGUgY29ycmVjdCBidWZmZXIgZm9yIHRoZSBp
bnRlcmZhY2VzLCBzdWNoIGFzIDxmb250IHNpemU9IjIiPg0KPHNwYW4gc3R5bGU9ImZvbnQtc2l6
ZToxMXB0Ij56d3BfbGludXhfZG1hYnVmX3YxX2ludGVyZmFjZSwgc28gemVyby1jb3B5PC9zcGFu
PjwvZm9udD48L3A+DQo8cD48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEx
cHQiPmZlYXR1cmUgd2lsbCB3b3JrLiBNeSBzdWdnZXN0aW9uIGlzIHRvIGdpdmUgYSBwb3NzaWJp
bGl0eSB0byB1c2Ugc29tZSBzcGVjaWZpYyBidWZmZXIgYXMgYmFja2luZyBzdG9yYWdlIHdoZXJl
IGNhbGxlcjwvc3Bhbj48L2ZvbnQ+PC9wPg0KPHA+PGZvbnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9
ImZvbnQtc2l6ZToxMXB0Ij50YWtlcyByZXNwb25zaWJpbGl0eSBmb3IgdGhlIHByb3ZpZGVkIGJ1
ZmZlciB0byBoYXZlIHRoZSBjb3JyZWN0IGZvcm1hdC48L3NwYW4+PC9mb250PjwvcD4NCjxwPjxm
b250IHNpemU9IjIiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTFwdCI+PGJyPg0KPC9zcGFuPjwv
Zm9udD48L3A+DQo8cD48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjExcHQi
PkluIG91ciBjYXNlIHdlIGFyZSB1c2luZyB0aGVzZSBjYWxscyB0aGUgZm9sbG93aW5nIHdheTo8
L3NwYW4+PC9mb250PjwvcD4NCjxwPjxmb250IHNpemU9IjIiPjxzcGFuIHN0eWxlPSJmb250LXNp
emU6MTFwdCI+MSkgR2V0IGdyZWZzIGZyb20gYW5vdGhlciBEb21haW4gKFdlJ3JlIHdvcmtpbmcg
b24gdGhlIHZpcnR1YWxpemVkIHN5c3RlbSB3aXRoIGRpZmZlcmVudCBEb21haW5zPC9zcGFuPjwv
Zm9udD48L3A+DQo8cD48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjExcHQi
PndvcmtpbmcgYXMgc3RhbmRhbG9uZSBWTXMgdGhhdCBhcmUgc2hhcmluZyByZXNvdXJjZXMpOzwv
c3Bhbj48L2ZvbnQ+PC9wPg0KPHA+PGZvbnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6
ZToxMXB0Ij4yKSBDcmVhdGUgYnVmZmVyIHVzaW5nIGdibV9ib19jcmVhdGUgYW5kIHJlY2VpdmUg
ZmQgKGkuTVg4IHJlcXVpcmVzIGVnbCBhcGkgdG8gYmUgY2FsbGVkIGZvciB0aGUgYnVmZmVyPC9z
cGFuPjwvZm9udD48L3A+DQo8cD48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1zaXpl
OjExcHQiPmFsbG9jYXRpb24sIG9yIDwvc3Bhbj48L2ZvbnQ+PGZvbnQgc2l6ZT0iMiI+PHNwYW4g
c3R5bGU9ImZvbnQtc2l6ZToxMXB0Ij48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1z
aXplOjExcHQiPmVnbENyZWF0ZUltYWdlS0hSIHdpbGwgcmV0dXJuIEVHTF9OT19JTUFHRV9LSFIN
Cjwvc3Bhbj48L2ZvbnQ+ZHVyaW5nIHBhcmFtIHNldHRpbmcgZm9yIHp3cF9saW51eF9kbWFidWYp
Ozwvc3Bhbj48L2ZvbnQ+PC9wPg0KPHA+PGZvbnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9ImZvbnQt
c2l6ZToxMXB0Ij4zKSBDYWxsIGZvciA8L3NwYW4+PC9mb250Pjxmb250IHNpemU9IjIiPjxzcGFu
IHN0eWxlPSJmb250LXNpemU6MTFwdCI+SU9DVExfR05UREVWX0RNQUJVRl9NQVBfUkVGU19UT19C
VUYgdG8gbWFwIGdyZWZzIHVzaW5nIGZkIGFzIHRoZSBiYWNraW5nIHN0b3JhZ2U7PC9zcGFuPjwv
Zm9udD48L3A+DQo8cD48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjExcHQi
PjQpIENhbGwgZm9yIDwvc3Bhbj48L2ZvbnQ+PGZvbnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9ImZv
bnQtc2l6ZToxMXB0Ij56d3BfbGludXhfZG1hYnVmX3YxX2ludGVyZmFjZSBhbmQgdXNlIHplcm8t
Y29weSBmZWF0dXJlIHdpdGggV2F5bGFuZDs8L3NwYW4+PC9mb250PjwvcD4NCjxwPjxmb250IHNp
emU9IjIiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTFwdCI+NSkgQWZ0ZXIgd29yayBmaW5pc2hl
ZCAtIGNhbGwgZm9yIDwvc3Bhbj4NCjwvZm9udD48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0i
Zm9udC1zaXplOjExcHQiPklPQ1RMX0dOVERFVl9ETUFCVUZfTUFQX1JFTEVBU0UgdG8gdW5tYXAg
Z3JhbnQgcmVmczs8L3NwYW4+PC9mb250PjwvcD4NCjxwPjxmb250IHNpemU9IjIiPjxzcGFuIHN0
eWxlPSJmb250LXNpemU6MTFwdCI+NikgQ2FsbCBnYm1fYm9fZGVzdHJveSB0byByZWxlYXNlIGFs
bG9jYXRlZCBCTyBvYmplY3Q7PC9zcGFuPjwvZm9udD48L3A+DQo8cD48Zm9udCBzaXplPSIyIj48
c3BhbiBzdHlsZT0iZm9udC1zaXplOjExcHQiPjcpIENhbGwgZm9yIDwvc3Bhbj48L2ZvbnQ+PGZv
bnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMXB0Ij5JT0NUTF9HTlRERVZfRE1B
QlVGX01BUF9XQUlUX1JFTEVBU0VEIHRvIHdhaXQgdW50aWwgbWFwIHJlbGVhc2VkIGNvbXBsZXRl
bHk8L3NwYW4+PC9mb250PjwvcD4NCjxwPjxmb250IHNpemU9IjIiPjxzcGFuIHN0eWxlPSJmb250
LXNpemU6MTFwdCI+KFRoaXMgaW5jbHVkZXMgcmVsZWFzaW5nIGdyYW50IHJlZnMgb24gdGhlIERv
bWFpbiBzaWRlKS48L3NwYW4+PC9mb250PjwvcD4NCjxwPjxmb250IHNpemU9IjIiPjxzcGFuIHN0
eWxlPSJmb250LXNpemU6MTFwdCI+PGJyPg0KPC9zcGFuPjwvZm9udD48L3A+DQo8cD48Zm9udCBz
aXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjExcHQiPkkndmUgdGVzdGVkIG15IGNoYW5n
ZXMgb24gSU1YOFFNIGJvYXJkIHVzaW5nIGdibV9ib19jcmVhdGUgdG8gYWxsb2NhdGUgYnVmZmVy
IGFuZDwvc3Bhbj48L2ZvbnQ+PC9wPg0KPHA+PGZvbnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9ImZv
bnQtc2l6ZToxMXB0Ij5vbiBRRU1VIHNldHVwIHVzaW5nIDwvc3Bhbj48L2ZvbnQ+PGZvbnQgc2l6
ZT0iMiI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMXB0Ij5EUk1fSU9DVExfTU9ERV9DUkVBVEVf
RFVNQiBmb3IgYnVmZmVyIGFsbG9jYXRpb24uPC9zcGFuPjwvZm9udD48L3A+DQo8cD48Zm9udCBz
aXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjExcHQiPkkgY2FuIHByb3ZpZGUgdGVzdCBh
cHBsaWNhdGlvbnMgSSd2ZSB1c2VkIGZvciB0ZXN0aW5nIHB1cnBvc2VzLjxicj4NCjwvc3Bhbj48
L2ZvbnQ+PC9wPg0KPHA+PGZvbnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMXB0
Ij48YnI+DQo8L3NwYW4+PC9mb250PjwvcD4NCjxwPjxmb250IHNpemU9IjIiPjxzcGFuIHN0eWxl
PSJmb250LXNpemU6MTFwdCI+QmVzdCByZWdhcmRzLDwvc3Bhbj48L2ZvbnQ+PC9wPg0KPHA+PGZv
bnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMXB0Ij5PbGVrc2lpLjwvc3Bhbj48
L2ZvbnQ+PC9wPg0KPHA+PGZvbnQgc2l6ZT0iMiI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMXB0
Ij48YnI+DQo8L3NwYW4+PC9mb250Pjxmb250IHNpemU9IjIiPjxzcGFuIHN0eWxlPSJmb250LXNp
emU6MTFwdCI+PC9zcGFuPjwvZm9udD48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1z
aXplOjExcHQiPjwvc3Bhbj48L2ZvbnQ+PC9wPg0KPGJsb2NrcXVvdGUgdHlwZT0iY2l0ZSIgY2l0
ZT0ibWlkOkJOOFBSMTJNQjM1ODc2NzNGOEVGMjY0MkQyNjdFOTQzRDgzRjc5QEJOOFBSMTJNQjM1
ODcubmFtcHJkMTIucHJvZC5vdXRsb29rLmNvbSI+DQo8ZGl2Pg0KPGRpdiBjbGFzcz0iQm9keUZy
YWdtZW50Ij48Zm9udCBzaXplPSIyIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjExcHQiPg0KPGRp
diBjbGFzcz0iUGxhaW5UZXh0IGVsZW1lbnRUb1Byb29mIj5Gb2xsb3dpbmcgY2FsbHMgd2VyZSBh
ZGRlZDo8YnI+DQpJT0NUTF9HTlRERVZfRE1BQlVGX01BUF9SRUZTX1RPX0JVRiAtIG1hcCBleGlz
dGluZyBidWZmZXIgYXMgdGhlIGJhY2tpbmc8YnI+DQpzdG9yYWdlIGFuZCBleHBvcnQgaXQgdG8g
dGhlIHByb3ZpZGVkIGdyYW50IHJlZnM7PGJyPg0KSU9DVExfR05UREVWX0RNQUJVRl9NQVBfUkVM
RUFTRSAtIGRldGFjaCBidWZmZXIgZnJvbSB0aGUgZ3JhbnQgdGFibGUgYW5kPGJyPg0Kc2V0IG5v
dGlmaWNhdGlvbiB0byB1bm1hcCBncmFudCByZWZzIGJlZm9yZSByZWxlYXNpbmcgdGhlIGV4dGVy
bmFsPGJyPg0KYnVmZmVyLiBBZnRlciB0aGlzIGNhbGwgdGhlIGV4dGVybmFsIGJ1ZmZlciBzaG91
bGQgYmUgZGV0cm95ZWQuPGJyPg0KSU9DVExfR05UREVWX0RNQUJVRl9NQVBfV0FJVF9SRUxFQVNF
RCAtIHdhaXQgZm9yIHRpbWVvdXQgdW50aWwgYnVmZmVyIGlzPGJyPg0KY29tcGxldGVseSBkZXN0
cm95ZWQgYW5kIGdudCByZWZzIHVubWFwcGVkIHNvIGRvbWFpbiBjb3VsZCBmcmVlIGdyYW50PGJy
Pg0KcGFnZXMuIFNob3VsZCBiZSBjYWxsZWQgYWZ0ZXIgYnVmZmVyIHdhcyBkZXN0b3llZC48YnI+
DQo8YnI+DQpPdXIgc2V0dXAgaXMgYmFzZWQgb24gSU1YOFFNIGJvYXJkLiBXZSdyZSB0cnlpbmcg
dG8gaW1wbGVtZW50IHplcm8tY29weTxicj4NCnN1cHBvcnQgZm9yIERvbVUgZ3JhcGhpY3MgdXNp
bmcgV2F5bGFuZCB6d3BfbGludXhfZG1hYnVmX3YxX2ludGVyZmFjZTxicj4NCmltcGxlbWVudGF0
aW9uLjxicj4NCjxicj4NCkZvciBkbWEtYnVmIGV4cG9ydGVyIHdlIHVzZWQgaS5NWDggZ3B1IG5h
dGl2ZSB0b29scyB0byBjcmVhdGUgYmFja2luZzxicj4NCnN0b3JhZ2UgZ3JhbnQtcmVmcywgcmVj
ZWl2ZWQgZnJvbSBEb21VLiBCdWZmZXIgZm9yIHRoZSBiYWNraW5nIHN0b3JhZ2Ugd2FzPGJyPg0K
YWxsb2NhdGVkIHVzaW5nIGdibV9ib19jcmVhdGUgY2FsbCBiZWNhdXNlIGdwdSBkbyBub3Qgc3Vw
cG9ydCBleHRlcm5hbDxicj4NCmJ1ZmZlciBhbmQgcmVxdWlyZXMgYmFja2luZyBzdG9yYWdlIHRv
IGJlIGNyZWF0ZWQgdXNpbmcgaXQncyBuYXRpdmUgdG9vbHM8YnI+DQooZWdsQ3JlYXRlSW1hZ2VL
SFIgcmV0dXJucyBFR0xfTk9fSU1BR0VfS0hSIGZvciBidWZmZXJzLCB3aGljaCB3ZXJlIG5vdDxi
cj4NCmNyZWF0ZWQgdXNpbmcgZ2JtX2JvX2NyZWF0ZSkuPGJyPg0KPGJyPg0KVGhpcyBiZWhhdmlv
dXIgd2FzIGFsc28gdGVzdGVkIG9uIFFlbXUgc2V0dXAgdXNpbmc8YnI+DQpEUk1fSU9DVExfTU9E
RV9DUkVBVEVfRFVNQiBjYWxsIHRvIGNyZWF0ZSBiYWNraW5nIHN0b3JhZ2UgYnVmZmVyLjxicj4N
Cjxicj4NCi0tLTxicj4NCk9sZWtzaWkgTW9pc2llaWV2ICgzKTo8YnI+DQombmJzcDsgeGVuL2dy
YW50LXRhYmxlOiBzYXZlIHBhZ2VfY291bnQgb24gbWFwIGFuZCB1c2UgaWYgZHVyaW5nIGFzeW5j
PGJyPg0KJm5ic3A7Jm5ic3A7Jm5ic3A7IHVubWFwcGluZzxicj4NCiZuYnNwOyBkbWEtYnVmOiBh
ZGQgZG1hIGJ1ZmZlciByZWxlYXNlIG5vdGlmaWVyIGNhbGxiYWNrPGJyPg0KJm5ic3A7IHhlbi9n
cmFudC10YWJsZTogYWRkIG5ldyBpb2N0bHMgdG8gbWFwIGRtYWJ1ZiB0byBleGlzdGluZyBmZDxi
cj4NCjxicj4NCiZuYnNwO2RyaXZlcnMvZG1hLWJ1Zi9kbWEtYnVmLmMmbmJzcDsmbmJzcDsgfCZu
YnNwOyA0NCArKysrPGJyPg0KJm5ic3A7ZHJpdmVycy94ZW4vZ250ZGV2LWNvbW1vbi5oIHwmbmJz
cDsmbmJzcDsgOCArLTxicj4NCiZuYnNwO2RyaXZlcnMveGVuL2dudGRldi1kbWFidWYuYyB8IDQx
NiArKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy08YnI+DQombmJzcDtkcml2ZXJz
L3hlbi9nbnRkZXYtZG1hYnVmLmggfCZuYnNwOyZuYnNwOyA3ICs8YnI+DQombmJzcDtkcml2ZXJz
L3hlbi9nbnRkZXYuYyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyB8
IDEwMSArKysrKysrKy08YnI+DQombmJzcDtkcml2ZXJzL3hlbi9ncmFudC10YWJsZS5jJm5ic3A7
Jm5ic3A7IHwmbmJzcDsgNzMgKysrKystLTxicj4NCiZuYnNwO2luY2x1ZGUvbGludXgvZG1hLWJ1
Zi5oJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7IHwmbmJzcDsgMTUgKys8YnI+DQombmJzcDtpbmNs
dWRlL3VhcGkveGVuL2dudGRldi5oJm5ic3A7Jm5ic3A7IHwmbmJzcDsgNjIgKysrKysrPGJyPg0K
Jm5ic3A7aW5jbHVkZS94ZW4vZ3JhbnRfdGFibGUuaCZuYnNwOyZuYnNwOyB8Jm5ic3A7Jm5ic3A7
IDggKzxicj4NCiZuYnNwOzkgZmlsZXMgY2hhbmdlZCwgNzAzIGluc2VydGlvbnMoKyksIDMxIGRl
bGV0aW9ucygtKTxicj4NCjxicj4NCi0tIDxicj4NCjIuMjUuMTxicj4NCjwvZGl2Pg0KPC9zcGFu
PjwvZm9udD48L2Rpdj4NCjwvZGl2Pg0KPC9ibG9ja3F1b3RlPg0KPC9ib2R5Pg0KPC9odG1sPg0K

--_000_3066964b511f6314bc8f79e387712013epamcom_--


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:01:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:01:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470389.729878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCRvb-0000A2-EY; Mon, 02 Jan 2023 21:01:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470389.729878; Mon, 02 Jan 2023 21:01:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCRvb-00009s-B3; Mon, 02 Jan 2023 21:01:03 +0000
Received: by outflank-mailman (input) for mailman id 470389;
 Mon, 02 Jan 2023 21:01:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCRvZ-00009f-UD; Mon, 02 Jan 2023 21:01:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCRvZ-0000Ri-Py; Mon, 02 Jan 2023 21:01:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCRvZ-0006LP-6k; Mon, 02 Jan 2023 21:01:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCRvZ-0001M9-6I; Mon, 02 Jan 2023 21:01:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lam2k+eFP9Z/Vyf+PiWhfCGZt1/LdhplGcM9qnY5dzI=; b=T1BPKcTU9TLZWIPzQJB1RwosO3
	+b9LDFYIOF5nvnJ7FX80AIP3t1b9+pK84O8xzBptnVx+2h6+gMB3SdG4fSlQGYyh1QZ1QzGjDIAIn
	ENFXrEn4ShS2qgpeasE1OasJWVDRyFkFYMv35kerT0HmewteEojIZFt7Wp3/zoQWM5yw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175550-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175550: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=88603b6dc419445847923fcb7fe5080067a30f98
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Jan 2023 21:01:01 +0000

flight 175550 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175550/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                88603b6dc419445847923fcb7fe5080067a30f98
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   87 days
Failing since        173470  2022-10-08 06:21:34 Z   86 days  179 attempts
Testing same since   175546  2023-01-01 23:13:03 Z    0 days    2 attempts

------------------------------------------------------------
3255 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 497012 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:35:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470400.729912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCST9-00041q-V7; Mon, 02 Jan 2023 21:35:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470400.729912; Mon, 02 Jan 2023 21:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCST9-00041Y-Nr; Mon, 02 Jan 2023 21:35:43 +0000
Received: by outflank-mailman (input) for mailman id 470400;
 Mon, 02 Jan 2023 21:35:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jV8v=47=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pCST8-0003Wk-CH
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 21:35:42 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6765f685-8ae5-11ed-b8d0-410ff93cb8f0;
 Mon, 02 Jan 2023 22:35:40 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id m18so69165178eji.5
 for <xen-devel@lists.xenproject.org>; Mon, 02 Jan 2023 13:35:40 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 f11-20020a17090631cb00b0084c465709b7sm10583826ejf.74.2023.01.02.13.35.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 02 Jan 2023 13:35:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6765f685-8ae5-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gZDEL8s6HWwEfgKMeGlSAkqHU0n7hORkrqT7smSJTVY=;
        b=GU3AYEhEt5HGHHUhKsuD0ojio6SvhzKD2dSKr64WYZjapbbSIsZnEZNRwIsilTaOUJ
         iGTkrUbs+yhPQyQgY/ZZpAiFqB4ecrRyGNQlNB9jjQAFyfPYYKyIu3p+2LAHxwhgitUL
         LedNWX6Boc+BkDPK3KEqCjyCurW61QmS5JrZOGNzDZwVGaY/3Bu50aDxQPLngHJylhPt
         wdEtW5D2xVHw3Vu4PphFdbdzXJQQT1bvsQyyOKRq9fDe6W81Cvgzz2lFPFByXTJL5t4o
         5CQ8Qo1pNIhYYxhrzpL7P7fjIZnzuZrFuUzHvPYI6ubpnIrjO06SV/rS4zJP0QG7PcyB
         Swaw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=gZDEL8s6HWwEfgKMeGlSAkqHU0n7hORkrqT7smSJTVY=;
        b=QFU5oC71Yk5hHY3H/mWS4ZzcDTAPl9QJmWRDXZL/vYIu94oZrMi2UUeX9GvuBg054w
         l0NcjgtS0WiPe5tdOa5JaM/lKwO/G9BUL0RjYTmeHvuMzbwkJYyUod5szAYZpB7DkuP/
         DvUP8FGng4ovzsvOOy1DoSUxU/zdCt06RlJ3blF7fZgAeimbiBB8QRt6rdbCL8LQk7qe
         iu0xRlKY/0AKQqPYbYPgA1On3ED52tMEVy9HOQcDrkNxDJsjHeTNxPQQXafgr+zPYME+
         wJwQft0ubY7aDmpTm/2UkArG4KyzjTatRsraXO4wtOjmk9csNCqUsz+Oq9eoT1xPDTMF
         pg3A==
X-Gm-Message-State: AFqh2kq4QmGcSTQvY+RkxZNzHkCDxDBLfatipvHgcL1TZoOMbPxUs+lU
	JGOUpXl6NYgZYkufwyHwvaQ=
X-Google-Smtp-Source: AMrXdXtC/ibfuSlBrNsQnaQfyhwXLOGrgcrvGa+eum0cbe4JY/6WC6uF8ZpOR32bH9fWmLx0CvyqaQ==
X-Received: by 2002:a17:906:a1d7:b0:815:8942:dde with SMTP id bx23-20020a170906a1d700b0081589420ddemr45198684ejb.23.1672695340325;
        Mon, 02 Jan 2023 13:35:40 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 2/6] hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
Date: Mon,  2 Jan 2023 22:35:00 +0100
Message-Id: <20230102213504.14646-3-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230102213504.14646-1-shentey@gmail.com>
References: <20230102213504.14646-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a preparatory patch for the next one, making the following
more obvious:

First, pci_bus_irqs() is now called twice in the Xen case, where the
second call overrides the pci_set_irq_fn with the Xen variant.

Second, pci_bus_set_route_irq_fn() is now also called in Xen mode.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/isa/piix.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index dc6014a4e4..a1281c2d77 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -493,7 +493,7 @@ static void piix3_xen_realize(PCIDevice *dev, Error **errp)
     PIIXState *piix3 = PIIX_PCI_DEVICE(dev);
     PCIBus *pci_bus = pci_get_bus(dev);
 
-    pci_piix_realize(dev, TYPE_PIIX3_USB_UHCI, errp);
+    piix3_realize(dev, errp);
     if (*errp) {
         return;
     }
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:35:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470399.729895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCST8-0003aS-Dx; Mon, 02 Jan 2023 21:35:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470399.729895; Mon, 02 Jan 2023 21:35:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCST8-0003ZF-7a; Mon, 02 Jan 2023 21:35:42 +0000
Received: by outflank-mailman (input) for mailman id 470399;
 Mon, 02 Jan 2023 21:35:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jV8v=47=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pCST6-0003Wl-Qh
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 21:35:40 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 66c192d6-8ae5-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 22:35:39 +0100 (CET)
Received: by mail-ej1-x631.google.com with SMTP id u9so69361564ejo.0
 for <xen-devel@lists.xenproject.org>; Mon, 02 Jan 2023 13:35:39 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 f11-20020a17090631cb00b0084c465709b7sm10583826ejf.74.2023.01.02.13.35.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 02 Jan 2023 13:35:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66c192d6-8ae5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AhjGz5k2CpbSc4H1yL5PDfOgeHeVz+Sw9cmuSS5IpyA=;
        b=owH/gHSxZpPPK6jvooYZ2wLT9TML1U3LOD629OCrtg+nERm71Dji9K6TNg3MbP6Idc
         qcr126FdTfRUKlWTIXTY+Bgh37w+XP8+KS1ONuTsaMPx8lC9OojQhW7i7hBUNA9igiqi
         0v1xr7C4jQOwjzirYfnBq8TDN97Iw+0ano5/nh5z0xMU5V/R/iqLspeXW77ZBjfooiu3
         r8Xfxx73bxr97JwI5kDKQM/jYhKNXoAT2c00SH5ujp4Jzc+PhtRVm192H5TPW1zIPZvr
         TpLoG9qlpOhMEhPVkIGVXiyGOxzZUa63JMH+ChYMA8W8mWV7O3EAr+PNxZJ1Gd4R6rOu
         pYsA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=AhjGz5k2CpbSc4H1yL5PDfOgeHeVz+Sw9cmuSS5IpyA=;
        b=BdzhCGTWdTkEauStNARUaT9PapnrXOAQX5Q/3DsOr/i0FUMuhTlG3GGmE4A1Cw16wn
         F3AEroIqqgetoxljdL0UVHP5TbXoPbd0zDuq4FbIjizUHybdnswlM29ihCkjiyEj9Kpx
         svpexZfru7U9JfahntnfoLgm/gap7LSq6k9EMKDW4msS8i6k0hN4sMJvo9+a3LgNyOG8
         bdeMjbl9shENjuRDgymYgeRURSDeF5wnEqTWsmcfBpwK9kMUFxmhyhrNm2Q3gI/tbS9P
         GMUBsKPzHazTU7eg1AXzr3SK4CBB3Ub4ZF+DtOUkmcv42n9VXfeDE1JiqgOsuFUD7Jbm
         XzjA==
X-Gm-Message-State: AFqh2kq2yGJyiTc5YZRE9u9xxc7wXmAUXywe8GhafM9AeS75c5tguORt
	Y2WbrB/sNehmbLAXZ46vwYo=
X-Google-Smtp-Source: AMrXdXsKw/KWvzmQ9hqdX4WTRGdsRsoTmLiF1urYt1nYchYGOMtg1bcydRX4/IC9k2BmIdSzSy2jZQ==
X-Received: by 2002:a17:907:9625:b0:7ad:9455:d57d with SMTP id gb37-20020a170907962500b007ad9455d57dmr38487663ejc.74.1672695339125;
        Mon, 02 Jan 2023 13:35:39 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 1/6] include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
Date: Mon,  2 Jan 2023 22:34:59 +0100
Message-Id: <20230102213504.14646-2-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230102213504.14646-1-shentey@gmail.com>
References: <20230102213504.14646-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_piix3_set_irq() hardcoded the number of PCI IRQ lines. Get it from
the PCI bus instead.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/xen/xen-hvm.c | 9 ++++++---
 hw/isa/piix.c         | 2 +-
 include/hw/xen/xen.h  | 2 +-
 stubs/xen-hw-stub.c   | 2 +-
 4 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index e4293d6d66..59e8246a48 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -142,10 +142,13 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
     return irq_num + (PCI_SLOT(pci_dev->devfn) << 2);
 }
 
-void xen_piix3_set_irq(void *opaque, int irq_num, int level)
+void xen_intx_set_irq(void *opaque, int irq_num, int level)
 {
-    xen_set_pci_intx_level(xen_domid, 0, 0, irq_num >> 2,
-                           irq_num & 3, level);
+    PCIDevice *pci_dev = opaque;
+    PCIBus *pci_bus = pci_get_bus(pci_dev);
+
+    xen_set_pci_intx_level(xen_domid, 0, 0, irq_num / pci_bus->nirq,
+                           irq_num % pci_bus->nirq, level);
 }
 
 int xen_set_pci_link_route(uint8_t link, uint8_t irq)
diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index ae8a27c53c..dc6014a4e4 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -504,7 +504,7 @@ static void piix3_xen_realize(PCIDevice *dev, Error **errp)
      * connected to the IOAPIC directly.
      * These additional routes can be discovered through ACPI.
      */
-    pci_bus_irqs(pci_bus, xen_piix3_set_irq, piix3, XEN_PIIX_NUM_PIRQS);
+    pci_bus_irqs(pci_bus, xen_intx_set_irq, piix3, XEN_PIIX_NUM_PIRQS);
 }
 
 static void piix3_xen_class_init(ObjectClass *klass, void *data)
diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index afdf9c436a..7c83ecf6b9 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -22,7 +22,7 @@ extern bool xen_domid_restrict;
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 int xen_set_pci_link_route(uint8_t link, uint8_t irq);
-void xen_piix3_set_irq(void *opaque, int irq_num, int level);
+void xen_intx_set_irq(void *opaque, int irq_num, int level);
 void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
 int xen_is_pirq_msi(uint32_t msi_data);
 
diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 34a22f2ad7..7d7ffe83a9 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -15,7 +15,7 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
     return -1;
 }
 
-void xen_piix3_set_irq(void *opaque, int irq_num, int level)
+void xen_intx_set_irq(void *opaque, int irq_num, int level)
 {
 }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:35:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470402.729927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSTB-0004KI-F5; Mon, 02 Jan 2023 21:35:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470402.729927; Mon, 02 Jan 2023 21:35:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSTB-0004JT-8N; Mon, 02 Jan 2023 21:35:45 +0000
Received: by outflank-mailman (input) for mailman id 470402;
 Mon, 02 Jan 2023 21:35:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jV8v=47=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pCST9-0003Wk-Lw
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 21:35:43 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6823eacd-8ae5-11ed-b8d0-410ff93cb8f0;
 Mon, 02 Jan 2023 22:35:41 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id qk9so69227516ejc.3
 for <xen-devel@lists.xenproject.org>; Mon, 02 Jan 2023 13:35:41 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 f11-20020a17090631cb00b0084c465709b7sm10583826ejf.74.2023.01.02.13.35.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 02 Jan 2023 13:35:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6823eacd-8ae5-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=D3N14otcWIL0n4lyXnEKPR8qnN0/LRmM4qVkpIj4xR0=;
        b=o/cROMVNdjgczrnzaHlIWJEYXmIcrhVEkY1chtTByVFgw8LTNs+VtgtFtEZgCY5yoO
         nzOSynWwIDOUxRcnk9SUCJGzI20lfjgufMnCsY3qKkrbVKMBuWhX5TfEhfasZkfISq3w
         UUBnbDggVxAN214v2MFTNHh1dDpNtR39S8/3tlxvw36a2E8mKGLvwXUdhgefdiWugr2Q
         /JRT9RMXrxyUOTVNhO906P9DpV3HnLmQXEbcusyKm+dfxznocAkcd176q8u6n1jPlKMw
         RuGYas0dxRTf9TF7zr00vUMGJtTGMRb1AQ7dyW7UTX3Cu2OxXUh1shUHoWE61xetRae1
         jQiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=D3N14otcWIL0n4lyXnEKPR8qnN0/LRmM4qVkpIj4xR0=;
        b=12C9he5Yw+mNQUKLlQ/Sg/I21kIO8cWuyM6u5dL1lhOkjmbqffQxY02quCSoG3H/GQ
         54gNXO8ioU6yB+8pjn99HwGkIONlAyruzA+rvfhs0ziS/xrCU1NQjAW05rDEIpZKEHY6
         RIOk2SluH9doZwZ+Vd3blxbv43JnICfdTRYc/QwEBUf5hDPRSgCeXLZW3h9jIX7uhHHE
         wRf89QriXv7FxNv5TBeCjeJMx+M/7asvAVU8SFAhDqeAeS8Ggc2/Lnom372dAc/eGSb2
         kGTWYvqdFAksjIzQo6l30PetMySpNaaxDs5I6tP5eV+by7S0zWQ2agE+dqe/10m6tcDt
         cI/Q==
X-Gm-Message-State: AFqh2kp9eWLGHFRUQKRmRjz6K5KnsF9p8ULVJQp4eu30OAZdSPbUSjiQ
	asnFgr02jffAxX8DrtNWRS4=
X-Google-Smtp-Source: AMrXdXsbnk+1fmyTa+aN0jvfrb8pZSWUpibMyiM6zr4b2AEMWkyILO87oUs0wXcIHRi65FlXo993VQ==
X-Received: by 2002:a17:906:2349:b0:837:3ddb:7e97 with SMTP id m9-20020a170906234900b008373ddb7e97mr43321304eja.61.1672695341405;
        Mon, 02 Jan 2023 13:35:41 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 3/6] hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
Date: Mon,  2 Jan 2023 22:35:01 +0100
Message-Id: <20230102213504.14646-4-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230102213504.14646-1-shentey@gmail.com>
References: <20230102213504.14646-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_intx_set_irq() doesn't depend on PIIX state. In order to resolve
TYPE_PIIX3_XEN_DEVICE, and to make Xen agnostic of the precise south
bridge being used, set up Xen's PCI IRQ handling for PIIX3 in the
board code.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/pc_piix.c | 12 ++++++++++++
 hw/isa/piix.c     | 24 +-----------------------
 2 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index aacdb72b7c..c02f68010d 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -67,6 +67,7 @@
 #include "kvm/kvm-cpu.h"
 
 #define MAX_IDE_BUS 2
+#define XEN_PIIX_NUM_PIRQS 128ULL
 
 #ifdef CONFIG_IDE_ISA
 static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
@@ -246,6 +247,17 @@ static void pc_init1(MachineState *machine,
                                  &error_abort);
         pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
 
+        if (xen_enabled()) {
+            /*
+             * Xen supports additional interrupt routes from the PCI devices to
+             * the IOAPIC: the four pins of each PCI device on the bus are also
+             * connected to the IOAPIC directly.
+             * These additional routes can be discovered through ACPI.
+             */
+            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
+                         XEN_PIIX_NUM_PIRQS);
+        }
+
         dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
         for (i = 0; i < ISA_NUM_IRQS; i++) {
             qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index a1281c2d77..ac04781f46 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -38,8 +38,6 @@
 #include "migration/vmstate.h"
 #include "hw/acpi/acpi_aml_interface.h"
 
-#define XEN_PIIX_NUM_PIRQS      128ULL
-
 static void piix_set_irq_pic(PIIXState *piix, int pic_irq)
 {
     qemu_set_irq(piix->pic.in_irqs[pic_irq],
@@ -487,33 +485,13 @@ static const TypeInfo piix3_info = {
     .class_init    = piix3_class_init,
 };
 
-static void piix3_xen_realize(PCIDevice *dev, Error **errp)
-{
-    ERRP_GUARD();
-    PIIXState *piix3 = PIIX_PCI_DEVICE(dev);
-    PCIBus *pci_bus = pci_get_bus(dev);
-
-    piix3_realize(dev, errp);
-    if (*errp) {
-        return;
-    }
-
-    /*
-     * Xen supports additional interrupt routes from the PCI devices to
-     * the IOAPIC: the four pins of each PCI device on the bus are also
-     * connected to the IOAPIC directly.
-     * These additional routes can be discovered through ACPI.
-     */
-    pci_bus_irqs(pci_bus, xen_intx_set_irq, piix3, XEN_PIIX_NUM_PIRQS);
-}
-
 static void piix3_xen_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
     k->config_write = piix3_write_config_xen;
-    k->realize = piix3_xen_realize;
+    k->realize = piix3_realize;
     /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
     dc->vmsd = &vmstate_piix3;
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:35:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470403.729942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSTC-0004jt-PH; Mon, 02 Jan 2023 21:35:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470403.729942; Mon, 02 Jan 2023 21:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSTC-0004iy-J8; Mon, 02 Jan 2023 21:35:46 +0000
Received: by outflank-mailman (input) for mailman id 470403;
 Mon, 02 Jan 2023 21:35:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jV8v=47=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pCSTB-0003Wk-Gr
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 21:35:45 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6960b59e-8ae5-11ed-b8d0-410ff93cb8f0;
 Mon, 02 Jan 2023 22:35:43 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id m18so69165375eji.5
 for <xen-devel@lists.xenproject.org>; Mon, 02 Jan 2023 13:35:43 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 f11-20020a17090631cb00b0084c465709b7sm10583826ejf.74.2023.01.02.13.35.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 02 Jan 2023 13:35:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6960b59e-8ae5-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZTbz96Fg6rdZNkQF03buq3nvArFrt1X2bnXBL/QETy8=;
        b=PsKszMFE/irXAQUUzyvmEIaEr0Tcpdj3qhQcGkpM6bYRneaVlCz9APN9qcxNx7XmNo
         KLZfpHfC5iyIkCg8DGZEO4juAo/rZxv4CMMbW6pgthHUkyA2C1vKwUY20xj2gWQil7Zw
         jDG7lQAM8bdmOaCwb7KbNXuVBzjg4YjJD5g65Tg0rYTlN+NbeDFcUpR6TAg1MVAEV7zN
         i481Ii117uzsakJQlHK4tV7aHWnDIjke8SIy8P0RWBgz4pEH+jNFsPpnCqeEDR3i8nak
         Dc6oZySXPxHR7v967RWJPS5SkmtQNNGpv+r2EOUT/sAmRxGlHKArDmdliBsUgWsrEfGG
         VIIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ZTbz96Fg6rdZNkQF03buq3nvArFrt1X2bnXBL/QETy8=;
        b=tXC1jm6887S5wd9YaEsRFoqjXGPZpmYbqnMKuwRPhumdHNRlDbJ7+dfMDwvtjP+SAD
         yx+R79Yf1zpm7t+35x7dQT6DDFr9qQjrvqo5K02HnCWpZRSYlq82l/xP+kB2BmXjW5+v
         oExYubbBtpVF8RKJEt+WJvn60MQOCez2pQYY4l4S7GpqDxblPDn/ValmAtfydga6muj5
         EMdqQxzQJoE6mggJwP6J3vJYMZJrSvpQXNQWzxY6o1x1hH4VQLCj16fZaBCVmYeyvWzd
         xs58pUhFthghW4mxnKn+26MkE0DiTaiyofEv8ruiyxGGXzgNpQCs8raI34dZ/i8hA45p
         ss5w==
X-Gm-Message-State: AFqh2kpyJ6FW2Sgb9jQ3lphEYanMwZYry2yJXcJMM1cMlf2g9NZp4xYY
	/cMQJEuJX91dJtFHip/suVE=
X-Google-Smtp-Source: AMrXdXsn74EQeNcIi6oiYO0nJq/ZeXKHgJ9u+Dwmp6n2+uN/W06k2zB+iAzGi6JoudbLO8czjj/jbQ==
X-Received: by 2002:a17:906:d205:b0:7c1:51ee:a2ec with SMTP id w5-20020a170906d20500b007c151eea2ecmr36107634ejz.46.1672695343628;
        Mon, 02 Jan 2023 13:35:43 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 5/6] hw/isa/piix: Resolve redundant k->config_write assignments
Date: Mon,  2 Jan 2023 22:35:03 +0100
Message-Id: <20230102213504.14646-6-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230102213504.14646-1-shentey@gmail.com>
References: <20230102213504.14646-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The previous patch unified the handling of piix_write_config() across
all PIIX device models, which allows assigning k->config_write once in
the base class.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/isa/piix.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index d4cdb3dadb..98e9b12661 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -396,6 +396,7 @@ static void pci_piix_class_init(ObjectClass *klass, void *data)
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
     AcpiDevAmlIfClass *adevc = ACPI_DEV_AML_IF_CLASS(klass);
 
+    k->config_write = piix_write_config;
     dc->reset       = piix_reset;
     dc->desc        = "ISA bridge";
     dc->hotpluggable   = false;
@@ -451,7 +452,6 @@ static void piix3_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix_write_config;
     k->realize = piix3_realize;
     /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
@@ -470,7 +470,6 @@ static void piix3_xen_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix_write_config;
     k->realize = piix3_realize;
     /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
@@ -519,7 +518,6 @@ static void piix4_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix_write_config;
     k->realize = piix4_realize;
     k->device_id = PCI_DEVICE_ID_INTEL_82371AB_0;
     dc->vmsd = &vmstate_piix4;
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:35:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470398.729888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCST8-0003X3-33; Mon, 02 Jan 2023 21:35:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470398.729888; Mon, 02 Jan 2023 21:35:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCST8-0003Ww-0F; Mon, 02 Jan 2023 21:35:42 +0000
Received: by outflank-mailman (input) for mailman id 470398;
 Mon, 02 Jan 2023 21:35:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jV8v=47=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pCST6-0003Wk-Nl
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 21:35:40 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6627f758-8ae5-11ed-b8d0-410ff93cb8f0;
 Mon, 02 Jan 2023 22:35:38 +0100 (CET)
Received: by mail-ej1-x634.google.com with SMTP id vm8so62327859ejc.2
 for <xen-devel@lists.xenproject.org>; Mon, 02 Jan 2023 13:35:38 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 f11-20020a17090631cb00b0084c465709b7sm10583826ejf.74.2023.01.02.13.35.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 02 Jan 2023 13:35:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6627f758-8ae5-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=ThcfWkzyxbomQpjRZA7dc6wiEsJYQ4EhFUJkZuMYNh4=;
        b=l3DiR+ii5Hfw71WDsiNsQdM2zwr5EVsEEAGGOTU6ym+YOnTFgyDMg9p3kAGMFm2Uf/
         aQS0jZyQnqxtYSMIPattFYbm1mdsvpSJ/GGsEEd5HhYrEEeJ6+2Fu38VF4NsSG6Y21lK
         AHtxfwQOcsVklKIdAlVyxuFr1pmDQodVdZJhWtiAToFJemLAUAaz3oFSXvJAQTBCumzf
         fXIh9eI0mnzZ2IHGNbfvZOC7FxHI11aH6DYlMAaKv7fdbr/Awo02oZHlRFHMG503UjCZ
         k9Rxs77JgBKr5BTE3jl29v3+IbSrFZHhEs8IwI6z+tCMLssBEMwiOSRQih7t0fPjl1kG
         WoJg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=ThcfWkzyxbomQpjRZA7dc6wiEsJYQ4EhFUJkZuMYNh4=;
        b=vUCrY0uGT43bpAGlD3s/VcRla72zCXlf/OWrx9ChDsvBLU/JR3PfE+/WtfB5egh+X7
         b6cPuaiOjWRK+DaTVqIMv5Fs04ijQT0dPW25OKudhXq5TlPVmltUGdgAg4e7q5yDtqm1
         zQLGUeWB+1yQFIitxUGIMSB5+FVWQVy1WvDiwSZITlH/hVHZe6uRGJcP4XYXfMqwkAIK
         1VVEwusu+xsjcSbal7bFCB94SOz/Xmqbzpr2sTO6Jzr1Jh0P6t3uXKeJx8/S+/Kg+NOU
         RhEGgleYEIMOBWKtk0sf1lg1DVjFWFvQdyBJqEK4XhxHRejbPzF1fs83O6CujQE6Pjb3
         g99g==
X-Gm-Message-State: AFqh2krQFiPQyZpmz2ktSeSv2x+pZ0aMssNz+shn6kKNlW7+tVypr/wQ
	BRTfUy99YtpjE+RVpfyeWJM=
X-Google-Smtp-Source: AMrXdXurMjYiz5Hum/Tzox9GBdNJ4/Pvd6hcKQmZX141+8/hdih4/lbxpie9ZEO4sSmtCYEFcdoKug==
X-Received: by 2002:a17:907:3ea1:b0:7c1:7f84:10ac with SMTP id hs33-20020a1709073ea100b007c17f8410acmr61151091ejc.33.1672695338100;
        Mon, 02 Jan 2023 13:35:38 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
Date: Mon,  2 Jan 2023 22:34:58 +0100
Message-Id: <20230102213504.14646-1-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
it. The motivation is to 1/ decouple PIIX from Xen and 2/ make Xen in the PC
machine agnostic to the precise southbridge being used. 2/ will become
particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
the "Frankenstein" use of PIIX4_ACPI in PIIX3.

Testing done:
None, because I don't know how to conduct this properly :(

Based-on: <20221221170003.2929-1-shentey@gmail.com>
          "[PATCH v4 00/30] Consolidate PIIX south bridges"

Bernhard Beschow (6):
  include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
  hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
  hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
  hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
  hw/isa/piix: Resolve redundant k->config_write assignments
  hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE

 hw/i386/pc_piix.c             | 34 ++++++++++++++++--
 hw/i386/xen/xen-hvm.c         |  9 +++--
 hw/isa/piix.c                 | 66 +----------------------------------
 include/hw/southbridge/piix.h |  1 -
 include/hw/xen/xen.h          |  2 +-
 stubs/xen-hw-stub.c           |  2 +-
 6 files changed, 40 insertions(+), 74 deletions(-)

-- 
2.39.0


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:35:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470401.729922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSTB-0004Hn-4X; Mon, 02 Jan 2023 21:35:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470401.729922; Mon, 02 Jan 2023 21:35:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSTB-0004He-1T; Mon, 02 Jan 2023 21:35:45 +0000
Received: by outflank-mailman (input) for mailman id 470401;
 Mon, 02 Jan 2023 21:35:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jV8v=47=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pCST9-0003Wl-HS
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 21:35:43 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68b2b11a-8ae5-11ed-91b6-6bf2151ebd3b;
 Mon, 02 Jan 2023 22:35:42 +0100 (CET)
Received: by mail-ej1-x631.google.com with SMTP id tz12so69253234ejc.9
 for <xen-devel@lists.xenproject.org>; Mon, 02 Jan 2023 13:35:42 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 f11-20020a17090631cb00b0084c465709b7sm10583826ejf.74.2023.01.02.13.35.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 02 Jan 2023 13:35:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68b2b11a-8ae5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7hyvY3JWBoATeou/9rB1bwHPuTVRUibflv0giDEMRFI=;
        b=FFEt7jlJJ+TZncL9s1i0SklCgyxtm7oXcgRQkgj1jE/dUm5xg4LuIlrUbaq8eTpWQk
         lusWPMYsPPe89MlCOjXfmbss+PluOmAxo6TNTCn9VkwqzVNEZ3pVIGV+6zPOo/B7hbEh
         FmoABY83bkE7ljMZ/86Ed1q7lLc0TapiAISU8cMWHLoCo4wI4jM8GQazvPxntgfkxhiz
         2ZSA4IZW5Si/6jksdfwVNRNxrFq3EWZxMZttbBJnMHMVsowEnu9KUUccK7YbD9iPZamu
         DParMymbvqglGiQez25zSFsN1UKJDDtUYdY3QfXN5LHP3fVI24Qal2+w+ixfoMNzXLhj
         V9dA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7hyvY3JWBoATeou/9rB1bwHPuTVRUibflv0giDEMRFI=;
        b=3UUAPrDBn6aryWVtNxiLS7asCOMcMAjUJbU4CUbVqxKh8a4iYJ6CE6SK10ch4/ndJt
         B5LtgvQvD7dKPaAmR8Z9FCy9/lHtB+XMeFhuaiwPcJyfyFG8xmuZTQbEagqvW9ZblUli
         /YswbHBHfRuYk4uuB3o2C+Lenpe6UAWpYcgX452CpeDPgMEzu3+zEggkzO2FZYg8hF+B
         +wKITtukVVqr8iMZojEt0s9QzFewVBP/WZRUt0XrQCb25cDB6E247slJkOD6c56d/hfP
         CJUsv6B2bSuhy0AJOoVZoWxPy065WeyN195sBf0HzGvz97Kjv9wbvEJTlFEB2PX4qFZj
         WzRg==
X-Gm-Message-State: AFqh2kqguy8j4HZBlynSxeobZVM6EacLLp4+vz9PW3RJzZ+LVTLq+fUo
	ba0Tl7mpNIx/zK/YrXFFH/g=
X-Google-Smtp-Source: AMrXdXvIXbwA5JGbCDQQQ/3trRrkBQ1pENW/JDFoCpw2RIHuU1jqIwyoQaXoQv83ATNGp2NKk0iVhw==
X-Received: by 2002:a17:906:1410:b0:7c0:eba3:e2e with SMTP id p16-20020a170906141000b007c0eba30e2emr32660029ejc.31.1672695342501;
        Mon, 02 Jan 2023 13:35:42 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 4/6] hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
Date: Mon,  2 Jan 2023 22:35:02 +0100
Message-Id: <20230102213504.14646-5-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230102213504.14646-1-shentey@gmail.com>
References: <20230102213504.14646-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Subscribe to pci_bus_fire_intx_routing_notifier() instead, which allows
all PIIX device models to share a common piix_write_config().

While at it, move the subscription into machine code in order to resolve
TYPE_PIIX3_XEN_DEVICE.

In a possible future followup, pci_bus_fire_intx_routing_notifier() could
be adjusted in such a way that subscribing to it doesn't require
knowledge of the device firing it.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/pc_piix.c | 18 ++++++++++++++++++
 hw/isa/piix.c     | 22 +---------------------
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index c02f68010d..7ef0054b3a 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -86,6 +86,21 @@ static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
     return (pci_intx + slot_addend) & 3;
 }
 
+static void piix_intx_routing_notifier_xen(PCIDevice *dev)
+{
+    int i;
+
+    /* Scan for updates to PCI link routes (0x60-0x63). */
+    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
+        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
+        if (v & 0x80) {
+            v = 0;
+        }
+        v &= 0xf;
+        xen_set_pci_link_route(i, v);
+    }
+}
+
 /* PC hardware initialisation */
 static void pc_init1(MachineState *machine,
                      const char *host_type, const char *pci_type)
@@ -248,6 +263,9 @@ static void pc_init1(MachineState *machine,
         pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
 
         if (xen_enabled()) {
+            pci_device_set_intx_routing_notifier(
+                        pci_dev, piix_intx_routing_notifier_xen);
+
             /*
              * Xen supports additional interrupt routes from the PCI devices to
              * the IOAPIC: the four pins of each PCI device on the bus are also
diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index ac04781f46..d4cdb3dadb 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -125,26 +125,6 @@ static void piix_write_config(PCIDevice *dev, uint32_t address, uint32_t val,
     }
 }
 
-static void piix3_write_config_xen(PCIDevice *dev,
-                                   uint32_t address, uint32_t val, int len)
-{
-    int i;
-
-    /* Scan for updates to PCI link routes (0x60-0x63). */
-    for (i = 0; i < len; i++) {
-        uint8_t v = (val >> (8 * i)) & 0xff;
-        if (v & 0x80) {
-            v = 0;
-        }
-        v &= 0xf;
-        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
-            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
-        }
-    }
-
-    piix_write_config(dev, address, val, len);
-}
-
 static void piix_reset(DeviceState *dev)
 {
     PIIXState *d = PIIX_PCI_DEVICE(dev);
@@ -490,7 +470,7 @@ static void piix3_xen_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix3_write_config_xen;
+    k->config_write = piix_write_config;
     k->realize = piix3_realize;
     /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:35:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:35:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470404.729954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSTE-00053j-4V; Mon, 02 Jan 2023 21:35:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470404.729954; Mon, 02 Jan 2023 21:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSTD-00053C-TO; Mon, 02 Jan 2023 21:35:47 +0000
Received: by outflank-mailman (input) for mailman id 470404;
 Mon, 02 Jan 2023 21:35:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jV8v=47=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pCSTC-0003Wk-Qk
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 21:35:46 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a0576a9-8ae5-11ed-b8d0-410ff93cb8f0;
 Mon, 02 Jan 2023 22:35:45 +0100 (CET)
Received: by mail-ej1-x62f.google.com with SMTP id u19so69210598ejm.8
 for <xen-devel@lists.xenproject.org>; Mon, 02 Jan 2023 13:35:45 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 f11-20020a17090631cb00b0084c465709b7sm10583826ejf.74.2023.01.02.13.35.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 02 Jan 2023 13:35:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a0576a9-8ae5-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Di8D1ZLAqNG4YBVeY21LWd3M3jwmUnOPUPIRfWvH74o=;
        b=oKrx+5YbmTLqbEyB0IFlqqHxQljKl/Fp1de0CQdBSfdKl10T7XcJA8XTNLwawbTkeh
         j5cv6e8K3sfV/6QNn70B21D7ZKgQZTo6P0tzupn4POy+aqrOfkP+od97KKCbHnOo6Fpj
         sPkN9L6tV5zGaOfBboViBOI31hupccc6I1zTDQGrZnwnnQcNQ7H1IESNC/LCsbAvfHAZ
         tHza0+AzW+kC0P4eZuGrQfVFLSnww6j6TKQuz3rfwRxSJJBD6CJQdyvobGr3WA3BHfc3
         pRH+0ycpzvBgWh/4lVAbvjxJL97sgxaxg9vbiPlGIZI0NsS7h9yctI5oRHM+0gDXCPgf
         wI3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Di8D1ZLAqNG4YBVeY21LWd3M3jwmUnOPUPIRfWvH74o=;
        b=Y2yRrYs/cCAa2wY9z2Qk/jLy7yiywOKBrIzz4j0QDTToJaY1MeMRlj1isdRryTrVjT
         SteVF1MYNY5LCkyhJ1Lx98emwDOVlUwbJlMQQ9uhXfg6zNYFrEJGI7jDU4sjahSaXKf9
         zpxOAkTktxvOz5VDuff7xlvh63zY+dQCOaxqXKX0zWHUeShGiphVK79TrgQV3yb/nN0z
         ujE/aHjl5ggeMWycAV3K94djwkUDScVD5XE8c/RGI9JKxHA8o6aKW8Q69QLkhqhYbGP9
         5mjnBdw8TClOWLUKqnsrI+twtWG5JRhfE4nATOpHf73v/JZ7CpKoEoJMTXZS6m2hvKK8
         rtQw==
X-Gm-Message-State: AFqh2krWy2NAhUHG3p9G0Wyunap00h0CIFiQ7PfMk00RYXrLfv4r1XLH
	4Lab/GaN+oZIab2gXV/L68Wyb9YLU8M=
X-Google-Smtp-Source: AMrXdXuZ+eoqe1BE27lbsYyf/6auDgeXSZ1ibYoD7I6UMu7gzvCwnSHmEmW7pvlF2BvjpLremBPADA==
X-Received: by 2002:a17:907:a407:b0:84c:7974:8a73 with SMTP id sg7-20020a170907a40700b0084c79748a73mr22076362ejc.57.1672695344686;
        Mon, 02 Jan 2023 13:35:44 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH 6/6] hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
Date: Mon,  2 Jan 2023 22:35:04 +0100
Message-Id: <20230102213504.14646-7-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230102213504.14646-1-shentey@gmail.com>
References: <20230102213504.14646-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The previous patches turned TYPE_PIIX3_XEN_DEVICE into a clone of
TYPE_PIIX3_DEVICE. Remove this redundancy.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/pc_piix.c             |  4 +---
 hw/isa/piix.c                 | 20 --------------------
 include/hw/southbridge/piix.h |  1 -
 3 files changed, 1 insertion(+), 24 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 7ef0054b3a..76d98183ac 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
     if (pcmc->pci_enabled) {
         DeviceState *dev;
         PCIDevice *pci_dev;
-        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
-                                         : TYPE_PIIX3_DEVICE;
         int i;
 
         pci_bus = i440fx_init(pci_type,
@@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
                                        : pci_slot_get_pirq);
         pcms->bus = pci_bus;
 
-        pci_dev = pci_new_multifunction(-1, true, type);
+        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
         object_property_set_bool(OBJECT(pci_dev), "has-usb",
                                  machine_usb(machine), &error_abort);
         object_property_set_bool(OBJECT(pci_dev), "has-acpi",
diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index 98e9b12661..e4587352c9 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -33,7 +33,6 @@
 #include "hw/qdev-properties.h"
 #include "hw/ide/piix.h"
 #include "hw/isa/isa.h"
-#include "hw/xen/xen.h"
 #include "sysemu/runstate.h"
 #include "migration/vmstate.h"
 #include "hw/acpi/acpi_aml_interface.h"
@@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
     .class_init    = piix3_class_init,
 };
 
-static void piix3_xen_class_init(ObjectClass *klass, void *data)
-{
-    DeviceClass *dc = DEVICE_CLASS(klass);
-    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
-
-    k->realize = piix3_realize;
-    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
-    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
-    dc->vmsd = &vmstate_piix3;
-}
-
-static const TypeInfo piix3_xen_info = {
-    .name          = TYPE_PIIX3_XEN_DEVICE,
-    .parent        = TYPE_PIIX_PCI_DEVICE,
-    .instance_init = piix3_init,
-    .class_init    = piix3_xen_class_init,
-};
-
 static void piix4_realize(PCIDevice *dev, Error **errp)
 {
     ERRP_GUARD();
@@ -534,7 +515,6 @@ static void piix3_register_types(void)
 {
     type_register_static(&piix_pci_type_info);
     type_register_static(&piix3_info);
-    type_register_static(&piix3_xen_info);
     type_register_static(&piix4_info);
 }
 
diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
index 65ad8569da..b1fc94a742 100644
--- a/include/hw/southbridge/piix.h
+++ b/include/hw/southbridge/piix.h
@@ -77,7 +77,6 @@ struct PIIXState {
 OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
 
 #define TYPE_PIIX3_DEVICE "PIIX3"
-#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
 #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
 
 #endif
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 02 21:47:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 21:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470451.729966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSeY-00006N-Ba; Mon, 02 Jan 2023 21:47:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470451.729966; Mon, 02 Jan 2023 21:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCSeY-00006G-7w; Mon, 02 Jan 2023 21:47:30 +0000
Received: by outflank-mailman (input) for mailman id 470451;
 Mon, 02 Jan 2023 21:47:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCSeW-000066-Vp; Mon, 02 Jan 2023 21:47:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCSeW-0001X5-Qq; Mon, 02 Jan 2023 21:47:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCSeW-0007Pn-4C; Mon, 02 Jan 2023 21:47:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCSeW-0005FO-3k; Mon, 02 Jan 2023 21:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sDKInzDfiPQ0zyd0VH+Niz3QJ70OcqD45GsibYiTQ1w=; b=cDmHlGasIYSTuaJO/05Pvr+IKt
	3K6QxeVk48Pz8W90NuBwgZ7l10zjCiMIodRknwhtWf66FdrJzZr0EqDnu9ccUQfmVRvfdsWbWYsMp
	8RprGLmYrQecommj269vKvYw2Kc5jIpO7L85822pdRMf4BDiAHMinh04QUb+wTtlOSvQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175551-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175551: FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Jan 2023 21:47:28 +0000

flight 175551 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175551/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 175407
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 175407
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 175407
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 175407
 test-armhf-armhf-xl-credit1     <job status>                 broken  in 175547

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 175407 pass in 175551
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 175407 pass in 175551
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 175407 pass in 175551
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 175407 pass in 175551
 test-armhf-armhf-xl-credit1  5 host-install(5) broken in 175547 pass in 175551
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 175407 pass in 175551
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175545 pass in 175551
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 175545 pass in 175551
 test-amd64-i386-xl-vhd       22 guest-start.2    fail in 175547 pass in 175545
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail in 175547 pass in 175551
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail in 175549 pass in 175407
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 175549 pass in 175551
 test-armhf-armhf-xl 18 guest-start/debian.repeat fail in 175549 pass in 175551
 test-armhf-armhf-xl-credit2  14 guest-start                fail pass in 175547
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 175547
 test-armhf-armhf-xl-credit1  14 guest-start                fail pass in 175549

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175407 like 175197
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 175407 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 175407 never pass
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 175547 like 175197
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 175547 like 175197
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 175549 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 175549 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 175547 n/a

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   19 days
Testing same since   175407  2022-12-19 11:42:26 Z   14 days   35 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-armhf-armhf-xl-credit1 broken

Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 02 23:10:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Jan 2023 23:10:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470460.729977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCTwy-0000RN-8S; Mon, 02 Jan 2023 23:10:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470460.729977; Mon, 02 Jan 2023 23:10:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCTwy-0000RG-5q; Mon, 02 Jan 2023 23:10:36 +0000
Received: by outflank-mailman (input) for mailman id 470460;
 Mon, 02 Jan 2023 23:10:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lmSl=47=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pCTww-0000RA-6X
 for xen-devel@lists.xenproject.org; Mon, 02 Jan 2023 23:10:34 +0000
Received: from sonic301-22.consmr.mail.gq1.yahoo.com
 (sonic301-22.consmr.mail.gq1.yahoo.com [98.137.64.148])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5e65120-8af2-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 00:10:30 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic301.consmr.mail.gq1.yahoo.com with HTTP; Mon, 2 Jan 2023 23:10:28 +0000
Received: by hermes--production-ne1-7b69748c4d-nmpxj (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 963445cead58a3588ed1e45bff7a73fd; 
 Mon, 02 Jan 2023 23:10:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5e65120-8af2-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
Date: Mon, 2 Jan 2023 18:10:24 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, alex.williamson@redhat.com
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <20230102124605-mutt-send-email-mst@kernel.org>
From: Chuck Zmudzinski <brchuckz@netscape.net>
In-Reply-To: <20230102124605-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4657

On 1/2/23 12:46 PM, Michael S. Tsirkin wrote:
> On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
> > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > as noted in docs/igd-assign.txt in the Qemu source code.
> > 
> > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > a different slot. This problem often prevents the guest from booting.
> > 
> > The only available workaround is not good: Configure Xen HVM guests to use
> > the old and no longer maintained Qemu traditional device model available
> > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > 
> > To implement this feature in the Qemu upstream device model for Xen HVM
> > guests, introduce the following new functions, types, and macros:
> > 
> > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > * typedef XenPTQdevRealize function pointer
> > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > 
> > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > the xl toolstack with the gfx_passthru option enabled, which sets the
> > igd-passthru=on option to Qemu for the Xen HVM machine type.
> > 
> > The new xen_igd_reserve_slot function also needs to be implemented in
> > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > in which case it does nothing.
> > 
> > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > 
> > Move the call to xen_host_pci_device_get, and the associated error
> > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > initialize the device class and vendor values which enables the checks for
> > the Intel IGD to succeed. The verification that the host device is an
> > Intel IGD to be passed through is done by checking the domain, bus, slot,
> > and function values, as well as by checking that gfx_passthru is enabled,
> > the device class is VGA, and the device vendor is Intel.
> > 
> > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>
> I'm not sure why is the issue xen specific. Can you explain?
> Doesn't it affect kvm too?

Recall from docs/igd-assign.txt that there are two modes for
IGD passthrough: legacy and UPT. The IGD needs to be at
slot 2 only in legacy mode, which gives a single guest
exclusive access to the Intel IGD.

It's only Xen-specific insofar as Xen has no support for UPT
mode, so Xen must use legacy mode, which requires the IGD to
be at slot 2. I am not an expert with KVM, but if I
understand correctly, with KVM one can use UPT mode via the
Intel i915 KVMGT kernel module; in that case the guest sees
a virtual Intel GPU that can sit at any arbitrary slot, and
more than one guest can access the IGD through the KVMGT
kernel module.

Again, I do not have as much experience with KVM, but if I
understand correctly it is possible to use legacy mode with
KVM, and I think you are correct that if one uses KVM in
legacy mode without the i915 KVMGT kernel module, it would
be necessary to reserve slot 2 for the IGD on KVM as well.

Your question makes me curious; I have not been able to
determine whether anyone has tried IGD passthrough in legacy
mode on KVM with recent versions of Linux and QEMU. I will
try to reproduce the problem on KVM in legacy mode with
current versions of Linux and QEMU and report my findings.
With KVM, there may be enough flexibility to specify the
slot number for every PCI device in the guest. That
capability is not available in the xenlight (xl) toolstack
for managing Xen guests, so I have been using this patch to
ensure that the Intel IGD is at slot 2 in Xen guests created
by the xenlight toolstack.

The patch as is only fixes the problem on Xen, so if the
problem also exists on KVM, I agree that the patch should be
modified to fix it there too.

Chuck


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 03:15:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 03:15:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470472.729988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCXlr-00011d-6m; Tue, 03 Jan 2023 03:15:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470472.729988; Tue, 03 Jan 2023 03:15:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCXlr-00011V-1J; Tue, 03 Jan 2023 03:15:23 +0000
Received: by outflank-mailman (input) for mailman id 470472;
 Tue, 03 Jan 2023 03:15:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Awo3=5A=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pCXlq-00011P-1M
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 03:15:22 +0000
Received: from sonic310-22.consmr.mail.gq1.yahoo.com
 (sonic310-22.consmr.mail.gq1.yahoo.com [98.137.69.148])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d84929b4-8b14-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 04:15:20 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic310.consmr.mail.gq1.yahoo.com with HTTP; Tue, 3 Jan 2023 03:15:15 +0000
Received: by hermes--production-ne1-7b69748c4d-8kjnk (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID a4e7c13e8efaf23c69e74dbbd036ec23; 
 Tue, 03 Jan 2023 03:15:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d84929b4-8b14-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672715715; bh=mUvEtjv/I8yLh91vV87I0a3FTB0JyYixbRRmZITWn+U=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=Rszazt7e+Bzxyiv4od/qkS3bXTWNb/x02M884X0I00UJRm8xuIFfAhFKBVAlLdSqEqSXZsjFWsuSwCWHgZO2Zqmq0C3v6YfbR/OanmRrYcTIMM55BhpdsXpFxDvjp3RuhLnZF85P5Lx6nO4R3AvpF7rMynXKT6RLCKEtYr5qLo8WLwFZpDkXfTpfinnGHIpl8GsiOmZQkDE2sxiN8Lvbkr/y+Y4mS1h4iw8bov6tANc8v3+sRRlI9YXCJxTn8Ip4DetYzW4fiKwIy8yT/LO/ZeVCgEVvUBM5/cYANcrBvJZb/mFtih5QF6JzpOmsbyHbMfiQIOEcpO7HlQ08pl0u1A==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672715715; bh=lYJfJoQuxGxXFN6h0u7dbEyN70Bc3LSevVyoVriBqCD=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=sehfmdw8c2IZH7yxVsapPdyC5oZk8E9C8HG+RZlHweyTRT01NJXlGl1iSkBcdmKNSAvEE/1ffIB0aIHVFuf766DnCOnlbVQZPa40CKhW6Ej8xZHlok+XK5Wh3Etcf4lOTUMQlIsHpDym9nK6BCeIOhX70DK+orIkPdY1Fx82vY+fCRzaFzX/Jvz4D5y+Xc+mwvKd/3PP89Dnsp948+ybPy2HtCV0WJGsX8ARvxzpCrseLx3nHig1Kg898sWGib66rqMseYrfb5GQrzI5ew4OWat4mBItgYYoBggl3G40JmfQaWF/hk3KcR0HNcxqF8eKDhfZ/IJY2h2wWBrkkiOK2Q==
X-YMail-OSG: lCVByNUVM1n9wslqYSg4OXTvsdTRZ9_iuMTmzMW9dTREhHpL_UOylw05Cz7exkl
 Z5ljYFmVFCILlOFzUUr52ffQahGqKRKFSW_e.1IxBKheaSe..uR.mCsgd4Zzwbmj.gFsBav0HFj.
 dKStrESxn_aXF8JxMNE1d8Wxhsdmu8VENtmyq6g6T_Z88.RsjV8B9cXf..gvYgzqskiCNfPyrWgd
 GTGH_JGI0ZeVIiczSvmIrZnEc.2enwHsdFntOagaU3KPKalav98xS6M36QDnT20djOsRM.Ggt19H
 njBeoFSAXTGwE6yrODnAH6DcHRz0HUa_It3aQwqQ30GBhxHPktU5jHnjJUMXIlOUeOUdfy3_1TlW
 LR09f.yR1smbw8x7B.k02yzzO..7FQ_NZDfberb74CivbkqvqznI3CQUVcbGnOWhcmqss7kBu3S.
 2bCyKfPJIJw4etb9dv3sp.88kg2QORspv3HQ_Aln_cHonT0XT9yk8bzhQ2rOU5SOCFQMoZGgC_PE
 3CR_mAgs1_5g0qSd4ldP3Q3bj0NRcPy.n115m52vgTaghoM5GUKEbR9mvhK8BraD0ifqnrFL_kz.
 XCYPbCdMSM2DO2mOVDyRySzh4PLDLr7a07LXhmdHhOgSiK45BmqAvWcOOeRc.4VT0TAd79kqfU5.
 IcziZdxEcHsasp8rapBpIGkJ.GZAkGVhejkDvr16X4ceNSvIIhmoAWPzIZQtmDlmbeP9lXCvE4Pv
 CxqJHTIe1ocQoQbjhQ.jeIJcPyFNBXOVpiHeV4rVHLSIJjRzWJ8OZy.4SbiILz35xcATpdRLkOvZ
 Bxj9p81S.HztznklXUHBuYjFPg3RaEI_bWETggJ8OEnOgRdJhFh2ZvfVFJV9d7MzvYXdjmhPlfjJ
 655osScA4NmCcn9YBEFecN82TIKxFTnzFh_fJwjPZrkNsiOcF_oHp_a7_he4OxcDXZ9Dk0s9oRHb
 1DfyjPX7.qCSc7kK9bEE.ckWXkQE9JxHYFsPBzrL8PDgnTLB3Oq.1WRxeloHpPirqi_UkAUB7oO9
 FwHrcew9dluUAKfNnhG_8.utzChEsv2jhocIU5cyu6.C8qsGOZKKSU0j5ctTtJKnvH_iV9aW2j9Y
 ijOu4CK_B2KEYCyHp1AqekyWvvKRbpgEQsD6aUkA_hX.hhaS14H5FgNID20.DXw1dRiHys2cKBiu
 _2iluXRh_Xwuw1yumac92P3.TSNgfj6sPnMTg.tSKHYOJB.48WjdgHEk4OMTXh9W_FotUqHzPMng
 KdFKoV8s7yxGnC5BNlhKugrbvJhqrhpTV5skZBG.tR54LjQ.O8HbwHNE33p4fBIy.ctdRXVlyykr
 pyXmz4bn377tu0Pd2Te53a.MK72k6R.QYv7vwGDoxsQWDuwU6kQIPAOCb2gOyADp7Lr76eVN6Q9O
 ZvsVjbUw1qLUAHGZDa1.SISq47XkkBGKEP6c0_6ssR8DTBx84xxt8miA8WS_9ub.LxLjmQq6HT5L
 yDK8ljvXxbS5PpM1AOU.CMDQ3dUrOINXyPYW4NcRlcbqTob3LwJ2BrShjvRFNjYVHWKpw_sLokX5
 fWDVGxmfmZVkdZDBYtWJsNaOFcpviFO3cVu75eYRtwcHs8nvRNWW05zNEZ5PirJ3R3Y2QzuHt7n4
 uDxgBxWMemJfV6s5zzH95.iVKZR5saoT1WBWJpwGjyqhgjkEA7mFSbfkbbYJD0T3XCZ.HFRO9WX5
 fXtTXwOvXRvM3Dcj9tuq_2.Fs2hB3B5Vd1K0DCo6Cyp5rlpLC1zEA8S3Sk8wVDavi91C63GBCZj2
 _gIwiN6RJrwvEZTetXmpyjB34RP9mRaoUsgEJsju.M288S4RdsTye_yXLGKpfqh7EvOLpzZlle.l
 Pv55CFZs3ch8jkYkYvOkmZO4RSmPQ8kwjVO6RzPDczr_JjzLYFm0dh3BAnmxpPC7qkV0lnkgki7.
 cwViyU4OCEfAOky1t4gr8fTVE2S_DAT_fatW4C_DH5XKrDevZ7q67pAw2LQZZUubwYhcc71BrgKI
 0I4LWAplQzaZOUZNW8kU76eyEsYaacFsCEiNKQ2O_kU_uoZ0B89NoTzZRkIIO_RnrK6PUdTlrC9X
 r5lw7ebYBEk1eUDBfU6K9bDu6gvbhlz0bxGoo7PLgMlj7gB389sIKtgUYylENK.orwl7pd5jFLOF
 6i.s13qkMAAQFHwDiUyk3O.xqoTFRFFuTYkrgW2HvQs92ylDoYy24Jvv8nBd8ExGljETdrUM5dqV
 tEqm5pDVYa7cAvIlC1Q_m78E-
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com>
Date: Mon, 2 Jan 2023 22:15:04 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
References: <20230102213504.14646-1-shentey@gmail.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230102213504.14646-1-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 1835

On 1/2/23 4:34 PM, Bernhard Beschow wrote:
> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
> machine agnostic to the precise southbridge being used. 2/ will become
> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>
> Testing done:
> None, because I don't know how to conduct this properly :(
>
> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>           "[PATCH v4 00/30] Consolidate PIIX south bridges"
>
> Bernhard Beschow (6):
>   include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
>   hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>   hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>   hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>   hw/isa/piix: Resolve redundant k->config_write assignments
>   hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>
>  hw/i386/pc_piix.c             | 34 ++++++++++++++++--
>  hw/i386/xen/xen-hvm.c         |  9 +++--
>  hw/isa/piix.c                 | 66 +----------------------------------

This file does not exist on the QEMU master branch,
but hw/isa/piix3.c and hw/isa/piix4.c do.

I tried renaming it from piix.c to piix3.c in the patch, but
the patch set still does not apply cleanly on my tree.

Is this patch set rebased against something other than
the current QEMU master branch?

I have a system suitable for testing this patch set, but
I need guidance on how to apply it to the QEMU source tree.
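In case it is useful, here is a plausible sketch of applying the series
on top of its "Based-on" prerequisite with the b4 tool against the
lore.kernel.org archive; the message IDs are taken from the mails in
this thread, and the branch name is arbitrary:

```shell
# Sketch, assuming b4 is installed and origin/master tracks qemu.git.
git checkout -b piix-consolidation origin/master
# Apply the prerequisite "Consolidate PIIX south bridges" v4 series first...
b4 am -o - 20221221170003.2929-1-shentey@gmail.com | git am
# ...then this series on top of it.
b4 am -o - 20230102213504.14646-1-shentey@gmail.com | git am
```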

Thanks,

Chuck Zmudzinski

>  include/hw/southbridge/piix.h |  1 -
>  include/hw/xen/xen.h          |  2 +-
>  stubs/xen-hw-stub.c           |  2 +-
>  6 files changed, 40 insertions(+), 74 deletions(-)
>



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 05:50:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 05:50:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470479.729999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCaBQ-0007nn-1y; Tue, 03 Jan 2023 05:49:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470479.729999; Tue, 03 Jan 2023 05:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCaBP-0007ng-Ut; Tue, 03 Jan 2023 05:49:55 +0000
Received: by outflank-mailman (input) for mailman id 470479;
 Tue, 03 Jan 2023 05:49:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCaBO-0007nW-2J; Tue, 03 Jan 2023 05:49:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCaBN-0007RA-Vn; Tue, 03 Jan 2023 05:49:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCaBN-0004n1-FY; Tue, 03 Jan 2023 05:49:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCaBN-0001rZ-DQ; Tue, 03 Jan 2023 05:49:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Tf1mkHv01cPgr10qPYZWIuf6QJRrCnwt08+WxRELxdI=; b=pkCEfslchEx543reFDb77Ren+p
	nXRtZzDKzs3S9EeeulzSFPb69R1WnJ1e7f2h5g1Uq0GBNfmoVMDX787rn72LfN+LolFr2XFIMq25V
	Fv7kwneZfy8b/FVJaMUk4SEHWyEDDP6YgFXH5IQX/xbAYFxCg1Kd0lfmgBYdK2fCqvH8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175552-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175552: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=69b41ac87e4a664de78a395ff97166f0b2943210
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Jan 2023 05:49:53 +0000

flight 175552 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175552/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                69b41ac87e4a664de78a395ff97166f0b2943210
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   87 days
Failing since        173470  2022-10-08 06:21:34 Z   86 days  180 attempts
Testing same since   175552  2023-01-02 21:13:23 Z    0 days    1 attempts

------------------------------------------------------------
3257 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 497265 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 06:33:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 06:33:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470488.730009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCar6-0004WG-Am; Tue, 03 Jan 2023 06:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470488.730009; Tue, 03 Jan 2023 06:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCar6-0004W9-8D; Tue, 03 Jan 2023 06:33:00 +0000
Received: by outflank-mailman (input) for mailman id 470488;
 Tue, 03 Jan 2023 06:32:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCar4-0004Vz-Mg; Tue, 03 Jan 2023 06:32:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCar4-0008Rt-CQ; Tue, 03 Jan 2023 06:32:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCar3-0007Ti-Qs; Tue, 03 Jan 2023 06:32:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCar3-0000kK-QL; Tue, 03 Jan 2023 06:32:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FY67voJVR18X58zrC1O5tndNZGNUWeXoQ52VEUY4qac=; b=DUJcA3MN3TMUjvE0tUxZ3JQKg2
	hxVMtirOZLD85s/sUsuLMJbCYSp8Xs50ihtCwjRGzdyAeRNsUi8CCn9M4ExqARBp+WN5Uf3Wal0Hg
	qvYaft0MtSGmoaM8Il4RELmrjiLY7/hr/5I89MIEmugviibn589nn2tWFWGg3COXFiBI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175553-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175553: FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    linux-5.4:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Jan 2023 06:32:57 +0000

flight 175553 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175553/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>     broken in 175407
 test-amd64-i386-qemuu-rhel6hvm-intel    <job status>          broken in 175407
 test-amd64-i386-xl-qemuu-ws16-amd64    <job status>           broken in 175407
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 175407
 test-armhf-armhf-xl-credit1     <job status>                 broken  in 175547

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 5 host-install(5) broken in 175407 pass in 175553
 test-amd64-i386-qemuu-rhel6hvm-intel 5 host-install(5) broken in 175407 pass in 175553
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 175407 pass in 175553
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken in 175407 pass in 175553
 test-armhf-armhf-xl-credit1  5 host-install(5) broken in 175547 pass in 175553
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175545 pass in 175553
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 175545 pass in 175553
 test-amd64-i386-xl-vhd       22 guest-start.2    fail in 175547 pass in 175545
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail in 175547 pass in 175553
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail in 175549 pass in 175407
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 175549 pass in 175553
 test-armhf-armhf-xl 18 guest-start/debian.repeat fail in 175549 pass in 175553
 test-armhf-armhf-xl-credit2  14 guest-start                fail pass in 175547
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 175547
 test-armhf-armhf-xl-credit1  14 guest-start                fail pass in 175549
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 175551
 test-armhf-armhf-xl-multivcpu 14 guest-start               fail pass in 175551
 test-armhf-armhf-xl-vhd      17 guest-start/debian.repeat  fail pass in 175551

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale 18 guest-start/debian.repeat fail blocked in 175197
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175407 like 175197
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 175407 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 175407 never pass
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 175547 like 175197
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 175549 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 175549 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 175549 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 175549 never pass
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail in 175551 like 175197
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 175547 n/a

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   19 days
Testing same since   175407  2022-12-19 11:42:26 Z   14 days   36 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job test-amd64-i386-qemuu-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-armhf-armhf-xl-credit1 broken

Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 09:22:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 09:22:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470503.730021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCdUO-0004im-P6; Tue, 03 Jan 2023 09:21:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470503.730021; Tue, 03 Jan 2023 09:21:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCdUO-0004if-MF; Tue, 03 Jan 2023 09:21:44 +0000
Received: by outflank-mailman (input) for mailman id 470503;
 Tue, 03 Jan 2023 09:21:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMDV=5A=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pCdUN-0004iZ-Cx
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 09:21:43 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2057.outbound.protection.outlook.com [40.107.237.57])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 07aebe6f-8b48-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 10:21:41 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MN0PR12MB6150.namprd12.prod.outlook.com (2603:10b6:208:3c6::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 09:21:38 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::9856:da7:1ff1:d55c]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::9856:da7:1ff1:d55c%5]) with mapi id 15.20.5944.019; Tue, 3 Jan 2023
 09:21:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07aebe6f-8b48-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L57P4pEsXqa97kttUkByOOOjc/CDl82noJeBFXsgziCBWwbh9BOXZduzWcgBZ+xjMzVSfsU6GWD3p1od5MiW6AmrleEkd3b4SI9y8s2rJCLb/FPVjMs6aJnpEoqnz9B0Y9NSJdJuaThDiQB6pNRdtN8fSgZCyNiprBK4rxhLYJF/YEIrxbky4U0Yx7xHYAKli7YjFw/K/wMDAvcshizyKw5DvuRKC4ndDzuKNEfR7AFPI5zuQtwTKCmtXRZD7RXroWDeXxSYxSTj/olS4E4SrP3Wm0ahBRodY78uQ4KaB9rXURY7AmLbtN18XX2ihSrc9bEkjTUp4ZagraymAf+qJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7qkI29jLQQRBYRidEwFXmAcmunYaTX4OpV8a84XMPHA=;
 b=BqqAfsyEMXTP/CABtrHwV5l0aYPGaBDwebePedHap8DFQHHtm+TKh5xJ6sfSKpHbDQ/TfxAIPmBZy4645Eeur0OPb0E87eHSF5s1HCZukb4ndm2OA1TzaEE0VMcC/mF3nPnbgniz40OVtQZUzmdcl7m1eH8BtSKJgiZt64zOPDnInFS33akbTp+Bj7oTcxpYn7wLUeNG06ZbKqV2n4cCSqtiMMLkMGaIObpKpo8vr6UirX7YGUn/T+C9GbRFUyrTZej/3JqnMGIP+7PVOcacPlWfJJH2sZ8SOA8TvFUCzO9bvLZZZBwM8h5OvFTYXyhzg55cvMU8Sdci7PcuR1mGIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7qkI29jLQQRBYRidEwFXmAcmunYaTX4OpV8a84XMPHA=;
 b=Q2WrKf98MKoFUNBXEDLn4CfYpbVfduh51k6u1T4MY2mez+vd5MpknQ4xfSdmtUR/qdg2/Y9jJSmv/SzpSPLhGM4b9ZSvPz6+csq9EenDN+mzPcFhxviNmQ7jZDA6BsqaCANMJEws7etOKqR7zV3DybHkJN9fkdZVEWZ6GoNvctA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <2ddcaf95-03f5-81fb-3091-316b99201a1c@amd.com>
Date: Tue, 3 Jan 2023 09:21:33 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm: Print memory size in decimal in construct_domU
To: xen-devel@lists.xenproject.org
References: <20230102144904.17619-1-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230102144904.17619-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0077.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:190::10) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|MN0PR12MB6150:EE_
X-MS-Office365-Filtering-Correlation-Id: 7a872c1d-6783-4b2b-95c8-08daed6bea8d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7a872c1d-6783-4b2b-95c8-08daed6bea8d
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jan 2023 09:21:38.0964
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB6150

Hi Michal,

On 02/01/2023 14:49, Michal Orzel wrote:
> Printing domain's memory size in hex without even prepending it
> with 0x is not very useful and can be misleading. Switch to decimal
> notation.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
>   xen/arch/arm/domain_build.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 829cea8de84f..7e204372368c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
>       if ( rc != 0 )
>           return rc;
>   
> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", d->max_vcpus, mem);

I would prefer it to be printed in hex format with 0x prefixed. The 
reason being that mem is obtained from the domU's device-tree 'memory' 
prop, where the values are in hex.

It will help the user to debug easily without requiring the person 
to manually calculate the hex equivalent and then try to correlate it 
with what is written in the dts.

- Ayan

>   
>       kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
>   


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 09:39:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 09:39:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470511.730032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCdlu-0006L3-B6; Tue, 03 Jan 2023 09:39:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470511.730032; Tue, 03 Jan 2023 09:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCdlu-0006Kw-8D; Tue, 03 Jan 2023 09:39:50 +0000
Received: by outflank-mailman (input) for mailman id 470511;
 Tue, 03 Jan 2023 09:39:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cMja=5A=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pCdlt-0006Kq-Cf
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 09:39:49 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on2074.outbound.protection.outlook.com [40.107.96.74])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 687222b4-8b4a-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 10:38:42 +0100 (CET)
Received: from BN7PR06CA0072.namprd06.prod.outlook.com (2603:10b6:408:34::49)
 by SN7PR12MB6862.namprd12.prod.outlook.com (2603:10b6:806:265::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 09:39:42 +0000
Received: from BN8NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:34:cafe::49) by BN7PR06CA0072.outlook.office365.com
 (2603:10b6:408:34::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5966.19 via Frontend
 Transport; Tue, 3 Jan 2023 09:39:42 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT085.mail.protection.outlook.com (10.13.176.100) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5944.17 via Frontend Transport; Tue, 3 Jan 2023 09:39:42 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 03:39:42 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 01:39:41 -0800
Received: from [10.71.193.33] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 3 Jan 2023 03:39:41 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 687222b4-8b4a-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZbDtK9IN8dJyVC3JAft+EUdIKXjvw1pWdfjiTScHZYPs09NGXc4D8aJ5YCI4qOAoUg9EL2x4OEhE8CsG9kTYhh6xz3WOwQ7k263D4MYei7WkteBqNNSkZqAPrr/tz829tONYuYrdoV8hJgeAN/ahQUmCIWEUaf763T0l3oIFNMiK6BQ2++94wi14ffKsY9eAZp3gTp1UkWFyPW+3dLqzOboEJXmJraMTYPo/g92G2Bnoi3IeL/dI7w/eDCgoJVHs79r0rtCYEBQltGg6w60zCOhPhqfHZOWBTvdvOMlXt3tqZMcgXXWO2LhFLBZXyPMqnoZqzBU/cZjXlS6/4EgSSA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=c2ya0maJOkqbjYxQwcP9WS/1jwK67qwB1Qca1G3RDtk=;
 b=F6F11XGXRk69Tld0eJOrFiyWaWOEXOIK6UQnTP1QzEHJi7yQ4QNc/Zs489G1VNOEUN2pAKEzqGcHuMzGBmDKZTOTLbyXXvQ9q+52gwA5ex63hvERzB1PKLskX/HTgP9I8n4uKismrNQaJnhMAsX2KGcoWxsLTrPdH2BeJm6IGnGGz70Epn9a+0IDTlc0FzZM5VSgoutzvS1mvz0W0lGXHkm20x4FW0uYg5HSAn8PUPxgYZDSZyrznXEJnbfl/J2GTKmtLLUpxeqC0ufqQDXN1T82H3niroedVHXO8P8STpwWaW651FP/Jw0RvOMW6mmALUfUybxELJnFIlvMuLhA+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=c2ya0maJOkqbjYxQwcP9WS/1jwK67qwB1Qca1G3RDtk=;
 b=KnP0mEITVSQ5oY11j3KQuwkwgnWejGpt+sTKzPu7dcD/gZ2XQuvAXJmVsP+5LCHn1rpD+mzoR9NnLT0st87XJEU1wdsanAYUzG/SX8hfzXr/ZlF1HU1jM4Apn1pRDk6+jUPHeM/q8x6bB0Ur0pIBMxhtmNHcOmRbFbo0R7QtXjU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <562770a6-2933-43b8-326f-3bb6d8e0ce61@amd.com>
Date: Tue, 3 Jan 2023 10:39:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm: Print memory size in decimal in construct_domU
To: Ayan Kumar Halder <ayankuma@amd.com>, <xen-devel@lists.xenproject.org>
References: <20230102144904.17619-1-michal.orzel@amd.com>
 <2ddcaf95-03f5-81fb-3091-316b99201a1c@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <2ddcaf95-03f5-81fb-3091-316b99201a1c@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT085:EE_|SN7PR12MB6862:EE_
X-MS-Office365-Filtering-Correlation-Id: 5c1305de-deb5-47ba-ab1b-08daed6e70f9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jan 2023 09:39:42.3473
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c1305de-deb5-47ba-ab1b-08daed6e70f9
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB6862

Hi Ayan,

On 03/01/2023 10:21, Ayan Kumar Halder wrote:
> 
> 
> Hi Michal,
> 
> On 02/01/2023 14:49, Michal Orzel wrote:
>> Printing domain's memory size in hex without even prepending it
>> with 0x is not very useful and can be misleading. Switch to decimal
>> notation.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>>   xen/arch/arm/domain_build.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 829cea8de84f..7e204372368c 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
>>       if ( rc != 0 )
>>           return rc;
>>
>> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
>> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", d->max_vcpus, mem);
> 
> I would prefer it to be printed in hex format with 0x prefixed. The
> reason being that mem is obtained from the domU's device-tree 'memory'
> prop, where the values are in hex.
No, I cannot agree. Refer to the booting.txt documentation:
"A 64-bit integer specifying the amount of kilobytes of RAM to allocate to the guest."
Also note that the provided examples use decimal values.
All in all, the notation does not matter: you can provide e.g. "memory = 131072;" or "memory = 0x20000;".
I find it a bit odd to print e.g. 0x20000KB, and decimal is easier to read.
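
For illustration, a dom0less domU node accepts either notation for the
same amount of RAM (fragment modelled on the booting.txt examples; the
surrounding property values here are assumptions, not from the patch):

    domU1 {
        compatible = "xen,domain";
        #address-cells = <1>;
        #size-cells = <1>;
        cpus = <1>;
        memory = <0 131072>;    /* 131072 KB = 128 MB, decimal */
        /* equivalently: memory = <0 0x20000>; */
        vpl011;
    };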

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 09:45:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 09:45:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470518.730043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCdqw-0007l1-V7; Tue, 03 Jan 2023 09:45:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470518.730043; Tue, 03 Jan 2023 09:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCdqw-0007ku-Rb; Tue, 03 Jan 2023 09:45:02 +0000
Received: by outflank-mailman (input) for mailman id 470518;
 Tue, 03 Jan 2023 09:45:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pCdqv-0007ko-Sq
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 09:45:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCdqv-0005MU-LU; Tue, 03 Jan 2023 09:45:01 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232] helo=[192.168.2.2])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCdqv-0001LC-EP; Tue, 03 Jan 2023 09:45:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=UNZIIxpcYI8SXohcSlee+JrQs+hB8zr/zTLdkwPGu6U=; b=vXxNb5bT7zMh005sKxH5BTj6ZP
	8kqheebd0ivh1efYQKu5xfDyk0P1bsKqcrCzkQPB8WQwJpueVetAG4xwJlQ7tGX7iJqR0y9pxYnbF
	j7FgOItpEL6pyP0Twc2diHDfHX2nlkJBLN07qlu9No/doBiYWty62vW6zJM1J2vR9vgk=;
Message-ID: <57a19b1b-7078-7653-978a-d4b72e564d1a@xen.org>
Date: Tue, 3 Jan 2023 09:44:59 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm: Print memory size in decimal in construct_domU
To: Ayan Kumar Halder <ayankuma@amd.com>, xen-devel@lists.xenproject.org
References: <20230102144904.17619-1-michal.orzel@amd.com>
 <2ddcaf95-03f5-81fb-3091-316b99201a1c@amd.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <2ddcaf95-03f5-81fb-3091-316b99201a1c@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Ayan,

On 03/01/2023 09:21, Ayan Kumar Halder wrote:
> On 02/01/2023 14:49, Michal Orzel wrote:
>> Printing domain's memory size in hex without even prepending it
>> with 0x is not very useful and can be misleading. Switch to decimal
>> notation.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>>   xen/arch/arm/domain_build.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 829cea8de84f..7e204372368c 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
>>       if ( rc != 0 )
>>           return rc;
>> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", 
>> d->max_vcpus, mem);
>> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", 
>> d->max_vcpus, mem);
> 
> I would prefer it to be printed in hex format with 0x prefixed. The 
> reason being that mem is obtained from the domU's device-tree 'memory' 
> prop, where the values are in hex.
>
> It will help the user to debug easily without requiring the person 
> to manually calculate the hex equivalent and then try to correlate it 
> with what is written in the dts.

I am a bit confused by your reasoning. The value stored in the 
device-tree is a 64-bit value. It is then up to the consumer to decide 
whether the output provided is in hexadecimal or decimal.

So are you saying that the tool dumping the device-tree will show the 
values in hexadecimal?

If so, the argument is the same for the number of CPUs (you could have 
more than 15 vCPUs). So I don't think this argument can be used here.

TBH, I am a bit split between using hexadecimal and decimal here. For 
smaller values, decimal is definitely easier to read, but for larger ones 
(i.e. GB), hexadecimal would help (it is easier to do the math).

So I would lean towards using hexadecimal for the memory (so adding 
the 0x).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 09:56:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 09:56:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470525.730054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCe29-0000og-2D; Tue, 03 Jan 2023 09:56:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470525.730054; Tue, 03 Jan 2023 09:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCe28-0000oZ-UD; Tue, 03 Jan 2023 09:56:36 +0000
Received: by outflank-mailman (input) for mailman id 470525;
 Tue, 03 Jan 2023 09:56:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pCe27-0000oS-66
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 09:56:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCe27-0005kL-1z; Tue, 03 Jan 2023 09:56:35 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232] helo=[192.168.2.2])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCe26-0001gu-R1; Tue, 03 Jan 2023 09:56:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=Usdo28dg4bFOA1INrPO7hb0HNbXQ9TsZkQjhVBPS8YE=; b=yRG0WEh7qnNYMQLG7sr8ScIxSx
	/mL2DyiY9gxVe07H0G2xqSU0KSfRXymdEDKVDSZeBPkxMJ39E6qucOybjZby9x6rFiV/Kl/koKu6n
	0HHDPUFuugImvhYg8Om2fp9j3/Xa5/8pqfNk+UBPn9x0WNK7ynfjraaapyGgUxDy2O/g=;
Message-ID: <adbaed89-3327-c339-3e36-0a0243a3ffd2@xen.org>
Date: Tue, 3 Jan 2023 09:56:33 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm: Print memory size in decimal in construct_domU
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, Ayan Kumar Halder
 <ayankuma@amd.com>, xen-devel@lists.xenproject.org
References: <20230102144904.17619-1-michal.orzel@amd.com>
 <2ddcaf95-03f5-81fb-3091-316b99201a1c@amd.com>
 <562770a6-2933-43b8-326f-3bb6d8e0ce61@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <562770a6-2933-43b8-326f-3bb6d8e0ce61@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 03/01/2023 09:39, Michal Orzel wrote:
> On 03/01/2023 10:21, Ayan Kumar Halder wrote:
>>
>>
>> Hi Michal,
>>
>> On 02/01/2023 14:49, Michal Orzel wrote:
>>> Printing domain's memory size in hex without even prepending it
>>> with 0x is not very useful and can be misleading. Switch to decimal
>>> notation.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>> ---
>>>    xen/arch/arm/domain_build.c | 2 +-
>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 829cea8de84f..7e204372368c 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
>>>        if ( rc != 0 )
>>>            return rc;
>>>
>>> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
>>> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", d->max_vcpus, mem);
>>
>> I would prefer it to be printed in hex format with 0x prefixed. The
>> reason being that mem is obtained from the domU's device-tree 'memory'
>> prop, where the values are in hex.
> No, I cannot agree. Refer to the booting.txt documentation:
> "A 64-bit integer specifying the amount of kilobytes of RAM to allocate to the guest."
> Also note that the provided examples use decimal values.
> All in all, the notation does not matter: you can provide e.g. "memory = 131072;" or "memory = 0x20000;".
> I find it a bit odd to print e.g. 0x20000KB, and decimal is easier to read.
By easier, do you mean you can easily figure out how much memory in 
GB/MB/KB you gave to the guest? If so, then I have to disagree. Without 
a calculator, I would find the split quicker from hex.

If you want to print in decimal, then I think we should split the amount 
into GB/MB/KB. Otherwise, we should stick to hexadecimal (so add the 0x).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 10:08:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 10:08:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470533.730064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCeDx-0002Pn-40; Tue, 03 Jan 2023 10:08:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470533.730064; Tue, 03 Jan 2023 10:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCeDx-0002Pg-1L; Tue, 03 Jan 2023 10:08:49 +0000
Received: by outflank-mailman (input) for mailman id 470533;
 Tue, 03 Jan 2023 10:08:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cMja=5A=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pCeDw-0002Pa-0j
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 10:08:48 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2075.outbound.protection.outlook.com [40.107.92.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9b29dcfa-8b4e-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 11:08:45 +0100 (CET)
Received: from BN0PR04CA0186.namprd04.prod.outlook.com (2603:10b6:408:e9::11)
 by PH0PR12MB5608.namprd12.prod.outlook.com (2603:10b6:510:143::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 10:08:42 +0000
Received: from BN8NAM11FT095.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e9:cafe::7e) by BN0PR04CA0186.outlook.office365.com
 (2603:10b6:408:e9::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5966.19 via Frontend
 Transport; Tue, 3 Jan 2023 10:08:42 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT095.mail.protection.outlook.com (10.13.176.206) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5944.17 via Frontend Transport; Tue, 3 Jan 2023 10:08:41 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 04:08:40 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 02:08:40 -0800
Received: from [10.71.193.33] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 3 Jan 2023 04:08:39 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b29dcfa-8b4e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KQlAG2TdtiaLrntgRqveCMe6JhTH4SsrbrdG2OFGTR0PnkcFEhX4hioMc5kqQd/MjniP5T5wjSNGL5gasNyfSulBdRzv9YTCgQAB0PeG2b/rNDxt6RvOuqg+oT622TFlKUFw+Mvf1QJq5ISZpFy3XJZ/Jqd9lMyOhdU6We4isuxdCUTho563wBqtFYYw/HN9o2/QPljt4J7yjdR/HLXcg8hC21EfmH6hS81/qvsubUOwfgh6bVMseYY12MaF5TQRDiP0JMSLpMmcTmRGbNO7MiNHfLM8JH5kwOTjBnVYBpK8z2tWH6Pe+DtyL2R6pHOU10BhpPn0vdj74OFEtJ+d/Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IFbfQOi4RZr5UwQVShC5LQVMlBWdSbMPdaMTF/RdxIM=;
 b=aON696qhTVIuXYB3sZEhlUyLh5i+sltmNKN0TpC1P2HnZ7MlNVC1XZL5BlTZJTLcHV+gp7mlrIj31POmuV+EVPHRmQOopbk2UL3mw/7RG/gVGnN5ysffsAupo0nCfSJ7xQLq4DXwyHhtwDCbDJmxLVTcKOE45cFnxCAbOASiNYSMpn+huXQEorJj6hseOZNogcM/J92WG+FC1d7RiPjTZPPtsvrTBIKu60uDM+UVW5lccz6V7U404/oiIVXGMtpIJ2Zd7zrXrzkYo0rP41jVDmPNRuy2UM5s2pi8z6GudV0kPPWMtg7/1N/euUsaHIeq01+YyCf6rjvhmx95MqW+Tw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IFbfQOi4RZr5UwQVShC5LQVMlBWdSbMPdaMTF/RdxIM=;
 b=bia2XultJv1WyD2zxHUxT3wBf26RWLYO1QCCBWChdtyXAt/X9FHMfeUIRctuidPRFOH0x58x0NR9vzIX8LwO0iV6Nk3K8nbN9abF1U4tVM/qNaTlh10hZYGEZTQMZWiTKDeXs6C7oQYQOhKj+rjK2+mYQSQvL0PT9Rt94UFwHko=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <bd99a2ef-5e31-bc38-ce64-ad7372112cab@amd.com>
Date: Tue, 3 Jan 2023 11:08:39 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm: Print memory size in decimal in construct_domU
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Ayan Kumar Halder <ayankuma@amd.com>,
	<xen-devel@lists.xenproject.org>
References: <20230102144904.17619-1-michal.orzel@amd.com>
 <2ddcaf95-03f5-81fb-3091-316b99201a1c@amd.com>
 <562770a6-2933-43b8-326f-3bb6d8e0ce61@amd.com>
 <adbaed89-3327-c339-3e36-0a0243a3ffd2@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <adbaed89-3327-c339-3e36-0a0243a3ffd2@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT095:EE_|PH0PR12MB5608:EE_
X-MS-Office365-Filtering-Correlation-Id: b1d014c4-b8ac-4a02-1a26-08daed727de3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fJynHSJnSPEBvzbxghtpDupzx9m9Qee+q1G70VG1EtoMAmrWrGOfp0Kd0HRcKMQfHbHlZWZPU2LwnmXIeyyI8VC22z5PN0v02sge4e5tO9m67pLhi+NGYF8GJP9lWEvcgog5smDgaSIFazlJsCCAEKcJYwxuR11RMDSdkX+f2Wq8f2yAEGA6m5Sx4mhhdl+TVEQa9zbOFv6uxTkchaZsGQsgzN61QfeH1WCSHzMrJoXU/fLcAddOkjn2k8QgMhwmWxwPIrV+U14KJrGVBRclJbV36laKw5ZX6kQnxhl048R6Gj1R5cgCdbr8IHYIL2gtMaRpexPzHEY4X5MRIvMEi/rrIaoSVYkS01q2cYULM7upLNHTMFIbC9OYeic4cmXeHCH1RAsIqcCfNFeMSCcSf3StkdS6Yq9tGHwiMXtl84Q9uQlCViEZtyuG49Is0J5ULWLTlYH69DOb8Dm97JJbEBibEQegin5qn6r33V4KLdIQBr0Vl1BmiXVTwePib7EdeExISxQZ9BwgFZR4hylpg4zyn2MeYRnFpImMWzoDAqe0VLgytr3wjUSw2dZbN7lXlH4r4AN5ACRsvLFWTd3TdlPIcmo9gZJ36LHIRLwO+Zgj4bMWlrGcmNGdKGz5rrJRETl9EaL9oTeRqZSPRjuktRNueYI3GGNcL7HPVD4lLapyCiFDAu93I3XsdGioAzR+rAzZBMQgQMFDEr5LXBVgONqiXUV+u2AIdpvdias3DCWLydRuGpuW5isxP+cmG5+QdI62q6uMPS31/HquIftlvM5Qr20+J94E+3Cad6B0c7A=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(39860400002)(396003)(346002)(376002)(451199015)(36840700001)(46966006)(40470700004)(81166007)(2616005)(316002)(16576012)(26005)(356005)(40480700001)(186003)(478600001)(40460700003)(82310400005)(86362001)(36756003)(53546011)(31696002)(110136005)(41300700001)(336012)(2906002)(8936002)(44832011)(426003)(47076005)(83380400001)(5660300002)(82740400003)(8676002)(31686004)(70586007)(36860700001)(70206006)(22166006)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jan 2023 10:08:41.9499
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b1d014c4-b8ac-4a02-1a26-08daed727de3
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT095.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5608

Hi Julien,

On 03/01/2023 10:56, Julien Grall wrote:
> 
> 
> Hi Michal,
> 
> On 03/01/2023 09:39, Michal Orzel wrote:
>> On 03/01/2023 10:21, Ayan Kumar Halder wrote:
>>>
>>>
>>> Hi Michal,
>>>
>>> On 02/01/2023 14:49, Michal Orzel wrote:
>>>> Printing domain's memory size in hex without even prepending it
>>>> with 0x is not very useful and can be misleading. Switch to decimal
>>>> notation.
>>>>
>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>>> ---
>>>>    xen/arch/arm/domain_build.c | 2 +-
>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>> index 829cea8de84f..7e204372368c 100644
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
>>>>        if ( rc != 0 )
>>>>            return rc;
>>>>
>>>> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
>>>> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", d->max_vcpus, mem);
>>>
>>> I would prefer it to be printed in hex with a 0x prefix. The
>>> reason is that mem is obtained from the domU's 'memory' device-tree
>>> property, where the values are in hex.
>> No, I cannot agree. Refer to the booting.txt documentation:
>> "A 64-bit integer specifying the amount of kilobytes of RAM to allocate to the guest."
>> Also note that the provided examples use decimal values.
>> All in all, the notation does not matter: you can provide e.g. "memory = 131072;" or "memory = 0x20000;".
>> I find it a bit odd to print e.g. 0x20000KB, and decimal is easier to read.
> By easier, do you mean you can easily figure out how much memory in
> GB/MB/KB you gave to the guest? If so, then I have to disagree. Without
> a calculator, I will find the split quicker.
I guess it depends on the size, but you have a valid point.

> 
> If you want to print in decimal, then I think we should split the amount
> into GB/MB/KB. Otherwise, we should stick to hexadecimal (i.e. add 0x).
Ok, I will then just add a prefix.

> 
> Cheers,
> 
> --
> Julien Grall

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 10:25:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 10:25:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470543.730076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCeUE-0004pl-LW; Tue, 03 Jan 2023 10:25:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470543.730076; Tue, 03 Jan 2023 10:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCeUE-0004pe-IS; Tue, 03 Jan 2023 10:25:38 +0000
Received: by outflank-mailman (input) for mailman id 470543;
 Tue, 03 Jan 2023 10:25:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cMja=5A=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pCeUD-0004pY-AF
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 10:25:37 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2063.outbound.protection.outlook.com [40.107.237.63])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f51e8cae-8b50-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 11:25:35 +0100 (CET)
Received: from DS7PR05CA0050.namprd05.prod.outlook.com (2603:10b6:8:2f::9) by
 SN7PR12MB6814.namprd12.prod.outlook.com (2603:10b6:806:266::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5944.19; Tue, 3 Jan 2023 10:25:32 +0000
Received: from DS1PEPF0000E643.namprd02.prod.outlook.com
 (2603:10b6:8:2f:cafe::ea) by DS7PR05CA0050.outlook.office365.com
 (2603:10b6:8:2f::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.14 via Frontend
 Transport; Tue, 3 Jan 2023 10:25:32 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DS1PEPF0000E643.mail.protection.outlook.com (10.167.17.197) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5944.8 via Frontend Transport; Tue, 3 Jan 2023 10:25:31 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 04:25:31 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 02:25:31 -0800
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 3 Jan 2023 04:25:29 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f51e8cae-8b50-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OerhJLJ1dIwKnPPnc1vVMFxZFSnasESGa2Hu7zfkdb2Ilq9PAUbdrxdT5mpd6rZKz3gAYRxrq0fojRZxKOxuP32KEjlbBYlaAtZOz2eX6oq0xxLP7arnrW143vMT/zOnawAnE13cWwy1TQSNuQQYHyKg6/n6LsWoV/5up+WdZfwdH5WDdRjgkzlhjDptG5J/5GYItTPpvqsE9L4uIMxD8PmSC61dGVgK5diB7e7LQN/hKjVCDzvc8xF7yfG7D5VqWddrML6JzbqzR8uTe6nlYGzpcpQ9JVMbavvFiM/N4Xq9GChYSAcG/lZObd3GEZ5OsaQ1MrIlCsx2UnMvKuGQjA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1wG8Ayo4aOuncq1FnIWUKxCPPCI3r62/OM1xAaqLnPc=;
 b=XQFKrWqfbEa98m3l47QqBVK1zGJbiikXZmOVYc4BJo3fVM9XkPo0UBCz/lNDVXsA4+0tn8rSSjXc1LF0eS7Im09xphUaPHljNdc5aUEOS6KVqutUfM3t91qBhnRsd5agJ+3vdO3O8la4eD6MZ/4fcY/5/syHnBC0GJr/Ctpz/N3jNOf0uGBEVylIhzqRd8xDnUanavp2mqUCS9C4OxwEFDFz6NoowYUQHLQVQaQst4kPEe0q2AXZHV41SO5uE7uya7oU5z2S3oRtTG4IlP/Da4Z0X3Y4+24sUlIxZFKizor9qCDNi7FIhRwZA9K7ELlp9/aSDuCNWnBMbP9Irso3XA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1wG8Ayo4aOuncq1FnIWUKxCPPCI3r62/OM1xAaqLnPc=;
 b=2IlrJDtvsRoUlklYanmZjqDFz9z2uI9ab1U0X2l2eLSryfWfsa90qwyYMo7D5ushEj1yBBurbVxwYW9UOrsZQX4ZLJ53usS6nxXLxK7zU0DxOys4RpsQTPCcvxu+myrJC7P+H/3NgfsUxAjZNl7gDaBUBUmoLYRqhVXl7aTOhPE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in construct_domU
Date: Tue, 3 Jan 2023 11:25:19 +0100
Message-ID: <20230103102519.26224-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E643:EE_|SN7PR12MB6814:EE_
X-MS-Office365-Filtering-Correlation-Id: 16268770-c82a-468d-0230-08daed74d7dc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EDRiX9j+dJxylAl3t2/a7ozIS4mwMnZHatopv7kLQifDZk2hsTlaTmVGd9mJpTdnghp2sTshtzLbHkbO3B0BOlkFRZkwGSWbMXGajdQOsX6UJhIeAn8wxLbEWVag+R+88fsm/bK6At1UffUjPP9Hbj6UstuI33TjjOJL+P5IqPQ0YMBNr/FOxtz1p58jZGiZmcxj1Ss+yrj2YjCsowr8m0etOutVMviRwp+acEAolt9+pb5bJ/gzoNl9wp6IVVXlcLn5lBQaMXx78f3vn8S31vPgz58GLsyTlxysfDD2V+OSZcc6eMSYW7r6glYJwh2MqXqf2Pd+G7Yzexlb7ze1hnK4njxL0u0r5soYrPTEkHoLmQpX5b0Et1vry5F73/9K3Z3kzxzaRI4PYCFM9R5MdrtgyihkMlYmyLGcXjwmAu9gGRHxEnhUCptPT9pOE7Ovk6TKQ4q2GG06tRz4KpkLywK4dvlNOGgAcJ7CgpLgayazoOnM/QzktGQeN8roSCBa2P9GmUiSc1XA92ogPZVbkAIN50DGT7G6KXIrRIaL/uyc2BArfl+Rnjrsffe3KKsXgvux8FKJ9xYyix15zruAtG+gFgyyZSRcnfKC1Z/+fdpzM9ygHpzclkVtzCtvk4YM3hLrSeC9OfUvsUPq/6gflzjV1u0DdOhufio5sFV2kKmEWWvGia3htR1GHBHb6/urCwmoIA1BGLQCKoGvuWH+MWW4pdcHhCmTCg2hH0eJXAlOwK+l3om7ENA5HvDAQLqtsm2l0eRjqfBq2LnCBH1p8w==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(396003)(39860400002)(136003)(451199015)(36840700001)(46966006)(40470700004)(83380400001)(47076005)(426003)(1076003)(82310400005)(336012)(26005)(6666004)(40480700001)(40460700003)(86362001)(36756003)(36860700001)(81166007)(82740400003)(356005)(2616005)(186003)(478600001)(41300700001)(8676002)(4326008)(2906002)(5660300002)(44832011)(8936002)(4744005)(316002)(70586007)(70206006)(6916009)(54906003)(22166006)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jan 2023 10:25:31.9074
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 16268770-c82a-468d-0230-08daed74d7dc
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E643.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB6814

Printing the memory size in hex without a 0x prefix can be misleading,
so add one. Also, take the opportunity to adhere to the 80-character
line length limit by moving the printk arguments to the next line.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Changes in v2:
 - was: "Print memory size in decimal in construct_domU"
 - stick to hex but add a 0x prefix
 - adhere to 80 chars line length limit
---
 xen/arch/arm/domain_build.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 829cea8de84f..f35f4d24569c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3774,7 +3774,8 @@ static int __init construct_domU(struct domain *d,
     if ( rc != 0 )
         return rc;
 
-    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
+    printk("*** LOADING DOMU cpus=%u memory=%#"PRIx64"KB ***\n",
+           d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 10:33:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 10:33:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470550.730087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCebi-0006Iy-EI; Tue, 03 Jan 2023 10:33:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470550.730087; Tue, 03 Jan 2023 10:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCebi-0006Ir-Am; Tue, 03 Jan 2023 10:33:22 +0000
Received: by outflank-mailman (input) for mailman id 470550;
 Tue, 03 Jan 2023 10:33:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCebg-0006Ih-Nb; Tue, 03 Jan 2023 10:33:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCebg-0006fZ-I1; Tue, 03 Jan 2023 10:33:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCebf-0001NP-VA; Tue, 03 Jan 2023 10:33:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCebf-00080n-UM; Tue, 03 Jan 2023 10:33:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=U62M04JiUpJ8vsnwyW/6C/ih+T4SqsduBxg/obDWGaI=; b=P1pufGgPcgMcUMGY9ZWuqKwDns
	f1VzA/BinEiQIceGpb3ixf0iqCPR9aW7Se0PsUaNgjii+/SMYzkBAMvAkLeTCWv1ppmOqCkQV+08I
	QkR3UrU6Au2kGKQAwCvhPTCawiqyH9hmRl+ZW3yHjzMDs06vR7e19MkMU9CKU9aTHBXM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175554-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175554: FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-i386-livepatch:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-vhd:debian-di-install:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
X-Osstest-Versions-That:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Jan 2023 10:33:19 +0000

flight 175554 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175554/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair    <job status>                 broken in 175548

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken in 175548 pass in 175554
 test-amd64-i386-livepatch     7 xen-install      fail in 175548 pass in 175554
 test-amd64-i386-libvirt      20 guest-start/debian.repeat  fail pass in 175548
 test-amd64-i386-qemut-rhel6hvm-amd 15 guest-start.2        fail pass in 175548
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host       fail pass in 175548
 test-arm64-arm64-xl-vhd      12 debian-di-install          fail pass in 175548

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 175548 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 175548 never pass
 test-amd64-i386-freebsd10-amd64  7 xen-install                fail like 175548
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175548
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175548
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175548
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175548
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175548
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175548
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175548
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175548
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175548
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175548
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175548
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175548
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8
baseline version:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8

Last test of basis   175554  2023-01-03 01:53:38 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-libvirt-pair broken

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 10:54:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 10:54:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470561.730097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCewS-0000N2-7p; Tue, 03 Jan 2023 10:54:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470561.730097; Tue, 03 Jan 2023 10:54:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCewS-0000Mv-52; Tue, 03 Jan 2023 10:54:48 +0000
Received: by outflank-mailman (input) for mailman id 470561;
 Tue, 03 Jan 2023 10:54:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCewQ-0000Ml-Dp; Tue, 03 Jan 2023 10:54:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCewQ-0007FR-BH; Tue, 03 Jan 2023 10:54:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCewQ-0002LA-0f; Tue, 03 Jan 2023 10:54:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCewQ-0002Jh-0H; Tue, 03 Jan 2023 10:54:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175558-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175558: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b670700ddf5eb1dd958d60eb4f2a51e0636206f9
X-Osstest-Versions-That:
    ovmf=bbd30066e137c036db140b6e58e6e172e2827eb3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Jan 2023 10:54:46 +0000

flight 175558 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175558/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b670700ddf5eb1dd958d60eb4f2a51e0636206f9
baseline version:
 ovmf                 bbd30066e137c036db140b6e58e6e172e2827eb3

Last test of basis   175529  2022-12-30 07:40:56 Z    4 days
Testing same since   175558  2023-01-03 06:41:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dun Tan <dun.tan@intel.com>
  Tan, Dun <dun.tan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   bbd30066e1..b670700ddf  b670700ddf5eb1dd958d60eb4f2a51e0636206f9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 11:15:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 11:15:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470570.730109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCfGO-0002pJ-SW; Tue, 03 Jan 2023 11:15:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470570.730109; Tue, 03 Jan 2023 11:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCfGO-0002pB-Pj; Tue, 03 Jan 2023 11:15:24 +0000
Received: by outflank-mailman (input) for mailman id 470570;
 Tue, 03 Jan 2023 11:15:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oadl=5A=citrix.com=prvs=36087fe06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCfGM-0002p5-Um
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 11:15:23 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7d0c786-8b57-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 12:15:20 +0100 (CET)
Received: from mail-bn7nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Jan 2023 06:15:11 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS7PR03MB5414.namprd03.prod.outlook.com (2603:10b6:5:2c2::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 11:15:09 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Tue, 3 Jan 2023
 11:15:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7d0c786-8b57-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: =?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 1/2] x86/cpuid: Infrastructure for leaves 7:1{ecx,edx}
Thread-Topic: [PATCH 1/2] x86/cpuid: Infrastructure for leaves 7:1{ecx,edx}
Thread-Index: AQHZHK8RB6bbm4Xa9UKGRMzZOin/e66Jqk4AgALlmgA=
Date: Tue, 3 Jan 2023 11:15:08 +0000
Message-ID: <9557496f-e232-ef50-39bb-7eb509bc72e8@citrix.com>
References: <20221231003007.26916-1-andrew.cooper3@citrix.com>
 <20221231003007.26916-2-andrew.cooper3@citrix.com>
 <Y7GgJJ5wyds83Uwn@mail-itl>
In-Reply-To: <Y7GgJJ5wyds83Uwn@mail-itl>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <C863B0A7E5CE7C478D68A76BC6768725@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 01/01/2023 3:00 pm, Marek Marczykowski-Górecki wrote:
> On Sat, Dec 31, 2022 at 12:30:06AM +0000, Andrew Cooper wrote:
>> diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
>> index d5833e9ce879..0091a11a67bc 100644
>> --- a/tools/misc/xen-cpuid.c
>> +++ b/tools/misc/xen-cpuid.c
>> @@ -229,6 +237,8 @@ static const struct {
>>      { "0x80000021.eax",  "e21a", str_e21a },
>>      { "0x00000007:1.ebx", "7b1", str_7b1 },
>>      { "0x00000007:2.edx", "7d2", str_7d2 },
>> +    { "0x00000007:1.ecx", "7b1", str_7c1 },
>> +    { "0x00000007:1.edx", "7b1", str_7d1 },
> "7c1" and "7d1" ?

Fixed, thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 11:18:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 11:18:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470577.730120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCfIt-0003Pm-Al; Tue, 03 Jan 2023 11:17:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470577.730120; Tue, 03 Jan 2023 11:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCfIt-0003Pf-7k; Tue, 03 Jan 2023 11:17:59 +0000
Received: by outflank-mailman (input) for mailman id 470577;
 Tue, 03 Jan 2023 11:17:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oadl=5A=citrix.com=prvs=36087fe06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCfIr-0003PZ-5C
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 11:17:57 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 443a6bd5-8b58-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 12:17:55 +0100 (CET)
Received: from mail-dm6nam11lp2172.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Jan 2023 06:17:35 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS7PR03MB5414.namprd03.prod.outlook.com (2603:10b6:5:2c2::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 11:17:33 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Tue, 3 Jan 2023
 11:17:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 443a6bd5-8b58-11ed-91b6-6bf2151ebd3b
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=loQ1+D7roScKs3lPYDTnaMxLHTA/ehvWJiM110hZ2Zo=;
 b=VhG6bOkBmFYHGpm7nceKkNGJd7nht7Ta2Iogu3PB25TH57tNPvZcfY/OTKzTq5SAiyVZ66Q2IOSS/JfIPA8p98Ogju2peMD5mW03KsLqi9kDCZHpqSAUvTjogWzPnYIUsDCj9hR7OxFuUqqIE+3e1vLwlob44Q4suvt052DXeNUeAV+1CA0dsY3G2v1ilXv1yWoP907jTvNoI6rld0hm5eM0TO/JcSJoDG4MASWlYWaufehKBzZFS41EzpFAv7q6i74PWGH6e0uInLmxqKp4lz7hji4gu5pRixd9+s3xGNV02NANzcKfzyvKTiB0+X3IE6UVBvfoQVd8f917rnv/DQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=loQ1+D7roScKs3lPYDTnaMxLHTA/ehvWJiM110hZ2Zo=;
 b=jjT+6Qk+9AfJldODV5rOv5g9SgXCoNwZnAqJoWDeofWMviDdWsikgcJBXJogjU0IOJ408kkYVIfm2YMAbr2slsE49tL7KLUGKIDfH54Gtum3cZzJCWXqEHIJQhP7RQDhxeV7rhFIdwr/B1b5GPLJ4jkwl9t5nMCXXm60rtHb3po=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: =?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 2/2] x86/shskt: Disable CET-SS on parts susceptible to
 fractured updates
Thread-Topic: [PATCH 2/2] x86/shskt: Disable CET-SS on parts susceptible to
 fractured updates
Thread-Index: AQHZHK8Tc134ZZkfC0SbNIFFZQXGXq6JrR4AgALjd4A=
Date: Tue, 3 Jan 2023 11:17:33 +0000
Message-ID: <d4850283-26f6-895d-4677-a3c1043a68ce@citrix.com>
References: <20221231003007.26916-1-andrew.cooper3@citrix.com>
 <20221231003007.26916-3-andrew.cooper3@citrix.com>
 <Y7GifofUaQ8u8ugr@mail-itl>
In-Reply-To: <Y7GifofUaQ8u8ugr@mail-itl>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DS7PR03MB5414:EE_
x-ms-office365-filtering-correlation-id: d5f58551-5bd6-4906-a271-08daed7c1c76
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <48F5AA714F8BAE4F8386F60BF328D80E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d5f58551-5bd6-4906-a271-08daed7c1c76
X-MS-Exchange-CrossTenant-originalarrivaltime: 03 Jan 2023 11:17:33.5454
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: xxgWUWNmWak2zgmNtbKYzkxQgFX1vJj1NxCItAISKkIlji4CITVaPEIDqYl1Zq8rzW5irqyCQBfsqXeLgRbuyh779uYgAy4aD88qaKeMfSU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5414

On 01/01/2023 3:10 pm, Marek Marczykowski-Górecki wrote:
> On Sat, Dec 31, 2022 at 12:30:07AM +0000, Andrew Cooper wrote:
>> diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
>> index b3fcf4680f3a..d962f384a995 100644
>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -346,11 +346,18 @@ void __init early_cpu_init(void)
>>  	       x86_cpuid_vendor_to_str(c->x86_vendor), c->x86, c->x86,
>>  	       c->x86_model, c->x86_model, c->x86_mask, eax);
>>  
>> -	if (c->cpuid_level >= 7)
>> -		cpuid_count(7, 0, &eax, &ebx,
>> +	if (c->cpuid_level >= 7) {
>> +		uint32_t max_subleaf;
>> +
>> +		cpuid_count(7, 0, &max_subleaf, &ebx,
>>  			    &c->x86_capability[FEATURESET_7c0],
>>  			    &c->x86_capability[FEATURESET_7d0]);
>>  
>> +                if (max_subleaf >= 1)
> tabs vs spaces ...

Fixed.

>
> Is this file imported from Linux? It uses tabs for indentation, contrary
> to the rest of the Xen code base.

It is a file which originally inherits from Linux, but it probably has
~0% in common any more...

~Andrew
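The reply above discusses a hunk to early_cpu_init() that captures CPUID leaf 7's
maximum subleaf count (returned in EAX of subleaf 0) so that subleaf 1 can be
queried only when the CPU reports it exists. A minimal sketch of that control
flow, with a mocked cpuid_count() standing in for the real CPUID instruction and
hypothetical FEATURESET_* word indices (the real values live in Xen's headers):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical featureset word indices, for illustration only. */
#define FEATURESET_7c0 0
#define FEATURESET_7d0 1
#define FEATURESET_7a1 2

struct cpuinfo {
    unsigned int cpuid_level;
    uint32_t x86_capability[3];
};

/*
 * Mock of Xen's cpuid_count(): the real helper executes the CPUID
 * instruction for the given leaf/subleaf.  This fake models a CPU whose
 * leaf 7 reports a maximum subleaf of 1.
 */
static void cpuid_count(uint32_t leaf, uint32_t subleaf, uint32_t *eax,
                        uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
{
    *eax = *ebx = *ecx = *edx = 0;
    if (leaf == 7 && subleaf == 0) {
        *eax = 1;          /* max subleaf supported */
        *ecx = 0x11111111; /* leaf 7, subleaf 0, ECX feature bits */
        *edx = 0x22222222; /* leaf 7, subleaf 0, EDX feature bits */
    } else if (leaf == 7 && subleaf == 1) {
        *eax = 0x33333333; /* leaf 7, subleaf 1, EAX feature bits */
    }
}

/* Shape of the patched logic under discussion. */
static void read_leaf7(struct cpuinfo *c)
{
    if (c->cpuid_level >= 7) {
        uint32_t max_subleaf, ebx, ign;

        cpuid_count(7, 0, &max_subleaf, &ebx,
                    &c->x86_capability[FEATURESET_7c0],
                    &c->x86_capability[FEATURESET_7d0]);

        /* Only query subleaf 1 if subleaf 0 says it exists. */
        if (max_subleaf >= 1)
            cpuid_count(7, 1, &c->x86_capability[FEATURESET_7a1],
                        &ebx, &ign, &ign);
    }
}
```

The point of the restructuring is that EAX of leaf 7 subleaf 0 is no longer
discarded: it gates any deeper subleaf enumeration.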


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 11:27:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 11:27:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470585.730130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCfRq-0004wi-Al; Tue, 03 Jan 2023 11:27:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470585.730130; Tue, 03 Jan 2023 11:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCfRq-0004wb-8B; Tue, 03 Jan 2023 11:27:14 +0000
Received: by outflank-mailman (input) for mailman id 470585;
 Tue, 03 Jan 2023 11:27:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCfRp-0004wR-HJ; Tue, 03 Jan 2023 11:27:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCfRp-0008Ai-9s; Tue, 03 Jan 2023 11:27:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCfRo-0004bX-P9; Tue, 03 Jan 2023 11:27:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCfRo-0007bQ-OR; Tue, 03 Jan 2023 11:27:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dZq4ag7xytgvKdQRS3T7T3mJLpMyuCfLdfvb2n15uLw=; b=R139uJuCzFzsVqYQROL3WYtgVN
	1pktIqaTjGHmAcJETaLT0Rh97kip5kUkIqvef+yewBdrXV82QMMy8KKoCgPZWqCVwE+SIUhqEHgiS
	3MZcvV+PHNx1m2e+bgPABQSIxvAr+Jcj+geSn0otneBrqR+72ruTHF2PSt2h7oV0Neds=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175555-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175555: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=35afa1d2d6c10ce993c60caea1efe1c589fa1d5d
X-Osstest-Versions-That:
    libvirt=0f2396751fccdc9f742230763880f70dbd977f3b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Jan 2023 11:27:12 +0000

flight 175555 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175555/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175482
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175482
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175482
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              35afa1d2d6c10ce993c60caea1efe1c589fa1d5d
baseline version:
 libvirt              0f2396751fccdc9f742230763880f70dbd977f3b

Last test of basis   175482  2022-12-24 04:20:18 Z   10 days
Testing same since   175555  2023-01-03 04:20:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Martin Kletzander <mkletzan@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   0f2396751f..35afa1d2d6  35afa1d2d6c10ce993c60caea1efe1c589fa1d5d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 12:15:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 12:15:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470597.730142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCgCu-0001w4-4G; Tue, 03 Jan 2023 12:15:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470597.730142; Tue, 03 Jan 2023 12:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCgCu-0001vx-1T; Tue, 03 Jan 2023 12:15:52 +0000
Received: by outflank-mailman (input) for mailman id 470597;
 Tue, 03 Jan 2023 12:15:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMDV=5A=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pCgCs-0001vr-Dq
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 12:15:50 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on2051.outbound.protection.outlook.com [40.107.100.51])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5b3abe62-8b60-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 13:15:48 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MN2PR12MB4519.namprd12.prod.outlook.com (2603:10b6:208:262::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 12:15:45 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::9856:da7:1ff1:d55c]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::9856:da7:1ff1:d55c%5]) with mapi id 15.20.5944.019; Tue, 3 Jan 2023
 12:15:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b3abe62-8b60-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OzTWRdHDNEkRNFp21ZBe0/CrTV3OnUHeiUI9GI6/E5Va7bmb9i1wSXznr0Mt0+tL5CTANP0pLt10rzywCVdPXdA1pkb60DtCn4Ev0hOrJ5QDvzwekihtgP6I9v+1Tyrn/30ZdYzTvmY4y5kS8rEQ9tC+rgCJHJB78cVBGoAK39pMG4tjDnuMZ776k+litqQDsGLrg5rHvPdA1GgJGBV0zVI8FsrWRwJowfmFjiWXrUzOQtd8FNbOByXKTpLUDJ5G4F0cCAdpE0GyQXFyia8L7cR6x4x8gzbwN1xVsFMO7zzvuQdpDeiLKFgJfYern5wkiftGEDM8YchAJsSMYAqzRQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EJNCStJYDmn1oZ0E2+wobE6P7eaHGzm9mLkM3k1B11Q=;
 b=ffEGqKXZf9/kZ1vYJCYBrVsDq1kiF7yH0ZfoDwOecgS++dBwylOc1rYr6gN3OJ1tjU6KUEN6rFYDHcNn4QkAGBnKsOT/dDvHyWY6piNSn/l5jy3Ss+8e+G9LG19/r6pu0MnPHqfO0gVtTi33Qze7OTIodrjIskA1N/qZLn22YnCOknHt4mtHcHeyAOMfzS+Sp6Wmb/hy4nR4vDapJH5HLUGZnkVR+hv0JQnkSVfPERbk9HNUKGVViCye4IFYfIMRZD8NnyKt12EDKqf31AFu1fKlCf/BL640FvuTTLlX7HBoYFGscGcCaaqeZ0OZ+2TGpNfWZvUfTDajSN5uwPHrUw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EJNCStJYDmn1oZ0E2+wobE6P7eaHGzm9mLkM3k1B11Q=;
 b=gKMgUB1asm1I9Da+ilboCNfSXeX0fiVkxurP6WF8wUUG0432YLprly1GLnBnL/oObPkvrFVoqmRLwD3RzULy90AUBo2kumJK6GG4mzDzuE3Cik59mSD2uAOsSPxLyGjW7DRDvqlBKhDxP2tSPmOIr69e1zUTDvyOvtfWavmjw5w=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <5dc9db0c-529a-3906-b866-2ec6e9efc27c@amd.com>
Date: Tue, 3 Jan 2023 12:15:40 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
To: xen-devel@lists.xenproject.org
References: <20230103102519.26224-1-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230103102519.26224-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0257.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::29) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|MN2PR12MB4519:EE_
X-MS-Office365-Filtering-Correlation-Id: 37134b9a-1b2e-4418-fcab-08daed843da3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 37134b9a-1b2e-4418-fcab-08daed843da3
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jan 2023 12:15:45.4000
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NOkDQNSfwcKmREPZ2IP8NnpBw9wjEMpKW9MFGNlNorMex1cD+PT5ni404qw40D9xUmUoUawBA8vcOh+U+2tARg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4519


On 03/01/2023 10:25, Michal Orzel wrote:
> Printing memory size in hex without 0x prefix can be misleading, so
> add it. Also, take the opportunity to adhere to 80 chars line length
> limit by moving the printk arguments to the next line.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes in v2:
>   - was: "Print memory size in decimal in construct_domU"
>   - stick to hex but add a 0x prefix
>   - adhere to 80 chars line length limit
> ---
>   xen/arch/arm/domain_build.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 829cea8de84f..f35f4d24569c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3774,7 +3774,8 @@ static int __init construct_domU(struct domain *d,
>       if ( rc != 0 )
>           return rc;
>   
> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
> +    printk("*** LOADING DOMU cpus=%u memory=%#"PRIx64"KB ***\n",
> +           d->max_vcpus, mem);
>   
>       kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
>   


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 12:34:47 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, Henry Wang <Henry.Wang@arm.com>, Wei Chen
	<Wei.Chen@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2] xen/arm: p2m: Populate pages for GICv2 mapping in
 arch_domain_create()
Thread-Topic: [PATCH v2] xen/arm: p2m: Populate pages for GICv2 mapping in
 arch_domain_create()
Thread-Index:
 AQHY36RTFiJSG85qSUCZzK4infnuWq4NsO2AgAANc4CAAYyLAIBUWeeAgAKahgCAADe5AIAmqQKA
Date: Tue, 3 Jan 2023 12:34:12 +0000
Message-ID: <A52E1C09-40F1-416C-A085-2F2320EE69EA@arm.com>
References: <20221014080917.14980-1-Henry.Wang@arm.com>
 <a947e0b4-8f76-cea6-893f-abf30ff95e0d@xen.org>
 <AS8PR08MB7991FD5994497D812FE3AE2E92249@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <9d5ab09e-650f-118d-0233-d7988f1504f1@xen.org>
 <AS8PR08MB799170627B34BD2627CE3092921D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org>
 <alpine.DEB.2.22.394.2212091409020.3075842@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2212091409020.3075842@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi,

Sorry for the delay but I have very limited access to my mails right now.

On 9 Dec 2022, at 23:11, Stefano Stabellini <sstabellini@kernel.org> wrote:

On Fri, 9 Dec 2022, Julien Grall wrote:
Hi Henry,

On 08/12/2022 03:06, Henry Wang wrote:
I am trying to work on the follow-up improvements to the Arm P2M code,
and while trying to address the comment below, I noticed there was an
unfinished discussion between me and Julien which I would like to
continue and hear opinions from all of you (if possible).

-----Original Message-----
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2] xen/arm: p2m: Populate pages for GICv2 mapping in
arch_domain_create()
I also noticed that relinquish_p2m_mapping() is not called. This should
be fine for us because arch_domain_create() should never create a
mapping that requires p2m_put_l3_page() to be called.

For the background of the discussion, this was about the failure path of
arch_domain_create(), where we only call p2m_final_teardown() which does
not call relinquish_p2m_mapping()...
So all this mess with the P2M is necessary because we are mapping the GICv2
CPU interface in arch_domain_create(). I think we should consider deferring
the mapping to later.

Other than making the code simpler, it also means we could allocate the P2M
root on the same NUMA node the domain is going to run on (that information
is passed later on).

There are a couple of options:
1. Introduce a new hypercall to finish the initialization of a domain (e.g.
allocating the P2M and mapping the GICv2 CPU interface). This option was
briefly discussed with Jan (see [2]).
2. Allocate the P2M when populating the P2M pool and defer the GICv2 CPU
interface mapping until the first access (similar to how we deal with MMIO
access for ACPI).

I find the second option neater, but it has the drawback that it requires
one more trap to the hypervisor and we can't report any mapping failure
(which should not happen if the P2M was correctly sized). So I am leaning
towards option 2.

Any opinions?

Option 1 is not great due to the extra hypercall. But I worry that
option 2 might make things harder for safety because the
mapping/initialization becomes "dynamic". I don't know if this is a
valid concern.

I would love to hear Bertrand's thoughts about it. Putting him in To:

How would option 1 work for dom0less?

Option 2 would make safety more challenging but not impossible (we have a
lot of other use cases where we cannot map everything on boot).

I would vote for option 2 as I think we will not certify GICv2 and it is
not adding another hypercall.

Cheers
Bertrand





From xen-devel-bounces@lists.xenproject.org Tue Jan 03 13:02:51 2023
From: Per Bilse <per.bilse@citrix.com>
To: <linux-kernel@vger.kernel.org>
CC: Per Bilse <per.bilse@citrix.com>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, "moderated list:XEN HYPERVISOR INTERFACE"
	<xen-devel@lists.xenproject.org>
Subject: [PATCH v3] drivers/xen/hypervisor: Expose Xen SIF flags to userspace
Date: Tue, 3 Jan 2023 13:02:13 +0000
Message-ID: <20230103130213.2129753-1-per.bilse@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

/proc/xen is a legacy pseudo filesystem which predates Xen support
getting merged into Linux.  It has largely been replaced with more
normal locations for data (/sys/hypervisor/ for info, /dev/xen/ for
user devices).  We want to compile xenfs support out of the dom0 kernel.

There is one item which only exists in /proc/xen, namely
/proc/xen/capabilities, where the presence of "control_d" signals
"you're in the control domain".  This ultimately comes from the SIF
flags provided at VM start.

This patch exposes all SIF flags in /sys/hypervisor/start_flags/ as
boolean files, one for each bit, returning '1' if set, '0' otherwise.
Two known flags, 'privileged' and 'initdomain', are explicitly named,
and all remaining flags can be accessed via generically named files,
as suggested by Andrew Cooper.
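For illustration, a control-domain check from userspace could look like
the sketch below (the path matches the new sysfs layout; the fallback
covers kernels without this interface, and the helper name is made up
for the example):

```shell
# Sketch: read one of the new boolean files; treat a missing file as
# "flag unset" so the check degrades gracefully on older kernels.
xen_start_flag() {
    flag_file="/sys/hypervisor/start_flags/$1"
    if [ -r "$flag_file" ]; then
        cat "$flag_file"
    else
        echo 0
    fi
}

if [ "$(xen_start_flag initdomain)" = "1" ]; then
    echo "running in the Xen control domain"
fi
```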

Signed-off-by: Per Bilse <per.bilse@citrix.com>
---
v2: minor fix to layout, incorporate suggestions from Juergen Gross
v3: update assumed availability in documentation
---
 Documentation/ABI/stable/sysfs-hypervisor-xen | 13 ++++
 drivers/xen/sys-hypervisor.c                  | 69 +++++++++++++++++--
 2 files changed, 78 insertions(+), 4 deletions(-)

diff --git a/Documentation/ABI/stable/sysfs-hypervisor-xen b/Documentation/ABI/stable/sysfs-hypervisor-xen
index 748593c64568..be9ca9981bb1 100644
--- a/Documentation/ABI/stable/sysfs-hypervisor-xen
+++ b/Documentation/ABI/stable/sysfs-hypervisor-xen
@@ -120,3 +120,16 @@ Contact:	xen-devel@lists.xenproject.org
 Description:	If running under Xen:
 		The Xen version is in the format <major>.<minor><extra>
 		This is the <minor> part of it.
+
+What:		/sys/hypervisor/start_flags/*
+Date:		March 2023
+KernelVersion:	6.3.0
+Contact:	xen-devel@lists.xenproject.org
+Description:	If running under Xen:
+		All bits in Xen's start-flags are represented as
+		boolean files, returning '1' if set, '0' otherwise.
+		This takes the place of the defunct /proc/xen/capabilities,
+		which would contain "control_d" on dom0, and be empty
+		otherwise.  That information is now exposed as the
+		"initdomain" flag alongside the "privileged" flag; all
+		other bits are accessible as "unknownXX".
diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
index fcb0792f090e..f5460b34ae6f 100644
--- a/drivers/xen/sys-hypervisor.c
+++ b/drivers/xen/sys-hypervisor.c
@@ -31,7 +31,10 @@ struct hyp_sysfs_attr {
 	struct attribute attr;
 	ssize_t (*show)(struct hyp_sysfs_attr *, char *);
 	ssize_t (*store)(struct hyp_sysfs_attr *, const char *, size_t);
-	void *hyp_attr_data;
+	union {
+		void *hyp_attr_data;
+		unsigned long hyp_attr_value;
+	};
 };
 
 static ssize_t type_show(struct hyp_sysfs_attr *attr, char *buffer)
@@ -399,6 +402,60 @@ static int __init xen_sysfs_properties_init(void)
 	return sysfs_create_group(hypervisor_kobj, &xen_properties_group);
 }
 
+#define FLAG_UNAME "unknown"
+#define FLAG_UNAME_FMT FLAG_UNAME "%02u"
+#define FLAG_UNAME_MAX sizeof(FLAG_UNAME "XX")
+#define FLAG_COUNT (sizeof(xen_start_flags) * BITS_PER_BYTE)
+static_assert(sizeof(xen_start_flags) <=
+		sizeof_field(struct hyp_sysfs_attr, hyp_attr_value));
+
+static ssize_t flag_show(struct hyp_sysfs_attr *attr, char *buffer)
+{
+	char *p = buffer;
+
+	*p++ = '0' + ((xen_start_flags & attr->hyp_attr_value) != 0);
+	*p++ = '\n';
+	return p - buffer;
+}
+
+#define FLAG_NODE(flag, node)				\
+	[ilog2(flag)] = {				\
+		.attr = { .name = #node, .mode = 0444 },\
+		.show = flag_show,			\
+		.hyp_attr_value = flag			\
+	}
+
+/*
+ * Add new, known flags here.  No other changes are required, but
+ * note that each known flag wastes one entry in flag_unames[].
+ * The code/complexity machinations to avoid this isn't worth it
+ * for a few entries, but keep it in mind.
+ */
+static struct hyp_sysfs_attr flag_attrs[FLAG_COUNT] = {
+	FLAG_NODE(SIF_PRIVILEGED, privileged),
+	FLAG_NODE(SIF_INITDOMAIN, initdomain)
+};
+static struct attribute_group xen_flags_group = {
+	.name = "start_flags",
+	.attrs = (struct attribute *[FLAG_COUNT + 1]){}
+};
+static char flag_unames[FLAG_COUNT][FLAG_UNAME_MAX];
+
+static int __init xen_sysfs_flags_init(void)
+{
+	for (unsigned int fnum = 0; fnum != FLAG_COUNT; fnum++) {
+		if (likely(flag_attrs[fnum].attr.name == NULL)) {
+			sprintf(flag_unames[fnum], FLAG_UNAME_FMT, fnum);
+			flag_attrs[fnum].attr.name = flag_unames[fnum];
+			flag_attrs[fnum].attr.mode = 0444;
+			flag_attrs[fnum].show = flag_show;
+			flag_attrs[fnum].hyp_attr_value = 1UL << fnum;
+		}
+		xen_flags_group.attrs[fnum] = &flag_attrs[fnum].attr;
+	}
+	return sysfs_create_group(hypervisor_kobj, &xen_flags_group);
+}
+
 #ifdef CONFIG_XEN_HAVE_VPMU
 struct pmu_mode {
 	const char *name;
@@ -539,18 +596,22 @@ static int __init hyper_sysfs_init(void)
 	ret = xen_sysfs_properties_init();
 	if (ret)
 		goto prop_out;
+	ret = xen_sysfs_flags_init();
+	if (ret)
+		goto flags_out;
 #ifdef CONFIG_XEN_HAVE_VPMU
 	if (xen_initial_domain()) {
 		ret = xen_sysfs_pmu_init();
 		if (ret) {
-			sysfs_remove_group(hypervisor_kobj,
-					   &xen_properties_group);
-			goto prop_out;
+			sysfs_remove_group(hypervisor_kobj, &xen_flags_group);
+			goto flags_out;
 		}
 	}
 #endif
 	goto out;
 
+flags_out:
+	sysfs_remove_group(hypervisor_kobj, &xen_properties_group);
 prop_out:
 	sysfs_remove_file(hypervisor_kobj, &uuid_attr.attr);
 uuid_out:
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 13:17:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 13:17:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470621.730175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pChAW-0000uM-7s; Tue, 03 Jan 2023 13:17:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470621.730175; Tue, 03 Jan 2023 13:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pChAW-0000uF-53; Tue, 03 Jan 2023 13:17:28 +0000
Received: by outflank-mailman (input) for mailman id 470621;
 Tue, 03 Jan 2023 13:17:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=20m7=5A=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pChAV-0000u9-He
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 13:17:27 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f673d0b5-8b68-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 14:17:24 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id
 p1-20020a05600c1d8100b003d8c9b191e0so23054738wms.4
 for <xen-devel@lists.xenproject.org>; Tue, 03 Jan 2023 05:17:24 -0800 (PST)
Received: from [192.168.30.216] ([81.0.6.76]) by smtp.gmail.com with ESMTPSA id
 r17-20020a05600c425100b003cffd3c3d6csm40171333wmm.12.2023.01.03.05.17.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 03 Jan 2023 05:17:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f673d0b5-8b68-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=+6FCYc7IsbLXUkXo08SQ4GUWBR9iGC2Xzf9FolVUtRo=;
        b=n/56VD4IRMp0yuUVG73jhVPdUgXayuZx4giEfUZ8VLgAAnbQZeIuIvQcokB2wptb6k
         roMl4LNuBykQy/MpusQOu0PMAW1zy6KpNMcCebcpS9DCuAqDne0+v68abElKm/+c9PGX
         5deJomEDWCynXEAQCe2GxPFbAKq3knf/Q0Vi2APEhmyoBGnSpcnvaiXifXNt7MFWuLtI
         CYdI4jO0cRMmDBNHyzsDW6aehiMZJ7lHGoG9Mw1g3tBw2z61ryhoGk+Eml41wavMKgNr
         3jQc0/MkoQeBHxtCrcfilhHCmnRLxA8ZrtYjJvD3vWgTfxcPoCHF0jAIAEkIpPZNbsBN
         AdAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=+6FCYc7IsbLXUkXo08SQ4GUWBR9iGC2Xzf9FolVUtRo=;
        b=DIgZrvcbTihimYS6Pc/U3JE+nIRpF4Cyp7Qs+KLLG0IegroKrqKw80gzRDYiJaYwbM
         kzjd9Pzn865w5K0BlWo29hijZ0dPchQtS1yPlZl2Z5Kwg8XBFpIm1ePUqjeRg0Q/t211
         3lTDKkP4GrMftcqmSyXrzTtkee+VgOLscLcRVaR7sOElR7m2jX5ay0gBg5jwSAD/cpeZ
         jp0j91xrhXeKgDdNFoMtDv7ihed3rXsjPAJBcR7hyoa7Fti+9ir+XcraLFcGi/xxBeG8
         Gou/OOlwwAQ00c6KU4mZnr6r+ZuEweQmqugD6cM7kFPk6L+EFfq5uECH5YPhAF4xkicM
         CHew==
X-Gm-Message-State: AFqh2kobLcBRJp5YBSA9+KTFBrM2X7a4gkbGOvt/bA0mniOqg8XBJlE7
	WZmY02M4mWDXmJFvbHePXPJHbA==
X-Google-Smtp-Source: AMrXdXtKrBI9NGgAUmqiWfEI2RGlGcbNiSC2hdExF6pkPz5b4Eg0tXIw6L8+2KPWlNKYtW4NbzL5LQ==
X-Received: by 2002:a05:600c:4255:b0:3d3:3d34:5d63 with SMTP id r21-20020a05600c425500b003d33d345d63mr32744520wmm.8.1672751844305;
        Tue, 03 Jan 2023 05:17:24 -0800 (PST)
Message-ID: <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org>
Date: Tue, 3 Jan 2023 14:17:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Chuck Zmudzinski <brchuckz@aol.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost
 <eduardo@habkost.net>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
References: <20230102213504.14646-1-shentey@gmail.com>
 <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Chuck,

On 3/1/23 04:15, Chuck Zmudzinski wrote:
> On 1/2/23 4:34 PM, Bernhard Beschow wrote:
>> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
>> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
>> machine agnostic to the precise southbridge being used. 2/ will become
>> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
>> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>>
>> Testing done:
>> None, because I don't know how to conduct this properly :(
>>
>> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>>            "[PATCH v4 00/30] Consolidate PIIX south bridges"

This series is based on a previous series:
https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
(which is itself based on an earlier series).

>> Bernhard Beschow (6):
>>    include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
>>    hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>>    hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>>    hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>>    hw/isa/piix: Resolve redundant k->config_write assignments
>>    hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>>
>>   hw/i386/pc_piix.c             | 34 ++++++++++++++++--
>>   hw/i386/xen/xen-hvm.c         |  9 +++--
>>   hw/isa/piix.c                 | 66 +----------------------------------
> 
> This file does not exist on the Qemu master branch.
> But hw/isa/piix3.c and hw/isa/piix4.c do exist.
> 
> I tried renaming it from piix.c to piix3.c in the patch, but
> the patch set still does not apply cleanly on my tree.
> 
> Is this patch set re-based against something other than
> the current master Qemu branch?
> 
> I have a system that is suitable for testing this patch set, but
> I need guidance on how to apply it to the Qemu source tree.

You can ask Bernhard to publish a branch with the full work,
or apply each series locally. I use the b4 tool for that:
https://b4.docs.kernel.org/en/latest/installing.html

i.e.:

$ git checkout -b shentey_work
$ b4 am 20221120150550.63059-1-shentey@gmail.com
$ git am ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
$ b4 am 20221221170003.2929-1-shentey@gmail.com
$ git am ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
$ b4 am 20230102213504.14646-1-shentey@gmail.com
$ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx

Now the branch 'shentey_work' contains all the patches and you can test.

Regards,

Phil.


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 13:20:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 13:20:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470629.730186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pChDO-0002Ik-N5; Tue, 03 Jan 2023 13:20:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470629.730186; Tue, 03 Jan 2023 13:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pChDO-0002Id-JQ; Tue, 03 Jan 2023 13:20:26 +0000
Received: by outflank-mailman (input) for mailman id 470629;
 Tue, 03 Jan 2023 13:20:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=20m7=5A=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pChDN-0002IX-5Y
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 13:20:25 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 616f84da-8b69-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 14:20:24 +0100 (CET)
Received: by mail-wm1-x32c.google.com with SMTP id ja17so22623180wmb.3
 for <xen-devel@lists.xenproject.org>; Tue, 03 Jan 2023 05:20:24 -0800 (PST)
Received: from [192.168.30.216] ([81.0.6.76]) by smtp.gmail.com with ESMTPSA id
 q185-20020a1c43c2000000b003cff309807esm48256372wma.23.2023.01.03.05.20.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 03 Jan 2023 05:20:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 616f84da-8b69-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=ZAwOz3kc/9RSBo9h8xa3JZTDJQ2pKeCHHfYRBL5M+SY=;
        b=w6BTCvFONQCYPX0/nZZCMv4rr29dPxkv98CNXiyLTNiTsQbTli7oCIEorcdwzVRc3Q
         ge3kqsR5GujjPB0TW7Pfp2NU83khk8s3NXWX8wKRV1tIJK8gp6OTTXxCAVEi8dGmp2iE
         lSDbp3a/SWK5od1MJsS6FD+hXAzD4sSwQ9gd/VTBqX31dHDmzyczIf3WwfGAFeCnsVtd
         R4W7sbNJr0jZ+6X6duvmxscybWitengU3/4qw/6wfZC1vSF4TxAx4LbtKYfauDhMGWc8
         9qdRLb6hF7Co+q3ij/HqSwp9UpU0jJhxmDYqusP9482jSljfGackybhTGEO4GzTJm0cC
         xn+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=ZAwOz3kc/9RSBo9h8xa3JZTDJQ2pKeCHHfYRBL5M+SY=;
        b=g4tzIF+wBHSb2rkIIkUuzkRi/5nMTEK5FYv2VhsSav6YIafagvYA0dLATUOShueGz7
         x1/pCExsNbiUVyDiXe8noIkGHBHx7arSrTgXfUi5KRX1wAmCehV+caAuIlMISE/yiCHH
         +b/9+2UKFiFuII+bMltACNaZgzi76065eToucQcsjWnOfi9mWlVluVlHKEGz48gfgftM
         yhW9qSU2zUQ+g+1xWmOIM9jBKREbFTim6CrLSjQNS/4ftWdY1ciN7+Cr4r07lqzx93G7
         MX9A8yIS05SN2qXuEhq+n73WB0m0131KSl32EIbHj/18wtsac9s+mZeed34e1awd4efx
         m5Pw==
X-Gm-Message-State: AFqh2kocusvibBb0AVE61QJs8coaPUmbugD7WSYgtsrySj0USqzY2nkG
	ZdAe8RA6PWscx/WU1+6OjuZPJg==
X-Google-Smtp-Source: AMrXdXujUu1mK5FhNhKF64/Kjpk2KVHVVnvRRQsbR6qSJmMKeYk3xlYQ4gde/xnsingH5Bey5hXL1A==
X-Received: by 2002:a05:600c:22cc:b0:3d1:bd81:b1b1 with SMTP id 12-20020a05600c22cc00b003d1bd81b1b1mr30316821wmg.18.1672752023907;
        Tue, 03 Jan 2023 05:20:23 -0800 (PST)
Message-ID: <cdfe29e9-327b-476b-3343-92216874075a@linaro.org>
Date: Tue, 3 Jan 2023 14:20:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 5/6] hw/isa/piix: Resolve redundant k->config_write
 assignments
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Paul Durrant <paul@xen.org>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost
 <eduardo@habkost.net>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
References: <20230102213504.14646-1-shentey@gmail.com>
 <20230102213504.14646-6-shentey@gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230102213504.14646-6-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 2/1/23 22:35, Bernhard Beschow wrote:
> The previous patch unified handling of piix_write_config() accross all
> PIIX device models which allows for assigning k->config_write once in the
> base class.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> ---
>   hw/isa/piix.c | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 13:39:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 13:39:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470636.730197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pChVL-0003vz-4v; Tue, 03 Jan 2023 13:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470636.730197; Tue, 03 Jan 2023 13:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pChVL-0003vs-1R; Tue, 03 Jan 2023 13:38:59 +0000
Received: by outflank-mailman (input) for mailman id 470636;
 Tue, 03 Jan 2023 13:38:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JkUc=5A=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pChVJ-0003vj-SP
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 13:38:58 +0000
Received: from mail-vs1-xe2d.google.com (mail-vs1-xe2d.google.com
 [2607:f8b0:4864:20::e2d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f7dc828f-8b6b-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 14:38:56 +0100 (CET)
Received: by mail-vs1-xe2d.google.com with SMTP id h27so18006907vsq.3
 for <xen-devel@lists.xenproject.org>; Tue, 03 Jan 2023 05:38:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7dc828f-8b6b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=/LsA1hPZM5dA9H27C/BbcFIU066UnS0soamgpDeqDB8=;
        b=ZaZolHmVwQMCjQ/Vy+OT863+IagRBZtKYtxqP3l9IxNNTOJsCBPgAs1G92xOvTOSUZ
         y8iZSDODB5uwW1ulxWbMpK4ol3HPNUWJu8JS5Q6gt+AEKz9RB74lqsTvlTK+SA59hIeB
         oMpRhIHM83rJdrzSVIzUYnmcg+9X/X1KTm2k7qKSpuqQro1aQ7KaqnFwXvPKX8p1XCrn
         nxWmuWxbx1zhsNMZMCgSnyvc1gx8kMpaTKB8NiGcGA6OyymHpd6Tv/l1kr8kjWL/Ptpg
         fFd7fsYZdSwP3yGVIzNbt/aAYEMBqr3dy/eYewuHRZX0yFVVwJtWPpjPxu20HW1P8lXT
         GYwQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=/LsA1hPZM5dA9H27C/BbcFIU066UnS0soamgpDeqDB8=;
        b=v2CxP9zAYBht/CkT/0Gk0SBHk1YCCdku591tlztYlhc9yw9opC0NEFcpTUBkKEx94h
         WrUbAV0cKJtXgYN0E44JqKfEoB6xwVJrKCTYv569JcfLx6bBL6OVREhfDCUhiXBsTaqT
         sBQOKCUouMOF454jXyR4584cWZ8Eo/Qp2QCLzGu2LR6NhnKiLpAJHDFQ8klw7LdIB9Tv
         dL1f0wreVNHUTRI9Kj9ujRol6UOWPKl9LxvIBQ1+Srtf9601aLifYa80Ev/dKBTiUFSd
         ZwLLwVyVoMWE9vEKj+tfcey/xnp6ZwYeR3T6hZilTz4iWKBAPNE2oEPee6iv3L7PkP9Y
         3y+A==
X-Gm-Message-State: AFqh2kq3Uc37UVojurTy488p9i1UPPTYYyi7BCFYRaE/vuwHASl1M2jf
	uKQwiwUAQ/xneyMZAtOkTbZu4EqkRVtcBEJIGbM=
X-Google-Smtp-Source: AMrXdXsKnsuze9zgtzTt/vE0gzVOeHDaiKZOvCtvg2WBe8lWBDodUYclv5LABCoqB6l+X/uAVhfC0lKwpOH5KleJtnI=
X-Received: by 2002:a67:f9d8:0:b0:3ce:82f7:5f41 with SMTP id
 c24-20020a67f9d8000000b003ce82f75f41mr1139409vsq.2.1672753134937; Tue, 03 Jan
 2023 05:38:54 -0800 (PST)
MIME-Version: 1.0
References: <20230102213504.14646-1-shentey@gmail.com> <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com>
 <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org>
In-Reply-To: <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org>
From: Bernhard Beschow <shentey@gmail.com>
Date: Tue, 3 Jan 2023 14:38:40 +0100
Message-ID: <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com>
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>
Cc: Chuck Zmudzinski <brchuckz@aol.com>, qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>, 
	xen-devel@lists.xenproject.org, =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>
Content-Type: multipart/alternative; boundary="00000000000067a9ab05f15c3005"

--00000000000067a9ab05f15c3005
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Tue, Jan 3, 2023 at 2:17 PM Philippe Mathieu-Daudé <philmd@linaro.org>
wrote:

> Hi Chuck,
>
> On 3/1/23 04:15, Chuck Zmudzinski wrote:
> > On 1/2/23 4:34 PM, Bernhard Beschow wrote:
> >> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally
> removes
> >> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen
> in the PC
> >> machine agnostic to the precise southbridge being used. 2/ will become
> >> particularly interesting once PIIX4 becomes usable in the PC machine,
> avoiding
> >> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
> >>
> >> Testing done:
> >> None, because I don't know how to conduct this properly :(
> >>
> >> Based-on: <20221221170003.2929-1-shentey@gmail.com>
> >>            "[PATCH v4 00/30] Consolidate PIIX south bridges"
>
> This series is based on a previous series:
> https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
> (which is itself based on an earlier series).
>
> >> Bernhard Beschow (6):
> >>    include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
> >>    hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
> >>    hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
> >>    hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
> >>    hw/isa/piix: Resolve redundant k->config_write assignments
> >>    hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
> >>
> >>   hw/i386/pc_piix.c             | 34 ++++++++++++++++--
> >>   hw/i386/xen/xen-hvm.c         |  9 +++--
> >>   hw/isa/piix.c                 | 66 +----------------------------------
> >
> > This file does not exist on the Qemu master branch.
> > But hw/isa/piix3.c and hw/isa/piix4.c do exist.
> >
> > I tried renaming it from piix.c to piix3.c in the patch, but
> > the patch set still does not apply cleanly on my tree.
> >
> > Is this patch set re-based against something other than
> > the current master Qemu branch?
> >
> > I have a system that is suitable for testing this patch set, but
> > I need guidance on how to apply it to the Qemu source tree.
>
> You can ask Bernhard to publish a branch with the full work,
>

Hi Chuck,

... or just visit
https://patchew.org/QEMU/20230102213504.14646-1-shentey@gmail.com/ . There
you'll find a git tag with a complete history and all instructions!

Thanks for giving my series a test ride!

Best regards,
Bernhard

> or apply each series locally. I use the b4 tool for that:
> https://b4.docs.kernel.org/en/latest/installing.html
>
> i.e.:
>
> $ git checkout -b shentey_work
> $ b4 am 20221120150550.63059-1-shentey@gmail.com
> $ git am ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
> $ b4 am 20221221170003.2929-1-shentey@gmail.com
> $ git am ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
> $ b4 am 20230102213504.14646-1-shentey@gmail.com
> $ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx
>
> Now the branch 'shentey_work' contains all the patches and you can test.
>
> Regards,
>
> Phil.
>



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 15:15:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 15:15:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470645.730208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCj0O-0005c6-5A; Tue, 03 Jan 2023 15:15:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470645.730208; Tue, 03 Jan 2023 15:15:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCj0O-0005bz-1B; Tue, 03 Jan 2023 15:15:08 +0000
Received: by outflank-mailman (input) for mailman id 470645;
 Tue, 03 Jan 2023 15:15:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWpM=5A=redhat.com=alex.williamson@srs-se1.protection.inumbo.net>)
 id 1pCj0M-0005bt-8b
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 15:15:06 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65b57108-8b79-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 16:15:04 +0100 (CET)
Received: from mail-il1-f199.google.com (mail-il1-f199.google.com
 [209.85.166.199]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-179-I2NBm3HZPaKoWNPYrnMZpw-1; Tue, 03 Jan 2023 10:14:59 -0500
Received: by mail-il1-f199.google.com with SMTP id
 n15-20020a056e021baf00b0030387c2e1d3so19417872ili.5
 for <xen-devel@lists.xenproject.org>; Tue, 03 Jan 2023 07:14:59 -0800 (PST)
Received: from redhat.com ([38.15.36.239]) by smtp.gmail.com with ESMTPSA id
 v17-20020a02b091000000b00356738a2aa2sm9895119jah.55.2023.01.03.07.14.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 03 Jan 2023 07:14:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65b57108-8b79-11ed-91b6-6bf2151ebd3b
Date: Tue, 3 Jan 2023 08:14:56 -0700
From: Alex Williamson <alex.williamson@redhat.com>
To: Chuck Zmudzinski <brchuckz@netscape.net>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org, Stefano
 Stabellini <sstabellini@kernel.org>, Anthony Perard
 <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, Paolo Bonzini
 <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230103081456.1d676b8e.alex.williamson@redhat.com>
In-Reply-To: <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
	<830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
	<20230102124605-mutt-send-email-mst@kernel.org>
	<c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
Organization: Red Hat
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Mon, 2 Jan 2023 18:10:24 -0500
Chuck Zmudzinski <brchuckz@netscape.net> wrote:

> On 1/2/23 12:46 PM, Michael S. Tsirkin wrote:
> > On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:  
> > > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > > as noted in docs/igd-assign.txt in the Qemu source code.
> > > 
> > > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > > a different slot. This problem often prevents the guest from booting.
> > > 
> > > The only available workaround is not good: Configure Xen HVM guests to use
> > > the old and no longer maintained Qemu traditional device model available
> > > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > > 
> > > To implement this feature in the Qemu upstream device model for Xen HVM
> > > guests, introduce the following new functions, types, and macros:
> > > 
> > > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > > * typedef XenPTQdevRealize function pointer
> > > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > > 
> > > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > > the xl toolstack with the gfx_passthru option enabled, which sets the
> > > igd-passthru=on option to Qemu for the Xen HVM machine type.
> > > 
> > > The new xen_igd_reserve_slot function also needs to be implemented in
> > > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > > when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > > in which case it does nothing.
> > > 
> > > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > > 
> > > Move the call to xen_host_pci_device_get, and the associated error
> > > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > > initialize the device class and vendor values which enables the checks for
> > > the Intel IGD to succeed. The verification that the host device is an
> > > Intel IGD to be passed through is done by checking the domain, bus, slot,
> > > and function values as well as by checking that gfx_passthru is enabled,
> > > the device class is VGA, and the device vendor is Intel.
> > > 
> > > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>  
> >
> > I'm not sure why is the issue xen specific. Can you explain?
> > Doesn't it affect kvm too?  
> 
> Recall from docs/igd-assign.txt that there are two modes for
> igd passthrough: legacy and upt, and the igd needs to be
> at slot 2 only when using legacy mode which gives one
> single guest exclusive access to the Intel igd.
> 
> It's only xen specific insofar as xen does not have support
> for the upt mode so xen must use legacy mode which
> requires the igd to be at slot 2. I am not an expert with

UPT mode never fully materialized for direct assignment; the folks at
Intel championing this scenario have since left.

> kvm, but if I understand correctly, with kvm one can use
> the upt mode with the Intel i915 kvmgt kernel module
> and in that case the guest will see a virtual Intel gpu
> that can be at any arbitrary slot when using kvmgt, and
> also, in that case, more than one guest can access the
> igd through the kvmgt kernel module.

This is true, IIRC an Intel vGPU does not need to be in slot 2.

> Again, I am not an expert and do not have as much
> experience with kvm, but if I understand correctly it is
> possible to use the legacy mode with kvm and I think you
> are correct that if one uses kvm in legacy mode and without
> using the Intel i915 kvmgt kernel module, then it would be
> necessary to reserve slot 2 for the igd on kvm.

It's necessary to configure the assigned IGD at slot 2 to make it
functional, yes, but I don't really understand this notion of
"reserving" slot 2.  If something occupies address 00:02.0 in the
config, it's the user's or management tool's responsibility to move it
to make this configuration functional.  Why does QEMU need to play a
part in reserving this bus address?  IGD devices are not generally
hot-pluggable either, so it doesn't seem we need to reserve an address
in case an IGD device is added dynamically later.
 
> Your question makes me curious, and I have not been able
> to determine if anyone has tried igd passthrough using
> legacy mode on kvm with recent versions of linux and qemu.

Yes, it works.

> I will try reproducing the problem on kvm in legacy mode with
> current versions of linux and qemu and report my findings.
> With kvm, there might be enough flexibility to specify the
> slot number for every pci device in the guest. Such a

I think this is always the recommendation, libvirt will do this by
default in order to make sure the configuration is reproducible.  This
is what we generally rely on for kvm/vfio IGD assignment to place the
GPU at the correct address.
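
For the kvm/vfio case, that explicit placement is just an address on the
-device option. An illustrative (and deliberately incomplete) fragment,
where the host BDF shown is the usual host address of the IGD:

```shell
# Sketch only: pin the assigned IGD at guest 00:02.0 (slot 2).
# Remaining VM options (memory, disk, firmware, ...) are omitted.
qemu-system-x86_64 \
  -machine pc,accel=kvm \
  -device vfio-pci,host=0000:00:02.0,addr=0x2
```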

> capability is not available using the xenlight toolstack
> for managing xen guests, so I have been using this patch
> to ensure that the Intel igd is at slot 2 with xen guests
> created by the xenlight toolstack.

Seems like a deficiency in xenlight.  I'm not sure why QEMU should take
on this burden to support tool stacks that lack such basic
features.
 
> The patch as is will only fix the problem on xen, so if the
> problem exists on kvm also, I agree that the patch should
> be modified to also fix it on kvm.

AFAICT, it's not a problem on kvm/vfio because we generally make use of
invocations that specify bus addresses for each device by default,
making this a configuration requirement for the user or management tool
stack.  Thanks,

Alex



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 15:25:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 15:25:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470652.730219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCjAL-00076q-26; Tue, 03 Jan 2023 15:25:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470652.730219; Tue, 03 Jan 2023 15:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCjAK-00076j-VO; Tue, 03 Jan 2023 15:25:24 +0000
Received: by outflank-mailman (input) for mailman id 470652;
 Tue, 03 Jan 2023 15:25:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/j9y=5A=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pCjAI-00076d-M3
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 15:25:22 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2059.outbound.protection.outlook.com [40.107.237.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d2087c08-8b7a-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 16:25:15 +0100 (CET)
Received: from BN0PR04CA0180.namprd04.prod.outlook.com (2603:10b6:408:eb::35)
 by BL1PR12MB5062.namprd12.prod.outlook.com (2603:10b6:208:313::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 15:25:17 +0000
Received: from BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:eb:cafe::55) by BN0PR04CA0180.outlook.office365.com
 (2603:10b6:408:eb::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5966.19 via Frontend
 Transport; Tue, 3 Jan 2023 15:25:17 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT101.mail.protection.outlook.com (10.13.177.126) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5966.17 via Frontend Transport; Tue, 3 Jan 2023 15:25:16 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 09:25:14 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 07:25:14 -0800
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 3 Jan 2023 09:25:13 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2087c08-8b7a-11ed-b8d0-410ff93cb8f0
Message-ID: <33e3e0f0-6784-5012-eb25-4d90f9cf188b@amd.com>
Date: Tue, 3 Jan 2023 10:25:12 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [RFC PATCH 21/21] xen/arm: vIOMMU: Modify the partial device tree
 for dom0less
To: Rahul Singh <rahul.singh@arm.com>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, "Volodymyr
 Babchuk" <Volodymyr_Babchuk@epam.com>
References: <cover.1669888522.git.rahul.singh@arm.com>
 <127da5a0d4300e083b8840a4f3a0d2d63bde5b6f.1669888522.git.rahul.singh@arm.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <127da5a0d4300e083b8840a4f3a0d2d63bde5b6f.1669888522.git.rahul.singh@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 12/1/22 11:02, Rahul Singh wrote:
> To configure IOMMU in guest for passthrough devices, user will need to
> copy the unmodified "iommus" property from host device tree to partial
> device tree. To enable the dom0 linux kernel to configure the IOMMU
> correctly replace the phandle in partial device tree with virtual
> IOMMU phandle when "iommus" property is set.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/arch/arm/domain_build.c | 31 ++++++++++++++++++++++++++++++-
>  1 file changed, 30 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 7cd99a6771..afb3e76409 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3235,7 +3235,35 @@ static int __init handle_prop_pfdt(struct kernel_info *kinfo,
>      return ( propoff != -FDT_ERR_NOTFOUND ) ? propoff : 0;
>  }
> 
> -static int __init scan_pfdt_node(struct kernel_info *kinfo, const void *pfdt,
> +static void modify_pfdt_node(void *pfdt, int nodeoff)

This should have the __init attribute
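
i.e., something like (sketch; __init as declared in Xen's xen/init.h, so
the function is discarded after boot):

```c
static void __init modify_pfdt_node(void *pfdt, int nodeoff)
```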


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 16:14:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 16:14:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470659.730230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCjvF-0004QM-Mi; Tue, 03 Jan 2023 16:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470659.730230; Tue, 03 Jan 2023 16:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCjvF-0004QF-JL; Tue, 03 Jan 2023 16:13:53 +0000
Received: by outflank-mailman (input) for mailman id 470659;
 Tue, 03 Jan 2023 16:13:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/j9y=5A=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pCjvF-0004Q9-2y
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 16:13:53 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2084.outbound.protection.outlook.com [40.107.220.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9852df46-8b81-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 17:13:45 +0100 (CET)
Received: from DS7PR03CA0156.namprd03.prod.outlook.com (2603:10b6:5:3b2::11)
 by MN2PR12MB4454.namprd12.prod.outlook.com (2603:10b6:208:26c::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 16:13:46 +0000
Received: from DM6NAM11FT046.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b2:cafe::dc) by DS7PR03CA0156.outlook.office365.com
 (2603:10b6:5:3b2::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5966.19 via Frontend
 Transport; Tue, 3 Jan 2023 16:13:46 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT046.mail.protection.outlook.com (10.13.172.121) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5944.10 via Frontend Transport; Tue, 3 Jan 2023 16:13:45 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 3 Jan
 2023 10:13:45 -0600
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 3 Jan 2023 10:13:43 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9852df46-8b81-11ed-b8d0-410ff93cb8f0
Message-ID: <9c11f1d8-1f59-9993-22bd-fb7e6d1b93ae@amd.com>
Date: Tue, 3 Jan 2023 11:13:42 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [RFC PATCH 11/21] xen/arm: vsmmuv3: Attach Stage-1 configuration
 to SMMUv3 hardware
Content-Language: en-US
To: Rahul Singh <rahul.singh@arm.com>, <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <bertrand.marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Jan Beulich <jbeulich@suse.com>, Paul Durrant
	<paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <cover.1669888522.git.rahul.singh@arm.com>
 <b0873a8cf229143544388a90334edd7c96bc78c4.1669888522.git.rahul.singh@arm.com>
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <b0873a8cf229143544388a90334edd7c96bc78c4.1669888522.git.rahul.singh@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jan 2023 16:13:45.8109
 (UTC)

On 12/1/22 11:02, Rahul Singh wrote:
> Attach the Stage-1 configuration to device STE to support nested
> translation for the guests.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/drivers/passthrough/arm/smmu-v3.c  | 79 ++++++++++++++++++++++++++
>  xen/drivers/passthrough/arm/smmu-v3.h  |  1 +
>  xen/drivers/passthrough/arm/vsmmu-v3.c | 18 ++++++
>  xen/include/xen/iommu.h                | 14 +++++
>  4 files changed, 112 insertions(+)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 4f96fdb92f..c4b4a5d86d 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -2754,6 +2754,37 @@ static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev)
>         return NULL;
>  }
> 
> +static struct iommu_domain *arm_smmu_get_domain_by_sid(struct domain *d,
> +                               u32 sid)
> +{
> +       int i;
> +       unsigned long flags;
> +       struct iommu_domain *io_domain;
> +       struct arm_smmu_domain *smmu_domain;
> +       struct arm_smmu_master *master;
> +       struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +
> +       /*
> +        * Loop through the &xen_domain->contexts to locate a context
> +        * assigned to this SMMU
> +        */
> +       list_for_each_entry(io_domain, &xen_domain->contexts, list) {
> +               smmu_domain = to_smmu_domain(io_domain);
> +
> +               spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +               list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> +                       for (i = 0; i < master->num_streams; i++) {
> +                               if (sid != master->streams[i].id)
> +                                       continue;
> +                               spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +                               return io_domain;
> +                       }
> +               }
> +               spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +       }
> +       return NULL;
> +}
> +
>  static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
>                                 struct device *dev)
>  {
> @@ -2909,6 +2940,53 @@ static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
>         xfree(xen_domain);
>  }
> 
> +static int arm_smmu_attach_guest_config(struct domain *d, u32 sid,
> +               struct iommu_guest_config *cfg)
> +{
> +       int ret = -EINVAL;
> +       unsigned long flags;
> +       struct arm_smmu_master *master;
> +       struct arm_smmu_domain *smmu_domain;
> +       struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +       struct iommu_domain *io_domain = arm_smmu_get_domain_by_sid(d, sid);
> +
> +       if (!io_domain)
> +               return -ENODEV;
> +
> +       smmu_domain = to_smmu_domain(io_domain);
> +
> +       spin_lock(&xen_domain->lock);
> +
> +       switch (cfg->config) {
> +       case ARM_SMMU_DOMAIN_ABORT:
> +               smmu_domain->abort = true;
> +               break;
> +       case ARM_SMMU_DOMAIN_BYPASS:
> +               smmu_domain->abort = false;
> +               break;
> +       case ARM_SMMU_DOMAIN_NESTED:
> +               /* Enable Nested stage translation. */
> +               smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> +               smmu_domain->s1_cfg.s1ctxptr = cfg->s1ctxptr;
> +               smmu_domain->s1_cfg.s1fmt = cfg->s1fmt;
> +               smmu_domain->s1_cfg.s1cdmax = cfg->s1cdmax;
> +               smmu_domain->abort = false;
> +               break;
> +       default:
> +               goto out;
> +       }
> +
> +       spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> +       list_for_each_entry(master, &smmu_domain->devices, domain_head)
> +               arm_smmu_install_ste_for_dev(master);
> +       spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> +
> +       ret = 0;
> +out:
> +       spin_unlock(&xen_domain->lock);
> +       return ret;
> +}
> +
>  static const struct iommu_ops arm_smmu_iommu_ops = {
>         .page_sizes             = PAGE_SIZE_4K,
>         .init                   = arm_smmu_iommu_xen_domain_init,
> @@ -2921,6 +2999,7 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
>         .unmap_page             = arm_iommu_unmap_page,
>         .dt_xlate               = arm_smmu_dt_xlate,
>         .add_device             = arm_smmu_add_device,
> +       .attach_guest_config = arm_smmu_attach_guest_config

Please append a trailing comma here, even if it is the last element.

>  };
> 
>  static __init int arm_smmu_dt_init(struct dt_device_node *dev,
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.h b/xen/drivers/passthrough/arm/smmu-v3.h
> index e270fe05e0..50a050408b 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.h
> +++ b/xen/drivers/passthrough/arm/smmu-v3.h
> @@ -393,6 +393,7 @@ enum arm_smmu_domain_stage {
>         ARM_SMMU_DOMAIN_S2,
>         ARM_SMMU_DOMAIN_NESTED,
>         ARM_SMMU_DOMAIN_BYPASS,
> +       ARM_SMMU_DOMAIN_ABORT,
>  };
> 
>  /* Xen specific code. */
> diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c
> index 916b97b8a2..5188181929 100644
> --- a/xen/drivers/passthrough/arm/vsmmu-v3.c
> +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c
> @@ -223,8 +223,11 @@ static int arm_vsmmu_handle_cfgi_ste(struct virt_smmu *smmu, uint64_t *cmdptr)
>  {
>      int ret;
>      uint64_t ste[STRTAB_STE_DWORDS];
> +    struct domain *d = smmu->d;
> +    struct domain_iommu *hd = dom_iommu(d);
>      struct arm_vsmmu_s1_trans_cfg s1_cfg = {0};
>      uint32_t sid = smmu_cmd_get_sid(cmdptr[0]);
> +    struct iommu_guest_config guest_cfg = {0};
> 
>      ret = arm_vsmmu_find_ste(smmu, sid, ste);
>      if ( ret )
> @@ -234,6 +237,21 @@ static int arm_vsmmu_handle_cfgi_ste(struct virt_smmu *smmu, uint64_t *cmdptr)
>      if ( ret )
>          return (ret == -EAGAIN ) ? 0 : ret;
> 
> +    guest_cfg.s1ctxptr = s1_cfg.s1ctxptr;
> +    guest_cfg.s1fmt = s1_cfg.s1fmt;
> +    guest_cfg.s1cdmax = s1_cfg.s1cdmax;
> +
> +    if ( s1_cfg.bypassed )
> +        guest_cfg.config = ARM_SMMU_DOMAIN_BYPASS;
> +    else if ( s1_cfg.aborted )
> +        guest_cfg.config = ARM_SMMU_DOMAIN_ABORT;
> +    else
> +        guest_cfg.config = ARM_SMMU_DOMAIN_NESTED;
> +
> +    ret = hd->platform_ops->attach_guest_config(d, sid, &guest_cfg);
> +    if ( ret )
> +        return ret;
> +
>      return 0;
>  }
> 
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 4f22fc1bed..b2fc027e5e 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -230,6 +230,15 @@ int iommu_do_dt_domctl(struct xen_domctl *, struct domain *,
> 
>  #endif /* HAS_DEVICE_TREE */
> 
> +#ifdef CONFIG_ARM
> +struct iommu_guest_config {
> +    paddr_t     s1ctxptr;
> +    uint8_t     config;
> +    uint8_t     s1fmt;
> +    uint8_t     s1cdmax;
> +};
> +#endif /* CONFIG_ARM */
> +
>  struct page_info;
> 
>  /*
> @@ -302,6 +311,11 @@ struct iommu_ops {
>       */
>      int (*dt_xlate)(device_t *dev, const struct dt_phandle_args *args);
>  #endif
> +
> +#ifdef CONFIG_ARM
> +    int (*attach_guest_config)(struct domain *d, u32 sid,
> +                               struct iommu_guest_config *cfg);
> +#endif
>  };
> 
>  /*
> --
> 2.25.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 16:22:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 16:22:57 +0000
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175556-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175556: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=69b41ac87e4a664de78a395ff97166f0b2943210
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Jan 2023 16:22:44 +0000

flight 175556 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175556/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                69b41ac87e4a664de78a395ff97166f0b2943210
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   87 days
Failing since        173470  2022-10-08 06:21:34 Z   87 days  181 attempts
Testing same since   175552  2023-01-02 21:13:23 Z    0 days    2 attempts

------------------------------------------------------------
3257 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 497265 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 17:03:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 17:03:34 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175557-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175557: FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-5.4:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Jan 2023 17:03:25 +0000

flight 175557 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175557/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1     <job status>                 broken  in 175547

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1  5 host-install(5) broken in 175547 pass in 175557
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail in 175545 pass in 175557
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 175545 pass in 175557
 test-amd64-i386-xl-vhd       22 guest-start.2    fail in 175547 pass in 175545
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail in 175547 pass in 175557
 test-armhf-armhf-xl-credit1  14 guest-start      fail in 175551 pass in 175557
 test-armhf-armhf-xl-credit2  14 guest-start      fail in 175551 pass in 175557
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 175553 pass in 175557
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail in 175553 pass in 175557
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 175547
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 175551
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 175553

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-armhf-armhf-xl-arndale 18 guest-start/debian.repeat fail in 175553 blocked in 175197
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 175197
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 175547 n/a

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   20 days
Testing same since   175407  2022-12-19 11:42:26 Z   15 days   37 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit1 broken

Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 17:26:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 17:26:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470688.730263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCl2r-0004Q8-Nh; Tue, 03 Jan 2023 17:25:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470688.730263; Tue, 03 Jan 2023 17:25:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCl2r-0004Q1-Ji; Tue, 03 Jan 2023 17:25:49 +0000
Received: by outflank-mailman (input) for mailman id 470688;
 Tue, 03 Jan 2023 17:25:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Awo3=5A=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pCl2p-0004Pv-9n
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 17:25:48 +0000
Received: from sonic305-20.consmr.mail.gq1.yahoo.com
 (sonic305-20.consmr.mail.gq1.yahoo.com [98.137.64.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a64ebdcc-8b8b-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 18:25:44 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic305.consmr.mail.gq1.yahoo.com with HTTP; Tue, 3 Jan 2023 17:25:41 +0000
Received: by hermes--production-ne1-7b69748c4d-7hwnr (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 82ee4ab4f4bd9b6bcc4691c031bc8c2e; 
 Tue, 03 Jan 2023 17:25:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a64ebdcc-8b8b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672766741; bh=BQrajt4wazRs/IHuCnNgKkNdztR9Jg2pmYu2J+B7rHE=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=MWkZBW23xhtaKL1h1Y0DanzZgzC9tYI/+m4L8tapKm3VckHEyr4IKBKbt2karoTLcVff2Rzi4Y5XNLNCb0bPm5I+ipKxq63QIYuHQbEGlxZ6ODcsmt+WA6z2clSqp73iNijF2uZjEdOOfjJo53wsQkAIWwKhpYFjyuCNrO9SyqAAl1SoBdpgAJSSmKRCFCcAz/91QXdhjtMO4aD05AV0jbdhtMuBxFwVCYQGwUefbBxB6fapNDZZJDoypx8Qys8rFkCCUkA62AXAei8GDiCDpfxTdubgFKOJfKSHYEZFqzb04k/Sx1upwmzdmSbnm2Hp6mHQV4hc1gmsV+jCs8NpUg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672766741; bh=Q7Gnch2KYK2DKvvEQImsPlGPBX0A629TKeRg29/A8EE=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=mof7SKRASPArXL2gtemBJPLd0mREs96QXV1acGqUuAHynSzRDGaw8Wu+EX7leobknIu7iKptkgx60XyiDLy1OPhgHAIjnPNDMNWGidPOd4yBAoG1Al0u/gojbvZGVe5R1Lya/ei+RnmXHiMlFCS0h3K13/XUrXBZdfKuRXEkXE3O35E5NXnmvzLfcd0baS06B102tjQPpyfiWI/dHTgVXZbyvTwBdkyb4YD0tjXXqG4HbUJTsAarKTd/BqQYuc+0P6NgweedkzHN5/DlJFbFz9bVhqqCpns2WgWH6HTf8tBMRqwx6sGC7sr7DKBHplQtcaR0bp1sByimvapXoqxsqg==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <6360e4a1-dc2b-685e-5e19-62b92eec695b@aol.com>
Date: Tue, 3 Jan 2023 12:25:35 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
To: Bernhard Beschow <shentey@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost
 <eduardo@habkost.net>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
References: <20230102213504.14646-1-shentey@gmail.com>
 <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com>
 <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org>
 <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3727

On 1/3/2023 8:38 AM, Bernhard Beschow wrote:
>
>
> On Tue, Jan 3, 2023 at 2:17 PM Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>
>     Hi Chuck,
>
>     On 3/1/23 04:15, Chuck Zmudzinski wrote:
>     > On 1/2/23 4:34 PM, Bernhard Beschow wrote:
>     >> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
>     >> it. The motivation is 1/ to decouple PIIX from Xen and 2/ to make Xen in the PC
>     >> machine agnostic to the precise southbridge being used. 2/ will become
>     >> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
>     >> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>     >>
>     >> Testing done:
>     >> None, because I don't know how to conduct this properly :(
>     >>
>     >> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>     >>            "[PATCH v4 00/30] Consolidate PIIX south bridges"
>
>     This series is based on a previous series:
>     https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
>     (which is itself based on an earlier series).
>
>     >> Bernhard Beschow (6):
>     >>    include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
>     >>    hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>     >>    hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>     >>    hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>     >>    hw/isa/piix: Resolve redundant k->config_write assignments
>     >>    hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>     >>
>     >>   hw/i386/pc_piix.c             | 34 ++++++++++++++++--
>     >>   hw/i386/xen/xen-hvm.c         |  9 +++--
>     >>   hw/isa/piix.c                 | 66 +----------------------------------
>     >
>     > This file does not exist on the Qemu master branch.
>     > But hw/isa/piix3.c and hw/isa/piix4.c do exist.
>     >
>     > I tried renaming it from piix.c to piix3.c in the patch, but
>     > the patch set still does not apply cleanly on my tree.
>     >
>     > Is this patch set re-based against something other than
>     > the current master Qemu branch?
>     >
>     > I have a system that is suitable for testing this patch set, but
>     > I need guidance on how to apply it to the Qemu source tree.
>
>     You can ask Bernhard to publish a branch with the full work,
>
>
> Hi Chuck,
>
> ... or just visit https://patchew.org/QEMU/20230102213504.14646-1-shentey@gmail.com/ . There you'll find a git tag with a complete history and all instructions!
>
> Thanks for giving my series a test ride!
>
> Best regards,
> Bernhard
>
>     or apply each series locally. I use the b4 tool for that:
>     https://b4.docs.kernel.org/en/latest/installing.html
>
>     i.e.:
>
>     $ git checkout -b shentey_work
>     $ b4 am 20221120150550.63059-1-shentey@gmail.com
>     $ git am
>     ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
>     $ b4 am 20221221170003.2929-1-shentey@gmail.com
>     $ git am
>     ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
>     $ b4 am 20230102213504.14646-1-shentey@gmail.com
>     $ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx
>
>     Now the branch 'shentey_work' contains all the patches and you can test.
>
>     Regards,
>
>     Phil.
>

OK, I didn't see that the "Consolidate PIIX south bridges" series is a
prerequisite.

I will try it - it may take a couple of days because I need to test both
patch series in my environment and I can only work on this in my spare
time.

I will provide Tested-by tags to both series if successful. Otherwise,
I will reply with an explanation of any problems.

Chuck


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 18:39:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 18:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470695.730275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCmBl-0003KZ-1Q; Tue, 03 Jan 2023 18:39:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470695.730275; Tue, 03 Jan 2023 18:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCmBk-0003KS-S7; Tue, 03 Jan 2023 18:39:04 +0000
Received: by outflank-mailman (input) for mailman id 470695;
 Tue, 03 Jan 2023 18:39:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Awo3=5A=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pCmBi-0003KM-Tg
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 18:39:03 +0000
Received: from sonic309-21.consmr.mail.gq1.yahoo.com
 (sonic309-21.consmr.mail.gq1.yahoo.com [98.137.65.147])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e1a14f44-8b95-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 19:38:58 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic309.consmr.mail.gq1.yahoo.com with HTTP; Tue, 3 Jan 2023 18:38:56 +0000
Received: by hermes--production-ne1-7b69748c4d-ljcdv (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID e8d8eabb7d2095d26a89087287432376; 
 Tue, 03 Jan 2023 18:38:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1a14f44-8b95-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <c1532681-0a37-7812-84f6-fd1e5dd576c0@aol.com>
Date: Tue, 3 Jan 2023 13:38:51 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, alex.williamson@redhat.com
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <20230102124605-mutt-send-email-mst@kernel.org>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230102124605-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 5456

On 1/2/2023 12:46 PM, Michael S. Tsirkin wrote:
> On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
> > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > as noted in docs/igd-assign.txt in the Qemu source code.
> > 
> > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > a different slot. This problem often prevents the guest from booting.
> > 
> > The only available workaround is not good: Configure Xen HVM guests to use
> > the old and no longer maintained Qemu traditional device model available
> > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > 
> > To implement this feature in the Qemu upstream device model for Xen HVM
> > guests, introduce the following new functions, types, and macros:
> > 
> > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > * typedef XenPTQdevRealize function pointer
> > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > 
> > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > the xl toolstack with the gfx_passthru option enabled, which sets the
> > igd-passthru=on option to Qemu for the Xen HVM machine type.
> > 
> > The new xen_igd_reserve_slot function also needs to be implemented in
> > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > in which case it does nothing.
> > 
> > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > 
> > Move the call to xen_host_pci_device_get, and the associated error
> > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > initialize the device class and vendor values which enables the checks for
> > the Intel IGD to succeed. The verification that the host device is an
> > Intel IGD to be passed through is done by checking the domain, bus, slot,
> > and function values as well as by checking that gfx_passthru is enabled,
> > the device class is VGA, and the device vendor is Intel.
> > 
> > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>
> I'm not sure why is the issue xen specific. Can you explain?
> Doesn't it affect kvm too?

Yes, it does, and of course this only applies to using the igd in a
guest in legacy mode as described in docs/igd-assign.txt.

Searching the web, I found this successful report of legacy
igd passthrough using kvm:

https://www.reddit.com/r/VFIO/comments/i9dbyp/this_is_how_i_managed_to_passthrough_my_igd/

That user posted the virtual machine xml on pastebin:

https://pastebin.com/vYf3a1gz

For reference, details of my configuration of legacy igd passthrough on
xen are available on the xenproject wiki:

https://wiki.xenproject.org/wiki/Xen_VGA_Passthrough_Tested_Adapters#Intel_display_adapters

As I expected, with kvm, it is possible to specify the slot number
of every pci device in the guest (as well as domain, bus, and function)
in the xml configuration, but this is not easy to do with xen's
xenlight (libxl) toolstack. That is why this patch is specific to xen.

To further explain this:

On xen, the xl.cfg guest configuration file does not allow the
administrator to specify the slot number of the xen platform
pci device, or of the emulated network device and disk controller.
Without this patch to qemu, one of those devices will grab slot 2,
making it impossible to place the passed-through igd at slot 2 on xen.

Another way to solve this problem on xen is to extend libxl so the
administrator can specify the slot number of the emulated qemu
pci devices. The xl.cfg device_model_args_hvm=[ "ARG", "ARG", ...]
setting might also allow the administrator to control the slot
numbers of the emulated qemu pci devices, but I tried that without
success.

Patching qemu to reserve slot 2 for the intel igd when the qemu
igd-passthru=on option for the xenfv machine type is set is a
simpler solution on xen than trying to manually set all the slot
numbers using the device_model_args_hvm option in xl.cfg.
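For reference, the xl.cfg side of this setup is small. A hypothetical
fragment (the PCI BDF is an example, not taken from any real system):

```
# Hypothetical xl.cfg fragment for legacy IGD passthrough with the
# upstream device model.
builder = "hvm"
device_model_version = "qemu-xen"
gfx_passthru = 1        # makes libxl pass igd-passthru=on to qemu
pci = [ '00:02.0' ]     # the host IGD, which must land in guest slot 2
```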

I think kvm users who desire this feature of legacy igd passthrough
would benefit from something like the qemu igd-passthru=on option
which, as far as I know, only applies to the xenfv machine type that is
enabled with the gfx_passthru setting in the xl.cfg configuration file.
Such an option for kvm could allow for qemu to take care of all the
details of configuring the vm correctly for igd legacy passthrough
on kvm instead of requiring the administrator to manually specify
all the settings correctly in the xml configuration file.

I think making igd legacy passthrough easier to configure on kvm
would be a useful patch, but it is beyond the scope of what this patch
is trying to accomplish.

Chuck


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 20:10:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 20:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470703.730296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnbm-0004km-MH; Tue, 03 Jan 2023 20:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470703.730296; Tue, 03 Jan 2023 20:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnbm-0004kF-J9; Tue, 03 Jan 2023 20:10:02 +0000
Received: by outflank-mailman (input) for mailman id 470703;
 Tue, 03 Jan 2023 20:10:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oadl=5A=citrix.com=prvs=36087fe06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCnbl-0004TZ-Bx
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 20:10:01 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 97c03ea5-8ba2-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 21:09:58 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97c03ea5-8ba2-11ed-b8d0-410ff93cb8f0
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 90516756
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH 2/4] xen/version: Drop compat/kernel.c
Date: Tue, 3 Jan 2023 20:09:41 +0000
Message-ID: <20230103200943.5801-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230103200943.5801-1-andrew.cooper3@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
reincludes kernel.c to recompile xen_version() in a compat form.

However, the xen_version hypercall is almost guest-ABI-agnostic; only
XENVER_platform_parameters has a compat split.  Handle this locally, and do
away with the reinclude entirely.

In particular, this removes the final instances of obfuscation via the DO()
macro.

No functional change.  Also saves 2k of .text in the x86 build.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 xen/common/Makefile          |  2 +-
 xen/common/compat/kernel.c   | 53 --------------------------------------------
 xen/common/kernel.c          | 43 ++++++++++++++++++++++-------------
 xen/include/hypercall-defs.c |  2 +-
 xen/include/xlat.lst         |  3 +++
 5 files changed, 33 insertions(+), 70 deletions(-)
 delete mode 100644 xen/common/compat/kernel.c

diff --git a/xen/common/Makefile b/xen/common/Makefile
index 9a3a12b12dea..bbd75b4be6d3 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -59,7 +59,7 @@ obj-y += xmalloc_tlsf.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma lzo unlzo unlz4 unzstd earlycpio,$(n).init.o)
 
-obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o kernel.o memory.o multicall.o xlat.o)
+obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o memory.o multicall.o xlat.o)
 
 ifneq ($(CONFIG_PV_SHIM_EXCLUSIVE),y)
 obj-y += domctl.o
diff --git a/xen/common/compat/kernel.c b/xen/common/compat/kernel.c
deleted file mode 100644
index 804b919bdc72..000000000000
--- a/xen/common/compat/kernel.c
+++ /dev/null
@@ -1,53 +0,0 @@
-/******************************************************************************
- * kernel.c
- */
-
-EMIT_FILE;
-
-#include <xen/init.h>
-#include <xen/lib.h>
-#include <xen/errno.h>
-#include <xen/version.h>
-#include <xen/sched.h>
-#include <xen/guest_access.h>
-#include <asm/current.h>
-#include <compat/xen.h>
-#include <compat/version.h>
-
-extern xen_commandline_t saved_cmdline;
-
-#define xen_extraversion compat_extraversion
-#define xen_extraversion_t compat_extraversion_t
-
-#define xen_compile_info compat_compile_info
-#define xen_compile_info_t compat_compile_info_t
-
-CHECK_TYPE(capabilities_info);
-
-#define xen_platform_parameters compat_platform_parameters
-#define xen_platform_parameters_t compat_platform_parameters_t
-#undef HYPERVISOR_VIRT_START
-#define HYPERVISOR_VIRT_START HYPERVISOR_COMPAT_VIRT_START(current->domain)
-
-#define xen_changeset_info compat_changeset_info
-#define xen_changeset_info_t compat_changeset_info_t
-
-#define xen_feature_info compat_feature_info
-#define xen_feature_info_t compat_feature_info_t
-
-CHECK_TYPE(domain_handle);
-
-#define DO(fn) int compat_##fn
-#define COMPAT
-
-#include "../kernel.c"
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * tab-width: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f8134d3e7a9d..ccee178ff17a 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -18,7 +18,13 @@
 #include <asm/current.h>
 #include <public/version.h>
 
-#ifndef COMPAT
+#ifdef CONFIG_COMPAT
+#include <compat/version.h>
+
+CHECK_compile_info;
+CHECK_feature_info;
+CHECK_build_id;
+#endif
 
 enum system_state system_state = SYS_STATE_early_boot;
 
@@ -463,15 +469,7 @@ static int __init cf_check param_init(void)
 __initcall(param_init);
 #endif
 
-# define DO(fn) long do_##fn
-
-#endif
-
-/*
- * Simple hypercalls.
- */
-
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     bool_t deny = !!xsm_xen_version(XSM_OTHER, cmd);
 
@@ -520,12 +518,27 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     
     case XENVER_platform_parameters:
     {
-        xen_platform_parameters_t params = {
-            .virt_start = HYPERVISOR_VIRT_START
-        };
+#ifdef CONFIG_COMPAT
+        if ( current->hcall_compat )
+        {
+            compat_platform_parameters_t params = {
+                .virt_start = HYPERVISOR_COMPAT_VIRT_START(current->domain),
+            };
+
+            if ( copy_to_guest(arg, &params, 1) )
+                return -EFAULT;
+        }
+        else
+#endif
+        {
+            xen_platform_parameters_t params = {
+                .virt_start = HYPERVISOR_VIRT_START,
+            };
+
+            if ( copy_to_guest(arg, &params, 1) )
+                return -EFAULT;
+        }
 
-        if ( copy_to_guest(arg, &params, 1) )
-            return -EFAULT;
         return 0;
         
     }
diff --git a/xen/include/hypercall-defs.c b/xen/include/hypercall-defs.c
index 41e1af01f6b2..6d361ddfce1b 100644
--- a/xen/include/hypercall-defs.c
+++ b/xen/include/hypercall-defs.c
@@ -245,7 +245,7 @@ multicall                          compat:2 do:2     compat   do       do
 update_va_mapping                  compat   do       -        -        -
 set_timer_op                       compat   do       compat   do       -
 event_channel_op_compat            do       do       -        -        dep
-xen_version                        compat   do       compat   do       do
+xen_version                        do       do       do       do       do
 console_io                         do       do       do       do       do
 physdev_op_compat                  compat   do       -        -        dep
 #if defined(CONFIG_GRANT_TABLE)
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 65f7fe3811c7..f2bae220a6df 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -169,6 +169,9 @@
 !	vcpu_runstate_info		vcpu.h
 ?	vcpu_set_periodic_timer		vcpu.h
 !	vcpu_set_singleshot_timer	vcpu.h
+?	compile_info                    version.h
+?	feature_info                    version.h
+?	build_id                        version.h
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
 ?	flask_access			xsm/flask_op.h
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 20:10:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 20:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470702.730284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnbf-0004EE-DW; Tue, 03 Jan 2023 20:09:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470702.730284; Tue, 03 Jan 2023 20:09:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnbf-0004E7-As; Tue, 03 Jan 2023 20:09:55 +0000
Received: by outflank-mailman (input) for mailman id 470702;
 Tue, 03 Jan 2023 20:09:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oadl=5A=citrix.com=prvs=36087fe06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCnbe-0004E1-CL
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 20:09:54 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 93d66c6d-8ba2-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 21:09:52 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93d66c6d-8ba2-11ed-91b6-6bf2151ebd3b
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91025429
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH 3/4] xen/version: Drop bogus return values for XENVER_platform_parameters
Date: Tue, 3 Jan 2023 20:09:42 +0000
Message-ID: <20230103200943.5801-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230103200943.5801-1-andrew.cooper3@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

A split in virtual address space is only applicable for x86 PV guests.
Furthermore, the information returned for x86 64bit PV guests is wrong.

Explain the problem in version.h, stating the other information that PV guests
need to know.

For 64bit PV guests, and all non-x86-PV guests, return 0, which is strictly
less wrong than the values currently returned.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 xen/common/kernel.c          |  6 ++++--
 xen/include/public/version.h | 20 ++++++++++++++++++++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index ccee178ff17a..70e7dff87488 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -522,7 +522,9 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( current->hcall_compat )
         {
             compat_platform_parameters_t params = {
-                .virt_start = HYPERVISOR_COMPAT_VIRT_START(current->domain),
+                .virt_start = is_pv_vcpu(current)
+                            ? HYPERVISOR_COMPAT_VIRT_START(current->domain)
+                            : 0,
             };
 
             if ( copy_to_guest(arg, &params, 1) )
@@ -532,7 +534,7 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 #endif
         {
             xen_platform_parameters_t params = {
-                .virt_start = HYPERVISOR_VIRT_START,
+                .virt_start = 0,
             };
 
             if ( copy_to_guest(arg, &params, 1) )
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 0ff8bd9077c6..c8325219f648 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -42,6 +42,26 @@ typedef char xen_capabilities_info_t[1024];
 typedef char xen_changeset_info_t[64];
 #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
 
+/*
+ * This API is problematic.
+ *
+ * It is only applicable to guests which share pagetables with Xen (x86 PV
+ * guests), and is supposed to identify the virtual address split between
+ * guest kernel and Xen.
+ *
+ * For 32bit PV guests, it mostly does this, but the caller needs to know that
+ * Xen lives between the split and 4G.
+ *
+ * For 64bit PV guests, Xen lives at the bottom of the upper canonical range.
+ * This previously returned the start of the upper canonical range (which is
+ * the userspace/Xen split), not the Xen/kernel split (which is 8TB further
+ * on).  This now returns 0 because the old number wasn't correct, and
+ * changing it to anything else would be even worse.
+ *
+ * For all guest types using hardware virt extensions, Xen is not mapped into
+ * the guest kernel virtual address space.  This now returns 0, where it
+ * previously returned unrelated data.
+ */
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
     xen_ulong_t virt_start;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 20:10:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 20:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470704.730307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnbo-0005CO-5e; Tue, 03 Jan 2023 20:10:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470704.730307; Tue, 03 Jan 2023 20:10:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnbo-0005Bq-0q; Tue, 03 Jan 2023 20:10:04 +0000
Received: by outflank-mailman (input) for mailman id 470704;
 Tue, 03 Jan 2023 20:10:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oadl=5A=citrix.com=prvs=36087fe06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCnbm-0004TZ-Sh
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 20:10:03 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a018dc3-8ba2-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 21:10:00 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a018dc3-8ba2-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672776600;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=JKdrRr7waCmGtSepF69W8Ba+E4SGmyEJw2vd/HieU84=;
  b=ffSbfg2XRS+ZCjyF/xrg7vJqJ8yWr+HTHFhj4+sdz3Tc2NX+mqRprw4k
   AoDajDy01nxfofsqU5pZzrrBtzup5rBynJbVgu4JIi6k3Ed3U12rG54c6
   EZMfFwaSXVJ7+fbA+1LETXLD+ZSAleBS+1gmXfufsojCbV8NsXL2MLg09
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 90516757
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,297,1665460800"; 
   d="scan'208";a="90516757"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
Date: Tue, 3 Jan 2023 20:09:43 +0000
Message-ID: <20230103200943.5801-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230103200943.5801-1-andrew.cooper3@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Recently in XenServer, we have encountered problems caused by both
XENVER_extraversion and XENVER_commandline having fixed bounds.

More than just the fixed size, the APIs/ABIs are also broken by typedef-ing
an array, and by using an unqualified 'char', which has
implementation-specific signedness.

Provide brand new ops, which are capable of expressing variable length
strings, and mark the older ops as broken.

This fixes all issues around XENVER_extraversion being longer than 15 chars.
More work is required to remove other assumptions about XENVER_commandline
being 1023 chars long.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>

Tested by forcing XENVER_extraversion to be 20 chars long, and confirming that
an untruncated version can be obtained.

API/ABI wise, XENVER_build_id could be merged into xenver_varstring_op(), but
the internal infrastructure is awkward.
---
 xen/common/kernel.c          | 69 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/public/version.h | 56 +++++++++++++++++++++++++++++++++--
 xen/include/xlat.lst         |  1 +
 xen/xsm/flask/hooks.c        |  4 +++
 4 files changed, 128 insertions(+), 2 deletions(-)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 70e7dff87488..56bd6c6f5d9c 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -24,6 +24,7 @@
 CHECK_compile_info;
 CHECK_feature_info;
 CHECK_build_id;
+CHECK_var_string;
 #endif
 
 enum system_state system_state = SYS_STATE_early_boot;
@@ -469,6 +470,66 @@ static int __init cf_check param_init(void)
 __initcall(param_init);
 #endif
 
+static long xenver_varstring_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    const char *str = NULL;
+    size_t sz = 0;
+    union {
+        xen_capabilities_info_t info;
+    } u;
+    struct xen_var_string user_str;
+
+    switch ( cmd )
+    {
+    case XENVER_extraversion2:
+        str = xen_extra_version();
+        break;
+
+    case XENVER_changeset2:
+        str = xen_changeset();
+        break;
+
+    case XENVER_commandline2:
+        str = saved_cmdline;
+        break;
+
+    case XENVER_capabilities2:
+        memset(u.info, 0, sizeof(u.info));
+        arch_get_xen_caps(&u.info);
+        str = u.info;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+
+    if ( !str ||
+         !(sz = strlen(str)) )
+        return -ENODATA; /* failsafe */
+
+    if ( sz > INT32_MAX )
+        return -E2BIG; /* Compat guests.  2G ought to be plenty. */
+
+    if ( guest_handle_is_null(arg) ) /* Length request */
+        return sz;
+
+    if ( copy_from_guest(&user_str, arg, 1) )
+        return -EFAULT;
+
+    if ( user_str.len == 0 )
+        return -EINVAL;
+
+    if ( sz > user_str.len )
+        return -ENOBUFS;
+
+    if ( copy_to_guest_offset(arg, offsetof(struct xen_var_string, buf),
+                              str, sz) )
+        return -EFAULT;
+
+    return sz;
+}
+
 long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     bool_t deny = !!xsm_xen_version(XSM_OTHER, cmd);
@@ -670,6 +731,14 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         return sz;
     }
+
+    case XENVER_extraversion2:
+    case XENVER_capabilities2:
+    case XENVER_changeset2:
+    case XENVER_commandline2:
+        if ( deny )
+            return -EPERM;
+        return xenver_varstring_op(cmd, arg);
     }
 
     return -ENOSYS;
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index c8325219f648..cf2d2ef38b54 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -19,12 +19,20 @@
 /* arg == NULL; returns major:minor (16:16). */
 #define XENVER_version      0
 
-/* arg == xen_extraversion_t. */
+/*
+ * arg == xen_extraversion_t.
+ *
+ * This API/ABI is broken.  Use XENVER_extraversion2 instead.
+ */
 #define XENVER_extraversion 1
 typedef char xen_extraversion_t[16];
 #define XEN_EXTRAVERSION_LEN (sizeof(xen_extraversion_t))
 
-/* arg == xen_compile_info_t. */
+/*
+ * arg == xen_compile_info_t.
+ *
+ * This API/ABI is broken and truncates data.
+ */
 #define XENVER_compile_info 2
 struct xen_compile_info {
     char compiler[64];
@@ -34,10 +42,20 @@ struct xen_compile_info {
 };
 typedef struct xen_compile_info xen_compile_info_t;
 
+/*
+ * arg == xen_capabilities_info_t.
+ *
+ * This API/ABI is broken.  Use XENVER_capabilities2 instead.
+ */
 #define XENVER_capabilities 3
 typedef char xen_capabilities_info_t[1024];
 #define XEN_CAPABILITIES_INFO_LEN (sizeof(xen_capabilities_info_t))
 
+/*
+ * arg == xen_changeset_info_t.
+ *
+ * This API/ABI is broken.  Use XENVER_changeset2 instead.
+ */
 #define XENVER_changeset 4
 typedef char xen_changeset_info_t[64];
 #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
@@ -88,6 +106,11 @@ typedef struct xen_feature_info xen_feature_info_t;
  */
 #define XENVER_guest_handle 8
 
+/*
+ * arg == xen_commandline_t.
+ *
+ * This API/ABI is broken.  Use XENVER_commandline2 instead.
+ */
 #define XENVER_commandline 9
 typedef char xen_commandline_t[1024];
 
@@ -103,6 +126,35 @@ struct xen_build_id {
 };
 typedef struct xen_build_id xen_build_id_t;
 
+/*
+ * Container for an arbitrary variable length string.
+ */
+struct xen_var_string {
+    uint32_t len;                          /* IN:  size of buf[] in bytes. */
+    unsigned char buf[XEN_FLEX_ARRAY_DIM]; /* OUT: requested data.         */
+};
+typedef struct xen_var_string xen_var_string_t;
+
+/*
+ * arg == xen_var_string_t
+ *
+ * Equivalent to the original ops, but with a non-truncating API/ABI.
+ *
+ * Passing arg == NULL is a request for size.  The returned size does not
+ * include a NUL terminator, and has a practical upper limit of INT32_MAX for
+ * 32bit guests.  This is expected to be plenty for the purpose.
+ *
+ * Otherwise, the input xen_var_string_t provides the size of the following
+ * buffer.  Xen will fill the buffer, and return the number of bytes written
+ * (e.g. if the input buffer was longer than necessary).
+ *
+ * These hypercalls can fail, in which case they'll return -XEN_Exx.
+ */
+#define XENVER_extraversion2 11
+#define XENVER_capabilities2 12
+#define XENVER_changeset2    13
+#define XENVER_commandline2  14
+
 #endif /* __XEN_PUBLIC_VERSION_H__ */
 
 /*
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index f2bae220a6df..19cef4424add 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -172,6 +172,7 @@
 ?	compile_info                    version.h
 ?	feature_info                    version.h
 ?	build_id                        version.h
+?	var_string                      version.h
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
 ?	flask_access			xsm/flask_op.h
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 78225f68c15c..a671dcd0322e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1777,15 +1777,18 @@ static int cf_check flask_xen_version(uint32_t op)
         /* These sub-ops ignore the permission checks and return data. */
         return 0;
     case XENVER_extraversion:
+    case XENVER_extraversion2:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_EXTRAVERSION, NULL);
     case XENVER_compile_info:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_COMPILE_INFO, NULL);
     case XENVER_capabilities:
+    case XENVER_capabilities2:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_CAPABILITIES, NULL);
     case XENVER_changeset:
+    case XENVER_changeset2:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_CHANGESET, NULL);
     case XENVER_pagesize:
@@ -1795,6 +1798,7 @@ static int cf_check flask_xen_version(uint32_t op)
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_GUEST_HANDLE, NULL);
     case XENVER_commandline:
+    case XENVER_commandline2:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_COMMANDLINE, NULL);
     case XENVER_build_id:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 20:10:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 20:10:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470705.730318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnc1-00067p-Dm; Tue, 03 Jan 2023 20:10:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470705.730318; Tue, 03 Jan 2023 20:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnc1-00067Z-9K; Tue, 03 Jan 2023 20:10:17 +0000
Received: by outflank-mailman (input) for mailman id 470705;
 Tue, 03 Jan 2023 20:10:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oadl=5A=citrix.com=prvs=36087fe06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCnbz-0004E1-W5
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 20:10:15 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a1e434ae-8ba2-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 21:10:15 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1e434ae-8ba2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672776615;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=DE3aCaTT2lSMY1mYXXzNfG1b9Pe72/lY/yfRxG9StRQ=;
  b=hxfpMNhg8ygN7klPOiR4sivGIBa8mgys/zGE5JI+DKA7rQ6rYV+fPJCB
   HO4lFi4emAie7Grt1NQoMjB2sr8virlzVakHkOatS0Ne3JyHse12/Z1wi
   1PxExUQ2oR41IqlRa6RNZF4xnrDQ+b8NJATSuB6jlgVTs3WdX0Gnktvcb
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 89980395
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,297,1665460800"; 
   d="scan'208";a="89980395"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Daniel Smith
	<dpsmith@apertussolutions.com>
Subject: [PATCH 0/4] Fix truncation of various XENVER_* subops
Date: Tue, 3 Jan 2023 20:09:39 +0000
Message-ID: <20230103200943.5801-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

See patch 4 for details of the problem.  Other patches fix other errors found
while investigating.

This is only the hypervisor side of the change for now, because I want
agreement before starting to untangle the mess which is libxc's helpers for
this.

Linux's sysfs handling for these is also a disaster.  In several places it
makes a heap allocation for a pointer-sized (or two-pointer-sized) object.

Andrew Cooper (4):
  public/version: Change xen_feature_info to have a fixed size
  xen/version: Drop compat/kernel.c
  xen/version: Drop bogus return values for XENVER_platform_parameters
  xen/version: Introduce non-truncating XENVER_* subops

 xen/common/Makefile          |   2 +-
 xen/common/compat/kernel.c   |  53 ---------------------
 xen/common/kernel.c          | 108 ++++++++++++++++++++++++++++++++++++++-----
 xen/include/hypercall-defs.c |   2 +-
 xen/include/public/version.h |  78 +++++++++++++++++++++++++++++--
 xen/include/xlat.lst         |   4 ++
 xen/xsm/flask/hooks.c        |   4 ++
 7 files changed, 181 insertions(+), 70 deletions(-)
 delete mode 100644 xen/common/compat/kernel.c

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 20:10:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 20:10:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470708.730322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnc1-0006B5-Q9; Tue, 03 Jan 2023 20:10:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470708.730322; Tue, 03 Jan 2023 20:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCnc1-00069t-Jm; Tue, 03 Jan 2023 20:10:17 +0000
Received: by outflank-mailman (input) for mailman id 470708;
 Tue, 03 Jan 2023 20:10:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oadl=5A=citrix.com=prvs=36087fe06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCnc1-0004E1-0Y
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 20:10:17 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a33a766e-8ba2-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 21:10:16 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a33a766e-8ba2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672776616;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=VaSW3H4jlJ3dv04QHJz+xn4RvXE21Emi7q8vmojUlYQ=;
  b=K1YvjnCQ8K5mZB3i6BUWM59Lo5NSp758Kd+vknByeD0K4ogC4iuu17bL
   upcLJJ5vpZkSGWggLMcQqkBF6fFNiV6fWLcqhqoreGYC/MyHlxTUEhgiJ
   4g2bsSQgyifm0fmPYe02LPTvlPcElLoA4r/mtLQ4gJJ4A5ifNwlKYRPIP
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 89980396
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,297,1665460800"; 
   d="scan'208";a="89980396"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH 1/4] public/version: Change xen_feature_info to have a fixed size
Date: Tue, 3 Jan 2023 20:09:40 +0000
Message-ID: <20230103200943.5801-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230103200943.5801-1-andrew.cooper3@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

This is technically an ABI change, but Xen doesn't operate in any environment
where "unsigned int" is different to uint32_t, so switch to the explicit form.
This avoids the need to derive (identical) compat logic for handling the
subop.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 xen/include/public/version.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 9c78b4f3b6a4..0ff8bd9077c6 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -50,7 +50,7 @@ typedef struct xen_platform_parameters xen_platform_parameters_t;
 
 #define XENVER_get_features 6
 struct xen_feature_info {
-    unsigned int submap_idx;    /* IN: which 32-bit submap to return */
+    uint32_t     submap_idx;    /* IN: which 32-bit submap to return */
     uint32_t     submap;        /* OUT: 32-bit submap */
 };
 typedef struct xen_feature_info xen_feature_info_t;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 20:41:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 20:41:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470739.730340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCo6S-0002Po-A3; Tue, 03 Jan 2023 20:41:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470739.730340; Tue, 03 Jan 2023 20:41:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCo6S-0002Ph-7J; Tue, 03 Jan 2023 20:41:44 +0000
Received: by outflank-mailman (input) for mailman id 470739;
 Tue, 03 Jan 2023 20:41:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdqB=5A=oracle.com=ross.philipson@srs-se1.protection.inumbo.net>)
 id 1pCo6R-0002Pb-0a
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 20:41:43 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 05ff00ef-8ba7-11ed-91b6-6bf2151ebd3b;
 Tue, 03 Jan 2023 21:41:40 +0100 (CET)
Received: from pps.filterd (m0246631.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 303Jn0ac005591; Tue, 3 Jan 2023 20:41:13 GMT
Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta01.appoci.oracle.com [138.1.114.2])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3mtbv2w4h7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 03 Jan 2023 20:41:13 +0000
Received: from pps.filterd
 (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.5/8.17.1.5)
 with ESMTP id 303KIn43023165; Tue, 3 Jan 2023 20:41:12 GMT
Received: from nam04-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam04lp2171.outbound.protection.outlook.com [104.47.73.171])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id
 3mvu9s0wnr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 03 Jan 2023 20:41:12 +0000
Received: from BY5PR10MB3793.namprd10.prod.outlook.com (2603:10b6:a03:1f6::14)
 by BY5PR10MB4146.namprd10.prod.outlook.com (2603:10b6:a03:20d::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 20:36:09 +0000
Received: from BY5PR10MB3793.namprd10.prod.outlook.com
 ([fe80::e21f:383b:e0bc:afb4]) by BY5PR10MB3793.namprd10.prod.outlook.com
 ([fe80::e21f:383b:e0bc:afb4%3]) with mapi id 15.20.5944.019; Tue, 3 Jan 2023
 20:36:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05ff00ef-8ba7-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=message-id : date :
 subject : to : cc : references : from : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2022-7-12;
 bh=PMWevr6d4J682B7tfyH8LkkhLNv+gcCfzKsdi7NhCZE=;
 b=nd2trsc4+12C7q/7zRvaNkXkPnx/0ejO8moafSoxlrAm2JhnHwYKptCG8WZMUlYBMGkJ
 pxEbyw6vmQnyNCmDT4soxD8NztIwcrjKHGSsxyVqI3M9HsOIvzkbJuvmeGIy/7hRcH/c
 l2kRPG9GKrKVpkwRBU6tVtrQf+qL1aYw8c9OdwNSvFmm4mhlpQcS3wXD0Dfavr+vRzjh
 b9byZryfdclBh238baG8UXgH1iIvEEDhDNP3GNbjTzPdSfbxMY51RvyIxZIrHJFF8qJx
 BIEaS/5BgzGF0B4EtLtm85lEMYq0mcX5Bvdl4hSJEdBFkKcrmGjw30HW59ZgWIj3Mei4 Jg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DNQUo2+lB00qyENmtMdgCjwesqpzLgdLfIZBiYFqYnjLZOJlndHyNoEoOvYAK7I41OMuToLAXiQVrv/kdanCoV7F0gK6URYIc0lWFTyI2xWLDPY2hjEdkWfG8p7g56/naDnIMz9+vI/YBotIYTrQYhzxMO/84jLQl+QGTMyX8yRrRNZKBbEH5kllUVACWlXoo56DrwGFsWOTlBa+pX1isnUMtXxV4eaeWP2XXdNxDolv8bvdv2WZTgmSa3XaOIyO4wj3RU25NtZATerXUSYdaNX9bdRuYg2OZ6qqn1WkvtKXYjB8qS9yKdBQgNLJ2T5Fd1aq7mq1U0uEfVlsnrzdPw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PMWevr6d4J682B7tfyH8LkkhLNv+gcCfzKsdi7NhCZE=;
 b=Z0Q4gUQQHKtkdngFWn+9/VbJ94XAGpXOrpPhC3H90MIYrDL3jrc6zRlVO2eu30WLk3uvILNU46AgR0YEQ0T7bwoAHnR8ZhJGqnJsXul/zrzfkOMURr2w70LHJ0mc1FPJc37tkW2QGRGEVc4+fBdsSlgHA0U1xqFIUhmeNa3AzRzu7UiTzTPFD73dT/oxUyJSYk+6T6kFHq4312tFa+8X5ScJY3cQ8PMLd9gX7/JXdZh6X/BsPqgXzY7v7eJEFfxYygdDiISnV0qPoZzfxcw/ZAUwYNykTXlMCo9jQ30Sluo3FQOxY8XfGAdFYkN67a/Gnlw65VmE5d/5gIOoWpG+QA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PMWevr6d4J682B7tfyH8LkkhLNv+gcCfzKsdi7NhCZE=;
 b=MJU5uWrNBr8HQUmh81E3YXppBk7ko0cVsW46s3ESe3SPDj85WGg36WHRJePKk4hTgQWrtKFGFafnhdEi9gzeQrrzkEyUAuIROx3P79vjiRXw6ikSgOc+MzRmpYpsV5NqfUK11DJHsDDJYmrrHtm+CKck1wpfAPjij7Zu8nD1NSg=
Message-ID: <6dbb8c10-1c3d-2ad8-0491-049af8ccd89b@oracle.com>
Date: Tue, 3 Jan 2023 15:36:03 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.1
Subject: Re: [PATCH v2 0/2] x86: Check return values for early memory/IO remap
 calls
Content-Language: en-US
To: linux-kernel@vger.kernel.org, x86@kernel.org, bp@alien8.de
Cc: dpsmith@apertussolutions.com, tglx@linutronix.de, mingo@redhat.com,
        hpa@zytor.com, luto@amacapital.net, dave.hansen@linux.intel.com,
        kanth.ghatraju@oracle.com, trenchboot-devel@googlegroups.com,
        jailhouse-dev@googlegroups.com, jan.kiszka@siemens.com,
        xen-devel@lists.xenproject.org, jgross@suse.com,
        boris.ostrovsky@oracle.com, andrew.cooper3@citrix.com
References: <20221110154521.613472-1-ross.philipson@oracle.com>
From: Ross Philipson <ross.philipson@oracle.com>
In-Reply-To: <20221110154521.613472-1-ross.philipson@oracle.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: BN9PR03CA0543.namprd03.prod.outlook.com
 (2603:10b6:408:138::8) To BY5PR10MB3793.namprd10.prod.outlook.com
 (2603:10b6:a03:1f6::14)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BY5PR10MB3793:EE_|BY5PR10MB4146:EE_
X-MS-Office365-Filtering-Correlation-Id: ed08d174-8445-49ac-e4cc-08daedca2493
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ed08d174-8445-49ac-e4cc-08daedca2493
X-MS-Exchange-CrossTenant-AuthSource: BY5PR10MB3793.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jan 2023 20:36:08.2007
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3wUYta/Sway5MJS5K1zPkGzSmHXn3y6PYL0T0pQkqNW5bBxQLCpYXQ6IKvf8+W2zrD7YJlBfI+yW6bHuNDSj8IEovoM5nmcxhKZuLKPQ9AA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB4146
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1
 definitions=2023-01-03_07,2023-01-03_02,2022-06-22_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 phishscore=0 suspectscore=0
 mlxlogscore=999 adultscore=0 malwarescore=0 bulkscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000
 definitions=main-2301030175
X-Proofpoint-GUID: OcoREfwsY4SJgwNelb5L45jMWqgk5Nsj
X-Proofpoint-ORIG-GUID: OcoREfwsY4SJgwNelb5L45jMWqgk5Nsj

On 11/10/22 10:45, Ross Philipson wrote:
> While sending an earlier patch set, it was discovered that there are a
> number of places in early x86 code where the functions early_memremap()
> and early_ioremap() are called but the returned pointer is not checked
> for NULL. Since NULL can be returned for a couple of reasons, the return
> value should always be checked.
> 
> This set fixes the places where the checks were missing. It was not always
> clear what the best failure mode should be when NULL is detected. In files
> where the surrounding code already tended to pr_warn() or panic(), for
> example, the new checks do the same. Elsewhere the choice was based on how
> fatal the failure would end up being. The review process may point out
> places where this should be changed.

Borislav,

I just wanted to get your thoughts here, since it was at your prompting 
that I sent this second patch set making the checking of return values 
from early_memremap() and early_ioremap() consistent. I have gotten 
Reviewed-by tags from maintainers of some of the affected areas, 
approving the return handling there. I also got two replies essentially 
questioning the underlying approach. I replied that I did what you asked 
me to do, and have not heard back. How would you like me to proceed?

Thanks,

Ross Philipson

> 
> Changes in v2:
>   - Added notes in comments about why panic() was used in some cases and
> the fact that maintainers approved the usage.
>   - Added pr_fmt macros in changed files to allow proper usage of pr_*
> printing macros.
> 
> Ross Philipson (2):
>    x86: Check return values from early_memremap calls
>    x86: Check return values from early_ioremap calls
> 
>   arch/x86/kernel/apic/x2apic_uv_x.c |  2 ++
>   arch/x86/kernel/devicetree.c       | 13 ++++++++++
>   arch/x86/kernel/e820.c             | 12 +++++++--
>   arch/x86/kernel/early_printk.c     |  2 ++
>   arch/x86/kernel/jailhouse.c        |  6 +++++
>   arch/x86/kernel/mpparse.c          | 51 ++++++++++++++++++++++++++++----------
>   arch/x86/kernel/setup.c            | 19 +++++++++++---
>   arch/x86/kernel/vsmp_64.c          |  3 +++
>   arch/x86/xen/enlighten_hvm.c       |  2 ++
>   arch/x86/xen/mmu_pv.c              |  8 ++++++
>   arch/x86/xen/setup.c               |  2 ++
>   11 files changed, 102 insertions(+), 18 deletions(-)
> 



From xen-devel-bounces@lists.xenproject.org Tue Jan 03 20:47:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 20:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470747.730351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCoCL-0003An-2P; Tue, 03 Jan 2023 20:47:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470747.730351; Tue, 03 Jan 2023 20:47:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCoCK-0003Ag-V2; Tue, 03 Jan 2023 20:47:48 +0000
Received: by outflank-mailman (input) for mailman id 470747;
 Tue, 03 Jan 2023 20:47:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pCoCJ-0003Aa-5R
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 20:47:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCoCI-0004pF-BW; Tue, 03 Jan 2023 20:47:46 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232] helo=[192.168.2.2])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCoCI-0000Ww-5T; Tue, 03 Jan 2023 20:47:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=VsuQH8n3GuXWuA+VQJgibmsw0UwoS3Jh87+EAp1bEec=; b=Pp7lzDQvvlB4HdI9AWWQk0Q83z
	Y374uwIdkfvwX2CBHBJ0SjddoHdJ77RdnlFJ+zXIlRz20PYoDoyPaC6ie1XhsdkF8/mhl59C/Oeab
	H/zGT8IbuAkvYytW0l2jbui5LMTD5SHz/rlLISiDdBdnN3/p8AELBJUi5AxL/0XjVOuo=;
Message-ID: <a1300a9e-26a2-4307-d1c3-102729f16a09@xen.org>
Date: Tue, 3 Jan 2023 20:47:43 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Jan Beulich <JBeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-5-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230103200943.5801-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 03/01/2023 20:09, Andrew Cooper wrote:
> diff --git a/xen/include/public/version.h b/xen/include/public/version.h
> index c8325219f648..cf2d2ef38b54 100644
> --- a/xen/include/public/version.h
> +++ b/xen/include/public/version.h
> @@ -19,12 +19,20 @@
>   /* arg == NULL; returns major:minor (16:16). */
>   #define XENVER_version      0
>   
> -/* arg == xen_extraversion_t. */
> +/*
> + * arg == xen_extraversion_t.
> + *
> + * This API/ABI is broken.  Use XENVER_extraversion2 instead.

I read this as newer tools should never try to call XENVER_extraversion. 
But I don't think this is what you intend to say, correct? If so, I 
would say that an OS should first try XENVER_extraversion2 and then 
fall back to XENVER_extraversion if it returns -ENOSYS.

Same goes for the new hypercalls.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 21:23:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 21:23:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470755.730361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCokO-0007VZ-Ob; Tue, 03 Jan 2023 21:23:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470755.730361; Tue, 03 Jan 2023 21:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCokO-0007VS-Lt; Tue, 03 Jan 2023 21:23:00 +0000
Received: by outflank-mailman (input) for mailman id 470755;
 Tue, 03 Jan 2023 21:22:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oadl=5A=citrix.com=prvs=36087fe06=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCokN-0007VM-5U
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 21:22:59 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9515cb1-8bac-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 22:22:56 +0100 (CET)
Received: from mail-sn1nam02lp2043.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Jan 2023 16:22:39 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5554.namprd03.prod.outlook.com (2603:10b6:208:290::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Tue, 3 Jan
 2023 21:22:34 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Tue, 3 Jan 2023
 21:22:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9515cb1-8bac-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672780976;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=gCKpOgleSfHJ92c+Y9v38Ry7yOOQgS66T1rdtC84y1E=;
  b=XV+AuZl6u0KAwZchSgrJJ1ou7YJuoso4YULHPSHHpHJ4fafF2vxowT0j
   yv08cEvkt+mt9RJ21c27LzQmeVz2RrW3N/P+vLx75k7GF2kOJWzE2u6yi
   Qd7CrBr/aykNqZsdkeaAvE+jA05C5NBtV5KZ4EPwR1phs8EVsGb0N74VM
   Q=;
X-IronPort-RemoteIP: 104.47.57.43
X-IronPort-MID: 90524720
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,297,1665460800"; 
   d="scan'208";a="90524720"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Uu3S+0Pt8nu/TYtlbsPc5gfNPoQnYZJoD5rhtJxZarKOYtwHpNAxLk/KHng00ZwQBDRrBOqdfEP4iiJQEaC5D1dFxSAYHIHHHdvkZIH+bRFFtr+fawAxdKODVG1lFiJ0lmkRwtPhMiXpSZaMEHN75wuQdDWT5BY0MJvR81AQb8q9j8wLwiOFuytpzVJNT4UO+D/B9UjPRSC/69HVTJ/gP66/qc7Q4UYat8oSPnKURr796/lZOs3gptkNl4JGMO5oK9n8dOLmtJ0Y/ssG302wj+HPQXnHtYz8zh4wXH3PAjJiN/pQXt1jmQR/9sMOUZpl486YwHPvZ/lKFkG36qsYPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gCKpOgleSfHJ92c+Y9v38Ry7yOOQgS66T1rdtC84y1E=;
 b=dqTvYpEya0vaCpfIInDkOKoDSM9nyu70QTe+ZnT2NmqLgZZ7YJJaO8iYu4uXzVqFspduKQ5J8DHdrQu4+dJP8+szf1DW2K0DWxNAZzJU3Z8LWDVQooXK0K87i7TZJDenNHPAb2xiPRf/wRSTmlcWksA5UH8yxOwznSlXHrOWhB9QUsjnemg//LQ41nyxFMREc/egdUivrkcRKDxUEhjauTojbJ4gY4RN6ljxi389AyjpMI5CUOc25MoAiVU82QmEzhRsmFthjNr7EAZsCLHXKyUrtoh1ZAIZDY4vOwUmgG8DkSkZBv6F2wLZAsBlDhWsB+tRhOVBiRIK7eJ6VE+k5g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gCKpOgleSfHJ92c+Y9v38Ry7yOOQgS66T1rdtC84y1E=;
 b=pCVfVTaI20YXPELy/b+hMIYOXSd6aPPwp8BKmlp785tdJdxLxl5T+lagyUi7PSsZA8GJqrU0qlBt2a/LX4j6mQNgGmMgVn7tzRhUTPeiSS+Qhs90xLlOe8Sdc/XRnouneDq+z2eGeJmA8Yooep1OB3pZZORaFfb2MUD8TNLtu0w=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
Thread-Topic: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_*
 subops
Thread-Index: AQHZH69eYU4qCMuPJUaHOPPUV+yM+a6NKeKAgAAJu4A=
Date: Tue, 3 Jan 2023 21:22:34 +0000
Message-ID: <88f0365f-2ab7-ed8d-fc99-0b1f2a4837aa@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-5-andrew.cooper3@citrix.com>
 <a1300a9e-26a2-4307-d1c3-102729f16a09@xen.org>
In-Reply-To: <a1300a9e-26a2-4307-d1c3-102729f16a09@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BLAPR03MB5554:EE_
x-ms-office365-filtering-correlation-id: a167821b-bc1e-4bdf-230f-08daedd0a176
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <29864C5C5355714C9F9A915CB1444921@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a167821b-bc1e-4bdf-230f-08daedd0a176
X-MS-Exchange-CrossTenant-originalarrivaltime: 03 Jan 2023 21:22:34.3888
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5554

On 03/01/2023 8:47 pm, Julien Grall wrote:
> Hi Andrew,
>
> On 03/01/2023 20:09, Andrew Cooper wrote:
>> diff --git a/xen/include/public/version.h b/xen/include/public/version.h
>> index c8325219f648..cf2d2ef38b54 100644
>> --- a/xen/include/public/version.h
>> +++ b/xen/include/public/version.h
>> @@ -19,12 +19,20 @@
>>   /* arg == NULL; returns major:minor (16:16). */
>>   #define XENVER_version      0
>>
>> -/* arg == xen_extraversion_t. */
>> +/*
>> + * arg == xen_extraversion_t.
>> + *
>> + * This API/ABI is broken.  Use XENVER_extraversion2 instead.
>
> I read this as newer tools should never try to call
> XENVER_extraversion. But I don't think this is what you intend to say,
> correct? If so, I would say that an OS should first try
> XENVER_extraversion2 and then fall back to XENVER_extraversion if it
> returns -ENOSYS.
>
> Same goes for the new hypercalls.

Right, but that's sufficiently basic that it goes without saying.

This is not a "my first introduction to writing code" tutorial.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 21:40:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 21:40:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470762.730373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCp1H-0001Vo-5k; Tue, 03 Jan 2023 21:40:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470762.730373; Tue, 03 Jan 2023 21:40:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCp1H-0001Vh-1Q; Tue, 03 Jan 2023 21:40:27 +0000
Received: by outflank-mailman (input) for mailman id 470762;
 Tue, 03 Jan 2023 21:40:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pCp1F-0001Vb-Mu
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 21:40:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCp1F-0005pj-GI; Tue, 03 Jan 2023 21:40:25 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232] helo=[192.168.2.2])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCp1F-0002lG-9N; Tue, 03 Jan 2023 21:40:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <750237c6-d20f-a66a-45d0-3cefcbe0efed@xen.org>
Date: Tue, 3 Jan 2023 21:40:22 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich
 <JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-5-andrew.cooper3@citrix.com>
 <a1300a9e-26a2-4307-d1c3-102729f16a09@xen.org>
 <88f0365f-2ab7-ed8d-fc99-0b1f2a4837aa@citrix.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <88f0365f-2ab7-ed8d-fc99-0b1f2a4837aa@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 03/01/2023 21:22, Andrew Cooper wrote:
> On 03/01/2023 8:47 pm, Julien Grall wrote:
>> On 03/01/2023 20:09, Andrew Cooper wrote:
>>> diff --git a/xen/include/public/version.h b/xen/include/public/version.h
>>> index c8325219f648..cf2d2ef38b54 100644
>>> --- a/xen/include/public/version.h
>>> +++ b/xen/include/public/version.h
>>> @@ -19,12 +19,20 @@
>>>    /* arg == NULL; returns major:minor (16:16). */
>>>    #define XENVER_version      0
>>>    -/* arg == xen_extraversion_t. */
>>> +/*
>>> + * arg == xen_extraversion_t.
>>> + *
>>> + * This API/ABI is broken.  Use XENVER_extraversion2 instead.
>>
>> I read this as newer tools should never try to call
>> XENVER_extraversion. But I don't think this is what you intend to say,
>> correct? If so, I would say that an OS should first try
>> XENVER_extraversion2 and then fallback to XENVER_extraversion if it
>> returns -ENOSYS.
>>
>> Same goes for the new hypercalls.
> 
> Right, but that's sufficiently basic that it goes without saying.
It never hurts to state the obvious. I couldn't say the same for the 
other way around.
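
The probe-and-fallback sequence described above can be sketched as follows. This is a self-contained illustration, not guest code: `xen_version_op()` stands in for the real `HYPERVISOR_xen_version` hypercall (here modelling an older hypervisor that only implements the truncating subop), and the `XENVER_extraversion2` number is an assumption, not a value from the patch series.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* XENVER_extraversion is 1 in xen/include/public/version.h; the
 * XENVER_extraversion2 value below is made up for this sketch. */
#define XENVER_extraversion   1
#define XENVER_extraversion2  11

/* Stand-in for HYPERVISOR_xen_version: an "old" hypervisor that
 * only knows the original, truncating subop. */
static int xen_version_op(int cmd, void *arg)
{
    if (cmd == XENVER_extraversion) {
        strcpy(arg, "-rc");
        return 0;
    }
    return -ENOSYS;
}

/* Probe the non-truncating subop first; fall back to the old one
 * only when the hypervisor reports -ENOSYS. */
static int get_extraversion(char *buf)
{
    int rc = xen_version_op(XENVER_extraversion2, buf);
    if (rc == -ENOSYS)
        rc = xen_version_op(XENVER_extraversion, buf);
    return rc;
}
```

On a hypervisor without the new subop, the fallback path is taken and the caller still gets a (possibly truncated) string.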

> 
> This is not a "my first introduction to writing code" tutorial.

That's the first introduction to Xen for an OS developer. Adding a few 
words (maybe mentioning the first version that introduced it) would 
likely be welcomed by any developer.

This avoids any misinterpretation and/or losing time playing the git 
blame game...

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 21:50:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 21:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470769.730383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCpAt-00031H-1D; Tue, 03 Jan 2023 21:50:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470769.730383; Tue, 03 Jan 2023 21:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCpAs-00031A-Up; Tue, 03 Jan 2023 21:50:22 +0000
Received: by outflank-mailman (input) for mailman id 470769;
 Tue, 03 Jan 2023 21:50:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Awo3=5A=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pCpAr-000314-QI
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 21:50:22 +0000
Received: from sonic314-20.consmr.mail.gq1.yahoo.com
 (sonic314-20.consmr.mail.gq1.yahoo.com [98.137.69.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9c5d08b8-8bb0-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 22:50:18 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Tue, 3 Jan 2023 21:50:15 +0000
Received: by hermes--production-ne1-7b69748c4d-gv5kc (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 81c81d8f73da6f85953acc29cd3f8c46; 
 Tue, 03 Jan 2023 21:50:11 +0000 (UTC)
X-Inumbo-ID: 9c5d08b8-8bb0-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <cbfdcafc-383e-aea3-d04d-38388fab202f@aol.com>
Date: Tue, 3 Jan 2023 16:50:09 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <20230102124605-mutt-send-email-mst@kernel.org>
 <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
 <20230103081456.1d676b8e.alex.williamson@redhat.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230103081456.1d676b8e.alex.williamson@redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 5329

On 1/3/2023 10:14 AM, Alex Williamson wrote:
> On Mon, 2 Jan 2023 18:10:24 -0500
> Chuck Zmudzinski <brchuckz@netscape.net> wrote:
>
> > On 1/2/23 12:46 PM, Michael S. Tsirkin wrote:
> > > On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:  
> > > > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > > > as noted in docs/igd-assign.txt in the Qemu source code.
> > > > ... 
> > > > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>  
> > >
> > > I'm not sure why the issue is xen specific. Can you explain?
> > > Doesn't it affect kvm too?  
> > 
> > Recall from docs/igd-assign.txt that there are two modes for
> > igd passthrough: legacy and upt, and the igd needs to be
> > at slot 2 only when using legacy mode which gives one
> > single guest exclusive access to the Intel igd.
> > 
> > It's only xen specific insofar as xen does not have support
> > for the upt mode so xen must use legacy mode which
> > requires the igd to be at slot 2. I am not an expert with
>
> UPT mode never fully materialized for direct assignment, the folks at
> Intel championing this scenario left.

Thanks for clarifying that for me.

>
> > kvm, but if I understand correctly, with kvm one can use
> > the upt mode with the Intel i915 kvmgt kernel module
> > and in that case the guest will see a virtual Intel gpu
> > that can be at any arbitrary slot when using kvmgt, and
> > also, in that case, more than one guest can access the
> > igd through the kvmgt kernel module.
>
> This is true, IIRC an Intel vGPU does not need to be in slot 2.
>
> > Again, I am not an expert and do not have as much
> > experience with kvm, but if I understand correctly it is
> > possible to use the legacy mode with kvm and I think you
> > are correct that if one uses kvm in legacy mode and without
> > using the Intel i915 kvmgt kernel module, then it would be
> > necessary to reserve slot 2 for the igd on kvm.
>
> It's necessary to configure the assigned IGD at slot 2 to make it
> functional, yes, but I don't really understand this notion of
> "reserving" slot 2.  If something occupies address 00:02.0 in the
> config, it's the user's or management tool's responsibility to move it
> to make this configuration functional.  Why does QEMU need to play a
> part in reserving this bus address.  IGD devices are not generally
> hot-pluggable either, so it doesn't seem we need to reserve an address
> in case an IGD device is added dynamically later.

As I said in an earlier message in this thread, the xenlight toolstack (libxl)
that is provided as the default toolstack for building xen guests with pci
passthrough is not the most flexible management tool. That is why, in the case
of xen, it is simpler to reserve slot 2 while qemu assigns the slot addresses
of the qemu-emulated pci devices, so that the igd can use slot 2. IIRC, in
hw/pci/pci.c, once the slot value is assigned, it is constant and cannot be
changed later by a management tool.
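
The "reservation" under discussion amounts to skipping slot 2 when qemu
auto-assigns addresses for its emulated devices. A minimal sketch of that
idea, using hypothetical types and function names rather than the actual
hw/pci/pci.c code:

```c
#include <assert.h>
#include <stdbool.h>

#define PCI_SLOTS  32
#define IGD_SLOT   2    /* legacy IGD passthrough wants 00:02.0 free */

/* Hypothetical model of a PCI bus, for illustration only. */
struct pci_bus {
    bool used[PCI_SLOTS];
    bool reserve_igd_slot;   /* e.g. set when igd passthrough is enabled */
};

/* Auto-assign the first free slot, skipping slot 2 when it is
 * reserved for the passed-through IGD. Returns -1 if the bus is full. */
static int pci_auto_assign_slot(struct pci_bus *bus)
{
    for (int slot = 0; slot < PCI_SLOTS; slot++) {
        if (bus->used[slot])
            continue;
        if (slot == IGD_SLOT && bus->reserve_igd_slot)
            continue;
        bus->used[slot] = true;
        return slot;
    }
    return -1;
}
```

With the reservation flag set, emulated devices land at 0, 1, 3, 4, ...
and slot 2 stays free for the IGD. (Real PC machines also pin the host
bridge and ISA bridge to slots 0 and 1; that detail is omitted here.)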

>  
> > Your question makes me curious, and I have not been able
> > to determine if anyone has tried igd passthrough using
> > legacy mode on kvm with recent versions of linux and qemu.
>
> Yes, it works.
>
> > I will try reproducing the problem on kvm in legacy mode with
> > current versions of linux and qemu and report my findings.
> > With kvm, there might be enough flexibility to specify the
> > slot number for every pci device in the guest. Such a
>
> I think this is always the recommendation, libvirt will do this by
> default in order to make sure the configuration is reproducible.  This
> is what we generally rely on for kvm/vfio IGD assignment to place the
> GPU at the correct address.
>
> > capability is not available using the xenlight toolstack
> > for managing xen guests, so I have been using this patch
> > to ensure that the Intel igd is at slot 2 with xen guests
> > created by the xenlight toolstack.
>
> Seems like a deficiency in xenlight.  I'm not sure why QEMU should take
> on this burden to support tool stacks that lack such basic
> features.

So you would prefer to patch xenlight (libxl) to make it flexible enough to properly
handle this case of legacy igd passthrough.

>  
> > The patch as is will only fix the problem on xen, so if the
> > problem exists on kvm also, I agree that the patch should
> > be modified to also fix it on kvm.
>
> AFAICT, it's not a problem on kvm/vfio because we generally make use of
> invocations that specify bus addresses for each device by default,
> making this a configuration requirement for the user or management tool
> stack.  Thanks,

Unfortunately, and as I mentioned in an earlier message on this thread,
the xenlight management tool stack (libxl) is not so flexible and does not
make it easy for the administrator to specify the bus address of each
device. That is why either this patch is needed for igd legacy passthrough
on xen, or the libxl management tool needs to be patched to be flexible
enough for the slot addresses to be assigned correctly using that tool,
instead of relying on qemu to reserve slot 2 for the igd.
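
For comparison, the explicit per-device placement Alex refers to on the
kvm/vfio side is done on the qemu command line. An illustrative fragment
only, not a complete or tested invocation; the host BDF is a placeholder
for wherever the IGD sits on the host:

```shell
# kvm/vfio legacy IGD assignment: the management tool (or the user)
# pins the GPU to guest address 00:02.0 explicitly.
# "host=" is the host-side BDF of the IGD; addr=0x2 selects slot 2.
qemu-system-x86_64 -machine pc -m 4096 \
    -device vfio-pci,host=0000:00:02.0,addr=0x2
```

It is exactly this per-device `addr=` control that libxl-generated
invocations do not expose for the emulated devices.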

If there is a consensus that this should be fixed in libxl instead of in qemu,
I will work on a patch to libxl, but I will wait a while for some feedback from
the xen people before I do that.

Thanks for your feedback,

Chuck


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 22:58:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 22:58:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470778.730395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCqEY-00013Z-86; Tue, 03 Jan 2023 22:58:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470778.730395; Tue, 03 Jan 2023 22:58:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCqEY-00013S-5U; Tue, 03 Jan 2023 22:58:14 +0000
Received: by outflank-mailman (input) for mailman id 470778;
 Tue, 03 Jan 2023 22:58:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Awo3=5A=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pCqEW-00013M-0y
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 22:58:12 +0000
Received: from sonic307-8.consmr.mail.gq1.yahoo.com
 (sonic307-8.consmr.mail.gq1.yahoo.com [98.137.64.32])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 15bb7a1e-8bba-11ed-b8d0-410ff93cb8f0;
 Tue, 03 Jan 2023 23:58:07 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic307.consmr.mail.gq1.yahoo.com with HTTP; Tue, 3 Jan 2023 22:58:05 +0000
Received: by hermes--production-ne1-7b69748c4d-g8q5j (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 0f5dfd348dcdea8c73c8659aacf64335; 
 Tue, 03 Jan 2023 22:58:03 +0000 (UTC)
X-Inumbo-ID: 15bb7a1e-8bba-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <ba4f8fd6-ae10-da60-7ef5-66782f29fdb9@aol.com>
Date: Tue, 3 Jan 2023 17:58:01 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
To: Alex Williamson <alex.williamson@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, qemu-devel@nongnu.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <20230102124605-mutt-send-email-mst@kernel.org>
 <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
 <20230103081456.1d676b8e.alex.williamson@redhat.com>
 <cbfdcafc-383e-aea3-d04d-38388fab202f@aol.com>
In-Reply-To: <cbfdcafc-383e-aea3-d04d-38388fab202f@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 6508

On 1/3/2023 4:50 PM, Chuck Zmudzinski wrote:
> On 1/3/2023 10:14 AM, Alex Williamson wrote:
> > On Mon, 2 Jan 2023 18:10:24 -0500
> > Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> >
> > > On 1/2/23 12:46 PM, Michael S. Tsirkin wrote:
> > > > On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:  
> > > > > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > > > > as noted in docs/igd-assign.txt in the Qemu source code.
> > > > > ... 
> > > > > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>  
> > > >
> > > > I'm not sure why the issue is xen specific. Can you explain?
> > > > Doesn't it affect kvm too?  
> > > 
> > > Recall from docs/igd-assign.txt that there are two modes for
> > > igd passthrough: legacy and upt, and the igd needs to be
> > > at slot 2 only when using legacy mode which gives one
> > > single guest exclusive access to the Intel igd.
> > > 
> > > It's only xen specific insofar as xen does not have support
> > > for the upt mode so xen must use legacy mode which
> > > requires the igd to be at slot 2. I am not an expert with
> >
> > UPT mode never fully materialized for direct assignment, the folks at
> > Intel championing this scenario left.
>
> Thanks for clarifying that for me.
>
> >
> > > kvm, but if I understand correctly, with kvm one can use
> > > the upt mode with the Intel i915 kvmgt kernel module
> > > and in that case the guest will see a virtual Intel gpu
> > > that can be at any arbitrary slot when using kvmgt, and
> > > also, in that case, more than one guest can access the
> > > igd through the kvmgt kernel module.
> >
> > This is true, IIRC an Intel vGPU does not need to be in slot 2.
> >
> > > Again, I am not an expert and do not have as much
> > > experience with kvm, but if I understand correctly it is
> > > possible to use the legacy mode with kvm and I think you
> > > are correct that if one uses kvm in legacy mode and without
> > > using the Intel i915 kvmgt kernel module, then it would be
> > > necessary to reserve slot 2 for the igd on kvm.
> >
> > It's necessary to configure the assigned IGD at slot 2 to make it
> > functional, yes, but I don't really understand this notion of
> > "reserving" slot 2.  If something occupies address 00:02.0 in the
> > config, it's the user's or management tool's responsibility to move it
> > to make this configuration functional.  Why does QEMU need to play a
> > part in reserving this bus address.  IGD devices are not generally
> > hot-pluggable either, so it doesn't seem we need to reserve an address
> > in case an IGD device is added dynamically later.
>
> As I said in an earlier message in this thread, the xenlight toolstack (libxl)
> that is provided as the default toolstack for building xen guests with pci
> passthrough is not the most flexible management tool. That is why, in the case
> of xen, it is simpler to reserve slot 2 while qemu assigns the slot addresses
> of the qemu-emulated pci devices, so that the igd can use slot 2. IIRC, in
> hw/pci/pci.c, once the slot value is assigned, it is constant and cannot be
> changed later by a management tool.
>
> >  
> > > Your question makes me curious, and I have not been able
> > > to determine if anyone has tried igd passthrough using
> > > legacy mode on kvm with recent versions of linux and qemu.
> >
> > Yes, it works.
> >
> > > I will try reproducing the problem on kvm in legacy mode with
> > > current versions of linux and qemu and report my findings.
> > > With kvm, there might be enough flexibility to specify the
> > > slot number for every pci device in the guest. Such a
> >
> > I think this is always the recommendation, libvirt will do this by
> > default in order to make sure the configuration is reproducible.  This
> > is what we generally rely on for kvm/vfio IGD assignment to place the
> > GPU at the correct address.
> >
> > > capability is not available using the xenlight toolstack
> > > for managing xen guests, so I have been using this patch
> > > to ensure that the Intel igd is at slot 2 with xen guests
> > > created by the xenlight toolstack.
> >
> > Seems like a deficiency in xenlight.  I'm not sure why QEMU should take
> > on this burden to support tool stacks that lack such basic
> > features.
>
> So you would prefer to patch xenlight (libxl) to make it flexible enough to properly
> handle this case of legacy igd passthrough.
>
> >  
> > > The patch as is will only fix the problem on xen, so if the
> > > problem exists on kvm also, I agree that the patch should
> > > be modified to also fix it on kvm.
> >
> > AFAICT, it's not a problem on kvm/vfio because we generally make use of
> > invocations that specify bus addresses for each device by default,
> > making this a configuration requirement for the user or management tool
> > stack.  Thanks,
>
> Unfortunately, and as I mentioned in an earlier message on this thread,
> the xenlight management tool stack (libxl) is not so flexible and does not
> make it easy for the administrator to specify the bus address of each
> device. That is why either this patch is needed for igd legacy passthrough
> on xen, or the libxl management tool needs to be patched to be flexible
> enough for the slot addresses to be assigned correctly using that tool,
> instead of relying on qemu to reserve slot 2 for the igd.
>
> If there is a consensus that this should be fixed in libxl instead of in qemu,
> I will work on a patch to libxl, but I will wait a while for some feedback from
> the xen people (Anthony, Paul) before I do that.
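
[Editor's note: for reference, the high-level knobs libxl does expose look
roughly like this hedged sketch of an xl guest config. The host BDF 00:02.0
is an assumption; note the absence of any per-device guest slot option, which
is exactly the gap described above:]

```
# Sketch of an xl guest config for legacy IGD passthrough (assumed values).
builder = "hvm"
gfx_passthru = 1          # makes libxl pass igd-passthrough=on to qemu
pci = [ "00:02.0" ]       # host IGD; the guest slot is chosen by the stack
```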

Hello Anthony and Paul,

I am requesting your feedback to Alex Williamson's suggestion that this
problem with assigning the correct slot address to the igd on xen should
be fixed in libxl instead of in qemu.

It seems to me that the xen folks and the kvm folks have two different
philosophies regarding how a tool stack should be designed. kvm/libvirt
provides much greater flexibility in configuring the guest, which puts
the burden on the administrator to set all the options correctly for
a given feature set. xen/xenlight, by contrast, does not provide so much
flexibility and instead tries to automatically configure the guest based
on a high-level feature option such as igd-passthrough=on, which is
available for xen guests using qemu but not for kvm guests using
qemu.

What do you think? Should libxl be patched, or should the problem be
fixed with this patch to qemu, contrary to Alex's suggestion?

Thanks in advance,

Chuck


From xen-devel-bounces@lists.xenproject.org Tue Jan 03 23:13:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Jan 2023 23:13:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470785.730406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCqSr-0003Qc-GH; Tue, 03 Jan 2023 23:13:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470785.730406; Tue, 03 Jan 2023 23:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCqSr-0003QV-DF; Tue, 03 Jan 2023 23:13:01 +0000
Received: by outflank-mailman (input) for mailman id 470785;
 Tue, 03 Jan 2023 23:12:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JkUc=5A=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pCqSp-0003QP-Qh
 for xen-devel@lists.xenproject.org; Tue, 03 Jan 2023 23:12:59 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 28c971a9-8bbc-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 00:12:57 +0100 (CET)
Received: by mail-wr1-x431.google.com with SMTP id y8so31270773wrl.13
 for <xen-devel@lists.xenproject.org>; Tue, 03 Jan 2023 15:12:57 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb20094284aaccacb64e1.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9428:4aac:cacb:64e1])
 by smtp.gmail.com with ESMTPSA id
 a18-20020a5d4d52000000b0028df2d57204sm16840020wru.81.2023.01.03.15.12.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 03 Jan 2023 15:12:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28c971a9-8bbc-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7+qafBATEoa+2wTKgOzGNp070b+aJSvCBksgwfL9zmI=;
        b=FfjpMyyhXfBZThxZeYGQQO7vyOLV9v0CEBG6dVn0ZgBxugLJMWJFd+HsU69RlII0C5
         /NLql6fmLqkER/sYBYysomuni9ejwEf7pxSGpAaLjsF7eIsaeBgxI1z74MJkrgXL2kup
         nCuuYp4mXX+BAmgm7SJnSWiNnYy1w7RUz530xYxJMgmpoH8y3sHa1My98uOYcLgv7QnL
         xlHE4D85uwdfL1i7el8H9LOGgF+r9iZrSVpxLUQpOXnc36chinVi9laOaZdgnkTVpro0
         ck9PK0oPV359foRYgxXgeM46zviSi6e0Qi7JAOcLHC6FAfFdoQHJMEmicC2WMdlMwzGD
         k7bA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7+qafBATEoa+2wTKgOzGNp070b+aJSvCBksgwfL9zmI=;
        b=1+jdHgsIUKXyam0rAKhbMaMdeNe2Hw/qhMM9jjyuxQUaTh6QvfxulR7znGv6B14ilj
         elo6afXkXW2cbQFp3DGXBLGaKjEviNxonXksOdYZuCo2gljTtKJp+lBnLCUwZwMti2ZT
         b++c0KttX7RqOy3VcPEFr0Vj6otj/QiQTwbfuvye30OPDl6bWzzTnqHU1phpqbzU4Ute
         12WoJ6V0WG45h7VBFUyJ0wr1t7rLzUNNHH0qr7i8jeaxnI5u8mqIDMOONB6/PJ3Rndqp
         DqS0qFQW+XUciao4dVaEReYibzNfjj2D4Qsmns15Y2D4VBt6kkvt3DlC19inXPmRhChQ
         SNoA==
X-Gm-Message-State: AFqh2krWJU0dI4ctT2wFsXMNzEaEOa3TH1yDycLEMC7shWq3yXfpshdc
	poa2/FFJS7glvT4YF9Oxsj8=
X-Google-Smtp-Source: AMrXdXv5WbayTDQhnHzoCHddCGvxv2hlBPr+kTrTQpT4lKo5mnbVdopIJ2APRWebmxbsRfsVjBUHYg==
X-Received: by 2002:a5d:42cc:0:b0:28a:326d:1d11 with SMTP id t12-20020a5d42cc000000b0028a326d1d11mr14209407wrr.53.1672787576887;
        Tue, 03 Jan 2023 15:12:56 -0800 (PST)
Date: Tue, 03 Jan 2023 23:12:48 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>
CC: qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>,
 xen-devel@lists.xenproject.org,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <6360e4a1-dc2b-685e-5e19-62b92eec695b@aol.com>
References: <20230102213504.14646-1-shentey@gmail.com> <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com> <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org> <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com> <6360e4a1-dc2b-685e-5e19-62b92eec695b@aol.com>
Message-ID: <DD07C54B-F562-42B6-A6CD-824670514248@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On January 3, 2023 17:25:35 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>On 1/3/2023 8:38 AM, Bernhard Beschow wrote:
>>
>>
>> On Tue, Jan 3, 2023 at 2:17 PM Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>>
>>     Hi Chuck,
>>
>>     On 3/1/23 04:15, Chuck Zmudzinski wrote:
>>     > On 1/2/23 4:34 PM, Bernhard Beschow wrote:
>>     >> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
>>     >> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
>>     >> machine agnostic to the precise southbridge being used. 2/ will become
>>     >> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
>>     >> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>>     >>
>>     >> Testing done:
>>     >> None, because I don't know how to conduct this properly :(
>>     >>
>>     >> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>>     >>           "[PATCH v4 00/30] Consolidate PIIX south bridges"
>>
>>     This series is based on a previous series:
>>     https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
>>     (which itself also is).
>>
>>     >> Bernhard Beschow (6):
>>     >>   include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
>>     >>   hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>>     >>   hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>>     >>   hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>>     >>   hw/isa/piix: Resolve redundant k->config_write assignments
>>     >>   hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>>     >>
>>     >>  hw/i386/pc_piix.c           | 34 ++++++++++++++++--
>>     >>  hw/i386/xen/xen-hvm.c       |  9 +++--
>>     >>  hw/isa/piix.c               | 66 +----------------------------------
>>     >
>>     > This file does not exist on the Qemu master branch.
>>     > But hw/isa/piix3.c and hw/isa/piix4.c do exist.
>>     >
>>     > I tried renaming it from piix.c to piix3.c in the patch, but
>>     > the patch set still does not apply cleanly on my tree.
>>     >
>>     > Is this patch set re-based against something other than
>>     > the current master Qemu branch?
>>     >
>>     > I have a system that is suitable for testing this patch set, but
>>     > I need guidance on how to apply it to the Qemu source tree.
>>
>>     You can ask Bernhard to publish a branch with the full work,
>>
>>
>> Hi Chuck,
>>
>> ... or just visit https://patchew.org/QEMU/20230102213504.14646-1-shentey@gmail.com/ .
>> There you'll find a git tag with a complete history and all instructions!
>>
>> Thanks for giving my series a test ride!
>>
>> Best regards,
>> Bernhard
>>
>>     or apply each series locally. I use the b4 tool for that:
>>     https://b4.docs.kernel.org/en/latest/installing.html
>>
>>     i.e.:
>>
>>     $ git checkout -b shentey_work
>>     $ b4 am 20221120150550.63059-1-shentey@gmail.com
>>     $ git am ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
>>     $ b4 am 20221221170003.2929-1-shentey@gmail.com
>>     $ git am ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
>>     $ b4 am 20230102213504.14646-1-shentey@gmail.com
>>     $ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx
>>
>>     Now the branch 'shentey_work' contains all the patches and you can test.
>>
>>     Regards,
>>
>>     Phil.
>>
>
>OK, I didn't see that the "Consolidate PIIX south bridges" series is a
>prerequisite.
>
>I will try it - it may take a couple of days because I need to test both
>patch series in my environment and I can only work on this in my spare
>time.
>
>I will provide Tested-by tags to both series if successful. Otherwise,
>I will reply with an explanation of any problems.

Sounds good! You don't need to test the prerequisite though since it is
already reviewed. It would be completely sufficient if you could test this
series to fill in the gap for my limited Xen knowledge -- thanks!

Best regards,
Bernhard

>
>Chuck


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 00:38:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 00:38:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470792.730418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCrnG-0003ux-UJ; Wed, 04 Jan 2023 00:38:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470792.730418; Wed, 04 Jan 2023 00:38:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCrnG-0003uq-P0; Wed, 04 Jan 2023 00:38:10 +0000
Received: by outflank-mailman (input) for mailman id 470792;
 Wed, 04 Jan 2023 00:38:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCrnE-0003uf-UK; Wed, 04 Jan 2023 00:38:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCrnE-0002BX-Rl; Wed, 04 Jan 2023 00:38:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCrnE-0004wD-A9; Wed, 04 Jan 2023 00:38:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCrnE-0000WN-9O; Wed, 04 Jan 2023 00:38:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=E3bsdIUnk3QSGeuOzyoRSHT+vVNTZiIud/NMZwF6ZU0=; b=bbQwdGaqHfbDYT2em1vZLhV1Mx
	neUqda85SaonsN1WPuPSBLIfh1W7KZV9jIVlYOaCQv6YsL7cqvtA/CxCao8ZQBYXAZzq0swMj1ive
	chcNqmBUr6msaPi2o97nDCdXigCjgk841QUXeWieDY1vy57p5I38g3jQj0UFFxFPR3Ps=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175559-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175559: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=69b41ac87e4a664de78a395ff97166f0b2943210
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 00:38:08 +0000

flight 175559 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175559/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                69b41ac87e4a664de78a395ff97166f0b2943210
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   88 days
Failing since        173470  2022-10-08 06:21:34 Z   87 days  182 attempts
Testing same since   175552  2023-01-02 21:13:23 Z    1 days    3 attempts

------------------------------------------------------------
3257 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 497265 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:08:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470805.730428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsGI-0001Cy-Ez; Wed, 04 Jan 2023 01:08:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470805.730428; Wed, 04 Jan 2023 01:08:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsGI-0001Cr-Ba; Wed, 04 Jan 2023 01:08:10 +0000
Received: by outflank-mailman (input) for mailman id 470805;
 Wed, 04 Jan 2023 01:08:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pCsGH-0001Cl-E2
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:08:09 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3ef0ec07-8bcc-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 02:08:06 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id D7277B810A5;
 Wed,  4 Jan 2023 01:08:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 908D2C433D2;
 Wed,  4 Jan 2023 01:08:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ef0ec07-8bcc-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672794484;
	bh=BMlVarPMrb1stKvQSfClz/2+qPszURvdTGuLCEAjXoQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=PdY8whexeyDHkc9yR78kXhn6ArLt0hzS+qjbzyW/uk8WiBBzI0Uzp7hiwPQH6h8bE
	 XFZhPsZw+QZkGNnkeyNOhsd+bBloGR6UxIomrCVyBujXVHrxkELkXAoC1aAiaQtoQF
	 BhuzQiAM69+1SScIIhDnuB9Vf+Aagh0QzbClTXaQRHSy7lQD/tBxQhZuWrS631AtOd
	 u9LWyMlNSAL17atr6Uobw8SbrtYSPY14X6KXbGLFCUJAE7nXPMeoyErnMGBNkzMuEY
	 GyZ9aCYCfPg0/oUbrvfNp1HFMZFmtDbruOv4nQyKBDo9FmUxDB5BUcRCZ5UqIqCwHZ
	 A/EHdi2M9o9/g==
Date: Tue, 3 Jan 2023 17:08:01 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 1/6] CI: Drop automation/configs/
In-Reply-To: <20221230003848.3241-2-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031707540.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-2-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 30 Dec 2022, Andrew Cooper wrote:
> Having 3 extra hypervisor builds on the end of a full build is deeply
> confusing to debug if one of them fails, because the .config file presented in
> the artefacts is not the one which caused a build failure.  Also, the log
> tends to be truncated in the UI.
> 
> PV-only is tested as part of PV-Shim in a full build anyway, so doesn't need
> repeating.  HVM-only and neither will come up frequently in randconfig, so
> drop all the logic here to simplify things.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  automation/configs/x86/hvm_only_config  |  3 ---
>  automation/configs/x86/no_hvm_pv_config |  3 ---
>  automation/configs/x86/pv_only_config   |  3 ---
>  automation/scripts/build                | 21 ---------------------
>  4 files changed, 30 deletions(-)
>  delete mode 100644 automation/configs/x86/hvm_only_config
>  delete mode 100644 automation/configs/x86/no_hvm_pv_config
>  delete mode 100644 automation/configs/x86/pv_only_config
> 
> diff --git a/automation/configs/x86/hvm_only_config b/automation/configs/x86/hvm_only_config
> deleted file mode 100644
> index 9efbddd5353b..000000000000
> --- a/automation/configs/x86/hvm_only_config
> +++ /dev/null
> @@ -1,3 +0,0 @@
> -CONFIG_HVM=y
> -# CONFIG_PV is not set
> -# CONFIG_DEBUG is not set
> diff --git a/automation/configs/x86/no_hvm_pv_config b/automation/configs/x86/no_hvm_pv_config
> deleted file mode 100644
> index 0bf6a8e46818..000000000000
> --- a/automation/configs/x86/no_hvm_pv_config
> +++ /dev/null
> @@ -1,3 +0,0 @@
> -# CONFIG_HVM is not set
> -# CONFIG_PV is not set
> -# CONFIG_DEBUG is not set
> diff --git a/automation/configs/x86/pv_only_config b/automation/configs/x86/pv_only_config
> deleted file mode 100644
> index e9d8b4a7c753..000000000000
> --- a/automation/configs/x86/pv_only_config
> +++ /dev/null
> @@ -1,3 +0,0 @@
> -CONFIG_PV=y
> -# CONFIG_HVM is not set
> -# CONFIG_DEBUG is not set
> diff --git a/automation/scripts/build b/automation/scripts/build
> index a5934190634b..5dafa72ba540 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -85,24 +85,3 @@ if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
>          cp -r dist binaries/
>      fi
>  fi
> -
> -if [[ "${hypervisor_only}" == "y" ]]; then
> -    # If we are build testing a specific Kconfig exit now, there's no point in
> -    # testing all the possible configs.
> -    exit 0
> -fi
> -
> -# Build all the configs we care about
> -case ${XEN_TARGET_ARCH} in
> -    x86_64) arch=x86 ;;
> -    *) exit 0 ;;
> -esac
> -
> -cfg_dir="automation/configs/${arch}"
> -for cfg in `ls ${cfg_dir}`; do
> -    echo "Building $cfg"
> -    make -j$(nproc) -C xen clean
> -    rm -f xen/.config
> -    make -C xen KBUILD_DEFCONFIG=../../../../${cfg_dir}/${cfg} defconfig
> -    make -j$(nproc) -C xen
> -done
> -- 
> 2.11.0
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470812.730439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsIn-0002aC-Qe; Wed, 04 Jan 2023 01:10:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470812.730439; Wed, 04 Jan 2023 01:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsIn-0002a5-O2; Wed, 04 Jan 2023 01:10:45 +0000
Received: by outflank-mailman (input) for mailman id 470812;
 Wed, 04 Jan 2023 01:10:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pCsIm-0002Ze-KY
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:10:44 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9c77776d-8bcc-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 02:10:43 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 1CFA4B810A5;
 Wed,  4 Jan 2023 01:10:43 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EC4D6C433EF;
 Wed,  4 Jan 2023 01:10:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c77776d-8bcc-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672794641;
	bh=GDcvH45biK4FMGW3wYmyxksscV4VpSyTbRTo0Z0xoQA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HPSfISB5bqRcObrshfvAhQQJ7L8WUNPmN9FAgm5AXd1J0bW/8B0NEXM3iZgGsbKIJ
	 98r3rcHwzqZVe3gy6+hcSnZpFybisbvCUp2LDwID3sY8mAknXpf555maiIAv7zJ9gJ
	 UP7ZnF5qxmP12xDyY8hSioE3Vl0ZKU0A0MaWcxuxcicCCuUEdMWSd2mEus4WzjEWMs
	 UARo1Oyc9E0qUDqxzVCEt2iGcZBGdbPV9tLVpomcTsfEE43jo00Qcs9lKn4rZTYxDn
	 35Z2EsyfgpvLGRw7snf4QGRBeXUsKBOnG0+VKYx0RUGfiEDiDZFmLReRgBx/ax2aDz
	 rBGyRA2zBqiSQ==
Date: Tue, 3 Jan 2023 17:10:38 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 2/6] CI: Remove guesswork about which artefacts to
 preserve
In-Reply-To: <20221230003848.3241-3-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031709570.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-3-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 30 Dec 2022, Andrew Cooper wrote:
> Preserve the artefacts based on the `make` rune we actually ran, rather than
> guesswork about which rune we would have run based on other settings.
> 
> Note that the ARM qemu smoke tests depend on finding binaries/xen even from
> full builds.  Also note that the Jessie-32 containers build tools but not
> Xen.
> 
> This means the x86_32 builds now store relevant artefacts.  No change in other
> configurations.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  automation/scripts/build | 22 ++++++++++++++--------
>  1 file changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 5dafa72ba540..8dee1cbbc251 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -70,18 +70,24 @@ if [[ "${CC}" == "gcc" && `cc-ver` -lt 0x040600 ]]; then
>      cfgargs+=("--with-system-seabios=/bin/false")
>  fi
>  
> +# Directory for the artefacts to be dumped into
> +mkdir binaries
> +
>  if [[ "${hypervisor_only}" == "y" ]]; then
> +    # Xen-only build
>      make -j$(nproc) xen
> +
> +    # Preserve artefacts
> +    cp xen/xen binaries/xen
>  else
> +    # Full build
>      ./configure "${cfgargs[@]}"
>      make -j$(nproc) dist
> -fi
>  
> -# Extract artifacts to avoid getting rewritten by customised builds
> -mkdir binaries
> -if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
> -    cp xen/xen binaries/xen
> -    if [[ "${hypervisor_only}" != "y" ]]; then
> -        cp -r dist binaries/
> -    fi
> +    # Preserve artefacts
> +    # Note: Some smoke tests depend on finding binaries/xen on a full build
> +    # even though dist/ contains everything, while some containers don't even
> +    # build Xen
> +    cp -r dist binaries/
> +    if [[ -f xen/xen ]] ; then cp xen/xen binaries/xen; fi

Why the "if"?

You could just:

cp xen/xen binaries/xen

unconditionally?

If you are OK with this change you can do it on commit

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
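
The trade-off under discussion can be sketched in isolation. This is a
minimal illustration in a hypothetical scratch directory, not the actual
osstest/gitlab harness: the guarded copy tolerates containers that build the
tools but never produce xen/xen, whereas the unconditional copy would abort
a `set -e` script in that case.

```shell
#!/usr/bin/env bash
# Sketch: guarded vs unconditional artefact copy (hypothetical layout).
set -e

workdir=$(mktemp -d)
cd "$workdir"
mkdir -p xen binaries

# Unconditional copy would fail here (no xen/xen yet), killing the script:
# cp xen/xen binaries/xen

# Guarded copy: a no-op when the container didn't build Xen.
if [[ -f xen/xen ]]; then
    cp xen/xen binaries/xen
fi

# Simulate a build that did produce Xen, then the same guarded copy works.
echo "xen-binary" > xen/xen
if [[ -f xen/xen ]]; then
    cp xen/xen binaries/xen
fi

cat binaries/xen   # prints "xen-binary"
```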


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:12:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:12:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470819.730450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsKg-0003A5-7Z; Wed, 04 Jan 2023 01:12:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470819.730450; Wed, 04 Jan 2023 01:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsKg-00039w-3S; Wed, 04 Jan 2023 01:12:42 +0000
Received: by outflank-mailman (input) for mailman id 470819;
 Wed, 04 Jan 2023 01:12:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pCsKe-00039m-8w
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:12:40 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0e52d7e-8bcc-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 02:12:39 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 83D96614CC;
 Wed,  4 Jan 2023 01:12:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EF0DDC433D2;
 Wed,  4 Jan 2023 01:12:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0e52d7e-8bcc-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672794756;
	bh=M7neBu1eufP02ijEEgXFLm7XN0usVpAGztqI1Ji4c3Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LY/KgCgyzX1J4lo2GPH8cS2Ty9MqIQoh7vcOUKeNTeT0RxiDvEogMGgEbpkx2tQUD
	 FX+Ni/hOZn75YWWabLrPHDJdNVzwqy8k7qhJWrWB1MYt+T8zAHhHqdBmCU7gW0iP1T
	 LvrZ+0cPv2quN87jSPmS0aYflozp2J8ZcyuH73PwBC2Uf7nU6jtCOegf/B6ulHf5/m
	 mHWCUsHUVUT6TMUPjfRGR15XCD9TY3el++YbBqykb4gBAd6Tk06AwFqsaPXlt8KtFX
	 1oNijbnaK3HIC3BPOOCEFsjNcs+eKbS4YZBlXy1jtcUTf7LXyLGR2egJsiGj+DrQYO
	 0uYti9QbJne4A==
Date: Tue, 3 Jan 2023 17:12:33 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 3/6] CI: Only calculate ./configure args if needed
In-Reply-To: <20221230003848.3241-4-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031712270.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-4-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 30 Dec 2022, Andrew Cooper wrote:
> This is purely code motion of the cfgargs construction, into the case where it
> is used.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  automation/scripts/build | 63 ++++++++++++++++++++++++------------------------
>  1 file changed, 31 insertions(+), 32 deletions(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 8dee1cbbc251..f2301d08789d 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -39,37 +39,6 @@ if [[ "${XEN_TARGET_ARCH}" = "arm32" ]]; then
>      hypervisor_only="y"
>  fi
>  
> -# build up our configure options
> -cfgargs=()
> -cfgargs+=("--enable-docs")
> -
> -if [[ "${CC}" == "clang"* ]]; then
> -    # SeaBIOS cannot be built with clang
> -    cfgargs+=("--with-system-seabios=/usr/share/seabios/bios.bin")
> -    # iPXE cannot be built with clang
> -    cfgargs+=("--with-system-ipxe=/usr/lib/ipxe/ipxe.pxe")
> -    # newlib cannot be built with clang so we cannot build stubdoms
> -    cfgargs+=("--disable-stubdom")
> -fi
> -
> -if ! test -z "$(ldd /bin/ls|grep musl|head -1)"; then
> -    # disable --disable-werror for QEMUU when building with MUSL
> -    cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"")
> -    # SeaBIOS doesn't build on MUSL systems
> -    cfgargs+=("--with-system-seabios=/bin/false")
> -fi
> -
> -# Qemu requires Python 3.5 or later, and ninja
> -if ! type python3 || python3 -c "import sys; res = sys.version_info < (3, 5); exit(not(res))" \
> -        || ! type ninja; then
> -    cfgargs+=("--with-system-qemu=/bin/false")
> -fi
> -
> -# SeaBIOS requires GCC 4.6 or later
> -if [[ "${CC}" == "gcc" && `cc-ver` -lt 0x040600 ]]; then
> -    cfgargs+=("--with-system-seabios=/bin/false")
> -fi
> -
>  # Directory for the artefacts to be dumped into
>  mkdir binaries
>  
> @@ -80,7 +49,37 @@ if [[ "${hypervisor_only}" == "y" ]]; then
>      # Preserve artefacts
>      cp xen/xen binaries/xen
>  else
> -    # Full build
> +    # Full build.  Figure out our ./configure options
> +    cfgargs=()
> +    cfgargs+=("--enable-docs")
> +
> +    if [[ "${CC}" == "clang"* ]]; then
> +        # SeaBIOS cannot be built with clang
> +        cfgargs+=("--with-system-seabios=/usr/share/seabios/bios.bin")
> +        # iPXE cannot be built with clang
> +        cfgargs+=("--with-system-ipxe=/usr/lib/ipxe/ipxe.pxe")
> +        # newlib cannot be built with clang so we cannot build stubdoms
> +        cfgargs+=("--disable-stubdom")
> +    fi
> +
> +    if  ! test -z "$(ldd /bin/ls|grep musl|head -1)"; then
> +        # disable --disable-werror for QEMUU when building with MUSL
> +        cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"")
> +        # SeaBIOS doesn't build on MUSL systems
> +        cfgargs+=("--with-system-seabios=/bin/false")
> +    fi
> +
> +    # Qemu requires Python 3.5 or later, and ninja
> +    if ! type python3 || python3 -c "import sys; res = sys.version_info < (3, 5); exit(not(res))" \
> +            || ! type ninja; then
> +        cfgargs+=("--with-system-qemu=/bin/false")
> +    fi
> +
> +    # SeaBIOS requires GCC 4.6 or later
> +    if [[ "${CC}" == "gcc" && `cc-ver` -lt 0x040600 ]]; then
> +        cfgargs+=("--with-system-seabios=/bin/false")
> +    fi
> +
>      ./configure "${cfgargs[@]}"
>      make -j$(nproc) dist
>  
> -- 
> 2.11.0
> 
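
The Python version probe in the patch above
(`python3 -c "import sys; res = sys.version_info < (3, 5); exit(not(res))"`)
is easy to misread: it exits 0 ("success" to the shell) precisely when Python
is too old, because `exit(not(res))` maps res=True (old) to exit(False), i.e.
exit status 0. A pure-shell restatement of that comparison, for illustration
only (the helper name `too_old` is not part of the patch):

```shell
#!/usr/bin/env bash
# too_old MAJOR MINOR -> exit 0 when (MAJOR, MINOR) < (3, 5), mirroring the
# one-liner's exit(not(res)) convention.
too_old() {
    local major=$1 minor=$2
    (( major < 3 )) || { (( major == 3 )) && (( minor < 5 )); }
}

too_old 3 4 && echo "3.4 is too old: QEMU would be forced to /bin/false"
too_old 3 9 || echo "3.9 is new enough: system QEMU stays usable"
```

In the build script, that "success on old" convention is what lets the probe
sit directly in the `if ! type python3 || python3 -c ... || ! type ninja`
chain: any branch succeeding disables the system QEMU.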


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:15:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:15:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470826.730461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsNW-0003qI-Ka; Wed, 04 Jan 2023 01:15:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470826.730461; Wed, 04 Jan 2023 01:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsNW-0003qB-Hz; Wed, 04 Jan 2023 01:15:38 +0000
Received: by outflank-mailman (input) for mailman id 470826;
 Wed, 04 Jan 2023 01:15:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pCsNW-0003q5-0I
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:15:38 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4af041a3-8bcd-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 02:15:36 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B777D6152B;
 Wed,  4 Jan 2023 01:15:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2E2DDC433D2;
 Wed,  4 Jan 2023 01:15:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4af041a3-8bcd-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672794935;
	bh=pSP+FWoGi3j9xVpx2Q3vxu2qFJqevFkECabRvGMIwXE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Nd1uLcSPDGKBsZxT370TxN3yzQoJgYOWyk6vqou7q8EP9q9c9jLz9XR0qpZm/9EqB
	 ItRfa3flimSMn/Bj8hnoEus25CUaZQ0lLXYUAkYWBWv9iVnRf92Ck3hS/GCSNqFGZz
	 tGFYR2hDPrl/LHa38nGteNmOvY6r1kDf0JEm2eye/EM/bIqUTKVaM08BAIjy7m8ySk
	 9XWczzTQihkECO72RHEKUv84hHNwgynGMpKoIegVm2J5EnKDJNjjrSlDpPx0xrqqFA
	 qhCvdHxMP1LtdEVubYKkoJQNTIK4bCEs1UzNe90ZinCaN4IWj8iMsrwizhd9giUitS
	 5EEmxcRx/TF/A==
Date: Tue, 3 Jan 2023 17:15:31 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
In-Reply-To: <20221230003848.3241-5-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031713530.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-5-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII


On Fri, 30 Dec 2022, Andrew Cooper wrote:

> Whether to build only Xen, or everything, is a property of container,
> toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
> 
> Capitalise HYPERVISOR_ONLY and have it set by the debian-unstable-gcc-arm32-*
> testcases at the point that arm32 get matched with a container that can only
> build Xen.
> 
> For simplicity, retain the RANDCONFIG -> HYPERVISOR_ONLY implication.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  automation/gitlab-ci/build.yaml |  2 ++
>  automation/scripts/build        | 11 ++++-------
>  2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> index 93d9ff69a9f2..e6a9357de3ef 100644
> --- a/automation/gitlab-ci/build.yaml
> +++ b/automation/gitlab-ci/build.yaml
> @@ -516,11 +516,13 @@ debian-unstable-gcc-arm32:
>    extends: .gcc-arm32-cross-build
>    variables:
>      CONTAINER: debian:unstable-arm32-gcc
> +    HYPERVISOR_ONLY: y
>  
>  debian-unstable-gcc-arm32-debug:
>    extends: .gcc-arm32-cross-build-debug
>    variables:
>      CONTAINER: debian:unstable-arm32-gcc
> +    HYPERVISOR_ONLY: y

Can you move the setting of HYPERVISOR_ONLY to .arm32-cross-build-tmpl?

I think that makes the most sense, because .arm32-cross-build-tmpl is the
one setting XEN_TARGET_ARCH and also the x86_64 tag.

>  
>  debian-unstable-gcc-arm32-randconfig:
>    extends: .gcc-arm32-cross-build
> diff --git a/automation/scripts/build b/automation/scripts/build
> index f2301d08789d..4c6d1f3b70bc 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -19,7 +19,9 @@ if [[ "${RANDCONFIG}" == "y" ]]; then
>      fi
>  
>      make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
> -    hypervisor_only="y"
> +
> +    # RANDCONFIG implies HYPERVISOR_ONLY
> +    HYPERVISOR_ONLY="y"
>  else
>      echo "CONFIG_DEBUG=${debug}" > xen/.config
>  
> @@ -34,15 +36,10 @@ fi
>  # to exit early -- bash is invoked with -e.
>  cp xen/.config xen-config
>  
> -# arm32 only cross-compiles the hypervisor
> -if [[ "${XEN_TARGET_ARCH}" = "arm32" ]]; then
> -    hypervisor_only="y"
> -fi
> -
>  # Directory for the artefacts to be dumped into
>  mkdir binaries
>  
> -if [[ "${hypervisor_only}" == "y" ]]; then
> +if [[ "${HYPERVISOR_ONLY}" == "y" ]]; then
>      # Xen-only build
>      make -j$(nproc) xen
>  
> -- 
> 2.11.0
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:18:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:18:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470833.730472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsPs-0004Sj-0l; Wed, 04 Jan 2023 01:18:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470833.730472; Wed, 04 Jan 2023 01:18:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsPr-0004Sc-U1; Wed, 04 Jan 2023 01:18:03 +0000
Received: by outflank-mailman (input) for mailman id 470833;
 Wed, 04 Jan 2023 01:18:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCsPq-0004SK-O3; Wed, 04 Jan 2023 01:18:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCsPq-0005HB-Jv; Wed, 04 Jan 2023 01:18:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCsPq-0005pj-7e; Wed, 04 Jan 2023 01:18:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCsPq-0006Dl-7G; Wed, 04 Jan 2023 01:18:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IC2bfMXVPIP3VLfpbogX9WUs2Pod6RrYRKd3mye1aJg=; b=w9STSdoCnpWc0re8REKgTpdG7x
	x12FLNpxohzOpde6SM9njOaNJ2VfJSHYNQhLElwhgIvttSfAah+3Z/QraBzNukYZ48yG+qf7JO5di
	pT+lhr3VO0OUHXy6pbtEi/tMTrqlA/kbiFFb5+i50sTvKqSjhIwuDIlpWofULXyx1hvc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175560-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175560: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
X-Osstest-Versions-That:
    linux=66bb2e2b24ce52819a7070d3a3255726cb946b69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 01:18:02 +0000

flight 175560 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175560/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 175557 pass in 175560
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 175557 pass in 175560
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 175557 pass in 175560
 test-amd64-i386-xl-shadow     7 xen-install                fail pass in 175557
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat  fail pass in 175557
 test-armhf-armhf-libvirt-raw 13 guest-start                fail pass in 175557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175197
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 175557 like 175197
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 175557 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175197
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175197
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175197
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175197
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 175197
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 175197
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175197
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 175197
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175197
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52
baseline version:
 linux                66bb2e2b24ce52819a7070d3a3255726cb946b69

Last test of basis   175197  2022-12-14 10:43:17 Z   20 days
Testing same since   175407  2022-12-19 11:42:26 Z   15 days   38 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Heiko Schocher <hs@denx.de>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jialiang Wang <wangjialiang0806@163.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Colitti <lorenzo@google.com>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mark Brown <broonie@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Samuel Mendoza-Jonas <samjonas@amazon.com>
  Sasha Levin <sashal@kernel.org>
  Shiwei Cui <cuishw@inspur.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <simon.horman@corigine.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Yasushi SHOJI <yashi@spacecubics.com>
  Yasushi SHOJI <yasushi.shoji@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   66bb2e2b24ce..851c2b5fb793  851c2b5fb7936d54e1147f76f88e2675f9f82b52 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:18:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:18:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470843.730483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsQi-00053q-IX; Wed, 04 Jan 2023 01:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470843.730483; Wed, 04 Jan 2023 01:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsQi-00053j-Ec; Wed, 04 Jan 2023 01:18:56 +0000
Received: by outflank-mailman (input) for mailman id 470843;
 Wed, 04 Jan 2023 01:18:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCsQg-0004ud-Ie
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:18:54 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id be53b235-8bcd-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 02:18:51 +0100 (CET)
Received: from mail-dm6nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Jan 2023 20:18:48 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6000.namprd03.prod.outlook.com (2603:10b6:5:38b::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 01:18:46 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 01:18:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be53b235-8bcd-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672795131;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=eqOY2ksHn1dUxyXx0p4brYvZm8CgsRTAnDvPno2SXMU=;
  b=hPwFtGuK+WOXxkhjj5L5hL3KxNJ9FtWvcK3hc3N/Nh6NIO8o9tRRoNLb
   wrfcclBn0ZTX7uBcC59OcVfRBKHLbE23v/9UrTLNiyvfzD9U++VwX3QEQ
   f4FBo9eQ5GIr06LsXwsbVJo32wIe+KgwYov40Jw4b/WfCQ25rMjFZfoYe
   U=;
X-IronPort-RemoteIP: 104.47.58.101
X-IronPort-MID: 91475240
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,297,1665460800"; 
   d="scan'208";a="91475240"
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Doug Goldstein
	<cardoe@cardoe.com>, Anthony Perard <anthony.perard@citrix.com>, Michal Orzel
	<michal.orzel@amd.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 2/6] CI: Remove guesswork about which artefacts to
 preserve
Thread-Topic: [PATCH 2/6] CI: Remove guesswork about which artefacts to
 preserve
Thread-Index: AQHZG+cbH36uve2aEUGxm9tAfYRLqK6NeucAgAACRYA=
Date: Wed, 4 Jan 2023 01:18:46 +0000
Message-ID: <71b7e387-8607-039b-6cb3-8555a1593361@citrix.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-3-andrew.cooper3@citrix.com>
 <alpine.DEB.2.22.394.2301031709570.4079@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2301031709570.4079@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM4PR03MB6000:EE_
x-ms-office365-filtering-correlation-id: e2cc64c4-edbb-42ee-88e5-08daedf1a098
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <6AA8B9985F017B41AE18BC62411A0977@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e2cc64c4-edbb-42ee-88e5-08daedf1a098
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 01:18:46.3442
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6000

On 04/01/2023 1:10 am, Stefano Stabellini wrote:
> On Fri, 30 Dec 2022, Andrew Cooper wrote:
>> Preserve the artefacts based on the `make` rune we actually ran, rather than
>> guesswork about which rune we would have run based on other settings.
>>
>> Note that the ARM qemu smoke tests depend on finding binaries/xen even from
>> full builds.  Also that the Jessie-32 containers build tools but not Xen.
>>
>> This means the x86_32 builds now store relevant artefacts.  No change in other
>> configurations.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Doug Goldstein <cardoe@cardoe.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Anthony PERARD <anthony.perard@citrix.com>
>> CC: Michal Orzel <michal.orzel@amd.com>
>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> ---
>>  automation/scripts/build | 22 ++++++++++++++--------
>>  1 file changed, 14 insertions(+), 8 deletions(-)
>>
>> diff --git a/automation/scripts/build b/automation/scripts/build
>> index 5dafa72ba540..8dee1cbbc251 100755
>> --- a/automation/scripts/build
>> +++ b/automation/scripts/build
>> @@ -70,18 +70,24 @@ if [[ "${CC}" == "gcc" && `cc-ver` -lt 0x040600 ]]; then
>>      cfgargs+=("--with-system-seabios=/bin/false")
>>  fi
>>  
>> +# Directory for the artefacts to be dumped into
>> +mkdir binaries
>> +
>>  if [[ "${hypervisor_only}" == "y" ]]; then
>> +    # Xen-only build
>>      make -j$(nproc) xen
>> +
>> +    # Preserve artefacts
>> +    cp xen/xen binaries/xen
>>  else
>> +    # Full build
>>      ./configure "${cfgargs[@]}"
>>      make -j$(nproc) dist
>> -fi
>>  
>> -# Extract artifacts to avoid getting rewritten by customised builds
>> -mkdir binaries
>> -if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
>> -    cp xen/xen binaries/xen
>> -    if [[ "${hypervisor_only}" != "y" ]]; then
>> -        cp -r dist binaries/
>> -    fi
>> +    # Preserve artefacts
>> +    # Note: Some smoke tests depending on finding binaries/xen on a full build
>> +    # even though dist/ contains everything, while some containers don't even
>> +    # build Xen
>> +    cp -r dist binaries/
>> +    if [[ -f xen/xen ]] ; then cp xen/xen binaries/xen; fi
> why the "if" ?
>
> You could just:
>
> cp xen/xen binaries/xen
>
> unconditionally?

No - the script is `set -e` and xen/xen doesn't exist in the Jessie32
containers.

That's why the old logic had an "if not x86_32" guard (except it also
guarded the regular dist -> binaries/ copy which is problematic).

At a guess, the other working option would be `cp xen/xen binaries/xen || :`

I don't much care which of these two gets used, but pretty much anything
else results in a failed test.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:21:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:21:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470850.730495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsSi-0006RR-Vr; Wed, 04 Jan 2023 01:21:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470850.730495; Wed, 04 Jan 2023 01:21:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsSi-0006RK-R9; Wed, 04 Jan 2023 01:21:00 +0000
Received: by outflank-mailman (input) for mailman id 470850;
 Wed, 04 Jan 2023 01:20:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pCsSh-0006RC-6v
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:20:59 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 09a78602-8bce-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 02:20:56 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 96701614F9;
 Wed,  4 Jan 2023 01:20:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 15DC1C433D2;
 Wed,  4 Jan 2023 01:20:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09a78602-8bce-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672795255;
	bh=1BW4Ls3PHmwwNSiG3AJmJkC7DqZ5ixj1dctuN6FJe6A=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=W2CTHzPLrV5O04ASlyt0KThzyjd9P8cwZF+/2X7KOfjTR0/rJbCAOh0Qo8D0oz6XM
	 00F2Rr0xUSCBEhGj32m4+mH6LELAW4USYcFNzoTn+29d3L+bDuGuOQT9THdRygFAxh
	 tiO+SyVMTs1GLceBJhxlSEIjBsTs16ZJhUk2lpufB/DwGSllL73HcvEtqATXywUmiE
	 yaOT5jkhbxsnvoox4uHcVgsA2HMc/IMp1EBtAp1xHtjwJzKVN1keWcE5TTN8Fe7wB3
	 BLWou4RAzthN7oUj7YySAHFDysvggg4g1pTu8bSBVkv8KXNIzXpWr6JkoDmXt8KUXF
	 Ag6NDqNLoh8Bg==
Date: Tue, 3 Jan 2023 17:20:51 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 5/6] CI: Fix build script when CROSS_COMPILE is in use
In-Reply-To: <20221230003848.3241-6-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031720400.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-6-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 30 Dec 2022, Andrew Cooper wrote:
> Some testcases use a cross compiler.  Presently it's only arm32 and due to
> previous cleanup the only thing which is now wrong is printing the compiler
> version at the start of day.
> 
> Construct $cc to match what `make` will eventually choose given CROSS_COMPILE,
> taking care not to modify $CC.  Use $cc throughout the rest of the script.
> 
> Also correct the compiler detection logic.  Plain "gcc" was wrong, and
> "clang"* was a bodge highlighting the issue, but neither survive the
> CROSS_COMPILE correction.  Instead, construct cc_is_{gcc,clang} booleans like
> we do elsewhere in the build system, by querying the --version text for gcc or
> clang.
> 
> While making this change, adjust cc_ver to be calculated once at the same time
> as cc_is_* are calculated.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  automation/scripts/build | 22 ++++++++++++++--------
>  1 file changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 4c6d1f3b70bc..206312ecc7d0 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -2,13 +2,12 @@
>  
>  test -f /etc/os-release && cat "$_"
>  
> -$CC --version
> +# Construct $cc such that it matches what `make` will chose when taking
> +# CROSS_COMPILE into account.  Do not modify $CC directly, as that will cause
> +# `make` to double-account CROSS_COMPILE.
> +cc="${CROSS_COMPILE}${CC}"
>  
> -# Express the compiler version as an integer.  e.g. GCC 4.9.2 => 0x040902
> -cc-ver()
> -{
> -    $CC -dumpversion | awk -F. '{ printf "0x%02x%02x%02x", $1, $2, $3 }'
> -}
> +$cc --version
>  
>  # random config or default config
>  if [[ "${RANDCONFIG}" == "y" ]]; then
> @@ -50,7 +49,14 @@ else
>      cfgargs=()
>      cfgargs+=("--enable-docs")
>  
> -    if [[ "${CC}" == "clang"* ]]; then
> +    # booleans for which compiler is in use
> +    cc_is_gcc="$($cc --version | grep -q gcc && echo "y" || :)"
> +    cc_is_clang="$($cc --version | grep -q clang && echo "y" || :)"
> +
> +    # The compiler version as an integer.  e.g. GCC 4.9.2 => 0x040902
> +    cc_ver="$($cc -dumpversion | awk -F. '{ printf "0x%02x%02x%02x", $1, $2, $3 }')"
> +
> +    if [[ "${cc_is_clang}" == "y" ]]; then
>          # SeaBIOS cannot be built with clang
>          cfgargs+=("--with-system-seabios=/usr/share/seabios/bios.bin")
>          # iPXE cannot be built with clang
> @@ -73,7 +79,7 @@ else
>      fi
>  
>      # SeaBIOS requires GCC 4.6 or later
> -    if [[ "${CC}" == "gcc" && `cc-ver` -lt 0x040600 ]]; then
> +    if [[ "${cc_is_gcc}" == "y" && "${cc_ver}" -lt 0x040600 ]]; then
>          cfgargs+=("--with-system-seabios=/bin/false")
>      fi
>  
> -- 
> 2.11.0
> 
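
[Editor's note: the cc_ver trick in the hunk above packs a dotted version
string into one comparable integer. A minimal, self-contained sketch of the
same computation, where `ver_to_int` is a hypothetical stand-in for running
`$cc -dumpversion` and the version strings are hard-coded rather than taken
from a real cross compiler:]

```shell
# Hypothetical helper mirroring the build script's cc_ver computation:
# "4.9.2" -> 0x040902.  Each dotted component becomes one byte, so a plain
# integer comparison orders versions correctly.
ver_to_int() {
    echo "$1" | awk -F. '{ printf "0x%02x%02x%02x", $1, $2, $3 }'
}

cc_ver="$(ver_to_int 4.9.2)"
echo "$cc_ver"    # prints 0x040902

# The comparison the script then performs: SeaBIOS needs GCC 4.6 or later
if [ "$((cc_ver))" -lt "$((0x040600))" ]; then
    echo "too old for SeaBIOS"
else
    echo "new enough for SeaBIOS"
fi
```

[Note that a GCC configured with major-version-only output prints just "12"
from -dumpversion, in which case the minor/patch bytes come out as zero; the
original script's cc-ver helper has the same property.]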


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:22:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:22:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470857.730504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsTp-00070P-7i; Wed, 04 Jan 2023 01:22:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470857.730504; Wed, 04 Jan 2023 01:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsTp-00070I-58; Wed, 04 Jan 2023 01:22:09 +0000
Received: by outflank-mailman (input) for mailman id 470857;
 Wed, 04 Jan 2023 01:22:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pCsTn-00070C-QK
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:22:07 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 33c37a1e-8bce-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 02:22:06 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id F3976B810FA;
 Wed,  4 Jan 2023 01:22:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B0014C433D2;
 Wed,  4 Jan 2023 01:22:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33c37a1e-8bce-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672795324;
	bh=pTGI2ZexUZC1WRlRQKkwMJUAN5r/S/qdIbguBBS3LXs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WtXaF5/0GmUJHS6EKjC90BWZokfrZamzUxNu/v3g60FuOCYvEJQw7RGNgcsA6/L35
	 SVuTJ0CHTvouMLmBKsN6OoM65Y1ptNY6LIqjWYRjJJzffg8SvMEbYxpgP/TW8aBwfI
	 uXXAMirDvkbdqX5vlXSQHTgNyXyjujSLLuKxhxd8dQ2XlnNYlTERQZ65ZG2wTTDuXW
	 k11E2ohgi9Uhe92XySYG95EBzaQiYjy2diPOq6df+pqm74uAug05phXOaaY5WH3DnT
	 MgT040kdLGrvA+X4xEadcNCoSIggZZxhYH+jT6PPRmHevmhN15vHx7g3oVCByqw9oL
	 16Tha2WuIhEGg==
Date: Tue, 3 Jan 2023 17:22:01 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 6/6] CI: Simplify the MUSL check
In-Reply-To: <20221230003848.3241-7-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031721530.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-7-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 30 Dec 2022, Andrew Cooper wrote:
> There's no need to do ad-hoc string parsing.  Use grep -q instead.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  automation/scripts/build | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 206312ecc7d0..f2f5e55bc04f 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -65,7 +65,7 @@ else
>          cfgargs+=("--disable-stubdom")
>      fi
>  
> -    if  ! test -z "$(ldd /bin/ls|grep musl|head -1)"; then
> +    if ldd /bin/ls | grep -q musl; then
>          # disable --disable-werror for QEMUU when building with MUSL
>          cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"")
>          # SeaBIOS doesn't build on MUSL systems
> -- 
> 2.11.0
> 
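
[Editor's note: the pattern in this patch -- letting the exit status of
`grep -q` drive the `if` directly -- can be sketched outside the build
script. `is_musl` is a hypothetical wrapper that takes the ldd output as a
string so the example runs on any system; in the real script the pipe is
simply `ldd /bin/ls | grep -q musl`:]

```shell
# Hypothetical helper: classify a C library from ldd output.  grep -q
# prints nothing and only sets the exit status, which the caller tests;
# no string capture or `test -z` parsing is needed.
is_musl() {
    echo "$1" | grep -q musl
}

glibc_out='libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1a)'
musl_out='/lib/ld-musl-x86_64.so.1 (0x00007f8a)'

if is_musl "$musl_out"; then
    echo "musl detected: disabling -Werror for qemu-upstream"
fi
if ! is_musl "$glibc_out"; then
    echo "glibc: no workaround needed"
fi
```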


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:22:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:22:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470864.730516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsUc-0007aW-Ha; Wed, 04 Jan 2023 01:22:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470864.730516; Wed, 04 Jan 2023 01:22:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsUc-0007aP-EQ; Wed, 04 Jan 2023 01:22:58 +0000
Received: by outflank-mailman (input) for mailman id 470864;
 Wed, 04 Jan 2023 01:22:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pCsUb-0007UY-Mp
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:22:57 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5086befd-8bce-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 02:22:55 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9EBAB61569;
 Wed,  4 Jan 2023 01:22:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2062BC433EF;
 Wed,  4 Jan 2023 01:22:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5086befd-8bce-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672795374;
	bh=HDAcCFD2XNQIAh7QQfIjgmW6fh/8SiYiGhMXClePVhk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=UvzSMXNCLqUTxPTOn3i50WPUc3u+G1whb1nhBH/K+q6VDacFkxcUxjRaMyJtYjF5h
	 bgUp80itAkGQFVlzPuq/slGzJ6ymuZq92HsDJi4T1AuosJ0GfVUX8zZKjVTmb56XOJ
	 eBhb6CY82i55K3quh7kKfxCLW1+7W2ZJW/Bd6AyyIDTCrpG9e5v6V1Q5XO98gLbRfS
	 94vDqcem6UK3HrbmSEUgDB/ada8ipFIGnHz3afJg+SgCfPROc6wZUWlQnqZp6/l68R
	 At0ps+ThlZ++SxrW2OFO4mQQqDCwFGs0s+RYUKtb3EpPSmb6lXM40YEaTSLapFlXlI
	 Zb4V1k5plDTHQ==
Date: Tue, 3 Jan 2023 17:22:51 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Anthony Perard <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 2/6] CI: Remove guesswork about which artefacts to
 preserve
In-Reply-To: <71b7e387-8607-039b-6cb3-8555a1593361@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031722250.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-3-andrew.cooper3@citrix.com> <alpine.DEB.2.22.394.2301031709570.4079@ubuntu-linux-20-04-desktop> <71b7e387-8607-039b-6cb3-8555a1593361@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Jan 2023, Andrew Cooper wrote:
> On 04/01/2023 1:10 am, Stefano Stabellini wrote:
> > On Fri, 30 Dec 2022, Andrew Cooper wrote:
> >> Preserve the artefacts based on the `make` rune we actually ran, rather than
> >> guesswork about which rune we would have run based on other settings.
> >>
> >> Note that the ARM qemu smoke tests depend on finding binaries/xen even from
> >> full builds.  Also that the Jessie-32 containers build tools but not Xen.
> >>
> >> This means the x86_32 builds now store relevant artefacts.  No change in other
> >> configurations.
> >>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> ---
> >> CC: Doug Goldstein <cardoe@cardoe.com>
> >> CC: Stefano Stabellini <sstabellini@kernel.org>
> >> CC: Anthony PERARD <anthony.perard@citrix.com>
> >> CC: Michal Orzel <michal.orzel@amd.com>
> >> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> >> ---
> >>  automation/scripts/build | 22 ++++++++++++++--------
> >>  1 file changed, 14 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/automation/scripts/build b/automation/scripts/build
> >> index 5dafa72ba540..8dee1cbbc251 100755
> >> --- a/automation/scripts/build
> >> +++ b/automation/scripts/build
> >> @@ -70,18 +70,24 @@ if [[ "${CC}" == "gcc" && `cc-ver` -lt 0x040600 ]]; then
> >>      cfgargs+=("--with-system-seabios=/bin/false")
> >>  fi
> >>  
> >> +# Directory for the artefacts to be dumped into
> >> +mkdir binaries
> >> +
> >>  if [[ "${hypervisor_only}" == "y" ]]; then
> >> +    # Xen-only build
> >>      make -j$(nproc) xen
> >> +
> >> +    # Preserve artefacts
> >> +    cp xen/xen binaries/xen
> >>  else
> >> +    # Full build
> >>      ./configure "${cfgargs[@]}"
> >>      make -j$(nproc) dist
> >> -fi
> >>  
> >> -# Extract artifacts to avoid getting rewritten by customised builds
> >> -mkdir binaries
> >> -if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
> >> -    cp xen/xen binaries/xen
> >> -    if [[ "${hypervisor_only}" != "y" ]]; then
> >> -        cp -r dist binaries/
> >> -    fi
> >> +    # Preserve artefacts
> >> +    # Note: Some smoke tests depending on finding binaries/xen on a full build
> >> +    # even though dist/ contains everything, while some containers don't even
> >> +    # build Xen
> >> +    cp -r dist binaries/
> >> +    if [[ -f xen/xen ]] ; then cp xen/xen binaries/xen; fi
> > why the "if" ?
> >
> > You could just:
> >
> > cp xen/xen binaries/xen
> >
> > unconditionally?
> 
> No - the script is `set -e` and xen/xen doesn't exist in the Jessie32
> containers.
> 
> That's why the old logic had an "if not x86_32" guard (except it also
> guarded the regular dist -> binaries/ copy which is problematic).
> 
> At a guess, the other working option would be `cp xen/xen binaries/xen || :`
> 
> I don't much care which of these two gets used, but pretty much anything
> else results in a failed test.

I didn't realize that. I think your version in this patch is better,
keep it as is.
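
[Editor's note: the `set -e` interaction Andrew describes is easy to
demonstrate with a small sketch; the temporary directory here is a stand-in
for a tools-only (Jessie-32 style) container tree where xen/xen was never
built:]

```shell
#!/bin/bash
set -e    # as in automation/scripts/build: any failing command aborts

workdir="$(mktemp -d)"        # no xen/xen in here, like a tools-only build
mkdir -p "$workdir/binaries"
cd "$workdir"

# A bare `cp xen/xen binaries/xen` would terminate the script right here
# under set -e, because cp exits non-zero when the source is missing.

# Option 1 from the thread: guard on the file existing
if [[ -f xen/xen ]]; then cp xen/xen binaries/xen; fi
echo "guarded copy: script still running"

# Option 2 from the thread: run cp unconditionally but swallow the failure
cp xen/xen binaries/xen 2>/dev/null || :
echo "fallback copy: script still running"
```

[Both variants leave binaries/ without a xen binary and let the rest of the
build script proceed; anything else trips `set -e` and fails the job.]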


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:27:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:27:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470874.730527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsYo-0008MY-6C; Wed, 04 Jan 2023 01:27:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470874.730527; Wed, 04 Jan 2023 01:27:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCsYo-0008MR-3R; Wed, 04 Jan 2023 01:27:18 +0000
Received: by outflank-mailman (input) for mailman id 470874;
 Wed, 04 Jan 2023 01:27:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pCsYm-0008ML-T5
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:27:17 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea2a94a8-8bce-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 02:27:14 +0100 (CET)
Received: from mail-dm6nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Jan 2023 20:27:11 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN9PR03MB5962.namprd03.prod.outlook.com (2603:10b6:408:133::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 01:27:08 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 01:27:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea2a94a8-8bce-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672795634;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=CsMEGptc43vLte0q02CVPOWS3aRKtCm8JX6gPknmLfM=;
  b=ULq994mPcGoo/JoMTeJk5yv0mBznXEgAiBtJbjE4fTfm3SgFaofftJPg
   s9USZV5T1bE8loB3RZkzkUlKAhT9402euZFuKtuIS8swDTEDT4eTVQ/5L
   wi/7eD+q0FhBq8ll/UebGrf6LBMWK+AQ2ifVVTPdpZwgwfMn0MEmJ0Ni5
   Q=;
X-IronPort-RemoteIP: 104.47.59.170
X-IronPort-MID: 91475861
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,297,1665460800"; 
   d="scan'208";a="91475861"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j35cgxW88//7tu9zUnPYnw89O5WIBjTRF+Fw+9EjwbmzmERUsEI38jxJRsP+GkdbAKprYk4xkvc6YR93XpduvgkSgTCiV0BRn+JpZMri2CwcV6F8kD+3AZ/A/NQWXFwpn0JVnoryYFRpTfTz3yylz2jmclOhqS7zKMKTPF4w6S8IsEM5WUI9m/MeZNFpq/m+7UP5RWWS/UcOYVwf0N6JZLhwKRJ8Jymay+iix4Tv8JpXae717T9QkSxERlw7WPmnMUbtfKzsFumvFg4fOlaOvkbJSGc5+wYjmd0wenutgaWYzW1fBRAE3MulNuwJzTzoqONLJ7icDxs66yh1xM2Plg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CsMEGptc43vLte0q02CVPOWS3aRKtCm8JX6gPknmLfM=;
 b=d4NcYJ1Hx+x+MPF00zboJ5BT6tR6jSa7iwWa59ImHY9QS9aYOQl1Up0vg+TMWrkcztfI+TGSoCYRLgJ8k/huVUxzM36yOivM4dCpvMzJFQu32zzvjdUnZrL7ogDpCVK+VAhvEDrbpqHLJexJJRJvJqWBl0MwUxqDrkdQZvbUSq0fhcy0YQVS/rBiEj1CT/CvrbVVtr9r88pi37InZFV++M0q73YUI871TcxI/OnLn95F5kxKkhXdkaDzyn92vuwDJIOSBfLMEqzF13Gz6EqA9jUQc+cSozYReDbAqAa4EYrxLvbt0rUUOj0Co/Gd3K6lfekWZYXW89zYxavQWcb03A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CsMEGptc43vLte0q02CVPOWS3aRKtCm8JX6gPknmLfM=;
 b=onQPY8K/ozqeQXc66qN0SQj7Ii8pGb8flLN+s+mlIsBHvWyHhM9KifUdbKWJ5XJIY4tIy+c7+XnaRX0wW7HY6b+ATd8FJ3EgTQRxux0x9FJCG7C0BY9UdBGbsVIwAfcBZuQm9VFTPDo2Tjm2HUOt51kYm4DGtLOlXBXpaSUAOwk=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Doug Goldstein
	<cardoe@cardoe.com>, Anthony Perard <anthony.perard@citrix.com>, Michal Orzel
	<michal.orzel@amd.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
Date: Wed, 4 Jan 2023 01:27:07 +0000
Message-ID: <34e692e3-ef76-a43e-ec4f-a7c1ed2d094f@citrix.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-5-andrew.cooper3@citrix.com>
 <alpine.DEB.2.22.394.2301031713530.4079@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2301031713530.4079@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 04/01/2023 1:15 am, Stefano Stabellini wrote:
> On Fri, 30 Dec 2022, Andrew Cooper wrote:
>
>> Whether to build only Xen, or everything, is a property of container,
>> toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
>>
>> Capitalise HYPERVISOR_ONLY and have it set by the debian-unstable-gcc-arm32-*
>> testcases at the point that arm32 get matched with a container that can only
>> build Xen.
>>
>> For simplicity, retain the RANDCONFIG -> HYPERVISOR_ONLY implication.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Doug Goldstein <cardoe@cardoe.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Anthony PERARD <anthony.perard@citrix.com>
>> CC: Michal Orzel <michal.orzel@amd.com>
>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> ---
>>  automation/gitlab-ci/build.yaml |  2 ++
>>  automation/scripts/build        | 11 ++++-------
>>  2 files changed, 6 insertions(+), 7 deletions(-)
>>
>> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
>> index 93d9ff69a9f2..e6a9357de3ef 100644
>> --- a/automation/gitlab-ci/build.yaml
>> +++ b/automation/gitlab-ci/build.yaml
>> @@ -516,11 +516,13 @@ debian-unstable-gcc-arm32:
>>    extends: .gcc-arm32-cross-build
>>    variables:
>>      CONTAINER: debian:unstable-arm32-gcc
>> +    HYPERVISOR_ONLY: y
>>  
>>  debian-unstable-gcc-arm32-debug:
>>    extends: .gcc-arm32-cross-build-debug
>>    variables:
>>      CONTAINER: debian:unstable-arm32-gcc
>> +    HYPERVISOR_ONLY: y
> can you move the setting of HYPERVISOR_ONLY to .arm32-cross-build-tmpl ?

Not really - that's the point I'm trying to make in the commit message.

> I think that makes the most sense because .arm32-cross-build-tmpl is the
> one setting XEN_TARGET_ARCH and also the x86_64 tag.

It's not about x86_64; its about the container.

Whether we can build just Xen, or everything, solely depends on the
contents in debian:unstable-arm32-gcc

If we wanted to, we could update unstable-arm32-gcc's dockerfile to
install the arm32 cross user libs, and drop this HYPERVISOR_ONLY
restriction.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:36:34 2023
Date: Tue, 3 Jan 2023 17:36:21 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Anthony Perard <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
In-Reply-To: <34e692e3-ef76-a43e-ec4f-a7c1ed2d094f@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031733410.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-5-andrew.cooper3@citrix.com> <alpine.DEB.2.22.394.2301031713530.4079@ubuntu-linux-20-04-desktop> <34e692e3-ef76-a43e-ec4f-a7c1ed2d094f@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Jan 2023, Andrew Cooper wrote:
> On 04/01/2023 1:15 am, Stefano Stabellini wrote:
> > On Fri, 30 Dec 2022, Andrew Cooper wrote:
> >
> >> Whether to build only Xen, or everything, is a property of container,
> >> toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
> >>
> >> Capitalise HYPERVISOR_ONLY and have it set by the debian-unstable-gcc-arm32-*
> >> testcases at the point that arm32 get matched with a container that can only
> >> build Xen.
> >>
> >> For simplicity, retain the RANDCONFIG -> HYPERVISOR_ONLY implication.
> >>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> ---
> >> CC: Doug Goldstein <cardoe@cardoe.com>
> >> CC: Stefano Stabellini <sstabellini@kernel.org>
> >> CC: Anthony PERARD <anthony.perard@citrix.com>
> >> CC: Michal Orzel <michal.orzel@amd.com>
> >> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> >> ---
> >>  automation/gitlab-ci/build.yaml |  2 ++
> >>  automation/scripts/build        | 11 ++++-------
> >>  2 files changed, 6 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> >> index 93d9ff69a9f2..e6a9357de3ef 100644
> >> --- a/automation/gitlab-ci/build.yaml
> >> +++ b/automation/gitlab-ci/build.yaml
> >> @@ -516,11 +516,13 @@ debian-unstable-gcc-arm32:
> >>    extends: .gcc-arm32-cross-build
> >>    variables:
> >>      CONTAINER: debian:unstable-arm32-gcc
> >> +    HYPERVISOR_ONLY: y
> >>  
> >>  debian-unstable-gcc-arm32-debug:
> >>    extends: .gcc-arm32-cross-build-debug
> >>    variables:
> >>      CONTAINER: debian:unstable-arm32-gcc
> >> +    HYPERVISOR_ONLY: y
> > can you move the setting of HYPERVISOR_ONLY to .arm32-cross-build-tmpl ?
> 
> Not really - that's the point I'm trying to make in the commit message.
> 
> > I think that makes the most sense because .arm32-cross-build-tmpl is the
> > one setting XEN_TARGET_ARCH and also the x86_64 tag.
> 
> It's not about x86_64; its about the container.
> 
> Whether we can build just Xen, or everything, solely depends on the
> contents in debian:unstable-arm32-gcc
> 
> If we wanted to, we could update unstable-arm32-gcc's dockerfile to
> install the arm32 cross user libs, and drop this HYPERVISOR_ONLY
> restriction.

If it is a property of the container, shouldn't HYPERVISOR_ONLY be set
every time the debian:unstable-arm32-gcc container is used? Including
debian-unstable-gcc-arm32-randconfig and
debian-unstable-gcc-arm32-debug-randconfig?

I realize that the other 2 jobs are randconfigs so HYPERVISOR_ONLY gets
set anyway. But if HYPERVISOR_ONLY is a property of the specific
container, then I think it would be best to be consistent and set
HYPERVISOR_ONLY everywhere debian:unstable-arm32-gcc is used.

E.g. one day we could let the randconfig jobs build the tools too with a
simple change to the build script; otherwise we would need to remember
to also add the HYPERVISOR_ONLY tag to the other 2 jobs using
debian:unstable-arm32-gcc.
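
Concretely, setting it wherever the container is named would look
roughly like this (a hypothetical sketch following the job names in the
thread, not the actual contents of automation/gitlab-ci/build.yaml):

```yaml
# Hypothetical sketch: if HYPERVISOR_ONLY is a property of the
# debian:unstable-arm32-gcc container, every job naming that container
# would set it explicitly, randconfig jobs included.
debian-unstable-gcc-arm32-randconfig:
  extends: .gcc-arm32-cross-build
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    RANDCONFIG: y
    HYPERVISOR_ONLY: y   # redundant today (implied by RANDCONFIG), but explicit

debian-unstable-gcc-arm32-debug-randconfig:
  extends: .gcc-arm32-cross-build-debug
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    RANDCONFIG: y
    HYPERVISOR_ONLY: y
```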


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:42:22 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Doug Goldstein
	<cardoe@cardoe.com>, Anthony Perard <anthony.perard@citrix.com>, Michal Orzel
	<michal.orzel@amd.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
Date: Wed, 4 Jan 2023 01:41:57 +0000
Message-ID: <4f9a9927-c287-b40e-e4b0-653e69dbc1bb@citrix.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-5-andrew.cooper3@citrix.com>
 <alpine.DEB.2.22.394.2301031713530.4079@ubuntu-linux-20-04-desktop>
 <34e692e3-ef76-a43e-ec4f-a7c1ed2d094f@citrix.com>
 <alpine.DEB.2.22.394.2301031733410.4079@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2301031733410.4079@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="utf-8"
Content-ID: <1239397978D90B4B8430A6826B3FDC78@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	YUyL0nEbAMSdKqJQXSuH8Z+PrwARjbl/AoH6HLp7X6Y7yar7oIsmpJDKElj4t4xlu3lpseTIF0/JVhHblce2sso8ibmib+ndSUSI71kFyN3CaHxmQ14stx4QRCj8ArblQfSmwnefTCoUSqQHC2z2IGrUaiybSbOChL1fv3Cj7hT/O1EpQGUu0huBj2rJiZmEneUbqMJ16HE69twrQy3e1tDpGXYEx5SdBfyBM++maZeBSInX7oHzObwMgxcQzIEMyu0dTEaail0EflkVRYlEqZhMaX2u80uTBNeGHDLmWhRAiEEN7Utq1mACj/W2Fqj4h8Ejxg5oneYT8uJJPlZS+sOLEH1j8w8KF7hImDCya3D5+6seHMDxlmLr69iIhgrY9XBIZQugJiCZIMbXfC4jxnWSL+ejqASJAWcSP/d8XznkgHsZrxfyDiGhvThyINbwLpptD3jyNLtrgiTsMNYa8YzaTF1cIlrjWWJ+LivGAS2N6cn78NCtbAGFy0lnU9ksbpiVVf0rf202THbzXNhuxFUzGHO1ufRmUi2svoaTmeTk/DNdPkuQaF6qqIkQK39woqJ59EsiGAGo3Ww9LpvaeDfW1nyor64NshdYeYMXdebv+KDG0AI4FEBuEm6trSJczaJeedzqSC9BlJVqmPSNseMSXvOxXbfD+ZUGi2qC1I3XbQL2nsuv8WfzbsPqVNRigUC7iHa0ue8DWh2M6tDXrUJ0ghp87RbYgyzmX2ETrPswt3BtBuEMbLtVecbfu6rLCdh3JYLb8gkxXnlXaJ4YDR3228i9usH5T82dL+dQvsHQZKV5EGYqJf3KWhtd03x/PYIpCwOpy52RqFCM2U4Gohau0aejCkh5C4gCAUolJMQD6U6tbxgNhkndBpfQ2TW/
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 01:41:57.0871
 (UTC)

On 04/01/2023 1:36 am, Stefano Stabellini wrote:
> On Wed, 4 Jan 2023, Andrew Cooper wrote:
>> On 04/01/2023 1:15 am, Stefano Stabellini wrote:
>>> On Fri, 30 Dec 2022, Andrew Cooper wrote:
>>>
>>>> Whether to build only Xen, or everything, is a property of container,
>>>> toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
>>>>
>>>> Capitalise HYPERVISOR_ONLY and have it set by the debian-unstable-gcc-arm32-*
>>>> testcases at the point that arm32 get matched with a container that can only
>>>> build Xen.
>>>>
>>>> For simplicity, retain the RANDCONFIG -> HYPERVISOR_ONLY implication.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> ---
>>>> CC: Doug Goldstein <cardoe@cardoe.com>
>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>> CC: Anthony PERARD <anthony.perard@citrix.com>
>>>> CC: Michal Orzel <michal.orzel@amd.com>
>>>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>> ---
>>>>  automation/gitlab-ci/build.yaml |  2 ++
>>>>  automation/scripts/build        | 11 ++++-------
>>>>  2 files changed, 6 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
>>>> index 93d9ff69a9f2..e6a9357de3ef 100644
>>>> --- a/automation/gitlab-ci/build.yaml
>>>> +++ b/automation/gitlab-ci/build.yaml
>>>> @@ -516,11 +516,13 @@ debian-unstable-gcc-arm32:
>>>>    extends: .gcc-arm32-cross-build
>>>>    variables:
>>>>      CONTAINER: debian:unstable-arm32-gcc
>>>> +    HYPERVISOR_ONLY: y
>>>>  
>>>>  debian-unstable-gcc-arm32-debug:
>>>>    extends: .gcc-arm32-cross-build-debug
>>>>    variables:
>>>>      CONTAINER: debian:unstable-arm32-gcc
>>>> +    HYPERVISOR_ONLY: y
>>> can you move the setting of HYPERVISOR_ONLY to .arm32-cross-build-tmpl ?
>> Not really - that's the point I'm trying to make in the commit message.
>>
>>> I think that makes the most sense because .arm32-cross-build-tmpl is the
>>> one setting XEN_TARGET_ARCH and also the x86_64 tag.
>> It's not about x86_64; its about the container.
>>
>> Whether we can build just Xen, or everything, solely depends on the
>> contents in debian:unstable-arm32-gcc
>>
>> If we wanted to, we could update unstable-arm32-gcc's dockerfile to
>> install the arm32 cross user libs, and drop this HYPERVISOR_ONLY
>> restriction.
> If it is a property of the container, shouldn't HYPERVISOR_ONLY be set
> every time the debian:unstable-arm32-gcc container is used? Including
> debian-unstable-gcc-arm32-randconfig and
> debian-unstable-gcc-arm32-debug-randconfig?
>
> I realize that the other 2 jobs are randconfigs so HYPERVISOR_ONLY gets
> set anyway. But if HYPERVISOR_ONLY is a property of the specific
> container, then I think it would be best to be consistent and set
> HYPERVISOR_ONLY everywhere debian:unstable-arm32-gcc is used.
>
> E.g. one day we could just randconfigs to build also the tools with a
> simple change to the build script and otherwise we would need to
> remember to also add the HYPERVISOR_ONLY tag for the other 2 jobs using
> debian:unstable-arm32-gcc.

Ok, so we want 4 HYPERVISOR_ONLY's in total, one for each instance of
CONTAINER: debian:unstable-arm32-gcc ?

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:48:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:48:36 +0000
Date: Tue, 3 Jan 2023 17:48:22 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Anthony Perard <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
In-Reply-To: <4f9a9927-c287-b40e-e4b0-653e69dbc1bb@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031748140.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-5-andrew.cooper3@citrix.com> <alpine.DEB.2.22.394.2301031713530.4079@ubuntu-linux-20-04-desktop> <34e692e3-ef76-a43e-ec4f-a7c1ed2d094f@citrix.com>
 <alpine.DEB.2.22.394.2301031733410.4079@ubuntu-linux-20-04-desktop> <4f9a9927-c287-b40e-e4b0-653e69dbc1bb@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Jan 2023, Andrew Cooper wrote:
> On 04/01/2023 1:36 am, Stefano Stabellini wrote:
> > On Wed, 4 Jan 2023, Andrew Cooper wrote:
> >> On 04/01/2023 1:15 am, Stefano Stabellini wrote:
> >>> On Fri, 30 Dec 2022, Andrew Cooper wrote:
> >>>
> >>>> Whether to build only Xen, or everything, is a property of container,
> >>>> toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
> >>>>
> >>>> Capitalise HYPERVISOR_ONLY and have it set by the debian-unstable-gcc-arm32-*
> >>>> testcases at the point that arm32 get matched with a container that can only
> >>>> build Xen.
> >>>>
> >>>> For simplicity, retain the RANDCONFIG -> HYPERVISOR_ONLY implication.
> >>>>
> >>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>>> ---
> >>>> CC: Doug Goldstein <cardoe@cardoe.com>
> >>>> CC: Stefano Stabellini <sstabellini@kernel.org>
> >>>> CC: Anthony PERARD <anthony.perard@citrix.com>
> >>>> CC: Michal Orzel <michal.orzel@amd.com>
> >>>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> >>>> ---
> >>>>  automation/gitlab-ci/build.yaml |  2 ++
> >>>>  automation/scripts/build        | 11 ++++-------
> >>>>  2 files changed, 6 insertions(+), 7 deletions(-)
> >>>>
> >>>> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> >>>> index 93d9ff69a9f2..e6a9357de3ef 100644
> >>>> --- a/automation/gitlab-ci/build.yaml
> >>>> +++ b/automation/gitlab-ci/build.yaml
> >>>> @@ -516,11 +516,13 @@ debian-unstable-gcc-arm32:
> >>>>    extends: .gcc-arm32-cross-build
> >>>>    variables:
> >>>>      CONTAINER: debian:unstable-arm32-gcc
> >>>> +    HYPERVISOR_ONLY: y
> >>>>  
> >>>>  debian-unstable-gcc-arm32-debug:
> >>>>    extends: .gcc-arm32-cross-build-debug
> >>>>    variables:
> >>>>      CONTAINER: debian:unstable-arm32-gcc
> >>>> +    HYPERVISOR_ONLY: y
> >>> can you move the setting of HYPERVISOR_ONLY to .arm32-cross-build-tmpl ?
> >> Not really - that's the point I'm trying to make in the commit message.
> >>
> >>> I think that makes the most sense because .arm32-cross-build-tmpl is the
> >>> one setting XEN_TARGET_ARCH and also the x86_64 tag.
> >> It's not about x86_64; its about the container.
> >>
> >> Whether we can build just Xen, or everything, solely depends on the
> >> contents in debian:unstable-arm32-gcc
> >>
> >> If we wanted to, we could update unstable-arm32-gcc's dockerfile to
> >> install the arm32 cross user libs, and drop this HYPERVISOR_ONLY
> >> restriction.
> > If it is a property of the container, shouldn't HYPERVISOR_ONLY be set
> > every time the debian:unstable-arm32-gcc container is used? Including
> > debian-unstable-gcc-arm32-randconfig and
> > debian-unstable-gcc-arm32-debug-randconfig?
> >
> > I realize that the other 2 jobs are randconfigs so HYPERVISOR_ONLY gets
> > set anyway. But if HYPERVISOR_ONLY is a property of the specific
> > container, then I think it would be best to be consistent and set
> > HYPERVISOR_ONLY everywhere debian:unstable-arm32-gcc is used.
> >
> > E.g. one day we could just randconfigs to build also the tools with a
> > simple change to the build script and otherwise we would need to
> > remember to also add the HYPERVISOR_ONLY tag for the other 2 jobs using
> > debian:unstable-arm32-gcc.
> 
> Ok, so we want 4 HYPERVISOR_ONLY's in total, one for each instance of
> CONTAINER: debian:unstable-arm32-gcc ?

yeah
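The RANDCONFIG -> HYPERVISOR_ONLY implication that the patch retains (a randconfig build can only build the hypervisor, so it forces hypervisor-only mode regardless of the job's own variables) can be sketched roughly as follows. This is a hypothetical illustration of the logic in automation/scripts/build, not the actual script; the helper name is invented.

```shell
#!/bin/sh
# Hypothetical sketch: compute the effective HYPERVISOR_ONLY value for a
# build job.  RANDCONFIG=y implies HYPERVISOR_ONLY=y, mirroring the
# "RANDCONFIG -> HYPERVISOR_ONLY implication" kept by the patch.
effective_hypervisor_only () {
    # $1 = RANDCONFIG, $2 = HYPERVISOR_ONLY as set by the testcase
    if [ "$1" = "y" ] || [ "$2" = "y" ]; then
        echo y
    else
        echo n
    fi
}

# A randconfig job builds only the hypervisor even without the variable set:
effective_hypervisor_only y n
```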


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:51:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:51:14 +0000
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Doug Goldstein
	<cardoe@cardoe.com>, Anthony Perard <anthony.perard@citrix.com>, Michal Orzel
	<michal.orzel@amd.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
Thread-Topic: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
Thread-Index:
 AQHZG+cbFLjU6ViRqEqaO8BfTNldnq6NfEWAgAADPYCAAAKVgIAAAY8AgAABzACAAADAgA==
Date: Wed, 4 Jan 2023 01:51:03 +0000
Message-ID: <5d3ed12e-3c01-5ea8-8d41-4a199fdf92b7@citrix.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-5-andrew.cooper3@citrix.com>
 <alpine.DEB.2.22.394.2301031713530.4079@ubuntu-linux-20-04-desktop>
 <34e692e3-ef76-a43e-ec4f-a7c1ed2d094f@citrix.com>
 <alpine.DEB.2.22.394.2301031733410.4079@ubuntu-linux-20-04-desktop>
 <4f9a9927-c287-b40e-e4b0-653e69dbc1bb@citrix.com>
 <alpine.DEB.2.22.394.2301031748140.4079@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2301031748140.4079@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <D3C1869C3125DA48900FC3F596FAD320@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 01:51:03.8600
 (UTC)

On 04/01/2023 1:48 am, Stefano Stabellini wrote:
> On Wed, 4 Jan 2023, Andrew Cooper wrote:
>> On 04/01/2023 1:36 am, Stefano Stabellini wrote:
>>> On Wed, 4 Jan 2023, Andrew Cooper wrote:
>>>> On 04/01/2023 1:15 am, Stefano Stabellini wrote:
>>>>> On Fri, 30 Dec 2022, Andrew Cooper wrote:
>>>>>
>>>>>> Whether to build only Xen, or everything, is a property of container,
>>>>>> toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
>>>>>>
>>>>>> Capitalise HYPERVISOR_ONLY and have it set by the debian-unstable-gcc-arm32-*
>>>>>> testcases at the point that arm32 get matched with a container that can only
>>>>>> build Xen.
>>>>>>
>>>>>> For simplicity, retain the RANDCONFIG -> HYPERVISOR_ONLY implication.
>>>>>>
>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>> ---
>>>>>> CC: Doug Goldstein <cardoe@cardoe.com>
>>>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>>>> CC: Anthony PERARD <anthony.perard@citrix.com>
>>>>>> CC: Michal Orzel <michal.orzel@amd.com>
>>>>>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>>>> ---
>>>>>>  automation/gitlab-ci/build.yaml |  2 ++
>>>>>>  automation/scripts/build        | 11 ++++-------
>>>>>>  2 files changed, 6 insertions(+), 7 deletions(-)
>>>>>>
>>>>>> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
>>>>>> index 93d9ff69a9f2..e6a9357de3ef 100644
>>>>>> --- a/automation/gitlab-ci/build.yaml
>>>>>> +++ b/automation/gitlab-ci/build.yaml
>>>>>> @@ -516,11 +516,13 @@ debian-unstable-gcc-arm32:
>>>>>>    extends: .gcc-arm32-cross-build
>>>>>>    variables:
>>>>>>      CONTAINER: debian:unstable-arm32-gcc
>>>>>> +    HYPERVISOR_ONLY: y
>>>>>>  
>>>>>>  debian-unstable-gcc-arm32-debug:
>>>>>>    extends: .gcc-arm32-cross-build-debug
>>>>>>    variables:
>>>>>>      CONTAINER: debian:unstable-arm32-gcc
>>>>>> +    HYPERVISOR_ONLY: y
>>>>> can you move the setting of HYPERVISOR_ONLY to .arm32-cross-build-tmpl ?
>>>> Not really - that's the point I'm trying to make in the commit message.
>>>>
>>>>> I think that makes the most sense because .arm32-cross-build-tmpl is the
>>>>> one setting XEN_TARGET_ARCH and also the x86_64 tag.
>>>> It's not about x86_64; its about the container.
>>>>
>>>> Whether we can build just Xen, or everything, solely depends on the
>>>> contents in debian:unstable-arm32-gcc
>>>>
>>>> If we wanted to, we could update unstable-arm32-gcc's dockerfile to
>>>> install the arm32 cross user libs, and drop this HYPERVISOR_ONLY
>>>> restriction.
>>> If it is a property of the container, shouldn't HYPERVISOR_ONLY be set
>>> every time the debian:unstable-arm32-gcc container is used? Including
>>> debian-unstable-gcc-arm32-randconfig and
>>> debian-unstable-gcc-arm32-debug-randconfig?
>>>
>>> I realize that the other 2 jobs are randconfigs so HYPERVISOR_ONLY gets
>>> set anyway. But if HYPERVISOR_ONLY is a property of the specific
>>> container, then I think it would be best to be consistent and set
>>> HYPERVISOR_ONLY everywhere debian:unstable-arm32-gcc is used.
>>>
>>> E.g. one day we could just randconfigs to build also the tools with a
>>> simple change to the build script and otherwise we would need to
>>> remember to also add the HYPERVISOR_ONLY tag for the other 2 jobs using
>>> debian:unstable-arm32-gcc.
>> Ok, so we want 4 HYPERVISOR_ONLY's in total, one for each instance of
>> CONTAINER: debian:unstable-arm32-gcc ?
> yeah

Can I take that as an R-by/A-by then?

~Andrew
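The outcome the thread converges on, one HYPERVISOR_ONLY per use of the container, would look roughly like the following in automation/gitlab-ci/build.yaml. This is a sketch based only on the job names mentioned in the thread; the `extends` targets and any RANDCONFIG handling for the two randconfig jobs are assumptions, not taken from the real file.

```yaml
# Sketch only: every job that uses the debian:unstable-arm32-gcc container
# marks itself hypervisor-only, including the two randconfig jobs, so that
# the restriction stays attached to the container rather than to RANDCONFIG.
debian-unstable-gcc-arm32:
  extends: .gcc-arm32-cross-build
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    HYPERVISOR_ONLY: y

debian-unstable-gcc-arm32-debug:
  extends: .gcc-arm32-cross-build-debug
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    HYPERVISOR_ONLY: y

debian-unstable-gcc-arm32-randconfig:
  extends: .gcc-arm32-cross-build   # assumed template name
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    RANDCONFIG: y
    HYPERVISOR_ONLY: y

debian-unstable-gcc-arm32-debug-randconfig:
  extends: .gcc-arm32-cross-build-debug   # assumed template name
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    RANDCONFIG: y
    HYPERVISOR_ONLY: y
```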


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 01:55:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 01:55:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470913.730581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCszj-0005gR-CV; Wed, 04 Jan 2023 01:55:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470913.730581; Wed, 04 Jan 2023 01:55:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCszj-0005gK-9l; Wed, 04 Jan 2023 01:55:07 +0000
Received: by outflank-mailman (input) for mailman id 470913;
 Wed, 04 Jan 2023 01:55:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pCszi-0005gE-B7
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 01:55:06 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ced283a4-8bd2-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 02:55:04 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 758B4B810C0;
 Wed,  4 Jan 2023 01:55:04 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2607BC433D2;
 Wed,  4 Jan 2023 01:55:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ced283a4-8bd2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672797303;
	bh=ffuO8QeritqlgaKdGOBoioav+yKqzjNsb4dskCyqBb8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=L7hAeJKvvlPDYcxVj3sFPfF86oYhORzOJwl/Me2JplsVNNK+fGuKtYuubuxIS/k8i
	 mRJ84ngqn4rsvTjpomeuxiwEAvBVEkAatTxiQEj2huX4lqQ3eNBX8JgBDR3zrIlbia
	 Uz/9QpPJYxvM1cRNTbKFlSqmzk4s72uqBjnvC47G8C27alJO8hrp9noAvlJxTBcDeB
	 JEgGoodtyh3lQ452D8o2xRbd0vXB+fEoddymctdsQEtMcFuBfC1X9Qb72STOhQA44v
	 3byRAO+YADkzuTkL9rjr9dodVLY//Wr+2Id+tkBIIYXcdWUNZPnMS4xDBKnThe8kVX
	 EYw4oYLQwetvA==
Date: Tue, 3 Jan 2023 17:55:00 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Anthony Perard <anthony.perard@citrix.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 4/6] CI: Express HYPERVISOR_ONLY in build.yml
In-Reply-To: <5d3ed12e-3c01-5ea8-8d41-4a199fdf92b7@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2301031754550.4079@ubuntu-linux-20-04-desktop>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com> <20221230003848.3241-5-andrew.cooper3@citrix.com> <alpine.DEB.2.22.394.2301031713530.4079@ubuntu-linux-20-04-desktop> <34e692e3-ef76-a43e-ec4f-a7c1ed2d094f@citrix.com>
 <alpine.DEB.2.22.394.2301031733410.4079@ubuntu-linux-20-04-desktop> <4f9a9927-c287-b40e-e4b0-653e69dbc1bb@citrix.com> <alpine.DEB.2.22.394.2301031748140.4079@ubuntu-linux-20-04-desktop> <5d3ed12e-3c01-5ea8-8d41-4a199fdf92b7@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Jan 2023, Andrew Cooper wrote:
> On 04/01/2023 1:48 am, Stefano Stabellini wrote:
> > On Wed, 4 Jan 2023, Andrew Cooper wrote:
> >> On 04/01/2023 1:36 am, Stefano Stabellini wrote:
> >>> On Wed, 4 Jan 2023, Andrew Cooper wrote:
> >>>> On 04/01/2023 1:15 am, Stefano Stabellini wrote:
> >>>>> On Fri, 30 Dec 2022, Andrew Cooper wrote:
> >>>>>
> >>>>>> Whether to build only Xen, or everything, is a property of container,
> >>>>>> toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
> >>>>>>
> >>>>>> Capitalise HYPERVISOR_ONLY and have it set by the debian-unstable-gcc-arm32-*
> >>>>>> testcases at the point that arm32 gets matched with a container that can only
> >>>>>> build Xen.
> >>>>>>
> >>>>>> For simplicity, retain the RANDCONFIG -> HYPERVISOR_ONLY implication.
> >>>>>>
> >>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>>>>> ---
> >>>>>> CC: Doug Goldstein <cardoe@cardoe.com>
> >>>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
> >>>>>> CC: Anthony PERARD <anthony.perard@citrix.com>
> >>>>>> CC: Michal Orzel <michal.orzel@amd.com>
> >>>>>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> >>>>>> ---
> >>>>>>  automation/gitlab-ci/build.yaml |  2 ++
> >>>>>>  automation/scripts/build        | 11 ++++-------
> >>>>>>  2 files changed, 6 insertions(+), 7 deletions(-)
> >>>>>>
> >>>>>> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> >>>>>> index 93d9ff69a9f2..e6a9357de3ef 100644
> >>>>>> --- a/automation/gitlab-ci/build.yaml
> >>>>>> +++ b/automation/gitlab-ci/build.yaml
> >>>>>> @@ -516,11 +516,13 @@ debian-unstable-gcc-arm32:
> >>>>>>    extends: .gcc-arm32-cross-build
> >>>>>>    variables:
> >>>>>>      CONTAINER: debian:unstable-arm32-gcc
> >>>>>> +    HYPERVISOR_ONLY: y
> >>>>>>  
> >>>>>>  debian-unstable-gcc-arm32-debug:
> >>>>>>    extends: .gcc-arm32-cross-build-debug
> >>>>>>    variables:
> >>>>>>      CONTAINER: debian:unstable-arm32-gcc
> >>>>>> +    HYPERVISOR_ONLY: y
> >>>>> can you move the setting of HYPERVISOR_ONLY to .arm32-cross-build-tmpl ?
> >>>> Not really - that's the point I'm trying to make in the commit message.
> >>>>
> >>>>> I think that makes the most sense because .arm32-cross-build-tmpl is the
> >>>>> one setting XEN_TARGET_ARCH and also the x86_64 tag.
> >>>> It's not about x86_64; it's about the container.
> >>>>
> >>>> Whether we can build just Xen, or everything, solely depends on the
> >>>> contents in debian:unstable-arm32-gcc
> >>>>
> >>>> If we wanted to, we could update unstable-arm32-gcc's dockerfile to
> >>>> install the arm32 cross user libs, and drop this HYPERVISOR_ONLY
> >>>> restriction.
> >>> If it is a property of the container, shouldn't HYPERVISOR_ONLY be set
> >>> every time the debian:unstable-arm32-gcc container is used? Including
> >>> debian-unstable-gcc-arm32-randconfig and
> >>> debian-unstable-gcc-arm32-debug-randconfig?
> >>>
> >>> I realize that the other 2 jobs are randconfigs so HYPERVISOR_ONLY gets
> >>> set anyway. But if HYPERVISOR_ONLY is a property of the specific
> >>> container, then I think it would be best to be consistent and set
> >>> HYPERVISOR_ONLY everywhere debian:unstable-arm32-gcc is used.
> >>>
> >>> E.g. one day we could change the randconfigs to also build the tools
> >>> with a simple change to the build script; otherwise we would need to
> >>> remember to also add the HYPERVISOR_ONLY tag for the other 2 jobs using
> >>> debian:unstable-arm32-gcc.
> >> Ok, so we want 4 HYPERVISOR_ONLY's in total, one for each instance of
> >> CONTAINER: debian:unstable-arm32-gcc ?
> > yeah
> 
> Can I take that as an R-by/A-by then?

yep
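For reference, the agreed end state would look something like this in
automation/gitlab-ci/build.yaml (a sketch of the four affected jobs only,
per the discussion above; the extends/RANDCONFIG details are inferred from
context, not the applied patch):

```yaml
debian-unstable-gcc-arm32:
  extends: .gcc-arm32-cross-build
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    HYPERVISOR_ONLY: y

debian-unstable-gcc-arm32-debug:
  extends: .gcc-arm32-cross-build-debug
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    HYPERVISOR_ONLY: y

debian-unstable-gcc-arm32-randconfig:
  extends: .gcc-arm32-cross-build
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    RANDCONFIG: y
    HYPERVISOR_ONLY: y

debian-unstable-gcc-arm32-debug-randconfig:
  extends: .gcc-arm32-cross-build-debug
  variables:
    CONTAINER: debian:unstable-arm32-gcc
    RANDCONFIG: y
    HYPERVISOR_ONLY: y
```

i.e. HYPERVISOR_ONLY set alongside every instance of
CONTAINER: debian:unstable-arm32-gcc, so the restriction stays tied to
the container rather than to individual job variants.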


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 06:29:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 06:29:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470922.730593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCxH1-00088F-Ui; Wed, 04 Jan 2023 06:29:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470922.730593; Wed, 04 Jan 2023 06:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCxH1-000888-RK; Wed, 04 Jan 2023 06:29:15 +0000
Received: by outflank-mailman (input) for mailman id 470922;
 Wed, 04 Jan 2023 06:29:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCxH0-00087y-6X; Wed, 04 Jan 2023 06:29:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCxH0-0004WC-4n; Wed, 04 Jan 2023 06:29:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCxGz-00022C-Ie; Wed, 04 Jan 2023 06:29:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCxGz-0003xp-Fq; Wed, 04 Jan 2023 06:29:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yBqAwBtZh4rt+py5r6h+d+4HmZXNp1GHJ2AzfZUrLNc=; b=Q1QvNqnpX41IHQKkcdxnFaz9FH
	HqtndwwgcDWQzUos2HZAqPSep1NpUBLCEG08UBS8rUbjqqpcyb7vqP/SM9eVrOjj0dBoRF8xxK4im
	5Src9v2NBrPDlF0sqZ9lXmcBUjHaGg+N4+0W8iyZgQTFIPveD5hd7cdrItVacOYGSSUw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175563-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175563: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=89c5d90003d9c54d03d3e85bd305718e9c29a213
X-Osstest-Versions-That:
    ovmf=b670700ddf5eb1dd958d60eb4f2a51e0636206f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 06:29:13 +0000

flight 175563 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175563/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 89c5d90003d9c54d03d3e85bd305718e9c29a213
baseline version:
 ovmf                 b670700ddf5eb1dd958d60eb4f2a51e0636206f9

Last test of basis   175558  2023-01-03 06:41:58 Z    0 days
Testing same since   175563  2023-01-04 02:10:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b670700ddf..89c5d90003  89c5d90003d9c54d03d3e85bd305718e9c29a213 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:19:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470934.730603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCyzR-0002m6-1y; Wed, 04 Jan 2023 08:19:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470934.730603; Wed, 04 Jan 2023 08:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCyzQ-0002lz-Ve; Wed, 04 Jan 2023 08:19:12 +0000
Received: by outflank-mailman (input) for mailman id 470934;
 Wed, 04 Jan 2023 08:19:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aavW=5B=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pCyzP-0002lt-4d
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:19:11 +0000
Received: from sonic308-54.consmr.mail.gq1.yahoo.com
 (sonic308-54.consmr.mail.gq1.yahoo.com [98.137.68.30])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 741a636f-8c08-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 09:19:08 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic308.consmr.mail.gq1.yahoo.com with HTTP; Wed, 4 Jan 2023 08:19:04 +0000
Received: by hermes--production-bf1-5458f64d4-rkcp4 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID c7ed076b62267adadaa8fddc1f12ac2c; 
 Wed, 04 Jan 2023 08:19:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 741a636f-8c08-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672820344; bh=jItkp3Ywrhodg9IS7x5pkpdm6AjC+m6vJyEcdHBOpiU=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=R0dkFiTFtn40tuRMVlNGzdofL2Lt95wFM25rBh1mdMpO7n88cuIwpQGTvHLt08TTDqYNl/9ycnCrDPtzK6GFyBH0q7KrXXgmTVLrnZlolzgeCZBa1FfHqMlkA16lU4GVlK/FHJNbhYbv/nzTs6As0p84Rvzvi4qfj4SnOBO0PRiP4AVIYxFNFwbNOeaBf+GAwlHwtJ9j+TwS3MpVHE15yDOd3v2p+OvhPsqxIaTbeXS/4/j9Q/pqq+TxpjwWGQF61K5bzSMGpGRAevYYErQm/haAncRIZX6rYKdLpuoRg69EbUWUj1phnXTtX6QppsecHVXRJBDl26cBlZnyEwSa5Q==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672820344; bh=7hgLgK60MMH0sdqTp22ESPXOA7a2xV2NLOYU1/ccyc9=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=W+7gxxwUWLe9DWK7TCCV7Kj2Ru3dwKsB7QMCTFx6k4cq0j+lReoursm950uVpho3KMki3iHxsvmevlg4AmBSGq9MJxNVNIeWmjHYfQ8FnilFJDSGy1GDiT50TsEXldBsT1yFZJxYjyYfROZOh/wFjXwMzhApjSP/2IhgyqfZFSoZVSkQHVtiKuTf8gHWBEwLiX4Nxdw3Nofa2r3Geeef1UE94y9VRekxvu0WC/Ghi0z/l9v16AaG0sOimRu+xoFGf7BK1fRfCY9MKFuFjVrhmdTovRMrOayoY2mUoJVuaDLyZc+E+1KpsBQWlpNS+PoH5gkUWQOE1QS9zI1cHIrWjw==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com>
Date: Wed, 4 Jan 2023 03:18:59 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost
 <eduardo@habkost.net>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
References: <20230102213504.14646-1-shentey@gmail.com>
 <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com>
 <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org>
 <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 6141

On 1/3/2023 8:38 AM, Bernhard Beschow wrote:
>
>
> On Tue, Jan 3, 2023 at 2:17 PM Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>
>     Hi Chuck,
>
>     On 3/1/23 04:15, Chuck Zmudzinski wrote:
>     > On 1/2/23 4:34 PM, Bernhard Beschow wrote:
>     >> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
>     >> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
>     >> machine agnostic to the precise southbridge being used. 2/ will become
>     >> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
>     >> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>     >>
>     >> Testing done:
>     >> None, because I don't know how to conduct this properly :(
>     >>
>     >> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>     >>            "[PATCH v4 00/30] Consolidate PIIX south bridges"
>
>     This series is based on a previous series:
>     https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
>     (which itself is also based on an earlier series).
>
>     >> Bernhard Beschow (6):
>     >>    include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
>     >>    hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>     >>    hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>     >>    hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>     >>    hw/isa/piix: Resolve redundant k->config_write assignments
>     >>    hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>     >>
>     >>   hw/i386/pc_piix.c             | 34 ++++++++++++++++--
>     >>   hw/i386/xen/xen-hvm.c         |  9 +++--
>     >>   hw/isa/piix.c                 | 66 +----------------------------------
>     >
>     > This file does not exist on the Qemu master branch.
>     > But hw/isa/piix3.c and hw/isa/piix4.c do exist.
>     >
>     > I tried renaming it from piix.c to piix3.c in the patch, but
>     > the patch set still does not apply cleanly on my tree.
>     >
>     > Is this patch set re-based against something other than
>     > the current master Qemu branch?
>     >
>     > I have a system that is suitable for testing this patch set, but
>     > I need guidance on how to apply it to the Qemu source tree.
>
>     You can ask Bernhard to publish a branch with the full work,
>
>
> Hi Chuck,
>
> ... or just visit https://patchew.org/QEMU/20230102213504.14646-1-shentey@gmail.com/ . There you'll find a git tag with a complete history and all instructions!
>
> Thanks for giving my series a test ride!
>
> Best regards,
> Bernhard
>
>     or apply each series locally. I use the b4 tool for that:
>     https://b4.docs.kernel.org/en/latest/installing.html
>
>     i.e.:
>
>     $ git checkout -b shentey_work
>     $ b4 am 20221120150550.63059-1-shentey@gmail.com
>     $ git am
>     ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
>     $ b4 am 20221221170003.2929-1-shentey@gmail.com
>     $ git am
>     ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
>     $ b4 am 20230102213504.14646-1-shentey@gmail.com
>     $ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx
>
>     Now the branch 'shentey_work' contains all the patches and you can test.
>
>     Regards,
>
>     Phil.
>

Hi Phil and Bernhard,

I tried applying these 3 patch series on top of the current qemu
master branch.

Unfortunately, I saw a regression, so I can't give a tested-by tag yet.

Here are the details of the testing I did so far:

Xen only needs one target, the i386-softmmu target which creates
the qemu-system-i386 binary that Xen uses for its device model.
That target compiled and linked with no problems with these 3
patch series applied on top of qemu master. I didn't try building
any other targets.
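For anyone wanting to reproduce that build, it went along these lines from
a qemu checkout (a sketch only; exact configure options vary by setup):

```shell
# Build only the i386 system-emulation target, which produces the
# qemu-system-i386 binary that Xen uses for its device model.
./configure --target-list=i386-softmmu
make -j"$(nproc)"
```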

My tests used the xenfv machine type with the xen platform
pci device, i.e. an ordinary xen hvm guest with xen paravirtualized
network and block device drivers. The xenfv machine is based on the
i440fx machine type and so emulates the piix3. I tested the xen
hvm guests with two different configurations as described below.

I tested both Linux and Windows guests, with mixed results. With the
current Qemu master (commit 222059a0fccf4 without the 3 patch
series applied), all tested guest configurations work normally for both
Linux and Windows guests.

With these 3 patch series, which consolidate piix3 and piix4 and
resolve the xen piix3 device that my guests use, applied on top of
the qemu master branch, I unfortunately got a regression.

The regression occurred with a configuration that uses the qemu
bochs stdvga graphics device with a vnc display, and the qemu
usb-tablet device to emulate the mouse and keyboard. After applying
the 3 patch series, the emulated mouse is not working at all for Linux
guests. It works for Windows guests, but the mouse pointer in the
guest does not follow the mouse pointer in the vnc window as closely
as it does without the 3 patch series. So this is the bad news of a
regression introduced somewhere in these 3 patch series.

The good news is that in a configuration that does not use the qemu
usb-tablet device or the bochs stdvga device, but instead uses a
passed-through usb3 controller with a real usb mouse and keyboard
attached, plus the real sound card and vga device passed through and
a 1920x1080 HDMI monitor, the 3 patch series introduce no regression:
both Linux and Windows guests work perfectly in that configuration.

My next test will be to test Bernhard's published git tag, without
trying to merge the 3 patch series into master myself, and see whether
it also has the regression. I will also double-check that I didn't make
any mistakes in merging the 3 patch series, by creating the shentey_work
branch with b4 and git as Phil described and comparing that to my
working tree.

I will also try testing only the first series, then the first and second
series together, to narrow down which of the 3 series introduces the
regression.

Best regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:31:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:31:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470941.730615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzBQ-0005GZ-6Q; Wed, 04 Jan 2023 08:31:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470941.730615; Wed, 04 Jan 2023 08:31:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzBQ-0005GS-3J; Wed, 04 Jan 2023 08:31:36 +0000
Received: by outflank-mailman (input) for mailman id 470941;
 Wed, 04 Jan 2023 08:31:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+XhT=5B=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pCzBO-0005GM-HP
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:31:34 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2066.outbound.protection.outlook.com [40.107.20.66])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 31016a85-8c0a-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 09:31:32 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9184.eurprd04.prod.outlook.com (2603:10a6:150:28::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 08:31:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 08:31:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31016a85-8c0a-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AhmVTYpDTRaBjXq/UucI1FyAs2XNrVS6bLwWz3aShLhOjYoEkr8tP1c0M43vNP/la8Qv9/IkDrEsdiO4mOgXrIozrLcRZYrEgsPQNpftS9TAAnLY77tQSKi3Yl4kg92v/P/6iOhJTfU4cZinL5dKS/6wuUd2uUeWYDiyzv0E5pimdQsAyqzJ/5I505q+NPjnp0ua+2HzxA9Ipu00q+2y/ZCPgZGu7mKRXhdO4mK7Bong8MaBJE+ilYJdsgpJJpYYOySdI5FhrUsEPwF3MyCzHidfhq1pop9+eDa0kRfhdE29w2SMtinTOFUeo1nuBcTur+aLui1G77xP3Sh6ARAY5w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z41OvX2AvLdrMyeP19C5M5ObnBSWq+4jnWayoWtdC5o=;
 b=We773hljTbjK8ZKrA3JZDlbadZIFZxVjsQR7qKoo234ApGdQiAEVdblUMYajdWXphjSJGmOw6YkENYNWTVa+7lSbjEbD9g2qv5s/NeY7Qiwd9/EzaN9IglfGcxGSpVvBleg0cZNDDPXg16b50Eln29OHuVVRRD2yF/Mvc1BJW+7iod0nRiB4+vgdAFDvyprD3GT4aKKjuVytS2YGLTGY78WO4Lj80Co8KAWfvdYie30n9IhwNGDpkAzd9e0JOZx2eDXni8RlW2BcFv6LLf6mcl6dVkx4DH1AszCA1rStOb7oiigI8jAw+yvg5FLHvw48IEihM+DG8YxyxS0vc9jxaQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z41OvX2AvLdrMyeP19C5M5ObnBSWq+4jnWayoWtdC5o=;
 b=AvhZjc9J9eFFbOZDESWCbr1HSPJKsDVkPqFpBH1QDJKqb31wYtdkDLMOy4trS/NNDYsEgMyARb0e7nT0eS+wUhPlBIrIOu0EJ/yxQkDR8fPon0EQBpRX+7tVWTBKzAMDTgCwfPbTHiYjEztaGUlJfIPLVQDoCFmgWq9RAbWxs209hUKAe3NIyTyBIZsdOyjKIVXUkO2SOi+34ZdensLg+3QhchJyeCfGkhgg4yrlnmsLRCKHKpdNElLM4YH0dJJT/pZuhPhhRfs7KdnSehZOIMRvVa+fLlz4Szd9u+U+PI9GzpiGFd4KyewZVQK1P/BkR+2BLUAD6OcmJE7EYFfeWA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d272122f-8459-075e-f2b5-476b2ddf7405@suse.com>
Date: Wed, 4 Jan 2023 09:31:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/xen: Remove the unused function p2m_top_mfn_init()
Content-Language: en-US
To: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Cc: boris.ostrovsky@oracle.com, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Abaci Robot <abaci@linux.alibaba.com>, jgross@suse.com
References: <20221227082115.59000-1-jiapeng.chong@linux.alibaba.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20221227082115.59000-1-jiapeng.chong@linux.alibaba.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0036.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GV1PR04MB9184:EE_
X-MS-Office365-Filtering-Correlation-Id: cd1917cb-0d3f-40d4-4fa3-08daee2e135c
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cd1917cb-0d3f-40d4-4fa3-08daee2e135c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jan 2023 08:31:28.9100
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IBkh0DLcUpcMeaJVX9kUPlG4J1KxVRuuIppJcLnNmBFHVQ4N3k0bx2RaDHbfHEY5b5zPj38orMojAfstZ+Kn5g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR04MB9184

On 27.12.2022 09:21, Jiapeng Chong wrote:
> The function p2m_top_mfn_init is defined in the p2m.c file,
> but not called elsewhere, so remove this unused function.

This and the title are wrong: p2m_top_mfn_init() is used by
xen_build_mfn_list_list().

> arch/x86/xen/p2m.c:137:24: warning: unused function 'p2m_index'.

Whereas this and the actual code change look correct to me.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:32:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:32:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470947.730642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzBw-0006A5-RE; Wed, 04 Jan 2023 08:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470947.730642; Wed, 04 Jan 2023 08:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzBw-00069y-OX; Wed, 04 Jan 2023 08:32:08 +0000
Received: by outflank-mailman (input) for mailman id 470947;
 Wed, 04 Jan 2023 08:32:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzBv-0005oA-N4; Wed, 04 Jan 2023 08:32:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzBv-0007qP-LP; Wed, 04 Jan 2023 08:32:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzBv-000774-A2; Wed, 04 Jan 2023 08:32:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzBv-0004ir-9c; Wed, 04 Jan 2023 08:32:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SLUaGebJDqEx5zZic8zLR+toRxeS6xh0yZBcvX35n3k=; b=UIz72zw55JroSOXMaI1QI3sPSU
	KvdyYJpi5e9s8g9QAWtG7V3rAUrL3ti1OxLd7m47c24bi2XBRnFH8tDwZEZVkUtZzihDb2wGsm0B+
	AjNTCqsabegA0RcMhOwQ3yz3iJ427gYNSvwyefoW89SxjjoAmP3hHS3eRWpoNsXMc9/A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175565-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175565: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=992d5451d19b93635d52db293bab680e32142776
X-Osstest-Versions-That:
    ovmf=89c5d90003d9c54d03d3e85bd305718e9c29a213
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 08:32:07 +0000

flight 175565 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175565/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 992d5451d19b93635d52db293bab680e32142776
baseline version:
 ovmf                 89c5d90003d9c54d03d3e85bd305718e9c29a213

Last test of basis   175563  2023-01-04 02:10:49 Z    0 days
Testing same since   175565  2023-01-04 06:42:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sean Rhodes <sean@starlabs.systems>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   89c5d90003..992d5451d1  992d5451d19b93635d52db293bab680e32142776 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470985.730687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOb-0000kE-P5; Wed, 04 Jan 2023 08:45:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470985.730687; Wed, 04 Jan 2023 08:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOb-0000k7-Ln; Wed, 04 Jan 2023 08:45:13 +0000
Received: by outflank-mailman (input) for mailman id 470985;
 Wed, 04 Jan 2023 08:45:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOZ-0008Pe-JT
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:11 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 192bf692-8c0c-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 09:45:10 +0100 (CET)
Received: by mail-ed1-x530.google.com with SMTP id m21so47531507edc.3
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:10 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 192bf692-8c0c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FErdG0bpTyu0YrAGq/30Jx78fTnJ4dyc3jWhGGqv2No=;
        b=qcSJvgIbcIVR8siwzqOdvogq8s0VHZTvIkzMO4poTEVOg2oi34sCvhNjBA68pbp6/c
         oDYUTImvkzAXzSsv4EEunLBJSB5/Z1RyCpcVyeiopQGkcdFGFVHZOIQGvy4sPsP7dzQ7
         oIjr6PInSK5S792qgowN7s9Nkot8vOLo43qTK9FDBWi/p05O6Sex6mx/DpXc0G/1riCB
         Slw9YvtQRRQm5uxl1zVVddLMM4fWRYsLvX3Dqrbfvy5BG0D4uCCT+B3EL2tg1dlI3Hx9
         shVf/l3/rsEHRL14qYRlbC1YAXccLTqlnn6aSv/rlynLQEkp0ECG/O04YL+r6mhV7TeT
         WcIw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=FErdG0bpTyu0YrAGq/30Jx78fTnJ4dyc3jWhGGqv2No=;
        b=WOXSfMjdP0o+diG+ChXXCFPZIz8E98QnYd46Ekcgy/JOTg1iGbPf91eKm3klAI5ifB
         ZwNOJn3cwxN16wtrStztJbRo7edBRKia0fuST+TVe5aqw3JQR2dcCjJ30VJvvcKSnRoq
         c4LDxjps/sIqsyXWA5e2voivb9OdWNve6haPaTmZVGsy2qkSBZuTBPjBa4ItnskIWfmE
         /YtPPSgn6Cok1iYcCJ3qDRvdMgofQBaTmMprUv/HNeGqAhYi5gzrfuCnCIHjLLYG32O+
         Ukg2ZoHrttBUVw+j1RaGFXxw5X7Q3+17spA2TGAyy8SEJDaMFjpGIG8DLK3BSTcJqWCD
         JbmQ==
X-Gm-Message-State: AFqh2koky4frLRCSOM61evC5KJjBts9OYBCs7yosFiloSr9rXpO/MpKl
	Tgbiylzf3xsKwt8/Nty7nsRP5p2A9bY=
X-Google-Smtp-Source: AMrXdXtrV1AIwj0GTEIapqRqGaPwRkP//LOEvC89r7CXGAfVOJlmo1Xm/ZybUic50a87TCyZTzoBpQ==
X-Received: by 2002:a05:6402:1119:b0:472:46bf:fb3c with SMTP id u25-20020a056402111900b0047246bffb3cmr39919587edv.35.1672821910348;
        Wed, 04 Jan 2023 00:45:10 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and iommu_snoop are VT-d specific
Date: Wed,  4 Jan 2023 10:44:57 +0200
Message-Id: <20230104084502.61734-4-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230104084502.61734-1-burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use CONFIG_INTEL_IOMMU to guard their usage in common code.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2:
  - replace CONFIG_INTEL_VTD with CONFIG_INTEL_IOMMU

 xen/drivers/passthrough/iommu.c | 4 +++-
 xen/include/xen/iommu.h         | 6 +++++-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 998dfaf20d..a2c67a17cd 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -82,11 +82,13 @@ static int __init cf_check parse_iommu_param(const char *s)
         else if ( ss == s + 23 && !strncmp(s, "quarantine=scratch-page", 23) )
             iommu_quarantine = IOMMU_quarantine_scratch_page;
 #endif
-#ifdef CONFIG_X86
+#ifdef CONFIG_INTEL_IOMMU
         else if ( (val = parse_boolean("igfx", s, ss)) >= 0 )
             iommu_igfx = val;
         else if ( (val = parse_boolean("qinval", s, ss)) >= 0 )
             iommu_qinval = val;
+#endif
+#ifdef CONFIG_X86
         else if ( (val = parse_boolean("superpages", s, ss)) >= 0 )
             iommu_superpages = val;
 #endif
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 4f22fc1bed..aa924541d5 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
    iommu_intremap_restricted,
    iommu_intremap_full,
 } iommu_intremap;
-extern bool iommu_igfx, iommu_qinval, iommu_snoop;
 #else
 # define iommu_intremap false
+#endif
+
+#ifdef CONFIG_INTEL_IOMMU
+extern bool iommu_igfx, iommu_qinval, iommu_snoop;
+#else
 # define iommu_snoop false
 #endif
 
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470986.730697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOd-000110-1w; Wed, 04 Jan 2023 08:45:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470986.730697; Wed, 04 Jan 2023 08:45:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOc-00010r-Uz; Wed, 04 Jan 2023 08:45:14 +0000
Received: by outflank-mailman (input) for mailman id 470986;
 Wed, 04 Jan 2023 08:45:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOc-0008Pf-0S
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:14 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 19fa257a-8c0c-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 09:45:12 +0100 (CET)
Received: by mail-ej1-x62b.google.com with SMTP id u19so80869654ejm.8
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:12 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19fa257a-8c0c-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=mdM0GT8K2NEubn6wx1wh2HmFlgRPGL3ox7yjkbP2cHM=;
        b=IlyAPSqkdHW7+9UwD5gXGVtDULbpYhhfd0Gx3yvhyrN1hcnLLYpk/M8pLuZh3YGQA5
         RsVx5uaz+bOugElYhIBMhtGc3tqbn9YBkmVDrxuCMqWWX/W5oNuPk6DW0mfx8vmvL3HF
         ENubWWJaVNiyKeqpPIwwuejdwmsFxhgpxr1B0qOLb/mJ/yQinsTqM9N5w7duefO33xzh
         CxrIrX8vsDWj7A9Pvq8nOW5W7QBLF+SVPuTuHzx+86VaMLDiZtwdhK3Musr1DWA3roXi
         SaRmbzkQydU5DN5adQlyaJo10MX9fRjb3UDxkTH2wFOytbtjR6qE6Ebm4ZmQd2sZty00
         eE/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=mdM0GT8K2NEubn6wx1wh2HmFlgRPGL3ox7yjkbP2cHM=;
        b=ujVkcSyvyXqPUEzNnTKz9NEDyz6evmAWdfIEWrtANAzNhQa+VFsLYIoN81nofZI6hO
         J9yPis4/OgPsLufkJFyLb8MMX/WTpu4KZX6q6PqcvQBsPHS9G8ApWIu3bFNlNh9aBcMd
         fpD75xnB0RWP9aD43mDdwSopD5WtKV3ZmHYIV0M8AsCMYzj1EeM+bySilgzzN7SznPnK
         qFmtD3DAsPVBdvNn9Q28pdP6jYvVofAaBCc8FTsp66rShvPRnHBfgeHrPDc8q+F0mTQ1
         mkWNhLyq0c/Ku47e60sy5pQHjEeAVaxuhwAflQrUH/5/94/ASkOcTrN4MPvxXW5kbWYm
         o7rA==
X-Gm-Message-State: AFqh2krDHKuaSvEMjWKXTJOVpOfbxAE0OgnG+FAVj0A6RLn67shxDH7h
	Lr6frMdaxd8qZBJkYusiGnKOga9DAT8=
X-Google-Smtp-Source: AMrXdXsZZgpByDZtDx+HjsyiTiwAs0G3noW3D/xTiatDsGRWMUyXQVuGvHA8kTWk4IDuO3qsJCn9ZA==
X-Received: by 2002:a17:907:20b0:b0:7d3:8159:f361 with SMTP id pw16-20020a17090720b000b007d38159f361mr40554936ejb.36.1672821911636;
        Wed, 04 Jan 2023 00:45:11 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 4/8] x86/acpi: separate AMD-Vi and VT-d specific functions
Date: Wed,  4 Jan 2023 10:44:58 +0200
Message-Id: <20230104084502.61734-5-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230104084502.61734-1-burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The functions acpi_dmar_init() and acpi_dmar_zap/reinstate() are
VT-d specific, while the function acpi_ivrs_init() is AMD-Vi specific.
To eliminate dead code, they need to be guarded under CONFIG_INTEL_IOMMU
and CONFIG_AMD_IOMMU, respectively.

Instead of adding #ifdef guards around the function calls, implement them
as empty static inline functions.

Take the opportunity to move the declarations of acpi_dmar_zap/reinstate() to
the arch specific header.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2:
  - replace CONFIG_INTEL_VTD with CONFIG_INTEL_IOMMU

 xen/arch/x86/include/asm/acpi.h | 14 ++++++++++++++
 xen/include/xen/acpi.h          |  3 ---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/include/asm/acpi.h b/xen/arch/x86/include/asm/acpi.h
index c453450a74..ca8529a7c8 100644
--- a/xen/arch/x86/include/asm/acpi.h
+++ b/xen/arch/x86/include/asm/acpi.h
@@ -140,8 +140,22 @@ extern u32 pmtmr_ioport;
 extern unsigned int pmtmr_width;
 
 void acpi_iommu_init(void);
+
+#ifdef CONFIG_INTEL_IOMMU
 int acpi_dmar_init(void);
+void acpi_dmar_zap(void);
+void acpi_dmar_reinstate(void);
+#else
+static inline int acpi_dmar_init(void) { return -ENODEV; }
+static inline void acpi_dmar_zap(void) {}
+static inline void acpi_dmar_reinstate(void) {}
+#endif
+
+#ifdef CONFIG_AMD_IOMMU
 int acpi_ivrs_init(void);
+#else
+static inline int acpi_ivrs_init(void) { return -ENODEV; }
+#endif
 
 void acpi_mmcfg_init(void);
 
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index 1b9c75e68f..82b24a5ef0 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -206,9 +206,6 @@ static inline int acpi_get_pxm(acpi_handle handle)
 
 void acpi_reboot(void);
 
-void acpi_dmar_zap(void);
-void acpi_dmar_reinstate(void);
-
 #endif /* __ASSEMBLY__ */
 
 #endif /*_LINUX_ACPI_H*/
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470982.730654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOZ-0008Q2-3Z; Wed, 04 Jan 2023 08:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470982.730654; Wed, 04 Jan 2023 08:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOZ-0008Pv-0s; Wed, 04 Jan 2023 08:45:11 +0000
Received: by outflank-mailman (input) for mailman id 470982;
 Wed, 04 Jan 2023 08:45:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOX-0008Pe-Qd
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:09 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 17e4f316-8c0c-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 09:45:08 +0100 (CET)
Received: by mail-ej1-x62e.google.com with SMTP id kw15so80791239ejc.10
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:08 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17e4f316-8c0c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=U41p6PerASg/ei7rzLx/hT3otWDVCdBfixYygyAySwk=;
        b=CAgRsne3K9cYsoKps2JWLBhsdMVSIAip+bDt2+YOPYqa60Nwgh5iMJu69YQh0xbaI8
         61pO1+Er9QCvpY2ZfiM8RPYU1eqqYFxaKvpDkmdJvaoPA52aqLQ0KHWkTn1jIc8Z+H7R
         dEFEZIez3O7qkF4W/2iQ5X8WSpy+XyeMn99Jp0DC4CUj8lKYfsYv4dq4Xra+pEOeU4C8
         SJaTT7Q9j3ZZOsPZqsDnEe22RbGD7o2Y3H/JmaYOM7D5Sv6mY3zy5RqMHVGSva6yqO/b
         zVQLZOTerGCntCmyZzk1NyBdZspKxV/Jum5qWvVGkIlmOQnjZfQqJl6ht5KYUPBtNuVX
         sibA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=U41p6PerASg/ei7rzLx/hT3otWDVCdBfixYygyAySwk=;
        b=Cetd5oEilMjN4+qysv5IxONRy1hyfKs122I+0/yAR719XsB3dEyqGCRA8n2tGAw3Dv
         33IwbdfNcrsjJ8sORzKIOgTY3N2VTRoHDJCy3pG5czjSwkVKUNH6/eJ0jC4dG7t3Buqd
         6sowMtCDbcAQYOkW436xIm3P+/yKbfMjAucQfTxjr15I4k7dE8ZBmezjiXiFj2ERi0S/
         CX8T/FaixqDCyYf4kFqcXqLzbUmK//Z0H5kGa0V67c9jnCn9CBctU8+48RXu7AqbZCQI
         eRWjin5ywsjSPdmErvwV2Fl+9kJZB3KGOhD6fYVEKW/3Ov2AHRvxj+gqJn6E+QaUTGVI
         XjJg==
X-Gm-Message-State: AFqh2kr3RNrs2loScO146FnigipSbPcfY/zTdDxuHt5MQ7pkdFA0+BZI
	miVKnbMD6fi2NnBBu1e8vREeM8tzX38=
X-Google-Smtp-Source: AMrXdXs/+Vz7qVltCpOMZdgYin4fHcD0O3cVt2uL1v+aHewb2ycGe0IGJJHzHLbfcfM3tR+DqK94yA==
X-Received: by 2002:a17:907:a708:b0:82e:a59a:5c3e with SMTP id vw8-20020a170907a70800b0082ea59a5c3emr52269642ejc.10.1672821908187;
        Wed, 04 Jan 2023 00:45:08 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 1/8] x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
Date: Wed,  4 Jan 2023 10:44:55 +0200
Message-Id: <20230104084502.61734-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230104084502.61734-1-burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
specific to each IOMMU technology to be separated and, when not required,
stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
enable IOMMU support for platforms that implement the Intel Virtualization
Technology for Directed I/O.

Since, at this point, disabling either of them would cause Xen to fail to
compile, the options are not visible to the user and are enabled by default
on x86.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2:
  - just introduce the new options but not make them selectable by the user
  - adjust commit message
  - replace CONFIG_INTEL_VTD with CONFIG_INTEL_IOMMU

 xen/drivers/passthrough/Kconfig  | 6 ++++++
 xen/drivers/passthrough/Makefile | 4 ++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 479d7de57a..5c65567744 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -37,6 +37,12 @@ config IPMMU_VMSA
 
 endif
 
+config AMD_IOMMU
+	def_bool y if X86
+
+config INTEL_IOMMU
+	def_bool y if X86
+
 config IOMMU_FORCE_PT_SHARE
 	bool
 
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index a5efa22714..a1621540b7 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -1,5 +1,5 @@
-obj-$(CONFIG_X86) += vtd/
-obj-$(CONFIG_X86) += amd/
+obj-$(CONFIG_INTEL_IOMMU) += vtd/
+obj-$(CONFIG_AMD_IOMMU) += amd/
 obj-$(CONFIG_X86) += x86/
 obj-$(CONFIG_ARM) += arm/
 
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470983.730659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOZ-0008TX-FA; Wed, 04 Jan 2023 08:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470983.730659; Wed, 04 Jan 2023 08:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOZ-0008SU-8P; Wed, 04 Jan 2023 08:45:11 +0000
Received: by outflank-mailman (input) for mailman id 470983;
 Wed, 04 Jan 2023 08:45:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOY-0008Pf-2k
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:10 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 175bb389-8c0c-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 09:45:07 +0100 (CET)
Received: by mail-ej1-x62e.google.com with SMTP id u9so81072765ejo.0
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:07 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 175bb389-8c0c-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=EBCGeHaBTmxyN/DaJmKzht+c6jCnKFesS3Z897W1Kpg=;
        b=JofC2WjASo4RG8zIrbk6xa6YUAa3FT7Jj7pAXx4ze5bEmGFhHy6xVTEDJswYShuLd3
         oXCsr70nlBgeo4xO6yUa4FimRvyUT1cKnZ+ypf29bol4wnfFsqbWnKgItghkw01Ett65
         EnZhGqR3rFV4DnVWovgKN3JcE8Lgu8yYAYAjJTFRysld0F27OHFAHtzkIkuZbY44nrwb
         irfHUk8OgLkxn2vipZz3d2zGa5dJriBJaFaDOErbGNCXOZPLQtQxhqnOWjWyIoAJ5M93
         95t5mV5b3MmSQ2vog6CuziGQzmT4HqBaxuk+XaPNmbfjvkRtpUEiPvtY2oj4DAkqRyHe
         OBSg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=EBCGeHaBTmxyN/DaJmKzht+c6jCnKFesS3Z897W1Kpg=;
        b=bWqzMuTfvlZKeILfnpZ2bmnvjCwMAFyNfaWqYv5pkDd6MG58VtZdj+QNaELfL6FdOs
         2PUhKcPoTE8w1x/XAytG31YEwKbz5FubFRQ1xyrJgstZ3GWeCGrPOFmYIVkUWhtzg4Xl
         ZvEFBj7LSOOVuPZgGoX9ZKua+2tSIfv21bWGZVrDF8UKyk9zSMBpkbHvOd5lF8O5VIUd
         SkRaDgo23sbPMmU9Q4FPuV7iw1UosC0ofKA0pNQ+N28A6TggnliMFS8FUP4wFY0B3BQh
         UdKn2k0clwBETSAnMx7/T0RudKkbNpxN3pDCEutx1was93cT/lbwZ0lCgADE42kReC8r
         0I9Q==
X-Gm-Message-State: AFqh2krqMwFGDZM2aOJjC405vqCWAyb3EpxtfSmaxH8FfIaTHJ6ridqE
	8ybFRllYxdSadObHqkZBnibhls6OcfM=
X-Google-Smtp-Source: AMrXdXumjn8F5BjTIazn7G7h3S0/llhfHrGEVIch0uQ5aP66cd7oCFL/Tv5eNSx0bxDF6y5WjQIWAA==
X-Received: by 2002:a17:906:859:b0:84c:c4a4:61ca with SMTP id f25-20020a170906085900b0084cc4a461camr9478845ejd.61.1672821907069;
        Wed, 04 Jan 2023 00:45:07 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v2 0/8] Make x86 IOMMU driver support configurable
Date: Wed,  4 Jan 2023 10:44:54 +0200
Message-Id: <20230104084502.61734-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series aims to make the x86 IOMMU driver support configurable. Currently,
irrespective of the target platform, both the AMD and the Intel IOMMU drivers
are built, because the existing Kconfig infrastructure does not provide any
facilities for finer-grained configuration.

The series adds two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, that can be
used to build a tailored IOMMU configuration for a given platform.

This version of the series addresses the initial comments made in the RFC
version to facilitate further review of the parts that need more feedback.

Xenia Ragiadakou (8):
  x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
  x86/iommu: amd_iommu_perdev_intremap is AMD-Vi specific
  x86/iommu: iommu_igfx, iommu_qinval and iommu_snoop are VT-d specific
  x86/acpi: separate AMD-Vi and VT-d specific functions
  x86/iommu: the code addressing CVE-2011-1898 is VT-d specific
  x86/iommu: call pi_update_irte through an hvm_function callback
  x86/dpci: move hvm_dpci_isairq_eoi() to generic HVM code
  x86/iommu: make AMD-Vi and Intel VT-d support configurable

 xen/arch/x86/hvm/vmx/vmx.c               | 10 ++++
 xen/arch/x86/include/asm/acpi.h          | 14 ++++++
 xen/arch/x86/include/asm/hvm/hvm.h       | 15 ++++++
 xen/arch/x86/include/asm/hvm/vmx/vmx.h   | 11 ++++
 xen/arch/x86/include/asm/iommu.h         |  5 +-
 xen/arch/x86/pv/hypercall.c              |  2 +
 xen/arch/x86/x86_64/entry.S              |  2 +
 xen/drivers/passthrough/Kconfig          | 24 +++++++++
 xen/drivers/passthrough/Makefile         |  4 +-
 xen/drivers/passthrough/amd/iommu_init.c |  2 +
 xen/drivers/passthrough/iommu.c          |  9 +++-
 xen/drivers/passthrough/vtd/x86/Makefile |  1 -
 xen/drivers/passthrough/vtd/x86/hvm.c    | 64 ------------------------
 xen/drivers/passthrough/x86/hvm.c        | 48 ++++++++++++++++--
 xen/include/xen/acpi.h                   |  3 --
 xen/include/xen/iommu.h                  |  7 ++-
 16 files changed, 141 insertions(+), 80 deletions(-)
 delete mode 100644 xen/drivers/passthrough/vtd/x86/hvm.c

-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470984.730676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOa-0000U7-J7; Wed, 04 Jan 2023 08:45:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470984.730676; Wed, 04 Jan 2023 08:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOa-0000Ty-Fb; Wed, 04 Jan 2023 08:45:12 +0000
Received: by outflank-mailman (input) for mailman id 470984;
 Wed, 04 Jan 2023 08:45:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOY-0008Pe-JR
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:10 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 188b29c4-8c0c-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 09:45:09 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id t17so80776947eju.1
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:09 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 188b29c4-8c0c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=C2lVOOqFcf6BKXv3+gSDIksJ5cf6q+xOsktrfoPl6c8=;
        b=QADj/+r42GTlhqcmiLkt+4z4W4FZNfMPjZCt2fEzAMhFmYXlNRG4AeW5EmHS62VS9C
         deBnM3SUhDLULCartyBVBru9Rvh8jSUoTIzJUi8s+HCpOdNcvXwNp5M6aySNHtEgVHxC
         j+mdUyZcm71ul1fTmrg6Aj3MISpCAHrmSL0DZieRsixKoYIKfktRA6Xi4CUJKlrgWTj/
         Fk/QymbzqFYez7sTp/xE0v5aPg61T3FIiqxaDEl43eiJBV+/oe9HAD4CGQOJ3wsz3Q/W
         VhHd6TnDqqUFHUJ5nFlG1ICvDrS10E6WOM6A3y1u6tPIFenJ+CxA3jjQG4+tfX+ZRUAO
         E+rw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=C2lVOOqFcf6BKXv3+gSDIksJ5cf6q+xOsktrfoPl6c8=;
        b=bMKwckyJgyQjI04+qh/AL4m67kiVYKuNouIXAQZsLVFcjsQJMZN54deKSrEP7yz7eP
         PHR9KD6Vibbs3ZYKNrjyWO3NBrBniyxz45do0xSdVezhBp2NW7wso6JX75lvqVHE1uua
         aUFPaPqEWMEL4F/DXk592LNPzGUVoddeOVd08xZSdFn7yeEqpES15mn/WCjWr5W6GZ7v
         udswNSW2R+N1JZZ7OkiqJa94k+Cx9EMUx/4+W7NtggasVXf0fvNswgTDayb3Bt6xAh9Q
         vGfz7uXH1tWhTgUunvl/EcDrcuowjOtYXo6O/7qsvHZ49FG67UKTy7h7o6IbsCUSp8y6
         jWkw==
X-Gm-Message-State: AFqh2kqlHzlxcE31nuMxw23xO4aC1Ww+b5vobkiBdENoaoV6s7KuBMvH
	7E/nbYocRqnp6QHuoDbUgE5nMRF5zLQ=
X-Google-Smtp-Source: AMrXdXvItxE5hRpoj+ycvRCaI78Q5RBjmYLhM84Rcmus0xMSC+Kut57gjsLwxguo9onjP5q5CnkkUw==
X-Received: by 2002:a17:907:8b89:b0:7c1:6f86:eeb with SMTP id tb9-20020a1709078b8900b007c16f860eebmr36851095ejc.7.1672821909265;
        Wed, 04 Jan 2023 00:45:09 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 2/8] x86/iommu: amd_iommu_perdev_intremap is AMD-Vi specific
Date: Wed,  4 Jan 2023 10:44:56 +0200
Message-Id: <20230104084502.61734-3-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230104084502.61734-1-burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move its definition to the AMD-Vi driver and use CONFIG_AMD_IOMMU
to guard its usage in common code.

Take the opportunity to replace bool_t with bool, __read_mostly with
__ro_after_init and 1 with true.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2:
  - declare amd_iommu_perdev_intremap __ro_after_init
  - use no_config_param() to print a warning when the user sets an AMD_IOMMU
    specific string in the iommu boot parameter and AMD_IOMMU is disabled
  - remove the #ifdef CONFIG_AMD_IOMMU guard from the declaration of
    amd_iommu_perdev_intremap in xen/iommu.h

 xen/drivers/passthrough/amd/iommu_init.c | 2 ++
 xen/drivers/passthrough/iommu.c          | 5 ++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index 1f14aaf49e..9773ccfcb4 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -36,6 +36,8 @@ static struct radix_tree_root ivrs_maps;
 LIST_HEAD_READ_MOSTLY(amd_iommu_head);
 bool_t iommuv2_enabled;
 
+bool __ro_after_init amd_iommu_perdev_intremap = true;
+
 static bool iommu_has_ht_flag(struct amd_iommu *iommu, u8 mask)
 {
     return iommu->ht_flags & mask;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 5e2a720d29..998dfaf20d 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -58,7 +58,6 @@ bool __read_mostly iommu_hap_pt_share = true;
 #endif
 
 bool_t __read_mostly iommu_debug;
-bool_t __read_mostly amd_iommu_perdev_intremap = 1;
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
@@ -116,7 +115,11 @@ static int __init cf_check parse_iommu_param(const char *s)
                 iommu_verbose = 1;
         }
         else if ( (val = parse_boolean("amd-iommu-perdev-intremap", s, ss)) >= 0 )
+#ifdef CONFIG_AMD_IOMMU
             amd_iommu_perdev_intremap = val;
+#else
+            no_config_param("AMD_IOMMU", "amd-iommu-perdev-intremap", s, ss);
+#endif
         else if ( (val = parse_boolean("dom0-passthrough", s, ss)) >= 0 )
             iommu_hwdom_passthrough = val;
         else if ( (val = parse_boolean("dom0-strict", s, ss)) >= 0 )
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470987.730702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOd-00015H-GU; Wed, 04 Jan 2023 08:45:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470987.730702; Wed, 04 Jan 2023 08:45:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOd-00014l-AR; Wed, 04 Jan 2023 08:45:15 +0000
Received: by outflank-mailman (input) for mailman id 470987;
 Wed, 04 Jan 2023 08:45:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOc-0008Pe-2t
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:14 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1aa4181c-8c0c-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 09:45:13 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id u19so80869748ejm.8
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:13 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1aa4181c-8c0c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=HOiaAsPMwgLoSMJypKuCfvLjyhJsFw19vxzxT5oap4g=;
        b=Hd2mClmwGN7qPAVe8jLCBJik7TSu6D1hwPwRA6n95halEdoHOO5XHhWX2UHU6b6QaR
         apkNmStPd4cT1h/6oOJnkb/2bFDjpD+nmAOI39BHYt1vVWTBte6I0MJLrQghx1ZGKZYH
         8FLLbt5b7rEL4BF1k6fB2z7zni+yCr+ZaZgceCAglR7HvjajZ3HZOmmxrNPqPZ+LU0GT
         kw1GvxZdBd2btMegnuFM25MGAaBAcT7isq8D5DSD4tIGTtaG+dvJykEO446bArRtgd14
         euK0j83jYn+YYNEsQoEm5ZKu4sTQXH9HcXbxEdSxRbDJlW3zA0jDLyfElaLsk4WC7okw
         cAbw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=HOiaAsPMwgLoSMJypKuCfvLjyhJsFw19vxzxT5oap4g=;
        b=qhbXC4niQE0bEJMxKLhBskodGjobmlzejTiuJV4czC7QGykh6ttjaAw8/b+FCf0lkZ
         fu1VHE8/f74MElaSd9napSDwr8UQupq9MS+Fa7U0Jv+uZlMML5yVEbVnv/pJBiWLrMJC
         VPscnb9hylW09GPGEG66ZS9Lhrzz42CnyVlB2YHvlN0a6GFGvJcROeLE6ouaBJx/GAeG
         x3oqFueBtv4gCFFWU65JUZMLt24Qf/EWcOsC5wHvcePcySIqKDEmrQBRFpGtLz0lZeyu
         SkIG519e3guWseIO13uk1D0AYKMjpnUZqCgoDloZXOZEuZot6QpSZJSPnif0OJJH3aqg
         TytQ==
X-Gm-Message-State: AFqh2kr8dDdEDK+mpSxWc960O8835l4OfO9hNzRG25w2SvkBe3wtMXNc
	vQun9RtDy8q3MR0wmA6wy49NnX57HTU=
X-Google-Smtp-Source: AMrXdXs5pE7YUspJLpR94+D/sV7BchYm34JJQ0CbAEWxWKKA/p0spj1+jbTzfVoPYDOubEeR4PhlqA==
X-Received: by 2002:a17:906:39d8:b0:847:410:ecff with SMTP id i24-20020a17090639d800b008470410ecffmr34003836eje.16.1672821912776;
        Wed, 04 Jan 2023 00:45:12 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 5/8] x86/iommu: the code addressing CVE-2011-1898 is VT-d specific
Date: Wed,  4 Jan 2023 10:44:59 +0200
Message-Id: <20230104084502.61734-6-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230104084502.61734-1-burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variable untrusted_msi indicates whether the system is vulnerable to
CVE-2011-1898. This vulnerability is VT-d specific.
Place the code that addresses the issue under CONFIG_INTEL_IOMMU.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2:
  - replace CONFIG_INTEL_VTD with CONFIG_INTEL_IOMMU

 xen/arch/x86/include/asm/iommu.h | 2 ++
 xen/arch/x86/pv/hypercall.c      | 2 ++
 xen/arch/x86/x86_64/entry.S      | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/xen/arch/x86/include/asm/iommu.h b/xen/arch/x86/include/asm/iommu.h
index fc0afe35bf..fb5fe4e1bf 100644
--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -127,7 +127,9 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
                            unsigned int flag);
 void iommu_identity_map_teardown(struct domain *d);
 
+#ifdef CONFIG_INTEL_IOMMU
 extern bool untrusted_msi;
+#endif
 
 int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
                    const uint8_t gvec);
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index 2eedfbfae8..9d616a0fc5 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -193,8 +193,10 @@ void pv_ring1_init_hypercall_page(void *p)
 
 void do_entry_int82(struct cpu_user_regs *regs)
 {
+#ifdef CONFIG_INTEL_IOMMU
     if ( unlikely(untrusted_msi) )
         check_for_unexpected_msi((uint8_t)regs->entry_vector);
+#endif
 
     _pv_hypercall(regs, true /* compat */);
 }
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index ae01285181..8f2fb36770 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -406,11 +406,13 @@ ENTRY(int80_direct_trap)
 .Lint80_cr3_okay:
         sti
 
+#ifdef CONFIG_INTEL_IOMMU
         cmpb  $0,untrusted_msi(%rip)
 UNLIKELY_START(ne, msi_check)
         movl  $0x80,%edi
         call  check_for_unexpected_msi
 UNLIKELY_END(msi_check)
+#endif
 
         movq  STACK_CPUINFO_FIELD(current_vcpu)(%rbx), %rbx
 
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470988.730720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOg-0001dd-5g; Wed, 04 Jan 2023 08:45:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470988.730720; Wed, 04 Jan 2023 08:45:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOf-0001dC-Ux; Wed, 04 Jan 2023 08:45:17 +0000
Received: by outflank-mailman (input) for mailman id 470988;
 Wed, 04 Jan 2023 08:45:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOd-0008Pe-LM
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:15 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1b7c2fdb-8c0c-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 09:45:14 +0100 (CET)
Received: by mail-ej1-x631.google.com with SMTP id qk9so80720802ejc.3
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:14 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b7c2fdb-8c0c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Tj9i4k4Po7JyYtPs55wgOJc0qUd+pVBAioCcKA5YDiY=;
        b=Zk+2/cbxitpP46T2xli3DDei5RXHSwU29gjuLZrqJIo5izrT20VbUgQPqJsoEFGTOc
         /W5fc3wv67hCfpm1t12Ko8J8tL1OoAck5uYCQR2rHRD+Y7bobdryX37MXLcS0wGOBinj
         sfmXg6ZATbhiMJUVaOAjzB9CpCFUWM4eatisjFSKochHLue/OZd2USvw144OzRzASLmc
         A2+LucMoBQ8V32eCE2nvpI01CKlieGkI7D71enXT+XQSZ/53iB11tE64UfHLtQlA7w9P
         Pv/+P/8KodQeSpU4MNasky3A6OTEwPnzfs1DPhQrLWke0TvD005GLTE4soIPyEHXtF97
         E+DA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Tj9i4k4Po7JyYtPs55wgOJc0qUd+pVBAioCcKA5YDiY=;
        b=R0D6QB90BGTKCxztadvHO2ADWox0M1pWIPjXtjNWIchfPIt9FWvajgtw9xU0JQZ4Px
         OWktcbtMchdfpmvxslO4NNsY090AgXLbs2XF049xAdPNSr+xKU9f++3MvXKRPrL6c6gk
         wCg16qd11ztWbwmDdeothjsD09wEywfKqWpZlln8+gtCJN5elwsBc/nG5VfdXwefzQZH
         DzQGK9rOsFmgtswFT83LB6cOtiO0UIs2EzaBM6boOgCspDVfA642RF09eDhrEfbStIiu
         IXBfKZBwpEck/0+QNAn1fOvA7fsZug1T0EK4SIfbrsfZfwyIUiyqgDDj9G4vDjo7ybiN
         O3eQ==
X-Gm-Message-State: AFqh2koJAwLtcnLCK288w7AWUjct14CWVQp/mukFLpBCutfiP/GkcsHM
	C8j5CBlel7B9l7SDCZ0mhCb8HPda4SY=
X-Google-Smtp-Source: AMrXdXvYf/Q/hqROtjcI4p728EDt17GV1HrYpbRwl8Wf778r0ZHaKQAZ6Auylm8M+YkwBNQ7HdOJYg==
X-Received: by 2002:a17:906:a186:b0:82d:e2a6:4b0d with SMTP id s6-20020a170906a18600b0082de2a64b0dmr40816443ejy.18.1672821914148;
        Wed, 04 Jan 2023 00:45:14 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2 6/8] x86/iommu: call pi_update_irte through an hvm_function callback
Date: Wed,  4 Jan 2023 10:45:00 +0200
Message-Id: <20230104084502.61734-7-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230104084502.61734-1-burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Posted interrupt support in Xen is currently implemented only for Intel
platforms. Instead of calling pi_update_irte() directly from the common
HVM code, add a pi_update_irte callback to the hvm_function_table.
Then, create a wrapper function hvm_pi_update_irte() to be used by the
common HVM code.

In the pi_update_irte callback prototype, pass the vcpu as the first parameter
instead of the platform-specific posted-interrupt descriptor, and remove the
const qualifier from the gvec parameter, since it is not needed and does not
compile with the alternative code patching in use.

Move the declaration of pi_update_irte() from asm/iommu.h to asm/hvm/vmx/vmx.h
since it is HVM and Intel specific.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2:
  - remove the definition of hvm_pi_update_irte() for !CONFIG_HVM
  - replace CONFIG_INTEL_VTD with CONFIG_INTEL_IOMMU

 xen/arch/x86/hvm/vmx/vmx.c             | 10 ++++++++++
 xen/arch/x86/include/asm/hvm/hvm.h     | 15 +++++++++++++++
 xen/arch/x86/include/asm/hvm/vmx/vmx.h | 11 +++++++++++
 xen/arch/x86/include/asm/iommu.h       |  3 ---
 xen/drivers/passthrough/x86/hvm.c      |  5 ++---
 5 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 43a4865d1c..cb6b325e41 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2143,6 +2143,14 @@ static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
     return pi_test_pir(vec, &v->arch.hvm.vmx.pi_desc);
 }
 
+static int cf_check vmx_pi_update_irte(const struct vcpu *v,
+                                       const struct pirq *pirq, uint8_t gvec)
+{
+    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
+
+    return pi_update_irte(pi_desc, pirq, gvec);
+}
+
 static void cf_check vmx_handle_eoi(uint8_t vector, int isr)
 {
     uint8_t old_svi = set_svi(isr);
@@ -2591,6 +2599,8 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
     .tsc_scaling = {
         .max_ratio = VMX_TSC_MULTIPLIER_MAX,
     },
+
+    .pi_update_irte = vmx_pi_update_irte,
 };
 
 /* Handle VT-d posted-interrupt when VCPU is blocked. */
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 93254651f2..b3fe0663f9 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -28,6 +28,8 @@
 #include <asm/x86_emulate.h>
 #include <asm/hvm/asid.h>
 
+struct pirq; /* needed by pi_update_irte */
+
 #ifdef CONFIG_HVM_FEP
 /* Permit use of the Forced Emulation Prefix in HVM guests */
 extern bool_t opt_hvm_fep;
@@ -250,6 +252,9 @@ struct hvm_function_table {
         /* Architecture function to setup TSC scaling ratio */
         void (*setup)(struct vcpu *v);
     } tsc_scaling;
+
+    int (*pi_update_irte)(const struct vcpu *v,
+                          const struct pirq *pirq, uint8_t gvec);
 };
 
 extern struct hvm_function_table hvm_funcs;
@@ -774,6 +779,16 @@ static inline void hvm_set_nonreg_state(struct vcpu *v,
         alternative_vcall(hvm_funcs.set_nonreg_state, v, nrs);
 }
 
+static inline int hvm_pi_update_irte(const struct vcpu *v,
+                                     const struct pirq *pirq, uint8_t gvec)
+{
+    if ( hvm_funcs.pi_update_irte )
+        return alternative_call(hvm_funcs.pi_update_irte, v, pirq, gvec);
+
+    return -EOPNOTSUPP;
+}
+
+
 #else  /* CONFIG_HVM */
 
 #define hvm_enabled false
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmx.h b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
index 96a9f07ca5..e827fece07 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -146,6 +146,17 @@ static inline void pi_clear_sn(struct pi_desc *pi_desc)
     clear_bit(POSTED_INTR_SN, &pi_desc->control);
 }
 
+#ifdef CONFIG_INTEL_IOMMU
+int pi_update_irte(const struct pi_desc *pi_desc,
+                   const struct pirq *pirq, const uint8_t gvec);
+#else
+static inline int pi_update_irte(const struct pi_desc *pi_desc,
+                                 const struct pirq *pirq, const uint8_t gvec)
+{
+    return -EOPNOTSUPP;
+}
+#endif
+
 /*
  * Exit Reasons
  */
diff --git a/xen/arch/x86/include/asm/iommu.h b/xen/arch/x86/include/asm/iommu.h
index fb5fe4e1bf..b432790d24 100644
--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -131,9 +131,6 @@ void iommu_identity_map_teardown(struct domain *d);
 extern bool untrusted_msi;
 #endif
 
-int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-                   const uint8_t gvec);
-
 extern bool iommu_non_coherent, iommu_superpages;
 
 static inline void iommu_sync_cache(const void *addr, unsigned int size)
diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index a16e0e5344..e720461a14 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -381,8 +381,7 @@ int pt_irq_create_bind(
 
         /* Use interrupt posting if it is supported. */
         if ( iommu_intpost )
-            pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
-                           info, pirq_dpci->gmsi.gvec);
+            hvm_pi_update_irte(vcpu, info, pirq_dpci->gmsi.gvec);
 
         if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
         {
@@ -672,7 +671,7 @@ int pt_irq_destroy_bind(
             what = "bogus";
     }
     else if ( pirq_dpci && pirq_dpci->gmsi.posted )
-        pi_update_irte(NULL, pirq, 0);
+        hvm_pi_update_irte(NULL, pirq, 0);
 
     if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
          list_empty(&pirq_dpci->digl_list) )
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470989.730725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOg-0001he-J4; Wed, 04 Jan 2023 08:45:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470989.730725; Wed, 04 Jan 2023 08:45:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOg-0001fz-BE; Wed, 04 Jan 2023 08:45:18 +0000
Received: by outflank-mailman (input) for mailman id 470989;
 Wed, 04 Jan 2023 08:45:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOe-0008Pe-KN
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:16 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1c29c3f9-8c0c-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 09:45:15 +0100 (CET)
Received: by mail-ej1-x62e.google.com with SMTP id kw15so80791873ejc.10
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:15 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c29c3f9-8c0c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tqo5/tDlicPu5mCxcbh6Z6IjcPc3BV3fslYWB3OSArs=;
        b=LePRpHzDhHQjRWbDbuAHMYF8Ya6h/Nu3+/MhJyIyz06QAqOF1EVetuhx8mCgZMC887
         kH7P9lgSIwn4eWBO93tmf01OzBKP4dDKVIzeOr/DpfVmuLWgRGIvYMUtgTa7YisgLZgP
         hn4oemUODuBnAQ5fxvN+mIWNSILJQwDbX1t+FNyMouEeOdIpJlDG/9kQdAlAsx80UlJL
         jTl1vRnFgs+K9AUDdy5LxW2dtzxuLLDtfPFyv45+htKGvvluZ6coAPK+sH8X0xLuVXjg
         UdP7pHR2LaOY4Om4f1/aSV5nvuxO0okrXOk2pJ/uSDKf7tvvuqVZZc3JdHrSVpAS7xzL
         yl4g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tqo5/tDlicPu5mCxcbh6Z6IjcPc3BV3fslYWB3OSArs=;
        b=3RCtZm2Kzdh1cL0CLDS7L1jN/eOCXAthuVTJlyTy8YpW94x+Lv/8IhDYau+gKUpocY
         55oTEfZMEcTYPRFx8aIqs5uZXWTN387uWzzBqQAyoJHAS3kk73L26WwIgdRnFwyXjBDS
         WQ2ndI9cOlM+isgbM7/GODZzKnd7Coiufs/emP8KyvNZG6YTI5FOYavD23X8ZCp7m/Bf
         WxUU6hixUDLhcKx0z2xB7cBrCZCqtXjzyuuj87LVZPovlAdZsCU8NUDaN+bqCP4pQkqY
         VBJYl4Nv5vWBVDp442llz3ZWej40bvJK3NG+QAfCUxRi0TrEHI5Up8uO8JJHKph4Mij3
         cJFA==
X-Gm-Message-State: AFqh2kqCh9TZv7AtNxZ8VZf5kWsFO883dmNqY96J8zpgvDz72Sb1G/IO
	87pQ5//ssTfCd6fGx5mB5bgLX3BXKzc=
X-Google-Smtp-Source: AMrXdXs5y3cNVYlCIU2x42F91fpieCpDGRWVHMwTv5YUoJffv5zEYYp91gCsYfg7Tg8x1q5qUj4dZw==
X-Received: by 2002:a17:907:cb85:b0:7c0:f216:cc14 with SMTP id un5-20020a170907cb8500b007c0f216cc14mr40673315ejc.11.1672821915341;
        Wed, 04 Jan 2023 00:45:15 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 7/8] x86/dpci: move hvm_dpci_isairq_eoi() to generic HVM code
Date: Wed,  4 Jan 2023 10:45:01 +0200
Message-Id: <20230104084502.61734-8-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230104084502.61734-1-burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function hvm_dpci_isairq_eoi() has no dependencies on VT-d driver code
and can be moved from xen/drivers/passthrough/vtd/x86/hvm.c to
xen/drivers/passthrough/x86/hvm.c, along with the corresponding copyrights.

Remove the now empty xen/drivers/passthrough/vtd/x86/hvm.c.

Since the function is used only in this file, declare it static.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2:
  - update the copyright
  - declare hvm_dpci_isairq_eoi() static

 xen/drivers/passthrough/vtd/x86/Makefile |  1 -
 xen/drivers/passthrough/vtd/x86/hvm.c    | 64 ------------------------
 xen/drivers/passthrough/x86/hvm.c        | 43 ++++++++++++++++
 xen/include/xen/iommu.h                  |  1 -
 4 files changed, 43 insertions(+), 66 deletions(-)
 delete mode 100644 xen/drivers/passthrough/vtd/x86/hvm.c

diff --git a/xen/drivers/passthrough/vtd/x86/Makefile b/xen/drivers/passthrough/vtd/x86/Makefile
index 4ef00a4c5b..fe20a0b019 100644
--- a/xen/drivers/passthrough/vtd/x86/Makefile
+++ b/xen/drivers/passthrough/vtd/x86/Makefile
@@ -1,3 +1,2 @@
 obj-y += ats.o
-obj-$(CONFIG_HVM) += hvm.o
 obj-y += vtd.o
diff --git a/xen/drivers/passthrough/vtd/x86/hvm.c b/xen/drivers/passthrough/vtd/x86/hvm.c
deleted file mode 100644
index bc776cf7da..0000000000
--- a/xen/drivers/passthrough/vtd/x86/hvm.c
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Copyright (c) 2008, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; If not, see <http://www.gnu.org/licenses/>.
- *
- * Copyright (C) Allen Kay <allen.m.kay@intel.com>
- * Copyright (C) Weidong Han <weidong.han@intel.com>
- */
-
-#include <xen/iommu.h>
-#include <xen/irq.h>
-#include <xen/sched.h>
-
-static int cf_check _hvm_dpci_isairq_eoi(
-    struct domain *d, struct hvm_pirq_dpci *pirq_dpci, void *arg)
-{
-    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
-    unsigned int isairq = (long)arg;
-    const struct dev_intx_gsi_link *digl;
-
-    list_for_each_entry ( digl, &pirq_dpci->digl_list, list )
-    {
-        unsigned int link = hvm_pci_intx_link(digl->device, digl->intx);
-
-        if ( hvm_irq->pci_link.route[link] == isairq )
-        {
-            hvm_pci_intx_deassert(d, digl->device, digl->intx);
-            if ( --pirq_dpci->pending == 0 )
-                pirq_guest_eoi(dpci_pirq(pirq_dpci));
-        }
-    }
-
-    return 0;
-}
-
-void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq)
-{
-    struct hvm_irq_dpci *dpci = NULL;
-
-    ASSERT(isairq < NR_ISAIRQS);
-    if ( !is_iommu_enabled(d) )
-        return;
-
-    write_lock(&d->event_lock);
-
-    dpci = domain_get_irq_dpci(d);
-
-    if ( dpci && test_bit(isairq, dpci->isairq_map) )
-    {
-        /* Multiple mirq may be mapped to one isa irq */
-        pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
-    }
-    write_unlock(&d->event_lock);
-}
diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index e720461a14..5043ecab9c 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -14,6 +14,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  *
  * Copyright (C) Allen Kay <allen.m.kay@intel.com>
+ * Copyright (C) Weidong Han <weidong.han@intel.com>
  * Copyright (C) Xiaohui Xin <xiaohui.xin@intel.com>
  */
 
@@ -924,6 +925,48 @@ static void hvm_gsi_eoi(struct domain *d, unsigned int gsi)
     hvm_pirq_eoi(pirq);
 }
 
+static int cf_check _hvm_dpci_isairq_eoi(
+    struct domain *d, struct hvm_pirq_dpci *pirq_dpci, void *arg)
+{
+    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
+    unsigned int isairq = (long)arg;
+    const struct dev_intx_gsi_link *digl;
+
+    list_for_each_entry ( digl, &pirq_dpci->digl_list, list )
+    {
+        unsigned int link = hvm_pci_intx_link(digl->device, digl->intx);
+
+        if ( hvm_irq->pci_link.route[link] == isairq )
+        {
+            hvm_pci_intx_deassert(d, digl->device, digl->intx);
+            if ( --pirq_dpci->pending == 0 )
+                pirq_guest_eoi(dpci_pirq(pirq_dpci));
+        }
+    }
+
+    return 0;
+}
+
+static void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq)
+{
+    struct hvm_irq_dpci *dpci = NULL;
+
+    ASSERT(isairq < NR_ISAIRQS);
+    if ( !is_iommu_enabled(d) )
+        return;
+
+    write_lock(&d->event_lock);
+
+    dpci = domain_get_irq_dpci(d);
+
+    if ( dpci && test_bit(isairq, dpci->isairq_map) )
+    {
+        /* Multiple mirq may be mapped to one isa irq */
+        pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
+    }
+    write_unlock(&d->event_lock);
+}
+
 void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi)
 {
     const struct hvm_irq_dpci *hvm_irq_dpci;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index aa924541d5..f08bc2a4a5 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -200,7 +200,6 @@ int hvm_do_IRQ_dpci(struct domain *, struct pirq *);
 int pt_irq_create_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
 int pt_irq_destroy_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
 
-void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq);
 struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *);
 void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
 
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 08:45:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 08:45:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.470990.730742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOi-0002Fb-QC; Wed, 04 Jan 2023 08:45:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 470990.730742; Wed, 04 Jan 2023 08:45:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzOi-0002FP-Lr; Wed, 04 Jan 2023 08:45:20 +0000
Received: by outflank-mailman (input) for mailman id 470990;
 Wed, 04 Jan 2023 08:45:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfqB=5B=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pCzOg-0008Pf-OL
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 08:45:18 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1cc9b85c-8c0c-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 09:45:17 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id ud5so81013310ejc.4
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 00:45:17 -0800 (PST)
Received: from uni.router.wind (adsl-57.109.242.233.tellas.gr.
 [109.242.233.57]) by smtp.googlemail.com with ESMTPSA id
 k22-20020a170906129600b007c10fe64c5dsm15016382ejb.86.2023.01.04.00.45.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 00:45:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1cc9b85c-8c0c-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=DXOzrCrP8CVhczjYbqhFp6hVHdTDovyn5H+grp051f4=;
        b=qeP7xCjifPj0QarjFgSOGJRU4qkTv1zwQ4edK6rTI+eSwqKg+JufxW8DXzH8md0vUK
         OOfmwgDfA2698SQzUCoM/5O1c3rlhV5lq3nsDZqtyGLrjNzJoKO+o5QRpQcxNdgUjSck
         8pz7Y3YcWzIGlbpLWFWTq9L5GXNOVcK1wWg9EyGkQypzwYLtk824AZKDhIMFNHbzaWnZ
         68ftp/Tmkt+E39dxup49sKqm1JK2p6me2q8IqsL1sof2c5x7UUqzdDNDtMhUXv64nE9D
         UzQabCUIHGG6hblKTyrWFfaWVz/ZO15PUiP+3ZAgHUKueM8BE57SN7F8Qr+4CmQjO7l1
         Y0Vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=DXOzrCrP8CVhczjYbqhFp6hVHdTDovyn5H+grp051f4=;
        b=VK+RU3jJgk7Fahz518Wt6xzeIbT1p5HoHQiqi52dWagfvUP66BXivZ/Y3D50WvvpMa
         6erQU8nTMARf6ozWTTGqgPR0mgTAmYIaviDaTh6Ko5B61WMtgoOEsy5eiwL5mjM6OJVa
         ApqUrF4qxhjP3TNO1DQdR8bdlKGuIumI3X0i32wHAbm8i/Nc99yJ+g7BQUXjOvy2gv1D
         tU+e+v8lCixSptuRdOSUbbVXXbRNypXe42DihPYm0Mo7dxhSOvLnJYK/Qz2SQmVGeK6e
         uawVOdmIPwUw4yf4tDDk60hioMMr9yhQU6rHCw4N5Qm6vXY2FsHXdDp96IJMETVv2c3q
         twtg==
X-Gm-Message-State: AFqh2kqNCxHeXJVaqQ3a3bGQSm+P806m2K0IP/OHHJXv8Tz8Yl52bFeR
	dNQbl1oFfDjGhdbBWQQuA/Nu8fU5r7k=
X-Google-Smtp-Source: AMrXdXubp/bxwGlLu0FavJ4su5iF17RcY+AXjEy9frY7aW1SYFUeDfC4LNrgH7JgZHj4KH1yH9TQ+w==
X-Received: by 2002:a17:907:d389:b0:7c1:5a37:825 with SMTP id vh9-20020a170907d38900b007c15a370825mr52798189ejc.34.1672821916328;
        Wed, 04 Jan 2023 00:45:16 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 8/8] x86/iommu: make AMD-Vi and Intel VT-d support configurable
Date: Wed,  4 Jan 2023 10:45:02 +0200
Message-Id: <20230104084502.61734-9-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230104084502.61734-1-burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Provide the user with configuration control over the IOMMU support by making
the AMD_IOMMU and INTEL_IOMMU options user-selectable, so they can be turned off.

However, there are cases where IOMMU support is required, for instance on
a system with more than 254 CPUs. In order to prevent users from unknowingly
disabling it and ending up with a broken hypervisor, make the support user
selectable only if EXPERT is enabled.

To preserve the current default configuration of an x86 system, both options
depend on X86 and default to Y.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v2:
  - new patch, derived from patch 1/7 of v1
  - replace --help-- with help
  - make the options visible if EXPERT is enabled
  - indicate the cases where the options need to be enabled

 xen/drivers/passthrough/Kconfig | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 5c65567744..864fcf3b0c 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -38,10 +38,28 @@ config IPMMU_VMSA
 endif
 
 config AMD_IOMMU
-	def_bool y if X86
+	bool "AMD IOMMU" if EXPERT
+	depends on X86
+	default y
+	help
+	  Enables I/O virtualization on platforms that implement the
+	  AMD I/O Virtualization Technology (IOMMU).
+
+	  If your system includes an IOMMU implementing AMD-Vi, say Y.
+	  This is required if your system has more than 254 CPUs.
+	  If in doubt, say Y.
 
 config INTEL_IOMMU
-	def_bool y if X86
+	bool "Intel VT-d" if EXPERT
+	depends on X86
+	default y
+	help
+	  Enables I/O virtualization on platforms that implement the
+	  Intel Virtualization Technology for Directed I/O (Intel VT-d).
+
+	  If your system includes an IOMMU implementing Intel VT-d, say Y.
+	  This is required if your system has more than 254 CPUs.
+	  If in doubt, say Y.
 
 config IOMMU_FORCE_PT_SHARE
 	bool
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 09:03:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 09:03:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471058.730752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzft-00072Q-PO; Wed, 04 Jan 2023 09:03:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471058.730752; Wed, 04 Jan 2023 09:03:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzft-00072J-MG; Wed, 04 Jan 2023 09:03:05 +0000
Received: by outflank-mailman (input) for mailman id 471058;
 Wed, 04 Jan 2023 09:03:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzfs-000729-Em; Wed, 04 Jan 2023 09:03:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzfs-00008R-Bz; Wed, 04 Jan 2023 09:03:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzfr-0007tV-Qb; Wed, 04 Jan 2023 09:03:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzfr-0004Yp-Q8; Wed, 04 Jan 2023 09:03:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=v2Krd+m8F0I7k4aHmYO02GgmL7VmRQ/F9KS0hJv0Tfg=; b=xyBrLBja3ipF5ihjW6ofK7Nv4j
	i55JZ4DTeBe2ex8RFXp6FsgmokIdgg1Fb4b0opsLVQ8tUB6wXPaGaVRgqwM0Rw3hIk3uFHKoRjAFi
	LaBNau5+MIBCptCZnIY9cet8Ng9Xb2o0NimfbR/Kg7jd7IkEYUXGHkB+SCRknCoBz4GQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175561-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175561: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=69b41ac87e4a664de78a395ff97166f0b2943210
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 09:03:03 +0000

flight 175561 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175561/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                69b41ac87e4a664de78a395ff97166f0b2943210
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   88 days
Failing since        173470  2022-10-08 06:21:34 Z   88 days  183 attempts
Testing same since   175552  2023-01-02 21:13:23 Z    1 days    4 attempts

------------------------------------------------------------
3257 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 497265 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 09:03:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 09:03:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471068.730764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzgl-0007cR-8q; Wed, 04 Jan 2023 09:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471068.730764; Wed, 04 Jan 2023 09:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzgl-0007cK-5O; Wed, 04 Jan 2023 09:03:59 +0000
Received: by outflank-mailman (input) for mailman id 471068;
 Wed, 04 Jan 2023 09:03:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pCzgj-0007c8-Rl
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 09:03:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCzgj-00008o-2o; Wed, 04 Jan 2023 09:03:57 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pCzgi-0007ac-Tt; Wed, 04 Jan 2023 09:03:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=E5QF3ZIYhIqtza+uUT60SAKBwRufJQsVMuv2YnaYHQ8=; b=VvcM8sPcG29hG5Jox/rEHnB177
	Pls2nM4qVrVH6uGURQk4eWL9L7sXOTkSshSZx8F9lt8+B6mfjC+I1/tCfi2V14Fj5W6NncFgy4bke
	PW7uTsyqNsrvCgDfUGk9gZbWmQSmD/RKvBdgJQLZhNYETDom45TeuOFWlnsTMRKsEk7g=;
Message-ID: <11432e22-94cd-c956-63dc-d45c419150f5@xen.org>
Date: Wed, 4 Jan 2023 09:03:55 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 1/4] public/version: Change xen_feature_info to have a
 fixed size
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Jan Beulich <JBeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-2-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230103200943.5801-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 03/01/2023 20:09, Andrew Cooper wrote:
> This is technically an ABI change, but Xen doesn't operate in any environment
> where "unsigned int" is differnet to uint32_t, so switch to the explicit form.

typo: s/differnet/different/

> This avoids the need to derive (identical) compat logic for handling the
> subop.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> ---
>   xen/include/public/version.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/include/public/version.h b/xen/include/public/version.h
> index 9c78b4f3b6a4..0ff8bd9077c6 100644
> --- a/xen/include/public/version.h
> +++ b/xen/include/public/version.h
> @@ -50,7 +50,7 @@ typedef struct xen_platform_parameters xen_platform_parameters_t;
>   
>   #define XENVER_get_features 6
>   struct xen_feature_info {
> -    unsigned int submap_idx;    /* IN: which 32-bit submap to return */
> +    uint32_t     submap_idx;    /* IN: which 32-bit submap to return */
>       uint32_t     submap;        /* OUT: 32-bit submap */
>   };
>   typedef struct xen_feature_info xen_feature_info_t;

-- 
Julien Grall
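
[Archive note: the commit message above argues the field-type switch is safe because Xen never runs where "unsigned int" differs from uint32_t. A minimal compile-time sketch of that claim follows; it is not part of the patch, and the struct names with _old/_new suffixes are hypothetical, introduced only for illustration.]

```c
/* Sketch: verify at compile time that swapping "unsigned int" for
 * uint32_t leaves the structure layout (and hence the ABI) unchanged
 * on a platform where unsigned int is 32 bits wide.
 * Hypothetical _old/_new names; not part of the actual patch. */
#include <assert.h>   /* static_assert (C11) */
#include <stdint.h>

struct xen_feature_info_old {
    unsigned int submap_idx;    /* IN: which 32-bit submap to return */
    uint32_t     submap;        /* OUT: 32-bit submap */
};

struct xen_feature_info_new {
    uint32_t     submap_idx;    /* IN: which 32-bit submap to return */
    uint32_t     submap;        /* OUT: 32-bit submap */
};

/* The commit message's premise: unsigned int is exactly 32-bit here. */
static_assert(sizeof(unsigned int) == sizeof(uint32_t),
              "ABI assumption: unsigned int is 32-bit");

/* Consequence: the type switch cannot change size or field offsets. */
static_assert(sizeof(struct xen_feature_info_old) ==
              sizeof(struct xen_feature_info_new),
              "layout unchanged by the type switch");
```

If either assertion failed on some toolchain, the build would stop there rather than silently producing a different hypercall ABI, which is why the explicit fixed-width form is preferable in a public interface header.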


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 09:20:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 09:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471075.730775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzwW-0001bw-J2; Wed, 04 Jan 2023 09:20:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471075.730775; Wed, 04 Jan 2023 09:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pCzwW-0001bp-Fk; Wed, 04 Jan 2023 09:20:16 +0000
Received: by outflank-mailman (input) for mailman id 471075;
 Wed, 04 Jan 2023 09:20:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzwV-0001bf-EE; Wed, 04 Jan 2023 09:20:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzwV-0000cN-Bl; Wed, 04 Jan 2023 09:20:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzwU-0008Ln-VW; Wed, 04 Jan 2023 09:20:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pCzwU-0000lZ-V4; Wed, 04 Jan 2023 09:20:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nYteTN+bVQACOqSlCkhq/HvlpTFd4l/wJ/H0DoxSeJ8=; b=rmPLt24NbOKyWPheQaid4JNfAF
	L4K+GmI5Q4WwXU9vpam1AYks9lxcKJN2tInlKsgi6/kTj59aV+crzJUBwRU4OGZ2md4WgoEGs23Rz
	fOnpuc39zdaisRJUFX6zeggZXah+Jan5Z75u0Z+IOHOXesf8wZ3Ntm8kl6VL3sVOQlY0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175564-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175564: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=9193bac260620fbeec580d709699b3e0f97b9bd7
X-Osstest-Versions-That:
    libvirt=35afa1d2d6c10ce993c60caea1efe1c589fa1d5d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 09:20:14 +0000

flight 175564 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175564/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175555
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175555
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175555
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              9193bac260620fbeec580d709699b3e0f97b9bd7
baseline version:
 libvirt              35afa1d2d6c10ce993c60caea1efe1c589fa1d5d

Last test of basis   175555  2023-01-03 04:20:26 Z    1 days
Testing same since   175564  2023-01-04 04:20:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ettore Atalan <atalanttore@googlemail.com>
  Gedalya <gedalya@gedalya.net>
  Jan Kuparinen <copper_fin@hotmail.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   35afa1d2d6..9193bac260  9193bac260620fbeec580d709699b3e0f97b9bd7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 09:57:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 09:57:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471084.730786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD0W0-00050r-Bv; Wed, 04 Jan 2023 09:56:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471084.730786; Wed, 04 Jan 2023 09:56:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD0W0-00050k-89; Wed, 04 Jan 2023 09:56:56 +0000
Received: by outflank-mailman (input) for mailman id 471084;
 Wed, 04 Jan 2023 09:56:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD0Vz-00050a-9H; Wed, 04 Jan 2023 09:56:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD0Vy-0001Vh-VX; Wed, 04 Jan 2023 09:56:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD0Vy-0000lr-D5; Wed, 04 Jan 2023 09:56:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pD0Vy-0007HX-CZ; Wed, 04 Jan 2023 09:56:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TYXYHR4YwoG3jJWyaCO/sm230k9pfvnmXYXkRTtOX1E=; b=21lc8UIHDfybeh36dpryLG3eKX
	wmBHfiIhqBM5pCIL2CMQP983Yxkg1MJ3eIdt+opqNLST0xx9kFZcXlrl4YZtw4JT3uIHa/gGkmwZ8
	dyKH4C8cDqN0Gh0VEnjmPlko4NwcLx1ac9D4DkQSskzEy8iQ7vjVlAbCMknfNIRyJGhE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175562-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175562: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:kernel-build:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-i386-livepatch:xen-install:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-vhd:debian-di-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:heisenbug
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
X-Osstest-Versions-That:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 09:56:54 +0000

flight 175562 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175562/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair    <job status>                 broken in 175548
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 175554

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken in 175548 pass in 175562
 test-amd64-i386-livepatch     7 xen-install      fail in 175548 pass in 175562
 test-arm64-arm64-xl-vhd     12 debian-di-install fail in 175554 pass in 175548
 test-amd64-i386-libvirt 20 guest-start/debian.repeat fail in 175554 pass in 175562
 test-amd64-i386-qemut-rhel6hvm-amd 15 guest-start.2 fail in 175554 pass in 175562
 test-amd64-i386-libvirt-pair 10 xen-install/src_host       fail pass in 175554

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 175548 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 175548 never pass
 test-amd64-i386-freebsd10-amd64  7 xen-install      fail in 175554 like 175548
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail in 175554 never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail in 175554 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 175554 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 175554 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 175554 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 175554 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 175554 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 175554 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 175554 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 175554 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 175554 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 175554 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 175554 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 175554 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 175554 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 175554 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175554
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175554
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175554
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host         fail  like 175554
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175554
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175554
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175554
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175554
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175554
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175554
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175554
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175554
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175554
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8
baseline version:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8

Last test of basis   175562  2023-01-04 01:55:19 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-libvirt-pair broken

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 10:23:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 10:23:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471094.730797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD0vV-0008S8-LF; Wed, 04 Jan 2023 10:23:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471094.730797; Wed, 04 Jan 2023 10:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD0vV-0008S1-Hc; Wed, 04 Jan 2023 10:23:17 +0000
Received: by outflank-mailman (input) for mailman id 471094;
 Wed, 04 Jan 2023 10:23:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+XhT=5B=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pD0vU-0008Rv-27
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 10:23:16 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2057.outbound.protection.outlook.com [40.107.20.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ca8f7dde-8c19-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 11:23:12 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8074.eurprd04.prod.outlook.com (2603:10a6:10:24a::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 10:23:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 10:23:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca8f7dde-8c19-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ade9f97d-aa28-bd7e-552c-35bd707bab29@suse.com>
Date: Wed, 4 Jan 2023 11:23:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 05/22] x86/srat: vmap the pages for acpi_slit
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-6-julien@xen.org>
 <ca02a313-0fa2-8041-3e8f-d467c3e99fb6@suse.com>
 <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0108.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8074:EE_
X-MS-Office365-Filtering-Correlation-Id: a2d52c6e-e7fb-4996-ae12-08daee3dadc3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a2d52c6e-e7fb-4996-ae12-08daee3dadc3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jan 2023 10:23:10.3919
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WoAl85N9MhwIKw8rWZZ17GRhGbByxtFt0vAbjMh33O2k/TjneAGBigAIQelWuQ8pp/cj0RNrusSMTEwZvR2t2w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8074

On 23.12.2022 12:31, Julien Grall wrote:
> On 20/12/2022 15:30, Jan Beulich wrote:
>> On 16.12.2022 12:48, Julien Grall wrote:
>>> From: Hongyan Xia <hongyxia@amazon.com>
>>>
>>> This avoids the assumption that boot pages are in the direct map.
>>>
>>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> However, ...
>>
>>> --- a/xen/arch/x86/srat.c
>>> +++ b/xen/arch/x86/srat.c
>>> @@ -139,7 +139,8 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
>>>   		return;
>>>   	}
>>>   	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>> -	acpi_slit = mfn_to_virt(mfn_x(mfn));
>>> +	acpi_slit = vmap_contig_pages(mfn, PFN_UP(slit->header.length));
>>
>> ... with the increased use of vmap space the VA range used will need
>> growing. And that's perhaps better done ahead of time than late.
> 
> I will have a look at increasing the vmap area.
> 
>>
>>> +	BUG_ON(!acpi_slit);
>>
>> Similarly relevant for the earlier patch: It would be nice if boot
>> failure for optional things like NUMA data could be avoided.
> 
> If you can't map (or allocate) the memory, then you are probably in a 
> very bad situation, because neither should really fail at boot.
> 
> So I think it is correct to crash early, because the admin will be able 
> to see what went wrong. Otherwise, it may be lost in the noise.

Well, I certainly can see one taking this view. However, at least in
principle allocation (or mapping) may fail _because_ of NUMA issues.
At which point it would be better to boot with NUMA support turned off.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 10:27:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 10:27:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471101.730808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD0zB-0000ck-4w; Wed, 04 Jan 2023 10:27:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471101.730808; Wed, 04 Jan 2023 10:27:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD0zB-0000cd-21; Wed, 04 Jan 2023 10:27:05 +0000
Received: by outflank-mailman (input) for mailman id 471101;
 Wed, 04 Jan 2023 10:27:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+XhT=5B=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pD0z9-0000cS-SM
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 10:27:03 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2054.outbound.protection.outlook.com [40.107.20.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 542d4b2a-8c1a-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 11:27:02 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8074.eurprd04.prod.outlook.com (2603:10a6:10:24a::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 10:27:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 10:27:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 542d4b2a-8c1a-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tx015fr3bjIhPCp331DUzfxXuxihO+fi9PQYrtpxUYk=;
 b=5jTSAepkHFrBJXxUZjb4a1iM44PJAXD+7iIrku5nIyN4SygtTuXeXSXsWtPZqy1SOWi391CIf+oIqt7EfQITFQnF+gCXT5FCcWRBFsxIPmnAo/UzQodS+1Gv3aioFse/VhiI/FOH5UPyRbggWUBFDLWSmRyi+B95G0cxlDC5BnJKoSB6LPSm2Lgm4iTcGXMazFJXeO1DaclAWdwZEW61El2TcEMh00GHjkKrrIqcEcYuXJCE3ykd0/yutgcomOLwGbpZcwMEeC0WhKmIrBtMP+w8l0rxrtsDqU0vMUljQtnZMZxOCC5pO6XWOtjSZGSopbiVMTgCBZVtVxNGhbWAig==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <01584e11-36ca-7836-85ad-bba9351af46e@suse.com>
Date: Wed, 4 Jan 2023 11:27:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 06/22] x86: map/unmap pages in restore_all_guests
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-7-julien@xen.org>
 <478e04bc-6ff7-de01-dfb9-55d579228152@suse.com>
 <f84d30cb-e743-60f8-a496-603323b79f37@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f84d30cb-e743-60f8-a496-603323b79f37@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 23.12.2022 13:22, Julien Grall wrote:
> Hi,
> 
> On 22/12/2022 11:12, Jan Beulich wrote:
>> On 16.12.2022 12:48, Julien Grall wrote:
>>> --- a/xen/arch/x86/x86_64/entry.S
>>> +++ b/xen/arch/x86/x86_64/entry.S
>>> @@ -165,7 +165,24 @@ restore_all_guest:
>>>           and   %rsi, %rdi
>>>           and   %r9, %rsi
>>>           add   %rcx, %rdi
>>> -        add   %rcx, %rsi
>>> +
>>> +         /*
>>> +          * Without a direct map, we have to map first before copying. We only
>>> +          * need to map the guest root table but not the per-CPU root_pgt,
>>> +          * because the latter is still a xenheap page.
>>> +          */
>>> +        pushq %r9
>>> +        pushq %rdx
>>> +        pushq %rax
>>> +        pushq %rdi
>>> +        mov   %rsi, %rdi
>>> +        shr   $PAGE_SHIFT, %rdi
>>> +        callq map_domain_page
>>> +        mov   %rax, %rsi
>>> +        popq  %rdi
>>> +        /* Stash the pointer for unmapping later. */
>>> +        pushq %rax
>>> +
>>>           mov   $ROOT_PAGETABLE_FIRST_XEN_SLOT, %ecx
>>>           mov   root_table_offset(SH_LINEAR_PT_VIRT_START)*8(%rsi), %r8
>>>           mov   %r8, root_table_offset(SH_LINEAR_PT_VIRT_START)*8(%rdi)
>>> @@ -177,6 +194,14 @@ restore_all_guest:
>>>           sub   $(ROOT_PAGETABLE_FIRST_XEN_SLOT - \
>>>                   ROOT_PAGETABLE_LAST_XEN_SLOT - 1) * 8, %rdi
>>>           rep movsq
>>> +
>>> +        /* Unmap the page. */
>>> +        popq  %rdi
>>> +        callq unmap_domain_page
>>> +        popq  %rax
>>> +        popq  %rdx
>>> +        popq  %r9
>>
>> While the PUSH/POP are part of what I dislike here, I think this wants
>> doing differently: Establish a mapping when putting in place a new guest
>> page table, and use the pointer here. This could be a new per-domain
>> mapping, to limit its visibility.
> 
> I have looked at a per-domain approach and this looks way more complex 
> than the few concise lines here (not to mention the extra amount of 
> memory).

Yes, I do understand that would be a more intrusive change.

> So I am not convinced this is worth the effort here.
> 
> I don't have another approach in mind. So are you disliking this 
> approach to the point this will be nacked?

I guess I wouldn't nack it, but I also wouldn't provide an ack. I'm curious
what Andrew or Roger think here...

Jan
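The sequence under discussion — temporarily map the guest root table, copy a window of L4 entries, unmap again — can be modelled in C. This is a toy sketch, not Xen code: `map_guest_root`/`unmap_guest_root` and the slot constants are stand-ins for `map_domain_page()`/`unmap_domain_page()` and the `ROOT_PAGETABLE_*_XEN_SLOT` arithmetic, and the real copy in restore_all_guest is done in assembly with `rep movsq`:

```c
#include <stdint.h>
#include <string.h>

#define L4_ENTRIES      512
#define FIRST_XEN_SLOT  256   /* stand-in for ROOT_PAGETABLE_FIRST_XEN_SLOT */
#define LAST_XEN_SLOT   271   /* stand-in for ROOT_PAGETABLE_LAST_XEN_SLOT */

/*
 * Stand-in for map_domain_page(): without a direct map, the guest's
 * root table must be given a temporary virtual mapping before it can
 * be read.  Here the "mapping" is just the pointer itself.
 */
static uint64_t *map_guest_root(uint64_t *guest_root_page)
{
    return guest_root_page;
}

static void unmap_guest_root(uint64_t *va)
{
    (void)va;  /* the real unmap_domain_page() tears the mapping down */
}

/*
 * Copy a range of L4 entries from the (temporarily mapped) guest root
 * table into the per-CPU root_pgt, mirroring the map/copy/unmap shape
 * of the patched restore_all_guest path.
 */
static void sync_xen_slots(uint64_t *cpu_root, uint64_t *guest_root_page)
{
    uint64_t *src = map_guest_root(guest_root_page);

    memcpy(&cpu_root[FIRST_XEN_SLOT], &src[FIRST_XEN_SLOT],
           (LAST_XEN_SLOT - FIRST_XEN_SLOT + 1) * sizeof(uint64_t));

    unmap_guest_root(src);
}
```

The per-domain alternative Jan suggests would keep such a mapping established permanently when the page table is installed, removing the map/unmap pair from this hot path.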


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 11:12:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 11:12:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471109.730830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD1gk-000622-Qo; Wed, 04 Jan 2023 11:12:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471109.730830; Wed, 04 Jan 2023 11:12:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD1gk-00061v-No; Wed, 04 Jan 2023 11:12:06 +0000
Received: by outflank-mailman (input) for mailman id 471109;
 Wed, 04 Jan 2023 11:12:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD1gj-0005kw-AA
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 11:12:05 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9d30ff2d-8c20-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 12:12:04 +0100 (CET)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 0/2] x86: Work around Shstk fracturing
Date: Wed, 4 Jan 2023 11:11:44 +0000
Message-ID: <20230104111146.2094-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

See patch 2 for details.

Andrew Cooper (2):
  x86/cpuid: Infrastructure for leaves 7:1{ecx,edx}
  x86/shskt: Disable CET-SS on parts susceptible to fractured updates

 docs/misc/xen-command-line.pandoc           |  7 ++++-
 tools/libs/light/libxl_cpuid.c              |  2 ++
 tools/misc/xen-cpuid.c                      | 11 +++++++
 xen/arch/x86/cpu/common.c                   | 14 +++++++--
 xen/arch/x86/setup.c                        | 46 ++++++++++++++++++++++++-----
 xen/include/public/arch-x86/cpufeatureset.h |  4 +++
 xen/include/xen/lib/x86/cpuid.h             | 15 +++++++++-
 7 files changed, 86 insertions(+), 13 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 11:12:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 11:12:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471108.730818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD1ge-0005l9-DR; Wed, 04 Jan 2023 11:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471108.730818; Wed, 04 Jan 2023 11:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD1ge-0005l2-Ak; Wed, 04 Jan 2023 11:12:00 +0000
Received: by outflank-mailman (input) for mailman id 471108;
 Wed, 04 Jan 2023 11:11:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD1gd-0005kw-LH
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 11:11:59 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9962cbb2-8c20-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 12:11:57 +0100 (CET)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] x86/shskt: Disable CET-SS on parts susceptible to fractured updates
Date: Wed, 4 Jan 2023 11:11:46 +0000
Message-ID: <20230104111146.2094-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230104111146.2094-1-andrew.cooper3@citrix.com>
References: <20230104111146.2094-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Refer to Intel SDM Rev 70 (Dec 2022), Vol3 17.2.3 "Supervisor Shadow Stack
Token".

Architecturally, an event delivery which starts in CPL<3 and switches shadow
stack will first validate the Supervisor Shadow Stack Token (setting the busy
bit), then push CS/LIP/SSP.  One example of this is an NMI interrupting Xen.

Some CPUs suffer from an issue called fracturing, whereby a fault/vmexit/etc
between setting the busy bit and completing the event injection renders the
action non-restartable, because when it comes time to restart, the busy bit is
found to be already set.

This is far more easily encountered under virt, yet it is not the fault of the
hypervisor, nor the fault of the guest kernel.  The fault lies somewhere
between the architectural specification, and the uarch behaviour.
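The fracturing sequence can be illustrated with a toy state machine (purely illustrative — the real token handling is microcoded; see SDM Vol3 17.2.3):

```c
#include <stdbool.h>

/* Toy model of a Supervisor Shadow Stack Token: only the busy bit. */
struct sss_token {
    bool busy;
};

/*
 * Event delivery first validates the token and sets the busy bit,
 * then pushes the CS/LIP/SSP frame.  Returns false if delivery cannot
 * start because the busy bit is already set -- which is exactly what
 * a restart finds after a fractured (interrupted) delivery.
 */
static bool begin_event_delivery(struct sss_token *t)
{
    if (t->busy)
        return false;   /* restart after a fracture fails here */
    t->busy = true;     /* busy set before the frame push completes */
    return true;
}

/* The busy bit is released again when the event handler returns. */
static void complete_event_delivery(struct sss_token *t)
{
    t->busy = false;
}
```

A fault/vmexit between `t->busy = true` and the end of delivery leaves the busy bit set, so the restarted delivery fails validation.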

Intel have allocated CPUID.7[1].edx[18] CET_SSS to enumerate that supervisor
shadow stacks are safe to use.  Because of how Xen lays out its shadow stacks,
fracturing is not expected to be a problem on native.

Detect this case on boot and default to not using shstk if virtualised.
Specifying `cet=shstk` on the command line will override this heuristic and
enable shadow stacks regardless.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Confirmation from AMD that their parts are not impacted.
 * Fix pv-shim build.  The "define false" trick doesn't work with tristates.
 * Tweak wording in several places.
 * Fix tabs vs spaces.

This ideally wants backporting to Xen 4.14.  I have no idea how likely it is
to need to backport the prerequisite patch for new feature words, but we've
already had to do that once for security patches.  OTOH, I have no idea how
easy it is to trigger in non-synthetic cases.
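The resulting decision logic — a tristate default resolved against the hardware and virtualisation state — can be sketched in plain C. The names here are stand-ins for the patch's `opt_xen_shstk`, `cpu_has_hypervisor`, and the CET feature flags, not the actual Xen interfaces:

```c
#include <stdbool.h>

/*
 * Model of the tristate option from the patch:
 *   -1 = default (decide by heuristic), 0 = off, 1 = forced on.
 */
static bool use_shstk(int opt, bool has_cet_ss, bool virtualised,
                      bool intel, bool has_cet_sss)
{
    if (!has_cet_ss)            /* no CET-SS hardware: nothing to enable */
        return false;

    /* Intel parts without CET_SSS are assumed susceptible to fracturing. */
    bool fracture_bug = intel && !has_cet_sss;

    if (opt == -1)              /* no cet= override on the command line */
        opt = virtualised ? !fracture_bug : true;

    return opt;
}
```

On bare metal the heuristic still enables shadow stacks; only the virtualised case on susceptible parts flips the default off, and an explicit `cet=shstk` wins either way.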
---
 docs/misc/xen-command-line.pandoc           |  7 ++++-
 tools/libs/light/libxl_cpuid.c              |  2 ++
 tools/misc/xen-cpuid.c                      |  1 +
 xen/arch/x86/cpu/common.c                   | 11 +++++--
 xen/arch/x86/setup.c                        | 46 ++++++++++++++++++++++++-----
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 6 files changed, 57 insertions(+), 11 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 923910f553c5..19d4d815bdee 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -287,10 +287,15 @@ can be maintained with the pv-shim mechanism.
     protection.
 
     The option is available when `CONFIG_XEN_SHSTK` is compiled in, and
-    defaults to `true` on hardware supporting CET-SS.  Specifying
+    generally defaults to `true` on hardware supporting CET-SS.  Specifying
     `cet=no-shstk` will cause Xen not to use Shadow Stacks even when support
     is available in hardware.
 
+    Some hardware suffers from an issue known as Supervisor Shadow Stack
+    Fracturing.  On such hardware, Xen will default to not using Shadow Stacks
+    when virtualised.  Specifying `cet=shstk` will override this heuristic and
+    enable Shadow Stacks unilaterally.
+
 *   The `ibt=` boolean controls whether Xen uses Indirect Branch Tracking for
     its own protection.
 
diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 2aa23225f42c..d97a2f3338bc 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -235,6 +235,8 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
         {"fsrs",         0x00000007,  1, CPUID_REG_EAX, 11,  1},
         {"fsrcs",        0x00000007,  1, CPUID_REG_EAX, 12,  1},
 
+        {"cet-sss",      0x00000007,  1, CPUID_REG_EDX, 18,  1},
+
         {"intel-psfd",   0x00000007,  2, CPUID_REG_EDX,  0,  1},
         {"mcdt-no",      0x00000007,  2, CPUID_REG_EDX,  5,  1},
 
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index addb3a39a11a..0248eaef44c1 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -208,6 +208,7 @@ static const char *const str_7c1[32] =
 
 static const char *const str_7d1[32] =
 {
+    [18] = "cet-sss",
 };
 
 static const char *const str_7d2[32] =
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index b3fcf4680f3a..27f73d3bbe31 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -346,11 +346,18 @@ void __init early_cpu_init(void)
 	       x86_cpuid_vendor_to_str(c->x86_vendor), c->x86, c->x86,
 	       c->x86_model, c->x86_model, c->x86_mask, eax);
 
-	if (c->cpuid_level >= 7)
-		cpuid_count(7, 0, &eax, &ebx,
+	if (c->cpuid_level >= 7) {
+		uint32_t max_subleaf;
+
+		cpuid_count(7, 0, &max_subleaf, &ebx,
 			    &c->x86_capability[FEATURESET_7c0],
 			    &c->x86_capability[FEATURESET_7d0]);
 
+		if (max_subleaf >= 1)
+			cpuid_count(7, 1, &eax, &ebx, &ecx,
+				    &c->x86_capability[FEATURESET_7d1]);
+	}
+
 	eax = cpuid_eax(0x80000000);
 	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
 		ebx = eax >= 0x8000001f ? cpuid_ebx(0x8000001f) : 0;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 566422600d94..1b8b74599f4a 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -95,11 +95,7 @@ unsigned long __initdata highmem_start;
 size_param("highmem-start", highmem_start);
 #endif
 
-#ifdef CONFIG_XEN_SHSTK
-static bool __initdata opt_xen_shstk = true;
-#else
-#define opt_xen_shstk false
-#endif
+static int8_t __initdata opt_xen_shstk = -IS_ENABLED(CONFIG_XEN_SHSTK);
 
 #ifdef CONFIG_XEN_IBT
 static bool __initdata opt_xen_ibt = true;
@@ -1099,11 +1095,45 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     early_cpu_init();
 
     /* Choose shadow stack early, to set infrastructure up appropriately. */
-    if ( opt_xen_shstk && boot_cpu_has(X86_FEATURE_CET_SS) )
+    if ( !boot_cpu_has(X86_FEATURE_CET_SS) )
+        opt_xen_shstk = 0;
+
+    if ( opt_xen_shstk )
     {
-        printk("Enabling Supervisor Shadow Stacks\n");
+        /*
+         * Some CPUs suffer from Shadow Stack Fracturing, an issue whereby a
+         * fault/VMExit/etc between setting a Supervisor Busy bit and the
+         * event delivery completing renders the operation non-restartable.
+         * On restart, event delivery will find the Busy bit already set.
+         *
+         * This is a problem on bare metal, but outside of synthetic cases or
+         * a very badly timed #MC, it's not believed to be a problem.  It is a much
+         * bigger problem under virt, because we can VMExit for a number of
+         * legitimate reasons and tickle this bug.
+         *
+         * CPUs with this issue addressed enumerate CET-SSS to indicate that
+         * supervisor shadow stacks are now safe to use.
+         */
+        bool cpu_has_bug_shstk_fracture =
+            boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+            !boot_cpu_has(X86_FEATURE_CET_SSS);
 
-        setup_force_cpu_cap(X86_FEATURE_XEN_SHSTK);
+        /*
+         * On bare metal, assume that Xen won't be impacted by shstk
+         * fracturing problems.  Under virt, be more conservative and disable
+         * shstk by default.
+         */
+        if ( opt_xen_shstk == -1 )
+            opt_xen_shstk =
+                cpu_has_hypervisor ? !cpu_has_bug_shstk_fracture
+                                   : true;
+
+        if ( opt_xen_shstk )
+        {
+            printk("Enabling Supervisor Shadow Stacks\n");
+
+            setup_force_cpu_cap(X86_FEATURE_XEN_SHSTK);
+        }
     }
 
     if ( opt_xen_ibt && boot_cpu_has(X86_FEATURE_CET_IBT) )
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 7a896f0e2d92..f6a46f62a549 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -290,6 +290,7 @@ XEN_CPUFEATURE(INTEL_PPIN,         12*32+ 0) /*   Protected Processor Inventory
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ecx, word 14 */
 /* Intel-defined CPU features, CPUID level 0x00000007:1.edx, word 15 */
+XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:2.edx, word 13 */
 XEN_CPUFEATURE(INTEL_PSFD,         13*32+ 0) /*A  MSR_SPEC_CTRL.PSFD */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 11:12:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 11:12:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471110.730841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD1gm-0006I7-2i; Wed, 04 Jan 2023 11:12:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471110.730841; Wed, 04 Jan 2023 11:12:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD1gl-0006I0-Vz; Wed, 04 Jan 2023 11:12:07 +0000
Received: by outflank-mailman (input) for mailman id 471110;
 Wed, 04 Jan 2023 11:12:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD1gk-0005kw-GW
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 11:12:06 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9ee64cf1-8c20-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 12:12:05 +0100 (CET)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] x86/cpuid: Infrastructure for leaves 7:1{ecx,edx}
Date: Wed, 4 Jan 2023 11:11:45 +0000
Message-ID: <20230104111146.2094-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230104111146.2094-1-andrew.cooper3@citrix.com>
References: <20230104111146.2094-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

We don't actually need ecx yet, but adding it in now reduces the extent to
which leaf 7 is out of order in a featureset.

cpufeatureset.h remains in architectural leaf order for the sanity of anyone
trying to locate where to insert new rows.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Fix decodes[] short string
---
 tools/misc/xen-cpuid.c                      | 10 ++++++++++
 xen/arch/x86/cpu/common.c                   |  3 ++-
 xen/include/public/arch-x86/cpufeatureset.h |  3 +++
 xen/include/xen/lib/x86/cpuid.h             | 15 ++++++++++++++-
 4 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index d5833e9ce879..addb3a39a11a 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -202,6 +202,14 @@ static const char *const str_7b1[32] =
     [ 0] = "ppin",
 };
 
+static const char *const str_7c1[32] =
+{
+};
+
+static const char *const str_7d1[32] =
+{
+};
+
 static const char *const str_7d2[32] =
 {
     [ 0] = "intel-psfd",
@@ -229,6 +237,8 @@ static const struct {
     { "0x80000021.eax",  "e21a", str_e21a },
     { "0x00000007:1.ebx", "7b1", str_7b1 },
     { "0x00000007:2.edx", "7d2", str_7d2 },
+    { "0x00000007:1.ecx", "7c1", str_7c1 },
+    { "0x00000007:1.edx", "7d1", str_7d1 },
 };
 
 #define COL_ALIGN "18"
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 0412dbc915e5..b3fcf4680f3a 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -450,7 +450,8 @@ static void generic_identify(struct cpuinfo_x86 *c)
 			cpuid_count(7, 1,
 				    &c->x86_capability[FEATURESET_7a1],
 				    &c->x86_capability[FEATURESET_7b1],
-				    &tmp, &tmp);
+				    &c->x86_capability[FEATURESET_7c1],
+				    &c->x86_capability[FEATURESET_7d1]);
 		if (max_subleaf >= 2)
 			cpuid_count(7, 2,
 				    &tmp, &tmp, &tmp,
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 7915f5826f57..7a896f0e2d92 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -288,6 +288,9 @@ XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
 XEN_CPUFEATURE(INTEL_PPIN,         12*32+ 0) /*   Protected Processor Inventory Number */
 
+/* Intel-defined CPU features, CPUID level 0x00000007:1.ecx, word 14 */
+/* Intel-defined CPU features, CPUID level 0x00000007:1.edx, word 15 */
+
 /* Intel-defined CPU features, CPUID level 0x00000007:2.edx, word 13 */
 XEN_CPUFEATURE(INTEL_PSFD,         13*32+ 0) /*A  MSR_SPEC_CTRL.PSFD */
 XEN_CPUFEATURE(IPRED_CTRL,         13*32+ 1) /*   MSR_SPEC_CTRL.IPRED_DIS_* */
diff --git a/xen/include/xen/lib/x86/cpuid.h b/xen/include/xen/lib/x86/cpuid.h
index 73a5c330365e..fa98b371eef4 100644
--- a/xen/include/xen/lib/x86/cpuid.h
+++ b/xen/include/xen/lib/x86/cpuid.h
@@ -18,6 +18,8 @@
 #define FEATURESET_e21a  11 /* 0x80000021.eax      */
 #define FEATURESET_7b1   12 /* 0x00000007:1.ebx    */
 #define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
+#define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
+#define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
 
 struct cpuid_leaf
 {
@@ -194,7 +196,14 @@ struct cpuid_policy
                 uint32_t _7b1;
                 struct { DECL_BITFIELD(7b1); };
             };
-            uint32_t /* c */:32, /* d */:32;
+            union {
+                uint32_t _7c1;
+                struct { DECL_BITFIELD(7c1); };
+            };
+            union {
+                uint32_t _7d1;
+                struct { DECL_BITFIELD(7d1); };
+            };
 
             /* Subleaf 2. */
             uint32_t /* a */:32, /* b */:32, /* c */:32;
@@ -343,6 +352,8 @@ static inline void cpuid_policy_to_featureset(
     fs[FEATURESET_e21a] = p->extd.e21a;
     fs[FEATURESET_7b1] = p->feat._7b1;
     fs[FEATURESET_7d2] = p->feat._7d2;
+    fs[FEATURESET_7c1] = p->feat._7c1;
+    fs[FEATURESET_7d1] = p->feat._7d1;
 }
 
 /* Fill in a CPUID policy from a featureset bitmap. */
@@ -363,6 +374,8 @@ static inline void cpuid_featureset_to_policy(
     p->extd.e21a  = fs[FEATURESET_e21a];
     p->feat._7b1  = fs[FEATURESET_7b1];
     p->feat._7d2  = fs[FEATURESET_7d2];
+    p->feat._7c1  = fs[FEATURESET_7c1];
+    p->feat._7d1  = fs[FEATURESET_7d1];
 }
 
 static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 11:17:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 11:17:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471130.730852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD1m4-0007aO-Mj; Wed, 04 Jan 2023 11:17:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471130.730852; Wed, 04 Jan 2023 11:17:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD1m4-0007aH-Jw; Wed, 04 Jan 2023 11:17:36 +0000
Received: by outflank-mailman (input) for mailman id 471130;
 Wed, 04 Jan 2023 11:17:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD1m3-0007a9-Ou
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 11:17:35 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 60feff1a-8c21-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 12:17:32 +0100 (CET)
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Jan 2023 06:17:27 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN8PR03MB5106.namprd03.prod.outlook.com (2603:10b6:408:df::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 11:17:25 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 11:17:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60feff1a-8c21-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672831052;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=PF5+qnr50HlgCu8Yl2X1uruIFc58QQfovGua4qn8lv4=;
  b=gDDCJvyD5LVz+aaebL1+tjcxiSrwFQdXwXQLWlzMwBZ0/trALrBvbgys
   0NDMPj1AFmFSSe24PP8c4yINyg7gQTSYrpg38SDLwa8eMfZlTzY4N+fBv
   zfMMZ4oP5wLJu1zNIw9engF+zxU1DSCB9FSXPo9QbyjKCbkClAmZvR1GE
   w=;
X-IronPort-RemoteIP: 104.47.70.102
X-IronPort-MID: 90603934
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,299,1665460800"; 
   d="scan'208";a="90603934"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mDRzcU2PmNald1yoXvo+VnSSH4clZOOM5mLjk4wQKeem1UBZ/mHtg72oSBa8mQH/5/iNthXUap3t2zcqTkKgz0Ff2+Iqv3GEq7g+h0JOcowMAmyIjwezQXZ4fO8zAvi5fW+xT2GbO+1Xt9Gm5pRl+fiddYqGfZ+Oi/K8OySEE7kfw/bqrClqE79lVTVisuKL/fx8fJhTvM2TQ/t26LqVAEgmC6bmK7MWSVB9mo74YR9CvFFWjjO88A/hW9zQQeh+8j+ywbsfrz3NlUO+PPfauRFITYK7nxJWoxTRVdRuen8gYqLU5vqTKFTTA6S35T0bZTM1GfdNVol++JM4/9vJYg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PF5+qnr50HlgCu8Yl2X1uruIFc58QQfovGua4qn8lv4=;
 b=KH57fZl8i7h6s7RvA1GVUjKMPYt8m002JxrvC62fjdch27gkUm+K2ob6CRX4D/Mk0o0+yZh4yKQzby0ieQy+2cw98d4aCRCzOEzyKxRL+a4a8t/ekbjhZlgtIaoAy2qeThABJr/wDBtHHadIIKuBEjLkELmaelaX/hMO0NF7WgIo5bIUAMMeYHyXFd3IwiuhPLmb2YKCQUFb23z2gOKQnVIFLR7Ujw4SPd68fAoM9Ehh1SfPiUGMUb14qUdF+AbteO3QLRCPEXabXCXedoKujePBPxGra9evpCri/p4fRAAq1UfyLvvznI7KLrjS2KAcRtTPU1Rv0E1bPBVASEsT1A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PF5+qnr50HlgCu8Yl2X1uruIFc58QQfovGua4qn8lv4=;
 b=MIpgHvxe99QbgZrZYZ+5Pd3ReJCZi6OHCDjSKB8HBCdiQWwZVnrFAhvi4VZyrLjDD2lDk9Cpt08NJzsFzUzaMWhmWsvviPg+KLnM/yqwMQLBGsq2Io+GEBdayAOVxQSx/7U0TDCUD9/hsBhmeFWDvBOcrQ7CQQej0y8o+r8TllU=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 0/2] x86: Work around Shstk fracturing
Thread-Topic: [PATCH v2 0/2] x86: Work around Shstk fracturing
Thread-Index: AQHZIC4eprKe1GLtgUSigry+xdpDWw==
Date: Wed, 4 Jan 2023 11:17:24 +0000
Message-ID: <028d658d-db7d-9cd0-08aa-9033bc65e547@citrix.com>
References: <20230104111146.2094-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230104111146.2094-1-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BN8PR03MB5106:EE_
x-ms-office365-filtering-correlation-id: f721dea9-21b6-45ad-c6f1-08daee45417a
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <06EE0AC8DDD7AD4BAC2D1EC9E864D7B4@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f721dea9-21b6-45ad-c6f1-08daee45417a
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 11:17:24.4753
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: nllPKvA/Pe1plXY7YzG00Exan2B+vRCPIX+RJSvGZ15Bmbuyt8XmZVPzjlutHxSrfmXwEWzHcf50SKdpgbuYabTJtE7OEjr70nA8ytb4JCM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5106

On 04/01/2023 11:11 am, Andrew Cooper wrote:
> See patch 2 for details.

Apologies - this series should be "v2".

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 11:59:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 11:59:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471137.730863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD2QE-0003UQ-O6; Wed, 04 Jan 2023 11:59:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471137.730863; Wed, 04 Jan 2023 11:59:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD2QE-0003UJ-LL; Wed, 04 Jan 2023 11:59:06 +0000
Received: by outflank-mailman (input) for mailman id 471137;
 Wed, 04 Jan 2023 11:59:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LX0w=5B=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1pD2Q3-0003UA-HG
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 11:59:05 +0000
Received: from mail.skyhub.de (mail.skyhub.de [5.9.137.197])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 285a2951-8c27-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 12:58:53 +0100 (CET)
Received: from zn.tnic (p5de8e9fe.dip0.t-ipconnect.de [93.232.233.254])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 18EC91EC008F;
 Wed,  4 Jan 2023 12:58:52 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 285a2951-8c27-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1672833532;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=UKl/CVF+4GJS0LDdyX728sU1J1aGkocy2OklfcAdRtI=;
	b=VcttLkPNy9K1xNdWfFV7Yz0KEUwXXic9v2XG1jbmKgXxKVSo4wcHOiMPPS/Y09Wr2p5SCF
	2pmDtdw/423IjVUdnSQ/yOxlJdQFacJj5lXUfKDExPe/KfWSxHLfgoaUswfq1pSyDkPKwR
	+8kZx8fwKmnCVLzKEVBMtns1UtQy+9g=
Date: Wed, 4 Jan 2023 12:58:48 +0100
From: Borislav Petkov <bp@alien8.de>
To: Dave Hansen <dave.hansen@intel.com>
Cc: Ross Philipson <ross.philipson@oracle.com>,
	linux-kernel@vger.kernel.org, x86@kernel.org,
	dpsmith@apertussolutions.com, tglx@linutronix.de, mingo@redhat.com,
	hpa@zytor.com, luto@amacapital.net, dave.hansen@linux.intel.com,
	kanth.ghatraju@oracle.com, trenchboot-devel@googlegroups.com,
	jailhouse-dev@googlegroups.com, jan.kiszka@siemens.com,
	xen-devel@lists.xenproject.org, jgross@suse.com,
	boris.ostrovsky@oracle.com, andrew.cooper3@citrix.com
Subject: Re: [PATCH v2 1/2] x86: Check return values from early_memremap calls
Message-ID: <Y7Vp+EPq5wkGr5mi@zn.tnic>
References: <20221110154521.613472-1-ross.philipson@oracle.com>
 <20221110154521.613472-2-ross.philipson@oracle.com>
 <8e62a029-f2fa-0627-1f71-4850a68ec6b6@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <8e62a029-f2fa-0627-1f71-4850a68ec6b6@intel.com>

On Thu, Nov 10, 2022 at 08:07:53AM -0800, Dave Hansen wrote:
> On 11/10/22 07:45, Ross Philipson wrote:
> >  	dt = early_memremap(initial_dtb, map_len);
> > +	if (!dt) {
> > +		pr_warn("failed to memremap initial dtb\n");
> > +		return;
> > +	}
> 
> Are all of these new pr_warn/err()'s really adding much value?  They all
> look pretty generic.  It makes me wonder if we should just spit out a
> generic message in early_memremap() and save all the callers the trouble.

Well, let's see.

early_memremap() calls __early_ioremap(), and that one already warns before
each NULL return. So I guess we don't need the error messages, as we will know
where it starts failing.

I guess we still need the error handling though.

I.e., the above should be:

    dt = early_memremap(initial_dtb, map_len);
+   if (!dt)
+           return;

so that we don't go off into the weeds with a NULL ptr.

Or?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 12:12:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 12:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471151.730874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD2dE-00061t-7h; Wed, 04 Jan 2023 12:12:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471151.730874; Wed, 04 Jan 2023 12:12:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD2dE-00061m-4T; Wed, 04 Jan 2023 12:12:32 +0000
Received: by outflank-mailman (input) for mailman id 471151;
 Wed, 04 Jan 2023 12:12:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD2dC-00061g-NJ
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 12:12:30 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0d36f55c-8c29-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 13:12:28 +0100 (CET)
Received: from mail-bn7nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Jan 2023 07:12:24 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6345.namprd03.prod.outlook.com (2603:10b6:303:11c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 12:12:22 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 12:12:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d36f55c-8c29-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672834348;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=GzrVoLQbytdkR1Wp0KQFMh+Px/VkZ4Z8nHXx96puNtI=;
  b=bIUTrIImGpmRCbWlKrWEQwpIFXGQYtEh5z1Frk32aCofAd3GwuOcNQe6
   6pAe4rvPcNOaNv1mFw3bTbJY4xR6SM40mT5+lh4CMigIbqomguel/dnkm
   P8t/IGYpWFoole6LqBBDrLCf6ylqMph/w8SRwjKUUhJpQtZYa8RtNayyL
   I=;
X-IronPort-RemoteIP: 104.47.70.107
X-IronPort-MID: 91159701
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,299,1665460800"; 
   d="scan'208";a="91159701"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y2Qzs8reLCpDxPF+xgVraCyCy1SX4jXeQx5ZvCVI/07byZRhnOcXvClHp65Cbbk17y0i6jDGvV1WPmG5XtDMltVNzZlwYn+yMOY+VGMVyDxGwCJHf2lDX7m/9ksTC6ffwJCoEd0Ay6Aro615qJ7lXNK8tlUptQhZ44QknDJE36E2+dFS5jg+o3/W/h1W7lHgtnUi5hEFySqRevPE8c2VJkt41A46pS7n2YCJ5gMZTQEtbRU6TvAGMKa3AyGTxr5hjLJmW2ETkDVIW3ahBqjtzDW6v5wqrlrJfzlsKke4fZFWdNIKmTJjccMoe4Rne2LnlB6Bq/5NFgzRCZsN8wbVeQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GzrVoLQbytdkR1Wp0KQFMh+Px/VkZ4Z8nHXx96puNtI=;
 b=WeHdu18g6l5POtV6ABN/vHDXVdc+ArSCcHbFB33Ojj8kBjrUeowz+M7XzsUHchkUJqvpV83J1jRNFbhSSlCfLTa0PAq4xv/a/uk2e6NuwhbyI3MVG9NXGeJeFFxgrb+LWtBtxNNQLitWGaujR9Hs/QjgAttP8IRMGadHiEt5sJH40GC920gksk6flc+3c0uM7G1n/EZEcmAjRvaMdRJYxUlzfRhE3zNGFxVEknK2LxO7/u1iV/cIM0ZSnWQ822zAp1ux0vtrSvnV/CVF6dbgiJ7RqlCJZnCT0EOCIWqstWAp8KN04wCMUcT8K1TN5tdJzGVQcC27Vy1ArURx7owm8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GzrVoLQbytdkR1Wp0KQFMh+Px/VkZ4Z8nHXx96puNtI=;
 b=xKJ1LRgz/WXhEeDPgZFHjicl3HIcQ+9wXYop0GuOhRiKh0GR9CyjukxJT7PHiBxcwnBeAPkjvZJZFFX9vNM2CBkftGEJLEmEDvxDT+8NQrmsIU05O0i4BO4qCritgi78mZvEd2LcSNJLQPeVbUM1v5oblCOxTZDYBy8dfG3mEdA=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Michal Orzel <michal.orzel@amd.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, Oleksii
 Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 2/6] CI: Remove guesswork about which artefacts to
 preserve
Thread-Topic: [PATCH 2/6] CI: Remove guesswork about which artefacts to
 preserve
Thread-Index: AQHZG+cbH36uve2aEUGxm9tAfYRLqK6LLXmAgAMGUIA=
Date: Wed, 4 Jan 2023 12:12:21 +0000
Message-ID: <c48536e5-37b4-4dbd-4715-cd8d776e496e@citrix.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-3-andrew.cooper3@citrix.com>
 <07b5cc36-e0ea-9b51-036a-9523920dd74a@amd.com>
In-Reply-To: <07b5cc36-e0ea-9b51-036a-9523920dd74a@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MW4PR03MB6345:EE_
x-ms-office365-filtering-correlation-id: 44cb12d5-77e1-402f-a341-08daee4ceef0
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <90E7DE2C213C5C4FBB44CCBC7E8B790A@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 44cb12d5-77e1-402f-a341-08daee4ceef0
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 12:12:21.9863
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: qwGdT9Ywx68hoQfBQyl+mNntpPC/h3I/MstY4Etl9MtZoCsOgvoNB6DJgn02hSxFUl9XmK9dG4gViUqOGT3TuISp5dePOoS7Yd5D2+IpGoE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6345

On 02/01/2023 2:00 pm, Michal Orzel wrote:
> Hi Andrew,
>
> On 30/12/2022 01:38, Andrew Cooper wrote:
>> Caution: This message originated from an External Source. Use proper caution when opening attachments, clicking links, or responding.
>>
>>
>> Preserve the artefacts based on the `make` rune we actually ran, rather than
>> guesswork about which rune we would have run based on other settings.
>>
>> Note that the ARM qemu smoke tests depend on finding binaries/xen even from
>> full builds.  Also that the Jessie-32 containers build tools but not Xen.
>>
>> This means the x86_32 builds now store relevant artefacts.  No change in other
>> configurations.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> I'd prefer to keep using "artifacts" and not "artefacts" as the former is what GitLab uses
> and what we use in build/test.yaml.

Xen is written in British English.  We're forced to compromise for
external dependencies, but xen.git is mostly British not American.

~Andrew
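The idea in the patch this thread discusses — preserve the artefacts implied by the `make` rune that was actually run, rather than re-deriving them from other settings — can be sketched in shell. This is an illustrative sketch only, not the real xen.git automation scripts; the function name and the exact rune-to-path mapping are assumptions.

```shell
# Illustrative sketch: decide which artefacts to keep from the `make`
# rune that was actually executed, instead of guessing from unrelated
# configuration variables.
artefacts_for_rune() {
    case "$1" in
        *dist-xen*)   echo "binaries/xen" ;;    # hypervisor-only build
        *dist-tools*) echo "binaries/tools" ;;  # tools-only build
        *)            echo "binaries" ;;        # full build: keep everything
    esac
}

artefacts_for_rune "make -j4 dist-xen"
```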


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 12:14:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 12:14:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471158.730885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD2ei-0006bc-IA; Wed, 04 Jan 2023 12:14:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471158.730885; Wed, 04 Jan 2023 12:14:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD2ei-0006bV-FE; Wed, 04 Jan 2023 12:14:04 +0000
Received: by outflank-mailman (input) for mailman id 471158;
 Wed, 04 Jan 2023 12:14:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bCSi=5B=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pD2eg-0006bP-Vq
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 12:14:03 +0000
Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com
 [2a00:1450:4864:20::332])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 456b697c-8c29-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 13:14:00 +0100 (CET)
Received: by mail-wm1-x332.google.com with SMTP id m3so16502532wmq.0
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 04:14:00 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb20064d0d42b6b29c193.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:64d0:d42b:6b29:c193])
 by smtp.gmail.com with ESMTPSA id
 bg24-20020a05600c3c9800b003cfa3a12660sm2320766wmb.1.2023.01.04.04.13.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 04:13:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 456b697c-8c29-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tp0OYuN+hFp93UNubvtXVhop92O4XSYXyFAqRN3asxg=;
        b=bUjggjXm/ZKd2AdcPuoyYtGHQkJlJsoQhispfOfu6iis4HsJgusAR6+OaYyUeOFW/C
         ZXSbcyXa9RNXynpta1alP+kbX7N8M/3V2pAioMbthnKQc0qd/JUjpPLtFoBAjuRnSKdr
         qsihK5gCrYrl1NtVf98bO7tXADzeAw42JsNFo0K4SyG0NuS/ho3OSZs4h1bQNPsC2X3J
         ZH4dOO8tzOHxo0/3eVhOlr4y8Rvo87UoJIwsxwv55ykT53pD1pGl5CYdvgcKDBfSXO1D
         ZcJkLbeBpOHsl3EE/hYpAJWXISJ3cVGD+lZG7pr8Kt5bMq7ftvzQS9Bg6EM3uZYdz9yu
         794w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tp0OYuN+hFp93UNubvtXVhop92O4XSYXyFAqRN3asxg=;
        b=MtAIqzXTErTd5n9FyiIyYuHXccSeLtR2untEcrp+42vp7Yp2ZeSt5TtDpkAJPK04AN
         B6MDYjOXfFvst1Cl+QKZwxoNlZ8fy4BQnbShN386vrjqlnEh6TGrQEBJDrL1S/oWuHMr
         XJ6DsE5eGCzEV57ZXQrZJDA4pE6odaddTp3HOELynYsg4Heu0H2KgOtlQPLR7eFR8XXB
         5qlauCZi/jBvk5j3vkaTle8Rzy/0eCon6xOczqSRX8bPHVW07MU9mV7vkjOSm7liQkwz
         tyRq4mk9hHaUasb3MymiMphGykZB0ZEuO/v9eGgkMhhvk735JSC7iHhxHSrsTe8A/kIe
         P4iQ==
X-Gm-Message-State: AFqh2kob01VUyuvxeozFGL/d918jF3W7Px2Xvht+T7PQFqu37y6ESQQV
	dY7JLC/QhC2SnrosGHUKrjo=
X-Google-Smtp-Source: AMrXdXsxsodApwN0IFQ7MnSFd/e8HkT0U6Gmn6hEN4OSkztIw+emGuCdlBq9EcIRLyPspNRRIEuLBA==
X-Received: by 2002:a05:600c:5c8:b0:3d1:4145:b3b with SMTP id p8-20020a05600c05c800b003d141450b3bmr34663011wmd.9.1672834440070;
        Wed, 04 Jan 2023 04:14:00 -0800 (PST)
Date: Wed, 04 Jan 2023 12:13:54 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>
CC: qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>,
 xen-devel@lists.xenproject.org,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com>
References: <20230102213504.14646-1-shentey@gmail.com> <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com> <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org> <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com> <aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com>
Message-ID: <AB058B2A-406E-487B-A1BA-74416C310B7A@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: 8bit



On 4 January 2023 08:18:59 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>On 1/3/2023 8:38 AM, Bernhard Beschow wrote:
>>
>>
>> On Tue, Jan 3, 2023 at 2:17 PM Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>>
>>     Hi Chuck,
>>
>>     On 3/1/23 04:15, Chuck Zmudzinski wrote:
>>     > On 1/2/23 4:34 PM, Bernhard Beschow wrote:
>>     >> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
>>     >> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
>>     >> machine agnostic to the precise southbridge being used. 2/ will become
>>     >> particularily interesting once PIIX4 becomes usable in the PC machine, avoiding
>>     >> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>>     >>
>>     >> Testing done:
>>     >> None, because I don't know how to conduct this properly :(
>>     >>
>>     >> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>>     >>            "[PATCH v4 00/30] Consolidate PIIX south bridges"
>>
>>     This series is based on a previous series:
>>     https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
>>     (which itself also is).
>>
>>     >> Bernhard Beschow (6):
>>     >>    include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
>>     >>    hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>>     >>    hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>>     >>    hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>>     >>    hw/isa/piix: Resolve redundant k->config_write assignments
>>     >>    hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>>     >>
>>     >>   hw/i386/pc_piix.c     | 34 ++++++++++++++++--
>>     >>   hw/i386/xen/xen-hvm.c |  9 +++--
>>     >>   hw/isa/piix.c         | 66 +----------------------------------
>>     >
>>     > This file does not exist on the Qemu master branch.
>>     > But hw/isa/piix3.c and hw/isa/piix4.c do exist.
>>     >
>>     > I tried renaming it from piix.c to piix3.c in the patch, but
>>     > the patch set still does not apply cleanly on my tree.
>>     >
>>     > Is this patch set re-based against something other than
>>     > the current master Qemu branch?
>>     >
>>     > I have a system that is suitable for testing this patch set, but
>>     > I need guidance on how to apply it to the Qemu source tree.
>>
>>     You can ask Bernhard to publish a branch with the full work,
>>
>>
>> Hi Chuck,
>>
>> ... or just visit https://patchew.org/QEMU/20230102213504.14646-1-shentey@gmail.com/ . There you'll find a git tag with a complete history and all instructions!
>>
>> Thanks for giving my series a test ride!
>>
>> Best regards,
>> Bernhard
>>
>>     or apply each series locally. I use the b4 tool for that:
>>     https://b4.docs.kernel.org/en/latest/installing.html
>>
>>     i.e.:
>>
>>     $ git checkout -b shentey_work
>>     $ b4 am 20221120150550.63059-1-shentey@gmail.com
>>     $ git am
>>     ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
>>     $ b4 am 20221221170003.2929-1-shentey@gmail.com
>>     $ git am
>>     ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
>>     $ b4 am 20230102213504.14646-1-shentey@gmail.com
>>     $ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx
>>
>>     Now the branch 'shentey_work' contains all the patches and you can test.
>>
>>     Regards,
>>
>>     Phil.
>>
>
>Hi Phil and Bernard,
>
>I tried applying these 3 patch series on top of the current qemu
>master branch.
>
>Unfortunately, I saw a regression, so I can't give a tested-by tag yet.

Hi Chuck,

Thanks for your valuable test report! I think the culprit may be commit https://lists.nongnu.org/archive/html/qemu-devel/2023-01/msg00102.html where now 128 PIRQs are considered rather than four. I'll revisit my series and will prepare a v2 in the next days. I think there is no need for further testing v1.

Thanks,
Bernhard

>
>Here are the details of the testing I did so far:
>
>Xen only needs one target, the i386-softmmu target which creates
>the qemu-system-i386 binary that Xen uses for its device model.
>That target compiled and linked with no problems with these 3
>patch series applied on top of qemu master. I didn't try building
>any other targets.
>
>My tests used the xenfv machine type with the xen platform
>pci device, which is ordinarily called a xen hvm guest with xen
>paravirtualized network and block device drivers. It is based on the
>i440fx machine type and so emulates piix3. I tested the xen
>hvm guests with two different configurations as described below.
>
>I tested both Linux and Windows guests, with mixed results. With the
>current Qemu master (commit 222059a0fccf4 without the 3 patch
>series applied), all tested guest configurations work normally for both
>Linux and Windows guests.
>
>With these 3 patch series applied on top of the qemu master branch,
>which tries to consolidate piix3 and piix4 and resolve the xen piix3
>device that my guests use, I unfortunately got a regression.
>
>The regression occurred with a configuration that uses the qemu
>bochs stdvga graphics device with a vnc display, and the qemu
>usb-tablet device to emulate the mouse and keyboard. After applying
>the 3 patch series, the emulated mouse is not working at all for Linux
>guests. It works for Windows guests, but the mouse pointer in the
>guest does not follow the mouse pointer in the vnc window as closely
>as it does without the 3 patch series. So this is the bad news of a
>regression introduced somewhere in these 3 patch series.
>
>The good news is that by using guests in a configuration that does
>not use the qemu usb-tablet device or the bochs stdvga device but
>instead uses a real passed through usb3 controller with a real usb
>mouse and a real usb keyboard connected, and also the real sound
>card and vga device passed through and a 1920x1080 HDMI monitor,
>there is no regression introduced by the 3 patch series and both Linux
>and Windows guests in that configuration work perfectly.
>
>My next test will be to test Bernhard's published git tag without
>trying to merge the 3 patch series into master and see if that also
>has the regression. I also will double check that I didn't make
>any mistakes in merging the 3 patch series by creating the shentey_work
>branch with b4 and git as Phil described and compare that to my
>working tree.
>
>I also will try testing only the first series, then the first series and the
>second series, to try to determine in which of the 3 series the regression
>is introduced.
>
>Best regards,
>
>Chuck
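The b4 workflow Phil describes in this thread can be rolled into one loop. The sketch below is a dry run: it only prints the commands so they can be inspected first; drop the `echo` prefixes to actually fetch and apply the series. `b4 am` names each .mbx file after the series, so the `git am` step is shown generically rather than with the exact filenames.

```shell
# Dry-run sketch of the b4-based workflow from this thread: fetch each
# series by message-id with `b4 am`, then apply the resulting mbox with
# `git am`.  The echo prefixes print the commands instead of running them.
apply_series() {
    for msgid in \
        20221120150550.63059-1-shentey@gmail.com \
        20221221170003.2929-1-shentey@gmail.com \
        20230102213504.14646-1-shentey@gmail.com
    do
        echo "b4 am $msgid"
        echo "git am ./*.mbx"   # b4 names the .mbx after the series
    done
}

echo "git checkout -b shentey_work"
apply_series
```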


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 12:19:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 12:19:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471165.730896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD2jd-0007Gr-4i; Wed, 04 Jan 2023 12:19:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471165.730896; Wed, 04 Jan 2023 12:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD2jd-0007Gk-1q; Wed, 04 Jan 2023 12:19:09 +0000
Received: by outflank-mailman (input) for mailman id 471165;
 Wed, 04 Jan 2023 12:19:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ETE0=5B=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pD2jc-0007Ge-AQ
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 12:19:08 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2071.outbound.protection.outlook.com [40.107.92.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fb0ff6d8-8c29-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 13:19:06 +0100 (CET)
Received: from DM5PR07CA0088.namprd07.prod.outlook.com (2603:10b6:4:ae::17) by
 BY5PR12MB4065.namprd12.prod.outlook.com (2603:10b6:a03:202::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 12:19:02 +0000
Received: from DM6NAM11FT038.eop-nam11.prod.protection.outlook.com
 (2603:10b6:4:ae:cafe::43) by DM5PR07CA0088.outlook.office365.com
 (2603:10b6:4:ae::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.14 via Frontend
 Transport; Wed, 4 Jan 2023 12:19:02 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT038.mail.protection.outlook.com (10.13.173.137) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5966.17 via Frontend Transport; Wed, 4 Jan 2023 12:19:02 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 4 Jan
 2023 06:19:01 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 4 Jan
 2023 04:19:00 -0800
Received: from [10.71.193.33] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 4 Jan 2023 06:18:59 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb0ff6d8-8c29-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZEmue9qvMhHZsyPXoDTayU+NimjKOXrhG+ZpRKQvxKm4rqeHPAu6N5ZKoHBTWYpthOM4nLncmGH+XqofpqRiwEJdSaoAm7OnMHcS2nZXyXNZvsmX+SswPQPpF4vlcxI3cpddJHqHEKSsqvBTEOiaaseQKi4nHrOk2N1sdfQYwl5SDKQO6Yu9N2Nft1fpagNdZwKMo6HPWzXN1IbSf5ReOAp9IKK3bnxF4hevGsu6arPO8PT0tiS8x4QSginkM1bwFCVs4zP0iaMalp5fW9gPlosQgmU2yjVdxpka/ZENVXj+M5HLuMFgoybdaYtDBUUcadMiBwVgMMrxoaH/LoWqfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=owsf/i7YRUA/xls2EfGwWANAD2PSYeWoOIm1GTKWhJs=;
 b=m3sicxencausxZSruM0KMq5xqkefUdOWIplYs50FDHeSSS4P4QxYfx9XlMVCsToBhumuGVGFrD1hBA0Qyd+y23s8ipMBCQCeiddKso9OKbjWzq9tMYPpmBMCBbfrchL0+7n9V1UEmeBv1v5v8PC3TViUfDhr76LmLgSYUI3DpSZ5rzMLMvuKmMIF9qL8WwSXYLaprPqacaudpaAnpPbrZqLAOLUpfkqGnKSV5jV90f7+GRaMlREdv/SOMxHBZmeHG4Ui7PqgNI+m+9LhfumzKC+xOmcXl9bGDcH3kvQ2FXN6fudhVJLGMrlrlB7xyn+5TBCrV2kOkvBUlPJPgyYBRw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=citrix.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=owsf/i7YRUA/xls2EfGwWANAD2PSYeWoOIm1GTKWhJs=;
 b=XtTj6DBAJ5zPxwKrcytbFkik7GXCDHgOukujzK9pLoS0P8iE5lVP3vcg/rdSd0KxkNcypqUNH7BGxNhrSJuYN/BvDNdBBtZ3aLjFboCG0TQ/cAFdmEGVDo6dNryzeG96F5rNgwteAWxpt4Eio9QnXiaBPqMYv0Spts8DTgIFThg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <dd3f16ac-98d5-356a-c55b-508f142c1b1e@amd.com>
Date: Wed, 4 Jan 2023 13:18:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/6] CI: Remove guesswork about which artefacts to
 preserve
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>,
	"Oleksii Kurochko" <oleksii.kurochko@gmail.com>
References: <20221230003848.3241-1-andrew.cooper3@citrix.com>
 <20221230003848.3241-3-andrew.cooper3@citrix.com>
 <07b5cc36-e0ea-9b51-036a-9523920dd74a@amd.com>
 <c48536e5-37b4-4dbd-4715-cd8d776e496e@citrix.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <c48536e5-37b4-4dbd-4715-cd8d776e496e@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT038:EE_|BY5PR12MB4065:EE_
X-MS-Office365-Filtering-Correlation-Id: 190b7f70-2a3c-4127-f0d9-08daee4ddd8c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jan 2023 12:19:02.2459
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 190b7f70-2a3c-4127-f0d9-08daee4ddd8c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT038.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4065


On 04/01/2023 13:12, Andrew Cooper wrote:
> 
> 
> On 02/01/2023 2:00 pm, Michal Orzel wrote:
>> Hi Andrew,
>>
>> On 30/12/2022 01:38, Andrew Cooper wrote:
>>> Caution: This message originated from an External Source. Use proper caution when opening attachments, clicking links, or responding.
>>>
>>>
>>> Preserve the artefacts based on the `make` rune we actually ran, rather than
>>> guesswork about which rune we would have run based on other settings.
>>>
>>> Note that the ARM qemu smoke tests depend on finding binaries/xen even from
>>> full builds.  Also that the Jessie-32 containers build tools but not Xen.
>>>
>>> This means the x86_32 builds now store relevant artefacts.  No change in other
>>> configurations.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> I'd prefer to keep using "artifacts" and not "artefacts" as the former is what GitLab uses
>> and what we use in build/test.yaml.
> 
> Xen is written in British English.  We're forced to compromise for
> external dependencies, but xen.git is mostly British not American.
True, but for consistency and ease of grepping, it is beneficial to stick
to what a subsystem uses by default.

> 
> ~Andrew

~Michal
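On the grepping point: a quick grep shows which spelling a given subtree actually uses. The helper below is a generic sketch; the directory argument is whatever subtree you care about (in xen.git the CI files live under automation/), and the helper name is illustrative.

```shell
# Count occurrences of each spelling under a directory, to see which
# one a subsystem has standardised on.
count_spellings() {
    dir="$1"
    for word in artifact artefact; do
        # grep -o emits one line per match; wc -l then counts matches
        n=$(grep -rio "$word" "$dir" 2>/dev/null | wc -l | tr -d ' ')
        echo "$word: $n"
    done
}
```

Example: `count_spellings automation/` prints one `word: count` line per spelling.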


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 12:38:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 12:38:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471173.730907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD32a-0001K1-TS; Wed, 04 Jan 2023 12:38:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471173.730907; Wed, 04 Jan 2023 12:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD32a-0001Ju-QA; Wed, 04 Jan 2023 12:38:44 +0000
Received: by outflank-mailman (input) for mailman id 471173;
 Wed, 04 Jan 2023 12:38:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD32Z-0001Jk-TG; Wed, 04 Jan 2023 12:38:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD32Z-0005N3-RN; Wed, 04 Jan 2023 12:38:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD32Z-0004Wt-H5; Wed, 04 Jan 2023 12:38:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pD32Z-0007zx-GZ; Wed, 04 Jan 2023 12:38:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Vp3xFGxDQtgtNhcfWty+Eg0d9+1Hfd1RyUmHsQqWqkE=; b=AuWm9LuPBA0/yGzwfBFfzNX3BM
	IMNYlKSeFP69dml0uTrWY9dyJweYsCoLZPP+FbSZ/Vj6Gi+lmY++TASV6CHgLmMEO+3PYEjNAuF+u
	p/IQcuKrRmvSkBZ4pNBGPx+EW2WmzOANn6osbfiRzROjYY9Ef2spJACZ4ngS8FXVXJC4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175567-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175567: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=9ce09870e721efacc41fa7ee684e9e299f120350
X-Osstest-Versions-That:
    ovmf=992d5451d19b93635d52db293bab680e32142776
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 12:38:43 +0000

flight 175567 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175567/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9ce09870e721efacc41fa7ee684e9e299f120350
baseline version:
 ovmf                 992d5451d19b93635d52db293bab680e32142776

Last test of basis   175565  2023-01-04 06:42:15 Z    0 days
Testing same since   175567  2023-01-04 10:13:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dun Tan <dun.tan@intel.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   992d5451d1..9ce09870e7  9ce09870e721efacc41fa7ee684e9e299f120350 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 13:11:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 13:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471182.730918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD3YI-0005XS-FP; Wed, 04 Jan 2023 13:11:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471182.730918; Wed, 04 Jan 2023 13:11:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD3YI-0005XL-CP; Wed, 04 Jan 2023 13:11:30 +0000
Received: by outflank-mailman (input) for mailman id 471182;
 Wed, 04 Jan 2023 13:11:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aavW=5B=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pD3YG-0005XF-BX
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 13:11:29 +0000
Received: from sonic316-54.consmr.mail.gq1.yahoo.com
 (sonic316-54.consmr.mail.gq1.yahoo.com [98.137.69.30])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 499b7c2f-8c31-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 14:11:25 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic316.consmr.mail.gq1.yahoo.com with HTTP; Wed, 4 Jan 2023 13:11:22 +0000
Received: by hermes--production-ne1-7b69748c4d-dzr9v (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 5f71576bf612bcab338a616d92656601; 
 Wed, 04 Jan 2023 13:11:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 499b7c2f-8c31-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672837882; bh=9KFOB9JL9CLZbbSE/mPnz5kluCg2oPK3oquc5XsZ7fw=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=Z4wUGpXg9lLdhSQCcDPDA6OZ6NhRx03NFdWfYG03mAK0xqRfXLOB/hOM2hjF0xe8yd0LQH9mYzWFlxdnLpMHS4PjBYlq0kaFUUbuBVMgrmUn8Nyt08HVsX96Szk+VLgiI71X2Qq9bUCHimXdXWK+C07g1apX8DCE2pVcrzEFb8yAyPhE41HiXctuV7ec4ENvvbe0rBnRb48rv8ZzK8BCxQm7C+xdvDXk8idqsHJQnKeTz2Dy/a6enTZCuLo5Pt/zASfVtfU7tKqQg5erj2sxiQfKBL7SEXpANc9IwPejBZ9t63cgvAmvaD5fkuJMeS4IzB70yVeK+5ZlJopYCQ184w==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672837882; bh=NiaQdQUKSXqj+/3FoQox87OgVYUchmxnGWiebKNaBbT=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=ij+iOgayTlmgInCanbFFrLJKs8XttBMJo7nZNtVgxapvaicNQ+WmRicwxdnaTJfcZdbKfRIy1ORkkoVDR17X1iurZwKzIZgBcupGw6DgNfNNyhdMlAUgLd3eUWqPdeljEvqZ6r8qmGA/Y6pmTGsW2fP7+XdXKqcAQM83hkLMNGPaPxMYk3R0u6KkaI4IWDTsBGb7+SWsz7aAvgSajI1HpwkLFjWk3Am/ZDp34YdohzMrHG22j2bEVeAnGlKGn6FBVepLjUgyaD9waIrMlYmlnLhKzmby03fWz7v/xchBFgc7e8OvlbXeaHPXATvhNr/lsPYnSRp+o6Sj+WcqF7NvHQ==
X-YMail-OSG: n1H3IGsVM1loDdT7ZhBKWbz4cuAZ.SQTUKrfCLV3b66hcka0Wr5vXQs0K7G2O4a
 Pv3SQUoN110Ab4muX1TA0xNP0vjRa.a5ncz89pv5MU5JZwATM6urstOsUHcchhqRYh0P8M4oPZeQ
 XUodIbreLkjgp3nATolbTgX_9IQX6ZKeDnv5f9zUCXEQyoqYUoC4uwLYz9JgoTTWAwMOeiqsrU2P
 O8FHKEvXm6ezl7mJjUHZ5DNaABq1tsla1IxTYH3_4FX2er5vRhpYpjq6kopMyvTJGt5g.b_2EKfx
 Z56Qe1D40a_IPjqKfRLPwddcidkHlFHU0OF4EfpG7h1PvNYJECSEwawqCwkQ_Bwzf6fJNTeLfv2x
 FgvVJA6_y9yXhS6AuK6KS_AwelVo2YSr2MItKQGHl39gj8lSYg7v8gC2yPxIPHtucvnOZ5mO9ewX
 ztXJLxo2WUkFg3xTA7F4Pa2fAZ8rFeVW63ACln1X_x98Jy4NZLcmsTDX8KrS85D.Adr3964bcolZ
 0uH9mT6LHItN_Qb3vKlhRds5bYwgbfMvrx78QCKHT5IbnIV57ncmN1h4YKqpL.LkjCBUVJxdeLJq
 Vqx9bZv0BQl74Y_M9KKH0B7EQdhX54LcC.ESRtZHuC0nK44sSqK.BLWPnPYVo.lM5hZIgwZjd234
 NExT6Mt1kQ4C.WEWvkhV2c8fOVg9qMItdZuzfoaT0tCPh8ZVZur6ESsKYruP3dqEEPy8iRTETLiC
 5CoQ9oweZhpPoclg4OejckehtSeK6r5iVYy70XVcNgHLsNgmdBwhtSG8ldRdCdGLpiheUPf3IhXP
 EpJOoTkNTwfGaH2Gpng.hQ1JITaCyv906CZlGLttyCddtgku4UD9UbD97VNEp4Eaols6O84xgyB7
 GMSGFqjr3xzFGw.ADJn1NnHXB4_8a.rMv.6C.YkjaAbN4u34qr7ZStcrO0QdKV9Aya0w4GMRuVJ4
 BPHFB5nITpreOTrezrOzxNKIAvlk_smur7OtMPr263JDsl.EayZBcedMsQTF5ulux6maH8EZeGBo
 xcqX_qhaEsVZeVHtrk5W1SPzIOzyWqOU9PdtTxxXUbsyTyTIqPQZLfkvUq5lROtP3dcO87I5Vlvz
 IowI3ZEgl5e90cwl1mpsXSfLzYsd7M6S0yR3rl6OTLNkx8f0opeOy84_aZnx1k.qtPE9vikA8v3E
 Y2l6Fx927_OcOC1GVG.XhG3jJLq2hjBbxVAO.zLwhOZHEKbmIZl6E.QISDMxyjxFL6535NPJ9yzU
 UE80GzbJ3pbjnHRK0OMlxDyFdi9yX.5zpKrz1RZyvwzWFlBNY5dXIP91RR1jxgGGFx9JRPF3G5.B
 atmfH3u7DY2DxlyqeAjUCHuZUfo34g3Ts64wCBOdAnVuNRzjTiSFgkKkoatEmZEc1_2NTKd3Fp20
 6v70bJPDB_GwzrsycSug6wQrtVCnDmjNmoEiKaQcXk88KQBVO1UkerMbNun2iHwi5C_RTBzjtj8b
 FLZaHCX6Dgua19BJvvFWXlgI2w.LWZU4Ru533E703a8z3okxeCkh_KTVnsmBAeN0CZYIpOxDEACw
 CKkJ0LAfJWA.yJCdMDkXklm82Wm_luHu7MPGZ04Fvmgd4tRx3mvHsAOV5FQfwPjiFCIcXajc0Xbf
 xF9r20rVlbZaIo6lDNzRb8cyjntoucLuVMrtg7Ncy1J1QzYW0nEoXlIoQ2U2vWZ3mVx2PxxGqgwY
 Ogdm0A100uX1R4hwVSxj2jmC.gDBp6ekwxAzCCf5hbCqKwYOpc1zCLf0Ksd6QuvQrnuu8R4DMvxY
 55.fuzprOyNLCaxO5vj4WHmQr6HMA3fqGvZgOILONvBSBVN3P1B2dx1ZkRwJXrWIldV4h29WV3HG
 NbSVjhaueaHiAN3OB2_.JIqa_0J7tGFnbQVWOPfXJtOcSmuaHuQHTzda3He1oCmttJ5abz6u8qxQ
 NGWCc1jOhS6PU6Js55BS7VtJKR0KpNjBLBUYeBNUOX18OPiV_jfg1aln8DNIAzVanXikm.P5ORLC
 wyO0gpzGYTgFjwLFhg2UodfOHX9WDGLEIy6Sy53euhHfP4bRqTKiXfWIuEtKobIhf67npHixRiQ_
 5xBoIW4GJ6fQLCMPqnminlF0ALGUSqSwiMubqRfJV9wCIMhRGoRBxxHY2tKpcxCGeHrKGTFq5FBw
 uoQERC7qiyAiKFFarQc6tbRDes5jQv_2lxPn58JdHVgp1N8ObbmRfdwLHceQllHgYw8prmUHOFgb
 gfMHBTfGjIPcouIdEkUbefA--
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <00094755-ca61-372d-0bcf-540fe2798f5c@aol.com>
Date: Wed, 4 Jan 2023 08:11:16 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
To: Bernhard Beschow <shentey@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost
 <eduardo@habkost.net>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
References: <20230102213504.14646-1-shentey@gmail.com>
 <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com>
 <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org>
 <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com>
 <aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com>
 <AB058B2A-406E-487B-A1BA-74416C310B7A@gmail.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <AB058B2A-406E-487B-A1BA-74416C310B7A@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 7404

On 1/4/2023 7:13 AM, Bernhard Beschow wrote:
> Am 4. Januar 2023 08:18:59 UTC schrieb Chuck Zmudzinski <brchuckz@aol.com>:
> >On 1/3/2023 8:38 AM, Bernhard Beschow wrote:
> >>
> >>
> >> On Tue, Jan 3, 2023 at 2:17 PM Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
> >>
> >>     Hi Chuck,
> >>
> >>     On 3/1/23 04:15, Chuck Zmudzinski wrote:
> >>     > On 1/2/23 4:34 PM, Bernhard Beschow wrote:
> >>     >> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
> >>     >> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
> >>     >> machine agnostic to the precise southbridge being used. 2/ will become
> >>     >> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
> >>     >> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
> >>     >>
> >>     >> Testing done:
> >>     >> None, because I don't know how to conduct this properly :(
> >>     >>
> >>     >> Based-on: <20221221170003.2929-1-shentey@gmail.com>
> >>     >>            "[PATCH v4 00/30] Consolidate PIIX south bridges"
> >>
> >>     This series is based on a previous series:
> >>     https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
> >>     (which is itself based on a yet earlier series).
> >>
> >>     >> Bernhard Beschow (6):
> >>     >>    include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
> >>     >>    hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
> >>     >>    hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
> >>     >>    hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
> >>     >>    hw/isa/piix: Resolve redundant k->config_write assignments
> >>     >>    hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
> >>     >>
> >>     >>   hw/i386/pc_piix.c             | 34 ++++++++++++++++--
> >>     >>   hw/i386/xen/xen-hvm.c         |  9 +++--
> >>     >>   hw/isa/piix.c                 | 66 +----------------------------------
> >>     >
> >>     > This file does not exist on the Qemu master branch.
> >>     > But hw/isa/piix3.c and hw/isa/piix4.c do exist.
> >>     >
> >>     > I tried renaming it from piix.c to piix3.c in the patch, but
> >>     > the patch set still does not apply cleanly on my tree.
> >>     >
> >>     > Is this patch set re-based against something other than
> >>     > the current master Qemu branch?
> >>     >
> >>     > I have a system that is suitable for testing this patch set, but
> >>     > I need guidance on how to apply it to the Qemu source tree.
> >>
> >>     You can ask Bernhard to publish a branch with the full work,
> >>
> >>
> >> Hi Chuck,
> >>
> >> ... or just visit https://patchew.org/QEMU/20230102213504.14646-1-shentey@gmail.com/ . There you'll find a git tag with a complete history and all instructions!
> >>
> >> Thanks for giving my series a test ride!
> >>
> >> Best regards,
> >> Bernhard
> >>
> >>     or apply each series locally. I use the b4 tool for that:
> >>     https://b4.docs.kernel.org/en/latest/installing.html
> >>
> >>     i.e.:
> >>
> >>     $ git checkout -b shentey_work
> >>     $ b4 am 20221120150550.63059-1-shentey@gmail.com
> >>     $ git am
> >>     ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
> >>     $ b4 am 20221221170003.2929-1-shentey@gmail.com
> >>     $ git am
> >>     ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
> >>     $ b4 am 20230102213504.14646-1-shentey@gmail.com
> >>     $ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx
> >>
> >>     Now the branch 'shentey_work' contains all the patches and you can test.
> >>
> >>     Regards,
> >>
> >>     Phil.
> >>
> >
> >Hi Phil and Bernhard,
> >
> >I tried applying these 3 patch series on top of the current qemu
> >master branch.
> >
> >Unfortunately, I saw a regression, so I can't give a tested-by tag yet.
>
> Hi Chuck,
>
> Thanks for your valuable test report! I think the culprit may be commit https://lists.nongnu.org/archive/html/qemu-devel/2023-01/msg00102.html where 128 PIRQs are now considered rather than four. I'll revisit my series and prepare a v2 in the coming days. I think there is no need to test v1 further.
>
> Thanks,
> Bernhard

Hi Bernhard,

Thanks for letting me know I do not need to test v1 further. I agree the
symptoms point to an IRQ problem: it looks like IRQs associated with
the emulated USB tablet device are not making it to the guest with the
patched v1 PIIX device on Xen.

I will look for your v2 in the coming days and try it out as well!

Best regards,

Chuck

>
> >
> >Here are the details of the testing I did so far:
> >
> >Xen only needs one target, the i386-softmmu target which creates
> >the qemu-system-i386 binary that Xen uses for its device model.
> >That target compiled and linked with no problems with these 3
> >patch series applied on top of qemu master. I didn't try building
> >any other targets.
> >
> >My tests used the xenfv machine type with the xen platform
> >pci device, which is ordinarily called a xen hvm guest with xen
> >paravirtualized network and block device drivers. It is based on the
> >i440fx machine type and so emulates piix3. I tested the xen
> >hvm guests with two different configurations as described below.
> >
> >I tested both Linux and Windows guests, with mixed results. With the
> >current Qemu master (commit 222059a0fccf4 without the 3 patch
> >series applied), all tested guest configurations work normally for both
> >Linux and Windows guests.
> >
> >With these 3 patch series applied on top of the qemu master branch,
> >which tries to consolidate piix3 and piix4 and resolve the xen piix3
> >device that my guests use, I unfortunately got a regression.
> >
> >The regression occurred with a configuration that uses the qemu
> >bochs stdvga graphics device with a vnc display, and the qemu
> >usb-tablet device to emulate the mouse and keyboard. After applying
> >the 3 patch series, the emulated mouse is not working at all for Linux
> >guests. It works for Windows guests, but the mouse pointer in the
> >guest does not follow the mouse pointer in the vnc window as closely
> >as it does without the 3 patch series. So this is the bad news of a
> >regression introduced somewhere in these 3 patch series.
> >
> >The good news is that by using guests in a configuration that does
> >not use the qemu usb-tablet device or the bochs stdvga device but
> >instead uses a real passed through usb3 controller with a real usb
> >mouse and a real usb keyboard connected, and also the real sound
> >card and vga device passed through and a 1920x1080 HDMI monitor,
> >there is no regression introduced by the 3 patch series and both Linux
> >and Windows guests in that configuration work perfectly.
> >
> >My next test will be to test Bernhard's published git tag without
> >trying to merge the 3 patch series into master and see if that also
> >has the regression. I also will double check that I didn't make
> >any mistakes in merging the 3 patch series by creating the shentey_work
> >branch with b4 and git as Phil described and compare that to my
> >working tree.
> >
> >I also will try testing only the first series, then the first series and the
> >second series, to try to determine in which of the 3 series the regression
> >is introduced.
> >
> >Best regards,
> >
> >Chuck



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 14:35:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 14:35:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471195.730939 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD4rg-0005el-Pe; Wed, 04 Jan 2023 14:35:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471195.730939; Wed, 04 Jan 2023 14:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD4rg-0005ee-Mk; Wed, 04 Jan 2023 14:35:36 +0000
Received: by outflank-mailman (input) for mailman id 471195;
 Wed, 04 Jan 2023 14:35:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+XhT=5B=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pD4rf-0005eY-AD
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 14:35:35 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2042.outbound.protection.outlook.com [40.107.21.42])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0b218fdf-8c3d-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 15:35:33 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7158.eurprd04.prod.outlook.com (2603:10a6:20b:120::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 14:35:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 14:35:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b218fdf-8c3d-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RWhIRKcMsrRSFxT20nxy5pJ+TmXM6ob/S50ssBFPkAnzP+UsmWPoLXIZD9/aTimxf6ZtMqRGF1+JBPr4JJLVQTrtGQv7fIrTngqTXVV3oOtbAfCwFx8hn1RM1VQnNyFkQlIl1zl4BoBiioZO1U3Ea6hqUeDTYyCP4F+dgKqt+PpSJ0FadW2A+MshxzwxpNYAjcEhoheyEbDmhSdMkE07Ar0MBkrWALrtN+xgiPrhnC0zlBm67g7YpGI7LPNlSsI/SDFx6gvHS99K6JmJeGqZN6vLjQDV8egwIFWSVJEgkAtTsER5pGODn7OxFRkcCEC8xavzRvIqQtpqrjuipdILPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KSdnEC4H8uuzngem4U0xCx9Q9CnYDPx8Fuv7+TTrdUE=;
 b=gbJsV5NDu+XPYCZG8w2NKcgxS3JaRlLkB+d2AM7oDcSNxIt7lUaffGiPDA4bMumdHGrfAN/bY0ULkHINBQWo9p8BcnJqRNeTQvBmWormc+QreoyxFYga8a30dt0do6Oo80M8OCsgpRuWWKk4qZDNnFWbd0924QfzgtBn5xIXTbmp4/DdNoq0VIDGlAxz+Q8fixme8eb9m3BcXiDmzys6XdaP03A0cnnzOcq30GKI1mzxKWYqSbfApjShO88ii0QJq4biBC/c7yG3EzAJkjyvuz3VCOsos1M6bT13dj8Nc/gNfJLiJMoMZ8bFSDIsSimdLcN3cnpy0FzwZrQUACCS0w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KSdnEC4H8uuzngem4U0xCx9Q9CnYDPx8Fuv7+TTrdUE=;
 b=T6BWMIFozNHFtJfjNPJT3m0J64UMt1UGqpzy1mvEJWNyWX9urVnUZl+gAMDTIZFKyEBr4CY53fJu5g+M3pKb7dfExQ/a1qV+mh4I7tDEO19V3bKKNMklZc8IV/mF17lCNNdPWI16EuhQeXNthcotlolTHkYsPGaKd6n3urMBazG1TZQiaA45/cJREn0MYZrTL6Fga5GH/isJ8m4wruxV+HCDM6BTUcI1yK9UnEP5DEgPnlHyUPl6PcI9lIoZys7kz80XwJwSDa3krW84vkrHvA1gzjxgxLWVl2KwaFSk2jI2FilAHH9uAZ0ZKkB+bGY53VurkHOuoKIGs+L56p72/Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <37b87d38-de11-cc8d-0663-73c09e1dfd0c@suse.com>
Date: Wed, 4 Jan 2023 15:35:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] x86/cpuid: Infrastructure for leaves 7:1{ecx,edx}
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230104111146.2094-1-andrew.cooper3@citrix.com>
 <20230104111146.2094-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104111146.2094-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0090.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB7158:EE_
X-MS-Office365-Filtering-Correlation-Id: 41f0afcd-ee32-40e5-f8de-08daee60ee16
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	iEXdOrYZKGClUXAZax0JWRCl4iiSAgRtG5f6W1S/MSNZqQ6tvSfLpsgdiBA2IkvffM7hMH4eoqXWqH5ySD9LufxVJOT45Blk+oS0rF+B3SBoDlIBvuhMV4CNos7RUt5c/q0YmS1gxe/hFO3BAQyrTMsBin/wFmV0Ui1TibYXMld/YxiHwamXKR2pZ0TcsJxuItdNOX3AfLSdFfSikyjD1wr3wycAKK252QOG4VK264h9RFLJaHj1lBa+G6TpnFWCBCPB8bB3oznK/NfR+KsKLKBxK0sZwRlxAdpppyRGt6xoigjRTllHKQN4nbLQnNLgFtQ3CZS833jAVidXSiFOl7Yqe6mWsQyPsWB9nZT8e7jf9gUVYDxGF2+BRujoHvSlalS/OgRbOPZw6wXEP9up1GXYC6AvbZonuI0xMnTUPE/zfOjMAdodfILFZibihUwLhUg/XbvsxIrm9pWsfZGe4M6NQ817T2DbMABN51VprU7PQVXXpNkBcGwQxJBek5T9G3zIU/3nq9SJ6j+pZM7kxQxZQFDanMXmXrC/DiK0lKNxXczMonckp+vvIvxCgFH488arJCkBwGWDzM83o9g+pE7aTkUouQ25NCFciI2/Gg7ERj1qJx+xI1htlOJFcgN4k8anj1Y4/3Yfi388ph0kKMw1ccAYC0zjuUF7p9vZ++jmZ4dK+2/niNUrMPiqQRM5RHzVgkCx6KqdUcSTRR6s6DtMqq44SPPHytA9l0KbAoA=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(346002)(376002)(396003)(366004)(39860400002)(136003)(451199015)(31696002)(86362001)(83380400001)(6506007)(53546011)(36756003)(2616005)(26005)(186003)(6512007)(38100700002)(6486002)(478600001)(8936002)(31686004)(4326008)(66946007)(66556008)(8676002)(41300700001)(316002)(66476007)(54906003)(6916009)(2906002)(5660300002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cGdGb0doMjhkc2RQQW9talc5MWE0TGdwTzVqdjMrRFd0eTZEbTJQNjBQRVV4?=
 =?utf-8?B?VVhQMHd3ZFkwYWpxMzY0WVc5b29nUnM5cWpZYmRUbkxsMVdlZVhJMVVIeFpl?=
 =?utf-8?B?V3hqZTdjRFJPWHoxNjZkWDVHdGJUNlhPcVY3VWpqNlljbis2aUxaVTY4bFND?=
 =?utf-8?B?WHE3NzZmOEJ6bUNPWjh4OHNNWFJVSVNhdWkrOUUyWFJueUt5WlNXOGg2Skln?=
 =?utf-8?B?ZzBxNzdja0Qrcmg4V3RxampKZFQ3VWlXd21KbEFWOXNFYlNPTWJ3aHhET05H?=
 =?utf-8?B?MHB6WWZvZEova00xWkxmUXM1S0dITGVqNnpYMlpRTDNzeTlWeEcyTjJ6bG9u?=
 =?utf-8?B?cDBUd05ZZ1Y2RVBxbU1wamN5amZLbTFRRlFObGtLTFUvWjV3OCs5RzEvMk13?=
 =?utf-8?B?aG56dzlwcVppc3dadDZxZEFxcGp0cTNSODV1cTFTVzZtd1J5ZlVZaHo4ZTdG?=
 =?utf-8?B?OGsvMWdjWTJiSHEvaW1ncDJBbi9uTHA0MFFrQXJNcG5BMmtwTi9hbWk5Z3FG?=
 =?utf-8?B?aUlGUnZubEw4K1haVWlmVW9ieENPY0dZbWptQ0cvWjRVVzU5bmNXRjJTZnVU?=
 =?utf-8?B?cUxHd0I5SEZDZjNOQzhIaEVnL2FlRE82Nm9YVUE2UVEwNmdtSVp5OTNLMTdy?=
 =?utf-8?B?K21SQmVsYmNOcFlrQjdKdHZ1bWRNWFhUS0g2OHpMR2lZSmdmcWQ1dVJRMGVR?=
 =?utf-8?B?NGNDK0l5bzhVaFZYV25hMVV0b3RHRTZ5OVFQd3R2cm1aYzl3YVNrb0ZYRjVQ?=
 =?utf-8?B?dGpkQ2ZXRnlqMEN4c0t5UnlYd1BBMnJSd3JIYnVoT2tTTjdxSzh5TnN4OXQv?=
 =?utf-8?B?Z3ZUdk9RZmFOc3dobVg1eVVBa2hnRE9LZlRidGJiU0pNMlNmUHZqajlCSjVa?=
 =?utf-8?B?bGRicGF5KzFpQW5LUUk0TmZ5TnpBR0hYbUs0TTl6RzFuQml3bTRVYzlkVWJK?=
 =?utf-8?B?UFJ0dmZ0T05uMkpxM1RYanZkU0YxanJmS095aXJBTTduNEwyTlBiQ0xNNTZj?=
 =?utf-8?B?MWszU1F4bHhUaDdqWTZZczdsYTBHem1mQWw4L1ZqN1ZkOUJMalRtS1FCVkRp?=
 =?utf-8?B?TWlBY3ovWEFadk15UXdSSjlNSGMxc1BINHFZV1hrWGQrL2dJRGpRbjBlOUd5?=
 =?utf-8?B?RlRya2tPSzhPQnN6WjZWOW5oZ1dZTSt1Q2NXcm02TGMvajArSXNYc1lINWhL?=
 =?utf-8?B?TU9Tb0RwRDYxWldSNEZFNHZTLzhsQTBnanc4S0ZLa29GWWtxa2thOVh2T3FE?=
 =?utf-8?B?U09oWU5pY3ViMFduRVBuUnd5Y1ErcEQ4MFdndjl6cDhqRXJ4bVhxeUE4NUVj?=
 =?utf-8?B?UDZrTmdJaERIZGhsallZT3FNVnpURlJKSTBpS2VaSVhpNkxoYnJwYjhwalZ2?=
 =?utf-8?B?b0oxUXdDekR1V3NFU1lrRVI1eEpsMm4rSjJ0a0VMaCtaY2hSclNaZERCa0xC?=
 =?utf-8?B?Sm9ZWHRjZVg4WEVkK0JQVENCdzJ3WmlzSnF6Q2g1S0U2OUJuYnZ1RW1yWUxv?=
 =?utf-8?B?bVZSb0V1bGl0UlI1cGhXVjV5YXhwWE5Cd2RiNGRoVFNlemdQMGJIOGlRV3J3?=
 =?utf-8?B?QlpuQTRRQVIydGd2QVdacnFuQ3EyclhLeE9nWXVjRlErMjJKSktoZFdjaS9P?=
 =?utf-8?B?aXRGR0NnYUlML3ZGSFFFeFJRc1ZHMHNFWWZiR2ZuclNGQXhqTUVGTEsybXdW?=
 =?utf-8?B?Q0N5elg4ZmNRKzB4MlNDYXQyTEVoQzJ1MjJxb2VBTncrOTNJMUFzc2FWNDlS?=
 =?utf-8?B?Ulh2akNUeTJualhjaU8zL2tmcFlScTdnVG9FRTRUZ0JFKzhNbHBuRTI5Mk5B?=
 =?utf-8?B?bGtjNUZyN05OVzdXWXBDT2lmZE9vbVZkVzgvNXl1Zk9sQk0vWUlLRnJuRDVH?=
 =?utf-8?B?NmFyRS9pMWN0Nkc4eWN1eU10WEIxYzBiSXJQNGhLREM2aGpPd1VQNDNRSmVS?=
 =?utf-8?B?ZlRNeEplOHVZd0tlcUgrQ3lIZWJYQm4rdTltZk1mOUxaRTlPRjllZnl4eS9a?=
 =?utf-8?B?UTNBZFJtcVNCbURNaS9SVCtzTXhoSEFiUUllQ0NhTm9SbWZkRkxJc1V6b3hQ?=
 =?utf-8?B?WVVadk43QTRySmExMFNxaGIrVytob1gzNGZGcGtPUlBXRjZuMWo2Yndoamkv?=
 =?utf-8?Q?TC0NqRDThuNDsJsBJ13pvfDYb?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 41f0afcd-ee32-40e5-f8de-08daee60ee16
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jan 2023 14:35:30.6975
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wRr8THmXfwl5VM15QaJ9PE/X5HK7L2uzUN+cly8HTnGTImTIoLvCcDVIIFTi/RWykg+YGRV0U4l4L0YRxS28nQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB7158

On 04.01.2023 12:11, Andrew Cooper wrote:
> We don't actually need ecx yet, but adding it in now will reduce the amount to
> which leaf 7 is out of order in a featureset.
> 
> cpufeatureset.h remains in architectural leaf order for the sanity of anyone
> trying to locate where to insert new rows.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit ...

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -288,6 +288,9 @@ XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and
>  /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
>  XEN_CPUFEATURE(INTEL_PPIN,         12*32+ 0) /*   Protected Processor Inventory Number */
>  
> +/* Intel-defined CPU features, CPUID level 0x00000007:1.ecx, word 14 */
> +/* Intel-defined CPU features, CPUID level 0x00000007:1.edx, word 15 */
> +

... I'm not convinced that ordering these by anything other than their word
indexes is quite reasonable. We can't really predict in what order elements / leaves
get populated, as can best be seen from ...

>  /* Intel-defined CPU features, CPUID level 0x00000007:2.edx, word 13 */
>  XEN_CPUFEATURE(INTEL_PSFD,         13*32+ 0) /*A  MSR_SPEC_CTRL.PSFD */
>  XEN_CPUFEATURE(IPRED_CTRL,         13*32+ 1) /*   MSR_SPEC_CTRL.IPRED_DIS_* */

... subleaf 2 already having one entry, and that one not being eax, ebx,
nor ecx, but edx. AMD (extended) leaves would also always be sprinkled into
the middle of any such sequence. To me it ends up more confusing (perhaps
not right away, but in a couple of years' time) if one needs to go hunt for
what the next free index value would be. Similarly the need to re-base stuff
using non-upstream index values (as is the case for the KeyLocker leaves
that I have pending locally) is easier to notice when new sets are put at
the end of the existing list.

In any event, what I'd like to ask for as a minimum is that you insert a
blank line between the two new comments.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 14:45:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 14:45:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471204.730964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD51E-0007Sm-EA; Wed, 04 Jan 2023 14:45:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471204.730964; Wed, 04 Jan 2023 14:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD51E-0007Rs-9t; Wed, 04 Jan 2023 14:45:28 +0000
Received: by outflank-mailman (input) for mailman id 471204;
 Wed, 04 Jan 2023 14:45:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bCSi=5B=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pD51C-00079V-3l
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 14:45:26 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b8f66c4-8c3e-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 15:45:24 +0100 (CET)
Received: by mail-wr1-x429.google.com with SMTP id s9so1187877wru.13
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 06:45:24 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 e10-20020a5d594a000000b0028663fc8f4csm21168241wri.30.2023.01.04.06.45.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 06:45:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b8f66c4-8c3e-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qwwBmjkQ+lNHH0kKiW9e0yfr8jRksndzOx0yQHLnF3U=;
        b=Wch2+6IG0obcLWirCwO0uN2viNNRWGwjqAFhW/ZYw1fchDlIbgWmuB3c8dlern5JGN
         4zUs9Y2GjuybCLMXzn2i3IB2HpMdnJ3HOfcR0Id/ZqbSM3D2zZU/HKa/camSQegFxTMw
         AHdCTPkE+aAGidOO551rDDq7D9WH+cxg/4sMPHQ+cM+9wzSWfesq1yH6xy47O9xTEoFd
         eZfdHHVnJEmwB7VwCBj9m9+mWDLchXhhkYcMqSmiWNviCG4U3NvQshROvXH0TUL+D83f
         7bjrqMxVbRF0Ka56hPzfPNov9BdR1GZ9Rp8kUFHu/C6dGAo3y08PsXTkRemZcP/M5Y9L
         Ojww==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=qwwBmjkQ+lNHH0kKiW9e0yfr8jRksndzOx0yQHLnF3U=;
        b=ha4OAtcP3A//iPmL22Htk6mlfmsxKkyCAtcj+wFfNTPK4W7NfBUH92dNVbuZ67/5DU
         HtjyJmAhsuDpIwtPWRa+sQQKbBblXN81VJuUjpF83odvMT8vMQxvbkt6OxjoowTKckLR
         lxyx/znmf5sWnsNGpvneX5P85Ay1mNIIjvKepEVtVX/JMFnWKooUZIMNcvWBmVH/uWrK
         KzWZfQ2nNiTLmhJNwgIgXio1uNKwaY6VVNGaOM/dxxozJWYsXMVPHprs4YAOrhuMKNns
         /zAgwhbmCcpoTsZjmqhx0wGUXTu5dtNOVneIxnM9/+OW1Wn0n24FVDk3ZtRtauvlGNRI
         /CmA==
X-Gm-Message-State: AFqh2ko9ayom09+ubbFwtjokwIp9A8wUPW7/gXxOyCIVvQkxui1NDcnH
	jcji60g8/Uy8LHvfgf2NT54=
X-Google-Smtp-Source: AMrXdXv1Bz9WYYfmdlxF76KLvJgFfsoIF1V0rIeHDad7Wu77HbMKWyWlh26v/oTTwkdlUrXJkWzplw==
X-Received: by 2002:a05:6000:1d84:b0:273:6845:68ef with SMTP id bk4-20020a0560001d8400b00273684568efmr24329892wrb.60.1672843523618;
        Wed, 04 Jan 2023 06:45:23 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v2 1/6] include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
Date: Wed,  4 Jan 2023 15:44:32 +0100
Message-Id: <20230104144437.27479-2-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230104144437.27479-1-shentey@gmail.com>
References: <20230104144437.27479-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_piix3_set_irq() isn't PIIX-specific: PIIX is a single PCI device,
while xen_piix3_set_irq() maps multiple PCI devices to their respective
IRQs, which is board-specific. Rename xen_piix3_set_irq() to communicate
this.

Also rename XEN_PIIX_NUM_PIRQS to XEN_IOAPIC_NUM_PIRQS, since it is Xen's
IOAPIC, rather than PIIX, that has this many interrupt routes.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/xen/xen-hvm.c | 2 +-
 hw/isa/piix.c         | 4 ++--
 include/hw/xen/xen.h  | 2 +-
 stubs/xen-hw-stub.c   | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index e4293d6d66..558c43309e 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -142,7 +142,7 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
     return irq_num + (PCI_SLOT(pci_dev->devfn) << 2);
 }
 
-void xen_piix3_set_irq(void *opaque, int irq_num, int level)
+void xen_intx_set_irq(void *opaque, int irq_num, int level)
 {
     xen_set_pci_intx_level(xen_domid, 0, 0, irq_num >> 2,
                            irq_num & 3, level);
diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index ae8a27c53c..a7a4eec206 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -38,7 +38,7 @@
 #include "migration/vmstate.h"
 #include "hw/acpi/acpi_aml_interface.h"
 
-#define XEN_PIIX_NUM_PIRQS      128ULL
+#define XEN_IOAPIC_NUM_PIRQS    128ULL
 
 static void piix_set_irq_pic(PIIXState *piix, int pic_irq)
 {
@@ -504,7 +504,7 @@ static void piix3_xen_realize(PCIDevice *dev, Error **errp)
      * connected to the IOAPIC directly.
      * These additional routes can be discovered through ACPI.
      */
-    pci_bus_irqs(pci_bus, xen_piix3_set_irq, piix3, XEN_PIIX_NUM_PIRQS);
+    pci_bus_irqs(pci_bus, xen_intx_set_irq, piix3, XEN_IOAPIC_NUM_PIRQS);
 }
 
 static void piix3_xen_class_init(ObjectClass *klass, void *data)
diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index afdf9c436a..7c83ecf6b9 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -22,7 +22,7 @@ extern bool xen_domid_restrict;
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 int xen_set_pci_link_route(uint8_t link, uint8_t irq);
-void xen_piix3_set_irq(void *opaque, int irq_num, int level);
+void xen_intx_set_irq(void *opaque, int irq_num, int level);
 void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
 int xen_is_pirq_msi(uint32_t msi_data);
 
diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 34a22f2ad7..7d7ffe83a9 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -15,7 +15,7 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
     return -1;
 }
 
-void xen_piix3_set_irq(void *opaque, int irq_num, int level)
+void xen_intx_set_irq(void *opaque, int irq_num, int level)
 {
 }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 14:45:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 14:45:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471202.730950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD51C-00079t-OE; Wed, 04 Jan 2023 14:45:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471202.730950; Wed, 04 Jan 2023 14:45:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD51C-00079m-L2; Wed, 04 Jan 2023 14:45:26 +0000
Received: by outflank-mailman (input) for mailman id 471202;
 Wed, 04 Jan 2023 14:45:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bCSi=5B=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pD51B-00079V-BY
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 14:45:25 +0000
Received: from mail-wr1-x42f.google.com (mail-wr1-x42f.google.com
 [2a00:1450:4864:20::42f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b14eac9-8c3e-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 15:45:23 +0100 (CET)
Received: by mail-wr1-x42f.google.com with SMTP id w1so21082853wrt.8
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 06:45:23 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 e10-20020a5d594a000000b0028663fc8f4csm21168241wri.30.2023.01.04.06.45.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 06:45:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b14eac9-8c3e-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=8AWtfe719mk9eM2p93lKeuen7Jx5n1pIewca97MBqdc=;
        b=OwFsjrcMxKj1s4H/fotfYoCBwsygaVEcANDEtkGKlRyfBhRTz2OCJEfvXLHdpvGD7x
         1UEQXKL/fkPiuknoLjoM5DN7aY70nsceK2OJjfdaAgw9ncTN9zqwANYNKuVh9/VYPIav
         Gxa5DzQLOhl4wRP4fdUdJRo0/0J+NooIx+ufMBgihzp8h8ReOBqkfcMdo5OIuU7w6C1T
         xeIFvIeZUCH0v04IqmRX6RWe7b9aT6myshNL5uBQppaZzdT6fjevlsqcM0Ali9GqYGyM
         3aA5vHjcTLoKc3+eO5xe3fGq8g9ZH0qb1jafzvK3ILB8i4T2FQ5zM5m/u/LZkPbBN/GS
         +UMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=8AWtfe719mk9eM2p93lKeuen7Jx5n1pIewca97MBqdc=;
        b=Y+BI5Lx0ljBGFGeV01i+XvUq18IBuXCIzgOikKUY5CTWVKZ5eN1rQwNuOyL5/N5/mj
         pwgwec873LDLleg5xtmK420SbNUBb+7XTPmD9zQdwoeqGc0ggFMk7mq2xSy+MOiNsJlo
         fdwap9bmVJLFp5Qr0qcX6SZb9gxPPb03fCQSJjiJ9IxPMKDV7fPuXFv4RXhsGWQjDDwN
         lFxKLxfjsxF58VmbSGy1sModkypnlygJll4KSvuDBGCMwMV+vxR6eypfFsQlJmt9dFQ3
         b/ArioVeZJEbqiM18IdYP4bXocxprc8HlzAoiTrPhn78GCPrkJeQLgQlA4KcR6lwUInO
         A63g==
X-Gm-Message-State: AFqh2korvb/y1hK+QB8RwuTWj3WabA+qKQ5+2Om5tPdtsg0MAXTjGKMW
	jfIVJJmtiAKRlP8SE0mR8SE=
X-Google-Smtp-Source: AMrXdXvIETTxnaRaRvm9hcFDe31cy8g3McWj56E8OyCHBH1xc4SjLtjj5azN8me5MaQQNXaIHCKkjw==
X-Received: by 2002:a5d:5e81:0:b0:244:e704:df2c with SMTP id ck1-20020a5d5e81000000b00244e704df2cmr34565713wrb.57.1672843522692;
        Wed, 04 Jan 2023 06:45:22 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v2 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
Date: Wed,  4 Jan 2023 15:44:31 +0100
Message-Id: <20230104144437.27479-1-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
machine agnostic to the precise southbridge being used. 2/ will become
particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
the "Frankenstein" use of PIIX4_ACPI in PIIX3.

v2:
- xen_piix3_set_irq() is already generic. Just rename it. (Chuck)

Testing done:
None, because I don't know how to conduct this properly :(

Based-on: <20221221170003.2929-1-shentey@gmail.com>
          "[PATCH v4 00/30] Consolidate PIIX south bridges"

Bernhard Beschow (6):
  include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
  hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
  hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
  hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
  hw/isa/piix: Resolve redundant k->config_write assignments
  hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE

 hw/i386/pc_piix.c             | 34 ++++++++++++++++--
 hw/i386/xen/xen-hvm.c         |  2 +-
 hw/isa/piix.c                 | 66 +----------------------------------
 include/hw/southbridge/piix.h |  1 -
 include/hw/xen/xen.h          |  2 +-
 stubs/xen-hw-stub.c           |  2 +-
 6 files changed, 35 insertions(+), 72 deletions(-)

-- 
2.39.0


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 14:45:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 14:45:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471205.730972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD51E-0007WK-RO; Wed, 04 Jan 2023 14:45:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471205.730972; Wed, 04 Jan 2023 14:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD51E-0007Ur-Jr; Wed, 04 Jan 2023 14:45:28 +0000
Received: by outflank-mailman (input) for mailman id 471205;
 Wed, 04 Jan 2023 14:45:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bCSi=5B=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pD51D-00079V-SS
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 14:45:27 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6cc26dac-8c3e-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 15:45:26 +0100 (CET)
Received: by mail-wr1-x433.google.com with SMTP id bk16so19994157wrb.11
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 06:45:26 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 e10-20020a5d594a000000b0028663fc8f4csm21168241wri.30.2023.01.04.06.45.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 06:45:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6cc26dac-8c3e-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jWsSUpGWnwKoxzlBa2RTErBbFhPwkbaywyVJwsOJ9Nk=;
        b=YlPrLritShhHTaeSGbLlb++n9XqS9VBhEgsy2PwwgydtFFm8UrBGDJ7iUMy8EaMHeH
         9SBO1n2DyEa7U5jCTGTOQDdlOVkvnVOt1hp8JY3uNc4BSSwglDnmGx2Ero35GpcLb5UZ
         lxDNlGzSW/OY/+wh4qjm3mF5gciV5251nu62u5qkggcU3PaBu3OtH7lvV3LzAIHUHPpY
         i/fR6+7MuwJBxH89wLn/4BkSrlZxnPIIZCNrMfecj4xhrjBhXyY2bE6HPVdgtcf6jPaJ
         iN/TGG+HuHA766ERLaC5giouVQ+KfyHIHRbdPCXS9JqqjfVfCA6wQA2G70I6E8iUvn/6
         2/8A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jWsSUpGWnwKoxzlBa2RTErBbFhPwkbaywyVJwsOJ9Nk=;
        b=7qubFAMSt0fgnIvn0DjngdK/c1YJK3nunCjBytT/lxypyWA2pIpWEv4Fgw7FaHgtL3
         hGFuPLIJHGeES8zA0VYyUsGEE5JU/hOUxuKeLh25Zt/96MHVEbmXj/8Tcsi4M/nrDMkH
         fg2X8bU2ro69SvUUUNv/gVvWniIBppFycOwAIjyI7gom+L+dKLb8GQSC+iB7a0ThkXlA
         HikQfR8zytqpXEXh4lc97n54B0YkbMuF+1L+1TcYwoZNli8jYyenYayZJn9GamHU+YN0
         ZqRoaMbhYwsJyGeD3Uwo6EFVnjVpWuZ5cTJqJFi5VeobGLTTB093jOroWCuKHnEzY6B8
         xa5Q==
X-Gm-Message-State: AFqh2ko0JyJW1tJrzpeJBaM5z/YbJf9LNMrxatT+TxPR58ULo2CD7c13
	uiTha4ScvyKUiHT0dgnVq1w=
X-Google-Smtp-Source: AMrXdXtdVz6/t/nUcG3XroMW/zOMXTIelUM5J1RHwOTKB+xhi7kJEMzBdrFkmDsmcHHUWZlTsIodDw==
X-Received: by 2002:adf:f4c5:0:b0:291:3f93:b7d1 with SMTP id h5-20020adff4c5000000b002913f93b7d1mr11327512wrp.64.1672843525599;
        Wed, 04 Jan 2023 06:45:25 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v2 3/6] hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
Date: Wed,  4 Jan 2023 15:44:34 +0100
Message-Id: <20230104144437.27479-4-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230104144437.27479-1-shentey@gmail.com>
References: <20230104144437.27479-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_intx_set_irq() doesn't depend on PIIX state. In order to resolve
TYPE_PIIX3_XEN_DEVICE and to make Xen agnostic about the precise
southbridge being used, set up Xen's PCI IRQ handling for PIIX3 in the
board code instead.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/pc_piix.c | 12 ++++++++++++
 hw/isa/piix.c     | 24 +-----------------------
 2 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index aacdb72b7c..792dcd3ce8 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -67,6 +67,7 @@
 #include "kvm/kvm-cpu.h"
 
 #define MAX_IDE_BUS 2
+#define XEN_IOAPIC_NUM_PIRQS 128ULL
 
 #ifdef CONFIG_IDE_ISA
 static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
@@ -246,6 +247,17 @@ static void pc_init1(MachineState *machine,
                                  &error_abort);
         pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
 
+        if (xen_enabled()) {
+            /*
+             * Xen supports additional interrupt routes from the PCI devices to
+             * the IOAPIC: the four pins of each PCI device on the bus are also
+             * connected to the IOAPIC directly.
+             * These additional routes can be discovered through ACPI.
+             */
+            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
+                         XEN_IOAPIC_NUM_PIRQS);
+        }
+
         dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
         for (i = 0; i < ISA_NUM_IRQS; i++) {
             qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index 25707479eb..ac04781f46 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -38,8 +38,6 @@
 #include "migration/vmstate.h"
 #include "hw/acpi/acpi_aml_interface.h"
 
-#define XEN_IOAPIC_NUM_PIRQS    128ULL
-
 static void piix_set_irq_pic(PIIXState *piix, int pic_irq)
 {
     qemu_set_irq(piix->pic.in_irqs[pic_irq],
@@ -487,33 +485,13 @@ static const TypeInfo piix3_info = {
     .class_init    = piix3_class_init,
 };
 
-static void piix3_xen_realize(PCIDevice *dev, Error **errp)
-{
-    ERRP_GUARD();
-    PIIXState *piix3 = PIIX_PCI_DEVICE(dev);
-    PCIBus *pci_bus = pci_get_bus(dev);
-
-    piix3_realize(dev, errp);
-    if (*errp) {
-        return;
-    }
-
-    /*
-     * Xen supports additional interrupt routes from the PCI devices to
-     * the IOAPIC: the four pins of each PCI device on the bus are also
-     * connected to the IOAPIC directly.
-     * These additional routes can be discovered through ACPI.
-     */
-    pci_bus_irqs(pci_bus, xen_intx_set_irq, piix3, XEN_IOAPIC_NUM_PIRQS);
-}
-
 static void piix3_xen_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
     k->config_write = piix3_write_config_xen;
-    k->realize = piix3_xen_realize;
+    k->realize = piix3_realize;
     /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
     dc->vmsd = &vmstate_piix3;
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 14:45:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 14:45:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471203.730961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD51E-0007PI-45; Wed, 04 Jan 2023 14:45:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471203.730961; Wed, 04 Jan 2023 14:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD51E-0007P7-0i; Wed, 04 Jan 2023 14:45:28 +0000
Received: by outflank-mailman (input) for mailman id 471203;
 Wed, 04 Jan 2023 14:45:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bCSi=5B=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pD51C-00079b-56
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 14:45:26 +0000
Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com
 [2a00:1450:4864:20::42a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6c262db5-8c3e-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 15:45:25 +0100 (CET)
Received: by mail-wr1-x42a.google.com with SMTP id s9so1187925wru.13
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 06:45:25 -0800 (PST)
Received: from osoxes.fritz.box
 (p200300faaf0bb2009c4947838afc41b6.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:9c49:4783:8afc:41b6])
 by smtp.gmail.com with ESMTPSA id
 e10-20020a5d594a000000b0028663fc8f4csm21168241wri.30.2023.01.04.06.45.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 06:45:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c262db5-8c3e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=skngD2C07lkAR57VFiC9hSpSkRXE8QLpJkhdrtzHaoE=;
        b=fglIPWzWe7OF+ReYiVQjbIlqvtMvTu/HBo031zXX2Dmr6PoRqVzCCMudfgPYd4aGRi
         ZzXHYC+LUG8aYoxMQ2QopDZUsHWe8+I3MBPfz33YHbBlA0wpSiPunpMSxDNEQu3gic8o
         tjY9LXxtHVjsr5RFAPk3J091DyzHz40da+k0hi4YR2NRpwGdMpmrPobxn+rII4pfszR+
         ABmHKixdWzp6dCP+vGUOG1X6vLOHELWO4oX1VeVkM5l0HNpdBbKdWd/OSHzgbTTCC2Ov
         ZZVVZUIoXft/7/vDq3sRETSoVxCTNiDIkVm5XL0QJYkZbABRQHyxfDTifj0y4vNNeuN2
         dMvg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=skngD2C07lkAR57VFiC9hSpSkRXE8QLpJkhdrtzHaoE=;
        b=kg0lubPSIw99ddQbqcMxO2UKuAKQ487UQqg1qktYtMniSR30oTmTs6q81N4JP3eI0i
         yFw91Zu6sCl2XRa9ZFvPZ18DHKcsprpvXjsrmryo8kvWMKXUI5ou/U0cd3G7qUxd9JpH
         ZGmouFnqsOi62gtiFCCMUnmBLW8OoFovgUYsU1UI4xBVD50mlseRbgp0acqoIJ0MfqiG
         loIRgnik8o6qPN7piuSN7sTDXdy7HEUZmdfoynvAdPrpX8g/aRexww5q/5W25s1SL6hT
         P0OwQmVUxJvx/RjpTYnQMfT81mJN7SwcJI33tn1MJ5vwbNnO+ArJWRhMXNG8Nn77on6l
         O2fw==
X-Gm-Message-State: AFqh2kpk7jXHU7ldk5yEkqrkdWIhooaAqwZc42X8ny+AtmfkiQjkJ2jn
	NKmD4AgHlgHcPMoGb3fg2Io=
X-Google-Smtp-Source: AMrXdXsN3egZuow7sNDCctPG+VunNJGlKj4E12ZlT9Um1FBGzDz97HrRjJAVFxuMN8+oPIQ6UyZajQ==
X-Received: by 2002:adf:fa0f:0:b0:28f:b480:f4 with SMTP id m15-20020adffa0f000000b0028fb48000f4mr12954011wrr.12.1672843524496;
        Wed, 04 Jan 2023 06:45:24 -0800 (PST)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v2 2/6] hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
Date: Wed,  4 Jan 2023 15:44:33 +0100
Message-Id: <20230104144437.27479-3-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230104144437.27479-1-shentey@gmail.com>
References: <20230104144437.27479-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a preparatory patch for the next one, making the following
more obvious:

First, pci_bus_irqs() is now called twice in the Xen case, where the
second call overrides the pci_set_irq_fn with the Xen variant.

Second, pci_bus_set_route_irq_fn() is now also called in Xen mode.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/isa/piix.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index a7a4eec206..25707479eb 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -493,7 +493,7 @@ static void piix3_xen_realize(PCIDevice *dev, Error **errp)
     PIIXState *piix3 = PIIX_PCI_DEVICE(dev);
     PCIBus *pci_bus = pci_get_bus(dev);
 
-    pci_piix_realize(dev, TYPE_PIIX3_USB_UHCI, errp);
+    piix3_realize(dev, errp);
     if (*errp) {
         return;
     }
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 14:45:30 2023
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v2 4/6] hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
Date: Wed,  4 Jan 2023 15:44:35 +0100
Message-Id: <20230104144437.27479-5-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230104144437.27479-1-shentey@gmail.com>
References: <20230104144437.27479-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Subscribe to pci_bus_fire_intx_routing_notifier() instead, which allows a
common piix_write_config() to be shared by all PIIX device models.

While at it, move the subscription into machine code in order to resolve
TYPE_PIIX3_XEN_DEVICE.

In a possible future followup, pci_bus_fire_intx_routing_notifier() could
be adjusted in such a way that subscribing to it doesn't require
knowledge of the device firing it.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/pc_piix.c | 18 ++++++++++++++++++
 hw/isa/piix.c     | 22 +---------------------
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 792dcd3ce8..5738d9cdca 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -86,6 +86,21 @@ static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
     return (pci_intx + slot_addend) & 3;
 }
 
+static void piix_intx_routing_notifier_xen(PCIDevice *dev)
+{
+    int i;
+
+    /* Scan for updates to PCI link routes (0x60-0x63). */
+    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
+        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
+        if (v & 0x80) {
+            v = 0;
+        }
+        v &= 0xf;
+        xen_set_pci_link_route(i, v);
+    }
+}
+
 /* PC hardware initialisation */
 static void pc_init1(MachineState *machine,
                      const char *host_type, const char *pci_type)
@@ -248,6 +263,9 @@ static void pc_init1(MachineState *machine,
         pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
 
         if (xen_enabled()) {
+            pci_device_set_intx_routing_notifier(
+                        pci_dev, piix_intx_routing_notifier_xen);
+
             /*
              * Xen supports additional interrupt routes from the PCI devices to
              * the IOAPIC: the four pins of each PCI device on the bus are also
diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index ac04781f46..d4cdb3dadb 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -125,26 +125,6 @@ static void piix_write_config(PCIDevice *dev, uint32_t address, uint32_t val,
     }
 }
 
-static void piix3_write_config_xen(PCIDevice *dev,
-                                   uint32_t address, uint32_t val, int len)
-{
-    int i;
-
-    /* Scan for updates to PCI link routes (0x60-0x63). */
-    for (i = 0; i < len; i++) {
-        uint8_t v = (val >> (8 * i)) & 0xff;
-        if (v & 0x80) {
-            v = 0;
-        }
-        v &= 0xf;
-        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
-            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
-        }
-    }
-
-    piix_write_config(dev, address, val, len);
-}
-
 static void piix_reset(DeviceState *dev)
 {
     PIIXState *d = PIIX_PCI_DEVICE(dev);
@@ -490,7 +470,7 @@ static void piix3_xen_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix3_write_config_xen;
+    k->config_write = piix_write_config;
     k->realize = piix3_realize;
     /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 14:45:30 2023
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v2 5/6] hw/isa/piix: Resolve redundant k->config_write assignments
Date: Wed,  4 Jan 2023 15:44:36 +0100
Message-Id: <20230104144437.27479-6-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230104144437.27479-1-shentey@gmail.com>
References: <20230104144437.27479-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The previous patch unified the handling of piix_write_config() across all
PIIX device models, which allows k->config_write to be assigned once in
the base class.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/isa/piix.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index d4cdb3dadb..98e9b12661 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -396,6 +396,7 @@ static void pci_piix_class_init(ObjectClass *klass, void *data)
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
     AcpiDevAmlIfClass *adevc = ACPI_DEV_AML_IF_CLASS(klass);
 
+    k->config_write = piix_write_config;
     dc->reset       = piix_reset;
     dc->desc        = "ISA bridge";
     dc->hotpluggable   = false;
@@ -451,7 +452,6 @@ static void piix3_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix_write_config;
     k->realize = piix3_realize;
     /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
@@ -470,7 +470,6 @@ static void piix3_xen_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix_write_config;
     k->realize = piix3_realize;
     /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
@@ -519,7 +518,6 @@ static void piix4_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix_write_config;
     k->realize = piix4_realize;
     k->device_id = PCI_DEVICE_ID_INTEL_82371AB_0;
     dc->vmsd = &vmstate_piix4;
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 14:45:33 2023
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v2 6/6] hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
Date: Wed,  4 Jan 2023 15:44:37 +0100
Message-Id: <20230104144437.27479-7-shentey@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230104144437.27479-1-shentey@gmail.com>
References: <20230104144437.27479-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Over the course of the previous patches, TYPE_PIIX3_XEN_DEVICE has become a
clone of TYPE_PIIX3_DEVICE. Remove this redundancy.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/i386/pc_piix.c             |  4 +---
 hw/isa/piix.c                 | 20 --------------------
 include/hw/southbridge/piix.h |  1 -
 3 files changed, 1 insertion(+), 24 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 5738d9cdca..6b8de3d59d 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
     if (pcmc->pci_enabled) {
         DeviceState *dev;
         PCIDevice *pci_dev;
-        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
-                                         : TYPE_PIIX3_DEVICE;
         int i;
 
         pci_bus = i440fx_init(pci_type,
@@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
                                        : pci_slot_get_pirq);
         pcms->bus = pci_bus;
 
-        pci_dev = pci_new_multifunction(-1, true, type);
+        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
         object_property_set_bool(OBJECT(pci_dev), "has-usb",
                                  machine_usb(machine), &error_abort);
         object_property_set_bool(OBJECT(pci_dev), "has-acpi",
diff --git a/hw/isa/piix.c b/hw/isa/piix.c
index 98e9b12661..e4587352c9 100644
--- a/hw/isa/piix.c
+++ b/hw/isa/piix.c
@@ -33,7 +33,6 @@
 #include "hw/qdev-properties.h"
 #include "hw/ide/piix.h"
 #include "hw/isa/isa.h"
-#include "hw/xen/xen.h"
 #include "sysemu/runstate.h"
 #include "migration/vmstate.h"
 #include "hw/acpi/acpi_aml_interface.h"
@@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
     .class_init    = piix3_class_init,
 };
 
-static void piix3_xen_class_init(ObjectClass *klass, void *data)
-{
-    DeviceClass *dc = DEVICE_CLASS(klass);
-    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
-
-    k->realize = piix3_realize;
-    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
-    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
-    dc->vmsd = &vmstate_piix3;
-}
-
-static const TypeInfo piix3_xen_info = {
-    .name          = TYPE_PIIX3_XEN_DEVICE,
-    .parent        = TYPE_PIIX_PCI_DEVICE,
-    .instance_init = piix3_init,
-    .class_init    = piix3_xen_class_init,
-};
-
 static void piix4_realize(PCIDevice *dev, Error **errp)
 {
     ERRP_GUARD();
@@ -534,7 +515,6 @@ static void piix3_register_types(void)
 {
     type_register_static(&piix_pci_type_info);
     type_register_static(&piix3_info);
-    type_register_static(&piix3_xen_info);
     type_register_static(&piix4_info);
 }
 
diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
index 65ad8569da..b1fc94a742 100644
--- a/include/hw/southbridge/piix.h
+++ b/include/hw/southbridge/piix.h
@@ -77,7 +77,6 @@ struct PIIXState {
 OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
 
 #define TYPE_PIIX3_DEVICE "PIIX3"
-#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
 #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
 
 #endif
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 15:17:14 2023
Message-ID: <5a1b4b87-e6eb-de5d-ae1f-b648b6a7fc58@linaro.org>
Date: Wed, 4 Jan 2023 16:17:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/6] include/hw/xen/xen: Rename xen_piix3_set_irq() to
 xen_intx_set_irq()
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Chuck Zmudzinski <brchuckz@aol.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-2-shentey@gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230104144437.27479-2-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/1/23 15:44, Bernhard Beschow wrote:
> xen_piix3_set_irq() isn't PIIX specific: PIIX is a single PCI device
> while xen_piix3_set_irq() maps multiple PCI devices to their respective
> IRQs, which is board-specific. Rename xen_piix3_set_irq() to communicate
> this.
> 
> Also rename XEN_PIIX_NUM_PIRQS to XEN_IOAPIC_NUM_PIRQS since Xen's
> IOAPIC, rather than PIIX, has this many interrupt routes.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> ---
>   hw/i386/xen/xen-hvm.c | 2 +-
>   hw/isa/piix.c         | 4 ++--
>   include/hw/xen/xen.h  | 2 +-
>   stubs/xen-hw-stub.c   | 2 +-
>   4 files changed, 5 insertions(+), 5 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 15:19:04 2023
Message-ID: <90e7dc87-e2a1-90ae-1002-6f98abe2224e@linaro.org>
Date: Wed, 4 Jan 2023 16:18:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/6] hw/isa/piix: Wire up Xen PCI IRQ handling outside
 of PIIX3
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Chuck Zmudzinski <brchuckz@aol.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-4-shentey@gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230104144437.27479-4-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/1/23 15:44, Bernhard Beschow wrote:
> xen_intx_set_irq() doesn't depend on PIIX state. In order to resolve
> TYPE_PIIX3_XEN_DEVICE and in order to make Xen agnostic about the
> precise south bridge being used, set up Xen's PCI IRQ handling of PIIX3
> in the board.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> ---
>   hw/i386/pc_piix.c | 12 ++++++++++++
>   hw/isa/piix.c     | 24 +-----------------------
>   2 files changed, 13 insertions(+), 23 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 15:19:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 15:19:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471263.731049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD5Y4-0006dX-F5; Wed, 04 Jan 2023 15:19:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471263.731049; Wed, 04 Jan 2023 15:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD5Y4-0006dQ-B5; Wed, 04 Jan 2023 15:19:24 +0000
Received: by outflank-mailman (input) for mailman id 471263;
 Wed, 04 Jan 2023 15:19:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=G5yt=5B=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pD5Y3-0006bu-Oi
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 15:19:23 +0000
Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com
 [2a00:1450:4864:20::333])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a278203-8c43-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 16:19:21 +0100 (CET)
Received: by mail-wm1-x333.google.com with SMTP id ja17so25823465wmb.3
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 07:19:21 -0800 (PST)
Received: from [192.168.30.216] ([81.0.6.76]) by smtp.gmail.com with ESMTPSA id
 r9-20020a05600c35c900b003d6b71c0c92sm69512254wmq.45.2023.01.04.07.19.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 07:19:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a278203-8c43-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=kPTejbbi0VjrwZB3qm0I8/+jmgejs37VoikSC1jaUtQ=;
        b=AhDzPHiSag5hRqrsTQN2J0vuhBPs6nWR14Nx5/y7Eg0iKnIDoPZPGkD9b4OwiPoR9u
         wS9wgHnq7Ei4czHMLFTstFMhCH94/k12Ehtc4wRjzriD1k8WsGRQyUCzc9gTaImXQNhg
         qqwLwDiOkCUBdgQqahpfsNY3lfYZgilLFDbpu6SFuL6zxpz8oBVLVMJmVO5lg5HEE2Hi
         Ia9CuCDCBhGADcfj6+Qjr7uK7SoThYMhM1JnGu6XtSq0Bw/hj8L7MbItaxiQLEynibq4
         koSFvychnjqJrUvLRBcE+yuFr4jJChIxAVVfzEg7fULMcQxwxiYsWLpaMam1a5HJWq0h
         A+Hg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=kPTejbbi0VjrwZB3qm0I8/+jmgejs37VoikSC1jaUtQ=;
        b=TFDJiTY4wth89+k86/hJLASTAgvNVw56P+wX+8ma5fPaX9tH+2XsJjPy9g667Qr5C4
         +tIpnrGPg1yDgZxDn3wMYExNn0S2avpJTiMziIFWHPUFy8XgSmgPu+i6aU0FQMvYWDr1
         HdB8MOpQQ0kF8/8iLWpqHH7Fx4gfOqcHkNe89VJbbHnkD8nYLHYJPr5U4YX7eumEDmYr
         ONtLfW4oBqjxnIBYEZjKH2ScuGQUXd1ivwegWqDhQY7NZkpWgn/DjcrKomtGC0qru22X
         qK3OmGhxAFCKBi7DMxE61JD1C5GE76yMSeWomITh7r8XWgxRPs6iF4+RIvKhukfSbRUq
         5Uyg==
X-Gm-Message-State: AFqh2krF8pxrTbO3mxcnoNAdPWP+CDVkzyK3tK5qMg1r+isG/AEseW81
	XvseS2eeOiM8beTQ0716ABGbyg==
X-Google-Smtp-Source: AMrXdXur2qtX7wLHbl0tKH2b2is4RZ/VKSr2T4aG8FJIBb61gSsVoK18AtRN6qbV2HSt2GDGU9Bh7A==
X-Received: by 2002:a05:600c:4d25:b0:3d2:27ba:dde0 with SMTP id u37-20020a05600c4d2500b003d227badde0mr34776131wmp.33.1672845561368;
        Wed, 04 Jan 2023 07:19:21 -0800 (PST)
Message-ID: <265c7231-31a0-09a5-2b39-57e0d610661f@linaro.org>
Date: Wed, 4 Jan 2023 16:19:19 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 5/6] hw/isa/piix: Resolve redundant k->config_write
 assignments
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Chuck Zmudzinski <brchuckz@aol.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-6-shentey@gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230104144437.27479-6-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/1/23 15:44, Bernhard Beschow wrote:
> The previous patch unified handling of piix_write_config() accross all
> PIIX device models which allows for assigning k->config_write once in the
> base class.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> ---
>   hw/isa/piix.c | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 15:35:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 15:35:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471274.731060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD5nn-0000hk-RW; Wed, 04 Jan 2023 15:35:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471274.731060; Wed, 04 Jan 2023 15:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD5nn-0000hd-Op; Wed, 04 Jan 2023 15:35:39 +0000
Received: by outflank-mailman (input) for mailman id 471274;
 Wed, 04 Jan 2023 15:35:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=G5yt=5B=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pD5nm-0000hX-K4
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 15:35:38 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6f69a975-8c45-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 16:35:36 +0100 (CET)
Received: by mail-wr1-x430.google.com with SMTP id d17so13632402wrs.2
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 07:35:36 -0800 (PST)
Received: from [192.168.30.216] ([81.0.6.76]) by smtp.gmail.com with ESMTPSA id
 bt15-20020a056000080f00b00297dcfdc90fsm9758428wrb.24.2023.01.04.07.35.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 07:35:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f69a975-8c45-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=qKfYp56DzEdln6bw+epRjXvgWK4+5r5Y+0Le7rGeP9o=;
        b=uUvu3TZZ6cU7XFsjBFeKFFSqBvz1X0ROcGMGdM4mLTlVoVeGwEz9o6k2bGbmQIg4om
         TzVOHY6Cc2lGZTwx1Jj931bRdVCLmTaRi4NoaiXpD5LbwGwC5p5EbIjsbQz/Fl5chbHt
         qwMRsbZtZebWs3sWytVS2FNwfYSGFEmenYR03MWPelgo93c5N9u/OtlHARpd+iCZPoxT
         HZ5YaVoZSL8s59ak8bCLBbh4Iq6DjtCsnsjdSBN7D015K8yzoJN7xivoNYasnE06Lgaj
         9nJr7AZ1SEXoTAuMxBDbxJyCxj8CTZoJKcaS2AoX+xRNQ3SuZqcYI8rj691I/weGzwsL
         /Rmg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=qKfYp56DzEdln6bw+epRjXvgWK4+5r5Y+0Le7rGeP9o=;
        b=6GeLvfNmc+Yl+Y0EdLLWhCLf60sMuUjyAGn11iyVN4qqLd0de7MpU9L9eImxEd+1R1
         PXaq2MOnN3dMNRT5fu8EFyXkaG6w/ZLPpVerY6etXoJGFlQ4X8neClvPvV9mN/3eDrnZ
         TeGqFdC7fZ4Ie7ZrF7I6s1EBTVrVVTcWMmTcFgSv3JnWbtjmrkPFxxxbfGgoUJodGj90
         1HFciqGYd6cDxldYcHxHvKpOfPHdGh6Fx2EFZok/oDP8CNyu7eMDrCMulNTBQeyQpdNH
         PSbuhU3yJu8gtBMEpq05di6AlhJuSFqRs5k2GsN8aYoujv9FGS0RQB6nliAUnV7AG6JJ
         g0tg==
X-Gm-Message-State: AFqh2ko7mARTSnNbNfS0x4iyGcXn63Vzo7KRJQAG4crVxQTPGXbW4Arq
	Jaiw1ptyngYKH4H6nQEbX7pEdA==
X-Google-Smtp-Source: AMrXdXtY6RzOZmuFEL1JagfUdSvbatncLArmeYUWzIWeXgc02Qt4gc26aWsY4X0BGqOTYR424X5PZQ==
X-Received: by 2002:adf:de8a:0:b0:266:3709:5ce3 with SMTP id w10-20020adfde8a000000b0026637095ce3mr28787116wrl.0.1672846536505;
        Wed, 04 Jan 2023 07:35:36 -0800 (PST)
Message-ID: <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
Date: Wed, 4 Jan 2023 16:35:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Chuck Zmudzinski <brchuckz@aol.com>,
 Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230104144437.27479-7-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

+Markus/Thomas

On 4/1/23 15:44, Bernhard Beschow wrote:
> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
> TYPE_PIIX3_DEVICE. Remove this redundancy.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> ---
>   hw/i386/pc_piix.c             |  4 +---
>   hw/isa/piix.c                 | 20 --------------------
>   include/hw/southbridge/piix.h |  1 -
>   3 files changed, 1 insertion(+), 24 deletions(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 5738d9cdca..6b8de3d59d 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>       if (pcmc->pci_enabled) {
>           DeviceState *dev;
>           PCIDevice *pci_dev;
> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> -                                         : TYPE_PIIX3_DEVICE;
>           int i;
>   
>           pci_bus = i440fx_init(pci_type,
> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>                                          : pci_slot_get_pirq);
>           pcms->bus = pci_bus;
>   
> -        pci_dev = pci_new_multifunction(-1, true, type);
> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>           object_property_set_bool(OBJECT(pci_dev), "has-usb",
>                                    machine_usb(machine), &error_abort);
>           object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
> index 98e9b12661..e4587352c9 100644
> --- a/hw/isa/piix.c
> +++ b/hw/isa/piix.c
> @@ -33,7 +33,6 @@
>   #include "hw/qdev-properties.h"
>   #include "hw/ide/piix.h"
>   #include "hw/isa/isa.h"
> -#include "hw/xen/xen.h"
>   #include "sysemu/runstate.h"
>   #include "migration/vmstate.h"
>   #include "hw/acpi/acpi_aml_interface.h"
> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>       .class_init    = piix3_class_init,
>   };
>   
> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
> -{
> -    DeviceClass *dc = DEVICE_CLASS(klass);
> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> -
> -    k->realize = piix3_realize;
> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
> -    dc->vmsd = &vmstate_piix3;

IIUC, since this device is user-creatable, we can't simply remove it
without going thru the deprecation process. Alternatively we could
add a type alias:

-- >8 --
diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
index 4b0ef65780..d94f7ea369 100644
--- a/softmmu/qdev-monitor.c
+++ b/softmmu/qdev-monitor.c
@@ -64,6 +64,7 @@ typedef struct QDevAlias
                                QEMU_ARCH_LOONGARCH)
  #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
  #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
+#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)

  /* Please keep this table sorted by typename. */
  static const QDevAlias qdev_alias_table[] = {
@@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
      { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
      { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
      { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
+    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },
      { }
  };
---

But I'm not sure due to this comment from commit ee46d8a503
(2011-12-22 15:24:20 -0600):

47) /*
48)  * Aliases were a bad idea from the start.  Let's keep them
49)  * from spreading further.
50)  */

Maybe using qdev_alias_table[] during device deprecation is
acceptable?

> -}
> -
> -static const TypeInfo piix3_xen_info = {
> -    .name          = TYPE_PIIX3_XEN_DEVICE,
> -    .parent        = TYPE_PIIX_PCI_DEVICE,
> -    .instance_init = piix3_init,
> -    .class_init    = piix3_xen_class_init,
> -};
> -
>   static void piix4_realize(PCIDevice *dev, Error **errp)
>   {
>       ERRP_GUARD();
> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
>   {
>       type_register_static(&piix_pci_type_info);
>       type_register_static(&piix3_info);
> -    type_register_static(&piix3_xen_info);
>       type_register_static(&piix4_info);
>   }
>   
> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
> index 65ad8569da..b1fc94a742 100644
> --- a/include/hw/southbridge/piix.h
> +++ b/include/hw/southbridge/piix.h
> @@ -77,7 +77,6 @@ struct PIIXState {
>   OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
>   
>   #define TYPE_PIIX3_DEVICE "PIIX3"
> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
>   #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
>   
>   #endif



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 15:45:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 15:45:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471282.731071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD5wq-0002Eu-R6; Wed, 04 Jan 2023 15:45:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471282.731071; Wed, 04 Jan 2023 15:45:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD5wq-0002En-OK; Wed, 04 Jan 2023 15:45:00 +0000
Received: by outflank-mailman (input) for mailman id 471282;
 Wed, 04 Jan 2023 15:44:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+XhT=5B=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pD5wp-0002EU-B3
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 15:44:59 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2040.outbound.protection.outlook.com [40.107.20.40])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bdd70812-8c46-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 16:44:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8377.eurprd04.prod.outlook.com (2603:10a6:10:25c::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 15:44:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 15:44:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdd70812-8c46-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nu30EW1cGTSvrNgzZhfYOoggD+Q16sa+nRBET+DscSiBrlVEj7EaxbMowo72pZ1koFBhLX8FSup4UObLUdTOoPQH/eOJ0ko9dxE39yr4ZI5KnCQrWb+c4rajegqX/G7Ril+AKSFEnTkSq4kON3HVg++Z5aUdFDoV0ioe5sGmnknLlkwaTRqxVGLkpafs4Q6a+5F0yQaaqRIpl3q4kMhxxV/5k+0axfCWWRO0iFdO8v4bLcz9M8+f0C2QfHLc4yV4daR6dl7Bl6xhB7I7TAp/WQuXoq/8dNZ+vecBAmTleun0h3vxTkYZh+aIYIHfGmQRX2ljoIXZyWz12QwT/BFkUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ekUmmCLjTlbJSqa3zB70oJ7pL5kMKDCfK5iJNx+FSQY=;
 b=TgpaWhfQrk2wD0u43wvsOTE4hyOLx9Ndm7jDs60xO6lHUNnoZKlKXdSwNj873NF++P6m0ZbfyrW8jh4oj3NzX0Ob/Tj1HRExpvEz9EY6TEBm0N6yFczxKh3pO+gHyqXbTakMlxLWj799JGDtAdxKw3D2+bb2BGObq18bXBDE42mbmLD0UNCJ6VIn9VM6GakPWGJaQVExBblAQxkLYpCVWKtTIUocdA4Iyn8E1biMc/avmFEOFP7lVdDTuHLXGW8LacSjBxl061NLmyftiUV4atMfK/ZMI+ybKpG5YOzBnm8G0FPLtERldB0rR6eW4zPXhvX2WGwMW9kFjASehf9IJA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ekUmmCLjTlbJSqa3zB70oJ7pL5kMKDCfK5iJNx+FSQY=;
 b=e+boEKtCGHJnMzoVsJ2p9qAvkmJLdEPKLyRYjEvkT+Uw/1VG6o8TqUjv8tjOXHIwkVfzMqDYHFW27mHjkRDEtmZBM7i98/K7lb8L7Sg5Q1j29FB3IKesZpHkOlNzuwRfn4x0pnTk99QyPGlIRYo75Ut3TORB2UTjFB9N6F5lg7KkNIHPDnu911gON3su+RzHOttfE0ya08F+0vv/zdgQjb5W4/Tq8zwg+kQxW5f8R3dBuPZrPSyN1AxIlde53xYpdYj1XggEisr2fzKJO8n+wNE6JSjUcJPbT1m8/j1KkzKfCpKqFhyobu8Ka8cpt+G+Zsp3PprxvZwAjjMSV4QYNA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a225c263-dda1-8109-3dcc-ff7111f277fe@suse.com>
Date: Wed, 4 Jan 2023 16:44:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/2] x86/shskt: Disable CET-SS on parts susceptible to
 fractured updates
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230104111146.2094-1-andrew.cooper3@citrix.com>
 <20230104111146.2094-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104111146.2094-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0080.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8377:EE_
X-MS-Office365-Filtering-Correlation-Id: 691ad437-f119-46ae-78d4-08daee6aa093
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(136003)(396003)(346002)(376002)(39860400002)(366004)(451199015)(83380400001)(5660300002)(8936002)(36756003)(6916009)(316002)(54906003)(8676002)(4326008)(66946007)(66556008)(66476007)(2616005)(15650500001)(2906002)(31686004)(66899015)(6506007)(86362001)(31696002)(6512007)(186003)(478600001)(6486002)(53546011)(26005)(38100700002)(41300700001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 691ad437-f119-46ae-78d4-08daee6aa093
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jan 2023 15:44:55.5559
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OaPlvKH3LnzB24JRc8qlxECr4rB4TF48FA0D4g/7cxUerATsOIzw1uh5ho/AeCGzRbAmF2ppB3p/VAkWnkKBnw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8377

On 04.01.2023 12:11, Andrew Cooper wrote:
> Refer to Intel SDM Rev 70 (Dec 2022), Vol3 17.2.3 "Supervisor Shadow Stack
> Token".
> 
> Architecturally, an event delivery which starts in CPL<3 and switches shadow
> stack will first validate the Supervisor Shadow Stack Token (setting the busy
> bit), then pushes CS/LIP/SSP.  One example of this is an NMI interrupting Xen.
> 
> Some CPUs suffer from an issue called fracturing, whereby a fault/vmexit/etc
> between setting the busy bit and completing the event injection renders the
> action non-restartable, because when it comes time to restart, the busy bit is
> found to be already set.
> 
> This is far more easily encountered under virt, yet it is not the fault of the
> hypervisor, nor the fault of the guest kernel.  The fault lies somewhere
> between the architectural specification, and the uarch behaviour.
> 
> Intel have allocated CPUID.7[1].ecx[18] CET_SSS to enumerate that supervisor
> shadow stacks are safe to use.  Because of how Xen lays out its shadow stacks,
> fracturing is not expected to be a problem on native.

IOW that's the "contained in an aligned 32-byte region" constraint which we
meet, aiui.

> Detect this case on boot and default to not using shstk if virtualised.
> Specifying `cet=shstk` on the command line will override this heuristic and
> enable shadow stacks irrespective.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one nit (below).

> This ideally wants backporting to Xen 4.14.  I have no idea how likely it is
> to need to backport the prerequisite patch for new feature words, but we've
> already had to do that once for security patches.  OTOH, I have no idea how
> easy it is to trigger in non-synthetic cases.

Plus: How likely is it that Xen actually is used virtualized in production?
For the moment I don't see any reason to backport to branches in security-
only maintenance mode. I'm not even sure it needs backporting at all.

> @@ -1099,11 +1095,45 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      early_cpu_init();
>  
>      /* Choose shadow stack early, to set infrastructure up appropriately. */
> -    if ( opt_xen_shstk && boot_cpu_has(X86_FEATURE_CET_SS) )
> +    if ( !boot_cpu_has(X86_FEATURE_CET_SS) )
> +        opt_xen_shstk = 0;
> +
> +    if ( opt_xen_shstk )
>      {
> -        printk("Enabling Supervisor Shadow Stacks\n");
> +        /*
> +         * Some CPUs suffer from Shadow Stack Fracturing, an issue whereby a
> +         * fault/VMExit/etc between setting a Supervisor Busy bit and the
> +         * event delivery completing renders the operation non-restartable.
> +         * On restart, event delivery will find the Busy bit already set.
> +         *
> +         * This is a problem on bare metal, but outside of synthetic cases or
> +         * a very badly timed #MC, it's not believed to problem.  It is a much

Nit: "... to be a problem."

Jan
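The boot-time heuristic described in the patch can be sketched as a plain decision function. This is a hypothetical simplification of the quoted hunk: the option names, the tri-state default, and the function itself are illustrative, not Xen's actual variables or code.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative tri-state for a `cet=shstk` style command-line option. */
enum { OPT_DEFAULT = -1, OPT_OFF = 0, OPT_ON = 1 };

/*
 * Decide whether to use supervisor shadow stacks:
 *  - never without the CET_SS feature;
 *  - an explicit command-line choice always wins;
 *  - by default, avoid shstk when virtualised unless CET_SSS
 *    enumerates that supervisor shadow stacks are safe to use.
 */
static bool use_shstk(int opt, bool cet_ss, bool cet_sss, bool virtualised)
{
    if ( !cet_ss )
        return false;

    if ( opt != OPT_DEFAULT )
        return opt == OPT_ON;

    return cet_sss || !virtualised;
}
```

This mirrors the behaviour in the commit message: bare metal keeps shadow stacks on by default, a virtualised Xen turns them off unless CET_SSS is enumerated, and `cet=shstk` overrides the heuristic either way.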


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 16:12:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 16:12:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471289.731082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6NX-00061z-0S; Wed, 04 Jan 2023 16:12:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471289.731082; Wed, 04 Jan 2023 16:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6NW-00061s-Ty; Wed, 04 Jan 2023 16:12:34 +0000
Received: by outflank-mailman (input) for mailman id 471289;
 Wed, 04 Jan 2023 16:12:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bCSi=5B=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pD6NW-00061k-JW
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 16:12:34 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9869fabb-8c4a-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 17:12:33 +0100 (CET)
Received: by mail-ej1-x634.google.com with SMTP id ud5so83878412ejc.4
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 08:12:33 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb2008108eedf25879029.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:8108:eedf:2587:9029])
 by smtp.gmail.com with ESMTPSA id
 lj2-20020a170906f9c200b007a4e02e32ffsm15454154ejb.60.2023.01.04.08.12.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 08:12:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9869fabb-8c4a-11ed-91b6-6bf2151ebd3b
Date: Wed, 04 Jan 2023 16:12:27 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>
CC: qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>,
 xen-devel@lists.xenproject.org,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <00094755-ca61-372d-0bcf-540fe2798f5c@aol.com>
References: <20230102213504.14646-1-shentey@gmail.com> <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com> <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org> <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com> <aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com> <AB058B2A-406E-487B-A1BA-74416C310B7A@gmail.com> <00094755-ca61-372d-0bcf-540fe2798f5c@aol.com>
Message-ID: <7E657325-705A-47EA-A334-0B59DF0DF772@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On January 4, 2023 13:11:16 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>On 1/4/2023 7:13 AM, Bernhard Beschow wrote:
>> On January 4, 2023 08:18:59 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>> >On 1/3/2023 8:38 AM, Bernhard Beschow wrote:
>> >>
>> >>
>> >> On Tue, Jan 3, 2023 at 2:17 PM Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>> >>
>> >>     Hi Chuck,
>> >>
>> >>     On 3/1/23 04:15, Chuck Zmudzinski wrote:
>> >>     > On 1/2/23 4:34 PM, Bernhard Beschow wrote:
>> >>     >> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
>> >>     >> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
>> >>     >> machine agnostic to the precise southbridge being used. 2/ will become
>> >>     >> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
>> >>     >> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>> >>     >>
>> >>     >> Testing done:
>> >>     >> None, because I don't know how to conduct this properly :(
>> >>     >>
>> >>     >> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>> >>     >>            "[PATCH v4 00/30] Consolidate PIIX south bridges"
>> >>
>> >>     This series is based on a previous series:
>> >>     https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
>> >>     (which itself also is).
>> >>
>> >>     >> Bernhard Beschow (6):
>> >>     >>    include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
>> >>     >>    hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>> >>     >>    hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>> >>     >>    hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>> >>     >>    hw/isa/piix: Resolve redundant k->config_write assignments
>> >>     >>    hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>> >>     >>
>> >>     >>   hw/i386/pc_piix.c           | 34 ++++++++++++++++--
>> >>     >>   hw/i386/xen/xen-hvm.c       |  9 +++--
>> >>     >>   hw/isa/piix.c               | 66 +----------------------------------
>> >>     >
>> >>     > This file does not exist on the Qemu master branch.
>> >>     > But hw/isa/piix3.c and hw/isa/piix4.c do exist.
>> >>     >
>> >>     > I tried renaming it from piix.c to piix3.c in the patch, but
>> >>     > the patch set still does not apply cleanly on my tree.
>> >>     >
>> >>     > Is this patch set re-based against something other than
>> >>     > the current master Qemu branch?
>> >>     >
>> >>     > I have a system that is suitable for testing this patch set, but
>> >>     > I need guidance on how to apply it to the Qemu source tree.
>> >>
>> >>     You can ask Bernhard to publish a branch with the full work,
>> >>
>> >>
>> >> Hi Chuck,
>> >>
>> >> ... or just visit https://patchew.org/QEMU/20230102213504.14646-1-shentey@gmail.com/ . There you'll find a git tag with a complete history and all instructions!
>> >>
>> >> Thanks for giving my series a test ride!
>> >>
>> >> Best regards,
>> >> Bernhard
>> >>
>> >>     or apply each series locally. I use the b4 tool for that:
>> >>     https://b4.docs.kernel.org/en/latest/installing.html
>> >>
>> >>     i.e.:
>> >>
>> >>     $ git checkout -b shentey_work
>> >>     $ b4 am 20221120150550.63059-1-shentey@gmail.com
>> >>     $ git am
>> >>     ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
>> >>     $ b4 am 20221221170003.2929-1-shentey@gmail.com
>> >>     $ git am
>> >>     ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
>> >>     $ b4 am 20230102213504.14646-1-shentey@gmail.com
>> >>     $ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx
>> >>
>> >>     Now the branch 'shentey_work' contains all the patches and you can test.
>> >>
>> >>     Regards,
>> >>
>> >>     Phil.
>> >>
>> >
>> >Hi Phil and Bernhard,
>> >
>> >I tried applying these 3 patch series on top of the current qemu
>> >master branch.
>> >
>> >Unfortunately, I saw a regression, so I can't give a tested-by tag yet.
>>
>> Hi Chuck,
>>
>> Thanks for your valuable test report! I think the culprit may be commit https://lists.nongnu.org/archive/html/qemu-devel/2023-01/msg00102.html where now 128 PIRQs are considered rather than four. I'll revisit my series and will prepare a v2 in the next days. I think there is no need for further testing v1.
>>
>> Thanks,
>> Bernhard
>
>Hi Bernhard,
>
>Thanks for letting me know I do not need to test v1 further. I agree the
>symptoms are that it is an IRQ problem - it looks like IRQs associated with
>the emulated usb tablet device are not making it to the guest with the
>patched v1 piix device on xen.

All PCI IRQs were routed to PCI slot 0. This should be fixed in v2 now.

>I will be looking for your v2 in coming days and try it out also!

Thank you! Here it is: https://patchew.org/QEMU/20230104144437.27479-1-shentey@gmail.com/

Best regards,
Bernhard

>
>Best regards,
>
>Chuck
>
>>
>> >
>> >Here are the details of the testing I did so far:
>> >
>> >Xen only needs one target, the i386-softmmu target which creates
>> >the qemu-system-i386 binary that Xen uses for its device model.
>> >That target compiled and linked with no problems with these 3
>> >patch series applied on top of qemu master. I didn't try building
>> >any other targets.
>> >
>> >My tests used the xenfv machine type with the xen platform
>> >pci device, which is ordinarily called a xen hvm guest with xen
>> >paravirtualized network and block device drivers. It is based on the
>> >i440fx machine type and so emulates piix3. I tested the xen
>> >hvm guests with two different configurations as described below.
>> >
>> >I tested both Linux and Windows guests, with mixed results. With the
>> >current Qemu master (commit 222059a0fccf4 without the 3 patch
>> >series applied), all tested guest configurations work normally for both
>> >Linux and Windows guests.
>> >
>> >With these 3 patch series applied on top of the qemu master branch,
>> >which tries to consolidate piix3 and piix4 and resolve the xen piix3
>> >device that my guests use, I unfortunately got a regression.
>> >
>> >The regression occurred with a configuration that uses the qemu
>> >bochs stdvga graphics device with a vnc display, and the qemu
>> >usb-tablet device to emulate the mouse and keyboard. After applying
>> >the 3 patch series, the emulated mouse is not working at all for Linux
>> >guests. It works for Windows guests, but the mouse pointer in the
>> >guest does not follow the mouse pointer in the vnc window as closely
>> >as it does without the 3 patch series. So this is the bad news of a
>> >regression introduced somewhere in these 3 patch series.
>> >
>> >The good news is that by using guests in a configuration that does
>> >not use the qemu usb-tablet device or the bochs stdvga device but
>> >instead uses a real passed through usb3 controller with a real usb
>> >mouse and a real usb keyboard connected, and also the real sound
>> >card and vga device passed through and a 1920x1080 HDMI monitor,
>> >there is no regression introduced by the 3 patch series and both Linux
>> >and Windows guests in that configuration work perfectly.
>> >
>> >My next test will be to test Bernhard's published git tag without
>> >trying to merge the 3 patch series into master and see if that also
>> >has the regression. I also will double check that I didn't make
>> >any mistakes in merging the 3 patch series by creating the shentey_work
>> >branch with b4 and git as Phil described and compare that to my
>> >working tree.
>> >
>> >I also will try testing only the first series, then the first series and the
>> >second series, to try to determine in which of the 3 series the regression
>> >is introduced.
>> >
>> >Best regards,
>> >
>> >Chuck
>
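The regression Chuck saw (usb-tablet IRQs never reaching Linux guests, fixed in v2 because "all PCI IRQs were routed to PCI slot 0") comes down to the INTx-to-PIRQ routing losing the device's slot number. For illustration, a hypothetical i440FX/PIIX3-style mapping can be sketched as below; the exact slot addend QEMU uses differs, this only shows why dropping the slot term makes every device behave as if it sat in slot 0:

```c
#include <assert.h>

/*
 * Hypothetical sketch of i440FX/PIIX3-style INTx->PIRQ routing.
 * Each slot's INTA..INTD pins (0..3) are rotated across the four
 * PIRQ lines, so devices in different slots land on different PIRQs.
 */
static int route_intx_to_pirq(int slot, int intx)
{
    return (intx + slot) & 3;   /* exact addend varies by board */
}

/*
 * The buggy variant discussed in the thread effectively ignores the
 * slot, collapsing all devices onto slot 0's routing.
 */
static int route_intx_to_pirq_buggy(int slot, int intx)
{
    (void)slot;                 /* slot term lost: everyone maps as slot 0 */
    return intx & 3;
}
```

With the buggy mapping, two devices in different slots asserting INTA contend for the same PIRQ line, which matches the observed symptom of the usb-tablet's interrupts going missing.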


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 16:29:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 16:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471296.731092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6dZ-0007bG-Cu; Wed, 04 Jan 2023 16:29:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471296.731092; Wed, 04 Jan 2023 16:29:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6dZ-0007b9-A7; Wed, 04 Jan 2023 16:29:09 +0000
Received: by outflank-mailman (input) for mailman id 471296;
 Wed, 04 Jan 2023 16:29:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+XhT=5B=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pD6dX-0007b3-Vz
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 16:29:07 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2044.outbound.protection.outlook.com [40.107.20.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e87ed6b7-8c4c-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 17:29:06 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9584.eurprd04.prod.outlook.com (2603:10a6:20b:473::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 16:29:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 16:29:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e87ed6b7-8c4c-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <074042f5-ecc1-11c9-bdcd-b9d619475d58@suse.com>
Date: Wed, 4 Jan 2023 17:29:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/4] xen/version: Drop compat/kernel.c
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230103200943.5801-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0136.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9584:EE_
X-MS-Office365-Filtering-Correlation-Id: 5b379f1a-b868-4afe-3e69-08daee70cb11
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b379f1a-b868-4afe-3e69-08daee70cb11
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jan 2023 16:29:03.8405
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UHbYP3SdX5R7wWmqtaiLWfvIN0izWZIlCA/LBsNqhBg7DegTbALWNt3WGBjYrALsok+iB0qxI1ZWy289miWERg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9584

On 03.01.2023 21:09, Andrew Cooper wrote:
> kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
> reincludes kernel.c to recompile xen_version() in a compat form.
> 
> However, the xen_version hypercall is almost guest-ABI-agnostic; only
> XENVER_platform_parameters has a compat split.  Handle this locally, and do
> away with the reinclude entirely.

And we henceforth mean to not introduce any interface structures here
which would require translation (or we're willing to accept the clutter
of handling those "locally" as well). Fine with me, just wanting to
mention it.

> --- a/xen/common/compat/kernel.c
> +++ /dev/null
> @@ -1,53 +0,0 @@
> -/******************************************************************************
> - * kernel.c
> - */
> -
> -EMIT_FILE;
> -
> -#include <xen/init.h>
> -#include <xen/lib.h>
> -#include <xen/errno.h>
> -#include <xen/version.h>
> -#include <xen/sched.h>
> -#include <xen/guest_access.h>
> -#include <asm/current.h>
> -#include <compat/xen.h>
> -#include <compat/version.h>
> -
> -extern xen_commandline_t saved_cmdline;
> -
> -#define xen_extraversion compat_extraversion
> -#define xen_extraversion_t compat_extraversion_t
> -
> -#define xen_compile_info compat_compile_info
> -#define xen_compile_info_t compat_compile_info_t
> -
> -CHECK_TYPE(capabilities_info);

This and ...

> -#define xen_platform_parameters compat_platform_parameters
> -#define xen_platform_parameters_t compat_platform_parameters_t
> -#undef HYPERVISOR_VIRT_START
> -#define HYPERVISOR_VIRT_START HYPERVISOR_COMPAT_VIRT_START(current->domain)
> -
> -#define xen_changeset_info compat_changeset_info
> -#define xen_changeset_info_t compat_changeset_info_t
> -
> -#define xen_feature_info compat_feature_info
> -#define xen_feature_info_t compat_feature_info_t
> -
> -CHECK_TYPE(domain_handle);

... this go away without any replacement. Considering they're both
char[], that's probably fine, but could do with mentioning in the
description.

> @@ -520,12 +518,27 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>      
>      case XENVER_platform_parameters:
>      {
> -        xen_platform_parameters_t params = {
> -            .virt_start = HYPERVISOR_VIRT_START
> -        };

With this gone the oddly (but intentionally) placed braces can then
also go away.

Preferably with these minor adjustments
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
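The one compat-sensitive subop mentioned in the quoted description, XENVER_platform_parameters, can be sketched as a small helper handled "locally" rather than via a recompiled compat/kernel.c. The constants and names below are illustrative placeholders, not Xen's real HYPERVISOR_VIRT_START / HYPERVISOR_COMPAT_VIRT_START values:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical placeholder values for illustration only. */
#define NATIVE_VIRT_START  UINT64_C(0xffff800000000000)
#define COMPAT_VIRT_START  UINT64_C(0xf5800000)

/*
 * A 32-bit (compat) guest gets a virt_start that fits its narrower
 * platform-parameters structure; a native guest gets the full
 * 64-bit value.  Handling the split here is what lets the compat
 * reinclude of kernel.c go away.
 */
static uint64_t xenver_virt_start(bool compat)
{
    return compat ? COMPAT_VIRT_START : NATIVE_VIRT_START;
}
```

The point of the refactor is exactly that this is the only field in the whole hypercall needing such a guest-width split, so one local conditional replaces an entire recompiled translation unit.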


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 16:33:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 16:33:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471304.731104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6hI-0000dN-2Q; Wed, 04 Jan 2023 16:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471304.731104; Wed, 04 Jan 2023 16:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6hH-0000dG-VD; Wed, 04 Jan 2023 16:32:59 +0000
Received: by outflank-mailman (input) for mailman id 471304;
 Wed, 04 Jan 2023 16:32:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD6hH-0000d6-02; Wed, 04 Jan 2023 16:32:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD6hG-0002np-Tp; Wed, 04 Jan 2023 16:32:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD6hG-0001Lf-Ds; Wed, 04 Jan 2023 16:32:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pD6hG-0004vB-DN; Wed, 04 Jan 2023 16:32:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175566-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175566: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=69b41ac87e4a664de78a395ff97166f0b2943210
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 16:32:58 +0000

flight 175566 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175566/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                69b41ac87e4a664de78a395ff97166f0b2943210
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   88 days
Failing since        173470  2022-10-08 06:21:34 Z   88 days  184 attempts
Testing same since   175552  2023-01-02 21:13:23 Z    1 days    5 attempts

------------------------------------------------------------
3257 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 497265 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 16:36:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 16:36:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471313.731115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6kB-0001DJ-HO; Wed, 04 Jan 2023 16:35:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471313.731115; Wed, 04 Jan 2023 16:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6kB-0001DC-E4; Wed, 04 Jan 2023 16:35:59 +0000
Received: by outflank-mailman (input) for mailman id 471313;
 Wed, 04 Jan 2023 16:35:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD6k9-0001D2-J2; Wed, 04 Jan 2023 16:35:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD6k9-0002qj-IA; Wed, 04 Jan 2023 16:35:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pD6k9-0001Pw-31; Wed, 04 Jan 2023 16:35:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pD6k9-0007OX-2Z; Wed, 04 Jan 2023 16:35:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PrWdftEX9qT2U+aAUgfYWxi7bcWV/5I2f//PX7X6zpE=; b=YDC7TCFAm+DX4oJt4kVZYq0Pla
	EmbeMe1J1RuzhuLTVtbwIBpHNz3mNK1x0jh8VHjhB012ws2AB0g9Qfa18021ZrZJN938dU5GxyFzg
	67I0jdAEAS1QcPWJNVDVT0hb4a6JSqPdj/IJbkmcZIy6fVaXmK45pu+PsxfaWMGk7KGU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175568-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175568: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c1df06afe578f698ebe91a1e3817463b9d165123
X-Osstest-Versions-That:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Jan 2023 16:35:57 +0000

flight 175568 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175568/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c1df06afe578f698ebe91a1e3817463b9d165123
baseline version:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8

Last test of basis   175516  2022-12-28 19:00:28 Z    6 days
Testing same since   175568  2023-01-04 14:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7eef80e06e..c1df06afe5  c1df06afe578f698ebe91a1e3817463b9d165123 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 16:40:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 16:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471322.731126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6oe-0002eV-3m; Wed, 04 Jan 2023 16:40:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471322.731126; Wed, 04 Jan 2023 16:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD6oe-0002eO-0M; Wed, 04 Jan 2023 16:40:36 +0000
Received: by outflank-mailman (input) for mailman id 471322;
 Wed, 04 Jan 2023 16:40:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+XhT=5B=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pD6oc-0002eI-91
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 16:40:34 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2081.outbound.protection.outlook.com [40.107.21.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80205dd2-8c4e-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 17:40:31 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9276.eurprd04.prod.outlook.com (2603:10a6:10:357::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 16:40:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 16:40:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80205dd2-8c4e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WbS+078ChGzJD9HY8Atp8m+3h4fMzx/IBArexLeGwSiT9RKtdrhdLTsGaOkbkjwkR9Vr6CBF4uRNnLKnHYgCWfF9Tcf6eRyeKpcdg8X96yKDNA+W+pGfmceQfVpYy8jTbZLkKnGNXKS/Zn3WaUQz4Ln2t7c29totqTFlZxNn3w+wcRjGCRsjepsQbeoQ1QclaB6Eb2RnKPh8K8Hec2OXhPQrHtfoG3rJMByxNyZybG4lS42FUUiD+5ED908VZ7AbzLwi/jLRHQENWKutr4dk4c9YiYddDj3fKXZX3FYKJ6MyEgnixTOy9hVW/TKqueakIaFl13Bk9soaEe3khq0GoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8tbsBPvURDFJUYLUmdtEznWOuWGi2HvZk8KWkzCBso8=;
 b=m8jhjXYThJHIWQ/6JWfhYofPvDT5sDEBbSbQvB459zjHg+W+psov+1cntd42j2Om2Z3UzUXe1Wj2SEQOwLKsNWiTK2uluTd4piz7kj+HsFz7Y5Xzrvt6bBgiYBSTSOgh85zjGnYqu4qFTIckMlEN4SBZeS+SaEfg1p/VzW/LuWZxmunM+VqFhIjih0CEiYJh69ECfRR11Ebf8K1wYvOme8L4MyrRXR7LlreNtUfE3/lTUwhweU/FzwsNG6ZxA8iGzi/h+auu5KimrCRM54WJXxZOZBeS2tYlaX6lTcXegSuZvdGka0ncIz0c+ksANKMyMD5E1bVZSjKNtLWBFdmYGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8tbsBPvURDFJUYLUmdtEznWOuWGi2HvZk8KWkzCBso8=;
 b=nq5bovy61x9fgha32KEREBG3yELEVDwjWbAm3/4yKDU3s/2QJAF9K3R9VWJfnUUUAjyC7HLXJ5AKNemRRcjrWhwYMWV8Yxj7hGQVgmCUbO53uBQUrYLBveoQcz84W5qjy3DmWCnsLNg1uS4iuIO/FjlMxG9bH1P7tb1XQjVk7Mxy1TujrpTfT7d2NJsxEqZJ/yd7VdJKBFOViCpCg0sHAgvjhk9UWBiI3sSx67DGJlWlGnmkK9ega7VpNcr19111KMFYWOH7iAbN3+m0t6/Vs+oedl6VGLg7Lt7xmQNgBdo9BrKcISLXXvgDwYRgn4CZetL7JGreeGM0WEQrhZCi/g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <540a449d-f76d-eb16-4f98-c4fb3564ce98@suse.com>
Date: Wed, 4 Jan 2023 17:40:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230103200943.5801-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FRYP281CA0009.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::19)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9276:EE_
X-MS-Office365-Filtering-Correlation-Id: a1583721-fa72-41db-f6b3-08daee72616e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jan 2023 16:40:25.5784
 (UTC)

On 03.01.2023 21:09, Andrew Cooper wrote:
> A split in virtual address space is only applicable for x86 PV guests.
> Furthermore, the information returned for x86 64bit PV guests is wrong.
> 
> Explain the problem in version.h, stating the other information that PV guests
> need to know.
> 
> For 64bit PV guests, and all non-x86-PV guests, return 0, which is strictly
> less wrong than the values currently returned.

I disagree for the 64-bit part of this. Seeing Linux's exposure of the
value in sysfs, I even wonder whether we can change this like you do for
HVM. Who knows what is being inferred from the value, and by whom.
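[For reference, a minimal sketch of how a consumer of that sysfs-exposed value might interpret it. The sysfs path in the comment is the one provided by the Linux Xen driver; the threshold constants are the conventional x86 layout values and are assumptions for illustration, not taken from this patch:]

```c
/*
 * Illustrative sketch only: interpreting the virt_start value, e.g. as
 * read back from Linux's /sys/hypervisor/properties/virt_start.  The
 * constants below are the conventional x86 layout values (assumptions
 * for this example), not definitions from the patch under discussion.
 */
#include <stdint.h>

static const char *classify_virt_start(uint64_t virt_start)
{
    if (virt_start == 0)
        return "nothing reported (HVM, or the proposed new PV behaviour)";
    if (virt_start >= (1ULL << 32))
        return "64-bit PV: start of the upper canonical half";
    return "32-bit PV: guest-kernel/Xen split below 4G";
}
```

For example, the conventional 32-bit PV value 0xF5800000 falls in the last case, while 0xFFFF800000000000 falls in the 64-bit PV case.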

> --- a/xen/include/public/version.h
> +++ b/xen/include/public/version.h
> @@ -42,6 +42,26 @@ typedef char xen_capabilities_info_t[1024];
>  typedef char xen_changeset_info_t[64];
>  #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
>  
> +/*
> + * This API is problematic.
> + *
> + * It is only applicable to guests which share pagetables with Xen (x86 PV
> + * guests), and is supposed to identify the virtual address split between
> + * guest kernel and Xen.
> + *
> + * For 32bit PV guests, it mostly does this, but the caller needs to know that
> + * Xen lives between the split and 4G.
> + *
> + * For 64bit PV guests, Xen lives at the bottom of the upper canonical range.
> + * This previously returned the start of the upper canonical range (which is
> + * the userspace/Xen split), not the Xen/kernel split (which is 8TB further
> + * on).  This now returns 0 because the old number wasn't correct, and
> + * changing it to anything else would be even worse.

Whether the guest runs user mode code in the low or high half (or in yet
another way of splitting) isn't really dictated by the PV ABI, is it? So
whether the value is "wrong" is entirely unclear. Instead ...

> + * For all guest types using hardware virt extensions, Xen is not mapped into
> + * the guest kernel virtual address space.  This now returns 0, where it
> + * previously returned unrelated data.
> + */
>  #define XENVER_platform_parameters 5
>  struct xen_platform_parameters {
>      xen_ulong_t virt_start;

... the field name tells me that all that is being conveyed is the virtual
address of where the hypervisor area starts.

Jan
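[As an aside, the "8TB further on" figure from the quoted comment can be sanity-checked with a line of arithmetic. The constants below are the conventional 64-bit PV layout values and are assumptions for illustration, not defined in the patch:]

```c
/* Sketch: the 64-bit PV layout arithmetic from the quoted comment.
 * Xen lives at the bottom of the upper canonical half; the guest
 * kernel's area begins 8 TiB further on.  Constants are assumptions
 * (conventional values), not taken from the patch. */
#include <stdint.h>

#define UPPER_CANONICAL_START 0xFFFF800000000000ULL
#define XEN_RESERVATION       (8ULL << 40)  /* 8 TiB reserved for Xen */

static uint64_t xen_kernel_split_64(void)
{
    return UPPER_CANONICAL_START + XEN_RESERVATION;
}
```

That puts the Xen/guest-kernel split at 0xFFFF880000000000, distinct from the userspace/Xen split at the start of the upper canonical range.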


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 16:43:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 16:43:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <30ed41ab-f7c9-15fb-8f4b-b2742b1d4188@aol.com>
Date: Wed, 4 Jan 2023 11:42:43 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230104144437.27479-7-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 1/4/23 9:44 AM, Bernhard Beschow wrote:
> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
> TYPE_PIIX3_DEVICE. Remove this redundancy.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> ---
>  hw/i386/pc_piix.c             |  4 +---
>  hw/isa/piix.c                 | 20 --------------------
>  include/hw/southbridge/piix.h |  1 -
>  3 files changed, 1 insertion(+), 24 deletions(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 5738d9cdca..6b8de3d59d 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>      if (pcmc->pci_enabled) {
>          DeviceState *dev;
>          PCIDevice *pci_dev;
> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> -                                         : TYPE_PIIX3_DEVICE;
>          int i;
>  
>          pci_bus = i440fx_init(pci_type,
> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>                                         : pci_slot_get_pirq);
>          pcms->bus = pci_bus;
>  
> -        pci_dev = pci_new_multifunction(-1, true, type);
> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>          object_property_set_bool(OBJECT(pci_dev), "has-usb",
>                                   machine_usb(machine), &error_abort);
>          object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
> index 98e9b12661..e4587352c9 100644
> --- a/hw/isa/piix.c
> +++ b/hw/isa/piix.c
> @@ -33,7 +33,6 @@
>  #include "hw/qdev-properties.h"
>  #include "hw/ide/piix.h"
>  #include "hw/isa/isa.h"
> -#include "hw/xen/xen.h"
>  #include "sysemu/runstate.h"
>  #include "migration/vmstate.h"
>  #include "hw/acpi/acpi_aml_interface.h"
> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>      .class_init    = piix3_class_init,
>  };
>  
> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
> -{
> -    DeviceClass *dc = DEVICE_CLASS(klass);
> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> -
> -    k->realize = piix3_realize;
> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
> -    dc->vmsd = &vmstate_piix3;
> -}
> -
> -static const TypeInfo piix3_xen_info = {
> -    .name          = TYPE_PIIX3_XEN_DEVICE,
> -    .parent        = TYPE_PIIX_PCI_DEVICE,
> -    .instance_init = piix3_init,
> -    .class_init    = piix3_xen_class_init,
> -};
> -
>  static void piix4_realize(PCIDevice *dev, Error **errp)
>  {
>      ERRP_GUARD();
> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
>  {
>      type_register_static(&piix_pci_type_info);
>      type_register_static(&piix3_info);
> -    type_register_static(&piix3_xen_info);
>      type_register_static(&piix4_info);
>  }
>  
> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
> index 65ad8569da..b1fc94a742 100644
> --- a/include/hw/southbridge/piix.h
> +++ b/include/hw/southbridge/piix.h
> @@ -77,7 +77,6 @@ struct PIIXState {
>  OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
>  
>  #define TYPE_PIIX3_DEVICE "PIIX3"
> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
>  #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
>  
>  #endif


This fixes the regression with the emulated USB tablet device that I reported in v1 here:

https://lore.kernel.org/qemu-devel/aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com/

I tested this patch again with all the prerequisites, and with v2 there are no regressions.

Tested-by: Chuck Zmudzinski <brchuckz@aol.com>


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 16:49:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 16:49:01 +0000
Message-ID: <23360b49-5e64-4b8b-49b1-214a26d9fd47@suse.com>
Date: Wed, 4 Jan 2023 17:48:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/4] xen/version: Drop compat/kernel.c
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230103200943.5801-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 03.01.2023 21:09, Andrew Cooper wrote:
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -169,6 +169,9 @@
>  !	vcpu_runstate_info		vcpu.h
>  ?	vcpu_set_periodic_timer		vcpu.h
>  !	vcpu_set_singleshot_timer	vcpu.h
> +?	compile_info                    version.h
> +?	feature_info                    version.h
> +?	build_id                        version.h

Oh, btw, another minor request: can you please sort these by name (as a
secondary criterion after the name of the containing header)?

Thanks, Jan
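[For illustration, the requested ordering corresponds to a comparator like the one below: sort by containing header first, then by entry name. The struct and sample entries are invented for this sketch; xlat.lst itself is just a text file:]

```c
/* Sketch of the ordering being asked for: xlat.lst entries sorted by
 * the containing header first, then by type name.  The struct and the
 * sample data in the usage note are invented for illustration. */
#include <stdlib.h>
#include <string.h>

struct xlat_entry {
    const char *name;    /* e.g. "build_id" */
    const char *header;  /* e.g. "version.h" */
};

static int cmp_xlat(const void *a, const void *b)
{
    const struct xlat_entry *x = a, *y = b;
    int r = strcmp(x->header, y->header);    /* primary: header name */
    return r ? r : strcmp(x->name, y->name); /* secondary: entry name */
}
```

With the entries from the quoted hunk, qsort() with this comparator places the vcpu.h entries before the version.h ones, and within version.h orders them build_id, compile_info, feature_info.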


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 16:49:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 16:49:46 +0000
Message-ID: <da0528c8-7260-a1cb-7e6f-3d93493b060d@aol.com>
Date: Wed, 4 Jan 2023 11:49:34 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost
 <eduardo@habkost.net>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>
References: <20230102213504.14646-1-shentey@gmail.com>
 <bd4daee7-09df-4bfa-3b96-713690be9f4e@aol.com>
 <0de699a7-98b8-e320-da4d-678d0f594213@linaro.org>
 <CAG4p6K7hcJ-47GvsEvmuBmdwP2LsEC4WLkw_t6ZfwhqakYUEyQ@mail.gmail.com>
 <aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com>
 <AB058B2A-406E-487B-A1BA-74416C310B7A@gmail.com>
 <00094755-ca61-372d-0bcf-540fe2798f5c@aol.com>
 <7E657325-705A-47EA-A334-0B59DF0DF772@gmail.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <7E657325-705A-47EA-A334-0B59DF0DF772@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 5318

On 1/4/23 11:12 AM, Bernhard Beschow wrote:
> 
> 
> Am 4. Januar 2023 13:11:16 UTC schrieb Chuck Zmudzinski <brchuckz@aol.com>:
>>On 1/4/2023 7:13 AM, Bernhard Beschow wrote:
>>> Am 4. Januar 2023 08:18:59 UTC schrieb Chuck Zmudzinski <brchuckz@aol.com>:
>>> >On 1/3/2023 8:38 AM, Bernhard Beschow wrote:
>>> >>
>>> >>
>>> >> On Tue, Jan 3, 2023 at 2:17 PM Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>>> >>
>>> >>     Hi Chuck,
>>> >>
>>> >>     On 3/1/23 04:15, Chuck Zmudzinski wrote:
>>> >>     > On 1/2/23 4:34 PM, Bernhard Beschow wrote:
>>> >>     >> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
>>> >>     >> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
>>> >>     >> machine agnostic to the precise southbridge being used. 2/ will become
>>> >>     >> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
>>> >>     >> the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>>> >>     >>
>>> >>     >> Testing done:
>>> >>     >> None, because I don't know how to conduct this properly :(
>>> >>     >>
>>> >>     >> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>>> >>     >>            "[PATCH v4 00/30] Consolidate PIIX south bridges"
>>> >>
>>> >>     This series is based on a previous series:
>>> >>     https://lore.kernel.org/qemu-devel/20221221170003.2929-1-shentey@gmail.com/
>>> >>     (which itself also is).
>>> >>
>>> >>     >> Bernhard Beschow (6):
>>> >>     >>    include/hw/xen/xen: Make xen_piix3_set_irq() generic and rename it
>>> >>     >>    hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>>> >>     >>    hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>>> >>     >>    hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>>> >>     >>    hw/isa/piix: Resolve redundant k->config_write assignments
>>> >>     >>    hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>>> >>     >>
>>> >>     >>   hw/i386/pc_piix.c             | 34 ++++++++++++++++--
>>> >>     >>   hw/i386/xen/xen-hvm.c         |  9 +++--
>>> >>     >>   hw/isa/piix.c                 | 66 +----------------------------------
>>> >>     >
>>> >>     > This file does not exist on the Qemu master branch.
>>> >>     > But hw/isa/piix3.c and hw/isa/piix4.c do exist.
>>> >>     >
>>> >>     > I tried renaming it from piix.c to piix3.c in the patch, but
>>> >>     > the patch set still does not apply cleanly on my tree.
>>> >>     >
>>> >>     > Is this patch set re-based against something other than
>>> >>     > the current master Qemu branch?
>>> >>     >
>>> >>     > I have a system that is suitable for testing this patch set, but
>>> >>     > I need guidance on how to apply it to the Qemu source tree.
>>> >>
>>> >>     You can ask Bernhard to publish a branch with the full work,
>>> >>
>>> >>
>>> >> Hi Chuck,
>>> >>
>>> >> ... or just visit https://patchew.org/QEMU/20230102213504.14646-1-shentey@gmail.com/ . There you'll find a git tag with a complete history and all instructions!
>>> >>
>>> >> Thanks for giving my series a test ride!
>>> >>
>>> >> Best regards,
>>> >> Bernhard
>>> >>
>>> >>     or apply each series locally. I use the b4 tool for that:
>>> >>     https://b4.docs.kernel.org/en/latest/installing.html
>>> >>
>>> >>     i.e.:
>>> >>
>>> >>     $ git checkout -b shentey_work
>>> >>     $ b4 am 20221120150550.63059-1-shentey@gmail.com
>>> >>     $ git am
>>> >>     ./v2_20221120_shentey_decouple_intx_to_lnkx_routing_from_south_bridges.mbx
>>> >>     $ b4 am 20221221170003.2929-1-shentey@gmail.com
>>> >>     $ git am
>>> >>     ./v4_20221221_shentey_this_series_consolidates_the_implementations_of_the_piix3_and_piix4_south.mbx
>>> >>     $ b4 am 20230102213504.14646-1-shentey@gmail.com
>>> >>     $ git am ./20230102_shentey_resolve_type_piix3_xen_device.mbx
>>> >>
>>> >>     Now the branch 'shentey_work' contains all the patches and you can test.
>>> >>
>>> >>     Regards,
>>> >>
>>> >>     Phil.
>>> >>
>>> >
>>> >Hi Phil and Bernard,
>>> >
>>> >I tried applying these 3 patch series on top of the current qemu
>>> >master branch.
>>> >
>>> >Unfortunately, I saw a regression, so I can't give a tested-by tag yet.
>>>
>>> Hi Chuck,
>>>
>>> Thanks for your valuable test report! I think the culprit may be commit https://lists.nongnu.org/archive/html/qemu-devel/2023-01/msg00102.html where now 128 PIRQs are considered rather than four. I'll revisit my series and will prepare a v2 in the next days. I think there is no need for further testing v1.
>>>
>>> Thanks,
>>> Bernhard
>>
>>Hi Bernhard,
>>
>>Thanks for letting me know I do not need to test v1 further. I agree the
>>symptoms are that it is an IRQ problem - it looks like IRQs associated with
>>the emulated usb tablet device are not making it to the guest with the
>>patched v1 piix device on xen.
> 
> All PCI IRQs were routed to PCI slot 0. This should be fixed in v2 now.
> 
>>I will be looking for your v2 in coming days and try it out also!
> 
> Thank you! Here it is: https://patchew.org/QEMU/20230104144437.27479-1-shentey@gmail.com/

That fixed it! I added my Tested-by tag to the last patch of v2:

[PATCH v2 6/6] hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE

AFAICT, v2 is ready to go!

Best regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 17:00:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 17:00:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471352.731170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD77X-0006tO-MR; Wed, 04 Jan 2023 17:00:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471352.731170; Wed, 04 Jan 2023 17:00:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD77X-0006tH-Ik; Wed, 04 Jan 2023 17:00:07 +0000
Received: by outflank-mailman (input) for mailman id 471352;
 Wed, 04 Jan 2023 17:00:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD77V-0006rD-PN
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 17:00:06 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3a646905-8c51-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 18:00:03 +0100 (CET)
Received: from mail-bn1nam02lp2048.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Jan 2023 11:59:47 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6915.namprd03.prod.outlook.com (2603:10b6:510:169::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 16:59:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 16:59:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a646905-8c51-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672851603;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=xrAHntuMP5McfeZATKKyDoJutzM7llCarZaRHoOccBo=;
  b=GsCy6fBXA/QWL9tJpZZZOcOxikYpWhWl+NXNWJhXRlVvlYgSVIHx1dNi
   XrvTcEvMvWR4K08OhdEgDBkNcohkpJf8u5BGhWJ1KIylLmNVB8rriiWiL
   kmXHsqC3jLJ3d80J9uRTDg2rcSk8fsqu6JIyhhKyxwyemtX8g09Q9XLT2
   A=;
X-IronPort-RemoteIP: 104.47.51.48
X-IronPort-MID: 90660050
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:aOg0rq/Wp2unswfPhhRRDrUDon+TJUtcMsCJ2f8bNWPcYEJGY0x3n
 zRJXD+Oaa2KY2vyLo0gbY6woRwB6pPSmtAxHlBo+Hw8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKucYHsZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kIx1BjOkGlA5AdmPKgX5AO2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDkli1
 qU8eTlTQSndmu2t5/G3aPRGnvUKeZyD0IM34hmMzBn/JNN+G9XpZfyP4tVVmjAtmspJAPDSI
 dIDbiZiZwjBZBsJPUoLDJU5n6GjgXyXnz9w8QrJ4/ZopTWKilAhuFTuGIO9ltiibMNZhEuH4
 EnB+Hz0GEoyP92D0zuVtHmrg4cjmAuqANxMTOXlrpaGhnWt1FAKAiUrW2HlrMGGo1zuHMpZJ
 kALr39GQa8asRbDosPGdx+yrWOAvxUcc8FNCOB84waIooLL5y6JC25CSSROAPQ2uclzSTE02
 1uhm9LyGScpoLCTUWia9LqfsXW1Iyd9EIMZTSoNTA9A6d+6pog21kjLVow7TP7zicDpEzbtx
 TzMtDI5m7gYkc8M0eO84EzDhDWv4JPOS2bZ+znqY45s1SshDKbNWmBiwQOEhRqcBO51lmW8g
 UU=
IronPort-HdrOrdr: A9a23:Jkg7F65PcBHunGa94gPXwPLXdLJyesId70hD6qkmc20zTiX+rb
 HMoB1773/JYVkqM03I9errBEDiexLhHPxOjrX5Zo3SODUO0VHARL2Ki7GO/9SKIUPDH4BmuZ
 uJ3MJFebvN5fQRt7eZ3OEYeexQpeW6zA==
X-IronPort-AV: E=Sophos;i="5.96,300,1665460800"; 
   d="scan'208";a="90660050"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jxr6a4sdIJQV44zu1DxtIYp1Uvlu9RS5PQGirXM/MCsBCs5JCLfjvs99ltfklQHA1XQBwnLiG/z0GOnIg6hxaZ3RgRKKIEagaARTv3Zz7BXwfTSxVCu0/EpigYQCAEbGbLsXquokjTj418vJ8jHpifypR6Gb3HOVQESOJWevD0pR5nI3S8DG27I5Bd365hLXMMB9WmEZQ8vvmXC+1UweTRDXKLtkIFkwcbPCYiXPJbV9mwfvr25WE5uOwdNvDa/xONHb/MgO4vSn4Tqt2pPl7YQVDAdt2cpp1ghIUIwO/u+uOckU+RTBdVQjLUZOnhYaGS26OwATDg60puQW7Ssr0Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xrAHntuMP5McfeZATKKyDoJutzM7llCarZaRHoOccBo=;
 b=ToNgNzYGJpYf1P2q9NKt2gq3c1S1+BTAvxLg68hycriFkS3/JL6H6gbyTBFvmRm//BVl0aYqz2C/b2eIUZ6TF89n1DN/klrdaVScN8uXBZ8bdFMsx5Vc2HZw0FPp/pwlXCD4d5OelpP+uXp8LVKLbjd9TTfOP4YrtlwncENq3Qm6NeNl4YPIM9tBbYrcbAl9mEQTKCE3LpQuWA2z9VMyNmB81K1KU10CRJP0J/qNQOxFlDxZYnzStnMn7bxLKbpqSQmIWvp1F5v0rdnCAn753r0sAWLhb4oVqFyBkECxPIDNEhTmU7h07yZmr8kUMJ1PdUXmVrKIm2bxtcZtIcjZDw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xrAHntuMP5McfeZATKKyDoJutzM7llCarZaRHoOccBo=;
 b=miNCeBRt/YYCbyQkuRD6PDVVikFqHO6mDYb2yb2LTtZKBJKGyAsvoJb2vL6M9Hiy1GUXrEzPhWc0UmMk47LQAEM0Au34+ZDIQJd/lXb+7xyAB9xMfro0qJR9R4ENY1UwdCfraFh+5LBGtinIsGRCvINo3vnT/06diirN8UUAl04=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/2] x86/shskt: Disable CET-SS on parts susceptible to
 fractured updates
Thread-Topic: [PATCH 2/2] x86/shskt: Disable CET-SS on parts susceptible to
 fractured updates
Thread-Index: AQHZIC1dxLDr/Ft5R0KgtQNaFohPLq6OZp8AgAAU6IA=
Date: Wed, 4 Jan 2023 16:59:44 +0000
Message-ID: <c452eb49-15c5-f464-bab9-4ec93b840dcf@citrix.com>
References: <20230104111146.2094-1-andrew.cooper3@citrix.com>
 <20230104111146.2094-3-andrew.cooper3@citrix.com>
 <a225c263-dda1-8109-3dcc-ff7111f277fe@suse.com>
In-Reply-To: <a225c263-dda1-8109-3dcc-ff7111f277fe@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|PH0PR03MB6915:EE_
x-ms-office365-filtering-correlation-id: b9488f9f-af29-4686-6ca4-08daee751411
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 hN32oQmD6AB1vGKL39bquuQ7vYQZaCnslXqufOVKmRpZDSKCHUCg0XTLPYSpFKDwA7tAjlbwr1D5nUxlSpHAbmcrFa03okjrNDlWB4HJdS8+UwPLOXJ36DpHOlgMmRbY1djYP+JxfbekW2t39Z605FApvxjATJ5Z4qneC3GUPdIfZ7t6OD8keanwTko4DxrQAz4eJffqmRx59C18CLcQKe9BhsvixSf9XJLtuH7htx3A3Rr7EgqycmIdh8v8We+P2ITJXPj/EHNzaZlkyiVXgKysuzSZr1VmLvhg6V7MGljyQHhznpJK+P9bUgm1IVoPvF9OzsaYaOFE7dabO3eKcWpSeqnNlldEWiRH1uVz18jGjLEu409lsD2qiyk5dhjrX2AuKmm+kFTWoSHERayW4c7PgXGatTLF0TPcTe9ru3Vf4vqjmHX2WJ2SqPUHXI4PkAeXvXl6u5pK3k0bW+11o+bzT+S77xjokeUX2AxjqsQ954jW6vASzu/e3YwQ4M1qvc+2D1iVweagVOpqPOTZtvgwjL8b1vUmcBMHyHMjaWme+o2ZKGp4OHnI8HKumjWb4mf634+j29ex1ro7cOV/p3lHNTFTZA5wZD8p4RCX4o4H7+kHNaLIt/xxjEzbVwzjw5HdHSwfuksAfUiB+HdaOCfoA1UpptJRg0AfcDdqe3GLqVsG0t5TvO3ywUsFMoKXnW7b8V+IC8xtJ0EQKY5Y0MmHDuqKUOlFtzmUaFo3XtjNX3BtVF+Nvv3+ng1iptDQtQbh9bkUhA3jdWCSWRnLHw==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(346002)(39860400002)(366004)(396003)(136003)(376002)(451199015)(5660300002)(2906002)(15650500001)(8936002)(41300700001)(4326008)(478600001)(316002)(8676002)(66899015)(54906003)(66476007)(66946007)(76116006)(6916009)(66446008)(91956017)(64756008)(66556008)(71200400001)(6486002)(31686004)(6512007)(26005)(6506007)(83380400001)(122000001)(82960400001)(186003)(38100700002)(53546011)(2616005)(31696002)(86362001)(38070700005)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?eTdlUnlPb1dpYnJ1QUtCOVZtS2l3d0NVcjc3SlJOK0k0U0RCcHVNem1EVkQ5?=
 =?utf-8?B?cjJHQlJ2SXVKeC8waFZueWhGaExFOURmeWlTZ2hFNWZVTFpTTERuSk8vWCs5?=
 =?utf-8?B?bnJQcXVEVCt0RnRKeU15SEpJYnVkYnp1azVPNEdwOWQyWjJ0ak9LVllvd1RN?=
 =?utf-8?B?dHRwdSs3cUdTUzM5WnpjZ2RDZVdaaUJodW1SaE5MMkVhUnNJQjBQWGc3enky?=
 =?utf-8?B?bnd1dE1abW5HMUcrQjFScWpUOVNJVHRwQ29GeUJZT0NGWWpsNzlFL0QxSnBy?=
 =?utf-8?B?TDNHNmhVbkdiTi85WnN3blFzNy84NHh6dXlnbzNZNVlJZTdkWXBhZ2NwTWFu?=
 =?utf-8?B?L1preVA1cGJyMzI4b05QU3RoNTUwdENuRnN5a2VpWDY2NGIyNy8zaXlNZ1dE?=
 =?utf-8?B?Wlk1N1JsL2NOZzA3blRaWEhGVFRRK1Q2R0ticzBuN0lhc0Nkd0MyeGZvcFFa?=
 =?utf-8?B?RlVvV24rb3hXWmdFeGF0cU9HbTFNckpvakx5N1RHeVhhNkRsdVZOWjg2T3Yr?=
 =?utf-8?B?VERwL0ZGSjdaK3Ewc3NRYktkTCtxREtkbVYxMkRnbk1CQWxRcjM1TW9vVUdt?=
 =?utf-8?B?U2ZhMkxjemN0d0tVWi9QYnpBdERNanNPSUp3U0h4blFNM1RSRG90WGVDK0la?=
 =?utf-8?B?Y25PaXN1YXVwYWtqaVdIdTB0dTE5Vmc0c2U2YW1XdTlrR3VvRFNKRHRJcnpa?=
 =?utf-8?B?OWc5TFh2ZHFFQ3h2eEE2K29naHc4SHZ5SGRqODh4M0ZuRFBIR3JQM0dJcXFY?=
 =?utf-8?B?QTNGM1RVSzBLUUVFOUl6UGEvdzVMWGd5aHNBc0NGTFYvc0krNncxWDUxeks5?=
 =?utf-8?B?bEJpY0NNdjBzUlNUL2JKME02UWJ4QWI1cW5EeS9taXJjOEI0UXFJZ0ZlUnhy?=
 =?utf-8?B?Z0dkTXZXWTM1KzNsdTU5UHZaWmxaTkkvM3VIVlFwQVVBd2toYzRyUWtFcXh4?=
 =?utf-8?B?bmM4dFRoVmZpUmM4NE1lci92WVhoOUxqQUQ3SzZ1bTA0d1QwZ2RrUi9EUEt2?=
 =?utf-8?B?V3FFbTRsNWxmRklVRWdZa1ZySEUwTzczU3kyVExLanhWT1cxTUZDR0sxUEp5?=
 =?utf-8?B?YVJ4WGxDRWdQcVoxajlKN2FZb0w3SjFHeE9pOVJNanZqdTR3Tkxpb3RsR1lT?=
 =?utf-8?B?c2FRZ1ZHek1pUDE1QmJpTVlCMS9Ya0lIYm94MzJkK2tRSzhMVnhoaU9VUmFn?=
 =?utf-8?B?c3pmc2l5d3FhU0NXbU1BN1RWWE41dC9iTmwzOFQyTEJFSjNqQWVlSSs1Rjlz?=
 =?utf-8?B?Qnlhbi9oRmZWUDg5NXF3SmJOZ0Jmd3IwdDNaeWd1bkwyaVd1bjgvWk5lSE1r?=
 =?utf-8?B?ZjU5UnBhZEdVWnEyWEFKaXdLckQ5VTVnc05ML3lBVEdOU2owWS84eTU0K3hm?=
 =?utf-8?B?WVVCNnJEVHJ4eTFLVXhlV0FNeU1xdUswUytCNTN6Z3J4SkluUEhRRWJRVWhU?=
 =?utf-8?B?aTFqQmMxWFRPVEx1ckdKWStOVC9uM3NVOStxcUdkZXUvK1pMOUtpUWc0NmdI?=
 =?utf-8?B?MTh3MTBOZVY3OTRtLzBxL3JZK2tJeWlUK0dGT2ZZYW83NWNhaEFwaksveTl3?=
 =?utf-8?B?RTBpb2VhMXpYRGlOeTdTRWVqZ29YdnE3VWdsUEpLWmF3TUtIcUNPcllSTm9w?=
 =?utf-8?B?Y0k2UkhSWDhieXViQTQ3VmJCMlpCc0twUWJvWkJPc011d2RZSEdPRU0rTjdY?=
 =?utf-8?B?aCtIZmdnY0Z6WnFpZTN5VFFoR1F1blVVTzhEdk1FbkMvbVAreTB5QmdMbjdu?=
 =?utf-8?B?SzVmVnpxMVg0TUZ6YzUrTEcwWmR0TUtwK1JmdkNoVkxXR1JqQWdYK2VJdVdw?=
 =?utf-8?B?Zjd6ZHV5a3BCTERiQStyUkc4ZFd6emtzcUJDUnpvbmVMWTJMOTBxR2taVm5Y?=
 =?utf-8?B?KzI1MGxQb0hWL2dFZlc2MUpCcHExaC9nREQ4WHQ5dHlLM2pwbXJteTNqQUJa?=
 =?utf-8?B?dXplcDRrQjkyWVV4MFhDaXgzd3BlS201bjN2L3BzTXArTWVQYk56VWlTTXpC?=
 =?utf-8?B?SkNXbHltZzdXTVhEZjBHakVoTXhlc2JSL3phMnhMOEdwcWU2VVpwQkFSY3hM?=
 =?utf-8?B?cVk4QkdaemdUdjVKNFRRN3dWeUJKUEladVNCaXdTd1hjb0hGOElTWlY5SE05?=
 =?utf-8?Q?p49Szy75yyPJgEyy20G9OzKmL?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <B52444D3FDE9BF46868662C41391357F@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	7p8ZOtUCv7y8d/UkurIzYx1RfNhhYdtZQDeDqwVLB7oIgV+COjt9D0AIUXHnnbUSMJUoITrFRksTNpTPljwnXQlYka9YNx8q/H87h+wc5MU4kBfhbdJ5hOfeA8ZjcGEDX451NRGrNV+HlMXeccMKRAT+ogoTI3/XZpio2U6wXuZ/A7cqsK6Hx7PnJ+9VAuGxT2lI3dFDA54b4SQfoJOdojmykQ/r4ZjxmzXjNRj+BNq3SI1MG/uclgPU1yeP+029AyAQJfe4xtT4X/dRanTQi3KuX+IBj79lTq+mT0O9XaSc38w2lufde2YbiKyJRei888V35UQ2W8JCtjqEe5tz6TikkRw3hOgbzzZsBhj5fMgVZh+2yrJmfHMufPdnJRaRKoBpyBU5o/fFj7qhdzSk+pFGzXdugnGLGoR/kqVHDE8P0hn2V6ayjGid+yjR7nB8xCHnl+15naudTCdwPlfYjuXxNG8mf5m2W84CMGyDe3P0cjPvhburlEqndnyJ6I9XNrDD10APdKiMt4AdFtZGKtotHL7qGcgy1w93+NnMiOTWLmLFUkrtRvQ0JZnWSBiexks3LQJ8inYs90IgPlVxcSZT/lKv5majQM+ufz2TVE9QiHHBhp8/qFN09vimTTpwrR0vNUenuvW2DGvpBV3+Jrs5EZIXzti6vD1d14NE7ovQFkcHclk4V/q82eZ6aRm1c2aLcVBwJ9a9+5FfcYx/Njg3ZPknwa+jS9pFNGq6HhHqNzUOZYlnMfM95HugjIUvJSlqQ2h3idjX3xPxMuZu07b3EEp1SUDXx3uLHy9CFDmRznpD1mMftVCoOPVLIjfW
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b9488f9f-af29-4686-6ca4-08daee751411
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 16:59:44.1148
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Pjdcl2Jd0jlpdPr+atwBr5QxKS8LShKPlj9cQjsEPygoo+C9x62eMN2oMpnLBepmEt9ltDd44ccO4qlt8+88RFMM+eFo5/s799B+TCSwfDA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6915

On 04/01/2023 3:44 pm, Jan Beulich wrote:
> On 04.01.2023 12:11, Andrew Cooper wrote:
>> Refer to Intel SDM Rev 70 (Dec 2022), Vol3 17.2.3 "Supervisor Shadow Stack
>> Token".
>>
>> Architecturally, an event delivery which starts in CPL<3 and switches shadow
>> stack will first validate the Supervisor Shadow Stack Token (setting the busy
>> bit), then pushes CS/LIP/SSP.  One example of this is an NMI interrupting Xen.
>>
>> Some CPUs suffer from an issue called fracturing, whereby a fault/vmexit/etc
>> between setting the busy bit and completing the event injection renders the
>> action non-restartable, because when it comes time to restart, the busy bit is
>> found to be already set.
>>
>> This is far more easily encountered under virt, yet it is not the fault of the
>> hypervisor, nor the fault of the guest kernel.  The fault lies somewhere
>> between the architectural specification, and the uarch behaviour.
>>
>> Intel have allocated CPUID.7[1].ecx[18] CET_SSS to enumerate that supervisor
>> shadow stacks are safe to use.  Because of how Xen lays out its shadow stacks,
>> fracturing is not expected to be a problem on native.
> IOW that's the "contained in an aligned 32-byte region" constraint which we
> meet, aiui.

For practical purposes, it is "contained within a single cache line".

AMD's position is "if the OS doesn't have suitable alignment, or doesn't
place the shstk on WB memory, it gets to keep any resulting pieces".
They have confirmed that if there is suitable alignment and it is on a
WB mapping, then no vmexit can occur which would fracture the shadow
stack.

Intel retroactively tightened the constraints in microcode on TGL/ADL so
that MSR_PL?_SSP (or ISST pointers from memory) would suffer #GP if they
didn't refer to the top word in an aligned 32-byte region.

But this proved to be insufficient to fix the problem, hence the new
CPUID bit, and recommendation to disable supervisor shadow stacks by
default under virt.

>> Detect this case on boot and default to not using shstk if virtualised.
>> Specifying `cet=shstk` on the command line will override this heuristic and
>> enable shadow stacks irrespective.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one nit (below).
>
>> This ideally wants backporting to Xen 4.14.  I have no idea how likely it is
>> to need to backport the prerequisite patch for new feature words, but we've
>> already had to do that once for security patches.  OTOH, I have no idea how
>> easy it is to trigger in non-synthetic cases.
> Plus: How likely is it that Xen actually is used virtualized in production?

All of our gitlab smoke tests, a larger part of QubesOS's CI, and
non-default PV-shim configurations.

As soon as we get guest CET working, then part of OSStest, and a portion
of XenServer's general testing too.

It does need backporting to all general support trees.  (When I first
started working on this problem with Intel, that would have included
4.14 too...)

>> @@ -1099,11 +1095,45 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>      early_cpu_init();
>>  
>>      /* Choose shadow stack early, to set infrastructure up appropriately. */
>> -    if ( opt_xen_shstk && boot_cpu_has(X86_FEATURE_CET_SS) )
>> +    if ( !boot_cpu_has(X86_FEATURE_CET_SS) )
>> +        opt_xen_shstk = 0;
>> +
>> +    if ( opt_xen_shstk )
>>      {
>> -        printk("Enabling Supervisor Shadow Stacks\n");
>> +        /*
>> +         * Some CPUs suffer from Shadow Stack Fracturing, an issue whereby a
>> +         * fault/VMExit/etc between setting a Supervisor Busy bit and the
>> +         * event delivery completing renders the operation non-restartable.
>> +         * On restart, event delivery will find the Busy bit already set.
>> +         *
>> +         * This is a problem on bare metal, but outside of synthetic cases or
>> +         * a very badly timed #MC, it's not believed to problem.  It is a much
> Nit: "... to be a problem."

Fixed, thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 17:04:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 17:04:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471359.731181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD7Bt-0007WA-6Q; Wed, 04 Jan 2023 17:04:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471359.731181; Wed, 04 Jan 2023 17:04:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD7Bt-0007W3-3G; Wed, 04 Jan 2023 17:04:37 +0000
Received: by outflank-mailman (input) for mailman id 471359;
 Wed, 04 Jan 2023 17:04:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+XhT=5B=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pD7Bs-0007Vx-0y
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 17:04:36 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2089.outbound.protection.outlook.com [40.107.14.89])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dc43f200-8c51-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 18:04:33 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9676.eurprd04.prod.outlook.com (2603:10a6:10:308::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 17:04:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 17:04:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc43f200-8c51-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XPyu6UHebCGyvPMdGlvnMEOnCDrvTE0VjL6ZB9MToYvCOMmiOr8T4r296TNmDVAdTo9yq4wNhBQZCNuNlHRbevMjX6FWpYGIK9G11Ent188TDy2cLu61byLLjJ/oGb5HBMiDgUI6EXWowexnIf1EMzoe3twddbIpK+RGQUJ1+xaJiFZKQ1mBZ9cWlZWllbX6LTzPCt7CmMLgrdtoLdkdbP0eDUJQzaXq0qNIXMr7J5ec9i0i9Bodt6CMnFJVhihdhF3OpHyWaeWWr4GChZpyUjCI4kL+U2nTqDX5aBqeBw+uFjw2ZFwFp+HdpnSbdR77JUeMLjXx0Iklfk5PsTmgPg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cTxVePJt05zGmcAekCz40ZmvpoevLyoOtQJxIKsCzm8=;
 b=E0vNktuH22uBUt5578eWUBpsB0n8QttkP2ZI/s84JPB0tDgQNN7H4mMyEoazdaeoRWnVQB/7SWMXsqFvqlE9SnlIgdco5a1UI71dfAT4xaHJN3wpNr+PykVnlM0Ob8GG3I2VQ+wce+vf2SKOX0QR7rj1lXqiAMpwXA/BX8/Adso9gFDObWL5EVVkiwPu66uaEk1wFHZLZptN5Zx1o60yXQIBRCMex8JcYR7kQFzpdFeIj4V3m89EkmBxO5aKvLroKeO55nHsaZuwW2ptXoFGSMTKszXwku59qhUO01YaDeGcqzvVUA+WQ462PgVsBcRfeOuMbaQdQcDI4G+ir63FGw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cTxVePJt05zGmcAekCz40ZmvpoevLyoOtQJxIKsCzm8=;
 b=Sny63Sz/PpvfUFjXIIbGh1AoyCy2k0tXOXKfrniijkccUmGotk7lY4wQnx001USzfLHohpUxTvXPCdZWpfEK/qmvFpL2cWeKaSBkEOf+aMKfyD7mpxUPUfSjmEgmzN7bRRSFyBpCOAq7lIQQWt74+zwiwtrHU0JCK4wWVttRtAVf3VSV5/mK4MZi+LdRAsTsM7Edt3W6B2u1ybymH50iRZsb1+oXA+iExVCuAJ5uiyQbjGamYhXDJCyshQ/GEbE2jgBoBM5d7njKuZRG7II//YHTpqqgAFWZv5FRNXhssiDnYd6D8YQTFiIQE49D1f5IYjsCYsqyTsMV0AlP9VOxOg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a0cb9c83-dc4d-5f03-0f65-3756fadfde0b@suse.com>
Date: Wed, 4 Jan 2023 18:04:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230103200943.5801-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0092.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9676:EE_
X-MS-Office365-Filtering-Correlation-Id: a85aab11-6941-48cd-78aa-08daee75bee8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MMUfiKKMjf9g2sg6L3bMGYcH8OxT3stgJaBjqVMQslzvgzAJZdrKe3lvUo9MpKSyJEl49dAEq8JGq2PGtu4PQedWJqJICa70+ymQ2xWtQzuYWUu0SEDHKaKsYBBMzkPLhSMwjfSubnljX/wPZqq5dSayrAQkHpGUBQ3E5k2fUKbg3bWdH1mSYV/pq2XMJhtkDe0brdz7wSmpiJSyJxPqHqprp4EYW72qF4rrxzALnNr5xU2rBL4JWcyEsug1dUadbT0ky/YB3fTuI6HGRSazU+j62nWeBCDzo5BAM3Uv+97d2yYxplAUPLEVlWTZ5MHgBqR05E1jtKN5sKnHufZOgV77dqpamsEAcKMgQgBac69XTADbGozjC4Jmb5LaIHyo1AKCyvfldYXZMtrUrGC2/CKlOIveg05nKWUBOOvguwOLMFSZrhmmauO9T+NYQn0qygj6/xkIMYkoFaas6cZeYQOMSd/kinZOXnbhNEVf0z7Lmc+MVp37SEQQrrwIxiudRCiSp/1NDZKGYlmjlkSNdI5VgWdrdtBBzeG1qcDpDqJi9mG4XpjvrHhdisCPNvLLsytjlW6mLBrESg5gIZZ4HhxZv870DdvQBZTljdP9I2j+PsmfzoBenXSUlYmiR9d3y+7MtuJTleeTjyqIIiNsELMIChKaJQoCKK5Hkx0OxTPRvGpzjS1oe0XTB8xaScvLng4MkIELXZq33ppesTGD4wxcMOoWvQU7fcVWpRfFuK8=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(346002)(39860400002)(366004)(396003)(136003)(376002)(451199015)(5660300002)(2906002)(8936002)(41300700001)(4326008)(478600001)(316002)(8676002)(66899015)(54906003)(66476007)(66946007)(6916009)(66556008)(6486002)(31686004)(6512007)(26005)(6506007)(83380400001)(186003)(38100700002)(53546011)(2616005)(31696002)(86362001)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZTNTNlBmLzE1cTVvSDcvZ0ExTVFUN0dCc0p0NEhRSUErNVVkSkJGemsvamFM?=
 =?utf-8?B?WUhlSFVUeWhQamlUclFBeDBPWGhPZ1BlWlJvS3hSTXZnUFAvTFNiN3FrTVhh?=
 =?utf-8?B?eS9aZHNEbzUyQk1FbzRBMExtSkVxRCtES1JsYURGSStlR1ZkclByY0syeG9n?=
 =?utf-8?B?RlNxVlpLdWxrdnlNMTRteWdHaHlFR05ZQlBLU2xJWDdBMFA0SHpYb0l6d0l6?=
 =?utf-8?B?U3paamQzS3BHN3hXU2VpWVJMbU45ME1qQlZzYzFXbDVTT2hVenBVLzZobnZT?=
 =?utf-8?B?NFp4SE1CMXAzaWdTeFM0MGJjK0tVVGNmMS9tclJ4dUdwQnF6aFVqazNvS1lP?=
 =?utf-8?B?bkQ2c0dwRjhyYkxqWitjZG5BTVJsQ1VVbFFxMHZYUVAwVEZmT1daeGFzeWZw?=
 =?utf-8?B?UVFJNFZTUFlCbktDMzNndDRoTDUvM3grQ29ZbVE0ZGYxMWpPdWhpb3V0YjVx?=
 =?utf-8?B?WjJFTjFIdFN0ZGdUdzBjOG4wQ2NYcGxUOHk0amp5M2d1eVR0Y0FNbVozNDlN?=
 =?utf-8?B?OGQzbHlrNmFpRFltRHFjSTFQRWhXUnFEQjRrbHhyNDVibmVOYUlJMkpueThV?=
 =?utf-8?B?Mno0V2NoS2VCQmNtclEzajlCNjNzNUU1MW4xdXpkcVY0a0Y0ZXp3ZEx1Y1ZD?=
 =?utf-8?B?bEdJTnJFUDFHdDNkQytOdFFFUHNpanUvWGtTR1J0dzZESWI1eGNzVWMzcml5?=
 =?utf-8?B?MnBPSWxNWHVNaStnVVZuNVorSStWcUJUeW1HTjJkaVMzZzQ3VkJhTkw4aUJv?=
 =?utf-8?B?ZFU5V2EvT3BReEdkTFJHOUhkckUvUy9YbU54MmI3K2o5RzVWSmVhUFg0dW00?=
 =?utf-8?B?ZEVPTm9rN3R2d2Q4OCtIMmZiQjVUVFRMNzZuNEJrenBzT0dwK3FOaHRWNHVB?=
 =?utf-8?B?emN6akp2UHFVK1lKSFpJUHVMMTFqYzRzb0NidFUveWlrSk53UW4yVnhWajRy?=
 =?utf-8?B?Uy80em9jRGtyZld3NTBIYUx4RTdLMDRxQi9abVh4MXFObjhSS2Y2Ylc2U3Z5?=
 =?utf-8?B?YWFtb2J3QVI3MnFIL3dZYjlLN1prWjg3cGlOblZQTFpmOFlON08ybnVVTC9u?=
 =?utf-8?B?SDU1dnpvSzdrUWNCMXRLTDUxU21heVVZWFF1aFFqVDBvSEY4c3NBdk9IbGxy?=
 =?utf-8?B?RUh3OUVYajVzTytlMGtOSkhVeDJsa24zcS9RZWU1elhJcytmTTU1MDlPWWEx?=
 =?utf-8?B?MWNic3VpUXRVb25GT2F0K0p5S3JWNGpyZ05SWGdNWDJhdGpNamJQUVhBSVU4?=
 =?utf-8?B?d2V2R3hieTBZUjBFU2t0aGszQzZOSkZMTjN0eHBESWIxN1lVMXJxTzdZV3FD?=
 =?utf-8?B?dng2UUtNYXhoRWlUKzNhMTIxSDUyK1ZDSWlQZTVVa2xNSnBoQkk5c1dzZWdI?=
 =?utf-8?B?alUwcmdpemFyakxtQ0VwQ1hNNEpFVXh2ekMrNjgvZXFoY2JweTlxdmhEVTN5?=
 =?utf-8?B?QVJkUjVkSGZndUs3OVBweG1qRVd3bGV3cFBtZUVKbWt5azdlU3B4SGtyKzZp?=
 =?utf-8?B?OUZKM1ZSa284UWphVXBuZm9PUXQ2Q1ptRGhlaGlrR3JhZ3FJUU1iN0N1V1pZ?=
 =?utf-8?B?Y2U4WjZ2NW5QN2NVSW9lY3lqWmZWOGx2cXdCM2dCNGVCQzRLZEl5KzNPQVFz?=
 =?utf-8?B?ZE9iVTdpQlB4ZUs1T3Q0cUdwM0c0NWtRWmtDd04xT01tR0dxNW4wRjhKTFRs?=
 =?utf-8?B?MGlMbzNyTEVuQ0Z1dGxsM1N0UXZmdFVneE02eTBWVStyU2VYYTBsWHpZbTdX?=
 =?utf-8?B?ZnBORFJKOVQxRmh0NkIxVjNxdXlkdnM3ZVFTOU1FQjBqTUJVSHJtdTFmb1M1?=
 =?utf-8?B?SXRjdUlZb2xJRjF6UFVxdStaZkxzbkE1OVFkTTRITkpXWGVnQmNOdmVjYlVn?=
 =?utf-8?B?ei9aeEJuRVVVeUVSTFp1V1JOdEtpQlhlSGhwN1c2RGNlaGJOdEczNDcvbDlw?=
 =?utf-8?B?WndpWFA1MFNaU2xaVzVmaDc2NHR2U0tWdkNIWUJ4ZWc1eEFycnJoelN1SmRk?=
 =?utf-8?B?cHA3cXRycGg5a1dydVplRTErcWZtL3RJRTNoSmJCOHJnbkdsRENMNTlsZUJk?=
 =?utf-8?B?dVZmbEFCQlRDMCtwblptTFV1Q2tsRnE4R3ZQSHl5YWZ0SkZlNnIxeEd4c3o3?=
 =?utf-8?Q?zoGubOIaoj2w0WvhHd56dG1lD?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a85aab11-6941-48cd-78aa-08daee75bee8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Jan 2023 17:04:31.0493
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CMyhFq+BNHT8TirD4HgWUM2NNjbHyskQCtgpKWnsKSIgananpBHJGmQI9igVr68rrpBrXxceWKE5iJ0Y2vT+rw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9676

On 03.01.2023 21:09, Andrew Cooper wrote:
> Recently in XenServer, we have encountered problems caused by both
> XENVER_extraversion and XENVER_commandline having fixed bounds.
> 
> More than just the invariant size, the APIs/ABIs are also broken by typedef-ing
> an array, and by using an unqualified 'char', which has implementation-specific
> signedness.

Which is fine as long as only ASCII is returned. If non-ASCII can be returned,
I agree "unsigned char" is better, but then we also need to spell out what
encoding the strings use (UTF-8 presumably).

> API/ABI wise, XENVER_build_id could be merged into xenver_varstring_op(), but
> the internal infrastructure is awkward.

I guess build-id also doesn't fit in with the rest because it returns not a
string but an array of bytes. You also couldn't use strlen() on the array.

> @@ -469,6 +470,66 @@ static int __init cf_check param_init(void)
>  __initcall(param_init);
>  #endif
>  
> +static long xenver_varstring_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> +{
> +    const char *str = NULL;
> +    size_t sz = 0;
> +    union {
> +        xen_capabilities_info_t info;
> +    } u;
> +    struct xen_var_string user_str;
> +
> +    switch ( cmd )
> +    {
> +    case XENVER_extraversion2:
> +        str = xen_extra_version();
> +        break;
> +
> +    case XENVER_changeset2:
> +        str = xen_changeset();
> +        break;
> +
> +    case XENVER_commandline2:
> +        str = saved_cmdline;
> +        break;
> +
> +    case XENVER_capabilities2:
> +        memset(u.info, 0, sizeof(u.info));
> +        arch_get_xen_caps(&u.info);
> +        str = u.info;
> +        break;
> +
> +    default:
> +        ASSERT_UNREACHABLE();
> +        break;
> +    }
> +
> +    if ( !str ||
> +         !(sz = strlen(str)) )
> +        return -ENODATA; /* failsafe */

Is this really appropriate for e.g. an empty command line?

> +    if ( sz > INT32_MAX )
> +        return -E2BIG; /* Compat guests.  2G ought to be plenty. */

While the comment here and in the public header mention compat guests,
the check is uniform. What's the deal?

> +    if ( guest_handle_is_null(arg) ) /* Length request */
> +        return sz;
> +
> +    if ( copy_from_guest(&user_str, arg, 1) )
> +        return -EFAULT;
> +
> +    if ( user_str.len == 0 )
> +        return -EINVAL;
> +
> +    if ( sz > user_str.len )
> +        return -ENOBUFS;
> +
> +    if ( copy_to_guest_offset(arg, offsetof(struct xen_var_string, buf),
> +                              str, sz) )
> +        return -EFAULT;

Not inserting a nul terminator is going to make this slightly awkward to
use.

> @@ -103,6 +126,35 @@ struct xen_build_id {
>  };
>  typedef struct xen_build_id xen_build_id_t;
>  
> +/*
> + * Container for an arbitrary variable length string.
> + */
> +struct xen_var_string {
> +    uint32_t len;                          /* IN:  size of buf[] in bytes. */
> +    unsigned char buf[XEN_FLEX_ARRAY_DIM]; /* OUT: requested data.         */
> +};
> +typedef struct xen_var_string xen_var_string_t;
> +
> +/*
> + * arg == xenver_string_t

Nit: xen_var_string_t (also again in the following text).

> + * Equivalent to the original ops, but with a non-truncating API/ABI.
> + *
> + * Passing arg == NULL is a request for size.  The returned size does not
> + * include a NUL terminator, and has a practical upper limit of INT32_MAX for
> + * 32bit guests.  This is expected to be plenty for the purpose.

As said above, the limit applies to all guests, which the wording here
doesn't suggest.

Jan

> + * Otherwise, the input xenver_string_t provides the size of the following
> + * buffer.  Xen will fill the buffer, and return the number of bytes written
> + * (e.g. if the input buffer was longer than necessary).
> + *
> + * These hypercalls can fail, in which case they'll return -XEN_Exx.
> + */
> +#define XENVER_extraversion2 11
> +#define XENVER_capabilities2 12
> +#define XENVER_changeset2    13
> +#define XENVER_commandline2  14
> +
>  #endif /* __XEN_PUBLIC_VERSION_H__ */
>  
>  /*



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 17:54:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 17:54:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471368.731192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD7y9-0004Nq-Vo; Wed, 04 Jan 2023 17:54:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471368.731192; Wed, 04 Jan 2023 17:54:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD7y9-0004Nj-S2; Wed, 04 Jan 2023 17:54:29 +0000
Received: by outflank-mailman (input) for mailman id 471368;
 Wed, 04 Jan 2023 17:54:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aavW=5B=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pD7y9-0004NY-6N
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 17:54:29 +0000
Received: from sonic316-54.consmr.mail.gq1.yahoo.com
 (sonic316-54.consmr.mail.gq1.yahoo.com [98.137.69.30])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d3ac3425-8c58-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 18:54:27 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic316.consmr.mail.gq1.yahoo.com with HTTP; Wed, 4 Jan 2023 17:54:24 +0000
Received: by hermes--production-ne1-7b69748c4d-brw6v (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 3b6e3b6717f2ad0d4df9645e096a3d43; 
 Wed, 04 Jan 2023 17:54:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3ac3425-8c58-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672854864; bh=62GrjcKcrjn6pGet/GNWyEnuI3H5ZMFveZgVFAY9fi4=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=XgcKO9mT2A5OpXMzNhTUWfEVZQeb3tZpsZi6pH8142PKactCJPOqzFwC0JdfDSsrPVx9PQymQO2Vh+y7/UH+fEjfmlLKfiAu6FO/nWGw/xlbSoj960yeTTZp7A5FPRNpovMsbZ5DMnAhrky90cR5W4IlqfThLdFaYNaozPbcoTRlgHPjeqNNffswzGk0pMkcfMBRfsEtngRVONq+24PF1eRHOaNp35sUXo43D4INbIR70r8t/imIFmavvVMR4xY6Hbdt8pLJY7E8Yz7IhW1r6gjpN5MpUZuR6Kp76gHXskt7q1FIkUEAM0PqM4t6KAnR2w7Mj/g2kPee4gwt0cnfmg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672854864; bh=hyoo4l5eYpZvZ37UDueVbgUvGgA7cb6c0wte2v6dAvd=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=LG2pMM13WGB66ytLaHr4OMlZA6Q2xRxDMSI4tEm6m/QLPGf3KWl9aSdO6B+2iLAuRbkU9Qx2LfjJY/8/lRLASAu8JQFel5gVTpQFLj1G1mRHxBHnukHWJSsaq4aAkPqrYEFeiN2zLnqcyFeEd9idjxfpz/eHxcp1yRwJUGQoJ2XWJBos9DdWysPxRgsZd+kSL+QfjONrMI+8Ivpz+wdrnw+REdozZ7IFHh+FoXHKbUqG39ZhXwTWH9ppsQF8iyRY8G8MD9EMr5OeG2ByLJwQGomyoWm4BHracWAqiISBcvH2eKjfwYkxIGFi+TlUuvJ7dmBypjQsdHlkPG94h9TPug==
X-YMail-OSG: hsgtjjMVM1kYbV.iYsvv_79.2anRl0pBtsppgVsWfzACkf34ams5kiyUUKtRAeR
 ZixpwMNisCWUZvqF3TyOuVRefxOjsMsVG_RWa0zr.A3jfIPAy6LYMBU3SbmOsSetaUSZfTM6v1X4
 MgjLyLoEu0MKqtTLBfk3Mb1Tg9yCa8HMuZtjK0eFZo3Mk4G8lUvWuDUhRMNLTPl7wAHj2W_YNqCa
 DOuVBUxMB8zVAtPSaSAZK4YnATYpBNYnwcrtbBrxLjSzu98wnxVVxOiKcEFisouTnxYS3qqjWXFi
 xyQoDB39mTkmGwRJ9DA.0Xz6rbntkl2Xx_jLQWy4dEvs2FBA8yNved08SQdsL7H7BVHE7amdURLN
 YLxKXvzdKyAmYknKoP6O_BZfNW0UFhzNz8GydVOQr2I5S8cWQSdyxX5XTW2f62mcLlY8_3dg4zIK
 K43e8sDu18HJberC7t2eM7SW9lQOuXnimfxTY4QgiyooPabhE11pNi9XO2QeA1Fgi6Pqi7EOvmFi
 mIcDAOd8L97.T5YfvC0ugCDemxEn7_rGWb2WVzW2JvgfEkMVlkK0kN3auayrzM_de0ynRY8GUjgh
 f.B8ZT.lCBxYSofNU4RJGVxId19GMonO5zj2ex0h7iJOSOtjPlBz1D17MY46RD15f3ihdrAU5z5r
 rOFdOiPq.FpngnAgSArFxIvOyO6Fm9qJ8QdPiGNw4R25XNm9_m1_05VgRGpeo.f5FX3WLHgv22C7
 ihX6yHXQDiNbEEktd1H_tcwJCVsRx_9PEG6QlXtX7uH8Yo3yQm77_ihbcDQ0jEMOMrJOqahMNzE5
 97KMP74VxteFthVugEqkr1UBTwbri99BzYKP5n1vLOCkazOaT6umA8pkuVbHE_jYW9RsjxoYizbW
 IBxG_usnYLBdA3.nBOjj4QpNmEVYaChv924thTD4OE47hI9ODDOETxR9PCLNe42GkBD3tJ8QGmsD
 A2ikhYE1_VqIPZcb6cSwK5Ju08oPEnUdK7S.viVcEiJzhFgT0DNh9xEqxEChtVOLnKOL5JqUImja
 eG3gTSGs_gY3C6.DfiWH4PhPZ0tfD6C7jgztZxzvORVtcnjjC2Oi.ZJAimFA6HSQvX5DS2c0Aag6
 1qiaH0EoSmd3O_AIch.ecUX4UnmUqNuey8UuXuvHUmo40DeVc9q14QDfWVgLT8fSKekjsWjeds90
 xoPCfKuQHXTaIxIhT3B9d5SR4.HlobPoSyeYQENmQrzDArxZiBsEpt10eRNIlmQ1.yR4vOoyCYwF
 EBIXU6X3QcOH.oPzJry9pykHkgOotSI5bfIUXz764ICdaAp2MSaBzjS.Jx.6NbUOqb.2z3kLduQs
 Qv0Tl.J3N4sWJiAlDd01j2A5dbpHND4nrKhCQj2zP1r.pi27izYq9Vkfexeo77IzxvfZ4VA3BMcm
 Kh1GwULZI04NKkPev4MSk9bpDL98aaoBY.OxfR3PnkG_fMLboLMujQIXDTCdu3D7grrrg58o97xo
 grE5Wwofk3j1.28IqBgz4EXC_7ceS8VMre8tRwQfI7DDV8VqYZBFOf.wVCbI6ok9hzhbokZh3wvf
 rTx2KRB0qNFLxlqfONlbUDh9DevOY8X5ltG5bRwqxEYA.ljbxS.aJWlxPpsNbxMDNeVLkr9xwz9Q
 ux9oFJSLl.uSV6.9xg7YwBtsH3jG65TrQtsR1P5wot.HH4pfCc4Anad7jv7jLykk9ZbivX_Rn5Hx
 4D2oolxJ002G18kAveulzNST1m5DdF_gdYQvitTLsjnBgZbjHw_Q4gI50Yb5Yv9SkgILQuk0OnUz
 xoR3J1E3xssxbZ2FCUcRAotd7r87C9PMXZ88M4nV_mxStQqaRbgm9nK4VxsJZERkmJvLKBklJFDx
 wy1X3M_.PklRG2hjjQnYpcJPM4tHOo.nZtwzi1dV9PDFkLMZN3x58gCHE.N4T2aR7M57J0Gm79SJ
 uyJmBphuyjttwYRbLzxRkguIhn3fMzhbg35si.8S8OeQRTa3gP2ro8KiN5qHp31caZCLV8m37C58
 5uU_6twcwnml9ok4uFaIWbW_sAHXW6SbcmdRE1z5i.U07PDLMghN5AV8LeKeKtsCnod2UGM9dP89
 9VCFtLto5buHFjzgkrS6.hp42mK2GN57tJS6ueY8q9LnmLR2o4AEUBpOAyQLZoyW5ccqWqaqgf_x
 XmDsl4SyxxsQmirGp0WRtPg0rtTCGNtxh28zkJ3JR0B8pG.wLhyECqc8Aid.LPydCoFdLOqgY0U3
 WkGJGa41Fz3kqygvv07ja9Qk-
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
Date: Wed, 4 Jan 2023 12:54:16 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 5089

On 1/4/23 10:35 AM, Philippe Mathieu-Daudé wrote:
> +Markus/Thomas
> 
> On 4/1/23 15:44, Bernhard Beschow wrote:
>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>> 
>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>> ---
>>   hw/i386/pc_piix.c             |  4 +---
>>   hw/isa/piix.c                 | 20 --------------------
>>   include/hw/southbridge/piix.h |  1 -
>>   3 files changed, 1 insertion(+), 24 deletions(-)
>> 
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index 5738d9cdca..6b8de3d59d 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>       if (pcmc->pci_enabled) {
>>           DeviceState *dev;
>>           PCIDevice *pci_dev;
>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>> -                                         : TYPE_PIIX3_DEVICE;
>>           int i;
>>   
>>           pci_bus = i440fx_init(pci_type,
>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>                                          : pci_slot_get_pirq);
>>           pcms->bus = pci_bus;
>>   
>> -        pci_dev = pci_new_multifunction(-1, true, type);
>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>           object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>                                    machine_usb(machine), &error_abort);
>>           object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>> index 98e9b12661..e4587352c9 100644
>> --- a/hw/isa/piix.c
>> +++ b/hw/isa/piix.c
>> @@ -33,7 +33,6 @@
>>   #include "hw/qdev-properties.h"
>>   #include "hw/ide/piix.h"
>>   #include "hw/isa/isa.h"
>> -#include "hw/xen/xen.h"
>>   #include "sysemu/runstate.h"
>>   #include "migration/vmstate.h"
>>   #include "hw/acpi/acpi_aml_interface.h"
>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>       .class_init    = piix3_class_init,
>>   };
>>   
>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>> -{
>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>> -
>> -    k->realize = piix3_realize;
>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>> -    dc->vmsd = &vmstate_piix3;
> 
> IIUC, since this device is user-creatable, we can't simply remove it
> without going through the deprecation process. Alternatively we could
> add a type alias:
> 
> -- >8 --
> diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
> index 4b0ef65780..d94f7ea369 100644
> --- a/softmmu/qdev-monitor.c
> +++ b/softmmu/qdev-monitor.c
> @@ -64,6 +64,7 @@ typedef struct QDevAlias
>                                 QEMU_ARCH_LOONGARCH)
>   #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
>   #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
> +#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)
> 
>   /* Please keep this table sorted by typename. */
>   static const QDevAlias qdev_alias_table[] = {
> @@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
>       { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
>       { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
>       { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
> +    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },

Hi Bernhard,

Can you comment on whether this should be:

+    { "PIIX", "PIIX3-xen", QEMU_ARCH_XEN },

instead? IIUC, the patch series also removed PIIX3 and PIIX4 and
replaced them with PIIX. Or am I not understanding correctly?

Best regards,

Chuck


>       { }
>   };
> ---
> 
> But I'm not sure due to this comment from commit ee46d8a503
> (2011-12-22 15:24:20 -0600):
> 
> 47) /*
> 48)  * Aliases were a bad idea from the start.  Let's keep them
> 49)  * from spreading further.
> 50)  */
> 
> Maybe using qdev_alias_table[] during device deprecation is
> acceptable?
> 
>> -}
>> -
>> -static const TypeInfo piix3_xen_info = {
>> -    .name          = TYPE_PIIX3_XEN_DEVICE,
>> -    .parent        = TYPE_PIIX_PCI_DEVICE,
>> -    .instance_init = piix3_init,
>> -    .class_init    = piix3_xen_class_init,
>> -};
>> -
>>   static void piix4_realize(PCIDevice *dev, Error **errp)
>>   {
>>       ERRP_GUARD();
>> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
>>   {
>>       type_register_static(&piix_pci_type_info);
>>       type_register_static(&piix3_info);
>> -    type_register_static(&piix3_xen_info);
>>       type_register_static(&piix4_info);
>>   }
>>   
>> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
>> index 65ad8569da..b1fc94a742 100644
>> --- a/include/hw/southbridge/piix.h
>> +++ b/include/hw/southbridge/piix.h
>> @@ -77,7 +77,6 @@ struct PIIXState {
>>   OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
>>   
>>   #define TYPE_PIIX3_DEVICE "PIIX3"
>> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
>>   #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
>>   
>>   #endif
> 



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 18:34:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 18:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471375.731203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD8ap-0000LO-02; Wed, 04 Jan 2023 18:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471375.731203; Wed, 04 Jan 2023 18:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD8ao-0000LH-TV; Wed, 04 Jan 2023 18:34:26 +0000
Received: by outflank-mailman (input) for mailman id 471375;
 Wed, 04 Jan 2023 18:34:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD8an-0000LA-LV
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 18:34:25 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 67a0e035-8c5e-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 19:34:23 +0100 (CET)
Received: from mail-dm3nam02lp2040.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Jan 2023 13:34:19 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA1PR03MB6564.namprd03.prod.outlook.com (2603:10b6:806:1cb::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 18:34:15 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 18:34:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67a0e035-8c5e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672857263;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=R8d0creBhCbzEKKsTWf735ZH/0haJVM6D8OEXMR4+qw=;
  b=gYWfxQcy0G9GUF0RR+SKIHoa3+OCwfNMLz8UlI+BT0rn1C1VHeT4m0rg
   C2fGY843CsNI5Iue/gpRWQ7+QKdSyay1veAaDCx2CuChuq/QptmX6Q+Ri
   j8vWpyam2zWrV3kB881pvv7WGW8nzq6eip/RY7tIhVrS9ek8aN87c4HJr
   I=;
X-IronPort-RemoteIP: 104.47.56.40
X-IronPort-MID: 91188611
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:hg5ODqKDICh4Ngy/FE+RGpQlxSXFcZb7ZxGr2PjKsXjdYENSg2YGn
 2VLXWiFP/jZNGb3Ldh2bozl9xlX6pPQn9FiHlRlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHv+kUrWs1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPdwP9TlK6q4mhA5wRiPawjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5WEGBMz
 vU4JwsWURaGlsmx0b6nWvZz05FLwMnDZOvzu1lG5BSBV7MKZMuGRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/dppTSLpOBy+OGF3N79U9qGX8hK2G2fo
 XrL5T/RCRAGLt2PjzGC9xpAg8eexHqrCNxLTdVU8NZkx0Si51JUOScmWGX8pd7ktU63Bc1mf
 hl8Fi0G6PJaGFaQZtv3UgC8oXWElgUBQNcWGOo/gCmW0bbd6QudAmkCTxZCZcYguctwQiYlv
 neWm/v5CDopt6eaIVqf67OVoDWaKSUTa2gYakcsXQYDptXuvow3phbOVcp4Vr64iMXvHjP9y
 CzMqzIx74j/luYO3qS/uFzC2DSlo8CTShZvvlmPGGW48gl+eYipIZSy7kTW5upBK4DfSUSdu
 H8DmI6V6+Vm4YyxqRFhid4lRNmBj8tp+hWB6bKzN/HNLwiQxkM=
IronPort-HdrOrdr: A9a23:sHQ4HqCNGKCuMsvlHehLsceALOsnbusQ8zAXPh9KJCC9I/bzqy
 nxpp8mPEfP+VAssQIb6Km90ci7MAThHPtOjbX5Uo3SODUO1FHIEGgA1/qV/9SDIVyYygc178
 4JHMZD4bbLfDtHZLPBkWyF+qEbsbu6Gc6T5dv2/jNId0VHeqtg5wB2BkKwCUttXjRLApI/Cd
 61+tdHjyDIQwVeUu2LQl0+G8TTrdzCk5zrJTQcAQQ81QWIhTS0rJbnDhmj2AsEWT8n+8ZozY
 GFqX2y2kyQiYD29vbu7R6d032Qoqqu9jJ3Pr3AtiHSEESstu/nXvUgZ1TIhkFMnAjm0idQrD
 CLmWZoAy070QKqQkil5RTqwAXuyzAo9jvrzkKZm2LqpYjjSCs9ENcpv/MqTvL10TtRgDhH6t
 M540uJ855MSR/QliX04NbFExlsi0qvuHIn1eoelWZWX4cSYKJY6dV3xjIgLL4QWCbhrIw3Gu
 hnC8/RoP5QbFOBdnjc+m1i2salUHg/FgqPBkICpsuW2T5Lm20R9Tps+OUP2nMbsJ4tQZhN4O
 rJdqxuibFVV8cTKblwAe8QKPHHe1AlgSi8Tl56DW6Xa53vYUi91qIfyI9FmN2XRA==
X-IronPort-AV: E=Sophos;i="5.96,300,1665460800"; 
   d="scan'208";a="91188611"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KuPpe2N4LpV9NsqS2hQgfbPOddkz3Lb/GAOGc720ADEMycfCBqyPfxUhneOd0nCMZFP4nSxM0ejy5Tvy9+x+XrSB4ZwN5alI27y+ynPe+nitkb455xrHgPp4oz7UyAMRYVsO6xNvs0TngZFNo+IJZp/T26DRvu19B8dKxiiaYCq2W6BD3rImdl72c8HOoIaaxhhZzZjxrjJltPADWH4hqlwfUBYgTYkx3IO64mX0EGnEmzb/P8PSphNYnWCgeu1j6eY6ylKnKdPxL++mo1d2qhXPGEtoChoCL/U2eucM6kDMTYO4JhpHfCM4V/GDM+RSI5SuyvUijQg3CSWBrAd2Fw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=R8d0creBhCbzEKKsTWf735ZH/0haJVM6D8OEXMR4+qw=;
 b=eovIpsR0nUclfxDFFkw+AjnZPzQP8OCFmq6wjVjQzdpo8DTJrOFKzpvRTJaD95+a9eAUVRkwp076Tx6nlBNyxdW0zIwXVfswcj+1g69e1Zj4niIOKRaviSk4YAEFrkFXax0wncnOayyPK8gS9cSZbXZkdhsvtjDHeEvtE1ibm2AUf7ZTCoAuUikBnpQCBHSJowEuMHiZaMrO3kRIhw2n+BAe1GBZgX+kCeS5/vhqBNqUUlJkLgQtCqj5bhe8sy7rm5jvBe7wZeADJpgN7O6MvGwueaGt9y7RVd9Vb/n/gWKqWgOU3tutQ5ePnN8S2so+L20tgALYz0KcG1KMIXUNjQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R8d0creBhCbzEKKsTWf735ZH/0haJVM6D8OEXMR4+qw=;
 b=aqhPD0si6R2AJFvo2iDDQmyMpdfuj8Tr8wpqm6wV7C566QJdryKScpHejYDAU3h8HxyFQXRxdMSZ+BAIzBzQf3ed11vQ7jGYMd71mCLTItcC+65bpor89kgcKKISCrlXVXtrYpoqVMNLLss06qCYllqD7J2BxljGo361qwQXmvo=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
Thread-Topic: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_*
 subops
Thread-Index: AQHZH69eYU4qCMuPJUaHOPPUV+yM+a6OfdiAgAAZEwA=
Date: Wed, 4 Jan 2023 18:34:14 +0000
Message-ID: <9c9cedd5-cca7-95d4-00bb-f34a56de2695@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-5-andrew.cooper3@citrix.com>
 <a0cb9c83-dc4d-5f03-0f65-3756fadfde0b@suse.com>
In-Reply-To: <a0cb9c83-dc4d-5f03-0f65-3756fadfde0b@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SA1PR03MB6564:EE_
x-ms-office365-filtering-correlation-id: c93bb995-222b-4f94-3c3a-08daee824816
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
Content-Type: text/plain; charset="utf-8"
Content-ID: <13F53D7107A7664C82EAB7BF14044A34@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 18:34:14.8453 (UTC)

On 04/01/2023 5:04 pm, Jan Beulich wrote:
> On 03.01.2023 21:09, Andrew Cooper wrote:
>> Recently in XenServer, we have encountered problems caused by both
>> XENVER_extraversion and XENVER_commandline having fixed bounds.
>>
>> More than just the invariant size, the APIs/ABIs are also broken by typedef-ing an
>> array, and using an unqualified 'char' which has implementation-specific
>> signed-ness.
> Which is fine as long as only ASCII is returned. If non-ASCII can be returned,
> I agree "unsigned char" is better, but then we also need to spell out what
> encoding the strings use (UTF-8 presumably).

Well, it "functions" as far as ASCII is concerned, but I wouldn't say
that was fine for an ABI.

The command line can reasonably have non-ASCII characters these days.
(as can the compile information, but I intentionally didn't convert them
to this new format).

UTF-8 is probably the sensible thing to specify, if we're going to make
any statement.

>
>> API/ABI wise, XENVER_build_id could be merged into xenver_varstring_op(), but
>> the internal infrastructure is awkward.
> I guess build-id also doesn't fit the rest because of not returning strings,
> but indeed an array of bytes. You also couldn't use strlen() on the array.

The format is unspecified, but it is a base64 encoded ASCII string (not
NUL terminated).

>> @@ -469,6 +470,66 @@ static int __init cf_check param_init(void)
>>  __initcall(param_init);
>>  #endif
>>  
>> +static long xenver_varstring_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>> +{
>> +    const char *str = NULL;
>> +    size_t sz = 0;
>> +    union {
>> +        xen_capabilities_info_t info;
>> +    } u;
>> +    struct xen_var_string user_str;
>> +
>> +    switch ( cmd )
>> +    {
>> +    case XENVER_extraversion2:
>> +        str = xen_extra_version();
>> +        break;
>> +
>> +    case XENVER_changeset2:
>> +        str = xen_changeset();
>> +        break;
>> +
>> +    case XENVER_commandline2:
>> +        str = saved_cmdline;
>> +        break;
>> +
>> +    case XENVER_capabilities2:
>> +        memset(u.info, 0, sizeof(u.info));
>> +        arch_get_xen_caps(&u.info);
>> +        str = u.info;
>> +        break;
>> +
>> +    default:
>> +        ASSERT_UNREACHABLE();
>> +        break;
>> +    }
>> +
>> +    if ( !str ||
>> +         !(sz = strlen(str)) )
>> +        return -ENODATA; /* failsafe */
> Is this really appropriate for e.g. an empty command line?

Hmm.  Good point.  All can in principle be empty.

In which case I think I'll put the -ENODATA in the default case and have
a return 0 for the sz == 0 case.

>> +    if ( sz > INT32_MAX )
>> +        return -E2BIG; /* Compat guests.  2G ought to be plenty. */
> While the comment here and in the public header mention compat guests,
> the check is uniform. What's the deal?

Well, it's either this, or a (compat ? INT32_MAX : INT64_MAX) check, along
with the ifdefary and predicates required to make that compile.

But there's not a CPU today which can actually move 2G of data (which is
4G of L1d bandwidth) without suffering the watchdog (especially as we've
just read it once for strlen(), so that's 6G of bandwidth), nor do I
expect this to change in the foreseeable future.

There's some boundary (probably far lower) beyond which we can't use the
algorithm here.

There wants to be some limit, and I don't feel it is necessary to make
it variable on the guest type.

>
>> +    if ( guest_handle_is_null(arg) ) /* Length request */
>> +        return sz;
>> +
>> +    if ( copy_from_guest(&user_str, arg, 1) )
>> +        return -EFAULT;
>> +
>> +    if ( user_str.len == 0 )
>> +        return -EINVAL;
>> +
>> +    if ( sz > user_str.len )
>> +        return -ENOBUFS;
>> +
>> +    if ( copy_to_guest_offset(arg, offsetof(struct xen_var_string, buf),
>> +                              str, sz) )
>> +        return -EFAULT;
> Not inserting a nul terminator is going to make this slightly awkward to
> use.

Not really.  It matches how build-id currently works, and forces the
caller to use proper tainted-string discipline.

To safely printk() it, all you need is:

rc = xen_version(XENVER_$X, &str);
if ( rc < 0 )
    return rc;
printk("%.*s\n", rc, str.buf);

>> @@ -103,6 +126,35 @@ struct xen_build_id {
>>  };
>>  typedef struct xen_build_id xen_build_id_t;
>>  
>> +/*
>> + * Container for an arbitrary variable length string.
>> + */
>> +struct xen_var_string {
>> +    uint32_t len;                          /* IN:  size of buf[] in bytes. */
>> +    unsigned char buf[XEN_FLEX_ARRAY_DIM]; /* OUT: requested data.         */
>> +};
>> +typedef struct xen_var_string xen_var_string_t;
>> +
>> +/*
>> + * arg == xenver_string_t
> Nit: xen_var_string_t (also again in the following text).

Ah yes.  I originally called it xenver_string_t then found a whole load
of compat assumptions about xen prefixes on names.

Will fix.


But overall, I'm not seeing a major objection to this change?  In which
case I'll go ahead and do the tools/ cleanup too for v2.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 18:48:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 18:48:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471383.731213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD8oK-0001xQ-9M; Wed, 04 Jan 2023 18:48:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471383.731213; Wed, 04 Jan 2023 18:48:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD8oK-0001xJ-6n; Wed, 04 Jan 2023 18:48:24 +0000
Received: by outflank-mailman (input) for mailman id 471383;
 Wed, 04 Jan 2023 18:48:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=G5yt=5B=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pD8oJ-0001xC-4K
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 18:48:23 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5b9719bd-8c60-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 19:48:20 +0100 (CET)
Received: by mail-ej1-x630.google.com with SMTP id ud5so84922695ejc.4
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 10:48:20 -0800 (PST)
Received: from [192.168.1.115] ([185.126.107.38])
 by smtp.gmail.com with ESMTPSA id
 i16-20020a170906115000b008373f9ea148sm15616785eja.71.2023.01.04.10.48.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 10:48:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b9719bd-8c60-11ed-b8d0-410ff93cb8f0
Message-ID: <be75758a-2547-d1ef-223e-157f3aa28b23@linaro.org>
Date: Wed, 4 Jan 2023 19:48:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Chuck Zmudzinski <brchuckz@aol.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/1/23 18:54, Chuck Zmudzinski wrote:
> On 1/4/23 10:35 AM, Philippe Mathieu-Daudé wrote:
>> +Markus/Thomas
>>
>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>
>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>> ---
>>>    hw/i386/pc_piix.c             |  4 +---
>>>    hw/isa/piix.c                 | 20 --------------------
>>>    include/hw/southbridge/piix.h |  1 -
>>>    3 files changed, 1 insertion(+), 24 deletions(-)
>>>
>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>> index 5738d9cdca..6b8de3d59d 100644
>>> --- a/hw/i386/pc_piix.c
>>> +++ b/hw/i386/pc_piix.c
>>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>>        if (pcmc->pci_enabled) {
>>>            DeviceState *dev;
>>>            PCIDevice *pci_dev;
>>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>>> -                                         : TYPE_PIIX3_DEVICE;
>>>            int i;
>>>    
>>>            pci_bus = i440fx_init(pci_type,
>>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>>                                           : pci_slot_get_pirq);
>>>            pcms->bus = pci_bus;
>>>    
>>> -        pci_dev = pci_new_multifunction(-1, true, type);
>>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>>            object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>>                                     machine_usb(machine), &error_abort);
>>>            object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>>> index 98e9b12661..e4587352c9 100644
>>> --- a/hw/isa/piix.c
>>> +++ b/hw/isa/piix.c
>>> @@ -33,7 +33,6 @@
>>>    #include "hw/qdev-properties.h"
>>>    #include "hw/ide/piix.h"
>>>    #include "hw/isa/isa.h"
>>> -#include "hw/xen/xen.h"
>>>    #include "sysemu/runstate.h"
>>>    #include "migration/vmstate.h"
>>>    #include "hw/acpi/acpi_aml_interface.h"
>>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>>        .class_init    = piix3_class_init,
>>>    };
>>>    
>>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>> -{
>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>> -
>>> -    k->realize = piix3_realize;
>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>> -    dc->vmsd = &vmstate_piix3;
>>
>> IIUC, since this device is user-creatable, we can't simply remove it
>> without going thru the deprecation process. Alternatively we could
>> add a type alias:
>>
>> -- >8 --
>> diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
>> index 4b0ef65780..d94f7ea369 100644
>> --- a/softmmu/qdev-monitor.c
>> +++ b/softmmu/qdev-monitor.c
>> @@ -64,6 +64,7 @@ typedef struct QDevAlias
>>                                  QEMU_ARCH_LOONGARCH)
>>    #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
>>    #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
>> +#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)
>>
>>    /* Please keep this table sorted by typename. */
>>    static const QDevAlias qdev_alias_table[] = {
>> @@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
>>        { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
>>        { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
>>        { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
>> +    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },
> 
> Hi Bernhard,
> 
> Can you comment if this should be:
> 
> +    { "PIIX", "PIIX3-xen", QEMU_ARCH_XEN },
> 
> instead? IIUC, the patch series also removed PIIX3 and PIIX4 and
> replaced them with PIIX. Or am I not understanding correctly?

There is confusion in QEMU between PCI bridges, the first PCI
function they implement, and the other PCI functions.

Here TYPE_PIIX3_DEVICE stands for "the PCI function of the PIIX
south bridge chipset which exposes a PCI-to-ISA bridge". A better
name could be TYPE_PIIX3_ISA_PCI_DEVICE. Unfortunately this
device is named "PIIX3" with no indication of the ISA bridge.


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 19:15:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 19:15:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471391.731224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9Eg-0005HN-Cj; Wed, 04 Jan 2023 19:15:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471391.731224; Wed, 04 Jan 2023 19:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9Eg-0005HG-A1; Wed, 04 Jan 2023 19:15:38 +0000
Received: by outflank-mailman (input) for mailman id 471391;
 Wed, 04 Jan 2023 19:15:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD9Ee-0005HA-Gb
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 19:15:36 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 290561a0-8c64-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 20:15:35 +0100 (CET)
Received: from mail-dm6nam11lp2172.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Jan 2023 14:15:24 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5423.namprd03.prod.outlook.com (2603:10b6:a03:28c::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 19:15:16 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 19:15:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 290561a0-8c64-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/4] xen/version: Drop compat/kernel.c
Thread-Topic: [PATCH 2/4] xen/version: Drop compat/kernel.c
Thread-Index: AQHZH69b2rEsPPoAl0SiHrhvZ3gYIK6Oc/AAgAAucYA=
Date: Wed, 4 Jan 2023 19:15:15 +0000
Message-ID: <c67509bf-b5e0-5364-9307-5432f5417ce5@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-3-andrew.cooper3@citrix.com>
 <074042f5-ecc1-11c9-bdcd-b9d619475d58@suse.com>
In-Reply-To: <074042f5-ecc1-11c9-bdcd-b9d619475d58@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5423:EE_
x-ms-office365-filtering-correlation-id: 9e38970d-5cbc-4f84-d7d8-08daee8802f8
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <13739C70CA4FD642898371BD8B097636@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9e38970d-5cbc-4f84-d7d8-08daee8802f8
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 19:15:15.9015
 (UTC)

On 04/01/2023 4:29 pm, Jan Beulich wrote:
> On 03.01.2023 21:09, Andrew Cooper wrote:
>> kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
>> reincludes kernel.c to recompile xen_version() in a compat form.
>>
>> However, the xen_version hypercall is almost guest-ABI-agnostic; only
>> XENVER_platform_parameters has a compat split.  Handle this locally, and do
>> away with the reinclude entirely.
> And we henceforth mean to not introduce any interface structures here
> which would require translation (or we're willing to accept the clutter
> of handling those "locally" as well). Fine with me, just wanting to
> mention it.

Sure - I'll put a note in the commit message.

In general, we don't want guest-variant interfaces.

>
>> --- a/xen/common/compat/kernel.c
>> +++ /dev/null
>> @@ -1,53 +0,0 @@
>> -/******************************************************************************
>> - * kernel.c
>> - */
>> -
>> -EMIT_FILE;
>> -
>> -#include <xen/init.h>
>> -#include <xen/lib.h>
>> -#include <xen/errno.h>
>> -#include <xen/version.h>
>> -#include <xen/sched.h>
>> -#include <xen/guest_access.h>
>> -#include <asm/current.h>
>> -#include <compat/xen.h>
>> -#include <compat/version.h>
>> -
>> -extern xen_commandline_t saved_cmdline;
>> -
>> -#define xen_extraversion compat_extraversion
>> -#define xen_extraversion_t compat_extraversion_t
>> -
>> -#define xen_compile_info compat_compile_info
>> -#define xen_compile_info_t compat_compile_info_t
>> -
>> -CHECK_TYPE(capabilities_info);
> This and ...
>
>> -#define xen_platform_parameters compat_platform_parameters
>> -#define xen_platform_parameters_t compat_platform_parameters_t
>> -#undef HYPERVISOR_VIRT_START
>> -#define HYPERVISOR_VIRT_START HYPERVISOR_COMPAT_VIRT_START(current->domain)
>> -
>> -#define xen_changeset_info compat_changeset_info
>> -#define xen_changeset_info_t compat_changeset_info_t
>> -
>> -#define xen_feature_info compat_feature_info
>> -#define xen_feature_info_t compat_feature_info_t
>> -
>> -CHECK_TYPE(domain_handle);
> ... this go away without any replacement. Considering they're both
> char[], that's probably fine, but could do with mentioning in the
> description.

I did actually mean to ask about these two, because they're incomplete
already.

Why do we CHECK_TYPE(capabilities_info) but define identity aliases for
compat_extraversion (amongst others) ?

Is there even a point for having a compat alias of a char array?

I'm tempted to just drop them.  I don't think the check does anything
useful for us.

>> @@ -520,12 +518,27 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>
>>      case XENVER_platform_parameters:
>>      {
>> -        xen_platform_parameters_t params = {
>> -            .virt_start = HYPERVISOR_VIRT_START
>> -        };
> With this gone the oddly (but intentionally) placed braces can then
> also go away.

In light of how patch 3 ended up, I was considering pulling curr out
into a variable.

> Preferably with these minor adjustments
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 19:29:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 19:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471398.731236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9SQ-0006oW-Kg; Wed, 04 Jan 2023 19:29:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471398.731236; Wed, 04 Jan 2023 19:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9SQ-0006oP-Gr; Wed, 04 Jan 2023 19:29:50 +0000
Received: by outflank-mailman (input) for mailman id 471398;
 Wed, 04 Jan 2023 19:29:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aavW=5B=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pD9SP-0006oJ-9P
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 19:29:49 +0000
Received: from sonic306-20.consmr.mail.gq1.yahoo.com
 (sonic306-20.consmr.mail.gq1.yahoo.com [98.137.68.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24452f5e-8c66-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 20:29:45 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic306.consmr.mail.gq1.yahoo.com with HTTP; Wed, 4 Jan 2023 19:29:43 +0000
Received: by hermes--production-ne1-7b69748c4d-rglm6 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 6610acf198dd57a5acb45951fb7310cb; 
 Wed, 04 Jan 2023 19:29:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24452f5e-8c66-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <92efe0f1-f22b-47bc-f27d-2f31cb3621ea@aol.com>
Date: Wed, 4 Jan 2023 14:29:35 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
 <be75758a-2547-d1ef-223e-157f3aa28b23@linaro.org>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <be75758a-2547-d1ef-223e-157f3aa28b23@linaro.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 5455

On 1/4/23 1:48 PM, Philippe Mathieu-Daudé wrote:
> On 4/1/23 18:54, Chuck Zmudzinski wrote:
>> On 1/4/23 10:35 AM, Philippe Mathieu-Daudé wrote:
>>> +Markus/Thomas
>>>
>>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>>
>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>>> ---
>>>>    hw/i386/pc_piix.c             |  4 +---
>>>>    hw/isa/piix.c                 | 20 --------------------
>>>>    include/hw/southbridge/piix.h |  1 -
>>>>    3 files changed, 1 insertion(+), 24 deletions(-)
>>>>
>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>> index 5738d9cdca..6b8de3d59d 100644
>>>> --- a/hw/i386/pc_piix.c
>>>> +++ b/hw/i386/pc_piix.c
>>>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>>>        if (pcmc->pci_enabled) {
>>>>            DeviceState *dev;
>>>>            PCIDevice *pci_dev;
>>>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>>>> -                                         : TYPE_PIIX3_DEVICE;
>>>>            int i;
>>>>    
>>>>            pci_bus = i440fx_init(pci_type,
>>>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>>>                                           : pci_slot_get_pirq);
>>>>            pcms->bus = pci_bus;
>>>>    
>>>> -        pci_dev = pci_new_multifunction(-1, true, type);
>>>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>>>            object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>>>                                     machine_usb(machine), &error_abort);
>>>>            object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>>>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>>>> index 98e9b12661..e4587352c9 100644
>>>> --- a/hw/isa/piix.c
>>>> +++ b/hw/isa/piix.c
>>>> @@ -33,7 +33,6 @@
>>>>    #include "hw/qdev-properties.h"
>>>>    #include "hw/ide/piix.h"
>>>>    #include "hw/isa/isa.h"
>>>> -#include "hw/xen/xen.h"
>>>>    #include "sysemu/runstate.h"
>>>>    #include "migration/vmstate.h"
>>>>    #include "hw/acpi/acpi_aml_interface.h"
>>>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>>>        .class_init    = piix3_class_init,
>>>>    };
>>>>    
>>>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>>> -{
>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>> -
>>>> -    k->realize = piix3_realize;
>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>>> -    dc->vmsd = &vmstate_piix3;
>>>
>>> IIUC, since this device is user-creatable, we can't simply remove it
>>> without going thru the deprecation process. Alternatively we could
>>> add a type alias:
>>>
>>> -- >8 --
>>> diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
>>> index 4b0ef65780..d94f7ea369 100644
>>> --- a/softmmu/qdev-monitor.c
>>> +++ b/softmmu/qdev-monitor.c
>>> @@ -64,6 +64,7 @@ typedef struct QDevAlias
>>>                                  QEMU_ARCH_LOONGARCH)
>>>    #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
>>>    #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
>>> +#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)
>>>
>>>    /* Please keep this table sorted by typename. */
>>>    static const QDevAlias qdev_alias_table[] = {
>>> @@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
>>>        { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
>>>        { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
>>>        { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
>>> +    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },
>> 
>> Hi Bernhard,
>> 
>> Can you comment if this should be:
>> 
>> +    { "PIIX", "PIIX3-xen", QEMU_ARCH_XEN },
>> 
>> instead? IIUC, the patch series also removed PIIX3 and PIIX4 and
>> replaced them with PIIX. Or am I not understanding correctly?
> 
> There is a confusion in QEMU between PCI bridges, the first PCI
> function they implement, and the other PCI functions.
> 
> Here TYPE_PIIX3_DEVICE stands for "the PCI function, part of the PIIX
> south bridge chipset, which exposes a PCI-to-ISA bridge". A better
> name could be TYPE_PIIX3_ISA_PCI_DEVICE. Unfortunately this
> device is named "PIIX3" with no indication that it is an ISA bridge.


Thanks, you are right, I see the PIIX3 device still exists after
this patch set is applied.

chuckz@debian:~/sources-sid/qemu/qemu-7.50+dfsg/hw/i386$ grep -r PIIX3 *
pc_piix.c:        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);

I also understand there is the PCI-to-ISA bridge at 00:01.0 on the PCI bus:

chuckz@debian:~$ lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
00:03.0 VGA compatible controller: Device 1234:1111 (rev 02)

I also see that with this patch series applied, the PIIX4 ACPI bridge is still
at 00:01.3. I get exactly the same lspci output without the patch series, which
gives me confidence it is working as designed.


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 19:45:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 19:45:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471428.731263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9hK-0001kH-D9; Wed, 04 Jan 2023 19:45:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471428.731263; Wed, 04 Jan 2023 19:45:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9hK-0001kA-AX; Wed, 04 Jan 2023 19:45:14 +0000
Received: by outflank-mailman (input) for mailman id 471428;
 Wed, 04 Jan 2023 19:45:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bCSi=5B=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pD9hI-0001jw-AI
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 19:45:12 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4cb32b48-8c68-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 20:45:11 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id gh17so85261134ejb.6
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 11:45:11 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb2008108eedf25879029.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:8108:eedf:2587:9029])
 by smtp.gmail.com with ESMTPSA id
 z27-20020a1709063a1b00b007aea1dc1840sm15719325eje.111.2023.01.04.11.45.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 11:45:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cb32b48-8c68-11ed-91b6-6bf2151ebd3b
X-Received: by 2002:a17:907:8b09:b0:7c0:e5c8:d439 with SMTP id sz9-20020a1709078b0900b007c0e5c8d439mr45326486ejc.3.1672861510655;
        Wed, 04 Jan 2023 11:45:10 -0800 (PST)
Date: Wed, 04 Jan 2023 19:45:04 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>, qemu-devel@nongnu.org,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>
CC: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>
Subject: =?US-ASCII?Q?Re=3A_=5BPATCH_v2_6/6=5D_hw/isa/piix=3A_Res?= =?US-ASCII?Q?olve_redundant_TYPE=5FPIIX3=5FXEN=5FDEVICE?=
In-Reply-To: <30ed41ab-f7c9-15fb-8f4b-b2742b1d4188@aol.com>
References: <20230104144437.27479-1-shentey@gmail.com> <20230104144437.27479-7-shentey@gmail.com> <30ed41ab-f7c9-15fb-8f4b-b2742b1d4188@aol.com>
Message-ID: <E1080F7D-3441-4C82-9EA8-FB6B6AC317A0@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: 8bit



On 4 January 2023 at 16:42:43 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>On 1/4/23 9:44 AM, Bernhard Beschow wrote:
>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>> 
>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>> ---
>>  hw/i386/pc_piix.c             |  4 +---
>>  hw/isa/piix.c                 | 20 --------------------
>>  include/hw/southbridge/piix.h |  1 -
>>  3 files changed, 1 insertion(+), 24 deletions(-)
>> 
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index 5738d9cdca..6b8de3d59d 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>      if (pcmc->pci_enabled) {
>>          DeviceState *dev;
>>          PCIDevice *pci_dev;
>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>> -                                         : TYPE_PIIX3_DEVICE;
>>          int i;
>> 
>>          pci_bus = i440fx_init(pci_type,
>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>                                         : pci_slot_get_pirq);
>>          pcms->bus = pci_bus;
>> 
>> -        pci_dev = pci_new_multifunction(-1, true, type);
>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>          object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>                                   machine_usb(machine), &error_abort);
>>          object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>> index 98e9b12661..e4587352c9 100644
>> --- a/hw/isa/piix.c
>> +++ b/hw/isa/piix.c
>> @@ -33,7 +33,6 @@
>>  #include "hw/qdev-properties.h"
>>  #include "hw/ide/piix.h"
>>  #include "hw/isa/isa.h"
>> -#include "hw/xen/xen.h"
>>  #include "sysemu/runstate.h"
>>  #include "migration/vmstate.h"
>>  #include "hw/acpi/acpi_aml_interface.h"
>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>      .class_init    = piix3_class_init,
>>  };
>> 
>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>> -{
>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>> -
>> -    k->realize = piix3_realize;
>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>> -    dc->vmsd = &vmstate_piix3;
>> -}
>> -
>> -static const TypeInfo piix3_xen_info = {
>> -    .name          = TYPE_PIIX3_XEN_DEVICE,
>> -    .parent        = TYPE_PIIX_PCI_DEVICE,
>> -    .instance_init = piix3_init,
>> -    .class_init    = piix3_xen_class_init,
>> -};
>> -
>>  static void piix4_realize(PCIDevice *dev, Error **errp)
>>  {
>>      ERRP_GUARD();
>> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
>>  {
>>      type_register_static(&piix_pci_type_info);
>>      type_register_static(&piix3_info);
>> -    type_register_static(&piix3_xen_info);
>>      type_register_static(&piix4_info);
>>  }
>> 
>> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
>> index 65ad8569da..b1fc94a742 100644
>> --- a/include/hw/southbridge/piix.h
>> +++ b/include/hw/southbridge/piix.h
>> @@ -77,7 +77,6 @@ struct PIIXState {
>>  OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
>> 
>>  #define TYPE_PIIX3_DEVICE "PIIX3"
>> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
>>  #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
>> 
>>  #endif
>
>
>This fixes the regression with the emulated usb tablet device that I reported in v1 here:
>
>https://lore.kernel.org/qemu-devel/aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com/
>
>I tested this patch again with all the prerequisites and now with v2 there are no regressions.

Good news!

>Tested-by: Chuck Zmudzinski <brchuckz@aol.com>

Thanks for the test ride and the Tested-by medal ;)

Best regards,
Bernhard


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 19:47:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 19:47:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471437.731274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9jA-0002N8-Ou; Wed, 04 Jan 2023 19:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471437.731274; Wed, 04 Jan 2023 19:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9jA-0002N1-Ls; Wed, 04 Jan 2023 19:47:08 +0000
Received: by outflank-mailman (input) for mailman id 471437;
 Wed, 04 Jan 2023 19:47:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKhu=5B=redhat.com=dgilbert@srs-se1.protection.inumbo.net>)
 id 1pD9jA-0002Mt-2O
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 19:47:08 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91225a90-8c68-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 20:47:06 +0100 (CET)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-433-46R5JUs0NmaPYkxYbwUVvA-1; Wed, 04 Jan 2023 14:47:01 -0500
Received: by mail-wm1-f69.google.com with SMTP id
 bd6-20020a05600c1f0600b003d96f7f2396so19176950wmb.3
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 11:47:01 -0800 (PST)
Received: from work-vm
 (ward-16-b2-v4wan-166627-cust863.vm18.cable.virginm.net. [81.97.203.96])
 by smtp.gmail.com with ESMTPSA id
 p19-20020a1c5453000000b003d2157627a8sm50343697wmi.47.2023.01.04.11.46.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Jan 2023 11:46:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91225a90-8c68-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1672861625;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6QXOEovurLwLNIz+FnppUbIJfgyjk9M5XlA8eSDuCrU=;
	b=SfiSU2cr7jb/KCeBiGBtuRoIwdXqP6eVp26xyDHcyRFwjdbN2RjPCAKGqHFM5llRysVsQv
	xD1GQ8AIGdB/3Gxg9IHfgB0pqbC5op4XAmHSSmMqDRbDyf/4YZvdkvaFP4VYtJgS5aeS70
	TPxpeQL8HeinP1lqtTHQ/htJLHJCNbo=
X-MC-Unique: 46R5JUs0NmaPYkxYbwUVvA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=user-agent:in-reply-to:content-transfer-encoding
         :content-disposition:mime-version:references:message-id:subject:cc
         :to:from:date:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=6QXOEovurLwLNIz+FnppUbIJfgyjk9M5XlA8eSDuCrU=;
        b=HjDOM2rEKyh8Kq9oCdDUbSTrM52WmlMjj42wm+UsIX6Tl5v+RmUOeZDQkLwe0Sj8kj
         eyKw82Dnuvr9agRKD0t7tXnOp+YL9g1OwlHoXqhKRcVmNWs8mwA/6eXZEH/uPBSRRsrV
         yoUXbxvpz8qEoCSNbbPQ6sNRHyPTAOE49AG1Teg/mqw/gTIfg98Zm2gtTwTcFAgwrpZ4
         gVb+L0nFpp/EQMBwYLSq1gGkTGa/2LYajMleF+6fONFKDEVZ9RQQ2cNeCCNJKuMmKgWa
         /8pi+rYmxgB7ra3Duc2SzfixwFEid1afVICIFOymTz8RPg0YWOf2cKxua8DtJDqye1Uq
         hQpg==
X-Gm-Message-State: AFqh2kpHelz0Su3Hp4JLTxZueDiC8JQ7ubjDPoCJOJBJiuWMbLbtPjFd
	axdpb4brDto2icqUVDrpAJPbnVJs4eK1cUIgXXw1EI+AMosw3/o+cmsGgxBROwZmJ+41DHLp7bS
	CgQjqThJTb7pD4T5mW2QIuDEa3Lk=
X-Received: by 2002:a05:600c:4191:b0:3d1:fcca:ce24 with SMTP id p17-20020a05600c419100b003d1fccace24mr33592850wmh.32.1672861620110;
        Wed, 04 Jan 2023 11:47:00 -0800 (PST)
X-Google-Smtp-Source: AMrXdXt2IDKCOAndi+PoVcK0qTiAwfAqGYMJPzp7glXtRb4nxvriYq01FutSchTWhSgLZ0xQbbf1Wg==
X-Received: by 2002:a05:600c:4191:b0:3d1:fcca:ce24 with SMTP id p17-20020a05600c419100b003d1fccace24mr33592836wmh.32.1672861619972;
        Wed, 04 Jan 2023 11:46:59 -0800 (PST)
Date: Wed, 4 Jan 2023 19:46:56 +0000
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Daniel P. Berrangé <berrange@redhat.com>
Cc: qemu-devel@nongnu.org, qemu-ppc@nongnu.org,
	xen-devel@lists.xenproject.org, Laurent Vivier <lvivier@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Daniel Henrique Barboza <danielhb413@gmail.com>,
	virtio-fs@redhat.com, Michael Roth <michael.roth@amd.com>,
	Alex Bennée <alex.bennee@linaro.org>,
	qemu-block@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
	qemu-arm@nongnu.org, Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Cédric Le Goater <clg@kaod.org>,
	John Snow <jsnow@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>, Greg Kurz <groug@kaod.org>,
	Thomas Huth <thuth@redhat.com>
Subject: Re: [PATCH 3/6] tools/virtiofsd: add G_GNUC_PRINTF for logging
 functions
Message-ID: <Y7XXsHEqgTG9Ani6@work-vm>
References: <20221219130205.687815-1-berrange@redhat.com>
 <20221219130205.687815-4-berrange@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20221219130205.687815-4-berrange@redhat.com>
User-Agent: Mutt/2.2.9 (2022-11-12)
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

Yes, although I'm a little surprised this hasn't thrown up any warnings.


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
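
[Editorial note: the annotation pattern being reviewed can be sketched outside virtiofsd as follows. G_GNUC_PRINTF(fmt_idx, arg_idx) is glib's portable wrapper around the GCC/Clang format(printf) attribute, and an arg index of 0 is the correct annotation for functions that consume a va_list rather than variadic arguments. The names below are illustrative stand-ins, not the virtiofsd ones.]

```c
#include <stdarg.h>
#include <stdio.h>

/* Stand-in for glib's G_GNUC_PRINTF(fmt_idx, arg_idx).  An arg index of 0
 * means "check the format string, but the arguments arrive as a va_list". */
#define PRINTF_FMT(fmt_idx, arg_idx) \
    __attribute__((format(printf, fmt_idx, arg_idx)))

static char log_buf[256];

/* va_list consumer: annotated (2, 0), like default_log_func() above. */
PRINTF_FMT(2, 0)
static void log_vprintf(int level, const char *fmt, va_list ap)
{
    (void)level;
    vsnprintf(log_buf, sizeof(log_buf), fmt, ap);
}

/* Variadic front end: annotated (2, 3), like fuse_log().  With the
 * attribute in place, log_printf(0, "%s", 42) becomes a compile-time
 * -Wformat warning instead of undefined behaviour at runtime. */
PRINTF_FMT(2, 3)
static void log_printf(int level, const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    log_vprintf(level, fmt, ap);
    va_end(ap);
}
```

Compiling a deliberate format/argument mismatch with -Wformat (enabled by -Wall) is a quick way to confirm the annotations actually fire.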

> ---
>  tools/virtiofsd/fuse_log.c       | 1 +
>  tools/virtiofsd/fuse_log.h       | 6 ++++--
>  tools/virtiofsd/passthrough_ll.c | 1 +
>  3 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/virtiofsd/fuse_log.c b/tools/virtiofsd/fuse_log.c
> index 745d88cd2a..2de3f48ee7 100644
> --- a/tools/virtiofsd/fuse_log.c
> +++ b/tools/virtiofsd/fuse_log.c
> @@ -12,6 +12,7 @@
>  #include "fuse_log.h"
>  
>  
> +G_GNUC_PRINTF(2, 0)
>  static void default_log_func(__attribute__((unused)) enum fuse_log_level level,
>                               const char *fmt, va_list ap)
>  {
> diff --git a/tools/virtiofsd/fuse_log.h b/tools/virtiofsd/fuse_log.h
> index 8d7091bd4d..e5c2967ab9 100644
> --- a/tools/virtiofsd/fuse_log.h
> +++ b/tools/virtiofsd/fuse_log.h
> @@ -45,7 +45,8 @@ enum fuse_log_level {
>   * @param ap format string arguments
>   */
>  typedef void (*fuse_log_func_t)(enum fuse_log_level level, const char *fmt,
> -                                va_list ap);
> +                                va_list ap)
> +    G_GNUC_PRINTF(2, 0);
>  
>  /**
>   * Install a custom log handler function.
> @@ -68,6 +69,7 @@ void fuse_set_log_func(fuse_log_func_t func);
>   * @param level severity level (FUSE_LOG_ERR, FUSE_LOG_DEBUG, etc)
>   * @param fmt sprintf-style format string including newline
>   */
> -void fuse_log(enum fuse_log_level level, const char *fmt, ...);
> +void fuse_log(enum fuse_log_level level, const char *fmt, ...)
> +    G_GNUC_PRINTF(2, 3);
>  
>  #endif /* FUSE_LOG_H_ */
> diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
> index 20f0f41f99..40ea2ed27f 100644
> --- a/tools/virtiofsd/passthrough_ll.c
> +++ b/tools/virtiofsd/passthrough_ll.c
> @@ -4182,6 +4182,7 @@ static void setup_nofile_rlimit(unsigned long rlimit_nofile)
>      }
>  }
>  
> +G_GNUC_PRINTF(2, 0)
>  static void log_func(enum fuse_log_level level, const char *fmt, va_list ap)
>  {
>      g_autofree char *localfmt = NULL;
> -- 
> 2.38.1
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 19:55:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 19:55:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471445.731287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9rX-0003t0-M5; Wed, 04 Jan 2023 19:55:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471445.731287; Wed, 04 Jan 2023 19:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pD9rX-0003st-Gm; Wed, 04 Jan 2023 19:55:47 +0000
Received: by outflank-mailman (input) for mailman id 471445;
 Wed, 04 Jan 2023 19:55:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pD9rW-0003sn-Io
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 19:55:46 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c47a1958-8c69-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 20:55:43 +0100 (CET)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Jan 2023 14:55:39 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM8PR03MB6248.namprd03.prod.outlook.com (2603:10b6:8:25::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 19:55:36 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 19:55:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c47a1958-8c69-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672862144;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=lIjvNeL89m8O0HXMTS1K/V+te9aKFWyhRTArbSYxqPc=;
  b=DIUpeYdMj7+doehCq40tjzhT7TVSbWDbMBn6AstenfLFseWyO0ydtMfS
   tFjTxYXsFQXbqPR2zXgUp67+iLTO6cMBvxmyOEEg+hjF4Se+/ih4UoUa5
   Q4M7ZsSE+jM7vQ0JKCsJYQDViP9Z6iAVz1BDDGu2z/ErMxgkC3zrGQChC
   g=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 93674422
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,300,1665460800"; 
   d="scan'208";a="93674422"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=D3882hFN487MXL074f1/5DVe2l2w78ysE7f2ggzp9H7FCzLxL493h5/O1ZlCUjLkNEn+rD4jN1mBoCY+2y5Hxy0lJp3yIKNoefNH/elF/3sqPmXlobSRU9DDrj8daVMYGdyjVlfRz6UTY8loX9iT5OgRgUk/w346SJQjSAODEq4Clnr37Q/V/fEi9sfDkyCLoXPu7d7Ix9G0CPjkEQgGxBjUcXVgqnIL3SKTE041Z2Uh0thrVGGrdNr0T9zkyocwRKGpR6fZNPST2llU2bdF4waWbUSAmIAcKLN5i/On3YmxnSSg2AOZDpSyt1gLjqOZnC3K6ZeIozrQoCQiIfOFpQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lIjvNeL89m8O0HXMTS1K/V+te9aKFWyhRTArbSYxqPc=;
 b=YQQLS34xMCIdHhd06b5jFci2hOrdzqC5Vk7evrtkazQwLhH/NoJiykSHqObDEahD3h8G5ZiReYAKz5mLm9IVfswse/bPm35RljXJZM2+6TeeOeC5UJ85yJfu7Anc/nDsn9Wz2Kee5IdYtTBTIzW3bIL/E1hJrI4ImJNjTd0JqCyaA4yD38TAieUVR50QUkUn7MCvDKpYzctjEp1/mR/l0JVEUCxOeQR8J+SQxp/sA+scm70Xh9rMBJBIjMyj31Jd7jREeQdB1EH4E4hJlyKFJMDWuuyeKARNn4PgKQCGpx1ySItykXiCFDSWCyCkzDSOgszVc6Wnr1tu9B3SWcATzw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lIjvNeL89m8O0HXMTS1K/V+te9aKFWyhRTArbSYxqPc=;
 b=kXHrPahcPTFH7QrFyFek+4Ir4OMU70/fLsG1VJgwJnP1Kmio8xb4aFWMps3jQr22OTzjhlA57Wx8Bs7reNQj2Vzzx8nleGTOYJ8GyXmwT3up+yTbyzaex5yDiehRsp/qfSDjdAD+fDtlSrD95pgo+4L1ukLh+/J35jgojR4jKjo=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Thread-Topic: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Thread-Index: AQHZH69XYhd2rMGoWE+asSBKOPPG4q6OdxyAgAA2ioA=
Date: Wed, 4 Jan 2023 19:55:35 +0000
Message-ID: <7dd00ce3-a95b-2477-128c-de36e75c4a34@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-4-andrew.cooper3@citrix.com>
 <540a449d-f76d-eb16-4f98-c4fb3564ce98@suse.com>
In-Reply-To: <540a449d-f76d-eb16-4f98-c4fb3564ce98@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM8PR03MB6248:EE_
x-ms-office365-filtering-correlation-id: 0b4122ae-3476-4370-6db0-08daee8da551
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <8BAAB875855C2141A830C3988AC2C40F@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b4122ae-3476-4370-6db0-08daee8da551
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jan 2023 19:55:35.7254
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: wapB1tHHZ8CaBknxBW9DJ0z5a7IxbdwnbxiKIdWapf0og0OgCtict17gmGm/6C4ucLAL4cJmWrE7MkgzEyAhWrjwaXdj4d8NqamFyT3JSt0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR03MB6248

On 04/01/2023 4:40 pm, Jan Beulich wrote:
> On 03.01.2023 21:09, Andrew Cooper wrote:
>> A split in virtual address space is only applicable for x86 PV guests.
>> Furthermore, the information returned for x86 64bit PV guests is wrong.
>>
>> Explain the problem in version.h, stating the other information that PV guests
>> need to know.
>>
>> For 64bit PV guests, and all non-x86-PV guests, return 0, which is strictly
>> less wrong than the values currently returned.
> I disagree for the 64-bit part of this. Seeing Linux'es exposure of the
> value in sysfs I even wonder whether we can change this like you do for
> HVM. Who knows what is being inferred from the value, and by whom.

Linux's sysfs ABI isn't relevant to us here.  The sysfs ABI says it
reports what the hypervisor presents, not that it will be a nonzero number.

>> --- a/xen/include/public/version.h
>> +++ b/xen/include/public/version.h
>> @@ -42,6 +42,26 @@ typedef char xen_capabilities_info_t[1024];
>>  typedef char xen_changeset_info_t[64];
>>  #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
>>  
>> +/*
>> + * This API is problematic.
>> + *
>> + * It is only applicable to guests which share pagetables with Xen (x86 PV
>> + * guests), and is supposed to identify the virtual address split between
>> + * guest kernel and Xen.
>> + *
>> + * For 32bit PV guests, it mostly does this, but the caller needs to know that
>> + * Xen lives between the split and 4G.
>> + *
>> + * For 64bit PV guests, Xen lives at the bottom of the upper canonical range.
>> + * This previously returned the start of the upper canonical range (which is
>> + * the userspace/Xen split), not the Xen/kernel split (which is 8TB further
>> + * on).  This now returns 0 because the old number wasn't correct, and
>> + * changing it to anything else would be even worse.
> Whether the guest runs user mode code in the low or high half (or in yet
> another way of splitting) isn't really dictated by the PV ABI, is it?

No, but given a choice of reporting the thing which is an architectural
boundary, or the one which is the actual split between the two adjacent
ranges, reporting the architectural boundary is clearly the unhelpful thing.

>  So
> whether the value is "wrong" is entirely unclear. Instead ...
>
>> + * For all guest types using hardware virt extentions, Xen is not mapped into
>> + * the guest kernel virtual address space.  This now return 0, where it
>> + * previously returned unrelated data.
>> + */
>>  #define XENVER_platform_parameters 5
>>  struct xen_platform_parameters {
>>     xen_ulong_t virt_start;
> ... the field name tells me that all that is being conveyed is the virtual
> address of where the hypervisor area starts.

IMO, it doesn't matter what the name of the field is.  It dates from the
days when 32bit PV was the only type of guest.

32bit PV guests really do have a variable split, so the guest kernel
really does need to get this value from Xen.

The split for 64bit PV guests is compile-time constant, hence why 64bit
PV kernels don't care.

For compat HVM, it happens to pick up the -1 from:

#ifdef CONFIG_PV32
    HYPERVISOR_COMPAT_VIRT_START(d) =
        is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
#endif

in arch_domain_create(), whereas for non-compat HVM, it gets a number in
an address space it has no connection to in the slightest.  ARM guests
end up getting XEN_VIRT_START (== 2M) handed back, but this absolutely
an internal detail that guests have no business knowing.


The only reason I'm not issuing an XSA for this is because we don't have
any pretence of KASLR in Xen.  Pretty much every other kernel gets CVEs
for infoleaks like this.

We feasibly could do KASLR in !PV builds, at which point this would
qualify for an XSA.

~Andrew
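
[Editorial note: the 64bit PV address arithmetic at issue in this XENVER_platform_parameters discussion can be checked mechanically. The concrete addresses below are assumptions taken from the classic x86_64 PV layout (Xen owning the bottom 8TB of the upper canonical range), used only to illustrate the two candidate values, not as an authoritative map.]

```c
#include <stdint.h>

/* Assumed layout constants (illustrative). */
#define UPPER_CANONICAL_START UINT64_C(0xffff800000000000)
#define XEN_RESERVED_BYTES    (UINT64_C(8) << 40) /* 8TB kept by Xen */

/* What XENVER_platform_parameters historically handed back for 64bit PV:
 * the userspace/Xen split, i.e. the architectural canonical boundary. */
static uint64_t reported_virt_start(void)
{
    return UPPER_CANONICAL_START;
}

/* The Xen/kernel split, 8TB further on, which is the boundary a guest
 * kernel would actually have wanted. */
static uint64_t xen_kernel_split(void)
{
    return UPPER_CANONICAL_START + XEN_RESERVED_BYTES;
}
```

Under these assumptions the two values differ by exactly 8TB (0xffff800000000000 vs 0xffff880000000000), which is the gap the patch's comment describes.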


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 20:04:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 20:04:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471454.731297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDA0G-0005W5-Ja; Wed, 04 Jan 2023 20:04:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471454.731297; Wed, 04 Jan 2023 20:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDA0G-0005Vy-Gi; Wed, 04 Jan 2023 20:04:48 +0000
Received: by outflank-mailman (input) for mailman id 471454;
 Wed, 04 Jan 2023 20:04:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDA0E-0005Vs-V6
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 20:04:47 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 071b0d19-8c6b-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 21:04:44 +0100 (CET)
Received: from mail-dm6nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Jan 2023 15:04:33 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5312.namprd03.prod.outlook.com (2603:10b6:208:1e9::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 20:04:31 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 20:04:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 071b0d19-8c6b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672862684;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=Q8lqIglA3cLE3gjKfVs53dWzklNgm/Z2M2j8Ey4rR3I=;
  b=CQI0kznF8bTSA5UgKzdn3RCyLc0gsvBfg3PaceTUZV1Ym23gmzdJD16K
   TQkagn4BemF6NlFexc/BfKb0kzXDmb3bc6t/hc7iTEZXUr2utlLRbTJVl
   cuYjVmkzpG7cxyPaAAxAdQr9E5tQ+8o5Oufolj06dbb5dMQG1q2v3+af5
   o=;
X-IronPort-RemoteIP: 104.47.57.170
X-IronPort-MID: 90151052
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,300,1665460800"; 
   d="scan'208";a="90151052"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BUduF74z8gtOEPoxpvdB+b9Zk3G3cOqbVwGagme4YFA4hlu4+7GOtXe/j5QswTA8JERYkLOiVKHSfXwHZoM8VQp8KEYO7iRPtl4mPRpy5SnliedMKbZP8uoJ8t5x02QREPdNLLRKV7wUj4TpMxQrUR779fWU8hrU3oc9OltZLE5ZkNPA/iGHMMr5vUCT2y9R2WYIHzNIBKKEyOqkSoPTOoUHm5VmZFDN5L4xYNr91Av1a5ffAzNH71eDztK7CRZpsbvAcRGYNdA5AVmR55QBdpqNTGmGJVaZxsfK7/69ffLqqIOLDFDn00f1t75wSzeo38XWoQtzJxd9HMpZOoC+RA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Q8lqIglA3cLE3gjKfVs53dWzklNgm/Z2M2j8Ey4rR3I=;
 b=P3EdI7bC16DC/fWxWhGKHMsY4AOzgiBqg0+dSfVT2fGrkQdDIdxgf/6N4WeNzT+IL0E7y9qG7RLUMzl36X3fAmqvn708OuPnUsi5s3jJyaZe4sji66IiSZ+BoRZyoUHxd6wNcetLCvEIVKkXtiBt3as9vPrQ+TlmQxo1pR0ZpwcKzLs2G5ZD8FtogAIq3GqaXNWt8ARcaPtMtZcUcJLO+nHcNuMpcrII3dzTQ+bEASxx3Zs92kF8mWKal1G8X94YA40UeZVWk3DDsr3qnGK9BNjyk/8ckZiNJY51eFdu/vNrSaNdbFCfgbwH1pbdvD4zOUSFTWjuWB28RUEnKgC5xQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q8lqIglA3cLE3gjKfVs53dWzklNgm/Z2M2j8Ey4rR3I=;
 b=iXTrHbSs2g7yG1zFjyQvxUAc0jUf5bQ8v+jjXYbN73+cq+QY27epcVY+5qv5XPaeWiLgtWMK5FqemQCxckojXD/cHK/2GYZA0sbvrrwlNwh07zvYBS4WAzchTwt6Pyfxawfrj7XzByN31FalGPZssGmVpp4blvT0GTgS0ykU784=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2-ish] x86/boot: Factor move_xen() out of __start_xen()
Thread-Topic: [PATCH v2-ish] x86/boot: Factor move_xen() out of __start_xen()
Thread-Index: AQHZEYuAyVgzwEEad0Cj70OyJmEML6520VMAgAB2soCAALPugIAW0HgA
Date: Wed, 4 Jan 2023 20:04:30 +0000
Message-ID: <5f3df359-2906-ba20-b8df-ae2f2d5f5981@citrix.com>
References: <20211207105339.28440-1-andrew.cooper3@citrix.com>
 <20221216201749.32164-1-andrew.cooper3@citrix.com>
 <520cdde9-07e1-fce8-56d9-205fc31c62e3@suse.com>
 <c14dacf1-7057-d860-7708-2dd13e8d6a4f@citrix.com>
 <e70bf233-4444-8c65-8cca-1b7ea74c55d1@suse.com>
In-Reply-To: <e70bf233-4444-8c65-8cca-1b7ea74c55d1@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <3C3073882006F0489B71C3F9AD10B543@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 21/12/2022 7:40 am, Jan Beulich wrote:
> On 20.12.2022 21:56, Andrew Cooper wrote:
>> On 20/12/2022 1:51 pm, Jan Beulich wrote:
>>> On 16.12.2022 21:17, Andrew Cooper wrote:
>>>> +        "mov    %[cr4], %%cr4\n\t"     /* CR4.PGE = 1 */
>>>> +        : [cr4] "=&a" (tmp) /* Could be "r", but "a" makes better asm */
>>>> +        : [cr3] "r" (__pa(idle_pg_table)),
>>>> +          [pge] "i" (X86_CR4_PGE)
>>>> +        : "memory" );
>>> The removed stack copying worries me, to be honest. Yes, for local
>>> variables of ours it doesn't matter because they are about to go out
>>> of scope. But what if the compiler instantiates any for its own use,
>>> e.g. ...
>>>
>>>> +    /*
>>>> +     * End of the critical region.  Updates to locals and globals now work as
>>>> +     * expected.
>>>> +     *
>>>> +     * Updates to local variables which have been spilled to the stack since
>>>> +     * the memcpy() have been lost.  But we don't care, because they're all
>>>> +     * going out of scope imminently.
>>>> +     */
>>>> +
>>>> +    printk("New Xen image base address: %#lx\n", xen_phys_start);
>>> ... the result of the address calculation for the string literal
>>> here? Such auxiliary calculations can happen at any point in the
>>> function, and won't necessarily (hence the example chosen) become
>>> impossible for the compiler to do with the memory clobber in the
>>> asm(). And while the string literal's address is likely cheap
>>> enough to calculate right in the function invocation sequence (and
>>> an option would also be to retain the printk() in the caller),
>>> other instrumentation options could be undermined by this as well.
>> Right now, the compiler is free to calculate the address of the string
>> literal in a register, and move it the other side of the TLB flush.
>> This will work just fine.
>>
>> However, the compiler cannot now, or ever in the future, spill such a
>> calculation to the stack.
> I'm afraid the compiler's view of things is different: Whatever it puts
> on the stack is viewed as virtual registers, unaffected by a memory
> clobber (of course there can be effects resulting from uses of named
> variables). Look at -O3 output of gcc12 (which is what I happened to
> play with; I don't think it's overly version dependent) for this
> (clearly contrived) piece of code:
>
> int __attribute__((const)) calc(int);
>
> int test(int i) {
>     int j = calc(i);
>
>     asm("nopl %0" : "+m" (j));
>     asm("nopq %%rsp" ::: "memory", "ax", "cx", "dx", "bx", "bp", "si", "di",
>                          "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15");
>     j = calc(i);
>     asm("nopl %0" :: "m" (j));
>
>     return j;
> }
>
> It instantiates a local on the stack for the result of calc(); it does
> not re-invoke calc() a 2nd time. Which means the memory clobber in the
> middle asm() does not affect that (and by implication: any) stack slot.
>
> Using godbolt I can also see that clang15 agrees with gcc12 in this
> regard. I didn't bother trying other versions.

Well this is problematic, because it contradicts what we depend on
asm("":::"memory") doing...

https://godbolt.org/z/xeGMc3sM9

But I don't fully agree with the conclusions drawn by this example.

It only instantiates a local on the stack because you force a memory
operand to satisfy the "m" constraints, not to satisfy the "memory"
clobber.

By declaring calc as const, you are permitting the compiler to make an
explicit transformation to delete one of the calls, irrespective of
anything else in the function.

It is weird that 'j' ends up taking two stack slots when it would be
absolutely fine for it to only have 1, and indeed this is what happens
when you remove the first and third asm()'s.  It is these which force
'j' to be on the stack, not the memory clobber in the middle.

Observe that after commenting those two out, Clang transforms things to
spill 'i' onto the stack, rather than 'j', and then tail-call calc() on
the way out.  This is actually deleting the first calc() call, rather
than the second.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 20:31:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 20:31:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471461.731308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDAPw-0000Nx-M1; Wed, 04 Jan 2023 20:31:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471461.731308; Wed, 04 Jan 2023 20:31:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDAPw-0000Nq-In; Wed, 04 Jan 2023 20:31:20 +0000
Received: by outflank-mailman (input) for mailman id 471461;
 Wed, 04 Jan 2023 20:31:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bCSi=5B=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pDAPv-0000Nk-E4
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 20:31:19 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bccd8b91-8c6e-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 21:31:16 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id x22so85394680ejs.11
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 12:31:16 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb2008108eedf25879029.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:8108:eedf:2587:9029])
 by smtp.gmail.com with ESMTPSA id
 l2-20020a1709060cc200b0084c4a8062a0sm12901703ejh.149.2023.01.04.12.31.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 12:31:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bccd8b91-8c6e-11ed-b8d0-410ff93cb8f0
Date: Wed, 04 Jan 2023 20:31:10 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
Subject: =?US-ASCII?Q?Re=3A_=5BPATCH_v2_6/6=5D_hw/isa/piix=3A_Res?= =?US-ASCII?Q?olve_redundant_TYPE=5FPIIX3=5FXEN=5FDEVICE?=
In-Reply-To: <92efe0f1-f22b-47bc-f27d-2f31cb3621ea@aol.com>
References: <20230104144437.27479-1-shentey@gmail.com> <20230104144437.27479-7-shentey@gmail.com> <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org> <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com> <be75758a-2547-d1ef-223e-157f3aa28b23@linaro.org> <92efe0f1-f22b-47bc-f27d-2f31cb3621ea@aol.com>
Message-ID: <A4B4B1D1-B466-459C-8A30-E79DACB14094@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On January 4, 2023 19:29:35 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>On 1/4/23 1:48 PM, Philippe Mathieu-Daudé wrote:
>> On 4/1/23 18:54, Chuck Zmudzinski wrote:
>>> On 1/4/23 10:35 AM, Philippe Mathieu-Daudé wrote:
>>>> +Markus/Thomas
>>>>
>>>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>>>
>>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>>>> ---
>>>>>    hw/i386/pc_piix.c             |  4 +---
>>>>>    hw/isa/piix.c                 | 20 --------------------
>>>>>    include/hw/southbridge/piix.h |  1 -
>>>>>    3 files changed, 1 insertion(+), 24 deletions(-)
>>>>>
>>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>>> index 5738d9cdca..6b8de3d59d 100644
>>>>> --- a/hw/i386/pc_piix.c
>>>>> +++ b/hw/i386/pc_piix.c
>>>>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>>>>        if (pcmc->pci_enabled) {
>>>>>            DeviceState *dev;
>>>>>            PCIDevice *pci_dev;
>>>>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>>>>> -                                         : TYPE_PIIX3_DEVICE;
>>>>>            int i;
>>>>>
>>>>>            pci_bus = i440fx_init(pci_type,
>>>>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>>>>                                           : pci_slot_get_pirq);
>>>>>            pcms->bus = pci_bus;
>>>>>
>>>>> -        pci_dev = pci_new_multifunction(-1, true, type);
>>>>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>>>>            object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>>>>                                     machine_usb(machine), &error_abort);
>>>>>            object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>>>>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>>>>> index 98e9b12661..e4587352c9 100644
>>>>> --- a/hw/isa/piix.c
>>>>> +++ b/hw/isa/piix.c
>>>>> @@ -33,7 +33,6 @@
>>>>>    #include "hw/qdev-properties.h"
>>>>>    #include "hw/ide/piix.h"
>>>>>    #include "hw/isa/isa.h"
>>>>> -#include "hw/xen/xen.h"
>>>>>    #include "sysemu/runstate.h"
>>>>>    #include "migration/vmstate.h"
>>>>>    #include "hw/acpi/acpi_aml_interface.h"
>>>>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>>>>        .class_init    = piix3_class_init,
>>>>>    };
>>>>>
>>>>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>>>> -{
>>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>>> -
>>>>> -    k->realize = piix3_realize;
>>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>>>> -    dc->vmsd = &vmstate_piix3;
>>>>
>>>> IIUC, since this device is user-creatable, we can't simply remove it
>>>> without going thru the deprecation process. Alternatively we could
>>>> add a type alias:
>>>>
>>>> -- >8 --
>>>> diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
>>>> index 4b0ef65780..d94f7ea369 100644
>>>> --- a/softmmu/qdev-monitor.c
>>>> +++ b/softmmu/qdev-monitor.c
>>>> @@ -64,6 +64,7 @@ typedef struct QDevAlias
>>>>                                  QEMU_ARCH_LOONGARCH)
>>>>    #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
>>>>    #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
>>>> +#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)
>>>>
>>>>    /* Please keep this table sorted by typename. */
>>>>    static const QDevAlias qdev_alias_table[] = {
>>>> @@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
>>>>        { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
>>>>        { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
>>>>        { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
>>>> +    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },
>>>
>>> Hi Bernhard,
>>>
>>> Can you comment if this should be:
>>>
>>> +    { "PIIX", "PIIX3-xen", QEMU_ARCH_XEN },
>>>
>>> instead? IIUC, the patch series also removed PIIX3 and PIIX4 and
>>> replaced them with PIIX. Or am I not understanding correctly?
>>
>> There is a confusion in QEMU between PCI bridges, the first PCI
>> function they implement, and the other PCI functions.
>>
>> Here TYPE_PIIX3_DEVICE stands for "the PCI function, part of the PIIX
>> south bridge chipset, which exposes a PCI-to-ISA bridge". A better
>> name could be TYPE_PIIX3_ISA_PCI_DEVICE. Unfortunately this
>> device is named "PIIX3" with no indication of the ISA bridge.
>
>
>Thanks, you are right, I see the PIIX3 device still exists after
>this patch set is applied.
>
>chuckz@debian:~/sources-sid/qemu/qemu-7.50+dfsg/hw/i386$ grep -r PIIX3 *
>pc_piix.c:        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>
>I also understand there is the PCI-to-ISA bridge at 00:01.0 on the PCI bus:
>
>chuckz@debian:~$ lspci
>00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
>00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
>00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
>00:03.0 VGA compatible controller: Device 1234:1111 (rev 02)
>
>I also see with this patch, there is a bridge that is a PIIX4 ACPI at 00:01.3.

Yeah, this PIIX4 ACPI device is what we consider a "Frankenstein" device
here on the list. Indeed, my PIIX consolidation series aims at eventually
replacing the remaining PIIX3 devices with PIIX4 ones, to present a
realistic environment to guests. The series you tested makes Xen work
with PIIX4. With a couple more patches you might be able to opt into a
realistic PIIX4 emulation in the future!

Best regards,
Bernhard

>I get the exact same output from lspci without the patch series, so that
>gives me confidence it is working as designed.


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 20:36:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 20:36:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471468.731318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDAUs-00010p-9b; Wed, 04 Jan 2023 20:36:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471468.731318; Wed, 04 Jan 2023 20:36:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDAUs-00010i-71; Wed, 04 Jan 2023 20:36:26 +0000
Received: by outflank-mailman (input) for mailman id 471468;
 Wed, 04 Jan 2023 20:36:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tpAs=5B=citrix.com=prvs=3612a7559=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDAUr-00010c-86
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 20:36:25 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 72fb8ae8-8c6f-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 21:36:23 +0100 (CET)
Received: from mail-mw2nam12lp2048.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Jan 2023 15:36:20 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA0PR03MB5449.namprd03.prod.outlook.com (2603:10b6:806:bd::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 4 Jan
 2023 20:36:18 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Wed, 4 Jan 2023
 20:36:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72fb8ae8-8c6f-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Julien
 Grall <julien@xen.org>, Anthony Perard <anthony.perard@citrix.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>
Subject: Re: [XEN PATCH v2 1/2] arch/riscv: initial RISC-V support to
 build/run minimal Xen
Date: Wed, 4 Jan 2023 20:36:18 +0000
Message-ID: <fd67c2a1-8f57-2efc-e6d9-f82d529b8b8b@citrix.com>
References: <cover.1672401599.git.oleksii.kurochko@gmail.com>
 <4702cb223dbd7629fe3d3e494eb363f4b2534e96.1672401599.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <4702cb223dbd7629fe3d3e494eb363f4b2534e96.1672401599.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB

On 30/12/2022 1:01 pm, Oleksii Kurochko wrote:
> The patch provides a minimal amount of changes to start
> build and run minimal Xen binary at GitLab CI&CD that will
> allow continuous checking of the build status of RISC-V Xen.
>
> Except introduction of new files the following changes were done:
> * Redefinition of ALIGN define from '.algin 2' to '.align 4' as

"align"

>   RISC-V implementations (except for with C extension) enforce 32-bit

While the C extension might mean things to RISC-V experts, it is better
to say the "compression" extension here so it's clear to non-RISC-V
experts reading the commit message too.

But, do we actually care about C?

ENTRY() needs to be 4 byte aligned because one of the few things it is
going to be used for is {m,s}tvec which requires 4-byte alignment even
with an IALIGN of 2.


I'd drop all talk about C and just say that 2 was an incorrect choice
previously.

>   instruction address alignment.  With C extension, 16-bit and 32-bit
>   are both allowed.
> * ALL_OBJ-y and ALL_LIBS-y were temporary overwritten to produce
>   a minimal hypervisor image otherwise it will be required to push
>   huge amount of headers and stubs for common, drivers, libs etc which
>   aren't necessary for now.
> * Section changed from .text to .text.header for start function
>   to make it the first one executed.
> * Rework riscv64/Makefile logic to rebase over changes since the first
>   RISC-V commit.
>
> RISC-V Xen can be built by the following instructions:
>   $ CONTAINER=riscv64 ./automation/scripts/containerize \
>        make XEN_TARGET_ARCH=riscv64 tiny64_defconfig

This needs a `-C xen` in this rune.

> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 942e4ffbc1..74386beb85 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,2 +1,18 @@
> +obj-$(CONFIG_RISCV_64) += riscv64/
> +
> +$(TARGET): $(TARGET)-syms
> +	$(OBJCOPY) -O binary -S $< $@
> +
> +$(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
> +	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
> +	$(NM) -pa --format=sysv $(@D)/$(@F) \
> +		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> +		>$(@D)/$(@F).map
> +
> +$(obj)/xen.lds: $(src)/xen.lds.S FORCE
> +	$(call if_changed_dep,cpp_lds_S)
> +
> +clean-files := $(objtree)/.xen-syms.[0-9]*

We don't need clean-files now that the main link has been simplified.

> diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
> index e2ae21de61..e10e13ba53 100644
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -36,6 +39,10 @@
>    name:
>  #endif
>  
> +#define XEN_VIRT_START  _AT(UL,0x00200000)

Space after the comma.

Otherwise, LGTM.

~Andrew
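[Archive note: the {m,s}tvec constraint referenced above comes from the RISC-V privileged spec, where the trap-vector MODE field occupies bits [1:0] of the CSR, so only a 4-byte-aligned address can serve as BASE. A minimal illustrative sketch; the helper name `encode_mtvec` is hypothetical, not from the patch:]

```python
# Sketch of the mtvec/stvec layout: BASE in bits [XLEN-1:2], MODE in
# bits [1:0].  A 2-byte-aligned handler address would collide with the
# MODE field, which is why ENTRY() must be 4-byte aligned even though
# IALIGN can be 2 with the compressed extension.
MODE_DIRECT = 0    # all traps jump to BASE
MODE_VECTORED = 1  # interrupts jump to BASE + 4 * cause

def encode_mtvec(base: int, mode: int) -> int:
    """Combine a handler base address and mode into an mtvec value."""
    if base & 0x3:
        raise ValueError("mtvec BASE must be 4-byte aligned")
    if mode not in (MODE_DIRECT, MODE_VECTORED):
        raise ValueError("unsupported mtvec MODE")
    return base | mode

print(hex(encode_mtvec(0x80200000, MODE_DIRECT)))  # 0x80200000
```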


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 20:39:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 20:39:35 +0000
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, Alistair Francis
	<alistair.francis@wdc.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>
Subject: Re: [XEN PATCH v2 2/2] automation: add RISC-V 64 cross-build tests
 for Xen
Date: Wed, 4 Jan 2023 20:39:15 +0000
Message-ID: <f05a72a7-9991-ed31-8174-596d5b3f8145@citrix.com>
References: <cover.1672401599.git.oleksii.kurochko@gmail.com>
 <855e05a0459d44282679f08c8f67e38d35635eb6.1672401599.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <855e05a0459d44282679f08c8f67e38d35635eb6.1672401599.git.oleksii.kurochko@gmail.com>

On 30/12/2022 1:01 pm, Oleksii Kurochko wrote:
> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> index e6a9357de3..11eb1c6b82 100644
> --- a/automation/gitlab-ci/build.yaml
> +++ b/automation/gitlab-ci/build.yaml
> @@ -617,6 +644,21 @@ alpine-3.12-gcc-debug-arm64-boot-cpupools:
>      EXTRA_XEN_CONFIG: |
>        CONFIG_BOOT_TIME_CPUPOOLS=y
>  
> +# RISC-V 64 cross-build
> +riscv64-cross-gcc:
> +  extends: .gcc-riscv64-cross-build
> +  variables:
> +    CONTAINER: archlinux:riscv64
> +    KBUILD_DEFCONFIG: tiny64_defconfig
> +    HYPERVISOR_ONLY: y
> +
> +riscv64-cross-gcc-debug:
> +  extends: .gcc-riscv64-cross-build-debug
> +  variables:
> +    CONTAINER: archlinux:riscv64
> +    KBUILD_DEFCONFIG: tiny64_defconfig
> +    HYPERVISOR_ONLY: y
> +

Judging by the Kconfig which gets written out, I suggest inserting the
two RANDCONFIG jobs right now.

>  ## Test artifacts common
>  
>  .test-jobs-artifact-common:
> @@ -692,3 +734,6 @@ kernel-5.10.74-export:
>       - binaries/bzImage
>    tags:
>      - x86_64
> +
> +# # RISC-V 64 test artificats
> +# # TODO: add RISC-V 64 test artitifacts

Drop this hunk.  All you're going to be doing is deleting it in the next
series...

~Andrew
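[Archive note: a sketch of the two RANDCONFIG jobs suggested above, modelled on the job definitions in the quoted hunk. The job names and the `RANDCONFIG: y` variable follow the pattern of other randconfig jobs in build.yaml; treat them as assumptions, not part of the posted patch:]

```yaml
# Hypothetical sketch, not from the posted series: RISC-V 64
# randconfig jobs shaped like the jobs quoted above.
riscv64-cross-gcc-randconfig:
  extends: .gcc-riscv64-cross-build
  variables:
    CONTAINER: archlinux:riscv64
    KBUILD_DEFCONFIG: tiny64_defconfig
    RANDCONFIG: y
    HYPERVISOR_ONLY: y

riscv64-cross-gcc-debug-randconfig:
  extends: .gcc-riscv64-cross-build-debug
  variables:
    CONTAINER: archlinux:riscv64
    KBUILD_DEFCONFIG: tiny64_defconfig
    RANDCONFIG: y
    HYPERVISOR_ONLY: y
```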


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 20:44:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 20:44:58 +0000
Date: Wed, 04 Jan 2023 20:44:44 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Hervé Poussineau <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
References: <20230104144437.27479-1-shentey@gmail.com> <20230104144437.27479-7-shentey@gmail.com> <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org> <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
Message-ID: <E3E983F2-0FB3-4F6B-B2D6-ABE7E021228E@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On January 4, 2023 at 17:54:16 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>On 1/4/23 10:35 AM, Philippe Mathieu-Daudé wrote:
>> +Markus/Thomas
>> 
>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>> 
>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>> ---
>>>   hw/i386/pc_piix.c             |  4 +---
>>>   hw/isa/piix.c                 | 20 --------------------
>>>   include/hw/southbridge/piix.h |  1 -
>>>   3 files changed, 1 insertion(+), 24 deletions(-)
>>> 
>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>> index 5738d9cdca..6b8de3d59d 100644
>>> --- a/hw/i386/pc_piix.c
>>> +++ b/hw/i386/pc_piix.c
>>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>>       if (pcmc->pci_enabled) {
>>>           DeviceState *dev;
>>>           PCIDevice *pci_dev;
>>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>>> -                                         : TYPE_PIIX3_DEVICE;
>>>           int i;
>>>   
>>>           pci_bus = i440fx_init(pci_type,
>>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>>                                          : pci_slot_get_pirq);
>>>           pcms->bus = pci_bus;
>>>   
>>> -        pci_dev = pci_new_multifunction(-1, true, type);
>>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>>           object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>>                                    machine_usb(machine), &error_abort);
>>>           object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>>> index 98e9b12661..e4587352c9 100644
>>> --- a/hw/isa/piix.c
>>> +++ b/hw/isa/piix.c
>>> @@ -33,7 +33,6 @@
>>>   #include "hw/qdev-properties.h"
>>>   #include "hw/ide/piix.h"
>>>   #include "hw/isa/isa.h"
>>> -#include "hw/xen/xen.h"
>>>   #include "sysemu/runstate.h"
>>>   #include "migration/vmstate.h"
>>>   #include "hw/acpi/acpi_aml_interface.h"
>>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>>       .class_init    = piix3_class_init,
>>>   };
>>>   
>>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>> -{
>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>> -
>>> -    k->realize = piix3_realize;
>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>> -    dc->vmsd = &vmstate_piix3;
>> 
>> IIUC, since this device is user-creatable, we can't simply remove it
>> without going thru the deprecation process. Alternatively we could
>> add a type alias:
>> 
>> -- >8 --
>> diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
>> index 4b0ef65780..d94f7ea369 100644
>> --- a/softmmu/qdev-monitor.c
>> +++ b/softmmu/qdev-monitor.c
>> @@ -64,6 +64,7 @@ typedef struct QDevAlias
>>                                 QEMU_ARCH_LOONGARCH)
>>   #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
>>   #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
>> +#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)
>> 
>>   /* Please keep this table sorted by typename. */
>>   static const QDevAlias qdev_alias_table[] = {
>> @@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
>>       { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
>>       { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
>>       { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
>> +    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },
>
>Hi Bernhard,
>
>Can you comment if this should be:
>
>+    { "PIIX", "PIIX3-xen", QEMU_ARCH_XEN },
>
>instead? IIUC, the patch series also removed PIIX3 and PIIX4 and
>replaced them with PIIX. Or am I not understanding correctly?

PIIX3 is correct. The PIIX consolidation is just about sharing code between
the PIIX3 and PIIX4 south bridges and should not cause any user or guest
observable differences.

Best regards,
Bernhard

>
>Best regards,
>
>Chuck
>
>
>>       { }
>>   };
>> ---
>> 
>> But I'm not sure due to this comment from commit ee46d8a503
>> (2011-12-22 15:24:20 -0600):
>> 
>> 47) /*
>> 48)  * Aliases were a bad idea from the start.  Let's keep them
>> 49)  * from spreading further.
>> 50)  */
>> 
>> Maybe using qdev_alias_table[] during device deprecation is
>> acceptable?
>> 
>>> -}
>>> -
>>> -static const TypeInfo piix3_xen_info = {
>>> -    .name          = TYPE_PIIX3_XEN_DEVICE,
>>> -    .parent        = TYPE_PIIX_PCI_DEVICE,
>>> -    .instance_init = piix3_init,
>>> -    .class_init    = piix3_xen_class_init,
>>> -};
>>> -
>>>   static void piix4_realize(PCIDevice *dev, Error **errp)
>>>   {
>>>       ERRP_GUARD();
>>> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
>>>   {
>>>       type_register_static(&piix_pci_type_info);
>>>       type_register_static(&piix3_info);
>>> -    type_register_static(&piix3_xen_info);
>>>       type_register_static(&piix4_info);
>>>   }
>>>   
>>> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
>>> index 65ad8569da..b1fc94a742 100644
>>> --- a/include/hw/southbridge/piix.h
>>> +++ b/include/hw/southbridge/piix.h
>>> @@ -77,7 +77,6 @@ struct PIIXState {
>>>   OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
>>>   
>>>   #define TYPE_PIIX3_DEVICE "PIIX3"
>>> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
>>>   #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
>>>   
>>>   #endif
>> 
>


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 20:47:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 20:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471492.731353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDAfo-0003i7-VK; Wed, 04 Jan 2023 20:47:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471492.731353; Wed, 04 Jan 2023 20:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDAfo-0003i0-Qx; Wed, 04 Jan 2023 20:47:44 +0000
Received: by outflank-mailman (input) for mailman id 471492;
 Wed, 04 Jan 2023 20:47:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aavW=5B=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDAfn-0003h6-5o
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 20:47:43 +0000
Received: from sonic313-20.consmr.mail.gq1.yahoo.com
 (sonic313-20.consmr.mail.gq1.yahoo.com [98.137.65.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 067ed089-8c71-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 21:47:39 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic313.consmr.mail.gq1.yahoo.com with HTTP; Wed, 4 Jan 2023 20:47:38 +0000
Received: by hermes--production-ne1-7b69748c4d-dzr9v (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 6c878df3bbaf9935a45a9609953ac48f; 
 Wed, 04 Jan 2023 20:47:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 067ed089-8c71-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <b2ce641b-73ad-f3a2-dc9d-1ccfdd1ee8d8@aol.com>
Date: Wed, 4 Jan 2023 15:47:34 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: Alex Williamson <alex.williamson@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <20230102124605-mutt-send-email-mst@kernel.org>
 <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
 <20230103081456.1d676b8e.alex.williamson@redhat.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230103081456.1d676b8e.alex.williamson@redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3366

On 1/3/23 10:14 AM, Alex Williamson wrote:

> 
> It's necessary to configure the assigned IGD at slot 2 to make it
> functional, yes, but I don't really understand this notion of
> "reserving" slot 2.  If something occupies address 00:02.0 in the
> config, it's the user's or management tool's responsibility to move it
> to make this configuration functional.  Why does QEMU need to play a
> part in reserving this bus address.  IGD devices are not generally
> hot-pluggable either, so it doesn't seem we need to reserve an address
> in case an IGD device is added dynamically later.

The capability to reserve a bus address for a quirky device need not
be limited to the case of hotplugged or dynamically added devices. The
igd is a quirky device, and its presence in an emulated system like
qemu requires special handling. The slot_reserved_mask member of PCIBus
is also well-suited to the case of a quirky device like the Intel igd that
needs to be at slot 2. Just because it is not dynamically added later
does not change the fact that it needs special handling at its initial
configuration when the guest is being created.

>  

Here's the problem, which answers Michael's question of why this
patch is specific to xen:

---snip---
#ifdef CONFIG_XEN

...

static void pc_xen_hvm_init(MachineState *machine)
{
    PCMachineState *pcms = PC_MACHINE(machine);

    if (!xen_enabled()) {
        error_report("xenfv machine requires the xen accelerator");
        exit(1);
    }

    pc_xen_hvm_init_pci(machine);
    pci_create_simple(pcms->bus, -1, "xen-platform");
}
#endif
---snip---

This code is from hw/i386/pc_piix.c. Note the call to
pci_create_simple to create the xen platform pci device,
which has -1 as the second argument. That -1 tells
pci_create_simple to autoconfigure the pci bdf address.

It is *hard-coded* that way. That means no toolstack or
management tool can change it. And what is hard-coded here
is that the xen platform device will occupy slot 2, preventing
the Intel igd or any other device from occupying slot 2.

So, even if xen developers wanted to create a version of libxl
that is flexible enough to allow the xen platform device
to be at a different slot, they could not without patching
qemu to at least change that -1 to an initialization variable
that can be read from a qemu command line option that libxl
could configure.

So, why not just accept this patch as the best way to deal
with a xen-specific problem? It fixes the problem in a way that
follows the xen/libxl philosophy of autoconfiguring as much as
possible, making an exception only for quirky devices like the
Intel igd, for which the existing slot_reserved_mask member of
PCIBus is very useful.

IMHO, trying to impose the kvm/libvirt philosophy of a highly
configurable toolstack on the xen/xenlight philosophy of
autoconfiguring whatever can be autoconfigured, and of using
higher-level options like igd-passthrough=on to tweak the
autoconfiguration so that it is compatible with quirky devices
like the Intel igd, is like trying to put a square peg into a
round hole. Actually, qemu with its qom is able to accommodate
both approaches to the design of a toolstack, and each vendor or
project that depends on qemu should be free to use the approach
it prefers.

Just my two cents, FWIW.

Kind regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 20:51:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 20:51:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471500.731362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDAiu-00059j-F8; Wed, 04 Jan 2023 20:50:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471500.731362; Wed, 04 Jan 2023 20:50:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDAiu-00059c-CS; Wed, 04 Jan 2023 20:50:56 +0000
Received: by outflank-mailman (input) for mailman id 471500;
 Wed, 04 Jan 2023 20:50:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aavW=5B=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDAit-00059W-Eu
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 20:50:55 +0000
Received: from sonic315-55.consmr.mail.gq1.yahoo.com
 (sonic315-55.consmr.mail.gq1.yahoo.com [98.137.65.31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 79a6e7fc-8c71-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 21:50:53 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic315.consmr.mail.gq1.yahoo.com with HTTP; Wed, 4 Jan 2023 20:50:51 +0000
Received: by hermes--production-ne1-7b69748c4d-bmdl9 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 48efc7ae25fbebe2e93c90ab59b9c6eb; 
 Wed, 04 Jan 2023 20:50:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79a6e7fc-8c71-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <38e0ba3c-1914-9148-0541-e2c28efb5f87@aol.com>
Date: Wed, 4 Jan 2023 15:50:44 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Hervé Poussineau <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
 <E3E983F2-0FB3-4F6B-B2D6-ABE7E021228E@gmail.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <E3E983F2-0FB3-4F6B-B2D6-ABE7E021228E@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4274

On 1/4/23 3:44 PM, Bernhard Beschow wrote:
> 
> 
> On January 4, 2023 at 17:54:16 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>>On 1/4/23 10:35 AM, Philippe Mathieu-Daudé wrote:
>>> +Markus/Thomas
>>> 
>>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>> 
>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>>> ---
>>>>   hw/i386/pc_piix.c             |  4 +---
>>>>   hw/isa/piix.c                 | 20 --------------------
>>>>   include/hw/southbridge/piix.h |  1 -
>>>>   3 files changed, 1 insertion(+), 24 deletions(-)
>>>> 
>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>> index 5738d9cdca..6b8de3d59d 100644
>>>> --- a/hw/i386/pc_piix.c
>>>> +++ b/hw/i386/pc_piix.c
>>>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>>>       if (pcmc->pci_enabled) {
>>>>           DeviceState *dev;
>>>>           PCIDevice *pci_dev;
>>>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>>>> -                                         : TYPE_PIIX3_DEVICE;
>>>>           int i;
>>>>   
>>>>           pci_bus = i440fx_init(pci_type,
>>>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>>>                                          : pci_slot_get_pirq);
>>>>           pcms->bus = pci_bus;
>>>>   
>>>> -        pci_dev = pci_new_multifunction(-1, true, type);
>>>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>>>           object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>>>                                    machine_usb(machine), &error_abort);
>>>>           object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>>>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>>>> index 98e9b12661..e4587352c9 100644
>>>> --- a/hw/isa/piix.c
>>>> +++ b/hw/isa/piix.c
>>>> @@ -33,7 +33,6 @@
>>>>   #include "hw/qdev-properties.h"
>>>>   #include "hw/ide/piix.h"
>>>>   #include "hw/isa/isa.h"
>>>> -#include "hw/xen/xen.h"
>>>>   #include "sysemu/runstate.h"
>>>>   #include "migration/vmstate.h"
>>>>   #include "hw/acpi/acpi_aml_interface.h"
>>>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>>>       .class_init    = piix3_class_init,
>>>>   };
>>>>   
>>>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>>> -{
>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>> -
>>>> -    k->realize = piix3_realize;
>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>>> -    dc->vmsd = &vmstate_piix3;
>>> 
>>> IIUC, since this device is user-creatable, we can't simply remove it
>>> without going thru the deprecation process. Alternatively we could
>>> add a type alias:
>>> 
>>> -- >8 --
>>> diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
>>> index 4b0ef65780..d94f7ea369 100644
>>> --- a/softmmu/qdev-monitor.c
>>> +++ b/softmmu/qdev-monitor.c
>>> @@ -64,6 +64,7 @@ typedef struct QDevAlias
>>>                                 QEMU_ARCH_LOONGARCH)
>>>   #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
>>>   #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
>>> +#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)
>>> 
>>>   /* Please keep this table sorted by typename. */
>>>   static const QDevAlias qdev_alias_table[] = {
>>> @@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
>>>       { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
>>>       { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
>>>       { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
>>> +    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },
>>
>>Hi Bernhard,
>>
>>Can you comment if this should be:
>>
>>+    { "PIIX", "PIIX3-xen", QEMU_ARCH_XEN },
>>
>>instead? IIUC, the patch series also removed PIIX3 and PIIX4 and
>>replaced them with PIIX. Or am I not understanding correctly?
> 
> PIIX3 is correct. The PIIX consolidation is just about sharing code between the PIIX3 and PIIX4 south bridges and should not cause any user or guest observable differences.

I realize that now. I see the PIIX3 device still exists after applying the patch set.
Thanks,

Chuck


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 20:58:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 20:58:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471507.731374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDApo-0005pT-8A; Wed, 04 Jan 2023 20:58:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471507.731374; Wed, 04 Jan 2023 20:58:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDApo-0005pM-50; Wed, 04 Jan 2023 20:58:04 +0000
Received: by outflank-mailman (input) for mailman id 471507;
 Wed, 04 Jan 2023 20:58:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aavW=5B=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDApn-0005pA-2M
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 20:58:03 +0000
Received: from sonic307-55.consmr.mail.gq1.yahoo.com
 (sonic307-55.consmr.mail.gq1.yahoo.com [98.137.64.31])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78d6bbaf-8c72-11ed-91b6-6bf2151ebd3b;
 Wed, 04 Jan 2023 21:58:01 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic307.consmr.mail.gq1.yahoo.com with HTTP; Wed, 4 Jan 2023 20:57:59 +0000
Received: by hermes--production-ne1-7b69748c4d-drrwg (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID c018152b0879594142c30836da56166d; 
 Wed, 04 Jan 2023 20:57:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78d6bbaf-8c72-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672865879; bh=MwqxqmMf18MOAgnCK181pLsHPRDudVx5Vcx25bazdHY=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=jF8CsG9yyVzc/LfhMgmjumRjssA7XyT7Lmr2xT1z+IP7YOi6Tnxf1D9l+S0TkQ5XY5neasaWRQXKh6OmBCBENtGYszgxsr+tCWA5twvDOZHKeJVdGgATPZBprlz4PSYgb3d9LX25EO+jJkb5x5TluDRFME7QoCgXQCZ9rKmVToi+8TMczNlXRtTby1xK4C48uKmQxCLEj/mQ4dRJRuQgkBux6y2Q7i9fW/23m7Ry5v0m+PfFvo1pm5rut817HuCAPj+hteKVMk8O5S0UXqlPU5LCY5hKDh0o5DSK8J+jMVbFHEpyYmbiJWZmTE47c9zOfTboTymlvczvVCOmHhTF9w==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672865879; bh=2Ropz9OOjxUP5aP+emjrqR06SAJW6JbpIF/9X9NCbbZ=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=WFWWD1qzqOIXvwmXIMcgxWPR8KqiFcGRlf4sACs3LFzfpU4Tz4nGWvk+VXERK3FRmywWmQcfFotrphCLTT+PwrRRLEsJa+FLu35T2fiYmIqjz1RIEwXUAx/sVOmjHrYPTozE07/7paBTDVuVpp7J3Xt2nnVPCtA5ajFJb6zNx5Te3/wBuVYdOzyMQDQPLsxQky7GKZqJ0WBzQtVk0g6hLkO8H9NAl0jI7IiBEbrpYNldyo1JlyCutKMVGNFRA6DsY3FH+oFKUsVaIfgMRxYsiRWlFZl0CjVDGfqZ3NTMMJs80rfDPrZG+RNyKV+ac7qPi+l0DV0W9cJf8xyEql/ygw==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <f6817652-8748-53e0-7ff8-505fd0a98ee3@aol.com>
Date: Wed, 4 Jan 2023 15:57:51 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
 <be75758a-2547-d1ef-223e-157f3aa28b23@linaro.org>
 <92efe0f1-f22b-47bc-f27d-2f31cb3621ea@aol.com>
 <A4B4B1D1-B466-459C-8A30-E79DACB14094@gmail.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <A4B4B1D1-B466-459C-8A30-E79DACB14094@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 6229

On 1/4/23 3:31 PM, Bernhard Beschow wrote:
> 
> 
> Am 4. Januar 2023 19:29:35 UTC schrieb Chuck Zmudzinski <brchuckz@aol.com>:
>>On 1/4/23 1:48 PM, Philippe Mathieu-Daudé wrote:
>>> On 4/1/23 18:54, Chuck Zmudzinski wrote:
>>>> On 1/4/23 10:35 AM, Philippe Mathieu-Daudé wrote:
>>>>> +Markus/Thomas
>>>>>
>>>>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>>>>
>>>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>>>>> ---
>>>>>>    hw/i386/pc_piix.c             |  4 +---
>>>>>>    hw/isa/piix.c                 | 20 --------------------
>>>>>>    include/hw/southbridge/piix.h |  1 -
>>>>>>    3 files changed, 1 insertion(+), 24 deletions(-)
>>>>>>
>>>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>>>> index 5738d9cdca..6b8de3d59d 100644
>>>>>> --- a/hw/i386/pc_piix.c
>>>>>> +++ b/hw/i386/pc_piix.c
>>>>>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>>>>>        if (pcmc->pci_enabled) {
>>>>>>            DeviceState *dev;
>>>>>>            PCIDevice *pci_dev;
>>>>>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>>>>>> -                                         : TYPE_PIIX3_DEVICE;
>>>>>>            int i;
>>>>>>    
>>>>>>            pci_bus = i440fx_init(pci_type,
>>>>>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>>>>>                                           : pci_slot_get_pirq);
>>>>>>            pcms->bus = pci_bus;
>>>>>>    
>>>>>> -        pci_dev = pci_new_multifunction(-1, true, type);
>>>>>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>>>>>            object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>>>>>                                     machine_usb(machine), &error_abort);
>>>>>>            object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>>>>>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>>>>>> index 98e9b12661..e4587352c9 100644
>>>>>> --- a/hw/isa/piix.c
>>>>>> +++ b/hw/isa/piix.c
>>>>>> @@ -33,7 +33,6 @@
>>>>>>    #include "hw/qdev-properties.h"
>>>>>>    #include "hw/ide/piix.h"
>>>>>>    #include "hw/isa/isa.h"
>>>>>> -#include "hw/xen/xen.h"
>>>>>>    #include "sysemu/runstate.h"
>>>>>>    #include "migration/vmstate.h"
>>>>>>    #include "hw/acpi/acpi_aml_interface.h"
>>>>>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>>>>>        .class_init    = piix3_class_init,
>>>>>>    };
>>>>>>    
>>>>>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>>>>> -{
>>>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>>>> -
>>>>>> -    k->realize = piix3_realize;
>>>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>>>>> -    dc->vmsd = &vmstate_piix3;
>>>>>
>>>>> IIUC, since this device is user-creatable, we can't simply remove it
>>>>> without going thru the deprecation process. Alternatively we could
>>>>> add a type alias:
>>>>>
>>>>> -- >8 --
>>>>> diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
>>>>> index 4b0ef65780..d94f7ea369 100644
>>>>> --- a/softmmu/qdev-monitor.c
>>>>> +++ b/softmmu/qdev-monitor.c
>>>>> @@ -64,6 +64,7 @@ typedef struct QDevAlias
>>>>>                                  QEMU_ARCH_LOONGARCH)
>>>>>    #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
>>>>>    #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
>>>>> +#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)
>>>>>
>>>>>    /* Please keep this table sorted by typename. */
>>>>>    static const QDevAlias qdev_alias_table[] = {
>>>>> @@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
>>>>>        { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
>>>>>        { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
>>>>>        { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
>>>>> +    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },
>>>> 
>>>> Hi Bernhard,
>>>> 
>>>> Can you comment if this should be:
>>>> 
>>>> +    { "PIIX", "PIIX3-xen", QEMU_ARCH_XEN },
>>>> 
>>>> instead? IIUC, the patch series also removed PIIX3 and PIIX4 and
>>>> replaced them with PIIX. Or am I not understanding correctly?
>>> 
>>> There is a confusion in QEMU between PCI bridges, the first PCI
>>> function they implement, and the other PCI functions.
>>> 
>>> Here TYPE_PIIX3_DEVICE stands for "the PCI function, part of the PIIX
>>> south bridge chipset, which exposes a PCI-to-ISA bridge". A better
>>> name could be TYPE_PIIX3_ISA_PCI_DEVICE. Unfortunately this
>>> device is named "PIIX3" with no indication of the ISA bridge.
>>
>>
>>Thanks, you are right, I see the PIIX3 device still exists after
>>this patch set is applied.
>>
>>chuckz@debian:~/sources-sid/qemu/qemu-7.50+dfsg/hw/i386$ grep -r PIIX3 *
>>pc_piix.c:        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>
>>I also understand there is the PCI-to-ISA bridge at 00:01.0 on the PCI bus:
>>
>>chuckz@debian:~$ lspci
>>00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>>00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>>00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
>>00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
>>00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>>00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
>>00:03.0 VGA compatible controller: Device 1234:1111 (rev 02)
>>
>>I also see with this patch, there is a bridge that is a PIIX4 ACPI at 00:01.3.
> 
> Yeah, this PIIX4 ACPI device is what we consider a "Frankenstein" device here on the list. Indeed, my PIIX consolidation series aims to eventually replace the remaining PIIX3 devices with PIIX4 ones to present a realistic environment to guests. The series you tested makes Xen work with PIIX4. With a couple more patches you might be able to opt into a realistic PIIX4 emulation in the future!

That's exactly what I want to hear as a future milestone, and thanks
for your work on this!

Kind regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 22:18:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 22:18:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471514.731384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDC5j-0005Nq-UY; Wed, 04 Jan 2023 22:18:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471514.731384; Wed, 04 Jan 2023 22:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDC5j-0005Nj-Ro; Wed, 04 Jan 2023 22:18:35 +0000
Received: by outflank-mailman (input) for mailman id 471514;
 Wed, 04 Jan 2023 22:18:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=G5yt=5B=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pDC5i-0005NX-8v
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 22:18:34 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7a785ba-8c7d-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 23:18:30 +0100 (CET)
Received: by mail-ej1-x62d.google.com with SMTP id ud5so86128864ejc.4
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 14:18:30 -0800 (PST)
Received: from [192.168.1.115] ([185.126.107.38])
 by smtp.gmail.com with ESMTPSA id
 o17-20020a1709062e9100b007bd9e683639sm15718720eji.130.2023.01.04.14.18.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 14:18:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7a785ba-8c7d-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=4ysHgPmEoM4C0MsKpxmajfVnShhx28Dsr3QVGstLB34=;
        b=VV0KG9555T9PoLAI8dAC21IQsnZ3lSR8DsLsgK/MtaWHYl/VxEiwDbH+RTDfrr5dQW
         WA4utGik6635RTmghS0a7WkHbgjPjuHiE+pO+hgiksDUq+n8V5BNHExlZHlUloYboqY7
         zrU8PN4gZjq2WnCGmIiIMR675pgNyuDdsxOet1Bt9y03U0vOuXBwptSClErZqrI/0bHR
         KFQQFiqETpzUcm2ka7wktHggBLgYZyP7Mvj21q6OfiyMwE7sHyIMp+Dz2BCx02mnBy6N
         5FNUZefTOV8iHE77uu4IjCU7RVsjQhh39eEJrTwmwNi2v/yN30VM+aOjm2Gsij5+8n96
         tXgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=4ysHgPmEoM4C0MsKpxmajfVnShhx28Dsr3QVGstLB34=;
        b=yJX0MjuGvD6UQ9jmX2V71pklPnvolzEK9g79Qx+aDlFgeK8Tz45oYxU0w5Mh9UY7tr
         E3Lc1FtU5fjgwAjxetEKNY3WGXpBwLgQmzs1I2IAGqHgQiCRrAqi4oqRAAK9T7zuW9Hv
         XfSaDKUrUjgQ+K1hUeq6rGBbq7XSVciW4wW5d7IuOKhd1/JFjshZvtYnQJOKR5CYW1Ke
         glf94T3QVXDFEJVIXKl4lj0yAJ39BidEsXxYxPy116B5hF6XxaLXGnEB86lhoHJ4nWt4
         mfu3oGoRyPnT09vtiZJJ2FcWMFXFFOAcBNwT0QCtuvNNUZepWBuPj14/Yd4RdnUWlnxj
         uXig==
X-Gm-Message-State: AFqh2kqFRfWKeQ7dm3rp1dhH08/UMGT8hVf2oyChg7fFjJ67e3jRpKJO
	Ui+Ph4s6gQAOp4H4sNzg+HibBg==
X-Google-Smtp-Source: AMrXdXviPfF39VzEoGdqBpllgHy43HnZ0JUR7GccE1WHtt6JagF8scMFZzI1nxTausiLoRgwOJZm3A==
X-Received: by 2002:a17:906:94f:b0:7c1:4f7c:947f with SMTP id j15-20020a170906094f00b007c14f7c947fmr44428107ejd.72.1672870709605;
        Wed, 04 Jan 2023 14:18:29 -0800 (PST)
Message-ID: <877abde0-2e76-7fde-0212-eb7ce1384ea6@linaro.org>
Date: Wed, 4 Jan 2023 23:18:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Chuck Zmudzinski <brchuckz@aol.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
 <be75758a-2547-d1ef-223e-157f3aa28b23@linaro.org>
 <92efe0f1-f22b-47bc-f27d-2f31cb3621ea@aol.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <92efe0f1-f22b-47bc-f27d-2f31cb3621ea@aol.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/1/23 20:29, Chuck Zmudzinski wrote:
> On 1/4/23 1:48 PM, Philippe Mathieu-Daudé wrote:

>> Here TYPE_PIIX3_DEVICE stands for "the PCI function, part of the PIIX
>> south bridge chipset, which exposes a PCI-to-ISA bridge". A better
>> name could be TYPE_PIIX3_ISA_PCI_DEVICE. Unfortunately this
>> device is named "PIIX3" with no indication of the ISA bridge.
> 
> 
> Thanks, you are right, I see the PIIX3 device still exists after
> this patch set is applied.
> 
> chuckz@debian:~/sources-sid/qemu/qemu-7.50+dfsg/hw/i386$ grep -r PIIX3 *
> pc_piix.c:        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> 
> I also understand there is the PCI-to-ISA bridge at 00:01.0 on the PCI bus:
> 
> chuckz@debian:~$ lspci
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)

All these entries ('PCI functions') ...:

> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
> 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)

... are part of the same *device*: the PIIX south bridge.

This device is enumerated as device #1 on PCI bus #0.
It currently exposes 4 functions: ISA/IDE/USB/ACPI.

> 00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
> 00:03.0 VGA compatible controller: Device 1234:1111 (rev 02)
> 
> I also see with this patch, there is a bridge that is a PIIX4 ACPI at 00:01.3.
> I get the exact same output from lspci without the patch series, so that gives
> me confidence it is working as designed.

Historically, the PIIX3 and PIIX4 QEMU models were written by
different people with different goals.

- PIIX3 comes from x86 machines and is important for the KVM/Xen
   accelerators
- PIIX4 was developed by hobbyists for MIPS machines

PIIX4 added the ACPI function, which proved helpful for x86 machines.

OSes such as Linux don't consider the PIIX south bridge as a whole
chipset, and instead enumerate each PCI function individually. So it
was possible to add the PIIX4 ACPI function to a PIIX3... a config that
doesn't exist on real hardware :/
While QEMU aims at modeling real HW, this config is still very useful
for KVM/Xen, so this Frankenstein config is accepted / maintained.

Bernhard is doing incredible work merging the PIIX3/PIIX4 differences
into a more maintainable model :)

Regards,

Phil.


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 22:35:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 22:35:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471522.731396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDCMQ-0007og-Gv; Wed, 04 Jan 2023 22:35:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471522.731396; Wed, 04 Jan 2023 22:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDCMQ-0007oZ-Do; Wed, 04 Jan 2023 22:35:50 +0000
Received: by outflank-mailman (input) for mailman id 471522;
 Wed, 04 Jan 2023 22:35:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=G5yt=5B=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pDCMP-0007oT-69
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 22:35:49 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 219598ff-8c80-11ed-b8d0-410ff93cb8f0;
 Wed, 04 Jan 2023 23:35:46 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id u9so86272002ejo.0
 for <xen-devel@lists.xenproject.org>; Wed, 04 Jan 2023 14:35:46 -0800 (PST)
Received: from [192.168.1.115] ([185.126.107.38])
 by smtp.gmail.com with ESMTPSA id
 q19-20020a17090676d300b008072c925e4csm15659503ejn.21.2023.01.04.14.35.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Jan 2023 14:35:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 219598ff-8c80-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=9D9ufjFO5ErkH2AbAYOcVhRTOeT61tLIfkXN1tOblOs=;
        b=qv7p6hbUZqhwy3iQDuhw5Zh/OaU+jdEqOBkCsMH4ACc+FvjAC91j7cExF3wcIjxTj/
         P5Af9RwADSEA4/2xj9jtn/bp4tinOssr7+eXzK5rb5W23ksxutq2KGWCOV8Vx0xUBEIj
         3SDQr6m6fnxHqGsFHN4hmyUPzpeyrvMBIGoy8RJkUKrdu3Dlse6HMVzuWofLbywDG8w7
         Yh0M2kZfJ6Hbhlrvq9jKxqEIEnU2CxVdRasynMZS6octhZaF3M72EbUgEIdGnHxJ5c16
         Zgdizvrjpo+H4Q9780RNeX85HvoeAzJ5ShUyofin14VywCadZG8+7SGyzKC6eUot9hS0
         mg6w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=9D9ufjFO5ErkH2AbAYOcVhRTOeT61tLIfkXN1tOblOs=;
        b=QhR2R8+ZYlkN2bhd4y7JoLRNbAnp/A+oYxJP6oo75shvul7BlwYL2GWUd1pfpX9Fxi
         DPfIBa/hiBev9agcpRVfJBepHSyYUzGycD/XBEjtyUxOBPZGEi6QVZSRYDhl1CH688Mm
         xsTWlTraHRdLqSXA7Uv9kYQ61MOWOnG6FHGZgTuf2G4PWSqtrsSxYbNZFCnxoqGl1fso
         gZlks54ekuejTuk4oaO+pNdkDAH/fHchGLKcacpwrqx3QG2spKh2EuOI14jQR+ku+NtG
         e39yhoSAYT39jFsHQ2v0Iww04UcxdX/Q1YkleB9s6/tQKpn0HDzmyEsKZuWhhwWGWzFx
         6CWA==
X-Gm-Message-State: AFqh2kpSRM3v/xXTRMgrEUEpEO4Re4QVN6u6iAVchg3CWrV6WznTHAbp
	RwJnwBvPW3+6h5JhcAkrefqxOg==
X-Google-Smtp-Source: AMrXdXsQ3IhtNb5cG1wk60QS7wd3RDWh6fd1SXn/KbYkoE54jRIhMPY1SXG2MimpwT8M9GQLVoAtyg==
X-Received: by 2002:a17:907:86a6:b0:7c0:fd1a:79f0 with SMTP id qa38-20020a17090786a600b007c0fd1a79f0mr59371406ejc.21.1672871746219;
        Wed, 04 Jan 2023 14:35:46 -0800 (PST)
Message-ID: <405dc396-7b7e-842a-2b94-6b26df1aa564@linaro.org>
Date: Wed, 4 Jan 2023 23:35:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Chuck Zmudzinski <brchuckz@aol.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <30ed41ab-f7c9-15fb-8f4b-b2742b1d4188@aol.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <30ed41ab-f7c9-15fb-8f4b-b2742b1d4188@aol.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/1/23 17:42, Chuck Zmudzinski wrote:
> On 1/4/23 9:44 AM, Bernhard Beschow wrote:
>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>
>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>> ---
>>   hw/i386/pc_piix.c             |  4 +---
>>   hw/isa/piix.c                 | 20 --------------------
>>   include/hw/southbridge/piix.h |  1 -
>>   3 files changed, 1 insertion(+), 24 deletions(-)
>>
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index 5738d9cdca..6b8de3d59d 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>       if (pcmc->pci_enabled) {
>>           DeviceState *dev;
>>           PCIDevice *pci_dev;
>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>> -                                         : TYPE_PIIX3_DEVICE;
>>           int i;
>>   
>>           pci_bus = i440fx_init(pci_type,
>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>                                          : pci_slot_get_pirq);
>>           pcms->bus = pci_bus;
>>   
>> -        pci_dev = pci_new_multifunction(-1, true, type);
>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>           object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>                                    machine_usb(machine), &error_abort);
>>           object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>> index 98e9b12661..e4587352c9 100644
>> --- a/hw/isa/piix.c
>> +++ b/hw/isa/piix.c
>> @@ -33,7 +33,6 @@
>>   #include "hw/qdev-properties.h"
>>   #include "hw/ide/piix.h"
>>   #include "hw/isa/isa.h"
>> -#include "hw/xen/xen.h"
>>   #include "sysemu/runstate.h"
>>   #include "migration/vmstate.h"
>>   #include "hw/acpi/acpi_aml_interface.h"
>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>       .class_init    = piix3_class_init,
>>   };
>>   
>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>> -{
>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>> -
>> -    k->realize = piix3_realize;
>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>> -    dc->vmsd = &vmstate_piix3;
>> -}
>> -
>> -static const TypeInfo piix3_xen_info = {
>> -    .name          = TYPE_PIIX3_XEN_DEVICE,
>> -    .parent        = TYPE_PIIX_PCI_DEVICE,
>> -    .instance_init = piix3_init,
>> -    .class_init    = piix3_xen_class_init,
>> -};
>> -
>>   static void piix4_realize(PCIDevice *dev, Error **errp)
>>   {
>>       ERRP_GUARD();
>> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
>>   {
>>       type_register_static(&piix_pci_type_info);
>>       type_register_static(&piix3_info);
>> -    type_register_static(&piix3_xen_info);
>>       type_register_static(&piix4_info);
>>   }
>>   
>> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
>> index 65ad8569da..b1fc94a742 100644
>> --- a/include/hw/southbridge/piix.h
>> +++ b/include/hw/southbridge/piix.h
>> @@ -77,7 +77,6 @@ struct PIIXState {
>>   OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
>>   
>>   #define TYPE_PIIX3_DEVICE "PIIX3"
>> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
>>   #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
>>   
>>   #endif
> 
> 
> This fixes the regression with the emulated USB tablet device that I reported in v1 here:
> 
> https://lore.kernel.org/qemu-devel/aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com/
> 
> I tested this patch again with all the prerequisites and now with v2 there are no regressions.
> 
> Tested-by: Chuck Zmudzinski <brchuckz@aol.com>

(IIUC Chuck meant to send this tag to the cover letter)



From xen-devel-bounces@lists.xenproject.org Wed Jan 04 23:42:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 23:42:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471529.731407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDOI-0006LN-Co; Wed, 04 Jan 2023 23:41:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471529.731407; Wed, 04 Jan 2023 23:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDOI-0006LG-A4; Wed, 04 Jan 2023 23:41:50 +0000
Received: by outflank-mailman (input) for mailman id 471529;
 Wed, 04 Jan 2023 23:41:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pDDOG-0006LA-LP
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 23:41:48 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5962f51e-8c89-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 00:41:46 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 21156B81916;
 Wed,  4 Jan 2023 23:41:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B2924C433F0;
 Wed,  4 Jan 2023 23:41:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5962f51e-8c89-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672875703;
	bh=Pn1GRpPj68jTuNTIPUm9liY//bIbQe9ac+YPKX51frY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=T87xF5b90qc4fUXNB01GaoNNftjiBJ3x9vHzXdC4jCkZfY/k7PJO77JC7V/Umjvpl
	 dm+JFEQJaCu9bKiQIMw9p0ZZo/+XUZ69O9Ij2vfnSkXMfw+z6TIQIf6xY9C5dH+ifa
	 lLn/h5uHFR5Xgu1GgNDxiYFukww0nFVOHJ4Jir4/pTpApIUCOTjW2nL2r1x8ihTMhU
	 5kZQAVsBYppF1PXUtd73QgYMQfyETCzean+EFaKuc1nYOibbod5xSAwTx5zX0ASg2c
	 1htKIFymfFYeMwC7BtsPfvuSsk6D03fnJw12Eu0OnUMDWkKdUrCqwpIohYb9HGyYkN
	 plqc0LDzl48iQ==
Date: Wed, 4 Jan 2023 15:41:40 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Print memory size in decimal in
 construct_domU
In-Reply-To: <20230102144904.17619-1-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301041541260.4079@ubuntu-linux-20-04-desktop>
References: <20230102144904.17619-1-michal.orzel@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 2 Jan 2023, Michal Orzel wrote:
> Printing domain's memory size in hex without even prepending it
> with 0x is not very useful and can be misleading. Switch to decimal
> notation.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/domain_build.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 829cea8de84f..7e204372368c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
>      if ( rc != 0 )
>          return rc;
>  
> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", d->max_vcpus, mem);
>  
>      kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
>  
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 23:46:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 23:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471536.731418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDSl-0006yJ-U9; Wed, 04 Jan 2023 23:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471536.731418; Wed, 04 Jan 2023 23:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDSl-0006yC-R7; Wed, 04 Jan 2023 23:46:27 +0000
Received: by outflank-mailman (input) for mailman id 471536;
 Wed, 04 Jan 2023 23:46:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDDSk-0006y6-KQ
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 23:46:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDDSi-0004aK-ND; Wed, 04 Jan 2023 23:46:24 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDDSi-0007lI-9M; Wed, 04 Jan 2023 23:46:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=gmGTuusN/Cw77oybOwaLhW7XmTlkAMPzqinnHBSTdxg=; b=tYnmNNAk9EcGInkeuV7IWS/4yf
	xgx4FW5Jy8IZm7kO7iBuuv15RxiMUO+aR+2xzncu8Cche+m0nXKjhcQBlcoxWoRRrQkGB6EEV3247
	EgNUa72ObOt25KWLqzjnIjEGIw/9TGYuHmSvrhZKyVGn29oR6VbrrH/0hZCvIIlgUsI0=;
Message-ID: <2a343532-d324-e1ac-418c-5b34967d8de2@xen.org>
Date: Wed, 4 Jan 2023 23:46:22 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.0
Subject: Re: [PATCH] xen/arm: Print memory size in decimal in construct_domU
To: Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <michal.orzel@amd.com>
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230102144904.17619-1-michal.orzel@amd.com>
 <alpine.DEB.2.22.394.2301041541260.4079@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301041541260.4079@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 04/01/2023 23:41, Stefano Stabellini wrote:
> On Mon, 2 Jan 2023, Michal Orzel wrote:
>> Printing domain's memory size in hex without even prepending it
>> with 0x is not very useful and can be misleading. Switch to decimal
>> notation.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Was this intended for v2 rather than v1?

Cheers,

> 
> 
>> ---
>>   xen/arch/arm/domain_build.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 829cea8de84f..7e204372368c 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
>>       if ( rc != 0 )
>>           return rc;
>>   
>> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
>> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", d->max_vcpus, mem);
>>   
>>       kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
>>   
>> -- 
>> 2.25.1
>>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 23:47:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 23:47:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471544.731429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDTX-0007VJ-5m; Wed, 04 Jan 2023 23:47:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471544.731429; Wed, 04 Jan 2023 23:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDTX-0007VC-32; Wed, 04 Jan 2023 23:47:15 +0000
Received: by outflank-mailman (input) for mailman id 471544;
 Wed, 04 Jan 2023 23:47:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pDDTV-0007SD-Lv
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 23:47:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1b2d6673-8c8a-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 00:47:11 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4EF6A6187C;
 Wed,  4 Jan 2023 23:47:10 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9FBF5C433D2;
 Wed,  4 Jan 2023 23:47:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b2d6673-8c8a-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672876029;
	bh=AZmbvsghGNSym+NrkscJbs9ahmzLtFbFNacMjbGYpbo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MOaeIY2A/RZ7PfRSiRQV+AgR6jgq7o7btLpGBsmnc/L7qfU/t3Amws5Vryaxdg15l
	 cjgKdLMT9T1HLTr1UdmdX8LshhkFY49kSRUO1zIRcsh1S93223IqS+YxsMIkODOxOo
	 CuBz0UknlgLr38DRhFIeoz3HrJm8nG/DXMOFbIXbPmqGbH3qLfwutEtTtn1pVfvKOC
	 +awgPZLeVQD2Q775dnbPoEIiwa3rd9hvTWLSDKtuMnwHCzQPPY/6SXT3bcKi/d/pHU
	 aKKHnTrUtCPKevWKNr5dn8Pc891tlOmpxl/7SG9mkZZeOn6b1aRWgXudMgetsfqqmS
	 FEqYEkBl5IyZg==
Date: Wed, 4 Jan 2023 15:47:06 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
In-Reply-To: <20230103102519.26224-1-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301041546230.4079@ubuntu-linux-20-04-desktop>
References: <20230103102519.26224-1-michal.orzel@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 3 Jan 2023, Michal Orzel wrote:
> Printing memory size in hex without 0x prefix can be misleading, so
> add it. Also, take the opportunity to adhere to 80 chars line length
> limit by moving the printk arguments to the next line.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> Changes in v2:
>  - was: "Print memory size in decimal in construct_domU"
>  - stick to hex but add a 0x prefix
>  - adhere to 80 chars line length limit

Honestly I prefer decimal, but hex is also fine. I'll let Julien pick the
version of this patch that he prefers.


> ---
>  xen/arch/arm/domain_build.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 829cea8de84f..f35f4d24569c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3774,7 +3774,8 @@ static int __init construct_domU(struct domain *d,
>      if ( rc != 0 )
>          return rc;
>  
> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
> +    printk("*** LOADING DOMU cpus=%u memory=%#"PRIx64"KB ***\n",
> +           d->max_vcpus, mem);
>  
>      kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
>  
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 23:48:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 23:48:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471552.731439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDUP-000858-FJ; Wed, 04 Jan 2023 23:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471552.731439; Wed, 04 Jan 2023 23:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDUP-000851-Ci; Wed, 04 Jan 2023 23:48:09 +0000
Received: by outflank-mailman (input) for mailman id 471552;
 Wed, 04 Jan 2023 23:48:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pDDUO-0007SD-Dt
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 23:48:08 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3c679f56-8c8a-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 00:48:06 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 2AE95B81717;
 Wed,  4 Jan 2023 23:48:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D5681C433D2;
 Wed,  4 Jan 2023 23:48:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c679f56-8c8a-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672876084;
	bh=fjUjCM6/dbAhaWVyakEsxbpi0WIuGmgOsRwOzF62oD0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=oXOAFC6/JyOvB7WzvJ+IAdutVCqCBX2EcQZJeEm6iWJociOiZ/IJ1rXkWfckuc1iW
	 bwX/po1vMsNVr+VUzomE7vm5n2VOoM5oYVBjMssb30lozxzFb+H5mpekIqzQU/VMRm
	 K+crZSzrqKZ4O/aLkSSkneVYxmw85WW9tDn2BGPoh2GIioKPNEA7bbuJmboOpvzeo/
	 9ZqbIuKKzqqJg4QMOHn/gSJ22Iq4LRVXyYdZ3+cOyWwXQzol8frNanL58ynGQrcn93
	 0FU75BeZ7pBPE7FRqlPo/aP37PrdXY4+OPwpD3WVmNktV8kgRWZNSp4CkqWBlFXcNu
	 5a4Kw/PfnlbGQ==
Date: Wed, 4 Jan 2023 15:48:02 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Print memory size in decimal in
 construct_domU
In-Reply-To: <2a343532-d324-e1ac-418c-5b34967d8de2@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301041547290.4079@ubuntu-linux-20-04-desktop>
References: <20230102144904.17619-1-michal.orzel@amd.com> <alpine.DEB.2.22.394.2301041541260.4079@ubuntu-linux-20-04-desktop> <2a343532-d324-e1ac-418c-5b34967d8de2@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Jan 2023, Julien Grall wrote:
> Hi Stefano,
> 
> On 04/01/2023 23:41, Stefano Stabellini wrote:
> > On Mon, 2 Jan 2023, Michal Orzel wrote:
> > > Printing domain's memory size in hex without even prepending it
> > > with 0x is not very useful and can be misleading. Switch to decimal
> > > notation.
> > > 
> > > Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> > 
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Was this intended for v2 rather than v1?

I didn't notice v2 was already out. I did test v2 and made sure it is
working, so go ahead and pick whichever version you prefer.


> > > ---
> > >   xen/arch/arm/domain_build.c | 2 +-
> > >   1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > index 829cea8de84f..7e204372368c 100644
> > > --- a/xen/arch/arm/domain_build.c
> > > +++ b/xen/arch/arm/domain_build.c
> > > @@ -3774,7 +3774,7 @@ static int __init construct_domU(struct domain *d,
> > >       if ( rc != 0 )
> > >           return rc;
> > > -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
> > > +    printk("*** LOADING DOMU cpus=%u memory=%"PRIu64"KB ***\n", d->max_vcpus, mem);
> > > 
> > >      kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
> > > 
> > > -- 
> > > 2.25.1
> > > 
> 
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 04 23:56:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Jan 2023 23:56:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471559.731450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDcc-0001AS-AA; Wed, 04 Jan 2023 23:56:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471559.731450; Wed, 04 Jan 2023 23:56:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDDcc-0001AL-7W; Wed, 04 Jan 2023 23:56:38 +0000
Received: by outflank-mailman (input) for mailman id 471559;
 Wed, 04 Jan 2023 23:56:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMRm=5B=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pDDca-0001AF-Gx
 for xen-devel@lists.xenproject.org; Wed, 04 Jan 2023 23:56:36 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6ae9959b-8c8b-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 00:56:34 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9B8D860B9F;
 Wed,  4 Jan 2023 23:56:33 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A2207C433EF;
 Wed,  4 Jan 2023 23:56:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ae9959b-8c8b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1672876593;
	bh=56exPYTrVJWailpagQVD//cB5k6Ua23keldYMrZSHMY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Su5w5hidKmUJHpqaiIg43YfZ5DqaZRZnsFtd8qaBvscpk7h+GUD9IH2B31Dw/GxUb
	 aqBFyUghI89tZyLqOB2ZBSzLraI6AX6MQWt5pB6gqrKxJ4Wp5lvS6cUhG+h7CILDpO
	 JgkMSG7YmFLW7eOLuGM9Lb+TRoDf+l3prPj2Cf+ctNnzr1I6ISw1cHYQjrui8L1seM
	 sZnNCKPxMTxfvFZJcHM+Fl9bjr/rEBJa7ZKhtBzEC/cU8wMVj6PlxQmK1XBx7wjXZ8
	 uBO9v9e68rRru1pft3Dk6Anx0AfFUmLnnevjVewxZfjEyhNIKOYhBWigFrVez2OTB3
	 JWL9EUVUH4uJQ==
Date: Wed, 4 Jan 2023 15:56:29 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Ayan Kumar Halder <ayankuma@amd.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
    Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
Subject: Re: [XEN v1 2/9] xen/arm: Define translate_dt_address_size() for
 the translation between u64 and paddr_t
In-Reply-To: <5898da03-9947-1540-f424-d76c715619f4@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301041552590.4079@ubuntu-linux-20-04-desktop>
References: <20221215193245.48314-1-ayan.kumar.halder@amd.com> <20221215193245.48314-3-ayan.kumar.halder@amd.com> <b58c6548-9e70-0ed9-07a9-e35084620c36@xen.org> <alpine.DEB.2.22.394.2212161643400.315094@ubuntu-linux-20-04-desktop> <74786a57-d99a-6cfe-f475-df11c0d93afa@xen.org>
 <alpine.DEB.2.22.394.2212221520020.4079@ubuntu-linux-20-04-desktop> <5bc9435e-aee9-006c-a35a-ee9c7f946f91@amd.com> <5898da03-9947-1540-f424-d76c715619f4@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1427513259-1672876592=:4079"


--8323329-1427513259-1672876592=:4079
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 23 Dec 2022, Julien Grall wrote:
> On 23/12/2022 10:01, Ayan Kumar Halder wrote:
> > Hi Julien/Stefano,
> > 
> > I want to make sure I understand correctly.
> > 
> > On 22/12/2022 23:20, Stefano Stabellini wrote:
> > > On Sat, 17 Dec 2022, Julien Grall wrote:
> > > > On 17/12/2022 00:46, Stefano Stabellini wrote:
> > > > > On Fri, 16 Dec 2022, Julien Grall wrote:
> > > > > > Hi Ayan,
> > > > > > 
> > > > > > On 15/12/2022 19:32, Ayan Kumar Halder wrote:
> > > > > > > paddr_t may be u64 or u32 depending of the type of architecture.
> > > > > > > Thus, while translating between u64 and paddr_t, one should check
> > > > > > > that
> > > > > > > the
> > > > > > > truncated bits are 0. If not, then raise an appropriate error.
> > > > > > I am not entirely convinced this extra helper is worth it. If the
> > > > > > user
> > > > > > can't
> > > > > > provide 32-bit address in the DT, then there are also a lot of other
> > > > > > part
> > > > > > that
> > > > > > can go wrong.
> > > > > > 
> > > > > > Bertrand, Stefano, what do you think?
> > > > > In general, it is not Xen's job to protect itself against bugs in
> > > > > device
> > > > > tree. However, if Xen spots a problem in DT and prints a helpful
> > > > > message
> > > > > that is better than just crashing because it gives a hint to the
> > > > > developer about what the problem is.
> > > > I agree with the principle. In practice this needs to be weight out with
> > > > the
> > > > long-term maintenance.
> > > > 
> > > > > In this case, I think a BUG_ON would be sufficient.
> > > > BUG_ON() is the same as panic(). They both should be used only when
> > > > there are
> > > > no way to recover (see CODING_STYLE).
> > > > 
> > > > If we parse the device-tree at boot, then BUG_ON() is ok. However, if we
> > > > need
> > > > to parse it after boot (e.g. the DT overlay from Vikram), then we should
> > > > definitely not call BUG_ON() for checking DT input.
> > > yeah, I wasn't thinking of that series
> > > 
> > > 
> > > > The correct way is what Ayan did: return an error and let the
> > > > caller decide what to do.
> > > > 
> > > > My concern with his approach is the extra amount of code in each caller
> > > > to
> > > > check it (even with a new helper).
> > > > 
> > > > I am not fully convinced that checking the addresses in the DT is that
> > > > useful.
> > > I am also happy not to do it to be honest
> > 
> > So are you suggesting that we do the truncation, but do not check whether
> > any non-zero bits have been truncated?
> > 
> > As an example, currently with this patch :-
> > 
> > -        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase,
> > &gbase);
> > -        psize = dt_read_number(cells, size_cells);
> > +        device_tree_get_reg(&cells, addr_cells, addr_cells, &dt_pbase,
> > &dt_gbase);
> > +        ret = translate_dt_address_size(&dt_pbase, &dt_gbase, &pbase,
> > &gbase);
> > +        if ( ret )
> > +            return ret;
> > +        dt_psize = dt_read_number(cells, size_cells);
> > +        ret = translate_dt_address_size(NULL, &dt_psize, NULL, &psize);
> > 
> > 
> > With your proposed change, it should be
> > 
> > -        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase,
> > &gbase);
> > -        psize = dt_read_number(cells, size_cells);
> > +        device_tree_get_reg(&cells, addr_cells, addr_cells, &dt_pbase,
> > &dt_gbase);
> > +        dt_psize = dt_read_number(cells, size_cells);
> > +        pbase = (paddr_t) dt_pbase;
> > +        gbase = (paddr_t) dt_gbase;
> > +        psize = (paddr_t) dt_psize;
> 
> -ETOOMANY casts and lines (even more so if this is expected to be
> duplicated). But the last one seems unnecessary, as the only reason you need
> separate variables for pbase and gbase is that the functions take a reference
> rather than returning the value.
> 
> 
> > Because we still need some way to convert a u64 DT address/size to a
> > paddr_t (u64/u32) address/size. 
> How about following the approach I suggested in my previous e-mail to Stefano:
>   - Convert device_tree_get_reg() to use paddr_t.

I think this is OK


>   - Introduce dt_device_get_address_checked() (Assuming you want to still add
> the check)

Let's just skip the check


> We may need a helper to wrap around dt_read_number() but I don't have a good
> idea for a name. There are only a couple of uses, so I think it is fine to
> open-code it. As for the cast, I am in two minds: on one hand, I don't like
> unnecessary explicit casts, but on the other, it serves as documentation.
>
> Stefano, any opinions?

I would not add a wrapper for dt_read_number() and would open-code the
cast instead; like you said, something non-trivial is happening and the
cast serves as documentation:

  psize = (paddr_t) dt_read_number(cells, size_cells);

--8323329-1427513259-1672876592=:4079--


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 01:57:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 01:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471567.731462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDFVY-00076D-AN; Thu, 05 Jan 2023 01:57:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471567.731462; Thu, 05 Jan 2023 01:57:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDFVY-000765-6H; Thu, 05 Jan 2023 01:57:28 +0000
Received: by outflank-mailman (input) for mailman id 471567;
 Thu, 05 Jan 2023 01:57:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cz8e=5C=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDFVW-00075z-5Z
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 01:57:26 +0000
Received: from sonic313-21.consmr.mail.gq1.yahoo.com
 (sonic313-21.consmr.mail.gq1.yahoo.com [98.137.65.84])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4a147cf7-8c9c-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 02:57:23 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic313.consmr.mail.gq1.yahoo.com with HTTP; Thu, 5 Jan 2023 01:57:19 +0000
Received: by hermes--production-ne1-7b69748c4d-pm9xv (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 5ad7388ff1f2f81a903f2d20066a3dfd; 
 Thu, 05 Jan 2023 01:57:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a147cf7-8c9c-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <1353bd20-0edb-ed66-5c1f-9d117a76c372@aol.com>
Date: Wed, 4 Jan 2023 20:57:14 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <30ed41ab-f7c9-15fb-8f4b-b2742b1d4188@aol.com>
 <405dc396-7b7e-842a-2b94-6b26df1aa564@linaro.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <405dc396-7b7e-842a-2b94-6b26df1aa564@linaro.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 1/4/23 5:35 PM, Philippe Mathieu-Daudé wrote:
> On 4/1/23 17:42, Chuck Zmudzinski wrote:
>> On 1/4/23 9:44 AM, Bernhard Beschow wrote:
>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>
>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>> ---
>>>   hw/i386/pc_piix.c             |  4 +---
>>>   hw/isa/piix.c                 | 20 --------------------
>>>   include/hw/southbridge/piix.h |  1 -
>>>   3 files changed, 1 insertion(+), 24 deletions(-)
>>>
>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>> index 5738d9cdca..6b8de3d59d 100644
>>> --- a/hw/i386/pc_piix.c
>>> +++ b/hw/i386/pc_piix.c
>>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>>       if (pcmc->pci_enabled) {
>>>           DeviceState *dev;
>>>           PCIDevice *pci_dev;
>>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>>> -                                         : TYPE_PIIX3_DEVICE;
>>>           int i;
>>>   
>>>           pci_bus = i440fx_init(pci_type,
>>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>>                                          : pci_slot_get_pirq);
>>>           pcms->bus = pci_bus;
>>>   
>>> -        pci_dev = pci_new_multifunction(-1, true, type);
>>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>>           object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>>                                    machine_usb(machine), &error_abort);
>>>           object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>>> index 98e9b12661..e4587352c9 100644
>>> --- a/hw/isa/piix.c
>>> +++ b/hw/isa/piix.c
>>> @@ -33,7 +33,6 @@
>>>   #include "hw/qdev-properties.h"
>>>   #include "hw/ide/piix.h"
>>>   #include "hw/isa/isa.h"
>>> -#include "hw/xen/xen.h"
>>>   #include "sysemu/runstate.h"
>>>   #include "migration/vmstate.h"
>>>   #include "hw/acpi/acpi_aml_interface.h"
>>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>>       .class_init    = piix3_class_init,
>>>   };
>>>   
>>> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>> -{
>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>> -
>>> -    k->realize = piix3_realize;
>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>> -    dc->vmsd = &vmstate_piix3;
>>> -}
>>> -
>>> -static const TypeInfo piix3_xen_info = {
>>> -    .name          = TYPE_PIIX3_XEN_DEVICE,
>>> -    .parent        = TYPE_PIIX_PCI_DEVICE,
>>> -    .instance_init = piix3_init,
>>> -    .class_init    = piix3_xen_class_init,
>>> -};
>>> -
>>>   static void piix4_realize(PCIDevice *dev, Error **errp)
>>>   {
>>>       ERRP_GUARD();
>>> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
>>>   {
>>>       type_register_static(&piix_pci_type_info);
>>>       type_register_static(&piix3_info);
>>> -    type_register_static(&piix3_xen_info);
>>>       type_register_static(&piix4_info);
>>>   }
>>>   
>>> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
>>> index 65ad8569da..b1fc94a742 100644
>>> --- a/include/hw/southbridge/piix.h
>>> +++ b/include/hw/southbridge/piix.h
>>> @@ -77,7 +77,6 @@ struct PIIXState {
>>>   OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
>>>   
>>>   #define TYPE_PIIX3_DEVICE "PIIX3"
>>> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
>>>   #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
>>>   
>>>   #endif
>> 
>> 
>> This fixes the regression with the emulated usb tablet device that I reported in v1 here:
>> 
>> https://lore.kernel.org/qemu-devel/aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com/
>> 
>> I tested this patch again with all the prerequisites and now with v2 there are no regressions.
>> 
>> Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
> 
> (IIUC Chuck meant to send this tag to the cover letter)
> 

Yes, I tested the whole patch series, not just this individual patch.
I tagged this one because it is the last patch in the series.


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 02:14:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 02:14:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471574.731473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDFlu-0001Oq-N8; Thu, 05 Jan 2023 02:14:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471574.731473; Thu, 05 Jan 2023 02:14:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDFlu-0001Oj-KQ; Thu, 05 Jan 2023 02:14:22 +0000
Received: by outflank-mailman (input) for mailman id 471574;
 Thu, 05 Jan 2023 02:14:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDFlt-0001OZ-Pp; Thu, 05 Jan 2023 02:14:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDFlt-0002RZ-MU; Thu, 05 Jan 2023 02:14:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDFlt-0002Y6-83; Thu, 05 Jan 2023 02:14:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDFlt-0002X1-7Q; Thu, 05 Jan 2023 02:14:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175569-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175569: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-migrupgrade:xen-install/src_host:fail:regression
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c1df06afe578f698ebe91a1e3817463b9d165123
X-Osstest-Versions-That:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 02:14:21 +0000

flight 175569 xen-unstable real [real]
flight 175572 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175569/
http://logs.test-lab.xenproject.org/osstest/logs/175572/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-migrupgrade  10 xen-install/src_host     fail REGR. vs. 175562
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 175562

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail pass in 175572-retest
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail pass in 175572-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 175520
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175562
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175562
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175562
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175562
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175562
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175562
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175562
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175562
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175562
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175562
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175562
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175562
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c1df06afe578f698ebe91a1e3817463b9d165123
baseline version:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8

Last test of basis   175562  2023-01-04 01:55:19 Z    1 days
Testing same since   175569  2023-01-04 16:38:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c1df06afe578f698ebe91a1e3817463b9d165123
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Dec 29 22:19:40 2022 +0000

    CI: Simplify the MUSL check
    
    There's no need to do ad-hoc string parsing.  Use grep -q instead.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
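
    [Editor's note: the change above can be sketched as below. This is an
    illustrative comparison of the two styles only; the variable name and the
    probe output are made up, not taken from the actual CI script.]

```shell
# Illustrative only: $ldd_output stands in for the real probe's output.
ldd_output="musl libc (x86_64)"

# Ad-hoc string parsing, the style being replaced:
case "$ldd_output" in
    *musl*) old_result=yes ;;
    *)      old_result=no ;;
esac

# grep -q, the style the commit switches to: the exit status carries
# the answer and nothing is printed.
if printf '%s\n' "$ldd_output" | grep -q musl; then
    new_result=yes
else
    new_result=no
fi

echo "old=$old_result new=$new_result"
```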

commit df57a2c8da7acb14f3feac823f2fcbeef56899b2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Dec 29 21:46:50 2022 +0000

    CI: Fix build script when CROSS_COMPILE is in use
    
    Some testcases use a cross compiler.  Presently it's only arm32 and due to
    previous cleanup the only thing which is now wrong is printing the compiler
    version at the start of day.
    
    Construct $cc to match what `make` will eventually choose given CROSS_COMPILE,
    taking care not to modify $CC.  Use $cc throughout the rest of the script.
    
    Also correct the compiler detection logic.  Plain "gcc" was wrong, and
    "clang"* was a bodge highlighting the issue, but neither survives the
    CROSS_COMPILE correction.  Instead, construct cc_is_{gcc,clang} booleans
    like we do elsewhere in the build system, by querying the --version text
    for gcc or clang.
    
    While making this change, adjust cc_ver to be calculated once at the same time
    as cc_is_* are calculated.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
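
    A hedged illustration of the approach: the names cc, cc_ver and
    cc_is_{gcc,clang} follow the commit message, but the classification
    logic shown here is an assumption, not the actual build script.  To
    keep the sketch runnable without a cross compiler installed, the
    version text is passed in as an argument rather than obtained by
    running "$cc" --version:

```shell
#!/bin/sh
# Sketch: build $cc from CROSS_COMPILE + CC (mirroring what make would
# eventually choose) without modifying $CC itself, then classify the
# compiler once from its --version text.  Illustrative only.
CC="${CC:-gcc}"
cc="${CROSS_COMPILE}${CC}"      # e.g. arm-linux-gnueabihf-gcc

classify() {
    cc_ver="$1"                 # would be: "$cc" --version | head -1
    cc_is_gcc=n
    cc_is_clang=n
    case "$cc_ver" in
        *clang*)     cc_is_clang=y ;;
        *gcc*|*GCC*) cc_is_gcc=y ;;
    esac
    echo "gcc=$cc_is_gcc clang=$cc_is_clang ver=$cc_ver"
}

classify "gcc (Debian 12.2.0-14) 12.2.0"
classify "Debian clang version 14.0.6"
```

    Checking clang first matters, because some clang version strings also
    mention gcc compatibility.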

commit d329ca6baf51b229f866af2542cfdc0d5f5a4c2c
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Dec 29 15:52:50 2022 +0000

    CI: Express HYPERVISOR_ONLY in build.yml
    
    Whether to build only Xen, or everything, is a property of the container,
    toolchain and/or testcase.  It is not a property of XEN_TARGET_ARCH.
    
    Capitalise HYPERVISOR_ONLY and have it set by all the
    debian-unstable-gcc-arm32-* testcases at the point that arm32 gets matched
    with a container that can only build Xen.
    
    To reduce the churn elsewhere, retain the RANDCONFIG implies HYPERVISOR_ONLY
    property.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
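
    The retained "RANDCONFIG implies HYPERVISOR_ONLY" property amounts to
    a one-line override; the following sketch is an assumption about how
    the build script might express it, not the actual code:

```shell
#!/bin/sh
# Hedged sketch: a randconfig build only makes sense for the hypervisor,
# so RANDCONFIG=y forces HYPERVISOR_ONLY=y regardless of what the
# testcase set.  Variable defaults are illustrative.
RANDCONFIG="${RANDCONFIG:-n}"
HYPERVISOR_ONLY="${HYPERVISOR_ONLY:-n}"

if [ "$RANDCONFIG" = "y" ]; then
    HYPERVISOR_ONLY=y
fi

echo "HYPERVISOR_ONLY=$HYPERVISOR_ONLY"
```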

commit 2eb750242101f0b9c17688a09825ba6befbab113
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Dec 29 20:05:33 2022 +0000

    CI: Only calculate ./configure args if needed
    
    This is purely code motion of the cfgargs construction, into the case where it
    is used.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit b67625568448480eaa432d9637431ff7b1f62aa0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Dec 29 20:01:52 2022 +0000

    CI: Remove guesswork about which artefacts to preserve
    
    Preserve the artefacts based on the `make` rune we actually ran, rather than
    guesswork about which rune we would have run based on other settings.
    
    Note that the ARM qemu smoke tests depend on finding binaries/xen even from
    full builds.  Also, the Jessie-32 containers build the tools but not Xen.
    
    This means the x86_32 builds now store relevant artefacts.  No change in other
    configurations.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 7b20009a812f26e74bdbde2ab96165376b3dad34
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Dec 29 15:39:13 2022 +0000

    CI: Drop automation/configs/
    
    Having 3 extra hypervisor builds on the end of a full build is deeply
    confusing to debug if one of them fails, because the .config file presented in
    the artefacts is not the one which caused a build failure.  Also, the log
    tends to be truncated in the UI.
    
    PV-only is tested as part of PV-Shim in a full build anyway, so doesn't need
    repeating.  HVM-only and neither appear frequently in randconfig, so drop all
    the logic here to simplify things.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 02:25:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 02:25:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471585.731484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDFwf-0002y6-TT; Thu, 05 Jan 2023 02:25:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471585.731484; Thu, 05 Jan 2023 02:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDFwf-0002xz-Pi; Thu, 05 Jan 2023 02:25:29 +0000
Received: by outflank-mailman (input) for mailman id 471585;
 Thu, 05 Jan 2023 02:25:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cz8e=5C=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDFwe-0002xt-HP
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 02:25:28 +0000
Received: from sonic315-55.consmr.mail.gq1.yahoo.com
 (sonic315-55.consmr.mail.gq1.yahoo.com [98.137.65.31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 35810b64-8ca0-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 03:25:25 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic315.consmr.mail.gq1.yahoo.com with HTTP; Thu, 5 Jan 2023 02:25:23 +0000
Received: by hermes--production-ne1-7b69748c4d-bmdl9 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID b445f58ecae7df5da904de2ba90d4713; 
 Thu, 05 Jan 2023 02:25:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35810b64-8ca0-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672885523; bh=9/D7me0E4of8lPWTP1WaFa8eMBIgzfn0nOyeytivp40=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=RTy8KAQq4Zw4/69W08n3xkrG3E08Z+y1eboEZBZRR9yKIz1FTqCFNyXWHafN9KSYzRofahHXPiBLQWYDhYo3EAO79pKAc+giesAILPO/E5eH0aVTOQJqNleOJADkxJUID5vwporLLHurAopOn9Y3PTjMo2Iq3iNoI5uAnD4OCdNcp5aQ0NSpDdjMeOQH6JcXPJzKVor9I2bhS5t1dydDhSVt4d9UImxyaEjyxcP57sgvgwrqicVHVl8L5qP8QzK7MnnqPLJVxTWey5k5ro6Rlncdhbe2EFzBRjAjKStyHH7ZB5Af9iQJ+NgD5zUiztVKdO2HjTRDq8eirqLSgeIYLA==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672885523; bh=bN+9ECMIMdaEvR4AWB7Sp8ghwy8CXF4zhl6GI6si8jq=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=PTm4dMsLtVuKTYjVRDQ8L8PZIRqFpGGYrPtfMgY85YL4VLzAjnsDyEdehEjJqs6JLlQEKfqNf8fmpF+NJ3Y4ooLGjTgL5XfeN9QjyEa/AaF8fW1dLJuBT3IJwgF/cfzTMOlb4CcJI+ym/vppY6nAYZTZZMmhY7LmQm/9nqwyb+ruKcv9FGcVXHWDsoDdHL6+oCtalYYRUIF0RlMiLqi2rJUXOzC8hWfcatsB7tlt3npiVeytAyTRWfdg9/f4k1dNpWaCGRkToOXTpAOcsyrqND/KrTLu4JTboEqN0RO5vvU/6MZslVZB9vcfsPN8QhbwucEt351+TmBLbXeTQHD0Ig==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <91a05e98-1b1b-94a2-248c-8fe7ff0b5f4e@aol.com>
Date: Wed, 4 Jan 2023 21:25:15 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <30ed41ab-f7c9-15fb-8f4b-b2742b1d4188@aol.com>
 <405dc396-7b7e-842a-2b94-6b26df1aa564@linaro.org>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <405dc396-7b7e-842a-2b94-6b26df1aa564@linaro.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4571

On 1/4/2023 5:35 PM, Philippe Mathieu-Daudé wrote:
> On 4/1/23 17:42, Chuck Zmudzinski wrote:
> > On 1/4/23 9:44 AM, Bernhard Beschow wrote:
> >> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
> >> TYPE_PIIX3_DEVICE. Remove this redundancy.
> >>
> >> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> >> ---
> >>   hw/i386/pc_piix.c             |  4 +---
> >>   hw/isa/piix.c                 | 20 --------------------
> >>   include/hw/southbridge/piix.h |  1 -
> >>   3 files changed, 1 insertion(+), 24 deletions(-)
> >>
> >> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> >> index 5738d9cdca..6b8de3d59d 100644
> >> --- a/hw/i386/pc_piix.c
> >> +++ b/hw/i386/pc_piix.c
> >> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
> >>       if (pcmc->pci_enabled) {
> >>           DeviceState *dev;
> >>           PCIDevice *pci_dev;
> >> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> >> -                                         : TYPE_PIIX3_DEVICE;
> >>           int i;
> >>   
> >>           pci_bus = i440fx_init(pci_type,
> >> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
> >>                                          : pci_slot_get_pirq);
> >>           pcms->bus = pci_bus;
> >>   
> >> -        pci_dev = pci_new_multifunction(-1, true, type);
> >> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> >>           object_property_set_bool(OBJECT(pci_dev), "has-usb",
> >>                                    machine_usb(machine), &error_abort);
> >>           object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> >> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
> >> index 98e9b12661..e4587352c9 100644
> >> --- a/hw/isa/piix.c
> >> +++ b/hw/isa/piix.c
> >> @@ -33,7 +33,6 @@
> >>   #include "hw/qdev-properties.h"
> >>   #include "hw/ide/piix.h"
> >>   #include "hw/isa/isa.h"
> >> -#include "hw/xen/xen.h"
> >>   #include "sysemu/runstate.h"
> >>   #include "migration/vmstate.h"
> >>   #include "hw/acpi/acpi_aml_interface.h"
> >> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
> >>       .class_init    = piix3_class_init,
> >>   };
> >>   
> >> -static void piix3_xen_class_init(ObjectClass *klass, void *data)
> >> -{
> >> -    DeviceClass *dc = DEVICE_CLASS(klass);
> >> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> >> -
> >> -    k->realize = piix3_realize;
> >> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
> >> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
> >> -    dc->vmsd = &vmstate_piix3;
> >> -}
> >> -
> >> -static const TypeInfo piix3_xen_info = {
> >> -    .name          = TYPE_PIIX3_XEN_DEVICE,
> >> -    .parent        = TYPE_PIIX_PCI_DEVICE,
> >> -    .instance_init = piix3_init,
> >> -    .class_init    = piix3_xen_class_init,
> >> -};
> >> -
> >>   static void piix4_realize(PCIDevice *dev, Error **errp)
> >>   {
> >>       ERRP_GUARD();
> >> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
> >>   {
> >>       type_register_static(&piix_pci_type_info);
> >>       type_register_static(&piix3_info);
> >> -    type_register_static(&piix3_xen_info);
> >>       type_register_static(&piix4_info);
> >>   }
> >>   
> >> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
> >> index 65ad8569da..b1fc94a742 100644
> >> --- a/include/hw/southbridge/piix.h
> >> +++ b/include/hw/southbridge/piix.h
> >> @@ -77,7 +77,6 @@ struct PIIXState {
> >>   OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
> >>   
> >>   #define TYPE_PIIX3_DEVICE "PIIX3"
> >> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
> >>   #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
> >>   
> >>   #endif
> > 
> > 
> > This fixes the regression with the emulated usb tablet device that I reported in v1 here:
> > 
> > https://lore.kernel.org/qemu-devel/aed4f2c1-83f7-163a-fb44-f284376668dc@aol.com/
> > 
> > I tested this patch again with all the prerequisites and now with v2 there are no regressions.
> > 
> > Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
>
> (IIUC Chuck meant to send this tag to the cover letter)
>

Is it customary to tag the cover letter instead? I thought it appropriate
to tag the last commit in the series because it best represents the actual
commit on which the tests were carried out. Also, the cover letter does not
itself correspond to a real commit that can be tested; the closest thing to
one is the last commit in the series. I did read the document on submitting
patches, but I don't remember whether it specifies how to apply a Tested-by
tag to a whole series.

Kind regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 02:35:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 02:35:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471592.731494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDG5v-0004RI-Oq; Thu, 05 Jan 2023 02:35:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471592.731494; Thu, 05 Jan 2023 02:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDG5v-0004RB-Lq; Thu, 05 Jan 2023 02:35:03 +0000
Received: by outflank-mailman (input) for mailman id 471592;
 Thu, 05 Jan 2023 02:35:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cz8e=5C=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDG5t-0004Qm-Fq
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 02:35:02 +0000
Received: from sonic313-21.consmr.mail.gq1.yahoo.com
 (sonic313-21.consmr.mail.gq1.yahoo.com [98.137.65.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8b2df3e2-8ca1-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 03:34:58 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic313.consmr.mail.gq1.yahoo.com with HTTP; Thu, 5 Jan 2023 02:34:56 +0000
Received: by hermes--production-ne1-7b69748c4d-rglm6 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID ac337fbdd3f23fe8dbfff1f809657b1b; 
 Thu, 05 Jan 2023 02:34:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b2df3e2-8ca1-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1672886096; bh=/pRkfyPTTNceKNunGNRzqNqyo3veLlHP7DUWR1szcLQ=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=V2vW5f5Uy+sJKNuydA6aOe5NtERI8guZXx/KTQOH8sk5WVpJrSco7chMF9TZ5mT/wCJB1hOXZKYgtq0TwjIBY8EOe64Cy7wwwYzksEL/AJjKbd5kUWF1yzMttZqENpXg7aykoVF7MACyshYLm7bRmV9f20iq6uqFHRYcUi+2BFydZoyvUnJt0airuiB6z7+wjxg1y2Zqm36NRBAMvZj5/rS0Tm0SzvQ8+mMw4Cm2PbSJ0DfSGZdS0g6/a2f4Ds8hQLAs844R1sRKauZRK31y7Hzk2sZbmmyg7nnD89Y+KUjq2PvkCFsw2+UdzmYKbkMUvWpWSiGrkvu5Kshloc8hOg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1672886096; bh=SUmFmCwnNydbHOG0E0HDGzqHeyfavww5DndpuizCsUo=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=Kjt/pFUbGuld+5RFIx3jW/KIhauf9E5JSh+SHMpql3A3Xofswqz9trqnGk3kw5WBwZ8WL0ZL//p04FgiPANN4u65O29eNG6/zxinPC+OQwQciUcyX8sFgz3M57hwUcPQ6KgZ0C31/H569vLdBoEjmV5W85H3lszxoXTT+BYg7RHvImQdtp7eARPaVAk5hGG3+VDKyyfa2QtvfKXC4WMBZRcIrwPgKAzM0e5ter5orr8JzCNHpLRm/8yr5IY0HAmaL9Oc63FF616NigXzbDRKr8n078F3YEBqGigV58fqKwlxAdOKRxQCF6H8J1tAgXq7aYBsm45YL+Opv9ldmsuvag==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <157390bd-4d1f-64be-49df-639ddbd8bf2d@aol.com>
Date: Wed, 4 Jan 2023 21:34:46 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <f596a7c1-10d0-3743-fe0b-d42003cf7440@aol.com>
 <be75758a-2547-d1ef-223e-157f3aa28b23@linaro.org>
 <92efe0f1-f22b-47bc-f27d-2f31cb3621ea@aol.com>
 <877abde0-2e76-7fde-0212-eb7ce1384ea6@linaro.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <877abde0-2e76-7fde-0212-eb7ce1384ea6@linaro.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3103

On 1/4/2023 5:18 PM, Philippe Mathieu-Daudé wrote:
> On 4/1/23 20:29, Chuck Zmudzinski wrote:
> > On 1/4/23 1:48 PM, Philippe Mathieu-Daudé wrote:
>
> >> Here TYPE_PIIX3_DEVICE stands for "the PCI function, part of the PIIX
> >> south bridge chipset, which exposes a PCI-to-ISA bridge". A better
> >> name could be TYPE_PIIX3_ISA_PCI_DEVICE. Unfortunately this
> >> device is named "PIIX3" with no indication of the ISA bridge.
> > 
> > 
> > Thanks, you are right, I see the PIIX3 device still exists after
> > this patch set is applied.
> > 
> > chuckz@debian:~/sources-sid/qemu/qemu-7.50+dfsg/hw/i386$ grep -r PIIX3 *
> > pc_piix.c:        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> > 
> > I also understand there is the PCI-to-ISA bridge at 00:01.0 on the PCI bus:
> > 
> > chuckz@debian:~$ lspci
> > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>
> All these entries ('PCI functions') ...:
>
> > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
> > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
> > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> ... are part of the same *device*: the PIIX south bridge.
>
> This device is enumerated as #1 on the PCI bus #0.
> It currently exposes 4 functions: ISA/IDE/USB/ACPI.
>
> > 00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
> > 00:03.0 VGA compatible controller: Device 1234:1111 (rev 02)
> > 
> > I also see with this patch, there is a bridge that is a PIIX4 ACPI at 00:01.3.
> > I get the exact same output from lspci without the patch series, so that gives
> > me confidence it is working as designed.
>
> Historically the PIIX3 and PIIX4 QEMU models have been written by
> different people with different goals.
>
> - PIIX3 comes from x86 machines and is important for KVM/Xen
>    accelerators
> - PIIX4 was developed by hobbyists for MIPS machines
>
> PIIX4 added the ACPI function, which proved helpful for x86 machines.
>
> OSes such as Linux don't consider the PIIX south bridge as a whole chipset,
> and enumerate each PCI function individually. So it was possible to add
> the PIIX4 ACPI function to a PIIX3... a config that doesn't exist on
> real hardware :/
> While QEMU aims at modeling real HW, this config is still very useful
> for KVM/Xen. So this Frankenstein config is accepted / maintained.
>
> Bernhard is doing incredible work merging the PIIX3/PIIX4 differences
> into a more maintainable model :)
>
> Regards,
>
> Phil.

Thanks for the nice explanation of the history. I understand the PIIX3 is
associated with the i440fx machine type for i386 - it goes all the way back,
I think, to 1995, the original 32-bit Pentium processor, and Windows 95. So it
is a worthwhile effort to update this to something newer. Of course, KVM can
use the newer Q35 chipset, which dates to around 2009, I think, but xen/x86
languishes on the i440fx for now.

Best regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 04:54:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 04:54:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471606.731524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDIGm-0001NW-Ga; Thu, 05 Jan 2023 04:54:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471606.731524; Thu, 05 Jan 2023 04:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDIGm-0001NP-Dt; Thu, 05 Jan 2023 04:54:24 +0000
Received: by outflank-mailman (input) for mailman id 471606;
 Thu, 05 Jan 2023 04:54:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDIGk-0001NF-Vm; Thu, 05 Jan 2023 04:54:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDIGk-0006N2-Rq; Thu, 05 Jan 2023 04:54:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDIGk-0008BE-Cv; Thu, 05 Jan 2023 04:54:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDIGk-0001sI-CP; Thu, 05 Jan 2023 04:54:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9LaNnHNrvD2gi1b4pViituYE1r1RA108h4zfpDGpfYQ=; b=wWgyHoEfod7Y+mxyGBOy7Hp02G
	xi7oeu6/ZpFtlk4XSds39yvOKqPaQ0Qp7Up3oVKu/YYlHlRoNUeSsP8O3Ibur1ce8IkhrU2b59Tgu
	jy5/hB1775yd3WrxkMLkY2MA4QzZb10mqaK+1zAa330hnGMqkUT9qGZ9vOXB9ijACCgs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175570-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175570: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=69b41ac87e4a664de78a395ff97166f0b2943210
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 04:54:22 +0000

flight 175570 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175570/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                69b41ac87e4a664de78a395ff97166f0b2943210
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   89 days
Failing since        173470  2022-10-08 06:21:34 Z   88 days  185 attempts
Testing same since   175552  2023-01-02 21:13:23 Z    2 days    6 attempts

------------------------------------------------------------
3257 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 497265 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 06:44:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 06:44:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471623.731550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDJzG-0004Gl-Me; Thu, 05 Jan 2023 06:44:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471623.731550; Thu, 05 Jan 2023 06:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDJzG-0004Ge-JP; Thu, 05 Jan 2023 06:44:26 +0000
Received: by outflank-mailman (input) for mailman id 471623;
 Thu, 05 Jan 2023 06:44:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDJzF-0004GU-93; Thu, 05 Jan 2023 06:44:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDJzF-0000hI-5Q; Thu, 05 Jan 2023 06:44:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDJzE-00087L-RI; Thu, 05 Jan 2023 06:44:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDJzE-0005fX-Qh; Thu, 05 Jan 2023 06:44:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/wcW7TSTHh7lbkw4eeRAFe0Z+Z63P8BJNaPHygWmW+E=; b=NguCIK4GnMgRNqSxfBU1Fgr2tL
	X9c6pnOJ1YzRVQkKWWbo9zBuGVMjc9Op50tsX1kMuU91rI03QSHQ2sXv7Sl+JljuSdoegbqz8t3Gs
	BC5mLNX4Rra392HrWP9QhvkLtBcV+TdWX8tOQ2xEzkz1irK/aWvEhhzSaEVZ+Dq01jCU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175571-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175571: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start.2:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ecc9a58835f8d4ea4e3ed36832032a71ee08fbb2
X-Osstest-Versions-That:
    qemuu=222059a0fccf4af3be776fe35a5ea2d6a68f9a0b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 06:44:24 +0000

flight 175571 qemu-mainline real [real]
flight 175581 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175571/
http://logs.test-lab.xenproject.org/osstest/logs/175581/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 20 guest-start.2       fail pass in 175581-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175457
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175457
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175457
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175457
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175457
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175457
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175457
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175457
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                ecc9a58835f8d4ea4e3ed36832032a71ee08fbb2
baseline version:
 qemuu                222059a0fccf4af3be776fe35a5ea2d6a68f9a0b

Last test of basis   175457  2022-12-22 11:29:39 Z   13 days
Testing same since   175571  2023-01-04 17:08:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bin Meng <bin.meng@windriver.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Greg Kurz <groug@kaod.org>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   222059a0fc..ecc9a58835  ecc9a58835f8d4ea4e3ed36832032a71ee08fbb2 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 07:13:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 07:13:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471657.731583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDKRT-0000Ia-DE; Thu, 05 Jan 2023 07:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471657.731583; Thu, 05 Jan 2023 07:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDKRT-0000IT-AN; Thu, 05 Jan 2023 07:13:35 +0000
Received: by outflank-mailman (input) for mailman id 471657;
 Thu, 05 Jan 2023 07:13:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDKRR-0000IJ-DE
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 07:13:33 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2047.outbound.protection.outlook.com [40.107.6.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 74f3d6fb-8cc8-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 08:13:30 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8337.eurprd04.prod.outlook.com (2603:10a6:20b:3e8::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 07:13:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 07:13:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74f3d6fb-8cc8-11ed-b8d0-410ff93cb8f0
Message-ID: <f5c4d3c7-59aa-1f21-c952-3b94f662298f@suse.com>
Date: Thu, 5 Jan 2023 08:13:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2-ish] x86/boot: Factor move_xen() out of __start_xen()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20211207105339.28440-1-andrew.cooper3@citrix.com>
 <20221216201749.32164-1-andrew.cooper3@citrix.com>
 <520cdde9-07e1-fce8-56d9-205fc31c62e3@suse.com>
 <c14dacf1-7057-d860-7708-2dd13e8d6a4f@citrix.com>
 <e70bf233-4444-8c65-8cca-1b7ea74c55d1@suse.com>
 <5f3df359-2906-ba20-b8df-ae2f2d5f5981@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5f3df359-2906-ba20-b8df-ae2f2d5f5981@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0134.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 04.01.2023 21:04, Andrew Cooper wrote:
> On 21/12/2022 7:40 am, Jan Beulich wrote:
>> On 20.12.2022 21:56, Andrew Cooper wrote:
>>> On 20/12/2022 1:51 pm, Jan Beulich wrote:
>>>> On 16.12.2022 21:17, Andrew Cooper wrote:
>>>>> +        "mov    %[cr4], %%cr4\n\t"     /* CR4.PGE = 1 */
>>>>> +        : [cr4] "=&a" (tmp) /* Could be "r", but "a" makes better asm */
>>>>> +        : [cr3] "r" (__pa(idle_pg_table)),
>>>>> +          [pge] "i" (X86_CR4_PGE)
>>>>> +        : "memory" );
>>>> The removed stack copying worries me, to be honest. Yes, for local
>>>> variables of ours it doesn't matter because they are about to go out
>>>> of scope. But what if the compiler instantiates any for its own use,
>>>> e.g. ...
>>>>
>>>>> +    /*
>>>>> +     * End of the critical region.  Updates to locals and globals now work as
>>>>> +     * expected.
>>>>> +     *
>>>>> +     * Updates to local variables which have been spilled to the stack since
>>>>> +     * the memcpy() have been lost.  But we don't care, because they're all
>>>>> +     * going out of scope imminently.
>>>>> +     */
>>>>> +
>>>>> +    printk("New Xen image base address: %#lx\n", xen_phys_start);
>>>> ... the result of the address calculation for the string literal
>>>> here? Such auxiliary calculations can happen at any point in the
>>>> function, and won't necessarily (hence the example chosen) become
>>>> impossible for the compiler to do with the memory clobber in the
>>>> asm(). And while the string literal's address is likely cheap
>>>> enough to calculate right in the function invocation sequence (and
>>>> an option would also be to retain the printk() in the caller),
>>>> other instrumentation options could be undermined by this as well.
>>> Right now, the compiler is free to calculate the address of the string
>>> literal in a register, and move it to the other side of the TLB flush.
>>> This will work just fine.
>>>
>>> However, the compiler cannot now, or ever in the future, spill such a
>>> calculation to the stack.
>> I'm afraid the compiler's view of things is different: Whatever it puts
>> on the stack is viewed as virtual registers, unaffected by a memory
>> clobber (of course there can be effects resulting from uses of named
>> variables). Look at -O3 output of gcc12 (which is what I happened to
>> play with; I don't think it's overly version dependent) for this
>> (clearly contrived) piece of code:
>>
>> int __attribute__((const)) calc(int);
>>
>> int test(int i) {
>> 	int j = calc(i);
>>
>> 	asm("nopl %0" : "+m" (j));
>> 	asm("nopq %%rsp" ::: "memory", "ax", "cx", "dx", "bx", "bp", "si", "di",
>> 	                     "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15");
>> 	j = calc(i);
>> 	asm("nopl %0" :: "m" (j));
>>
>> 	return j;
>> }
>>
>> It instantiates a local on the stack for the result of calc(); it does
>> not re-invoke calc() a 2nd time. Which means the memory clobber in the
>> middle asm() does not affect that (and by implication: any) stack slot.
>>
>> Using godbolt I can also see that clang15 agrees with gcc12 in this
>> regard. I didn't bother trying other versions.
> 
> Well this is problematic, because it contradicts what we depend on
> asm("":::"memory") doing...

Does it? I'm unaware of instances where we use "memory" to deal with
local variables.

> https://godbolt.org/z/xeGMc3sM9
> 
> But I don't fully agree with the conclusions drawn by this example.
> 
> It only instantiates a local on the stack because you force a memory
> operand to satisfy the "m" constraint, not because of the "memory"
> clobber.

Sure, all I wanted to show is that the compiler may make such
transformations. As said, the example is clearly contrived, but I
also didn't want to spend too much time on finding a yet better one.

> By declaring calc as const, you are permitting the compiler to make an
> explicit transformation to delete one of the calls, irrespective of
> anything else in the function.
> 
> It is weird that 'j' ends up taking two stack slots when it would be
> absolutely fine for it to have only one, and indeed this is what happens
> when you remove the first and third asm()'s.  It is these which force
> 'j' to be on the stack, not the memory clobber in the middle.
> 
> Observe that after commenting those two out, Clang transforms things to
> spill 'i' onto the stack, rather than 'j', and then tail-call calc() on
> the way out.  This is actually deleting the first calc() call, rather
> than the second.

Which would still demonstrate my point - we can't assume that "memory"
guards against the use of stack locals by the compiler.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 07:22:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 07:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471666.731598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDKZj-0001oo-Bn; Thu, 05 Jan 2023 07:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471666.731598; Thu, 05 Jan 2023 07:22:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDKZj-0001oh-8Y; Thu, 05 Jan 2023 07:22:07 +0000
Received: by outflank-mailman (input) for mailman id 471666;
 Thu, 05 Jan 2023 07:22:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDKZh-0001ob-Fb
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 07:22:05 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2083.outbound.protection.outlook.com [40.107.7.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7097441-8cc9-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 08:22:04 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7051.eurprd04.prod.outlook.com (2603:10a6:10:fd::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 07:22:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 07:22:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7097441-8cc9-11ed-91b6-6bf2151ebd3b
Message-ID: <8da65f2f-19ae-6f79-d13c-c502ae1d3889@suse.com>
Date: Thu, 5 Jan 2023 08:22:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/4] xen/version: Drop compat/kernel.c
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-3-andrew.cooper3@citrix.com>
 <074042f5-ecc1-11c9-bdcd-b9d619475d58@suse.com>
 <c67509bf-b5e0-5364-9307-5432f5417ce5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c67509bf-b5e0-5364-9307-5432f5417ce5@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0135.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 04.01.2023 20:15, Andrew Cooper wrote:
> On 04/01/2023 4:29 pm, Jan Beulich wrote:
>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>> --- a/xen/common/compat/kernel.c
>>> +++ /dev/null
>>> @@ -1,53 +0,0 @@
>>> -/******************************************************************************
>>> - * kernel.c
>>> - */
>>> -
>>> -EMIT_FILE;
>>> -
>>> -#include <xen/init.h>
>>> -#include <xen/lib.h>
>>> -#include <xen/errno.h>
>>> -#include <xen/version.h>
>>> -#include <xen/sched.h>
>>> -#include <xen/guest_access.h>
>>> -#include <asm/current.h>
>>> -#include <compat/xen.h>
>>> -#include <compat/version.h>
>>> -
>>> -extern xen_commandline_t saved_cmdline;
>>> -
>>> -#define xen_extraversion compat_extraversion
>>> -#define xen_extraversion_t compat_extraversion_t
>>> -
>>> -#define xen_compile_info compat_compile_info
>>> -#define xen_compile_info_t compat_compile_info_t
>>> -
>>> -CHECK_TYPE(capabilities_info);
>> This and ...
>>
>>> -#define xen_platform_parameters compat_platform_parameters
>>> -#define xen_platform_parameters_t compat_platform_parameters_t
>>> -#undef HYPERVISOR_VIRT_START
>>> -#define HYPERVISOR_VIRT_START HYPERVISOR_COMPAT_VIRT_START(current->domain)
>>> -
>>> -#define xen_changeset_info compat_changeset_info
>>> -#define xen_changeset_info_t compat_changeset_info_t
>>> -
>>> -#define xen_feature_info compat_feature_info
>>> -#define xen_feature_info_t compat_feature_info_t
>>> -
>>> -CHECK_TYPE(domain_handle);
>> ... this go away without any replacement. Considering they're both
>> char[], that's probably fine, but could do with mentioning in the
>> description.
> 
> I did actually mean to ask about these two, because they're incomplete
> already.
> 
> Why do we CHECK_TYPE(capabilities_info) but define identity aliases for
> compat_extraversion (amongst others) ?
> 
> Is there even a point for having a compat alias of a char array?

To be quite honest, for capabilities_info I don't recall why it wasn't
aliased. For domain_handle I think the reason was that it's declared
outside of this header, and it could conceivably be a more UUID-like
struct.

> I'm tempted to just drop them.  I don't think the check does anything
> useful for us.

As said, I'm pretty much okay with this, but would like you to mention
in the description that this is an intentional change.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 07:58:03 2023
Message-ID: <ebaf70e5-e1ba-d72d-84f2-5acb7e38a6bc@suse.com>
Date: Thu, 5 Jan 2023 08:57:39 +0100
Subject: Re: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-4-andrew.cooper3@citrix.com>
 <540a449d-f76d-eb16-4f98-c4fb3564ce98@suse.com>
 <7dd00ce3-a95b-2477-128c-de36e75c4a34@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <7dd00ce3-a95b-2477-128c-de36e75c4a34@citrix.com>

On 04.01.2023 20:55, Andrew Cooper wrote:
> On 04/01/2023 4:40 pm, Jan Beulich wrote:
>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>> A split in virtual address space is only applicable for x86 PV guests.
>>> Furthermore, the information returned for x86 64bit PV guests is wrong.
>>>
>>> Explain the problem in version.h, stating the other information that PV guests
>>> need to know.
>>>
>>> For 64bit PV guests, and all non-x86-PV guests, return 0, which is strictly
>>> less wrong than the values currently returned.
>> I disagree for the 64-bit part of this. Seeing Linux's exposure of the
>> value in sysfs I even wonder whether we can change this like you do for
>> HVM. Who knows what is being inferred from the value, and by whom.
> 
> Linux's sysfs ABI isn't relevant to us here.  The sysfs ABI says it
> reports what the hypervisor presents, not that it will be a nonzero number.

It effectively reports the hypervisor's (virtual) base address there. How
can we not care whether something (kexec comes to mind) is using it for
whatever purpose? And come to think of it, the tool stack has uses, too.
Assuming you audited them, did you consider removing dead uses in a prereq
patch (and discussing the effects on live ones in the description)?

>>> --- a/xen/include/public/version.h
>>> +++ b/xen/include/public/version.h
>>> @@ -42,6 +42,26 @@ typedef char xen_capabilities_info_t[1024];
>>>  typedef char xen_changeset_info_t[64];
>>>  #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
>>>  
>>> +/*
>>> + * This API is problematic.
>>> + *
>>> + * It is only applicable to guests which share pagetables with Xen (x86 PV
>>> + * guests), and is supposed to identify the virtual address split between
>>> + * guest kernel and Xen.
>>> + *
>>> + * For 32bit PV guests, it mostly does this, but the caller needs to know that
>>> + * Xen lives between the split and 4G.
>>> + *
>>> + * For 64bit PV guests, Xen lives at the bottom of the upper canonical range.
>>> + * This previously returned the start of the upper canonical range (which is
>>> + * the userspace/Xen split), not the Xen/kernel split (which is 8TB further
>>> + * on).  This now returns 0 because the old number wasn't correct, and
>>> + * changing it to anything else would be even worse.
>> Whether the guest runs user mode code in the low or high half (or in yet
>> another way of splitting) isn't really dictated by the PV ABI, is it?
> 
> No, but given a choice of reporting the thing which is an architectural
> boundary, or the one which is the actual split between the two adjacent
> ranges, reporting the architectural boundary is clearly the unhelpful thing.

Hmm. To properly parallel the 32-bit variant, a [start,end] range would need
exposing for 64-bit, rather than exposing nothing. Not least because ...

>>  So
>> whether the value is "wrong" is entirely unclear. Instead ...
>>
>>> + * For all guest types using hardware virt extensions, Xen is not mapped into
>>> + * the guest kernel virtual address space.  This now returns 0, where it
>>> + * previously returned unrelated data.
>>> + */
>>>  #define XENVER_platform_parameters 5
>>>  struct xen_platform_parameters {
>>>      xen_ulong_t virt_start;
>> ... the field name tells me that all that is being conveyed is the virtual
>> address of where the hypervisor area starts.
> 
> IMO, it doesn't matter what the name of the field is.  It dates from the
> days when 32bit PV was the only type of guest.
> 
> 32bit PV guests really do have a variable split, so the guest kernel
> really does need to get this value from Xen.
> 
> The split for 64bit PV guests is compile-time constant, hence why 64bit
> PV kernels don't care.

... once we get to run Xen in 5-level mode, 4-level PV guests could also
gain a variable split: Like for 32-bit guests now, only the r/o M2P would
need to live in that area, and this may well occupy less than the full
range presently reserved for the hypervisor.

> For compat HVM, it happens to pick up the -1 from:
> 
> #ifdef CONFIG_PV32
>     HYPERVISOR_COMPAT_VIRT_START(d) =
>         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
> #endif
> 
> in arch_domain_create(), whereas for non-compat HVM, it gets a number in
> an address space it has no connection to in the slightest.  ARM guests
> end up getting XEN_VIRT_START (== 2M) handed back, but this is absolutely
> an internal detail that guests have no business knowing.

Well, okay, this looks to be good enough an argument to make the adjustment
you propose for !PV guests.
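
The proposed semantics can be sketched from the guest's side roughly as follows (the helper name is hypothetical, for illustration only; the nonzero example value is likewise made up):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical guest-side helper sketching the semantics after this
 * change: virt_start == 0 means "Xen is not mapped into the guest's
 * virtual address space" (64-bit PV, HVM, and ARM guests), while a
 * nonzero value is only meaningful for 32-bit x86 PV guests, where
 * Xen occupies [virt_start, 4G). */
static inline bool xen_shares_address_space(uint64_t virt_start)
{
    return virt_start != 0;
}
```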

> The only reason I'm not issuing an XSA for this is because we don't have
> any pretence of KASLR in Xen.  Pretty much every other kernel gets CVEs
> for infoleaks like this.
> 
> We feasibly could do KASLR in !PV builds, at which point this would
> qualify for an XSA.

I would question that, but I can see your view as one possible one.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 08:15:16 2023
Message-ID: <f90111d0-b94e-8127-3b13-fbe3558d53f7@suse.com>
Date: Thu, 5 Jan 2023 09:15:02 +0100
Subject: Re: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-5-andrew.cooper3@citrix.com>
 <a0cb9c83-dc4d-5f03-0f65-3756fadfde0b@suse.com>
 <9c9cedd5-cca7-95d4-00bb-f34a56de2695@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9c9cedd5-cca7-95d4-00bb-f34a56de2695@citrix.com>

On 04.01.2023 19:34, Andrew Cooper wrote:
> On 04/01/2023 5:04 pm, Jan Beulich wrote:
>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>> API/ABI wise, XENVER_build_id could be merged into xenver_varstring_op(), but
>>> the internal infrastructure is awkward.
>> I guess build-id also doesn't fit the rest because it returns not a string
>> but an array of bytes. You also couldn't use strlen() on the array.
> 
> The format is unspecified, but it is a base64 encoded ASCII string (not
> NUL terminated).

Where does this base64 encoding come from? I see the build ID as pure
binary data, which could easily contain an embedded NUL.
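
The embedded-NUL concern can be demonstrated with a trivial sketch (the bytes below are made up for illustration; a real build ID would typically be a 20-byte SHA1 from the linker's NT_GNU_BUILD_ID note):

```c
#include <string.h>

/* Illustrative only: a build ID is raw bytes and may contain an
 * embedded NUL, so strlen() under-reports its length.  These bytes
 * are invented for demonstration. */
static const unsigned char build_id[] = { 0xde, 0xad, 0x00, 0xbe, 0xef };
#define BUILD_ID_LEN sizeof(build_id)
```

Here `strlen()` on the array would stop at the third byte and report 2, while the actual length is 5.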

>>> +    if ( sz > INT32_MAX )
>>> +        return -E2BIG; /* Compat guests.  2G ought to be plenty. */
>> While the comment here and in the public header mention compat guests,
>> the check is uniform. What's the deal?
> 
> Well, it's either this, or a (compat ? INT32_MAX : INT64_MAX) check, along
> with the ifdefary and predicates required to make that compile.
> 
> But there's not a CPU today which can actually move 2G of data (which is
> 4G of L1d bandwidth) without suffering the watchdog (especially as we've
> just read it once for strlen(), so that's 6G of bandwidth), nor do I
> expect this to change in the foreseeable future.
> 
> There's some boundary (probably far lower) beyond which we can't use the
> algorithm here.
> 
> There wants to be some limit, and I don't feel it is necessary to make
> it variable on the guest type.

Sure. My question was merely because of the special mentioning of 32-bit /
compat guests. I'm fine with the universal limit, and I'd also be fine
with a lower (universal) bound. All I'm after is that the (to me at least)
confusing comments be adjusted.
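
The uniform bound under discussion amounts to something like this sketch (the function name is hypothetical; only the check itself is taken from the quoted patch):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the uniform length check being discussed: one limit for
 * all guest types, chosen so the length also fits a 32-bit (compat)
 * guest's signed size.  The function name is invented for this
 * illustration. */
static int check_varstring_len(size_t sz)
{
    if ( sz > INT32_MAX )
        return -E2BIG; /* one universal bound, regardless of guest type */
    return 0;
}
```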

> But overall, I'm not seeing a major objection to this change?  In which
> case I'll go ahead and do the tools/ cleanup too for v2.

Well, I'm not entirely convinced of the need for the new subops (we could
as well introduce build time checks to make sure no truncation will occur
for the existing ones), but I also won't object to their introduction. At
least for the command line I can see that we will want (need) to support
more than 1k worth of a string, considering how many options we have
accumulated.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 08:39:54 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175573-mainreport@xen.org>
Subject: [xen-unstable test] 175573: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-migrupgrade:xen-install/src_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-examine-bios:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c1df06afe578f698ebe91a1e3817463b9d165123
X-Osstest-Versions-That:
    xen=7eef80e06ed2282bbcec3619d860c6aacb0515d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 08:39:47 +0000

flight 175573 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175573/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail in 175569 pass in 175573
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail in 175569 pass in 175573
 test-amd64-i386-migrupgrade 10 xen-install/src_host fail in 175569 pass in 175573
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 175569 pass in 175573
 test-amd64-i386-examine-bios  6 xen-install                fail pass in 175569
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 19 guest-start.2 fail pass in 175569
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 175569

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 175569 like 175520
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175562
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175562
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175562
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175562
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175562
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175562
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175562
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175562
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175562
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175562
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175562
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175562
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c1df06afe578f698ebe91a1e3817463b9d165123
baseline version:
 xen                  7eef80e06ed2282bbcec3619d860c6aacb0515d8

Last test of basis   175562  2023-01-04 01:55:19 Z    1 days
Testing same since   175569  2023-01-04 16:38:42 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7eef80e06e..c1df06afe5  c1df06afe578f698ebe91a1e3817463b9d165123 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 08:40:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 08:40:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471706.731654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDLo2-0003ce-GA; Thu, 05 Jan 2023 08:40:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471706.731654; Thu, 05 Jan 2023 08:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDLo2-0003cX-DJ; Thu, 05 Jan 2023 08:40:58 +0000
Received: by outflank-mailman (input) for mailman id 471706;
 Thu, 05 Jan 2023 08:40:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDLo1-0003Um-3i
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 08:40:57 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ab522935-8cd4-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 09:40:56 +0100 (CET)
Received: by mail-wm1-x32d.google.com with SMTP id
 k22-20020a05600c1c9600b003d1ee3a6289so757335wms.2
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 00:40:55 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 n17-20020a05600c4f9100b003cffd3c3d6csm1752711wmq.12.2023.01.05.00.40.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 00:40:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab522935-8cd4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=fZgScg//r5KpzoqMeCWFDkQSqGzHzzpOB/W401gAMtM=;
        b=TqO7OCnB85PKAFF38C3QVGCH5mxK+Y46APnEg13YshyyClqX3Ok8C4+Uq4mYpAhotM
         jzwi0GnpBFxDeKNN3z7KcBZ5JLZsuaw4d1o/YpUtTKBuLbhDgXql4Ti8XQ0otJtvFJJs
         EHavA3csOSEz/2zy/kdF+wWD5eD5fqIwybW+P1FNZIZlRR7aAdlZBiHJwlO86+6wzIvh
         q92rg76kyQhZk7Fwe3G1AMylMdBAyFsUdS3ye+IEFLNQLybLYOiM/BTGKHI4kNYsWWj1
         +jgM3QxyN+PKnUWoqd7NSOKOTJjGe43SxyOE8BAyaPZKs+OD+GOpgt6gDxJyY1TxRPDE
         CZkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=fZgScg//r5KpzoqMeCWFDkQSqGzHzzpOB/W401gAMtM=;
        b=inZWW4C78Mg+PEIDtukyX0xDQ5YWYz5pR7B68otZsZTryYPGT5+mlBT3sYyQbCU6Fj
         cZKUISy7WBwXDgmgxrtkiuaBZoucBi5loHtjR/IlCqVpVNpSYRx3zqwfNSztP/wZ+h9R
         F2UmWNARVm+ldXeAJ1aWhdnwUgyUZ3ISVkOyfWJhyLl7F7INZaSsNFrMV1+Fb9DwhL0B
         M3Qijxe5o3AEzQ3H2TmKWwbaP+BoNf8hC70WG8Qq9Zp7/Wf3obcn2a7tKyIPx9/xI/zO
         tZarxb5FU1FSVSZAe6E0Jw3mcvlXbZeQF2EmYTPNUxn01MmkQv8/uASXheBOlnzuvw4Y
         R+EA==
X-Gm-Message-State: AFqh2kpU/66Alm1sGJaUGQx1EvzuJashoDs6QLcFsUjsqAcHqj4R5QdY
	6xoZY1GYYUXgInDpvRMRIlziPN55eQfPm/1u
X-Google-Smtp-Source: AMrXdXuYODPiQgWsNEyisCw6VhRWEzZEI8c5abzEX6U4l/O3cjPsGK7MvNB2/q/BRvvZ2+FmAB/bYw==
X-Received: by 2002:a1c:2b04:0:b0:3d3:4f28:40c6 with SMTP id r4-20020a1c2b04000000b003d34f2840c6mr34647278wmr.1.1672908054693;
        Thu, 05 Jan 2023 00:40:54 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Gianluca Guida <gianluca@rivosinc.com>
Subject: [PATCH v3 0/2] Add minimal RISC-V Xen build and build testing
Date: Thu,  5 Jan 2023 10:40:49 +0200
Message-Id: <cover.1672906559.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- a minimal set of changes adding initial RISC-V support, making the
  Xen binary buildable and runnable for the RISC-V architecture so it
  can be used for future development and testing.
- RISC-V 64 cross-compile build jobs to check whether new changes
  break the RISC-V build.

Changes in V3:
- Remove include of <asm/config.h> from head.S.

Changes in V2:
- Remove the patch "automation: add cross-compiler support
  for the build script" because it was reworked as a part of the patch
  series "CI: Fixes/cleanup in preparation for RISCV".
- Remove the patch "automation: add python3 package for riscv64.dockerfile"
  because it is not necessary for RISCV Xen binary build now.
- Rework the patch "arch/riscv: initial RISC-V support to build/run
  minimal Xen" according to the comments about v1 of the patch series.
- Add HYPERVISOR_ONLY to RISCV jobs in build.yaml after rebasing on
  "CI: Fixes/cleanup in preparation for RISCV" patch series.

Oleksii Kurochko (2):
  arch/riscv: initial RISC-V support to build/run minimal Xen
  automation: add RISC-V 64 cross-build tests for Xen

 automation/gitlab-ci/build.yaml     |  45 ++++++++
 xen/arch/riscv/Makefile             |  16 +++
 xen/arch/riscv/arch.mk              |   4 +
 xen/arch/riscv/include/asm/config.h |   9 +-
 xen/arch/riscv/riscv64/Makefile     |   2 +-
 xen/arch/riscv/riscv64/head.S       |   4 +-
 xen/arch/riscv/xen.lds.S            | 158 ++++++++++++++++++++++++++++
 7 files changed, 233 insertions(+), 5 deletions(-)
 create mode 100644 xen/arch/riscv/xen.lds.S

-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 08:41:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 08:41:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471707.731665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDLo6-0003tV-OO; Thu, 05 Jan 2023 08:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471707.731665; Thu, 05 Jan 2023 08:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDLo6-0003tO-KB; Thu, 05 Jan 2023 08:41:02 +0000
Received: by outflank-mailman (input) for mailman id 471707;
 Thu, 05 Jan 2023 08:41:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDLo5-0003sb-AE
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 08:41:01 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ad524a14-8cd4-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 09:40:58 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id ay40so27507622wmb.2
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 00:40:58 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 n17-20020a05600c4f9100b003cffd3c3d6csm1752711wmq.12.2023.01.05.00.40.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 00:40:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad524a14-8cd4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1DmOMupuCoWDrIsiMMu25qOlzUBJXmbm34omuNR0qaQ=;
        b=jEnnMoJsw/QR9BZmza/gxrU1z8UTyZnyWNrDOF6UyqsyG1XBIuhmRGY0Oyv7LfIosV
         uLuoX0OopVqyrU3SqigHHfbG7lL/aJe2HGn7E4it4/Nx/c0Xodn2HvV1Hw07vb4Kh68L
         PBBB8lk7Z+vxoTzytBuQHm949+/UQ7p01PNSbmrHqxFtvGAlHA5+jVivdYbIOznrkjQf
         yfoQbdoCQumqKFqtrDDwQZFR4VieSopb/K2KYgQPQ3SF1M7uYlYNFNgZ4TwijlltygHX
         7uI3pcUrexy0fUgRmSTfECGQLC5+29lxdMDXy9oLxU8mVnmeFz1yIwP8rBjRU/h6b5sR
         PhhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1DmOMupuCoWDrIsiMMu25qOlzUBJXmbm34omuNR0qaQ=;
        b=4203oaBj+M14aNP9WZOPI+5v9QNB3B5EdXKvfTK9kFoyLYlI7TNR5g26cHHx0OobGo
         26fwkhiSaaMT9+pbJkBNJASy2J3NZ1QT54ZwzfriVK44SqKnnMddYkhDZo15ggckEx/8
         qxI65rToY4T5i4AxS5wQ78oCJN6f2GcBq8iX2/pStnvT2o0afoQMgVclnIavIUaBOOy1
         U2ueVz7IO2qzrIrwjkg0ikGy2/nCPR2tSmoU6tBXc0XkVh5BCOLkb2po1fRJEh8biZIz
         iExhEoShNDvecIao+98SqIvGjbiUdE62k6ElptX9TUqbQJtYbyTMCuQBIoBkImwneTg9
         Ki5Q==
X-Gm-Message-State: AFqh2kp9fk1Eu37dWoJDqIT2fUnfiUQ9o9rh+Zcr1X07raP4zJcNGAek
	BD/a848o24hlc1qvC9chfAPnJjdnXPNNhhmN
X-Google-Smtp-Source: AMrXdXukQDeR5j0Ysjd9JHSWYImdEWJ6Dv4Vsx4eHct59HG9CM/lZXIMrVSQpGMpsYJZS4J1VlJ9zw==
X-Received: by 2002:a05:600c:4b0f:b0:3d2:2651:64bd with SMTP id i15-20020a05600c4b0f00b003d2265164bdmr35671137wmp.10.1672908058181;
        Thu, 05 Jan 2023 00:40:58 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>
Subject: [PATCH v3 1/2] arch/riscv: initial RISC-V support to build/run minimal Xen
Date: Thu,  5 Jan 2023 10:40:50 +0200
Message-Id: <9f575b36965088936b5f7beed1b0d4bd96063d16.1672906559.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1672906559.git.oleksii.kurochko@gmail.com>
References: <cover.1672906559.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch provides the minimal set of changes needed to build and run
a minimal Xen binary in GitLab CI/CD, which will allow continuous
checking of the build status of RISC-V Xen.

Besides the introduction of new files, the following changes were made:
* Redefined the ALIGN macro from '.align 2' to '.align 4', as RISC-V
  implementations without the C extension enforce 32-bit instruction
  address alignment.  With the C extension, both 16-bit and 32-bit
  alignment are allowed.
* ALL_OBJS-y and ALL_LIBS-y were temporarily overridden to produce
  a minimal hypervisor image; otherwise a huge number of headers and
  stubs for common, drivers, libs, etc. would have to be introduced,
  which aren't necessary for now.
* The section for the start function was changed from .text to
  .text.header to make it the first code executed.
* Reworked the riscv64/Makefile logic to rebase over the changes made
  since the first RISC-V commit.

RISC-V Xen can be built with the following commands:
  $ CONTAINER=riscv64 ./automation/scripts/containerize \
       make XEN_TARGET_ARCH=riscv64 tiny64_defconfig
  $ CONTAINER=riscv64 ./automation/scripts/containerize \
       make XEN_TARGET_ARCH=riscv64 -C xen build

RISC-V Xen can be run with:
  $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
       -kernel xen/xen

To run in debug mode, use the following commands:
 $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
        -kernel xen/xen -s -S
 # In a separate terminal:
 $ riscv64-buildroot-linux-gnu-gdb
 (gdb) target remote :1234
 (gdb) add-symbol-file <xen_src>/xen/xen-syms 0x80200000
 (gdb) hb *0x80200000
 (gdb) c  # it should stop at the instruction j 0x80200000 <start>

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V3:
- Remove include of <asm/config.h> from head.S
---
Changes in V2:
- Update commit message:
  - Add explanation why ALIGN define was changed.
  - Add explanation why section of 'start' function was changed.
- Rework the xen.lds.S linker script. It is mostly based on the Arm
  one, with Arm-specific sections removed.
- Rework the $(TARGET)-syms rule in riscv64/Makefile.
- Remove the asm/types.h header, as it is no longer needed after the
  riscv64/Makefile rework.
- Remove the unneeded define SYMBOLS_DUMMY_OBJ.
---
 xen/arch/riscv/Makefile             |  16 +++
 xen/arch/riscv/arch.mk              |   4 +
 xen/arch/riscv/include/asm/config.h |   9 +-
 xen/arch/riscv/riscv64/Makefile     |   2 +-
 xen/arch/riscv/riscv64/head.S       |   4 +-
 xen/arch/riscv/xen.lds.S            | 158 ++++++++++++++++++++++++++++
 6 files changed, 188 insertions(+), 5 deletions(-)
 create mode 100644 xen/arch/riscv/xen.lds.S

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 942e4ffbc1..74386beb85 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,2 +1,18 @@
+obj-$(CONFIG_RISCV_64) += riscv64/
+
+$(TARGET): $(TARGET)-syms
+	$(OBJCOPY) -O binary -S $< $@
+
+$(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
+	$(NM) -pa --format=sysv $(@D)/$(@F) \
+		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
+		>$(@D)/$(@F).map
+
+$(obj)/xen.lds: $(src)/xen.lds.S FORCE
+	$(call if_changed_dep,cpp_lds_S)
+
+clean-files := $(objtree)/.xen-syms.[0-9]*
+
 .PHONY: include
 include:
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
index ae8fe9dec7..012dc677c3 100644
--- a/xen/arch/riscv/arch.mk
+++ b/xen/arch/riscv/arch.mk
@@ -11,3 +11,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
 # -mcmodel=medlow would force Xen into the lower half.
 
 CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+
+# TODO: Drop override when more of the build is working
+override ALL_OBJS-y = arch/$(TARGET_ARCH)/built_in.o
+override ALL_LIBS-y =
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index e2ae21de61..e10e13ba53 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -1,6 +1,9 @@
 #ifndef __RISCV_CONFIG_H__
 #define __RISCV_CONFIG_H__
 
+#include <xen/const.h>
+#include <xen/page-size.h>
+
 #if defined(CONFIG_RISCV_64)
 # define LONG_BYTEORDER 3
 # define ELFSIZE 64
@@ -28,7 +31,7 @@
 
 /* Linkage for RISCV */
 #ifdef __ASSEMBLY__
-#define ALIGN .align 2
+#define ALIGN .align 4
 
 #define ENTRY(name)                                \
   .globl name;                                     \
@@ -36,6 +39,10 @@
   name:
 #endif
 
+#define XEN_VIRT_START  _AT(UL,0x00200000)
+
+#define SMP_CACHE_BYTES (1 << 6)
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/riscv64/Makefile b/xen/arch/riscv/riscv64/Makefile
index 15a4a65f66..3340058c08 100644
--- a/xen/arch/riscv/riscv64/Makefile
+++ b/xen/arch/riscv/riscv64/Makefile
@@ -1 +1 @@
-extra-y += head.o
+obj-y += head.o
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 0dbc27ba75..990edb70a0 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,6 +1,4 @@
-#include <asm/config.h>
-
-        .text
+        .section .text.header, "ax", %progbits
 
 ENTRY(start)
         j  start
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
new file mode 100644
index 0000000000..ca57cce75c
--- /dev/null
+++ b/xen/arch/riscv/xen.lds.S
@@ -0,0 +1,158 @@
+#include <xen/xen.lds.h>
+
+#undef ENTRY
+#undef ALIGN
+
+OUTPUT_ARCH(riscv)
+ENTRY(start)
+
+PHDRS
+{
+    text PT_LOAD ;
+#if defined(BUILD_ID)
+    note PT_NOTE ;
+#endif
+}
+
+SECTIONS
+{
+    . = XEN_VIRT_START;
+    _start = .;
+    .text : {
+        _stext = .;            /* Text section */
+        *(.text.header)
+
+        *(.text.cold)
+        *(.text.unlikely .text.*_unlikely .text.unlikely.*)
+
+        *(.text)
+#ifdef CONFIG_CC_SPLIT_SECTIONS
+        *(.text.*)
+#endif
+
+        *(.fixup)
+        *(.gnu.warning)
+        . = ALIGN(POINTER_ALIGN);
+        _etext = .;             /* End of text section */
+    } :text
+
+    . = ALIGN(PAGE_SIZE);
+    .rodata : {
+        _srodata = .;          /* Read-only data */
+        *(.rodata)
+        *(.rodata.*)
+        *(.data.rel.ro)
+        *(.data.rel.ro.*)
+
+        VPCI_ARRAY
+
+        . = ALIGN(POINTER_ALIGN);
+        _erodata = .;        /* End of read-only data */
+    } :text
+
+    #if defined(BUILD_ID)
+    . = ALIGN(4);
+    .note.gnu.build-id : {
+        __note_gnu_build_id_start = .;
+        *(.note.gnu.build-id)
+        __note_gnu_build_id_end = .;
+    } :note :text
+    #endif
+    _erodata = .;                /* End of read-only data */
+
+    . = ALIGN(PAGE_SIZE);
+    .data.ro_after_init : {
+        __ro_after_init_start = .;
+        *(.data.ro_after_init)
+        . = ALIGN(PAGE_SIZE);
+        __ro_after_init_end = .;
+    } : text
+
+    .data.read_mostly : {
+        *(.data.read_mostly)
+    } :text
+
+    . = ALIGN(PAGE_SIZE);
+    .data : {                    /* Data */
+        *(.data.page_aligned)
+        . = ALIGN(8);
+        __start_schedulers_array = .;
+        *(.data.schedulers)
+        __end_schedulers_array = .;
+
+        HYPFS_PARAM
+
+        *(.data .data.*)
+        CONSTRUCTORS
+    } :text
+
+    . = ALIGN(PAGE_SIZE);             /* Init code and data */
+    __init_begin = .;
+    .init.text : {
+        _sinittext = .;
+        *(.init.text)
+        _einittext = .;
+        . = ALIGN(PAGE_SIZE);        /* Avoid mapping alt insns executable */
+    } :text
+    . = ALIGN(PAGE_SIZE);
+    .init.data : {
+        *(.init.rodata)
+        *(.init.rodata.*)
+
+        . = ALIGN(POINTER_ALIGN);
+        __setup_start = .;
+        *(.init.setup)
+        __setup_end = .;
+
+        __initcall_start = .;
+        *(.initcallpresmp.init)
+        __presmp_initcall_end = .;
+        *(.initcall1.init)
+        __initcall_end = .;
+
+        LOCK_PROFILE_DATA
+
+        *(.init.data)
+        *(.init.data.rel)
+        *(.init.data.rel.*)
+
+        . = ALIGN(8);
+        __ctors_start = .;
+        *(.ctors)
+        *(.init_array)
+        *(SORT(.init_array.*))
+        __ctors_end = .;
+    } :text
+    . = ALIGN(POINTER_ALIGN);
+    __init_end = .;
+
+    .bss : {                     /* BSS */
+        __bss_start = .;
+        *(.bss.stack_aligned)
+        . = ALIGN(PAGE_SIZE);
+        *(.bss.page_aligned)
+        . = ALIGN(PAGE_SIZE);
+        __per_cpu_start = .;
+        *(.bss.percpu.page_aligned)
+        *(.bss.percpu)
+        . = ALIGN(SMP_CACHE_BYTES);
+        *(.bss.percpu.read_mostly)
+        . = ALIGN(SMP_CACHE_BYTES);
+        __per_cpu_data_end = .;
+        *(.bss .bss.*)
+        . = ALIGN(POINTER_ALIGN);
+        __bss_end = .;
+    } :text
+    _end = . ;
+
+    /* Section for the device tree blob (if any). */
+    .dtb : { *(.dtb) } :text
+
+    DWARF2_DEBUG_SECTIONS
+
+    DISCARD_SECTIONS
+
+    STABS_DEBUG_SECTIONS
+
+    ELF_DETAILS_SECTIONS
+}
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 08:41:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 08:41:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471708.731675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDLoA-0004CS-4B; Thu, 05 Jan 2023 08:41:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471708.731675; Thu, 05 Jan 2023 08:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDLoA-0004CL-0Q; Thu, 05 Jan 2023 08:41:06 +0000
Received: by outflank-mailman (input) for mailman id 471708;
 Thu, 05 Jan 2023 08:41:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDLo8-0003sb-G8
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 08:41:04 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id af938764-8cd4-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 09:41:02 +0100 (CET)
Received: by mail-wm1-x32d.google.com with SMTP id
 ay2-20020a05600c1e0200b003d22e3e796dso781108wmb.0
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 00:41:02 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 n17-20020a05600c4f9100b003cffd3c3d6csm1752711wmq.12.2023.01.05.00.41.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 00:41:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af938764-8cd4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=aehIhiOSPO0UY1yboaOg5tDnKtSx7l5gzqO3ILXdJfA=;
        b=XyBn4t9PzsoVBWZ5HogTlhYr0qTNW/9gO9b4BLE4oYOPFemstOkmpmmEJXvftygwGM
         QBCPN4YC7GvxpentTkz1VTfxFLW8YcixdBVvCMXKYspCyzT6DsD9S1T38kEyhMShVXwe
         OFeY4WIN+/G3ubdPacUBSonky/oZzbZVTMgG7s7sGWUlUS1L3c9ON3bH1OUlR0UtskE4
         Jl8wGcyifXpg2af7OIY+r3Skj9nSW1SqpbMdWu755Ilpsb6sGpAvmmlWiHhbxlC13T+U
         GfKP5NhZIjolgGyb/lXgo64wQnrpXqzI9YrudkZvDB75gY1nP0xZaMtraRzTYy7UXaLJ
         5Qag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=aehIhiOSPO0UY1yboaOg5tDnKtSx7l5gzqO3ILXdJfA=;
        b=zCfyEj4iVyNv3DAnXEf55KFxnSWc4mFLNnjEG/TOHT/aeoRXHZkgEmE8CkedgILuC1
         Jc1MYT9FQDJGDTiW5dGsqzpaQHqOr+BW78GfnyOwQc71iWMC8kqF2Yq//ia1oVnIw7O0
         bygWRNlj7wqad7R5NTIRfrZ0pGoVMA5+EtHeuU8JNJbjpjfuMy6rYAhYoKgbWiQCUVXb
         ddczT9SRRYpa59JnVP/nGjuU2SquqMQQSOHMOWtAYS9cQN3v9YEyl1qk2md1UVjjFzhe
         RNiYR0zmtH3V87muIsXc5BG3/YfaM1LhCaMHvvPaJfVGp3FXGP/5/VXuKpAEB27HxpVo
         NCTA==
X-Gm-Message-State: AFqh2kp2xGiFa4EVbRmbuJrRXzUwFy4ZFArtp0pWf/gFR7AdkkHAPRVf
	oZjrm7trJG/Sl+44GRXEMJWLHt9pw3cgIgnl
X-Google-Smtp-Source: AMrXdXsjNND4+vpRTOlet0CB22rhp0yoxL/EPO4XuLf3u5HKhxkQhI4j9IALsg7DG6x7TD2Vl9XO5A==
X-Received: by 2002:a05:600c:1e10:b0:3d1:f496:e25f with SMTP id ay16-20020a05600c1e1000b003d1f496e25fmr38734543wmb.16.1672908061997;
        Thu, 05 Jan 2023 00:41:01 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v3 2/2] automation: add RISC-V 64 cross-build tests for Xen
Date: Thu,  5 Jan 2023 10:40:51 +0200
Message-Id: <a9e0866ec025bc70b4cde78ab782fe11390222d6.1672906559.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1672906559.git.oleksii.kurochko@gmail.com>
References: <cover.1672906559.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add build jobs to cross-compile the Xen hypervisor only for RISC-V 64.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V2:
- Add HYPERVISOR_ONLY to the RISC-V jobs: after rebasing on top of the
  patch series "CI: Fixes/cleanup in preparation for RISCV", it is
  required to set HYPERVISOR_ONLY in build.yaml
---
 automation/gitlab-ci/build.yaml | 45 +++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index e6a9357de3..11eb1c6b82 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -172,6 +172,33 @@
   variables:
     <<: *gcc
 
+.riscv64-cross-build-tmpl:
+  <<: *build
+  variables:
+    XEN_TARGET_ARCH: riscv64
+  tags:
+    - x86_64
+
+.riscv64-cross-build:
+  extends: .riscv64-cross-build-tmpl
+  variables:
+    debug: n
+
+.riscv64-cross-build-debug:
+  extends: .riscv64-cross-build-tmpl
+  variables:
+    debug: y
+
+.gcc-riscv64-cross-build:
+  extends: .riscv64-cross-build
+  variables:
+    <<: *gcc
+
+.gcc-riscv64-cross-build-debug:
+  extends: .riscv64-cross-build-debug
+  variables:
+    <<: *gcc
+
 # Jobs below this line
 
 archlinux-gcc:
@@ -617,6 +644,21 @@ alpine-3.12-gcc-debug-arm64-boot-cpupools:
     EXTRA_XEN_CONFIG: |
       CONFIG_BOOT_TIME_CPUPOOLS=y
 
+# RISC-V 64 cross-build
+riscv64-cross-gcc:
+  extends: .gcc-riscv64-cross-build
+  variables:
+    CONTAINER: archlinux:riscv64
+    KBUILD_DEFCONFIG: tiny64_defconfig
+    HYPERVISOR_ONLY: y
+
+riscv64-cross-gcc-debug:
+  extends: .gcc-riscv64-cross-build-debug
+  variables:
+    CONTAINER: archlinux:riscv64
+    KBUILD_DEFCONFIG: tiny64_defconfig
+    HYPERVISOR_ONLY: y
+
 ## Test artifacts common
 
 .test-jobs-artifact-common:
@@ -692,3 +734,6 @@ kernel-5.10.74-export:
       - binaries/bzImage
   tags:
     - x86_64
+
+# # RISC-V 64 test artifacts
+# # TODO: add RISC-V 64 test artifacts
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 09:02:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 09:02:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471740.731711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDM8Q-0007Z5-1u; Thu, 05 Jan 2023 09:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471740.731711; Thu, 05 Jan 2023 09:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDM8P-0007Yy-Uu; Thu, 05 Jan 2023 09:02:01 +0000
Received: by outflank-mailman (input) for mailman id 471740;
 Thu, 05 Jan 2023 09:02:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5bW3=5C=linux.alibaba.com=jiapeng.chong@srs-se1.protection.inumbo.net>)
 id 1pDM8O-0007Ys-MA
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 09:02:00 +0000
Received: from out30-8.freemail.mail.aliyun.com
 (out30-8.freemail.mail.aliyun.com [115.124.30.8])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99c8f410-8cd7-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 10:01:56 +0100 (CET)
Received: from localhost(mailfrom:jiapeng.chong@linux.alibaba.com
 fp:SMTPD_---0VYvE98r_1672909303) by smtp.aliyun-inc.com;
 Thu, 05 Jan 2023 17:01:51 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99c8f410-8cd7-11ed-b8d0-410ff93cb8f0
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R231e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046050;MF=jiapeng.chong@linux.alibaba.com;NM=1;PH=DS;RN=12;SR=0;TI=SMTPD_---0VYvE98r_1672909303;
From: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
To: jgross@suse.com
Cc: boris.ostrovsky@oracle.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	dave.hansen@linux.intel.com,
	x86@kernel.org,
	hpa@zytor.com,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Jiapeng Chong <jiapeng.chong@linux.alibaba.com>,
	Abaci Robot <abaci@linux.alibaba.com>
Subject: [PATCH v2] x86/xen: Remove the unused function p2m_index()
Date: Thu,  5 Jan 2023 17:01:41 +0800
Message-Id: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>
X-Mailer: git-send-email 2.20.1.7.g153144c
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function p2m_index() is defined in p2m.c but not called anywhere
else, so remove this unused function.

arch/x86/xen/p2m.c:137:24: warning: unused function 'p2m_index'.

Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3557
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
---
Changes in v2:
  - Modify title and submission.

 arch/x86/xen/p2m.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 58db86f7b384..9bdc3b656b2c 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -134,11 +134,6 @@ static inline unsigned p2m_mid_index(unsigned long pfn)
 	return (pfn / P2M_PER_PAGE) % P2M_MID_PER_PAGE;
 }
 
-static inline unsigned p2m_index(unsigned long pfn)
-{
-	return pfn % P2M_PER_PAGE;
-}
-
 static void p2m_top_mfn_init(unsigned long *top)
 {
 	unsigned i;
-- 
2.20.1.7.g153144c



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 09:16:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 09:16:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471748.731722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDMLt-0000fe-7K; Thu, 05 Jan 2023 09:15:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471748.731722; Thu, 05 Jan 2023 09:15:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDMLt-0000fX-4W; Thu, 05 Jan 2023 09:15:57 +0000
Received: by outflank-mailman (input) for mailman id 471748;
 Thu, 05 Jan 2023 09:15:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+8dv=5C=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pDMLr-0000fR-GJ
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 09:15:55 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8cb48600-8cd9-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 10:15:52 +0100 (CET)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-528-JEsIRH6WNHSAI5I6i2C_2w-1; Thu, 05 Jan 2023 04:15:48 -0500
Received: by mail-wm1-f70.google.com with SMTP id
 bd6-20020a05600c1f0600b003d96f7f2396so733188wmb.3
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 01:15:47 -0800 (PST)
Received: from redhat.com ([2.52.151.85]) by smtp.gmail.com with ESMTPSA id
 ba13-20020a0560001c0d00b002a91572638csm2863835wrb.75.2023.01.05.01.15.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 01:15:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cb48600-8cd9-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1672910151;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=WvJnFbsqNB85Ex5Jy9JWOBFwO+5P6BtsVC31mbY8Jko=;
	b=HGzcLJ606AZB/ucSA1nWJFKnp1HlzdqXu6SEDIXKCDKhXiTVK0xrq4KlRntjXb0/Bzwxfu
	FQ505qUj3nQ26ZKspbJwUtwJZH8zigoamnsR1gACilibxPWEOfvg1LAEoJPHpasYgXfVrE
	PX3cNW1H/Oij72e50RlgcFkegM5w45s=
X-MC-Unique: JEsIRH6WNHSAI5I6i2C_2w-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WvJnFbsqNB85Ex5Jy9JWOBFwO+5P6BtsVC31mbY8Jko=;
        b=qtM4et2NBL9rHncv9LHOc0I/G0cF0qsv9qH+yJNv3r0VV4Mpvnxd0Knn7OtT1wFYUf
         2gpAarIXD6DNr4UtrTKC+s36eEpaHLA7tCddPEsayz86B/2eOQSMMq69xRzmmuf99KSB
         7Xh1Fu/Im5QmIR0mzdwraxNNZMkYpbqzYH9aYR+y069NnyPxuQMcHlkaqUDidU5r5lwU
         FnO5SltrYVGOuLt0azgi5KSpWusSH6aq/jD7Ul4VRJ9V0beJHymBBQhLrkS6ybIgF42Z
         Q7ZChzJ0tPQalCEp5oyJbmx9boUPpfOesaBtmCQf0ztSlOqGsTyKydOveO/Wu6jEazlS
         V2GA==
X-Gm-Message-State: AFqh2kquoTlnpfIvXgMNGM0SPY/BowL0Vxb8Ba2d4rxkw+4pvMgSnC8y
	0BqGiBcHA2SXApVHsjdB6SsU8B2HAOo9rRfdO+PerbBWqBJLMnXDKSmpf4KtDLattSSYt5JXBjz
	1GHKtscOtaCC7aDqnW/UIJ5MwwiA=
X-Received: by 2002:a5d:63cb:0:b0:293:b54e:4f09 with SMTP id c11-20020a5d63cb000000b00293b54e4f09mr10584293wrw.4.1672910146658;
        Thu, 05 Jan 2023 01:15:46 -0800 (PST)
X-Google-Smtp-Source: AMrXdXtL7rkgthxuayEoRod50MDK5WIP+ZyW54cKv1y5T8C/qx/fxyUAvKbsu59WxqwvfyS7nCS0Gg==
X-Received: by 2002:a5d:63cb:0:b0:293:b54e:4f09 with SMTP id c11-20020a5d63cb000000b00293b54e4f09mr10584279wrw.4.1672910146407;
        Thu, 05 Jan 2023 01:15:46 -0800 (PST)
Date: Thu, 5 Jan 2023 04:15:41 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Markus Armbruster <armbru@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Igor Mammedov <imammedo@redhat.com>, Ani Sinha <ani@anisinha.ca>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Yuval Shaia <yuval.shaia.ml@gmail.com>, Fam Zheng <fam@euphon.net>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Ben Widawsky <ben.widawsky@intel.com>,
	Jonathan Cameron <jonathan.cameron@huawei.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Andrey Smirnov <andrew.smirnov@gmail.com>,
	Xiaojuan Yang <yangxiaojuan@loongson.cn>,
	Song Gao <gaosong@loongson.cn>,
	=?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>,
	Paul Burton <paulburton@kernel.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org, qemu-arm@nongnu.org,
	qemu-ppc@nongnu.org
Subject: [PULL 27/51] include/hw/pci: Break inclusion loop pci_bridge.h and
 cxl.h
Message-ID: <20230105091310.263867-28-mst@redhat.com>
References: <20230105091310.263867-1-mst@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230105091310.263867-1-mst@redhat.com>
X-Mailer: git-send-email 2.27.0.106.g8ac3dc51b1
X-Mutt-Fcc: =sent
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

From: Markus Armbruster <armbru@redhat.com>

hw/pci/pci_bridge.h and hw/cxl/cxl.h include each other.

Fortunately, breaking the loop is merely a matter of deleting
unnecessary includes from headers, and adding them back in places
where they are now missing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221222100330.380143-2-armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/alpha/alpha_sys.h              | 1 -
 hw/rdma/rdma_utils.h              | 1 -
 hw/rdma/vmw/pvrdma.h              | 1 -
 hw/usb/hcd-ehci.h                 | 1 -
 hw/xen/xen_pt.h                   | 1 -
 include/hw/cxl/cxl.h              | 1 -
 include/hw/cxl/cxl_cdat.h         | 1 +
 include/hw/cxl/cxl_device.h       | 1 +
 include/hw/cxl/cxl_pci.h          | 2 --
 include/hw/i386/ich9.h            | 4 ----
 include/hw/i386/x86-iommu.h       | 1 -
 include/hw/isa/vt82c686.h         | 1 -
 include/hw/pci-host/designware.h  | 3 ---
 include/hw/pci-host/i440fx.h      | 2 +-
 include/hw/pci-host/ls7a.h        | 2 --
 include/hw/pci-host/pnv_phb3.h    | 2 --
 include/hw/pci-host/pnv_phb4.h    | 3 +--
 include/hw/pci-host/xilinx-pcie.h | 1 -
 include/hw/pci/pcie.h             | 1 -
 include/hw/virtio/virtio-scsi.h   | 1 -
 hw/alpha/pci.c                    | 1 +
 hw/alpha/typhoon.c                | 2 +-
 hw/i386/acpi-build.c              | 2 +-
 hw/pci-bridge/i82801b11.c         | 2 +-
 hw/rdma/rdma_utils.c              | 1 +
 hw/scsi/virtio-scsi.c             | 1 +
 26 files changed, 10 insertions(+), 30 deletions(-)

diff --git a/hw/alpha/alpha_sys.h b/hw/alpha/alpha_sys.h
index 2263e821da..a303c58438 100644
--- a/hw/alpha/alpha_sys.h
+++ b/hw/alpha/alpha_sys.h
@@ -5,7 +5,6 @@
 
 #include "target/alpha/cpu-qom.h"
 #include "hw/pci/pci.h"
-#include "hw/pci/pci_host.h"
 #include "hw/boards.h"
 #include "hw/intc/i8259.h"
 
diff --git a/hw/rdma/rdma_utils.h b/hw/rdma/rdma_utils.h
index 0c6414e7e0..54e4f56edd 100644
--- a/hw/rdma/rdma_utils.h
+++ b/hw/rdma/rdma_utils.h
@@ -18,7 +18,6 @@
 #define RDMA_UTILS_H
 
 #include "qemu/error-report.h"
-#include "hw/pci/pci.h"
 #include "sysemu/dma.h"
 
 #define rdma_error_report(fmt, ...) \
diff --git a/hw/rdma/vmw/pvrdma.h b/hw/rdma/vmw/pvrdma.h
index d08965d3e2..0caf95ede8 100644
--- a/hw/rdma/vmw/pvrdma.h
+++ b/hw/rdma/vmw/pvrdma.h
@@ -18,7 +18,6 @@
 
 #include "qemu/units.h"
 #include "qemu/notify.h"
-#include "hw/pci/pci.h"
 #include "hw/pci/msix.h"
 #include "chardev/char-fe.h"
 #include "hw/net/vmxnet3_defs.h"
diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
index a173707d9b..4d4b2830b7 100644
--- a/hw/usb/hcd-ehci.h
+++ b/hw/usb/hcd-ehci.h
@@ -23,7 +23,6 @@
 #include "sysemu/dma.h"
 #include "hw/pci/pci.h"
 #include "hw/sysbus.h"
-#include "qom/object.h"
 
 #ifndef EHCI_DEBUG
 #define EHCI_DEBUG   0
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index e7c4316a7d..cf10fc7bbf 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -2,7 +2,6 @@
 #define XEN_PT_H
 
 #include "hw/xen/xen_common.h"
-#include "hw/pci/pci.h"
 #include "xen-host-pci-device.h"
 #include "qom/object.h"
 
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 38e0e271d5..5129557bee 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -13,7 +13,6 @@
 
 #include "qapi/qapi-types-machine.h"
 #include "qapi/qapi-visit-machine.h"
-#include "hw/pci/pci_bridge.h"
 #include "hw/pci/pci_host.h"
 #include "cxl_pci.h"
 #include "cxl_component.h"
diff --git a/include/hw/cxl/cxl_cdat.h b/include/hw/cxl/cxl_cdat.h
index e9eda00142..7f67638685 100644
--- a/include/hw/cxl/cxl_cdat.h
+++ b/include/hw/cxl/cxl_cdat.h
@@ -11,6 +11,7 @@
 #define CXL_CDAT_H
 
 #include "hw/cxl/cxl_pci.h"
+#include "hw/pci/pcie_doe.h"
 
 /*
  * Reference:
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 449b0edfe9..fd475b947b 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -10,6 +10,7 @@
 #ifndef CXL_DEVICE_H
 #define CXL_DEVICE_H
 
+#include "hw/pci/pci.h"
 #include "hw/register.h"
 
 /*
diff --git a/include/hw/cxl/cxl_pci.h b/include/hw/cxl/cxl_pci.h
index 3cb79eca1e..aca14845ab 100644
--- a/include/hw/cxl/cxl_pci.h
+++ b/include/hw/cxl/cxl_pci.h
@@ -11,8 +11,6 @@
 #define CXL_PCI_H
 
 #include "qemu/compiler.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pcie.h"
 #include "hw/cxl/cxl_cdat.h"
 
 #define CXL_VENDOR_ID 0x1e98
diff --git a/include/hw/i386/ich9.h b/include/hw/i386/ich9.h
index 23ee8e371b..222781e8b9 100644
--- a/include/hw/i386/ich9.h
+++ b/include/hw/i386/ich9.h
@@ -5,12 +5,8 @@
 #include "hw/sysbus.h"
 #include "hw/i386/pc.h"
 #include "hw/isa/apm.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pcie_host.h"
-#include "hw/pci/pci_bridge.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/ich9.h"
-#include "hw/pci/pci_bus.h"
 #include "qom/object.h"
 
 void ich9_lpc_set_irq(void *opaque, int irq_num, int level);
diff --git a/include/hw/i386/x86-iommu.h b/include/hw/i386/x86-iommu.h
index 7637edb430..8d8d53b18b 100644
--- a/include/hw/i386/x86-iommu.h
+++ b/include/hw/i386/x86-iommu.h
@@ -21,7 +21,6 @@
 #define HW_I386_X86_IOMMU_H
 
 #include "hw/sysbus.h"
-#include "hw/pci/pci.h"
 #include "hw/pci/msi.h"
 #include "qom/object.h"
 
diff --git a/include/hw/isa/vt82c686.h b/include/hw/isa/vt82c686.h
index eaa07881c5..e273cd38dc 100644
--- a/include/hw/isa/vt82c686.h
+++ b/include/hw/isa/vt82c686.h
@@ -1,7 +1,6 @@
 #ifndef HW_VT82C686_H
 #define HW_VT82C686_H
 
-#include "hw/pci/pci.h"
 
 #define TYPE_VT82C686B_ISA "vt82c686b-isa"
 #define TYPE_VT82C686B_USB_UHCI "vt82c686b-usb-uhci"
diff --git a/include/hw/pci-host/designware.h b/include/hw/pci-host/designware.h
index 6d9b51ae67..908f3d946b 100644
--- a/include/hw/pci-host/designware.h
+++ b/include/hw/pci-host/designware.h
@@ -22,9 +22,6 @@
 #define DESIGNWARE_H
 
 #include "hw/sysbus.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pci_bus.h"
-#include "hw/pci/pcie_host.h"
 #include "hw/pci/pci_bridge.h"
 #include "qom/object.h"
 
diff --git a/include/hw/pci-host/i440fx.h b/include/hw/pci-host/i440fx.h
index d02bf1ed6b..fc93e22732 100644
--- a/include/hw/pci-host/i440fx.h
+++ b/include/hw/pci-host/i440fx.h
@@ -11,7 +11,7 @@
 #ifndef HW_PCI_I440FX_H
 #define HW_PCI_I440FX_H
 
-#include "hw/pci/pci_bus.h"
+#include "hw/pci/pci.h"
 #include "hw/pci-host/pam.h"
 #include "qom/object.h"
 
diff --git a/include/hw/pci-host/ls7a.h b/include/hw/pci-host/ls7a.h
index df7fa55a30..b27db8e2ca 100644
--- a/include/hw/pci-host/ls7a.h
+++ b/include/hw/pci-host/ls7a.h
@@ -8,8 +8,6 @@
 #ifndef HW_LS7A_H
 #define HW_LS7A_H
 
-#include "hw/pci/pci.h"
-#include "hw/pci/pcie_host.h"
 #include "hw/pci-host/pam.h"
 #include "qemu/units.h"
 #include "qemu/range.h"
diff --git a/include/hw/pci-host/pnv_phb3.h b/include/hw/pci-host/pnv_phb3.h
index 4854f6d2f6..f791ebda9b 100644
--- a/include/hw/pci-host/pnv_phb3.h
+++ b/include/hw/pci-host/pnv_phb3.h
@@ -10,8 +10,6 @@
 #ifndef PCI_HOST_PNV_PHB3_H
 #define PCI_HOST_PNV_PHB3_H
 
-#include "hw/pci/pcie_host.h"
-#include "hw/pci/pcie_port.h"
 #include "hw/ppc/xics.h"
 #include "qom/object.h"
 #include "hw/pci-host/pnv_phb.h"
diff --git a/include/hw/pci-host/pnv_phb4.h b/include/hw/pci-host/pnv_phb4.h
index 50d4faa001..d9cea3f952 100644
--- a/include/hw/pci-host/pnv_phb4.h
+++ b/include/hw/pci-host/pnv_phb4.h
@@ -10,8 +10,7 @@
 #ifndef PCI_HOST_PNV_PHB4_H
 #define PCI_HOST_PNV_PHB4_H
 
-#include "hw/pci/pcie_host.h"
-#include "hw/pci/pcie_port.h"
+#include "hw/pci/pci_bus.h"
 #include "hw/ppc/xive.h"
 #include "qom/object.h"
 
diff --git a/include/hw/pci-host/xilinx-pcie.h b/include/hw/pci-host/xilinx-pcie.h
index 89be88d87f..e1b3c1c280 100644
--- a/include/hw/pci-host/xilinx-pcie.h
+++ b/include/hw/pci-host/xilinx-pcie.h
@@ -21,7 +21,6 @@
 #define HW_XILINX_PCIE_H
 
 #include "hw/sysbus.h"
-#include "hw/pci/pci.h"
 #include "hw/pci/pci_bridge.h"
 #include "hw/pci/pcie_host.h"
 #include "qom/object.h"
diff --git a/include/hw/pci/pcie.h b/include/hw/pci/pcie.h
index 698d3de851..798a262a0a 100644
--- a/include/hw/pci/pcie.h
+++ b/include/hw/pci/pcie.h
@@ -26,7 +26,6 @@
 #include "hw/pci/pcie_aer.h"
 #include "hw/pci/pcie_sriov.h"
 #include "hw/hotplug.h"
-#include "hw/pci/pcie_doe.h"
 
 typedef enum {
     /* for attention and power indicator */
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index a36aad9c86..37b75e15e3 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -20,7 +20,6 @@
 #define VIRTIO_SCSI_SENSE_SIZE 0
 #include "standard-headers/linux/virtio_scsi.h"
 #include "hw/virtio/virtio.h"
-#include "hw/pci/pci.h"
 #include "hw/scsi/scsi.h"
 #include "chardev/char-fe.h"
 #include "sysemu/iothread.h"
diff --git a/hw/alpha/pci.c b/hw/alpha/pci.c
index 72251fcdf0..7c18297177 100644
--- a/hw/alpha/pci.c
+++ b/hw/alpha/pci.c
@@ -7,6 +7,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "hw/pci/pci_host.h"
 #include "alpha_sys.h"
 #include "qemu/log.h"
 #include "trace.h"
diff --git a/hw/alpha/typhoon.c b/hw/alpha/typhoon.c
index bd39c8ca86..49a80550c5 100644
--- a/hw/alpha/typhoon.c
+++ b/hw/alpha/typhoon.c
@@ -10,10 +10,10 @@
 #include "qemu/module.h"
 #include "qemu/units.h"
 #include "qapi/error.h"
+#include "hw/pci/pci_host.h"
 #include "cpu.h"
 #include "hw/irq.h"
 #include "alpha_sys.h"
-#include "qom/object.h"
 
 
 #define TYPE_TYPHOON_PCI_HOST_BRIDGE "typhoon-pcihost"
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index aa15b11cde..127c4e2d50 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -27,7 +27,7 @@
 #include "acpi-common.h"
 #include "qemu/bitmap.h"
 #include "qemu/error-report.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_bridge.h"
 #include "hw/cxl/cxl.h"
 #include "hw/core/cpu.h"
 #include "target/i386/cpu.h"
diff --git a/hw/pci-bridge/i82801b11.c b/hw/pci-bridge/i82801b11.c
index d9f224818b..f3b4a14611 100644
--- a/hw/pci-bridge/i82801b11.c
+++ b/hw/pci-bridge/i82801b11.c
@@ -42,7 +42,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_bridge.h"
 #include "migration/vmstate.h"
 #include "qemu/module.h"
 #include "hw/i386/ich9.h"
diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
index 5a7ef63ad2..77008552f4 100644
--- a/hw/rdma/rdma_utils.c
+++ b/hw/rdma/rdma_utils.c
@@ -14,6 +14,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "hw/pci/pci.h"
 #include "trace.h"
 #include "rdma_utils.h"
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 6f6e2e32ba..2b649ca976 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -22,6 +22,7 @@
 #include "qemu/iov.h"
 #include "qemu/module.h"
 #include "sysemu/block-backend.h"
+#include "sysemu/dma.h"
 #include "hw/qdev-properties.h"
 #include "hw/scsi/scsi.h"
 #include "scsi/constants.h"
-- 
MST



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 09:16:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 09:16:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471749.731733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDMMC-00011z-FI; Thu, 05 Jan 2023 09:16:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471749.731733; Thu, 05 Jan 2023 09:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDMMC-00011s-CH; Thu, 05 Jan 2023 09:16:16 +0000
Received: by outflank-mailman (input) for mailman id 471749;
 Thu, 05 Jan 2023 09:16:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+8dv=5C=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pDMMA-0000fR-OH
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 09:16:15 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 977c9566-8cd9-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 10:16:10 +0100 (CET)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-190-4sSTT-oqNt-XxeRmToRbUw-1; Thu, 05 Jan 2023 04:16:07 -0500
Received: by mail-wm1-f70.google.com with SMTP id
 c23-20020a7bc857000000b003d97c8d4935so15190818wml.8
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 01:16:07 -0800 (PST)
Received: from redhat.com ([2.52.151.85]) by smtp.gmail.com with ESMTPSA id
 g41-20020a05600c4ca900b003cfd0bd8c0asm1541324wmp.30.2023.01.05.01.15.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 01:16:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 977c9566-8cd9-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1672910169;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=9E0uGbaHWSARDpwjIvxsxquPxCSk2QGI6Ma+Y/5MGDY=;
	b=OwnTmi68GSJuI4G7MsG6R8JuxBfwFMowpHaa+4inKJc3c05jbHtQw0UscNKWxSAVnCRS+0
	V+f1Srh9j0MvRE3gwVwlio4QGGTlst9ClHncIKhHutI1mmaAJXwyCK9pie5mvcwoRI9+PB
	AjtLQxp6goMwXOfVOzVxhFDCJeZdxrI=
X-MC-Unique: 4sSTT-oqNt-XxeRmToRbUw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=9E0uGbaHWSARDpwjIvxsxquPxCSk2QGI6Ma+Y/5MGDY=;
        b=7/nMtyW0wSGa2lhuXYUFgcmf1RWAaynhcA2V1IQfnfIR/2JGAMEdwQU6erXavZPyyy
         hCj1NQ0dFv5acKz7VNK2JFuEvYLsjYNreFOW4q92Y5gXbTXWGyjnnypEX5YXwotDjZjM
         Jj03V6u25GGJTPAOu5lEZNELrGavpEJbkUXxjQtzTzJbCB0aYy/VLAVLsI9CBCrvaXUC
         rTJJ219yjOAm4h5+ZxlV3XNe5rEfLda1xasYFebBMn1VHUYu1IeyeqEYCGuWct01t7pM
         52h92gbunLR5NyLLxSl0FnNFsXjwYaI4zjhjtQBlakJ3/BENWdCPTyr5WoZNWTNwY3VX
         z5WQ==
X-Gm-Message-State: AFqh2krhZusXrtSlmZAq5vcySSGhDJTvP17Rssd/GV69TN68X+pAFq9B
	VsJ/9GP6Twd0+K9CWl6tJ6Rb7QfDRbrDUFEbjSvul+r3fWMD02bsHy04QxkmAxXcvqO0KCv40/O
	QXe0NhHMhKy8HbbdTc6da3Nr+X0g=
X-Received: by 2002:a05:600c:44d4:b0:3cf:7925:7a3 with SMTP id f20-20020a05600c44d400b003cf792507a3mr35414326wmo.24.1672910165830;
        Thu, 05 Jan 2023 01:16:05 -0800 (PST)
X-Google-Smtp-Source: AMrXdXtomsIn8w+yG1vNX43EiWLehS044MzumjQUALNU2pfPKqSwYO6JSwdk86MB9u4hPAZrggu7uA==
X-Received: by 2002:a05:600c:44d4:b0:3cf:7925:7a3 with SMTP id f20-20020a05600c44d400b003cf792507a3mr35414200wmo.24.1672910164789;
        Thu, 05 Jan 2023 01:16:04 -0800 (PST)
Date: Thu, 5 Jan 2023 04:15:55 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Markus Armbruster <armbru@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>, Ani Sinha <ani@anisinha.ca>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?utf-8?Q?Marc-Andr=C3=A9?= Lureau <marcandre.lureau@redhat.com>,
	Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	BALATON Zoltan <balaton@eik.bme.hu>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	John Snow <jsnow@redhat.com>, Alberto Garcia <berto@igalia.com>,
	Corey Minyard <minyard@acm.org>,
	=?utf-8?B?SGVydsOp?= Poussineau <hpoussin@reactos.org>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Pavel Pisa <pisa@cmp.felk.cvut.cz>,
	Vikram Garhwal <fnu.vikram@xilinx.com>,
	Jason Wang <jasowang@redhat.com>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	Stefan Weil <sw@weilnetz.de>, Jiri Pirko <jiri@resnulli.us>,
	Sven Schnelle <svens@stackframe.org>,
	Keith Busch <kbusch@kernel.org>, Klaus Jensen <its@irrelevant.dk>,
	Huacai Chen <chenhuacai@kernel.org>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>, Helge Deller <deller@gmx.de>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Magnus Damm <magnus.damm@gmail.com>,
	Daniel Henrique Barboza <danielhb413@gmail.com>,
	=?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Greg Kurz <groug@kaod.org>, Yuval Shaia <yuval.shaia.ml@gmail.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Eric Farman <farman@linux.ibm.com>,
	David Hildenbrand <david@redhat.com>,
	Ilya Leoshkevich <iii@linux.ibm.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Thomas Huth <thuth@redhat.com>, Fam Zheng <fam@euphon.net>,
	Alex Williamson <alex.williamson@redhat.com>,
	Beniamino Galvani <b.galvani@gmail.com>,
	Ben Widawsky <ben.widawsky@intel.com>,
	Jonathan Cameron <jonathan.cameron@huawei.com>,
	Elena Ufimtseva <elena.ufimtseva@oracle.com>,
	Jagannathan Raman <jag.raman@oracle.com>,
	John G Johnson <john.g.johnson@oracle.com>,
	Bin Meng <bin.meng@windriver.com>,
	Alexander Bulekov <alxndr@bu.edu>, Bandan Das <bsd@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Darren Kenny <darren.kenny@oracle.com>,
	Qiuhao Li <Qiuhao.Li@outlook.com>,
	Laurent Vivier <lvivier@redhat.com>, qemu-ppc@nongnu.org,
	xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
	qemu-arm@nongnu.org, qemu-s390x@nongnu.org
Subject: [PULL 31/51] include/hw/pci: Split pci_device.h off pci.h
Message-ID: <20230105091310.263867-32-mst@redhat.com>
References: <20230105091310.263867-1-mst@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230105091310.263867-1-mst@redhat.com>
X-Mailer: git-send-email 2.27.0.106.g8ac3dc51b1
X-Mutt-Fcc: =sent
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

From: Markus Armbruster <armbru@redhat.com>

PCIDeviceClass and PCIDevice are defined in pci.h.  Many users of the
header don't actually need them.  Similar structs live in their own
headers: PCIBusClass and PCIBus in pci_bus.h, PCIBridge in
pci_bridge.h, PCIHostBridgeClass and PCIHostState in pci_host.h,
PCIExpressHost in pcie_host.h, and PCIERootPortClass, PCIEPort, and
PCIESlot in pcie_port.h.

Move PCIDeviceClass and PCIDevice to new pci_device.h, along with
the code that needs them.  Adjust include directives.

This also enables the next commit.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221222100330.380143-6-armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/display/ati_int.h             |   2 +-
 hw/display/qxl.h                 |   3 +-
 hw/ide/ahci_internal.h           |   2 +-
 hw/net/vmxnet3_defs.h            |   2 +-
 hw/nvme/nvme.h                   |   2 +-
 hw/rdma/vmw/pvrdma.h             |   1 +
 hw/scsi/mptsas.h                 |   2 +-
 hw/usb/hcd-ehci.h                |   2 +-
 hw/usb/hcd-uhci.h                |   2 +-
 hw/usb/hcd-xhci-pci.h            |   1 +
 hw/vfio/pci.h                    |   2 +-
 include/hw/acpi/piix4.h          |   2 +-
 include/hw/arm/allwinner-a10.h   |   1 +
 include/hw/cxl/cxl_device.h      |   2 +-
 include/hw/ide/pci.h             |   2 +-
 include/hw/misc/macio/macio.h    |   2 +-
 include/hw/pci-host/gpex.h       |   2 +-
 include/hw/pci-host/i440fx.h     |   2 +-
 include/hw/pci-host/q35.h        |   2 +-
 include/hw/pci-host/sabre.h      |   2 +-
 include/hw/pci/msi.h             |   2 +-
 include/hw/pci/pci.h             | 344 ------------------------------
 include/hw/pci/pci_bridge.h      |   2 +-
 include/hw/pci/pci_device.h      | 350 +++++++++++++++++++++++++++++++
 include/hw/pci/pcie_port.h       |   1 +
 include/hw/pci/shpc.h            |   2 +-
 include/hw/remote/iohub.h        |   2 +-
 include/hw/remote/proxy.h        |   2 +-
 include/hw/sd/sdhci.h            |   2 +-
 include/hw/southbridge/piix.h    |   3 +-
 include/hw/xen/xen_common.h      |   2 +-
 hw/acpi/erst.c                   |   2 +-
 hw/audio/ac97.c                  |   2 +-
 hw/audio/es1370.c                |   2 +-
 hw/audio/via-ac97.c              |   2 +-
 hw/char/serial-pci-multi.c       |   2 +-
 hw/char/serial-pci.c             |   2 +-
 hw/core/qdev-properties-system.c |   1 +
 hw/display/bochs-display.c       |   2 +-
 hw/display/cirrus_vga.c          |   2 +-
 hw/display/sm501.c               |   2 +-
 hw/display/vga-pci.c             |   2 +-
 hw/display/vmware_vga.c          |   2 +-
 hw/i386/xen/xen_pvdevice.c       |   2 +-
 hw/ipack/tpci200.c               |   2 +-
 hw/ipmi/pci_ipmi_bt.c            |   2 +-
 hw/ipmi/pci_ipmi_kcs.c           |   2 +-
 hw/isa/i82378.c                  |   2 +-
 hw/mips/gt64xxx_pci.c            |   2 +-
 hw/misc/pci-testdev.c            |   2 +-
 hw/misc/pvpanic-pci.c            |   2 +-
 hw/net/can/can_kvaser_pci.c      |   2 +-
 hw/net/can/can_mioe3680_pci.c    |   2 +-
 hw/net/can/can_pcm3680_pci.c     |   2 +-
 hw/net/can/ctucan_pci.c          |   2 +-
 hw/net/e1000.c                   |   2 +-
 hw/net/e1000x_common.c           |   2 +-
 hw/net/eepro100.c                |   2 +-
 hw/net/ne2000-pci.c              |   2 +-
 hw/net/net_tx_pkt.c              |   2 +-
 hw/net/pcnet-pci.c               |   2 +-
 hw/net/rocker/rocker.c           |   2 +-
 hw/net/rocker/rocker_desc.c      |   2 +-
 hw/net/rtl8139.c                 |   2 +-
 hw/net/sungem.c                  |   2 +-
 hw/net/sunhme.c                  |   2 +-
 hw/net/tulip.c                   |   2 +-
 hw/net/virtio-net.c              |   2 +-
 hw/pci-host/bonito.c             |   2 +-
 hw/pci-host/dino.c               |   2 +-
 hw/pci-host/grackle.c            |   2 +-
 hw/pci-host/mv64361.c            |   2 +-
 hw/pci-host/ppce500.c            |   2 +-
 hw/pci-host/raven.c              |   2 +-
 hw/pci-host/sh_pci.c             |   2 +-
 hw/pci-host/uninorth.c           |   2 +-
 hw/pci-host/versatile.c          |   2 +-
 hw/pci/pci-hmp-cmds.c            |   1 +
 hw/pci/pcie_host.c               |   2 +-
 hw/pci/pcie_sriov.c              |   2 +-
 hw/pci/slotid_cap.c              |   2 +-
 hw/ppc/ppc440_pcix.c             |   2 +-
 hw/ppc/ppc4xx_pci.c              |   2 +-
 hw/ppc/spapr_pci_vfio.c          |   1 +
 hw/rdma/rdma_utils.c             |   2 +-
 hw/s390x/s390-pci-inst.c         |   1 +
 hw/scsi/esp-pci.c                |   2 +-
 hw/scsi/lsi53c895a.c             |   2 +-
 hw/smbios/smbios.c               |   1 +
 hw/usb/hcd-ohci-pci.c            |   2 +-
 hw/watchdog/wdt_i6300esb.c       |   2 +-
 tests/qtest/fuzz/generic_fuzz.c  |   1 +
 ui/util.c                        |   2 +-
 93 files changed, 441 insertions(+), 427 deletions(-)
 create mode 100644 include/hw/pci/pci_device.h

diff --git a/hw/display/ati_int.h b/hw/display/ati_int.h
index 8acb9c7466..e8d3c7af75 100644
--- a/hw/display/ati_int.h
+++ b/hw/display/ati_int.h
@@ -10,7 +10,7 @@
 #define ATI_INT_H
 
 #include "qemu/timer.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/i2c/bitbang_i2c.h"
 #include "vga_int.h"
 #include "qom/object.h"
diff --git a/hw/display/qxl.h b/hw/display/qxl.h
index 7894bd5134..cd82c7a6fe 100644
--- a/hw/display/qxl.h
+++ b/hw/display/qxl.h
@@ -1,8 +1,7 @@
 #ifndef HW_QXL_H
 #define HW_QXL_H
 
-
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "vga_int.h"
 #include "qemu/thread.h"
 
diff --git a/hw/ide/ahci_internal.h b/hw/ide/ahci_internal.h
index 109de9e2d1..303fcd7235 100644
--- a/hw/ide/ahci_internal.h
+++ b/hw/ide/ahci_internal.h
@@ -26,7 +26,7 @@
 
 #include "hw/ide/ahci.h"
 #include "hw/ide/internal.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 
 #define AHCI_MEM_BAR_SIZE         0x1000
 #define AHCI_MAX_PORTS            32
diff --git a/hw/net/vmxnet3_defs.h b/hw/net/vmxnet3_defs.h
index 71440509ca..64034af6d5 100644
--- a/hw/net/vmxnet3_defs.h
+++ b/hw/net/vmxnet3_defs.h
@@ -19,7 +19,7 @@
 
 #include "net/net.h"
 #include "hw/net/vmxnet3.h"
-#include "qom/object.h"
+#include "hw/pci/pci_device.h"
 
 #define TYPE_VMXNET3 "vmxnet3"
 typedef struct VMXNET3State VMXNET3State;
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 7adf042ec3..16da27a69b 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -19,7 +19,7 @@
 #define HW_NVME_NVME_H
 
 #include "qemu/uuid.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/block/block.h"
 
 #include "block/nvme.h"
diff --git a/hw/rdma/vmw/pvrdma.h b/hw/rdma/vmw/pvrdma.h
index 0caf95ede8..4cbc10c980 100644
--- a/hw/rdma/vmw/pvrdma.h
+++ b/hw/rdma/vmw/pvrdma.h
@@ -19,6 +19,7 @@
 #include "qemu/units.h"
 #include "qemu/notify.h"
 #include "hw/pci/msix.h"
+#include "hw/pci/pci_device.h"
 #include "chardev/char-fe.h"
 #include "hw/net/vmxnet3_defs.h"
 
diff --git a/hw/scsi/mptsas.h b/hw/scsi/mptsas.h
index c046497db7..04e97ce3af 100644
--- a/hw/scsi/mptsas.h
+++ b/hw/scsi/mptsas.h
@@ -2,7 +2,7 @@
 #define MPTSAS_H
 
 #include "mpi.h"
-#include "qom/object.h"
+#include "hw/pci/pci_device.h"
 
 #define MPTSAS_NUM_PORTS 8
 #define MPTSAS_MAX_FRAMES 2048     /* Firmware limit at 65535 */
diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
index 4d4b2830b7..2cd821f49e 100644
--- a/hw/usb/hcd-ehci.h
+++ b/hw/usb/hcd-ehci.h
@@ -21,7 +21,7 @@
 #include "qemu/timer.h"
 #include "hw/usb.h"
 #include "sysemu/dma.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/sysbus.h"
 
 #ifndef EHCI_DEBUG
diff --git a/hw/usb/hcd-uhci.h b/hw/usb/hcd-uhci.h
index c85ab7868e..5843af504a 100644
--- a/hw/usb/hcd-uhci.h
+++ b/hw/usb/hcd-uhci.h
@@ -30,7 +30,7 @@
 
 #include "exec/memory.h"
 #include "qemu/timer.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/usb.h"
 
 typedef struct UHCIQueue UHCIQueue;
diff --git a/hw/usb/hcd-xhci-pci.h b/hw/usb/hcd-xhci-pci.h
index c193f79443..08f70ce97c 100644
--- a/hw/usb/hcd-xhci-pci.h
+++ b/hw/usb/hcd-xhci-pci.h
@@ -24,6 +24,7 @@
 #ifndef HW_USB_HCD_XHCI_PCI_H
 #define HW_USB_HCD_XHCI_PCI_H
 
+#include "hw/pci/pci_device.h"
 #include "hw/usb.h"
 #include "hcd-xhci.h"
 
diff --git a/hw/vfio/pci.h b/hw/vfio/pci.h
index 7c236a52f4..177abcc8fb 100644
--- a/hw/vfio/pci.h
+++ b/hw/vfio/pci.h
@@ -13,7 +13,7 @@
 #define HW_VFIO_VFIO_PCI_H
 
 #include "exec/memory.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/vfio/vfio-common.h"
 #include "qemu/event_notifier.h"
 #include "qemu/queue.h"
diff --git a/include/hw/acpi/piix4.h b/include/hw/acpi/piix4.h
index 32686a75c5..be1f8ea80e 100644
--- a/include/hw/acpi/piix4.h
+++ b/include/hw/acpi/piix4.h
@@ -22,7 +22,7 @@
 #ifndef HW_ACPI_PIIX4_H
 #define HW_ACPI_PIIX4_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/cpu_hotplug.h"
 #include "hw/acpi/memory_hotplug.h"
diff --git a/include/hw/arm/allwinner-a10.h b/include/hw/arm/allwinner-a10.h
index a76dc7b84d..f9240ffa64 100644
--- a/include/hw/arm/allwinner-a10.h
+++ b/include/hw/arm/allwinner-a10.h
@@ -4,6 +4,7 @@
 #include "qemu/error-report.h"
 #include "hw/char/serial.h"
 #include "hw/arm/boot.h"
+#include "hw/pci/pci_device.h"
 #include "hw/timer/allwinner-a10-pit.h"
 #include "hw/intc/allwinner-a10-pic.h"
 #include "hw/net/allwinner_emac.h"
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 3f91969db0..250adf18b2 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -11,7 +11,7 @@
 #define CXL_DEVICE_H
 
 #include "hw/cxl/cxl_component.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/register.h"
 
 /*
diff --git a/include/hw/ide/pci.h b/include/hw/ide/pci.h
index d8384e1c42..2a6284acac 100644
--- a/include/hw/ide/pci.h
+++ b/include/hw/ide/pci.h
@@ -2,7 +2,7 @@
 #define HW_IDE_PCI_H
 
 #include "hw/ide/internal.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "qom/object.h"
 
 #define BM_STATUS_DMAING 0x01
diff --git a/include/hw/misc/macio/macio.h b/include/hw/misc/macio/macio.h
index 95d30a1745..86df2c2b60 100644
--- a/include/hw/misc/macio/macio.h
+++ b/include/hw/misc/macio/macio.h
@@ -27,7 +27,7 @@
 #define MACIO_H
 
 #include "hw/char/escc.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/ide/internal.h"
 #include "hw/intc/heathrow_pic.h"
 #include "hw/misc/macio/cuda.h"
diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
index fcf8b63820..b0240bd768 100644
--- a/include/hw/pci-host/gpex.h
+++ b/include/hw/pci-host/gpex.h
@@ -22,7 +22,7 @@
 
 #include "exec/hwaddr.h"
 #include "hw/sysbus.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pcie_host.h"
 #include "qom/object.h"
 
diff --git a/include/hw/pci-host/i440fx.h b/include/hw/pci-host/i440fx.h
index fc93e22732..bf57216c78 100644
--- a/include/hw/pci-host/i440fx.h
+++ b/include/hw/pci-host/i440fx.h
@@ -11,7 +11,7 @@
 #ifndef HW_PCI_I440FX_H
 #define HW_PCI_I440FX_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci-host/pam.h"
 #include "qom/object.h"
 
diff --git a/include/hw/pci-host/q35.h b/include/hw/pci-host/q35.h
index ab989698ef..e89329c51e 100644
--- a/include/hw/pci-host/q35.h
+++ b/include/hw/pci-host/q35.h
@@ -22,7 +22,7 @@
 #ifndef HW_Q35_H
 #define HW_Q35_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pcie_host.h"
 #include "hw/pci-host/pam.h"
 #include "qemu/units.h"
diff --git a/include/hw/pci-host/sabre.h b/include/hw/pci-host/sabre.h
index 01190241bb..d12de84ea2 100644
--- a/include/hw/pci-host/sabre.h
+++ b/include/hw/pci-host/sabre.h
@@ -1,7 +1,7 @@
 #ifndef HW_PCI_HOST_SABRE_H
 #define HW_PCI_HOST_SABRE_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_host.h"
 #include "hw/sparc/sun4u_iommu.h"
 #include "qom/object.h"
diff --git a/include/hw/pci/msi.h b/include/hw/pci/msi.h
index 58aa576215..ee8ee469a6 100644
--- a/include/hw/pci/msi.h
+++ b/include/hw/pci/msi.h
@@ -21,7 +21,7 @@
 #ifndef QEMU_MSI_H
 #define QEMU_MSI_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 
 struct MSIMessage {
     uint64_t address;
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 5ca2a9df58..7048a373d1 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -166,7 +166,6 @@ enum {
 #define QEMU_PCI_VGA_IO_HI_SIZE 0x20
 
 #include "hw/pci/pci_regs.h"
-#include "hw/pci/pcie.h"
 
 /* PCI HEADER_TYPE */
 #define  PCI_HEADER_TYPE_MULTI_FUNCTION 0x80
@@ -210,23 +209,6 @@ enum {
     QEMU_PCIE_CAP_CXL = (1 << QEMU_PCIE_CXL_BITNR),
 };
 
-#define TYPE_PCI_DEVICE "pci-device"
-typedef struct PCIDeviceClass PCIDeviceClass;
-DECLARE_OBJ_CHECKERS(PCIDevice, PCIDeviceClass,
-                     PCI_DEVICE, TYPE_PCI_DEVICE)
-
-/*
- * Implemented by devices that can be plugged on CXL buses. In the spec, this is
- * actually a "CXL Component, but we name it device to match the PCI naming.
- */
-#define INTERFACE_CXL_DEVICE "cxl-device"
-
-/* Implemented by devices that can be plugged on PCI Express buses */
-#define INTERFACE_PCIE_DEVICE "pci-express-device"
-
-/* Implemented by devices that can be plugged on Conventional PCI buses */
-#define INTERFACE_CONVENTIONAL_PCI_DEVICE "conventional-pci-device"
-
 typedef struct PCIINTxRoute {
     enum {
         PCI_INTX_ENABLED,
@@ -236,24 +218,6 @@ typedef struct PCIINTxRoute {
     int irq;
 } PCIINTxRoute;
 
-struct PCIDeviceClass {
-    DeviceClass parent_class;
-
-    void (*realize)(PCIDevice *dev, Error **errp);
-    PCIUnregisterFunc *exit;
-    PCIConfigReadFunc *config_read;
-    PCIConfigWriteFunc *config_write;
-
-    uint16_t vendor_id;
-    uint16_t device_id;
-    uint8_t revision;
-    uint16_t class_id;
-    uint16_t subsystem_vendor_id;       /* only for header type = 0 */
-    uint16_t subsystem_id;              /* only for header type = 0 */
-
-    const char *romfile;                /* rom bar */
-};
-
 typedef void (*PCIINTxRoutingNotifier)(PCIDevice *dev);
 typedef int (*MSIVectorUseNotifier)(PCIDevice *dev, unsigned int vector,
                                       MSIMessage msg);
@@ -262,129 +226,6 @@ typedef void (*MSIVectorPollNotifier)(PCIDevice *dev,
                                       unsigned int vector_start,
                                       unsigned int vector_end);
 
-enum PCIReqIDType {
-    PCI_REQ_ID_INVALID = 0,
-    PCI_REQ_ID_BDF,
-    PCI_REQ_ID_SECONDARY_BUS,
-    PCI_REQ_ID_MAX,
-};
-typedef enum PCIReqIDType PCIReqIDType;
-
-struct PCIReqIDCache {
-    PCIDevice *dev;
-    PCIReqIDType type;
-};
-typedef struct PCIReqIDCache PCIReqIDCache;
-
-struct PCIDevice {
-    DeviceState qdev;
-    bool partially_hotplugged;
-    bool has_power;
-
-    /* PCI config space */
-    uint8_t *config;
-
-    /*
-     * Used to enable config checks on load. Note that writable bits are
-     * never checked even if set in cmask.
-     */
-    uint8_t *cmask;
-
-    /* Used to implement R/W bytes */
-    uint8_t *wmask;
-
-    /* Used to implement RW1C(Write 1 to Clear) bytes */
-    uint8_t *w1cmask;
-
-    /* Used to allocate config space for capabilities. */
-    uint8_t *used;
-
-    /* the following fields are read only */
-    int32_t devfn;
-    /*
-     * Cached device to fetch requester ID from, to avoid the PCI tree
-     * walking every time we invoke PCI request (e.g., MSI). For
-     * conventional PCI root complex, this field is meaningless.
-     */
-    PCIReqIDCache requester_id_cache;
-    char name[64];
-    PCIIORegion io_regions[PCI_NUM_REGIONS];
-    AddressSpace bus_master_as;
-    MemoryRegion bus_master_container_region;
-    MemoryRegion bus_master_enable_region;
-
-    /* do not access the following fields */
-    PCIConfigReadFunc *config_read;
-    PCIConfigWriteFunc *config_write;
-
-    /* Legacy PCI VGA regions */
-    MemoryRegion *vga_regions[QEMU_PCI_VGA_NUM_REGIONS];
-    bool has_vga;
-
-    /* Current IRQ levels.  Used internally by the generic PCI code.  */
-    uint8_t irq_state;
-
-    /* Capability bits */
-    uint32_t cap_present;
-
-    /* Offset of MSI-X capability in config space */
-    uint8_t msix_cap;
-
-    /* MSI-X entries */
-    int msix_entries_nr;
-
-    /* Space to store MSIX table & pending bit array */
-    uint8_t *msix_table;
-    uint8_t *msix_pba;
-
-    /* May be used by INTx or MSI during interrupt notification */
-    void *irq_opaque;
-
-    MSITriggerFunc *msi_trigger;
-    MSIPrepareMessageFunc *msi_prepare_message;
-    MSIxPrepareMessageFunc *msix_prepare_message;
-
-    /* MemoryRegion container for msix exclusive BAR setup */
-    MemoryRegion msix_exclusive_bar;
-    /* Memory Regions for MSIX table and pending bit entries. */
-    MemoryRegion msix_table_mmio;
-    MemoryRegion msix_pba_mmio;
-    /* Reference-count for entries actually in use by driver. */
-    unsigned *msix_entry_used;
-    /* MSIX function mask set or MSIX disabled */
-    bool msix_function_masked;
-    /* Version id needed for VMState */
-    int32_t version_id;
-
-    /* Offset of MSI capability in config space */
-    uint8_t msi_cap;
-
-    /* PCI Express */
-    PCIExpressDevice exp;
-
-    /* SHPC */
-    SHPCDevice *shpc;
-
-    /* Location of option rom */
-    char *romfile;
-    uint32_t romsize;
-    bool has_rom;
-    MemoryRegion rom;
-    uint32_t rom_bar;
-
-    /* INTx routing notifier */
-    PCIINTxRoutingNotifier intx_routing_notifier;
-
-    /* MSI-X notifiers */
-    MSIVectorUseNotifier msix_vector_use_notifier;
-    MSIVectorReleaseNotifier msix_vector_release_notifier;
-    MSIVectorPollNotifier msix_vector_poll_notifier;
-
-    /* ID of standby device in net_failover pair */
-    char *failover_pair_id;
-    uint32_t acpi_index;
-};
-
 void pci_register_bar(PCIDevice *pci_dev, int region_num,
                       uint8_t attr, MemoryRegion *memory);
 void pci_register_vga(PCIDevice *pci_dev, MemoryRegion *mem,
@@ -745,11 +586,6 @@ void lsi53c8xx_handle_legacy_cmdline(DeviceState *lsi_dev);
 qemu_irq pci_allocate_irq(PCIDevice *pci_dev);
 void pci_set_irq(PCIDevice *pci_dev, int level);
 
-static inline int pci_intx(PCIDevice *pci_dev)
-{
-    return pci_get_byte(pci_dev->config + PCI_INTERRUPT_PIN) - 1;
-}
-
 static inline void pci_irq_assert(PCIDevice *pci_dev)
 {
     pci_set_irq(pci_dev, 1);
@@ -770,186 +606,6 @@ static inline void pci_irq_pulse(PCIDevice *pci_dev)
     pci_irq_deassert(pci_dev);
 }
 
-static inline int pci_is_cxl(const PCIDevice *d)
-{
-    return d->cap_present & QEMU_PCIE_CAP_CXL;
-}
-
-static inline int pci_is_express(const PCIDevice *d)
-{
-    return d->cap_present & QEMU_PCI_CAP_EXPRESS;
-}
-
-static inline int pci_is_express_downstream_port(const PCIDevice *d)
-{
-    uint8_t type;
-
-    if (!pci_is_express(d) || !d->exp.exp_cap) {
-        return 0;
-    }
-
-    type = pcie_cap_get_type(d);
-
-    return type == PCI_EXP_TYPE_DOWNSTREAM || type == PCI_EXP_TYPE_ROOT_PORT;
-}
-
-static inline int pci_is_vf(const PCIDevice *d)
-{
-    return d->exp.sriov_vf.pf != NULL;
-}
-
-static inline uint32_t pci_config_size(const PCIDevice *d)
-{
-    return pci_is_express(d) ? PCIE_CONFIG_SPACE_SIZE : PCI_CONFIG_SPACE_SIZE;
-}
-
-static inline uint16_t pci_get_bdf(PCIDevice *dev)
-{
-    return PCI_BUILD_BDF(pci_bus_num(pci_get_bus(dev)), dev->devfn);
-}
-
-uint16_t pci_requester_id(PCIDevice *dev);
-
-/* DMA access functions */
-static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
-{
-    return &dev->bus_master_as;
-}
-
-/**
- * pci_dma_rw: Read from or write to an address space from PCI device.
- *
- * Return a MemTxResult indicating whether the operation succeeded
- * or failed (eg unassigned memory, device rejected the transaction,
- * IOMMU fault).
- *
- * @dev: #PCIDevice doing the memory access
- * @addr: address within the #PCIDevice address space
- * @buf: buffer with the data transferred
- * @len: the number of bytes to read or write
- * @dir: indicates the transfer direction
- */
-static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
-                                     void *buf, dma_addr_t len,
-                                     DMADirection dir, MemTxAttrs attrs)
-{
-    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
-                         dir, attrs);
-}
-
-/**
- * pci_dma_read: Read from an address space from PCI device.
- *
- * Return a MemTxResult indicating whether the operation succeeded
- * or failed (eg unassigned memory, device rejected the transaction,
- * IOMMU fault).  Called within RCU critical section.
- *
- * @dev: #PCIDevice doing the memory access
- * @addr: address within the #PCIDevice address space
- * @buf: buffer with the data transferred
- * @len: length of the data transferred
- */
-static inline MemTxResult pci_dma_read(PCIDevice *dev, dma_addr_t addr,
-                                       void *buf, dma_addr_t len)
-{
-    return pci_dma_rw(dev, addr, buf, len,
-                      DMA_DIRECTION_TO_DEVICE, MEMTXATTRS_UNSPECIFIED);
-}
-
-/**
- * pci_dma_write: Write to address space from PCI device.
- *
- * Return a MemTxResult indicating whether the operation succeeded
- * or failed (eg unassigned memory, device rejected the transaction,
- * IOMMU fault).
- *
- * @dev: #PCIDevice doing the memory access
- * @addr: address within the #PCIDevice address space
- * @buf: buffer with the data transferred
- * @len: the number of bytes to write
- */
-static inline MemTxResult pci_dma_write(PCIDevice *dev, dma_addr_t addr,
-                                        const void *buf, dma_addr_t len)
-{
-    return pci_dma_rw(dev, addr, (void *) buf, len,
-                      DMA_DIRECTION_FROM_DEVICE, MEMTXATTRS_UNSPECIFIED);
-}
-
-#define PCI_DMA_DEFINE_LDST(_l, _s, _bits) \
-    static inline MemTxResult ld##_l##_pci_dma(PCIDevice *dev, \
-                                               dma_addr_t addr, \
-                                               uint##_bits##_t *val, \
-                                               MemTxAttrs attrs) \
-    { \
-        return ld##_l##_dma(pci_get_address_space(dev), addr, val, attrs); \
-    } \
-    static inline MemTxResult st##_s##_pci_dma(PCIDevice *dev, \
-                                               dma_addr_t addr, \
-                                               uint##_bits##_t val, \
-                                               MemTxAttrs attrs) \
-    { \
-        return st##_s##_dma(pci_get_address_space(dev), addr, val, attrs); \
-    }
-
-PCI_DMA_DEFINE_LDST(ub, b, 8);
-PCI_DMA_DEFINE_LDST(uw_le, w_le, 16)
-PCI_DMA_DEFINE_LDST(l_le, l_le, 32);
-PCI_DMA_DEFINE_LDST(q_le, q_le, 64);
-PCI_DMA_DEFINE_LDST(uw_be, w_be, 16)
-PCI_DMA_DEFINE_LDST(l_be, l_be, 32);
-PCI_DMA_DEFINE_LDST(q_be, q_be, 64);
-
-#undef PCI_DMA_DEFINE_LDST
-
-/**
- * pci_dma_map: Map device PCI address space range into host virtual address
- * @dev: #PCIDevice to be accessed
- * @addr: address within that device's address space
- * @plen: pointer to length of buffer; updated on return to indicate
- *        if only a subset of the requested range has been mapped
- * @dir: indicates the transfer direction
- *
- * Return: A host pointer, or %NULL if the resources needed to
- *         perform the mapping are exhausted (in that case *@plen
- *         is set to zero).
- */
-static inline void *pci_dma_map(PCIDevice *dev, dma_addr_t addr,
-                                dma_addr_t *plen, DMADirection dir)
-{
-    return dma_memory_map(pci_get_address_space(dev), addr, plen, dir,
-                          MEMTXATTRS_UNSPECIFIED);
-}
-
-static inline void pci_dma_unmap(PCIDevice *dev, void *buffer, dma_addr_t len,
-                                 DMADirection dir, dma_addr_t access_len)
-{
-    dma_memory_unmap(pci_get_address_space(dev), buffer, len, dir, access_len);
-}
-
-static inline void pci_dma_sglist_init(QEMUSGList *qsg, PCIDevice *dev,
-                                       int alloc_hint)
-{
-    qemu_sglist_init(qsg, DEVICE(dev), alloc_hint, pci_get_address_space(dev));
-}
-
-extern const VMStateDescription vmstate_pci_device;
-
-#define VMSTATE_PCI_DEVICE(_field, _state) {                         \
-    .name       = (stringify(_field)),                               \
-    .size       = sizeof(PCIDevice),                                 \
-    .vmsd       = &vmstate_pci_device,                               \
-    .flags      = VMS_STRUCT,                                        \
-    .offset     = vmstate_offset_value(_state, _field, PCIDevice),   \
-}
-
-#define VMSTATE_PCI_DEVICE_POINTER(_field, _state) {                 \
-    .name       = (stringify(_field)),                               \
-    .size       = sizeof(PCIDevice),                                 \
-    .vmsd       = &vmstate_pci_device,                               \
-    .flags      = VMS_STRUCT | VMS_POINTER,                          \
-    .offset     = vmstate_offset_pointer(_state, _field, PCIDevice), \
-}
-
 MSIMessage pci_get_msi_message(PCIDevice *dev, int vector);
 void pci_set_power(PCIDevice *pci_dev, bool state);
 
diff --git a/include/hw/pci/pci_bridge.h b/include/hw/pci/pci_bridge.h
index 58a3fb0c2c..63a7521567 100644
--- a/include/hw/pci/pci_bridge.h
+++ b/include/hw/pci/pci_bridge.h
@@ -26,7 +26,7 @@
 #ifndef QEMU_PCI_BRIDGE_H
 #define QEMU_PCI_BRIDGE_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_bus.h"
 #include "hw/cxl/cxl.h"
 #include "qom/object.h"
diff --git a/include/hw/pci/pci_device.h b/include/hw/pci/pci_device.h
new file mode 100644
index 0000000000..d3dd0f64b2
--- /dev/null
+++ b/include/hw/pci/pci_device.h
@@ -0,0 +1,350 @@
+#ifndef QEMU_PCI_DEVICE_H
+#define QEMU_PCI_DEVICE_H
+
+#include "hw/pci/pci.h"
+#include "hw/pci/pcie.h"
+
+#define TYPE_PCI_DEVICE "pci-device"
+typedef struct PCIDeviceClass PCIDeviceClass;
+DECLARE_OBJ_CHECKERS(PCIDevice, PCIDeviceClass,
+                     PCI_DEVICE, TYPE_PCI_DEVICE)
+
+/*
+ * Implemented by devices that can be plugged on CXL buses. In the spec, this is
+ * actually a "CXL Component", but we name it device to match the PCI naming.
+ */
+#define INTERFACE_CXL_DEVICE "cxl-device"
+
+/* Implemented by devices that can be plugged on PCI Express buses */
+#define INTERFACE_PCIE_DEVICE "pci-express-device"
+
+/* Implemented by devices that can be plugged on Conventional PCI buses */
+#define INTERFACE_CONVENTIONAL_PCI_DEVICE "conventional-pci-device"
+
+struct PCIDeviceClass {
+    DeviceClass parent_class;
+
+    void (*realize)(PCIDevice *dev, Error **errp);
+    PCIUnregisterFunc *exit;
+    PCIConfigReadFunc *config_read;
+    PCIConfigWriteFunc *config_write;
+
+    uint16_t vendor_id;
+    uint16_t device_id;
+    uint8_t revision;
+    uint16_t class_id;
+    uint16_t subsystem_vendor_id;       /* only for header type = 0 */
+    uint16_t subsystem_id;              /* only for header type = 0 */
+
+    const char *romfile;                /* rom bar */
+};
+
+enum PCIReqIDType {
+    PCI_REQ_ID_INVALID = 0,
+    PCI_REQ_ID_BDF,
+    PCI_REQ_ID_SECONDARY_BUS,
+    PCI_REQ_ID_MAX,
+};
+typedef enum PCIReqIDType PCIReqIDType;
+
+struct PCIReqIDCache {
+    PCIDevice *dev;
+    PCIReqIDType type;
+};
+typedef struct PCIReqIDCache PCIReqIDCache;
+
+struct PCIDevice {
+    DeviceState qdev;
+    bool partially_hotplugged;
+    bool has_power;
+
+    /* PCI config space */
+    uint8_t *config;
+
+    /*
+     * Used to enable config checks on load. Note that writable bits are
+     * never checked even if set in cmask.
+     */
+    uint8_t *cmask;
+
+    /* Used to implement R/W bytes */
+    uint8_t *wmask;
+
+    /* Used to implement RW1C (Write 1 to Clear) bytes */
+    uint8_t *w1cmask;
+
+    /* Used to allocate config space for capabilities. */
+    uint8_t *used;
+
+    /* the following fields are read only */
+    int32_t devfn;
+    /*
+     * Cached device to fetch requester ID from, to avoid the PCI tree
+     * walking every time we invoke PCI request (e.g., MSI). For
+     * conventional PCI root complex, this field is meaningless.
+     */
+    PCIReqIDCache requester_id_cache;
+    char name[64];
+    PCIIORegion io_regions[PCI_NUM_REGIONS];
+    AddressSpace bus_master_as;
+    MemoryRegion bus_master_container_region;
+    MemoryRegion bus_master_enable_region;
+
+    /* do not access the following fields */
+    PCIConfigReadFunc *config_read;
+    PCIConfigWriteFunc *config_write;
+
+    /* Legacy PCI VGA regions */
+    MemoryRegion *vga_regions[QEMU_PCI_VGA_NUM_REGIONS];
+    bool has_vga;
+
+    /* Current IRQ levels.  Used internally by the generic PCI code.  */
+    uint8_t irq_state;
+
+    /* Capability bits */
+    uint32_t cap_present;
+
+    /* Offset of MSI-X capability in config space */
+    uint8_t msix_cap;
+
+    /* MSI-X entries */
+    int msix_entries_nr;
+
+    /* Space to store MSIX table & pending bit array */
+    uint8_t *msix_table;
+    uint8_t *msix_pba;
+
+    /* May be used by INTx or MSI during interrupt notification */
+    void *irq_opaque;
+
+    MSITriggerFunc *msi_trigger;
+    MSIPrepareMessageFunc *msi_prepare_message;
+    MSIxPrepareMessageFunc *msix_prepare_message;
+
+    /* MemoryRegion container for msix exclusive BAR setup */
+    MemoryRegion msix_exclusive_bar;
+    /* Memory Regions for MSIX table and pending bit entries. */
+    MemoryRegion msix_table_mmio;
+    MemoryRegion msix_pba_mmio;
+    /* Reference-count for entries actually in use by driver. */
+    unsigned *msix_entry_used;
+    /* MSIX function mask set or MSIX disabled */
+    bool msix_function_masked;
+    /* Version id needed for VMState */
+    int32_t version_id;
+
+    /* Offset of MSI capability in config space */
+    uint8_t msi_cap;
+
+    /* PCI Express */
+    PCIExpressDevice exp;
+
+    /* SHPC */
+    SHPCDevice *shpc;
+
+    /* Location of option rom */
+    char *romfile;
+    uint32_t romsize;
+    bool has_rom;
+    MemoryRegion rom;
+    uint32_t rom_bar;
+
+    /* INTx routing notifier */
+    PCIINTxRoutingNotifier intx_routing_notifier;
+
+    /* MSI-X notifiers */
+    MSIVectorUseNotifier msix_vector_use_notifier;
+    MSIVectorReleaseNotifier msix_vector_release_notifier;
+    MSIVectorPollNotifier msix_vector_poll_notifier;
+
+    /* ID of standby device in net_failover pair */
+    char *failover_pair_id;
+    uint32_t acpi_index;
+};
+
+static inline int pci_intx(PCIDevice *pci_dev)
+{
+    return pci_get_byte(pci_dev->config + PCI_INTERRUPT_PIN) - 1;
+}
+
+static inline int pci_is_cxl(const PCIDevice *d)
+{
+    return d->cap_present & QEMU_PCIE_CAP_CXL;
+}
+
+static inline int pci_is_express(const PCIDevice *d)
+{
+    return d->cap_present & QEMU_PCI_CAP_EXPRESS;
+}
+
+static inline int pci_is_express_downstream_port(const PCIDevice *d)
+{
+    uint8_t type;
+
+    if (!pci_is_express(d) || !d->exp.exp_cap) {
+        return 0;
+    }
+
+    type = pcie_cap_get_type(d);
+
+    return type == PCI_EXP_TYPE_DOWNSTREAM || type == PCI_EXP_TYPE_ROOT_PORT;
+}
+
+static inline int pci_is_vf(const PCIDevice *d)
+{
+    return d->exp.sriov_vf.pf != NULL;
+}
+
+static inline uint32_t pci_config_size(const PCIDevice *d)
+{
+    return pci_is_express(d) ? PCIE_CONFIG_SPACE_SIZE : PCI_CONFIG_SPACE_SIZE;
+}
+
+static inline uint16_t pci_get_bdf(PCIDevice *dev)
+{
+    return PCI_BUILD_BDF(pci_bus_num(pci_get_bus(dev)), dev->devfn);
+}
+
+uint16_t pci_requester_id(PCIDevice *dev);
+
+/* DMA access functions */
+static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
+{
+    return &dev->bus_master_as;
+}
+
+/**
+ * pci_dma_rw: Read from or write to an address space from PCI device.
+ *
+ * Return a MemTxResult indicating whether the operation succeeded
+ * or failed (eg unassigned memory, device rejected the transaction,
+ * IOMMU fault).
+ *
+ * @dev: #PCIDevice doing the memory access
+ * @addr: address within the #PCIDevice address space
+ * @buf: buffer with the data transferred
+ * @len: the number of bytes to read or write
+ * @dir: indicates the transfer direction
+ */
+static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
+                                     void *buf, dma_addr_t len,
+                                     DMADirection dir, MemTxAttrs attrs)
+{
+    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
+                         dir, attrs);
+}
+
+/**
+ * pci_dma_read: Read from an address space from PCI device.
+ *
+ * Return a MemTxResult indicating whether the operation succeeded
+ * or failed (eg unassigned memory, device rejected the transaction,
+ * IOMMU fault).  Called within RCU critical section.
+ *
+ * @dev: #PCIDevice doing the memory access
+ * @addr: address within the #PCIDevice address space
+ * @buf: buffer with the data transferred
+ * @len: length of the data transferred
+ */
+static inline MemTxResult pci_dma_read(PCIDevice *dev, dma_addr_t addr,
+                                       void *buf, dma_addr_t len)
+{
+    return pci_dma_rw(dev, addr, buf, len,
+                      DMA_DIRECTION_TO_DEVICE, MEMTXATTRS_UNSPECIFIED);
+}
+
+/**
+ * pci_dma_write: Write to address space from PCI device.
+ *
+ * Return a MemTxResult indicating whether the operation succeeded
+ * or failed (eg unassigned memory, device rejected the transaction,
+ * IOMMU fault).
+ *
+ * @dev: #PCIDevice doing the memory access
+ * @addr: address within the #PCIDevice address space
+ * @buf: buffer with the data transferred
+ * @len: the number of bytes to write
+ */
+static inline MemTxResult pci_dma_write(PCIDevice *dev, dma_addr_t addr,
+                                        const void *buf, dma_addr_t len)
+{
+    return pci_dma_rw(dev, addr, (void *) buf, len,
+                      DMA_DIRECTION_FROM_DEVICE, MEMTXATTRS_UNSPECIFIED);
+}
+
+#define PCI_DMA_DEFINE_LDST(_l, _s, _bits) \
+    static inline MemTxResult ld##_l##_pci_dma(PCIDevice *dev, \
+                                               dma_addr_t addr, \
+                                               uint##_bits##_t *val, \
+                                               MemTxAttrs attrs) \
+    { \
+        return ld##_l##_dma(pci_get_address_space(dev), addr, val, attrs); \
+    } \
+    static inline MemTxResult st##_s##_pci_dma(PCIDevice *dev, \
+                                               dma_addr_t addr, \
+                                               uint##_bits##_t val, \
+                                               MemTxAttrs attrs) \
+    { \
+        return st##_s##_dma(pci_get_address_space(dev), addr, val, attrs); \
+    }
+
+PCI_DMA_DEFINE_LDST(ub, b, 8);
+PCI_DMA_DEFINE_LDST(uw_le, w_le, 16)
+PCI_DMA_DEFINE_LDST(l_le, l_le, 32);
+PCI_DMA_DEFINE_LDST(q_le, q_le, 64);
+PCI_DMA_DEFINE_LDST(uw_be, w_be, 16)
+PCI_DMA_DEFINE_LDST(l_be, l_be, 32);
+PCI_DMA_DEFINE_LDST(q_be, q_be, 64);
+
+#undef PCI_DMA_DEFINE_LDST
+
+/**
+ * pci_dma_map: Map device PCI address space range into host virtual address
+ * @dev: #PCIDevice to be accessed
+ * @addr: address within that device's address space
+ * @plen: pointer to length of buffer; updated on return to indicate
+ *        if only a subset of the requested range has been mapped
+ * @dir: indicates the transfer direction
+ *
+ * Return: A host pointer, or %NULL if the resources needed to
+ *         perform the mapping are exhausted (in that case *@plen
+ *         is set to zero).
+ */
+static inline void *pci_dma_map(PCIDevice *dev, dma_addr_t addr,
+                                dma_addr_t *plen, DMADirection dir)
+{
+    return dma_memory_map(pci_get_address_space(dev), addr, plen, dir,
+                          MEMTXATTRS_UNSPECIFIED);
+}
+
+static inline void pci_dma_unmap(PCIDevice *dev, void *buffer, dma_addr_t len,
+                                 DMADirection dir, dma_addr_t access_len)
+{
+    dma_memory_unmap(pci_get_address_space(dev), buffer, len, dir, access_len);
+}
+
+static inline void pci_dma_sglist_init(QEMUSGList *qsg, PCIDevice *dev,
+                                       int alloc_hint)
+{
+    qemu_sglist_init(qsg, DEVICE(dev), alloc_hint, pci_get_address_space(dev));
+}
+
+extern const VMStateDescription vmstate_pci_device;
+
+#define VMSTATE_PCI_DEVICE(_field, _state) {                         \
+    .name       = (stringify(_field)),                               \
+    .size       = sizeof(PCIDevice),                                 \
+    .vmsd       = &vmstate_pci_device,                               \
+    .flags      = VMS_STRUCT,                                        \
+    .offset     = vmstate_offset_value(_state, _field, PCIDevice),   \
+}
+
+#define VMSTATE_PCI_DEVICE_POINTER(_field, _state) {                 \
+    .name       = (stringify(_field)),                               \
+    .size       = sizeof(PCIDevice),                                 \
+    .vmsd       = &vmstate_pci_device,                               \
+    .flags      = VMS_STRUCT | VMS_POINTER,                          \
+    .offset     = vmstate_offset_pointer(_state, _field, PCIDevice), \
+}
+
+#endif
diff --git a/include/hw/pci/pcie_port.h b/include/hw/pci/pcie_port.h
index d9b5d07504..fd484afb30 100644
--- a/include/hw/pci/pcie_port.h
+++ b/include/hw/pci/pcie_port.h
@@ -23,6 +23,7 @@
 
 #include "hw/pci/pci_bridge.h"
 #include "hw/pci/pci_bus.h"
+#include "hw/pci/pci_device.h"
 #include "qom/object.h"
 
 #define TYPE_PCIE_PORT "pcie-port"
diff --git a/include/hw/pci/shpc.h b/include/hw/pci/shpc.h
index d5683b7399..89c7a3b7fa 100644
--- a/include/hw/pci/shpc.h
+++ b/include/hw/pci/shpc.h
@@ -3,7 +3,7 @@
 
 #include "exec/memory.h"
 #include "hw/hotplug.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "migration/vmstate.h"
 
 struct SHPCDevice {
diff --git a/include/hw/remote/iohub.h b/include/hw/remote/iohub.h
index 0bf98e0d78..6a8444f9a9 100644
--- a/include/hw/remote/iohub.h
+++ b/include/hw/remote/iohub.h
@@ -11,7 +11,7 @@
 #ifndef REMOTE_IOHUB_H
 #define REMOTE_IOHUB_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "qemu/event_notifier.h"
 #include "qemu/thread-posix.h"
 #include "hw/remote/mpqemu-link.h"
diff --git a/include/hw/remote/proxy.h b/include/hw/remote/proxy.h
index 741def71f1..0cfd9665be 100644
--- a/include/hw/remote/proxy.h
+++ b/include/hw/remote/proxy.h
@@ -9,7 +9,7 @@
 #ifndef PROXY_H
 #define PROXY_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "io/channel.h"
 #include "hw/remote/proxy-memory-listener.h"
 #include "qemu/event_notifier.h"
diff --git a/include/hw/sd/sdhci.h b/include/hw/sd/sdhci.h
index a989fca3b2..6cd2822f1d 100644
--- a/include/hw/sd/sdhci.h
+++ b/include/hw/sd/sdhci.h
@@ -25,7 +25,7 @@
 #ifndef SDHCI_H
 #define SDHCI_H
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/sysbus.h"
 #include "hw/sd/sd.h"
 #include "qom/object.h"
diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
index 2693778b23..0bf48e936d 100644
--- a/include/hw/southbridge/piix.h
+++ b/include/hw/southbridge/piix.h
@@ -12,8 +12,7 @@
 #ifndef HW_SOUTHBRIDGE_PIIX_H
 #define HW_SOUTHBRIDGE_PIIX_H
 
-#include "hw/pci/pci.h"
-#include "qom/object.h"
+#include "hw/pci/pci_device.h"
 
 /* PIRQRC[A:D]: PIRQx Route Control Registers */
 #define PIIX_PIRQCA 0x60
diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 77ce17d8a4..9a13a756ae 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -15,7 +15,7 @@
 #include "hw/xen/interface/io/xenbus.h"
 
 #include "hw/xen/xen.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/xen/trace.h"
 
 extern xc_interface *xen_xc;
diff --git a/hw/acpi/erst.c b/hw/acpi/erst.c
index aefcc03ad6..35007d8017 100644
--- a/hw/acpi/erst.c
+++ b/hw/acpi/erst.c
@@ -14,7 +14,7 @@
 #include "hw/qdev-core.h"
 #include "exec/memory.h"
 #include "qom/object.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "qom/object_interfaces.h"
 #include "qemu/error-report.h"
 #include "migration/vmstate.h"
diff --git a/hw/audio/ac97.c b/hw/audio/ac97.c
index be2dd701a4..364cdfa733 100644
--- a/hw/audio/ac97.c
+++ b/hw/audio/ac97.c
@@ -20,7 +20,7 @@
 #include "qemu/osdep.h"
 #include "hw/audio/soundhw.h"
 #include "audio/audio.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "qemu/module.h"
diff --git a/hw/audio/es1370.c b/hw/audio/es1370.c
index 6904589814..54cc19a637 100644
--- a/hw/audio/es1370.c
+++ b/hw/audio/es1370.c
@@ -29,7 +29,7 @@
 #include "qemu/osdep.h"
 #include "hw/audio/soundhw.h"
 #include "audio/audio.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "migration/vmstate.h"
 #include "qemu/module.h"
 #include "sysemu/dma.h"
diff --git a/hw/audio/via-ac97.c b/hw/audio/via-ac97.c
index 6d556f74fc..d1a856f63d 100644
--- a/hw/audio/via-ac97.c
+++ b/hw/audio/via-ac97.c
@@ -11,7 +11,7 @@
 
 #include "qemu/osdep.h"
 #include "hw/isa/vt82c686.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 
 static void via_ac97_realize(PCIDevice *pci_dev, Error **errp)
 {
diff --git a/hw/char/serial-pci-multi.c b/hw/char/serial-pci-multi.c
index 3a9f96c2d1..f18b8dcce5 100644
--- a/hw/char/serial-pci-multi.c
+++ b/hw/char/serial-pci-multi.c
@@ -31,7 +31,7 @@
 #include "qapi/error.h"
 #include "hw/char/serial.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "hw/qdev-properties-system.h"
 #include "migration/vmstate.h"
diff --git a/hw/char/serial-pci.c b/hw/char/serial-pci.c
index 93d6f99244..801b769aba 100644
--- a/hw/char/serial-pci.c
+++ b/hw/char/serial-pci.c
@@ -30,7 +30,7 @@
 #include "qemu/module.h"
 #include "hw/char/serial.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "qom/object.h"
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 97a968f477..54a09fa9ac 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -32,6 +32,7 @@
 #include "sysemu/blockdev.h"
 #include "net/net.h"
 #include "hw/pci/pci.h"
+#include "hw/pci/pcie.h"
 #include "util/block-helpers.h"
 
 static bool check_prop_still_unset(Object *obj, const char *name,
diff --git a/hw/display/bochs-display.c b/hw/display/bochs-display.c
index 8ed734b195..e7ec268184 100644
--- a/hw/display/bochs-display.c
+++ b/hw/display/bochs-display.c
@@ -8,7 +8,7 @@
 #include "qemu/osdep.h"
 #include "qemu/module.h"
 #include "qemu/units.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "hw/display/bochs-vbe.h"
diff --git a/hw/display/cirrus_vga.c b/hw/display/cirrus_vga.c
index 6e8c747c46..55c32e3e40 100644
--- a/hw/display/cirrus_vga.c
+++ b/hw/display/cirrus_vga.c
@@ -39,7 +39,7 @@
 #include "sysemu/reset.h"
 #include "qapi/error.h"
 #include "trace.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "ui/pixel_ops.h"
diff --git a/hw/display/sm501.c b/hw/display/sm501.c
index 663c37e7f2..52e42585af 100644
--- a/hw/display/sm501.c
+++ b/hw/display/sm501.c
@@ -32,7 +32,7 @@
 #include "ui/console.h"
 #include "hw/sysbus.h"
 #include "migration/vmstate.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "hw/i2c/i2c.h"
 #include "hw/display/i2c-ddc.h"
diff --git a/hw/display/vga-pci.c b/hw/display/vga-pci.c
index df23dbf3a0..b351b8f299 100644
--- a/hw/display/vga-pci.c
+++ b/hw/display/vga-pci.c
@@ -25,7 +25,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "vga_int.h"
diff --git a/hw/display/vmware_vga.c b/hw/display/vmware_vga.c
index 53949d2539..59ae7f74b8 100644
--- a/hw/display/vmware_vga.c
+++ b/hw/display/vmware_vga.c
@@ -29,7 +29,7 @@
 #include "qemu/log.h"
 #include "hw/loader.h"
 #include "trace.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "qom/object.h"
diff --git a/hw/i386/xen/xen_pvdevice.c b/hw/i386/xen/xen_pvdevice.c
index 1ea95fa601..e62e06622b 100644
--- a/hw/i386/xen/xen_pvdevice.c
+++ b/hw/i386/xen/xen_pvdevice.c
@@ -32,7 +32,7 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "qemu/module.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "trace.h"
diff --git a/hw/ipack/tpci200.c b/hw/ipack/tpci200.c
index 1f764fc85b..6b3edbf017 100644
--- a/hw/ipack/tpci200.c
+++ b/hw/ipack/tpci200.c
@@ -12,7 +12,7 @@
 #include "qemu/units.h"
 #include "hw/ipack/ipack.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "migration/vmstate.h"
 #include "qemu/bitops.h"
 #include "qemu/module.h"
diff --git a/hw/ipmi/pci_ipmi_bt.c b/hw/ipmi/pci_ipmi_bt.c
index b6e52730d3..633931b825 100644
--- a/hw/ipmi/pci_ipmi_bt.c
+++ b/hw/ipmi/pci_ipmi_bt.c
@@ -25,7 +25,7 @@
 #include "migration/vmstate.h"
 #include "qapi/error.h"
 #include "hw/ipmi/ipmi_bt.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "qom/object.h"
 
 #define TYPE_PCI_IPMI_BT "pci-ipmi-bt"
diff --git a/hw/ipmi/pci_ipmi_kcs.c b/hw/ipmi/pci_ipmi_kcs.c
index de13418862..1a581413c2 100644
--- a/hw/ipmi/pci_ipmi_kcs.c
+++ b/hw/ipmi/pci_ipmi_kcs.c
@@ -25,7 +25,7 @@
 #include "migration/vmstate.h"
 #include "qapi/error.h"
 #include "hw/ipmi/ipmi_kcs.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "qom/object.h"
 
 #define TYPE_PCI_IPMI_KCS "pci-ipmi-kcs"
diff --git a/hw/isa/i82378.c b/hw/isa/i82378.c
index 2a2ff05b93..e3322e03bf 100644
--- a/hw/isa/i82378.c
+++ b/hw/isa/i82378.c
@@ -18,7 +18,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/irq.h"
 #include "hw/intc/i8259.h"
 #include "hw/timer/i8254.h"
diff --git a/hw/mips/gt64xxx_pci.c b/hw/mips/gt64xxx_pci.c
index 19d0d9889f..164866cf3e 100644
--- a/hw/mips/gt64xxx_pci.c
+++ b/hw/mips/gt64xxx_pci.c
@@ -26,7 +26,7 @@
 #include "qapi/error.h"
 #include "qemu/units.h"
 #include "qemu/log.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_host.h"
 #include "migration/vmstate.h"
 #include "hw/intc/i8259.h"
diff --git a/hw/misc/pci-testdev.c b/hw/misc/pci-testdev.c
index 03845c8de3..49303134e4 100644
--- a/hw/misc/pci-testdev.c
+++ b/hw/misc/pci-testdev.c
@@ -19,7 +19,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "qemu/event_notifier.h"
 #include "qemu/module.h"
diff --git a/hw/misc/pvpanic-pci.c b/hw/misc/pvpanic-pci.c
index 99cf7e2041..fbcaa50731 100644
--- a/hw/misc/pvpanic-pci.c
+++ b/hw/misc/pvpanic-pci.c
@@ -20,7 +20,7 @@
 #include "migration/vmstate.h"
 #include "hw/misc/pvpanic.h"
 #include "qom/object.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "standard-headers/linux/pvpanic.h"
 
 OBJECT_DECLARE_SIMPLE_TYPE(PVPanicPCIState, PVPANIC_PCI_DEVICE)
diff --git a/hw/net/can/can_kvaser_pci.c b/hw/net/can/can_kvaser_pci.c
index 94b3a534f8..2cd90cef1e 100644
--- a/hw/net/can/can_kvaser_pci.c
+++ b/hw/net/can/can_kvaser_pci.c
@@ -37,7 +37,7 @@
 #include "qapi/error.h"
 #include "chardev/char.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "net/can_emu.h"
diff --git a/hw/net/can/can_mioe3680_pci.c b/hw/net/can/can_mioe3680_pci.c
index 29dc696f7c..b9918773b3 100644
--- a/hw/net/can/can_mioe3680_pci.c
+++ b/hw/net/can/can_mioe3680_pci.c
@@ -33,7 +33,7 @@
 #include "qapi/error.h"
 #include "chardev/char.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "net/can_emu.h"
diff --git a/hw/net/can/can_pcm3680_pci.c b/hw/net/can/can_pcm3680_pci.c
index e8e57f4f33..8ef3e4659c 100644
--- a/hw/net/can/can_pcm3680_pci.c
+++ b/hw/net/can/can_pcm3680_pci.c
@@ -33,7 +33,7 @@
 #include "qapi/error.h"
 #include "chardev/char.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "net/can_emu.h"
diff --git a/hw/net/can/ctucan_pci.c b/hw/net/can/ctucan_pci.c
index 50f4ea6cd6..ea079e2af5 100644
--- a/hw/net/can/ctucan_pci.c
+++ b/hw/net/can/ctucan_pci.c
@@ -34,7 +34,7 @@
 #include "qapi/error.h"
 #include "chardev/char.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "net/can_emu.h"
diff --git a/hw/net/e1000.c b/hw/net/e1000.c
index e26e0a64c1..7efb8a4c52 100644
--- a/hw/net/e1000.c
+++ b/hw/net/e1000.c
@@ -26,7 +26,7 @@
 
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "net/eth.h"
diff --git a/hw/net/e1000x_common.c b/hw/net/e1000x_common.c
index a8d93870b5..2f43e8cd13 100644
--- a/hw/net/e1000x_common.c
+++ b/hw/net/e1000x_common.c
@@ -24,7 +24,7 @@
 
 #include "qemu/osdep.h"
 #include "qemu/units.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "net/net.h"
 
 #include "e1000x_common.h"
diff --git a/hw/net/eepro100.c b/hw/net/eepro100.c
index 679f52f80f..dc07984ae9 100644
--- a/hw/net/eepro100.c
+++ b/hw/net/eepro100.c
@@ -42,7 +42,7 @@
 
 #include "qemu/osdep.h"
 #include "qemu/units.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "net/net.h"
diff --git a/hw/net/ne2000-pci.c b/hw/net/ne2000-pci.c
index 9e5d10859a..edc6689d33 100644
--- a/hw/net/ne2000-pci.c
+++ b/hw/net/ne2000-pci.c
@@ -24,7 +24,7 @@
 
 #include "qemu/osdep.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "ne2000.h"
diff --git a/hw/net/net_tx_pkt.c b/hw/net/net_tx_pkt.c
index 1cb1125d9f..2533ea2700 100644
--- a/hw/net/net_tx_pkt.c
+++ b/hw/net/net_tx_pkt.c
@@ -21,7 +21,7 @@
 #include "net/checksum.h"
 #include "net/tap.h"
 #include "net/net.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 
 enum {
     NET_TX_PKT_VHDR_FRAG = 0,
diff --git a/hw/net/pcnet-pci.c b/hw/net/pcnet-pci.c
index 95d27102aa..96a302c141 100644
--- a/hw/net/pcnet-pci.c
+++ b/hw/net/pcnet-pci.c
@@ -29,7 +29,7 @@
 
 #include "qemu/osdep.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "net/net.h"
diff --git a/hw/net/rocker/rocker.c b/hw/net/rocker/rocker.c
index 281d43e6cf..cf54ddf49d 100644
--- a/hw/net/rocker/rocker.c
+++ b/hw/net/rocker/rocker.c
@@ -16,7 +16,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "hw/qdev-properties-system.h"
 #include "migration/vmstate.h"
diff --git a/hw/net/rocker/rocker_desc.c b/hw/net/rocker/rocker_desc.c
index 01845f1157..f3068c9250 100644
--- a/hw/net/rocker/rocker_desc.c
+++ b/hw/net/rocker/rocker_desc.c
@@ -16,7 +16,7 @@
 
 #include "qemu/osdep.h"
 #include "net/net.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 
 #include "rocker.h"
 #include "rocker_hw.h"
diff --git a/hw/net/rtl8139.c b/hw/net/rtl8139.c
index 700b1b66b6..5a5aaf868d 100644
--- a/hw/net/rtl8139.c
+++ b/hw/net/rtl8139.c
@@ -53,7 +53,7 @@
 #include "qemu/osdep.h"
 #include <zlib.h>
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "sysemu/dma.h"
diff --git a/hw/net/sungem.c b/hw/net/sungem.c
index 3684a4d733..eb01520790 100644
--- a/hw/net/sungem.c
+++ b/hw/net/sungem.c
@@ -8,7 +8,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "qemu/log.h"
diff --git a/hw/net/sunhme.c b/hw/net/sunhme.c
index fc34905f87..1f3d8011ae 100644
--- a/hw/net/sunhme.c
+++ b/hw/net/sunhme.c
@@ -23,7 +23,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
 #include "hw/net/mii.h"
diff --git a/hw/net/tulip.c b/hw/net/tulip.c
index c2b3b1bdfa..915e5fb595 100644
--- a/hw/net/tulip.c
+++ b/hw/net/tulip.c
@@ -9,7 +9,7 @@
 #include "qemu/osdep.h"
 #include "qemu/log.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/qdev-properties.h"
 #include "hw/nvram/eeprom93xx.h"
 #include "migration/vmstate.h"
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index f191e3037f..3ae909041a 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -42,7 +42,7 @@
 #include "sysemu/sysemu.h"
 #include "trace.h"
 #include "monitor/qdev.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "net_rx_pkt.h"
 #include "hw/virtio/vhost.h"
 #include "sysemu/qtest.h"
diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
index a57e81e3a9..f04f3ad668 100644
--- a/hw/pci-host/bonito.c
+++ b/hw/pci-host/bonito.c
@@ -42,7 +42,7 @@
 #include "qemu/units.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/irq.h"
 #include "hw/mips/mips.h"
 #include "hw/pci/pci_host.h"
diff --git a/hw/pci-host/dino.c b/hw/pci-host/dino.c
index f257c24e64..e8eaebca54 100644
--- a/hw/pci-host/dino.c
+++ b/hw/pci-host/dino.c
@@ -15,7 +15,7 @@
 #include "qemu/units.h"
 #include "qapi/error.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_bus.h"
 #include "hw/qdev-properties.h"
 #include "hw/pci-host/dino.h"
diff --git a/hw/pci-host/grackle.c b/hw/pci-host/grackle.c
index 95945ac0f4..8cf318cb80 100644
--- a/hw/pci-host/grackle.c
+++ b/hw/pci-host/grackle.c
@@ -25,7 +25,7 @@
 
 #include "qemu/osdep.h"
 #include "hw/qdev-properties.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/irq.h"
 #include "qapi/error.h"
 #include "qemu/module.h"
diff --git a/hw/pci-host/mv64361.c b/hw/pci-host/mv64361.c
index cc9c4d6d3b..015b92bd5f 100644
--- a/hw/pci-host/mv64361.c
+++ b/hw/pci-host/mv64361.c
@@ -13,7 +13,7 @@
 #include "qapi/error.h"
 #include "hw/hw.h"
 #include "hw/sysbus.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_host.h"
 #include "hw/irq.h"
 #include "hw/intc/i8259.h"
diff --git a/hw/pci-host/ppce500.c b/hw/pci-host/ppce500.c
index 89c1b53dd7..568849e930 100644
--- a/hw/pci-host/ppce500.c
+++ b/hw/pci-host/ppce500.c
@@ -19,7 +19,7 @@
 #include "hw/ppc/e500-ccsr.h"
 #include "hw/qdev-properties.h"
 #include "migration/vmstate.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_host.h"
 #include "qemu/bswap.h"
 #include "qemu/module.h"
diff --git a/hw/pci-host/raven.c b/hw/pci-host/raven.c
index 7a105e4a63..2c96ddf8fe 100644
--- a/hw/pci-host/raven.c
+++ b/hw/pci-host/raven.c
@@ -28,7 +28,7 @@
 #include "qemu/units.h"
 #include "qemu/log.h"
 #include "qapi/error.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_bus.h"
 #include "hw/pci/pci_host.h"
 #include "hw/qdev-properties.h"
diff --git a/hw/pci-host/sh_pci.c b/hw/pci-host/sh_pci.c
index 719d6ca2a6..77e7bbc65f 100644
--- a/hw/pci-host/sh_pci.c
+++ b/hw/pci-host/sh_pci.c
@@ -26,7 +26,7 @@
 #include "hw/sysbus.h"
 #include "hw/sh4/sh.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_host.h"
 #include "qemu/bswap.h"
 #include "qemu/module.h"
diff --git a/hw/pci-host/uninorth.c b/hw/pci-host/uninorth.c
index 8396c91d59..e3abe3c0f9 100644
--- a/hw/pci-host/uninorth.c
+++ b/hw/pci-host/uninorth.c
@@ -26,7 +26,7 @@
 #include "hw/irq.h"
 #include "hw/qdev-properties.h"
 #include "qemu/module.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_host.h"
 #include "hw/pci-host/uninorth.h"
 #include "trace.h"
diff --git a/hw/pci-host/versatile.c b/hw/pci-host/versatile.c
index f66384fa02..0d50ea4cc0 100644
--- a/hw/pci-host/versatile.c
+++ b/hw/pci-host/versatile.c
@@ -12,7 +12,7 @@
 #include "hw/sysbus.h"
 #include "migration/vmstate.h"
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_bus.h"
 #include "hw/pci/pci_host.h"
 #include "hw/qdev-properties.h"
diff --git a/hw/pci/pci-hmp-cmds.c b/hw/pci/pci-hmp-cmds.c
index fb7591d6ab..b09fce9377 100644
--- a/hw/pci/pci-hmp-cmds.c
+++ b/hw/pci/pci-hmp-cmds.c
@@ -15,6 +15,7 @@
 
 #include "qemu/osdep.h"
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "monitor/hmp.h"
 #include "monitor/monitor.h"
 #include "pci-internal.h"
diff --git a/hw/pci/pcie_host.c b/hw/pci/pcie_host.c
index 5abbe83220..3717e1a086 100644
--- a/hw/pci/pcie_host.c
+++ b/hw/pci/pcie_host.c
@@ -20,7 +20,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pcie_host.h"
 #include "qemu/module.h"
 
diff --git a/hw/pci/pcie_sriov.c b/hw/pci/pcie_sriov.c
index 8e3faf1f59..f0bd72e069 100644
--- a/hw/pci/pcie_sriov.c
+++ b/hw/pci/pcie_sriov.c
@@ -11,7 +11,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pcie.h"
 #include "hw/pci/pci_bus.h"
 #include "hw/qdev-properties.h"
diff --git a/hw/pci/slotid_cap.c b/hw/pci/slotid_cap.c
index 36d021b4a6..8372d05d9e 100644
--- a/hw/pci/slotid_cap.c
+++ b/hw/pci/slotid_cap.c
@@ -1,6 +1,6 @@
 #include "qemu/osdep.h"
 #include "hw/pci/slotid_cap.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "qemu/error-report.h"
 #include "qapi/error.h"
 
diff --git a/hw/ppc/ppc440_pcix.c b/hw/ppc/ppc440_pcix.c
index 788d25514a..f10f93c533 100644
--- a/hw/ppc/ppc440_pcix.c
+++ b/hw/ppc/ppc440_pcix.c
@@ -26,7 +26,7 @@
 #include "hw/irq.h"
 #include "hw/ppc/ppc.h"
 #include "hw/ppc/ppc4xx.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_host.h"
 #include "trace.h"
 #include "qom/object.h"
diff --git a/hw/ppc/ppc4xx_pci.c b/hw/ppc/ppc4xx_pci.c
index 8642b96455..1d4a50fa7c 100644
--- a/hw/ppc/ppc4xx_pci.c
+++ b/hw/ppc/ppc4xx_pci.c
@@ -29,7 +29,7 @@
 #include "migration/vmstate.h"
 #include "qemu/module.h"
 #include "sysemu/reset.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_host.h"
 #include "trace.h"
 #include "qom/object.h"
diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index 2a76b4e0b5..d8aeee0b7e 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -22,6 +22,7 @@
 #include "hw/ppc/spapr.h"
 #include "hw/pci-host/spapr.h"
 #include "hw/pci/msix.h"
+#include "hw/pci/pci_device.h"
 #include "hw/vfio/vfio.h"
 #include "qemu/error-report.h"
 
diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
index 77008552f4..c948baf052 100644
--- a/hw/rdma/rdma_utils.c
+++ b/hw/rdma/rdma_utils.c
@@ -14,7 +14,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "trace.h"
 #include "rdma_utils.h"
 
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 9abe95130c..69137e0757 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -16,6 +16,7 @@
 #include "exec/memory-internal.h"
 #include "qemu/error-report.h"
 #include "sysemu/hw_accel.h"
+#include "hw/pci/pci_device.h"
 #include "hw/s390x/s390-pci-inst.h"
 #include "hw/s390x/s390-pci-bus.h"
 #include "hw/s390x/s390-pci-kvm.h"
diff --git a/hw/scsi/esp-pci.c b/hw/scsi/esp-pci.c
index 1792f84cea..2f7f11e70b 100644
--- a/hw/scsi/esp-pci.c
+++ b/hw/scsi/esp-pci.c
@@ -24,7 +24,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/irq.h"
 #include "hw/nvram/eeprom93xx.h"
 #include "hw/scsi/esp.h"
diff --git a/hw/scsi/lsi53c895a.c b/hw/scsi/lsi53c895a.c
index 50979640c3..af93557a9a 100644
--- a/hw/scsi/lsi53c895a.c
+++ b/hw/scsi/lsi53c895a.c
@@ -16,7 +16,7 @@
 #include "qemu/osdep.h"
 
 #include "hw/irq.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/scsi/scsi.h"
 #include "migration/vmstate.h"
 #include "sysemu/dma.h"
diff --git a/hw/smbios/smbios.c b/hw/smbios/smbios.c
index b4243de735..4869566cf5 100644
--- a/hw/smbios/smbios.c
+++ b/hw/smbios/smbios.c
@@ -28,6 +28,7 @@
 #include "hw/loader.h"
 #include "hw/boards.h"
 #include "hw/pci/pci_bus.h"
+#include "hw/pci/pci_device.h"
 #include "smbios_build.h"
 
 /* legacy structures and constants for <= 2.0 machines */
diff --git a/hw/usb/hcd-ohci-pci.c b/hw/usb/hcd-ohci-pci.c
index 8e1146b862..6b630d35a7 100644
--- a/hw/usb/hcd-ohci-pci.c
+++ b/hw/usb/hcd-ohci-pci.c
@@ -23,7 +23,7 @@
 #include "qemu/timer.h"
 #include "hw/usb.h"
 #include "migration/vmstate.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/sysbus.h"
 #include "hw/qdev-dma.h"
 #include "hw/qdev-properties.h"
diff --git a/hw/watchdog/wdt_i6300esb.c b/hw/watchdog/wdt_i6300esb.c
index 5693ec6a09..54c167cd35 100644
--- a/hw/watchdog/wdt_i6300esb.c
+++ b/hw/watchdog/wdt_i6300esb.c
@@ -24,7 +24,7 @@
 #include "qemu/module.h"
 #include "qemu/timer.h"
 #include "sysemu/watchdog.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "migration/vmstate.h"
 #include "qom/object.h"
 
diff --git a/tests/qtest/fuzz/generic_fuzz.c b/tests/qtest/fuzz/generic_fuzz.c
index afc1d20355..7326f6840b 100644
--- a/tests/qtest/fuzz/generic_fuzz.c
+++ b/tests/qtest/fuzz/generic_fuzz.c
@@ -24,6 +24,7 @@
 #include "exec/ramblock.h"
 #include "hw/qdev-core.h"
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/boards.h"
 #include "generic_fuzz_configs.h"
 #include "hw/mem/sparse-mem.h"
diff --git a/ui/util.c b/ui/util.c
index 907d60e032..d54bbb74fb 100644
--- a/ui/util.c
+++ b/ui/util.c
@@ -17,7 +17,7 @@
 
 #include "qemu/osdep.h"
 
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_device.h"
 #include "hw/pci/pci_bus.h"
 #include "qapi/error.h"
 #include "ui/console.h"
-- 
MST



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 09:21:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 09:21:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471763.731744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDMRP-0002ii-72; Thu, 05 Jan 2023 09:21:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471763.731744; Thu, 05 Jan 2023 09:21:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDMRP-0002ib-3a; Thu, 05 Jan 2023 09:21:39 +0000
Received: by outflank-mailman (input) for mailman id 471763;
 Thu, 05 Jan 2023 09:21:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+8dv=5C=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pDMRM-0002iT-Uw
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 09:21:37 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 59059493-8cda-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 10:21:35 +0100 (CET)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-43-VgtQ5RWpOqOlpYBu-NTB8Q-1; Thu, 05 Jan 2023 04:21:32 -0500
Received: by mail-wm1-f70.google.com with SMTP id
 fm25-20020a05600c0c1900b003d9702a11e5so17818777wmb.0
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 01:21:32 -0800 (PST)
Received: from redhat.com ([2.52.151.85]) by smtp.gmail.com with ESMTPSA id
 p7-20020a05600c1d8700b003cf4eac8e80sm2044043wms.23.2023.01.05.01.21.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 01:21:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59059493-8cda-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1672910494;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=WvJnFbsqNB85Ex5Jy9JWOBFwO+5P6BtsVC31mbY8Jko=;
	b=hglsu2slg4MnTDYTUy47wghObtzBvyy0Sz0A7flJqHBc5WfEWYsRk65a/7aSGx6yL5k2Jt
	f0Grl/pHJ8YJl3NHF1vJTcRX3+W7BoGwMBbnY8EaLQTsRxOHt9MzTKX/vC09kXbZ2trMf1
	XGsLUKtVK/MOjjiUXdj3/abKlOSzkqU=
X-MC-Unique: VgtQ5RWpOqOlpYBu-NTB8Q-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WvJnFbsqNB85Ex5Jy9JWOBFwO+5P6BtsVC31mbY8Jko=;
        b=I3r1yIlneraGv1ZM1ZfhOo1nQI8KthQbM62LKQFt2ytgN44yEh30phFZMck7v1ZBwb
         uEQ7BKG6bCvhpxmxf+kEz+2kmlNm9YQO4xIZuK9RBCktTkgKpJfFpuz4aF0KJL0gQdF8
         x8eIfPTQhOeDX3r8Xlcx+1t2EVqblguIMBTCsy1aFgfvKxZxAZOUzyHhb0x6AoUZnCiK
         TjbBTnfXRlAoMh18ZqWN8SkP9E3WtvvdYRQYrsNJHKn3k+b3O36aErjNkvyiz0H5M8J4
         Qh8KttUDjvQcSt3I9SvnPK5moul3jYkzM/+CWFWi3o2KPxBrX1VtvTKc9ZDkaLxiH6vO
         tGAQ==
X-Gm-Message-State: AFqh2kpiEnk7eUxPF2EuCjn5tPULH35C9SjveQr63cEeX9W6+mjvTg27
	/z/R3O67Tkr3VqhLdkLRNqxeQNsAM4ER6g36AZNhzzhu4bIvX2RE/UyieCAqKDY3KOBrO0yy8dW
	lHdeKSPvVSRTU6s9cJdqQlC70UAQ=
X-Received: by 2002:a05:600c:1c21:b0:3d2:2043:9cb7 with SMTP id j33-20020a05600c1c2100b003d220439cb7mr34918982wms.5.1672910491475;
        Thu, 05 Jan 2023 01:21:31 -0800 (PST)
X-Google-Smtp-Source: AMrXdXtLDlc36VB/aXU7Q2yc/oSic44vXngqutQ8bJreyz+6azTWmcavTASW5ywJO485dZiUmBNxkw==
X-Received: by 2002:a05:600c:1c21:b0:3d2:2043:9cb7 with SMTP id j33-20020a05600c1c2100b003d220439cb7mr34918960wms.5.1672910491141;
        Thu, 05 Jan 2023 01:21:31 -0800 (PST)
Date: Thu, 5 Jan 2023 04:21:26 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Markus Armbruster <armbru@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Igor Mammedov <imammedo@redhat.com>, Ani Sinha <ani@anisinha.ca>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Yuval Shaia <yuval.shaia.ml@gmail.com>, Fam Zheng <fam@euphon.net>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Ben Widawsky <ben.widawsky@intel.com>,
	Jonathan Cameron <jonathan.cameron@huawei.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Andrey Smirnov <andrew.smirnov@gmail.com>,
	Xiaojuan Yang <yangxiaojuan@loongson.cn>,
	Song Gao <gaosong@loongson.cn>,
	=?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>,
	Paul Burton <paulburton@kernel.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	xen-devel@lists.xenproject.org, qemu-arm@nongnu.org,
	qemu-ppc@nongnu.org
Subject: [PULL 27/51] include/hw/pci: Break inclusion loop pci_bridge.h and
 cxl.h
Message-ID: <20230105091310.263867-28-mst@redhat.com>
References: <20230105091310.263867-1-mst@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230105091310.263867-1-mst@redhat.com>
X-Mailer: git-send-email 2.27.0.106.g8ac3dc51b1
X-Mutt-Fcc: =sent
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

From: Markus Armbruster <armbru@redhat.com>

hw/pci/pci_bridge.h and hw/cxl/cxl.h include each other.

Fortunately, breaking the loop is merely a matter of deleting
unnecessary includes from headers, and adding them back in places
where they are now missing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221222100330.380143-2-armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/alpha/alpha_sys.h              | 1 -
 hw/rdma/rdma_utils.h              | 1 -
 hw/rdma/vmw/pvrdma.h              | 1 -
 hw/usb/hcd-ehci.h                 | 1 -
 hw/xen/xen_pt.h                   | 1 -
 include/hw/cxl/cxl.h              | 1 -
 include/hw/cxl/cxl_cdat.h         | 1 +
 include/hw/cxl/cxl_device.h       | 1 +
 include/hw/cxl/cxl_pci.h          | 2 --
 include/hw/i386/ich9.h            | 4 ----
 include/hw/i386/x86-iommu.h       | 1 -
 include/hw/isa/vt82c686.h         | 1 -
 include/hw/pci-host/designware.h  | 3 ---
 include/hw/pci-host/i440fx.h      | 2 +-
 include/hw/pci-host/ls7a.h        | 2 --
 include/hw/pci-host/pnv_phb3.h    | 2 --
 include/hw/pci-host/pnv_phb4.h    | 3 +--
 include/hw/pci-host/xilinx-pcie.h | 1 -
 include/hw/pci/pcie.h             | 1 -
 include/hw/virtio/virtio-scsi.h   | 1 -
 hw/alpha/pci.c                    | 1 +
 hw/alpha/typhoon.c                | 2 +-
 hw/i386/acpi-build.c              | 2 +-
 hw/pci-bridge/i82801b11.c         | 2 +-
 hw/rdma/rdma_utils.c              | 1 +
 hw/scsi/virtio-scsi.c             | 1 +
 26 files changed, 10 insertions(+), 30 deletions(-)

diff --git a/hw/alpha/alpha_sys.h b/hw/alpha/alpha_sys.h
index 2263e821da..a303c58438 100644
--- a/hw/alpha/alpha_sys.h
+++ b/hw/alpha/alpha_sys.h
@@ -5,7 +5,6 @@
 
 #include "target/alpha/cpu-qom.h"
 #include "hw/pci/pci.h"
-#include "hw/pci/pci_host.h"
 #include "hw/boards.h"
 #include "hw/intc/i8259.h"
 
diff --git a/hw/rdma/rdma_utils.h b/hw/rdma/rdma_utils.h
index 0c6414e7e0..54e4f56edd 100644
--- a/hw/rdma/rdma_utils.h
+++ b/hw/rdma/rdma_utils.h
@@ -18,7 +18,6 @@
 #define RDMA_UTILS_H
 
 #include "qemu/error-report.h"
-#include "hw/pci/pci.h"
 #include "sysemu/dma.h"
 
 #define rdma_error_report(fmt, ...) \
diff --git a/hw/rdma/vmw/pvrdma.h b/hw/rdma/vmw/pvrdma.h
index d08965d3e2..0caf95ede8 100644
--- a/hw/rdma/vmw/pvrdma.h
+++ b/hw/rdma/vmw/pvrdma.h
@@ -18,7 +18,6 @@
 
 #include "qemu/units.h"
 #include "qemu/notify.h"
-#include "hw/pci/pci.h"
 #include "hw/pci/msix.h"
 #include "chardev/char-fe.h"
 #include "hw/net/vmxnet3_defs.h"
diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
index a173707d9b..4d4b2830b7 100644
--- a/hw/usb/hcd-ehci.h
+++ b/hw/usb/hcd-ehci.h
@@ -23,7 +23,6 @@
 #include "sysemu/dma.h"
 #include "hw/pci/pci.h"
 #include "hw/sysbus.h"
-#include "qom/object.h"
 
 #ifndef EHCI_DEBUG
 #define EHCI_DEBUG   0
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index e7c4316a7d..cf10fc7bbf 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -2,7 +2,6 @@
 #define XEN_PT_H
 
 #include "hw/xen/xen_common.h"
-#include "hw/pci/pci.h"
 #include "xen-host-pci-device.h"
 #include "qom/object.h"
 
diff --git a/include/hw/cxl/cxl.h b/include/hw/cxl/cxl.h
index 38e0e271d5..5129557bee 100644
--- a/include/hw/cxl/cxl.h
+++ b/include/hw/cxl/cxl.h
@@ -13,7 +13,6 @@
 
 #include "qapi/qapi-types-machine.h"
 #include "qapi/qapi-visit-machine.h"
-#include "hw/pci/pci_bridge.h"
 #include "hw/pci/pci_host.h"
 #include "cxl_pci.h"
 #include "cxl_component.h"
diff --git a/include/hw/cxl/cxl_cdat.h b/include/hw/cxl/cxl_cdat.h
index e9eda00142..7f67638685 100644
--- a/include/hw/cxl/cxl_cdat.h
+++ b/include/hw/cxl/cxl_cdat.h
@@ -11,6 +11,7 @@
 #define CXL_CDAT_H
 
 #include "hw/cxl/cxl_pci.h"
+#include "hw/pci/pcie_doe.h"
 
 /*
  * Reference:
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 449b0edfe9..fd475b947b 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -10,6 +10,7 @@
 #ifndef CXL_DEVICE_H
 #define CXL_DEVICE_H
 
+#include "hw/pci/pci.h"
 #include "hw/register.h"
 
 /*
diff --git a/include/hw/cxl/cxl_pci.h b/include/hw/cxl/cxl_pci.h
index 3cb79eca1e..aca14845ab 100644
--- a/include/hw/cxl/cxl_pci.h
+++ b/include/hw/cxl/cxl_pci.h
@@ -11,8 +11,6 @@
 #define CXL_PCI_H
 
 #include "qemu/compiler.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pcie.h"
 #include "hw/cxl/cxl_cdat.h"
 
 #define CXL_VENDOR_ID 0x1e98
diff --git a/include/hw/i386/ich9.h b/include/hw/i386/ich9.h
index 23ee8e371b..222781e8b9 100644
--- a/include/hw/i386/ich9.h
+++ b/include/hw/i386/ich9.h
@@ -5,12 +5,8 @@
 #include "hw/sysbus.h"
 #include "hw/i386/pc.h"
 #include "hw/isa/apm.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pcie_host.h"
-#include "hw/pci/pci_bridge.h"
 #include "hw/acpi/acpi.h"
 #include "hw/acpi/ich9.h"
-#include "hw/pci/pci_bus.h"
 #include "qom/object.h"
 
 void ich9_lpc_set_irq(void *opaque, int irq_num, int level);
diff --git a/include/hw/i386/x86-iommu.h b/include/hw/i386/x86-iommu.h
index 7637edb430..8d8d53b18b 100644
--- a/include/hw/i386/x86-iommu.h
+++ b/include/hw/i386/x86-iommu.h
@@ -21,7 +21,6 @@
 #define HW_I386_X86_IOMMU_H
 
 #include "hw/sysbus.h"
-#include "hw/pci/pci.h"
 #include "hw/pci/msi.h"
 #include "qom/object.h"
 
diff --git a/include/hw/isa/vt82c686.h b/include/hw/isa/vt82c686.h
index eaa07881c5..e273cd38dc 100644
--- a/include/hw/isa/vt82c686.h
+++ b/include/hw/isa/vt82c686.h
@@ -1,7 +1,6 @@
 #ifndef HW_VT82C686_H
 #define HW_VT82C686_H
 
-#include "hw/pci/pci.h"
 
 #define TYPE_VT82C686B_ISA "vt82c686b-isa"
 #define TYPE_VT82C686B_USB_UHCI "vt82c686b-usb-uhci"
diff --git a/include/hw/pci-host/designware.h b/include/hw/pci-host/designware.h
index 6d9b51ae67..908f3d946b 100644
--- a/include/hw/pci-host/designware.h
+++ b/include/hw/pci-host/designware.h
@@ -22,9 +22,6 @@
 #define DESIGNWARE_H
 
 #include "hw/sysbus.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pci_bus.h"
-#include "hw/pci/pcie_host.h"
 #include "hw/pci/pci_bridge.h"
 #include "qom/object.h"
 
diff --git a/include/hw/pci-host/i440fx.h b/include/hw/pci-host/i440fx.h
index d02bf1ed6b..fc93e22732 100644
--- a/include/hw/pci-host/i440fx.h
+++ b/include/hw/pci-host/i440fx.h
@@ -11,7 +11,7 @@
 #ifndef HW_PCI_I440FX_H
 #define HW_PCI_I440FX_H
 
-#include "hw/pci/pci_bus.h"
+#include "hw/pci/pci.h"
 #include "hw/pci-host/pam.h"
 #include "qom/object.h"
 
diff --git a/include/hw/pci-host/ls7a.h b/include/hw/pci-host/ls7a.h
index df7fa55a30..b27db8e2ca 100644
--- a/include/hw/pci-host/ls7a.h
+++ b/include/hw/pci-host/ls7a.h
@@ -8,8 +8,6 @@
 #ifndef HW_LS7A_H
 #define HW_LS7A_H
 
-#include "hw/pci/pci.h"
-#include "hw/pci/pcie_host.h"
 #include "hw/pci-host/pam.h"
 #include "qemu/units.h"
 #include "qemu/range.h"
diff --git a/include/hw/pci-host/pnv_phb3.h b/include/hw/pci-host/pnv_phb3.h
index 4854f6d2f6..f791ebda9b 100644
--- a/include/hw/pci-host/pnv_phb3.h
+++ b/include/hw/pci-host/pnv_phb3.h
@@ -10,8 +10,6 @@
 #ifndef PCI_HOST_PNV_PHB3_H
 #define PCI_HOST_PNV_PHB3_H
 
-#include "hw/pci/pcie_host.h"
-#include "hw/pci/pcie_port.h"
 #include "hw/ppc/xics.h"
 #include "qom/object.h"
 #include "hw/pci-host/pnv_phb.h"
diff --git a/include/hw/pci-host/pnv_phb4.h b/include/hw/pci-host/pnv_phb4.h
index 50d4faa001..d9cea3f952 100644
--- a/include/hw/pci-host/pnv_phb4.h
+++ b/include/hw/pci-host/pnv_phb4.h
@@ -10,8 +10,7 @@
 #ifndef PCI_HOST_PNV_PHB4_H
 #define PCI_HOST_PNV_PHB4_H
 
-#include "hw/pci/pcie_host.h"
-#include "hw/pci/pcie_port.h"
+#include "hw/pci/pci_bus.h"
 #include "hw/ppc/xive.h"
 #include "qom/object.h"
 
diff --git a/include/hw/pci-host/xilinx-pcie.h b/include/hw/pci-host/xilinx-pcie.h
index 89be88d87f..e1b3c1c280 100644
--- a/include/hw/pci-host/xilinx-pcie.h
+++ b/include/hw/pci-host/xilinx-pcie.h
@@ -21,7 +21,6 @@
 #define HW_XILINX_PCIE_H
 
 #include "hw/sysbus.h"
-#include "hw/pci/pci.h"
 #include "hw/pci/pci_bridge.h"
 #include "hw/pci/pcie_host.h"
 #include "qom/object.h"
diff --git a/include/hw/pci/pcie.h b/include/hw/pci/pcie.h
index 698d3de851..798a262a0a 100644
--- a/include/hw/pci/pcie.h
+++ b/include/hw/pci/pcie.h
@@ -26,7 +26,6 @@
 #include "hw/pci/pcie_aer.h"
 #include "hw/pci/pcie_sriov.h"
 #include "hw/hotplug.h"
-#include "hw/pci/pcie_doe.h"
 
 typedef enum {
     /* for attention and power indicator */
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index a36aad9c86..37b75e15e3 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -20,7 +20,6 @@
 #define VIRTIO_SCSI_SENSE_SIZE 0
 #include "standard-headers/linux/virtio_scsi.h"
 #include "hw/virtio/virtio.h"
-#include "hw/pci/pci.h"
 #include "hw/scsi/scsi.h"
 #include "chardev/char-fe.h"
 #include "sysemu/iothread.h"
diff --git a/hw/alpha/pci.c b/hw/alpha/pci.c
index 72251fcdf0..7c18297177 100644
--- a/hw/alpha/pci.c
+++ b/hw/alpha/pci.c
@@ -7,6 +7,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "hw/pci/pci_host.h"
 #include "alpha_sys.h"
 #include "qemu/log.h"
 #include "trace.h"
diff --git a/hw/alpha/typhoon.c b/hw/alpha/typhoon.c
index bd39c8ca86..49a80550c5 100644
--- a/hw/alpha/typhoon.c
+++ b/hw/alpha/typhoon.c
@@ -10,10 +10,10 @@
 #include "qemu/module.h"
 #include "qemu/units.h"
 #include "qapi/error.h"
+#include "hw/pci/pci_host.h"
 #include "cpu.h"
 #include "hw/irq.h"
 #include "alpha_sys.h"
-#include "qom/object.h"
 
 
 #define TYPE_TYPHOON_PCI_HOST_BRIDGE "typhoon-pcihost"
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index aa15b11cde..127c4e2d50 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -27,7 +27,7 @@
 #include "acpi-common.h"
 #include "qemu/bitmap.h"
 #include "qemu/error-report.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_bridge.h"
 #include "hw/cxl/cxl.h"
 #include "hw/core/cpu.h"
 #include "target/i386/cpu.h"
diff --git a/hw/pci-bridge/i82801b11.c b/hw/pci-bridge/i82801b11.c
index d9f224818b..f3b4a14611 100644
--- a/hw/pci-bridge/i82801b11.c
+++ b/hw/pci-bridge/i82801b11.c
@@ -42,7 +42,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/pci/pci.h"
+#include "hw/pci/pci_bridge.h"
 #include "migration/vmstate.h"
 #include "qemu/module.h"
 #include "hw/i386/ich9.h"
diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
index 5a7ef63ad2..77008552f4 100644
--- a/hw/rdma/rdma_utils.c
+++ b/hw/rdma/rdma_utils.c
@@ -14,6 +14,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "hw/pci/pci.h"
 #include "trace.h"
 #include "rdma_utils.h"
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 6f6e2e32ba..2b649ca976 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -22,6 +22,7 @@
 #include "qemu/iov.h"
 #include "qemu/module.h"
 #include "sysemu/block-backend.h"
+#include "sysemu/dma.h"
 #include "hw/qdev-properties.h"
 #include "hw/scsi/scsi.h"
 #include "scsi/constants.h"
-- 
MST



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 09:26:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 09:26:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471770.731755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDMW6-0003Lt-PS; Thu, 05 Jan 2023 09:26:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471770.731755; Thu, 05 Jan 2023 09:26:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDMW6-0003Lm-MU; Thu, 05 Jan 2023 09:26:30 +0000
Received: by outflank-mailman (input) for mailman id 471770;
 Thu, 05 Jan 2023 09:26:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDMW5-0003Lg-Q0
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 09:26:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDMW3-0005OB-JW; Thu, 05 Jan 2023 09:26:27 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=[192.168.4.62])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDMW3-0007Rl-Cp; Thu, 05 Jan 2023 09:26:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=PEBC+UaIwLwZGAnFM0C127VFp32FpIJgDVgumc8M6K8=; b=UgaFoxdDnHb1LiRcDtwU/6apht
	o9f37caHdBmW9GMG+4+7I+FHY7bkxNmxdQyenIePj9utCd00Jdr3Z1YZGfACn4L7/ppvQZYOQCyxe
	ofy0Z6VzJCTCQ0jaBN5tPeR5S1bdSZDzBQLG+QXBC0i6R2ZX8eGOl65Spc5zPzfsucgw=;
Message-ID: <1264e5cc-1960-95d3-5ecb-d6f23d194aa4@xen.org>
Date: Thu, 5 Jan 2023 09:26:25 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <michal.orzel@amd.com>
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230103102519.26224-1-michal.orzel@amd.com>
 <alpine.DEB.2.22.394.2301041546230.4079@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301041546230.4079@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 04/01/2023 23:47, Stefano Stabellini wrote:
> On Tue, 3 Jan 2023, Michal Orzel wrote:
>> Printing memory size in hex without 0x prefix can be misleading, so
>> add it. Also, take the opportunity to adhere to 80 chars line length
>> limit by moving the printk arguments to the next line.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>> Changes in v2:
>>   - was: "Print memory size in decimal in construct_domU"
>>   - stick to hex but add a 0x prefix
>>   - adhere to 80 chars line length limit
> 
> Honestly I prefer decimal, but hex is also fine.

Decimal is perfect for very small values, but as we print the amount in
KB it quickly becomes a big mess. Here are some examples (decimal first,
then hexadecimal):

   512MB: 524288 vs 0x80000
   555MB: 568320 vs 0x8ac00
   1GB: 1048576 vs 0x100000
   512GB: 536870912 vs 0x20000000
   1TB: 1073741824 vs 0x40000000

For power-of-two values, you might be able to find your way with
decimal. It is more difficult for non-power-of-two values unless you
have a calculator at hand.

The other option I suggested in v1 is to print the amount in KB/MB/GB.
Would that be better?

That said, to be honest, I am not entirely sure why we are actually
printing in KB. It would seem strange for someone to create a guest
with memory not aligned to 1MB.

So I would consider checking that the size is 1MB-aligned and then
printing the value in MB. This would remove one order of magnitude and
make the value more readable in decimal.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 10:00:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 10:00:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471777.731765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDN2V-0006lt-8F; Thu, 05 Jan 2023 09:59:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471777.731765; Thu, 05 Jan 2023 09:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDN2V-0006lm-5O; Thu, 05 Jan 2023 09:59:59 +0000
Received: by outflank-mailman (input) for mailman id 471777;
 Thu, 05 Jan 2023 09:59:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GSs2=5C=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pDN2U-0006lg-12
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 09:59:58 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on2041.outbound.protection.outlook.com [40.107.236.41])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b39c3a16-8cdf-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 10:59:54 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by PH0PR12MB7981.namprd12.prod.outlook.com (2603:10b6:510:26c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 09:59:52 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::9856:da7:1ff1:d55c]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::9856:da7:1ff1:d55c%5]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 09:59:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b39c3a16-8cdf-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nsVTOw4vHJEFa7Uz+G6Nx/OkRvX/htpyZ2C4dcMKZcw9w1hhcg2h9T0X1D5FyOcHtlzmepZY0y4XFNepiTfg4zzppTdO3zcw2MO7XqklyH6XFbdAHqAQq8TpJAsPSjf39pL6YnbVAJIaR/kJwzPOjXRmYERBbdC4azJoAs7eavbWEnZLcWFTj//ShRPtdHMWaKXBxZ30mhd8WmVaeUYEr7fQOBEfxUFzcOIg6WEPFrPUQVXYFR0dQtVcHSEe90wMbnANEyMMsJ+4bmdC4p7ZInd6W1P2myAT/W5qH+t+GsgjxU9o3zqFnk0dk1VJ6gGN8PeoWvrKu8XAOcyIz3nYfw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Jlfi8Xu69UjMjxtpqFoHVECk8dVZwxh51RO7VwTAM48=;
 b=TLmNfUTQuLWX7EhxfwEH95orLp/iaAY7w0apawK0ZkdQh68PX2GPdATL9swzssEmKpxg/snldHvv/Xrw1I9kdhL3hOh9pzmS5U04kPj+Zz1rWeEK16qgKmEssMwasZ+hSr53KvfGL1Gbv5bKrfQNiU1Cp+3FvSLg6ry/90JUoSNTQrvCQbOz+0pW/HtFFZIc9234gSUxcw1cxTobWpiA/5SrsN7rDjUMH3Ip2y+FAN16kR7HT2G6DlPHL6gyObW9ssFSYajsTVWTgCx3DD4dsgcITxTfAWGMxsCongdXDpnAF145p3mTgHyCpKu+zS5SiHhXVjd0WjEdFJrFZHQLzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Jlfi8Xu69UjMjxtpqFoHVECk8dVZwxh51RO7VwTAM48=;
 b=pVEqIUntoznkKqsbVSPlQVsaOhdUA6SKcecfwgvt79X65QsBpUf0YLqHJiYhKocOfKGF/g6vt79uBRAJ1n70N9V+R1RrgtvJlPM6kjOYJsc/4ltyQZrILqj0Tdh9hA9O3DGp8EGJexy8tHOXJCFqb62wvrR4/5x9fi7azgSvzAE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <29460d07-cd43-7415-7125-6ed01f3c2920@amd.com>
Date: Thu, 5 Jan 2023 09:59:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
To: xen-devel@lists.xenproject.org
References: <20230103102519.26224-1-michal.orzel@amd.com>
 <alpine.DEB.2.22.394.2301041546230.4079@ubuntu-linux-20-04-desktop>
 <1264e5cc-1960-95d3-5ecb-d6f23d194aa4@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <1264e5cc-1960-95d3-5ecb-d6f23d194aa4@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO6P265CA0027.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ff::13) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|PH0PR12MB7981:EE_
X-MS-Office365-Filtering-Correlation-Id: 60671af0-5172-4eef-b952-08daef0396fc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 60671af0-5172-4eef-b952-08daef0396fc
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 09:59:52.5462
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9nSbIm7CBpnYpx+Czi4GWGKvVAlPjD2pDf4SClN9Gfr6J7J7q68+CnrCCkNeRCvn1NdInQpTrZU/icojQKmVFg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB7981

Hi Julien,

I have a clarification.

On 05/01/2023 09:26, Julien Grall wrote:
> Hi Stefano,
>
> On 04/01/2023 23:47, Stefano Stabellini wrote:
>> On Tue, 3 Jan 2023, Michal Orzel wrote:
>>> Printing memory size in hex without 0x prefix can be misleading, so
>>> add it. Also, take the opportunity to adhere to 80 chars line length
>>> limit by moving the printk arguments to the next line.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>> ---
>>> Changes in v2:
>>>   - was: "Print memory size in decimal in construct_domU"
>>>   - stick to hex but add a 0x prefix
>>>   - adhere to 80 chars line length limit
>>
>> Honestly I prefer decimal, but hex is also fine.
>
> Decimal is perfect for very small values, but as we print the amount in
> KB it quickly becomes a big mess. Here are some examples (decimal first,
> then hexadecimal):
>
>   512MB: 524288 vs 0x80000
>   555MB: 568320 vs 0x8ac00
>   1GB: 1048576 vs 0x100000
>   512GB: 536870912 vs 0x20000000
>   1TB: 1073741824 vs 0x40000000
>
> For power-of-two values, you might be able to find your way with
> decimal. It is more difficult for non-power-of-two values unless you
> have a calculator at hand.
>
> The other option I suggested in v1 is to print the amount in KB/MB/GB.
> Would that be better?
>
> That said, to be honest, I am not entirely sure why we are actually
> printing in KB. It would seem strange for someone to create a guest
> with memory not aligned to 1MB.

For an RTOS (e.g. Zephyr or FreeRTOS), it should be possible for guests
to have less than 1 MB of memory, shouldn't it?

- Ayan

>
> So I would consider checking that the size is 1MB-aligned and then
> printing the value in MB. This would remove one order of magnitude and
> make the value more readable in decimal.
>
> Cheers,
>
> -- 
> Julien Grall
>


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 10:12:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 10:12:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471785.731777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDNEL-0000sZ-Gy; Thu, 05 Jan 2023 10:12:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471785.731777; Thu, 05 Jan 2023 10:12:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDNEL-0000sS-Dc; Thu, 05 Jan 2023 10:12:13 +0000
Received: by outflank-mailman (input) for mailman id 471785;
 Thu, 05 Jan 2023 10:12:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDNEK-0000sM-5k
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 10:12:12 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2059.outbound.protection.outlook.com [40.107.8.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a31f8f5-8ce1-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 11:12:10 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8359.eurprd04.prod.outlook.com (2603:10a6:20b:3b3::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 10:12:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 10:12:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a31f8f5-8ce1-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZVLcHCyk2/JLnkCZGKicwOdU4frcid+8BrB5gWY7rJ1LPM5XcWK6Q7leASIKtRc22iSWjUL1iHL1jtGkk7d6+sxmxaFay5PHP+TN8R/W5vtzsmQ5VzeSv94jlr2N4qXNqAz5sh38V10HZwBTK2OG4NX/3n7JCVYxrkLVCWFjbwBl0uJKAqeGdzTujndXTB2GjIiOwCYf/Xl9Hw0CPF3CjuMFlJC4J1wwglSi1RzWQvfC8aEWbJad01I/Kr1EvkHOiMtKj3LSINHdHtr2fS35V+hc//kxBoS44Y9/uP9ovt6qnrkxi/YbZ07tNQ6e4do+Nghm1Z081BffOITlydYRUQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1RJywEcRfnPvZxYe9Fm8SQ3wBZj+5X3Tbq3pifAWXgM=;
 b=QQ/6VforayYyTv8IJdLgKHVa2KiJ8zKZ5jeYzuB3kltOJKWwYDJIiVv6WNmE4f1HPvvWRRr/67m7q2GDBvDiSx2b0ZtztqAHOOYccVvK3ZVEEZDS1Z3VdXqpvR4KCfl6RZ6uj+9w/RyUblsdO/TLQauPEATJuznt0znvbOS4pGzi4CodCt327J2I1lamsP13xDcO9QypGR9na+0o9FUCBy//PUgM2Kc3igp6rAAscdCzaxG6PhRQSbIw7MqZGQ5YmTybYWebXixl+ufmiPvjnrT/DrOdL0DKtRsJrH8qprjcyEZhRWcXQkZlg63+8IXXdRtRJgKLmIWrGXvThLPKEw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1RJywEcRfnPvZxYe9Fm8SQ3wBZj+5X3Tbq3pifAWXgM=;
 b=277lANii0BjzasAhf3UwxAut5Z5PpkkyQxpVz4d6XM/GOj2W9FZm5anliZxYvF3eqC7LXOFDiDHr/CqjBIg2x+7yb1Ks6vXYdvWlU0UoTlIg9Lzi2L8Ym/SdXKtDQqAIUd/nnM2xH0yAZg0s1S+WEL1unDUAuGD7135pHjU1XnqPi+nHLGHJhPnGTdI5j2LuoWb+cEwN87fVJBG3GkoQ7pO88HQPMsUmcEUMAKeAyw9ZX0sPX5XtP+lrRQmeNpnXvHlzYCMbLLF1QFUf63GRTr8A5DbzFR7WYsTNhhayVwqZnTyWUX7PPoVxl3SRPWIZdQRp5rFjt8YWBH1OPnBmxw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <15dd4897-d87b-9a0d-fc99-551a1b4be04d@suse.com>
Date: Thu, 5 Jan 2023 11:12:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] hvmloader: use memory type constants
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0031.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8359:EE_
X-MS-Office365-Filtering-Correlation-Id: 2ddca258-141c-4210-c54c-08daef054d46
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ddca258-141c-4210-c54c-08daef054d46
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 10:12:07.8111
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6swL7whTXVbPMkhnwUbrgVPJ7C59HAo6O6DCuQBXYNFKyeQQ9OTDHVwE/62gEx/XZWd56PTQMiE6taXmq4FLFA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8359

Now that the memory type constants are available in a header which is
okay to use from hvmloader sources, do away with the respective literal
numbers and silent assumptions.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Use simpler BCST() macro.

--- a/tools/firmware/hvmloader/cacheattr.c
+++ b/tools/firmware/hvmloader/cacheattr.c
@@ -22,6 +22,8 @@
 #include "util.h"
 #include "config.h"
 
+#include <xen/asm/x86-defns.h>
+
 #define MSR_MTRRphysBase(reg) (0x200 + 2 * (reg))
 #define MSR_MTRRphysMask(reg) (0x200 + 2 * (reg) + 1)
 #define MSR_MTRRcap          0x00fe
@@ -71,23 +73,28 @@ void cacheattr_init(void)
 
     addr_mask = ((1ull << phys_bits) - 1) & ~((1ull << 12) - 1);
     mtrr_cap = rdmsr(MSR_MTRRcap);
-    mtrr_def = (1u << 11) | 6; /* E, default type WB */
+    mtrr_def = (1u << 11) | X86_MT_WB; /* E, default type WB */
 
     /* Fixed-range MTRRs supported? */
     if ( mtrr_cap & (1u << 8) )
     {
+#define BCST(mt) ((mt) * 0x0101010101010101ULL)
         /* 0x00000-0x9ffff: Write Back (WB) */
-        content = 0x0606060606060606ull;
+        content = BCST(X86_MT_WB);
         wrmsr(MSR_MTRRfix64K_00000, content);
         wrmsr(MSR_MTRRfix16K_80000, content);
+
         /* 0xa0000-0xbffff: Write Combining (WC) */
         if ( mtrr_cap & (1u << 10) ) /* WC supported? */
-            content = 0x0101010101010101ull;
+            content = BCST(X86_MT_WC);
         wrmsr(MSR_MTRRfix16K_A0000, content);
+
         /* 0xc0000-0xfffff: Write Back (WB) */
-        content = 0x0606060606060606ull;
+        content = BCST(X86_MT_WB);
         for ( i = 0; i < 8; i++ )
             wrmsr(MSR_MTRRfix4K_C0000 + i, content);
+#undef BCST
+
         mtrr_def |= 1u << 10; /* FE */
         printf("fixed MTRRs ... ");
     }
@@ -106,7 +113,7 @@ void cacheattr_init(void)
             while ( ((base + size) < base) || ((base + size) > pci_mem_end) )
                 size >>= 1;
 
-            wrmsr(MSR_MTRRphysBase(i), base);
+            wrmsr(MSR_MTRRphysBase(i), base | X86_MT_UC);
             wrmsr(MSR_MTRRphysMask(i), (~(size - 1) & addr_mask) | (1u << 11));
 
             base += size;
@@ -121,7 +128,7 @@ void cacheattr_init(void)
             while ( (base + size < base) || (base + size > pci_hi_mem_end) )
                 size >>= 1;
 
-            wrmsr(MSR_MTRRphysBase(i), base);
+            wrmsr(MSR_MTRRphysBase(i), base | X86_MT_UC);
             wrmsr(MSR_MTRRphysMask(i), (~(size - 1) & addr_mask) | (1u << 11));
 
             base += size;


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 10:21:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 10:21:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471792.731788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDNNX-0002LX-Ed; Thu, 05 Jan 2023 10:21:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471792.731788; Thu, 05 Jan 2023 10:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDNNX-0002LQ-Bf; Thu, 05 Jan 2023 10:21:43 +0000
Received: by outflank-mailman (input) for mailman id 471792;
 Thu, 05 Jan 2023 10:21:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDNNV-0002LG-TK
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 10:21:42 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bcc3e809-8ce2-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 11:21:40 +0100 (CET)
Received: from mail-dm6nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 05:21:34 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6197.namprd03.prod.outlook.com (2603:10b6:208:30b::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 10:21:32 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 10:21:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcc3e809-8ce2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672914100;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=DJ7lKESvxhuCgrxIskdZvgGuje2QQJAQLZrTbRSZmj0=;
  b=egP0ACbQ52obv5bmSSUTjtvsL7V9sTQdV88RIXAYy7i+gfvlDxc+C7Sb
   ukYHN5NXfEHElb7FkKK4cIqRd0e6JMA0+92Vx2WiO9Je2YdVCTzJPMwy1
   3anVPnzzvgn+D8jK+FTfJXnvuGbEKlaDwwYhxnXvE/TTzY0QXfOD2f6l6
   A=;
X-IronPort-RemoteIP: 104.47.57.169
X-IronPort-MID: 90235615
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,302,1665460800"; 
   d="scan'208";a="90235615"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PJzzvtozj3aUq+4Pk+cLIIasSA97O/2/R3IgWvdvq1Ufn2C6ML3GHgXIj8BcbCIALygBwrfsuUKDqhaptYH3SbPtG8x4LHDK050IQn36XvQDW83ZDVKmODEEbkLddej/Fe0AsMcJj4WYVZs2raTgbl4hhA7tMlDIhx8q+tryF+YgSveOR9FEaYH9IsjxP8n5YrAWbu8o2pbvgf508FQEYlJ521WBa9HuasxHyc9qWndrhkeTUtdwu+Ldt4JfSmUIGTRL6fBb3QxMC+f9xu5IfjyRWA7t4LI1kIMkfMOYDRSCnc4/FoHIeb37yun4zaP1cGCQxRqq5BuBi0gCbrNgGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DJ7lKESvxhuCgrxIskdZvgGuje2QQJAQLZrTbRSZmj0=;
 b=Up87IpPx6xcrLuRJgyeluOkrpjjxCflQXnpHcvC82y1vu06BTQVbJY1q64x6/OKVJoubEG4c56OXqCZmbfXuvV32NODMztCnErt0Dih/oI0izQU8rW6UxoWh5q2taXgGMdD6WOyGC8HT9hVJuEgp5OLMXVTPFdUnArO/4mdUKN9hxXhjxdJ7698NFHxTY3d3I6g/QncZpxFXjC81NRXP4bPCeqTz3Ec0RKWz7lNJhLqfdcue6dfzEfGqcZqYYYtbBjlvkiqnxeL1Y430kTqmA9wVrxUI5Gz3BNZfYkRJOmdW41VIdEK3tSgtKDmZm/Z60Xh8L5+S+rw8KHIR9PW9rA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DJ7lKESvxhuCgrxIskdZvgGuje2QQJAQLZrTbRSZmj0=;
 b=T+/zy+FQ9daKcgrEHrdxwNdEmbffnV2buvrwnq7AaXxS4zGty2wrOhCDePuS+KSAPAuYg8/3aT6IOsAbIA63GKuu2NsXV3YONMlgq6/pC3WoT0GHw5B9wHgDS89lYCBoXt+OtO3DUhXOzLMuwoHUD0HszHsROupbWukcj0DjL4A=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH v2] hvmloader: use memory type constants
Thread-Topic: [PATCH v2] hvmloader: use memory type constants
Thread-Index: AQHZIO4xGYPRUU9lrUSG9wpp5EE7mK6PnRcA
Date: Thu, 5 Jan 2023 10:21:30 +0000
Message-ID: <aa0a1e28-42bd-3826-2911-288093ab2a00@citrix.com>
References: <15dd4897-d87b-9a0d-fc99-551a1b4be04d@suse.com>
In-Reply-To: <15dd4897-d87b-9a0d-fc99-551a1b4be04d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BL1PR03MB6197:EE_
x-ms-office365-filtering-correlation-id: 89282547-5ddb-4e9f-8e1f-08daef069ccd
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <5F7A3CB5A084FB4EBCDFCA5B88CB1816@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 89282547-5ddb-4e9f-8e1f-08daef069ccd
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 10:21:30.5749
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: okECWCMB6zLzoEvPQZVpeBL+tim3P7SCl9ZOW77NsN195i/X7EDjBEYh1yIPv/Z8J+pxOYdaIRdrtFXvqyx5UwIhOEYfbHbAHIiBvkqaUT8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6197

On 05/01/2023 10:12 am, Jan Beulich wrote:
> Now that we have them available in a header which is okay to use from
> hvmloader sources, do away with respective literal numbers and silent
> assumptions.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 11:09:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 11:09:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471799.731798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDO7z-0006jI-Ug; Thu, 05 Jan 2023 11:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471799.731798; Thu, 05 Jan 2023 11:09:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDO7z-0006jB-S3; Thu, 05 Jan 2023 11:09:43 +0000
Received: by outflank-mailman (input) for mailman id 471799;
 Thu, 05 Jan 2023 11:09:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDO7z-0006j4-1t
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 11:09:43 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2063.outbound.protection.outlook.com [40.107.6.63])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72874b91-8ce9-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 12:09:39 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8423.eurprd04.prod.outlook.com (2603:10a6:20b:3b5::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 11:09:39 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 11:09:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72874b91-8ce9-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kPrUxtfFQq/gUda+YtL0dXlEP1j6oELrH4y/nr07aqQInol4goKcFbqRc4MxsOuYSzAFQT6g/dczZBll9U1Y8+QxoKRC6Y7YHHMkdfkIlrpDNVGk3vHuIQbqJ7Rggd8D1NkkNFmdSDRx2pjrM06+AoTjuIqoOm6nTkMrGU4hH/RpjrSVs6pMa41pJ9nXUm+yNnn5tZsXi/3ugSYATzAhDHr5vflkOrQZLBbRiz8buJ2E6/59OPVzbsG6sGphj2Z0KZJHEzSxVyDhCGbwCFfLv2EXUKb8yPyJO1b7X1Y7OepDIp8AqG1zQz3ZRFdYVxnA/qfXt1TCB5CaNDEul3ORew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xtJ2tMl3hWQX0ZOOqnHW05T+xWUWp8nD3ypTcC84mxA=;
 b=Z8bPzqLT7Tpl9Hu3kX2IH5Nf7UxqJpTlDW4MOPMurW7x7HyBW1NtQAZ17z771Dd5jDophr63DpdseZ6Cf083tF5+irVRi9FPo39x0TlyQTyoBQvAsP7yUQnkYaXDTX34HeG1gBZbIEKHKa+XZvzGeq1o4bcsVN5BoSpvGBx140Ru1nv/7xsvjBUMkhHybPhYf8MFI/cThQYFHBoRNowRy6/NRVFevKaUP8XIrW0FTsJ3alQwR4qo1UBZOeKNG6nimpd0htNr13FKjL7FXDc+s74emyMchScSpoEc25nfzVuKeCT823slpNRq1M/3TFW8CwJHRZZKx6zCq+jNhzpWCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xtJ2tMl3hWQX0ZOOqnHW05T+xWUWp8nD3ypTcC84mxA=;
 b=lSlMtiULll5Z8uf0ZXNU86IRDBE4Mwb7KuKW3+0Es8lIM24UK6Y1TRo3QmvK942G0gmgWclNus4uhQOPHO9l0/Knd77deKJysoGAv7MsJjEenwI1c9PCX4clqmAweWbesfXAsWZ4GY6lbAUyFzcN/wChf5Ttl48uGIEqFFo4c9mWJWx85REUMtwRjyP9v0+L1wcPx76Q3kfXUsoEhm02ZZ/J8RTZ4m21wHbJ8goH3hAyFDpkA4QmiLq7eI5ay65M9uSO6V0U5IHifTBkYvcvLyeBo3oHrj7TLypHqGGjjAkmwh0CQj2ZiWwEAwyZtwWwPDvlwcsuc8j5ddhzXqOBGA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <139bab3a-fd3e-fec2-b7b4-f63dd9f9439a@suse.com>
Date: Thu, 5 Jan 2023 12:09:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: idle domains don't have a domain-page mapcache
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0202.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8423:EE_
X-MS-Office365-Filtering-Correlation-Id: 8fb5b746-b21c-46b8-54d3-08daef0d565f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8fb5b746-b21c-46b8-54d3-08daef0d565f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 11:09:39.1235
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +B+wBs2iRsoReeE210tsipcmTXN7ZbomIyk/FJWtmIpVSsWip4tYaO2w/V1w6rh8W/SYQ7rV1K8sRXEZkjxI2Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8423

First and foremost, correct a comment implying the opposite. Then, to
make the PV-vs-HVM distinction clearer, move the PV check earlier in
the function, making it unnecessary for both callers to perform the
check individually. Finally, return NULL from the function when running
on the idle domain's page tables, allowing a dcache->inuse check to
also become an assertion.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -28,8 +28,11 @@ static inline struct vcpu *mapcache_curr
     /*
      * When current isn't properly set up yet, this is equivalent to
      * running in an idle vCPU (callers must check for NULL).
+     *
+     * Non-PV domains don't have any mapcache.  For idle domains (which
+     * appear to be PV but also have no mapcache) see below.
      */
-    if ( !v )
+    if ( !v || !is_pv_vcpu(v) )
         return NULL;
 
     /*
@@ -41,19 +44,22 @@ static inline struct vcpu *mapcache_curr
         return NULL;
 
     /*
-     * If guest_table is NULL, and we are running a paravirtualised guest,
-     * then it means we are running on the idle domain's page table and must
-     * therefore use its mapcache.
+     * If guest_table is NULL for a PV domain (which includes IDLE), then it
+     * means we are running on the idle domain's page tables and therefore
+     * must not use any mapcache.
      */
-    if ( unlikely(pagetable_is_null(v->arch.guest_table)) && is_pv_vcpu(v) )
+    if ( unlikely(pagetable_is_null(v->arch.guest_table)) )
     {
         /* If we really are idling, perform lazy context switch now. */
-        if ( (v = idle_vcpu[smp_processor_id()]) == current )
+        if ( idle_vcpu[smp_processor_id()] == current )
             sync_local_execstate();
         /* We must now be running on the idle page table. */
         ASSERT(cr3_pa(read_cr3()) == __pa(idle_pg_table));
+        return NULL;
     }
 
+    ASSERT(!is_idle_vcpu(v));
+
     return v;
 }
 
@@ -82,13 +88,12 @@ void *map_domain_page(mfn_t mfn)
 #endif
 
     v = mapcache_current_vcpu();
-    if ( !v || !is_pv_vcpu(v) )
+    if ( !v )
         return mfn_to_virt(mfn_x(mfn));
 
     dcache = &v->domain->arch.pv.mapcache;
     vcache = &v->arch.pv.mapcache;
-    if ( !dcache->inuse )
-        return mfn_to_virt(mfn_x(mfn));
+    ASSERT(dcache->inuse);
 
     perfc_incr(map_domain_page_count);
 
@@ -187,7 +192,7 @@ void unmap_domain_page(const void *ptr)
     ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
 
     v = mapcache_current_vcpu();
-    ASSERT(v && is_pv_vcpu(v));
+    ASSERT(v);
 
     dcache = &v->domain->arch.pv.mapcache;
     ASSERT(dcache->inuse);


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 11:11:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 11:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471807.731810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDO9E-0008DN-DM; Thu, 05 Jan 2023 11:11:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471807.731810; Thu, 05 Jan 2023 11:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDO9E-0008DG-8p; Thu, 05 Jan 2023 11:11:00 +0000
Received: by outflank-mailman (input) for mailman id 471807;
 Thu, 05 Jan 2023 11:10:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDO9C-0008D0-Ns
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 11:10:58 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2080.outbound.protection.outlook.com [40.107.249.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0e0defa-8ce9-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 12:10:57 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6929.eurprd04.prod.outlook.com (2603:10a6:208:181::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 11:10:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 11:10:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0e0defa-8ce9-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AWZoSVZvVS+yUg0EpG/vMUAWdVh7Qo1rQaUII+hhTEbOZzXd40K+Xfg1Bsb2St6lhcHH+X3KuvXCBVu9lk7DgMwvPua9effvt/RTqnGYzkEm/4dpJfA4tyMi1euxjP3jSMAO2k9tbIYnUZje0xREeiJfFMo93SMDuOXes9+Zr6DrAj75wvJRFhrtiQ0HrtY9iEeOccAJE4zpTFjLu/CNFLMAItB+HCl7xZSb11DEkkw1Ne5ubIwEihcMG/mp9f5Lvzfk7ZniUDKF9T1vf+7wUW2uBGMcS6EccPQgFizn4yV/CoF+NqDxyHlARI/OlsUl/fb6W25l165lsjRMKklPgQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iUugKwc+XLh8jVo+BdOIcApbl8H5L6XGAwjWzprilPQ=;
 b=MK31eb6SjI/yXHQ26o2U1qacXuWd5JhoFc+O3P3VcrwrmUMCgrCaQ8C4CucMhl4UZfUNo6FbH8EC+hzVC+OttvKYU/PQRA7hKkOj+Vm+ZlJGZs3imAxSQEnPgFu8O1hUydvafIIqbSem/RlN+Y3+r3s2xmUT/26oSWJyLQTXeiVJ6Cx9HI8e7i5o45tfKpK+MwSVsskFLiMoOk0c2WfxeI2XJfanVEOEHwMgJybc0lvPLnO56KbxG16peR8uM5p/W+MVtWriRIJwm9IDFF5aYhZT4sYerbEb8ZZJBekDb9grUxTuR8U3rJmzkXc82e5qspT0ZggBMV1MLyQIAt0XtQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iUugKwc+XLh8jVo+BdOIcApbl8H5L6XGAwjWzprilPQ=;
 b=ODjQ5kewyuCXTJX3CKR0265bER3XIRcv/DPSRzYl7EGk0lOdTTUJZDI1mq2OhZRxxfoIQdG/BD1OXGjMOmvD8Uo8tYdW6O++q4Ea+hbbA/rxPU026cldN8SJJa91C1Qn6urKZ0V+QbIRu31t4xp0aaO08YVAfeCQYos+6oxUTqIjHcUBilTjLMI7TG7Yb8+ILUTVcRYtYNsrJO0vkULDtCiM+cTD4PAxtQRiW6Dhh4Wk/XGXl/9g6hfev+KHjwsbgZL00wEqkGVjX/22NjLb0AYu/tMqt0jDgGkjzmI3EWzNcU5ovFY0CbVYIrC42m4czmz/GSXTYiFsD19N2C4M3Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ae3f21f6-6a66-2fe5-9d4a-3f93e6dd64d7@suse.com>
Date: Thu, 5 Jan 2023 12:10:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] x86: adjustments to .fixup section handling
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0132.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6929:EE_
X-MS-Office365-Filtering-Correlation-Id: 516afa8e-5c22-4e95-a115-08daef0d8429
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com

1: macroize switches to/from .fixup section
2: split .fixup section with new enough gas

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 11:11:40 2023
Message-ID: <87efc536-3344-0459-b756-62a035db4090@suse.com>
Date: Thu, 5 Jan 2023 12:11:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 1/2] x86: macroize switches to/from .fixup section
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <ae3f21f6-6a66-2fe5-9d4a-3f93e6dd64d7@suse.com>
In-Reply-To: <ae3f21f6-6a66-2fe5-9d4a-3f93e6dd64d7@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com

This centralizes section name and attribute setting, thus simplifying
future changes to either of these.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -57,10 +57,10 @@ static inline int rdmsr_amd_safe(unsigne
 	int err;
 
 	asm volatile("1: rdmsr\n2:\n"
-		     ".section .fixup,\"ax\"\n"
+		     _ASM_FIXUP "\n"
 		     "3: movl %6,%2\n"
 		     "   jmp 2b\n"
-		     ".previous\n"
+		     _ASM_FIXUP_END "\n"
 		     _ASM_EXTABLE(1b, 3b)
 		     : "=a" (*lo), "=d" (*hi), "=r" (err)
 		     : "c" (msr), "D" (0x9c5a203a), "2" (0), "i" (-EFAULT));
@@ -74,10 +74,10 @@ static inline int wrmsr_amd_safe(unsigne
 	int err;
 
 	asm volatile("1: wrmsr\n2:\n"
-		     ".section .fixup,\"ax\"\n"
+		     _ASM_FIXUP "\n"
 		     "3: movl %6,%0\n"
 		     "   jmp 2b\n"
-		     ".previous\n"
+		     _ASM_FIXUP_END "\n"
 		     _ASM_EXTABLE(1b, 3b)
 		     : "=r" (err)
 		     : "c" (msr), "a" (lo), "d" (hi), "D" (0x9c5a203a),
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1607,11 +1607,11 @@ static void load_segments(struct vcpu *n
 #define TRY_LOAD_SEG(seg, val)                          \
     asm volatile ( "1: mov %k[_val], %%" #seg "\n\t"    \
                    "2:\n\t"                             \
-                   ".section .fixup, \"ax\"\n\t"        \
+                   _ASM_FIXUP "\n\t"                    \
                    "3: xor %k[ok], %k[ok]\n\t"          \
                    "   mov %k[ok], %%" #seg "\n\t"      \
                    "   jmp 2b\n\t"                      \
-                   ".previous\n\t"                      \
+                   _ASM_FIXUP_END "\n\t"                \
                    _ASM_EXTABLE(1b, 3b)                 \
                    : [ok] "+r" (all_segs_okay)          \
                    : [_val] "rm" (val) )
--- a/xen/arch/x86/extable.c
+++ b/xen/arch/x86/extable.c
@@ -164,11 +164,11 @@ static int __init cf_check stub_selftest
 
         asm volatile ( "INDIRECT_CALL %[stb]\n"
                        ".Lret%=:\n\t"
-                       ".pushsection .fixup,\"ax\"\n"
+                       _ASM_FIXUP "\n"
                        ".Lfix%=:\n\t"
                        "pop %[exn]\n\t"
                        "jmp .Lret%=\n\t"
-                       ".popsection\n\t"
+                       _ASM_FIXUP_END "\n\t"
                        _ASM_EXTABLE(.Lret%=, .Lfix%=)
                        : [exn] "+m" (res) ASM_CALL_CONSTRAINT
                        : [stb] "r" (addr), "a" (tests[i].rax));
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -67,7 +67,7 @@ static inline void fpu_fxrstor(struct vc
         asm volatile (
             /* See below for why the operands/constraints are this way. */
             "1: " REX64_PREFIX "fxrstor (%2)\n"
-            ".section .fixup,\"ax\"   \n"
+            _ASM_FIXUP               "\n"
             "2: push %%"__OP"ax       \n"
             "   push %%"__OP"cx       \n"
             "   push %%"__OP"di       \n"
@@ -79,7 +79,7 @@ static inline void fpu_fxrstor(struct vc
             "   pop  %%"__OP"cx       \n"
             "   pop  %%"__OP"ax       \n"
             "   jmp  1b               \n"
-            ".previous                \n"
+            _ASM_FIXUP_END           "\n"
             _ASM_EXTABLE(1b, 2b)
             :
             : "m" (*fpu_ctxt), "i" (sizeof(*fpu_ctxt) / 4), "R" (fpu_ctxt) );
@@ -87,7 +87,7 @@ static inline void fpu_fxrstor(struct vc
     case 4: case 2:
         asm volatile (
             "1: fxrstor %0         \n"
-            ".section .fixup,\"ax\"\n"
+            _ASM_FIXUP            "\n"
             "2: push %%"__OP"ax    \n"
             "   push %%"__OP"cx    \n"
             "   push %%"__OP"di    \n"
@@ -99,7 +99,7 @@ static inline void fpu_fxrstor(struct vc
             "   pop  %%"__OP"cx    \n"
             "   pop  %%"__OP"ax    \n"
             "   jmp  1b            \n"
-            ".previous             \n"
+            _ASM_FIXUP_END        "\n"
             _ASM_EXTABLE(1b, 2b)
             :
             : "m" (*fpu_ctxt), "i" (sizeof(*fpu_ctxt) / 4) );
--- a/xen/arch/x86/include/asm/asm_defns.h
+++ b/xen/arch/x86/include/asm/asm_defns.h
@@ -79,6 +79,15 @@ register unsigned long current_stack_poi
 #define _ASM_EXTABLE(from, to)     _ASM__EXTABLE(, from, to)
 #define _ASM_PRE_EXTABLE(from, to) _ASM__EXTABLE(.pre, from, to)
 
+/* Exception recovery code section */
+#ifdef __ASSEMBLY__
+# define _ASM_FIXUP     .pushsection .fixup, "ax", @progbits
+# define _ASM_FIXUP_END .popsection
+#else
+# define _ASM_FIXUP     " .pushsection .fixup, \"ax\", @progbits"
+# define _ASM_FIXUP_END " .popsection"
+#endif
+
 #ifdef __ASSEMBLY__
 
 #ifdef HAVE_AS_QUOTED_SYM
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -579,9 +579,9 @@ static inline int __vmxon(u64 addr)
         "1: " VMXON_OPCODE MODRM_EAX_06 "\n"
         "   setna %b0 ; neg %0\n" /* CF==1 or ZF==1 --> rc = -1 */
         "2:\n"
-        ".section .fixup,\"ax\"\n"
+        _ASM_FIXUP "\n"
         "3: sub $2,%0 ; jmp 2b\n"    /* #UD or #GP --> rc = -2 */
-        ".previous\n"
+        _ASM_FIXUP_END "\n"
         _ASM_EXTABLE(1b, 3b)
         : "=q" (rc)
         : "0" (0), "a" (&addr)
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -44,10 +44,10 @@ static inline void wrmsrl(unsigned int m
     uint32_t lo_, hi_; \
     __asm__ __volatile__( \
         "1: rdmsr\n2:\n" \
-        ".section .fixup,\"ax\"\n" \
+        _ASM_FIXUP "\n" \
         "3: xorl %0,%0\n; xorl %1,%1\n" \
         "   movl %5,%2\n; jmp 2b\n" \
-        ".previous\n" \
+        _ASM_FIXUP_END "\n" \
         _ASM_EXTABLE(1b, 3b) \
         : "=a" (lo_), "=d" (hi_), "=&r" (rc_) \
         : "c" (msr), "2" (0), "i" (-EFAULT)); \
@@ -64,9 +64,9 @@ static inline int wrmsr_safe(unsigned in
 
     __asm__ __volatile__(
         "1: wrmsr\n2:\n"
-        ".section .fixup,\"ax\"\n"
+        _ASM_FIXUP "\n"
         "3: movl %5,%0\n; jmp 2b\n"
-        ".previous\n"
+        _ASM_FIXUP_END "\n"
         _ASM_EXTABLE(1b, 3b)
         : "=&r" (rc)
         : "c" (msr), "a" (lo), "d" (hi), "0" (0), "i" (-EFAULT));
--- a/xen/arch/x86/include/asm/uaccess.h
+++ b/xen/arch/x86/include/asm/uaccess.h
@@ -160,10 +160,10 @@ struct __large_struct { unsigned long bu
 		)							\
 		"1:	mov"itype" %"rtype"[val], (%[ptr])\n"		\
 		"2:\n"							\
-		".section .fixup,\"ax\"\n"				\
+		"       " _ASM_FIXUP "\n"				\
 		"3:	mov %[errno], %[ret]\n"				\
 		"	jmp 2b\n"					\
-		".previous\n"						\
+		_ASM_FIXUP_END "\n"					\
 		_ASM_EXTABLE(1b, 3b)					\
 		: [ret] "+r" (err), [ptr] "=&r" (dummy_)		\
 		  GUARD(, [scr1] "=&r" (dummy_), [scr2] "=&r" (dummy_))	\
@@ -177,11 +177,11 @@ struct __large_struct { unsigned long bu
 		)							\
 		"1:	mov (%[ptr]), %"rtype"[val]\n"			\
 		"2:\n"							\
-		".section .fixup,\"ax\"\n"				\
+		"       " _ASM_FIXUP "\n"				\
 		"3:	mov %[errno], %[ret]\n"				\
 		"	xor %k[val], %k[val]\n"				\
 		"	jmp 2b\n"					\
-		".previous\n"						\
+		_ASM_FIXUP_END "\n"					\
 		_ASM_EXTABLE(1b, 3b)					\
 		: [ret] "+r" (err), [val] ltype (x),			\
 		  [ptr] "=&r" (dummy_)					\
--- a/xen/arch/x86/pv/misc-hypercalls.c
+++ b/xen/arch/x86/pv/misc-hypercalls.c
@@ -251,11 +251,11 @@ long do_set_segment_base(unsigned int wh
          * re-read %gs and compare against the input.
          */
         asm volatile ( "1: mov %[sel], %%gs\n\t"
-                       ".section .fixup, \"ax\", @progbits\n\t"
+                       _ASM_FIXUP "\n\t"
                        "2: mov %k[flat], %%gs\n\t"
                        "   xor %[sel], %[sel]\n\t"
                        "   jmp 1b\n\t"
-                       ".previous\n\t"
+                       _ASM_FIXUP_END "\n\t"
                        _ASM_EXTABLE(1b, 2b)
                        : [sel] "+r" (sel)
                        : [flat] "r" (FLAT_USER_DS32) );
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -550,9 +550,9 @@ static void show_trace(const struct cpu_
 
     /* Guarded read of the stack top. */
     asm ( "1: mov %[data], %[tos]; 2:\n"
-          ".pushsection .fixup,\"ax\"\n"
+          _ASM_FIXUP "\n"
           "3: movb $1, %[fault]; jmp 2b\n"
-          ".popsection\n"
+          _ASM_FIXUP_END "\n"
           _ASM_EXTABLE(1b, 3b)
           : [tos] "+r" (tos), [fault] "+qm" (fault) : [data] "m" (*sp) );
 
--- a/xen/arch/x86/usercopy.c
+++ b/xen/arch/x86/usercopy.c
@@ -38,12 +38,12 @@ unsigned int copy_to_guest_ll(void __use
         "    mov  %[aux],%[cnt]\n"
         "1:  rep movsb\n" /* ...remainder copied as bytes */
         "2:\n"
-        ".section .fixup,\"ax\"\n"
+        "    " _ASM_FIXUP "\n"
         "5:  add %[aux], %[cnt]\n"
         "    jmp 2b\n"
         "3:  lea (%q[aux], %q[cnt], "STR(BYTES_PER_LONG)"), %[cnt]\n"
         "    jmp 2b\n"
-        ".previous\n"
+        "    " _ASM_FIXUP_END "\n"
         _ASM_EXTABLE(4b, 5b)
         _ASM_EXTABLE(0b, 3b)
         _ASM_EXTABLE(1b, 2b)
@@ -81,7 +81,7 @@ unsigned int copy_from_guest_ll(void *to
         "    mov  %[aux], %[cnt]\n"
         "1:  rep movsb\n" /* ...remainder copied as bytes */
         "2:\n"
-        ".section .fixup,\"ax\"\n"
+        "    " _ASM_FIXUP "\n"
         "5:  add  %[aux], %[cnt]\n"
         "    jmp 6f\n"
         "3:  lea  (%q[aux], %q[cnt], "STR(BYTES_PER_LONG)"), %[cnt]\n"
@@ -92,7 +92,7 @@ unsigned int copy_from_guest_ll(void *to
         "    xchg %[aux], %%eax\n"
         "    mov  %k[from], %[cnt]\n"
         "    jmp 2b\n"
-        ".previous\n"
+        "    " _ASM_FIXUP_END "\n"
         _ASM_EXTABLE(4b, 5b)
         _ASM_EXTABLE(0b, 3b)
         _ASM_EXTABLE(1b, 6b)
@@ -149,10 +149,10 @@ unsigned int clear_guest_pv(void __user
             "    mov  %[bytes], %[cnt]\n"
             "1:  rep stosb\n"
             "2:\n"
-            ".section .fixup,\"ax\"\n"
+            "    " _ASM_FIXUP "\n"
             "3:  lea  (%q[bytes], %q[longs], "STR(BYTES_PER_LONG)"), %[cnt]\n"
             "    jmp  2b\n"
-            ".previous\n"
+            "    " _ASM_FIXUP_END "\n"
             _ASM_EXTABLE(0b,3b)
             _ASM_EXTABLE(1b,2b)
             : [cnt] "=&c" (n), [to] "+D" (to), [scratch1] "=&r" (dummy),
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -321,11 +321,11 @@ __UNLIKELY_END(compat_bounce_null_select
         mov   %al,  TRAPBOUNCE_flags(%rdx)
         ret
 
-.section .fixup,"ax"
+        _ASM_FIXUP
 .Lfx13:
         xorl  %edi,%edi
         jmp   .Lft13
-.previous
+        _ASM_FIXUP_END
         _ASM_EXTABLE(.Lft1,  dom_crash_sync_extable)
         _ASM_EXTABLE(.Lft2,  compat_crash_page_fault)
         _ASM_EXTABLE(.Lft3,  compat_crash_page_fault_4)
@@ -346,9 +346,9 @@ compat_crash_page_fault:
         movl  %esi,%edi
         call  show_page_walk
         jmp   dom_crash_sync_extable
-.section .fixup,"ax"
+        _ASM_FIXUP
 .Lfx14:
         xorl  %edi,%edi
         jmp   .Lft14
-.previous
+        _ASM_FIXUP_END
         _ASM_EXTABLE(.Lft14, .Lfx14)
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -580,7 +580,7 @@ __UNLIKELY_END(create_bounce_frame_bad_b
         mov   %al,  TRAPBOUNCE_flags(%rdx)
         ret
 
-        .pushsection .fixup, "ax", @progbits
+        _ASM_FIXUP
         # Numeric tags below represent the intended overall %rsi adjustment.
 domain_crash_page_fault_6x8:
         addq  $8,%rsi
@@ -616,7 +616,7 @@ ENTRY(dom_crash_sync_extable)
 #endif
         xorl  %edi,%edi
         jmp   asm_domain_crash_synchronous /* Does not return */
-        .popsection
+        _ASM_FIXUP_END
 #endif /* CONFIG_PV */
 
 /* --- CODE BELOW THIS LINE (MOSTLY) NOT GUEST RELATED --- */
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1260,11 +1260,11 @@ static inline int mkec(uint8_t e, int32_
     block_speculation(); /* SCSB */                                     \
     asm volatile ( pre "\n\tINDIRECT_CALL %[stub]\n\t" post "\n"        \
                    ".Lret%=:\n\t"                                       \
-                   ".pushsection .fixup,\"ax\"\n"                       \
+                   _ASM_FIXUP "\n"                                      \
                    ".Lfix%=:\n\t"                                       \
                    "pop %[exn]\n\t"                                     \
                    "jmp .Lret%=\n\t"                                    \
-                   ".popsection\n\t"                                    \
+                   _ASM_FIXUP_END "\n\t"                                \
                    _ASM_EXTABLE(.Lret%=, .Lfix%=)                       \
                    : [exn] "+g" (stub_exn.info) ASM_CALL_CONSTRAINT,    \
                      constraints,                                       \
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -45,10 +45,10 @@ static inline bool xsetbv(u32 index, u64
 
     asm volatile ( "1: .byte 0x0f,0x01,0xd1\n"
                    "3:                     \n"
-                   ".section .fixup,\"ax\" \n"
+                   _ASM_FIXUP             "\n"
                    "2: xor %0,%0           \n"
                    "   jmp 3b              \n"
-                   ".previous              \n"
+                   _ASM_FIXUP_END         "\n"
                    _ASM_EXTABLE(1b, 2b)
                    : "+a" (lo)
                    : "c" (index), "d" (hi));
@@ -403,10 +403,10 @@ void xrstor(struct vcpu *v, uint64_t mas
 #define _xrstor(insn) \
         asm volatile ( "1: .byte " insn "\n" \
                        "3:\n" \
-                       "   .section .fixup,\"ax\"\n" \
+                       "   " _ASM_FIXUP "\n" \
                        "2: incl %[faults]\n" \
                        "   jmp 3b\n" \
-                       "   .previous\n" \
+                       "   " _ASM_FIXUP_END "\n" \
                        _ASM_EXTABLE(1b, 2b) \
                        : [mem] "+m" (*ptr), [faults] "+g" (faults) \
                        : [lmask] "a" (lmask), [hmask] "d" (hmask), \



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 11:12:09 2023
Message-ID: <d223a0f9-6a66-8d7d-a214-51ddc24bb40f@suse.com>
Date: Thu, 5 Jan 2023 12:12:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 2/2] x86: split .fixup section with new enough gas
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <ae3f21f6-6a66-2fe5-9d4a-3f93e6dd64d7@suse.com>
In-Reply-To: <ae3f21f6-6a66-2fe5-9d4a-3f93e6dd64d7@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6929:EE_
X-MS-Office365-Filtering-Correlation-Id: c7427232-bf02-40f8-a2bf-08daef0dacc1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c7427232-bf02-40f8-a2bf-08daef0dacc1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 11:12:04.0674
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HYs/zNKJahfOc5qL3EDW2ibHQ07ZE6zKwKZrrrrslZYiriAQh7Hseqoq2oJI8IdzYcnk0/MUltBfNdqG5klAqQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6929

GNU as, as of version 2.26, allows deriving the name of the section to
switch to from the present section's name. For the substitution to occur,
--sectname-subst needs to be passed to the assembler.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Similarly (and perhaps of more interest) we could split .ex_table,
allowing the number of entries to search through post-init to be reduced.
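
As a sketch of the mechanism (hypothetical section and symbol names;
assumes gas >= 2.26 invoked with --sectname-subst):

```asm
        .section .text.foo, "ax", @progbits   # hypothetical function section
func:
1:      mov     (%rdi), %eax                  # instruction that may fault
        /* With --sectname-subst, %S expands to the name of the current
         * section (".text.foo"), so this switches to ".fixup.text.foo"
         * instead of the single shared ".fixup": */
        .pushsection .fixup%S, "ax", @progbits
2:      xor     %eax, %eax                    # recovery code
        jmp     3f
        .popsection
3:      ret
```

This is what lets the linker script place fixup code next to the kind of
text it belongs to (.fixup.text, .fixup.init.text, etc.).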

--- a/Config.mk
+++ b/Config.mk
@@ -98,7 +98,7 @@ cc-option = $(shell if test -z "`echo 'v
 # Usage: $(call cc-option-add CFLAGS,CC,-march=winchip-c6)
 cc-option-add = $(eval $(call cc-option-add-closure,$(1),$(2),$(3)))
 define cc-option-add-closure
-    ifneq ($$(call cc-option,$$($(2)),$(3),n),n)
+    ifneq ($$(call cc-option,$$($(2)),$(firstword $(3)),n),n)
         $(1) += $(3)
     endif
 endef
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -34,6 +34,9 @@ $(call as-option-add,CFLAGS,CC,\
 $(call as-option-add,CFLAGS,CC,\
     ".L1: .L2: .nops (.L2 - .L1)$$(comma)9",-DHAVE_AS_NOPS_DIRECTIVE)
 
+# Check to see whether the assembler supports the --sectname-subst option.
+$(call cc-option-add,CFLAGS,CC,-Wa$$(comma)--sectname-subst -DHAVE_AS_SECTNAME_SUBST)
+
 CFLAGS += -mno-red-zone -fpic
 
 # Xen doesn't use MMX or SSE internally.  If the compiler supports it, also skip
--- a/xen/arch/x86/include/asm/asm_defns.h
+++ b/xen/arch/x86/include/asm/asm_defns.h
@@ -81,10 +81,18 @@ register unsigned long current_stack_poi
 
 /* Exception recovery code section */
 #ifdef __ASSEMBLY__
-# define _ASM_FIXUP     .pushsection .fixup, "ax", @progbits
+# ifdef HAVE_AS_SECTNAME_SUBST
+#  define _ASM_FIXUP    .pushsection .fixup%S, "ax", @progbits
+# else
+#  define _ASM_FIXUP    .pushsection .fixup, "ax", @progbits
+# endif
 # define _ASM_FIXUP_END .popsection
 #else
-# define _ASM_FIXUP     " .pushsection .fixup, \"ax\", @progbits"
+# ifdef HAVE_AS_SECTNAME_SUBST
+#  define _ASM_FIXUP    " .pushsection .fixup%%S, \"ax\", @progbits"
+# else
+#  define _ASM_FIXUP    " .pushsection .fixup, \"ax\", @progbits"
+# endif
 # define _ASM_FIXUP_END " .popsection"
 #endif
 
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -103,6 +103,12 @@ SECTIONS
        *(.text.__x86_indirect_thunk_*)
 
        *(.fixup)
+       *(.fixup.text)
+       *(.fixup.text.cold)
+       *(.fixup.text.unlikely .fixup.text.*_unlikely .fixup.text.unlikely.*)
+#ifdef CONFIG_CC_SPLIT_SECTIONS
+       *(.fixup.text.*)
+#endif
        *(.gnu.warning)
        _etext = .;             /* End of text section */
   } PHDR(text) = 0x9090
@@ -215,6 +221,8 @@ SECTIONS
        _sinittext = .;
        *(.init.text)
        *(.text.startup)
+       *(.fixup.init.text)
+       *(.fixup.text.startup)
        _einittext = .;
        /*
         * Here are the replacement instructions. The linker sticks them
--- a/xen/include/xen/xen.lds.h
+++ b/xen/include/xen/xen.lds.h
@@ -89,7 +89,9 @@
 #define DISCARD_SECTIONS     \
   /DISCARD/ : {              \
        *(.text.exit)         \
+       *(.fixup.text.exit)   \
        *(.exit.text)         \
+       *(.fixup.exit.text)   \
        *(.exit.data)         \
        *(.exitcall.exit)     \
        *(.discard)           \



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 11:19:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 11:19:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471830.731843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOHc-0001jc-R1; Thu, 05 Jan 2023 11:19:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471830.731843; Thu, 05 Jan 2023 11:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOHc-0001jV-Nr; Thu, 05 Jan 2023 11:19:40 +0000
Received: by outflank-mailman (input) for mailman id 471830;
 Thu, 05 Jan 2023 11:19:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDOHb-0001jP-Hi
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 11:19:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDOHb-0008PS-D9; Thu, 05 Jan 2023 11:19:39 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=[192.168.4.62])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDOHb-0003qE-5I; Thu, 05 Jan 2023 11:19:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=oeDOelHc6GbkzzkkG0rlTlPgpKRaqk6cNmUOAcKglGk=; b=fHJKWoOV3BY2w4ouLf4l9dWsaX
	UUTWg8ajpy6rWd4TaImZpjnM0hbEnMI82VG0i1n8SBXwLzuzr4R71AzVH4bekXwWKXyChDovQnVlL
	/1f6XmQdXHjGZrGs6SSnAFKuvXRW64fckW9n8/BBiKuGVY0oz6mhQu7tSklkFl3dILSc=;
Message-ID: <c80f90d7-d3b5-1b13-d809-9506ff5414e4@xen.org>
Date: Thu, 5 Jan 2023 11:19:37 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
To: Ayan Kumar Halder <ayankuma@amd.com>, xen-devel@lists.xenproject.org
References: <20230103102519.26224-1-michal.orzel@amd.com>
 <alpine.DEB.2.22.394.2301041546230.4079@ubuntu-linux-20-04-desktop>
 <1264e5cc-1960-95d3-5ecb-d6f23d194aa4@xen.org>
 <29460d07-cd43-7415-7125-6ed01f3c2920@amd.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <29460d07-cd43-7415-7125-6ed01f3c2920@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 05/01/2023 09:59, Ayan Kumar Halder wrote:
> Hi Julien,

Hi,

> I have a clarification.
> 
> On 05/01/2023 09:26, Julien Grall wrote:
>>
>> Hi Stefano,
>>
>> On 04/01/2023 23:47, Stefano Stabellini wrote:
>>> On Tue, 3 Jan 2023, Michal Orzel wrote:
>>>> Printing memory size in hex without 0x prefix can be misleading, so
>>>> add it. Also, take the opportunity to adhere to 80 chars line length
>>>> limit by moving the printk arguments to the next line.
>>>>
>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>>> ---
>>>> Changes in v2:
>>>>   - was: "Print memory size in decimal in construct_domU"
>>>>   - stick to hex but add a 0x prefix
>>>>   - adhere to 80 chars line length limit
>>>
>>> Honestly I prefer decimal but also hex is fine.
>>
>> decimal is perfect for very small values, but as we print the amount in
>> KB it will become a big mess. Here some examples (decimal first, then
>> hexadecimal):
>>
>>   512MB: 524288 vs 0x80000
>>   555MB: 568320 vs 0x8ac00
>>   1GB: 1048576 vs 0x100000
>>   512GB: 536870912 vs 0x20000000
>>   1TB: 1073741824 vs 0x40000000
>>
>> For power of two values, you might be able to find your way with
>> decimal. It is more difficult for non power of two unless you have a
>> calculator in hand.
>>
>> The other option I suggested in v1 is to print the amount in KB/GB/MB.
>> Would that be better?
>>
>> That said, to be honest, I am not entirely sure why we are actually
>> printing in KB. It would seems strange that someone would create a guest
>> with memory not aligned to 1MB.
> 
> For RTOS (Zephyr and FreeRTOS), it should be possible for guests to have 
> memory less than 1 MB, shouldn't it?

Yes. So does XTF. But most of the users are likely going to allocate at 
least 1MB (or even 2MB to reduce the TLB pressure).

So it would be better to print the value in a way that is more 
meaningful for the majority of the users.

>> So I would consider to check the size is 1MB-aligned and then print the

I will retract my suggestion to check the size. There is technically no 
restriction on running a guest with a size not aligned to 1MB, although 
it would still seem strange.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:02:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471846.731854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOwU-00072F-IO; Thu, 05 Jan 2023 12:01:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471846.731854; Thu, 05 Jan 2023 12:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOwU-000728-Dt; Thu, 05 Jan 2023 12:01:54 +0000
Received: by outflank-mailman (input) for mailman id 471846;
 Thu, 05 Jan 2023 12:01:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDOwT-0006u7-2X
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 12:01:53 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bdc2f18e-8cf0-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 13:01:52 +0100 (CET)
Received: by mail-wm1-x334.google.com with SMTP id ja17so27834038wmb.3
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 04:01:52 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 x5-20020a05600c21c500b003d9b89a39b2sm2159826wmj.10.2023.01.05.04.01.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 04:01:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdc2f18e-8cf0-11ed-91b6-6bf2151ebd3b
X-Received: by 2002:a1c:7c0f:0:b0:3d5:816e:2fb2 with SMTP id x15-20020a1c7c0f000000b003d5816e2fb2mr39008747wmc.14.1672920111508;
        Thu, 05 Jan 2023 04:01:51 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Gianluca Guida <gianluca@rivosinc.com>
Subject: [PATCH v4 0/2] Add minimal RISC-V Xen build and build testing
Date: Thu,  5 Jan 2023 14:01:44 +0200
Message-Id: <cover.1672912316.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- a minimal set of changes adding initial RISC-V support, making the
  Xen binary buildable and runnable for the RISC-V architecture and
  usable for future development and testing.
- RISC-V 64 cross-compile build jobs to check whether new changes
  break the RISC-V build.

Changes in V4:
- Rebase on mainline 'staging'.
- Code style fixes and commit message updates.
- Minor changes of riscv/Makefile and build.yaml

Changes in V3:
- Remove include of <asm/config.h> from head.S.

Changes in V2:
- Remove the patch "automation: add cross-compiler support
  for the build script" because it was reworked as a part of the patch
  series "CI: Fixes/cleanup in preparation for RISCV".
- Remove the patch "automation: add python3 package for riscv64.dockerfile"
  because it is not necessary for RISCV Xen binary build now.
- Rework the patch "arch/riscv: initial RISC-V support to build/run
  minimal Xen" according to the comments about v1 of the patch series.
- Add HYPERVISOR_ONLY to RISCV jobs in build.yaml after rebasing on
  "CI: Fixes/cleanup in preparation for RISCV" patch series.

Oleksii Kurochko (2):
  arch/riscv: initial RISC-V support to build/run minimal Xen
  automation: add RISC-V 64 cross-build tests for Xen

 automation/gitlab-ci/build.yaml     |  56 ++++++++++
 xen/arch/riscv/Makefile             |  14 +++
 xen/arch/riscv/arch.mk              |   4 +
 xen/arch/riscv/include/asm/config.h |   9 +-
 xen/arch/riscv/riscv64/Makefile     |   2 +-
 xen/arch/riscv/riscv64/head.S       |   4 +-
 xen/arch/riscv/xen.lds.S            | 158 ++++++++++++++++++++++++++++
 7 files changed, 242 insertions(+), 5 deletions(-)
 create mode 100644 xen/arch/riscv/xen.lds.S

-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:02:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471847.731866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOwW-0007Hh-QZ; Thu, 05 Jan 2023 12:01:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471847.731866; Thu, 05 Jan 2023 12:01:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOwW-0007Ha-L9; Thu, 05 Jan 2023 12:01:56 +0000
Received: by outflank-mailman (input) for mailman id 471847;
 Thu, 05 Jan 2023 12:01:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDOwU-0006u7-I6
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 12:01:54 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be606e7b-8cf0-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 13:01:53 +0100 (CET)
Received: by mail-wm1-x32a.google.com with SMTP id ay40so27872133wmb.2
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 04:01:53 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 x5-20020a05600c21c500b003d9b89a39b2sm2159826wmj.10.2023.01.05.04.01.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 04:01:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be606e7b-8cf0-11ed-91b6-6bf2151ebd3b
X-Received: by 2002:a05:600c:1c27:b0:3cf:a83c:184a with SMTP id j39-20020a05600c1c2700b003cfa83c184amr37045215wms.24.1672920112580;
        Thu, 05 Jan 2023 04:01:52 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>
Subject: [PATCH v4 1/2] arch/riscv: initial RISC-V support to build/run minimal Xen
Date: Thu,  5 Jan 2023 14:01:45 +0200
Message-Id: <ef6dbb71b27c75fe0dffb72d65ab457d27430475.1672912316.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1672912316.git.oleksii.kurochko@gmail.com>
References: <cover.1672912316.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch provides a minimal set of changes to build and run a minimal
Xen binary in GitLab CI/CD, allowing continuous checking of the build
status of RISC-V Xen.

Apart from the introduction of new files, the following changes were made:
* Redefine the ALIGN define from '.align 2' to '.align 4'.
  '.align 2' was an incorrect choice made previously.
* Temporarily override ALL_OBJS-y and ALL_LIBS-y to produce a minimal
  hypervisor image; otherwise a huge amount of headers and stubs for
  common code, drivers, libs, etc. (which aren't necessary for now)
  would have to be brought in.
* Change the section for the start function from .text to .text.header
  to make it the first one executed.
* Rework the riscv64/Makefile logic to rebase over changes since the
  first RISC-V commit.

RISC-V Xen can be built by the following instructions:
  $ CONTAINER=riscv64 ./automation/scripts/containerize \
       make XEN_TARGET_ARCH=riscv64 -C xen tiny64_defconfig
  $ CONTAINER=riscv64 ./automation/scripts/containerize \
       make XEN_TARGET_ARCH=riscv64 -C xen build

RISC-V Xen can be run as:
  $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
       -kernel xen/xen

To run in debug mode, use the following instructions:
 $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
        -kernel xen/xen -s -S
 # In separate terminal:
 $ riscv64-buildroot-linux-gnu-gdb
 $ target remote :1234
 $ add-symbol-file <xen_src>/xen/xen-syms 0x80200000
 $ hb *0x80200000
 $ c # it should stop at instruction j 0x80200000 <start>

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Remove the clean-files target from the Makefile as $(TARGET)-syms was
  simplified and there is no longer any need for the clean-files target.
- Update the commit message.
- Code style fixes.
---
Changes in V3:
- Remove include of <asm/config.h> from head.S
---
Changes in V2:
- Update commit message:
  - Add explanation why ALIGN define was changed.
  - Add explanation why section of 'start' function was changed.
- Rework the xen.lds.S linker script. It is mostly based on the ARM
  one, with ARM-specific sections removed.
- Rework in riscv64/Makefile rule $(TARGET)-syms
- Remove asm/types.h header as after reworking of riscv64/Makefile
  it is not needed now.
- Remove unneeded define SYMBOLS_DUMMY_OBJ.
---
 xen/arch/riscv/Makefile             |  14 +++
 xen/arch/riscv/arch.mk              |   4 +
 xen/arch/riscv/include/asm/config.h |   9 +-
 xen/arch/riscv/riscv64/Makefile     |   2 +-
 xen/arch/riscv/riscv64/head.S       |   4 +-
 xen/arch/riscv/xen.lds.S            | 158 ++++++++++++++++++++++++++++
 6 files changed, 186 insertions(+), 5 deletions(-)
 create mode 100644 xen/arch/riscv/xen.lds.S

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 942e4ffbc1..248f2cbb3e 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,2 +1,16 @@
+obj-$(CONFIG_RISCV_64) += riscv64/
+
+$(TARGET): $(TARGET)-syms
+	$(OBJCOPY) -O binary -S $< $@
+
+$(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
+	$(NM) -pa --format=sysv $(@D)/$(@F) \
+		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
+		>$(@D)/$(@F).map
+
+$(obj)/xen.lds: $(src)/xen.lds.S FORCE
+	$(call if_changed_dep,cpp_lds_S)
+
 .PHONY: include
 include:
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
index ae8fe9dec7..012dc677c3 100644
--- a/xen/arch/riscv/arch.mk
+++ b/xen/arch/riscv/arch.mk
@@ -11,3 +11,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
 # -mcmodel=medlow would force Xen into the lower half.
 
 CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+
+# TODO: Drop override when more of the build is working
+override ALL_OBJS-y = arch/$(TARGET_ARCH)/built_in.o
+override ALL_LIBS-y =
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index e2ae21de61..c6f9e230d7 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -1,6 +1,9 @@
 #ifndef __RISCV_CONFIG_H__
 #define __RISCV_CONFIG_H__
 
+#include <xen/const.h>
+#include <xen/page-size.h>
+
 #if defined(CONFIG_RISCV_64)
 # define LONG_BYTEORDER 3
 # define ELFSIZE 64
@@ -28,7 +31,7 @@
 
 /* Linkage for RISCV */
 #ifdef __ASSEMBLY__
-#define ALIGN .align 2
+#define ALIGN .align 4
 
 #define ENTRY(name)                                \
   .globl name;                                     \
@@ -36,6 +39,10 @@
   name:
 #endif
 
+#define XEN_VIRT_START  _AT(UL, 0x00200000)
+
+#define SMP_CACHE_BYTES (1 << 6)
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/riscv64/Makefile b/xen/arch/riscv/riscv64/Makefile
index 15a4a65f66..3340058c08 100644
--- a/xen/arch/riscv/riscv64/Makefile
+++ b/xen/arch/riscv/riscv64/Makefile
@@ -1 +1 @@
-extra-y += head.o
+obj-y += head.o
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 0dbc27ba75..990edb70a0 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,6 +1,4 @@
-#include <asm/config.h>
-
-        .text
+        .section .text.header, "ax", %progbits
 
 ENTRY(start)
         j  start
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
new file mode 100644
index 0000000000..ca57cce75c
--- /dev/null
+++ b/xen/arch/riscv/xen.lds.S
@@ -0,0 +1,158 @@
+#include <xen/xen.lds.h>
+
+#undef ENTRY
+#undef ALIGN
+
+OUTPUT_ARCH(riscv)
+ENTRY(start)
+
+PHDRS
+{
+    text PT_LOAD ;
+#if defined(BUILD_ID)
+    note PT_NOTE ;
+#endif
+}
+
+SECTIONS
+{
+    . = XEN_VIRT_START;
+    _start = .;
+    .text : {
+        _stext = .;            /* Text section */
+        *(.text.header)
+
+        *(.text.cold)
+        *(.text.unlikely .text.*_unlikely .text.unlikely.*)
+
+        *(.text)
+#ifdef CONFIG_CC_SPLIT_SECTIONS
+        *(.text.*)
+#endif
+
+        *(.fixup)
+        *(.gnu.warning)
+        . = ALIGN(POINTER_ALIGN);
+        _etext = .;             /* End of text section */
+    } :text
+
+    . = ALIGN(PAGE_SIZE);
+    .rodata : {
+        _srodata = .;          /* Read-only data */
+        *(.rodata)
+        *(.rodata.*)
+        *(.data.rel.ro)
+        *(.data.rel.ro.*)
+
+        VPCI_ARRAY
+
+        . = ALIGN(POINTER_ALIGN);
+        _erodata = .;        /* End of read-only data */
+    } :text
+
+    #if defined(BUILD_ID)
+    . = ALIGN(4);
+    .note.gnu.build-id : {
+        __note_gnu_build_id_start = .;
+        *(.note.gnu.build-id)
+        __note_gnu_build_id_end = .;
+    } :note :text
+    #endif
+    _erodata = .;                /* End of read-only data */
+
+    . = ALIGN(PAGE_SIZE);
+    .data.ro_after_init : {
+        __ro_after_init_start = .;
+        *(.data.ro_after_init)
+        . = ALIGN(PAGE_SIZE);
+        __ro_after_init_end = .;
+    } : text
+
+    .data.read_mostly : {
+        *(.data.read_mostly)
+    } :text
+
+    . = ALIGN(PAGE_SIZE);
+    .data : {                    /* Data */
+        *(.data.page_aligned)
+        . = ALIGN(8);
+        __start_schedulers_array = .;
+        *(.data.schedulers)
+        __end_schedulers_array = .;
+
+        HYPFS_PARAM
+
+        *(.data .data.*)
+        CONSTRUCTORS
+    } :text
+
+    . = ALIGN(PAGE_SIZE);             /* Init code and data */
+    __init_begin = .;
+    .init.text : {
+        _sinittext = .;
+        *(.init.text)
+        _einittext = .;
+        . = ALIGN(PAGE_SIZE);        /* Avoid mapping alt insns executable */
+    } :text
+    . = ALIGN(PAGE_SIZE);
+    .init.data : {
+        *(.init.rodata)
+        *(.init.rodata.*)
+
+        . = ALIGN(POINTER_ALIGN);
+        __setup_start = .;
+        *(.init.setup)
+        __setup_end = .;
+
+        __initcall_start = .;
+        *(.initcallpresmp.init)
+        __presmp_initcall_end = .;
+        *(.initcall1.init)
+        __initcall_end = .;
+
+        LOCK_PROFILE_DATA
+
+        *(.init.data)
+        *(.init.data.rel)
+        *(.init.data.rel.*)
+
+        . = ALIGN(8);
+        __ctors_start = .;
+        *(.ctors)
+        *(.init_array)
+        *(SORT(.init_array.*))
+        __ctors_end = .;
+    } :text
+    . = ALIGN(POINTER_ALIGN);
+    __init_end = .;
+
+    .bss : {                     /* BSS */
+        __bss_start = .;
+        *(.bss.stack_aligned)
+        . = ALIGN(PAGE_SIZE);
+        *(.bss.page_aligned)
+        . = ALIGN(PAGE_SIZE);
+        __per_cpu_start = .;
+        *(.bss.percpu.page_aligned)
+        *(.bss.percpu)
+        . = ALIGN(SMP_CACHE_BYTES);
+        *(.bss.percpu.read_mostly)
+        . = ALIGN(SMP_CACHE_BYTES);
+        __per_cpu_data_end = .;
+        *(.bss .bss.*)
+        . = ALIGN(POINTER_ALIGN);
+        __bss_end = .;
+    } :text
+    _end = . ;
+
+    /* Section for the device tree blob (if any). */
+    .dtb : { *(.dtb) } :text
+
+    DWARF2_DEBUG_SECTIONS
+
+    DISCARD_SECTIONS
+
+    STABS_DEBUG_SECTIONS
+
+    ELF_DETAILS_SECTIONS
+}
-- 
2.38.1
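An editorial aside on the linker script above: the repeated `. = ALIGN(...)` statements round the location counter up to the next multiple of the given power-of-two boundary (PAGE_SIZE, POINTER_ALIGN, SMP_CACHE_BYTES), which is what guarantees the page-aligned section starts the script relies on. A minimal Python sketch of ld's ALIGN() arithmetic, for readers unfamiliar with it (the 0x1000 page size is just an illustrative value):

```python
def align(counter: int, n: int) -> int:
    """Sketch of ld's ALIGN(n): round the location counter up to the
    next multiple of n. n must be a power of two, as PAGE_SIZE,
    POINTER_ALIGN and SMP_CACHE_BYTES all are."""
    assert n > 0 and (n & (n - 1)) == 0, "alignment must be a power of two"
    return (counter + n - 1) & ~(n - 1)

PAGE_SIZE = 0x1000  # illustrative value

# An address already on the boundary is unchanged...
assert align(0x2000, PAGE_SIZE) == 0x2000
# ...anything past it is rounded up to the next page.
assert align(0x2001, PAGE_SIZE) == 0x3000
```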



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:02:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471848.731876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOwY-0007Xa-17; Thu, 05 Jan 2023 12:01:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471848.731876; Thu, 05 Jan 2023 12:01:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOwX-0007XQ-TP; Thu, 05 Jan 2023 12:01:57 +0000
Received: by outflank-mailman (input) for mailman id 471848;
 Thu, 05 Jan 2023 12:01:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDOwW-00071H-ES
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 12:01:56 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bf1eaf22-8cf0-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 13:01:54 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id
 k26-20020a05600c1c9a00b003d972646a7dso1115571wms.5
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 04:01:54 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 x5-20020a05600c21c500b003d9b89a39b2sm2159826wmj.10.2023.01.05.04.01.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 04:01:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf1eaf22-8cf0-11ed-b8d0-410ff93cb8f0
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v4 2/2] automation: add RISC-V 64 cross-build tests for Xen
Date: Thu,  5 Jan 2023 14:01:46 +0200
Message-Id: <7d9bf454d056180376031aaf862963036a46d1e5.1672912316.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1672912316.git.oleksii.kurochko@gmail.com>
References: <cover.1672912316.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add build jobs to cross-compile Xen (hypervisor only) for RISC-V 64.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V4:
- Add RISCV RANDCONFIG jobs
- Remove unnecessary comments
---
Changes in V2:
- Add HYPERVISOR_ONLY to the RISCV jobs: after rebasing on top of the
  patch series "CI: Fixes/cleanup in preparation for RISCV",
  HYPERVISOR_ONLY must be set in build.yaml
---
 automation/gitlab-ci/build.yaml | 56 +++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index 43dbef8aba..6784974619 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -172,6 +172,33 @@
   variables:
     <<: *gcc
 
+.riscv64-cross-build-tmpl:
+  <<: *build
+  variables:
+    XEN_TARGET_ARCH: riscv64
+  tags:
+    - x86_64
+
+.riscv64-cross-build:
+  extends: .riscv64-cross-build-tmpl
+  variables:
+    debug: n
+
+.riscv64-cross-build-debug:
+  extends: .riscv64-cross-build-tmpl
+  variables:
+    debug: y
+
+.gcc-riscv64-cross-build:
+  extends: .riscv64-cross-build
+  variables:
+    <<: *gcc
+
+.gcc-riscv64-cross-build-debug:
+  extends: .riscv64-cross-build-debug
+  variables:
+    <<: *gcc
+
 # Jobs below this line
 
 archlinux-gcc:
@@ -619,6 +646,35 @@ alpine-3.12-gcc-debug-arm64-boot-cpupools:
     EXTRA_XEN_CONFIG: |
       CONFIG_BOOT_TIME_CPUPOOLS=y
 
+# RISC-V 64 cross-build
+riscv64-cross-gcc:
+  extends: .gcc-riscv64-cross-build
+  variables:
+    CONTAINER: archlinux:riscv64
+    KBUILD_DEFCONFIG: tiny64_defconfig
+    HYPERVISOR_ONLY: y
+
+riscv64-cross-gcc-debug:
+  extends: .gcc-riscv64-cross-build-debug
+  variables:
+    CONTAINER: archlinux:riscv64
+    KBUILD_DEFCONFIG: tiny64_defconfig
+    HYPERVISOR_ONLY: y
+
+riscv64-cross-gcc-randconfig:
+  extends: .gcc-riscv64-cross-build
+  variables:
+    CONTAINER: archlinux:riscv64
+    KBUILD_DEFCONFIG: tiny64_defconfig
+    RANDCONFIG: y
+
+riscv64-cross-gcc-debug-randconfig:
+  extends: .gcc-riscv64-cross-build-debug
+  variables:
+    CONTAINER: archlinux:riscv64
+    KBUILD_DEFCONFIG: tiny64_defconfig
+    RANDCONFIG: y
+
 ## Test artifacts common
 
 .test-jobs-artifact-common:
-- 
2.38.1
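An editorial aside on the build.yaml changes above: the hidden `.riscv64-cross-build*` templates rely on GitLab CI's `extends` keyword, which merges each parent template into the child, with keys defined closer to the concrete job overriding inherited ones. A rough, hypothetical Python model of how `riscv64-cross-gcc` resolves its `variables` (a simplification of GitLab's merge, using only names from the YAML above):

```python
def merge_vars(*layers: dict) -> dict:
    """Hypothetical sketch of GitLab CI `extends` variable resolution:
    layers are applied from the top of the hierarchy down, so later
    (more specific) layers override inherited keys."""
    resolved = {}
    for layer in layers:
        resolved.update(layer)
    return resolved

tmpl = {"XEN_TARGET_ARCH": "riscv64"}            # .riscv64-cross-build-tmpl
build = {"debug": "n"}                           # .riscv64-cross-build
job = {
    "CONTAINER": "archlinux:riscv64",
    "KBUILD_DEFCONFIG": "tiny64_defconfig",
    "HYPERVISOR_ONLY": "y",
}                                                # riscv64-cross-gcc

resolved = merge_vars(tmpl, build, job)
assert resolved["XEN_TARGET_ARCH"] == "riscv64"
assert resolved["debug"] == "n"
assert resolved["HYPERVISOR_ONLY"] == "y"
```

The debug variants follow the same pattern with `debug: y`, which is why only the leaf jobs need to name the container and defconfig.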



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:04:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:04:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471866.731887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOyw-0000Ge-Es; Thu, 05 Jan 2023 12:04:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471866.731887; Thu, 05 Jan 2023 12:04:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOyw-0000GX-BR; Thu, 05 Jan 2023 12:04:26 +0000
Received: by outflank-mailman (input) for mailman id 471866;
 Thu, 05 Jan 2023 12:04:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDOyu-0000GR-Ob
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 12:04:24 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1804023e-8cf1-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 13:04:23 +0100 (CET)
Received: by mail-wr1-x430.google.com with SMTP id bn26so16422647wrb.0
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 04:04:23 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 u13-20020a5d468d000000b00275970a85f4sm34267278wrq.74.2023.01.05.04.04.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 04:04:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1804023e-8cf1-11ed-91b6-6bf2151ebd3b
Message-ID: <0b4f63ee0d1f72a0cf566364d8be0cdf06e9fc75.camel@gmail.com>
Subject: Re: [PATCH v3 0/2] Add minimal RISC-V Xen build and build testing
From: Oleksii <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Doug
 Goldstein <cardoe@cardoe.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Gianluca Guida
 <gianluca@rivosinc.com>
Date: Thu, 05 Jan 2023 14:04:21 +0200
In-Reply-To: <cover.1672906559.git.oleksii.kurochko@gmail.com>
References: <cover.1672906559.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37)
MIME-Version: 1.0

Sorry for the flooding, but please skip review of patch series v3: I
missed Andrew's comments, so only v4 should be reviewed.

On Thu, 2023-01-05 at 10:40 +0200, Oleksii Kurochko wrote:
> The patch series introduces the following:
> - provide a minimal amount of changes to add initial RISC-V support
>   to make the Xen binary buildable and runnable for the RISC-V
>   architecture, which can be used for future development and testing.
> - add RISC-V 64 cross-compile build jobs to check if any new changes
>   break the RISC-V build.
>
> Changes in V3:
> - Remove include of <asm/config.h> from head.S.
>
> Changes in V2:
> - Remove the patch "automation: add cross-compiler support
>   for the build script" because it was reworked as a part of the patch
>   series "CI: Fixes/cleanup in preparation for RISCV".
> - Remove the patch "automation: add python3 package for
>   riscv64.dockerfile" because it is not necessary for the RISCV Xen
>   binary build now.
> - Rework the patch "arch/riscv: initial RISC-V support to build/run
>   minimal Xen" according to the comments on v1 of the patch series.
> - Add HYPERVISOR_ONLY to RISCV jobs in build.yaml after rebasing on
>   the "CI: Fixes/cleanup in preparation for RISCV" patch series.
>
> Oleksii Kurochko (2):
>   arch/riscv: initial RISC-V support to build/run minimal Xen
>   automation: add RISC-V 64 cross-build tests for Xen
>
>  automation/gitlab-ci/build.yaml     |  45 ++++++++
>  xen/arch/riscv/Makefile             |  16 +++
>  xen/arch/riscv/arch.mk              |   4 +
>  xen/arch/riscv/include/asm/config.h |   9 +-
>  xen/arch/riscv/riscv64/Makefile     |   2 +-
>  xen/arch/riscv/riscv64/head.S       |   4 +-
>  xen/arch/riscv/xen.lds.S            | 158 ++++++++++++++++++++++++++++
>  7 files changed, 233 insertions(+), 5 deletions(-)
>  create mode 100644 xen/arch/riscv/xen.lds.S
>



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:04:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:04:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471867.731898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOz4-0000aS-Qj; Thu, 05 Jan 2023 12:04:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471867.731898; Thu, 05 Jan 2023 12:04:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDOz4-0000aG-Nb; Thu, 05 Jan 2023 12:04:34 +0000
Received: by outflank-mailman (input) for mailman id 471867;
 Thu, 05 Jan 2023 12:04:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDOz2-0000GR-KM
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 12:04:32 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1b70698f-8cf1-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 13:04:31 +0100 (CET)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 07:04:28 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5295.namprd03.prod.outlook.com (2603:10b6:208:1e7::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 12:04:22 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 12:04:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b70698f-8cf1-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, Julien Grall
	<julien@xen.org>
Subject: Re: [PATCH] x86: idle domains don't have a domain-page mapcache
Thread-Topic: [PATCH] x86: idle domains don't have a domain-page mapcache
Thread-Index: AQHZIPY76jcWuuXMREKwR1wL4WEG2q6PucOA
Date: Thu, 5 Jan 2023 12:04:21 +0000
Message-ID: <330dce2f-ffa4-2f6b-a481-cac53e996255@citrix.com>
References: <139bab3a-fd3e-fec2-b7b4-f63dd9f9439a@suse.com>
In-Reply-To: <139bab3a-fd3e-fec2-b7b4-f63dd9f9439a@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <AD07377E309CC24EBCB4290E5A9D58A9@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 05/01/2023 11:09 am, Jan Beulich wrote:
> First and foremost correct a comment implying the opposite. Then, to
> make things more clear PV-vs-HVM-wise, move the PV check earlier in the
> function, making it unnecessary for both callers to perform the check
> individually. Finally return NULL from the function when using the idle
> domain's page tables, allowing a dcache->inuse check to also become an
> assertion.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

By and large, I think these changes are ok, but I want to make an
argument for going further right away.


Most of mapcache_current_vcpu() is a giant bodge to support the weird
context in dom0_construct_pv(), but we pay the price unilaterally.

By updating current too in that weird context, I think we can drop
mapcache_override_current().  It's also worth noting that the only
callers are __init, so having this_cpu(override) as per-cpu is a waste.

But I also think we can drop the pagetable_is_null(v->arch.guest_table)
part too.  If we happen to be half-idle, it doesn't matter if we use the
current mapcache by following cpu_info()->current instead.  That said, I
can't think of any case where we could legally access transient mappings
from an idle context, so I'd be tempted to see if we can simply drop
that clause.


Overall, I think the logic would benefit from being expressed in terms
of short_directmap, just like init_xen_l4_slots().  That is ultimately
the property that we care about, and it's also the property that is
changing in the effort to remove the directmap entirely.

If not short_directmap, then at least have the property expressed
consistently between the two pieces of code, seeing as it is the same
property.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:06:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:06:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471881.731909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDP0z-0001UJ-6s; Thu, 05 Jan 2023 12:06:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471881.731909; Thu, 05 Jan 2023 12:06:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDP0z-0001UC-3m; Thu, 05 Jan 2023 12:06:33 +0000
Received: by outflank-mailman (input) for mailman id 471881;
 Thu, 05 Jan 2023 12:06:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDP0x-0001U6-Bq
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 12:06:31 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 636b288f-8cf1-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 13:06:30 +0100 (CET)
Received: by mail-wr1-x42d.google.com with SMTP id h16so35845418wrz.12
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 04:06:30 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 t64-20020a1c4643000000b003cf75213bb9sm2139544wma.8.2023.01.05.04.06.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 04:06:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 636b288f-8cf1-11ed-91b6-6bf2151ebd3b
Message-ID: <58d3356fb92d0534ed3c023d205909c57c3eb063.camel@gmail.com>
Subject: Re: [XEN PATCH v2 1/2] arch/riscv: initial RISC-V support to
 build/run minimal Xen
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Julien
 Grall <julien@xen.org>, Anthony Perard <anthony.perard@citrix.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>
Date: Thu, 05 Jan 2023 14:06:28 +0200
In-Reply-To: <fd67c2a1-8f57-2efc-e6d9-f82d529b8b8b@citrix.com>
References: <cover.1672401599.git.oleksii.kurochko@gmail.com>
	 <4702cb223dbd7629fe3d3e494eb363f4b2534e96.1672401599.git.oleksii.kurochko@gmail.com>
	 <fd67c2a1-8f57-2efc-e6d9-f82d529b8b8b@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Wed, 2023-01-04 at 20:36 +0000, Andrew Cooper wrote:
> On 30/12/2022 1:01 pm, Oleksii Kurochko wrote:
> > The patch provides a minimal amount of changes to start
> > build and run minimal Xen binary at GitLab CI&CD that will
> > allow continuous checking of the build status of RISC-V Xen.
> > 
> > Except introduction of new files the following changes were done:
> > * Redefinition of ALIGN define from '.algin 2' to '.align 4' as
> 
> "align"
> 
> >   RISC-V implementations (except for with C extension) enforce 32-bit
> 
> While the C extension might mean things to RISC-V experts, it is better
> to say the "compression" extension here so it's clear to non-RISC-V
> experts reading the commit message too.
> 
> But, do we actually care about C?
> 
> ENTRY() needs to be 4 byte aligned because one of the few things it is
> going to be used for is {m,s}tvec which requires 4-byte alignment even
> with an IALIGN of 2.
> 
We don't care about C (at least now).
> 
> I'd drop all talk about C and just say that 2 was an incorrect choice
> previously.
> 
> >   instruction address alignment.  With C extension, 16-bit and 32-bit
> >   are both allowed.
> > * ALL_OBJ-y and ALL_LIBS-y were temporary overwritted to produce
> >   a minimal hypervisor image otherwise it will be required to push
> >   huge amount of headers and stubs for common, drivers, libs etc which
> >   aren't necessary for now.
> > * Section changed from .text to .text.header for start function
> >   to make it the first one executed.
> > * Rework riscv64/Makefile logic to rebase over changes since the first
> >   RISC-V commit.
> > 
> > RISC-V Xen can be built by the following instructions:
> >   $ CONTAINER=riscv64 ./automation/scripts/containerize \
> >       make XEN_TARGET_ARCH=riscv64 tiny64_defconfig
> 
> This needs a `-C xen` in this rune.
> 
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index 942e4ffbc1..74386beb85 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,2 +1,18 @@
> > +obj-$(CONFIG_RISCV_64) += riscv64/
> > +
> > +$(TARGET): $(TARGET)-syms
> > +       $(OBJCOPY) -O binary -S $< $@
> > +
> > +$(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
> > +       $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
> > +       $(NM) -pa --format=sysv $(@D)/$(@F) \
> > +               | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> > +               >$(@D)/$(@F).map
> > +
> > +$(obj)/xen.lds: $(src)/xen.lds.S FORCE
> > +       $(call if_changed_dep,cpp_lds_S)
> > +
> > +clean-files := $(objtree)/.xen-syms.[0-9]*
> 
> We don't need clean-files now that the main link has been simplified.
> 
> > diff --git a/xen/arch/riscv/include/asm/config.h
> > b/xen/arch/riscv/include/asm/config.h
> > index e2ae21de61..e10e13ba53 100644
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -36,6 +39,10 @@
> >    name:
> >  #endif
> >  
> > +#define XEN_VIRT_START  _AT(UL,0x00200000)
> 
> Space after the comma.
> 
> Otherwise, LGTM.
> 
Thanks for the comments.
They were fixed in patch series v4:
[PATCH v4 0/2] Add minimal RISC-V Xen build and build testing

~ Oleksii
> ~Andrew
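[Editor's illustration of the {m,s}tvec point discussed above: per the RISC-V privileged specification, the low two bits of the tvec CSRs hold the MODE field (0 = direct, 1 = vectored), so the trap-vector BASE must be 4-byte aligned even on cores whose IALIGN permits 2-byte instruction alignment. The helper name below is made up; this is not Xen code.]

```python
def make_tvec(base: int, mode: int) -> int:
    """Pack a RISC-V {m,s}tvec value: bits [1:0] are MODE, so BASE must
    be 4-byte aligned no matter what instruction alignment the core
    itself allows for fetch."""
    if base % 4:
        raise ValueError("trap-vector base must be 4-byte aligned")
    if mode not in (0, 1):  # MODE values 2 and 3 are reserved
        raise ValueError("reserved MODE value")
    return base | mode

# A 4-byte-aligned entry point packs cleanly; a 2-byte-aligned one cannot
# ever be installed as a trap vector.
print(hex(make_tvec(0x00200000, 0)))
```

This is why an ENTRY() that may end up in mtvec/stvec needs 4-byte alignment regardless of whether the compressed extension is supported.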



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:07:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:07:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471888.731919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDP1r-00023V-Fo; Thu, 05 Jan 2023 12:07:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471888.731919; Thu, 05 Jan 2023 12:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDP1r-00023O-D5; Thu, 05 Jan 2023 12:07:27 +0000
Received: by outflank-mailman (input) for mailman id 471888;
 Thu, 05 Jan 2023 12:07:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDP1q-000236-F6
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 12:07:26 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8459670b-8cf1-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 13:07:25 +0100 (CET)
Received: by mail-wr1-x430.google.com with SMTP id bs20so33793004wrb.3
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 04:07:25 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 t12-20020a05600001cc00b0027b35baf811sm29936404wrx.57.2023.01.05.04.07.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 04:07:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8459670b-8cf1-11ed-91b6-6bf2151ebd3b
Message-ID: <b6654ba5ec4002279fe101cf5796604264c18bbe.camel@gmail.com>
Subject: Re: [XEN PATCH v2 2/2] automation: add RISC-V 64 cross-build tests
 for Xen
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Doug Goldstein <cardoe@cardoe.com>, Alistair Francis
	 <alistair.francis@wdc.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Gianluca Guida <gianluca@rivosinc.com>
Date: Thu, 05 Jan 2023 14:07:23 +0200
In-Reply-To: <f05a72a7-9991-ed31-8174-596d5b3f8145@citrix.com>
References: <cover.1672401599.git.oleksii.kurochko@gmail.com>
	 <855e05a0459d44282679f08c8f67e38d35635eb6.1672401599.git.oleksii.kurochko@gmail.com>
	 <f05a72a7-9991-ed31-8174-596d5b3f8145@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Wed, 2023-01-04 at 20:39 +0000, Andrew Cooper wrote:
> On 30/12/2022 1:01 pm, Oleksii Kurochko wrote:
> > diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> > index e6a9357de3..11eb1c6b82 100644
> > --- a/automation/gitlab-ci/build.yaml
> > +++ b/automation/gitlab-ci/build.yaml
> > @@ -617,6 +644,21 @@ alpine-3.12-gcc-debug-arm64-boot-cpupools:
> >     EXTRA_XEN_CONFIG: |
> >       CONFIG_BOOT_TIME_CPUPOOLS=y
> >  
> > +# RISC-V 64 cross-build
> > +riscv64-cross-gcc:
> > +  extends: .gcc-riscv64-cross-build
> > +  variables:
> > +    CONTAINER: archlinux:riscv64
> > +    KBUILD_DEFCONFIG: tiny64_defconfig
> > +    HYPERVISOR_ONLY: y
> > +
> > +riscv64-cross-gcc-debug:
> > +  extends: .gcc-riscv64-cross-build-debug
> > +  variables:
> > +    CONTAINER: archlinux:riscv64
> > +    KBUILD_DEFCONFIG: tiny64_defconfig
> > +    HYPERVISOR_ONLY: y
> > +
> 
> Judging by the Kconfig which gets written out, I suggest inserting the
> two RANDCONFIG jobs right now.
> 
> > ## Test artifacts common
> >  
> > .test-jobs-artifact-common:
> > @@ -692,3 +734,6 @@ kernel-5.10.74-export:
> >       - binaries/bzImage
> >   tags:
> >     - x86_64
> > +
> > +# # RISC-V 64 test artificats
> > +# # TODO: add RISC-V 64 test artitifacts
> 
> Drop this hunk.  All you're going to be doing is deleting it in the next
> series...
> 

Thanks for the comments.
They were fixed in patch series v4:
[PATCH v4 2/2] automation: add RISC-V 64 cross-build tests for Xen

~ Oleksii
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:15:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:15:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471896.731931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDP9y-0003qG-9o; Thu, 05 Jan 2023 12:15:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471896.731931; Thu, 05 Jan 2023 12:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDP9y-0003q9-6l; Thu, 05 Jan 2023 12:15:50 +0000
Received: by outflank-mailman (input) for mailman id 471896;
 Thu, 05 Jan 2023 12:15:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDP9x-0003q1-Cq
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 12:15:49 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id adae7a0d-8cf2-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 13:15:46 +0100 (CET)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 07:15:43 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA2PR03MB5834.namprd03.prod.outlook.com (2603:10b6:806:f8::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 12:15:38 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 12:15:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adae7a0d-8cf2-11ed-b8d0-410ff93cb8f0
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Julien Grall <julien@xen.org>, Ayan Kumar Halder <ayankuma@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
Thread-Topic: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
Thread-Index: AQHZH13GXnua+lhJCkOWuIi1KAI8E66O7vgAgACh3ICAAAlTgIAAFk6AgAAPpYA=
Date: Thu, 5 Jan 2023 12:15:38 +0000
Message-ID: <35d590fb-4b96-70dd-a60b-1c8d7cc8f2d6@citrix.com>
References: <20230103102519.26224-1-michal.orzel@amd.com>
 <alpine.DEB.2.22.394.2301041546230.4079@ubuntu-linux-20-04-desktop>
 <1264e5cc-1960-95d3-5ecb-d6f23d194aa4@xen.org>
 <29460d07-cd43-7415-7125-6ed01f3c2920@amd.com>
 <c80f90d7-d3b5-1b13-d809-9506ff5414e4@xen.org>
In-Reply-To: <c80f90d7-d3b5-1b13-d809-9506ff5414e4@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <963D43C4BD8960498316A574EAAE293B@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 561bfbb8-daa3-4e30-4ebf-08daef168e5b
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 12:15:38.2701
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5834

On 05/01/2023 11:19 am, Julien Grall wrote:
> On 05/01/2023 09:59, Ayan Kumar Halder wrote:
>> Hi Julien,
>
> Hi,
>
>> I have a clarification.
>>
>> On 05/01/2023 09:26, Julien Grall wrote:
>>> CAUTION: This message has originated from an External Source. Please
>>> use proper judgment and caution when opening attachments, clicking
>>> links, or responding to this email.
>>>
>>>
>>> Hi Stefano,
>>>
>>> On 04/01/2023 23:47, Stefano Stabellini wrote:
>>>> On Tue, 3 Jan 2023, Michal Orzel wrote:
>>>>> Printing memory size in hex without 0x prefix can be misleading, so
>>>>> add it. Also, take the opportunity to adhere to 80 chars line length
>>>>> limit by moving the printk arguments to the next line.
>>>>>
>>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>>>> ---
>>>>> Changes in v2:
>>>>>   - was: "Print memory size in decimal in construct_domU"
>>>>>   - stick to hex but add a 0x prefix
>>>>>   - adhere to 80 chars line length limit
>>>>
>>>> Honestly I prefer decimal but also hex is fine.
>>>
>>> decimal is perfect for very small values, but as we print the amount in
>>> KB it will become a big mess. Here some examples (decimal first, then
>>> hexadecimal):
>>>
>>>   512MB: 524288 vs 0x80000
>>>   555MB: 568320 vs 0x8ac00
>>>   1GB: 1048576 vs 0x100000
>>>   512GB: 536870912 vs 0x20000000
>>>   1TB: 1073741824 vs 0x40000000
>>>
>>> For power of two values, you might be able to find your way with
>>> decimal. It is more difficult for non power of two unless you have a
>>> calculator in hand.
>>>
>>> The other option I suggested in v1 is to print the amount in KB/GB/MB.
>>> Would that be better?
>>>
>>> That said, to be honest, I am not entirely sure why we are actually
>>> printing in KB. It would seems strange that someone would create a
>>> guest
>>> with memory not aligned to 1MB.
>>
>> For RTOS (Zephyr and FreeRTOS), it should be possible for guests to
>> have memory less than 1 MB, isn't it ?
>
> Yes. So does XTF. But most of the users are likely going allocate at
> least 1MB (or even 2MB to reduce the TLB pressure).
>
> So it would be better to print the value in a way that is more
> meaningful for the majority of the users.
>
>>> So I would consider to check the size is 1MB-aligned and then print the
>
> I will retract my suggestion to check the size. There are technically
> no restriction to run a guest with a size not aligned to 1MB.
> Although, it would still seem strange.

I have a need to extend tools/tests/tsx with a VM that is a single 4k
page.  Something which can execute CPUID in the context of a VM and
cross-check the results with what the "toolstack" (test) tried to configure.

Xen is buggy if it cannot operate a VM which looks like that, and a
bonus of explicitly testing like this is that it helps to remove
inappropriate checks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 12:47:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 12:47:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471904.731941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDPeo-00082q-RT; Thu, 05 Jan 2023 12:47:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471904.731941; Thu, 05 Jan 2023 12:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDPeo-00082j-Nn; Thu, 05 Jan 2023 12:47:42 +0000
Received: by outflank-mailman (input) for mailman id 471904;
 Thu, 05 Jan 2023 12:47:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDPem-00081o-VN; Thu, 05 Jan 2023 12:47:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDPem-0001y9-Sf; Thu, 05 Jan 2023 12:47:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDPem-0004kR-GO; Thu, 05 Jan 2023 12:47:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDPem-0007oE-FO; Thu, 05 Jan 2023 12:47:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZMNXeJV++glhK4En1x2i6SjIi8H7eoQIC1VTaQe7bcY=; b=Lqt6R7ZlumZAkTvsgnxzI6Xr/+
	0H24+oH7gkmW6iXqumLWDbPtiALjRKl/T7fweBahjFHv+GWYYsh8deCTjcamseiuxXwvcjiAyz5NX
	zwhHILSsbeZm1TiFPrzow/WE0vAqZP4CSxkykputKof3EtfICFjPhpeM9ZWEYcjlSeqo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175577-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175577: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=78b3400e50ee0ba01575749728aae9c79d9d116b
X-Osstest-Versions-That:
    libvirt=9193bac260620fbeec580d709699b3e0f97b9bd7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 12:47:40 +0000

flight 175577 libvirt real [real]
flight 175587 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175577/
http://logs.test-lab.xenproject.org/osstest/logs/175587/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm  8 xen-boot            fail pass in 175587-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 175587 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 175587 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175564
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175564
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175564
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              78b3400e50ee0ba01575749728aae9c79d9d116b
baseline version:
 libvirt              9193bac260620fbeec580d709699b3e0f97b9bd7

Last test of basis   175564  2023-01-04 04:20:21 Z    1 days
Testing same since   175577  2023-01-05 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   9193bac260..78b3400e50  78b3400e50ee0ba01575749728aae9c79d9d116b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 13:20:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 13:20:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471913.731953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDQAS-0003w3-BH; Thu, 05 Jan 2023 13:20:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471913.731953; Thu, 05 Jan 2023 13:20:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDQAS-0003vw-8W; Thu, 05 Jan 2023 13:20:24 +0000
Received: by outflank-mailman (input) for mailman id 471913;
 Thu, 05 Jan 2023 13:20:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SMos=5C=citrix.com=prvs=362393fcb=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1pDQAQ-0003vq-Va
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 13:20:23 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b30c4293-8cfb-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 14:20:20 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b30c4293-8cfb-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672924820;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=ByrJoUcDQW/Jw5ganKu1HSJnrrgr2Xc0sa9YKf7jgcg=;
  b=L/C4QhKhdcx/XHGJnNL6xcf3kB9NEd3UayI8Fdjmksb5DOK0eAEeqDbq
   5ytu6fR+M0IFoj64a77KOB5PU/pKKx+cK6cRvxTARC2AjqHBVCKqRJ2dP
   R2B2pmMPga2XB4Dly8l0wH1oJLXT+zEyywkrjr7Es+hcNop/X6IsRUATY
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 90793652
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="90793652"
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Sergey Dyasli <sergey.dyasli@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH] x86/ucode/AMD: apply the patch early on every logical thread
Date: Thu, 5 Jan 2023 13:20:04 +0000
Message-ID: <20230105132004.7750-1-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The original issue has been reported on AMD Bulldozer-based CPUs where
ucode loading loses the LWP feature bit in order to gain the IBPB bit.
LWP disabling is a per-SMT-core modification and needs to happen on each
sibling SMT thread despite the shared microcode engine. Otherwise,
logical CPUs will end up with different cpuid capabilities.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=216211

In the Linux kernel this issue has been addressed by e7ad18d1169c
("x86/microcode/AMD: Apply the patch early on every logical thread").
Follow the same approach in Xen.

Introduce SAME_UCODE match result and use it for early AMD ucode
loading. Late loading is still performed only on the first of SMT
siblings and only if a newer ucode revision has been provided (unless
allow_same option is specified).

Intel's side of things is modified for consistency but provides no
functional change.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu/microcode/amd.c     | 16 +++++++++++++---
 xen/arch/x86/cpu/microcode/intel.c   |  9 +++++++--
 xen/arch/x86/cpu/microcode/private.h |  1 +
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index 4b097187a0..96db10a2e0 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -176,8 +176,13 @@ static enum microcode_match_result compare_revisions(
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
-    if ( opt_ucode_allow_same && new_rev == old_rev )
-        return NEW_UCODE;
+    if ( new_rev == old_rev )
+    {
+        if ( opt_ucode_allow_same )
+            return NEW_UCODE;
+        else
+            return SAME_UCODE;
+    }
 
     return OLD_UCODE;
 }
@@ -220,8 +225,13 @@ static int cf_check apply_microcode(const struct microcode_patch *patch)
     unsigned int cpu = smp_processor_id();
     struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);
     uint32_t rev, old_rev = sig->rev;
+    enum microcode_match_result result = microcode_fits(patch);
 
-    if ( microcode_fits(patch) != NEW_UCODE )
+    /*
+     * Allow application of the same revision to pick up SMT-specific changes
+     * even if the revision of the other SMT thread is already up-to-date.
+     */
+    if ( result != NEW_UCODE && result != SAME_UCODE )
         return -EINVAL;
 
     if ( check_final_patch_levels(sig) )
diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index f7fec4b4ed..59a99eee4e 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -232,8 +232,13 @@ static enum microcode_match_result compare_revisions(
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
-    if ( opt_ucode_allow_same && new_rev == old_rev )
-        return NEW_UCODE;
+    if ( new_rev == old_rev )
+    {
+        if ( opt_ucode_allow_same )
+            return NEW_UCODE;
+        else
+            return SAME_UCODE;
+    }
 
     /*
      * Treat pre-production as always applicable - anyone using pre-production
diff --git a/xen/arch/x86/cpu/microcode/private.h b/xen/arch/x86/cpu/microcode/private.h
index 73b095d5bf..c4c6729f56 100644
--- a/xen/arch/x86/cpu/microcode/private.h
+++ b/xen/arch/x86/cpu/microcode/private.h
@@ -7,6 +7,7 @@ extern bool opt_ucode_allow_same;
 
 enum microcode_match_result {
     OLD_UCODE, /* signature matched, but revision id is older or equal */
+    SAME_UCODE, /* signature matched, but revision id is the same */
     NEW_UCODE, /* signature matched, but revision id is newer */
     MIS_UCODE, /* signature mismatched */
 };
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 13:41:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 13:41:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471920.731964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDQUM-0006Mc-1L; Thu, 05 Jan 2023 13:40:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471920.731964; Thu, 05 Jan 2023 13:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDQUL-0006MV-UK; Thu, 05 Jan 2023 13:40:57 +0000
Received: by outflank-mailman (input) for mailman id 471920;
 Thu, 05 Jan 2023 13:40:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDQUK-0006MP-MH
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 13:40:56 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2070.outbound.protection.outlook.com [40.107.8.70])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 936cee6e-8cfe-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 14:40:54 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8640.eurprd04.prod.outlook.com (2603:10a6:102:21f::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 13:40:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 13:40:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 936cee6e-8cfe-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m624Ro/TcUAEGaZVcd4acTgTW6U39SATAspbq+q2l/GlxHLMFFs/jMoQYrLHg2HUdB7UdPrff4hxL5uLNKgIkDy649wDM0AL1WCIlJvpLIT7CftEjBWBSQhcM2kxPuU7YezvwZPXcGrvafJvL9Jgg/3WcEoVO/OtxVY/VRNiqs4yPCV9sVuXJebcmjHCVpMGt7F5lCyhhVgrplegt29SOum4uDQG/z578Xq2O38rev0pQXjhH3hBzIvW8o0bAbJ+c+u73xXuN4YqSY3qTSDUrBBA+y8rGhQZglELwLOTYiTihpWtqxNky3OHgfMXBVQDtfOca5JF+HtO4Voswe7Kgw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0ELXjnIH1l3VfliUzSU5AAkC8qMZTwHQR5Hju6YtaeU=;
 b=IRuIvA7TG5JizJU36HLurET2P9CUDOAhLJRKh8z9g5jwH/uBfHSQmDYJFsoOhVX0O3WTkbhwEWqhgFwBlZ0D8RkK0XBIigjUW3pBxgXk4/fOCkAFTxmH0GSDOwJZ7K1BVb0SGylT9+iQTa/m/DvjQw6rmi5PsIe0Mzuk1BMpj+RuYKHoR0ftUstUNYPPQ1Rwy+YIgUWsndz7FZ1ru19IMpXX4NxYUTJSLUkCwkOJZX1BBRRT6mQBPa2uP5zjxm9ogoAisHSl3A6XtmhkItrrZb6xwNkbIU9exdmkR2MKokycbIfObKO6BaNUCU+UEqU6NwyE+f+DbhdkQ4VqrMUq2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0ELXjnIH1l3VfliUzSU5AAkC8qMZTwHQR5Hju6YtaeU=;
 b=ooz5FSMRqfE/Z/YNVWn3rteEnJGUV+RcWByOisrMOe0zPpu6CsRlTE/FjKh+H3I2YkisshArPO4jVx65WFFJogjhFf9SXSgOqWqMZiz51xzK3P8j4dC3+FK6+o/pDZMmKR2P+RuvDx8yYnqYxfLNuP2VWb23XXq6y2GT9zBoQ4ZYN1UBbkY2bkxKQ+H9VKMUTNnM2hIZt6sWQskJcjAfeSlFkLEFYis9J2WAboK3TccZLnx0CJFYqo5iyrGCjs47za61nK3+dF+A6Kf0pakSnHF/RnSsP78xU8niPyWvlQz+EQFsg7d6Zha3UmJehW21Hu+EuZMarb83ymwjJHh+PQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <591d6624-2bd2-93f0-f5d6-760043230756@suse.com>
Date: Thu, 5 Jan 2023 14:40:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 1/2] arch/riscv: initial RISC-V support to build/run
 minimal Xen
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, xen-devel@lists.xenproject.org
References: <cover.1672912316.git.oleksii.kurochko@gmail.com>
 <ef6dbb71b27c75fe0dffb72d65ab457d27430475.1672912316.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ef6dbb71b27c75fe0dffb72d65ab457d27430475.1672912316.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0197.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8640:EE_
X-MS-Office365-Filtering-Correlation-Id: 34acf9eb-9845-479d-3821-08daef227638
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 34acf9eb-9845-479d-3821-08daef227638
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 13:40:52.0012
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: V6w1sdHbXCPJwQcfaUlFXsm1kpjD/VMJ95ECxDVAcNWV3lxcv2lUGwY2+svSx4fe/r8RkbQ6IsyNfdFMwTRkAQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8640

On 05.01.2023 13:01, Oleksii Kurochko wrote:
> To run in debug mode, follow these instructions:
>  $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
>         -kernel xen/xen -s -S
>  # In separate terminal:
>  $ riscv64-buildroot-linux-gnu-gdb
>  (gdb) target remote :1234
>  (gdb) add-symbol-file <xen_src>/xen/xen-syms 0x80200000
>  (gdb) hb *0x80200000
>  (gdb) c # it should stop at the instruction j 0x80200000 <start>

This suggests to me that Xen is meant to run at VA 0x80200000, whereas ...

> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -1,6 +1,9 @@
>  #ifndef __RISCV_CONFIG_H__
>  #define __RISCV_CONFIG_H__
>  
> +#include <xen/const.h>
> +#include <xen/page-size.h>
> +
>  #if defined(CONFIG_RISCV_64)
>  # define LONG_BYTEORDER 3
>  # define ELFSIZE 64
> @@ -28,7 +31,7 @@
>  
>  /* Linkage for RISCV */
>  #ifdef __ASSEMBLY__
> -#define ALIGN .align 2
> +#define ALIGN .align 4
>  
>  #define ENTRY(name)                                \
>    .globl name;                                     \
> @@ -36,6 +39,10 @@
>    name:
>  #endif
>  
> +#define XEN_VIRT_START  _AT(UL, 0x00200000)

... here you specify a much lower address (and to be honest even 0x80200000
looks pretty low to me for 64-bit, and perhaps even for 32-bit). Could you
clarify what the plans here are? Maybe this is merely a temporary thing,
but not called out as such?

Jan
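
[Editor's note: the quoted steps after launching gdb can be collected in a
command file so the session is reproducible; the addresses and symbol path
are taken verbatim from the quote, while the file name is hypothetical.]

```
# xen-debug.gdb (hypothetical name); run as:
#   riscv64-buildroot-linux-gnu-gdb -x xen-debug.gdb
target remote :1234
add-symbol-file <xen_src>/xen/xen-syms 0x80200000
hbreak *0x80200000
continue
```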


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 13:42:05 2023
From: George Dunlap <george.dunlap@cloud.com>
Date: Thu, 5 Jan 2023 08:41:51 -0500
Message-ID: <CA+zSX=aVsQMX0a+3h9x=xFoON0b30VakpGh=J4c9uVqfivVH1w@mail.gmail.com>
Subject: [ANNOUNCE] Call for agenda items for 12 January Community Call @ 1600 UTC
To: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="0000000000003d17af05f18477c6"

--0000000000003d17af05f18477c6
Content-Type: text/plain; charset="UTF-8"

Hi all,

NB that as discussed at the previous call, this month's community call is
NEXT THURSDAY, 12 January.

The proposed agenda is in
https://cryptpad.fr/pad/#/2/pad/edit/9YJwttVUMtpCrgS4ZX3sbNyA/ and you can
edit to add items.  Alternatively, you can reply to this mail directly.

Agenda items are appreciated a few days before the call: please put your name
beside items if you edit the document.

Note the following administrative conventions for the call:
* Unless agreed otherwise in the previous meeting, the call is on the 1st
Thursday of each month at 1600 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a
provisional agenda

* To allow time to switch between meetings, we'll plan on starting the
agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time
to sort out technical difficulties &c

* If you want to be CC'ed please add or remove yourself from the
sign-up-sheet at
https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George


== Dial-in Information ==
## Meeting time
16:00 - 17:00 UTC
Further International meeting times:
https://www.timeanddate.com/worldclock/meetingdetails.html?year=2023&month=1&day=12&hour=16&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179


## Dial in details
Web: https://meet.jit.si/XenProjectCommunityCall

Dial-in info and pin can be found here:

https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall

--0000000000003d17af05f18477c6--


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 14:01:37 2023
Message-ID: <ad323ad9-28f8-9eac-f6f5-f6b6373e2272@suse.com>
Date: Thu, 5 Jan 2023 15:01:17 +0100
Subject: Re: [PATCH v6 3/5] x86/mm: make code robust to future PAT changes
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan <tim@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1671744225.git.demi@invisiblethingslab.com>
 <00505454afa89b759122008852502a5ef7b1b1ee.1671744225.git.demi@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <00505454afa89b759122008852502a5ef7b1b1ee.1671744225.git.demi@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 22.12.2022 23:31, Demi Marie Obenour wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -6352,6 +6352,11 @@ unsigned long get_upper_mfn_bound(void)
>      return min(max_mfn, 1UL << (paddr_bits - PAGE_SHIFT)) - 1;
>  }
>  
> +
> +/*

Nit: Please avoid introducing double blank lines.

> + * A bunch of static assertions to check that the XEN_MSR_PAT is valid
> + * and consistent with the _PAGE_* macros, and that _PAGE_WB is zero.

This comment is too specific for a function of ...

> + */
>  static void __init __maybe_unused build_assertions(void)

... this name, in this file.

> @@ -6361,6 +6366,71 @@ static void __init __maybe_unused build_assertions(void)
>       * using different PATs will not work.
>       */
>      BUILD_BUG_ON(XEN_MSR_PAT != 0x050100070406ULL);
> +
> +    /*
> +     * _PAGE_WB must be zero for several reasons, not least because Linux
> +     * PV guests assume it.
> +     */
> +    BUILD_BUG_ON(_PAGE_WB);
> +
> +#define PAT_ENTRY(v)                                                           \
> +    (BUILD_BUG_ON_ZERO(((v) < 0) || ((v) > 7)) +                               \
> +     (0xFF & (XEN_MSR_PAT >> (8 * (v)))))
> +
> +    /* Validate at compile-time that v is a valid value for a PAT entry */
> +#define CHECK_PAT_ENTRY_VALUE(v)                                               \
> +    BUILD_BUG_ON((v) < 0 || (v) > 7 ||                                         \

See my v5 comments.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 14:19:57 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH 0/2] x86: adjustments to .fixup section handling
Thread-Topic: [PATCH 0/2] x86: adjustments to .fixup section handling
Thread-Index: AQHZIPZlI/g6KVqo90Cr/TLfuVP4Fq6P35OA
Date: Thu, 5 Jan 2023 14:19:42 +0000
Message-ID: <2385cc5f-0680-2ce2-4411-d56df75d2133@citrix.com>
References: <ae3f21f6-6a66-2fe5-9d4a-3f93e6dd64d7@suse.com>
In-Reply-To: <ae3f21f6-6a66-2fe5-9d4a-3f93e6dd64d7@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM6PR03MB4924:EE_
x-ms-office365-filtering-correlation-id: 3cb2e80f-323d-4994-ef58-08daef27e371
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <8EB551F302AD8546BD280F143C24B64E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3cb2e80f-323d-4994-ef58-08daef27e371
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 14:19:42.4848
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: e0wH7PX80qklJj2XrcXnc/0iRC5UMxZh/N8D+gCVXIqT6IOGFhFZPaby4cnL1Uwq9mRghO4Hd5xOUCVsntB2yI0ke+68BnrE0Y1qcrmF7LA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4924

On 05/01/2023 11:10 am, Jan Beulich wrote:
> 1: macroize switches to/from .fixup section
> 2: split .fixup section with new enough gas
>
> Jan

Honestly, I was planning to make another effort to up the minimum
compiler versions to something which supports asm goto, and delete
.fixup entirely.

This is a prerequisite for taking objtool and using ORC unwinding.  The
use of the fixup section in the first place actually interferes with
backtraces; most uses can be removed with some tweaks (and tightening
overall) to the extable handling mechanism, but the VMX VM* instructions
(needing jae err) in particular can't use extable.

Given that we want to do this for several reasons anyway, I'm not sure
the added complexity here is useful.


As for extable size note, splitting into two tables will complicate the
lookup logic at runtime.  And even by splitting the table, you're only
reducing the search length by less than 1 step.

I don't see splitting the tables turning out to be a win, but there is a
far simpler option I think.  Table is sorted by address, so all we need
to do is make extable_end[] variable, and move it forward when we free
.init, at which point we only binary search through the first part of
the table.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 14:30:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 14:30:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471951.732008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDRGC-0004xV-CO; Thu, 05 Jan 2023 14:30:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471951.732008; Thu, 05 Jan 2023 14:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDRGC-0004xO-9Q; Thu, 05 Jan 2023 14:30:24 +0000
Received: by outflank-mailman (input) for mailman id 471951;
 Thu, 05 Jan 2023 14:30:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDRGB-0004xI-6I
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 14:30:23 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2076.outbound.protection.outlook.com [40.107.6.76])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7b6505c8-8d05-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 15:30:20 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8431.eurprd04.prod.outlook.com (2603:10a6:10:24e::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 14:30:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 14:30:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b6505c8-8d05-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BvKCrra4pCGYX2/7daq2swJ+jHr5ThbKox1PqD+IxuoNlNrQIQY05kcSpEdTdsTCUdHVVfM4LfRqBH0K/HgewjppCfyw53X8kL6WPUx0WCQVv2gflVavBTJCLO6KdpwbjyaYo/nHpNC6uwYK/Dx1CviGz43fw/W6h1lyodHovH+RbbWGDb6Sq9upmIYSSaqLgFSz7rF8E7M13Dsq+Y2uX0rV3aQtvnu9wr0v/XZ0FchaI85BhF41bkeAKXs3PUyBmONsPmIoXxn7Ajj5w1vykJaUHpl47iujkJ+Qsh6kwe3fL6OugCRl0QsAfEp+ecz4MD8/67QOePCRFaHMeGuv6g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=S9/wtwNaCOrx6wqy/6Rdxnz+hvDUMmJOTdH7TQTWBzk=;
 b=a1sz5c2xOicAhM451JpZnNm1ZhIP4OgawUxXDlK6Yr96tnSFZVx6dj0PUPbw52XB15lqCuzta1ntwY6iVmb5rPAZKrufKgEc4b5/b0KTKIz5JnV0Mx4F3YgtBuqhJk1Bi/diZmVgnMOJDTUnuIF6cBVQsYkSMgwsvq5KTTTyakNfh7bkGjj3qSpQtDA+AtRkUZ7eWUalusGFWJ6bYYahjz7DVx5L9vKMMxEY0dKEGVGKLGFPJCeGCAx+DEw5/DUe2o78UMYsLf5SpyFAN5Mu0VQCWishBfa8V0nry9IsgHWYP7xnQgMkW0O/ZqTUuoMCwVvv26e8SDqC6Rv8wU5Cig==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S9/wtwNaCOrx6wqy/6Rdxnz+hvDUMmJOTdH7TQTWBzk=;
 b=XdU+5R3xwr+DkBWuB6q0dF9RoL7rOhR8R45CUJ0c0Fkw9goIXsK6Vfv3h7AOV0/WNNgIyCalvXuxjRosbZ85Ft4x5BfPVRnZyEsfWNA7r3x8x942YF7Wdeo7s6eOPL/u+/jcIJqeJvgrU82ACVn0440HuM44URrE1+PqvKp2AWRUCSDiH+y2HtvpkUKUIBn8L9iRJXaCDEzRGDICPT81UbU9xjkvuvoP3kfd/u1WOqB2SUAZ11YhnnmfEFrc1g5h6G/wY0NoM4MCgqOEtU30vSd9AP4NWTwmkHNYzeBcEan/qjGpwRsxex37rujyG8+GF7TYUGicgn1sDpasR37Jjw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2fce631d-2e59-f0f9-3330-9618eb5f3057@suse.com>
Date: Thu, 5 Jan 2023 15:30:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v6 4/5] x86/mm: Reject invalid cacheability in PV guests
 by default
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Marek Marczykowski-Górecki
 <marmarek@invisiblethingslab.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Roger Pau Monné
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan <tim@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1671744225.git.demi@invisiblethingslab.com>
 <2236399f561d348937f2ff7777fe47ad4236dbda.1671744225.git.demi@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2236399f561d348937f2ff7777fe47ad4236dbda.1671744225.git.demi@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0165.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8431:EE_
X-MS-Office365-Filtering-Correlation-Id: e05a91eb-adef-49b5-c670-08daef295e74
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e05a91eb-adef-49b5-c670-08daef295e74
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 14:30:18.5005
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gSvra5CT/1d3qprmWD2+sD8Npfvmq700Uw6+9C6BNzU57c21iKk1BL77OxYVso7xVIVlASf9F9sfccWWtL1vLA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8431

On 22.12.2022 23:31, Demi Marie Obenour wrote:
> Setting cacheability flags that are not ones specified by Xen is a bug
> in the guest.  By default, return -EINVAL if a guests attempts to do
> this.  The invalid-cacheability= Xen command-line flag allows the
> administrator to allow such attempts or to produce

Unfinished sentence?

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1324,6 +1324,37 @@ static int put_page_from_l4e(l4_pgentry_t l4e, mfn_t l4mfn, unsigned int flags)
>      return put_pt_page(l4e_get_page(l4e), mfn_to_page(l4mfn), flags);
>  }
>  
> +enum {
> +    INVALID_CACHEABILITY_ALLOW,
> +    INVALID_CACHEABILITY_DENY,
> +    INVALID_CACHEABILITY_TRAP,
> +};
> +
> +#ifdef NDEBUG
> +#define INVALID_CACHEABILITY_DEFAULT INVALID_CACHEABILITY_DENY
> +#else
> +#define INVALID_CACHEABILITY_DEFAULT INVALID_CACHEABILITY_TRAP
> +#endif
> +
> +static __ro_after_init uint8_t invalid_cacheability =
> +    INVALID_CACHEABILITY_DEFAULT;
> +
> +static int __init cf_check set_invalid_cacheability(const char *str)
> +{
> +    if (strcmp("allow", str) == 0)
> +        invalid_cacheability = INVALID_CACHEABILITY_ALLOW;
> +    else if (strcmp("deny", str) == 0)
> +        invalid_cacheability = INVALID_CACHEABILITY_DENY;
> +    else if (strcmp("trap", str) == 0)
> +        invalid_cacheability = INVALID_CACHEABILITY_TRAP;

Style: Missing blanks immediately inside if(). Also note that generally
we prefer '!' over "== 0".

> +    else
> +        return -EINVAL;
> +
> +    return 0;
> +}
> +
> +custom_param("invalid-cacheability", set_invalid_cacheability);

Nit: Generally we avoid blank lines between the handler of a
custom_param() and the actual param definition.

> @@ -1343,7 +1374,34 @@ static int promote_l1_table(struct page_info *page)
>          }
>          else
>          {
> -            switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
> +            l1_pgentry_t l1e = pl1e[i];
> +
> +            if ( invalid_cacheability != INVALID_CACHEABILITY_ALLOW )
> +            {
> +                switch ( l1e.l1 & PAGE_CACHE_ATTRS )

You want to use l1e_get_flags() here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 14:52:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 14:52:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471959.732019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDRbv-0007VN-B2; Thu, 05 Jan 2023 14:52:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471959.732019; Thu, 05 Jan 2023 14:52:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDRbv-0007VG-6o; Thu, 05 Jan 2023 14:52:51 +0000
Received: by outflank-mailman (input) for mailman id 471959;
 Thu, 05 Jan 2023 14:52:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDRbu-0007V6-Rr; Thu, 05 Jan 2023 14:52:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDRbu-0004ud-PC; Thu, 05 Jan 2023 14:52:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDRbu-00087h-DR; Thu, 05 Jan 2023 14:52:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDRbu-00073t-D2; Thu, 05 Jan 2023 14:52:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l15FJLeYlfyLOiiWjYTABtrxHMab3qI2j6EMOzMVZpE=; b=VkS3ECMfqm47qL2iwM2BTSyrO/
	sXtqB9s8+NXFYm9ewiig1ecK8tg/wR6vd87cr5VVvkIBGGnMDkkv6VmMGx+CQU8K06qr4vfcOjPgU
	tIEIuqeakPpu4G+YbV0ne+PTGcDYnWUMlQ0smt7Gp/D8y/czrV9EIKWc51n5Uhkbs7Bg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175583-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175583: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=cb9c6a8e5ad6a1f0ce164d352e3102df46986e22
X-Osstest-Versions-That:
    qemuu=ecc9a58835f8d4ea4e3ed36832032a71ee08fbb2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 14:52:50 +0000

flight 175583 qemu-mainline real [real]
flight 175588 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175583/
http://logs.test-lab.xenproject.org/osstest/logs/175588/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 175588-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175571
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175571
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175571
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175571
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175571
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175571
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175571
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175571
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                cb9c6a8e5ad6a1f0ce164d352e3102df46986e22
baseline version:
 qemuu                ecc9a58835f8d4ea4e3ed36832032a71ee08fbb2

Last test of basis   175571  2023-01-04 17:08:54 Z    0 days
Testing same since   175583  2023-01-05 06:47:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Mukilan Thiyagarajan <quic_mthiyaga@quicinc.com>
  Peter Maydell <peter.maydell@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   ecc9a58835..cb9c6a8e5a  cb9c6a8e5ad6a1f0ce164d352e3102df46986e22 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 15:49:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 15:49:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471969.732029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSUC-0006CR-Ez; Thu, 05 Jan 2023 15:48:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471969.732029; Thu, 05 Jan 2023 15:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSUC-0006CK-CQ; Thu, 05 Jan 2023 15:48:56 +0000
Received: by outflank-mailman (input) for mailman id 471969;
 Thu, 05 Jan 2023 15:48:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDSUB-0006CE-MX
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 15:48:55 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7445aadb-8d10-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 16:48:54 +0100 (CET)
Received: from mail-co1nam11lp2173.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 10:48:40 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO1PR03MB5699.namprd03.prod.outlook.com (2603:10b6:303:95::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 15:48:34 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 15:48:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7445aadb-8d10-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672933734;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=GtOP1m3e6y8lukYciiLXnlei1xaVG5tNYJ7a1NRQhcs=;
  b=Ip7VWfkCxH5OWRIQYhU8G2DiwjKxu0S74VXHg2goZM4z6An6Wfnlalrq
   D7hHyIROdR/YSIs1oxC9lhv3IRY9IznLKc1CF7iYThoTr1uHUxy1koAYK
   RTIvt2ddjPA+zKEkT6wI0ywlHT72ieG81mamLoEBdqn/Y6uBxoJThi08n
   E=;
X-IronPort-RemoteIP: 104.47.56.173
X-IronPort-MID: 90280575
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="90280575"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M2AjWK/FjdCHkKTHc70MNpqmIi2UqLLDrkZEdJ4XcPqDa+szk4tZr+I0mBwQeR9XXtLWuA6FTxcxTPHQsY8LVUKsXfU6W8TQS5urNBMVW8qkxUgYy9ubbiSUoizK+jVRoGCGCPsRIpbGDc7ULKPgsi5Inj8fHYs+XPJTo74EyX/NWcSqGhJzPHYPMwiC47lgwwijq475S93zQuGWI1yrtM0KAomBhT7aSiidpwcXWHKkEsx/ow1FzIv1Oz76Yt+qvt3HTlGbV1n1K2PHM80chghrNB6dTel7Z55T2yGgUqWXoGxVL7TUZeb1pQXjzine4qsUaSx8oxPl35ct31m64Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GtOP1m3e6y8lukYciiLXnlei1xaVG5tNYJ7a1NRQhcs=;
 b=a92T8A40IUUpJtN6helnwLbzKwD2485aLZ0IUJjOEbJqyizu4/IOOcRr/e6Wju52pniqyeF5Af7TR73Lqv44025i/PCiQCEYW3Z3/Z24Z4oUalctAEZJ5NNcllxxMwuei1tzlm42bwnyCL7Ubt8FGK+UyCVzbqzSacbCAA9IAij8uh+W3F/a5IzzFbFhsGIpXY5EgKvg+E6ST7jDp9ZStSUDz3gjlxRRO+v5Y5NggdnLXp4gaGz+AIUb3XffzecR8tu05d+Zy16ggO/YpCmEhzVOboRFlJRgryd77OpLb41sBRESU+c7PMzVTsfRXHgy8MqTvpg3g1FrJwjbZnTeRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GtOP1m3e6y8lukYciiLXnlei1xaVG5tNYJ7a1NRQhcs=;
 b=FL3F/KTjNwe4EiwTy1E59VF2aYHgjg7COUG0H4wQH807Cnpg/aGRWRlNyitYmp1t4v9mBCktnwLr87TTBVNZ26mHPnvvAWHLKePsj5UWwtfx+XzGJIeO+qWs1AnrcJcp7M6GFNnLi3vQRgLNR+0YvYjCzKfLU7byvfKX/IsHfQQ=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Oleksii Kurochko
	<oleksii.kurochko@gmail.com>
CC: Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Julien
 Grall <julien@xen.org>, Anthony Perard <anthony.perard@citrix.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 1/2] arch/riscv: initial RISC-V support to build/run
 minimal Xen
Thread-Topic: [PATCH v4 1/2] arch/riscv: initial RISC-V support to build/run
 minimal Xen
Thread-Index: AQHZIP2Dn/VS+v8CxEaK8nfj7fmLt66P1KmAgAAjsIA=
Date: Thu, 5 Jan 2023 15:48:33 +0000
Message-ID: <01888162-49fb-a280-a088-5e81edff3919@citrix.com>
References: <cover.1672912316.git.oleksii.kurochko@gmail.com>
 <ef6dbb71b27c75fe0dffb72d65ab457d27430475.1672912316.git.oleksii.kurochko@gmail.com>
 <591d6624-2bd2-93f0-f5d6-760043230756@suse.com>
In-Reply-To: <591d6624-2bd2-93f0-f5d6-760043230756@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CO1PR03MB5699:EE_
x-ms-office365-filtering-correlation-id: 5a4eae3b-32bb-42cf-c9bb-08daef344d2f
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <B62CF006BFB63A42930C93E4028879A8@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a4eae3b-32bb-42cf-c9bb-08daef344d2f
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 15:48:33.8193
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: pfThgX/v3jfMuzlD55NpXmHzK4VtlAue98gy/5IbuTRT0Pv8e9JlxFuysHHypYEGADwjZXbIrgUcvoF/dR95UivN4jmbQXp74LDltiMOTXo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR03MB5699

On 05/01/2023 1:40 pm, Jan Beulich wrote:
> On 05.01.2023 13:01, Oleksii Kurochko wrote:
>> To run in debug mode should be done the following instructions:
>>  $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
>>         -kernel xen/xen -s -S
>>  # In separate terminal:
>>  $ riscv64-buildroot-linux-gnu-gdb
>>  $ target remote :1234
>>  $ add-symbol-file <xen_src>/xen/xen-syms 0x80200000
>>  $ hb *0x80200000
>>  $ c # it should stop at instruction j 0x80200000 <start>
> This suggests to me that Xen is meant to run at VA 0x80200000, whereas ...
>
>> --- a/xen/arch/riscv/include/asm/config.h
>> +++ b/xen/arch/riscv/include/asm/config.h
>> @@ -1,6 +1,9 @@
>>  #ifndef __RISCV_CONFIG_H__
>>  #define __RISCV_CONFIG_H__
>>  
>> +#include <xen/const.h>
>> +#include <xen/page-size.h>
>> +
>>  #if defined(CONFIG_RISCV_64)
>>  # define LONG_BYTEORDER 3
>>  # define ELFSIZE 64
>> @@ -28,7 +31,7 @@
>>  
>>  /* Linkage for RISCV */
>>  #ifdef __ASSEMBLY__
>> -#define ALIGN .align 2
>> +#define ALIGN .align 4
>>  
>>  #define ENTRY(name)                              \
>>    .globl name;                                   \
>> @@ -36,6 +39,10 @@
>>    name:
>>  #endif
>>  
>> +#define XEN_VIRT_START  _AT(UL, 0x00200000)
> ... here you specify a much lower address (and to be honest even 0x80200000
> looks pretty low to me for 64-bit, and perhaps even for 32-bit). Could you
> clarify what the plans here are? Maybe this is merely a temporary thing,
> but not called out as such?

It's stale from v1 which had:

#define XEN_VIRT_START  0x80200000


But honestly, I don't think the qemu details in the commit message are
useful.  This series is just about making "make build" work.

The next series (being worked on, but not posted yet) is only a few
patches and gets a full Gitlab CI smoke test, at which point the smoke
test shell script is the reference for how to invoke qemu.


I'm happy to R-by this series and drop that part of the commit message
on commit.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 15:57:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 15:57:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471977.732040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDScp-0007lc-Dl; Thu, 05 Jan 2023 15:57:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471977.732040; Thu, 05 Jan 2023 15:57:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDScp-0007lV-Aw; Thu, 05 Jan 2023 15:57:51 +0000
Received: by outflank-mailman (input) for mailman id 471977;
 Thu, 05 Jan 2023 15:57:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDScn-0007lP-No
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 15:57:49 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2058.outbound.protection.outlook.com [40.107.14.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b31eb18d-8d11-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 16:57:48 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9285.eurprd04.prod.outlook.com (2603:10a6:20b:4df::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 15:57:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 15:57:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b31eb18d-8d11-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KIr82r64RqQ2fdOQa8XsTQ00nLj001I3N9/HBwlly9WHA674lkaPET8gm6s0GOSL34D7SmarEX6etShRAh7tmU7D4SxrRFH9fPWypk6xhGAEYu2FOKM6Apw9215/TKn9X7ONB5thoqd9DCei4IOlz98y4txraAOFPVTiVFfHtkCs3LmhTzlgKI2T2BZipTomrL/rN4iuQiJhQD7bbm1m4UcVMmQ9VrbdoO8B0QYIrowF+dKx0pU5tFrafgdupAoL2jO7LsuCm7ydh8el16Ib0okcC7esQz9nEFVlWSSdn/uOxn7BKzeuV8m16/vQ6uNLP3+sYxb0Y3rh6DWYOhZbyA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jSiR8+UjgVMW8QiXCNqPlZifOkOHmStw8A6NZfqu+kQ=;
 b=A5RV05DiHqFjo4Y9ZzQbBkgzLfzbZk+2lruXvKkecMj6WY00YuyOJZj6GsfxQ1k6Vid/mIyuJPUKuAMQD87keo9Iu4PZIRFyaR8+5a2MP+7bMQ25UxNJiXWg6ukfIs0MVSFiFFumCMDcqlZp80y5HFOWCc0Ex7cn8umQkXveFHFEKAi365PSLkz6QAvVEhKUF4NzS23NGAhDMgem2GcqPLJ7IZh0I4q4k0X39Ld7QzJfnC1ZOM4mJo7fl/9Va0JCmYd51ENtTSe+kRPUHz4uC7W2h42n7VIazDv7+GcnC555udbhVeDiO5NDGF/R/AmXF03ACSJymEUTFbJDwSOk/w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jSiR8+UjgVMW8QiXCNqPlZifOkOHmStw8A6NZfqu+kQ=;
 b=idkqKQzojWApzPy/EL5JrXlaczlBfUJSfwrx5YRqH3cVQQshhSU519kBftJ1M5s5Tm2Y2ibRaakAkyv4iq0Sk3Z5zATUROvqyIOzDasv0xkNCwZS6UEg7/Yt3HYMsiYqq39DwZ2/avlv9jwgLYHMwDMkyoLcAaihOaBnooJkPsL6PQlsxJ31MiD9x86221EihUtTqbdJ30q2hlOw3cbildXOEzKCaS29xKdWeU348p4gCC2TbyLh7oAEXbUjjDp7GEcPupwMLx604BzCaxeDTCPw8wVyX7vI5xfolWYmxsNg+Z5yJmYk22TVikUy7Dd3Wts5nGhtImhwat45buELdw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Date: Thu, 5 Jan 2023 16:57:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 00/11] x86/shadow: misc tidying
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0063.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9285:EE_
X-MS-Office365-Filtering-Correlation-Id: 5ac1bcb8-b040-43f4-df0c-08daef359602
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fXlf/Ki00uVFwYM2FFy4afSYUzRKNVVLDDglYy/3Fpu8JS6AAbIZsupLFv+pWwV6sUfALnRarCOMktysUidNCqtngc329xmaqMBut4z++jUa5OQduU/ukN2zySfQgoGXfIzhFdtSzRQ9yY+do3973dwD4uWOhznm7/dPMD39P3Ivy+CB+Wq0BF69JzCQG2z1EZh7TT+OOpWc51qK0QmCm/wKfb3MBOhV6NgmZx69KD1JkhcIQaLTUUxCk6ClWIkRDlNYI+tWOY5k33XlAaC0ST72O+daZM7xEPx6+UbzyhWCdNTnjX0MP+Cu6nLjp6PtmN9+F7RhWyCx3buG8CFh1ZbV+3dCq9MPsnS00Uvbqo1KEQZMdirdnsg8zpo1+hxzn+QzZS8piF1ocvJNvHDoJhEou98KdNAsWUa+YG9+k3EcTI7RYsgo46et+b6TCjUakwpdW2um6CTwZu7fVC+qJHAwOHuSJDJFZfDsinrOrE9zLUExvOiNXKxHtdpafr+bz/sCr61kMSwxUsQ9MjT5mpD6aD53V12wipkDk2iuWJ7ZslAPzUc0QjBiNu77IRQJndhCpkHVvnHOEX03fRB/ysaCwFLvE98thO2dhHOR5laVdtJk2ZLqn3Bvct1JsmFSGT3AFYLsascGpwM171pyiLq8yQs7QDFwBiMrUJufZA5jYy8MoUtjZ+YFjP3RcM9ymVSgIHB3pV27H2gu8KQkxSn1HfJLWlHgpon7k59al4A=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(346002)(376002)(366004)(136003)(396003)(451199015)(54906003)(186003)(26005)(31686004)(66476007)(66556008)(2616005)(6486002)(66946007)(6512007)(478600001)(83380400001)(8676002)(8936002)(5660300002)(4744005)(41300700001)(4326008)(2906002)(316002)(31696002)(86362001)(38100700002)(6916009)(6506007)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZWplbXdERUJDR3k5dUpUQVUva2U5b2JHMWQ1N1k2RlJDV1Qrb1YvcmRxNlNJ?=
 =?utf-8?B?VG5iemNkUEN4a0lEV0l2aEs1Q0N3KzNRMVZaWmMvMVh2RWFOM0tpMmRDa3JG?=
 =?utf-8?B?NTdFcXpuWlRsWjIzU0VBSktGbHYvY2lNMUROMjk1eWhvSzcvRXNnYXhZQzg1?=
 =?utf-8?B?NkpWQW9GT0FpVlJ1czJSKzRseXQxZ21ub0tzZU5BL2d3UjRRWVVFRm0wVi9u?=
 =?utf-8?B?QlhzVkNieWl0SndDTHc5bGp1ZVJyRmEybnhtMWU5b1NYY1p0SWVDRys1bm9v?=
 =?utf-8?B?VlFYazhJeHl4WmtzZWJIK1VtVENTeWNscGExSHJXMWh1dGh4WldDcGYxSWow?=
 =?utf-8?B?dWRVWlpaa0JxQmwwdHdnTzVsYkFFOU0wdEZHQUI2anpEK3l0K2gwMXcvVmRC?=
 =?utf-8?B?dXlhSzJKUndzWmNwRTMycDEvZTh4ZDJjTnN3ais0dmd1RGFTZk93cE9YTEsr?=
 =?utf-8?B?Z2VZSzVXekdiazRCYmg5ckZhK3UzcnNsNTJyQkZCdHVnVFJrbXhvZlJSQVJa?=
 =?utf-8?B?YUhSVFZrTjFaUGtHZ1pDbFN6aEVOYnhYbW9pd0J4M0s2RXRQZ2lYVEdXRHBh?=
 =?utf-8?B?cnZTNmlQUnUrejQ0RnMrRnZ4SDVSYXFCd28vL2hHYlhHdUkvbVo4QmtINnVy?=
 =?utf-8?B?MVhPdXBxYUwyUS81bm54dzM0UDdZZFgwK2hjY3RiMHNOb1JJMFlGTk5uQTRT?=
 =?utf-8?B?dXVsRmlPck5ZM2xnMU0vS2V3Rk0rRDhremdVQmU2WkJQVlZGVzJMd3ZUSWVO?=
 =?utf-8?B?dVc3MWVKa0hSNWtKM3E4MmZLNk1OZHlVcWhDQ2dXZ09YSkprNG5JR3ZVWTVP?=
 =?utf-8?B?dmMyS0dSTVNKeVJBbURCMk5MUDFTUWNmRE9FVXFoVVo1NmxrR3JxWEJNVCt1?=
 =?utf-8?B?NmhSZlFtNURFdXFBVTNDQ0dESVliVUpjVTVmdmFkNjZTaTJPbHl2a2RYUjhj?=
 =?utf-8?B?NGE1b2NsQ08rU3dkTUpCUVZiQ3htN3BySlhvdTVXVmVHNlJpUXMxUkZEMUxp?=
 =?utf-8?B?Q1A0bXFjVndMZ3E5My9YSHRBdzVBYzBVRnZlWHVLbldXQjNiMkhKM0FqVHNp?=
 =?utf-8?B?UHNBSitPWnRFZ1ZjK1hMS0JZeGpBVTZpd0VFSDVUM3hFVm1xckRvNDVXY1Yz?=
 =?utf-8?B?TUVzVWVTNUQxVnB0dDIySlU5SEtJaFl5M0JWUHJ5ejZ1c1FaMmdLK20wTTA2?=
 =?utf-8?B?UzVxT09YSDZwazEvNk45N0QrUGNobXVkMnpwY3RRTFdGNytRS2Q1VlExT0JM?=
 =?utf-8?B?TGNvdElzSzdqc2R4WStaY1Z3WlI5UnZDZHArTGlrbjR6WGtyQnN0eG9hM3NW?=
 =?utf-8?B?dTBXazNSbUw4S0FIQzUyM0FkdkdUaGlpMTFISnhMaGNtTU5MQk5SZVA0VTlo?=
 =?utf-8?B?K3hCSGFlemNYdmJ1OEZkdDNMU0VZNnZyazdTZ1pwZ05UUWRlV3ZNR29wQ3V3?=
 =?utf-8?B?ZE44Q2hBM1NEOFVGaUdlWXNGU2ZKeFlvdHd5THhNeTVvZ0syZkFqNjBmb1FD?=
 =?utf-8?B?a2xnYmxaYzBsUnpTaE5LQ2c4T3FhM25lajNiakR0YWRQaGNVbVFjMVQ4R1gv?=
 =?utf-8?B?SURuSEcrSEg4SEpsYXkvandmNXpCdUQ1bzJ2ck84dGRJYXE5c0hIMTZVcGxF?=
 =?utf-8?B?Q0lveWFiVDAzcStSTWVOUXZYVDE2QTlBRlM2NDVReGF2T082SHRoaXJ4MThL?=
 =?utf-8?B?N0picUY5NTE1dGZVc0xzUmxPZmF3QnFkbkpML1o2OStJVExFbDFVMzFTcW8r?=
 =?utf-8?B?TzFpcGhnU3FsUjVFQkxDODhtTTZNS1llVjd0ZERQbEVhcmwwZUZxOVhXOGpK?=
 =?utf-8?B?T09ZUWhxUnNJbEpEV3FOeUJsR1NSY0ZYN3dDZGs1T3VkR1YwSmhxVG10R3VL?=
 =?utf-8?B?cFJyOXc5RWtFTEcydUowaW9jU01xZzZULzdEeGRvajE2dEF5ZjZubm1nRWZo?=
 =?utf-8?B?dE5EcEtaRGhsU3dqQW5vbEhDamlRR3R0S1hJTlAwMU5pR1l6ZktONFhIUldL?=
 =?utf-8?B?cjVCSGdHT2RMdmNGV2czZHVHcndiU3VLeHJKcFpRMkRuNTRSQmlOc2tRcnZG?=
 =?utf-8?B?SG91SmZ0Nm50bDk3cndIZnExNGZOcHh0QUQvalJqMy9Hc251YThnY0JzaUtJ?=
 =?utf-8?Q?cYwyh8Vbs8QRuQc99saRqVWNg?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5ac1bcb8-b040-43f4-df0c-08daef359602
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 15:57:45.6828
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IS/IdPKpj6oDXBaHcckujCKCUeFvD/0yTweAplS4JRhMpNefvKFCkexkZkkTGRQW1ExK0zxkrjBoJ/0dZbFRZA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9285

... or so I hope. The main observation was that we still have both
hash_vcpu_foreach() and hash_domain_foreach(), where the latter was
introduced in 2014/15 to replace the former. Only now, some eight
years later, can we complete this conversion. Everything else
addresses issues noticed along the way.

01: replace sh_reset_l3_up_pointers()
02: convert sh_audit_flags()'es 1st parameter to domain
03: drop hash_vcpu_foreach()
04: rename hash_domain_foreach()
05: move bogus HVM checks in sh_pagetable_dying()
06: drop a few uses of mfn_valid()
07: L2H shadow type is PV32-only
08: reduce effort of hash calculation
09: simplify conditionals in sh_{get,put}_ref()
10: correct shadow type bounds checks
11: sh_remove_all_mappings() is HVM-only

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 15:59:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 15:59:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471984.732052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSeF-0008KB-OA; Thu, 05 Jan 2023 15:59:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471984.732052; Thu, 05 Jan 2023 15:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSeF-0008K4-LK; Thu, 05 Jan 2023 15:59:19 +0000
Received: by outflank-mailman (input) for mailman id 471984;
 Thu, 05 Jan 2023 15:59:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDSeE-0008Jw-Fs
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 15:59:18 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2080.outbound.protection.outlook.com [40.107.14.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e853aee9-8d11-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 16:59:17 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9285.eurprd04.prod.outlook.com (2603:10a6:20b:4df::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 15:59:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 15:59:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e853aee9-8d11-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hZcD+H4ve0PIG6pU2giq5aYtk29Z7Prsw7S1ywA9nwCa54jRSIr98DdPEDcZhd+Y/U+mpknJAv8xlTRpOTiTmyFvQjblbCq4eMXsPWRcwBCeEPVdvr9ctyv+2u1M5kyT+SMiy+e4SC0oBJ+xiaIK6RhfzMItOmPH8rUS44TGoWW5FhvPEyNQ9Vd5jdSDJxoSrsS0IrNiPddk0cM7d4d3wew4edjqt5gfO+D5C6pAbpJApBDLVy3Yv2hIHgJVxbd7orZNu08waJgiKeWC6X+7qMYLYfm6t4m3wDYKlWMoj83hGjZ4kF2gSNBULvHClufXjEOqYBYehQjPa+MY/FHnUQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rmoM5XdZSCG0IQymIs8Nrrx2EaS6/cU5V9J6nOA40tQ=;
 b=mYZswt96NTqvno4IES7zeafWv+2a36rtHyT0gdowFdCT7R+sfR0HXaiGtmayWNzWC2fzBQvI2nMVMvppRP5xO1Zt8umLbzMRUelDL8hQeQnO+WLXqiHxeIWuBMed6j6qcHZBthL7GigNLU0/0vA+clu7JSIpRsXp7A2uZeOxidM1Qw1bwdgouGymAliDajcjybkY7f+OMtkiLSl4LNTFU00JITKahW9krFP3EIU0hG0bF6rUuBfJDDkqGg5P4qOZMe35d8vMER7ttZBftCMxrHIt0l7+xwHRQbBz8f5gPa6QdZUjSOvgS7suFxC1Zbzz1woRwPb40sAVV0ewYakLEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rmoM5XdZSCG0IQymIs8Nrrx2EaS6/cU5V9J6nOA40tQ=;
 b=NN5NO4raLPKWcWez2JHf3sHZRckIwTAjAZgObsXRrYxh3z6bGc+o+NssxGntOkm19X7GZ4yP71VUPlaLKbKqIzpA5zFscnBhdgyK5HiEW/K4dAxejU+93Ld3lrGlFxQ3BGujaM87iACgWFT2/wecreVWiGGDMGsEHScX57u+D3UmNR4whSU8zjAkw09PU8axIsCKWii8fwD/iyEFb1WnJSRZLYmlmcIbo+JboVe7xwYKQjQKITl8pl/SGE9B/dT4R8aRhnJce4a7GtfxFCwKnPF1Q5kqng6pGojWqq7WH6l6KF6e+77KQBNCYJidcMaciqzC5jD2DkVg2IGfl2vU9g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <03ae5a1a-4417-0aa0-27d8-833ade20cc0b@suse.com>
Date: Thu, 5 Jan 2023 16:59:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 01/11] x86/shadow: replace sh_reset_l3_up_pointers()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0039.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9285:EE_
X-MS-Office365-Filtering-Correlation-Id: 59051f4a-e839-4c34-6f74-08daef35cbcf
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	FTEVlLgcfqoiXp5dVP/eLRCdy9nVrMuV9os0uKWjLO5FOBp2XqUsOlFFmM2ieAKGN1UApSgusgic7EWQrbRzT0/DoHcbXOlr0lpFvPKqqosfkffCR/IilozbVKHs/7qSUc9uPLQaJSBdL9tPvWKQhQ4IwwEFBlDMYWQAS0plE40qF708ZpVjT2dgeDK/bOFUICNq3mwrJvPXWz4H2cCiFQISiio2yQmtUYlLVjvmvRGbkTn60fn/UTzP02i67DrjjGSk3bLElsRNG9krdO/53jgxZXVuqxcrH7lkfIdQI+JZjw/E/HzyiU4SBXI53aPJDRElbIb6LMCfm7AM+HQPWS6hcJFBJf2GaIdT8E0aNZhd6auEGg2i6oQ2YtVTytSo6Ah+5qDPyLkbCfSVdAI4fzMDZ4cJzK9lHVadm6vIHRVFxIdmzDz+qtREWmDTHw/ul73W3InhDsSHkWM774SNzCM6gymzkxlOBnZpEXUH5JwwE9AZhEUC0wHHGFIPTxtqeBZrgSMqZ9GGf5HKGY9TVe1MtXZsx4Q8jY/XCFCTSMU2XeuDyzc4mFQewQiie769kycTzBpZtmBtn27jAmuM0jOZ69Sd+ButFz4X/mDvLTMD7AafL7qegpvmk7bZhK+rqG2YDnkdCvG8KMarXK6k/lvyb1vj7pnqzPG5fiexHDg8EP7oGllkE2eOkmsuNqCRznDVLBxSGrsQV4YrYUMNhqOgR+i5kPDYpb62TlOE5fI=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(346002)(376002)(366004)(136003)(396003)(451199015)(54906003)(186003)(26005)(31686004)(66476007)(66556008)(2616005)(6486002)(66946007)(6512007)(478600001)(83380400001)(8676002)(8936002)(5660300002)(41300700001)(4326008)(2906002)(316002)(31696002)(86362001)(38100700002)(6916009)(6506007)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dlpwdnZZeERmT2l0OXhZR2tBWFplTDZ6TXRLbDlYN1FEK0FLVzdoa3BWV2tS?=
 =?utf-8?B?bUJhakRuZ1N3T2RVaDVIVXluWXJpVWVQNlh0Q2R2YURWbTIxM0JZeHo3MTFw?=
 =?utf-8?B?TFF3ekdzN255QjFISkFGQVo5NXd4SDFtZ0ZEdHExV21lcm9MeEIxREx3d2Nh?=
 =?utf-8?B?QmVhMk9WZ1EzOFlMRmoxQmVQdEpYdnloWkR2R2tXMFBNTGZMOHR5NkhnYjhu?=
 =?utf-8?B?cVVLaDFKVHJqNENuUkhpeXhtNUNTd0Jxb0ExdHhqR0RnUnZQTmxHS2l3UzAw?=
 =?utf-8?B?VEIyTjltcWJBMVRyc3daN2RLU3VNMktFTkEzV2tZQU5uK0s2QXh3WHhlazFS?=
 =?utf-8?B?Qzc2dVQwOGZqblk5UFYzVVhob0l1ak10TlVrMHA3c2V1Zm1rYk83TXlhTzU2?=
 =?utf-8?B?ZkxnRE56TXVaSlRQUW04UkpsTjZSYUNtR0RkR0JaSUp6YWlyU3E4MjBOaDl6?=
 =?utf-8?B?NkZpOTV2T2hDbGZQcUt4V0FyaWxJRjBScHpYN1pOc2NTZ0lzYlczWHZhMmNV?=
 =?utf-8?B?bVBQOTFDSXlwd3pBc0tEanFBYUxEWnBRaXBXb3ZhZlVGYlJlOEpOOVA1UVZM?=
 =?utf-8?B?SFpRUmhEQ3JTVFptZVFSL0hpaFRKNWN6M0dZUWp1b05leTVSb0ZZWTgwR3Mw?=
 =?utf-8?B?T1hPdlgwZCsxbjdxUGdzTk9Jckt2V2xMUXBFQStjZStwc2NrZnJVbjBvZDFr?=
 =?utf-8?B?YTltNUV6Uk9CdXZUYkFlVU16Vnl2dGtrK0VaMlJhcjhzd2JRNFZrZExadGZy?=
 =?utf-8?B?LzBvYlRLSlFRdTM4QUdXZ1c4UXEvQ0tlU2szb1EwNTg3a25zVnNJeEd6Q29F?=
 =?utf-8?B?VndiTWpsQmtpTjQvQXJpdE9GV05ELytEMXBsZG9tNkFoYWdUN1RId1NLZlVD?=
 =?utf-8?B?YURJVjMxQ0ZhbnVyNnJPMXlFd0dJN2YraHBjSlNjOFBEWklmdzNBOUhaeXNB?=
 =?utf-8?B?N2M1RnNRU3NWblVHNnpOc25LdkhCVTJraWYrOTV3T01tNHVDdjhHbVlTb0Zs?=
 =?utf-8?B?dmpKam5UdGNFOG1JMHl5MzJ1RE04M1pQb3NQWm1kQ3pSdmdiVjFMcGVFREZE?=
 =?utf-8?B?SzNzczE5M3NlM3lZcGFIYldZd1ZPSUQ3Z0tXWGZzRmlpSmlheXJKTDNxWTNI?=
 =?utf-8?B?RU9nbzY1dUNjYTNuMG5Uc2JSNlZBVjRiV3V5cTlBQmJQUUMrQWZxaUNpbkFm?=
 =?utf-8?B?cFZ3NmtTbzN1ek84SlB4aStJbVlWVDZZYmdNU0NtZ3V5dmh2UkgrOXdyVE9N?=
 =?utf-8?B?VXRta2k1N1NJd2lJLzJ3bjF5TXVMU2pVLy9tZk9LRUFJR3JBYVd1U2dQSUFY?=
 =?utf-8?B?QWFBYmdueFRuTTRNTlF0UG55U0haVy9Sc1VlclpLTTNyUXF6cWIxZTZhdG9w?=
 =?utf-8?B?VWhOS1l0L3YxRnRjdWI5Q3ZackpucGJXT01Pb0NCWTdJTW5VdndPZU5PdEJq?=
 =?utf-8?B?L252cGw1VHNqcnF1Nm41RktwREsrNVN2QUFtVXJ4RWUrWFd6clVuMHNjeU9G?=
 =?utf-8?B?ZStZc0pSWVRYZ2dkaklwTUtoQ0VaY213TEplNGxjeWJicmFlVURHTHZRL2lN?=
 =?utf-8?B?Tms4VWZvOXlOM0J2dHVCOXFWVlNtdUN0TjkzdkxnUG1GTWZkK3VGS3RqWm1p?=
 =?utf-8?B?Z2F4Q1hucUs1OTgvUE01M2dZR2x5bEZPVUV6Mm1LZThWMkQ5eSt4WmhxeWhx?=
 =?utf-8?B?elQzOFdEWTZtdTgzZHRicGdZdGF0Vk4rVkFITU93TEtQQ1RXQWY0WmdLQXFH?=
 =?utf-8?B?TXlmV3FnTGJIWXpLTEw3NjN2U1FKMDJxUGUyMXJsNFc0Q3c4dDA0MkxoZ2ZN?=
 =?utf-8?B?UVhhKzBwcmlTbmVHRzhXODNncUhJL1BKWmE1bm54MmFyNmFpY3RaLyt0aXhE?=
 =?utf-8?B?ZTVnVnpDM0xienk2cUhCOTE5KzVnbnowM2ozc29Lbno2VWd2QmpqUUYzVjhv?=
 =?utf-8?B?SDFSRThNUHNtNVJ0SWJkdlpETWZTbnR6N0dpUGZDSFJuMWsvcnpMbXpXSU1M?=
 =?utf-8?B?MVhvUjJkWUcwdzJMdEwxWHNvbTJhMDhYMnNkOStZb29rMjA1cjFSNzVZZ05x?=
 =?utf-8?B?UXdpZ0M5cDFOMFZjb1FtYXUyeGc4djFRc3dzWlBiTXo2VldOTUNzNlQ5ZEJz?=
 =?utf-8?Q?KAVpB169ko7DFsCQsxcY6mKqf?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 59051f4a-e839-4c34-6f74-08daef35cbcf
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 15:59:15.9114
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: c/gj38l7AgNF5OFPO0/fRcXrJBCzrlW2XRt8HgGw71aRQk16bZVvetdPzXIEdBp/GzPmfCu+TwqO5d3u/HwuNQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9285

Rather than doing a separate hash walk (and then even using the vCPU
variant, which is to go away), do the up-pointer clearing right in
sh_unpin(), as an alternative to the (now further limited) enlisting on
a "free floating" list fragment. This utilizes the fact that such list
fragments are traversed only for multi-page shadows (in shadow_free()).
Furthermore, sh_terminate_list() is only a safeguard anyway; it isn't
needed in the common case (it actually does anything only in BIGMEM
configurations).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
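The per-page decision described above can be sketched in isolation. The
types below are hypothetical stand-ins (not Xen's real struct page_info
or page lists), reduced to just what the new logic touches: a multi-page
shadow still goes onto a list fragment of its own, while a single-page
L3 shadow gets its up pointer cleared directly when the domain is
leaving SHOPT_LINUX_L3_TOPLEVEL mode.

```c
#include <stdbool.h>

/* Hypothetical, simplified shadow-page bookkeeping. */
enum { SH_type_l3_64_shadow = 1, SH_type_l2_64_shadow = 2 };

struct shadow_page {
    int type;
    unsigned int size;    /* stands in for shadow_size(type) */
    unsigned long up;     /* up pointer / list-head bits */
    bool on_tmp_list;     /* stands in for page_list_add_tail(&tmp_list) */
};

/*
 * Sketch of the revised per-page step in sh_unpin()'s loop:
 * - multi-page shadows are stitched onto the temporary list fragment
 *   (which is the only case shadow_free() later traverses);
 * - a single-page L3 shadow has its up pointer zeroed right here when
 *   unpinning_l3 is set, replacing the old separate hash walk done by
 *   sh_reset_l3_up_pointers().
 */
static void unpin_one(struct shadow_page *sp, bool unpinning_l3)
{
    if ( sp->size > 1 )
        sp->on_tmp_list = true;
    else if ( sp->type == SH_type_l3_64_shadow && unpinning_l3 )
        sp->up = 0;
}
```

Under this sketch, sh_terminate_list() only ever needs to run on the
temporary list, i.e. only when a multi-page shadow was actually placed
on it, matching the `sz > 1` guard in the patch.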

--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -116,6 +116,9 @@ struct shadow_domain {
     /* OOS */
     bool_t oos_active;
 
+    /* Domain is in the process of leaving SHOPT_LINUX_L3_TOPLEVEL mode. */
+    bool unpinning_l3;
+
 #ifdef CONFIG_HVM
     /* Has this domain ever used HVMOP_pagetable_dying? */
     bool_t pagetable_dying_op;
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2306,29 +2306,6 @@ void shadow_prepare_page_type_change(str
 
 /**************************************************************************/
 
-/* Reset the up-pointers of every L3 shadow to 0.
- * This is called when l3 shadows stop being pinnable, to clear out all
- * the list-head bits so the up-pointer field is properly inititalised. */
-static int cf_check sh_clear_up_pointer(
-    struct vcpu *v, mfn_t smfn, mfn_t unused)
-{
-    mfn_to_page(smfn)->up = 0;
-    return 0;
-}
-
-void sh_reset_l3_up_pointers(struct vcpu *v)
-{
-    static const hash_vcpu_callback_t callbacks[SH_type_unused] = {
-        [SH_type_l3_64_shadow] = sh_clear_up_pointer,
-    };
-
-    HASH_CALLBACKS_CHECK(SHF_L3_64);
-    hash_vcpu_foreach(v, SHF_L3_64, callbacks, INVALID_MFN);
-}
-
-
-/**************************************************************************/
-
 static void sh_update_paging_modes(struct vcpu *v)
 {
     struct domain *d = v->domain;
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -960,6 +960,8 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
         }
         if ( l4count > 2 * d->max_vcpus )
         {
+            d->arch.paging.shadow.unpinning_l3 = true;
+
             /* Unpin all the pinned l3 tables, and don't pin any more. */
             page_list_for_each_safe(sp, t, &d->arch.paging.shadow.pinned_shadows)
             {
@@ -967,7 +969,8 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
                     sh_unpin(d, page_to_mfn(sp));
             }
             d->arch.paging.shadow.opt_flags &= ~SHOPT_LINUX_L3_TOPLEVEL;
-            sh_reset_l3_up_pointers(v);
+
+            d->arch.paging.shadow.unpinning_l3 = false;
         }
     }
 #endif
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -497,11 +497,6 @@ void shadow_blow_tables(struct domain *d
  */
 int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn);
 
-/* Reset the up-pointers of every L3 shadow to 0.
- * This is called when l3 shadows stop being pinnable, to clear out all
- * the list-head bits so the up-pointer field is properly inititalised. */
-void sh_reset_l3_up_pointers(struct vcpu *v);
-
 /******************************************************************************
  * Flags used in the return value of the shadow_set_lXe() functions...
  */
@@ -722,7 +717,7 @@ static inline void sh_unpin(struct domai
 {
     struct page_list_head tmp_list, *pin_list;
     struct page_info *sp, *next;
-    unsigned int i, head_type;
+    unsigned int i, head_type, sz;
 
     ASSERT(mfn_valid(smfn));
     sp = mfn_to_page(smfn);
@@ -734,20 +729,30 @@ static inline void sh_unpin(struct domai
         return;
     sp->u.sh.pinned = 0;
 
-    /* Cut the sub-list out of the list of pinned shadows,
-     * stitching it back into a list fragment of its own. */
+    sz = shadow_size(head_type);
+
+    /*
+     * Cut the sub-list out of the list of pinned shadows, stitching
+     * multi-page shadows back into a list fragment of their own.
+     */
     pin_list = &d->arch.paging.shadow.pinned_shadows;
     INIT_PAGE_LIST_HEAD(&tmp_list);
-    for ( i = 0; i < shadow_size(head_type); i++ )
+    for ( i = 0; i < sz; i++ )
     {
         ASSERT(sp->u.sh.type == head_type);
         ASSERT(!i || !sp->u.sh.head);
         next = page_list_next(sp, pin_list);
         page_list_del(sp, pin_list);
-        page_list_add_tail(sp, &tmp_list);
+        if ( sz > 1 )
+            page_list_add_tail(sp, &tmp_list);
+        else if ( head_type == SH_type_l3_64_shadow &&
+                  d->arch.paging.shadow.unpinning_l3 )
+            sp->up = 0;
         sp = next;
     }
-    sh_terminate_list(&tmp_list);
+
+    if ( sz > 1 )
+        sh_terminate_list(&tmp_list);
 
     sh_put_ref(d, smfn, 0);
 }



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 15:59:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 15:59:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.471987.732062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSeb-0000KC-Vf; Thu, 05 Jan 2023 15:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 471987.732062; Thu, 05 Jan 2023 15:59:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSeb-0000K3-Sx; Thu, 05 Jan 2023 15:59:41 +0000
Received: by outflank-mailman (input) for mailman id 471987;
 Thu, 05 Jan 2023 15:59:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDSea-0000Ja-9G
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 15:59:40 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2061.outbound.protection.outlook.com [40.107.247.61])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f4cccead-8d11-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 16:59:38 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8498.eurprd04.prod.outlook.com (2603:10a6:20b:341::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 15:59:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 15:59:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4cccead-8d11-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oKo9nrP3Em0BVszABkaGrsSeWIlHIN9fKTE80L45UsjljkAUV7OPHNltFpikUE4zE0JxybWBtjOn+wwRPRmw0tBTTY5SDgH8kEw42RMAIt5mtH2FbErvtCxtb7SIGS2G7Qaj/RhkxL4pskKXE3TvNjbtlo0XVYQUAhR1LNCZhcDD8vzQ6pX+TYuh5CcGcvtmKG5YD6MlklZwrZcXP/flDi83PpjdKtibdqLYkGnkNmVXmv5G+MLIJ6xQ2Eq8V3IyBwe1TazEjuTHI4sLJOcJOEEU7cw7QDCFqt5sjJeBaciD4kssI2Fy01bRKBYJGOpOa0RmiPYzZqhxFzMAJpelmw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ssTI6at9j+dmm0UVb4iktGaytHsP2Cy0qsceqf5wJgQ=;
 b=hn0DDszc2nBSsRuDm3v5cAL6I62a+2xk0rdW8rcNpdW4lUCKT/u62rNbHeWSLQKyKwga4pPk36HbkDTIHjhqUL+uQFW3o14rkxJj+TMKNUFrwRNbeSX2/lEx0W2Zc6LXe5b3z6/EwM0Hcaefm5uh3wWg26udhARbqN9nA58RsYKV9P7jA7xtExrVv+oPuCyGAqBVpODXPBsM4qv4LT8WTteGx7/18tosmtqMMVDlbYftKcZOzbWiggpDNS/lNdcSdsWmBhc5NBF9Xp3FGmZc+CvrdsUPh5Iq5TNohc48wS6tv6lJBDO41JvnhpXyDtMEkaI5YOB2hpYBW0KF+j292Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ssTI6at9j+dmm0UVb4iktGaytHsP2Cy0qsceqf5wJgQ=;
 b=2RBAg+xx8kgkN831po1jqJcIGZOcFlOsr5Ngz2uR5KO1Hz0ccHP46lMwbo2lzVk9W7BpzlDGfcbCSvbOgTEYUyXOoeeELal3nh1rCRhFhXno6m0eSUKesiGGV0t62mA1jh7ZRFKiu4AiGug1JIdfRqJ9gSh2356tCA6uvD/2PaKTuhD6JKHvXGmtlK5x0RgplTCpz/UzEuDJy/8tgxu7bLBnfQs0ZCMpx2WaPowhWqaO2+SQIxVc9kpGycx62kN1YY4XIi/iEkZ7ld6/BLJ4e9I92MvdF5GnBtMSTpKl9fY9sSwM2x7HDI+lrvyl53btvtWG5BLD6G0m6Mxt5iROLw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ab2a5149-4d5b-7edd-916b-b5a6b69e486b@suse.com>
Date: Thu, 5 Jan 2023 16:59:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 02/11] x86/shadow: convert sh_audit_flags()'es 1st parameter
 to domain
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0046.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8498:EE_
X-MS-Office365-Filtering-Correlation-Id: 1beecf0d-43f8-45ed-1238-08daef35d7ce
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TCLgu6YBl5RO5kw9pDjgt7TEbbZx1slc1MN/kYGf4ExPPho0prgIVK8xGtOP012TIJqjJTpHZ54IuP/aTZf/21uZhFQc9uKI/qeYOiojFW2/JWCl4eD6Zf5N81iPHLf9f+h7Fv1XWv6G3qeg8e9SphlwaD84QFnKbAHv6ZgDkxiqjN7hWuOqL6EwAOjpqsDxxYBdVRk6pQbdv8RizYUmD1mvOWdmlkl+ogSTSN3Gd8ZHbayx21D965abWYNSFQ6o72vp0Witw7kOpvY5TjYKbye6qjnzsqPTpv7InVZ4n866JojRxmjCCjfOX/S8K+r0bXV8uR5/CWhcVQuKrCK290cgoZMVVYGdGm8jEaffdSftqQ/pVKY7MNaJYEKmo9Frf4fnesl44MG0g5ALijeAY7a0f3m+Xk44CldKLAxZDToYAUVRnyuxgjirY/1ACtamsMovVP3f1eFmaU5UCs+JmJX+j3HATt7THd5QKeYHHf86KVeIQX+jkuP9DtVPMnKk+d4QyC+cOxKLOnjbwqxVMkR50NOfz/XJTOriqxzpvSB0OBLeMko/Dii9a+on3g3sLzulqpmaqlIh18KCU5Ro4B4mXc/24PwcTFN3B0Q3f8LKcB1eWkF2IghSiHFvpEOL4fjohMSFyhYLCengpap9JkBVPqKtUe3jOkfzoR5pdFAO2IiOctPKNeRLtDpjE62WmLLb5EhYoFAeer8IJ7/xiCkJ5bej4317kGcXyJXRzdJL6UBV/5+eQHUOk7a8z1QRat0yMqBXGXnVpVOrS0mO2yA/i/cmiXw/y/A/D+RQGEacthmhrKnaE3pQvqblt/Ke
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(346002)(366004)(396003)(376002)(136003)(39860400002)(451199015)(38100700002)(41300700001)(36756003)(2906002)(8936002)(5660300002)(83380400001)(186003)(6512007)(26005)(478600001)(6486002)(31686004)(6506007)(316002)(66556008)(6916009)(66476007)(31696002)(86362001)(8676002)(54906003)(4326008)(66946007)(2616005)(45980500001)(43740500002)(414714003)(473944003)(357404004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SnpCYXp6R2dIaUQzU2l4aHRhUjhkNStyTUZIQlc0RXE2TUJhZUtHeE44Qmdx?=
 =?utf-8?B?SFZhYXpVVlBobHIrYjdUN3A1cENKa2ROS1c4SkU2cE4yQ2Z5Mng2bEgxSFNw?=
 =?utf-8?B?a2NDVzNCUjdHQi8yVVhUSHZUdnN5WUNRTW8rWlV5N3k4VWtmaEpIaFlmMHVY?=
 =?utf-8?B?TWZJZjFQK0M4OENXZFNudTBSS0x6U290MHkwSndWZ3c2OThHZk16c0Rrc1U2?=
 =?utf-8?B?VlhwWUNoWVU0WjZrQU9Vc0NDbmUyU2ZBdUZQaVlLRUZpMEkrem53amU5dHZ6?=
 =?utf-8?B?VzkyNTg1WTRDWFdTemNZUUwwYWw2NEs4cG8wSE8xcjJhVUJaWGRBZjZvcU1k?=
 =?utf-8?B?dEh2b0FqRk15K0U5TU54ZkU2bzYyaE8yTm5IYWZRMTJENkZBWE1rVWF6OEFU?=
 =?utf-8?B?RTNuNTQ0V21tcGdGaUdpR2FmUy9KcVVpRkpxS0RzRVVPajNOaVJwSzg2STNR?=

Nothing in sh_audit_flags() is vCPU-specific, so have it take a
const struct domain * instead of a struct vcpu *.

With the local variable d introduced in sh_audit_l1_table(), convert
that function's other uses of v->domain as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3919,13 +3919,13 @@ static void cf_check sh_pagetable_dying(
     done = 1;                                                           \
 } while (0)
 
-static const char *sh_audit_flags(struct vcpu *v, int level,
+static const char *sh_audit_flags(const struct domain *d, int level,
                                   int gflags, int sflags)
 /* Common code for auditing flag bits */
 {
     if ( (sflags & _PAGE_PRESENT) && !(gflags & _PAGE_PRESENT) )
         return "shadow is present but guest is not present";
-    if ( (sflags & _PAGE_GLOBAL) && !is_hvm_vcpu(v) )
+    if ( (sflags & _PAGE_GLOBAL) && !is_hvm_domain(d) )
         return "global bit set in PV shadow";
     if ( level == 2 && (sflags & _PAGE_PSE) )
         return "PS bit set in shadow";
@@ -3948,6 +3948,7 @@ static const char *sh_audit_flags(struct
 
 int cf_check sh_audit_l1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
 {
+    struct domain *d = v->domain;
     guest_l1e_t *gl1e, *gp;
     shadow_l1e_t *sl1e;
     mfn_t mfn, gmfn, gl1mfn;
@@ -3964,7 +3965,7 @@ int cf_check sh_audit_l1_table(struct vc
     /* Out-of-sync l1 shadows can contain anything: just check the OOS hash */
     if ( page_is_out_of_sync(mfn_to_page(gl1mfn)) )
     {
-        oos_audit_hash_is_present(v->domain, gl1mfn);
+        oos_audit_hash_is_present(d, gl1mfn);
         return 0;
     }
 #endif
@@ -3994,7 +3995,7 @@ int cf_check sh_audit_l1_table(struct vc
         }
         else
         {
-            s = sh_audit_flags(v, 1, guest_l1e_get_flags(*gl1e),
+            s = sh_audit_flags(d, 1, guest_l1e_get_flags(*gl1e),
                                shadow_l1e_get_flags(*sl1e));
             if ( s ) AUDIT_FAIL(1, "%s", s);
 
@@ -4002,7 +4003,7 @@ int cf_check sh_audit_l1_table(struct vc
             {
                 gfn = guest_l1e_get_gfn(*gl1e);
                 mfn = shadow_l1e_get_mfn(*sl1e);
-                gmfn = get_gfn_query_unlocked(v->domain, gfn_x(gfn), &p2mt);
+                gmfn = get_gfn_query_unlocked(d, gfn_x(gfn), &p2mt);
                 if ( !p2m_is_grant(p2mt) && !mfn_eq(gmfn, mfn) )
                     AUDIT_FAIL(1, "bad translation: gfn %" SH_PRI_gfn
                                " --> %" PRI_mfn " != mfn %" PRI_mfn,
@@ -4064,8 +4065,8 @@ int cf_check sh_audit_l2_table(struct vc
     gl2e = gp = map_domain_page(gl2mfn);
     SHADOW_FOREACH_L2E(sl2mfn, sl2e, &gl2e, done, d, {
 
-        s = sh_audit_flags(v, 2, guest_l2e_get_flags(*gl2e),
-                            shadow_l2e_get_flags(*sl2e));
+        s = sh_audit_flags(d, 2, guest_l2e_get_flags(*gl2e),
+                           shadow_l2e_get_flags(*sl2e));
         if ( s ) AUDIT_FAIL(2, "%s", s);
 
         if ( SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES_MFNS )
@@ -4116,8 +4117,8 @@ int cf_check sh_audit_l3_table(struct vc
     gl3e = gp = map_domain_page(gl3mfn);
     SHADOW_FOREACH_L3E(sl3mfn, sl3e, &gl3e, done, {
 
-        s = sh_audit_flags(v, 3, guest_l3e_get_flags(*gl3e),
-                            shadow_l3e_get_flags(*sl3e));
+        s = sh_audit_flags(d, 3, guest_l3e_get_flags(*gl3e),
+                           shadow_l3e_get_flags(*sl3e));
         if ( s ) AUDIT_FAIL(3, "%s", s);
 
         if ( SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES_MFNS )
@@ -4164,8 +4165,8 @@ int cf_check sh_audit_l4_table(struct vc
     gl4e = gp = map_domain_page(gl4mfn);
     SHADOW_FOREACH_L4E(sl4mfn, sl4e, &gl4e, done, d,
     {
-        s = sh_audit_flags(v, 4, guest_l4e_get_flags(*gl4e),
-                            shadow_l4e_get_flags(*sl4e));
+        s = sh_audit_flags(d, 4, guest_l4e_get_flags(*gl4e),
+                           shadow_l4e_get_flags(*sl4e));
         if ( s ) AUDIT_FAIL(4, "%s", s);
 
         if ( SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES_MFNS )



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 15:59:59 2023
Message-ID: <9096ecd9-b3ec-4d06-8641-f8d6cef19027@suse.com>
Date: Thu, 5 Jan 2023 16:59:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 03/11] x86/shadow: drop hash_vcpu_foreach()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The domain-based variant is easily usable by shadow_audit_tables(); all
that's needed is conversion of the callback functions to take a
struct domain * instead of a struct vcpu *.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1640,59 +1640,11 @@ bool shadow_hash_delete(struct domain *d
     return true;
 }
 
-typedef int (*hash_vcpu_callback_t)(struct vcpu *v, mfn_t smfn, mfn_t other_mfn);
 typedef int (*hash_domain_callback_t)(struct domain *d, mfn_t smfn, mfn_t other_mfn);
 
 #define HASH_CALLBACKS_CHECK(mask) \
     BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
 
-static void hash_vcpu_foreach(struct vcpu *v, unsigned int callback_mask,
-                              const hash_vcpu_callback_t callbacks[],
-                              mfn_t callback_mfn)
-/* Walk the hash table looking at the types of the entries and
- * calling the appropriate callback function for each entry.
- * The mask determines which shadow types we call back for, and the array
- * of callbacks tells us which function to call.
- * Any callback may return non-zero to let us skip the rest of the scan.
- *
- * WARNING: Callbacks MUST NOT add or remove hash entries unless they
- * then return non-zero to terminate the scan. */
-{
-    int i, done = 0;
-    struct domain *d = v->domain;
-    struct page_info *x;
-
-    ASSERT(paging_locked_by_me(d));
-
-    /* Can be called via p2m code &c after shadow teardown. */
-    if ( unlikely(!d->arch.paging.shadow.hash_table) )
-        return;
-
-    /* Say we're here, to stop hash-lookups reordering the chains */
-    ASSERT(d->arch.paging.shadow.hash_walking == 0);
-    d->arch.paging.shadow.hash_walking = 1;
-
-    for ( i = 0; i < SHADOW_HASH_BUCKETS; i++ )
-    {
-        /* WARNING: This is not safe against changes to the hash table.
-         * The callback *must* return non-zero if it has inserted or
-         * deleted anything from the hash (lookups are OK, though). */
-        for ( x = d->arch.paging.shadow.hash_table[i]; x; x = next_shadow(x) )
-        {
-            if ( callback_mask & (1 << x->u.sh.type) )
-            {
-                ASSERT(x->u.sh.type <= SH_type_max_shadow);
-                ASSERT(callbacks[x->u.sh.type] != NULL);
-                done = callbacks[x->u.sh.type](v, page_to_mfn(x),
-                                               callback_mfn);
-                if ( done ) break;
-            }
-        }
-        if ( done ) break;
-    }
-    d->arch.paging.shadow.hash_walking = 0;
-}
-
 static void hash_domain_foreach(struct domain *d,
                                 unsigned int callback_mask,
                                 const hash_domain_callback_t callbacks[],
@@ -3215,7 +3167,7 @@ int shadow_domctl(struct domain *d,
 void shadow_audit_tables(struct vcpu *v)
 {
     /* Dispatch table for getting per-type functions */
-    static const hash_vcpu_callback_t callbacks[SH_type_unused] = {
+    static const hash_domain_callback_t callbacks[SH_type_unused] = {
 #if SHADOW_AUDIT & (SHADOW_AUDIT_ENTRIES | SHADOW_AUDIT_ENTRIES_FULL)
 # ifdef CONFIG_HVM
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l1_table, 2),
@@ -3262,7 +3214,7 @@ void shadow_audit_tables(struct vcpu *v)
     HASH_CALLBACKS_CHECK(SHADOW_AUDIT & (SHADOW_AUDIT_ENTRIES |
                                          SHADOW_AUDIT_ENTRIES_FULL)
                          ? SHF_page_type_mask : 0);
-    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN);
+    hash_domain_foreach(v->domain, mask, callbacks, INVALID_MFN);
 }
 
 #ifdef CONFIG_PV
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -326,32 +326,32 @@ static void sh_audit_gw(struct vcpu *v,
     if ( mfn_valid(gw->l4mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l4mfn,
                                                 SH_type_l4_shadow))) )
-        (void) sh_audit_l4_table(v, smfn, INVALID_MFN);
+        sh_audit_l4_table(d, smfn, INVALID_MFN);
     if ( mfn_valid(gw->l3mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l3mfn,
                                                 SH_type_l3_shadow))) )
-        (void) sh_audit_l3_table(v, smfn, INVALID_MFN);
+        sh_audit_l3_table(d, smfn, INVALID_MFN);
 #endif /* PAE or 64... */
     if ( mfn_valid(gw->l2mfn) )
     {
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2_shadow))) )
-            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
+            sh_audit_l2_table(d, smfn, INVALID_MFN);
 #if GUEST_PAGING_LEVELS >= 4 /* 32-bit PV only */
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2h_shadow))) )
-            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
+            sh_audit_l2_table(d, smfn, INVALID_MFN);
 #endif
     }
     if ( mfn_valid(gw->l1mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l1mfn,
                                                 SH_type_l1_shadow))) )
-        (void) sh_audit_l1_table(v, smfn, INVALID_MFN);
+        sh_audit_l1_table(d, smfn, INVALID_MFN);
     else if ( (guest_l2e_get_flags(gw->l2e) & _PAGE_PRESENT)
               && (guest_l2e_get_flags(gw->l2e) & _PAGE_PSE)
               && mfn_valid(
               (smfn = get_fl1_shadow_status(d, guest_l2e_get_gfn(gw->l2e)))) )
-        (void) sh_audit_fl1_table(v, smfn, INVALID_MFN);
+        sh_audit_fl1_table(d, smfn, INVALID_MFN);
 #endif /* SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES */
 }
 
@@ -3946,9 +3946,8 @@ static const char *sh_audit_flags(const
     return NULL;
 }
 
-int cf_check sh_audit_l1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
+int cf_check sh_audit_l1_table(struct domain *d, mfn_t sl1mfn, mfn_t x)
 {
-    struct domain *d = v->domain;
     guest_l1e_t *gl1e, *gp;
     shadow_l1e_t *sl1e;
     mfn_t mfn, gmfn, gl1mfn;
@@ -4015,7 +4014,7 @@ int cf_check sh_audit_l1_table(struct vc
     return done;
 }
 
-int cf_check sh_audit_fl1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
+int cf_check sh_audit_fl1_table(struct domain *d, mfn_t sl1mfn, mfn_t x)
 {
     guest_l1e_t *gl1e, e;
     shadow_l1e_t *sl1e;
@@ -4041,9 +4040,8 @@ int cf_check sh_audit_fl1_table(struct v
     return 0;
 }
 
-int cf_check sh_audit_l2_table(struct vcpu *v, mfn_t sl2mfn, mfn_t x)
+int cf_check sh_audit_l2_table(struct domain *d, mfn_t sl2mfn, mfn_t x)
 {
-    struct domain *d = v->domain;
     guest_l2e_t *gl2e, *gp;
     shadow_l2e_t *sl2e;
     mfn_t mfn, gmfn, gl2mfn;
@@ -4093,9 +4091,8 @@ int cf_check sh_audit_l2_table(struct vc
 }
 
 #if GUEST_PAGING_LEVELS >= 4
-int cf_check sh_audit_l3_table(struct vcpu *v, mfn_t sl3mfn, mfn_t x)
+int cf_check sh_audit_l3_table(struct domain *d, mfn_t sl3mfn, mfn_t x)
 {
-    struct domain *d = v->domain;
     guest_l3e_t *gl3e, *gp;
     shadow_l3e_t *sl3e;
     mfn_t mfn, gmfn, gl3mfn;
@@ -4141,9 +4138,8 @@ int cf_check sh_audit_l3_table(struct vc
     return 0;
 }
 
-int cf_check sh_audit_l4_table(struct vcpu *v, mfn_t sl4mfn, mfn_t x)
+int cf_check sh_audit_l4_table(struct domain *d, mfn_t sl4mfn, mfn_t x)
 {
-    struct domain *d = v->domain;
     guest_l4e_t *gl4e, *gp;
     shadow_l4e_t *sl4e;
     mfn_t mfn, gmfn, gl4mfn;
--- a/xen/arch/x86/mm/shadow/multi.h
+++ b/xen/arch/x86/mm/shadow/multi.h
@@ -83,19 +83,19 @@ SHADOW_INTERNAL_NAME(sh_remove_l3_shadow
 #if SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_l1_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl1mfn, mfn_t x);
+    (struct domain *d, mfn_t sl1mfn, mfn_t x);
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_fl1_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl1mfn, mfn_t x);
+    (struct domain *d, mfn_t sl1mfn, mfn_t x);
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_l2_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl2mfn, mfn_t x);
+    (struct domain *d, mfn_t sl2mfn, mfn_t x);
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_l3_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl3mfn, mfn_t x);
+    (struct domain *d, mfn_t sl3mfn, mfn_t x);
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_l4_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl4mfn, mfn_t x);
+    (struct domain *d, mfn_t sl4mfn, mfn_t x);
 #endif
 
 extern const struct paging_mode



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:00:33 2023
Message-ID: <46233ea3-9bce-b23f-3e6c-887b2c21ee71@suse.com>
Date: Thu, 5 Jan 2023 17:00:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 04/11] x86/shadow: rename hash_domain_foreach()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d2f29f5c-f0f0-4bd8-8ee4-08daef35f633
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 16:00:27.1413
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB8046

The "domain" in hash_domain_foreach() and hash_domain_callback_t has become meaningless; drop it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
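
The renamed hash_foreach() keeps the same shape as before: a per-type dispatch
table walked under a mask that selects which shadow types to visit. A minimal
standalone sketch of that pattern (the types, callbacks, and flat entry list
here are hypothetical stand-ins, not the Xen code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical shadow types standing in for Xen's SH_type_* values. */
enum { TYPE_A, TYPE_B, TYPE_C, TYPE_COUNT };

typedef int (*hash_callback_t)(int entry);

static int cb_a(int entry) { return entry + 1; }
static int cb_b(int entry) { return entry + 2; }

/*
 * Walk a flat list of (type, entry) pairs, invoking the per-type
 * callback for each entry whose type bit is set in the mask -- the
 * same dispatch shape hash_foreach() uses over the hash table.
 */
static int hash_foreach(unsigned int mask,
                        const hash_callback_t callbacks[TYPE_COUNT],
                        const int types[], const int entries[], size_t n)
{
    int sum = 0;

    for ( size_t i = 0; i < n; i++ )
    {
        if ( !(mask & (1u << types[i])) )
            continue;                    /* type not selected by the mask */
        assert(callbacks[types[i]]);     /* mask and table must agree */
        sum += callbacks[types[i]](entries[i]);
    }

    return sum;
}

static int demo(void)
{
    static const hash_callback_t callbacks[TYPE_COUNT] = {
        [TYPE_A] = cb_a,
        [TYPE_B] = cb_b,
    };
    const int types[]   = { TYPE_A, TYPE_C, TYPE_B };
    const int entries[] = { 10, 20, 30 };

    /* TYPE_C's bit is clear, so its entry is skipped: 11 + 32. */
    return hash_foreach((1u << TYPE_A) | (1u << TYPE_B),
                        callbacks, types, entries, 3);
}
```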

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1640,15 +1640,15 @@ bool shadow_hash_delete(struct domain *d
     return true;
 }
 
-typedef int (*hash_domain_callback_t)(struct domain *d, mfn_t smfn, mfn_t other_mfn);
+typedef int (*hash_callback_t)(struct domain *d, mfn_t smfn, mfn_t other_mfn);
 
 #define HASH_CALLBACKS_CHECK(mask) \
     BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
 
-static void hash_domain_foreach(struct domain *d,
-                                unsigned int callback_mask,
-                                const hash_domain_callback_t callbacks[],
-                                mfn_t callback_mfn)
+static void hash_foreach(struct domain *d,
+                         unsigned int callback_mask,
+                         const hash_callback_t callbacks[],
+                         mfn_t callback_mfn)
 /* Walk the hash table looking at the types of the entries and
  * calling the appropriate callback function for each entry.
  * The mask determines which shadow types we call back for, and the array
@@ -1784,7 +1784,7 @@ int sh_remove_write_access(struct domain
                            unsigned long fault_addr)
 {
     /* Dispatch table for getting per-type functions */
-    static const hash_domain_callback_t callbacks[SH_type_unused] = {
+    static const hash_callback_t callbacks[SH_type_unused] = {
 #ifdef CONFIG_HVM
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_write_access_from_l1, 2),
         [SH_type_fl1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_write_access_from_l1, 2),
@@ -1969,7 +1969,7 @@ int sh_remove_write_access(struct domain
     else
         perfc_incr(shadow_writeable_bf);
     HASH_CALLBACKS_CHECK(SHF_L1_ANY | SHF_FL1_ANY);
-    hash_domain_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
+    hash_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
 
     /* If that didn't catch the mapping, then there's some non-pagetable
      * mapping -- ioreq page, grant mapping, &c. */
@@ -1998,7 +1998,7 @@ int sh_remove_all_mappings(struct domain
     struct page_info *page = mfn_to_page(gmfn);
 
     /* Dispatch table for getting per-type functions */
-    static const hash_domain_callback_t callbacks[SH_type_unused] = {
+    static const hash_callback_t callbacks[SH_type_unused] = {
 #ifdef CONFIG_HVM
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 2),
         [SH_type_fl1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 2),
@@ -2024,7 +2024,7 @@ int sh_remove_all_mappings(struct domain
     /* Brute-force search of all the shadows, by walking the hash */
     perfc_incr(shadow_mappings_bf);
     HASH_CALLBACKS_CHECK(SHF_L1_ANY | SHF_FL1_ANY);
-    hash_domain_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
+    hash_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
 
     /* If that didn't catch the mapping, something is very wrong */
     if ( !sh_check_page_has_no_refs(page) )
@@ -2132,7 +2132,7 @@ void sh_remove_shadows(struct domain *d,
 
     /* Dispatch table for getting per-type functions: each level must
      * be called with the function to remove a lower-level shadow. */
-    static const hash_domain_callback_t callbacks[SH_type_unused] = {
+    static const hash_callback_t callbacks[SH_type_unused] = {
 #ifdef CONFIG_HVM
         [SH_type_l2_32_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 2),
         [SH_type_l2_pae_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 3),
@@ -2177,9 +2177,9 @@ void sh_remove_shadows(struct domain *d,
 
     /*
      * Lower-level shadows need to be excised from upper-level shadows. This
-     * call to hash_domain_foreach() looks dangerous but is in fact OK: each
-     * call will remove at most one shadow, and terminate immediately when
-     * it does remove it, so we never walk the hash after doing a deletion.
+     * call to hash_foreach() looks dangerous but is in fact OK: each call
+     * will remove at most one shadow, and terminate immediately when it does
+     * remove it, so we never walk the hash after doing a deletion.
      */
 #define DO_UNSHADOW(_type) do {                                         \
     t = (_type);                                                        \
@@ -2203,7 +2203,7 @@ void sh_remove_shadows(struct domain *d,
          (pg->shadow_flags & (1 << t)) )                                \
     {                                                                   \
         HASH_CALLBACKS_CHECK(SHF_page_type_mask);                       \
-        hash_domain_foreach(d, masks[t], callbacks, smfn);              \
+        hash_foreach(d, masks[t], callbacks, smfn);                     \
     }                                                                   \
 } while (0)
 
@@ -3167,7 +3167,7 @@ int shadow_domctl(struct domain *d,
 void shadow_audit_tables(struct vcpu *v)
 {
     /* Dispatch table for getting per-type functions */
-    static const hash_domain_callback_t callbacks[SH_type_unused] = {
+    static const hash_callback_t callbacks[SH_type_unused] = {
 #if SHADOW_AUDIT & (SHADOW_AUDIT_ENTRIES | SHADOW_AUDIT_ENTRIES_FULL)
 # ifdef CONFIG_HVM
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l1_table, 2),
@@ -3214,7 +3214,7 @@ void shadow_audit_tables(struct vcpu *v)
     HASH_CALLBACKS_CHECK(SHADOW_AUDIT & (SHADOW_AUDIT_ENTRIES |
                                          SHADOW_AUDIT_ENTRIES_FULL)
                          ? SHF_page_type_mask : 0);
-    hash_domain_foreach(v->domain, mask, callbacks, INVALID_MFN);
+    hash_foreach(v->domain, mask, callbacks, INVALID_MFN);
 }
 
 #ifdef CONFIG_PV



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:04:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472013.732096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSj0-0003hg-96; Thu, 05 Jan 2023 16:04:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472013.732096; Thu, 05 Jan 2023 16:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSj0-0003hZ-64; Thu, 05 Jan 2023 16:04:14 +0000
Received: by outflank-mailman (input) for mailman id 472013;
 Thu, 05 Jan 2023 16:04:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDSiy-0003hT-GS
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:04:12 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2085.outbound.protection.outlook.com [40.107.241.85])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 97d2e2b6-8d12-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 17:04:11 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB8046.eurprd04.prod.outlook.com (2603:10a6:102:ba::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 16:04:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 16:04:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97d2e2b6-8d12-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <76cc0b4a-27ca-21b7-841f-315f31833762@suse.com>
Date: Thu, 5 Jan 2023 17:04:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 05/11] x86/shadow: move bogus HVM checks in
 sh_pagetable_dying()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FRYP281CA0003.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::13)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB8046:EE_
X-MS-Office365-Filtering-Correlation-Id: 2ad15354-d748-4b28-5a12-08daef367af6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ad15354-d748-4b28-5a12-08daef367af6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 16:04:09.9084
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB8046

These is_hvm_domain() checks should perhaps have been dropped right in
2fb2dee1ac62 ("x86/mm: pagetable_dying() is HVM-only"). Convert both to
assertions, noting that the one in the 3-level variant of the function in
particular comes too late anyway: the very first thing done there is an
access to the HVM part of a union.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
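
The ordering issue can be illustrated with a toy model; struct vcpu, its
hvm/pv union, and get_guest_cr3() below are simplified stand-ins for
illustration, not the real Xen structures:

```c
#include <assert.h>

/* Simplified, hypothetical stand-ins for the vcpu arch union. */
struct hvm_state { unsigned long guest_cr[5]; };
struct pv_state  { unsigned long ctrlreg[5]; };

struct vcpu {
    int is_hvm;
    union {
        struct hvm_state hvm;   /* meaningful only for HVM vcpus */
        struct pv_state  pv;    /* meaningful only for PV vcpus  */
    } arch;
};

/*
 * Checking is_hvm only after v->arch.hvm has been read (as the old
 * code effectively did) cannot catch anything: the union member has
 * already been interpreted under the wrong view.  Assert the
 * precondition first, then touch the union.
 */
static unsigned long get_guest_cr3(const struct vcpu *v)
{
    assert(v->is_hvm);               /* precondition, checked up front */
    return v->arch.hvm.guest_cr[3];  /* only now access the HVM member */
}

static unsigned long demo(void)
{
    struct vcpu v = { .is_hvm = 1 };

    v.arch.hvm.guest_cr[3] = 0x1000;
    return get_guest_cr3(&v);
}
```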

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3780,6 +3780,8 @@ static void cf_check sh_pagetable_dying(
     unsigned long l3gfn;
     mfn_t l3mfn;
 
+    ASSERT(is_hvm_domain(d));
+
     gcr3 = v->arch.hvm.guest_cr[3];
     /* fast path: the pagetable belongs to the current context */
     if ( gcr3 == gpa )
@@ -3822,7 +3824,7 @@ static void cf_check sh_pagetable_dying(
                    : shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l2_pae_shadow);
         }
 
-        if ( mfn_valid(smfn) && is_hvm_domain(d) )
+        if ( mfn_valid(smfn) )
         {
             gmfn = _mfn(mfn_to_page(smfn)->v.sh.back);
             mfn_to_page(gmfn)->pagetable_dying = true;
@@ -3854,6 +3856,8 @@ static void cf_check sh_pagetable_dying(
     mfn_t smfn, gmfn;
     p2m_type_t p2mt;
 
+    ASSERT(is_hvm_domain(d));
+
     gmfn = get_gfn_query(d, _gfn(gpa >> PAGE_SHIFT), &p2mt);
     paging_lock(d);
 
@@ -3863,7 +3867,7 @@ static void cf_check sh_pagetable_dying(
     smfn = shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l4_64_shadow);
 #endif
 
-    if ( mfn_valid(smfn) && is_hvm_domain(d) )
+    if ( mfn_valid(smfn) )
     {
         mfn_to_page(gmfn)->pagetable_dying = true;
         shadow_unhook_mappings(d, smfn, 1/* user pages only */);



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:04:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:04:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472017.732106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSjS-00048a-HN; Thu, 05 Jan 2023 16:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472017.732106; Thu, 05 Jan 2023 16:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSjS-00048T-EU; Thu, 05 Jan 2023 16:04:42 +0000
Received: by outflank-mailman (input) for mailman id 472017;
 Thu, 05 Jan 2023 16:04:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDSjQ-0003hT-Bx
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:04:40 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2059.outbound.protection.outlook.com [40.107.241.59])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a844ed62-8d12-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 17:04:39 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB8046.eurprd04.prod.outlook.com (2603:10a6:102:ba::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 16:04:38 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 16:04:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a844ed62-8d12-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ca3e0c70-ae80-4c21-97f7-36525229074b@suse.com>
Date: Thu, 5 Jan 2023 17:04:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 06/11] x86/shadow: drop a few uses of mfn_valid()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FRYP281CA0002.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::12)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB8046:EE_
X-MS-Office365-Filtering-Correlation-Id: 15b6a024-c967-437e-adc4-08daef368beb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 15b6a024-c967-437e-adc4-08daef368beb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 16:04:38.2034
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB8046

v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[], and
v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]}, as well as the
hash table, are all only ever written with valid MFNs or INVALID_MFN.
Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
these arrays.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
There are many more uses that can likely be replaced, but I think we're
better off doing this piecemeal.
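
The idea can be sketched in isolation; mfn_t, mfn_valid(), and
slot_in_use() below are simplified stand-ins for illustration only:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified, hypothetical stand-ins for Xen's MFN handling. */
typedef struct { unsigned long m; } mfn_t;
#define INVALID_MFN ((mfn_t){ ~0UL })

static bool mfn_eq(mfn_t a, mfn_t b) { return a.m == b.m; }

/*
 * mfn_valid() has to consult frame-table bounds (and more than that
 * in real Xen), so it is comparatively expensive.
 */
static unsigned long max_page = 0x100000;
static bool mfn_valid(mfn_t m) { return m.m < max_page; }

/*
 * For a slot that is only ever written with a valid MFN or
 * INVALID_MFN, a single comparison is a sufficient emptiness test --
 * no need to range-check via mfn_valid().
 */
static bool slot_in_use(mfn_t slot)
{
    return !mfn_eq(slot, INVALID_MFN);
}
```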

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -171,7 +171,7 @@ static void sh_oos_audit(struct domain *
         for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
         {
             mfn_t *oos = v->arch.paging.shadow.oos;
-            if ( !mfn_valid(oos[idx]) )
+            if ( mfn_eq(oos[idx], INVALID_MFN) )
                 continue;
 
             expected_idx = mfn_x(oos[idx]) % SHADOW_OOS_PAGES;
@@ -327,8 +327,7 @@ void oos_fixup_add(struct domain *d, mfn
             int i;
             for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
             {
-                if ( mfn_valid(oos_fixup[idx].smfn[i])
-                     && mfn_eq(oos_fixup[idx].smfn[i], smfn)
+                if ( mfn_eq(oos_fixup[idx].smfn[i], smfn)
                      && (oos_fixup[idx].off[i] == off) )
                     return;
             }
@@ -461,7 +460,7 @@ static void oos_hash_add(struct vcpu *v,
     idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
     oidx = idx;
 
-    if ( mfn_valid(oos[idx])
+    if ( !mfn_eq(oos[idx], INVALID_MFN)
          && (mfn_x(oos[idx]) % SHADOW_OOS_PAGES) == idx )
     {
         /* Punt the current occupant into the next slot */
@@ -470,8 +469,8 @@ static void oos_hash_add(struct vcpu *v,
         swap = 1;
         idx = (idx + 1) % SHADOW_OOS_PAGES;
     }
-    if ( mfn_valid(oos[idx]) )
-   {
+    if ( !mfn_eq(oos[idx], INVALID_MFN) )
+    {
         /* Crush the current occupant. */
         _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
         perfc_incr(shadow_unsync_evict);
@@ -607,7 +606,7 @@ void sh_resync_all(struct vcpu *v, int s
 
     /* First: resync all of this vcpu's oos pages */
     for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
-        if ( mfn_valid(oos[idx]) )
+        if ( !mfn_eq(oos[idx], INVALID_MFN) )
         {
             /* Write-protect and sync contents */
             _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
@@ -630,7 +629,7 @@ void sh_resync_all(struct vcpu *v, int s
 
         for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
         {
-            if ( !mfn_valid(oos[idx]) )
+            if ( mfn_eq(oos[idx], INVALID_MFN) )
                 continue;
 
             if ( skip )
@@ -2187,7 +2186,7 @@ void sh_remove_shadows(struct domain *d,
          !(pg->shadow_flags & (1 << t)) )                               \
         break;                                                          \
     smfn = shadow_hash_lookup(d, mfn_x(gmfn), t);                       \
-    if ( unlikely(!mfn_valid(smfn)) )                                   \
+    if ( unlikely(mfn_eq(smfn, INVALID_MFN)) )                          \
     {                                                                   \
         printk(XENLOG_G_ERR "gmfn %"PRI_mfn" has flags %#x"             \
                " but no type-%#x shadow\n",                             \
@@ -2755,7 +2754,7 @@ void shadow_teardown(struct domain *d, b
             int i;
             mfn_t *oos_snapshot = v->arch.paging.shadow.oos_snapshot;
             for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
-                if ( mfn_valid(oos_snapshot[i]) )
+                if ( !mfn_eq(oos_snapshot[i], INVALID_MFN) )
                 {
                     shadow_free(d, oos_snapshot[i]);
                     oos_snapshot[i] = INVALID_MFN;
@@ -2938,7 +2937,7 @@ static int shadow_one_bit_disable(struct
                 int i;
                 mfn_t *oos_snapshot = v->arch.paging.shadow.oos_snapshot;
                 for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
-                    if ( mfn_valid(oos_snapshot[i]) )
+                    if ( !mfn_eq(oos_snapshot[i], INVALID_MFN) )
                     {
                         shadow_free(d, oos_snapshot[i]);
                         oos_snapshot[i] = INVALID_MFN;
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -110,7 +110,7 @@ get_fl1_shadow_status(struct domain *d,
 /* Look for FL1 shadows in the hash table */
 {
     mfn_t smfn = shadow_hash_lookup(d, gfn_x(gfn), SH_type_fl1_shadow);
-    ASSERT(!mfn_valid(smfn) || mfn_to_page(smfn)->u.sh.head);
+    ASSERT(mfn_eq(smfn, INVALID_MFN) || mfn_to_page(smfn)->u.sh.head);
     return smfn;
 }
 
@@ -2680,7 +2680,7 @@ static int cf_check sh_page_fault(
                 mfn_t smfn = pagetable_get_mfn(
                                  v->arch.paging.shadow.shadow_table[i]);
 
-                if ( mfn_valid(smfn) && (mfn_x(smfn) != 0) )
+                if ( mfn_x(smfn) )
                 {
                     used |= (mfn_to_page(smfn)->v.sh.back == mfn_x(gmfn));
 
@@ -3824,7 +3824,7 @@ static void cf_check sh_pagetable_dying(
                    : shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l2_pae_shadow);
         }
 
-        if ( mfn_valid(smfn) )
+        if ( !mfn_eq(smfn, INVALID_MFN) )
         {
             gmfn = _mfn(mfn_to_page(smfn)->v.sh.back);
             mfn_to_page(gmfn)->pagetable_dying = true;
@@ -3867,7 +3867,7 @@ static void cf_check sh_pagetable_dying(
     smfn = shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l4_64_shadow);
 #endif
 
-    if ( mfn_valid(smfn) )
+    if ( !mfn_eq(smfn, INVALID_MFN) )
     {
         mfn_to_page(gmfn)->pagetable_dying = true;
         shadow_unhook_mappings(d, smfn, 1/* user pages only */);
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -770,8 +770,10 @@ get_shadow_status(struct domain *d, mfn_
 /* Look for shadows in the hash table */
 {
     mfn_t smfn = shadow_hash_lookup(d, mfn_x(gmfn), shadow_type);
-    ASSERT(!mfn_valid(smfn) || mfn_to_page(smfn)->u.sh.head);
+
+    ASSERT(mfn_eq(smfn, INVALID_MFN) || mfn_to_page(smfn)->u.sh.head);
     perfc_incr(shadow_get_shadow_status);
+
     return smfn;
 }
 



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:05:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:05:07 +0000
Message-ID: <2743393d-852d-b385-9eba-e22806b1c4af@suse.com>
Date: Thu, 5 Jan 2023 17:05:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 07/11] x86/shadow: L2H shadow type is PV32-only
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

As with the various HVM-only types, save a little bit of code by suitably
"masking" this type out when !PV32.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wasn't really sure whether it would be worthwhile to also update the
"#else" part of shadow_size(). Doing so would be a little tricky, as the
type to return 0 for currently has no name; I'd need to move the #undef
down to allow for that. Thoughts?

In the 4-level variant of SHADOW_FOREACH_L2E() I was strongly inclined to
also pull the entire loop-invariant part of the condition (part of which
needs touching here anyway) out of the loop. But in the end I think that
is better done as a separate change.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1740,9 +1740,11 @@ void sh_destroy_shadow(struct domain *d,
     case SH_type_fl1_64_shadow:
         SHADOW_INTERNAL_NAME(sh_destroy_l1_shadow, 4)(d, smfn);
         break;
+#ifdef CONFIG_PV32
     case SH_type_l2h_64_shadow:
         ASSERT(is_pv_32bit_domain(d));
         /* Fall through... */
+#endif
     case SH_type_l2_64_shadow:
         SHADOW_INTERNAL_NAME(sh_destroy_l2_shadow, 4)(d, smfn);
         break;
@@ -2099,7 +2101,9 @@ static int sh_remove_shadow_via_pointer(
 #endif
     case SH_type_l1_64_shadow:
     case SH_type_l2_64_shadow:
+#ifdef CONFIG_PV32
     case SH_type_l2h_64_shadow:
+#endif
     case SH_type_l3_64_shadow:
     case SH_type_l4_64_shadow:
         SHADOW_INTERNAL_NAME(sh_clear_shadow_entry, 4)(d, vaddr, pmfn);
@@ -2137,7 +2141,9 @@ void sh_remove_shadows(struct domain *d,
         [SH_type_l2_pae_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 3),
 #endif
         [SH_type_l2_64_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 4),
+#ifdef CONFIG_PV32
         [SH_type_l2h_64_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 4),
+#endif
         [SH_type_l3_64_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l2_shadow, 4),
         [SH_type_l4_64_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l3_shadow, 4),
     };
@@ -2150,7 +2156,9 @@ void sh_remove_shadows(struct domain *d,
 #endif
         [SH_type_l1_64_shadow] = SHF_L2H_64 | SHF_L2_64,
         [SH_type_l2_64_shadow] = SHF_L3_64,
+#ifdef CONFIG_PV32
         [SH_type_l2h_64_shadow] = SHF_L3_64,
+#endif
         [SH_type_l3_64_shadow] = SHF_L4_64,
     };
 
@@ -2214,7 +2222,9 @@ void sh_remove_shadows(struct domain *d,
 #endif
     DO_UNSHADOW(SH_type_l4_64_shadow);
     DO_UNSHADOW(SH_type_l3_64_shadow);
+#ifdef CONFIG_PV32
     DO_UNSHADOW(SH_type_l2h_64_shadow);
+#endif
     DO_UNSHADOW(SH_type_l2_64_shadow);
     DO_UNSHADOW(SH_type_l1_64_shadow);
 
@@ -3179,7 +3189,9 @@ void shadow_audit_tables(struct vcpu *v)
         [SH_type_l1_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l1_table, 4),
         [SH_type_fl1_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_fl1_table, 4),
         [SH_type_l2_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l2_table, 4),
+# ifdef CONFIG_PV32
         [SH_type_l2h_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l2_table, 4),
+# endif
         [SH_type_l3_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l3_table, 4),
         [SH_type_l4_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l4_table, 4),
 #endif
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -56,7 +56,6 @@ const uint8_t sh_type_to_size[] = {
     [SH_type_l1_64_shadow]   = 1,
     [SH_type_fl1_64_shadow]  = 1,
     [SH_type_l2_64_shadow]   = 1,
-    [SH_type_l2h_64_shadow]  = 1,
     [SH_type_l3_64_shadow]   = 1,
     [SH_type_l4_64_shadow]   = 1,
     [SH_type_p2m_table]      = 1,
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -97,6 +97,13 @@ static void sh_flush_local(const struct
     flush_local(guest_flush_tlb_flags(d));
 }
 
+#if GUEST_PAGING_LEVELS >= 4 && defined(CONFIG_PV32)
+#define ASSERT_VALID_L2(t) \
+    ASSERT((t) == SH_type_l2_shadow || (t) == SH_type_l2h_shadow)
+#else
+#define ASSERT_VALID_L2(t) ASSERT((t) == SH_type_l2_shadow)
+#endif
+
 /**************************************************************************/
 /* Hash table mapping from guest pagetables to shadows
  *
@@ -337,7 +344,7 @@ static void sh_audit_gw(struct vcpu *v,
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2_shadow))) )
             sh_audit_l2_table(d, smfn, INVALID_MFN);
-#if GUEST_PAGING_LEVELS >= 4 /* 32-bit PV only */
+#if GUEST_PAGING_LEVELS >= 4 && defined(CONFIG_PV32)
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2h_shadow))) )
             sh_audit_l2_table(d, smfn, INVALID_MFN);
@@ -859,13 +866,12 @@ do {
     int _i;                                                                 \
     int _xen = !shadow_mode_external(_dom);                                 \
     shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                         \
-    ASSERT(mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow ||\
-           mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2h_64_shadow);\
+    ASSERT_VALID_L2(mfn_to_page(_sl2mfn)->u.sh.type);                       \
     for ( _i = 0; _i < SHADOW_L2_PAGETABLE_ENTRIES; _i++ )                  \
     {                                                                       \
         if ( (!(_xen))                                                      \
              || !is_pv_32bit_domain(_dom)                                   \
-             || mfn_to_page(_sl2mfn)->u.sh.type != SH_type_l2h_64_shadow    \
+             || mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow     \
              || (_i < COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom)) )           \
         {                                                                   \
             (_sl2e) = _sp + _i;                                             \
@@ -992,6 +998,7 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
         }
         break;
 
+#ifdef CONFIG_PV32
         case SH_type_l2h_shadow:
             BUILD_BUG_ON(sizeof(l2_pgentry_t) != sizeof(shadow_l2e_t));
             if ( is_pv_32bit_domain(d) )
@@ -1002,6 +1009,8 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
                 unmap_domain_page(l2t);
             }
             break;
+#endif
+
         default: /* Do nothing */ break;
         }
     }
@@ -1123,11 +1132,13 @@ static shadow_l2e_t * shadow_get_and_cre
         shadow_l3e_t new_sl3e;
         unsigned int t = SH_type_l2_shadow;
 
+#ifdef CONFIG_PV32
         /* Tag compat L2 containing hypervisor (m2p) mappings */
         if ( is_pv_32bit_domain(d) &&
              guest_l4_table_offset(gw->va) == 0 &&
              guest_l3_table_offset(gw->va) == 3 )
             t = SH_type_l2h_shadow;
+#endif
 
         /* No l2 shadow installed: find and install it. */
         *sl2mfn = get_shadow_status(d, gw->l2mfn, t);
@@ -1337,11 +1348,7 @@ void sh_destroy_l2_shadow(struct domain
 
     SHADOW_DEBUG(DESTROY_SHADOW, "%"PRI_mfn"\n", mfn_x(smfn));
 
-#if GUEST_PAGING_LEVELS >= 4
-    ASSERT(t == SH_type_l2_shadow || t == SH_type_l2h_shadow);
-#else
-    ASSERT(t == SH_type_l2_shadow);
-#endif
+    ASSERT_VALID_L2(t);
     ASSERT(sp->u.sh.head);
 
     /* Record that the guest page isn't shadowed any more (in this type) */
@@ -1865,7 +1872,7 @@ int
 sh_map_and_validate_gl2he(struct vcpu *v, mfn_t gl2mfn,
                            void *new_gl2p, u32 size)
 {
-#if GUEST_PAGING_LEVELS >= 4
+#if GUEST_PAGING_LEVELS >= 4 && defined(CONFIG_PV32)
     return sh_map_and_validate(v, gl2mfn, new_gl2p, size,
                                 SH_type_l2h_shadow,
                                 shadow_l2_index,
@@ -3674,7 +3681,7 @@ void sh_clear_shadow_entry(struct domain
         shadow_set_l1e(d, ep, shadow_l1e_empty(), p2m_invalid, smfn);
         break;
     case SH_type_l2_shadow:
-#if GUEST_PAGING_LEVELS >= 4
+#if GUEST_PAGING_LEVELS >= 4 && defined(CONFIG_PV32)
     case SH_type_l2h_shadow:
 #endif
         shadow_set_l2e(d, ep, shadow_l2e_empty(), smfn);
@@ -4124,14 +4131,16 @@ int cf_check sh_audit_l3_table(struct do
 
         if ( SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES_MFNS )
         {
+            unsigned int t = SH_type_l2_shadow;
+
             gfn = guest_l3e_get_gfn(*gl3e);
             mfn = shadow_l3e_get_mfn(*sl3e);
-            gmfn = get_shadow_status(d, get_gfn_query_unlocked(
-                                        d, gfn_x(gfn), &p2mt),
-                                     (is_pv_32bit_domain(d) &&
-                                      guest_index(gl3e) == 3)
-                                     ? SH_type_l2h_shadow
-                                     : SH_type_l2_shadow);
+#ifdef CONFIG_PV32
+            if ( guest_index(gl3e) == 3 && is_pv_32bit_domain(d) )
+                t = SH_type_l2h_shadow;
+#endif
+            gmfn = get_shadow_status(
+                       d, get_gfn_query_unlocked(d, gfn_x(gfn), &p2mt), t);
             if ( !mfn_eq(gmfn, mfn) )
                 AUDIT_FAIL(3, "bad translation: gfn %" SH_PRI_gfn
                            " --> %" PRI_mfn " != mfn %" PRI_mfn,
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -209,6 +209,10 @@ extern void shadow_audit_tables(struct v
 #define SH_type_unused        10U
 #endif
 
+#ifndef CONFIG_PV32 /* Unused (but uglier to #ifdef above): */
+#undef SH_type_l2h_64_shadow
+#endif
+
 /*
  * What counts as a pinnable shadow?
  */
@@ -286,7 +290,11 @@ static inline void sh_terminate_list(str
 #define SHF_L1_64   (1u << SH_type_l1_64_shadow)
 #define SHF_FL1_64  (1u << SH_type_fl1_64_shadow)
 #define SHF_L2_64   (1u << SH_type_l2_64_shadow)
+#ifdef CONFIG_PV32
 #define SHF_L2H_64  (1u << SH_type_l2h_64_shadow)
+#else
+#define SHF_L2H_64  0
+#endif
 #define SHF_L3_64   (1u << SH_type_l3_64_shadow)
 #define SHF_L4_64   (1u << SH_type_l4_64_shadow)
 



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:06:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:06:06 +0000
Message-ID: <acf0f5f6-f4da-cd88-1515-2546153322b4@suse.com>
Date: Thu, 5 Jan 2023 17:05:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 08/11] x86/shadow: reduce effort of hash calculation
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB8046:EE_
X-MS-Office365-Filtering-Correlation-Id: 64ff1aa6-ab3e-4d85-38af-08daef36bd2f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 64ff1aa6-ab3e-4d85-38af-08daef36bd2f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 16:06:00.8857
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LaQDgsNlFjFJCocMBEUkxOUjT0qwE/voMbSmTu3fAN7wtJpxx0dgCsa2Y6wWK7xICqEttpvbeczQIWvQT6NEbg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB8046

The "n" input is a GFN value and hence bounded by the physical address
bits in use on a system. The hash quality won't improve by also
including the upper always-zero bits in the calculation. To keep things
as compile-time-constant as they were before, use PADDR_BITS (not
paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.

While there, also drop the unnecessary cast to u32.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was tempted to also change the types of "p" (pointer to const) and "i"
(unsigned) right here (and perhaps even the "byte" in the comment ahead
of the function), but then thought this might be going too far ...

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1400,7 +1400,11 @@ static inline key_t sh_hash(unsigned lon
     unsigned char *p = (unsigned char *)&n;
     key_t k = t;
     int i;
-    for ( i = 0; i < sizeof(n) ; i++ ) k = (u32)p[i] + (k<<6) + (k<<16) - k;
+
+    BUILD_BUG_ON(PADDR_BITS > BITS_PER_LONG + PAGE_SHIFT);
+    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++ )
+        k = p[i] + (k << 6) + (k << 16) - k;
+
     return k % SHADOW_HASH_BUCKETS;
 }
 



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:06:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:06:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472043.732140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSlN-0005uC-H7; Thu, 05 Jan 2023 16:06:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472043.732140; Thu, 05 Jan 2023 16:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSlN-0005u2-EI; Thu, 05 Jan 2023 16:06:41 +0000
Received: by outflank-mailman (input) for mailman id 472043;
 Thu, 05 Jan 2023 16:06:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDSlM-0005kL-2C
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:06:40 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2087.outbound.protection.outlook.com [40.107.104.87])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id efab7802-8d12-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 17:06:39 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9432.eurprd04.prod.outlook.com (2603:10a6:20b:4d8::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 16:06:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 16:06:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efab7802-8d12-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0998689a-91b1-e381-ac63-a485ab2cb65d@suse.com>
Date: Thu, 5 Jan 2023 17:06:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 09/11] x86/shadow: simplify conditionals in sh_{get,put}_ref()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0010.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9432:EE_
X-MS-Office365-Filtering-Correlation-Id: 3af18b23-60d9-47d6-01bb-08daef36d320
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3af18b23-60d9-47d6-01bb-08daef36d320
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 16:06:37.6802
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: d54P2iPN7JaHNZZWMtCb08lWhqROSGJ6/zUqKXb7t54PlXli06etdxO9gupjKq5uyxcP6sjcp+wb4ne8smx/xg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9432

In both cases the "entry_pa != 0" check is redundant: storing 0 when the
field is already 0 is quite fine. In sh_get_ref(), move the cheaper
remaining part first. In sh_put_ref(), convert the has-up-pointer check
into an assertion (which requires retaining the zero check there).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: Strictly speaking, accessing ->up before checking that the type
     actually has an "up" pointer is UB, as only the last-written field
     of a union may be read. But we have violations of this rule in many
     other places, so I guess we can assume to be okay-ish here as well.

--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -586,9 +586,7 @@ static inline int sh_get_ref(struct doma
     sp->u.sh.count = nx;
 
     /* We remember the first shadow entry that points to each shadow. */
-    if ( entry_pa != 0
-         && sh_type_has_up_pointer(d, sp->u.sh.type)
-         && sp->up == 0 )
+    if ( !sp->up && sh_type_has_up_pointer(d, sp->u.sh.type) )
         sp->up = entry_pa;
 
     return 1;
@@ -607,10 +605,11 @@ static inline void sh_put_ref(struct dom
     ASSERT(!(sp->count_info & PGC_count_mask));
 
     /* If this is the entry in the up-pointer, remove it */
-    if ( entry_pa != 0
-         && sh_type_has_up_pointer(d, sp->u.sh.type)
-         && sp->up == entry_pa )
+    if ( sp->up == entry_pa )
+    {
+        ASSERT(!entry_pa || sh_type_has_up_pointer(d, sp->u.sh.type));
         sp->up = 0;
+    }
 
     x = sp->u.sh.count;
     nx = x - 1;



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:07:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472049.732151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSlt-0006Sa-Q0; Thu, 05 Jan 2023 16:07:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472049.732151; Thu, 05 Jan 2023 16:07:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSlt-0006ST-N6; Thu, 05 Jan 2023 16:07:13 +0000
Received: by outflank-mailman (input) for mailman id 472049;
 Thu, 05 Jan 2023 16:07:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDSls-0005kL-Au
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:07:12 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2083.outbound.protection.outlook.com [40.107.104.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 02f3f296-8d13-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 17:07:11 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9432.eurprd04.prod.outlook.com (2603:10a6:20b:4d8::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 16:07:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 16:07:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02f3f296-8d13-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8f19a31a-9bc3-4216-db1c-a9732c6363a7@suse.com>
Date: Thu, 5 Jan 2023 17:07:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 10/11] x86/shadow: correct shadow type bounds checks
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0140.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9432:EE_
X-MS-Office365-Filtering-Correlation-Id: d8c9e9b9-d8ec-4bc1-7e89-08daef36e670
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d8c9e9b9-d8ec-4bc1-7e89-08daef36e670
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 16:07:10.0844
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hi+aiyZkAnzOe+vVp5V3J33MF6OzPz4oNZn4SK2QzVZAReP/4NcmxZMlaU5qpgNmIKVGfGtlI4Pec9sF9ktW3g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9432

In sh_remove_shadow_via_pointer() the type range checks, besides being
bogus (they should be ">= min && <= max"), are fully redundant with the
has-up-pointer assertion. In sh_hash_audit_bucket() properly use "min"
instead of assuming a certain ordering of the type numbers.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
While style is wrong for the BUG_ON(), keep that aspect as is because of
all the neighboring ones.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1425,7 +1425,7 @@ static void sh_hash_audit_bucket(struct
         /* Not a shadow? */
         BUG_ON( (sp->count_info & PGC_count_mask )!= 0 ) ;
         /* Bogus type? */
-        BUG_ON( sp->u.sh.type == 0 );
+        BUG_ON( sp->u.sh.type < SH_type_min_shadow );
         BUG_ON( sp->u.sh.type > SH_type_max_shadow );
         /* Wrong page of a multi-page shadow? */
         BUG_ON( !sp->u.sh.head );
@@ -2077,8 +2077,6 @@ static int sh_remove_shadow_via_pointer(
     l1_pgentry_t *vaddr;
     int rc;
 
-    ASSERT(sp->u.sh.type > 0);
-    ASSERT(sp->u.sh.type < SH_type_max_shadow);
     ASSERT(sh_type_has_up_pointer(d, sp->u.sh.type));
 
     if (sp->up == 0) return 0;



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:07:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:07:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472056.732162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSmT-00076W-8T; Thu, 05 Jan 2023 16:07:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472056.732162; Thu, 05 Jan 2023 16:07:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSmT-00076P-4f; Thu, 05 Jan 2023 16:07:49 +0000
Received: by outflank-mailman (input) for mailman id 472056;
 Thu, 05 Jan 2023 16:07:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDSmR-0005kL-Ss
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:07:48 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2067.outbound.protection.outlook.com [40.107.104.67])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1833933e-8d13-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 17:07:47 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9326.eurprd04.prod.outlook.com (2603:10a6:102:2b8::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 16:07:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 16:07:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1833933e-8d13-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <514f374d-daa0-ddc0-8cdf-9dfd014d508b@suse.com>
Date: Thu, 5 Jan 2023 17:07:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 11/11] x86/shadow: sh_remove_all_mappings() is HVM-only
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0070.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9326:EE_
X-MS-Office365-Filtering-Correlation-Id: 3a50beb0-154a-425f-3cd2-08daef36fbd2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a50beb0-154a-425f-3cd2-08daef36fbd2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 16:07:45.9884
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: A9Kk2GnJgQoXn6o5Aw5NeB4cTtB+/9ZszHm7RsREp/286bWoi2PaoGWsz28cy/lSyRECxe8zOeimWxZIkK2dDg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9326

All callers live in hvm.c. Moving the function there is undesirable, as
hash walking is local to common.c and probably best remains so. Instead,
move an #endif, which allows dropping an #ifdef.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1992,7 +1992,6 @@ int sh_remove_write_access(struct domain
     /* We killed at least one writeable mapping, so must flush TLBs. */
     return 1;
 }
-#endif /* CONFIG_HVM */
 
 /**************************************************************************/
 /* Remove all mappings of a guest frame from the shadow tables.
@@ -2004,12 +2003,10 @@ int sh_remove_all_mappings(struct domain
 
     /* Dispatch table for getting per-type functions */
     static const hash_callback_t callbacks[SH_type_unused] = {
-#ifdef CONFIG_HVM
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 2),
         [SH_type_fl1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 2),
         [SH_type_l1_pae_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 3),
         [SH_type_fl1_pae_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 3),
-#endif
         [SH_type_l1_64_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 4),
         [SH_type_fl1_64_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 4),
     };
@@ -2064,6 +2061,7 @@ int sh_remove_all_mappings(struct domain
     return 1;
 }
 
+#endif /* CONFIG_HVM */
 
 /**************************************************************************/
 /* Remove all shadows of a guest frame from the shadow tables */



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:10:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:10:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472065.732173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSpJ-0000Cn-Lo; Thu, 05 Jan 2023 16:10:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472065.732173; Thu, 05 Jan 2023 16:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDSpJ-0000Cg-Iq; Thu, 05 Jan 2023 16:10:45 +0000
Received: by outflank-mailman (input) for mailman id 472065;
 Thu, 05 Jan 2023 16:10:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDSpH-0000Ca-Uv
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:10:44 +0000
Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com
 [2a00:1450:4864:20::332])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80764ce5-8d13-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 17:10:41 +0100 (CET)
Received: by mail-wm1-x332.google.com with SMTP id
 m26-20020a05600c3b1a00b003d9811fcaafso1696015wms.5
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 08:10:41 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 iv14-20020a05600c548e00b003b47b80cec3sm3319613wmb.42.2023.01.05.08.10.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 08:10:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80764ce5-8d13-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Y5mKZv7OSvb+gUvfokTjsdLcRFx1w454ZQ5fGUDCzkc=;
        b=mLRGhG4N2khtiYCt0m2gljJKvR5S6u3EOJQNLukMupFWL7inRHP+h85SmPZUph10lg
         JWM56s82st49lZEq075Jb0l+VN/hZ969l/j9iPFwgWK8g10kROhPQ97LUUmCZOnh2ZVu
         W7SZglo/H6cwoCZa9AdqS4Z+mfSRHowsFEk/Ou2ET7VPDfGRom799hoxNejczlyEy02S
         05FBdnuzRc8ZyVbxIKDufDdtCm2fJbvZlqgQafVlJxiqW1+JJV0iDPVXiO+KjcFVeafP
         cSEq5202DsNMb2yuEZ2PKmFxdJPjP9/EuxjuHr2QPPpPqWxb/BIHOokQ+QfFv0bRagCB
         XktQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Y5mKZv7OSvb+gUvfokTjsdLcRFx1w454ZQ5fGUDCzkc=;
        b=4WFarSJpmF/Mh/Nd1ORHKK9OZ6+/htwi7oge25C5u1bF9JHMQ2lDes0EyGANvJRn68
         zWKviIJ49UtEyE6+OgA+3CububWDYSQOcDvtFsVGq5VcKmfL9S1aa8hqOF+r5KufoSrq
         cUXteQ8BMFUELjKLToB+a3neQjNPKWAd7bGdz2QHs1oaDsTZt/+3JofFugCJabJJYXdF
         Cm26EdQGVROGUpHZ8mkXkJkzDzZkHQTCkuQv+MwNtkgmaLwxm+vsO3KSezvddVCSpsCI
         pBuahWVMk43unNRqSNcxxKc3+60cijdcJf9O8YvYzfXvTw3K5Zj2vXbCzjobrBSfRJFh
         eX9w==
X-Gm-Message-State: AFqh2kr7ErHVVdp5ra7YCjBYznoUfI4oH+UYapWHJUxEBdulVRcv2l1M
	3SKriAP84+0IOI9ZUisOrTs=
X-Google-Smtp-Source: AMrXdXt8/5c0/IzSkBmxdpQks124bU7k/ld+ca9ksdm1QYFSy+VyjzGBbu+UMDMdaO2U0KjQrHm05A==
X-Received: by 2002:a05:600c:4da2:b0:3d2:39dc:f50e with SMTP id v34-20020a05600c4da200b003d239dcf50emr36630874wmp.7.1672935041317;
        Thu, 05 Jan 2023 08:10:41 -0800 (PST)
Message-ID: <439a5b7624dbf4d4ff6acbb9b3a6f15b777ba0fc.camel@gmail.com>
Subject: Re: [PATCH v4 1/2] arch/riscv: initial RISC-V support to build/run
 minimal Xen
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Julien
 Grall <julien@xen.org>, Anthony Perard <anthony.perard@citrix.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida
 <gianluca@rivosinc.com>,  "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Date: Thu, 05 Jan 2023 18:10:39 +0200
In-Reply-To: <01888162-49fb-a280-a088-5e81edff3919@citrix.com>
References: <cover.1672912316.git.oleksii.kurochko@gmail.com>
	 <ef6dbb71b27c75fe0dffb72d65ab457d27430475.1672912316.git.oleksii.kurochko@gmail.com>
	 <591d6624-2bd2-93f0-f5d6-760043230756@suse.com>
	 <01888162-49fb-a280-a088-5e81edff3919@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Thu, 2023-01-05 at 15:48 +0000, Andrew Cooper wrote:
> On 05/01/2023 1:40 pm, Jan Beulich wrote:
> > On 05.01.2023 13:01, Oleksii Kurochko wrote:
> > > To run in debug mode should be done the following instructions:
> > >  $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
> > >         -kernel xen/xen -s -S
> > >  # In separate terminal:
> > >  $ riscv64-buildroot-linux-gnu-gdb
> > >  $ target remote :1234
> > >  $ add-symbol-file <xen_src>/xen/xen-syms 0x80200000
> > >  $ hb *0x80200000
> > >  $ c # it should stop at instruction j 0x80200000 <start>
> > This suggests to me that Xen is meant to run at VA 0x80200000,
> > whereas ...
> > 
> > > --- a/xen/arch/riscv/include/asm/config.h
> > > +++ b/xen/arch/riscv/include/asm/config.h
> > > @@ -1,6 +1,9 @@
> > >  #ifndef __RISCV_CONFIG_H__
> > >  #define __RISCV_CONFIG_H__
> > >  
> > > +#include <xen/const.h>
> > > +#include <xen/page-size.h>
> > > +
> > >  #if defined(CONFIG_RISCV_64)
> > >  # define LONG_BYTEORDER 3
> > >  # define ELFSIZE 64
> > > @@ -28,7 +31,7 @@
> > >  
> > >  /* Linkage for RISCV */
> > >  #ifdef __ASSEMBLY__
> > > -#define ALIGN .align 2
> > > +#define ALIGN .align 4
> > >  
> > >  #define ENTRY(name)                               \
> > >    .globl name;                                    \
> > > @@ -36,6 +39,10 @@
> > >    name:
> > >  #endif
> > >  
> > > +#define XEN_VIRT_START  _AT(UL, 0x00200000)
> > ... here you specify a much lower address (and to be honest even
> > 0x80200000 looks pretty low to me for 64-bit, and perhaps even for
> > 32-bit). Could you clarify what the plans here are? Maybe this is
> > merely a temporary thing, but not called out as such?
> 
> It's stale from v1 which had:
> 
> #define XEN_VIRT_START  0x80200000
> 
> 
> But honestly, I don't think the qemu details in the commit message are
> useful.  This series is just about making "make build" work.
> 
> The next series (being worked on, but not posted yet) is only a few
> patches and gets a full Gitlab CI smoke test, at which point the smoke
> test shell script is the reference for how to invoke qemu.
> 
> 
> I'm happy to R-by this series and drop that part of the commit message
> on commit.
> 
I'm happy with that. Thanks.
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:24:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:24:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472072.732184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDT2Q-0001l9-R2; Thu, 05 Jan 2023 16:24:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472072.732184; Thu, 05 Jan 2023 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDT2Q-0001l2-OF; Thu, 05 Jan 2023 16:24:18 +0000
Received: by outflank-mailman (input) for mailman id 472072;
 Thu, 05 Jan 2023 16:24:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aZA+=5C=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDT2P-0001kw-Ml
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:24:17 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65e22e20-8d15-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 17:24:16 +0100 (CET)
Received: by mail-lj1-x233.google.com with SMTP id bn6so29248242ljb.13
 for <xen-devel@lists.xenproject.org>; Thu, 05 Jan 2023 08:24:16 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 u21-20020a05651c131500b0027ff16ee6b9sm1010127lja.8.2023.01.05.08.24.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Jan 2023 08:24:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65e22e20-8d15-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Hh6raYC8+4X6HbqN1scrKnswGRbcSCRlzyoPUATYkiQ=;
        b=JxR1FSzr+zAv/aJQSWZfHNwfhtqy3BDEW+N4+snt13MYz0y9DHpP1P9P60PjcVe6cx
         KyAEGoKKa3AXdclQrtQL5BDxwIN8IPfYJPORe5vr6x8P8LQfA7Gmk/CBYpjarHb27G6H
         L2ESsZK88KNHUCmSiLWTjNcSe5nNNCVzBVE55Kh+VHUJcxiK+9FzdkD6JJu2GgM/nL95
         6zQa5ZDSxnMWCmHY8hSG0vIdGsf37+OcoWisHrijns1iS9UPJpYgJPW8ECjXviMhhCL9
         B2Gvjj306wJMEhPhN+ksBNYYOXwPIxXfBE5oMp0BU+AVIzC3iGHzzPpitjc6k61K771L
         pizQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Hh6raYC8+4X6HbqN1scrKnswGRbcSCRlzyoPUATYkiQ=;
        b=4JkEL/Wy7vokr+RXdLEWOiqynbCPk5H+oSARI6au4shF5mf9fTjFXj2hLhFhvv9w0i
         HJ2/KPmqGxpX2mIzqPZNcMVd72cPxRT1bpOHOS2UtIqc9pCBkdUGqGrste11OdZijKKm
         g4QmhjJ1sbLP93QrsKELH98bmOKlkChXlq4/MUk3EcU6E3h4x4pY6t8AjoqK87IqCQ6Y
         8Cs1mH/4rJa7sHGIlrJfo06pjPda44RrTRYoYS5+vnwfJUqGJMbV90Jbf+HA2XZgqgnV
         hT9SFYCm769q0nqsjOIH3/Qkmu1mR0uT1ROMPv8nV+q+I+4K2uURTDonRksv92HvgLou
         blgw==
X-Gm-Message-State: AFqh2kpJIjS2NG757xNcioSMETWlAhXtWGtP07PtUOsrv5WKjHcRtDgp
	LhmYb+dST6ql9W/Efb7YxUg=
X-Google-Smtp-Source: AMrXdXvi2omhnfUo0JZk87iT+w5i+GcD8PntMwGT3VUk03/GCBWvj0UWx+NPLl+zhCHOz51rQIUJIA==
X-Received: by 2002:a2e:9e57:0:b0:281:789:7ee5 with SMTP id g23-20020a2e9e57000000b0028107897ee5mr1199738ljk.6.1672935855707;
        Thu, 05 Jan 2023 08:24:15 -0800 (PST)
Message-ID: <8ab7b45a75cdfa332954c8a112cf9b54b4d35c62.camel@gmail.com>
Subject: Re: [PATCH v4 1/2] arch/riscv: initial RISC-V support to build/run
 minimal Xen
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Julien
 Grall <julien@xen.org>, Anthony Perard <anthony.perard@citrix.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida
 <gianluca@rivosinc.com>,  "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Date: Thu, 05 Jan 2023 18:24:14 +0200
In-Reply-To: <439a5b7624dbf4d4ff6acbb9b3a6f15b777ba0fc.camel@gmail.com>
References: <cover.1672912316.git.oleksii.kurochko@gmail.com>
	 <ef6dbb71b27c75fe0dffb72d65ab457d27430475.1672912316.git.oleksii.kurochko@gmail.com>
	 <591d6624-2bd2-93f0-f5d6-760043230756@suse.com>
	 <01888162-49fb-a280-a088-5e81edff3919@citrix.com>
	 <439a5b7624dbf4d4ff6acbb9b3a6f15b777ba0fc.camel@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Thu, 2023-01-05 at 18:10 +0200, Oleksii wrote:
> On Thu, 2023-01-05 at 15:48 +0000, Andrew Cooper wrote:
> > On 05/01/2023 1:40 pm, Jan Beulich wrote:
> > > On 05.01.2023 13:01, Oleksii Kurochko wrote:
> > > > To run in debug mode should be done the following instructions:
> > > >  $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
> > > >         -kernel xen/xen -s -S
> > > >  # In separate terminal:
> > > >  $ riscv64-buildroot-linux-gnu-gdb
> > > >  $ target remote :1234
> > > >  $ add-symbol-file <xen_src>/xen/xen-syms 0x80200000
> > > >  $ hb *0x80200000
> > > >  $ c # it should stop at instruction j 0x80200000 <start>
> > > This suggests to me that Xen is meant to run at VA 0x80200000,
> > > whereas ...
> > > 
> > > > --- a/xen/arch/riscv/include/asm/config.h
> > > > +++ b/xen/arch/riscv/include/asm/config.h
> > > > @@ -1,6 +1,9 @@
> > > >  #ifndef __RISCV_CONFIG_H__
> > > >  #define __RISCV_CONFIG_H__
> > > >  
> > > > +#include <xen/const.h>
> > > > +#include <xen/page-size.h>
> > > > +
> > > >  #if defined(CONFIG_RISCV_64)
> > > >  # define LONG_BYTEORDER 3
> > > >  # define ELFSIZE 64
> > > > @@ -28,7 +31,7 @@
> > > >  
> > > >  /* Linkage for RISCV */
> > > >  #ifdef __ASSEMBLY__
> > > > -#define ALIGN .align 2
> > > > +#define ALIGN .align 4
> > > >  
> > > >  #define ENTRY(name)                               \
> > > >    .globl name;                                    \
> > > > @@ -36,6 +39,10 @@
> > > >    name:
> > > >  #endif
> > > >  
> > > > +#define XEN_VIRT_START  _AT(UL, 0x00200000)
> > > ... here you specify a much lower address (and to be honest even
> > > 0x80200000 looks pretty low to me for 64-bit, and perhaps even for
> > > 32-bit). Could you clarify what the plans here are? Maybe this is
> > > merely a temporary thing, but not called out as such?
> > 
> > It's stale from v1 which had:
> > 
> > #define XEN_VIRT_START  0x80200000
Let's switch XEN_VIRT_START to 0x0000000080200000 while we don't have
any MMU support, as 0x80200000 is the address where OpenSBI will load
the binary (in our case, Xen).
> > 
> > 
> > But honestly, I don't think the qemu details in the commit message are
> > useful.  This series is just about making "make build" work.
> > 
> > The next series (being worked on, but not posted yet) is only a few
> > patches and gets a full Gitlab CI smoke test, at which point the smoke
> > test shell script is the reference for how to invoke qemu.
> > 
> > 
> > I'm happy to R-by this series and drop that part of the commit message
> > on commit.
> > 
> I'm happy with that. Thanks.
> > ~Andrew
> 
~ Oleksii



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:40:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:40:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472079.732195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDTHc-0003KM-6E; Thu, 05 Jan 2023 16:40:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472079.732195; Thu, 05 Jan 2023 16:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDTHc-0003KD-32; Thu, 05 Jan 2023 16:40:00 +0000
Received: by outflank-mailman (input) for mailman id 472079;
 Thu, 05 Jan 2023 16:39:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDTHa-0003K7-4E
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:39:58 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9476fed6-8d17-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 17:39:55 +0100 (CET)
Received: from mail-dm6nam04lp2041.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 11:39:51 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5133.namprd03.prod.outlook.com (2603:10b6:208:1a5::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 16:39:48 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 16:39:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9476fed6-8d17-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672936794;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=KrNF1Nng7ydDEhClHRLc+rGtl/F90VRQU6sC1aXF9SQ=;
  b=Qsk2H+htkcuM4bufC4cTU+oicWBddNQaMmAjTPraVfbsUc5P70AtlVrM
   BJbC4qAyv/+J8m2Nn3q67EOHQRkdZNQk+U7LymJSrnUh1zlAOI0jX6IGl
   wowvsHO9e6hXVlYlIkFj9kd+O+jCsdPnaBZU9fhRzFo3FufipODg9I7cc
   0=;
X-IronPort-RemoteIP: 104.47.73.41
X-IronPort-MID: 91384130
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="91384130"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d/BT0ei8KvpBcUPIEQm/KsC4nKi4tAxB4RHvULPE8icLZggMIAN0Z5UHVl7SymfsDgFCuqUyaMionxcINob1Ive3GkepyZOUqycjsUN+t4O6o9kABmtxhYafIs6RluVzsxUSW9/O6jtvEzKHbOMKvLksH3X+ghjUsUvueGcCIapPMrow4B4u2wo06qvn4QFwzWnbhSFi/cBUtO2yq/drh8FAPK0SMJmSytGJw5EQ71t86CF0JuxH419RoJRWTTSsHXNgm1pIi2dlzrP7Zu7bBIYLNHAp2bSoouzPDRK5PALowwsE2SOGY+qepBoXilhpLmWT2ph9SlNcGD41jpBMcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KrNF1Nng7ydDEhClHRLc+rGtl/F90VRQU6sC1aXF9SQ=;
 b=XAON5OMb1JLYkiekyqIwvfg3w9zt6FA4TKLqSdadSYFiUIftPJUyOr+lqlnRWIWFnDrAQGQJv4JHaWn8fY//XoifJ+P5GClQpF1/C4YKdZeEJ2H3EtUx+h1enkt5493EkEE7IgMcatQaZBHmm6qZTDKbjR5wPpiDl9V9Tysi5rQTA7p9Y/r7XnwIeSEV+rbpY8mMYIMYmTUCqSFkZSsZtyMinGJ7+y+tiBXUw0EH1Tst7ujRuuB8a4QLt3S52QhU9RIUhffUx9Uufd97rzClhOmP7ca6UuNSAJ02sbZL0Hxmssz+SlFxXgeTrGq5UhodRN/jl/MNUb5byHPj5a2EPw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KrNF1Nng7ydDEhClHRLc+rGtl/F90VRQU6sC1aXF9SQ=;
 b=v/pOvXVjz/tlF6DDFEPTDb4dgox4gi2gmIGTnM9YdRjktP9GFAv1Q5Kc+m3N+J6J9CGuUX4SkmsijIcjR+gXrBE/q96RHi9RXsI96OIQeJQoQ6XjWLVBUZJHcfU2/XvpUTk3ImEvWaXK08TgizVT293yOf8GSSkMexol437zAYE=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii <oleksii.kurochko@gmail.com>, Jan Beulich <jbeulich@suse.com>
CC: Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Julien
 Grall <julien@xen.org>, Anthony Perard <anthony.perard@citrix.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 1/2] arch/riscv: initial RISC-V support to build/run
 minimal Xen
Thread-Topic: [PATCH v4 1/2] arch/riscv: initial RISC-V support to build/run
 minimal Xen
Thread-Index: AQHZIP2Dn/VS+v8CxEaK8nfj7fmLt66P1KmAgAAjsICAAAYtgIAAA8sAgAAEWIA=
Date: Thu, 5 Jan 2023 16:39:47 +0000
Message-ID: <ae6a5904-31e8-ad56-45d4-a5a7467eecb9@citrix.com>
References: <cover.1672912316.git.oleksii.kurochko@gmail.com>
 <ef6dbb71b27c75fe0dffb72d65ab457d27430475.1672912316.git.oleksii.kurochko@gmail.com>
 <591d6624-2bd2-93f0-f5d6-760043230756@suse.com>
 <01888162-49fb-a280-a088-5e81edff3919@citrix.com>
 <439a5b7624dbf4d4ff6acbb9b3a6f15b777ba0fc.camel@gmail.com>
 <8ab7b45a75cdfa332954c8a112cf9b54b4d35c62.camel@gmail.com>
In-Reply-To: <8ab7b45a75cdfa332954c8a112cf9b54b4d35c62.camel@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MN2PR03MB5133:EE_
x-ms-office365-filtering-correlation-id: 2923fe1d-be79-4fb7-4014-08daef3b757a
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <E25CCFE29BEE444A9EF5BD53974AA95C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2923fe1d-be79-4fb7-4014-08daef3b757a
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 16:39:47.8932
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 0LaO/eli6Go9fPMBrQrklwiGEYnTrP+7Db5zPaC1M4oLk25I0v+y3nQ+a7PhvlX7BCoIExhTqiFhzLSJNRwAHLt9gF7ZiMKqTzeyOjlXOAo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5133

On 05/01/2023 4:24 pm, Oleksii wrote:
> On Thu, 2023-01-05 at 18:10 +0200, Oleksii wrote:
>> On Thu, 2023-01-05 at 15:48 +0000, Andrew Cooper wrote:
>>> On 05/01/2023 1:40 pm, Jan Beulich wrote:
>>>> On 05.01.2023 13:01, Oleksii Kurochko wrote:
>>>>> To run in debug mode should be done the following instructions:
>>>>>  $ qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
>>>>>        -kernel xen/xen -s -S
>>>>>  # In separate terminal:
>>>>>  $ riscv64-buildroot-linux-gnu-gdb
>>>>>  $ target remote :1234
>>>>>  $ add-symbol-file <xen_src>/xen/xen-syms 0x80200000
>>>>>  $ hb *0x80200000
>>>>>  $ c # it should stop at instruction j 0x80200000 <start>
>>>> This suggests to me that Xen is meant to run at VA 0x80200000,
>>>> whereas ...
>>>>
>>>>> --- a/xen/arch/riscv/include/asm/config.h
>>>>> +++ b/xen/arch/riscv/include/asm/config.h
>>>>> @@ -1,6 +1,9 @@
>>>>>  #ifndef __RISCV_CONFIG_H__
>>>>>  #define __RISCV_CONFIG_H__
>>>>>  
>>>>> +#include <xen/const.h>
>>>>> +#include <xen/page-size.h>
>>>>> +
>>>>>  #if defined(CONFIG_RISCV_64)
>>>>>  # define LONG_BYTEORDER 3
>>>>>  # define ELFSIZE 64
>>>>> @@ -28,7 +31,7 @@
>>>>>  
>>>>>  /* Linkage for RISCV */
>>>>>  #ifdef __ASSEMBLY__
>>>>> -#define ALIGN .align 2
>>>>> +#define ALIGN .align 4
>>>>>  
>>>>>  #define ENTRY(name)                                \
>>>>>    .globl name;                                     \
>>>>> @@ -36,6 +39,10 @@
>>>>>    name:
>>>>>  #endif
>>>>>  
>>>>> +#define XEN_VIRT_START  _AT(UL, 0x00200000)
>>>> ... here you specify a much lower address (and to be honest even
>>>> 0x80200000
>>>> looks pretty low to me for 64-bit, and perhaps even for 32-bit).
>>>> Could you
>>>> clarify what the plans here are? Maybe this is merely a temporary
>>>> thing,
>>>> but not called out as such?
>>> It's stale from v1 which had:
>>>
>>> #define XEN_VIRT_START  0x80200000
> Let's switch XEN_VIRT_START to 0x0000000080200000 while we don't have
> any MMU support as 0x80200000 is an address where OpenSBI will load
> binary (in our case Xen).

Ok.  I've fixed that up and pushed with a tweaked commit message.

~Andrew
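The QEMU+GDB debug workflow discussed in this thread can be captured in a small helper script. This is a hypothetical convenience wrapper, not something from the patch; `$XEN_SRC` stands for the `<xen_src>` placeholder in the mail, and `hbreak` is the full spelling of GDB's `hb` abbreviation:

```shell
#!/bin/sh
# Sketch of the two-terminal QEMU+GDB flow for early RISC-V Xen bring-up.
# XEN_SRC is a placeholder for the Xen source tree (<xen_src> in the mail).
XEN_SRC=${XEN_SRC:-/path/to/xen_src}

# Terminal 1 (left commented here): boot Xen halted (-S) with a GDB
# stub listening on :1234 (-s).
# qemu-system-riscv64 -M virt -smp 1 -nographic -m 2g \
#     -kernel "$XEN_SRC/xen/xen" -s -S

# Terminal 2: write out the GDB commands used to attach to the stub,
# load symbols at the OpenSBI load address, and break on the first
# instruction.
cat > xen-gdb-commands <<EOF
target remote :1234
add-symbol-file $XEN_SRC/xen/xen-syms 0x80200000
hbreak *0x80200000
continue
EOF
# riscv64-buildroot-linux-gnu-gdb -x xen-gdb-commands
```

Running the GDB line then stops at the `j 0x80200000 <start>` instruction, as described above.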


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 16:47:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 16:47:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472087.732206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDTOT-0004pE-0s; Thu, 05 Jan 2023 16:47:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472087.732206; Thu, 05 Jan 2023 16:47:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDTOS-0004p7-Tk; Thu, 05 Jan 2023 16:47:04 +0000
Received: by outflank-mailman (input) for mailman id 472087;
 Thu, 05 Jan 2023 16:47:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lIpW=5C=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDTOR-0004oy-QN
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 16:47:03 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2044.outbound.protection.outlook.com [40.107.6.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 93fde1ec-8d18-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 17:47:02 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7801.eurprd04.prod.outlook.com (2603:10a6:10:1eb::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 16:46:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 16:46:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93fde1ec-8d18-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fyFAUSgsOH+zDOkh4CLbEMJmVcXFjXCYwnAzUm1vNxoTmmyqSQ6x22x4CVQqCnk3ouFVvhy0cDME3uWMixHhCNLaFt8NH1dFwqPbl87TipA7ijO6hW/CI5q5QyUVjZr6o60m1AHhW/rbbHbF1T5v8im70Cvjl86MjQD/7PV3yR2T318CzbpupVGaY9nml2aRtISUyqqmsJed1EtPb40btBCKI/sOR56LPVIdPyvMfWxV/KlWOWUfsCOmxysTjMkHS9bXzWotHOS4U/bGvpT7eHdpu5b6+xDNbNQwWXUOPwffwGpHMUoujy8GAgXiYjRwoJwGDxlZHuCQvqBjPGuPtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DcGuQU1bGC5b9LD2CPTGRvx/D0t7v/48EG45UhsDssI=;
 b=EYJGaclsZu9VvnjzuYRqetiU4sJlx4UhnljHDgi8uwvVuVEUlcJZSzYLlyGgaj7NRrLt9L1ICN7ZIGyKHEqwhzDu1K1De28DsXB4u1MQK3bbCQddPdZ8io9DA2kAaolXlMi0RA3ee/fnyEqdU4D1MMDrPBjpMwoQNUMsvNImnHvXFEeJZ0o/BowyFeL7tl3Ork8DBkUbFNYBlEuvS3rvIXcpz58Ja1Cim5jGVh4cKGpUgFAD43Xz1mER413SzY1csUU0DFR8otorO91BEa7T7ElYBfq5yI987JxZicCdWexKMjnVz8WyMMizJpyADSrmadJAjNVQ3bVlRRH+I/RmXg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DcGuQU1bGC5b9LD2CPTGRvx/D0t7v/48EG45UhsDssI=;
 b=PVtTdxnirIoMYiH0CKO7EX+xjWYZFQ3yRGQnWOO+0W76WAvXzs0YfeNVqvTTzt6Nn2++o5OKwBT2k8cctU9HkJ0nUoJpidoyAyL+iNRDgdCTuGRAkXJTTE7M7ugz2pZrf4T7VLDzZLTO6HsZhUQ3MG4CtkTS441F0YdonOVidTtC+d5B0y1rIGpF0ENUwWD9c4iwwvSk1NfwQ76lwXRM6qtOHTFPDAZTkuG+CvJMWcMdBpggb6KcaYj2f1YZib1hf7xIHvmkHfwjw30CsHzk1imtVkB/qh5FZJ5GMmXfx59moOot9W4iQDI2lpNMqUm4IJKaKNONAfRTRa5NwKSQ3A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <84a96972-3c41-ec94-3513-9944467d9e1c@suse.com>
Date: Thu, 5 Jan 2023 17:46:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 13/22] xen/x86: Add support for the PMAP
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-14-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20221216114853.8227-14-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0155.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7801:EE_
X-MS-Office365-Filtering-Correlation-Id: c792d951-9630-4f99-1317-08daef3c768e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c792d951-9630-4f99-1317-08daef3c768e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jan 2023 16:46:59.4007
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VSjo84KHjVES13ybKNrfkO98Uka8BlRUk5K7bMCctnw/MxZIvuuGpMaozNlNoncG8crbPe1hkCCojKv6FciMHw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7801

On 16.12.2022 12:48, Julien Grall wrote:
> PMAP will be used in a follow-up patch to bootstrap map domain
> page infrastructure -- we need some way to map pages to set up the
> mapcache without a direct map.

But this isn't going to be needed overly early then, seeing that ...

> --- a/xen/arch/x86/include/asm/fixmap.h
> +++ b/xen/arch/x86/include/asm/fixmap.h
> @@ -21,6 +21,8 @@
>  
>  #include <xen/acpi.h>
>  #include <xen/pfn.h>
> +#include <xen/pmap.h>
> +
>  #include <asm/apicdef.h>
>  #include <asm/msi.h>
>  #include <acpi/apei.h>
> @@ -54,6 +56,8 @@ enum fixed_addresses {
>      FIX_XEN_SHARED_INFO,
>  #endif /* CONFIG_XEN_GUEST */
>      /* Everything else should go further down. */
> +    FIX_PMAP_BEGIN,
> +    FIX_PMAP_END = FIX_PMAP_BEGIN + NUM_FIX_PMAP,

... you've inserted the new entries after the respective comment? Is
there a reason you don't insert farther towards the end of this
enumeration?

> --- /dev/null
> +++ b/xen/arch/x86/include/asm/pmap.h
> @@ -0,0 +1,25 @@
> +#ifndef __ASM_PMAP_H__
> +#define __ASM_PMAP_H__
> +
> +#include <asm/fixmap.h>
> +
> +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
> +{
> +    unsigned long linear = (unsigned long)fix_to_virt(slot);
> +    l1_pgentry_t *pl1e = &l1_fixmap[l1_table_offset(linear)];
> +
> +    ASSERT(!(l1e_get_flags(*pl1e) & _PAGE_PRESENT));
> +
> +    l1e_write_atomic(pl1e, l1e_from_mfn(mfn, PAGE_HYPERVISOR));
> +}
> +
> +static inline void arch_pmap_unmap(unsigned int slot)
> +{
> +    unsigned long linear = (unsigned long)fix_to_virt(slot);
> +    l1_pgentry_t *pl1e = &l1_fixmap[l1_table_offset(linear)];
> +
> +    l1e_write_atomic(pl1e, l1e_empty());
> +    flush_tlb_one_local(linear);
> +}

You're effectively open-coding {set,clear}_fixmap(), just without
the L1 table allocation (should such be necessary). If you depend
on using the build-time L1 table, then you need to move your
entries ahead of said comment. But independent of that you want
to either use the existing macros / functions, or explain why you
can't.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 17:51:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 17:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472094.732216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDUO7-0003Kr-N1; Thu, 05 Jan 2023 17:50:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472094.732216; Thu, 05 Jan 2023 17:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDUO7-0003Kk-KI; Thu, 05 Jan 2023 17:50:47 +0000
Received: by outflank-mailman (input) for mailman id 472094;
 Thu, 05 Jan 2023 17:50:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDUO6-0003Ke-7C
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 17:50:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDUO3-0001Mr-Sv; Thu, 05 Jan 2023 17:50:43 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=[192.168.4.62])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDUO3-0000oE-Mo; Thu, 05 Jan 2023 17:50:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=8vcieIbCBqWK4v4MK2ILng/goWaB9VQK/FpAtpowD4k=; b=Z3E9fWBcJwZgZSxJziOArJA0tc
	rURGvep9oQuxASKqtELj08YwhJhKMqeiBz2GZgZkqqCdV2yYAyBec2kKwOiyPuAlxObNAE/Sist1w
	7fBRAHnEZjxQdMlsrMSz1RkdvIyRGQX9nWeWA+Ni8SmIYC5pMgM+rhFtIR8FrAoQKfS8=;
Message-ID: <924abe0d-6ba8-5d64-d74a-c2e1894d4f64@xen.org>
Date: Thu, 5 Jan 2023 17:50:41 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 13/22] xen/x86: Add support for the PMAP
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-14-julien@xen.org>
 <84a96972-3c41-ec94-3513-9944467d9e1c@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <84a96972-3c41-ec94-3513-9944467d9e1c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 05/01/2023 16:46, Jan Beulich wrote:
> On 16.12.2022 12:48, Julien Grall wrote:
>> --- a/xen/arch/x86/include/asm/fixmap.h
>> +++ b/xen/arch/x86/include/asm/fixmap.h
>> @@ -21,6 +21,8 @@
>>   
>>   #include <xen/acpi.h>
>>   #include <xen/pfn.h>
>> +#include <xen/pmap.h>
>> +
>>   #include <asm/apicdef.h>
>>   #include <asm/msi.h>
>>   #include <acpi/apei.h>
>> @@ -54,6 +56,8 @@ enum fixed_addresses {
>>       FIX_XEN_SHARED_INFO,
>>   #endif /* CONFIG_XEN_GUEST */
>>       /* Everything else should go further down. */
>> +    FIX_PMAP_BEGIN,
>> +    FIX_PMAP_END = FIX_PMAP_BEGIN + NUM_FIX_PMAP,
> 
> ... you've inserted the new entries after the respective comment? Is
> there a reason you don't insert farther towards the end of this
> enumeration?

I will answer this below.

> 
>> --- /dev/null
>> +++ b/xen/arch/x86/include/asm/pmap.h
>> @@ -0,0 +1,25 @@
>> +#ifndef __ASM_PMAP_H__
>> +#define __ASM_PMAP_H__
>> +
>> +#include <asm/fixmap.h>
>> +
>> +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
>> +{
>> +    unsigned long linear = (unsigned long)fix_to_virt(slot);
>> +    l1_pgentry_t *pl1e = &l1_fixmap[l1_table_offset(linear)];
>> +
>> +    ASSERT(!(l1e_get_flags(*pl1e) & _PAGE_PRESENT));
>> +
>> +    l1e_write_atomic(pl1e, l1e_from_mfn(mfn, PAGE_HYPERVISOR));
>> +}
>> +
>> +static inline void arch_pmap_unmap(unsigned int slot)
>> +{
>> +    unsigned long linear = (unsigned long)fix_to_virt(slot);
>> +    l1_pgentry_t *pl1e = &l1_fixmap[l1_table_offset(linear)];
>> +
>> +    l1e_write_atomic(pl1e, l1e_empty());
>> +    flush_tlb_one_local(linear);
>> +}
> 
> You're effectively open-coding {set,clear}_fixmap(), just without
> the L1 table allocation (should such be necessary). If you depend
> on using the build-time L1 table, then you need to move your
> entries ahead of said comment.

So the problem is less about the allocation and more about the fact
that we can't use map_pages_to_xen(), because it would call pmap_map().

So we need to break the loop; hence set_fixmap()/clear_fixmap() are
open-coded.

And indeed, we would need to rely on the build-time L1 table in this 
case. So I will move the entries earlier.
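For illustration, the resulting layout might be sketched as below. This is a simplified stand-alone model, not the real header: NUM_FIX_PMAP's value and the FIX_RESERVED/FIX_EXAMPLE_LATE entries are hypothetical; only the PMAP names and the "everything else" comment come from the patch. The point is that the PMAP slots sit ahead of the comment, so they are covered by the build-time L1 fixmap table:

```c
#include <assert.h>

#define NUM_FIX_PMAP 8  /* illustrative; the real constant comes from xen/pmap.h */

/*
 * Simplified model of x86's fixed_addresses enumeration: entries before
 * the "everything else" comment are backed by the build-time L1 fixmap
 * table, while later entries may require on-demand page-table
 * allocation via map_pages_to_xen().
 */
enum fixed_addresses {
    FIX_RESERVED,                                 /* slot 0, illustrative */
    FIX_PMAP_BEGIN,                               /* PMAP slots, usable early */
    FIX_PMAP_END = FIX_PMAP_BEGIN + NUM_FIX_PMAP,
    /* Everything else should go further down. */
    FIX_EXAMPLE_LATE,                             /* hypothetical later entry */
    __end_of_fixed_addresses
};
```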

> But independent of that you want
> to either use the existing macros / functions, or explain why you
> can't.

This is explained in the caller of arch_pmap*():

     /*
      * We cannot use set_fixmap() here. We use PMAP when the domain map
      * page infrastructure is not yet initialized, so map_pages_to_xen()
      * called by set_fixmap() needs to map pages on demand, which then
      * calls pmap() again, resulting in a loop. Modify the PTEs directly
      * instead. The same is true for pmap_unmap().
      */

The comment is valid for Arm, x86 and (I would expect in the future) 
RISC-V because the page-tables may be allocated in domheap (so not 
always mapped).
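The loop being broken here can be modeled in a self-contained sketch. This is a hypothetical simplification (flat PTE array and a plain bitmap standing in for the build-time L1 fixmap table and Xen's slot bookkeeping), not Xen's actual implementation; it only shows why writing the PTE directly, as arch_pmap_map() does, avoids recursing through set_fixmap()/map_pages_to_xen():

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_FIX_PMAP 8          /* illustrative slot count */

typedef uint64_t mfn_t;

static mfn_t slot_pte[NUM_FIX_PMAP]; /* stands in for the build-time L1 table */
static uint8_t inuse_bitmap;         /* one bit per PMAP slot */

/* Hand out a free slot and write its PTE directly (arch_pmap_map()). */
static void *pmap_map(mfn_t mfn)
{
    for (unsigned int slot = 0; slot < NUM_FIX_PMAP; slot++)
    {
        if (!(inuse_bitmap & (1u << slot)))
        {
            inuse_bitmap |= 1u << slot;
            slot_pte[slot] = mfn;    /* l1e_write_atomic() in real code */
            return &slot_pte[slot];  /* fix_to_virt(slot) in real code */
        }
    }
    return NULL;                     /* all slots busy */
}

/* Clear the PTE and release the slot (arch_pmap_unmap()). */
static void pmap_unmap(void *va)
{
    size_t slot = (mfn_t *)va - slot_pte;

    inuse_bitmap &= ~(1u << slot);
    slot_pte[slot] = 0;              /* l1e_empty() + local TLB flush */
}
```

Because pmap_map() touches only pre-allocated PTEs, it never needs to allocate page tables, so it can run before the domain map page infrastructure exists.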

So I don't feel this comment should be duplicated in the header. But I 
can certainly explain it in the commit message.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 17:57:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 17:57:02 +0000
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, George
 Dunlap <George.Dunlap@citrix.com>, "Tim (Xen.org)" <tim@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/8] x86/paging: fold most HAP and shadow final teardown
Thread-Topic: [PATCH 2/8] x86/paging: fold most HAP and shadow final teardown
Thread-Index: AQHZFT+yfO+11AII6U6krdJn3spgiq54lXqAgAD0VYCAFqnVgA==
Date: Thu, 5 Jan 2023 17:56:43 +0000
Message-ID: <4de3cf99-f8ab-c439-b383-859a41e90517@citrix.com>
References: <f7d2c856-bf75-503b-ad96-02d25c63a2f6@suse.com>
 <8d519e00-83c6-aee9-e7ba-523aa4265e1e@suse.com>
 <af33fef0-46f7-b206-53ed-33d66f0f008c@citrix.com>
 <780eb542-34b9-c64f-0a40-acc462b87c72@suse.com>
In-Reply-To: <780eb542-34b9-c64f-0a40-acc462b87c72@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <BFC612EC572CA74E9147DD7EC7613536@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 22/12/2022 7:51 am, Jan Beulich wrote:
> On 21.12.2022 18:16, Andrew Cooper wrote:
>> On 21/12/2022 1:25 pm, Jan Beulich wrote:
>>> +                  d, d->arch.paging.total_pages,
>>> +                  d->arch.paging.free_pages, d->arch.paging.p2m_pages);
>>> +
>>> +    if ( hap )
>>>          hap_final_teardown(d);
>>> +
>>> +    /*
>>> +     * Double-check that the domain didn't have any paging memory.
>>> +     * It is possible for a domain that never got domain_kill()ed
>>> +     * to get here with its paging allocation intact.
>> I know you're mostly just moving this comment, but it's misleading.
>>
>> This path is used for the domain_create() error path, and there will be
>> a nonzero allocation for HVM guests.
>>
>> I think we do want to rework this eventually - we will simplify things
>> massively by splitting the things that can only happen for a domain
>> which has run into relinquish_resources.
>>
>> At a minimum, I'd suggest dropping the first sentence.  "double check"
>> implies it's an extraordinary case, which isn't warranted here IMO.
> Instead of dropping I think I would prefer to make it start "Make sure
> ...".

That's still awkward grammar, and a bit misleading IMO.  How about
rewriting it entirely?

/* Remove remaining paging memory.  This can be nonzero on certain error
paths. */

>
>>> +     */
>>> +    if ( d->arch.paging.total_pages )
>>> +    {
>>> +        if ( hap )
>>> +            hap_teardown(d, NULL);
>>> +        else
>>> +            shadow_teardown(d, NULL);
>>> +    }
>>> +
>>> +    /* It is now safe to pull down the p2m map. */
>>> +    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
>>> +
>>> +    /* Free any paging memory that the p2m teardown released. */
>> I don't think this is true any more.  410 also made HAP/shadow free
>> pages fully for a dying domain, so p2m_teardown() at this point won't
>> add to the free memory pool.
>>
>> I think the subsequent *_set_allocation() can be dropped, and the
>> assertions left.
> I agree, but if anything this will want to be a separate patch then.

I'd be happy putting that in this patch, but if you don't want to
combine it, then it should be a prerequisite IMO, so we don't have a
"clean up $X" patch which is shuffling buggy code around.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 18:02:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 18:02:40 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175579-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175579: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=41c03ba9beea760bd2d2ac9250b09a2e192da2dc
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 18:02:34 +0000

flight 175579 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175579/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                41c03ba9beea760bd2d2ac9250b09a2e192da2dc
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   89 days
Failing since        173470  2022-10-08 06:21:34 Z   89 days  186 attempts
Testing same since   175579  2023-01-05 04:57:31 Z    0 days    1 attempts

------------------------------------------------------------
3265 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 498271 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 19:27:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 19:27:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472119.732250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDVt7-00054V-6s; Thu, 05 Jan 2023 19:26:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472119.732250; Thu, 05 Jan 2023 19:26:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDVt7-00054O-4D; Thu, 05 Jan 2023 19:26:53 +0000
Received: by outflank-mailman (input) for mailman id 472119;
 Thu, 05 Jan 2023 19:26:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDVt5-00054E-CE; Thu, 05 Jan 2023 19:26:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDVt5-0003YM-50; Thu, 05 Jan 2023 19:26:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDVt4-0000Ph-Qd; Thu, 05 Jan 2023 19:26:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDVt4-0005JW-QE; Thu, 05 Jan 2023 19:26:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dPeraHKPSt5Xr24vT/88edpDy/j4fFVsR05C2rQ2J3Q=; b=Qr+6z3kV87dST/biDrGz5WkwDG
	a3GG9ufcMgtWntxvueVKy9rXSc28RRO/h7fssvWh1ca6auH1AFmPHdUYsElhwVMjf2WxKIOqJLTFj
	n5QAhL/OXFoh1dj/fgVbGxrnSSemgxOHmzQJutyrF+lpuSr58NPcteUQ3D/wvkSzhmtI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175589-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175589: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=671f50ffab3329c5497208da89620322b9721a77
X-Osstest-Versions-That:
    xen=c1df06afe578f698ebe91a1e3817463b9d165123
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 19:26:50 +0000

flight 175589 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175589/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  671f50ffab3329c5497208da89620322b9721a77
baseline version:
 xen                  c1df06afe578f698ebe91a1e3817463b9d165123

Last test of basis   175568  2023-01-04 14:00:26 Z    1 days
Testing same since   175589  2023-01-05 16:01:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Demi Marie Obenour <demi@invisiblethingslab.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c1df06afe5..671f50ffab  671f50ffab3329c5497208da89620322b9721a77 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 19:54:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 19:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472128.732261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDWJd-0008MA-BO; Thu, 05 Jan 2023 19:54:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472128.732261; Thu, 05 Jan 2023 19:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDWJd-0008M3-8P; Thu, 05 Jan 2023 19:54:17 +0000
Received: by outflank-mailman (input) for mailman id 472128;
 Thu, 05 Jan 2023 19:54:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDWJc-0008Lx-DR
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 19:54:16 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b92b2737-8d32-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 20:54:13 +0100 (CET)
Received: from mail-sn1nam02lp2047.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 14:54:03 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB5957.namprd03.prod.outlook.com (2603:10b6:208:310::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 19:53:56 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 19:53:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b92b2737-8d32-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672948453;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=fYJ4tDZPy7y5zf+0cJyZ+4IZankQAXIi/098J4yBmig=;
  b=G1y5Y65rdr7lZ8AgMs9ix8dp0Pdw1h7JJ59QsqbJNCth8Px4KlN8lqeO
   6vTLYGEBLjRPTcEiUSGOpcmxk0RsAfaLHfkTI1pOA3CCnNcv+t4rcuytP
   d4kSXHUqTLJoMrfuyYrECxCNkm/W7N1IwU4gfCopIx/L4NM56AoQZ3Dxi
   I=;
X-IronPort-RemoteIP: 104.47.57.47
X-IronPort-MID: 93848175
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="93848175"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BjHFKosjOx/x+S9fXC5uTIJEUAXgW8hiR6biz8BAnZPiY3k9IisrgLtSxRWb6x/+mlT8QFZfMyRtAF1rt1f84cL0QoubN6RvDxOjhLIo+pKpa2CD9RVKOMJ4YxYgTyDNqIGxlsnFNlZ3E2FTR0kCGvUThadgNo+E/PL9OipsH55Fa2fwQluS/Amg3GijdIDPryNttNB+h9RuVs8cUW2J7GTNWWsRw3fPlbK541/eBB06lW80BMG6hynykuT5OS1z1fvi2705fv6j0GesJWmvDyNJSOO5yqYwDpTV6crEPxaNfdMo0EBDwP3V++oSJTuf6ejLLIi6PcnBppJv/m2VpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fYJ4tDZPy7y5zf+0cJyZ+4IZankQAXIi/098J4yBmig=;
 b=RAzb1rd/Tt0g5yNmw4JdGdNvRVqvm/BZWmVMcC4o0aExQTmglKtL406tQUOYuTeXIy8RGngsEXTEF5xUVNsY4eARfQMFbc0frC+xOopXrMpO8tVSdRwvSt3vmntZtYSThRBpbg1rZtu25W4CiUXk/EMD44xMrBrf4dVksTjQXj9saxzaXZZ1vZ2fnUnPJqzfwBP1k1PS9J93Y8LjS8vVdmEajMbQUvBouzjbX+VNcqH4UqvT7o3vvIFk9jF0SRfvFV2Ua1Vt3LMLqBs0e3WmFnvxlqA8mEpl76gkmJ5ISuMB3vcpzuYJGNUvby5YkjBA6s9H2sGeB8QGy1io50aE7w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fYJ4tDZPy7y5zf+0cJyZ+4IZankQAXIi/098J4yBmig=;
 b=e1xGavS1KQKaEdMUcZnNzoaqAyc9t+w7MVM6Sa4GCF1FiNcy8RsrfFbSyCEmBOH00oytDKBrVf8N6DI55F1XM6V3dFieT7z7+MIisNaIb3td9OqNOXlAar9Dcivs5i14sAlG4Xappgk4MalkN1iCEcpqLoDjkFOyG3tq53MX8EE=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, George
 Dunlap <George.Dunlap@citrix.com>, "Tim (Xen.org)" <tim@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 3/8] x86/paging: move update_paging_modes() hook
Thread-Topic: [PATCH 3/8] x86/paging: move update_paging_modes() hook
Thread-Index: AQHZFT/G4QjmyfM5nk+WhnK7qSG8yq54nRIAgADvdgCAFsfagA==
Date: Thu, 5 Jan 2023 19:53:56 +0000
Message-ID: <42d0630e-36d6-f5d2-839f-ef5811714807@citrix.com>
References: <f7d2c856-bf75-503b-ad96-02d25c63a2f6@suse.com>
 <bee2d51f-534e-f6e2-6385-70f8597eac1e@suse.com>
 <de5343ab-9083-360f-3d7b-8bf84a70732d@citrix.com>
 <96b5613f-144d-29ea-9e17-515509c16300@suse.com>
In-Reply-To: <96b5613f-144d-29ea-9e17-515509c16300@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BL1PR03MB5957:EE_
x-ms-office365-filtering-correlation-id: a406e9f9-10d5-4905-225c-08daef569465
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <196236DCF8F0874888BA9EACB7F358AA@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a406e9f9-10d5-4905-225c-08daef569465
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 19:53:56.1759
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: oSYX5YcHIZl4aTmyI//Z2Z4IuAUSfQOO3OvyuWQsy10jMx0GAD3fbGIIzoPCrRY0MN6T6ZtdUkpv90lNMeFF/38ckr3Fl08YnOUrWKZdbVM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB5957

On 22/12/2022 8:00 am, Jan Beulich wrote:
> On 21.12.2022 18:43, Andrew Cooper wrote:
>> On 21/12/2022 1:25 pm, Jan Beulich wrote:
>>> The hook isn't mode dependent, hence it's misplaced in struct
>>> paging_mode. (Or alternatively I see no reason why the alloc_page() and
>>> free_page() hooks don't also live there.) Move it to struct
>>> paging_domain.
>>>
>>> While there rename the hook and HAP's as well as shadow's hook functions
>>> to use singular; I never understood why plural was used. (Renaming in
>>> particular the wrapper would be touching quite a lot of other code.)
>> There are always two modes; Xen's, and the guest's.
>>
>> This was probably more visible back in the 32-bit days, but remnants of
>> it are still around with the fact that struct vcpu needs to be below the
>> 4G boundary for the PDPTRs for when the guest isn't in Long Mode.
>>
>> HAP also hides it fairly well given the uniformity of EPT/NPT (and
>> always 4 levels in a 64-bit Xen), but I suspect it will become more
>> visible again when we start supporting LA57.
> So does this boil down to a request to undo the rename? Or undo it just
> for shadow code (as the HAP function really does only one thing)? As to
> LA57, I'm not convinced it'll become more visible again then, but of
> course without actually doing that work it's all hand-waving anyway.

With LA57, HAP really does become 2 things.  Using a 5-level walk at the
HAP level is a prerequisite for being able to VMEntry with CR4.LA57 set,
on both Intel and AMD hardware IIRC.

Then, for VMs which don't elect to enable LA57, we will be in a 4-on-5
(to borrow the shadow terminology) situation.

The comment by paging_update_paging_modes() is fairly clear about this
operation being strictly related to guest state, which further
demonstrates that hap version playing with the monitor table is wrong.

>
>>> --- a/xen/arch/x86/mm/shadow/none.c
>>> +++ b/xen/arch/x86/mm/shadow/none.c
>>> @@ -27,6 +32,9 @@ int shadow_domain_init(struct domain *d)
>>>      };
>>>  
>>>      paging_log_dirty_init(d, &sh_none_ops);
>>> +
>>> +    d->arch.paging.update_paging_mode = _update_paging_mode;
>>> +
>>>      return is_hvm_domain(d) ? -EOPNOTSUPP : 0;
>> I know you haven't changed the logic here, but this hook looks broken.
>> Why do we fail it right at the end for HVM domains?
> It's been a long time, but I guess my thinking back then was that it's
> better to put in place pointers which other code may rely on being non-
> NULL.

Any chance we could gain a /* set up pointers for safety, then fail */
then ?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 19:58:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 19:58:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472136.732272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDWNp-0000bf-Um; Thu, 05 Jan 2023 19:58:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472136.732272; Thu, 05 Jan 2023 19:58:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDWNp-0000bY-S4; Thu, 05 Jan 2023 19:58:37 +0000
Received: by outflank-mailman (input) for mailman id 472136;
 Thu, 05 Jan 2023 19:58:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDWNp-0000bQ-7V
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 19:58:37 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5570c304-8d33-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 20:58:35 +0100 (CET)
Received: from mail-bn8nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 14:58:31 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ2PR03MB7111.namprd03.prod.outlook.com (2603:10b6:a03:4f4::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 19:58:29 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 19:58:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5570c304-8d33-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672948715;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=A5DcOx1HurqUvgeh7+k8zn9b8IirzaV/OriLckeh4Hk=;
  b=Mlp/CeQrAiSbVJT8gjMWCRrFIGXZueFuTZwCws12wQ2Hunr6cfIhiKpm
   6Qy7pgpOmuxK9/cyNcgIpAYBsEKRVB4DEy6oPDjSor53+R649M76K5Ihw
   xayiZLCk95it5/2JCWg9QlffPXRYiRpYdSZpJUCefxHF6Jd/JQi/LhM9m
   A=;
X-IronPort-RemoteIP: 104.47.55.175
X-IronPort-MID: 91373675
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="91373675"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T2YyvaLvZvrl3ZruLCpIl93JC4EYd06NrS8HhlF3EVWJqTaO+e3UEWkvG1D0mSHHF5k/3XhwFIUx+E3Yjo9Q6IiJgUldKFyHbr5X23zennlAHuTY2N/th5CdbpqnQBP0+XIQd/1NRNpTWzVpGYmUcJg8pWBIFXroMb6EeoSg2G9X1WNsrXw36yYYtdWljNVtLmayRsiHp+4n17ue2HJ3X52G1LHzDEhsn+M2G+mfe5p/isJeCbNmFIEx+ke2iy+znN0tI2Zt6L7CtIzTPoWmbjI4GURhcMbqAP3aKmRj/xIrPLuQfEQoJUhjXNLTVnEfEJdoNObwLFnIjzXFUyblTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A5DcOx1HurqUvgeh7+k8zn9b8IirzaV/OriLckeh4Hk=;
 b=GKIcsnA0LpeKE2C2/NOUsnsrGmM9pqGYbrbljVtWTXmLsO5fIpF1uD7zLYYSkZF2W6JCqMHXeAehQXGrcRGeaCNYTlWyIP+czz+v4/K3JrbhP8I2zqK0Zc7ocJ7R5EiyLeJ+XUyuTlTRddfRbfgKhhgBDsEXOJBwjXvHm+KyDM3/Imq0Zs488JJjn3Xqe+mbZy3/IH51jV4bhFkb1rf4COMvsMuRLJvNhGopWnb4o5Dbx8JqyOfNr9MKcQcZBKNEzMgsYeBw9+DT2PaGqeq748fwXO9b6T+tuToeJ6lW0xQB0lfy47nToDD9hSxTHVJ464GPQ+3/Ujo6oKnkohuIug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A5DcOx1HurqUvgeh7+k8zn9b8IirzaV/OriLckeh4Hk=;
 b=ioebbGTWgv9qbVNkiN4x/IHPYF9YCDzyka8iIBxFYTsNOqJA57zmoTRfMpsqI1jf2lx3TJwRrOIAxCn8KWHZwe+PfAhVwSPvxe2TPHAakJmDNRaHPOtys1nZ6j+YaZPmRXYoMSZUKwsoo56BWmosOifKjkkiYvfJ1LsALsMcjZ4=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Demi Marie Obenour
	<demi@invisiblethingslab.com>
CC: =?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>, Roger Pau Monne <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, "Tim (Xen.org)" <tim@xen.org>, George Dunlap
	<George.Dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6 3/5] x86/mm: make code robust to future PAT changes
Thread-Topic: [PATCH v6 3/5] x86/mm: make code robust to future PAT changes
Thread-Index: AQHZFlVFWAiHUIB/Aka1sujnPXJ1rq6P77GAgABjzAA=
Date: Thu, 5 Jan 2023 19:58:29 +0000
Message-ID: <52df603e-624c-56ae-dd5b-c732f3ca5fe6@citrix.com>
References: <cover.1671744225.git.demi@invisiblethingslab.com>
 <00505454afa89b759122008852502a5ef7b1b1ee.1671744225.git.demi@invisiblethingslab.com>
 <ad323ad9-28f8-9eac-f6f5-f6b6373e2272@suse.com>
In-Reply-To: <ad323ad9-28f8-9eac-f6f5-f6b6373e2272@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SJ2PR03MB7111:EE_
x-ms-office365-filtering-correlation-id: 90df5c1e-ff22-4256-8e66-08daef573707
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <00F1820B124F124B8B6C10D840CE692E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 90df5c1e-ff22-4256-8e66-08daef573707
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 19:58:29.0469
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: TPIWb6wAnITXz6UMltIEA89Wd7MpGjecZzRf+PkCE3o9jzHPPm/zMzudlKNNE8w3fXdjymY3CMZnyVtu51aW/v55FgebrpIlsGeG1YyIu8s=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR03MB7111

On 05/01/2023 2:01 pm, Jan Beulich wrote:
> On 22.12.2022 23:31, Demi Marie Obenour wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -6352,6 +6352,11 @@ unsigned long get_upper_mfn_bound(void)
>>      return min(max_mfn, 1UL << (paddr_bits - PAGE_SHIFT)) - 1;
>>  }
>>  
>> +
>> +/*
> Nit: Please avoid introducing double blank lines.
>
>> + * A bunch of static assertions to check that the XEN_MSR_PAT is valid
>> + * and consistent with the _PAGE_* macros, and that _PAGE_WB is zero.
> This comment is too specific for a function of ...
>
>> + */
>>  static void __init __maybe_unused build_assertions(void)
> ... this name, in this file.

IMO, you really don't need to comment build_assertions().  It's a
pattern we use elsewhere, and the BUILD_BUG_ON()'s are individually
commented.

I'd just drop this hunk entirely.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 20:28:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 20:28:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472143.732283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDWqq-000431-7R; Thu, 05 Jan 2023 20:28:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472143.732283; Thu, 05 Jan 2023 20:28:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDWqq-00042u-4F; Thu, 05 Jan 2023 20:28:36 +0000
Received: by outflank-mailman (input) for mailman id 472143;
 Thu, 05 Jan 2023 20:28:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDWqp-00042k-5p
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 20:28:35 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 840dda3e-8d37-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 21:28:31 +0100 (CET)
Received: from mail-mw2nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 15:28:28 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5017.namprd03.prod.outlook.com (2603:10b6:5:1ee::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 20:28:26 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 20:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 840dda3e-8d37-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672950511;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=3Brf3BTM2yp64T+Ybhf1eDYGhFqzzLxM11WSv72X/XE=;
  b=C29z4icAFt4RTKpOy3XWziCRFHb4lvUMyg5Nnmh0ggJQCAlLSaPGupvi
   rzy508nhpL9lG1heYFS0KC5LNEhzhtmt9p9Xz42dnkUsI6mnccToNTgGZ
   JZOJGfahQMPzFZhZeME9/oaTMdVF/cjZP4LK0JjaNB5+kwuKYjRztSf5Q
   8=;
X-IronPort-RemoteIP: 104.47.55.108
X-IronPort-MID: 91377134
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="91377134"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LD/UWG8VzebcNoOk7SPqsCyl+nTffsHe8QpB0Vuq2oTcY/Jq749FfNPvAYGio2m9sdwfKBXBOKi71WmQcACKrougWzjdpo+I8WJdzKayjQGGS9gS+Cn2B6UsQevoyTNVKp3Yn9bEQi5MduNMMaqdX0AreZYW7XXk9LeUpQG3qwSXHjz4LQslK+eeyZHfBl/ZMDl7ZzCQkprG1YDEgpYvclsiPXK2bvBozSeD9Ld4CmiHuK6S7GQQZHlaYWh2NLl9fREL3lmac8JQ0e0BifY1nQ5VSBVKj1RzIqe6D2ZMdlhDteZzT8jCa7B8KmGN5uAXVSXmoSyTQlJ2DU5vsFSzMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3Brf3BTM2yp64T+Ybhf1eDYGhFqzzLxM11WSv72X/XE=;
 b=KNJFVuf0RS0te31QSxcj6gXnmvAEAR7vBY/UDYkAwT4yI4Ntib8i4aCtiRB+0+TBGzzS2w9aHSJwC2nuLZcQ59qI9oakcJcUYTTA1WH5dDMU5ZeUSykp9JSasvh+uYiTaFN8+ccKSbn30DWuQuSpvMvtSIYso4AGsWBS/fIbss2jtmN3K8bRMv4klPyUhywwctqSDLO8rhDPa/Xcyw2TZnlduifW6kJJk734f8HSzYHf+0APDppEIwqrLhHiHR9Gi4SWFA/PPPvTBf8rqFQUkNMUkvBgZUmdAxEX/wrZkWDwzABJcaRA1Ir4L78bY6dg5WD4jY3f1HbRMuM1/CJmMQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3Brf3BTM2yp64T+Ybhf1eDYGhFqzzLxM11WSv72X/XE=;
 b=eH1S7z7xP+vxHazu+YSZBp5p87svwqSJUHwu8IdRT0hxOo2WRH4AXNpMAYx8YklI1xYIwrY78TVeWWT7o5sTniXp610refqRxIEKGuoxlLyNCwubv4dws3wTzwVwwdkOkBCcyuO2GHhnJNevfB7KkURHxj35X7n1l9O7XcQJN7E=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: =?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>, Jan Beulich <jbeulich@suse.com>, Roger Pau
 Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, "Tim (Xen.org)"
	<tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v6 4/5] x86/mm: Reject invalid cacheability in PV guests
 by default
Thread-Topic: [PATCH v6 4/5] x86/mm: Reject invalid cacheability in PV guests
 by default
Thread-Index: AQHZFlU8BnV2WctoSkOEZs7zNTRXG66QW9uA
Date: Thu, 5 Jan 2023 20:28:26 +0000
Message-ID: <c6223295-c4f9-8fa8-7635-80d48094190f@citrix.com>
References: <cover.1671744225.git.demi@invisiblethingslab.com>
 <2236399f561d348937f2ff7777fe47ad4236dbda.1671744225.git.demi@invisiblethingslab.com>
In-Reply-To:
 <2236399f561d348937f2ff7777fe47ad4236dbda.1671744225.git.demi@invisiblethingslab.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM6PR03MB5017:EE_
x-ms-office365-filtering-correlation-id: dd9dce96-99f8-4f8d-5f19-08daef5b6626
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <9289119F77DBB54D877676899D7893FB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dd9dce96-99f8-4f8d-5f19-08daef5b6626
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 20:28:26.0574
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 0BhOetJbSr8p7P6Kv2FiLPk+Uzn/rS+SKXJmri1WLrROMwoueQIYVgK/nZI8ztKLKJzL3Ekj1w8wXZdMVav22ByMA7Vm8bCBSLFOrDjBSjE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5017

On 22/12/2022 10:31 pm, Demi Marie Obenour wrote:
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index 424b12cfb27d6ade2ec63eacb8afe5df82465451..0230a7bc17cbd4362a42ea64cea695f31f5e0f86 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1417,6 +1417,17 @@ detection of systems known to misbehave upon accesses to that port.
>  ### idle_latency_factor (x86)
>  > `= <integer>`
>  
> +### invalid-cacheability (x86)
> +> `= allow | deny | trap`
> +
> +> Default: `deny` in release builds, otherwise `trap`
> +
> +Specify what happens when a PV guest tries to use one of the reserved entries in
> +the PAT.  `deny` causes the attempt to be rejected with -EINVAL, `allow` allows
> +the attempt, and `trap` causes a general protection fault to be raised.
> +Currently, the reserved entries are marked as uncacheable in Xen's PAT, but this
> +will change if new memory types are added, so guests must not rely on it.
> +

This wants to be scoped under "pv", and not a top-level option.  Current
parsing is at the top of xen/arch/x86/pv/domain.c

How about `pv={no-}oob-pat`, and parse it into the opt_pv_oob_pat boolean ?

There really is no need to distinguish between deny and trap.  IMO,
injecting #GP unilaterally is fine (to a guest expecting the hypercall
to succeed, -EINVAL vs #GP makes no difference, but #GP is far more
useful to a human trying to debug issues here), but I could be talked
into putting that behind a CONFIG_DEBUG if others feel strongly.

> @@ -1343,7 +1374,34 @@ static int promote_l1_table(struct page_info *page)
>          }
>          else
>          {
> -            switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
> +            l1_pgentry_t l1e = pl1e[i];
> +
> +            if ( invalid_cacheability != INVALID_CACHEABILITY_ALLOW )
> +            {
> +                switch ( l1e.l1 & PAGE_CACHE_ATTRS )
> +                {
> +                case _PAGE_WB:
> +                case _PAGE_UC:
> +                case _PAGE_UCM:
> +                case _PAGE_WC:
> +                case _PAGE_WT:
> +                case _PAGE_WP:
> +                    break;
> +                default:
> +                    /*
> +                     * If we get here, a PV guest tried to use one of the
> +                     * reserved values in Xen's PAT.  This indicates a bug
> +                     * in the guest.  If requested by the user, inject #GP
> +                     * to cause the guest to log a stack trace.
> +                     */
> +                    if ( invalid_cacheability == INVALID_CACHEABILITY_TRAP )
> +                        pv_inject_hw_exception(TRAP_gp_fault, 0);
> +                    ret = -EINVAL;
> +                    goto fail;
> +                }
> +            }

This will catch cases where the guest writes a full pagetable, then
promotes it, but won't catch cases where the guest modifies an
individual entry with mmu_update/etc.

The logic needs to be in get_page_from_l1e() to get applied uniformly to
all PTE modifications.

Right now, the l1_disallow_mask() check near the start hides the "can
you use a nonzero cacheattr" check.  If I ever get around to cleaning up
my post-XSA-402 series, I have a load of modifications to this.

But this could be something like this:

if ( opt_pv_oob_pat && (l1f & PAGE_CACHE_ATTRS) > _PAGE_WP )
{
    // #GP
    return -EINVAL;
}

fairly early on.

It occurs to me that this check is only applicable when we're using the
Xen ABI PAT, and seeing as we've talked about possibly making patch 5 a
Kconfig option, that may want accounting here.  (Perhaps simply making
opt_pv_oob_pat be false in a !XEN_PAT build.)

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 20:49:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 20:49:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472152.732294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDXAU-0006VT-Tr; Thu, 05 Jan 2023 20:48:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472152.732294; Thu, 05 Jan 2023 20:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDXAU-0006VM-RG; Thu, 05 Jan 2023 20:48:54 +0000
Received: by outflank-mailman (input) for mailman id 472152;
 Thu, 05 Jan 2023 20:48:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDXAT-0006VG-A6
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 20:48:53 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5b9bb47e-8d3a-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 21:48:51 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
	<marmarek@invisiblethingslab.com>, Demi Marie Obenour
	<demi@invisiblethingslab.com>
Subject: [PATCH] x86/S3: Restore Xen's MSR_PAT value on S3 resume
Date: Thu, 5 Jan 2023 20:48:39 +0000
Message-ID: <20230105204839.3676-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

There are two paths in the trampoline, and Xen's PAT needs setting up in both,
not just the boot path.

Fixes: 4304ff420e51 ("x86/S3: Drop {save,restore}_rest_processor_state() completely")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
CC: Demi Marie Obenour <demi@invisiblethingslab.com>

Entirely untested, but this is a fairly embarrassing mistake in hindsight.
---
 xen/arch/x86/boot/wakeup.S | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/x86/boot/wakeup.S b/xen/arch/x86/boot/wakeup.S
index c17d613b61ff..08447e193496 100644
--- a/xen/arch/x86/boot/wakeup.S
+++ b/xen/arch/x86/boot/wakeup.S
@@ -130,6 +130,11 @@ wakeup_32:
         and     %edi, %edx
         wrmsr
 1:
+        /* Set up PAT before enabling paging. */
+        mov     $XEN_MSR_PAT & 0xffffffff, %eax
+        mov     $XEN_MSR_PAT >> 32, %edx
+        mov     $MSR_IA32_CR_PAT, %ecx
+        wrmsr
 
         /* Set up EFER (Extended Feature Enable Register). */
         movl    $MSR_EFER,%ecx
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 05 21:20:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 21:20:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472186.732322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDXf4-0003FH-HE; Thu, 05 Jan 2023 21:20:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472186.732322; Thu, 05 Jan 2023 21:20:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDXf4-0003FA-E2; Thu, 05 Jan 2023 21:20:30 +0000
Received: by outflank-mailman (input) for mailman id 472186;
 Thu, 05 Jan 2023 21:20:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cz8e=5C=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDXf2-0003F4-Ju
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 21:20:29 +0000
Received: from sonic314-20.consmr.mail.gq1.yahoo.com
 (sonic314-20.consmr.mail.gq1.yahoo.com [98.137.69.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c4b19715-8d3e-11ed-91b6-6bf2151ebd3b;
 Thu, 05 Jan 2023 22:20:26 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Thu, 5 Jan 2023 21:20:23 +0000
Received: by hermes--production-bf1-5458f64d4-qthpt (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 3ae8c14202cbd65911d34094373944e4; 
 Thu, 05 Jan 2023 21:20:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <32193041-ff39-fdc5-01b5-b6e390ed997e@aol.com>
Date: Thu, 5 Jan 2023 16:20:15 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
From: Chuck Zmudzinski <brchuckz@aol.com>
To: Alex Williamson <alex.williamson@redhat.com>, Paul Durrant
 <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <20230102124605-mutt-send-email-mst@kernel.org>
 <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
 <20230103081456.1d676b8e.alex.williamson@redhat.com>
 <b2ce641b-73ad-f3a2-dc9d-1ccfdd1ee8d8@aol.com>
Content-Language: en-US
In-Reply-To: <b2ce641b-73ad-f3a2-dc9d-1ccfdd1ee8d8@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4521

On 1/4/23 3:47 PM, Chuck Zmudzinski wrote:
> On 1/3/23 10:14 AM, Alex Williamson wrote:
> 
>> 
>> It's necessary to configure the assigned IGD at slot 2 to make it
>> functional, yes, but I don't really understand this notion of
>> "reserving" slot 2.  If something occupies address 00:02.0 in the
>> config, it's the user's or management tool's responsibility to move it
>> to make this configuration functional.  Why does QEMU need to play a
>> part in reserving this bus address.  IGD devices are not generally
>> hot-pluggable either, so it doesn't seem we need to reserve an address
>> in case an IGD device is added dynamically later.
> 
> The capability to reserve a bus address for a quirky device need not
> be limited to the case of hotplugged or dynamically added devices. The
> igd is a quirky device, and its presence in an emulated system like
> qemu requires special handling. The slot_reserved_mask member of PCIBus
> is also well-suited to the case of a quirky device like the Intel igd that
> needs to be at slot 2. Just because it is not dynamically added later
> does not change the fact that it needs special handling at its initial
> configuration when the guest is being created.
> 
>>  
> 
> Here's the problem that answers Michael's question why this patch is
> specific to xen:
> 
> ---snip---
> #ifdef CONFIG_XEN
> 
> ...
> 
> static void pc_xen_hvm_init(MachineState *machine)
> {
>     PCMachineState *pcms = PC_MACHINE(machine);
> 
>     if (!xen_enabled()) {
>         error_report("xenfv machine requires the xen accelerator");
>         exit(1);
>     }
> 
>     pc_xen_hvm_init_pci(machine);
>     pci_create_simple(pcms->bus, -1, "xen-platform");
> }
> #endif
> ---snip---
> 
> This code is from hw/i386/pc_piix.c. Note the call to
> pci_create_simple to create the xen platform pci device,
> which has -1 as the second argument. That -1 tells
> pci_create_simple to autoconfigure the pci bdf address.
> 
> It is *hard-coded* that way. That means no toolstack or
> management tool can change it. And what is hard-coded here
> is that the xen platform device will occupy slot 2, preventing
> the Intel igd or any other device from occupying slot 2.
> 

Actually, today I discovered that it is possible to work around
this issue with libxl, which appears to hard-code the xen
platform device to slot 2.

The code referenced here, which effectively hard-codes the xen
platform device to slot 2, is only executed with the
'-machine xenfv' qemu option, the default that libxl uses for
hvm guests. This behavior can be changed by setting
xen_platform_pci = '0' in the xl guest configuration file, in
which case libxl configures qemu with the '-machine pc,accel=xen'
option instead. One can then add the xen platform device at a
different slot by adding, for example,

device_model_args = ['-device','xen-platform,addr=03']

to the xl guest configuration file, which adds the device at
slot 3 instead of slot 2.

This approach is an ugly workaround with other side effects. By
using '-machine pc,accel=xen' instead of xenfv, the machine type
is not exactly the same: the xenfv machine type is based on a much
older version of qemu's i440fx emulation (qemu version 3.1 IIRC),
so there could be unintended consequences. I just tested this
workaround and it does seem to work, but I have not been using
this approach for very long.

Also, this approach requires setting type=vif in the vif network
setting of the xl guest configuration, to suppress the creation of
the qemu emulated network device that would otherwise grab slot 2
when created by the ordinary libxl network configuration settings.
For guests that do not have the xen pv network drivers installed,
it also requires manually building the qemu command line with
device_model_args so that an emulated qemu network device can be
added at a different slot, and probably also manually configuring
the correct networking scripts outside of the normal xen networking
scripts, because the ordinary xl guest configuration options have
no setting for assigning an emulated qemu network device to a
specific slot.

So, it is possible to configure the guest to have the Intel igd at
slot 2 without patching either libxl or qemu, but it is a real PITA
with some possible unintended side effects, which is why I think
patching qemu to reserve slot 2 is a much better solution to the
problem.

Kind regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 22:17:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 22:17:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472196.732333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDYYA-0000Fe-QL; Thu, 05 Jan 2023 22:17:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472196.732333; Thu, 05 Jan 2023 22:17:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDYYA-0000FX-N0; Thu, 05 Jan 2023 22:17:26 +0000
Received: by outflank-mailman (input) for mailman id 472196;
 Thu, 05 Jan 2023 22:17:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDYY8-0000FR-Mt
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 22:17:24 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b862fe73-8d46-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 23:17:21 +0100 (CET)
Received: from mail-dm6nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 17:17:08 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB5752.namprd03.prod.outlook.com (2603:10b6:510:36::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 22:17:04 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 22:17:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Thread-Topic: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Thread-Index: AQHZH69XYhd2rMGoWE+asSBKOPPG4q6OdxyAgAA2ioCAAMm+gIAA8B2A
Date: Thu, 5 Jan 2023 22:17:03 +0000
Message-ID: <9ce20298-5870-aa1b-ee5e-e16a623beadb@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-4-andrew.cooper3@citrix.com>
 <540a449d-f76d-eb16-4f98-c4fb3564ce98@suse.com>
 <7dd00ce3-a95b-2477-128c-de36e75c4a34@citrix.com>
 <ebaf70e5-e1ba-d72d-84f2-5acb7e38a6bc@suse.com>
In-Reply-To: <ebaf70e5-e1ba-d72d-84f2-5acb7e38a6bc@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <2F4E482822F8FD4690A3C6E2B4755AFB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 05/01/2023 7:57 am, Jan Beulich wrote:
> On 04.01.2023 20:55, Andrew Cooper wrote:
>> On 04/01/2023 4:40 pm, Jan Beulich wrote:
>>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>>> A split in virtual address space is only applicable for x86 PV guests.
>>>> Furthermore, the information returned for x86 64bit PV guests is wrong.
>>>>
>>>> Explain the problem in version.h, stating the other information that PV guests
>>>> need to know.
>>>>
>>>> For 64bit PV guests, and all non-x86-PV guests, return 0, which is strictly
>>>> less wrong than the values currently returned.
>>> I disagree for the 64-bit part of this. Seeing Linux'es exposure of the
>>> value in sysfs I even wonder whether we can change this like you do for
>>> HVM. Who knows what is being inferred from the value, and by whom.
>> Linux's sysfs ABI isn't relevant to us here.  The sysfs ABI says it
>> reports what the hypervisor presents, not that it will be a nonzero number.
> It effectively reports the hypervisor (virtual) base address there. How
> can we not care if something (kexec would come to mind) would be using
> it for whatever purpose.

What about kexec do you think would care?

The only thing kexec-tools cares about is XENVER_capabilities, but even
that's not actually correct for figuring out whether Xen can do kexec
transitions to ELF64/32.

> And thinking of it, the tool stack has uses,
> too. Assuming you audited them, did you consider removing dead uses in
> a prereq patch (and discuss the effects on live ones in the description)?

There is only one toolstack use I can spot which is non-informational,
and it's broken AFAICT.

`xl dump-core` writes out a header which includes this metadata, but it
takes dom0's value, not domU's.  (Not that this is relevant AFAICT,
because the M2P is handled specially anyway.)


Most XENVER_* information is global (and by this, I mean invariant and
non-caller dependent, outside of livepatching.)

XENVER_guest_handle is caller-variant, but the toolstack has proper
interfaces to get/set this value.

XENVER_platform_parameters (and XENVER_get_features for that matter) are
caller-variant, and the toolstack has no way to get domU's view of this
data.


Every instance (well - this is the only interesting one) of the use of
XENVER_platform_parameters I can find is broken, even in the Xen code.

>>>> --- a/xen/include/public/version.h
>>>> +++ b/xen/include/public/version.h
>>>> @@ -42,6 +42,26 @@ typedef char xen_capabilities_info_t[1024];
>>>>  typedef char xen_changeset_info_t[64];
>>>>  #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
>>>>
>>>> +/*
>>>> + * This API is problematic.
>>>> + *
>>>> + * It is only applicable to guests which share pagetables with Xen (x86 PV
>>>> + * guests), and is supposed to identify the virtual address split between
>>>> + * guest kernel and Xen.
>>>> + *
>>>> + * For 32bit PV guests, it mostly does this, but the caller needs to know that
>>>> + * Xen lives between the split and 4G.
>>>> + *
>>>> + * For 64bit PV guests, Xen lives at the bottom of the upper canonical range.
>>>> + * This previously returned the start of the upper canonical range (which is
>>>> + * the userspace/Xen split), not the Xen/kernel split (which is 8TB further
>>>> + * on).  This now returns 0 because the old number wasn't correct, and
>>>> + * changing it to anything else would be even worse.
>>> Whether the guest runs user mode code in the low or high half (or in yet
>>> another way of splitting) isn't really dictated by the PV ABI, is it?
>> No, but given a choice of reporting the thing which is an architectural
>> boundary, or the one which is the actual split between the two adjacent
>> ranges, reporting the architectural boundary is clearly the unhelpful thing.
> Hmm. To properly parallel the 32-bit variant, a [start,end] range would need
> exposing for 64-bit, rather than exposing nothing.

The 32-bit version is a start/end pair, but with end being implicit at
the 4G architectural boundary.

If we were doing 64-bit from scratch, then reporting end would have been
sensible, because for 64-bit, start is the architectural boundary which
can be implicit.

But there is no such thing as a 64bit PV guest with any (useful) idea of
a variable split, because this number has been junk for the entire
lifetime of 64bit PV guests.  In particular, ...

>>>> + * For all guest types using hardware virt extentions, Xen is not mapped into
>>>> + * the guest kernel virtual address space.  This now return 0, where it
>>>> + * previously returned unrelated data.
>>>> + */
>>>>  #define XENVER_platform_parameters 5
>>>>  struct xen_platform_parameters {
>>>>      xen_ulong_t virt_start;
>>> ... the field name tells me that all that is being conveyed is the virtual
>>> address of where the hypervisor area starts.
>> IMO, it doesn't matter what the name of the field is.  It dates from the
>> days when 32bit PV was the only type of guest.
>>
>> 32bit PV guests really do have a variable split, so the guest kernel
>> really does need to get this value from Xen.
>>
>> The split for 64bit PV guests is compile-time constant, hence why 64bit
>> PV kernels don't care.
> ... once we get to run Xen in 5-level mode, 4-level PV guests could also
> gain a variable split: Like for 32-bit guests now, only the r/o M2P would
> need to live in that area, and this may well occupy less than the full
> range presently reserved for the hypervisor.

... you can't do this, because it only works for guests which have
chosen to find the M2P using XENMEM_machphys_mapping (e.g. Linux), and
doesn't for e.g. MiniOS which does:

#define machine_to_phys_mapping ((unsigned long *)HYPERVISOR_VIRT_START)

in fact, looking at this, MiniOS is also broken as a 32bit PV dom0,
because it hardcodes __MACH2PHYS_VIRT_START in the case where the split
really is variable.


It's only PV guests which are LA57 aware which can possibly benefit from
a variable position M2P, and only because that will be a new ELFNOTE
protocol.

>
>> For compat HVM, it happens to pick up the -1 from:
>>
>> #ifdef CONFIG_PV32
>>     HYPERVISOR_COMPAT_VIRT_START(d) =
>>         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
>> #endif
>>
>> in arch_domain_create(), whereas for non-compat HVM, it gets a number in
>> an address space it has no connection to in the slightest.  ARM guests
>> end up getting XEN_VIRT_START (== 2M) handed back, but this absolutely
>> an internal detail that guests have no business knowing.
> Well, okay, this looks to be good enough an argument to make the adjustment
> you propose for !PV guests.

Right, HVM (on all architectures) is very cut and dry.

But it feels wrong to not address the PV64 issue at the same time
because it is similar level of broken, despite there being (in theory) a
legitimate need for a PV guest kernel to know it.

~Andrew
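[Archive note: the "8TB further on" figure discussed in this message can be sanity-checked with a few lines of arithmetic. The constants below are the standard x86-64 4-level paging values, stated from general knowledge rather than quoted from the thread.]

```python
# Start of the upper canonical half with 4-level paging: this is the
# userspace/Xen split that virt_start used to report for 64bit PV guests.
UPPER_CANONICAL_START = 0xFFFF_8000_0000_0000

# The 64bit PV reservation for Xen is 8TiB wide, so the actual
# Xen/kernel split sits "8TB further on" from the canonical boundary.
XEN_RESERVATION = 8 << 40  # 8 TiB

xen_kernel_split = UPPER_CANONICAL_START + XEN_RESERVATION
print(hex(xen_kernel_split))  # 0xffff880000000000
```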


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 22:28:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 22:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472204.732344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDYik-0001pO-Te; Thu, 05 Jan 2023 22:28:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472204.732344; Thu, 05 Jan 2023 22:28:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDYik-0001pH-Qw; Thu, 05 Jan 2023 22:28:22 +0000
Received: by outflank-mailman (input) for mailman id 472204;
 Thu, 05 Jan 2023 22:28:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDYii-0001p8-Qx
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 22:28:20 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3fce2a92-8d48-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 23:28:18 +0100 (CET)
Received: from mail-bn8nam12lp2172.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 17:28:15 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5569.namprd03.prod.outlook.com (2603:10b6:208:287::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 22:28:11 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 22:28:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3fce2a92-8d48-11ed-b8d0-410ff93cb8f0
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
Thread-Topic: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_*
 subops
Thread-Index: AQHZH69eYU4qCMuPJUaHOPPUV+yM+a6OfdiAgAAZEwCAAOVUAIAA7l0A
Date: Thu, 5 Jan 2023 22:28:10 +0000
Message-ID: <63789ce5-5dff-c981-4127-d1ef3227595e@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-5-andrew.cooper3@citrix.com>
 <a0cb9c83-dc4d-5f03-0f65-3756fadfde0b@suse.com>
 <9c9cedd5-cca7-95d4-00bb-f34a56de2695@citrix.com>
 <f90111d0-b94e-8127-3b13-fbe3558d53f7@suse.com>
In-Reply-To: <f90111d0-b94e-8127-3b13-fbe3558d53f7@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <3E9623949BD3074388EC03CC50EF2EE8@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b2332e90-9cc2-4dfa-6863-08daef6c20b8
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 22:28:11.0184
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: wD4otfobODsK4ugo1DAob0VhjshZe2/+E4TFYI6mYy6WyIr0ND+IQ3ERGVmYSE9VMyCmyHut1tE39WgQ3FT50F5KyEbVB0gPP9fijUQuArU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5569

On 05/01/2023 8:15 am, Jan Beulich wrote:
> On 04.01.2023 19:34, Andrew Cooper wrote:
>> On 04/01/2023 5:04 pm, Jan Beulich wrote:
>>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>>> API/ABI wise, XENVER_build_id could be merged into xenver_varstring_op(), but
>>>> the internal infrastructure is awkward.
>>> I guess build-id also doesn't fit the rest because of not returning strings,
>>> but indeed an array of bytes. You also couldn't use strlen() on the array.
>> The format is unspecified, but it is a base64 encoded ASCII string (not
>> NUL terminated).
> Where are you taking this base64 encoding from? I see the build ID being pure
> binary data, which could easily have an embedded nul.

Oh, so it is.

I'd failed to spot that libxl formats it automatically on behalf of the
caller.

>>>> +    if ( sz > INT32_MAX )
>>>> +        return -E2BIG; /* Compat guests.  2G ought to be plenty. */
>>> While the comment here and in the public header mention compat guests,
>>> the check is uniform. What's the deal?
>> Well its either this, or a (comat ? INT32_MAX : INT64_MAX) check, along
>> with the ifdefary and predicates required to make that compile.
>>
>> But there's not a CPU today which can actually move 2G of data (which is
>> 4G of L1d bandwidth) without suffering the watchdog (especially as we've
>> just read it once for strlen(), so that's 6G of bandwidth), nor do I
>> expect this to change in the forseeable future.
>>
>> There's some boundary (probably far lower) beyond which we can't use the
>> algorithm here.
>>
>> There wants to be some limit, and I don't feel it is necessary to make
>> it variable on the guest type.
> Sure. My question was merely because of the special mentioning of 32-bit /
> compat guests. I'm fine with the universal limit, and I'd also be fine
> with a lower (universal) bound. All I'm after is that the (to me at least)
> confusing comments be adjusted.

How about 16k then?

>> But overall, I'm not seeing a major objection to this change?  In which
>> case I'll go ahead and do the tools/ cleanup too for v2.
> Well, I'm not entirely convinced of the need for the new subops (we could
> as well introduce build time checks to make sure no truncation will occur
> for the existing ones), but I also won't object to their introduction. At
> least for the command line I can see that we will want (need) to support
> more than 1k worth of a string, considering how many options we have
> accumulated.

I've legitimate customer bugs caused by the cmdline limit, and real test
problems caused by the extraversion limit which I'm unwilling to "fix"
by sticking to the current API/ABI.

~Andrew
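[Archive note: the embedded-NUL point raised in this message is easy to demonstrate. A minimal illustrative sketch; the build-id bytes are invented, and the exact formatting libxl applies is not shown in the thread.]

```python
# An invented build ID: raw binary data with an embedded NUL byte.
build_id = bytes.fromhex("4fe49a00d1c3a2b7")

# strlen()-style handling stops at the first NUL, losing the tail:
assert build_id.index(0) == 3  # only 3 of the 8 bytes precede the NUL

# Formatting the raw bytes (e.g. as hex, as a toolstack might do on
# behalf of the caller) keeps the whole value, NULs included, and
# yields a printable string.
formatted = build_id.hex()
print(formatted)  # 4fe49a00d1c3a2b7
```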


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 22:41:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 22:41:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472211.732355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDYve-00048O-4n; Thu, 05 Jan 2023 22:41:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472211.732355; Thu, 05 Jan 2023 22:41:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDYvd-00048H-Vn; Thu, 05 Jan 2023 22:41:41 +0000
Received: by outflank-mailman (input) for mailman id 472211;
 Thu, 05 Jan 2023 22:41:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDYvd-000483-0k; Thu, 05 Jan 2023 22:41:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDYvc-0007vo-V2; Thu, 05 Jan 2023 22:41:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDYvc-0000DQ-JA; Thu, 05 Jan 2023 22:41:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDYvc-0004sP-IQ; Thu, 05 Jan 2023 22:41:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175593-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175593: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
X-Osstest-Versions-That:
    xen=671f50ffab3329c5497208da89620322b9721a77
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Jan 2023 22:41:40 +0000

flight 175593 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175593/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2
baseline version:
 xen                  671f50ffab3329c5497208da89620322b9721a77

Last test of basis   175589  2023-01-05 16:01:55 Z    0 days
Testing same since   175593  2023-01-05 20:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   671f50ffab..2b21cbbb33  2b21cbbb339fb14414f357a6683b1df74c36fda2 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jan 05 22:56:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Jan 2023 22:56:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472220.732366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDZ9l-0005hS-Bk; Thu, 05 Jan 2023 22:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472220.732366; Thu, 05 Jan 2023 22:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDZ9l-0005hL-8o; Thu, 05 Jan 2023 22:56:17 +0000
Received: by outflank-mailman (input) for mailman id 472220;
 Thu, 05 Jan 2023 22:56:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u+KT=5C=citrix.com=prvs=36272ec6f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDZ9k-0005gy-DR
 for xen-devel@lists.xenproject.org; Thu, 05 Jan 2023 22:56:16 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 257e7de7-8d4c-11ed-b8d0-410ff93cb8f0;
 Thu, 05 Jan 2023 23:56:12 +0100 (CET)
Received: from mail-bn7nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 17:56:09 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6085.namprd03.prod.outlook.com (2603:10b6:208:308::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 5 Jan
 2023 22:56:06 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Thu, 5 Jan 2023
 22:56:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 257e7de7-8d4c-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672959371;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=EScHE4GWpXVuFvg/BK0Zc3NxeCBO/9KiCKg60f4Y4Fs=;
  b=PwqXqPRUrjN0hOSYjQX8b4MUv8MuFxEI8iMRzhltjL/5r9WDd6b7Or/6
   VhE+uTKQBc2I6KcIttJI9et+v7Wd2nQWDRxQne2jSOnI8PmYUqlp0doPj
   SglVvyq4e2dAINd6OmCn8sh8bgz40T3/i+CYoyu8+1/z8oZEEz38qPlou
   o=;
X-IronPort-RemoteIP: 104.47.70.105
X-IronPort-MID: 91392351
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="91392351"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CGrHqgA48/q/GnbuXAUJNU5v4Lr4AHey98FaXcRsoaNR5C85lI/vLV8/y1C+fI73yFbGdwhsf/GC3yVfDQRFr/9bZrIerjzuGw9kzLeWKb+Gqc0ZG29GGuVWXhPD46kyei9ipYWMfzswim8vi9ZMhP08D2gt0ARrl5WQr/S/NwYXZrH7uSNuR3W+KCgQ9uk8pqxXGjfmJehDzK1+glEo/MX/bJ5l9BQZsBSzcx1HCpfqLENnIy42fi58n3S2BnF9y13JZ+4yCn2ev6DiuXuPxbmrbJcWfdXosJ/8aIpYl0jpfmQxXhFBTrgThd4L2koAntzBdqWsWtayyBv786hSfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EScHE4GWpXVuFvg/BK0Zc3NxeCBO/9KiCKg60f4Y4Fs=;
 b=oL1Obf/k1M2j1qROlt5u62Pfofq83gWPzaInltwyHH92kX4/erGjiN+bACrjzWUyFzszth4p5sBbTqf5Tbv8nwR2/kURWH+Kax9skiY8tqgkVq0yFejVkBtJExxjJF2OqL+9v75bcFosAY5qeZpZbveDJrhc/MokBXXv33qfAJaAVCeGSL0PGQOIlV6vFpJPh8dvSFXntlBZDV+ZiPx0J3J7TvIVB8jyjasJanqy7ImJ0MCYtqJ2Are4LNjcs3rCYD4XfUbC6DERZPmMxm1lCq/K/kErHht9xjBRCtHC+9lvJZFyGa6HJ2ql1pILG8TEJonVa/EwCrKHEeaFM3yGKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EScHE4GWpXVuFvg/BK0Zc3NxeCBO/9KiCKg60f4Y4Fs=;
 b=aKnTcDpFQ7ScJlOWezra+VaPRKa2VoXhWPas/h9VWJxRsFDSqeqqfBdKLQOlCkWevLkWBjtIovMRT91z5tV5dGPpaWC+1203gb8hkAyJj6crORgVBsW8YE3jNXeF54eXx2X4IZ/Q/ppnKqjcA7EhJCwAaI5a08evgkY2pZGcOgw=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Sergey Dyasli <sergey.dyasli@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/ucode/AMD: apply the patch early on every logical
 thread
Thread-Topic: [PATCH] x86/ucode/AMD: apply the patch early on every logical
 thread
Thread-Index: AQHZIQhyMB1JmOs75E6HWYrcfqEv3a6Qb7eA
Date: Thu, 5 Jan 2023 22:56:06 +0000
Message-ID: <55cccf9d-e4c9-6dec-ee9d-fec56f521931@citrix.com>
References: <20230105132004.7750-1-sergey.dyasli@citrix.com>
In-Reply-To: <20230105132004.7750-1-sergey.dyasli@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BL1PR03MB6085:EE_
x-ms-office365-filtering-correlation-id: d2315998-d0f7-43fc-ad13-08daef70072c
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <EC394087C5200443B58C720CC23E9CDF@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d2315998-d0f7-43fc-ad13-08daef70072c
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jan 2023 22:56:06.1934
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: tFs4yesNkYEDDl0KywZZpeSR6zPhUVGwG67/ZtiLbBiQ4IErxL4ZIyajX8EfJK5ojbLE/HZfJZ1dWN2lQxdVFn1TB+2jti+f2u+7a/7N1OY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6085

On 05/01/2023 1:20 pm, Sergey Dyasli wrote:
> The original issue has been reported on AMD Bulldozer-based CPUs where
> ucode loading loses the LWP feature bit in order to gain the IBPB bit.
> LWP disabling is per-SMT core modification and needs to happen on each
> sibling SMT thread despite the shared microcode engine. Otherwise,
> logical CPUs will end up with different cpuid capabilities.
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=216211

Bulldozer is CMT, not SMT.  The relevant property is that both CMT's
share a single microcode sequencer.

Xen happens to not be impacted by that specific bug, because we level
the feature masking/override MSRs, and the LWP bit falls out.


It's also important to state that loading on every logical processor is
the recommendation from AMD, after discussing that bug.  Because it
contradicts the currently written advice in the BKDG/PPRs.

> diff --git a/xen/arch/x86/cpu/microcode/private.h b/xen/arch/x86/cpu/microcode/private.h
> index 73b095d5bf..c4c6729f56 100644
> --- a/xen/arch/x86/cpu/microcode/private.h
> +++ b/xen/arch/x86/cpu/microcode/private.h
> @@ -7,6 +7,7 @@ extern bool opt_ucode_allow_same;
>  
>  enum microcode_match_result {
>      OLD_UCODE, /* signature matched, but revision id is older or equal */
> +    SAME_UCODE, /* signature matched, but revision id is the same */
>      NEW_UCODE, /* signature matched, but revision id is newer */
>      MIS_UCODE, /* signature mismatched */
>  };

I don't think this is a clever idea.  For one, OLD and SAME are now
ambiguous (at least as far as the comments go), and having the
difference between the two depend on allow_same is unexpected to say the
least.

I never really liked the enum to begin with, and I think the logic would
be cleaner without it.


We depend entirely on there being one ucode blob which is applicable
globally across the system, so MIS_UCODE can be expressed as returning
NULL from the initial searches.  Everything else can then be expressed
in a normal {mem,str}cmp() way (i.e. -1/0/+1).

~Andrew
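
[Editor's note: the direction suggested in this review (handle a signature
mismatch by returning NULL from the search, and express everything else as a
{mem,str}cmp()-style -1/0/+1 comparison) could look roughly like the sketch
below. This is an illustrative approximation only; the structure and function
names are invented for the example and are not Xen's actual microcode API.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified patch descriptor; real Xen structures differ. */
struct ucode_patch {
    unsigned int signature;
    unsigned int revision;
};

/*
 * A mismatched signature (the old MIS_UCODE case) is assumed to have been
 * handled already by the search returning NULL, so this only orders
 * revisions, memcmp()-style: <0 older, 0 same, >0 newer.
 */
int compare_revisions(const struct ucode_patch *old,
                      const struct ucode_patch *new)
{
    assert(old->signature == new->signature);

    if ( new->revision < old->revision )
        return -1;
    if ( new->revision > old->revision )
        return 1;
    return 0;
}

/*
 * The allow_same policy then stops being baked into an enum and becomes
 * an explicit predicate at the decision point.
 */
bool should_apply(const struct ucode_patch *cur,
                  const struct ucode_patch *new, bool allow_same)
{
    int cmp = compare_revisions(cur, new);

    return cmp > 0 || (cmp == 0 && allow_same);
}
```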


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 00:55:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 00:55:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472228.732377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDb10-0001AO-D4; Fri, 06 Jan 2023 00:55:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472228.732377; Fri, 06 Jan 2023 00:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDb10-0001AH-AR; Fri, 06 Jan 2023 00:55:22 +0000
Received: by outflank-mailman (input) for mailman id 472228;
 Fri, 06 Jan 2023 00:55:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDb0y-0001AB-27
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 00:55:20 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c80ad64e-8d5c-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 01:55:16 +0100 (CET)
Received: from mail-dm6nam04lp2041.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 19:55:11 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH2PR03MB5191.namprd03.prod.outlook.com (2603:10b6:610:93::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 00:55:09 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 00:55:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c80ad64e-8d5c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672966516;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=NuV4HaD2bCln0Nm0zAO4yk9D+7lRmMaC1YSTGaUxamE=;
  b=faRatfXkWZUwY5dpYh8tM1c74HuJDSweMaP52qtggbH0syub/nEmejFa
   bH7m4wrr1krlRTl5luJ95RA2z2K2byo7y/wku3JImeur/GeyAciZniQbv
   KaV8Gm4UZBKyNfwWOS3YT8XqBH0AWuTxfaug4t/bZG/lrKatjCjNMpr43
   o=;
X-IronPort-RemoteIP: 104.47.73.41
X-IronPort-MID: 90885529
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="90885529"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=I7zJ96J4bufKwqxWhQmZFMYqEgbXCZjVxGzVwGpoNdNWIWmcHeT9id8UoLiY4SOdSowwaeSJ+C0A7OJ9b7ifO4hK/SXAhU0otJQoH5VY41p1xdRMsRgZujw2EBFJeqeKiI3tog6tY6Q9qvPuAe+uVU0sb62B3XPW40vQOFv6Oi1vw0nrmWn5CWAx4ZYDJlqg9SeC5Y4mePGxhmAa+qIyk+99p+fsau/A0Qpaim4VeY1O6u14ct++LwnQqHdsU/7luk8T24+vk/Wo9F07z2S3+lCInIB5GDzuPRw1ZD+EDnqfLoqc6+XK7UdrsArwBfvvwj4C2BqE/v/OizsR+5y7mQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NuV4HaD2bCln0Nm0zAO4yk9D+7lRmMaC1YSTGaUxamE=;
 b=fkt7Jmhe8FBzQHojwck8pTwVl+k5EmauAVr7a2Z6Efz0sP77L/m2dmHxd7xafYzRxm7xkgRDHuN25DayPQVMnFTfqezc4PytcJfqC/LbU0ScTQr6HpQGcj8oUziifFXq9lClx5l/c/zAhaFNaAhzZNXzCgEjBPFMVUhjN54RGdKFwk++Ev2QE4EtBNc2AorN3TTUJP4ofIJ1nM7BUGlVQFEpPpxoAW239HWAiOXMDFnRR9++4bsU30nyXyAVS12AX6mifmaLR+EkdRmb34hugJ+IIz3yap/tkiFsdLCFgcJqVB/IxOUDbnMVPpf4LfB1grpKtL3juy9Lw1b4xNYXlg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NuV4HaD2bCln0Nm0zAO4yk9D+7lRmMaC1YSTGaUxamE=;
 b=p+ruaNFaGd6vUFTnU+01rrprObMQ499tFa/M5t0NT1YUX+SfFiq6nmo8ADlkRtM/K3ViceuJoNrfwVkp/6LIM2WXAMibyY2imcrX12nIx21aAyL+9WlXvG4kdmWPqi6WMw9r+3ceM2M1jWwYb3kLM+eT6WGfQs8p0XdxZCE0IqM=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 01/11] x86/shadow: replace sh_reset_l3_up_pointers()
Thread-Topic: [PATCH 01/11] x86/shadow: replace sh_reset_l3_up_pointers()
Thread-Index: AQHZIR6u/vHw4RYb/kGm/un2td4VGq6QkM0A
Date: Fri, 6 Jan 2023 00:55:09 +0000
Message-ID: <e8bdffdf-d5ea-4320-3e84-b6654ac83002@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <03ae5a1a-4417-0aa0-27d8-833ade20cc0b@suse.com>
In-Reply-To: <03ae5a1a-4417-0aa0-27d8-833ade20cc0b@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CH2PR03MB5191:EE_
x-ms-office365-filtering-correlation-id: 19a9adfd-73b3-4b6b-4cd3-08daef80a8e6
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <46AE5BC851407045A5EA2D1E706E1809@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	49FpFA4vQox+TsqtH3wz2XDrcKy8/sCvDID/U4MQ2f+E6UG95e29N3zD/wz1aHATYQIG1gQaaXceU+yEpnM4wHNiv8s2Hr5c1kMOu+MP37RrFYlO3eyzXMgDu2YVxyGlyR/vWMdj7JvuN74WKW77vq2A2A/43Y5kTj3DWtdDtnpkyvca8tITspKpPF9yOENeaH+NfoqCEArIXTD/SwFgTDHMyBoncTeFwg2XUOmIbYDWQcNPfOOSbVzOf5xVPry35LPTjjgxEvHyheNAZiZY0c3MORrGPnzgBYEMWF2shevwuJFB4MNql2KBthUVwGjLKuZAO0QEVGre76gP+BtPxDcsIC1/2vMzs7CtLqOMRyL0L9fwmNt5sn/wqIyyZSKuwdaZaV2CqzR+Q+BnqSmeOQq2wyW3t8KtG5Wjs18V5esGyPO3WfUNoEz4nRFS94NkFt+KxR4Yxx3OKlQwbMADm4riDn818n08XUyd0iK+6+2v02GDkF8NdCvHr45tWRCAG0FAcGYpD4s5BO0k78Yihw4VCN13KZWxQtYRnJ36uoD7B1HYhudTZLrZ5HNnceXCUbo8ahydgmjfylqQBSjAwJKewsFcqyWROPeHeQJiqvXBnpX52/g0Hi2rHhlXU6RF2v8W+X4buzHstww+ljMnXeGBh0G9ZQ6uTwL6ziwo43SMVRzU2ZHeAGV8Wb2+W6TmIDCGI5FjMqiJXV0k7IhC+IlcUKgWrAjnvjFsoQ+2Fw7FkREfblYtMUurkzp7pxdjZEnEgWX1UgBa/dNGb1v/ztJIitBCowpEWcRsNv7I1X9hGKuUorcvGYEeBg5gtUGBsjJch9BVz7eQrJBmO5bx9w==
X-OriginatorOrg: citrix.com

On 05/01/2023 3:59 pm, Jan Beulich wrote:
> Rather than doing a separate hash walk (and then even using the vCPU
> variant, which is to go away), do the up-pointer-clearing right in
> sh_unpin(), as an alternative to the (now further limited) enlisting on
> a "free floating" list fragment. This utilizes the fact that such list
> fragments are traversed only for multi-page shadows (in shadow_free()).
> Furthermore sh_terminate_list() is a safe guard only anyway, which isn't
> in use in the common case (it actually does anything only for BIGMEM
> configurations).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I think.  The reasoning seems plausible, but it would probably benefit
from someone else double checking.


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 01:00:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 01:00:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472235.732387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDb6M-0005fE-14; Fri, 06 Jan 2023 01:00:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472235.732387; Fri, 06 Jan 2023 01:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDb6L-0005eI-UO; Fri, 06 Jan 2023 01:00:53 +0000
Received: by outflank-mailman (input) for mailman id 472235;
 Fri, 06 Jan 2023 01:00:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDb6L-00058x-71
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 01:00:53 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8f088e50-8d5d-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 02:00:50 +0100 (CET)
Received: from mail-bn7nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 20:00:47 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH2PR03MB5191.namprd03.prod.outlook.com (2603:10b6:610:93::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 01:00:45 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 01:00:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 05/11] x86/shadow: move bogus HVM checks in
 sh_pagetable_dying()
Thread-Topic: [PATCH 05/11] x86/shadow: move bogus HVM checks in
 sh_pagetable_dying()
Thread-Index: AQHZIR9eJqf2NVsSC0qgmrUFbi0j7K6QklwA
Date: Fri, 6 Jan 2023 01:00:44 +0000
Message-ID: <37bfe65b-d989-7c34-5e14-171a23df37f8@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <76cc0b4a-27ca-21b7-841f-315f31833762@suse.com>
In-Reply-To: <76cc0b4a-27ca-21b7-841f-315f31833762@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 05/01/2023 4:04 pm, Jan Beulich wrote:
> Perhaps these should have been dropped right in 2fb2dee1ac62 ("x86/mm:
> pagetable_dying() is HVM-only"). Convert both to assertions, noting that
> in particular the one in the 3-level variant of the function comes too

"came too late"?

It doesn't any more with this change in place.

> late anyway - first thing there we access the HVM part of a union.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 01:02:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 01:02:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472242.732398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDb8H-0005Ep-CX; Fri, 06 Jan 2023 01:02:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472242.732398; Fri, 06 Jan 2023 01:02:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDb8H-0005Ei-9l; Fri, 06 Jan 2023 01:02:53 +0000
Received: by outflank-mailman (input) for mailman id 472242;
 Fri, 06 Jan 2023 01:02:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDb8G-0005Ec-LL
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 01:02:52 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d71c3705-8d5d-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 02:02:51 +0100 (CET)
Received: from mail-sn1nam02lp2047.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 20:02:48 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6912.namprd03.prod.outlook.com (2603:10b6:8:47::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5944.19; Fri, 6 Jan 2023 01:02:47 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 01:02:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 06/11] x86/shadow: drop a few uses of mfn_valid()
Thread-Topic: [PATCH 06/11] x86/shadow: drop a few uses of mfn_valid()
Thread-Index: AQHZIR9umeBFnTREcUub2eAZa8aRda6Qku4A
Date: Fri, 6 Jan 2023 01:02:47 +0000
Message-ID: <11758833-fb7a-f937-a847-fe79ba932679@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <ca3e0c70-ae80-4c21-97f7-36525229074b@suse.com>
In-Reply-To: <ca3e0c70-ae80-4c21-97f7-36525229074b@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 05/01/2023 4:04 pm, Jan Beulich wrote:
> v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
> v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the

fixup[],smfn[] ?


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 01:32:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 01:32:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472250.732410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDbaN-0000Hy-Oi; Fri, 06 Jan 2023 01:31:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472250.732410; Fri, 06 Jan 2023 01:31:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDbaN-0000Hp-KW; Fri, 06 Jan 2023 01:31:55 +0000
Received: by outflank-mailman (input) for mailman id 472250;
 Fri, 06 Jan 2023 01:31:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDbaM-0000Hj-4a
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 01:31:54 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e4003a05-8d61-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 02:31:51 +0100 (CET)
Received: from mail-bn8nam12lp2171.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 20:31:47 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6665.namprd03.prod.outlook.com (2603:10b6:303:120::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 01:31:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 01:31:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4003a05-8d61-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672968711;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=oh7cyCArkL503G2e+TUqgklZst87qK1v5xznyrpur2s=;
  b=dFZJpfuvOJiVwU5WqMAiQTjAKijQbnopO1bT8Ah9mAOxvNQhL9Mqi27i
   A2f5C5OAKBcMuOZReBtkp/1YpZBl8li+O35nAUAsb0niE4UCx17G3WZIx
   jHF/VwSBVgGPn33eJoh3JjKgrBl3f24GKklNCjC2aYyACuPqWV3gi4O/G
   8=;
X-IronPort-RemoteIP: 104.47.55.171
X-IronPort-MID: 91404169
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="91404169"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kRj2BAisd7GPfX3Nws1+dtWFtkLJFDkvpUXCBubddKlJdTAv8Z56z9VHEG9ym54fFMNk/FPpXe0Zr2Ps5aNgz84K5inC02LqICYMDvpl9pqfxQT00LhX7rTXk6ZSovMUKonpwvlcKkldDIxT1dydhaaWEc1DyHx5sRS6/j9KA89klMULbpApmMLNFSvVLB02KNafnq899eXQ8lW1wAUO6kdmdXW9ZnBvS/GVUEn/ayxfB5zTLRxHDWF6ssW10JBYgVH2d6KpkqEyx7JqYgBW10ufc+kouWClpOq2x0rwQ86N8BcdPAUPnifPdNWD++YN0DXMVZI3Amj046W7JWFtig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oh7cyCArkL503G2e+TUqgklZst87qK1v5xznyrpur2s=;
 b=gEKVuBrohVqmcbbARxKYbTQ1342wV/LX2GtC6XwNlw4OitPFc9nQUyWQb+SHRmxprVg/L17PJzic/uBKsU0YLuvhqdR6D7p3cJ+amVGs6RupfQJzIah5l05yDB1JI1fwWjn96sw4iHQA0LWJku5iajRwffOOIuj97Yfj1EgG5qXdsG1lam1enWgvuEnglzu8No9kIPnhvseSY9DDjs3kmFAJptICkBULQLB8DfGT45r7PU0EwCEyH9fsrBPJYqBEWawqZi5pnvLFacOXMSQjgRJh0XsUl/QpUnt1vHx4XLcuuAEGj8jCO/SlgDN8AFX/6H1cYHs7qv6+0dHmzGmrRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oh7cyCArkL503G2e+TUqgklZst87qK1v5xznyrpur2s=;
 b=DFAuZDlztl3HQpQtG1Nh645LwUi0r2f+g3pejWomrFwSgLDSXYkOYLh7hMBuPezz/zBajvJ2Fd7yprAqILdttcnJp/S+Cm47P0euzVKJcIX3xwQjeLg5Kk9nOUa365pRgCZ/G07q7QZL4qfDF3PZP2Lhii0a19t0nHpF8bf8IJI=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 07/11] x86/shadow: L2H shadow type is PV32-only
Thread-Topic: [PATCH 07/11] x86/shadow: L2H shadow type is PV32-only
Thread-Index: AQHZIR97UU3x5E5JME2c1R2RCJMcMa6QmwUA
Date: Fri, 6 Jan 2023 01:31:44 +0000
Message-ID: <a3b7631c-07e7-455b-3531-c33ce435521b@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <2743393d-852d-b385-9eba-e22806b1c4af@suse.com>
In-Reply-To: <2743393d-852d-b385-9eba-e22806b1c4af@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MW4PR03MB6665:EE_
x-ms-office365-filtering-correlation-id: 21858d6e-9db4-427e-65b5-08daef85c538
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <0814466D35FD6349AFD19F49B8F92DAE@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 21858d6e-9db4-427e-65b5-08daef85c538
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Jan 2023 01:31:44.4530
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: twvfdl8yv7tvoLkiLODHPcpFMr7m9YHlAYshIkgA+Z4Qmti2AM2B0/CpWS299sjZc8kvTe+EUF0XikOXmYI0oHEgan9gX4lacSkGKvNGQIM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6665

On 05/01/2023 4:05 pm, Jan Beulich wrote:
> Like for the various HVM-only types, save a little bit of code by suitably
> "masking" this type out when !PV32.

add/remove: 0/1 grow/shrink: 2/4 up/down: 544/-922 (-378)
Function                                 old     new   delta
sh_map_and_validate_gl2e__guest_4        136     666    +530
sh_destroy_shadow                        289     303     +14
sh_clear_shadow_entry__guest_4           178     176      -2
sh_validate_guest_entry                  521     472     -49
sh_map_and_validate_gl2he__guest_4       136       2    -134
sh_remove_shadows                       4757    4545    -212
validate_gl2e                            525       -    -525
Total: Before=3914702, After=3914324, chg -0.01%

Marginal...

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I wasn't really sure whether it would be worthwhile to also update the
> "#else" part of shadow_size(). Doing so would be a little tricky, as the
> type to return 0 for has no name right now; I'd need to move down the
> #undef to allow for that. Thoughts?

This refers to the straight deletion from sh_type_to_size[] ?

I was confused by that at first.  The shadow does have a size of 1.  Might

/*   [SH_type_l2h_64_shadow]  = 1,  PV32 only */

work better?  That leaves it clearly in there as a 1, but not needing
any further ifdefery.

> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -859,13 +866,12 @@ do {
>      int _i;                                                                 \
>      int _xen = !shadow_mode_external(_dom);                                 \
>      shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                         \
> -    ASSERT(mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow ||\
> -           mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2h_64_shadow);\
> +    ASSERT_VALID_L2(mfn_to_page(_sl2mfn)->u.sh.type);                       \
>      for ( _i = 0; _i < SHADOW_L2_PAGETABLE_ENTRIES; _i++ )                  \
>      {                                                                       \
>          if ( (!(_xen))                                                      \
>               || !is_pv_32bit_domain(_dom)                                   \
> -             || mfn_to_page(_sl2mfn)->u.sh.type != SH_type_l2h_64_shadow    \
> +             || mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow     \

Isn't this redundant with the ASSERT_VALID_L2() now?

Per your other question, yes this desperately wants rearranging, but I
would agree with it being another patch.

I did previously play at trying to simplify the PV pagetable loops in a
similar way.  Code-gen wise, I think the L2 loops want to calculate an
upper bound which is either 512, or compat_first_slot, while the L4
loops want an "if(i == 256) i += 7; continue;" rather than having
LFENCE-ing predicates on each iteration.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 01:49:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 01:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472258.732421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDbr6-0001yM-5K; Fri, 06 Jan 2023 01:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472258.732421; Fri, 06 Jan 2023 01:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDbr6-0001yF-0w; Fri, 06 Jan 2023 01:49:12 +0000
Received: by outflank-mailman (input) for mailman id 472258;
 Fri, 06 Jan 2023 01:49:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDbr4-0001y3-0d; Fri, 06 Jan 2023 01:49:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDbr3-00076v-LU; Fri, 06 Jan 2023 01:49:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDbr3-0001qW-3A; Fri, 06 Jan 2023 01:49:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDbr3-0007Tv-2e; Fri, 06 Jan 2023 01:49:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/ZOnyD3PoMAIe3qC2MqwyG80SHNHBUxwg+S/kpcFPpk=; b=Q41X371o2v4DDuGOi1ft3dsLkk
	TGW5rQtf2Veh74LTn0vuS8yPhDFytzuYvLZqKi64BX5pI+HJgkVaJxNvA1mFlT2nluKniEg9Cqj7l
	EJRxtxJXjdLrMUC3kouIobsJO2vkwZOqWtNlih+xjjvlEOSvVKdK81/dyQhUAofUxi5M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175590-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175590: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f8af61fa14441e67300176a5e07671ea395426b3
X-Osstest-Versions-That:
    qemuu=cb9c6a8e5ad6a1f0ce164d352e3102df46986e22
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 01:49:09 +0000

flight 175590 qemu-mainline real [real]
flight 175594 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175590/
http://logs.test-lab.xenproject.org/osstest/logs/175594/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-shadow     7 xen-install         fail pass in 175594-retest
 test-arm64-arm64-xl-vhd 17 guest-start/debian.repeat fail pass in 175594-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 175583
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175583
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175583
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175583
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175583
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175583
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175583
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175583
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175583
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                f8af61fa14441e67300176a5e07671ea395426b3
baseline version:
 qemuu                cb9c6a8e5ad6a1f0ce164d352e3102df46986e22

Last test of basis   175583  2023-01-05 06:47:09 Z    0 days
Testing same since   175590  2023-01-05 17:07:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chenyi Qiang <chenyi.qiang@intel.com>
  David Hildenbrand <david@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   cb9c6a8e5a..f8af61fa14  f8af61fa14441e67300176a5e07671ea395426b3 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 02:00:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 02:00:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472268.732432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDc1u-0004ka-AG; Fri, 06 Jan 2023 02:00:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472268.732432; Fri, 06 Jan 2023 02:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDc1u-0004kT-6i; Fri, 06 Jan 2023 02:00:22 +0000
Received: by outflank-mailman (input) for mailman id 472268;
 Fri, 06 Jan 2023 02:00:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ci4=5D=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pDc1s-0004kN-VL
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 02:00:21 +0000
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com
 [66.111.4.29]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dc1f9371-8d65-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 03:00:16 +0100 (CET)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id E534F5C00E8;
 Thu,  5 Jan 2023 21:00:13 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Thu, 05 Jan 2023 21:00:13 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 5 Jan 2023 21:00:12 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc1f9371-8d65-11ed-b8d0-410ff93cb8f0
Date: Thu, 5 Jan 2023 21:00:03 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	"Tim (Xen.org)" <tim@xen.org>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v6 4/5] x86/mm: Reject invalid cacheability in PV guests
 by default
Message-ID: <Y7eAqgcEENVcn+bl@itl-email>
References: <cover.1671744225.git.demi@invisiblethingslab.com>
 <2236399f561d348937f2ff7777fe47ad4236dbda.1671744225.git.demi@invisiblethingslab.com>
 <c6223295-c4f9-8fa8-7635-80d48094190f@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="FR8Kja2I10q88hmI"
Content-Disposition: inline
In-Reply-To: <c6223295-c4f9-8fa8-7635-80d48094190f@citrix.com>


--FR8Kja2I10q88hmI
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Jan 05, 2023 at 08:28:26PM +0000, Andrew Cooper wrote:
> On 22/12/2022 10:31 pm, Demi Marie Obenour wrote:
> > diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> > index 424b12cfb27d6ade2ec63eacb8afe5df82465451..0230a7bc17cbd4362a42ea64cea695f31f5e0f86 100644
> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -1417,6 +1417,17 @@ detection of systems known to misbehave upon accesses to that port.
> >  ### idle_latency_factor (x86)
> >  > `= <integer>`
> >
> > +### invalid-cacheability (x86)
> > +> `= allow | deny | trap`
> > +
> > +> Default: `deny` in release builds, otherwise `trap`
> > +
> > +Specify what happens when a PV guest tries to use one of the reserved entries in
> > +the PAT.  `deny` causes the attempt to be rejected with -EINVAL, `allow` allows
> > +the attempt, and `trap` causes a general protection fault to be raised.
> > +Currently, the reserved entries are marked as uncacheable in Xen's PAT, but this
> > +will change if new memory types are added, so guests must not rely on it.
> > +
> 
> This wants to be scoped under "pv", and not a top-level option.  Current
> parsing is at the top of xen/arch/x86/pv/domain.c
> 
> How about `pv={no-}oob-pat`, and parse it into the opt_pv_oob_pat boolean ?

Works for me, though I will use ‘invalid’ instead of ‘oob’ as valid PAT
entries might not be contiguous.

> There really is no need to distinguish between deny and trap.  IMO,
> injecting #GP unilaterally is fine (to a guest expecting the hypercall
> to succeed, -EINVAL vs #GP makes no difference, but #GP is far more
> useful to a human trying to debug issues here), but I could be talked
> into putting that behind a CONFIG_DEBUG if others feel strongly.

Marek, thoughts?
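
[Editor's note: for context, a sketch of how the option documented in the quoted hunk would be supplied. The spelling `invalid-cacheability` is from the patch as posted; this thread discusses renaming it under `pv=`, and the dom0 paths below are placeholders.]

```shell
# Hypothetical GRUB2 menu-entry fragment: boot Xen with the option from
# the patch as posted, so reserved PAT encodings from PV guests raise #GP.
multiboot2 /boot/xen.gz invalid-cacheability=trap
module2    /boot/vmlinuz root=/dev/xvda1
module2    /boot/initrd.img
```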

> > @@ -1343,7 +1374,34 @@ static int promote_l1_table(struct page_info *page)
> >          }
> >          else
> >          {
> > -            switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
> > +            l1_pgentry_t l1e = pl1e[i];
> > +
> > +            if ( invalid_cacheability != INVALID_CACHEABILITY_ALLOW )
> > +            {
> > +                switch ( l1e.l1 & PAGE_CACHE_ATTRS )
> > +                {
> > +                case _PAGE_WB:
> > +                case _PAGE_UC:
> > +                case _PAGE_UCM:
> > +                case _PAGE_WC:
> > +                case _PAGE_WT:
> > +                case _PAGE_WP:
> > +                    break;
> > +                default:
> > +                    /*
> > +                     * If we get here, a PV guest tried to use one of the
> > +                     * reserved values in Xen's PAT.  This indicates a bug
> > +                     * in the guest.  If requested by the user, inject #GP
> > +                     * to cause the guest to log a stack trace.
> > +                     */
> > +                    if ( invalid_cacheability == INVALID_CACHEABILITY_TRAP )
> > +                        pv_inject_hw_exception(TRAP_gp_fault, 0);
> > +                    ret = -EINVAL;
> > +                    goto fail;
> > +                }
> > +            }
> 
> This will catch cases where the guest writes a full pagetable, then
> promotes it, but won't catch cases where the guest modifies an
> individual entry with mmu_update/etc.
> 
> The logic needs to be in get_page_from_l1e() to get applied uniformly to
> all PTE modifications.

I will move it there, and also update Qubes OS’s patchset.

> Right now, the l1_disallow_mask() check near the start hides the "can
> you use a nonzero cacheattr" check.  If I ever get around to cleaning up
> my post-XSA-402 series, I have a load of modifications to this.

I came up with some major cleanups too.

> But this could be something like this:
> 
> if ( opt_pv_oob_pat && (l1f & PAGE_CACHE_ATTRS) > _PAGE_WP )
> {
>     // #GP
>     return -EINVAL;
> }
> 
> fairly early on.
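
[Editor's note: the one-line test suggested above works because, in Xen's x86 PTE layout (PWT, PCD and PAT at bits 3, 4 and 7) and Xen's PAT ordering, the six valid cacheability encodings all compare less than or equal to _PAGE_WP, so the two reserved encodings are exactly the values above it. A self-contained sketch; the constants are re-declared here for illustration and should be checked against the real Xen headers.]

```c
#include <stdint.h>

/* Illustrative re-declaration of the x86 PTE cache-attribute bits:
 * PWT = bit 3, PCD = bit 4, PAT = bit 7. */
#define _PAGE_PWT (1u << 3)
#define _PAGE_PCD (1u << 4)
#define _PAGE_PAT (1u << 7)
#define PAGE_CACHE_ATTRS (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)

/* The six valid memory-type encodings in Xen's PAT ordering. */
#define _PAGE_WB  0u
#define _PAGE_WT  _PAGE_PWT
#define _PAGE_UCM _PAGE_PCD
#define _PAGE_UC  (_PAGE_PCD | _PAGE_PWT)
#define _PAGE_WC  _PAGE_PAT
#define _PAGE_WP  (_PAGE_PAT | _PAGE_PWT)

/* With this ordering, the two reserved encodings (PAT|PCD and
 * PAT|PCD|PWT) are exactly the values above _PAGE_WP, so one
 * comparison replaces the six-way switch in the quoted hunk. */
static int pte_cacheability_is_reserved(uint64_t l1f)
{
    return (l1f & PAGE_CACHE_ATTRS) > _PAGE_WP;
}
```

Note that the shortcut depends on the reserved encodings sorting above the valid ones; as the quoted documentation warns, that property would need revisiting if new memory types were ever added.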
> 
> It occurs to me that this check is only applicable when we're using the
> Xen ABI PAT, and seeing as we've talked about possibly making patch 5 a
> Kconfig option, that may want accounting here.  (Perhaps simply making
> opt_pv_oob_pat be false in a !XEN_PAT build.)

It’s actually applicable even with other PATs.  While Marek and I were
tracking down an Intel iGPU cache coherency problem, Marek used it to
verify that PAT entries that we thought were not being used were in fact
unused.  This allowed proving that the behavior of the GPU was impacted
by changes to PAT entries the hardware should not even be looking at,
and therefore that the hardware itself must be buggy.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--FR8Kja2I10q88hmI
Content-Type: application/pgp-signature; name="signature.asc"


--FR8Kja2I10q88hmI--


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 02:03:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 02:03:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472276.732443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDc4i-0005Lb-Oh; Fri, 06 Jan 2023 02:03:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472276.732443; Fri, 06 Jan 2023 02:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDc4i-0005LU-LK; Fri, 06 Jan 2023 02:03:16 +0000
Received: by outflank-mailman (input) for mailman id 472276;
 Fri, 06 Jan 2023 02:03:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDc4g-0005LO-Tu
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 02:03:15 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 44e3f0eb-8d66-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 03:03:11 +0100 (CET)
Received: from mail-dm6nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Jan 2023 21:03:09 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5162.namprd03.prod.outlook.com (2603:10b6:5:24a::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 02:03:07 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 02:03:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44e3f0eb-8d66-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1672970591;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=ViXsOb5Flt6Rc1xOWmrEZV/r88DR+Cz+8dlxDG5k1LQ=;
  b=A60tlV3taHW7xyih+zLoV5d62aIywIiUH8fgtmLKnc1ChzRzjpMaW1vH
   FrYQUxz0Ac5KOtl06d5/CDRihUuS5dKMhGgjAvo8L2qezRp+TdDrS5LRp
   loMHRT4eeWA9mtXh3IPiIzvtjO2aL8hzLUq1iPwWBaddegu/z6i6i8rK3
   k=;
X-IronPort-RemoteIP: 104.47.58.102
X-IronPort-MID: 91823714
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,303,1665460800"; 
   d="scan'208";a="91823714"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 08/11] x86/shadow: reduce effort of hash calculation
Thread-Topic: [PATCH 08/11] x86/shadow: reduce effort of hash calculation
Thread-Index: AQHZIR+fQDluUga6GUWdpW8jT2W5ga6Qo8kA
Date: Fri, 6 Jan 2023 02:03:07 +0000
Message-ID: <20c268c0-979b-5ac9-da16-7cb7322552a6@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <acf0f5f6-f4da-cd88-1515-2546153322b4@suse.com>
In-Reply-To: <acf0f5f6-f4da-cd88-1515-2546153322b4@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM6PR03MB5162:EE_
x-ms-office365-filtering-correlation-id: 986fcdd0-93fe-441b-2a55-08daef8a275d
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <05F2A68EEA9AAE40863E4275C0D8C0AB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 986fcdd0-93fe-441b-2a55-08daef8a275d
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Jan 2023 02:03:07.1132
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 92bAOolrs6jF9bwbErrfI2bwccOSB9RiWSNCf/QxmfVmXxQ69/ZFSlNBMSF5Dhldy5apQEUkxt7Nnz4wV2wWJgf7QllBpV2cAYx/FiveGTI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5162

On 05/01/2023 4:05 pm, Jan Beulich wrote:
> The "n" input is a GFN value and hence bounded by the physical address
> bits in use on a system.

The one case where this isn't obviously true is in sh_audit().  It comes
from a real MFN in the system, not a GFN, which will have the same
property WRT PADDR_BITS.

>  The hash quality won't improve by also
> including the upper always-zero bits in the calculation. To keep things
> as compile-time-constant as they were before, use PADDR_BITS (not
> paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.

While this is all true, you'll get a much better improvement by not
forcing 'n' onto the stack just to access it bytewise.  Right now, the
loop looks like:

<shadow_hash_insert>:
    48 83 ec 10             sub    $0x10,%rsp
    49 89 c9                mov    %rcx,%r9
    41 89 d0                mov    %edx,%r8d
    48 8d 44 24 08          lea    0x8(%rsp),%rax
    48 8d 4c 24 10          lea    0x10(%rsp),%rcx
    48 89 74 24 08          mov    %rsi,0x8(%rsp)
    0f 1f 80 00 00 00 00    nopl   0x0(%rax)
/-> 0f b6 10                movzbl (%rax),%edx
|   48 83 c0 01             add    $0x1,%rax
|   45 69 c0 3f 00 01 00    imul   $0x1003f,%r8d,%r8d
|   41 01 d0                add    %edx,%r8d
|   48 39 c1                cmp    %rax,%rcx
\-- 75 ea                   jne    ffff82d0402efda0 <shadow_hash_insert+0x20>

which doesn't even have a compile-time constant loop bound.  It's
runtime calculated by the second lea constructing the upper pointer bound.

Given this further delta:

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 4a8bcec10fe8..902c749f2724 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1397,13 +1397,12 @@ static unsigned int shadow_get_allocation(struct domain *d)
 typedef u32 key_t;
 static inline key_t sh_hash(unsigned long n, unsigned int t)
 {
-    unsigned char *p = (unsigned char *)&n;
     key_t k = t;
     int i;
 
     BUILD_BUG_ON(PADDR_BITS > BITS_PER_LONG + PAGE_SHIFT);
-    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++ )
-        k = p[i] + (k << 6) + (k << 16) - k;
+    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++, n >>= 8 )
+        k = (uint8_t)n + (k << 6) + (k << 16) - k;
 
     return k % SHADOW_HASH_BUCKETS;
 }

the code gen becomes:

<shadow_hash_insert>:
    41 89 d0                mov    %edx,%r8d
    49 89 c9                mov    %rcx,%r9
    b8 05 00 00 00          mov    $0x5,%eax
/-> 45 69 c0 3f 00 01 00    imul   $0x1003f,%r8d,%r8d
|   40 0f b6 d6             movzbl %sil,%edx
|   48 c1 ee 08             shr    $0x8,%rsi
|   41 01 d0                add    %edx,%r8d
|   83 e8 01                sub    $0x1,%eax
\-- 75 e9                   jne    ffff82d0402efd8b <shadow_hash_insert+0xb>

with an actual constant loop bound, and not a memory operand in sight.
This form (even at 8 iterations) will easily execute faster than the
stack-spilled form.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 02:04:08 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 00/11] x86/shadow: misc tidying
Thread-Topic: [PATCH 00/11] x86/shadow: misc tidying
Thread-Index: AQHZIR54p1+0rBg1XUyPGnzUS/cXvK6QpAqA
Date: Fri, 6 Jan 2023 02:03:59 +0000
Message-ID: <2e261172-76b3-936d-e95c-e12ed6acf7bd@citrix.com>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
In-Reply-To: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 05/01/2023 3:57 pm, Jan Beulich wrote:
> ... or so I hope. The main observation was that we still have both
> hash_vcpu_for_each() and hash_domain_for_each(), where the latter was

foreach

> introduced in 2014/15 to replace the former. Only some eight years
> later we can now complete this conversion. Everything else addresses
> other things noticed along the road.

Wow, it has been a long time.  That was the start of the "make Xen not
fall over NULL pointers if the toolstack issues some hypercalls out of
order" work, a task that is still ongoing...

>
> 01: replace sh_reset_l3_up_pointers()
> 02: convert sh_audit_flags()'es 1st parameter to domain
> 03: drop hash_vcpu_foreach()
> 04: rename hash_domain_foreach()
> 05: move bogus HVM checks in sh_pagetable_dying()
> 06: drop a few uses of mfn_valid()
> 07: L2H shadow type is PV32-only
> 08: reduce effort of hash calculation
> 09: simplify conditionals in sh_{get,put}_ref()
> 10: correct shadow type bounds checks
> 11: sh_remove_all_mappings() is HVM-only

Everything without code queries, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 02:30:20 2023
Date: Fri, 6 Jan 2023 03:30:01 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	"Tim (Xen.org)" <tim@xen.org>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v6 4/5] x86/mm: Reject invalid cacheability in PV guests
 by default
Message-ID: <Y7eHqmeNgFmf3NqH@mail-itl>
References: <cover.1671744225.git.demi@invisiblethingslab.com>
 <2236399f561d348937f2ff7777fe47ad4236dbda.1671744225.git.demi@invisiblethingslab.com>
 <c6223295-c4f9-8fa8-7635-80d48094190f@citrix.com>
 <Y7eAqgcEENVcn+bl@itl-email>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
In-Reply-To: <Y7eAqgcEENVcn+bl@itl-email>

On Thu, Jan 05, 2023 at 09:00:03PM -0500, Demi Marie Obenour wrote:
> On Thu, Jan 05, 2023 at 08:28:26PM +0000, Andrew Cooper wrote:
> > On 22/12/2022 10:31 pm, Demi Marie Obenour wrote:
> > > diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> > > index 424b12cfb27d6ade2ec63eacb8afe5df82465451..0230a7bc17cbd4362a42ea64cea695f31f5e0f86 100644
> > > --- a/docs/misc/xen-command-line.pandoc
> > > +++ b/docs/misc/xen-command-line.pandoc
> > > @@ -1417,6 +1417,17 @@ detection of systems known to misbehave upon accesses to that port.
> > >  ### idle_latency_factor (x86)
> > >  > `= <integer>`
> > > 
> > > +### invalid-cacheability (x86)
> > > +> `= allow | deny | trap`
> > > +
> > > +> Default: `deny` in release builds, otherwise `trap`
> > > +
> > > +Specify what happens when a PV guest tries to use one of the reserved entries in
> > > +the PAT.  `deny` causes the attempt to be rejected with -EINVAL, `allow` allows
> > > +the attempt, and `trap` causes a general protection fault to be raised.
> > > +Currently, the reserved entries are marked as uncacheable in Xen's PAT, but this
> > > +will change if new memory types are added, so guests must not rely on it.
> > > +
> > 
> > This wants to be scoped under "pv", and not a top-level option.  Current
> > parsing is at the top of xen/arch/x86/pv/domain.c
> > 
> > How about `pv={no-}oob-pat`, and parse it into the opt_pv_oob_pat boolean ?
> 
> Works for me, though I will use 'invalid' instead of 'oob' as valid PAT
> entries might not be contiguous.

If we're talking about alternative PAT settings, I'm not sure they can
be called "invalid". With Xen's default PAT, the top two entries are
documented as reserved (in xen.h), and only that makes them forbidden
to use. But with alternative settings, it's only the behaviour of
Linux's parsing that prefers the lower entries. Once the contract set
in xen.h is broken, the "reserved" entries stop being reserved too.

So, _if_ that option were applicable to an alternative PAT choice, it
would only be useful for debugging Linux specifically (assuming Linux
won't change its approach to choosing entries - which I think it is
allowed to do).

> > There really is no need to distinguish between deny and trap.  IMO,
> > injecting #GP unilaterally is fine (to a guest expecting the hypercall
> > to succeed, -EINVAL vs #GP makes no difference, but #GP is far more
> > useful to a human trying to debug issues here), but I could be talked
> > into putting that behind a CONFIG_DEBUG if others feel strongly.
> 
> Marek, thoughts?

With Xen's default PAT, #GP may be useful indeed, but it must come with
a message explaining why it was injected.

> > > @@ -1343,7 +1374,34 @@ static int promote_l1_table(struct page_info *page)
> > >          }
> > >          else
> > >          {
> > > -            switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
> > > +            l1_pgentry_t l1e = pl1e[i];
> > > +
> > > +            if ( invalid_cacheability != INVALID_CACHEABILITY_ALLOW )
> > > +            {
> > > +                switch ( l1e.l1 & PAGE_CACHE_ATTRS )
> > > +                {
> > > +                case _PAGE_WB:
> > > +                case _PAGE_UC:
> > > +                case _PAGE_UCM:
> > > +                case _PAGE_WC:
> > > +                case _PAGE_WT:
> > > +                case _PAGE_WP:
> > > +                    break;
> > > +                default:
> > > +                    /*
> > > +                     * If we get here, a PV guest tried to use one of the
> > > +                     * reserved values in Xen's PAT.  This indicates a bug
> > > +                     * in the guest.  If requested by the user, inject #GP
> > > +                     * to cause the guest to log a stack trace.
> > > +                     */
> > > +                    if ( invalid_cacheability == INVALID_CACHEABILITY_TRAP )
> > > +                        pv_inject_hw_exception(TRAP_gp_fault, 0);
> > > +                    ret = -EINVAL;
> > > +                    goto fail;
> > > +                }
> > > +            }
> > 
> > This will catch cases where the guest writes a full pagetable, then
> > promotes it, but won't catch cases where the guest modifies an
> > individual entry with mmu_update/etc.
> > 
> > The logic needs to be in get_page_from_l1e() to get applied uniformly to
> > all PTE modifications.
> 
> I will move it there, and also update Qubes OS's patchset.
>=20
> > Right now, the l1_disallow_mask() check near the start hides the "can
> > you use a nonzero cacheattr" check.  If I ever get around to cleaning up
> > my post-XSA-402 series, I have a load of modifications to this.
> 
> I came up with some major cleanups too.
> 
> > But this could be something like this:
> > 
> > if ( opt_pv_oob_pat && (l1f & PAGE_CACHE_ATTRS) > _PAGE_WP )
> > {
> >     // #GP
> >     return -EINVAL;
> > }
> > 
> > fairly early on.
> > 
> > It occurs to me that this check is only applicable when we're using the
> > Xen ABI PAT, and seeing as we've talked about possibly making patch 5 a
> > Kconfig option, that may want accounting here.  (Perhaps simply making
> > opt_pv_oob_pat be false in a !XEN_PAT build.)
> 
> It's actually applicable even with other PATs.  While Marek and I were
> tracking down an Intel iGPU cache coherency problem, Marek used it to
> verify that PAT entries that we thought were not being used were in fact
> unused.  This allowed proving that the behavior of the GPU was impacted
> by changes to PAT entries the hardware should not even be looking at,
> and therefore that the hardware itself must be buggy.

In fact, I did that via WARN() on the Linux side, to _not_ have the guest
killed in this case, so that more information could potentially be
collected. As said above, with alternative PAT settings, the contract
about which entries are "valid" isn't there anymore, so punishing the
guest for using them isn't appropriate. It could still be a useful
feature for debugging Linux (and it feels like we'll need this feature
for some time...). So, with !XEN_PAT it should be at least disabled by
default, or maybe even hidden behind CONFIG_DEBUG.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--eeQ5Sw3MESoZAt5D
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmO3h6oACgkQ24/THMrX
1yyQWAf9HO56+c4tEleItkwNk4egtEM7JbnfHfue4DVNGbvYgvqpORvvhtx7wbm8
hjQTIb0bWRBtOmOlPa23conmtzGhJIQgsRJVx+4Y8Uu95BMjAChur9jUo56AmeWA
frRMIrJg1va3KcyRdLiChERtTNXH1E43e5vcH2I/Yswntwvq3+vwl2Vk/y0SYlSw
J7L3X7USBEcvJMe8jUZwHKXlGebRf9axovR8Gp7KH0xHUofLGE0BOgP9RtERu82t
q4LLxrmPRZJyd2rpAnN1vOT1LlqX3lywUluDFGvME94gt0Sl5npQZoGbwmiDBi0U
7Gd2xPjjNTIC3eVlfLV8fztV4Zxiag==
=OXk0
-----END PGP SIGNATURE-----

--eeQ5Sw3MESoZAt5D--


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 05:24:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 05:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472299.732476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDfD9-0001Eh-S1; Fri, 06 Jan 2023 05:24:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472299.732476; Fri, 06 Jan 2023 05:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDfD9-0001Ea-OI; Fri, 06 Jan 2023 05:24:11 +0000
Received: by outflank-mailman (input) for mailman id 472299;
 Fri, 06 Jan 2023 05:24:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ci4=5D=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pDfD8-0001EU-3G
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 05:24:10 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 55b8809d-8d82-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 06:24:05 +0100 (CET)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 423E05C012E;
 Fri,  6 Jan 2023 00:24:04 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 06 Jan 2023 00:24:04 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 6 Jan 2023 00:24:03 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55b8809d-8d82-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1672982644; x=
	1673069044; bh=T6QxApLPZtDk4+Nt1PDDcWCHZVNoE6nWso9ONeSi5PA=; b=i
	pRpRT9szWvhq/kVRiSz3DjByOdrKFTa31g7Eb9gtZ1EKeB4OaNVJYUcU2oB4A2Rn
	pp7rqMqQIYYfWTb2evUM8LeC4eFZNSRukQaFXR6WKiM9eMcrYeXskLxLuDWPzZqI
	ePHTznNYHUPAGlfd5aj/urlBYp/Csbc4YYC81A9gb2qyHqlqRYykGdN4jr3EKQWo
	6pbKlp/NA1UUmN7EauiLbKJ5Xgxn/p1acPlFSQXueSfTf5n1ojP3rLEu1NxQvr5Q
	PHRe5JXoaqVN9w4N5No32O4aMC5ApSX/n3mlIzFJ9vDV7zW4Y14ex/d2D0wKmDyO
	tpV/GYXhcekFbzDFRnMBA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1672982644; x=1673069044; bh=T6QxApLPZtDk4+Nt1PDDcWCHZVNo
	E6nWso9ONeSi5PA=; b=SKNpQ4cNhxrQ2jwwwU4+oUk8/OtbFhtqD1z4zvOUOoMz
	Ue7/ak5HzaQbsn8iRQUh9CfUL9TDLKdk+AQil/ABbhQweIlE2AmXAj9FDhqvQs/5
	K58KhLbKtENsERT4ciGqO5LZbudoXaDTpab5ocia7J1fWlvVwS55cORwBfr1pQj1
	z2ahHk/MA6iniKSj8XTtgyFKQreSWRgW8ECEBminv8tGoRuQtdpVKwz66xftfFTN
	zjOWCv5bcVRksI6K4yM/4vcSOl5JvevVoBuvEQvrO0xtdWFT87xcq0ABELsrZYZa
	pDwV8PsX5wr2al3T+AosADUaPaclz1HUGLHcegUKfQ==
X-ME-Sender: <xms:c7C3Y4l37EtISfRAilH_biBA-WhqARDGPy2U0ZFzXYXovG0IpNyBOw>
    <xme:c7C3Y30XVg8fBn3xWJU9Ylc78Mz5FsMv6QpVV9OS-PGoDgrO_94MBxBhMvXiAJ3FP
    j3MTFn2arKIq-Y>
X-ME-Received: <xmr:c7C3Y2qmXuUCBs25zk9nMHV01mcb8WRfb2yJreAjFMuuuA4kUHQNLt5Z6YT9>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrjeelgdekgecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeffvghmihcu
    ofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpedvjeetgeekhfetudfhgfetffegfffg
    uddvgffhffeifeeikeektdehgeetheffleenucevlhhushhtvghrufhiiigvpedtnecurf
    grrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhgshhl
    rggsrdgtohhm
X-ME-Proxy: <xmx:c7C3Y0mDa02vhsNzQF-lntmAnR_LzzwCkPo5-lhJ7KyMUErWxPfC-g>
    <xmx:c7C3Y21ASiA7cAV4GoVffjQZ0cQlC3uxcyS7LmqYyrdMpH74JYuUmQ>
    <xmx:c7C3Y7v44eAw5AacZCHczPKdwmq2o15hdgyLj0PzqqLFQeD6bkS7kg>
    <xmx:dLC3Yy_iztWrWAdKWgZt9Mbazkr7YIDIQsOmpXU8_lnIDcldHvZLxQ>
Feedback-ID: iac594737:Fastmail
Date: Fri, 6 Jan 2023 00:23:13 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	"Tim (Xen.org)" <tim@xen.org>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v6 4/5] x86/mm: Reject invalid cacheability in PV guests
 by default
Message-ID: <Y7ewcXxuEL92rI4v@itl-email>
References: <cover.1671744225.git.demi@invisiblethingslab.com>
 <2236399f561d348937f2ff7777fe47ad4236dbda.1671744225.git.demi@invisiblethingslab.com>
 <c6223295-c4f9-8fa8-7635-80d48094190f@citrix.com>
 <Y7eAqgcEENVcn+bl@itl-email>
 <Y7eHqmeNgFmf3NqH@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="ivFJOV8FZBUMgnec"
Content-Disposition: inline
In-Reply-To: <Y7eHqmeNgFmf3NqH@mail-itl>


--ivFJOV8FZBUMgnec
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 6 Jan 2023 00:23:13 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	"Tim (Xen.org)" <tim@xen.org>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v6 4/5] x86/mm: Reject invalid cacheability in PV guests
 by default

On Fri, Jan 06, 2023 at 03:30:01AM +0100, Marek Marczykowski-Górecki wrote:
> On Thu, Jan 05, 2023 at 09:00:03PM -0500, Demi Marie Obenour wrote:
> > On Thu, Jan 05, 2023 at 08:28:26PM +0000, Andrew Cooper wrote:
> > > On 22/12/2022 10:31 pm, Demi Marie Obenour wrote:
> > > > diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> > > > index 424b12cfb27d6ade2ec63eacb8afe5df82465451..0230a7bc17cbd4362a42ea64cea695f31f5e0f86 100644
> > > > --- a/docs/misc/xen-command-line.pandoc
> > > > +++ b/docs/misc/xen-command-line.pandoc
> > > > @@ -1417,6 +1417,17 @@ detection of systems known to misbehave upon accesses to that port.
> > > >  ### idle_latency_factor (x86)
> > > >  > `= <integer>`
> > > > 
> > > > +### invalid-cacheability (x86)
> > > > +> `= allow | deny | trap`
> > > > +
> > > > +> Default: `deny` in release builds, otherwise `trap`
> > > > +
> > > > +Specify what happens when a PV guest tries to use one of the reserved entries in
> > > > +the PAT.  `deny` causes the attempt to be rejected with -EINVAL, `allow` allows
> > > > +the attempt, and `trap` causes a general protection fault to be raised.
> > > > +Currently, the reserved entries are marked as uncacheable in Xen's PAT, but this
> > > > +will change if new memory types are added, so guests must not rely on it.
> > > > +
> > > 
> > > This wants to be scoped under "pv", and not a top-level option.  Current
> > > parsing is at the top of xen/arch/x86/pv/domain.c
> > > 
> > > How about `pv={no-}oob-pat`, and parse it into the opt_pv_oob_pat boolean ?
> > 
> > Works for me, though I will use ‘invalid’ instead of ‘oob’ as valid PAT
> > entries might not be contiguous.
> 
> If we're talking about alternative PAT settings, I'm not sure if they
> can be called "invalid". With Xen's default choice of PAT, the top two
> entries are documented as reserved (in xen.h) and only that makes them
> forbidden to use. But with alternative settings, it's only the behavior
> of Linux's parsing that prefers using lower entries. When breaking the
> contract set in xen.h, "reserved" entries stop being reserved too.

That makes sense.

> So, _if_ that option is applicable to an alternative PAT choice, it's
> only useful for debugging Linux specifically (assuming Linux won't
> change its approach to choosing entries - which I think it's allowed to
> do).

Point taken.

> > > There really is no need to distinguish between deny and trap.  IMO,
> > > injecting #GP unilaterally is fine (to a guest expecting the hypercall
> > > to succeed, -EINVAL vs #GP makes no difference, but #GP is far more
> > > useful to a human trying to debug issues here), but I could be talked
> > > into putting that behind a CONFIG_DEBUG if others feel strongly.
> > 
> > Marek, thoughts?
> 
> With Xen's default PAT, #GP may be useful indeed, but it must come with
> a message saying why it was injected.

In xl dmesg?

> > > > @@ -1343,7 +1374,34 @@ static int promote_l1_table(struct page_info *page)
> > > >          }
> > > >          else
> > > >          {
> > > > -            switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
> > > > +            l1_pgentry_t l1e = pl1e[i];
> > > > +
> > > > +            if ( invalid_cacheability != INVALID_CACHEABILITY_ALLOW )
> > > > +            {
> > > > +                switch ( l1e.l1 & PAGE_CACHE_ATTRS )
> > > > +                {
> > > > +                case _PAGE_WB:
> > > > +                case _PAGE_UC:
> > > > +                case _PAGE_UCM:
> > > > +                case _PAGE_WC:
> > > > +                case _PAGE_WT:
> > > > +                case _PAGE_WP:
> > > > +                    break;
> > > > +                default:
> > > > +                    /*
> > > > +                     * If we get here, a PV guest tried to use one of the
> > > > +                     * reserved values in Xen's PAT.  This indicates a bug
> > > > +                     * in the guest.  If requested by the user, inject #GP
> > > > +                     * to cause the guest to log a stack trace.
> > > > +                     */
> > > > +                    if ( invalid_cacheability == INVALID_CACHEABILITY_TRAP )
> > > > +                        pv_inject_hw_exception(TRAP_gp_fault, 0);
> > > > +                    ret = -EINVAL;
> > > > +                    goto fail;
> > > > +                }
> > > > +            }
> > > 
> > > This will catch cases where the guest writes a full pagetable, then
> > > promotes it, but won't catch cases where the guest modifies an
> > > individual entry with mmu_update/etc.
> > > 
> > > The logic needs to be in get_page_from_l1e() to get applied uniformly to
> > > all PTE modifications.
> > 
> > I will move it there, and also update Qubes OS’s patchset.
> > 
> > > Right now, the l1_disallow_mask() check near the start hides the "can
> > > you use a nonzero cacheattr" check.  If I ever get around to cleaning up
> > > my post-XSA-402 series, I have a load of modifications to this.
> > 
> > I came up with some major cleanups too.
> > 
> > > But this could be something like this:
> > > 
> > > if ( opt_pv_oob_pat && (l1f & PAGE_CACHE_ATTRS) > _PAGE_WP )
> > > {
> > >     // #GP
> > >     return -EINVAL;
> > > }
> > > 
> > > fairly early on.
> > > 
> > > It occurs to me that this check is only applicable when we're using the
> > > Xen ABI PAT, and seeing as we've talked about possibly making patch 5 a
> > > Kconfig option, that may want accounting here.  (Perhaps simply making
> > > opt_pv_oob_pat be false in a !XEN_PAT build.)
> > 
> > It’s actually applicable even with other PATs.  While Marek and I were
> > tracking down an Intel iGPU cache coherency problem, Marek used it to
> > verify that PAT entries that we thought were not being used were in fact
> > unused.  This allowed proving that the behavior of the GPU was impacted
> > by changes to PAT entries the hardware should not even be looking at,
> > and therefore that the hardware itself must be buggy.
> 
> In fact, I did that via WARN() on the Linux side, to _not_ have the guest
> killed in this case, to potentially collect more info.

Whoops!  I must have misunderstood what you meant by "trap".

> As said above, with alternative PAT settings, the contract about which
> entries are "valid" isn't there anymore, so punishing the guest for
> using them isn't appropriate. It could still be a useful feature for
> debugging Linux (and it feels like we'll need this feature for some
> time...). So, with !XEN_PAT it should be at least disabled by default,
> or maybe even hidden behind CONFIG_DEBUG.

Okay, makes sense.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--ivFJOV8FZBUMgnec
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmO3sHAACgkQsoi1X/+c
IsHeXg//S1AGIYEzJpckidY72FKng8WRFxrRxiaLYi1x/BaJisp1faR/UiV5qga6
6hcI+GpGYxFwE5KC8+PpJWiTAWkx2GmzGLY4D12f4Qrp/dl97qy2bN1Aqc4vKlfH
DnY4yC0nfVHSYXmzHL/WACu0ffIGkE8+yRNZhc3iikMaY1yqFv04aH48AWVVcnGW
flZGRSjJWY/jgc7Ih787nr+Lc0iu16TkgUMNApVoLIO680r8YMwJrGLLSXafd6tz
aCywzechhBRnsx+tWPDqpDv5iM3Hh+H/gPi66pMRyoEjWkAIeoO2mD52pbJ6Iq8K
97AJZ7C2cTMeSSj+Uv3Q0mPLlrci0GwP7Ej3uMHxvj5ctIkcNW3RDX/Q9+UTZG5z
+XtFqY7FXUiulKLX3FHt83daGd9sgV8rR+i6sTMe9un5YrfW5fPM51U+LDjyWf6S
lhbcT/yYMbf5rziZhv+/2M67UiUrIhWVYXVp6ctqSbvlLvrhPVddIQRo5sDphiRQ
pyMHhQIh9lLvexOgp56DkRCAVvkjaZqWUEt7EHTgmLJSuXPRtLulu5ojImlrsZTl
Bu6OvjfNbv/nTRiOUilta2D8bxnrQ+VDYTFTDl4Xzot1pu0SLLJBYNHun+INDyG2
zgTI0s5l93yh1I4KVjR3cRUv+pg+Un6CoZDBxSSzrCQJ5pKmDyg=
=qcVE
-----END PGP SIGNATURE-----

--ivFJOV8FZBUMgnec--


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 05:33:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 05:33:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472307.732487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDfMF-0002k1-PF; Fri, 06 Jan 2023 05:33:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472307.732487; Fri, 06 Jan 2023 05:33:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDfMF-0002ju-Kf; Fri, 06 Jan 2023 05:33:35 +0000
Received: by outflank-mailman (input) for mailman id 472307;
 Fri, 06 Jan 2023 05:33:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDfME-0002jg-A0; Fri, 06 Jan 2023 05:33:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDfME-0004Yx-7s; Fri, 06 Jan 2023 05:33:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDfMD-0003EC-Lr; Fri, 06 Jan 2023 05:33:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDfMD-0004Tu-LQ; Fri, 06 Jan 2023 05:33:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hVQ+F5Z9aFptuE60vsMMDxGFlAf/aEEGiSqiirlZucE=; b=4rYYykOPW9Uv0AU8nATNbMovkO
	vUUT50T37J3RlmyncdzVBa2QeBpmjnNrNquMso8g/2T44N4eEh36olTWIkEqPteXWDSH0JXJwowri
	pUikxczSbsoowXcyeblQHIfsvpx9wGDNTlWLNNuzeek5dTSqfECJ2ZTHk3BRW0Cl09Bg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175596-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175596: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c32e7331513bfdb625986e4570c304dced4ea109
X-Osstest-Versions-That:
    ovmf=9ce09870e721efacc41fa7ee684e9e299f120350
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 05:33:33 +0000

flight 175596 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175596/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c32e7331513bfdb625986e4570c304dced4ea109
baseline version:
 ovmf                 9ce09870e721efacc41fa7ee684e9e299f120350

Last test of basis   175567  2023-01-04 10:13:33 Z    1 days
Testing same since   175596  2023-01-06 03:10:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiaxin Wu <jiaxin.wu@intel.com>
  Wu, Jiaxin <jiaxin.wu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9ce09870e7..c32e733151  c32e7331513bfdb625986e4570c304dced4ea109 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 07:01:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 07:01:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472320.732498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDgio-0003NA-73; Fri, 06 Jan 2023 07:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472320.732498; Fri, 06 Jan 2023 07:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDgio-0003N3-4E; Fri, 06 Jan 2023 07:00:58 +0000
Received: by outflank-mailman (input) for mailman id 472320;
 Fri, 06 Jan 2023 07:00:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDgin-0003Mt-77; Fri, 06 Jan 2023 07:00:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDgin-0006bv-4Y; Fri, 06 Jan 2023 07:00:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDgim-0005zf-M8; Fri, 06 Jan 2023 07:00:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDgim-0005DV-Ll; Fri, 06 Jan 2023 07:00:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bEKZQki7r8t/zqa36d89r+HNXOS2sar3ggG0+GjEMoE=; b=aqkMBI43hyCuijasrvUrGP1wXA
	ZlX83deHclFf9bKlEF0GfLGB8viBSXq+xn9wG9QMu3Ms0tg3ezNF5gmOgujjeZNlSh970d9jc/Mmt
	WChHswkntypJlvunTIttMnvzD+rd8nHVbP/IK6G2hl/lTl2MxSxmcFM3JDOvv3VSoiSQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175591-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175591: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=41c03ba9beea760bd2d2ac9250b09a2e192da2dc
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 07:00:56 +0000

flight 175591 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175591/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                41c03ba9beea760bd2d2ac9250b09a2e192da2dc
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   90 days
Failing since        173470  2022-10-08 06:21:34 Z   90 days  187 attempts
Testing same since   175579  2023-01-05 04:57:31 Z    1 days    2 attempts

------------------------------------------------------------
3265 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 498271 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 07:11:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 07:11:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472329.732509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDgtC-0004tk-9L; Fri, 06 Jan 2023 07:11:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472329.732509; Fri, 06 Jan 2023 07:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDgtC-0004td-4a; Fri, 06 Jan 2023 07:11:42 +0000
Received: by outflank-mailman (input) for mailman id 472329;
 Fri, 06 Jan 2023 07:11:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ggnj=5D=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDgtB-0004tT-7F
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 07:11:41 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2041.outbound.protection.outlook.com [40.107.103.41])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d49113e-8d91-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 08:11:39 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8976.eurprd04.prod.outlook.com (2603:10a6:102:20f::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 07:11:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 07:11:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d49113e-8d91-11ed-91b6-6bf2151ebd3b
Message-ID: <0869534b-4481-d3d1-2afc-09560844d9d5@suse.com>
Date: Fri, 6 Jan 2023 08:11:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v6 4/5] x86/mm: Reject invalid cacheability in PV guests
 by default
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Roger Pau Monne <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, "Tim (Xen.org)" <tim@xen.org>,
 George Dunlap <George.Dunlap@citrix.com>,
 Demi Marie Obenour <demi@invisiblethingslab.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1671744225.git.demi@invisiblethingslab.com>
 <2236399f561d348937f2ff7777fe47ad4236dbda.1671744225.git.demi@invisiblethingslab.com>
 <c6223295-c4f9-8fa8-7635-80d48094190f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c6223295-c4f9-8fa8-7635-80d48094190f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0129.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 05.01.2023 21:28, Andrew Cooper wrote:
> On 22/12/2022 10:31 pm, Demi Marie Obenour wrote:
>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>> index 424b12cfb27d6ade2ec63eacb8afe5df82465451..0230a7bc17cbd4362a42ea64cea695f31f5e0f86 100644
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -1417,6 +1417,17 @@ detection of systems known to misbehave upon accesses to that port.
>>  ### idle_latency_factor (x86)
>>  > `= <integer>`
>>  
>> +### invalid-cacheability (x86)
>> +> `= allow | deny | trap`
>> +
>> +> Default: `deny` in release builds, otherwise `trap`
>> +
>> +Specify what happens when a PV guest tries to use one of the reserved entries in
>> +the PAT.  `deny` causes the attempt to be rejected with -EINVAL, `allow` allows
>> +the attempt, and `trap` causes a general protection fault to be raised.
>> +Currently, the reserved entries are marked as uncacheable in Xen's PAT, but this
>> +will change if new memory types are added, so guests must not rely on it.
>> +
> 
> This wants to be scoped under "pv", and not a top-level option.  Current
> parsing is at the top of xen/arch/x86/pv/domain.c
> 
> How about `pv={no-}oob-pat`, and parse it into the opt_pv_oob_pat boolean ?
> 
> There really is no need to distinguish between deny and trap.  IMO,
> injecting #GP unilaterally is fine (to a guest expecting the hypercall
> to succeed, -EINVAL vs #GP makes no difference, but #GP is far more
> useful to a human trying to debug issues here), but I could be talked
> into putting that behind a CONFIG_DEBUG if others feel strongly.

What is or is not useful to guests is guesswork. They might be "handling"
failure with BUG(), in which case they'd still see a backtrace. So to me,
as previously said, injecting #GP in the case here can only reasonably be
a debugging aid (and hence be dependent upon DEBUG).

Jan
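[Editorial note: the allow/deny/trap semantics debated above can be modelled
as below. This is an illustrative sketch only, not actual Xen code; every
name in it (check_pat_entry, pat_entry_is_reserved, the choice of slots 4-7
as "reserved") is hypothetical and chosen only to mirror the behaviour the
patch documentation describes.]

```c
#include <stdbool.h>

/* The three policies named by the proposed invalid-cacheability option. */
enum invalid_cacheability { IC_ALLOW, IC_DENY, IC_TRAP };

#define EINVAL 22

/* Assume, purely for illustration, that PAT slots 4-7 hold the
 * reserved/undefined memory-type encodings. */
static bool pat_entry_is_reserved(unsigned int idx)
{
    return idx >= 4 && idx <= 7;
}

/*
 * Decide what happens when a PV guest asks for PAT entry 'idx':
 *  - allow: accept the request unconditionally;
 *  - deny:  fail the hypercall with -EINVAL;
 *  - trap:  a real hypervisor would inject #GP at this point; here we
 *           merely flag that via *inject_gp.
 */
static int check_pat_entry(unsigned int idx, enum invalid_cacheability policy,
                           bool *inject_gp)
{
    *inject_gp = false;

    if (!pat_entry_is_reserved(idx) || policy == IC_ALLOW)
        return 0;

    if (policy == IC_TRAP) {
        *inject_gp = true;
        return 0; /* handled via the injected fault, not an error return */
    }

    return -EINVAL;
}
```

As the thread notes, to a guest that simply expects success the -EINVAL and
#GP branches are equally fatal; the distinction only matters to whoever is
debugging the failure.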


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 07:17:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 07:17:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472337.732520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDgye-0005ag-VJ; Fri, 06 Jan 2023 07:17:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472337.732520; Fri, 06 Jan 2023 07:17:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDgye-0005aZ-SE; Fri, 06 Jan 2023 07:17:20 +0000
Received: by outflank-mailman (input) for mailman id 472337;
 Fri, 06 Jan 2023 07:17:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ggnj=5D=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDgyd-0005aA-Ur
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 07:17:19 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2065.outbound.protection.outlook.com [40.107.241.65])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2763f03c-8d92-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 08:17:18 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8560.eurprd04.prod.outlook.com (2603:10a6:102:217::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 07:17:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 07:17:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2763f03c-8d92-11ed-91b6-6bf2151ebd3b
Message-ID: <a673fd59-7506-9459-a0b2-2da7972b9b82@suse.com>
Date: Fri, 6 Jan 2023 08:17:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 13/22] xen/x86: Add support for the PMAP
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-14-julien@xen.org>
 <84a96972-3c41-ec94-3513-9944467d9e1c@suse.com>
 <924abe0d-6ba8-5d64-d74a-c2e1894d4f64@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <924abe0d-6ba8-5d64-d74a-c2e1894d4f64@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0126.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8560:EE_
X-MS-Office365-Filtering-Correlation-Id: a0691bbb-a53b-458e-7fea-08daefb60a60
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a0691bbb-a53b-458e-7fea-08daefb60a60
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 07:17:16.5091
 (UTC)

On 05.01.2023 18:50, Julien Grall wrote:
> On 05/01/2023 16:46, Jan Beulich wrote:
>> On 16.12.2022 12:48, Julien Grall wrote:
>>> --- a/xen/arch/x86/include/asm/fixmap.h
>>> +++ b/xen/arch/x86/include/asm/fixmap.h
>>> @@ -21,6 +21,8 @@
>>>   
>>>   #include <xen/acpi.h>
>>>   #include <xen/pfn.h>
>>> +#include <xen/pmap.h>
>>> +
>>>   #include <asm/apicdef.h>
>>>   #include <asm/msi.h>
>>>   #include <acpi/apei.h>
>>> @@ -54,6 +56,8 @@ enum fixed_addresses {
>>>       FIX_XEN_SHARED_INFO,
>>>   #endif /* CONFIG_XEN_GUEST */
>>>       /* Everything else should go further down. */
>>> +    FIX_PMAP_BEGIN,
>>> +    FIX_PMAP_END = FIX_PMAP_BEGIN + NUM_FIX_PMAP,
>>
>> ... you've inserted the new entries after the respective comment? Is
>> there a reason you don't insert farther towards the end of this
>> enumeration?
> 
> I will answer this below.
> 
>>
>>> --- /dev/null
>>> +++ b/xen/arch/x86/include/asm/pmap.h
>>> @@ -0,0 +1,25 @@
>>> +#ifndef __ASM_PMAP_H__
>>> +#define __ASM_PMAP_H__
>>> +
>>> +#include <asm/fixmap.h>
>>> +
>>> +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
>>> +{
>>> +    unsigned long linear = (unsigned long)fix_to_virt(slot);
>>> +    l1_pgentry_t *pl1e = &l1_fixmap[l1_table_offset(linear)];
>>> +
>>> +    ASSERT(!(l1e_get_flags(*pl1e) & _PAGE_PRESENT));
>>> +
>>> +    l1e_write_atomic(pl1e, l1e_from_mfn(mfn, PAGE_HYPERVISOR));
>>> +}
>>> +
>>> +static inline void arch_pmap_unmap(unsigned int slot)
>>> +{
>>> +    unsigned long linear = (unsigned long)fix_to_virt(slot);
>>> +    l1_pgentry_t *pl1e = &l1_fixmap[l1_table_offset(linear)];
>>> +
>>> +    l1e_write_atomic(pl1e, l1e_empty());
>>> +    flush_tlb_one_local(linear);
>>> +}
>>
>> You're effectively open-coding {set,clear}_fixmap(), just without
>> the L1 table allocation (should such be necessary). If you depend
>> on using the build-time L1 table, then you need to move your
>> entries ahead of said comment.
> 
> So the problem is less about the allocation and more about the fact that
> we can't use map_pages_to_xen(), because it would call pmap_map().
> 
> So we need to break the loop; hence set_fixmap()/clear_fixmap() are
> open-coded.
> 
> And indeed, we would need to rely on the build-time L1 table in this 
> case. So I will move the entries earlier.

Additionally we will now need to (finally) gain a build-time check that
all "early" entries actually fit in the static L1 table. XHCI has pushed
us quite a bit up here, and I could see us considering altering (bumping)
the number of PMAP entries.

>> But independent of that you want
>> to either use the existing macros / functions, or explain why you
>> can't.
> 
> This is explained in the caller of arch_pmap*():
> 
>      /*
>       * We cannot use set_fixmap() here. We use PMAP when the domain map
>       * page infrastructure is not yet initialized, so map_pages_to_xen()
>       * called by set_fixmap() needs to map pages on demand, which then
>       * calls pmap() again, resulting in a loop. Modify the PTEs directly
>       * instead. The same is true for pmap_unmap().
>       */
> 
> The comment is valid for Arm, x86 and (I would expect in the future) 
> RISC-V because the page-tables may be allocated in domheap (so not 
> always mapped).
> 
> So I don't feel this comment should be duplicated in the header. But I 
> can certainly explain it in the commit message.

Right, that's what I was after; I'm sorry for not having worded this
precisely enough.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 07:43:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 07:43:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472344.732531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDhNZ-0000Qp-25; Fri, 06 Jan 2023 07:43:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472344.732531; Fri, 06 Jan 2023 07:43:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDhNY-0000Qi-Ti; Fri, 06 Jan 2023 07:43:04 +0000
Received: by outflank-mailman (input) for mailman id 472344;
 Fri, 06 Jan 2023 07:43:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDhNW-0000QS-Vz; Fri, 06 Jan 2023 07:43:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDhNW-0007Va-Sv; Fri, 06 Jan 2023 07:43:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDhNW-0007Hk-BE; Fri, 06 Jan 2023 07:43:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDhNW-0003OY-87; Fri, 06 Jan 2023 07:43:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175598-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175598: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8c2357809e2c352c8ba7c35ab50f49deefd3d39e
X-Osstest-Versions-That:
    ovmf=c32e7331513bfdb625986e4570c304dced4ea109
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 07:43:02 +0000

flight 175598 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175598/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8c2357809e2c352c8ba7c35ab50f49deefd3d39e
baseline version:
 ovmf                 c32e7331513bfdb625986e4570c304dced4ea109

Last test of basis   175596  2023-01-06 03:10:50 Z    0 days
Testing same since   175598  2023-01-06 05:40:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c32e733151..8c2357809e  8c2357809e2c352c8ba7c35ab50f49deefd3d39e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 07:54:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 07:54:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472353.732541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDhYT-0001v3-2G; Fri, 06 Jan 2023 07:54:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472353.732541; Fri, 06 Jan 2023 07:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDhYS-0001uw-Vh; Fri, 06 Jan 2023 07:54:20 +0000
Received: by outflank-mailman (input) for mailman id 472353;
 Fri, 06 Jan 2023 07:54:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ggnj=5D=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDhYR-0001ua-9e
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 07:54:19 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2078.outbound.protection.outlook.com [40.107.20.78])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51c2c393-8d97-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 08:54:17 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9127.eurprd04.prod.outlook.com (2603:10a6:20b:44a::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 07:54:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 07:54:15 +0000
X-Inumbo-ID: 51c2c393-8d97-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a65a4553-d003-1a8d-abb7-5d8c1c9fface@suse.com>
Date: Fri, 6 Jan 2023 08:54:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-4-andrew.cooper3@citrix.com>
 <540a449d-f76d-eb16-4f98-c4fb3564ce98@suse.com>
 <7dd00ce3-a95b-2477-128c-de36e75c4a34@citrix.com>
 <ebaf70e5-e1ba-d72d-84f2-5acb7e38a6bc@suse.com>
 <9ce20298-5870-aa1b-ee5e-e16a623beadb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9ce20298-5870-aa1b-ee5e-e16a623beadb@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0138.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5eaf4153-e3cc-4233-ca2e-08daefbb34e6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 07:54:15.3528
 (UTC)

On 05.01.2023 23:17, Andrew Cooper wrote:
> On 05/01/2023 7:57 am, Jan Beulich wrote:
>> On 04.01.2023 20:55, Andrew Cooper wrote:
>>> On 04/01/2023 4:40 pm, Jan Beulich wrote:
>>>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>>>> A split in virtual address space is only applicable for x86 PV guests.
>>>>> Furthermore, the information returned for x86 64bit PV guests is wrong.
>>>>>
>>>>> Explain the problem in version.h, stating the other information that PV guests
>>>>> need to know.
>>>>>
>>>>> For 64bit PV guests, and all non-x86-PV guests, return 0, which is strictly
>>>>> less wrong than the values currently returned.
>>>> I disagree for the 64-bit part of this. Seeing Linux'es exposure of the
>>>> value in sysfs I even wonder whether we can change this like you do for
>>>> HVM. Who knows what is being inferred from the value, and by whom.
>>> Linux's sysfs ABI isn't relevant to us here.  The sysfs ABI says it
>>> reports what the hypervisor presents, not that it will be a nonzero number.
>> It effectively reports the hypervisor (virtual) base address there. How
>> can we not care if something (kexec comes to mind) were using it for
>> whatever purpose?
> 
> What about kexec do you think would care?

I didn't think about anything specific, but I could see why it may want to
know where in VA space Xen sits.

>>>>> --- a/xen/include/public/version.h
>>>>> +++ b/xen/include/public/version.h
>>>>> @@ -42,6 +42,26 @@ typedef char xen_capabilities_info_t[1024];
>>>>>  typedef char xen_changeset_info_t[64];
>>>>>  #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
>>>>>  
>>>>> +/*
>>>>> + * This API is problematic.
>>>>> + *
>>>>> + * It is only applicable to guests which share pagetables with Xen (x86 PV
>>>>> + * guests), and is supposed to identify the virtual address split between
>>>>> + * guest kernel and Xen.
>>>>> + *
>>>>> + * For 32bit PV guests, it mostly does this, but the caller needs to know that
>>>>> + * Xen lives between the split and 4G.
>>>>> + *
>>>>> + * For 64bit PV guests, Xen lives at the bottom of the upper canonical range.
>>>>> + * This previously returned the start of the upper canonical range (which is
>>>>> + * the userspace/Xen split), not the Xen/kernel split (which is 8TB further
>>>>> + * on).  This now returns 0 because the old number wasn't correct, and
>>>>> + * changing it to anything else would be even worse.
>>>> Whether the guest runs user mode code in the low or high half (or in yet
>>>> another way of splitting) isn't really dictated by the PV ABI, is it?
>>> No, but given a choice of reporting the thing which is an architectural
>>> boundary, or the one which is the actual split between the two adjacent
>>> ranges, reporting the architectural boundary is clearly the unhelpful thing.
>> Hmm. To properly parallel the 32-bit variant, a [start,end] range would need
>> exposing for 64-bit, rather than exposing nothing.
> 
> The 32-bit version is a start/end pair, but with end being implicit at
> the 4G architectural boundary.
> 
> If we were doing 64-bit from scratch, then reporting end would have been
> sensible, because for 64-bit, start is the architectural boundary which
> can be implicit.
> 
> But there is no such thing as a 64bit PV guest with any (useful) idea of
> a variable split, because this number has been junk for the entire
> lifetime of 64bit PV guests.  In particular, ...
> 
>>>>> + * For all guest types using hardware virt extensions, Xen is not mapped into
>>>>> + * the guest kernel virtual address space.  This now returns 0, where it
>>>>> + * previously returned unrelated data.
>>>>> + */
>>>>>  #define XENVER_platform_parameters 5
>>>>>  struct xen_platform_parameters {
>>>>>      xen_ulong_t virt_start;
>>>> ... the field name tells me that all that is being conveyed is the virtual
>>>> address of where the hypervisor area starts.
>>> IMO, it doesn't matter what the name of the field is.  It dates from the
>>> days when 32bit PV was the only type of guest.
>>>
>>> 32bit PV guests really do have a variable split, so the guest kernel
>>> really does need to get this value from Xen.
>>>
>>> The split for 64bit PV guests is compile-time constant, hence why 64bit
>>> PV kernels don't care.
>> ... once we get to run Xen in 5-level mode, 4-level PV guests could also
>> gain a variable split: Like for 32-bit guests now, only the r/o M2P would
>> need to live in that area, and this may well occupy less than the full
>> range presently reserved for the hypervisor.
> 
> ... you can't do this, because it only works for guests which have
> chosen to find the M2P using XENMEM_machphys_mapping (e.g. Linux), and
> doesn't for e.g. MiniOS which does:
> 
> #define machine_to_phys_mapping ((unsigned long *)HYPERVISOR_VIRT_START)

Hmm, looks like a misunderstanding? I certainly wasn't thinking about
making the start of that region variable, but rather the end (i.e. not
exactly like for 32-bit compat).

>>> For compat HVM, it happens to pick up the -1 from:
>>>
>>> #ifdef CONFIG_PV32
>>>     HYPERVISOR_COMPAT_VIRT_START(d) =
>>>         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
>>> #endif
>>>
>>> in arch_domain_create(), whereas for non-compat HVM, it gets a number in
>>> an address space it has no connection to in the slightest.  ARM guests
>>> end up getting XEN_VIRT_START (== 2M) handed back, but this absolutely
>>> an internal detail that guests have no business knowing.
>> Well, okay, this looks to be good enough an argument to make the adjustment
>> you propose for !PV guests.
> 
> Right, HVM (on all architectures) is very cut and dry.
> 
> But it feels wrong to not address the PV64 issue at the same time
> because it is similar level of broken, despite there being (in theory) a
> legitimate need for a PV guest kernel to know it.

To me it feels wrong to address the 64-bit PV issue by removing information,
when - as you also say - it is actually _missing_ information. To me the
proper course of action would be to expose the upper bound as well (such
that, down the road, it could become dynamic). There's also no info leak
there, as the two (static) bounds are part of the PV ABI anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 07:56:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 07:56:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472362.732552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDhaw-0002Y0-Ie; Fri, 06 Jan 2023 07:56:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472362.732552; Fri, 06 Jan 2023 07:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDhaw-0002Xt-Fr; Fri, 06 Jan 2023 07:56:54 +0000
Received: by outflank-mailman (input) for mailman id 472362;
 Fri, 06 Jan 2023 07:56:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ggnj=5D=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDhaw-0002Xn-5P
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 07:56:54 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2088.outbound.protection.outlook.com [40.107.20.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id add77bcc-8d97-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 08:56:51 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9127.eurprd04.prod.outlook.com (2603:10a6:20b:44a::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 07:56:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 07:56:50 +0000
X-Inumbo-ID: add77bcc-8d97-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8ce9eaed-7684-22ea-9ce6-9a45d9cc69c1@suse.com>
Date: Fri, 6 Jan 2023 08:56:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 4/4] xen/version: Introduce non-truncating XENVER_* subops
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-5-andrew.cooper3@citrix.com>
 <a0cb9c83-dc4d-5f03-0f65-3756fadfde0b@suse.com>
 <9c9cedd5-cca7-95d4-00bb-f34a56de2695@citrix.com>
 <f90111d0-b94e-8127-3b13-fbe3558d53f7@suse.com>
 <63789ce5-5dff-c981-4127-d1ef3227595e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <63789ce5-5dff-c981-4127-d1ef3227595e@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0129.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB9127:EE_
X-MS-Office365-Filtering-Correlation-Id: 1184f8a4-23b1-4bb1-074a-08daefbb9146
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1184f8a4-23b1-4bb1-074a-08daefbb9146
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 07:56:50.2804
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6sLSIJxn6efdQWTCApkn9G0rXtAqqn5qqGTTB/f1ZyKZmC6vJm6XnDPYIOWw8jRNDKLEg8cf3HDZ02gjou3exg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB9127

On 05.01.2023 23:28, Andrew Cooper wrote:
> On 05/01/2023 8:15 am, Jan Beulich wrote:
>> On 04.01.2023 19:34, Andrew Cooper wrote:
>>> On 04/01/2023 5:04 pm, Jan Beulich wrote:
>>>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>>>> +    if ( sz > INT32_MAX )
>>>>> +        return -E2BIG; /* Compat guests.  2G ought to be plenty. */
>>>> While the comment here and in the public header mention compat guests,
>>>> the check is uniform. What's the deal?
>>> Well, it's either this, or a (compat ? INT32_MAX : INT64_MAX) check, along
>>> with the ifdeffery and predicates required to make that compile.
>>>
>>> But there's not a CPU today which can actually move 2G of data (which is
>>> 4G of L1d bandwidth) without suffering the watchdog (especially as we've
>>> just read it once for strlen(), so that's 6G of bandwidth), nor do I
>>> expect this to change in the foreseeable future.
>>>
>>> There's some boundary (probably far lower) beyond which we can't use the
>>> algorithm here.
>>>
>>> There needs to be some limit, and I don't feel it is necessary to make
>>> it variable on the guest type.
>> Sure. My question was merely because of the special mentioning of 32-bit /
>> compat guests. I'm fine with the universal limit, and I'd also be fine
>> with a lower (universal) bound. All I'm after is that the (to me at least)
>> confusing comments be adjusted.
> 
> How about 16k then?

Might be okay. If I were to pick a value, I'd use 64k.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 08:37:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 08:37:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472377.732572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDiE9-0007TF-3e; Fri, 06 Jan 2023 08:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472377.732572; Fri, 06 Jan 2023 08:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDiE8-0007T8-W3; Fri, 06 Jan 2023 08:37:24 +0000
Received: by outflank-mailman (input) for mailman id 472377;
 Fri, 06 Jan 2023 08:37:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDiE8-0007Sy-53; Fri, 06 Jan 2023 08:37:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDiE8-0000zV-2X; Fri, 06 Jan 2023 08:37:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDiE7-0000Kv-Ml; Fri, 06 Jan 2023 08:37:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDiE7-0005an-M8; Fri, 06 Jan 2023 08:37:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oaay86yFGUMaN1OluyyeALLbyQBWNI5Vw+Fo2g0aicE=; b=LWJIS4vFBiZUa3N3kiZjPlBSq2
	WeYNHCXkDGjHi3jkkvZ4LTU2rlZESP3BTh0bwPncf4vxChY7CPHyJiQhXn4RjdZR4OnUyDHovBoA7
	fuo5s9rO+xbpvMba9E0Bl4TxZhwI8qTO2ZQdBIgTpwbub8rcPYfV1E4nB1TmvFEs9xLE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175592-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175592: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-examine-bios:xen-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=671f50ffab3329c5497208da89620322b9721a77
X-Osstest-Versions-That:
    xen=c1df06afe578f698ebe91a1e3817463b9d165123
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 08:37:23 +0000

flight 175592 xen-unstable real [real]
flight 175600 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175592/
http://logs.test-lab.xenproject.org/osstest/logs/175600/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 175600-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  6 xen-install                  fail  like 175573
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175573
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175573
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175573
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175573
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175573
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175573
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175573
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175573
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175573
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175573
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175573
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175573
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  671f50ffab3329c5497208da89620322b9721a77
baseline version:
 xen                  c1df06afe578f698ebe91a1e3817463b9d165123

Last test of basis   175573  2023-01-05 02:18:00 Z    1 days
Testing same since   175592  2023-01-05 19:38:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Demi Marie Obenour <demi@invisiblethingslab.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c1df06afe5..671f50ffab  671f50ffab3329c5497208da89620322b9721a77 -> master


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 09:16:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 09:16:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472387.732583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDipr-0003Or-7k; Fri, 06 Jan 2023 09:16:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472387.732583; Fri, 06 Jan 2023 09:16:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDipr-0003Ok-57; Fri, 06 Jan 2023 09:16:23 +0000
Received: by outflank-mailman (input) for mailman id 472387;
 Fri, 06 Jan 2023 09:16:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDipp-0003Oa-Rw; Fri, 06 Jan 2023 09:16:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDipp-00020C-PP; Fri, 06 Jan 2023 09:16:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDipp-0001MM-4I; Fri, 06 Jan 2023 09:16:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDipp-00029C-3j; Fri, 06 Jan 2023 09:16:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jpTPvr9GvyBEH8ptwigiUIrODYrUeuSuGyY7qQEASt0=; b=WCHTlGKlbXVMoZjGfEEC46jaXa
	bRmwuQGkVlwYCMO941uel0y209A0vUs7kZsqbov1lTW/ECe6+jyOtfc9uUxhLMFf04eFX1hi7OoD9
	26wQgrzbj/KZc3Wn1uCz7+BRdz6VDLeFYk2WXgkJ9b7RYD52XcEuLC8aacEuSR2UVFz8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175595-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175595: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d1852caab131ea898134fdcea8c14bc2ee75fbe9
X-Osstest-Versions-That:
    qemuu=f8af61fa14441e67300176a5e07671ea395426b3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 09:16:21 +0000

flight 175595 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175595/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175590
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175590
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175590
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175590
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175590
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175590
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175590
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175590
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d1852caab131ea898134fdcea8c14bc2ee75fbe9
baseline version:
 qemuu                f8af61fa14441e67300176a5e07671ea395426b3

Last test of basis   175590  2023-01-05 17:07:09 Z    0 days
Testing same since   175595  2023-01-06 01:55:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  John Snow <jsnow@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   f8af61fa14..d1852caab1  d1852caab131ea898134fdcea8c14bc2ee75fbe9 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:28:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:28:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472396.732593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDjxL-00026x-Hf; Fri, 06 Jan 2023 10:28:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472396.732593; Fri, 06 Jan 2023 10:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDjxL-00026q-Eh; Fri, 06 Jan 2023 10:28:11 +0000
Received: by outflank-mailman (input) for mailman id 472396;
 Fri, 06 Jan 2023 10:28:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDjxK-00026k-2A
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:28:10 +0000
Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com
 [2a00:1450:4864:20::12d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ceb701eb-8dac-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 11:28:06 +0100 (CET)
Received: by mail-lf1-x12d.google.com with SMTP id g13so1387619lfv.7
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 02:28:06 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 g1-20020a056512118100b004b5732080d1sm107093lfr.150.2023.01.06.02.28.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 02:28:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceb701eb-8dac-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=NweUbldW38duLDKXCcl5QGEb3HNDCi1J7bPtOMrCqec=;
        b=OeezEwoj7ine/gcxcZW1zzNRA7i8wpct49Nd4JgA4zr1yDpClhm9WNCrqzM+fTYpY5
         BNSu6Kxo2+EK2QAdtoYNG9S7uYhtQTa+FChFBfW/ikhXKZK7DB9NzjEOG+fBFMYeowIS
         78l5EXV1QmB4iNaKD+2AwQq88rk1Tjfn+ukOiSq7yLDzoXehu1CTssFu5qt6n6WjrGqS
         q6YQAt2rG7SzZIM31jJQnsQtxnUru8UqJ+IOpNsLGzc2eABHlB0lta1ErjcbrVQ2769x
         VDJYDI6ntRFKLBWzhN8MzmhsK3HD99AEsHDMAUi/gvOKx++8NABafbb+xqAEgjw1Sn36
         3U2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=NweUbldW38duLDKXCcl5QGEb3HNDCi1J7bPtOMrCqec=;
        b=CpZwv2Cp8/WwXSceqgLKEqeB2mXXtn2tQ3a9Z5lqv20LrolI41YHLzFPzn9PU25Hnd
         jscpR84ei4S0sD+XOH6rHiElrvySZFhoEYkuVxshSXLw6yRBVoWlg+K/VN8RGCfRGv1F
         j1QolQ8op++fP4u+eKNe72p/SJES2+A7pv3QJprk2k0hLl6BwvHICSpt1KO4CFZhIWhb
         pG4Wr3u2PCXxawb/U75LBzQMi1JagupsP3AAo0SVAkBmS3ps7JGc+GXb8V9SXv/gx1pa
         dqYxDnQfESTRi3HvWn+1ED9IQNyHxiaep2cBynWMUdE6X5nELf1Qyj8wHKVg4V/pbYF6
         JMyQ==
X-Gm-Message-State: AFqh2ko2LySfzEZy8rirIwxAMy+Hazpi0NGamnFdJPY+gjFZ9GgbVm+v
	ygfQ2Erm7ypVMr469JWy0k4clDIYsPokdjLG
X-Google-Smtp-Source: AMrXdXs2EFiit5RARTsnthGpvKYH7U73IESbrbaV/NAHQICGUs6kEHu9To4eM5uoRA4Tco6rDK5GBA==
X-Received: by 2002:ac2:558f:0:b0:4cc:597b:583e with SMTP id v15-20020ac2558f000000b004cc597b583emr1634692lfg.55.1673000885346;
        Fri, 06 Jan 2023 02:28:05 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH] automation: temporarily disable CONFIG_COVERAGE for RISC-V randconfig jobs
Date: Fri,  6 Jan 2023 12:28:01 +0200
Message-Id: <5f47cd290a5f173655d7dace7f61384e1f32c8c1.1673000881.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As common/ isn't built for the RISC-V architecture yet, common/coverage
(where the __gcov_* functions are defined) isn't built either, but
randconfig may decide to enable CONFIG_COVERAGE, which leads to the
following build error:

riscv64-linux-gnu-ld: prelink.o: in function `.L0 ':
arch/riscv/early_printk.c:(.text+0x18):
    undefined reference to `__gcov_init'
riscv64-linux-gnu-ld: arch/riscv/early_printk.c:(.text+0x40):
    undefined reference to `__gcov_exit'

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 automation/gitlab-ci/build.yaml | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index 6784974619..a292f0fb18 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -667,6 +667,8 @@ riscv64-cross-gcc-randconfig:
     CONTAINER: archlinux:riscv64
     KBUILD_DEFCONFIG: tiny64_defconfig
     RANDCONFIG: y
+    EXTRA_FIXED_RANDCONFIG:
+      CONFIG_COVERAGE=n
 
 riscv64-cross-gcc-debug-randconfig:
   extends: .gcc-riscv64-cross-build-debug
@@ -674,6 +676,8 @@ riscv64-cross-gcc-debug-randconfig:
     CONTAINER: archlinux:riscv64
     KBUILD_DEFCONFIG: tiny64_defconfig
     RANDCONFIG: y
+    EXTRA_FIXED_RANDCONFIG:
+      CONFIG_COVERAGE=n
 
 ## Test artifacts common
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:30:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472403.732605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDjz9-00032L-TO; Fri, 06 Jan 2023 10:30:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472403.732605; Fri, 06 Jan 2023 10:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDjz9-00031s-Pw; Fri, 06 Jan 2023 10:30:03 +0000
Received: by outflank-mailman (input) for mailman id 472403;
 Fri, 06 Jan 2023 10:30:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDjz7-0002dy-MU; Fri, 06 Jan 2023 10:30:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDjz7-0003fn-K7; Fri, 06 Jan 2023 10:30:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDjz7-0004Rf-Ao; Fri, 06 Jan 2023 10:30:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDjz7-0008IM-AJ; Fri, 06 Jan 2023 10:30:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=psVmlWnKebrLqwSydI25TOhJ0s5wtPmbJuzCuPgU4KA=; b=Ep3QxLWgXKXnj6NFcPLyrB3fRh
	dPM0hxdtvthUuiyQpLn7P8xZX4x8RQpNLbF4oxnK73LOQk7sqZO4w7eoZ4salWuf1Z2n6vGDtqI8p
	cAwkFoZuyxYpXQUg+RnaGwsnSrqQ1QjptFi41SMVGSYKli+Uy3V7EaDb9rsAVLyayRZ8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175597-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175597: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=6cd2b4e101cc0b60c6db83f763a08daea67ad6eb
X-Osstest-Versions-That:
    libvirt=78b3400e50ee0ba01575749728aae9c79d9d116b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 10:30:01 +0000

flight 175597 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175597/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175577
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175577
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175577
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              6cd2b4e101cc0b60c6db83f763a08daea67ad6eb
baseline version:
 libvirt              78b3400e50ee0ba01575749728aae9c79d9d116b

Last test of basis   175577  2023-01-05 04:18:49 Z    1 days
Testing same since   175597  2023-01-06 04:20:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   78b3400e50..6cd2b4e101  6cd2b4e101cc0b60c6db83f763a08daea67ad6eb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:31:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:31:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472413.732615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDk04-00043E-BQ; Fri, 06 Jan 2023 10:31:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472413.732615; Fri, 06 Jan 2023 10:31:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDk04-000437-8i; Fri, 06 Jan 2023 10:31:00 +0000
Received: by outflank-mailman (input) for mailman id 472413;
 Fri, 06 Jan 2023 10:30:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDk02-0003xJ-MJ
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:30:59 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32d9d40b-8dad-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 11:30:56 +0100 (CET)
Received: from mail-dm6nam11lp2172.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jan 2023 05:30:50 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6778.namprd03.prod.outlook.com (2603:10b6:a03:40d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 10:30:48 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 10:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32d9d40b-8dad-11ed-91b6-6bf2151ebd3b
X-IronPort-RemoteIP: 104.47.57.172
X-IronPort-MID: 90403247
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,304,1665460800"; 
   d="scan'208";a="90403247"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH] automation: temporarily disable CONFIG_COVERAGE for
 RISC-V randconfig jobs
Thread-Topic: [PATCH] automation: temporarily disable CONFIG_COVERAGE for
 RISC-V randconfig jobs
Thread-Index: AQHZIbmTfeWmiVKwmkCDRKG76wzwDK6RMG4A
Date: Fri, 6 Jan 2023 10:30:48 +0000
Message-ID: <5c03fcbf-d08a-5d0a-d217-c0f4fbdeb133@citrix.com>
References:
 <5f47cd290a5f173655d7dace7f61384e1f32c8c1.1673000881.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <5f47cd290a5f173655d7dace7f61384e1f32c8c1.1673000881.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6778:EE_
x-ms-office365-filtering-correlation-id: 79d96588-837d-4481-7eef-08daefd113e3
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <6E45605D93C7A547BF138F56FC0FE356@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 79d96588-837d-4481-7eef-08daefd113e3
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Jan 2023 10:30:48.7069
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6778

On 06/01/2023 10:28 am, Oleksii Kurochko wrote:
> As common isn't built for RISC-V architecture now, accordingly,
> common/coverage (where __gconv_* function are defined) isn't built either
> but randconfig may decide to enable CONFIG_COVERAGE which will lead to
> the following compilation error:
>
> riscv64-linux-gnu-ld: prelink.o: in function `.L0 ':
> arch/riscv/early_printk.c:(.text+0x18):
>     undefined reference to `__gcov_init'
> riscv64-linux-gnu-ld: arch/riscv/early_printk.c:(.text+0x40):
>     undefined reference to `__gcov_exit'
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:36:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:36:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472422.732637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDk5Q-00050f-6Q; Fri, 06 Jan 2023 10:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472422.732637; Fri, 06 Jan 2023 10:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDk5Q-00050W-3L; Fri, 06 Jan 2023 10:36:32 +0000
Received: by outflank-mailman (input) for mailman id 472422;
 Fri, 06 Jan 2023 10:36:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>) id 1pDk5P-000500-5n
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:36:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>)
 id 1pDk5K-0003o2-SY; Fri, 06 Jan 2023 10:36:26 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=debian.cbg12.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>)
 id 1pDk5K-0001wX-Ir; Fri, 06 Jan 2023 10:36:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Paul Durrant <pdurrant@amazon.com>
To: x86@kernel.org,
	kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Sean Christopherson <seanjc@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v7 1/2] KVM: x86/cpuid: generalize kvm_update_kvm_cpuid_base() and also capture limit
Date: Fri,  6 Jan 2023 10:35:59 +0000
Message-Id: <20230106103600.528-2-pdurrant@amazon.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20230106103600.528-1-pdurrant@amazon.com>
References: <20230106103600.528-1-pdurrant@amazon.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A subsequent patch will need to acquire the CPUID leaf range for emulated
Xen, so explicitly pass the signature of the hypervisor we're interested in
to the new function. Also introduce a new kvm_hypervisor_cpuid structure
so we can neatly store both the base and limit leaf indices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>

v7:
 - Morph kvm_update_hypervisor_cpuid() into kvm_get_hypervisor_cpuid()
 - Place the definition of struct kvm_hypervisor_cpuid to avoid churn
   in patch #2.

v6:
 - New in this version
---
 arch/x86/include/asm/kvm_host.h |  7 ++++++-
 arch/x86/kvm/cpuid.c            | 24 +++++++++++++-----------
 2 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c70690b2c82d..85cbe4571ac9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -678,6 +678,11 @@ struct kvm_vcpu_hv {
 	} nested;
 };
 
+struct kvm_hypervisor_cpuid {
+	u32 base;
+	u32 limit;
+};
+
 /* Xen HVM per vcpu emulation context */
 struct kvm_vcpu_xen {
 	u64 hypercall_rip;
@@ -826,7 +831,7 @@ struct kvm_vcpu_arch {
 
 	int cpuid_nent;
 	struct kvm_cpuid_entry2 *cpuid_entries;
-	u32 kvm_cpuid_base;
+	struct kvm_hypervisor_cpuid kvm_cpuid;
 
 	u64 reserved_gpa_bits;
 	int maxphyaddr;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 69768e4d53a6..db5a4d38fcd0 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -180,15 +180,15 @@ static int kvm_cpuid_check_equal(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2
 	return 0;
 }
 
-static void kvm_update_kvm_cpuid_base(struct kvm_vcpu *vcpu)
+static struct kvm_hypervisor_cpuid kvm_get_hypervisor_cpuid(struct kvm_vcpu *vcpu,
+							    const char *sig)
 {
-	u32 function;
+	struct kvm_hypervisor_cpuid cpuid = {};
 	struct kvm_cpuid_entry2 *entry;
+	u32 base;
 
-	vcpu->arch.kvm_cpuid_base = 0;
-
-	for_each_possible_hypervisor_cpuid_base(function) {
-		entry = kvm_find_cpuid_entry(vcpu, function);
+	for_each_possible_hypervisor_cpuid_base(base) {
+		entry = kvm_find_cpuid_entry(vcpu, base);
 
 		if (entry) {
 			u32 signature[3];
@@ -197,19 +197,21 @@ static void kvm_update_kvm_cpuid_base(struct kvm_vcpu *vcpu)
 			signature[1] = entry->ecx;
 			signature[2] = entry->edx;
 
-			BUILD_BUG_ON(sizeof(signature) > sizeof(KVM_SIGNATURE));
-			if (!memcmp(signature, KVM_SIGNATURE, sizeof(signature))) {
-				vcpu->arch.kvm_cpuid_base = function;
+			if (!memcmp(signature, sig, sizeof(signature))) {
+				cpuid.base = base;
+				cpuid.limit = entry->eax;
 				break;
 			}
 		}
 	}
+
+	return cpuid;
 }
 
 static struct kvm_cpuid_entry2 *__kvm_find_kvm_cpuid_features(struct kvm_vcpu *vcpu,
 					      struct kvm_cpuid_entry2 *entries, int nent)
 {
-	u32 base = vcpu->arch.kvm_cpuid_base;
+	u32 base = vcpu->arch.kvm_cpuid.base;
 
 	if (!base)
 		return NULL;
@@ -439,7 +441,7 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
 	vcpu->arch.cpuid_entries = e2;
 	vcpu->arch.cpuid_nent = nent;
 
-	kvm_update_kvm_cpuid_base(vcpu);
+	vcpu->arch.kvm_cpuid = kvm_get_hypervisor_cpuid(vcpu, KVM_SIGNATURE);
 	kvm_vcpu_after_set_cpuid(vcpu);
 
 	return 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:36:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:36:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472421.732626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDk5N-0004l0-UJ; Fri, 06 Jan 2023 10:36:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472421.732626; Fri, 06 Jan 2023 10:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDk5N-0004kt-Rf; Fri, 06 Jan 2023 10:36:29 +0000
Received: by outflank-mailman (input) for mailman id 472421;
 Fri, 06 Jan 2023 10:36:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>) id 1pDk5N-0004kn-53
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:36:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>)
 id 1pDk5E-0003mV-2t; Fri, 06 Jan 2023 10:36:20 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=debian.cbg12.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>)
 id 1pDk5D-0001wX-Ni; Fri, 06 Jan 2023 10:36:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Paul Durrant <pdurrant@amazon.com>
To: x86@kernel.org,
	kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Borislav Petkov <bp@alien8.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Woodhouse <dwmw2@infradead.org>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [PATCH v7 0/2] KVM: x86/xen: update Xen CPUID Leaf 4
Date: Fri,  6 Jan 2023 10:35:58 +0000
Message-Id: <20230106103600.528-1-pdurrant@amazon.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Patch #2 was the original patch; it was expanded into a series in v6.

Paul Durrant (2):
  KVM: x86/cpuid: generalize kvm_update_kvm_cpuid_base() and also
    capture limit
  KVM: x86/xen: update Xen CPUID Leaf 4 (tsc info) sub-leaves, if
    present

 arch/x86/include/asm/kvm_host.h       |  8 +++++++-
 arch/x86/include/asm/xen/hypervisor.h |  4 +++-
 arch/x86/kvm/cpuid.c                  | 26 +++++++++++++++-----------
 arch/x86/kvm/x86.c                    |  1 +
 arch/x86/kvm/xen.c                    | 26 ++++++++++++++++++++++++++
 arch/x86/kvm/xen.h                    |  7 +++++++
 6 files changed, 59 insertions(+), 13 deletions(-)
---
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:36:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:36:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472425.732649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDk5e-0005Z8-GT; Fri, 06 Jan 2023 10:36:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472425.732649; Fri, 06 Jan 2023 10:36:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDk5e-0005Yz-C8; Fri, 06 Jan 2023 10:36:46 +0000
Received: by outflank-mailman (input) for mailman id 472425;
 Fri, 06 Jan 2023 10:36:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>) id 1pDk5d-0005Uh-4T
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:36:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>)
 id 1pDk5O-0003oH-OG; Fri, 06 Jan 2023 10:36:30 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=debian.cbg12.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <pdurrant@amazon.com>)
 id 1pDk5O-0001wX-EX; Fri, 06 Jan 2023 10:36:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Paul Durrant <pdurrant@amazon.com>
To: x86@kernel.org,
	kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Sean Christopherson <seanjc@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v7 2/2] KVM: x86/xen: update Xen CPUID Leaf 4 (tsc info) sub-leaves, if present
Date: Fri,  6 Jan 2023 10:36:00 +0000
Message-Id: <20230106103600.528-3-pdurrant@amazon.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20230106103600.528-1-pdurrant@amazon.com>
References: <20230106103600.528-1-pdurrant@amazon.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The scaling information in sub-leaf 1 should match the values set by KVM in
the 'vcpu_info' sub-structure 'time_info' (a.k.a. pvclock_vcpu_time_info),
which is shared with the guest but is not directly available to the VMM.
The offset values are not set, since a TSC offset is already applied.
The TSC frequency should also be set, in sub-leaf 2.
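For context, the pvclock convention that these sub-leaf values mirror converts guest TSC ticks to nanoseconds as ((ticks << tsc_shift) * tsc_to_system_mul) >> 32, with a right shift when tsc_shift is negative. The following is a minimal Python sketch of that derivation, not the kernel's kvm_get_time_scale() itself (which does equivalent fixed-point math); the function names are invented for illustration:

```python
def time_scale(tsc_khz):
    """Pick (shift, mul) so that ((ticks << shift) * mul) >> 32 ~= ns.

    ns-per-tick = 1e6 / tsc_khz, factored as 2**shift * (mul / 2**32)
    with mul / 2**32 kept in [0.5, 1.0) for precision.
    """
    ratio = 1_000_000 / tsc_khz        # nanoseconds per TSC tick
    shift = 0
    while ratio >= 1.0:
        ratio /= 2.0
        shift += 1
    while ratio < 0.5:
        ratio *= 2.0
        shift -= 1
    return shift, round(ratio * (1 << 32))


def tsc_to_ns(ticks, shift, mul):
    # Same order of operations as pvclock: apply the shift first,
    # then the 32.32 fixed-point multiply.
    ticks = ticks << shift if shift >= 0 else ticks >> -shift
    return (ticks * mul) >> 32
```

For a 3 GHz TSC this yields shift = -1 and mul ~ 2/3 * 2^32, so 3e9 ticks convert to roughly 1e9 ns.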

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: David Woodhouse <dwmw2@infradead.org>

v7:
 - Add a definition of XEN_SIGNATURE into asm/xen/hypervisor.h
   and use that

v6:
 - Stash Xen cpuid base and limit values when cpuid is set
 - Re-name kvm_xen_setup_tsc_info() to kvm_xen_update_tsc_info()

v5:
 - Drop the caching of the CPUID entry pointers and only update the
   sub-leaves if the CPU frequency has actually changed

v4:
 - Update commit comment

v3:
 - Add leaf limit check in kvm_xen_set_cpuid()

v2:
 - Make sure sub-leaf pointers are NULLed if the time leaf is removed
---
 arch/x86/include/asm/kvm_host.h       |  1 +
 arch/x86/include/asm/xen/hypervisor.h |  4 +++-
 arch/x86/kvm/cpuid.c                  |  2 ++
 arch/x86/kvm/x86.c                    |  1 +
 arch/x86/kvm/xen.c                    | 26 ++++++++++++++++++++++++++
 arch/x86/kvm/xen.h                    |  7 +++++++
 6 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 85cbe4571ac9..f3e9f6e4b3ea 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -703,6 +703,7 @@ struct kvm_vcpu_xen {
 	struct hrtimer timer;
 	int poll_evtchn;
 	struct timer_list poll_timer;
+	struct kvm_hypervisor_cpuid cpuid;
 };
 
 struct kvm_queued_exception {
diff --git a/arch/x86/include/asm/xen/hypervisor.h b/arch/x86/include/asm/xen/hypervisor.h
index 16f548a661cf..5fc35f889cd1 100644
--- a/arch/x86/include/asm/xen/hypervisor.h
+++ b/arch/x86/include/asm/xen/hypervisor.h
@@ -38,9 +38,11 @@ extern struct start_info *xen_start_info;
 
 #include <asm/processor.h>
 
+#define XEN_SIGNATURE "XenVMMXenVMM"
+
 static inline uint32_t xen_cpuid_base(void)
 {
-	return hypervisor_cpuid_base("XenVMMXenVMM", 2);
+	return hypervisor_cpuid_base(XEN_SIGNATURE, 2);
 }
 
 struct pci_dev;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index db5a4d38fcd0..560f880cc9fd 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -25,6 +25,7 @@
 #include "mmu.h"
 #include "trace.h"
 #include "pmu.h"
+#include "xen.h"
 
 /*
  * Unlike "struct cpuinfo_x86.x86_capability", kvm_cpu_caps doesn't need to be
@@ -442,6 +443,7 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
 	vcpu->arch.cpuid_nent = nent;
 
 	vcpu->arch.kvm_cpuid = kvm_get_hypervisor_cpuid(vcpu, KVM_SIGNATURE);
+	vcpu->arch.xen.cpuid = kvm_get_hypervisor_cpuid(vcpu, XEN_SIGNATURE);
 	kvm_vcpu_after_set_cpuid(vcpu);
 
 	return 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f6ef44dc8a0e..e849a5445cac 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3158,6 +3158,7 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 				   &vcpu->hv_clock.tsc_shift,
 				   &vcpu->hv_clock.tsc_to_system_mul);
 		vcpu->hw_tsc_khz = tgt_tsc_khz;
+		kvm_xen_update_tsc_info(v);
 	}
 
 	vcpu->hv_clock.tsc_timestamp = tsc_timestamp;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 2e29bdc2949c..fbfc7c17defd 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -22,6 +22,9 @@
 #include <xen/interface/event_channel.h>
 #include <xen/interface/sched.h>
 
+#include <asm/xen/cpuid.h>
+
+#include "cpuid.h"
 #include "trace.h"
 
 static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm);
@@ -2067,6 +2070,29 @@ void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
 	del_timer_sync(&vcpu->arch.xen.poll_timer);
 }
 
+void kvm_xen_update_tsc_info(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpuid_entry2 *entry;
+	u32 function;
+
+	if (!vcpu->arch.xen.cpuid.base)
+		return;
+
+	function = vcpu->arch.xen.cpuid.base | XEN_CPUID_LEAF(3);
+	if (function > vcpu->arch.xen.cpuid.limit)
+		return;
+
+	entry = kvm_find_cpuid_entry_index(vcpu, function, 1);
+	if (entry) {
+		entry->ecx = vcpu->arch.hv_clock.tsc_to_system_mul;
+		entry->edx = vcpu->arch.hv_clock.tsc_shift;
+	}
+
+	entry = kvm_find_cpuid_entry_index(vcpu, function, 2);
+	if (entry)
+		entry->eax = vcpu->arch.hw_tsc_khz;
+}
+
 void kvm_xen_init_vm(struct kvm *kvm)
 {
 	idr_init(&kvm->arch.xen.evtchn_ports);
diff --git a/arch/x86/kvm/xen.h b/arch/x86/kvm/xen.h
index ea33d80a0c51..f8f1fe22d090 100644
--- a/arch/x86/kvm/xen.h
+++ b/arch/x86/kvm/xen.h
@@ -9,6 +9,8 @@
 #ifndef __ARCH_X86_KVM_XEN_H__
 #define __ARCH_X86_KVM_XEN_H__
 
+#include <asm/xen/hypervisor.h>
+
 #ifdef CONFIG_KVM_XEN
 #include <linux/jump_label_ratelimit.h>
 
@@ -32,6 +34,7 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe,
 int kvm_xen_setup_evtchn(struct kvm *kvm,
 			 struct kvm_kernel_irq_routing_entry *e,
 			 const struct kvm_irq_routing_entry *ue);
+void kvm_xen_update_tsc_info(struct kvm_vcpu *vcpu);
 
 static inline bool kvm_xen_msr_enabled(struct kvm *kvm)
 {
@@ -135,6 +138,10 @@ static inline bool kvm_xen_timer_enabled(struct kvm_vcpu *vcpu)
 {
 	return false;
 }
+
+static inline void kvm_xen_update_tsc_info(struct kvm_vcpu *vcpu)
+{
+}
 #endif
 
 int kvm_xen_hypercall(struct kvm_vcpu *vcpu);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:41:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:41:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472445.732660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkA7-0007Mi-6G; Fri, 06 Jan 2023 10:41:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472445.732660; Fri, 06 Jan 2023 10:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkA7-0007Ma-3L; Fri, 06 Jan 2023 10:41:23 +0000
Received: by outflank-mailman (input) for mailman id 472445;
 Fri, 06 Jan 2023 10:41:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7O5U=5D=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pDkA6-0007M7-4u
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:41:22 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id a7effac7-8dae-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 11:41:20 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 638D911FB;
 Fri,  6 Jan 2023 02:42:01 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 76EEA3F23F;
 Fri,  6 Jan 2023 02:41:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7effac7-8dae-11ed-91b6-6bf2151ebd3b
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/2] Cppcheck MISRA analysis improvements
Date: Fri,  6 Jan 2023 10:41:06 +0000
Message-Id: <20230106104108.14740-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

This series adds a way to skip the check for some of the rules that the Xen
project has agreed to follow. This is because cppcheck reports too many
false positives on some rules, and it is easier, in a first phase, to skip
the check on those rules and let the tool mature before enabling it for
them.

The series also includes an improvement to the cppcheck report.

Luca Fancellu (2):
  xen/cppcheck: sort alphabetically cppcheck report entries
  xen/cppcheck: add parameter to skip given MISRA rules

 xen/scripts/xen_analysis/cppcheck_analysis.py |  8 +++--
 .../xen_analysis/cppcheck_report_utils.py     |  2 ++
 xen/scripts/xen_analysis/settings.py          | 35 +++++++++++--------
 xen/tools/convert_misra_doc.py                | 28 ++++++++++-----
 4 files changed, 48 insertions(+), 25 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:41:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:41:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472446.732671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkA8-0007bo-Da; Fri, 06 Jan 2023 10:41:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472446.732671; Fri, 06 Jan 2023 10:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkA8-0007bf-A5; Fri, 06 Jan 2023 10:41:24 +0000
Received: by outflank-mailman (input) for mailman id 472446;
 Fri, 06 Jan 2023 10:41:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7O5U=5D=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pDkA6-0007M7-UG
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:41:22 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id a8b5b86e-8dae-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 11:41:21 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AFFB112FC;
 Fri,  6 Jan 2023 02:42:02 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C54FB3F23F;
 Fri,  6 Jan 2023 02:41:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8b5b86e-8dae-11ed-91b6-6bf2151ebd3b
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] xen/cppcheck: sort alphabetically cppcheck report entries
Date: Fri,  6 Jan 2023 10:41:07 +0000
Message-Id: <20230106104108.14740-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230106104108.14740-1-luca.fancellu@arm.com>
References: <20230106104108.14740-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Sort cppcheck report entries alphabetically when producing the text
report. This makes it easier to compare different reports and groups
together findings from the same file.
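Since each text-report line begins with the (path-stripped) file name, a plain lexicographic sort is enough both to stabilise the output and to cluster findings per file. A small illustration with invented report entries (the real report shape may differ):

```python
# Hypothetical entries in a "path(line): message" shape; contents invented.
report = [
    "xen/arch/x86/mm.c(120): misra violation\n",
    "xen/arch/arm/mm.c(88): misra violation\n",
    "xen/arch/x86/mm.c(45): misra violation\n",
]

# Same operation the patch applies to text_report_content.
report.sort()
```

After sorting, the two x86/mm.c findings are adjacent, so a diff between two such reports lines up per file.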

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/xen_analysis/cppcheck_report_utils.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
index 02440aefdfec..f02166ed9d19 100644
--- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
+++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
@@ -104,6 +104,8 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
                 for path in strip_paths:
                     text_report_content[i] = text_report_content[i].replace(
                                                                 path + "/", "")
+            # sort alphabetically the entries
+            text_report_content.sort()
             # Write the final text report
             outfile.writelines(text_report_content)
     except OSError as e:
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:41:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:41:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472447.732682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkAA-0007sj-Lr; Fri, 06 Jan 2023 10:41:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472447.732682; Fri, 06 Jan 2023 10:41:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkAA-0007sc-IM; Fri, 06 Jan 2023 10:41:26 +0000
Received: by outflank-mailman (input) for mailman id 472447;
 Fri, 06 Jan 2023 10:41:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7O5U=5D=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pDkA9-0007m5-9K
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:41:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id a9884c2e-8dae-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 11:41:22 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0964B15A1;
 Fri,  6 Jan 2023 02:42:04 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1EC053F23F;
 Fri,  6 Jan 2023 02:41:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9884c2e-8dae-11ed-b8d0-410ff93cb8f0
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] xen/cppcheck: add parameter to skip given MISRA rules
Date: Fri,  6 Jan 2023 10:41:08 +0000
Message-Id: <20230106104108.14740-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230106104108.14740-1-luca.fancellu@arm.com>
References: <20230106104108.14740-1-luca.fancellu@arm.com>

Add a parameter to skip the given MISRA rules during the cppcheck
analysis. The rules are specified as a comma-separated list using
the MISRA number notation (e.g. 1.1,1.3,...).

Modify the convert_misra_doc.py script to take an extra parameter
giving a comma-separated list of MISRA rules to be skipped.
While there, fix some typos in the help and print functions.

Modify settings.py and cppcheck_analysis.py to add a new
parameter (--cppcheck-skip-rules) used to specify a list of
MISRA rules to be skipped during the cppcheck analysis.
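The intended behaviour can be sketched as follows: rules passed via -s are pre-seeded into the skip list, so undocumented rules still receive dummy entries while the forced ones are suppressed from that step. This is a simplified, hypothetical model of the logic, not the script itself:

```python
def build_skip_list(rule_list, force_skip, rules_per_section):
    """Seed skip_list with the forced rules, then record any rule that
    is neither documented nor already forced (those get dummy text)."""
    # Rules forced via -s/--skip, e.g. "1.1,20.7"; empty string means none.
    skip_list = [r for r in force_skip.split(',') if r]
    dummies = []
    for section, count in rules_per_section.items():
        for j in range(1, count + 1):
            rule = f"{section}.{j}"
            if rule not in rule_list and rule not in skip_list:
                dummies.append(rule)
                skip_list.append(rule)
    return skip_list, dummies
```

With the series applied, the corresponding invocation would look something like --cppcheck-skip-rules=1.1,20.7 on the analysis script's command line.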

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/xen_analysis/cppcheck_analysis.py |  8 +++--
 xen/scripts/xen_analysis/settings.py          | 35 +++++++++++--------
 xen/tools/convert_misra_doc.py                | 28 ++++++++++-----
 3 files changed, 46 insertions(+), 25 deletions(-)

diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
index 0e952a169641..cc1f403d315e 100644
--- a/xen/scripts/xen_analysis/cppcheck_analysis.py
+++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
@@ -153,11 +153,15 @@ def generate_cppcheck_deps():
     if settings.cppcheck_misra:
         cppcheck_flags = cppcheck_flags + " --addon=cppcheck-misra.json"
 
+        skip_rules_arg = ""
+        if settings.cppcheck_skip_rules != "":
+            skip_rules_arg = "-s {}".format(settings.cppcheck_skip_rules)
+
         utils.invoke_command(
             "{}/convert_misra_doc.py -i {}/docs/misra/rules.rst"
-            " -o {}/cppcheck-misra.txt -j {}/cppcheck-misra.json"
+            " -o {}/cppcheck-misra.txt -j {}/cppcheck-misra.json {}"
                 .format(settings.tools_dir, settings.repo_dir,
-                        settings.outdir, settings.outdir),
+                        settings.outdir, settings.outdir, skip_rules_arg),
             False, CppcheckDepsPhaseError,
             "An error occured when running:\n{}"
         )
diff --git a/xen/scripts/xen_analysis/settings.py b/xen/scripts/xen_analysis/settings.py
index a8502e554e95..8c0d357fe0dc 100644
--- a/xen/scripts/xen_analysis/settings.py
+++ b/xen/scripts/xen_analysis/settings.py
@@ -24,6 +24,7 @@ cppcheck_binpath = "cppcheck"
 cppcheck_html = False
 cppcheck_htmlreport_binpath = "cppcheck-htmlreport"
 cppcheck_misra = False
+cppcheck_skip_rules = ""
 make_forward_args = ""
 outdir = xen_dir
 
@@ -53,20 +54,22 @@ Cppcheck report creation phase runs only when --run-cppcheck is passed to the
 script.
 
 Options:
-  --build-only          Run only the commands to build Xen with the optional
-                        make arguments passed to the script
-  --clean-only          Run only the commands to clean the analysis artifacts
-  --cppcheck-bin=       Path to the cppcheck binary (Default: {})
-  --cppcheck-html       Produce an additional HTML output report for Cppcheck
-  --cppcheck-html-bin=  Path to the cppcheck-html binary (Default: {})
-  --cppcheck-misra      Activate the Cppcheck MISRA analysis
-  --distclean           Clean analysis artifacts and reports
-  -h, --help            Print this help
-  --no-build            Skip the build Xen phase
-  --no-clean            Don\'t clean the analysis artifacts on exit
-  --run-coverity        Run the analysis for the Coverity tool
-  --run-cppcheck        Run the Cppcheck analysis tool on Xen
-  --run-eclair          Run the analysis for the Eclair tool
+  --build-only            Run only the commands to build Xen with the optional
+                          make arguments passed to the script
+  --clean-only            Run only the commands to clean the analysis artifacts
+  --cppcheck-bin=         Path to the cppcheck binary (Default: {})
+  --cppcheck-html         Produce an additional HTML output report for Cppcheck
+  --cppcheck-html-bin=    Path to the cppcheck-html binary (Default: {})
+  --cppcheck-misra        Activate the Cppcheck MISRA analysis
+  --cppcheck-skip-rules=  List of MISRA rules to be skipped, comma separated.
+                          (e.g. --cppcheck-skip-rules=1.1,20.7,8.4)
+  --distclean             Clean analysis artifacts and reports
+  -h, --help              Print this help
+  --no-build              Skip the build Xen phase
+  --no-clean              Don\'t clean the analysis artifacts on exit
+  --run-coverity          Run the analysis for the Coverity tool
+  --run-cppcheck          Run the Cppcheck analysis tool on Xen
+  --run-eclair            Run the analysis for the Eclair tool
 """
     print(msg.format(sys.argv[0], cppcheck_binpath,
                      cppcheck_htmlreport_binpath))
@@ -78,6 +81,7 @@ def parse_commandline(argv):
     global cppcheck_html
     global cppcheck_htmlreport_binpath
     global cppcheck_misra
+    global cppcheck_skip_rules
     global make_forward_args
     global outdir
     global step_get_make_vars
@@ -115,6 +119,9 @@ def parse_commandline(argv):
             cppcheck_htmlreport_binpath = args_with_content_regex.group(2)
         elif option == "--cppcheck-misra":
             cppcheck_misra = True
+        elif args_with_content_regex and \
+             args_with_content_regex.group(1) == "--cppcheck-skip-rules":
+            cppcheck_skip_rules = args_with_content_regex.group(2)
         elif option == "--distclean":
             target_distclean = True
         elif (option == "--help") or (option == "-h"):
diff --git a/xen/tools/convert_misra_doc.py b/xen/tools/convert_misra_doc.py
index 13074d8a2e91..8984ec625fa7 100755
--- a/xen/tools/convert_misra_doc.py
+++ b/xen/tools/convert_misra_doc.py
@@ -4,12 +4,14 @@
 This script is converting the misra documentation RST file into a text file
 that can be used as text-rules for cppcheck.
 Usage:
-    convert_misr_doc.py -i INPUT [-o OUTPUT] [-j JSON]
+    convert_misra_doc.py -i INPUT [-o OUTPUT] [-j JSON] [-s RULES,[...,RULES]]
 
     INPUT  - RST file containing the list of misra rules.
     OUTPUT - file to store the text output to be used by cppcheck.
              If not specified, the result will be printed to stdout.
     JSON   - cppcheck json file to be created (optional).
+    RULES  - list of rules to skip during the analysis, comma separated
+             (e.g. 1.1,1.2,1.3,...)
 """
 
 import sys, getopt, re
@@ -47,21 +49,25 @@ def main(argv):
     outfile = ''
     outstr = sys.stdout
     jsonfile = ''
+    force_skip = ''
 
     try:
-        opts, args = getopt.getopt(argv,"hi:o:j:",["input=","output=","json="])
+        opts, args = getopt.getopt(argv,"hi:o:j:s:",
+                                   ["input=","output=","json=","skip="])
     except getopt.GetoptError:
-        print('convert-misra.py -i <input> [-o <output>] [-j <json>')
+        print('convert-misra.py -i <input> [-o <output>] [-j <json>] [-s <rules>]')
         sys.exit(2)
     for opt, arg in opts:
         if opt == '-h':
-            print('convert-misra.py -i <input> [-o <output>] [-j <json>')
+            print('convert-misra.py -i <input> [-o <output>] [-j <json>] [-s <rules>]')
             print('  If output is not specified, print to stdout')
             sys.exit(1)
         elif opt in ("-i", "--input"):
             infile = arg
         elif opt in ("-o", "--output"):
             outfile = arg
+        elif opt in ("-s", "--skip"):
+            force_skip = arg
         elif opt in ("-j", "--json"):
             jsonfile = arg
 
@@ -169,14 +175,18 @@ def main(argv):
 
     skip_list = []
 
+    # Add rules to be skipped anyway
+    for r in force_skip.split(','):
+        skip_list.append(r)
+
     # Search for missing rules and add a dummy text with the rule number
     for i in misra_c2012_rules:
         for j in list(range(1,misra_c2012_rules[i]+1)):
-            if str(i) + '.' + str(j) not in rule_list:
-                outstr.write('Rule ' + str(i) + '.' + str(j) + '\n')
-                outstr.write('No description for rule ' + str(i) + '.' + str(j)
-                             + '\n')
-                skip_list.append(str(i) + '.' + str(j))
+            rule_str = str(i) + '.' + str(j)
+            if (rule_str not in rule_list) and (rule_str not in skip_list):
+                outstr.write('Rule ' + rule_str + '\n')
+                outstr.write('No description for rule ' + rule_str + '\n')
+                skip_list.append(rule_str)
 
     # Make cppcheck happy by starting the appendix
     outstr.write('Appendix B\n')
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:42:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:42:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472464.732693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkBN-0000Xo-WA; Fri, 06 Jan 2023 10:42:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472464.732693; Fri, 06 Jan 2023 10:42:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkBN-0000Xh-TA; Fri, 06 Jan 2023 10:42:41 +0000
Received: by outflank-mailman (input) for mailman id 472464;
 Fri, 06 Jan 2023 10:42:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NPe1=5D=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pDkBM-0000XJ-FA
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:42:40 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on2040.outbound.protection.outlook.com [40.107.101.40])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b7d06d31-8dae-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 11:41:48 +0100 (CET)
Received: from DM6PR21CA0003.namprd21.prod.outlook.com (2603:10b6:5:174::13)
 by PH7PR12MB5879.namprd12.prod.outlook.com (2603:10b6:510:1d7::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 10:41:43 +0000
Received: from DM6NAM11FT055.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:174:cafe::f1) by DM6PR21CA0003.outlook.office365.com
 (2603:10b6:5:174::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.6 via Frontend
 Transport; Fri, 6 Jan 2023 10:41:43 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT055.mail.protection.outlook.com (10.13.173.103) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5966.17 via Frontend Transport; Fri, 6 Jan 2023 10:41:42 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 6 Jan
 2023 04:41:42 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 6 Jan
 2023 04:41:42 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 6 Jan 2023 04:41:41 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7d06d31-8dae-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IiYab2Jg0ZKuKIhxBACtNp6p8PwuoRkBPzMs39OmGEJUIdmkHkgWdxLRnxfTJGMzBfWjEzH9PqGH1lTkKZ8df3rQSELr5Nr/WUNctnSP34t34849Rclu1PnDy5nnqsnvjRSbGwbq+VlodgEVsJjOWHuOU9V7Fa33qZ8DNJKI0qN43lZ7oaka+q+nB53H1rVWnYwo1GgYlhZh3qkTJrCMAQCR9YVLt+Y1Kxh8zA6m+gljoWWH6BbtatKzfzAbi07XCVWE+2eJvhfhvIGvHk9fWGA1Acp/pFszu0zqVlbwd8NWbbnQJ2XzaKxDlOulJ5wJDVZnoYlD3RlkHem+oCaCgw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4wHK6HK99xoJCu9Ih3KlJHciWVIQHNeLO/b5S8q4/KI=;
 b=AcGRnSMe4L9Azub3tL8sbY61HtQ+wzx7S85knTYuBqC/C+NZMEFF7L9cib2LsRv2/c9ayt8tPuwfavTkAU9HgZeTJ9XubknK6X6m4RpTtUD06lgOqpzRtQL0TRo63qxe121qGGHq9PFIuIHikTz39ZVQAZC79+8iBieEpMaX7e9CjVhKNm/zqW2c6nV3gSM7mcAcUdoQlKtkLxcuHzoEGPTUr+aVdBma6obTXDn+ybQtbEWWZ91TuTLGl5blweM7BI7lseqaiohmJKgY3QAmW0PPnPphhsb5AhUrHZ+FE7U/h3byq5/PajOoD/hhplV/NcMdpZYCUlpFvU67z/et0g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=citrix.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4wHK6HK99xoJCu9Ih3KlJHciWVIQHNeLO/b5S8q4/KI=;
 b=wENSy1xT8swSrvQMsJ354Si5OIDmGRlTvcqdTTUplA7ZfC5AI2/DREFiFruMGIqHprrRvkgeW1zJyW7h+Y2Dzh40f3w/cRkULgOjRYIHTA0e77fhgCRtM5jMEpUjsB1DDW3sulwsrAkNJRwD1gw39nmeI6J0VgkXwDQOSiCSNjg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <383485e0-37ac-9b0c-f0c5-18e50cf7905b@amd.com>
Date: Fri, 6 Jan 2023 11:41:40 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <julien@xen.org>,
	Ayan Kumar Halder <ayankuma@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <20230103102519.26224-1-michal.orzel@amd.com>
 <alpine.DEB.2.22.394.2301041546230.4079@ubuntu-linux-20-04-desktop>
 <1264e5cc-1960-95d3-5ecb-d6f23d194aa4@xen.org>
 <29460d07-cd43-7415-7125-6ed01f3c2920@amd.com>
 <c80f90d7-d3b5-1b13-d809-9506ff5414e4@xen.org>
 <35d590fb-4b96-70dd-a60b-1c8d7cc8f2d6@citrix.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <35d590fb-4b96-70dd-a60b-1c8d7cc8f2d6@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT055:EE_|PH7PR12MB5879:EE_
X-MS-Office365-Filtering-Correlation-Id: 2326ee58-65f2-4a1d-90cc-08daefd299dc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	LQ7bU/l0aR8396jR9E4VLOGtrrjq18ZKAKvrzojzguor9zwgX5DLPjk53dlt2rr5ghmvFjfzbIVpQUMJYK04zFqP5meexxXrSlA1T0ipnB/r4nqhi7bn09oMfId9jiEKyjnW0Rn1WMz9450NZJCcNpk27JzUSo1APvaKLIQdo61R5vzE5LXRqr6NxasRBihWPNB264M4YeLfhaEaG2gaWzUzq3Ni4f2f6/fTkaX6+mnzIY15Uyz3JvC+3FrHu6jX8NH9Fkw5S7FTJIDW9UE7uv10yrSTM6pl4SOVAa6Z+qv2OwV1xYMd4MCiFmdVZ1LAGVj+TlJ3N9KEkPAUsB4FcdtWgQk495dYkMq8RKBJq77B1aCfnZPJkMdGMSLDDgX8pEx0Ag6YV8pmFuTFliMFxdWCkYoKvkLA952kQONue5hhAXJix8crjnc0xRVudAlbIEMIPAPplbR/H7mgKnfCqkFdvYvioYw42C3Lsvg5U3aTt78HJafiexDFinujZo+5zXyr/CHFGeNMcUosWlMPnCofxFilIBtpT1XC6e2KWq8aA7aNBmrd220oqqytdoEcX4e4dteAecnsGe+k2GfyboxXLQ0CgNl+Dt/unABRCxSfMl0L0MPpQht0wtL2RSpTp97SB7V0i0mDntzhI9PIFI9aj2MKPjPh2jVIkzd9k//O8TGU1YUGnjzzLh6F2eejJDBGtJXkBeFiERVQfpNN8vQNTyxLBw5UX2jstchUfJ4JwD4uMGq11ObJsfmTn1D2U28roCx+WAKeXHg6oKsxzQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(376002)(136003)(346002)(39860400002)(451199015)(36840700001)(40470700004)(46966006)(31686004)(36756003)(82740400003)(81166007)(70206006)(70586007)(8676002)(356005)(2906002)(44832011)(5660300002)(336012)(31696002)(86362001)(83380400001)(36860700001)(316002)(16576012)(478600001)(110136005)(41300700001)(82310400005)(8936002)(40460700003)(426003)(2616005)(53546011)(186003)(47076005)(26005)(40480700001)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 10:41:42.9104
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2326ee58-65f2-4a1d-90cc-08daefd299dc
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT055.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB5879



On 05/01/2023 13:15, Andrew Cooper wrote:
> 
> 
> On 05/01/2023 11:19 am, Julien Grall wrote:
>> On 05/01/2023 09:59, Ayan Kumar Halder wrote:
>>> Hi Julien,
>>
>> Hi,
>>
>>> I have a clarification.
>>>
>>> On 05/01/2023 09:26, Julien Grall wrote:
>>>> CAUTION: This message has originated from an External Source. Please
>>>> use proper judgment and caution when opening attachments, clicking
>>>> links, or responding to this email.
>>>>
>>>>
>>>> Hi Stefano,
>>>>
>>>> On 04/01/2023 23:47, Stefano Stabellini wrote:
>>>>> On Tue, 3 Jan 2023, Michal Orzel wrote:
>>>>>> Printing memory size in hex without 0x prefix can be misleading, so
>>>>>> add it. Also, take the opportunity to adhere to 80 chars line length
>>>>>> limit by moving the printk arguments to the next line.
>>>>>>
>>>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>>>>> ---
>>>>>> Changes in v2:
>>>>>>   - was: "Print memory size in decimal in construct_domU"
>>>>>>   - stick to hex but add a 0x prefix
>>>>>>   - adhere to 80 chars line length limit
>>>>>
>>>>> Honestly I prefer decimal but also hex is fine.
>>>>
>>>> decimal is perfect for very small values, but as we print the amount in
>>>> KB it will become a big mess. Here some examples (decimal first, then
>>>> hexadecimal):
>>>>
>>>>   512MB: 524288 vs 0x80000
>>>>   555MB: 568320 vs 0x8ac00
>>>>   1GB: 1048576 vs 0x100000
>>>>   512GB: 536870912 vs 0x20000000
>>>>   1TB: 1073741824 vs 0x40000000
>>>>
>>>> For power-of-two values, you might be able to find your way with
>>>> decimal. It is more difficult for non-power-of-two values unless you
>>>> have a calculator in hand.
>>>>
>>>> The other option I suggested in v1 is to print the amount in KB/GB/MB.
>>>> Would that be better?
>>>>
>>>> That said, to be honest, I am not entirely sure why we are actually
>>>> printing in KB. It would seem strange that someone would create a
>>>> guest with memory not aligned to 1MB.
>>>
>>> For RTOS (Zephyr and FreeRTOS), it should be possible for guests to
>>> have memory less than 1 MB, shouldn't it?
>>
>> Yes. So does XTF. But most of the users are likely going to allocate
>> at least 1MB (or even 2MB to reduce the TLB pressure).
>>
>> So it would be better to print the value in a way that is more
>> meaningful for the majority of the users.
>>
>>>> So I would consider checking the size is 1MB-aligned and then print the
>>
>> I will retract my suggestion to check the size. There is technically
>> no restriction on running a guest with a size not aligned to 1MB,
>> although it would still seem strange.
> 
> I have a need to extend tools/tests/tsx with a VM that is a single 4k
> page.  Something which can execute CPUID in the context of a VM and
> cross-check the results with what the "toolstack" (test) tried to configure.
> 
> Xen is buggy if it cannot operate a VM which looks like that, and a
> bonus of explicitly testing like this is that it helps to remove
> inappropriate checks.
I can see we are all settled on it being fully ok to boot guests with a memory size of less
than 1MB. The 'memory' dt parameter for dom0less domUs must be specified in KB, and KB is the
smallest common unit, which makes cross-checking easier. Stefano is ok with either decimal or
hex, Julien wanted hex (hence my v2), and I also lean towards hex. Let's not forget that this
patch aims to fix a misleading print that was missing the 0x prefix and could cause someone
to read the value as decimal.
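
The ambiguity being fixed is easy to demonstrate in miniature. Below is a
hedged sketch in plain C, not the actual Xen patch: snprintf stands in for
Xen's printk, and the variable name mem_kb is made up for the example. With
%lx, 512MB expressed in KB formats as "80000 KB", which reads like a decimal
eighty thousand; %#lx formats as "0x80000 KB", making the base explicit.

```c
/* Sketch only: shows why a bare %lx print of a KB count is easy to
 * misread as decimal, and how the 0x prefix removes the ambiguity.
 * snprintf stands in for Xen's printk; mem_kb is an illustrative name. */
#include <stdio.h>
#include <string.h>

static void format_mem_kb(unsigned long mem_kb,
                          char *without_prefix, char *with_prefix,
                          size_t len)
{
    /* Old style: hex digits with no prefix, e.g. "80000 KB". */
    snprintf(without_prefix, len, "%lx KB", mem_kb);
    /* Fixed style: '#' flag adds the 0x prefix, e.g. "0x80000 KB". */
    snprintf(with_prefix, len, "%#lx KB", mem_kb);
}
```

Assuming Xen's printk supports the usual printf-style flags, the fix amounts
to adding the 0x prefix (or using the '#' flag) in the format string.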

> 
> ~Andrew

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 10:53:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 10:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472477.732704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkLa-00029W-VL; Fri, 06 Jan 2023 10:53:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472477.732704; Fri, 06 Jan 2023 10:53:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDkLa-00029P-ST; Fri, 06 Jan 2023 10:53:14 +0000
Received: by outflank-mailman (input) for mailman id 472477;
 Fri, 06 Jan 2023 10:53:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g6IK=5D=citrix.com=prvs=36316be06=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pDkLZ-00029J-3u
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 10:53:13 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4cfb5768-8db0-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 11:53:09 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cfb5768-8db0-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673002389;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=6Wk2TSCEO9A+5OEvRCIDTA7EztFEDFJs4i2Z9OGalEk=;
  b=FJThnpB0G0nAi5Uj4/vTmBx89lsFwzWXoSGCQ5wri/bsJ1jpRfWWV54T
   xJ2jQLqNlzJLtxKZFJAvpiggtADuoFxCGsNqY/vuR/kp8OKrEHylleegg
   s8BxWyRe1y0KC6GUnbF6OVGSeM74U3EtC5eNR0BHPq5BjOrvUeYtwq0bm
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 90405203
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:ullLqqDkDLmaBBVW/33kw5YqxClBgxIJ4kV8jS/XYbTApDp2hmECy
 2sbDzqBOfaLYWr0cowgPo7j9E4Bu5/Tz9VqQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtcpvlDs15K6p4GpA4ARkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIwxtxWG1NRq
 fgjMwsObSuM2Mu7+52RY7w57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2o0BPjDS0Qn1lM/AZQinOCulz/nfidRsl69rqsr+WnDigd21dABNfKEIoDSGJ8NxS50o
 Erhw0DwIjgHF+bYzBub6U61rMOVuR/CDdd6+LqQqacx3Qz7KnYoIBAaSFKhrf6Rike0WNVEN
 woS9zZGhbIz/0yiVNW7XxC+rHOepRkac95RFeQg70eK0KW8ywOQHGMJSnhIcNIrsMU/WDkC2
 VqAntevDjtq2JWMRHeAs7uZsz62ES4SK2AEeGkDVwRty8nupsQ/gwzCSv5nEbWplZvlFDfo2
 TeIoSMiwbIJgqYjz6+8+0LGhTOEvJXFTgcpoA7QWwqN5wd0dMi6Zois6FHe9vFGBJyUQlmIo
 D4PnM32xOoUBpGQny+faOwKGPei4PPtGCXVnFpHD5QnsTO39BaLZptM6TtzIENoNMcsejLzZ
 kLX/wRL6/d7OWC2RbV6b4K4F4Ihyq2IKDj+fqmKNJwUOME3LVLZunE1DaKN44zzuHQWsLsNJ
 sufSt2XDnUhBvtOwQGZbc5IhNfH2RsC7W/UQJn6yTGu3ryfeGOZRN85DbeeUgwqxPja+VuIq
 r6zI+PPkkwCC7OmPkE75KZJdTg3wW4H6YcaQiC9XsqKOUJYFW4oEJc9KptxKtU+z8y5egoll
 0xRu3O0KnKl3hUryi3QMBiPjY8Dur4hxU/XxQR2YT6VN4ELOO5DFps3eZotZqUA/+d+1/NyR
 PRtU5zeXa4eFm+aq2VBPMiVQGlemPKD31Lm082NOWFXQnKdb1aRpo+MkvXHrkHi8RZbReNh+
 ub9h2s3sLIIRhh4Dda+Vc9DO2iZ5CBH8MorBhugHzWmUBm0mGScA3Cr36BfzgBlAUmr+wZ2I
 C7NXEZE9bGS+tZlmDQL7Ijdx7qU/yJFNhIyNwHmAXyebEE2IkLLLVd8bdu1
IronPort-HdrOrdr: A9a23:12j/1qAICkWol0nlHemW55DYdb4zR+YMi2TDsHocdfU1SKOlfq
 WV98jzuiWbtN98YhAdcLK7Scq9qALnlaKdiLN5Vd3OYOCBghrMEGgI1/qB/9SPIVyYysdtkY
 tmbqhiGJnRIDFB/KDHCdCDYrId/OU=
X-IronPort-AV: E=Sophos;i="5.96,305,1665460800"; 
   d="scan'208";a="90405203"
Date: Fri, 6 Jan 2023 10:52:54 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
CC: Alex Williamson <alex.williamson@redhat.com>, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>, "Richard
 Henderson" <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	<xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <Y7f9hi0SqYk6KQzW@perard.uk.xensource.com>
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <20230102124605-mutt-send-email-mst@kernel.org>
 <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
 <20230103081456.1d676b8e.alex.williamson@redhat.com>
 <cbfdcafc-383e-aea3-d04d-38388fab202f@aol.com>
 <ba4f8fd6-ae10-da60-7ef5-66782f29fdb9@aol.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <ba4f8fd6-ae10-da60-7ef5-66782f29fdb9@aol.com>

On Tue, Jan 03, 2023 at 05:58:01PM -0500, Chuck Zmudzinski wrote:
> Hello Anthony and Paul,

Hi Chuck,

> I am requesting your feedback on Alex Williamson's suggestion that this
> problem with assigning the correct slot address to the igd on xen should
> be fixed in libxl instead of in qemu.
> 
> It seems to me that the xen folks and the kvm folks have two different
> philosophies regarding how a toolstack should be designed. kvm/libvirt
> provides much greater flexibility in configuring the guest, which puts
> the burden on the administrator to set all the options correctly for
> a given feature set, while xen/xenlight does not provide as much
> flexibility and tries to automatically configure the guest based on
> a high-level feature option such as the igd-passthrough=on option that
> is available for xen guests using qemu but not for kvm guests using
> qemu.
> 
> What do you think? Should libxl be patched, as Alex suggests, instead of
> fixing the problem with this patch to qemu?

I do think that libxl should be able to deal with having to put a
graphics card in slot 2. QEMU already provides every API necessary for a
toolstack to be able to start a Xen guest with all the PCI cards in the
right slots. It would just be a bit more complicated to implement in
libxl.

At the moment, libxl makes use of the QEMU machine 'xenfv'; libxl should
instead start to use the 'pc' machine and add the "xen-platform" PCI
device. (libxl already uses 'pc' when the "xen-platform" PCI card isn't
needed.) It would probably also need to add the other PCI devices to
specific slots, to be able to add the passthrough graphics card at the
right slot.

Next is to deal with migration when using the 'pc' machine, as it's just
an alias to a specific version of the machine. We need to use the same
machine on the receiving end, that is start with e.g. "pc-i440fx-7.1" if
'pc' was an alias for it at guest creation.


I wonder if we can already avoid patching the 'xenfv' machine with some
xl config:
    # avoid 'xenfv' machine and use 'pc' instead
    xen_platform_pci=0
    # add xen-platform pci device back
    device_model_args_hvm = [
        "-device", "xen-platform,addr=3",
    ]
But there's probably another device which is going to be auto-assigned
to slot 2.


If you feel like dealing with the technical debt in libxl, that is,
stop using 'xenfv' and use 'pc' instead, then go for it; I can help with
that. Otherwise, if the patch to QEMU only changes the behavior of the
'xenfv' machine, then I think I would be ok with it.

I'll do a review of that QEMU patch in another email.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 11:58:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 11:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472486.732715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDlMO-000077-RO; Fri, 06 Jan 2023 11:58:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472486.732715; Fri, 06 Jan 2023 11:58:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDlMO-000070-Oj; Fri, 06 Jan 2023 11:58:08 +0000
Received: by outflank-mailman (input) for mailman id 472486;
 Fri, 06 Jan 2023 11:58:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bSpX=5D=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pDlMO-00006p-5R
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 11:58:08 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 616d8e3d-8db9-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 12:58:06 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id qk9so3000878ejc.3
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 03:58:06 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb20078bb45bd8c0dfe08.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:78bb:45bd:8c0d:fe08])
 by smtp.gmail.com with ESMTPSA id
 hq15-20020a1709073f0f00b0084c7029b24dsm330529ejc.151.2023.01.06.03.58.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 06 Jan 2023 03:58:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 616d8e3d-8db9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TBxYdY0nhsET79NZ9XKnKiBr1yDDIcpHo558HUBkNFQ=;
        b=onoSQlpLMrOzWEEpbJjcfYgWALF8enzzBWje3CUbC5Cl+zXPJYyq+J0iI2WdL8Dwd7
         0p+MQoMZeclUSiUclWaCU/SWK/Bu6PhKzFvqRoPBT6lUS48gpP8kX3xAiGYv0Qbm/5n9
         EWK+IYEarusxXx5rRBpsGiRCjVRYtPBDZJhUreyVJSmt1RfCEYyhtioYkV8inanRpbHH
         wp1ISW5RaTZ/td3RP/qk2D7GLFhm+sGb3LTtscSU5O+Ey0aDUqhaG+UV8NGxTe36jqxK
         AtrKk/1kMLv7iqmgQmTxMdB+6LZ9PgbSguUxNYey0zUq5ZlHwT1aHqzwEa9T7d+rKgUn
         Xdjw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=TBxYdY0nhsET79NZ9XKnKiBr1yDDIcpHo558HUBkNFQ=;
        b=DKWlhm7sbIkiUM8fDHPCxk6xeLvmhOcvjO1wkI4FWsrPwhSQH9uKz3vqda6uKlj/jT
         2qOzSim1eqfcrXh/ylFWli+6tU1X3TJxPdwprOe2U+1J/7mVtQmEknHVPvWmohFtvgL7
         v+NYdXUR8Yi5b0xdoTpvku4CvcisJUXge92lBAkggLWRghndL5SZ33F6PkwOZ81xpo87
         c/jsSnF9VaNWuBmgBUBXbDKbNb07qGTCnKidb676Gta7ZXVIdKPCxtIpAbUpyBbTelk8
         4ZJqG2rtwciaypevK28tSkNeob7M2SdTPuv1uNosRpCu47XQqnWbm5PymdLv5uaBCFg5
         gOPA==
X-Gm-Message-State: AFqh2koKA2+eJ7Mul5+OzO+rLF2eXhpaX8DawhTG8c/AnnmNdwZc5iT7
	5zzYLGVX+lfl3R5owJsC2gOa9m+AXHduvg==
X-Google-Smtp-Source: AMrXdXtk3S7Dcv8qIz8C3RhdtMz6UKvTSdukRpBJB5XtgEItGpsj5crne5XSJSHuqB4f9YXLznGuXw==
X-Received: by 2002:a17:907:c71b:b0:7c1:67ca:56f5 with SMTP id ty27-20020a170907c71b00b007c167ca56f5mr53336482ejc.15.1673006285861;
        Fri, 06 Jan 2023 03:58:05 -0800 (PST)
Date: Fri, 06 Jan 2023 11:57:59 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Philippe Mathieu-Daudé <philmd@linaro.org>,
 qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Hervé Poussineau <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Chuck Zmudzinski <brchuckz@aol.com>,
 Thomas Huth <thuth@redhat.com>
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
References: <20230104144437.27479-1-shentey@gmail.com> <20230104144437.27479-7-shentey@gmail.com> <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
Message-ID: <B207F213-3B7B-4E0A-A87E-DE53CD351647@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On 4 January 2023 15:35:33 UTC, "Philippe Mathieu-Daudé" <philmd@linaro.org> wrote:
>+Markus/Thomas
>
>On 4/1/23 15:44, Bernhard Beschow wrote:
>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>
>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>> ---
>>   hw/i386/pc_piix.c             |  4 +---
>>   hw/isa/piix.c                 | 20 --------------------
>>   include/hw/southbridge/piix.h |  1 -
>>   3 files changed, 1 insertion(+), 24 deletions(-)
>>
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index 5738d9cdca..6b8de3d59d 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -235,8 +235,6 @@ static void pc_init1(MachineState *machine,
>>       if (pcmc->pci_enabled) {
>>           DeviceState *dev;
>>           PCIDevice *pci_dev;
>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>> -                                         : TYPE_PIIX3_DEVICE;
>>           int i;
>>             pci_bus = i440fx_init(pci_type,
>> @@ -250,7 +248,7 @@ static void pc_init1(MachineState *machine,
>>                                          : pci_slot_get_pirq);
>>           pcms->bus = pci_bus;
>>   -        pci_dev = pci_new_multifunction(-1, true, type);
>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>>           object_property_set_bool(OBJECT(pci_dev), "has-usb",
>>                                    machine_usb(machine), &error_abort);
>>           object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>> diff --git a/hw/isa/piix.c b/hw/isa/piix.c
>> index 98e9b12661..e4587352c9 100644
>> --- a/hw/isa/piix.c
>> +++ b/hw/isa/piix.c
>> @@ -33,7 +33,6 @@
>>   #include "hw/qdev-properties.h"
>>   #include "hw/ide/piix.h"
>>   #include "hw/isa/isa.h"
>> -#include "hw/xen/xen.h"
>>   #include "sysemu/runstate.h"
>>   #include "migration/vmstate.h"
>>   #include "hw/acpi/acpi_aml_interface.h"
>> @@ -465,24 +464,6 @@ static const TypeInfo piix3_info = {
>>       .class_init    = piix3_class_init,
>>   };
>>   -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>> -{
>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>> -
>> -    k->realize = piix3_realize;
>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>> -    dc->vmsd = &vmstate_piix3;
>
>IIUC, since this device is user-creatable, we can't simply remove it
>without going through the deprecation process.

AFAICS this device is actually not user-creatable since dc->user_creatable
is set to false once in the base class. I think it is safe to remove the
Xen class unless there are ABI issues.
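
The inheritance behaviour referred to here can be shown with a self-contained
miniature. The stub types below stand in for QEMU's real ObjectClass/DeviceClass
and the piix class hierarchy; this is a simplified model of QOM class
initialisation (parent class data is inherited, then the subclass's class_init
runs), not QEMU code. A flag cleared in the base class's class_init stays
cleared unless a subclass's class_init sets it again.

```c
/* Miniature of the QOM point being made: a flag cleared in a base
 * class's class_init is inherited by subclasses unless they touch it.
 * All types here are stand-ins, not QEMU's real ones. */
#include <stdbool.h>
#include <stddef.h>

typedef struct DeviceClassStub {
    bool user_creatable;
} DeviceClassStub;

typedef void (*ClassInit)(DeviceClassStub *dc);

/* Base class (cf. the piix base type): marked not user-creatable. */
static void piix_base_class_init(DeviceClassStub *dc)
{
    dc->user_creatable = false;
}

/* Subclass (cf. piix3_xen_class_init): overrides realize/device_id in
 * the real code, but never touches user_creatable. */
static void piix3_xen_stub_class_init(DeviceClassStub *dc)
{
    (void)dc;  /* flag left untouched, so it stays inherited */
}

/* Models class initialisation: base init first, then subclass init. */
static bool is_user_creatable(ClassInit base, ClassInit sub)
{
    DeviceClassStub dc = { .user_creatable = true };
    base(&dc);
    if (sub)
        sub(&dc);
    return dc.user_creatable;
}
```

Under this model the Xen subclass is non-user-creatable purely by
inheritance, which matches the claim above.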

Best regards,
Bernhard

>Alternatively we could
>add a type alias:
>
>-- >8 --
>diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
>index 4b0ef65780..d94f7ea369 100644
>--- a/softmmu/qdev-monitor.c
>+++ b/softmmu/qdev-monitor.c
>@@ -64,6 +64,7 @@ typedef struct QDevAlias
>                               QEMU_ARCH_LOONGARCH)
> #define QEMU_ARCH_VIRTIO_CCW (QEMU_ARCH_S390X)
> #define QEMU_ARCH_VIRTIO_MMIO (QEMU_ARCH_M68K)
>+#define QEMU_ARCH_XEN (QEMU_ARCH_ARM | QEMU_ARCH_I386)
>
> /* Please keep this table sorted by typename. */
> static const QDevAlias qdev_alias_table[] = {
>@@ -111,6 +112,7 @@ static const QDevAlias qdev_alias_table[] = {
>     { "virtio-tablet-device", "virtio-tablet", QEMU_ARCH_VIRTIO_MMIO },
>     { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_VIRTIO_CCW },
>     { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_VIRTIO_PCI },
>+    { "PIIX3", "PIIX3-xen", QEMU_ARCH_XEN },
>     { }
> };
>---
>
>But I'm not sure due to this comment from commit ee46d8a503
>(2011-12-22 15:24:20 -0600):
>
>47) /*
>48)  * Aliases were a bad idea from the start.  Let's keep them
>49)  * from spreading further.
>50)  */
>
>Maybe using qdev_alias_table[] during device deprecation is
>acceptable?
>
>> -}
>> -
>> -static const TypeInfo piix3_xen_info = {
>> -    .name          = TYPE_PIIX3_XEN_DEVICE,
>> -    .parent        = TYPE_PIIX_PCI_DEVICE,
>> -    .instance_init = piix3_init,
>> -    .class_init    = piix3_xen_class_init,
>> -};
>> -
>>   static void piix4_realize(PCIDevice *dev, Error **errp)
>>   {
>>       ERRP_GUARD();
>> @@ -534,7 +515,6 @@ static void piix3_register_types(void)
>>   {
>>       type_register_static(&piix_pci_type_info);
>>       type_register_static(&piix3_info);
>> -    type_register_static(&piix3_xen_info);
>>       type_register_static(&piix4_info);
>>   }
>> diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
>> index 65ad8569da..b1fc94a742 100644
>> --- a/include/hw/southbridge/piix.h
>> +++ b/include/hw/southbridge/piix.h
>> @@ -77,7 +77,6 @@ struct PIIXState {
>>   OBJECT_DECLARE_SIMPLE_TYPE(PIIXState, PIIX_PCI_DEVICE)
>>     #define TYPE_PIIX3_DEVICE "PIIX3"
>> -#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
>>   #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
>>     #endif
>


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 12:14:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 12:14:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472502.732726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDlcR-0002kc-JA; Fri, 06 Jan 2023 12:14:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472502.732726; Fri, 06 Jan 2023 12:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDlcR-0002kV-Fr; Fri, 06 Jan 2023 12:14:43 +0000
Received: by outflank-mailman (input) for mailman id 472502;
 Fri, 06 Jan 2023 12:14:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDlcP-0002kP-IO
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 12:14:41 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id affb8b01-8dbb-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 13:14:39 +0100 (CET)
Received: from mail-mw2nam12lp2046.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jan 2023 07:14:35 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN9PR03MB6073.namprd03.prod.outlook.com (2603:10b6:408:136::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 12:14:33 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 12:14:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: affb8b01-8dbb-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673007279;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=fCGwVpkqYvY+ebNdIlosFz/zHpTs09CKo5DcQO4LugE=;
  b=TLqEQ+N42Lz3l61qGg1TUgcgcW3ylFrmZ42f26Uf987LBPGYinSc8jVZ
   giia0vZ/TIx0iCVRqGFenC8YXuCDaAn4C7BrPUPd7foIU3WLWjmzt07fO
   3PiYTiX4iwKVx9UxWFPltQfaTYkqfve6AzCLJ9C1xSQ7sEJqNm4ZJQyQI
   c=;
X-IronPort-RemoteIP: 104.47.66.46
X-IronPort-MID: 91882619
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:n7hKqqnsttR2muTDhFU+qeno5gxKJ0RdPkR7XQ2eYbSJt1+Wr1Gzt
 xJJC2HVO/aKNmT9c9p/aISy9EJSvJ7UndNgSApp/ig8ECMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icf3grHmeIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aqaVA8w5ARkPqgS5AGGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 aYEDg0dPlOGu/CRw4y9evtP3906F+C+aevzulk4pd3YJdAPZMmZBoD1v5pf1jp2gd1SF/HDY
 cZfcSBocBnLfxxIPBEQFY46m+CrwHL4dlW0qnrM/fZxvzeVklI3jOaF3Nn9I7RmQe18mEqCq
 32A1GP+GhwAb/SUyCaf82LqjejK9c/+cNNLRO3iqKc76LGV7k0cSwQnZXSQmvKSm3yGQe4FL
 VdMoBN7+MDe82TuFLERRSaQonSJoxodUNp4CPAh5UeGza+8yx2CGmEOQzpFadonnMw7Xzon0
 hmOhdyBLSNrmK2YTzSa7Lj8hTGvPSkYK0cSaClCShEKi/HzrYd2gh/RQ9JLFK+uksazCTz22
 yqNriU1m/MUl8Fj6kmg1VXOgjbpo4eTSAcwv1/TRjj9sl0/Y5O5bYu171Sd9exHMIuSUliGu
 j4DhtSa6+cNS5qKkURhXdkwIV1g3N7dWBW0vLKlN8BJG+iFk5J7Qb1t3Q==
IronPort-HdrOrdr: A9a23:y5JW5a7IPuV9pSEKKQPXwHrXdLJyesId70hD6qm+c20wTiX4rb
 HcoB1/73SbtN9/YhEdcK+7SdO9qB/nlKKdgrNhTItKIjOW2ldARbsKheHfKlbbak7DH4BmpM
 Jdm6MXMqyMMbAT5/yX3OHSeexO/DFJmprEuc7ui05ICSVWQ+VY6QF9YzzrYnGfhmN9dOYE/F
 733Ls4m9JkE05nEfhTfUN1ONTrlpnwjZf7ZhxDLAIm7QTmt0LS1JfKVyKA2wsYUXd04ZpKyx
 miryXJop+7tu29yFvn23TN449wkN/so+EzffCku4wuMzDxjQTtW4h7Qb2Fu1kO0ZmS1Go=
X-IronPort-AV: E=Sophos;i="5.96,305,1665460800"; 
   d="scan'208";a="91882619"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Gnzv7MAvU48SvKl6ofvDE2CqNYjtMDmtwPNEprPFcX5B/cUkYeO/3pxDwDUOxOXB/uX9zTzD4ssPckYMPcXHMaZ83aTgV+dZg6BJSSZqkrPJIYt8XMYoVu814f9iQYlHiutSvjeKxxtfBFh3FOlcNc4HW+z+vLCMmFZ50D6ABO9+0Lx6G27jDcTOpy2fUC2MU6tLB8GVNuE2PrG0EE3QSxilTXkTQ+Agv4nFmlM6sjczHRxJb8Bw0hEWY+MGxxOKgLngiax8EyA0eGEgE57EIkmZehUKseY7N5a8dY35i5yetf2rgr6oIqvicYeI9hLi2X+++AkTEvTAV76roGgbpw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fCGwVpkqYvY+ebNdIlosFz/zHpTs09CKo5DcQO4LugE=;
 b=mnvCVmDHDXZWfDmHXxQLe3PXF63ajX/wS+b6uEiWIC4QdXcRP9i2EYcSffdZL+KNgG6TCeFjRoYDB0uCFtGNkuwKv1lC8EMxP/evgwTqn1llQHd3oQKct5J9QodSNI9lP2ex6y+qsHyZb8G6M7awI031ojFWqhYB0WtAaRQ2J0yav6bmT5XKNABKSQuT488mq/Y5BXpAE7Axbuwf9kk0/APE8OknSkl7huz+BUb+QWG/OssJvOJXW4J3F7Q2rCKPL+viXuaX62HZ0YSCjDZOIxDFU4Ge5FcVWWESNt4hMbOSSwIpwvcirDAlXcvVu/CfB3mU8b5+kqxPwwwTuLpxrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fCGwVpkqYvY+ebNdIlosFz/zHpTs09CKo5DcQO4LugE=;
 b=RArDU5/77Yjd/SXsCXY23eeI65cb2+umWMXSn6dqxW68J8R+F3xLb5wYB+CyiIQNp4fWxBDarwWv6+nMSe3ZVAwxZUuSWXc0VZGK3w4FtMXODc7MOWeFpfY0SkwmBP1sHoC6a8WV1zb9sfVkNX4nsgA/gIqJsG/wA8Rsp7ILK1A=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Thread-Topic: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Thread-Index:
 AQHZH69XYhd2rMGoWE+asSBKOPPG4q6OdxyAgAA2ioCAAMm+gIAA8B2AgAChQoCAAEi6gA==
Date: Fri, 6 Jan 2023 12:14:31 +0000
Message-ID: <5bb7382c-bed0-144c-2906-38bc08c76a52@citrix.com>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-4-andrew.cooper3@citrix.com>
 <540a449d-f76d-eb16-4f98-c4fb3564ce98@suse.com>
 <7dd00ce3-a95b-2477-128c-de36e75c4a34@citrix.com>
 <ebaf70e5-e1ba-d72d-84f2-5acb7e38a6bc@suse.com>
 <9ce20298-5870-aa1b-ee5e-e16a623beadb@citrix.com>
 <a65a4553-d003-1a8d-abb7-5d8c1c9fface@suse.com>
In-Reply-To: <a65a4553-d003-1a8d-abb7-5d8c1c9fface@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BN9PR03MB6073:EE_
x-ms-office365-filtering-correlation-id: 48acae29-53ec-4d7d-4413-08daefdf911b
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <7596C5B903702049A2859A08808AEB5A@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 48acae29-53ec-4d7d-4413-08daefdf911b
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Jan 2023 12:14:31.6950
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 5wuhcADoqDBze5TA6FXCLosO7FAiYnUeIbW3udpoCgE9PPNTi5T3viMHUcNVd8KFzDQb0Mf/sU0C7Bd5kInxiOx3FpgYPdF2Vdk//Y/Oa44=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR03MB6073

On 06/01/2023 7:54 am, Jan Beulich wrote:
> On 05.01.2023 23:17, Andrew Cooper wrote:
>> On 05/01/2023 7:57 am, Jan Beulich wrote:
>>> On 04.01.2023 20:55, Andrew Cooper wrote:
>>>> On 04/01/2023 4:40 pm, Jan Beulich wrote:
>>>>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>>>>> A split in virtual address space is only applicable for x86 PV guests.
>>>>>> Furthermore, the information returned for x86 64bit PV guests is wrong.
>>>>>>
>>>>>> Explain the problem in version.h, stating the other information that PV guests
>>>>>> need to know.
>>>>>>
>>>>>> For 64bit PV guests, and all non-x86-PV guests, return 0, which is strictly
>>>>>> less wrong than the values currently returned.
>>>>> I disagree for the 64-bit part of this. Seeing Linux'es exposure of the
>>>>> value in sysfs I even wonder whether we can change this like you do for
>>>>> HVM. Who knows what is being inferred from the value, and by whom.
>>>> Linux's sysfs ABI isn't relevant to us here.  The sysfs ABI says it
>>>> reports what the hypervisor presents, not that it will be a nonzero number.
>>> It effectively reports the hypervisor (virtual) base address there. How
>>> can we not care if something (kexec would come to mind) would be using
>>> it for whatever purpose.
>> What about kexec do you think would care?
> I didn't think about anything specific, but I could see why it may want to
> know where in VA space Xen sits.

The kexec image doesn't run "under" Xen; it replaces Xen in memory, and
transition into the new image is via no paging (32bit) or identity
paging (64bit) in the reserved region.

We don't really support kexec load (it's there, but I don't expect
anyone has exercised it in anger), but if we were to load a new
Xen+dom0, then kexec-tools still has nothing relevant to do with
new-Xen+dom0's split.

>>>>>> + * For all guest types using hardware virt extentions, Xen is not mapped into
>>>>>> + * the guest kernel virtual address space.  This now return 0, where it
>>>>>> + * previously returned unrelated data.
>>>>>> + */
>>>>>>  #define XENVER_platform_parameters 5
>>>>>>  struct xen_platform_parameters {
>>>>>>      xen_ulong_t virt_start;
>>>>> ... the field name tells me that all that is being conveyed is the virtual
>>>>> address of where the hypervisor area starts.
>>>> IMO, it doesn't matter what the name of the field is.  It dates from the
>>>> days when 32bit PV was the only type of guest.
>>>>
>>>> 32bit PV guests really do have a variable split, so the guest kernel
>>>> really does need to get this value from Xen.
>>>>
>>>> The split for 64bit PV guests is compile-time constant, hence why 64bit
>>>> PV kernels don't care.
>>> ... once we get to run Xen in 5-level mode, 4-level PV guests could also
>>> gain a variable split: Like for 32-bit guests now, only the r/o M2P would
>>> need to live in that area, and this may well occupy less than the full
>>> range presently reserved for the hypervisor.
>> ... you can't do this, because it only works for guests which have
>> chosen to find the M2P using XENMEM_machphys_mapping (e.g. Linux), and
>> doesn't for e.g. MiniOS which does:
>>
>> #define machine_to_phys_mapping ((unsigned long *)HYPERVISOR_VIRT_START)
> Hmm, looks like a misunderstanding? I certainly wasn't thinking about
> making the start of that region variable, but rather the end (i.e. not
> exactly like for 32-bit compat).

But for what purpose?  You can't make 4-level guests have a variable range.

The best you can do is make a 5-level-aware guest running on 4-level Xen
have some new semantics, but a 4-level PV guest already owns the
overwhelming majority of virtual address space, so being able to hand
back a few extra TB is not interesting.  And for making that happen...

>>>> For compat HVM, it happens to pick up the -1 from:
>>>>
>>>> #ifdef CONFIG_PV32
>>>>     HYPERVISOR_COMPAT_VIRT_START(d) =
>>>>         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
>>>> #endif
>>>>
>>>> in arch_domain_create(), whereas for non-compat HVM, it gets a number in
>>>> an address space it has no connection to in the slightest.  ARM guests
>>>> end up getting XEN_VIRT_START (== 2M) handed back, but this absolutely
>>>> an internal detail that guests have no business knowing.
>>> Well, okay, this looks to be good enough an argument to make the adjustment
>>> you propose for !PV guests.
>> Right, HVM (on all architectures) is very cut and dry.
>>
>> But it feels wrong to not address the PV64 issue at the same time
>> because it is similar level of broken, despite there being (in theory) a
>> legitimate need for a PV guest kernel to know it.
> To me it feels wrong to address the 64-bit PV issue by removing information,
> when - as you also say - it is actually _missing_ information. To me the
> proper course of action would be to expose the upper bound as well (such
> that, down the road, it could become dynamic). There's also no info leak
> there, as the two (static) bounds are part of the PV ABI anyway.

... the absolute best you could do here is introduce a new
XENVER_platform_parameters2 that has a different structure.

Which still leaves us with XENVER_platform_parameters providing data
which is somewhere between useless and actively unhelpful.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 12:25:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 12:25:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472509.732738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDln6-0004FV-Jw; Fri, 06 Jan 2023 12:25:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472509.732738; Fri, 06 Jan 2023 12:25:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDln6-0004FO-FD; Fri, 06 Jan 2023 12:25:44 +0000
Received: by outflank-mailman (input) for mailman id 472509;
 Fri, 06 Jan 2023 12:25:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oE+Y=5D=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pDln4-0004FI-Oh
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 12:25:42 +0000
Received: from mail-wm1-x330.google.com (mail-wm1-x330.google.com
 [2a00:1450:4864:20::330])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3b2c188f-8dbd-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 13:25:40 +0100 (CET)
Received: by mail-wm1-x330.google.com with SMTP id l26so937018wme.5
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 04:25:40 -0800 (PST)
Received: from [192.168.30.216] ([81.0.6.76]) by smtp.gmail.com with ESMTPSA id
 az28-20020a05600c601c00b003cf57329221sm6055371wmb.14.2023.01.06.04.25.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 06 Jan 2023 04:25:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b2c188f-8dbd-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=cGfTsxDRG+LfoKkBFGZZ/WvwF0lb6LoKmVVmXvkk4C0=;
        b=j2BRoliK8qTd+F+OwsZmpdkMniMJPUoh7KnKOodKcQYhH8dYCE2hyVbmWCgTN/maAx
         r6eQt2KWt4gVNOl07Yrcf+URqL26xP1qlD6w0frzh8QHhCSN6G57NOTcuvs5a31tcrL9
         avJ61vY5r0QCoAeGgk2zzuozoAT4vUlXZu5A/EzY20ki4osHwGvFAMBpj/0sBdYVdyNL
         LedpN2q7UKSxvg7sEVLNNlfg27/NpLeX6nm9lg+c6dlaKgujb7BGQ5Mwx3ltCuIq/jWm
         dGR4bfGffK1SpFmx9areMCmVEZo8cuK3QDTXkDYPkwaRZljpwDvTOyTrkKc8XHSmVEl1
         8zkQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=cGfTsxDRG+LfoKkBFGZZ/WvwF0lb6LoKmVVmXvkk4C0=;
        b=Xcc1hMaebxcfcSCDVxNv3wmXy7r8lvjF2P7XSLI3JINGd5cnpByvI4lyd2gEqB10CE
         NIcH3jH7Cs5uXMzZ9MM9bIpq0eYpeyrHxOAQUTMonw3dV6LCrSCiMPD9zLJohbZcTR8Y
         vcCWqfkifcIhvL60BfD351+xAxcNxrfTUfAczL/5KSmnNj/4mE6ek8tZV9s2ShdrosLz
         kZiaUN78HvrIhfHcpfiiZwiXdae8GHKHLs+uvMOXPWWr+3oFMziJyI84McZF0t2RcUsZ
         xKtwkGp67rRSiIwIPHz6NM1W+xv02UEKrEeSbH6DWv0B69Lu5LPkFu2KsH/Lb3S8oh6w
         WDhA==
X-Received: by 2002:a05:600c:3d1b:b0:3d0:6a57:66a5 with SMTP id bh27-20020a05600c3d1b00b003d06a5766a5mr38329071wmb.0.1673007938784;
        Fri, 06 Jan 2023 04:25:38 -0800 (PST)
Message-ID: <6a1a6ed8-568d-c08b-91a7-1093a2b25929@linaro.org>
Date: Fri, 6 Jan 2023 13:25:36 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Chuck Zmudzinski <brchuckz@aol.com>,
 Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <B207F213-3B7B-4E0A-A87E-DE53CD351647@gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <B207F213-3B7B-4E0A-A87E-DE53CD351647@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 6/1/23 12:57, Bernhard Beschow wrote:
> 
> 
> Am 4. Januar 2023 15:35:33 UTC schrieb "Philippe Mathieu-Daudé" <philmd@linaro.org>:
>> +Markus/Thomas
>>
>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>
>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>> ---
>>>    hw/i386/pc_piix.c             |  4 +---
>>>    hw/isa/piix.c                 | 20 --------------------
>>>    include/hw/southbridge/piix.h |  1 -
>>>    3 files changed, 1 insertion(+), 24 deletions(-)


>>>    -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>> -{
>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>> -
>>> -    k->realize = piix3_realize;
>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>> -    dc->vmsd = &vmstate_piix3;
>>
>> IIUC, since this device is user-creatable, we can't simply remove it
>> without going thru the deprecation process.
> 
> AFAICS this device is actually not user-creatable since dc->user_creatable is set to false once in the base class. I think it is safe to remove the Xen class unless there are ABI issues.
Great news!


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 12:41:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 12:41:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472516.732748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDm21-0006e9-Sr; Fri, 06 Jan 2023 12:41:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472516.732748; Fri, 06 Jan 2023 12:41:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDm21-0006e2-Pd; Fri, 06 Jan 2023 12:41:09 +0000
Received: by outflank-mailman (input) for mailman id 472516;
 Fri, 06 Jan 2023 12:41:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDm20-0006dq-Iq; Fri, 06 Jan 2023 12:41:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDm20-0006jO-Dj; Fri, 06 Jan 2023 12:41:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDm20-0003dC-3C; Fri, 06 Jan 2023 12:41:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDm20-0004LS-2n; Fri, 06 Jan 2023 12:41:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rdmeO4nQZw8Y7+7nm1vDwBhfMiwnUXBXQ98V3RiimaY=; b=T/k6eQ3zZ2RzVFgm03ssMFhVc3
	t6Q0g9gCh6amDOGxRElcRmlLXWCH4c2fXBTOymtfoycZzVbokW0kBnv6RO8RO1GYw6wzeAYOegEEG
	L0XUT9FH2iCJcshIx1AzfekcpstHilxHExiMYG19cBVtkflAYbrk8ozOpbr/Pdz/N1xM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175602-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175602: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0aca5901e34489573c4e9558729704f76869e3d0
X-Osstest-Versions-That:
    ovmf=8c2357809e2c352c8ba7c35ab50f49deefd3d39e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 12:41:08 +0000

flight 175602 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175602/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0aca5901e34489573c4e9558729704f76869e3d0
baseline version:
 ovmf                 8c2357809e2c352c8ba7c35ab50f49deefd3d39e

Last test of basis   175598  2023-01-06 05:40:43 Z    0 days
Testing same since   175602  2023-01-06 08:41:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8c2357809e..0aca5901e3  0aca5901e34489573c4e9558729704f76869e3d0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 12:41:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 12:41:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472520.732758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDm2N-00075z-8K; Fri, 06 Jan 2023 12:41:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472520.732758; Fri, 06 Jan 2023 12:41:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDm2N-00075s-5O; Fri, 06 Jan 2023 12:41:31 +0000
Received: by outflank-mailman (input) for mailman id 472520;
 Fri, 06 Jan 2023 12:41:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ggnj=5D=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pDm2L-0006zY-TI
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 12:41:30 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2055.outbound.protection.outlook.com [40.107.6.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6feb620b-8dbf-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 13:41:27 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8189.eurprd04.prod.outlook.com (2603:10a6:102:1c2::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 12:41:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 12:41:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6feb620b-8dbf-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PDNeCI/9Mcs81yqa3FuUlVsUYVD52x2KheC+ryWkejk5FWbe2QJXhiNuHMP+5lZKGFxpCbDSs06f42g49OtV1nsgc4VxPzTiFNV2+aoxy924hRiC8xDTh5I3dSHbF5mx+KAhRdnsHn0CRzwMzfitba474IWEq5BK0f2rtYfswBXjwAd7B6qMTj+1FNAF4mgn4V7TjpRuGnMDF11uvjRm1/wQE9lDv14xm6g0RsRD11B7Y+Ld5ttqs2KUwHjg/4g38UN2Y0SBgtCDGVzrzOqSJzQCXEbHWEfhBPaiy8oMqa6olzR1lHo4APVmkNAZJxpZL+W0Zm5bYydKsJlTIYbYVw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5LiZnKW8QO/RyfEYputoxeS3/lQZ5Rzk4LEiCHSD+Nw=;
 b=BS6l6sbeOxfrMss4I86APMahar1ul34O/9rSkBVzRnk+6S+5L57ojEMiftV0/VPWIKf1NsF7bhDvDz5pXMMf+sz2E3gzBuKbX7nrSVVlm/2M0Mz8Ynrum0G5bUERsIOnpJ+eJWJ1z13vrQrPccFDeS6jnj8J0mWYZ/5ssSSph1vc3O/1O128EH6KV6JeTNqcjoeUiUkfqJp2m9E9Czr3BlB/4hS9NWMVzl32mPD7jBb6oTzllOj0iGZxpZJeawPl97LmKlOc6T8DZ41/dPhWsAPlKw8XiSzfVINThvVR4Y6hY0ZTGdr8ksW9NrTexKZznKESBpAsHiPZSK10+qz1ew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5LiZnKW8QO/RyfEYputoxeS3/lQZ5Rzk4LEiCHSD+Nw=;
 b=TdEeQrYUKHZSuLkvJovCcc3YzL0ZsVYwcu9ghYSvYoaD2bMA4WKOlaXaVWysc3U4tV+J8YAUTuAmWdDk+P4188H23DIFwDCYHaIluwNObuiwu32v28IPSHnVXuBJsE5X3H3HnA7iKFR6LVgnrhRq4RWX8z47dkYbP3ZY74niQmkkO3U8L+6pqWxbOULscXo1NVV+FZUMcjr/rnf3rMO1u1l89ZVq/TJb+WvI/pyNJ8NvvPN6YAyGUlPxJpYM7XpM5gpMYtYnK+o1+7ZUefL3UYOFd407bC8Rt5e/j0ztpNPeYGV7y9QvGWOYgim4R86AuAf9V/Q2UAlM/28t9a8ZUQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8aa0a1a9-b69a-c201-e5de-0bd5ae6318d6@suse.com>
Date: Fri, 6 Jan 2023 13:41:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/S3: Restore Xen's MSR_PAT value on S3 resume
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>,
 Marek Marczykowski-Górecki
 <marmarek@invisiblethingslab.com>,
 Demi Marie Obenour <demi@invisiblethingslab.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230105204839.3676-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230105204839.3676-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0191.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8189:EE_
X-MS-Office365-Filtering-Correlation-Id: f4dad9d1-ec64-43b8-a9e4-08daefe35359
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f4dad9d1-ec64-43b8-a9e4-08daefe35359
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 12:41:26.3064
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9NQCFnaSVvFpcK/XOnwg/IvV+A4N+xkEGshuGKuZVQb0BPPZWFg0pnua/V7Ier12WAg2GZWeOUbNgP2b3RHeNw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8189

On 05.01.2023 21:48, Andrew Cooper wrote:
> There are two paths in the trampoline, and Xen's PAT needs setting up in both,
> not just the boot path.
> 
> Fixes: 4304ff420e51 ("x86/S3: Drop {save,restore}_rest_processor_state() completely")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:00:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:00:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472533.732770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmKh-0001NM-Rj; Fri, 06 Jan 2023 13:00:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472533.732770; Fri, 06 Jan 2023 13:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmKh-0001NF-Om; Fri, 06 Jan 2023 13:00:27 +0000
Received: by outflank-mailman (input) for mailman id 472533;
 Fri, 06 Jan 2023 13:00:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qMKn=5D=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pDmKf-0001N3-Qj
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:00:26 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 136ee7d5-8dc2-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 14:00:21 +0100 (CET)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-617-kMSRisufPmahaEwqqtJ8Dg-1; Fri, 06 Jan 2023 08:00:19 -0500
Received: by mail-wm1-f69.google.com with SMTP id
 g9-20020a7bc4c9000000b003d214cffa4eso456142wmk.5
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:00:19 -0800 (PST)
Received: from redhat.com ([2.52.141.223]) by smtp.gmail.com with ESMTPSA id
 bi22-20020a05600c3d9600b003d208eb17ecsm1610633wmb.26.2023.01.06.05.00.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:00:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 136ee7d5-8dc2-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673010020;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=sR6um4BbjBj20kfzemMVQ0ZkaqRAHLWQBhrjILhwaG4=;
	b=Wev9ieiR3/r7UN/G8a9B/RCHMNbUe/KDb9um403EEkBlHBdpW+Ncg0qPEnUrX4+xVhouiV
	83z7vIqBlhHAScWt++zgCbD0kT2btbLtYgYKvC2zW6Ivjayr2wjARctgubBJS06jHwW1Ha
	fv9Ncrg31pFhsrzIWgS/Un/9F/WedrE=
X-MC-Unique: kMSRisufPmahaEwqqtJ8Dg-1
X-Received: by 2002:a05:600c:4d25:b0:3d3:5b7a:1791 with SMTP id u37-20020a05600c4d2500b003d35b7a1791mr48498783wmp.41.1673010018146;
        Fri, 06 Jan 2023 05:00:18 -0800 (PST)
X-Google-Smtp-Source: AMrXdXuk957aN9qRhSS4mlPMbpT2Ktb2g/2MsJT1cShsyRoFBbX7cRz0ybSLDVX1tt7CCv5iHeOHqA==
X-Received: by 2002:a05:600c:4d25:b0:3d3:5b7a:1791 with SMTP id u37-20020a05600c4d2500b003d35b7a1791mr48498757wmp.41.1673010017758;
        Fri, 06 Jan 2023 05:00:17 -0800 (PST)
Date: Fri, 6 Jan 2023 08:00:13 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230106064838-mutt-send-email-mst@kernel.org>
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
MIME-Version: 1.0
In-Reply-To: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workaround is not good: Configure Xen HVM guests to use
> the old and no longer maintained Qemu traditional device model available
> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values which enables the checks for
> the Intel IGD to succeed. The verification that the host device is an
> Intel IGD to be passed through is done by checking the domain, bus, slot,
> and function values as well as by checking that gfx_passthru is enabled,
> the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>

If we are to make changes, something that might be generally
useful is a new mask along the lines of slot_reserved_mask,
which however
	- only affects auto-allocated addresses
	- is controllable from a command-line property

This way one could say "don't allocate any devices to slot 2"
and later "put the IGD device in slot 2".
And the xenpv machine could set defaults for these using the
compat machinery.


> ---
> Notes that might be helpful to reviewers of patched code in hw/xen:
> 
> The new functions and types are based on recommendations from Qemu docs:
> https://qemu.readthedocs.io/en/latest/devel/qom.html
> 
> Notes that might be helpful to reviewers of patched code in hw/i386:
> 
> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> not affect builds that do not have CONFIG_XEN defined.
> 
> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> existing function that is only true when Qemu is built with
> xen-pci-passthrough enabled and the administrator has configured the Xen
> HVM guest with Qemu's igd-passthru=on option.
> 
> v2: Remove From: <email address> tag at top of commit message
> 
> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> 
>     is changed to
> 
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     I hoped that I could use the test in v2, since it matches the
>     other tests for the Intel IGD in Qemu and Xen, but those tests
>     do not work because the necessary data structures are not set with
>     their values yet. So instead use the test that the administrator
>     has enabled gfx_passthru and the device address on the host is
>     02.0. This test does detect the Intel IGD correctly.
> 
> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>     email address to match the address used by the same author in commits
>     be9c61da and c0e86b76
>     
>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> 
> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>     for the Intel IGD that uses the same criteria as in other places.
>     This involved moving the call to xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>     Intel IGD in xen_igd_clear_slot:
>     
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     is changed to
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
>     Added an explanation for the move of xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot to the commit message.
> 
>     Rebase.
> 
> v6: Fix logging by removing these lines from the move from xen_pt_realize
>     to xen_igd_clear_slot that was done in v5:
> 
>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>                " to devfn 0x%x\n",
>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                s->dev.devfn);
> 
>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>     set yet in xen_igd_clear_slot.
> 
>     Sorry for the extra noise.
> 
>  hw/i386/pc_piix.c    |  3 +++
>  hw/xen/xen_pt.c      | 46 +++++++++++++++++++++++++++++++++++---------
>  hw/xen/xen_pt.h      | 16 +++++++++++++++
>  hw/xen/xen_pt_stub.c |  4 ++++
>  4 files changed, 60 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index b48047f50c..bc5efa4f59 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>      }
>  
>      pc_xen_hvm_init_pci(machine);
> +    if (xen_igd_gfx_pt_enabled()) {
> +        xen_igd_reserve_slot(pcms->bus);
> +    }
>      pci_create_simple(pcms->bus, -1, "xen-platform");
>  }
>  #endif
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 0ec7e52183..7fae1e7a6f 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                 s->dev.devfn);
>  
> -    xen_host_pci_device_get(&s->real_device,
> -                            s->hostaddr.domain, s->hostaddr.bus,
> -                            s->hostaddr.slot, s->hostaddr.function,
> -                            errp);
> -    if (*errp) {
> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> -        return;
> -    }
> -
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> @@ -950,11 +941,47 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>  }
>  
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> +}
> +
> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> +{
> +    ERRP_GUARD();
> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> +
> +    xen_host_pci_device_get(&s->real_device,
> +                            s->hostaddr.domain, s->hostaddr.bus,
> +                            s->hostaddr.slot, s->hostaddr.function,
> +                            errp);
> +    if (*errp) {
> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> +        return;
> +    }
> +
> +    if (is_igd_vga_passthrough(&s->real_device) &&
> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> +    }
> +    xpdc->pci_qdev_realize(qdev, errp);
> +}
> +
>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>  
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> +    xpdc->pci_qdev_realize = dc->realize;
> +    dc->realize = xen_igd_clear_slot;
>      k->realize = xen_pt_realize;
>      k->exit = xen_pt_unregister_device;
>      k->config_read = xen_pt_pci_read_config;
> @@ -977,6 +1004,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>      .instance_size = sizeof(XenPCIPassthroughState),
>      .instance_finalize = xen_pci_passthrough_finalize,
>      .class_init = xen_pci_passthrough_class_init,
> +    .class_size = sizeof(XenPTDeviceClass),
>      .instance_init = xen_pci_passthrough_instance_init,
>      .interfaces = (InterfaceInfo[]) {
>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index e7c4316a7d..40b31b5263 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -3,6 +3,7 @@
>  
>  #include "hw/xen/xen_common.h"
>  #include "hw/pci/pci.h"
> +#include "hw/pci/pci_bus.h"
>  #include "xen-host-pci-device.h"
>  #include "qom/object.h"
>  
> @@ -41,7 +42,20 @@ typedef struct XenPTReg XenPTReg;
>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>  
> +#define XEN_PT_DEVICE_CLASS(klass) \
> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> +
> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> +
> +typedef struct XenPTDeviceClass {
> +    PCIDeviceClass parent_class;
> +    XenPTQdevRealize pci_qdev_realize;
> +} XenPTDeviceClass;
> +
>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>                                             XenHostPCIDevice *dev);
> @@ -76,6 +90,8 @@ typedef int (*xen_pt_conf_byte_read)
>  
>  #define XEN_PCI_INTEL_OPREGION 0xfc
>  
> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
> +
>  typedef enum {
>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> index 2d8cac8d54..5c108446a8 100644
> --- a/hw/xen/xen_pt_stub.c
> +++ b/hw/xen/xen_pt_stub.c
> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>          error_setg(errp, "Xen PCI passthrough support not built in");
>      }
>  }
> +
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +}
> -- 
> 2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472543.732815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYX-0003kP-4n; Fri, 06 Jan 2023 13:14:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472543.732815; Fri, 06 Jan 2023 13:14:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYW-0003kH-VJ; Fri, 06 Jan 2023 13:14:44 +0000
Received: by outflank-mailman (input) for mailman id 472543;
 Fri, 06 Jan 2023 13:14:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYV-0002z5-E9
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:43 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 154670af-8dc4-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 14:14:42 +0100 (CET)
Received: by mail-lf1-x12f.google.com with SMTP id cf42so1913857lfb.1
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:42 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 154670af-8dc4-11ed-91b6-6bf2151ebd3b
X-Google-Smtp-Source: AMrXdXvor3i3sFq+rVboLeo1F3I7ZDy57MwxwjjT77+SwrslgXm0lEQNqBGQktwp9LJBw0V3e91aUw==
X-Received: by 2002:ac2:4f13:0:b0:4b5:5da1:4bcb with SMTP id k19-20020ac24f13000000b004b55da14bcbmr20144859lfr.13.1673010882425;
        Fri, 06 Jan 2023 05:14:42 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 3/8] xen/riscv: introduce stack stuff
Date: Fri,  6 Jan 2023 15:14:24 +0200
Message-Id: <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces and sets up a stack in order to enter the C environment.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/Makefile       | 1 +
 xen/arch/riscv/riscv64/head.S | 8 +++++++-
 xen/arch/riscv/setup.c        | 6 ++++++
 3 files changed, 14 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/setup.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 248f2cbb3e..5a67a3f493 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
+obj-y += setup.o
 
 $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 990edb70a0..ddc7104701 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,4 +1,10 @@
         .section .text.header, "ax", %progbits
 
 ENTRY(start)
-        j  start
+        la      sp, cpu0_boot_stack
+        li      t0, PAGE_SIZE
+        add     sp, sp, t0
+
+_start_hang:
+        wfi
+        j  _start_hang
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
new file mode 100644
index 0000000000..2c7dca1daa
--- /dev/null
+++ b/xen/arch/riscv/setup.c
@@ -0,0 +1,6 @@
+#include <xen/init.h>
+#include <xen/compile.h>
+
+/* Xen stack for bringing up the first CPU. */
+unsigned char __initdata cpu0_boot_stack[PAGE_SIZE]
+    __aligned(PAGE_SIZE);
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472540.732781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYU-0002zN-3K; Fri, 06 Jan 2023 13:14:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472540.732781; Fri, 06 Jan 2023 13:14:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYU-0002zG-0e; Fri, 06 Jan 2023 13:14:42 +0000
Received: by outflank-mailman (input) for mailman id 472540;
 Fri, 06 Jan 2023 13:14:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYS-0002z5-LP
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:40 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 135808af-8dc4-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 14:14:39 +0100 (CET)
Received: by mail-lf1-x129.google.com with SMTP id v25so1862672lfe.12
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:39 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 135808af-8dc4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=V2UtY1Vvt5nd0Ps36YR7BE9Io/IKMRHuA+OuD+Ua5WU=;
        b=dEJWA4OToBPY9VGTbfJnZSgsfvptCU7zIzTVaNvbPQvAhGXP//sNT8PPf5rOffTiix
         lXAXd7mf/86mUgdQkvmqhamNAjeOq1FD3iNRV9f++Z04hf0q23EIUyT1I/L9PfNvge9q
         Oo3LDiRXtRp13jPMCGtewMHfzqD4h4FwdCs0vIhSAjVIBrWqLG4HQslgfjlK5+IWh/J2
         P07GZkxP+zJLNYMooHlcfnFw9KOAYhuCQOHtfrW6qsZKIQpnsLJNqeE/bO4N10lqSwoJ
         dgwK7cNnmmU32YaMINbQnnEFYk34QcObMVDHx/q7yFl+HxLV647YyNN1hy0u7DURGfgI
         P+Ww==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=V2UtY1Vvt5nd0Ps36YR7BE9Io/IKMRHuA+OuD+Ua5WU=;
        b=GCrot8gdBjjbXAO1eTEc7y+FRWCmwGv30SFOaLDBnei0DRtjL5ObuHGBgr8eTDSzjM
         bg1q/9YcQB0zHSXyTGUDMpsooHbRMH864aYjPLH7kOqc7bgvD0ZJ5gXYXTb15EH+Jrkj
         KElkGypdf05ypVOF/nIDDQnDxT0HT3mQBCZW7YVbpHggGm7hYxQC6bJ+cpyeavbC2fEB
         WaFyVplspICmJYnWZWUT4edGFxrowmo9AKshxLsqMv5rv67JMEX4Lz4k7rGUMXv3B7Rn
         RkGdmOFaOnkC3w1JQqVLvIUVB6oHvWxdbB+kGrZaN3vUZEmQhPdOtWhesinsm4O/Bc7M
         utOA==
X-Gm-Message-State: AFqh2kobqWUNjGN/fDQUWw2yUsKplvQXHitR+1/ZnErgOB1xJu0FhCvG
	XhY4cAWoU3bLnK4N6xwX1tqwc8S6H3k4MzOm
X-Google-Smtp-Source: AMrXdXsDCwcZ3QWsZUvaCxQJXZp+bIW/488M3BCCgWqS4hkN/+qe2srwuDhBoHTrJvsHJovegdV3bg==
X-Received: by 2002:a05:6512:31cc:b0:4b5:5caf:9d62 with SMTP id j12-20020a05651231cc00b004b55caf9d62mr17690302lfe.61.1673010878710;
        Fri, 06 Jan 2023 05:14:38 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Wei Liu <wl@xen.org>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v1 0/8] Basic early_printk and smoke test implementation
Date: Fri,  6 Jan 2023 15:14:21 +0200
Message-Id: <cover.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- the minimal set of headers and the changes inside them.
- the SBI (RISC-V Supervisor Binary Interface) support necessary for a
  basic early_printk implementation.
- the code needed to set up the stack.
- an early_printk() function that prints plain strings only.
- a RISC-V smoke test which checks that the "Hello from C env" message
  is present in serial.tmp.

Oleksii Kurochko (8):
  xen/riscv: introduce dummy asm/init.h
  xen/riscv: introduce asm/types.h header file
  xen/riscv: introduce stack stuff
  xen/riscv: introduce sbi call to putchar to console
  xen/include: include <asm/types.h> in <xen/early_printk.h>
  xen/riscv: introduce early_printk basic stuff
  xen/riscv: print hello message from C env
  automation: add RISC-V smoke test

 automation/build/archlinux/riscv64.dockerfile |  3 +-
 automation/scripts/qemu-smoke-riscv64.sh      | 20 +++++
 xen/arch/riscv/Kconfig.debug                  |  7 ++
 xen/arch/riscv/Makefile                       |  3 +
 xen/arch/riscv/early_printk.c                 | 27 +++++++
 xen/arch/riscv/include/asm/early_printk.h     | 12 +++
 xen/arch/riscv/include/asm/init.h             | 12 +++
 xen/arch/riscv/include/asm/sbi.h              | 34 +++++++++
 xen/arch/riscv/include/asm/types.h            | 73 +++++++++++++++++++
 xen/arch/riscv/riscv64/head.S                 |  6 +-
 xen/arch/riscv/sbi.c                          | 44 +++++++++++
 xen/arch/riscv/setup.c                        | 18 +++++
 xen/include/xen/early_printk.h                |  2 +
 13 files changed, 259 insertions(+), 2 deletions(-)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h
 create mode 100644 xen/arch/riscv/include/asm/init.h
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/include/asm/types.h
 create mode 100644 xen/arch/riscv/sbi.c
 create mode 100644 xen/arch/riscv/setup.c

-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472542.732803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYV-0003TR-Qd; Fri, 06 Jan 2023 13:14:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472542.732803; Fri, 06 Jan 2023 13:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYV-0003Sw-Kp; Fri, 06 Jan 2023 13:14:43 +0000
Received: by outflank-mailman (input) for mailman id 472542;
 Fri, 06 Jan 2023 13:14:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYU-0002z5-Dm
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:42 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 149870ba-8dc4-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 14:14:41 +0100 (CET)
Received: by mail-lf1-x12f.google.com with SMTP id cf42so1913806lfb.1
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:41 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 149870ba-8dc4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MB3mP3saGY5zi4vniFjN+fb62F0tD8zzx/q/eV6PI4g=;
        b=dHhG0y7E8/G36jeTb3kkFqbhUorGTTD1mCXTlEOCv5nLd0t6yij8q6f2XhSvjbCHSz
         4LzUKh/1RbdJJ7qgeiImwk3EztRh9oPbpX+iJqmQgCuJY2xHs3Jqb3giU9uM6FDa+cFP
         gutliizUf4Wg24ZwSqNd3yN/cKX5mUXHtkje923o2AaxnnfaLMYg/53aRm+WB7XV95bC
         0ND1IglLWaPtvyWmqxjBvmnsy29uP6OkLA6ELEAiogGUwyvDU08UIx8tU6BBSVrsHKUk
         bOfrfqyRVJhfSD4VpvorITmB1PTLQxhnzj9KYF7wEyyyEBYF+qQXyj7s8YOnkkMMQZ1g
         5ufQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=MB3mP3saGY5zi4vniFjN+fb62F0tD8zzx/q/eV6PI4g=;
        b=H8EkbD5Bs3MyPAC5+0dBnv88GLV9kVuGS3+EXnaXKhG5Pgecjjb3x37di/aiUJNgBB
         rxDWUjO3Up91Nq49mLD5129Q0AVtnE+kzn8KkAjwOxTT6GlIvdVedEfbpCVRf/zRGpcT
         zk4eeBgrTqh+qHATHSEQbfxdl50tjvM1qS1rf40v8xG83jYcnogGZu1kx3ObenzkfOph
         VHc20MK5z+OsEemOL1rYr/1QYc9wCAIsPOsfruHPjtxX2RtifUrPriQZ7ur1jjn8Y19X
         1hooJ0kr6kGSmpurlv2lffmnuzxwNtEIfezuQSLKrnKEL1vBTHW7tLQnPTHQd5+Rc/4V
         mJ2Q==
X-Gm-Message-State: AFqh2kpCRf64iCbwrcO5crNNwEPYOFNnTun2nMD6SosStHjyz+yn4p7c
	ObeJvLWYrjtkfDYfkk021kmDFVDgaeMX0rv0
X-Google-Smtp-Source: AMrXdXtja92B/6NXyi/klzTpbEH82FEmrjFVVSWx/83yonduRUFATBxzf42E31lSTN70xqAB7n1C+w==
X-Received: by 2002:a05:6512:1513:b0:4a4:a7d7:4769 with SMTP id bq19-20020a056512151300b004a4a7d74769mr17352068lfb.8.1673010881180;
        Fri, 06 Jan 2023 05:14:41 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 2/8] xen/riscv: introduce asm/types.h header file
Date: Fri,  6 Jan 2023 15:14:23 +0200
Message-Id: <ce66a86285e966700acb13521851aca5b764a56e.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/include/asm/types.h | 73 ++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/types.h

diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
new file mode 100644
index 0000000000..48f27f97ba
--- /dev/null
+++ b/xen/arch/riscv/include/asm/types.h
@@ -0,0 +1,73 @@
+#ifndef __RISCV_TYPES_H__
+#define __RISCV_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8;
+
+typedef __signed__ short __s16;
+typedef unsigned short __u16;
+
+typedef __signed__ int __s32;
+typedef unsigned int __u32;
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+#if defined(CONFIG_RISCV_32)
+typedef __signed__ long long __s64;
+typedef unsigned long long __u64;
+#elif defined (CONFIG_RISCV_64)
+typedef __signed__ long __s64;
+typedef unsigned long __u64;
+#endif
+#endif
+
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+#if defined(CONFIG_RISCV_32)
+typedef signed long long s64;
+typedef unsigned long long u64;
+typedef u32 vaddr_t;
+#define PRIvaddr PRIx32
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0ULL)
+#define PRIpaddr "016llx"
+typedef u32 register_t;
+#define PRIregister "x"
+#elif defined (CONFIG_RISCV_64)
+typedef signed long s64;
+typedef unsigned long u64;
+typedef u64 vaddr_t;
+#define PRIvaddr PRIx64
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "016lx"
+typedef u64 register_t;
+#define PRIregister "lx"
+#endif
+
+#if defined(__SIZE_TYPE__)
+typedef __SIZE_TYPE__ size_t;
+#else
+typedef unsigned long size_t;
+#endif
+typedef signed long ssize_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_TYPES_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472541.732787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYU-00035p-Hx; Fri, 06 Jan 2023 13:14:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472541.732787; Fri, 06 Jan 2023 13:14:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYU-00034D-CJ; Fri, 06 Jan 2023 13:14:42 +0000
Received: by outflank-mailman (input) for mailman id 472541;
 Fri, 06 Jan 2023 13:14:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYT-0002z5-Dm
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:41 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 13e4f564-8dc4-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 14:14:40 +0100 (CET)
Received: by mail-lf1-x136.google.com with SMTP id b3so1912398lfv.2
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:40 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13e4f564-8dc4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=yAcYh/VW7eYXXl0R2jdwjlS37Qv+vZcW+kdMkmb+PS4=;
        b=IlrI0mSe81MehUEzGp9/MbqL/Pkl6vTOQkeNdnArj8JWWJejtDMyERVIbGWTZ3BpXm
         cdl2hGleDIcpJ8VKttf+B3DguD+ugtu1V5plc39rePhJu70fhf4HORtDXOluwNgzZFdF
         HPU35Kzhc/c0ITVRDRzHYzb+qkaLBq9q8fpq4gFPV6cVl8Ct3QcAV2Theof2IrBH4SHE
         2KCh5GoBjbLvlq+IxEVb/ykaL0xTbyg+kX6ILflHPNKLfqeC9SXO2/19T6giQkNi91q5
         vctCpEYEioJe8qs2XLEIUQyPX5+UPDs1eNsh6eCsx3C76Yk8+brLRJzm+IWy0S7DSxOW
         np5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=yAcYh/VW7eYXXl0R2jdwjlS37Qv+vZcW+kdMkmb+PS4=;
        b=qvCxCB94RnvTv9Okv6riYgvGGcH3oOafz3lGhFCDqn7TIozOh5gDr7I40MmpHQKhtV
         UAnk7iFqkr8rGL9mX9w1ob7vVNhHpNIvN0yOFgaIBHfwfog6lbM8F5CodV3K5Wv0DsIF
         7nJ1VBafyahkPpV6LVrUNUWXxTVZkU2UPYHSOvHPoQXnS+HKK15m/Hv1/90YaoNsf0jm
         iVItWbDgsEmIaaO2ThyaTBtCLdm/4owYAnvVj4I4qivKJipY0lBsjshZGMm4GgjRZeOb
         VZ5Yx4ax3qa7uzCUCcoQRYMDS9T7Z3S+O8a/5DS+vVqfXWDreV6mu4aO94LydDgEGYsa
         bO9g==
X-Gm-Message-State: AFqh2kp444fpXxSdwXUmiXCXaUfRtsmD2gOnDXrJIZ6g7fcCTdZCI07u
	4iwpvpQE1zrBNdi14AZbXCVPdakgVcBoSYwp
X-Google-Smtp-Source: AMrXdXuRGgsiwA4HhZEUjvPe3rUe82HuuCxRCrIxhvRyjc68zQIey6oqWemF0IKGffdvUIL0o0nSeQ==
X-Received: by 2002:a05:6512:6ce:b0:4cc:553f:5b68 with SMTP id u14-20020a05651206ce00b004cc553f5b68mr3020397lff.40.1673010879984;
        Fri, 06 Jan 2023 05:14:39 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 1/8] xen/riscv: introduce dummy asm/init.h
Date: Fri,  6 Jan 2023 15:14:22 +0200
Message-Id: <cb2f0751d717774dfe065727c87b8f62f588ca17.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/include/asm/init.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/init.h

diff --git a/xen/arch/riscv/include/asm/init.h b/xen/arch/riscv/include/asm/init.h
new file mode 100644
index 0000000000..237ec25e4e
--- /dev/null
+++ b/xen/arch/riscv/include/asm/init.h
@@ -0,0 +1,12 @@
+#ifndef _XEN_ASM_INIT_H
+#define _XEN_ASM_INIT_H
+
+#endif /* _XEN_ASM_INIT_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472544.732825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYa-00045H-CR; Fri, 06 Jan 2023 13:14:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472544.732825; Fri, 06 Jan 2023 13:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYa-000458-8K; Fri, 06 Jan 2023 13:14:48 +0000
Received: by outflank-mailman (input) for mailman id 472544;
 Fri, 06 Jan 2023 13:14:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYY-0003zy-Jd
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:46 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16168da3-8dc4-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 14:14:44 +0100 (CET)
Received: by mail-lf1-x136.google.com with SMTP id bp15so1865644lfb.13
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:44 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16168da3-8dc4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Szof3vhqcf1cuBmqVq5Q2IZcNa/4nJpd7CeH5aS/VM8=;
        b=cQzbsV1We3uOandm0dnfdaj/H7IIIhcsfpN/Lxb7zZEiEQWUgLQ4F9PMzPlKUVHcXY
         jRSZco3lfG1JhL/PTtNL7XwLktSfpwQQPYNJ1d8F0CfgrXGn0EfOkFV8iO9DqQpHj6j5
         60YsQ63IXJTO1yh7XAumMCtqfgb/wNJtY1saMKPfMfW913cDSM61f+kLg/zBzPlghdPq
         uuoErP5j9CAIfWku6Qdh1xr1Wa1dq4fIGHpkRozz7IuzyJxZGdgk779OoZINEAKJCGq6
         tjhrvsqOEvkmRuxkyGpg/lQeYe0+5RkvngIsCvKtpTzVl3YD0IPli5DJGsdkopklAup8
         kwPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Szof3vhqcf1cuBmqVq5Q2IZcNa/4nJpd7CeH5aS/VM8=;
        b=FwZCyupFCjNZq1d+nmAg73AMVK7AvMOoT3uGXLJp62W529RSlzmU6dY6PqcMPO6MpO
         aj57YQnlGK1jvnGO9uDLkCePYxwXa/0cKjVq0gX0edZPH/ciAm2HHST1+FDCgYvluUJw
         BKFVGxJxWDjCEVtGAiVvJ+/LbGyoHq0teaGg2F03I1VQhRgkQeyMRo9+tUbsPBOlOE4Y
         dSkF1fdRPvSyzifY3sFCHvnf79S2ZK1O1BD7100gnK3TR1mHJwtC8yRQE6NqS9sbTuds
         PrEy8RbbawVjuiOg5w64I90leJcfaPcmAy4RNCMISsdW0cyqFhv6ReHQjIqX0Lzl6i/E
         jmrA==
X-Gm-Message-State: AFqh2kq462u/jLGdE86KSGpch86/sVfnVHZJbydqR4c4xIIL/p05zGD4
	Tl2K2Hzv9vpWkx7m9t9KvWlRyXbUuNYmZHrf
X-Google-Smtp-Source: AMrXdXvNq2lNiXoqpUOykfZiuBP8QUksdxBOvF4zWSj54AMf05ZpoqtWPB5OqFs7UVSr9Xh6n0qkVA==
X-Received: by 2002:a05:6512:32cc:b0:4cb:4378:9c6 with SMTP id f12-20020a05651232cc00b004cb437809c6mr3292688lfg.23.1673010883600;
        Fri, 06 Jan 2023 05:14:43 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to console
Date: Fri,  6 Jan 2023 15:14:25 +0200
Message-Id: <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the sbi_console_putchar() SBI call, which is
necessary to implement the initial early_printk support.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/Makefile          |  1 +
 xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
 xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++
 3 files changed, 79 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/sbi.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 5a67a3f493..60db415654 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += setup.o
+obj-y += sbi.o
 
 $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
new file mode 100644
index 0000000000..34b53f8eaf
--- /dev/null
+++ b/xen/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: (GPL-2.0-or-later) */
+/*
+ * Copyright (c) 2021 Vates SAS.
+ *
+ * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Taken/modified from Xvisor project with the following copyright:
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ */
+
+#ifndef __CPU_SBI_H__
+#define __CPU_SBI_H__
+
+#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
+
+struct sbiret {
+    long error;
+    long value;
+};
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
+        unsigned long arg1, unsigned long arg2,
+        unsigned long arg3, unsigned long arg4,
+        unsigned long arg5);
+
+/**
+ * Writes given character to the console device.
+ *
+ * @param ch The data to be written to the console.
+ */
+void sbi_console_putchar(int ch);
+
+#endif // __CPU_SBI_H__
diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
new file mode 100644
index 0000000000..67cf5dd982
--- /dev/null
+++ b/xen/arch/riscv/sbi.c
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Taken and modified from the xvisor project with the copyright Copyright (c)
+ * 2019 Western Digital Corporation or its affiliates and author Anup Patel
+ * (anup.patel@wdc.com).
+ *
+ * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2021 Vates SAS.
+ */
+
+#include <xen/errno.h>
+#include <asm/sbi.h>
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
+            unsigned long arg1, unsigned long arg2,
+            unsigned long arg3, unsigned long arg4,
+            unsigned long arg5)
+{
+    struct sbiret ret;
+    register unsigned long a0 asm ("a0") = arg0;
+    register unsigned long a1 asm ("a1") = arg1;
+    register unsigned long a2 asm ("a2") = arg2;
+    register unsigned long a3 asm ("a3") = arg3;
+    register unsigned long a4 asm ("a4") = arg4;
+    register unsigned long a5 asm ("a5") = arg5;
+    register unsigned long a6 asm ("a6") = fid;
+    register unsigned long a7 asm ("a7") = ext;
+
+    asm volatile ("ecall"
+              : "+r" (a0), "+r" (a1)
+              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
+              : "memory");
+    ret.error = a0;
+    ret.value = a1;
+
+    return ret;
+}
+
+void sbi_console_putchar(int ch)
+{
+    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
+}
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472545.732830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYa-00049s-Qr; Fri, 06 Jan 2023 13:14:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472545.732830; Fri, 06 Jan 2023 13:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYa-00049H-K4; Fri, 06 Jan 2023 13:14:48 +0000
Received: by outflank-mailman (input) for mailman id 472545;
 Fri, 06 Jan 2023 13:14:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYZ-0003zy-CT
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:47 +0000
Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com
 [2a00:1450:4864:20::12d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16d71d02-8dc4-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 14:14:45 +0100 (CET)
Received: by mail-lf1-x12d.google.com with SMTP id z26so1886263lfu.8
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:45 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16d71d02-8dc4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=V/C7mq9n+LLhC4Y4ZbcEJdqsm6p7eqTbyDE7iupd4oI=;
        b=DmPgp211+8J2th3XIdLG7IFTx4S08FyDzOd2EKY33xrDl/G3AFncyIEwd9hidIynjW
         My5wV8H1swatr8xVCeu/nA4fhbDyYXH215UKAkYHa/zjrp9Gws+38Z0SsYUN1qcFFlVS
         rbuwrpYwBVF7X3vdf8bKHyA62wshCC0L+uXcVvoBk/Rr8XdlqLdoiJCiSiST7hPNYHVG
         bwBJ1t1ISSHFQfFYPJ9W79L+cI51pnkYJ0WQScbMrNDxRYFC9gaAJUsN5oAuuwX10gj6
         VJ+sERai9LxYoUtqqt3lpvj1a3yeLGGtY+m4VUcEfkRPxvXvNfUJ5SmYw0VU59TpGiu8
         vevg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=V/C7mq9n+LLhC4Y4ZbcEJdqsm6p7eqTbyDE7iupd4oI=;
        b=NnkWNNRrOhmeW9ToyNCn7stmnwK2/uLmmA2pUXs4BQzMTP+qb0vuO3bJ76tfXOwC4A
         nD4ioATkBrNxHEWERPXobYdVR+QkDLa5Y+23oZKGHJTrYaMQZFvppysDvrH6Hqt6jp9V
         Hx/vCtfxDQX1bPD3J1V/qInD0QL482p5YXYYuMihOi/rifWPaoyHk+n75RsnDlcrID08
         QFUbj1RVeiu9cenpbfWCx9gHzOoQw3U5vlJKCyJcZnmBrUetzGU3qurpvSTqzFdeUa8l
         d14rkFt2/Dz8WEneNpt9FwPquKLOra1YZoHUpjZEbBYxl/VCbrjj/eKA80PEwydiOMbZ
         QGaQ==
X-Gm-Message-State: AFqh2kpQm1xgce9MP5LoGvanJpv00kkSmKBAX96JBrCOcyKqVzSkHOK4
	jCDGqYQrvpr8sYL1dBh6P8xyJfzL61IzToK6
X-Google-Smtp-Source: AMrXdXuzdj7/0NVCbmFTZQ7fqbaayzfjRYlIo5wrF/b9HwI6nUYbU6E3dM8LqLQZDIL0pVc1/Rrufg==
X-Received: by 2002:a05:6512:16a8:b0:4b5:b7ba:cae with SMTP id bu40-20020a05651216a800b004b5b7ba0caemr16054266lfb.48.1673010884899;
        Fri, 06 Jan 2023 05:14:44 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 5/8] xen/include: include <asm/types.h> in <xen/early_printk.h>
Date: Fri,  6 Jan 2023 15:14:26 +0200
Message-Id: <940bf18969634564fa5d206d02eb2a116c9e0ea0.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

<asm/types.h> should be included because the second argument of
early_puts() has type 'size_t', which is defined in <asm/types.h>.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/include/xen/early_printk.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/include/xen/early_printk.h b/xen/include/xen/early_printk.h
index 0f76c3a74f..abb34687da 100644
--- a/xen/include/xen/early_printk.h
+++ b/xen/include/xen/early_printk.h
@@ -4,6 +4,8 @@
 #ifndef __XEN_EARLY_PRINTK_H__
 #define __XEN_EARLY_PRINTK_H__
 
+#include <asm/types.h>
+
 #ifdef CONFIG_EARLY_PRINTK
 void early_puts(const char *s, size_t nr);
 #else
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472546.732847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYc-0004b0-9A; Fri, 06 Jan 2023 13:14:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472546.732847; Fri, 06 Jan 2023 13:14:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYc-0004ZS-0y; Fri, 06 Jan 2023 13:14:50 +0000
Received: by outflank-mailman (input) for mailman id 472546;
 Fri, 06 Jan 2023 13:14:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYa-0002z5-5I
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:48 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 180c2e86-8dc4-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 14:14:47 +0100 (CET)
Received: by mail-lf1-x12f.google.com with SMTP id cf42so1914091lfb.1
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:47 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 180c2e86-8dc4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/Qi8VnEIuZ7tu3k7g7nrwofraJgO1NpoRR5W+cQW214=;
        b=IUO9MIUCR9uzevVnoE6Q5BoLbKRfVemQHPUHAEoYTQT3tphPo6e4/4rUj/rYQLAvQX
         5iqss+xdYJai0JXiu5wZ8CbZrItkTyIG0h28Xdz/rgcuE4tUCZxm9SKAcs6jjj31yu68
         AUm0+UBq8a3vRf90sO+kGhFnD6ZCthJqi5ifsGvZpMomyC4JBhCISalm6Tmcs49jMoE3
         Z0kSgzSdSlTeV//BPmlZIOl5GMM7wXL5GC/Avh7rhY4HXo3xbWF3sAGJuWH0yu4ipGUH
         VyyDRfok7u/KqekTkgoRn2mNY6Fx2E4YnoHWhOQXyg86MbWAgijpBqtre+bLF+hpV2dT
         trGg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/Qi8VnEIuZ7tu3k7g7nrwofraJgO1NpoRR5W+cQW214=;
        b=z1goe7/QVe31MFZFwXDra6gie3EfYvBWLMPL6THzaQTzWLM4jE7AL1dxRRNOpRFCpH
         aEME83JKovQfiioOdQyBwoVg+NH02PejfiAOCzP4+sNHZdkkUy48XR/8ZKAZWVhDVND1
         R0RLBczI+Iwm5lclOmrQPDgBygJ70J1W76ek0nrKrr19L0VoyeuU3DY/2B8FFLyiq0vl
         Y+RMgMEgF7eblVV+jBvIbGuxOLV/CqboBcD+MCdglt+UEagPyQFl1peAJUSqc6bNhRz7
         wxE3p6ewQnHFtFotb4F/zvzloinckwlPaZxGy7VKr7boW8N2dM2ZKn9HxQ6tPAzTPL88
         MwAg==
X-Gm-Message-State: AFqh2kr7+gd46GFlM1LUT1c1U/rP5EYIJ/cUK8sApLKMGtivicz3cLHM
	u1C8aFv9sDkUKpBO7AmLFrntlEnnMeXCWTnZ
X-Google-Smtp-Source: AMrXdXsxlHbO2bPPbcGhGWP/ZxA8JWr/SwPBqT0WRMNphDupmhWfQ7F9toeww1dIfqqiphwz81zoYA==
X-Received: by 2002:a05:6512:308f:b0:4cb:1645:7259 with SMTP id z15-20020a056512308f00b004cb16457259mr11225608lfd.61.1673010886967;
        Fri, 06 Jan 2023 05:14:46 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 7/8] xen/riscv: print hello message from C env
Date: Fri,  6 Jan 2023 15:14:28 +0200
Message-Id: <21fb02aa340fe3a7a95dfbb950b33ae7a363496e.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/riscv64/head.S |  4 +---
 xen/arch/riscv/setup.c        | 12 ++++++++++++
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index ddc7104701..4e399016e9 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -5,6 +5,4 @@ ENTRY(start)
         li      t0, PAGE_SIZE
         add     sp, sp, t0
 
-_start_hang:
-        wfi
-        j  _start_hang
+        tail    start_xen
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 2c7dca1daa..785566103b 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,6 +1,18 @@
 #include <xen/init.h>
 #include <xen/compile.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[PAGE_SIZE]
     __aligned(PAGE_SIZE);
+
+void __init noreturn start_xen(void)
+{
+    early_printk("Hello from C env\n");
+
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472547.732851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYc-0004fO-NY; Fri, 06 Jan 2023 13:14:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472547.732851; Fri, 06 Jan 2023 13:14:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYc-0004ef-FG; Fri, 06 Jan 2023 13:14:50 +0000
Received: by outflank-mailman (input) for mailman id 472547;
 Fri, 06 Jan 2023 13:14:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYa-0003zy-CY
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:48 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 176521e3-8dc4-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 14:14:46 +0100 (CET)
Received: by mail-lf1-x130.google.com with SMTP id bu8so1902321lfb.4
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:46 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 176521e3-8dc4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=deWE8GI9Ntl4m3olds6B452/c7DgLQANjeRayM/Lyr4=;
        b=KkPDjBNkaIRh+ppwZJdsQCOYaVYRO59kzdiLFP1dy8UBa5sP3DpybFKTltLHwEWE5g
         KAYOkw4Ci8qbPUnqpCIYzjYOAAvc5e5XHrQJWr5LnptUGHBoVk2xY8Q+UTnTPPfYY1nB
         uEVVSm42l9BMNhgPTuLO4qNUCxGOzEIZmVmRuXkZleKJbitUzjbO8a+s4sZMXEM8PAkF
         EbAeJCKxY0SikI/TczNSG4E2MiRweJNsScgLFwAnlxnoQd/3S2zzPPQjcB5/dBm1SCrC
         wn8XpSFiW60HixsnBnthHs0wFp2MdPqZspdADTGemwPHVSi7qrXnDJopky9SG7Vmw21Y
         DLzA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=deWE8GI9Ntl4m3olds6B452/c7DgLQANjeRayM/Lyr4=;
        b=0zdFsieWuRshahHRoaJ9HeeX6qiAUOeXOnenmFvBW1lo5+x1Jlz48JYZOaX5ph/nox
         dVSTAVu/bbX4OA51/snCrLS/2JX7soPJBiwhEmnvYNrKTD9CZRcdRbB1U//ZC5BBqWiD
         NMxl+oQsSQo/G1p5gxgIGcfQ5KYDv+VvcVd3PSdw5I9PfmlzeNU7dqGc83s7F39ThWCV
         6EsYLKov/07OP7tgaI1o8fLj2eF/QTkEdvePfODSK25Xl7/L/QiotQDyltA//kgWtNVV
         9BzDyz8cS7Zhy5O1JefXeMy5mHDGMMfsjyN8bpnJu7/9qprHyYC4gsAcwUKJovwKIBTB
         Pzyg==
X-Gm-Message-State: AFqh2kq8jRR7RN1xDTOzD+E+o/gcHn7QfqfLcsVzimM4kDQuEIlwAnxV
	jgbOMKD+Bc47tu+Ti7e2Ma/+8U9pxGokCBR6
X-Google-Smtp-Source: AMrXdXtd7Mm879/iojPZdzolaaaSsCDSS6dg4V03gg7DzzioaB2FFkJaTLjtD/LgOltwFLEIDf5Tag==
X-Received: by 2002:a05:6512:4020:b0:4b6:f595:89a7 with SMTP id br32-20020a056512402000b004b6f59589a7mr16272641lfb.14.1673010885964;
        Fri, 06 Jan 2023 05:14:45 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 6/8] xen/riscv: introduce early_printk basic stuff
Date: Fri,  6 Jan 2023 15:14:27 +0200
Message-Id: <3f30a60729b45ee01adc2d4c0eec5a89bb083abd.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces basic early_printk functionality, which is
enough to print a hello message from the C environment.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/Kconfig.debug              |  7 ++++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 27 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++++++++
 4 files changed, 47 insertions(+)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..940630fd62 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,7 @@
+config EARLY_PRINTK
+    bool "Enable early printk config"
+    default DEBUG
+    depends on RISCV_64
+    help
+
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 60db415654..e8630fe68d 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,6 +1,7 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += setup.o
 obj-y += sbi.o
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 
 $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..f357f3220b
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,27 @@
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/sbi.h>
+#include <asm/early_printk.h>
+
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if (*s == '\n')
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while (*str)
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {};
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:14:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:14:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472548.732867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYe-00059H-BB; Fri, 06 Jan 2023 13:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472548.732867; Fri, 06 Jan 2023 13:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmYe-00057p-41; Fri, 06 Jan 2023 13:14:52 +0000
Received: by outflank-mailman (input) for mailman id 472548;
 Fri, 06 Jan 2023 13:14:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QRQJ=5D=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pDmYc-0003zy-5N
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:14:50 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 18918d88-8dc4-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 14:14:48 +0100 (CET)
Received: by mail-lf1-x135.google.com with SMTP id bt23so1893102lfb.5
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 05:14:48 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f14-20020a0565123b0e00b004b7033da2d7sm150221lfv.128.2023.01.06.05.14.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 05:14:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18918d88-8dc4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=zkuUnAoSkk9s+gFXJpwU6jCUW0aPzs/gF76MvPyHXnI=;
        b=MVGTWNTTVQ4gdBPkwAo6PykkJ/JZNdEmbXh+V9u38VeofVDL6lNjpAk8ebSnjOy2aC
         qdLCnQD+QuMrhICGYxaMiQjun2jKwVFXKP+D+RGUuQ8/NnBlnGiAymNJ+tVOGEdV39T4
         hYUvy4IYESRjdG7JvciMGqn+fDjDRtJTPGZI+ZrdYeRG9gwdj3qMjkacvklP4Hml+O2/
         zbDxQY3UwHMbjR6fq/YTOZSb0EmYs3XrB3pAcrUfObI5XefUYO75+bAAyoBcZcWktlOs
         fQRD2eR4Rjy9b/ghj/nGUgUiiWydaIrR42doT6+DO44J8H2wblCMCupD0rV6CukG3VaH
         HSPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=zkuUnAoSkk9s+gFXJpwU6jCUW0aPzs/gF76MvPyHXnI=;
        b=W8TdNl3tHDppVRxm80uT4J3Ub7KUvhuqmKNlk3k+2wOwyPtAfMtixr61r9b+jjeT1N
         S1uVJ65AjV5LYw6YPLk9UGW4o8JY5LquhjEiOyXPYAKTE3FFBiNUEG1VbgF6jQc8kaSt
         EPkzXZBDXUtvbln0CD6Whj/iclLSgyq8VjfrJURQkzgRwfloDtOMOd3H0VeVb+mQ81mC
         fFrxzdG5yLW0mPGKMjzhFUmDhUKl9IeEJUo6KxjiML15Rd9Fv5eiUKiKfuCyGYddvVtB
         VQ0n8QcgNn5bBNsy6bPmNxYj79FhRUnPRcjvGwR3larjU9hDzacUSoLe+pOWY4BZvwvX
         kbLw==
X-Gm-Message-State: AFqh2krlsZ14pK5xLJzAlW7U5jb7oYk5/ckd9GvnCJ072d5Ec8BZitHl
	parp1EziKAR36dyAB7mWXh1dBoG2of7SucNy
X-Google-Smtp-Source: AMrXdXuUGrBpflnPuMb2DD1qolCCvG4jfT8bJ+rtMFDENSeIxdQCNUSBSPGNfD78+DSR6H0Ar5nG1Q==
X-Received: by 2002:ac2:53ab:0:b0:4cb:145d:c407 with SMTP id j11-20020ac253ab000000b004cb145dc407mr9542521lfh.7.1673010887872;
        Fri, 06 Jan 2023 05:14:47 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v1 8/8] automation: add RISC-V smoke test
Date: Fri,  6 Jan 2023 15:14:29 +0200
Message-Id: <90078a83982b37846e9845c8ffc50c92f3be1f47.1673009740.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the message 'Hello from C env' is present in the
log file, to be sure that the stack is set up and the C part of
early printk is working.

Also add qemu-system-riscv to riscv64.dockerfile, as it is required
for the RISC-V smoke test.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 automation/build/archlinux/riscv64.dockerfile |  3 ++-
 automation/scripts/qemu-smoke-riscv64.sh      | 20 +++++++++++++++++++
 2 files changed, 22 insertions(+), 1 deletion(-)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/riscv64.dockerfile
index ff8b2b955d..375c78ecd5 100644
--- a/automation/build/archlinux/riscv64.dockerfile
+++ b/automation/build/archlinux/riscv64.dockerfile
@@ -9,7 +9,8 @@ RUN pacman --noconfirm --needed -Syu \
     inetutils \
     riscv64-linux-gnu-binutils \
     riscv64-linux-gnu-gcc \
-    riscv64-linux-gnu-glibc
+    riscv64-linux-gnu-glibc \
+    qemu-system-riscv
 
 # Add compiler path
 ENV CROSS_COMPILE=riscv64-linux-gnu-
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:41:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:41:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472605.732880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmxq-0002XZ-93; Fri, 06 Jan 2023 13:40:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472605.732880; Fri, 06 Jan 2023 13:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmxq-0002XS-6S; Fri, 06 Jan 2023 13:40:54 +0000
Received: by outflank-mailman (input) for mailman id 472605;
 Fri, 06 Jan 2023 13:40:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDmxo-0002XK-Ky
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:40:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDmxo-00082r-7K; Fri, 06 Jan 2023 13:40:52 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.4.240]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDmxo-0001Dv-0C; Fri, 06 Jan 2023 13:40:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=pHwJANpkRcSFgmVES30QT1jRvgzCAyd+JZcZ0J+t3G4=; b=fDT4xYoOMSMvX2R8xytxQG0yJZ
	0RcDYcDeS5Xe+Iy7oZYXr0p9Zr7Bd00eR1p22/OZ0mYVDdkgTntLbNjMTiUbEe24usMnQ4cMpD879
	WtPgxwtvA/+XIY3tBYIMggLt2O4Cqym+z+dAbrCvFfH8aYdWK6yaTln9710nKqrdUsNw=;
Message-ID: <d77e7617-5263-0072-4786-ba6144247a4b@xen.org>
Date: Fri, 6 Jan 2023 13:40:49 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 06/01/2023 13:14, Oleksii Kurochko wrote:
> The patch introduces the sbi_putchar() SBI call, which is necessary
> to implement the initial early_printk
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>   xen/arch/riscv/Makefile          |  1 +
>   xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>   xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++

IMHO, it would be better to implement sbi.c in assembly so you can 
print to the console before you jump to the C world.

>   3 files changed, 79 insertions(+)
>   create mode 100644 xen/arch/riscv/include/asm/sbi.h
>   create mode 100644 xen/arch/riscv/sbi.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 5a67a3f493..60db415654 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,5 +1,6 @@
>   obj-$(CONFIG_RISCV_64) += riscv64/
>   obj-y += setup.o
> +obj-y += sbi.o

Please order the filename alphabetically.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:42:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:42:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472612.732891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmzi-00037C-LG; Fri, 06 Jan 2023 13:42:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472612.732891; Fri, 06 Jan 2023 13:42:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDmzi-000375-IU; Fri, 06 Jan 2023 13:42:50 +0000
Received: by outflank-mailman (input) for mailman id 472612;
 Fri, 06 Jan 2023 13:42:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDmzh-00036z-Ve
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:42:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDmzh-000861-L1; Fri, 06 Jan 2023 13:42:49 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.4.240]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDmzh-0001I7-DC; Fri, 06 Jan 2023 13:42:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=1xSvn0wKkIrdDwdTQKVcoGlupvegRZ85iKM+1ckxPpQ=; b=xsEVc6FnI0VAfju6DbUu0t+GQh
	CK6v/X38FlZSPUut7OG28p28bZ+amJ3w/B2eUpcwYZZ6P6zMNGIO1AWXxyuxz2xI4JRwBH+FhsbHk
	LWnTpymxM+zq4XNt3N6exlFk0XV/KLf9e5xbP0BuegpY5E1VQCuvHjY+rNHidF5nEq8Q=;
Message-ID: <0da22900-63f0-b8fa-00b6-855e2a94485d@xen.org>
Date: Fri, 6 Jan 2023 13:42:47 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 1/8] xen/riscv: introduce dummy asm/init.h
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <cb2f0751d717774dfe065727c87b8f62f588ca17.1673009740.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <cb2f0751d717774dfe065727c87b8f62f588ca17.1673009740.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 06/01/2023 13:14, Oleksii Kurochko wrote:

I am guessing this is necessary because <xen/init.h> will be used later 
on.

If so, then I think it would be useful to mention it in the commit message.

> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:46:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:46:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472619.732902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDn2o-0003jM-45; Fri, 06 Jan 2023 13:46:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472619.732902; Fri, 06 Jan 2023 13:46:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDn2o-0003jF-0s; Fri, 06 Jan 2023 13:46:02 +0000
Received: by outflank-mailman (input) for mailman id 472619;
 Fri, 06 Jan 2023 13:46:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDn2n-0003j7-4v
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:46:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDn2m-00088g-6r; Fri, 06 Jan 2023 13:46:00 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.4.240]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDn2m-0001UQ-0p; Fri, 06 Jan 2023 13:46:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=c9iXVFW3rTmIfW3C/upQrZFmISBGPfs6G7zwCooEQRg=; b=2gEsn1qCwQ/J91sTxhEMaLFzx/
	le4lD/EPMDVOP2QpO/C+9vmfClynQs7/Sa45hepN9a1BJ8/DuWzrAbAJqxEvUfOYef3IdGhZyCioz
	hoZLIICMm469iZzmRvcGCbGqYDIdDzNpbYqGlDk8G2zKV70+sRF7wkWL+fwdbgp2ltUw=;
Message-ID: <2db650a0-49c4-9894-3220-a13306c7e2ea@xen.org>
Date: Fri, 6 Jan 2023 13:45:58 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 5/8] xen/include: include <asm/types.h> in
 <xen/early_printk.h>
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <940bf18969634564fa5d206d02eb2a116c9e0ea0.1673009740.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <940bf18969634564fa5d206d02eb2a116c9e0ea0.1673009740.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 06/01/2023 13:14, Oleksii Kurochko wrote:
> <asm/types.h> should be included because the second argument of
> early_puts() has type 'size_t', which is defined in <asm/types.h>.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
>   xen/include/xen/early_printk.h | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/xen/include/xen/early_printk.h b/xen/include/xen/early_printk.h
> index 0f76c3a74f..abb34687da 100644
> --- a/xen/include/xen/early_printk.h
> +++ b/xen/include/xen/early_printk.h
> @@ -4,6 +4,8 @@
>   #ifndef __XEN_EARLY_PRINTK_H__
>   #define __XEN_EARLY_PRINTK_H__
>   
> +#include <asm/types.h>
> +
>   #ifdef CONFIG_EARLY_PRINTK
>   void early_puts(const char *s, size_t nr);
>   #else

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:51:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:51:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472626.732912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDn8D-0005AD-Of; Fri, 06 Jan 2023 13:51:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472626.732912; Fri, 06 Jan 2023 13:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDn8D-0005A6-Lz; Fri, 06 Jan 2023 13:51:37 +0000
Received: by outflank-mailman (input) for mailman id 472626;
 Fri, 06 Jan 2023 13:51:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDn8C-0005A0-6A
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:51:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDn8B-0008PF-Kh; Fri, 06 Jan 2023 13:51:35 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.4.240]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDn8B-0001aw-EQ; Fri, 06 Jan 2023 13:51:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=eO/UT7LLAV0/9FLExjojEvWVJufAtRmMpUZsDz++tBE=; b=dnOcL8rB7uXZp08K21CcfYvv5i
	VcfkGVeN2HlaJZEHzWc2qkPKJvkmheLakkq58reBQiULQSwlY68tbT5gfdN91XrWkPUWXQ1C+WeAk
	hFNF5D9IibUBZVhAPTqKaauuXKCjD94Z0/gsCxjYx3rMPp4UiLc6LVQnwl76JXKsDdn0=;
Message-ID: <e7e66208-5a4f-f37a-6368-29489e93aad9@xen.org>
Date: Fri, 6 Jan 2023 13:51:33 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 6/8] xen/riscv: introduce early_printk basic stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <3f30a60729b45ee01adc2d4c0eec5a89bb083abd.1673009740.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <3f30a60729b45ee01adc2d4c0eec5a89bb083abd.1673009740.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 06/01/2023 13:14, Oleksii Kurochko wrote:
> The patch introduces the basic early_printk functionality, which is
> enough to print 'hello from C environment'.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>   xen/arch/riscv/Kconfig.debug              |  7 ++++++
>   xen/arch/riscv/Makefile                   |  1 +
>   xen/arch/riscv/early_printk.c             | 27 +++++++++++++++++++++++
>   xen/arch/riscv/include/asm/early_printk.h | 12 ++++++++++
>   4 files changed, 47 insertions(+)
>   create mode 100644 xen/arch/riscv/early_printk.c
>   create mode 100644 xen/arch/riscv/include/asm/early_printk.h
> 
> diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> index e69de29bb2..940630fd62 100644
> --- a/xen/arch/riscv/Kconfig.debug
> +++ b/xen/arch/riscv/Kconfig.debug
> @@ -0,0 +1,7 @@
> +config EARLY_PRINTK
> +    bool "Enable early printk config"
> +    default DEBUG
> +    depends on RISCV_64

OOI, why can't this be used for RISCV_32?

> +    help
> +
> +      Enables early printk debug messages
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 60db415654..e8630fe68d 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,6 +1,7 @@
>   obj-$(CONFIG_RISCV_64) += riscv64/
>   obj-y += setup.o
>   obj-y += sbi.o
> +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o

Please order the files alphabetically.

>   
>   $(TARGET): $(TARGET)-syms
>   	$(OBJCOPY) -O binary -S $< $@
> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
> new file mode 100644
> index 0000000000..f357f3220b
> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c
> @@ -0,0 +1,27 @@

Please add an SPDX license (the default for Xen is GPLv2).

> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>

So the copyright here is from Bobby. But I don't see any mention in the 
commit message. Where is this coming from?

> + */
> +#include <asm/sbi.h>
> +#include <asm/early_printk.h>

Please order the files alphabetically.

> +
> +void early_puts(const char *s, size_t nr)
> +{
> +    while ( nr-- > 0 )
> +    {
> +        if (*s == '\n')
> +            sbi_console_putchar('\r');
> +        sbi_console_putchar(*s);
> +        s++;
> +    }
> +}
> +
> +void early_printk(const char *str)
> +{
> +    while (*str)
> +    {
> +        early_puts(str, 1);
> +        str++;
> +    }
> +}
> diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
> new file mode 100644
> index 0000000000..05106e160d
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/early_printk.h
> @@ -0,0 +1,12 @@
> +#ifndef __EARLY_PRINTK_H__
> +#define __EARLY_PRINTK_H__
> +
> +#include <xen/early_printk.h>
> +
> +#ifdef CONFIG_EARLY_PRINTK
> +void early_printk(const char *str);
> +#else
> +static inline void early_printk(const char *s) {};
> +#endif
> +
> +#endif /* __EARLY_PRINTK_H__ */

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:52:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472630.732923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDn8m-0005fV-1O; Fri, 06 Jan 2023 13:52:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472630.732923; Fri, 06 Jan 2023 13:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDn8l-0005fO-Uy; Fri, 06 Jan 2023 13:52:11 +0000
Received: by outflank-mailman (input) for mailman id 472630;
 Fri, 06 Jan 2023 13:52:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDn8k-0005ey-4y
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:52:10 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4d53b167-8dc9-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 14:52:07 +0100 (CET)
Received: from mail-dm6nam12lp2174.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jan 2023 08:52:02 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5427.namprd03.prod.outlook.com (2603:10b6:208:29d::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 13:52:00 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 13:52:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d53b167-8dc9-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673013127;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=64DRU3wTSykqmUOKNnDmouyLt/Ao2F1H8xBMh1AeVhM=;
  b=H4BTQiDDyv3aLW5LuMvFxQ2MVek0EZQ1PTpwF//naWlLiKcfvrTUIZcg
   TDP9xFJ4k4U+KyVqpZb5AXogeCrYWLozvEL6kHqLyZk3bEwRzdo7eGGZW
   +oYPfgQ9u09Dy2Uu8n3gTbcJBDzksBVpcuXt04zBLqGm/NvWYp7AITYoY
   M=;
X-IronPort-RemoteIP: 104.47.59.174
X-IronPort-MID: 91892705
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,305,1665460800"; 
   d="scan'208";a="91892705"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JRymUHbqmOzUyZsaDr7P2CPCuPK4taQQDR9pxWekOWAlxDxZIrQZTGXXKFVv2NQaMm2r09Di1uFghCsrC5j7NF5SsFcYw66UkRRf0SgAn9Cqd5z1X/APmrd8j62c9FpjDIP3Bb6Yu+ZkoMrnoUgCzQN9Z1bkVIXoIs6nMoft2fIclP0ndI2aIt3WZmHhOMVttoRMl90ZCX3ghl781QlKXaeFck3Rcj0miUMk7qkQXsu0nfD26JMVe1vgr/JYLQGtLLGY//P9MPbAdyTZ1/4/uSnHGvV3BeNgw93SB7Tua3YLUPMntN5eXbaO+nYLvcDEExB2EwmD5/cRWgxBboC2fA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=64DRU3wTSykqmUOKNnDmouyLt/Ao2F1H8xBMh1AeVhM=;
 b=hFT/xRAQ6rJusYeiexW8nsWlPCtygaEONLxRuYa7gsWA+ZM30pFaRO8CSLRgreeGhISKA40WK9Lxrh9xNVhSr+313Qqi4u/b/xyqzKy4cKLKokxS+ar3q/Nvfrez6/QrqYf3zPe4xCs6J8xTn1pf6EfqlL7lx6GnJe22df1N0RZdsQdTCIDn6CKstmKkv+dVff7qEmB6Tjc0bd+K67fmx2V5IrhZgQtrfERMwJAepDa0fQhLbJX3CfaSsoy6JE8K5HL5h4lG5g2OjNU1DZxesOmjrRc+c5VGPGkZ+vS6LZaHI50Wes7eTIvjBZ7E9/geHuu3HrOUyIq5m7vMsuvhlw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=64DRU3wTSykqmUOKNnDmouyLt/Ao2F1H8xBMh1AeVhM=;
 b=mEZLOC8Zl43mFxkajabciUrLkLU2qUIOWY9bt++BG2sXShjorDsuIYi0tNzSx2cavx/9tIg5hPqbYcI4wfJTNomZ1vZpRZDaAqwy/2q40wGFb83hbXQ+540f4Xok0ZkZqnin4AgXk71viDlWhfZcCXO22CfvrtrddSBSlvOFUDg=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
	George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Doug Goldstein
	<cardoe@cardoe.com>
Subject: Re: [PATCH v1 0/8] Basic early_printk and smoke test implementation
Thread-Topic: [PATCH v1 0/8] Basic early_printk and smoke test implementation
Thread-Index: AQHZIdDY/WFtbXZWa0GsMaL1YLWWe66RaHaA
Date: Fri, 6 Jan 2023 13:51:59 +0000
Message-ID: <299d913c-8095-ad90-ea3b-d46ef74d4fdc@citrix.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
In-Reply-To: <cover.1673009740.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BLAPR03MB5427:EE_
x-ms-office365-filtering-correlation-id: c0626e55-cf4e-4baa-e38c-08daefed2ee5
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <3B46A8CDEAD330428C214F13784C08DC@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c0626e55-cf4e-4baa-e38c-08daefed2ee5
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Jan 2023 13:51:59.8827
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: MNIzf08frQaQSTI/ORu48GE5J8F0MxZWmG33bgc0xSvSsRG7fnx7zQL6tYYyF/CoRL3khzW1aFJJ7HVu5QHS00ElyqTxZFuUb1UbkpjRfSY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5427

On 06/01/2023 1:14 pm, Oleksii Kurochko wrote:
> The patch series introduces the following:
> - the minimal set of headers and changes inside them.
> - SBI (RISC-V Supervisor Binary Interface) things necessary for basic
>   early_printk implementation.
> - things needed to set up the stack.
> - early_printk() function to print only strings.
> - RISC-V smoke test which checks if "Hello from C env" message is
>   present in serial.tmp
>
> Oleksii Kurochko (8):
>   xen/riscv: introduce dummy asm/init.h
>   xen/riscv: introduce asm/types.h header file
>   xen/riscv: introduce stack stuff
>   xen/riscv: introduce sbi call to putchar to console
>   xen/include: include <asm/types.h> in <xen/early_printk.h>
>   xen/riscv: introduce early_printk basic stuff
>   xen/riscv: print hello message from C env
>   automation: add RISC-V smoke test

Thanks.  This highlights several areas where I think we want some rework
to the current common/arch split.

First, it really shouldn't be necessary for architectures to create
empty stub files.  There are two options - first, drop some empty files
in xen/include/arch-fallback/asm and put a suitable -I at the end of
CFLAGS.  The other option, which is nicer IMO, is to use __has_include(),
although that would require us finally deciding to bump the minimum GCC
version to 5 for x86 (which we need to do for many other reasons too).

Second, the asm/types.h situation is absurd.  That should be one common
header, because there's nothing arch specific about making those types
appear.

Third, the early printk infrastructure is partially broken (the common
header can't be cleanly included), and unsatisfactory in how it plumbs
into the default console steal function.  With a bit of cleanup, most of
it need not be duplicated per arch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 13:54:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 13:54:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472641.732935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnAw-0006MZ-IF; Fri, 06 Jan 2023 13:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472641.732935; Fri, 06 Jan 2023 13:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnAw-0006MS-F5; Fri, 06 Jan 2023 13:54:26 +0000
Received: by outflank-mailman (input) for mailman id 472641;
 Fri, 06 Jan 2023 13:54:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDnAw-0006MM-6o
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 13:54:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDnAt-0008Rs-7L; Fri, 06 Jan 2023 13:54:23 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.4.240]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDnAt-0001fh-1D; Fri, 06 Jan 2023 13:54:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=UUiwMOZIAOAV5xDV4JldE8+o4vqZBnXwxrWhXML++K4=; b=fK94T02u7bRcydLdw/R/jOy8+0
	Xxb7BdpiA9pYRq10fKuoGa48mddIRdT1VzXJRop9KsconihYakVbfRu6b6SnZRxsxxm+R+3elnhDk
	xC7umJ3LZu/4f7ZmuvdMSObiZMya+wBTrAda8uI0MD4Q0At9PQrU4EP3gCz0by0mqKbY=;
Message-ID: <4e577d78-ff2f-3258-99d6-712af3b6330d@xen.org>
Date: Fri, 6 Jan 2023 13:54:21 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 3/8] xen/riscv: introduce stack stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 06/01/2023 13:14, Oleksii Kurochko wrote:
> The patch introduces and sets up a stack in order to enter the C environment
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>   xen/arch/riscv/Makefile       | 1 +
>   xen/arch/riscv/riscv64/head.S | 8 +++++++-
>   xen/arch/riscv/setup.c        | 6 ++++++
>   3 files changed, 14 insertions(+), 1 deletion(-)
>   create mode 100644 xen/arch/riscv/setup.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 248f2cbb3e..5a67a3f493 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,4 +1,5 @@
>   obj-$(CONFIG_RISCV_64) += riscv64/
> +obj-y += setup.o
>   
>   $(TARGET): $(TARGET)-syms
>   	$(OBJCOPY) -O binary -S $< $@
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> index 990edb70a0..ddc7104701 100644
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,4 +1,10 @@
>           .section .text.header, "ax", %progbits
>   
>   ENTRY(start)
> -        j  start
> +        la      sp, cpu0_boot_stack
> +        li      t0, PAGE_SIZE

I would recommend adding a define STACK_SIZE, so you don't make assumptions 
about the size in the code and it is easier to update.
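
For illustration, the suggested change might look like this (the header
placement and the define's value are assumptions, not part of the patch):

```
/* in a RISC-V config header (hypothetical location): */
#define STACK_SIZE PAGE_SIZE

/* head.S would then avoid hard-coding the page size: */
        la      sp, cpu0_boot_stack
        li      t0, STACK_SIZE
        add     sp, sp, t0
```

Growing the stack later would then be a one-line change to the define.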

> +        add     sp, sp, t0
> +
> +_start_hang:
> +        wfi
> +        j  _start_hang
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> new file mode 100644
> index 0000000000..2c7dca1daa
> --- /dev/null
> +++ b/xen/arch/riscv/setup.c
> @@ -0,0 +1,6 @@
> +#include <xen/init.h>
> +#include <xen/compile.h>
> +
> +/* Xen stack for bringing up the first CPU. */
> +unsigned char __initdata cpu0_boot_stack[PAGE_SIZE]
> +    __aligned(PAGE_SIZE);

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:02:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:02:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472648.732946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnIV-0007tw-BJ; Fri, 06 Jan 2023 14:02:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472648.732946; Fri, 06 Jan 2023 14:02:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnIV-0007tp-7T; Fri, 06 Jan 2023 14:02:15 +0000
Received: by outflank-mailman (input) for mailman id 472648;
 Fri, 06 Jan 2023 14:02:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J7eG=5D=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDnIT-0007tj-Kg
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 14:02:13 +0000
Received: from sonic316-54.consmr.mail.gq1.yahoo.com
 (sonic316-54.consmr.mail.gq1.yahoo.com [98.137.69.30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b4e5bb0a-8dca-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 15:02:09 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic316.consmr.mail.gq1.yahoo.com with HTTP; Fri, 6 Jan 2023 14:02:07 +0000
Received: by hermes--production-ne1-7b69748c4d-drrwg (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 9c3339e25d8418037deac281a90e36bb; 
 Fri, 06 Jan 2023 14:02:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4e5bb0a-8dca-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673013727; bh=QMiqbDC+61V0Rk1soqHizYsaVxM0KkrnEK/KqG2Ugb8=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=dLAt+wG+AyS1eswmXsp5/JpM8WzoW7JftDUt75LKqmPdXJMNjrJ+X8tamc0AMvXCAJzg44JNG2wjaoiHH5WBuTlmtpmcGaUFUwCgafgcI/eTaEOJqPvX4pceIDcHuDujxnqQ6nsAb+YPjblGec1CU8A2mfeuFeKAPX33Hh7jtAsdnSK+d0QfIWLK1R/1I8nRNzh8Eo7smxOsBZdpfgwlH5GaEiJ+hkQQceNJ0RIY7mI/DkEF1kAUMI9jmm+Ra92xeiLNDuRdoNM4RQcdJOzw7Nrg2CJkr9nyaQEVDM4U0RcBfrRBiinyfO8po7oZ2/bBdFZSiJAOwdX+Ve7aVc2jFw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673013727; bh=fjn70Kv45ZJMgPFlTmq03reHzYK2FHDh7LA5QD2tKRq=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=eBbavZuE/LSY3h0hfEpwWkd5JzmX1KdIFJhJypq+p6NWXEyP8/40yIV3C8H6hbh0QMg2FT5qRZi7ChdpfUYnOBiCVzR4knG9SYG/52GPiUV4nyRDmPcdW5B8SOzdeXlhOnUEfBiYE6IFLXueWU2k571HSgx3Xqvpty//uPjS/0dqJLrlDGQpOcUJt+9lZ+vhYSSfZ8YUlwb0WtpptzvYlpjut9oXk+MQs34Yqeyg88oGISdfQ3hEYrlJADIR7w/zBIsGLM93g51fqVu+ImoSikfuYDrLfpiBAQ8j02ZSQmnaS2EBk4AslLPGj6AZ5bGwvrlO5TrDABOeZJqukpphHQ==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <b07581b1-3f36-bb3e-ab2b-7400bdffe0ef@aol.com>
Date: Fri, 6 Jan 2023 09:02:01 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Alex Williamson <alex.williamson@redhat.com>, Paul Durrant
 <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, qemu-devel@nongnu.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <20230102124605-mutt-send-email-mst@kernel.org>
 <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>
 <20230103081456.1d676b8e.alex.williamson@redhat.com>
 <cbfdcafc-383e-aea3-d04d-38388fab202f@aol.com>
 <ba4f8fd6-ae10-da60-7ef5-66782f29fdb9@aol.com>
 <Y7f9hi0SqYk6KQzW@perard.uk.xensource.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <Y7f9hi0SqYk6KQzW@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3673

On 1/6/23 5:52 AM, Anthony PERARD wrote:
> On Tue, Jan 03, 2023 at 05:58:01PM -0500, Chuck Zmudzinski wrote:
>> Hello Anthony and Paul,
> 
> Hi Chuck,
> 
>> I am requesting your feedback on Alex Williamson's suggestion that this
>> problem with assigning the correct slot address to the igd on xen should
>> be fixed in libxl instead of in qemu.
>> 
>> It seems to me that the xen folks and the kvm folks have two different
>> philosophies regarding how a tool stack should be designed. kvm/libvirt
>> provides much greater flexibility in configuring the guest which puts
>> the burden on the administrator to set all the options correctly for
>> a given feature set, while xen/xenlight does not provide so much
>> flexibility and tries to automatically configure the guest based on
>> a high-level feature option such as the igd-passthrough=on option that
>> is available for xen guests using qemu but not for kvm guests using
>> qemu.
>> 
>> What do you think? Should libxl be patched instead of fixing the problem
>> with this patch to qemu, which is contrary to Alex's suggestion?
> 
> I do think that libxl should be able to deal with having to put a
> graphics card on slot 2. QEMU already provides every API necessary for a
> toolstack to be able to start a Xen guest with all the PCI cards in the
> right slots. But it would just be a bit more complicated to implement in
> libxl.
> 
> At the moment, libxl makes use of the QEMU machine 'xenfv'; libxl should
> instead start to use the 'pc' machine and add the "xen-platform" PCI
> device. (libxl already uses 'pc' when the "xen-platform" PCI card isn't
> needed.) It should probably also add the other PCI devices at specific
> slots, to be able to add the passed-through graphics card at the right slot.
> 
> Next is to deal with migration when using the 'pc' machine, as it's just
> an alias to a specific version of the machine. We need to use the same
> machine on the receiving end, that is start with e.g. "pc-i440fx-7.1" if
> 'pc' was an alias for it at guest creation.
> 
> 
> I wonder if we can already avoid patching the 'xenfv' machine with some
> xl config:
>     # avoid 'xenfv' machine and use 'pc' instead
>     xen_platform_pci=0
>     # add xen-platform pci device back
>     device_model_args_hvm = [
>         "-device", "xen-platform,addr=3",
>     ]
> But there's probably another device which is going to be auto-assigned
> to slot 2.
> 
> 
> If you feel like dealing with the technical debt in libxl, that is to
> stop using 'xenfv' and use 'pc' instead, then go for it, I can help with
> that. Otherwise, if the patch to QEMU only changes the behavior of the
> 'xenfv' machine then I think I would be ok with it.
> 
> I'll do a review of that QEMU patch in another email.
> 
> Cheers,
> 

Hello Anthony,

Thanks for responding!

The first part of my v6 of the patch only affects the xenfv
machine. Guests created with the pc machine type will not call
the new function that reserves slot 2 for the igd, because that
function is only called when the machine type is xenfv (or xenfv-4.2).
The new functions I added to configure the TYPE_XEN_PT_DEVICE
when igd-passthru=on will still be called to check whether the device
is an Intel igd and to clear the slot if it is, but that has no
effect on the behavior in this case because the slot was never
reserved. Still, it adds some unnecessary processing for
machines other than xenfv, which is undesirable.

So I can add a check for the machine type in a v7 of the patch
that skips the new functions that clear the reserved slot when
slot 2 was never reserved and therefore does not need to be cleared.

Would that be OK?

Kind regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:04:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:04:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472655.732957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnKE-0008TA-Lx; Fri, 06 Jan 2023 14:04:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472655.732957; Fri, 06 Jan 2023 14:04:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnKE-0008T3-Ix; Fri, 06 Jan 2023 14:04:02 +0000
Received: by outflank-mailman (input) for mailman id 472655;
 Fri, 06 Jan 2023 14:04:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g6IK=5D=citrix.com=prvs=36316be06=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pDnKD-0008St-1X
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 14:04:01 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f67f4809-8dca-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 15:03:59 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f67f4809-8dca-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673013839;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=DJMl9lyCpvXVrpBfjxQaBv2r/oW5Se5HYnobbelCQy4=;
  b=epOnTaofThDJkCAZfopEpRQFBFaGItTRZHFdA7DlrkFAL0MhBQeMAM8p
   QDgYw8aY2IE1z56cnS3cR5ww2u6nyFlzmLQ5JNvn4eoulFfjFH2274UVU
   kslua9c+73eQ3zCuf+M7yB1yownobN/904frTtebFrBCctXUbPbhX4d5Z
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91894645
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,305,1665460800"; 
   d="scan'208";a="91894645"
Date: Fri, 6 Jan 2023 14:03:52 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
CC: <qemu-devel@nongnu.org>, Stefano Stabellini <sstabellini@kernel.org>, Paul
 Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, Richard
 Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, Marcel
 Apfelbaum <marcel.apfelbaum@gmail.com>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>

On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workaround is not good: Configure Xen HVM guests to use
> the old and no longer maintained Qemu traditional device model available
> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values which enables the checks for
> the Intel IGD to succeed. The verification that the host device is an
> Intel IGD to be passed through is done by checking the domain, bus, slot,
> and function values as well as by checking that gfx_passthru is enabled,
> the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>


This patch looks good enough. It only changes the "xenfv" machine, so it
doesn't prevent a proper fix from being done in the toolstack, libxl.

The change in xen_pci_passthrough_class_init() to try to run some code
before pci_qdev_realize() could potentially break in the future due to
being uncommon, but hopefully that will be ok.

So if no work to fix libxl appears soon, I'm ok with this patch:

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:11:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:11:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472662.732968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnR4-0001V1-Dd; Fri, 06 Jan 2023 14:11:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472662.732968; Fri, 06 Jan 2023 14:11:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnR4-0001Uu-Aa; Fri, 06 Jan 2023 14:11:06 +0000
Received: by outflank-mailman (input) for mailman id 472662;
 Fri, 06 Jan 2023 14:11:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J7eG=5D=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDnR2-0001Um-TU
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 14:11:04 +0000
Received: from sonic314-19.consmr.mail.gq1.yahoo.com
 (sonic314-19.consmr.mail.gq1.yahoo.com [98.137.69.82])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f1e92323-8dcb-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 15:11:01 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Fri, 6 Jan 2023 14:10:59 +0000
Received: by hermes--production-ne1-7b69748c4d-drrwg (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 75201934fc037ba28ec21f3880c24bd7; 
 Fri, 06 Jan 2023 14:10:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1e92323-8dcb-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673014259; bh=xZI1yY4P5g0p968nUuIlQk/jsifkHbagJgWbktARJJs=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=lR0BeB9Tn7zcvH0HFlBbQLMP47nxkx3bpuNUBpb/CRKm6KYDq2LmrmxVmuisubjdkLDmRiKQM3fVS3N0+RYI0sCd1KWMHBjhdpNk4Ibd8x6ixs3l6Z9QsVkqQKPymjRNL9vIpVySXsAHGJ/xtBK6eR8i4yQaFIsed4RfZeH5mGs5F7HivoowFs6QVF+pegV2Bqu1xOI7ZbYHfAxqVxRuyNpTc8PMzbWnAkSdNbqjQpBbTC7c6YPI7WT1R8GP4oZmq6leFlRM80ZiWd730atom8/D5kYNqvKAGuMr8ltB/zQg5pi+x5lI5unN65Pc4IMG5qk7Mk7lOkM+YlfAGKjA5A==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673014259; bh=4l8UCGyy/Qck99GFchfUmSKlEoCFcyg2e9lXbploMDp=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=bVAgo5krm8BjB3DF7tEI9BWpA2ME3BtDZsxWOI5zHEFGxahdosrYxX/0tJbJdaTzoe8iwoHnjH2fxDW5mQg4VotplGgqbr5W5U3dfGglVCI7RD6nyhBT9wgoNRLkM17nD1GZYAD6qYpcq4vF8X7zA21wNmTnq3nnHu9VM80D0rHp3eFmofpnPwZjYNMKjLCrvmcm9vUd9UdDI8sc2TM4nl1cSmf9BElmf07JDJNx+RNUe0uoHYfnHrJ9FPmWpKF4A5FncU0lvR0+wbe1SSpiVBa3HK1UWbQnXJVqW0UPVls/OpwFn7xR25wgbn7JDvxONf5f10ZcZK0UtR5Y+PrKNQ==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <c39b9502-0020-ce54-abd8-b362430ba086@aol.com>
Date: Fri, 6 Jan 2023 09:10:55 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3563

On 1/6/23 9:03 AM, Anthony PERARD wrote:
> On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>> as noted in docs/igd-assign.txt in the Qemu source code.
>> 
>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>> a different slot. This problem often prevents the guest from booting.
>> 
>> The only available workaround is not good: Configure Xen HVM guests to use
>> the old and no longer maintained Qemu traditional device model available
>> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>> 
>> To implement this feature in the Qemu upstream device model for Xen HVM
>> guests, introduce the following new functions, types, and macros:
>> 
>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>> * typedef XenPTQdevRealize function pointer
>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>> 
>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>> the xl toolstack with the gfx_passthru option enabled, which sets the
>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>> 
>> The new xen_igd_reserve_slot function also needs to be implemented in
>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>> in which case it does nothing.
>> 
>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>> 
>> Move the call to xen_host_pci_device_get, and the associated error
>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>> initialize the device class and vendor values, which enables the checks
>> for the Intel IGD to succeed. The verification that the host device is
>> an Intel IGD to be passed through is done by checking the domain, bus,
>> slot, and function values, as well as by checking that gfx_passthru is
>> enabled, the device class is VGA, and the device vendor is Intel.
>> 
>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
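For readers without the patch in front of them, the mechanism described above can be sketched roughly as follows. This is a simplified, self-contained stand-in, not the actual QEMU code: the PCIBus struct is reduced to the one field the patch uses, and xen_igd_gfx_pt_enabled is a hypothetical flag standing in for the igd-passthru=on machine option.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for QEMU's PCIBus; only the field the patch uses. */
typedef struct PCIBus {
    uint32_t slot_reserved_mask;
} PCIBus;

/* Bit for PCI slot 2, the slot Intel requires the IGD to occupy. */
#define XEN_PCI_IGD_SLOT_MASK (1u << 2)

/* Hypothetical flag standing in for the igd-passthru=on machine option. */
static bool xen_igd_gfx_pt_enabled = true;

/* Reserve slot 2 when the PCI bus is created, so emulated devices are
 * auto-assigned to other slots. */
static void xen_igd_reserve_slot(PCIBus *bus)
{
    if (xen_igd_gfx_pt_enabled) {
        bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
    }
}

/* Clear the reservation when the passed-through IGD itself is realized,
 * so that it, and only it, can be placed in slot 2. */
static void xen_igd_clear_slot(PCIBus *bus)
{
    bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
}
```

The real patch hooks the clearing step into the device's realize path; the sketch only shows the mask arithmetic on slot_reserved_mask.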
> 
> 
> This patch looks good enough. It only changes the "xenfv" machine, so it
> doesn't prevent a proper fix from being done in the toolstack, libxl.
> 
> The change in xen_pci_passthrough_class_init() to try to run some code
> before pci_qdev_realize() could potentially break in the future due to
> being uncommon, but hopefully that will be OK.
> 
> So if no work to fix libxl appears soon, I'm OK with this patch:
> 
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Thanks,
> 

Well, our messages almost collided! I just proposed a v7 that adds
a check to prevent the extra processing in cases where the machine is
not xenfv and the slot does not need to be cleared because it was
never reserved. The proposed v7 would not change the behavior of the
patch at all, but it would avoid some unnecessary processing. Do you
want me to submit that v7?
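A rough sketch of what such a v7 guard might look like; the names below (machine_is_xenfv, igd_passthru_enabled, parent_realize) are illustrative stand-ins for the QEMU machinery, not the actual symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the QEMU state involved; names are illustrative only. */
static bool machine_is_xenfv;      /* true when running -machine xenfv */
static bool igd_passthru_enabled;  /* true when igd-passthru=on */
static int  slot_clear_calls;      /* counts the extra processing */

static void parent_realize(void)
{
    /* The parent class's realize (pci_qdev_realize) would run here. */
}

/* v7 idea: skip the slot-clearing work entirely unless it can matter,
 * i.e. unless slot 2 could have been reserved in the first place. */
static void xen_igd_clear_slot(void)
{
    if (!machine_is_xenfv || !igd_passthru_enabled) {
        parent_realize();
        return;             /* slot was never reserved: nothing to clear */
    }
    slot_clear_calls++;     /* ...clear slot_reserved_mask here, then... */
    parent_realize();
}
```

With the guard, the non-xenfv path degenerates to a plain call into the parent realize, which is the "no behavior change, less processing" point made above.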

Chuck


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:12:42 2023
Message-ID: <204d2288-df9a-0d53-2c42-a52ad0c0c0f7@suse.com>
Date: Fri, 6 Jan 2023 15:12:32 +0100
Subject: Re: [PATCH v1 2/8] xen/riscv: introduce asm/types.h header file
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <ce66a86285e966700acb13521851aca5b764a56e.1673009740.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ce66a86285e966700acb13521851aca5b764a56e.1673009740.git.oleksii.kurochko@gmail.com>

On 06.01.2023 14:14, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  xen/arch/riscv/include/asm/types.h | 73 ++++++++++++++++++++++++++++++
>  1 file changed, 73 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/types.h
> 
> diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
> new file mode 100644
> index 0000000000..48f27f97ba
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/types.h
> @@ -0,0 +1,73 @@
> +#ifndef __RISCV_TYPES_H__
> +#define __RISCV_TYPES_H__
> +
> +#ifndef __ASSEMBLY__
> +
> +typedef __signed__ char __s8;
> +typedef unsigned char __u8;
> +
> +typedef __signed__ short __s16;
> +typedef unsigned short __u16;
> +
> +typedef __signed__ int __s32;
> +typedef unsigned int __u32;
> +
> +#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
> +#if defined(CONFIG_RISCV_32)
> +typedef __signed__ long long __s64;
> +typedef unsigned long long __u64;
> +#elif defined (CONFIG_RISCV_64)
> +typedef __signed__ long __s64;
> +typedef unsigned long __u64;
> +#endif
> +#endif

Of these, only the ones actually needed should be introduced. We're
in the process of phasing out especially the above, but also ...

> +typedef signed char s8;
> +typedef unsigned char u8;
> +
> +typedef signed short s16;
> +typedef unsigned short u16;
> +
> +typedef signed int s32;
> +typedef unsigned int u32;
> +
> +#if defined(CONFIG_RISCV_32)
> +typedef signed long long s64;
> +typedef unsigned long long u64;

... all of these.

> +typedef u32 vaddr_t;

(New) consumers of such types should therefore use {u,}int<N>_t instead.
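To illustrate the point: instead of the legacy u32/u64 aliases, new consumers would use the C99 fixed-width names from stdint.h (xen/types.h in the hypervisor). The helper below is a hypothetical example of the preferred style, not code from the patch, and assumes RV64 where a virtual address is 64 bits wide:

```c
#include <assert.h>
#include <stdint.h>

/* Preferred style: C99 fixed-width types rather than u32/u64 aliases.
 * On RV64, virtual addresses are 64 bits wide. */
typedef uint64_t vaddr_t;

#define PAGE_SIZE 4096u

/* Hypothetical helper, shown only to demonstrate the naming style:
 * round a virtual address down to its page boundary. */
static inline vaddr_t page_align_down(vaddr_t va)
{
    return va & ~(vaddr_t)(PAGE_SIZE - 1);
}
```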

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:15:51 2023
Message-ID: <9c93ee8c-1c7f-d1aa-c0fd-72518e37a74d@suse.com>
Date: Fri, 6 Jan 2023 15:15:46 +0100
Subject: Re: [PATCH v1 3/8] xen/riscv: introduce stack stuff
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>

On 06.01.2023 14:14, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,4 +1,10 @@
>          .section .text.header, "ax", %progbits
>  
>  ENTRY(start)
> -        j  start
> +        la      sp, cpu0_boot_stack
> +        li      t0, PAGE_SIZE
> +        add     sp, sp, t0
> +
> +_start_hang:
> +        wfi
> +        j  _start_hang

Nit: I think it would be nice if, in a new port, assembly code used
consistent padding between the insn mnemonic and its operand(s).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:31:44 2023
Message-ID: <882652f8-ddda-a7d8-85b9-da46568036d3@aol.com>
Date: Fri, 6 Jan 2023 09:31:24 -0500
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
From: Chuck Zmudzinski <brchuckz@aol.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
 <c39b9502-0020-ce54-abd8-b362430ba086@aol.com>
In-Reply-To: <c39b9502-0020-ce54-abd8-b362430ba086@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3579

On 1/6/23 9:10 AM, Chuck Zmudzinski wrote:
> On 1/6/23 9:03 AM, Anthony PERARD wrote:
>> On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
>>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>>> as noted in docs/igd-assign.txt in the Qemu source code.
>>> 
>>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>>> a different slot. This problem often prevents the guest from booting.
>>> 
>>> The only available workaround is not good: configure Xen HVM guests to use
>>> the old and no longer maintained Qemu traditional device model available
>>> from xenbits.xen.org, which does reserve slot 2 for the Intel IGD.
>>> 
>>> To implement this feature in the Qemu upstream device model for Xen HVM
>>> guests, introduce the following new functions, types, and macros:
>>> 
>>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>>> * typedef XenPTQdevRealize function pointer
>>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>>> 
>>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>>> the xl toolstack with the gfx_passthru option enabled, which sets the
>>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>>> 
>>> The new xen_igd_reserve_slot function also needs to be implemented in
>>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>>> in which case it does nothing.
>>> 
>>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>>> 
>>> Move the call to xen_host_pci_device_get, and the associated error
>>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>>> initialize the device class and vendor values, which enables the checks for
>>> the Intel IGD to succeed. The verification that the host device is an
>>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>>> and function values, as well as by checking that gfx_passthru is enabled,
>>> the device class is VGA, and the device vendor is Intel.
>>> 
>>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>> 
>> 
>> This patch looks good enough. It only changes the "xenfv" machine so it
>> doesn't prevent a proper fix from being done in the toolstack libxl.
>> 
>> The change in xen_pci_passthrough_class_init() to try to run some code
>> before pci_qdev_realize() could potentially break in the future due to
>> being uncommon, but hopefully that will be ok.
>> 
>> So if no work to fix libxl appears soon, I'm ok with this patch:

Well, I can tell that you and others who work on qemu are more comfortable
fixing this in libxl, so hold off for a week or so. I should have
a patch to fix this in libxl written and tested by then. If for
some reason that does not work out, then we can fix it in qemu.

Cheers,

Chuck

>> 
>> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
>> 
>> Thanks,
>> 



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:43:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:43:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472692.733011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnvr-0006hx-S7; Fri, 06 Jan 2023 14:42:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472692.733011; Fri, 06 Jan 2023 14:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDnvr-0006hq-PU; Fri, 06 Jan 2023 14:42:55 +0000
Received: by outflank-mailman (input) for mailman id 472692;
 Fri, 06 Jan 2023 14:42:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g6IK=5D=citrix.com=prvs=36316be06=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pDnvq-0006hk-7h
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 14:42:54 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6474b747-8dd0-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 15:42:51 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6474b747-8dd0-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673016171;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=dIxp4kB3Fq0R4kfKEpGGR4PCvfMFoXf/LwvzR19LURA=;
  b=Sd9OQZXf6dEblJY7G6FfZwWwp/VbBLeZF1yZLzDuGFEP0cN5SX5pO1Xm
   Arwq1zTrZPQGk1FvB90lgrJ23rlfS6+c66F/321C6U82EvJpAHAs3MTEb
   k7WA2ILrnSGeYtl1LvexmmdC6eEZzlTtGnf8eqoxnxFlv8BN1quDxSlYX
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 90966612
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:VJswE6jTu9xi3QxlCHxLmkylX1610BAKZh0ujC45NGQN5FlHY01je
 htvXmCAb66KNmChf41/aYzk8E0G7Zbcn9VqHApvrX89Fykb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmUpH1QMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWs0N8klgZmP6sT5QeAzyN94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQaOmsffgiMo9uo44K3dMN8t58/KvPCadZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJagx/5JEWxY9EglH2dSFYr1SE47I6+WHJwCR60aT3McqTcduPLSlQthfA9
 zyYoT2jav0cHMSSxAHfq3utv8beu3nnVdMLF5yz+OE/1TV/wURMUUZLBDNXu8KRh0KjUshTL
 GQU8yAtqrMuskqmUrHVRRyzoHeeslgcVtxcHvch7welzqvS6hyeQG8eQVZpbcc6nNU7STwjy
 hmCmNaBLSxitviZRGyQ8p+QrCiuIm4FIGkafygGQAAZpd75r+kbixvVRdtnVqetgNDxEzjtx
 hiFqSE/g/MYistj/7y2+E2Cjz+yq5zhSAkz6QPKGGW/4WtRbpSuZ5Gj6krz5PFEao2eSzGpp
 2MYksKT6OQPC5CllyGXRugJWraz6J6tKDfbh0xuGZgJ7Tmh+3e/O4tX5VlWPE50Nu4UdDmvZ
 1Xc0T69/7cKYiHsN/UuJdvsVYJ6lsAMCOgJSNjWfIFccoB+UDaZ3xFiW2SA7jvxlnIFxPRX1
 YigTe6gCnMTCKJCxTWwRvsA3bJD+h3S1V8/VrigkU35jOP2iGq9DO5cbQDQNrxRALas+l29z
 jpJCyedJ/yzusXaazKfz4McJEtiwZMTVcGv8Jw/mgJuz2Nb9IAd5x35m+hJl29Nxf49egL0E
 paVBCdlJKLX3yGvFOlzQikLhEnTdZh+t2knGicnIEyl3XMuCa72svhFJsdsIOl2qL0ypRKRc
 xXjU5/QahioYm2ekwnxkLGn9NAyHPhVrVjm09WZjMgXIMc7Gl2hFi7MdQrz7igeZheKWT8Fi
 +T4jGvzGMNTLzmO+e6KMJpDOXvt5ylC8A+zNmOUSuRulLLEqtAycXKp1qVnf6nh63zrn1On6
 upfOj9AzcGlnmP/2IChaXysx2txL9ZDIw==
IronPort-HdrOrdr: A9a23:/alh+6ClgOS3hhjlHemd55DYdb4zR+YMi2TDgXoBLCC9E/bo7P
 xG+c5wuCMc5wxhP03I9erwQZVoIkm8yXcW2/h0AV7KZmCP01dASrsSjrcK7AeQeREWndQts5
 uIHZIOcOEYzmIUsS852mWF+hobruVvOZrJudvj
X-IronPort-AV: E=Sophos;i="5.96,305,1665460800"; 
   d="scan'208";a="90966612"
Date: Fri, 6 Jan 2023 14:42:35 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
CC: <qemu-devel@nongnu.org>, Stefano Stabellini <sstabellini@kernel.org>, Paul
 Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, Richard
 Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, Marcel
 Apfelbaum <marcel.apfelbaum@gmail.com>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <Y7gzW/hkYc6xPqEC@perard.uk.xensource.com>
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
 <c39b9502-0020-ce54-abd8-b362430ba086@aol.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <c39b9502-0020-ce54-abd8-b362430ba086@aol.com>

On Fri, Jan 06, 2023 at 09:10:55AM -0500, Chuck Zmudzinski wrote:
> Well, our messages almost collided! I just proposed a v7 that adds
> a check to prevent the extra processing for cases when the machine is
> not xenfv and the slot does not need to be cleared because it was
> never reserved. The proposed v7 would not change the behavior of the
> patch at all, but it would avoid some unnecessary processing. Do you
> want me to submit that v7?

Well, preventing a simple assignment and a message from getting logged
isn't going to get us far. On the other hand, the message "using slot 2"
when we don't even know if slot 2 is actually going to be used could be
confusing, so I guess preventing that message from being logged could be
useful indeed.

So your proposed v7 would be fine.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:55:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:55:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472699.733023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDo7h-0008FA-WF; Fri, 06 Jan 2023 14:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472699.733023; Fri, 06 Jan 2023 14:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDo7h-0008F3-Rx; Fri, 06 Jan 2023 14:55:09 +0000
Received: by outflank-mailman (input) for mailman id 472699;
 Fri, 06 Jan 2023 14:55:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fu4g=5D=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pDo7f-0008Ex-P6
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 14:55:07 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2058.outbound.protection.outlook.com [40.107.8.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1ab1b88d-8dd2-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 15:55:05 +0100 (CET)
Received: from DB6PR1001CA0006.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:b7::16)
 by AM0PR08MB5331.eurprd08.prod.outlook.com (2603:10a6:208:187::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.14; Fri, 6 Jan
 2023 14:55:00 +0000
Received: from DBAEUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:b7:cafe::72) by DB6PR1001CA0006.outlook.office365.com
 (2603:10a6:4:b7::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.15 via Frontend
 Transport; Fri, 6 Jan 2023 14:55:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT046.mail.protection.outlook.com (100.127.142.67) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5944.17 via Frontend Transport; Fri, 6 Jan 2023 14:55:00 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Fri, 06 Jan 2023 14:55:00 +0000
Received: from a14c970605c6.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C6CA9B0E-D381-4961-BF1E-972E26964A33.1; 
 Fri, 06 Jan 2023 14:54:53 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a14c970605c6.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Jan 2023 14:54:53 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB8639.eurprd08.prod.outlook.com (2603:10a6:10:401::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 14:54:51 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%4]) with mapi id 15.20.5986.007; Fri, 6 Jan 2023
 14:54:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ab1b88d-8dd2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iefGtseeR8T4bINEtwsgEEg3NVUu/gPqbj6/DDnjGLo=;
 b=WYxe4ReimclZiI1EH30OaHnHnsxWQlvxToFxLNK5vpWzlS1IzA1yzv+UOa1kqJWrmzfp0VoZvKZv4TucwbjMC6Ii2xVc/pAePhXpYScwJuJ3mBQyxrMJlTqBWFWnzbK2mbW9Rb5fzO98wpnQQ/pXNEHQ85yjRkMDwbBywWe+w2Q=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nK8s3jq95aR0GBfj8VeMG5XE+IEAdi5+NIVp+ttXPT6HrXPERuhc2KjI9nxqKqJ2vryfgbZp2a7EJTtujl6j5OamOV5/GTTOWDcDTpVpEPL4H1T7MKPCTTVnNRHTsVdVzk9IegXXozZQDYnHT+yVDg1r5uo8geKe6xU0EksvISlpXJJX572K77HsSvJGvXVflJP5Y3zCf1qJN6BO+vi9cBvPWmHXKCJgVM2yKFOkn6DZuayfsnZyaloZe4wgPm9s1SAznUodIOC1VcuWNbH+41NX6JYsbfTnvK4CGfQnGrjlgBDxmxRfB49erP5X/l8AgrupLRnOMvijFoPj06eNdQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iefGtseeR8T4bINEtwsgEEg3NVUu/gPqbj6/DDnjGLo=;
 b=G+eoA2F1dkPxbXqb/+qctC7wW/KWGThGi2UWUXIg9z7mxb56qAw1uhV05N+jv2lnL/kNLGKjKm7FUbk63rZztFuAOfzYRwAx4DcaQFm8a2ZyyMOHYs9fLQvzNx3Q7I5VHnL8bDxI80Z01mnyi5B+wd9WMhVdKmjG5Yrlnty9QtrV9cUfjWs04YDihuu4VqF7M5HWq0cfFz+M8T6Yc4J+iDZaVs5t/djCDR4JHoHubYlBZz2s6c92kac3v3uveG/j080O0R6/RHiuuqDYs8cYcjgDRA0BuMdpYW/aGCw/HumF4Ra7V8gcNseCpntURkBgtK49mnWbtBYhARl6QAd6gA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iefGtseeR8T4bINEtwsgEEg3NVUu/gPqbj6/DDnjGLo=;
 b=WYxe4ReimclZiI1EH30OaHnHnsxWQlvxToFxLNK5vpWzlS1IzA1yzv+UOa1kqJWrmzfp0VoZvKZv4TucwbjMC6Ii2xVc/pAePhXpYScwJuJ3mBQyxrMJlTqBWFWnzbK2mbW9Rb5fzO98wpnQQ/pXNEHQ85yjRkMDwbBywWe+w2Q=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 19/22] xen/arm32: mm: Rename 'first' to 'root' in
 init_secondary_pagetables()
Thread-Topic: [PATCH 19/22] xen/arm32: mm: Rename 'first' to 'root' in
 init_secondary_pagetables()
Thread-Index: AQHZEUh3UTDXWpRZjEa7nLtXRiROF66RkhiA
Date: Fri, 6 Jan 2023 14:54:51 +0000
Message-ID:
 <AS8PR08MB7991C55F30024E017DB2912D92FB9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-20-julien@xen.org>
In-Reply-To: <20221216114853.8227-20-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 166E9CEA8F8BE44593B701E6599657EF.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB8639:EE_|DBAEUR03FT046:EE_|AM0PR08MB5331:EE_
X-MS-Office365-Filtering-Correlation-Id: 2cbd839d-541e-4173-2576-08daeff5fc12
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ms5UL069jDPYJWnlaoI1S8c/OvH6A2RVp3IAsyEXhbB56DSBllGdOjmqfH+7TzJwOd6kfNPoQwPQ7luC/V0Mj47EcwfMpHHEF62ig2IyeCd9SVCTi/B5VxdKFY1GRIbA/n+1zJquDCMCT3F4/HrzE1azvvg9dciZSPgtL7AmwCQgWqo5hxZUSZSAxnX8DgNXV07CyV7MRvOPORGG7wOHL6TEPfaOUe/Yol+AExt5MlNss17zTvGdFVvJCYdweMV3Fj2E3uSAh531XWsX0G2ckIbVE1OZ2wazQjBrXB1skDFJkdtArmtXRvRXFfHXq+wlq+wM+UwI3dNtvRuFkWKt+3G83fudhG7Xt6rJGRhlqhy4UGUEEInXxSi0WxR7XnDR2XM7nnCODYv8RE/4+8Gy3Xl9I1Xbxmcan5Ur+Rin91BPLZqOifY6ZxXJ/DWf0Esz7FbxTk9RLhTt5GFeQ33zApk3bPs+FEeJv+AIv1wuH9gP/w8apRROE3zv6kQPW1Kr+BebrUWIsZoeCF5DGYOL8RnUu7E4Bq7aGnmseBB200xUVbADjAPtA/I8Ek5WsFKEt21XSL9IXw27K7uNnUdkssfFDbxWvJ6wT5CQfXru8CejcSAEYNcJJtGrRtr4nGfXJp8a62LZh43DQt0/xQDaSSM21M3TzaBmDbS34+rCE0nap78LbaBVM2QyFEVJzVuRW9IelogtW77mnQfFXj0RuA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(136003)(346002)(376002)(366004)(39860400002)(396003)(451199015)(64756008)(66556008)(4744005)(66476007)(54906003)(66446008)(8676002)(86362001)(66946007)(478600001)(52536014)(76116006)(41300700001)(4326008)(8936002)(5660300002)(33656002)(2906002)(316002)(83380400001)(186003)(9686003)(26005)(7696005)(6506007)(55016003)(71200400001)(38070700005)(110136005)(38100700002)(122000001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8639
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bd643ca2-f6f7-4c27-44f1-08daeff5f6f5
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SumpMe6cDP0Ji0gMfc+hDKBovLswnVLXyJkAK738wCOA4tF1l/fABb+kcJV6vz8LnnfgpN22sDiLHeJllLlHjRV6Ol3R1TQ8F9GcLy9JWPbf7/n45ZNKe7ADHlbZZ/+edYy7Ktfq49ibzEvIkCDJHIJEE/uuPhDi57Jh2PIXeSH3Gc7mirEk7YzGE084UAjR68G+Srt/qmwXlkvVUlAZhwm/8y6eE/a0L3zMzxF5rvHgySuOqEGqZi9KENcttTLJlxmwDR9b38UMW+tiQ4RLG4GsKyM1gbQrQYDXA6PDe2dpXgBTHzPDQVRQWFoOc8qIB/hD0CHLIUGfQ6FEpq70pu4lMeLob/UfICZcwIKlRSaZb7i0KCxg5v0xphTo4IzAer5r62IFZZl3UwkcdQkjyv8eTcfpTHJlN/mwcVpYFJRNVJOuBUqYTUThvjpeR0UJNldWMgtTYKaPUh4ijkYeEIeANVqrdczTTIUpqtZZ7KTagthvjeM01xKj22MeKL1Pd0aKEbO5+mr1wM69OPFs//KaYkyvYduC1UPA32oWCZVaQror0f/Eh5DXUnQ0HAfkRn0JnUk7X9Smxg5Ll3JFtD3zaO6jnqHtlaU7A8NxxwG1Au3+or2PGyCrLvr+ZRnDMlYjo8LO9t56XPFUfoafU6hyWgU/787CMSf0OaF3bh+siHX0N8SZIskX2TJNa0nwqTCHW8OzjnLVw66qsBqC7g==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(136003)(39860400002)(396003)(451199015)(36840700001)(46966006)(40470700004)(336012)(356005)(8676002)(316002)(70206006)(82740400003)(4326008)(70586007)(54906003)(110136005)(33656002)(81166007)(40460700003)(36860700001)(4744005)(55016003)(40480700001)(2906002)(41300700001)(47076005)(52536014)(8936002)(5660300002)(83380400001)(186003)(6506007)(7696005)(26005)(86362001)(82310400005)(107886003)(478600001)(9686003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 14:55:00.1420
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2cbd839d-541e-4173-2576-08daeff5fc12
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5331

Hi Julien,

> -----Original Message-----
> Subject: [PATCH 19/22] xen/arm32: mm: Rename 'first' to 'root' in
> init_secondary_pagetables()
>
> From: Julien Grall <jgrall@amazon.com>
>=20
> The arm32 version of init_secondary_pagetables() will soon be re-used
> for arm64 as well where the root table start at level 1.

Nit: s/start at/starts at/

>
> So rename 'first' to 'root'.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:55:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:55:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472701.733045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDo7n-0000KY-Dg; Fri, 06 Jan 2023 14:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472701.733045; Fri, 06 Jan 2023 14:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDo7n-0000KR-Ay; Fri, 06 Jan 2023 14:55:15 +0000
Received: by outflank-mailman (input) for mailman id 472701;
 Fri, 06 Jan 2023 14:55:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fu4g=5D=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pDo7m-0008RI-AH
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 14:55:14 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2053.outbound.protection.outlook.com [40.107.247.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1efb52f5-8dd2-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 15:55:12 +0100 (CET)
Received: from DB9PR02CA0003.eurprd02.prod.outlook.com (2603:10a6:10:1d9::8)
 by PAWPR08MB10090.eurprd08.prod.outlook.com (2603:10a6:102:367::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.9; Fri, 6 Jan
 2023 14:55:10 +0000
Received: from DBAEUR03FT065.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1d9:cafe::8a) by DB9PR02CA0003.outlook.office365.com
 (2603:10a6:10:1d9::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.15 via Frontend
 Transport; Fri, 6 Jan 2023 14:55:10 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT065.mail.protection.outlook.com (100.127.142.147) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5944.13 via Frontend Transport; Fri, 6 Jan 2023 14:55:09 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Fri, 06 Jan 2023 14:55:09 +0000
Received: from 24d18afe814d.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7ADE9A14-D1A1-4072-95B3-5E69754B0C35.1; 
 Fri, 06 Jan 2023 14:54:59 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 24d18afe814d.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Jan 2023 14:54:59 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB8639.eurprd08.prod.outlook.com (2603:10a6:10:401::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 14:54:56 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%4]) with mapi id 15.20.5986.007; Fri, 6 Jan 2023
 14:54:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1efb52f5-8dd2-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mHgBBafFbsjAzJLPSAzRUNa0KLBtz3G/c8cHAzWDPrc=;
 b=qJQRTP7k13vjSQC/AluPAxNQ7JnPBFhal6DfVpLbDVJcnJBwQmsPvoBnjnLYRpORFrjkRBZJ6IFIaz7klFZiJmurfF6ZVtuhQOHMBneXQZJNSUqS6NjcojEZmjE/hGWiVtuWWv6jsjvf2LvfJpfgwIVsr26cvRb4znFq/E1r8js=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AKsN0lqP4/0psYiA8OjdxuQR1XPZckPpQ8uQQst73D0cdGqVeEU3E7jONU85IXszsvCtyYCxKbupQM9FNgXaZGrCeBxsPZuVmeXyhrpfxRIOiPqXqhip/x522/bsKHJZ/0U+zQ0fyBv8WLI0MjdD7nXMgo4x3VRx2YqHuD+0pfDechbkU78nvViW6jTsf+CD+0I1qS3va45mJpuYSuZJncEOL23orR1VaJZ18yq5QaXJxfZbmsuaUrgRQ/yydCchtkG/xcawzsDRG87V7OArN+UyU+MlFWwCFLzvOGHO3T5RtRQRmXbYBV8LRNYRkjwUhOA9AIwVxGaL2ZM49yIE+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mHgBBafFbsjAzJLPSAzRUNa0KLBtz3G/c8cHAzWDPrc=;
 b=P2+juIv/+mzTcf1Rmd6Nval0zVoEGtfRUyyVFjkFSFcDGgsRrSovOdI47pnyJoiwCXixzo56verUBAOy35IAVWb2kbyT0K0qTN4uVaHDGNfnOvTdyUZTRpGNaRMWzMG598pWrjJM2QAqhvPQstohE8MHfzyK8jIZIAHlvhWIUBC6PedUn/EbrfH09P3LKnH7GHy1fKzOf04ZbvFdc6uLsHm89LTzVd3CFX4kMoTok65j+HvDesUmlFu8ag5JIUf25FXfIHOh3A3dTA/npYaHkH1Mt5fl5oTQ9y/pqTOchJLPsC7V/6PcQl62yYgLvUUqSg0PNgnfdAppTvLCde8sNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mHgBBafFbsjAzJLPSAzRUNa0KLBtz3G/c8cHAzWDPrc=;
 b=qJQRTP7k13vjSQC/AluPAxNQ7JnPBFhal6DfVpLbDVJcnJBwQmsPvoBnjnLYRpORFrjkRBZJ6IFIaz7klFZiJmurfF6ZVtuhQOHMBneXQZJNSUqS6NjcojEZmjE/hGWiVtuWWv6jsjvf2LvfJpfgwIVsr26cvRb4znFq/E1r8js=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
Thread-Topic: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
Thread-Index: AQHZEUh12RKmuw645EafAVdnfeFYOK6RkuHg
Date: Fri, 6 Jan 2023 14:54:56 +0000
Message-ID:
 <AS8PR08MB7991125F288492FFD81F02BA92FB9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-21-julien@xen.org>
In-Reply-To: <20221216114853.8227-21-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: B5732067FE3B8D43A75B9CD84A545DCE.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB8639:EE_|DBAEUR03FT065:EE_|PAWPR08MB10090:EE_
X-MS-Office365-Filtering-Correlation-Id: ef139786-cb2b-4702-b1ec-08daeff601ec
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 kapN2QfqB9/KdMesDdP8tuACJMOdi10pXNKAV+wdOGaH9PPRHqyZgNmNX3oBLLq0xLmQDyGbFBj2vDcCzsGvyrz0j53bjUbxSi40XDJlZb3QdeYrNN+G0sayWDm+xgydN8uoN2y9ILYDkVw+SP364Z34vC7tWsDXUzFuKlpuAwrL+2WwXon5bEM4vAFnEcHNVktzMd6GEXsAisHk0Ja8jIflNBJFpq/AGqHT8lYlR0VwGyeMhRKgmMCRerZAVV2Zezl82QXxRpTgwojuKfgg568TORrCRoaB2fvL8ZeBk9om/8fzrcUtUCTzA6aDxl0hLlzlMR7BSvUZFD4Qt1We6tPjqiEDAc+7niTbIWUOkhA4+e6cbzTEHL+SDdto+6hw08vATsY+lRiGn8UMnij9QKqk7VLI5bzdwdK7x7JESkFiI5a2lcSq6B2rPOZtyh4zZUba+tnHZ+2VhwoPV07SiNMpjZ0/3WFaMQ7DXfEN1f2XQ9jV/acHpaecxWxcy+/bIwT6Rsjk0PGQ+3V1n9Nyp3h6UkLI3tuCkh4g5uJc20ak5WaoAn2xOd0Rimml99kqmR+6mm3xefM50MD8rg+KYj+g8V/uQ+fyw7Z3xEtXP6Fgk4A6S8BsxT1YiMSck9R4Qi8JaDfoc5YHHiE44lptLmjnBUFVma5S/1TN6M8sNLJADOYTH8YihNbgGuUUV8LPD9VgPjiUeIMoHJXXzg0Ulg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(136003)(346002)(376002)(366004)(39860400002)(396003)(451199015)(64756008)(66556008)(66476007)(54906003)(66446008)(8676002)(86362001)(66946007)(478600001)(52536014)(76116006)(41300700001)(4326008)(8936002)(5660300002)(33656002)(2906002)(316002)(83380400001)(186003)(9686003)(26005)(7696005)(6506007)(55016003)(71200400001)(38070700005)(110136005)(38100700002)(122000001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 14:55:09.9472
 (UTC)

Hi Julien,

> -----Original Message-----
> Subject: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, on Arm64, every pCPU are sharing the same page-tables.

Nit: s/every pCPU are/every pCPU is/

>
>  /*
> diff --git a/xen/arch/arm/include/asm/domain_page.h
> b/xen/arch/arm/include/asm/domain_page.h
> new file mode 100644
> index 000000000000..e9f52685e2ec
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/domain_page.h
> @@ -0,0 +1,13 @@
> +#ifndef __ASM_ARM_DOMAIN_PAGE_H__
> +#define __ASM_ARM_DOMAIN_PAGE_H__
> +
> +#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
> +bool init_domheap_mappings(unsigned int cpu);

I wonder if we can make this function "__init", as IIRC it is only
used at Xen boot time. But since the original init_domheap_mappings()
is not "__init" anyway, this is not a strong argument.

> +#else
> +static inline bool init_domheap_mappings(unsigned int cpu)

(and also here)

Whether or not you agree with the above "__init" comment:
Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:55:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:55:15 +0000
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: RE: [PATCH 12/22] xen/arm: fixmap: Rename the fixmap slots to follow
 the x86 convention
Date: Fri, 6 Jan 2023 14:54:43 +0000
Message-ID:
 <AS8PR08MB7991D6F06E5F3959E36307B092FB9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-13-julien@xen.org>
In-Reply-To: <20221216114853.8227-13-julien@xen.org>

Hi Julien,

> -----Original Message-----
> Subject: [PATCH 12/22] xen/arm: fixmap: Rename the fixmap slots to follow
> the x86 convention
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment the fixmap slots are prefixed differently between arm and
> x86.
>
> Some of them (e.g. the PMAP slots) are used in common code. So it would
> be better if they are named the same way to avoid having to create
> aliases.
>
> I have decided to use the x86 naming because they are less change. So
> all the Arm fixmap slots will now be prefixed with FIX rather than
> FIXMAP.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry



From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:55:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:55:23 +0000
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 21/22] xen/arm64: Implement a mapcache for arm64
Date: Fri, 6 Jan 2023 14:55:05 +0000
Message-ID:
 <AS8PR08MB7991F5B73FB994683F3C00B992FB9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-22-julien@xen.org>
In-Reply-To: <20221216114853.8227-22-julien@xen.org>

Hi Julien,

> -----Original Message-----
> Subject: [PATCH 21/22] xen/arm64: Implement a mapcache for arm64
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, on arm64, map_domain_page() is implemented using
> virt_to_mfn(). Therefore it is relying on the directmap.
>
> In a follow-up patch, we will allow the admin to remove the directmap.
> Therefore we want to implement a mapcache.
>
> Thanksfully there is already one for arm32. So select
> ARCH_ARM_DOMAIN_PAGE
> and add the necessary boiler plate to support 64-bit:
>     - The page-table start at level 0, so we need to allocate the level
>       1 page-table
>     - map_domain_page() should check if the page is in the directmap. If
>       yes, then use virt_to_mfn() to limit the performance impact
>       when the directmap is still enabled (this will be selectable
>       on the command line).
>
> Take the opportunity to replace first_table_offset(...) with offsets[...].
>
> Note that, so far, arch_mfns_in_directmap() always return true on

Nit: s/return/returns/

> arm64. So the mapcache is not yet used. This will change in a
> follow-up patch.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 14:55:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 14:55:27 +0000
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 258e20f2-8dd2-11ed-b8d0-410ff93cb8f0
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 22/22] xen/arm64: Allow the admin to enable/disable the
 directmap
Thread-Topic: [PATCH 22/22] xen/arm64: Allow the admin to enable/disable the
 directmap
Thread-Index: AQHZEUh1e0APpjxFfUWN/NZXS3PZ8q6RmKBQ
Date: Fri, 6 Jan 2023 14:55:10 +0000
Message-ID:
 <AS8PR08MB799182CB8FD58DDAFB38228592FB9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-23-julien@xen.org>
In-Reply-To: <20221216114853.8227-23-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A6FB936F6608A0469363276AA3709019.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB8276:EE_|AM7EUR03FT041:EE_|DBAPR08MB5765:EE_
X-MS-Office365-Filtering-Correlation-Id: b46bf93e-0806-4662-5142-08daeff607e1
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8276
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2a94f699-ac09-4852-da28-08daeff6028d
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 14:55:19.8921
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b46bf93e-0806-4662-5142-08daeff607e1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5765

Hi Julien,

> -----Original Message-----
> Subject: [PATCH 22/22] xen/arm64: Allow the admin to enable/disable the
> directmap
>
> From: Julien Grall <jgrall@amazon.com>
>
> Implement the same command line option as x86 to enable/disable the
> directmap. By default this is kept enabled.
>
> Also modify setup_directmap_mappings() to populate the L0 entries
> related to the directmap area.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> ----
>     This patch is in an RFC state we need to decide what to do for arm32.
>
>     Also, this is moving code that was introduced in this series. So
>     this will need to be fix in the next version (assuming Arm64 will
>     be ready).
>
>     This was sent early as PoC to enable secret-free hypervisor
>     on Arm64.
> ---
> @@ -606,16 +613,27 @@ void __init setup_directmap_mappings(unsigned
> long base_mfn,
>      directmap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
>  }
>  #else /* CONFIG_ARM_64 */
> -/* Map the region in the directmap area. */
> +/*
> + * This either populate a valid fdirect map, or allocates empty L1 tables

I guess this is a typo: s/fdirect/direct/ ?

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
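For context, the `directmap` option discussed in the patch above follows the usual on/off style of a boolean hypervisor command-line option, with the directmap enabled by default. The helper below is only an illustrative sketch of those semantics (the function name and accepted spellings are assumptions, not Xen's actual `parse_bool()` implementation):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for parsing a "directmap" boolean option.
 * Accepts common on/off spellings and defaults to enabled, matching
 * the patch description ("By default this is kept enabled").
 * This is a sketch, not the real Xen command-line parser. */
static bool parse_directmap_opt(const char *val)
{
    if (val == NULL || *val == '\0')
        return true;                 /* bare "directmap" enables it */
    if (!strcmp(val, "on") || !strcmp(val, "yes") ||
        !strcmp(val, "true") || !strcmp(val, "1"))
        return true;
    if (!strcmp(val, "off") || !strcmp(val, "no") ||
        !strcmp(val, "false") || !strcmp(val, "0"))
        return false;
    return true;                     /* unrecognised value: keep default */
}
```

On x86 the equivalent option already exists, so the sketch only restates the default-enabled behaviour the patch claims to mirror.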


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:02:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:02:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472734.733078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoEz-0003an-TS; Fri, 06 Jan 2023 15:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472734.733078; Fri, 06 Jan 2023 15:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoEz-0003ag-Pa; Fri, 06 Jan 2023 15:02:41 +0000
Received: by outflank-mailman (input) for mailman id 472734;
 Fri, 06 Jan 2023 15:02:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J7eG=5D=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDoEy-0003aa-P8
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 15:02:40 +0000
Received: from sonic317-20.consmr.mail.gq1.yahoo.com
 (sonic317-20.consmr.mail.gq1.yahoo.com [98.137.66.146])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 284c4ba5-8dd3-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 16:02:39 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic317.consmr.mail.gq1.yahoo.com with HTTP; Fri, 6 Jan 2023 15:02:36 +0000
Received: by hermes--production-ne1-7b69748c4d-g8q5j (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID e73641d6e4425e24aadb96822c955bf6; 
 Fri, 06 Jan 2023 15:02:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 284c4ba5-8dd3-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <6931ef9f-1978-97f5-2d32-003a9e64833c@aol.com>
Date: Fri, 6 Jan 2023 10:02:29 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
 <c39b9502-0020-ce54-abd8-b362430ba086@aol.com>
 <882652f8-ddda-a7d8-85b9-da46568036d3@aol.com>
In-Reply-To: <882652f8-ddda-a7d8-85b9-da46568036d3@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4557

On 1/6/23 9:31 AM, Chuck Zmudzinski wrote:
> On 1/6/23 9:10 AM, Chuck Zmudzinski wrote:
>> On 1/6/23 9:03 AM, Anthony PERARD wrote:
>>> On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
>>>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>>>> as noted in docs/igd-assign.txt in the Qemu source code.
>>>> 
>>>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>>>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>>>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>>>> a different slot. This problem often prevents the guest from booting.
>>>> 
>>>> The only available workaround is not good: Configure Xen HVM guests to use
>>>> the old and no longer maintained Qemu traditional device model available
>>>> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>>>> 
>>>> To implement this feature in the Qemu upstream device model for Xen HVM
>>>> guests, introduce the following new functions, types, and macros:
>>>> 
>>>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>>>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>>>> * typedef XenPTQdevRealize function pointer
>>>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>>>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>>>> 
>>>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>>>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>>>> the xl toolstack with the gfx_passthru option enabled, which sets the
>>>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>>>> 
>>>> The new xen_igd_reserve_slot function also needs to be implemented in
>>>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>>>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>>>> in which case it does nothing.
>>>> 
>>>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>>>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>>>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>>>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>>>> 
>>>> Move the call to xen_host_pci_device_get, and the associated error
>>>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>>>> initialize the device class and vendor values which enables the checks for
>>>> the Intel IGD to succeed. The verification that the host device is an
>>>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>>>> and function values as well as by checking that gfx_passthru is enabled,
>>>> the device class is VGA, and the device vendor is Intel.
>>>> 
>>>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>>> 
>>> 
>>> This patch looks good enough. It only changes the "xenfv" machine so it
>>> doesn't prevent a proper fix from being done in the toolstack libxl.
>>> 
>>> The change in xen_pci_passthrough_class_init() to try to run some code
>>> before pci_qdev_realize() could potentially break in the future due to
>>> being uncommon, but hopefully that will be ok.
>>> 
>>> So if no work to fix libxl appear soon, I'm ok with this patch:
> 
> Well, I can tell that you and others who use qemu are more comfortable
> fixing this in libxl, so hold off for a week or so. I should have
> a patch to fix this in libxl written and tested by then. If for
> some reason that does not work out, then we can fix it in qemu.

One last thought: the only downside to fixing this in libxl is that
other toolstacks that configure qemu to use the xenfv machine will not
benefit from the fix in qemu that would simplify configuring the
guest correctly for the igd. Other toolstacks would still need to
override the default behavior of adding the xen platform device at
slot 2. I think no matter what, we should at least patch qemu to have
the xen-platform device use slot 3 instead of being automatically assigned
to slot 2 when igd-passthru=on. The rest of the fix could then be
implemented in libxl so that other pci devices such as emulated network
devices, other passed-through pci devices, etc., do not take slot 2 when
gfx_passthru in xl.cfg is set.
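As background to the slot-reservation discussion above: QEMU's slot_reserved_mask is a per-bus bitmask in which a set bit N blocks automatic device placement in PCI slot N, which is how xen_igd_reserve_slot keeps slot 2 free for the IGD. The sketch below illustrates that skip-reserved-slots logic; the assumption that XEN_PCI_IGD_SLOT_MASK is simply bit 2, and the helper name, are mine, not QEMU's actual assignment code:

```c
#include <stdint.h>

#define PCI_SLOT_MAX 32

/* Assumed value: bit 2 set means PCI slot 2 is reserved for the IGD. */
#define XEN_PCI_IGD_SLOT_MASK (1u << 2)

/* Return the first slot not blocked by the reservation mask, or -1 if
 * every slot is reserved. Sketch of how auto-assignment would skip
 * reserved slots when placing an emulated device. */
static int first_free_slot(uint32_t slot_reserved_mask)
{
    for (int slot = 0; slot < PCI_SLOT_MAX; slot++)
        if (!(slot_reserved_mask & (1u << slot)))
            return slot;
    return -1;
}
```

With slots 0 and 1 already occupied and the IGD bit set (mask 0x7), the next auto-assigned device lands in slot 3 rather than slot 2, which matches the behaviour proposed for the xen-platform device.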

So, unless I hear any objection, my plan is to patch qemu to use slot
3 for the xen platform device when igd-passthru is on, and implement the
rest of the fix in libxl. I should have it ready within a week.

Thanks for your help,

Chuck


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:05:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:05:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472743.733089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoHD-0004FV-CX; Fri, 06 Jan 2023 15:04:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472743.733089; Fri, 06 Jan 2023 15:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoHD-0004FO-9L; Fri, 06 Jan 2023 15:04:59 +0000
Received: by outflank-mailman (input) for mailman id 472743;
 Fri, 06 Jan 2023 15:04:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDo8k-0008RI-DA
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 14:56:14 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4159138a-8dd2-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 15:56:11 +0100 (CET)
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jan 2023 09:55:52 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6151.namprd03.prod.outlook.com (2603:10b6:208:315::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 14:55:45 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 14:55:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4159138a-8dd2-11ed-b8d0-410ff93cb8f0
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 3/8] xen/riscv: introduce stack stuff
Thread-Topic: [PATCH v1 3/8] xen/riscv: introduce stack stuff
Thread-Index: AQHZIdDaBwrpw7Xp9EGdXN7qbTOW/a6RekUA
Date: Fri, 6 Jan 2023 14:55:44 +0000
Message-ID: <ab3c05b2-14a2-6860-0452-c8873617f332@citrix.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <3D76733CFA921E4C803663BEC4C94DEC@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 06/01/2023 1:14 pm, Oleksii Kurochko wrote:
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> index 990edb70a0..ddc7104701 100644
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,4 +1,10 @@
>          .section .text.header, "ax", %progbits
>  
>  ENTRY(start)
> -        j  start
> +        la      sp, cpu0_boot_stack
> +        li      t0, PAGE_SIZE
> +        add     sp, sp, t0
> +
> +_start_hang:
> +        wfi
> +        j  _start_hang
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> new file mode 100644
> index 0000000000..2c7dca1daa
> --- /dev/null
> +++ b/xen/arch/riscv/setup.c
> @@ -0,0 +1,6 @@
> +#include <xen/init.h>
> +#include <xen/compile.h>
> +
> +/* Xen stack for bringing up the first CPU. */
> +unsigned char __initdata cpu0_boot_stack[PAGE_SIZE]
> +    __aligned(PAGE_SIZE);

Ah, I didn't spot this when looking at the unified delta of the series.

You want most of patch 7 merged into this, so we end up with

+void __init noreturn start_xen(void)
+{
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}

in setup.c too.  That means we don't transiently add _start_hang just to
delete it 3 patches later.  (And it will fix Jan's observation about the
misaligned operand.)

Adding the early_printk("Hello from C env\n"); line wants merging into
patch 6.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:05:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:05:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472746.733100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoHZ-0004jB-NN; Fri, 06 Jan 2023 15:05:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472746.733100; Fri, 06 Jan 2023 15:05:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoHZ-0004j2-Je; Fri, 06 Jan 2023 15:05:21 +0000
Received: by outflank-mailman (input) for mailman id 472746;
 Fri, 06 Jan 2023 15:05:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDoHY-0004hP-5M
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 15:05:20 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 868da59d-8dd3-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 16:05:17 +0100 (CET)
Received: from mail-co1nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jan 2023 10:05:14 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO1PR03MB5713.namprd03.prod.outlook.com (2603:10b6:303:6f::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 15:05:11 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 15:05:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v1 8/8] automation: add RISC-V smoke test
Thread-Topic: [PATCH v1 8/8] automation: add RISC-V smoke test
Thread-Index: AQHZIdDdCltTrKYYX0C6LpSEFhyV666RfOgA
Date: Fri, 6 Jan 2023 15:05:10 +0000
Message-ID: <c55a0743-5433-205c-f40c-cd296576c0f4@citrix.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <90078a83982b37846e9845c8ffc50c92f3be1f47.1673009740.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <90078a83982b37846e9845c8ffc50c92f3be1f47.1673009740.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <1BF3244BACA46F41A9BDC56E5F1021AB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 06/01/2023 1:14 pm, Oleksii Kurochko wrote:
> Add check if there is a message 'Hello from C env' presents
> in log file to be sure that stack is set and C part of early printk
> is working.
>
> Also qemu-system-riscv was added to riscv64.dockerfile as it is
> required for RISC-V smoke test.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  automation/build/archlinux/riscv64.dockerfile |  3 ++-
>  automation/scripts/qemu-smoke-riscv64.sh      | 20 ++++++++++++++++++++
>  2 files changed, 22 insertions(+), 1 deletion(-)
>  create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

Looking through the entire series, aren't we missing a hunk to test.yml
to wire up the smoke test?

It wants to live in this patch along with the introduction of
qemu-smoke-riscv64.sh.

However, the modification to the dockerfile wants breaking out and
submitted separately.  It will involve rebuilding and redeploying the
container, which is a) fine to do separately, and b) a necessary
prerequisite for anyone else to take this series and test it.

~Andrew
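The pass/fail check such a smoke-test script performs can be sketched as below; the marker string comes from the series, while the function name and the QEMU invocation shown in the comment are illustrative assumptions, since the script itself is not quoted here:

```shell
# Sketch of the log check qemu-smoke-riscv64.sh needs to make;
# check_smoke_log and the QEMU flags are assumptions, not the real script.
check_smoke_log() {
    # Exit status 0 iff the early-printk marker reached the captured serial log.
    grep -q "Hello from C env" "$1"
}

# In the real test, the log would come from booting Xen under QEMU, roughly:
#   timeout 30 qemu-system-riscv64 -M virt -nographic -kernel binaries/xen \
#       | tee smoke.serial
#   check_smoke_log smoke.serial || exit 1
```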


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:19:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:19:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472760.733111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoV3-0006Sq-2M; Fri, 06 Jan 2023 15:19:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472760.733111; Fri, 06 Jan 2023 15:19:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoV2-0006Sj-VL; Fri, 06 Jan 2023 15:19:16 +0000
Received: by outflank-mailman (input) for mailman id 472760;
 Fri, 06 Jan 2023 15:19:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDoV1-0006Sd-Nd
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 15:19:15 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 790f2b12-8dd5-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 16:19:13 +0100 (CET)
Received: from mail-co1nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jan 2023 10:19:10 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5553.namprd03.prod.outlook.com (2603:10b6:208:285::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 15:19:08 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 15:19:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Julien Grall <julien@xen.org>, Oleksii Kurochko
	<oleksii.kurochko@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Thread-Topic: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Thread-Index: AQHZIdDaR/Z8ZObOWUu8wVolwH1ydK6RZVeAgAAbd4A=
Date: Fri, 6 Jan 2023 15:19:07 +0000
Message-ID: <8b6a9927-8a56-6fab-6658-deb4083d2c45@citrix.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
 <d77e7617-5263-0072-4786-ba6144247a4b@xen.org>
In-Reply-To: <d77e7617-5263-0072-4786-ba6144247a4b@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BLAPR03MB5553:EE_
x-ms-office365-filtering-correlation-id: 10292a8f-3a22-477e-f977-08daeff95b0b
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <34D1B9580B0B1440BF0B3D484ACFC147@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 10292a8f-3a22-477e-f977-08daeff95b0b
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Jan 2023 15:19:07.9596
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5553

On 06/01/2023 1:40 pm, Julien Grall wrote:
> Hi,
>
> On 06/01/2023 13:14, Oleksii Kurochko wrote:
>> The patch introduce sbi_putchar() SBI call which is necessary
>> to implement initial early_printk
>>
>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> ---
>>   xen/arch/riscv/Makefile          |  1 +
>>   xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>>   xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++
>
> IMHO, it would be better to implement sbi.c in assembly so you can use
> print in the console before you jump to C world.

That was already requested not to happen.  Frankly, if I was an arm
maintainer I'd object to the how it's used there too, but RISCV is
massively more simple still.

Not even the pagetable setup, or the physical relocation (if even
necessary) needs doing in ASM.

I'm not completely ruling it out in the future, but someone is going to
have to come up with a very convincing argument for why they can't do
this piece of critical setup in C.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:19:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:19:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472762.733122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoVT-0006tP-Bh; Fri, 06 Jan 2023 15:19:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472762.733122; Fri, 06 Jan 2023 15:19:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDoVT-0006tI-7y; Fri, 06 Jan 2023 15:19:43 +0000
Received: by outflank-mailman (input) for mailman id 472762;
 Fri, 06 Jan 2023 15:19:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NPe1=5D=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pDoVS-0006sw-6J
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 15:19:42 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2076.outbound.protection.outlook.com [40.107.223.76])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 892c6e25-8dd5-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 16:19:39 +0100 (CET)
Received: from BL0PR1501CA0022.namprd15.prod.outlook.com
 (2603:10b6:207:17::35) by CH3PR12MB8511.namprd12.prod.outlook.com
 (2603:10b6:610:15c::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 15:19:35 +0000
Received: from BL02EPF0000EE3D.namprd05.prod.outlook.com
 (2603:10b6:207:17:cafe::ef) by BL0PR1501CA0022.outlook.office365.com
 (2603:10b6:207:17::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.15 via Frontend
 Transport; Fri, 6 Jan 2023 15:19:35 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF0000EE3D.mail.protection.outlook.com (10.167.241.134) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.15 via Frontend Transport; Fri, 6 Jan 2023 15:19:35 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 6 Jan
 2023 09:19:35 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 6 Jan 2023 09:19:33 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 892c6e25-8dd5-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YmYYZKmsm8wPu+mMVvCMuOGDou5onGM5gBzV/I4eOskIXzI+6NE30RbZvtBsCrMHS++tOmWmL3Ha0EvDtHLO9qERQLAqOIGWMIoChzx3uTWAfgv9yOQiLqmLEHxRhyExmCXI9jdefmQnCtuCT3arjzSZdjxvWALXdJ72DUBJMQ5vp7p6tIqxonc8o3SFI7UedASi60Na0EdjiEe2ZrPhfkcPAlOPRx8h6Xrsnj9Lh0s7L5uBuF5YCIkODOonQpQZ8tc7vVp2tepfU5KRXPliRSjhYZqi1N+qPWG+7b819kZ//RhrzUSYGZbkGmlcct/QSk1MnZ8qE/hXTo90szETaQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NwgDyJdzWg87zclswFLQ8OvS/mSB4UK6QDkVuEFIJx8=;
 b=YY1/D4tXCUqkIMeyRDeoA4A0TSXpgcaWh5ZhTbvZujoEk/0ndSCmocg3z1eTBCuwQbv+ztICIw7eTMp8MzXszip0g/ubMcdLbi0p+2qYs5Rekum85bUcgtZAh8uFBrB7aHFUSY5UPAQepJo5dWNjVJ2bZ5VXEh47EgLsqamh6c6ve8kXab47DZ3FfYv8l7PYw+WQKayGZUblLSoMLu1q6sUv/OSFQmAHVn91JMMFYKxjEtSLrLpZ7TRRwk0HCykLAKesNVz1ePWA3r1Z8fatCF1n7+clvmuZg1JALPLonHYP5E+KnbTFZd595S9GPXY+iA+vwTadKaWBFHY3rUaRiw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=gmail.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NwgDyJdzWg87zclswFLQ8OvS/mSB4UK6QDkVuEFIJx8=;
 b=CUVuitha+SUvHdMk/QZ7K/reCnYq9OAWxIbTvhZzLLDUAW6T26z+Bif3L+4OtRz6ovMIYJpA8SIS0YEPEBsfPnoacjdCzKQYkTHVnI2T0LpXZCggp5mX8mKMVRII/8mk7Sh+YvMv2W10cP3jymoclxpJrh+mU15xMXXpPU9sNuM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <420e9747-09ba-b6e4-d3c5-14f0f174c1d7@amd.com>
Date: Fri, 6 Jan 2023 16:19:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	<xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF0000EE3D:EE_|CH3PR12MB8511:EE_
X-MS-Office365-Filtering-Correlation-Id: 74ed7080-a8fc-4de6-717a-08daeff96b8d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jan 2023 15:19:35.6204
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 74ed7080-a8fc-4de6-717a-08daeff96b8d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF0000EE3D.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8511

Hi Oleksii,

On 06/01/2023 14:14, Oleksii Kurochko wrote:
> 
> 
> The patch introduce sbi_putchar() SBI call which is necessary
> to implement initial early_printk
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  xen/arch/riscv/Makefile          |  1 +
>  xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>  xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++
>  3 files changed, 79 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/sbi.h
>  create mode 100644 xen/arch/riscv/sbi.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 5a67a3f493..60db415654 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,5 +1,6 @@
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += setup.o
> +obj-y += sbi.o
> 
>  $(TARGET): $(TARGET)-syms
>         $(OBJCOPY) -O binary -S $< $@
> diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
> new file mode 100644
> index 0000000000..34b53f8eaf
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/sbi.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> +/*
> + * Copyright (c) 2021 Vates SAS.
> + *
> + * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Taken/modified from Xvisor project with the following copyright:
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + */
> +
> +#ifndef __CPU_SBI_H__
> +#define __CPU_SBI_H__
I wonder where the CPU part comes from. Shouldn't this be called __ASM_RISCV_SBI_H__?

> +
> +#define SBI_EXT_0_1_CONSOLE_PUTCHAR            0x1
> +
> +struct sbiret {
> +    long error;
> +    long value;
> +};
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
> +        unsigned long arg1, unsigned long arg2,
> +        unsigned long arg3, unsigned long arg4,
> +        unsigned long arg5);
The arguments need to be aligned.

> +
> +/**
> + * Writes given character to the console device.
> + *
> + * @param ch The data to be written to the console.
> + */
> +void sbi_console_putchar(int ch);
> +
> +#endif // __CPU_SBI_H__
// should be replaced with /* */

> diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
> new file mode 100644
> index 0000000000..67cf5dd982
> --- /dev/null
> +++ b/xen/arch/riscv/sbi.c
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/*
> + * Taken and modified from the xvisor project with the copyright Copyright (c)
> + * 2019 Western Digital Corporation or its affiliates and author Anup Patel
> + * (anup.patel@wdc.com).
> + *
> + * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + * Copyright (c) 2021 Vates SAS.
> + */
> +
> +#include <xen/errno.h>
> +#include <asm/sbi.h>
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
> +            unsigned long arg1, unsigned long arg2,
> +            unsigned long arg3, unsigned long arg4,
> +            unsigned long arg5)
The arguments need to be aligned.

> +{
> +    struct sbiret ret;
Could you please add an empty line here.

> +    register unsigned long a0 asm ("a0") = arg0;
> +    register unsigned long a1 asm ("a1") = arg1;
> +    register unsigned long a2 asm ("a2") = arg2;
> +    register unsigned long a3 asm ("a3") = arg3;
> +    register unsigned long a4 asm ("a4") = arg4;
> +    register unsigned long a5 asm ("a5") = arg5;
> +    register unsigned long a6 asm ("a6") = fid;
> +    register unsigned long a7 asm ("a7") = ext;
> +
> +    asm volatile ("ecall"
> +              : "+r" (a0), "+r" (a1)
> +              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
> +              : "memory");
> +    ret.error = a0;
> +    ret.value = a1;
> +
> +    return ret;
> +}
> +
> +void sbi_console_putchar(int ch)
> +{
> +    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
> +}
> --
> 2.38.1
> 
> 
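
As an aside, and purely as a hypothetical sketch (the names and the
callback indirection are mine, not from the patch): the early_printk
mentioned in the commit message would presumably boil down to a small
loop over sbi_console_putchar(). Abstracting the putchar backend behind
a function pointer lets the loop itself be exercised without SBI
firmware underneath:

```c
#include <stddef.h>

/* One-character console primitive, e.g. sbi_console_putchar() on RISC-V. */
typedef void (*putchar_fn)(int ch);

/*
 * Write n characters through the given putchar backend, expanding
 * "\n" to "\r\n" as serial consoles usually expect.
 */
static void early_puts(const char *s, size_t n, putchar_fn put)
{
    for ( size_t i = 0; i < n; i++ )
    {
        if ( s[i] == '\n' )
            put('\r');
        put(s[i]);
    }
}
```

In the hypervisor the backend would just be sbi_console_putchar; the
indirection above is only there so the CRLF handling can be tested on a
development host.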

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:39:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:39:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472774.733132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDooT-0000yh-Um; Fri, 06 Jan 2023 15:39:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472774.733132; Fri, 06 Jan 2023 15:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDooT-0000ya-S8; Fri, 06 Jan 2023 15:39:21 +0000
Received: by outflank-mailman (input) for mailman id 472774;
 Fri, 06 Jan 2023 15:39:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDooS-0000yU-Lo
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 15:39:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDooS-0002Sd-8N; Fri, 06 Jan 2023 15:39:20 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.4.240]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDooS-0006Cg-1W; Fri, 06 Jan 2023 15:39:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2b/ivgfkVHeJEkn1ZmY0o6bI75hFvxbn7e3b1wNI5So=; b=5I4xi7U2qDvifQww5bLNeTgXm8
	Mc9JUJTuJARTbERribUbYhjc+2ay5F17v3TDVW1v9Rf64EG7jM03KvyYSSudISznzvoSQI7jP0IBa
	frHHY1e9q3gdh3UTWSBuyq3Vt7bO1BI/ahw54iMRjggKH79e7DbB2YgJKHccrweO56LQ=;
Message-ID: <d1f1f50d-86f7-5969-2df0-1dc9a890554c@xen.org>
Date: Fri, 6 Jan 2023 15:39:17 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
 <d77e7617-5263-0072-4786-ba6144247a4b@xen.org>
 <8b6a9927-8a56-6fab-6658-deb4083d2c45@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <8b6a9927-8a56-6fab-6658-deb4083d2c45@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 06/01/2023 15:19, Andrew Cooper wrote:
> On 06/01/2023 1:40 pm, Julien Grall wrote:
>> Hi,
>>
>> On 06/01/2023 13:14, Oleksii Kurochko wrote:
>>> The patch introduce sbi_putchar() SBI call which is necessary
>>> to implement initial early_printk
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>>    xen/arch/riscv/Makefile          |  1 +
>>>    xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>>>    xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++
>>
>> IMHO, it would be better to implement sbi.c in assembly so you can use
>> print in the console before you jump to C world.
> 
> That was already requested not to happen.  Frankly, if I was an arm
> maintainer I'd object to the how it's used there too, but RISCV is
> massively more simple still.

There are a few reasons:
   1) Xen is not fully position independent. Even if -fpic is enabled, 
you would still need to apply some relocations if you want to use it 
before running in the virtual address space.
   2) Doing any memory access before the MMU is set up requires some 
careful thought (see [1]). With a function implemented in C, you can't 
really control which memory accesses are done.

> 
> Not even the pagetable setup, or the physical relocation (if even
> necessary) needs doing in ASM.
> 
> I'm not completely ruling it out in the future, but someone is going to
> have to come up with a very convincing argument for why they can't do
> this piece of critical setup in C.

That's great if RISC-V doesn't have the issues I mentioned above. On 
Arm, I don't really think we can get away with it. But feel free to 
explain how this could be done...

Cheers,

[1] 
https://events.static.linuxfound.org/sites/events/files/slides/slides_10.pdf

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:44:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:44:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472782.733144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDot5-0002Rl-KW; Fri, 06 Jan 2023 15:44:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472782.733144; Fri, 06 Jan 2023 15:44:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDot5-0002Re-Hg; Fri, 06 Jan 2023 15:44:07 +0000
Received: by outflank-mailman (input) for mailman id 472782;
 Fri, 06 Jan 2023 15:44:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pDot4-0002RY-EZ
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 15:44:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDot4-0002Xv-4Z; Fri, 06 Jan 2023 15:44:06 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.4.240]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pDot3-0006RU-Uf; Fri, 06 Jan 2023 15:44:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=yRY9qGHE9XSyQSDaaF/j9f3fr2k+NbzXeokeIUubZSU=; b=pvV0i8P2cfFYmDriFmPBrFO9J8
	gQlfBNAmICSRM8+kk0rsu5Km98tkUxxyec/LZw87A+bB5gPtJJ2NwwEEjBuBJfxoJUwVeapeumJOI
	JOR9oq61TTcQkxSJi/B9wGYtxRz7g/GpQOMGgvPQzhwY7h2b3H1dKdUlmtzZ5OiHPnIg=;
Message-ID: <9c9099e2-dd7a-db37-4d0c-38f1dd8d3e48@xen.org>
Date: Fri, 6 Jan 2023 15:44:04 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-21-julien@xen.org>
 <AS8PR08MB7991125F288492FFD81F02BA92FB9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB7991125F288492FFD81F02BA92FB9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Henry,

On 06/01/2023 14:54, Henry Wang wrote:
>> -----Original Message-----
>> Subject: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, on Arm64, every pCPU are sharing the same page-tables.
> 
> Nit: s/every pCPU are/ every pCPU is/

I will fix it.

> 
>>
>>   /*
>> diff --git a/xen/arch/arm/include/asm/domain_page.h
>> b/xen/arch/arm/include/asm/domain_page.h
>> new file mode 100644
>> index 000000000000..e9f52685e2ec
>> --- /dev/null
>> +++ b/xen/arch/arm/include/asm/domain_page.h
>> @@ -0,0 +1,13 @@
>> +#ifndef __ASM_ARM_DOMAIN_PAGE_H__
>> +#define __ASM_ARM_DOMAIN_PAGE_H__
>> +
>> +#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
>> +bool init_domheap_mappings(unsigned int cpu);
> 
> I wonder if we can make this function "__init" as IIRC this function is only
> used at Xen boot time, but since the original init_domheap_mappings()
> is not "__init" anyway so this is not a strong argument.

While this is not yet supported on Xen on Arm, CPUs can be 
onlined/offlined at runtime. So you want to keep init_domheap_mappings() 
around.

We could consider providing a new attribute that would match __init 
when hotplug is not supported and otherwise be a NOP. But I don't think 
this is related to this series (most of the functions used for bringup 
are not in __init).

>> +static inline bool init_domheap_mappings(unsigned int cpu)
> 
> (and also here)
> 
> Either you agree with above "__init" comment or not:
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Thanks!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:51:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:51:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472788.733155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDp09-0003s8-Dl; Fri, 06 Jan 2023 15:51:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472788.733155; Fri, 06 Jan 2023 15:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDp09-0003s1-A9; Fri, 06 Jan 2023 15:51:25 +0000
Received: by outflank-mailman (input) for mailman id 472788;
 Fri, 06 Jan 2023 15:51:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDp08-0003rr-TY; Fri, 06 Jan 2023 15:51:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDp08-0002ok-Qi; Fri, 06 Jan 2023 15:51:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDp08-0005x1-Aa; Fri, 06 Jan 2023 15:51:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDp08-00049G-9l; Fri, 06 Jan 2023 15:51:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bdQG64tiKkkrRoD3nuu+tBnUNusf5+Q5LrcxGnsvoqU=; b=WxxGmaKA7H6kGFRPSfVRV3zgQ7
	KjP3pdsDiQ+t+R7E07dYm/R4XJKBbdhx4pg6lAGIvekLNU3OpETQdJYDSHGrr2Ay2AUUPSDBv05e/
	pEv3ljti9GUk6HD5Nk6N/R8pw4Uvei+Wu+cK0+8+Z9tQnlF7dbthET4Ee8T3so05Evbo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175599-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175599: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1f5abbd77e2c1787e74b7c2caffac97def78ba52
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 15:51:24 +0000

flight 175599 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175599/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                1f5abbd77e2c1787e74b7c2caffac97def78ba52
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   90 days
Failing since        173470  2022-10-08 06:21:34 Z   90 days  188 attempts
Testing same since   175599  2023-01-06 07:05:33 Z    0 days    1 attempts

------------------------------------------------------------
3293 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 501982 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 15:57:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 15:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472798.733166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDp5l-0004Wi-1u; Fri, 06 Jan 2023 15:57:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472798.733166; Fri, 06 Jan 2023 15:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDp5k-0004Wb-Ur; Fri, 06 Jan 2023 15:57:12 +0000
Received: by outflank-mailman (input) for mailman id 472798;
 Fri, 06 Jan 2023 15:57:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDp5j-0004WR-Ot; Fri, 06 Jan 2023 15:57:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDp5j-0002vJ-My; Fri, 06 Jan 2023 15:57:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDp5j-00066P-Ap; Fri, 06 Jan 2023 15:57:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDp5j-00086t-AP; Fri, 06 Jan 2023 15:57:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g/I03d3MZ7AG/eC5r5t5Zb6LOu2cfQNNU6Ha2XHJIEY=; b=y91qi223mU9LjwTNZughHSzKX0
	XxFWUc2P1OXcusu2LIUp/kHRWkyx8rbpbDoUDFGHMCQgGp7O7sPVutHxmn/QV4/1QMtGLqXfFHDew
	0pFQee3Qn3QU2R40Gb4fZeLi4ml4I6msVaKkrs/7cFPfm4KufugJrTRV/akUsrDjySy8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175604-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175604: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5386c9e6dab2f6a555e679aff9f6a59f60c8e029
X-Osstest-Versions-That:
    ovmf=0aca5901e34489573c4e9558729704f76869e3d0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 15:57:11 +0000

flight 175604 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175604/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5386c9e6dab2f6a555e679aff9f6a59f60c8e029
baseline version:
 ovmf                 0aca5901e34489573c4e9558729704f76869e3d0

Last test of basis   175602  2023-01-06 08:41:27 Z    0 days
Testing same since   175604  2023-01-06 13:12:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiewen Yao <jiewen.yao@intel.com>
  Michael Roth <michael.roth@amd.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0aca5901e3..5386c9e6da  5386c9e6dab2f6a555e679aff9f6a59f60c8e029 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 16:32:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 16:32:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472807.733177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDpdw-0000me-P2; Fri, 06 Jan 2023 16:32:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472807.733177; Fri, 06 Jan 2023 16:32:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDpdw-0000mX-Lu; Fri, 06 Jan 2023 16:32:32 +0000
Received: by outflank-mailman (input) for mailman id 472807;
 Fri, 06 Jan 2023 16:32:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g6IK=5D=citrix.com=prvs=36316be06=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pDpdu-0000mR-Rz
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 16:32:30 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b3f7364a-8ddf-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 17:32:28 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3f7364a-8ddf-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673022748;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=eavG7+hkgdUhfacVrhQ5HxWN5atkNNEzN09mWvS484E=;
  b=P/1rdWPpa2r4tjboEGibWw9MNNY16enMNewFQC+7X0+9eoEJnD/A7O9i
   hC+Dxx9E8nR7eYvdbbB+/7BcQ4K93U53KwVzEcn6cDmikOh075qIIOQxe
   OZCuRniiy6FSLCdcIMIeJv8nCGazurSbHqTgv3w/X/ex2amesnSIMkZ5d
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91538047
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,305,1665460800"; 
   d="scan'208";a="91538047"
Date: Fri, 6 Jan 2023 16:32:15 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Carlo Nonato <carlo.nonato@minervasys.tech>
CC: <xen-devel@lists.xenproject.org>, <marco.solieri@unimore.it>,
	<andrea.bastoni@minervasys.tech>, <lucmiccio@gmail.com>, Wei Liu
	<wl@xen.org>, Juergen Gross <jgross@suse.com>, Marco Solieri
	<marco.solieri@minervasys.tech>
Subject: Re: [PATCH v3 4/9] tools/xl: add support for cache coloring
 configuration
Message-ID: <Y7hND1jfB/sKUzA7@perard.uk.xensource.com>
References: <20221022155120.7000-1-carlo.nonato@minervasys.tech>
 <20221022155120.7000-5-carlo.nonato@minervasys.tech>
 <Y3+MDElm8YQ7/2nS@perard.uk.xensource.com>
 <CAG+AhRUfdnwCYkXw3TR=XrQOWQFt4FTdEsGvcE5kyAmwEyAaeg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <CAG+AhRUfdnwCYkXw3TR=XrQOWQFt4FTdEsGvcE5kyAmwEyAaeg@mail.gmail.com>

On Thu, Dec 22, 2022 at 04:28:56PM +0100, Carlo Nonato wrote:
> > Could you invent a name that is more specific? Maybe "cache_colors" or
> > something, or "vcpu_cache_colors".
> 
> What about llc_colors? LLC stands for Last Level Cache, and I'm trying to
> use it everywhere else since it reflects what is actually implemented (only
> the last-level cache is colored, not every cache) and it's shorter than
> cache_colors.

"llc_colors" sounds good.
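For illustration, a guest config using the agreed name might look like the fragment below. The exact syntax is defined by the patch series and is not confirmed here; the value format shown is only a guess.

```
# Hypothetical xl domain config fragment (syntax illustrative only):
name = "colored-guest"
memory = 512
llc_colors = [ "0-3", "8" ]   # restrict this domain to LLC colors 0-3 and 8
```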

Cheers,


-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 16:59:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 16:59:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472815.733187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDq43-0003KV-0O; Fri, 06 Jan 2023 16:59:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472815.733187; Fri, 06 Jan 2023 16:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDq42-0003KO-Tz; Fri, 06 Jan 2023 16:59:30 +0000
Received: by outflank-mailman (input) for mailman id 472815;
 Fri, 06 Jan 2023 16:59:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CnAh=5D=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1pDq42-0003KI-73
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 16:59:30 +0000
Received: from mail-pg1-x536.google.com (mail-pg1-x536.google.com
 [2607:f8b0:4864:20::536])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a40c5c8-8de3-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 17:59:27 +0100 (CET)
Received: by mail-pg1-x536.google.com with SMTP id v3so1536321pgh.4
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 08:59:27 -0800 (PST)
Received: from localhost (c-73-164-155-12.hsd1.wa.comcast.net. [73.164.155.12])
 by smtp.gmail.com with ESMTPSA id
 5-20020a621505000000b005772d55df03sm1366021pfv.35.2023.01.06.08.59.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 08:59:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a40c5c8-8de3-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=xw2fjJMCegprkjtpBWjKKtwQO+Lc1SNHRh05H+6IcWk=;
        b=oy42dzreI1K2/czGG5RMStkjz3WM5r9ObUmxjgHNSRhFJi5T7LftO2BOVka5gAfwk4
         Id7aK311Fes10tyYVopdlU6zcsUtkC+I2UA723sjEoEnGfDA+6I2KeXPEGlQDQ4PrTJp
         Or7tv36dEbF+ikv0Yv9FKjo7zEKQK0duRVhVDqJFZzCwkQIUxeovv9LiNxKvkwQpJyEB
         NlbeQKJ06CkRp4sbLdU4xCRibVD4iq54UaEM9ZchxAQ9pmiuOOF/Fp4kLMbvaydCcMhV
         NcV+rKKF32N30iOo9VIlzfT1QnHbAVFvl8mvf7EHVikqRWlHn9+aQ1iyo7rj7Q3qSAuv
         QRxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xw2fjJMCegprkjtpBWjKKtwQO+Lc1SNHRh05H+6IcWk=;
        b=PzmB2xhripFXcFWhEGRKbWUz3knQaw1bXpONLyJsIZgr7gO5bJLs64pm8YzjyJfn7M
         uf0tHG0Vvx/Rexy2FMySfSKVIJiMEbEiwx5iMhXNRj66lqBBG7Fgr8mVlOkeO0v1v8H8
         25WFPWO3qup8dFHc3BlCKPQZycnBf+ZukOQqfSj6gPCvQ5oa9lsyeFC8Wh708uTNb8mW
         fvaQijkrr6M+GXQ1LnOgPuL/rzERFkl3/BqBFIbmwVHDlEeWbugbkAetXV9C26QkUt/4
         +MRvnfh7XpsjiYsPaIsckGvaUbQa4lqaj+U+Z+ASkhebrKcasUukTo9/TR8E5/E44vTe
         f78A==
X-Gm-Message-State: AFqh2kpjzypgMMNj02yp7rhEgA80MYyTc9dM/ZkP/JMfZ2jcZdlhgFHM
	QYYP6UWaVST6fJiunQpiess=
X-Google-Smtp-Source: AMrXdXtOGTNbjH3A3XUDHxWC8R10zSMyAYt2FDPQRFBwCu6VlK9Tb4xRLCIxxecja0y+hqDRmg2czw==
X-Received: by 2002:a62:1c95:0:b0:583:3adc:baed with SMTP id c143-20020a621c95000000b005833adcbaedmr4356405pfc.8.1673024365828;
        Fri, 06 Jan 2023 08:59:25 -0800 (PST)
Date: Tue, 20 Dec 2022 06:23:07 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Message-ID: <Y6FUy/F0mbrvRP3e@bullseye>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>

On Fri, Jan 06, 2023 at 03:14:25PM +0200, Oleksii Kurochko wrote:
> The patch introduces the sbi_putchar() SBI call which is necessary
> to implement the initial early_printk
> 

I think that it might be wise to start off with an alternative to
sbi_putchar() since it is already planned for deprecation. I realize
that this will require rework, but it is almost guaranteed that
early_printk() will break on future SBI implementations if using this
SBI call. IIRC, Xen/ARM's early printk looked like a reasonable analogy
for how it could work on RISC-V, IMHO.

> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  xen/arch/riscv/Makefile          |  1 +
>  xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>  xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++
>  3 files changed, 79 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/sbi.h
>  create mode 100644 xen/arch/riscv/sbi.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 5a67a3f493..60db415654 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,5 +1,6 @@
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += setup.o
> +obj-y += sbi.o
>  
>  $(TARGET): $(TARGET)-syms
>  	$(OBJCOPY) -O binary -S $< $@
> diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
> new file mode 100644
> index 0000000000..34b53f8eaf
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/sbi.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> +/*
> + * Copyright (c) 2021 Vates SAS.
> + *
> + * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Taken/modified from Xvisor project with the following copyright:
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + */
> +
> +#ifndef __CPU_SBI_H__
> +#define __CPU_SBI_H__
> +
> +#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
> +
> +struct sbiret {
> +    long error;
> +    long value;
> +};
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
> +        unsigned long arg1, unsigned long arg2,
> +        unsigned long arg3, unsigned long arg4,
> +        unsigned long arg5);
> +
> +/**
> + * Writes given character to the console device.
> + *
> + * @param ch The data to be written to the console.
> + */
> +void sbi_console_putchar(int ch);
> +
> +#endif // __CPU_SBI_H__
> diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
> new file mode 100644
> index 0000000000..67cf5dd982
> --- /dev/null
> +++ b/xen/arch/riscv/sbi.c
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/*
> + * Taken and modified from the xvisor project with the copyright Copyright (c)
> + * 2019 Western Digital Corporation or its affiliates and author Anup Patel
> + * (anup.patel@wdc.com).
> + *
> + * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + * Copyright (c) 2021 Vates SAS.
> + */
> +
> +#include <xen/errno.h>
> +#include <asm/sbi.h>
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
> +            unsigned long arg1, unsigned long arg2,
> +            unsigned long arg3, unsigned long arg4,
> +            unsigned long arg5)
> +{
> +    struct sbiret ret;
> +    register unsigned long a0 asm ("a0") = arg0;
> +    register unsigned long a1 asm ("a1") = arg1;
> +    register unsigned long a2 asm ("a2") = arg2;
> +    register unsigned long a3 asm ("a3") = arg3;
> +    register unsigned long a4 asm ("a4") = arg4;
> +    register unsigned long a5 asm ("a5") = arg5;
> +    register unsigned long a6 asm ("a6") = fid;
> +    register unsigned long a7 asm ("a7") = ext;
> +
> +    asm volatile ("ecall"
> +              : "+r" (a0), "+r" (a1)
> +              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
> +              : "memory");
> +    ret.error = a0;
> +    ret.value = a1;
> +
> +    return ret;
> +}
> +
> +void sbi_console_putchar(int ch)
> +{
> +    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
> +}
> -- 
> 2.38.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 17:16:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 17:16:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472823.733199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqKl-0005iC-Cc; Fri, 06 Jan 2023 17:16:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472823.733199; Fri, 06 Jan 2023 17:16:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqKl-0005i5-9D; Fri, 06 Jan 2023 17:16:47 +0000
Received: by outflank-mailman (input) for mailman id 472823;
 Fri, 06 Jan 2023 17:16:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDqKk-0005hz-9H
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 17:16:46 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e3a47588-8de5-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 18:16:44 +0100 (CET)
Received: from mail-dm6nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jan 2023 12:16:36 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO3PR03MB6776.namprd03.prod.outlook.com (2603:10b6:303:164::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 17:16:31 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 17:16:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3a47588-8de5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673025404;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=gLRhIwINg8SKw4p5CaBpYacg5GUGCKOjTC1xEUkX0UQ=;
  b=Ytqk02QEqvY9ZJzT/a7QuTp3+j3s07WKSjMwX3THD+XLQh9XZzQBW+k5
   QFKdLI+KB5cjEMGuSn+ui7TtBX2/D0lOtLNR4va0XVnV6dFyNCVm+3ZHg
   mu73ehuA5P3BtHyWEOUOCB1wiBW8JhudISqM7MSrCklaXrIguF+hSFbHW
   c=;
X-IronPort-RemoteIP: 104.47.58.105
X-IronPort-MID: 90455449
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,305,1665460800"; 
   d="scan'208";a="90455449"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MxutUXdXhtsYDp/qqj4sO6WaRC5GSjPswhkGR2CC/FzLsy3hpfXYwUPecr6RZKgsP9UE/P/vpaLJ8HTfqxZWDVQEuanf1VzRIgBl4nX25koLiNBAIwRp8Nt+o9hzxh6OyznLDdr9JEZ5g+gEhBhxoucPui8yEkDXkivwLl3Dtu/rhHbsQYLgEydanGxALRqB6DHFhAwcUtGR6pj84vVv7W2iUZjwj+13Zl8qnfqDFpHbDJW841A93/vScLvIftLFulr5GxSrAjrVlC7WGEnfsAawbSPvitkHkDeY1u5qMTGaWSWEHcxKqA7gc7fid+XGHOjLtiWvQhN0yjBprFCTEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gLRhIwINg8SKw4p5CaBpYacg5GUGCKOjTC1xEUkX0UQ=;
 b=cQv5xgzXF5MSQCti70zoM3aDp4HMlB5szNkWLF3VrORWTneCRwXWikfz6EFNHkNzBlcwF6VXFDVWeVHYEBBSmoNjxMjvMw52oX5MXnpsuFTLSM9zZ+bYNNJ/SX7/d0/+ppMxHV9ESLpjG5A2EFIanlaPXeS0NiIq2AVugPRafKsRe86mblIwSFb11iZ9hxfVkmnXGFgd0a4W5uyw+sBJnOWBuzSM8Pkvq45UjT2JZsR1xNOIQnQ5OAEy9s2BwTfT+qJ2F1/kOykpJ3oguRMaYIXZyUV9X/oepcnfELzP67K8+g69FBz1vD2shZnoA4DTegv6ta+bgFN6b/yq6RrpIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gLRhIwINg8SKw4p5CaBpYacg5GUGCKOjTC1xEUkX0UQ=;
 b=AvWFFyy7w+a4lnwzce+FLOWzbc6UYbSujbJQKjL5AZ3eFd7hiT6TcnUxN614s8+q9wdOc5zeRsvd4LWRv079g4+qkk7JyAYJZaJF9Qg6H1r3B1Q7xHaNSQseKhGPXGDvclL1mlJhZP937GfSpzZD7vzWKw7pJJOMHzymaQgyGTM=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Bobby Eshleman <bobbyeshleman@gmail.com>, Oleksii Kurochko
	<oleksii.kurochko@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>, Connor Davis
	<connojdavis@gmail.com>
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Thread-Topic: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Thread-Index: AQHZIdDaR/Z8ZObOWUu8wVolwH1ydK52M2qAgBtuMAA=
Date: Fri, 6 Jan 2023 17:16:31 +0000
Message-ID: <320cc1b3-f03f-ad87-205f-d3c5db446f7d@citrix.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
 <Y6FUy/F0mbrvRP3e@bullseye>
In-Reply-To: <Y6FUy/F0mbrvRP3e@bullseye>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CO3PR03MB6776:EE_
x-ms-office365-filtering-correlation-id: f7a1615e-5147-44d3-3917-08daf009c13f
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <C38535D70A0BFF46BEE909E546DB8C91@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7a1615e-5147-44d3-3917-08daf009c13f
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Jan 2023 17:16:31.3239
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ae54frv2IRlJmslBHhoP5kW6SLZ5QpwmQfzyH2owvxOiyOmas7ZqX9yfwCY15J769g+7wEYNgz+xlQGFnWdg3uf80xR74sJUHJTG6DXhGEM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO3PR03MB6776

On 20/12/2022 6:23 am, Bobby Eshleman wrote:
> On Fri, Jan 06, 2023 at 03:14:25PM +0200, Oleksii Kurochko wrote:
>> The patch introduces the sbi_putchar() SBI call which is necessary
>> to implement the initial early_printk
>>
> I think that it might be wise to start off with an alternative to
> sbi_putchar() since it is already planned for deprecation. I realize
> that this will require rework, but it is almost guaranteed that
> early_printk() will break on future SBI implementations if using this
> SBI call. IIRC, Xen/ARM's early printk looked like a reasonable analogy
> for how it could work on RISC-V, IMHO.

Hmm, we're using it as a stopgap right now in CI so we can break
"upstream RISC-V support" into manageable chunks.

So perhaps we want to forgo plumbing sbi_putchar() into early_printk()
(to avoid giving the impression that this will stay around in the
future) and use sbi_putchar() directly for the hello world print.

Next, we focus on getting the real uart driver working along with real
printk (rather than focusing on exceptions which was going to be the
next task), and we can drop sbi_putchar() entirely.

Thoughts?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 17:24:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 17:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472830.733209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqS6-00079y-3E; Fri, 06 Jan 2023 17:24:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472830.733209; Fri, 06 Jan 2023 17:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqS6-00079r-0W; Fri, 06 Jan 2023 17:24:22 +0000
Received: by outflank-mailman (input) for mailman id 472830;
 Fri, 06 Jan 2023 17:24:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CnAh=5D=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1pDqS4-00079l-Bp
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 17:24:20 +0000
Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com
 [2607:f8b0:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f38434da-8de6-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 18:24:19 +0100 (CET)
Received: by mail-pl1-x62c.google.com with SMTP id d15so2305571pls.6
 for <xen-devel@lists.xenproject.org>; Fri, 06 Jan 2023 09:24:19 -0800 (PST)
Received: from localhost (c-73-164-155-12.hsd1.wa.comcast.net. [73.164.155.12])
 by smtp.gmail.com with ESMTPSA id
 e1-20020a170902784100b0017f74cab9eesm1198776pln.128.2023.01.06.09.24.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Jan 2023 09:24:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f38434da-8de6-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=S2NFXzGo9zbhDvyYpgSdnlq0/2Qb+GtlNnT0u3EMLTM=;
        b=DiSL6l2MsSAryJG7j0Z37LD17gArmltOyGPinT3NjqJv7lftzxLhm9IZXzrJkYHUDI
         BtipZeoVabBIAJAejC1MghunCM6HY4RIpjybcgIjhhZk0hEyQGcxke5XgPvKgpN50WOW
         QwjHYG3CU7gNCevU8jDJS3w7o5diAj8UP8u54p6Y5sP2Cpx+mVfLBgqk+yRdWWRb/5ty
         7NhWBQspkLacfubNTcOhegRsW2dpua7UpmDhM95Jqp1AF61AdjI06e9Sr/RTKf0mW2Gy
         bim3LBTTEsTWLMFQwf0M5I98KGza1Zq62zg9CQ4nkCs+vLs56M++PXc6NJb5cKN4lZxg
         5sAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=S2NFXzGo9zbhDvyYpgSdnlq0/2Qb+GtlNnT0u3EMLTM=;
        b=U8PjYgyICUeEjwoB598hlf4Q9JbWeBLAFpooXKPMJ/6TA7FvimBSqYmyMdck1wZxzm
         /oTiiPr2QQjnaOGMJFVKYj63tR5dlUE5n8NTF0xy4K/oZU2xBR2jMpNO4stQIMlLFTA8
         U5Vx0FW7PIDV+JyFFM8X4h7cTtCHmL9Uudkbd9Bo1vsJc/ML8xufgPzK1kNAnZxi4cBg
         9us/npu4v7HlvzLxyPRZrO1cR4CAttvRZstVktwXI6KscO4caSO4mbwppc9wgR2u7qHr
         p/uz87jVd+vHnz2C3m9NuI0Lgmi71T4wTwVI7nCjfmChJ72V7TEj7c0dg/0OcYnsyN+G
         DL2Q==
X-Gm-Message-State: AFqh2koqQPrJnukFeBZJL88bcPi8CKXaMhrUTSdHVH2VdvLjJzv90ipw
	w1nrvbPhYor8WquGKBEmvLk=
X-Google-Smtp-Source: AMrXdXsj7xV3yYdlyIF55ed/UL9hxZ2u0HQTV+FoImGHqTzFNiUJDr/MsyFpLaVnRy4S08wcgY0y8g==
X-Received: by 2002:a17:902:c643:b0:192:b6ca:200 with SMTP id s3-20020a170902c64300b00192b6ca0200mr22606994pls.3.1673025858004;
        Fri, 06 Jan 2023 09:24:18 -0800 (PST)
Date: Tue, 20 Dec 2022 06:50:04 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Message-ID: <Y6FbHB74Y6D3kvjH@bullseye>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
 <Y6FUy/F0mbrvRP3e@bullseye>
 <320cc1b3-f03f-ad87-205f-d3c5db446f7d@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <320cc1b3-f03f-ad87-205f-d3c5db446f7d@citrix.com>

On Fri, Jan 06, 2023 at 05:16:31PM +0000, Andrew Cooper wrote:
> On 20/12/2022 6:23 am, Bobby Eshleman wrote:
> > On Fri, Jan 06, 2023 at 03:14:25PM +0200, Oleksii Kurochko wrote:
> >> The patch introduces the sbi_putchar() SBI call which is necessary
> >> to implement the initial early_printk
> >>
> > I think that it might be wise to start off with an alternative to
> > sbi_putchar() since it is already planned for deprecation. I realize
> > that this will require rework, but it is almost guaranteed that
> > early_printk() will break on future SBI implementations if using this
> > SBI call. IIRC, Xen/ARM's early printk looked like a reasonable analogy
> > for how it could work on RISC-V, IMHO.
> 
> Hmm, we're using it as a stopgap right now in CI so we can break
> "upstream RISC-V support" into manageable chunks.
> 
> So perhaps we want to forgo plumbing sbi_putchar() into early_printk()
> (to avoid giving the impression that this will stay around in the
> future) and use sbi_putchar() directly for the hello world print.
> 
> Next, we focus on getting the real uart driver working along with real
> printk (rather than focusing on exceptions which was going to be the
> next task), and we can drop sbi_putchar() entirely.
> 
> Thoughts?
> 
> ~Andrew

That sounds like a reasonable approach to me.

Best,
Bobby


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 17:35:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 17:35:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472837.733221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqd5-0000DB-4N; Fri, 06 Jan 2023 17:35:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472837.733221; Fri, 06 Jan 2023 17:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqd5-0000D4-1S; Fri, 06 Jan 2023 17:35:43 +0000
Received: by outflank-mailman (input) for mailman id 472837;
 Fri, 06 Jan 2023 17:35:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XdSJ=5D=casper.srs.infradead.org=BATV+47733503048e2e898d5f+7075+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pDqd1-0000Cy-K3
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 17:35:41 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86f42156-8de8-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 18:35:38 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pDqct-00HMLI-DX; Fri, 06 Jan 2023 17:35:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86f42156-8de8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=j5SJjOVUWB6bIzEcIGazxSgaPhnqoJFI6j1SWoTN3y0=; b=eLneMP0+lyMm7EmYyw8wlZhQij
	nOzOoIcwVYy/a4kyBXMQ4Y2XzSwmxE8uorOKLivjNneYDuJLGOiqA/Di32Qlj9dP77EY/cjSg/yim
	3LgxAXqM4XA1lRdX/aCPKM74JUduIRgAwfQF6aZNVKPleRtG/PdMhRhLQfpx0pkohPaPKSCcdqwsH
	Q9TxkfDIgT8sl42HW2yXtiVlxHXjThdIBy9JU8f3upn48TO6iuLQUH8J6aetTIDDG+Tt7zTpsbp/g
	hU9acEpR/g8ouV9xDYk+g04QITRBiOIJPxPetqRxMX5C3Fl8WwPyCaErM8Nkoln1zQIJ2aSa4JhOn
	DLvSxipw==;
Message-ID: <b80c0bb26d8539899f53b91deb48f748e2495d23.camel@infradead.org>
Subject: Re: [PATCH v2 3/6] hw/isa/piix: Wire up Xen PCI IRQ handling
 outside of PIIX3
From: David Woodhouse <dwmw2@infradead.org>
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org, 
 =?ISO-8859-1?Q?Herv=E9?= Poussineau <hpoussin@reactos.org>, Aurelien Jarno
 <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Philippe
 =?ISO-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>, Chuck Zmudzinski
 <brchuckz@aol.com>
Date: Fri, 06 Jan 2023 17:35:18 +0000
In-Reply-To: <20230104144437.27479-4-shentey@gmail.com>
References: <20230104144437.27479-1-shentey@gmail.com>
	 <20230104144437.27479-4-shentey@gmail.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-1GDoUcmA+YFZPkTfypSI"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-1GDoUcmA+YFZPkTfypSI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Wed, 2023-01-04 at 15:44 +0100, Bernhard Beschow wrote:
> +        if (xen_enabled()) {

Could this perhaps be if (xen_mode != XEN_DISABLED) once we merge the
Xen-on-KVM series?



--=-1GDoUcmA+YFZPkTfypSI--


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 17:47:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 17:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472848.733240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqnx-0001qW-BW; Fri, 06 Jan 2023 17:46:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472848.733240; Fri, 06 Jan 2023 17:46:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqnx-0001qO-84; Fri, 06 Jan 2023 17:46:57 +0000
Received: by outflank-mailman (input) for mailman id 472848;
 Fri, 06 Jan 2023 17:46:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J7eG=5D=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDqnw-0001qI-Dy
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 17:46:56 +0000
Received: from sonic308-55.consmr.mail.gq1.yahoo.com
 (sonic308-55.consmr.mail.gq1.yahoo.com [98.137.68.31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 19e360e7-8dea-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 18:46:53 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic308.consmr.mail.gq1.yahoo.com with HTTP; Fri, 6 Jan 2023 17:46:50 +0000
Received: by hermes--production-bf1-5458f64d4-qthpt (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID abddd79a48058f79d1ffdd9f2f3866e9; 
 Fri, 06 Jan 2023 17:46:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19e360e7-8dea-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673027210; bh=e/yuR16Te9FloniNuhJpbWJIR1Zp8hbWNdHK0VdfjsM=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=sTq7fUsaVqnW1jpDYFOf3sgyWg2QJx9xgnlpXe+D9eVnbmEEi+/W/ZKjNh6wtVVn27n7ItlTh2pgoyTpyt5f1L+jBl9ncdfRNt+ljb71kvbY9C0tCroCr0wPcSGL4nEvRVOPUQMq+GhBXkR4ZfmXHouXNvCELOdPtp1cFPh8bCXsjuadMJNZQC+cZ/gRmO2aOMGIR9xIdNhJ/iEBgBl7vckp1zUL7GXZb4GMQrd3EsO/mJk69BOGOAUg3YaIZibynDgkwdNDPFA2woTQcORHKe4rkLZx4kCboOUa7no4W5tyRiS5qRdJ/g0mNp3ja5n1l9hmScQyhA3wzHEjdYDtVg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673027210; bh=X1JckgDvtHLNYcDAfFF2eP/1ckxVrDQy9LCUKex4cZy=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=lkktQxlD3qmu6y5M2KgUC3WCfgrEvBkPBNPD2lE4aEcYETsQbotlYBTE0ehfoDlgTlKKtgn7pdK4hYil8D6SEzKEGe75TxKvuw9g1fR+hFWItTNwQZPgIM2Sj7vt/uMYzUBdPsVYH4Dzu3UvRKmiSwdLVwo3tXJmHgArDMsYsxyGZBp8KXBxpRu1zTEUQ114h83rhcVzAXYeQJzvx63sTIHszgmbhUx7fBFoapr4/tc5pnJKLKQlN4/3waMt1GxLcbId6o1GaKihlV2vlGbt8AcF9t5perS6C+MNEkpuCeeOg+Ad3ZTb0HHRkOygY9p8eCA6RLYe0gZoi/VyGrPYbw==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <c328e499-0a52-e46d-f080-dbaa6b98cac0@aol.com>
Date: Fri, 6 Jan 2023 12:46:42 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 3/6] hw/isa/piix: Wire up Xen PCI IRQ handling outside
 of PIIX3
Content-Language: en-US
To: David Woodhouse <dwmw2@infradead.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-4-shentey@gmail.com>
 <b80c0bb26d8539899f53b91deb48f748e2495d23.camel@infradead.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <b80c0bb26d8539899f53b91deb48f748e2495d23.camel@infradead.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 428

On 1/6/23 12:35 PM, David Woodhouse wrote:
> On Wed, 2023-01-04 at 15:44 +0100, Bernhard Beschow wrote:
>> +        if (xen_enabled()) {
> 
> Could this perhaps be if (xen_mode != XEN_DISABLED) once we merge the
> Xen-on-KVM series?

I am not an expert, just here as a user/tester, but I think it
would depend on whether the Xen-on-KVM mode uses Xen's PCI IRQ
handling or the Linux/KVM PCI IRQ handling.

Chuck


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 17:48:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 17:48:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472855.733251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqpp-0002QK-NK; Fri, 06 Jan 2023 17:48:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472855.733251; Fri, 06 Jan 2023 17:48:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDqpp-0002QD-KL; Fri, 06 Jan 2023 17:48:53 +0000
Received: by outflank-mailman (input) for mailman id 472855;
 Fri, 06 Jan 2023 17:48:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDqpo-0002Q1-GL; Fri, 06 Jan 2023 17:48:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDqpo-00062i-E6; Fri, 06 Jan 2023 17:48:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDqpn-0000Zl-Va; Fri, 06 Jan 2023 17:48:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDqpn-0007Un-Ux; Fri, 06 Jan 2023 17:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Gtvfw6gDyHV+Mth32Vij04ibsiLuak7DdxstQZ1PlBI=; b=6cWqNQBxJZTPH+XxJda2D644mk
	3yuSkUTPvXi8LopwSJnpJ5vp6RDKa7AgwYTNvy4bS909dotr+684E0a7/bqxlXcJ79YfsavHUjYgz
	mWSsqdXFZlBhJv1mvLi1rmpEjWaPKsdmMMRCK00G9XONvZq5JO+tsVnbAuXkgKSsdKzg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175601-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175601: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
X-Osstest-Versions-That:
    xen=671f50ffab3329c5497208da89620322b9721a77
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 17:48:51 +0000

flight 175601 xen-unstable real [real]
flight 175607 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175601/
http://logs.test-lab.xenproject.org/osstest/logs/175607/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 175607-retest
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 175607-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175592
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175592
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175592
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175592
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175592
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175592
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175592
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175592
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175592
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175592
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175592
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175592
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2
baseline version:
 xen                  671f50ffab3329c5497208da89620322b9721a77

Last test of basis   175592  2023-01-05 19:38:50 Z    0 days
Testing same since   175601  2023-01-06 08:40:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   671f50ffab..2b21cbbb33  2b21cbbb339fb14414f357a6683b1df74c36fda2 -> master


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 18:22:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 18:22:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472864.733262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDrM0-0006ek-96; Fri, 06 Jan 2023 18:22:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472864.733262; Fri, 06 Jan 2023 18:22:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDrM0-0006ed-5n; Fri, 06 Jan 2023 18:22:08 +0000
Received: by outflank-mailman (input) for mailman id 472864;
 Fri, 06 Jan 2023 18:22:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sfve=5D=citrix.com=prvs=363380921=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pDrLy-0006eX-AK
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 18:22:06 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 030f4c88-8def-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 19:22:03 +0100 (CET)
Received: from mail-dm3nam02lp2046.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Jan 2023 13:21:52 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB6019.namprd03.prod.outlook.com (2603:10b6:610:be::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 6 Jan
 2023 18:21:45 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5944.019; Fri, 6 Jan 2023
 18:21:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 030f4c88-8def-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: LBR and Sapphire Rapids
Thread-Topic: LBR and Sapphire Rapids
Thread-Index: AQHZIfu71M1ca5PaBU6iVuTRvRgdgw==
Date: Fri, 6 Jan 2023 18:21:45 +0000
Message-ID: <3a80b974-1ddd-2063-863f-8aff3453d545@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-ID: <77325048A88F7B40B66F7C5B729569B1@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

Hello,

Testing has identified that VMs on SPR still crash when trying to turn
on LBR, and this is imminently going to cease being a "future" problem.

There is a series out about this, but there is some general confusion
creating mistakes, so I want to try and lay things out coherently here
in one go.

Right now (for Intel), the PDCM CPUID bit is hidden, HVM will #GP for
reads, while PV blindly returns 0.

The first time a vCPU tries to enable MSR_DBG_CTRL.LBR, we either set up
the MSR load/save lists for the LBR MSRs and de-intercept them, or we
crash the VM if we can't figure out what to do.

LBR MSRs are never preserved on migrate.  A VM that is migrated will (at
best) only see corruption of its data.  If it migrates between otherwise
identical systems that have differing hyperthread settings, it may find
that the LBR stack is a different size.  If it migrates to a system with
a different LBR format, then pretty much everything will explode.


Long term, we want to support Arch LBR, but we're a long way off being
able to do that.  In the meantime, we need to make VMs not crash.
IceLake (server at least, not sure about client) has both Arch LBR and
model-specific LBR.  Sapphire Rapids does not have model-specific LBR.

Also, we cannot advertise the PDCM bit until we've got MSRs properly
accounted for in the migration safety checks (which is still a work in
progress).

From a "not crashing on migrate" point of view, migration needs to be
blocked in any case where the LBR format changes (and other cases too).
Which also means that, by default, VMs want to be told "no model-specific
LBR".  But for backwards compatibility we also need a way for the user
to say "please let it still use model-specific LBR", and this can't be
an architectural CPUID bit.  (But I think it can be expressed as the
combination PDCM=1, format!=0x3f, ARCH_LBR=0.)


But that still doesn't help with SPR today.

On SPR, MSR_DBG_CTRL.LBR is a write-discard bit.  There really are no
model-specific LBRs, so we should emulate it as write-discard too.  More
generally, I think we should apply that to any system where we don't know
the model-specific indices.

I think this will be sufficient to avoid crashing guests on SPR.  Any
software actually expecting to use model-specific LBR would need a model
table anyway, just like Xen has, and will not get it updated with SPR's
model number, so for the (more) common case of not having migrated,
things should turn off cleanly.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 18:39:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 18:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472874.733272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDrcd-0008IH-QX; Fri, 06 Jan 2023 18:39:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472874.733272; Fri, 06 Jan 2023 18:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDrcd-0008IA-No; Fri, 06 Jan 2023 18:39:19 +0000
Received: by outflank-mailman (input) for mailman id 472874;
 Fri, 06 Jan 2023 18:39:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J7eG=5D=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDrcb-0008I4-3l
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 18:39:18 +0000
Received: from sonic304-24.consmr.mail.gq1.yahoo.com
 (sonic304-24.consmr.mail.gq1.yahoo.com [98.137.68.205])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 691ed5a2-8df1-11ed-b8d0-410ff93cb8f0;
 Fri, 06 Jan 2023 19:39:14 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic304.consmr.mail.gq1.yahoo.com with HTTP; Fri, 6 Jan 2023 18:39:10 +0000
Received: by hermes--production-bf1-5458f64d4-bl5tb (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 90dc30dca055f6fc7430c5ec161196a7; 
 Fri, 06 Jan 2023 18:39:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 691ed5a2-8df1-11ed-b8d0-410ff93cb8f0
X-YMail-OSG: xytN2Q8VM1mZBpk1MsZy3FNWjf3FsAF8Ppe6zrirAut4WLJjyyEBBSUn8m.wtF8
 9GG5GUBThPXGMjPVnLQt5ezUGjjcV_UjrQ7gsKJiuXuoxYF_h8qU9_UACINf7aSCqKmHEwsivGkF
 UIxtKQ5mp5kVCcgXq3nrKZa74S.vV_HWOTXH47qoKeccoOhIar4dQIY1jzZKQT_s5FSitFPEcBjP
 6JYmjFeSMerF_1QRJ0HHdseT38eQ8HyhiK4GHCtnhr0jSkfU8twtOiFNiKnRXhmJ_Y1pnrQkTLD0
 ZFDYQjubpbJxPTHy.ks95G3vBZBRYca7MciirZZo3sW390akVSlrZCkR5nYFrmI0vY_XVJs_CoDe
 PYBQCsGLOn1L1hN8pLFOVQn1uwvIjsEjy5.v3NncQ1ujB3RPCKAF.80FJoT0X.vEz2X1mSRq9qjs
 .01OncTLbiFnzOJzsLWaoWzwdGu2wuggoZJk3Polunqpk.NdksO4l9jU1SlxO8TuTPhSxat4Q6L.
 CA6alouZ6wDfUrtSf11EJrqx.O.8XFFCx15bx2isYbW7jYPdYDQVaK9r0zdrvnhdsAlqam18Z6gc
 oPxva6SReuQlsU13wZBsGwWQ6L5VtusOcElaF4lDg5SP5HgxsA_EjiUiiLCiaytuk0EL9oMna4xL
 RR4oUESW_UH.JI.dBswqCc0Acj.HKx4B.WK.pt1UWb4v318hA4vGtrvoeDZ1mif_5ozkMsaBrjwR
 IGfJmMyR8XJIGdtGqtAmK9eD1fys_d3wHLp0P7r7Q8f.DFUCUJF10YWSA8YeACgx6z_Cu8fbHTzd
 V7HLUTNzxbCCFYkqpivOyMpG6ZnBu9Rx0HhK4QPyQIhfDxnsdunt4WZ5miSCEHd8yITql6NgDeaG
 ZhXJKgxY2BzXCpFDFCgtpX627krdRq7yCzadlMabIryRMPVk7uh.4HwapC933iQAvHPlm22VSiRe
 TOCBbSbQ9Y7iwXBGIHvaM0uJItA5V1kSI02YLpZJxo2DdAhBP3F2Am0mtZG5ND9ZC7U7zToXf95Y
 O_xiZIgB0nJ4kXRwgrLUq_LAyL5Sdcz8BxakB.sL_uF.EG4hkGUnMXa1MT494TBsE.tj_cq7sIkF
 8avx_r6_fNRh_wVvziEk_VZQYvWpOQCq5j1nX8Rc_ezgQtdS8M.5PUSfFwj6CaRDpLrvbdra3bA3
 Lbkqgph9eZpqamha5w4yNTEZvotmxjY_5KTqLZDIuf0bjQA1ZOLaHkaW8wSpLtCcMJx8VZLTsC_v
 8tru_Z.TT7FXh9R6NX9LIzW6CzkJlZtk8AwVa7qYKOyBqwgZcRlAHxpdFBXRa7n9XyXS2GPHKNuW
 PLFBwuBgtu9PwTc2AhEL2DHDziRRoTAf2HTSPuxTpc9LoSBeMji72VGsnDZAHWvRwkflHnZXVf1w
 3omiMWoPu4QYbOPN0q.1jXLlojmVkfHesUH1YymYT1UECK9xsdcrlkEt_4dCaqKM_rKgpGUa.XeO
 6y3L1N8L34UsSLr6kDefrGAa_UhhagN7hYDztgBKskeC1Up7PW_A00rx.EO7plMA6tKv7deUqbAF
 uapikbTWHqhzjnK_oGXSgYRr4m2EZUfQJSQOKYFZ_tyb5KgmGZgPdzvlb3yOBeeZbxpvqmlsiFzU
 aET757Svb1gkbpsFgbLdiJdfFeh2JmFntMX7L6Xj81BRNV2NEBl_UKQBHOINZq7DwSJQP9q9oIaz
 ZL0uKDpaad0tTzseZQ2fpF1Z_tToF16FcZ85ss3Em6tffvzyISID87gjbze.Bhu6WKJ5V_cCwHjz
 nfiENnHf1uXhhTATVQQsmCB9B_S3uoz637.Z0KBYC5j_MrJ5qzjVR74NUTiqppLWYxjbeQx1Ho8s
 QYz_gE1o58zfBCfXeiO1u07xuZauxIMgEmsM5_FKaOutv8iLoiDIAjGkA.ITgWk8dgnKFltyDf8_
 vHRa_x08UxLf9mZLejf3mo6XBYVkglwAhXI..uBQT7udtiqaoKw1ubbH9g5mMjo05QwilOTQERT3
 rhytkKQK8gOmh7n00HN.OyQ59vgyXRze3ZEParQQd9bl3ewzJ6elrToo.FiJPVftGfs8r_k8kZ06
 YkF6bF7jnm1A7XtoCgjWVmgT_11UoWEAWxfUcNcIMFLYTUAkpW7ErAycbObYkaU3_V8BEh8xu129
 vvsXlW8yrMD3GvUiFyb1THvlUBLb76SU15sUzSmV_QT.RXVGa9oUbCW8Nw4ntooNpfoWTk_RBm8n
 DQ9k8N3mg4L8eh0Uly08a
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <3f7527da-2f0a-2d26-95f2-23776d0ab141@aol.com>
Date: Fri, 6 Jan 2023 13:39:05 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 3/6] hw/isa/piix: Wire up Xen PCI IRQ handling outside
 of PIIX3
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
To: David Woodhouse <dwmw2@infradead.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-4-shentey@gmail.com>
 <b80c0bb26d8539899f53b91deb48f748e2495d23.camel@infradead.org>
 <c328e499-0a52-e46d-f080-dbaa6b98cac0@aol.com>
In-Reply-To: <c328e499-0a52-e46d-f080-dbaa6b98cac0@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 1256

On 1/6/23 12:46 PM, Chuck Zmudzinski wrote:
> On 1/6/23 12:35 PM, David Woodhouse wrote:
>> On Wed, 2023-01-04 at 15:44 +0100, Bernhard Beschow wrote:
>>> +        if (xen_enabled()) {
>> 
>> Could this perhaps be if (xen_mode != XEN_DISABLED) once we merge the
>> Xen-on-KVM series?
> 
> I am not an expert and just on here as a user/tester, but I think it
> would depend on whether or not the Xen-on-KVM mode uses Xen PCI IRQ
> handling or Linux/KVM PCI IRQ handling.
> 
> Chuck

I could try Bernhard's patches in a Xen-on-KVM configuration if that might help.
I would need help configuring the Xen-on-KVM mode to do it, though, because
I have never tried Xen-on-KVM before.

The test I could do would be to:

1) Test my Xen guest that uses the current PIIX3-xen device in the
Xen-on-KVM environment and get that working.

2) Apply Bernhard's patch series and see what, if anything, needs to
be done to make it work in Xen-on-KVM. I have already verified that
the series works well with Xen on bare metal, so I don't think we
need to answer this question before going forward with it. Support
for Xen-on-KVM can be added later if necessary, as part of or in
addition to the Xen-on-KVM series.

Chuck


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 18:55:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 18:55:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472881.733284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDrrx-0002CH-4E; Fri, 06 Jan 2023 18:55:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472881.733284; Fri, 06 Jan 2023 18:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDrrx-0002CA-1R; Fri, 06 Jan 2023 18:55:09 +0000
Received: by outflank-mailman (input) for mailman id 472881;
 Fri, 06 Jan 2023 18:55:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDrrv-0002C0-Cw; Fri, 06 Jan 2023 18:55:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDrrv-0007fy-6G; Fri, 06 Jan 2023 18:55:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDrru-0002Xg-Qt; Fri, 06 Jan 2023 18:55:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDrru-0000gT-QK; Fri, 06 Jan 2023 18:55:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=btEX+Y9FURyODiBxP1JET2mJ3sTKOYHbkqKQ09Qv/18=; b=aPDbQOPZJfyw9MkdlR36AWH6bK
	U5DPXIgJSl5QuXQSytYo7xEK+vWJSCiTAjwXwAap7o34DiEqDR7Ux1BhY59OL8reIthQVfKsbwNMV
	4V32XYdKpaIfzrBqAWcgscqtHOzhXLpgYt92iEVg4RVzcsekof+Ueu8nFTGx5P+oVK7Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175606-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175606: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d8d829b89dababf763ab33b8cdd852b2830db3cf
X-Osstest-Versions-That:
    ovmf=5386c9e6dab2f6a555e679aff9f6a59f60c8e029
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 18:55:06 +0000

flight 175606 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175606/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d8d829b89dababf763ab33b8cdd852b2830db3cf
baseline version:
 ovmf                 5386c9e6dab2f6a555e679aff9f6a59f60c8e029

Last test of basis   175604  2023-01-06 13:12:23 Z    0 days
Testing same since   175606  2023-01-06 16:13:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  KasimX Liu <kasimx.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5386c9e6da..d8d829b89d  d8d829b89dababf763ab33b8cdd852b2830db3cf -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 19:08:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 19:08:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472890.733295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDs4b-0003kH-AB; Fri, 06 Jan 2023 19:08:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472890.733295; Fri, 06 Jan 2023 19:08:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDs4b-0003kA-77; Fri, 06 Jan 2023 19:08:13 +0000
Received: by outflank-mailman (input) for mailman id 472890;
 Fri, 06 Jan 2023 19:08:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J7eG=5D=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDs4a-0003k4-Ti
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 19:08:12 +0000
Received: from sonic310-21.consmr.mail.gq1.yahoo.com
 (sonic310-21.consmr.mail.gq1.yahoo.com [98.137.69.147])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 751bff22-8df5-11ed-91b6-6bf2151ebd3b;
 Fri, 06 Jan 2023 20:08:11 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic310.consmr.mail.gq1.yahoo.com with HTTP; Fri, 6 Jan 2023 19:08:08 +0000
Received: by hermes--production-bf1-5458f64d4-kq8fm (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 462c69a279a2e24d2afd89e95d1e005d; 
 Fri, 06 Jan 2023 19:08:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 751bff22-8df5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673032088; bh=bGPA4yzFOwZB3mapZnZRp7YkIuMge68eikWnzp8C73A=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=L7ms3qqlnqmOSxzPL4GJR/DSQWufryfaP8W6xkdRkr8E0itME7qaaojNIvJvdlwKN7rI87ITSOfPsxwlTCr2iFgaykM56PEwcxwtUvgBUrltH3i0G4RRXCQKm1m1wO+qwxZzeMxGyr/wgIrYJkI3J5fJi/D7mQDX2BhDheR+AjnPbN9B3vWLCXZ3NEkdWb/8rGuj9y+BqHT7+OyGweW8PKWgWDBBM2fv/knyZg4G07JubdyZsdVxXP2p88XoUX6w4I5GakjiziOGPOjy0o+vdVVDBN4Im1RYIBmRuKDYumjgLRiIclVeqp2309SfF/UZiTZj/AOeTi0sgOh9Iasg8A==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673032088; bh=W828f11+CRPmLujnXGHcpkkrKCf9aA7T0guyPLYK0YX=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=mgikH/Y55awpAMVoWqhvsi115jC2PtamCU5BtL15iF3FFG3QZPnN7Ymks0PuFn8kM4A71L9KYvuq2WyFUmQ5sEVUiGldgSulVCPxdnYgN0wMQ1DooTW8mGViPcZrbtrwupEsR7puMD0zFnkdSQpzIBxcbOFQrINFXxmNLGtxQVHovVTeppqU316rKTZ0CaPrA+KTgRzNtxWcFc8yZhVB/naBT7Qpydl/rgC0MA3f3Bim2ab45YOpzOR1XC7S+aTs1Z1ggUQWp5yhDC5r0mORLG4c3a41/hePdewsoOLZBFqiITUcI2EDUGO4GWXQWG8FTbm3yDy503LHNCA+Q1mgfg==
X-YMail-OSG: TPustp8VM1n2dfKzoFYbXatnxmVYHD5VeyB1Pji9J7P.xLMjLELOyAxIAzXR.2e
 RVnvJHEmjufCpvav5UjEplhwOImdMeob6iaAg79.UXL_jFjoSkQXGpCchi8eZeVb5NXVLJRSr094
 ofeTmcVjrUxAauhrjtTi5lQsoAsd7xWOYXvYbgwFYqwl7hbxzS4kx9qRGphIZKt544OKEWJxOGUs
 BNtBj8P10jOGgLZISdgJxIDT8p.GccYmQvVoiex7l6i0gCH7IHhEV4smtxt64yLApVSxrc0k1mvi
 WDGxjbd8iOsrMrmXz8TCP_guJkNPQ9HLZOkb.Hy5UwmcraW3807fC8lEDX9Fb72LmM9q0D55tNUR
 Xb5YdnSpZN1Tcf4XE0p_n9Z1pvYX0uELCM6.TbwWCxOiVjyKivkOVP8OhpqAzz_WiJJpIJZzCHeS
 f6Slsxwx2aXseNZakxS_.cjGh5mapfJv07c0O2bg3BFduPkh7zY.ZsjVbduAJ0IgrHADwOdhkz.H
 hMxfuupb9eBqFirHKckw5jeKwnRzYbkrmRMSFeEGM3fnnonFBkPF0oUk5QzQACyr1YII61umIbm0
 A3jLiWmg4KzBSBqQvRSqyJOfziqPKDEv3mOe7BMmUW89eMtePhCMPeKPN5.8OM47WOcmDov9kEuj
 klInieFhbudKyJmtZ6qgG4pDxNS0E.An0oKc.mNsR9cchxgG4WQHqLcEiXJkSBCkIcmzR25vVPcG
 Ky.Y9S0pkhbELfEKTghh7NWHyV7lJ9cI6805KBv.T6FUdwlesvA_kVa7erbpxlpcT5nbMCoGF6ff
 G2f2mr7k80GYkwYwJqxVGyC6bJIteIWSwmPVk9.xYpwX5KwkpRBQkb6JRnUkxIZFumzg1dkIuUBa
 TG_6Z80tv0jfkTZM3Osr1kmsjTDKNwEe5s.w7yyevjk4Uh2_Duhg5FefQ6Y6.jMGviYmCqHWhaHP
 uoYFB5FKmh.oStx0g0X.Rf9Yhnw1HHeV1.09pX1ENk9GokQdmUgHfs_RFzUcaYw161fsgIVw2Iew
 9Ku.IgDBCJ1HNgfbtWkQP_J7hjqSeYOPODlVjAeCX_hTwhCKF4OpApCz59wxHDvy2I7AFYUP4vDb
 Iod3.suW.QvtBCFrwdBEqBVZjOsi1tQgpAh2mKMJAbmcrEhfzInPNdY2BoAmHrcclwE1CSDJgSvF
 VbFNRsEGaZ_NLNuEdNKP8nwj7ViVfyt9qB.3DzyhJYLT8mJMcf5k3E_5xd_apwfsRyGGHYPqRF7z
 nvdVY2m40FgJumaHROGyrWOlKkbTC4yghypKhtqGHE1DjMm55rcC7RWwsbAkDbnJDLRVy_NXNpB2
 bPkNc58AJAR9i2Sl67clenKqFnDcDRhG_EBqbWpN57GBY16VjO6j2TBCA.OxNUvhga.xIwZxT1Fm
 02kX.ttrniZodyzpDAs._uwC9UYVuKqxME_PgEay.Cl5mwD4fUK1LhmPD.gX1S.b5dnLtZcTvOJ2
 bd2S5o6KEO4cv.o6FQR72hO9sTG0Akt9hzy01bJqLcwJK8YfxhEUcVh7oiiPPdwTy5R6bk2F_5PU
 1Gb9pxAE6G77ZpeQJHSr31NFAk5oGSz20mLmSALvJmYaMxT3Jw8hg_Rt6HGX6HTLCyuMdQQBzfmy
 3WOqbM9mDTVXG6BI3gA1JsMFzqD6xf7AuD9vM.7f0J8wKMKgthlRqNGSs3jIgHMFFoR2iwROgCgX
 XlOYlT8OlQ7l9O.o7FbN0pNmUlKinzDML.dILkYVnTb8i1ivp4UkLx1f5tCA4WhFFHiMISy2hn4P
 1N.mjdMX1SIWsLewN7XMV5aNwyQHgnqqCaj9H.XrpF_oLPI7hQ29BDNN1uKt9ks7KWU9qduyZRfp
 gxTqQUB2ck8kYejXb7KGiD4FdhF2qPyXoHiZnxzxH6xV43tk9_BIO_fY21iyRsKC4Lapevmdbpch
 ufTx6gJ_3hqZpIPxDxJyAt73ErICChmmL7naMyiOx7sTcipoNq01OSnP.2tLDXGh86vatQC.b8vs
 J5tPsPzEime6sDYIkhIPz6cII.iWoRv7TmzOC28LJHHP_.4vTaKpBINBP24lSf5qiic7WwgODs2.
 BfCKfyvaw6JfwY0KZkC4.rTAJR4PZkZ1tIc8mAuoqgFggA9WMQOtcM5jsE2Qrf63zTZ06sO.EbiB
 51GwfSWeZJxBb.zVKeYNGd82MyAqs7GH5ah_MKfDsKWUQ6OXZQqWRZcQbp_nt3Z3ovHHBNEZ1JHK
 MOfdPK46i_M9ZMJsBVVyA
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <d9e2f616-d3bf-fc6c-2dc5-a0bf53148632@aol.com>
Date: Fri, 6 Jan 2023 14:08:03 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <B207F213-3B7B-4E0A-A87E-DE53CD351647@gmail.com>
 <6a1a6ed8-568d-c08b-91a7-1093a2b25929@linaro.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <6a1a6ed8-568d-c08b-91a7-1093a2b25929@linaro.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 2137

On 1/6/23 7:25 AM, Philippe Mathieu-Daudé wrote:
> On 6/1/23 12:57, Bernhard Beschow wrote:
>> 
>> 
>> Am 4. Januar 2023 15:35:33 UTC schrieb "Philippe Mathieu-Daudé" <philmd@linaro.org>:
>>> +Markus/Thomas
>>>
>>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>>
>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>>> ---
>>>>    hw/i386/pc_piix.c             |  4 +---
>>>>    hw/isa/piix.c                 | 20 --------------------
>>>>    include/hw/southbridge/piix.h |  1 -
>>>>    3 files changed, 1 insertion(+), 24 deletions(-)
> 
> 
>>>>    -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>>> -{
>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>> -
>>>> -    k->realize = piix3_realize;
>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>>> -    dc->vmsd = &vmstate_piix3;
>>>
>>> IIUC, since this device is user-creatable, we can't simply remove it
>>> without going thru the deprecation process.
>> 
>> AFAICS this device is actually not user-creatable since dc->user_creatable is set to false once in the base class. I think it is safe to remove the Xen class unless there are ABI issues.
> Great news!

I don't know if this means the device is user-creatable:

chuckz@bullseye:~$ qemu-system-i386 -device piix3-ide-xen,help
piix3-ide-xen options:
  addr=<int32>           - Slot and optional function number, example: 06.0 or 06 (default: -1)
  failover_pair_id=<str>
  multifunction=<bool>   - on/off (default: false)
  rombar=<uint32>        -  (default: 1)
  romfile=<str>
  x-pcie-extcap-init=<bool> - on/off (default: true)
  x-pcie-lnksta-dllla=<bool> - on/off (default: true)

Today I am running qemu-5.2 on Debian 11, so this output is for
qemu 5.2, and that version of qemu has a piix3-ide-xen device.
Is that the same device that is being removed? If so, it seems to
me that, at least as of qemu 5.2, the device was user-creatable.

Chuck


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 21:50:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 21:50:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472897.733306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDubq-00032M-Pz; Fri, 06 Jan 2023 21:50:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472897.733306; Fri, 06 Jan 2023 21:50:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDubq-00032F-Ml; Fri, 06 Jan 2023 21:50:42 +0000
Received: by outflank-mailman (input) for mailman id 472897;
 Fri, 06 Jan 2023 21:50:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDubp-000325-7j; Fri, 06 Jan 2023 21:50:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDubp-0003SU-4k; Fri, 06 Jan 2023 21:50:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDubo-0006ea-2N; Fri, 06 Jan 2023 21:50:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDubo-0005Vz-1h; Fri, 06 Jan 2023 21:50:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h6zTPsL5iplcreE3c6x0M3u7gsvUdIpxd8P7eb2GRRU=; b=LWTJnOfXqwfzfIEiIMGIGoIysV
	bQ4ba8P0+6AtT+OIO89jgO0azmqiyZByYNzisUWg/lg1Hyep8nEIIwbK1vma0JYIZcK3VRVmmTTJe
	zCOFhIODGkGNQJqovOx1LmfVfDDOP5bhrc81eDmifLFAagIO4HTc3ncEpH1pxxo45Gtc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175603-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175603: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d365cb0b9d14eb562ce85d3acfe36e8aad13df3f
X-Osstest-Versions-That:
    qemuu=d1852caab131ea898134fdcea8c14bc2ee75fbe9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Jan 2023 21:50:40 +0000

flight 175603 qemu-mainline real [real]
flight 175608 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175603/
http://logs.test-lab.xenproject.org/osstest/logs/175608/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 175595

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175595
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175595
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175595
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175595
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175595
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175595
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175595
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175595
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d365cb0b9d14eb562ce85d3acfe36e8aad13df3f
baseline version:
 qemuu                d1852caab131ea898134fdcea8c14bc2ee75fbe9

Last test of basis   175595  2023-01-06 01:55:04 Z    0 days
Testing same since   175603  2023-01-06 12:10:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Axel Heider <axel.heider@hensoldt.net>
  Claudio Fontana <cfontana@suse.de>
  Fabiano Rosas <farosas@suse.de>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stephen Longfield <slongfield@google.com>
  Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
  Zhuojia Shen <chaosdefinition@hotmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 631 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 06 23:04:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Jan 2023 23:04:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472908.733320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDvlB-0001o8-Fj; Fri, 06 Jan 2023 23:04:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472908.733320; Fri, 06 Jan 2023 23:04:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDvlB-0001o1-BK; Fri, 06 Jan 2023 23:04:25 +0000
Received: by outflank-mailman (input) for mailman id 472908;
 Fri, 06 Jan 2023 23:04:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J7eG=5D=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDvl9-0001nv-BK
 for xen-devel@lists.xenproject.org; Fri, 06 Jan 2023 23:04:23 +0000
Received: from sonic309-21.consmr.mail.gq1.yahoo.com
 (sonic309-21.consmr.mail.gq1.yahoo.com [98.137.65.147])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 72c0f768-8e16-11ed-91b6-6bf2151ebd3b;
 Sat, 07 Jan 2023 00:04:20 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic309.consmr.mail.gq1.yahoo.com with HTTP; Fri, 6 Jan 2023 23:04:17 +0000
Received: by hermes--production-bf1-5458f64d4-s52fb (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 73550f2f3ad1d326fe382ae3bf119149; 
 Fri, 06 Jan 2023 23:04:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72c0f768-8e16-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673046257; bh=5EDETl/Bn40ksYDdHxFQPm8HxH4mlDcmh7uTqfmAZxw=; h=Date:Subject:From:To:Cc:References:In-Reply-To:From:Subject:Reply-To; b=uUot1KD7m/rROeXRXcAneoVytEcn/YyvQzAOgceGEzlR5k3liW6unrbm3A686MBXXFJgkJae9qamIQpPVf/ZbpPhw1XIiG2aCA3YhZ9m53OgKMU15PfVBG5y8t2RHn3JnAy1bqj/Fz7d3ri5OKfPAeNWraQEiOaPbFV/kAAKLNdh/b0+F3VKrYc2yiS63riATrVqRxoSw0zWkQtKilHjfIhLMonWJd+bMDUASt7L95qJwm5cqIrQEHGGdtaqQeLH5NtgFZJVX4+VPhM+IvOA2PL/mTVWT/4RQIvLNoSNPXKpX6H7TwfTJv06JR9M+MPcNYXXxZAYiNWUKoBWk5Q9ww==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673046257; bh=H8pCqt53rijduXGA/ATbxaUuckY3oh3NtPO5R2CHj6a=; h=X-Sonic-MF:Date:Subject:From:To:From:Subject; b=rWOpOIPg20sdwrwQWuAaLDFreFKshWlhfOfGWduypNVPkV94j/o/m4wIRKiNi30MP3aH492/+3x6lY/wqtVzhvwwrGpGsqMerYJuiFw93tA1BjOXetVaM54UqRS95PDhNR99PqXznrgLvsnDWpDdBNlti8S7ozZ0Rr1KdV43F+ZVtUULpq7lGV1k+BjB+BPjVORO5urC88/12GC9R4TJwcaVfXe0DZQ9yaFUgzcdK0a9LkuYFOa/a8VrqZlvaatQin+lkmmrRxnmzpyc1CSIlrKbZQbA9xlj2j0KH3gW2RchBCzmrbL3SdDXsjf2QDhp/yHCOH1qGgmfzPg3AW57Xw==
X-YMail-OSG: w5.iMUIVM1l0eqGZQfenJKq0f3WWOoVU1G_PwzmLYayjTCdr5p_EL6nBnjFJgqa
 rDA7tjOPnqDGyMB94mV.fxfplHns2lCUBZRAg8osRBZaM.WwdM7Jpd2Ss2YHoI1IzlfFKvkSy9K4
 A4wvuHYlJQ73AyvAB_n7XKjVl1OevIpQ2Jh5NZfPhnwzPm_0pa6FiOxyTJ8AJ.jB6vChfIqZm.Qp
 O7vaX.ubl.X9Ses4q082Eg3okm1J9MbLtOmM2h_nCh9s_T2mupXc.4ujZXmpheVkpqQwc5vOM5RH
 hYzPObcAS_v9RyuDTDDP6V869VHrJ6JBuT2g430TxIYovuuYa51an7ridQK22eo.Cv5npxcSrUw3
 t4aOq_rcExrMjreScMYx0BFY6ccJYcAvrPq_ZBOwhgbl4NNikE.LIVWleYqi1fTmRbePh2t7NflP
 rDIprbzCihTdFc6X6h8KOVSRgKsPLNGidtawWu315IfQYrWGuhDcWDxt4zRSGY8MaLUcpXZUWAgv
 Kjfpqv8DXkpotVssAulyhrZKXGwI9KKONjuFD0uQmsQhcZeAeKXeJhxrcRXYucQDiqjgctOw.NZh
 wrSLgVUJOeUTmHncgyi4tm7nxb2FEKA47ZhppgoDGFRJhy5463d5QWtxzh1larSaiexRTpLgY4WM
 PQ_AMwXD8fDggq.XlnjWSLGg4_IFChEKzg7YgRtqj0GA_i0ey8kHTglvAqUg1Wn2KU2JWfSvQTyc
 iYroyzWpfvItkGx2ZeH6_XXbVQpUDAqsP2BdSVnt4fJ.Ct7RKwH4Jq1K_iby9zjIj7ZJsYfhCdFW
 2xsRWJyxKeebG3.QIgtJARqrFmxdIzJRk0wdoW1rNwz1WHKEiVsN.7dQpSgUHdpuwZ4YaefdlVvZ
 SXVzqQLOevHZjrJ_gFI873PK.JHKFyMWfUxkdfVQLol73f93DNkSbrl0ClCDXATsMgp5_BCxhfGK
 h_4w0916KfWFkOf5kGaddnM7V_bAttxjCBOz9vq2EDHsHU0jK6AP76Q7a9_2cR3E.xRGk9fXCCaB
 bXGzu3q2i748rbm3mwcWvPJicS5DKi.MInmquPmkYkvSYy7Zd7S9FrHKY0tqzvI1Xe.VjycytL80
 JdrkkB8vAIX_DL0YcIbF4NnDIlL.phu5kO1j6_MOjG1nf7OWhSoesUyognQYge_u76YpQ7DJKcOo
 kMDI8SK8g5gpw_pAMMpafG8tod8yDCD6XjJ0YlzHDr7iicYthWcQSXdedOtf9ooBYZsJqNZw3v9A
 wgJu45w40Lo_0RdkXsL4L1FzOaHqwXZqcbC4aXuOs8NfrmhB_RpvVKsx0Lhev1nNdIQPKS32X7dJ
 NgiKhA5b3L0HsteXprIGKyLxlEd6mP5_nnkL2qgVF9qE.vshx0c1wKTKsH2aCbGu03c._oqPoTWG
 GrBS_UK7A_U2XDU63.fq4CbsF57b0jbJH44rbNwZSMWXh5ER.PCa0p3U6ErCCgAhhFeX7qE_4vas
 QcRM3ZdRkXEZDXU8.LEfUQWoxKJetCOQF6.o43qJQEy8iBaLH09jz7uCZnOtd_dOHakcAzcjLXV8
 XzQQbMnf5pxEhdFi7n0.cHixRpESkdUGX5dPa2_ofEaP1qcKrXkKqH6zj3xrlBSpqzn5EwVW04eb
 67avXN5m_RVI0FUJgls0aXxF1MD.fRBhu1Pkpm5U7MERwU00XJircEYOTIdqueXUdGtHr2YolXWO
 _CiiVuEpVSJPJZ.c6QtdXUH3dO._5kAQ5MIHPcYI3FY0FFxCVmNBoEGFJxsOhymy5uUwPOtTLQG5
 vOJVOBFy.yCuvtyaCJQN6Uj.47AdTOZd22p0MMIlCDhw1anAeTPIivksbwTX5ri8DWgYjzi6prX_
 xyAVQ3neeiCvUJdxvXnjgspq9SFj6hcQgnAE8X5kT7icqm9tUiordUmXgm6jL.lK1P9S0au2YAzE
 QVP1vSX4wK9IOwjRb1HkZyTTxEYdrmJVoT59Kf1NrXRRic8.Au8JAsflprYiPFuo.c2U4C.QwnBu
 fB94511G7U.deu.EI4asowZUUXhuGvxg.iznHm2NlxOzJOIoL70.9BworJTQiO1QOiKgFheTN.dZ
 dQCiNXbk4A6YcpGHPviTW_PfSj2lLn589dJ4w2Zvmpl_ceC5aCmKFiQ1Uc6dQpujyp5VZZDRFEvr
 fCP_ZZG4cxQuNzlTavIuLKnFwfVA5NbqyH4Arl9fvPCX1aqtMAfgZccXJQnYZ9iMeFE4YTaEkkPO
 YkZxbL8J7RGaOGNUGZuU-
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <30337c62-a938-61c8-3ae5-092dbccf6302@aol.com>
Date: Fri, 6 Jan 2023 18:04:13 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
From: Chuck Zmudzinski <brchuckz@aol.com>
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <B207F213-3B7B-4E0A-A87E-DE53CD351647@gmail.com>
 <6a1a6ed8-568d-c08b-91a7-1093a2b25929@linaro.org>
 <d9e2f616-d3bf-fc6c-2dc5-a0bf53148632@aol.com>
Content-Language: en-US
In-Reply-To: <d9e2f616-d3bf-fc6c-2dc5-a0bf53148632@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 2881

On 1/6/23 2:08 PM, Chuck Zmudzinski wrote:
> On 1/6/23 7:25 AM, Philippe Mathieu-Daudé wrote:
>> On 6/1/23 12:57, Bernhard Beschow wrote:
>>> 
>>> 
>>> Am 4. Januar 2023 15:35:33 UTC schrieb "Philippe Mathieu-Daudé" <philmd@linaro.org>:
>>>> +Markus/Thomas
>>>>
>>>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>>>
>>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>>>> ---
>>>>>    hw/i386/pc_piix.c             |  4 +---
>>>>>    hw/isa/piix.c                 | 20 --------------------
>>>>>    include/hw/southbridge/piix.h |  1 -
>>>>>    3 files changed, 1 insertion(+), 24 deletions(-)
>> 
>> 
>>>>>    -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>>>> -{
>>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>>> -
>>>>> -    k->realize = piix3_realize;
>>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>>>> -    dc->vmsd = &vmstate_piix3;
>>>>
>>>> IIUC, since this device is user-creatable, we can't simply remove it
>>>> without going thru the deprecation process.
>>> 
>>> AFAICS this device is actually not user-creatable since dc->user_creatable is set to false once in the base class. I think it is safe to remove the Xen class unless there are ABI issues.
>> Great news!
> 
> I don't know if this means the device is user-creatable:
> 
> chuckz@bullseye:~$ qemu-system-i386 -device piix3-ide-xen,help
> piix3-ide-xen options:
>   addr=<int32>           - Slot and optional function number, example: 06.0 or 06 (default: -1)
>   failover_pair_id=<str>
>   multifunction=<bool>   - on/off (default: false)
>   rombar=<uint32>        -  (default: 1)
>   romfile=<str>
>   x-pcie-extcap-init=<bool> - on/off (default: true)
>   x-pcie-lnksta-dllla=<bool> - on/off (default: true)
> 
> Today I am running qemu-5.2 on Debian 11, so this output is for
> qemu 5.2, and that version of qemu has a piix3-ide-xen device.
> Is that this same device that is being removed? If so, it seems to
> me that at least as of qemu 5.2, the device was user-creatable.
> 
> Chuck

Good news! It looks like the device has been removed since version 5.2, so it is no longer user-creatable:

chuckz@debian:~$ qemu-system-i386-7.50 -device help | grep piix
name "piix3-usb-uhci", bus PCI
name "piix4-usb-uhci", bus PCI
name "piix3-ide", bus PCI
name "piix4-ide", bus PCI
chuckz@debian:~$ qemu-system-i386-7.50-bernhard-v2 -device help | grep piix
name "piix3-usb-uhci", bus PCI
name "piix4-usb-uhci", bus PCI
name "piix3-ide", bus PCI
name "piix4-ide", bus PCI
chuckz@debian:~$

The piix3-ide-xen device is not present, either with or without Bernhard's patches,
in current qemu 7.50, the development version for qemu 8.0.

Cheers,

Chuck


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 01:09:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 01:09:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472917.733337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDxhj-0007mQ-To; Sat, 07 Jan 2023 01:08:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472917.733337; Sat, 07 Jan 2023 01:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDxhj-0007mJ-QU; Sat, 07 Jan 2023 01:08:59 +0000
Received: by outflank-mailman (input) for mailman id 472917;
 Sat, 07 Jan 2023 01:08:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QMsY=5E=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pDxhh-0007m8-Ta
 for xen-devel@lists.xenproject.org; Sat, 07 Jan 2023 01:08:58 +0000
Received: from sonic314-20.consmr.mail.gq1.yahoo.com
 (sonic314-20.consmr.mail.gq1.yahoo.com [98.137.69.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d989aa9b-8e27-11ed-b8d0-410ff93cb8f0;
 Sat, 07 Jan 2023 02:08:53 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Sat, 7 Jan 2023 01:08:51 +0000
Received: by hermes--production-ne1-7b69748c4d-g8q5j (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 34031321042cfd49358cba9e2ea3354a; 
 Sat, 07 Jan 2023 01:08:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d989aa9b-8e27-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673053731; bh=RM3OQ5SqcJacBinVYA5gdpDbq4pClsj3VLZOZHffPJQ=; h=Date:Subject:From:To:Cc:References:In-Reply-To:From:Subject:Reply-To; b=dq2vESZ00s4dsw8quVaRQcy0BZg2MMAe1up1U3Xu4SS+hChXr+soZoOJHNf72JNnUmRqIjZt7CPd2Wj4o16s1l3suRFko0+555Qm/GJQUZh2HjjSazpYQJ1/LAUbMkGqx/X3S0Vyo76UZOVcJAThz9ZyUTQGW1cI3dr9adL6PpCIvQC4ggbq00a0lnAVdZKtotM38ldmUgdci/qQaNVTEtphM6mYYxBNpEQu2sg4TsDl7SV6bqpWlskF008qPLCTG639brYeH26TDogN/Em/C+B1hg8vg7mu9Nxi7FakyqhUVtyHuVLLMEmF8PRhctor2tNjooWG5+wGk8BYxg9dyA==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673053731; bh=4KfaWLub71iH+DxRqHsT1oed9V4ykxAI+tvjzwupahF=; h=X-Sonic-MF:Date:Subject:From:To:From:Subject; b=dIzK4EfUtagvwewOlyDYJzaHBeOE1dd6FxKhqKRJHVS1XT7ETQpsMvLowXWgauUyCvsxFlGI6+6e8MFwk/TI50wNq1ZXzeXmDSV/iCDRkf3wY9R3KKl6vHknczVpZ0s4J74pn+B4GpUDjRbb/pQPBuYODrX4Gtq5FGvTUCaRcHaVL/T0BFDK4uTWqCteGzMw3NVAyCMYvuK4EkIMlCfg42OU0hB8vIClBrgKzep1tnt1/wpkZXo5zWhj2invPMg/vpZkWiLuudUfdQrRyCGTCvxybLAI5mkDjaDwBr5OoGLZgIAYWtBgUlRLVi3XqnuWQWdT2HWXS9uSzMvKnTIY9g==
X-YMail-OSG: gR.45zEVM1llY6rhNVg5om7pz3Lkij.kCFZX_Kf25.wujBsPXCa1drYl9VuuuST
 YImZ77FlHMRfjIC1BGeqBxNRH7W2cztj2fMryxh1TrjLPKpqzStbkiTq.hAWlA60D_OK2aaB2H3K
 GQpHFXEbv_ZycZ4fsAl_bs6hf2RMs7uMzOnIwKScFbuh8uPFHt6Q8Q9TeP6ZiXV3TEDcES0CeW4S
 fKbH27Wltwp5WU3JxJ38ID_VEk4N9JCos4IpLgX4tMRhi7VAu4gnujuSOTdpE0wbSHjrMDpp02ng
 .CBmPYzGveJWvlhmuTyo12Bb0urBmQTv9ri0hiQBW7RMYkGiyJFWic7jjALj0obWu4jBVV5x_tmu
 lXIK6AvTTwSL1K4HwFUw5G3Qt9pvUYIT7D4scKSqI8Bp5BXSaC3Bg63AjTqkRKiH1Auoe7Tmafmt
 rPma.QIljJk2Xifb4BCnXGXIQE2TfH2jyZs6JYnIC2omcMLTFuzhdLIwg6iVyzyD0GMv2HpABup9
 hhls2QFdnGr4w5SvNKcUR1fKdYwb9x.cGFVjGPrHOJbwlJnEjqJ.3vjXJRhfJRN9EyP9oUsCHj4Y
 5e6eWr7KZdShlzd3M44mBNe_WKhNC36jzzaEwOWEXdoedNWAcE_4GSPbHFj0vQHYXINASFj2P2Bf
 89w3EZvwOLctiVKy0DlINKGn5Phb391wMG7OGYrTBulQzHjZlFaoGvbK6b5Kxsp0Anv3Y2hpyOtq
 otM_OqibOXCPToBMd4gw2ZdfX3N_CsSUI9fx1emlXlr9bXDOnAet7Fgf2m0m4B14ZpFBFHNIC1NG
 9p5Kra2XccUh6EaSc_UYsGRQZH1T.bdO_zFJdRlOF1RE2o6GaSuKYDuNUYRlRn_WSZWf5QbQDCK1
 nzUuCJXeVkNfLQ_Klg5x_Hbs65CPm93SxWJh7_7tESbDRtAQk78slvLnIjA780llDOYgQ2AYMCox
 a90nymUlZw35b2LJItaQ7GczJMGKffoZJTrYrmTfIyCDdK0FpWQigQ3zcTi7XJVRLBSggobCiLMa
 U_G._NxtXd1N96SLaRrVw4kC7Gx67e9.Ft8uuwNKP2R5tzHceuO799VezVkW1KO2VydGrIalChkd
 u23.VU1wj2v_1B6edEJesBz_geqdMmsj5mhBRWoR21uYIMcn7YEyzSNA7TSZNVNTXqFeFPP_nedR
 1KGzYcLklAbq0r7rb6MEvj64NaZXJK1NWaPg20O9sDLhkE9Q0UiyGfFmDN29jfIzA.WDcRRbnfxX
 oCnR8gpRN5BaOigpo823oPwNolA1hbQINhvseJYJNEt.zaBcRPA4mY1etTDo4CBRem.ebisB38CG
 UJpAZ.T2Eer6GG1T6O3vJ06TwxDDy1s_OQaDSknq8krl1ny1njz8L3wLlCaJU8L5uCc.b20SXTCu
 QErbEW0hCmlo5zZsOfisDKfc8qat0ifJda6jrddbx7u75idAVMrN.vvf_0Ve8D2PG5advZg1wr4.
 lOWkqxV9IBWo3uO2nWq3webNXgS7K3Uq61Xy4yOhWgtVvR.FmB66JFz9ppo6WeZaaw6IHU7C1cm8
 zRGTErBUkx.Y24QPJWA7x8eI5JB4xr9tf9jPf7U52P1PcHab5_XrmjpNwZxKTmDhXpGgTstEvIoy
 E8oXy_NEBSLLWK5EPdBluUMiRVGDi61rUdnP7LFarcokxhBa0kAjdEtLOFXFqHQ0zA6oeKaEyYXX
 vHTOyE_6y_q3bYQzPL7p2ZfelWIVyqglLXgj25iK4N3ULo6tuowMO4YrJOs3DAir0XtgpfjasbCo
 NLCDLeY6.mWLYdSDitC_3jYBUYxkB5Gb6vl8gZ2Dq1V1eciYauuLzHVU7mZABDJStVjnSkZ2FUO0
 ZaN.UZZDfEN_a.neb34EGqqOX_pb6j6EhLe7bny3xvMiRGmt7jWbdRedQPqMZ65LoErRHSLoqvaa
 hyIqXMnIUsJZFEF1Qbwmf57ZbINnAIB4rgiN0Gm7l04pzGNaGy1I1Wn5E5fUlI5ahJjhBmWjhQ_p
 GHq22mn6lmS6MAgwfiJIgEklqCtf7y44im619ABPT8PHX0LzFJ327Zl2E2FudwOrV_FzqOk17y8w
 BZXqo.YCW1jF50HKpvBIFepTKvzDe2hgn4IrnNoeDMy3kjDhCm3K80ab87O2e1HrnDjxoCerLOe3
 E7_FDFwJAh16fA6KrQSKTbSM3sb4rcflYoDfrZa_VyVoRpIg8.ZGdQl.FCRg9Tf7CpzhMGLAEARJ
 zmXpMh4QfV3SA.BWDwKOCeQ--
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <00ff4875-62e0-05c8-a13e-5a52d4195cc2@aol.com>
Date: Fri, 6 Jan 2023 20:08:46 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
From: Chuck Zmudzinski <brchuckz@aol.com>
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <B207F213-3B7B-4E0A-A87E-DE53CD351647@gmail.com>
 <6a1a6ed8-568d-c08b-91a7-1093a2b25929@linaro.org>
 <d9e2f616-d3bf-fc6c-2dc5-a0bf53148632@aol.com>
 <30337c62-a938-61c8-3ae5-092dbccf6302@aol.com>
Content-Language: en-US
In-Reply-To: <30337c62-a938-61c8-3ae5-092dbccf6302@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3627

On 1/6/23 6:04 PM, Chuck Zmudzinski wrote:
> On 1/6/23 2:08 PM, Chuck Zmudzinski wrote:
>> On 1/6/23 7:25 AM, Philippe Mathieu-Daudé wrote:
>>> On 6/1/23 12:57, Bernhard Beschow wrote:
>>>> 
>>>> 
>>>> Am 4. Januar 2023 15:35:33 UTC schrieb "Philippe Mathieu-Daudé" <philmd@linaro.org>:
>>>>> +Markus/Thomas
>>>>>
>>>>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>>>>
>>>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>>>>> ---
>>>>>>    hw/i386/pc_piix.c             |  4 +---
>>>>>>    hw/isa/piix.c                 | 20 --------------------
>>>>>>    include/hw/southbridge/piix.h |  1 -
>>>>>>    3 files changed, 1 insertion(+), 24 deletions(-)
>>> 
>>> 
>>>>>>    -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>>>>> -{
>>>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>>>> -
>>>>>> -    k->realize = piix3_realize;
>>>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>>>>> -    dc->vmsd = &vmstate_piix3;
>>>>>
>>>>> IIUC, since this device is user-creatable, we can't simply remove it
>>>>> without going thru the deprecation process.
>>>> 
>>>> AFAICS this device is actually not user-creatable since dc->user_creatable is set to false once in the base class. I think it is safe to remove the Xen class unless there are ABI issues.
>>> Great news!
>> 
>> I don't know if this means the device is user-creatable:
>> 
>> chuckz@bullseye:~$ qemu-system-i386 -device piix3-ide-xen,help
>> piix3-ide-xen options:
>>   addr=<int32>           - Slot and optional function number, example: 06.0 or 06 (default: -1)
>>   failover_pair_id=<str>
>>   multifunction=<bool>   - on/off (default: false)
>>   rombar=<uint32>        -  (default: 1)
>>   romfile=<str>
>>   x-pcie-extcap-init=<bool> - on/off (default: true)
>>   x-pcie-lnksta-dllla=<bool> - on/off (default: true)
>> 
>> Today I am running qemu-5.2 on Debian 11, so this output is for
>> qemu 5.2, and that version of qemu has a piix3-ide-xen device.
>> Is that this same device that is being removed? If so, it seems to
>> me that at least as of qemu 5.2, the device was user-creatable.
>> 
>> Chuck
> 
> Good news! It looks like the device has been removed since version 5.2, so it is no longer user-creatable:
> 
> chuckz@debian:~$ qemu-system-i386-7.50 -device help | grep piix
> name "piix3-usb-uhci", bus PCI
> name "piix4-usb-uhci", bus PCI
> name "piix3-ide", bus PCI
> name "piix4-ide", bus PCI
> chuckz@debian:~$ qemu-system-i386-7.50-bernhard-v2 -device help | grep piix
> name "piix3-usb-uhci", bus PCI
> name "piix4-usb-uhci", bus PCI
> name "piix3-ide", bus PCI
> name "piix4-ide", bus PCI
> chuckz@debian:~$
> 
> The piix3-ide-xen device is not present, either with or without Bernhard's patches,
> in current qemu 7.50, the development version for qemu 8.0.
> 
> Cheers,
> 
> Chuck


I traced where the piix3-ide-xen device was removed:

It was 7851b21a81 (hw/ide/piix: Remove redundant "piix3-ide-xen" device class)

https://gitlab.com/qemu-project/qemu/-/commit/7851b21a8192750adecbcf6e8780a20de5891ad6

about six months ago, between 7.0 and 7.1. So the device being removed here is
definitely not user-creatable, but it appears that the piix3-ide-xen device
removed between 7.0 and 7.1 was user-creatable. Does that one need to go through
the deprecation process? Or, since no one has complained that it is gone, maybe
we don't need to worry about it?

Cheers,

Chuck


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 02:20:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 02:20:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472924.733348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDyoz-0007gU-3K; Sat, 07 Jan 2023 02:20:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472924.733348; Sat, 07 Jan 2023 02:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDyoy-0007gN-V3; Sat, 07 Jan 2023 02:20:32 +0000
Received: by outflank-mailman (input) for mailman id 472924;
 Sat, 07 Jan 2023 02:20:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDyoy-0007fx-6p; Sat, 07 Jan 2023 02:20:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDyoy-0004Cl-4R; Sat, 07 Jan 2023 02:20:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pDyox-0000QX-FT; Sat, 07 Jan 2023 02:20:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pDyox-0003Mw-E6; Sat, 07 Jan 2023 02:20:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=isJGOt6dK2DTswjT/fAFqEH7SItEwJJE2GVNAf5yLM8=; b=hSOKNGRE2TRnIOElklDjawRC/K
	ZrwRxEixIkQyK+h/pxBf3S2l/1JvNtB05kJtsBSV+gytaUpMtVDOSlbwStHfhd0q6k3V90uK2Ycqg
	PCKolLWH0WOMC/1xLynNdeUQLP0jHVKAxM/YGfmDWtCuBobgYy896JHVmHnYEGo8e96U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175605-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175605: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1f5abbd77e2c1787e74b7c2caffac97def78ba52
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Jan 2023 02:20:31 +0000

flight 175605 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175605/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                1f5abbd77e2c1787e74b7c2caffac97def78ba52
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   91 days
Failing since        173470  2022-10-08 06:21:34 Z   90 days  189 attempts
Testing same since   175599  2023-01-06 07:05:33 Z    0 days    2 attempts

------------------------------------------------------------
3293 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 501982 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 02:23:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 02:23:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472934.733359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDyrQ-0008LZ-K7; Sat, 07 Jan 2023 02:23:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472934.733359; Sat, 07 Jan 2023 02:23:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pDyrQ-0008LS-Gn; Sat, 07 Jan 2023 02:23:04 +0000
Received: by outflank-mailman (input) for mailman id 472934;
 Sat, 07 Jan 2023 02:23:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NfDG=5E=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pDyrO-0008LG-Q0
 for xen-devel@lists.xenproject.org; Sat, 07 Jan 2023 02:23:02 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2077.outbound.protection.outlook.com [40.107.8.77])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34b0843e-8e32-11ed-91b6-6bf2151ebd3b;
 Sat, 07 Jan 2023 03:23:00 +0100 (CET)
Received: from AS9PR06CA0780.eurprd06.prod.outlook.com (2603:10a6:20b:484::35)
 by DU2PR08MB10187.eurprd08.prod.outlook.com (2603:10a6:10:46d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.7; Sat, 7 Jan
 2023 02:22:57 +0000
Received: from AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:484:cafe::91) by AS9PR06CA0780.outlook.office365.com
 (2603:10a6:20b:484::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Sat, 7 Jan 2023 02:22:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT032.mail.protection.outlook.com (100.127.140.65) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.14 via Frontend Transport; Sat, 7 Jan 2023 02:22:56 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Sat, 07 Jan 2023 02:22:56 +0000
Received: from d27c0665e07f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C4C22D3F-99C7-4350-BC67-73FC8926E703.1; 
 Sat, 07 Jan 2023 02:22:46 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d27c0665e07f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 07 Jan 2023 02:22:46 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB8994.eurprd08.prod.outlook.com (2603:10a6:20b:5b3::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.9; Sat, 7 Jan
 2023 02:22:43 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%4]) with mapi id 15.20.5986.007; Sat, 7 Jan 2023
 02:22:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34b0843e-8e32-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Coaca47qk/N0rae4zzA1XwE4RCQ6I/ay8ADLcscw32E=;
 b=VsaGRwJvXJFxwkRCGFCGYDVFjQu5FdAxmZiEwd8ZRLLHkHSaKQQglwQn4OV8E5OCcCcrKA/IzVK1Fe18/XMnZvFK/IXAMeLB58maLDLcMreD3/5i1MlvAvzl6Ku6GCOKw/WLmucEBKQFucmyZWIn7Mopej3vEodGf7a3fiCG5Z8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cC9oGbMlyeECtaoiA/sxan9Rpy03LjTF+hP89PiRyZuhDHWOkQPoSksFa3z0SzzjAajHYbY2DVCIN3SapmMDOuf0dGml2rivc5F1yc//vuTZ03LtQp8HjTXweKqvFU6DXnrMMijZhi9NriIm/IuD3K2l0SCBTZpqlcQN5sc+0s9n4M+3Bb6ePSJT+aIVwpQj9SLba4SuCjSJ2yrpIvHJX8W58J1IsLhXvOFavFTj6wC3uVYuwYpVRMTL02Q2oO6qLvp0gTrm5IqQOFWi1aGid1Gi4tDzOoJPDf0c6pKxWquVscmZwx6lBkck7AHlyuhya94oRGa7IEVTLtGYbg6L4Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Coaca47qk/N0rae4zzA1XwE4RCQ6I/ay8ADLcscw32E=;
 b=BuMTaCce4+5ReNlA4XyGcL3rD6h1ALsTl7sKO8m6oOSwvxWQ5kCxKLphFQUKVnSzxQxNbO/D9sJpusyJL8EVChGT4A1OohqnRvi/5C7Mb6bi0P02iFvJIKUfqVCRvCv0gh/Q9qtpsBZaLi3GnZ8xhCK7m+qdOSIwZ2DvInUl1x7+m7Qn9b+YVSZrLdaKW79RBKZ8vj4BWZegDqhL5HT6aiZqGvChV0A9Bh9n/ctPZLdYqvQEuYce24m20050ht5Pp55ZjxWOlxPoRoEJbPzrOxx44vJdob60Rj1rIcG0U3VkpIyZ+1z7pqQc7ukxk6y/IJd4qogRG+409l+Htgs87w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Coaca47qk/N0rae4zzA1XwE4RCQ6I/ay8ADLcscw32E=;
 b=VsaGRwJvXJFxwkRCGFCGYDVFjQu5FdAxmZiEwd8ZRLLHkHSaKQQglwQn4OV8E5OCcCcrKA/IzVK1Fe18/XMnZvFK/IXAMeLB58maLDLcMreD3/5i1MlvAvzl6Ku6GCOKw/WLmucEBKQFucmyZWIn7Mopej3vEodGf7a3fiCG5Z8=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
Thread-Topic: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
Thread-Index: AQHZEUh12RKmuw645EafAVdnfeFYOK6RkuHggAAV9gCAALFqIA==
Date: Sat, 7 Jan 2023 02:22:42 +0000
Message-ID:
 <AS8PR08MB79915285C38B78C7F4BA039692F89@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-21-julien@xen.org>
 <AS8PR08MB7991125F288492FFD81F02BA92FB9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <9c9099e2-dd7a-db37-4d0c-38f1dd8d3e48@xen.org>
In-Reply-To: <9c9099e2-dd7a-db37-4d0c-38f1dd8d3e48@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 671A181F4710F64494B838F57762C79B.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB8994:EE_|AM7EUR03FT032:EE_|DU2PR08MB10187:EE_
X-MS-Office365-Filtering-Correlation-Id: 065b479d-d2e0-4b80-1e7d-08daf05616b0
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8994
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5710ed0f-43c6-44d8-b70f-08daf0560e48
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jan 2023 02:22:56.4216
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 065b479d-d2e0-4b80-1e7d-08daf05616b0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB10187

Hi Julien,

> -----Original Message-----
> Subject: Re: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
> 
> Hi Henry,
> 
> >> Subject: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
> >>
> >> From: Julien Grall <jgrall@amazon.com>
> >>
> >> At the moment, on Arm64, every pCPU are sharing the same page-tables.
> >
> > Nit: s/every pCPU are/ every pCPU is/
> 
> I will fix it.

Thank you.

> 
> >> +bool init_domheap_mappings(unsigned int cpu);
> >
> > I wonder if we can make this function "__init" as IIRC this function is only
> > used at Xen boot time, but since the original init_domheap_mappings()
> > is not "__init" anyway so this is not a strong argument.
> 
> While this is not yet supported on Xen on Arm, CPUs can be
> onlined/offlined at runtime. So you want to keep init_domheap_mappings()
> around.

This is a very good point. I agree the pCPU online/offline is affected by the
"__init" so leaving the function without the "__init" like what we are doing
now is a good idea.

> 
> We could consider to provide a new attribute that will be match __init
> if hotplug is supported otherwise it would be a NOP. But I don't think
> this is related to this series (most of the function used for bringup
> are not in __init).

Agreed.

> 
> >> +static inline bool init_domheap_mappings(unsigned int cpu)
> >
> > (and also here)
> >
> > Either you agree with above "__init" comment or not:
> > Reviewed-by: Henry Wang <Henry.Wang@arm.com>
> 
> Thanks!

No problem. To avoid confusion, my reviewed-by tag still holds.

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 03:51:45 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175610-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175610: regressions - trouble: broken/fail/pass
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Jan 2023 03:51:31 +0000

flight 175610 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175610/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pair           <job status>                 broken
 test-amd64-amd64-pair       6 host-install/src_host(6) broken REGR. vs. 175595
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 175595
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 175595

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175595
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175595
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175595
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175595
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175595
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175595
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175595
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175595
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                171033e8dbac356f9a84c2e7cc8556a4eb0a1359
baseline version:
 qemuu                d1852caab131ea898134fdcea8c14bc2ee75fbe9

Last test of basis   175595  2023-01-06 01:55:04 Z    1 days
Failing since        175603  2023-01-06 12:10:38 Z    0 days    2 attempts
Testing same since   175610  2023-01-06 22:08:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Di Federico <ale@rev.ng>
  Alex Bennée <alex.bennee@linaro.org>
  Axel Heider <axel.heider@hensoldt.net>
  Claudio Fontana <cfontana@suse.de>
  Fabiano Rosas <farosas@suse.de>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Marco Liebel <quic_mliebel@quicinc.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mukilan Thiyagarajan <quic_mthiyaga@quicinc.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stephen Longfield <slongfield@google.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
  Zhuojia Shen <chaosdefinition@hotmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-pair broken
broken-step test-amd64-amd64-pair host-install/src_host(6)

Not pushing.

(No revision log; it would be 820 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 08:44:23 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175612-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175612: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
X-Osstest-Versions-That:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Jan 2023 08:44:07 +0000

flight 175612 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175612/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 175601 pass in 175612
 test-amd64-i386-xl-shadow     7 xen-install                fail pass in 175601
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat fail pass in 175601

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175601
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175601
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175601
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175601
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175601
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175601
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 175601
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175601
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175601
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175601
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175601
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175601
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175601
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2
baseline version:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2

Last test of basis   175612  2023-01-07 01:52:11 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jan 07 10:59:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 10:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472965.733394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pE6uS-00083F-E2; Sat, 07 Jan 2023 10:58:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472965.733394; Sat, 07 Jan 2023 10:58:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pE6uS-000838-BI; Sat, 07 Jan 2023 10:58:44 +0000
Received: by outflank-mailman (input) for mailman id 472965;
 Sat, 07 Jan 2023 10:58:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s8zH=5E=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pE6uQ-000832-Aq
 for xen-devel@lists.xenproject.org; Sat, 07 Jan 2023 10:58:42 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3e1d6bce-8e7a-11ed-91b6-6bf2151ebd3b;
 Sat, 07 Jan 2023 11:58:40 +0100 (CET)
Received: by mail-ej1-x632.google.com with SMTP id qk9so8872544ejc.3
 for <xen-devel@lists.xenproject.org>; Sat, 07 Jan 2023 02:58:40 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb200f901f42a62c21174.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:f901:f42a:62c2:1174])
 by smtp.gmail.com with ESMTPSA id
 gc22-20020a1709072b1600b007c127e1511dsm1312119ejc.220.2023.01.07.02.58.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 07 Jan 2023 02:58:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e1d6bce-8e7a-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ojbJNP36CvrRWo22zSn32zK2y/pXsV+Mmkw5xGIK2Pg=;
        b=hEtdA4nME3eNcpjNgQ3F3dCNDNRy3K91tNQWFFKz8eYVdhYNamrTCdnub5/2uK22E+
         otuEYgoHszVpnjT9ZZToFJ8dMPn51ocD2GU6W1XXPFU8K1W7fdCPCVQfr5x49H1hccF5
         Zpi7ZlFeLP38+URxOL0BU6WLqVOb3aoNbhZnQLJYmldB8fDj8Uaz+0Pd2ApDbvpHbkqE
         3k4WJogU4GwnqqrjLd2PO98f2IJw9TdGJaUWtJEQykq7lt3ujr4QNbgZ70Rn+cyqeji4
         o/eX4vLZShAc+aj/A2DxCCcQIka9T3jPGFmN6I7Q9cwgwDhjA48MfPZbdF0CvnneLS8r
         H3iA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ojbJNP36CvrRWo22zSn32zK2y/pXsV+Mmkw5xGIK2Pg=;
        b=jO3w1IFBlK/bMO2QDgTD/eT/GHzf4aOcXlpe7a9M693d2gKRDJUDQEq1JN6HHfkZum
         ZRFHU9EP9AKzswLYZK17QTBv9txH1vAYuoHl+o6Y6IxGuqaKdat5EIDU2LZo4BJnyKuK
         83TMggy26xNpa8LEpwf1f/nK9uEGY4DAZLY9lmO2tJZkMtLv+PI/mnEP58ra4YssGRN3
         1vgHStSyCGmI7PcSriU8L2K/MK12kezkccbuhM3Lp+M5iPFZRY7am7neKTV5BCXDXH6y
         QB28dRnQphrGQQ7/qxiGjWO6XBRHkmTUx/F2PInl0cPcyLCfe4rQ34vGo4uZSaVE8oXh
         QHbg==
X-Gm-Message-State: AFqh2kqRdEcNnaJWw+RBhydkpZ6bxyDLOPgDbNKLRJrN54GmyFwwZLfB
	6+JZEwhXwPI+hk/AJ46Io/I=
X-Google-Smtp-Source: AMrXdXuVhiLxEGrharK3dUjrwBfNTxBrdm5SiG2nGWJW4zBv8CB5mvtoicCvREcZjQt+b9feh78vcw==
X-Received: by 2002:a17:906:4a0c:b0:7c1:3018:73b6 with SMTP id w12-20020a1709064a0c00b007c1301873b6mr49497559eju.61.1673089119432;
        Sat, 07 Jan 2023 02:58:39 -0800 (PST)
Date: Sat, 07 Jan 2023 10:58:29 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: David Woodhouse <dwmw2@infradead.org>, qemu-devel@nongnu.org
CC: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Hervé Poussineau <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 Chuck Zmudzinski <brchuckz@aol.com>
Subject: Re: [PATCH v2 3/6] hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
In-Reply-To: <b80c0bb26d8539899f53b91deb48f748e2495d23.camel@infradead.org>
References: <20230104144437.27479-1-shentey@gmail.com> <20230104144437.27479-4-shentey@gmail.com> <b80c0bb26d8539899f53b91deb48f748e2495d23.camel@infradead.org>
Message-ID: <B71743F3-1C6A-4A18-A9A2-AEB252BE25CE@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On January 6, 2023 17:35:18 UTC, David Woodhouse <dwmw2@infradead.org> wrote:
>On Wed, 2023-01-04 at 15:44 +0100, Bernhard Beschow wrote:
>> +        if (xen_enabled()) {
>
>Could this perhaps be if (xen_mode != XEN_DISABLED) once we merge the
>Xen-on-KVM series?

It's the same condition which selected TYPE_PIIX3_XEN_DEVICE until the last patch of this series. If you had to change this condition in your series, then just perform the same change here instead.

Best regards,
Bernhard


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 11:05:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 11:05:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472972.733405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pE713-00015O-3m; Sat, 07 Jan 2023 11:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472972.733405; Sat, 07 Jan 2023 11:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pE713-00015H-12; Sat, 07 Jan 2023 11:05:33 +0000
Received: by outflank-mailman (input) for mailman id 472972;
 Sat, 07 Jan 2023 11:05:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s8zH=5E=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pE710-00015B-UR
 for xen-devel@lists.xenproject.org; Sat, 07 Jan 2023 11:05:31 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32360c36-8e7b-11ed-91b6-6bf2151ebd3b;
 Sat, 07 Jan 2023 12:05:29 +0100 (CET)
Received: by mail-ej1-x635.google.com with SMTP id t17so8899430eju.1
 for <xen-devel@lists.xenproject.org>; Sat, 07 Jan 2023 03:05:29 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb200f901f42a62c21174.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:f901:f42a:62c2:1174])
 by smtp.gmail.com with ESMTPSA id
 kw16-20020a170907771000b007adf2e4c6f7sm1310121ejc.195.2023.01.07.03.05.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 07 Jan 2023 03:05:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32360c36-8e7b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=p8znfIc5uTd1Ou5iVaebPJV/Az3nHisy/UQuipCCwyg=;
        b=pqpQ25EjgFzgKfoRnFjp4IMjSfLb5H/CQhHcoDR5YRXx0SOr1BzR8t8EpmRE81BL+y
         0fiilZA4Bk9IEa39roFI0tL6WUtpsoPVtYCWDLw8uJeHLCd6s6UJjmcBXZdkkHISzxap
         wEyq9X0AuXWbDNZ2hti48n52SfyGxB00FYO+iOM62mykAeC/pMmJHfLhEcdihyxpPYeA
         jLuSjCv0+dzzl0tBD/9xKOCdERm1P10utZIxHMAU2EbzzTeICPQSfJWPrBjt8RjsuVRu
         xRfWnTSjBzUPeFB0HyrfDh9bESZN0jwLXfIiZ9p9Qr/eoO0qypGzTEAaOqbPevWY5MrS
         UWOA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=p8znfIc5uTd1Ou5iVaebPJV/Az3nHisy/UQuipCCwyg=;
        b=xf/fdf7lPeLHZwrjtFzOLPlB/Q++Gf36eSuCxXu1a7+/zmQ4G7pmTUEmyJUyJq9OTG
         XhCat0KqfTZ+sjhUmnKpv3Bi+KuRu1GseyhPeUql2Es/OQ8JFyP1WEJK/efTREQylxGq
         /gIufvok9XKlOvgVC8vYf1l04Fkb84e8zdEv5NcRNs1UrumZT0oM4aOow6RffbjxVQ6k
         IS8iDpULQSQxP79zqcpjxXPr7cU2tzDSOLM/UE0AqDUKNq9E2A8fwtrjykjstkrEANrD
         LhvWKqEB+p9ZnaoDOEBk/fwOfY6dyCBF3YAi8wVA3LabmvyIYctUDwbdfY8jUhVYDedY
         5+Hg==
X-Gm-Message-State: AFqh2kriU6inNswU4Ctm4AE4eiHGDG+oAiTqhzQTq5r+ttT6qvZKAhQX
	rfmsz9pkCtvfpLKLKPPYxuY=
X-Google-Smtp-Source: AMrXdXu3EvFsoaT/hHNtPJ6LgsuygrkUQxHjYjz32DZvgWWafo+QGIRF9hiULkUfK0ut69Re1Wwjvw==
X-Received: by 2002:a17:906:883:b0:84d:134a:2076 with SMTP id n3-20020a170906088300b0084d134a2076mr7280855eje.44.1673089528882;
        Sat, 07 Jan 2023 03:05:28 -0800 (PST)
Date: Sat, 07 Jan 2023 11:05:20 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Hervé Poussineau <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <00ff4875-62e0-05c8-a13e-5a52d4195cc2@aol.com>
References: <20230104144437.27479-1-shentey@gmail.com> <20230104144437.27479-7-shentey@gmail.com> <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org> <B207F213-3B7B-4E0A-A87E-DE53CD351647@gmail.com> <6a1a6ed8-568d-c08b-91a7-1093a2b25929@linaro.org> <d9e2f616-d3bf-fc6c-2dc5-a0bf53148632@aol.com> <30337c62-a938-61c8-3ae5-092dbccf6302@aol.com> <00ff4875-62e0-05c8-a13e-5a52d4195cc2@aol.com>
Message-ID: <01A7F932-0DF1-4977-A111-0907A7FC6FF9@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On January 7, 2023 01:08:46 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>On 1/6/23 6:04 PM, Chuck Zmudzinski wrote:
>> On 1/6/23 2:08 PM, Chuck Zmudzinski wrote:
>>> On 1/6/23 7:25 AM, Philippe Mathieu-Daudé wrote:
>>>> On 6/1/23 12:57, Bernhard Beschow wrote:
>>>>>
>>>>>
>>>>> On January 4, 2023 15:35:33 UTC, "Philippe Mathieu-Daudé" <philmd@linaro.org> wrote:
>>>>>> +Markus/Thomas
>>>>>>
>>>>>> On 4/1/23 15:44, Bernhard Beschow wrote:
>>>>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
>>>>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
>>>>>>>
>>>>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>>>>>> ---
>>>>>>>    hw/i386/pc_piix.c             |  4 +---
>>>>>>>    hw/isa/piix.c                 | 20 --------------------
>>>>>>>    include/hw/southbridge/piix.h |  1 -
>>>>>>>    3 files changed, 1 insertion(+), 24 deletions(-)
>>>>
>>>>
>>>>>>>    -static void piix3_xen_class_init(ObjectClass *klass, void *data)
>>>>>>> -{
>>>>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
>>>>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>>>>> -
>>>>>>> -    k->realize = piix3_realize;
>>>>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
>>>>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
>>>>>>> -    dc->vmsd = &vmstate_piix3;
>>>>>>
>>>>>> IIUC, since this device is user-creatable, we can't simply remove it
>>>>>> without going thru the deprecation process.
>>>>>
>>>>> AFAICS this device is actually not user-creatable since dc->user_creatable is set to false once in the base class. I think it is safe to remove the Xen class unless there are ABI issues.
>>>> Great news!
>>>
>>> I don't know if this means the device is user-creatable:
>>>
>>> chuckz@bullseye:~$ qemu-system-i386 -device piix3-ide-xen,help
>>> piix3-ide-xen options:
>>>   addr=<int32>           - Slot and optional function number, example: 06.0 or 06 (default: -1)
>>>   failover_pair_id=<str>
>>>   multifunction=<bool>   - on/off (default: false)
>>>   rombar=<uint32>        -  (default: 1)
>>>   romfile=<str>
>>>   x-pcie-extcap-init=<bool> - on/off (default: true)
>>>   x-pcie-lnksta-dllla=<bool> - on/off (default: true)
>>>
>>> Today I am running qemu-5.2 on Debian 11, so this output is for
>>> qemu 5.2, and that version of qemu has a piix3-ide-xen device.
>>> Is that the same device that is being removed? If so, it seems to
>>> me that at least as of qemu 5.2, the device was user-creatable.
>>>
>>> Chuck
>>
>> Good news! It looks like the device was removed as user-creatable since version 5.2:
>>
>> chuckz@debian:~$ qemu-system-i386-7.50 -device help | grep piix
>> name "piix3-usb-uhci", bus PCI
>> name "piix4-usb-uhci", bus PCI
>> name "piix3-ide", bus PCI
>> name "piix4-ide", bus PCI
>> chuckz@debian:~$ qemu-system-i386-7.50-bernhard-v2 -device help | grep piix
>> name "piix3-usb-uhci", bus PCI
>> name "piix4-usb-uhci", bus PCI
>> name "piix3-ide", bus PCI
>> name "piix4-ide", bus PCI
>> chuckz@debian:~$
>>
>> The piix3-ide-xen device is not present either with or without Bernhard's patches
>> for current qemu 7.50, the development version for qemu 8.0.
>>
>> Cheers,
>>
>> Chuck
>
>
>I traced where the piix3-ide-xen device was removed:
>
>It was 7851b21a81 (hw/ide/piix: Remove redundant "piix3-ide-xen" device class)
>
>https://gitlab.com/qemu-project/qemu/-/commit/7851b21a8192750adecbcf6e8780a20de5891ad6
>
>about six months ago. That was between 7.0 and 7.1. So the device being removed
>here is definitely not user-creatable, but it appears that this piix3-ide-xen
>device that was removed between 7.0 and 7.1 was user-creatable. Does that one
>need to go through the deprecation process? Or, since no one has complained
>it is gone, maybe we don't need to worry about it?

Good point! Looks like it fell through the cracks...

There are voices who claim that this device and its non-Xen counterpart should never have been user-creatable in the first place:
https://patchwork.kernel.org/project/qemu-devel/patch/20190718091740.6834-1-philmd@redhat.com/

Best regards,
Bernhard

>
>Cheers,
>
>Chuck


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 13:34:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 13:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472980.733417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pE9L6-0007IX-4b; Sat, 07 Jan 2023 13:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472980.733417; Sat, 07 Jan 2023 13:34:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pE9L6-0007IQ-1a; Sat, 07 Jan 2023 13:34:24 +0000
Received: by outflank-mailman (input) for mailman id 472980;
 Sat, 07 Jan 2023 13:34:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pE9L3-0007IG-V9; Sat, 07 Jan 2023 13:34:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pE9L3-0003LV-Ru; Sat, 07 Jan 2023 13:34:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pE9L3-0003Zi-Gq; Sat, 07 Jan 2023 13:34:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pE9L3-0004Ow-GP; Sat, 07 Jan 2023 13:34:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JqunCl8n3bcebceswnm3DOO6Z30M9yvLbed2GUBTXhw=; b=lfnGrxzZs4VouedwrymubCSnFV
	sGn51UURwPNeU0YA7WDsmdy3F/Kc4FTZ4UdMFn7wQJJfeuqlXT7348qwD/KBfB5/ZJyQHWReGZIj6
	4owkOPWnP45syHRIjb80b9VGF3hqIvupCk+v6qg7em4BUakkIxENx1v9m1kJqvQzxU1M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175615-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175615: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=7050dad5f92010720cc8e8b7d5c37eaad7696c5e
X-Osstest-Versions-That:
    libvirt=6cd2b4e101cc0b60c6db83f763a08daea67ad6eb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Jan 2023 13:34:21 +0000

flight 175615 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175615/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175597
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175597
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175597
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              7050dad5f92010720cc8e8b7d5c37eaad7696c5e
baseline version:
 libvirt              6cd2b4e101cc0b60c6db83f763a08daea67ad6eb

Last test of basis   175597  2023-01-06 04:20:37 Z    1 days
Testing same since   175615  2023-01-07 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiang Jiacheng <jiangjiacheng@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Shaleen Bathla <shaleen.bathla@oracle.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   6cd2b4e101..7050dad5f9  7050dad5f92010720cc8e8b7d5c37eaad7696c5e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 14:15:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 14:15:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472989.733427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pE9yt-0003CQ-6u; Sat, 07 Jan 2023 14:15:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472989.733427; Sat, 07 Jan 2023 14:15:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pE9yt-0003CJ-4G; Sat, 07 Jan 2023 14:15:31 +0000
Received: by outflank-mailman (input) for mailman id 472989;
 Sat, 07 Jan 2023 14:15:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pE9yr-0003C9-L2; Sat, 07 Jan 2023 14:15:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pE9yr-0004G1-HM; Sat, 07 Jan 2023 14:15:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pE9yq-0004SN-UL; Sat, 07 Jan 2023 14:15:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pE9yq-0005c7-Ts; Sat, 07 Jan 2023 14:15:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B4K4Id2eUthO6Tz6qiP3YV2AMwGJ76ZcgnyMBb+giiE=; b=o0Y3KpFJHXZHeWb33i2m2rD+rM
	nIsM1a4xfU1o03CHOAfb80WqVSRahv/K6JZ4oHH44W0xdl9+3E1F0Vuchp8JC1EGp6l7gapsHE2ff
	qdkS+aR4RZV5iqcJ21uG+4sR8JNtKzXdvI5O3N6devH+vDKMfK7eNiD7yTz5SXO+X4N4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175613-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175613: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0a71553536d270e988580a3daa9fc87535908221
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Jan 2023 14:15:28 +0000

flight 175613 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175613/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                0a71553536d270e988580a3daa9fc87535908221
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   91 days
Failing since        173470  2022-10-08 06:21:34 Z   91 days  190 attempts
Testing same since   175613  2023-01-07 02:22:17 Z    0 days    1 attempts

------------------------------------------------------------
3305 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 504416 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 16:16:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 16:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.472998.733439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEBs9-0006yw-RB; Sat, 07 Jan 2023 16:16:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 472998.733439; Sat, 07 Jan 2023 16:16:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEBs9-0006yp-OE; Sat, 07 Jan 2023 16:16:41 +0000
Received: by outflank-mailman (input) for mailman id 472998;
 Sat, 07 Jan 2023 16:16:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEBs8-0006yL-RB; Sat, 07 Jan 2023 16:16:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEBs8-0007dJ-NN; Sat, 07 Jan 2023 16:16:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEBs8-0007ED-8r; Sat, 07 Jan 2023 16:16:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEBs8-0002wK-8K; Sat, 07 Jan 2023 16:16:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1a8hYTM5JnszGlchucaPY02b7ZSnksZ3SksDLDa2DuM=; b=SsTgAMgBhadDdFtMpqInY4UlLb
	Bh4qTSHXqvPkzsZLn1ZZyKNHg1+m7+vCqEZ+paA+pBbmPCzEqLuwADNvK/1/QkDKs1TqAziP/Iou/
	jEMV1klEVz+XpGqPTOmwTiXF+D2wFkIRwmPKQkvL8R0uLaXtWvgzT99gx2JdLYtXtYpo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175614-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175614: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=aaa90fede5d10e2a3c3fc7f2df608128d2cba761
X-Osstest-Versions-That:
    qemuu=d1852caab131ea898134fdcea8c14bc2ee75fbe9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Jan 2023 16:16:40 +0000

flight 175614 qemu-mainline real [real]
flight 175617 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175614/
http://logs.test-lab.xenproject.org/osstest/logs/175617/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 175595

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 175617-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175595
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175595
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175595
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175595
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175595
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175595
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175595
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175595
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                aaa90fede5d10e2a3c3fc7f2df608128d2cba761
baseline version:
 qemuu                d1852caab131ea898134fdcea8c14bc2ee75fbe9

Last test of basis   175595  2023-01-06 01:55:04 Z    1 days
Failing since        175603  2023-01-06 12:10:38 Z    1 days    3 attempts
Testing same since   175614  2023-01-07 03:55:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Di Federico <ale@rev.ng>
  Alex Bennée <alex.bennee@linaro.org>
  Axel Heider <axel.heider@hensoldt.net>
  Claudio Fontana <cfontana@suse.de>
  Fabiano Rosas <farosas@suse.de>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mukilan Thiyagarajan <quic_mthiyaga@quicinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stephen Longfield <slongfield@google.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
  Zhuojia Shen <chaosdefinition@hotmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1486 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 18:08:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 18:08:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473009.733452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEDcC-00011x-Sh; Sat, 07 Jan 2023 18:08:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473009.733452; Sat, 07 Jan 2023 18:08:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEDcC-00011q-Q4; Sat, 07 Jan 2023 18:08:20 +0000
Received: by outflank-mailman (input) for mailman id 473009;
 Sat, 07 Jan 2023 18:08:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QMsY=5E=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pEDcA-00011k-Pd
 for xen-devel@lists.xenproject.org; Sat, 07 Jan 2023 18:08:19 +0000
Received: from sonic308-55.consmr.mail.gq1.yahoo.com
 (sonic308-55.consmr.mail.gq1.yahoo.com [98.137.68.31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 40027ada-8eb6-11ed-b8d0-410ff93cb8f0;
 Sat, 07 Jan 2023 19:08:14 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic308.consmr.mail.gq1.yahoo.com with HTTP; Sat, 7 Jan 2023 18:08:11 +0000
Received: by hermes--production-ne1-7b69748c4d-pm9xv (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 14ccf2a9c0a9dae2713d4c3fcc9e4744; 
 Sat, 07 Jan 2023 18:08:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40027ada-8eb6-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673114891; bh=Q18RRjMQ2O5Mt1JGmUYM8swg2EPTEvKitcNzVDG+mic=; h=Date:From:Subject:To:Cc:References:In-Reply-To:From:Subject:Reply-To; b=fjZLMu/4N+qQkDDgJB0TSZO7e8Pry9AarIVXvSBjVwLKd/CCMn0NwNsZPq8giBgrhn+T72ilBMcxxutstnAM7877zlpi1VVbJdM2G8/bGXaskLoEqb/Mbuy+geuW2zuGJQ49fzWMOrUZS+kDWsptUmiRuusENddhbM46UtEn2rn7fg+i5VyqF+NoLQ2TGZ8tLEiR+NioEXGtLh7mlG6fhj8a2RNMHG5ulyLGnwdz6wSm45u8dHm+dDFj2YGTBp1glgl3dzftnRbzNagMhbKEjPn5RvxMqmubwq5Uzyo8RC8dX3haU6NCldk5CMN8S3VV7zO7BFvEt1PVTBFNVByzPg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673114891; bh=EkH3Ky6RbP270JuPU3dSVegfYzgL45PaYO8+avJ1Ebs=; h=X-Sonic-MF:Date:From:Subject:To:From:Subject; b=RDudI94oDRtNQBMm0MK3ljhj0zRNvrRI5RAxiIrBJh+d1o2YG+hPjCSWs6oou/r+LptAIWF7OLkrOXwwvJBAhlgpkzOBRjYPVgcwDBME0yumrrhPpOj+wy2aLfldcO0GFTFE5m+2GAhS1EqQ3zFsPiqbfubao4+3hhLwxnegkN5+qgwv++GI3l7BC4THGIUIFTCD7DY6MYoAe8L0QuOGg77x3dNGAvqr3dGpUnplFq5i4+oYZ+Y0MRJgW4L00DGV2CokBjKafv4Tq/hmRB8cs1AVqXWtnBxV20Bqxa+WdWCRz5fqL6rjO88k324PX4apJfmBHD/vZjR3NyGnSCoi8Q==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <d8bf3a56-6dae-8679-424f-920e792627d5@aol.com>
Date: Sat, 7 Jan 2023 13:08:03 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
From: Chuck Zmudzinski <brchuckz@aol.com>
Subject: Re: [PATCH v2 6/6] hw/isa/piix: Resolve redundant
 TYPE_PIIX3_XEN_DEVICE
To: Bernhard Beschow <shentey@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>, Thomas Huth <thuth@redhat.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230104144437.27479-7-shentey@gmail.com>
 <1c2e0780-e5fb-1321-0d84-b0591db9fec7@linaro.org>
 <B207F213-3B7B-4E0A-A87E-DE53CD351647@gmail.com>
 <6a1a6ed8-568d-c08b-91a7-1093a2b25929@linaro.org>
 <d9e2f616-d3bf-fc6c-2dc5-a0bf53148632@aol.com>
 <30337c62-a938-61c8-3ae5-092dbccf6302@aol.com>
 <00ff4875-62e0-05c8-a13e-5a52d4195cc2@aol.com>
 <01A7F932-0DF1-4977-A111-0907A7FC6FF9@gmail.com>
Content-Language: en-US
In-Reply-To: <01A7F932-0DF1-4977-A111-0907A7FC6FF9@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4572

On 1/7/23 6:05 AM, Bernhard Beschow wrote:
> Am 7. Januar 2023 01:08:46 UTC schrieb Chuck Zmudzinski <brchuckz@aol.com>:
> >On 1/6/23 6:04 PM, Chuck Zmudzinski wrote:
> >> On 1/6/23 2:08 PM, Chuck Zmudzinski wrote:
> >>> On 1/6/23 7:25 AM, Philippe Mathieu-Daudé wrote:
> >>>> On 6/1/23 12:57, Bernhard Beschow wrote:
> >>>>> 
> >>>>> 
> >>>>> Am 4. Januar 2023 15:35:33 UTC schrieb "Philippe Mathieu-Daudé" <philmd@linaro.org>:
> >>>>>> +Markus/Thomas
> >>>>>>
> >>>>>> On 4/1/23 15:44, Bernhard Beschow wrote:
> >>>>>>> During the last patches, TYPE_PIIX3_XEN_DEVICE turned into a clone of
> >>>>>>> TYPE_PIIX3_DEVICE. Remove this redundancy.
> >>>>>>>
> >>>>>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
> >>>>>>> ---
> >>>>>>>    hw/i386/pc_piix.c             |  4 +---
> >>>>>>>    hw/isa/piix.c                 | 20 --------------------
> >>>>>>>    include/hw/southbridge/piix.h |  1 -
> >>>>>>>    3 files changed, 1 insertion(+), 24 deletions(-)
> >>>> 
> >>>> 
> >>>>>>>    -static void piix3_xen_class_init(ObjectClass *klass, void *data)
> >>>>>>> -{
> >>>>>>> -    DeviceClass *dc = DEVICE_CLASS(klass);
> >>>>>>> -    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> >>>>>>> -
> >>>>>>> -    k->realize = piix3_realize;
> >>>>>>> -    /* 82371SB PIIX3 PCI-to-ISA bridge (Step A1) */
> >>>>>>> -    k->device_id = PCI_DEVICE_ID_INTEL_82371SB_0;
> >>>>>>> -    dc->vmsd = &vmstate_piix3;
> >>>>>>
> >>>>>> IIUC, since this device is user-creatable, we can't simply remove it
> >>>>>> without going thru the deprecation process.
> >>>>> 
> >>>>> AFAICS this device is actually not user-creatable since dc->user_creatable is set to false once in the base class. I think it is safe to remove the Xen class unless there are ABI issues.
> >>>> Great news!
> >>> 
> >>> I don't know if this means the device is user-creatable:
> >>> 
> >>> chuckz@bullseye:~$ qemu-system-i386 -device piix3-ide-xen,help
> >>> piix3-ide-xen options:
> >>>   addr=<int32>           - Slot and optional function number, example: 06.0 or 06 (default: -1)
> >>>   failover_pair_id=<str>
> >>>   multifunction=<bool>   - on/off (default: false)
> >>>   rombar=<uint32>        -  (default: 1)
> >>>   romfile=<str>
> >>>   x-pcie-extcap-init=<bool> - on/off (default: true)
> >>>   x-pcie-lnksta-dllla=<bool> - on/off (default: true)
> >>> 
> >>> Today I am running qemu 5.2 on Debian 11, so this output is for
> >>> qemu 5.2, and that version of qemu has a piix3-ide-xen device.
> >>> Is this the same device that is being removed? If so, it seems to
> >>> me that at least as of qemu 5.2, the device was user-creatable.
> >>> 
> >>> Chuck
> >> 
> >> Good news! It looks like the device was removed as a user-creatable device sometime after version 5.2:
> >> 
> >> chuckz@debian:~$ qemu-system-i386-7.50 -device help | grep piix
> >> name "piix3-usb-uhci", bus PCI
> >> name "piix4-usb-uhci", bus PCI
> >> name "piix3-ide", bus PCI
> >> name "piix4-ide", bus PCI
> >> chuckz@debian:~$ qemu-system-i386-7.50-bernhard-v2 -device help | grep piix
> >> name "piix3-usb-uhci", bus PCI
> >> name "piix4-usb-uhci", bus PCI
> >> name "piix3-ide", bus PCI
> >> name "piix4-ide", bus PCI
> >> chuckz@debian:~$
> >> 
> >> The piix3-ide-xen device is not present, either with or without Bernhard's
> >> patches, in current qemu 7.50, the development version for qemu 8.0.
> >> 
> >> Cheers,
> >> 
> >> Chuck
> >
> >
> >I traced where the piix3-ide-xen device was removed:
> >
> >It was 7851b21a81 (hw/ide/piix: Remove redundant "piix3-ide-xen" device class)
> >
> >https://gitlab.com/qemu-project/qemu/-/commit/7851b21a8192750adecbcf6e8780a20de5891ad6
> >
> >about six months ago. That was between 7.0 and 7.1. So the device being removed
> >here is definitely not user-creatable, but it appears that the piix3-ide-xen
> >device removed between 7.0 and 7.1 was user-creatable. Does that one need to
> >go through the deprecation process? Or, since no one has complained that it
> >is gone, maybe we don't need to worry about it?
>
> Good point! Looks like it fell through the cracks...
>
> Some claim that this device and its non-Xen counterpart should never have been user-creatable in the first place:
> https://patchwork.kernel.org/project/qemu-devel/patch/20190718091740.6834-1-philmd@redhat.com/

Of course, only the xen variant was removed, so only users of that variant
were affected. Any affected users probably just substituted the non-xen
variant in their machines and didn't experience any problems.

Kind regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 22:07:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 22:07:31 +0000
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: [PATCH v7 3/4] x86/mm: make code robust to future PAT changes
Date: Sat,  7 Jan 2023 17:07:05 -0500
Message-Id: <89201c66b0261b2f5ee83e7672830317fde21dfa.1673123823.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673123823.git.demi@invisiblethingslab.com>
References: <cover.1673123823.git.demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It may be desirable to change Xen's PAT for various reasons.  This
requires changes to several _PAGE_* macros as well.  Add static
assertions to check that XEN_MSR_PAT is consistent with the _PAGE_*
macros, and that _PAGE_WB is 0 as required by Linux.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
Changes since v6:
- Remove overly-specific comment.
- Remove double blank line.
- Check that unused entries are distinct from higher entries that are
  used.
- Have PTE_FLAGS_TO_CACHEATTR validate its argument.
- Do not check if an unsigned number is negative.
- Expand comment explaining why _PAGE_WB must be zero.

Changes since v5:
- Remove unhelpful comment.
- Move a BUILD_BUG_ON.
- Use fewer hardcoded constants in PTE_FLAGS_TO_CACHEATTR.
- Fix coding style.

Changes since v4:
- Add lots of comments explaining what the various BUILD_BUG_ON()s mean.

Changes since v3:
- Refactor some macros
- Avoid including a string literal in BUILD_BUG_ON

---
 xen/arch/x86/include/asm/page.h |  2 +
 xen/arch/x86/mm.c               | 96 +++++++++++++++++++++++++++++++++
 2 files changed, 98 insertions(+)

diff --git a/xen/arch/x86/include/asm/page.h b/xen/arch/x86/include/asm/page.h
index b585235d064a567082582c8e92a4e8283fd949ca..c7d77ab2901aa5bdb03a719af810c6f8d8ba9d4e 100644
--- a/xen/arch/x86/include/asm/page.h
+++ b/xen/arch/x86/include/asm/page.h
@@ -338,6 +338,8 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
 #define _PAGE_UC         (            _PAGE_PCD | _PAGE_PWT)
 #define _PAGE_WC         (_PAGE_PAT                        )
 #define _PAGE_WP         (_PAGE_PAT |             _PAGE_PWT)
+#define _PAGE_RSVD_1     (_PAGE_PAT | _PAGE_PCD            )
+#define _PAGE_RSVD_2     (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
 
 /*
  * Debug option: Ensure that granted mappings are not implicitly unmapped.
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a8f137925cba1846b97aee9321df6427f4dd1a94..d69e9bea6c30bc782ab4c331f42502f6e61a028a 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -890,6 +890,8 @@ get_page_from_l1e(
         case _PAGE_WT:
         case _PAGE_WP:
             break;
+        case _PAGE_RSVD_1:
+        case _PAGE_RSVD_2:
         default:
             /*
              * If we get here, a PV guest tried to use one of the
@@ -6412,6 +6414,100 @@ static void __init __maybe_unused build_assertions(void)
      * using different PATs will not work.
      */
     BUILD_BUG_ON(XEN_MSR_PAT != 0x050100070406ULL);
+
+    /*
+     * _PAGE_WB must be zero.  Linux PV guests assume that _PAGE_WB will be
+     * zero, and indeed Linux has a BUILD_BUG_ON validating that their version
+     * of _PAGE_WB *is* zero.  Furthermore, since _PAGE_WB is zero, it is quite
+     * likely to be omitted from various parts of Xen, and indeed L1 PTE
+     * validation code checks that ((l1f & PAGE_CACHE_ATTRs) == 0), not
+     * ((l1f & PAGE_CACHE_ATTRs) == _PAGE_WB).
+     */
+    BUILD_BUG_ON(_PAGE_WB);
+
+    /* _PAGE_RSVD_1 must be less than _PAGE_RSVD_2 */
+    BUILD_BUG_ON(_PAGE_RSVD_1 >= _PAGE_RSVD_2);
+
+#define PAT_ENTRY(v)                                                           \
+    (BUILD_BUG_ON_ZERO(((v) < 0) || ((v) > 7)) +                               \
+     (0xFF & (XEN_MSR_PAT >> (8 * (v)))))
+
+    /* Validate at compile-time that v is a valid value for a PAT entry */
+#define CHECK_PAT_ENTRY_VALUE(v)                                               \
+    BUILD_BUG_ON((v) > X86_NUM_MT || (v) == X86_MT_RSVD_2 ||                   \
+                 (v) == X86_MT_RSVD_3)
+
+    /* Validate at compile-time that PAT entry v is valid */
+#define CHECK_PAT_ENTRY(v) CHECK_PAT_ENTRY_VALUE(PAT_ENTRY(v))
+
+    /*
+     * If one of these trips, the corresponding entry in XEN_MSR_PAT is invalid.
+     * This would cause Xen to crash (with #GP) at startup.
+     */
+    CHECK_PAT_ENTRY(0);
+    CHECK_PAT_ENTRY(1);
+    CHECK_PAT_ENTRY(2);
+    CHECK_PAT_ENTRY(3);
+    CHECK_PAT_ENTRY(4);
+    CHECK_PAT_ENTRY(5);
+    CHECK_PAT_ENTRY(6);
+    CHECK_PAT_ENTRY(7);
+
+    /* Macro version of pte_flags_to_cacheattr(), for use in BUILD_BUG_ON()s */
+#define PTE_FLAGS_TO_CACHEATTR(pte_value)                                      \
+    /* Check that the _PAGE_* macros only use bits from PAGE_CACHE_ATTRS */    \
+    (BUILD_BUG_ON_ZERO(((pte_value) & PAGE_CACHE_ATTRS) != (pte_value)) |      \
+     (((pte_value) & _PAGE_PAT) >> 5) |                                        \
+     (((pte_value) & (_PAGE_PCD | _PAGE_PWT)) >> 3))
+
+    CHECK_PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(_PAGE_RSVD_1));
+    CHECK_PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(_PAGE_RSVD_2));
+#define PAT_ENTRY_FROM_FLAGS(x) PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(x))
+
+    /* Validate at compile time that X does not duplicate a smaller PAT entry */
+#define CHECK_DUPLICATE_ENTRY(x, y)                                            \
+    BUILD_BUG_ON((x) >= (y) &&                                                 \
+                 (PAT_ENTRY_FROM_FLAGS(x) == PAT_ENTRY_FROM_FLAGS(y)))
+
+    /* Check that a PAT-related _PAGE_* macro is correct */
+#define CHECK_PAGE_VALUE(page_value) do {                                      \
+    /* Check that the _PAGE_* macros only use bits from PAGE_CACHE_ATTRS */    \
+    BUILD_BUG_ON(((_PAGE_ ## page_value) & PAGE_CACHE_ATTRS) !=                \
+                 (_PAGE_ ## page_value));                                      \
+    /* Check that the _PAGE_* are consistent with XEN_MSR_PAT */               \
+    BUILD_BUG_ON(PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(_PAGE_ ## page_value)) !=    \
+                 (X86_MT_ ## page_value));                                     \
+    case _PAGE_ ## page_value:; /* ensure no duplicate values */               \
+    /*                                                                         \
+     * Check that the _PAGE_* entries do not duplicate a smaller reserved      \
+     * entry.                                                                  \
+     */                                                                        \
+    CHECK_DUPLICATE_ENTRY(_PAGE_ ## page_value, _PAGE_RSVD_1);                 \
+    CHECK_DUPLICATE_ENTRY(_PAGE_ ## page_value, _PAGE_RSVD_2);                 \
+    CHECK_PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(_PAGE_ ## page_value));             \
+} while ( false )
+
+    /*
+     * If one of these trips, the corresponding _PAGE_* macro is inconsistent
+     * with XEN_MSR_PAT.  This would cause Xen to use incorrect cacheability
+     * flags, with results that are unknown and possibly harmful.
+     */
+    switch (0) {
+    CHECK_PAGE_VALUE(WT);
+    CHECK_PAGE_VALUE(WB);
+    CHECK_PAGE_VALUE(WC);
+    CHECK_PAGE_VALUE(UC);
+    CHECK_PAGE_VALUE(UCM);
+    CHECK_PAGE_VALUE(WP);
+    case _PAGE_RSVD_1:
+    case _PAGE_RSVD_2:
+        break;
+    }
+#undef CHECK_PAT_ENTRY
+#undef CHECK_PAT_ENTRY_VALUE
+#undef CHECK_PAGE_VALUE
+#undef PTE_FLAGS_TO_CACHEATTR
+#undef PAT_ENTRY
 }
 
 /*
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Sat Jan 07 22:07:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 22:07:31 +0000
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: [PATCH v7 4/4] x86: Allow using Linux's PAT
Date: Sat,  7 Jan 2023 17:07:06 -0500
Message-Id: <9fd0360dd914d93dab357d16b46b4290e6119d30.1673123823.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673123823.git.demi@invisiblethingslab.com>
References: <cover.1673123823.git.demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Due to a hardware erratum, Intel integrated GPUs are incompatible with
Xen's PAT.  Using Linux's PAT is a workaround for this flaw.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 xen/arch/x86/Kconfig                 | 33 ++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/page.h      | 12 ++++++++++
 xen/arch/x86/include/asm/processor.h | 15 +++++++++++++
 xen/arch/x86/mm.c                    |  2 ++
 4 files changed, 62 insertions(+)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 6a7825f4ba3c98e0496415123fde79ee62f771fa..18efccedfd08873cd169a54825b0ba4256a12942 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -227,6 +227,39 @@ config XEN_ALIGN_2M
 
 endchoice
 
+config LINUX_PAT
+	bool "Use Linux's PAT instead of Xen's default"
+	help
+	  Use Linux's Page Attribute Table instead of the default Xen value.
+
+	  The Page Attribute Table (PAT) maps three bits in the page table entry
+	  to the actual cacheability used by the processor.  Many Intel
+	  integrated GPUs have errata (bugs) that cause CPU access to GPU memory
+	  to ignore the topmost bit.  When using Xen's default PAT, this results
+	  in caches not being flushed and incorrect images being displayed.  The
+	  default PAT used by Linux does not cause this problem.
+
+	  If you say Y here, you will be able to use Intel integrated GPUs that
+	  are attached to your Linux dom0 or other Linux PV guests.  However,
+	  you will not be able to use non-Linux OSs in dom0, and attaching a PCI
+	  device to a non-Linux PV guest will result in unpredictable guest
+	  behavior.  If you say N here, you will be able to use a non-Linux
+	  dom0, and will be able to attach PCI devices to non-Linux PV guests.
+
+	  Note that saving a PV guest with an assigned PCI device on a machine
+	  with one PAT and restoring it on a machine with a different PAT won't
+	  work: the resulting guest may boot and even appear to work, but caches
+	  will not be flushed when needed, with unpredictable results.  HVM
+	  (including PVH and PVHVM) guests and guests without assigned PCI
+	  devices do not care what PAT Xen uses, and migration (even live)
+	  between hypervisors with different PATs will work fine.  Guests using
+	  PV Shim care about the PAT used by the PV Shim firmware, not the
+	  host’s PAT.  Also, non-default PAT values are incompatible with the
+	  (deprecated) qemu-traditional stubdomain.
+
+	  Say Y if you are building a hypervisor for a Linux distribution that
+	  supports Intel iGPUs.  Say N otherwise.
+
 config X2APIC_PHYSICAL
 	bool "x2APIC Physical Destination mode"
 	help
diff --git a/xen/arch/x86/include/asm/page.h b/xen/arch/x86/include/asm/page.h
index c7d77ab2901aa5bdb03a719af810c6f8d8ba9d4e..03839eb2b78517332663daad2089677d7000852c 100644
--- a/xen/arch/x86/include/asm/page.h
+++ b/xen/arch/x86/include/asm/page.h
@@ -331,6 +331,7 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
 
 #define PAGE_CACHE_ATTRS (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
 
+#ifndef CONFIG_LINUX_PAT
 /* Memory types, encoded under Xen's choice of MSR_PAT. */
 #define _PAGE_WB         (                                0)
 #define _PAGE_WT         (                        _PAGE_PWT)
@@ -340,6 +341,17 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
 #define _PAGE_WP         (_PAGE_PAT |             _PAGE_PWT)
 #define _PAGE_RSVD_1     (_PAGE_PAT | _PAGE_PCD            )
 #define _PAGE_RSVD_2     (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
+#else
+/* Memory types, encoded under Linux's choice of MSR_PAT. */
+#define _PAGE_WB         (                                0)
+#define _PAGE_WC         (                        _PAGE_PWT)
+#define _PAGE_UCM        (            _PAGE_PCD            )
+#define _PAGE_UC         (            _PAGE_PCD | _PAGE_PWT)
+#define _PAGE_RSVD_1     (_PAGE_PAT                        )
+#define _PAGE_WP         (_PAGE_PAT |             _PAGE_PWT)
+#define _PAGE_RSVD_2     (_PAGE_PAT | _PAGE_PCD            )
+#define _PAGE_WT         (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
+#endif
 
 /*
  * Debug option: Ensure that granted mappings are not implicitly unmapped.
diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
index 60b902060914584957db8afa5c7c1e6abdad4d13..413b59ab284990cca192fa1dc44b437f58bd282f 100644
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -92,6 +92,20 @@
                           X86_EFLAGS_NT|X86_EFLAGS_DF|X86_EFLAGS_IF|    \
                           X86_EFLAGS_TF)
 
+#ifdef CONFIG_LINUX_PAT
+/*
+ * Host IA32_CR_PAT value to cover all memory types.  This is not the default
+ * MSR_PAT value, but is the same as the one used by Linux.
+ */
+#define XEN_MSR_PAT ((_AC(X86_MT_WB,  ULL) << 0x00) | \
+                     (_AC(X86_MT_WC,  ULL) << 0x08) | \
+                     (_AC(X86_MT_UCM, ULL) << 0x10) | \
+                     (_AC(X86_MT_UC,  ULL) << 0x18) | \
+                     (_AC(X86_MT_WB,  ULL) << 0x20) | \
+                     (_AC(X86_MT_WP,  ULL) << 0x28) | \
+                     (_AC(X86_MT_UCM, ULL) << 0x30) | \
+                     (_AC(X86_MT_WT,  ULL) << 0x38))
+#else
 /*
  * Host IA32_CR_PAT value to cover all memory types.  This is not the default
  * MSR_PAT value, and is an ABI with PV guests.
@@ -104,6 +118,7 @@
                      (_AC(X86_MT_WP,  ULL) << 0x28) | \
                      (_AC(X86_MT_UC,  ULL) << 0x30) | \
                      (_AC(X86_MT_UC,  ULL) << 0x38))
+#endif
 
 #ifndef __ASSEMBLY__
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d69e9bea6c30bc782ab4c331f42502f6e61a028a..042c1875a02092a3f19c293003ef12209d88a450 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -6407,6 +6407,7 @@ unsigned long get_upper_mfn_bound(void)
 
 static void __init __maybe_unused build_assertions(void)
 {
+#ifndef CONFIG_LINUX_PAT
     /*
      * If this trips, any guests that blindly rely on the public API in xen.h
      * (instead of reading the PAT from Xen, as Linux 3.19+ does) will be
@@ -6414,6 +6415,7 @@ static void __init __maybe_unused build_assertions(void)
      * using different PATs will not work.
      */
     BUILD_BUG_ON(XEN_MSR_PAT != 0x050100070406ULL);
+#endif
 
     /*
      * _PAGE_WB must be zero.  Linux PV guests assume that _PAGE_WB will be
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Sat Jan 07 22:07:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 22:07:31 +0000
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: [PATCH v7 1/4] x86: Remove MEMORY_NUM_TYPES and NO_HARDCODE_MEM_TYPE
Date: Sat,  7 Jan 2023 17:07:03 -0500
Message-Id: <dda75fd1ad51d041b7ff2fd934395040352fd6aa.1673123823.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673123823.git.demi@invisiblethingslab.com>
References: <cover.1673123823.git.demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MEMORY_NUM_TYPES and NO_HARDCODE_MEM_TYPE are both aliases of
MTRR_NUM_TYPES, so drop them and use MTRR_NUM_TYPES directly.

No functional change intended.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
Changes since v2:

- Keep MTRR_NUM_TYPES and adjust commit message accordingly
---
 xen/arch/x86/hvm/mtrr.c         | 18 +++++++++---------
 xen/arch/x86/include/asm/mtrr.h |  2 --
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 3 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 093103f6c768cf64f880d1b20e1c14f5918c1250..05e978041d62fd0d559462de181a04bef8a5bca9 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -38,7 +38,7 @@ static const uint8_t pat_entry_2_pte_flags[8] = {
 
 /* Effective mm type lookup table, according to MTRR and PAT. */
 static const uint8_t mm_type_tbl[MTRR_NUM_TYPES][X86_NUM_MT] = {
-#define RS MEMORY_NUM_TYPES
+#define RS MTRR_NUM_TYPES
 #define UC X86_MT_UC
 #define WB X86_MT_WB
 #define WC X86_MT_WC
@@ -66,9 +66,9 @@ static const uint8_t mm_type_tbl[MTRR_NUM_TYPES][X86_NUM_MT] = {
  * Reverse lookup table, to find a pat type according to MTRR and effective
  * memory type. This table is dynamically generated.
  */
-static uint8_t __read_mostly mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES] =
-    { [0 ... MTRR_NUM_TYPES-1] =
-        { [0 ... MEMORY_NUM_TYPES-1] = INVALID_MEM_TYPE }
+static uint8_t __read_mostly mtrr_epat_tbl[MTRR_NUM_TYPES][MTRR_NUM_TYPES] =
+    { [0 ... MTRR_NUM_TYPES - 1] =
+        { [0 ... MTRR_NUM_TYPES - 1] = INVALID_MEM_TYPE }
     };
 
 /* Lookup table for PAT entry of a given PAT value in host PAT. */
@@ -85,7 +85,7 @@ static int __init cf_check hvm_mtrr_pat_init(void)
         {
             unsigned int tmp = mm_type_tbl[i][j];
 
-            if ( tmp < MEMORY_NUM_TYPES )
+            if ( tmp < MTRR_NUM_TYPES )
                 mtrr_epat_tbl[i][tmp] = j;
         }
     }
@@ -317,11 +317,11 @@ static uint8_t effective_mm_type(struct mtrr_state *m,
                                  uint8_t gmtrr_mtype)
 {
     uint8_t mtrr_mtype, pat_value;
-   
+
     /* if get_pat_flags() gives a dedicated MTRR type,
      * just use it
-     */ 
-    if ( gmtrr_mtype == NO_HARDCODE_MEM_TYPE )
+     */
+    if ( gmtrr_mtype == MTRR_NUM_TYPES )
         mtrr_mtype = mtrr_get_type(m, gpa, 0);
     else
         mtrr_mtype = gmtrr_mtype;
@@ -346,7 +346,7 @@ uint32_t get_pat_flags(struct vcpu *v,
     /* 1. Get the effective memory type of guest physical address,
      * with the pair of guest MTRR and PAT
      */
-    guest_eff_mm_type = effective_mm_type(g, pat, gpaddr, 
+    guest_eff_mm_type = effective_mm_type(g, pat, gpaddr,
                                           gl1e_flags, gmtrr_mtype);
     /* 2. Get the memory type of host physical address, with MTRR */
     shadow_mtrr_type = mtrr_get_type(&mtrr_state, spaddr, 0);
diff --git a/xen/arch/x86/include/asm/mtrr.h b/xen/arch/x86/include/asm/mtrr.h
index e4f6ca6048334b2094a1836cc2f298453641232f..4b7f840a965954cc4b59698327a37e47026893a4 100644
--- a/xen/arch/x86/include/asm/mtrr.h
+++ b/xen/arch/x86/include/asm/mtrr.h
@@ -4,8 +4,6 @@
 #include <xen/mm.h>
 
 #define MTRR_NUM_TYPES       X86_MT_UCM
-#define MEMORY_NUM_TYPES     MTRR_NUM_TYPES
-#define NO_HARDCODE_MEM_TYPE MTRR_NUM_TYPES
 
 #define NORMAL_CACHE_MODE          0
 #define NO_FILL_CACHE_MODE         2
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index f5f7ff021bd9e057c5b6f6329de7acb5ef05d58f..1faf9940db6b0afefc5977c00c00fb6a39cd27d2 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -578,7 +578,7 @@ _sh_propagate(struct vcpu *v,
                             gflags,
                             gfn_to_paddr(target_gfn),
                             mfn_to_maddr(target_mfn),
-                            NO_HARDCODE_MEM_TYPE);
+                            MTRR_NUM_TYPES);
             }
     }
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Sat Jan 07 22:07:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 22:07:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473023.733477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEHLX-0007ae-NV; Sat, 07 Jan 2023 22:07:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473023.733477; Sat, 07 Jan 2023 22:07:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEHLX-0007aA-Fq; Sat, 07 Jan 2023 22:07:23 +0000
Received: by outflank-mailman (input) for mailman id 473023;
 Sat, 07 Jan 2023 22:07:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TYdO=5E=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pEHLW-0007Zv-0J
 for xen-devel@lists.xenproject.org; Sat, 07 Jan 2023 22:07:22 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a5fe16f2-8ed7-11ed-91b6-6bf2151ebd3b;
 Sat, 07 Jan 2023 23:07:19 +0100 (CET)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id E36BA5C0077;
 Sat,  7 Jan 2023 17:07:16 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute5.internal (MEProxy); Sat, 07 Jan 2023 17:07:16 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sat,
 7 Jan 2023 17:07:15 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5fe16f2-8ed7-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to; s=fm2; t=
	1673129236; x=1673215636; bh=GH9GoCF1lU7nE5K7/oncm4hQo/KJyvZaDda
	S9Bw1XBM=; b=v3ZYRJ7NH0g+bqOKNv61dDmt2+zjrneH1S9D0lyiuM5qBYChvdd
	XhZA+zL+nQEXqgPz9erA46quFg9EmAJEE53x+fyQKsTfTh3VGTJULUYkL99uPbt0
	1AS4qL83aiwJl5EKKd+d16SbAErUSbYaVaaZPteR7f3Fs9yz/DyZLFL987W4MltA
	3di+nkG/a5uJiXIO+N8yggqSrGdXdT06M8+dy+7TMqn0iKIFHVjX1ienmeF74j11
	6VwU/dewHncE6Y2wm3q0HwiD0APtF2+2CbScvMgaRdfPBwTGUnEenJFdRAyyBhv5
	OhviKA3CugVtIw+YvK3Y62MHQ29U560+4LQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:message-id:mime-version:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm2; t=1673129236; x=1673215636; bh=GH9GoCF1lU7nE
	5K7/oncm4hQo/KJyvZaDdaS9Bw1XBM=; b=SXlteS+xEfpxleQtjTy4hLha6qRsX
	WD/l1H6nPIO/BaW663rTK38D+K4zromLewm6ywtOAdIYFMcEUPutpxNDFapSTL5E
	yf6Uiq7YAZwBGE9+RLe6S5VS/WmXRBksb7mKyCIN6Zt1K9RFnU/OBc1PVkro2yQs
	anxbChvqOfhhOFRPkaWZbqhZIom4cDqU7f3lRIZLvqWHcaKdWZgBx3lUmjtSttud
	vmLf1v0NxXz3ihvL5K22lL7VfD7uOguJRQ6ozmtlszS2srlhw6wqSuQ+K0YWtMmm
	gCsVoS32ByTcmFXeZLgZNWPqSlKNM/2RNFcXCBmab4d9j9SdxSQ7bA2/Q==
X-ME-Sender: <xms:FO25Y66Zbdn4iYuYAuWz-TRA7WqAGGkOnayMYRCsD2l2viohQNWJ_A>
    <xme:FO25Yz7fGuS0C_qTdXWxwZMOqsQQ01svxTyTaFUZY-0bnlXN25sg_hB-eEhTL6mc6
    n15irlBH0MCohI>
X-ME-Received: <xmr:FO25Y5cZi8ysXcnWVOQn-oZyKmG0Z-PDroHb-86OGvpLmOuDojvsZ7sUwZy6Fao82i9qDLOdIxob>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrkedvgdduheegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeffvghmihcu
    ofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefhgefgieeitdeijeeguddtkefgteeg
    heehgeetkeevhfefgfduhedtveelgeeugeenucevlhhushhtvghrufhiiigvpedtnecurf
    grrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhgshhl
    rggsrdgtohhm
X-ME-Proxy: <xmx:FO25Y3L7Tt7Hc8wTp4nU6gttT8pDQiALdRg0dbemdWrJhrXPqp1HJQ>
    <xmx:FO25Y-I_5YLH7RUGPqi5b9usyjG5w-4pD_flWkX9C5KTTmjzrcS1vQ>
    <xmx:FO25Y4zbPxQliD-O5ifnCRASSZK0L0WpXxBPonvmuZsKwPSpl65MAQ>
    <xmx:FO25Y9-Vn3UtjaXwDjCFPEqcVKyHuKtxRaMhfMA39LbVm9SZpsa8og>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: [PATCH v7 0/4] Make PAT handling less brittle
Date: Sat,  7 Jan 2023 17:07:02 -0500
Message-Id: <cover.1673123823.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

While working on Qubes OS, Marek found out that there were some PAT hacks
in the Linux i915 driver.  I decided to make Xen use Linux’s PAT to see
if it solved the graphics glitches that were observed; it did.  This
required a substantial amount of preliminary work that is useful even
without using Linux’s PAT.  Furthermore, it turned out that the graphics
glitches were due to a hardware bug, which means that Xen's PAT is
fundamentally incompatible with the use of current-generation Intel
integrated GPUs assigned to a PV guest (including dom0).

Patches 1 through 3 are the preliminary work.  Patch 2 does break ABI by
rejecting the unused PAT entries, but this will only impact buggy PV
guests and can be disabled with a Xen command-line option.  Patch 4
provides a new Kconfig option (CONFIG_LINUX_PAT) to use Linux's PAT
instead of Xen's default.

Only patches 2 and 4 actually change Xen’s observable behavior.  Patch 1
is strictly cleanup.  Patch 3 makes changing the PAT much less
error-prone, as problems with the PAT or with the associated _PAGE_*
constants will be detected at compile time.

Demi Marie Obenour (4):
  x86: Remove MEMORY_NUM_TYPES and NO_HARDCODE_MEM_TYPE
  x86/mm: Reject invalid cacheability in PV guests by default
  x86/mm: make code robust to future PAT changes
  x86: Allow using Linux's PAT

 docs/misc/xen-command-line.pandoc    |  11 ++
 xen/arch/x86/Kconfig                 |  33 ++++++
 xen/arch/x86/hvm/mtrr.c              |  18 ++--
 xen/arch/x86/include/asm/mtrr.h      |   2 -
 xen/arch/x86/include/asm/page.h      |  14 +++
 xen/arch/x86/include/asm/processor.h |  15 +++
 xen/arch/x86/include/asm/pv/domain.h |   7 ++
 xen/arch/x86/mm.c                    | 151 ++++++++++++++++++++++++++-
 xen/arch/x86/mm/shadow/multi.c       |   2 +-
 xen/arch/x86/pv/domain.c             |  18 +++-
 10 files changed, 255 insertions(+), 16 deletions(-)

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Sat Jan 07 22:07:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 22:07:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473025.733498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEHLZ-00085K-3o; Sat, 07 Jan 2023 22:07:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473025.733498; Sat, 07 Jan 2023 22:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEHLY-00085D-V8; Sat, 07 Jan 2023 22:07:24 +0000
Received: by outflank-mailman (input) for mailman id 473025;
 Sat, 07 Jan 2023 22:07:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TYdO=5E=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pEHLX-0007Zv-DB
 for xen-devel@lists.xenproject.org; Sat, 07 Jan 2023 22:07:23 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a705382b-8ed7-11ed-91b6-6bf2151ebd3b;
 Sat, 07 Jan 2023 23:07:19 +0100 (CET)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id C14895C0076;
 Sat,  7 Jan 2023 17:07:18 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute5.internal (MEProxy); Sat, 07 Jan 2023 17:07:18 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sat,
 7 Jan 2023 17:07:17 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a705382b-8ed7-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm2; t=1673129238; x=1673215638; bh=1qVsCevJ6z
	1i8YffwrKngj5a2580Okyv9pEFojRJYNA=; b=pVheUbXFIxxTOWkRi7gSa6D9oG
	s8EdUmbeNUSwXHeyblgFwLhp93vkZ4MSeHaRLW2ouiCFkW5Ott+ptNZBplxltd93
	3HLUZUblxDPPfgpUZgeVpF8ac6mYyuB4cfGSW2r9p3Vk9Sn1/FvsFGDvH+DEvYZa
	gqmP9eSxKOR3JpXmcxOzeVseTSBtx6cDWY/swLcC1iyRsyq1gm/RsgqiWQLb/Cvv
	6T3m403W86Rf5zFc7NpjL5IweVUEuNFF6Nn5WKe/Si6dwZCmxmepHelVSTdDzI1f
	mjNzaue389LsJE6YY1rKcLo3tumGk132gj7RwitjpOpxKYOoxOm18dG90t1A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1673129238; x=
	1673215638; bh=1qVsCevJ6z1i8YffwrKngj5a2580Okyv9pEFojRJYNA=; b=a
	bjm0RBaapBEXH0sQAmfiCSgHquGTUojGtDfX5R/19gmlFZ2A88xVRMegVn2zpQLo
	bLHOnntsNHe3ZFCW29NC8rlYUN1coPLxS9COC6ApFXfmR9igrTIh3Av/nqBZ6LbB
	r2tkIuHszo9Ul0omCXAjKgOfGzaeFSMZqUfu0niYZQKFinv/tMOe6IoRYZZFDH90
	FrG/4l+92wha6ZKMlV9Ql4s95F39gkhPPp5UBOsNP/zmSqwe4l6BkffxWL9AmLIL
	RsUdBm52ZXTfVAl/B674s4jJMfsEwoWtsgDHP+g3eEz20O9ZoDol26dNxhWgWv/W
	JnoVS4A7PhBSYwEiphlbQ==
X-ME-Sender: <xms:Fu25Y9patg4kSgmPQo9TzWGLOThvS-GrB6Sazi0KqRZudT0jtykErw>
    <xme:Fu25Y_qmMi6yZeW655VfJkaE4QMNF4_UOU-QubyE4zeMvISHQTzb2NAbl8dYaVVXw
    yqmMQ9uRYysvjg>
X-ME-Received: <xmr:Fu25Y6MMbt9cbFAIGn1HQihdbqa4aHCAjydv5DdObi2hx8IzG94tNQ8UtUgk0XBBDbjlnj8uMJWJ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrkedvgdduheegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeeludekleeljeekveekfeeghfff
    gedvieegleeigeejffefieeviedvjeegveetieenucevlhhushhtvghrufhiiigvpedtne
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:Fu25Y44_XUawRcw4IvU0zLPeYDgHWOIpnWkD8Udj5earHMTfZg7mDw>
    <xmx:Fu25Y85n-bPKixdS8_trUgU29JcBLUttkKpn9QBYYyRGVxm6X-cwbA>
    <xmx:Fu25YwjjQotbFm4kevyz97ITbHcSS3spZawko8jT9NUey94YpDUsmA>
    <xmx:Fu25Y-tjlagSpifP1t5XTgFOYeRRUpcrejMVhTG4blcLuOqm_0-goQ>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: [PATCH v7 2/4] x86/mm: Reject invalid cacheability in PV guests by default
Date: Sat,  7 Jan 2023 17:07:04 -0500
Message-Id: <eb9aff037aa9afe1a4a37661847e44d2316ad094.1673123823.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673123823.git.demi@invisiblethingslab.com>
References: <cover.1673123823.git.demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Setting cacheability flags that are not among those specified by Xen is a
bug in the guest.  By default, return -EINVAL if a guest attempts to do
this.  The invalid-cacheability= Xen command-line flag allows the
administrator to permit such attempts, or to have them raise a general
protection fault in the guest so that it logs a stack trace.

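For background, a leaf PTE selects a PAT entry via three architectural
bits (PAT, PCD, PWT), and the patch in effect rejects PTEs that select
one of the entries Xen leaves reserved.  The following standalone sketch
illustrates that index computation; the PTE_* macro names and the
assumption that the last two PAT entries are the reserved ones mirror
Xen's defaults but are illustrative, not the hypervisor's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Architectural cacheability bits in a 4K leaf PTE (illustrative names). */
#define PTE_PWT (UINT64_C(1) << 3)
#define PTE_PCD (UINT64_C(1) << 4)
#define PTE_PAT (UINT64_C(1) << 7)

/* PAT index selected by a PTE: bit 2 = PAT, bit 1 = PCD, bit 0 = PWT. */
static unsigned int pte_pat_index(uint64_t pte)
{
    return (!!(pte & PTE_PAT) << 2) |
           (!!(pte & PTE_PCD) << 1) |
            !!(pte & PTE_PWT);
}

/*
 * Assumption: the first six PAT entries carry the six named memory
 * types and the remaining two are reserved, as in Xen's default PAT.
 */
static bool pat_index_valid(unsigned int idx)
{
    return idx < 6;
}
```

With `deny` semantics, a guest PTE whose index fails `pat_index_valid()`
would be refused with -EINVAL rather than silently mapped.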
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
Changes since v6:
- Make invalid-cacheability= a subflag of pv=.
- Move check for invalid cacheability to get_page_from_l1e().

Changes since v5:
- Make parameters static and __ro_after_init.
- Replace boolean parameter allow_invalid_cacheability with string
  parameter invalid-cacheability.
- Move parameter definitions to near where they are used.
- Add documentation.

Changes since v4:
- Remove pointless BUILD_BUG_ON().
- Add comment explaining why an exception is being injected.

Changes since v3:
- Add Andrew Cooper’s Suggested-by
---
 docs/misc/xen-command-line.pandoc    | 11 ++++++
 xen/arch/x86/include/asm/pv/domain.h |  7 ++++
 xen/arch/x86/mm.c                    | 53 +++++++++++++++++++++++++++-
 xen/arch/x86/pv/domain.c             | 18 ++++++++--
 4 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 424b12cfb27d6ade2ec63eacb8afe5df82465451..0230a7bc17cbd4362a42ea64cea695f31f5e0f86 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1417,6 +1417,17 @@ detection of systems known to misbehave upon accesses to that port.
 ### idle_latency_factor (x86)
 > `= <integer>`
 
+### invalid-cacheability (x86)
+> `= allow | deny | trap`
+
+> Default: `deny` in release builds, otherwise `trap`
+
+Specify what happens when a PV guest tries to use one of the reserved entries in
+the PAT.  `deny` causes the attempt to be rejected with -EINVAL, `allow` allows
+the attempt, and `trap` causes a general protection fault to be raised.
+Currently, the reserved entries are marked as uncacheable in Xen's PAT, but this
+will change if new memory types are added, so guests must not rely on it.
+
 ### ioapic_ack (x86)
 > `= old | new`
 
diff --git a/xen/arch/x86/include/asm/pv/domain.h b/xen/arch/x86/include/asm/pv/domain.h
index 924508bbb4f0c199b3cd2306d9d8f0bd0ef399f9..1c9ce259ab4ee23ea5d057f5dfa964effb169032 100644
--- a/xen/arch/x86/include/asm/pv/domain.h
+++ b/xen/arch/x86/include/asm/pv/domain.h
@@ -71,6 +71,13 @@ void pv_vcpu_destroy(struct vcpu *v);
 int pv_vcpu_initialise(struct vcpu *v);
 void pv_domain_destroy(struct domain *d);
 int pv_domain_initialise(struct domain *d);
+extern __ro_after_init uint8_t invalid_cacheability;
+
+enum {
+    INVALID_CACHEABILITY_ALLOW,
+    INVALID_CACHEABILITY_DENY,
+    INVALID_CACHEABILITY_TRAP,
+};
 
 /*
  * Bits which a PV guest can toggle in its view of cr4.  Some are loaded into
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 3558ca215b02a517d55d75329d645ae5905424e4..a8f137925cba1846b97aee9321df6427f4dd1a94 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -879,6 +879,30 @@ get_page_from_l1e(
         return -EINVAL;
     }
 
+    if ( invalid_cacheability != INVALID_CACHEABILITY_ALLOW )
+    {
+        switch ( l1e.l1 & PAGE_CACHE_ATTRS )
+        {
+        case _PAGE_WB:
+        case _PAGE_UC:
+        case _PAGE_UCM:
+        case _PAGE_WC:
+        case _PAGE_WT:
+        case _PAGE_WP:
+            break;
+        default:
+            /*
+             * If we get here, a PV guest tried to use one of the
+             * reserved values in Xen's PAT.  This indicates a bug
+             * in the guest.  If requested by the user, inject #GP
+             * to cause the guest to log a stack trace.
+             */
+            if ( invalid_cacheability == INVALID_CACHEABILITY_TRAP )
+                pv_inject_hw_exception(TRAP_gp_fault, 0);
+            return -EINVAL;
+        }
+    }
+
     valid = mfn_valid(_mfn(mfn));
 
     if ( !valid ||
@@ -1324,6 +1348,31 @@ static int put_page_from_l4e(l4_pgentry_t l4e, mfn_t l4mfn, unsigned int flags)
     return put_pt_page(l4e_get_page(l4e), mfn_to_page(l4mfn), flags);
 }
 
+#ifdef NDEBUG
+#define INVALID_CACHEABILITY_DEFAULT INVALID_CACHEABILITY_DENY
+#else
+#define INVALID_CACHEABILITY_DEFAULT INVALID_CACHEABILITY_TRAP
+#endif
+
+__ro_after_init uint8_t invalid_cacheability =
+    INVALID_CACHEABILITY_DEFAULT;
+
+static int __init cf_check set_invalid_cacheability(const char *str)
+{
+    if ( strcmp("allow", str) == 0 )
+        invalid_cacheability = INVALID_CACHEABILITY_ALLOW;
+    else if ( strcmp("deny", str) == 0 )
+        invalid_cacheability = INVALID_CACHEABILITY_DENY;
+    else if ( strcmp("trap", str) == 0 )
+        invalid_cacheability = INVALID_CACHEABILITY_TRAP;
+    else
+        return -EINVAL;
+
+    return 0;
+}
+
+custom_param("invalid-cacheability", set_invalid_cacheability);
+
 static int promote_l1_table(struct page_info *page)
 {
     struct domain *d = page_get_owner(page);
@@ -1343,7 +1392,9 @@ static int promote_l1_table(struct page_info *page)
         }
         else
         {
-            switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
+            l1_pgentry_t l1e = pl1e[i];
+
+            switch ( ret = get_page_from_l1e(l1e, d, d) )
             {
             default:
                 goto fail;
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index f94f28c8e271549acb449ef2e129b928751f765d..40b424351fd99fe1fb0a5faa5b20bf4070bb1d4a 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -28,9 +28,21 @@ static int __init cf_check parse_pv(const char *s)
     do {
         ss = strchr(s, ',');
         if ( !ss )
-            ss = strchr(s, '\0');
-
-        if ( (val = parse_boolean("32", s, ss)) >= 0 )
+            ss = s + strlen(s);
+        if ( !strncmp("invalid-cacheability=", s,
+                      sizeof("invalid-cacheability=") - 1) )
+        {
+            const char *p = s + (sizeof("invalid-cacheability=") - 1);
+            if ( ss - p == 5 && !memcmp(p, "allow", 5) )
+                invalid_cacheability = INVALID_CACHEABILITY_ALLOW;
+            else if ( ss - p == 4 && !memcmp(p, "deny", 4) )
+                invalid_cacheability = INVALID_CACHEABILITY_DENY;
+            else if ( ss - p == 4 && !memcmp(p, "trap", 4) )
+                invalid_cacheability = INVALID_CACHEABILITY_TRAP;
+            else
+                rc = -EINVAL;
+        }
+        else if ( (val = parse_boolean("32", s, ss)) >= 0 )
         {
 #ifdef CONFIG_PV32
             opt_pv32 = val;
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Sat Jan 07 22:20:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 22:20:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473064.733531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEHYD-0003nZ-Sm; Sat, 07 Jan 2023 22:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473064.733531; Sat, 07 Jan 2023 22:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEHYD-0003nS-PZ; Sat, 07 Jan 2023 22:20:29 +0000
Received: by outflank-mailman (input) for mailman id 473064;
 Sat, 07 Jan 2023 22:20:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEHYC-0003nI-Ud; Sat, 07 Jan 2023 22:20:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEHYC-0007RO-S8; Sat, 07 Jan 2023 22:20:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEHYC-0003aY-7S; Sat, 07 Jan 2023 22:20:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEHYC-0001Mw-6x; Sat, 07 Jan 2023 22:20:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QBvnUNJmuDURAaJUNMjWW66DzyjdVfCotKeNP//3YfE=; b=sv3pv5EFc6dvMFSJXwkRn86evc
	gz80xjOiNwuBCr6iL0BrBXq1CjpNJojuA5HwIApNAsOnQHWkm+dRjS0PjZ4h4n5L27LTNYzQFUW3s
	EsOC2D2zseskt+GnchKmCkpuqbPR0GRVRxIT6nZAA3Ue184Z1dz8Y97InLSmK1aZPpPw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175616-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175616: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0a71553536d270e988580a3daa9fc87535908221
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Jan 2023 22:20:28 +0000

flight 175616 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175616/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                0a71553536d270e988580a3daa9fc87535908221
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   92 days
Failing since        173470  2022-10-08 06:21:34 Z   91 days  191 attempts
Testing same since   175613  2023-01-07 02:22:17 Z    0 days    2 attempts

------------------------------------------------------------
3305 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 504416 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 07 22:59:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Jan 2023 22:59:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473074.733541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEI9o-0007RA-0S; Sat, 07 Jan 2023 22:59:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473074.733541; Sat, 07 Jan 2023 22:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEI9n-0007R3-Th; Sat, 07 Jan 2023 22:59:19 +0000
Received: by outflank-mailman (input) for mailman id 473074;
 Sat, 07 Jan 2023 22:59:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEI9m-0007Qt-SJ; Sat, 07 Jan 2023 22:59:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEI9m-0008Gl-OZ; Sat, 07 Jan 2023 22:59:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEI9m-0004TT-6J; Sat, 07 Jan 2023 22:59:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEI9m-0005Qw-5k; Sat, 07 Jan 2023 22:59:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wRQNIE2tQIZzQPBChXRQflvwDzHAws6qj0q20uaYFkM=; b=OzzmBYbWhu6Ks0D9SzlNDdSoPM
	4VBwUTX3/61kOFcznCv31LNmXJhKQR3kXk5UBAnqRiBxp5Td/B0ic6/f1L0scoAgpBNFI1ggokSCw
	W5OZwHrrerM7TwXC1YENqFE9Br21zO+N3fbXx7KA0gvb1Wt2sFn4DpjzxS19Z7jCYcGE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175619-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175619: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=052e6534c49ebef8901824b77abc39271f0d852e
X-Osstest-Versions-That:
    qemuu=d1852caab131ea898134fdcea8c14bc2ee75fbe9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Jan 2023 22:59:18 +0000

flight 175619 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175619/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175595
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175595
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175595
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175595
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175595
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175595
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175595
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175595
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                052e6534c49ebef8901824b77abc39271f0d852e
baseline version:
 qemuu                d1852caab131ea898134fdcea8c14bc2ee75fbe9

Last test of basis   175595  2023-01-06 01:55:04 Z    1 days
Failing since        175603  2023-01-06 12:10:38 Z    1 days    4 attempts
Testing same since   175619  2023-01-07 16:40:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Di Federico <ale@rev.ng>
  Alex Bennée <alex.bennee@linaro.org>
  Alistair Francis <alistair.francis@wdc.com>
  Anup Patel <apatel@ventanamicro.com>
  Atish Patra <atishp@rivosinc.com>
  Axel Heider <axel.heider@hensoldt.net>
  Bin Meng <bmeng@tinylab.org>
  Christoph Muellner <christoph.muellner@vrull.eu>
  Christoph Müllner <christoph.muellner@vrull.eu>
  Claudio Fontana <cfontana@suse.de>
  Conor Dooley <conor.dooley@microchip.com>
  Fabiano Rosas <farosas@suse.de>
  Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jim Shu <jim.shu@sifive.com>
  LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mayuresh Chitale <mchitale@ventanamicro.com>
  Mukilan Thiyagarajan <quic_mthiyaga@quicinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stephen Longfield <slongfield@google.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
  Wilfred Mallawa <wilfred.mallawa@wdc.com>
  Zhuojia Shen <chaosdefinition@hotmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   d1852caab1..052e6534c4  052e6534c49ebef8901824b77abc39271f0d852e -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 07:56:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 07:56:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473086.733556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEQXr-0003PF-Nq; Sun, 08 Jan 2023 07:56:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473086.733556; Sun, 08 Jan 2023 07:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEQXr-0003P8-LA; Sun, 08 Jan 2023 07:56:43 +0000
Received: by outflank-mailman (input) for mailman id 473086;
 Sun, 08 Jan 2023 07:56:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEQXq-0003Ox-6H; Sun, 08 Jan 2023 07:56:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEQXp-0007G4-UL; Sun, 08 Jan 2023 07:56:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEQXp-0006iR-JD; Sun, 08 Jan 2023 07:56:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEQXp-0000DG-Ik; Sun, 08 Jan 2023 07:56:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7GZZhe3XJ+8rH/vHlbowa+Ayy2MAp8E5cDRm7nyBw6U=; b=6wNHmzrNTthdCiaAsDjuclkgXU
	abmigufhH/iS+MBZl75ujXZHUD5EqRzUok0vzSrG7J/WsktUMJ38LSnsE6MknEssyyxTKAlHJLWRF
	jd3QziiJyR4AnxvLeRwjyJNeOOsoc9ocrc8P9QNLkF0pdts6BGjJmtdUg0ZzVYJ5W24g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175622-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175622: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9b43a525db125799df81e6fbef712a2ae50bfc5d
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Jan 2023 07:56:41 +0000

flight 175622 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175622/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                9b43a525db125799df81e6fbef712a2ae50bfc5d
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   92 days
Failing since        173470  2022-10-08 06:21:34 Z   92 days  192 attempts
Testing same since   175622  2023-01-07 22:40:16 Z    0 days    1 attempts

------------------------------------------------------------
3311 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs



Not pushing.

(No revision log; it would be 505194 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 09:31:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 09:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473104.733567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pES1M-0005WC-50; Sun, 08 Jan 2023 09:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473104.733567; Sun, 08 Jan 2023 09:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pES1M-0005W5-1d; Sun, 08 Jan 2023 09:31:16 +0000
Received: by outflank-mailman (input) for mailman id 473104;
 Sun, 08 Jan 2023 09:31:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pES1K-0005Vv-CN; Sun, 08 Jan 2023 09:31:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pES1K-0001YD-99; Sun, 08 Jan 2023 09:31:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pES1J-0000lC-Pb; Sun, 08 Jan 2023 09:31:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pES1J-0002pk-P1; Sun, 08 Jan 2023 09:31:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zo0qSBLW2AZd2KAmUFUZbchSx+cRmLGieF7eUtHPMjQ=; b=dU0LNLXI2c4aHM6PyskV6+iaNl
	qX00uWknKir/7LbELUCQDX8B2Nlrv8ZBVWilLfBwHvBHYo0pefegezXiAmo0E8SS3F1AlXUPqoXj1
	8akc2Ye2hDeaOE/Lpn/UpczFJBgxmF8r3CFSwK2XYkC9OOAaYcU5giU0QarE8lY1eqcE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175624-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175624: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
X-Osstest-Versions-That:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Jan 2023 09:31:13 +0000

flight 175624 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175624/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-shadow     7 xen-install      fail in 175612 pass in 175624
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat fail in 175612 pass in 175624
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail pass in 175612
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 175612
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install    fail pass in 175612

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 175612 like 175601
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 175612 like 175601
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 175601
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175612
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175612
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175612
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175612
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175612
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175612
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175612
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175612
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175612
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175612
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175612
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2
baseline version:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2

Last test of basis   175624  2023-01-08 01:53:38 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jan 08 09:39:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 09:39:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473113.733577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pES8m-0006DI-Vw; Sun, 08 Jan 2023 09:38:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473113.733577; Sun, 08 Jan 2023 09:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pES8m-0006DB-TO; Sun, 08 Jan 2023 09:38:56 +0000
Received: by outflank-mailman (input) for mailman id 473113;
 Sun, 08 Jan 2023 09:38:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pES8l-0006D1-C2; Sun, 08 Jan 2023 09:38:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pES8l-0001hn-8h; Sun, 08 Jan 2023 09:38:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pES8k-0000uP-Pw; Sun, 08 Jan 2023 09:38:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pES8k-0002im-PS; Sun, 08 Jan 2023 09:38:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XxQPPNXG0492sG9hg+AOO27aiipg9TW9vNatxeXMyRs=; b=LqSWybjPluIfkWbwTpmnEh1xY2
	HRMk5rbVPxXl1Y75+LRgpHkBIUOxMf+yo90scsDpOVipcrsrh2TXQz2HzZDLWlZ71OloYd16bwvFU
	ABH67gWcT9uLU+eGt0cuVrjGg0aFzh1I3A+pDTmPPSy8t5HihCbDvO0MHPBQxRKJBbNc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175623-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175623: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
X-Osstest-Versions-That:
    qemuu=052e6534c49ebef8901824b77abc39271f0d852e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Jan 2023 09:38:54 +0000

flight 175623 qemu-mainline real [real]
flight 175626 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175623/
http://logs.test-lab.xenproject.org/osstest/logs/175626/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail pass in 175626-retest
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 175626-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175619
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175619
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175619
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175619
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175619
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175619
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175619
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175619
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713
baseline version:
 qemuu                052e6534c49ebef8901824b77abc39271f0d852e

Last test of basis   175619  2023-01-07 16:40:19 Z    0 days
Testing same since   175623  2023-01-07 23:09:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>
  Song Gao <gaosong@loongson.cn>
  Tianrui Zhao <zhaotianrui@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   052e6534c4..0ab12aa324  0ab12aa32462817f0a53fa6f6ce4baf664ef1713 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 11:44:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 11:44:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473123.733588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEU6H-0002Ei-2M; Sun, 08 Jan 2023 11:44:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473123.733588; Sun, 08 Jan 2023 11:44:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEU6G-0002Eb-W1; Sun, 08 Jan 2023 11:44:28 +0000
Received: by outflank-mailman (input) for mailman id 473123;
 Sun, 08 Jan 2023 11:44:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEU6F-0002EV-S2
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 11:44:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEU6F-0004sj-Il; Sun, 08 Jan 2023 11:44:27 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEU6F-0002Uv-E1; Sun, 08 Jan 2023 11:44:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=6nVFpbwrs3F1UgLekW7AzPCT/IBWSWbJmXStwI/E5gg=; b=y1oHWd5NlqzrPTG8Lbg4tF4aRh
	nLFsuZEaoxiYhgxDoycVIqVIhV/xCCh+tkI5P28QbBAtFvFJ5XX6Z63brsjP5BYkjp/vhzvc8MvDe
	mU3cI+Y/r9S07lhpF7KEZ0oa/35CqQdvmDKF2/XHbINQp0JUxt0qWVTkDk+i7i9pKDp8=;
Message-ID: <dd5b93c3-51d1-40ad-88b4-5bbd54633651@xen.org>
Date: Sun, 8 Jan 2023 11:44:25 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-2-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v1 01/13] xen/arm: re-arrange the static shared memory
 region
In-Reply-To: <20221115025235.1378931-2-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 15/11/2022 02:52, Penny Zheng wrote:
> This commit re-arranges the static shared memory regions into a separate array
> shm_meminfo. And static shared memory region now would have its own structure
> 'shm_membank' to hold all shm-related members, like shm_id, etc, and a pointer
> to the normal memory bank 'membank'. This will avoid continuing to grow
> 'membank'.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/bootfdt.c            | 40 +++++++++++++++++++------------
>   xen/arch/arm/domain_build.c       | 35 ++++++++++++++++-----------
>   xen/arch/arm/include/asm/kernel.h |  2 +-
>   xen/arch/arm/include/asm/setup.h  | 16 +++++++++----
>   4 files changed, 59 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 6014c0f852..ccf281cd37 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -384,6 +384,7 @@ static int __init process_shm_node(const void *fdt, int node,
>       const __be32 *cell;
>       paddr_t paddr, gaddr, size;
>       struct meminfo *mem = &bootinfo.reserved_mem;

After this patch, 'mem' is barely going to be used. So I would recommend 
removing it or restricting its scope.

This will make it easier to confirm that most of the uses of 'mem' have 
been replaced with 'shm_mem', and reduce the risk of confusion between 
the two (the names are quite similar).

[...]

> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index bd30d3798c..c0fd13f6ed 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -757,20 +757,20 @@ static int __init acquire_nr_borrower_domain(struct domain *d,
>   {
>       unsigned int bank;
>   
> -    /* Iterate reserved memory to find requested shm bank. */
> -    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
> +    /* Iterate static shared memory to find requested shm bank. */
> +    for ( bank = 0 ; bank < bootinfo.shm_mem.nr_banks; bank++ )
>       {
> -        paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start;
> -        paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size;
> +        paddr_t bank_start = bootinfo.shm_mem.bank[bank].membank->start;
> +        paddr_t bank_size = bootinfo.shm_mem.bank[bank].membank->size;

I was expecting an "if (type == MEMBANK_STATIC_DOMAIN) ..." check to be 
removed, but it looks like there was none. I guess that was a mistake in 
the existing code?

>   
>           if ( (pbase == bank_start) && (psize == bank_size) )
>               break;
>       }
>   
> -    if ( bank == bootinfo.reserved_mem.nr_banks )
> +    if ( bank == bootinfo.shm_mem.nr_banks )
>           return -ENOENT;
>   
> -    *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_borrowers;
> +    *nr_borrowers = bootinfo.shm_mem.bank[bank].nr_shm_borrowers;
>   
>       return 0;
>   }
> @@ -907,11 +907,18 @@ static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
>                                               paddr_t start, paddr_t size,
>                                               const char *shm_id)
>   {
> +    struct membank *membank;
> +
>       if ( kinfo->shm_mem.nr_banks >= NR_MEM_BANKS )
>           return -ENOMEM;
>   
> -    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start;
> -    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size;
> +    membank = xmalloc_bytes(sizeof(struct membank));

You allocate memory here but never free it. However, I think it would be 
better to avoid the dynamic allocation altogether. So I would consider 
not using the structure shm_membank and instead creating a specific one 
for the domain construction.

> +    if ( membank == NULL )
> +        return -ENOMEM;
> +
> +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].membank = membank;
> +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].membank->start = start;
> +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].membank->size = size;

The last two could be replaced with:

membank->start = start;
membank->size = size;

This would make the code more readable. Also, while you are modifying 
the code, I would consider introducing a local variable that points to
kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].

[...]

>   struct meminfo {
> @@ -61,6 +57,17 @@ struct meminfo {
>       struct membank bank[NR_MEM_BANKS];
>   };
>   
> +struct shm_membank {
> +    char shm_id[MAX_SHM_ID_LENGTH];
> +    unsigned int nr_shm_borrowers;
> +    struct membank *membank;

After the change I suggested above, I would expect that the fields of 
membank will not be updated. So I would add const here.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 11:57:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 11:57:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473130.733600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEUIM-0003me-7e; Sun, 08 Jan 2023 11:56:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473130.733600; Sun, 08 Jan 2023 11:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEUIM-0003mX-50; Sun, 08 Jan 2023 11:56:58 +0000
Received: by outflank-mailman (input) for mailman id 473130;
 Sun, 08 Jan 2023 11:56:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEUIK-0003mQ-W3
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 11:56:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEUIK-0005HT-Jv; Sun, 08 Jan 2023 11:56:56 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEUIK-0002nq-ED; Sun, 08 Jan 2023 11:56:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=xeNRan2TYy/Nfh0C3omfZfsutDQU4xaPHcI77DDdhBU=; b=MEn4IcqV0h89DF8dLI+aSo8Wv6
	BxHRvl67ESoLzIvpgOaqjEoWMlx/liUvAShJ1/u5fbbGjNpCYy1opFodqaMimlXa8vp6PbDylTSB5
	3v2XBj9Ski19qoIdyszin30jVyx3LvHRPODgSC6QVPDrG+0I0o+lJAe45QzBiHlvwl8w=;
Message-ID: <670d3b0b-86fc-4673-5a3d-0417b9a002f8@xen.org>
Date: Sun, 8 Jan 2023 11:56:54 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-3-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v1 02/13] xen/arm: switch to use shm_membank as function
 parameter
In-Reply-To: <20221115025235.1378931-3-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 15/11/2022 02:52, Penny Zheng wrote:
> Instead of using multiple function parameters to deliver various shm-related
> info, like host physical address, SHMID, etc, and with the introduction
> of new struct "shm_membank", we could switch to use "shm_membank" as
> function parameter to replace them all, to make codes more clear and
> tidy.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c | 46 ++++++++++++++++++-------------------
>   1 file changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index c0fd13f6ed..d2b9e60b5c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -751,40 +751,31 @@ static void __init assign_static_memory_11(struct domain *d,
>   }
>   
>   #ifdef CONFIG_STATIC_SHM
> -static int __init acquire_nr_borrower_domain(struct domain *d,
> -                                             paddr_t pbase, paddr_t psize,
> -                                             unsigned long *nr_borrowers)
> +static struct shm_membank * __init acquire_shm_membank(const char *shm_id)

You are returning a pointer into bootinfo. AFAICT, nobody modifies it 
after it has been populated. So can this be const?

Also, I think the function wants to be renamed because this is too 
similar to the function acquire_shared_memory_bank().

I would rename it to find_shared_memory_bank().

>   {
>       unsigned int bank;
>   
>       /* Iterate static shared memory to find requested shm bank. */
>       for ( bank = 0 ; bank < bootinfo.shm_mem.nr_banks; bank++ )
> -    {
> -        paddr_t bank_start = bootinfo.shm_mem.bank[bank].membank->start;
> -        paddr_t bank_size = bootinfo.shm_mem.bank[bank].membank->size;
> -
> -        if ( (pbase == bank_start) && (psize == bank_size) )
> +        if ( strcmp(shm_id, bootinfo.shm_mem.bank[bank].shm_id) == 0 )
>               break;
> -    }
>   
>       if ( bank == bootinfo.shm_mem.nr_banks )
> -        return -ENOENT;
> -
> -    *nr_borrowers = bootinfo.shm_mem.bank[bank].nr_shm_borrowers;
> +        return NULL;
>   
> -    return 0;
> +    return &bootinfo.shm_mem.bank[bank];
>   }
>   
>   /*
>    * This function checks whether the static shared memory region is
>    * already allocated to dom_io.
>    */
> -static bool __init is_shm_allocated_to_domio(paddr_t pbase)
> +static bool __init is_shm_allocated_to_domio(struct shm_membank *shm_membank)

AFAICT, the function will not modify shm_membank. So please use const.

>   {
>       struct page_info *page;
>       struct domain *d;
>   
> -    page = maddr_to_page(pbase);
> +    page = maddr_to_page(shm_membank->membank->start);
>       d = page_get_owner_and_reference(page);
>       if ( d == NULL )
>           return false;
> @@ -835,14 +826,17 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
>   }
>   
>   static int __init assign_shared_memory(struct domain *d,
> -                                       uint32_t addr_cells, uint32_t size_cells,
> -                                       paddr_t pbase, paddr_t psize,
> +                                       struct shm_membank *shm_membank,

Same here.

>                                          paddr_t gbase)
>   {
>       mfn_t smfn;
>       int ret = 0;
>       unsigned long nr_pages, nr_borrowers, i;
>       struct page_info *page;
> +    paddr_t pbase, psize;
> +
> +    pbase = shm_membank->membank->start;
> +    psize = shm_membank->membank->size;
>   
>       printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
>              d, pbase, pbase + psize);
> @@ -871,9 +865,7 @@ static int __init assign_shared_memory(struct domain *d,
>        * Get the right amount of references per page, which is the number of
>        * borrower domains.
>        */
> -    ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers);
> -    if ( ret )
> -        return ret;
> +    nr_borrowers = shm_membank->nr_shm_borrowers;
>   
>       /*
>        * Instead of letting borrower domain get a page ref, we add as many
> @@ -941,6 +933,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>           const char *role_str;
>           const char *shm_id;
>           bool owner_dom_io = true;
> +        struct shm_membank *shm_membank;

Same here.

>   
>           if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
>               continue;
> @@ -991,12 +984,20 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>           }
>           BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH));
>   
> +        shm_membank = acquire_shm_membank(shm_id);
> +        if ( !shm_membank )
> +        {
> +            printk("%pd: failed to acquire %s shared memory bank\n",
> +                   d, shm_id);
> +            return -ENOENT;
> +        }
> +
>           /*
>            * DOMID_IO is a fake domain and is not described in the Device-Tree.
>            * Therefore when the owner of the shared region is DOMID_IO, we will
>            * only find the borrowers.
>            */
> -        if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
> +        if ( (owner_dom_io && !is_shm_allocated_to_domio(shm_membank)) ||
>                (!owner_dom_io && strcmp(role_str, "owner") == 0) )
>           {
>               /*
> @@ -1004,8 +1005,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>                * specified, so they should be assigned to dom_io.
>                */
>               ret = assign_shared_memory(owner_dom_io ? dom_io : d,
> -                                       addr_cells, size_cells,
> -                                       pbase, psize, gbase);
> +                                       shm_membank, gbase);
>               if ( ret )
>                   return ret;
>           }

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 12:13:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 12:13:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473138.733611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEUYH-0006HT-QD; Sun, 08 Jan 2023 12:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473138.733611; Sun, 08 Jan 2023 12:13:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEUYH-0006HM-M2; Sun, 08 Jan 2023 12:13:25 +0000
Received: by outflank-mailman (input) for mailman id 473138;
 Sun, 08 Jan 2023 12:13:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEUYG-0006HG-Kq
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 12:13:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEUYG-0005cM-5L; Sun, 08 Jan 2023 12:13:24 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEUYF-0003cA-Sh; Sun, 08 Jan 2023 12:13:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=8J4YsxLSyyEjda5xIkLHeBep8ikZw7KoQ0QgXbjTHhY=; b=BJiou41POJ4/8CqOQbzJv9JCu/
	MG4YKthQUFoh28feDGPCq9NlHHltY9Da0Hi06UKAGUZzIcXBinm3sVldG8b+d2LV/u6daexAKbjdR
	nbLlXJ80m/SPr8tGZ+027bMUL8ZIx6CXCnzVctSWTHaiCV+NpLw0UKXAPrgX9d/IKDPk=;
Message-ID: <3832d94f-6856-82b3-ea64-a9e79460c547@xen.org>
Date: Sun, 8 Jan 2023 12:13:21 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 04/13] xen/arm: expand shm_membank for unprovided host
 address
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-5-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20221115025235.1378931-5-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 15/11/2022 02:52, Penny Zheng wrote:
> When host address is not provided in "xen,shared-mem", we let Xen
> automatically allocate requested static shared memory from heap, and it
> stands good chance of having multiple host memory banks allocated for the
> requested static shared memory as a result. Therefore current membank is not
> going to cover it.
> 
> This commit introduces a new field "mem" to cover both scenarios.
> "struct membank" is used when host address is provided, whereas
> "struct meminfo" shall be used when host address not provided.

 From this patch, it is not clear to me how a user can know which part 
of the union should be used.

However... I am not entirely sure why you need to create a union, 
because your new structure can already fit one bank.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 12:22:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 12:22:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473145.733622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEUhC-0007ie-KP; Sun, 08 Jan 2023 12:22:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473145.733622; Sun, 08 Jan 2023 12:22:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEUhC-0007iX-Ho; Sun, 08 Jan 2023 12:22:38 +0000
Received: by outflank-mailman (input) for mailman id 473145;
 Sun, 08 Jan 2023 12:22:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEUhC-0007iR-2Z
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 12:22:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEUhB-0005vd-Mi; Sun, 08 Jan 2023 12:22:37 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEUhB-0003tr-HP; Sun, 08 Jan 2023 12:22:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=J8OCN7wTNVpbOJr1mSt4M84BOP08M1gbQLNxXEpFxKo=; b=XlSfKSMWT0zhg8fVGW1Ah7Lk+F
	eRziJAxnghXqRHmL1sp5BHXiGLlT2dcBv0jbcZcpwfV9v467hFz6TOQW9ugzHanaS2AyUp2PMhOko
	VPZsUps26K5JSjmtEokHCAX3K7A06RAWbaBCpOd2hxpcGnsbTc0Ty2N24SeZwXRapCy4=;
Message-ID: <ff0870ab-d1b1-e029-26aa-c690063d348b@xen.org>
Date: Sun, 8 Jan 2023 12:22:35 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-6-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v1 05/13] xen/arm: allocate shared memory from heap when
 host address not provided
In-Reply-To: <20221115025235.1378931-6-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 15/11/2022 02:52, Penny Zheng wrote:
> when host address is not provided in "xen,shared-mem", we let Xen
> allocate requested shared memory from heap, and once the shared memory is
> allocated, it will be marked as static(PGC_static), which means that it will be
> reserved as static memory, and will not go back to heap even on freeing.

Please don't move pages from the {xen,dom}heap to the static heap. If 
you need to keep the pages for longer, then get an extra reference so 
they will not be released.

> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c | 83 ++++++++++++++++++++++++++++++++++++-
>   1 file changed, 82 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index fbb196d8a4..3de96882a5 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -835,6 +835,72 @@ static bool __init is_shm_allocated_to_domio(struct shm_membank *shm_membank)
>       return true;
>   }
>   
> +static int __init mark_shared_memory_static(struct shm_membank *shm_membank)
> +{
> +    unsigned int bank;
> +    unsigned long i, nr_mfns;
> +    struct page_info *pg;
> +    struct meminfo *meminfo;
> +
> +    BUG_ON(!shm_membank->mem.banks.meminfo);
> +    meminfo = shm_membank->mem.banks.meminfo;
> +    for ( bank = 0; bank < meminfo->nr_banks; bank++ )
> +    {
> +        pg = mfn_to_page(maddr_to_mfn(meminfo->bank[bank].start));

mfn_to_page(maddr_to_mfn(...)) is equivalent to maddr_to_page(..).

> +        nr_mfns = PFN_DOWN(meminfo->bank[bank].size);
> +
> +        for ( i = 0; i < nr_mfns; i++ )
> +        {
> +            /* The page should be already allocated from heap. */
> +            if ( !pg[i].count_info & PGC_state_inuse )

I don't think this is doing what you want, because '!' takes precedence 
over '&'. You likely want to add parentheses:

!(... & ...)

> +            {
> +                printk(XENLOG_ERR
> +                       "pg[%lu] MFN %"PRI_mfn" c=%#lx\n",
> +                       i, mfn_x(page_to_mfn(pg)) + i, pg[i].count_info);
> +                goto fail;
> +            }
> +
> +           pg[i].count_info |= PGC_static;
> +        }
> +    }
> +
> +    return 0;
> +
> + fail:
> +    while ( bank >= 0 )
> +    {
> +        while ( --i >= 0 )
> +            pg[i].count_info &= ~PGC_static;
> +        i = PFN_DOWN(meminfo->bank[--bank].size);
> +    }
> +
> +    return -EINVAL;
> +}
> +
> +static int __init allocate_shared_memory(struct shm_membank *shm_membank,
> +                                         paddr_t psize)
> +{
> +    struct meminfo *banks;
> +    int ret;
> +
> +    BUG_ON(shm_membank->mem.banks.meminfo != NULL);
> +
> +    banks = xmalloc_bytes(sizeof(struct meminfo));

Where is this freed?

> +    if ( banks == NULL )
> +        return -ENOMEM;
> +    shm_membank->mem.banks.meminfo = banks;
> +    memset(shm_membank->mem.banks.meminfo, 0, sizeof(struct meminfo));
> +
> +    if ( !allocate_domheap_memory(NULL, psize, shm_membank->mem.banks.meminfo) )
> +        return -EINVAL;
> +
> +    ret = mark_shared_memory_static(shm_membank);
> +    if ( ret )
> +        return ret;
> +
> +    return 0;
> +}
> +
>   static mfn_t __init acquire_shared_memory_bank(struct domain *d,
>                                                  paddr_t pbase, paddr_t psize)
>   {
> @@ -975,7 +1041,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>           unsigned int i;
>           const char *role_str;
>           const char *shm_id;
> -        bool owner_dom_io = true;
> +        bool owner_dom_io = true, paddr_assigned = true;
>           struct shm_membank *shm_membank;
>   
>           if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
> @@ -1035,6 +1101,21 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>               return -ENOENT;
>           }
>   
> +        /*
> +         * When host address is not provided in "xen,shared-mem",
> +         * we let Xen allocate requested memory from heap at first domain.
> +         */
> +        if ( !paddr_assigned && !shm_membank->mem.banks.meminfo )
> +        {
> +            ret = allocate_shared_memory(shm_membank, psize);
> +            if ( ret )
> +            {
> +                printk("%pd: failed to allocate shared memory bank(%"PRIpaddr"MB) from heap: %d\n",
> +                       d, psize >> 20, ret);
> +                return ret;
> +            }
> +        }
> +
>           /*
>            * DOMID_IO is a fake domain and is not described in the Device-Tree.
>            * Therefore when the owner of the shared region is DOMID_IO, we will

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 12:53:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 12:53:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473152.733633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEVAa-0002eG-WE; Sun, 08 Jan 2023 12:53:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473152.733633; Sun, 08 Jan 2023 12:53:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEVAa-0002e9-Te; Sun, 08 Jan 2023 12:53:00 +0000
Received: by outflank-mailman (input) for mailman id 473152;
 Sun, 08 Jan 2023 12:52:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEVAZ-0002e3-Kq
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 12:52:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEVAZ-0006YE-C2; Sun, 08 Jan 2023 12:52:59 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEVAZ-0004sX-5w; Sun, 08 Jan 2023 12:52:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=Jle59iCiMsqHyxp4ZpisjmnJ5CRT/9Fb4pdXCheufD4=; b=L9dnkChRACQaoIFXmeMAfTP1tb
	84VHd6rY3RfaSfLTaUiS+W1WKRtr9i8fSCw7aOxCVf2ua8s3rxAR7WY1AMTsxQ2Zawy2rA2I36l7Q
	4Q9M7RglpY+N4bXXuup9UuzuGQ6eKV3TIY3Fpf0tEYpKUicN7jbWxVb4daOnCC50GC4c=;
Message-ID: <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
Date: Sun, 8 Jan 2023 12:52:57 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-7-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host
 address not provided
In-Reply-To: <20221115025235.1378931-7-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 15/11/2022 02:52, Penny Zheng wrote:
> @@ -922,33 +927,82 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
>       d->max_pages += nr_pfns;
>   
>       smfn = maddr_to_mfn(pbase);
> -    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
> -    if ( res )
> +    page = mfn_to_page(smfn);
> +    /*
> +     * If page is allocated from heap as static shared memory, then we just
> +     * assign it to the owner domain
> +     */
> +    if ( page->count_info == (PGC_state_inuse | PGC_static) )

I am a bit confused how this can help differentiate, because 
PGC_state_inuse is 0. So effectively, you are checking that 
count_info is equal to PGC_static.

But as I wrote in a previous patch, I don't think you should convert 
{xen,dom}heap pages to a static pages.

[...]

> +static int __init assign_shared_memory(struct domain *d,
> +                                       struct shm_membank *shm_membank,
> +                                       paddr_t gbase)
> +{
> +    int ret = 0;
> +    unsigned long nr_pages, nr_borrowers;
> +    struct page_info *page;
> +    unsigned int i;
> +    struct meminfo *meminfo;
> +
> +    /* Host address is not provided in "xen,shared-mem" */
> +    if ( shm_membank->mem.banks.meminfo )
> +    {
> +        meminfo = shm_membank->mem.banks.meminfo;
> +        for ( i = 0; i < meminfo->nr_banks; i++ )
> +        {
> +            ret = acquire_shared_memory(d,
> +                                        meminfo->bank[i].start,
> +                                        meminfo->bank[i].size,
> +                                        gbase);
> +            if ( ret )
> +                return ret;
> +
> +            gbase += meminfo->bank[i].size;
> +        }
> +    }
> +    else
> +    {
> +        ret = acquire_shared_memory(d,
> +                                    shm_membank->mem.bank->start,
> +                                    shm_membank->mem.bank->size, gbase);
> +        if ( ret )
> +            return ret;
> +    }

Looking at this change and...

> +
>       /*
>        * Get the right amount of references per page, which is the number of
>        * borrower domains.
> @@ -984,23 +1076,37 @@ static int __init assign_shared_memory(struct domain *d,
>        * So if the borrower is created first, it will cause adding pages
>        * in the P2M without reference.
>        */
> -    page = mfn_to_page(smfn);
> -    for ( i = 0; i < nr_pages; i++ )
> +    if ( shm_membank->mem.banks.meminfo )
>       {
> -        if ( !get_page_nr(page + i, d, nr_borrowers) )
> +        meminfo = shm_membank->mem.banks.meminfo;
> +        for ( i = 0; i < meminfo->nr_banks; i++ )
>           {
> -            printk(XENLOG_ERR
> -                   "Failed to add %lu references to page %"PRI_mfn".\n",
> -                   nr_borrowers, mfn_x(smfn) + i);
> -            goto fail;
> +            page = mfn_to_page(maddr_to_mfn(meminfo->bank[i].start));
> +            nr_pages = PFN_DOWN(meminfo->bank[i].size);
> +            ret = add_shared_memory_ref(d, page, nr_pages, nr_borrowers);
> +            if ( ret )
> +                goto fail;
>           }
>       }
> +    else
> +    {
> +        page = mfn_to_page(
> +                maddr_to_mfn(shm_membank->mem.bank->start));
> +        nr_pages = shm_membank->mem.bank->size >> PAGE_SHIFT;
> +        ret = add_shared_memory_ref(d, page, nr_pages, nr_borrowers);
> +        if ( ret )
> +            return ret;
> +    }

... this one. The code to deal with a bank is exactly the same. But you 
need the duplication because you special case "one bank".

As I wrote in a previous patch, I don't think we should special case it. 
If the concern is memory usage, then we should look at reworking meminfo 
instead (or using a different structure).

>   
>       return 0;
>   
>    fail:
>       while ( --i >= 0 )
> -        put_page_nr(page + i, nr_borrowers);
> +    {
> +        page = mfn_to_page(maddr_to_mfn(meminfo->bank[i].start));
> +        nr_pages = PFN_DOWN(meminfo->bank[i].size);
> +        remove_shared_memory_ref(page, nr_pages, nr_borrowers);
> +    }
>       return ret;
>   }
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 12:54:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 12:54:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473159.733644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEVBq-0003As-BI; Sun, 08 Jan 2023 12:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473159.733644; Sun, 08 Jan 2023 12:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEVBq-0003Al-7O; Sun, 08 Jan 2023 12:54:18 +0000
Received: by outflank-mailman (input) for mailman id 473159;
 Sun, 08 Jan 2023 12:54:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEVBo-0003Ab-U6
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 12:54:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEVBo-0006ZW-4E; Sun, 08 Jan 2023 12:54:16 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEVBn-0004tf-VO; Sun, 08 Jan 2023 12:54:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	References:Cc:To:From:Subject:MIME-Version:Date:Message-ID;
	bh=mpwGGOGOV4+9snxHoL34MVzq/OouXdyYF0EdBGrWeZE=; b=QaKOzd2UTu+Lo9eoaRG+V/zb6r
	tTpuMAz6ZYEvjR6krF9+ne8+IWK/jFmHQSHPSEBhY3ccfPlIPtkZA/ObJDP1bWflIkHclejOyLKw8
	F7s33PsONN3sZP5khxnLJ1j0L4OxFh7DBIfw6a7oG5/tbemiPG+tnmiHAeKpR3u0PoFo=;
Message-ID: <bebd3fa6-1b04-4158-5dd6-55feba0f5560@xen.org>
Date: Sun, 8 Jan 2023 12:54:14 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 04/13] xen/arm: expand shm_membank for unprovided host
 address
From: Julien Grall <julien@xen.org>
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-5-Penny.Zheng@arm.com>
 <3832d94f-6856-82b3-ea64-a9e79460c547@xen.org>
In-Reply-To: <3832d94f-6856-82b3-ea64-a9e79460c547@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 08/01/2023 12:13, Julien Grall wrote:
> Hi Penny,
> 
> On 15/11/2022 02:52, Penny Zheng wrote:
>> When host address is not provided in "xen,shared-mem", we let Xen
>> automatically allocate requested static shared memory from heap, and it
>> stands good chance of having multiple host memory banks allocated for the
>> requested static shared memory as a result. Therefore current membank 
>> is not
>> going to cover it.
>>
>> This commit introduces a new field "mem" to cover both scenarios.
>> "struct membank" is used when host address is provided, whereas
>> "struct meminfo" shall be used when host address not provided.
> 
>  From this patch, it is not clear to me how a user can know which part 
> of the union should be used.

Ah, it is a struct rather than a union. Yet...

> 
> However... I am not entirely sure why you need to create a union because 
> in your new structure you can fit one bank.

... my point here stands.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 13:18:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 13:18:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473166.733655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEVZK-0005j1-8o; Sun, 08 Jan 2023 13:18:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473166.733655; Sun, 08 Jan 2023 13:18:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEVZK-0005iu-5o; Sun, 08 Jan 2023 13:18:34 +0000
Received: by outflank-mailman (input) for mailman id 473166;
 Sun, 08 Jan 2023 13:18:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEVZI-0005io-HX
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 13:18:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEVZI-00078g-2f; Sun, 08 Jan 2023 13:18:32 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEVZH-0005zV-Sb; Sun, 08 Jan 2023 13:18:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=7Q80rUEM6UxqVUpefjUUaTM3iolJdTQeW1ClY4I1A7c=; b=e9nO+s2xyHRlP92vbvMs1CwT6U
	Yfh6KjEdObwHMUbxKytiCXjfPh6L8j7H6RnNWXzecDxXrpnHcGghA/aka58VMwEfdwt8cmmQC4gv8
	UUbuqlUFGsrQjZb6R9XZOA6CdkyVYURLQ2cVzWOC54pvs7r4ll6HQF/ryvBS+h4pJkwo=;
Message-ID: <3de0f1fe-19a8-8cfe-4a50-f8905f64bdd6@xen.org>
Date: Sun, 8 Jan 2023 13:18:29 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-14-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v1 13/13] xen: make static shared memory supported in
 SUPPORT.md
In-Reply-To: <20221115025235.1378931-14-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 15/11/2022 02:52, Penny Zheng wrote:
> After patching previous commits, we could make feature of "static shared memory"

Are you referring to the patches in this series? If so, they seem to add 
new features, which I don't think is necessary in order to mark "static 
shared memory" as supported.

Instead, "static shared memory" could be marked as supported if we 
believe that the new code has no security hole.

Looking below, the STATIC_SHM depends on STATIC_MEMORY which is 
currently unsupported. So it seems a bit strange to mark one supported 
but not the other.

Now, in order to support them, we need to make sure that the XENMEM_* 
operations work as intended. I know some work was done on this in the 
past, but I can't remember exactly whether we fixed everything. So what 
happens if the domain (consider both the case where the domain is 
directmapped and where it is not):
   1) Remove the page?
   2) Remove the page twice? (Only in the directmap case)
   3) Request to map the page?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 16:02:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 16:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473175.733666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEY7V-0005id-SS; Sun, 08 Jan 2023 16:02:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473175.733666; Sun, 08 Jan 2023 16:02:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEY7V-0005iW-Ny; Sun, 08 Jan 2023 16:02:01 +0000
Received: by outflank-mailman (input) for mailman id 473175;
 Sun, 08 Jan 2023 16:01:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ukzP=5F=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pEY7T-0005iQ-If
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 16:01:59 +0000
Received: from sonic307-55.consmr.mail.gq1.yahoo.com
 (sonic307-55.consmr.mail.gq1.yahoo.com [98.137.64.31])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c3e3a156-8f6d-11ed-91b6-6bf2151ebd3b;
 Sun, 08 Jan 2023 17:01:55 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic307.consmr.mail.gq1.yahoo.com with HTTP; Sun, 8 Jan 2023 16:01:51 +0000
Received: by hermes--production-ne1-7b69748c4d-dzr9v (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 472b6276cf2cc0deb61dd8cd01ce9286; 
 Sun, 08 Jan 2023 16:01:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3e3a156-8f6d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673193711; bh=j/7REqUdpZNLaRsnWjMV+rHTABevisAIlRn8Ne5heHo=; h=Date:Subject:From:To:Cc:References:In-Reply-To:From:Subject:Reply-To; b=CK566PHrpi57mTF6dEykqonSl4NCcWgRHD0SxIjg8azrvsHyAGsLgNL48tylANTWNMqT27z/qefWR5yk5FlwV7a12VP5J3ohequZNOgRv3GZxF2TUuwpGHapyfzTaj9mbxvKu11l2VLv4IsgW/88Zb6S/SvSjRnrAsD0dr1SJJ81EAOg/FjYjfpI+4CCJLLcH+NjUGQEuWxCaGK5i+HzsYG0KX2feA8ZaC8dlovA1hvNqcbrhjaU2CbpWoLZdA+l91auWfKxKeTf+8n2hfYuP+dN2PRF+vDoqCdOxwIZJg+DyZGPNysZ7j5NftIoQPaksWtWclrHCMwIi9ueUqi21w==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673193711; bh=mwmcsLfD1qcJX5LwxSSnbsMwmw6uNLxC/Ndv5pDi2iH=; h=X-Sonic-MF:Date:Subject:From:To:From:Subject; b=t9cdDfsqNdFXrSZuy4KeRJ7oMspGv1IMnpARwxS3tmjcjbc7NCPPckPpZxq//d6mGQ+yaN8Jvsli1TnQm4IffSnbbhwAO/lbQrO5lveAwl1m9ol1kZGPXXgVeUX5ERxI5JKxgNCzYTvAQvH1QAlveyMsngmpm9yVjC3NU//PusBPS5yGtQvbbEb5rvgDbxYKPn05jHhfw9dmQHqR1pazcXmn6y30AhOYHRiIEXLjcFyfW2VBH9QIxJJn/8OCYXp6VQHA8MmJ9PHqaixbFIjUoKY7PnyQgR65hpuaKw/5ioKspNDkZk1qiSNgEv7un6zLBOVzQTt4Q9zXafAqQK5YEg==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <2772822b-cbd3-ea25-2742-a4de195e8dbd@aol.com>
Date: Sun, 8 Jan 2023 11:01:44 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
From: Chuck Zmudzinski <brchuckz@aol.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
 <c39b9502-0020-ce54-abd8-b362430ba086@aol.com>
 <882652f8-ddda-a7d8-85b9-da46568036d3@aol.com>
 <6931ef9f-1978-97f5-2d32-003a9e64833c@aol.com>
Content-Language: en-US
In-Reply-To: <6931ef9f-1978-97f5-2d32-003a9e64833c@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 2493

On 1/6/2023 10:02 AM, Chuck Zmudzinski wrote:
> On 1/6/23 9:31 AM, Chuck Zmudzinski wrote:
> > On 1/6/23 9:10 AM, Chuck Zmudzinski wrote:
> >> On 1/6/23 9:03 AM, Anthony PERARD wrote:
> >>> On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
> >>>> ...
> >>>> 
> >>>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> >>> 
> >>> 
> >>> This patch looks good enough. It only changes the "xenfv" machine so it
> >>> doesn't prevent a proper fix to be done in the toolstack libxl.
> >>> 
> >>> The change in xen_pci_passthrough_class_init() to try to run some code
> >>> before pci_qdev_realize() could potentially break in the future due to
> >>> been uncommon but hopefully that will be ok.
> >>> 
> >>> So if no work to fix libxl appear soon, I'm ok with this patch:

I have a patch that fixes it in libxl. It still needs a few tweaks before it is
ready for submission, but I plan to do that soon, perhaps later today or
tomorrow at the latest.

> > 
> > Well, I can tell you and others who use qemu are more comfortable
> > fixing this in libxl, so hold off for a week or so. I should have
> > a patch to fix this in libxl written and tested by then. If for
> > some reason that does not work out, then we can fix it in qemu.
>
> One last thought: the only downside to fixing this in libxl is that
> other toolstacks that configure qemu to use the xenfv machine will not
> benefit from the fix in qemu that would simplify configuring the
> guest correctly for the igd. Other toolstacks would still need to
> override the default behavior of adding the xen platform device at
> slot 2. I think no matter what, we should at least patch qemu to have
> the xen-platform device use slot 3 instead of being automatically assigned
> to slot 2 when igd-passthru=on. The rest of the fix could then be
> implemented in libxl so that other pci devices such as emulated network
> devices, other passed through pci devices, etc., do not take slot 2 when
> gfx_passthru in xl.cfg is set.

I decided to write the patch to libxl to fix this presuming no
changes to qemu. I think dealing with the "qemu behaves
differently starting from version 8" problem is more trouble
than it's worth, so I am OK with implementing the fix completely
in libxl, which means libxl will now use the "pc" machine type
when igd-passthru=on and xen_platform_pci is true, but my patch
to libxl will still use the "xenfv" machine when xen_platform_pci
is true and igd-passthru is disabled.

Cheers,

Chuck


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 16:07:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 16:07:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473183.733677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEYCM-0006Ox-Js; Sun, 08 Jan 2023 16:07:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473183.733677; Sun, 08 Jan 2023 16:07:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEYCM-0006Oq-GX; Sun, 08 Jan 2023 16:07:02 +0000
Received: by outflank-mailman (input) for mailman id 473183;
 Sun, 08 Jan 2023 16:07:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEYCL-0006Ok-MQ
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 16:07:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEYCL-0002oi-AO; Sun, 08 Jan 2023 16:07:01 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEYCL-0003go-42; Sun, 08 Jan 2023 16:07:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=lLRsBB8Nv9xfhlmusdMdDlR0vRlqvO+R9Os1teUNfd0=; b=5wgzVLwFES9dTOFu3lJnIuiKzG
	LXRpr0o5mf228VhcGUiIpChjRph8QpFsvZf30LCPs0csfPdxIA544AKUl2NXD3xZ6eUzoMXduKxLQ
	pK2k+BzvpqTXB8+uWqG475hSiB6bPV8A8qHIqNfjISyJMEt/nTvws6H5E3DTwoD0A0Cs=;
Message-ID: <e26768b7-99f7-f4e4-6ae5-094d17e1594a@xen.org>
Date: Sun, 8 Jan 2023 16:06:59 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com, michal.orzel@amd.com,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>
References: <20221221185300.5309-1-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [XEN v4] xen/arm: Probe the load/entry point address of an uImage
 correctly
In-Reply-To: <20221221185300.5309-1-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 21/12/2022 18:53, Ayan Kumar Halder wrote:
> Currently, kernel_uimage_probe() does not read the load/entry point address
> set in the uImage header. Thus, info->zimage.start is 0 (default value). This
> causes, kernel_zimage_place() to treat the binary (contained within uImage)
> as position independent executable. Thus, it loads it at an incorrect
> address.
> 
> The correct approach would be to read "uimage.load" and set
> info->zimage.start. This will ensure that the binary is loaded at the
> correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
> address).
> 
> If user provides load address (ie "uimage.load") as 0x0, then the image is
> treated as position independent executable. Xen can load such an image at
> any address it considers appropriate. A position independent executable
> cannot have a fixed entry point address.
> 
> This behavior is applicable for both arm32 and arm64 platforms.
> 
> Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
> point address set in the uImage header. With this commit, Xen will use them.
> This makes the behavior of Xen consistent with uboot for uimage headers.

The changes look good to me (with a few comments below). That said, 
before acking the code, I would like an existing user of uImage (maybe 
EPAM or Arm?) to confirm they are happy with the change.

> 
> Users who want to use Xen with statically partitioned domains, can provide
> non zero load address and entry address for the dom0/domU kernel. It is
> required that the load and entry address provided must be within the memory
> region allocated by Xen.
> 
> A deviation from uboot behaviour is that we consider load address == 0x0,
> to denote that the image supports position independent execution. This
> is to make the behavior consistent across uImage and zImage.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from v1 :-
> 1. Added a check to ensure load address and entry address are the same.
> 2. Considered load address == 0x0 as position independent execution.
> 3. Ensured that the uImage header interpretation is consistent across
> arm32 and arm64.
> 
> v2 :-
> 1. Mentioned the change in existing behavior in booting.txt.
> 2. Updated booting.txt with a new section to document "Booting Guests".
> 
> v3 :-
> 1. Removed the constraint that the entry point should be same as the load
> address. Thus, Xen uses both the load address and entry point to determine
> where the image is to be copied and the start address.
> 2. Updated documentation to denote that load address and start address
> should be within the memory region allocated by Xen.
> 3. Added constraint that user cannot provide entry point for a position
> independent executable (PIE) image.
> 
>   docs/misc/arm/booting.txt         | 26 ++++++++++++++++++
>   xen/arch/arm/include/asm/kernel.h |  2 +-
>   xen/arch/arm/kernel.c             | 45 ++++++++++++++++++++++++++++---
>   3 files changed, 69 insertions(+), 4 deletions(-)
> 
> diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
> index 3e0c03e065..12339dfecb 100644
> --- a/docs/misc/arm/booting.txt
> +++ b/docs/misc/arm/booting.txt
> @@ -23,6 +23,28 @@ The exceptions to this on 32-bit ARM are as follows:
>   
>   There are no exception on 64-bit ARM.
>   
> +Booting Guests
> +--------------
> +
> +Xen supports the legacy image header[3], zImage protocol for 32-bit
> +ARM Linux[1] and Image protocol defined for ARM64[2].
> +
> +Earlier for legacy image protocol, Xen ignored the load address and

We should explicitly say when the change was introduced. So please 
replace "Earlier" with "Until Xen 4.17...". The rest of the sentence may 
be reworded.

> +entry point specified in the header. This has now changed.
> +
> +Now, it loads the image at the load address provided in the header.
> +And the entry point is used as the kernel start address.
> +
> +A deviation from uboot is that, Xen treats "load address == 0x0" as
> +position independent execution (PIE). Thus, Xen will load such an image
> +at an address it considers appropriate. Also, user cannot specify the
> +entry point of a PIE image since the start address cannot be
> +predetermined.
> +
> +Users who want to use Xen with statically partitioned domains, can provide
> +the fixed non zero load address and start address for the dom0/domU kernel.
> +The load address and start address specified by the user in the header must
> +be within the memory region allocated by Xen.
>   
>   Firmware/bootloader requirements
>   --------------------------------
> @@ -39,3 +61,7 @@ Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/t
>   
>   [2] linux/Documentation/arm64/booting.rst
>   Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst
> +
> +[3] legacy format header
> +Latest version: https://source.denx.de/u-boot/u-boot/-/blob/master/include/image.h#L315
> +https://linux.die.net/man/1/mkimage
> diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
> index 5bb30c3f2f..4617cdc83b 100644
> --- a/xen/arch/arm/include/asm/kernel.h
> +++ b/xen/arch/arm/include/asm/kernel.h
> @@ -72,7 +72,7 @@ struct kernel_info {
>   #ifdef CONFIG_ARM_64
>               paddr_t text_offset; /* 64-bit Image only */
>   #endif
> -            paddr_t start; /* 32-bit zImage only */
> +            paddr_t start; /* Must be 0 for 64-bit Image */
>           } zimage;
>       };
>   };
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 23b840ea9e..58b3db0aa4 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -127,7 +127,7 @@ static paddr_t __init kernel_zimage_place(struct kernel_info *info)
>       paddr_t load_addr;
>   
>   #ifdef CONFIG_ARM_64
> -    if ( info->type == DOMAIN_64BIT )
> +    if ( (info->type == DOMAIN_64BIT) && (info->zimage.start == 0) )
>           return info->mem.bank[0].start + info->zimage.text_offset;
>   #endif
>   
> @@ -162,7 +162,12 @@ static void __init kernel_zimage_load(struct kernel_info *info)
>       void *kernel;
>       int rc;
>   
> -    info->entry = load_addr;
> +    /*
> +     * If the image does not have a fixed entry point, then use the load
> +     * address as the entry point.
> +     */
> +    if ( info->entry == 0 )
> +        info->entry = load_addr;
>   
>       place_modules(info, load_addr, load_addr + len);
>   
> @@ -223,10 +228,35 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>       if ( len > size - sizeof(uimage) )
>           return -EINVAL;
>   
> +    info->zimage.start = be32_to_cpu(uimage.load);
> +    info->entry = be32_to_cpu(uimage.ep);
> +
> +    /*
> +     * While uboot considers 0x0 to be a valid load/start address, for Xen
> +     * to mantain parity with zimage, we consider 0x0 to denote position

s/mantain/maintain/
s/zimage/zImage/

> +     * independent image. That means Xen is free to load such an image at
> +     * any valid address.
> +     * Thus, we will print an appropriate message.

NIT: No need to say you will print a message.

> +     */
> +    if ( info->zimage.start == 0 )
> +        printk(XENLOG_INFO
> +               "No load address provided. Xen will decide where to load it.\n");

NIT: I would consider adding an else branch that prints the requested 
load address and entry point.
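A rough standalone sketch of the probe logic with the suggested else branch (illustrative C, not the actual Xen code: uimage_probe_addrs(), the simplified struct kinfo, and the printf-based logging are hypothetical stand-ins for the real kernel_uimage_probe() path):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the relevant fields of struct kernel_info. */
struct kinfo {
    uint64_t zimage_start; /* load address from the uImage header */
    uint64_t entry;        /* entry point from the uImage header */
};

/* Decode a big-endian 32-bit header field, as be32_to_cpu() would. */
static uint32_t be32(const uint8_t b[4])
{
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8) | (uint32_t)b[3];
}

static int uimage_probe_addrs(struct kinfo *info,
                              const uint8_t load_be[4],
                              const uint8_t ep_be[4])
{
    info->zimage_start = be32(load_be);
    info->entry = be32(ep_be);

    if ( info->zimage_start == 0 )
        printf("No load address provided. Xen will decide where to load it.\n");
    else
        printf("Load address %#lx requested, entry point %#lx.\n",
               (unsigned long)info->zimage_start,
               (unsigned long)info->entry);

    /* A PIE image (load address 0) cannot have a fixed entry point. */
    if ( info->zimage_start == 0 && info->entry != 0 )
        return -EINVAL;

    return 0;
}
```

In the real patch the messages would of course go through printk() with an appropriate XENLOG_* level rather than printf().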

> +
> +    /*
> +     * If the image supports position independent execution, then user cannot
> +     * provide an entry point as Xen will load such an image at any appropriate
> +     * memory address. Thus, we need to return error.
> +     */
> +    if ( (info->zimage.start == 0) && (info->entry != 0) )
> +    {
> +        printk(XENLOG_ERR
> +               "Entry point cannot be non zero for PIE image.\n");
> +        return -EINVAL;
> +    }
> +
>       info->zimage.kernel_addr = addr + sizeof(uimage);
>       info->zimage.len = len;
>   
> -    info->entry = info->zimage.start;
>       info->load = kernel_zimage_load;
>   
>   #ifdef CONFIG_ARM_64
> @@ -366,6 +396,7 @@ static int __init kernel_zimage64_probe(struct kernel_info *info,
>       info->zimage.kernel_addr = addr;
>       info->zimage.len = end - start;
>       info->zimage.text_offset = zimage.text_offset;
> +    info->zimage.start = 0;
>   
>       info->load = kernel_zimage_load;
>   
> @@ -436,6 +467,14 @@ int __init kernel_probe(struct kernel_info *info,
>       u64 kernel_addr, initrd_addr, dtb_addr, size;
>       int rc;
>   
> +    /* We need to initialize start to 0. This field may be populated during

Coding style:

/*
  * Foo
  * Bar
  */

> +     * kernel_xxx_probe() if the image has a fixed entry point (for eg

s/eg/e.g./

> +     * uimage.ep).
> +     * We will use this to determine if the image has a fixed entry point or
> +     * the load address should be used as the start address.
> +     */
> +    info->entry = 0;
> +
>       /* domain is NULL only for the hardware domain */
>       if ( domain == NULL )
>       {

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 16:15:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 16:15:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473190.733687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEYKM-0007sA-DN; Sun, 08 Jan 2023 16:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473190.733687; Sun, 08 Jan 2023 16:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEYKM-0007s3-AV; Sun, 08 Jan 2023 16:15:18 +0000
Received: by outflank-mailman (input) for mailman id 473190;
 Sun, 08 Jan 2023 16:15:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEYKK-0007rx-Li
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 16:15:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEYKI-0002wD-B7; Sun, 08 Jan 2023 16:15:14 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEYKI-00048z-4P; Sun, 08 Jan 2023 16:15:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=GmJ/zyeoEhIUHgdbR+I4UZJePj+rfIrAL48WhbxkeSM=; b=YC8sprd7xd+Z8RbQAxqCRoy4Xq
	leyu1wzrZhxoUx8sEQ4sfJn2JH5cknpVxYb1+/BpTRcV2QzQsVf56mBo9C8+vgTEn0jEVe1PTq/KM
	Y1kE4w1pYiD/KVpLF5GFUb2o9QYLfSBmlqp0Aa9ixynOhxeN9qwAcz8/8nLxP9rovoNw=;
Message-ID: <611b0857-155a-a50b-5996-ce735ebce22d@xen.org>
Date: Sun, 8 Jan 2023 16:15:12 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Henry Wang <Henry.Wang@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20221014080917.14980-1-Henry.Wang@arm.com>
 <a947e0b4-8f76-cea6-893f-abf30ff95e0d@xen.org>
 <AS8PR08MB7991FD5994497D812FE3AE2E92249@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <9d5ab09e-650f-118d-0233-d7988f1504f1@xen.org>
 <AS8PR08MB799170627B34BD2627CE3092921D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org>
 <alpine.DEB.2.22.394.2212091409020.3075842@ubuntu-linux-20-04-desktop>
 <A52E1C09-40F1-416C-A085-2F2320EE69EA@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2] xen/arm: p2m: Populate pages for GICv2 mapping in
 arch_domain_create()
In-Reply-To: <A52E1C09-40F1-416C-A085-2F2320EE69EA@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 03/01/2023 12:34, Bertrand Marquis wrote:
> Hi,
> 
> Sorry for the delay but I have very limited access to my mails right now.
> 
>> On 9 Dec 2022, at 23:11, Stefano Stabellini <sstabellini@kernel.org 
>> <mailto:sstabellini@kernel.org>> wrote:
>>
>> On Fri, 9 Dec 2022, Julien Grall wrote:
>>> Hi Henry,
>>>
>>> On 08/12/2022 03:06, Henry Wang wrote:
>>>> I am trying to work on the follow-up improvements about the Arm P2M 
>>>> code,
>>>> and while trying to address the comment below, I noticed there was an
>>>> unfinished
>>>> discussion between me and Julien which I would like to continue and hear
>>>> opinions from all of you (if possible).
>>>>
>>>>> -----Original Message-----
>>>>> From: Julien Grall <julien@xen.org <mailto:julien@xen.org>>
>>>>> Subject: Re: [PATCH v2] xen/arm: p2m: Populate pages for GICv2 
>>>>> mapping in
>>>>> arch_domain_create()
>>>>>>> I also noticed that relinquish_p2m_mapping() is not called. This
>>>>>>> should
>>>>>>> be fine for us because arch_domain_create() should never create a
>>>>>>> mapping that requires p2m_put_l3_page() to be called.
>>>>
>>>> For the background of the discussion, this was about the failure path of
>>>> arch_domain_create(), where we only call p2m_final_teardown() which does
>>>> not call relinquish_p2m_mapping()...
>>> So all this mess with the P2M is necessary because we are mapping the 
>>> GICv2
>>> CPU interface in arch_domain_create(). I think we should consider
>>> deferring the mapping to later.
>>>
>>> Other than making the code simpler, it also means we could allocate
>>> the P2M root on the same NUMA node the domain is going to run on
>>> (that information is passed later on).
>>>
>>> There are a couple of options:
>>> 1. Introduce a new hypercall to finish the initialization of a domain
>>> (e.g. allocating the P2M and mapping the GICv2 CPU interface). This
>>> option was briefly discussed with Jan (see [2]).
>>> 2. Allocate the P2M when populating the P2M pool and defer the GICv2
>>> CPU interface mapping until the first access (similar to how we deal
>>> with MMIO access for ACPI).
>>>
>>> I find the second option neater, but it has the drawback that it
>>> requires one more trap to the hypervisor and we can't report any
>>> mapping failure (which should not happen if the P2M was correctly
>>> sized). So I am leaning towards option 2.
>>>
>>> Any opinions?
>>
>> Option 1 is not great due to the extra hypercall. But I worry that
>> option 2 might make things harder for safety because the
>> mapping/initialization becomes "dynamic". I don't know if this is a
>> valid concern.
>>
>> I would love to hear Bertrand's thoughts about it. Putting him in To:
> 
> How option 1 would work for dom0less ?

Xen would call the function directly. Effectively, this would be the same 
as all the other "hypercalls" we are using to build a dom0less domain.

> 
> Option 2 would make safety more challenging but not impossible (we have 
> a lot of other use cases where we cannot map everything on boot).
> 
> I would vote for option 2 as I think we will not certify GICv2 and it is 
> not adding another hypercall.

It sounds like option 2 is the preferred way for now. Henry, can you 
have a look?
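As a rough illustration of what option 2 amounts to, here is a standalone simulation (not Xen code: fake_p2m, handle_guest_fault(), map_count and GICV2_CPU_GFN are invented names) where the mapping is populated on the first faulting access and every later access simply hits the existing entry:

```c
#include <assert.h>
#include <stdbool.h>

#define GICV2_CPU_GFN 0x8010UL /* made-up guest frame number */
#define MAX_GFN       0x10000UL

static bool fake_p2m[MAX_GFN]; /* true = gfn has a stage-2 mapping */
static int map_count;          /* how many times we actually mapped */

/* Map the GICv2 CPU interface; in Xen this would be an MMIO mapping call. */
static void map_gicv2_cpu_iface(void)
{
    fake_p2m[GICV2_CPU_GFN] = true;
    map_count++;
}

/*
 * Stage-2 fault handler: populate the GICv2 CPU interface mapping lazily
 * on the first access, then let the guest retry the faulting instruction.
 */
static bool handle_guest_fault(unsigned long gfn)
{
    if ( gfn >= MAX_GFN )
        return false;          /* out of range: inject into the guest */
    if ( fake_p2m[gfn] )
        return true;           /* spurious fault: already mapped */
    if ( gfn == GICV2_CPU_GFN )
    {
        map_gicv2_cpu_iface(); /* first access: create the mapping */
        return true;           /* retry */
    }
    return false;              /* genuine fault */
}
```

This also shows the drawback mentioned above: a mapping failure inside the fault handler could only be reported by injecting a fault into the guest, not to the toolstack.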

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 16:30:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 16:30:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473198.733698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEYYy-0001lO-M6; Sun, 08 Jan 2023 16:30:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473198.733698; Sun, 08 Jan 2023 16:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEYYy-0001lH-JU; Sun, 08 Jan 2023 16:30:24 +0000
Received: by outflank-mailman (input) for mailman id 473198;
 Sun, 08 Jan 2023 16:30:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEYYx-0001lB-7b
 for xen-devel@lists.xenproject.org; Sun, 08 Jan 2023 16:30:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEYYu-0003KX-74; Sun, 08 Jan 2023 16:30:20 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEYYu-0004dP-09; Sun, 08 Jan 2023 16:30:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=VqVrJAhAwMQZyArxthFYx4Bi7j19mKF+UQf2Vp0behw=; b=wyBgwnw1ukLAB2drrYM1VA2lQk
	fUiUHCf0C+5mDIzder8uOpk0YCHZyQbuhU781JBLgJcKQZXF/LBlJp7UPH1bQ4CrpliKw6Ey8JApk
	vAY9ZOuj0SyAbVbMWvdGSyzyd1NFXr1LoDANwQJANlArhatBwoF+M1QMeRp9HVjIeVLs=;
Message-ID: <7b0435dd-bf2b-fa26-daba-7dec43f9c88e@xen.org>
Date: Sun, 8 Jan 2023 16:30:17 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
 "Smith, Jackson" <rsmith@riversideresearch.org>,
 "Brookes, Scott" <sbrookes@riversideresearch.org>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 "bertrand.marquis@arm.com" <bertrand.marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 "christopher.w.clark@gmail.com" <christopher.w.clark@gmail.com>
References: <BN0P110MB1642835E0DE845205B5EA59CCFE39@BN0P110MB1642.NAMP110.PROD.OUTLOOK.COM>
 <b7a367d4-a9df-0733-5a11-6ba11043c6b5@xen.org>
 <BN0P110MB1642A6DCBD15780CD8767B8CCFE19@BN0P110MB1642.NAMP110.PROD.OUTLOOK.COM>
 <513d0cc3-a305-5029-32f7-67993ae83c55@xen.org>
 <alpine.DEB.2.22.394.2212151725090.315094@ubuntu-linux-20-04-desktop>
 <7a7a7156-138d-a53c-fb65-a210e14bd8c1@xen.org>
 <BN0P110MB16429FF1A9FF3507A684C5B2CFEA9@BN0P110MB1642.NAMP110.PROD.OUTLOOK.COM>
 <Y6I3oqYdTKa/57I/@itl-email>
 <alpine.DEB.2.22.394.2212211639070.4079@ubuntu-linux-20-04-desktop>
 <af62bf3c-1046-3ed2-8662-e79375fe4794@xen.org>
 <alpine.DEB.2.22.394.2212221109220.4079@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
Subject: Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen
In-Reply-To: <alpine.DEB.2.22.394.2212221109220.4079@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 22/12/2022 21:28, Stefano Stabellini wrote:
> On Thu, 22 Dec 2022, Julien Grall wrote:
>>> What other hypervisors might or might not do should not be a factor in
>>> this discussion and it would be best to leave it aside.
>>
>> To be honest, Demi has a point. At the moment, VMF is a very niche use-case
>> (see more below). So you would end up using less than 10% of the normal Xen
>> on Arm code. A lot of people will likely wonder why use Xen in this case?
> 
> [...]
> 
>>>   From an AMD/Xilinx point of view, most of our customers using Xen in
>>> productions today don't use any hypercalls in one or more of their VMs.
>> This suggests a mix of guests are running (some using hypercalls and other
>> not). It would not be possible if you were using VMF.
> 
> It is true that the current limitations are very restrictive.
> 
> In embedded, we have a few pure static partitioning deployments where no
> hypercalls are required (Linux is using hypercalls today but it could do
> without), so maybe VMF could be enabled, but admittedly in those cases
> the main focus today is safety and fault tolerance, rather than
> confidential computing.
> 
> 
>>> Xen is great for these use-cases and it is rather common in embedded.
>>> It is certainly a different configuration from what most have come to
>>> expect from Xen on the server/desktop x86 side. There is no question
>>> that guests without hypercalls are important for Xen on ARM.
>>> As a Xen community we have a long history and strong interest in making
>>> Xen more secure and also, more recently, safer (in the ISO 26262
>>> safety-certification sense). The VMF work is very well aligned with both
>>> of these efforts and any additional burder to attackers is certainly
>>> good for Xen.
>>
>> I agree that we have a strong focus on making Xen more secure. However, we
>> also need to look at the use cases for it. As it stands, there will be no:
>>    - IOREQ use (don't think about emulating TPM)
>>    - GICv3 ITS
>>    - stage-1 SMMUv3
>>    - decoding of instructions when there is no syndrome
>>    - hypercalls (including event channels)
>>    - dom0
>>
>> That's a lot of Xen features that can't be used. Effectively you will make Xen
>> more "secure" for a very few users.
> 
> Among these, the main problems affecting AMD/Xilinx users today would be:
> - decoding of instructions
> - hypercalls, especially event channels
> 
> Decoding of instructions would affect all our deployments. For
> hypercalls, even in static partitioning deployments, sometimes event
> channels are used for VM-to-VM notifications.
> 
> 
>>> Now the question is what changes are necessary and how to make them to
>>> the codebase. And if it turns out that some of the changes are not
>>> applicable or too complex to accept, the decision will be made purely
>>> from a code maintenance point of view and will have nothing to do with
>>> VMs making no hypercalls being unimportant (i.e. if we don't accept one
>>> or more patches is not going to have anything to do with the use-case
>>> being unimportant or what other hypervisors might or might not do).
>> I disagree, I think this is also about use cases. On paper VMF looks
>> very good, but so far it still has a big flaw (the TTBR can be changed)
>> and it would greatly restrict what you can do.
> 
> We would need to be very clear in the commit messages and documentation
> that with the current version of VMF we do *not* achieve confidential
> computing and we do *not* offer protections comparable to AMD SEV. It is
> still possible for Xen to access guest data, it is just a bit harder.
> 
>  From an implementation perspective, if we can find a way to implement it
> that would be easy to maintain, then it might still be worth it. It
> would probably take only a small amount of changes on top of the "Remove
> the directmap" series to make it so "map_domain_page" doesn't work
> anymore after boot.

None of the callers of map_domain_page() expect the function to fail. So 
some treewide changes will be needed in order to deal with 
map_domain_page() not working. This is not something I am willing to 
accept if the only user is VMF (at the moment I can't think of any other).

So instead, we would need to come up with a way where map_domain_page() 
will never be called at runtime when VMF is in use (maybe by compiling 
out some code?). I haven't really looked in detail to say whether 
that's feasible.
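One possible shape of the "compiling out some code" idea, sketched as a standalone toy (the IS_ENABLED_VMF macro, map_domain_page_sim() and the directmap argument are invented for illustration): with the hypothetical VMF option enabled, the runtime mapping path simply has no working implementation, so any remaining caller is a build-time or review-time bug rather than a runtime failure to handle treewide.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Stand-in for a Kconfig option; flip to 1 to simulate a CONFIG_VMF=y build. */
#define IS_ENABLED_VMF 0

/*
 * Toy model of map_domain_page(): with the directmap available (non-VMF),
 * mapping a machine frame is just pointer arithmetic into the directmap.
 * Under VMF the directmap is torn down after boot, so there is nothing
 * this function could return; a real build would compile the callers out
 * (or BUG()) rather than return NULL.
 */
static void *map_domain_page_sim(unsigned long mfn, unsigned char *directmap)
{
    if ( IS_ENABLED_VMF )
        return NULL; /* no runtime mappings once the directmap is gone */
    return directmap + mfn * PAGE_SIZE;
}
```

The point of the sketch is only the structure: the VMF decision is made at build time, so the "map_domain_page() may fail" case never needs to be threaded through every caller.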

> 
> That might be worth exploring if you and Jackson agree?

I am OK to continue exploring it because I think some bits will still be 
useful in the general case. As for the full solution, I will wait and 
see the results before deciding whether this is something that I would 
be happy to merge/maintain.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 16:54:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 16:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473205.733710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEYwP-0004Ca-KE; Sun, 08 Jan 2023 16:54:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473205.733710; Sun, 08 Jan 2023 16:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEYwP-0004CT-HN; Sun, 08 Jan 2023 16:54:37 +0000
Received: by outflank-mailman (input) for mailman id 473205;
 Sun, 08 Jan 2023 16:54:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEYwO-0004CJ-1e; Sun, 08 Jan 2023 16:54:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEYwN-0003rp-Uq; Sun, 08 Jan 2023 16:54:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEYwN-0004np-E5; Sun, 08 Jan 2023 16:54:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEYwN-0008Jw-DJ; Sun, 08 Jan 2023 16:54:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=desVzAkbhywnJdfwHf3WYpQ1snGRZykxmVaKUSw5Ddo=; b=Er1uDf3Nhz6M9umu5bV95Y8+Lg
	VyACGTB63k72Sm2tMf80g7pPE0s6IwWJaA4gDaBpenxWh4TwE0FC+wQ1mYamSgTFCw73D00C0kGn2
	474dyuNF0dGwQuo68aEE0lbH9/0exkuY1IUdWS5lDE8/vjvqW6pFEov6KuStMbmzcAQc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175625-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175625: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Jan 2023 16:54:35 +0000

flight 175625 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175625/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                9b43a525db125799df81e6fbef712a2ae50bfc5d
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   92 days
Failing since        173470  2022-10-08 06:21:34 Z   92 days  193 attempts
Testing same since   175622  2023-01-07 22:40:16 Z    0 days    2 attempts

------------------------------------------------------------
3311 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505194 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 08 19:57:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Jan 2023 19:57:17 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175627-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175627: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Jan 2023 19:57:02 +0000

flight 175627 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175627/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 test-arm64-arm64-xl         18 guest-start/debian.repeat fail REGR. vs. 175623
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175623
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175623
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                528d9f33cad5245c1099d77084c78bb2244d5143
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    0 days
Testing same since   175627  2023-01-08 14:40:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 378 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 01:32:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 01:32:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473287.733792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEh1T-0008N0-Sk; Mon, 09 Jan 2023 01:32:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473287.733792; Mon, 09 Jan 2023 01:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEh1T-0008Ms-MY; Mon, 09 Jan 2023 01:32:23 +0000
Received: by outflank-mailman (input) for mailman id 473287;
 Mon, 09 Jan 2023 01:32:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEh1S-0008Mi-Et; Mon, 09 Jan 2023 01:32:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEh1S-0001si-CU; Mon, 09 Jan 2023 01:32:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEh1R-0006M0-Bi; Mon, 09 Jan 2023 01:32:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEh1R-0004iI-9j; Mon, 09 Jan 2023 01:32:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CcQ7YUgfX8GnSvWrI7mdAC5X6XqwrbR/DNYaueiSeM0=; b=1ORGlAKQiy+h0VrBmeImJpMkli
	W4paYqDaXaV+ehNa2zjA2hKoEVMpAtk/uDXz2g6Hb13dvtHTPjWysBr9KBkJZ1jGOnuEsWc4zvUEJ
	nARwyhq8PNZCr0YDiezkyoYu7jqbYrtVO+4k8QybDNHMnFZjeQPSi8UqPuc8elRG/39k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175628-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175628: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=93928d485d9df12be724cbdf1caa7d197b65001e
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 01:32:21 +0000

flight 175628 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175628/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                93928d485d9df12be724cbdf1caa7d197b65001e
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   93 days
Failing since        173470  2022-10-08 06:21:34 Z   92 days  194 attempts
Testing same since   175628  2023-01-08 17:13:25 Z    0 days    1 attempts

------------------------------------------------------------
3313 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505370 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 01:41:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 01:41:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473299.733803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEh9v-0001Uv-VH; Mon, 09 Jan 2023 01:41:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473299.733803; Mon, 09 Jan 2023 01:41:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEh9v-0001Uo-Rb; Mon, 09 Jan 2023 01:41:07 +0000
Received: by outflank-mailman (input) for mailman id 473299;
 Mon, 09 Jan 2023 01:41:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qhPP=5G=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pEh9u-0001Ui-Sw
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 01:41:07 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2087.outbound.protection.outlook.com [40.107.6.87])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ad9135d6-8fbe-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 02:41:04 +0100 (CET)
Received: from ZR0P278CA0157.CHEP278.PROD.OUTLOOK.COM (2603:10a6:910:41::22)
 by AS8PR08MB6341.eurprd08.prod.outlook.com (2603:10a6:20b:33f::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 01:41:02 +0000
Received: from VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:910:41:cafe::96) by ZR0P278CA0157.outlook.office365.com
 (2603:10a6:910:41::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Mon, 9 Jan 2023 01:41:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT055.mail.protection.outlook.com (100.127.144.130) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Mon, 9 Jan 2023 01:41:01 +0000
Received: ("Tessian outbound 0d7b2ab0f13d:v132");
 Mon, 09 Jan 2023 01:41:00 +0000
Received: from 4d1961368fd6.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B9CB8763-F1E7-4A7F-BF3A-D8121F8A50BE.1; 
 Mon, 09 Jan 2023 01:40:55 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4d1961368fd6.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Jan 2023 01:40:55 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAWPR08MB10317.eurprd08.prod.outlook.com (2603:10a6:102:330::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 01:40:47 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 01:40:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad9135d6-8fbe-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gElYYVRIq6cb2IFc9bv0ot+K3mKiPjAQBlshZI0YrbY=;
 b=LwuMYO/RoNMUY/KYHCgKDTiIZ91Zlp/Uwbt/kCvOsGBoX+JPZTDJ3NjEZA/gqklO4hRCwCNx+vGmjGgOib7rup4sfg2GLijvaiyL3P2epMUkpu372v10YtJF7Ok2vCKhcTU8u3HDeHgjVoi9fQu6AkhsRjmraaJC7dW96c6TDIA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2] xen/arm: p2m: Populate pages for GICv2 mapping in
 arch_domain_create()
Thread-Topic: [PATCH v2] xen/arm: p2m: Populate pages for GICv2 mapping in
 arch_domain_create()
Thread-Index:
 AQHY36RSrOIThkYtnEq9RrtWc+uYX64NsO2AgAACdYCAAZeJAIBUVNkAgAKflACAADe5AIAmqQQAgAgZZwCAAJ2dcA==
Date: Mon, 9 Jan 2023 01:40:45 +0000
Message-ID:
 <AS8PR08MB799108673439A1983DB695BA92FE9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221014080917.14980-1-Henry.Wang@arm.com>
 <a947e0b4-8f76-cea6-893f-abf30ff95e0d@xen.org>
 <AS8PR08MB7991FD5994497D812FE3AE2E92249@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <9d5ab09e-650f-118d-0233-d7988f1504f1@xen.org>
 <AS8PR08MB799170627B34BD2627CE3092921D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org>
 <alpine.DEB.2.22.394.2212091409020.3075842@ubuntu-linux-20-04-desktop>
 <A52E1C09-40F1-416C-A085-2F2320EE69EA@arm.com>
 <611b0857-155a-a50b-5996-ce735ebce22d@xen.org>
In-Reply-To: <611b0857-155a-a50b-5996-ce735ebce22d@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A7E01A4412B29B418CE6B5E56B7E40E9.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAWPR08MB10317:EE_|VI1EUR03FT055:EE_|AS8PR08MB6341:EE_
X-MS-Office365-Filtering-Correlation-Id: 4b29914d-64ac-4c81-14b2-08daf1e2906b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB10317
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5c511e29-7120-4f37-506c-08daf1e28704
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 01:41:01.2780
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b29914d-64ac-4c81-14b2-08daf1e2906b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6341

Hi Julien and Bertrand,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Subject: Re: [PATCH v2] xen/arm: p2m: Populate pages for GICv2 mapping in
> arch_domain_create()
> 
> Hi Bertrand,
> 
> On 03/01/2023 12:34, Bertrand Marquis wrote:
> > Hi,
> >
> > Sorry for the delay but I have very limited access to my mails right now.
> >
> >> On 9 Dec 2022, at 23:11, Stefano Stabellini <sstabellini@kernel.org
> >> <mailto:sstabellini@kernel.org>> wrote:
> >>
> >> On Fri, 9 Dec 2022, Julien Grall wrote:
> >>> Hi Henry,
> >>>
> >>> On 08/12/2022 03:06, Henry Wang wrote:
> >>>> I am trying to work on the follow-up improvements about the Arm P2M
> >>>> code,
> >>>> and while trying to address the comment below, I noticed there was an
> >>>> unfinished
> >>>> discussion between me and Julien which I would like to continue and
> here
> >>>> opinions from all of you (if possible).
> >>>>
> >>>> For the background of the discussion, this was about the failure path of
> >>>> arch_domain_create(), where we only call p2m_final_teardown() which
> does
> >>>> not call relinquish_p2m_mapping()...
> >>> So all this mess with the P2M is necessary because we are mapping the
> >>> GICv2
> >>> CPU interface in arch_domain_create(). I think we should consider to
> >>> defer the
> >>> mapping to later.
> >>>
> >>> Other than it makes the code simpler, it also means we could also the
> >>> P2M root
> >>> on the same numa node the domain is going to run (those information
> >>> are passed
> >>> later on).
> >>>
> >>> There is a couple of options:
> >>> 1. Introduce a new hypercall to finish the initialization of a domain
> >>> (e.g.
> >>> allocating the P2M and map the GICv2 CPU interface). This option was
> >>> briefly
> >>> discussed with Jan (see [2])./
> >>> 2. Allocate the P2M when populate the P2M pool and defer the GICv2
> CPU
> >>> interface mapping until the first access (similar to how with deal
> >>> with MMIO
> >>> access for ACPI).
> >>>
> >>> I find the second option is neater but it has the drawback that it
> >>> requires on
> >>> more trap to the hypervisor and we can't report any mapping failure
> >>> (which
> >>> should not happen if the P2M was correctly sized). So I am leaning
> >>> towards
> >>> option 2.
> >>>
> >>> Any opinions?
> >>
> >> Option 1 is not great due to the extra hypercall. But I worry that
> >> option 2 might make things harder for safety because the
> >> mapping/initialization becomes "dynamic". I don't know if this is a
> >> valid concern.
> >>
> >> I would love to hear Bertrand's thoughts about it. Putting him in To:
> >
> > How option 1 would work for dom0less ?
> 
> Xen would call the function directly. Effectively, this would the same
> as all the other "hypercalls" we are using to build a dom0less domain.
> 
> >
> > Option 2 would make safety more challenging but not impossible (we have
> > a lot of other use cases where we cannot map everything on boot).
> >
> > I would vote for option 2 as I think we will not certify gicv2 and it is
> > not adding an other hyper call.

Thanks for the input.

> 
> It sounds like option 2 is the preferred way for now. Henry, can you
> have a look?

Yes, I would love to prepare some patches to follow the option 2.

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 03:11:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 03:11:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473310.733820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEiYy-0002yT-Ix; Mon, 09 Jan 2023 03:11:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473310.733820; Mon, 09 Jan 2023 03:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEiYy-0002yM-Fs; Mon, 09 Jan 2023 03:11:04 +0000
Received: by outflank-mailman (input) for mailman id 473310;
 Mon, 09 Jan 2023 03:11:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEiYx-0002xu-Ad; Mon, 09 Jan 2023 03:11:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEiYx-0004vz-8g; Mon, 09 Jan 2023 03:11:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEiYw-0000Bo-SI; Mon, 09 Jan 2023 03:11:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEiYw-00049b-Rq; Mon, 09 Jan 2023 03:11:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2lHWD1fdzlJWE47m3fAVModD9B7f9OG9gdk4i0XACvE=; b=oFTPYXZEH/gCp+V29T0r6qB5dX
	teHZzT8gWBjz2fTgzT2K8eEagKbra5wKpHjeDhhuz5ektlospRr2EDNLlRraihY+bTNGuMSLz56+B
	AqNW8k/twhMhjPDm9LVwrN3BiQpGCy1FKFNnSL80ZrEpXDacZsKjrLU8wqtgNwiD5DTo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175629-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175629: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=528d9f33cad5245c1099d77084c78bb2244d5143
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 03:11:02 +0000

flight 175629 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175629/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit2     <job status>                 broken
 build-arm64-xsm               6 xen-build      fail in 175627 REGR. vs. 175623

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-credit2   5 host-install(5)          broken pass in 175627
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 175627 pass in 175629
 test-arm64-arm64-xl 18 guest-start/debian.repeat fail in 175627 pass in 175629
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail in 175627 pass in 175629

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 175627 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 175627 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175623
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175623
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                528d9f33cad5245c1099d77084c78bb2244d5143
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    1 days
Testing same since   175627  2023-01-08 14:40:14 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-credit2 broken
broken-step test-amd64-amd64-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 378 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 06:35:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 06:35:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473322.733839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pElkq-0005wH-6D; Mon, 09 Jan 2023 06:35:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473322.733839; Mon, 09 Jan 2023 06:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pElkq-0005wA-3g; Mon, 09 Jan 2023 06:35:32 +0000
Received: by outflank-mailman (input) for mailman id 473322;
 Mon, 09 Jan 2023 06:35:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CfuE=5G=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pElkp-0005w4-1K
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 06:35:31 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ceac5909-8fe7-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 07:35:29 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 44CC329395;
 Mon,  9 Jan 2023 06:35:28 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E4B0F134AD;
 Mon,  9 Jan 2023 06:35:27 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id hsJENq+1u2OUIwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 09 Jan 2023 06:35:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceac5909-8fe7-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673246128; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3cpNAoTzx4mYIaRdBrMGabQXSJ35A+xG9+z4dCYHRHs=;
	b=s9ErdALrW00XQok62mNw/8rTGW2CPZT5VanCuBOfU2t9SX+O6twaig4ldnBCmbu7svcodi
	US61xk/DSj01wra/MEW1iw7TNmjm7kUe+tYDIsj+Ppv8hn8+JgPTXbe2lnwv3FGuGpiN8B
	XZXqWn9q/4KE7jDM+I11RfMfvjnbh9Y=
Message-ID: <fe81a331-530d-9328-ebc4-1da2c4ec4571@suse.com>
Date: Mon, 9 Jan 2023 07:34:31 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] x86/xen: Remove the unused function p2m_index()
To: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Cc: boris.ostrovsky@oracle.com, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Abaci Robot <abaci@linux.alibaba.com>
References: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------EewI0ZGsy7qxZ2dMrnpmluuY"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------EewI0ZGsy7qxZ2dMrnpmluuY
Content-Type: multipart/mixed; boundary="------------K0aOUOdipexF2AKtyJOa40IA";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Cc: boris.ostrovsky@oracle.com, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Abaci Robot <abaci@linux.alibaba.com>
Message-ID: <fe81a331-530d-9328-ebc4-1da2c4ec4571@suse.com>
Subject: Re: [PATCH v2] x86/xen: Remove the unused function p2m_index()
References: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>
In-Reply-To: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>

--------------K0aOUOdipexF2AKtyJOa40IA
Content-Type: multipart/mixed; boundary="------------GKoe8SjNPT20lOoePu2aELwU"

--------------GKoe8SjNPT20lOoePu2aELwU
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 05.01.23 10:01, Jiapeng Chong wrote:
> The function p2m_index is defined in the p2m.c file, but not called
> elsewhere, so remove this unused function.
> 
> arch/x86/xen/p2m.c:137:24: warning: unused function 'p2m_index'.
> 
> Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3557
> Reported-by: Abaci Robot <abaci@linux.alibaba.com>
> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------GKoe8SjNPT20lOoePu2aELwU
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------GKoe8SjNPT20lOoePu2aELwU--

--------------K0aOUOdipexF2AKtyJOa40IA--

--------------EewI0ZGsy7qxZ2dMrnpmluuY
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO7tXcFAwAAAAAACgkQsN6d1ii/Ey+N
sgf/XR9GrAGEdbpZ0hQ7Pwqb9n7fjbuBP/LRDat2KloqfIVY/g6wXVpazTu/pP9QBbKfsKbmwGZT
JldRX6f91xy+xKl2Rh/9fOIHKL19uvNmKJMNmMTIXcVGDH8lLffBNngyqVDMGNCGjV9CI4AtoIfq
rD/FcfL+EiSsjITwBoqdH1oIaYolfrlGccPZ7v1+c5vSN6UzflBVmGU8buZrSdslq0tMHBtwPuAQ
XetoH09lULfGo83mmNTZ6/O+gXOt3hzKFxkLIdcRRUMPCFtibqQSe2TadpuPCS0gw/oOwcFGUO0z
y0618kgsKF+yPhL7B2wPVAe/MydQ3LsUex1kdax1JA==
=qHuF
-----END PGP SIGNATURE-----

--------------EewI0ZGsy7qxZ2dMrnpmluuY--


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 07:40:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 07:40:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473329.733854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEmlP-0004Tu-Ud; Mon, 09 Jan 2023 07:40:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473329.733854; Mon, 09 Jan 2023 07:40:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEmlP-0004Tn-QR; Mon, 09 Jan 2023 07:40:11 +0000
Received: by outflank-mailman (input) for mailman id 473329;
 Mon, 09 Jan 2023 07:40:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CfuE=5G=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pEmlN-0004Th-Ra
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 07:40:09 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6e8425d-8ff0-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 08:40:08 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7DB577688E;
 Mon,  9 Jan 2023 07:40:07 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5EC5A13583;
 Mon,  9 Jan 2023 07:40:07 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id JyK7FdfEu2MpPgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 09 Jan 2023 07:40:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6e8425d-8ff0-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673250007; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NSvDl3n0oqx37p6IpNF7M63u6VQ9OMHV69YCbkVaQ4M=;
	b=Z++I4M9J3r2vZzgao5n8Nv5kJxoqLIJLdYgGV/bbOfXdlK1uJnghIz+PEFc1/qLq4K9IA3
	cLWDqq+UM2ByZ1RBkz8E65UnqJIvBlok+ig/XEy24JWABm6xEi8MvcZdL5KZ75iuVsEr/r
	xIsbvOGQCd2PkfnX/DUS65wMc2BQrcQ=
Message-ID: <a1ee03ca-1304-17c8-d075-9a235aa02fee@suse.com>
Date: Mon, 9 Jan 2023 08:40:06 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: David Woodhouse <dwmw2@infradead.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>
References: <22a2352464be2df92dc0d30a955034c59fdf3927.camel@infradead.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: (Ab)using xenstored without Xen
In-Reply-To: <22a2352464be2df92dc0d30a955034c59fdf3927.camel@infradead.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------ZWAth3w0qNbwl4AZad2QP00l"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------ZWAth3w0qNbwl4AZad2QP00l
Content-Type: multipart/mixed; boundary="------------KL09YnqV00A0Z8RF2HctPEEv";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: David Woodhouse <dwmw2@infradead.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>
Message-ID: <a1ee03ca-1304-17c8-d075-9a235aa02fee@suse.com>
Subject: Re: (Ab)using xenstored without Xen
References: <22a2352464be2df92dc0d30a955034c59fdf3927.camel@infradead.org>
In-Reply-To: <22a2352464be2df92dc0d30a955034c59fdf3927.camel@infradead.org>

--------------KL09YnqV00A0Z8RF2HctPEEv
Content-Type: multipart/mixed; boundary="------------bKnsV4VNMmfJvVAL63oPCe6j"

--------------bKnsV4VNMmfJvVAL63oPCe6j
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Sorry for the late answer, but I was pretty busy before my 3 week time off. :-)

On 20.12.22 13:02, David Woodhouse wrote:
> I've been working on getting qemu to support Xen HVM guests 'natively'
> under KVM:
> https://lore.kernel.org/qemu-devel/20221216004117.862106-1-dwmw2@infradead.org/T/
> 
> The basic platform is mostly working and I can start XTF tests with
> 'qemu -kernel'. Now it really needs a xenstore.
> 
> I'm thinking of implementing the basic shared ring support on the qemu
> side, then communicating with the real xenstored over its socket
> interface. It would need a 'SU' command in the xenstored protocol to
> make it treat that connection as an unprivileged connection from a
> specific domid, analogous to 'INTRODUCE' but over the existing
> connection.

Wouldn't an "unprivileged" socket make more sense?

> However, there might be a bit of work to do first. At first, it seemed
> like xenstored did start up OK and qemu could even connect to it when
> not running under Xen. But that was a checkout from a few months ago,
> and even then it would segfault the first time we try to actually
> *write* any nodes.
> 
> Newer xenstored breaks even sooner because since commit 60e2f6020
> ("tools/xenstore: move the call of setup_structure() to dom0
> introduction") it doesn't even have a tdb_ctx if you start it with the
> -D option, so it segfaults even on running xenstore-ls. And if I move
> the tdb part of setup_structure() back to be called from where it was,
> we get a later crash in get_domain_info() because the xc_handle is
> NULL.
> 
> Which is kind of fair enough, because xenstored is designed to run
> under Xen :)
> 
> But what *is* the -D option for? It goes back to commit bddd41366 in
> 2005 which talks about allowing multiple concurrent transactions, and
> doesn't mention the new option at all. It seems to be fairly hosed at
> the moment.

I guess this was some debugging add-on which hasn't been used for ages.

I'm inclined to just remove the -D option.

> Is it reasonable to attempt "fixing" xenstored to run without actual
> Xen, so that we can use it for virtual Xen support?

I don't see a major problem with that.

The result shouldn't be too ugly, of course, and I don't see any effort
on Xen side to test any changes for not breaking your use case.


Juergen
--------------bKnsV4VNMmfJvVAL63oPCe6j
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------bKnsV4VNMmfJvVAL63oPCe6j--

--------------KL09YnqV00A0Z8RF2HctPEEv--

--------------ZWAth3w0qNbwl4AZad2QP00l
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO7xNYFAwAAAAAACgkQsN6d1ii/Ey/p
fwf+IuKOn8kcvHYU4CBvgV/Qmri9ctxjGGmSCBQgMrmnHVuCs6WVZXdmpwWIgNXTuPYgPSiBUtJJ
iLlo2ViuniKGYVeBBRbBX0C+S1p91RSDYPIQ3k/t7WH1pdpd2TK4wYb794KqLoqE8D9LJ7RHdwwg
RwVOQbAIHYYlbfEJg/v1ljMOY8UmeUgZr/8q0ucQf3RWs2GvTHVYjKIKAf0SWe9Bl4LDTi4783Vn
fyYAl1RignJYuKIf3pgKYeC8nreJMsY9n4uI9tiOMOuBlGL36vIuHEbzIGwiFjslnq4tnv6X+Sey
fYwzIPCGME0i+GX+uqI2Ou9RiRkldzjsXttmYZvA7Q==
=SOOS
-----END PGP SIGNATURE-----

--------------ZWAth3w0qNbwl4AZad2QP00l--


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 07:49:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 07:49:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473335.733864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEmuI-0005AG-O1; Mon, 09 Jan 2023 07:49:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473335.733864; Mon, 09 Jan 2023 07:49:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEmuI-0005A9-LP; Mon, 09 Jan 2023 07:49:22 +0000
Received: by outflank-mailman (input) for mailman id 473335;
 Mon, 09 Jan 2023 07:49:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LgaK=5G=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pEmuG-0005A0-89
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 07:49:20 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2078.outbound.protection.outlook.com [40.107.103.78])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1d8e83a7-8ff2-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 08:49:17 +0100 (CET)
Received: from FR3P281CA0204.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a5::9) by
 PA4PR08MB6144.eurprd08.prod.outlook.com (2603:10a6:102:e3::18) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18; Mon, 9 Jan 2023 07:49:14 +0000
Received: from VI1EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a5:cafe::cc) by FR3P281CA0204.outlook.office365.com
 (2603:10a6:d10:a5::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Mon, 9 Jan 2023 07:49:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT016.mail.protection.outlook.com (100.127.144.158) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Mon, 9 Jan 2023 07:49:13 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Mon, 09 Jan 2023 07:49:12 +0000
Received: from 992d601173f9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7360FF9E-90E4-47EF-A60E-D7F9330BDBDF.1; 
 Mon, 09 Jan 2023 07:49:03 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 992d601173f9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Jan 2023 07:49:03 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by AS8PR08MB8086.eurprd08.prod.outlook.com (2603:10a6:20b:54b::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 07:48:59 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 07:48:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d8e83a7-8ff2-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AKAXuZ2ln55fEjAN4R5sFOXI/x6Y4sfBhXHBHo1uyiQ=;
 b=AAP78dGGludVtFDkbJmRU+EU6e6YOjd/rCBm22UCIU7GVRuT55Yg55n4qSc7LPvqCmfdl2MD9XfEy+DcTkE78KQqdiEQOAvykude1+13eANHJP4Vt07ysQWDdFpDFWUL5P48xecLdnwlJ+5IxZD5Hile3UFg/l2bTAUm8N5l2ZA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FIUxU/SsqmBKeN11QdSKzPD1xdXXbivyP38+sSER05jXzK+5kaacysVwgG5c7HaYvjd85tKGNGxTwDzLWkNoGMU2uKBq835/09PBXR1S2jokcy0gHr/9X0klKmBtSxY/loGTINWQMl2NZPfci/bK5VGt8zSj3IsuHKEbJAKRcWOnpUGseLYiTBcKHSKBXYW4zUFKflWo4EDWZFxnMqDJzHombPYEQ6IHgd8gt5oxm8USVo0EJKbCDn06w7LcdRDpn/fRXxk1ppnAEBGrBHqQFuGL+fNvkTb0dpCT0vpbwhIZWrQLPXDG2uvN1uQsuSpGE9U27r5nVWX6iT65f6I9zA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AKAXuZ2ln55fEjAN4R5sFOXI/x6Y4sfBhXHBHo1uyiQ=;
 b=KItY2d6wp3XmTl2tNndMyZcO3+pjdKwXp1+gRh4kwCbCzn8m/+IL6KwTyxXkfkSsitjdW+xrPSBHRKHUDPgvmp/pWP1Z4rC+0EZ5Lh7gqTJNV41xrP6g6EzrbxDdUGkaoAloCQOyIj2Z2l2+zEV1pQaZpUY9ZHEB+wqwG2YUK+J9KufIbwb1jeTYb+7WDW/uMoUOX0+6RsOCE1Xnz7RBoHCCXrYU/U6OlAWJwWYFxX1960meyyDA5Qv1Y6RpY25RELXcG0bt9h97R5J/dBKqp5Dogx0OqrVMDX+Bl5YvH8xrp5jU2JCLkK/zbplKZcjAASCJSTlEtoNuDdExdDVMZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AKAXuZ2ln55fEjAN4R5sFOXI/x6Y4sfBhXHBHo1uyiQ=;
 b=AAP78dGGludVtFDkbJmRU+EU6e6YOjd/rCBm22UCIU7GVRuT55Yg55n4qSc7LPvqCmfdl2MD9XfEy+DcTkE78KQqdiEQOAvykude1+13eANHJP4Vt07ysQWDdFpDFWUL5P48xecLdnwlJ+5IxZD5Hile3UFg/l2bTAUm8N5l2ZA=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v1 01/13] xen/arm: re-arrange the static shared memory
 region
Thread-Topic: [PATCH v1 01/13] xen/arm: re-arrange the static shared memory
 region
Thread-Index: AQHY+J1fJT59o0hz5k65YJBzZn+QX66Uu+KAgAFK6sA=
Date: Mon, 9 Jan 2023 07:48:59 +0000
Message-ID:
 <AM0PR08MB4530A8EA76480621255CEFC9F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-2-Penny.Zheng@arm.com>
 <dd5b93c3-51d1-40ad-88b4-5bbd54633651@xen.org>
In-Reply-To: <dd5b93c3-51d1-40ad-88b4-5bbd54633651@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 77C55A13AE0A2942AE889D654333364B.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|AS8PR08MB8086:EE_|VI1EUR03FT016:EE_|PA4PR08MB6144:EE_
X-MS-Office365-Filtering-Correlation-Id: 802ccaa4-56dc-481e-a91f-08daf216002c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 qR+k8egtqPC5/4MXYyif4xA2C0CBn4RzLOh8XsZLZ3nh4b4KMrZRW8p3ocvAe+sQ1cUw4TxEzYpcCIJlhBJzoHuLPAbst0GfVUSxNvNM/sFcNg9qVPQLEGTDfsDD77Bd0PlKW3BTRCQZfBFAgtacvQIzShJb49dbldZlBVH4GM+rKUfSRSstN+4u+FSx7WaHI2+LcwC227Fc/w3+cuiaDTrBEsTFMgYzSRTljZRk67VHZCrW+Acsl+9HBC/dxoHueWfq0FIoedrla6NtMmjQNqkJ1lP3yEV6aY6mTwzTa1avxbH4imdNPDbrXSXGnBf9je1oOz5uUFN0+cL2ZfmMY/igIdo326AXnV78OzlJfQtO1e7ydajYIkZbdVwKyBoIiZH0eF5mcdg82k4M6GebpZMxbnySruzA2NlHKx208VseurrqT4aJArRdQBQY64eUFDN+wPK01dUfN4gVIAAM+XpKfjDb9F0Ftj0vrze1+lnpKfSYUXMmofiTebHME/dInBtkXaJeiUCaRt60GWzJ9cC2FITujYo4qswUjwQ9JM6OY9Nz2pSgzH/lvsyK7D0ADMtbNVAm6PfXoG+TTZSKZx+WH9wE6HTBNN5rkQmuCbWHyyS7usw7q2RvEkGWlPgXbPz3EVjUTNFBVYKmRspjxp6GNmtvW0/atdT2EoYflw49Kifix/TLcelhIbei5Qof
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB4530.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(366004)(39860400002)(376002)(346002)(451199015)(86362001)(7696005)(478600001)(71200400001)(55016003)(316002)(54906003)(110136005)(38100700002)(122000001)(38070700005)(33656002)(186003)(26005)(53546011)(6506007)(9686003)(64756008)(41300700001)(76116006)(66946007)(66446008)(66476007)(66556008)(4326008)(5660300002)(8676002)(52536014)(2906002)(83380400001)(8936002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8086
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	24aaf046-90b7-479f-779c-08daf215f805
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 07:49:13.1069
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 802ccaa4-56dc-481e-a91f-08daf216002c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6144


> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Sunday, January 8, 2023 7:44 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v1 01/13] xen/arm: re-arrange the static shared memory
> region
>
> Hi Penny,
>

Hi Julien,

> On 15/11/2022 02:52, Penny Zheng wrote:
> > This commit re-arranges the static shared memory regions into a
> > separate array shm_meminfo. And each static shared memory region now
> > would have its own structure 'shm_membank' to hold all shm-related
> > members, like shm_id, etc, and a pointer to the normal memory bank
> > 'membank'. This will avoid continuing to grow 'membank'.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/bootfdt.c            | 40 +++++++++++++++++++------------
> >   xen/arch/arm/domain_build.c       | 35 +++++++++++++-----------
> >   xen/arch/arm/include/asm/kernel.h |  2 +-
> >   xen/arch/arm/include/asm/setup.h  | 16 +++++++++----
> >   4 files changed, 59 insertions(+), 34 deletions(-)
> >
> > diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> > index 6014c0f852..ccf281cd37 100644
> > --- a/xen/arch/arm/bootfdt.c
> > +++ b/xen/arch/arm/bootfdt.c
> > @@ -384,6 +384,7 @@ static int __init process_shm_node(const void *fdt, int node,
> >       const __be32 *cell;
> >       paddr_t paddr, gaddr, size;
> >       struct meminfo *mem = &bootinfo.reserved_mem;
>
> After this patch, 'mem' is barely going to be used. So I would recommend to
> remove it or restrict the scope.
>

I hope I understand correctly: you are saying that every static shared memory bank
will be described as "struct shm_membank". That's right.
However, when the host address is provided, we still need an instance of
"struct membank" to refer to in "bootinfo.reserved_mem". Only in this way can it
be initialized properly as static pages.
That's why I put a "struct membank *" pointer in "struct shm_membank" to refer to
the same object.
If I removed the instance in bootinfo.reserved_mem, a few more paths would need to
be updated, like init_staticmem_pages, dt_unreserved_regions, etc.

> This will make it easier to confirm that most of the uses of 'mem' have been
> replaced with 'shm_mem' and reduce the risk of confusion between the two
> (the names are quite similar).
>
> [...]
>
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index bd30d3798c..c0fd13f6ed 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -757,20 +757,20 @@ static int __init acquire_nr_borrower_domain(struct domain *d,
> >   {
> >       unsigned int bank;
> >
> > -    /* Iterate reserved memory to find requested shm bank. */
> > -    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
> > +    /* Iterate static shared memory to find requested shm bank. */
> > +    for ( bank = 0 ; bank < bootinfo.shm_mem.nr_banks; bank++ )
> >       {
> > -        paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start;
> > -        paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size;
> > +        paddr_t bank_start = bootinfo.shm_mem.bank[bank].membank->start;
> > +        paddr_t bank_size = bootinfo.shm_mem.bank[bank].membank->size;
>
> I was expecting an "if (type == MEMBANK_STATIC_DOMAIN) ..." to be
> removed. But it looks like there was none. I guess that was a mistake in the
> existing code?
>

Oh, you're right, the type shall also be checked.

> >
> >           if ( (pbase == bank_start) && (psize == bank_size) )
> >               break;
> >       }
> >
> > -    if ( bank == bootinfo.reserved_mem.nr_banks )
> > +    if ( bank == bootinfo.shm_mem.nr_banks )
> >           return -ENOENT;
> >
> > -    *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_borrowers;
> > +    *nr_borrowers = bootinfo.shm_mem.bank[bank].nr_shm_borrowers;
> >
> >       return 0;
> >   }
> > @@ -907,11 +907,18 @@ static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
> >                                               paddr_t start, paddr_t size,
> >                                               const char *shm_id)
> >   {
> > +    struct membank *membank;
> > +
> >       if ( kinfo->shm_mem.nr_banks >= NR_MEM_BANKS )
> >           return -ENOMEM;
> >
> > -    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start;
> > -    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size;
> > +    membank = xmalloc_bytes(sizeof(struct membank));
>
> You allocate memory but never free it. However, I think it would be better to
> avoid the dynamic allocation. So I would consider to not use the structure
> shm_membank and instead create a specific one for the domain construction.
>

True, a local variable "struct meminfo shm_banks" could be introduced only
for domain construction in the function construct_domU.

> > +    if ( membank == NULL )
> > +        return -ENOMEM;
> > +
> > +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].membank = membank;
> > +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].membank->start = start;
> > +    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].membank->size = size;
>
> The last two could be replaced with:
>
> membank->start = start;
> membank->size = size;
>
> This would make the code more readable. Also, while you are modifying the
> code, I would consider to introduce a local variable that points to
> kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].
>
> [...]
>
> >   struct meminfo {
> > @@ -61,6 +57,17 @@ struct meminfo {
> >       struct membank bank[NR_MEM_BANKS];
> >   };
> >
> > +struct shm_membank {
> > +    char shm_id[MAX_SHM_ID_LENGTH];
> > +    unsigned int nr_shm_borrowers;
> > +    struct membank *membank;
>
> After the change I suggested above, I would expect that the membank field
> will not be updated. So I would add const here.
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 07:50:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 07:50:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473342.733876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEmv5-0006YH-73; Mon, 09 Jan 2023 07:50:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473342.733876; Mon, 09 Jan 2023 07:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEmv5-0006YA-3S; Mon, 09 Jan 2023 07:50:11 +0000
Received: by outflank-mailman (input) for mailman id 473342;
 Mon, 09 Jan 2023 07:50:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LgaK=5G=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pEmv3-0005jj-OU
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 07:50:09 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2073.outbound.protection.outlook.com [40.107.8.73])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3c1bc36a-8ff2-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 08:50:07 +0100 (CET)
Received: from DB6PR07CA0089.eurprd07.prod.outlook.com (2603:10a6:6:2b::27) by
 AM9PR08MB5889.eurprd08.prod.outlook.com (2603:10a6:20b:2d5::8) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18; Mon, 9 Jan 2023 07:50:03 +0000
Received: from DBAEUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2b:cafe::3) by DB6PR07CA0089.outlook.office365.com
 (2603:10a6:6:2b::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.10 via Frontend
 Transport; Mon, 9 Jan 2023 07:50:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT043.mail.protection.outlook.com (100.127.143.24) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Mon, 9 Jan 2023 07:50:03 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Mon, 09 Jan 2023 07:50:02 +0000
Received: from b24ff69f56c9.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4DEA6985-949A-4C8F-BB54-5328AB1B1B6E.1; 
 Mon, 09 Jan 2023 07:49:52 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b24ff69f56c9.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Jan 2023 07:49:52 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by DBBPR08MB6297.eurprd08.prod.outlook.com (2603:10a6:10:20b::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 07:49:48 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 07:49:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c1bc36a-8ff2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JBX0aIpnIUaKUAwZTfrn7DzcMpOI+YfOn9QeJP7C4fE=;
 b=SpuAgp8f2VDmS0Tj6scUTJ1KDdjikTZdVVof7ApKotQUzeoGrpWTUH6iN5BQX1z5ZxwIW5QmK4DTRNHevSMGB+TXDsXBFxFvjoGQ9ddCKF9tNazRJq18324SLkJvSbykS1fUDhFnS5NA4K7MZMJWB7yDA8QR6s0nbKoyPyuMaZ4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host
 address not provided
Thread-Topic: [PATCH v1 06/13] xen/arm: assign shared memory to owner when
 host address not provided
Thread-Index: AQHY+J1pmmYEajWz5UynKX+c5rdEZK6UzwiAgAEQoXA=
Date: Mon, 9 Jan 2023 07:49:48 +0000
Message-ID:
 <AM0PR08MB453041150588948050F718D4F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-7-Penny.Zheng@arm.com>
 <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
In-Reply-To: <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 346C653763857F478338C32F81DBCC90.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|DBBPR08MB6297:EE_|DBAEUR03FT043:EE_|AM9PR08MB5889:EE_
X-MS-Office365-Filtering-Correlation-Id: f06fa070-c870-43cd-c4ea-08daf2161de2
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6297
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d4f2de75-35b4-4ce7-c457-08daf2161539
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 07:50:03.0949
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f06fa070-c870-43cd-c4ea-08daf2161de2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5889

Hi Julien,

Happy new year~~~~

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Sunday, January 8, 2023 8:53 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
> when host address not provided
>
> Hi,
>
> On 15/11/2022 02:52, Penny Zheng wrote:
> > @@ -922,33 +927,82 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
> >       d->max_pages += nr_pfns;
> >
> >       smfn = maddr_to_mfn(pbase);
> > -    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
> > -    if ( res )
> > +    page = mfn_to_page(smfn);
> > +    /*
> > +     * If page is allocated from heap as static shared memory, then we just
> > +     * assign it to the owner domain
> > +     */
> > +    if ( page->count_info == (PGC_state_inuse | PGC_static) )
>
> I am a bit confused how this can help differentiating, because PGC_state_inuse
> is 0. So effectively, you are checking that count_info is equal to PGC_static.
>

When the host address is provided, the host address range defined in
xen,static-mem will be stored as a "struct membank" with type
"MEMBANK_STATIC_DOMAIN" in "bootinfo.reserved_mem".
Then it will be initialized as static memory through "init_staticmem_pages".
So here its page->count_info is PGC_state_free | PGC_static.
For pages allocated from the heap, the page state is different, being
PGC_state_inuse.
So actually, we are checking the page state to tell the difference.

> But as I wrote in a previous patch, I don't think you should convert
> {xen,dom}heap pages to static pages.
>

I agree that taking a reference could also prevent giving these pages back to
the heap.
But may I ask what your concern is with converting {xen,dom}heap pages to
static pages?

> [...]
>
> > +static int __init assign_shared_memory(struct domain *d,
> > +                                       struct shm_membank *shm_membank,
> > +                                       paddr_t gbase)
> > +{
> > +    int ret = 0;
> > +    unsigned long nr_pages, nr_borrowers;
> > +    struct page_info *page;
> > +    unsigned int i;
> > +    struct meminfo *meminfo;
> > +
> > +    /* Host address is not provided in "xen,shared-mem" */
> > +    if ( shm_membank->mem.banks.meminfo )
> > +    {
> > +        meminfo = shm_membank->mem.banks.meminfo;
> > +        for ( i = 0; i < meminfo->nr_banks; i++ )
> > +        {
> > +            ret = acquire_shared_memory(d,
> > +                                        meminfo->bank[i].start,
> > +                                        meminfo->bank[i].size,
> > +                                        gbase);
> > +            if ( ret )
> > +                return ret;
> > +
> > +            gbase += meminfo->bank[i].size;
> > +        }
> > +    }
> > +    else
> > +    {
> > +        ret = acquire_shared_memory(d,
> > +                                    shm_membank->mem.bank->start,
> > +                                    shm_membank->mem.bank->size, gbase);
> > +        if ( ret )
> > +            return ret;
> > +    }
>
> Looking at this change and...
>
> > +
> >       /*
> >        * Get the right amount of references per page, which is the number of
> >        * borrower domains.
> > @@ -984,23 +1076,37 @@ static int __init assign_shared_memory(struct domain *d,
> >        * So if the borrower is created first, it will cause adding pages
> >        * in the P2M without reference.
> >        */
> > -    page = mfn_to_page(smfn);
> > -    for ( i = 0; i < nr_pages; i++ )
> > +    if ( shm_membank->mem.banks.meminfo )
> >       {
> > -        if ( !get_page_nr(page + i, d, nr_borrowers) )
> > +        meminfo = shm_membank->mem.banks.meminfo;
> > +        for ( i = 0; i < meminfo->nr_banks; i++ )
> >         {
> > -            printk(XENLOG_ERR
> > -                   "Failed to add %lu references to page %"PRI_mfn".\n",
> > -                   nr_borrowers, mfn_x(smfn) + i);
> > -            goto fail;
> > +            page = mfn_to_page(maddr_to_mfn(meminfo->bank[i].start));
> > +            nr_pages = PFN_DOWN(meminfo->bank[i].size);
> > +            ret = add_shared_memory_ref(d, page, nr_pages, nr_borrowers);
> > +            if ( ret )
> > +                goto fail;
> >           }
> >       }
> > +    else
> > +    {
> > +        page = mfn_to_page(
> > +                maddr_to_mfn(shm_membank->mem.bank->start));
> > +        nr_pages = shm_membank->mem.bank->size >> PAGE_SHIFT;
> > +        ret = add_shared_memory_ref(d, page, nr_pages, nr_borrowers);
> > +        if ( ret )
> > +            return ret;
> > +    }
>
> ... this one. The code to deal with a bank is exactly the same. But you need
> the duplication because you special-case "one bank".
>
> As I wrote in a previous patch, I don't think we should special-case it.
> If the concern is memory usage, then we should look at reworking meminfo
> instead (or using a different structure).
>

A few concerns explain why I didn't choose "struct meminfo" over the two
pointers "struct membank *" and "struct meminfo *":
1) Memory usage is the main reason.
If we used "struct meminfo" instead of the current "struct membank *" and
"struct meminfo *", "struct shm_meminfo" would become an array of 256
"struct shm_membank", with each "struct shm_membank" also holding a 256-item
array; that is 256 * 256, too big for a structure, and if I remember correctly
it leads to a "more than PAGE_SIZE" compile error.
FWIW, either reworking meminfo or using a different structure leads to sizing
down the array, and I don't know which size is suitable. That's why I prefer a
pointer and dynamic allocation.
2) If we used "struct meminfo *" instead of the current "struct membank *" and
"struct meminfo *", we would need a new flag to differentiate the two scenarios
(host address provided or not), as the special case "struct membank *" already
helps us differentiate.
And since, when the host address is provided, the related "struct membank" also
needs to be reserved in "bootinfo.reserved_mem", I used the pointer
"struct membank *" to avoid memory waste.
As for the duplication, I could extract the common code to mitigate the impact.

> >
> >       return 0;
>
> >   fail:
> >       while ( --i >= 0 )
> > -        put_page_nr(page + i, nr_borrowers);
> > +    {
> > +        page = mfn_to_page(maddr_to_mfn(meminfo->bank[i].start));
> > +        nr_pages = PFN_DOWN(meminfo->bank[i].size);
> > +        remove_shared_memory_ref(page, nr_pages, nr_borrowers);
> > +    }
> >       return ret;
> >   }
> >
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 07:50:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 07:50:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473345.733887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEmva-00071H-F7; Mon, 09 Jan 2023 07:50:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473345.733887; Mon, 09 Jan 2023 07:50:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEmva-00071A-CP; Mon, 09 Jan 2023 07:50:42 +0000
Received: by outflank-mailman (input) for mailman id 473345;
 Mon, 09 Jan 2023 07:50:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LgaK=5G=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pEmvY-00070q-Rl
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 07:50:40 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2055.outbound.protection.outlook.com [40.107.8.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4f417093-8ff2-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 08:50:39 +0100 (CET)
Received: from AS8PR04CA0032.eurprd04.prod.outlook.com (2603:10a6:20b:312::7)
 by AS8PR08MB5926.eurprd08.prod.outlook.com (2603:10a6:20b:29d::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 07:50:27 +0000
Received: from AM7EUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:312:cafe::7c) by AS8PR04CA0032.outlook.office365.com
 (2603:10a6:20b:312::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Mon, 9 Jan 2023 07:50:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT022.mail.protection.outlook.com (100.127.140.217) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Mon, 9 Jan 2023 07:50:27 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Mon, 09 Jan 2023 07:50:26 +0000
Received: from 632d197bea05.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3F94A596-AE24-47C1-B5F4-662D93B1B2E7.1; 
 Mon, 09 Jan 2023 07:50:16 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 632d197bea05.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Jan 2023 07:50:16 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by DBBPR08MB6297.eurprd08.prod.outlook.com (2603:10a6:10:20b::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 07:50:14 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 07:50:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f417093-8ff2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LVK191TWGVu50zjP5zcg/o8vjQpCfDN8S9btledwHMg=;
 b=BiylJFlZ+NjNKj1x8vyNZ/29pzHx7mNpuvfoO/O8L3xlZuCHrLqwmBaNmbLmoxpPMCp91TfQOUHJT97Nv4bXbOZL/7fX1P/2AKoaMOPwuDXgyUZXym6iN4fnzBdiRPsYtbkAC7wFtwCQWbaJ7Z1hZJl7onvfnYRXH2vXuizhspM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Du20DoeOPqi3kaT8UzsB5UDfZShHAz45elGu1XQAsZ6Kzpglsz5Nik+R+Bjet5S0VwDl2g4bI5wkm7c5sa0fD3AqdYpY/Wm2VMLaXpP2hjOS/Jy+b0Dasvo8/3E3BjZlnVGxat3tlZvxDwVlVO0St2yxWIeNWC4pDNvwwYGGsNCnFjWKZOBDkk67NozqFdNX/VQMpofQY5r7QqKEhRAog9HBN41pznbrCJyDj3wYiL/zpRS2mKno9iRZIz23LFA5ntZlAp9w4iplRhu6xDEQGWEgYSJ62Q9u3Da/rQPjg3+cn7mXQn/z25stEVgVRNjvh+BS8E64oTwIVQa4AgJxkA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LVK191TWGVu50zjP5zcg/o8vjQpCfDN8S9btledwHMg=;
 b=Wn7LtknKvssJbJq8ZQbTYoEN12GqDEU+KHVyQQaJQcrExW9sJp77wo49fvRR6AOKqLG1Q7jP5xUuwGg4Za5o6sRXDoJrQpd8ziAqd6ahStzDmyPPxS0Z1ymb/Ii6Yh5r/wdCQtaJ1YIFYoSUBoBiXrQEBiDWQH1KQwW1q2y5rfDLQYe8hZZayNOlvbI4J1vonyGz5GHYCCVuiw+e2CdnWAnyPNkh4MLlpIBRmXMQCi77P5nMw6uYx1GieVpJIToVFoVAu0vFhsxqbctncb2SpYGSTJx8Vq1heTArSpf++n8SqHP4hnM4iMVZD2NDN62eIersm8SG1VDSFGuipmzYfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v1 05/13] xen/arm: allocate shared memory from heap when
 host address not provided
Thread-Topic: [PATCH v1 05/13] xen/arm: allocate shared memory from heap when
 host address not provided
Thread-Index: AQHY+J1p2kGDrijhokmn3Ycm+ECYcK6UxoyAgAE5/KA=
Date: Mon, 9 Jan 2023 07:50:13 +0000
Message-ID:
 <AM0PR08MB4530EC5FBFD2625E521AC067F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-6-Penny.Zheng@arm.com>
 <ff0870ab-d1b1-e029-26aa-c690063d348b@xen.org>
In-Reply-To: <ff0870ab-d1b1-e029-26aa-c690063d348b@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 3B605AC35546304E89DBEC85BA01DC7D.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|DBBPR08MB6297:EE_|AM7EUR03FT022:EE_|AS8PR08MB5926:EE_
X-MS-Office365-Filtering-Correlation-Id: 8fc0a4c0-92e4-4b1c-33ad-08daf2162c52
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6297
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f3fe3cac-3ef9-4e33-30c2-08daf216245e
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 07:50:27.2508
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8fc0a4c0-92e4-4b1c-33ad-08daf2162c52
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB5926

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Sunday, January 8, 2023 8:23 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v1 05/13] xen/arm: allocate shared memory from heap
> when host address not provided
> 
> Hi Penny,
> 

Hi Julien,

> On 15/11/2022 02:52, Penny Zheng wrote:
> > when host address is not provided in "xen,shared-mem", we let Xen
> > allocate requested shared memory from heap, and once the shared
> memory
> > is allocated, it will be marked as static(PGC_static), which means
> > that it will be reserved as static memory, and will not go back to heap even
> on freeing.
> 
> Please don't move pages from the {xen,dom}heap to the static heap. If you
> need to keep the pages for longer, then get an extra reference so they will
> not be released.
> 
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/domain_build.c | 83
> ++++++++++++++++++++++++++++++++++++-
> >   1 file changed, 82 insertions(+), 1 deletion(-)
> >
> > +static int __init allocate_shared_memory(struct shm_membank
> *shm_membank,
> > +                                         paddr_t psize) {
> > +    struct meminfo *banks;
> > +    int ret;
> > +
> > +    BUG_ON(shm_membank->mem.banks.meminfo != NULL);
> > +
> > +    banks = xmalloc_bytes(sizeof(struct meminfo));
> 
> Where is this freed?
> 

This kind of info is only used at boot time, so maybe I shall
free it in init_done()?
Or just after process_shm()?

> > +    if ( banks == NULL )
> > +        return -ENOMEM;
> > +    shm_membank->mem.banks.meminfo = banks;
> > +    memset(shm_membank->mem.banks.meminfo, 0, sizeof(struct
> > + meminfo));
> > +
> > +    if ( !allocate_domheap_memory(NULL, psize, shm_membank-
> >mem.banks.meminfo) )
> > +        return -EINVAL;
> > +
> > +    ret = mark_shared_memory_static(shm_membank);
> > +    if ( ret )
> > +        return ret;
> > +
> > +    return 0;
> > +}
> > +
> >   static mfn_t __init acquire_shared_memory_bank(struct domain *d,
> >                                                  paddr_t pbase, paddr_t psize)
> >   {
> > @@ -975,7 +1041,7 @@ static int __init process_shm(struct domain *d,
> struct kernel_info *kinfo,
> >           unsigned int i;
> >           const char *role_str;
> >           const char *shm_id;
> > -        bool owner_dom_io = true;
> > +        bool owner_dom_io = true, paddr_assigned = true;
> >           struct shm_membank *shm_membank;
> >
> >           if ( !dt_device_is_compatible(shm_node,
> > "xen,domain-shared-memory-v1") ) @@ -1035,6 +1101,21 @@ static int
> __init process_shm(struct domain *d, struct kernel_info *kinfo,
> >               return -ENOENT;
> >           }
> >
> > +        /*
> > +         * When host address is not provided in "xen,shared-mem",
> > +         * we let Xen allocate requested memory from heap at first domain.
> > +         */
> > +        if ( !paddr_assigned && !shm_membank->mem.banks.meminfo )
> > +        {
> > +            ret = allocate_shared_memory(shm_membank, psize);
> > +            if ( ret )
> > +            {
> > +                printk("%pd: failed to allocate shared memory
> bank(%"PRIpaddr"MB) from heap: %d\n",
> > +                       d, psize >> 20, ret);
> > +                return ret;
> > +            }
> > +        }
> > +
> >           /*
> >            * DOMID_IO is a fake domain and is not described in the Device-Tree.
> >            * Therefore when the owner of the shared region is
> > DOMID_IO, we will
> 
> Cheers,
> 
> --
> Julien Grall

Cheers,

--
Penny Zheng


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 08:06:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 08:06:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473358.733898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnB7-0000vP-B4; Mon, 09 Jan 2023 08:06:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473358.733898; Mon, 09 Jan 2023 08:06:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnB7-0000vI-82; Mon, 09 Jan 2023 08:06:45 +0000
Received: by outflank-mailman (input) for mailman id 473358;
 Mon, 09 Jan 2023 08:06:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEnB6-0000vC-Fa
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 08:06:44 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061.outbound.protection.outlook.com [40.107.21.61])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8daac4b6-8ff4-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 09:06:43 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7391.eurprd04.prod.outlook.com (2603:10a6:800:1b3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 08:06:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 08:06:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8daac4b6-8ff4-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Toj6OLK9+HcXevSZIphcQ+MOhw3bq+0nJgRois7oHE4CaVSHWthoD9hGtuRdkHhh5u6+v0mObArIWbnQM1W5cbZQFnzhMnOtogt5LPLwvq7b8XYZo11rqOE3/YyQcIG30dGw3oAVxb+K9GHkcshHA3jv4aWw1/q7S/MNaxJJeL6UZ2/p0dLXRR2is2QGiuc585QtuzODAYM4WPm3oy78T3CknFAQtqJug+Ro9QBMY4jIhTKRgqlTDZ5V3KAmgvSZYNcl9iRrfo1USTeWOR0+TbaQf5wSUVexRNkSgCeawXnHcpefTvcVm6drr7IrtQlpy7vnEC5GN3gX3QF97paHgw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MTBO89uOjTyQTH7hk4wHEkjHNHDeznBX+XQQGm4IwBk=;
 b=GuRvzc4uS2yBxtIvr8VOPpO17rbgoIayKRnUqGi3NlCrJ0H6PdoW8vSVG7V9FCw0QTrnpEeQCwCiJrdzSw2cAI+jAtN5A2rxOw6BeX/KU++GeFakJ3S1qkfQVggnsN3+Ur8Tr17XD4gMgvYwyNVI42q10cuggoBS8bCnlhFOBchOK/r3aPX7qd+AM7SCITck6uYG13KPOIf2VI1oK6NhaGGBWB8ZUEDjQCYYXTTODitvzKn7gVViopPKDTzkxI686P8lS1VnXrod/sjg6YgzfNXSmAysUHxEi6Z2vNoPILNz+Fp1dkX3X4ZcyxGfsId4q9pwSg5NIrYxwiT5wVm+zQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MTBO89uOjTyQTH7hk4wHEkjHNHDeznBX+XQQGm4IwBk=;
 b=w3TiNdiEunxT/rscBj/dJ/71TBx3a9XB03AADii7/+4qE1Lt1x+anmQdV6cu+wdXQVzo6ObQ3lUrJxZ2aoOlTsBq/3QklSWcidb2B3WBcMsXnTRChY1XxGJ0J2l68XX9HASpQg94vlWG8RR3XsV/ZOhlEc+smnQsCTZY9iJpb+XDBW60gcNVl2HrdogSWWCAS3VBooZ1OZsVphPiSBAfdIgB0TGnnISByYKVNu2CHZrRxmVRhPeq+AdSbhsJh19xb/Syv2KTaLQsN1P0BNNKAOg4262044Lmj05J0ivq3W8V6dMOEIkoCNJ6stgLqmKzc3dH3DSkyHEQ4Z9HQOgSjw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <18366360-7bfd-0d27-fd32-e3064eaf9b7e@suse.com>
Date: Mon, 9 Jan 2023 09:06:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 3/4] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230103200943.5801-1-andrew.cooper3@citrix.com>
 <20230103200943.5801-4-andrew.cooper3@citrix.com>
 <540a449d-f76d-eb16-4f98-c4fb3564ce98@suse.com>
 <7dd00ce3-a95b-2477-128c-de36e75c4a34@citrix.com>
 <ebaf70e5-e1ba-d72d-84f2-5acb7e38a6bc@suse.com>
 <9ce20298-5870-aa1b-ee5e-e16a623beadb@citrix.com>
 <a65a4553-d003-1a8d-abb7-5d8c1c9fface@suse.com>
 <5bb7382c-bed0-144c-2906-38bc08c76a52@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5bb7382c-bed0-144c-2906-38bc08c76a52@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0046.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7391:EE_
X-MS-Office365-Filtering-Correlation-Id: ea05dff7-113d-4cb5-9694-08daf21870d8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ea05dff7-113d-4cb5-9694-08daf21870d8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 08:06:41.4600
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: maF30kX1GGKAf5TH3ix7noGSxLd94A2ukB/M/WLnle2g+OHyOEpOBUN/Kio1IwyTJAwORIhE/iPdxDRU4KHbUg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7391

On 06.01.2023 13:14, Andrew Cooper wrote:
> On 06/01/2023 7:54 am, Jan Beulich wrote:
>> On 05.01.2023 23:17, Andrew Cooper wrote:
>>> On 05/01/2023 7:57 am, Jan Beulich wrote:
>>>> On 04.01.2023 20:55, Andrew Cooper wrote:
>>>>> On 04/01/2023 4:40 pm, Jan Beulich wrote:
>>>>>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>>>>>> + * For all guest types using hardware virt extensions, Xen is not mapped into
>>>>>>> + * the guest kernel virtual address space.  This now returns 0, where it
>>>>>>> + * previously returned unrelated data.
>>>>>>> + */
>>>>>>>  #define XENVER_platform_parameters 5
>>>>>>>  struct xen_platform_parameters {
>>>>>>>      xen_ulong_t virt_start;
>>>>>> ... the field name tells me that all that is being conveyed is the virtual
>>>>>> address of where the hypervisor area starts.
>>>>> IMO, it doesn't matter what the name of the field is.  It dates from the
>>>>> days when 32bit PV was the only type of guest.
>>>>>
>>>>> 32bit PV guests really do have a variable split, so the guest kernel
>>>>> really does need to get this value from Xen.
>>>>>
>>>>> The split for 64bit PV guests is compile-time constant, hence why 64bit
>>>>> PV kernels don't care.
>>>> ... once we get to run Xen in 5-level mode, 4-level PV guests could also
>>>> gain a variable split: Like for 32-bit guests now, only the r/o M2P would
>>>> need to live in that area, and this may well occupy less than the full
>>>> range presently reserved for the hypervisor.
>>> ... you can't do this, because it only works for guests which have
>>> chosen to find the M2P using XENMEM_machphys_mapping (e.g. Linux), and
>>> doesn't for e.g. MiniOS which does:
>>>
>>> #define machine_to_phys_mapping ((unsigned long *)HYPERVISOR_VIRT_START)
>> Hmm, looks like a misunderstanding? I certainly wasn't thinking about
>> making the start of that region variable, but rather the end (i.e. not
>> exactly like for 32-bit compat).
> 
> But for what purpose?  You can't make 4-level guests have a variable range.

That entirely depends on the kernel. Linux, for example, could put its
direct map lower, ...

> The best you can do is make a 5-level-aware guest running on 4-level Xen
> have some new semantics, but a 4-level PV guest already owns the
> overwhelming majority of virtual address space, so being able to hand
> back a few extra TB is not interesting.  And for making that happen...

... allowing it to cover some more space. That's not a whole lot more,
sure.

>>>>> For compat HVM, it happens to pick up the -1 from:
>>>>>
>>>>> #ifdef CONFIG_PV32
>>>>>     HYPERVISOR_COMPAT_VIRT_START(d) =
>>>>>         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
>>>>> #endif
>>>>>
>>>>> in arch_domain_create(), whereas for non-compat HVM, it gets a number in
>>>>> an address space it has no connection to in the slightest.  ARM guests
>>>>> end up getting XEN_VIRT_START (== 2M) handed back, but this absolutely
>>>>> an internal detail that guests have no business knowing.
>>>> Well, okay, this looks to be good enough an argument to make the adjustment
>>>> you propose for !PV guests.
>>> Right, HVM (on all architectures) is very cut and dry.
>>>
>>> But it feels wrong to not address the PV64 issue at the same time
>>> because it is similar level of broken, despite there being (in theory) a
>>> legitimate need for a PV guest kernel to know it.
>> To me it feels wrong to address the 64-bit PV issue by removing information,
>> when - as you also say - it is actually _missing_ information. To me the
>> proper course of action would be to expose the upper bound as well (such
>> that, down the road, it could become dynamic). There's also no info leak
>> there, as the two (static) bounds are part of the PV ABI anyway.
> 
> ... the absolute best you could do here is introduce a new
> XENVER_platform_parameters2 that has a different structure.

Indeed.

> Which still leaves us with XENVER_platform_parameters providing data
> which is somewhere between useless and actively unhelpful.

If it's useless, there's still no reason to remove / alter the information,
as you can never be absolutely certain that no-one uses it in some way.
And as for the "actively unhelpful" aspect, it looks like our views simply
differ. Is there no way we can settle on making the change affect HVM only?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 08:35:52 2023
Message-ID: <694bdf12-ab8f-9896-30b3-3b2d7bc3d892@suse.com>
Date: Mon, 9 Jan 2023 09:35:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 01/11] x86/shadow: replace sh_reset_l3_up_pointers()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <03ae5a1a-4417-0aa0-27d8-833ade20cc0b@suse.com>
 <e8bdffdf-d5ea-4320-3e84-b6654ac83002@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e8bdffdf-d5ea-4320-3e84-b6654ac83002@citrix.com>

On 06.01.2023 01:55, Andrew Cooper wrote:
> On 05/01/2023 3:59 pm, Jan Beulich wrote:
>> Rather than doing a separate hash walk (and then even using the vCPU
>> variant, which is to go away), do the up-pointer-clearing right in
>> sh_unpin(), as an alternative to the (now further limited) enlisting on
>> a "free floating" list fragment. This utilizes the fact that such list
>> fragments are traversed only for multi-page shadows (in shadow_free()).
>> Furthermore sh_terminate_list() is only a safeguard anyway, which isn't
>> in use in the common case (it actually does anything only for BIGMEM
>> configurations).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

> I think.  The reasoning seems plausible, but it would probably benefit
> from someone else double checking.

Okay, I'll wait a while to see whether Tim or George voice a view - perhaps
until the end of this week, committing early next week if no contrary
indications appear. The "good" thing here is that, as I understand it, all
modern 64-bit guests undergo this transition, so the code is / will be
properly exercised.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 08:39:57 2023
Message-ID: <0a5da887-d1f5-5a50-306c-d60be95ccca4@suse.com>
Date: Mon, 9 Jan 2023 09:39:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 05/11] x86/shadow: move bogus HVM checks in
 sh_pagetable_dying()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <76cc0b4a-27ca-21b7-841f-315f31833762@suse.com>
 <37bfe65b-d989-7c34-5e14-171a23df37f8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <37bfe65b-d989-7c34-5e14-171a23df37f8@citrix.com>

On 06.01.2023 02:00, Andrew Cooper wrote:
> On 05/01/2023 4:04 pm, Jan Beulich wrote:
>> Perhaps these should have been dropped right in 2fb2dee1ac62 ("x86/mm:
>> pagetable_dying() is HVM-only"). Convert both to assertions, noting that
>> in particular the one in the 3-level variant of the function comes too
> 
> "came too late"?
> 
> It doesn't any more with this change in place.

Fine with me either way, so I've changed it. IIRC George in particular has
been advocating writing descriptions in the present tense, even if the
stated fact itself changes in the course of the change.

>> late anyway - first thing there we access the HVM part of a union.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 08:42:32 2023
Message-ID: <2c333051-4c2b-fdcf-75bb-1a700bb3a3ca@suse.com>
Date: Mon, 9 Jan 2023 09:42:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 06/11] x86/shadow: drop a few uses of mfn_valid()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <ca3e0c70-ae80-4c21-97f7-36525229074b@suse.com>
 <11758833-fb7a-f937-a847-fe79ba932679@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <11758833-fb7a-f937-a847-fe79ba932679@citrix.com>

On 06.01.2023 02:02, Andrew Cooper wrote:
> On 05/01/2023 4:04 pm, Jan Beulich wrote:
>> v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
>> v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
> 
> fixup[],smfn[] ?

No. See e.g. shadow_vcpu_init():

    for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
    {
        v->arch.paging.shadow.oos[i] = INVALID_MFN;
        v->arch.paging.shadow.oos_snapshot[i] = INVALID_MFN;
        for ( j = 0; j < SHADOW_OOS_FIXUPS; j++ )
            v->arch.paging.shadow.oos_fixup[i].smfn[j] = INVALID_MFN;
    }

Jan
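To make the notation concrete: `oos_fixup[]` is an array of structures, each of which contains its own `smfn[]` member array, which is why the nested-loop initialisation (and the `oos_fixup[].smfn[]` spelling, rather than `fixup[],smfn[]`) is correct. Below is a minimal standalone sketch of that layout, with hypothetical sizes and `uint64_t` standing in for Xen's `mfn_t`; it is not Xen code, just an illustration of the shape.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real Xen constants/types. */
#define SHADOW_OOS_PAGES  3
#define SHADOW_OOS_FIXUPS 2
#define INVALID_MFN       UINT64_MAX

struct oos_fixup {
    uint64_t smfn[SHADOW_OOS_FIXUPS];   /* member array of each element */
};

struct shadow_vcpu {
    uint64_t oos[SHADOW_OOS_PAGES];
    uint64_t oos_snapshot[SHADOW_OOS_PAGES];
    struct oos_fixup oos_fixup[SHADOW_OOS_PAGES];  /* array of structs */
};

/* Mirrors the loop quoted above: flat arrays get one pass,
 * the nested smfn[] members need the inner loop. */
void shadow_vcpu_init_sketch(struct shadow_vcpu *s)
{
    for ( unsigned int i = 0; i < SHADOW_OOS_PAGES; i++ )
    {
        s->oos[i] = INVALID_MFN;
        s->oos_snapshot[i] = INVALID_MFN;
        for ( unsigned int j = 0; j < SHADOW_OOS_FIXUPS; j++ )
            s->oos_fixup[i].smfn[j] = INVALID_MFN;
    }
}
```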


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 08:45:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 08:45:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473384.733945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnn3-0006vr-5V; Mon, 09 Jan 2023 08:45:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473384.733945; Mon, 09 Jan 2023 08:45:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnn3-0006vk-2M; Mon, 09 Jan 2023 08:45:57 +0000
Received: by outflank-mailman (input) for mailman id 473384;
 Mon, 09 Jan 2023 08:45:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CfuE=5G=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pEnn1-0006vZ-VZ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 08:45:55 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 077914b5-8ffa-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 09:45:55 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6D1EA3398C;
 Mon,  9 Jan 2023 08:45:54 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 54674134AD;
 Mon,  9 Jan 2023 08:45:54 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ZYAxE0LUu2NzXgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 09 Jan 2023 08:45:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 077914b5-8ffa-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673253954; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=OKc77BZ7jmVzkkEt05fVty6YJF7FntGuB3ivZJkm7/4=;
	b=FFNjB0y2bC0EskGxw5uSjfduhjy1VzFCJoSmFuc8KmvCaeXpd7ARpI+YlE6xzpLU+Kvdoy
	Ue+L6oqu51TP9OKkVHJwVn21ffVdwVj4RCfL0ptr7+/IoNQwe3AOlRlLNKAeEILQYBiOBl
	cGTzp8ZXV2mVDwaC6K7VnnkWUfKj21s=
Message-ID: <8690f6fa-ff8d-7416-866a-7effa5cae72c@suse.com>
Date: Mon, 9 Jan 2023 09:45:53 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [linux-linus test] 175625: regressions - FAIL
Content-Language: en-US
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-175625-mainreport@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <osstest-175625-mainreport@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------HNJvX8s02B3jvvabVF0HFalX"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------HNJvX8s02B3jvvabVF0HFalX
Content-Type: multipart/mixed; boundary="------------tY8fI00bzT6IVIp5R04hkNCh";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
Message-ID: <8690f6fa-ff8d-7416-866a-7effa5cae72c@suse.com>
Subject: Re: [linux-linus test] 175625: regressions - FAIL
References: <osstest-175625-mainreport@xen.org>
In-Reply-To: <osstest-175625-mainreport@xen.org>

--------------tY8fI00bzT6IVIp5R04hkNCh
Content-Type: multipart/mixed; boundary="------------8R4VdzGAmBj31La6YM1pdPJw"

--------------8R4VdzGAmBj31La6YM1pdPJw
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 08.01.23 17:54, osstest service owner wrote:
> flight 175625 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/175625/
>
> Regressions :-(

I'm looking at the x86 failures right now.

I can reproduce the issue locally using the linus tree.


Juergen
--------------8R4VdzGAmBj31La6YM1pdPJw
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------8R4VdzGAmBj31La6YM1pdPJw--

--------------tY8fI00bzT6IVIp5R04hkNCh--

--------------HNJvX8s02B3jvvabVF0HFalX
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO71EEFAwAAAAAACgkQsN6d1ii/Ey/j
PggAgeKSGRrziGl9iKfleVwiJY7RIPlUp860WtpSsXDxBhI9sXOisoDezWG2QIMS0+wLRxh1myIW
GfYwKh+M+MnPH6MWzUkutJnGvogTSqmQ85zOKS3NigRz7bmWdcN7MU0nkq7jtls292oIzOT7moAO
4midC35oKfpONBziKIcxG1tk9BCb2dEYM55aqieMd2OMvYcNuQ2VR+IMABkFaCln6acQ8l3GYuhC
QKNFWLNlIZhIXc6zY/4jaIz7Vuht2Q7TpA9JYOy25IT+q9lmhypJV7svdTJhL9jTJSUTjM01A43+
2rDphLViXa0adQZaKkPuyHuNV7kcuOLgbJ1m+q3oVw==
=luNn
-----END PGP SIGNATURE-----

--------------HNJvX8s02B3jvvabVF0HFalX--


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 08:53:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 08:53:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473391.733959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnuK-0008RG-Sw; Mon, 09 Jan 2023 08:53:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473391.733959; Mon, 09 Jan 2023 08:53:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnuK-0008R9-QI; Mon, 09 Jan 2023 08:53:28 +0000
Received: by outflank-mailman (input) for mailman id 473391;
 Mon, 09 Jan 2023 08:53:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEnuK-0008R3-41
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 08:53:28 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 144c49ba-8ffb-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 09:53:25 +0100 (CET)
Received: by mail-lf1-x133.google.com with SMTP id y25so11876765lfa.9
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 00:53:25 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 z3-20020a0565120c0300b004cb03999979sm1512441lfu.40.2023.01.09.00.53.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 00:53:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 144c49ba-8ffb-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=H6FchQyf5CB5ussTRx2mN0jjbocnfHQbf5QwEa2tL0Y=;
        b=nJdRXNf8vbCPVbpIkHSGAj+BncCPktEt1FFZFDhkFlWUSp/46Xz0t0kYD0AMEnU5Hh
         7VKFy+ejJjJ8O2xoY4G6qIoTWCq960QLSAuMc0AeA7oi9N573ruGZ8MSaL/79baUfYUm
         pz4wraEETJPub4TQ6t/26Ga1IjVKwLeq9zdj/DLklOjYqPURBA/YONUzOeHjrvTYPdQS
         mi8lcbd6MYJOEodkfxY0xp9K4Oipbyz7syXE42cJZVEMM4rkm3Etm2ZQ41k13Gf6A2UY
         xAsCClpVFss8ZN4x7teE6SrsMZ7zmUnilSaUETeGsX7Z+haIaoNMNXAo4m3jSQF2JEcZ
         wmxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=H6FchQyf5CB5ussTRx2mN0jjbocnfHQbf5QwEa2tL0Y=;
        b=fmx32kq4aQJIzbrNYunDSxdJfEtUQ/d53k7r4qGKUvVkHbgexeLb5izgVGVGxatQz5
         PJ3dUiIRmRQoRJStD/Ns4AZ5yZIBIOjGHZU+LQZchn9Wj7AH0e5CVftovYmSgTdJxd9A
         5l67X7+w3sAkJiK5w3lgQqrV5dG2o4yWqz0fEiOOSOa/6rLjOmaQUcQQ0mLuXou6cHmS
         H1P5D9sOS48TLHf3jbf4ZAR6tPFJvDFjf5EW5J4m7NHIeigw6LdC0GyvM1rmYcSt15qV
         miju8wzQ3rK3OJzV0S0Crsddb2E1kK2haLQaRG95o6MFFeDiQI0g5lPnUrp7NcMWvfU2
         c6LQ==
X-Gm-Message-State: AFqh2kr2kfH2RRT4nBOK07tTfbmQ38BfwIJxE+3GvXCvAJLAZj2iaWsh
	cJHoPh/hoffplXztXry5ky4=
X-Google-Smtp-Source: AMrXdXubr6vT7bcQ32NP1PlkH/uhp5KzoNbS5xjBB8VIi3OOeIREPdxCN9KnYVG44Z2h0/NGVjTeIQ==
X-Received: by 2002:a05:6512:1049:b0:4b6:edce:a192 with SMTP id c9-20020a056512104900b004b6edcea192mr20185277lfb.4.1673254405308;
        Mon, 09 Jan 2023 00:53:25 -0800 (PST)
Message-ID: <58d868b003f4a64922f3947cbf293c953471d8eb.camel@gmail.com>
Subject: Re: [PATCH v1 1/8] xen/riscv: introduce dummy asm/init.h
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>
Date: Mon, 09 Jan 2023 10:53:24 +0200
In-Reply-To: <0da22900-63f0-b8fa-00b6-855e2a94485d@xen.org>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <cb2f0751d717774dfe065727c87b8f62f588ca17.1673009740.git.oleksii.kurochko@gmail.com>
	 <0da22900-63f0-b8fa-00b6-855e2a94485d@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

Thanks for the comments. I'll update the commit message in the next
version of the patch series.

~Oleksii
On Fri, 2023-01-06 at 13:42 +0000, Julien Grall wrote:
> On 06/01/2023 13:14, Oleksii Kurochko wrote:
>
> I am guessing this is necessary because <xen/init.h> will be used
> later on.
>
> If so, then I think it would be useful to mention it in the commit
> message.
>
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> Reviewed-by: Julien Grall <jgrall@amazon.com>
>
> Cheers,
>



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 08:55:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 08:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473397.733970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnwB-0000a3-72; Mon, 09 Jan 2023 08:55:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473397.733970; Mon, 09 Jan 2023 08:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnwB-0000Zw-4J; Mon, 09 Jan 2023 08:55:23 +0000
Received: by outflank-mailman (input) for mailman id 473397;
 Mon, 09 Jan 2023 08:55:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEnwA-0000Zq-2g
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 08:55:22 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 59026008-8ffb-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 09:55:21 +0100 (CET)
Received: by mail-lf1-x133.google.com with SMTP id bp15so11864142lfb.13
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 00:55:21 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 n26-20020a05651203fa00b0049464d89e40sm1502263lfq.72.2023.01.09.00.55.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 00:55:20 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59026008-8ffb-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=nYOajGftRdTkdrjBSRQwUgFFIaAX9r6zGeq9XIW/ws4=;
        b=DdkCHQiSyTD2dFrDCmzf+zYN4WGp1iT2WSB2urYTar/CCN4vroT6ukONTKtx5PYcWA
         bu3A1mUs2SX74BOxqBjJw3XvWY+WhgLsXVqV9mMhkpr7bs+9g4amaer0Gq5EYo02/Uyz
         OQAm4s092ior6vzU5dPTLf06ED7doJ2QpPmhjXhAevABzNOBvuZ5A6YovJfTzyGs5jRx
         P0xqRhGh+5yzCXQliJSIoS6x1tSSjrEaiR5v1WhNFHW2RluDdoDc6MJENu3Z8sDH+aiF
         R64RRPAOs0d1nIqjPNcbKzPoUMkx5xvG2e68n+vvHckp+XzI8ad0kImALUzj4RR1XKRC
         7eEA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=nYOajGftRdTkdrjBSRQwUgFFIaAX9r6zGeq9XIW/ws4=;
        b=25RHpqYXefMdCFw3yaHFRzMuQsTMnrypurMyiltwaTY1UzcAc2ZK/pXk/X1MvIf12r
         ncY6cNk6bpMwjy0mthxNULiDY9AuLfFtdGoLNI8ntgPsyRE6WiY+G5Gx72wV/62Ca1Q6
         7PzaIGfWdElkHRfhi1FOmlLq+9s16GgPxCtWyBJf/tGFNDmEkI5jKgWix5rbCUJH2igU
         H52mkv+1jFu6EPrFzMRsrHPIFzCyA9X0PTf4LZrScXimykHxeBnDJJ5r2bY41R74o7rC
         NNUwTFHbcxuQnGY6Q0owFomoXjXn7shgw7rcYevFwrcSH4u3vuyZQJKgWC+RrmPbfbx5
         upnw==
X-Gm-Message-State: AFqh2kqjl9//y0grZLllU/5BAPUfJCX26M+gwNxUJNtL40tBYviOW7nD
	/uk9LoV94nwwnOgqqyICnWM=
X-Google-Smtp-Source: AMrXdXvhMxVXiiZqFaweyeLj5UV+VElqYeT2rSPHzTV9zTcfjhd6AHcMj6YW2lk4qTHMexNBt7Kj5A==
X-Received: by 2002:a05:6512:6d6:b0:4a4:68b9:609b with SMTP id u22-20020a05651206d600b004a468b9609bmr21181328lff.38.1673254520712;
        Mon, 09 Jan 2023 00:55:20 -0800 (PST)
Message-ID: <572f67e83b0da55acf28d15826a3ecde43b6c8a5.camel@gmail.com>
Subject: Re: [PATCH v1 2/8] xen/riscv: introduce asm/types.h header file
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>,
 xen-devel@lists.xenproject.org
Date: Mon, 09 Jan 2023 10:55:19 +0200
In-Reply-To: <204d2288-df9a-0d53-2c42-a52ad0c0c0f7@suse.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <ce66a86285e966700acb13521851aca5b764a56e.1673009740.git.oleksii.kurochko@gmail.com>
	 <204d2288-df9a-0d53-2c42-a52ad0c0c0f7@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-06 at 15:12 +0100, Jan Beulich wrote:
> On 06.01.2023 14:14, Oleksii Kurochko wrote:
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >  xen/arch/riscv/include/asm/types.h | 73 ++++++++++++++++++++++++++++++
> >  1 file changed, 73 insertions(+)
> >  create mode 100644 xen/arch/riscv/include/asm/types.h
> >
> > diff --git a/xen/arch/riscv/include/asm/types.h
> > b/xen/arch/riscv/include/asm/types.h
> > new file mode 100644
> > index 0000000000..48f27f97ba
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/types.h
> > @@ -0,0 +1,73 @@
> > +#ifndef __RISCV_TYPES_H__
> > +#define __RISCV_TYPES_H__
> > +
> > +#ifndef __ASSEMBLY__
> > +
> > +typedef __signed__ char __s8;
> > +typedef unsigned char __u8;
> > +
> > +typedef __signed__ short __s16;
> > +typedef unsigned short __u16;
> > +
> > +typedef __signed__ int __s32;
> > +typedef unsigned int __u32;
> > +
> > +#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
> > +#if defined(CONFIG_RISCV_32)
> > +typedef __signed__ long long __s64;
> > +typedef unsigned long long __u64;
> > +#elif defined (CONFIG_RISCV_64)
> > +typedef __signed__ long __s64;
> > +typedef unsigned long __u64;
> > +#endif
> > +#endif
>
> Of these, only the ones actually needed should be introduced. We're
> in the process of phasing out especially the above, but also ...
>
Got it. I will take that into account when preparing the next version
of the patch series.
> > +typedef signed char s8;
> > +typedef unsigned char u8;
> > +
> > +typedef signed short s16;
> > +typedef unsigned short u16;
> > +
> > +typedef signed int s32;
> > +typedef unsigned int u32;
> > +
> > +#if defined(CONFIG_RISCV_32)
> > +typedef signed long long s64;
> > +typedef unsigned long long u64;
>
> ... all of these.
>
> > +typedef u32 vaddr_t;
>
> (New) consumers of such types should therefore use {u,}int<N>_t
> instead.
>
> Jan
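Jan's suggestion above amounts to: don't introduce new `u32`/`s64`-style typedefs, and have new consumers spell the fixed-width types out. A minimal sketch of the preferred style, using the standard C99 `<stdint.h>` names (the `vaddr_t` alias here is a hypothetical RV64 example, not the actual Xen definition):

```c
#include <stdint.h>

/* Hypothetical RV64 example: virtual addresses are 64 bits wide,
 * so the alias is built on a fixed-width standard type rather than
 * a legacy u32/u64 typedef. */
typedef uint64_t vaddr_t;

/* Sanity check that the fixed-width names have the expected sizes. */
int widths_ok(void)
{
    return sizeof(uint8_t)  == 1 &&
           sizeof(uint16_t) == 2 &&
           sizeof(uint32_t) == 4 &&
           sizeof(uint64_t) == 8 &&
           sizeof(vaddr_t)  == 8;
}
```

The point of the fixed-width names is that they carry their size in the identifier, so no per-architecture `#if defined(CONFIG_RISCV_32)` blocks are needed just to define them.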



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 08:57:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 08:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473403.733981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnxz-0001AP-K7; Mon, 09 Jan 2023 08:57:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473403.733981; Mon, 09 Jan 2023 08:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEnxz-0001AI-HT; Mon, 09 Jan 2023 08:57:15 +0000
Received: by outflank-mailman (input) for mailman id 473403;
 Mon, 09 Jan 2023 08:57:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEnxy-0001AC-Cn
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 08:57:14 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9bfca4dc-8ffb-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 09:57:13 +0100 (CET)
Received: by mail-lf1-x12c.google.com with SMTP id v25so11860451lfe.12
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 00:57:13 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 g21-20020a0565123b9500b004b585ff1fcdsm1498813lfv.273.2023.01.09.00.57.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 00:57:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bfca4dc-8ffb-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=4wkF6tuT3ILdVALUx5K5GPfr+jHcvrDZ50Ltc77TZSE=;
        b=LT3y1fwmkcfrV19NBwv7MfcuvTFRXUz+aLUwjesC+sBDuo2Oz++WmAVAuBMXRAAGpA
         ++30qP5XudDO57RleTOFwSLFUSL36DHLeHJ5L3jWNKS52eI862L6arhv+HwmTsF8xav2
         lte1aAQl1o97GSyzSCpTZ01j4r5B6DKsERe3uUJ2ncHSDzLwH5u7V7JVaUYoBD/TJbIw
         w7IFJ9kxI1DRJOhGozBKLHUp940QAx/f/KFZMQUN2lPm6Ijbs2fUnSgEODTeGkh1Il9j
         /w3ZLOxwR36qOjBsy9ao5O092jNOUT8nKs5FZp6iWRiiB6aKqDSaUmTw5cFucCi/VVHZ
         8qxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=4wkF6tuT3ILdVALUx5K5GPfr+jHcvrDZ50Ltc77TZSE=;
        b=vKnkr9yn0hI+NL1M5XTv31wT59NIeZE/BS5+f5Zkw7MdpHs5c++ojcGdzSMcLgg7TW
         7mGseQZybFb6BZYFi87K8Ncd52iTay6U9/Qsm36MWkmOxhUKjm2JH3SuFc25fgxON0S7
         4Rb4sPf6XQAZZ+qikDCaWUAFAY1ZyxCd3Yl3sAPHK2YFJMSHw8JrgUjtbNt+WdQI76Qu
         jqOGE8OPz3IH9vfoCNS3ijEtX5Bj4Tqkp1WWT2MEyrIuSrJd5agXijPsI/+LeDKYOZ6N
         UDM/6HeBmQdkwnuvVqEx6LPs8mULya7dik8Hx4HOR58pHp48S4t819FlZ0EvSc+QxKHd
         18jg==
X-Gm-Message-State: AFqh2kok6UFyb/1CXhdoaXOEe1672wfmeorVoKl++sTpUw49cNUpWhRO
	7zrc1zExKG9MxSuzAjgngdQ=
X-Google-Smtp-Source: AMrXdXtqGBPuU4BYoI33w6HEcaGy2imRXOppDjWi32nk/MUffs9aCENlAC67IsuEO72f39cdKnYpjw==
X-Received: by 2002:a05:6512:6d4:b0:4cb:1e1:f380 with SMTP id u20-20020a05651206d400b004cb01e1f380mr17009980lff.40.1673254633239;
        Mon, 09 Jan 2023 00:57:13 -0800 (PST)
Message-ID: <572055bcc7b9716700fcb4139afe3697a04b7e98.camel@gmail.com>
Subject: Re: [PATCH v1 3/8] xen/riscv: introduce stack stuff
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>,
 xen-devel@lists.xenproject.org
Date: Mon, 09 Jan 2023 10:57:11 +0200
In-Reply-To: <9c93ee8c-1c7f-d1aa-c0fd-72518e37a74d@suse.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>
	 <9c93ee8c-1c7f-d1aa-c0fd-72518e37a74d@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-06 at 15:15 +0100, Jan Beulich wrote:
> On 06.01.2023 14:14, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/riscv64/head.S
> > +++ b/xen/arch/riscv/riscv64/head.S
> > @@ -1,4 +1,10 @@
> >          .section .text.header, "ax", %progbits
> >
> >  ENTRY(start)
> > -        j  start
> > +        la      sp, cpu0_boot_stack
> > +        li      t0, PAGE_SIZE
> > +        add     sp, sp, t0
> > +
> > +_start_hang:
> > +        wfi
> > +        j  _start_hang
>
> Nit: I think it would be nice if in a new port assembly code used
> consistent padding between insn mnemonic and operand(s).
>
I missed that.
It will actually be "fixed" in later patches of this series, but I'll
address it while working on the new version of the patch series.

~Oleksii
> Jan
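For readers unfamiliar with the boot sequence being discussed: the three added instructions set the boot stack pointer. Since RISC-V stacks grow downwards, SP must point at the *top* of the page-sized `cpu0_boot_stack`, i.e. base + PAGE_SIZE. A small C sketch of the same computation (symbols here are stand-ins, not the actual Xen definitions):

```c
#include <stdint.h>

#define PAGE_SIZE 4096

/* Stand-in for the page-aligned boot stack the assembly refers to. */
static unsigned char cpu0_boot_stack[PAGE_SIZE];

/* Equivalent of:
 *     la      sp, cpu0_boot_stack
 *     li      t0, PAGE_SIZE
 *     add     sp, sp, t0
 * The stack grows towards lower addresses, so the initial SP is the
 * one-past-the-end address of the stack buffer. */
uintptr_t boot_stack_top(void)
{
    return (uintptr_t)cpu0_boot_stack + PAGE_SIZE;
}
```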



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:01:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:01:01 +0000
Message-ID: <53c3402164304ddc0b27d82aa6c1716cb99c4ccf.camel@gmail.com>
Subject: Re: [PATCH v1 3/8] xen/riscv: introduce stack stuff
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>
Date: Mon, 09 Jan 2023 11:00:47 +0200
In-Reply-To: <4e577d78-ff2f-3258-99d6-712af3b6330d@xen.org>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <e8f65c43d20ebdaba61738200360b14152531321.1673009740.git.oleksii.kurochko@gmail.com>
	 <4e577d78-ff2f-3258-99d6-712af3b6330d@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

Hi,

On Fri, 2023-01-06 at 13:54 +0000, Julien Grall wrote:
> Hi,
> 
> On 06/01/2023 13:14, Oleksii Kurochko wrote:
> > The patch introduces and sets up a stack in order to go to C
> > environment
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >   xen/arch/riscv/Makefile       | 1 +
> >   xen/arch/riscv/riscv64/head.S | 8 +++++++-
> >   xen/arch/riscv/setup.c        | 6 ++++++
> >   3 files changed, 14 insertions(+), 1 deletion(-)
> >   create mode 100644 xen/arch/riscv/setup.c
> > 
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index 248f2cbb3e..5a67a3f493 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,4 +1,5 @@
> >  obj-$(CONFIG_RISCV_64) += riscv64/
> > +obj-y += setup.o
> > 
> >  $(TARGET): $(TARGET)-syms
> >         $(OBJCOPY) -O binary -S $< $@
> > diff --git a/xen/arch/riscv/riscv64/head.S
> > b/xen/arch/riscv/riscv64/head.S
> > index 990edb70a0..ddc7104701 100644
> > --- a/xen/arch/riscv/riscv64/head.S
> > +++ b/xen/arch/riscv/riscv64/head.S
> > @@ -1,4 +1,10 @@
> >          .section .text.header, "ax", %progbits
> > 
> >  ENTRY(start)
> > -        j  start
> > +        la      sp, cpu0_boot_stack
> > +        li      t0, PAGE_SIZE
> 
> I would recommend adding a define STACK_SIZE, so you don't make an
> assumption on the size in the code and it is easier to update.
> 
Thanks. I decided not to define STACK_SIZE because it is currently used
only once, and the STACK_SIZE introduced in Bobby's patch series was too
big, at least for the current purpose.

Anyway, it probably makes sense to introduce STACK_SIZE from the start,
so I will take that into account while working on the new patch series.
> > +        add     sp, sp, t0
> > +
> > +_start_hang:
> > +        wfi
> > +        j  _start_hang
> > diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> > new file mode 100644
> > index 0000000000..2c7dca1daa
> > --- /dev/null
> > +++ b/xen/arch/riscv/setup.c
> > @@ -0,0 +1,6 @@
> > +#include <xen/init.h>
> > +#include <xen/compile.h>
> > +
> > +/* Xen stack for bringing up the first CPU. */
> > +unsigned char __initdata cpu0_boot_stack[PAGE_SIZE]
> > +    __aligned(PAGE_SIZE);
> 
> Cheers,
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:04:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:04:05 +0000
Message-ID: <0475c058d0655f5b7b245f19b20c5ef0f14b3618.camel@gmail.com>
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>
Date: Mon, 09 Jan 2023 11:04:00 +0200
In-Reply-To: <d77e7617-5263-0072-4786-ba6144247a4b@xen.org>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
	 <d77e7617-5263-0072-4786-ba6144247a4b@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

Hi,

On Fri, 2023-01-06 at 13:40 +0000, Julien Grall wrote:
> Hi,
> 
> On 06/01/2023 13:14, Oleksii Kurochko wrote:
> > The patch introduce sbi_putchar() SBI call which is necessary
> > to implement initial early_printk
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >   xen/arch/riscv/Makefile          |  1 +
> >   xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
> >   xen/arch/riscv/sbi.c             | 44
> > ++++++++++++++++++++++++++++++++
> 
> IMHO, it would be better to implement sbi.c in assembly so you can use
> print in the console before you jump to C world.
> 
I thought we could live with the C version: since the stack is set up
from the start, early_printk() can be called from assembly code too.
Is that a bad approach?

> >   3 files changed, 79 insertions(+)
> >   create mode 100644 xen/arch/riscv/include/asm/sbi.h
> >   create mode 100644 xen/arch/riscv/sbi.c
> > 
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index 5a67a3f493..60db415654 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,5 +1,6 @@
> >  obj-$(CONFIG_RISCV_64) += riscv64/
> >  obj-y += setup.o
> > +obj-y += sbi.o
> 
> Please order the filenames alphabetically.
> 
> Cheers,
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:06:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:06:31 +0000
Message-ID: <ce77619047e452bd7950bdc4e3c772f98464bf1f.camel@gmail.com>
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
From: Oleksii <oleksii.kurochko@gmail.com>
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>
Date: Mon, 09 Jan 2023 11:06:22 +0200
In-Reply-To: <420e9747-09ba-b6e4-d3c5-14f0f174c1d7@amd.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
	 <420e9747-09ba-b6e4-d3c5-14f0f174c1d7@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

Hi Michal,

On Fri, 2023-01-06 at 16:19 +0100, Michal Orzel wrote:
> Hi Oleksii,
> 
> On 06/01/2023 14:14, Oleksii Kurochko wrote:
> > 
> > 
> > The patch introduce sbi_putchar() SBI call which is necessary
> > to implement initial early_printk
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >  xen/arch/riscv/Makefile          |  1 +
> >  xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
> >  xen/arch/riscv/sbi.c             | 44
> > ++++++++++++++++++++++++++++++++
> >  3 files changed, 79 insertions(+)
> >  create mode 100644 xen/arch/riscv/include/asm/sbi.h
> >  create mode 100644 xen/arch/riscv/sbi.c
> > 
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index 5a67a3f493..60db415654 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,5 +1,6 @@
> >  obj-$(CONFIG_RISCV_64) += riscv64/
> >  obj-y += setup.o
> > +obj-y += sbi.o
> > 
> >  $(TARGET): $(TARGET)-syms
> >         $(OBJCOPY) -O binary -S $< $@
> > diff --git a/xen/arch/riscv/include/asm/sbi.h
> > b/xen/arch/riscv/include/asm/sbi.h
> > new file mode 100644
> > index 0000000000..34b53f8eaf
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/sbi.h
> > @@ -0,0 +1,34 @@
> > +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> > +/*
> > + * Copyright (c) 2021 Vates SAS.
> > + *
> > + * Taken from xvisor, modified by Bobby Eshleman
> > (bobby.eshleman@gmail.com).
> > + *
> > + * Taken/modified from Xvisor project with the following
> > copyright:
> > + *
> > + * Copyright (c) 2019 Western Digital Corporation or its
> > affiliates.
> > + */
> > +
> > +#ifndef __CPU_SBI_H__
> > +#define __CPU_SBI_H__
> I wonder where does CPU come from. Shouldn't this be called
> __ASM_RISCV_SBI_H__ ?
>=20
I missed that when taking these files from Bobby's patch series.
It makes sense to rename the define.
Thanks.
> > +
> > +#define SBI_EXT_0_1_CONSOLE_PUTCHAR            0x1
> > +
> > +struct sbiret {
> > +    long error;
> > +    long value;
> > +};
> > +
> > +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
> > unsigned long arg0,
> > +        unsigned long arg1, unsigned long arg2,
> > +        unsigned long arg3, unsigned long arg4,
> > +        unsigned long arg5);
> The arguments need to be aligned.
> 
> > +
> > +/**
> > + * Writes given character to the console device.
> > + *
> > + * @param ch The data to be written to the console.
> > + */
> > +void sbi_console_putchar(int ch);
> > +
> > +#endif // __CPU_SBI_H__
> // should be replaced with /* */
> 
>=20
> > diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
> > new file mode 100644
> > index 0000000000..67cf5dd982
> > --- /dev/null
> > +++ b/xen/arch/riscv/sbi.c
> > @@ -0,0 +1,44 @@
> > +/* SPDX-License-Identifier: GPL-2.0-or-later */
> > +/*
> > + * Taken and modified from the xvisor project with the copyright
> > Copyright (c)
> > + * 2019 Western Digital Corporation or its affiliates and author
> > Anup Patel
> > + * (anup.patel@wdc.com).
> > + *
> > + * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> > + *
> > + * Copyright (c) 2019 Western Digital Corporation or its
> > affiliates.
> > + * Copyright (c) 2021 Vates SAS.
> > + */
> > +
> > +#include <xen/errno.h>
> > +#include <asm/sbi.h>
> > +
> > +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
> > unsigned long arg0,
> > +            unsigned long arg1, unsigned long arg2,
> > +            unsigned long arg3, unsigned long arg4,
> > +            unsigned long arg5)
> The arguments need to be aligned.
> 
It looks like I mixed tabs with spaces, or vice versa.
I will double-check.
Thanks.
> > +{
> > +    struct sbiret ret;
> Could you please add an empty line here.
> 
> > +    register unsigned long a0 asm ("a0") = arg0;
> > +    register unsigned long a1 asm ("a1") = arg1;
> > +    register unsigned long a2 asm ("a2") = arg2;
> > +    register unsigned long a3 asm ("a3") = arg3;
> > +    register unsigned long a4 asm ("a4") = arg4;
> > +    register unsigned long a5 asm ("a5") = arg5;
> > +    register unsigned long a6 asm ("a6") = fid;
> > +    register unsigned long a7 asm ("a7") = ext;
> > +
> > +    asm volatile ("ecall"
> > +                  : "+r" (a0), "+r" (a1)
> > +                  : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6),
> > "r" (a7)
> > +                  : "memory");
> > +    ret.error = a0;
> > +    ret.value = a1;
> > +
> > +    return ret;
> > +}
> > +
> > +void sbi_console_putchar(int ch)
> > +{
> > +    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
> > +}
> > --
> > 2.38.1
> > 
> > 
> 
> ~Michal
~Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:10:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:10:15 +0000
Message-ID: <c197037c48921c3bbfd797172829ffa5d01609c2.camel@gmail.com>
Subject: Re: [PATCH v1 6/8] xen/riscv: introduce early_printk basic stuff
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>
Date: Mon, 09 Jan 2023 11:10:07 +0200
In-Reply-To: <e7e66208-5a4f-f37a-6368-29489e93aad9@xen.org>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <3f30a60729b45ee01adc2d4c0eec5a89bb083abd.1673009740.git.oleksii.kurochko@gmail.com>
	 <e7e66208-5a4f-f37a-6368-29489e93aad9@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

Hi,

On Fri, 2023-01-06 at 13:51 +0000, Julien Grall wrote:
> Hi,
> 
> On 06/01/2023 13:14, Oleksii Kurochko wrote:
> > The patch introduces the basic early_printk functionality,
> > which will be enough to print 'hello from C environment'.
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >   xen/arch/riscv/Kconfig.debug              |  7 ++++++
> >   xen/arch/riscv/Makefile                   |  1 +
> >   xen/arch/riscv/early_printk.c             | 27 +++++++++++++++++++++++
> >   xen/arch/riscv/include/asm/early_printk.h | 12 ++++++++++
> >   4 files changed, 47 insertions(+)
> >   create mode 100644 xen/arch/riscv/early_printk.c
> >   create mode 100644 xen/arch/riscv/include/asm/early_printk.h
> > 
> > diff --git a/xen/arch/riscv/Kconfig.debug
> > b/xen/arch/riscv/Kconfig.debug
> > index e69de29bb2..940630fd62 100644
> > --- a/xen/arch/riscv/Kconfig.debug
> > +++ b/xen/arch/riscv/Kconfig.debug
> > @@ -0,0 +1,7 @@
> > +config EARLY_PRINTK
> > +    bool "Enable early printk config"
> > +    default DEBUG
> > +    depends on RISCV_64
> 
> OOI, why can't this be used for RISCV_32?
> 
We can. It's my fault: we wanted to start with RISCV_64 support, so I
totally forgot about RISCV_32 =)
> > +    help
> > +
> > +      Enables early printk debug messages
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index 60db415654..e8630fe68d 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,6 +1,7 @@
> >   obj-$(CONFIG_RISCV_64) += riscv64/
> >   obj-y += setup.o
> >   obj-y += sbi.o
> > +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> 
> Please order the files alphabetically.
> 
> >  
> >   $(TARGET): $(TARGET)-syms
> >         $(OBJCOPY) -O binary -S $< $@
> > diff --git a/xen/arch/riscv/early_printk.c
> > b/xen/arch/riscv/early_printk.c
> > new file mode 100644
> > index 0000000000..f357f3220b
> > --- /dev/null
> > +++ b/xen/arch/riscv/early_printk.c
> > @@ -0,0 +1,27 @@
> 
> Please add an SPDX license (the default for Xen is GPLv2).
> 
> > +/*
> > + * RISC-V early printk using SBI
> > + *
> > + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> 
> So the copyright here is from Bobby. But I don't see any mention in
> the commit message. Where is this coming from?
> 
Could you please share an example of how it should look?
Signed-off-by: ...? Or "this file was taken from patch series ..."?
> > + */
> > +#include <asm/sbi.h>
> > +#include <asm/early_printk.h>
> 
> Please order the files alphabetically.
> 
> > +
> > +void early_puts(const char *s, size_t nr)
> > +{
> > +    while ( nr-- > 0 )
> > +    {
> > +        if (*s == '\n')
> > +            sbi_console_putchar('\r');
> > +        sbi_console_putchar(*s);
> > +        s++;
> > +    }
> > +}
> > +
> > +void early_printk(const char *str)
> > +{
> > +    while (*str)
> > +    {
> > +        early_puts(str, 1);
> > +        str++;
> > +    }
> > +}
> > diff --git a/xen/arch/riscv/include/asm/early_printk.h
> > b/xen/arch/riscv/include/asm/early_printk.h
> > new file mode 100644
> > index 0000000000..05106e160d
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/early_printk.h
> > @@ -0,0 +1,12 @@
> > +#ifndef __EARLY_PRINTK_H__
> > +#define __EARLY_PRINTK_H__
> > +
> > +#include <xen/early_printk.h>
> > +
> > +#ifdef CONFIG_EARLY_PRINTK
> > +void early_printk(const char *str);
> > +#else
> > +static inline void early_printk(const char *s) {};
> > +#endif
> > +
> > +#endif /* __EARLY_PRINTK_H__ */
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:12:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:12:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473436.734035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEoCL-0005wq-4L; Mon, 09 Jan 2023 09:12:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473436.734035; Mon, 09 Jan 2023 09:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEoCL-0005wj-0u; Mon, 09 Jan 2023 09:12:05 +0000
Received: by outflank-mailman (input) for mailman id 473436;
 Mon, 09 Jan 2023 09:12:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEoCJ-0005wZ-4Y
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 09:12:03 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ad01c3e8-8ffd-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 10:12:01 +0100 (CET)
Received: by mail-lf1-x135.google.com with SMTP id bu8so11961102lfb.4
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 01:12:01 -0800 (PST)
Received: from [192.168.0.145] ([195.234.76.149])
 by smtp.gmail.com with ESMTPSA id
 bp22-20020a056512159600b004cc800b1f2csm742680lfb.238.2023.01.09.01.11.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 01:11:59 -0800 (PST)
X-Inumbo-ID: ad01c3e8-8ffd-11ed-b8d0-410ff93cb8f0
X-Received: by 2002:a05:6512:4014:b0:4b6:f22c:8001 with SMTP id br20-20020a056512401400b004b6f22c8001mr27849177lfb.56.1673255520584;
        Mon, 09 Jan 2023 01:12:00 -0800 (PST)
Message-ID: <2904df5d2d0882dc53ffb1da74072cb3956911ff.camel@gmail.com>
Subject: Re: [PATCH v1 8/8] automation: add RISC-V smoke test
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	 <gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Date: Mon, 09 Jan 2023 11:11:58 +0200
In-Reply-To: <c55a0743-5433-205c-f40c-cd296576c0f4@citrix.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <90078a83982b37846e9845c8ffc50c92f3be1f47.1673009740.git.oleksii.kurochko@gmail.com>
	 <c55a0743-5433-205c-f40c-cd296576c0f4@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-06 at 15:05 +0000, Andrew Cooper wrote:
> On 06/01/2023 1:14 pm, Oleksii Kurochko wrote:
> > Add a check that the message 'Hello from C env' is present
> > in the log file, to be sure that the stack is set up and the C part of
> > early printk is working.
> > 
> > Also qemu-system-riscv was added to riscv64.dockerfile as it is
> > required for the RISC-V smoke test.
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >  automation/build/archlinux/riscv64.dockerfile |  3 ++-
> >  automation/scripts/qemu-smoke-riscv64.sh      | 20 +++++++++++++++++++
> >  2 files changed, 22 insertions(+), 1 deletion(-)
> >  create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
> 
> Looking through the entire series, aren't we missing a hunk to
> test.yml
> to wire up the smoke test?
> 
I missed that. I will update test.yml in the next patch series.
> It wants to live in this patch along with the introduction of
> qemu-smoke-riscv64.sh
> 
> However, the modification to the dockerfile wants breaking out and
> submitting separately.  It will involve rebuilding and redeploying the
> container, which is a) fine to do separately, and b) a necessary
> prerequisite for anyone else to take this series and test it.
> 
I am going to send a patch with the dockerfile modification today.
Thanks.

> ~Andrew
~ Oleksii
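As a rough sketch of the check being discussed (the marker string follows the commit message; the file name and the stand-in for the QEMU run are placeholders, not the actual script):

```shell
#!/bin/sh
# Hypothetical smoke-test fragment: capture the serial log from the QEMU
# boot, then fail the job unless the early-printk marker shows up.
set -e

LOG=smoke.serial

# In the real script, qemu-system-riscv64 would write the boot output
# here; this printf merely stands in for it.
printf 'Xen booting...\nHello from C env\n' > "$LOG"

if grep -q 'Hello from C env' "$LOG"; then
    echo "PASS"
else
    echo "FAIL"
    exit 1
fi
```

The same grep-on-serial-log pattern is what the existing Arm smoke scripts in automation/scripts/ use, so a RISC-V variant along these lines fits the surrounding CI conventions.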


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:12:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:12:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473440.734047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEoCn-0006Sa-DW; Mon, 09 Jan 2023 09:12:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473440.734047; Mon, 09 Jan 2023 09:12:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEoCn-0006ST-9i; Mon, 09 Jan 2023 09:12:33 +0000
Received: by outflank-mailman (input) for mailman id 473440;
 Mon, 09 Jan 2023 09:12:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEoCm-0005wZ-HX
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 09:12:32 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2052.outbound.protection.outlook.com [40.107.8.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id be1d729d-8ffd-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 10:12:30 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8321.eurprd04.prod.outlook.com (2603:10a6:20b:3ed::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 09:12:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 09:12:28 +0000
X-Inumbo-ID: be1d729d-8ffd-11ed-b8d0-410ff93cb8f0
Message-ID: <50fe40ce-56d7-4184-7677-53bb1f1cbf22@suse.com>
Date: Mon, 9 Jan 2023 10:12:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 07/11] x86/shadow: L2H shadow type is PV32-only
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <2743393d-852d-b385-9eba-e22806b1c4af@suse.com>
 <a3b7631c-07e7-455b-3531-c33ce435521b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a3b7631c-07e7-455b-3531-c33ce435521b@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 06.01.2023 02:31, Andrew Cooper wrote:
> On 05/01/2023 4:05 pm, Jan Beulich wrote:
>> Like for the various HVM-only types, save a little bit of code by suitably
>> "masking" this type out when !PV32.
> 
> add/remove: 0/1 grow/shrink: 2/4 up/down: 544/-922 (-378)
> Function                                     old     new   delta
> sh_map_and_validate_gl2e__guest_4            136     666    +530
> sh_destroy_shadow                            289     303     +14
> sh_clear_shadow_entry__guest_4               178     176      -2
> sh_validate_guest_entry                      521     472     -49
> sh_map_and_validate_gl2he__guest_4           136       2    -134
> sh_remove_shadows                           4757    4545    -212
> validate_gl2e                                525       -    -525
> Total: Before=3914702, After=3914324, chg -0.01%
> 
> Marginal...

Talk wasn't of only size, but also of what actually is being executed
for no gain e.g. in sh_remove_shadows(). I think "a little bit" is a
fair statement.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I wasn't really sure whether it would be worthwhile to also update the
>> "#else" part of shadow_size(). Doing so would be a little tricky, as the
>> type to return 0 for has no name right now; I'd need to move down the
>> #undef to allow for that. Thoughts?
> 
> This refers to the straight deletion from sh_type_to_size[] ?

Not really, no, but ...

> I was confused by that at first.  The shadow does have a size of 1.  Might
> 
> /*   [SH_type_l2h_64_shadow]  = 1,  PV32 only */
> 
> work better?  That leaves it clearly in there as a 1, but not needing
> any further ifdefary.

... I've made this adjustment anyway. I'd like to note though that such
commenting is disliked by Misra.

The remark is rather about shadow_size() itself which already doesn't
return the "correct" size for all of the types when !HVM, utilizing that
the returned size is of no real interest for types which aren't used in
that case (and which then also don't really exist anymore as
"distinguishable" types). Plus the connection to assertions like this

    ASSERT(pages);

in shadow_alloc(): "pages" is the return value from shadow_size(), and
I think it wouldn't be bad to trigger for "dead" types, requiring the
function to return zero for this type despite it not becoming aliased to
SH_type_unused (i.e. unlike the types unavailable when !HVM).

Which makes me notice that with HVM=y shadow_size() would probably
better use array_access_nospec(). I'll add another patch for this.

>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
>> @@ -859,13 +866,12 @@ do {
>>      int _i;                                                                 \
>>      int _xen = !shadow_mode_external(_dom);                                 \
>>      shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                         \
>> -    ASSERT(mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow ||\
>> -           mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2h_64_shadow);\
>> +    ASSERT_VALID_L2(mfn_to_page(_sl2mfn)->u.sh.type);                       \
>>      for ( _i = 0; _i < SHADOW_L2_PAGETABLE_ENTRIES; _i++ )                  \
>>      {                                                                       \
>>          if ( (!(_xen))                                                      \
>>               || !is_pv_32bit_domain(_dom)                                   \
>> -             || mfn_to_page(_sl2mfn)->u.sh.type != SH_type_l2h_64_shadow    \
>> +             || mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow     \
> 
> Isn't this redundant with the ASSERT_VALID_L2() now?

How would that be? ASSERT_VALID_L2(), when PV32=y, allows for both l2 and
l2h, whereas here we strictly mean only l2. The sole reason for the change
is that the l2h constant simply doesn't exist anymore when !PV32.

> Per your other question, yes this desperately wants rearranging, but I
> would agree with it being another patch.
> 
> I did previously play at trying to simplify the PV pagetable loops in a
> similar way.  Code-gen wise, I think the L2 loops want to calculate an
> upper bound which is either 512, or compat_first_slot, while the L4
> loops want an "if(i == 256) i += 7; continue;" rather than having
> LFENCE-ing predicates on each iteration.

I'll see what I can do without getting distracted too much. (I also guess
you mean "i += 15".)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:37:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:37:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473452.734063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEoaG-0002gR-QW; Mon, 09 Jan 2023 09:36:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473452.734063; Mon, 09 Jan 2023 09:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEoaG-0002gI-LE; Mon, 09 Jan 2023 09:36:48 +0000
Received: by outflank-mailman (input) for mailman id 473452;
 Mon, 09 Jan 2023 09:36:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bkgU=5G=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1pEoaF-0002gC-WA
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 09:36:48 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 21ba2e36-9001-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 10:36:45 +0100 (CET)
Received: by mail-ej1-x634.google.com with SMTP id fc4so18483436ejc.12
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 01:36:45 -0800 (PST)
Received: from [192.168.0.106] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id
 s17-20020a1709060c1100b0084d21db0691sm3499300ejf.179.2023.01.09.01.36.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 09 Jan 2023 01:36:44 -0800 (PST)
X-Inumbo-ID: 21ba2e36-9001-11ed-b8d0-410ff93cb8f0
X-Received: by 2002:a17:906:1583:b0:83f:384f:ea23 with SMTP id k3-20020a170906158300b0083f384fea23mr51928606ejd.57.1673257004921;
        Mon, 09 Jan 2023 01:36:44 -0800 (PST)
Message-ID: <20b15211-492b-713e-288c-14bd5e137ed7@gmail.com>
Date: Mon, 9 Jan 2023 11:36:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [XEN v4] xen/arm: Probe the load/entry point address of an uImage
 correctly
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org, Dmytro_Firsov@epam.com
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com, michal.orzel@amd.com,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>
References: <20221221185300.5309-1-ayan.kumar.halder@amd.com>
 <e26768b7-99f7-f4e4-6ae5-094d17e1594a@xen.org>
Content-Language: en-US
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
In-Reply-To: <e26768b7-99f7-f4e4-6ae5-094d17e1594a@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 08.01.23 18:06, Julien Grall wrote:

Hello Julien, Ayan, all

> Hi Ayan,
> 
> On 21/12/2022 18:53, Ayan Kumar Halder wrote:
>> Currently, kernel_uimage_probe() does not read the load/entry point
>> address set in the uImage header. Thus, info->zimage.start is 0
>> (default value). This causes kernel_zimage_place() to treat the binary
>> (contained within uImage) as a position-independent executable. Thus,
>> it loads it at an incorrect address.
>>
>> The correct approach would be to read "uimage.load" and set
>> info->zimage.start. This will ensure that the binary is loaded at the
>> correct address. Also, read "uimage.ep" and set info->entry (ie kernel 
>> entry
>> address).
>>
>> If the user provides the load address (ie "uimage.load") as 0x0, then
>> the image is treated as a position-independent executable. Xen can load
>> such an image at any address it considers appropriate. A
>> position-independent executable cannot have a fixed entry point address.
>>
>> This behavior is applicable for both arm32 and arm64 platforms.
>>
>> Earlier, for arm32 and arm64 platforms, Xen was ignoring the load and
>> entry point address set in the uImage header. With this commit, Xen
>> will use them. This makes the behavior of Xen consistent with U-Boot
>> for uImage headers.
> 
> The changes look good to me (with a few comments below). That said, 
> before acking the code, I would like an existing user of uImage (maybe 
> EPAM or Arm?) to confirm they are happy with the change.


I have just re-checked the current patch in our typical Xen-based 
environment (no dom0less, Linux in Dom0) and didn't notice any issues 
with it. But we use a zImage for Dom0's kernel, so kernel_uimage_probe() 
is not called.



I CCed Dmytro Firsov who is playing with Zephyr in Dom0 and *might* use 
uImage.
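To illustrate the probing rule discussed above, here is a minimal standalone sketch. The header layout mirrors the legacy uImage format (U-Boot's include/image.h); the function name `probe_uimage`, its signature, and the error handling are illustrative assumptions, not the actual kernel_uimage_probe() code.

```c
#include <stdint.h>
#include <assert.h>
#include <arpa/inet.h>  /* ntohl/htonl: uImage header fields are big-endian */

#define IH_MAGIC 0x27051956u  /* legacy uImage magic number */

/* Legacy uImage header layout (mirrors U-Boot's include/image.h) */
struct uimage_header {
    uint32_t magic, hcrc, time, size;
    uint32_t load;   /* load address ("uimage.load") */
    uint32_t ep;     /* entry point ("uimage.ep") */
    uint32_t dcrc;
    uint8_t  os, arch, type, comp;
    uint8_t  name[32];
};

/*
 * Sketch of the rule from the commit message: read "uimage.load" and
 * "uimage.ep"; a load address of 0x0 marks the payload as position
 * independent, so the hypervisor may place it at any suitable address.
 */
static int probe_uimage(const struct uimage_header *uh,
                        uint64_t *load, uint64_t *entry, int *is_pie)
{
    if ( ntohl(uh->magic) != IH_MAGIC )
        return -1;               /* not a uImage */

    *load   = ntohl(uh->load);
    *entry  = ntohl(uh->ep);
    *is_pie = (*load == 0);      /* PIE: no fixed load/entry address */
    return 0;
}
```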


[snip]


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:48:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:48:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473458.734074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEolm-0004DU-Q3; Mon, 09 Jan 2023 09:48:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473458.734074; Mon, 09 Jan 2023 09:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEolm-0004DN-Mr; Mon, 09 Jan 2023 09:48:42 +0000
Received: by outflank-mailman (input) for mailman id 473458;
 Mon, 09 Jan 2023 09:48:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEoll-0004DH-J4
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 09:48:41 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2051.outbound.protection.outlook.com [40.107.7.51])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cb13073a-9002-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 10:48:39 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8908.eurprd04.prod.outlook.com (2603:10a6:20b:40b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 09:48:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 09:48:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb13073a-9002-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8b65788c-9815-6526-a023-3c1c64699d8f@suse.com>
Date: Mon, 9 Jan 2023 10:48:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 08/11] x86/shadow: reduce effort of hash calculation
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <074dc3bb-6057-4f61-d516-d0fe3551165c@suse.com>
 <acf0f5f6-f4da-cd88-1515-2546153322b4@suse.com>
 <20c268c0-979b-5ac9-da16-7cb7322552a6@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20c268c0-979b-5ac9-da16-7cb7322552a6@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0079.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 06.01.2023 03:03, Andrew Cooper wrote:
> On 05/01/2023 4:05 pm, Jan Beulich wrote:
>> The "n" input is a GFN value and hence bounded by the physical address
>> bits in use on a system.
> 
> The one case where this isn't obviously true is in sh_audit().  It comes
> from a real MFN in the system, not a GFN, which will have the same
> property WRT PADDR_BITS.

I'm afraid I was more wrong with that than just for the audit case. Only
FL1 shadows use GFNs. All other shadows use MFNs. I'll update the sentence.

>>  The hash quality won't improve by also
>> including the upper always-zero bits in the calculation. To keep things
>> as compile-time-constant as they were before, use PADDR_BITS (not
>> paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
> 
> While this is all true, you'll get a much better improvement by not
> forcing 'n' onto the stack just to access it bytewise.  Right now, the
> loop looks like:
> 
> <shadow_hash_insert>:
>     48 83 ec 10                 sub    $0x10,%rsp
>     49 89 c9                    mov    %rcx,%r9
>     41 89 d0                    mov    %edx,%r8d
>     48 8d 44 24 08              lea    0x8(%rsp),%rax
>     48 8d 4c 24 10              lea    0x10(%rsp),%rcx
>     48 89 74 24 08              mov    %rsi,0x8(%rsp)
>     0f 1f 80 00 00 00 00        nopl   0x0(%rax)
> /-> 0f b6 10                    movzbl (%rax),%edx
> |   48 83 c0 01                 add    $0x1,%rax
> |   45 69 c0 3f 00 01 00        imul   $0x1003f,%r8d,%r8d
> |   41 01 d0                    add    %edx,%r8d
> |   48 39 c1                    cmp    %rax,%rcx
> \-- 75 ea                       jne    ffff82d0402efda0
> <shadow_hash_insert+0x20>
> 
> 
> which doesn't even have a compile-time constant loop bound.  It's
> runtime calculated by the second lea constructing the upper pointer bound.
> 
> Given this further delta:
> 
> diff --git a/xen/arch/x86/mm/shadow/common.c
> b/xen/arch/x86/mm/shadow/common.c
> index 4a8bcec10fe8..902c749f2724 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -1397,13 +1397,12 @@ static unsigned int shadow_get_allocation(struct
> domain *d)
>  typedef u32 key_t;
>  static inline key_t sh_hash(unsigned long n, unsigned int t)
>  {
> -    unsigned char *p = (unsigned char *)&n;
>      key_t k = t;
>      int i;
>  
>      BUILD_BUG_ON(PADDR_BITS > BITS_PER_LONG + PAGE_SHIFT);
> -    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++ )
> -        k = p[i] + (k << 6) + (k << 16) - k;
> +    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++, n >>= 8 )
> +        k = (uint8_t)n + (k << 6) + (k << 16) - k;
>  
>      return k % SHADOW_HASH_BUCKETS;
>  }
> 
> the code gen becomes:
> 
> <shadow_hash_insert>:
>     41 89 d0                    mov    %edx,%r8d
>     49 89 c9                    mov    %rcx,%r9
>     b8 05 00 00 00              mov    $0x5,%eax
> /-> 45 69 c0 3f 00 01 00        imul   $0x1003f,%r8d,%r8d
> |   40 0f b6 d6                 movzbl %sil,%edx
> |   48 c1 ee 08                 shr    $0x8,%rsi
> |   41 01 d0                    add    %edx,%r8d
> |   83 e8 01                    sub    $0x1,%eax
> \-- 75 e9                       jne    ffff82d0402efd8b
> <shadow_hash_insert+0xb>
> 
> with an actual constant loop bound, and not a memory operand in sight. 
> This form (even at 8 iterations) will easily execute faster than the
> stack-spilled form.

Oh, yes, good idea. Will adjust.
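For anyone wanting to exercise the revised hash outside the hypervisor, a standalone sketch follows. The PADDR_BITS and SHADOW_HASH_BUCKETS values here are assumptions for illustration; the loop body is the byte-shift variant from the diff above.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PADDR_BITS 52            /* assumption; the real value is per-arch */
#define SHADOW_HASH_BUCKETS 251  /* assumption; illustrative bucket count */

/* Byte-shift variant from the thread: constant loop bound, no stack spill */
static uint32_t sh_hash(unsigned long n, unsigned int t)
{
    uint32_t k = t;
    int i;

    /* (52 - 12 + 7) / 8 = 5 iterations, consuming the low 5 bytes of n */
    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++, n >>= 8 )
        k = (uint8_t)n + (k << 6) + (k << 16) - k;

    return k % SHADOW_HASH_BUCKETS;
}
```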

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 09:50:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 09:50:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473465.734084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEong-0005c6-8n; Mon, 09 Jan 2023 09:50:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473465.734084; Mon, 09 Jan 2023 09:50:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEong-0005bz-6B; Mon, 09 Jan 2023 09:50:40 +0000
Received: by outflank-mailman (input) for mailman id 473465;
 Mon, 09 Jan 2023 09:50:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEone-0005bt-UQ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 09:50:38 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1118a50e-9003-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 10:50:37 +0100 (CET)
Received: by mail-lf1-x130.google.com with SMTP id bq39so12138925lfb.0
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 01:50:37 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 f16-20020a056512361000b004cb27d8edd1sm1535009lfs.85.2023.01.09.01.50.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 01:50:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1118a50e-9003-11ed-b8d0-410ff93cb8f0
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH] automation: add qemu-system-riscv to riscv64.dockerfile
Date: Mon,  9 Jan 2023 11:50:32 +0200
Message-Id: <8badde729e97ef6508204c5229199b7247c7a3da.1673257832.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

qemu-system-riscv will be used to run the RISC-V Xen binary and to
gather logs for smoke tests.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 automation/build/archlinux/riscv64.dockerfile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/riscv64.dockerfile
index ff8b2b955d..375c78ecd5 100644
--- a/automation/build/archlinux/riscv64.dockerfile
+++ b/automation/build/archlinux/riscv64.dockerfile
@@ -9,7 +9,8 @@ RUN pacman --noconfirm --needed -Syu \
     inetutils \
     riscv64-linux-gnu-binutils \
     riscv64-linux-gnu-gcc \
-    riscv64-linux-gnu-glibc
+    riscv64-linux-gnu-glibc \
+    qemu-system-riscv
 
 # Add compiler path
 ENV CROSS_COMPILE=riscv64-linux-gnu-
-- 
2.38.1
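As a usage illustration, a smoke test could boot the Xen binary under the newly added emulator roughly as below. The path `binaries/xen`, the flags, and the timeout are assumptions for illustration; the actual CI test scripts may differ.

```shell
# Build the invocation; "binaries/xen" is a hypothetical path to the
# RISC-V Xen binary produced by the build job. The Arch package
# qemu-system-riscv provides the qemu-system-riscv64 binary.
QEMU_CMD="qemu-system-riscv64 -M virt -smp 1 -m 2g -nographic -kernel binaries/xen"

# In CI this would run under a timeout, capturing the serial log, e.g.:
#   timeout 60 $QEMU_CMD |& tee smoke.serial
echo "$QEMU_CMD"
```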



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 10:01:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 10:01:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473471.734095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEoyE-0007EN-8M; Mon, 09 Jan 2023 10:01:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473471.734095; Mon, 09 Jan 2023 10:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEoyE-0007EG-5a; Mon, 09 Jan 2023 10:01:34 +0000
Received: by outflank-mailman (input) for mailman id 473471;
 Mon, 09 Jan 2023 10:01:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEoyC-0007EA-Ar
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 10:01:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEoyB-0001fy-Vb; Mon, 09 Jan 2023 10:01:31 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.1.158]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEoyB-0002YM-Pm; Mon, 09 Jan 2023 10:01:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <1ff6107c-bcfe-353a-04ed-b429ac65a81d@xen.org>
Date: Mon, 9 Jan 2023 10:01:29 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 01/13] xen/arm: re-arrange the static shared memory
 region
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-2-Penny.Zheng@arm.com>
 <dd5b93c3-51d1-40ad-88b4-5bbd54633651@xen.org>
 <AM0PR08MB4530A8EA76480621255CEFC9F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB4530A8EA76480621255CEFC9F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


Hi Penny,

On 09/01/2023 07:48, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Sunday, January 8, 2023 7:44 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v1 01/13] xen/arm: re-arrange the static shared memory
>> region
>> On 15/11/2022 02:52, Penny Zheng wrote:
>>> This commit re-arranges the static shared memory regions into a
>>> separate array, shm_meminfo. Each static shared memory region now
>>> has its own structure, 'shm_membank', to hold all shm-related
>>> members, like shm_id, etc., and a pointer to the normal memory bank
>>> 'membank'. This avoids continuing to grow 'membank'.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>>    xen/arch/arm/bootfdt.c            | 40 +++++++++++++++++++------------
>>>    xen/arch/arm/domain_build.c       | 35 ++++++++++++++++-----------
>>>    xen/arch/arm/include/asm/kernel.h |  2 +-
>>>    xen/arch/arm/include/asm/setup.h  | 16 +++++++++----
>>>    4 files changed, 59 insertions(+), 34 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c index
>>> 6014c0f852..ccf281cd37 100644
>>> --- a/xen/arch/arm/bootfdt.c
>>> +++ b/xen/arch/arm/bootfdt.c
>>> @@ -384,6 +384,7 @@ static int __init process_shm_node(const void *fdt,
>> int node,
>>>        const __be32 *cell;
>>>        paddr_t paddr, gaddr, size;
>>>        struct meminfo *mem = &bootinfo.reserved_mem;
>>
>> After this patch, 'mem' is barely going to be used. So I would recommend to
>> remove it or restrict the scope.
>>
> 
> I hope I understand correctly: you are saying that every static shared
> memory bank will be described as a "struct shm_membank". That's right.
> However, when a host address is provided, we still need an instance of
> "struct membank" in "bootinfo.reserved_mem" to refer to. Only this way
> can it be initialized properly as static pages.
> That's why I put a "struct membank *" pointer in "struct shm_membank"
> to refer to the same object.

I wasn't talking about the field in "struct shm_membank". Instead, I was 
referring to the local variable:

struct meminfo *mem = &bootinfo.reserved_mem;

AFAICT, the only use after this patch is when you add a new bank in 
shm_mem. So you could restrict the scope of the local variable.

> If I removed the instance in bootinfo.reserved_mem, a few more paths
> would need to be updated, like init_staticmem_pages(),
> dt_unreserved_regions(), etc.
>   
>> This will make it easier to confirm that most of the uses of 'mem'
>> have been replaced with 'shm_mem' and reduce the risk of confusion
>> between the two (the names are quite similar).
>>
>> [...]
>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index bd30d3798c..c0fd13f6ed 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -757,20 +757,20 @@ static int __init
>> acquire_nr_borrower_domain(struct domain *d,
>>>    {
>>>        unsigned int bank;
>>>
>>> -    /* Iterate reserved memory to find requested shm bank. */
>>> -    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
>>> +    /* Iterate static shared memory to find requested shm bank. */
>>> +    for ( bank = 0 ; bank < bootinfo.shm_mem.nr_banks; bank++ )
>>>        {
>>> -        paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start;
>>> -        paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size;
>>> +        paddr_t bank_start = bootinfo.shm_mem.bank[bank].membank-
>>> start;
>>> +        paddr_t bank_size =
>>> + bootinfo.shm_mem.bank[bank].membank->size;
>>
>> I was expecting a "if (type == MEMBANK_STATIC_DOMAIN) ..."  to be
>> removed. But it looks like there was none. I guess that was a mistake in the
>> existing code?
>>
> 
> Oh, you're right, the type shall also be checked.

Just to clarify, with this patch you don't need to check the type. I was 
pointing out a latent error in the existing code.

> 
>>>
>>>            if ( (pbase == bank_start) && (psize == bank_size) )
>>>                break;
>>>        }
>>>
>>> -    if ( bank == bootinfo.reserved_mem.nr_banks )
>>> +    if ( bank == bootinfo.shm_mem.nr_banks )
>>>            return -ENOENT;
>>>
>>> -    *nr_borrowers =
>> bootinfo.reserved_mem.bank[bank].nr_shm_borrowers;
>>> +    *nr_borrowers = bootinfo.shm_mem.bank[bank].nr_shm_borrowers;
>>>
>>>        return 0;
>>>    }
>>> @@ -907,11 +907,18 @@ static int __init
>> append_shm_bank_to_domain(struct kernel_info *kinfo,
>>>                                                paddr_t start, paddr_t size,
>>>                                                const char *shm_id)
>>>    {
>>> +    struct membank *membank;
>>> +
>>>        if ( kinfo->shm_mem.nr_banks >= NR_MEM_BANKS )
>>>            return -ENOMEM;
>>>
>>> -    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start;
>>> -    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size;
>>> +    membank = xmalloc_bytes(sizeof(struct membank));
>>
>> You allocate memory but never free it. However, I think it would be better to
>> avoid the dynamic allocation. So I would consider to not use the structure
>> shm_membank and instead create a specific one for the domain construction.
>>
> 
> True, a local variable "struct meminfo shm_banks" could be introduced
> just for domain construction, in the function construct_domU().

Hmmm... I didn't suggest introducing a local variable. I would still 
much prefer that we keep using 'kinfo', because we keep all the domain 
building information in one place.

So the ``struct meminfo`` would want to be defined in ``kinfo``.
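To make the arrangement discussed in this thread concrete, the relationship between the structures might look like the sketch below. The names follow the patch, but the sizes, limits, and scalar types are illustrative assumptions, not the actual Xen definitions.

```c
#include <assert.h>
#include <stdint.h>

#define NR_MEM_BANKS   64   /* assumption: illustrative limit */
#define MAX_SHM_ID_LEN 16   /* assumption: illustrative limit */

/* Plain memory bank, as kept in bootinfo.reserved_mem */
struct membank {
    uint64_t start;  /* paddr_t in Xen */
    uint64_t size;
};

/*
 * Static shared memory bank: holds the shm-specific members and points
 * back at the normal membank instead of growing struct membank itself.
 */
struct shm_membank {
    char shm_id[MAX_SHM_ID_LEN + 1];
    unsigned int nr_shm_borrowers;
    struct membank *membank;  /* refers into bootinfo.reserved_mem when
                                 a host address was provided */
};

/* Separate array of shm banks, analogous to struct meminfo */
struct shm_meminfo {
    unsigned int nr_banks;
    struct shm_membank bank[NR_MEM_BANKS];
};
```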

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 10:07:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 10:07:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473477.734106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEp49-0007s8-TP; Mon, 09 Jan 2023 10:07:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473477.734106; Mon, 09 Jan 2023 10:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEp49-0007s1-Qp; Mon, 09 Jan 2023 10:07:41 +0000
Received: by outflank-mailman (input) for mailman id 473477;
 Mon, 09 Jan 2023 10:07:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEp49-0007rv-8t
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 10:07:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEp48-0001lq-UR; Mon, 09 Jan 2023 10:07:40 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.1.158]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEp48-0002mi-OT; Mon, 09 Jan 2023 10:07:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=k5BdFwjEDTMlEnnae9Dcz9y3S/+ffqN3Ib5N2Jhcr9Y=; b=ufQqEnTzUhf+V5fqms0QRn46Vz
	MfLbjmIZtZrmNFaqX9JFTw2fy0ZS/cSm7E+bXehMhQE4nTR61MpCsGJ+1G+Ew9KeoMcnbv83MEpxP
	pX/xSSN6MD+M5DTfktTCgMif8ynGsmYTUQCByTwqd+r3cJZM1Or9xLYP76dedIEPUwhk=;
Message-ID: <23983bc4-8868-dfed-d58d-ca6baa6d05d4@xen.org>
Date: Mon, 9 Jan 2023 10:07:39 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 05/13] xen/arm: allocate shared memory from heap when
 host address not provided
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-6-Penny.Zheng@arm.com>
 <ff0870ab-d1b1-e029-26aa-c690063d348b@xen.org>
 <AM0PR08MB4530EC5FBFD2625E521AC067F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB4530EC5FBFD2625E521AC067F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 09/01/2023 07:50, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Sunday, January 8, 2023 8:23 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v1 05/13] xen/arm: allocate shared memory from heap
>> when host address not provided
>>
>> Hi Penny,
>>
> 
> Hi Julien,
> 
>> On 15/11/2022 02:52, Penny Zheng wrote:
>>> When the host address is not provided in "xen,shared-mem", we let Xen
>>> allocate the requested shared memory from the heap. Once the shared
>>> memory is allocated, it will be marked as static (PGC_static), which
>>> means it will be reserved as static memory and will not go back to the
>>> heap even on freeing.
>>
>> Please don't move pages from the {xen,dom}heap to the static heap. If you
>> need to keep the pages for longer, then get an extra reference so they will
>> not be released.
>>
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>>    xen/arch/arm/domain_build.c | 83 ++++++++++++++++++++++++++++++++++++-
>>>    1 file changed, 82 insertions(+), 1 deletion(-)
>>>
>>> +static int __init allocate_shared_memory(struct shm_membank *shm_membank,
>>> +                                         paddr_t psize)
>>> +{
>>> +    struct meminfo *banks;
>>> +    int ret;
>>> +
>>> +    BUG_ON(shm_membank->mem.banks.meminfo != NULL);
>>> +
>>> +    banks = xmalloc_bytes(sizeof(struct meminfo));
>>
>> Where is this freed?
>>
> 
> This kind of info is only used at boot time, so maybe I should
> free it in init_done()?

I don't think you can free it in init_done() because we don't keep a 
pointer to kinfo past construct_dom*().

> Or just after process_shm()?

This might work. But I think it would be better to avoid the dynamic 
memory allocation if we can.
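
To make the suggestion concrete, here is a toy sketch of avoiding xmalloc() with a fixed-size boot-time table. The cap and all names below are made up for illustration; in Xen such a table would be tagged __initdata so it is reclaimed together with the rest of the .init sections, removing the need to free anything explicitly.

```c
#include <assert.h>

#define MAX_SHM_BANKS 4  /* hypothetical cap; Xen would pick its own limit */

struct membank { unsigned long start, size; };

/*
 * Sketch: instead of xmalloc()ing a struct meminfo that must later be
 * freed, keep a fixed-size boot-time table. In Xen this would carry
 * __initdata so the memory is reclaimed automatically at init_done().
 */
static struct membank shm_banks[MAX_SHM_BANKS];
static unsigned int shm_nr_banks;

static int shm_add_bank(unsigned long start, unsigned long size)
{
    if (shm_nr_banks >= MAX_SHM_BANKS)
        return -1;  /* table full: caller handles it, nothing to free */
    shm_banks[shm_nr_banks].start = start;
    shm_banks[shm_nr_banks].size = size;
    shm_nr_banks++;
    return 0;
}
```

The trade-off, as discussed below, is picking a suitable fixed size for the table.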

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 10:57:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 10:57:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473492.734126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEpqb-0004ui-Tc; Mon, 09 Jan 2023 10:57:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473492.734126; Mon, 09 Jan 2023 10:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEpqb-0004ub-Q9; Mon, 09 Jan 2023 10:57:45 +0000
Received: by outflank-mailman (input) for mailman id 473492;
 Mon, 09 Jan 2023 10:57:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEpqa-0004uV-Pl
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 10:57:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEpqa-0002tO-Fc; Mon, 09 Jan 2023 10:57:44 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.1.158]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEpqa-0004em-8b; Mon, 09 Jan 2023 10:57:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=DisyklBe5soDqZmTSfIpDs6EqXU/3Me8FA8/h5eys9I=; b=Qph4OWtpGLVxVOXnm5PrShrxF7
	qRyy02FRJnJykqEouy1kzDF51o2Cgq10ynAV2t+R7sLNfqwMmpF2hXAznUqjoa9BIYio0m+S3B8zY
	0S3IHpgTydzLQLUekLLAHCgjcevxP9kymFypkMtUmWAjZ8MLO2FeJoLZDW+PNOIibLSE=;
Message-ID: <6db41bd2-ab71-422a-4235-a9209e984915@xen.org>
Date: Mon, 9 Jan 2023 10:57:42 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host
 address not provided
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-7-Penny.Zheng@arm.com>
 <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
 <AM0PR08MB453041150588948050F718D4F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB453041150588948050F718D4F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 09/01/2023 07:49, Penny Zheng wrote:
> Hi Julien

Hi Penny,

> Happy new year~~~~

Happy new year too!

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Sunday, January 8, 2023 8:53 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
>> when host address not provided
>>
>> Hi,
>>
>> On 15/11/2022 02:52, Penny Zheng wrote:
>>> @@ -922,33 +927,82 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
>>>        d->max_pages += nr_pfns;
>>>
>>>        smfn = maddr_to_mfn(pbase);
>>> -    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
>>> -    if ( res )
>>> +    page = mfn_to_page(smfn);
>>> +    /*
>>> +     * If page is allocated from heap as static shared memory, then we just
>>> +     * assign it to the owner domain
>>> +     */
>>> +    if ( page->count_info == (PGC_state_inuse | PGC_static) )
>>
>> I am a bit confused how this can help differentiating, because
>> PGC_state_inuse is 0. So effectively, you are checking that count_info
>> is equal to PGC_static.
>>
> 
> When the host address is provided, the host address range defined in
> "xen,static-mem" is stored as a "struct membank" with type
> "MEMBANK_STATIC_DOMAIN" in "bootinfo.reserved_mem".
> It is then initialized as static memory through "init_staticmem_pages",
> so its page->count_info is PGC_state_free | PGC_static.
> For pages allocated from the heap, the page state is different, being
> PGC_state_inuse. So we are actually checking the page state to tell the
> difference.

Ok. This is definitely not obvious from the code. But I think this is a
very fragile assumption.

Instead, it would be better if we allocate the memory in 
acquire_shared_memory_bank() when the host address is not provided.
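
For illustration, the flag arithmetic behind this point can be sketched with made-up values. These are not Xen's actual definitions; the only property that matters for the argument is that PGC_state_inuse is 0.

```c
#include <assert.h>

/*
 * Illustrative flag values only; NOT Xen's real definitions.
 * The key property under discussion is that PGC_state_inuse is 0.
 */
#define PGC_state_inuse 0UL
#define PGC_state_free  (1UL << 9)
#define PGC_static      (1UL << 12)

/*
 * The patch's check, written out: because PGC_state_inuse is 0, ORing it
 * in changes nothing, so this degenerates to "count_info == PGC_static".
 */
static int looks_heap_allocated(unsigned long count_info)
{
    return count_info == (PGC_state_inuse | PGC_static);
}
```

The check only distinguishes the two cases because the heap-allocated page happens to carry no other bits, which is exactly the fragile part.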

> 
>> But as I wrote in a previous patch, I don't think you should convert
>> {xen,dom}heap pages to a static pages.
>>
> 
> I agree that taking a reference could also prevent giving these pages back
> to the heap. But may I ask what your concern is with converting
> {xen,dom}heap pages to static pages?

A few reasons:
  1) I consider them as two distinct allocators. So far they have the 
same behavior, but in the future this may change.
  2) If the page is freed, you really don't want the domain to be able to 
re-use the page for a different purpose.


I realize that 2) is already a problem today with static pages. So I 
think the best approach is to ensure that pages allocated for shared 
memory never reach any of the allocators.
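
As a toy model (not Xen's real page API, just an illustration), the "extra reference" idea mentioned earlier works like this: a page whose reference count never drops to zero is never handed back to the allocator.

```c
#include <assert.h>

/* Toy stand-in for a page; NOT Xen's struct page_info. */
struct toy_page {
    unsigned long count;
    int freed;  /* would mean "returned to the heap" in Xen */
};

static void toy_get(struct toy_page *pg)
{
    pg->count++;
}

static void toy_put(struct toy_page *pg)
{
    if ( --pg->count == 0 )
        pg->freed = 1;  /* stand-in for free_domheap_pages() */
}

/*
 * With an extra reference taken at setup, the owner dropping its own
 * reference does not return the page to the allocator.
 */
static int survives_owner_release(void)
{
    struct toy_page pg = { 1, 0 };  /* owner holds the initial reference */

    toy_get(&pg);  /* extra reference taken when sharing is set up */
    toy_put(&pg);  /* owner releases its reference */
    return !pg.freed;
}
```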

>   
>> [...]
>>
>>> +static int __init assign_shared_memory(struct domain *d,
>>> +                                       struct shm_membank *shm_membank,
>>> +                                       paddr_t gbase)
>>> +{
>>> +    int ret = 0;
>>> +    unsigned long nr_pages, nr_borrowers;
>>> +    struct page_info *page;
>>> +    unsigned int i;
>>> +    struct meminfo *meminfo;
>>> +
>>> +    /* Host address is not provided in "xen,shared-mem" */
>>> +    if ( shm_membank->mem.banks.meminfo )
>>> +    {
>>> +        meminfo = shm_membank->mem.banks.meminfo;
>>> +        for ( i = 0; i < meminfo->nr_banks; i++ )
>>> +        {
>>> +            ret = acquire_shared_memory(d,
>>> +                                        meminfo->bank[i].start,
>>> +                                        meminfo->bank[i].size,
>>> +                                        gbase);
>>> +            if ( ret )
>>> +                return ret;
>>> +
>>> +            gbase += meminfo->bank[i].size;
>>> +        }
>>> +    }
>>> +    else
>>> +    {
>>> +        ret = acquire_shared_memory(d,
>>> +                                    shm_membank->mem.bank->start,
>>> +                                    shm_membank->mem.bank->size, gbase);
>>> +        if ( ret )
>>> +            return ret;
>>> +    }
>>
>> Looking at this change and...
>>
>>> +
>>>        /*
>>>         * Get the right amount of references per page, which is the number of
>>>         * borrower domains.
>>> @@ -984,23 +1076,37 @@ static int __init assign_shared_memory(struct domain *d,
>>>         * So if the borrower is created first, it will cause adding pages
>>>         * in the P2M without reference.
>>>         */
>>> -    page = mfn_to_page(smfn);
>>> -    for ( i = 0; i < nr_pages; i++ )
>>> +    if ( shm_membank->mem.banks.meminfo )
>>>        {
>>> -        if ( !get_page_nr(page + i, d, nr_borrowers) )
>>> +        meminfo = shm_membank->mem.banks.meminfo;
>>> +        for ( i = 0; i < meminfo->nr_banks; i++ )
>>>            {
>>> -            printk(XENLOG_ERR
>>> -                   "Failed to add %lu references to page %"PRI_mfn".\n",
>>> -                   nr_borrowers, mfn_x(smfn) + i);
>>> -            goto fail;
>>> +            page = mfn_to_page(maddr_to_mfn(meminfo->bank[i].start));
>>> +            nr_pages = PFN_DOWN(meminfo->bank[i].size);
>>> +            ret = add_shared_memory_ref(d, page, nr_pages, nr_borrowers);
>>> +            if ( ret )
>>> +                goto fail;
>>>            }
>>>        }
>>> +    else
>>> +    {
>>> +        page = mfn_to_page(
>>> +                maddr_to_mfn(shm_membank->mem.bank->start));
>>> +        nr_pages = shm_membank->mem.bank->size >> PAGE_SHIFT;
>>> +        ret = add_shared_memory_ref(d, page, nr_pages, nr_borrowers);
>>> +        if ( ret )
>>> +            return ret;
>>> +    }
>>
>> ... this one. The code to deal with a bank is exactly the same. But you need
>> the duplication because you special case "one bank".
>>
>> As I wrote in a previous patch, I don't think we should special case it.
>> If the concern is memory usage, then we should look at reworking meminfo
>> instead (or using a different structure).
>>
> 
> A few concerns explain why I didn't choose "struct meminfo" over the two
> pointers "struct membank*" and "struct meminfo*".
> 1) Memory usage is the main reason.
> If we used "struct meminfo" instead of the current "struct membank*" and
> "struct meminfo*", "struct shm_meminfo" would become an array of 256
> "struct shm_membank", with "struct shm_membank" itself containing a
> 256-item array, that is, 256 * 256 entries. That is too big for a
> structure and, if I remember correctly, it leads to a "more than
> PAGE_SIZE" compile error.

I am not aware of any place where we would restrict the size of kinfo in 
upstream. Can you give me a pointer?

> FWIW, both reworking meminfo and using a different structure lead to
> sizing down the array, and I don't know which size is suitable. That's
> why I prefer a pointer and dynamic allocation.

I would expect that in most cases you will need only one bank when the 
host address is not provided. So it seems a bit odd to me to impose a 
"large" allocation for them.

> 2) If we use "struct meminfo*" instead of the current "struct membank*"
> and "struct meminfo*", we will need a new flag to differentiate the two
> scenarios (host address provided or not), whereas the special case
> "struct membank*" already helps us differentiate.
> And since, when the host address is provided, the related "struct membank"
> also needs to be reserved in "bootinfo.reserved_mem", I used the pointer
> "struct membank*" to avoid memory waste.

I am confused... Today you are defining it as:

struct {
    struct membank *bank;
    struct {
        struct meminfo *meminfo;
        unsigned long total_size;
    } banks;
};

So on an arm64 host, you would use 24 bytes. If we were using a union 
instead, we would use 16 bytes. Adding a 32-bit field would bring it to 
20 bytes (assuming we can re-use some padding).

Therefore, there is no memory waste here with a flag...
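
A rough sketch of the two layouts, using hypothetical stand-in types (not Xen's real definitions), shows the union variant saving exactly one pointer on any common ABI:

```c
#include <assert.h>

/* Illustrative stand-ins for the types in the discussion. */
struct membank { unsigned long start, size; };
struct meminfo { unsigned int nr_banks; };

/* Current layout: two pointers plus a size (24 bytes on LP64). */
struct shm_two_pointers {
    struct membank *bank;
    struct {
        struct meminfo *meminfo;
        unsigned long total_size;
    } banks;
};

/*
 * Union alternative: the two pointers share one slot; a small flag
 * (not shown) would record which member is currently valid.
 */
struct shm_union {
    union {
        struct membank *bank;
        struct meminfo *meminfo;
    } mem;
    unsigned long total_size;
};
```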

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:10:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:10:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473500.734137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq2k-0007He-2c; Mon, 09 Jan 2023 11:10:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473500.734137; Mon, 09 Jan 2023 11:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq2j-0007HX-WC; Mon, 09 Jan 2023 11:10:18 +0000
Received: by outflank-mailman (input) for mailman id 473500;
 Mon, 09 Jan 2023 11:10:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEq2i-0007HN-6z; Mon, 09 Jan 2023 11:10:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEq2i-00038n-4n; Mon, 09 Jan 2023 11:10:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEq2h-0000oq-Lg; Mon, 09 Jan 2023 11:10:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEq2h-0004ao-LI; Mon, 09 Jan 2023 11:10:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=F7ACoF2OZCLPMZoWzsjW2slpCkD72ziB/ykI8p9d6m4=; b=zPFcgQxQFCDl0T4TVcuKDVyOM6
	xA/QS+EON76UGznQqYrxWOOYJV5xmS7j8KkGcL2BS+vA0xtUdg/7ZxUNns3gjk1XcI3HhHRUBDGbB
	QKtFbJoQcIil0L7gY6Olx1EuzjwrOv6O6mX0KJr59BO/QiwXtkXMR7DZDbaRthvp46UI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175634-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175634: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1fe4fd6f5cad346e598593af36caeadc4f5d4fa9
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 11:10:15 +0000

flight 175634 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175634/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                1fe4fd6f5cad346e598593af36caeadc4f5d4fa9
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   93 days
Failing since        173470  2022-10-08 06:21:34 Z   93 days  195 attempts
Testing same since   175634  2023-01-09 01:40:59 Z    0 days    1 attempts

------------------------------------------------------------
3317 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505645 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:11:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:11:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473509.734148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq3Z-0007rz-HC; Mon, 09 Jan 2023 11:11:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473509.734148; Mon, 09 Jan 2023 11:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq3Z-0007rs-EP; Mon, 09 Jan 2023 11:11:09 +0000
Received: by outflank-mailman (input) for mailman id 473509;
 Mon, 09 Jan 2023 11:11:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEq3X-0007rP-DZ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:11:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEq3W-0003AP-VX; Mon, 09 Jan 2023 11:11:06 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.1.158]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEq3W-0005QZ-Pg; Mon, 09 Jan 2023 11:11:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=jLnrsMX/pCyW46cSp4wgU/f2FHoQ8Rm/20It4O8+GC8=; b=qwUKTfCHQclWFAHe3ZmCwvqcBd
	06+n2WR1UnIaPq7+t1BJoKYHH3a0mg6mer6lsuNnPI50YnMqLyt8adP0KZhOBNybboLiF+W4nmcac
	42me4AxxdRKNlIQMkywhoQ/2aJ4JnV9Y5h4O7gJIEMWBbjjolMO7i6qCCptQqFMSUf+M=;
Message-ID: <5531505f-b04b-7b70-68da-18d804790c25@xen.org>
Date: Mon, 9 Jan 2023 11:11:04 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
 <d77e7617-5263-0072-4786-ba6144247a4b@xen.org>
 <0475c058d0655f5b7b245f19b20c5ef0f14b3618.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <0475c058d0655f5b7b245f19b20c5ef0f14b3618.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Oleksii,

On 09/01/2023 09:04, Oleksii wrote:
> On Fri, 2023-01-06 at 13:40 +0000, Julien Grall wrote:
>> Hi,
>>
>> On 06/01/2023 13:14, Oleksii Kurochko wrote:
>>> The patch introduces the sbi_putchar() SBI call, which is necessary
>>> to implement the initial early_printk.
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>>    xen/arch/riscv/Makefile          |  1 +
>>>    xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>>>    xen/arch/riscv/sbi.c             | 44
>>> ++++++++++++++++++++++++++++++++
>>
>> IMHO, it would be better to implement sbi.c in assembly so you can
>> use
>> print in the console before you jump to C world.
>>
> I thought we could live with the C version, as we set up the stack from
> the start, and then we can call early_printk() from assembly code too.
> Is that a bad approach?

It depends on how early you want to call it. For Arm, we chose to use 
assembly because the C code may not be PIE (and even with PIE it may 
need some relocation work).

Andrew suggested that this may not be a problem with RISC-V. I have 
looked around a bit more and noticed that the kernel also calls some 
C functions very early (like setup_vm()), but it ensures that the code 
is built with -mcmodel=medany.

It looks like you are already building Xen with this option, so all 
looks good for RISC-V. That said, I would suggest checking that 
__riscv_cmodel_medany is defined in the files where you implement C 
functions called from early assembly code.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:13:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:13:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473515.734159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq68-0008VJ-Tz; Mon, 09 Jan 2023 11:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473515.734159; Mon, 09 Jan 2023 11:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq68-0008VC-Qp; Mon, 09 Jan 2023 11:13:48 +0000
Received: by outflank-mailman (input) for mailman id 473515;
 Mon, 09 Jan 2023 11:13:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEq67-0008V4-UI
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:13:48 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2073.outbound.protection.outlook.com [40.107.104.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae9c549e-900e-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 12:13:45 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7043.eurprd04.prod.outlook.com (2603:10a6:208:19b::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 11:13:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 11:13:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae9c549e-900e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dhbA+xF5F9F0s1xVFYRMqPNzCKXaC1w9xi2LIfUOMOoDpfGAKKEpLCpi2peH3Qx9yAWjd461nOTNGgwGRFiicZW5+VCGg3LLTVJLW4QYdBhsv0jFSCm9tQEBcYFb0oDy62XFhMgBl6kjskJ0z5W+dIZHWD9PsSb0REdbzZt0jIHJ4DXotY1EqU8kJlAVLwwl/ntoFBkF3o5YyRZ96cLG/7VJmiC/QZgio9PYkti/CjERGstwE1dhTPTO9SV4reMANy6Hy4/2mqb4IS38Diyak5YbB2pEwDwtX2CzbAOyZcc+UfWyumz40HWBV43a4UyuFJadppQWZmq0NMNiGYGxjg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=N7NX/w/IBF0btVz6DIEesT3r76F6LqndQWtRaYgCeiU=;
 b=AJ8oo8EyeJFy5//LG4xnC/POfw4l54Pqia4MCqO2I+8acB+w4kJ3slG7dxfcF2Y2I2JYjgbxIqc065DsLQlkDPv8O5poXzCDsvrEX9Bn8ZWj7ZPXi1leocp9DdntEGhqyOzGBO/rx8ONARn6UC5pQMcthSt1wQPAM9J9g/wntBStN2eSnGPZI53gNsL1Id/UX7w1qdTX4V9Vm9s6USqqMLuaqHEUb5SN/0gjRwadVzNGIr45x2Hfb1Db/fGXr+Hh4Z7WO7voh+uFhikapFekfv9bn6vUcP4FnKgkDoTeP4b3Ckf2YZqcJ4SjoWBWuDOCYiWpQKZF8lofBO9miyR93g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N7NX/w/IBF0btVz6DIEesT3r76F6LqndQWtRaYgCeiU=;
 b=iqqUeP7Z/2E+ZgCQG3tUTqyxqN3zhliLWpjsjWyyzDDKiR1P9oDCmaOtBbOBb482eM9y11kAzXdRsuOm8BOjilWzAfMUUxyW+k4NVH0QwmoS7bn00NiMmxDyIUl5K2dmVMq7syyI66LvWLJiVb588jWKqWngRJm/9nkj5wlNlFpqT6J5beqKY6L+7eph7trTTqpRtmEVSFCbhGrUhKM6EwtnD6E5TbwAwXSoNAM0tfqN7FcQC3WO3R1DoOz6vtf8x89UjI76ORf8pwpCzEEJnR+VgFJEBptGJIJG398BkssZWREwGWiNAMgAddfC/gAw2LVgZAuLGoDJTHRMqWwIgw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <dc720181-6f1b-a8d7-df27-2d90f64306d4@suse.com>
Date: Mon, 9 Jan 2023 12:13:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: LBR and Sapphire Rapids
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
References: <3a80b974-1ddd-2063-863f-8aff3453d545@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <3a80b974-1ddd-2063-863f-8aff3453d545@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0077.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7043:EE_
X-MS-Office365-Filtering-Correlation-Id: b80541d4-406b-44c4-150a-08daf23291dc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b80541d4-406b-44c4-150a-08daf23291dc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 11:13:44.0127
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wj4PbyWldqr1VVJoXCez8QvC16zLZAYQQUS9w+wUQKVy9Of2eyWscF5KdPA925MuHieKd2UDvbKLhUzi0cF09A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7043

On 06.01.2023 19:21, Andrew Cooper wrote:
> On SPR, MSR_DBG_CTRL.LBR is a write-discard bit.  There really are no
> model-specific LBRs, so we should emulate it as write-discard too.  More
> generally, I think we should apply that to any system where we don't know
> the model-specific indices.
> 
> I think this will be sufficient to avoid crashing guests on SPR.  Any
> software actually expecting to use model-specific LBRs would need a model
> table anyway, just like Xen has, and will not get it updated with SPR's
> model number, so for the (more) common case of not having migrated,
> things should turn off cleanly.

Sounds reasonable as a plan.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:14:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:14:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473518.734169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq6h-0000aZ-63; Mon, 09 Jan 2023 11:14:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473518.734169; Mon, 09 Jan 2023 11:14:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq6h-0000aS-3O; Mon, 09 Jan 2023 11:14:23 +0000
Received: by outflank-mailman (input) for mailman id 473518;
 Mon, 09 Jan 2023 11:14:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEq6g-0000aI-GE
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:14:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEq6f-0003Dl-RB; Mon, 09 Jan 2023 11:14:21 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.1.158]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEq6f-0005UR-LS; Mon, 09 Jan 2023 11:14:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=bpMEbxC0C/EchsSeg1PKvRuvBtnvzv7EgYhijVCgq4c=; b=A8mw0AwuLBabsFTTLabdZ5ukUO
	sF+L/DU/ZQyeFTZZhdOq+/b2XGBcYaspI0reac9JaXN8HfLsSY2KwO9xjhWxEfc1pyFNZmpIYmMla
	EPGHYD3wVf8+oeAvh98TSIxSElET2JAPfHrQuNdJPKAunFZeyra1pKTkVHmqjwQzTMc0=;
Message-ID: <82388ac9-784b-d13a-a0db-36d2fffb7cfb@xen.org>
Date: Mon, 9 Jan 2023 11:14:19 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 6/8] xen/riscv: introduce early_printk basic stuff
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
 <3f30a60729b45ee01adc2d4c0eec5a89bb083abd.1673009740.git.oleksii.kurochko@gmail.com>
 <e7e66208-5a4f-f37a-6368-29489e93aad9@xen.org>
 <c197037c48921c3bbfd797172829ffa5d01609c2.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <c197037c48921c3bbfd797172829ffa5d01609c2.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 09/01/2023 09:10, Oleksii wrote:
> On Fri, 2023-01-06 at 13:51 +0000, Julien Grall wrote:
>> Hi,
>>
>> On 06/01/2023 13:14, Oleksii Kurochko wrote:
>>> The patch introduces the basic early_printk functionality, which
>>> will be enough to print 'hello from C environment'
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>>    xen/arch/riscv/Kconfig.debug              |  7 ++++++
>>>    xen/arch/riscv/Makefile                   |  1 +
>>>    xen/arch/riscv/early_printk.c             | 27
>>> +++++++++++++++++++++++
>>>    xen/arch/riscv/include/asm/early_printk.h | 12 ++++++++++
>>>    4 files changed, 47 insertions(+)
>>>    create mode 100644 xen/arch/riscv/early_printk.c
>>>    create mode 100644 xen/arch/riscv/include/asm/early_printk.h
>>>
>>> diff --git a/xen/arch/riscv/Kconfig.debug
>>> b/xen/arch/riscv/Kconfig.debug
>>> index e69de29bb2..940630fd62 100644
>>> --- a/xen/arch/riscv/Kconfig.debug
>>> +++ b/xen/arch/riscv/Kconfig.debug
>>> @@ -0,0 +1,7 @@
>>> +config EARLY_PRINTK
>>> +    bool "Enable early printk config"
>>> +    default DEBUG
>>> +    depends on RISCV_64
>>
>> OOI, why can't this be used for RISCV_32?
>>
> We can. It's my fault; we wanted to start with RISCV_64 support, so I
> totally forgot about RISCV_32 =)
>>> +    help
>>> +
>>> +      Enables early printk debug messages
>>> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
>>> index 60db415654..e8630fe68d 100644
>>> --- a/xen/arch/riscv/Makefile
>>> +++ b/xen/arch/riscv/Makefile
>>> @@ -1,6 +1,7 @@
>>>    obj-$(CONFIG_RISCV_64) += riscv64/
>>>    obj-y += setup.o
>>>    obj-y += sbi.o
>>> +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>>
>> Please order the files alphabetically.
>>
>>>    
>>>    $(TARGET): $(TARGET)-syms
>>>          $(OBJCOPY) -O binary -S $< $@
>>> diff --git a/xen/arch/riscv/early_printk.c
>>> b/xen/arch/riscv/early_printk.c
>>> new file mode 100644
>>> index 0000000000..f357f3220b
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/early_printk.c
>>> @@ -0,0 +1,27 @@
>>
>> Please add an SPDX license (the default for Xen is GPLv2).
>>
>>> +/*
>>> + * RISC-V early printk using SBI
>>> + *
>>> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
>>
>> So the copyright here is from Bobby. But I don't see any mention in
>> the
>> commit message. Where is this coming from?
>>
> Could you please share an example of how it should look?
> Signed-off-by: ...? Or "this file was taken from patch series ..."?

This depends on the context. Do you have a pointer to the original work?

If you are taking the patch mostly as-is, then the author should be 
Bobby. The first Signed-off-by would be Bobby's, and yours would be the 
second.

Otherwise, you could credit him with "Based on the original work from 
...". A link could be added in the commit message or after ---.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:15:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:15:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473527.734180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq7b-0001Bd-F9; Mon, 09 Jan 2023 11:15:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473527.734180; Mon, 09 Jan 2023 11:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEq7b-0001BW-CL; Mon, 09 Jan 2023 11:15:19 +0000
Received: by outflank-mailman (input) for mailman id 473527;
 Mon, 09 Jan 2023 11:15:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IK2Z=5G=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pEq7Z-0000Uj-P8
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:15:18 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2076.outbound.protection.outlook.com [40.107.243.76])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e4565d2f-900e-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 12:15:16 +0100 (CET)
Received: from MW4PR03CA0101.namprd03.prod.outlook.com (2603:10b6:303:b7::16)
 by PH7PR12MB7869.namprd12.prod.outlook.com (2603:10b6:510:27e::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 11:15:12 +0000
Received: from CO1NAM11FT020.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b7:cafe::f7) by MW4PR03CA0101.outlook.office365.com
 (2603:10b6:303:b7::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Mon, 9 Jan 2023 11:15:12 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT020.mail.protection.outlook.com (10.13.174.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Mon, 9 Jan 2023 11:15:12 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 9 Jan
 2023 05:15:10 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 9 Jan
 2023 03:15:09 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 9 Jan 2023 05:15:07 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4565d2f-900e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K1/+2q7QWZHrBzeI83U4dvkmjyRXXPMNuyccV+cP+S9qq5zXz9s8TolbEL9HHpYcBeFQwAFOGJJJ9rwBwRllSkTvrDp9CsFgoz8RIePboDZyn5yPyU+b6OlL6xwJzBREVVKhPB9tta/o5SOueEsgLRTT8LG3VfNWhzTSOdG5ByRorSqJIp3c74mQZcG5nP7wWcHWpiJdkzygbnbHD4TClFLr0aJEIRBqceMUOjoWrFRQVJdMohuE3cZUHN4tsGgPk92G0o7B7BAJX27zOMKhCHgWQIeoOm0V9/N9A21IHk4799rkG+GfnpNDpJZVHKa2rdRA1JTblmDSA/qgSWJ8gA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SxPfohhAy9Pfz20exhjw8wrurETc87GXeKs0KT+jF6s=;
 b=PovTWRX9ik1Ysh+NbGNPglzQhIaH+u8k+vRQhw6GMidSebVDWY8PrD81TZ3V5HAChTqsI3/ojp6NJ3w6HXbw8rqKiS7sjilQ0SRIFdtoa7cVUoGwAn9TlJrYsFmayu4xnc5A9NiVdfG1HUecAHXS2aIXX1nJVYIZ6me3UyjJVmY1V1GpwRWFGHwUxidp22sGKy6pwCCBNaBISTGZhGcDKlCJXE/Mgmh0BdasrITWnh71cHbodes3RgIv+Nt/6tuEq1S6GgQR+pnHE2+ccQ/3rtNzcAn6bL2yMAfwVvx6ji7O/Ip2cnrkg6C+CL/WyL5LHznfggtXvIxpMmAlvtMNVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SxPfohhAy9Pfz20exhjw8wrurETc87GXeKs0KT+jF6s=;
 b=exYPtEsf6nAR5X7o4U9Gi/Ja4Oost7tRYjFl5rBFOwSMEJlKD59PfGfxMvLYmIG80V5OvXdLT1Ss5Cg33Do6h3Q2zBo4dUO6JmF2IR4Lbo+mcMdFovOn/ZBZVQYbaR8QSSSktbLdIPyiNccTNd0OuBbNdoFlZuIByluIPQZAhbw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <6373383d-d6d3-3d92-b09e-6434c5b5d15b@amd.com>
Date: Mon, 9 Jan 2023 12:15:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] xen/cppcheck: sort alphabetically cppcheck report
 entries
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, <xen-devel@lists.xenproject.org>
CC: <wei.chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <20230106104108.14740-1-luca.fancellu@arm.com>
 <20230106104108.14740-2-luca.fancellu@arm.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230106104108.14740-2-luca.fancellu@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT020:EE_|PH7PR12MB7869:EE_
X-MS-Office365-Filtering-Correlation-Id: 5e715692-6bc9-481b-1535-08daf232c6d5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	W9TL+z4cfEa+hSAGxMhbrV7dQZkzoWSohH2eFzkT16VHHCe6vIDfIigIQTjy/Sz8+iq8ym6AAr+At8pyoq+hCUVwZR4eGfqcee9ye8UUZk/Bw1iLhfrK7mwk14SjHp2jw8b1rGaaZdOSao6m1cCjPgPoZ4Yjc2Hw6rkG2SneExZZ5Lng757PVBzYH36/EUc+E3+3UGEplF+4hPNMwdsfJ9OkycHAdxfI6B5pyxZ5vqovuFTkDk1C8IHDScw2LTjg2IzTg0/aHsjMCAuQmuFnohnbrxVQzM1ki8VMOe2W1ZQ0pft+GA6y8Ei/Fxgne6+2XW3E2KCQv+Yis3VkrMU8TAeX4ldIBMHXo5K5uQEB8kOHUMZjtUKQwL+11Vo4ZEu4D6ZAgpMF47zlyk1PjfNY9s1MyGLOpgn0FOibwDYDEgjvyOe3JsLuJ+qK6XnV8Dtq+F2BctWmFHo9Io3fpXyRbkXewErIiltuIgO+zsL3AJdkT8EsvH8eN6D2CRGzr6XFJC+0OUNE5eJ3srcKsIH4Al20lICyE6J0G9VLa8MCTlDBv/x3NgdIx/QvbHNYzsnPW1xnR1AwDsh9dG0YrLzjDzhM5ytlOQgeG0QH71mvLsAK6FR1G2o+irLaS3WVeJGP3ZV0pdT0w9543vRFLeU499LiAzkzUCZiUryx6Rga+jf5IbxikpcEh59y3+joVyVDpDSmOswaXrcm/IY+UzfuhKVOakC0BmhpRIPhoQrEIcH+zzKiq9wRlFeR0zibz85xyu/qeNGO5FXmi6se+NZ4HA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(136003)(396003)(39860400002)(451199015)(36840700001)(40470700004)(46966006)(36756003)(86362001)(2906002)(356005)(81166007)(8936002)(5660300002)(82310400005)(44832011)(41300700001)(82740400003)(36860700001)(47076005)(426003)(31696002)(110136005)(70206006)(70586007)(54906003)(31686004)(53546011)(6666004)(40480700001)(186003)(40460700003)(8676002)(26005)(4326008)(16576012)(316002)(2616005)(336012)(478600001)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 11:15:12.2924
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5e715692-6bc9-481b-1535-08daf232c6d5
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT020.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB7869

Hi Luca,

On 06/01/2023 11:41, Luca Fancellu wrote:
> 
> 
> Sort alphabetically cppcheck report entries when producing the text
> report, this will help comparing different reports and will group
> together findings from the same file.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  xen/scripts/xen_analysis/cppcheck_report_utils.py | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> index 02440aefdfec..f02166ed9d19 100644
> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> @@ -104,6 +104,8 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>                  for path in strip_paths:
>                      text_report_content[i] = text_report_content[i].replace(
>                                                                  path + "/", "")
> +            # sort alphabetically the entries
> +            text_report_content.sort()
>              # Write the final text report
>              outfile.writelines(text_report_content)
>      except OSError as e:
> --
> 2.17.1
> 
> 

Having the report sorted is certainly a good idea. I am just wondering whether it should be done
per file or per finding (e.g. rule). When fixing MISRA issues, the best approach is to fix all
the issues for a given rule (i.e. a series fixing one rule) rather than all the issues in a file
across different rules. Having the report sorted per finding would make this process easier. To
achieve this, we could pass a custom key to the sort function that takes the second element (after
splitting on the ':' separator), which is the name of the finding. Let me know your thoughts.
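For illustration, the custom sort key described above could look like the sketch below. The line format shown ("path:finding-name: message") is a hypothetical stand-in for the real cppcheck text-report layout, not the actual output of xen_analysis:

```python
# Minimal sketch of a per-finding sort key; the report line format used
# here is an assumption for illustration only.
def finding_key(line):
    parts = line.split(":")
    # Sort by the finding name when present, else fall back to the line.
    return parts[1] if len(parts) > 1 else line

text_report_content = [
    "xen/arch/x86/mm.c:misra-c2012-8.4: example message\n",
    "xen/common/page_alloc.c:misra-c2012-2.1: example message\n",
    "xen/arch/arm/mm.c:misra-c2012-8.4: example message\n",
]
# list.sort() accepts the key callable directly; entries end up grouped
# by finding rather than by file.
text_report_content.sort(key=finding_key)
```

Since Python's sort is stable, entries sharing a finding keep their prior relative order.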

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:37:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:37:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473534.734192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqTF-0003kD-DD; Mon, 09 Jan 2023 11:37:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473534.734192; Mon, 09 Jan 2023 11:37:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqTF-0003k6-AJ; Mon, 09 Jan 2023 11:37:41 +0000
Received: by outflank-mailman (input) for mailman id 473534;
 Mon, 09 Jan 2023 11:37:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEqTD-0003ju-I0
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:37:39 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2072.outbound.protection.outlook.com [40.107.20.72])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04879000-9012-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 12:37:37 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8591.eurprd04.prod.outlook.com (2603:10a6:102:21a::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 11:37:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 11:37:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04879000-9012-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dX8oNcqXQ0REezT1XQaLyhmHRlRZmUcmJkqy8ZBwwIA1COjnCrIDoNXnggbF7yr4GIgwjEmzJSnc/etlxmEAokVuqXtAkyRmoNO32C+wYoEDGQnUhFW8W7VUMpaQ0+rOCqmk3iDwP6nEFzUfhbccalajIuzJUYKve/IfcPUIkh68Xh+iqz9Yx32oWdUkFL5H99S5DB+Jt6aZ9kmWbj5DlgfGCOxxG94/1gLhlgPktPwEdJx4JsgNr8ORaGV+oCNh24Eb/denaXaaz7u6Kof2/ul0h3wfkFlmbXWAn8A7qVez18WSyCd+O0VXDlEILeZEEpaIPPbJc8gWWQzETVELRw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qeasBoJ8JVmoBzjjKM2kVe3eIn/F5S+BWSxWiQtk0UQ=;
 b=gFU98fARG9RCgLOc5pUIEISQMSD6hDjcp9IcipgolaoRpk8i6rsKxOb+ZQKO6ay345rBIWgNbOaKcjdGogguubZvcRWMHMVPNiOCqg0zp+2QYsT9Ph/7BVf29zCRbpBC5aXdKFFKZYIX0hEaZRLb/wV4ncdBHYtrvUayGe38c+eittMi97QM3SMgVDeb8Q5YjR+xdLPFcqM/stkVRQg58uLGFzZaHVvKMN9ha8ThScYI4gHwj98R9Fs8YQbuJtbp7rTq0J98LrtaSPcW6Ka+CVJCsV21Nkt/70VtWgHhYEYkBMKNleDqu1wtXAStpRJ/QyL5LhwEgthr/Y+bO9NREw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qeasBoJ8JVmoBzjjKM2kVe3eIn/F5S+BWSxWiQtk0UQ=;
 b=UkzOA3Krzp26Ducr1JivXXE7/2UG4nBJ1xwl8hw3Q5AVnm8HD1eTksZPkXsH4A971FVxp5ADNtPbESyq5sIEJnBicIcEEdr77caaO50bQoT7cNL/BVxm77ZBxk12XVRlLbH/q4avMtnGRQCUcRbltaxP7sWW/zvPtRFkKOXKVUGV2iPwQok5+HQDaGbXHbb1QFUcyEWv7W9VYfLbUJOAi3UMa07Bsk8ehr+s2cfyqOQ4As2gX/El5RoeJJGkTdP1npMw99hBjXtJDDX4m4RJ0XqdaWu03NUtl6P6v6fzZwKHVqSW9E3nyL2EPCRvPsFNx3YFHzHuWjDSHhWRlYNO6A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2b46602a-f518-c191-946d-3b343b46dc87@suse.com>
Date: Mon, 9 Jan 2023 12:37:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v7 4/4] x86: Allow using Linux's PAT
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tim Deegan <tim@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1673123823.git.demi@invisiblethingslab.com>
 <9fd0360dd914d93dab357d16b46b4290e6119d30.1673123823.git.demi@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9fd0360dd914d93dab357d16b46b4290e6119d30.1673123823.git.demi@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0183.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8591:EE_
X-MS-Office365-Filtering-Correlation-Id: ce3ef517-1964-4f8b-431d-08daf235e7b0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RpZyaufuPRObXfmKbM/ZsNaiBh7jdYG9GvQOhzKU9REhq0mLCwrPGD9fm/nLRO/jGZgNF1TF5A7kNodWRktjND0BweoTMSNIi5h/RCnBB52o0wznLa4a/xMfQ2yQsWCUbQ5Qfo5q/5rSR3GjwUnPTSijpHBQsL0DLqC7JDsyLk/ITtCCUpwJbGDQgqLvnasyvX0NrMyO4OSrjUZREHmCyf52bOq4dOEmATHIGLx/wtAJAmR06bss8oVlUQYTJx6+RjVLlx1sbtzVZ8ZeilBVRw+WHb2yIys/xjzHbKWvK3c5/EQa944RmWIkNwmsbi2u19Dv8yb8/91gLs1jckrDfVDMTdqQFEukhhJZNYjsa5shIEFqHr23YYH0jrabSyQiOK4/9rlJuC9HdSHALcnF9inVUW1/BkqmtUgMr1Cmfcb41tDqL8DmnIiBlZhsGTK3Mw08k/xORfV+BdlrJZsT9krZ0G6egusoPT0nJWWFvQpyi68S7YLtgxyygA1ITprAFh9svT2UVcMFGccr8JaFZdfVVC25vPGai3yfVA5xujYY6czxEwOzEma6/MNmqs6T+V5HkvZ4SHnqPAwK4IC555Me3+1GxiXoGQefg+1PIgS6Eyz6BMFwgmkjDgVxk/FPgUzo1qYgBeLhVIgskjoilVe8clAdnX1Sb+5wVIWfSOpNEYl1lZyKdPhOlQXP6LdsMrz1AN/DlCCzZsIl1+fVHXmWWSIhE4yLhs0uDhnljlg=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(346002)(396003)(376002)(136003)(366004)(451199015)(6506007)(38100700002)(53546011)(31686004)(2906002)(478600001)(6486002)(2616005)(26005)(6512007)(186003)(5660300002)(316002)(7416002)(83380400001)(8936002)(36756003)(86362001)(31696002)(41300700001)(66946007)(8676002)(54906003)(66556008)(4326008)(6916009)(66476007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MG84UW1aa2prc0dlYzFjWElSdkZYTXFJZ3llRi9GV1RkYnhDeWdseVpnbVdk?=
 =?utf-8?B?bmUrZlVIdytHSVRxTCtnT0h3MmJvcHJjOWY0TkpYQXpCd3JoVmRqaXQyTWVp?=
 =?utf-8?B?SE1kbWNEU0xpb1p0NW5iV2pkVDR6ZU4wNUdzSGorYUJXTXZ5Vm1mSUd3Lyt4?=
 =?utf-8?B?VXNvMUppVUE5NjAwc3IzN0M3TnRzbHZtWGtzMlBtQ1ZoUUNKbWYvdHgxbjNv?=
 =?utf-8?B?aHNWNDRac2pjQnJWNUNIaWtIWk1YLzV3bGtlN3p4eFkyNGJtS1NZMEJscWcy?=
 =?utf-8?B?S3NEK2RvUTJKN0xGeHdHWTdiWk5kV3JlaGxpV3JjbkZOdDNTUjdkYWZUZXpk?=
 =?utf-8?B?OTEyNTBEQUpCcnZRYW5HWDgxUWZydVJpcUFVb3REYVpUelQrMlJaVkVrdDNo?=
 =?utf-8?B?cFRaQ3RZUytOejZRdU1NTzZ4ei9aaW95VHlwSDV1cTBkVVUwTjQwZXV4K094?=
 =?utf-8?B?OURuOFk0dTVzNSt2NVNYTzJDbWRuQmdKNHRMWitsNmJZU0xDbDR1dnlIaVNE?=
 =?utf-8?B?Nnc1SXRJVENGRGRoOElmWTAzbm9aWThZOEF1SzN5a1Z0dWxBY2ZaUjF4UUFT?=
 =?utf-8?B?Y3BIbzRJYkFoRUFRaW9HOEp2MUVYZjJIWXoyMEQwMG45c2FwNitzd01tUnR2?=
 =?utf-8?B?QUY1bDk1Q0RrVXJBQkhEUGtWNGpGa0g0ZVFqT1hUSzBGby9yaFpGMGpMWkZ1?=
 =?utf-8?B?Ty9jL2FoK2hBQ3pMemxoSGNQenIzSlBycDkzSWZhK05YU0NoRHlXcnhUYnNv?=
 =?utf-8?B?V3Y3MEpoaHV4VlZyT2JLcHdHbVhEdGhJa0t4NVdPVHhHNkxGbzZETTRpbGpG?=
 =?utf-8?B?bWFReXp6bjBPdkVZQmRadktIV25IL3BsLzFTL21BRkFjVXI2UzVpR3lTS1FE?=
 =?utf-8?B?dk0yM1dFdG9MVmtaWTV1aDRXOVNxSlZ4ZG5ITXZlcUdKMFd4bVY5c2lDQVZN?=
 =?utf-8?B?RHE0V0srRVFEMHprMEdLYUpmTnJMSStaTXRHMWZRdm4yQWF6ZzQvWUYrRUtO?=
 =?utf-8?B?U0NJTWJ2eWhTeVlFckRoR3BPbFh5SFJRS1IxeVBpRkcvb20zbFJ1SjJXWXUv?=
 =?utf-8?B?VmVSSVoxU21ZYVBUYzBXVVV4MWlvREM5b05abW83Y2VJY1kyWVJjellnNXFm?=
 =?utf-8?B?U09haW5vQWd6YzFMakl1SmZFdXRhL3NLQTg5YmtGL1dnRE1iV0k2ZVpjbEVH?=
 =?utf-8?B?aThuSng1VjBsR1F6MEJFUEcvTk5iSzFVaG9kUmFNQmVmVmFwNlN0L01LZklr?=
 =?utf-8?B?ZHI0YWRRczUxU1kxTWlwTXhSOFZBbnJ3aFZqWVFJeGNwREZHNW0vNDRqd25k?=
 =?utf-8?B?ZGZsMHFEdWtHR1dnK256UkZyYksycUpVdTJzZGlKU0lBNS9lL1AzUGJ2MnBE?=
 =?utf-8?B?NlZBMHhiLzhSdWdURUhpd2k4Z1BSZEFpMlc4YWZTd3hGVjRRWjZaS25JMTZF?=
 =?utf-8?B?dmcvUmszMld4alFNZW9pWlgramhVbkpkSmpUdG5SYlUybVVldm5uU01ocVdG?=
 =?utf-8?B?NEQrbC93V1lVYnZ3NjdmK3ZGM2xTRlpZU1R3eVhsN002aExtMmtiS1RhQXJa?=
 =?utf-8?B?UE5kZXVkWjRMdFBjbjZPYUhYQWRtSjBVSFQzSy9YRDFPa3RveDlYLzE3dklx?=
 =?utf-8?B?QTZ1NnBRZTFIeGRTN2ZUaFh6cndPSVROZTVDeER0RzdEVEhaTzBQbXo2QlRu?=
 =?utf-8?B?ZW8xZzluRDV5bmZTZTJaNjBmenJiV2xCbWl6bUF5MS91YkptdmhtWnpVK05W?=
 =?utf-8?B?Mm9MZzUrbmI2ODYyZTJYc056Zkxjd2FJVXIrTmVDS2NjNUVGYTJjenU0Tzl2?=
 =?utf-8?B?bXh5cUhIc3JWYnI2Mk96eG9mM3dVOEpyVllrUGNhUU1rZlhUMDRhYk01bmRD?=
 =?utf-8?B?SWJ4YitGZGhrREk3dTJMZ0Vqb0lFYkd5SzBWQVVWT0JWM3JMSGRDSlpodTdW?=
 =?utf-8?B?WDVTL2RQSDY0eEd3QVpUOGlCOCtCaW1mcXV0YlJKYjNHUlhGSW8xUVRva1V0?=
 =?utf-8?B?ekU1ZnM4Z25mWjNaclRqRUIxRTZrb3FpYkFQazJ0UGlsNEhOUlcydE93bG02?=
 =?utf-8?B?YndvTlN3WnVDMGEyZXU0cGcwYkZLdk5icGRFcS9qdWhIbzNEK2xYZU9GUzdJ?=
 =?utf-8?Q?9DJxA2dTt4dee2kbn4gW5iFj9?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ce3ef517-1964-4f8b-431d-08daf235e7b0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 11:37:36.2498
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JH3Dmsl19TbZ1OBesapdipF0LHRVO+Mlcs7YDyZEBGnBj6oplof87T9e2popt4YkgTv9Ao58theuc4vzVlixbg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8591

On 07.01.2023 23:07, Demi Marie Obenour wrote:
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -227,6 +227,39 @@ config XEN_ALIGN_2M
>  
>  endchoice
>  
> +config LINUX_PAT
> +	bool "Use Linux's PAT instead of Xen's default"
> +	help
> +	  Use Linux's Page Attribute Table instead of the default Xen value.
> +
> +	  The Page Attribute Table (PAT) maps three bits in the page table entry
> +	  to the actual cacheability used by the processor.  Many Intel
> +	  integrated GPUs have errata (bugs) that cause CPU access to GPU memory
> +	  to ignore the topmost bit.  When using Xen's default PAT, this results
> +	  in caches not being flushed and incorrect images being displayed.  The
> +	  default PAT used by Linux does not cause this problem.
> +
> +	  If you say Y here, you will be able to use Intel integrated GPUs that
> +	  are attached to your Linux dom0 or other Linux PV guests.  However,
> +	  you will not be able to use non-Linux OSs in dom0, and attaching a PCI
> +	  device to a non-Linux PV guest will result in unpredictable guest
> +	  behavior.  If you say N here, you will be able to use a non-Linux
> +	  dom0, and will be able to attach PCI devices to non-Linux PV guests.
> +
> +	  Note that saving a PV guest with an assigned PCI device on a machine
> +	  with one PAT and restoring it on a machine with a different PAT won't
> +	  work: the resulting guest may boot and even appear to work, but caches
> +	  will not be flushed when needed, with unpredictable results.  HVM
> +	  (including PVH and PVHVM) guests and guests without assigned PCI
> +	  devices do not care what PAT Xen uses, and migration (even live)
> +	  between hypervisors with different PATs will work fine.  Guests using
> +	  PV Shim care about the PAT used by the PV Shim firmware, not the
> +	  host’s PAT.  Also, non-default PAT values are incompatible with the
> +	  (deprecated) qemu-traditional stubdomain.
> +
> +	  Say Y if you are building a hypervisor for a Linux distribution that
> +	  supports Intel iGPUs.  Say N otherwise.

I'm not convinced we want this; if other maintainers think differently,
then I don't mean to stand in the way though. If so, however,
- the above likely wants guarding by EXPERT and/or UNSUPPORTED
- the support status of using this setting wants to be made crystal
  clear, perhaps by an addition to ./SUPPORT.md.
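The guarding Jan mentions could take a shape along these lines (a sketch only; whether to gate on EXPERT, UNSUPPORTED, or both is a maintainer decision, and the help text is elided here):

```
config LINUX_PAT
	bool "Use Linux's PAT instead of Xen's default" if EXPERT
	depends on UNSUPPORTED
	help
	  ...
```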

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:41:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473540.734202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqXI-00058u-VA; Mon, 09 Jan 2023 11:41:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473540.734202; Mon, 09 Jan 2023 11:41:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqXI-00058n-SX; Mon, 09 Jan 2023 11:41:52 +0000
Received: by outflank-mailman (input) for mailman id 473540;
 Mon, 09 Jan 2023 11:41:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEqXH-00058h-Hv
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:41:51 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2052.outbound.protection.outlook.com [40.107.103.52])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b0dc2f1-9012-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 12:41:50 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7143.eurprd04.prod.outlook.com (2603:10a6:20b:112::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 11:41:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 11:41:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b0dc2f1-9012-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dPaYxXm471tiT+NlC/PqDcVZ4B/n9IoQmquFif2yYGhkcEqxT8JEdlQjxG695tpsfp1BwxpKTd/9sT/1UliBX01SBAHKXn5s/NF6zuOBFcRd0/wRNHtZRHQLccXBzf9e9CP7TF2QUPUdPzRl6ONdi/H7mx8zDKVIW09xJpKa8A9DnFJASaoShQDfdqt6QJM/+Q/kvKsvhVkM/5IiZ0Ne5avQYWDnE3v0SvKwAduMqzhAOpsurlZygszHDVCE4cW/t0qA4cH++nO/M0hxKaY+eke97kEQDVd7qqZfbcFY3W0m1FjFhX5wX+4HdI5/o/uCV9nIIUWIqtMk6F/3Dx3X9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=60wc/vwFvoq+aOfiArChDicj2kOicqX9zYbvV3rdDSQ=;
 b=Ws1F3wXP2F5oHCYRWD76Xn05vww4AO6TsE9scpoY/9LIqI0dvXiwKV/LJIm9GX4IQnwNsudSOx+iLg2PdJNGuGlreqxZ6v0zP6xIqECN/WcytVsCjQf6CPfHDHpnczfncvDRkWklbtdESXRN2ZxJppIyqZVJp/tndsreblVi4Zbepxs2Tg27yaqsnXsOMKI3MbI5br4xo30Lg/hHPnGo/COFAArbGmveKUSCDsYMO5JF54fl+vvNMRF9mKYkFEqY3Um87WvzROOQX76DQ6Nq9eeO6gZBk4OtQG++8qHeEk3xPED3+l1jnTsiiD0O5rBk8edbaZfG9uRjoHwbzwgl5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=60wc/vwFvoq+aOfiArChDicj2kOicqX9zYbvV3rdDSQ=;
 b=mjMeHYn/NggfvIBewv3bqrSwKJ93h9t3cHg1Z5VVeeB5+1wrkIRDyA1uzj8SKW1hQ6PkRff+STgAnk6Jgo+f63PhzL4CqAnBglimL3yLE0h9OOLzQtd4G/XXE4C58ugQWeAKiF2nPV7z9qY8W4oVFR547Rx7+RRPzjhrJs8Lm4cPoBvokzeGShJKXljeGWslmBJsj9aLrY601gk+KwjBDi7zjMsqPN7ZbqSn9qD+T0gxkHZnXSwt5ZoyIVS3IpQPjn/noeRWfthuFSJ1MiTInuAKowgT7OEFsoEvJDD5X3QgT3Wuvv+jIYlFXOZaApI3JNR8pgNaAmnn8yXWaB8cOQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <af7610a2-11d6-48e2-6bf0-762525187612@suse.com>
Date: Mon, 9 Jan 2023 12:41:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] xen/cppcheck: sort alphabetically cppcheck report
 entries
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, Luca Fancellu <luca.fancellu@arm.com>
Cc: wei.chen@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230106104108.14740-1-luca.fancellu@arm.com>
 <20230106104108.14740-2-luca.fancellu@arm.com>
 <6373383d-d6d3-3d92-b09e-6434c5b5d15b@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6373383d-d6d3-3d92-b09e-6434c5b5d15b@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0114.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB7143:EE_
X-MS-Office365-Filtering-Correlation-Id: a7797b28-b92e-4564-f83f-08daf2367de1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	eh3AeoVXIcMhB5QFLvsXNGzXTRNpAnPaWoWx60pp8LqbdxrYPucCLbp/PFiS/1T1ZcQ3ge1FWxt4z4lsErazR/BW9cV0okRhaqFadVGaYiF8C0VUhP7P++LUYyq0OTO89XN5TerlCgUO4T4CqKEFx1HLPRM5XuQhAbCuP3yn+2Qp5XZNGCPyXkMbMgx4+SHkFIvfKgLYvqBjScv8g2m06Xk7nbwWrZCmTJ+ElffjcGyL5ICDZOVV3oQ4lPwDOroe2W+wjzQkeXxRer7ToNrVy5QQ6l3YMPtE17MeHWPbyW2TgTD6NbntRn3L1x2shuzMNOK9djgelrlVOXq5XyeTasiJ8v55SQn5z4zs3bTkSgsrg3PtbrqDsq6rk93gYFb8GGPtWSkxwUs+SC9Zjg3tDcfUyq5ofafuAkVU1fC9STsedywPQ+VWvVlQVAB4xAUhcaLBLGHg++Z1FSsJO8cannB1jPV3PeYbrZ+jY4VK9ZepDOTkdbOuKc+MNkcZpvO9r+cOTHLdpWeMOV3uP+m0d5lSFK1vOfpolGLdHR6zzG+lY40uIuYz4DdQJDFwpmmKgDCJEmxiKyMbC68z0Rix0Vf4EaiiLPCDWElmkiSd7pHKRWs+o/SkLq/YM1YH252eFii1M3RriQMRH7ErpmzTNMkkktaZUi3r+6JpYfbsp6nEnxG7K/CATRKt97fGcUXV23bwPN40D2e8Zvv4l/ZBK0DfzdI/gfjPNH1jeVVbxlQ=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39850400004)(346002)(376002)(366004)(396003)(136003)(451199015)(6506007)(478600001)(53546011)(6486002)(26005)(2616005)(4326008)(186003)(66556008)(66476007)(6512007)(8676002)(110136005)(54906003)(316002)(86362001)(38100700002)(31696002)(66946007)(41300700001)(31686004)(36756003)(5660300002)(2906002)(8936002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cWRqS3hQdkEzaVRpZ1pRcTREZzlVMFpwdHRkMmpHSHBTMzlWRi91MU1JdlZP?=
 =?utf-8?B?ajBVWTV2bU5JQ1FaVG9yOXUxZkRZUnJJaVlNK0lveHlRUDErT3BLbVAvWC9R?=
 =?utf-8?B?UGluckExWDl3VVMwb0hqN244VEROck94MVU3RzY5cE51bzZUQSsyZzJ4eEo5?=
 =?utf-8?B?eFdWMzhkSkRtclBXQ2FqQXVJTndpQTNpa2hlQUVnYmZDRWlPbW5vcFpDNW00?=
 =?utf-8?B?WU1yMkFWdHZMUGRzdU1wVGp2R05lblBab2I2aWNveVVKYWFORXVkRVoxZ0Yv?=
 =?utf-8?B?MHp6UDNDUEc1Z2JjeXdGRHN1bWVoeHR1cDMyRTI4eGhuUDY3YXdocUs1dW5r?=
 =?utf-8?B?amlwZUlmcVpPOEJ1SEFyMlozaTNBS2E3V2Q3Y3VRSnRZQWRMdGN6d3BUWXpJ?=
 =?utf-8?B?VEFOVHZjVzQxM2ljS0Z4L29nRUZVcHg0K1R6M25MVnJHeWx3RUdIanJ6QXFW?=
 =?utf-8?B?cVpZYzQvVksrd0hJWkNaM0owWmdOQ0xsdFptemlJdHVTWkZQNUdwUS9kRUlV?=
 =?utf-8?B?R1ZnRmFiWkpqUjNNamVPQ1h1WUwremR2Tkt5VlN6WVpUMUtzaUI0N2xTY1p2?=
 =?utf-8?B?Q3Z2Zk9JalJSREpIK2x5MjYyZ0w4MVl6ZzY0MFlZR0xjcGxudkRpc0R3ekE2?=
 =?utf-8?B?MENJc3REbmFUZGVuTmJJS0IvZUZNR0pPNGw2emc4Q0NESmRKNGlPZ2tSR0lh?=
 =?utf-8?B?QUJWOWJoU1QzbDYxY1J1Y3dZdnRrcEo4UG02dnVpem9vR1NEM0w5RS9TYXgz?=
 =?utf-8?B?UzdHUXNiSzdOMVl6b3NoRXZ0VjBZRmlucDdLTzZaWGpwSzZGNWlEeDRqb3VI?=
 =?utf-8?B?eEtIT1M0cVZia2NqZVNrMVJGWkdhdEhLUWQvbFZUYTVJRGVKMVBzbzB4clpO?=
 =?utf-8?B?bHc0NXpvZENqNDZuMWlCZTZ1blJITXA2c0JGMVY4YUk3ZERBaTlNdnJTTkd5?=
 =?utf-8?B?d1ZWUFJvSFdGQXE1OE9halErTTMvTS9kUGdDcjExc2VMTkFIK2RmMmxuSUwx?=
 =?utf-8?B?WkNMVFFQUXJZNHRNQkJ4SjJQSFRqN2RGUlVNYWJsVEFJOEVUQVNta1A3R3gw?=
 =?utf-8?B?MjhGb1k4NlFjZ0dzWTBoOWs2WjFCdEVtQ2VyTVZYcnFVdkMzbDRjMnBuSjBF?=
 =?utf-8?B?a0FnZ3FaVE9wQU1EZ3M2ZEkxNVloWENIbHZOMzdXSGluZGJvQlFFZk13S0Jo?=
 =?utf-8?B?bU1qNEMya0VzcDBLUkh4cHlGSkxCUmt6ckVraDluWjAweFpacHR4cXBrMDY1?=
 =?utf-8?B?L0dkV0ZzNXA2bnNoY3NNcHBBRE1WRkVRY3h5OG85UzZTT3g0bTVJQmM4eUxO?=
 =?utf-8?B?ZVdxdlJTak1ESTVCblZSMmtaRDBnRW1zNFJNK0hHRU1CZlhsUUlxUEM3TEg3?=
 =?utf-8?B?WUZWUjZZTHRsM0hBaHh4Y3dWb1B5RjhYanJwTkFETGpkUVl5NG9vRWhGMUNl?=
 =?utf-8?B?ZDlPdzlVUXJRTStYOEZ3TWk5cnFEcXkwUThtR0tKTHBSVkxSeFVCaE42S2tD?=
 =?utf-8?B?SmJwNWNIaDY0NkQ2cVhQVlFOTXZkR1RpN2JDM3ovVm82TnpTNStGNFdYazE5?=
 =?utf-8?B?Wkc4VDdjSk9xamgvd0JMYW1aKzVha0xrWTNNVU5Jd1ZnUlZ0ZlVtN1RNWXRr?=
 =?utf-8?B?eCs3ZmhDeGttYUx3SndmekMwY1BUb28yY2xIcTAvQUYvenh2OHF4WldUNEhW?=
 =?utf-8?B?NitGbHlPNi9YWkxkcDRuOTEvVUFXZVpJTXZsKzBuaEg1SDZtS0tWc3lEQUk3?=
 =?utf-8?B?c08rV1BjSWZTN3FQc1pYaVZrRzlkSDZ5QlVZQ3ZLVlB5ZjQ2S1J2YmpLYUs1?=
 =?utf-8?B?bGV4TGJvQS9ubXZNbTdkV0hRL3Azc0czSk9yVytHeHBEWG5lZ1ExWDNkNDZL?=
 =?utf-8?B?M3lNbi9OOUdZMWtFblZpQXU1c1MyT2Z1b1I2eXE5Q3UvOXZlTWVrdFZyV2VQ?=
 =?utf-8?B?L1I0TUdlNy9NZ0RSOG5FSjV6WGxzUXhKZ01LWkhKcjJLckJLWDJGbUg5TG9V?=
 =?utf-8?B?M3hwV3lxeE9rS0hkTUs2R1hRVHpXYWhTcmdGK05OT1lIc2lXSUY1ZDhYSUl1?=
 =?utf-8?B?SHFDdkphMmUzckU4Qk9TTXB6Z3R1V1FQM05qamg3ek5TczBVYXdveWxuTEtW?=
 =?utf-8?Q?3XZyEfJwkduNHecBnZD34c8Pr?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a7797b28-b92e-4564-f83f-08daf2367de1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 11:41:48.1713
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7eMtJ43cd32SEtkYcIx46bXhAxUffL5bt57FnPNN5ApH/lPpejJOeBRAl4KxMBmY1hmlaqk0SSGAx8SzqBB9PQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB7143

On 09.01.2023 12:15, Michal Orzel wrote:
> On 06/01/2023 11:41, Luca Fancellu wrote:
>> Sort alphabetically cppcheck report entries when producing the text
>> report, this will help comparing different reports and will group
>> together findings from the same file.
>>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>>  xen/scripts/xen_analysis/cppcheck_report_utils.py | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> index 02440aefdfec..f02166ed9d19 100644
>> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> @@ -104,6 +104,8 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>>                  for path in strip_paths:
>>                      text_report_content[i] = text_report_content[i].replace(
>>                                                                  path + "/", "")
>> +            # sort alphabetically the entries
>> +            text_report_content.sort()
>>              # Write the final text report
>>              outfile.writelines(text_report_content)
>>      except OSError as e:
>> --
>> 2.17.1
>>
>>
> 
> Having the report sorted is certainly a good idea. I am just wondering whether it should be done
> per file or per finding (e.g. rule). When fixing MISRA issues, the best approach is to fix all
> the issues for a given rule (i.e. a series fixing one rule) rather than all the issues in a file
> across different rules. Having the report sorted per finding would make this process easier. To
> achieve this, we could pass a custom key to the sort function that takes the second element (after
> splitting on the ':' separator), which is the name of the finding. Let me know your thoughts.

+1 - sorting by file name wants to be the second sorting criterion, i.e. applied only among
all instances of the same finding.
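The two-level ordering Jan describes maps naturally onto a tuple key, as sketched below. As before, the report line format ("path:finding-name: message") is an assumption for illustration, not the actual cppcheck output:

```python
# Sketch of a two-level sort key: primary key is the finding name,
# secondary key is the file path. The line format is hypothetical.
def finding_then_file_key(line):
    parts = line.split(":")
    return (parts[1], parts[0]) if len(parts) > 1 else ("", line)

report = [
    "xen/common/sched.c:misra-c2012-8.4: example message\n",
    "xen/arch/arm/mm.c:misra-c2012-8.4: example message\n",
    "xen/arch/arm/mm.c:misra-c2012-2.1: example message\n",
]
# Tuples compare element by element, so files are only compared among
# entries that share the same finding.
report.sort(key=finding_then_file_key)
```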

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:51:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:51:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473546.734214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqgE-0006cm-Rh; Mon, 09 Jan 2023 11:51:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473546.734214; Mon, 09 Jan 2023 11:51:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqgE-0006cf-Ob; Mon, 09 Jan 2023 11:51:06 +0000
Received: by outflank-mailman (input) for mailman id 473546;
 Mon, 09 Jan 2023 11:51:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEqgE-0006cV-01; Mon, 09 Jan 2023 11:51:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEqgD-0004EX-ST; Mon, 09 Jan 2023 11:51:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEqgD-0001wG-EB; Mon, 09 Jan 2023 11:51:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEqgD-0000sw-Dm; Mon, 09 Jan 2023 11:51:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zlILV5HZMdcoQMn06wy5JfrKcgGmAjOBQDraPW001wY=; b=5aLfPgrWdzfk6W7N3FzWu41ReQ
	JTUmHUwinr7I62V7vVZkDSYvf8k6M3SdFwCiLF9dzL5Bq2r7jTfT9dXBje4WSthvxcd0ggH271ARo
	yY45VBwfd0DQOjXh4Z4jwhvO955wulJCzdUKB6iPzXnWdfMJ2V7NxhzACvPmMsl164C8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175635-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175635: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
X-Osstest-Versions-That:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 11:51:05 +0000

flight 175635 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175635/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175624
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175624
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175624
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175624
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175624
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175624
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175624
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175624
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175624
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175624
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175624
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 175624
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2
baseline version:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2

Last test of basis   175635  2023-01-09 01:51:57 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:55:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:55:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473554.734225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqkT-0007JM-GF; Mon, 09 Jan 2023 11:55:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473554.734225; Mon, 09 Jan 2023 11:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqkT-0007JF-Dg; Mon, 09 Jan 2023 11:55:29 +0000
Received: by outflank-mailman (input) for mailman id 473554;
 Mon, 09 Jan 2023 11:55:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O+7k=5G=redhat.com=berrange@srs-se1.protection.inumbo.net>)
 id 1pEqkS-0007J9-40
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:55:28 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 802e7c60-9014-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 12:55:25 +0100 (CET)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-496-1tSs-3KtOzSxT7itYk1Mjw-1; Mon, 09 Jan 2023 06:55:20 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E5B4C858F0E;
 Mon,  9 Jan 2023 11:55:19 +0000 (UTC)
Received: from redhat.com (unknown [10.33.37.5])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 51B4D2026D4B;
 Mon,  9 Jan 2023 11:55:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 802e7c60-9014-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673265324;
	h=from:from:reply-to:reply-to:subject:subject:date:date:
	 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
	 content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2Bax02BSkvxaOEt2CRhklLybKGV/xR8mYNLZmlYBdyg=;
	b=iZ1vw41NOfSmJlm+VPfFnXYnGMs6gnAeQ716TUqyu63V9hqbjmvtDBFk6gbcZrmUad45lT
	pwmI2TxnnLHhfCIXaI0p8OKnnmfzVA+52FExnVifAqVlHY2/EzW97TJ7miQDe5dKzkEcau
	WrMC9On41xRaf9OSanPuR+QBd1t3mN4=
X-MC-Unique: 1tSs-3KtOzSxT7itYk1Mjw-1
Date: Mon, 9 Jan 2023 11:55:00 +0000
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: Thomas Huth <thuth@redhat.com>
Cc: qemu-devel@nongnu.org, "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-ppc@nongnu.org, xen-devel@lists.xenproject.org,
	Laurent Vivier <lvivier@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Daniel Henrique Barboza <danielhb413@gmail.com>,
	virtio-fs@redhat.com, Michael Roth <michael.roth@amd.com>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	qemu-block@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
	qemu-arm@nongnu.org, Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	=?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>,
	John Snow <jsnow@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>, Greg Kurz <groug@kaod.org>
Subject: Re: [PATCH 5/6] tests: add G_GNUC_PRINTF for various functions
Message-ID: <Y7wAlHEIafbNzsdf@redhat.com>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
References: <20221219130205.687815-1-berrange@redhat.com>
 <20221219130205.687815-6-berrange@redhat.com>
 <27da4d93-38e6-1c40-4a60-92f3343f380f@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <27da4d93-38e6-1c40-4a60-92f3343f380f@redhat.com>
User-Agent: Mutt/2.2.7 (2022-08-07)
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

On Thu, Dec 29, 2022 at 10:34:55AM +0100, Thomas Huth wrote:
> On 19/12/2022 14.02, Daniel P. Berrangé wrote:
> > Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> > ---
> >   tests/qtest/ahci-test.c           |  3 +++
> >   tests/qtest/arm-cpu-features.c    |  1 +
> >   tests/qtest/erst-test.c           |  2 +-
> >   tests/qtest/ide-test.c            |  3 ++-
> >   tests/qtest/ivshmem-test.c        |  4 ++--
> >   tests/qtest/libqmp.c              |  2 +-
> >   tests/qtest/libqos/libqos-pc.h    |  6 ++++--
> >   tests/qtest/libqos/libqos-spapr.h |  6 ++++--
> >   tests/qtest/libqos/libqos.h       |  6 ++++--
> >   tests/qtest/libqos/virtio-9p.c    |  1 +
> >   tests/qtest/migration-helpers.h   |  1 +
> >   tests/qtest/rtas-test.c           |  2 +-
> >   tests/qtest/usb-hcd-uhci-test.c   |  4 ++--
> >   tests/unit/test-qmp-cmds.c        | 13 +++++++++----
> >   14 files changed, 36 insertions(+), 18 deletions(-)
> ...
> > diff --git a/tests/unit/test-qmp-cmds.c b/tests/unit/test-qmp-cmds.c
> > index 2373cd64cb..6d52b4e5d8 100644
> > --- a/tests/unit/test-qmp-cmds.c
> > +++ b/tests/unit/test-qmp-cmds.c
> > @@ -138,6 +138,7 @@ void qmp___org_qemu_x_command(__org_qemu_x_EnumList *a,
> >   }
> > +G_GNUC_PRINTF(2, 3)
> >   static QObject *do_qmp_dispatch(bool allow_oob, const char *template, ...)
> >   {
> >       va_list ap;
> > @@ -160,6 +161,7 @@ static QObject *do_qmp_dispatch(bool allow_oob, const char *template, ...)
> >       return ret;
> >   }
> > +G_GNUC_PRINTF(3, 4)
> >   static void do_qmp_dispatch_error(bool allow_oob, ErrorClass cls,
> >                                     const char *template, ...)
> >   {
> > @@ -269,7 +271,7 @@ static void test_dispatch_cmd_io(void)
> >   static void test_dispatch_cmd_deprecated(void)
> >   {
> > -    const char *cmd = "{ 'execute': 'test-command-features1' }";
> > +    #define cmd "{ 'execute': 'test-command-features1' }"
> >       QDict *ret;
> 
> That looks weird, why is this required?

This means that the compiler can see we're passing a constant string
into the API call a few lines lower. Without it, the compiler can't
see that 'cmd' doesn't contain any printf format specifiers, and thus
it would force us to use func('%s', cmd)... except that's not actually
possible, since our crazy JSON printf formatter doesn't allow an
arbitrary '%s'; the '%s' can only occur at well-defined points in the
JSON doc.

Using a constant string via #define was the easiest way to give the
compiler visibility of the safety.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:55:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473555.734235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqkc-0007bK-Nf; Mon, 09 Jan 2023 11:55:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473555.734235; Mon, 09 Jan 2023 11:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqkc-0007bD-L3; Mon, 09 Jan 2023 11:55:38 +0000
Received: by outflank-mailman (input) for mailman id 473555;
 Mon, 09 Jan 2023 11:55:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEqkb-0007J9-Gt
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:55:37 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2057.outbound.protection.outlook.com [40.107.8.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 86a8a279-9014-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 12:55:35 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9400.eurprd04.prod.outlook.com (2603:10a6:102:2b2::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 11:55:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 11:55:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86a8a279-9014-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=crr5/fVaSMfV8/izX594wQZNidv+u79DFNfTsG9LI3y6C4Z4DUOhHyV+TUTTJ3NQXuVC30jocpCVKkowTZRxeSIrnxKnH9msbhZhVKUpgJPbXUfwGcSAVZ1NEaKc9vrLd5Z8khIKxSqCHOcCQ6SBRKrRi1kOboLKpEj+oMrmh4xhNr64ZJ/2nD1MZ2wRReByq3srDPogPxnieJTtHKZI6EWllmpiMbM6kKozTjxPLOwx19ALWYqkdXSbY+WjDMjVF9f+WrwfcEtWR8JF+zu6jjcfXYvZrbm0/T0BCgYQPqVN3V/vj4NHhbSwLNeGdRsYSl3CD3iFON8tgEsB8nCLFQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0FwTL+KLmDvFMFlT0Yz3dMkCUPBpwkY5q/+tUnrTZl8=;
 b=De8y2WucBoSmmYNMTqnM46DpPXgd1F3QdSEtMhErnS8PSGk/HFg3MAQpnoGAMpwWCz9EmjUiLz6Gp4iWlVhYw6bmO9uMM0NxYVeNzmHv5Gz3rfNgch7sqUUydpoxktNZiyu3uIHJ7/dgeHjLDoWSrvNSnDvtmRO8v2L+F1w3Gqz5jUJ31yDrNmnWWh1aeGmpAHS1m4RY98XKAJ/Fe3KEdfC6F4iR2RhQZ6X1vfXNt6uQRMZ1hldLBrg6x1vGhOwOGt0Gyeyr9oHBgbqHg6ibbi8bPe5VX+IVqwD+MLbp7I3G8cmmBHZ41lNfQI6PJ3B0wOzzkJuc2bGXNOvAS4zT4g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0FwTL+KLmDvFMFlT0Yz3dMkCUPBpwkY5q/+tUnrTZl8=;
 b=XmTPTlhqJurksy9GqtyefS4o4280jpcvUqH3Mzw+ZXjs/AVjinLYIeiO4QLvXu7uKJZbFaEoBW2+LC6H2OuBCEOY+MnRbjGFSE44enWmEMTDKQG5BhSNGaVM5WOYXINwzCrjMQ2E6kD24RVlMlCCcxReLbBR+pOXjvVIiLBA+RtgEDNb4ziRicEGidEfQ7P3yHM8dsZLSc0Af9vk0jIgfnqLrC9kOscMe0K/fm7B7lLBOMzyBJ7Jd+Vov82baXjn8uYhGt5OXYBu5ZbgQsU3y6xB2gvpqYVyaqNB6n3VpG5IUWCOi9JuH2dYTgdIjSrIgKMXomLABnat1BM8zpHOsw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f237eab1-1555-97de-7151-58bc7abbac1b@suse.com>
Date: Mon, 9 Jan 2023 12:55:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/8] x86/paging: fold most HAP and shadow final teardown
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>, "Tim (Xen.org)" <tim@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f7d2c856-bf75-503b-ad96-02d25c63a2f6@suse.com>
 <8d519e00-83c6-aee9-e7ba-523aa4265e1e@suse.com>
 <af33fef0-46f7-b206-53ed-33d66f0f008c@citrix.com>
 <780eb542-34b9-c64f-0a40-acc462b87c72@suse.com>
 <4de3cf99-f8ab-c439-b383-859a41e90517@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <4de3cf99-f8ab-c439-b383-859a41e90517@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FRYP281CA0011.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::21)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9400:EE_
X-MS-Office365-Filtering-Correlation-Id: e465f2dc-006c-4373-97f4-08daf2386994
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e465f2dc-006c-4373-97f4-08daf2386994
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 11:55:33.2751
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Lmq3W3luSVdb4fUE+wCZKY6ranJlBVhzo+ouP1/Je5iBR/PB6rod/ibN6QTkBmBRCrMOmG/HrRzuCKy3+R2BsA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9400

On 05.01.2023 18:56, Andrew Cooper wrote:
> On 22/12/2022 7:51 am, Jan Beulich wrote:
>> On 21.12.2022 18:16, Andrew Cooper wrote:
>>> On 21/12/2022 1:25 pm, Jan Beulich wrote:
>>>> +                  d, d->arch.paging.total_pages,
>>>> +                  d->arch.paging.free_pages, d->arch.paging.p2m_pages);
>>>> +
>>>> +    if ( hap )
>>>>          hap_final_teardown(d);
>>>> +
>>>> +    /*
>>>> +     * Double-check that the domain didn't have any paging memory.
>>>> +     * It is possible for a domain that never got domain_kill()ed
>>>> +     * to get here with its paging allocation intact.
>>> I know you're mostly just moving this comment, but it's misleading.
>>>
>>> This path is used for the domain_create() error path, and there will be
>>> a nonzero allocation for HVM guests.
>>>
>>> I think we do want to rework this eventually - we will simplify things
>>> massively by splitting the things that can only happen for a domain which
>>> has run into relinquish_resources.
>>>
>>> At a minimum, I'd suggest dropping the first sentence.  "double check"
>>> implies it's an extraordinary case, which isn't warranted here IMO.
>> Instead of dropping it, I think I would prefer to make it start "Make sure
>> ...".
> 
> That's still awkward grammar, and a bit misleading IMO.  How about
> rewriting it entirely?
> 
> /* Remove remaining paging memory.  This can be nonzero on certain error
> paths. */

Fine with me, changed.

>>>> +     */
>>>> +    if ( d->arch.paging.total_pages )
>>>> +    {
>>>> +        if ( hap )
>>>> +            hap_teardown(d, NULL);
>>>> +        else
>>>> +            shadow_teardown(d, NULL);
>>>> +    }
>>>> +
>>>> +    /* It is now safe to pull down the p2m map. */
>>>> +    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
>>>> +
>>>> +    /* Free any paging memory that the p2m teardown released. */
>>> I don't think this is true any more.  410 also made HAP/shadow free
>>> pages fully for a dying domain, so p2m_teardown() at this point won't
>>> add to the free memory pool.
>>>
>>> I think the subsequent *_set_allocation() can be dropped, and the
>>> assertions left.
>> I agree, but if anything this will want to be a separate patch then.
> 
> I'd be happy putting that in this patch, but if you don't want to
> combine it, then it should be a prerequisite IMO, so we don't have a
> "clean up $X" patch which is shuffling buggy code around.

Well, doing it the other way around (as I already had it before your
reply) also has its advantages. Do you feel strongly about the order?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 11:58:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 11:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473567.734247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqnf-0008U4-6w; Mon, 09 Jan 2023 11:58:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473567.734247; Mon, 09 Jan 2023 11:58:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqnf-0008Tx-4G; Mon, 09 Jan 2023 11:58:47 +0000
Received: by outflank-mailman (input) for mailman id 473567;
 Mon, 09 Jan 2023 11:58:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LgaK=5G=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pEqnd-0008Tr-Ct
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 11:58:45 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2042.outbound.protection.outlook.com [40.107.105.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f6816b93-9014-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 12:58:43 +0100 (CET)
Received: from FR0P281CA0014.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:15::19)
 by DU0PR08MB9370.eurprd08.prod.outlook.com (2603:10a6:10:420::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 11:58:37 +0000
Received: from VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:15:cafe::6f) by FR0P281CA0014.outlook.office365.com
 (2603:10a6:d10:15::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Mon, 9 Jan 2023 11:58:36 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT055.mail.protection.outlook.com (100.127.144.130) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Mon, 9 Jan 2023 11:58:36 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Mon, 09 Jan 2023 11:58:35 +0000
Received: from 650a6f315a8d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A68F06F4-1A8C-421B-99BA-F79F4A4991A7.1; 
 Mon, 09 Jan 2023 11:58:29 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 650a6f315a8d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Jan 2023 11:58:29 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by AS2PR08MB10156.eurprd08.prod.outlook.com (2603:10a6:20b:64e::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 11:58:28 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 11:58:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6816b93-9014-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mso8kklQ8Xtd7MdJcZvsl0oNwRmlZTQARRAzdSnXFls=;
 b=1rj0/wAR5l+cfJlEIFT6tChlNOMi4AO5oKtUMnABlaXblh3Nc+d53Y6zjbKGgWSqiXGE5qVICRd8hU70K2uxOQFHrdvO0TmuuhubBsBeJ+oUWR4c8SY9GCOyJMiDKTVQxGcdGcGUjHHBSfdazhZAsCJ1965HGf2QlFUU05bFHwU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Fh6bQe1ISWjVQz5sKuDjU5rYItbpkwoojcdYRAJVp2P5OIukw9CUbzZWy7wIHsaA/Oo/tFxvq9m8eHiKUYRyIIy4PJ4AdRG3jC21ZmNnEk2LYo1V0HG5Cy+So2QMVzoe7RU6JUqQxR6so8+qnYwjZ/x59c6XU3zZZHRWBHSrNvqtJUeS9ohgmrdfo34Q1eLTCqrzPb5zWr1k6k75F3WaYugUJfW8ookD3fXwJlxAQfWXuXSvXNr7P44LZ3jisLi4T2Bq4swmYD9sdVP+t0UsQYpnu4ET3CYShmcqLluOngh+T7jLzrSg+1H+xzHb/YxXE7sKWvCdr4edE6amEEiG3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mso8kklQ8Xtd7MdJcZvsl0oNwRmlZTQARRAzdSnXFls=;
 b=M9by6Y/Xf43w6XylRpIvTLgWIKHiGm4eKszfjPFjra5Ggj3c68jzY+KbXMQN4yDE14cOj6QnVRegjfCGwlRUotC/jfTVr7mzVEYs74Vh8faj67Tfd9F/EntaM9M9JHL0OjWZVtrTCrGD4BDBIUVm4IxYvV9RCNTfHSjKEIdoq30KzHjBtUQSLeVZwwdAs85N0f9Enkz0CNKSmW6fzGxj2gVjLBG8Z2NnX0kvI8so833RL9Wz2zidZI5FziOOUtr99Ta5euFJf7nXKZ7DJq3fG/nR+RxgQFt+Np2NQl/dWfyQ8DyQEa/VJdIxbds7DjSfAQQzh9e9wfvjWhcynAHVsg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mso8kklQ8Xtd7MdJcZvsl0oNwRmlZTQARRAzdSnXFls=;
 b=1rj0/wAR5l+cfJlEIFT6tChlNOMi4AO5oKtUMnABlaXblh3Nc+d53Y6zjbKGgWSqiXGE5qVICRd8hU70K2uxOQFHrdvO0TmuuhubBsBeJ+oUWR4c8SY9GCOyJMiDKTVQxGcdGcGUjHHBSfdazhZAsCJ1965HGf2QlFUU05bFHwU=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host
 address not provided
Thread-Topic: [PATCH v1 06/13] xen/arm: assign shared memory to owner when
 host address not provided
Thread-Index: AQHY+J1pmmYEajWz5UynKX+c5rdEZK6UzwiAgAEQoXCAAGGAAIAACRWA
Date: Mon, 9 Jan 2023 11:58:28 +0000
Message-ID:
 <AM0PR08MB4530048C87F24524BDE2DCF8F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-7-Penny.Zheng@arm.com>
 <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
 <AM0PR08MB453041150588948050F718D4F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <6db41bd2-ab71-422a-4235-a9209e984915@xen.org>
In-Reply-To: <6db41bd2-ab71-422a-4235-a9209e984915@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 31865A1BCDB07F428FCE5A5A1FA00D2F.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|AS2PR08MB10156:EE_|VI1EUR03FT055:EE_|DU0PR08MB9370:EE_
X-MS-Office365-Filtering-Correlation-Id: 511d1500-78db-43ce-526b-08daf238d6cf
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB10156
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ceabb286-bc94-40b5-7b21-08daf238d200
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 11:58:36.0941
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 511d1500-78db-43ce-526b-08daf238d6cf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9370

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Monday, January 9, 2023 6:58 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
> when host address not provided
> 
> 
> 
> On 09/01/2023 07:49, Penny Zheng wrote:
> > Hi Julien
> 
> Hi Penny,
> 
> > Happy new year~~~~
> 
> Happy new year too!
> 
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Sunday, January 8, 2023 8:53 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>; xen-
> devel@lists.xenproject.org
> >> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> >> <sstabellini@kernel.org>; Bertrand Marquis
> >> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> >> <Volodymyr_Babchuk@epam.com>
> >> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
> >> when host address not provided
> >>
> >> Hi,
> >>
> >
> > A few concerns explained why I didn't choose "struct meminfo" over two
> > pointers "struct membank*" and "struct meminfo*".
> > 1) memory usage is the main reason.
> > If we use "struct meminfo" over the current "struct membank*" and
> > "struct meminfo*", "struct shm_meminfo" will become a array of 256
> > "struct shm_membank", with "struct shm_membank" being also an 256-
> item
> > array, that is 256 * 256, too big for a structure and If I remembered clearly,
> it will lead to "more than PAGE_SIZE" compiling error.
> 
> I am not aware of any place where we would restrict the size of kinfo in
> upstream. Can you give me a pointer?
> 

If I remembered correctly, my first version of "struct shm_meminfo" is this
"big"(256 * 256) structure, and it leads to the whole xen binary is bigger than 2MB. ;\

> > FWIT, either reworking meminfo or using a different structure, are
> > both leading to sizing down the array, hmmm, I don't know which size
> > is suitable. That's why I prefer pointer and dynamic allocation.
> 
> I would expect that in most cases, you will need only one bank when the host
> address is not provided. So I think this is a bit odd to me to impose a "large"
> allocation for them.
> 

Only if user is not defining size as something like (2^a + 2^b + 2^c + ...). ;\
So maybe 8 or 16 is enough?
struct new_meminfo {
    unsigned int nr_banks;
    struct membank bank[8];
};

Correct me if I'm wrong:
The "struct shm_membank" you are suggesting is looking like this, right?
struct shm_membank {
    char shm_id[MAX_SHM_ID_LENGTH];
    unsigned int nr_shm_borrowers;
    struct new_meminfo shm_banks;
    unsigned long total_size;
};
Let me try~

> > 2) If we use "struct meminfo*" over the current "struct membank*" and
> > "struct meminfo*", we will need a new flag to differentiate two
> > scenarios(host address provided or not), As the special case "struct
> membank*" is already helping us differentiate.
> > And since when host address is provided, the related "struct membank"
> > also needs to be reserved in "bootinfo.reserved_mem". That's why I
> > used pointer " struct membank*" to avoid memory waste.
> 
> I am confused... Today you are defining as:
> 
> struct {
>      struct membank *;
>      struct {
>         struct meminfo *;
>         unsigned long total_size;
>      }
> }
> 
> So on arm64 host, you would use 24 bytes. If we were using an union instead.
> We would use 16 bytes. Adding a 32-bit fields, would bring to
> 20 bytes (assuming we can re-use a padding).
> 
> Therefore, there is no memory waste here with a flag...
> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 12:08:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 12:08:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473583.734257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqx7-0001pn-Df; Mon, 09 Jan 2023 12:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473583.734257; Mon, 09 Jan 2023 12:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqx7-0001pg-B7; Mon, 09 Jan 2023 12:08:33 +0000
Received: by outflank-mailman (input) for mailman id 473583;
 Mon, 09 Jan 2023 12:08:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TjmM=5G=desiato.srs.infradead.org=BATV+c4f4c0a351f3f62582fc+7078+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pEqx5-0001pa-Es
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 12:08:32 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51dc7caa-9016-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 13:08:29 +0100 (CET)
Received: from [2001:8b0:10b:1:b916:482e:69fd:b8ad] (helo=[IPv6:::1])
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pEqwq-002iBc-0V; Mon, 09 Jan 2023 12:08:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51dc7caa-9016-11ed-91b6-6bf2151ebd3b
Date: Mon, 09 Jan 2023 12:08:19 +0000
From: David Woodhouse <dwmw2@infradead.org>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
CC: Julien Grall <jgrall@amazon.com>
Subject: Re: (Ab)using xenstored without Xen
User-Agent: K-9 Mail for Android
In-Reply-To: <a1ee03ca-1304-17c8-d075-9a235aa02fee@suse.com>
References: <22a2352464be2df92dc0d30a955034c59fdf3927.camel@infradead.org> <a1ee03ca-1304-17c8-d075-9a235aa02fee@suse.com>
Message-ID: <60DDB54B-41CC-4DE9-83C9-FA20AEEAA9E8@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by desiato.infradead.org. See http://www.infradead.org/rpr.html



On 9 January 2023 07:40:06 GMT, Juergen Gross <jgross@suse.com> wrote:
>Sorry for the late answer, but I was pretty busy before my 3 week time off. :-)

No problem, I had lots of other things to work on, including my own time off. Hope you enjoyed it!

>On 20.12.22 13:02, David Woodhouse wrote:
>> I've been working on getting qemu to support Xen HVM guests 'natively'
>> under KVM:
>> https://lore.kernel.org/qemu-devel/20221216004117.862106-1-dwmw2@infradead.org/T/
>> 
>> The basic platform is mostly working and I can start XTF tests with
>> 'qemu -kernel'. Now it really needs a xenstore.
>> 
>> I'm thinking of implementing the basic shared ring support on the qemu
>> side, then communicating with the real xenstored over its socket
>> interface. It would need a 'SU' command in the xenstored protocol to
>> make it treat that connection as an unprivileged connection from a
>> specific domid, analogous to 'INTRODUCE' but over the existing
>> connection.
>
>Wouldn't an "unprivileged" socket make more sense?

Not sure. I think we still need a privileged operation to "introduce" the client domain anyway, don't we?

>> However, there might be a bit of work to do first. At first, it seemed
>> like xenstored did start up OK and qemu could even connect to it when
>> not running under Xen. But that was a checkout from a few months ago,
>> and even then it would segfault the first time we try to actually
>> *write* any nodes.
>> 
>> Newer xenstored breaks even sooner because since commit 60e2f6020
>> ("tools/xenstore: move the call of setup_structure() to dom0
>> introduction") it doesn't even have a tdb_ctx if you start it with the
>> -D option, so it segfaults even on running xenstore-ls. And if I move
>> the tdb part of setup_structure() back to be called from where it was,
>> we get a later crash in get_domain_info() because the xc_handle is
>> NULL.
>> 
>> Which is kind of fair enough, because xenstored is designed to run
>> under Xen :)
>> 
>> But what *is* the -D option for? It goes back to commit bddd41366 in
>> 2005 which talks about allowing multiple concurrent transactions, and
>> doesn't mention the new option at all. It seems to be fairly hosed at
>> the moment.
>
>I guess this was some debugging add-on which hasn't been used for ages.
>
>I'm inclined to just remove the -D option.

Or use it for the new Xenless behaviour perhaps?

>> Is it reasonable to attempt "fixing" xenstored to run without actual
>> Xen, so that we can use it for virtual Xen support?
>
>I don't see a major problem with that.
>
>The result shouldn't be too ugly, of course, and I don't see any effort
>on Xen side to test any changes for not breaking your use case.

If anything it possibly makes some test cases a lot easier to run?


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 12:08:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 12:08:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473584.734269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqxB-00025Y-Kb; Mon, 09 Jan 2023 12:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473584.734269; Mon, 09 Jan 2023 12:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqxB-00025R-HV; Mon, 09 Jan 2023 12:08:37 +0000
Received: by outflank-mailman (input) for mailman id 473584;
 Mon, 09 Jan 2023 12:08:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPNl=5G=citrix.com=prvs=36677a302=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pEqx9-0001pa-TV
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 12:08:35 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55e40d19-9016-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 13:08:34 +0100 (CET)
X-Inumbo-ID: 55e40d19-9016-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH 0/2] x86/vmx: Don't crash guests when no model-specific LBRs are available
Date: Mon, 9 Jan 2023 12:08:26 +0000
Message-ID: <20230109120828.344-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is the minimum bodge required to stop guests crashing on Sapphire Rapids
hardware.

Note that both Arch LBR and safely (in terms of migration) enumerating
PDCM/MSR_PERF_CAPS depend on improved MSR levelling support, which is not yet
complete.

I.e. we cannot do the second half (enumerating LBR_FORMAT=0x3f) yet, because
doing so would make VMs more likely to crash on migrate.

Andrew Cooper (2):
  x86/vmx: Calculate model-specific LBRs once at start of day
  x86/vmx: Support for CPUs without model-specific LBR

 xen/arch/x86/hvm/vmx/vmx.c | 307 +++++++++++++++++++++++----------------------
 1 file changed, 155 insertions(+), 152 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 12:08:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 12:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473585.734280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqxC-0002M1-V5; Mon, 09 Jan 2023 12:08:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473585.734280; Mon, 09 Jan 2023 12:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqxC-0002Lu-QY; Mon, 09 Jan 2023 12:08:38 +0000
Received: by outflank-mailman (input) for mailman id 473585;
 Mon, 09 Jan 2023 12:08:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPNl=5G=citrix.com=prvs=36677a302=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pEqxB-0001pa-35
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 12:08:37 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 57ecce77-9016-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 13:08:36 +0100 (CET)
X-Inumbo-ID: 57ecce77-9016-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH 2/2] x86/vmx: Support for CPUs without model-specific LBR
Date: Mon, 9 Jan 2023 12:08:28 +0000
Message-ID: <20230109120828.344-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230109120828.344-1-andrew.cooper3@citrix.com>
References: <20230109120828.344-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Ice Lake (server, at least) has both Arch LBR and model-specific LBR.  Sapphire
Rapids does not have model-specific LBR at all.  I.e. on SPR and later,
model_specific_lbr will always be NULL, so changes are needed to avoid
reliably hitting the domain_crash().

The Arch LBR spec states that CPUs without model-specific LBR implement
MSR_DBG_CTL.LBR by discarding writes and always returning 0.

Do this for any CPU for which we lack model-specific LBR information.

Adjust the stale comment, now that the Arch LBR spec has created a way to
signal "no model-specific LBR" to guests.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 17320f9fb267..c76b09391c76 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3555,17 +3555,25 @@ static int cf_check vmx_msr_write_intercept(
             goto gp_fault;
 
         /*
+         * The Arch LBR spec (new in Ice Lake) states that CPUs with no
+         * model-specific LBRs implement MSR_DBG_CTL.LBR by discarding writes
+         * and always returning 0.
+         *
+         * Use this property in all cases where we don't know any
+         * model-specific LBR information, as it matches real hardware
+         * behaviour on post-Ice Lake systems.
+         */
+        if ( !model_specific_lbr )
+            msr_content &= ~IA32_DEBUGCTLMSR_LBR;
+
+        /*
          * When a guest first enables LBR, arrange to save and restore the LBR
          * MSRs and allow the guest direct access.
          *
-         * MSR_DEBUGCTL and LBR has existed almost as long as MSRs have
-         * existed, and there is no architectural way to hide the feature, or
-         * fail the attempt to enable LBR.
-         *
-         * Unknown host LBR MSRs or hitting -ENOSPC with the guest load/save
-         * list are definitely hypervisor bugs, whereas -ENOMEM for allocating
-         * the load/save list is simply unlucky (and shouldn't occur with
-         * sensible management by the toolstack).
+         * Hitting -ENOSPC with the guest load/save list is definitely a
+         * hypervisor bug, whereas -ENOMEM for allocating the load/save list
+         * is simply unlucky (and shouldn't occur with sensible management by
+         * the toolstack).
          *
          * Either way, there is nothing we can do right now to recover, and
          * the guest won't execute correctly either.  Simply crash the domain
@@ -3576,13 +3584,6 @@ static int cf_check vmx_msr_write_intercept(
         {
             const struct lbr_info *lbr = model_specific_lbr;
 
-            if ( unlikely(!lbr) )
-            {
-                gprintk(XENLOG_ERR, "Unknown Host LBR MSRs\n");
-                domain_crash(v->domain);
-                return X86EMUL_OKAY;
-            }
-
             for ( ; lbr->count; lbr++ )
             {
                 unsigned int i;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 12:09:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 12:09:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473593.734291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqxb-00038x-7O; Mon, 09 Jan 2023 12:09:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473593.734291; Mon, 09 Jan 2023 12:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEqxb-00038q-47; Mon, 09 Jan 2023 12:09:03 +0000
Received: by outflank-mailman (input) for mailman id 473593;
 Mon, 09 Jan 2023 12:09:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPNl=5G=citrix.com=prvs=36677a302=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pEqxa-00031g-9t
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 12:09:02 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64b2d057-9016-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 13:08:59 +0100 (CET)
X-Inumbo-ID: 64b2d057-9016-11ed-b8d0-410ff93cb8f0
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH 1/2] x86/vmx: Calculate model-specific LBRs once at start of day
Date: Mon, 9 Jan 2023 12:08:27 +0000
Message-ID: <20230109120828.344-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230109120828.344-1-andrew.cooper3@citrix.com>
References: <20230109120828.344-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

There is no point repeating this calculation at runtime, especially as it is
in the fallback path of the WRMSR/RDMSR handlers.

Move the infrastructure higher in vmx.c to avoid forward declarations, and
rename last_branch_msr_get() to get_model_specific_lbr() to highlight that it
covers model-specific LBRs only.

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 276 +++++++++++++++++++++++----------------------
 1 file changed, 139 insertions(+), 137 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 43a4865d1c76..17320f9fb267 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -396,6 +396,142 @@ void vmx_pi_hooks_deassign(struct domain *d)
     domain_unpause(d);
 }
 
+static const struct lbr_info {
+    u32 base, count;
+} p4_lbr[] = {
+    { MSR_P4_LER_FROM_LIP,          1 },
+    { MSR_P4_LER_TO_LIP,            1 },
+    { MSR_P4_LASTBRANCH_TOS,        1 },
+    { MSR_P4_LASTBRANCH_0_FROM_LIP, NUM_MSR_P4_LASTBRANCH_FROM_TO },
+    { MSR_P4_LASTBRANCH_0_TO_LIP,   NUM_MSR_P4_LASTBRANCH_FROM_TO },
+    { 0, 0 }
+}, c2_lbr[] = {
+    { MSR_IA32_LASTINTFROMIP,       1 },
+    { MSR_IA32_LASTINTTOIP,         1 },
+    { MSR_C2_LASTBRANCH_TOS,        1 },
+    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_C2_LASTBRANCH_FROM_TO },
+    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_C2_LASTBRANCH_FROM_TO },
+    { 0, 0 }
+}, nh_lbr[] = {
+    { MSR_IA32_LASTINTFROMIP,       1 },
+    { MSR_IA32_LASTINTTOIP,         1 },
+    { MSR_NHL_LBR_SELECT,           1 },
+    { MSR_NHL_LASTBRANCH_TOS,       1 },
+    { MSR_P4_LASTBRANCH_0_FROM_LIP, NUM_MSR_P4_LASTBRANCH_FROM_TO },
+    { MSR_P4_LASTBRANCH_0_TO_LIP,   NUM_MSR_P4_LASTBRANCH_FROM_TO },
+    { 0, 0 }
+}, sk_lbr[] = {
+    { MSR_IA32_LASTINTFROMIP,       1 },
+    { MSR_IA32_LASTINTTOIP,         1 },
+    { MSR_NHL_LBR_SELECT,           1 },
+    { MSR_NHL_LASTBRANCH_TOS,       1 },
+    { MSR_SKL_LASTBRANCH_0_FROM_IP, NUM_MSR_SKL_LASTBRANCH },
+    { MSR_SKL_LASTBRANCH_0_TO_IP,   NUM_MSR_SKL_LASTBRANCH },
+    { MSR_SKL_LASTBRANCH_0_INFO,    NUM_MSR_SKL_LASTBRANCH },
+    { 0, 0 }
+}, at_lbr[] = {
+    { MSR_IA32_LASTINTFROMIP,       1 },
+    { MSR_IA32_LASTINTTOIP,         1 },
+    { MSR_C2_LASTBRANCH_TOS,        1 },
+    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
+    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
+    { 0, 0 }
+}, sm_lbr[] = {
+    { MSR_IA32_LASTINTFROMIP,       1 },
+    { MSR_IA32_LASTINTTOIP,         1 },
+    { MSR_SM_LBR_SELECT,            1 },
+    { MSR_SM_LASTBRANCH_TOS,        1 },
+    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
+    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
+    { 0, 0 }
+}, gm_lbr[] = {
+    { MSR_IA32_LASTINTFROMIP,       1 },
+    { MSR_IA32_LASTINTTOIP,         1 },
+    { MSR_SM_LBR_SELECT,            1 },
+    { MSR_SM_LASTBRANCH_TOS,        1 },
+    { MSR_GM_LASTBRANCH_0_FROM_IP,  NUM_MSR_GM_LASTBRANCH_FROM_TO },
+    { MSR_GM_LASTBRANCH_0_TO_IP,    NUM_MSR_GM_LASTBRANCH_FROM_TO },
+    { 0, 0 }
+};
+static const struct lbr_info * __ro_after_init model_specific_lbr;
+
+static const struct lbr_info *__init get_model_specific_lbr(void)
+{
+    switch ( boot_cpu_data.x86 )
+    {
+    case 6:
+        switch ( boot_cpu_data.x86_model )
+        {
+        /* Core2 Duo */
+        case 0x0f:
+        /* Enhanced Core */
+        case 0x17:
+        /* Xeon 7400 */
+        case 0x1d:
+            return c2_lbr;
+        /* Nehalem */
+        case 0x1a: case 0x1e: case 0x1f: case 0x2e:
+        /* Westmere */
+        case 0x25: case 0x2c: case 0x2f:
+        /* Sandy Bridge */
+        case 0x2a: case 0x2d:
+        /* Ivy Bridge */
+        case 0x3a: case 0x3e:
+        /* Haswell */
+        case 0x3c: case 0x3f: case 0x45: case 0x46:
+        /* Broadwell */
+        case 0x3d: case 0x47: case 0x4f: case 0x56:
+            return nh_lbr;
+        /* Skylake */
+        case 0x4e: case 0x5e:
+        /* Xeon Scalable */
+        case 0x55:
+        /* Cannon Lake */
+        case 0x66:
+        /* Goldmont Plus */
+        case 0x7a:
+        /* Ice Lake */
+        case 0x6a: case 0x6c: case 0x7d: case 0x7e:
+        /* Tiger Lake */
+        case 0x8c: case 0x8d:
+        /* Tremont */
+        case 0x86:
+        /* Kaby Lake */
+        case 0x8e: case 0x9e:
+        /* Comet Lake */
+        case 0xa5: case 0xa6:
+            return sk_lbr;
+        /* Atom */
+        case 0x1c: case 0x26: case 0x27: case 0x35: case 0x36:
+            return at_lbr;
+        /* Silvermont */
+        case 0x37: case 0x4a: case 0x4d: case 0x5a: case 0x5d:
+        /* Xeon Phi Knights Landing */
+        case 0x57:
+        /* Xeon Phi Knights Mill */
+        case 0x85:
+        /* Airmont */
+        case 0x4c:
+            return sm_lbr;
+        /* Goldmont */
+        case 0x5c: case 0x5f:
+            return gm_lbr;
+        }
+        break;
+
+    case 15:
+        switch ( boot_cpu_data.x86_model )
+        {
+        /* Pentium4/Xeon with em64t */
+        case 3: case 4: case 6:
+            return p4_lbr;
+        }
+        break;
+    }
+
+    return NULL;
+}
+
 static int cf_check vmx_domain_initialise(struct domain *d)
 {
     static const struct arch_csw csw = {
@@ -2846,6 +2982,7 @@ const struct hvm_function_table * __init start_vmx(void)
         vmx_function_table.tsc_scaling.setup = vmx_setup_tsc_scaling;
     }
 
+    model_specific_lbr = get_model_specific_lbr();
     lbr_tsx_fixup_check();
     ler_to_fixup_check();
 
@@ -2992,141 +3129,6 @@ static int vmx_cr_access(cr_access_qual_t qual)
     return X86EMUL_OKAY;
 }
 
-static const struct lbr_info {
-    u32 base, count;
-} p4_lbr[] = {
-    { MSR_P4_LER_FROM_LIP,          1 },
-    { MSR_P4_LER_TO_LIP,            1 },
-    { MSR_P4_LASTBRANCH_TOS,        1 },
-    { MSR_P4_LASTBRANCH_0_FROM_LIP, NUM_MSR_P4_LASTBRANCH_FROM_TO },
-    { MSR_P4_LASTBRANCH_0_TO_LIP,   NUM_MSR_P4_LASTBRANCH_FROM_TO },
-    { 0, 0 }
-}, c2_lbr[] = {
-    { MSR_IA32_LASTINTFROMIP,       1 },
-    { MSR_IA32_LASTINTTOIP,         1 },
-    { MSR_C2_LASTBRANCH_TOS,        1 },
-    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_C2_LASTBRANCH_FROM_TO },
-    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_C2_LASTBRANCH_FROM_TO },
-    { 0, 0 }
-}, nh_lbr[] = {
-    { MSR_IA32_LASTINTFROMIP,       1 },
-    { MSR_IA32_LASTINTTOIP,         1 },
-    { MSR_NHL_LBR_SELECT,           1 },
-    { MSR_NHL_LASTBRANCH_TOS,       1 },
-    { MSR_P4_LASTBRANCH_0_FROM_LIP, NUM_MSR_P4_LASTBRANCH_FROM_TO },
-    { MSR_P4_LASTBRANCH_0_TO_LIP,   NUM_MSR_P4_LASTBRANCH_FROM_TO },
-    { 0, 0 }
-}, sk_lbr[] = {
-    { MSR_IA32_LASTINTFROMIP,       1 },
-    { MSR_IA32_LASTINTTOIP,         1 },
-    { MSR_NHL_LBR_SELECT,           1 },
-    { MSR_NHL_LASTBRANCH_TOS,       1 },
-    { MSR_SKL_LASTBRANCH_0_FROM_IP, NUM_MSR_SKL_LASTBRANCH },
-    { MSR_SKL_LASTBRANCH_0_TO_IP,   NUM_MSR_SKL_LASTBRANCH },
-    { MSR_SKL_LASTBRANCH_0_INFO,    NUM_MSR_SKL_LASTBRANCH },
-    { 0, 0 }
-}, at_lbr[] = {
-    { MSR_IA32_LASTINTFROMIP,       1 },
-    { MSR_IA32_LASTINTTOIP,         1 },
-    { MSR_C2_LASTBRANCH_TOS,        1 },
-    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
-    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
-    { 0, 0 }
-}, sm_lbr[] = {
-    { MSR_IA32_LASTINTFROMIP,       1 },
-    { MSR_IA32_LASTINTTOIP,         1 },
-    { MSR_SM_LBR_SELECT,            1 },
-    { MSR_SM_LASTBRANCH_TOS,        1 },
-    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
-    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
-    { 0, 0 }
-}, gm_lbr[] = {
-    { MSR_IA32_LASTINTFROMIP,       1 },
-    { MSR_IA32_LASTINTTOIP,         1 },
-    { MSR_SM_LBR_SELECT,            1 },
-    { MSR_SM_LASTBRANCH_TOS,        1 },
-    { MSR_GM_LASTBRANCH_0_FROM_IP,  NUM_MSR_GM_LASTBRANCH_FROM_TO },
-    { MSR_GM_LASTBRANCH_0_TO_IP,    NUM_MSR_GM_LASTBRANCH_FROM_TO },
-    { 0, 0 }
-};
-
-static const struct lbr_info *last_branch_msr_get(void)
-{
-    switch ( boot_cpu_data.x86 )
-    {
-    case 6:
-        switch ( boot_cpu_data.x86_model )
-        {
-        /* Core2 Duo */
-        case 0x0f:
-        /* Enhanced Core */
-        case 0x17:
-        /* Xeon 7400 */
-        case 0x1d:
-            return c2_lbr;
-        /* Nehalem */
-        case 0x1a: case 0x1e: case 0x1f: case 0x2e:
-        /* Westmere */
-        case 0x25: case 0x2c: case 0x2f:
-        /* Sandy Bridge */
-        case 0x2a: case 0x2d:
-        /* Ivy Bridge */
-        case 0x3a: case 0x3e:
-        /* Haswell */
-        case 0x3c: case 0x3f: case 0x45: case 0x46:
-        /* Broadwell */
-        case 0x3d: case 0x47: case 0x4f: case 0x56:
-            return nh_lbr;
-        /* Skylake */
-        case 0x4e: case 0x5e:
-        /* Xeon Scalable */
-        case 0x55:
-        /* Cannon Lake */
-        case 0x66:
-        /* Goldmont Plus */
-        case 0x7a:
-        /* Ice Lake */
-        case 0x6a: case 0x6c: case 0x7d: case 0x7e:
-        /* Tiger Lake */
-        case 0x8c: case 0x8d:
-        /* Tremont */
-        case 0x86:
-        /* Kaby Lake */
-        case 0x8e: case 0x9e:
-        /* Comet Lake */
-        case 0xa5: case 0xa6:
-            return sk_lbr;
-        /* Atom */
-        case 0x1c: case 0x26: case 0x27: case 0x35: case 0x36:
-            return at_lbr;
-        /* Silvermont */
-        case 0x37: case 0x4a: case 0x4d: case 0x5a: case 0x5d:
-        /* Xeon Phi Knights Landing */
-        case 0x57:
-        /* Xeon Phi Knights Mill */
-        case 0x85:
-        /* Airmont */
-        case 0x4c:
-            return sm_lbr;
-        /* Goldmont */
-        case 0x5c: case 0x5f:
-            return gm_lbr;
-        }
-        break;
-
-    case 15:
-        switch ( boot_cpu_data.x86_model )
-        {
-        /* Pentium4/Xeon with em64t */
-        case 3: case 4: case 6:
-            return p4_lbr;
-        }
-        break;
-    }
-
-    return NULL;
-}
-
 enum
 {
     LBR_FORMAT_32                 = 0x0, /* 32-bit record format */
@@ -3233,7 +3235,7 @@ static void __init ler_to_fixup_check(void)
 
 static int is_last_branch_msr(u32 ecx)
 {
-    const struct lbr_info *lbr = last_branch_msr_get();
+    const struct lbr_info *lbr = model_specific_lbr;
 
     if ( lbr == NULL )
         return 0;
@@ -3572,7 +3574,7 @@ static int cf_check vmx_msr_write_intercept(
         if ( !(v->arch.hvm.vmx.lbr_flags & LBR_MSRS_INSERTED) &&
              (msr_content & IA32_DEBUGCTLMSR_LBR) )
         {
-            const struct lbr_info *lbr = last_branch_msr_get();
+            const struct lbr_info *lbr = model_specific_lbr;
 
             if ( unlikely(!lbr) )
             {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:02:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:02:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473610.734302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pErmu-0001II-72; Mon, 09 Jan 2023 13:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473610.734302; Mon, 09 Jan 2023 13:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pErmu-0001IB-3i; Mon, 09 Jan 2023 13:02:04 +0000
Received: by outflank-mailman (input) for mailman id 473610;
 Mon, 09 Jan 2023 13:02:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pErms-0001I5-Kc
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 13:02:02 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce42a66c-901d-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 14:02:00 +0100 (CET)
Received: by mail-ed1-x52d.google.com with SMTP id 18so12376231edw.7
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 05:02:00 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 ch17-20020a0564021bd100b004585eba4baesm3707013edb.80.2023.01.09.05.01.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 05:01:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce42a66c-901d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=VL3jfDiNUYbqOELuZt5dZ3P1mshWkZMGXFgg0J+f4Qg=;
        b=YCHzlmY8elUGdxQDlNgEocdnRTVqJFqqPrQtJzJGjMd98YTvCpEbEwWUSaLlOTDNwX
         NhrKwwLyKwgvDjeHJqF1WBBO8jpo2gjDvnAJJjpNVxMMsvCgHYPqJ3f7uI3j2BGVG+IB
         4jHZmPyE4OX0j2MIaWlzb1WCs02fk7WvkFzr45dHjNTnyp/SmhM6jcb4FKJUjV7DuIkq
         SUsNAFyB20GGUdgnpmDkuomwLiBaEuIYoQyWmH74gxyPDjFmMsNjNEDJE9VYJZ+VuKBj
         SOSBNNRzxRkR6MfTCFKF/lTEkWyEVH9BhCQ6Qq1CZsvukzVXZOYuUSjfE7MB2RnQ095f
         j1sw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=VL3jfDiNUYbqOELuZt5dZ3P1mshWkZMGXFgg0J+f4Qg=;
        b=3EUf57WYj/YCwOZlvkMNpbEQ2T2AcOj8mso5hil3Hx6cc7KZ82iL9Fzw0f7a3gi2WU
         Y8ZNEkCiQyopCCOBwHoMy4nzidlpojKpWoEHRz8w8TNsaGjEed4OlpfXJtdTRf1wY21W
         oDFavHADv1CcVMNcuXe4CUseEbRSc4FJnFFJ1c9ZB21EXmNhLP8adtsYktRYJ6QkqRAp
         r1SF46NvQN7p4w3WYU1Jc02QyAu/psU/b4AtsseZLrzAFJIWtsglWlc/JDycXAH6LKvu
         Oe01eXVaZoGNQeYFrZrn/+b35xoxsKS1bFd0LdcqvERbnE2EawHjuUF1nEuGGMkqg0T0
         cWIw==
X-Gm-Message-State: AFqh2koN1kq+eTyZBLlbsklL4rjs0u0UQlCES02YQ0F4aNSmtzVIOts0
	ONcPh9ssE+ko8DQ+2XRltu8=
X-Google-Smtp-Source: AMrXdXsPP2m+OTQBykqALpdCEv2sU9yIgaVoNx24f2Dv27Y0z1oh6SHNAuEylft9mxjswOnYbylfWg==
X-Received: by 2002:a05:6402:2202:b0:48e:bb08:c96 with SMTP id cq2-20020a056402220200b0048ebb080c96mr20115152edb.28.1673269320300;
        Mon, 09 Jan 2023 05:02:00 -0800 (PST)
Message-ID: <7abe43df7db5a7ec969cca9ec360c0de6fbf523d.camel@gmail.com>
Subject: Re: [PATCH v1 4/8] xen/riscv: introduce sbi call to putchar to
 console
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Bobby Eshleman
	 <bobbyeshleman@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
 Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
 <gianluca@rivosinc.com>, Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
Date: Mon, 09 Jan 2023 15:01:57 +0200
In-Reply-To: <320cc1b3-f03f-ad87-205f-d3c5db446f7d@citrix.com>
References: <cover.1673009740.git.oleksii.kurochko@gmail.com>
	 <09da5a3184242152af6af060720a007738a55d6e.1673009740.git.oleksii.kurochko@gmail.com>
	 <Y6FUy/F0mbrvRP3e@bullseye>
	 <320cc1b3-f03f-ad87-205f-d3c5db446f7d@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-06 at 17:16 +0000, Andrew Cooper wrote:
> On 20/12/2022 6:23 am, Bobby Eshleman wrote:
> > On Fri, Jan 06, 2023 at 03:14:25PM +0200, Oleksii Kurochko wrote:
> > > The patch introduces the sbi_putchar() SBI call, which is necessary
> > > to implement the initial early_printk.
> > >
> > I think that it might be wise to start off with an alternative to
> > sbi_putchar() since it is already planned for deprecation. I
> > realize
> > that this will require rework, but it is almost guaranteed that
> > early_printk() will break on future SBI implementations if using
> > this
> > SBI call. IIRC, Xen/ARM's early printk looked like a reasonable
> > analogy
> > for how it could work on RISC-V, IMHO.
>
> Hmm, we're using it as a stopgap right now in CI so we can break
> "upstream RISC-V support" into manageable chunks.
>
> So perhaps we want to forgo plumbing sbi_putchar() into early_printk()
> (to avoid giving the impression that this will stay around in the
> future) and use sbi_putchar() directly for the hello world print.
>
> Next, we focus on getting the real uart driver working along with real
> printk (rather than focusing on exceptions, which was going to be the
> next task), and we can drop sbi_putchar() entirely.
>
> Thoughts?
>
I think it would be better to keep it as it is for now, add a TODO
comment ("TODO: rewrite early_printk() to use the UART directly, as
sbi_putchar() is already planned for deprecation"), and update the
commit message accordingly.
Once the real UART driver is ready, it will be easy to remove
sbi_putchar() and do something similar to the ARM early printk
implementation.
> ~Andrew
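For reference, the call being discussed boils down to a single ecall into the SBI firmware with the legacy "Console Putchar" extension ID (0x01) in a7, per the RISC-V SBI specification. A minimal sketch (not the actual patch; function names here are illustrative, and this assumes a freestanding RISC-V build):

```c
/*
 * Sketch of the legacy SBI console_putchar call.  Per the SBI spec,
 * the legacy "Console Putchar" extension has EID 0x01: the character
 * goes in a0, the extension ID in a7, and ecall traps to the SBI
 * firmware.  This is exactly the interface slated for deprecation in
 * favour of the Debug Console extension / a real UART driver.
 */
static void sbi_console_putchar(int ch)
{
    register unsigned long a0 asm("a0") = (unsigned long)ch;
    register unsigned long a7 asm("a7") = 0x01; /* legacy console_putchar EID */

    asm volatile ( "ecall"
                   : "+r" (a0)
                   : "r" (a7)
                   : "memory" );
}

/* A minimal early print helper would then just loop over the string: */
static void early_puts(const char *s)
{
    while ( *s )
        sbi_console_putchar(*s++);
}
```

This is why replacing it later is cheap: only the single-character sink changes when a real UART driver (or the newer SBI Debug Console extension) takes over.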



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:10:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:10:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473615.734313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pErv6-0002n2-1v; Mon, 09 Jan 2023 13:10:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473615.734313; Mon, 09 Jan 2023 13:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pErv5-0002mv-V3; Mon, 09 Jan 2023 13:10:31 +0000
Received: by outflank-mailman (input) for mailman id 473615;
 Mon, 09 Jan 2023 13:10:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pErv4-0002ml-Rv; Mon, 09 Jan 2023 13:10:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pErv4-0005sw-Pi; Mon, 09 Jan 2023 13:10:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pErv4-0003xz-Cf; Mon, 09 Jan 2023 13:10:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pErv4-0002GM-CF; Mon, 09 Jan 2023 13:10:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7UgZepBJVEQhjpUbIeMGhEmr0z8B9mwKmFT/rr1IwIU=; b=IZWV/ID/2QCC2o0vYGQWxs7l7G
	tyGly7jBeUa7LOTvaagb5HGuOEMqS8RVB/uSvg9MCX7yWkbGhZ7j/LZVttyFWp70wJOQGCTHOUPWo
	RnjBwKdaebKEZHhaQCZM44Ud4MIs8/jBNL9NlTMqzDD9B2hu9SBrq89C6vyOLBKQ8IwQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175637-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175637: FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-xl-credit2:host-install(5):broken:heisenbug
    qemu-mainline:test-arm64-arm64-xl:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=528d9f33cad5245c1099d77084c78bb2244d5143
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 13:10:30 +0000

flight 175637 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175637/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 175629

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-credit2  5 host-install(5) broken in 175629 pass in 175637
 test-arm64-arm64-xl          18 guest-start/debian.repeat  fail pass in 175629

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175623
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175623
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                528d9f33cad5245c1099d77084c78bb2244d5143
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    1 days
Testing same since   175627  2023-01-08 14:40:14 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-credit2 broken

Not pushing.

(No revision log; it would be 378 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:38:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:38:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473626.734324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsLp-0005NZ-M6; Mon, 09 Jan 2023 13:38:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473626.734324; Mon, 09 Jan 2023 13:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsLp-0005NS-JJ; Mon, 09 Jan 2023 13:38:09 +0000
Received: by outflank-mailman (input) for mailman id 473626;
 Mon, 09 Jan 2023 13:38:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEsLo-0005N6-7R
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 13:38:08 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2088.outbound.protection.outlook.com [40.107.22.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d8a57d8d-9022-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 14:38:05 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6936.eurprd04.prod.outlook.com (2603:10a6:20b:106::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Mon, 9 Jan
 2023 13:38:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 13:38:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8a57d8d-9022-11ed-b8d0-410ff93cb8f0
Message-ID: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
Date: Mon, 9 Jan 2023 14:38:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/5] x86/mm: address aspects noticed during XSA-410 work
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0128.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

Various tidying changes accumulated during that effort. As far as I can
tell they're all functionally independent, but there may be contextual
interactions between some of them.

v2 addresses review comments; doing so goes as far as introducing a new
patch (patch 2).

1: paging: fold most HAP and shadow final teardown
2: paging: drop set-allocation from final-teardown
3: paging: move update_paging_modes() hook
4: paging: move and conditionalize flush_tlb() hook
5: shadow: drop zero initialization from shadow_domain_init()

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:39:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:39:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473633.734335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsN4-0005zc-4Z; Mon, 09 Jan 2023 13:39:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473633.734335; Mon, 09 Jan 2023 13:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsN4-0005zV-1s; Mon, 09 Jan 2023 13:39:26 +0000
Received: by outflank-mailman (input) for mailman id 473633;
 Mon, 09 Jan 2023 13:39:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEsN3-0005zN-HI
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 13:39:25 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2055.outbound.protection.outlook.com [40.107.21.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06bc94a5-9023-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 14:39:23 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6936.eurprd04.prod.outlook.com (2603:10a6:20b:106::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Mon, 9 Jan
 2023 13:39:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 13:39:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06bc94a5-9023-11ed-b8d0-410ff93cb8f0
Message-ID: <67b9378f-cf4a-f210-aa2d-85af51c51ab0@suse.com>
Date: Mon, 9 Jan 2023 14:39:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 1/5] x86/paging: fold most HAP and shadow final teardown
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
In-Reply-To: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0208.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

HAP does a few things beyond what's common, which are left there at
least for now. Common operations, however, are moved to
paging_final_teardown(), allowing shadow_final_teardown() to go away.

While moving (and hence generalizing) the respective SHADOW_PRINTK()s,
drop the logging of total_pages from the 2nd instance - the value is
necessarily zero after {hap,shadow}_set_allocation() - and shorten the
messages, in part to account for PAGING_PRINTK() already logging
__func__.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The remaining parts of hap_final_teardown() could be moved as well, at
the price of a CONFIG_HVM conditional. I wasn't sure whether that was
deemed reasonable.
---
v2: Shorten PAGING_PRINTK() messages. Adjust comments while being
    moved.

--- a/xen/arch/x86/include/asm/shadow.h
+++ b/xen/arch/x86/include/asm/shadow.h
@@ -78,9 +78,6 @@ int shadow_domctl(struct domain *d,
 void shadow_vcpu_teardown(struct vcpu *v);
 void shadow_teardown(struct domain *d, bool *preempted);
 
-/* Call once all of the references to the domain have gone away */
-void shadow_final_teardown(struct domain *d);
-
 void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all);
 
 /* Adjust shadows ready for a guest page to change its type. */
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -268,8 +268,8 @@ static void hap_free(struct domain *d, m
 
     /*
      * For dying domains, actually free the memory here. This way less work is
-     * left to hap_final_teardown(), which cannot easily have preemption checks
-     * added.
+     * left to paging_final_teardown(), which cannot easily have preemption
+     * checks added.
      */
     if ( unlikely(d->is_dying) )
     {
@@ -552,18 +552,6 @@ void hap_final_teardown(struct domain *d
     for (i = 0; i < MAX_NESTEDP2M; i++) {
         p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
-
-    if ( d->arch.paging.total_pages != 0 )
-        hap_teardown(d, NULL);
-
-    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
-    /* Free any memory that the p2m teardown released */
-    paging_lock(d);
-    hap_set_allocation(d, 0, NULL);
-    ASSERT(d->arch.paging.p2m_pages == 0);
-    ASSERT(d->arch.paging.free_pages == 0);
-    ASSERT(d->arch.paging.total_pages == 0);
-    paging_unlock(d);
 }
 
 void hap_vcpu_teardown(struct vcpu *v)
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -842,10 +842,45 @@ int paging_teardown(struct domain *d)
 /* Call once all of the references to the domain have gone away */
 void paging_final_teardown(struct domain *d)
 {
-    if ( hap_enabled(d) )
+    bool hap = hap_enabled(d);
+
+    PAGING_PRINTK("%pd start: total = %u, free = %u, p2m = %u\n",
+                  d, d->arch.paging.total_pages,
+                  d->arch.paging.free_pages, d->arch.paging.p2m_pages);
+
+    if ( hap )
         hap_final_teardown(d);
+
+    /*
+     * Remove remaining paging memory.  This can be nonzero on certain error
+     * paths.
+     */
+    if ( d->arch.paging.total_pages )
+    {
+        if ( hap )
+            hap_teardown(d, NULL);
+        else
+            shadow_teardown(d, NULL);
+    }
+
+    /* It is now safe to pull down the p2m map. */
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
+
+    /* Free any paging memory that the p2m teardown released. */
+    paging_lock(d);
+
+    if ( hap )
+        hap_set_allocation(d, 0, NULL);
     else
-        shadow_final_teardown(d);
+        shadow_set_allocation(d, 0, NULL);
+
+    PAGING_PRINTK("%pd done: free = %u, p2m = %u\n",
+                  d, d->arch.paging.free_pages, d->arch.paging.p2m_pages);
+    ASSERT(!d->arch.paging.p2m_pages);
+    ASSERT(!d->arch.paging.free_pages);
+    ASSERT(!d->arch.paging.total_pages);
+
+    paging_unlock(d);
 
     p2m_final_teardown(d);
 }
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1194,7 +1194,7 @@ void shadow_free(struct domain *d, mfn_t
 
         /*
          * For dying domains, actually free the memory here. This way less
-         * work is left to shadow_final_teardown(), which cannot easily have
+         * work is left to paging_final_teardown(), which cannot easily have
          * preemption checks added.
          */
         if ( unlikely(dying) )
@@ -2898,35 +2898,6 @@ out:
     }
 }
 
-void shadow_final_teardown(struct domain *d)
-/* Called by arch_domain_destroy(), when it's safe to pull down the p2m map. */
-{
-    SHADOW_PRINTK("dom %u final teardown starts."
-                   "  Shadow pages total = %u, free = %u, p2m=%u\n",
-                   d->domain_id, d->arch.paging.total_pages,
-                   d->arch.paging.free_pages, d->arch.paging.p2m_pages);
-
-    /* Double-check that the domain didn't have any shadow memory.
-     * It is possible for a domain that never got domain_kill()ed
-     * to get here with its shadow allocation intact. */
-    if ( d->arch.paging.total_pages != 0 )
-        shadow_teardown(d, NULL);
-
-    /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
-    /* Free any shadow memory that the p2m teardown released */
-    paging_lock(d);
-    shadow_set_allocation(d, 0, NULL);
-    SHADOW_PRINTK("dom %u final teardown done."
-                   "  Shadow pages total = %u, free = %u, p2m=%u\n",
-                   d->domain_id, d->arch.paging.total_pages,
-                   d->arch.paging.free_pages, d->arch.paging.p2m_pages);
-    ASSERT(d->arch.paging.p2m_pages == 0);
-    ASSERT(d->arch.paging.free_pages == 0);
-    ASSERT(d->arch.paging.total_pages == 0);
-    paging_unlock(d);
-}
-
 static int shadow_one_bit_enable(struct domain *d, u32 mode)
 /* Turn on a single shadow mode feature */
 {



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:40:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:40:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473637.734346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsNg-0006yt-Ew; Mon, 09 Jan 2023 13:40:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473637.734346; Mon, 09 Jan 2023 13:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsNg-0006y3-Aj; Mon, 09 Jan 2023 13:40:04 +0000
Received: by outflank-mailman (input) for mailman id 473637;
 Mon, 09 Jan 2023 13:40:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEsNe-0006U7-QZ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 13:40:02 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2072.outbound.protection.outlook.com [40.107.22.72])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1d298941-9023-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 14:40:00 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8550.eurprd04.prod.outlook.com (2603:10a6:10:2d5::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 13:39:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 13:39:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d298941-9023-11ed-b8d0-410ff93cb8f0
Message-ID: <99270ce9-39d8-1f3e-f922-afc2c0289205@suse.com>
Date: Mon, 9 Jan 2023 14:39:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 2/5] x86/paging: drop set-allocation from final-teardown
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
In-Reply-To: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0198.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8550:EE_
X-MS-Office365-Filtering-Correlation-Id: f2aad4a9-77f8-42c4-650d-08daf246fd8b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f2aad4a9-77f8-42c4-650d-08daf246fd8b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 13:39:58.3309
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8550

The fixes for XSA-410 arranged for P2M pages freed by P2M code to be
freed directly, rather than being put back on the paging pool list.
Therefore whatever p2m_teardown() may return no longer needs taking
care of here. Drop the code, leaving the assertions in place and adding
"total" back to the PAGING_PRINTK() message.

With merely the (optional) log message and the assertions left, there
is no longer any point in holding the paging lock there, so drop that
too.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The remaining parts of hap_final_teardown() could be moved as well, at
the price of a CONFIG_HVM conditional. I wasn't sure whether that
would be deemed reasonable.
---
v2: New.

--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -866,22 +866,13 @@ void paging_final_teardown(struct domain
     /* It is now safe to pull down the p2m map. */
     p2m_teardown(p2m_get_hostp2m(d), true, NULL);
 
-    /* Free any paging memory that the p2m teardown released. */
-    paging_lock(d);
-
-    if ( hap )
-        hap_set_allocation(d, 0, NULL);
-    else
-        shadow_set_allocation(d, 0, NULL);
-
-    PAGING_PRINTK("%pd done: free = %u, p2m = %u\n",
-                  d, d->arch.paging.free_pages, d->arch.paging.p2m_pages);
+    PAGING_PRINTK("%pd done: total = %u, free = %u, p2m = %u\n",
+                  d, d->arch.paging.total_pages,
+                  d->arch.paging.free_pages, d->arch.paging.p2m_pages);
     ASSERT(!d->arch.paging.p2m_pages);
     ASSERT(!d->arch.paging.free_pages);
     ASSERT(!d->arch.paging.total_pages);
 
-    paging_unlock(d);
-
     p2m_final_teardown(d);
 }
 



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:40:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:40:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473645.734356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsOW-0007rv-OO; Mon, 09 Jan 2023 13:40:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473645.734356; Mon, 09 Jan 2023 13:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsOW-0007ro-Lc; Mon, 09 Jan 2023 13:40:56 +0000
Received: by outflank-mailman (input) for mailman id 473645;
 Mon, 09 Jan 2023 13:40:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEsOV-0007rg-5U
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 13:40:55 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2065.outbound.protection.outlook.com [40.107.22.65])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3c310292-9023-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 14:40:52 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8550.eurprd04.prod.outlook.com (2603:10a6:10:2d5::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 13:40:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 13:40:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c310292-9023-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <032e81c2-ac86-638a-1611-43bc3bea6d0e@suse.com>
Date: Mon, 9 Jan 2023 14:40:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 3/5] x86/paging: move update_paging_modes() hook
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
In-Reply-To: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0085.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8550:EE_
X-MS-Office365-Filtering-Correlation-Id: f09e2cb6-95b1-4992-45f2-08daf2471faf
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f09e2cb6-95b1-4992-45f2-08daf2471faf
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 13:40:51.5307
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8550

The hook isn't mode-dependent, hence it is misplaced in struct
paging_mode. (Put another way: if it belonged there, there would be no
reason for the alloc_page() and free_page() hooks not to live there as
well.) Move it to struct paging_domain.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Undo rename (plural -> singular). Add a comment in shadow/none.c.

--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -235,6 +235,8 @@ struct paging_domain {
      * (used by p2m and log-dirty code for their tries) */
     struct page_info * (*alloc_page)(struct domain *d);
     void (*free_page)(struct domain *d, struct page_info *pg);
+
+    void (*update_paging_modes)(struct vcpu *v);
 };
 
 struct paging_vcpu {
--- a/xen/arch/x86/include/asm/paging.h
+++ b/xen/arch/x86/include/asm/paging.h
@@ -140,7 +140,6 @@ struct paging_mode {
 #endif
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
-    void          (*update_paging_modes   )(struct vcpu *v);
     bool          (*flush_tlb             )(const unsigned long *vcpu_bitmap);
 
     unsigned int guest_levels;
@@ -316,7 +315,7 @@ static inline void paging_update_cr3(str
  * has changed, and when bringing up a VCPU for the first time. */
 static inline void paging_update_paging_modes(struct vcpu *v)
 {
-    paging_get_hostmode(v)->update_paging_modes(v);
+    v->domain->arch.paging.update_paging_modes(v);
 }
 
 #ifdef CONFIG_PV
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -443,6 +443,9 @@ static void hap_destroy_monitor_table(st
 /************************************************/
 /*          HAP DOMAIN LEVEL FUNCTIONS          */
 /************************************************/
+
+static void cf_check hap_update_paging_modes(struct vcpu *v);
+
 void hap_domain_init(struct domain *d)
 {
     static const struct log_dirty_ops hap_ops = {
@@ -453,6 +456,8 @@ void hap_domain_init(struct domain *d)
 
     /* Use HAP logdirty mechanism. */
     paging_log_dirty_init(d, &hap_ops);
+
+    d->arch.paging.update_paging_modes = hap_update_paging_modes;
 }
 
 /* return 0 for success, -errno for failure */
@@ -842,7 +847,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_real_mode,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_real_mode,
     .update_cr3             = hap_update_cr3,
-    .update_paging_modes    = hap_update_paging_modes,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 1
 };
@@ -853,7 +857,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_2_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_2_levels,
     .update_cr3             = hap_update_cr3,
-    .update_paging_modes    = hap_update_paging_modes,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 2
 };
@@ -864,7 +867,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_3_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_3_levels,
     .update_cr3             = hap_update_cr3,
-    .update_paging_modes    = hap_update_paging_modes,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 3
 };
@@ -875,7 +877,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_4_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_4_levels,
     .update_cr3             = hap_update_cr3,
-    .update_paging_modes    = hap_update_paging_modes,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 4
 };
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -45,6 +45,8 @@ static int cf_check sh_enable_log_dirty(
 static int cf_check sh_disable_log_dirty(struct domain *);
 static void cf_check sh_clean_dirty_bitmap(struct domain *);
 
+static void cf_check shadow_update_paging_modes(struct vcpu *);
+
 /* Set up the shadow-specific parts of a domain struct at start of day.
  * Called for every domain from arch_domain_create() */
 int shadow_domain_init(struct domain *d)
@@ -60,6 +62,8 @@ int shadow_domain_init(struct domain *d)
     /* Use shadow pagetables for log-dirty support */
     paging_log_dirty_init(d, &sh_ops);
 
+    d->arch.paging.update_paging_modes = shadow_update_paging_modes;
+
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
     d->arch.paging.shadow.oos_active = 0;
 #endif
@@ -2516,7 +2520,12 @@ static void sh_update_paging_modes(struc
     v->arch.paging.mode->update_cr3(v, 0, false);
 }
 
-void cf_check shadow_update_paging_modes(struct vcpu *v)
+/*
+ * Update all the things that are derived from the guest's CR0/CR3/CR4.
+ * Called to initialize paging structures if the paging mode has changed,
+ * and when bringing up a VCPU for the first time.
+ */
+static void cf_check shadow_update_paging_modes(struct vcpu *v)
 {
     paging_lock(v->domain);
     sh_update_paging_modes(v);
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4203,7 +4203,6 @@ const struct paging_mode sh_paging_mode
     .gva_to_gfn                    = sh_gva_to_gfn,
 #endif
     .update_cr3                    = sh_update_cr3,
-    .update_paging_modes           = shadow_update_paging_modes,
     .flush_tlb                     = shadow_flush_tlb,
     .guest_levels                  = GUEST_PAGING_LEVELS,
     .shadow.detach_old_tables      = sh_detach_old_tables,
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -18,8 +18,14 @@ static void cf_check _clean_dirty_bitmap
     ASSERT(is_pv_domain(d));
 }
 
+static void cf_check _update_paging_modes(struct vcpu *v)
+{
+    ASSERT_UNREACHABLE();
+}
+
 int shadow_domain_init(struct domain *d)
 {
+    /* For HVM set up pointers for safety, then fail. */
     static const struct log_dirty_ops sh_none_ops = {
         .enable  = _enable_log_dirty,
         .disable = _disable_log_dirty,
@@ -27,6 +33,9 @@ int shadow_domain_init(struct domain *d)
     };
 
     paging_log_dirty_init(d, &sh_none_ops);
+
+    d->arch.paging.update_paging_modes = _update_paging_modes;
+
     return is_hvm_domain(d) ? -EOPNOTSUPP : 0;
 }
 
@@ -57,11 +66,6 @@ static void cf_check _update_cr3(struct
     ASSERT_UNREACHABLE();
 }
 
-static void cf_check _update_paging_modes(struct vcpu *v)
-{
-    ASSERT_UNREACHABLE();
-}
-
 static const struct paging_mode sh_paging_none = {
     .page_fault                    = _page_fault,
     .invlpg                        = _invlpg,
@@ -69,7 +73,6 @@ static const struct paging_mode sh_pagin
     .gva_to_gfn                    = _gva_to_gfn,
 #endif
     .update_cr3                    = _update_cr3,
-    .update_paging_modes           = _update_paging_modes,
 };
 
 void shadow_vcpu_init(struct vcpu *v)
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -426,11 +426,6 @@ void cf_check sh_write_guest_entry(
 intpte_t cf_check sh_cmpxchg_guest_entry(
     struct vcpu *v, intpte_t *p, intpte_t old, intpte_t new, mfn_t gmfn);
 
-/* Update all the things that are derived from the guest's CR0/CR3/CR4.
- * Called to initialize paging structures if the paging mode
- * has changed, and when bringing up a VCPU for the first time. */
-void cf_check shadow_update_paging_modes(struct vcpu *v);
-
 /* Unhook the non-Xen mappings in this top-level shadow mfn.
  * With user_only == 1, unhooks only the user-mode mappings. */
 void shadow_unhook_mappings(struct domain *d, mfn_t smfn, int user_only);



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:41:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:41:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473650.734368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsP7-0008RK-4x; Mon, 09 Jan 2023 13:41:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473650.734368; Mon, 09 Jan 2023 13:41:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsP7-0008RD-1T; Mon, 09 Jan 2023 13:41:33 +0000
Received: by outflank-mailman (input) for mailman id 473650;
 Mon, 09 Jan 2023 13:41:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEsP4-0008Qk-WE
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 13:41:31 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2079.outbound.protection.outlook.com [40.107.20.79])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51afb927-9023-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 14:41:29 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8348.eurprd04.prod.outlook.com (2603:10a6:10:25c::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 13:41:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 13:41:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51afb927-9023-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cab2c132-130c-ce2c-de04-cf45143efa80@suse.com>
Date: Mon, 9 Jan 2023 14:41:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 4/5] x86/paging: move and conditionalize flush_tlb() hook
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
In-Reply-To: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0151.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8348:EE_
X-MS-Office365-Filtering-Correlation-Id: bfde1e5e-1d07-4bc2-69fd-08daf24734f4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bfde1e5e-1d07-4bc2-69fd-08daf24734f4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 13:41:27.2316
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8348

The hook isn't mode-dependent, hence it's misplaced in struct
paging_mode. (Or, alternatively, I see no reason why the alloc_page() and
free_page() hooks don't also live there.) Move it to struct
paging_domain.

The hook is also used for HVM guests only, so make the respective pieces
conditional upon CONFIG_HVM.

While there, also add __must_check to the hook declaration, as it's
imperative that callers deal with getting back "false".

While moving the shadow implementation, introduce a "curr" local
variable.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v2: Re-base over changes earlier in the series.

--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -237,6 +237,11 @@ struct paging_domain {
     void (*free_page)(struct domain *d, struct page_info *pg);
 
     void (*update_paging_modes)(struct vcpu *v);
+
+#ifdef CONFIG_HVM
+    /* Flush selected vCPUs TLBs.  NULL for all. */
+    bool __must_check (*flush_tlb)(const unsigned long *vcpu_bitmap);
+#endif
 };
 
 struct paging_vcpu {
--- a/xen/arch/x86/include/asm/paging.h
+++ b/xen/arch/x86/include/asm/paging.h
@@ -140,7 +140,6 @@ struct paging_mode {
 #endif
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
-    bool          (*flush_tlb             )(const unsigned long *vcpu_bitmap);
 
     unsigned int guest_levels;
 
@@ -300,6 +299,12 @@ static inline unsigned long paging_ga_to
         page_order);
 }
 
+/* Flush selected vCPUs TLBs.  NULL for all. */
+static inline bool paging_flush_tlb(const unsigned long *vcpu_bitmap)
+{
+    return current->domain->arch.paging.flush_tlb(vcpu_bitmap);
+}
+
 #endif /* CONFIG_HVM */
 
 /* Update all the things that are derived from the guest's CR3.
@@ -408,12 +413,6 @@ static always_inline unsigned int paging
     return bits;
 }
 
-/* Flush selected vCPUs TLBs.  NULL for all. */
-static inline bool paging_flush_tlb(const unsigned long *vcpu_bitmap)
-{
-    return paging_get_hostmode(current)->flush_tlb(vcpu_bitmap);
-}
-
 #endif /* XEN_PAGING_H */
 
 /*
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -445,6 +445,7 @@ static void hap_destroy_monitor_table(st
 /************************************************/
 
 static void cf_check hap_update_paging_modes(struct vcpu *v);
+static bool cf_check flush_tlb(const unsigned long *vcpu_bitmap);
 
 void hap_domain_init(struct domain *d)
 {
@@ -458,6 +459,7 @@ void hap_domain_init(struct domain *d)
     paging_log_dirty_init(d, &hap_ops);
 
     d->arch.paging.update_paging_modes = hap_update_paging_modes;
+    d->arch.paging.flush_tlb           = flush_tlb;
 }
 
 /* return 0 for success, -errno for failure */
@@ -847,7 +849,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_real_mode,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_real_mode,
     .update_cr3             = hap_update_cr3,
-    .flush_tlb              = flush_tlb,
     .guest_levels           = 1
 };
 
@@ -857,7 +858,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_2_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_2_levels,
     .update_cr3             = hap_update_cr3,
-    .flush_tlb              = flush_tlb,
     .guest_levels           = 2
 };
 
@@ -867,7 +867,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_3_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_3_levels,
     .update_cr3             = hap_update_cr3,
-    .flush_tlb              = flush_tlb,
     .guest_levels           = 3
 };
 
@@ -877,7 +876,6 @@ static const struct paging_mode hap_pagi
     .gva_to_gfn             = hap_gva_to_gfn_4_levels,
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_4_levels,
     .update_cr3             = hap_update_cr3,
-    .flush_tlb              = flush_tlb,
     .guest_levels           = 4
 };
 
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -68,6 +68,7 @@ int shadow_domain_init(struct domain *d)
     d->arch.paging.shadow.oos_active = 0;
 #endif
 #ifdef CONFIG_HVM
+    d->arch.paging.flush_tlb = shadow_flush_tlb;
     d->arch.paging.shadow.pagetable_dying_op = 0;
 #endif
 
@@ -3092,66 +3093,6 @@ static void cf_check sh_clean_dirty_bitm
     paging_unlock(d);
 }
 
-
-static bool flush_vcpu(const struct vcpu *v, const unsigned long *vcpu_bitmap)
-{
-    return !vcpu_bitmap || test_bit(v->vcpu_id, vcpu_bitmap);
-}
-
-/* Flush TLB of selected vCPUs.  NULL for all. */
-bool cf_check shadow_flush_tlb(const unsigned long *vcpu_bitmap)
-{
-    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
-    cpumask_t *mask = &this_cpu(flush_cpumask);
-    struct domain *d = current->domain;
-    struct vcpu *v;
-
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(v, vcpu_bitmap) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(v, vcpu_bitmap) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
-    cpumask_clear(mask);
-
-    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
-    for_each_vcpu ( d, v )
-    {
-        unsigned int cpu;
-
-        if ( !flush_vcpu(v, vcpu_bitmap) )
-            continue;
-
-        paging_update_cr3(v, false);
-
-        cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
-            __cpumask_set_cpu(cpu, mask);
-    }
-
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    guest_flush_tlb_mask(d, mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(v, vcpu_bitmap) )
-            vcpu_unpause(v);
-
-    return true;
-}
-
 /**************************************************************************/
 /* Shadow-control XEN_DOMCTL dispatcher */
 
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -719,6 +719,66 @@ static void sh_emulate_unmap_dest(struct
     atomic_inc(&v->domain->arch.paging.shadow.gtable_dirty_version);
 }
 
+static bool flush_vcpu(const struct vcpu *v, const unsigned long *vcpu_bitmap)
+{
+    return !vcpu_bitmap || test_bit(v->vcpu_id, vcpu_bitmap);
+}
+
+/* Flush TLB of selected vCPUs.  NULL for all. */
+bool cf_check shadow_flush_tlb(const unsigned long *vcpu_bitmap)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    const struct vcpu *curr = current;
+    struct domain *d = curr->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != curr && flush_vcpu(v, vcpu_bitmap) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != curr && flush_vcpu(v, vcpu_bitmap) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(v, vcpu_bitmap) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    guest_flush_tlb_mask(d, mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != curr && flush_vcpu(v, vcpu_bitmap) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
 {
     struct domain *d = v->domain;
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4203,7 +4203,6 @@ const struct paging_mode sh_paging_mode
     .gva_to_gfn                    = sh_gva_to_gfn,
 #endif
     .update_cr3                    = sh_update_cr3,
-    .flush_tlb                     = shadow_flush_tlb,
     .guest_levels                  = GUEST_PAGING_LEVELS,
     .shadow.detach_old_tables      = sh_detach_old_tables,
 #ifdef CONFIG_PV



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:41:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473652.734379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsPS-0000We-Ch; Mon, 09 Jan 2023 13:41:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473652.734379; Mon, 09 Jan 2023 13:41:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsPS-0000WX-9q; Mon, 09 Jan 2023 13:41:54 +0000
Received: by outflank-mailman (input) for mailman id 473652;
 Mon, 09 Jan 2023 13:41:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEsPR-0000WK-Tt
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 13:41:53 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2088.outbound.protection.outlook.com [40.107.20.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f5c314d-9023-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 14:41:52 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8348.eurprd04.prod.outlook.com (2603:10a6:10:25c::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 13:41:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 13:41:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f5c314d-9023-11ed-b8d0-410ff93cb8f0
Message-ID: <b25cd042-be18-189d-d53e-4ea8b28e2e57@suse.com>
Date: Mon, 9 Jan 2023 14:41:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 5/5] x86/shadow: drop zero initialization from
 shadow_domain_init()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
In-Reply-To: <882f700f-9d79-67d5-7e13-e42c3c79ba11@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0016.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

There's no need for this as struct domain starts out zero-filled.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v2: Re-base over changes earlier in the series.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -64,12 +64,8 @@ int shadow_domain_init(struct domain *d)
 
     d->arch.paging.update_paging_modes = shadow_update_paging_modes;
 
-#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
-    d->arch.paging.shadow.oos_active = 0;
-#endif
 #ifdef CONFIG_HVM
     d->arch.paging.flush_tlb = shadow_flush_tlb;
-    d->arch.paging.shadow.pagetable_dying_op = 0;
 #endif
 
     return 0;



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 13:53:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 13:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473664.734390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsav-0002G3-DT; Mon, 09 Jan 2023 13:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473664.734390; Mon, 09 Jan 2023 13:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEsav-0002Fw-AJ; Mon, 09 Jan 2023 13:53:45 +0000
Received: by outflank-mailman (input) for mailman id 473664;
 Mon, 09 Jan 2023 13:53:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEsau-0002Fq-Dt
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 13:53:44 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2083.outbound.protection.outlook.com [40.107.14.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06b0a9cb-9025-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 14:53:42 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7183.eurprd04.prod.outlook.com (2603:10a6:800:128::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 13:53:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 13:53:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06b0a9cb-9025-11ed-b8d0-410ff93cb8f0
Message-ID: <9b2a41ca-f801-7188-e961-b29f577b8d78@suse.com>
Date: Mon, 9 Jan 2023 14:53:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v7 2/4] x86/mm: Reject invalid cacheability in PV guests
 by default
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tim Deegan <tim@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1673123823.git.demi@invisiblethingslab.com>
 <eb9aff037aa9afe1a4a37661847e44d2316ad094.1673123823.git.demi@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <eb9aff037aa9afe1a4a37661847e44d2316ad094.1673123823.git.demi@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0159.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7183:EE_
X-MS-Office365-Filtering-Correlation-Id: 8c85d9ff-4a18-44ee-ddd4-08daf248e974
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	23cjNQZz7xzlHnSCtkZ6zY/GBy1qbAENCqe7n3KKAucGlbQqtAh0Boo2RaX7N/oV7EWM7aFpXnYiuzOTnaNfToyY8XIMPfjsKWLnIYV5mjoy9AOFW4+BnZ8727skM8vbDHLF1GZWlbzSAvpH+D7Uyp/P9IggXwHmwo/USPYncaKEW9bikinqVkfF+yReIzCN7i7m9UmRmsv9Iwe6/t570S/aw7y0/6f1+cIVFeiA0MuHwURlxAnQdYR0ALErv20PtOHXcIJoQ+CJkDlNk11wvzUcKL08gA5K7jBSybuoZWCzoa16mtvY5/tEBpHDJyAxrulR3Ran7mlB2+Id6apbMAhafQpqFRGJzMhlZrQouyPjsK3uy3nrtqSIfYvlfHOk6/pNniKebtlqxQ6vuVm4bzZjiL0XAVlJTM2mVrwlpjc3rTh+dcNztzt5kpx5oUTRXqYgQEWxc7rGhAZi6248LmnyFWbuncjOtdVdujoLmiSPgmutxTJpmsg4bmNZmMRXdtxPilLweA/vU4+WQT+nI6p/XRi0MkblBi8Hu9gP+FKwfTOGBC+IbsHe1iKxETV2B1Fct3bGeDidnbIiVNkXvuynBCx1vVMaIxy/YS/bKxCUTJ/JnbuxUb0qqpm9GswOBoMucIGgh8YabpmK5x21kOEFWmy5JmtbacDAAmF/wE+0ShaJb4hCYVfXBKCyV8zoj4dhkOdpZHMg09HTm+HFUDzlIL7NBlv+nKJC/D1Mq0k=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(136003)(396003)(366004)(39850400004)(376002)(346002)(451199015)(316002)(5660300002)(7416002)(26005)(6512007)(186003)(6486002)(478600001)(2616005)(31696002)(41300700001)(6916009)(54906003)(4326008)(66556008)(66946007)(66476007)(8676002)(8936002)(83380400001)(86362001)(36756003)(31686004)(53546011)(6506007)(38100700002)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 13:53:39.8257
 (UTC)

On 07.01.2023 23:07, Demi Marie Obenour wrote:
> Setting cacheability flags that are not ones specified by Xen is a bug
> in the guest.  By default, return -EINVAL if a guests attempts to do
> this.  The invalid-cacheability= Xen command-line flag allows the
> administrator to allow such attempts or to produce

Like in v6: Unfinished sentence?

> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> ---
> Changes since v6:
> - Make invalid-cacheability= a subflag of pv=.

While you did this, you've retained the standalone option, and documentation
also continues to describe that one instead of the new sub-option. You will
then also want to move where invalid_cacheability is defined, I think.

> @@ -1343,7 +1392,9 @@ static int promote_l1_table(struct page_info *page)
>          }
>          else
>          {
> -            switch ( ret = get_page_from_l1e(pl1e[i], d, d) )
> +            l1_pgentry_t l1e = pl1e[i];
> +
> +            switch ( ret = get_page_from_l1e(l1e, d, d) )
>              {
>              default:
>                  goto fail;

Stale (and now pointless) change?

> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -28,9 +28,21 @@ static int __init cf_check parse_pv(const char *s)
>      do {
>          ss = strchr(s, ',');
>          if ( !ss )
> -            ss = strchr(s, '\0');
> -
> -        if ( (val = parse_boolean("32", s, ss)) >= 0 )
> +            ss += strlen(s);
> +        if ( !strncmp("invalid-cacheability=", s,
> +                      sizeof("invalid-cacheability=") - 1) )
> +        {
> +            const char *p = s + (sizeof("invalid-cacheability=") - 1);
> +            if (ss - p == 5 && !memcmp(p, "allow", 5))

Nit: Blank line please between declaration(s) and statement(s).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 14:07:49 2023
Message-ID: <3792c32f-06a9-4fd0-9d7a-c02bb38aa739@suse.com>
Date: Mon, 9 Jan 2023 15:07:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v7 3/4] x86/mm: make code robust to future PAT changes
Content-Language: en-US
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Tim Deegan <tim@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1673123823.git.demi@invisiblethingslab.com>
 <89201c66b0261b2f5ee83e7672830317fde21dfa.1673123823.git.demi@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <89201c66b0261b2f5ee83e7672830317fde21dfa.1673123823.git.demi@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.01.2023 23:07, Demi Marie Obenour wrote:
> @@ -6412,6 +6414,100 @@ static void __init __maybe_unused build_assertions(void)
>       * using different PATs will not work.
>       */
>      BUILD_BUG_ON(XEN_MSR_PAT != 0x050100070406ULL);
> +
> +    /*
> +     * _PAGE_WB must be zero.  Linux PV guests assume that _PAGE_WB will be
> +     * zero, and indeed Linux has a BUILD_BUG_ON validating that their version
> +     * of _PAGE_WB *is* zero.  Furthermore, since _PAGE_WB is zero, it is quite
> +     * likely to be omitted from various parts of Xen, and indeed L1 PTE
> +     * validation code checks that ((l1f & PAGE_CACHE_ATTRs) == 0), not
> +     * ((l1f & PAGE_CACHE_ATTRs) == _PAGE_WB).
> +     */
> +    BUILD_BUG_ON(_PAGE_WB);
> +
> +    /* _PAGE_RSVD_1 must be less than _PAGE_RSVD_2 */
> +    BUILD_BUG_ON(_PAGE_RSVD_1 >= _PAGE_RSVD_2);
> +
> +#define PAT_ENTRY(v)                                                           \
> +    (BUILD_BUG_ON_ZERO(((v) < 0) || ((v) > 7)) +                               \
> +     (0xFF & (XEN_MSR_PAT >> (8 * (v)))))
> +
> +    /* Validate at compile-time that v is a valid value for a PAT entry */
> +#define CHECK_PAT_ENTRY_VALUE(v)                                               \
> +    BUILD_BUG_ON((v) > X86_NUM_MT || (v) == X86_MT_RSVD_2 ||                   \
> +                 (v) == X86_MT_RSVD_3)
> +
> +    /* Validate at compile-time that PAT entry v is valid */
> +#define CHECK_PAT_ENTRY(v) CHECK_PAT_ENTRY_VALUE(PAT_ENTRY(v))
> +
> +    /*
> +     * If one of these trips, the corresponding entry in XEN_MSR_PAT is invalid.
> +     * This would cause Xen to crash (with #GP) at startup.
> +     */
> +    CHECK_PAT_ENTRY(0);
> +    CHECK_PAT_ENTRY(1);
> +    CHECK_PAT_ENTRY(2);
> +    CHECK_PAT_ENTRY(3);
> +    CHECK_PAT_ENTRY(4);
> +    CHECK_PAT_ENTRY(5);
> +    CHECK_PAT_ENTRY(6);
> +    CHECK_PAT_ENTRY(7);
> +
> +    /* Macro version of pte_flags_to_cacheattr(), for use in BUILD_BUG_ON()s */
> +#define PTE_FLAGS_TO_CACHEATTR(pte_value)                                      \
> +    /* Check that the _PAGE_* macros only use bits from PAGE_CACHE_ATTRS */    \
> +    (BUILD_BUG_ON_ZERO(((pte_value) & PAGE_CACHE_ATTRS) != (pte_value)) |      \

Slightly cheaper as BUILD_BUG_ON_ZERO((pte_value) & ~PAGE_CACHE_ATTRS)?

> +     (((pte_value) & _PAGE_PAT) >> 5) |                                        \
> +     (((pte_value) & (_PAGE_PCD | _PAGE_PWT)) >> 3))
> +
> +    CHECK_PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(_PAGE_RSVD_1));
> +    CHECK_PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(_PAGE_RSVD_2));

What do these two check that the 8 instances above don't already check?

> +#define PAT_ENTRY_FROM_FLAGS(x) PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(x))
> +
> +    /* Validate at compile time that X does not duplicate a smaller PAT entry */
> +#define CHECK_DUPLICATE_ENTRY(x, y)                                            \
> +    BUILD_BUG_ON((x) >= (y) &&                                                 \
> +                 (PAT_ENTRY_FROM_FLAGS(x) == PAT_ENTRY_FROM_FLAGS(y)))

Imo nothing says that the reserved entries come last. I'm therefore not
convinced of the usefulness of the two uses of this macro.

> +    /* Check that a PAT-related _PAGE_* macro is correct */
> +#define CHECK_PAGE_VALUE(page_value) do {                                      \
> +    /* Check that the _PAGE_* macros only use bits from PAGE_CACHE_ATTRS */    \
> +    BUILD_BUG_ON(((_PAGE_ ## page_value) & PAGE_CACHE_ATTRS) !=                \
> +                 (_PAGE_ ## page_value));                                      \
> +    /* Check that the _PAGE_* are consistent with XEN_MSR_PAT */               \
> +    BUILD_BUG_ON(PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(_PAGE_ ## page_value)) !=    \
> +                 (X86_MT_ ## page_value));                                     \
> +    case _PAGE_ ## page_value:; /* ensure no duplicate values */               \

Wouldn't this better come first in the macro? The semicolon looks unnecessary
in any event.

> +    /*                                                                         \
> +     * Check that the _PAGE_* entries do not duplicate a smaller reserved      \
> +     * entry.                                                                  \
> +     */                                                                        \
> +    CHECK_DUPLICATE_ENTRY(_PAGE_ ## page_value, _PAGE_RSVD_1);                 \
> +    CHECK_DUPLICATE_ENTRY(_PAGE_ ## page_value, _PAGE_RSVD_2);                 \
> +    CHECK_PAT_ENTRY(PTE_FLAGS_TO_CACHEATTR(_PAGE_ ## page_value));             \
> +} while ( false )
> +
> +    /*
> +     * If one of these trips, the corresponding _PAGE_* macro is inconsistent
> +     * with XEN_MSR_PAT.  This would cause Xen to use incorrect cacheability
> +     * flags, with results that are unknown and possibly harmful.
> +     */
> +    switch (0) {

Nit: Style.

> +    CHECK_PAGE_VALUE(WT);
> +    CHECK_PAGE_VALUE(WB);
> +    CHECK_PAGE_VALUE(WC);
> +    CHECK_PAGE_VALUE(UC);
> +    CHECK_PAGE_VALUE(UCM);
> +    CHECK_PAGE_VALUE(WP);

All of these are lacking "break" and hence are liable to trigger static checker
warnings.

> +    case _PAGE_RSVD_1:
> +    case _PAGE_RSVD_2:
> +        break;
> +    }
> +#undef CHECK_PAT_ENTRY
> +#undef CHECK_PAT_ENTRY_VALUE
> +#undef CHECK_PAGE_VALUE
> +#undef PAGE_FLAGS_TO_CACHEATTR

PTE_FLAGS_TO_CACHEATTR?

> +#undef PAT_ENTRY

You also #define more than these 5 macros now (but as per above e.g.
CHECK_DUPLICATE_ENTRY() may go away again).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 14:26:35 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>, Michal Orzel <michal.orzel@amd.com>
CC: Wei Chen <Wei.Chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] xen/cppcheck: sort alphabetically cppcheck report
 entries
Date: Mon, 9 Jan 2023 14:26:09 +0000
Message-ID: <D4D6E4A3-691D-4D28-B912-26B12477E8BF@arm.com>
References: <20230106104108.14740-1-luca.fancellu@arm.com>
 <20230106104108.14740-2-luca.fancellu@arm.com>
 <6373383d-d6d3-3d92-b09e-6434c5b5d15b@amd.com>
 <af7610a2-11d6-48e2-6bf0-762525187612@suse.com>
In-Reply-To: <af7610a2-11d6-48e2-6bf0-762525187612@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US



> On 9 Jan 2023, at 11:41, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 09.01.2023 12:15, Michal Orzel wrote:
>> On 06/01/2023 11:41, Luca Fancellu wrote:
>>> Sort alphabetically cppcheck report entries when producing the text
>>> report, this will help comparing different reports and will group
>>> together findings from the same file.
>>>
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> xen/scripts/xen_analysis/cppcheck_report_utils.py | 2 ++
>>> 1 file changed, 2 insertions(+)
>>>
>>> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
>>> index 02440aefdfec..f02166ed9d19 100644
>>> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
>>> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
>>> @@ -104,6 +104,8 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>>>                 for path in strip_paths:
>>>                     text_report_content[i] = text_report_content[i].replace(
>>>                                                                 path + "/", "")
>>> +            # sort alphabetically the entries
>>> +            text_report_content.sort()
>>>             # Write the final text report
>>>             outfile.writelines(text_report_content)
>>>     except OSError as e:
>>> --
>>> 2.17.1
>>>
>>>
>>

Hi Michal, Jan,

>> Having the report sorted is certainly a good idea. I am just thinking whether it should be done
>> per file or per finding (e.g. rule). When fixing MISRA issues, best approach is to try to fix all
>> the issues for a given rule (i.e. a series fixing one rule) rather than all the issues in a file
>> from different rules. Having a report sorted per finding would make this process easier. We could
>> add a custom key to sort function to take the second element (after splitting with ':' separator)
>> which is the name of the finding to achieve this goal. Let me know your thoughts.
>
> +1 - sorting by file name wants to be the 2nd sorting criteria, i.e. only among
> all instances of the same finding.

Yes both suggestions make sense to me.

>
> Jan
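[Archive editor's note] The custom sort key discussed above could be sketched roughly as follows. This is an illustrative sketch only, not the actual follow-up patch; the report-line format shown is an assumption, not the exact cppcheck text-report format:

```python
# Sort the report primarily by the finding name (the second ':'-separated
# field, per Michal's suggestion) and only then by the whole line, so file
# names order entries within the same finding.
def finding_key(line):
    fields = line.split(":")
    finding = fields[1] if len(fields) > 1 else ""
    return (finding, line)

# Hypothetical report entries, in "file:finding: message" form.
text_report_content = [
    "mm.c:misra-c2012-8.4: declaration message\n",
    "io.c:misra-c2012-8.4: declaration message\n",
    "mm.c:misra-c2012-2.1: dead code message\n",
]
text_report_content.sort(key=finding_key)
# All misra-c2012-2.1 entries now precede the 8.4 ones, which are in turn
# ordered by file name.
```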



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 14:58:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 14:58:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473717.734440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEtay-0002Vj-8Z; Mon, 09 Jan 2023 14:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473717.734440; Mon, 09 Jan 2023 14:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEtay-0002Vc-5M; Mon, 09 Jan 2023 14:57:52 +0000
Received: by outflank-mailman (input) for mailman id 473717;
 Mon, 09 Jan 2023 14:57:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPNl=5G=citrix.com=prvs=36677a302=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pEtaw-0002VW-8M
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 14:57:50 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f9c29b50-902d-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 15:57:47 +0100 (CET)
Received: from mail-dm6nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 Jan 2023 09:57:44 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6979.namprd03.prod.outlook.com (2603:10b6:510:169::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 14:57:42 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 14:57:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9c29b50-902d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673276268;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=S/j8qZxd5fo4q8BxZa+uetjPoTsTEamnOrI9iHm6nnQ=;
  b=UCCgfsguBmdHqYKgEbvEBvcmkg6rqORHXlvhhjVH2COFcKOfnzlJ2OLD
   NMZH6VXnixvHiZYN2zvP2pchOgXG+uSroGz+RcYWMtdv2b+HC0JPGDJYD
   xCtHSfSVjVvsrujCt+AqVpbq6KQ+wSO5zI2LbD1oHCJgz3VdCLpmig8+A
   0=;
X-IronPort-RemoteIP: 104.47.59.170
X-IronPort-MID: 91806294
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,311,1665460800"; 
   d="scan'208";a="91806294"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gasRM56Dn463+ByYL/Vj063gtgAzJpLOiWJoN0xc46gh4vrXbzp9TlEdeODii+IfeKg9jx/oDYQzCNtfUdujH1Z06LzwAMnMJIdFQnZynWbrTOQWm4s3Wy+77gA7ljObke85SN9uhIOIr9qO7W1PH0Jpa/51P2ntygVKHYFPTF5RXHtkYAA/iVKVedqfSjg62NuktuXZDZx/k5/oKOs5bvTu7zY19v7iX8cm02yGCHH43OowsOEEEgWnWuoFhJWZtyxWMAgy+UWgObH/HkVSXieTg6r6u1sjKTco4PKR1GjL1oXpisGMdd1AQyqiUfhMhsbL3Kw8ExeegvwymCwsrQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=S/j8qZxd5fo4q8BxZa+uetjPoTsTEamnOrI9iHm6nnQ=;
 b=hhIA679/9WQdXy3t0NRzNPefOX41VjmU/4EN+C6+vU0nQm7Aw1+Vn8R/g7OtvfqWoI7/Rf5gPj5Joe1F3+BDoGTV6hks3gk3cLVX/dF8e363vTM0TbFZf/JcZdbmUXtPMIoPlJWkEkvx+SKe8MlY988573G4WrwBM8kWkLXmE8CjyeftKOO0UJMzuxMwmCIHwjSKsmsoK0LdNu4z+qNsW7CLk99VV9swZnbmy92Ry+sEc2QTlNM87mvQjZj6jIIiZTi6hsgME4HVkyArl8UMnBqvL9LyB5m4uI03HFLi9hEKaLyuFmbkL27o3qJLsiLC7rC/fh3FEs69ueL+KYVWoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S/j8qZxd5fo4q8BxZa+uetjPoTsTEamnOrI9iHm6nnQ=;
 b=kT8IO3h1Lha2LZWZVEUl4toqdParjrKAoUioteVGF/S8K0pTmNgipEbvl1XL0sNzVL/Hpy0B6wNDPFvL8pRSre4zcptOzIb/K0wpK51VyCSCBHS+q2qjzhh5DW16yj1CvJp4ruKXNg8oO/qzDFiOSmGJpR/U/n2ErTAMxh5NtsA=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/6] x86/prot-key: Split PKRU infrastructure out of
 asm/processor.h
Thread-Topic: [PATCH 2/6] x86/prot-key: Split PKRU infrastructure out of
 asm/processor.h
Thread-Index: AQHX8mMDkacqheDKokOTUkSgUg9d2qw81wgAglu5ogA=
Date: Mon, 9 Jan 2023 14:57:41 +0000
Message-ID: <b7c7d431-e7d5-9dd5-a33c-c61e53c42acc@citrix.com>
References: <20211216095421.12871-1-andrew.cooper3@citrix.com>
 <20211216095421.12871-3-andrew.cooper3@citrix.com>
 <427dc257-b318-de55-7126-0446264401f8@suse.com>
In-Reply-To: <427dc257-b318-de55-7126-0446264401f8@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|PH0PR03MB6979:EE_
x-ms-office365-filtering-correlation-id: d3716add-b3f4-4468-d33f-08daf251db40
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <C5FB6ADFC953CE40A07C7D81CC2BB5BB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d3716add-b3f4-4468-d33f-08daf251db40
X-MS-Exchange-CrossTenant-originalarrivaltime: 09 Jan 2023 14:57:41.0609
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: BSjV6k1FP4sfLF+3jePk5NkdaZy4o6di6NP/o6BO9f3wg3udcVeeX16R0ivZjsqr4J7Nwn4BqEqyQ+3S/LjhLrQI5YY/uUWdRAlyXGf8F4M=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6979

On 21/12/2021 11:28 am, Jan Beulich wrote:
> On 16.12.2021 10:54, Andrew Cooper wrote:
>> --- /dev/null
>> +++ b/xen/arch/x86/include/asm/prot-key.h
>> @@ -0,0 +1,45 @@
>> +/******************************************************************************
>> + * arch/x86/include/asm/spec_ctrl.h
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
>> + *
>> + * Copyright (c) 2021 Citrix Systems Ltd.
>> + */
>> +#ifndef ASM_PROT_KEY_H
>> +#define ASM_PROT_KEY_H
>> +
>> +#include <xen/types.h>
>> +
>> +#define PKEY_AD 1 /* Access Disable */
>> +#define PKEY_WD 2 /* Write Disable */
>> +
>> +#define PKEY_WIDTH 2 /* Two bits per protection key */
>> +
>> +static inline uint32_t rdpkru(void)
>> +{
>> +    uint32_t pkru;
> I agree this wants to be uint32_t (i.e. unlike the original function),
> but I don't see why the function's return type needs to be, the more
> that the sole caller also uses unsigned int for the variable to store
> the result in.

This is thinnest-possible wrapper around an instruction which
architecturally returns exactly 32 bits of data.

It is literally the example that CODING_STYLE uses to demonstrate when
fixed width types should be used.

>> --- a/xen/arch/x86/mm/guest_walk.c
>> +++ b/xen/arch/x86/mm/guest_walk.c
>> @@ -26,7 +26,9 @@
>>  #include <xen/paging.h>
>>  #include <xen/domain_page.h>
>>  #include <xen/sched.h>
>> +
>>  #include <asm/page.h>
>> +#include <asm/prot-key.h>
>>  #include <asm/guest_pt.h>
>>  #include <asm/hvm/emulate.h>
>>
>> @@ -413,10 +415,11 @@ guest_walk_tables(const struct vcpu *v, struct p2m_domain *p2m,
>>           guest_pku_enabled(v) )
>>      {
>>          unsigned int pkey = guest_l1e_get_pkey(gw->l1e);
>> -        unsigned int pkru = rdpkru();
>> +        unsigned int pkr = rdpkru();
>> +        unsigned int pk_ar = pkr >> (pkey * PKEY_WIDTH);
> This is correct only because below you only inspect the low two bits.
> Since I don't think masking off the upper bits is really useful here,
> I'd like to suggest to not call the variable "pk_ar". Perhaps
> something as generic as "temp"?

This variable being named pk_ar (or thereabouts) is very important for
clarity below.

I've masked them - seems the compiler is clever enough to undo that even
at -O1.

~Andrew
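[Archive editor's note] The PKRU layout this message discusses (two bits per protection key, Access Disable in the low bit and Write Disable in the high bit of each field) can be modelled in a few lines. This is an illustrative sketch, not Xen code:

```python
# Model of the PKRU bit layout: each protection key owns a 2-bit field.
PKEY_AD = 1      # Access Disable
PKEY_WD = 2      # Write Disable
PKEY_WIDTH = 2   # Two bits per protection key

def pkey_flags(pkru, pkey):
    # Shift the key's field down, then mask to the low two bits - the
    # masking step mentioned above, which a compiler can elide when only
    # those two bits are tested afterwards.
    pk_ar = (pkru >> (pkey * PKEY_WIDTH)) & (PKEY_AD | PKEY_WD)
    return bool(pk_ar & PKEY_AD), bool(pk_ar & PKEY_WD)

# Key 3's field occupies bits 6-7, so a write-disable-only setting for it
# is bit 7 of PKRU.
ad, wd = pkey_flags(1 << 7, 3)
```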


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:05:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:05:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473725.734451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEti9-00048P-5k; Mon, 09 Jan 2023 15:05:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473725.734451; Mon, 09 Jan 2023 15:05:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEti9-00048I-1y; Mon, 09 Jan 2023 15:05:17 +0000
Received: by outflank-mailman (input) for mailman id 473725;
 Mon, 09 Jan 2023 15:05:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CfuE=5G=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pEti7-00048B-53
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:05:15 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0499697b-902f-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 16:05:13 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D18E14680;
 Mon,  9 Jan 2023 15:05:12 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AA32C134AD;
 Mon,  9 Jan 2023 15:05:12 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id fK85KCgtvGMGQwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 09 Jan 2023 15:05:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0499697b-902f-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673276712; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=r5AchZgkjBMvT1rTdbO0/ZY4pyGy39ipbS0uUdkYcRY=;
	b=ALlw8jrDaBsAMeOsxe37gz788mxvsiIal84uC3vh8oCPb/RUOGaMrHPWS4u/XZ7YaTrojO
	/0vL7JTxDXcpiY4JVHHOHWSEoBC2rfvC7jt/VeA1fco6NMz+r9weMsFMOgQlN3Lxf7gIPR
	Wxpn6kaoTqLV/lIjIHJInWbL6UGGQMA=
Message-ID: <055adce8-ceba-983a-19cc-b09ec30bb3c3@suse.com>
Date: Mon, 9 Jan 2023 16:05:12 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 0/4] xen/blkback: some cleanups
Content-Language: en-US
To: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xenproject.org
References: <20221216145816.27374-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20221216145816.27374-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------Bn9el70qiXK9dBtQDHvkBxHs"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------Bn9el70qiXK9dBtQDHvkBxHs
Content-Type: multipart/mixed; boundary="------------5FtRzSy4gpSehjOHS0zh00HX";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xenproject.org
Message-ID: <055adce8-ceba-983a-19cc-b09ec30bb3c3@suse.com>
Subject: Re: [PATCH 0/4] xen/blkback: some cleanups
References: <20221216145816.27374-1-jgross@suse.com>
In-Reply-To: <20221216145816.27374-1-jgross@suse.com>

--------------5FtRzSy4gpSehjOHS0zh00HX
Content-Type: multipart/mixed; boundary="------------KfBHdy9WwI9ryEeh1hcoeXvF"

--------------KfBHdy9WwI9ryEeh1hcoeXvF
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

On 16.12.22 15:58, Juergen Gross wrote:
> Some small cleanup patches I had lying around for some time now.
>
> Juergen Gross (4):
>    xen/blkback: fix white space code style issues
>    xen/blkback: remove stale prototype
>    xen/blkback: simplify free_persistent_gnts() interface
>    xen/blkback: move blkif_get_x86_*_req() into blkback.c
>
>   drivers/block/xen-blkback/blkback.c | 126 +++++++++++++++++++++++++---
>   drivers/block/xen-blkback/common.h  | 103 +---------------------
>   2 files changed, 118 insertions(+), 111 deletions(-)
>

Ping?


Juergen
--------------KfBHdy9WwI9ryEeh1hcoeXvF
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------KfBHdy9WwI9ryEeh1hcoeXvF--

--------------5FtRzSy4gpSehjOHS0zh00HX--

--------------Bn9el70qiXK9dBtQDHvkBxHs
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO8LSgFAwAAAAAACgkQsN6d1ii/Ey8T
rgf/YPspae1H6xxWZbx8KKP2yS3WylDZF0YoQUBMlTxslcwhIiRltIckPFalUVdJclpwj8v2DyXK
gIqKYpP1LJvXxIwty70hNd7lsWtSUeG633XCeEmkm/UlxGtk901ELOiPC6Pz4kHOS2Q+24CumPyV
Cto1pzmm4K0PiXaAZyLLJ1B9GBpKnoOBJoL4H9Gxoq3lNr+FATjxbIQaPNS45hKx9qFhlVqAsDUX
s+KMdJQsyhXcIakEzwBiqpZ0A4DAP7I8zHuoMeELoPR7DpZB+rPh8MlobYvQfv/eR1hIRjlq8zeB
5UV+XYYQGFDp1Z2p4QHR4IerT9sRCNCqyiQ8/AlYjw==
=sGDi
-----END PGP SIGNATURE-----

--------------Bn9el70qiXK9dBtQDHvkBxHs--


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:09:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473731.734462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEtmA-0004lL-KX; Mon, 09 Jan 2023 15:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473731.734462; Mon, 09 Jan 2023 15:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEtmA-0004lE-HZ; Mon, 09 Jan 2023 15:09:26 +0000
Received: by outflank-mailman (input) for mailman id 473731;
 Mon, 09 Jan 2023 15:09:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CfuE=5G=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pEtm9-0004l8-R5
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:09:25 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9a20b14f-902f-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 16:09:24 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BF9EF34300;
 Mon,  9 Jan 2023 15:09:23 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 788C2134AD;
 Mon,  9 Jan 2023 15:09:23 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id diIQHCMuvGORRQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 09 Jan 2023 15:09:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a20b14f-902f-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673276963; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=bvld8zPnYDTnQIG1YqFCYb/KZpQyo3ITt9siNGCo5AU=;
	b=vFJ4c9PD8hUN1mARIMwxWF3d/NZ/pc8pC3MQlSRQRs0Sikg/ojs58lhAE4+cVRT9SEL5Lt
	UqCOuBfA+OZ0KtIVPDjs94DZnYPv5X6A8k7wQ1G8tbRh3i+f1FV1MsMlcHcXUyFptlc1T+
	M4AJqrOad/ZUJ+3AhGSlF0oacbYL6LM=
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org,
	x86@kernel.org
Cc: xen-devel@lists.xenproject.org,
	Juergen Gross <jgross@suse.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH] x86/mm: fix poking_init() for Xen PV guests
Date: Mon,  9 Jan 2023 16:09:22 +0100
Message-Id: <20230109150922.10578-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()") broke
the kernel when running as a Xen PV guest.

It seems the new address space is never activated before being used,
resulting in Xen refusing to accept the new CR3 value (the PGD isn't
pinned).

Fix that by adding the now missing call of paravirt_arch_dup_mmap() to
poking_init(). That call was previously made via dup_mm()->dup_mmap();
it is a NOP in all cases except Xen PV, where it pins the PGD.

Fixes: 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/mm/init.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index d3987359d441..5f8ba537d9d3 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -804,6 +804,9 @@ void __init poking_init(void)
 	poking_mm = mm_alloc();
 	BUG_ON(!poking_mm);
 
+	/* Xen PV guests need the PGD to be pinned. */
+	paravirt_arch_dup_mmap(NULL, poking_mm);
+
 	/*
 	 * Randomize the poking address, but make sure that the following page
 	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:42:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:42:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473737.734472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuI5-0000gy-Vk; Mon, 09 Jan 2023 15:42:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473737.734472; Mon, 09 Jan 2023 15:42:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuI5-0000gr-Ss; Mon, 09 Jan 2023 15:42:25 +0000
Received: by outflank-mailman (input) for mailman id 473737;
 Mon, 09 Jan 2023 15:42:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEuI5-0000gh-Fr; Mon, 09 Jan 2023 15:42:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEuI4-00011X-Vt; Mon, 09 Jan 2023 15:42:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEuI4-0007V1-JY; Mon, 09 Jan 2023 15:42:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEuI4-0002kk-J4; Mon, 09 Jan 2023 15:42:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jA0yvhez32OeCmtzp05Jy8HY2/RDxk7UhvH1Vikp3hA=; b=AgU8exHCApozfsWCmyLm1HbC/f
	/CGLL5AkGwiKFQkVBSg+AjSh96Pk7FiGsbnOaWFW/44HOyw7q6WWig0k7bi+pzH/CX9nJa/pqCBXG
	kgxzr8pTO8q1reIHWTbsmZAqUYqD2nvQSeYVBaTs7AXOuPoGHEcRIRcu7NRwqPnvnZNE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175643-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175643: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=3d83b78285d6e96636130f7d449fd02e2d4deee0
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 15:42:24 +0000

flight 175643 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175643/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                3d83b78285d6e96636130f7d449fd02e2d4deee0
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    1 days
Failing since        175627  2023-01-08 14:40:14 Z    1 days    4 attempts
Testing same since   175643  2023-01-09 13:40:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Greg Kurz <groug@kaod.org>
  Kai Huang <kai.huang@intel.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 799 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473746.734484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMa-0001Ox-MA; Mon, 09 Jan 2023 15:47:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473746.734484; Mon, 09 Jan 2023 15:47:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMa-0001Oq-JL; Mon, 09 Jan 2023 15:47:04 +0000
Received: by outflank-mailman (input) for mailman id 473746;
 Mon, 09 Jan 2023 15:47:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMZ-0001Ok-RX
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:03 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id db81c3f9-9034-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 16:47:01 +0100 (CET)
Received: by mail-ej1-x630.google.com with SMTP id az20so2095181ejc.1
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:01 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.46.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db81c3f9-9034-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=82ki1vMowwI70MnBQW3UbvNHNyHvE7BkIpy4+UwdStM=;
        b=P6CKvK20Bj18bBrI07gAlkAo+Yh+biv8P2wsY4XBiE17W9/VOgxQwXj4WfxZ8GgF9b
         jb5+vCJoOWc5JhSskjJ4rNOWFjyjdn0157mn6PM5ZJhrhCsPRkBR7mGK5rBky6YHu0zQ
         FDsbwqYM2NFzdrEKZTIRNzXE/k+WyGWCVbaWARwLtSl+hA1UqtkpCxJTDgIVJpbcN5do
         ew++PrS2RxFx89d2KNVKIirztg0CKZmpgVtlwv8iPr8Y8WmBAXarw7WPF9VrlwnJblY6
         xDl+zjil2q58/ROXxJcg61H5N5kpYC/tzQEFBiheLSto0T5Nt09iLb7IRbZQwF3kkfAe
         +UOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=82ki1vMowwI70MnBQW3UbvNHNyHvE7BkIpy4+UwdStM=;
        b=S2QxqoRYEkrTgpHaTsKHKYumuExZ+cmKOY6nVRajCji9L5PZguBOY+fs65ZlNo9sZ4
         d6XK2+51dYkQlce5mH8tSRWAkFHiWzOHUwDAKTsL6FjVV2bjfLrZO0Kyc5QbspqnW2j7
         hw8HEnrTsjjeHv+TXG3jxYcOpqQSo0ebl8MjbtkOrluz+C7BytkWEl6ePHXQTN3/swz7
         s7ycCHeUjpy3N46MePmG8GPd+x7zANR4VLbqKL9rnk4XL+EGqlRhjOZhAwwfEt8zgqei
         w2FdCufjPXNfZgXKfAlrwZM73WBQJfAp1uiW3yAlpNGsg4N5NZmauWot1aahgO1BrvjB
         BBKA==
X-Gm-Message-State: AFqh2kpz3VnOBglCDJFV052Pzu9+Fs/Y6lKzKKdBPB5AUMgy/T10jgXE
	Tnz5uDoPOSynMbsNdp5szh1QpBbPSMV/ZA==
X-Google-Smtp-Source: AMrXdXvU+k7sC2FPfjiMKXaK/O1B58hpm9yxbI09VnM1XbVfxplgRx4xurEdUsRiaY2jyMDcRpRgbQ==
X-Received: by 2002:a17:907:a782:b0:7c1:6430:e5d0 with SMTP id vx2-20020a170907a78200b007c16430e5d0mr57971201ejc.4.1673279221035;
        Mon, 09 Jan 2023 07:47:01 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 0/8] Basic early_printk and smoke test implementation
Date: Mon,  9 Jan 2023 17:46:47 +0200
Message-Id: <cover.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- the minimal set of headers and the changes needed inside them.
- the SBI (RISC-V Supervisor Binary Interface) support necessary for a
  basic early_printk implementation.
- the pieces needed to set up the stack.
- an early_printk() function that prints plain strings only.
- a RISC-V smoke test which checks that the "Hello from C env" message is
  present in serial.tmp.
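The string-only early_printk() described above can be sketched on top of a
putchar primitive. This is an illustrative host-runnable sketch, not the
series' code: the real backend is sbi_console_putchar(), and the
buffer-backed putchar_backend() and the out_buf/out_pos names here are
stand-ins invented so the sketch runs anywhere.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for sbi_console_putchar(): capture output into a buffer so the
 * sketch runs on a normal host instead of RISC-V S-mode. */
static char out_buf[64];
static unsigned int out_pos;

static void putchar_backend(int ch)
{
    if ( out_pos < sizeof(out_buf) - 1 )
        out_buf[out_pos++] = (char)ch;
}

/* Minimal early_printk: emit a plain string one character at a time,
 * with no format-specifier handling. */
static void early_printk(const char *s)
{
    while ( *s )
        putchar_backend(*s++);
}
```

The smoke test then only has to grep the captured serial output for the
expected string.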

---
Changes in V2:
    - Update the patches' commit messages according to the mailing
      list comments
    - Remove unneeded types in <asm/types.h>
    - Introduce a definition of STACK_SIZE
    - Order the files alphabetically in the Makefile
    - Add a license to early_printk.c
    - Add a RISCV_32 dependency to config EARLY_PRINTK in Kconfig.debug
    - Move the dockerfile changes to a separate config and send them as a
      separate patch to the mailing list
    - Update test.yaml to wire up the smoke test
---

Oleksii Kurochko (8):
  xen/riscv: introduce dummy asm/init.h
  xen/riscv: introduce asm/types.h header file
  xen/riscv: introduce stack stuff
  xen/riscv: introduce sbi call to putchar to console
  xen/include: include <asm/types.h> in <xen/early_printk.h>
  xen/riscv: introduce early_printk basic stuff
  xen/riscv: print hello message from C env
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 +++++++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 +++++++++++
 xen/arch/riscv/Kconfig.debug              |  7 ++++
 xen/arch/riscv/Makefile                   |  3 ++
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++
 xen/arch/riscv/include/asm/config.h       |  2 ++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++
 xen/arch/riscv/include/asm/init.h         | 12 +++++++
 xen/arch/riscv/include/asm/sbi.h          | 34 ++++++++++++++++++
 xen/arch/riscv/include/asm/types.h        | 22 ++++++++++++
 xen/arch/riscv/riscv64/head.S             |  6 +++-
 xen/arch/riscv/sbi.c                      | 44 +++++++++++++++++++++++
 xen/arch/riscv/setup.c                    | 18 ++++++++++
 xen/include/xen/early_printk.h            |  2 ++
 14 files changed, 234 insertions(+), 1 deletion(-)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h
 create mode 100644 xen/arch/riscv/include/asm/init.h
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/include/asm/types.h
 create mode 100644 xen/arch/riscv/sbi.c
 create mode 100644 xen/arch/riscv/setup.c

-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473747.734495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMb-0001eS-VJ; Mon, 09 Jan 2023 15:47:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473747.734495; Mon, 09 Jan 2023 15:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMb-0001eL-Ri; Mon, 09 Jan 2023 15:47:05 +0000
Received: by outflank-mailman (input) for mailman id 473747;
 Mon, 09 Jan 2023 15:47:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMa-0001Ok-VG
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:04 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dc623eab-9034-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 16:47:03 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id jo4so21168042ejb.7
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:03 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.47.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc623eab-9034-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=yxR8yQrVeJYFYtv18lIm8lC/4hVNKMWmQGoHK/HMNYg=;
        b=MbfkYVQ21dyKLh0n1Yq8pvnM6WE4dqWitMH1Q14NyKLDFl9bDMfQKTQDnwjA2JmXDc
         99p/3TMEdJ5o+TUivAORi3ew2+4FRY0D3NvOGwi1nUkMa402TjOowdFxlZgaqRnjz0dX
         AmASVl2pEpkya66dzeJ9Beua3rAIbKil93GWaoMsluzUMhkCjkehqXp4rrC1Q04RPciy
         kqweKCTzoRzNadmxtEK8rgOpoChkJssmd4GV6M8siXBJ1cgM9k+Fr6+mMkKhmNprBYs2
         9gy9fGkhqDMI35pcyw+Vx0KgzW3RddIg4yk7WbQk8ESfVbUalfBMvQC7SQNCuTZmqHlG
         Yxfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=yxR8yQrVeJYFYtv18lIm8lC/4hVNKMWmQGoHK/HMNYg=;
        b=JQnDK2sNoOv3d/OgIMwxQusymbKmCIUuI6df0MyKTXBz4lyI6/WMYoyNcjF1oKV4/M
         0kGGX5uviMHNIuJVJRtyOq7Xpv2CdnTYat7b615AV3ScyaR3vi1X+cgADWAKjvSTr+7q
         CV3mHzb7sn+Ro2OQKWA39Jkqz8m+HZIlayBeAjPj8UquNWbpQG15Trt1LALDlIuYQcIy
         clFoiScw06gv3pJnCAkWl6lXZSrdjhS6fJoROZfNzbu4jbq93JvX4I30QbTuuVj005Tw
         Tm9IyvsgP9vi6cbfg7vzZNgCzVdz5Y1cqI/u2m2nW9HC4Qb1cxdiVDDo9ZQza2o4HzJX
         +u1A==
X-Gm-Message-State: AFqh2koVgebcjP8fihLk/snYU2SvDj74AT5TTnWhYRdPDtvYmBeY4Poy
	uJRBo6AQsbWt5nY7/6rTGvS977EELL8c5A==
X-Google-Smtp-Source: AMrXdXvkuJjLqc1ipzuRLKNWMvua3yBg6XMcbn6YVmn6JREdqV4ci4d6f62zaxqkCK+t4sEfWHylHQ==
X-Received: by 2002:a17:907:a0cc:b0:78d:f455:b5fa with SMTP id hw12-20020a170907a0cc00b0078df455b5famr55538052ejc.58.1673279222751;
        Mon, 09 Jan 2023 07:47:02 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 1/8] xen/riscv: introduce dummy asm/init.h
Date: Mon,  9 Jan 2023 17:46:48 +0200
Message-Id: <b1585373e39a7cbe023f485aa5a04b093e25ec80.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673278109.git.oleksii.kurochko@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The following patches require <xen/init.h>, which includes
<asm/init.h>.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
    - Add an explanatory comment to the commit message
---
 xen/arch/riscv/include/asm/init.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/init.h

diff --git a/xen/arch/riscv/include/asm/init.h b/xen/arch/riscv/include/asm/init.h
new file mode 100644
index 0000000000..237ec25e4e
--- /dev/null
+++ b/xen/arch/riscv/include/asm/init.h
@@ -0,0 +1,12 @@
+#ifndef _XEN_ASM_INIT_H
+#define _XEN_ASM_INIT_H
+
+#endif /* _XEN_ASM_INIT_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473748.734506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMe-0001vT-8B; Mon, 09 Jan 2023 15:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473748.734506; Mon, 09 Jan 2023 15:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMe-0001vF-2z; Mon, 09 Jan 2023 15:47:08 +0000
Received: by outflank-mailman (input) for mailman id 473748;
 Mon, 09 Jan 2023 15:47:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMc-0001Ok-Jv
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:06 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dd5f96aa-9034-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 16:47:04 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id gh17so21186565ejb.6
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:05 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.47.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd5f96aa-9034-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MwR1sb54ILTyXGbtU2ZO9pB2xjozIDX5an4zwc8vv24=;
        b=L7QA222PU0VUvuvMGGBDEY5WIvlPUOE7OnFknuP1vJWZ+5DsSu77SARWSBKusBvSlF
         VU1828ahEjw5maLmBJNeq0NvYEOVv7cN4Yc+7Wpk6f5gCQHbI6zMLWtEMeM4f+rJbGUU
         GLADZG2QCj6M1aaCP4RMM8FZxZlEXBozg/y87Fk0OIpQ+4UqwJzzDbdpRU5LhHzLtbdR
         FnmLP4SWJWlwYO+OXKR/tPpoNmMDgyB7yJnD8osLu0y33H5iVrXt0XqvsDb1K0yyEwY1
         6KXhM4Yc5YJBeqRd+XxHwQq3GTR7VDRERVYx26NiJWfltrnzQXI9gMkolZimFpwHKb4I
         KRkg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=MwR1sb54ILTyXGbtU2ZO9pB2xjozIDX5an4zwc8vv24=;
        b=zwWgHIcYvFw6d94iT7lfj5qeUoj6dCvq+peAHTu/RK/9apxrWeYRsO7/esFgu+cK0E
         sPnfDz984IYcGoTZ0WcgEm6wVpcjpsCejPjDZxPWntcggaCalxRKASC45TGrToh4Vv/Q
         EX2QYDsGVKpmr4d6VenaDWcyau7cIf73qu0Q7U3vXMINb6w+kvxhwEkX7CisOktnjstD
         f2hdKwV5QCG0UI0OWK0xpTvekU6aryD4Ry++hNC8SAQKTas27m1BIpvuXmpV/Y+6OZ6l
         Nu4OIwjxq7dQO5i7tEjxTzA/Vl0Hq9D4ojaHKpAfPaNkoFqvvlm1bzB763mBaXwGTCKA
         0TbA==
X-Gm-Message-State: AFqh2koCt9f8BE/bFL1FWOCvDxWO5QxG7o3t2SqwbNxHJYmlMIY0MBJ/
	YNhFK22wgSBJEbMRClg8TBZ43rb9ozbo4w==
X-Google-Smtp-Source: AMrXdXvL4MYDqwCFCVKx5v3neXASMMTKKrCJr293/g3qz2bXcc55OuB2ZHUZv2Yb9p0KAYzdzrRApA==
X-Received: by 2002:a17:907:c018:b0:7c1:bb3:45e4 with SMTP id ss24-20020a170907c01800b007c10bb345e4mr55092176ejc.21.1673279224277;
        Mon, 09 Jan 2023 07:47:04 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 2/8] xen/riscv: introduce asm/types.h header file
Date: Mon,  9 Jan 2023 17:46:49 +0200
Message-Id: <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673278109.git.oleksii.kurochko@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
    - Remove now-unneeded types from <asm/types.h>
---
 xen/arch/riscv/include/asm/types.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/types.h

diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
new file mode 100644
index 0000000000..fbe352ef20
--- /dev/null
+++ b/xen/arch/riscv/include/asm/types.h
@@ -0,0 +1,22 @@
+#ifndef __RISCV_TYPES_H__
+#define __RISCV_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+#if defined(__SIZE_TYPE__)
+typedef __SIZE_TYPE__ size_t;
+#else
+typedef unsigned long size_t;
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_TYPES_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
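The header above prefers the compiler-provided __SIZE_TYPE__ and falls back
to unsigned long, which is correct for LP64 RISC-V targets. A small sketch
of the same logic (my_size_t is a hypothetical name used here so it does not
clash with the C library's size_t):

```c
#include <assert.h>
#include <stddef.h>

/* Same shape as the header: use __SIZE_TYPE__ when the compiler defines it
 * (GCC and Clang always do), otherwise fall back to unsigned long. */
#if defined(__SIZE_TYPE__)
typedef __SIZE_TYPE__ my_size_t;
#else
typedef unsigned long my_size_t;
#endif

/* On GCC/Clang, __SIZE_TYPE__ is by definition the type behind size_t,
 * so both branches agree with the C library on LP64 targets. */
_Static_assert(sizeof(my_size_t) == sizeof(size_t),
               "my_size_t must match the platform size_t");
```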
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473749.734517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMf-0002CT-G8; Mon, 09 Jan 2023 15:47:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473749.734517; Mon, 09 Jan 2023 15:47:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMf-0002CK-Bq; Mon, 09 Jan 2023 15:47:09 +0000
Received: by outflank-mailman (input) for mailman id 473749;
 Mon, 09 Jan 2023 15:47:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMd-0001Ok-W7
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:08 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id de4035cc-9034-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 16:47:05 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id gh17so21186770ejb.6
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:06 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.47.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de4035cc-9034-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=C4pVVB8xi310OysZICNFUHzxHNt1nf/1tEIawtEgWpc=;
        b=kNznxBbLQ/4OzsvHOcyh305lnTxgPvZLv3vD3O3+3t2rvt6aQlK2jR9FWI/YWgccMR
         VnOLnSUqE5mRKfrWnTx5LnR/uY5RTG19nJbjQfeRGpP0HroB+cNeuq8kvYvuzNEfB6Y7
         LZeBtMr4+OT/q2y9RcYnguf9AzjQTdSysXaxgLNDRiMHDFCmdoBu6CX3Wdx0bxeknvRh
         514Z27ruElHq0nZnD8Pjg8TwsRlmHZzzJq4Nz7TqjZixZRCcEg7ywvHVv1ofF6s/ImLU
         c+3L4nvMpi46mwBb/xzHI7WBKmzYuKFltG4gvld1iYtB+M7EE/5rMUt5glnSFpteonxa
         mjKA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=C4pVVB8xi310OysZICNFUHzxHNt1nf/1tEIawtEgWpc=;
        b=ypIV5VZkjbRbd6nY6erMjqjPBMsfvBzBnPeyzvti10vbBJLwmOk/zVd/Wg4KRLD9mG
         BCRvQ3Kuz1iDtcgvlpkfwWNzO3irgTAPT+1w+AkdQIJEV+fZRGha9z1z+pyGJTvI3buZ
         Lo1E1WgcfOB4NAmDV6UmrAMPWesXyIjIJ340S46tNwBMtew6rC2TDFNcR3HSbO5C1H7t
         1+CGFfWl3OGcN0NDT5s9wbFwGGjPRTi08LsNRtuxzLf529CCOM8c+6NJll9qipwm5bie
         hFYGPaXaQjCDOh/erx348I5nebmIgK3Ye6qv3nm1yTl+3nqyNfmE4wC7DFyPiEmcg0q0
         yXxg==
X-Gm-Message-State: AFqh2kpvM/1U/h//2TsdLUtO3mcgpfQ/4Zp5f+C/EtN+GnmNcsAHgPUO
	QueGCrR40cOxKUeXZ8x7QjBoSmYujhaIOg==
X-Google-Smtp-Source: AMrXdXt8ZBw2EErZkpU2Gr1NDyBk7nJqtVGI5o2yHEP4kiLKvjxrGf5Shm7rPDwNPBmK4Z4OloPCAQ==
X-Received: by 2002:a17:907:d604:b0:7ad:d62d:9d31 with SMTP id wd4-20020a170907d60400b007add62d9d31mr58787345ejc.67.1673279225854;
        Mon, 09 Jan 2023 07:47:05 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 3/8] xen/riscv: introduce stack stuff
Date: Mon,  9 Jan 2023 17:46:50 +0200
Message-Id: <b253e61bebbc029c94b89389d81643f9587200b7.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673278109.git.oleksii.kurochko@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces and sets up a boot stack in order to jump into the C
environment.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
    - Introduce a STACK_SIZE define.
    - Use consistent padding between instruction mnemonics and operands
---
 xen/arch/riscv/Makefile             | 1 +
 xen/arch/riscv/include/asm/config.h | 2 ++
 xen/arch/riscv/riscv64/head.S       | 8 +++++++-
 xen/arch/riscv/setup.c              | 6 ++++++
 4 files changed, 16 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/setup.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 248f2cbb3e..5a67a3f493 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
+obj-y += setup.o
 
 $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 0370f865f3..bdd2237f01 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -43,6 +43,8 @@
 
 #define SMP_CACHE_BYTES (1 << 6)
 
+#define STACK_SIZE (PAGE_SIZE)
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 990edb70a0..c1f33a1934 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,4 +1,10 @@
         .section .text.header, "ax", %progbits
 
 ENTRY(start)
-        j  start
+        la      sp, cpu0_boot_stack
+        li      t0, STACK_SIZE
+        add     sp, sp, t0
+
+_start_hang:
+        wfi
+        j       _start_hang
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
new file mode 100644
index 0000000000..41ef4912bd
--- /dev/null
+++ b/xen/arch/riscv/setup.c
@@ -0,0 +1,6 @@
+#include <xen/init.h>
+#include <xen/compile.h>
+
+/* Xen stack for bringing up the first CPU. */
+unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
+    __aligned(STACK_SIZE);
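The head.S hunk above loads the address of cpu0_boot_stack and adds
STACK_SIZE, so the initial sp points at the top of the boot stack. A
host-runnable sketch of that arithmetic, assuming STACK_SIZE is one 4 KiB
page (PAGE_SIZE here); boot_stack_top() is a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

#define STACK_SIZE 4096 /* assumption: PAGE_SIZE == 4 KiB */

/* Mirrors cpu0_boot_stack from setup.c: STACK_SIZE bytes, aligned to
 * STACK_SIZE so simple masking can recover the stack base. */
static unsigned char cpu0_boot_stack[STACK_SIZE]
    __attribute__((aligned(STACK_SIZE)));

/* What "la sp, cpu0_boot_stack; li t0, STACK_SIZE; add sp, sp, t0"
 * computes: the stack grows down from one past the end of the array. */
static uintptr_t boot_stack_top(void)
{
    return (uintptr_t)cpu0_boot_stack + STACK_SIZE;
}
```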
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473750.734528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMg-0002U6-Ss; Mon, 09 Jan 2023 15:47:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473750.734528; Mon, 09 Jan 2023 15:47:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMg-0002Tr-N2; Mon, 09 Jan 2023 15:47:10 +0000
Received: by outflank-mailman (input) for mailman id 473750;
 Mon, 09 Jan 2023 15:47:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMf-0001Ok-Td
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:10 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df406b08-9034-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 16:47:07 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id u19so21089343ejm.8
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:08 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.47.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df406b08-9034-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UWY8J8kAj1GiWYViPSk623jkw9JIRhe7Xct39qK260A=;
        b=faYE9gRMjOWUAZnlxR0n2k9ulExL3BU8gI18/Go4uLkUS4fR5scoMfT35S9KZX6uOZ
         HYcsprc8PFa524gIL0PuYzj83/O0v76JhM9H0K71qV4HBkB1VbcvsHS9CsBandHEWM6v
         h6RSj/hpZPc6kaZrGllx0TIOL5weII6SDKKXrdHybOsF0lkyVfqUnyMR+QA4cpiiNvAH
         jJykCOSQewTcwUzVCbX8SQHJeNvHvrZovH3LDTsOph02VLCdUgOFaLKIQ15PUsNNoKTf
         c4vgdZn2MqTMN0j/Th5xBT2e2TryjD1gKV9oV+Ni4u/sO3SC56gvlhVsd5lqKmhT3lyt
         Vngw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=UWY8J8kAj1GiWYViPSk623jkw9JIRhe7Xct39qK260A=;
        b=gXMB/0PkkuIUe3w+qUiNu0M81ModaJNGFYTQpRUVuFb9TO70Dgp0Ehz0w6XeOIIuOA
         AtiKyP8It5zziN7zdb69VmHkGBdUYg6L5ZQxPO/f7LqVsA5Ytd80QP6JQDZ0fhbvCvpr
         iwdoMdqj9LNiIzMndFwVt/JptU8GLWf7xpCFSIlXH1ZfuovW0dg2HxjvA5sruDmK/tcC
         hpU40rnLfKETMyPHiyiJMw8Q17WRZLnNgyhRWjIsgGvM1kVZbjPVRGu0OhM0FDx1EcqN
         uIA0HmoETt52I1e0d069Zs3Ckg9kt+pkML9hRPIHsiEqfeqIq1XjlWTVD1fcAolwZtmC
         Kvgg==
X-Gm-Message-State: AFqh2kodYfOoFdBZB3XrolJPVZdCVJFnOT3DXg+7mTP6otPUxQhe0r7h
	rH5i3cRBi17eSh0n1Y3AF7Rp+9UTmrO53w==
X-Google-Smtp-Source: AMrXdXs1yjbXpCwoz+bkJ3Hs45mkL4PCi7WcVeUtjcR0VS78Hau/5cELtha3tptBw3sIe+pxnYA3NA==
X-Received: by 2002:a17:907:d48e:b0:7ae:b2e4:7b3f with SMTP id vj14-20020a170907d48e00b007aeb2e47b3fmr60936186ejc.8.1673279227517;
        Mon, 09 Jan 2023 07:47:07 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 4/8] xen/riscv: introduce sbi call to putchar to console
Date: Mon,  9 Jan 2023 17:46:51 +0200
Message-Id: <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673278109.git.oleksii.kurochko@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the sbi_console_putchar() SBI call, which is necessary
to implement the initial early_printk.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
    - Add an explanatory comment about the sbi_console_putchar() function.
    - Order the files alphabetically in the Makefile
---
 xen/arch/riscv/Makefile          |  1 +
 xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
 xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++
 3 files changed, 79 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/sbi.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 5a67a3f493..fd916e1004 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
+obj-y += sbi.o
 obj-y += setup.o
 
 $(TARGET): $(TARGET)-syms
diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
new file mode 100644
index 0000000000..34b53f8eaf
--- /dev/null
+++ b/xen/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: (GPL-2.0-or-later) */
+/*
+ * Copyright (c) 2021 Vates SAS.
+ *
+ * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Taken/modified from Xvisor project with the following copyright:
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ */
+
+#ifndef __CPU_SBI_H__
+#define __CPU_SBI_H__
+
+#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
+
+struct sbiret {
+    long error;
+    long value;
+};
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
+        unsigned long arg1, unsigned long arg2,
+        unsigned long arg3, unsigned long arg4,
+        unsigned long arg5);
+
+/**
+ * Writes given character to the console device.
+ *
+ * @param ch The data to be written to the console.
+ */
+void sbi_console_putchar(int ch);
+
+#endif // __CPU_SBI_H__
diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
new file mode 100644
index 0000000000..67cf5dd982
--- /dev/null
+++ b/xen/arch/riscv/sbi.c
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Taken and modified from the xvisor project with the copyright Copyright (c)
+ * 2019 Western Digital Corporation or its affiliates and author Anup Patel
+ * (anup.patel@wdc.com).
+ *
+ * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2021 Vates SAS.
+ */
+
+#include <xen/errno.h>
+#include <asm/sbi.h>
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
+            unsigned long arg1, unsigned long arg2,
+            unsigned long arg3, unsigned long arg4,
+            unsigned long arg5)
+{
+    struct sbiret ret;
+    register unsigned long a0 asm ("a0") = arg0;
+    register unsigned long a1 asm ("a1") = arg1;
+    register unsigned long a2 asm ("a2") = arg2;
+    register unsigned long a3 asm ("a3") = arg3;
+    register unsigned long a4 asm ("a4") = arg4;
+    register unsigned long a5 asm ("a5") = arg5;
+    register unsigned long a6 asm ("a6") = fid;
+    register unsigned long a7 asm ("a7") = ext;
+
+    asm volatile ("ecall"
+              : "+r" (a0), "+r" (a1)
+              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
+              : "memory");
+    ret.error = a0;
+    ret.value = a1;
+
+    return ret;
+}
+
+void sbi_console_putchar(int ch)
+{
+    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
+}
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473751.734539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMj-0002om-7b; Mon, 09 Jan 2023 15:47:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473751.734539; Mon, 09 Jan 2023 15:47:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMj-0002od-4H; Mon, 09 Jan 2023 15:47:13 +0000
Received: by outflank-mailman (input) for mailman id 473751;
 Mon, 09 Jan 2023 15:47:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMh-0001Ok-E9
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:11 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e03e858f-9034-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 16:47:09 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id vm8so21174205ejc.2
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:09 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.47.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e03e858f-9034-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bfnD404WY8hoWXz7M6QxUrNZUgpeIm9YCOZjEZ67MZU=;
        b=kCxCCMt//j+fO82VIWosCsH70bh7+2l51ewWKYmVjIJfFuALEKC6RrRaFxKMsNOt4u
         mjuZaTPIjMiAaysnpjOfy8Y3sEkrHlR7Oam+CrJ6Axc9x0SVjGV/b/BbWsFHm9jLJk4C
         h6Jxz1ee4Er1T0sYNM76oBiJYz5GJcC/Oi/yn+WiE7TAayD57bkjlkipn8eTTdtA3Mda
         8jN+ffqM/G/pOQo7PkWw5pcIoyRztRVxBOyozQ6qQt9g8X0P57tELOcajQaAdFgG5NjN
         VRFtwC9G3baHMpe5qGiXbG9ibx0wWj++cniJ4OomX1iPIb3q9Fxrfg9y17pQQ/oMdDlt
         5ABg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bfnD404WY8hoWXz7M6QxUrNZUgpeIm9YCOZjEZ67MZU=;
        b=Km+BtszLjgFniMVsMY8mgkx2qZk6XqPEL4hokzUdgxOx1q8gJywT4w/Iqna9VWFDGF
         KHK2W1p7OubGUmpMLb13rxQjn/mire/1lmahExJFeMoMnambOAELyxRUPpEgbtYV9U9V
         CyBFNHjECFVtx5Y/HWQF6wem1Jf0nqIKp/hwK3Ng7SimXZjyn/hjPxl61UCnxbe13BMC
         C0oh0wqh+N5rf6+vBFeDtP2OR2aE1qxYea2SRUiblN1eoERjTD5nbfwIcjT2K+qD7wzn
         q0HxdHFfl8k+55ci2vTNLEiXkOddQuVKm2Js1thPsvzBKxfuJ1T6QqYf0JiDtBsM7qX7
         0BeQ==
X-Gm-Message-State: AFqh2kqTpZgHe36wmdhEVOCG2lrDJl2ZvlBUhxGYB6fFWRnceXW1yGJp
	G2mEOuHe6hEtzB6hfwMOt69alDR0yLjGWQ==
X-Google-Smtp-Source: AMrXdXuqaCYbURIHqb86DXSLX04rTZYHXx+kJoChkuf3VUW7/sB/PJn0ZlqOSpiYG/s00ciULywomQ==
X-Received: by 2002:a17:906:8447:b0:7c8:9f04:ae7e with SMTP id e7-20020a170906844700b007c89f04ae7emr55018487ejy.22.1673279229209;
        Mon, 09 Jan 2023 07:47:09 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v2 5/8] xen/include: include <asm/types.h> in <xen/early_printk.h>
Date: Mon,  9 Jan 2023 17:46:52 +0200
Message-Id: <3b292b680a02e2413ad6d9bd7c64bbe6a71e0d5b.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673278109.git.oleksii.kurochko@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

<asm/types.h> should be included because the second argument of
early_puts() has type 'size_t', which is defined in <asm/types.h>.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes in V2:
    - add Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/xen/early_printk.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/include/xen/early_printk.h b/xen/include/xen/early_printk.h
index 0f76c3a74f..abb34687da 100644
--- a/xen/include/xen/early_printk.h
+++ b/xen/include/xen/early_printk.h
@@ -4,6 +4,8 @@
 #ifndef __XEN_EARLY_PRINTK_H__
 #define __XEN_EARLY_PRINTK_H__
 
+#include <asm/types.h>
+
 #ifdef CONFIG_EARLY_PRINTK
 void early_puts(const char *s, size_t nr);
 #else
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473752.734550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMk-00037L-MO; Mon, 09 Jan 2023 15:47:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473752.734550; Mon, 09 Jan 2023 15:47:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMk-000373-Hs; Mon, 09 Jan 2023 15:47:14 +0000
Received: by outflank-mailman (input) for mailman id 473752;
 Mon, 09 Jan 2023 15:47:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMi-0002mL-IQ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:12 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e18a58d2-9034-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 16:47:11 +0100 (CET)
Received: by mail-ed1-x532.google.com with SMTP id s5so13089777edc.12
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:11 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.47.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e18a58d2-9034-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=a/b1TrvDNNnrrBRjP6bD1TxMqmwHjv5FMfqQgK0ByUg=;
        b=RHU8wslsnUIOk0uJuOjVkpeHjI2zvT+TuWTUPIQBT8VG0gCJcnsUUdUqL0zrx7BlRA
         Ja4DMX7g8dlejSPSJdZnwcpTPDsROkFouutpy9jfH0AUv9F7DWb7fWnUgiMOgnhcN+cA
         K4Hj+xck+8lKZu1dWSshADexdjJBAQTgOwlC/i1NBhQIi3kZ/o2eoH6DgZztxByv5P1w
         DHwTWlWoPPteVHZhczHDwcDpyOGl6QpRVZK6yopgpRQw/CTHxukX5xICpq7Or3ph5OqF
         KlAilXyN5qRL2hzgdmMicADniLRmtR5lKVR7E7UE+WCLqEsEL/siJoPu+75ViPSuTA2C
         INCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=a/b1TrvDNNnrrBRjP6bD1TxMqmwHjv5FMfqQgK0ByUg=;
        b=GfSymHcjSt7Xu7MTBR5v4/x7ncEsdBno0kGeSUMzEFRL48NkdOdmiYZPtHRuYpgu8K
         cW01tiXfzoLG3OV9ZtKrS5bUalqgFKxJkaB+IfrR5ckR6qiOEQc2/nRrmLpmWDPKOk3p
         qUkkvS4nITiZKZ7qCTLlGjWNf1BxKoFIqozYddFL6zBqSalDF0+uIst9nStmSAf90uZL
         EULUqJLAM4ghr2rPlUpzUe2L3S9XP3DRu3RkNxRI7VBuGvHyXpBOHDl/Nvf/ALJp/RT1
         SfpX1GhCuBJpJO5xaWfb1NrscgvLgRx8Z8diNfTfHtcDPPQKVkmwU1M7nLX3fTMlC4ga
         suIg==
X-Gm-Message-State: AFqh2koch8JGsQkwHPsAykOgv7SQ/imDE119Oqn7TxWmSFBk5VvsLuy1
	ALBWlZ9DDguDQelsB4AJkED5LruIh0dfAA==
X-Google-Smtp-Source: AMrXdXsMBfaLg7SGeLcx3dyX1xcH85/vA1FA9saSCXMCHzMRkNQd/aKJMn8Y/yKpnXpS2+KReFwA5Q==
X-Received: by 2002:a05:6402:1f89:b0:47b:16c7:492c with SMTP id c9-20020a0564021f8900b0047b16c7492cmr63677275edc.25.1673279230974;
        Mon, 09 Jan 2023 07:47:10 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v2 6/8] xen/riscv: introduce early_printk basic stuff
Date: Mon,  9 Jan 2023 17:46:53 +0200
Message-Id: <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673278109.git.oleksii.kurochko@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the basic early_printk functionality, which is
enough to print 'hello from C environment'.
early_printk() was changed in comparison with the original because
common code isn't being built yet, so there is no vscnprintf().

Because printk() relies on a serial driver (such as the ns16550
driver), and drivers require working virtual memory (ioremap()),
there is no print functionality early in Xen boot.

This commit adds an early printk implementation built on the SBI
console putchar call.

As sbi_console_putchar() is already planned for deprecation, it is
used only temporarily and will be removed or reworked once a real
UART driver is ready.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
    - add license to early_printk.c
    - add signed-off-by Bobby
    - add RISCV_32 to Kconfig.debug to EARLY_PRINTK config
    - update commit message
    - order the files alphabetically in Makefile
---
 xen/arch/riscv/Kconfig.debug              |  7 +++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
 4 files changed, 53 insertions(+)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..6ba0bd1e5a 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,7 @@
+config EARLY_PRINTK
+    bool "Enable early printk"
+    default DEBUG
+    depends on RISCV_64 || RISCV_32
+    help
+
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index fd916e1004..1a4f1a6015 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..88da5169ed
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/sbi.h>
+#include <asm/early_printk.h>
+
+/*
+ * TODO:
+ *   sbi_console_putchar is already planned for deprecation
+ *   so it should be reworked to use UART directly.
+ */
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if ( *s == '\n' )
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while ( *str )
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {}
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473753.734562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMm-0003PR-0m; Mon, 09 Jan 2023 15:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473753.734562; Mon, 09 Jan 2023 15:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMl-0003Or-QA; Mon, 09 Jan 2023 15:47:15 +0000
Received: by outflank-mailman (input) for mailman id 473753;
 Mon, 09 Jan 2023 15:47:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMk-0001Ok-UG
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:15 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e24e44e0-9034-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 16:47:12 +0100 (CET)
Received: by mail-ej1-x62e.google.com with SMTP id qk9so21190192ejc.3
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:13 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.47.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e24e44e0-9034-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=fIal1kdq10TTB4IAGmI/j0NwEpx7kAoI5BB7eBhw2c4=;
        b=X6IzhvtXWr5L9CiA8iGBX4kCz93hvBhLwnP8ZtqJPSMr76fMAiVYmZ3vg8QaDnCgvr
         bzwQrLc6TMtZ/rCzBEcamajk03RWeXoiecRRdaQr6BEVq3QVGvSmZ3MlC0e+BqMEXQ7D
         T3ddrQktmO/+w4+jQGLGio6IxcQVEM0hiT1o2QhyTMbxG3FPcUJvpUhCOiiiZ1nmG2zy
         bWsy+dUeI8em63dtTOkABTf/0UGzBUa4PbGPuqqMPH8LUaRugXDXzCjoX2PE26OrEKSj
         JJrl2u/i6PzGKJf7mUVVvJtbgD75uTEaqsktxMjEgNXnnnV3vZceIWP0OvdthFtZUK+2
         +LYA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=fIal1kdq10TTB4IAGmI/j0NwEpx7kAoI5BB7eBhw2c4=;
        b=7WHJDpAeOZNYZgFBgcO2Ydv7njRYvGPPJSxFo6KwTfzCSiRZkk5IJCd3XawluLW62k
         sQbZXDk3wSiJlz4G6XrwUhJY+TUVhFwo7fc6cygYAqy5aBBigi7oarrOYH7V7j0C65g0
         RtE+afdI2tqnGWvJz7lqx4jzy3XPB0WIOMm905b82uunhuiQ7xyAy71dz8lOobbOcYbF
         k6+cfQAQ+FbmPKIqXbf219Z8lyvccXKssLaUMbnFaTNCN40oWMm8VG5XHY1aFybdxhlR
         uz7IWUuhd/2hdG3TgvJ8e4oCn57RIKY5MpycTAFF5JN8pPM6/eJuQwre1vedPNSTgNnC
         34EA==
X-Gm-Message-State: AFqh2kppSZQifeejXVtBRYlcnubmeDriv9urXX878EWBIaldsb2QSY5K
	US5CYU3zDW8VkI759ReQt6EekzcLzDObng==
X-Google-Smtp-Source: AMrXdXsjwJwsKL7IqSx/UEmA78Tx6k+7Bgr/10WtTZ8nfNqziQbzWADBp4e3VxAIHznhJt2Cos4E5A==
X-Received: by 2002:a17:906:4892:b0:84d:489b:f1b1 with SMTP id v18-20020a170906489200b0084d489bf1b1mr3172007ejq.75.1673279232532;
        Mon, 09 Jan 2023 07:47:12 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 7/8] xen/riscv: print hello message from C env
Date: Mon,  9 Jan 2023 17:46:54 +0200
Message-Id: <837bb553a539713d4aa15bb169142018bf508afe.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673278109.git.oleksii.kurochko@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/riscv64/head.S |  4 +---
 xen/arch/riscv/setup.c        | 12 ++++++++++++
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index c1f33a1934..d444dd8aad 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -5,6 +5,4 @@ ENTRY(start)
         li      t0, STACK_SIZE
         add     sp, sp, t0
 
-_start_hang:
-        wfi
-        j       _start_hang
+        tail    start_xen
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 41ef4912bd..586060c7e5 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,6 +1,18 @@
 #include <xen/init.h>
 #include <xen/compile.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
+
+void __init noreturn start_xen(void)
+{
+    early_printk("Hello from C env\n");
+
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 15:47:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 15:47:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473754.734568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMm-0003Vk-If; Mon, 09 Jan 2023 15:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473754.734568; Mon, 09 Jan 2023 15:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuMm-0003Ut-7N; Mon, 09 Jan 2023 15:47:16 +0000
Received: by outflank-mailman (input) for mailman id 473754;
 Mon, 09 Jan 2023 15:47:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnCm=5G=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pEuMl-0002mL-C1
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 15:47:15 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e35c2200-9034-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 16:47:14 +0100 (CET)
Received: by mail-ej1-x62a.google.com with SMTP id tz12so21150288ejc.9
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 07:47:14 -0800 (PST)
Received: from 2a02.2378.102e.bce5.ip.kyivstar.net
 ([2a02:2378:102e:bce5:dfc0:9312:b994:6b21])
 by smtp.gmail.com with ESMTPSA id
 22-20020a170906311600b0082535e2da13sm3851561ejx.6.2023.01.09.07.47.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 07:47:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e35c2200-9034-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=h4Pz1P177MM0Pgd5YIF5gjJ+E47uWjB6mT+S1Msdqus=;
        b=mvHWZ/Po8AsyALfPQ85VY4fsnq1mIOxUKh2/XXSLd3JqbaJ92Olup/1digm91cv0gC
         dZd2m45lB+qnUO7N3+Gpcrwsi95TSD6CfQv+6Ep4f/qviDGTQatchaK9RnzN856UYvTM
         Xt3Mj9juPPGHHILnnAtGCP8COymH5VVhJ5vR+FaxO23lgxlb0ZhBbDxMpsGg+HQ8x8Jb
         9+eRg3CdQNLj+QUXl5SAPitFz6YudBRxknhEgxm+FQZ1VmDACgnkUpj2St8/xlCyAkNY
         KFyWSRmajDq/RxQ3gGmnBhn4XwCHuPqHZJppN41GxpyEbmKR7LPEYAlt3AAe4shjRGEU
         sTEQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=h4Pz1P177MM0Pgd5YIF5gjJ+E47uWjB6mT+S1Msdqus=;
        b=iI1MeWRHQbc1qwod58ewcEhSs6J8rq82oCNWKkw/ENJ2y6Rwoa3JyW31UTcaQxDXDy
         8AMCXrOXbHJUFoIWL0ozGf5enYlSxRHoRbLJN/MJkipxusm6Gp1nW21gZZ5tpcc7kIZd
         +gM8eCzorMTV1/DyV1A7edcfvUXmuR9FxZdOZ/shCOrAWderQUtKz9BE1ftOH92qTjLq
         Kp9TKSP5sciIywWKVhbxjiYP180GSANmdA2Q759+G5I2xnhDsrq1VE0cjkVEPBYaAXzV
         jgZ2TIJKqUSkFtJY03MfksGe9ChQ5rtGdCImvMaiDSn31dh8ByHujq47yF+3jkZvnuhp
         7XBg==
X-Gm-Message-State: AFqh2kp6LlHRyk3/FPa0cdH00+IrQKFw32TaWLSBt7SGCYcecRAA61sf
	/GZjhIEYnFE3mX97rx0lOPmMaie2U9kR6Q==
X-Google-Smtp-Source: AMrXdXuPy+TC91tuEc1UgylA0Yqd8RJGvEiIih4KUnWHYbKwDvuGdEfAw/R1yhKsNCju7XVZMT5xYg==
X-Received: by 2002:a17:907:c007:b0:7ad:f165:70c2 with SMTP id ss7-20020a170907c00700b007adf16570c2mr77178927ejc.27.1673279233938;
        Mon, 09 Jan 2023 07:47:13 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 8/8] automation: add RISC-V smoke test
Date: Mon,  9 Jan 2023 17:46:55 +0200
Message-Id: <494c2fd1e046de20c2fa24be3989cc6adde8fdbe.1673278109.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673278109.git.oleksii.kurochko@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the message 'Hello from C env' is present in the
log file, to be sure that the stack is set up and the C part of
early printk is working.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
    - Move changes in the dockerfile to separate patch and  send it to
      mailing list separately:
        [PATCH] automation: add qemu-system-riscv to riscv64.dockerfile
    - Update test.yaml to wire up smoke test
---
 automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index afd80adfe1..64f47a0ab9 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -54,6 +54,19 @@
   tags:
     - x86_64
 
+.qemu-riscv64:
+  extends: .test-jobs-common
+  variables:
+    CONTAINER: archlinux:riscv64
+    LOGFILE: qemu-smoke-riscv64.log
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - x86_64
+
 .yocto-test:
   extends: .test-jobs-common
   script:
@@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
   needs:
     - debian-unstable-clang-debug
 
+qemu-smoke-riscv64-gcc:
+  extends: .qemu-riscv64
+  script:
+    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - riscv64-cross-gcc
+
 # Yocto test jobs
 yocto-qemuarm64:
   extends: .yocto-test-arm64
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:00:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:00:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473803.734586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuZU-0000N8-JM; Mon, 09 Jan 2023 16:00:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473803.734586; Mon, 09 Jan 2023 16:00:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuZU-0000N1-Fp; Mon, 09 Jan 2023 16:00:24 +0000
Received: by outflank-mailman (input) for mailman id 473803;
 Mon, 09 Jan 2023 16:00:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEuZS-0000Js-Mu
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:00:22 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2077.outbound.protection.outlook.com [40.107.20.77])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b780baee-9036-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 17:00:20 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8077.eurprd04.prod.outlook.com (2603:10a6:102:1c3::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 16:00:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 16:00:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b780baee-9036-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f51LNE/+/DbmVlACFhsNpjEqZxev6I+1p86RVwLGorIztBLPtx9Rc7y54LOa3OvqQDdDpCpFZaJC9Axlqgidbu1r7HAelbvuWFEEuK249jjA/vrMLoi1MrI5iueSTTU6sHCNBJPvg1zZasPC+/Z2kwQ6xO3tPEvP24UV7xQM+ptI5M4EAkyT1rO5bUfo5C7V9TwgAD5YveAXZxwfhUBbK8c0Zt9/7det4Dib8Rn26XsSex7XG55d/Qqs7OLvCMZMmjJaAHQ2OytwqHRZSVMlT9zid6jr+IpZCM68zpesX3TNfJjNapufW5RoLjo/dbTnrhtpLzA4cADkBAvLio14GQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Qkg86uDjDm9pfxdD297SKw77Vr9JODE/mSN58hBFiZk=;
 b=DuLECOgZT7Sq26TqfS32tedJIy/0AJYaA5ltkclBGjQuXgrChVymGl94E2hINFfr/QOA+TFcO2eyAO1asCH6sAmCr6b4YJg4s8R5+sGoB1F+p8t1L1FhDCt0qEWWo+wIEtcXY1C8UHFnawIFymFySYC39DGZSA26ifv3dP/Pk2rxma9ll6J1YFOSDOFBapMJ/VtDZipUpAFdX63tTCakIoL4DUzqvyleO5vLDebgOCe9mZN+ke5xzoEMIMGrIwkEIDu8ZyzGpmBDs9w/+hUm+ot//mT/w/UCTobbPuQ1krxkjOXgDKmCaB4kN1g+gwE3gLfIIxIGPbGp/9e5q253LQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Qkg86uDjDm9pfxdD297SKw77Vr9JODE/mSN58hBFiZk=;
 b=fGndejE86VLIzmBonqZdeEizFlivV/fjEW0NFhBoH+GxZUycRNIOT4VSUc/iPMNFmYrjombGB8D5jOSLt8of79+QiKFeGLalSxbHs0W2OZMdrMAn3TilJaGTGKrY4KSHczdI96p7tkjldRkxyyVujoHtPzPw4LA5HdfOraIRiPQeRhmVkhSRlDJiIDq1+con5bJfNF7WNtNdMdQXpejUfe/jBgWvKjl0cMLJYvsHwMB6N6vWRLa+cIdn/93exWPHDO7UnmILLUAZ6pHVhKXiLqjyHR+NXojhqgYH0WZuYey/RTIxAx3ikPjtrfTmD3mjeO2G5HtaiEPCHwxJNnKVqQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e33d028f-b6c4-d120-5aa9-36c9f2d02420@suse.com>
Date: Mon, 9 Jan 2023 17:00:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0110.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8077:EE_
X-MS-Office365-Filtering-Correlation-Id: 7b7a78c8-7113-48fe-7ffb-08daf25a9a8d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6dX0rWGCZdWJYYSymrIE/Y0NfqEnMN0ucFB9OLmaWDWssfr8ag7OMs9AxbQpBcmc5TDUixtrF64C3d4t+a0vao9MTiHB54NU3SztyQaddgCf7TVS2lmO31Q8FEMBj8p3+aTI6fpFO7H2Na3ySHNHId6w+XP6ohG0V0Ig1bHpddqGnr39uGWLw4UOo/9Nzud3LLwjtk4l1ZMKOKCN0CKyWFTKsns3uyMtDxDZmUEN6xwDAKfsjNg9QNz9FoM08RhOTTENLhssuRyKU9IO7QnRTwDdDf3OgrVu+RkX+RkVoKwuJ2NvAVoia6omU+bE3ZilGFZ7yIAYg/lOqaVbfnNDChEKW0SQxCdm9tXd3G6cY3LlXz0curhwkgHL1vqjvm4RK7dM4G98/BILvx44jhsr7jbn/09MZr30RmKGhjFDr5CQnfbMqKUEtPQ/PnE4GcpfYN4gogvRsjO4MLMDrxXgXf9Ttc1B3m6WlQDfRh1nU0sPny7aCpIM49MDBghll63juJE7gmQ7sguA22lrfWSZpSw3zaZ1sIvj1THYTJj5LaRg9wnkwFl1OFL8H8h0ARq9sOH2LhO8krtU/A89J9E5foH9qmBWIAGJaf41R8ypECILQzTXx5MgL0sAzBy22g1jdV+JhvIcSjfQSdsQAswXRbbSXU49xUyu/qqCvTS1+OhlsBfnw/1HnwJ50+gg3VfD51L53D/fGgekcNSY0k1SuE22R7icDxEyxefKJRVDuCU=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(396003)(136003)(376002)(346002)(39860400002)(366004)(451199015)(8676002)(4326008)(6916009)(316002)(66946007)(66476007)(66556008)(54906003)(2906002)(5660300002)(8936002)(41300700001)(36756003)(31696002)(53546011)(6486002)(6506007)(478600001)(38100700002)(2616005)(186003)(26005)(6512007)(86362001)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Y2cybWk1MmVXU1FsZ1p4K2lRaDhTWFdmcnRuRkRnbGdubHVjSk55U1N0amtZ?=
 =?utf-8?B?SXZlTjN5UnFiRS9kOXh4MFBGbm1KSFcwZ3BEYzNGOEpyVE1IYmMyU1hPeis1?=
 =?utf-8?B?RmxuUWR2RllvSFNtbXdXV1kxOWIzd0hwTGduMExQZEVXaXhhZGFkTHhqWTNV?=
 =?utf-8?B?ZE1oVVBra0N6YXcrN3hHSVZFU3hzOU0yaUsyQ213N25LNEYrTytyelF0L2tF?=
 =?utf-8?B?bE4wYlVDOGJONW9zV3l6UVIzeDNGZDQxN2hZcGtUOUxENGdpUzZ5MXNPR1V5?=
 =?utf-8?B?Y0VjUFA0cSt4K0NxTXdNdjhPaEl6K1B4VU1FOERpN3ZYdWcxZWZBYjVwTzZW?=
 =?utf-8?B?T1Z4c2VaZFRQMG9peHc3TjZzWGF4M0tScmRUZGZsZzdxeXhEUFgydU83N2Zq?=
 =?utf-8?B?dHIwa0tLcFdEeUlkeUhLeHZCYnhaNzFhRFp3RUNDMHlIeTdjdk9aVmtDNk16?=
 =?utf-8?B?R1NGK0g0dE5BcEw4UEh4UVpaTkFHSUNzVDlDQnlGSlhMZ3RnMVFBMVZZQTgz?=
 =?utf-8?B?b0Z0UHZyOFo4SHBRRzRJVlRIZklHR0RYRnJBMHB2MFlQWWNSU29DYW9DdlVY?=
 =?utf-8?B?aWtVeWQ2TnVwYnc0dElkVEplZ2xYM3Vpbys5ZGhxYSt5WkVXWlA4eWs3Zmgw?=
 =?utf-8?B?Z21PTlZRck9HWFZVbEZ0M2xPQ0F5aDhPckJPck9STmNWUTB5ZWlVSit6MU16?=
 =?utf-8?B?Sm9IWjRQcGxoODZmZUJpUWVPK211TXdNbU5NNUt6M3hTL3plcVhsUkkwNko5?=
 =?utf-8?B?Q2NHSUg1aFpPdmtraDBvTW9xM2VzZmtrcW5yWkpFM1FJZ2FUNVNTVmJmdi9k?=
 =?utf-8?B?eDNwTkRIeWNIRGt2RG5GVVFSd29uQ09aVmhoTXVHUTR1UlVZTDRzNnYwMU5s?=
 =?utf-8?B?eXVBUkJGVFl1cDFwVG9rWXh3Vzk0eTM2UVZId3NZRnpKeTIzVURGcnVDOTNp?=
 =?utf-8?B?V1c5Y2ZKbVNiSUxFU05jWWdmbVpwWi9LZENvcmw3anRJNGJWQjZnb25QMzZp?=
 =?utf-8?B?a1RGaDV2T29FR3VOR0pYanN6L0Fob3lkVGZOeHVkRXhRRmRCQXRCYWhsUVMw?=
 =?utf-8?B?SGxuZTB3dHAwSWg3Mit6NjZlZ2JxT1g5S2dhdFZYVHlYaC81aXhxdXBIZnVO?=
 =?utf-8?B?dkRteXY2TEliM0lJSG5XYjVsS0h6eG41dWMwTHJjYVlGL3VLckUrWFRzUEhC?=
 =?utf-8?B?RW1KcTRvOFJrYnBBRTJUZk9xUmtVM1VBWWhLdmtZUFI4ZHRyMFFzMUYwaCtW?=
 =?utf-8?B?cUUrR1FOcnIrZWlSWDExa1NaWklhdUNSbTNTYmFFUCtjSmI3L2NVbmhFaUo5?=
 =?utf-8?B?VmVTdTB4dUM5bFZpdlFVNWZPLzNQRXVBd0xiVmxtUzlhME5MUjdzV1dtNFQr?=
 =?utf-8?B?OTk0QVZZVk82V0tadDRET1QwUm1EQ1oxa082SzdpTlJDZ1lpc05LOEtxbVUw?=
 =?utf-8?B?azU1Z2V4MlZxdGRxT1Z5cHZXbHdmVWw5ZVBtQ2YzOUR3SUF4TXcydEp5Q3dw?=
 =?utf-8?B?MmZGd2Z0ZmFxVFM3bVoyVFVFUUhxeGU5ZlZlVWUxS01UaTF3eW14UE1OWmh0?=
 =?utf-8?B?Q09Da1RsRThCWis5MHgweWFtOEFOZDU2R1JZbUQ0TGlWMFltRVFnTmc1WWhp?=
 =?utf-8?B?djBka1FHMlo0WVhqYmxmdWxIVFZreWhYTm5HSE9UOHBmc2tuZW9BUVlhaVov?=
 =?utf-8?B?eDRONnRrcUt3YTFlYUlNRFNlSmlnb2NySWhPSUFZT2gvRFRzTXZVWTY1Zkxt?=
 =?utf-8?B?MmRYaVJxWllUeWJIaThPTEg3cW5rdEROZU9FQWVDeHFsU1BJZ1FHZWE0Um1r?=
 =?utf-8?B?OStEeFpHckFsZVJrV3l3QXIrU1JMY21leHg1bE01eFlreTc3N3hsRGZVTDcx?=
 =?utf-8?B?MEJIcXNkN1NGenBDcyt5T1RvcVpFS0lYT05nVFBnZ1NmTUpQTHZhalRBa2ww?=
 =?utf-8?B?VFllM2p4cWc2bHRxTFRPaDhUMWVWTGRJTHVMZDd3QWF2emdmL2ZIeWhxeWJm?=
 =?utf-8?B?VDBWL2ZxcEVhR0ErOUVMRkcxeFpLUUw2WXVBdU10KzZrbW1vUkk2OUdUdlRm?=
 =?utf-8?B?MXJjWTJSVWZ6VTZPVndrRGZBSFRHaWZiZ1g4VzdoM1lhWnY0L2RVVXdiOTZI?=
 =?utf-8?Q?un9Jj2t/dZLs4zxeWm+YQ2Lj4?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7b7a78c8-7113-48fe-7ffb-08daf25a9a8d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 16:00:18.1851
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: txdMB7csWxGp+TjQD8WtPk1neVtl2FqoaVWEtc2T3kNlKBWQRUOpIa5Tu9qHVOVKkRrOoTqh2ieEEVR0CIgLlA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8077

On 09.01.2023 16:46, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/sbi.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> +/*
> + * Copyright (c) 2021 Vates SAS.
> + *
> + * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Taken/modified from Xvisor project with the following copyright:
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + */
> +
> +#ifndef __CPU_SBI_H__
> +#define __CPU_SBI_H__

Didn't you mean to change this?

> +#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
> +
> +struct sbiret {
> +    long error;
> +    long value;
> +};
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid, unsigned long arg0,
> +        unsigned long arg1, unsigned long arg2,
> +        unsigned long arg3, unsigned long arg4,
> +        unsigned long arg5);

Please get indentation right here as well as for the definition. Possible
forms are

struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
                        unsigned long arg0, unsigned long arg1,
                        unsigned long arg2, unsigned long arg3,
                        unsigned long arg4, unsigned long arg5);

or

struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
    unsigned long arg0, unsigned long arg1,
    unsigned long arg2, unsigned long arg3,
    unsigned long arg4, unsigned long arg5);

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:03:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:03:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473810.734597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuch-00017t-5B; Mon, 09 Jan 2023 16:03:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473810.734597; Mon, 09 Jan 2023 16:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuch-00017m-2D; Mon, 09 Jan 2023 16:03:43 +0000
Received: by outflank-mailman (input) for mailman id 473810;
 Mon, 09 Jan 2023 16:03:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEucg-00017g-0L
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:03:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEucf-00026D-JK; Mon, 09 Jan 2023 16:03:41 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.1.158]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEucf-0000hu-CG; Mon, 09 Jan 2023 16:03:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=r0hI2Lg1QMOkm1f5XcDf81indRAxlLTDU1kH8NhnuHc=; b=Eo4/j+cuX2ypI1r2XrbvSsfnBs
	s/ytfU3ft5EfOTpfO7usvNc+8m6n2A2HxYSHGbIPDAwyP/IHoPbO5CeTz6QAcO8XauYZy/2nXZCIE
	LKADJrTPaO5wBvsC5VXJ3tVZTxu/nA9gfUj0TqFz6SuObydnTNqTDAO+TYWwbGoTXth4=;
Message-ID: <7990322c-639b-38d4-ff6c-221988532c33@xen.org>
Date: Mon, 9 Jan 2023 16:03:38 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 09/01/2023 15:46, Oleksii Kurochko wrote:
> The patch introduce sbi_putchar() SBI call which is necessary
> to implement initial early_printk
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V2:
>      - add an explanatory comment about sbi_console_putchar() function.
>      - order the files alphabetically in Makefile
> ---
>   xen/arch/riscv/Makefile          |  1 +
>   xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>   xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++
>   3 files changed, 79 insertions(+)
>   create mode 100644 xen/arch/riscv/include/asm/sbi.h
>   create mode 100644 xen/arch/riscv/sbi.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 5a67a3f493..fd916e1004 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,4 +1,5 @@
>   obj-$(CONFIG_RISCV_64) += riscv64/
> +obj-y += sbi.o
>   obj-y += setup.o
>   
>   $(TARGET): $(TARGET)-syms
> diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
> new file mode 100644
> index 0000000000..34b53f8eaf
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/sbi.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> +/*
> + * Copyright (c) 2021 Vates SAS.
> + *
> + * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
Hmmm... I missed this one in v1. Is this mostly code from Bobby? If so, 
please update the commit message accordingly.

FAOD, this comment applies to any future code you take from anyone. I 
will try to remember to mention it, but please be proactive about 
checking/mentioning where the code comes from.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:05:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:05:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473815.734608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEueX-0001iE-H0; Mon, 09 Jan 2023 16:05:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473815.734608; Mon, 09 Jan 2023 16:05:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEueX-0001i7-E3; Mon, 09 Jan 2023 16:05:37 +0000
Received: by outflank-mailman (input) for mailman id 473815;
 Mon, 09 Jan 2023 16:05:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEueX-0001hx-0v; Mon, 09 Jan 2023 16:05:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEueW-00029Y-Vp; Mon, 09 Jan 2023 16:05:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEueW-00082M-H7; Mon, 09 Jan 2023 16:05:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEueW-0001fP-Gf; Mon, 09 Jan 2023 16:05:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hzMp4kf7ewhiebxZJhujzGBOUE6QnW3wQl+VrAby+2o=; b=jS04LdynFfKl9FzZzZjaX4fFNw
	K5COQZ5C6gN7CLpt0rTR/6DNf1JPDVK/4dVzefGxpgXGHqnqXIVnXthflafThAvf8CU3b++eJbZ3R
	UqKoGTCZqm5unTCnXFeQiyu0jQhVx26akMk4qDqqDzfQ7t0hy6vex7Ee+kntr0Aq+Mnk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175638-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175638: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-arm64-arm64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-arm64-arm64-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=ffd286ac6f25374d16f4eaa7ff64e30c77541b41
X-Osstest-Versions-That:
    libvirt=7050dad5f92010720cc8e8b7d5c37eaad7696c5e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 16:05:36 +0000

flight 175638 libvirt real [real]
flight 175645 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175638/
http://logs.test-lab.xenproject.org/osstest/logs/175645/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-qcow2 17 guest-start/debian.repeat fail pass in 175645-retest
 test-arm64-arm64-libvirt-raw 17 guest-start/debian.repeat fail pass in 175645-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175615
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175615
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175615
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              ffd286ac6f25374d16f4eaa7ff64e30c77541b41
baseline version:
 libvirt              7050dad5f92010720cc8e8b7d5c37eaad7696c5e

Last test of basis   175615  2023-01-07 04:18:53 Z    2 days
Testing same since   175638  2023-01-09 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiang Jiacheng <jiangjiacheng@huawei.com>
  Ján Tomko <jtomko@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   7050dad5f9..ffd286ac6f  ffd286ac6f25374d16f4eaa7ff64e30c77541b41 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:07:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:07:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473822.734618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEufz-0002Hq-S0; Mon, 09 Jan 2023 16:07:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473822.734618; Mon, 09 Jan 2023 16:07:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEufz-0002Hj-PR; Mon, 09 Jan 2023 16:07:07 +0000
Received: by outflank-mailman (input) for mailman id 473822;
 Mon, 09 Jan 2023 16:07:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEufz-0002Hd-Cm
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:07:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEufy-0002B3-RV; Mon, 09 Jan 2023 16:07:06 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.1.158]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEufy-0000ri-L0; Mon, 09 Jan 2023 16:07:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=h+ucAGSqSnprEn74b8uqbOL5dKAdNpX+IN8afJ1FO9c=; b=rQYsFLllnDmUOWkyS6M08PcLTT
	KiUlNT4fZvPc99ntVY93Siwn7yVKf4BaiLHhEUn5YxYM90N95lDOL8oRlNgPIbjq0hy9Qpqj3NEpM
	6nDXSs+Oq4CBxj/Mmzo0d8Wb2YvhPLTzaBzUCG8dh4uBpVJOlsyTZ8nbvptyJ2QObKA4=;
Message-ID: <215518b9-7ced-26bb-b64d-5cbae11342bd@xen.org>
Date: Mon, 9 Jan 2023 16:07:04 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/8] xen/riscv: introduce early_printk basic stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 09/01/2023 15:46, Oleksii Kurochko wrote:
> The patch introduces a basic stuff of early_printk functionality
> which will be enough to print 'hello from C environment".
> early_printk() function was changed in comparison with original as
> common isn't being built now so there is no vscnprintf.
> 
> Because printk() relies on a serial driver (like the ns16550 driver)
> and drivers require working virtual memory (ioremap()) there is not
> print functionality early in Xen boot.
> 
> This commit adds early printk implementation built on the putc SBI call.
> 
> As sbi_console_putchar() is being already planned for deprecation
> it is used temporary now and will be removed or reworked after
> real uart will be ready.
> 
> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>

 From the previous discussion, I was under the impression you agreed 
that the code was mainly written by Bobby. And indeed you listed him as 
the first Signed-off-by.

So you will also want to add a From: tag right at the top of the patch, 
so that when we commit it the author will be Bobby.

> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V2:
>      - add license to early_printk.c
>      - add signed-off-by Bobby
>      - add RISCV_32 to Kconfig.debug to EARLY_PRINTK config
>      - update commit message
>      - order the files alphabetically in Makefile
> ---
>   xen/arch/riscv/Kconfig.debug              |  7 +++++
>   xen/arch/riscv/Makefile                   |  1 +
>   xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
>   xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
>   4 files changed, 53 insertions(+)
>   create mode 100644 xen/arch/riscv/early_printk.c
>   create mode 100644 xen/arch/riscv/include/asm/early_printk.h
> 
> diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> index e69de29bb2..6ba0bd1e5a 100644
> --- a/xen/arch/riscv/Kconfig.debug
> +++ b/xen/arch/riscv/Kconfig.debug
> @@ -0,0 +1,7 @@
> +config EARLY_PRINTK
> +    bool "Enable early printk config"
> +    default DEBUG
> +    depends on RISCV_64 || RISCV_32
> +    help
> +
> +      Enables early printk debug messages
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index fd916e1004..1a4f1a6015 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,3 +1,4 @@
> +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>   obj-$(CONFIG_RISCV_64) += riscv64/
>   obj-y += sbi.o
>   obj-y += setup.o
> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
> new file mode 100644
> index 0000000000..88da5169ed
> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> + */
> +#include <asm/sbi.h>
> +#include <asm/early_printk.h>
> +
> +/*
> + * TODO:
> + *   sbi_console_putchar is already planned for deprication

s/deprication/deprecation/

> + *   so it should be reworked to use UART directly.
> +*/
> +void early_puts(const char *s, size_t nr)
> +{
> +    while ( nr-- > 0 )
> +    {
> +        if (*s == '\n')
> +            sbi_console_putchar('\r');
> +        sbi_console_putchar(*s);
> +        s++;
> +    }
> +}
> +
> +void early_printk(const char *str)
> +{
> +    while (*str)
> +    {
> +        early_puts(str, 1);
> +        str++;
> +    }
> +}
> diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
> new file mode 100644
> index 0000000000..05106e160d
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/early_printk.h
> @@ -0,0 +1,12 @@
> +#ifndef __EARLY_PRINTK_H__
> +#define __EARLY_PRINTK_H__
> +
> +#include <xen/early_printk.h>
> +
> +#ifdef CONFIG_EARLY_PRINTK
> +void early_printk(const char *str);
> +#else
> +static inline void early_printk(const char *s) {};
> +#endif
> +
> +#endif /* __EARLY_PRINTK_H__ */

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:11:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:11:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473830.734629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEukR-0003lx-DR; Mon, 09 Jan 2023 16:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473830.734629; Mon, 09 Jan 2023 16:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEukR-0003lq-Ab; Mon, 09 Jan 2023 16:11:43 +0000
Received: by outflank-mailman (input) for mailman id 473830;
 Mon, 09 Jan 2023 16:11:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pEukQ-0003li-AJ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:11:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEukO-0002GF-KN; Mon, 09 Jan 2023 16:11:40 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.1.158]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pEukO-0001C3-Du; Mon, 09 Jan 2023 16:11:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ZOnG/ZtCnORamGdA/bymsKSaCteFH1kKCRTB4TBz6Fc=; b=2iQODrpyy6jueEd77l8+s2TNcT
	DPoHF57QnXVobTXb9OQcl5VWpjvL9OGKadgsjrethnUXZOpUn2OjmdfPfTNws3Z3gY4oOcVZidYPN
	70k90J5qLSOT3WplbB+6+Buf3CtC1Xpz1yDuzyNmJK3m9IM4dw1T8U4OScRGTGbJUAvw=;
Message-ID: <e5ecc23f-4d4b-fa21-bd71-68cefcde644f@xen.org>
Date: Mon, 9 Jan 2023 16:11:38 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/8] xen/riscv: introduce stack stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <b253e61bebbc029c94b89389d81643f9587200b7.1673278109.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <b253e61bebbc029c94b89389d81643f9587200b7.1673278109.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 09/01/2023 15:46, Oleksii Kurochko wrote:
> The patch introduces and sets up a stack in order to go to a C environment.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

It looks like the comments from Andrew were missed.

> ---
> Changes in V2:
>      - introduce STACK_SIZE define.
>      - use consistent padding between instruction mnemonic and operand(s)
> ---
>   xen/arch/riscv/Makefile             | 1 +
>   xen/arch/riscv/include/asm/config.h | 2 ++
>   xen/arch/riscv/riscv64/head.S       | 8 +++++++-
>   xen/arch/riscv/setup.c              | 6 ++++++
>   4 files changed, 16 insertions(+), 1 deletion(-)
>   create mode 100644 xen/arch/riscv/setup.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 248f2cbb3e..5a67a3f493 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,4 +1,5 @@
>   obj-$(CONFIG_RISCV_64) += riscv64/
> +obj-y += setup.o
>   
>   $(TARGET): $(TARGET)-syms
>   	$(OBJCOPY) -O binary -S $< $@
> diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
> index 0370f865f3..bdd2237f01 100644
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -43,6 +43,8 @@
>   
>   #define SMP_CACHE_BYTES (1 << 6)
>   
> +#define STACK_SIZE (PAGE_SIZE)
> +
>   #endif /* __RISCV_CONFIG_H__ */
>   /*
>    * Local variables:
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> index 990edb70a0..c1f33a1934 100644
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,4 +1,10 @@
>           .section .text.header, "ax", %progbits
>   
>   ENTRY(start)
> -        j  start
> +        la      sp, cpu0_boot_stack
> +        li      t0, STACK_SIZE
> +        add     sp, sp, t0
> +
> +_start_hang:
> +        wfi
> +        j       _start_hang
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> new file mode 100644
> index 0000000000..41ef4912bd
> --- /dev/null
> +++ b/xen/arch/riscv/setup.c
> @@ -0,0 +1,6 @@
> +#include <xen/init.h>
> +#include <xen/compile.h>

Why do you need to include <xen/compile.h>?

In any case, please order the includes alphabetically. I haven't looked 
at the rest of the series, but please go through the series and check 
that generic comments (like this one) are addressed everywhere.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:14:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:14:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473838.734647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEunO-0004OR-Uk; Mon, 09 Jan 2023 16:14:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473838.734647; Mon, 09 Jan 2023 16:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEunO-0004OK-Rw; Mon, 09 Jan 2023 16:14:46 +0000
Received: by outflank-mailman (input) for mailman id 473838;
 Mon, 09 Jan 2023 16:14:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rQi4=5G=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pEunM-0004OE-NJ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:14:45 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b89d383d-9038-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 17:14:42 +0100 (CET)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id 7B72B5C01AC;
 Mon,  9 Jan 2023 11:14:40 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute6.internal (MEProxy); Mon, 09 Jan 2023 11:14:40 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 9 Jan 2023 11:14:39 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b89d383d-9038-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1673280880; x=
	1673367280; bh=1GGy41zJtcnIKoNV4zriEHhvQgYGglkS78ucKnWeKLk=; b=d
	xXuIip7xiWdi4SBSprYRK10+E0SDAP8/dxuyBD3V7HP3fz75FGDP/06FBGQDR3Dd
	681/t39T2kfUvgDqCnzB4JYUOxJGJJyPZ7jFI33ODFXFMXr5lYD43EK/5CcbiK/s
	XA5aR5iPEbI8n5tV8lanEyiz8+GG1NvdK1znQUwXA5vpYZh2ng0gB23g84j1bHDg
	8YI8xIapCq6Ur6QxW7+t/EVQtmrAOglBnNa7sW8VHAu+tIKu9QlCi+GIfeMp5lPp
	mjICH48JfJUxa3k3FcaiAb9+TjuGeuaYF/Nlr9q/H+g1m1foYDhz2QySwB038pz1
	sXfDLz1awrn6Cf422rlDQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm3; t=1673280880; x=1673367280; bh=1GGy41zJtcnIKoNV4zriEHhvQgYG
	glkS78ucKnWeKLk=; b=InLrfATdM2Jwxsd1QadJpxehdSzjAInH0f7vQBPu5qmx
	VsTs9lu3piNkPYHO8R4IQGfZojdaFrnTiIMjjKr25QvttPteQ0R+IId8L+ASnzJ9
	makmG32u3Rg1yD6UE6Q23LrR+JRThuJa66cMcVBoGDR8y/5ubPxZcjQ30UYug0s6
	UpIXoHAprly1Zw+msRU4cuCMAa8iVL6yV+udrUo1ohZQz19sqPPcIOdnVxS1pVTv
	2ckl3OuRlekZHsLTr0OFGz68gC6IIBcBueY2eor+DQOUHwLp2zpZ7jDrqPNIRCHI
	udF/dTa0UqAuwVHzb2kY52yWuOlBXkn5EpgQ/napdg==
Feedback-ID: iac594737:Fastmail
Date: Mon, 9 Jan 2023 11:14:28 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Tim Deegan <tim@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v7 4/4] x86: Allow using Linux's PAT
Message-ID: <Y7w9bQn295Bc3GJz@itl-email>
References: <cover.1673123823.git.demi@invisiblethingslab.com>
 <9fd0360dd914d93dab357d16b46b4290e6119d30.1673123823.git.demi@invisiblethingslab.com>
 <2b46602a-f518-c191-946d-3b343b46dc87@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="PuiekJm17OZlMUMK"
Content-Disposition: inline
In-Reply-To: <2b46602a-f518-c191-946d-3b343b46dc87@suse.com>


--PuiekJm17OZlMUMK
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 9 Jan 2023 11:14:28 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Tim Deegan <tim@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v7 4/4] x86: Allow using Linux's PAT

On Mon, Jan 09, 2023 at 12:37:34PM +0100, Jan Beulich wrote:
> On 07.01.2023 23:07, Demi Marie Obenour wrote:
> > --- a/xen/arch/x86/Kconfig
> > +++ b/xen/arch/x86/Kconfig
> > @@ -227,6 +227,39 @@ config XEN_ALIGN_2M
> >  
> >  endchoice
> >  
> > +config LINUX_PAT
> > +	bool "Use Linux's PAT instead of Xen's default"
> > +	help
> > +	  Use Linux's Page Attribute Table instead of the default Xen value.
> > +
> > +	  The Page Attribute Table (PAT) maps three bits in the page table entry
> > +	  to the actual cacheability used by the processor.  Many Intel
> > +	  integrated GPUs have errata (bugs) that cause CPU access to GPU memory
> > +	  to ignore the topmost bit.  When using Xen's default PAT, this results
> > +	  in caches not being flushed and incorrect images being displayed.  The
> > +	  default PAT used by Linux does not cause this problem.
> > +
> > +	  If you say Y here, you will be able to use Intel integrated GPUs that
> > +	  are attached to your Linux dom0 or other Linux PV guests.  However,
> > +	  you will not be able to use non-Linux OSs in dom0, and attaching a PCI
> > +	  device to a non-Linux PV guest will result in unpredictable guest
> > +	  behavior.  If you say N here, you will be able to use a non-Linux
> > +	  dom0, and will be able to attach PCI devices to non-Linux PV guests.
> > +
> > +	  Note that saving a PV guest with an assigned PCI device on a machine
> > +	  with one PAT and restoring it on a machine with a different PAT won't
> > +	  work: the resulting guest may boot and even appear to work, but caches
> > +	  will not be flushed when needed, with unpredictable results.  HVM
> > +	  (including PVH and PVHVM) guests and guests without assigned PCI
> > +	  devices do not care what PAT Xen uses, and migration (even live)
> > +	  between hypervisors with different PATs will work fine.  Guests using
> > +	  PV Shim care about the PAT used by the PV Shim firmware, not the
> > +	  host’s PAT.  Also, non-default PAT values are incompatible with the
> > +	  (deprecated) qemu-traditional stubdomain.
> > +
> > +	  Say Y if you are building a hypervisor for a Linux distribution that
> > +	  supports Intel iGPUs.  Say N otherwise.
> 
> I'm not convinced we want this; if other maintainers think differently,
> then I don't mean to stand in the way though. If so, however,
> - the above likely wants guarding by EXPERT and/or UNSUPPORTED

I considered this, but decided against it.  Recent Intel iGPUs are
simply incompatible with Xen’s default PAT, so anyone wanting to use Xen
in a desktop environment must say Y here.  Guarding this with EXPERT or
UNSUPPORTED will not prevent distribution maintainers from enabling it,
because the alternative is building a hypervisor that does not support
the hardware their users actually have.  Qubes OS is *already* shipping
a patch to use Linux’s PAT, so you don’t need to worry that this code
will go untested.  And if there was a vulnerability that requires
CONFIG_LINUX_PAT=y, I’d rather it not be dropped on Qubes users as a
0day.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--PuiekJm17OZlMUMK
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmO8PWwACgkQsoi1X/+c
IsEJgBAAyuVY6uST5cWxL/JvZwJFwUaxZpys3NuNGGKFciAwb5UF1UBCFcA9DNG5
cACYAq3oU9c2fwVrvz+Rbvsx3F/yLXcbnnJ+3dlnxV3aejcOget52JPwBjC+7RJP
HYZwuPk+mObqfNc8DZ2eyfEN3nNZT0dDiGs55EesFpGp46TAuMnt3ajX6GUEl3OT
+auxdHKpYScOiObKKz5mklGR0hHRET73NlKRrsJB+Z0J88XYX3m/3kksQyDr51aC
n7IaixuvkdQHVFwU0l8AlcPjLmv8mdbb34yg9mDVM4Vf4Kmj3qCAUY/tzCnNoPff
RiDjEpaHMft7GdHrbYbk+ArpgE4dtV3Snk19zvUPr5LtYmKUjuGsY1RuxNx5fwDo
W73nFH8bfzStX1/UYz5xSbmiuKG4XUsWJ7JUP19hFK6jrA9nm/WnBoCkleYmfKEd
Q1Ua0vhwbapH2h5UkWobuy043RaWiH4WP4JN6AsAYqPvGlj/NdrKrFRUNzMfmGtm
nziEetZAGZ0sPCpBTcprVGeOMwQitseQyV1OP6qmuyLIbn5P/VL8utzBDS15TUBL
OKVLrppiQxVbEUTsa7zBV3UJBnj6NQ27ucbMRRDMxx7FKEBOmkG9CRptiFu2k9R7
qG/z/klN2pKWuR37hrjp9pUdLAIhTofuHzLSc/ybfszgLBvlcm4=
=CX+6
-----END PGP SIGNATURE-----

--PuiekJm17OZlMUMK--
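To illustrate the Kconfig help text quoted above: the PAT slot for a mapping is selected by the PWT, PCD and PAT page-table bits, with PAT as the most significant. Under the iGPU erratum described there, the hardware drops that topmost bit, so slots 4-7 silently alias slots 0-3. A toy model of just the indexing (no claim is made here about which memory types sit in which slot):

```python
def pat_index(pwt: int, pcd: int, pat: int) -> int:
    """PAT slot selected by the three page-table bits (PAT is the MSB)."""
    return (pat << 2) | (pcd << 1) | pwt

def erratum_index(pwt: int, pcd: int, pat: int) -> int:
    """Model of the iGPU erratum: the topmost (PAT) bit is ignored."""
    return pat_index(pwt, pcd, 0)

# A mapping meant to use slot 7 is actually served from slot 3, so
# whatever memory type was programmed into slot 7 never takes effect
# on the buggy iGPU.
print(pat_index(1, 1, 1), erratum_index(1, 1, 1))
```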


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:20:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:20:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473846.734657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEusS-0005AB-KU; Mon, 09 Jan 2023 16:20:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473846.734657; Mon, 09 Jan 2023 16:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEusS-0005A4-Hn; Mon, 09 Jan 2023 16:20:00 +0000
Received: by outflank-mailman (input) for mailman id 473846;
 Mon, 09 Jan 2023 16:19:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEusR-00059y-1z
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:19:59 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2047.outbound.protection.outlook.com [40.107.249.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 757287c2-9039-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 17:19:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7651.eurprd04.prod.outlook.com (2603:10a6:20b:280::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 16:19:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 16:19:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 757287c2-9039-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+zO+SgpjnIs9Q1ArPETmSVByQZEhBHEBoCmzzLQ0/Es=;
 b=GfOzEc6S55qqxZvSvvVFZDQPWwDd3zxDQ3McgnqGEKnWMkXHvM+VwnJ+yERGpn3ZIOkEA0ueQwUcGq0WjO01cUVTD+pjQMD9XKTmeeukum3pZfRXD3mPiD8kl4vBNyvXIlXzY3GI+J/xYVKj6cE0YiUDBAWw9UhQI7+uHHboLaNBMD9pe30ZKX56AQm7AloYESL+vhzQv1fNqeTX0Puo4/A4XjPhxA1HhHzER6go5vpw4ov1qa+zE1+JySDq0dcLibIAodGtWzhdJR8FeL1DV/R+/Ahhcy58fHlbu5gCcnnLKfgD77pWCxJY7+08hwcbipdQKKad52C/6ytOCm6XSw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f83a68a4-69bb-a5b9-5113-14766dc15403@suse.com>
Date: Mon, 9 Jan 2023 17:19:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/8] xen/riscv: introduce stack stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <b253e61bebbc029c94b89389d81643f9587200b7.1673278109.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b253e61bebbc029c94b89389d81643f9587200b7.1673278109.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0047.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB7651:EE_
X-MS-Office365-Filtering-Correlation-Id: 390b73f8-e9bd-4816-c87f-08daf25d58df
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 390b73f8-e9bd-4816-c87f-08daf25d58df
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 16:19:56.5631
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QhTlM6hHPGLJokD2xBSei21hInNYHsIP8cpRe22aCIyIU35npjj53v8FQaLeNA6DjHlqZ27kCAwdDovxmVqUSw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB7651

On 09.01.2023 16:46, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -43,6 +43,8 @@
>  
>  #define SMP_CACHE_BYTES (1 << 6)
>  
> +#define STACK_SIZE (PAGE_SIZE)

Btw, nit: No need for parentheses here.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:21:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:21:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473852.734669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuta-0006US-Um; Mon, 09 Jan 2023 16:21:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473852.734669; Mon, 09 Jan 2023 16:21:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuta-0006UL-S0; Mon, 09 Jan 2023 16:21:10 +0000
Received: by outflank-mailman (input) for mailman id 473852;
 Mon, 09 Jan 2023 16:21:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7HIp=5G=tibco.com=sdyasli@srs-se1.protection.inumbo.net>)
 id 1pEuta-0006Qp-AJ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:21:10 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9b8143d0-9039-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 17:21:01 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id hw16so9557889ejc.10
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 08:21:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b8143d0-9039-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=mxgcZWNI8GkETKKVvGw9joJznCW2xAb+qxghzX2Za40=;
        b=lLFP4XvctU/mUmolQf5pX9Tob4E0YQZsrxEo2ehASHnGfI5tkLWoVUqp7nHNWXv/YP
         JEv+du/aGO3LEuLqPcO6jSF6KBFBqz20VHSbnh9KIi6hObv7sxv25TkRRFiy+i4invC2
         JpE1Xw5W8zGSoFCMsZcmvUeO89PR+lCw3J148=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=mxgcZWNI8GkETKKVvGw9joJznCW2xAb+qxghzX2Za40=;
        b=b2UumJaGoVesuiT4DhaExh/vdNoGj+MM8BYTBqTE4fVf5YKJmUPm0FpZLrtTCAASnI
         LQUMVsov5b40XJqNgESUFoakoEW8p0ILOVbwzYSzc9R8Dcss0a1A8pbv9nfriNO7tIqc
         Kjs9XJ+M7HHGWTHXEUaUv58sZSaBJG05f4HNpnOngUIqZ/hc7KFRVP48amOLPTVjT/k1
         jfVHZs2NuziLAWk77Cbfr/shkFeYqWqgJIK3hs6mOYyg/ShrmGVdMe9egr5xF6CxoJiR
         AmU7VK/cErFsl5FxCxt6pIuKV2M4LrG9PSENnN90FcwlsbfR27aTobS6PqUCI6aMztQ9
         zcaQ==
X-Gm-Message-State: AFqh2kqFvh/vPEa26znwrSh1B0DtgwmtFMDQvgUEh+pR321hqAgKJ4Z9
	hVJxSjg1ISoy+Pfr3825d5xYJI0XWdX1krGsQT4lVQ==
X-Google-Smtp-Source: AMrXdXsZrHw3jRmSmMYwJi/qT1DFme5rV5+bvSZRUUeAggiRXKQmsZUj94ky1Tdl8MeZJsZYCO+U/5GPhFbdR+JjtoY=
X-Received: by 2002:a17:907:3987:b0:84d:3721:53b0 with SMTP id
 sr7-20020a170907398700b0084d372153b0mr599785ejc.534.1673281261204; Mon, 09
 Jan 2023 08:21:01 -0800 (PST)
MIME-Version: 1.0
References: <20230105132004.7750-1-sergey.dyasli@citrix.com> <55cccf9d-e4c9-6dec-ee9d-fec56f521931@citrix.com>
In-Reply-To: <55cccf9d-e4c9-6dec-ee9d-fec56f521931@citrix.com>
From: Sergey Dyasli <sergey.dyasli@cloud.com>
Date: Mon, 9 Jan 2023 16:20:49 +0000
Message-ID: <CAPRVcuf_ZYKQrpBiNyjG+zMDfXGX1afKx=hx8BPXE+A1L069oQ@mail.gmail.com>
Subject: Re: [PATCH] x86/ucode/AMD: apply the patch early on every logical thread
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Sergey Dyasli <sergey.dyasli@citrix.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, 
	Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Jan 5, 2023 at 10:56 PM Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> > diff --git a/xen/arch/x86/cpu/microcode/private.h b/xen/arch/x86/cpu/microcode/private.h
> > index 73b095d5bf..c4c6729f56 100644
> > --- a/xen/arch/x86/cpu/microcode/private.h
> > +++ b/xen/arch/x86/cpu/microcode/private.h
> > @@ -7,6 +7,7 @@ extern bool opt_ucode_allow_same;
> >
> >  enum microcode_match_result {
> >      OLD_UCODE, /* signature matched, but revision id is older or equal */
> > +    SAME_UCODE, /* signature matched, but revision id is the same */
> >      NEW_UCODE, /* signature matched, but revision id is newer */
> >      MIS_UCODE, /* signature mismatched */
> >  };
>
> I don't think this is a clever idea.  For one, OLD and SAME are now
> ambiguous (at least as far as the comments go), and having the
> difference between the two depend on allow_same is unexpected to say the
> least.

Sorry, I missed that "equal" in the comment; it is easily removable. What I
don't follow is your concern about allow_same: its effect already depends on
whether OLD or NEW is returned, and my patch merely makes that SAME or NEW.

> I never really liked the enum to begin with, and I think the logic would
> be cleaner without it.
>
>
> We depend entirely on there being one ucode blob which is applicable
> globally across the system, so MIS_UCODE can be expressed as returning
> NULL from the initial searches.  Everything else can then be expressed
> in a normal {mem,str}cmp() way (i.e. -1/0/+1).

This idea sounds good, but in practice the vendor-specific functions return
enum microcode_match_result, and I don't see how that could easily be
replaced with NULL/-1/0/+1 without wider code changes. I also find the
enum values easier to read.

Sergey


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:22:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:22:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473858.734680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuut-000748-AV; Mon, 09 Jan 2023 16:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473858.734680; Mon, 09 Jan 2023 16:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEuut-000740-5s; Mon, 09 Jan 2023 16:22:31 +0000
Received: by outflank-mailman (input) for mailman id 473858;
 Mon, 09 Jan 2023 16:22:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEuus-00073u-HU
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:22:30 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2040.outbound.protection.outlook.com [40.107.22.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf5acc09-9039-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 17:22:28 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8829.eurprd04.prod.outlook.com (2603:10a6:102:20c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 16:22:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 16:22:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf5acc09-9039-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MT2kpAYyIb/PIcLodx9f2xKXPnIjq9crKeuRTXuRpSIhNlkQqC1kBuOm0MQHpA0zL1USfCJ1LYmPdov+zP4SaKpy9YojJQcLMpWWKE7ZFNvLQJS5YrJNLuIrpg5/Um0bMHt5Dq2J3XGiWWZLqLmfkF8jcxOfhUpDGfu/SkMdOvvG0/HmkdzzBtveCxQDaFDZkNQe+hsLXtLUrKsd2gPF01ueXQUMQHM/2TYfKwdhkCYSJOKqPpgb2R89LY4gKbOP9V/wap/a/BdJuM4rdzwVPUnouBp20CNr6aW84So2IRJOMbUsQ2cIq/XkGD0v/kmV1ipk1oV3/alSoS+gxHsirQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FgBFsUxeRhZbi1wZLbZ5QCUDhhErovrhDaq3a7acFbg=;
 b=URfuJg7BpvKIEp5dN5uKbFQKXAemcDE8iXZONegRu4dpmSeeWFFA/AwF/2O30/zzVWKTqdmZahn1Pr14HfY9PQKt3JuBJS3tl77NGjyNR/sn8qhVm5agoiCrxEuZtypIkp5BZlpuwQaoo58ttLFkhsnMCD/7YD9gJI+5IXHpf0bow3eFuq2vHoIKbrtLKXdyWrjHqoxYqyfHgCImatNkE8NI8QIuX6r4pvwRctk4yyoNyVqCAwVDp//EiZZleD1tuc+28jmGWa/Fhr6FMjgpMMAQ+lpv+Rd5SL9S6xn5j5/Z5bFnQ3C2rexPw8dIhqZ8rV48eDxNcKU6cHm76EwF/g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FgBFsUxeRhZbi1wZLbZ5QCUDhhErovrhDaq3a7acFbg=;
 b=W4PlZbtl7C/DaYSVXQkhCsTKFlr5+dHyjazXnTvg9/RBu1TC10eT/3udjsuQABS7Jov6TVjvT8PIpMyHFG7OLNxYFols7IM1jGX3cRjyPKlemgjHUf+5B1OayVdAc+ta8KyxxMyouaMiEhEajNpH00BcmPt8f6jxTW/H1A4me+97kc7M7iK2ob0IinEBcpCokdV4T0MIlg2YMamaian4ZR0570A3fAAZvZcf1dHHoo3R+nc4nj4ZWt42mp/Bgz6gSAzm0F7PF/O16fes6U5h6AYfcD20yUS8lBASaxvmqNmwJ3PkFDgrhZWg8dqabC1kIEoAyzI+TQyFDguGF6inDg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9bfbfdc1-e3e8-a634-26b2-eb48c5e1d646@suse.com>
Date: Mon, 9 Jan 2023 17:22:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/8] xen/riscv: introduce early_printk basic stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0084.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8829:EE_
X-MS-Office365-Filtering-Correlation-Id: 176e1609-12cb-4a7e-cb53-08daf25db292
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 176e1609-12cb-4a7e-cb53-08daf25db292
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 16:22:27.0066
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LxMaC6Ei2fcjh8NVJxyDT364JilGtcpojPVmFJ+GASxz2Brsr27gYVZZh9TJhG0vI4KPOiU6OofCv5c+HGhlEQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8829

On 09.01.2023 16:46, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/Kconfig.debug
> +++ b/xen/arch/riscv/Kconfig.debug
> @@ -0,0 +1,7 @@
> +config EARLY_PRINTK
> +    bool "Enable early printk config"

Nit: Stray "config" in the prompt text.
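[Editor's note: a sketch of the prompt with the stray word dropped, based only on the quoted hunk.]

```kconfig
config EARLY_PRINTK
    bool "Enable early printk"
```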

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:30:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:30:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473864.734691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEv2l-0000AX-3Z; Mon, 09 Jan 2023 16:30:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473864.734691; Mon, 09 Jan 2023 16:30:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEv2k-0000AQ-W2; Mon, 09 Jan 2023 16:30:38 +0000
Received: by outflank-mailman (input) for mailman id 473864;
 Mon, 09 Jan 2023 16:30:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=f6do=5G=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pEv2j-0000AJ-8L
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:30:37 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2088.outbound.protection.outlook.com [40.107.20.88])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f1bc7c91-903a-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 17:30:36 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8882.eurprd04.prod.outlook.com (2603:10a6:20b:42d::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 16:30:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 16:30:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1bc7c91-903a-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P08qqfpkV3nR2yLxeo2XiTX3EaTtPuUaTcZvOnfdakfF1TDN9MWn2flYaMXEQIMRVCK2GurN5tnFxcRJ3Is9lOqP9RwxL3ZjbfjfkDueRoo4nnFDPzwoKY/m84NNQRd45qvVL9q/uWNtk6sXw4q+JqLurJFUkbvoFW3yqznsCfXWMFr5OeXTKUqBhVudlNIXttNuxJASFoYriEaoYZwSbNRbvqm/uK6SB9DrKD6DzDxPEI662cSav/cCqZc0Z6wabJ3lZ4Q1g9CpJB1TjPl0N0F8oU43gLINPkCk2ks1ynXIS1F1EDKIK7kCTTtUjuQ+XgIUwZ9cneiyaJxk12Xytw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VgMU/SgV6KX0g0UGb94kr9WJ66aNKxszDwcL6jaddWQ=;
 b=FszD5OIaCx1xFPdMxglqcygRyVkvtpM0CzZzosp1ot8qsZhy9vTN2e+Ae+V7hyQDqSHafr3ntNXbhGUSFq0e6+v0J4NaDZKKQMwMciZuH+9s1y7LFpONvX8zKtmydwvJjdHXrQnBUzPnIN1NoLA8+oqas/N5akwoDngc44Fek+ItQ+trCY5e0gnq1/ZSeaaoA8XoiYatvMQ6PvK7O5ckcprrw388/8rI121gJEHSN5VB0180EXsdZ4agO7CVBfUGcpGwEqoVaRz/hHsr/WSGK/eNrWhEmUHSqxXDNFmPiGQGcKLMh3nlKMIqaQxXSvdFxQmc2I3emQm/FTXhOGIskQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VgMU/SgV6KX0g0UGb94kr9WJ66aNKxszDwcL6jaddWQ=;
 b=ENKrlq++bOWJaC/Wl58fYWYUu3Wu9R/XP/Q/aaFr5PDlFRBiM+vPkCbJ/HTI5rPhb5mkFgkyYSpbpI06Zr3byrjZe5MNC/JncpkSdD8pTDRgRG4bQjedHaAFHBTt2F+d9dY34YqBs1KPnX8w7FKp8EydwOax6oz0I0ShaTSKdK3nPW5eqYuab6Zl6zlxKkpareOa2VhYH4jcPIP0KTK4dd7qUw9OSv6ym2yBgTAgPB02wOm3c0EnLk+80TATSY1FMxrlilMO45VxoQ7frsrCyj0g4TkV/XhVgXOkIhqgKpq9pxu6knpNz7JPKWvfh5Zc6q5wRonsc4ccd6y715oESg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <caa35078-7b53-4936-a8d1-42bdf72df4d8@suse.com>
Date: Mon, 9 Jan 2023 17:30:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/6] x86/prot-key: Split PKRU infrastructure out of
 asm/processor.h
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20211216095421.12871-1-andrew.cooper3@citrix.com>
 <20211216095421.12871-3-andrew.cooper3@citrix.com>
 <427dc257-b318-de55-7126-0446264401f8@suse.com>
 <b7c7d431-e7d5-9dd5-a33c-c61e53c42acc@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b7c7d431-e7d5-9dd5-a33c-c61e53c42acc@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0002.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8882:EE_
X-MS-Office365-Filtering-Correlation-Id: 986730dc-34ef-4064-0ba9-08daf25ed4a7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 986730dc-34ef-4064-0ba9-08daf25ed4a7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jan 2023 16:30:33.7098
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +L2lHc1DcyLzi6F0nb0S1soPtBkBkT1rFKLfWnRzpzLDo5QPqa5TdoBaKxUO1kK0RrvR0XQtDOeubbO02OEMrA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8882

On 09.01.2023 15:57, Andrew Cooper wrote:
> On 21/12/2021 11:28 am, Jan Beulich wrote:
>> On 16.12.2021 10:54, Andrew Cooper wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/x86/include/asm/prot-key.h
>>> @@ -0,0 +1,45 @@
>>> +/******************************************************************************
>>> + * arch/x86/include/asm/spec_ctrl.h
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify
>>> + * it under the terms of the GNU General Public License as published by
>>> + * the Free Software Foundation; either version 2 of the License, or
>>> + * (at your option) any later version.
>>> + *
>>> + * This program is distributed in the hope that it will be useful,
>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>>> + * GNU General Public License for more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License
>>> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
>>> + *
>>> + * Copyright (c) 2021 Citrix Systems Ltd.
>>> + */
>>> +#ifndef ASM_PROT_KEY_H
>>> +#define ASM_PROT_KEY_H
>>> +
>>> +#include <xen/types.h>
>>> +
>>> +#define PKEY_AD 1 /* Access Disable */
>>> +#define PKEY_WD 2 /* Write Disable */
>>> +
>>> +#define PKEY_WIDTH 2 /* Two bits per protection key */
>>> +
>>> +static inline uint32_t rdpkru(void)
>>> +{
>>> +    uint32_t pkru;
>> I agree this wants to be uint32_t (i.e. unlike the original function),
>> but I don't see why the function's return type needs to be, the more
>> that the sole caller also uses unsigned int for the variable to store
>> the result in.
> 
> This is thinnest-possible wrapper around an instruction which
> architecturally returns exactly 32 bits of data.
> 
> It is literally the example that CODING_STYLE uses to demonstrate when
> fixed width types should be used.

I don't read it like that, but I guess we're simply drawing the line in
different places (and agreeing on one place would be nice). To me, the text
means using uint32_t for a variable accessed by an asm(); that doesn't (to
me) extend to the function's return type here.

But no, I don't mean to block this change just because of this aspect.
We've got many far worse uses of types, which bother me more. It would
merely be nice if new code (regardless of the contributor) ended up
consistent in this regard.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:33:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:33:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473871.734702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEv5R-0000rX-KF; Mon, 09 Jan 2023 16:33:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473871.734702; Mon, 09 Jan 2023 16:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEv5R-0000rQ-HO; Mon, 09 Jan 2023 16:33:25 +0000
Received: by outflank-mailman (input) for mailman id 473871;
 Mon, 09 Jan 2023 16:33:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0yl/=5G=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1pEv5P-0000rI-OO
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:33:23 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 54a76ad2-903b-11ed-b8d0-410ff93cb8f0;
 Mon, 09 Jan 2023 17:33:21 +0100 (CET)
Received: by mail-ed1-x52a.google.com with SMTP id s5so13318235edc.12
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 08:33:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54a76ad2-903b-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=Zpopwb8KrQWBXzBjPf9ZHOHK5nNkCWT6pVvIG4FyUoE=;
        b=H8Pl71SJ5f7aqxRvGqq/bFEXF26SjbnF6m0XUI83tTieEDfb2mVzv82XW7Jwq2VU1T
         AdZPo7+FlMGmXWouyOEkYlD3xoq9Hd/9G/UANGMK8KrqQSdk0UyCBCdPR7KoVJasdVgk
         VoR3myf1srd7KCnIiLrTn41YPcEpPB9qWLEBE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=Zpopwb8KrQWBXzBjPf9ZHOHK5nNkCWT6pVvIG4FyUoE=;
        b=bbWcy3aUMC2a3xmsD4AZ0G17j6EyCZsN/lTYBSH8mbkEM62xFonJZZKiXlXyvq0ciH
         asIlIbJAoqTHLnQqICMuDf1yo/FFAtDexJt21t6MnvLeWQ5Qe59tHtzq6YMIhYQsNs35
         9KPtqLawerZrDGzKi2rNkeVS2sbPT7ctZ6Zwbd9ZdUf0IYJfNIL5UwI+g+Zci8kz60Fp
         izisgsF9Jj1TcT3BgH5QE0vP9f4x3lZMAoMOhgyG3lL40Yie1NESgUr6UHsQ8u1IHP1q
         esgaEv8dQ0tZLzLMEGwfJq+IX5JWcDVUp+TSflUpkZm2A1H1w6Gwst1s4tDSnl+/yyp2
         cZdw==
X-Gm-Message-State: AFqh2kp/Fl4La6flcsnAivaRH4Uzbukw8l6GzikPAKL7mj6lf8RMVpdq
	2f28+cypffcZPOyRMACZJqcdyNWURts4Hu6ARuIu1g==
X-Google-Smtp-Source: AMrXdXvC+aTYlGOLRuOjXHLpOfr0ey/W1GaCHKISSZRkiLP8yi/g2FsVydgIYKsenZ88MxmOw+CGpf+WKp1Muj6PfPw=
X-Received: by 2002:a05:6402:78f:b0:499:9dfa:4388 with SMTP id
 d15-20020a056402078f00b004999dfa4388mr656297edy.106.1673282001371; Mon, 09
 Jan 2023 08:33:21 -0800 (PST)
MIME-Version: 1.0
References: <20221208104924.76637-1-george.dunlap@cloud.com>
 <9b8cace3-1593-8400-0633-da04f12b9849@xen.org> <CA+zSX=Z=fX+BPHqxNVTiNipkBWdWPf+g0H6vetHoSy3vtN0shQ@mail.gmail.com>
In-Reply-To: <CA+zSX=Z=fX+BPHqxNVTiNipkBWdWPf+g0H6vetHoSy3vtN0shQ@mail.gmail.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Mon, 9 Jan 2023 16:33:10 +0000
Message-ID: <CA+zSX=aeqLkS-dXj+eQMiO931MqwDO5vwdpvX9cumHoLrEsqEw@mail.gmail.com>
Subject: Re: [PATCH] MAINTAINERS: Clarify check-in requirements for
 mixed-author patches
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: multipart/alternative; boundary="0000000000004d14cc05f1d75381"

--0000000000004d14cc05f1d75381
Content-Type: text/plain; charset="UTF-8"

On Thu, Dec 8, 2022 at 2:26 PM George Dunlap <george.dunlap@cloud.com>
wrote:

>
>
> On Thu, Dec 8, 2022 at 1:58 PM Julien Grall <julien@xen.org> wrote:
>
>> Hi George,
>>
>> On 08/12/2022 10:49, George Dunlap wrote:
>> > From: George Dunlap <george.dunlap@citrix.com>
>> >
>> > There was a question raised recently about the requriements for
>>
>> Typo: s/requriements/requirements/
>
> ...
>
>> Typo: s/non-maintiners/maintainers/
>>
>
> Thanks, I've changed these locally.
>
>
>> Acked-by: Julien Grall <jgrall@amazon.com>
>>
>
> Great, thanks.
>

This has now been checked in.

 -George


--0000000000004d14cc05f1d75381--


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 16:55:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 16:55:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473885.734717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEvQn-0003db-Fa; Mon, 09 Jan 2023 16:55:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473885.734717; Mon, 09 Jan 2023 16:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEvQn-0003dU-Cz; Mon, 09 Jan 2023 16:55:29 +0000
Received: by outflank-mailman (input) for mailman id 473885;
 Mon, 09 Jan 2023 16:55:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPNl=5G=citrix.com=prvs=36677a302=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pEvQm-0003dO-9G
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 16:55:28 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68f5307d-903e-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 17:55:26 +0100 (CET)
Received: from mail-co1nam11lp2175.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 Jan 2023 11:55:22 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB7027.namprd03.prod.outlook.com (2603:10b6:a03:4e3::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 16:55:19 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 16:55:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68f5307d-903e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673283325;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=pATHQcznDD7E4YnvVRui+umjWBsKGx9P56qzl2XFL7s=;
  b=T4tJIHe6gqz9gl74eDe0DWf6MUZLf7CFkt+HsHyGeMKtbvNm6ll9xYez
   kOsq+B/lN73Pwd7TV8nTsGHPgJnDkMACCbGPzNuG43IKJJs10G9G3T5uW
   MzoMP4rE0HybmkF59iAS/FgmkyD3MEDoSzialx7wCqVN3CDWPaR399Dnf
   E=;
X-IronPort-RemoteIP: 104.47.56.175
X-IronPort-MID: 91786498
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:unsM9qBUjYjI+RVW/xHiw5YqxClBgxIJ4kV8jS/XYbTApDMh0WBWx
 jdMWW6CaKzfYTPyfI0ja9vi9UxXupKAzIQ2QQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtcpvlDs15K6p4GpA7wRlDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw3bhqXXEU7
 /ojJDkTLRze2OTm7K6+Vbw57igjBJGD0II3nFhFlGicJtF/BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI++xuvDW7IA9ZidABNPL8fNCQSNoTtUGfv
 m/cpEzyAw0ANczZwj2Amp6prr6WwnOlBNNCfFG+3v5zwwas2m89MSFIfBi2oNqcmnGTAesKf
 iT4/QJr98De7neDTNPwQhm5q36spQMHVpxbFOhSwB6J4rrZ5UCeHGdsZj1Mdt0g8tM3TDoC1
 1mVktevDjtq2JWFRHTY+rqKoDeaPSkOMXREdSICVREC4dTovMc0lB2nczp4OKu8j9mwEjapx
 TmP9XE6n+9K0pNN0Lin91fahT7qvoLOUgM++gTQWCSi8x99Y4mmIYev7DA38Mp9EWpQdXHZ1
 FBspiRUxLpm4U2l/MBVfNgwIQ==
IronPort-HdrOrdr: A9a23:4Wti2K/ax8+Q1zoBBgZuk+DcI+orL9Y04lQ7vn2ZLiY4TiX4ra
 +TdZEgviMc5wx+ZJhNo7G90cu7MBDhHO9OgbX5VI3KNGOKhILCFvAB0WKN+UyFJwTOssJbyK
 d8Y+xfJbTLfD9HZB/BkWyFL+o=
X-IronPort-AV: E=Sophos;i="5.96,311,1665460800"; 
   d="scan'208";a="91786498"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lGlAGjlDW6g/Gn/CR7frhd8ZnApQJxjrnIBFnK9n8kbjYzlyYjKff4q4Xs4l3KA0lKyjVJ9nkuJzSy0vT390KVKViZO76mM2MySHukuG2fultcwZUsWKg4ElZ++NTFtbSh88iGeOef8Rd3V+BMgDMsSsQbwgyWpG4VqFY9xbaq7CcwgNpcQcLmE54vS8P3JpmdDt7TbWPXf47+RofKujQz67ZdOvyjsn1PMKmZd2+we8qIt0OPHj9MsQiqTpQM4sgula/pe61o+zxZEDGgvonWfLbpGG+ainF1+4kqHetRLGA6pvBHWMPSE/lQoWOOYWkAw1y7YqL0we5x0CxxRsAQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pATHQcznDD7E4YnvVRui+umjWBsKGx9P56qzl2XFL7s=;
 b=dCzKNwRkV2Tp0yxG/4/tXPOZsMwWlCkFcBsaBB4mRai+VLUzK/uHeCjnMdbh1lSrE8g83dFKnfX8sIS7iZ+oj083mK3RNGTBm/eaUcBV+ZmGhwNTqpb5QAFKFus5E4FHcom7jpf+An66qGNUJBGO+Xqod8uVyG0gl16pLEJ8bYwblv/URztNO8RG3j+GaTeAsqqRS9GQ4a3682Y+yiKM81aXG1qJvY9AsXtkcMaw2RIeuxV2okcNnnKmiWm1eAyJY9GNuKl6ARh08qmaJ0avKDpeiiIQDIS+7C8iX+EiOdyDADZBwXJMs9KSGkyac9m4myeyNHe7eG1sURoXplD62w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pATHQcznDD7E4YnvVRui+umjWBsKGx9P56qzl2XFL7s=;
 b=O/K1KIJHdxOj4zBbC/YYw2PHiH0ecBTGQyLBoBBq3UPl8FBLXjNIgNAKzXc1mJJHDbsuFXoGn32V7WAwq6w3ngKC7BS9Y32R6rAqgNFredxuGS4EehNUicFOhJ694aQjxW1B8Y3lyhKyT/Bh5TyxlhhVFDdQnGNhvuRd/8ElARk=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 3/6] x86/hvm: Context switch MSR_PKRS
Thread-Topic: [PATCH 3/6] x86/hvm: Context switch MSR_PKRS
Thread-Index: AQHX8mME5A98miRws0CbcM4WT1rTvaw83q0AglvS2wA=
Date: Mon, 9 Jan 2023 16:55:19 +0000
Message-ID: <83a08e78-cbee-8486-479a-5255c42e8239@citrix.com>
References: <20211216095421.12871-1-andrew.cooper3@citrix.com>
 <20211216095421.12871-4-andrew.cooper3@citrix.com>
 <6efed2d5-e7a0-139a-a2b3-6f0696f711d1@suse.com>
In-Reply-To: <6efed2d5-e7a0-139a-a2b3-6f0696f711d1@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SJ0PR03MB7027:EE_
x-ms-office365-filtering-correlation-id: 3d684fd2-1f24-4ef1-f117-08daf2624a4c
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 rIkqS7cfvhK9nD3zb3I1zF3dAZ06UgXJdwpPUU+Od8zGsRvZr9b6TNHIDETKzTD6wwlZfZgpSipHqbK1NL6iie8PAXke1i+IF0gXK5fOL8CN3I2Y1McTlWAaAXwUMujIrIdTgSrjJ3WQ2gNF3uwie0pw3t7PClERPp3Ovmcul6Vi/ce+wN7kYUnShu77Oj16acO2IMW6s0JV5RvauUXoeEJiZMYhx2oBvCTOgG9ahML9H5wXEtfq/syZUb7Rze9cVYbWExbBhxdM57qavl4aT+qIB/ilKgQfiVMYazkjmyDEi/Z1RsY7WCL9yfdK6ctY7NDGKWrbZBGE8hTkSQgeyS8FFkSfAbtAENloaRpxElwGBFkW2HE67jkbtPGg0earfp91MlpL1THyfEnFNcQPVsmLR12pJxnt91FlLGJHlZa5hXM4e3phsHJr3EahKR+kaXWDA1AlI42+C0ttFr8R4/ORbScwW7jvLSKoIj43SPAAQDhSqHLQTap75BvCTLvqn2Sy2WdQXe9pZ8soL9/pXXYG1GRVc53IvW2oSa1XHxWQuZ261nB/4TvoBlsDlRN9zyysr1vhHtW1U31daQ3unURCOIE3+hWOKyfMHi9TXwTQ9HRonEh8DOp1QIc5iE2XMq5OKGaV3VaWyrcOR4+yvfod4x2b8zUjWRD/yRVUbBwe91ddJklPxle59BUCGFoKUIoKmEB+LZIB8vL0Yqq4F4IpX+lbZg12Q8/eyHykp/KGV8r5x1QKKuPg40ZRbo3KtgE7KPn9fAr2ew5lfnDtbQ==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(136003)(346002)(39860400002)(366004)(396003)(376002)(451199015)(66946007)(41300700001)(31686004)(8936002)(36756003)(5660300002)(2906002)(8676002)(6916009)(91956017)(64756008)(66446008)(66476007)(66556008)(76116006)(6512007)(316002)(71200400001)(54906003)(6486002)(478600001)(6506007)(53546011)(4326008)(186003)(26005)(2616005)(31696002)(86362001)(83380400001)(38100700002)(82960400001)(122000001)(38070700005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?MW0wbzRCVEgvYkk3VWQ1YWcwdTZtbndDMnJrMW5XV3hTdVNuVXdVYWRNNFZZ?=
 =?utf-8?B?OWNXU2VlK1FGdWhCRS93YmhlN3dscUxUbk9DVElKTnlYajRyb2wvV1E0TzJU?=
 =?utf-8?B?UjV3R21INzZFVGlMTWV1TzJJWm5oL1h4QzBMR2RVSHdGa01xNVR3Ym9mUXBp?=
 =?utf-8?B?bFZ1eVphRDBDT2VHRDM5SEJ2Y21peFIyY0ppcis1K2hyYmtzV0FMZVc3N0o0?=
 =?utf-8?B?MS9vNldVVzdYcXpSWlA3aEx3UXg2TFRaejhWNjdOVFJuVEE1cDZZR3N6WHVs?=
 =?utf-8?B?Q213RCtsNTlHRmloOVpjZ0NKODlEMUlFVEFYU2daYWROWHZESnY3Qm9QTE5v?=
 =?utf-8?B?WVpRbHFJT2tIanNMdjZHYjFQOEJLRFdFNEUxYmtMcE1PWEhzVGtzNEFqQzB3?=
 =?utf-8?B?cmZRbWhVQWxQYUw1MElWVWlXZkxYYlJvWHBVLy8rQUI0Z2U5dEhRT0p1amVw?=
 =?utf-8?B?alc0Q051VFhLSnFtbVB4RVl1dHU4RG0zZFJPRW9jV21OMlg1RFYyQW9RamNm?=
 =?utf-8?B?c0JIdHFNS3U5RjBlVmxEVXNXdXA4M2JXZU1FUHhlVWs0a2g4R0hmTkFYMDdY?=
 =?utf-8?B?ZjdKYlRhT3k5V0tPYlUwNXZPTWYwSU0vUDlKOUF1OENUQ3F6UE9yUlhGeUxs?=
 =?utf-8?B?aVZadEdraWpVa3VTTGlPUE51MXMzOU9RTlZlV0lucCtZREtZVm9sbXJ4VGto?=
 =?utf-8?B?K2RaL1N4U3ZiTXk4UzFSQTBkaEo5Uk9mdCtuTWV3YXllSDdNN3NXQTVleWZv?=
 =?utf-8?B?RGVTall3eUt5SzdmSHJzYVgvSStqRWxWQ2ZvWXhVdEY2dlRlbVIzWEw3RlhW?=
 =?utf-8?B?UnVlNktpajduZ3NoUEVWQ3lRL0lXeHIvYS80dWJJRG1QMGNLUkZZOVVTMElP?=
 =?utf-8?B?ejRuUjBnV1B0NjRvb09JM2xvYU5KSjhsZUtsamtzek5ZWU1tZkpIWlpGOW9p?=
 =?utf-8?B?ZzJuYTMyVzZRdGdwRWtHbys1TFFXaFhvMVpnejlRSkxkZW56d3V1YXZSaVY4?=
 =?utf-8?B?bjg1bS9hQXlLakxuZGZIQXFac0JvOXkzU096YkRQSTdCQWoyejA0d29weHJQ?=
 =?utf-8?B?WVV0ekc2UEMrcWF3Y01DdTY5bWFkN2t6TXJDZzlMM0E2dVpCTlFCTDVPREl2?=
 =?utf-8?B?MnlzVkU5N3NiWmsyaThpai9JVk8yTTBIZzEyZElEZVBuM2RkU1VNWGV6V3hO?=
 =?utf-8?B?dXNETTZjcy92QW5nbXRyREJwVFZ2aFhORkdCT1ZKWGZQclo2MkpCeDdCWVlP?=
 =?utf-8?B?SzlmNkR2blpQT2xWM3JoM1E1cG5ZR254VDM1d3Q0YWY1dVRFTGw0akd2bm92?=
 =?utf-8?B?TkNod2lkazY0dXgyQmtjTnBhVHExQk12Wmppd25PanptczhaQ3ZNdUphVmZL?=
 =?utf-8?B?VVJHdEVxVXN6QTYrN0NjWDFxSGZGNlhIVDc3SkVuODRmbCswZ3I4NzZyVTk4?=
 =?utf-8?B?Yk5OSnJobEc4eUZaNnU3anpnRWx4VmxoRGNwTEk0dkgvSzJXa1phVnBocUJX?=
 =?utf-8?B?d24wUXFUVTFqNXl6dC9TRVN0VGdMOHJuVzROUXo4MGFHb2w3QjRIU2hla3dt?=
 =?utf-8?B?SzF5UDM4NHpHeUFKNjRQdHpYSGFPSmhtZlAxMXoyR1dEUGNCbno0SlFDTEtW?=
 =?utf-8?B?WWU0SDJNbGNUckt1OCtHVVNFelV3c2VOZk9FeGpEYmpXMVh6MjNzNlBHWktS?=
 =?utf-8?B?dnFlVi9EYVkyUE1TNTJRUmE0TjVLcFBKVm9WalA2MldXSkpSMi82b0M3a1FJ?=
 =?utf-8?B?MFFyWUkyemcwOGpvSytPVFRmUk9waCszRFBoQTFVNkNFdmxma2hwWFlodnYz?=
 =?utf-8?B?VVZKamk1bUI3U3RtbzBvRDQ4dWZRWjBTRWY3SDFpaUlnSEsyeWRTTm9IQlo0?=
 =?utf-8?B?UWpvMHVYdDRQRGZXN1hWOFF0Zk05VFB2WXV5YWZBSm80cEFzT0VqUkJHbTV1?=
 =?utf-8?B?Q2prMit5K0dsVk5QZGhscXdITjJmOHd5YkpCeERNOVVESjRJTklValk3aXU5?=
 =?utf-8?B?RWZYd2JnbW80cDUycmpMN0w0YkNZdVpRY1pybG9GWmlNZE14bW9xVDhmL0FS?=
 =?utf-8?B?QUpVRTJzSWVYVWNsNHVzZ1lHTnRaWEJXUnorQ0FuNVRFd1drUDdJQWNYaFJx?=
 =?utf-8?Q?dvb/cviaDf3tGW1QroK/jJD1N?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <6FBFE4C814FB8F40A325845BAAF94C8D@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	Ai3Oek7kiBXsaMqxSnBy898he84sZFr9eI7YszqsVyCpEjYOlXIS/1m08FBfBG9yqIy77PnIaTMG276l/48Rzm5cNRMme0FLvC6gJGeVdjZGi3QkYBmv+4SnKcU7ck+DqPErM0tjkLmyj0THLTB+aeg+lfI4LOoNUYEIAcIZY9R4iWQ1rLq2vkhVpkRsr/JiJHyOdpDVa+PK+A87LKpr3K+zIFmJNu36iVA8lLuqBFVODJsiIOaMyNdKWKcJmVGdk4n4Hs0w/+TSMekeAEyJ/uCJWWVnxOUYmKM8CLCt1b47F1+9Ns2HE6CqtBVma9hnE+XgQM5g8l3sIXY8ttnOsbMajueoyY4EW8QG2aRYdlz6jhsGVz2mn8VhkKrCp26+XK9qKZSAprVXTJE5UywcVeyVgsF8mtFBygHF0wY2++eNz8cZnE/nvsX4+qhjMY762Ehk3o+r/tB0Z3k5WQiZcRldCNnJrpmuHEhlhPJwVVjdjZHVkimnGDSIBIzEBl7n4c0o+IZFtUDF1216zsE8Owb5EvLMbUMtbqnCXxpur4S1amhoSV0/7qE/7vyQZcEKbPr5reAIOEEBJvr4NXH0Cz9CjR4H8iPjVoeG1YmwB/lmqNB3ajV2fmRL+m9ajnXsdBd/N1IC8S6+VgsS++CxReJvhqg6q3NPSwZ8W7eYpMsV9RZsucx7UWEHVYoi8Z0k2M1jcEB0YclDMeo8z509kYip9HBegSUOg0u9Ii35RF64wpLv8p1iVJoT6oouvLdfCfIC6/XH32JGlcXPX+OpXJLGCaA/FIkuKRl0ucVvJOhCkHFoJztm3rhbeXHKdC7VfSvs1qACkyWTVsFIzSSx/kss7Zw6X/YWo9ZXfYkv4/Q=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3d684fd2-1f24-4ef1-f117-08daf2624a4c
X-MS-Exchange-CrossTenant-originalarrivaltime: 09 Jan 2023 16:55:19.3307
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: W1x6LmHIW5Hr3X3gkmuddVbjnoUJJ/XsUB8q3uaHKKaFIzBc+k3jMhHmfRL2+Hz0f1zhy7JvOdLGnnbxWn3hswanDnAcP5gSzaXlbUzir1A=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB7027

On 21/12/2021 11:56 am, Jan Beulich wrote:
> On 16.12.2021 10:54, Andrew Cooper wrote:
>> @@ -42,4 +45,49 @@ static inline void wrpkru(uint32_t pkru)
>>                     :: "a" (pkru), "d" (0), "c" (0) );
>>  }
>>  
>> +/*
>> + * Xen does not use PKS.
>> + *
>> + * Guest kernel use is expected to be one default key, except for tiny windows
>> + * with a double write to switch to a non-default key in a permitted critical
>> + * section.
>> + *
>> + * As such, we want MSR_PKRS un-intercepted.  Furthermore, as we only need it
>> + * in Xen for emulation or migration purposes (i.e. possibly never in a
>> + * domain's lifetime), we don't want to re-sync the hardware value on every
>> + * vmexit.
>> + *
>> + * Therefore, we read and cache the guest value in ctxt_switch_from(), in the
>> + * expectation that we can short-circuit the write in ctxt_switch_to().
>> + * During regular operations in current context, the guest value is in
>> + * hardware and the per-cpu cache is stale.
>> + */
>> +DECLARE_PER_CPU(uint32_t, pkrs);
>> +
>> +static inline uint32_t rdpkrs(void)
>> +{
>> +    uint32_t pkrs, tmp;
>> +
>> +    rdmsr(MSR_PKRS, pkrs, tmp);
>> +
>> +    return pkrs;
>> +}
>> +
>> +static inline uint32_t rdpkrs_and_cache(void)
>> +{
>> +    return this_cpu(pkrs) = rdpkrs();
>> +}
>> +
>> +static inline void wrpkrs(uint32_t pkrs)
>> +{
>> +    uint32_t *this_pkrs = &this_cpu(pkrs);
>> +
>> +    if ( *this_pkrs != pkrs )
> For this to work, I think we need to clear PKRS during CPU init; I
> admit I didn't peek ahead in the series to check whether you do so
> later on in the series. At least the version of the SDM I'm looking
> at doesn't even specify reset state of 0 for this MSR. But even if
> it did, it would likely be as for PKRU - unchanged after INIT. Yet
> INIT is all that CPUs go through when e.g. parking / unparking them.

While trying to address this, I've noticed that we don't sanitise PKRU
during CPU init either.

This will explode in a fun way if e.g. we kexec into a new Xen, but
leave PKEY 0 with AD/WD, and try building a PV dom0.

As soon as we've fully context switched into a vCPU context, we'll pick
up the 0 from XSTATE and do the right thing by default.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 17:01:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 17:01:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473891.734729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEvWz-000552-5E; Mon, 09 Jan 2023 17:01:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473891.734729; Mon, 09 Jan 2023 17:01:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEvWz-00054v-22; Mon, 09 Jan 2023 17:01:53 +0000
Received: by outflank-mailman (input) for mailman id 473891;
 Mon, 09 Jan 2023 17:01:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPNl=5G=citrix.com=prvs=36677a302=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pEvWx-00054p-Kz
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 17:01:51 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4df174a1-903f-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 18:01:49 +0100 (CET)
Received: from mail-bn8nam04lp2049.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 Jan 2023 12:01:47 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB5998.namprd03.prod.outlook.com (2603:10b6:5:389::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 17:01:43 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 17:01:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4df174a1-903f-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673283709;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=R/0Vk+33U8Ao4aCiSMmv1v9fvlZ/B4rPwFCHz8IDlcc=;
  b=D5UqDEcpmjSFquNmPXwLzjvjjp9TYPxyUqsYfzMaFSU5EpbRz+/cXQDL
   JSMQhSdvI8w61rfCFVoWYIa3gHz2TxdaZgu2kZmFodXrnqcCM9fJabB1e
   ng8kM5jeaDPX+ZEmvfqm1ikoC0u6MKB5x9YZLPuljaCoBxdlSMzkmhfjz
   c=;
X-IronPort-RemoteIP: 104.47.74.49
X-IronPort-MID: 91829678
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:dETh6aA9i9GOJxVW/wriw5YqxClBgxIJ4kV8jS/XYbTApDwr1WZVy
 GAdDD/XMviONGqkf99zbIi1/UlXsJKGn95nQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtcpvlDs15K6p4GpA7wRlDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIwvaVwKFtCt
 sEieCkkVzfboce9g56VY7w57igjBJGD0II3nFhFlW2cJ9B2BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTK+exrsgA/zyQouFTpGPPTdsaHWoN+mUGAq
 3id12/4HgsbJJqUzj/tHneE17afx3KlB9J6+LuQ6/51g2SOyE8pVDoXX1+ap7rnjBW/YocKQ
 6AT0m90xUQoz2SpRNTgWxyzoFafowURHdFXFoUS9wWl2qfSpQGDCQAsXjNHLdArqsIybTgrz
 UOS2cPkAyR1t7+YQm7b8a2bxQ5eIgAQJG4GICUCHQ0M5oG/pJlp1k6eCNF+DKSyk9v5Xynqx
 CyHpzQ/gLNVitMX06K8/hbMhDfESoX1czPZLz7/BgqNhj6Vrqb/D2B0wTA3Ncp9Ebs=
IronPort-HdrOrdr: A9a23:jUbozq0uZVY5aAggPg71egqjBLwkLtp133Aq2lEZdPU1SKClfq
 WV98jzuiWatN98Yh8dcLK7WJVoMEm8yXcd2+B4V9qftWLdyQiVxe9ZnO7f6gylNyri9vNMkY
 dMGpIObOEY1GIK7/rH3A==
X-IronPort-AV: E=Sophos;i="5.96,311,1665460800"; 
   d="scan'208";a="91829678"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lt0DUxG460tfBmNnHh1Q6av9KIRW69CcraKUjYeOM7VIZuKn7HJfJjziwcAWVGSA/qpMdm7eUt0wGa5IcK4tTXiv5oXnaRKBrAkf2Y88tYbFdbfPzOW3uSsj42i4Cm7ImRSCAfmnwlxLYPRb4+IK9d4R8SG9qfw7omZMvtUVzqd/cXVR0t0Eu55yZHiQ1N148yRQAS1NGBhJvQMCykjHd7hM2RywRvtWZcFzVDv7bqwPH7bmxVUBfvHuiZJxpErTCpQWJExXnNnrbyr7dSV7nuSaPY/BuTznfHZeFVmQwo+uP/nrHyXiGNDLTzciATROABpele9+WuD8gabMMDGIBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=R/0Vk+33U8Ao4aCiSMmv1v9fvlZ/B4rPwFCHz8IDlcc=;
 b=FNrcjecLDJY5FPR7crHgMFhUY/h16LecuxnqIxHVBjhZ9alkaBKAp+2GL+1R1CjUSM1gFPGau9WBHLybWyl+yCD8lZFrjHLjFWiZu19egpSqMju9NZhdjW83IxN+C2GJP+6iRhRL7/wLqxZPSfYHgx885lLOzXAfNjPKvFacbYVTqolbe/JBgVmhlraa3F5nrvxqOQIXc/vjW9STOZAw0ESqlYjH/Xn/HZ74m1IpUHls19xM2hTra1jrj/u6g1hoA7uaTJf7oyO3Ma2jUGQu1ocDcMgqat9j4p4+rcxllewea6KJd6bWY4qE2zB6CRzvvHWVsLe3j3904YljbwrS0A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R/0Vk+33U8Ao4aCiSMmv1v9fvlZ/B4rPwFCHz8IDlcc=;
 b=FCzUQXcz1pdhVeHJHKACc5MVzOTQCGQCg0iTgkuZ1M5H/XaSYp2X4eFgCUpYqia+NnM49xvjxtV4jV9j7OswMitNzv8QXA+GrHeJaG0subXCB4MIGvoYYvxBbjOOhwVCY/+qBDNCrdjcHsaD/xWv3rC07PvsUoAFP/OVAYRwsm8=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 6/6] x86/hvm: Support PKS
Thread-Topic: [PATCH 6/6] x86/hvm: Support PKS
Thread-Index: AQHX8mMBaDBKE3ERDkqjj25vs7mo16w85QUAglvOTAA=
Date: Mon, 9 Jan 2023 17:01:43 +0000
Message-ID: <f0095240-4122-df0b-9bd0-5e9d53181ebc@citrix.com>
References: <20211216095421.12871-1-andrew.cooper3@citrix.com>
 <20211216095421.12871-7-andrew.cooper3@citrix.com>
 <e8d01f45-fc80-0908-99c5-454991a9209e@suse.com>
In-Reply-To: <e8d01f45-fc80-0908-99c5-454991a9209e@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM4PR03MB5998:EE_
x-ms-office365-filtering-correlation-id: 00fee7e4-3012-43ba-3a62-08daf2632f1c
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 7zPoP3nXOPgEaqlN1CKTHSvOSAzReyv6I6AIgfjX9v0WfLUiFKfq+Z0/SQXmfuN4FDmskp836XswE8u8v0aUNsK8+N7Ejhgx9tD2QwAOH/rdVlYiKlNb64TnmphQWLdJOXp0Nha3hurRePLtAbnkjNg/NsFjvhtOFoEgZwdvXCQ4LEwQYSIcMBd5CkHNv4gDxwIvtexz1Z45m5928CIz8R8zVOSEXpKih8+2QnqHlkUOKYWeUSR4ib3N0L49g0x0FOwvqg51UVIyHQmmx7jY0gcTv2S0Ts26hcUkHEI/j30aBIG/EmYdSyP212abfu95BtyvL1dsab/9cU1gxOKu2b1guacmEQd6Y8RnA1fuubZL8UqRtT+46WKzpPSEcpa2UKh633JObBmWyfCcZQufOAbjF8jL5Z2c2J1gpNQ3PIy2xbNRQWMOTm2QfmbWIllbrzNUp+a5znAzFGjssuLJ24LZKsHponBF8N+O3LJbVavPYY+cldcUPrh2i4pCpMb4O3NWAe8VeMrSg4yoQFOtquiKiIk4hBMtTHeUB/L/Y/4gZOXl6j8hxN1CmlgHKTVC600dNfNnh9wY0THbfzm3ORC6gpt9tWdIoNnhAJII/cyT0NHqGWLWLquM/IqQFGqatiQmVDsi12S+FMYUwO4QDwGITuoCiWICs2WLw0k5k9ZOESNxfWyWBsn9nYryf5oQNcSm2krEQfOP8oIsQ78F7y5EHFKbo1KE+v+WZqke0SU3W6fNjZGEEbacWNQRApIV/i5Nyf0i7Bhyp+xc6/y4tA==
x-forefront-antispam-report:
Content-Type: text/plain; charset="utf-8"
Content-ID: <24AA875F96FAE14097171D7FC883E2B1@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 00fee7e4-3012-43ba-3a62-08daf2632f1c
X-MS-Exchange-CrossTenant-originalarrivaltime: 09 Jan 2023 17:01:43.1941
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Ja+7lKe3zMaYIq/YUSf0h/SAURmRCHscCEYyceKuzqvybDc24T6YfCun2gbq9rj1Zg6KXzs3gSXD0EHPOuwDu5VDQ10ucSCjhe0myUA2wtg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB5998

On 21/12/2021 12:18 pm, Jan Beulich wrote:
> On 16.12.2021 10:54, Andrew Cooper wrote:
>> With all infrastructure in place, advertise the PKS CPUID bit to guests, and
>> let them set CR4.PKS.
>>
>> Experiment with a tweak to the layout of hvm_cr4_guest_valid_bits() so future
>> additions will be just a single added line.
>>
>> The current context switching behaviour is tied to how VT-x works, so leave a
>> safety check in the short term.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> I would like to ask though that you ...
>
>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>> @@ -244,7 +244,7 @@ XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
>>  XEN_CPUFEATURE(MOVDIRI,      6*32+27) /*a  MOVDIRI instruction */
>>  XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*a  MOVDIR64B instruction */
>>  XEN_CPUFEATURE(ENQCMD,        6*32+29) /*   ENQCMD{,S} instructions */
>> -XEN_CPUFEATURE(PKS,           6*32+31) /*   Protection Key for Supervisor */
>> +XEN_CPUFEATURE(PKS,           6*32+31) /*H  Protection Key for Supervisor */
> ... clarify this restriction of not covering shadow mode guests by
> an adjustment to title or description. Aiui the sole reason for
> the restriction is that shadow code doesn't propagate the key bits
> from guest to shadow PTEs?

PKU is only exposed on HAP, so PKS really ought to match.

We indeed don't copy the leaf PKEY into the shadows.  While that ought
to be relatively easy to adjust, we would then have to make sh_page_fault()
cope with seeing PFEC_prot_key.

But honestly, there are far far more important things to spend time on.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 17:34:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 17:34:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473922.734768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEw2i-0001Ap-9k; Mon, 09 Jan 2023 17:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473922.734768; Mon, 09 Jan 2023 17:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEw2i-0001Ai-7E; Mon, 09 Jan 2023 17:34:40 +0000
Received: by outflank-mailman (input) for mailman id 473922;
 Mon, 09 Jan 2023 17:34:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEw2g-0001AU-S9; Mon, 09 Jan 2023 17:34:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEw2g-0004Ja-RZ; Mon, 09 Jan 2023 17:34:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEw2g-0001dG-IM; Mon, 09 Jan 2023 17:34:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEw2g-0001Q3-Hp; Mon, 09 Jan 2023 17:34:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bm4+xQZPeismvCZ6fSoVnBkCP82UfyiJyB+/a3WbQAI=; b=O0gjJhXUgqQ82IH/u0XFukdOYn
	sjuA/VoheqY5mh+9ByAtFM9nzwYIiju1f+M2Q7TECUKs2z/oE5VvSrjkeCIgm6WC2BLDKnPJtmRlW
	Z5cBlWaA5ztEgvXNbhoNgi+1584rfhAJee2CO9+gGkmNoFLTHZuGPcfvLZUn7mRUJq6g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175644-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175644: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=38525f6f73f906699f77a1af86c16b4eaad48e04
X-Osstest-Versions-That:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 17:34:38 +0000

flight 175644 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175644/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  38525f6f73f906699f77a1af86c16b4eaad48e04
baseline version:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2

Last test of basis   175593  2023-01-05 20:00:28 Z    3 days
Testing same since   175644  2023-01-09 14:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   2b21cbbb33..38525f6f73  38525f6f73f906699f77a1af86c16b4eaad48e04 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 17:40:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 17:40:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473932.734780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEw80-0002d2-Tl; Mon, 09 Jan 2023 17:40:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473932.734780; Mon, 09 Jan 2023 17:40:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEw80-0002cv-QR; Mon, 09 Jan 2023 17:40:08 +0000
Received: by outflank-mailman (input) for mailman id 473932;
 Mon, 09 Jan 2023 17:40:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEw7z-0002cl-Iy; Mon, 09 Jan 2023 17:40:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEw7z-0004QH-EB; Mon, 09 Jan 2023 17:40:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEw7z-0001kK-42; Mon, 09 Jan 2023 17:40:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEw7z-0002Cg-3W; Mon, 09 Jan 2023 17:40:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=KRdMCemvCA0LBp8ZLkzySDygNsIFdx2hVtjAUWw/oKk=; b=Fd12oRO4aXNj2X0cDQSP/8eV5w
	ODQX2B3ij++9E8lfkB9ktrayPHHyeges0oKHow55wwHHLKeVnuviwusxcoHTqMeLBVh3bf2/B3gZN
	ZUzMzlbdxrdMrwvorXPd2Rld3xfY35z1btOlh9AgcjtXXqmSDrBhlOCtjWFziuE/tDfg=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-amd64-xsm
Message-Id: <E1pEw7z-0002Cg-3W@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 17:40:07 +0000

branch xen-unstable
xenbranch xen-unstable
job build-amd64-xsm
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175650/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows one enclave to receive a notification in the ERESUME after the
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports creating enclaves with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      also by some distros including Debian for 6 years even
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This is not needed ever since QEMU stopped detecting -liberty; this
      happened with the Meson switch but it is quite likely that the
      library was not really necessary years before.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus. Behaves like a
      CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      Current logging code allows redirecting to a file with `-D`, but
      this requires enabling some logging item with `-d` as well to
      be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock()
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off the logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-amd64-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-amd64-xsm.xen-build --summary-out=tmp/175650.bisection-summary --basis-template=175623 --blessings=real,real-bisect,real-retry qemu-mainline build-amd64-xsm xen-build
Searching for failure / basis pass:
 175643 fail [host=himrod0] / 175637 [host=himrod2] 175627 ok.
Failure / basis pass flights: 175643 / 175627
(tree with no url: minios)
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Basis pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d8d829b89dababf763ab33b8cdd852b2830db3cf-d8d829b89dababf763ab33b8cdd852b2830db3cf git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#528d9f33cad5245c1099d77084c78bb2244d5143-3d83b78285d6e96636130f7d449fd02e2d4deee0 git://xenbits.xen.org/osstest/seabios.git#645a64b4911d7cadf5749d7375544fc2384e70ba-645\
 a64b4911d7cadf5749d7375544fc2384e70ba git://xenbits.xen.org/xen.git#2b21cbbb339fb14414f357a6683b1df74c36fda2-2b21cbbb339fb14414f357a6683b1df74c36fda2
Loaded 5002 nodes in revision graph
Searching for test results:
 175627 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175637 [host=himrod2]
 175643 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175646 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175648 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175649 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175650 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Searching for interesting versions
 Result found: flight 175627 (pass), for basis pass
 Result found: flight 175643 (fail), for basis failure
 Repro found: flight 175646 (pass), for basis pass
 Repro found: flight 175648 (fail), for basis failure
 0 revisions at d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
No revisions left to test, checking graph state.
 Result found: flight 175627 (pass), for last pass
 Result found: flight 175643 (fail), for first failure
 Repro found: flight 175646 (pass), for last pass
 Repro found: flight 175648 (fail), for first failure
 Repro found: flight 175649 (pass), for last pass
 Repro found: flight 175650 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175650/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows one enclave to receive a notification in the ERESUME after the
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports creating enclaves with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      also by some distros including Debian for 6 years even
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      when the script is rerun.  For preserve_env to work, the variables
      need to be empty, so move the default values to config-host.mak
      generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This is not needed ever since QEMU stopped detecting -liberty; this
      happened with the Meson switch but it is quite likely that the
      library was not really necessary years before.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus. It behaves
      like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting to a file with `-D`,
      but this requires enabling some logging item with `-d` as well to
      be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediate locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock(),
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off the logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file,
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-amd64-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
175650: tolerable ALL FAIL

flight 175650 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/175650/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64-xsm               6 xen-build               fail baseline untested


jobs:
 build-amd64-xsm                                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 17:53:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 17:53:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473942.734794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEwKe-0004DJ-6Z; Mon, 09 Jan 2023 17:53:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473942.734794; Mon, 09 Jan 2023 17:53:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEwKe-0004DC-2a; Mon, 09 Jan 2023 17:53:12 +0000
Received: by outflank-mailman (input) for mailman id 473942;
 Mon, 09 Jan 2023 17:53:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPNl=5G=citrix.com=prvs=36677a302=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pEwKc-0004D6-Cu
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 17:53:10 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78b69883-9046-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 18:53:08 +0100 (CET)
Received: from mail-mw2nam04lp2172.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 Jan 2023 12:53:05 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5394.namprd03.prod.outlook.com (2603:10b6:208:294::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 9 Jan
 2023 17:53:03 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Mon, 9 Jan 2023
 17:53:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78b69883-9046-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH] automation: add qemu-system-riscv to riscv64.dockerfile
Thread-Topic: [PATCH] automation: add qemu-system-riscv to riscv64.dockerfile
Thread-Index: AQHZJA/Xy2J+QEhgpUWfMMxFgz4kLK6WXk8A
Date: Mon, 9 Jan 2023 17:53:03 +0000
Message-ID: <a089f748-bd2c-a286-935e-78fa6b66a4f9@citrix.com>
References:
 <8badde729e97ef6508204c5229199b7247c7a3da.1673257832.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <8badde729e97ef6508204c5229199b7247c7a3da.1673257832.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BLAPR03MB5394:EE_
x-ms-office365-filtering-correlation-id: e496bc68-4bed-42af-19f5-08daf26a5ae0
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <C4AD1B6A81B17D4E9955E33ADC59B5E8@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e496bc68-4bed-42af-19f5-08daf26a5ae0
X-MS-Exchange-CrossTenant-originalarrivaltime: 09 Jan 2023 17:53:03.0969
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: IJTDlVSU1dmV9OslntvwEtTWD/1etGH0ydN5M9AKkj2dUTaKl1eBqWtb3ihtvlmliS7pnMSCyAMjGss++TVs7Y780CncQ6JmY1AuL6btrJI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5394

On 09/01/2023 9:50 am, Oleksii Kurochko wrote:
> qemu-system-riscv will be used to run RISC-V Xen binary and
> gather logs for smoke tests.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

I've committed this, and rebuilt the container.  Subsequent Gitlab-CI
runs should be able to run the RISC-V smoke tests.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 18:04:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 18:04:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.473949.734805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEwVP-0005oJ-4X; Mon, 09 Jan 2023 18:04:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 473949.734805; Mon, 09 Jan 2023 18:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEwVP-0005oC-0T; Mon, 09 Jan 2023 18:04:19 +0000
Received: by outflank-mailman (input) for mailman id 473949;
 Mon, 09 Jan 2023 18:04:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEwVN-0005o2-NE; Mon, 09 Jan 2023 18:04:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEwVN-00056A-L7; Mon, 09 Jan 2023 18:04:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEwVN-0002HK-AE; Mon, 09 Jan 2023 18:04:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEwVN-0005Yw-9k; Mon, 09 Jan 2023 18:04:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dBv++21dR6feO+FEuiTzE45YhW2D/JkJ/QuYMkTmFII=; b=wJBdpnP8zY+cScrnAKxJj6EAjr
	3RN4hY6QbEm7VBVKJZuQeseUbailJpq4NzOmdB16EXcn9+hJmWgksHa8Aj3xh6lL04xGJSdLPuf4J
	uranoH7iVcp/WZZZ8z/MjxHNylZuQf/PX38ljBcK47okhXTPQd8a7GXNLMqrUAFTlyl0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175647-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175647: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d6271b657286de80260413684a1f2a63f44ea17b
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 18:04:17 +0000

flight 175647 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175647/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                d6271b657286de80260413684a1f2a63f44ea17b
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    1 days
Failing since        175627  2023-01-08 14:40:14 Z    1 days    5 attempts
Testing same since   175647  2023-01-09 16:07:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2297 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 18:23:21 2023
Message-ID: <8ae9e898-55ba-7fba-6ccc-883bd8b3e7ee@xen.org>
Date: Mon, 9 Jan 2023 18:23:13 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host
 address not provided
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-7-Penny.Zheng@arm.com>
 <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
 <AM0PR08MB453041150588948050F718D4F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <6db41bd2-ab71-422a-4235-a9209e984915@xen.org>
 <AM0PR08MB4530048C87F24524BDE2DCF8F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB4530048C87F24524BDE2DCF8F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 09/01/2023 11:58, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Monday, January 9, 2023 6:58 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
>> when host address not provided
>>
>>
>>
>> On 09/01/2023 07:49, Penny Zheng wrote:
>>> Hi Julien
>>
>> Hi Penny,
>>
>>> Happy new year~~~~
>>
>> Happy new year too!
>>
>>>> -----Original Message-----
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: Sunday, January 8, 2023 8:53 PM
>>>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-
>> devel@lists.xenproject.org
>>>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>>>> <sstabellini@kernel.org>; Bertrand Marquis
>>>> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
>>>> <Volodymyr_Babchuk@epam.com>
>>>> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
>>>> when host address not provided
>>>>
>>>> Hi,
>>>>
>>>
>>> A few concerns explain why I didn't choose "struct meminfo" over the two
>>> pointers "struct membank*" and "struct meminfo*":
>>> 1) Memory usage is the main reason.
>>> If we use "struct meminfo" over the current "struct membank*" and
>>> "struct meminfo*", "struct shm_meminfo" will become an array of 256
>>> "struct shm_membank", with "struct shm_membank" itself also containing
>>> a 256-item array, that is 256 * 256, too big for a structure. If I
>>> remember correctly, it leads to a "more than PAGE_SIZE" compile error.
>>
>> I am not aware of any place where we would restrict the size of kinfo in
>> upstream. Can you give me a pointer?
>>
> 
> If I remember correctly, my first version of "struct shm_meminfo" was this
> "big" (256 * 256) structure, and it made the whole Xen binary bigger than 2MB. ;\

Ah, so the problem is that shm_mem is used in bootinfo. Then I think 
we should create a distinct structure for dealing with domain information.

> 
>>> FWIW, either reworking meminfo or using a different structure leads
>>> to sizing down the array, and, hmmm, I don't know which size is
>>> suitable. That's why I prefer pointers and dynamic allocation.
>>
>> I would expect that in most cases you will need only one bank when the host
>> address is not provided. So it seems a bit odd to me to impose a "large"
>> allocation for them.
>>
> 
> Only if the user doesn't define the size as something like (2^a + 2^b + 2^c + ...). ;\
> So maybe 8 or 16 is enough?
> struct new_meminfo {

"new" is a bit strange. The name would want to be changed. Or maybe 
better the structure been defined within the next structure and anonymized.

>      unsigned int nr_banks;
>      struct membank bank[8];
> };
> 
> Correct me if I'm wrong:
> The "struct shm_membank" you are suggesting is looking like this, right?
> struct shm_membank {
>      char shm_id[MAX_SHM_ID_LENGTH];
>      unsigned int nr_shm_borrowers;
>      struct new_meminfo shm_banks;
>      unsigned long total_size;
> };

AFAIU, shm_membank would still be used to get the information from the 
host device-tree. If so, then I am afraid this is not an option for me 
because it would make the code that reserves memory more complex.

Instead, we should create a separate structure that will only be used 
for domain shared memory information.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 18:28:27 2023
From: "Michael Kelley (LINUX)" <mikelley@microsoft.com>
To: Andrew Lutomirski <luto@kernel.org>, Jan Beulich <jbeulich@suse.com>, Dave
 Hansen <dave.hansen@linux.intel.com>, Peter Zijlstra <peterz@infradead.org>
CC: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Problem with pat_enable() and commit 72cbc8f04fe2
Thread-Topic: Problem with pat_enable() and commit 72cbc8f04fe2
Thread-Index: AdkkUwD6aRGwmsjgQS+RzXLKsHA1TA==
Date: Mon, 9 Jan 2023 18:28:15 +0000
Message-ID:
 <BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

I've come across a case with a VM running on Hyper-V that doesn't get
MTRRs, but the PAT is functional.  (This is a Confidential VM using
AMD's SEV-SNP encryption technology with the vTOM option.)  In this
case, the changes in commit 72cbc8f04fe2 ("x86/PAT: Have pat_enabled()
properly reflect state when running on Xen") apply.   pat_enabled() returns
"true", but the MTRRs are not enabled.

But with this commit, there's a problem.  Consider memremap() on a RAM
region, called with MEMREMAP_WB plus MEMREMAP_DEC as the 3rd
argument. Because of the request for a decrypted mapping,
arch_memremap_can_ram_remap() returns false, and a new mapping
must be created, which is appropriate.

The following call stack results:

  memremap()
  arch_memremap_wb()
  ioremap_cache()
  __ioremap_caller()
  memtype_reserve()  <--- pcm is _PAGE_CACHE_MODE_WB
  pat_x_mtrr_type()  <-- only called after commit 72cbc8f04fe2

pat_x_mtrr_type() returns _PAGE_CACHE_MODE_UC_MINUS because
mtrr_type_lookup() fails.  As a result, memremap() erroneously creates the
new mapping as uncached.   This uncached mapping is causing a significant
performance problem in certain Hyper-V Confidential VM configurations.

Any thoughts on resolving this?  Should memtype_reserve() be checking
both pat_enabled() *and* whether MTRRs are enabled before calling
pat_x_mtrr_type()?  Or does that defeat the purpose of commit
72cbc8f04fe2 in the Xen environment?

I'm also looking at how to avoid this combination in a Hyper-V Confidential
VM, but that doesn't address the underlying flaw.

Michael


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 18:31:33 2023
Message-ID: <3075b5f9b180c72bb6736640c54efafcb840d571.camel@gmail.com>
Subject: Re: [PATCH] automation: add qemu-system-riscv to riscv64.dockerfile
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	 <gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Date: Mon, 09 Jan 2023 20:31:22 +0200
In-Reply-To: <a089f748-bd2c-a286-935e-78fa6b66a4f9@citrix.com>
References: 
	<8badde729e97ef6508204c5229199b7247c7a3da.1673257832.git.oleksii.kurochko@gmail.com>
	 <a089f748-bd2c-a286-935e-78fa6b66a4f9@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-09 at 17:53 +0000, Andrew Cooper wrote:
> On 09/01/2023 9:50 am, Oleksii Kurochko wrote:
> > qemu-system-riscv will be used to run RISC-V Xen binary and
> > gather logs for smoke tests.
> >
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> I've committed this, and rebuilt the container.  Subsequent Gitlab-CI
> runs should be able to run the RISC-V smoke tests.
>
Thanks a lot. Will check soon.
> ~Andrew
~Oleksii



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 19:45:21 2023
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-amd64
Message-Id: <E1pEy54-0001fP-KT@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 19:45:14 +0000

branch xen-unstable
xenbranch xen-unstable
job build-amd64
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175659/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows an enclave to receive a notification via ERESUME after an
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate handling such AEX notifications.
      
      Whether the hardware supports creating enclaves with AEX-notify
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support for exposing the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      even by some distros, including Debian, for 6 years
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This is not needed ever since QEMU stopped detecting -liberty; this
      happened with the Meson switch but it is quite likely that the
      library was not really necessary years before.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus. It behaves
      like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting to a file with `-D`, but
      this also requires enabling some logging item with `-d` in order to
      be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock(),
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file,
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-amd64.xen-build --summary-out=tmp/175659.bisection-summary --basis-template=175623 --blessings=real,real-bisect,real-retry qemu-mainline build-amd64 xen-build
Searching for failure / basis pass:
 175647 fail [host=himrod0] / 175637 [host=himrod2] 175627 ok.
Failure / basis pass flights: 175647 / 175627
(tree with no url: minios)
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d6271b657286de80260413684a1f2a63f44ea17b 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Basis pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d8d829b89dababf763ab33b8cdd852b2830db3cf-d8d829b89dababf763ab33b8cdd852b2830db3cf git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#528d9f33cad5245c1099d77084c78bb2244d5143-d6271b657286de80260413684a1f2a63f44ea17b git://xenbits.xen.org/osstest/seabios.git#645a64b4911d7cadf5749d7375544fc2384e70ba-645\
 a64b4911d7cadf5749d7375544fc2384e70ba git://xenbits.xen.org/xen.git#2b21cbbb339fb14414f357a6683b1df74c36fda2-2b21cbbb339fb14414f357a6683b1df74c36fda2
Loaded 5002 nodes in revision graph
Searching for test results:
 175627 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175631 [host=himrod2]
 175640 [host=himrod2]
 175637 [host=himrod2]
 175643 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175647 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d6271b657286de80260413684a1f2a63f44ea17b 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175652 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175656 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d6271b657286de80260413684a1f2a63f44ea17b 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175657 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175658 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175659 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Searching for interesting versions
 Result found: flight 175627 (pass), for basis pass
 Result found: flight 175647 (fail), for basis failure
 Repro found: flight 175652 (pass), for basis pass
 Repro found: flight 175656 (fail), for basis failure
 0 revisions at d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
No revisions left to test, checking graph state.
 Result found: flight 175627 (pass), for last pass
 Result found: flight 175643 (fail), for first failure
 Repro found: flight 175652 (pass), for last pass
 Repro found: flight 175657 (fail), for first failure
 Repro found: flight 175658 (pass), for last pass
 Repro found: flight 175659 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175659/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows an enclave to receive a notification via ERESUME after an
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate handling such AEX notifications.
      
      Whether the hardware supports creating enclaves with AEX-notify
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support for exposing the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      even by some distros, including Debian, for 6 years
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This has not been needed since QEMU stopped detecting -liberty;
      that happened with the Meson switch, but the library had quite
      likely been unnecessary for years before that.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus.  It
      behaves like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable the
      8-bit data path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
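
The feature pair described in the commit message above can be sketched as
follows. This is a minimal illustration only; the names CFDriveState and
cf_set_features, and the struct layout, are hypothetical and do not reflect
QEMU's actual IDE code.

```c
#include <assert.h>
#include <stdint.h>

/* SET FEATURES subcommands used by CompactFlash for the 8-bit data path. */
enum {
    CFA_ENABLE_8BIT_PIO  = 0x01,
    CFA_DISABLE_8BIT_PIO = 0x81,
};

typedef struct {
    int data_8bit;              /* 1 while the 8-bit data path is active */
} CFDriveState;

/* Toggle the 8-bit flag; return -1 for unsupported subcommands so the
 * caller can abort the command. */
static int cf_set_features(CFDriveState *s, uint8_t feature)
{
    switch (feature) {
    case CFA_ENABLE_8BIT_PIO:
        s->data_8bit = 1;
        return 0;
    case CFA_DISABLE_8BIT_PIO:
        s->data_8bit = 0;
        return 0;
    default:
        return -1;
    }
}
```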
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting to a file with `-D`,
      but some logging item must also be enabled with `-d` for the
      redirection to be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection without the
      need to enable unwanted debug logs. The previous behaviour is
      retained for the non-daemonized case. The logic is unrolled as an
      `if` for better readability. The qemu_log_level and log_per_thread
      globals reflect the state we want to transition to at this point:
      use them instead of the intermediate locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock(),
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
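
The relaxed check described in the commit message above could be sketched
like this. The helper name and parameters are hypothetical, chosen for
illustration; QEMU's real util/log code is structured differently.

```c
#include <assert.h>
#include <stdbool.h>

/* Decide whether a logfile named with -D should be opened.  When QEMU is
 * daemonized, -D alone is enough, so error output survives the stdio
 * redirection to /dev/null; otherwise the previous behaviour (a logfile
 * plus at least one -d flag) is kept. */
static bool need_to_open_file(bool daemonized, int log_flags,
                              bool has_logfile)
{
    if (daemonized) {
        return has_logfile;
    }
    return log_flags != 0 && has_logfile;
}
```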
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file,
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus providing the same append behavior as before.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
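
The decision the commit message above walks through can be condensed into a
small table. The helper below is purely illustrative (logfile_action and the
enum are invented names, not QEMU's code): with the change, the file is
closed and reopened only when the name changes, while turning all flags off
merely flushes it.

```c
#include <assert.h>
#include <stdbool.h>

enum LogFileAction {
    LOG_KEEP,    /* flags still set, same name: nothing to do      */
    LOG_FLUSH,   /* log_flags dropped to zero: flush, keep it open */
    LOG_REOPEN,  /* name changed, e.g. via the "logfile" command   */
};

static enum LogFileAction logfile_action(bool changed_name, int log_flags)
{
    if (changed_name) {
        return LOG_REOPEN;
    }
    if (log_flags == 0) {
        return LOG_FLUSH;
    }
    return LOG_KEEP;
}
```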
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
175659: tolerable ALL FAIL

flight 175659 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/175659/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 21:32:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 21:32:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474026.734910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEzkf-0004b2-Jf; Mon, 09 Jan 2023 21:32:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474026.734910; Mon, 09 Jan 2023 21:32:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEzkf-0004av-Gq; Mon, 09 Jan 2023 21:32:17 +0000
Received: by outflank-mailman (input) for mailman id 474026;
 Mon, 09 Jan 2023 21:32:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzkd-0004ai-EE; Mon, 09 Jan 2023 21:32:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzkd-0001aU-CE; Mon, 09 Jan 2023 21:32:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzkd-00073e-1h; Mon, 09 Jan 2023 21:32:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzkd-0001C5-1E; Mon, 09 Jan 2023 21:32:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9czV+nBOby4ICdg3XPJWfbPzEZ89NPCMWILMffFC6u4=; b=axsfg0Arm98aTmjyObsDeQH36v
	4/HCPuaasOU44VoN216HuNkx4+vkdTQ82lMK+igwVWWokr6illRHtJus1onlRE2qnOnIp7bvnSWUv
	T3rq5bUrHFUbGZF6CYmpV8y/2xthEZ7Kx9Hq9/Fv4EtXwuu8rnwCGxcEJ1JAHuC4wC68=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175655-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175655: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=33a3408fbbf988aaa8ecc6e721cf83e3ae810e54
X-Osstest-Versions-That:
    ovmf=d8d829b89dababf763ab33b8cdd852b2830db3cf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 21:32:15 +0000

flight 175655 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175655/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54
baseline version:
 ovmf                 d8d829b89dababf763ab33b8cdd852b2830db3cf

Last test of basis   175606  2023-01-06 16:13:16 Z    3 days
Testing same since   175655  2023-01-09 18:11:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Yuanhao Xie <yuanhao.xie@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d8d829b89d..33a3408fbb  33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 21:40:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 21:40:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474034.734922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEzsX-00064j-GV; Mon, 09 Jan 2023 21:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474034.734922; Mon, 09 Jan 2023 21:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEzsX-00064c-B6; Mon, 09 Jan 2023 21:40:25 +0000
Received: by outflank-mailman (input) for mailman id 474034;
 Mon, 09 Jan 2023 21:40:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzsW-00064S-K6; Mon, 09 Jan 2023 21:40:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzsW-0001jG-GF; Mon, 09 Jan 2023 21:40:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzsV-0007Rb-Ve; Mon, 09 Jan 2023 21:40:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzsV-0003kt-VB; Mon, 09 Jan 2023 21:40:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3wbJTSmdyVMqOHzJ1ipHrDrtfp4kc76rtJtyXwaXbdU=; b=nqGHjOE/FyV+AnbQUGkpVSpwgJ
	OZXUAfY7QpM1T/pLy5Ky21FKmvWuTNoR1VWpk+YlndYM2fdXnGD75sir2QEwsRK4FKLoHwe/F4++m
	E4p1ZEBKhBXB3HYdpLowwvTfDSPIF/d11Xd/NKBDcOcfgWDQtIFrmmCsmAXqUGZKUD20=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175654-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175654: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 21:40:23 +0000

flight 175654 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175654/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    1 days
Failing since        175627  2023-01-08 14:40:14 Z    1 days    6 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 21:42:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 21:42:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474042.734932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEzuR-0006dL-R3; Mon, 09 Jan 2023 21:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474042.734932; Mon, 09 Jan 2023 21:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pEzuR-0006dE-OK; Mon, 09 Jan 2023 21:42:23 +0000
Received: by outflank-mailman (input) for mailman id 474042;
 Mon, 09 Jan 2023 21:42:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzuR-0006d4-4I; Mon, 09 Jan 2023 21:42:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzuR-0001ku-3N; Mon, 09 Jan 2023 21:42:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzuQ-0007Wx-O3; Mon, 09 Jan 2023 21:42:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pEzuQ-0005Sx-Nd; Mon, 09 Jan 2023 21:42:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5ozE18a5DxhD1Vctwz2eDCpPSk7dtuuJ9PuXDUSgAfs=; b=3FQ3dVz+/on0EODJs2YLaRb8Qy
	zUy3S7ukcpocvOAtTC3ObOu4DRhVh41ie5bqoiEODNezG0FTiNuOoVuktC8nQJSqBCH2co2e9pEf/
	E+QtJeEEaXOIT/j4xDD6Sha5GZCzOMM0zq6vuLAe9xxunmPnXV6LFAenLP9EJZ0V+3KA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175653-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175653: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=692d04a9ca429ca574d859fa8f43578e03b9f8b3
X-Osstest-Versions-That:
    xen=38525f6f73f906699f77a1af86c16b4eaad48e04
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 21:42:22 +0000

flight 175653 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175653/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  692d04a9ca429ca574d859fa8f43578e03b9f8b3
baseline version:
 xen                  38525f6f73f906699f77a1af86c16b4eaad48e04

Last test of basis   175644  2023-01-09 14:00:27 Z    0 days
Testing same since   175653  2023-01-09 18:05:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@cloud.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   38525f6f73..692d04a9ca  692d04a9ca429ca574d859fa8f43578e03b9f8b3 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 21:56:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 21:56:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474053.734949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF07o-0008IQ-3J; Mon, 09 Jan 2023 21:56:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474053.734949; Mon, 09 Jan 2023 21:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF07o-0008IJ-0E; Mon, 09 Jan 2023 21:56:12 +0000
Received: by outflank-mailman (input) for mailman id 474053;
 Mon, 09 Jan 2023 21:56:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=szyx=5G=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF07l-0008ID-UZ
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 21:56:10 +0000
Received: from sonic312-24.consmr.mail.gq1.yahoo.com
 (sonic312-24.consmr.mail.gq1.yahoo.com [98.137.69.205])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 69b3da00-9068-11ed-91b6-6bf2151ebd3b;
 Mon, 09 Jan 2023 22:56:06 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic312.consmr.mail.gq1.yahoo.com with HTTP; Mon, 9 Jan 2023 21:56:03 +0000
Received: by hermes--production-ne1-7b69748c4d-drrwg (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 3063e915763b79494c5a1a49c3c8038d; 
 Mon, 09 Jan 2023 21:55:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69b3da00-9068-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673301363; bh=XmjEY3smFSBeH1GvhJfSTsSYJbxF46CwAKG5dAKcmRI=; h=From:To:Cc:Subject:Date:References:From:Subject:Reply-To; b=XQ4jxMvUKo2m81vt7fXP25qsUhR1PI6kigX4sFvJZ89oiNMiM0L1oeUygYTBH2w/MnnXB6LLsgprZ1eETJsBJUZU7unMRYVFvuQgiqKSfGq6k0lfkLJA/XuPjLw/OVCPMwELYfwhfWvqJhRygAW2g21Onrw4Kdjo0yOI5Hc0dr5X0Y+qMuutFrGDVLUOFIjiOfwXvLISDGUrKqH4QWciOB9bpYFw1UsQiax8pWoKDgTEM7SqdJH8jMcOdo62agw/QY2eYzmMTZLv8wfSxF7izdB90v6dCbENHe5FqRFKmjMyMZK+wFUodJE1LWuv3CBfY3fD0Bnn/qlBrqb3PPtrbQ==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673301363; bh=lcJGVb/afW/9QWvR+7d3BL+WwFDW/zQtJ9DNlrryTAD=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=QOgQo3tegk8jmBv3Ng7w79LJ/aNZzX5ERi9ymMQFCrWbayQ8uUhDFy0obt6l2lWh2qOHi1fd0LHR93v7XeI9OREnDWnXEVMs2yUZJk5NYDwKtqMRsBjK+TRmMq5lLv0YjnlPIN+FuBaa/P8esnQ3iFqaJ6kEmLPH/ebhYypsHFaL7EhDWTj2YdS6910kIBJo1yUbfe7V3bRH1GAszdULZEUUNHZ2kM8CcksfbHeK0b2CRgIK/aWdevby31waCzAPrfnCwxt6HHtKovcC4b162AwrEyWXRaXdRAIYomU5Wu2aLZ5dFxZ7nPjOKAi2c5y0/SlzRy2xCD8cDmj766CObg==
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Date: Mon,  9 Jan 2023 16:55:42 -0500
Message-Id: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
Content-Length: 19875

Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
as noted in docs/igd-assign.txt in the Qemu source code.

Currently, when the xl toolstack is used to configure a Xen HVM guest with
Intel IGD passthrough using the Qemu upstream device model, a Qemu emulated
PCI device occupies slot 2 and the Intel IGD ends up in a different slot.
This problem often prevents the guest from booting.

The only available workaround is not a good one: configure Xen HVM guests to
use the old and no longer maintained Qemu traditional device model from
xenbits.xen.org, which does reserve slot 2 for the Intel IGD.

To implement this feature in the Qemu upstream device model for Xen HVM
guests, introduce the following new functions, types, and macros:

* XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
* XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
* typedef XenPTQdevRealize function pointer
* XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
* xen_igd_reserve_slot and xen_igd_clear_slot functions

The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
the xl toolstack with the gfx_passthru option enabled, which sets the
igd-passthru=on option to Qemu for the Xen HVM machine type.

The new xen_igd_reserve_slot function also needs a stub implementation in
hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage when Qemu is
configured with --enable-xen and --disable-xen-pci-passthrough; in that
case the stub does nothing.

The new xen_igd_clear_slot function overrides qdev->realize of the parent
PCI device class so that the Intel IGD can occupy slot 2 on the PCI bus,
since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
created in hw/i386/pc_piix.c for the igd-passthru=on case.

Move the call to xen_host_pci_device_get, and the associated error
handling, from xen_pt_realize to the new xen_igd_clear_slot function to
initialize the device class and vendor values, which enables the checks for
the Intel IGD to succeed. The verification that the host device is an
Intel IGD to be passed through is done by checking the domain, bus, slot,
and function values, as well as by checking that gfx_passthru is enabled,
the device class is VGA, and the device vendor is Intel.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Notes that might be helpful to reviewers of patched code in hw/xen:

The new functions and types are based on recommendations from Qemu docs:
https://qemu.readthedocs.io/en/latest/devel/qom.html

Notes that might be helpful to reviewers of patched code in hw/i386:

The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
not affect builds that do not have CONFIG_XEN defined.

xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
existing function that is only true when Qemu is built with
xen-pci-passthrough enabled and the administrator has configured the Xen
HVM guest with Qemu's igd-passthru=on option.

v2: Remove From: <email address> tag at top of commit message

v3: Changed the test for the Intel IGD in xen_igd_clear_slot:

    if (is_igd_vga_passthrough(&s->real_device) &&
        (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {

    is changed to

    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    I hoped that I could use the test in v2, since it matches the
    other tests for the Intel IGD in Qemu and Xen, but those tests
    do not work because the necessary data structures are not set with
    their values yet. So instead use the test that the administrator
    has enabled gfx_passthru and the device address on the host is
    02.0. This test does detect the Intel IGD correctly.

v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
    email address to match the address used by the same author in commits
    be9c61da and c0e86b76
    
    Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc

v5: The patch of xen_pt.c was re-worked to allow a more consistent test
    for the Intel IGD that uses the same criteria as in other places.
    This involved moving the call to xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot and updating the checks for the
    Intel IGD in xen_igd_clear_slot:
    
    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    is changed to

    if (is_igd_vga_passthrough(&s->real_device) &&
        s->real_device.domain == 0 && s->real_device.bus == 0 &&
        s->real_device.dev == 2 && s->real_device.func == 0 &&
        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {

    Added an explanation for the move of xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot to the commit message.

    Rebase.

v6: Fix logging by removing these lines from the move from xen_pt_realize
    to xen_igd_clear_slot that was done in v5:

    XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
               " to devfn 0x%x\n",
               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
               s->dev.devfn);

    This log needs to be in xen_pt_realize because s->dev.devfn is not
    set yet in xen_igd_clear_slot.

v7: Inhibit out of context log message and needless processing by
    adding 2 lines at the top of the new xen_igd_clear_slot function:

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
        return;

    Rebase. This removed an unnecessary header file from xen_pt.h 

 hw/i386/pc_piix.c    | 127 ++++++++++++++++++++++++++++++++-----------
 hw/xen/xen_pt.c      |  49 ++++++++++++++---
 hw/xen/xen_pt.h      |  16 ++++++
 hw/xen/xen_pt_stub.c |   4 ++
 4 files changed, 154 insertions(+), 42 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index b48047f50c..34a9736b5e 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -32,6 +32,7 @@
 #include "hw/i386/pc.h"
 #include "hw/i386/apic.h"
 #include "hw/pci-host/i440fx.h"
+#include "hw/rtc/mc146818rtc.h"
 #include "hw/southbridge/piix.h"
 #include "hw/display/ramfb.h"
 #include "hw/firmware/smbios.h"
@@ -40,16 +41,16 @@
 #include "hw/usb.h"
 #include "net/net.h"
 #include "hw/ide/pci.h"
-#include "hw/ide/piix.h"
 #include "hw/irq.h"
 #include "sysemu/kvm.h"
 #include "hw/kvm/clock.h"
 #include "hw/sysbus.h"
+#include "hw/i2c/i2c.h"
 #include "hw/i2c/smbus_eeprom.h"
 #include "hw/xen/xen-x86.h"
+#include "hw/xen/xen.h"
 #include "exec/memory.h"
 #include "hw/acpi/acpi.h"
-#include "hw/acpi/piix4.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
 #include "sysemu/xen.h"
@@ -66,6 +67,7 @@
 #include "kvm/kvm-cpu.h"
 
 #define MAX_IDE_BUS 2
+#define XEN_IOAPIC_NUM_PIRQS 128ULL
 
 #ifdef CONFIG_IDE_ISA
 static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
@@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
 static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
 #endif
 
+/*
+ * Return the global irq number corresponding to a given device irq
+ * pin. We could also use the bus number to have a more precise mapping.
+ */
+static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
+{
+    int slot_addend;
+    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
+    return (pci_intx + slot_addend) & 3;
+}
+
+static void piix_intx_routing_notifier_xen(PCIDevice *dev)
+{
+    int i;
+
+    /* Scan for updates to PCI link routes (0x60-0x63). */
+    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
+        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
+        if (v & 0x80) {
+            v = 0;
+        }
+        v &= 0xf;
+        xen_set_pci_link_route(i, v);
+    }
+}
+
 /* PC hardware initialisation */
 static void pc_init1(MachineState *machine,
                      const char *host_type, const char *pci_type)
@@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
     MemoryRegion *system_io = get_system_io();
     PCIBus *pci_bus;
     ISABus *isa_bus;
-    int piix3_devfn = -1;
+    Object *piix4_pm;
     qemu_irq smi_irq;
     GSIState *gsi_state;
     BusState *idebus[MAX_IDE_BUS];
@@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
     gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
 
     if (pcmc->pci_enabled) {
-        PIIX3State *piix3;
+        DeviceState *dev;
         PCIDevice *pci_dev;
-        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
-                                         : TYPE_PIIX3_DEVICE;
+        int i;
 
         pci_bus = i440fx_init(pci_type,
                               i440fx_host,
@@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
                               x86ms->below_4g_mem_size,
                               x86ms->above_4g_mem_size,
                               pci_memory, ram_memory);
+        pci_bus_map_irqs(pci_bus,
+                         xen_enabled() ? xen_pci_slot_get_pirq
+                                       : pci_slot_get_pirq);
         pcms->bus = pci_bus;
 
-        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
-        piix3 = PIIX3_PCI_DEVICE(pci_dev);
-        piix3->pic = x86ms->gsi;
-        piix3_devfn = piix3->dev.devfn;
-        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
+        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
+        object_property_set_bool(OBJECT(pci_dev), "has-usb",
+                                 machine_usb(machine), &error_abort);
+        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
+                                 x86_machine_is_acpi_enabled(x86ms),
+                                 &error_abort);
+        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
+        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
+                                 x86_machine_is_smm_enabled(x86ms),
+                                 &error_abort);
+        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
+
+        if (xen_enabled()) {
+            pci_device_set_intx_routing_notifier(
+                        pci_dev, piix_intx_routing_notifier_xen);
+
+            /*
+             * Xen supports additional interrupt routes from the PCI devices to
+             * the IOAPIC: the four pins of each PCI device on the bus are also
+             * connected to the IOAPIC directly.
+             * These additional routes can be discovered through ACPI.
+             */
+            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
+                         XEN_IOAPIC_NUM_PIRQS);
+        }
+
+        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
+        for (i = 0; i < ISA_NUM_IRQS; i++) {
+            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
+        }
+        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
+        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
+                                                             "rtc"));
+        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
+        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
+        pci_ide_create_devs(PCI_DEVICE(dev));
+        idebus[0] = qdev_get_child_bus(dev, "ide.0");
+        idebus[1] = qdev_get_child_bus(dev, "ide.1");
     } else {
         pci_bus = NULL;
+        piix4_pm = NULL;
         isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
                               &error_abort);
+        isa_bus_irqs(isa_bus, x86ms->gsi);
+
+        rtc_state = isa_new(TYPE_MC146818_RTC);
+        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
+        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
+
         i8257_dma_init(isa_bus, 0);
         pcms->hpet_enabled = false;
+        idebus[0] = NULL;
+        idebus[1] = NULL;
     }
-    isa_bus_irqs(isa_bus, x86ms->gsi);
 
     if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
         pc_i8259_create(isa_bus, gsi_state->i8259_irq);
@@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
     }
 
     /* init basic PC hardware */
-    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
+    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
                          0x4);
 
     pc_nic_init(pcmc, isa_bus, pci_bus);
 
     if (pcmc->pci_enabled) {
-        PCIDevice *dev;
-
-        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
-        pci_ide_create_devs(dev);
-        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
-        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
         pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
     }
 #ifdef CONFIG_IDE_ISA
@@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
     }
 #endif
 
-    if (pcmc->pci_enabled && machine_usb(machine)) {
-        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
-    }
-
-    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
-        PCIDevice *piix4_pm;
-
+    if (piix4_pm) {
         smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
-        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
-        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
-        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
-                          x86_machine_is_smm_enabled(x86ms));
-        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
 
-        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
         qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
         pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
         /* TODO: Populate SPD eeprom data.  */
@@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
                                  object_property_allow_set_link,
                                  OBJ_PROP_LINK_STRONG);
         object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
-                                 OBJECT(piix4_pm), &error_abort);
+                                 piix4_pm, &error_abort);
     }
 
     if (machine->nvdimms_state->is_enabled) {
@@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
     }
 
     pc_xen_hvm_init_pci(machine);
+    if (xen_igd_gfx_pt_enabled()) {
+        xen_igd_reserve_slot(pcms->bus);
+    }
     pci_create_simple(pcms->bus, -1, "xen-platform");
 }
 #endif
@@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
     pc_i440fx_machine_options(m);
     m->alias = "pc";
     m->is_default = true;
+#ifdef CONFIG_MICROVM_DEFAULT
+    m->is_default = false;
+#else
+    m->is_default = true;
+#endif
 }
 
 DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,
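[Editor's note: the slot reservation mechanism the patch builds on is a per-bus bitmask: bit N set in `slot_reserved_mask` means slot N is not available for automatic device placement, and `0x4UL` (bit 2) therefore reserves slot 2 for the IGD. A minimal standalone sketch of that check, with hypothetical names that are NOT QEMU's internal API, only the `PCI_SLOT()` devfn encoding matches the real headers:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* PCI devfn packs device (slot) and function: slot = bits 7..3, func = bits 2..0.
 * This matches the PCI_SLOT() macro in QEMU's PCI headers. */
#define PCI_SLOT(devfn)  (((devfn) >> 3) & 0x1f)

/* Bit N of the mask reserves slot N; 1 << 2 == 0x4UL reserves slot 2,
 * the value the patch defines as XEN_PCI_IGD_SLOT_MASK. */
#define IGD_SLOT_MASK  (1UL << 2)

/* Hypothetical helper: would slot auto-assignment have to skip this devfn? */
static bool slot_reserved(unsigned long reserved_mask, uint8_t devfn)
{
    return (reserved_mask >> PCI_SLOT(devfn)) & 1;
}
```

With the mask set, devfn 00:02.0 (slot 2, function 0) is reserved while slot 3 is still free, which is why `xen_igd_clear_slot()` must clear the bit again before the passed-through IGD itself can realize at slot 2.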
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 0ec7e52183..eff38155ef 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
                s->dev.devfn);
 
-    xen_host_pci_device_get(&s->real_device,
-                            s->hostaddr.domain, s->hostaddr.bus,
-                            s->hostaddr.slot, s->hostaddr.function,
-                            errp);
-    if (*errp) {
-        error_append_hint(errp, "Failed to \"open\" the real pci device");
-        return;
-    }
-
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
@@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
     PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
 }
 
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
+    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
+}
+
+static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
+{
+    ERRP_GUARD();
+    PCIDevice *pci_dev = (PCIDevice *)qdev;
+    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
+    PCIBus *pci_bus = pci_get_bus(pci_dev);
+
+    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
+        return;
+
+    xen_host_pci_device_get(&s->real_device,
+                            s->hostaddr.domain, s->hostaddr.bus,
+                            s->hostaddr.slot, s->hostaddr.function,
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
+        return;
+    }
+
+    if (is_igd_vga_passthrough(&s->real_device) &&
+        s->real_device.domain == 0 && s->real_device.bus == 0 &&
+        s->real_device.dev == 2 && s->real_device.func == 0 &&
+        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
+        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
+        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
+    }
+    xpdc->pci_qdev_realize(qdev, errp);
+}
+
 static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
+    xpdc->pci_qdev_realize = dc->realize;
+    dc->realize = xen_igd_clear_slot;
     k->realize = xen_pt_realize;
     k->exit = xen_pt_unregister_device;
     k->config_read = xen_pt_pci_read_config;
@@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
     .instance_size = sizeof(XenPCIPassthroughState),
     .instance_finalize = xen_pci_passthrough_finalize,
     .class_init = xen_pci_passthrough_class_init,
+    .class_size = sizeof(XenPTDeviceClass),
     .instance_init = xen_pci_passthrough_instance_init,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_CONVENTIONAL_PCI_DEVICE },
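[Editor's note: the `class_init` change above uses a common QOM interposition pattern: save the inherited `realize` pointer in a widened class struct, install a wrapper in its place, and have the wrapper chain to the saved pointer after doing its extra work (here, clearing the reserved-slot bit). A simplified, non-QOM sketch of the pattern, all names hypothetical; a real QOM wrapper would fetch the class from the object rather than use a global:]

```c
#include <assert.h>

/* Base "class" with a realize hook, standing in for DeviceClass. */
typedef struct DeviceClassSketch {
    void (*realize)(int *state);
} DeviceClassSketch;

/* Widened subclass, standing in for XenPTDeviceClass: it embeds the parent
 * and adds a slot to remember the parent's realize (pci_qdev_realize). */
typedef struct XenPTClassSketch {
    DeviceClassSketch parent;
    void (*saved_realize)(int *state);
} XenPTClassSketch;

static XenPTClassSketch xen_class;

/* The inherited behavior. */
static void base_realize(int *state) { *state |= 1; }

/* The wrapper: do the extra work, then chain to the saved pointer,
 * mirroring xen_igd_clear_slot() calling xpdc->pci_qdev_realize(). */
static void wrapped_realize(int *state)
{
    *state |= 2;                    /* e.g. clear the reserved-slot bit  */
    xen_class.saved_realize(state); /* chain to the original realize     */
}

/* class_init: save the parent's hook first, then override it. */
static void class_init_sketch(XenPTClassSketch *k)
{
    k->parent.realize = base_realize;
    k->saved_realize = k->parent.realize; /* xpdc->pci_qdev_realize = dc->realize */
    k->parent.realize = wrapped_realize;  /* dc->realize = xen_igd_clear_slot     */
}
```

The ordering inside `class_init_sketch` is the essential part: saving before overriding is what lets the wrapper call back into the original without recursing, and it is why the patch also has to set `.class_size = sizeof(XenPTDeviceClass)` so the widened class struct is actually allocated.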
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index cf10fc7bbf..8c25932b4b 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -2,6 +2,7 @@
 #define XEN_PT_H
 
 #include "hw/xen/xen_common.h"
+#include "hw/pci/pci_bus.h"
 #include "xen-host-pci-device.h"
 #include "qom/object.h"
 
@@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
 #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
 OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
 
+#define XEN_PT_DEVICE_CLASS(klass) \
+    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
+#define XEN_PT_DEVICE_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
+
+typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
+
+typedef struct XenPTDeviceClass {
+    PCIDeviceClass parent_class;
+    XenPTQdevRealize pci_qdev_realize;
+} XenPTDeviceClass;
+
 uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void xen_igd_reserve_slot(PCIBus *pci_bus);
 void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
                                            XenHostPCIDevice *dev);
@@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
 
 #define XEN_PCI_INTEL_OPREGION 0xfc
 
+#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
+
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
     XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
index 2d8cac8d54..5c108446a8 100644
--- a/hw/xen/xen_pt_stub.c
+++ b/hw/xen/xen_pt_stub.c
@@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
         error_setg(errp, "Xen PCI passthrough support not built in");
     }
 }
+
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 22:00:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 22:00:24 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175642-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175642: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 22:00:22 +0000

flight 175642 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175642/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                1fe4fd6f5cad346e598593af36caeadc4f5d4fa9
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   94 days
Failing since        173470  2022-10-08 06:21:34 Z   93 days  196 attempts
Testing same since   175634  2023-01-09 01:40:59 Z    0 days    2 attempts

------------------------------------------------------------
3317 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505645 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 09 22:33:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 22:33:52 +0000
Date: Mon, 9 Jan 2023 17:33:23 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230109172738-mutt-send-email-mst@kernel.org>
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
MIME-Version: 1.0
In-Reply-To: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workaround is not good: configure Xen HVM guests to use
> the old and no longer maintained Qemu traditional device model available
> from xenbits.xen.org, which does reserve slot 2 for the Intel IGD.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.

I don't like how slot_reserved_mask is set initially and then cleared on
device realize. To me this looks like a fragile hack. I suggest one of
the following:
1. adding a new mask, "slot-manual-mask" or some such, blocking
auto-allocation of a given slot without blocking its use if an address
is specified on the command line.
2. adding a special property that overrides slot_reserved_mask for a
given device.

Both need changes in the PCI core but look like something generally
useful.
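
For illustration, suggestion 1 could behave roughly like the toy model
below. This is only a sketch of the proposed policy: neither
`slot_manual_mask` nor `pick_devfn` exists in QEMU's PCI core today, and
the names are made up here.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of a hypothetical "slot-manual-mask": it blocks a slot for
 * auto-allocation only, while slot_reserved_mask blocks it entirely.
 * All names here are illustrative, not QEMU API.
 */
#define PCI_DEVFN(slot, func) (((slot) << 3) | (func))
#define PCI_SLOT(devfn)       (((devfn) >> 3) & 0x1f)

/* Returns the devfn to use, or -1 if the request must be rejected. */
static int pick_devfn(uint32_t reserved_mask, uint32_t manual_mask,
                      int requested_devfn)
{
    if (requested_devfn >= 0) {
        /* Explicit addr= on the command line: only reserved_mask blocks it. */
        if (reserved_mask & (1u << PCI_SLOT(requested_devfn))) {
            return -1;
        }
        return requested_devfn;
    }
    /* Auto-allocation: both masks block a slot. */
    for (int slot = 0; slot < 32; slot++) {
        uint32_t bit = 1u << slot;
        if (!(reserved_mask & bit) && !(manual_mask & bit)) {
            return PCI_DEVFN(slot, 0);
        }
    }
    return -1;
}
```

With a manual mask on slot 2, auto-allocation skips slot 2 but an
explicit `addr=2.0` still succeeds, which is exactly what IGD
passthrough needs.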


> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values, which enables the checks
> for the Intel IGD to succeed. The verification that the host device is
> an Intel IGD to be passed through is done by checking the domain, bus,
> slot, and function values, as well as by checking that gfx_passthru is
> enabled, the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> ---
> Notes that might be helpful to reviewers of patched code in hw/xen:
> 
> The new functions and types are based on recommendations from Qemu docs:
> https://qemu.readthedocs.io/en/latest/devel/qom.html
> 
> Notes that might be helpful to reviewers of patched code in hw/i386:
> 
> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> not affect builds that do not have CONFIG_XEN defined.
> 
> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> existing function that is only true when Qemu is built with
> xen-pci-passthrough enabled and the administrator has configured the Xen
> HVM guest with Qemu's igd-passthru=on option.
> 
> v2: Remove From: <email address> tag at top of commit message
> 
> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> 
>     is changed to
> 
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     I hoped that I could use the test in v2, since it matches the
>     other tests for the Intel IGD in Qemu and Xen, but those tests
>     do not work because the necessary data structures are not set with
>     their values yet. So instead use the test that the administrator
>     has enabled gfx_passthru and the device address on the host is
>     02.0. This test does detect the Intel IGD correctly.
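
The "02.0" host address test and the XEN_PCI_IGD_SLOT_MASK value both
follow from the standard PCI devfn encoding, sketched here for
reference (the macros mirror the usual PCI definitions):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standard PCI devfn encoding: the host address "02.0" means slot 2,
 * function 0.  XEN_PCI_IGD_SLOT_MASK (0x4) is simply bit 2 of
 * slot_reserved_mask, i.e. 1UL << slot.
 */
#define PCI_DEVFN(slot, func) (((slot) << 3) | (func))
#define PCI_SLOT(devfn)       (((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)       ((devfn) & 0x07)
#define XEN_PCI_IGD_SLOT_MASK (1UL << 2)  /* == 0x4UL */
```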
> 
> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>     email address to match the address used by the same author in commits
>     be9c61da and c0e86b76
>     
>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> 
> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>     for the Intel IGD that uses the same criteria as in other places.
>     This involved moving the call to xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>     Intel IGD in xen_igd_clear_slot:
>     
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     is changed to
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
>     Added an explanation for the move of xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot to the commit message.
> 
>     Rebase.
> 
> v6: Fix logging by removing these lines from the move from xen_pt_realize
>     to xen_igd_clear_slot that was done in v5:
> 
>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>                " to devfn 0x%x\n",
>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                s->dev.devfn);
> 
>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>     set yet in xen_igd_clear_slot.
> 
> v7: Inhibit out of context log message and needless processing by
>     adding 2 lines at the top of the new xen_igd_clear_slot function:
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>         return;
> 
>     Rebase. This removed an unnecessary header file from xen_pt.h.
> 
>  hw/i386/pc_piix.c    | 127 ++++++++++++++++++++++++++++++++-----------
>  hw/xen/xen_pt.c      |  49 ++++++++++++++---
>  hw/xen/xen_pt.h      |  16 ++++++
>  hw/xen/xen_pt_stub.c |   4 ++
>  4 files changed, 154 insertions(+), 42 deletions(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index b48047f50c..34a9736b5e 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -32,6 +32,7 @@
>  #include "hw/i386/pc.h"
>  #include "hw/i386/apic.h"
>  #include "hw/pci-host/i440fx.h"
> +#include "hw/rtc/mc146818rtc.h"
>  #include "hw/southbridge/piix.h"
>  #include "hw/display/ramfb.h"
>  #include "hw/firmware/smbios.h"
> @@ -40,16 +41,16 @@
>  #include "hw/usb.h"
>  #include "net/net.h"
>  #include "hw/ide/pci.h"
> -#include "hw/ide/piix.h"
>  #include "hw/irq.h"
>  #include "sysemu/kvm.h"
>  #include "hw/kvm/clock.h"
>  #include "hw/sysbus.h"
> +#include "hw/i2c/i2c.h"
>  #include "hw/i2c/smbus_eeprom.h"
>  #include "hw/xen/xen-x86.h"
> +#include "hw/xen/xen.h"
>  #include "exec/memory.h"
>  #include "hw/acpi/acpi.h"
> -#include "hw/acpi/piix4.h"
>  #include "qapi/error.h"
>  #include "qemu/error-report.h"
>  #include "sysemu/xen.h"
> @@ -66,6 +67,7 @@
>  #include "kvm/kvm-cpu.h"
>  
>  #define MAX_IDE_BUS 2
> +#define XEN_IOAPIC_NUM_PIRQS 128ULL
>  
>  #ifdef CONFIG_IDE_ISA
>  static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
> @@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
>  static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
>  #endif
>  
> +/*
> + * Return the global irq number corresponding to a given device irq
> + * pin. We could also use the bus number to have a more precise mapping.
> + */
> +static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
> +{
> +    int slot_addend;
> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
> +    return (pci_intx + slot_addend) & 3;
> +}
> +
> +static void piix_intx_routing_notifier_xen(PCIDevice *dev)
> +{
> +    int i;
> +
> +    /* Scan for updates to PCI link routes (0x60-0x63). */
> +    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
> +        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
> +        if (v & 0x80) {
> +            v = 0;
> +        }
> +        v &= 0xf;
> +        xen_set_pci_link_route(i, v);
> +    }
> +}
> +
>  /* PC hardware initialisation */
>  static void pc_init1(MachineState *machine,
>                       const char *host_type, const char *pci_type)
> @@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
>      MemoryRegion *system_io = get_system_io();
>      PCIBus *pci_bus;
>      ISABus *isa_bus;
> -    int piix3_devfn = -1;
> +    Object *piix4_pm;
>      qemu_irq smi_irq;
>      GSIState *gsi_state;
>      BusState *idebus[MAX_IDE_BUS];
> @@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
>  
>      if (pcmc->pci_enabled) {
> -        PIIX3State *piix3;
> +        DeviceState *dev;
>          PCIDevice *pci_dev;
> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> -                                         : TYPE_PIIX3_DEVICE;
> +        int i;
>  
>          pci_bus = i440fx_init(pci_type,
>                                i440fx_host,
> @@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
>                                x86ms->below_4g_mem_size,
>                                x86ms->above_4g_mem_size,
>                                pci_memory, ram_memory);
> +        pci_bus_map_irqs(pci_bus,
> +                         xen_enabled() ? xen_pci_slot_get_pirq
> +                                       : pci_slot_get_pirq);
>          pcms->bus = pci_bus;
>  
> -        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
> -        piix3 = PIIX3_PCI_DEVICE(pci_dev);
> -        piix3->pic = x86ms->gsi;
> -        piix3_devfn = piix3->dev.devfn;
> -        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> +        object_property_set_bool(OBJECT(pci_dev), "has-usb",
> +                                 machine_usb(machine), &error_abort);
> +        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> +                                 x86_machine_is_acpi_enabled(x86ms),
> +                                 &error_abort);
> +        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
> +        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
> +                                 x86_machine_is_smm_enabled(x86ms),
> +                                 &error_abort);
> +        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
> +
> +        if (xen_enabled()) {
> +            pci_device_set_intx_routing_notifier(
> +                        pci_dev, piix_intx_routing_notifier_xen);
> +
> +            /*
> +             * Xen supports additional interrupt routes from the PCI devices to
> +             * the IOAPIC: the four pins of each PCI device on the bus are also
> +             * connected to the IOAPIC directly.
> +             * These additional routes can be discovered through ACPI.
> +             */
> +            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
> +                         XEN_IOAPIC_NUM_PIRQS);
> +        }
> +
> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
> +        for (i = 0; i < ISA_NUM_IRQS; i++) {
> +            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
> +        }
> +        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
> +        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
> +                                                             "rtc"));
> +        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
> +        pci_ide_create_devs(PCI_DEVICE(dev));
> +        idebus[0] = qdev_get_child_bus(dev, "ide.0");
> +        idebus[1] = qdev_get_child_bus(dev, "ide.1");
>      } else {
>          pci_bus = NULL;
> +        piix4_pm = NULL;
>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
>                                &error_abort);
> +        isa_bus_irqs(isa_bus, x86ms->gsi);
> +
> +        rtc_state = isa_new(TYPE_MC146818_RTC);
> +        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
> +        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
> +
>          i8257_dma_init(isa_bus, 0);
>          pcms->hpet_enabled = false;
> +        idebus[0] = NULL;
> +        idebus[1] = NULL;
>      }
> -    isa_bus_irqs(isa_bus, x86ms->gsi);
>  
>      if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
>          pc_i8259_create(isa_bus, gsi_state->i8259_irq);
> @@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
>      }
>  
>      /* init basic PC hardware */
> -    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
> +    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
>                           0x4);
>  
>      pc_nic_init(pcmc, isa_bus, pci_bus);
>  
>      if (pcmc->pci_enabled) {
> -        PCIDevice *dev;
> -
> -        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
> -        pci_ide_create_devs(dev);
> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
>          pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
>      }
>  #ifdef CONFIG_IDE_ISA
> @@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
>      }
>  #endif
>  
> -    if (pcmc->pci_enabled && machine_usb(machine)) {
> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
> -    }
> -
> -    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
> -        PCIDevice *piix4_pm;
> -
> +    if (piix4_pm) {
>          smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
> -        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
> -        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
> -        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
> -                          x86_machine_is_smm_enabled(x86ms));
> -        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
>  
> -        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
>          qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
>          pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
>          /* TODO: Populate SPD eeprom data.  */
> @@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
>                                   object_property_allow_set_link,
>                                   OBJ_PROP_LINK_STRONG);
>          object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
> -                                 OBJECT(piix4_pm), &error_abort);
> +                                 piix4_pm, &error_abort);
>      }
>  
>      if (machine->nvdimms_state->is_enabled) {
> @@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>      }
>  
>      pc_xen_hvm_init_pci(machine);
> +    if (xen_igd_gfx_pt_enabled()) {
> +        xen_igd_reserve_slot(pcms->bus);
> +    }
>      pci_create_simple(pcms->bus, -1, "xen-platform");
>  }
>  #endif
> @@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
>      pc_i440fx_machine_options(m);
>      m->alias = "pc";
>      m->is_default = true;
> +#ifdef CONFIG_MICROVM_DEFAULT
> +    m->is_default = false;
> +#else
> +    m->is_default = true;
> +#endif
>  }
>  
>  DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 0ec7e52183..eff38155ef 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                 s->dev.devfn);
>  
> -    xen_host_pci_device_get(&s->real_device,
> -                            s->hostaddr.domain, s->hostaddr.bus,
> -                            s->hostaddr.slot, s->hostaddr.function,
> -                            errp);
> -    if (*errp) {
> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> -        return;
> -    }
> -
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>  }
>  
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> +}
> +
> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> +{
> +    ERRP_GUARD();
> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> +
> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
> +        return;
> +
> +    xen_host_pci_device_get(&s->real_device,
> +                            s->hostaddr.domain, s->hostaddr.bus,
> +                            s->hostaddr.slot, s->hostaddr.function,
> +                            errp);
> +    if (*errp) {
> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> +        return;
> +    }
> +
> +    if (is_igd_vga_passthrough(&s->real_device) &&
> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> +    }
> +    xpdc->pci_qdev_realize(qdev, errp);
> +}
> +
>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>  
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> +    xpdc->pci_qdev_realize = dc->realize;
> +    dc->realize = xen_igd_clear_slot;
>      k->realize = xen_pt_realize;
>      k->exit = xen_pt_unregister_device;
>      k->config_read = xen_pt_pci_read_config;
> @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>      .instance_size = sizeof(XenPCIPassthroughState),
>      .instance_finalize = xen_pci_passthrough_finalize,
>      .class_init = xen_pci_passthrough_class_init,
> +    .class_size = sizeof(XenPTDeviceClass),
>      .instance_init = xen_pci_passthrough_instance_init,
>      .interfaces = (InterfaceInfo[]) {
>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index cf10fc7bbf..8c25932b4b 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -2,6 +2,7 @@
>  #define XEN_PT_H
>  
>  #include "hw/xen/xen_common.h"
> +#include "hw/pci/pci_bus.h"
>  #include "xen-host-pci-device.h"
>  #include "qom/object.h"
>  
> @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>  
> +#define XEN_PT_DEVICE_CLASS(klass) \
> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> +
> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> +
> +typedef struct XenPTDeviceClass {
> +    PCIDeviceClass parent_class;
> +    XenPTQdevRealize pci_qdev_realize;
> +} XenPTDeviceClass;
> +
>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>                                             XenHostPCIDevice *dev);
> @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
>  
>  #define XEN_PCI_INTEL_OPREGION 0xfc
>  
> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
> +
>  typedef enum {
>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> index 2d8cac8d54..5c108446a8 100644
> --- a/hw/xen/xen_pt_stub.c
> +++ b/hw/xen/xen_pt_stub.c
> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>          error_setg(errp, "Xen PCI passthrough support not built in");
>      }
>  }
> +
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +}
> -- 
> 2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:08:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 23:08:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474080.735000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1Fu-0008Eu-4v; Mon, 09 Jan 2023 23:08:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474080.735000; Mon, 09 Jan 2023 23:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1Fu-0008En-1S; Mon, 09 Jan 2023 23:08:38 +0000
Received: by outflank-mailman (input) for mailman id 474080;
 Mon, 09 Jan 2023 23:08:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=szyx=5G=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF1Fs-0008EZ-Aa
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 23:08:36 +0000
Received: from sonic313-20.consmr.mail.gq1.yahoo.com
 (sonic313-20.consmr.mail.gq1.yahoo.com [98.137.65.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 897301bb-9072-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 00:08:34 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic313.consmr.mail.gq1.yahoo.com with HTTP; Mon, 9 Jan 2023 23:08:31 +0000
Received: by hermes--production-ne1-7b69748c4d-bgkrh (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 21a3ec8339abfccbadeaea4f5ef812e7; 
 Mon, 09 Jan 2023 23:08:29 +0000 (UTC)
X-Inumbo-ID: 897301bb-9072-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: anthony.perard@citrix.com
Cc: xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	qemu-devel@nongnu.org
Subject: [XEN PATCH 1/3] libxl/dm: Use "pc" machine type for Intel IGD passthrough
Date: Mon,  9 Jan 2023 18:08:11 -0500
Message-Id: <a38db9a2b829add5612e1bce44ae54ecd96e96b7.1673300848.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673300848.git.brchuckz@aol.com>
References: <cover.1673300848.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Length: 4647

The default qemu upstream "xenfv" machine type that is used when an HVM
guest is configured for Intel IGD passthrough assigns slot 2 to the
xen platform pci device. It is a requirement that slot 2 be assigned to
the Intel IGD when it is passed through as the primary graphics adapter.
Using the "pc" machine type instead of the "xenfv" machine type in that
case makes it possible for qemu upstream to assign slot 2 to the IGD.

Using the qemu "pc" machine and adding the xen platform device on the
qemu command line, instead of using the qemu "xenfv" machine which adds
the xen platform device automatically earlier in the guest creation
process, does come with some degradation of startup performance: startup
is slower, and some vga drivers in use during early boot are unable to
display the screen at the native resolution of the monitor. However,
once the guest operating system (Windows or Linux) is fully loaded,
there is no noticeable difference in the performance of the guest
between the "pc" machine type and the "xenfv" machine type.

With this patch, libxl continues to use the default "xenfv" machine type
with the default settings of xen_platform_pci enabled and igd
gfx_passthru disabled. The patch only affects machines configured with
gfx_passthru enabled.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Reviewers might find this patch easier to review by looking at the
resulting code in the patched file rather than at the diff: the diff is
hard to follow because the patch moves the check for igd gfx_passthru
before the check for disabling the xen platform device. That change was
made because it results in a simpler logical flow in the resulting code.
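
The resulting selection order can be modeled as below. This is a
simplified sketch, not the real libxl code (which lives in
libxl__build_device_model_args_new and also handles detection failures);
the machine strings are abbreviated to the parts relevant here, with the
IGD string taken from the patch itself.

```c
#include <stdbool.h>
#include <string.h>

/*
 * Simplified model of the machine-type selection after this patch:
 * IGD passthrough is checked first (assuming detection returned
 * LIBXL_GFX_PASSTHRU_KIND_IGD), then the xen_platform_pci setting.
 */
static const char *machine_arg(bool igd_passthru, bool xen_platform_pci)
{
    if (igd_passthru) {
        /* "pc" so qemu can give slot 2 to the IGD. */
        return "pc,accel=xen,suppress-vmdesc=on,igd-passthru=on";
    }
    if (!xen_platform_pci) {
        /* "pc" does not auto-add the xen-platform device (string assumed). */
        return "pc,accel=xen,suppress-vmdesc=on";
    }
    return "xenfv";
}
```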

 tools/libs/light/libxl_dm.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index fc264a3a13..2048815611 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1809,7 +1809,26 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, b_info->extra_pv[i]);
         break;
     case LIBXL_DOMAIN_TYPE_HVM:
-        if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+            enum libxl_gfx_passthru_kind gfx_passthru_kind =
+                            libxl__detect_gfx_passthru_kind(gc, guest_config);
+            switch (gfx_passthru_kind) {
+            case LIBXL_GFX_PASSTHRU_KIND_IGD:
+                /*
+                 * Using the machine "pc" because with the default machine "xenfv"
+                 * the xen-platform device will be assigned to slot 2, but with
+                 * GFX_PASSTHRU_KIND_IGD, slot 2 needs to be reserved for the Intel IGD.
+                 */
+                machinearg = libxl__strdup(gc, "pc,accel=xen,suppress-vmdesc=on,igd-passthru=on");
+                break;
+            case LIBXL_GFX_PASSTHRU_KIND_DEFAULT:
+                LOGD(ERROR, guest_domid, "unable to detect required gfx_passthru_kind");
+                return ERROR_FAIL;
+            default:
+                LOGD(ERROR, guest_domid, "invalid value for gfx_passthru_kind");
+                return ERROR_INVAL;
+            }
+        } else if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
             /* Switching here to the machine "pc" which does not add
              * the xen-platform device instead of the default "xenfv" machine.
              */
@@ -1831,22 +1850,6 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
         }
 
-        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
-            enum libxl_gfx_passthru_kind gfx_passthru_kind =
-                            libxl__detect_gfx_passthru_kind(gc, guest_config);
-            switch (gfx_passthru_kind) {
-            case LIBXL_GFX_PASSTHRU_KIND_IGD:
-                machinearg = GCSPRINTF("%s,igd-passthru=on", machinearg);
-                break;
-            case LIBXL_GFX_PASSTHRU_KIND_DEFAULT:
-                LOGD(ERROR, guest_domid, "unable to detect required gfx_passthru_kind");
-                return ERROR_FAIL;
-            default:
-                LOGD(ERROR, guest_domid, "invalid value for gfx_passthru_kind");
-                return ERROR_INVAL;
-            }
-        }
-
         flexarray_append(dm_args, machinearg);
         for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
             flexarray_append(dm_args, b_info->extra_hvm[i]);
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:08:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 23:08:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474081.735006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1Fu-0008IQ-Gt; Mon, 09 Jan 2023 23:08:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474081.735006; Mon, 09 Jan 2023 23:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1Fu-0008I1-AN; Mon, 09 Jan 2023 23:08:38 +0000
Received: by outflank-mailman (input) for mailman id 474081;
 Mon, 09 Jan 2023 23:08:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=szyx=5G=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF1Ft-0008EZ-2c
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 23:08:37 +0000
Received: from sonic314-20.consmr.mail.gq1.yahoo.com
 (sonic314-20.consmr.mail.gq1.yahoo.com [98.137.69.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88dcb11c-9072-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 00:08:34 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Mon, 9 Jan 2023 23:08:30 +0000
Received: by hermes--production-ne1-7b69748c4d-bgkrh (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 21a3ec8339abfccbadeaea4f5ef812e7; 
 Mon, 09 Jan 2023 23:08:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88dcb11c-9072-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: anthony.perard@citrix.com
Cc: xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	qemu-devel@nongnu.org
Subject: [XEN PATCH 0/3] Configure qemu upstream correctly by default for igd-passthru
Date: Mon,  9 Jan 2023 18:08:10 -0500
Message-Id: <cover.1673300848.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <cover.1673300848.git.brchuckz.ref@aol.com>
Content-Length: 9696

Sorry for the length of this cover letter but it is helpful to put all
the pros and cons of the two different approaches to solving the problem
of configuring the Intel IGD with qemu upstream and libxl in one place,
which I attempt to do here. Of course the other approach involves a
patch to qemu [1] instead of using this patch series for libxl.

The quick answer:

I think the other patch to qemu is the better option, but I would be OK
if you use this patch series instead.

Details with my reasons for preferring the other patch to qemu over this
patch series to libxl: 

I call attention to the commit message of the first patch which points
out that using the "pc" machine and adding the xen platform device on
the qemu upstream command line is not functionally equivalent to using
the "xenfv" machine which automatically adds the xen platform device
earlier in the guest creation process. As a result, there is a noticeable
reduction in the performance of the guest during startup with the "pc"
machine type even if the xen platform device is added via the qemu
command line options, although eventually both Linux and Windows guests
perform equally well once the guest operating system is fully loaded.

Specifically, startup time is longer and neither the grub vga drivers
nor the windows vga drivers in early startup perform as well when the
xen platform device is added via the qemu command line instead of being
added immediately after the other emulated i440fx pci devices when the
"xenfv" machine type is used.

For example, when using the "pc" machine, which adds the xen platform
device using a command line option, the Linux guest could not display
the grub boot menu at the native resolution of the monitor, but with the
"xenfv" machine, the grub menu is displayed at the full 1920x1080
native resolution of the monitor for testing. So improved startup
performance is an advantage for the patch for qemu.

I also call attention to the last point of the commit message of the
second patch and the comments for reviewers section of the second patch.
This approach, as opposed to fixing this in qemu upstream, makes
maintaining the code in libxl__build_device_model_args_new more
difficult and therefore increases the chances of problems caused by
coding errors and typos for users of libxl. So that is another advantage
of the patch for qemu.

OTOH, fixing this in qemu causes newer qemu versions to behave
differently than previous versions, which the qemu community does not
like, although they seem OK with the other patch since it only affects
qemu "xenfv" machine types. They do not want the patch to affect
toolstacks like libvirt that do not use qemu upstream's
autoconfiguration options as much as libxl does. Of course, libvirt can
manage qemu "xenfv" machines, so existing "xenfv" guests configured
manually by libvirt could be adversely affected by the patch to qemu,
but only if those same guests are also configured for igd-passthrough,
which is likely a very small number of possibly affected libvirt users
of qemu.

A year or two ago I tried to configure guests for pci passthrough on xen
using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
could not convert an xl.cfg file with a configuration item
pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
So it is unlikely there are any users out there using libvirt to
configure xen hvm guests for igd passthrough on xen, and those are the
only users that could be adversely affected by the simpler patch to qemu
to fix this.

The only advantage of this patch series over the qemu patch is that it
does not need any patches to qemu to make Intel IGD configuration
easier with libxl, so the risk of affecting other qemu users is
entirely eliminated if we use this patch series instead of patching
qemu. The cost of patching libxl instead of qemu is reduced startup
performance compared to what could be achieved by patching qemu, and an
increased risk that the tedious process of manually managing the slot
addresses of all the emulated devices will make it more difficult to
keep the libxl code free of bugs.

I will leave it to the maintainer of the code in both qemu and libxl
(Anthony) to decide which, if any, of the patches to apply. I am OK with
either this patch series to libxl or the proposed patch to qemu to fix
this problem, but I do recommend the other patch to qemu over this patch
series because of the improved performance during startup with that
patch and the relatively low risk that any libvirt users will be
adversely affected by that patch.

Brief statement of the problem this patch series solves:

Currently only the qemu traditional device model reserves slot 2 for the
Intel Integrated Graphics Device (IGD) with default settings. Assigning
the Intel IGD to slot 2 is necessary for the device to operate properly
when passed through as the primary graphics adapter. The upstream qemu
device model currently does not reserve slot 2 for the Intel IGD.

This patch series modifies libxl so the upstream qemu device model will
also, with default settings, assign slot 2 to the Intel IGD.

There are three reasons why it is difficult to configure the guest
so the Intel IGD is assigned to slot 2 in the guest using libxl and the
upstream device model, so the patch series is logically organized in
three separate patches, each resolving one of the three reasons.

What each of the three libxl patches does:

1. With the default "xenfv" machine type, qemu upstream is hard-coded
   to assign the xen platform device to slot 2. The first patch fixes
   that by using the "pc" machine instead when gfx_passthru type is igd
   and, if xen_platform_pci is set in the guest config, libxl now assigns
   the xen platform device to slot 3, making it possible to assign the
   IGD to slot 2. The patch only affects guests with the gfx_passthru
   option enabled. The default behavior (xen_platform_pci is enabled
   but gfx_passthru option is disabled) of using the "xenfv" machine
   type is preserved. Another way to describe what the patch does is
   to say that it adds a second exception to the default choice of the
   "xenfv" machine type, with the first exception being that the "pc"
   machine type is also used instead of "xenfv" if the xen platform pci
   device is disabled in the guest xl.cfg file.

2. Currently, with libxl and qemu upstream, most emulated pci devices
   are by default automatically assigned a pci slot, and the emulated
   ones are assigned before the passed through ones, which means that
   even if libxl is patched so the xen platform device will not be
   assigned to slot 2, any other emulated device will be assigned slot 2
   unless libxl explicitly assigns the slot address of each emulated pci
   device in such a way that the IGD will be assigned slot 2. The second
   patch fixes this by hard coding the slot assignment for the emulated
   devices instead of deferring to qemu upstream's auto-assignment which
   does not do what is necessary to configure the Intel IGD correctly.
   With the second patch applied, it is possible to configure the Intel
   IGD correctly by using the @VSLOT parameter in xl.cfg to specify the
   slot address of each passed through pci device in the guest. The
   second patch is also designed to not change the default behavior of
   letting qemu autoconfigure the pci slot addresses when igd
   gfx_passthru is disabled in xl.cfg.

3. For convenience, the third patch automatically assigns slot 2 to the
   Intel IGD when the gfx_passthru type is igd, so with the third patch
   applied it is not necessary to set the @VSLOT parameter to configure
   the Intel IGD correctly.
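
Taken together, the three patches amount to a simple placement rule.
The following standalone sketch (illustrative names, not actual libxl
code) shows the net effect:

```c
#include <assert.h>
#include <stdbool.h>

/* Net effect of the series: the passed-through IGD is pinned to slot 2,
 * and emulated devices take the next free slot starting at 3, so
 * nothing else can land on slot 2. */
static int next_slot = 3;

static int pick_slot(bool is_passed_through_igd)
{
    if (is_passed_through_igd)
        return 2;           /* patch 3: IGD always gets slot 2 */
    return next_slot++;     /* patch 2: emulated devices from slot 3 up */
}
```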

Testing:

I tested a system with Intel IGD passthrough and two other pci devices
passed through, with and without the xen platform device. I also did
tests on guests without any pci passthrough configured. In all cases
tested, libxl behaved as expected. For example, the device model
arguments are only changed if gfx_passthru is set for the IGD, libxl
respected administrator settings such as @VSLOT and xen_platform_pci
with the patch series applied, and not adding the xen platform device to
the guest caused reduced performance because in that case the guest
could not take advantage of the improvements offered by the Xen PV
drivers in the guest. I tested the following emulated devices on my
setup: xen-platform, e1000, and VGA. I also verified the device that is
added by the "hdtype = 'ahci'" xl.cfg option is configured correctly
with the patch applied. I did not test all 12 devices that could be
affected by patch 2 of the series. These include the intel-hda high
definition audio device, a virtio-serial device, etc. One can look
at the second patch for the full list of qemu emulated devices whose
behavior is affected when the guest is configured for igd gfx_passthru.
These devices are also subject to mistakes in the patch not discovered
by the compiler, as mentioned in the comments for reviewers section of
the second patch.

[1] https://lore.kernel.org/qemu-devel/8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com/

Chuck Zmudzinski (3):
  libxl/dm: Use "pc" machine type for Intel IGD passthrough
  libxl/dm: Manage pci slot assignment for Intel IGD passthrough
  libxl/dm: Assign slot 2 by default for Intel IGD passthrough

 tools/libs/light/libxl_dm.c | 227 +++++++++++++++++++++++++++++-------
 1 file changed, 183 insertions(+), 44 deletions(-)

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:08:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 23:08:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474082.735022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1Fx-0000KR-U2; Mon, 09 Jan 2023 23:08:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474082.735022; Mon, 09 Jan 2023 23:08:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1Fx-0000KE-P5; Mon, 09 Jan 2023 23:08:41 +0000
Received: by outflank-mailman (input) for mailman id 474082;
 Mon, 09 Jan 2023 23:08:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=szyx=5G=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF1Fx-0000Gp-3m
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 23:08:41 +0000
Received: from sonic315-55.consmr.mail.gq1.yahoo.com
 (sonic315-55.consmr.mail.gq1.yahoo.com [98.137.65.31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8b4624a4-9072-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 00:08:37 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic315.consmr.mail.gq1.yahoo.com with HTTP; Mon, 9 Jan 2023 23:08:34 +0000
Received: by hermes--production-ne1-7b69748c4d-bgkrh (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 21a3ec8339abfccbadeaea4f5ef812e7; 
 Mon, 09 Jan 2023 23:08:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b4624a4-9072-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: anthony.perard@citrix.com
Cc: xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	qemu-devel@nongnu.org
Subject: [XEN PATCH 2/3] libxl/dm: Manage pci slot assignment for Intel IGD passthrough
Date: Mon,  9 Jan 2023 18:08:12 -0500
Message-Id: <76d06f5d01e01df316230def4f31037695f11c1a.1673300848.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673300848.git.brchuckz@aol.com>
References: <cover.1673300848.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Length: 15129

By default, except for the ich9-usb-uhci device which libxl assigns to
slot 29 (0x1d), libxl defers to qemu upstream's automatic slot assignment
process, which is simply to assign each emulated device to the next
available slot on the pci bus. With this default behavior, libxl and
qemu will not configure the Intel IGD correctly. Specifically, the Intel
IGD must be assigned to slot 2, but the current default behavior is to
assign one of the emulated devices to slot 2.

With this patch libxl uses qemu command line options to specify the slot
address of each pci device in the guest when gfx_passthru is enabled
for the Intel IGD. It uses the same simple algorithm of assigning the
next available slot, except that it starts with slot 3 instead of slot 2
for the emulated devices. This process of slot assignment aims to
simulate the behavior of existing machines as much as possible.
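
The counting rule can be sketched in isolation as follows. This is an
illustrative standalone sketch, not the libxl code itself;
format_device_arg stands in for the GCSPRINTF calls in the patch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Slot 2 is reserved for the passed-through IGD, so the counter for
 * emulated devices starts at 3 and is bumped after every device that
 * gets an explicit addr= property. */
static int next_slot = 3;

static void format_device_arg(char *buf, size_t len, const char *dev)
{
    /* mirrors GCSPRINTF("%s,addr=%x", ...) in the patch */
    snprintf(buf, len, "%s,addr=%x", dev, next_slot);
    next_slot++;
}
```

With this rule the first emulated device lands on slot 3, the second on
slot 4, and so on, leaving slot 2 untouched for the IGD.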

The default behavior (when igd gfx_passthru is disabled) of letting qemu
manage the slot addresses of emulated pci devices is preserved. The
patch also preserves the special treatment for the ich9 usb2 controller
(ich9-usb-ehci1) that libxl currently assigns to slot.function 29.7 and
the associated ich9-usb-uhciN devices to slot 29 (0x1d).

For future maintenance of this code, it is important that pci devices
managed by the libxl__build_device_model_args_new function follow the
logic of this patch: when the guest is configured for Intel IGD
passthrough, use the new local counter next_slot to assign the slot
address instead of letting upstream qemu assign the slot, and otherwise
preserve the current behavior of letting qemu assign the slot address.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
The diff of this patch is easier to review if it is generated using
the -b (aka --ignore-space-change) option of diff/git-diff to filter
out some of the changes that are only due to white space.

This patch is difficult to verify for correctness without testing all
the devices that could be added by the libxl__build_device_model_args_new
function. There are 12 places where the addr=%x option needed to be
added to the arguments of the "-device" qemu option, corresponding to
at least 12 different devices that could be affected by this patch if
it contains mistakes the compiler did not notice. One mistake the
compiler would not notice is a missing next_slot++; statement, which
would result in qemu trying to assign a device to a slot that is
already assigned, which is an error in qemu. I did enough tests to find
some mistakes in the patch, which of course I fixed before submitting
it. So I cannot guarantee that there are no other mistakes, because I
don't have the resources to test the many possible configurations that
could be affected by this patch.
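
The failure mode described above (a forgotten next_slot++;) can be
illustrated with a minimal sketch of per-slot bookkeeping. claim_slot
is hypothetical, standing in for QEMU rejecting a duplicate addr=:

```c
#include <assert.h>
#include <stdbool.h>

/* If a next_slot++ is missing, two -device arguments carry the same
 * addr= value; the second claim fails, as QEMU would error out. */
static bool used[32];

static bool claim_slot(int slot)
{
    if (used[slot])
        return false;   /* duplicate addr=: QEMU reports an error */
    used[slot] = true;
    return true;
}
```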

 tools/libs/light/libxl_dm.c | 168 ++++++++++++++++++++++++++++++------
 1 file changed, 141 insertions(+), 27 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 2048815611..2720b5d4d0 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1205,6 +1205,20 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     const char *path, *chardev;
     bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
     int rc;
+    int next_slot;
+    bool configure_pci_for_igd = false;
+    /*
+     * next_slot is only used when we need to configure the pci
+     * slots for the Intel IGD. Slot 2 will be for the Intel IGD.
+     */
+    next_slot = 3;
+    if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+        enum libxl_gfx_passthru_kind gfx_passthru_kind =
+                        libxl__detect_gfx_passthru_kind(gc, guest_config);
+        if (gfx_passthru_kind == LIBXL_GFX_PASSTHRU_KIND_IGD) {
+            configure_pci_for_igd = true;
+        }
+    }
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1372,6 +1386,20 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 
         if (b_info->cmdline)
             flexarray_vappend(dm_args, "-append", b_info->cmdline, NULL);
+        /*
+         * When the Intel IGD is configured for primary graphics
+         * passthrough, we need to manually add the xen platform
+         * device because the QEMU machine type is "pc". Add it first to
+         * simulate the behavior of the "xenfv" QEMU machine type which
+         * always adds the xen platform device first. But in this case it
+         * will be at slot 3 because we are reserving slot 2 for the IGD.
+         */
+        if (configure_pci_for_igd &&
+            libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+            flexarray_append_pair(dm_args, "-device",
+                        GCSPRINTF("xen-platform,addr=%x", next_slot));
+            next_slot++;
+        }
 
         /* Find out early if one of the disk is on the scsi bus and add a scsi
          * controller. This is done ahead to keep the same behavior as previous
@@ -1381,7 +1409,14 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                 continue;
             }
             if (strncmp(disks[i].vdev, "sd", 2) == 0) {
-                flexarray_vappend(dm_args, "-device", "lsi53c895a", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("lsi53c895a,addr=%x", next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "lsi53c895a",
+                                      NULL);
+                }
                 break;
             }
         }
@@ -1436,31 +1471,67 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, "-spice");
             flexarray_append(dm_args, spiceoptions);
             if (libxl_defbool_val(b_info->u.hvm.spice.vdagent)) {
-                flexarray_vappend(dm_args, "-device", "virtio-serial",
-                    "-chardev", "spicevmc,id=vdagent,name=vdagent", "-device",
-                    "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
-                    NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("virtio-serial,addr=%x", next_slot),
+                        "-chardev", "spicevmc,id=vdagent,name=vdagent",
+                        "-device",
+                        "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
+                        NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "virtio-serial",
+                        "-chardev", "spicevmc,id=vdagent,name=vdagent",
+                        "-device",
+                        "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
+                        NULL);
+                }
             }
         }
 
         switch (b_info->u.hvm.vga.kind) {
         case LIBXL_VGA_INTERFACE_TYPE_STD:
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("VGA,vgamem_mb=%d",
-                libxl__sizekb_to_mb(b_info->video_memkb)));
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("VGA,addr=%x,vgamem_mb=%d", next_slot,
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("VGA,vgamem_mb=%d",
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+            }
             break;
         case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("cirrus-vga,vgamem_mb=%d",
-                libxl__sizekb_to_mb(b_info->video_memkb)));
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("cirrus-vga,addr=%x,vgamem_mb=%d", next_slot,
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("cirrus-vga,vgamem_mb=%d",
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+            }
             break;
         case LIBXL_VGA_INTERFACE_TYPE_NONE:
             break;
         case LIBXL_VGA_INTERFACE_TYPE_QXL:
             /* QXL have 2 ram regions, ram and vram */
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64",ram_size_mb=%"PRIu64,
-                (b_info->video_memkb/2/1024), (b_info->video_memkb/2/1024) ) );
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("qxl-vga,addr=%x,vram_size_mb=%"PRIu64
+                    ",ram_size_mb=%"PRIu64, next_slot,
+                    (b_info->video_memkb/2/1024),
+                    (b_info->video_memkb/2/1024) ) );
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64
+                    ",ram_size_mb=%"PRIu64,
+                    (b_info->video_memkb/2/1024),
+                    (b_info->video_memkb/2/1024) ) );
+            }
             break;
         default:
             LOGD(ERROR, guest_domid, "Invalid emulated video card specified");
@@ -1496,8 +1567,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         } else if (b_info->u.hvm.usbversion) {
             switch (b_info->u.hvm.usbversion) {
             case 1:
-                flexarray_vappend(dm_args,
-                    "-device", "piix3-usb-uhci,id=usb", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("piix3-usb-uhci,addr=%x,id=usb",
+                                  next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args,
+                        "-device", "piix3-usb-uhci,id=usb", NULL);
+                }
                 break;
             case 2:
                 flexarray_append_pair(dm_args, "-device",
@@ -1509,8 +1587,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                         i, 2*(i-1), i-1));
                 break;
             case 3:
-                flexarray_vappend(dm_args,
-                    "-device", "nec-usb-xhci,id=usb", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("nec-usb-xhci,addr=%x,id=usb",
+                                  next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args,
+                        "-device", "nec-usb-xhci,id=usb", NULL);
+                }
                 break;
             default:
                 LOGD(ERROR, guest_domid, "usbversion parameter is invalid, "
@@ -1542,8 +1627,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 
             switch (soundhw) {
             case LIBXL__QEMU_SOUNDHW_HDA:
-                flexarray_vappend(dm_args, "-device", "intel-hda",
-                                  "-device", "hda-duplex", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("intel-hda,addr=%x", next_slot),
+                        "-device", "hda-duplex", NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "intel-hda",
+                                      "-device", "hda-duplex", NULL);
+                }
                 break;
             default:
                 flexarray_append_pair(dm_args, "-device",
@@ -1573,10 +1665,18 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                                                 guest_domid, nics[i].devid,
                                                 LIBXL_NIC_TYPE_VIF_IOEMU);
                 flexarray_append(dm_args, "-device");
-                flexarray_append(dm_args,
-                   GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s",
-                             nics[i].model, nics[i].devid,
-                             nics[i].devid, smac));
+                if (configure_pci_for_igd) {
+                    flexarray_append(dm_args,
+                        GCSPRINTF("%s,addr=%x,id=nic%d,netdev=net%d,mac=%s",
+                                  nics[i].model, next_slot, nics[i].devid,
+                                  nics[i].devid, smac));
+                    next_slot++;
+                } else {
+                    flexarray_append(dm_args,
+                        GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s",
+                                  nics[i].model, nics[i].devid,
+                                  nics[i].devid, smac));
+                }
                 flexarray_append(dm_args, "-netdev");
                 flexarray_append(dm_args,
                                  GCSPRINTF("type=tap,id=net%d,ifname=%s,"
@@ -1865,8 +1965,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         flexarray_append_pair(dm_envs, "XEN_DOMAIN_ID", GCSPRINTF("%d", guest_domid));
 
-        if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI)
-            flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
+        if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI) {
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                            GCSPRINTF("ahci,addr=%x,id=ahci0", next_slot));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
+            }
+        }
         for (i = 0; i < num_disks; i++) {
             int disk, part;
             int dev_number =
@@ -2043,7 +2150,14 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         switch (b_info->u.hvm.vendor_device) {
         case LIBXL_VENDOR_DEVICE_XENSERVER:
             flexarray_append(dm_args, "-device");
-            flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
+            if (configure_pci_for_igd) {
+                flexarray_append(dm_args,
+                    GCSPRINTF("xen-pvdevice,addr=%x,device-id=0xc000",
+                              next_slot));
+                next_slot++;
+            } else {
+                flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
+            }
             break;
         default:
             break;
-- 
2.39.0
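The net effect of the hunks above, when configure_pci_for_igd is set, is that every emulated device gets an explicit addr= property instead of letting QEMU pick the next free slot. An abridged, hypothetical resulting device-model command line (slot 2 is reserved for the IGD, so next_slot starts at 3; the exact numbers depend on which devices are configured, and GCSPRINTF's "%x" prints hex without a 0x prefix):

```
qemu-system-i386 ... \
    -device xen-platform,addr=3 \
    -device VGA,addr=4,vgamem_mb=16 \
    -device intel-hda,addr=5 -device hda-duplex
```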



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:08:43 2023
From: Chuck Zmudzinski <brchuckz@aol.com>
To: anthony.perard@citrix.com
Cc: xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	qemu-devel@nongnu.org
Subject: [XEN PATCH 3/3] libxl/dm: Assign slot 2 by default for Intel IGD passthrough
Date: Mon,  9 Jan 2023 18:08:13 -0500
Message-Id: <27bb3979f234c8de6b51be7bb8195e3cacb5181c.1673300848.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673300848.git.brchuckz@aol.com>
References: <cover.1673300848.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Length: 2519

The administrator can manually specify the virtual slot addresses of
passed-through PCI devices on the guest's PCI bus using the @VSLOT
notation in xl.cfg. With this patch, libxl by default assigns the Intel
IGD to slot 2 when gfx_passthru is configured for the Intel IGD, so it
is no longer necessary to use the @VSLOT setting to configure the IGD
correctly. The patch also leaves explicit @VSLOT settings by the
administrator untouched, so in that case it has no effect on guest
behavior.

The default behavior, letting QEMU manage the slot addresses of
passed-through PCI devices when gfx_passthru is disabled and the
administrator does not set @VSLOT, is also preserved.
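
For illustration, the manual configuration this patch makes unnecessary
might look like the following xl.cfg fragment (the host BDF 00:02.0 and
the slot number are examples; @2 pins the device to virtual slot 2):

```
# Intel IGD passthrough; without this patch the IGD has to be
# pinned to virtual slot 2 by hand with the @VSLOT notation:
gfx_passthru = 1
pci = [ '00:02.0@2' ]
```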

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
 tools/libs/light/libxl_dm.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 2720b5d4d0..b51ebae643 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1207,6 +1207,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     int rc;
     int next_slot;
     bool configure_pci_for_igd = false;
+    const int igd_slot = 2;
     /*
      * next_slot is only used when we need to configure the pci
      * slots for the Intel IGD. Slot 2 will be for the Intel IGD.
@@ -2173,6 +2174,27 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     flexarray_append(dm_envs, NULL);
     if (envs)
         *envs = (char **) flexarray_contents(dm_envs);
+    if (configure_pci_for_igd) {
+        libxl_device_pci *pci = NULL;
+        for (i = 0; i < guest_config->num_pcidevs; i++) {
+            pci = &guest_config->pcidevs[i];
+            if (!pci->vdevfn) {
+                /*
+                 * Find the Intel IGD and configure it for slot 2.
+                 * Configure any other devices for slot next_slot.
+                 * Since the guest is configured for IGD passthrough,
+                 * assume the device on the host at slot 2 is the IGD.
+                 */
+                if (pci->domain == 0 && pci->bus == 0 &&
+                    pci->dev == igd_slot && pci->func == 0) {
+                    pci->vdevfn = PCI_DEVFN(igd_slot, 0);
+                } else {
+                    pci->vdevfn = PCI_DEVFN(next_slot, 0);
+                    next_slot++;
+                }
+            }
+        }
+    }
     return 0;
 }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:18:42 2023
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-i386
Message-Id: <E1pF1PY-0003xB-Ia@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 23:18:36 +0000

branch xen-unstable
xenbranch xen-unstable
job build-i386
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175667/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows one enclave to receive a notification in the ERESUME after the
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports to create enclave with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to allow to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guest.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      also by some distros including Debian for 6 years even
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This has not been needed since QEMU stopped detecting -liberty; that
      happened with the Meson switch, but the library was quite likely not
      really necessary for years before that.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus.  It behaves
      like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash cards use features 0x01 and 0x81 to enable and disable
      the 8-bit data path.  Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
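
As a toy model of the feature toggle described above (QEMU's real implementation is C code in the IDE emulation; the class and method names here are illustrative only, while the 0x01/0x81 values come from the commit message):

```python
# Toy model of the CompactFlash 8-bit data path toggle described above.
CFA_FEATURE_ENABLE_8BIT = 0x01
CFA_FEATURE_DISABLE_8BIT = 0x81

class CompactFlashModel:
    def __init__(self) -> None:
        self.eight_bit = False  # power-on default: 16-bit transfers

    def set_features(self, feature: int) -> bool:
        """Handle a SET FEATURES subcommand; return False if unrecognized."""
        if feature == CFA_FEATURE_ENABLE_8BIT:
            self.eight_bit = True
        elif feature == CFA_FEATURE_DISABLE_8BIT:
            self.eight_bit = False
        else:
            return False  # unknown feature: the device would abort the command
        return True
```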
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting output to a file with
      `-D`, but this also requires enabling some logging item with `-d`
      in order to be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection without the
      need to enable unwanted debug logs. The previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock(),
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
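
The relaxed check described above can be sketched as follows; the function and parameter names are illustrative, not QEMU's, and this only models the decision, not the file handling:

```python
# Hypothetical sketch of the relaxed check: when daemonized, a logfile
# given with -D is opened even if no -d debug items are enabled, so
# error_report() output is not lost in /dev/null.
from typing import Optional

def should_open_logfile(log_flags: int, daemonized: bool,
                        logfile: Optional[str]) -> bool:
    if logfile is None:
        return False
    if daemonized:
        # Redirect stderr users to the logfile regardless of debug flags.
        return True
    # Non-daemonized: previous behaviour, only open when -d enabled something.
    return log_flags != 0
```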
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file,
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
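
A toy illustration of the new behaviour described above: clearing all log flags flushes the current file instead of closing it, so later writes still append. The class and names are illustrative, not QEMU's:

```python
# Toy model: when flags drop to zero, flush instead of close, so the
# file stays open and subsequent logging keeps appending.
import io

class ToyLogger:
    def __init__(self, f: io.TextIOBase) -> None:
        self.f = f
        self.flags = 0

    def set_flags(self, flags: int) -> None:
        self.flags = flags
        if flags == 0:
            # Old code closed (and later reopened) the file here; now we
            # just flush it and keep it open so appending continues.
            self.f.flush()
```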
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-i386.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-i386.xen-build --summary-out=tmp/175667.bisection-summary --basis-template=175623 --blessings=real,real-bisect,real-retry qemu-mainline build-i386 xen-build
Searching for failure / basis pass:
 175654 fail [host=nobling1] / 175637 ok.
Failure / basis pass flights: 175654 / 175637
(tree with no url: minios)
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Basis pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d8d829b89dababf763ab33b8cdd852b2830db3cf-d8d829b89dababf763ab33b8cdd852b2830db3cf git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#528d9f33cad5245c1099d77084c78bb2244d5143-aa96ab7c9df59c615ca82b49c9062819e0a1c287 git://xenbits.xen.org/osstest/seabios.git#645a64b4911d7cadf5749d7375544fc2384e70ba-645\
 a64b4911d7cadf5749d7375544fc2384e70ba git://xenbits.xen.org/xen.git#2b21cbbb339fb14414f357a6683b1df74c36fda2-2b21cbbb339fb14414f357a6683b1df74c36fda2
Loaded 5002 nodes in revision graph
Searching for test results:
 175627 [host=albana1]
 175631 [host=debina1]
 175637 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175643 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175647 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d6271b657286de80260413684a1f2a63f44ea17b 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175654 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175660 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175661 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d6271b657286de80260413684a1f2a63f44ea17b 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175662 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175663 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175666 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175667 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Searching for interesting versions
 Result found: flight 175637 (pass), for basis pass
 Result found: flight 175654 (fail), for basis failure
 Repro found: flight 175660 (pass), for basis pass
 Repro found: flight 175662 (fail), for basis failure
 0 revisions at d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
No revisions left to test, checking graph state.
 Result found: flight 175637 (pass), for last pass
 Result found: flight 175643 (fail), for first failure
 Repro found: flight 175660 (pass), for last pass
 Repro found: flight 175663 (fail), for first failure
 Repro found: flight 175666 (pass), for last pass
 Repro found: flight 175667 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175667/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows one enclave to receive a notification in the ERESUME after the
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports creating enclaves with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
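
The enumeration bits quoted above can be decoded from raw CPUID register values (e.g. taken from a cpuid dump) as in this illustrative sketch; these helpers are not a QEMU API:

```python
# Decode the SGX enumeration bits named in the commit message from the
# raw EAX values of CPUID leaf 0x12, sub-leaves 1 and 0 respectively.
def sgx_aex_notify_supported(eax_leaf_12_subleaf_1: int) -> bool:
    # CPUID.(EAX=0x12,ECX=0x1):EAX[10] - enclaves can opt in to AEX-notify
    return bool((eax_leaf_12_subleaf_1 >> 10) & 1)

def sgx_edeccssa_supported(eax_leaf_12_subleaf_0: int) -> bool:
    # CPUID.(EAX=0x12,ECX=0x0):EAX[11] - ENCLU[EDECCSSA] leaf is available
    return bool((eax_leaf_12_subleaf_0 >> 11) & 1)
```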
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
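
A minimal sketch of the situation described above: "meson introspect --installed" maps installed names to paths, and a path may itself be a relative symlink; a symlink-tree helper can simply link to it as-is. The helper below is illustrative, not QEMU's actual scripts/symlink-install-tree.py:

```python
# Build a tree of symlinks from "meson introspect --installed" JSON.
# Creating a symlink to a symlink is fine, so no target resolution is needed.
import json
import os

def symlink_install_tree(introspect_json: str, dest_dir: str) -> None:
    installed = json.loads(introspect_json)
    for name, target in installed.items():
        link = os.path.join(dest_dir, os.path.basename(name))
        if os.path.lexists(link):
            os.remove(link)
        # target may point at a relative symlink; link to it unchanged.
        os.symlink(target, link)
```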
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      by some distros, including Debian, for as long as 6 years
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      when the script is rerun later.  For preserve_env to work,
      the variables need to start out empty, so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This has not been needed since QEMU stopped detecting -liberty; that
      happened with the Meson switch, but the library was quite likely not
      really necessary for years before that.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus.  It behaves
      like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash cards use features 0x01 and 0x81 to enable and disable
      the 8-bit data path.  Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting output to a file with
      `-D`, but this also requires enabling some logging item with `-d`
      in order to be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection without the
      need to enable unwanted debug logs. The previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock(),
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file,
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-i386.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
175667: tolerable ALL FAIL

flight 175667 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/175667/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-i386                    6 xen-build               fail baseline untested


jobs:
 build-i386                                                   fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:29:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 23:29:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474116.735058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1Zc-0004eT-Dl; Mon, 09 Jan 2023 23:29:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474116.735058; Mon, 09 Jan 2023 23:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1Zc-0004eM-9g; Mon, 09 Jan 2023 23:29:00 +0000
Received: by outflank-mailman (input) for mailman id 474116;
 Mon, 09 Jan 2023 23:28:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=szyx=5G=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF1ZZ-0004eF-Uk
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 23:28:58 +0000
Received: from sonic311-25.consmr.mail.gq1.yahoo.com
 (sonic311-25.consmr.mail.gq1.yahoo.com [98.137.65.206])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 60c807cb-9075-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 00:28:54 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic311.consmr.mail.gq1.yahoo.com with HTTP; Mon, 9 Jan 2023 23:28:52 +0000
Received: by hermes--production-ne1-7b69748c4d-g8q5j (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 71c008aaac249b928d42463620cb772e; 
 Mon, 09 Jan 2023 23:28:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60c807cb-9075-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673306932; bh=0rNGSoRWRSYnXN1sucMLBASe31ssmBptcK0GtyMBgtY=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=LL60Xu8nvrLq7p5WHbhl7D+xOw5Lf3YirLMgMhGxi+uCLwXiC1v/Gn7eHlB54U0ejQ1aI3Fmc2GWJEJ7qcYjS7IMV1XYqPfftAnV+2PcNnL/iZf9O/f7SBX+NtalBVJSxzZv5nO/esK9GIYyw1dVmUYjW4Vos2tbMb4hN5Nz69zYSLyR+/CPI52n3bXYcIVfOVdjQcjInghS12UbMJdjWmEgAI8Qc32BfbSBhIDMnf45RaI7B+w28e+S9lp19tzuvppxqANgMfRum9eM90Bhw3NXh9t10QdzsLVQRtbBkH8W2TfPedET9kX0i9dwmcsyUVi6Z3RomG0lqSEzgMQazA==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673306932; bh=kD/AoFy5IB1tKFWmzyI4mNYcXIY4+IXvNvwP4zZ19aT=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=f5LYYnxs87oMTlIihCwgCII2apF4ULUME2UUD/cWpOGhkv4NKjTuXZgEqahTBWcDriiSOv1tchnqnGaI1A8cD/OYhY23IYYq7C0YayO8KsqDnNq9El19OAIEtlu16Pr+zYUVjZLH1qevDUY3NIS4BSGJ1ym5rRGTvAPFExhmtaJHLYSOLkK5EFWMmjWL+SfxjCfP2L5a/Gi+xdmJpe9wO6nx4SussD2sN8K5z/DlyXo306zscnJjBSeiArwkKw1LX+kBGZO7x9F2WQMOQu+I9BEtXUpdeeKSMdsDRj6s16nm52evyNJnzVhfmgsv64d76hdBvdArf6m/AvwU88a6Cg==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <8c2531a8-ce99-7593-99f8-222076fe6bd6@aol.com>
Date: Mon, 9 Jan 2023 18:28:44 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
 <20230109172738-mutt-send-email-mst@kernel.org>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230109172738-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 22856

On 1/9/23 5:33 PM, Michael S. Tsirkin wrote:
> On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>> as noted in docs/igd-assign.txt in the Qemu source code.
>> 
>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>> a different slot. This problem often prevents the guest from booting.
>> 
>> The only available workaround is not good: Configure Xen HVM guests to use
>> the old and no longer maintained Qemu traditional device model available
>> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>> 
>> To implement this feature in the Qemu upstream device model for Xen HVM
>> guests, introduce the following new functions, types, and macros:
>> 
>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>> * typedef XenPTQdevRealize function pointer
>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>> 
>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>> the xl toolstack with the gfx_passthru option enabled, which sets the
>> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> I don't like how slot_reserved_mask is set initially then cleared on
> device realize.
> To me this looks like a fragile hack. I suggest one of the following
> 1. adding a new mask
> "slot-manual-mask" or some such blocking auto-allocation of a given
> slot without blocking its use if address is specified on command line.
> 2. adding a special property that overrides slot_reserved_mask
> for a given device.
> 
> both need changes in pci core but look like something generally
> useful.

I was hoping not to need to touch pci core, but I understand it would be
better for this patch not to affect machines that are manually configured
on the command line.

However, keep in mind that this patch will only actually reserve the slot
initially for xen hvm machines (machine type "xenfv") that are also configured
with the qemu igd-passthru=on option, which, AFAIK, only applies to machines
with accel=xen. It will not affect kvm users at all. So I don't think this patch
will break many machines out there that manually specify the pci slots. The
only machines it could affect are machines configured for igd-passthru on xen.
This patch also does *not* reserve the slot initially for "xenfv" machines that
are not configured with igd passthrough, which I am sure covers the vast
majority of the xen virtual machines out in the wild.

> 
> 
>> 
>> The new xen_igd_reserve_slot function also needs to be implemented in
>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>> in which case it does nothing.
>> 
>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>> 
>> Move the call to xen_host_pci_device_get, and the associated error
>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>> initialize the device class and vendor values which enables the checks for
>> the Intel IGD to succeed. The verification that the host device is an
>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>> and function values as well as by checking that gfx_passthru is enabled,
>>  the device class is VGA, and the device vendor is Intel.
>> 
>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>> ---
>> Notes that might be helpful to reviewers of patched code in hw/xen:
>> 
>> The new functions and types are based on recommendations from Qemu docs:
>> https://qemu.readthedocs.io/en/latest/devel/qom.html
>> 
>> Notes that might be helpful to reviewers of patched code in hw/i386:
>> 
>> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
>> not affect builds that do not have CONFIG_XEN defined.
>> 
>> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
>> existing function that is only true when Qemu is built with
>> xen-pci-passthrough enabled and the administrator has configured the Xen
>> HVM guest with Qemu's igd-passthru=on option.
>> 
>> v2: Remove From: <email address> tag at top of commit message
>> 
>> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
>> 
>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
>> 
>>     is changed to
>> 
>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>         && (s->hostaddr.function == 0)) {
>> 
>>     I hoped that I could use the test in v2, since it matches the
>>     other tests for the Intel IGD in Qemu and Xen, but those tests
>>     do not work because the necessary data structures are not set with
>>     their values yet. So instead use the test that the administrator
>>     has enabled gfx_passthru and the device address on the host is
>>     02.0. This test does detect the Intel IGD correctly.
>> 
>> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>>     email address to match the address used by the same author in commits
>>     be9c61da and c0e86b76
>>     
>>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
>> 
>> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>>     for the Intel IGD that uses the same criteria as in other places.
>>     This involved moving the call to xen_host_pci_device_get from
>>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>>     Intel IGD in xen_igd_clear_slot:
>>     
>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>         && (s->hostaddr.function == 0)) {
>> 
>>     is changed to
>> 
>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>> 
>>     Added an explanation for the move of xen_host_pci_device_get from
>>     xen_pt_realize to xen_igd_clear_slot to the commit message.
>> 
>>     Rebase.
>> 
>> v6: Fix logging by removing these lines from the move from xen_pt_realize
>>     to xen_igd_clear_slot that was done in v5:
>> 
>>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>>                " to devfn 0x%x\n",
>>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>                s->dev.devfn);
>> 
>>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>>     set yet in xen_igd_clear_slot.
>> 
>> v7: Inhibit out of context log message and needless processing by
>>     adding 2 lines at the top of the new xen_igd_clear_slot function:
>> 
>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>         return;
>> 
>>     Rebase. This removed an unnecessary header file from xen_pt.h 
>> 
>>  hw/i386/pc_piix.c    | 127 ++++++++++++++++++++++++++++++++-----------
>>  hw/xen/xen_pt.c      |  49 ++++++++++++++---
>>  hw/xen/xen_pt.h      |  16 ++++++
>>  hw/xen/xen_pt_stub.c |   4 ++
>>  4 files changed, 154 insertions(+), 42 deletions(-)
>> 
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index b48047f50c..34a9736b5e 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -32,6 +32,7 @@
>>  #include "hw/i386/pc.h"
>>  #include "hw/i386/apic.h"
>>  #include "hw/pci-host/i440fx.h"
>> +#include "hw/rtc/mc146818rtc.h"
>>  #include "hw/southbridge/piix.h"
>>  #include "hw/display/ramfb.h"
>>  #include "hw/firmware/smbios.h"
>> @@ -40,16 +41,16 @@
>>  #include "hw/usb.h"
>>  #include "net/net.h"
>>  #include "hw/ide/pci.h"
>> -#include "hw/ide/piix.h"
>>  #include "hw/irq.h"
>>  #include "sysemu/kvm.h"
>>  #include "hw/kvm/clock.h"
>>  #include "hw/sysbus.h"
>> +#include "hw/i2c/i2c.h"
>>  #include "hw/i2c/smbus_eeprom.h"
>>  #include "hw/xen/xen-x86.h"
>> +#include "hw/xen/xen.h"
>>  #include "exec/memory.h"
>>  #include "hw/acpi/acpi.h"
>> -#include "hw/acpi/piix4.h"
>>  #include "qapi/error.h"
>>  #include "qemu/error-report.h"
>>  #include "sysemu/xen.h"
>> @@ -66,6 +67,7 @@
>>  #include "kvm/kvm-cpu.h"
>>  
>>  #define MAX_IDE_BUS 2
>> +#define XEN_IOAPIC_NUM_PIRQS 128ULL
>>  
>>  #ifdef CONFIG_IDE_ISA
>>  static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
>> @@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
>>  static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
>>  #endif
>>  
>> +/*
>> + * Return the global irq number corresponding to a given device irq
>> + * pin. We could also use the bus number to have a more precise mapping.
>> + */
>> +static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
>> +{
>> +    int slot_addend;
>> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
>> +    return (pci_intx + slot_addend) & 3;
>> +}
>> +
>> +static void piix_intx_routing_notifier_xen(PCIDevice *dev)
>> +{
>> +    int i;
>> +
>> +    /* Scan for updates to PCI link routes (0x60-0x63). */
>> +    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
>> +        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
>> +        if (v & 0x80) {
>> +            v = 0;
>> +        }
>> +        v &= 0xf;
>> +        xen_set_pci_link_route(i, v);
>> +    }
>> +}
>> +
>>  /* PC hardware initialisation */
>>  static void pc_init1(MachineState *machine,
>>                       const char *host_type, const char *pci_type)
>> @@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
>>      MemoryRegion *system_io = get_system_io();
>>      PCIBus *pci_bus;
>>      ISABus *isa_bus;
>> -    int piix3_devfn = -1;
>> +    Object *piix4_pm;
>>      qemu_irq smi_irq;
>>      GSIState *gsi_state;
>>      BusState *idebus[MAX_IDE_BUS];
>> @@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
>>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
>>  
>>      if (pcmc->pci_enabled) {
>> -        PIIX3State *piix3;
>> +        DeviceState *dev;
>>          PCIDevice *pci_dev;
>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>> -                                         : TYPE_PIIX3_DEVICE;
>> +        int i;
>>  
>>          pci_bus = i440fx_init(pci_type,
>>                                i440fx_host,
>> @@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
>>                                x86ms->below_4g_mem_size,
>>                                x86ms->above_4g_mem_size,
>>                                pci_memory, ram_memory);
>> +        pci_bus_map_irqs(pci_bus,
>> +                         xen_enabled() ? xen_pci_slot_get_pirq
>> +                                       : pci_slot_get_pirq);
>>          pcms->bus = pci_bus;
>>  
>> -        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
>> -        piix3 = PIIX3_PCI_DEVICE(pci_dev);
>> -        piix3->pic = x86ms->gsi;
>> -        piix3_devfn = piix3->dev.devfn;
>> -        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>> +        object_property_set_bool(OBJECT(pci_dev), "has-usb",
>> +                                 machine_usb(machine), &error_abort);
>> +        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>> +                                 x86_machine_is_acpi_enabled(x86ms),
>> +                                 &error_abort);
>> +        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
>> +        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
>> +                                 x86_machine_is_smm_enabled(x86ms),
>> +                                 &error_abort);
>> +        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
>> +
>> +        if (xen_enabled()) {
>> +            pci_device_set_intx_routing_notifier(
>> +                        pci_dev, piix_intx_routing_notifier_xen);
>> +
>> +            /*
>> +             * Xen supports additional interrupt routes from the PCI devices to
>> +             * the IOAPIC: the four pins of each PCI device on the bus are also
>> +             * connected to the IOAPIC directly.
>> +             * These additional routes can be discovered through ACPI.
>> +             */
>> +            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
>> +                         XEN_IOAPIC_NUM_PIRQS);
>> +        }
>> +
>> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
>> +        for (i = 0; i < ISA_NUM_IRQS; i++) {
>> +            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
>> +        }
>> +        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
>> +        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
>> +                                                             "rtc"));
>> +        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
>> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
>> +        pci_ide_create_devs(PCI_DEVICE(dev));
>> +        idebus[0] = qdev_get_child_bus(dev, "ide.0");
>> +        idebus[1] = qdev_get_child_bus(dev, "ide.1");
>>      } else {
>>          pci_bus = NULL;
>> +        piix4_pm = NULL;
>>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
>>                                &error_abort);
>> +        isa_bus_irqs(isa_bus, x86ms->gsi);
>> +
>> +        rtc_state = isa_new(TYPE_MC146818_RTC);
>> +        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
>> +        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
>> +
>>          i8257_dma_init(isa_bus, 0);
>>          pcms->hpet_enabled = false;
>> +        idebus[0] = NULL;
>> +        idebus[1] = NULL;
>>      }
>> -    isa_bus_irqs(isa_bus, x86ms->gsi);
>>  
>>      if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
>>          pc_i8259_create(isa_bus, gsi_state->i8259_irq);
>> @@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
>>      }
>>  
>>      /* init basic PC hardware */
>> -    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
>> +    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
>>                           0x4);
>>  
>>      pc_nic_init(pcmc, isa_bus, pci_bus);
>>  
>>      if (pcmc->pci_enabled) {
>> -        PCIDevice *dev;
>> -
>> -        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
>> -        pci_ide_create_devs(dev);
>> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
>> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
>>          pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
>>      }
>>  #ifdef CONFIG_IDE_ISA
>> @@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
>>      }
>>  #endif
>>  
>> -    if (pcmc->pci_enabled && machine_usb(machine)) {
>> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
>> -    }
>> -
>> -    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
>> -        PCIDevice *piix4_pm;
>> -
>> +    if (piix4_pm) {
>>          smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
>> -        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
>> -        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
>> -        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
>> -                          x86_machine_is_smm_enabled(x86ms));
>> -        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
>>  
>> -        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
>>          qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
>>          pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
>>          /* TODO: Populate SPD eeprom data.  */
>> @@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
>>                                   object_property_allow_set_link,
>>                                   OBJ_PROP_LINK_STRONG);
>>          object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
>> -                                 OBJECT(piix4_pm), &error_abort);
>> +                                 piix4_pm, &error_abort);
>>      }
>>  
>>      if (machine->nvdimms_state->is_enabled) {
>> @@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>>      }
>>  
>>      pc_xen_hvm_init_pci(machine);
>> +    if (xen_igd_gfx_pt_enabled()) {
>> +        xen_igd_reserve_slot(pcms->bus);
>> +    }
>>      pci_create_simple(pcms->bus, -1, "xen-platform");
>>  }
>>  #endif
>> @@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
>>      pc_i440fx_machine_options(m);
>>      m->alias = "pc";
>>      m->is_default = true;
>> +#ifdef CONFIG_MICROVM_DEFAULT
>> +    m->is_default = false;
>> +#else
>> +    m->is_default = true;
>> +#endif
>>  }
>>  
>>  DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,
>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> index 0ec7e52183..eff38155ef 100644
>> --- a/hw/xen/xen_pt.c
>> +++ b/hw/xen/xen_pt.c
>> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>                 s->dev.devfn);
>>  
>> -    xen_host_pci_device_get(&s->real_device,
>> -                            s->hostaddr.domain, s->hostaddr.bus,
>> -                            s->hostaddr.slot, s->hostaddr.function,
>> -                            errp);
>> -    if (*errp) {
>> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
>> -        return;
>> -    }
>> -
>>      s->is_virtfn = s->real_device.is_virtfn;
>>      if (s->is_virtfn) {
>>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
>> @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>>  }
>>  
>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>> +{
>> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
>> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
>> +}
>> +
>> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
>> +{
>> +    ERRP_GUARD();
>> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
>> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
>> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
>> +
>> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>> +        return;
>> +
>> +    xen_host_pci_device_get(&s->real_device,
>> +                            s->hostaddr.domain, s->hostaddr.bus,
>> +                            s->hostaddr.slot, s->hostaddr.function,
>> +                            errp);
>> +    if (*errp) {
>> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
>> +        return;
>> +    }
>> +
>> +    if (is_igd_vga_passthrough(&s->real_device) &&
>> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
>> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
>> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
>> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
>> +    }
>> +    xpdc->pci_qdev_realize(qdev, errp);
>> +}
>> +
>>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>>  {
>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>  
>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
>> +    xpdc->pci_qdev_realize = dc->realize;
>> +    dc->realize = xen_igd_clear_slot;
>>      k->realize = xen_pt_realize;
>>      k->exit = xen_pt_unregister_device;
>>      k->config_read = xen_pt_pci_read_config;
>> @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>>      .instance_size = sizeof(XenPCIPassthroughState),
>>      .instance_finalize = xen_pci_passthrough_finalize,
>>      .class_init = xen_pci_passthrough_class_init,
>> +    .class_size = sizeof(XenPTDeviceClass),
>>      .instance_init = xen_pci_passthrough_instance_init,
>>      .interfaces = (InterfaceInfo[]) {
>>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
>> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
>> index cf10fc7bbf..8c25932b4b 100644
>> --- a/hw/xen/xen_pt.h
>> +++ b/hw/xen/xen_pt.h
>> @@ -2,6 +2,7 @@
>>  #define XEN_PT_H
>>  
>>  #include "hw/xen/xen_common.h"
>> +#include "hw/pci/pci_bus.h"
>>  #include "xen-host-pci-device.h"
>>  #include "qom/object.h"
>>  
>> @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
>>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>>  
>> +#define XEN_PT_DEVICE_CLASS(klass) \
>> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
>> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
>> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
>> +
>> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
>> +
>> +typedef struct XenPTDeviceClass {
>> +    PCIDeviceClass parent_class;
>> +    XenPTQdevRealize pci_qdev_realize;
>> +} XenPTDeviceClass;
>> +
>>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
>> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>>                                             XenHostPCIDevice *dev);
>> @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
>>  
>>  #define XEN_PCI_INTEL_OPREGION 0xfc
>>  
>> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
>> +
>>  typedef enum {
>>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
>> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
>> index 2d8cac8d54..5c108446a8 100644
>> --- a/hw/xen/xen_pt_stub.c
>> +++ b/hw/xen/xen_pt_stub.c
>> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>>          error_setg(errp, "Xen PCI passthrough support not built in");
>>      }
>>  }
>> +
>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>> +{
>> +}
>> -- 
>> 2.39.0
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:33:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 23:33:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474124.735075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1eF-00064o-Vn; Mon, 09 Jan 2023 23:33:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474124.735075; Mon, 09 Jan 2023 23:33:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1eF-00064h-T7; Mon, 09 Jan 2023 23:33:47 +0000
Received: by outflank-mailman (input) for mailman id 474124;
 Mon, 09 Jan 2023 23:33:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QidW=5G=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pF1eE-00064b-Ir
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 23:33:46 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0e1be3c4-9076-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 00:33:44 +0100 (CET)
Received: from mail-qt1-f199.google.com (mail-qt1-f199.google.com
 [209.85.160.199]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-645-gmyzOA9_M-e2Fq7cS1mCBA-1; Mon, 09 Jan 2023 18:33:42 -0500
Received: by mail-qt1-f199.google.com with SMTP id
 cm18-20020a05622a251200b003ad06656adaso1834303qtb.22
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 15:33:42 -0800 (PST)
Received: from redhat.com ([191.101.160.154]) by smtp.gmail.com with ESMTPSA id
 de19-20020a05620a371300b006fa16fe93bbsm6221201qkb.15.2023.01.09.15.33.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 15:33:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e1be3c4-9076-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673307223;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=PWTun7gSxXXaZNLHwrimRZlxaWoC/MDjCF+Tw/aPQTM=;
	b=FTOuhuPOeFSgEQPlcI2AUd3UcVw8WZqFJTPWaTfQYGzN0CsU765bdbZ6A1lIhbRod4WNUU
	YX4L5MQV7DQOQE6H9AREAC/onFqmJJiOlQP8B0PeTGGys6iLheU/q9175xTpU+if4bisLa
	i/tguOaWRum/I328wMaxAVNoSNwCa0U=
X-MC-Unique: gmyzOA9_M-e2Fq7cS1mCBA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PWTun7gSxXXaZNLHwrimRZlxaWoC/MDjCF+Tw/aPQTM=;
        b=sdX9ndwDl5kmDAHiJz3KQevUwJzvMEkcji4lDewDi+bU3H9J1hXAwI8MUU2juG0E5i
         mPM+oRwWpMIsWNVPoRfVIuVd+0P2KZHfJUv0N1Ssrt1AR6J+PMmcFypNiKI2llai24y1
         gM+SWT3uyHu/YeFLiyQOScC4xmBD+EjsU8qefvd8KHOh8cFZbcONHxBGYDLbj4wWDf2O
         Nc85sYcUvVkUY53/iHqauPm6SjynPpAx7Ae3P6FBbdxp1+BeIaT/muKPRjdxhBl07l2j
         XQ3SjlatFJ/KDQ6nMz2yX5CvGemyLmmUgWFu1H2I3SR4QpqbYHTKA9bsD58RMX/aONDo
         +ATw==
X-Gm-Message-State: AFqh2kprRlxYUYvnHQ2aThTIc/ceRlLT8tfNK4JJMMazsVssmxYIaXdm
	ItGGZbRuIkpVa8n9z7R2q484RHPz+0htwQFIliZfNU2NrR/cD4dfwfxe+crCxlgqMZqNbJHBVBT
	GIcwe9tJ8VbC/yCDL1dbdaQj3YS0=
X-Received: by 2002:a05:6214:4208:b0:4c7:2ab5:9268 with SMTP id nd8-20020a056214420800b004c72ab59268mr23099304qvb.12.1673307221553;
        Mon, 09 Jan 2023 15:33:41 -0800 (PST)
X-Google-Smtp-Source: AMrXdXuVby5CmKuzjuLrLrmk9a1V/EZDhdj7ltp0WXPdabpLV9eoBzjA6L7KQkbuyuinh/yO8oAxKQ==
X-Received: by 2002:a05:6214:4208:b0:4c7:2ab5:9268 with SMTP id nd8-20020a056214420800b004c72ab59268mr23099277qvb.12.1673307221171;
        Mon, 09 Jan 2023 15:33:41 -0800 (PST)
Date: Mon, 9 Jan 2023 18:33:35 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230109183132-mutt-send-email-mst@kernel.org>
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
MIME-Version: 1.0
In-Reply-To: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workaround is not good: Configure Xen HVM guests to use
> the old and no longer maintained Qemu traditional device model available
> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values which enables the checks for
> the Intel IGD to succeed. The verification that the host device is an
> Intel IGD to be passed through is done by checking the domain, bus, slot,
> and function values as well as by checking that gfx_passthru is enabled,
> the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> ---
> Notes that might be helpful to reviewers of patched code in hw/xen:
> 
> The new functions and types are based on recommendations from Qemu docs:
> https://qemu.readthedocs.io/en/latest/devel/qom.html
> 
> Notes that might be helpful to reviewers of patched code in hw/i386:
> 
> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> not affect builds that do not have CONFIG_XEN defined.

I'm not sure how you can claim that.

...

> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index b48047f50c..34a9736b5e 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -32,6 +32,7 @@
>  #include "hw/i386/pc.h"
>  #include "hw/i386/apic.h"
>  #include "hw/pci-host/i440fx.h"
> +#include "hw/rtc/mc146818rtc.h"
>  #include "hw/southbridge/piix.h"
>  #include "hw/display/ramfb.h"
>  #include "hw/firmware/smbios.h"
> @@ -40,16 +41,16 @@
>  #include "hw/usb.h"
>  #include "net/net.h"
>  #include "hw/ide/pci.h"
> -#include "hw/ide/piix.h"
>  #include "hw/irq.h"
>  #include "sysemu/kvm.h"
>  #include "hw/kvm/clock.h"
>  #include "hw/sysbus.h"
> +#include "hw/i2c/i2c.h"
>  #include "hw/i2c/smbus_eeprom.h"
>  #include "hw/xen/xen-x86.h"
> +#include "hw/xen/xen.h"
>  #include "exec/memory.h"
>  #include "hw/acpi/acpi.h"
> -#include "hw/acpi/piix4.h"
>  #include "qapi/error.h"
>  #include "qemu/error-report.h"
>  #include "sysemu/xen.h"
> @@ -66,6 +67,7 @@
>  #include "kvm/kvm-cpu.h"
>  
>  #define MAX_IDE_BUS 2
> +#define XEN_IOAPIC_NUM_PIRQS 128ULL
>  
>  #ifdef CONFIG_IDE_ISA
>  static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
> @@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
>  static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
>  #endif
>  
> +/*
> + * Return the global irq number corresponding to a given device irq
> + * pin. We could also use the bus number to have a more precise mapping.
> + */
> +static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
> +{
> +    int slot_addend;
> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
> +    return (pci_intx + slot_addend) & 3;
> +}
> +
> +static void piix_intx_routing_notifier_xen(PCIDevice *dev)
> +{
> +    int i;
> +
> +    /* Scan for updates to PCI link routes (0x60-0x63). */
> +    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
> +        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
> +        if (v & 0x80) {
> +            v = 0;
> +        }
> +        v &= 0xf;
> +        xen_set_pci_link_route(i, v);
> +    }
> +}
> +
>  /* PC hardware initialisation */
>  static void pc_init1(MachineState *machine,
>                       const char *host_type, const char *pci_type)
> @@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
>      MemoryRegion *system_io = get_system_io();
>      PCIBus *pci_bus;
>      ISABus *isa_bus;
> -    int piix3_devfn = -1;
> +    Object *piix4_pm;
>      qemu_irq smi_irq;
>      GSIState *gsi_state;
>      BusState *idebus[MAX_IDE_BUS];
> @@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
>  
>      if (pcmc->pci_enabled) {
> -        PIIX3State *piix3;
> +        DeviceState *dev;
>          PCIDevice *pci_dev;
> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> -                                         : TYPE_PIIX3_DEVICE;
> +        int i;
>  
>          pci_bus = i440fx_init(pci_type,
>                                i440fx_host,
> @@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
>                                x86ms->below_4g_mem_size,
>                                x86ms->above_4g_mem_size,
>                                pci_memory, ram_memory);
> +        pci_bus_map_irqs(pci_bus,
> +                         xen_enabled() ? xen_pci_slot_get_pirq
> +                                       : pci_slot_get_pirq);
>          pcms->bus = pci_bus;
>  
> -        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
> -        piix3 = PIIX3_PCI_DEVICE(pci_dev);
> -        piix3->pic = x86ms->gsi;
> -        piix3_devfn = piix3->dev.devfn;
> -        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> +        object_property_set_bool(OBJECT(pci_dev), "has-usb",
> +                                 machine_usb(machine), &error_abort);
> +        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> +                                 x86_machine_is_acpi_enabled(x86ms),
> +                                 &error_abort);
> +        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
> +        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
> +                                 x86_machine_is_smm_enabled(x86ms),
> +                                 &error_abort);
> +        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
> +
> +        if (xen_enabled()) {
> +            pci_device_set_intx_routing_notifier(
> +                        pci_dev, piix_intx_routing_notifier_xen);
> +
> +            /*
> +             * Xen supports additional interrupt routes from the PCI devices to
> +             * the IOAPIC: the four pins of each PCI device on the bus are also
> +             * connected to the IOAPIC directly.
> +             * These additional routes can be discovered through ACPI.
> +             */
> +            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
> +                         XEN_IOAPIC_NUM_PIRQS);
> +        }
> +
> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
> +        for (i = 0; i < ISA_NUM_IRQS; i++) {
> +            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
> +        }
> +        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
> +        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
> +                                                             "rtc"));
> +        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
> +        pci_ide_create_devs(PCI_DEVICE(dev));
> +        idebus[0] = qdev_get_child_bus(dev, "ide.0");
> +        idebus[1] = qdev_get_child_bus(dev, "ide.1");
>      } else {
>          pci_bus = NULL;
> +        piix4_pm = NULL;
>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
>                                &error_abort);
> +        isa_bus_irqs(isa_bus, x86ms->gsi);
> +
> +        rtc_state = isa_new(TYPE_MC146818_RTC);
> +        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
> +        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
> +
>          i8257_dma_init(isa_bus, 0);
>          pcms->hpet_enabled = false;
> +        idebus[0] = NULL;
> +        idebus[1] = NULL;
>      }
> -    isa_bus_irqs(isa_bus, x86ms->gsi);
>  
>      if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
>          pc_i8259_create(isa_bus, gsi_state->i8259_irq);
> @@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
>      }
>  
>      /* init basic PC hardware */
> -    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
> +    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
>                           0x4);
>  
>      pc_nic_init(pcmc, isa_bus, pci_bus);
>  
>      if (pcmc->pci_enabled) {
> -        PCIDevice *dev;
> -
> -        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
> -        pci_ide_create_devs(dev);
> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
>          pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
>      }
>  #ifdef CONFIG_IDE_ISA
> @@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
>      }
>  #endif
>  
> -    if (pcmc->pci_enabled && machine_usb(machine)) {
> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
> -    }
> -
> -    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
> -        PCIDevice *piix4_pm;
> -
> +    if (piix4_pm) {
>          smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
> -        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
> -        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
> -        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
> -                          x86_machine_is_smm_enabled(x86ms));
> -        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
>  
> -        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
>          qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
>          pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
>          /* TODO: Populate SPD eeprom data.  */
> @@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
>                                   object_property_allow_set_link,
>                                   OBJ_PROP_LINK_STRONG);
>          object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
> -                                 OBJECT(piix4_pm), &error_abort);
> +                                 piix4_pm, &error_abort);
>      }
>  
>      if (machine->nvdimms_state->is_enabled) {
> @@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>      }
>  
>      pc_xen_hvm_init_pci(machine);
> +    if (xen_igd_gfx_pt_enabled()) {
> +        xen_igd_reserve_slot(pcms->bus);
> +    }
>      pci_create_simple(pcms->bus, -1, "xen-platform");
>  }
>  #endif
> @@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
>      pc_i440fx_machine_options(m);
>      m->alias = "pc";
>      m->is_default = true;
> +#ifdef CONFIG_MICROVM_DEFAULT
> +    m->is_default = false;
> +#else
> +    m->is_default = true;
> +#endif
>  }
>  
>  DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,


Lots of changes here not guarded by CONFIG_XEN.

-- 
MST



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:35:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 23:35:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474131.735085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1fs-0006h3-Dx; Mon, 09 Jan 2023 23:35:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474131.735085; Mon, 09 Jan 2023 23:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1fs-0006gw-BH; Mon, 09 Jan 2023 23:35:28 +0000
Received: by outflank-mailman (input) for mailman id 474131;
 Mon, 09 Jan 2023 23:35:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QidW=5G=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pF1fq-0006gn-Pk
 for xen-devel@lists.xenproject.org; Mon, 09 Jan 2023 23:35:26 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 49a99b1e-9076-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 00:35:24 +0100 (CET)
Received: from mail-qt1-f198.google.com (mail-qt1-f198.google.com
 [209.85.160.198]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-203-3wPotfuDMTurSReKuHDI0g-1; Mon, 09 Jan 2023 18:35:22 -0500
Received: by mail-qt1-f198.google.com with SMTP id
 e18-20020ac84912000000b003a96d6f436fso4652339qtq.0
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 15:35:22 -0800 (PST)
Received: from redhat.com ([191.101.160.154]) by smtp.gmail.com with ESMTPSA id
 bp37-20020a05620a45a500b006e99290e83fsm6136992qkb.107.2023.01.09.15.35.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 15:35:20 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49a99b1e-9076-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673307323;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uCQhuqCyYYrUwYidZOjnRFW4Stp0dkql6Y3/FGUFmkM=;
	b=eifz0V98wZjFoHIKZt/cOd7cMh8FjXNCD14ijfFm6XNhr5Ja3A3+IsyFWge5clVHheTWYN
	BV1kHpbT3fCs1dq/kE+PrTw3xNFLWxGiGDj0R4Q3/jC/1adRGXaj69o+BAHFxU5id/iGEt
	4Qny/cetFObW9SjCcuqxCzlwLFN7zNE=
X-MC-Unique: 3wPotfuDMTurSReKuHDI0g-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=uCQhuqCyYYrUwYidZOjnRFW4Stp0dkql6Y3/FGUFmkM=;
        b=ssvI4b1mGQgaMRTslBdc3xul5A/okouYC13fWBipQy0FeTHCOvRO8vFZy2539Thv1D
         T7dP7xDkxHdbqwCElUxKxzOQiCOc22/qTok5Vj2cyMOXRt1sYsZQ7N7MBSHvF7n61NWa
         Bmlz93bk8Y3b7wt2OshvhD9ABkX7HYibkT9C/nLLDlgu5cUWUFInYyjdh1rkPI5Bgr+B
         HSLj9bEDd68MN6fLF5hJFB9E6IFicDEqZZbtqnnrEU3n6Cp+9oUEhGpuEieuAKLAvFh0
         OnpD7p3wh+JYuSf4lIw4S2S5G84qRXoSImftZTBaFGfmkbxJ6zvK3m2UvHK1V030S+63
         wL7Q==
X-Gm-Message-State: AFqh2kpnxR2nzRYgSyRNfnLUS3JgxAODeztO6izrVAOicYLn4z8G6yUE
	3Oa2ca8XA7iK/vATrN4mE1MRJKzG70nqgwuc90KoEN+IbZFIAr0xmqxq1McdsS4h94yVVn1sYdt
	qgijwqMxhXr8pAKljGdchFFAVFP8=
X-Received: by 2002:ac8:5e06:0:b0:3ab:5bb4:ae6d with SMTP id h6-20020ac85e06000000b003ab5bb4ae6dmr121013954qtx.43.1673307321689;
        Mon, 09 Jan 2023 15:35:21 -0800 (PST)
X-Google-Smtp-Source: AMrXdXsxRfx2SIl26nDoOzTKTEQ+9/jl7hcSlDyR1SFYdSQ4Zm7h7aG56hVi/f4qStCQE9iwRXD6Ow==
X-Received: by 2002:ac8:5e06:0:b0:3ab:5bb4:ae6d with SMTP id h6-20020ac85e06000000b003ab5bb4ae6dmr121013936qtx.43.1673307321427;
        Mon, 09 Jan 2023 15:35:21 -0800 (PST)
Date: Mon, 9 Jan 2023 18:35:15 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230109183413-mutt-send-email-mst@kernel.org>
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
 <20230109172738-mutt-send-email-mst@kernel.org>
 <8c2531a8-ce99-7593-99f8-222076fe6bd6@aol.com>
MIME-Version: 1.0
In-Reply-To: <8c2531a8-ce99-7593-99f8-222076fe6bd6@aol.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Mon, Jan 09, 2023 at 06:28:44PM -0500, Chuck Zmudzinski wrote:
> On 1/9/23 5:33 PM, Michael S. Tsirkin wrote:
> > On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
> >> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> >> as noted in docs/igd-assign.txt in the Qemu source code.
> >> 
> >> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> >> Intel IGD passthrough to the guest with the Qemu upstream device model,
> >> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> >> a different slot. This problem often prevents the guest from booting.
> >> 
> >> The only available workaround is not good: Configure Xen HVM guests to use
> >> the old and no longer maintained Qemu traditional device model available
> >> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> >> 
> >> To implement this feature in the Qemu upstream device model for Xen HVM
> >> guests, introduce the following new functions, types, and macros:
> >> 
> >> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> >> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> >> * typedef XenPTQdevRealize function pointer
> >> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> >> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> >> 
> >> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> >> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> >> the xl toolstack with the gfx_passthru option enabled, which sets the
> >> igd-passthru=on option to Qemu for the Xen HVM machine type.
> > 
> > I don't like how slot_reserved_mask is set initially then cleared on
> > device realize.
> > To me this looks like a fragile hack. I suggest one of the following
> > 1. adding a new mask
> > "slot-manual-mask" or some such blocking auto-allocation of a given
> > slot without blocking its use if address is specified on command line.
> > 2. adding a special property that overrides slot_reserved_mask
> > for a given device.
> > 
> > both need changes in pci core but look like something generally
> > useful.
> 
> I was hoping to not need to touch pci core but I understand it would be
> better for this patch to not affect machines that are manually configured
> on the command line.
> 
> However, keep in mind that this patch will only actually reserve the slot
> initially for xen hvm machines (machine type "xenfv") that also are configured
> with the qemu igd-passthru=on option which, AFAIK, only applies to machines
> with accel=xen. It will not affect kvm users at all. So I don't think this patch
> will break many machines out there that manually specify the pci slots. The
> only machines it could affect are machines configured for igd-passthru on xen.
> This patch also does *not* reserve the slot initially for "xenfv" machines that
> are not configured with igd passthrough which I am sure is the vast majority
> of all the xen virtual machines out in the wild.

I'm just saying that adding a capability that is generally useful
as opposed to xen specific means less technical debt.

-- 
MST



From xen-devel-bounces@lists.xenproject.org Mon Jan 09 23:45:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Jan 2023 23:45:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474143.735121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1pr-0008Sz-IV; Mon, 09 Jan 2023 23:45:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474143.735121; Mon, 09 Jan 2023 23:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF1pr-0008Ss-Fb; Mon, 09 Jan 2023 23:45:47 +0000
Received: by outflank-mailman (input) for mailman id 474143;
 Mon, 09 Jan 2023 23:45:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF1pq-0008Sg-44; Mon, 09 Jan 2023 23:45:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF1pq-0004SN-1o; Mon, 09 Jan 2023 23:45:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF1pp-0003kU-Mr; Mon, 09 Jan 2023 23:45:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pF1pp-0007fn-MN; Mon, 09 Jan 2023 23:45:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wKYmamS0FEmE1IciCbsAyFi5G/MKaCWk7J7ky4TNnt0=; b=h7gOUIlBqufE7lESaZbp+tvURI
	+rEYe67DNzDIGrnUewWxCX4Gzy3U+4rXPRkse+LE8L4Kv1pRdwV+OV5VrNTNPklB3b46KSwYkh8K1
	5eOkKIZMEPjHV8aKXvIKn0p0GpkNgXbmu9AmR38BA/u2R/vQ7ULaw6xD8T/FEEA2qcpI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175651-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175651: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=38525f6f73f906699f77a1af86c16b4eaad48e04
X-Osstest-Versions-That:
    xen=2b21cbbb339fb14414f357a6683b1df74c36fda2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Jan 2023 23:45:45 +0000

flight 175651 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175651/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 175624
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175635
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175635
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175635
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175635
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175635
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175635
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175635
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175635
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175635
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175635
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175635
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 175635
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  38525f6f73f906699f77a1af86c16b4eaad48e04
baseline version:
 xen                  2b21cbbb339fb14414f357a6683b1df74c36fda2

Last test of basis   175635  2023-01-09 01:51:57 Z    0 days
Testing same since   175651  2023-01-09 17:37:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   2b21cbbb33..38525f6f73  38525f6f73f906699f77a1af86c16b4eaad48e04 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 00:05:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 00:05:53 +0000
Message-ID: <aacffaa2-1e86-1392-8302-484248b893c4@aol.com>
Date: Mon, 9 Jan 2023 19:05:35 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
 <20230109183132-mutt-send-email-mst@kernel.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230109183132-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 1/9/23 6:33 PM, Michael S. Tsirkin wrote:
> On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>> as noted in docs/igd-assign.txt in the Qemu source code.
>> 
>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>> a different slot. This problem often prevents the guest from booting.
>> 
>> The only available workaround is not good: Configure Xen HVM guests to use
>> the old and no longer maintained Qemu traditional device model available
>> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>> 
>> To implement this feature in the Qemu upstream device model for Xen HVM
>> guests, introduce the following new functions, types, and macros:
>> 
>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>> * typedef XenPTQdevRealize function pointer
>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>> 
>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>> the xl toolstack with the gfx_passthru option enabled, which sets the
>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>> 
>> The new xen_igd_reserve_slot function also needs to be implemented in
>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>> in which case it does nothing.
>> 
>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>> 
>> Move the call to xen_host_pci_device_get, and the associated error
>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>> initialize the device class and vendor values which enables the checks for
>> the Intel IGD to succeed. The verification that the host device is an
>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>> and function values as well as by checking that gfx_passthru is enabled,
>> the device class is VGA, and the device vendor is Intel.
>> 
>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>> ---
>> Notes that might be helpful to reviewers of patched code in hw/xen:
>> 
>> The new functions and types are based on recommendations from Qemu docs:
>> https://qemu.readthedocs.io/en/latest/devel/qom.html
>> 
>> Notes that might be helpful to reviewers of patched code in hw/i386:
>> 
>> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
>> not affect builds that do not have CONFIG_XEN defined.
> 
> I'm not sure how you can claim that.

I mean the small change to pc_piix.c in this patch sits
between an "#ifdef CONFIG_XEN" and the corresponding
"#endif", so the preprocessor will exclude it when CONFIG_XEN
is not defined. In other words, my patch is part of the
Xen-specific code in pc_piix.c. Or am I missing something?


> 
> ...
> 
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index b48047f50c..34a9736b5e 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -32,6 +32,7 @@
>>  #include "hw/i386/pc.h"
>>  #include "hw/i386/apic.h"
>>  #include "hw/pci-host/i440fx.h"
>> +#include "hw/rtc/mc146818rtc.h"
>>  #include "hw/southbridge/piix.h"
>>  #include "hw/display/ramfb.h"
>>  #include "hw/firmware/smbios.h"
>> @@ -40,16 +41,16 @@
>>  #include "hw/usb.h"
>>  #include "net/net.h"
>>  #include "hw/ide/pci.h"
>> -#include "hw/ide/piix.h"
>>  #include "hw/irq.h"
>>  #include "sysemu/kvm.h"
>>  #include "hw/kvm/clock.h"
>>  #include "hw/sysbus.h"
>> +#include "hw/i2c/i2c.h"
>>  #include "hw/i2c/smbus_eeprom.h"
>>  #include "hw/xen/xen-x86.h"
>> +#include "hw/xen/xen.h"
>>  #include "exec/memory.h"
>>  #include "hw/acpi/acpi.h"
>> -#include "hw/acpi/piix4.h"
>>  #include "qapi/error.h"
>>  #include "qemu/error-report.h"
>>  #include "sysemu/xen.h"
>> @@ -66,6 +67,7 @@
>>  #include "kvm/kvm-cpu.h"
>>  
>>  #define MAX_IDE_BUS 2
>> +#define XEN_IOAPIC_NUM_PIRQS 128ULL
>>  
>>  #ifdef CONFIG_IDE_ISA
>>  static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
>> @@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
>>  static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
>>  #endif
>>  
>> +/*
>> + * Return the global irq number corresponding to a given device irq
>> + * pin. We could also use the bus number to have a more precise mapping.
>> + */
>> +static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
>> +{
>> +    int slot_addend;
>> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
>> +    return (pci_intx + slot_addend) & 3;
>> +}
>> +
>> +static void piix_intx_routing_notifier_xen(PCIDevice *dev)
>> +{
>> +    int i;
>> +
>> +    /* Scan for updates to PCI link routes (0x60-0x63). */
>> +    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
>> +        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
>> +        if (v & 0x80) {
>> +            v = 0;
>> +        }
>> +        v &= 0xf;
>> +        xen_set_pci_link_route(i, v);
>> +    }
>> +}
>> +
>>  /* PC hardware initialisation */
>>  static void pc_init1(MachineState *machine,
>>                       const char *host_type, const char *pci_type)
>> @@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
>>      MemoryRegion *system_io = get_system_io();
>>      PCIBus *pci_bus;
>>      ISABus *isa_bus;
>> -    int piix3_devfn = -1;
>> +    Object *piix4_pm;
>>      qemu_irq smi_irq;
>>      GSIState *gsi_state;
>>      BusState *idebus[MAX_IDE_BUS];
>> @@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
>>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
>>  
>>      if (pcmc->pci_enabled) {
>> -        PIIX3State *piix3;
>> +        DeviceState *dev;
>>          PCIDevice *pci_dev;
>> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
>> -                                         : TYPE_PIIX3_DEVICE;
>> +        int i;
>>  
>>          pci_bus = i440fx_init(pci_type,
>>                                i440fx_host,
>> @@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
>>                                x86ms->below_4g_mem_size,
>>                                x86ms->above_4g_mem_size,
>>                                pci_memory, ram_memory);
>> +        pci_bus_map_irqs(pci_bus,
>> +                         xen_enabled() ? xen_pci_slot_get_pirq
>> +                                       : pci_slot_get_pirq);
>>          pcms->bus = pci_bus;
>>  
>> -        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
>> -        piix3 = PIIX3_PCI_DEVICE(pci_dev);
>> -        piix3->pic = x86ms->gsi;
>> -        piix3_devfn = piix3->dev.devfn;
>> -        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
>> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
>> +        object_property_set_bool(OBJECT(pci_dev), "has-usb",
>> +                                 machine_usb(machine), &error_abort);
>> +        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
>> +                                 x86_machine_is_acpi_enabled(x86ms),
>> +                                 &error_abort);
>> +        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
>> +        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
>> +                                 x86_machine_is_smm_enabled(x86ms),
>> +                                 &error_abort);
>> +        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
>> +
>> +        if (xen_enabled()) {
>> +            pci_device_set_intx_routing_notifier(
>> +                        pci_dev, piix_intx_routing_notifier_xen);
>> +
>> +            /*
>> +             * Xen supports additional interrupt routes from the PCI devices to
>> +             * the IOAPIC: the four pins of each PCI device on the bus are also
>> +             * connected to the IOAPIC directly.
>> +             * These additional routes can be discovered through ACPI.
>> +             */
>> +            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
>> +                         XEN_IOAPIC_NUM_PIRQS);
>> +        }
>> +
>> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
>> +        for (i = 0; i < ISA_NUM_IRQS; i++) {
>> +            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
>> +        }
>> +        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
>> +        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
>> +                                                             "rtc"));
>> +        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
>> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
>> +        pci_ide_create_devs(PCI_DEVICE(dev));
>> +        idebus[0] = qdev_get_child_bus(dev, "ide.0");
>> +        idebus[1] = qdev_get_child_bus(dev, "ide.1");
>>      } else {
>>          pci_bus = NULL;
>> +        piix4_pm = NULL;
>>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
>>                                &error_abort);
>> +        isa_bus_irqs(isa_bus, x86ms->gsi);
>> +
>> +        rtc_state = isa_new(TYPE_MC146818_RTC);
>> +        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
>> +        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
>> +
>>          i8257_dma_init(isa_bus, 0);
>>          pcms->hpet_enabled = false;
>> +        idebus[0] = NULL;
>> +        idebus[1] = NULL;
>>      }
>> -    isa_bus_irqs(isa_bus, x86ms->gsi);
>>  
>>      if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
>>          pc_i8259_create(isa_bus, gsi_state->i8259_irq);
>> @@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
>>      }
>>  
>>      /* init basic PC hardware */
>> -    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
>> +    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
>>                           0x4);
>>  
>>      pc_nic_init(pcmc, isa_bus, pci_bus);
>>  
>>      if (pcmc->pci_enabled) {
>> -        PCIDevice *dev;
>> -
>> -        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
>> -        pci_ide_create_devs(dev);
>> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
>> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
>>          pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
>>      }
>>  #ifdef CONFIG_IDE_ISA
>> @@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
>>      }
>>  #endif
>>  
>> -    if (pcmc->pci_enabled && machine_usb(machine)) {
>> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
>> -    }
>> -
>> -    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
>> -        PCIDevice *piix4_pm;
>> -
>> +    if (piix4_pm) {
>>          smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
>> -        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
>> -        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
>> -        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
>> -                          x86_machine_is_smm_enabled(x86ms));
>> -        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
>>  
>> -        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
>>          qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
>>          pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
>>          /* TODO: Populate SPD eeprom data.  */
>> @@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
>>                                   object_property_allow_set_link,
>>                                   OBJ_PROP_LINK_STRONG);
>>          object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
>> -                                 OBJECT(piix4_pm), &error_abort);
>> +                                 piix4_pm, &error_abort);
>>      }
>>  
>>      if (machine->nvdimms_state->is_enabled) {
>> @@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>>      }
>>  
>>      pc_xen_hvm_init_pci(machine);
>> +    if (xen_igd_gfx_pt_enabled()) {
>> +        xen_igd_reserve_slot(pcms->bus);
>> +    }
>>      pci_create_simple(pcms->bus, -1, "xen-platform");
>>  }
>>  #endif
>> @@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
>>      pc_i440fx_machine_options(m);
>>      m->alias = "pc";
>>      m->is_default = true;
>> +#ifdef CONFIG_MICROVM_DEFAULT
>> +    m->is_default = false;
>> +#else
>> +    m->is_default = true;
>> +#endif
>>  }
>>  
>>  DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,
> 
> 
> Lots of changes here not guarded by CONFIG_XEN.
> 

What diff is this? How is my patch related to it?



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 00:07:39 2023
Message-ID: <024b9ab9-4204-aebb-9f89-95cbabdfe868@aol.com>
Date: Mon, 9 Jan 2023 19:07:25 -0500
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
 <20230109172738-mutt-send-email-mst@kernel.org>
 <8c2531a8-ce99-7593-99f8-222076fe6bd6@aol.com>
 <20230109183413-mutt-send-email-mst@kernel.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230109183413-mutt-send-email-mst@kernel.org>

On 1/9/23 6:35 PM, Michael S. Tsirkin wrote:
> On Mon, Jan 09, 2023 at 06:28:44PM -0500, Chuck Zmudzinski wrote:
>> On 1/9/23 5:33 PM, Michael S. Tsirkin wrote:
>> > On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
>> >> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>> >> as noted in docs/igd-assign.txt in the Qemu source code.
>> >> 
>> >> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>> >> Intel IGD passthrough to the guest with the Qemu upstream device model,
>> >> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>> >> a different slot. This problem often prevents the guest from booting.
>> >> 
>> >> The only available workaround is not good: Configure Xen HVM guests to use
>> >> the old and no longer maintained Qemu traditional device model available
>> >> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>> >> 
>> >> To implement this feature in the Qemu upstream device model for Xen HVM
>> >> guests, introduce the following new functions, types, and macros:
>> >> 
>> >> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>> >> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>> >> * typedef XenPTQdevRealize function pointer
>> >> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>> >> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>> >> 
>> >> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>> >> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>> >> the xl toolstack with the gfx_passthru option enabled, which sets the
>> >> igd-passthru=on option to Qemu for the Xen HVM machine type.
>> > 
>> > I don't like how slot_reserved_mask is set initially then cleared on
>> > device realize.
>> > To me this looks like a fragile hack. I suggest one of the following
>> > 1. adding a new mask
>> > "slot-manual-mask" or some such blocking auto-allocation of a given
>> > slot without blocking its use if address is specified on command line.
>> > 2. adding a special property that overrides slot_reserved_mask
>> > for a given device.
>> > 
>> > both need changes in pci core but look like something generally
>> > useful.
>> 
>> I was hoping to not need to touch pci core but I understand it would be
>> better for this patch to not affect machines that are manually configured
>> on the command line.
>> 
>> However, keep in mind that this patch will only actually reserve the slot
>> initially for xen hvm machines (machine type "xenfv") that also are configured
>> with the qemu igd-passthru=on option which, AFAIK, only applies to machines
>> with accel=xen. It will not affect kvm users at all. So I don't think this patch
>> will break many machines out there that manually specify the pci slots. The
>> only machines it could affect are machines configured for igd-passthru on xen.
>> This patch also does *not* reserve the slot initially for "xenfv" machines that
>> are not configured with igd passthrough which I am sure is the vast majority
>> of all the xen virtual machines out in the wild.
> 
> I'm just saying that adding a capability that is generally useful,
> as opposed to xen-specific, means less technical debt.
> 

I agree with that.


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 00:17:43 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175664-mainreport@xen.org>
Subject: [qemu-mainline test] 175664: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 00:17:37 +0000

flight 175664 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175664/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    2 days
Failing since        175627  2023-01-08 14:40:14 Z    1 days    7 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 00:29:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 00:29:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474175.735170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF2Vh-0006mh-Pq; Tue, 10 Jan 2023 00:29:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474175.735170; Tue, 10 Jan 2023 00:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF2Vh-0006ma-N7; Tue, 10 Jan 2023 00:29:01 +0000
Received: by outflank-mailman (input) for mailman id 474175;
 Tue, 10 Jan 2023 00:29:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Esb2=5H=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pF2Vg-0006mU-N4
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 00:29:00 +0000
Received: from mail-vk1-xa36.google.com (mail-vk1-xa36.google.com
 [2607:f8b0:4864:20::a36])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c577b713-907d-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 01:28:58 +0100 (CET)
Received: by mail-vk1-xa36.google.com with SMTP id w72so4849940vkw.7
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 16:28:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c577b713-907d-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=SGOLN4L23AISJAYP2mb4TczTko7Qm/ptrKoRepXprRE=;
        b=AFBsG7uWOPQh5GjM8gNJxVCD0aL0hoKK7sbEQNiV7AMIVKrIb+GH2YSHRUE3Ko7rI1
         BcwtSwJN25Alg3nNY79MxqKw+afcTxSoGdRNPti3JSg4vQCnd6V7NDLX83RPpAtuc3fb
         Bv4Y6RnczOdeQeRrlKuIt8/k5uL2AMD5To8pniZRKY1X8NiXXrLw1up/231qX1YiZKp3
         pZLK6UKnXti9gjAfqtKo/AdkXmT7+I+Uhz9yIoId1s5lXB/HXbcUpYOwBb5dV6MTKDgN
         BLu1lSIOiKcD4EoGDinM4fxGSC3pu7jDCmq2sfHPH6ZuuNoYywYvwhonFEaHHQDQVpl+
         mRAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=SGOLN4L23AISJAYP2mb4TczTko7Qm/ptrKoRepXprRE=;
        b=LgQwlYyJCaVvbsADAKT8mUglij9JMEHWeNI7/14D3ykbYCseOaR1UUImTbDTasnx3u
         lCjgGYJw4P5fWC/zOLYKavMgxijSK+Cb4ZwZ/IOrH5csd9mEQSQ8Rc1uq5GOKDCFTLWF
         EfY2Dtyb/3T5lXIE2NQEr9lRz0YLC2T812EyMAIudfMoiv7IoxLIUpw+quxaqB0omRQN
         wN7iHcShqdnMDOagDMV9ykI1IDHvp/xjWHdZXL90My16fsbkS8h4NsgR66Y2gJOBk2L5
         GH+QswqCe7Ps7kKsMPPZafoC+on/r2Gih1jYidDqeiws9JOyzlovNtlseaM1QhkqYNqF
         rBfQ==
X-Gm-Message-State: AFqh2kquc5xfDXcAOFJJbWCOfhh+v6exbAArtnpijEa3MIoKfTvF0FIB
	tMbbAc+0E18XL5nM+ctFRbskXVLeRdEDxydra5Q=
X-Google-Smtp-Source: AMrXdXvJTtjn/avlGGgsNUpAucdZpkVW+3PjRmuldxgTr47AVrU/8YnXmDCRnhJy3ttHhZDle2IAxOnSVBWIbE4+qXI=
X-Received: by 2002:a1f:c703:0:b0:3d5:6ccb:8748 with SMTP id
 x3-20020a1fc703000000b003d56ccb8748mr6418658vkf.26.1673310537196; Mon, 09 Jan
 2023 16:28:57 -0800 (PST)
MIME-Version: 1.0
References: <cover.1673278109.git.oleksii.kurochko@gmail.com> <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
In-Reply-To: <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 10 Jan 2023 10:28:31 +1000
Message-ID: <CAKmqyKNyRfAhyP-3uZwEf3OZEv5be4KNdGvNjUiQGu8w-vf_8g@mail.gmail.com>
Subject: Re: [PATCH v2 6/8] xen/riscv: introduce early_printk basic stuff
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 10, 2023 at 1:47 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> The patch introduces the basic parts of early_printk functionality,
> which are enough to print "hello from C environment".
> early_printk() was changed compared with the original because common
> code isn't being built yet, so there is no vscnprintf().
>
> Because printk() relies on a serial driver (such as the ns16550
> driver), and drivers require working virtual memory (ioremap()),
> there is no print functionality early in Xen boot.
>
> This commit adds an early printk implementation built on the putc
> SBI call.
>
> As sbi_console_putchar() is already planned for deprecation, it is
> used only temporarily and will be removed or reworked once a real
> UART driver is ready.

There was a discussion about adding a new SBI replacement for the
putchar call. It doesn't seem to have been completed yet, but there
may well be an SBI replacement for this in the future.

Alistair

>
> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V2:
>     - add license to early_printk.c
>     - add signed-off-by Bobby
>     - add RISCV_32 to Kconfig.debug to EARLY_PRINTK config
>     - update commit message
>     - order the files alphabetically in Makefile
> ---
>  xen/arch/riscv/Kconfig.debug              |  7 +++++
>  xen/arch/riscv/Makefile                   |  1 +
>  xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
>  xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
>  4 files changed, 53 insertions(+)
>  create mode 100644 xen/arch/riscv/early_printk.c
>  create mode 100644 xen/arch/riscv/include/asm/early_printk.h
>
> diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> index e69de29bb2..6ba0bd1e5a 100644
> --- a/xen/arch/riscv/Kconfig.debug
> +++ b/xen/arch/riscv/Kconfig.debug
> @@ -0,0 +1,7 @@
> +config EARLY_PRINTK
> +    bool "Enable early printk config"
> +    default DEBUG
> +    depends on RISCV_64 || RISCV_32
> +    help
> +
> +      Enables early printk debug messages
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index fd916e1004..1a4f1a6015 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,3 +1,4 @@
> +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += sbi.o
>  obj-y += setup.o
> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
> new file mode 100644
> index 0000000000..88da5169ed
> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> + */
> +#include <asm/sbi.h>
> +#include <asm/early_printk.h>
> +
> +/*
> + * TODO:
> + *   sbi_console_putchar is already planned for deprecation
> + *   so it should be reworked to use UART directly.
> + */
> +void early_puts(const char *s, size_t nr)
> +{
> +    while ( nr-- > 0 )
> +    {
> +        if (*s == '\n')
> +            sbi_console_putchar('\r');
> +        sbi_console_putchar(*s);
> +        s++;
> +    }
> +}
> +
> +void early_printk(const char *str)
> +{
> +    while (*str)
> +    {
> +        early_puts(str, 1);
> +        str++;
> +    }
> +}
> diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
> new file mode 100644
> index 0000000000..05106e160d
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/early_printk.h
> @@ -0,0 +1,12 @@
> +#ifndef __EARLY_PRINTK_H__
> +#define __EARLY_PRINTK_H__
> +
> +#include <xen/early_printk.h>
> +
> +#ifdef CONFIG_EARLY_PRINTK
> +void early_printk(const char *str);
> +#else
> +static inline void early_printk(const char *s) {}
> +#endif
> +
> +#endif /* __EARLY_PRINTK_H__ */
> --
> 2.38.1
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 00:29:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 00:29:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474180.735181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF2WP-0007IF-1m; Tue, 10 Jan 2023 00:29:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474180.735181; Tue, 10 Jan 2023 00:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF2WO-0007I8-VN; Tue, 10 Jan 2023 00:29:44 +0000
Received: by outflank-mailman (input) for mailman id 474180;
 Tue, 10 Jan 2023 00:29:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Esb2=5H=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pF2WN-0007Ck-Ev
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 00:29:43 +0000
Received: from mail-vs1-xe2d.google.com (mail-vs1-xe2d.google.com
 [2607:f8b0:4864:20::e2d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dfdf769c-907d-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 01:29:42 +0100 (CET)
Received: by mail-vs1-xe2d.google.com with SMTP id n190so6749524vsc.11
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 16:29:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfdf769c-907d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=eilaCtkBAmz2kS2wmB1ZZ+vUiTwTrEsVsNPmPjJb+lw=;
        b=m8mPtcFkKDEZ7LNPyapf1VWpZVk5ILpNQQiBFgHcppCOVBUiCHtIuTpyskfeEMhf8d
         SjpYD9G3pgHuU7WYAok7IOehwBtfN/rDv6rchYZP0rE3l8MSpiy720x9+eg5IboIITSa
         Vz6jVR+7RY99OHdFm1yPBLJrccsT4eJ6HK6jEcurYHKzwT0B+KvUrn2suCP7DtLxtAKR
         4S42L2m7A0rzmFusMNeG+l2Y9j9DtnM/VKsSE/11TXOzeKFrB2LXydssx4jHh0uPBRHr
         YGdnm1k+Sv06cJTB5uLAa94mHGG6Jjmpero/Xbplzfi6jbPFuj26PSCuugNUAxmcP3AJ
         Zt+Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=eilaCtkBAmz2kS2wmB1ZZ+vUiTwTrEsVsNPmPjJb+lw=;
        b=rUWVhjJlZ27sKZDgNoivjA634vvI/dobpBU+2i5+UGPoi4FNIkh0jxGUeR8hmWlKfW
         gW1yHGNvf+VtPTKpd9PrncN07FZOe9/5/gvJQSroiubltd3fP2GTWC3d9jX3ia5zBUCX
         m2I9KbTFCopIbXqzRa1g7Oj3RM8BxQ1ki9fRFgjDUegoqXbwmOajG4t/K5cSXLuV0s5i
         e8OLi4hSZwVI72bt5AUHcbtIPX8Pcv6E++Oe59X2d+KxwSfs0FLwuS3Y15Bz93fYO1H3
         x/JR3dwkNTgvv1BFnHWcxWJMr82bOQy5Xb3tZ+okbPPv0lBt23VKIVv9gSnid/dDHQQI
         1D0g==
X-Gm-Message-State: AFqh2krF1kJyJVrKE+2OLl/7sjM0252BarTOSc1f8s9QKBzE2EUqLtq5
	gCdFnR2oSehRGBViVHhMs39kGPtzyFojDBEFBp4=
X-Google-Smtp-Source: AMrXdXuulYEaUvGd4+oZ43DaWUmRvtWUb4V7wXrDOlcrTLfH4+TtNKibAwr17T9JXLOZ+Dov/cGG6ZMFGDkH8V/Qs50=
X-Received: by 2002:a05:6102:f8c:b0:3c9:8cc2:dd04 with SMTP id
 e12-20020a0561020f8c00b003c98cc2dd04mr7246662vsv.73.1673310581572; Mon, 09
 Jan 2023 16:29:41 -0800 (PST)
MIME-Version: 1.0
References: <cover.1673278109.git.oleksii.kurochko@gmail.com> <837bb553a539713d4aa15bb169142018bf508afe.1673278109.git.oleksii.kurochko@gmail.com>
In-Reply-To: <837bb553a539713d4aa15bb169142018bf508afe.1673278109.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 10 Jan 2023 10:29:15 +1000
Message-ID: <CAKmqyKMafDNg0oJ-rn43goagy6sppvgjgeSts_oCTKhTUJMubw@mail.gmail.com>
Subject: Re: [PATCH v2 7/8] xen/riscv: print hello message from C env
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 10, 2023 at 1:47 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/arch/riscv/riscv64/head.S |  4 +---
>  xen/arch/riscv/setup.c        | 12 ++++++++++++
>  2 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> index c1f33a1934..d444dd8aad 100644
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -5,6 +5,4 @@ ENTRY(start)
>          li      t0, STACK_SIZE
>          add     sp, sp, t0
>
> -_start_hang:
> -        wfi
> -        j       _start_hang
> +        tail    start_xen
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index 41ef4912bd..586060c7e5 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -1,6 +1,18 @@
>  #include <xen/init.h>
>  #include <xen/compile.h>
>
> +#include <asm/early_printk.h>
> +
>  /* Xen stack for bringing up the first CPU. */
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
>      __aligned(STACK_SIZE);
> +
> +void __init noreturn start_xen(void)
> +{
> +    early_printk("Hello from C env\n");
> +
> +    for ( ;; )
> +        asm volatile ("wfi");
> +
> +    unreachable();
> +}
> --
> 2.38.1
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 01:46:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 01:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474193.735211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF3iS-0005Fc-OO; Tue, 10 Jan 2023 01:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474193.735211; Tue, 10 Jan 2023 01:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF3iS-0005FD-I2; Tue, 10 Jan 2023 01:46:16 +0000
Received: by outflank-mailman (input) for mailman id 474193;
 Tue, 10 Jan 2023 01:46:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF3iR-0005F3-Bf; Tue, 10 Jan 2023 01:46:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF3iR-0006OO-6p; Tue, 10 Jan 2023 01:46:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF3iQ-0007LJ-S9; Tue, 10 Jan 2023 01:46:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pF3iQ-0002zg-Qv; Tue, 10 Jan 2023 01:46:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=2f+4UnnW2L0E78oK3XKT0P04CWg/J3/IH9hbTWv20yI=; b=MiN2pKzuTq2+sH4gBx8527trrI
	DM39To0e/IUKgmiqZno2FHRxQc9D/RYMKeIDgC4fthtaM0a0HbyGG6rKIDyQWraTfPLHOxroQQhUP
	vaRLa1Zy5ysRHGHEO7YMmO/JdCGgklgd2YHMwOWOHWibyRP6FFjE0dkJznw5sP6Z4TeQ=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-i386-xsm
Message-Id: <E1pF3iQ-0002zg-Qv@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 01:46:14 +0000

branch xen-unstable
xenbranch xen-unstable
job build-i386-xsm
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175675/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows one enclave to receive a notification in the ERESUME after the
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports to create enclave with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to allow to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guest.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      by some distros, including Debian, for as long as 6 years
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty, so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This has not been needed since QEMU stopped detecting -liberty; that
      happened with the Meson switch, but it is quite likely that the
      library had not really been necessary for years before that.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus. It behaves
      like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting to a file with `-D`, but
      this also requires enabling some logging item with `-d` to
      be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediate locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock()
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off the logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-i386-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-i386-xsm.xen-build --summary-out=tmp/175675.bisection-summary --basis-template=175623 --blessings=real,real-bisect,real-retry qemu-mainline build-i386-xsm xen-build
Searching for failure / basis pass:
 175664 fail [host=nobling1] / 175637 ok.
Failure / basis pass flights: 175664 / 175637
(tree with no url: minios)
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Basis pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d8d829b89dababf763ab33b8cdd852b2830db3cf-33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#528d9f33cad5245c1099d77084c78bb2244d5143-aa96ab7c9df59c615ca82b49c9062819e0a1c287 git://xenbits.xen.org/osstest/seabios.git#645a64b4911d7cadf5749d7375544fc2384e70ba-645a64b4911d7cadf5749d7375544fc2384e70ba git://xenbits.xen.org/xen.git#2b21cbbb339fb14414f357a6683b1df74c36fda2-2b21cbbb339fb14414f357a6683b1df74c36fda2
Loaded 14998 nodes in revision graph
Searching for test results:
 175627 [host=albana1]
 175637 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175643 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175647 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d6271b657286de80260413684a1f2a63f44ea17b 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175654 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175668 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175669 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175664 fail 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175670 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175673 fail 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175674 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175675 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Searching for interesting versions
 Result found: flight 175637 (pass), for basis pass
 Result found: flight 175654 (fail), for basis failure (at ancestor ~4)
 Repro found: flight 175668 (pass), for basis pass
 Repro found: flight 175673 (fail), for basis failure
 0 revisions at d8d829b89dababf763ab33b8cdd852b2830db3cf 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
No revisions left to test, checking graph state.
 Result found: flight 175637 (pass), for last pass
 Result found: flight 175643 (fail), for first failure
 Repro found: flight 175668 (pass), for last pass
 Repro found: flight 175670 (fail), for first failure
 Repro found: flight 175674 (pass), for last pass
 Repro found: flight 175675 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175675/
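The mechanism cs-bisection-step automates can be sketched manually with `git bisect run`. The repository below is synthetic (a throwaway temp directory, not the QEMU tree, and the "build test" is just a file check), purely to illustrate how the good/bad endpoints above narrow down to a first bad commit:

```shell
# Self-contained sketch of the bisection mechanism: build a tiny repo in
# which one commit introduces a "build failure", then let git find it.
# Everything here is synthetic; the real harness drives the QEMU tree.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "good build"
good=$(git rev-parse HEAD)
echo broken > build-breaker            # this commit introduces the failure
git add build-breaker
git -c user.name=t -c user.email=t@example.com commit -q -m "bad build"
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "later commit"
git bisect start HEAD "$good"
# The "build test": exit non-zero whenever the breaking change is present.
git bisect run sh -c 'test ! -f build-breaker'
git bisect log | grep "first bad commit"
```

`git bisect run` marks each checked-out revision good or bad from the test command's exit code, exactly as osstest marks each flight pass or fail.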


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows an enclave to receive a notification in the ERESUME after an
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports creating enclaves with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      by some distros, including Debian, for as long as 6 years
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty, so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This has not been needed since QEMU stopped detecting -liberty; that
      happened with the Meson switch, but it is quite likely that the
      library had not really been necessary for years before that.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus. It behaves
      like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
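
      A minimal sketch of the SET FEATURES subcommands mentioned above
      (the struct and function names are hypothetical stand-ins; the real
      handler lives in QEMU's IDE emulation and operates on IDEState):

      ```c
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* CFA SET FEATURES subcommands for the 8-bit PIO data path. */
      #define CFA_ENABLE_8BIT_PIO   0x01
      #define CFA_DISABLE_8BIT_PIO  0x81

      struct cf_state {
          bool io8;   /* true while 8-bit data transfers are enabled */
      };

      /* Returns true if the feature subcommand was recognized;
       * an unrecognized subcommand would abort the command. */
      static bool cf_set_features(struct cf_state *s, uint8_t feature)
      {
          switch (feature) {
          case CFA_ENABLE_8BIT_PIO:
              s->io8 = true;
              return true;
          case CFA_DISABLE_8BIT_PIO:
              s->io8 = false;
              return true;
          default:
              return false;
          }
      }

      int main(void)
      {
          struct cf_state s = { false };
          cf_set_features(&s, CFA_ENABLE_8BIT_PIO);
          printf("8-bit mode: %d\n", s.io8);
          cf_set_features(&s, CFA_DISABLE_8BIT_PIO);
          printf("8-bit mode: %d\n", s.io8);
          return 0;
      }
      ```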
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting to a file with `-D`,
      but this requires enabling some logging item with `-d` as well
      to be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock()
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off the logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
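
      The forward-declaration pattern from the two commits above can be
      sketched as follows (the file boundaries are shown as comments, and
      the struct bodies are placeholder stand-ins, not QEMU's real fields):

      ```c
      #include <stddef.h>

      /* qemu/typedefs.h: an incomplete (forward) declaration is enough
       * for any header that only handles pointers to the type. */
      typedef struct AccelState AccelState;

      /* hw/boards.h: stores only a pointer, so it no longer needs to
       * include "qemu/accel.h" for the complete definition. */
      typedef struct MachineState {
          AccelState *accelerator;
      } MachineState;

      /* qemu/accel.h: the complete definition, required only by code
       * that dereferences AccelState (hw/core/machine.c after this
       * series). */
      struct AccelState {
          int placeholder;   /* stand-in field for illustration */
      };

      int main(void)
      {
          MachineState ms = { NULL };
          return ms.accelerator != NULL;   /* 0 = success */
      }
      ```

      Keeping the complete definition out of widely included headers like
      "hw/boards.h" is what lets the previous commit drop the
      "qemu/accel.h" include from everywhere but the one file that needs it.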

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-i386-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
175675: tolerable ALL FAIL

flight 175675 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/175675/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-i386-xsm                6 xen-build               fail baseline untested


jobs:
 build-i386-xsm                                               fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 02:11:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 02:11:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474203.735225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF46v-0000YH-Rh; Tue, 10 Jan 2023 02:11:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474203.735225; Tue, 10 Jan 2023 02:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF46v-0000YA-N9; Tue, 10 Jan 2023 02:11:33 +0000
Received: by outflank-mailman (input) for mailman id 474203;
 Tue, 10 Jan 2023 02:11:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tiyo=5H=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF46u-0000Y4-M4
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 02:11:33 +0000
Received: from sonic301-20.consmr.mail.gq1.yahoo.com
 (sonic301-20.consmr.mail.gq1.yahoo.com [98.137.64.146])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16f7ee39-908c-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 03:11:28 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic301.consmr.mail.gq1.yahoo.com with HTTP; Tue, 10 Jan 2023 02:11:26 +0000
Received: by hermes--production-ne1-7b69748c4d-55l5b (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 1a4c1345234cf01d6eb27377ae8cdc42; 
 Tue, 10 Jan 2023 02:11:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16f7ee39-908c-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673316686; bh=7QkjhmpzJKIE7K6fWXoMGD50xZfWbV2q4lJFRzYCwz0=; h=Date:Subject:From:To:Cc:References:In-Reply-To:From:Subject:Reply-To; b=SuLJXzBIvmaPGU4/7UAInN4vmpHqidslwU5vGr7BJuTDvpCKjL6cX8sJXjoaWJVGxDdmCXy/u/QbJdcSYcdE33YlmY7BQ8jSkc/ZlustiOhIWkFbZU1Eim3Jjl9rL4ymJGsCOWZcPBS6wjB/w41M35etW+FsxjiVo/LyOqNQU/nS1Hz7RdlFWt96AutEDO+qs9+tR9cYehmf4IbQSNJF1Ue0UQsnSgtma04JkA3M1DzOKk+4kwZXipyqcNGKXxY0gCgupoGlqM+oGOBtg3OzNwy5it6VjnLfYGP6VI1KHuCcP7jTo1NpBIz/jTfW/lPz+w6EtT5JYE3y/RQxM1d4CA==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673316686; bh=BEwSlhJwTePxyg+41CHEKfeAHlvHKT5e4gskjlhJiWo=; h=X-Sonic-MF:Date:Subject:From:To:From:Subject; b=tdViyb53ly9roW7VuHUEy1/nFSww5xxl0KSqu2lcpHoyCSihfPiwqJR4QmHo2f5gcpfDFV2B552T1th/p7TfAZzILCYWwABU7uETysYp8LZcX6RvY0TUIhs0Iuvtti5Ig7jN18OLKTxlrFAt9yc++e1KOEETlo8iOVI/WfKzKvw4LAv29pS+IhBt+RcMNAxWIJZ1/P2yVUEuPFOyTF5wZ3xvU7QKXqFq3AM9lpEJp6r9HWKW3+/CWSVD13LtTaV1D81WnDpFAB8KRgxAVoViwvbIKhCMlXkGekcoK2N6tBmNucm/Kz90msUYQMZQghCIjqL+2+5NtjZPkbibueQT2g==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <1542de4a-3b42-0ca4-777a-ce01f75b5532@aol.com>
Date: Mon, 9 Jan 2023 21:11:22 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
 <20230109183132-mutt-send-email-mst@kernel.org>
 <aacffaa2-1e86-1392-8302-484248b893c4@aol.com>
In-Reply-To: <aacffaa2-1e86-1392-8302-484248b893c4@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 14738

On 1/9/2023 7:05 PM, Chuck Zmudzinski wrote:
> On 1/9/23 6:33 PM, Michael S. Tsirkin wrote:
> > On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
> >> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> >> as noted in docs/igd-assign.txt in the Qemu source code.
> >> 
> >> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> >> Intel IGD passthrough to the guest with the Qemu upstream device model,
> >> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> >> a different slot. This problem often prevents the guest from booting.
> >> 
> >> The only available workaround is not good: Configure Xen HVM guests to use
> >> the old and no longer maintained Qemu traditional device model available
> >> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> >> 
> >> To implement this feature in the Qemu upstream device model for Xen HVM
> >> guests, introduce the following new functions, types, and macros:
> >> 
> >> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> >> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> >> * typedef XenPTQdevRealize function pointer
> >> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> >> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> >> 
> >> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> >> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> >> the xl toolstack with the gfx_passthru option enabled, which sets the
> >> igd-passthru=on option to Qemu for the Xen HVM machine type.
> >> 
> >> The new xen_igd_reserve_slot function also needs to be implemented in
> >> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> >> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> >> in which case it does nothing.
> >> 
> >> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> >> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> >> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> >> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> >> 
> >> Move the call to xen_host_pci_device_get, and the associated error
> >> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> >> initialize the device class and vendor values which enables the checks for
> >> the Intel IGD to succeed. The verification that the host device is an
> >> Intel IGD to be passed through is done by checking the domain, bus, slot,
> >> and function values as well as by checking that gfx_passthru is enabled,
> >> the device class is VGA, and the device vendor is Intel.
> >> 
> >> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> >> ---
> >> Notes that might be helpful to reviewers of patched code in hw/xen:
> >> 
> >> The new functions and types are based on recommendations from Qemu docs:
> >> https://qemu.readthedocs.io/en/latest/devel/qom.html
> >> 
> >> Notes that might be helpful to reviewers of patched code in hw/i386:
> >> 
> >> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> >> not affect builds that do not have CONFIG_XEN defined.
> > 
> > I'm not sure how you can claim that.
>
> I mean the small patch to pc_piix.c in this patch sits
> between an "#ifdef CONFIG_XEN" and the corresponding
> "#endif" so the preprocessor will exclude it when CONFIG_XEN
> is not defined. In other words, my patch is part of the
> xen-specific code in pc_piix.c. Or am I missing something?
>
>
> > 
> > ...
> > 
> >> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> >> index b48047f50c..34a9736b5e 100644
> >> --- a/hw/i386/pc_piix.c
> >> +++ b/hw/i386/pc_piix.c
> >> @@ -32,6 +32,7 @@
> >>  #include "hw/i386/pc.h"
> >>  #include "hw/i386/apic.h"
> >>  #include "hw/pci-host/i440fx.h"
> >> +#include "hw/rtc/mc146818rtc.h"
> >>  #include "hw/southbridge/piix.h"
> >>  #include "hw/display/ramfb.h"
> >>  #include "hw/firmware/smbios.h"
> >> @@ -40,16 +41,16 @@
> >>  #include "hw/usb.h"
> >>  #include "net/net.h"
> >>  #include "hw/ide/pci.h"
> >> -#include "hw/ide/piix.h"
> >>  #include "hw/irq.h"
> >>  #include "sysemu/kvm.h"
> >>  #include "hw/kvm/clock.h"
> >>  #include "hw/sysbus.h"
> >> +#include "hw/i2c/i2c.h"
> >>  #include "hw/i2c/smbus_eeprom.h"
> >>  #include "hw/xen/xen-x86.h"
> >> +#include "hw/xen/xen.h"
> >>  #include "exec/memory.h"
> >>  #include "hw/acpi/acpi.h"
> >> -#include "hw/acpi/piix4.h"
> >>  #include "qapi/error.h"
> >>  #include "qemu/error-report.h"
> >>  #include "sysemu/xen.h"
> >> @@ -66,6 +67,7 @@
> >>  #include "kvm/kvm-cpu.h"
> >>  
> >>  #define MAX_IDE_BUS 2
> >> +#define XEN_IOAPIC_NUM_PIRQS 128ULL
> >>  
> >>  #ifdef CONFIG_IDE_ISA
> >>  static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
> >> @@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
> >>  static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
> >>  #endif
> >>  
> >> +/*
> >> + * Return the global irq number corresponding to a given device irq
> >> + * pin. We could also use the bus number to have a more precise mapping.
> >> + */
> >> +static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
> >> +{
> >> +    int slot_addend;
> >> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
> >> +    return (pci_intx + slot_addend) & 3;
> >> +}
> >> +
> >> +static void piix_intx_routing_notifier_xen(PCIDevice *dev)
> >> +{
> >> +    int i;
> >> +
> >> +    /* Scan for updates to PCI link routes (0x60-0x63). */
> >> +    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
> >> +        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
> >> +        if (v & 0x80) {
> >> +            v = 0;
> >> +        }
> >> +        v &= 0xf;
> >> +        xen_set_pci_link_route(i, v);
> >> +    }
> >> +}
> >> +
> >>  /* PC hardware initialisation */
> >>  static void pc_init1(MachineState *machine,
> >>                       const char *host_type, const char *pci_type)
> >> @@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
> >>      MemoryRegion *system_io = get_system_io();
> >>      PCIBus *pci_bus;
> >>      ISABus *isa_bus;
> >> -    int piix3_devfn = -1;
> >> +    Object *piix4_pm;
> >>      qemu_irq smi_irq;
> >>      GSIState *gsi_state;
> >>      BusState *idebus[MAX_IDE_BUS];
> >> @@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
> >>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
> >>  
> >>      if (pcmc->pci_enabled) {
> >> -        PIIX3State *piix3;
> >> +        DeviceState *dev;
> >>          PCIDevice *pci_dev;
> >> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> >> -                                         : TYPE_PIIX3_DEVICE;
> >> +        int i;
> >>  
> >>          pci_bus = i440fx_init(pci_type,
> >>                                i440fx_host,
> >> @@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
> >>                                x86ms->below_4g_mem_size,
> >>                                x86ms->above_4g_mem_size,
> >>                                pci_memory, ram_memory);
> >> +        pci_bus_map_irqs(pci_bus,
> >> +                         xen_enabled() ? xen_pci_slot_get_pirq
> >> +                                       : pci_slot_get_pirq);
> >>          pcms->bus = pci_bus;
> >>  
> >> -        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
> >> -        piix3 = PIIX3_PCI_DEVICE(pci_dev);
> >> -        piix3->pic = x86ms->gsi;
> >> -        piix3_devfn = piix3->dev.devfn;
> >> -        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
> >> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> >> +        object_property_set_bool(OBJECT(pci_dev), "has-usb",
> >> +                                 machine_usb(machine), &error_abort);
> >> +        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> >> +                                 x86_machine_is_acpi_enabled(x86ms),
> >> +                                 &error_abort);
> >> +        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
> >> +        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
> >> +                                 x86_machine_is_smm_enabled(x86ms),
> >> +                                 &error_abort);
> >> +        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
> >> +
> >> +        if (xen_enabled()) {
> >> +            pci_device_set_intx_routing_notifier(
> >> +                        pci_dev, piix_intx_routing_notifier_xen);
> >> +
> >> +            /*
> >> +             * Xen supports additional interrupt routes from the PCI devices to
> >> +             * the IOAPIC: the four pins of each PCI device on the bus are also
> >> +             * connected to the IOAPIC directly.
> >> +             * These additional routes can be discovered through ACPI.
> >> +             */
> >> +            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
> >> +                         XEN_IOAPIC_NUM_PIRQS);
> >> +        }
> >> +
> >> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
> >> +        for (i = 0; i < ISA_NUM_IRQS; i++) {
> >> +            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
> >> +        }
> >> +        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
> >> +        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
> >> +                                                             "rtc"));
> >> +        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
> >> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
> >> +        pci_ide_create_devs(PCI_DEVICE(dev));
> >> +        idebus[0] = qdev_get_child_bus(dev, "ide.0");
> >> +        idebus[1] = qdev_get_child_bus(dev, "ide.1");
> >>      } else {
> >>          pci_bus = NULL;
> >> +        piix4_pm = NULL;
> >>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
> >>                                &error_abort);
> >> +        isa_bus_irqs(isa_bus, x86ms->gsi);
> >> +
> >> +        rtc_state = isa_new(TYPE_MC146818_RTC);
> >> +        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
> >> +        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
> >> +
> >>          i8257_dma_init(isa_bus, 0);
> >>          pcms->hpet_enabled = false;
> >> +        idebus[0] = NULL;
> >> +        idebus[1] = NULL;
> >>      }
> >> -    isa_bus_irqs(isa_bus, x86ms->gsi);
> >>  
> >>      if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
> >>          pc_i8259_create(isa_bus, gsi_state->i8259_irq);
> >> @@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
> >>      }
> >>  
> >>      /* init basic PC hardware */
> >> -    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
> >> +    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
> >>                           0x4);
> >>  
> >>      pc_nic_init(pcmc, isa_bus, pci_bus);
> >>  
> >>      if (pcmc->pci_enabled) {
> >> -        PCIDevice *dev;
> >> -
> >> -        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
> >> -        pci_ide_create_devs(dev);
> >> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> >> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
> >>          pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
> >>      }
> >>  #ifdef CONFIG_IDE_ISA
> >> @@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
> >>      }
> >>  #endif
> >>  
> >> -    if (pcmc->pci_enabled && machine_usb(machine)) {
> >> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
> >> -    }
> >> -
> >> -    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
> >> -        PCIDevice *piix4_pm;
> >> -
> >> +    if (piix4_pm) {
> >>          smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
> >> -        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
> >> -        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
> >> -        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
> >> -                          x86_machine_is_smm_enabled(x86ms));
> >> -        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
> >>  
> >> -        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
> >>          qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
> >>          pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
> >>          /* TODO: Populate SPD eeprom data.  */
> >> @@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
> >>                                   object_property_allow_set_link,
> >>                                   OBJ_PROP_LINK_STRONG);
> >>          object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
> >> -                                 OBJECT(piix4_pm), &error_abort);
> >> +                                 piix4_pm, &error_abort);
> >>      }
> >>  
> >>      if (machine->nvdimms_state->is_enabled) {
> >> @@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
> >>      }
> >>  
> >>      pc_xen_hvm_init_pci(machine);
> >> +    if (xen_igd_gfx_pt_enabled()) {
> >> +        xen_igd_reserve_slot(pcms->bus);
> >> +    }

These are the three lines from my patch that are added to
pc_piix.c. IIUC, this diff is showing the differences in pc_piix.c
without and with CONFIG_XEN defined. If so, then the fact that
the lines in my patch are added when CONFIG_XEN is defined
means they are not included as part of the build when CONFIG_XEN
is not defined. That is how I can claim that my small patch to
pc_piix.c does not affect builds without CONFIG_XEN defined.

I also think the rest of the files touched by my patch are only included
by the meson build system when the --enable-xen option is passed to
configure. So the entire patch is xen-specific and can easily be
eliminated from any build with the --disable-xen configure option.

Kind regards,

Chuck

> >>      pci_create_simple(pcms->bus, -1, "xen-platform");
> >>  }
> >>  #endif
> >> @@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
> >>      pc_i440fx_machine_options(m);
> >>      m->alias = "pc";
> >>      m->is_default = true;
> >> +#ifdef CONFIG_MICROVM_DEFAULT
> >> +    m->is_default = false;
> >> +#else
> >> +    m->is_default = true;
> >> +#endif
> >>  }
> >>  
> >>  DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,
> > 
> > 
> > Lots of changes here not guarded by CONFIG_XEN.
> > 
>
> What diff is this? How is my patch related to it?
>



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 02:21:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 02:21:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474210.735236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF4GL-00022g-Nu; Tue, 10 Jan 2023 02:21:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474210.735236; Tue, 10 Jan 2023 02:21:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF4GL-00022Z-Kt; Tue, 10 Jan 2023 02:21:17 +0000
Received: by outflank-mailman (input) for mailman id 474210;
 Tue, 10 Jan 2023 02:21:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qHhn=5H=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pF4GK-00022A-5b
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 02:21:16 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 73fe561a-908d-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 03:21:13 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 842C4B810A8;
 Tue, 10 Jan 2023 02:21:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 02EB7C433D2;
 Tue, 10 Jan 2023 02:21:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73fe561a-908d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1673317271;
	bh=g98YC6afw2raunyf2JttUHkwKLNQSW/goayP6h1YPKM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=I1+3tbcPXAch+Pit8m90QWM0vPTyjHqH/XbzasS06NVatE1ntE1H1uCr69rjcC69Q
	 HL2TInNRiMHt48M0O8wMa6udz/CKxEzf32pQWs3BnZj6ug9fDPyMDFthSfCrYkDDx7
	 n2WgNfNI0mgs6TjnB9rEV/S/vkpTEJYzPQ6a4tRvlzxa7EvTbhc21X1z31jSoTFuYt
	 TAAH99giAKAPRMY55xYb8eJHi1euKvtv9Wq1Rhhj3ePIOcxCCi6wyJFf4QxXCVw8Mg
	 p7kg1C5+e8UmiWIgIiiD2P+0Jobgq3+VqBNB8UZ2naf0qKSTafNR1iq/8ohCFLpaWL
	 snN6mdgeSksCg==
Date: Mon, 9 Jan 2023 18:21:04 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Gianluca Guida <gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v2 8/8] automation: add RISC-V smoke test
In-Reply-To: <494c2fd1e046de20c2fa24be3989cc6adde8fdbe.1673278109.git.oleksii.kurochko@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2301091820580.1342743@ubuntu-linux-20-04-desktop>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com> <494c2fd1e046de20c2fa24be3989cc6adde8fdbe.1673278109.git.oleksii.kurochko@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 9 Jan 2023, Oleksii Kurochko wrote:
> Add a check that the message 'Hello from C env' is present in the
> log file, to be sure that the stack is set up and the C part of early
> printk is working.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V2:
>     - Move changes in the dockerfile to separate patch and  send it to
>       mailing list separately:
>         [PATCH] automation: add qemu-system-riscv to riscv64.dockerfile
>     - Update test.yaml to wire up smoke test
> ---
>  automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
>  automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
>  2 files changed, 40 insertions(+)
>  create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index afd80adfe1..64f47a0ab9 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -54,6 +54,19 @@
>    tags:
>      - x86_64
>  
> +.qemu-riscv64:
> +  extends: .test-jobs-common
> +  variables:
> +    CONTAINER: archlinux:riscv64

I realize that it is already committed, but following the arm32
convention, the name of the arch container (currently archlinux:riscv64)
would be:

CONTAINER: archlinux:current-riscv64

I know this is not related to this patch, but I am taking the
opportunity to mention it now, in case we get a chance to fix it in the
future for consistency.


> +    LOGFILE: qemu-smoke-riscv64.log
> +  artifacts:
> +    paths:
> +      - smoke.serial
> +      - '*.log'
> +    when: always
> +  tags:
> +    - x86_64
> +
>  .yocto-test:
>    extends: .test-jobs-common
>    script:
> @@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
>    needs:
>      - debian-unstable-clang-debug
>  
> +qemu-smoke-riscv64-gcc:
> +  extends: .qemu-riscv64
> +  script:
> +    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - riscv64-cross-gcc

Similarly, here the "needs" dependency should be called
arch-current-gcc-riscv for consistency with arm32.

Basically, we already have a crossbuild and crosstest environment up and
running in gitlab-ci: the arm32 one. You can simply base all of the
naming conventions on it.
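
For illustration only, a consistently named cross-build job might look
like this (the job name and "extends" key below are hypothetical,
modeled on the arm32 pattern, not actual build.yaml entries):

archlinux-current-gcc-riscv64:
  extends: .gcc-riscv64-cross-build
  variables:
    CONTAINER: archlinux:current-riscv64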

I realize that riscv64-cross-gcc is also already exported by build.yaml,
but I am mentioning it in case we get a chance to fix it in the future.

Nonetheless this patch is OK on its own, so

Acked-by: Stefano Stabellini <sstabellini@kernel.org>



>  # Yocto test jobs
>  yocto-qemuarm64:
>    extends: .yocto-test-arm64
> diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
> new file mode 100755
> index 0000000000..e0f06360bc
> --- /dev/null
> +++ b/automation/scripts/qemu-smoke-riscv64.sh
> @@ -0,0 +1,20 @@
> +#!/bin/bash
> +
> +set -ex
> +
> +# Run the test
> +rm -f smoke.serial
> +set +e
> +
> +timeout -k 1 2 \
> +qemu-system-riscv64 \
> +    -M virt \
> +    -smp 1 \
> +    -nographic \
> +    -m 2g \
> +    -kernel binaries/xen \
> +    |& tee smoke.serial
> +
> +set -e
> +(grep -q "Hello from C env" smoke.serial) || exit 1
> +exit 0
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 03:38:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 03:38:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474225.735268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF5TF-0000t4-Ox; Tue, 10 Jan 2023 03:38:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474225.735268; Tue, 10 Jan 2023 03:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF5TF-0000sr-Kk; Tue, 10 Jan 2023 03:38:41 +0000
Received: by outflank-mailman (input) for mailman id 474225;
 Tue, 10 Jan 2023 03:38:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H4fc=5H=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pF5TE-0000sS-6J
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 03:38:40 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2047.outbound.protection.outlook.com [40.107.8.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43ec73c1-9098-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 04:38:37 +0100 (CET)
Received: from DB6PR0802CA0025.eurprd08.prod.outlook.com (2603:10a6:4:a3::11)
 by PAVPR08MB9038.eurprd08.prod.outlook.com (2603:10a6:102:32d::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 03:38:34 +0000
Received: from DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a3:cafe::2) by DB6PR0802CA0025.outlook.office365.com
 (2603:10a6:4:a3::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 03:38:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT034.mail.protection.outlook.com (100.127.142.97) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 03:38:33 +0000
Received: ("Tessian outbound baf1b7a96f25:v132");
 Tue, 10 Jan 2023 03:38:33 +0000
Received: from 292c8d7ca3d8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7440E39D-8D33-473A-A682-1730D4CAFD7D.1; 
 Tue, 10 Jan 2023 03:38:27 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 292c8d7ca3d8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 03:38:27 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by AS4PR08MB7555.eurprd08.prod.outlook.com (2603:10a6:20b:4fd::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 03:38:25 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%7]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 03:38:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43ec73c1-9098-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3e9AB8zKSikcyZ/BJHR3Insf+vBAyU5ZbNaFxgDySyY=;
 b=IFjxtRJyKBAg0XLW7qmqSyZqOs8LXNWwMYjAlRHrKersWBl/sSIrzyOY8MQcosdKOltGCwT8tZEB4fP+sCz8RUfDEWpWU4b8za/Zur8ftVGJDOsYwSgCR1HODXv3V0Ls1ABF+IUUboZMpCjTKggV6PUb3B/9QALm6/Kh8pLnqYI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cMa+DMhSjSeids7d8uTD+vobVs+TebXxHUZZbFPBzkRot52ZUlw36Zf/BnIesVFEmdIsKGMBEeYuCqpfyF3LYjRjfv1AC2PhjPOXy2k0IEa0i+n7FvoBSAh3plGHNhEMmtJbr1mLW/nXTjXkWsYEPf39IDY4oZ3ahC6OLe13RR74f1woJ6VSlV5dNNkH/A/GJ5EIq4Efl1c08zJWr2sPjcDeDmGcLtmUbygjj/ct9ekGgMv0KiT3vGnC6uzWBLgUBUgzYdW4kGSTIyNBE/V6dv7ik33FBwUcKX65CIBjM0FRf8z2zFOdyCMn2fLZ9MVMqTB+4Uu3Gg4yL/uvgALotQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3e9AB8zKSikcyZ/BJHR3Insf+vBAyU5ZbNaFxgDySyY=;
 b=ifkVRWah4P9d/Cosh9S1jsy7nPqJqaRMR+AOxdXe4aDmPm3psmCHxEIJa8i7nYTCqGauQRvPlX2xXLyPEbRDv1TQ6+B2Xh9i7gsFa4pUxrIkPNYtiuvTjJd4EVOqIdB83G8+p8WZ0+KnCtS+t/P8414WpI2JdaLAlrh8eMAEC3B3IwNw97J3Boewc5ZMSvOGIun4DtiD1ng5Hg8GQiMszfsMk5FbeORcB0bkE5LwUo01IRQkGgUhBkC6IzcfvDmQ+6T6WIcU8U4BcEfSeM+79+PnYOIFx1WCeO6qJ8cysk3s7f/2gYMdXTi7MSiwIBP5TJxZD6gXBxGX7XauPUShTg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host
 address not provided
Thread-Topic: [PATCH v1 06/13] xen/arm: assign shared memory to owner when
 host address not provided
Thread-Index:
 AQHY+J1pmmYEajWz5UynKX+c5rdEZK6UzwiAgAEQoXCAAGGAAIAACRWAgABzZYCAAJEhAA==
Date: Tue, 10 Jan 2023 03:38:24 +0000
Message-ID:
 <AM0PR08MB4530F966B099071ED9428109F7FF9@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-7-Penny.Zheng@arm.com>
 <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
 <AM0PR08MB453041150588948050F718D4F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <6db41bd2-ab71-422a-4235-a9209e984915@xen.org>
 <AM0PR08MB4530048C87F24524BDE2DCF8F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <8ae9e898-55ba-7fba-6ccc-883bd8b3e7ee@xen.org>
In-Reply-To: <8ae9e898-55ba-7fba-6ccc-883bd8b3e7ee@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: CF29401C2B1D2C42BC3A37E0A469BFEB.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|AS4PR08MB7555:EE_|DBAEUR03FT034:EE_|PAVPR08MB9038:EE_
X-MS-Office365-Filtering-Correlation-Id: d7d9df03-0d25-451c-0b14-08daf2bc265d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7555
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9f9f0909-e664-4261-79a4-08daf2bc2126
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 03:38:33.7595
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d7d9df03-0d25-451c-0b14-08daf2bc265d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9038

PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBKdWxpZW4gR3JhbGwgPGp1bGll
bkB4ZW4ub3JnPg0KPiBTZW50OiBUdWVzZGF5LCBKYW51YXJ5IDEwLCAyMDIzIDI6MjMgQU0NCj4g
VG86IFBlbm55IFpoZW5nIDxQZW5ueS5aaGVuZ0Bhcm0uY29tPjsgeGVuLWRldmVsQGxpc3RzLnhl
bnByb2plY3Qub3JnDQo+IENjOiBXZWkgQ2hlbiA8V2VpLkNoZW5AYXJtLmNvbT47IFN0ZWZhbm8g
U3RhYmVsbGluaQ0KPiA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz47IEJlcnRyYW5kIE1hcnF1aXMg
PEJlcnRyYW5kLk1hcnF1aXNAYXJtLmNvbT47DQo+IFZvbG9keW15ciBCYWJjaHVrIDxWb2xvZHlt
eXJfQmFiY2h1a0BlcGFtLmNvbT4NCj4gU3ViamVjdDogUmU6IFtQQVRDSCB2MSAwNi8xM10geGVu
L2FybTogYXNzaWduIHNoYXJlZCBtZW1vcnkgdG8gb3duZXINCj4gd2hlbiBob3N0IGFkZHJlc3Mg
bm90IHByb3ZpZGVkDQo+IA0KPiBIaSBQZW5ueSwNCg0KSGkgSnVsaWVuLA0KDQo+IA0KPiBPbiAw
OS8wMS8yMDIzIDExOjU4LCBQZW5ueSBaaGVuZyB3cm90ZToNCj4gPj4gLS0tLS1PcmlnaW5hbCBN
ZXNzYWdlLS0tLS0NCj4gPj4gRnJvbTogSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4NCj4g
Pj4gU2VudDogTW9uZGF5LCBKYW51YXJ5IDksIDIwMjMgNjo1OCBQTQ0KPiA+PiBUbzogUGVubnkg
WmhlbmcgPFBlbm55LlpoZW5nQGFybS5jb20+OyB4ZW4tDQo+IGRldmVsQGxpc3RzLnhlbnByb2pl
Y3Qub3JnDQo+ID4+IENjOiBXZWkgQ2hlbiA8V2VpLkNoZW5AYXJtLmNvbT47IFN0ZWZhbm8gU3Rh
YmVsbGluaQ0KPiA+PiA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz47IEJlcnRyYW5kIE1hcnF1aXMN
Cj4gPj4gPEJlcnRyYW5kLk1hcnF1aXNAYXJtLmNvbT47IFZvbG9keW15ciBCYWJjaHVrDQo+ID4+
IDxWb2xvZHlteXJfQmFiY2h1a0BlcGFtLmNvbT4NCj4gPj4gU3ViamVjdDogUmU6IFtQQVRDSCB2
MSAwNi8xM10geGVuL2FybTogYXNzaWduIHNoYXJlZCBtZW1vcnkgdG8gb3duZXINCj4gPj4gd2hl
biBob3N0IGFkZHJlc3Mgbm90IHByb3ZpZGVkDQo+ID4+DQo+ID4+DQo+ID4+DQo+ID4+IE9uIDA5
LzAxLzIwMjMgMDc6NDksIFBlbm55IFpoZW5nIHdyb3RlOg0KPiA+Pj4gSGkgSnVsaWVuDQo+ID4+
DQo+ID4+IEhpIFBlbm55LA0KPiA+Pg0KPiA+Pj4gSGFwcHkgbmV3IHllYXJ+fn5+DQo+ID4+DQo+
ID4+IEhhcHB5IG5ldyB5ZWFyIHRvbyENCj4gPj4NCj4gPj4+PiAtLS0tLU9yaWdpbmFsIE1lc3Nh
Z2UtLS0tLQ0KPiA+Pj4+IEZyb206IEp1bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+ID4+
Pj4gU2VudDogU3VuZGF5LCBKYW51YXJ5IDgsIDIwMjMgODo1MyBQTQ0KPiA+Pj4+IFRvOiBQZW5u
eSBaaGVuZyA8UGVubnkuWmhlbmdAYXJtLmNvbT47IHhlbi0NCj4gPj4gZGV2ZWxAbGlzdHMueGVu
cHJvamVjdC5vcmcNCj4gPj4+PiBDYzogV2VpIENoZW4gPFdlaS5DaGVuQGFybS5jb20+OyBTdGVm
YW5vIFN0YWJlbGxpbmkNCj4gPj4+PiA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz47IEJlcnRyYW5k
IE1hcnF1aXMNCj4gPj4+PiA8QmVydHJhbmQuTWFycXVpc0Bhcm0uY29tPjsgVm9sb2R5bXlyIEJh
YmNodWsNCj4gPj4+PiA8Vm9sb2R5bXlyX0JhYmNodWtAZXBhbS5jb20+DQo+ID4+Pj4gU3ViamVj
dDogUmU6IFtQQVRDSCB2MSAwNi8xM10geGVuL2FybTogYXNzaWduIHNoYXJlZCBtZW1vcnkgdG8N
Cj4gPj4+PiBvd25lciB3aGVuIGhvc3QgYWRkcmVzcyBub3QgcHJvdmlkZWQNCj4gPj4+Pg0KPiA+
Pj4+IEhpLA0KPiA+Pj4+DQo+ID4+Pg0KPiA+Pj4gQSBmZXcgY29uY2VybnMgZXhwbGFpbmVkIHdo
eSBJIGRpZG4ndCBjaG9vc2UgInN0cnVjdCBtZW1pbmZvIiBvdmVyDQo+ID4+PiB0d28gcG9pbnRl
cnMgInN0cnVjdCBtZW1iYW5rKiIgYW5kICJzdHJ1Y3QgbWVtaW5mbyoiLg0KPiA+Pj4gMSkgbWVt
b3J5IHVzYWdlIGlzIHRoZSBtYWluIHJlYXNvbi4NCj4gPj4+IElmIHdlIHVzZSAic3RydWN0IG1l
bWluZm8iIG92ZXIgdGhlIGN1cnJlbnQgInN0cnVjdCBtZW1iYW5rKiIgYW5kDQo+ID4+PiAic3Ry
dWN0IG1lbWluZm8qIiwgInN0cnVjdCBzaG1fbWVtaW5mbyIgd2lsbCBiZWNvbWUgYSBhcnJheSBv
ZiAyNTYNCj4gPj4+ICJzdHJ1Y3Qgc2htX21lbWJhbmsiLCB3aXRoICJzdHJ1Y3Qgc2htX21lbWJh
bmsiIGJlaW5nIGFsc28gYW4gMjU2LQ0KPiA+PiBpdGVtDQo+ID4+PiBhcnJheSwgdGhhdCBpcyAy
NTYgKiAyNTYsIHRvbyBiaWcgZm9yIGEgc3RydWN0dXJlIGFuZCBJZiBJDQo+ID4+PiByZW1lbWJl
cmVkIGNsZWFybHksDQo+ID4+IGl0IHdpbGwgbGVhZCB0byAibW9yZSB0aGFuIFBBR0VfU0laRSIg
Y29tcGlsaW5nIGVycm9yLg0KPiA+Pg0KPiA+PiBJIGFtIG5vdCBhd2FyZSBvZiBhbnkgcGxhY2Ug
d2hlcmUgd2Ugd291bGQgcmVzdHJpY3QgdGhlIHNpemUgb2Yga2luZm8NCj4gPj4gaW4gdXBzdHJl
YW0uIENhbiB5b3UgZ2l2ZSBtZSBhIHBvaW50ZXI/DQo+ID4+DQo+ID4NCj4gPiBJZiBJIHJlbWVt
YmVyZWQgY29ycmVjdGx5LCBteSBmaXJzdCB2ZXJzaW9uIG9mICJzdHJ1Y3Qgc2htX21lbWluZm8i
IGlzDQo+ID4gdGhpcw0KPiA+ICJiaWciKDI1NiAqIDI1Nikgc3RydWN0dXJlLCBhbmQgaXQgbGVh
ZHMgdG8gdGhlIHdob2xlIHhlbiBiaW5hcnkgaXMNCj4gPi4gO1wNCj4gDQo+IEFoIHNvIHRoZSBw
cm9ibGVtIGlzIGJlY2F1c2Ugc2htX21lbSBpcyB1c2VkIGluIGJvb3RpbmZvLiBUaGVuIEkgdGhp
bmsgd2UNCj4gc2hvdWxkIGNyZWF0ZSBhIGRpc3RpbmN0IHN0cnVjdHVyZSB3aGVuIGRlYWxpbmcg
d2l0aCBkb21haW4gaW5mb3JtYXRpb24uDQo+IA0KDQpZZXMsIElmIEkgdXNlIHRoZSBsYXR0ZXIg
InN0cnVjdCBzaG1faW5mbyIsIGtlZXBpbmcgdGhlIHNobSBtZW1vcnkgaW5mbyBvdXQgb2YgdGhl
IGJvb3RpbmZvLA0KSSB0aGluayB3ZSBjb3VsZCBhdm9pZCAiYmlnZ2VyIHRoYW4gMk1CIiBlcnJv
ci4NCg0KSG1tLCBvdXQgb2YgY3VyaW9zaXR5LCBUaGUgd2F5IHRvIGNyZWF0ZSAiZGlzdGluY3Qi
IHN0cnVjdHVyZSBpcyBsaWtlIGNyZWF0aW5nIGFub3RoZXIgc2VjdGlvbg0KZm9yIHRoZXNlIGRp
c3RpbmN0IHN0cnVjdHVyZXMgaW4gbGRzLCBqdXN0IGxpa2UgdGhlIGV4aXN0aW5nIC5kdGIgc2Vj
dGlvbj8NCiANCj4gPg0KPiA+Pj4gRldJVCwgZWl0aGVyIHJld29ya2luZyBtZW1pbmZvIG9yIHVz
aW5nIGEgZGlmZmVyZW50IHN0cnVjdHVyZSwgYXJlDQo+ID4+PiBib3RoIGxlYWRpbmcgdG8gc2l6
aW5nIGRvd24gdGhlIGFycmF5LCBobW1tLCBJIGRvbid0IGtub3cgd2hpY2ggc2l6ZQ0KPiA+Pj4g
aXMgc3VpdGFibGUuIFRoYXQncyB3aHkgSSBwcmVmZXIgcG9pbnRlciBhbmQgZHluYW1pYyBhbGxv
Y2F0aW9uLg0KPiA+Pg0KPiA+PiBJIHdvdWxkIGV4cGVjdCB0aGF0IGluIG1vc3QgY2FzZXMsIHlv
dSB3aWxsIG5lZWQgb25seSBvbmUgYmFuayB3aGVuDQo+ID4+IHRoZSBob3N0IGFkZHJlc3MgaXMg
bm90IHByb3ZpZGVkLiBTbyBJIHRoaW5rIHRoaXMgaXMgYSBiaXQgb2RkIHRvIG1lIHRvDQo+IGlt
cG9zZSBhICJsYXJnZSINCj4gPj4gYWxsb2NhdGlvbiBmb3IgdGhlbS4NCj4gPj4NCj4gPg0KPiA+
IE9ubHkgaWYgdXNlciBpcyBub3QgZGVmaW5pbmcgc2l6ZSBhcyBzb21ldGhpbmcgbGlrZSAoMl5h
ICsgMl5iICsgMl5jICsNCj4gPiAuLi4pLiA7XCBTbyBtYXliZSA4IG9yIDE2IGlzIGVub3VnaD8N
Cj4gPiBzdHJ1Y3QgbmV3X21lbWluZm8gew0KPiANCj4gIm5ldyIgaXMgYSBiaXQgc3RyYW5nZS4g
VGhlIG5hbWUgd291bGQgd2FudCB0byBiZSBjaGFuZ2VkLiBPciBtYXliZSBiZXR0ZXINCj4gdGhl
IHN0cnVjdHVyZSBiZWVuIGRlZmluZWQgd2l0aGluIHRoZSBuZXh0IHN0cnVjdHVyZSBhbmQgYW5v
bnltaXplZC4NCj4gDQo+ID4gICAgICB1bnNpZ25lZCBpbnQgbnJfYmFua3M7DQo+ID4gICAgICBz
dHJ1Y3QgbWVtYmFuayBiYW5rWzhdOw0KPiA+IH07DQo+ID4NCj4gPiBDb3JyZWN0IG1lIGlmIEkn
bSB3cm9uZzoNCj4gPiBUaGUgInN0cnVjdCBzaG1fbWVtYmFuayIgeW91IGFyZSBzdWdnZXN0aW5n
IGlzIGxvb2tpbmcgbGlrZSB0aGlzLCByaWdodD8NCj4gPiBzdHJ1Y3Qgc2htX21lbWJhbmsgew0K
PiA+ICAgICAgY2hhciBzaG1faWRbTUFYX1NITV9JRF9MRU5HVEhdOw0KPiA+ICAgICAgdW5zaWdu
ZWQgaW50IG5yX3NobV9ib3Jyb3dlcnM7DQo+ID4gICAgICBzdHJ1Y3QgbmV3X21lbWluZm8gc2ht
X2JhbmtzOw0KPiA+ICAgICAgdW5zaWduZWQgbG9uZyB0b3RhbF9zaXplOw0KPiA+IH07DQo+IA0K
PiBBRkFJVSwgc2htX21lbWJhbmsgd291bGQgc3RpbGwgYmUgdXNlZCB0byBnZXQgdGhlIGluZm9y
bWF0aW9uIGZyb20gdGhlDQo+IGhvc3QgZGV2aWNlLXRyZWUuIElmIHNvLCB0aGVuIEkgYW0gYWZy
YWlkIHRoaXMgaXMgbm90IGFuIG9wdGlvbiB0byBtZSBiZWNhdXNlIGl0DQo+IHdvdWxkIG1ha2Ug
dGhlIGNvZGUgdG8gcmVzZXJ2ZSBtZW1vcnkgbW9yZSBjb21wbGV4Lg0KPiANCj4gSW5zdGVhZCwg
d2Ugc2hvdWxkIGNyZWF0ZSBhIHNlcGFyYXRlIHN0cnVjdHVyZSB0aGF0IHdpbGwgb25seSBiZSB1
c2VkIGZvcg0KPiBkb21haW4gc2hhcmVkIG1lbW9yeSBpbmZvcm1hdGlvbi4NCj4NCg0KQWgsIHNv
IHlvdSBhcmUgc3VnZ2VzdGluZyB3ZSBzaG91bGQgZXh0cmFjdCB0aGUgZG9tYWluIHNoYXJlZCBt
ZW1vcnkgaW5mb3JtYXRpb24gb25seQ0Kd2hlbiBkZWFsaW5nIHdpdGggdGhlIGluZm9ybWF0aW9u
IGZyb20gdGhlIGhvc3QgZGV2aWNlLXRyZWUsIHNvbWV0aGluZyBsaWtlIHRoaXM6DQpzdHJ1Y3Qg
c2htX2luZm8gew0KICAgICAgY2hhciBzaG1faWRbTUFYX1NITV9JRF9MRU5HVEhdOw0KICAgICAg
dW5zaWduZWQgaW50IG5yX3NobV9ib3Jyb3dlcnM7DQp9DQoNCkknbGwgdHJ5IGFuZCBtYXkgKmJv
dGhlciogeW91IHdoZW4gZW5jb3VudGVyaW5nIGFueSBwcm9ibGVtfiANCiANCj4gQ2hlZXJzLA0K
PiANCj4gLS0NCj4gSnVsaWVuIEdyYWxsDQo=


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 03:51:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 03:51:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474233.735282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF5fz-0003Gt-10; Tue, 10 Jan 2023 03:51:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474233.735282; Tue, 10 Jan 2023 03:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF5fy-0003Gm-TK; Tue, 10 Jan 2023 03:51:50 +0000
Received: by outflank-mailman (input) for mailman id 474233;
 Tue, 10 Jan 2023 03:51:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF5fx-0003Gc-Sw; Tue, 10 Jan 2023 03:51:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF5fx-00018f-PL; Tue, 10 Jan 2023 03:51:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF5fx-0005V1-8Y; Tue, 10 Jan 2023 03:51:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pF5fx-0003az-84; Tue, 10 Jan 2023 03:51:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Fc/XcGNtYMkIXBTbYkCxDzOzfA1HN9tLBKS8whlUkGE=; b=vVRkvAv/xIUedsD2U5GCI9E3Wk
	SG3T4mhITFLDwYUC1DSDPXE9OYBCN6R56f92C3alK0sdn7wvVQbhTDT0CsEyhSp7/7TepeptvU5tA
	jLfVqn5g3nuUw3AYmqOWff1foK2SlukpXrKEMswfsPbnqeUOvt9Cz0ee+JRjbnOFRQ5A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175672-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175672: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 03:51:49 +0000

flight 175672 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175672/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    2 days
Failing since        175627  2023-01-08 14:40:14 Z    1 days    8 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 04:58:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 04:58:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474243.735299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF6iS-0001IE-VS; Tue, 10 Jan 2023 04:58:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474243.735299; Tue, 10 Jan 2023 04:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF6iS-0001I7-R6; Tue, 10 Jan 2023 04:58:28 +0000
Received: by outflank-mailman (input) for mailman id 474243;
 Tue, 10 Jan 2023 04:58:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=du2x=5H=gmail.com=vsuneja63@srs-se1.protection.inumbo.net>)
 id 1pF6iQ-0001Hz-3J
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 04:58:27 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65cf9b72-90a3-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 05:58:18 +0100 (CET)
Received: by mail-ej1-x62c.google.com with SMTP id fy8so25529944ejc.13
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 20:58:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65cf9b72-90a3-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=oaOewmfSeYePVpf6Vcv0RW6UCzStuvGNQs9auxSLeQM=;
        b=CcjeblesSGSj/sMxe/RzZfljUQg7dE1wPDmksLHJEbu60d4BARBRM++wYJrbvzduji
         m7Hv76c+2nlDh+hmZKChA3ggbIrKK7WeYQ+XS70vPLP+7okiqFDjumA0PDIAJqXaaOfc
         pmwBN9vJAbZDEhextYv6N2xvPMyswkzOwCN0LytVd4Puu6YCLBgPJEjomPCBmvtAdAUB
         xAhVJ6AS7Hr8bJ+wxb4R0Xms3SoovldeSjctShE/X5CwqcmQu/Mvpa0IRzw/+oOcmLJJ
         Nx0mRAuzggHMa6nQ/G1xHv1xFI/egsFdNMxI71M27Y85STeLGgtNsSwkVe5b2ZE0sX4+
         sp3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=oaOewmfSeYePVpf6Vcv0RW6UCzStuvGNQs9auxSLeQM=;
        b=mYQb/hD7rtMY3OR3f9oI2Cp75QF8grMNmlc/2qDCONbshbMu8bcLfh4tNgZS4J2g5y
         uEvalGOj3Vy42y2IYLtppzWH1mIGzmimcDjeOP8XWZy05iaxKOzqoROyveNNVD0L5VDH
         nHNiOFjPnXShCt4k37kzvGQ+y5oZM9/SjnFn0hYZiA2SuXbKQdr62CAkok8qDPwNRZfV
         1LeWHl5jZYuKGsPkK/UdZBNJbMJF7L/5kAIuAeLx+UtvsB4CjeKbApkMZd8OaUujWTl/
         fIohoxhV+NzDgRlLCAV0U/IJCiCnqBFZYIXT+Ay/+9kHMn2QU4apPgGNiQVa7F7b9Gzx
         NenA==
X-Gm-Message-State: AFqh2krslrenXiScvmZ1YdemRo2Ls2QC4UMXGea5RwE4j5LlSnsJ9+Q6
	WN9tjjAlwrArq7Vef3WgkrqBXp7dvR8zL+FtnFQ=
X-Google-Smtp-Source: AMrXdXtqpBpWxptVz+OnnSsCnAlLP6i6Neh/9tksCYA7axiPmWBunDvEU9wB3eFEZJS9/5DM1quawOagOlCojTBsyLs=
X-Received: by 2002:a17:906:281b:b0:7c1:98f:be57 with SMTP id
 r27-20020a170906281b00b007c1098fbe57mr3786498ejc.97.1673326697625; Mon, 09
 Jan 2023 20:58:17 -0800 (PST)
MIME-Version: 1.0
References: <CALAP8f--jyG=ufJ9WGtL6qoeGdsykjNK85G3q50SzJm5+wOzhQ@mail.gmail.com>
 <CALAP8f_n2okQ-Ss_kGACAq3BVYXS_D2P_8AyhOzUxqgWpz9f4g@mail.gmail.com>
 <alpine.DEB.2.22.394.2211101702250.50442@ubuntu-linux-20-04-desktop>
 <CALAP8f8zGfNA_CZU4UQXy7-rPT6dqih9XpzuKM3vvkoBvy6usg@mail.gmail.com>
 <alpine.DEB.2.22.394.2211221605470.1049131@ubuntu-linux-20-04-desktop>
 <CALAP8f_QiHN4dP3z+LQgKdGeo8-=9DMyk0W7+x6P2eHvnOD_wQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2212011128430.4039@ubuntu-linux-20-04-desktop>
 <CALAP8f_b=0m0dqj9a50UYXYfw9X873i07sG9eyxFSqxF0yEneQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2212121406270.3075842@ubuntu-linux-20-04-desktop>
 <CALAP8f9JY23ZyDGnku4iWf5YCamSQKsZtdZj3MhX9TrF7wgEpw@mail.gmail.com>
 <alpine.DEB.2.22.394.2212131518180.315094@ubuntu-linux-20-04-desktop>
 <CALAP8f-fka4jicvLhzS8NFyyqD_NnffMxrZmqpz-x9JnL7Oy7w@mail.gmail.com>
 <alpine.DEB.2.22.394.2212141443130.315094@ubuntu-linux-20-04-desktop>
 <CALAP8f8yOdG_g0GpWG5ZPZ0BKiaKCyM2N4V6x_8Fr08f7QjpvA@mail.gmail.com>
 <alpine.DEB.2.22.394.2212221523390.4079@ubuntu-linux-20-04-desktop> <CALAP8f8yvZUKJEXXL8qcoy9=nJ1G97OtiWSv7tk1LDerEWUqiw@mail.gmail.com>
In-Reply-To: <CALAP8f8yvZUKJEXXL8qcoy9=nJ1G97OtiWSv7tk1LDerEWUqiw@mail.gmail.com>
From: Vipul Suneja <vsuneja63@gmail.com>
Date: Tue, 10 Jan 2023 10:28:03 +0530
Message-ID: <CALAP8f_op8wS=7AaZF0wCjZm8aSmQMfEY5Bv+30+8UDGmQrezA@mail.gmail.com>
Subject: Re: Porting Xen in raspberry pi4B
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, oleksandr_andrushchenko@epam.com, 
	oleksandr_tyshchenko@epam.com, jgross@suse.com, boris.ostrovsky@oracle.com, 
	Bertrand.Marquis@arm.com, Stewart.Hildebrand@amd.com, michal.orzel@amd.com, 
	vikram.garhwal@amd.com
Content-Type: multipart/alternative; boundary="00000000000067bdb605f1e1bbce"

--00000000000067bdb605f1e1bbce
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Stefano,

Thanks!

Do you have any further input based on the attached logs?

Regards,
Vipul Kumar

On Mon, Dec 26, 2022 at 11:30 PM Vipul Suneja <vsuneja63@gmail.com> wrote:

> Hi Stefano,
>
> Thanks!
>
> Regarding the functions you mentioned (qemu_create_displaysurface,
> qemu_create_displaysurface_from, dpy_gfx_replace_surface, dpy_gfx_update
> and dpy_gfx_check_format): I found that they are not part of the
> /ui/vnc.c source but are defined in /ui/console.c, and none of them are
> called from vnc.c. I added debug logs for all of these functions in
> console.c, but the logs show that only qemu_create_displaysurface and
> dpy_gfx_replace_surface are invoked. I also tried vncviewer on the host
> machine, but the other functions are still not invoked. I am attaching
> the log file; do you have any other suggestions based on it, or any
> input for debugging the VNC source file?
>
> *You can also try to use another QEMU UI like SDL to see if the problem is
> specific to VNC only.*
> I already tried SDL by adding "vfb=[ 'type=sdl' ]" to the guest
> configuration file, but it failed and the guest machine did not start.
> Please correct me if my configuration or the steps to use SDL are wrong.
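For reference, the vfb stanza being attempted would look roughly like this in xl.cfg syntax (a sketch based on the xl.cfg conventions; note that the SDL backend also needs a usable display environment for the QEMU process in dom0, which is often the reason an SDL vfb fails to start on a headless board):

```
# Guest xl.cfg fragment: one virtual framebuffer served via SDL.
vfb = [ 'type=sdl' ]

# VNC variant for comparison:
# vfb = [ 'type=vnc,vnclisten=0.0.0.0' ]
```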
>
> Thanks & Regards,
> Vipul Kumar
>
> On Fri, Dec 23, 2022 at 5:13 AM Stefano Stabellini <sstabellini@kernel.org>
> wrote:
>
>> Hi Vipul,
>>
>> Great that you managed to set up a debugging environment. The logs look
>> very promising: it looks like xenfb.c is handling events as expected,
>> so the xen-fbfront.c -> xenfb.c connection appears to be working.
>>
>> The next step is to verify that the xenfb.c -> ./ui/vnc.c path is
>> working too.
>>
>> It could be that the pixels and mouse events arrive just fine in
>> xenfb.c, but then there is an issue with exporting them to the vncserver
>> implementation inside QEMU, which is ./ui/vnc.c. The interesting
>> functions there are qemu_create_displaysurface,
>> qemu_create_displaysurface_from, dpy_gfx_replace_surface,
>> dpy_gfx_update, and dpy_gfx_check_format.
>>
>> Specifically dpy_gfx_update should cause VNC to render the new area.
>>
>> qemu_create_displaysurface_from lets VNC use the xenfb buffer directly,
>> rather than using a secondary buffer and memory copies.
>> Interestingly, dpy_gfx_check_format should be used to check if it is
>> appropriate to share the buffer (qemu_create_displaysurface_from) or not
>> (qemu_create_displaysurface) but we don't call it.
>>
>> I think it would be good to add a call to dpy_gfx_check_format in
>> xenfb_update where we call qemu_create_displaysurface_from and also add
>> a printk.
>>
>> You can try to disable the buffer sharing by replacing
>> qemu_create_displaysurface_from with qemu_create_displaysurface. You can
>> also try to use another QEMU UI like SDL to see if the problem is
>> specific to VNC only.
>>
>> Cheers,
>>
>> Stefano
>>
>>
>> On Mon, 19 Dec 2022, Vipul Suneja wrote:
>> > Hi Stefano,
>> >
>> > Thanks!
>> >
>> > I managed to prepare a patch adding debug printf logs to xenfb.c and
>> > recompile QEMU in the Yocto image. Just for reference, I included logs
>> > in all the functions.
>> > I am attaching the QEMU log file; I can see the entry & exit logs for
>> > "xenfb_handle_events" & "xenfb_map_fb" coming up after the host machine
>> > boots up. Can you please advise which parameters have to be
>> > cross-checked, or share any other input based on the logs?
>> >
>> > Thanks & Regards,
>> > Vipul Kumar
>> >
>> > On Thu, Dec 15, 2022 at 4:17 AM Stefano Stabellini <
>> sstabellini@kernel.org> wrote:
>> >       Hi Vipul,
>> >
>> >       For QEMU you actually need to follow the Yocto build process to
>> update
>> >       the QEMU binary. That is because QEMU is a userspace application
>> with
>> >       lots of library dependencies so we cannot just do "make" with a
>> >       cross-compiler like in the case of Xen.
>> >
>> >       So you need to make changes to QEMU and then add those changes as a
>> >       patch to the Yocto QEMU build recipe, or configure Yocto to use
>> >       your local tree to build QEMU. I am not a Yocto expert and the
>> >       Yocto community would be a better place to ask for advice there.
>> >       You can see from here some instructions on how to build Xen using
>> >       a local tree, see the usage of EXTERNALSRC (note that this is
>> >       *not* what you need: you need to build QEMU with a local tree, not
>> >       Xen. But I thought that the wikipage might still be a starting
>> >       point)
>> >
>> >       https://wiki.xenproject.org/wiki/Xen_on_ARM_and_Yocto
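The externalsrc mechanism alluded to above can be wired up for QEMU roughly like this in conf/local.conf (a sketch only: the local path is hypothetical, and the exact variable syntax depends on the Yocto release; releases before Honister use the underscore form, e.g. EXTERNALSRC_pn-qemu):

```
# Build the "qemu" recipe from a local source tree instead of the
# recipe's fetched tarball (path below is an example placeholder):
INHERIT += "externalsrc"
EXTERNALSRC:pn-qemu = "/home/agl/src/qemu"
EXTERNALSRC_BUILD:pn-qemu = "/home/agl/src/qemu/build"
```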
>> >
>> >       Cheers,
>> >
>> >       Stefano
>> >
>> >
>> >       On Thu, 15 Dec 2022, Vipul Suneja wrote:
>> >       > Hi Stefano,
>> >       >
>> >       > Thanks!
>> >       >
>> >       > I could see QEMU 6.2.0 compiled & installed in the host image
>> >       > xen-image-minimal. I could also find the xenfb.c source file and
>> >       > modified it with debug logs.
>> >       > I have set up a cross-compile environment and did 'make clean' &
>> >       > 'make all' to recompile, but it is failing. In case I am doing
>> >       > something wrong, can you please assist me with the correct steps
>> >       > to compile QEMU? Below are the error logs:
>> >       >
>> >       >
>> >       agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/build$ make all
>> >       > [1/3864] Compiling C object libslirp.a.p/slirp_src_arp_table.c.o
>> >       > [2/3864] Compiling C object subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o
>> >       > [3/3864] Linking static target subprojects/libvhost-user/libvhost-user.a
>> >       > [4/3864] Compiling C object libslirp.a.p/slirp_src_vmstate.c.o
>> >       > [5/3864] Compiling C object libslirp.a.p/slirp_src_dhcpv6.c.o
>> >       > [6/3864] Compiling C object libslirp.a.p/slirp_src_dnssearch.c.o
>> >       > [7/3864] Compiling C object libslirp.a.p/slirp_src_bootp.c.o
>> >       > [8/3864] Compiling C object libslirp.a.p/slirp_src_cksum.c.o
>> >       > [9/3864] Compiling C object libslirp.a.p/slirp_src_if.c.o
>> >       > [10/3864] Compiling C object libslirp.a.p/slirp_src_ip6_icmp.c.o
>> >       > [11/3864] Compiling C object libslirp.a.p/slirp_src_ip6_input.c.o
>> >       > [12/3864] Compiling C object libslirp.a.p/slirp_src_ip6_output.c.o
>> >       > [13/3864] Compiling C object libslirp.a.p/slirp_src_ip_icmp.c.o
>> >       > [14/3864] Compiling C object libslirp.a.p/slirp_src_ip_input.c.o
>> >       > [15/3864] Compiling C object libslirp.a.p/slirp_src_ip_output.c.o
>> >       > [16/3864] Compiling C object libslirp.a.p/slirp_src_mbuf.c.o
>> >       > [17/3864] Compiling C object libslirp.a.p/slirp_src_misc.c.o
>> >       > [18/3864] Compiling C object libslirp.a.p/slirp_src_ncsi.c.o
>> >       > [19/3864] Compiling C object libslirp.a.p/slirp_src_ndp_table.c.o
>> >       > [20/3864] Compiling C object libslirp.a.p/slirp_src_sbuf.c.o
>> >       > [21/3864] Compiling C object libslirp.a.p/slirp_src_slirp.c.o
>> >       > [22/3864] Compiling C object libslirp.a.p/slirp_src_socket.c.o
>> >       > [23/3864] Compiling C object libslirp.a.p/slirp_src_state.c.o
>> >       > [24/3864] Compiling C object libslirp.a.p/slirp_src_stream.c.o
>> >       > [25/3864] Compiling C object libslirp.a.p/slirp_src_tcp_input.c.o
>> >       > [26/3864] Compiling C object libslirp.a.p/slirp_src_tcp_output.c.o
>> >       > [27/3864] Compiling C object libslirp.a.p/slirp_src_tcp_subr.c.o
>> >       > [28/3864] Compiling C object libslirp.a.p/slirp_src_tcp_timer.c.o
>> >       > [29/3864] Compiling C object libslirp.a.p/slirp_src_tftp.c.o
>> >       > [30/3864] Compiling C object libslirp.a.p/slirp_src_udp.c.o
>> >       > [31/3864] Compiling C object libslirp.a.p/slirp_src_udp6.c.o
>> >       > [32/3864] Compiling C object libslirp.a.p/slirp_src_util.c.o
>> >       > [33/3864] Compiling C object libslirp.a.p/slirp_src_version.c.o
>> >       > [34/3864] Linking static target libslirp.a
>> >       > [35/3864] Generating qemu-version.h with a custom command (wrapped by meson to capture output)
>> >       > FAILED: qemu-version.h
>> >       > /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/recipe-sysroot-native/usr/bin/meson --internal exe --capture qemu-version.h -- /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/qemu-6.2.0/scripts/qemu-version.sh /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/qemu-6.2.0 '' 6.2.0
>> >       > /usr/bin/env: ‘nativepython3’: No such file or directory
>> >       > ninja: build stopped: subcommand failed.
>> >       > make: *** [Makefile:162: run-ninja] Error 1
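The ‘nativepython3’ failure above typically means the scripts in the recipe's native sysroot are not on PATH when make is run directly in the work directory; driving the rebuild through bitbake sets up that environment. A hedged command sketch (task names can vary between Yocto releases):

```
# Rebuild the QEMU recipe through bitbake instead of invoking make
# directly in tmp/work/...; bitbake provides the native sysroot
# environment that supplies nativepython3:
bitbake -c cleansstate qemu
bitbake qemu
```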
>> >       >
>> >       > Thanks & Regards,
>> >       > Vipul Kumar
>> >       >
>> >       > On Wed, Dec 14, 2022 at 4:55 AM Stefano Stabellini <
>> sstabellini@kernel.org> wrote:
>> >       >       Hi Vipul,
>> >       >
>> >       >       Good progress! The main function we should check is
>> "xenfb_refresh" but
>> >       >       from the logs it looks like it is called several times.
>> Which means that
>> >       >       everything seems to be working as expected on the Linux
>> side.
>> >       >
>> >       >       It is time to investigate the QEMU side:
>> >       >       ./hw/display/xenfb.c:xenfb_handle_events
>> >       >       ./hw/display/xenfb.c:xenfb_map_fb
>> >       >
>> >       >       I wonder if the issue is internal to QEMU. You might want
>> to use an
>> >       >       older QEMU version to check if it works, maybe 6.0 or 5.0
>> or even 4.0.
>> >       >       I also wonder if it is a problem between xenfb.c and the
>> rest of QEMU. I
>> >       >       would investigate how xenfb->pixels is rendered by the
>> rest of QEMU.
>> >       >       Specifically you might want to look at the call to
>> >       >       qemu_create_displaysurface,
>> qemu_create_displaysurface_from and
>> >       >       dpy_gfx_replace_surface in xenfb_update.
>> >       >
>> >       >       I hope this helps.
>> >       >
>> >       >       Cheers,
>> >       >
>> >       >       Stefano
>> >       >
>> >       >
>> >       >       On Tue, 13 Dec 2022, Vipul Suneja wrote:
>> >       >       > Hi Stefano,
>> >       >       >
>> >       >       > Thanks!
>> >       >       >
>> >       >       > I modified xen-fbfront.c source file, included printk
>> debug logs & cross compiled it. I included the printk debug log
>> >       at the
>> >       >       entry & exit
>> >       >       > of all functions of xen-fbfront.c file.
>> >       >       > Generated kernel module & loaded in guest machine at
>> bootup. I could see lots of logs coming up, and could see
>> >       multiple
>> >       >       functions being
>> >       >       > invoked even if I have not used vncviewer in the host.
>> Attaching the log file for reference. Any specific function or
>> >       >       parameters that have
>> >       >       > to be checked or any other suggestion as per logs?
>> >       >       >
>> >       >       > Thanks & Regards,
>> >       >       > Vipul Kumar
>> >       >       >
>> >       >       > On Tue, Dec 13, 2022 at 3:44 AM Stefano Stabellini <
>> sstabellini@kernel.org> wrote:
>> >       >       >       Hi Vipul,
>> >       >       >
>> >       >       >       I am online on IRC OFTC #xendevel (
>> https://www.oftc.net/, you need a
>> >       >       >       registered nickname to join #xendevel).
>> >       >       >
>> >       >       >       For development and debugging I find that it is a
>> lot easier to
>> >       >       >       crosscompile the kernel "by hand", and do a
>> monolithic build, rather
>> >       >       >       than going through Yocto.
>> >       >       >
>> >       >       >       For instance the following builds for me:
>> >       >       >
>> >       >       >       cd linux.git
>> >       >       >       export ARCH=arm64
>> >       >       >       export CROSS_COMPILE=/path/to/cross-compiler
>> >       >       >       make defconfig
>> >       >       >       [add printks to drivers/video/fbdev/xen-fbfront.c]
>> >       >       >       make -j8 Image.gz
>> >       >       >
>> >       >       >       And Image.gz boots on Xen as DomU kernel without
>> issues.
>> >       >       >
>> >       >       >       Cheers,
>> >       >       >
>> >       >       >       Stefano
>> >       >       >
>> >       >       >       On Sat, 10 Dec 2022, Vipul Suneja wrote:
>> >       >       >       > Hi Stefano,
>> >       >       >       >
>> >       >       >       > Thanks!
>> >       >       >       >
>> >       >       >       > I have included printk debug logs in the
>> xen-fbfront.c source file. While cross compiling to generate .ko
>> >       with
>> >       >       >       "xen-guest-image-minimal"
>> >       >       >       > toolchain it's throwing a modpost
>> >       >       >       > not found error. I could see the modpost.c
>> source file but the final script is missing. Any input on this?
>> >       Below are
>> >       >       the
>> >       >       >       logs:
>> >       >       >       >
>> >       >       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$
>> make
>> >       >       >       > make ARCH=arm64
>> -I/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/usr/include/asm -C
>> >       >       >       >
>> /opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build
>> >       >       >       >
>> M=/home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer
>> modules
>> >       >       >       > make[1]: Entering directory
>> >
>>  '/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build'
>> >       >       >       > arch/arm64/Makefile:36: Detected assembler with
>> broken .inst; disassembly will be unreliable
>> >       >       >       > warning: the compiler differs from the one used
>> to build the kernel
>> >       >       >       >   The kernel was built by: gcc (Ubuntu
>> 9.4.0-1ubuntu1~20.04.1) 9.4.0
>> >       >       >       >   You are using:
>> aarch64-poky-linux-gcc (GCC) 11.3.0
>> >       >       >       >   CC [M]
>>  /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/xen-fbfront.o
>> >       >       >       >   MODPOST
>> /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/Module.symvers
>> >       >       >       > /bin/sh: 1: scripts/mod/modpost: not found
>> >       >       >       > make[2]: *** [scripts/Makefile.modpost:133:
>> >       >       >
>>  /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/Module.symvers]
>> >       >       >       > Error 127
>> >       >       >       > make[1]: *** [Makefile:1813: modules] Error 2
>> >       >       >       > make[1]: Leaving directory
>> >
>>  '/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build'
>> >       >       >       > make: *** [Makefile:5: all] Error 2
>> >       >       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$
>> ls -l
>> >       >       >       > total 324
>> >       >       >       > -rwxrwxrwx 1 agl agl    359 Dec 10 22:41
>> Makefile
>> >       >       >       > -rw-rw-r-- 1 agl agl     90 Dec 10 22:49
>> modules.order
>> >       >       >       > -rw-r--r-- 1 agl agl  18331 Dec  1 20:32
>> xen-fbfront.c
>> >       >       >       > -rw-rw-r-- 1 agl agl     90 Dec 10 22:49
>> xen-fbfront.mod
>> >       >       >       > -rw-rw-r-- 1 agl agl 297832 Dec 10 22:49
>> xen-fbfront.o
>> >       >       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$
>> file xen-fbfront.o
>> >       >       >       > xen-fbfront.o: ELF 64-bit LSB relocatable, ARM
>> aarch64, version 1 (SYSV), with debug_info, not stripped
>> >       >       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$
>> >       >       >       >
>> >       >       >       > I have connected an HDMI based 1980x1024
>> resolution display screen to raspberrypi4 for testing purposes. I
>> >       hope
>> >       >       connecting
>> >       >       >       this display to
>> >       >       >       > rpi4 should be ok.
>> >       >       >       >
>> >       >       >       > Is there any other way we can connect also for
>> detailed discussion on the display bringup issue? This will
>> >       really
>> >       >       help to
>> >       >       >       resolve this
>> >       >       >       > issue.
>> >       >       >       >
>> >       >       >       > Thanks & Regards,
>> >       >       >       > Vipul Kumar
>> >       >       >       >
>> >       >       >       > On Fri, Dec 2, 2022 at 1:02 AM Stefano
>> Stabellini <sstabellini@kernel.org> wrote:
>> >       >       >       >       On Thu, 1 Dec 2022, Vipul Suneja wrote:
>> >       >       >       >       > Hi Stefano,
>> >       >       >       >       > Thanks!
>> >       >       >       >       >
>> >       >       >       >       > I am exploring both options here,
>> modification of framebuffer source file & setting up x11vnc server
>> >       in
>> >       >       guest.
>> >       >       >       >       > Other than these I would like to share
>> a few findings with you.
>> >       >       >       >       >
>> >       >       >       >       > 1. If I keep
>> "CONFIG_XEN_FBDEV_FRONTEND=y" then xen-fbfront.ko is not generated, but if
>> I keep
>> >       >       >       "CONFIG_XEN_FBDEV_FRONTEND=m"
>> >       >       >       >       > then could see xen-fbfront.ko & its
>> loading also. Same things with other frontend/backend drivers
>> >       also. Do we
>> >       >       need to
>> >       >       >       >       configure these
>> >       >       >       >       > drivers as a module(m) only?
>> >       >       >       >
>> >       >       >       >       xen-fbfront should work both as a module
>> (xen-fbfront.ko) or built-in
>> >       >       >       >       (CONFIG_XEN_FBDEV_FRONTEND=y).
>> >       >       >       >
>> >       >       >       >
>> >       >       >       >
>> >       >       >       >       > 2. I could see xenstored service is
>> running for the host but it's always failing for the
>> >       guest machine. I
>> >       >       could see
>> >       >       >       it in
>> >       >       >       >       bootup logs & via
>> >       >       >       >       > systemctl status also.
>> >       >       >       >
>> >       >       >       >       That is normal. xenstored is only meant
>> to be run in Dom0, not in the
>> >       >       >       >       domUs. If you use the same rootfs for
>> Dom0 and DomU then xenstored will
>> >       >       >       >       fail starting in the DomU (but should
>> succeed in Dom0), which is what we
>> >       >       >       >       want.
>> >       >       >       >
>> >       >       >       >       If you run "xenstore-ls" in Dom0, you'll
>> see a bunch of entries,
>> >       >       >       >       including some of them related to "vfb"
>> which is the virtual framebuffer
>> >       >       >       >       protocol. You should also see an entry
>> called "state" set to "4" which
>> >       >       >       >       means "connected". state = 4 is usually
>> when everything works. Normally
>> >       >       >       >       when things don't work state != 4.
>> >       >       >       >
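[Editor's note: the state check described above can be scripted. The snippet below is a sketch only; the domain ID `1` and the sample `xenstore-ls` output are assumptions for illustration, not taken from this thread.]

```shell
# Sketch: extract the vfb frontend "state" value from xenstore-ls output.
# In Dom0 you would pipe the real command instead of the sample text, e.g.
#   xenstore-ls /local/domain/1/device | awk ...   (domain ID 1 assumed).
sample='vfb = ""
 1 = ""
  0 = ""
   frontend = "/local/domain/1/device/vfb/0"
   state = "4"
   hotplug-status = "connected"'

# Split fields on double quotes; the value is the second field.
state=$(printf '%s\n' "$sample" | awk -F'"' '/^[[:space:]]*state = /{print $2; exit}')
if [ "$state" = "4" ]; then
    echo "vfb connected (state=4)"
else
    echo "vfb not connected (state=$state)"
fi
```

A state of 4 matches the "connected" interpretation given in the message above; anything else points at a frontend/backend handshake problem.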
>> >       >       >       >
>> >       >       >       >
>> >       >       >       >       > Below are the logs:
>> >       >       >       >       > [  OK  ] Reached target Basic System.
>> >       >       >       >       > [  OK  ] Started Kernel Logging Service.
>> >       >       >       >       > [  OK  ] Started System Logging Service.
>> >       >       >       >       >          Starting D-Bus System Message
>> Bus...
>> >       >       >       >       >          Starting User Login
>> Management...
>> >       >       >       >       >          Starting Permit User
>> Sessions...
>> >       >       >       >       >          Starting The Xen xenstore...
>> >       >       >       >       >          Starting OpenSSH Key
>> Generation...
>> >       >       >       >       > [FAILED] Failed to start The Xen
>> xenstore.
>> >       >       >       >       > See 'systemctl status
>> xenstored.service' for details.
>> >       >       >       >       > [DEPEND] Dependency failed for qemu for
>> xen dom0 disk backend.
>> >       >       >       >       > [DEPEND] Dependency failed for Xend…p
>> guests on boot and shutdown.
>> >       >       >       >       > [DEPEND] Dependency failed for
>> xen-…des, JSON configuration stub).
>> >       >       >       >       > [DEPEND] Dependency failed for
>> Xenc…guest consoles and hypervisor.
>> >       >       >       >       > [  OK  ] Finished Permit User Sessions.
>> >       >       >       >       > [  OK  ] Started Getty on tty1.
>> >       >       >       >       > [  OK  ] Started Serial Getty on hvc0.
>> >       >       >       >       > [  OK  ] Started Serial Getty on ttyS0.
>> >       >       >       >       > [  OK  ] Reached target Login Prompts.
>> >       >       >       >       >          Starting Xen-watchdog - run
>> xen watchdog daemon...
>> >       >       >       >       > [  OK  ] Started D-Bus System Message
>> Bus.
>> >       >       >       >       > [  OK  ] Started Xen-watchdog - run xen
>> watchdog daemon.
>> >       >       >       >       > [  OK  ] Finished OpenSSH Key
>> Generation.
>> >       >       >       >       > [  OK  ] Started User Login Management.
>> >       >       >       >       > [  OK  ] Reached target Multi-User
>> System.
>> >       >       >       >       >          Starting Record Runlevel
>> Change in UTMP...
>> >       >       >       >       > [  OK  ] Finished Record Runlevel
>> Change in UTMP.
>> >       >       >       >       > fbcon: Taking over console
>> >       >       >       >       >
>> >       >       >       >       > Poky (Yocto Project Reference Distro)
>> 4.0.4 raspberrypi4-64 hvc0
>> >       >       >       >       >
>> >       >       >       >       > raspberrypi4-64 login: root
>> >       >       >       >       > root@raspberrypi4-64:~#
>> >       >       >       >       > root@raspberrypi4-64:~#
>> >       >       >       >       > root@raspberrypi4-64:~# systemctl
>> status xenstored.service
>> >       >       >       >       > x xenstored.service - The Xen xenstore
>> >       >       >       >       >      Loaded: loaded
>> (/lib/systemd/system/xenstored.service; enabled; vendor preset: enabled)
>> >       >       >       >       >      Active: failed (Result: exit-code)
>> since Thu 2022-12-01 06:12:05 UTC; 26s ago
>> >       >       >       >       >     Process: 195 ExecStartPre=/bin/grep
>> -q control_d /proc/xen/capabilities (code=exited,
>> >       status=1/FAILURE)
>> >       >       >       >       >
>> >       >       >       >       > Dec 01 06:12:04 raspberrypi4-64
>> systemd[1]: Starting The Xen xenstore...
>> >       >       >       >       > Dec 01 06:12:05 raspberrypi4-64
>> systemd[1]: xenstored.service: Control pro...URE
>> >       >       >       >       > Dec 01 06:12:05 raspberrypi4-64
>> systemd[1]: xenstored.service: Failed with...e'.
>> >       >       >       >       > Dec 01 06:12:05 raspberrypi4-64
>> systemd[1]: Failed to start The Xen xenstore.
>> >       >       >       >       > Hint: Some lines were ellipsized, use
>> -l to show in full.
>> >       >       >       >       > root@raspberrypi4-64:~#
>> >       >       >       >       >
>> >       >       >       >       > Any input on these?
>> >       >       >       >       >
>> >       >       >       >       > Thanks & Regards,
>> >       >       >       >       > Vipul Kumar
>> >       >       >       >       >
>> >       >       >       >       > On Wed, Nov 23, 2022 at 5:41 AM Stefano
>> Stabellini <sstabellini@kernel.org> wrote:
>> >       >       >       >       >       Hi Vipul,
>> >       >       >       >       >
>> >       >       >       >       >       I cannot spot any issue in the
>> configuration, in particular you have:
>> >       >       >       >       >
>> >       >       >       >       >       CONFIG_XEN_FBDEV_FRONTEND=y
>> >       >       >       >       >
>> >       >       >       >       >       which is what you need.
>> >       >       >       >       >
>> >       >       >       >       >       The only thing I can suggest is
>> to add printks to the Linux frontend
>> >       >       >       >       >       driver (the one running in the
>> domU) which is
>> >       >       >       >       >       drivers/video/fbdev/xen-fbfront.c
>> and printfs to the QEMU backend
>> >       >       >       >       >       (running in Dom0) which is
>> hw/display/xenfb.c to figure out what is
>> >       >       >       >       >       going on.
>> >       >       >       >       >
>> >       >       >       >       >
>> >       >       >       >       >       Alternatively, you can setup PV
>> network with the domU, such as:
>> >       >       >       >       >
>> >       >       >       >       >         vif=['']
>> >       >       >       >       >
>> >       >       >       >       >       and then run x11 and a x11vnc
>> server in your domU. You should be able to
>> >       >       >       >       >       connect to it using vncviewer at
>> the network IP of your domU.
>> >       >       >       >       >
>> >       >       >       >       >       Basically you are skipping the
>> problem because instead of using the PV
>> >       >       >       >       >       framebuffer protocol, you just
>> use VNC over the network with the domU.
>> >       >       >       >       >
>> >       >       >       >       >
>> >       >       >       >       >       Cheers,
>> >       >       >       >       >
>> >       >       >       >       >       Stefano
>> >       >       >       >       >
>> >       >       >       >       >
>> >       >       >       >       >       On Tue, 22 Nov 2022, Vipul Suneja
>> wrote:
>> >       >       >       >       >       > Hi Stefano,
>> >       >       >       >       >       > Thanks for the support!
>> >       >       >       >       >       >
>> >       >       >       >       >       > Looks like I have tried all the
>> combinations & possible ways to get display up but failed. Is
>> >       there
>> >       >       any
>> >       >       >       document or
>> >       >       >       >       pdf for
>> >       >       >       >       >       porting xen on
>> >       >       >       >       >       > raspberrypi4.
>> >       >       >       >       >       > I could find lots of links
>> telling the same but couldn't see any official user guide or
>> >       document
>> >       >       from the
>> >       >       >       xen
>> >       >       >       >       community on
>> >       >       >       >       >       the same. If
>> >       >       >       >       >       > there is something to refer
>> >       >       >       >       >       > to please share with me.
>> >       >       >       >       >       > I am attaching the kernel
>> configuration file also, just take a look if i have missed
>> >       anything.
>> >       >       >       >       >       > Any other suggestions or input
>> from your end could be really helpful?
>> >       >       >       >       >       >
>> >       >       >       >       >       > Regards,
>> >       >       >       >       >       > Vipul Kumar
>> >       >       >       >       >       >
>> >       >       >       >       >       > On Fri, Nov 11, 2022 at 6:40 AM
>> Stefano Stabellini <sstabellini@kernel.org> wrote:
>> >       >       >       >       >       >       Hi Vipul,
>> >       >       >       >       >       >
>> >       >       >       >       >       >       Sorry for the late reply.
>> From the earlier logs that you sent, it looks
>> >       >       >       >       >       like everything should be
>> working correctly. Specifically:
>> >       >       >       >       >       >
>> >       >       >       >       >       >            vfb = ""
>> >       >       >       >       >       >             1 = ""
>> >       >       >       >       >       >              0 = ""
>> >       >       >       >       >       >               frontend = "/local/domain/1/device/vfb/0"
>> >       >       >       >       >       >               frontend-id = "1"
>> >       >       >       >       >       >               online = "1"
>> >       >       >       >       >       >               state = "4"
>> >       >       >       >       >       >               vnc = "1"
>> >       >       >       >       >       >               vnclisten = "127.0.0.1"
>> >       >       >       >       >       >               vncdisplay = "0"
>> >       >       >       >       >       >               vncunused = "1"
>> >       >       >       >       >       >               sdl = "0"
>> >       >       >       >       >       >               opengl = "0"
>> >       >       >       >       >       >               feature-resize = "1"
>> >       >       >       >       >       >               hotplug-status = "connected"
>> >       >       >       >       >       >               request-update = "1"
>> >       >       >       >       >       >
>> >       >       >       >       >       >       state "4" means
>> "connected". So I would expect that you should be able
>> >       >       >       >       >       >       to connect to the vnc
>> server using vncviewer. You might not see anything
>> >       >       >       >       >       >       (black screen) but you
>> should definitely be able to connect.
>> >       >       >       >       >       >
>> >       >       >       >       >       >       I wouldn't try to launch
>> x11 in the guest just yet. fbcon in Linux is
>> >       >       >       >       >       >       enough to render
>> something on the screen. You should be able to see the
>> >       >       >       >       >       >       Linux text-based console
>> rendered graphically, connecting to it via vnc.
>> >       >       >       >       >       >
>> >       >       >       >       >       >       Sorry for the basic
>> question, but have you tried all the following?
>> >       >       >       >       >       >
>> >       >       >       >       >       >       vncviewer 127.0.0.1:0
>> >       >       >       >       >       >       vncviewer 127.0.0.1:1
>> >       >       >       >       >       >       vncviewer 127.0.0.1:2
>> >       >       >       >       >       >       vncviewer 127.0.0.1:5900
>> >       >       >       >       >       >       vncviewer 127.0.0.1:5901
>> >       >       >       >       >       >       vncviewer 127.0.0.1:5902
>> >       >       >       >       >       >
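[Editor's note: the two groups of addresses above are two spellings of the same thing; a VNC display `:N` conventionally listens on TCP port 5900+N, so `:0` and `:5900` address the same server. A quick sketch of the mapping (plain arithmetic, no Xen-specific assumptions):]

```shell
# VNC display :N maps to TCP port 5900+N, so vncviewer 127.0.0.1:0 and
# vncviewer 127.0.0.1:5900 target the same listener.
for display in 0 1 2; do
    port=$((5900 + display))
    echo "display :$display -> tcp port $port"
done
```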
>> >       >       >       >       >       >       Given that from the
>> xenstore-ls logs everything seems to work correctly
>> >       >       >       >       >       >       I am not sure what else
>> to suggest. You might have to add printf to QEMU
>> >       >       >       >       >       >       ui/vnc.c and
>> hw/display/xenfb.c to see what is going wrong.
>> >       >       >       >       >       >
>> >       >       >       >       >       >       Cheers,
>> >       >       >       >       >       >
>> >       >       >       >       >       >       Stefano
>> >       >       >       >       >       >
>> >       >       >       >       >       >
>> >       >       >       >       >       >       On Mon, 7 Nov 2022, Vipul
>> Suneja wrote:
>> >       >       >       >       >       >       > Hi Stefano,
>> >       >       >       >       >       >       > Thanks!
>> >       >       >       >       >       >       >
>> >       >       >       >       >       >       > Any input further on
>> "xenstore-ls" logs?
>> >       >       >       >       >       >       >
>> >       >       >       >       >       >       > I am trying to run the
>> x0vncserver & x11vnc server manually on guest
>> >       >       machine(xen_guest_image_minimal)
>> >       >       >       image
>> >       >       >       >       but it's
>> >       >       >       >       >       failing
>> >       >       >       >       >       >       with the below
>> >       >       >       >       >       >       > error.
>> >       >       >       >       >       >       >
>> >       >       >       >       >       > root@raspberrypi4-64:/usr/bin#
>> x0vncserver
>> >       >       >       >       >       >       > x0vncserver: unable to
>> open display ""
>> >       >       >       >       >       > root@raspberrypi4-64:/usr/bin#
>> >       >       >       >       >       > root@raspberrypi4-64:/usr/bin#
>> x11vnc
>> >       >       >       >       >       >       >
>> ###############################################################
>> >       >       >       >       >       >       >
>> #@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  **  WARNING  **
>>  WARNING  **  WARNING  **  WARNING  **   @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@        YOU ARE
>> RUNNING X11VNC WITHOUT A PASSWORD!!        @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  This means anyone
>> with network access to this computer   @#
>> >       >       >       >       >       > #@  may be able to view
>> and control your desktop.            @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@ >>> If you did not
>> mean to do this Press CTRL-C now!! <<< @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       >
>> #@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  You can create an
>> x11vnc password file by running:       @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@       x11vnc
>> -storepasswd password /path/to/passfile      @#
>> >       >       >       >       >       >       > #@  or   x11vnc
>> -storepasswd /path/to/passfile               @#
>> >       >       >       >       >       >       > #@  or   x11vnc
>> -storepasswd                                 @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  (the last one will
>> use ~/.vnc/passwd)                    @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  and then starting
>> x11vnc via:                            @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       > #@      x11vnc -rfbauth
>> /path/to/passfile                    @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  an existing
>> ~/.vnc/passwd file from another VNC          @#
>> >       >       >       >       >       >       > #@  application will
>> work fine too.                          @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  You can also use
>> the -passwdfile or -passwd options.     @#
>> >       >       >       >       >       >       > #@  (note -passwd is
>> unsafe if local users are not trusted)  @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  Make sure any
>> -rfbauth and -passwdfile password files    @#
>> >       >       >       >       >       >       > #@  cannot be read by
>> untrusted users.                       @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  Use x11vnc -usepw
>> to automatically use your              @#
>> >       >       >       >       >       >       > #@  ~/.vnc/passwd or
>> ~/.vnc/passwdfile password files.       @#
>> >       >       >       >       >       >       > #@  (and prompt you to
>> create ~/.vnc/passwd if neither       @#
>> >       >       >       >       >       >       > #@  file exists.)
>>  Under -usepw, x11vnc will exit if it      @#
>> >       >       >       >       >       >       > #@  cannot find a
>> password to use.                           @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  Even with a
>> password, the subsequent VNC traffic is      @#
>> >       >       >       >       >       >       > #@  sent in the clear.
>> Consider tunnelling via ssh(1):      @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@
>> http://www.karlrunge.com/x11vnc/#tunnelling            @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       > #@  Or using the x11vnc
>> SSL options: -ssl and -stunnel       @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  Please Read the
>> documentation for more info about        @#
>> >       >       >       >       >       >       > #@  passwords,
>> security, and encryption.                     @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@
>> http://www.karlrunge.com/x11vnc/faq.html#faq-passwd    @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       > #@  To disable this
>> warning use the -nopw option, or put     @#
>> >       >       >       >       >       > #@  'nopw' on a line in
>> your ~/.x11vncrc file.               @#
>> >       >       >       >       >       >       > #@
>>                                       @#
>> >       >       >       >       >       >       >
>> #@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#
>> >       >       >       >       >       >       >
>> ###############################################################
>> >       >       >       >       >       >       > 09/03/2018 12:58:41
>> x11vnc version: 0.9.16 lastmod: 2019-01-05  pid: 424
>> >       >       >       >       >       >       > 09/03/2018 12:58:41
>> XOpenDisplay("") failed.
>> >       >       >       >       >       >       > 09/03/2018 12:58:41
>> Trying again with XAUTHLOCALHOSTNAME=localhost ...
>> >       >       >       >       >       >       > 09/03/2018 12:58:41
>> >       >       >       >       >       >       > 09/03/2018 12:58:41 **=
*
>> XOpenDisplay failed. No -display or DISPLAY.
>> >       >       >       >       >       >       > 09/03/2018 12:58:41 **=
*
>> Trying ":0" in 4 seconds.  Press Ctrl-C to abort.
>> >       >       >       >       >       >       > 09/03/2018 12:58:41 **=
*
>> 1 2 3 4
>> >       >       >       >       >       >       > 09/03/2018 12:58:45
>> XOpenDisplay(":0") failed.
>> >       >       >       >       >       >       > 09/03/2018 12:58:45
>> Trying again with XAUTHLOCALHOSTNAME=localhost ...
>> >       >       >       >       >       >       > 09/03/2018 12:58:45
>> XOpenDisplay(":0") failed.
>> >       >       >       >       >       >       > 09/03/2018 12:58:45
>> Trying again with unset XAUTHLOCALHOSTNAME ...
>> >       >       >       >       >       >       > 09/03/2018 12:58:45
>> >       >       >       >       >       >       >
>> >       >       >       >       >       >       > 09/03/2018 12:58:45
>> ***************************************
>> >       >       >       >       >       >       > 09/03/2018 12:58:45 **=
*
>> XOpenDisplay failed (:0)
*** x11vnc was unable to open the X DISPLAY: ":0", it cannot continue.
*** There may be "Xlib:" error messages above with details about the failure.

Some tips and guidelines:

** An X server (the one you wish to view) must be running before x11vnc is
   started: x11vnc does not start the X server.  (however, see the -create
   option if that is what you really want).

** You must use -display <disp>, -OR- set and export your $DISPLAY
   environment variable to refer to the display of the desired X server.
 - Usually the display is simply ":0" (in fact x11vnc uses this if you forget
   to specify it), but in some multi-user situations it could be ":1", ":2",
   or even ":137".  Ask your administrator or a guru if you are having
   difficulty determining what your X DISPLAY is.

** Next, you need to have sufficient permissions (Xauthority)
   to connect to the X DISPLAY.   Here are some Tips:

 - Often, you just need to run x11vnc as the user logged into the X session.
   So make sure to be that user when you type x11vnc.
 - Being root is usually not enough because the incorrect MIT-MAGIC-COOKIE
   file may be accessed.  The cookie file contains the secret key that
   allows x11vnc to connect to the desired X DISPLAY.
 - You can explicitly indicate which MIT-MAGIC-COOKIE file should be used
   by the -auth option, e.g.:
       x11vnc -auth /home/someuser/.Xauthority -display :0
       x11vnc -auth /tmp/.gdmzndVlR -display :0
   you must have read permission for the auth file.
   See also '-auth guess' and '-findauth' discussed below.

** If NO ONE is logged into an X session yet, but there is a greeter login
   program like "gdm", "kdm", "xdm", or "dtlogin" running, you will need
   to find and use the raw display manager MIT-MAGIC-COOKIE file.
   Some examples for various display managers:

     gdm:     -auth /var/gdm/:0.Xauth
              -auth /var/lib/gdm/:0.Xauth
     kdm:     -auth /var/lib/kdm/A:0-crWk72
              -auth /var/run/xauth/A:0-crWk72
     xdm:     -auth /var/lib/xdm/authdir/authfiles/A:0-XQvaJk
     dtlogin: -auth /var/dt/A:0-UgaaXa

   Sometimes the command "ps wwwwaux | grep auth" can reveal the file location.

   Starting with x11vnc 0.9.9 you can have it try to guess by using:

              -auth guess

   (see also the x11vnc -findauth option.)

   Only root will have read permission for the file, and so x11vnc must be run
   as root (or copy it).  The random characters in the filenames will of course
   change and the directory the cookie file resides in is system dependent.

See also: http://www.karlrunge.com/x11vnc/faq.html

Regards,
Vipul Kumar
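[Editor's note] The "ps wwwwaux | grep auth" tip above can be scripted. A minimal sketch, assuming the display manager's X server was started with an explicit `-auth <file>` argument; the helper name and the sample `ps` line are hypothetical, not taken from this thread:

```shell
# Hypothetical helper: pull the -auth file path out of one line of
# "ps wwwwaux" output. Assumes a single "-auth <file>" argument is present.
extract_auth_file() {
    printf '%s\n' "$1" | sed -n 's/.*-auth \([^ ]*\).*/\1/p'
}

# Example input: a typical display-manager-launched X server command line
# (path and PID made up for illustration).
line='root  900  /usr/lib/xorg/Xorg :0 -auth /var/run/gdm3/:0.Xauth -nolisten tcp'
extract_auth_file "$line"    # prints /var/run/gdm3/:0.Xauth

# x11vnc could then be started (as root, since only root can read the file):
#   x11vnc -auth "$(extract_auth_file "$line")" -display :0
```

On x11vnc 0.9.9 and later, "-auth guess" performs a similar search internally, so a helper like this is mainly useful on older versions or for debugging.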
On Thu, Nov 3, 2022 at 10:27 PM Vipul Suneja <vsuneja63@gmail.com> wrote:

Hi Stefano,

Thanks! I used xen-guest-image-minimal (a simple console-based image) as the
guest, with fbcon and fbdev enabled in the kernel configuration, but I still
get the same "can't open the display" error. Below is the output of
"xenstore-ls":
root@raspberrypi4-64:~/guest1# xenstore-ls
tool = ""
 xenstored = ""
local = ""
 domain = ""
  0 = ""
   control = ""
    feature-poweroff = "1"
    feature-reboot = "1"
   domid = "0"
   name = "Domain-0"
   device-model = ""
    0 = ""
     backends = ""
      console = ""
      vkbd = ""
      vfb = ""
      qnic = ""
     state = "running"
    1 = ""
     backends = ""
      console = ""
      vkbd = ""
      vfb = ""
      qnic = ""
     state = "running"
   backend = ""
    vbd = ""
     1 = ""
      51712 = ""
       frontend = "/local/domain/1/device/vbd/51712"
       params = "/home/root/guest2/xen-guest-image-minimal-raspberrypi4-64.ext3"
       script = "/etc/xen/scripts/block"
       frontend-id = "1"
       online = "1"
       removable = "0"
       bootable = "1"
       state = "4"
       dev = "xvda"
       type = "phy"
       mode = "w"
       device-type = "disk"
       discard-enable = "1"
       feature-max-indirect-segments = "256"
       multi-queue-max-queues = "4"
       max-ring-page-order = "4"
       node = "/dev/loop0"
       physical-device = "7:0"
       physical-device-path = "/dev/loop0"
       hotplug-status = "connected"
       feature-flush-cache = "1"
       discard-granularity = "4096"
       discard-alignment = "0"
       discard-secure = "0"
       feature-discard = "1"
       feature-barrier = "1"
       feature-persistent = "1"
       sectors = "1794048"
       info = "0"
       sector-size = "512"
       physical-sector-size = "512"
    vfb = ""
     1 = ""
      0 = ""
       frontend = "/local/domain/1/device/vfb/0"
       frontend-id = "1"
       online = "1"
       state = "4"
       vnc = "1"
       vnclisten = "127.0.0.1"
       vncdisplay = "0"
       vncunused = "1"
       sdl = "0"
       opengl = "0"
       feature-resize = "1"
       hotplug-status = "connected"
       request-update = "1"
    vkbd = ""
     1 = ""
      0 = ""
       frontend = "/local/domain/1/device/vkbd/0"
       frontend-id = "1"
       online = "1"
       state = "4"
       feature-abs-pointer = "1"
       feature-raw-pointer = "1"
       hotplug-status = "connected"
    console = ""
     1 = ""
      0 = ""
       frontend = "/local/domain/1/console"
       frontend-id = "1"
       online = "1"
       state = "1"
       protocol = "vt100"
    vif = ""
     1 = ""
      0 = ""
       frontend = "/local/domain/1/device/vif/0"
       frontend-id = "1"
       online = "1"
       state = "4"
       script = "/etc/xen/scripts/vif-bridge"
       mac = "e4:5f:01:cd:7b:dd"
       bridge = "xenbr0"
       handle = "0"
       type = "vif"
       hotplug-status = "connected"
       feature-sg = "1"
       feature-gso-tcpv4 = "1"
       feature-gso-tcpv6 = "1"
       feature-ipv6-csum-offload = "1"
       feature-rx-copy = "1"
       feature-xdp-headroom = "1"
       feature-rx-flip = "0"
       feature-multicast-control = "1"
       feature-dynamic-multicast-control = "1"
       feature-split-event-channels = "1"
       multi-queue-max-queues = "4"
       feature-ctrl-ring = "1"
  1 = ""
   vm = "/vm/d81ec5a9-5bf9-4f2b-89e8-0f60d6da948f"
   name = "guest2"
   cpu = ""
    0 = ""
     availability = "online"
    1 = ""
     availability = "online"
   memory = ""
    static-max = "2097152"
    target = "2097152"
    videoram = "0"
   device = ""
    suspend = ""
     event-channel = ""
    vbd = ""
     51712 = ""
      backend = "/local/domain/0/backend/vbd/1/51712"
      backend-id = "0"
      state = "4"
      virtual-device = "51712"
      device-type = "disk"
      multi-queue-num-queues = "2"
      queue-0 = ""
       ring-ref = "8"
       event-channel = "4"
      queue-1 = ""
       ring-ref = "9"
       event-channel = "5"
      protocol = "arm-abi"
      feature-persistent = "1"
    vfb = ""
     0 = ""
      backend = "/local/domain/0/backend/vfb/1/0"
      backend-id = "0"
      state = "4"
      page-ref = "275022"
      event-channel = "3"
      protocol = "arm-abi"
      feature-update = "1"
    vkbd = ""
     0 = ""
      backend = "/local/domain/0/backend/vkbd/1/0"
      backend-id = "0"
      state = "4"
      request-abs-pointer = "1"
      page-ref = "275322"
      page-gref = "1284"
      event-channel = "10"
    vif = ""
     0 = ""
      backend = "/local/domain/0/backend/vif/1/0"
      backend-id = "0"
      state = "4"
      handle = "0"
      mac = "e4:5f:01:cd:7b:dd"
      mtu = "1500"
      xdp-headroom = "0"
      multi-queue-num-queues = "2"
      queue-0 = ""
       tx-ring-ref = "1280"
       rx-ring-ref = "1281"
       event-channel-tx = "6"
       event-channel-rx = "7"
      queue-1 = ""
       tx-ring-ref = "1282"
       rx-ring-ref = "1283"
       event-channel-tx = "8"
       event-channel-rx = "9"
      request-rx-copy = "1"
      feature-rx-notify = "1"
      feature-sg = "1"
      feature-gso-tcpv4 = "1"
      feature-gso-tcpv6 = "1"
      feature-ipv6-csum-offload = "1"
   control = ""
    shutdown = ""
    feature-poweroff = "1"
    feature-reboot = "1"
    feature-suspend = ""
    sysrq = ""
    platform-feature-multiprocessor-suspend = "1"
    platform-feature-xs_reset_watches = "1"
   data = ""
   drivers = ""
   feature = ""
   attr = ""
   error = ""
   domid = "1"
   store = ""
    port = "1"
    ring-ref = "233473"
   console = ""
    backend = "/local/domain/0/backend/console/1/0"
    backend-id = "0"
    limit = "1048576"
    type = "xenconsoled"
    output = "pty"
    tty = "/dev/pts/1"
    port = "2"
    ring-ref = "233472"
    vnc-listen = "127.0.0.1"
    vnc-port = "5900"
   image = ""
    device-model-pid = "788"
vm = ""
 d81ec5a9-5bf9-4f2b-89e8-0f60d6da948f = ""
  name = "guest2"
  uuid = "d81ec5a9-5bf9-4f2b-89e8-0f60d6da948f"
  start_time = "1520600274.27"
libxl = ""
 1 = ""
  device = ""
   vbd = ""
    51712 = ""
     frontend = "/local/domain/1/device/vbd/51712"
     backend = "/local/domain/0/backend/vbd/1/51712"
     params = "/home/root/guest2/xen-guest-image-minimal-raspberrypi4-64.ext3"
     script = "/etc/xen/scripts/block"
     frontend-id = "1"
     online = "1"
     removable = "0"
     bootable = "1"
     state = "1"
     dev = "xvda"
     type = "phy"
     mode = "w"
     device-type = "disk"
     discard-enable = "1"
   vfb = ""
    0 = ""
     frontend = "/local/domain/1/device/vfb/0"
     backend = "/local/domain/0/backend/vfb/1/0"
     frontend-id = "1"
     online = "1"
     state = "1"
     vnc = "1"
     vnclisten = "127.0.0.1"
     vncdisplay = "0"
     vncunused = "1"
     sdl = "0"
     opengl = "0"
   vkbd = ""
    0 = ""
     frontend = "/local/domain/1/device/vkbd/0"
     backend = "/local/domain/0/backend/vkbd/1/0"
     frontend-id = "1"
     online = "1"
     state = "1"
   console = ""
    0 = ""
     frontend = "/local/domain/1/console"
     backend = "/local/domain/0/backend/console/1/0"
     frontend-id = "1"
     online = "1"
     state = "1"
     protocol = "vt100"
   vif = ""
    0 = ""
     frontend = "/local/domain/1/device/vif/0"
     backend = "/local/domain/0/backend/vif/1/0"
     frontend-id = "1"
     online = "1"
     state = "1"
     script = "/etc/xen/scripts/vif-bridge"
     mac = "e4:5f:01:cd:7b:dd"
     bridge = "xenbr0"
     handle = "0"
     type = "vif"
     hotplug-status = ""
  type = "pvh"
  dm-version = "qemu_xen"
root@raspberrypi4-64:~/guest1#
Any input on the above? Looking forward to hearing from you.

Regards,
Vipul Kumar
> On Wed, Oct 26, 2022 at 5:21 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> > Hi Vipul,
> >
> > If you look at the QEMU logs, it says:
> >
> >     VNC server running on 127.0.0.1:5900
> >
> > That is the VNC server you need to connect to. So in theory:
> >
> >     vncviewer 127.0.0.1:5900
> >
> > should work correctly.
> >
> > If you have:
> >
> >     vfb = ["type=vnc"]
> >
> > in your xl config file and you have "fbdev" in your Linux guest, it
> > should work.
> >
> > If you connect to the VNC server but you get a black screen, it might be
> > a guest configuration issue. I would try with a simpler guest, text only
> > (no X11, no Wayland) and enable the fbdev console (fbcon). See
> > Documentation/fb/fbcon.rst in Linux. You should be able to see a
> > graphical console over VNC.
> >
> > If that works, then you know that the fbdev kernel driver (xen-fbfront)
> > works correctly.
> >
> > If it doesn't work, the output of "xenstore-ls" would be interesting.
> >
> > Cheers,
> >
> > Stefano
> >
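[Editorial aside: the setup Stefano describes (a virtual framebuffer served over VNC) corresponds to a guest config roughly like the sketch below. The name, kernel/disk paths, and listen address are illustrative placeholders, not values from this thread, and the `vnc=1` spelling follows the xl.cfg vfb options; verify against your xl version.]

```
# Illustrative xl guest config with a VNC-backed virtual framebuffer.
# All paths and names below are placeholders.
name   = "guest1"
type   = "pvh"
kernel = "/path/to/guest-kernel"
memory = 512
vcpus  = 2
disk   = [ "file:/path/to/guest.img,xvda,w" ]
vif    = [ "bridge=xenbr0" ]
# The framebuffer QEMU exposes over VNC; vncdisplay=0 means TCP port 5900.
# Use vnclisten=0.0.0.0 to accept viewers from other hosts instead of
# only from dom0 itself.
vfb    = [ "vnc=1,vnclisten=127.0.0.1,vncdisplay=0" ]
```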
> > On Wed, 19 Oct 2022, Vipul Suneja wrote:
> > > Hi Stefano,
> > >
> > > Thanks for the response!
> > >
> > > I am following the same link you shared from the beginning. I tried the
> > > command "vncviewer localhost:0" in Dom0 but hit the same "Can't open
> > > display" issue. Below are the logs:
> > >
> > > root@raspberrypi4-64:~# vncviewer localhost:0
> > >
> > > TigerVNC Viewer 64-bit v1.11.0
> > > Built on: 2020-09-08 12:16
> > > Copyright (C) 1999-2020 TigerVNC Team and many others (see README.rst)
> > > See https://www.tigervnc.org for information on TigerVNC.
> > > Can't open display:
> > >
> > > Below are the netstat logs; I couldn't see anything running at port
> > > 5900 or 5901:
> > >
> > > root@raspberrypi4-64:~# netstat -tuwx
> > > Active Internet connections (w/o servers)
> > > Proto Recv-Q Send-Q Local Address           Foreign Address         State
> > > tcp        0    164 192.168.1.39:ssh        192.168.1.38:37472      ESTABLISHED
> > > Active UNIX domain sockets (w/o servers)
> > > Proto RefCnt Flags       Type       State         I-Node Path
> > > unix  8      [ ]         DGRAM      CONNECTED      10565 /dev/log
> > > unix  3      [ ]         STREAM     CONNECTED      10891 /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED      13791
> > > unix  3      [ ]         STREAM     CONNECTED      10843 /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED      10573 /var/run/xenstored/socket
> > > unix  2      [ ]         DGRAM      CONNECTED      14510
> > > unix  3      [ ]         STREAM     CONNECTED      13249
> > > unix  2      [ ]         DGRAM      CONNECTED      13887
> > > unix  2      [ ]         DGRAM      CONNECTED      10599
> > > unix  3      [ ]         STREAM     CONNECTED      14005
> > > unix  3      [ ]         STREAM     CONNECTED      13258
> > > unix  3      [ ]         STREAM     CONNECTED      13248
> > > unix  3      [ ]         STREAM     CONNECTED      14003
> > > unix  3      [ ]         STREAM     CONNECTED      10572 /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED      10786 /var/run/xenstored/socket
> > > unix  3      [ ]         DGRAM      CONNECTED      13186
> > > unix  3      [ ]         STREAM     CONNECTED      10864 /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED      10812 /var/run/xenstored/socket
> > > unix  2      [ ]         DGRAM      CONNECTED      14083
> > > unix  3      [ ]         STREAM     CONNECTED      10813 /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED      14068
> > > unix  3      [ ]         STREAM     CONNECTED      13256
> > > unix  3      [ ]         STREAM     CONNECTED      10571 /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED      10842
> > > unix  3      [ ]         STREAM     CONNECTED      13985
> > > unix  3      [ ]         DGRAM      CONNECTED      13185
> > > unix  2      [ ]         STREAM     CONNECTED      13884
> > > unix  2      [ ]         DGRAM      CONNECTED      14528
> > > unix  2      [ ]         DGRAM      CONNECTED      13785
> > > unix  3      [ ]         STREAM     CONNECTED      14034
> > >
> > > Attaching the Xen log files from /var/log/xen.
> > > I didn't get the role of QEMU here because, as mentioned earlier, I am
> > > porting on a Raspberry Pi 4B.
> > >
> > > Regards,
> > > Vipul Kumar
> > >
> > > On Wed, Oct 19, 2022 at 12:43 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > It usually works the way it is described in the guide:
> > > >
> > > > https://www.virtuatopia.com/index.php?title=Configuring_a_VNC_based_Graphical_Console_for_a_Xen_Paravirtualized_domainU_Guest
> > > >
> > > > You don't need to install any VNC-related server software because it
> > > > is already provided by Xen (to be precise, it is provided by QEMU
> > > > working together with Xen.)
> > > >
> > > > You only need the VNC client in dom0 so that you can connect, but you
> > > > could also run the VNC client from another host. So basically the
> > > > following should work when executed in Dom0 after creating DomU:
> > > >
> > > >     vncviewer localhost:0
> > > >
> > > > Can you attach the Xen and QEMU logs (/var/log/xen/*)? And also use
> > > > "netstat -taunp" to check if there is anything running at port 5900
> > > > or 5901?
> > > >
> > > > Cheers,
> > > >
> > > > Stefano
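[Editorial aside: the netstat check above boils down to "is anything accepting TCP connections on 5900/5901?". A minimal sketch of the same check, where the 127.0.0.1:5900 values come from QEMU's own log message in this thread and are assumptions about your setup:]

```python
# Check whether QEMU's VNC server is reachable. VNC display :N
# conventionally listens on TCP port 5900 + N, so display :0 is 5900.
import socket


def port_is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno (e.g. ECONNREFUSED)
        # otherwise, without raising.
        return s.connect_ex((host, port)) == 0


if __name__ == "__main__":
    # QEMU's log reported "VNC server running on 127.0.0.1:5900"
    for p in (5900, 5901):
        print(p, port_is_listening("127.0.0.1", p))
```

If this prints False for both ports, the problem is on the QEMU/backend side (no VNC server was started), not in the viewer.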
> > > >
> > > > On Tue, 18 Oct 2022, Vipul Suneja wrote:
> > > > > Hi Stefano,
> > > > >
> > > > > Thanks for the response!
> > > > >
> > > > > I could install TigerVNC, x11vnc & libvncserver in the Dom0
> > > > > xen-image-minimal, but only managed to install libvncserver
> > > > > (couldn't install TigerVNC & x11vnc because X11 support is missing,
> > > > > it's Wayland) in the DomU custom graphical image. I tried running
> > > > > vncviewer with the IP address & port in dom0 to access the DomU
> > > > > graphical display, as per the below command:
> > > > >
> > > > >     vncviewer 192.168.1.42:5901
> > > > >
> > > > > But it is showing "can't open display"; below are the logs:
> > > > >
> > > > > root@raspberrypi4-64:~/guest1# vncviewer 192.168.1.42:5901
> > > > >
> > > > > TigerVNC Viewer 64-bit v1.11.0
> > > > > Built on: 2020-09-08 12:16
> > > > > Copyright (C) 1999-2020 TigerVNC Team and many others (see README.rst)
> > > > > See https://www.tigervnc.org for information on TigerVNC.
> > > > > Can't open display:
> > > > > root@raspberrypi4-64:~/guest1#
> > > > >
> > > > > I am not exactly sure what the issue is, but I thought libvncserver
> > > > > alone in the DomU could work to get access; it did not.
> > > > > If TigerVNC is the issue here, then is there any other VNC source
> > > > > which could be installed for both X11- and Wayland-supported images?
> > > > >
> > > > > Regards,
> > > > > Vipul Kumar
> > > > >
> > > > > On Tue, Oct 18, 2022 at 2:40 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > VNC is typically easier to set up, because SDL needs extra
> > > > > > libraries at build time and runtime. If QEMU is built without SDL
> > > > > > support, it won't start when you ask for SDL.
> > > > > >
> > > > > > VNC should work with both x11 and wayland in your domU. It doesn't
> > > > > > work at the x11 level; it exposes a special fbdev device in your
> > > > > > domU that should work with:
> > > > > > - a graphical console in Linux domU
> > > > > > - x11
> > > > > > - wayland (but I haven't tested this so I am not 100% sure about it)
> > > > > >
> > > > > > When you say "it doesn't work", what do you mean? Do you get a
> > > > > > black window?
> > > > > >
> > > > > > You need CONFIG_XEN_FBDEV_FRONTEND in Linux domU
> > > > > > (drivers/video/fbdev/xen-fbfront.c). I would try to get a
> > > > > > graphical text console up and running in your domU before
> > > > > > attempting x11/wayland.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Stefano
> > > > > >
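[Editorial aside: the domU kernel options Stefano mentions amount to a .config fragment along these lines. The symbol names are from mainline Linux Kconfig; whether your Yocto kernel already enables them is an assumption to verify against your generated .config.]

```
# DomU kernel .config fragment for a Xen virtual framebuffer console.
CONFIG_XEN_FBDEV_FRONTEND=y        # drivers/video/fbdev/xen-fbfront.c
CONFIG_FB=y                        # framebuffer device core
CONFIG_FRAMEBUFFER_CONSOLE=y       # fbcon, see Documentation/fb/fbcon.rst
# Keyboard/pointer frontend paired with the virtual framebuffer:
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y # drivers/input/misc/xen-kbdfront.c
```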
> > > > > > On Mon, 17 Oct 2022, Vipul Suneja wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > Thanks!
> > > > > > >
> > > > > > > I have ported the Xen minimal image as Dom0 & a custom Wayland
> > > > > > > GUI-based image as DomU on a Raspberry Pi 4B. I am trying to
> > > > > > > bring the GUI display up for the guest machine. I tried using
> > > > > > > SDL and included the below line in the guest.conf file:
> > > > > > >
> > > > > > >     vfb = [ 'sdl=1' ]
> > > > > > >
> > > > > > > But it is throwing the below error:
> > > > > > >
> > > > > > > root@raspberrypi4-64:~/guest1# xl create -c guest1.cfg
> > > > > > > Parsing config from guest1.cfg
> > > > > > > libxl: error: libxl_qmp.c:1400:qmp_ev_fd_callback: Domain 3:error on QMP socket: Connection reset by peer
> > > > > > > libxl: error: libxl_qmp.c:1439:qmp_ev_fd_callback: Domain 3:Error happened with the QMP connection to QEMU
> > > > > > > libxl: error: libxl_dm.c:3351:device_model_postconfig_done: Domain 3:Post DM startup configs failed, rc=-26
> > > > > > > libxl: error: libxl_create.c:1867:domcreate_devmodel_started: Domain 3:device model did not start: -26
> > > > > > > libxl: error: libxl_aoutils.c:646:libxl__kill_xs_path: Device Model already exited
> > > > > > > libxl: error: libxl_domain.c:1183:libxl__destroy_domid: Domain 3:Non-existant domain
> > > > > > > libxl: error: libxl_domain.c:1137:domain_destroy_callback: Domain 3:Unable to destroy guest
> > > > > > > libxl: error: libxl_domain.c:1064:domain_destroy_cb: Domain 3:Destruction of domain failed
> > > > > > >
> > > > > > > Another way is VNC; I could install TigerVNC in Dom0, but again
> > > > > > > I couldn't in the guest machine because it doesn't support X11
> > > > > > > (it supports Wayland only). I am completely blocked here and
> > > > > > > need your support to bring the display up. Is there any
> > > > > > > alternative to VNC which could work in both X11- and
> > > > > > > Wayland-supported images?
> > > > > > >
> > > > > > > Any input on VNC, SDL or any other way to proceed on this?
> > > > > > > Looking forward to hearing from you.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Vipul Kumar

--00000000000067bdb605f1e1bbce
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: base64

PGRpdiBkaXI9Imx0ciI+SGkgU3RlZmFubyw8ZGl2Pjxicj48L2Rpdj48ZGl2PlRoYW5rcyE8L2Rp
dj48ZGl2Pjxicj48L2Rpdj48ZGl2PkFueSBpbnB1dCBmdXJ0aGVyIGFzIHBlciB0aGUgbG9ncyBh
dHRhY2hlZD88L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2PlJlZ2FyZHMsPC9kaXY+PGRpdj5WaXB1
bCBLdW1hcjwvZGl2PjwvZGl2Pjxicj48ZGl2IGNsYXNzPSJnbWFpbF9xdW90ZSI+PGRpdiBkaXI9
Imx0ciIgY2xhc3M9ImdtYWlsX2F0dHIiPk9uIE1vbiwgRGVjIDI2LCAyMDIyIGF0IDExOjMwIFBN
IFZpcHVsIFN1bmVqYSAmbHQ7PGEgaHJlZj0ibWFpbHRvOnZzdW5lamE2M0BnbWFpbC5jb20iPnZz
dW5lamE2M0BnbWFpbC5jb208L2E+Jmd0OyB3cm90ZTo8YnI+PC9kaXY+PGJsb2NrcXVvdGUgY2xh
c3M9ImdtYWlsX3F1b3RlIiBzdHlsZT0ibWFyZ2luOjBweCAwcHggMHB4IDAuOGV4O2JvcmRlci1s
ZWZ0OjFweCBzb2xpZCByZ2IoMjA0LDIwNCwyMDQpO3BhZGRpbmctbGVmdDoxZXgiPjxkaXYgZGly
PSJsdHIiPkhpIFN0ZWZhbm8sPGJyPjxicj5UaGFua3MhPGJyPjxicj5BcyB5b3UgaGF2ZSBtZW50
aW9uIGZ1bmN0aW9uIGNhbGwgcWVtdV9jcmVhdGVfZGlzcGxheXN1cmZhY2UsIHFlbXVfY3JlYXRl
X2Rpc3BsYXlzdXJmYWNlX2Zyb20sIGRweV9nZnhfcmVwbGFjZV9zdXJmYWNlLCBkcHlfZ2Z4X3Vw
ZGF0ZSBhbmQgZHB5X2dmeF9jaGVja19mb3JtYXQsIGkgZm91bmQgdGhhdCA8YnI+dGhlc2UgZnVu
Y3Rpb25zIGFyZSBub3QgcGFydCBvZiAvdWkvdm5jLmMgc291cmNlIGJ1dCB0aGV5IGFyZSBkZWZp
bmVkIGluIC91aS9jb25zb2xlLmMgc291cmNlLiBFdmVuIG5vbmUgb2YgdGhlc2UgZnVuY3Rpb25z
IGhhdmUgYmVlbiBjYWxsZWQgZnJvbSB0aGUgdm5jLmMgc291cmNlLiBJIGhhdmUgaW5jbHVkZWQg
ZGVidWcgbG9ncyBmb3IgPGJyPmFsbCBvZiB0aGVzZSBmdW5jdGlvbnMgaW4gY29uc29sZS5jIGJ1
dCBjb3VsZCBzZWUgaW4gdGhlIGxvZ3MgdGhhdCBvbmx5IHFlbXVfY3JlYXRlX2Rpc3BsYXlzdXJm
YWNlICZhbXA7wqANCg0KZHB5X2dmeF9yZXBsYWNlX3N1cmZhY2XCoGZ1bmN0aW9ucyBhcmUgaW52
b2tlZC4gRXZlbiBpIHRyaWVkIHZuY3ZpZXdlcjxicj5vbiB0aGUgaG9zdCBtYWNoaW5lIGJ1dCBv
dGhlciBmdW5jdGlvbnMgYXJlIG5vdCBpbnZva2VkLiBBdHRhY2hpbmcgdGhlIGxvZyBmaWxlLCBh
bnkgb3RoZXIgc3VnZ2VzdGlvbiBhcyBwZXIgbG9nIGZpbGUgb3IgYW55IGlucHV0IGZvciBkZWJ1
Z2dpbmcgdm5jIHNvdXJjZSBmaWxlLjxkaXY+wqA8L2Rpdj48ZGl2PjxiPjxpPllvdSBjYW4gYWxz
byB0cnkgdG8gdXNlIGFub3RoZXIgUUVNVSBVSSBsaWtlIFNETCB0byBzZWUgaWYgdGhlIHByb2Js
ZW0gaXMgc3BlY2lmaWMgdG8gVk5DIG9ubHkuPC9pPjwvYj48L2Rpdj48ZGl2PkkgYWxyZWFkeSB0
cmllZCB3aXRoIFNETCwgYnkgYWRkaW5nICZxdW90O3ZmYj1bICYjMzk7dHlwZT1zZGwmIzM5OyBd
JnF1b3Q7IGluIHRoZSBndWVzdCBjb25maWd1cmF0aW9uIGZpbGUgYnV0IGl0IGZhaWxlZCAmYW1w
OyBkaWRuJiMzOTt0IHN0YXJ0IHRoZSBndWVzdMKgbWFjaGluZS4gQ29ycmVjdCBtZSBpZiBJIGFt
IHdyb25nIHdpdGggY29uZmlndXJhdGlvbiBvciBzdGVwcyB0byB1c2UgU0RMPzwvZGl2PjxkaXY+
PGJyPlRoYW5rcyAmYW1wOyBSZWdhcmRzLDxicj5WaXB1bCBLdW1hcjxicj48L2Rpdj48L2Rpdj48
YnI+PGRpdiBjbGFzcz0iZ21haWxfcXVvdGUiPjxkaXYgZGlyPSJsdHIiIGNsYXNzPSJnbWFpbF9h
dHRyIj5PbiBGcmksIERlYyAyMywgMjAyMiBhdCA1OjEzIEFNIFN0ZWZhbm8gU3RhYmVsbGluaSAm
bHQ7PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5r
Ij5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgd3JvdGU6PGJyPjwvZGl2PjxibG9ja3F1
b3RlIGNsYXNzPSJnbWFpbF9xdW90ZSIgc3R5bGU9Im1hcmdpbjowcHggMHB4IDBweCAwLjhleDti
b3JkZXItbGVmdDoxcHggc29saWQgcmdiKDIwNCwyMDQsMjA0KTtwYWRkaW5nLWxlZnQ6MWV4Ij5I
aSBWaXB1bCw8YnI+DQo8YnI+DQpHcmVhdCB0aGF0IHlvdSBtYW5hZ2VkIHRvIHNldHVwIGEgZGVi
dWdnaW5nIGVudmlyb25tZW50LiBUaGUgbG9ncyBsb29rPGJyPg0KdmVyeSBwcm9taXNpbmc6IGl0
IGxvb2tzIGxpa2UgeGVuZmIuYyBpcyBoYW5kbGluZyBldmVudHMgYXMgZXhwZWN0ZWQuPGJyPg0K
U28gaXQgd291bGQgYXBwYXJlbnRseSBzZWVtIHRoYXQgeGVuLWZiZnJvbnQuYyAtJmd0OyB4ZW5m
Yi5jIGNvbm5lY3Rpb24gaXM8YnI+DQp3b3JraW5nLjxicj4NCjxicj4NClNvIHRoZSBuZXh0IHN0
ZXAgaXMgdGhlIHhlbmZiLmMgLSZndDsgLi91aS92bmMuYyBpcyB3b3JraW5nLjxicj4NCjxicj4N
Ckl0IGNvdWxkIGJlIHRoYXQgdGhlIHBpeGVscyBhbmQgbW91c2UgZXZlbnRzIGFycml2ZSBqdXN0
IGZpbmUgaW48YnI+DQp4ZW5mYi5jLCBidXQgdGhlbiB0aGVyZSBpcyBhbiBpc3N1ZSB3aXRoIGV4
cG9ydGluZyB0aGVtIHRvIHRoZSB2bmNzZXJ2ZXI8YnI+DQppbXBsZW1lbnRhdGlvbiBpbnNpZGUg
UUVNVSwgd2hpY2ggaXMgLi91aS92bmMuYy4gVGhlIGludGVyZXN0aW5nPGJyPg0KZnVuY3Rpb25z
IHRoZXJlIGFyZSBxZW11X2NyZWF0ZV9kaXNwbGF5c3VyZmFjZSw8YnI+DQpxZW11X2NyZWF0ZV9k
aXNwbGF5c3VyZmFjZV9mcm9tLCBkcHlfZ2Z4X3JlcGxhY2Vfc3VyZmFjZSw8YnI+DQpkcHlfZ2Z4
X3VwZGF0ZSwgYW5kIGRweV9nZnhfY2hlY2tfZm9ybWF0Ljxicj4NCjxicj4NClNwZWNpZmljYWxs
eSBkcHlfZ2Z4X3VwZGF0ZSBzaG91bGQgY2F1c2UgVk5DIHRvIHJlbmRlciB0aGUgbmV3IGFyZWEu
PGJyPg0KPGJyPg0KcWVtdV9jcmVhdGVfZGlzcGxheXN1cmZhY2VfZnJvbSBsZXQgVk5DIHVzZSB0
aGUgeGVuZmIgYnVmZmVyIGRpcmVjdGx5PGJyPg0Kd2l0aCBWTkMsIHJhdGhlciB0aGFuIHVzaW5n
IGEgc2Vjb25kYXJ5IGJ1ZmZlciBhbmQgbWVtb3J5IGNvcGllcy48YnI+DQpJbnRlcmVzdGluZ2x5
LCBkcHlfZ2Z4X2NoZWNrX2Zvcm1hdCBzaG91bGQgYmUgdXNlZCB0byBjaGVjayBpZiBpdCBpczxi
cj4NCmFwcHJvcHJpYXRlIHRvIHNoYXJlIHRoZSBidWZmZXIgKHFlbXVfY3JlYXRlX2Rpc3BsYXlz
dXJmYWNlX2Zyb20pIG9yIG5vdDxicj4NCihxZW11X2NyZWF0ZV9kaXNwbGF5c3VyZmFjZSkgYnV0
IHdlIGRvbiYjMzk7dCBjYWxsIGl0Ljxicj4NCjxicj4NCkkgdGhpbmsgaXQgd291bGQgYmUgZ29v
ZCB0byBhZGQgYSBjYWxsIHRvIGRweV9nZnhfY2hlY2tfZm9ybWF0IGluPGJyPg0KeGVuZmJfdXBk
YXRlIHdoZXJlIHdlIGNhbGwgcWVtdV9jcmVhdGVfZGlzcGxheXN1cmZhY2VfZnJvbSBhbmQgYWxz
byBhZGQ8YnI+DQphIHByaW50ay48YnI+DQo8YnI+DQpZb3UgY2FuIHRyeSB0byBkaXNhYmxlIHRo
ZSBidWZmZXIgc2hhcmluZyBieSByZXBsYWNpbmc8YnI+DQpxZW11X2NyZWF0ZV9kaXNwbGF5c3Vy
ZmFjZV9mcm9tIHdpdGggcWVtdV9jcmVhdGVfZGlzcGxheXN1cmZhY2UuIFlvdSBjYW48YnI+DQph
bHNvIHRyeSB0byB1c2UgYW5vdGhlciBRRU1VIFVJIGxpa2UgU0RMIHRvIHNlZSBpZiB0aGUgcHJv
YmxlbSBpczxicj4NCnNwZWNpZmljIHRvIFZOQyBvbmx5Ljxicj4NCjxicj4NCkNoZWVycyw8YnI+
DQo8YnI+DQpTdGVmYW5vPGJyPg0KPGJyPg0KPGJyPg0KT24gTW9uLCAxOSBEZWMgMjAyMiwgVmlw
On Mon, 19 Dec 2022, Vipul Suneja wrote:
> Hi Stefano,
>
> Thanks!
>
> I could prepare a patch for adding debug printf logs in xenfb.c &
> re-compile QEMU in the yocto image. Just for reference, I have included
> logs in all the functions.
> Attaching the qemu log file; the entry & exit logs come up for
> "xenfb_handle_events" & "xenfb_map_fb" also after the host machine boots
> up. Can you please assist further: which parameters have to be
> cross-checked, or any other input as per the logs?
>
> Thanks & Regards,
> Vipul Kumar
>
> On Thu, Dec 15, 2022 at 4:17 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> > Hi Vipul,
> >
> > For QEMU you actually need to follow the Yocto build process to update
> > the QEMU binary. That is because QEMU is a userspace application with
> > lots of library dependencies, so we cannot just do "make" with a
> > cross-compiler like in the case of Xen.
> >
> > So you need to make changes to QEMU and then add those changes as a
> > patch to the Yocto QEMU build recipe, or configure Yocto to build QEMU
> > from your local tree. I am not a Yocto expert and the Yocto community
> > would be a better place to ask for advice. You can see here some
> > instructions on how to build Xen using a local tree; see the usage of
> > EXTERNALSRC (note that this is *not* what you need: you need to build
> > QEMU from a local tree, not Xen. But I thought the wiki page might
> > still be a starting point):
> >
> > https://wiki.xenproject.org/wiki/Xen_on_ARM_and_Yocto
> >
> > Cheers,
> >
> > Stefano
> >
> > On Thu, 15 Dec 2022, Vipul Suneja wrote:
> > > Hi Stefano,
> > >
> > > Thanks!
> > >
> > > I could see QEMU 6.2.0 compiled & installed in the host image
> > > xen-image-minimal. I could also find the xenfb.c source file and
> > > modified it with debug logs.
> > > I have set up a cross-compile environment and did 'make clean' &
> > > 'make all' to recompile, but it's failing. In case I am doing it
> > > wrong, can you please assist me with the correct steps to compile
> > > qemu? Below are the error logs:
> > >
> > > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/build$ make all
> > > [1/3864] Compiling C object libslirp.a.p/slirp_src_arp_table.c.o
> > > [2/3864] Compiling C object subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o
> > > [3/3864] Linking static target subprojects/libvhost-user/libvhost-user.a
> > > [4/3864] Compiling C object libslirp.a.p/slirp_src_vmstate.c.o
> > > [5/3864] Compiling C object libslirp.a.p/slirp_src_dhcpv6.c.o
> > > [6/3864] Compiling C object libslirp.a.p/slirp_src_dnssearch.c.o
> > > [7/3864] Compiling C object libslirp.a.p/slirp_src_bootp.c.o
> > > [8/3864] Compiling C object libslirp.a.p/slirp_src_cksum.c.o
> > > [9/3864] Compiling C object libslirp.a.p/slirp_src_if.c.o
> > > [10/3864] Compiling C object libslirp.a.p/slirp_src_ip6_icmp.c.o
> > > [11/3864] Compiling C object libslirp.a.p/slirp_src_ip6_input.c.o
> > > [12/3864] Compiling C object libslirp.a.p/slirp_src_ip6_output.c.o
> > > [13/3864] Compiling C object libslirp.a.p/slirp_src_ip_icmp.c.o
> > > [14/3864] Compiling C object libslirp.a.p/slirp_src_ip_input.c.o
> > > [15/3864] Compiling C object libslirp.a.p/slirp_src_ip_output.c.o
> > > [16/3864] Compiling C object libslirp.a.p/slirp_src_mbuf.c.o
> > > [17/3864] Compiling C object libslirp.a.p/slirp_src_misc.c.o
> > > [18/3864] Compiling C object libslirp.a.p/slirp_src_ncsi.c.o
> > > [19/3864] Compiling C object libslirp.a.p/slirp_src_ndp_table.c.o
> > > [20/3864] Compiling C object libslirp.a.p/slirp_src_sbuf.c.o
> > > [21/3864] Compiling C object libslirp.a.p/slirp_src_slirp.c.o
> > > [22/3864] Compiling C object libslirp.a.p/slirp_src_socket.c.o
> > > [23/3864] Compiling C object libslirp.a.p/slirp_src_state.c.o
> > > [24/3864] Compiling C object libslirp.a.p/slirp_src_stream.c.o
> > > [25/3864] Compiling C object libslirp.a.p/slirp_src_tcp_input.c.o
> > > [26/3864] Compiling C object libslirp.a.p/slirp_src_tcp_output.c.o
> > > [27/3864] Compiling C object libslirp.a.p/slirp_src_tcp_subr.c.o
> > > [28/3864] Compiling C object libslirp.a.p/slirp_src_tcp_timer.c.o
> > > [29/3864] Compiling C object libslirp.a.p/slirp_src_tftp.c.o
> > > [30/3864] Compiling C object libslirp.a.p/slirp_src_udp.c.o
> > > [31/3864] Compiling C object libslirp.a.p/slirp_src_udp6.c.o
> > > [32/3864] Compiling C object libslirp.a.p/slirp_src_util.c.o
> > > [33/3864] Compiling C object libslirp.a.p/slirp_src_version.c.o
> > > [34/3864] Linking static target libslirp.a
> > > [35/3864] Generating qemu-version.h with a custom command (wrapped by meson to capture output)
> > > FAILED: qemu-version.h
> > > /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/recipe-sysroot-native/usr/bin/meson --internal exe --capture qemu-version.h -- /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/qemu-6.2.0/scripts/qemu-version.sh /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/qemu-6.2.0 '' 6.2.0
> > > /usr/bin/env: ‘nativepython3’: No such file or directory
> > > ninja: build stopped: subcommand failed.
> > > make: *** [Makefile:162: run-ninja] Error 1
> > >
> > > Thanks & Regards,
> > > Vipul Kumar
> > >
> > > On Wed, Dec 14, 2022 at 4:55 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > Hi Vipul,
> > > >
> > > > Good progress! The main function we should check is "xenfb_refresh",
> > > > but from the logs it looks like it is called several times. Which
> > > > means that everything seems to be working as expected on the Linux
> > > > side.
> > > >
> > > > It is time to investigate the QEMU side:
> > > > ./hw/display/xenfb.c:xenfb_handle_events
> > > > ./hw/display/xenfb.c:xenfb_map_fb
> > > >
> > > > I wonder if the issue is internal to QEMU. You might want to use an
> > > > older QEMU version to check if it works, maybe 6.0 or 5.0 or even
> > > > 4.0. I also wonder if it is a problem between xenfb.c and the rest
> > > > of QEMU. I would investigate how xenfb->pixels is rendered by the
> > > > rest of QEMU. Specifically you might want to look at the calls to
> > > > qemu_create_displaysurface, qemu_create_displaysurface_from and
> > > > dpy_gfx_replace_surface in xenfb_update.
> > > >
> > > > I hope this helps.
> > > >
> > > > Cheers,
> > > >
> > > > Stefano
> > > >
> > > > On Tue, 13 Dec 2022, Vipul Suneja wrote:
> > > > > Hi Stefano,
> > > > >
> > > > > Thanks!
> > > > >
> > > > > I modified the xen-fbfront.c source file, included printk debug
> > > > > logs & cross-compiled it. I included the printk debug log at the
> > > > > entry & exit of all functions of the xen-fbfront.c file.
> > > > > Generated the kernel module & loaded it in the guest machine at
> > > > > bootup. I could see lots of logs coming up, and could see
> > > > > multiple functions being invoked even though I have not used
> > > > > vncviewer in the host. Attaching the log file for reference. Any
> > > > > specific function or parameters that have to be checked, or any
> > > > > other suggestion as per the logs?
> > > > >
> > > > > Thanks & Regards,
> > > > > Vipul Kumar
> > > > >
> > > > > On Tue, Dec 13, 2022 at 3:44 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > Hi Vipul,
> > > > > >
> > > > > > I am online on IRC OFTC #xendevel (https://www.oftc.net/, you
> > > > > > need a registered nickname to join #xendevel).
> > > > > >
> > > > > > For development and debugging I find that it is a lot easier to
> > > > > > cross-compile the kernel "by hand", and do a monolithic build,
> > > > > > rather than going through Yocto.
> > > > > >
> > > > > > For instance the following builds for me:
> > > > > >
> > > > > > cd linux.git
> > > > > > export ARCH=arm64
> > > > > > export CROSS_COMPILE=/path/to/cross-compiler
> > > > > > make defconfig
> > > > > > [add printks to drivers/video/fbdev/xen-fbfront.c]
> > > > > > make -j8 Image.gz
> > > > > >
> > > > > > And Image.gz boots on Xen as the DomU kernel without issues.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Stefano
> > > > > >
> > > > > > On Sat, 10 Dec 2022, Vipul Suneja wrote:
> > > > > > > Hi Stefano,
> > > > > > >
> > > > > > > Thanks!
> > > > > > >
> > > > > > > I have included printk debug logs in the xen-fbfront.c source
> > > > > > > file. While cross-compiling to generate the .ko with the
> > > > > > > "xen-guest-image-minimal" toolchain it's throwing a "modpost
> > > > > > > not found" error. I could see the modpost.c source file, but
> > > > > > > the final script is missing. Any input on this? Below are the
> > > > > > > logs:
> > > > > > >
> > > > > > > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$ make
> > > > > > > make ARCH=arm64 -I/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/usr/include/asm -C /opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build M=/home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer modules
> > > > > > > make[1]: Entering directory '/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build'
> > > > > > > arch/arm64/Makefile:36: Detected assembler with broken .inst; disassembly will be unreliable
> > > > > > > warning: the compiler differs from the one used to build the kernel
> > > > > > >   The kernel was built by: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
> > > > > > >   You are using:           aarch64-poky-linux-gcc (GCC) 11.3.0
> > > > > > >   CC [M]  /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/xen-fbfront.o
> > > > > > >   MODPOST /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/Module.symvers
> > > > > > > /bin/sh: 1: scripts/mod/modpost: not found
> > > > > > > make[2]: *** [scripts/Makefile.modpost:133: /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/Module.symvers] Error 127
> > > > > > > make[1]: *** [Makefile:1813: modules] Error 2
> > > > > > > make[1]: Leaving directory '/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build'
> > > > > > > make: *** [Makefile:5: all] Error 2
> > > > > > > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$ ls -l
> > > > > > > total 324
> > > > > > > -rwxrwxrwx 1 agl agl    359 Dec 10 22:41 Makefile
> > > > > > > -rw-rw-r-- 1 agl agl     90 Dec 10 22:49 modules.order
> > > > > > > -rw-r--r-- 1 agl agl  18331 Dec  1 20:32 xen-fbfront.c
> > > > > > > -rw-rw-r-- 1 agl agl     90 Dec 10 22:49 xen-fbfront.mod
> > > > > > > -rw-rw-r-- 1 agl agl 297832 Dec 10 22:49 xen-fbfront.o
> > > > > > > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$ file xen-fbfront.o
> > > > > > > xen-fbfront.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), with debug_info, not stripped
> > > > > > > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$
> > > > > > >
> > > > > > > I have connected an HDMI-based 1980x1024 resolution display
> > > > > > > screen to the raspberrypi4 for testing purposes. I hope
> > > > > > > connecting this display to the rpi4 should be ok.
> > > > > > >
> > > > > > > Is there any other way we can connect for a detailed
> > > > > > > discussion on the display bringup issue? This will really
> > > > > > > help to resolve this issue.
> > > > > > >
> > > > > > > Thanks & Regards,
> > > > > > > Vipul Kumar
> > > > > > >
> > > > > > > On Fri, Dec 2, 2022 at 1:02 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > On Thu, 1 Dec 2022, Vipul Suneja wrote:
> > > > > > > > > Hi Stefano,
> > > > > > > > > Thanks!
> > > > > > > > >
> > > > > > > > > I am exploring both options here, modification of the
> > > > > > > > > framebuffer source file & setting up an x11vnc server in
> > > > > > > > > the guest.
> > > > > > > > > Other than these, I would like to share a few findings
> > > > > > > > > with you.
> > > > > > > > >
> > > > > > > > > 1. If I keep "CONFIG_XEN_FBDEV_FRONTEND=y" then
> > > > > > > > > xen-fbfront.ko is not generated, but if I keep
> > > > > > > > > "CONFIG_XEN_FBDEV_FRONTEND=m" then I can see
> > > > > > > > > xen-fbfront.ko & it loads as well. Same thing with other
> > > > > > > > > frontend/backend drivers too. Do we need to configure
> > > > > > > > > these drivers as a module (m) only?
> > > > > > > >
> > > > > > > > xen-fbfront should work both as a module (xen-fbfront.ko)
> > > > > > > > or built-in (CONFIG_XEN_FBDEV_FRONTEND=y).
> > > > > > > >
> > > > > > > > > 2. I could see the xenstored service is running for the
> > > > > > > > > host but it's always failing for the guest machine. I
> > > > > > > > > could see it in the bootup logs & via systemctl status
> > > > > > > > > also.
> > > > > > > >
> > > > > > > > That is normal. xenstored is only meant to be run in Dom0,
> > > > > > > > not in the domUs. If you use the same rootfs for Dom0 and
> > > > > > > > DomU then xenstored will fail starting in the DomU (but
> > > > > > > > should succeed in Dom0), which is what we want.
> > > > > > > >
> > > > > > > > If you run "xenstore-ls" in Dom0, you'll see a bunch of
> > > > > > > > entries, including some of them related to "vfb" which is
> > > > > > > > the virtual framebuffer protocol. You should also see an
> > > > > > > > entry called "state" set to "4" which means "connected".
> > > > > > > > state = 4 is usually when everything works. Normally when
> > > > > > > > things don't work state != 4.
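The xenstore check described above can be done from Dom0 roughly as follows
(a sketch: the guest name "DomU" and the vfb device index 0 are assumptions,
adjust them to your configuration):

```sh
# Run in Dom0. Find the guest's domid from its name (assumed "DomU"):
domid=$(xl domid DomU)

# List the virtual-framebuffer frontend nodes for that domain:
xenstore-ls /local/domain/$domid/device/vfb

# Read the frontend connection state; "4" means connected
# (XenbusStateConnected), anything else points at a setup problem:
xenstore-read /local/domain/$domid/device/vfb/0/state
```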
> > > > > > > >
> > > > > > > > > Below are the logs:
> > > > > > > > > [  OK  ] Reached target Basic System.
> > > > > > > > > [  OK  ] Started Kernel Logging Service.
> > > > > > > > > [  OK  ] Started System Logging Service.
> > > > > > > > >          Starting D-Bus System Message Bus...
> > > > > > > > >          Starting User Login Management...
> > > > > > > > >          Starting Permit User Sessions...
> > > > > > > > >          Starting The Xen xenstore...
> > > > > > > > >          Starting OpenSSH Key Generation...
> > > > > > > > > [FAILED] Failed to start The Xen xenstore.
> > > > > > > > > See 'systemctl status xenstored.service' for details.
> > > > > > > > > [DEPEND] Dependency failed for qemu for xen dom0 disk backend.
> > > > > > > > > [DEPEND] Dependency failed for Xend…p guests on boot and shutdown.
> > > > > > > > > [DEPEND] Dependency failed for xen-…des, JSON configuration stub).
> > > > > > > > > [DEPEND] Dependency failed for Xenc…guest consoles and hypervisor.
> > > > > > > > > [  OK  ] Finished Permit User Sessions.
> > > > > > > > > [  OK  ] Started Getty on tty1.
> > > > > > > > > [  OK  ] Started Serial Getty on hvc0.
> > > > > > > > > [  OK  ] Started Serial Getty on ttyS0.
> > > > > > > > > [  OK  ] Reached target Login Prompts.
> > > > > > > > >          Starting Xen-watchdog - run xen watchdog daemon...
> > > > > > > > > [  OK  ] Started D-Bus System Message Bus.
> > > > > > > > > [  OK  ] Started Xen-watchdog - run xen watchdog daemon.
> > > > > > > > > [  OK  ] Finished OpenSSH Key Generation.
> > > > > > > > > [  OK  ] Started User Login Management.
> > > > > > > > > [  OK  ] Reached target Multi-User System.
> > > > > > > > >          Starting Record Runlevel Change in UTMP...
> > > > > > > > > [  OK  ] Finished Record Runlevel Change in UTMP.
> > > > > > > > > fbcon: Taking over console
> > > > > > > > >
> > > > > > > > > Poky (Yocto Project Reference Di
c3RybykgNC4wLjQgcmFzcGJlcnJ5cGk0LTY0IGh2YzA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyByYXNwYmVycnlwaTQtNjQgbG9naW46IHJvb3Q8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IHJvb3RAcmFzcGJlcnJ5
cGk0LTY0On4jPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyByb290QHJhc3BiZXJyeXBpNC02NDp+Izxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgcm9vdEByYXNwYmVycnlwaTQtNjQ6fiMgc3lzdGVtY3RsIHN0YXR1cyB4ZW5zdG9yZWQuc2Vy
dmljZTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgeCB4ZW5zdG9yZWQuc2VydmljZSAtIFRoZSBYZW4geGVuc3RvcmU8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IMKgIMKgIMKgTG9hZGVkOiBsb2FkZWQgKC9saWIvc3lzdGVtZC9zeXN0ZW0v
eGVuc3RvcmVkLnNlcnZpY2U7IGVuYWJsZWQ7IHZlbmRvciBwcmVzZXQ6IGVuYWJsZWQpPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyDCoCDCoCDCoEFjdGl2ZTogZmFpbGVkIChSZXN1bHQ6IGV4aXQtY29kZSkgc2luY2Ug
VGh1IDIwMjItMTItMDEgMDY6MTI6MDUgVVRDOyAyNnMgYWdvPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCBQ
cm9jZXNzOiAxOTUgRXhlY1N0YXJ0UHJlPS9iaW4vZ3JlcCAtcSBjb250cm9sX2QgL3Byb2MveGVu
L2NhcGFiaWxpdGllcyAoY29kZT1leGl0ZWQsPGJyPg0KJmd0O8KgIMKgIMKgIMKgc3RhdHVzPTEv
RkFJTFVSRSk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBEZWMgMDEgMDY6MTI6MDQgcmFzcGJl
cnJ5cGk0LTY0IHN5c3RlbWRbMV06IFN0YXJ0aW5nIFRoZSBYZW4geGVuc3RvcmUuLi48YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IERlYyAwMSAwNjoxMjowNSByYXNwYmVycnlwaTQtNjQgc3lzdGVtZFsxXTogeGVuc3Rv
cmVkLnNlcnZpY2U6IENvbnRyb2wgcHJvLi4uVVJFPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBEZWMgMDEgMDY6MTI6
MDUgcmFzcGJlcnJ5cGk0LTY0IHN5c3RlbWRbMV06IHhlbnN0b3JlZC5zZXJ2aWNlOiBGYWlsZWQg
d2l0aC4uLmUmIzM5Oy48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IERlYyAwMSAwNjoxMjowNSByYXNwYmVycnlwaTQt
NjQgc3lzdGVtZFsxXTogRmFpbGVkIHRvIHN0YXJ0IFRoZSBYZW4geGVuc3RvcmUuPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBIaW50OiBTb21lIGxpbmVzIHdlcmUgZWxsaXBzaXplZCwgdXNlIC1sIHRvIHNob3cgaW4g
ZnVsbC48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IHJvb3RAcmFzcGJlcnJ5cGk0LTY0On4jwqA8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBBbnkgaW5wdXQgb24gdGhlc2U/PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgVGhhbmtzICZhbXA7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBWaXB1bCBLdW1hcjxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IE9uIFdlZCwgTm92IDIzLCAyMDIyIGF0IDU6NDEgQU0gU3RlZmFu
byBTdGFiZWxsaW5pICZsdDs8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIg
dGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyB3cm90ZTo8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBIaSBWaXB1bCw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgSSBjYW5ub3Qgc3BvdCBhbnkgaXNzdWUgaW4gdGhlIGNvbmZpZ3VyYXRpb24sIGlu
IHBhcnRpY3VhbCB5b3UgaGF2ZTo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Q09ORklHX1hFTl9GQkRFVl9GUk9OVEVORD15PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoHdoaWNoIGlzIHdoYXQgeW91IG5lZWQuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoFRoZSBvbmx5IHRoaW5nIEkgY2FuIHN1Z2dlc3QgaXMgdG8gYWRkIHByaW50a3Mg
dG8gdGhlIExpbnV4IGZyb250ZW5kPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZHJpdmVyICh0aGUg
b25lIHJ1bm5pbmcgaW4gdGhlIGRvbVUpIHdoaWNoIGlzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
ZHJpdmVycy92aWRlby9mYmRldi94ZW4tZmJmcm9udC5jIGFuZCBwcmludGZzIHRvIHRoZSBRRU1V
IGJhY2tlbmQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAocnVubmluZyBpbiBEb20wKSB3aGljaCBp
cyBody9kaXNwbGF5L3hlbmZiLmMgdG8gZmlndXJlIG91dCB3aGF0IGlzPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgZ29pbmcgb24uPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBBbHRlcm5hdGl2ZWx5LCB5b3UgY2FuIHNldHVwIFBWIG5ldHdvcmsgd2l0aCB0
aGUgZG9tVSwgc3VjaCBhczo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgwqAg
dmlmPVsmIzM5OyYjMzk7XTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBhbmQg
dGhlbiBydW4geDExIGFuZCBhIHgxMXZuYyBzZXJ2ZXIgaW4geW91ciBkb21VLiBZb3Ugc2hvdWxk
IGJlIGFibGUgdG88YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBjb25uZWN0IHRvIGl0IHVzaW5nIHZu
Y3ZpZXdlciBhdCB0aGUgbmV0d29yayBJUCBvZiB5b3VyIGRvbVUuPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoEJhc2ljYWxseSB5b3UgYXJlIHNraXBwaW5nIHRoZSBwcm9ibGVt
IGJlY2F1c2UgaW5zdGVhZCBvZiB1c2luZyB0aGUgUFY8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBm
cmFtZWJ1ZmZlciBwcm90b2NvbCwgeW91IGp1c3QgdXNlIFZOQyBvdmVyIHRoZSBuZXR3b3JrIHdp
dGggdGhlIGRvbVUuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqBDaGVlcnMsPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFN0ZWZhbm88
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoE9uIFR1ZSwg
MjIgTm92IDIwMjIsIFZpcHVsIFN1bmVqYSB3cm90ZTo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IEhpIFN0ZWZhbm8sPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBUaGFua3MgZm9yIHRo
ZcKgc3VwcG9ydCE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBMb29rcyBsaWtlIEkgaGF2ZSB0cmllZCBhbGwgdGhlIGNvbWJpbmF0aW9uc8Kg
JmFtcDsgcG9zc2libGUgd2F5cyB0byBnZXQgZGlzcGxheSB1cCBidXQgZmFpbGVkLiBJczxicj4N
CiZndDvCoCDCoCDCoCDCoHRoZXJlPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
YW55PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZG9j
dW1lbnQgb3I8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqBwZGYgZm9yPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgcG9ydGluZyB4
ZW4gb248YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IHJhc3BiZXJyeXBpNC48YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IEkgY291bGQgZmluZCBsb3QmIzM5O3Mgb2YgbGlua3MgdGVsbGlu
ZyB0aGUgc2FtZSBidXQgY291bGRuJiMzOTt0IHNlZSBhbnkgb2ZmaWNpYWwgdXNlciBndWlkZSBv
cjxicj4NCiZndDvCoCDCoCDCoCDCoGRvY3VtZW50PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgZnJvbSB0aGU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqB4ZW48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBjb21tdW5pdHkgb248YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqB0aGUgc2FtZS4gSWY8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IHRoZXJlIGlzIHNvbWV0
aGluZyB0byByZWZlcsKgPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyB0byBwbGVhc2Ugc2hh
cmUgd2l0aCBtZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IEkgYW0gYXR0YWNoaW5nIHRo
ZSBrZXJuZWwgY29uZmlndXJhdGlvbiBmaWxlIGFsc28sIGp1c3QgdGFrZSBhIGxvb2sgaWYgaSBo
YXZlIG1pc3NlZDxicj4NCiZndDvCoCDCoCDCoCDCoGFueXRoaW5nLjxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgQW55IG90aGVyIHN1Z2dlc3Rpb25zIG9yIGlucHV0IGZyb20geW91ciBlbmQg
Y291bGQgYmUgcmVhbGx5IGhlbHBmdWw/PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFZpcHVsIEt1bWFyPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgT24gRnJpLCBOb3YgMTEsIDIwMjIgYXQgNjo0MCBBTSBTdGVmYW5vIFN0
YWJlbGxpbmkgJmx0OzxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7IHdyb3RlOjxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEhpIFZpcHVsLDxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTb3JyeSBm
b3IgdGhlIGxhdGUgcmVwbHkuIEZyb20gdGhlIGVhcmxpZXIgbG9ncyB0aGF0IHlvdSBzZW50LCBp
dCBsb29rczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGxpa2UgZXZlcnl0
aGluZyBzaG91bGQgYmUgd29ya2luZyBjb3JyZWN0bHkuIFNwZWNpZmljYWxseTo8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
wqDCoCDCoCB2ZmIgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqDCoMKgIMKgIMKgMSA9ICZxdW90OyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoMKgwqAgwqAgwqAgMCA9ICZxdW90OyZxdW90Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoMKgwqAgwqAgwqAgwqBmcm9udGVuZCA9ICZxdW90Oy9s
b2NhbC9kb21haW4vMS9kZXZpY2UvdmZiLzAmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqDCoMKgIMKgIMKgIMKgZnJvbnRlbmQtaWQgPSAmcXVvdDsxJnF1b3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgwqDCoCDCoCDCoCDCoG9ubGluZSA9
ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoMKg
IMKgIMKgIMKgc3RhdGUgPSAmcXVvdDs0JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgwqDCoCDCoCDCoCDCoHZuYyA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoMKgIMKgIMKgIMKgdm5jbGlzdGVuID0gJnF1b3Q7
MTI3LjAuMC4xJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgwqDC
oCDCoCDCoCDCoHZuY2Rpc3BsYXkgPSAmcXVvdDswJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgwqDCoCDCoCDCoCDCoHZuY3VudXNlZCA9ICZxdW90OzEmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoMKgIMKgIMKgIMKgc2RsID0g
JnF1b3Q7MCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoMKgwqAg
wqAgwqAgwqBvcGVuZ2wgPSAmcXVvdDswJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgwqDCoCDCoCDCoCDCoGZlYXR1cmUtcmVzaXplID0gJnF1b3Q7MSZxdW90Ozxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoMKgwqAgwqAgwqAgwqBob3RwbHVn
LXN0YXR1cyA9ICZxdW90O2Nvbm5lY3RlZCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoMKgwqAgwqAgwqAgwqByZXF1ZXN0LXVwZGF0ZSA9ICZxdW90OzEmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgc3RhdGUgJnF1b3Q7NCZxdW90OyBtZWFucyAmcXVvdDtjb25uZWN0ZWQmcXVvdDsu
IFNvIEkgd291bGQgZXhwZWN0IHRoYXQgeW91IHNob3VsZCBiZSBhYmxlPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdG8gY29ubmVjdCB0byB0aGUgdm5jIHNlcnZlciB1c2lu
ZyB2bmN2aWV3ZXIuIFlvdSBtaWdodCBub3Qgc2VlIGFueXRoaW5nPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgKGJsYWNrIHNjcmVlbikgYnV0IHlvdSBzaG91bGQgZGVmaW5p
dGVseSBiZSBhYmxlIHRvIGNvbm5lY3QuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEkgd291bGRuJiMzOTt0IHRyeSB0byBs
YXVuY2ggeDExIGluIHRoZSBndWVzdCBqdXN0IHlldC4gZmJjb24gaW4gTGludXggaXM8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBlbm91Z2ggdG8gcmVuZGVyIHNvbWV0aGlu
ZyBvbiB0aGUgc2NyZWVuLiBZb3Ugc2hvdWxkIGJlIGFibGUgdG8gc2VlIHRoZTxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoExpbnV4IHRleHQtYmFzZWQgY29uc29sZSByZW5k
ZXJlZCBncmFwaGljYWxseSwgY29ubmVjdGluZyB0byBpdCB2aWEgdm5jLjxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTb3Jy
eSBmb3IgdGhlIGJhc2ljIHF1ZXN0aW9uLCBidXQgaGF2ZSB5b3UgdHJpZWQgYWxsIHRoZSBmb2xs
b3dpbmc/PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoHZuY3ZpZXdlciA8YSBocmVmPSJodHRwOi8vMTI3LjAuMC4xOjAiIHJl
bD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPjEyNy4wLjAuMTowPC9hPjxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHZuY3ZpZXdlciA8YSBocmVmPSJodHRwOi8vMTI3
LjAuMC4xOjEiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPjEyNy4wLjAuMToxPC9h
Pjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHZuY3ZpZXdlciA8YSBocmVm
PSJodHRwOi8vMTI3LjAuMC4xOjIiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPjEy
Ny4wLjAuMToyPC9hPjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHZuY3Zp
ZXdlciA8YSBocmVmPSJodHRwOi8vMTI3LjAuMC4xOjU5MDAiIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPjEyNy4wLjAuMTo1OTAwPC9hPjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoHZuY3ZpZXdlciA8YSBocmVmPSJodHRwOi8vMTI3LjAuMC4xOjU5MDEiIHJl
bD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPjEyNy4wLjAuMTo1OTAxPC9hPjxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHZuY3ZpZXdlciA8YSBocmVmPSJodHRwOi8v
MTI3LjAuMC4xOjU5MDIiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPjEyNy4wLjAu
MTo1OTAyPC9hPjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBHaXZlbiB0aGF0IGZyb20gdGhlIHhlbnN0b3JlLWxzIGxvZ3Mg
ZXZlcnl0aGluZyBzZWVtcyB0byB3b3JrIGNvcnJlY3RseTxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoEkgYW0gbm90IHN1cmUgd2hhdCBlbHNlIHRvIHN1Z2dlc3QuIFlvdSBt
aWdodCBoYXZlIHRvIGFkZCBwcmludGYgdG8gUUVNVTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoHVpL3ZuYy5jIGFuZCBody9kaXNwbGF5L3hlbmZiLmMgdG8gc2VlIHdoYXQg
aXMgZ29pbmcgd3JvbmcuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoENoZWVycyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU3RlZmFubzxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgT24gTW9uLCA3IE5vdiAyMDIyLCBWaXB1bCBT
dW5lamEgd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBI
aSBTdGVmYW5vLDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgVGhh
bmtzITxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IEFueSBpbnB1dCBmdXJ0aGVyIG9uICZx
dW90O3hlbnN0b3JlLWxzJnF1b3Q7IGxvZ3M/PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
SSBhbSB0cnlpbmcgdG8gcnVuIHRoZSB4MHZuY3NlcnZlciAmYW1wOyB4MTF2bmMgc2VydmVyIG1h
bnVhbGx5IG9uIGd1ZXN0PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgbWFjaGlu
ZSh4ZW5fZ3Vlc3RfaW1hZ2VfbWluaW1hbCk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBpbWFnZTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJ1dCBpdCYjMzk7czxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoGZhaWxpbmc8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqB3aXRoIHRoZSBiZWxvdzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgZXJyb3IuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgcm9vdEByYXNwYmVycnlwaTQt
NjQ6L3Vzci9iaW4jIHgwdm5jc2VydmVyPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyB4MHZuY3NlcnZlcjogdW5hYmxlIHRvIG9wZW4gZGlzcGxheSAmcXVvdDsmcXVv
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IHJvb3RAcmFzcGJl
cnJ5cGk0LTY0Oi91c3IvYmluIzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgcm9vdEByYXNwYmVycnlwaTQtNjQ6L3Vzci9iaW4jIHgxMXZuYzxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAjQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBA
QEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQCM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7ICNAIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIEAjPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAjQCDCoCoqIMKgV0FSTklORyDC
oCoqIMKgV0FSTklORyDCoCoqIMKgV0FSTklORyDCoCoqIMKgV0FSTklORyDCoCoqIMKgIEAjPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAjQCDCoCDCoCDCoCDCoCDC
oCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDC
oCDCoCDCoCDCoCDCoCDCoCBAIzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgI0AgwqAgwqAgwqAgwqBZT1UgQVJFIFJVTk5JTkcgWDExVk5DIFdJVEhPVVQgQSBQQVNT
V09SRCEhIMKgIMKgIMKgIMKgQCM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7ICNAIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIEAjPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAjQCDCoFRoaXMgbWVhbnMgYW55b25lIHdpdGgg
bmV0d29yayBhY2Nlc3MgdG8gdGhpcyBjb21wdXRlciDCoCBAIzxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgI0AgwqBtYXkgYmUgYWJsZSB0byB2aWV3IGFuZCBjb250
cm9sIHlvdXIgZGVza3RvcC4gwqAgwqAgwqAgwqAgwqAgwqBAIzxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgI0AgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAg
wqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAg
QCM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7ICNAICZndDsmZ3Q7
Jmd0OyBJZiB5b3UgZGlkIG5vdCBtZWFuIHRvIGRvIHRoaXMgUHJlc3MgQ1RSTC1DIG5vdyEhICZs
dDsmbHQ7Jmx0OyBAIzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
I0AgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAg
wqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgQCM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7ICNAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBA
QEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAIzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgI0AgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAg
wqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgQCM8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7ICNAIMKgWW91IGNhbiBjcmVhdGUg
YW4geDExdm5jIHBhc3N3b3JkIGZpbGUgYnkgcnVubmluZzogwqAgwqAgwqAgQCM8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7ICNAIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgIMKgIEAjPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAj
QCDCoCDCoCDCoCB4MTF2bmMgLXN0b3JlcGFzc3dkIHBhc3N3b3JkIC9wYXRoL3RvL3Bhc3NmaWxl
IMKgIMKgIMKgQCM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7ICNA
IMKgb3IgwqAgeDExdm5jIC1zdG9yZXBhc3N3ZCAvcGF0aC90by9wYXNzZmlsZSDCoCDCoCDCoCDC
oCDCoCDCoCDCoCBAIzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
I0AgwqBvciDCoCB4MTF2bmMgLXN0b3JlcGFzc3dkIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgIMKgIMKgIMKgIMKgIMKgIEAjPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyAjQCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDC
oCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCBAIzxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgI0AgwqAodGhlIGxhc3Qgb25lIHdpbGwg
> use ~/.vnc/passwd)                                          @#
> #@                                                           @#
> #@  and then starting x11vnc via:                            @#
> #@                                                           @#
> #@       x11vnc -rfbauth /path/to/passfile                   @#
> #@                                                           @#
> #@  an existing ~/.vnc/passwd file from another VNC          @#
> #@  application will work fine too.                          @#
> #@                                                           @#
> #@  You can also use the -passwdfile or -passwd options.     @#
> #@  (note -passwd is unsafe if local users are not trusted)  @#
> #@                                                           @#
> #@  Make sure any -rfbauth and -passwdfile password files    @#
> #@  cannot be read by untrusted users.                       @#
> #@                                                           @#
> #@  Use x11vnc -usepw to automatically use your              @#
> #@  ~/.vnc/passwd or ~/.vnc/passwdfile password files.       @#
> #@  (and prompt you to create ~/.vnc/passwd if neither       @#
> #@  file exists.)  Under -usepw, x11vnc will exit if it      @#
> #@  cannot find a password to use.                           @#
> #@                                                           @#
> #@                                                           @#
> #@  Even with a password, the subsequent VNC traffic is      @#
> #@  sent in the clear.  Consider tunnelling via ssh(1):      @#
> #@                                                           @#
> #@    http://www.karlrunge.com/x11vnc/#tunnelling            @#
> #@                                                           @#
> #@  Or using the x11vnc SSL options: -ssl and -stunnel       @#
> #@                                                           @#
> #@  Please Read the documentation for more info about        @#
> #@  passwords, security, and encryption.                     @#
> #@                                                           @#
> #@    http://www.karlrunge.com/x11vnc/faq.html#faq-passwd    @#
> #@                                                           @#
> #@  To disable this warning use the -nopw option, or put     @#
> #@  'nopw' on a line in your ~/.x11vncrc file.               @#
> #@                                                           @#
> #@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#
> ###############################################################
> 09/03/2018 12:58:41 x11vnc version: 0.9.16 lastmod: 2019-01-05  pid: 424
> 09/03/2018 12:58:41 XOpenDisplay("") failed.
> 09/03/2018 12:58:41 Trying again with XAUTHLOCALHOSTNAME=localhost ...
> 09/03/2018 12:58:41
> 09/03/2018 12:58:41 *** XOpenDisplay failed. No -display or DISPLAY.
> 09/03/2018 12:58:41 *** Trying ":0" in 4 seconds.  Press Ctrl-C to abort.
> 09/03/2018 12:58:41 *** 1 2 3 4
> 09/03/2018 12:58:45 XOpenDisplay(":0") failed.
> 09/03/2018 12:58:45 Trying again with XAUTHLOCALHOSTNAME=localhost ...
> 09/03/2018 12:58:45 XOpenDisplay(":0") failed.
> 09/03/2018 12:58:45 Trying again with unset XAUTHLOCALHOSTNAME ...
> 09/03/2018 12:58:45
>
> 09/03/2018 12:58:45 ***************************************
> 09/03/2018 12:58:45 *** XOpenDisplay failed (:0)
>
> *** x11vnc was unable to open the X DISPLAY: ":0", it cannot continue.
> *** There may be "Xlib:" error messages above with details about the failure.
>
> Some tips and guidelines:
>
> ** An X server (the one you wish to view) must be running before x11vnc is
>    started: x11vnc does not start the X server.  (however, see the -create
>    option if that is what you really want).
>
> ** You must use -display <disp>, -OR- set and export your $DISPLAY
>    environment variable to refer to the display of the desired X server.
>  - Usually the display is simply ":0" (in fact x11vnc uses this if you forget
>    to specify it), but in some multi-user situations it could be ":1", ":2",
>    or even ":137".  Ask your administrator or a guru if you are having
>    difficulty determining what your X DISPLAY is.
>
> ** Next, you need to have sufficient permissions (Xauthority)
>    to connect to the X DISPLAY.   Here are some Tips:
>
>  - Often, you just need to run x11vnc as the user logged into the X session.
>    So make sure to be that user when you type x11vnc.
>  - Being root is usually not enough because the incorrect MIT-MAGIC-COOKIE
>    file may be accessed.  The cookie file contains the secret key that
>    allows x11vnc to connect to the desired X DISPLAY.
>  - You can explicitly indicate which MIT-MAGIC-COOKIE file should be used
>    by the -auth option, e.g.:
>        x11vnc -auth /home/someuser/.Xauthority -display :0
>        x11vnc -auth /tmp/.gdmzndVlR -display :0
>    you must have read permission for the auth file.
>    See also '-auth guess' and '-findauth' discussed below.
>
> ** If NO ONE is logged into an X session yet, but there is a greeter login
>    program like "gdm", "kdm", "xdm", or "dtlogin" running, you will need
>    to find and use the raw display manager MIT-MAGIC-COOKIE file.
>    Some examples for various display managers:
>
>      gdm:     -auth /var/gdm/:0.Xauth
>               -auth /var/lib/gdm/:0.Xauth
>      kdm:     -auth /var/lib/kdm/A:0-crWk72
>               -auth /var/run/xauth/A:0-crWk72
>      xdm:     -auth /var/lib/xdm/authdir/authfiles/A:0-XQvaJk
>      dtlogin: -auth /var/dt/A:0-UgaaXa
>
>    Sometimes the command "ps wwwwaux | grep auth" can reveal the file location.
>
>    Starting with x11vnc 0.9.9 you can have it try to guess by using:
>
>            -auth guess
>
>    (see also the x11vnc -findauth option.)
>
>    Only root will have read permission for the file, and so x11vnc must be run
>    as root (or copy it).  The random characters in the filenames will of course
>    change and the directory the cookie file resides in is system dependent.
>
> See also: http://www.karlrunge.com/x11vnc/faq.html
>
> Regards,
> Vipul Kumar
>
> On Thu, Nov 3, 2022 at 10:27 PM Vipul Suneja <vsuneja63@gmail.com> wrote:
>> Hi Stefano,
>> Thanks!
>>
>> I used xen-guest-image-minimal (simple console based image) as a guest
>> with fbcon & fbdev enabled in kernel configurations but still the same
>> error can't open the display.
>> below are the outcome of "xenstore-ls":
>>
>> root@raspberrypi4-64:~/guest1# xenstore-ls
>> tool = ""
>>  xenstored = ""
>> local = ""
>>  domain = ""
>>   0 = ""
>>    control = ""
>>     feature-poweroff = "1"
>>     feature-reboot = "1"
>>    domid = "0"
>>    name = "Domain-0"
>>    device-model = ""
>>     0 = ""
>>      backends = ""
>>       console = ""
>>       vkbd = ""
>>       vfb = ""
>>       qnic = ""
>>      state = "running"
>>     1 = ""
>>      backends = ""
>>       console = ""
>>       vkbd = ""
>>       vfb = ""
>>       qnic = ""
>>      state = "running"
>>    backend = ""
>>     vbd = ""
>>      1 = ""
>>       51712 = ""
>>        frontend = "/local/domain/1/device/vbd/51712"
>>        params = "/home/root/guest2/xen-guest-image-minimal-raspberrypi4-64.ext3"
>>        script = "/etc/xen/scripts/block"
>>        frontend-id = "1"
>>        online = "1"
>>        removable = "0"
>>        bootable = "1"
>>        state = "4"
>>        dev = "xvda"
>>        type = "phy"
>>        mode = "w"
>>        device-type = "disk"
>>        discard-enable = "1"
>>        feature-max-indirect-segments = "256"
>>        multi-queue-max-queues = "4"
>>        max-ring-page-order = "4"
>>        node = "/dev/loop0"
>>        physical-device = "7:0"
>>        physical-device-path = "/dev/loop0"
>>        hotplug-status = "connected"
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZmVhdHVyZS1mbHVzaC1jYWNoZSA9ICZxdW90OzEmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZGlz
Y2FyZC1ncmFudWxhcml0eSA9ICZxdW90OzQwOTYmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZGlzY2FyZC1hbGlnbm1lbnQgPSAmcXVv
dDswJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDC
oCDCoCDCoGRpc2NhcmQtc2VjdXJlID0gJnF1b3Q7MCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBmZWF0dXJlLWRpc2NhcmQgPSAmcXVv
dDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDC
oCDCoCDCoGZlYXR1cmUtYmFycmllciA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZmVhdHVyZS1wZXJzaXN0ZW50ID0g
JnF1b3Q7MSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
wqAgwqAgwqAgwqBzZWN0b3JzID0gJnF1b3Q7MTc5NDA0OCZxdW90Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBpbmZvID0gJnF1b3Q7MCZxdW90
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBz
ZWN0b3Itc2l6ZSA9ICZxdW90OzUxMiZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBwaHlzaWNhbC1zZWN0b3Itc2l6ZSA9ICZxdW90OzUx
MiZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAg
dmZiID0gJnF1b3Q7JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyDCoCDCoCDCoDEgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIDAgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZnJvbnRlbmQgPSAmcXVvdDsv
bG9jYWwvZG9tYWluLzEvZGV2aWNlL3ZmYi8wJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoGZyb250ZW5kLWlkID0gJnF1b3Q7MSZxdW90
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBv
bmxpbmUgPSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyDCoCDCoCDCoCDCoHN0YXRlID0gJnF1b3Q7NCZxdW90Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqB2bmMgPSAmcXVvdDsxJnF1b3Q7
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoHZu
Y2xpc3RlbiA9ICZxdW90OzEyNy4wLjAuMSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqB2bmNkaXNwbGF5ID0gJnF1b3Q7MCZxdW90Ozxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqB2bmN1
bnVzZWQgPSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyDCoCDCoCDCoCDCoHNkbCA9ICZxdW90OzAmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgb3BlbmdsID0gJnF1b3Q7MCZxdW90
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBm
ZWF0dXJlLXJlc2l6ZSA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgaG90cGx1Zy1zdGF0dXMgPSAmcXVvdDtjb25uZWN0
ZWQmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKg
IMKgIMKgcmVxdWVzdC11cGRhdGUgPSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCB2a2JkID0gJnF1b3Q7JnF1b3Q7PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoDEgPSAmcXVvdDsmcXVv
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIDAg
PSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IMKgIMKgIMKgIMKgZnJvbnRlbmQgPSAmcXVvdDsvbG9jYWwvZG9tYWluLzEvZGV2aWNlL3ZrYmQv
MCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAg
wqAgwqBmcm9udGVuZC1pZCA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgb25saW5lID0gJnF1b3Q7MSZxdW90Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBzdGF0ZSA9
ICZxdW90OzQmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IMKgIMKgIMKgIMKgZmVhdHVyZS1hYnMtcG9pbnRlciA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZmVhdHVyZS1yYXct
cG9pbnRlciA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IMKgIMKgIMKgIMKgaG90cGx1Zy1zdGF0dXMgPSAmcXVvdDtjb25uZWN0ZWQmcXVv
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIGNvbnNv
bGUgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IMKgIMKgIMKgMSA9ICZxdW90OyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgMCA9ICZxdW90OyZxdW90Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBmcm9udGVuZCA9ICZxdW90Oy9s
b2NhbC9kb21haW4vMS9jb25zb2xlJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoGZyb250ZW5kLWlkID0gJnF1b3Q7MSZxdW90Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBvbmxpbmUg
PSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyDCoCDCoCDCoCDCoHN0YXRlID0gJnF1b3Q7MSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBwcm90b2NvbCA9ICZxdW90O3Z0MTAwJnF1
b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCB2aWYg
PSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IMKgIMKgIMKgMSA9ICZxdW90OyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgwqAgwqAgwqAgMCA9ICZxdW90OyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBmcm9udGVuZCA9ICZxdW90Oy9sb2Nh
bC9kb21haW4vMS9kZXZpY2UvdmlmLzAmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZnJvbnRlbmQtaWQgPSAmcXVvdDsxJnF1b3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoG9ubGlu
ZSA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IMKgIMKgIMKgIMKgc3RhdGUgPSAmcXVvdDs0JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoHNjcmlwdCA9ICZxdW90Oy9ldGMveGVu
L3NjcmlwdHMvdmlmLWJyaWRnZSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgwqAgwqAgwqAgwqBtYWMgPSAmcXVvdDtlNDo1ZjowMTpjZDo3YjpkZCZxdW90
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBi
cmlkZ2UgPSAmcXVvdDt4ZW5icjAmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgaGFuZGxlID0gJnF1b3Q7MCZxdW90Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqB0eXBlID0gJnF1b3Q7
dmlmJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDC
oCDCoCDCoGhvdHBsdWctc3RhdHVzID0gJnF1b3Q7Y29ubmVjdGVkJnF1b3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoGZlYXR1cmUtc2cgPSAm
cXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDC
oCDCoCDCoCDCoGZlYXR1cmUtZ3NvLXRjcHY0ID0gJnF1b3Q7MSZxdW90Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBmZWF0dXJlLWdzby10Y3B2
NiA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IMKgIMKgIMKgIMKgZmVhdHVyZS1pcHY2LWNzdW0tb2ZmbG9hZCA9ICZxdW90OzEmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZmVh
dHVyZS1yeC1jb3B5ID0gJnF1b3Q7MSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBmZWF0dXJlLXhkcC1oZWFkcm9vbSA9ICZxdW90OzEm
cXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKg
IMKgZmVhdHVyZS1yeC1mbGlwID0gJnF1b3Q7MCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBmZWF0dXJlLW11bHRpY2FzdC1jb250cm9s
ID0gJnF1b3Q7MSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgwqAgwqAgwqAgwqBmZWF0dXJlLWR5bmFtaWMtbXVsdGljYXN0LWNvbnRyb2wgPSAmcXVvdDsx
JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDC
oCDCoGZlYXR1cmUtc3BsaXQtZXZlbnQtY2hhbm5lbHMgPSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoG11bHRpLXF1ZXVl
LW1heC1xdWV1ZXMgPSAmcXVvdDs0JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoGZlYXR1cmUtY3RybC1yaW5nID0gJnF1b3Q7MSZxdW90
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgMSA9ICZxdW90
OyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqB2
bSA9ICZxdW90Oy92bS9kODFlYzVhOS01YmY5LTRmMmItODllOC0wZjYwZDZkYTk0OGYmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgbmFtZSA9ICZx
dW90O2d1ZXN0MiZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgwqAgwqBjcHUgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IMKgIMKgIDAgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgYXZhaWxhYmlsaXR5ID0gJnF1b3Q7b25saW5l
JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCAx
ID0gJnF1b3Q7JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyDCoCDCoCDCoGF2YWlsYWJpbGl0eSA9ICZxdW90O29ubGluZSZxdW90Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqBtZW1vcnkgPSAmcXVvdDsmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIHN0YXRpYy1t
YXggPSAmcXVvdDsyMDk3MTUyJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyDCoCDCoCB0YXJnZXQgPSAmcXVvdDsyMDk3MTUyJnF1b3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCB2aWRlb3JhbSA9ICZxdW90OzAm
cXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgZGV2
aWNlID0gJnF1b3Q7JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyDCoCDCoCBzdXNwZW5kID0gJnF1b3Q7JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoGV2ZW50LWNoYW5uZWwgPSAmcXVvdDsmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIHZiZCA9ICZx
dW90OyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAg
wqAgwqA1MTcxMiA9ICZxdW90OyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgwqAgwqAgwqAgYmFja2VuZCA9ICZxdW90Oy9sb2NhbC9kb21haW4vMC9iYWNr
ZW5kL3ZiZC8xLzUxNzEyJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyDCoCDCoCDCoCBiYWNrZW5kLWlkID0gJnF1b3Q7MCZxdW90Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgc3RhdGUgPSAmcXVvdDs0JnF1
b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCB2
aXJ0dWFsLWRldmljZSA9ICZxdW90OzUxNzEyJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCBkZXZpY2UtdHlwZSA9ICZxdW90O2Rpc2smcXVv
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIG11
bHRpLXF1ZXVlLW51bS1xdWV1ZXMgPSAmcXVvdDsyJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCBxdWV1ZS0wID0gJnF1b3Q7JnF1b3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoHJpbmct
cmVmID0gJnF1b3Q7OCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgwqAgwqAgwqAgwqBldmVudC1jaGFubmVsID0gJnF1b3Q7NCZxdW90Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgcXVldWUtMSA9ICZxdW90
OyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAg
wqAgwqByaW5nLXJlZiA9ICZxdW90OzkmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZXZlbnQtY2hhbm5lbCA9ICZxdW90OzUmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIHByb3Rv
Y29sID0gJnF1b3Q7YXJtLWFiaSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgwqAgwqAgwqAgZmVhdHVyZS1wZXJzaXN0ZW50ID0gJnF1b3Q7MSZxdW90Ozxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgdmZiID0gJnF1
b3Q7JnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDC
oCDCoDAgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IMKgIMKgIMKgIGJhY2tlbmQgPSAmcXVvdDsvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92
ZmIvMS8wJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDC
oCDCoCDCoCBiYWNrZW5kLWlkID0gJnF1b3Q7MCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgc3RhdGUgPSAmcXVvdDs0JnF1b3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCBwYWdlLXJlZiA9
ICZxdW90OzI3NTAyMiZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgwqAgwqAgwqAgZXZlbnQtY2hhbm5lbCA9ICZxdW90OzMmcXVvdDs8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIHByb3RvY29sID0gJnF1b3Q7
YXJtLWFiaSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
wqAgwqAgwqAgZmVhdHVyZS11cGRhdGUgPSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCB2a2JkID0gJnF1b3Q7JnF1b3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoDAgPSAmcXVvdDsm
cXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKg
IGJhY2tlbmQgPSAmcXVvdDsvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92a2JkLzEvMCZxdW90Ozxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgYmFja2Vu
ZC1pZCA9ICZxdW90OzAmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IMKgIMKgIMKgIHN0YXRlID0gJnF1b3Q7NCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgcmVxdWVzdC1hYnMtcG9pbnRlciA9ICZx
dW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKg
IMKgIMKgIHBhZ2UtcmVmID0gJnF1b3Q7Mjc1MzIyJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCBwYWdlLWdyZWYgPSAmcXVvdDsxMjg0JnF1
b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCBl
dmVudC1jaGFubmVsID0gJnF1b3Q7MTAmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIHZpZiA9ICZxdW90OyZxdW90Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAwID0gJnF1b3Q7JnF1b3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCBiYWNrZW5kID0g
JnF1b3Q7L2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmlmLzEvMCZxdW90Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgYmFja2VuZC1pZCA9ICZxdW90
OzAmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKg
IMKgIHN0YXRlID0gJnF1b3Q7NCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgwqAgwqAgwqAgaGFuZGxlID0gJnF1b3Q7MCZxdW90Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgbWFjID0gJnF1b3Q7ZTQ6NWY6
MDE6Y2Q6N2I6ZGQmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IMKgIMKgIMKgIG10dSA9ICZxdW90OzE1MDAmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIHhkcC1oZWFkcm9vbSA9ICZxdW90OzAmcXVv
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIG11
bHRpLXF1ZXVlLW51bS1xdWV1ZXMgPSAmcXVvdDsyJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCBxdWV1ZS0wID0gJnF1b3Q7JnF1b3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoHR4LXJp
bmctcmVmID0gJnF1b3Q7MTI4MCZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgwqAgwqAgwqAgwqByeC1yaW5nLXJlZiA9ICZxdW90OzEyODEmcXVvdDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZXZlbnQt
Y2hhbm5lbC10eCA9ICZxdW90OzYmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIMKgZXZlbnQtY2hhbm5lbC1yeCA9ICZxdW90OzcmcXVvdDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIHF1ZXVl
LTEgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IMKgIMKgIMKgIMKgdHgtcmluZy1yZWYgPSAmcXVvdDsxMjgyJnF1b3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCDCoCDCoHJ4LXJpbmctcmVmID0g
JnF1b3Q7MTI4MyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgwqAgwqAgwqAgwqBldmVudC1jaGFubmVsLXR4ID0gJnF1b3Q7OCZxdW90Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgwqBldmVudC1jaGFubmVs
LXJ4ID0gJnF1b3Q7OSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgwqAgwqAgwqAgcmVxdWVzdC1yeC1jb3B5ID0gJnF1b3Q7MSZxdW90Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgZmVhdHVyZS1yeC1ub3Rp
ZnkgPSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyDCoCDCoCDCoCBmZWF0dXJlLXNnID0gJnF1b3Q7MSZxdW90Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgwqAgwqAgwqAgZmVhdHVyZS1nc28tdGNwdjQgPSAm
cXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDC
oCDCoCDCoCBmZWF0dXJlLWdzby10Y3B2NiA9ICZxdW90OzEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIMKgIGZlYXR1cmUtaXB2Ni1jc3VtLW9m
ZmxvYWQgPSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyDCoCDCoGNvbnRyb2wgPSAmcXVvdDsmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgIMKgIHNodXRkb3duID0gJnF1b3Q7JnF1b3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDCoCDCoCBmZWF0dXJlLXBvd2Vy
b2ZmID0gJnF1b3Q7MSZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgwqAgwqAgZmVhdHVyZS1yZWJvb3QgPSAmcXVvdDsxJnF1b3Q7PGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
>     feature-suspend = ""
>     sysrq = ""
>     platform-feature-multiprocessor-suspend = "1"
>     platform-feature-xs_reset_watches = "1"
>    data = ""
>    drivers = ""
>    feature = ""
>    attr = ""
>    error = ""
>    domid = "1"
>    store = ""
>     port = "1"
>     ring-ref = "233473"
>    console = ""
>     backend = "/local/domain/0/backend/console/1/0"
>     backend-id = "0"
>     limit = "1048576"
>     type = "xenconsoled"
>     output = "pty"
>     tty = "/dev/pts/1"
>     port = "2"
>     ring-ref = "233472"
>     vnc-listen = "127.0.0.1"
>     vnc-port = "5900"
>    image = ""
>     device-model-pid = "788"
> vm = ""
>  d81ec5a9-5bf9-4f2b-89e8-0f60d6da948f = ""
>   name = "guest2"
>   uuid = "d81ec5a9-5bf9-4f2b-89e8-0f60d6da948f"
>   start_time = "1520600274.27"
> libxl = ""
>  1 = ""
>   device = ""
>    vbd = ""
>     51712 = ""
>      frontend = "/local/domain/1/device/vbd/51712"
>      backend = "/local/domain/0/backend/vbd/1/51712"
>      params = "/home/root/guest2/xen-guest-image-minimal-raspberrypi4-64.ext3"
>      script = "/etc/xen/scripts/block"
>      frontend-id = "1"
>      online = "1"
>      removable = "0"
>      bootable = "1"
>      state = "1"
>      dev = "xvda"
>      type = "phy"
>      mode = "w"
>      device-type = "disk"
>      discard-enable = "1"
>    vfb = ""
>     0 = ""
>      frontend = "/local/domain/1/device/vfb/0"
>      backend = "/local/domain/0/backend/vfb/1/0"
>      frontend-id = "1"
>      online = "1"
>      state = "1"
>      vnc = "1"
>      vnclisten = "127.0.0.1"
>      vncdisplay = "0"
>      vncunused = "1"
>      sdl = "0"
>      opengl = "0"
>    vkbd = ""
>     0 = ""
>      frontend = "/local/domain/1/device/vkbd/0"
>      backend = "/local/domain/0/backend/vkbd/1/0"
>      frontend-id = "1"
>      online = "1"
>      state = "1"
>    console = ""
>     0 = ""
>      frontend = "/local/domain/1/console"
>      backend = "/local/domain/0/backend/console/1/0"
>      frontend-id = "1"
>      online = "1"
>      state = "1"
>      protocol = "vt100"
>    vif = ""
>     0 = ""
>      frontend = "/local/domain/1/device/vif/0"
>      backend = "/local/domain/0/backend/vif/1/0"
>      frontend-id = "1"
>      online = "1"
>      state = "1"
>      script = "/etc/xen/scripts/vif-bridge"
>      mac = "e4:5f:01:cd:7b:dd"
>      bridge = "xenbr0"
>      handle = "0"
>      type = "vif"
>      hotplug-status = ""
>   type = "pvh"
>   dm-version = "qemu_xen"
> root@raspberrypi4-64:~/guest1#
>
> Any input as per above? Looking forward to hearing from you.
>
> Regards,
> Vipul Kumar
>
> On Wed, Oct 26, 2022 at 5:21 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> > Hi Vipul,
> >
> > If you look at the QEMU logs, it says:
> >
> > VNC server running on 127.0.0.1:5900
> >
> > That is the VNC server you need to connect to. So in theory:
> >
> >   vncviewer 127.0.0.1:5900
> >
> > should work correctly.
> >
> > If you have:
> >
> >   vfb = ["type=vnc"]
> >
> > in your xl config file and you have "fbdev" in your Linux guest, it
> > should work.
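For reference, here is a sketch of where that line sits in an xl guest config. Only the `vfb = ["type=vnc"]` line is from the thread; the guest name and the commented variant are illustrative, with the VFB_SPEC keys chosen to mirror the vfb entries visible in the xenstore-ls dump above:

```
# Sketch of an xl guest config fragment; only the vfb line is from the
# thread, the rest is illustrative.
name = "guest2"
vfb = [ "type=vnc" ]
# Hypothetical explicit variant using the xl.cfg(5) VFB_SPEC keys that
# appear in the xenstore dump above:
# vfb = [ "vnc=1,vnclisten=127.0.0.1,vncdisplay=0,vncunused=1" ]
```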
> >
> > If you connect to the VNC server but you get a black screen, it might be
> > a guest configuration issue. I would try with a simpler guest, text only
> > (no X11, no Wayland) and enable the fbdev console (fbcon). See
> > Documentation/fb/fbcon.rst in Linux. You should be able to see a
> > graphical console over VNC.
> >
> > If that works, then you know that the fbdev kernel driver (xen-fbfront)
> > works correctly.
> >
> > If it doesn't work, the output of "xenstore-ls" would be interesting.
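That check can be scripted; a minimal sketch that filters the console VNC keys out of the dump. The heredoc stands in for real `xenstore-ls` output (its lines are copied from the dump earlier in the thread) so the snippet is self-contained:

```shell
# Stand-in for `xenstore-ls` output; these console entries are copied
# from the dump earlier in the thread. In dom0 you would run xenstore-ls
# itself rather than a heredoc.
cat <<'EOF' > /tmp/xenstore-console.txt
type = "xenconsoled"
output = "pty"
tty = "/dev/pts/1"
vnc-listen = "127.0.0.1"
vnc-port = "5900"
EOF
# Filter for the VNC keys to see where QEMU's VNC server should listen.
grep -E 'vnc-listen|vnc-port' /tmp/xenstore-console.txt
```

With the dump above this prints the `vnc-listen = "127.0.0.1"` and `vnc-port = "5900"` lines, i.e. the address to hand to vncviewer.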
> >
> > Cheers,
> >
> > Stefano
> >
> > On Wed, 19 Oct 2022, Vipul Suneja wrote:
> > > Hi Stefano,
> > >
> > > Thanks for the response!
> > >
> > > I am following the same link you shared from the beginning. Tried the
> > > command "vncviewer localhost:0" in DOM0 but same issue "Can't open
> > > display", below are the logs:
> > >
> > > root@raspberrypi4-64:~# vncviewer localhost:0
> > >
> > > TigerVNC Viewer 64-bit v1.11.0
> > > Built on: 2020-09-08 12:16
> > > Copyright (C) 1999-2020 TigerVNC Team and many others (see README.rst)
> > > See https://www.tigervnc.org for information on TigerVNC.
> > > Can't open display:
> > >
> > > Below are the netstat logs, I couldn't see anything running at port
> > > 5900 or 5901:
> > >
> > > root@raspberrypi4-64:~# netstat -tuwx
> > > Active Internet connections (w/o servers)
> > > Proto Recv-Q Send-Q Local Address       Foreign Address       State
> > > tcp        0    164 192.168.1.39:ssh    192.168.1.38:37472    ESTABLISHED
> > > Active UNIX domain sockets (w/o servers)
> > > Proto RefCnt Flags       Type       State         I-Node Path
> > > unix  8      [ ]         DGRAM                    CONNECTED 10565 /dev/log
> > > unix  3      [ ]         STREAM     CONNECTED     10891  /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED     13791
> > > unix  3      [ ]         STREAM     CONNECTED     10843  /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED     10573  /var/run/xenstored/socket
> > > unix  2      [ ]         DGRAM      CONNECTED     14510
> > > unix  3      [ ]         STREAM     CONNECTED     13249
> > > unix  2      [ ]         DGRAM      CONNECTED     13887
> > > unix  2      [ ]         DGRAM      CONNECTED     10599
> > > unix  3      [ ]         STREAM     CONNECTED     14005
> > > unix  3      [ ]         STREAM     CONNECTED     13258
> > > unix  3      [ ]         STREAM     CONNECTED     13248
> > > unix  3      [ ]         STREAM     CONNECTED     14003
> > > unix  3      [ ]         STREAM     CONNECTED     10572  /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED     10786  /var/run/xenstored/socket
> > > unix  3      [ ]         DGRAM      CONNECTED     13186
> > > unix  3      [ ]         STREAM     CONNECTED     10864  /var/run/xenstored/socket
> > > unix  3      [ ]         STREAM     CONNECTED     10812  /var/run/xenstored/socket
> > > unix  2      [ ]
IMKgIERHUkFNIMKgIMKgIMKgQ09OTkVDVEVEIMKgIMKgIMKgMTQwODM8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IHVuaXggwqAzIMKgIMKg
IMKgWyBdIMKgIMKgIMKgIMKgIFNUUkVBTSDCoCDCoCBDT05ORUNURUQgwqAgwqAgwqAxMDgxMzxi
cj4NCiZndDvCoCDCoCDCoCDCoC92YXIvcnVuL3hlbnN0b3JlZC9zb2NrZXQ8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IHVuaXggwqAzIMKg
IMKgIMKgWyBdIMKgIMKgIMKgIMKgIFNUUkVBTSDCoCDCoCBDT05ORUNURUQgwqAgwqAgwqAxNDA2
ODxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgdW5peCDCoDMgwqAgwqAgwqBbIF0gwqAgwqAgwqAgwqAgU1RSRUFNIMKgIMKgIENPTk5FQ1RF
RCDCoCDCoCDCoDEzMjU2PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyB1bml4IMKgMyDCoCDCoCDCoFsgXSDCoCDCoCDCoCDCoCBTVFJFQU0g
wqAgwqAgQ09OTkVDVEVEIMKgIMKgIMKgMTA1NzE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAvdmFyL3J1
bi94ZW5zdG9yZWQvc29ja2V0PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyB1bml4IMKgMyDCoCDCoCDCoFsgXSDCoCDCoCDCoCDCoCBTVFJF
QU0gwqAgwqAgQ09OTkVDVEVEIMKgIMKgIMKgMTA4NDI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IHVuaXggwqAzIMKgIMKgIMKgWyBdIMKg
IMKgIMKgIMKgIFNUUkVBTSDCoCDCoCBDT05ORUNURUQgwqAgwqAgwqAxMzk4NTxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgdW5peCDCoDMg
wqAgwqAgwqBbIF0gwqAgwqAgwqAgwqAgREdSQU0gwqAgwqAgwqBDT05ORUNURUQgwqAgwqAgwqAx
MzE4NTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgdW5peCDCoDIgwqAgwqAgwqBbIF0gwqAgwqAgwqAgwqAgU1RSRUFNIMKgIMKgIENPTk5F
Q1RFRCDCoCDCoCDCoDEzODg0PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyB1bml4IMKgMiDCoCDCoCDCoFsgXSDCoCDCoCDCoCDCoCBER1JB
TSDCoCDCoCDCoENPTk5FQ1RFRCDCoCDCoCDCoDE0NTI4PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyB1bml4IMKgMiDCoCDCoCDCoFsgXSDC
oCDCoCDCoCDCoCBER1JBTSDCoCDCoCDCoENPTk5FQ1RFRCDCoCDCoCDCoDEzNzg1PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyB1bml4IMKg
MyDCoCDCoCDCoFsgXSDCoCDCoCDCoCDCoCBTVFJFQU0gwqAgwqAgQ09OTkVDVEVEIMKgIMKgIMKg
MTQwMzQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBBdHRhY2hpbmcgeGVuIGxvZyBmaWxlcyBvZiAvdmFyL2xvZy94ZW4uPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBJIGRpZG4m
IzM5O3QgZ2V0IHRoZSByb2xlIG9mIFFFTVUgaGVyZSBiZWNhdXNlIGFzIG1lbnRpb25lZCBlYXJs
aWVyLCBJIGFtIHBvcnRpbmc8YnI+DQomZ3Q7wqAgwqAgwqAgwqBpbjxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoHJhc3BiZXJyeXBpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgNEIuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFZpcHVsIEt1bWFyPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgT24g
V2VkLCBPY3QgMTksIDIwMjIgYXQgMTI6NDMgQU0gU3RlZmFubyBTdGFiZWxsaW5pICZsdDs8YSBo
cmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoHdyb3RlOjxicj4N
>> It usually works the way it is described in the guide:
>>
>> https://www.virtuatopia.com/index.php?title=Configuring_a_VNC_based_Graphical_Console_for_a_Xen_Paravirtualized_domainU_Guest
>>
>> You don't need to install any VNC-related server software because it is
>> already provided by Xen (to be precise it is provided by QEMU working
>> together with Xen.)
>>
>> You only need the vnc client in dom0 so that you can connect, but you
>> could also run the vnc client outside from another host. So basically
>> the following should work when executed in Dom0 after creating DomU:
>>
>>   vncviewer localhost:0
>>
>> Can you attach the Xen and QEMU logs (/var/log/xen/*)? And also use
>> netstat -taunp to check if there is anything running at port 5900 or
>> 5901?
>>
>> Cheers,
>>
>> Stefano
>>
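[For reference, a sketch of the guest configuration this advice implies. The option names are the standard xl.cfg vfb keys; the listen address and display number below are illustrative examples, not values taken from this thread.]

```
# Illustrative xl guest config fragment: export the paravirtual framebuffer
# over VNC via the QEMU instance running in dom0, instead of 'sdl=1'.
# vncdisplay=0 corresponds to TCP port 5900, i.e. "vncviewer localhost:0".
vfb = [ 'vnc=1, vnclisten=0.0.0.0, vncdisplay=0' ]
```

[With a stanza like this, `vncviewer localhost:0` in dom0, or the same against dom0's IP from another host, is expected to attach to the guest framebuffer, assuming the domU kernel provides the xen-fbfront device.]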
>> On Tue, 18 Oct 2022, Vipul Suneja wrote:
>>> Hi Stefano,
>>>
>>> Thanks for the response!
>>>
>>> I could install tigerVNC, x11vnc & libvncserver in Dom0
>>> xen-image-minimal but only manage to install libvncserver (couldn't
>>> install tigervnc & x11vnc because of x11 support missing, it's
>>> wayland) in DOMU custom graphical image. I tried running vncviewer
>>> with IP address & port in dom0 to access the domu graphical image
>>> display as per below commands.
>>>
>>>   vncviewer 192.168.1.42:5901
>>>
>>> But it showing can't open display, below are the logs:
>>>
>>> root@raspberrypi4-64:~/guest1# vncviewer 192.168.1.42:5901
>>>
>>> TigerVNC Viewer 64-bit v1.11.0
>>> Built on: 2020-09-08 12:16
>>> Copyright (C) 1999-2020 TigerVNC Team and many others (see README.rst)
>>> See https://www.tigervnc.org for information on TigerVNC.
>>> Can't open display:
>>> root@raspberrypi4-64:~/guest1#
>>>
>>> I am not exactly sure what the issue is but I thought only
>>> libvncserver in DOMU could work to get access but it did not work.
>>> If TigerVNC is the issue here then is there any other VNC source
>>> which could be installed for both x11 & wayland supported images?
>>>
>>> Regards,
>>> Vipul Kumar
>>>
>>> On Tue, Oct 18, 2022 at 2:40 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>> VNC is typically easier to setup, because SDL needs extra libraries
>>>> at build time and runtime. If QEMU is built without SDL support it
>>>> won't start when you ask for SDL.
>>>>
>>>> VNC should work with both x11 and wayland in your domU. It doesn't
>>>> work at the x11 level, it exposes a special fbdev device in your
>>>> domU that should work with:
>>>> - a graphical console in Linux domU
>>>> - x11
>>>> - wayland (but I haven't tested this so I am not 100% sure about it)
>>>>
>>>> When you say "it doesn't work", what do you mean? Do you get a
>>>> black window?
>>>>
>>>> You need CONFIG_XEN_FBDEV_FRONTEND in Linux domU
>>>> (drivers/video/fbdev/xen-fbfront.c). I would try to get a graphical
>>>> text console up and running in your domU before attempting
>>>> x11/wayland.
>>>>
>>>> Cheers,
>>>>
>>>> Stefano
>>>>
>>>> On Mon, 17 Oct 2022, Vipul Suneja wrote:
>>>>> Hi,
>>>>> Thanks!
>>>>>
>>>>> I have ported xen minimal image as DOM0 & custom wayland GUI based
>>>>> image as DOMU in raspberry pi4B. I am trying to make GUI display up
>>>>> for guest machine. I tried using sdl, included below line in
>>>>> guest.conf file
>>>>> vfb= [ 'sdl=1' ]
>>>>>
>>>>> But it is throwing below error:
>>>>>
>>>>> root@raspberrypi4-64:~/guest1# xl create -c guest1.cfg
>>>>> Parsing config from guest1.cfg
>>>>> libxl: error: libxl_qmp.c:1400:qmp_ev_fd_callback: Domain 3:error on QMP socket: Connection reset by peer
>>>>> libxl: error: libxl_qmp.c:1439:qmp_ev_fd_callback: Domain 3:Error happened with the QMP connection to QEMU
>>>>> libxl: error: libxl_dm.c:3351:device_model_postconfig_done: Domain 3:Post DM startup configs failed,
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHJjPS0yNjxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgbGlieGw6IGVycm9yOiBsaWJ4bF9jcmVhdGUuYzoxODY3OmRvbWNyZWF0
ZV9kZXZtb2RlbF9zdGFydGVkOjxicj4NCiZndDvCoCDCoCDCoCDCoERvbWFpbjxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoDM6ZGV2aWNlPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgbW9kZWw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBkaWQgbm90PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgc3RhcnQ6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgLTI2PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBsaWJ4bDogZXJyb3I6IGxpYnhs
X2FvdXRpbHMuYzo2NDY6bGlieGxfX2tpbGxfeHNfcGF0aDogRGV2aWNlPGJyPg0KJmd0O8KgIMKg
IMKgIMKgTW9kZWw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBhbHJlYWR5PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZXhpdGVkPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBsaWJ4bDogZXJyb3I6IGxpYnhsX2RvbWFpbi5j
OjExODM6bGlieGxfX2Rlc3Ryb3lfZG9taWQ6PGJyPg0KJmd0O8KgIMKgIMKgIMKgRG9tYWluPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgMzpOb24tZXhpc3RhbnQ8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBkb21haW48YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IGxpYnhsOiBlcnJvcjogbGlieGxfZG9tYWluLmM6MTEzNzpk
b21haW5fZGVzdHJveV9jYWxsYmFjazo8YnI+DQomZ3Q7wqAgwqAgwqAgwqBEb21haW48YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAzOlVuYWJsZSB0bzxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGRlc3Ryb3k8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBndWVzdDxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgbGlieGw6IGVycm9yOiBsaWJ4bF9kb21haW4u
YzoxMDY0OmRvbWFpbl9kZXN0cm95X2NiOiBEb21haW48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAzOkRlc3RydWN0aW9uIG9mPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgZG9tYWluPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZmFpbGVkPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgQW5vdGhlciB3
YXkgaXMgVk5DLCBpIGNvdWxkIGluc3RhbGwgdGlnZXJ2bmMgaW4gRE9NMCBidXQgc2FtZTxicj4N
CiZndDvCoCDCoCDCoCDCoGk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBjb3Vs
ZG4mIzM5O3QgaW48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqBndWVzdDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoG1hY2hpbmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBiZWNhdXNl
IGl0PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
ZG9lc24mIzM5O3Qgc3VwcG9ydDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHgxMShzdXBwb3J0
cyB3YXlsYW5kPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBvbmx5KS4gSSBhbSBjb21w
bGV0ZWx5IGJsb2NrZWQgaGVyZSwgTmVlZCB5b3VyIHN1cHBvcnQgdG88YnI+DQomZ3Q7wqAgwqAg
wqAgwqBlbmFibGUgdGhlPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZGlzcGxh
eTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHVwLjxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgQW55IGFsdGVybmF0aXZlIG9mIFZOQyB3aGlj
aCBjb3VsZCB3b3JrIGluIGJvdGggeDExICZhbXA7IHdheWxhbmQ8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqBzdXBwb3J0ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBpbWFnZXM/PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgQW55IGlucHV0IG9uIFZOQywgU0RMIG9y
IGFueSBvdGhlciB3YXkgdG8gcHJvY2VlZCBvbiB0aGlzPzxicj4NCiZndDvCoCDCoCDCoCDCoExv
b2tpbmc8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBmb3J3YXJkIHRvPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgaGVhcmluZzxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoGZyb208YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB5b3UuPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgUmVnYXJkcyw8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFZpcHVsIEt1bWFyPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0OyA8YnI+DQomZ3Q7IDxicj4NCiZndDsgPC9ibG9ja3F1
b3RlPjwvZGl2Pg0KPC9ibG9ja3F1b3RlPjwvZGl2Pg0K


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 05:28:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 05:28:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474252.735315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF7BI-000546-Gu; Tue, 10 Jan 2023 05:28:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474252.735315; Tue, 10 Jan 2023 05:28:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF7BI-00053z-EC; Tue, 10 Jan 2023 05:28:16 +0000
Received: by outflank-mailman (input) for mailman id 474252;
 Tue, 10 Jan 2023 05:28:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VHex=5H=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pF7BG-00053t-Py
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 05:28:15 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 928f8a23-90a7-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 06:28:12 +0100 (CET)
Received: from mail-qv1-f72.google.com (mail-qv1-f72.google.com
 [209.85.219.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-480-jqTb_xDJMKy4wR-t5aJCAw-1; Tue, 10 Jan 2023 00:28:09 -0500
Received: by mail-qv1-f72.google.com with SMTP id
 pm24-20020ad446d8000000b0053233e46a00so1649774qvb.23
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 21:28:09 -0800 (PST)
Received: from redhat.com ([191.101.160.154]) by smtp.gmail.com with ESMTPSA id
 l13-20020ac84a8d000000b003a6947863e1sm5557653qtq.11.2023.01.09.21.28.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Jan 2023 21:28:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 928f8a23-90a7-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673328490;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0ansxmlLRWtTb0gFcB9ey2Q9eN1Orq6xeC0CbSQE8rs=;
	b=ip2tYMMYGm1CRwj7DEnMyMvs5GOWRh2MFy+TJYwxwRRpD3NjYwlE6NJwexL2oLdidG3WZ9
	H4c+DIVBGQnboiHwq5Db1lK8CsA9qFD5hGWEP0OZW8o0at+bgPToa0Ew5Sezpkm/jkVE3n
	evH00InwQ+qL6i3fMNb/yaSwijvD5Vw=
X-MC-Unique: jqTb_xDJMKy4wR-t5aJCAw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=0ansxmlLRWtTb0gFcB9ey2Q9eN1Orq6xeC0CbSQE8rs=;
        b=m9cMfh+3G901mMBbtyyYCXxbtP0XlukoDd/H5a+imf8Lq376nAXvt03GPwvyMfWGDB
         QjsjOsl7kq9fPHVaEMqmTN6n14FCpAGUp1ycUvcpGQ0cgpvWd7P3+YHTmiMqSvM7Uy2+
         AJOCDgVQPTuihj30/zLsGOPwwZSVIVes73AWTv6AQ3OqGWC4vnQzozsEf/OIZcgHgSnx
         nxPkphYFYcIymofT+Jl96vragO86bzftwZhjmXaUnqinaUSm9actFtQnxleekPtfV0FX
         bHthArgaqYRjaRNbMjjlHpRk5lNB3oX15nBEH0g2GfFTGMFCQdvLiB0TXxjdZNYiqN1O
         Qs5A==
X-Gm-Message-State: AFqh2kowhEQ4A8LZEDUXNiydosA4h52XxNj9imcj1avj/JxWWG1s8hTz
	QZtVMVneoSq3C9K3wgmxW8cXjHqZvgz7CyDc2HrfyEuxA5n3EKIhFiYjzVloXlf9DEJnwKV2mgu
	aQG+x/P9jGYTMFwzq4GqpK2bM0nU=
X-Received: by 2002:ac8:704b:0:b0:3a7:e110:81e5 with SMTP id y11-20020ac8704b000000b003a7e11081e5mr95042235qtm.53.1673328486968;
        Mon, 09 Jan 2023 21:28:06 -0800 (PST)
X-Google-Smtp-Source: AMrXdXucbIbhjgPdiRynWq9rFbx7j1yJ4QMzUzRKinQH4QGXmaDZxzMzBDCzNEHfjSWPBWUViD+pJA==
X-Received: by 2002:ac8:704b:0:b0:3a7:e110:81e5 with SMTP id y11-20020ac8704b000000b003a7e11081e5mr95042217qtm.53.1673328486642;
        Mon, 09 Jan 2023 21:28:06 -0800 (PST)
Date: Tue, 10 Jan 2023 00:27:59 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230110002712-mutt-send-email-mst@kernel.org>
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
 <20230109183132-mutt-send-email-mst@kernel.org>
 <aacffaa2-1e86-1392-8302-484248b893c4@aol.com>
MIME-Version: 1.0
In-Reply-To: <aacffaa2-1e86-1392-8302-484248b893c4@aol.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Mon, Jan 09, 2023 at 07:05:35PM -0500, Chuck Zmudzinski wrote:
> On 1/9/23 6:33 PM, Michael S. Tsirkin wrote:
> > On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
> >> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> >> as noted in docs/igd-assign.txt in the Qemu source code.
> >> 
> >> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> >> Intel IGD passthrough to the guest with the Qemu upstream device model,
> >> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> >> a different slot. This problem often prevents the guest from booting.
> >> 
> >> The only available workaround is not good: Configure Xen HVM guests to use
> >> the old and no longer maintained Qemu traditional device model available
> >> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> >> 
> >> To implement this feature in the Qemu upstream device model for Xen HVM
> >> guests, introduce the following new functions, types, and macros:
> >> 
> >> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> >> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> >> * typedef XenPTQdevRealize function pointer
> >> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> >> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> >> 
> >> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> >> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> >> the xl toolstack with the gfx_passthru option enabled, which sets the
> >> igd-passthru=on option to Qemu for the Xen HVM machine type.
> >> 
> >> The new xen_igd_reserve_slot function also needs to be implemented in
> >> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> >> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> >> in which case it does nothing.
> >> 
> >> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> >> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> >> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> >> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> >> 
> >> Move the call to xen_host_pci_device_get, and the associated error
> >> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> >> initialize the device class and vendor values which enables the checks for
> >> the Intel IGD to succeed. The verification that the host device is an
> >> Intel IGD to be passed through is done by checking the domain, bus, slot,
> >> and function values as well as by checking that gfx_passthru is enabled,
> >> the device class is VGA, and the device vendor is Intel.
> >> 
> >> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> >> ---
> >> Notes that might be helpful to reviewers of patched code in hw/xen:
> >> 
> >> The new functions and types are based on recommendations from Qemu docs:
> >> https://qemu.readthedocs.io/en/latest/devel/qom.html
> >> 
> >> Notes that might be helpful to reviewers of patched code in hw/i386:
> >> 
> >> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> >> not affect builds that do not have CONFIG_XEN defined.
> > 
> > I'm not sure how you can claim that.
> 
> I mean the small patch to pc_piix.c in this patch sits
> between an "#ifdef CONFIG_XEN" and the corresponding
> "#endif" so the preprocessor will exclude it when CONFIG_XEN
> is not defined. In other words, my patch is part of the
> xen-specific code in pc_piix.c. Or am I missing something?
> 
> 
> > 
> > ...
> > 
> >> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> >> index b48047f50c..34a9736b5e 100644
> >> --- a/hw/i386/pc_piix.c
> >> +++ b/hw/i386/pc_piix.c
> >> @@ -32,6 +32,7 @@
> >>  #include "hw/i386/pc.h"
> >>  #include "hw/i386/apic.h"
> >>  #include "hw/pci-host/i440fx.h"
> >> +#include "hw/rtc/mc146818rtc.h"
> >>  #include "hw/southbridge/piix.h"
> >>  #include "hw/display/ramfb.h"
> >>  #include "hw/firmware/smbios.h"
> >> @@ -40,16 +41,16 @@
> >>  #include "hw/usb.h"
> >>  #include "net/net.h"
> >>  #include "hw/ide/pci.h"
> >> -#include "hw/ide/piix.h"
> >>  #include "hw/irq.h"
> >>  #include "sysemu/kvm.h"
> >>  #include "hw/kvm/clock.h"
> >>  #include "hw/sysbus.h"
> >> +#include "hw/i2c/i2c.h"
> >>  #include "hw/i2c/smbus_eeprom.h"
> >>  #include "hw/xen/xen-x86.h"
> >> +#include "hw/xen/xen.h"
> >>  #include "exec/memory.h"
> >>  #include "hw/acpi/acpi.h"
> >> -#include "hw/acpi/piix4.h"
> >>  #include "qapi/error.h"
> >>  #include "qemu/error-report.h"
> >>  #include "sysemu/xen.h"
> >> @@ -66,6 +67,7 @@
> >>  #include "kvm/kvm-cpu.h"
> >>  
> >>  #define MAX_IDE_BUS 2
> >> +#define XEN_IOAPIC_NUM_PIRQS 128ULL
> >>  
> >>  #ifdef CONFIG_IDE_ISA
> >>  static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
> >> @@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
> >>  static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
> >>  #endif
> >>  
> >> +/*
> >> + * Return the global irq number corresponding to a given device irq
> >> + * pin. We could also use the bus number to have a more precise mapping.
> >> + */
> >> +static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
> >> +{
> >> +    int slot_addend;
> >> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
> >> +    return (pci_intx + slot_addend) & 3;
> >> +}
> >> +
> >> +static void piix_intx_routing_notifier_xen(PCIDevice *dev)
> >> +{
> >> +    int i;
> >> +
> >> +    /* Scan for updates to PCI link routes (0x60-0x63). */
> >> +    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
> >> +        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
> >> +        if (v & 0x80) {
> >> +            v = 0;
> >> +        }
> >> +        v &= 0xf;
> >> +        xen_set_pci_link_route(i, v);
> >> +    }
> >> +}
> >> +
> >>  /* PC hardware initialisation */
> >>  static void pc_init1(MachineState *machine,
> >>                       const char *host_type, const char *pci_type)
> >> @@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
> >>      MemoryRegion *system_io = get_system_io();
> >>      PCIBus *pci_bus;
> >>      ISABus *isa_bus;
> >> -    int piix3_devfn = -1;
> >> +    Object *piix4_pm;
> >>      qemu_irq smi_irq;
> >>      GSIState *gsi_state;
> >>      BusState *idebus[MAX_IDE_BUS];
> >> @@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
> >>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
> >>  
> >>      if (pcmc->pci_enabled) {
> >> -        PIIX3State *piix3;
> >> +        DeviceState *dev;
> >>          PCIDevice *pci_dev;
> >> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> >> -                                         : TYPE_PIIX3_DEVICE;
> >> +        int i;
> >>  
> >>          pci_bus = i440fx_init(pci_type,
> >>                                i440fx_host,
> >> @@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
> >>                                x86ms->below_4g_mem_size,
> >>                                x86ms->above_4g_mem_size,
> >>                                pci_memory, ram_memory);
> >> +        pci_bus_map_irqs(pci_bus,
> >> +                         xen_enabled() ? xen_pci_slot_get_pirq
> >> +                                       : pci_slot_get_pirq);
> >>          pcms->bus = pci_bus;
> >>  
> >> -        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
> >> -        piix3 = PIIX3_PCI_DEVICE(pci_dev);
> >> -        piix3->pic = x86ms->gsi;
> >> -        piix3_devfn = piix3->dev.devfn;
> >> -        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
> >> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> >> +        object_property_set_bool(OBJECT(pci_dev), "has-usb",
> >> +                                 machine_usb(machine), &error_abort);
> >> +        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> >> +                                 x86_machine_is_acpi_enabled(x86ms),
> >> +                                 &error_abort);
> >> +        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
> >> +        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
> >> +                                 x86_machine_is_smm_enabled(x86ms),
> >> +                                 &error_abort);
> >> +        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
> >> +
> >> +        if (xen_enabled()) {
> >> +            pci_device_set_intx_routing_notifier(
> >> +                        pci_dev, piix_intx_routing_notifier_xen);
> >> +
> >> +            /*
> >> +             * Xen supports additional interrupt routes from the PCI devices to
> >> +             * the IOAPIC: the four pins of each PCI device on the bus are also
> >> +             * connected to the IOAPIC directly.
> >> +             * These additional routes can be discovered through ACPI.
> >> +             */
> >> +            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
> >> +                         XEN_IOAPIC_NUM_PIRQS);
> >> +        }
> >> +
> >> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
> >> +        for (i = 0; i < ISA_NUM_IRQS; i++) {
> >> +            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
> >> +        }
> >> +        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
> >> +        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
> >> +                                                             "rtc"));
> >> +        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
> >> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
> >> +        pci_ide_create_devs(PCI_DEVICE(dev));
> >> +        idebus[0] = qdev_get_child_bus(dev, "ide.0");
> >> +        idebus[1] = qdev_get_child_bus(dev, "ide.1");
> >>      } else {
> >>          pci_bus = NULL;
> >> +        piix4_pm = NULL;
> >>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
> >>                                &error_abort);
> >> +        isa_bus_irqs(isa_bus, x86ms->gsi);
> >> +
> >> +        rtc_state = isa_new(TYPE_MC146818_RTC);
> >> +        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
> >> +        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
> >> +
> >>          i8257_dma_init(isa_bus, 0);
> >>          pcms->hpet_enabled = false;
> >> +        idebus[0] = NULL;
> >> +        idebus[1] = NULL;
> >>      }
> >> -    isa_bus_irqs(isa_bus, x86ms->gsi);
> >>  
> >>      if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
> >>          pc_i8259_create(isa_bus, gsi_state->i8259_irq);
> >> @@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
> >>      }
> >>  
> >>      /* init basic PC hardware */
> >> -    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
> >> +    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
> >>                           0x4);
> >>  
> >>      pc_nic_init(pcmc, isa_bus, pci_bus);
> >>  
> >>      if (pcmc->pci_enabled) {
> >> -        PCIDevice *dev;
> >> -
> >> -        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
> >> -        pci_ide_create_devs(dev);
> >> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> >> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
> >>          pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
> >>      }
> >>  #ifdef CONFIG_IDE_ISA
> >> @@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
> >>      }
> >>  #endif
> >>  
> >> -    if (pcmc->pci_enabled && machine_usb(machine)) {
> >> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
> >> -    }
> >> -
> >> -    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
> >> -        PCIDevice *piix4_pm;
> >> -
> >> +    if (piix4_pm) {
> >>          smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
> >> -        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
> >> -        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
> >> -        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
> >> -                          x86_machine_is_smm_enabled(x86ms));
> >> -        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
> >>  
> >> -        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
> >>          qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
> >>          pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
> >>          /* TODO: Populate SPD eeprom data.  */
> >> @@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
> >>                                   object_property_allow_set_link,
> >>                                   OBJ_PROP_LINK_STRONG);
> >>          object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
> >> -                                 OBJECT(piix4_pm), &error_abort);
> >> +                                 piix4_pm, &error_abort);
> >>      }
> >>  
> >>      if (machine->nvdimms_state->is_enabled) {
> >> @@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
> >>      }
> >>  
> >>      pc_xen_hvm_init_pci(machine);
> >> +    if (xen_igd_gfx_pt_enabled()) {
> >> +        xen_igd_reserve_slot(pcms->bus);
> >> +    }
> >>      pci_create_simple(pcms->bus, -1, "xen-platform");
> >>  }
> >>  #endif
> >> @@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
> >>      pc_i440fx_machine_options(m);
> >>      m->alias = "pc";
> >>      m->is_default = true;
> >> +#ifdef CONFIG_MICROVM_DEFAULT
> >> +    m->is_default = false;
> >> +#else
> >> +    m->is_default = true;
> >> +#endif
> >>  }
> >>  
> >>  DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,
> > 
> > 
> > Lots of changes here not guarded by CONFIG_XEN.
> > 
> 
> What diff is this? How is my patch related to it?


This is what you posted, take a look:
https://lore.kernel.org/all/8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com/


-- 
MST



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 05:34:08 2023
Date: Tue, 10 Jan 2023 00:33:48 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230110002809-mutt-send-email-mst@kernel.org>
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
 <20230109183132-mutt-send-email-mst@kernel.org>
 <aacffaa2-1e86-1392-8302-484248b893c4@aol.com>
 <1542de4a-3b42-0ca4-777a-ce01f75b5532@aol.com>
MIME-Version: 1.0
In-Reply-To: <1542de4a-3b42-0ca4-777a-ce01f75b5532@aol.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Mon, Jan 09, 2023 at 09:11:22PM -0500, Chuck Zmudzinski wrote:
> On 1/9/2023 7:05 PM, Chuck Zmudzinski wrote:
> > On 1/9/23 6:33 PM, Michael S. Tsirkin wrote:
> > > On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
> > >> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > >> as noted in docs/igd-assign.txt in the Qemu source code.
> > >> 
> > >> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > >> Intel IGD passthrough to the guest with the Qemu upstream device model,
> > >> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > >> a different slot. This problem often prevents the guest from booting.
> > >> 
> > >> The only available workaround is not good: Configure Xen HVM guests to use
> > >> the old and no longer maintained Qemu traditional device model available
> > >> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > >> 
> > >> To implement this feature in the Qemu upstream device model for Xen HVM
> > >> guests, introduce the following new functions, types, and macros:
> > >> 
> > >> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > >> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > >> * typedef XenPTQdevRealize function pointer
> > >> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > >> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > >> 
> > >> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > >> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > >> the xl toolstack with the gfx_passthru option enabled, which sets the
> > >> igd-passthru=on option to Qemu for the Xen HVM machine type.
> > >> 
> > >> The new xen_igd_reserve_slot function also needs to be implemented in
> > >> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > >> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > >> in which case it does nothing.
> > >> 
> > >> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > >> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > >> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > >> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > >> 
> > >> Move the call to xen_host_pci_device_get, and the associated error
> > >> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > >> initialize the device class and vendor values which enables the checks for
> > >> the Intel IGD to succeed. The verification that the host device is an
> > >> Intel IGD to be passed through is done by checking the domain, bus, slot,
> > >> and function values as well as by checking that gfx_passthru is enabled,
> >> the device class is VGA, and the device vendor is Intel.
> > >> 
> > >> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> > >> ---
> > >> Notes that might be helpful to reviewers of patched code in hw/xen:
> > >> 
> > >> The new functions and types are based on recommendations from Qemu docs:
> > >> https://qemu.readthedocs.io/en/latest/devel/qom.html
> > >> 
> > >> Notes that might be helpful to reviewers of patched code in hw/i386:
> > >> 
> > >> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> > >> not affect builds that do not have CONFIG_XEN defined.
> > > 
> > > I'm not sure how you can claim that.
> >
> > I mean the small patch to pc_piix.c in this patch sits
> > between an "#ifdef CONFIG_XEN" and the corresponding
> > "#endif" so the preprocessor will exclude it when CONFIG_XEN
> > is not defined. In other words, my patch is part of the
> > xen-specific code in pc_piix.c. Or am I missing something?
> >
> >
> > > 
> > > ...
> > > 
> > >> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> > >> index b48047f50c..34a9736b5e 100644
> > >> --- a/hw/i386/pc_piix.c
> > >> +++ b/hw/i386/pc_piix.c
> > >> @@ -32,6 +32,7 @@
> > >>  #include "hw/i386/pc.h"
> > >>  #include "hw/i386/apic.h"
> > >>  #include "hw/pci-host/i440fx.h"
> > >> +#include "hw/rtc/mc146818rtc.h"
> > >>  #include "hw/southbridge/piix.h"
> > >>  #include "hw/display/ramfb.h"
> > >>  #include "hw/firmware/smbios.h"
> > >> @@ -40,16 +41,16 @@
> > >>  #include "hw/usb.h"
> > >>  #include "net/net.h"
> > >>  #include "hw/ide/pci.h"
> > >> -#include "hw/ide/piix.h"
> > >>  #include "hw/irq.h"
> > >>  #include "sysemu/kvm.h"
> > >>  #include "hw/kvm/clock.h"
> > >>  #include "hw/sysbus.h"
> > >> +#include "hw/i2c/i2c.h"
> > >>  #include "hw/i2c/smbus_eeprom.h"
> > >>  #include "hw/xen/xen-x86.h"
> > >> +#include "hw/xen/xen.h"
> > >>  #include "exec/memory.h"
> > >>  #include "hw/acpi/acpi.h"
> > >> -#include "hw/acpi/piix4.h"
> > >>  #include "qapi/error.h"
> > >>  #include "qemu/error-report.h"
> > >>  #include "sysemu/xen.h"
> > >> @@ -66,6 +67,7 @@
> > >>  #include "kvm/kvm-cpu.h"
> > >>  
> > >>  #define MAX_IDE_BUS 2
> > >> +#define XEN_IOAPIC_NUM_PIRQS 128ULL
> > >>  
> > >>  #ifdef CONFIG_IDE_ISA
> > >>  static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
> > >> @@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
> > >>  static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
> > >>  #endif
> > >>  
> > >> +/*
> > >> + * Return the global irq number corresponding to a given device irq
> > >> + * pin. We could also use the bus number to have a more precise mapping.
> > >> + */
> > >> +static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
> > >> +{
> > >> +    int slot_addend;
> > >> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
> > >> +    return (pci_intx + slot_addend) & 3;
> > >> +}
> > >> +
> > >> +static void piix_intx_routing_notifier_xen(PCIDevice *dev)
> > >> +{
> > >> +    int i;
> > >> +
> > >> +    /* Scan for updates to PCI link routes (0x60-0x63). */
> > >> +    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
> > >> +        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
> > >> +        if (v & 0x80) {
> > >> +            v = 0;
> > >> +        }
> > >> +        v &= 0xf;
> > >> +        xen_set_pci_link_route(i, v);
> > >> +    }
> > >> +}
> > >> +
> > >>  /* PC hardware initialisation */
> > >>  static void pc_init1(MachineState *machine,
> > >>                       const char *host_type, const char *pci_type)
> > >> @@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
> > >>      MemoryRegion *system_io = get_system_io();
> > >>      PCIBus *pci_bus;
> > >>      ISABus *isa_bus;
> > >> -    int piix3_devfn = -1;
> > >> +    Object *piix4_pm;
> > >>      qemu_irq smi_irq;
> > >>      GSIState *gsi_state;
> > >>      BusState *idebus[MAX_IDE_BUS];
> > >> @@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
> > >>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
> > >>  
> > >>      if (pcmc->pci_enabled) {
> > >> -        PIIX3State *piix3;
> > >> +        DeviceState *dev;
> > >>          PCIDevice *pci_dev;
> > >> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> > >> -                                         : TYPE_PIIX3_DEVICE;
> > >> +        int i;
> > >>  
> > >>          pci_bus = i440fx_init(pci_type,
> > >>                                i440fx_host,
> > >> @@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
> > >>                                x86ms->below_4g_mem_size,
> > >>                                x86ms->above_4g_mem_size,
> > >>                                pci_memory, ram_memory);
> > >> +        pci_bus_map_irqs(pci_bus,
> > >> +                         xen_enabled() ? xen_pci_slot_get_pirq
> > >> +                                       : pci_slot_get_pirq);
> > >>          pcms->bus = pci_bus;
> > >>  
> > >> -        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
> > >> -        piix3 = PIIX3_PCI_DEVICE(pci_dev);
> > >> -        piix3->pic = x86ms->gsi;
> > >> -        piix3_devfn = piix3->dev.devfn;
> > >> -        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
> > >> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> > >> +        object_property_set_bool(OBJECT(pci_dev), "has-usb",
> > >> +                                 machine_usb(machine), &error_abort);
> > >> +        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> > >> +                                 x86_machine_is_acpi_enabled(x86ms),
> > >> +                                 &error_abort);
> > >> +        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
> > >> +        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
> > >> +                                 x86_machine_is_smm_enabled(x86ms),
> > >> +                                 &error_abort);
> > >> +        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
> > >> +
> > >> +        if (xen_enabled()) {
> > >> +            pci_device_set_intx_routing_notifier(
> > >> +                        pci_dev, piix_intx_routing_notifier_xen);
> > >> +
> > >> +            /*
> > >> +             * Xen supports additional interrupt routes from the PCI devices to
> > >> +             * the IOAPIC: the four pins of each PCI device on the bus are also
> > >> +             * connected to the IOAPIC directly.
> > >> +             * These additional routes can be discovered through ACPI.
> > >> +             */
> > >> +            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
> > >> +                         XEN_IOAPIC_NUM_PIRQS);
> > >> +        }
> > >> +
> > >> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
> > >> +        for (i = 0; i < ISA_NUM_IRQS; i++) {
> > >> +            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
> > >> +        }
> > >> +        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
> > >> +        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
> > >> +                                                             "rtc"));
> > >> +        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
> > >> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
> > >> +        pci_ide_create_devs(PCI_DEVICE(dev));
> > >> +        idebus[0] = qdev_get_child_bus(dev, "ide.0");
> > >> +        idebus[1] = qdev_get_child_bus(dev, "ide.1");
> > >>      } else {
> > >>          pci_bus = NULL;
> > >> +        piix4_pm = NULL;
> > >>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
> > >>                                &error_abort);
> > >> +        isa_bus_irqs(isa_bus, x86ms->gsi);
> > >> +
> > >> +        rtc_state = isa_new(TYPE_MC146818_RTC);
> > >> +        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
> > >> +        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
> > >> +
> > >>          i8257_dma_init(isa_bus, 0);
> > >>          pcms->hpet_enabled = false;
> > >> +        idebus[0] = NULL;
> > >> +        idebus[1] = NULL;
> > >>      }
> > >> -    isa_bus_irqs(isa_bus, x86ms->gsi);
> > >>  
> > >>      if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
> > >>          pc_i8259_create(isa_bus, gsi_state->i8259_irq);
> > >> @@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
> > >>      }
> > >>  
> > >>      /* init basic PC hardware */
> > >> -    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
> > >> +    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
> > >>                           0x4);
> > >>  
> > >>      pc_nic_init(pcmc, isa_bus, pci_bus);
> > >>  
> > >>      if (pcmc->pci_enabled) {
> > >> -        PCIDevice *dev;
> > >> -
> > >> -        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
> > >> -        pci_ide_create_devs(dev);
> > >> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> > >> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
> > >>          pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
> > >>      }
> > >>  #ifdef CONFIG_IDE_ISA
> > >> @@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
> > >>      }
> > >>  #endif
> > >>  
> > >> -    if (pcmc->pci_enabled && machine_usb(machine)) {
> > >> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
> > >> -    }
> > >> -
> > >> -    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
> > >> -        PCIDevice *piix4_pm;
> > >> -
> > >> +    if (piix4_pm) {
> > >>          smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
> > >> -        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
> > >> -        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
> > >> -        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
> > >> -                          x86_machine_is_smm_enabled(x86ms));
> > >> -        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
> > >>  
> > >> -        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
> > >>          qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
> > >>          pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
> > >>          /* TODO: Populate SPD eeprom data.  */
> > >> @@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
> > >>                                   object_property_allow_set_link,
> > >>                                   OBJ_PROP_LINK_STRONG);
> > >>          object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
> > >> -                                 OBJECT(piix4_pm), &error_abort);
> > >> +                                 piix4_pm, &error_abort);
> > >>      }
> > >>  
> > >>      if (machine->nvdimms_state->is_enabled) {
> > >> @@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
> > >>      }
> > >>  
> > >>      pc_xen_hvm_init_pci(machine);
> > >> +    if (xen_igd_gfx_pt_enabled()) {
> > >> +        xen_igd_reserve_slot(pcms->bus);
> > >> +    }
> 
> These are the three lines from my patch that are added to
> pc_piix.c. IIUC, this diff is showing the differences in pc_piix.c
> without and with CONFIG_XEN defined.

changing configure flags does not modify pc_piix.c IIUC.  configure
changes compiler flags and some generated files (mostly headers).
AFAIK pc_piix.c is not one of them.

> If so, then the fact that
> the lines in my patch are added when CONFIG_XEN is defined
> means they are not included as part of the build when CONFIG_XEN
> is not defined. That is how I can claim that my small patch to
> pc_piix.c does not affect builds without CONFIG_XEN defined.
>

Doesn't make sense to me, sorry.

> I also think the rest of the files touched by my patch are only included
> by the meson build system with the --enable-xen option to configure.
> So the entire patch is xen-specific and can easily be eliminated from
> any build with the --disable-xen configure option.
> 
> Kind regards,
> 
> Chuck
> 
> > >>      pci_create_simple(pcms->bus, -1, "xen-platform");
> > >>  }
> > >>  #endif
> > >> @@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
> > >>      pc_i440fx_machine_options(m);
> > >>      m->alias = "pc";
> > >>      m->is_default = true;
> > >> +#ifdef CONFIG_MICROVM_DEFAULT
> > >> +    m->is_default = false;
> > >> +#else
> > >> +    m->is_default = true;
> > >> +#endif
> > >>  }
> > >>  
> > >>  DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,
> > > 
> > > 
> > > Lots of changes here not guarded by CONFIG_XEN.
> > > 
> >
> > What diff is this? How is my patch related to it?
> >



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 05:48:08 2023
Message-ID: <03edcbc5-2dd7-1ddb-bafe-8412d8fc95aa@suse.com>
Date: Tue, 10 Jan 2023 06:47:57 +0100
MIME-Version: 1.0
Subject: Re: Problem with pat_enable() and commit 72cbc8f04fe2
Content-Language: en-US
To: "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Andrew Lutomirski <luto@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Dave Hansen <dave.hansen@linux.intel.com>,
 Peter Zijlstra <peterz@infradead.org>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------s2hryonzbwnhOTjl3olsUKO7"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------s2hryonzbwnhOTjl3olsUKO7
Content-Type: multipart/mixed; boundary="------------Az9IynJ4vGV7ZPasu5p0pfX8";
 protected-headers="v1"

--------------Az9IynJ4vGV7ZPasu5p0pfX8
Content-Type: multipart/mixed; boundary="------------PVNmTecz6Ude3kl9G08cGM1s"

--------------PVNmTecz6Ude3kl9G08cGM1s
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 09.01.23 19:28, Michael Kelley (LINUX) wrote:
> I've come across a case with a VM running on Hyper-V that doesn't get
> MTRRs, but the PAT is functional.  (This is a Confidential VM using
> AMD's SEV-SNP encryption technology with the vTOM option.)  In this
> case, the changes in commit 72cbc8f04fe2 ("x86/PAT: Have pat_enabled()
> properly reflect state when running on Xen") apply.  pat_enabled() returns
> "true", but the MTRRs are not enabled.
> 
> But with this commit, there's a problem.  Consider memremap() on a RAM
> region, called with MEMREMAP_WB plus MEMREMAP_DEC as the 3rd
> argument.  Because of the request for a decrypted mapping,
> arch_memremap_can_ram_remap() returns false, and a new mapping
> must be created, which is appropriate.
> 
> The following call stack results:
> 
>    memremap()
>    arch_memremap_wb()
>    ioremap_cache()
>    __ioremap_caller()
>    memtype_reserve()  <--- pcm is _PAGE_CACHE_MODE_WB
>    pat_x_mtrr_type()  <-- only called after commit 72cbc8f04fe2
> 
> pat_x_mtrr_type() returns _PAGE_CACHE_MODE_UC_MINUS because
> mtrr_type_lookup() fails.  As a result, memremap() erroneously creates the
> new mapping as uncached.  This uncached mapping is causing a significant
> performance problem in certain Hyper-V Confidential VM configurations.
> 
> Any thoughts on resolving this?  Should memtype_reserve() be checking
> both pat_enabled() *and* whether MTRRs are enabled before calling
> pat_x_mtrr_type()?  Or does that defeat the purpose of commit
> 72cbc8f04fe2 in the Xen environment?

I think pat_x_mtrr_type() should return _PAGE_CACHE_MODE_UC_MINUS only if
mtrr_type_lookup() is not failing and is returning a mode other than WB.

I'll send a patch.

> I'm also looking at how to avoid this combination in a Hyper-V Confidential
> VM, but that doesn't address the underlying flaw.

Yes.


Juergen
--------------PVNmTecz6Ude3kl9G08cGM1s--

--------------Az9IynJ4vGV7ZPasu5p0pfX8--

--------------s2hryonzbwnhOTjl3olsUKO7
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO8/A0FAwAAAAAACgkQsN6d1ii/Ey+E
XQf9FXFccfJOnG/tmPwFInZjxEORcoptE0XtMp96Sp5kVPsgPzBj4Tgmx1bVlxG8UNrAtIl18Txy
7M6k2SbkWOsoLPqM2USmvmYDlBAhjbUDvrSh16HTzQDrfAO8YT5MDAIrwtcBNJ6D6mggFIOAbg7G
9rfHHcuA6U1+zNXOlvZWIcjOEqRLkOYH21eYczxBQZYf+nbgA31kOPqf7l58RQBHbn1ThUIzsEDl
2PS4cxMwwgIsvFeXP586iuaBTMIS+YFkJ4haajpguYMWtql0332j9a0eWSwlHUomWFbrSGConyrS
WgAgSWeC7BNIT7B19vM3UKZXnQsAcrr/d0NWC36cxA==
=lv3X
-----END PGP SIGNATURE-----

--------------s2hryonzbwnhOTjl3olsUKO7--


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 05:59:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 05:59:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474273.735349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF7fj-0001Af-Gu; Tue, 10 Jan 2023 05:59:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474273.735349; Tue, 10 Jan 2023 05:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF7fj-0001AY-E2; Tue, 10 Jan 2023 05:59:43 +0000
Received: by outflank-mailman (input) for mailman id 474273;
 Tue, 10 Jan 2023 05:59:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TLW0=5H=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pF7fi-0001AS-4r
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 05:59:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8879c41-90ab-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 06:59:40 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 17595612FE;
 Tue, 10 Jan 2023 05:59:38 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D371E1358A;
 Tue, 10 Jan 2023 05:59:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id IcYgMsn+vGMASwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 10 Jan 2023 05:59:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8879c41-90ab-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673330378; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=aF338VeMd8mOG+zXjPDXJGd2RkUVJK3TeF08hiwUm6Q=;
	b=KTZbsyV6KslH4Fbu+GLkmPyl0I184Qoa384lQ+XFvMCkiTwfx9siYJHO1uC18cre/Q41fI
	pEv6pUPPn52RcJXlDAsOouwV0Bsf+aXRydGHD53GVkuvnoGf7O88AEHI1EZCih9EOkvTU6
	xc9ws0/gr5QH/OW+4DZqxLVnmRKvg1k=
Message-ID: <ba24157d-92fc-f472-9ef5-4eae3c63c12e@suse.com>
Date: Tue, 10 Jan 2023 06:59:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: Problem with pat_enable() and commit 72cbc8f04fe2
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
To: "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Andrew Lutomirski <luto@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Dave Hansen <dave.hansen@linux.intel.com>,
 Peter Zijlstra <peterz@infradead.org>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com>
 <03edcbc5-2dd7-1ddb-bafe-8412d8fc95aa@suse.com>
In-Reply-To: <03edcbc5-2dd7-1ddb-bafe-8412d8fc95aa@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------NCcMP466BJPyiDVM1qKkw022"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------NCcMP466BJPyiDVM1qKkw022
Content-Type: multipart/mixed; boundary="------------Q25iRST94k8C9636ZAsL0ppF";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Andrew Lutomirski <luto@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Dave Hansen <dave.hansen@linux.intel.com>,
 Peter Zijlstra <peterz@infradead.org>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <ba24157d-92fc-f472-9ef5-4eae3c63c12e@suse.com>
Subject: Re: Problem with pat_enable() and commit 72cbc8f04fe2
References: <BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com>
 <03edcbc5-2dd7-1ddb-bafe-8412d8fc95aa@suse.com>
In-Reply-To: <03edcbc5-2dd7-1ddb-bafe-8412d8fc95aa@suse.com>

--------------Q25iRST94k8C9636ZAsL0ppF
Content-Type: multipart/mixed; boundary="------------HvmiliKnfQFHcSOjC0mQZzph"

--------------HvmiliKnfQFHcSOjC0mQZzph
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 10.01.23 06:47, Juergen Gross wrote:
> On 09.01.23 19:28, Michael Kelley (LINUX) wrote:
>> I've come across a case with a VM running on Hyper-V that doesn't get
>> MTRRs, but the PAT is functional. (This is a Confidential VM using
>> AMD's SEV-SNP encryption technology with the vTOM option.) In this
>> case, the changes in commit 72cbc8f04fe2 ("x86/PAT: Have pat_enabled()
>> properly reflect state when running on Xen") apply. pat_enabled() returns
>> "true", but the MTRRs are not enabled.
>>
>> But with this commit, there's a problem. Consider memremap() on a RAM
>> region, called with MEMREMAP_WB plus MEMREMAP_DEC as the 3rd
>> argument. Because of the request for a decrypted mapping,
>> arch_memremap_can_ram_remap() returns false, and a new mapping
>> must be created, which is appropriate.
>>
>> The following call stack results:
>>
>>    memremap()
>>    arch_memremap_wb()
>>    ioremap_cache()
>>    __ioremap_caller()
>>    memtype_reserve()  <--- pcm is _PAGE_CACHE_MODE_WB
>>    pat_x_mtrr_type()  <-- only called after commit 72cbc8f04fe2
>>
>> pat_x_mtrr_type() returns _PAGE_CACHE_MODE_UC_MINUS because
>> mtrr_type_lookup() fails. As a result, memremap() erroneously creates the
>> new mapping as uncached. This uncached mapping is causing a significant
>> performance problem in certain Hyper-V Confidential VM configurations.
>>
>> Any thoughts on resolving this? Should memtype_reserve() be checking
>> both pat_enabled() *and* whether MTRRs are enabled before calling
>> pat_x_mtrr_type()? Or does that defeat the purpose of commit
>> 72cbc8f04fe2 in the Xen environment?
> 
> I think pat_x_mtrr_type() should return _PAGE_CACHE_MODE_UC_MINUS only if
> mtrr_type_lookup() is not failing and is returning a mode other than WB.

Another idea would be to let the mtrr_type_lookup() stub in
arch/x86/include/asm/mtrr.h return MTRR_TYPE_WRBACK, making it possible to
simplify pud_set_huge() and pmd_set_huge() by removing the check for
MTRR_TYPE_INVALID.


Juergen
--------------HvmiliKnfQFHcSOjC0mQZzph
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------HvmiliKnfQFHcSOjC0mQZzph--

--------------Q25iRST94k8C9636ZAsL0ppF--

--------------NCcMP466BJPyiDVM1qKkw022
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO8/skFAwAAAAAACgkQsN6d1ii/Ey84
Cwf9G/P++wHlaCcX8j+JRBcoA9jKCSsN2iWkWtxi0WC9kZpybt58HDx1D2QPHYD2raTtF5s+NAkr
IDXa8QMCVXWDsDEzXbrDjjascvUtG2HViGemQe4YDNbqNqSRH43pu9qiF7aR6GC87w/aV5Ma0bwZ
FYfbGJsqT8wkH0IjMYglU63GCQw4sjjoR7V1sXR+SYVqHH7dy1PTj7XBcwEDw/Lk6mS4ihor79Zk
voE3hwtt8VfY8sHSTC5AAr1vIt3i2DaAN8cDqGX2ah//JUvI9PwsbwW+wqPhjaLc9/GGxm6m4+5Q
qrjZdGfbfw1smAfHptLkYsBB4c4E9eTzt40UI8nOkA==
=ff7s
-----END PGP SIGNATURE-----

--------------NCcMP466BJPyiDVM1qKkw022--


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 05:59:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 05:59:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474274.735360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF7fv-0001SH-Ok; Tue, 10 Jan 2023 05:59:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474274.735360; Tue, 10 Jan 2023 05:59:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF7fv-0001SA-M6; Tue, 10 Jan 2023 05:59:55 +0000
Received: by outflank-mailman (input) for mailman id 474274;
 Tue, 10 Jan 2023 05:59:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tiyo=5H=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF7fu-0001AS-7W
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 05:59:54 +0000
Received: from sonic317-22.consmr.mail.gq1.yahoo.com
 (sonic317-22.consmr.mail.gq1.yahoo.com [98.137.66.148])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe31a185-90ab-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 06:59:51 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic317.consmr.mail.gq1.yahoo.com with HTTP; Tue, 10 Jan 2023 05:59:48 +0000
Received: by hermes--production-ne1-7b69748c4d-bgkrh (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID a8406ef0cbcbedb2d3599d546f1be4ba; 
 Tue, 10 Jan 2023 05:59:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe31a185-90ab-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673330388; bh=IQswRJ3mNTU2rW5CB8ekxkuMTpc6VFiPyScNDj65O6E=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=uDAO4jcip6hhG13sRDD1TCwo0Uy/ZDmUpsQI5dm6JQeRwMERoUDWoefNyRxPifmxaggFFP7k+Klch+9QGaefYYZjP8wGtpPRBm8LkkUH9JJHOvmgGWjvWSKRq0ArOnuJrReNrYwkQ5Ym9C6/Z8zr3J70MfvrFS+tKTPx3Ry3JCwvCuqIMMDkSPI4d3H2xAThT0XlCvna6iRc+wO9gNbDsdQ9avTo8OPMPSBa4LKKtOehMEltjciFzvl6RqBFKS4t2ntuw8vAoEJ/yLLnLoXltaq7WUm1lpArzqU3tKsWCCajiSnfBjxikpYcSMF67SaOgH2q8erCgSXhTY+MjlxbbQ==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673330388; bh=LZqtAdaC54vL/ejVGshCmWeWyIi/se6my3VUn5Dvwku=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=riia0ZVjp6RfB8x36IDIyJla5JSgDYpM0brW+qZKNvGQduPP+3hW6fTptr9Fe7wLst4DmdyMTRnfEoZd15fNojl/Igcu97YD5WsxFloHZt9cpOVxxhIzhhb8R6o0BRJzypgEFGhHU4SFrQS7QRKpYit0635OhZ3fC7rNn3Nwy994hNUYBiKXCH1bRjbi+4aYJJztwVW6i/AZXNrLDOrZ2IUGpa9KSQoMZbR7OQdSC8YJCSCP8mjYiQtD8ezAqUrjFyNq1Rdm4LffKIsG7bGs1TWF31DU/x7Xb9UyV3KlSDQJucbD914Tz1PiToTyGbuuLN9jbiqCyFXbzZ+ik6T9+A==
X-YMail-OSG: L4UlU_MVM1leDEms_BcRLMUJc.6udPFe1VgzSYs9.g3zFCK3S7ZEr18nxUJJtnY
 ciVYSmIVPGDtams3XtAwHf_5Q6ILQOZ6zcYHq_fDFTVYAWlQw_cz_79xklgKkisEhs8PLATiILFk
 zLPiUqdQ3WF2Kbde9ZLavisvZkcmb_EEq2JXMTGqX47_mbuGqqsXVAwWCC7F6FQ0YBM3qEx_cr8A
 dgIhYpGvipfbYcew6KQFDt13QfdTMb0X92chYEX7F_Qq40JSYXhcyF2I2yEfhHnJLiXOHCFU4qCd
 Ani6DWHP9.sRfmmmvb0BGKSlGg7she7RQRZRc9fByontYFDIVOVE58jlv8hUG8IvyK8dPGTtu8E3
 MBx_tKSaAUg928xVgxYsEE_lF9Wsx6h7ZdJxa5SM1ycc75GkXtjEWUAiIu1UEtypcrZbrklXsWZT
 KebkKCKtklzLYJbEWDSgqtZe1zHu6yZ1UE6jrQjcAl.EkGC6T48gJVy_q60Qz7A0GXtZjFEuUvoA
 lF.yk0YgTxRMxsxhhkd.1Ra4366vn_hEbvtswegcdSLYp5jTzni_Qz4eWhh4JnpzpPA0EMDMwmSP
 Rgb9YsWyhQHCGx4O6SLIxBI.LK8.2PN0rMIWwojeMHF4lheGmO2FD3wT606w770xIeEgRdE07W2C
 BHrLKQ_vYR3pAilt.l_Ds66Aar47ZgGhcYvL.ArirvAsD94OotjqNLxMWso_iG16xOaan0RzJ4Aq
 aOT1h3QiMehNdF4_P3roQJnRkCpzaEiPtcMW_z.ZBCq.kYaWM4bIqo9V78ugAKYYWSe.tS3QOMgt
 BKUgw1dKYse7cBe7hnrCEqhkjW0yXvFOrsUlCoEO2l.njf71KoAwFmvit6XYL02MTqomPqBCXNpH
 mCBzaF_8E6gzQebzU0yX3FRBW3k4MrfylZTji1pN69uOl_9mdcH10mGdiBPIio7v7MtZup3JPr2n
 fAhet_x77uZq9CSoLwIhxEVcZChfkCADN7vMEabiLVuZNtKPXrHdsYNeXSLoOld1wR.Sufddxs.A
 DtfeXcEUqvbBgBc4970Y7UTWNCd2N72hHOTgClYNUOW0zPi10SRL0UTYjw8ZrzUDrHnAPqikvcnx
 XSow69zQJqdcFSajakkka_cBVWTf9AO1kL2pfZuF1DOZPbMdIMRLXerym0lR2DhapuGriv5n8LhI
 .KBr9EDrLeTj9HRwBnOgzi3BKX9sgSEoDrlmxP5A7SV2crk1TbPwXFmWkIeO15Wey37E0ADYmLEO
 rIOLwF_qq3Lwj5JCOSHX30uFdax64CCdHAlZmBi5mXANvGGi6R8GY8ley7mbcT0K6CK5s_y3qGGN
 oGD.QPsapvby01QGSdJdZLKClSJRtkxVYQ54_Fuc15LVuwAatxQWbsBY3HZ5JFd8CMFaNE0NT9MV
 j6aGCl9UYQaF27OwcWFoaIGXevW3htJXIQLbBirn5Ked75fWyiwJBHSPawVNTgLPtNgkH4lOt51S
 KaXCn8rPmX.DqsyhCAXOA0csJEY_HIZpQUbkMrqSGDp_7ivRmda65e90cgbwOLfxXhqVrRctpT.T
 D4mxvFwthNKrE0vXJ6w4HcmINGhe2LI.lBdZcPgUg0ik24xbURgymsCHYi17gE23r9f3N0JK65dc
 DO735PWN5eJLQ_pLWO7BFkiP8E_EjBqPXT6EC_6NXKueHEkm5ay_Y12DN2dZZ0Hv_7zMFZAlZQLC
 upAERu.3Salf5p3hvF_JWl82VllHqNO2XaZNMYQzPrtxcuBGzLKSUEe06cOMs1Orkbo1yQ.JfR1I
 rjVbnY6eCU3N3Jk0KhtnVsvO_wisdhBRGU.iq4DvYNHNfGcuzZr5op69jZ9vi.UdL8oCfkUb8JjI
 i0jva1QxsasNWFRtmzwySoyFdW6Y2xexRJztBR2I_U1qAfHaWY194dxSh07lbTEPB_WCF3CoA42o
 O6e5ZiVQETuEp4iJZxjf53z0QcxbgEu1EmydIfZwwUQq.sb4EB8QGygdZ4QGIPmHwxwpduXmDnnb
 SHPd_nDD3FoGCybQ6RUF6Q5mCr9BFRouIicx_F_3IBZqhZA2smKMZNSd9PCCGSj5g9RFJbm.uqO3
 .BORoGiJ8GR7Rveyl_fIbeTMSqVIMTlKZ0781olKwxCz5_qNiZlT_pKzvbc2wyuoC_9dkBf3eGxC
 NQnqnYR1CDuOt_ZQTZrfbA73JBiw0uWzSlWzOLkx7MOPj.jDzbKBZHp_sxGrqQthnbJb5jAlNzKM
 dJzKAOfDMkHUg
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <5d8b4bfd-92b3-e63a-58fa-b2dc953a7ee5@aol.com>
Date: Tue, 10 Jan 2023 00:59:44 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v7] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz.ref@aol.com>
 <8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com>
 <20230109183132-mutt-send-email-mst@kernel.org>
 <aacffaa2-1e86-1392-8302-484248b893c4@aol.com>
 <20230110002712-mutt-send-email-mst@kernel.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230110002712-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 15531

On 1/10/2023 12:27 AM, Michael S. Tsirkin wrote:
> On Mon, Jan 09, 2023 at 07:05:35PM -0500, Chuck Zmudzinski wrote:
> > On 1/9/23 6:33 PM, Michael S. Tsirkin wrote:
> > > On Mon, Jan 09, 2023 at 04:55:42PM -0500, Chuck Zmudzinski wrote:
> > >> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > >> as noted in docs/igd-assign.txt in the Qemu source code.
> > >> 
> > >> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > >> Intel IGD passthrough to the guest with the Qemu upstream device model,
> > >> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > >> a different slot. This problem often prevents the guest from booting.
> > >> 
> > >> The only available workaround is not good: Configure Xen HVM guests to use
> > >> the old and no longer maintained Qemu traditional device model available
> > >> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > >> 
> > >> To implement this feature in the Qemu upstream device model for Xen HVM
> > >> guests, introduce the following new functions, types, and macros:
> > >> 
> > >> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > >> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > >> * typedef XenPTQdevRealize function pointer
> > >> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > >> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > >> 
> > >> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > >> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > >> the xl toolstack with the gfx_passthru option enabled, which sets the
> > >> igd-passthru=on option to Qemu for the Xen HVM machine type.
> > >> 
> > >> The new xen_igd_reserve_slot function also needs to be implemented in
> > >> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > >> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > >> in which case it does nothing.
> > >> 
> > >> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > >> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > >> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > >> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > >> 
> > >> Move the call to xen_host_pci_device_get, and the associated error
> > >> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > >> initialize the device class and vendor values which enables the checks for
> > >> the Intel IGD to succeed. The verification that the host device is an
> > >> Intel IGD to be passed through is done by checking the domain, bus, slot,
> > >> and function values as well as by checking that gfx_passthru is enabled,
> > >> the device class is VGA, and the device vendor is Intel.
> > >> 
> > >> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> > >> ---
> > >> Notes that might be helpful to reviewers of patched code in hw/xen:
> > >> 
> > >> The new functions and types are based on recommendations from Qemu docs:
> > >> https://qemu.readthedocs.io/en/latest/devel/qom.html
> > >> 
> > >> Notes that might be helpful to reviewers of patched code in hw/i386:
> > >> 
> > >> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> > >> not affect builds that do not have CONFIG_XEN defined.
> > > 
> > > I'm not sure how you can claim that.
> > 
> > I mean the small patch to pc_piix.c in this patch sits
> > between an "#ifdef CONFIG_XEN" and the corresponding
> > "#endif" so the preprocessor will exclude it when CONFIG_XEN
> > is not defined. In other words, my patch is part of the
> > xen-specific code in pc_piix.c. Or am I missing something?
> > 
> > 
> > > 
> > > ...
> > > 
> > >> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> > >> index b48047f50c..34a9736b5e 100644
> > >> --- a/hw/i386/pc_piix.c
> > >> +++ b/hw/i386/pc_piix.c
> > >> @@ -32,6 +32,7 @@
> > >>  #include "hw/i386/pc.h"
> > >>  #include "hw/i386/apic.h"
> > >>  #include "hw/pci-host/i440fx.h"
> > >> +#include "hw/rtc/mc146818rtc.h"
> > >>  #include "hw/southbridge/piix.h"
> > >>  #include "hw/display/ramfb.h"
> > >>  #include "hw/firmware/smbios.h"
> > >> @@ -40,16 +41,16 @@
> > >>  #include "hw/usb.h"
> > >>  #include "net/net.h"
> > >>  #include "hw/ide/pci.h"
> > >> -#include "hw/ide/piix.h"
> > >>  #include "hw/irq.h"
> > >>  #include "sysemu/kvm.h"
> > >>  #include "hw/kvm/clock.h"
> > >>  #include "hw/sysbus.h"
> > >> +#include "hw/i2c/i2c.h"
> > >>  #include "hw/i2c/smbus_eeprom.h"
> > >>  #include "hw/xen/xen-x86.h"
> > >> +#include "hw/xen/xen.h"
> > >>  #include "exec/memory.h"
> > >>  #include "hw/acpi/acpi.h"
> > >> -#include "hw/acpi/piix4.h"
> > >>  #include "qapi/error.h"
> > >>  #include "qemu/error-report.h"
> > >>  #include "sysemu/xen.h"
> > >> @@ -66,6 +67,7 @@
> > >>  #include "kvm/kvm-cpu.h"
> > >>  
> > >>  #define MAX_IDE_BUS 2
> > >> +#define XEN_IOAPIC_NUM_PIRQS 128ULL
> > >>  
> > >>  #ifdef CONFIG_IDE_ISA
> > >>  static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
> > >> @@ -73,6 +75,32 @@ static const int ide_iobase2[MAX_IDE_BUS] = { 0x3f6, 0x376 };
> > >>  static const int ide_irq[MAX_IDE_BUS] = { 14, 15 };
> > >>  #endif
> > >>  
> > >> +/*
> > >> + * Return the global irq number corresponding to a given device irq
> > >> + * pin. We could also use the bus number to have a more precise mapping.
> > >> + */
> > >> +static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
> > >> +{
> > >> +    int slot_addend;
> > >> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
> > >> +    return (pci_intx + slot_addend) & 3;
> > >> +}
> > >> +
> > >> +static void piix_intx_routing_notifier_xen(PCIDevice *dev)
> > >> +{
> > >> +    int i;
> > >> +
> > >> +    /* Scan for updates to PCI link routes (0x60-0x63). */
> > >> +    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
> > >> +        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
> > >> +        if (v & 0x80) {
> > >> +            v = 0;
> > >> +        }
> > >> +        v &= 0xf;
> > >> +        xen_set_pci_link_route(i, v);
> > >> +    }
> > >> +}
> > >> +
> > >>  /* PC hardware initialisation */
> > >>  static void pc_init1(MachineState *machine,
> > >>                       const char *host_type, const char *pci_type)
> > >> @@ -84,7 +112,7 @@ static void pc_init1(MachineState *machine,
> > >>      MemoryRegion *system_io = get_system_io();
> > >>      PCIBus *pci_bus;
> > >>      ISABus *isa_bus;
> > >> -    int piix3_devfn = -1;
> > >> +    Object *piix4_pm;
> > >>      qemu_irq smi_irq;
> > >>      GSIState *gsi_state;
> > >>      BusState *idebus[MAX_IDE_BUS];
> > >> @@ -205,10 +233,9 @@ static void pc_init1(MachineState *machine,
> > >>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
> > >>  
> > >>      if (pcmc->pci_enabled) {
> > >> -        PIIX3State *piix3;
> > >> +        DeviceState *dev;
> > >>          PCIDevice *pci_dev;
> > >> -        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
> > >> -                                         : TYPE_PIIX3_DEVICE;
> > >> +        int i;
> > >>  
> > >>          pci_bus = i440fx_init(pci_type,
> > >>                                i440fx_host,
> > >> @@ -216,21 +243,65 @@ static void pc_init1(MachineState *machine,
> > >>                                x86ms->below_4g_mem_size,
> > >>                                x86ms->above_4g_mem_size,
> > >>                                pci_memory, ram_memory);
> > >> +        pci_bus_map_irqs(pci_bus,
> > >> +                         xen_enabled() ? xen_pci_slot_get_pirq
> > >> +                                       : pci_slot_get_pirq);
> > >>          pcms->bus = pci_bus;
> > >>  
> > >> -        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
> > >> -        piix3 = PIIX3_PCI_DEVICE(pci_dev);
> > >> -        piix3->pic = x86ms->gsi;
> > >> -        piix3_devfn = piix3->dev.devfn;
> > >> -        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(piix3), "isa.0"));
> > >> +        pci_dev = pci_new_multifunction(-1, true, TYPE_PIIX3_DEVICE);
> > >> +        object_property_set_bool(OBJECT(pci_dev), "has-usb",
> > >> +                                 machine_usb(machine), &error_abort);
> > >> +        object_property_set_bool(OBJECT(pci_dev), "has-acpi",
> > >> +                                 x86_machine_is_acpi_enabled(x86ms),
> > >> +                                 &error_abort);
> > >> +        qdev_prop_set_uint32(DEVICE(pci_dev), "smb_io_base", 0xb100);
> > >> +        object_property_set_bool(OBJECT(pci_dev), "smm-enabled",
> > >> +                                 x86_machine_is_smm_enabled(x86ms),
> > >> +                                 &error_abort);
> > >> +        pci_realize_and_unref(pci_dev, pci_bus, &error_fatal);
> > >> +
> > >> +        if (xen_enabled()) {
> > >> +            pci_device_set_intx_routing_notifier(
> > >> +                        pci_dev, piix_intx_routing_notifier_xen);
> > >> +
> > >> +            /*
> > >> +             * Xen supports additional interrupt routes from the PCI devices to
> > >> +             * the IOAPIC: the four pins of each PCI device on the bus are also
> > >> +             * connected to the IOAPIC directly.
> > >> +             * These additional routes can be discovered through ACPI.
> > >> +             */
> > >> +            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
> > >> +                         XEN_IOAPIC_NUM_PIRQS);
> > >> +        }
> > >> +
> > >> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "pic"));
> > >> +        for (i = 0; i < ISA_NUM_IRQS; i++) {
> > >> +            qdev_connect_gpio_out(dev, i, x86ms->gsi[i]);
> > >> +        }
> > >> +        isa_bus = ISA_BUS(qdev_get_child_bus(DEVICE(pci_dev), "isa.0"));
> > >> +        rtc_state = ISA_DEVICE(object_resolve_path_component(OBJECT(pci_dev),
> > >> +                                                             "rtc"));
> > >> +        piix4_pm = object_resolve_path_component(OBJECT(pci_dev), "pm");
> > >> +        dev = DEVICE(object_resolve_path_component(OBJECT(pci_dev), "ide"));
> > >> +        pci_ide_create_devs(PCI_DEVICE(dev));
> > >> +        idebus[0] = qdev_get_child_bus(dev, "ide.0");
> > >> +        idebus[1] = qdev_get_child_bus(dev, "ide.1");
> > >>      } else {
> > >>          pci_bus = NULL;
> > >> +        piix4_pm = NULL;
> > >>          isa_bus = isa_bus_new(NULL, get_system_memory(), system_io,
> > >>                                &error_abort);
> > >> +        isa_bus_irqs(isa_bus, x86ms->gsi);
> > >> +
> > >> +        rtc_state = isa_new(TYPE_MC146818_RTC);
> > >> +        qdev_prop_set_int32(DEVICE(rtc_state), "base_year", 2000);
> > >> +        isa_realize_and_unref(rtc_state, isa_bus, &error_fatal);
> > >> +
> > >>          i8257_dma_init(isa_bus, 0);
> > >>          pcms->hpet_enabled = false;
> > >> +        idebus[0] = NULL;
> > >> +        idebus[1] = NULL;
> > >>      }
> > >> -    isa_bus_irqs(isa_bus, x86ms->gsi);
> > >>  
> > >>      if (x86ms->pic == ON_OFF_AUTO_ON || x86ms->pic == ON_OFF_AUTO_AUTO) {
> > >>          pc_i8259_create(isa_bus, gsi_state->i8259_irq);
> > >> @@ -252,18 +323,12 @@ static void pc_init1(MachineState *machine,
> > >>      }
> > >>  
> > >>      /* init basic PC hardware */
> > >> -    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, &rtc_state, true,
> > >> +    pc_basic_device_init(pcms, isa_bus, x86ms->gsi, rtc_state, true,
> > >>                           0x4);
> > >>  
> > >>      pc_nic_init(pcmc, isa_bus, pci_bus);
> > >>  
> > >>      if (pcmc->pci_enabled) {
> > >> -        PCIDevice *dev;
> > >> -
> > >> -        dev = pci_create_simple(pci_bus, piix3_devfn + 1, TYPE_PIIX3_IDE);
> > >> -        pci_ide_create_devs(dev);
> > >> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> > >> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
> > >>          pc_cmos_init(pcms, idebus[0], idebus[1], rtc_state);
> > >>      }
> > >>  #ifdef CONFIG_IDE_ISA
> > >> @@ -289,21 +354,9 @@ static void pc_init1(MachineState *machine,
> > >>      }
> > >>  #endif
> > >>  
> > >> -    if (pcmc->pci_enabled && machine_usb(machine)) {
> > >> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
> > >> -    }
> > >> -
> > >> -    if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
> > >> -        PCIDevice *piix4_pm;
> > >> -
> > >> +    if (piix4_pm) {
> > >>          smi_irq = qemu_allocate_irq(pc_acpi_smi_interrupt, first_cpu, 0);
> > >> -        piix4_pm = pci_new(piix3_devfn + 3, TYPE_PIIX4_PM);
> > >> -        qdev_prop_set_uint32(DEVICE(piix4_pm), "smb_io_base", 0xb100);
> > >> -        qdev_prop_set_bit(DEVICE(piix4_pm), "smm-enabled",
> > >> -                          x86_machine_is_smm_enabled(x86ms));
> > >> -        pci_realize_and_unref(piix4_pm, pci_bus, &error_fatal);
> > >>  
> > >> -        qdev_connect_gpio_out(DEVICE(piix4_pm), 0, x86ms->gsi[9]);
> > >>          qdev_connect_gpio_out_named(DEVICE(piix4_pm), "smi-irq", 0, smi_irq);
> > >>          pcms->smbus = I2C_BUS(qdev_get_child_bus(DEVICE(piix4_pm), "i2c"));
> > >>          /* TODO: Populate SPD eeprom data.  */
> > >> @@ -315,7 +368,7 @@ static void pc_init1(MachineState *machine,
> > >>                                   object_property_allow_set_link,
> > >>                                   OBJ_PROP_LINK_STRONG);
> > >>          object_property_set_link(OBJECT(machine), PC_MACHINE_ACPI_DEVICE_PROP,
> > >> -                                 OBJECT(piix4_pm), &error_abort);
> > >> +                                 piix4_pm, &error_abort);
> > >>      }
> > >>  
> > >>      if (machine->nvdimms_state->is_enabled) {
> > >> @@ -405,6 +458,9 @@ static void pc_xen_hvm_init(MachineState *machine)
> > >>      }
> > >>  
> > >>      pc_xen_hvm_init_pci(machine);
> > >> +    if (xen_igd_gfx_pt_enabled()) {
> > >> +        xen_igd_reserve_slot(pcms->bus);
> > >> +    }
> > >>      pci_create_simple(pcms->bus, -1, "xen-platform");
> > >>  }
> > >>  #endif
> > >> @@ -441,6 +497,11 @@ static void pc_i440fx_8_0_machine_options(MachineClass *m)
> > >>      pc_i440fx_machine_options(m);
> > >>      m->alias = "pc";
> > >>      m->is_default = true;
> > >> +#ifdef CONFIG_MICROVM_DEFAULT
> > >> +    m->is_default = false;
> > >> +#else
> > >> +    m->is_default = true;
> > >> +#endif
> > >>  }
> > >>  
> > >>  DEFINE_I440FX_MACHINE(v8_0, "pc-i440fx-8.0", NULL,
> > > 
> > > 
> > > Lots of changes here not guarded by CONFIG_XEN.
> > > 
> > 
> > What diff is this? How is my patch related to it?
>
>
> This is what you posted, take a look:
> https://lore.kernel.org/all/8349506149de6d81b0762f17623552c248439e93.1673297742.git.brchuckz@aol.com/
>
>

Oops, I think I sent the wrong patch here. I must have used the wrong git
branch. Sorry.

I wouldn't blame you if you ignore future messages from me.

I will get it right next time if there is a next time with v8. Linus
named git the stupid content tracker. And in this case, it was
too stupid to warn me that the patch it sent on my behalf is
not what I expected it to be. Or maybe I am the stupid one, LOL.

I hope no one is stupid enough to consider v7 of my patch any
further! And the fact that this stupidity of mine is preserved for
all future generations is so sweet - one of the benefits of trying
to help out in FLOSS projects, LOL.

Kind regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 06:33:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 06:33:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474286.735371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF8C5-00060v-EF; Tue, 10 Jan 2023 06:33:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474286.735371; Tue, 10 Jan 2023 06:33:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF8C5-00060o-BR; Tue, 10 Jan 2023 06:33:09 +0000
Received: by outflank-mailman (input) for mailman id 474286;
 Tue, 10 Jan 2023 06:33:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF8C4-00060e-6Q; Tue, 10 Jan 2023 06:33:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF8C4-0005NB-4k; Tue, 10 Jan 2023 06:33:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF8C3-000453-QV; Tue, 10 Jan 2023 06:33:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pF8C3-0004Fg-Q2; Tue, 10 Jan 2023 06:33:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175683-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175683: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2cc6d4c8ed54e36fd9638dafdb293dd9a4c54cf9
X-Osstest-Versions-That:
    ovmf=33a3408fbbf988aaa8ecc6e721cf83e3ae810e54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 06:33:07 +0000

flight 175683 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175683/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2cc6d4c8ed54e36fd9638dafdb293dd9a4c54cf9
baseline version:
 ovmf                 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54

Last test of basis   175655  2023-01-09 18:11:29 Z    0 days
Testing same since   175683  2023-01-10 04:10:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ning Feng <ning.feng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   33a3408fbb..2cc6d4c8ed  2cc6d4c8ed54e36fd9638dafdb293dd9a4c54cf9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 07:09:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 07:09:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474296.735388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF8kn-00018p-6E; Tue, 10 Jan 2023 07:09:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474296.735388; Tue, 10 Jan 2023 07:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF8kn-00018i-3H; Tue, 10 Jan 2023 07:09:01 +0000
Received: by outflank-mailman (input) for mailman id 474296;
 Tue, 10 Jan 2023 07:08:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tiyo=5H=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF8kl-00018Z-DL
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 07:08:59 +0000
Received: from sonic308-54.consmr.mail.gq1.yahoo.com
 (sonic308-54.consmr.mail.gq1.yahoo.com [98.137.68.30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a4412fec-90b5-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 08:08:55 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic308.consmr.mail.gq1.yahoo.com with HTTP; Tue, 10 Jan 2023 07:08:52 +0000
Received: by hermes--production-bf1-5458f64d4-c7wsl (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 685679c689afdd034d61b58b257d2b66; 
 Tue, 10 Jan 2023 07:08:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4412fec-90b5-11ed-b8d0-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Date: Tue, 10 Jan 2023 02:08:34 -0500
Message-Id: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
Content-Length: 11344

Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
as noted in docs/igd-assign.txt in the Qemu source code.

Currently, when the xl toolstack is used to configure a Xen HVM guest with
Intel IGD passthrough to the guest with the Qemu upstream device model,
a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
a different slot. This problem often prevents the guest from booting.

The only available workaround is not good: configure Xen HVM guests to use
the old, no longer maintained Qemu traditional device model from
xenbits.xen.org, which does reserve slot 2 for the Intel IGD.

To implement this feature in the Qemu upstream device model for Xen HVM
guests, introduce the following new functions, types, and macros:

* XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
* XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
* typedef XenPTQdevRealize function pointer
* XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
* xen_igd_reserve_slot and xen_igd_clear_slot functions

The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
the xl toolstack with the gfx_passthru option enabled, which sets the
igd-passthru=on option to Qemu for the Xen HVM machine type.
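For reference, a minimal xl guest configuration that exercises this path
might look like the following sketch (the PCI address is illustrative and
depends on the host; on most Intel systems the IGD is at 0000:00:02.0):

```
builder = "hvm"
gfx_passthru = 1      # makes xl pass igd-passthru=on to the Qemu machine
pci = [ "00:02.0" ]   # host IGD to pass through (host-specific)
```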

The new xen_igd_reserve_slot function also needs a stub implementation in
hw/xen/xen_pt_stub.c, where it does nothing, to prevent FTBFS at the link
stage when Qemu is configured with --enable-xen and
--disable-xen-pci-passthrough.

The new xen_igd_clear_slot function overrides qdev->realize of the parent
PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
created in hw/i386/pc_piix.c for the case when igd-passthru=on.

Move the call to xen_host_pci_device_get, and the associated error
handling, from xen_pt_realize to the new xen_igd_clear_slot function to
initialize the device class and vendor values which enables the checks for
the Intel IGD to succeed. The verification that the host device is an
Intel IGD to be passed through is done by checking the domain, bus, slot,
and function values as well as by checking that gfx_passthru is enabled,
the device class is VGA, and the device vendor is Intel.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Notes that might be helpful to reviewers of patched code in hw/xen:

The new functions and types are based on recommendations from Qemu docs:
https://qemu.readthedocs.io/en/latest/devel/qom.html

Notes that might be helpful to reviewers of patched code in hw/i386:

The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
not affect builds that do not have CONFIG_XEN defined.

xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
existing function that is only true when Qemu is built with
xen-pci-passthrough enabled and the administrator has configured the Xen
HVM guest with Qemu's igd-passthru=on option.

v2: Remove From: <email address> tag at top of commit message

v3: Changed the test for the Intel IGD in xen_igd_clear_slot:

    if (is_igd_vga_passthrough(&s->real_device) &&
        (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {

    is changed to

    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    I hoped that I could use the test in v2, since it matches the
    other tests for the Intel IGD in Qemu and Xen, but those tests
    do not work because the necessary data structures are not set with
    their values yet. So instead use the test that the administrator
    has enabled gfx_passthru and the device address on the host is
    02.0. This test does detect the Intel IGD correctly.

v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
    email address to match the address used by the same author in commits
    be9c61da and c0e86b76
    
    Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc

v5: The patch of xen_pt.c was re-worked to allow a more consistent test
    for the Intel IGD that uses the same criteria as in other places.
    This involved moving the call to xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot and updating the checks for the
    Intel IGD in xen_igd_clear_slot:
    
    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    is changed to

    if (is_igd_vga_passthrough(&s->real_device) &&
        s->real_device.domain == 0 && s->real_device.bus == 0 &&
        s->real_device.dev == 2 && s->real_device.func == 0 &&
        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {

    Added an explanation for the move of xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot to the commit message.

    Rebase.

v6: Fix logging by removing these lines from the move from xen_pt_realize
    to xen_igd_clear_slot that was done in v5:

    XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
               " to devfn 0x%x\n",
               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
               s->dev.devfn);

    This log needs to be in xen_pt_realize because s->dev.devfn is not
    set yet in xen_igd_clear_slot.

v7: The v7 that was posted to the mailing list was incorrect. v8 is what
    v7 was intended to be.

v8: Inhibit out of context log message and needless processing by
    adding 2 lines at the top of the new xen_igd_clear_slot function:

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
        return;

    Rebase. This removed an unnecessary header file from xen_pt.h 

 hw/i386/pc_piix.c    |  3 +++
 hw/xen/xen_pt.c      | 49 ++++++++++++++++++++++++++++++++++++--------
 hw/xen/xen_pt.h      | 16 +++++++++++++++
 hw/xen/xen_pt_stub.c |  4 ++++
 4 files changed, 63 insertions(+), 9 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index b48047f50c..bc5efa4f59 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
     }
 
     pc_xen_hvm_init_pci(machine);
+    if (xen_igd_gfx_pt_enabled()) {
+        xen_igd_reserve_slot(pcms->bus);
+    }
     pci_create_simple(pcms->bus, -1, "xen-platform");
 }
 #endif
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 0ec7e52183..eff38155ef 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
                s->dev.devfn);
 
-    xen_host_pci_device_get(&s->real_device,
-                            s->hostaddr.domain, s->hostaddr.bus,
-                            s->hostaddr.slot, s->hostaddr.function,
-                            errp);
-    if (*errp) {
-        error_append_hint(errp, "Failed to \"open\" the real pci device");
-        return;
-    }
-
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
@@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
     PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
 }
 
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
+    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
+}
+
+static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
+{
+    ERRP_GUARD();
+    PCIDevice *pci_dev = (PCIDevice *)qdev;
+    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
+    PCIBus *pci_bus = pci_get_bus(pci_dev);
+
+    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
+        return;
+
+    xen_host_pci_device_get(&s->real_device,
+                            s->hostaddr.domain, s->hostaddr.bus,
+                            s->hostaddr.slot, s->hostaddr.function,
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
+        return;
+    }
+
+    if (is_igd_vga_passthrough(&s->real_device) &&
+        s->real_device.domain == 0 && s->real_device.bus == 0 &&
+        s->real_device.dev == 2 && s->real_device.func == 0 &&
+        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
+        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
+        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
+    }
+    xpdc->pci_qdev_realize(qdev, errp);
+}
+
 static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
+    xpdc->pci_qdev_realize = dc->realize;
+    dc->realize = xen_igd_clear_slot;
     k->realize = xen_pt_realize;
     k->exit = xen_pt_unregister_device;
     k->config_read = xen_pt_pci_read_config;
@@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
     .instance_size = sizeof(XenPCIPassthroughState),
     .instance_finalize = xen_pci_passthrough_finalize,
     .class_init = xen_pci_passthrough_class_init,
+    .class_size = sizeof(XenPTDeviceClass),
     .instance_init = xen_pci_passthrough_instance_init,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_CONVENTIONAL_PCI_DEVICE },
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index cf10fc7bbf..8c25932b4b 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -2,6 +2,7 @@
 #define XEN_PT_H
 
 #include "hw/xen/xen_common.h"
+#include "hw/pci/pci_bus.h"
 #include "xen-host-pci-device.h"
 #include "qom/object.h"
 
@@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
 #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
 OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
 
+#define XEN_PT_DEVICE_CLASS(klass) \
+    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
+#define XEN_PT_DEVICE_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
+
+typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
+
+typedef struct XenPTDeviceClass {
+    PCIDeviceClass parent_class;
+    XenPTQdevRealize pci_qdev_realize;
+} XenPTDeviceClass;
+
 uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void xen_igd_reserve_slot(PCIBus *pci_bus);
 void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
                                            XenHostPCIDevice *dev);
@@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
 
 #define XEN_PCI_INTEL_OPREGION 0xfc
 
+#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
+
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
     XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
index 2d8cac8d54..5c108446a8 100644
--- a/hw/xen/xen_pt_stub.c
+++ b/hw/xen/xen_pt_stub.c
@@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
         error_setg(errp, "Xen PCI passthrough support not built in");
     }
 }
+
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 07:29:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 07:29:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474302.735399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF94K-0003ZL-QV; Tue, 10 Jan 2023 07:29:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474302.735399; Tue, 10 Jan 2023 07:29:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF94K-0003ZE-My; Tue, 10 Jan 2023 07:29:12 +0000
Received: by outflank-mailman (input) for mailman id 474302;
 Tue, 10 Jan 2023 07:29:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qhNV=5H=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1pF94I-0003Z8-Fc
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 07:29:10 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77ef4fcd-90b8-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 08:29:08 +0100 (CET)
Received: by mail-ej1-x630.google.com with SMTP id fc4so26233009ejc.12
 for <xen-devel@lists.xenproject.org>; Mon, 09 Jan 2023 23:29:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77ef4fcd-90b8-11ed-b8d0-410ff93cb8f0
X-Received: by 2002:a17:906:ac2:b0:78d:dddb:3974 with SMTP id
 z2-20020a1709060ac200b0078ddddb3974mr6322536ejf.411.1673335747386; Mon, 09
 Jan 2023 23:29:07 -0800 (PST)
MIME-Version: 1.0
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
 <CAKmqyKNyRfAhyP-3uZwEf3OZEv5be4KNdGvNjUiQGu8w-vf_8g@mail.gmail.com>
In-Reply-To: <CAKmqyKNyRfAhyP-3uZwEf3OZEv5be4KNdGvNjUiQGu8w-vf_8g@mail.gmail.com>
From: Bobby Eshleman <bobby.eshleman@gmail.com>
Date: Mon, 9 Jan 2023 23:28:56 -0800
Message-ID: <CAKB00G3nVtcBppt2TJa-dFzz4TKqVT6B-1swjzkZwqsRkFxwsA@mail.gmail.com>
Subject: Re: [PATCH v2 6/8] xen/riscv: introduce early_printk basic stuff
To: Alistair Francis <alistair23@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Connor Davis <connojdavis@gmail.com>, Gianluca Guida <gianluca@rivosinc.com>, 
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Oleksii Kurochko <oleksii.kurochko@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000d0230205f1e3d6f0"

--000000000000d0230205f1e3d6f0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jan 9, 2023 at 4:28 PM Alistair Francis <alistair23@gmail.com>
wrote:

> On Tue, Jan 10, 2023 at 1:47 AM Oleksii Kurochko
> <oleksii.kurochko@gmail.com> wrote:
> >
> > The patch introduces the basic early_printk functionality, which
> > is enough to print 'hello from C environment'.
> > The early_printk() function was changed in comparison with the
> > original because common code isn't being built yet, so there is no
> > vscnprintf.
> >
> > Because printk() relies on a serial driver (like the ns16550 driver)
> > and drivers require working virtual memory (ioremap()), there is no
> > print functionality early in Xen boot.
> >
> > This commit adds an early printk implementation built on the putc
> > SBI call.
> >
> > As sbi_console_putchar() is already planned for deprecation, it is
> > used temporarily for now and will be removed or reworked once a
> > real UART driver is ready.
>
> There was a discussion to add a new SBI putchar replacement. It
> doesn't seem to be completed yet, but there might be an SBI
> replacement for this in the future as well.
>
> Alistair
>

Are you referring to the Debug Console Extension (EID #0x4442434E "DBCN")?

https://lists.riscv.org/g/tech-prs/topic/96051183#84

Best,
Bobby


> >
> > Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes in V2:
> >     - add license to early_printk.c
> >     - add signed-off-by Bobby
> >     - add RISCV_32 to Kconfig.debug to EARLY_PRINTK config
> >     - update commit message
> >     - order the files alphabetically in Makefile
> > ---
> >  xen/arch/riscv/Kconfig.debug              |  7 +++++
> >  xen/arch/riscv/Makefile                   |  1 +
> >  xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
> >  xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
> >  4 files changed, 53 insertions(+)
> >  create mode 100644 xen/arch/riscv/early_printk.c
> >  create mode 100644 xen/arch/riscv/include/asm/early_printk.h
> >
> > diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> > index e69de29bb2..6ba0bd1e5a 100644
> > --- a/xen/arch/riscv/Kconfig.debug
> > +++ b/xen/arch/riscv/Kconfig.debug
> > @@ -0,0 +1,7 @@
> > +config EARLY_PRINTK
> > +    bool "Enable early printk config"
> > +    default DEBUG
> > +    depends on RISCV_64 || RISCV_32
> > +    help
> > +
> > +      Enables early printk debug messages
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index fd916e1004..1a4f1a6015 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,3 +1,4 @@
> > +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> >  obj-$(CONFIG_RISCV_64) += riscv64/
> >  obj-y += sbi.o
> >  obj-y += setup.o
> > diff --git a/xen/arch/riscv/early_printk.c
> b/xen/arch/riscv/early_printk.c
> > new file mode 100644
> > index 0000000000..88da5169ed
> > --- /dev/null
> > +++ b/xen/arch/riscv/early_printk.c
> > @@ -0,0 +1,33 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * RISC-V early printk using SBI
> > + *
> > + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> > + */
> > +#include <asm/sbi.h>
> > +#include <asm/early_printk.h>
> > +
> > +/*
> > + * TODO:
> > + *   sbi_console_putchar is already planned for deprecation
> > + *   so it should be reworked to use UART directly.
> > +*/
> > +void early_puts(const char *s, size_t nr)
> > +{
> > +    while ( nr-- > 0 )
> > +    {
> > +        if (*s == '\n')
> > +            sbi_console_putchar('\r');
> > +        sbi_console_putchar(*s);
> > +        s++;
> > +    }
> > +}
> > +
> > +void early_printk(const char *str)
> > +{
> > +    while (*str)
> > +    {
> > +        early_puts(str, 1);
> > +        str++;
> > +    }
> > +}
> > diff --git a/xen/arch/riscv/include/asm/early_printk.h
> b/xen/arch/riscv/include/asm/early_printk.h
> > new file mode 100644
> > index 0000000000..05106e160d
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/early_printk.h
> > @@ -0,0 +1,12 @@
> > +#ifndef __EARLY_PRINTK_H__
> > +#define __EARLY_PRINTK_H__
> > +
> > +#include <xen/early_printk.h>
> > +
> > +#ifdef CONFIG_EARLY_PRINTK
> > +void early_printk(const char *str);
> > +#else
> > +static inline void early_printk(const char *s) {};
> > +#endif
> > +
> > +#endif /* __EARLY_PRINTK_H__ */
> > --
> > 2.38.1
> >
> >
>

--000000000000d0230205f1e3d6f0--


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 07:32:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 07:32:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474309.735410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF97Z-0005MM-D6; Tue, 10 Jan 2023 07:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474309.735410; Tue, 10 Jan 2023 07:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF97Z-0005ME-9n; Tue, 10 Jan 2023 07:32:33 +0000
Received: by outflank-mailman (input) for mailman id 474309;
 Tue, 10 Jan 2023 07:32:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tiyo=5H=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF97Y-0005Lu-5D
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 07:32:32 +0000
Received: from sonic308-54.consmr.mail.gq1.yahoo.com
 (sonic308-54.consmr.mail.gq1.yahoo.com [98.137.68.30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ef4ba9c2-90b8-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 08:32:29 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic308.consmr.mail.gq1.yahoo.com with HTTP; Tue, 10 Jan 2023 07:32:27 +0000
Received: by hermes--production-ne1-7b69748c4d-bxfkx (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 00995f78f0d4fd001b9b8f1699ee7ed7; 
 Tue, 10 Jan 2023 07:32:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef4ba9c2-90b8-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673335947; bh=06K19SzqcQMh+hYhDSeIvS+A1/F0qrYWujU3+eUpzVU=; h=From:To:Cc:Subject:Date:References:From:Subject:Reply-To; b=P4bfXUJ/Oj1sE4kdQK8QT7e6pQLcH4DLBCs17Yh/tS3Ez3cmciomWmb4LuLRhJP1HfsHObKGh6HOOeS8Xe1evZLESy46Ke0IIZOfyvR5TaowAotPqoqW6K8HwPOz6F6+d9Zkq41rYMNbI7t3I16CFg+6oiLVrAxCDtgX5cRXYol54on/6QCG/l/56UPQEQV7Zs8WuItBQ4W/BkHZGfwLeMuEnYAWq90yTqwhHgvIZ33idYcDJAMv8V82mJtyyygNeXnbde2mkOhR/l0OD3iJ6I2lQrOTQFIjl/1K9AQHNTODe0mRxaIO+FZqH5Y3V2mOWRgfJRigw1damskw5/JFGw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673335947; bh=qiEct4R8jT0mwdX7UpFLVNjLYCi7GJda04po0C6cY4K=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=dxATIrmQpuS93tvMUICVaqOLCSFREPVzQm4SQQugtK1YPeVS2q9ps0WdrqXcNA8JnZS14CuT5D3lf2bkSjPAiF2AI5xHeCRjMg76cBu9DT/26QyKjNCKe9oqKduMHlne1lVSzDriGhPOL7edrRGwo6a0SaSAUFW/PgDguRFaBrzLE3nd6L5IN79QIBKK7Le/RcC4nEFjBBA+WcuupHF4ahPLaVmENgVLfiC1kdL/YdHkwSYVkmp94whzGeKAaY4P+gd+5GeTj7G4GicfViEaRlpFNaW62hOrDky620gTBUi6UYJTHfhsnY5oDssexMHQWe5b/Ol5NlzjhGmwiR4kaQ==
X-YMail-OSG: zDrCPGAVM1nKO6q4JEwQd0TLn_dx5z3NGvb.UsTIFqQDWW2_QU9Eu2G7qEfPxvn
 XkCvt4XXYgZtyeSQWLmALXZwrgXq95f3.TgppA__RxHl2_GQTuNqOntSLARx7z16_SZOC405wb5P
 dmzs_SLzVX_qv_XAe1e_6SyqumXPBj6g5GY5vJ32bommQhqlbh3CGW6MD.fVzKl9NgGDN3HrNgSQ
 uAqOsLTA6yprOwHUOp.jPAXpcTvCi9d324aATxwhzSvleqjnwVkZn7.qn0qv7hkItw6WyBsQrxW9
 GkGE44Ly9UU7y5vUQJkskGkygQhAgs9AHjhRT7nJwtN__Pvcksfu_wlX8yqABQ7_G9iqculoxG4y
 3N76GrDm4ukxhPx_WZhnzWpNyRzz80UhhRKJHmfr2SDpWhQyPbl_CKpFXgKNHmvYrAb.9_0WV8LG
 sfGzj5axvp6izZZcH8i9jRrLGhbeXoO3ntqGWtVmSG8h_H87yu779IndHgls17iUGtFSMaQQaren
 wCXzNwj8Mbr4CgzwbasQQNRRKLivGLBw0c_I1mnFBjK475kVyMxgQ_DQP2oeB5s5jAspyM7QUfbH
 Shsn6NMbJotgQEYehpNI7sI6b6hHan.fcaD9pZanWen2QJQ89bYCqEyrIZigK_pUUS0.WfhsD_lZ
 I.uembUWw4jBGrAeesqFtmiyr6Ww7CeQqsdhWtbuk3FQZZQffz5nznf0rql2k0YcWpsAteOzHDTJ
 FwSYM3hZPqRbZQ9WMON8xK9a.MMPp87IIDpufxohXIgG4MUocFZrK7jblRzdpVZibGJt6IDf2kPe
 LgZI24cRTUnet.inN02WTYM8RAOG_6P4CYRbbnWiCKnHE3Qx2R7yr4WPETBJCOVUlA2jODAoovrw
 fe0vpIdUqJONkeZGwY29p4dDU4c3NsMsDrh_9KUDZJBq78vq6_5dOHXMf3VC8MOP3dhqXLPK.gX7
 pzIIJ2l0CzzWntdcoP0Q87sdfoLzjX3VyrxNW4foh07rf0Yaa0KWDpNRWhvNozVVaGInDjlbM.cA
 oKZiuwxYjtAz0ODQWXWcV0rAKZeLpaMoMY2z6vennE2mZ45bnr7m8II3joniWnTraAQPmpQNUSn.
 qggbLLzSlv3UZTl_F.0EXeo6egH.Pj8QT_2xomUopIk707Fk9iKnf17dOicMfMA6fBFKQTmdjx1L
 .fNEoRI86hN1VZQeWgLNOYr4xdXY.R7gTT5En8.yX8HynE62Z9gNja16xDbj4vzgnwJ0KX3oqKNN
 Jik3xHuCy7j7pKO_iJCnAHxOB5v4MWP6v9oIozpMARdeUxxQ83_gBcT6XOvQLJ0oH9ScGCwqgTGr
 OzOXeWZ7XMTcamXOkKlxaF01ZaMBVLtJulpLcbV2pPN0wrDCPBG5cVf1xavIlH87eQmkif_junML
 9.0qn0OePyv06EnHlmaWnGTtQoL.zk56j0DYmI0ib2L3CBN5mDL.eSuW0X4b4HGbQVvtOgstUX5y
 XmAJ9Sp6UPd0rdKwidP5cjPkhLo6Sr6xInsgVBJ_EvRNI1JMEojo9yED6ekX_tSyRvjETuPSRLVb
 sdtAdswCL3FueL6z0nVCay0k92c_neQjcW0jXN8R935uG_oQzaZwtjEpxD2lxN7jrn9zD0ursLDO
 6TVFTGaih3SthGsIFisrTjLbPrbaltMDqAcMLbV9gtHNSnlnZVzCnbZETFA021MCzqIwNZ.evpmB
 IqSEl0LWxCUt2aq5MTQ7nHxG3b7vFLRuMbLiNPTIMOOui8SRv34t2hJ3zyjuTBuh5OvuBBnCS820
 z7H6ZgNdO.Q_.S7YUny5YRmfs1V7GIherasBf_Q8rceMnWouTuYeZyhkIIjmCNdGeHvTq9bFUPaZ
 4N7zIVUNH8RGLjLS4EaE8RZNJcG.nY2MikLlUaWkuqD6yOJtIvAEMfzKCaCRaPVZUPREFB_in47z
 kCICh4y3IQ0euEB8rs.nhin7kauvYXZUUKfQeeciiRXauvgOmVYavlYsHCBCVJMLVzKFQ9tnT0XX
 MRCnKbyhcI.VwF36AOBqf2gpGWxikKtYCqVADkDFLWh3Bs5zgBn7xtkk3TU8W_e8Gu8435nZShSM
 dEuZiEJ4mNfNwSbbIW9ZgD7duiaGPWXC92mTj.gih4ivW7mIlqPGFuAx4ycrvLpyjAIjpJFEysoF
 7ieCCClAAdBj.mrJ1DBCuQ7LxIHICbkPjXE04JWObdFgFypR1OhTpAyHTnxk1g23nynNljo4aAU8
 -
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: anthony.perard@citrix.com
Cc: xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	qemu-devel@nongnu.org
Subject: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default for igd-passthru
Date: Tue, 10 Jan 2023 02:32:01 -0500
Message-Id: <cover.1673300848.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <cover.1673300848.git.brchuckz.ref@aol.com>
Content-Length: 9798

Sorry for the length of this cover letter, but it is helpful to put all
the pros and cons of the two different approaches to solving the problem
of configuring the Intel IGD with qemu upstream and libxl in one place,
which I attempt to do here. The other approach involves a patch to
qemu [1] instead of this patch series for libxl.

The quick answer:

I think the other patch to qemu is the better option, but I would be OK
if you use this patch series instead.

Details with my reasons for preferring the other patch to qemu over this
patch series to libxl: 

I call attention to the commit message of the first patch which points
out that using the "pc" machine and adding the xen platform device on
the qemu upstream command line is not functionally equivalent to using
the "xenfv" machine which automatically adds the xen platform device
earlier in the guest creation process. As a result, there is a noticeable
reduction in the performance of the guest during startup with the "pc"
machine type, even if the xen platform device is added via the qemu
command line options, although eventually both Linux and Windows guests
perform equally well once the guest operating system is fully loaded.

Specifically, startup time is longer and neither the grub vga drivers
nor the windows vga drivers in early startup perform as well when the
xen platform device is added via the qemu command line instead of being
added immediately after the other emulated i440fx pci devices when the
"xenfv" machine type is used.

For example, when using the "pc" machine, which adds the xen platform
device using a command line option, the Linux guest could not display
the grub boot menu at the native resolution of the monitor, but with the
"xenfv" machine, the grub menu is displayed at the full 1920x1080
native resolution of the monitor for testing. So improved startup
performance is an advantage for the patch for qemu.

I also call attention to the last point of the commit message of the
second patch and the comments for reviewers section of the second patch.
This approach, as opposed to fixing this in qemu upstream, makes
maintaining the code in libxl__build_device_model_args_new more
difficult and therefore increases the chances of problems caused by
coding errors and typos for users of libxl. So that is another advantage
of the patch for qemu.

OTOH, fixing this in qemu causes newer qemu versions to behave
differently than previous versions, which the qemu community does not
like. They seem OK with the other patch since it only affects qemu
"xenfv" machine types, but they do not want the patch to affect
toolstacks like libvirt that do not rely on qemu upstream's
autoconfiguration options as much as libxl does. Of course, libvirt can
manage qemu "xenfv" machines, so existing "xenfv" guests configured
manually by libvirt could be adversely affected by the patch to qemu,
but only if those same guests are also configured for igd-passthrough,
which likely leaves a very small number of possibly affected libvirt
users of qemu.

A year or two ago I tried to configure guests for pci passthrough on xen
using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
could not convert an xl.cfg file with a configuration item
pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
So it is unlikely there are any users out there using libvirt to
configure xen hvm guests for igd passthrough on xen, and those are the
only users that could be adversely affected by the simpler patch to qemu
to fix this.
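
For concreteness, the kind of guest configuration being discussed looks
roughly like this hypothetical xl.cfg fragment (the BDF addresses are
placeholders, not taken from the series; "@2" uses the @VSLOT suffix of
xl's PCI_SPEC_STRING syntax):

```
# Illustrative xl.cfg fragment only: BDFs below are placeholders.
type = "hvm"
gfx_passthru = "igd"              # request IGD passthrough
xen_platform_pci = 1              # keep the xen platform device
# Pass through the host IGD, pinning it to virtual slot 2 via @VSLOT:
pci = [ "00:02.0@2", "00:19.0" ]
```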

The only advantage of this patch series over the qemu patch is that
this patch series does not need any patches to qemu to make Intel IGD
configuration easier with libxl, so the risk of affecting other qemu
users is entirely eliminated. The cost of patching libxl instead of
qemu is reduced startup performance compared to what could be achieved
by patching qemu, and an increased risk that the tedious process of
manually managing the slot addresses of all the emulated devices will
make it more difficult to keep the libxl code free of bugs.

I will leave it to the maintainer of the code in both qemu and libxl
(Anthony) to decide which, if any, of the patches to apply. I am OK with
either this patch series to libxl or the proposed patch to qemu to fix
this problem, but I do recommend the other patch to qemu over this patch
series because of the improved performance during startup with that
patch and the relatively low risk that any libvirt users will be
adversely affected by that patch.

Brief statement of the problem this patch series solves:

Currently only the qemu traditional device model reserves slot 2 for the
Intel Integrated Graphics Device (IGD) with default settings. Assigning
the Intel IGD to slot 2 is necessary for the device to operate properly
when passed through as the primary graphics adapter. The qemu
traditional device model takes care of this by reserving slot 2 for the
Intel IGD, but the upstream qemu device model currently does not reserve
slot 2 for the Intel IGD.

This patch series modifies libxl so the upstream qemu device model will
also, with default settings, assign slot 2 for the Intel IGD.

There are three reasons why it is difficult to configure the guest so
that the Intel IGD is assigned to slot 2 using libxl and the upstream
device model, so the patch series is logically organized into three
separate patches, each resolving one of those reasons:

What each of the three libxl patches does:

1. With the default "xenfv" machine type, qemu upstream is hard-coded
   to assign the xen platform device to slot 2. The first patch fixes
   that by using the "pc" machine instead when gfx_passthru type is igd
   and, if xen_platform_pci is set in the guest config, libxl now assigns
   the xen platform device to slot 3, making it possible to assign the
   IGD to slot 2. The patch only affects guests with the gfx_passthru
   option enabled. The default behavior (xen_platform_pci is enabled
   but gfx_passthru option is disabled) of using the "xenfv" machine
   type is preserved. Another way to describe what the patch does is
   to say that it adds a second exception to the default choice of the
   "xenfv" machine type, with the first exception being that the "pc"
   machine type is also used instead of "xenfv" if the xen platform pci
   device is disabled in the guest xl.cfg file.

2. Currently, with libxl and qemu upstream, most emulated pci devices
   are by default automatically assigned a pci slot, and the emulated
   ones are assigned before the passed through ones, which means that
   even if libxl is patched so the xen platform device will not be
   assigned to slot 2, any other emulated device will be assigned slot 2
   unless libxl explicitly assigns the slot address of each emulated pci
   device in such a way that the IGD will be assigned slot 2. The second
   patch fixes this by hard coding the slot assignment for the emulated
   devices instead of deferring to qemu upstream's auto-assignment which
   does not do what is necessary to configure the Intel IGD correctly.
   With the second patch applied, it is possible to configure the Intel
   IGD correctly by using the @VSLOT parameter in xl.cfg to specify the
   slot address of each passed through pci device in the guest. The
   second patch is also designed not to change the default behavior of
   letting qemu autoconfigure the pci slot addresses when igd
   gfx_passthru is disabled in xl.cfg.

3. For convenience, the third patch automatically assigns slot 2 to the
   Intel IGD when the gfx_passthru type is igd, so with the third patch
   applied it is not necessary to set the @VSLOT parameter to configure
   the Intel IGD correctly.

Testing:

I tested a system with Intel IGD passthrough and two other pci devices
passed through, with and without the xen platform device. I also did
tests on guests without any pci passthrough configured. In all cases
tested, libxl behaved as expected. For example, the device model
arguments are only changed if gfx_passthru is set for the IGD, libxl
respected administrator settings such as @VSLOT and xen_platform_pci
with the patch series applied, and not adding the xen platform device to
the guest caused reduced performance because in that case the guest
could not take advantage of the improvements offered by the Xen PV
drivers in the guest. I tested the following emulated devices on my
setup: xen-platform, e1000, and VGA. I also verified the device that is
added by the "hdtype = 'ahci'" xl.cfg option is configured correctly
with the patch applied. I did not test all 12 devices that could be
affected by patch 2 of the series. These include the intel-hda high
definition audio device, a virtio-serial device, etc. One can look
at the second patch for the full list of qemu emulated devices whose
behavior is affected when the guest is configured for igd
gfx_passthru. These devices are also subject to mistakes in the patch
not discovered by the compiler, as mentioned in the comments for
reviewers section of the second patch.

[1] https://lore.kernel.org/qemu-devel/a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com/

v2: correct the link to the qemu patch - the link in v1 was to an
    incorrect version of the patch

Chuck Zmudzinski (3):
  libxl/dm: Use "pc" machine type for Intel IGD passthrough
  libxl/dm: Manage pci slot assignment for Intel IGD passthrough
  libxl/dm: Assign slot 2 by default for Intel IGD passthrough

 tools/libs/light/libxl_dm.c | 227 +++++++++++++++++++++++++++++-------
 1 file changed, 183 insertions(+), 44 deletions(-)

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 07:32:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 07:32:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474310.735417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF97Z-0005Po-PN; Tue, 10 Jan 2023 07:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474310.735417; Tue, 10 Jan 2023 07:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF97Z-0005PV-HW; Tue, 10 Jan 2023 07:32:33 +0000
Received: by outflank-mailman (input) for mailman id 474310;
 Tue, 10 Jan 2023 07:32:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tiyo=5H=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF97X-0005Lt-Rx
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 07:32:32 +0000
Received: from sonic314-21.consmr.mail.gq1.yahoo.com
 (sonic314-21.consmr.mail.gq1.yahoo.com [98.137.69.84])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eee044ef-90b8-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 08:32:29 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Tue, 10 Jan 2023 07:32:26 +0000
Received: by hermes--production-ne1-7b69748c4d-bxfkx (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 00995f78f0d4fd001b9b8f1699ee7ed7; 
 Tue, 10 Jan 2023 07:32:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eee044ef-90b8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673335946; bh=MlYqfxn4vwPKUpewBFfsPqn045LoqxLP6nhibRvkmMQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From:Subject:Reply-To; b=E/1qai2PiOIqbUTF+XBOHFQuXUBMbVzW1ho8Ij2gjZ27xRD1dwjXCtNrNQ4E0unkmGIUM3s8WmahRg0TmTsyf4yQMF2t1SpFRc3gh5fUOS6Mijgvqp4JcPgWC/3Uw/0puNm2BNIrF26KL/OwkwKFiwnWU29U9Or0WK1uj75QiHplwK8ho9cMZiFmmi5Jko0yavTTPJa3jHOY3KFr5sZoyLo2LY4CH4jllmRO0630p210u+u2NSTbj7atbAJAM0dseG9/9MGkWMRqwYhC37K4NB7tUvweCDC/cKsQ5Kif53rXB/aQIV5VV+B+iidUvyby/jUEV/l3ysCP3mMmsbaf3Q==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673335946; bh=LQMlvZQo4JgVkUeUOy2EtUHA3/Ts7yg0JQfMLc1ODBG=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=beDqLG/dZStPkhBKkGVBGhI6zDoDXQyqQKA8cmNBIWiB9Omm6MFO/BIE63xINC/+2lowa2lyXFhXBVaR9pvEm45TmpGaZJlgdonwD4LggH3kdzy+ji5+M+keoV0WS0ABPIX3YR79EUq4M+Y//skWtGKY+WDiaQfImZMkUhYE4/u9U+GJImKG+0gCm8RpdraoON62pKwrfgm9j/rf/roST/Z5BypVOR2kuzaxeIaMjglqaZyoHifWYf68zNB9bDvbSSI0I4wUVy3aWM6BFF6/TRRvbX7PlMQHDv/7KJRS90ZR1J8IaSW4k+ns0Qo6uTw9NqRDtQ+3rdYPTkZ3mQjQMA==
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: anthony.perard@citrix.com
Cc: xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	qemu-devel@nongnu.org
Subject: [XEN PATCH v2 1/3] libxl/dm: Use "pc" machine type for Intel IGD passthrough
Date: Tue, 10 Jan 2023 02:32:02 -0500
Message-Id: <a38db9a2b829add5612e1bce44ae54ecd96e96b7.1673300848.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673300848.git.brchuckz@aol.com>
References: <cover.1673300848.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Length: 4686

When an HVM guest is configured for Intel IGD passthrough, the default
qemu upstream "xenfv" machine type assigns slot 2 to the xen platform
pci device. However, slot 2 must be assigned to the Intel IGD when it
is passed through as the primary graphics adapter. Using the "pc"
machine type instead of "xenfv" in that case makes it possible for
upstream qemu to assign slot 2 to the IGD.

Using the qemu "pc" machine and adding the xen platform device on the
qemu command line, instead of using the qemu "xenfv" machine which
automatically adds the xen platform device earlier in the guest
creation process, does come with some degradation of startup
performance: startup is slower, and some vga drivers in use during
early boot are unable to display the screen at the monitor's native
resolution. However, once the guest operating system (Windows or
Linux) is fully loaded, there is no noticeable difference in guest
performance between the "pc" and "xenfv" machine types.

With this patch, libxl continues to use the default "xenfv" machine
type with the default settings of xen_platform_pci enabled and igd
gfx_passthru disabled. The patch only affects guests configured with
gfx_passthru enabled.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Reviewers might find this patch easier to review by looking at the
resulting code in the patched file rather than at the diff, because
the patch moves the check for igd gfx_passthru before the check for
disabling the xen platform device, which makes the logical flow hard
to follow in the diff. The check was moved because it results in a
simpler logical flow in the resulting code.

v2: No changes to this patch since v1

 tools/libs/light/libxl_dm.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index fc264a3a13..2048815611 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1809,7 +1809,26 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, b_info->extra_pv[i]);
         break;
     case LIBXL_DOMAIN_TYPE_HVM:
-        if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+            enum libxl_gfx_passthru_kind gfx_passthru_kind =
+                            libxl__detect_gfx_passthru_kind(gc, guest_config);
+            switch (gfx_passthru_kind) {
+            case LIBXL_GFX_PASSTHRU_KIND_IGD:
+                /*
+                 * Using the machine "pc" because with the default machine "xenfv"
+                 * the xen-platform device will be assigned to slot 2, but with
+                 * GFX_PASSTHRU_KIND_IGD, slot 2 needs to be reserved for the Intel IGD.
+                 */
+                machinearg = libxl__strdup(gc, "pc,accel=xen,suppress-vmdesc=on,igd-passthru=on");
+                break;
+            case LIBXL_GFX_PASSTHRU_KIND_DEFAULT:
+                LOGD(ERROR, guest_domid, "unable to detect required gfx_passthru_kind");
+                return ERROR_FAIL;
+            default:
+                LOGD(ERROR, guest_domid, "invalid value for gfx_passthru_kind");
+                return ERROR_INVAL;
+            }
+        } else if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
             /* Switching here to the machine "pc" which does not add
              * the xen-platform device instead of the default "xenfv" machine.
              */
@@ -1831,22 +1850,6 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
         }
 
-        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
-            enum libxl_gfx_passthru_kind gfx_passthru_kind =
-                            libxl__detect_gfx_passthru_kind(gc, guest_config);
-            switch (gfx_passthru_kind) {
-            case LIBXL_GFX_PASSTHRU_KIND_IGD:
-                machinearg = GCSPRINTF("%s,igd-passthru=on", machinearg);
-                break;
-            case LIBXL_GFX_PASSTHRU_KIND_DEFAULT:
-                LOGD(ERROR, guest_domid, "unable to detect required gfx_passthru_kind");
-                return ERROR_FAIL;
-            default:
-                LOGD(ERROR, guest_domid, "invalid value for gfx_passthru_kind");
-                return ERROR_INVAL;
-            }
-        }
-
         flexarray_append(dm_args, machinearg);
         for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
             flexarray_append(dm_args, b_info->extra_hvm[i]);
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 07:32:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 07:32:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474311.735432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF97a-0005qc-Ul; Tue, 10 Jan 2023 07:32:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474311.735432; Tue, 10 Jan 2023 07:32:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF97a-0005pf-Qr; Tue, 10 Jan 2023 07:32:34 +0000
Received: by outflank-mailman (input) for mailman id 474311;
 Tue, 10 Jan 2023 07:32:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tiyo=5H=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF97Z-0005Lt-0H
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 07:32:33 +0000
Received: from sonic307-54.consmr.mail.gq1.yahoo.com
 (sonic307-54.consmr.mail.gq1.yahoo.com [98.137.64.30])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f0cd5c42-90b8-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 08:32:32 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic307.consmr.mail.gq1.yahoo.com with HTTP; Tue, 10 Jan 2023 07:32:29 +0000
Received: by hermes--production-ne1-7b69748c4d-bxfkx (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 00995f78f0d4fd001b9b8f1699ee7ed7; 
 Tue, 10 Jan 2023 07:32:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0cd5c42-90b8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673335949; bh=IXWL0zBe+TGnnQx6f+FhHQ+rLQndNCxwY9B4xdroKxQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From:Subject:Reply-To; b=PNwFrlYzMqzEwT/E+oGS8Q3uNQiE/OwLQvqWhehSCNsxbDjavxCS2Sx8ioWWaFwUYqodd3doNUaSKpxANvRfBKu8kh5Zbfgr4KVEACUx2bt97TACZILrnvWl+ERevY7hkGQtPlr6wRnpAxmXcVqkSHDwUZNcv548y4ZowDS9k4bV0knixD185V9gY89QyzzvaGA2hwU9sf0fiGgpdUb18kkIw3C2tSD2Y98q3/PHf40hCTOoePRgx7yzME6wE8pDJ/0HnuKh6CntYEpgCOVJQRMVec2VDvXVRFOyM7r0P982BJI6wbXPu2tJ22h0N8ryoSZSqYA+QJ6z8ggtESDi+w==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673335949; bh=AcbauxwS4/wmmJiS90weiJNhYtz0Z33GTuCvWkIVR9N=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=R+0zo2IvCY4Wdqy39uABgWGBzxRkNsP/6w02TVt8zAAEAJ7qaxYix6aANJFwgSo9bs/3YvGyROciezOFt9kh4hauvB0kd/dcihnAmt+9uPQFfA3otKnUArLgypXRE9CCW9XiR51i56IS8MSOUjDlPClbnleiURDdHDUtVnPm2hZwpUY1dU8+1JnTJXESdxmE4hwqOSQjxXOVKKom+Q/fmWAY6oPM8famKfl7kmZQ12UluEuw1agvKsOylMxAlDaWWYeDkRESZIxOtLAC/uXrJikGcB+MXwAE+ijoi6iQghdV+ZCHLuOCBqgEL3Nhlfwc35XVXxb70P93FbhtsLa4JA==
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: anthony.perard@citrix.com
Cc: xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	qemu-devel@nongnu.org
Subject: [XEN PATCH v2 3/3] libxl/dm: Assign slot 2 by default for Intel IGD passthrough
Date: Tue, 10 Jan 2023 02:32:04 -0500
Message-Id: <27bb3979f234c8de6b51be7bb8195e3cacb5181c.1673300848.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673300848.git.brchuckz@aol.com>
References: <cover.1673300848.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Length: 2558

It is possible for the administrator to manually specify the virtual
slot addresses of passed-through pci devices on the guest's pci bus
using the @VSLOT parameter in xl.cfg. With this patch, libxl will by
default assign the Intel IGD to slot 2 when gfx_passthru is configured
for the Intel IGD, so it will no longer be necessary to use the @VSLOT
setting to configure the IGD correctly. Also, libxl will not override
explicit @VSLOT settings made by the administrator, so in that case
the patch has no effect on guest behavior.
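
For reference, the manual alternative this patch makes unnecessary
looks roughly like the following xl.cfg fragment; the BDF and the @2
vslot suffix here are illustrative only, so check xl.cfg(5) for the
exact syntax on your Xen version:

```
# Illustrative xl.cfg fragment: pin the host IGD at 00:02.0 to
# virtual slot 2 explicitly via the @VSLOT suffix.
gfx_passthru = 1
pci = [ '00:02.0@2' ]
```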

The default behavior of letting qemu manage the slot addresses of
passed-through pci devices, when gfx_passthru is disabled and the
administrator does not set @VSLOT, is also preserved.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
v2: No changes to this patch since v1

 tools/libs/light/libxl_dm.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 2720b5d4d0..b51ebae643 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1207,6 +1207,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     int rc;
     int next_slot;
     bool configure_pci_for_igd = false;
+    const int igd_slot = 2;
     /*
      * next_slot is only used when we need to configure the pci
      * slots for the Intel IGD. Slot 2 will be for the Intel IGD.
@@ -2173,6 +2174,27 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     flexarray_append(dm_envs, NULL);
     if (envs)
         *envs = (char **) flexarray_contents(dm_envs);
+    if (configure_pci_for_igd) {
+        libxl_device_pci *pci = NULL;
+        for (i = 0; i < guest_config->num_pcidevs; i++) {
+            pci = &guest_config->pcidevs[i];
+            if (!pci->vdevfn) {
+                /*
+                 * Find the Intel IGD and configure it for slot 2.
+                 * Configure any other devices for slot next_slot.
+                 * Since the guest is configured for IGD passthrough,
+                 * assume the device on the host at slot 2 is the IGD.
+                 */
+                if (pci->domain == 0 && pci->bus == 0 &&
+                    pci->dev == igd_slot && pci->func == 0) {
+                    pci->vdevfn = PCI_DEVFN(igd_slot, 0);
+                } else {
+                    pci->vdevfn = PCI_DEVFN(next_slot, 0);
+                    next_slot++;
+                }
+            }
+        }
+    }
     return 0;
 }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 07:32:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 07:32:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474312.735437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF97b-0005tx-9n; Tue, 10 Jan 2023 07:32:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474312.735437; Tue, 10 Jan 2023 07:32:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF97b-0005tP-3V; Tue, 10 Jan 2023 07:32:35 +0000
Received: by outflank-mailman (input) for mailman id 474312;
 Tue, 10 Jan 2023 07:32:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tiyo=5H=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pF97Z-0005Lt-Kw
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 07:32:33 +0000
Received: from sonic307-54.consmr.mail.gq1.yahoo.com
 (sonic307-54.consmr.mail.gq1.yahoo.com [98.137.64.30])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f0ce17e0-90b8-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 08:32:32 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic307.consmr.mail.gq1.yahoo.com with HTTP; Tue, 10 Jan 2023 07:32:29 +0000
Received: by hermes--production-ne1-7b69748c4d-bxfkx (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 00995f78f0d4fd001b9b8f1699ee7ed7; 
 Tue, 10 Jan 2023 07:32:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0ce17e0-90b8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673335949; bh=uHirQOlWVuB6Lr5OlbeNFyBvByaNMmueHYWxVEP496s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From:Subject:Reply-To; b=Oy5ECqB7RYxQ0xeY4XwHq6G+IpuDYUzxomzFfXpdKV2LtdbwjeL9SXK/vXNrjubU6nMMkJ4n+OZrZEi14hoqkCQJ46oh+MKfExW2o1V4CLoRoBSMzglUgLRPp3LcH7mYBmvmuJaroqQcRgozg6CyZjq/ORQtcLEtawJhCq/KKQvVPDw/AfbZkPQYiwJS7531vFgL9yVD4YShHpk7uE2zQo+80xC2ogQ8Z0dm+D/qrKUCEMjsBJwHXf1FK5pZHgitc522JtJHhhqPTpDaFoy/tfvLHhF6tEgYb/dqEpFNUG65uuL1XXfqPelbUvdoKR4R2zN6j3RjLmURk8kDHF3j9A==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673335949; bh=yjV/LpRrRp+Rn9XEJsZO0kH4qFFobZTwzfyhX1BZbge=; h=X-Sonic-MF:From:To:Subject:Date:From:Subject; b=TZcDZVjqG3+t0g2pfpzGnWXqRfH1Cc5fOII707cXJHnx1Q5brSzUiKUwRw7bEn2XcAXt+qfWazxqbGJ5Einu9fowzTvGQvi0M1ECq6JMxuDFcod+BhuyXtt30wRlx+77suhqBdSDYY7IZcC8aRFkCbrC9THGtUJWpC0wO+YPyEmqlQPeKh8hjTail0MdDVQ8YyzydYt32l8rnyWuutfyspYgsNoz33eOw7CYpisipMigeapZ2k7hN4ATrVdTusI4MVe4+AMd+e/Vpv8sio29lwrG+p2ZHo/gRoHnr6kavHHQVptvghGdyhan7ByQ+UnN7a/i3xe46E+djMTQwPNjNw==
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: anthony.perard@citrix.com
Cc: xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	qemu-devel@nongnu.org
Subject: [XEN PATCH v2 2/3] libxl/dm: Manage pci slot assignment for Intel IGD passthrough
Date: Tue, 10 Jan 2023 02:32:03 -0500
Message-Id: <76d06f5d01e01df316230def4f31037695f11c1a.1673300848.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673300848.git.brchuckz@aol.com>
References: <cover.1673300848.git.brchuckz@aol.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Length: 15168

By default, except for the ich9-usb-uhci devices, which libxl assigns
to slot 29 (0x1d), libxl defers to upstream qemu's automatic slot
assignment, which simply assigns each emulated device to the next
available slot on the pci bus. With this default behavior, libxl and
qemu will not configure the Intel IGD correctly: the Intel IGD must be
assigned to slot 2, but the default behavior assigns one of the
emulated devices to slot 2.

With this patch, libxl uses qemu command-line options to specify the
slot address of each pci device in the guest when gfx_passthru is
enabled for the Intel IGD. It uses the same simple algorithm of
assigning the next available slot, except that the emulated devices
start at slot 3 instead of slot 2. This slot assignment aims to
simulate the behavior of existing machines as closely as possible.

The default behavior (when igd gfx_passthru is disabled) of letting
qemu manage the slot addresses of emulated pci devices is preserved.
The patch also preserves the special treatment of the ich9 usb2
controller (ich9-usb-ehci1), which libxl currently assigns to
slot/function 29.7, and of the associated ich9-usb-uhciN devices,
which it assigns to slot 29 (0x1d).

For future maintenance of this code, it is important that pci devices
managed by the libxl__build_device_model_args_new function follow the
logic of this patch: when the guest is configured for Intel IGD
passthrough, use the new local counter next_slot to assign the slot
address instead of letting upstream qemu assign it; otherwise,
preserve the current behavior of letting qemu assign the slot address.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
The diff of this patch is easier to review when generated with the -b
(aka --ignore-space-change) option of diff/git-diff, which filters out
the changes that are due only to whitespace.

This patch is difficult to verify for correctness without testing all
the devices that the libxl__build_device_model_args_new function can
add. There are 12 places where the addr=%x option had to be added to
the arguments of qemu's "-device" option, corresponding to at least 12
different devices that could be affected if the patch contains
mistakes the compiler cannot notice. One such mistake is a missing
next_slot++; statement, which would cause qemu to try to assign a
device to a slot that is already assigned, which is an error in qemu.
I did enough testing to find and fix some mistakes before submitting
the patch, but I cannot guarantee there are no others, because I do
not have the resources to test the many possible configurations this
patch could affect.

v2: No changes to this patch since v1

 tools/libs/light/libxl_dm.c | 168 ++++++++++++++++++++++++++++++------
 1 file changed, 141 insertions(+), 27 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 2048815611..2720b5d4d0 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1205,6 +1205,20 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     const char *path, *chardev;
     bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
     int rc;
+    int next_slot;
+    bool configure_pci_for_igd = false;
+    /*
+     * next_slot is only used when we need to configure the pci
+     * slots for the Intel IGD. Slot 2 will be for the Intel IGD.
+     */
+    next_slot = 3;
+    if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+        enum libxl_gfx_passthru_kind gfx_passthru_kind =
+                        libxl__detect_gfx_passthru_kind(gc, guest_config);
+        if (gfx_passthru_kind == LIBXL_GFX_PASSTHRU_KIND_IGD) {
+            configure_pci_for_igd = true;
+        }
+    }
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1372,6 +1386,20 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 
         if (b_info->cmdline)
             flexarray_vappend(dm_args, "-append", b_info->cmdline, NULL);
+        /*
+         * When the Intel IGD is configured for primary graphics
+         * passthrough, we need to manually add the xen platform
+         * device because the QEMU machine type is "pc". Add it first to
+         * simulate the behavior of the "xenfv" QEMU machine type which
+         * always adds the xen platform device first. But in this case it
+         * will be at slot 3 because we are reserving slot 2 for the IGD.
+         */
+        if (configure_pci_for_igd &&
+            libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+            flexarray_append_pair(dm_args, "-device",
+                        GCSPRINTF("xen-platform,addr=%x", next_slot));
+            next_slot++;
+        }
 
         /* Find out early if one of the disk is on the scsi bus and add a scsi
          * controller. This is done ahead to keep the same behavior as previous
@@ -1381,7 +1409,14 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                 continue;
             }
             if (strncmp(disks[i].vdev, "sd", 2) == 0) {
-                flexarray_vappend(dm_args, "-device", "lsi53c895a", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("lsi53c895a,addr=%x", next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "lsi53c895a",
+                                      NULL);
+                }
                 break;
             }
         }
@@ -1436,31 +1471,67 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, "-spice");
             flexarray_append(dm_args, spiceoptions);
             if (libxl_defbool_val(b_info->u.hvm.spice.vdagent)) {
-                flexarray_vappend(dm_args, "-device", "virtio-serial",
-                    "-chardev", "spicevmc,id=vdagent,name=vdagent", "-device",
-                    "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
-                    NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("virtio-serial,addr=%x", next_slot),
+                        "-chardev", "spicevmc,id=vdagent,name=vdagent",
+                        "-device",
+                        "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
+                        NULL);
+                next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "virtio-serial",
+                        "-chardev", "spicevmc,id=vdagent,name=vdagent",
+                        "-device",
+                        "virtserialport,chardev=vdagent,name=com.redhat.spice.0",
+                        NULL);
+                }
             }
         }
 
         switch (b_info->u.hvm.vga.kind) {
         case LIBXL_VGA_INTERFACE_TYPE_STD:
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("VGA,vgamem_mb=%d",
-                libxl__sizekb_to_mb(b_info->video_memkb)));
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("VGA,addr=%x,vgamem_mb=%d", next_slot,
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("VGA,vgamem_mb=%d",
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+            }
             break;
         case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("cirrus-vga,vgamem_mb=%d",
-                libxl__sizekb_to_mb(b_info->video_memkb)));
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("cirrus-vga,addr=%x,vgamem_mb=%d", next_slot,
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("cirrus-vga,vgamem_mb=%d",
+                    libxl__sizekb_to_mb(b_info->video_memkb)));
+            }
             break;
         case LIBXL_VGA_INTERFACE_TYPE_NONE:
             break;
         case LIBXL_VGA_INTERFACE_TYPE_QXL:
             /* QXL have 2 ram regions, ram and vram */
-            flexarray_append_pair(dm_args, "-device",
-                GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64",ram_size_mb=%"PRIu64,
-                (b_info->video_memkb/2/1024), (b_info->video_memkb/2/1024) ) );
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("qxl-vga,addr=%x,vram_size_mb=%"PRIu64
+                    ",ram_size_mb=%"PRIu64, next_slot,
+                    (b_info->video_memkb/2/1024),
+                    (b_info->video_memkb/2/1024) ) );
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device",
+                    GCSPRINTF("qxl-vga,vram_size_mb=%"PRIu64
+                    ",ram_size_mb=%"PRIu64,
+                    (b_info->video_memkb/2/1024),
+                    (b_info->video_memkb/2/1024) ) );
+            }
             break;
         default:
             LOGD(ERROR, guest_domid, "Invalid emulated video card specified");
@@ -1496,8 +1567,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         } else if (b_info->u.hvm.usbversion) {
             switch (b_info->u.hvm.usbversion) {
             case 1:
-                flexarray_vappend(dm_args,
-                    "-device", "piix3-usb-uhci,id=usb", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("piix3-usb-uhci,addr=%x,id=usb",
+                                  next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args,
+                        "-device", "piix3-usb-uhci,id=usb", NULL);
+                }
                 break;
             case 2:
                 flexarray_append_pair(dm_args, "-device",
@@ -1509,8 +1587,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                         i, 2*(i-1), i-1));
                 break;
             case 3:
-                flexarray_vappend(dm_args,
-                    "-device", "nec-usb-xhci,id=usb", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("nec-usb-xhci,addr=%x,id=usb",
+                                  next_slot), NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args,
+                        "-device", "nec-usb-xhci,id=usb", NULL);
+                }
                 break;
             default:
                 LOGD(ERROR, guest_domid, "usbversion parameter is invalid, "
@@ -1542,8 +1627,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
 
             switch (soundhw) {
             case LIBXL__QEMU_SOUNDHW_HDA:
-                flexarray_vappend(dm_args, "-device", "intel-hda",
-                                  "-device", "hda-duplex", NULL);
+                if (configure_pci_for_igd) {
+                    flexarray_vappend(dm_args, "-device",
+                        GCSPRINTF("intel-hda,addr=%x", next_slot),
+                        "-device", "hda-duplex", NULL);
+                    next_slot++;
+                } else {
+                    flexarray_vappend(dm_args, "-device", "intel-hda",
+                                      "-device", "hda-duplex", NULL);
+                }
                 break;
             default:
                 flexarray_append_pair(dm_args, "-device",
@@ -1573,10 +1665,18 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                                                 guest_domid, nics[i].devid,
                                                 LIBXL_NIC_TYPE_VIF_IOEMU);
                 flexarray_append(dm_args, "-device");
-                flexarray_append(dm_args,
-                   GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s",
-                             nics[i].model, nics[i].devid,
-                             nics[i].devid, smac));
+                if (configure_pci_for_igd) {
+                    flexarray_append(dm_args,
+                        GCSPRINTF("%s,addr=%x,id=nic%d,netdev=net%d,mac=%s",
+                                  nics[i].model, next_slot, nics[i].devid,
+                                  nics[i].devid, smac));
+                    next_slot++;
+                } else {
+                    flexarray_append(dm_args,
+                        GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s",
+                                  nics[i].model, nics[i].devid,
+                                  nics[i].devid, smac));
+                }
                 flexarray_append(dm_args, "-netdev");
                 flexarray_append(dm_args,
                                  GCSPRINTF("type=tap,id=net%d,ifname=%s,"
@@ -1865,8 +1965,15 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         flexarray_append_pair(dm_envs, "XEN_DOMAIN_ID", GCSPRINTF("%d", guest_domid));
 
-        if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI)
-            flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
+        if (b_info->u.hvm.hdtype == LIBXL_HDTYPE_AHCI) {
+            if (configure_pci_for_igd) {
+                flexarray_append_pair(dm_args, "-device",
+                            GCSPRINTF("ahci,addr=%x,id=ahci0", next_slot));
+                next_slot++;
+            } else {
+                flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
+            }
+        }
         for (i = 0; i < num_disks; i++) {
             int disk, part;
             int dev_number =
@@ -2043,7 +2150,14 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         switch (b_info->u.hvm.vendor_device) {
         case LIBXL_VENDOR_DEVICE_XENSERVER:
             flexarray_append(dm_args, "-device");
-            flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
+            if (configure_pci_for_igd) {
+                flexarray_append(dm_args,
+                    GCSPRINTF("xen-pvdevice,addr=%x,device-id=0xc000",
+                              next_slot));
+                next_slot++;
+            } else {
+                flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
+            }
             break;
         default:
             break;
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 07:48:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 07:48:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474340.735460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9NA-0000GF-VT; Tue, 10 Jan 2023 07:48:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474340.735460; Tue, 10 Jan 2023 07:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9NA-0000G3-R5; Tue, 10 Jan 2023 07:48:40 +0000
Received: by outflank-mailman (input) for mailman id 474340;
 Tue, 10 Jan 2023 07:48:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF9N9-0000Ft-6l; Tue, 10 Jan 2023 07:48:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF9N9-0007Bi-5E; Tue, 10 Jan 2023 07:48:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF9N8-0005zr-QE; Tue, 10 Jan 2023 07:48:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pF9N8-0000iy-Pj; Tue, 10 Jan 2023 07:48:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TVToy+jeKt1DzVB1dJuKD4F62A8Wmozc/5IZ0Lufei0=; b=kecf6nYbxH0Tl2ytRQCxO04XMF
	akS4Z5BDMVUxvooJ9riSr0DF8kolT9MpLAFqmps20nXh4K5TSPLdKeWXml2JwFhkcXTlNHaBb4qC5
	LKrQsXfQ8nORsDMgkQILUxvkIL3F+ioHkJwDnPxv+ewb/A47qOfN6dcHF5MUg8T0exV8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175665-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175665: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1fe4fd6f5cad346e598593af36caeadc4f5d4fa9
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 07:48:38 +0000

flight 175665 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175665/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                1fe4fd6f5cad346e598593af36caeadc4f5d4fa9
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   94 days
Failing since        173470  2022-10-08 06:21:34 Z   94 days  197 attempts
Testing same since   175634  2023-01-09 01:40:59 Z    1 days    3 attempts

------------------------------------------------------------
3317 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505645 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:05:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:05:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474355.735477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9cv-0003Gy-Q0; Tue, 10 Jan 2023 08:04:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474355.735477; Tue, 10 Jan 2023 08:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9cv-0003Gr-Mp; Tue, 10 Jan 2023 08:04:57 +0000
Received: by outflank-mailman (input) for mailman id 474355;
 Tue, 10 Jan 2023 08:04:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF9ct-0003Gh-PM; Tue, 10 Jan 2023 08:04:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF9ct-0007zY-Mq; Tue, 10 Jan 2023 08:04:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pF9ct-0006NO-GU; Tue, 10 Jan 2023 08:04:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pF9ct-0007nW-G0; Tue, 10 Jan 2023 08:04:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NrXu3tLBKgJjiFNZg1YGlt3SnSBYSoFCmFAOLEvkgtI=; b=E10PODYds9/trZMSGCQKJlkOyK
	cgibozrgVtnevO+9JtY1FFE1/1uAZ+enRAV8SxveTwHfuXP4RCxiT4HQGHvtRbuY72+sM0uBx+0hH
	hl/9TdaVPf9dVIEvy1jekoo32LMIDkvA+omM+UAhG2/DjVFGRufIpW0cEy/doMWPauPA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175681-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175681: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 08:04:55 +0000

flight 175681 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175681/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    2 days
Failing since        175627  2023-01-08 14:40:14 Z    1 days    9 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:07:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:07:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474363.735488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9ex-0003qe-6n; Tue, 10 Jan 2023 08:07:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474363.735488; Tue, 10 Jan 2023 08:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9ex-0003qX-3P; Tue, 10 Jan 2023 08:07:03 +0000
Received: by outflank-mailman (input) for mailman id 474363;
 Tue, 10 Jan 2023 08:07:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TLW0=5H=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pF9ev-0003qN-OP
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:07:01 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c272e00a-90bd-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:07:00 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id E669438644;
 Tue, 10 Jan 2023 08:06:59 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 42E641358A;
 Tue, 10 Jan 2023 08:06:59 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id OCfgDqMcvWP8BQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 10 Jan 2023 08:06:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c272e00a-90bd-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673338019; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zFXGBYfEGk+Jg1p3t9ncmeM1OkJ5SOgglsWjtD+4D3E=;
	b=MU0PDUFJ2NOK1/QtTmhG08jj+/hc3DiAuRHDHTX1+NVJc1guSrtQvC8BVMTDUdfLTtyHGF
	6APJegbUQMpu/JSlQUeDOXjwRXvediBWLPQiIOxWIiWdsx/2uu14gNsCvHo9CWNCj9U567
	FAJqat2f5hJEHt/USBs7CM7dZZ25UOc=
Message-ID: <76596dba-efb7-eab1-bca9-60ab67823ade@suse.com>
Date: Tue, 10 Jan 2023 09:06:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] hvc/xen: lock console list traversal
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jiri Slaby <jirislaby@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Jan Beulich
 <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, linuxppc-dev@lists.ozlabs.org
References: <20221130163611.14686-1-roger.pau@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20221130163611.14686-1-roger.pau@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------6nQNq6G02bf1dW0c6lDPjjkY"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------6nQNq6G02bf1dW0c6lDPjjkY
Content-Type: multipart/mixed; boundary="------------Kb0uGwizoLvibEvRPqCzPKl8";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jiri Slaby <jirislaby@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Jan Beulich
 <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, linuxppc-dev@lists.ozlabs.org
Message-ID: <76596dba-efb7-eab1-bca9-60ab67823ade@suse.com>
Subject: Re: [PATCH v2] hvc/xen: lock console list traversal
References: <20221130163611.14686-1-roger.pau@citrix.com>
In-Reply-To: <20221130163611.14686-1-roger.pau@citrix.com>

--------------Kb0uGwizoLvibEvRPqCzPKl8
Content-Type: multipart/mixed; boundary="------------gP0Ef10fixcPW6kf2HHiGOoT"

--------------gP0Ef10fixcPW6kf2HHiGOoT
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 30.11.22 17:36, Roger Pau Monne wrote:
> The currently lockless access to the xen console list in
> vtermno_to_xencons() is incorrect, as additions and removals from the
> list can happen anytime, and as such the traversal of the list to get
> the private console data for a given termno needs to happen with the
> lock held.  Note users that modify the list already do so with the
> lock taken.
> 
> Adjust current lock takers to use the _irq{save,restore} helpers,
> since the context in which vtermno_to_xencons() is called can have
> interrupts disabled.  Use the _irq{save,restore} set of helpers to
> switch the current callers to disable interrupts in the locked region.
> I haven't checked if existing users could instead use the _irq
> variant, as I think it's safer to use _irq{save,restore} upfront.
> 
> While there switch from using list_for_each_entry_safe to
> list_for_each_entry: the current entry cursor won't be removed as
> part of the code in the loop body, so using the _safe variant is
> pointless.
> 
> Fixes: 02e19f9c7cac ('hvc_xen: implement multiconsole support')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Pushed to xen/tip.git for-linus-6.2


Juergen
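[Archive editor's note: the locked list traversal described in the quoted patch
description can be sketched roughly as below. This is an illustrative,
non-compiling kernel-style fragment, not the actual patch; the identifiers
(xencons_lock, xenconsoles, struct xencons_info) follow drivers/tty/hvc/hvc_xen.c
from memory and may not match the committed code exactly.]

```c
/* Hypothetical sketch of the pattern: look up a console by vtermno
 * while holding the list lock, using _irqsave/_irqrestore because
 * some callers may already run with interrupts disabled. */
static struct xencons_info *vtermno_to_xencons(int vtermno)
{
	struct xencons_info *entry, *ret = NULL;
	unsigned long flags;

	spin_lock_irqsave(&xencons_lock, flags);

	/* Plain list_for_each_entry suffices: the cursor entry is never
	 * removed inside the loop body, so the _safe variant (which
	 * pre-fetches the next element) would be pointless here. */
	list_for_each_entry(entry, &xenconsoles, list) {
		if (entry->vtermno == vtermno) {
			ret = entry;
			break;
		}
	}

	spin_unlock_irqrestore(&xencons_lock, flags);

	return ret;
}
```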
--------------gP0Ef10fixcPW6kf2HHiGOoT
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------gP0Ef10fixcPW6kf2HHiGOoT--

--------------Kb0uGwizoLvibEvRPqCzPKl8--

--------------6nQNq6G02bf1dW0c6lDPjjkY
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO9HKIFAwAAAAAACgkQsN6d1ii/Ey+t
twf/SCL+Qbrvyi0dgYX3s8xBkFvBU9UzEaa6MxDE1aev/pt1IndKUcFrciE11V00rdeqb+jb/PbR
K5b1n843/V+EbAEFfAIV9K9LCH9YiDFqUJ49yKQEL58vieL2FVD4OvZTWuKTgeTwu1kPrD4O4m7V
UVhWsedPcHGFHWvLs3e8oujnke1Jhj2lfD5DqGWdht1X/4te+j6Owzy3JO69M3FYVRZYzn+72qv/
NON65BQg2FuDujwdzvDP4Vvsqz684JxVc3qi9bI9JH7VJH1FbwApJ1mQKU5xmWx24MxfAPgNEajB
uaJ48m4NrF4BTw4QmhmEQ4vlGS2Ilb4GalWyksQypA==
=E2fh
-----END PGP SIGNATURE-----

--------------6nQNq6G02bf1dW0c6lDPjjkY--


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:07:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:07:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474369.735499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9fZ-0004Qv-Kz; Tue, 10 Jan 2023 08:07:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474369.735499; Tue, 10 Jan 2023 08:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9fZ-0004Qk-I3; Tue, 10 Jan 2023 08:07:41 +0000
Received: by outflank-mailman (input) for mailman id 474369;
 Tue, 10 Jan 2023 08:07:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TLW0=5H=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pF9fY-0004IQ-Cn
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:07:40 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d8637e25-90bd-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:07:38 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A68AB4D6EE;
 Tue, 10 Jan 2023 08:07:36 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 374CE1358A;
 Tue, 10 Jan 2023 08:07:36 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id LNETDMgcvWNKBgAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 10 Jan 2023 08:07:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8637e25-90bd-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673338056; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xohioAh1kapJutPJWQnyJIvtrut+K4fCdUNvyrvvEps=;
	b=F/1x9PdCXjPQpN04iGPzwZgJ9ewZMUL6Am2p+uO7S6oRkx1yxxHLC2OkN2nirzqHxJ/eiO
	lkmdc9OI/RFJP8wF1UH6d3f1vS4dNjAaB2KDnNinVFybpOkVE9FUHcIiTtwx/x5UwgvwJP
	3X8qReklSQNoZijRLpF81364vnxmPVA=
Message-ID: <8a2abe27-361a-aab1-60fe-00a3cc8684ff@suse.com>
Date: Tue, 10 Jan 2023 09:07:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen: make remove callback of xen driver void returned
Content-Language: en-US
To: Dawei Li <set_pte_at@outlook.com>, sstabellini@kernel.org,
 oleksandr_tyshchenko@epam.com
Cc: roger.pau@citrix.com, peterhuewe@gmx.de,
 oleksandr_andrushchenko@epam.com, airlied@gmail.com,
 dmitry.torokhov@gmail.com, wei.liu@kernel.org, bhelgaas@google.com,
 jejb@linux.ibm.com, gregkh@linuxfoundation.org, ericvh@gmail.com,
 tiwai@suse.com, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <TYCP286MB23238119AB4DF190997075C9CAE39@TYCP286MB2323.JPNP286.PROD.OUTLOOK.COM>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <TYCP286MB23238119AB4DF190997075C9CAE39@TYCP286MB2323.JPNP286.PROD.OUTLOOK.COM>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0lgsG4DjM2bogqj9tDjcsDeD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0lgsG4DjM2bogqj9tDjcsDeD
Content-Type: multipart/mixed; boundary="------------r84Y64pzyXf8LsYbXQjpT04r";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Dawei Li <set_pte_at@outlook.com>, sstabellini@kernel.org,
 oleksandr_tyshchenko@epam.com
Cc: roger.pau@citrix.com, peterhuewe@gmx.de,
 oleksandr_andrushchenko@epam.com, airlied@gmail.com,
 dmitry.torokhov@gmail.com, wei.liu@kernel.org, bhelgaas@google.com,
 jejb@linux.ibm.com, gregkh@linuxfoundation.org, ericvh@gmail.com,
 tiwai@suse.com, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Message-ID: <8a2abe27-361a-aab1-60fe-00a3cc8684ff@suse.com>
Subject: Re: [PATCH v2] xen: make remove callback of xen driver void returned
References: <TYCP286MB23238119AB4DF190997075C9CAE39@TYCP286MB2323.JPNP286.PROD.OUTLOOK.COM>
In-Reply-To: <TYCP286MB23238119AB4DF190997075C9CAE39@TYCP286MB2323.JPNP286.PROD.OUTLOOK.COM>

--------------r84Y64pzyXf8LsYbXQjpT04r
Content-Type: multipart/mixed; boundary="------------BJxvnO001IhyRZyK5BKNLhzp"

--------------BJxvnO001IhyRZyK5BKNLhzp
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 13.12.22 16:46, Dawei Li wrote:
> Since commit fc7a6209d571 ("bus: Make remove callback return void")
> forces bus_type::remove be void-returned, it doesn't make much sense for
> any bus based driver implementing remove callback to return non-void to
> its caller.
> 
> This change is for xen bus based drivers.
> 
> Acked-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Dawei Li <set_pte_at@outlook.com>

Pushed to xen/tip.git for-linus-6.2


Juergen
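[Archive editor's note: the signature change the commit message describes can be sketched in standalone C as below. All names (demo_device, demo_driver, demo_unbind) are illustrative stand-ins, not the real xenbus API; the point is only that once the bus core ignores the remove return value, the callback might as well be void.]

```c
/* Sketch: a bus-level remove callback converted from int to void.
 * Before the change the callback was
 *     int  (*remove)(struct demo_device *dev);
 * but the bus core discarded the result, so it becomes void. */
#include <assert.h>
#include <stddef.h>

struct demo_device { const char *name; };

struct demo_driver {
    void (*remove)(struct demo_device *dev); /* was: int (*remove)(...) */
};

static int remove_calls;

static void demo_remove(struct demo_device *dev)
{
    (void)dev;          /* release per-device resources here */
    remove_calls++;     /* nothing useful to report upward */
}

/* The bus core's unbind path: with a void callback there is no
 * return value left to (mis)interpret. */
static void demo_unbind(struct demo_driver *drv, struct demo_device *dev)
{
    if (drv->remove)
        drv->remove(dev);
}

int demo_run(void)
{
    struct demo_device dev = { "demo" };
    struct demo_driver drv = { .remove = demo_remove };

    demo_unbind(&drv, &dev);
    return remove_calls;
}
```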
--------------BJxvnO001IhyRZyK5BKNLhzp--

--------------r84Y64pzyXf8LsYbXQjpT04r--

--------------0lgsG4DjM2bogqj9tDjcsDeD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO9HMcFAwAAAAAACgkQsN6d1ii/Ey+f
1wf+K+5GUrDsbaaoBFao/iA/yd9cCF6PTSRa8C28Z0Xu+s5K30n0thAK7JjyRMZJmzbemu4XZVA6
OxCGIQdrkLTmgoGRTZjdYbFhz7C4ByTAEqkgk72umn3WZEqthOfZY1EE2Imd1beYJzeLfjM6U32h
DuUK14cuf7PbMVfAxpMgPvk+awBFO7fjD5gdNRpAE/F1eseawanTnJSeD5AVm4MRJCwnQr9/8xWA
M0lm0kk3xCfAu1tlqV2XXuAXF8pZCdXr6/heuDpg2UwwakevfI3rHCsGO85anJ1vWiWJR+5N3hrz
8f7fsLu5VuBRJPVziQO8zwQUNBXYTCjlBj7/Ba8lJQ==
=VY/4
-----END PGP SIGNATURE-----

--------------0lgsG4DjM2bogqj9tDjcsDeD--


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:08:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:08:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474375.735510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9fu-0004sH-Tm; Tue, 10 Jan 2023 08:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474375.735510; Tue, 10 Jan 2023 08:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9fu-0004s8-QH; Tue, 10 Jan 2023 08:08:02 +0000
Received: by outflank-mailman (input) for mailman id 474375;
 Tue, 10 Jan 2023 08:08:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TLW0=5H=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pF9ft-0004Qe-Ml
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:08:01 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e627c658-90bd-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:08:00 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A12B03E6DB;
 Tue, 10 Jan 2023 08:07:59 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 58EC01358A;
 Tue, 10 Jan 2023 08:07:59 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id RSjlE98cvWN8BgAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 10 Jan 2023 08:07:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e627c658-90bd-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673338079; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=w0R2jA+ecFgshta3vkDpZTjBNhdF5coLo4TxsrYpzyw=;
	b=GYs+Bd0xk7tL3+jMhddK1YqdH0y8+YWKn/QW0Usjy068JSSa3m2T1TIYuNDR/J5RDDBt34
	qPBzDL4Hj17EH6WuSyKzKJbmp/bl0a1C+YJfn7kapTvp3vGqQdbnJik8osI+6bIiYmYwP7
	2RxpB0T9FicBnqLefEhE0gSrceVH9Os=
Message-ID: <b220cdf1-7d8c-e8f6-eb22-7a93b0b330ea@suse.com>
Date: Tue, 10 Jan 2023 09:07:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1] xen/pvcalls: free active map buffer on
 pvcalls_front_free_map
Content-Language: en-US
To: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
References: <6a762ee32dd655cbb09a4aa0e2307e8919761311.1671531297.git.oleksii_moisieiev@epam.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <6a762ee32dd655cbb09a4aa0e2307e8919761311.1671531297.git.oleksii_moisieiev@epam.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------8gQ0uGiBiOXOeEGU0HSLL09N"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------8gQ0uGiBiOXOeEGU0HSLL09N
Content-Type: multipart/mixed; boundary="------------u9z5dqxRb0ujLeyXzcx9HgSU";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Message-ID: <b220cdf1-7d8c-e8f6-eb22-7a93b0b330ea@suse.com>
Subject: Re: [PATCH v1] xen/pvcalls: free active map buffer on
 pvcalls_front_free_map
References: <6a762ee32dd655cbb09a4aa0e2307e8919761311.1671531297.git.oleksii_moisieiev@epam.com>
In-Reply-To: <6a762ee32dd655cbb09a4aa0e2307e8919761311.1671531297.git.oleksii_moisieiev@epam.com>

--------------u9z5dqxRb0ujLeyXzcx9HgSU
Content-Type: multipart/mixed; boundary="------------KYcv5NY9RdtDLbyPZBpgAZJU"

--------------KYcv5NY9RdtDLbyPZBpgAZJU
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.12.22 15:50, Oleksii Moisieiev wrote:
> Data buffer for active map is allocated in alloc_active_ring and freed
> in free_active_ring function, which is used only for the error
> cleanup. pvcalls_front_release is calling pvcalls_front_free_map which
> ends foreign access for this buffer, but doesn't free allocated pages.
> Call free_active_ring to clean all allocated resources.
> 
> Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>

Pushed to xen/tip.git for-linus-6.2


Juergen
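[Archive editor's note: the leak the commit message fixes can be illustrated with the standalone sketch below. The *_sketch names are made up for illustration, not the pvcalls code; the point is that revoking shared access to a buffer and freeing its backing pages are two separate steps, and a release path that performs only the first leaks the pages.]

```c
/* Sketch: revoke-access and free-pages are distinct cleanup steps. */
#include <assert.h>
#include <stdlib.h>

struct ring_sketch {
    void *pages;   /* buffer that was shared with the other domain */
    int   shared;  /* stands in for an active grant of foreign access */
};

static struct ring_sketch *alloc_active_ring_sketch(size_t bytes)
{
    struct ring_sketch *r = calloc(1, sizeof(*r));
    if (!r)
        return NULL;
    r->pages  = malloc(bytes);
    r->shared = 1;
    return r;
}

static void end_foreign_access_sketch(struct ring_sketch *r)
{
    r->shared = 0;   /* access revoked, but pages still allocated */
}

static void free_active_ring_sketch(struct ring_sketch *r)
{
    free(r->pages);  /* the step the fix adds to the release path */
    free(r);
}

int sketch_release(void)
{
    struct ring_sketch *r = alloc_active_ring_sketch(4096);
    if (!r)
        return -1;
    end_foreign_access_sketch(r);
    free_active_ring_sketch(r);  /* without this, r->pages would leak */
    return 0;
}
```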
--------------KYcv5NY9RdtDLbyPZBpgAZJU--

--------------u9z5dqxRb0ujLeyXzcx9HgSU--

--------------8gQ0uGiBiOXOeEGU0HSLL09N
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO9HN4FAwAAAAAACgkQsN6d1ii/Ey+7
5AgAhFX4FfJ6FhYGxjTFPefYV/e5GSvSZuMf+G8RGk5j6ehl8KBdnWO+m4l9T+lXGSLrxJZ8phvl
Heo2Z6UvTHmaoYajeIDfdmdwDfJxiNHWygW2qAFkBypCd8pKeZhk9VPiEYpsCpLDEjOlth+ggViX
wbzNwmUAslTbrWZniHPaqVdsCFV/JPdKf5/olC37FaRoVy26U4PYkUcnflMSq8jaXFfLNedOlGva
TvYd/zyEsROKkSuzBpbxRp3lcng41lzlYZcT/ZFiXECdnU4t9iHtBraYZXySbfbuCW/E4UvHWP43
02ip9dMJCiy8mzOYcO4OABjVJNr4JZad80XtOxqFBA==
=BVWP
-----END PGP SIGNATURE-----

--------------8gQ0uGiBiOXOeEGU0HSLL09N--


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:08:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:08:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474377.735521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9gJ-0005Th-6r; Tue, 10 Jan 2023 08:08:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474377.735521; Tue, 10 Jan 2023 08:08:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9gJ-0005Ta-46; Tue, 10 Jan 2023 08:08:27 +0000
Received: by outflank-mailman (input) for mailman id 474377;
 Tue, 10 Jan 2023 08:08:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TLW0=5H=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pF9gH-0004IQ-OG
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:08:25 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3e1ec29-90bd-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:08:24 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1FE54676B3;
 Tue, 10 Jan 2023 08:08:23 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BD4431358A;
 Tue, 10 Jan 2023 08:08:22 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id GNjLLPYcvWO+BgAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 10 Jan 2023 08:08:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3e1ec29-90bd-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673338103; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=EOLvc5pnec6Qvz0IU+meu3lE65aCc308JtHNKiojSLY=;
	b=RdVtED4QkMdawYnUheFwonqrPjjaeMxanOr0oKCs0M9/UOFRKUnxjmXEqvW9/yH9KdH1jN
	rc3H8Mhp9GwaScjUMpfXNXZWSHUkwc+HSfMRBmMoyLwYk8q3mXAnRNNyaokzfo9lsWOJ5G
	3obgGgtI7piKJWWw/dclODpSkreYogY=
Message-ID: <7c844c39-2ca0-0ee7-0c05-66932403fa30@suse.com>
Date: Tue, 10 Jan 2023 09:08:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] x86/xen: Remove the unused function p2m_index()
Content-Language: en-US
To: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Cc: boris.ostrovsky@oracle.com, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Abaci Robot <abaci@linux.alibaba.com>
References: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0PNXtjnnvM7ewaD6a96IAjRq"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0PNXtjnnvM7ewaD6a96IAjRq
Content-Type: multipart/mixed; boundary="------------F5pGtvD6Hmk9tBX0D0MdPuj0";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Cc: boris.ostrovsky@oracle.com, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Abaci Robot <abaci@linux.alibaba.com>
Message-ID: <7c844c39-2ca0-0ee7-0c05-66932403fa30@suse.com>
Subject: Re: [PATCH v2] x86/xen: Remove the unused function p2m_index()
References: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>
In-Reply-To: <20230105090141.36248-1-jiapeng.chong@linux.alibaba.com>

--------------F5pGtvD6Hmk9tBX0D0MdPuj0
Content-Type: multipart/mixed; boundary="------------pRGhk8J2LAf4W2eEJiqH0h3E"

--------------pRGhk8J2LAf4W2eEJiqH0h3E
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 05.01.23 10:01, Jiapeng Chong wrote:
> The function p2m_index is defined in the p2m.c file, but not called
> elsewhere, so remove this unused function.
> 
> arch/x86/xen/p2m.c:137:24: warning: unused function 'p2m_index'.
> 
> Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3557
> Reported-by: Abaci Robot <abaci@linux.alibaba.com>
> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>

Pushed to xen/tip.git for-linus-6.2


Juergen
--------------pRGhk8J2LAf4W2eEJiqH0h3E
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------pRGhk8J2LAf4W2eEJiqH0h3E--

--------------F5pGtvD6Hmk9tBX0D0MdPuj0--

--------------0PNXtjnnvM7ewaD6a96IAjRq
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO9HPYFAwAAAAAACgkQsN6d1ii/Ey9F
lwgAimMoYe0xETfpgDsSG8zRtW6MWXY+2UTVahqK1Y1e7wgJxgCKy8vumEKadDLlxOl+z2e40y1q
RfL7syF3t6hhWbqt/VlsBSYSoaVdC0Wi9ML1VGcI5BGng7QNwr6alTmtUN4kqVY/z4Es2NDU14VK
Exnp2y4vK1J1jNcOOPLppcewbtbmC9beSlHAQ3Z2fHKOSJb/vejUAxXQPWc30LHZl8HgDjF2TMz7
sIhvRMVU5RcmQn/RlUBSEu08SWXrB86iFiuS33jMYC+iKU1INTDPLiV7iFQp//9GWR3EZkk+PbMf
i11oUoPUcXSQn0gNErHuNZLVAptfo9os5n2UvBpS1Q==
=IQpu
-----END PGP SIGNATURE-----

--------------0PNXtjnnvM7ewaD6a96IAjRq--


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:09:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474387.735532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9hW-0006Ah-GZ; Tue, 10 Jan 2023 08:09:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474387.735532; Tue, 10 Jan 2023 08:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9hW-0006Aa-Dl; Tue, 10 Jan 2023 08:09:42 +0000
Received: by outflank-mailman (input) for mailman id 474387;
 Tue, 10 Jan 2023 08:09:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pF9hV-0006AM-47
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:09:41 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 213f1f2f-90be-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:09:39 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id hw16so14615723ejc.10
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 00:09:39 -0800 (PST)
Received: from 2a02.2378.1014.d6d9.ip.kyivstar.net
 ([2a02:2378:1014:d6d9:98bb:6889:a785:5d8c])
 by smtp.gmail.com with ESMTPSA id
 14-20020a170906308e00b0084d3acda5fasm3165190ejv.189.2023.01.10.00.09.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 00:09:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 213f1f2f-90be-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=1wqDTBTYa0JxVyZO5wjAE7AZ1AFSwlVaPJIYsCDPwwE=;
        b=LB+V2D1ACwno6eZ0473SRU+SA1WtKlU85/d5U6OJO+5P0QYajaCNkpekazAruO6cJm
         uHGWIaBYAFxDPJpwxnrJuEemYwqtKFJziDMfEwbeV3yqt62xfqBIvnW21LBiXI7ee7gq
         1CS5IIBF3nnj5IsM8y6IhTqCUXw/+mqmXXw0TvmfySZZo7IUWGO9qq+G54/hv/XQpvD7
         JTG96E8qh/3wJpiPpvAChKBG4E7omT/y9B/JoL0k7QcizfD0GZg82JGO0RjqFrIR60cv
         Z3ZTlOtcuj4ofHR3lq/5vBVR7NY961dWPOZ0ss7TIj8WSfzoThlQNyLyeQjO8WsVqFML
         HJZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=1wqDTBTYa0JxVyZO5wjAE7AZ1AFSwlVaPJIYsCDPwwE=;
        b=GDfz3HaILMOHRynyjbnCmWhLq9WG/bsUi8wQvJ0OxqGxlP+0mACGkervbHn/M1qxY9
         yXZyeDewZ1Hzm2Bs0DUawkdPb8AVfk4ATH/A2lBO1OHM9nYs5rjMlfFf5t/Xnb465k7R
         flJVBbuSC2Mi9TBrS2yci4nAr8JrrZQLxLBgmTC/Z+d68+U3/U7IztYZELduVzc2mRw2
         TLD9fSV2f/z7jV3N/LTJijnamf1pQ6TJ60EHWQ1rMpcPsksTF9qq5sYtWIUbmz5U3ZLb
         2MhuQZeDm6NAJtFLXouf9PL6BwvQDCDanowFEkLcsWabwXNZDkcy+T2hMLZDAer7FKHb
         qASw==
X-Gm-Message-State: AFqh2kpUSe6pqhzkRNTln2Wy+IkIyL1UDpRmywXefAVaXsabxJMfDuOc
	93MUTwnrLCWcBYguKzaimU0=
X-Google-Smtp-Source: AMrXdXs4j6+OHvVyZWiZxdSan/h33AzijSxyP2fAtJ+jxhDsyI/qow+wWYt7uGbXC5vU4ngVJ8O9KA==
X-Received: by 2002:a17:906:8492:b0:7c1:6b46:a722 with SMTP id m18-20020a170906849200b007c16b46a722mr59135972ejx.56.1673338179013;
        Tue, 10 Jan 2023 00:09:39 -0800 (PST)
Message-ID: <8488a52b2e15944b8882c9c93344fb1ced171056.camel@gmail.com>
Subject: Re: [PATCH v2 8/8] automation: add RISC-V smoke test
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Gianluca
 Guida <gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Date: Tue, 10 Jan 2023 10:09:37 +0200
In-Reply-To: <alpine.DEB.2.22.394.2301091820580.1342743@ubuntu-linux-20-04-desktop>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
	 <494c2fd1e046de20c2fa24be3989cc6adde8fdbe.1673278109.git.oleksii.kurochko@gmail.com>
	 <alpine.DEB.2.22.394.2301091820580.1342743@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-09 at 18:21 -0800, Stefano Stabellini wrote:
> On Mon, 9 Jan 2023, Oleksii Kurochko wrote:
> > Add a check that the message 'Hello from C env' is present in the
> > log file, to be sure that the stack is set up and the C part of
> > early printk is working.
> >
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes in V2:
> >     - Move changes in the dockerfile to separate patch and send it to
> >       mailing list separately:
> >         [PATCH] automation: add qemu-system-riscv to riscv64.dockerfile
> >     - Update test.yaml to wire up smoke test
> > ---
> >  automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
> >  automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
> >  2 files changed, 40 insertions(+)
> >  create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
> >
> > diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-
> > ci/test.yaml
> > index afd80adfe1..64f47a0ab9 100644
> > --- a/automation/gitlab-ci/test.yaml
> > +++ b/automation/gitlab-ci/test.yaml
> > @@ -54,6 +54,19 @@
> >    tags:
> >      - x86_64
> >  
> > +.qemu-riscv64:
> > +  extends: .test-jobs-common
> > +  variables:
> > +    CONTAINER: archlinux:riscv64
> 
> I realize that it is committed now, but following the arm32 convention
> the name of the arch container (currently archlinux:riscv64) would be:
> 
> CONTAINER: archlinux:current-riscv64
> 
> I know this is not related to this patch, but I am taking the
> opportunity to mention it now in case we get an opportunity to fix it
> in the future for consistency.
> 
> 
> > +    LOGFILE: qemu-smoke-riscv64.log
> > +  artifacts:
> > +    paths:
> > +      - smoke.serial
> > +      - '*.log'
> > +    when: always
> > +  tags:
> > +    - x86_64
> > +
> >  .yocto-test:
> >    extends: .test-jobs-common
> >    script:
> > @@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
> >    needs:
> >      - debian-unstable-clang-debug
> >  
> > +qemu-smoke-riscv64-gcc:
> > +  extends: .qemu-riscv64
> > +  script:
> > +    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
> > +  needs:
> > +    - riscv64-cross-gcc
> 
> Similarly here the "needs" dependency should be called
> arch-current-gcc-riscv for consistency with arm32.
> 
> Basically we already have a crossbuild and crosstest environment up and
> running in gitlab-ci and it is the one for arm32. You can just base all
> the naming convention on that.
> 
> I realize that riscv64-cross-gcc is also already exported by build.yaml,
> but I am mentioning it in case we get an opportunity to fix it in the
> future.
> 
> Nonetheless this patch on its own is OK so
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
Thanks for the comments.

I think it will be nice to fix that from the start, so I will do a
separate patch on top of this patch series (once it is merged) which
will fix all these names related to RISC-V.
> 
> >  # Yocto test jobs
> >  yocto-qemuarm64:
> >    extends: .yocto-test-arm64
> > diff --git a/automation/scripts/qemu-smoke-riscv64.sh
> > b/automation/scripts/qemu-smoke-riscv64.sh
> > new file mode 100755
> > index 0000000000..e0f06360bc
> > --- /dev/null
> > +++ b/automation/scripts/qemu-smoke-riscv64.sh
> > @@ -0,0 +1,20 @@
> > +#!/bin/bash
> > +
> > +set -ex
> > +
> > +# Run the test
> > +rm -f smoke.serial
> > +set +e
> > +
> > +timeout -k 1 2 \
> > +qemu-system-riscv64 \
> > +    -M virt \
> > +    -smp 1 \
> > +    -nographic \
> > +    -m 2g \
> > +    -kernel binaries/xen \
> > +    |& tee smoke.serial
> > +
> > +set -e
> > +(grep -q "Hello from C env" smoke.serial) || exit 1
> > +exit 0
> > -- 
> > 2.38.1
> >
~Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:16:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:16:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474395.735543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9o8-0007km-A5; Tue, 10 Jan 2023 08:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474395.735543; Tue, 10 Jan 2023 08:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9o8-0007kf-7K; Tue, 10 Jan 2023 08:16:32 +0000
Received: by outflank-mailman (input) for mailman id 474395;
 Tue, 10 Jan 2023 08:16:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VHex=5H=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pF9o6-0007kZ-RJ
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:16:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 13963517-90bf-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:16:27 +0100 (CET)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-374-UJXH8I9wOKCKdZ50-_WKXQ-1; Tue, 10 Jan 2023 03:16:24 -0500
Received: by mail-wm1-f71.google.com with SMTP id
 n8-20020a05600c294800b003d1cc68889dso2265835wmd.7
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 00:16:24 -0800 (PST)
Received: from redhat.com ([2.52.155.142]) by smtp.gmail.com with ESMTPSA id
 i13-20020adffdcd000000b002a91572638csm10285640wrs.75.2023.01.10.00.16.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 00:16:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13963517-90bf-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673338585;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fdfdV2mGPZFbQYgeIp74EWeax4bjPbcYuP1ixJmlo6w=;
	b=Q3kEzwAsht7/IVVQMf6E40bIoMyFR150CNbIDIiNu1TY06PqJ25BS7x07sDpsIJy4WjmFQ
	yXbARH/9yTtkKcjvhfbdBIoYS/flAKH+qilxl3QXYRWovbdGHfanAiTHzP9RSQAK4dxusq
	GnjhgLpTgZzRemB6pZ5D1DaejMODVsY=
X-MC-Unique: UJXH8I9wOKCKdZ50-_WKXQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=fdfdV2mGPZFbQYgeIp74EWeax4bjPbcYuP1ixJmlo6w=;
        b=u7o4wzNAz6IofWAlHy7KjYAMwDsVYvDuItjBB56FMdU5CvSGu82S6atGWx2fRbK0OT
         4x2TFC1ibaQMm2vqyUDW6pFxuJwbOEWaTL5i/pcB4CgENJKDkWY6JPCEFAXtD01As9UE
         2y/E+w1gPLiK7dkrqe4ao7mAIxJLC+b9MoeP+ob8afC2f/pbrRVqkhcxZGmO8zsWw2mF
         Rzt4sAowNK3ehBtDeBGtr0sQ/O2TKzznfCFNK3lJJeB4nGf3ZnO2SBPcfL3CWoYk8R6j
         lER62F7lcTeD1Uld2+rWjCMQLIoWKSTWbpn1VzhdCYzZY99d63bx0YyYjv8zoCfIqyyo
         pdEA==
X-Gm-Message-State: AFqh2kqg3nTSH3TFILX2jlnTDX5xD24YECUjf3jSItpR0RJF6dTHdHAW
	54lZ8EiAZZhxj0ZoXm4U5+0IhbxHNhoGQHK0VK5uU93x6FfctNqpCt5NjRYm4ff5JUAepMIeRTZ
	9FxsGEi4Z02FvD7wEDZlXUzylnY8=
X-Received: by 2002:a05:6000:a19:b0:2b1:c393:cbe with SMTP id co25-20020a0560000a1900b002b1c3930cbemr13525500wrb.11.1673338582839;
        Tue, 10 Jan 2023 00:16:22 -0800 (PST)
X-Google-Smtp-Source: AMrXdXv4CeJ65urswcTquNnKzwMVxiexQ61lL8ZVID0TxM/EtuK9pODIr7IFSCXKmw7b02ueWsWl4A==
X-Received: by 2002:a05:6000:a19:b0:2b1:c393:cbe with SMTP id co25-20020a0560000a1900b002b1c3930cbemr13525485wrb.11.1673338582488;
        Tue, 10 Jan 2023 00:16:22 -0800 (PST)
Date: Tue, 10 Jan 2023 03:16:17 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230110030331-mutt-send-email-mst@kernel.org>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
MIME-Version: 1.0
In-Reply-To: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Tue, Jan 10, 2023 at 02:08:34AM -0500, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workaround is not good: Configure Xen HVM guests to use
> the old and no longer maintained Qemu traditional device model available
> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values which enables the checks for
> the Intel IGD to succeed. The verification that the host device is an
> Intel IGD to be passed through is done by checking the domain, bus, slot,
> and function values as well as by checking that gfx_passthru is enabled,
> the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> ---
> Notes that might be helpful to reviewers of patched code in hw/xen:
> 
> The new functions and types are based on recommendations from Qemu docs:
> https://qemu.readthedocs.io/en/latest/devel/qom.html
> 
> Notes that might be helpful to reviewers of patched code in hw/i386:
> 
> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> not affect builds that do not have CONFIG_XEN defined.
> 
> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> existing function that is only true when Qemu is built with
> xen-pci-passthrough enabled and the administrator has configured the Xen
> HVM guest with Qemu's igd-passthru=on option.
> 
> v2: Remove From: <email address> tag at top of commit message
> 
> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> 
>     is changed to
> 
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     I hoped that I could use the test in v2, since it matches the
>     other tests for the Intel IGD in Qemu and Xen, but those tests
>     do not work because the necessary data structures are not set with
>     their values yet. So instead use the test that the administrator
>     has enabled gfx_passthru and the device address on the host is
>     02.0. This test does detect the Intel IGD correctly.
> 
> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>     email address to match the address used by the same author in commits
>     be9c61da and c0e86b76
>     
>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> 
> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>     for the Intel IGD that uses the same criteria as in other places.
>     This involved moving the call to xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>     Intel IGD in xen_igd_clear_slot:
>     
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     is changed to
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
>     Added an explanation for the move of xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot to the commit message.
> 
>     Rebase.
> 
> v6: Fix logging by removing these lines from the move from xen_pt_realize
>     to xen_igd_clear_slot that was done in v5:
> 
>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>                " to devfn 0x%x\n",
>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                s->dev.devfn);
> 
>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>     set yet in xen_igd_clear_slot.
> 
> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>     v7 was intended to be.
> 
> v8: Inhibit out of context log message and needless processing by
>     adding 2 lines at the top of the new xen_igd_clear_slot function:
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>         return;
> 
>     Rebase. This removed an unnecessary header file from xen_pt.h 
> 
>  hw/i386/pc_piix.c    |  3 +++
>  hw/xen/xen_pt.c      | 49 ++++++++++++++++++++++++++++++++++++--------
>  hw/xen/xen_pt.h      | 16 +++++++++++++++
>  hw/xen/xen_pt_stub.c |  4 ++++
>  4 files changed, 63 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index b48047f50c..bc5efa4f59 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>      }
>  
>      pc_xen_hvm_init_pci(machine);
> +    if (xen_igd_gfx_pt_enabled()) {
> +        xen_igd_reserve_slot(pcms->bus);
> +    }
>      pci_create_simple(pcms->bus, -1, "xen-platform");
>  }
>  #endif

I would maybe even go further and move the whole logic into
xen_igd_reserve_slot, and just name it xen_hvm_init_reserved_slots,
without exposing the what or why at the pc level. At this point it is
up to the Xen maintainers.

> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 0ec7e52183..eff38155ef 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                 s->dev.devfn);
>  
> -    xen_host_pci_device_get(&s->real_device,
> -                            s->hostaddr.domain, s->hostaddr.bus,
> -                            s->hostaddr.slot, s->hostaddr.function,
> -                            errp);
> -    if (*errp) {
> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> -        return;
> -    }
> -
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>  }
>  
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> +}
> +
> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> +{
> +    ERRP_GUARD();
> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> +
> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
> +        return;
> +
> +    xen_host_pci_device_get(&s->real_device,
> +                            s->hostaddr.domain, s->hostaddr.bus,
> +                            s->hostaddr.slot, s->hostaddr.function,
> +                            errp);
> +    if (*errp) {
> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> +        return;
> +    }
> +
> +    if (is_igd_vga_passthrough(&s->real_device) &&
> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {

how about macros for these?

#define XEN_PCI_IGD_DOMAIN 0
#define XEN_PCI_IGD_BUS 0
#define XEN_PCI_IGD_DEV 2
#define XEN_PCI_IGD_FN 0

> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;

If you are going to do this, you should set it back
either after pci_qdev_realize or in unrealize,
for symmetry.

> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> +    }


> +    xpdc->pci_qdev_realize(qdev, errp);
> +}
> +



>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>  
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> +    xpdc->pci_qdev_realize = dc->realize;
> +    dc->realize = xen_igd_clear_slot;
>      k->realize = xen_pt_realize;
>      k->exit = xen_pt_unregister_device;
>      k->config_read = xen_pt_pci_read_config;
> @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>      .instance_size = sizeof(XenPCIPassthroughState),
>      .instance_finalize = xen_pci_passthrough_finalize,
>      .class_init = xen_pci_passthrough_class_init,
> +    .class_size = sizeof(XenPTDeviceClass),
>      .instance_init = xen_pci_passthrough_instance_init,
>      .interfaces = (InterfaceInfo[]) {
>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index cf10fc7bbf..8c25932b4b 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -2,6 +2,7 @@
>  #define XEN_PT_H
>  
>  #include "hw/xen/xen_common.h"
> +#include "hw/pci/pci_bus.h"
>  #include "xen-host-pci-device.h"
>  #include "qom/object.h"
>  
> @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>  
> +#define XEN_PT_DEVICE_CLASS(klass) \
> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> +
> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> +
> +typedef struct XenPTDeviceClass {
> +    PCIDeviceClass parent_class;
> +    XenPTQdevRealize pci_qdev_realize;
> +} XenPTDeviceClass;
> +
>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>                                             XenHostPCIDevice *dev);
> @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
>  
>  #define XEN_PCI_INTEL_OPREGION 0xfc
>  
> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
> +

I think you want to calculate this based on dev fn:

#define XEN_PCI_IGD_SLOT_MASK \
	(0x1 << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))


>  typedef enum {
>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> index 2d8cac8d54..5c108446a8 100644
> --- a/hw/xen/xen_pt_stub.c
> +++ b/hw/xen/xen_pt_stub.c
> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>          error_setg(errp, "Xen PCI passthrough support not built in");
>      }
>  }
> +
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +}
> -- 
> 2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:18:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:18:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474401.735554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9pe-0008Iq-Ks; Tue, 10 Jan 2023 08:18:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474401.735554; Tue, 10 Jan 2023 08:18:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9pe-0008Ih-Ht; Tue, 10 Jan 2023 08:18:06 +0000
Received: by outflank-mailman (input) for mailman id 474401;
 Tue, 10 Jan 2023 08:18:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pF9pd-0008Ib-Ol
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:18:05 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4dc47d54-90bf-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:18:03 +0100 (CET)
Received: by mail-ed1-x52b.google.com with SMTP id j16so16328972edw.11
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 00:18:03 -0800 (PST)
Received: from 2a02.2378.1014.d6d9.ip.kyivstar.net
 ([2a02:2378:1014:d6d9:98bb:6889:a785:5d8c])
 by smtp.gmail.com with ESMTPSA id
 b15-20020aa7c90f000000b004615f7495e0sm4644581edt.8.2023.01.10.00.18.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 00:18:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4dc47d54-90bf-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=kFUxWJzeM2COHFWq5okhVHEx6N/a8Nas9W0hsl1vXtg=;
        b=LtU5mRIo0+RkUXzDAFazQKGaYSBgyYdirWMwYf++OGsu6ktHJ5LA54G8jnQUyl0fm/
         gvqMwgFGGvlq8W+KSW8niC1TzQaNdAS9yM54cej+e1USn0Ak9u/Yl7f5mtNsx5UXWXGU
         9dMfETLWzgWvQzH03FX7Mi9tz4ZjJZO8lJvO//njHgVTyuF55Wnvrhvc4aYQ181glSdj
         BpTzkJz4O9ILntSbe0u+zD0Hb6A3ajwmyM1ous8YKseyEc4s15ISxKa6Plivjl4x96rA
         4n38vOWkd5YPwTfnlEne8jOV3Z28iylN+cbav/sjZ0ruj10jK21SzcsggRxUuxoKeCIX
         MSWA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=kFUxWJzeM2COHFWq5okhVHEx6N/a8Nas9W0hsl1vXtg=;
        b=HJ7uu8hHWrZJ1Nu6haKeYExdQqDFGJOerHtZr2B5S7Sa2X5SyrW709ZX3eAw7m80HE
         qiYhLRXvhpuYQBV5xx568Tya0ObENYqjK4iXV1stSFu77C6brOraEBeIoncd6uLb3hgr
         Kh9W1mwA1sg/2YC1MUpYaka+3Nx8oZ8IOv/LU5/HAenZhpe6IHp1R02N1iqJt1igxxE3
         g5YbBUEH/hkaL6w32s6doLorGc7Z2wH81Tr51izw/vy4jVBG1lfXGdVLU/eBFt8G31en
         vcwYZO7dW0qQH9rI39e1HM/BVWMJSmLSuxzNGZJdvByrmWK3vn9QmNhCvrsj9NZcZB20
         fYxw==
X-Gm-Message-State: AFqh2kpxxtGJmcyI+9sqE+oNAvZon2HVdSbY0BM3vQIBmlUvYCTC7YRk
	rZQMOAJ6QTtl2PeIG6HaW5o=
X-Google-Smtp-Source: AMrXdXuePFplV6NusJbeKCkigvpa4LjqkET2uRLMDyYLyHVvu9nDOYGvB0NWxdXFBh2jLiZlkd5GxQ==
X-Received: by 2002:a05:6402:e81:b0:48c:afae:9331 with SMTP id h1-20020a0564020e8100b0048cafae9331mr31101907eda.10.1673338683196;
        Tue, 10 Jan 2023 00:18:03 -0800 (PST)
Message-ID: <474c92a9b26ba8a404db997e1646f1892fa857e0.camel@gmail.com>
Subject: Re: [PATCH v2 4/8] xen/riscv: introduce sbi call to putchar to
 console
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Tue, 10 Jan 2023 10:18:01 +0200
In-Reply-To: <e33d028f-b6c4-d120-5aa9-36c9f2d02420@suse.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
	 <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
	 <e33d028f-b6c4-d120-5aa9-36c9f2d02420@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-09 at 17:00 +0100, Jan Beulich wrote:
> On 09.01.2023 16:46, Oleksii Kurochko wrote:
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/sbi.h
> > @@ -0,0 +1,34 @@
> > +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> > +/*
> > + * Copyright (c) 2021 Vates SAS.
> > + *
> > + * Taken from xvisor, modified by Bobby Eshleman
> > (bobby.eshleman@gmail.com).
> > + *
> > + * Taken/modified from Xvisor project with the following
> > copyright:
> > + *
> > + * Copyright (c) 2019 Western Digital Corporation or its
> > affiliates.
> > + */
> > +
> > +#ifndef __CPU_SBI_H__
> > +#define __CPU_SBI_H__
> 
> Didn't you mean to change this?
Thanks.

My fault; I missed that. I will double-check it and take it into account
in the new patch series.
> 
> > +#define SBI_EXT_0_1_CONSOLE_PUTCHAR            0x1
> > +
> > +struct sbiret {
> > +    long error;
> > +    long value;
> > +};
> > +
> > +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
> > unsigned long arg0,
> > +        unsigned long arg1, unsigned long arg2,
> > +        unsigned long arg3, unsigned long arg4,
> > +        unsigned long arg5);
> 
> Please get indentation right here as well as for the definition.
Thanks for the clarification.
> Possible
> forms are
> 
> struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
>                         unsigned long arg0, unsigned long arg1,
>                         unsigned long arg2, unsigned long arg3,
>                         unsigned long arg4, unsigned long arg5);
> 
> or
> 
> struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
>     unsigned long arg0, unsigned long arg1,
>     unsigned long arg2, unsigned long arg3,
>     unsigned long arg4, unsigned long arg5);
> 
> Jan



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:22:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:22:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474407.735565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9tS-0001JZ-5x; Tue, 10 Jan 2023 08:22:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474407.735565; Tue, 10 Jan 2023 08:22:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pF9tS-0001JS-2c; Tue, 10 Jan 2023 08:22:02 +0000
Received: by outflank-mailman (input) for mailman id 474407;
 Tue, 10 Jan 2023 08:22:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pF9tR-0001JM-5b
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:22:01 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id daac699d-90bf-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:22:00 +0100 (CET)
Received: by mail-ed1-x52f.google.com with SMTP id v30so16348322edb.9
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 00:22:00 -0800 (PST)
Received: from 2a02.2378.1014.d6d9.ip.kyivstar.net
 ([2a02:2378:1014:d6d9:98bb:6889:a785:5d8c])
 by smtp.gmail.com with ESMTPSA id
 en6-20020a056402528600b00499b3d09bd2sm2474641edb.91.2023.01.10.00.21.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 00:21:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: daac699d-90bf-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=AprDi40x/B80KDuPx8u+X/GyU8Jo6blaRoC7zSOcKw4=;
        b=h5YpWVidwuM0qgPb2qCMupmtLX9SXIs+I2Nf9bvovkS390D6Y3AIQrUWJwonaxQC65
         FOA26Ujm1yE91LWK+m6aU1cNLmm0gvags3JuDk1CTUkOs2heBbFNYpOFqhz3W92FBmPX
         2MHz23+OUQ7rDgDRzw5a6MestAjubiEdatGdgXqNiqvrq1CldkJr8DV9LhjJ2ShIJRGk
         eFgWHtZ5n9OLWbuVlk+0GF+CK7yGhgJ4wDV27M7Ii96PbJWV06J/8nbw4nJU804Ns6OP
         /QBoOu8yNRyEBYsfRoFKzrSPNwLnFhp5J28U0MNg+NXygR1TIK3ULkITzsODQEK0k4Wk
         Kz9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=AprDi40x/B80KDuPx8u+X/GyU8Jo6blaRoC7zSOcKw4=;
        b=s/i7a5vxAF4QqdH2g9HK4UkqZplVWaQnQmMg2dcolGUbWqhH2LJmEWRUJ70zPrfnJd
         DyeSifx/zGoLp0XEe0wZPaQTi/nNt3054a3EkjroUq35rtJoAxOI3sTNQ/RKzZr5yrf3
         tv828j4jDPatfO12hiShGHPbG7r/5/t6WRKyRBeFc5BlIpJoLEWuMqpRRCqKb5aE+MI+
         9KpElr1IKk/W5Ftz9h3yirbpBgq8C2DlKOeZyo/jh0uU/PVzfSsZYbfYF2JVaKFPd2fg
         bfCnoPw8QPNSptDQ63kY94ZUy1rykIRxq7U3uk8c0228RFmhobEZddViUMOtDIKqTr6q
         5k4Q==
X-Gm-Message-State: AFqh2kpmgl1Om7xTEDCT4ScGkYrMvFQsP9MyfcsaXHv5R3euO3aSx6dh
	ug1Zn0Fb+o9uiWZ45yYgpuw=
X-Google-Smtp-Source: AMrXdXstGyqSI4HjuPwNOAdTiabyj0wqZZms2KkgU/NzSPCAxyYERHCUnD7LRVoAo0o49mb5WUtpWw==
X-Received: by 2002:aa7:db8d:0:b0:45c:835c:3679 with SMTP id u13-20020aa7db8d000000b0045c835c3679mr58769549edt.2.1673338919802;
        Tue, 10 Jan 2023 00:21:59 -0800 (PST)
Message-ID: <eb2c4ad53e596e633f1d59de05db9fa9630f28ac.camel@gmail.com>
Subject: Re: [PATCH v2 4/8] xen/riscv: introduce sbi call to putchar to
 console
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>
Date: Tue, 10 Jan 2023 10:21:57 +0200
In-Reply-To: <7990322c-639b-38d4-ff6c-221988532c33@xen.org>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
	 <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
	 <7990322c-639b-38d4-ff6c-221988532c33@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

Hi,

On Mon, 2023-01-09 at 16:03 +0000, Julien Grall wrote:
> Hi,
> 
> On 09/01/2023 15:46, Oleksii Kurochko wrote:
> > The patch introduce sbi_putchar() SBI call which is necessary
> > to implement initial early_printk
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes in V2:
> >      - add an explanatory comment about sbi_console_putchar()
> > function.
> >      - order the files alphabetically in Makefile
> > ---
> >   xen/arch/riscv/Makefile          |  1 +
> >   xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
> >   xen/arch/riscv/sbi.c             | 44
> > ++++++++++++++++++++++++++++++++
> >   3 files changed, 79 insertions(+)
> >   create mode 100644 xen/arch/riscv/include/asm/sbi.h
> >   create mode 100644 xen/arch/riscv/sbi.c
> > 
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index 5a67a3f493..fd916e1004 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,4 +1,5 @@
> >   obj-$(CONFIG_RISCV_64) += riscv64/
> > +obj-y += sbi.o
> >   obj-y += setup.o
> >  
> >   $(TARGET): $(TARGET)-syms
> > diff --git a/xen/arch/riscv/include/asm/sbi.h
> > b/xen/arch/riscv/include/asm/sbi.h
> > new file mode 100644
> > index 0000000000..34b53f8eaf
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/sbi.h
> > @@ -0,0 +1,34 @@
> > +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> > +/*
> > + * Copyright (c) 2021 Vates SAS.
> > + *
> > + * Taken from xvisor, modified by Bobby Eshleman
> > (bobby.eshleman@gmail.com).
> Hmmm... I missed this one in v1. Is this mostly code from Bobby? If
> so, 
> please update the commit message accordingly.
> 
> FAOD, this comment applies for any future code you take from anyone.
> I 
> will try to remember to mention it but please take pro-active action
> to 
> check/mention where the code is coming from.
> 
Sure, I will try to be more attentive next time.

This is probably a little out of scope, but could you please share a
link or clarify in which cases I have to add a Copyright (c) line,
whether I should instead add a new comment like "Modified by Oleksii
...", or whether neither is necessary at all? Is there any other
copyright-related information I should include?

Thanks in advance.

> Cheers,
> 
~Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:29:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:29:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474414.735575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFA0V-00023j-0F; Tue, 10 Jan 2023 08:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474414.735575; Tue, 10 Jan 2023 08:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFA0U-00023c-Tw; Tue, 10 Jan 2023 08:29:18 +0000
Received: by outflank-mailman (input) for mailman id 474414;
 Tue, 10 Jan 2023 08:29:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFA0T-00023U-Lz
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:29:17 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id de2bb332-90c0-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:29:15 +0100 (CET)
Received: by mail-ej1-x632.google.com with SMTP id cf18so20261186ejb.5
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 00:29:15 -0800 (PST)
Received: from 2a02.2378.1014.d6d9.ip.kyivstar.net
 ([2a02:2378:1014:d6d9:98bb:6889:a785:5d8c])
 by smtp.gmail.com with ESMTPSA id
 y19-20020a1709060a9300b0084debc351b3sm1401014ejf.20.2023.01.10.00.29.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 00:29:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de2bb332-90c0-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=PbwEnvP28IXrNyfma1R9wMCLoHMIHS4QeFEg6V+6yGo=;
        b=HwwzIy2qZCXgfhG6Zms6XS2miKZBCfF+vA5UbZQe6jrpYzPELMhVCS2S9/5t/GSikF
         YNr8RWmA/SEfq6phnuEtsmczXXi6VG7qnnbRodVUYjiGO9OQOx3s64MA49Nua4P10vrc
         3NZSdoGpebwIaMOiayIjProyULY4jlnBhEoEq5MbjhrzLrqiSd42fM720v5IG7paulZ3
         4+3pIPtX8f9z/TpcVu0DYLelq2J9aKan5WgqgcKRLbO7UuUZ+yoivn8/PSy0wmcGTC9i
         Osq4chsk4lG8mK+MkFckl4cq2xcn/Gl21PBTCX5/rrLoJxTZbRqn1bV4C/DAfsCQQmav
         9ykA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=PbwEnvP28IXrNyfma1R9wMCLoHMIHS4QeFEg6V+6yGo=;
        b=GqmyAfgxYxcWvhqbkw4dHDD+rYxV86vVUtCFfic1o7WHjesuSRZgjR10tZq9tQhRqN
         5Gkgy3/fv4fCK4JcWaelDK1qkvJ7PqQZXSs58uPCPDybdxWVMmtHKRn4ZSZVljDSH6/v
         HkRCQ+t4zbT7hw7Z3+2StE/LEZpFaE228TV2RM17hoxr+6dlknQ/nNqQD9NfwxLGCp+y
         GcRjLcWgTEgkspw8NG8lbvyXYPsLjfYOT3gV5mHArM5GeGIUBDj3oTfW/F1Pn8uKwNgW
         VfOd/wIuKJz7ZKl8vwZEDeF+csSoWflCKE5FUr96rik3qj6kGqi7HExYSgtKPuJQILr5
         FqFA==
X-Gm-Message-State: AFqh2krRW2T/OdHRECHUg1STQc/AuqRTeYE0T/gL+3GVaR9E0cKgVZvW
	6OhBG01k0wKmYtsM4hcBCU0=
X-Google-Smtp-Source: AMrXdXscl0HAHCTAtCsYQF87Oc/mdzKLVYF1+arZzHCjbrD7bhBYyixaaxHPTwrSVEe7Qjngd4Fjag==
X-Received: by 2002:a17:906:700f:b0:7c0:b79c:7d5f with SMTP id n15-20020a170906700f00b007c0b79c7d5fmr74184064ejj.68.1673339355096;
        Tue, 10 Jan 2023 00:29:15 -0800 (PST)
Message-ID: <13d7161756f1eb4cd359f2fb7694eb398814235b.camel@gmail.com>
Subject: Re: [PATCH v2 3/8] xen/riscv: introduce stack stuff
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>
Date: Tue, 10 Jan 2023 10:29:13 +0200
In-Reply-To: <e5ecc23f-4d4b-fa21-bd71-68cefcde644f@xen.org>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
	 <b253e61bebbc029c94b89389d81643f9587200b7.1673278109.git.oleksii.kurochko@gmail.com>
	 <e5ecc23f-4d4b-fa21-bd71-68cefcde644f@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.1 (3.46.1-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-09 at 16:11 +0000, Julien Grall wrote:
> Hi,
> 
> On 09/01/2023 15:46, Oleksii Kurochko wrote:
> > The patch introduces and sets up a stack in order to go to C
> > environment
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> It looks like the comments from Andrew were missed.
> 
I will double-check which ones.
Thanks.
> > ---
> > Changes in V2:
> >      - introduce STACK_SIZE define.
> >      - use consistent padding between instruction mnemonic and
> > operand(s)
> > ---
> >   xen/arch/riscv/Makefile             | 1 +
> >   xen/arch/riscv/include/asm/config.h | 2 ++
> >   xen/arch/riscv/riscv64/head.S       | 8 +++++++-
> >   xen/arch/riscv/setup.c              | 6 ++++++
> >   4 files changed, 16 insertions(+), 1 deletion(-)
> >   create mode 100644 xen/arch/riscv/setup.c
> > 
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index 248f2cbb3e..5a67a3f493 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,4 +1,5 @@
> >   obj-$(CONFIG_RISCV_64) += riscv64/
> > +obj-y += setup.o
> >  
> >   $(TARGET): $(TARGET)-syms
> >          $(OBJCOPY) -O binary -S $< $@
> > diff --git a/xen/arch/riscv/include/asm/config.h
> > b/xen/arch/riscv/include/asm/config.h
> > index 0370f865f3..bdd2237f01 100644
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -43,6 +43,8 @@
> >  
> >   #define SMP_CACHE_BYTES (1 << 6)
> >  
> > +#define STACK_SIZE (PAGE_SIZE)
> > +
> >   #endif /* __RISCV_CONFIG_H__ */
> >   /*
> >    * Local variables:
> > diff --git a/xen/arch/riscv/riscv64/head.S
> > b/xen/arch/riscv/riscv64/head.S
> > index 990edb70a0..c1f33a1934 100644
> > --- a/xen/arch/riscv/riscv64/head.S
> > +++ b/xen/arch/riscv/riscv64/head.S
> > @@ -1,4 +1,10 @@
> >           .section .text.header, "ax", %progbits
> >  
> >   ENTRY(start)
> > -        j  start
> > +        la      sp, cpu0_boot_stack
> > +        li      t0, STACK_SIZE
> > +        add     sp, sp, t0
> > +
> > +_start_hang:
> > +        wfi
> > +        j       _start_hang
> > diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> > new file mode 100644
> > index 0000000000..41ef4912bd
> > --- /dev/null
> > +++ b/xen/arch/riscv/setup.c
> > @@ -0,0 +1,6 @@
> > +#include <xen/init.h>
> > +#include <xen/compile.h>
> 
> Why do you need to include <xen/compile.h>?
> 
It is needed for the __aligned define, which looks better than
__attribute__((__aligned__(SOMETHING))).
> In any case, please order the include alphabetically. I haven't look
> at 
Thanks. I didn't know that headers should be in alphabetical order too.
> the rest of the series. But please go through the series and check
> that 
> generic comments (like this one) are addressed everywhere.
> 
> Cheers,
> 
~Oleksii



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:53:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474420.735587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFANf-0005Kv-TG; Tue, 10 Jan 2023 08:53:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474420.735587; Tue, 10 Jan 2023 08:53:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFANf-0005Kh-Pk; Tue, 10 Jan 2023 08:53:15 +0000
Received: by outflank-mailman (input) for mailman id 474420;
 Tue, 10 Jan 2023 08:53:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFANd-0005KI-K4; Tue, 10 Jan 2023 08:53:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFANd-0000jv-IQ; Tue, 10 Jan 2023 08:53:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFANd-0007aC-BQ; Tue, 10 Jan 2023 08:53:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFANd-0007s9-B1; Tue, 10 Jan 2023 08:53:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hvMd7o/GEpMrztcOZ57Yr3bQ4vz6nPLHEKgbykUzC+8=; b=GKCiIGZmGY5+yKFGlroQBriXwa
	CaBuHfDuqB8SbQduIlb3lYd7VT17ZMeEo6I8E0odooUJT9Qu7Cg1x0R2EjuMm0Y96gL7KkM6wUnFP
	5EI4rNXV2ksQ2IxeIU+08prxWQ2c0UsHZF10rBhcPiqGjHC95g5BbLsY7v7ruUsWLllE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175686-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175686: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=82dd766f25225b0812bbac628c60d2b48f2346e4
X-Osstest-Versions-That:
    ovmf=2cc6d4c8ed54e36fd9638dafdb293dd9a4c54cf9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 08:53:13 +0000

flight 175686 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175686/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 82dd766f25225b0812bbac628c60d2b48f2346e4
baseline version:
 ovmf                 2cc6d4c8ed54e36fd9638dafdb293dd9a4c54cf9

Last test of basis   175683  2023-01-10 04:10:46 Z    0 days
Testing same since   175686  2023-01-10 06:40:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chao Li <lichao@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2cc6d4c8ed..82dd766f25  82dd766f25225b0812bbac628c60d2b48f2346e4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474427.735598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOQ-0005sR-5b; Tue, 10 Jan 2023 08:54:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474427.735598; Tue, 10 Jan 2023 08:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOQ-0005sK-2l; Tue, 10 Jan 2023 08:54:02 +0000
Received: by outflank-mailman (input) for mailman id 474427;
 Tue, 10 Jan 2023 08:54:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOO-0005s6-0z
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:00 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2064.outbound.protection.outlook.com [40.107.103.64])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 512b99b4-90c4-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:53:57 +0100 (CET)
Received: from ZR2P278CA0062.CHEP278.PROD.OUTLOOK.COM (2603:10a6:910:52::10)
 by AS4PR08MB7902.eurprd08.prod.outlook.com (2603:10a6:20b:51d::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:53:54 +0000
Received: from VI1EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:910:52:cafe::c8) by ZR2P278CA0062.outlook.office365.com
 (2603:10a6:910:52::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT043.mail.protection.outlook.com (100.127.145.21) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:53:53 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Tue, 10 Jan 2023 08:53:53 +0000
Received: from 08576f30d8c9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 84897C40-18DE-4F73-9AF2-423B309F9F89.1; 
 Tue, 10 Jan 2023 08:53:46 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 08576f30d8c9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:53:46 +0000
Received: from AS4PR09CA0005.eurprd09.prod.outlook.com (2603:10a6:20b:5e0::11)
 by DB9PR08MB7469.eurprd08.prod.outlook.com (2603:10a6:10:36f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:53:43 +0000
Received: from VI1EUR03FT058.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:5e0:cafe::b8) by AS4PR09CA0005.outlook.office365.com
 (2603:10a6:20b:5e0::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:43 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT058.mail.protection.outlook.com (100.127.144.186) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:53:42 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:53:40 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 512b99b4-90c4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4WSflx/hlaylWKbeI/bAsqdI9Oj43SHHqevu0JMLugA=;
 b=FBeAWyPCV9uxh7Sjl1M/oQYLVvmiGjnPnoncypDlejGUmST244mgA6KpPhj/VjrvTkK/lcI3RnsufxOZsmmrGh84EGlQznv5PuA8AyGwoWUUp9ezkSvJdNlO2c6K8kaPav6A+AmVZ5961xUcVT590JEKkX7bYUFr+vgvey7RX+8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4eea29087c37b2c3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GGHgxSMrI9u7eYO32RDvDSlUvQmV2l+0scFQJgc1mUF3K+D4vpoO9RByzYFyfoCk5bgWl3z/msz/gYTXJyVjvI+O56wxhpkHyN8HvU1uPUTeupiPquiH7pWXSV7bPcadaek6qiw4UlI2mtT3srLGZUDY/jYZx2sjU77g1FmPEaKP3IxSHNWe+/28h4SzrxFKVp3ZtCG1403Wx78I452LB0Fqw06CssLu4oMLqwif1OqRyOnAFMacfP2BSNY2qy5SivKabuHzqdSS8mEaq7XjmFebLNLPQYmKAG8/DsMu12/ZXobkJmyrvkEc5lIf9TGTOFUEn4yGqErxawTS/7kUEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4WSflx/hlaylWKbeI/bAsqdI9Oj43SHHqevu0JMLugA=;
 b=b70iTjx9mgONZMx1/HZUUnNZ8mubxSRDoAaBDRY7aaGGPmkXSSGJHeMAgvzxK9NupRo7PjtIUXlB9RPVhc3/yVKErJIpO78BHSGdWOasbAS0PNjBGG9tWzsk7pAIFKcQrZOwRq8GfmtTxwwnwhDYIlT2t2f1iD1RXr9j13vgjcoTN5RYfsjJBlpq00awh3uzJQqGdgDtDpXgWPUWmpBp6ZKXwatHwiG8mEeAzsGKQ0O8+UsINRpCeAE6qySmHO9jBIneKmE2FWPcg3ad4mip6lN6WRWKFi79d0rpXerxu4VlLrPvJnK2qLhki6XputnbI1puBWnIEHM4b/FeNH3RwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4WSflx/hlaylWKbeI/bAsqdI9Oj43SHHqevu0JMLugA=;
 b=FBeAWyPCV9uxh7Sjl1M/oQYLVvmiGjnPnoncypDlejGUmST244mgA6KpPhj/VjrvTkK/lcI3RnsufxOZsmmrGh84EGlQznv5PuA8AyGwoWUUp9ezkSvJdNlO2c6K8kaPav6A+AmVZ5961xUcVT590JEKkX7bYUFr+vgvey7RX+8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 00/17] Device tree based NUMA support for Arm - Part#3
Date: Tue, 10 Jan 2023 16:49:13 +0800
Message-ID: <20230110084930.1095203-1-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT058:EE_|DB9PR08MB7469:EE_|VI1EUR03FT043:EE_|AS4PR08MB7902:EE_
X-MS-Office365-Filtering-Correlation-Id: 7f8555fd-9467-4ee2-b15b-08daf2e833bc
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 9svjhQ5Sr68UXSLb6JGH3nUC5D4jhvyepAmsNbz8rti37G54FUbtxAvXYy2Z3bFqSfunZRKSQ7QQVarB9YTXTkoavV3po/qTNSOhOcac0XOuCUSfuGQf8VuVxTdDt7KEMqhrt0Y3ITpVCBtY7+jw0XJnivRSCvFK/PCDoQnO9rPmZ3IFuDnxYBz5KGHrVlzQGTlJT9Xyg3CZKLsVnfn+94olskiW1uOZNaqrtqJ+MwCn/bai25M4jGvePVCT+9xTFFhvC1xSwaYxlE1bra57eAnUJ/ptdK6692sNWNddPqRTyKRm8cFppf3yjxuHAjr/PXIeVOrfDLEvVqKtip/JUDweLCaauZEEs2RNfkDiGDefERzEHckXzproJO5fbxP6ps2jDUQMxXboM1mb/jBXUwvbTF1Vcr1tGEKZMfAIf07q9XPjW7Fnrur3qcBo01q4Uo7VOwarxk/u76WPAXVyqsYaA067TyCrFdZgD2QNo6vjxGaLUsPY3jGbpvtFd69HAeMEPAqnjwP8OEI8vIS15auYYyq8QTY6IoEVRKUhwGJzkuJQQrw+ujLLIqX1Ej5xOrrjg2qemquXTs2L6CzM2e9xQChDY7bHW4zhDPbDXybahzppaO4e5ReYZ9E+wGpTAapOMP2E7Nbx4ZaxuLVeBL2bBXlAd/xYmspRn0gQhvSeN0kTHIRcjm1+cS0e+BHh/8gcHfzqU0xXaqXJtAqudphP6MIJXPNAna5OwpwRMNb/kaX2hVqKY+XNYK43ntCQLklYyLI73F6oNFGpbj9Drg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(396003)(376002)(346002)(39860400002)(451199015)(36840700001)(46966006)(36756003)(86362001)(40480700001)(316002)(54906003)(6916009)(478600001)(82740400003)(356005)(81166007)(2616005)(6666004)(1076003)(966005)(186003)(26005)(82310400005)(7696005)(8676002)(70586007)(4326008)(8936002)(70206006)(41300700001)(5660300002)(2906002)(44832011)(36860700001)(336012)(47076005)(83380400001)(426003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7469
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7aac53ca-4b2a-45bd-7845-08daf2e82cf1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7Zlvd+WWdPiUT4wY8UiD9taF81l/UArfvLrY1AHJnGTyOaBglRNsxSmi6FyEePCQZu3RDxso4rnGSeBKfe+tZtYUJ9yqnQ0sfbW8ObMUY6L6dv/u2kuKug/fxk9GLJrgCyk6leOt22ZJ0N8enOabnN8pHR87UfmNkXDkj861sNnpldC/3iTqkkPj9rbZ/9aLTa4OpX1nv8GH41F8B2naRrMTmMoQZV6nQd6TlFBA8mWTGCuE6cMaVlie6rkCtQc3cwuUB19Ierd9SXim6n3O8yNgZA4+nVAfqQnT1B4GBygu9by9pMJMw9h6h1IK7l32U532C4wZsq6hYf8QjvQeIoh8O8zFfQn6xKFcVJgitB25KCa5q2jJ+rDM1KnH4Fi1g9qPytsW1clF//y2J9YcTWenbfTWDdLPUiu4ZxoAb+OR4JSySOQdAfmyvFowjyJGJUUSXz1U5lprvbm92pLOfiyk3yMiyUC8oIEo2XK4n49Ho2+7huc1L3hm6BvzhYzD1yNTDTyo0RQPUTCRrHjUOLqKjmSt7dfKYvUFbknRuYFqDSHs6o0gtM0qREr3udSr/iBjMf8oUZFbXYIcJP/iBZoF1Q/x0ljpEcs7gpfvOumCiXtJmVPNXJyw1UJ+IzreTzPvD8Z5vCJfJp4kFXjxPV2EpEbejd/ZE4SCToAJNvVJEHZnJUio280LauSJPOwVnV6XxqUMZD9y7U1uOt9DkI6z1lA+OmBC77uI8bEcEA3zssjrI1rHGcpJu+mRhiFmmKKlmAnuYtzamx9k4ZEy3A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(376002)(136003)(346002)(396003)(451199015)(36840700001)(40470700004)(46966006)(70586007)(70206006)(8676002)(336012)(4326008)(186003)(82310400005)(83380400001)(41300700001)(426003)(2906002)(47076005)(82740400003)(36860700001)(36756003)(44832011)(81166007)(5660300002)(40460700003)(8936002)(966005)(478600001)(6666004)(107886003)(2616005)(26005)(1076003)(6916009)(316002)(54906003)(40480700001)(86362001)(7696005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:53:53.9747
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7f8555fd-9467-4ee2-b15b-08daf2e833bc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7902

(The Arm device tree based NUMA support work consists of 35 patches
in total. To make review easier, I split them into 3 parts:
1. Preparation. I re-sorted the patch series and moved
   independent patches to the head of the series - merged in [1].
2. Move generically usable code from x86 to common - merged in [2].
3. Add new code to support Arm - this series.

This series contains only the third part. As the whole NUMA
series has already been through one round of review in [3], this
series is posted as v2.)

Xen's memory allocation and scheduler modules are NUMA aware,
but only x86 implements the architecture APIs needed to support
NUMA. Arm has been providing a set of fake architecture APIs to
stay compatible with the NUMA-aware memory allocator and
scheduler.

Arm systems worked well as single-node NUMA systems with these
fake APIs, because there were no multi-node NUMA systems on Arm.
In recent years, however, more and more Arm devices support
multiple NUMA nodes.

So now we have a new problem: when Xen runs on these Arm
devices, it still treats them as single-node SMP systems. The
NUMA affinity capability of Xen's memory allocator and scheduler
becomes meaningless, because it relies on input data that does
not reflect the real NUMA layout.

Xen still assumes that every CPU can access all memory in the
same amount of time, yet it may allocate a VM's memory from NUMA
nodes with different access speeds. Workloads inside the VM can
amplify this difference, causing performance instability and
timeouts.

So in this patch series, we implement a set of NUMA APIs that
use the device tree to describe the NUMA layout. We reuse most
of the x86 NUMA code to create and maintain the mapping between
memory and CPUs, and to build the distance matrix between any
two NUMA nodes. Except for ACPI and some x86-specific code, we
have moved the rest to common code. In the next stage, when we
implement ACPI based NUMA for Arm64, we may move the ACPI NUMA
code to common as well, but for now we keep it x86-only.

This patch series has been tested and boots correctly on one
Arm64 NUMA machine and one HPE x86 NUMA machine.

[1] https://lists.xenproject.org/archives/html/xen-devel/2022-06/msg00499.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01043.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg01903.html

Henry Wang (1):
  xen/arm: Set correct per-cpu cpu_core_mask

Wei Chen (16):
  xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
  xen/arm: implement helpers to get and update NUMA status
  xen/arm: implement node distance helpers for Arm
  xen/arm: use arch_get_ram_range to get memory ranges from bootinfo
  xen/arm: build NUMA cpu_to_node map in dt_smp_init_cpus
  xen/arm: Add boot and secondary CPU to NUMA system
  xen/arm: introduce a helper to parse device tree processor node
  xen/arm: introduce a helper to parse device tree memory node
  xen/arm: introduce a helper to parse device tree NUMA distance map
  xen/arm: unified entry to parse all NUMA data from device tree
  xen/arm: keep guests NUMA unaware
  xen/arm: enable device tree based NUMA in system init
  xen/arm: implement numa_node_to_arch_nid for device tree NUMA
  xen/arm: use CONFIG_NUMA to gate node_online_map in smpboot
  xen/arm: Provide Kconfig options for Arm to enable NUMA
  docs: update numa command line to support Arm

 SUPPORT.md                        |   1 +
 docs/misc/xen-command-line.pandoc |   2 +-
 xen/arch/arm/Kconfig              |  11 ++
 xen/arch/arm/Makefile             |   2 +
 xen/arch/arm/domain_build.c       |   6 +
 xen/arch/arm/include/asm/numa.h   |  87 ++++++++-
 xen/arch/arm/numa.c               | 183 +++++++++++++++++++
 xen/arch/arm/numa_device_tree.c   | 290 ++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c              |  17 ++
 xen/arch/arm/smpboot.c            |  38 ++++
 xen/arch/x86/include/asm/numa.h   |   2 -
 xen/arch/x86/srat.c               |   2 +-
 xen/include/xen/numa.h            |  11 ++
 13 files changed, 647 insertions(+), 5 deletions(-)
 create mode 100644 xen/arch/arm/numa.c
 create mode 100644 xen/arch/arm/numa_device_tree.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474429.735609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOX-0006AO-E2; Tue, 10 Jan 2023 08:54:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474429.735609; Tue, 10 Jan 2023 08:54:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOX-0006AD-As; Tue, 10 Jan 2023 08:54:09 +0000
Received: by outflank-mailman (input) for mailman id 474429;
 Tue, 10 Jan 2023 08:54:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOW-0005s6-8W
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:08 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2054.outbound.protection.outlook.com [40.107.22.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 577127da-90c4-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:54:07 +0100 (CET)
Received: from AS8PR07CA0035.eurprd07.prod.outlook.com (2603:10a6:20b:459::27)
 by AS4PR08MB7687.eurprd08.prod.outlook.com (2603:10a6:20b:506::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:05 +0000
Received: from AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:459:cafe::2c) by AS8PR07CA0035.outlook.office365.com
 (2603:10a6:20b:459::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT009.mail.protection.outlook.com (100.127.140.130) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:05 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Tue, 10 Jan 2023 08:54:05 +0000
Received: from 1a530aeacf9f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E72D6A2F-6BE3-4CAD-BE82-F07CD82351F6.1; 
 Tue, 10 Jan 2023 08:53:58 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1a530aeacf9f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:53:58 +0000
Received: from AS9PR05CA0329.eurprd05.prod.outlook.com (2603:10a6:20b:491::35)
 by PA4PR08MB7572.eurprd08.prod.outlook.com (2603:10a6:102:271::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:53:57 +0000
Received: from AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:491:cafe::6e) by AS9PR05CA0329.outlook.office365.com
 (2603:10a6:20b:491::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:57 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM7EUR03FT031.mail.protection.outlook.com (100.127.140.84) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:53:56 +0000
Received: from AZ-NEU-EX02.Emea.Arm.com (10.251.26.5) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:53:56 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX02.Emea.Arm.com
 (10.251.26.5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:53:55 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 577127da-90c4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ukOHSyYJNGo1PvmsWwUlZnhKUtTLLuyk3pl7Z1kqbYk=;
 b=IQfayMYuPhAh9yCaVERhL4zea685Z/8WMvbuqmwEdwVpuGd3xSOYqqDfJ0QZKaXrmspo2Ik3eDSAOuKhtOJoUkhYVD5q98QbRmJLz17wzMNkusDnMfdsZGS9IWSpQq2H+WO7ANTOTBBcLuTPtCw/qEbfrgT5q7qPpY2PxbnMbg0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8e0d06204b1326e9
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=k49p0Ga7YSHbcTaLM2Yf/9a9ZP59cMtgjQBkE/WVtAwtE6NDu9XURdL5qrg5bD8UW78p6fcIbG59CItouubUY0kmPxs0YzEnCmYGas70O16RrdilfYOjNVLWCiqkMm8J0lzMZkN8JXc4GYCHgA6mVZucgnigC2pArN4eqQpEJNnhFkU5jgHeJxZDLwaXm77S9y8L4kX5YMaLntwieu/WBLnTKxSPcUJMzlkpqivGZOIUFkPvowpcIXk3PjpVJMUB8gseDMtMC30Yngbb3B1++NpgF+iMd9TYzd3cjZCQuX1Z13m0w5JbiVNZgEHALyKPKoeqTt8hAgdzYL0KNtFJmg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ukOHSyYJNGo1PvmsWwUlZnhKUtTLLuyk3pl7Z1kqbYk=;
 b=K7UlsP3WRXewE0HeGJh5m024Fc5+fKKtRzVbukGfy7iG4POxJJhgmuuA6tgkq5mY+odNsSXRoDc7/eORIwWftKFT3PlzNp2eUsD3j7IBLoy7QDkDKvurZSw/ftNi66BziT1NU0Hzx21yRXXllUrsSHAHrfJPzfoEKNm6Lj/Djvr3ohwAxoiTEWhWyWR0ivRvFLsfwS8f+noK23b4X11BXyFjSjTb9BEkAiJ4fpWejf+DamYUCOIX2j4Wk+SK93e/MEnDYpoeQUo/EFhBkVJOy1H+apMnbEqx5vuh2NcTpWu2iVTQrwtmc31bVBjlAu5WEPnRcuXJVZ5Cts6xaR5zDQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ukOHSyYJNGo1PvmsWwUlZnhKUtTLLuyk3pl7Z1kqbYk=;
 b=IQfayMYuPhAh9yCaVERhL4zea685Z/8WMvbuqmwEdwVpuGd3xSOYqqDfJ0QZKaXrmspo2Ik3eDSAOuKhtOJoUkhYVD5q98QbRmJLz17wzMNkusDnMfdsZGS9IWSpQq2H+WO7ANTOTBBcLuTPtCw/qEbfrgT5q7qPpY2PxbnMbg0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 04/17] xen/arm: use arch_get_ram_range to get memory ranges from bootinfo
Date: Tue, 10 Jan 2023 16:49:17 +0800
Message-ID: <20230110084930.1095203-5-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	AM7EUR03FT031:EE_|PA4PR08MB7572:EE_|AM7EUR03FT009:EE_|AS4PR08MB7687:EE_
X-MS-Office365-Filtering-Correlation-Id: b38d74bf-f550-4b63-7f18-08daf2e83a9a
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 8qWH3Cdk6TooUHnWV6bBmMqkiv8YlWmA95yuq/P/7tr80IVpC33v1tU3OEGP/5k9QMS/p20oGjydcTymTScaaPE2ucdSZTq63pAe2W9F2EpLgXjXiz1FLLytfl/+Ekjn2A+WyHQiiBVeuvfC15zoh6ROuDg2ldcokeoeeYfjlRssS6mb1K6pewsHBTJBm+LQq36szooRGhqVAW1xqSN3hQIXhRckRWWsv2ZbmrSSY81XLYb/9lxciIRoGweteF3VM23a8OwblX3sCyWzSCQOo+Ms4fdGrwXclVflGt9HrjTEhSixh2R5vvm8L5E8lzHhYGQNmqggty7ivZ5Kv90HwhLCK/AqSvjCM0q2DbCk8NNtWYBbQaWMNE+xbLO2Flofu30lyQKL66QE4QYFART+mmjO8ynL9lxcbAntix0IxFsa+dLPvoXs5ka2YH8Q1a9eqickxuJuQhfYaLVl5NKosOrjifNfViehb3NDdQQNsPcHg56YVdJ40ZF9Tf3d9HAT9l1fd7Vds7gmAMUliUuA+EMrrKZhGP7N6SwsU6E6SAbc8++EAIRKhlMkOLVtWKSllzKjcOC+p6xAs3vTa1+A/wgNuiNQCQR5uODPtPn9uxxpR6w77cTIUVbQmDM6RfXlbltga6846yFqzBRYQqBAcyQX+BOLBX8GrtNnCZEt6aWeRkQFzNgAWRVzR9giCxF4MbEMIFDrkV1LbYKd/PpXqA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(376002)(396003)(136003)(346002)(451199015)(46966006)(36840700001)(83380400001)(36860700001)(41300700001)(70586007)(8676002)(70206006)(4326008)(5660300002)(4744005)(8936002)(36756003)(26005)(2616005)(186003)(54906003)(1076003)(81166007)(478600001)(7696005)(356005)(47076005)(86362001)(6666004)(82740400003)(82310400005)(40480700001)(6916009)(336012)(316002)(426003)(44832011)(2906002)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB7572
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ae21e75f-961f-4e03-646b-08daf2e83555
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xVon1AF4OQ+FXef+6denCz3kTho9IgTteDaY9XB8cmZvWT/3Ep6ZrNsnDLY/zAxxmc7bo4P32Y/jzDE609bcqRd+Eo5xlmmMaD7eYCEgrtT9avQiuqynDz/tZPo54FrNLhLrOhxCNqw9qfvel3Pc33DRLlXfP/JLuOMIojtoRiWnkfviHbV9ReMA9GNUVEo9FuuDkTeGh1jIVbq1NaCXr1jrT0JjE3TFQ1nKx4DaJpvN4Wm8iboXiyQVL7HahssI5XjlWGDLzRWPQZUWVdIvlR43zLIHjoeejBxNG+jVSdAKSa4L2VM7zES4keCouwbya4WhOvAwJsLmXLoxrXYOufzB9fk/RMZTsmaNwfR7znKol+hen1tDpe3dko1lp+emClDAea3nKHWm7+O1ECmQO7zh9Afv5/dORSAzjGpjncANXw2GcjGn2wheMyC/oJqRDWwMC23TbH7yyJD4hYHkcM7qH1HXusNdR/y2FmWC+PZa0TiBDyv37J1HRiIGHEKGWIpfzePFgz8rCzciON6kpqsT3SLkvPe5z1KdxHlI+Gn2s6fpx2kvAyj14G4HqVIqLRlFSAj89vV4BeeyWMvqvMhHcfAvnwwgNh342n/LFMcSOFrwuWdJw5HJUSCq9wpsEbvSr1M3PYGTPbzgxRLMGkQyNEzYfXF0H4XG+4kDIfc+WAZdkM/RyyzH2OLg1H3op2Hrw8xFs2ipfmwmhXQ03Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(346002)(376002)(451199015)(40470700004)(36840700001)(46966006)(2906002)(36860700001)(44832011)(5660300002)(4744005)(7696005)(41300700001)(8936002)(36756003)(4326008)(83380400001)(70206006)(70586007)(8676002)(40460700003)(86362001)(47076005)(82310400005)(1076003)(336012)(2616005)(6916009)(426003)(26005)(81166007)(40480700001)(478600001)(186003)(82740400003)(316002)(6666004)(107886003)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:05.5259
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b38d74bf-f550-4b63-7f18-08daf2e83a9a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7687

Implement the same helper, "arch_get_ram_range", as on x86, so
that common NUMA code can get memory banks from the Arm bootinfo.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v2 -> v3:
1. Use arch_get_ram_range instead of arch_get_memory_map.
v1 -> v2:
1. Use arch_get_memory_map to replace arch_get_memory_bank_range
   and arch_get_memory_bank_number.
---
 xen/arch/arm/numa.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 34851ceacf..dcfcd85fcf 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -92,3 +92,14 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
 }
 
 EXPORT_SYMBOL(__node_distance);
+
+int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
+{
+    if ( idx >= bootinfo.mem.nr_banks )
+        return -ENOENT;
+
+    *start = bootinfo.mem.bank[idx].start;
+    *end = *start + bootinfo.mem.bank[idx].size;
+
+    return 0;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474430.735620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOY-0006Qg-Q4; Tue, 10 Jan 2023 08:54:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474430.735620; Tue, 10 Jan 2023 08:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOY-0006QV-N9; Tue, 10 Jan 2023 08:54:10 +0000
Received: by outflank-mailman (input) for mailman id 474430;
 Tue, 10 Jan 2023 08:54:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOX-0005s6-5v
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:09 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2061.outbound.protection.outlook.com [40.107.249.61])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 57c40165-90c4-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:54:08 +0100 (CET)
Received: from AS9PR01CA0020.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:540::25) by PAXPR08MB7646.eurprd08.prod.outlook.com
 (2603:10a6:102:241::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:05 +0000
Received: from AM7EUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:540:cafe::e4) by AS9PR01CA0020.outlook.office365.com
 (2603:10a6:20b:540::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT008.mail.protection.outlook.com (100.127.141.25) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:05 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Tue, 10 Jan 2023 08:54:04 +0000
Received: from cd1a513c34a0.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 736A2ED2-55B1-41D7-A47A-6BC14235A46A.1; 
 Tue, 10 Jan 2023 08:53:57 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cd1a513c34a0.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:53:57 +0000
Received: from FR0P281CA0063.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:49::11)
 by AS8PR08MB9315.eurprd08.prod.outlook.com (2603:10a6:20b:5a6::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:53:51 +0000
Received: from VI1EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:49:cafe::73) by FR0P281CA0063.outlook.office365.com
 (2603:10a6:d10:49::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:50 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT014.mail.protection.outlook.com (100.127.145.17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:53:50 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:53:48 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57c40165-90c4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PmYoezE2IW59ME1CxZnlcTEIFj+1tLggVIMbia4uvmE=;
 b=S+xLDEACULP7+KdH+nj/G3VV5LHgo4ymITLwjnMgopxVbrXQXWhbUyVRz9ogi53sk2bNSHD7T/TGduW+f/9T00ohftgTkiuDvUHGYD5oXeN2++KvwKo2agN5w2ycRJ/mA9Sn5qmzfO9hXnC7+yleeh1mNMmfODCpbvjGACVDPU8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 83c795689737a32a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RPYvj2k1Lj2CdVKjNkT7p0alj6ZJiiZyFUrQ0HV4Mmtyd1ese3R+OykyIpydt7iUFzgnIdYeXmBQLelrIl9+PnlKb3aaIDNnTBrb0FWosBzCdHcwihxA5FbmT2HiQYLSVK01XAOb3Y5RwrWpuO+7SGLqTefQQiPaPNdXrEXHwIaF5vo+MZNH6CBS0AdYcMU2zNsOLcN7vuWc3jy5affe5q1b29n4pFcSweETtkDoucXGhY8VPRv9XG87iQiIWtaY8KgMKFFtu6f5HslJE1kLQeJJxbKFG+dex3stozwASNCHqAf1e3fa32oAtzl9KI2JMOsgduzSPvzjnHHFKiUyIQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PmYoezE2IW59ME1CxZnlcTEIFj+1tLggVIMbia4uvmE=;
 b=aCKm87ysa2saUVVLDBReWk2HgbAmJVGPn7fGA0UPXpPL9r7BZFJicp7bpp3ugFFaUGQfdM4NT+rlpLb7wYqXL8iseW38JQQHtCooDPJWw/zLq2G+LOHFPNCSUPs9f8GJNfmI0Wf051UFhsE6H7J0O4cdKvdvoPD4wpbPT8qncDiEvTFHFRV9qFfRiSeXzaRv9xq7JOo/CCt888ICD6/p2RKhfpISLuE+z37uasNRZ2SGG8QQFq/Qn/65D9wYRKZCj+LVF0hGLP4x/YSE6OZ0uXXf0NF9BbZsA0b09Z6bTrX95BhGXpc9W84x1+js5Ym3VpbZSPl68BzSSs+gMbkMqA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PmYoezE2IW59ME1CxZnlcTEIFj+1tLggVIMbia4uvmE=;
 b=S+xLDEACULP7+KdH+nj/G3VV5LHgo4ymITLwjnMgopxVbrXQXWhbUyVRz9ogi53sk2bNSHD7T/TGduW+f/9T00ohftgTkiuDvUHGYD5oXeN2++KvwKo2agN5w2ycRJ/mA9Sn5qmzfO9hXnC7+yleeh1mNMmfODCpbvjGACVDPU8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 02/17] xen/arm: implement helpers to get and update NUMA status
Date: Tue, 10 Jan 2023 16:49:15 +0800
Message-ID: <20230110084930.1095203-3-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT014:EE_|AS8PR08MB9315:EE_|AM7EUR03FT008:EE_|PAXPR08MB7646:EE_
X-MS-Office365-Filtering-Correlation-Id: 1d688aa0-011f-468c-06a2-08daf2e83a58
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 bTZm+HdFMjwpjFhk74ZmuJGg3uJLNaJNRMl7VjLMOaUlJGDCQ0bWSEj/1dMcyUn0K24XpZVKImbVi4f9ymRMHp/qiagmo4jJ1BkkLQU40ZEcPIg6HqZIUQHWtaEJQlNauWimUUHaks2nwBCa0rO4Nd8+x7/G3vDwhUQcGzKg3aOE8csafVk2vNtVboeqxB/rPA4hlpTOD8a8GWB1z/QR98Dusg2pleIPFZ0WIb9RcOdsaxpERwWejbTE8mypQ26I3u6qB4o5+vxtup5eRJcxOsOvMEqk0cVxXo0QQaXH5eEF5AztxTQgj0TJLm7xly0c+xAYrBbsq6Mkkuc1wt6JEymO9ZlCvda+slC5f5qizknvzINBR1TpHtgXM1MHcR9VnYAPzd1Qm/8viOxuynD9NrD1mEBvXQN55ZuHkIAa5vcuPgLJYevQIbJ0rwd4WDiZnIxNb5tTB4ho4V9NtuO0YkmGXL+wHIhP5nmTOwK1sxvuK6kHK52JBvCOsemSZkFHRRiOQHiUfxBpER8TvGqlq1F/gO69QUpssYTMPmk0nO5eAAWccnJ9WRuscqARP8+DLw/IFPcdSrtVbYlZiq0QZBptM2PrFgfozIcJt2JtB91eFvP+iWcj7clzkh+vqBk1/9GBY4TXbXQHEJI3lwtpC+zgAg8aGkKKgwTLf3AiDAygTmbM8yUTT0WSfgYiw0hU8+aFyU+3UEkwHZrMpwX65fNr4Ywe9D/u23QFOvPsl3R0WeKrdJXU8C4Kt37bmS94
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(396003)(376002)(346002)(39860400002)(451199015)(46966006)(36840700001)(356005)(81166007)(82740400003)(36860700001)(6916009)(41300700001)(8676002)(4326008)(54906003)(70586007)(86362001)(40480700001)(44832011)(8936002)(5660300002)(15650500001)(1076003)(83380400001)(2616005)(47076005)(336012)(426003)(186003)(26005)(7696005)(478600001)(82310400005)(6666004)(2906002)(316002)(70206006)(36756003)(2004002)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9315
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6c62addc-70e8-4869-cea6-08daf2e831a2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JQG2fUK/dot99yd9QECpImJCMJouuwXf4n5gOoRIOxfRxI6U5YDUNguNm4rOeUsBKfHahET3UpoGq0XTBFPNSDH9K9DuE4OY7xtlVPG5rlnUVbvu9OTYJEWaD4IDFUwFfpZVEsd8WtuJwZuy9c7bPEI5EoXfixMZmJGFaWO8A5ltQOsm0dG0Ngl+d40iZfmoIuA1HE5+y9MQALfVGSZH9Q+4VLUe/KPJHEW2DJnJeFKtTHmSXcefbaKnO3sxiAf9+X8EV+1pACTje19/Mm2r5mFU9QDCZ0MxAhz32q5YjkRdRyWgzlx1vz+JEjdWol0AjQ1gRaHUT8U+7vnwmLAMFkaM7g8UQ5e6nrq6VZmY8eeuyyf/Bnwe70ktAZ9knirYqSlTd+4xuLJWaRKSx+aJLcIWaJdpZVfpWorSG+R8rthp9yYj97TXtsBu+wjoEGWG/9qSx6MlpaF38Ac4YkAfSV0j01gQaOCkHXHh7miLuN3iXAKA1uwKnzf9orGSVBVsZKqwxY0+iCuukHmuus7wGcWN4Jikdar8QrdTDGknSmLX2UJYDM8gSKWo5sL3Vv1M7b0eDqRqdrkPVW6KypYqMZXbFY4f3TNJrxN0AW2tM5i/6ctA/GHX531o8wIt9CvxEv9wY9hpKLdmHK3voUmU7j1IJdOK91dPe8pcr0tIWNc/JtZVB+GMcheUL4Grj7I0XASAmLn5issHE1vJfbNQQQCeRzc9+haf2mIYxhecDjzsAcRk7symhfQEh80I0vGa
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(376002)(39860400002)(396003)(136003)(346002)(451199015)(36840700001)(46966006)(40470700004)(83380400001)(426003)(82310400005)(47076005)(316002)(86362001)(54906003)(6916009)(36756003)(40480700001)(7696005)(81166007)(107886003)(82740400003)(186003)(336012)(26005)(40460700003)(2616005)(1076003)(2906002)(44832011)(478600001)(6666004)(5660300002)(15650500001)(8936002)(36860700001)(70206006)(41300700001)(4326008)(70586007)(8676002)(2004002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:05.1059
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1d688aa0-011f-468c-06a2-08daf2e83a58
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB7646

NUMA has one global switch and one implementation-specific switch. For
the ACPI NUMA implementation, Xen already has acpi_numa, so we introduce
device_tree_numa for the device tree NUMA implementation, and use an
enumeration to indicate its init, on and off states.

arch_numa_disabled reports the device_tree_numa status. As we have not
yet provided any boot argument to configure device_tree_numa, we just
return -EINVAL from arch_numa_setup in this patch.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Use arch_numa_disabled to replace numa_enable_with_firmware.
2. Introduce enumerations for device tree numa status.
3. Use common numa_disabled, drop Arm version numa_disabled.
4. Introduce arch_numa_setup for Arm.
5. Rename bad_srat to numa_bad.
6. Add numa_enable_with_firmware helper.
7. Add numa_disabled helper.
8. Refine commit message.
---
 xen/arch/arm/include/asm/numa.h | 17 +++++++++++++
 xen/arch/arm/numa.c             | 44 +++++++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/numa.h |  1 -
 xen/include/xen/numa.h          |  1 +
 4 files changed, 62 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/numa.c

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 7d6ae36a19..52ca414e47 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,6 +22,12 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+enum dt_numa_status {
+    DT_NUMA_INIT,
+    DT_NUMA_ON,
+    DT_NUMA_OFF,
+};
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
@@ -39,6 +45,17 @@ extern mfn_t first_valid_mfn;
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
 
+#define numa_disabled() (true)
+static inline bool arch_numa_unavailable(void)
+{
+    return true;
+}
+
+static inline bool arch_numa_broken(void)
+{
+    return true;
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
new file mode 100644
index 0000000000..1c02b6a25d
--- /dev/null
+++ b/xen/arch/arm/numa.c
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm Architecture support layer for NUMA.
+ *
+ * Copyright (C) 2021 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <xen/init.h>
+#include <xen/numa.h>
+
+static enum dt_numa_status __read_mostly device_tree_numa;
+
+void __init numa_fw_bad(void)
+{
+    printk(KERN_ERR "NUMA: device tree numa info table not used.\n");
+    device_tree_numa = DT_NUMA_OFF;
+}
+
+bool __init arch_numa_unavailable(void)
+{
+    return device_tree_numa != DT_NUMA_ON;
+}
+
+bool arch_numa_disabled(void)
+{
+    return device_tree_numa == DT_NUMA_OFF;
+}
+
+int __init arch_numa_setup(const char *opt)
+{
+    return -EINVAL;
+}
diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index 7866afa408..61efe60a95 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -12,7 +12,6 @@ extern unsigned int numa_node_to_arch_nid(nodeid_t n);
 
 #define ZONE_ALIGN (1UL << (MAX_ORDER+PAGE_SHIFT))
 
-extern bool numa_disabled(void);
 extern nodeid_t setup_node(unsigned int pxm);
 extern void srat_detect_node(int cpu);
 
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index b86d0851fc..7d7aeb3a3c 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -55,6 +55,7 @@ extern void numa_init_array(void);
 extern void numa_set_node(unsigned int cpu, nodeid_t node);
 extern void numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn);
 extern void numa_fw_bad(void);
+extern bool numa_disabled(void);
 
 extern int arch_numa_setup(const char *opt);
 extern bool arch_numa_unavailable(void);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474431.735631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOc-0006k0-3W; Tue, 10 Jan 2023 08:54:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474431.735631; Tue, 10 Jan 2023 08:54:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOc-0006jt-0I; Tue, 10 Jan 2023 08:54:14 +0000
Received: by outflank-mailman (input) for mailman id 474431;
 Tue, 10 Jan 2023 08:54:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOa-0005oC-41
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:12 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2073.outbound.protection.outlook.com [40.107.8.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58b01ba2-90c4-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:54:09 +0100 (CET)
Received: from ZR2P278CA0066.CHEP278.PROD.OUTLOOK.COM (2603:10a6:910:52::20)
 by GVXPR08MB8234.eurprd08.prod.outlook.com (2603:10a6:150:17::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:53:56 +0000
Received: from VI1EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:910:52:cafe::96) by ZR2P278CA0066.outlook.office365.com
 (2603:10a6:910:52::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT043.mail.protection.outlook.com (100.127.145.21) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:53:55 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Tue, 10 Jan 2023 08:53:55 +0000
Received: from 61d7f55ca79f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E6FA4810-2017-46FE-9DD2-24C2ADEF3DDC.1; 
 Tue, 10 Jan 2023 08:53:49 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 61d7f55ca79f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:53:49 +0000
Received: from FR2P281CA0121.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:9d::12)
 by PAWPR08MB9892.eurprd08.prod.outlook.com (2603:10a6:102:342::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:53:47 +0000
Received: from VI1EUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:9d:cafe::45) by FR2P281CA0121.outlook.office365.com
 (2603:10a6:d10:9d::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:47 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT013.mail.protection.outlook.com (100.127.145.11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:53:46 +0000
Received: from AZ-NEU-EX02.Emea.Arm.com (10.251.26.5) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:53:45 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX02.Emea.Arm.com
 (10.251.26.5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:53:44 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58b01ba2-90c4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0ghbs3NMlTW31HC7fLfGV4BVVb2uAXdN3Yk6On4z3OE=;
 b=UYdjnt2KxskTOhh7CnrBqDiDOI8770guG9plfdw15gHHXoftHCwkLhzPmeougr81rLEhrQU0Ht3a5lEcK/wtM5x8HWdFrjShiv4Ej0Fz86uH6eVgm2sJBsiOu0BCHbmpA+xqDkFhcrbda7wQ/ahkESkrr206RgCaIBNHd1gX2n0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 57ba30604fec6c7a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Edm4yXDsKlHYiMNb9jWW0nNxhTpCGzJ4YgvTRozhkCAt0VQc3J5Kaii9ijiH0O0L6GldnHuiXWSBtnSCpXGhQJ9/sGPvoXxPCKd9y1bZCBFhHB/i1uWzkBvHX9ypx4lT0NnkDUqcmL54BPPnjwH2PYRnv57cJChPS2YZSVMHcAhzt6nb2pp4/UkBaeyD+PU0Kp5SLQojzh3ADIPtPnhI4D8smHR8iiL5Fcgr6TIHuCMQiJxoglvVxfsPcXwpNc1oG8G2hZUXLani2lt9K9H+sL+KhIzj5KBV0yf2pTAgl1JM2aXzVeuk64XrLgBDwrmrjWFwBhc3Zn/zIWIcc28H4g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0ghbs3NMlTW31HC7fLfGV4BVVb2uAXdN3Yk6On4z3OE=;
 b=CqpK6TLUP71Bphk0tmVXmS0cf3PWLb7rARR4AD0VdJodz6gTy9X9YX/lOJFDz+4VdqXv81KB/+FeBm+33O4KPzyxOWduQo3VGHURLiCPg6ihxrxR02uiE+fYiWnXiA7yJUXeXKAUCBwU/x1wEvDsI1sE6vxY30jEC+VUCbBl7EuENT/6kutf83QbZ0KxY72xsySAw0QiaySo9xGErzAfE2MwFCV66GW9wy2kwyIt1Ae0mR3al+ajvrdHbrRDPETle2wOQgibSYcOUUeFJOiHrp6lyJuemlPWNa2E7C+2DtgWyCKZEregffJdhC04gtGeUjZFG0wXGlsSVE6RucGHjw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0ghbs3NMlTW31HC7fLfGV4BVVb2uAXdN3Yk6On4z3OE=;
 b=UYdjnt2KxskTOhh7CnrBqDiDOI8770guG9plfdw15gHHXoftHCwkLhzPmeougr81rLEhrQU0Ht3a5lEcK/wtM5x8HWdFrjShiv4Ej0Fz86uH6eVgm2sJBsiOu0BCHbmpA+xqDkFhcrbda7wQ/ahkESkrr206RgCaIBNHd1gX2n0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 01/17] xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
Date: Tue, 10 Jan 2023 16:49:14 +0800
Message-ID: <20230110084930.1095203-2-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT013:EE_|PAWPR08MB9892:EE_|VI1EUR03FT043:EE_|GVXPR08MB8234:EE_
X-MS-Office365-Filtering-Correlation-Id: e59497f9-d102-4d6d-c5a6-08daf2e834da
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 WmhDGC8koXn0vc2EKsYtnqaU3vxpgT8bGu8QEnhxMUGAvItMSU8IeAoeuiW0L35D6Kmfy79qepNiMM/g9j+WFdi8pNKFTZ7aWKAPLOPR+Kfo6hkFyL9Hc7pei4RzxWXwf3C3BKo+FXhl9I9+Rs5XK+rNaBGxQDZYm6yQA1Eln+9wvNUU/bVkVqTXQY/Ps4D3nYDNx2lrXeWjTB8oapjt8DMDtB1KmBKZhVO4JIYbpYwuQYYKrERYvuZn0h9cZ5+MqQ+yOsHhOElwl4PMm8AH4AxFbLxXyV/1envd2QaMOrVLfj0SFXTTj5nBB3E7ir94ohHxUL9NHwhDa88s7OdXb7frPXf0imzqYQAt80sQqh7QlJSP5M6pQ/bWK9IoMZUsEwbS0NK5IuEnwg2JGu2L1i4T509j6aA3w0AzOYEPQDG6ibaf9EksB7R1p8qQEJTglK8H7EdfXnXmgEWff4g/CbEOBj82/QknNaELMz11PNfkMeQf7tR4cJtLPu4McMyrUHJ7gnclDi1zMpflH7rDYC9cbsSmhGaML0Vo1+tueY25QFPHmR3oz3buSpKQsg16FAaptOFGM2A+cSsoJfAF0y/6s6Bzml2zdbOEZSobjnRq1htY+Bpt+RVNrHIUIHKCqitsvsCryyFsumfQhfja8GXegcn5JDMvivb9rDJIyHyamdBFNVVm9VPW8ee2BwTk6b73+GLzw5IYNBgV/auJe/KiSKOHpoToCn/1vWcdCzg=
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(39860400002)(136003)(396003)(346002)(451199015)(36840700001)(46966006)(36756003)(356005)(41300700001)(2906002)(81166007)(8936002)(82740400003)(82310400005)(5660300002)(44832011)(47076005)(426003)(36860700001)(83380400001)(86362001)(966005)(54906003)(6916009)(7696005)(40480700001)(6666004)(26005)(186003)(8676002)(478600001)(70586007)(70206006)(4326008)(316002)(2616005)(336012)(1076003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9892
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d3ed03b6-9734-450f-b86c-08daf2e82f2e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	I+MnnlNnQvnqdKm6zsWgIJW3MNVJVlkk7+0GI33nv61Qa1ttsYj3sTMuo8vJ5nbTxKpMP/t7HzHZSTA0OSU8sJANMbdqvLwoQhInbzjJHkqR9A9itdF/8tfGxUyfsrSLRy8+V5cKG94O+DUyapBJT6HXcly1q3ICEZAkMQV4TVdUOlJDAFMWa9HSPhBJGKjRxhpgKSi/oRKRBgG2y/pZHaNlGUMZxzJKX5G+aL5GK4I3nVzeMqIPnALYrt+kseAeKy8qV/0wK/TGLr1BURQUS/sB6yGr6EsnF9YqEJVHlBso/o5iZC0vs+Gn/Db32w+EfNESq8mejmhDIDsk8jyX7XYKKlEjAHbiF+16kmJC5Ypk1nGG3/Zhsi89kUI/x5m04frEpVD6DsmDjR8u7Btzc+wo6WF1yu0JY4iTTJdNZpvfYyXvOOW5Td5REMZSpk+hO3j6jl0nOdmP4x0T0Tjb3WoNKXq2VzgtBHcgbEj13DgFL1GbCUnM/ppKDGMt1x2afUCHH0pK/4l17BGFoNEtWR9fY0TuEMAvFnpzIkrXjqarRCaGCv94HIyc9AnXR5qdQg3tJ9GMiiCOxLNhQkDOyCkxMTsPbPV5Ca+sPgCYP3M7zICxegNj/NHrVFwLO/PqiIuEKcRblaaEm010mFN3ICeq7AbmkD8i3xp9CDBeHpLFwDNCAUirfGjaenKMOVPypyy0OyfT+VMapGJ9trwuIblmHpzc330s4fXgaK3uzSs=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(376002)(346002)(451199015)(46966006)(40470700004)(36840700001)(82740400003)(81166007)(36860700001)(54906003)(8676002)(4326008)(40460700003)(86362001)(6916009)(41300700001)(70586007)(70206006)(316002)(8936002)(40480700001)(44832011)(5660300002)(2906002)(1076003)(2616005)(336012)(83380400001)(426003)(47076005)(82310400005)(966005)(7696005)(186003)(478600001)(6666004)(26005)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:53:55.8339
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e59497f9-d102-4d6d-c5a6-08daf2e834da
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR08MB8234

A memory range described in a device tree cannot be split across
multiple nodes, and it is very likely that a system with more than
64 nodes will need a lot more than 2 regions per node. So the
default NR_NODE_MEMBLKS value (MAX_NUMNODES * 2) makes no sense
on Arm.

Therefore, for Arm, we just define NR_NODE_MEMBLKS as an alias for
NR_MEM_BANKS. In the future NR_MEM_BANKS will be user-configurable
via Kconfig, but for now it is left as 128 on Arm. This avoids
having a different way to define the value for NUMA vs non-NUMA.

Further discussions can be found here[1].

[1] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Add code comments to explain using NR_MEM_BANKS for Arm
2. Refine commit messages.
---
 xen/arch/arm/include/asm/numa.h | 19 ++++++++++++++++++-
 xen/include/xen/numa.h          |  9 +++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e2bee2bd82..7d6ae36a19 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -3,9 +3,26 @@
 
 #include <xen/mm.h>
 
+#include <asm/setup.h>
+
 typedef u8 nodeid_t;
 
-#ifndef CONFIG_NUMA
+#ifdef CONFIG_NUMA
+
+/*
+ * It is very likely that a system with more than 64 nodes will
+ * need a lot more than 2 regions per node. So, for Arm, we just
+ * define NR_NODE_MEMBLKS as an alias for NR_MEM_BANKS.
+ * In the future NR_MEM_BANKS will be bumped for new platforms,
+ * but for now it is left as-is on Arm. This avoids having a
+ * different way to define the value for NUMA vs non-NUMA.
+ *
+ * Further discussions can be found here:
+ * https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
+ */
+#define NR_NODE_MEMBLKS NR_MEM_BANKS
+
+#else
 
 /* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 29b8c2df89..b86d0851fc 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -13,7 +13,16 @@
 #define MAX_NUMNODES 1
 #endif
 
+/*
+ * Some architectures may have different considerations for the
+ * number of node memory blocks. They can define their own
+ * NR_NODE_MEMBLKS in asm/numa.h to reflect their architectural
+ * implementation. If an arch does not provide a specific value,
+ * the default NR_NODE_MEMBLKS below will be used.
+ */
+#ifndef NR_NODE_MEMBLKS
 #define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
+#endif
 
 #define vcpu_to_node(v) (cpu_to_node((v)->processor))
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474433.735642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOi-0007FW-I6; Tue, 10 Jan 2023 08:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474433.735642; Tue, 10 Jan 2023 08:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOi-0007FL-EN; Tue, 10 Jan 2023 08:54:20 +0000
Received: by outflank-mailman (input) for mailman id 474433;
 Tue, 10 Jan 2023 08:54:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOh-0005s6-EJ
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:19 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2044.outbound.protection.outlook.com [40.107.7.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5e05ba28-90c4-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:54:18 +0100 (CET)
Received: from DB6P195CA0006.EURP195.PROD.OUTLOOK.COM (2603:10a6:4:cb::16) by
 PR3PR08MB5722.eurprd08.prod.outlook.com (2603:10a6:102:8f::15) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18; Tue, 10 Jan 2023 08:54:14 +0000
Received: from DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:cb:cafe::5b) by DB6P195CA0006.outlook.office365.com
 (2603:10a6:4:cb::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT036.mail.protection.outlook.com (100.127.142.193) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:13 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Tue, 10 Jan 2023 08:54:13 +0000
Received: from e119c51c4a8a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 51145894-0177-4CE7-B3C7-5A664C129EB9.1; 
 Tue, 10 Jan 2023 08:54:07 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e119c51c4a8a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:07 +0000
Received: from AM6PR01CA0070.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::47) by DBAPR08MB5830.eurprd08.prod.outlook.com
 (2603:10a6:10:1a7::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:00 +0000
Received: from VI1EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::6e) by AM6PR01CA0070.outlook.office365.com
 (2603:10a6:20b:e0::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:00 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT017.mail.protection.outlook.com (100.127.145.12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:53:59 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:53:58 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e05ba28-90c4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OmlC5wFOoSJu90l/OLm3xFQ5JzZOfdDA0L7vWkxPHDc=;
 b=Sy43EBjYcLq8aF4up6RexB9Ji6rFuvibh5nUWFOJsnY6cMDnUZD8UnFjsuXT0/DrtY26YjebUFNJh9QhblDTyKkj6lYjnAeOVZcFBIK1tFoQ8+/ZawjAc9qkDtl47S8Zxj2v2D20Cu5YlLpgAZeAmblg4Hd9qY58nARxQ/cu6Qk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 598d340d54c7d6d4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aEGZGb6/zZt5fGplWpgxZUBCQ+6cLrMMtFP5+0Kuh0mqcjdk+0sE0z618V/BSDCjwqx34pqXB0WJTk7lUnsvJAvWlIqdWe/OVwU6qWt83e1WcD+oxQTeo3v6KA0SMTHo9fJzJ2NmqEEfDmhIvL0+e4ErSZU5Em+PYtPz7P7R+3Zs/s5jyOfDN1f3BtIqkKzEW/c3RxCDUZ6fJxSb2yIwFeeeriU1xrZQzcIGsEIu5gL67tS7h6vpioNoD/xEV+loIUFnc62yMQVEJTgnM1/qfTquG4luUk/RRkgPXL6BTZ1Uxaidwd+kPoJsximRbXvajTq5NA4GMJpcrF5uvxzS4w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OmlC5wFOoSJu90l/OLm3xFQ5JzZOfdDA0L7vWkxPHDc=;
 b=ZF93L2XywmOEbBeFqAwIAZy8HIv1U1Fgqr9DS5hkQC1o4Fl/4kIYGDBznbqwFoHlfmo3tzBP1ZY7reQdTJZldEjgp0clozdoVxGdlcO2GqFSHDDOqdZAhTMfdO0F+ahyQinWuKPJr8nHwJtYDmcv2avIJJ5KyVlReoseNKBR8ydle+LieZDR2EXHjIwGKfZuoZ5VVyAehrnGutzvcrQmpsrktlbb4uqXv4Thep5eHwi5mqufS/SW7zcC+7EDA5qK40ReqBbk3IGGDtg0kwzMP+oJrJ5CUWxEHtPJgyyvx7TyXKsxo6PiHr90e0NiygL7bnqq+u7jkstMIXeMZU+Nxg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OmlC5wFOoSJu90l/OLm3xFQ5JzZOfdDA0L7vWkxPHDc=;
 b=Sy43EBjYcLq8aF4up6RexB9Ji6rFuvibh5nUWFOJsnY6cMDnUZD8UnFjsuXT0/DrtY26YjebUFNJh9QhblDTyKkj6lYjnAeOVZcFBIK1tFoQ8+/ZawjAc9qkDtl47S8Zxj2v2D20Cu5YlLpgAZeAmblg4Hd9qY58nARxQ/cu6Qk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 05/17] xen/arm: build NUMA cpu_to_node map in dt_smp_init_cpus
Date: Tue, 10 Jan 2023 16:49:18 +0800
Message-ID: <20230110084930.1095203-6-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT017:EE_|DBAPR08MB5830:EE_|DBAEUR03FT036:EE_|PR3PR08MB5722:EE_
X-MS-Office365-Filtering-Correlation-Id: bd4cf88b-290a-448c-d48c-08daf2e83f7a
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 SI51p/Vk9meZPDmXBPqoFHt6ZfyI+gLA1ABYTrtNS/85Ed1RuiafFjLa8TzMtnbg+IQ/iKGoUhM8y7MzbJqqzV+lZx9+xlI9GVOVbdG+rEwZo0mYO9jJkkPaKy9RIKo2IF82ejktwJmEgnpzHaGr7FWnOUSTeemIBZdmryOIP7K6ku4RsGZo1lRqiSsVE7K5zFmbzK3V9430c+b8iepZ1CGAPtkh7XnQ8f6m2MzjfY/EX01I97NemBiTlCrdNXC/qt6NPQYsqiMmsYVFn8YRyqtRuZVsdiqql2ArAlUC+mvKyBPnZRGHozB1j0cGxa+ul+Vtqt9BEYrz8V3Q6LBJNWLzCIuvLEUWBxxsCwpv0ENf2KfGz7OzXjkhkQ6E8j6fkWIqw5r+zVnbgA+aAfR4p5cq+owJsGTA8naXLErTiS+tqMFV6Dg9BPssAfwDW8EVt3zgYTPVjeAD5E4C+oXHgAJFJymN4pW0ds1I+CEeFx1hhf7GI+nNwY1r5GUzAS3lAFrIkcyZbJ6zuc4ktsK9gObITApriMVUcZwQnMdk6nc+5fk+2/Ay7bIBWT9fGpaRbJQnJ7UX5oQR4RsoaFl/ql/Y0WR9IbE9wIqWYYvLZ2X8V1D3TRSAk11qoBP90ZSW46kdo0Na2v2GuQ6zdpchMP3SCmlYjt8KvC83cTImtr72gdQ7+mq2w0jB3ccMsXMMKxfAm375n6+/v4vrv2YztQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(136003)(39860400002)(396003)(451199015)(46966006)(36840700001)(5660300002)(44832011)(8936002)(41300700001)(2906002)(36756003)(54906003)(83380400001)(8676002)(70586007)(70206006)(4326008)(316002)(6916009)(7696005)(26005)(81166007)(186003)(6666004)(86362001)(478600001)(336012)(1076003)(2616005)(40480700001)(47076005)(82740400003)(356005)(426003)(82310400005)(36860700001)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5830
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6a57a722-ad5f-4226-b963-08daf2e8370f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lVQCwmOzH206DPJaw9vKdCl72MM1bNLoqu86R3ShS2+5Gf/tKqLwZLYp10Xk3oWlocpPaEMmhS7ZWcOi8ShIXMzqgk5nUrJYys9CDQNuQ/VoGgfb0HHnk2cNALPQitqNnv2qFMdndJNjAe2byaVIWljeyY6OVh/FGljnowLupVelDdwE5TxQlcZFIEDP7wVzXaC1p+aADa7bxyLszjTu2lQAyXlOwnOV201YNe4PzBcm+YDJTOmkP2G8asAo8XL3b87jntriroiLjLqrCCpIcxip+tjyM1A7K/iIow4zgDbNrRvRvre0WeOKj4qnTMwl2ZYyDExFfKJiwo/uc1F/91OoY1YfGOBbTZwccDxhNMEVOTljCeo74j5gswZn3DkZymZPT+Imbc8EzEC2cJMkdPbQlJqGz5yo+8X3Y6Y3kCAI1WZrE611/LduHtPCSVd1DxN4uLcST/nJyLSpq9JsPMoESZ2B9tLP9OWK9X/OUEOa/CGe01mAtdEd+oRsIjxjX8SjjcSq0iXhfCNjaXxo50u+bFqn7k03fDVpq9I1vQ901AZ/1rrhOmEBH1BigVj/PEPg8elmN7i2oDZJ0J8ykHoPuyppqml/qHINr888cp3B0X3Q4sG3J0vtNm5k4PrAX31XcfzKxI7Xca5ShayBGNColbB1t4IjYxWda0QsdAf5hwQg91bGz4uzSssBBkV1Z69svS7AGaOtPzisAype9Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(136003)(396003)(39860400002)(451199015)(40470700004)(36840700001)(46966006)(44832011)(2906002)(5660300002)(70206006)(36860700001)(8676002)(41300700001)(7696005)(8936002)(36756003)(70586007)(83380400001)(4326008)(316002)(86362001)(2616005)(82310400005)(426003)(336012)(6916009)(47076005)(186003)(40460700003)(1076003)(40480700001)(478600001)(81166007)(107886003)(26005)(82740400003)(54906003)(6666004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:13.7539
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bd4cf88b-290a-448c-d48c-08daf2e83f7a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5722

The NUMA implementation has a cpu_to_node array to store the
CPU-to-node map. Xen uses CPU logical IDs in its runtime
components, so we use the CPU logical ID as the index into
cpu_to_node.

In the device tree case, cpu_logical_map is created in
dt_smp_init_cpus. So, when NUMA is enabled, dt_smp_init_cpus
will fetch each CPU's NUMA node ID at the same time to populate
cpu_to_node.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Use static inline functions instead of macros to get
   type checking of function parameters.
2. Add numa_disabled to gate the numa-node-id check for the
   case where CONFIG_NUMA is on but NUMA is disabled by the user.
3. Use a macro instead of a static inline function to stub
   numa_set_node.
---
 xen/arch/arm/include/asm/numa.h |  4 ++++
 xen/arch/arm/smpboot.c          | 36 +++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index dbdb632711..3bc28416b4 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -70,6 +70,10 @@ static inline bool arch_numa_broken(void)
     return true;
 }
 
+static inline void numa_set_node(unsigned int cpu, nodeid_t node)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae22869..5ee6ab11e9 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -118,7 +118,12 @@ static void __init dt_smp_init_cpus(void)
     {
         [0 ... NR_CPUS - 1] = MPIDR_INVALID
     };
+    static nodeid_t node_map[NR_CPUS] __initdata =
+    {
+        [0 ... NR_CPUS - 1] = NUMA_NO_NODE
+    };
     bool bootcpu_valid = false;
+    unsigned int nid = 0;
     int rc;
 
     mpidr = system_cpuinfo.mpidr.bits & MPIDR_HWID_MASK;
@@ -169,6 +174,28 @@ static void __init dt_smp_init_cpus(void)
             continue;
         }
 
+        if ( IS_ENABLED(CONFIG_NUMA) )
+        {
+            /*
+             * When CONFIG_NUMA is set, try to fetch NUMA information
+             * from the CPU's DT node; otherwise nid is always 0.
+             */
+            if ( !dt_property_read_u32(cpu, "numa-node-id", &nid) )
+            {
+                printk(XENLOG_WARNING
+                       "cpu[%d] dts path: %s: doesn't have NUMA information!\n",
+                       cpuidx, dt_node_full_name(cpu));
+                /*
+                 * During the early stage of NUMA initialization, if Xen
+                 * finds any CPU DT node without numa-node-id info, NUMA
+                 * will be treated as off and all CPUs will be placed on a
+                 * fake node 0. So if reading numa-node-id fails here, set
+                 * nid to 0.
+                 */
+                nid = 0;
+            }
+        }
+
         /*
          * 8 MSBs must be set to 0 in the DT since the reg property
          * defines the MPIDR[23:0]
@@ -228,9 +255,13 @@ static void __init dt_smp_init_cpus(void)
         {
             printk("cpu%d init failed (hwid %"PRIregister"): %d\n", i, hwid, rc);
             tmp_map[i] = MPIDR_INVALID;
+            node_map[i] = NUMA_NO_NODE;
         }
         else
+        {
             tmp_map[i] = hwid;
+            node_map[i] = nid;
+        }
     }
 
     if ( !bootcpu_valid )
@@ -246,6 +277,11 @@ static void __init dt_smp_init_cpus(void)
             continue;
         cpumask_set_cpu(i, &cpu_possible_map);
         cpu_logical_map(i) = tmp_map[i];
+
+        nid = node_map[i];
+        if ( nid >= MAX_NUMNODES )
+            nid = 0;
+        numa_set_node(i, nid);
     }
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474437.735653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOk-0007cx-SI; Tue, 10 Jan 2023 08:54:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474437.735653; Tue, 10 Jan 2023 08:54:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOk-0007cl-O3; Tue, 10 Jan 2023 08:54:22 +0000
Received: by outflank-mailman (input) for mailman id 474437;
 Tue, 10 Jan 2023 08:54:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOj-0005oC-LX
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:21 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2040.outbound.protection.outlook.com [40.107.247.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e93e1ff-90c4-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:54:19 +0100 (CET)
Received: from AS9PR0301CA0045.eurprd03.prod.outlook.com
 (2603:10a6:20b:469::13) by AS4PR08MB7783.eurprd08.prod.outlook.com
 (2603:10a6:20b:517::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:08 +0000
Received: from AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:469:cafe::5) by AS9PR0301CA0045.outlook.office365.com
 (2603:10a6:20b:469::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT005.mail.protection.outlook.com (100.127.140.218) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:07 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Tue, 10 Jan 2023 08:54:07 +0000
Received: from 506abfabed72.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 67DB3E74-75F4-498C-8913-675EE63F1492.1; 
 Tue, 10 Jan 2023 08:54:00 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 506abfabed72.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:00 +0000
Received: from FR3P281CA0049.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:4a::22)
 by AS2PR08MB9570.eurprd08.prod.outlook.com (2603:10a6:20b:60a::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:53:54 +0000
Received: from VI1EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:4a:cafe::56) by FR3P281CA0049.outlook.office365.com
 (2603:10a6:d10:4a::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:54 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT035.mail.protection.outlook.com (100.127.145.20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:53:54 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:53:52 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e93e1ff-90c4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3aUeRUGBRNGaCgpSKVEFEGKe8AB9JhLLaUj+oRdym0E=;
 b=PAO6IshgIncgfGpUvO35veBmTVm6f6P1LLu6onpE7NfrSnPDrdh+lQhwumbTJXEwfS3wzlZxUsykqlflANNOZD8wxn65JCFb84NiA4pfsxNM9TqL3HATUHEMnCwMJyqLTmc25KpLynD+f0ueIKiG9lISXLRuBHFav4JONxcExeQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2bad1a70c48d8a12
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V/lx093Rgg1VIfShjwQq1ohzRNs9oSNa+HDWmUcxAfm7vDJVR4R59yQiHZZwWQ4H2fQEjrLcxvcBKUUgdFtzDpaBYOMfN3rnQmfnjR7YA5Ih2lAmEpBncAyPWV75eB0LlEL+dhuosJD9kDHDB3NT8+uJH5Nj6EJtG3yQcu9efP5fLq3Xn/Ky9woedVIuh5WvZkTDim0CkWT0Ni3kJYY/E4fbuOOIC6CXMmzHA51olccHyqira4+ht4+alRSRiYDmgzVAAwHYATv8T6P/dynmv8iMcsdEIToBNlvkZpdYLRFmtWu2TOkVcIXD6GQd4MDuiQRN/h7rJN/O3RFmo7A6jw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3aUeRUGBRNGaCgpSKVEFEGKe8AB9JhLLaUj+oRdym0E=;
 b=Yie0lS5hpUE7rzC0dKy8sIdh25NRAb60PPW+2f9IXL36IBGKfeU4QxUSP+wSoxcYvr2BpJa4Mly/q/J9kniMoSkADxOAMqTxkX5hQmkencgKhKVvz9DiEmwQTd5q9cxXW4q4qqPISRIeLVxWxEtIIuLmDTouY1uE97JieeUkI/baOnQ+d3v7wRVeyp76uLL04LJ/WzftFpwfgvLzDHVDzzDVX+QyZTceZY3ige9d++ZfQa3nmIMNrgbp5XagIEGnbvF5DYmLLY29td/0Nekd2sDWKxnXf4M4rwmY7T5jpSNhT8yX2DAyCh9X2HmeeRMJh0NxyWiGoRpEfRRuPqejlQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3aUeRUGBRNGaCgpSKVEFEGKe8AB9JhLLaUj+oRdym0E=;
 b=PAO6IshgIncgfGpUvO35veBmTVm6f6P1LLu6onpE7NfrSnPDrdh+lQhwumbTJXEwfS3wzlZxUsykqlflANNOZD8wxn65JCFb84NiA4pfsxNM9TqL3HATUHEMnCwMJyqLTmc25KpLynD+f0ueIKiG9lISXLRuBHFav4JONxcExeQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 03/17] xen/arm: implement node distance helpers for Arm
Date: Tue, 10 Jan 2023 16:49:16 +0800
Message-ID: <20230110084930.1095203-4-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT035:EE_|AS2PR08MB9570:EE_|AM7EUR03FT005:EE_|AS4PR08MB7783:EE_
X-MS-Office365-Filtering-Correlation-Id: d6ff6283-0e07-422c-adb8-08daf2e83bc8
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 hbjNgsCSK8+bVz4REmlbUM3es/jUHi81a7rooPo68FJU8b4Z6G5bBfPCxPZvJ97ZKjHpzFXi3nSejLOfCRO4ac0GVMdonoHFJCtH4GaG8gPHdtPWOqyMGr9f4ecTTc2bHCU8EOsEmOAla56E5dhE8CuJI6LvAjLqPJ+zOW/Q2lhA0oXOTxkM/iWa7W6FJG/6MExdzZmRR02CCTQrQtjzxiiZi0NDD4R16nK4ypkVx7MUtHcwkPTN2x4JSDry63ATDcUKzwMrnrk36ByKA3oz9YFmMQzletrmwwtTQpgdX+6DHbc6eqZEGNOjrvpraDgAEV1j7XdCyD7y6VhtHwjA7P5snhl74qbdP/BHQJht6VATcISOE1qscMdH6DbkgHceXnfhw4F0c/kE9/HJVWUr+BUZMZnVBBCdbjvmHxRrPwCU5xl5KLE5zQj4kC/swUHFUpUJd6d214GxLqvgPx4UpQfd3MbwI+hwfMeL7pFHqriqFWxMIuGJOCM/jz0xqB0k8yqxc2zs49hfItZdsUCGZ+tDlJ9GfL9M5AZOfKkqlZeHHAG0mnCJyQ2idVgF0e+9PLFI1F8wGB+JYhWT6MlOV231G5tZQ/ExnXfzZ+relYMme9Ph6YD5vUcVAPAFgvc3lHho0e7dYVAMUsCJ8qjwh3feahAHQJ7Vcj+e9cd7+GDZjQP7QPvVQrsaSj/NguCGH1XRSLBOvfuIdSrS4OSWo/E+P3K8plxMzsKapolStjY=
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(39860400002)(346002)(136003)(396003)(451199015)(36840700001)(46966006)(83380400001)(36860700001)(86362001)(356005)(2906002)(4326008)(82740400003)(81166007)(44832011)(8676002)(5660300002)(70586007)(8936002)(70206006)(426003)(40480700001)(41300700001)(82310400005)(186003)(26005)(336012)(2616005)(47076005)(1076003)(7696005)(316002)(6916009)(478600001)(54906003)(6666004)(36756003)(2004002)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9570
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	cf03f6e2-3636-4159-bfff-08daf2e833c9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VW7TrlGt2fVdKZarcjmXloyKk/QCj/J9JvtDmZ2/okup7iEfY7akz2qC0DmF6+RdD1RP54n0c7gqawZrFgSyyIvma6H58oRx0CmsWUzDh+tBVXudK7b67xIshwAaYlM64qeSPf2vVuyGfJGLgBB2yyZLr+ODA4wKYjK2M+mxiArJ7ojWiCjCTUme9A6tdFi/oftRuALDNED2osE9ZD+3QPAjjG9pubjl2lTkYC7MYiOp2l126iQ08W/RET4t5qNFes2N3MC97nSOG1pUNkQJgY96Wlg4A9DajdCILzC0a8LnMepQGUdj8xcQLJC5NfsWK2m+TiM3ey6Udr2iBlMlYVe1qUuHDB18AzIhUz82AI4b6Gl6seEDe39ifLkdxmoenGLMGKnraUb+Ssh4WDDmYbbxgm6Jw7/SZfNuoxbDIDfylfkSx90CHf/Jxfrclv9QPcDNJzmvCVfAHWtzlholCRuI72RT4pPIhUFz2R/r8fCu41dyyFWTgvF6+ZKBMGJcM0zRxKtuDtFGjghhzOMOtKccss0MWnud/PGtagX6BOpAL7gaHp+sk2WYHvjsRID2v+RorESu5UM55BglGnqAwdFtcnTJIv2weQA7MYezHeTCbjxV4U6etP1tGZP4i16HSacuuU8VjDznsoHR8JjGzASKedseA/o6QvmezuDY0blGpp+0deKC8v40iIxM/l7YfAPltgYowYr0OSb5omfLmyMUKiOz66JfAtT02PfjnHs=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(346002)(396003)(136003)(376002)(451199015)(36840700001)(40470700004)(46966006)(40480700001)(36756003)(86362001)(40460700003)(316002)(7696005)(54906003)(6916009)(478600001)(6666004)(5660300002)(2906002)(44832011)(70586007)(107886003)(70206006)(8676002)(4326008)(41300700001)(36860700001)(83380400001)(82740400003)(2616005)(81166007)(26005)(8936002)(1076003)(186003)(47076005)(426003)(336012)(82310400005)(2004002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:07.5219
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d6ff6283-0e07-422c-adb8-08daf2e83bc8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7783

We will parse NUMA node distances from the device tree, so we
need a matrix to record the distance between any two nodes we
have parsed. Accordingly, this patch provides the
numa_set_distance API for the device tree NUMA code to set the
distance between any two nodes. When NUMA initialization fails,
__node_distance will return NUMA_REMOTE_DISTANCE, which lets us
avoid rolling back the distance matrix on failure.

As both x86 and Arm implement __node_distance, we move its
declaration from asm/numa.h to xen/numa.h. At the same time, the
outdated u8 return type used on x86 is changed to unsigned char.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Use unsigned int/char instead of uint32_t/u8.
2. Re-org the commit message.
---
 xen/arch/arm/Makefile           |  1 +
 xen/arch/arm/include/asm/numa.h | 14 +++++++++
 xen/arch/arm/numa.c             | 52 ++++++++++++++++++++++++++++++++-
 xen/arch/x86/include/asm/numa.h |  1 -
 xen/arch/x86/srat.c             |  2 +-
 xen/include/xen/numa.h          |  1 +
 6 files changed, 68 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4d076b278b..9073398d6e 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
+obj-$(CONFIG_NUMA) += numa.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += platform.o
diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 52ca414e47..dbdb632711 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -28,6 +28,20 @@ enum dt_numa_status {
     DT_NUMA_OFF,
 };
 
+/*
+ * In the ACPI spec, 0-9 are reserved values for node distance,
+ * 10 indicates the local node distance and 20 the default
+ * remote node distance. Node distance maps set via device tree
+ * follow ACPI's definition.
+ */
+#define NUMA_DISTANCE_UDF_MIN   0
+#define NUMA_DISTANCE_UDF_MAX   9
+#define NUMA_LOCAL_DISTANCE     10
+#define NUMA_REMOTE_DISTANCE    20
+
+extern void numa_set_distance(nodeid_t from, nodeid_t to,
+                              unsigned int distance);
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 1c02b6a25d..34851ceacf 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -2,7 +2,7 @@
 /*
  * Arm Architecture support layer for NUMA.
  *
- * Copyright (C) 2021 Arm Ltd
+ * Copyright (C) 2022 Arm Ltd
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -22,6 +22,11 @@
 
 static enum dt_numa_status __read_mostly device_tree_numa;
 
+static unsigned char __read_mostly
+node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
+    { 0 }
+};
+
 void __init numa_fw_bad(void)
 {
     printk(KERN_ERR "NUMA: device tree numa info table not used.\n");
@@ -42,3 +47,48 @@ int __init arch_numa_setup(const char *opt)
 {
     return -EINVAL;
 }
+
+void __init numa_set_distance(nodeid_t from, nodeid_t to,
+                              unsigned int distance)
+{
+    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
+    {
+        printk(KERN_WARNING
+               "NUMA: invalid nodes: from=%u to=%u MAX=%u\n",
+               from, to, MAX_NUMNODES);
+        return;
+    }
+
+    /* NUMA defines 0xff as an unreachable node and 0-9 are undefined */
+    if ( distance >= NUMA_NO_DISTANCE ||
+        (distance >= NUMA_DISTANCE_UDF_MIN &&
+         distance <= NUMA_DISTANCE_UDF_MAX) ||
+        (from == to && distance != NUMA_LOCAL_DISTANCE) )
+    {
+        printk(KERN_WARNING
+               "NUMA: invalid distance: from=%u to=%u distance=%u\n",
+               from, to, distance);
+        return;
+    }
+
+    node_distance_map[from][to] = distance;
+}
+
+unsigned char __node_distance(nodeid_t from, nodeid_t to)
+{
+    /* When NUMA is off, any distance will be treated as remote. */
+    if ( numa_disabled() )
+        return NUMA_REMOTE_DISTANCE;
+
+    /*
+     * Check whether the nodes are within the matrix's range. When
+     * either node is out of range, treat the distance as unreachable
+     * (NUMA_NO_DISTANCE), unless from == to, which is always local.
+     */
+    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
+        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;
+
+    return node_distance_map[from][to];
+}
+
+EXPORT_SYMBOL(__node_distance);
diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index 61efe60a95..18b71ddfef 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -21,7 +21,6 @@ extern void init_cpu_to_node(void);
 #define arch_want_default_dmazone() (num_online_nodes() > 1)
 
 void srat_parse_regions(paddr_t addr);
-extern u8 __node_distance(nodeid_t a, nodeid_t b);
 unsigned int arch_get_dma_bitsize(void);
 
 #endif
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 56749ddca5..50faf5d352 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -328,7 +328,7 @@ unsigned int numa_node_to_arch_nid(nodeid_t n)
 	return 0;
 }
 
-u8 __node_distance(nodeid_t a, nodeid_t b)
+unsigned char __node_distance(nodeid_t a, nodeid_t b)
 {
 	unsigned index;
 	u8 slit_val;
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 7d7aeb3a3c..cff4fb8ccc 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -115,6 +115,7 @@ extern bool numa_memblks_available(void);
 extern bool numa_update_node_memblks(nodeid_t node, unsigned int arch_nid,
                                      paddr_t start, paddr_t size, bool hotplug);
 extern void numa_set_processor_nodes_parsed(nodeid_t node);
+extern unsigned char __node_distance(nodeid_t a, nodeid_t b);
 
 #else
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474438.735664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOm-0007vw-F3; Tue, 10 Jan 2023 08:54:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474438.735664; Tue, 10 Jan 2023 08:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOm-0007v2-AD; Tue, 10 Jan 2023 08:54:24 +0000
Received: by outflank-mailman (input) for mailman id 474438;
 Tue, 10 Jan 2023 08:54:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOk-0005oC-ED
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:22 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2051.outbound.protection.outlook.com [40.107.20.51])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5eadc0bb-90c4-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:54:19 +0100 (CET)
Received: from AS9P251CA0024.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:50f::26)
 by AS8PR08MB10272.eurprd08.prod.outlook.com (2603:10a6:20b:62b::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:17 +0000
Received: from AM7EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:50f:cafe::a0) by AS9P251CA0024.outlook.office365.com
 (2603:10a6:20b:50f::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:17 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT020.mail.protection.outlook.com (100.127.140.196) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:17 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Tue, 10 Jan 2023 08:54:17 +0000
Received: from ce7d68c79393.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DCA68D6C-84A1-4841-B4DA-AE16FA9E8149.1; 
 Tue, 10 Jan 2023 08:54:10 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ce7d68c79393.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:10 +0000
Received: from AS9PR04CA0143.eurprd04.prod.outlook.com (2603:10a6:20b:48a::24)
 by AS4PR08MB8021.eurprd08.prod.outlook.com (2603:10a6:20b:584::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:07 +0000
Received: from AM7EUR03FT045.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:48a:cafe::74) by AS9PR04CA0143.outlook.office365.com
 (2603:10a6:20b:48a::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:07 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM7EUR03FT045.mail.protection.outlook.com (100.127.140.150) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:07 +0000
Received: from AZ-NEU-EX02.Emea.Arm.com (10.251.26.5) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:07 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX02.Emea.Arm.com
 (10.251.26.5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:06 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5eadc0bb-90c4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OKOqbwKgR3BpQXq2D8GZb/uo0PrVqRyZlKb7VkWKCUg=;
 b=AXnC8dht/q3saI6VzXewmyQIF1ygQWApxX737ntEYgSF1qsK0vD6c2sA1RwMWmPf+SCr2xHX2aP0c+uPjVufXv4LGxXiZicbyTVPFVLFUqb8Bi6zbndNDBHYEpua9CfhXTpHhIgEvgs3u4qmS9b9dp6XsAV4wrltkq/IVBZcCDg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7cb5e6b944ac46a3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZVvzccwh/3zGxJOsUbts1FUGLsjFolK0rMjB/BlvVlK//6wTHke8+1th5q8ij6DxZrh/QsjLpR+UxOmunx6+PQQKD9jCt1L8M5j+wGeKlEXn62p6SFRn+yg0GB0Q6dnIwdD+axbqn5aWghkmlOu/oSr9Bvz/ktJ3HdFcnCCyiBSotmaLi5a9MSTrZ1lk1g/IuqNL+PwnLNz7Wu8heDRJxTLoKj6eLD83N6gVYby2iBKosPeRz0E4l3xwI6TPGUT0qMiGdBh8Wcq37uUHA69z/rQbQ/4EXT5/xMxX5vHgk1fmI6iNPM6WBQbMG61c5719+PL+lXOg5NAAtgQDZpj8Sg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OKOqbwKgR3BpQXq2D8GZb/uo0PrVqRyZlKb7VkWKCUg=;
 b=XfnTSQv3AhqoRJm55lqZgy4/BvOzddIojFoX+9svsbb+3WTmYzyinyeIH2vhOZ9sfDN9WSGWmztDbj2Fo0LNX9tSDIRqQ27J9eWAyursULHI+BL/iJRZfhDZftRgiPIu3NcuH2ElxbjWmrg2VO+MLqDyydvMztkaWDFng4CEjce4l9EjXBkik4sakk6afa3vePd1o2AnuW4Dw68Ac4BF9O2AqZTFn9l3RWeD1gKrxGf6RviO3N/Kn12TgLV18uc6sTgoyYIpI14n5eKrvCL/bMxYuF9CFT7m1CSRDN8iwUGpN6uFuyoqcuDQXkzgUqYSMYjsEInmV4W1QEo51z/3tA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OKOqbwKgR3BpQXq2D8GZb/uo0PrVqRyZlKb7VkWKCUg=;
 b=AXnC8dht/q3saI6VzXewmyQIF1ygQWApxX737ntEYgSF1qsK0vD6c2sA1RwMWmPf+SCr2xHX2aP0c+uPjVufXv4LGxXiZicbyTVPFVLFUqb8Bi6zbndNDBHYEpua9CfhXTpHhIgEvgs3u4qmS9b9dp6XsAV4wrltkq/IVBZcCDg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 08/17] xen/arm: introduce a helper to parse device tree memory node
Date: Tue, 10 Jan 2023 16:49:21 +0800
Message-ID: <20230110084930.1095203-9-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	AM7EUR03FT045:EE_|AS4PR08MB8021:EE_|AM7EUR03FT020:EE_|AS8PR08MB10272:EE_
X-MS-Office365-Filtering-Correlation-Id: 804971e2-82c8-4155-85b5-08daf2e841d4
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 cYC1f+ENHyJAnlqwLMSVrd/QKZpp8bFQw/NYKKG7PF23sg5hOcL8aXy9meIimpkmkmoX0f108tkj+WLiA8zoksBm92kundWXXBtANT3EFsrL05GmUeibh+Rb6td7vJf1lZOnIhC5NvpF6u8h76EadVhTDTNS9nrHyMH04Z3tzFRI7jefglp/b7ZvgJTnBqWXCNzxWtsGkQHP0eX8+RFvzhblIsgg8j5if/EPGHbXA01umoe9wYjjaQeQBcxwbbNoN3cioqZMGPkYQaPL7/OqC3EwheIoyucUdgNtMWD50gK+5T33/aDVPITtDE/HPYR+0lGkHM2hLbj2Af5mRvnlRDXWRpY5qfz0Gr4JwjAGyF9TcANP53AAD2scNH/pbhgOQYlQCVyrYQmU2p4OXHPKxHrF307cQ064ST5RmCnxdx0F9HUjDrTjGRD1L6ax/P3dhi9cZkw6YKPRXvql9F7jeI/OflVpXCAQuzHMQJ3VmNzHvrAVUFDiJu1V9XCClHQ0DkMj1jLCnQg/j+bqX7FhHj3FWkPr2gBUaixh+wHECaPQo4wrZDnY4f44BqbUUe2+xXwuqQ0DmGGQ7f+XjmakcNq9gvYAAQp+Cg4yOtkyatSPpm0n6cy+NG5v/RXg/mSpeuRfFTy/QRVYhORYWOJIkHgOrqgANSi2YG/F9tDR+x42xwGZ7DQFN3WniF3oZHjkwiOi8SgirOb2EzSYyS8vSw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(376002)(346002)(451199015)(36840700001)(46966006)(36756003)(82740400003)(81166007)(356005)(86362001)(2906002)(44832011)(8676002)(70206006)(70586007)(4326008)(5660300002)(8936002)(83380400001)(36860700001)(7696005)(316002)(54906003)(6666004)(6916009)(40480700001)(82310400005)(41300700001)(1076003)(47076005)(336012)(426003)(478600001)(186003)(2616005)(26005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8021
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ed89b6d8-b1b3-4969-9475-08daf2e83bcb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tASn4v6OG2BoFMYZF9AgW2RWMlFlKpGv9j/M0edEIiWzE4w4kB/48VYlPsavsFGiSwzxeXl4GGIMqH7P53um9SQqgoag6SyOikYCnS/WuzPqQ1YsVnFTf6yVE8icJiU4zANRc5hkkCeM+HlGN0MBGUUersvOKdyJOg98t872jSEdCgv9P5Bn9uRbC6ekrKg6UOM/RGUD2QvG4UoHuK99HMhVl+1GAxWxX/JcJU26h38nSX5PEz1FJFqkz8umipHIPG/wGhvnn7k9vYKkIMmMuJ2zKMcrneVyXiTEJ697+Y6r71RV8ETCltuKwSfyqZg8IzEC+AwJejMd/AvjE83dufsElbdhf7myh2z9yNHCFwzc3OM3xs5om7Jd7Zlvu4o2AWLvcyhm9ozRrEJn9Rj3SSUj/79yBbMh/BvkbPOYQ8IrRSzMnPKYddZCuTIHH8JIi8AylqPAcMJmzBTZvjFb1Zd/rwAAuMWIYnqKMa2IYxQ+boHa+56k9KyIB+MbAgquPy7BhZK2BYwcryI7u1rDO7YWjQrDQnX120qxht6DJs1VkUHTXyB6f4q1B+eHPUFQSCXkAiI0n51KEyVtUUvFmPFak4FieMd+InFxlZ2SYnK9330M39OHbJHbF51/2l/1suyWYMC2WI/I9GmSThjdoTnNuSxgCrvwzfDQ74g/3T7j+qHkZ/LGoS+wRxQCAzRL2wen8cE2/iHQ3OPyHwfnyw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(376002)(346002)(39860400002)(451199015)(46966006)(36840700001)(40470700004)(82740400003)(81166007)(36860700001)(54906003)(41300700001)(70586007)(86362001)(8676002)(40460700003)(6916009)(70206006)(6666004)(5660300002)(316002)(40480700001)(2906002)(44832011)(2616005)(1076003)(426003)(336012)(47076005)(83380400001)(7696005)(82310400005)(26005)(107886003)(186003)(8936002)(478600001)(4326008)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:17.6502
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 804971e2-82c8-4155-85b5-08daf2e841d4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB10272

A memory block's NUMA ID is stored in the device tree's memory
nodes as the "numa-node-id" property. Add a new helper to parse
and verify this ID from memory nodes.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1->v2:
1. Move numa_disabled check to fdt_parse_numa_memory_node.
2. Use numa_bad to replace bad_srat.
3. Replace tabs by spaces.
4. Align parameters.
5. Return -ENODATA for a normal DTB without NUMA info.
6. Un-addressed comment:
   "Why not parse numa-node-id and call fdt_numa_memory_affinity_init
   from xen/arch/arm/bootfdt.c:device_tree_get_meminfo. Is it because
   device_tree_get_meminfo is called too early?"
   I checked the device_tree_get_meminfo code and I think the answer
   is similar to my reply to the RFC. I prefer a unified NUMA
   initialization entry and don't want to scatter the NUMA parsing
   code across different places.
7. Use node id as dummy PXM for numa_update_node_memblks.
---
 xen/arch/arm/numa_device_tree.c | 89 +++++++++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index c031053d24..793a410fd4 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -34,6 +34,26 @@ static int __init fdt_numa_processor_affinity_init(nodeid_t node)
     return 0;
 }
 
+/* Callback for parsing of the memory regions affinity */
+static int __init fdt_numa_memory_affinity_init(nodeid_t node,
+                                                paddr_t start, paddr_t size)
+{
+    if ( !numa_memblks_available() )
+    {
+        dprintk(XENLOG_WARNING,
+                "Too many NUMA entries; try a bigger NR_NODE_MEMBLKS\n");
+        return -EINVAL;
+    }
+
+    numa_fw_nid_name = "numa-node-id";
+    if ( !numa_update_node_memblks(node, node, start, size, false) )
+        return -EINVAL;
+
+    device_tree_numa = DT_NUMA_ON;
+
+    return 0;
+}
+
 /* Parse CPU NUMA node info */
 static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 {
@@ -62,3 +82,72 @@ static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 
     return fdt_numa_processor_affinity_init(nid);
 }
+
+/* Parse memory node NUMA info */
+static int __init fdt_parse_numa_memory_node(const void *fdt, int node,
+                                             const char *name,
+                                             unsigned int addr_cells,
+                                             unsigned int size_cells)
+{
+    unsigned int nid;
+    int ret = 0, len;
+    paddr_t addr, size;
+    const struct fdt_property *prop;
+    unsigned int idx, ranges;
+    const __be32 *addresses;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 returns NUMA_NO_NODE when this memory DT
+     * node doesn't have a numa-node-id property. This helps us
+     * distinguish a bad DTB from a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_WARNING "Node id %u exceeds maximum value\n", nid);
+        goto invalid_data;
+    }
+
+    prop = fdt_get_property(fdt, node, "reg", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "fdt: node `%s': missing `reg' property\n", name);
+        goto invalid_data;
+    }
+
+    addresses = (const __be32 *)prop->data;
+    ranges = len / (sizeof(__be32) * (addr_cells + size_cells));
+    for ( idx = 0; idx < ranges; idx++ )
+    {
+        device_tree_get_reg(&addresses, addr_cells, size_cells, &addr, &size);
+        /* Skip zero size ranges */
+        if ( !size )
+            continue;
+
+        ret = fdt_numa_memory_affinity_init(nid, addr, size);
+        if ( ret )
+            goto invalid_data;
+    }
+
+    if ( idx == 0 )
+    {
+        printk(XENLOG_ERR
+               "bad property in memory node, idx=%u ret=%d\n", idx, ret);
+        goto invalid_data;
+    }
+
+    return 0;
+
+invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474443.735675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOo-0008Lv-O9; Tue, 10 Jan 2023 08:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474443.735675; Tue, 10 Jan 2023 08:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOo-0008Ll-K6; Tue, 10 Jan 2023 08:54:26 +0000
Received: by outflank-mailman (input) for mailman id 474443;
 Tue, 10 Jan 2023 08:54:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOn-0005oC-LN
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:25 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2074.outbound.protection.outlook.com [40.107.7.74])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 610301df-90c4-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:54:23 +0100 (CET)
Received: from AS9P251CA0012.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:50f::14)
 by AS8PR08MB6023.eurprd08.prod.outlook.com (2603:10a6:20b:291::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:21 +0000
Received: from AM7EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:50f:cafe::a0) by AS9P251CA0012.outlook.office365.com
 (2603:10a6:20b:50f::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:20 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT020.mail.protection.outlook.com (100.127.140.196) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:20 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Tue, 10 Jan 2023 08:54:20 +0000
Received: from 6c9be4230c91.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B72E062C-9E7F-45B2-933D-CA866276CC5D.1; 
 Tue, 10 Jan 2023 08:54:13 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6c9be4230c91.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:13 +0000
Received: from AM6PR01CA0063.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::40) by GV2PR08MB7956.eurprd08.prod.outlook.com
 (2603:10a6:150:a9::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:09 +0000
Received: from VI1EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::f4) by AM6PR01CA0063.outlook.office365.com
 (2603:10a6:20b:e0::40) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:08 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT017.mail.protection.outlook.com (100.127.145.12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:08 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:00 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 610301df-90c4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DetB4lMCYSGFHbQ+S23R1nmvP6AkNk5bezLoeuNo2NQ=;
 b=CjYW9OA5MiITfPRLeAQJdBtRhIIzXkoXUeLTsylRqosW0eZ5fT1/96spS8RcdECNGzla6LAu1pnAvPEJws6jZTKutIQh25/plhP6g5Q+g2qq6lLH30tl6Z2xbjAY2McqQ7OhYemVvH39LQAyf05iqCUlnd9hdLqPZ2TydNKt76c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 138f2759a09173da
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=k/UZejh/sDhmr1GIIVrikmdsspgU0j0oko0y7Pq0nmGe9UN8Uju5V0Y3Fr0TF7F9NAoQGYHx+b9EO7YznfB0lyAKduICwGn+nyydzUSSeEOtlEW7KTjN/s3SbqpAzCI8Mka+g0jAOePvva6uok91EkDqRIkdO/LFo3u4tSK4G05vq2GdhABgAMGDvMRDpGDXwLV4tTda2s5fiRvbxGKES6BeJhD22w500pY64ERUjqjAIgeYU5Rv1ynzVjP58WlCOGOPo2f9W3dbAHymnBTheeMQHF+Vp48S9S8aeTU2NVHLa5ZrmirPaGOUaX4mLRf5zObTfw9Ll0xDlhK7DY1lWA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DetB4lMCYSGFHbQ+S23R1nmvP6AkNk5bezLoeuNo2NQ=;
 b=h81WCsFZthOa9TZtRLQfeHFDvD/QJEonb5RqdioAp6nVhSMKyWiatu3BlTi/uQvSvnT1EbG9ce9uFGuHFZzbby4gHzIOXEkez4g2lnCmH7DO6YiNkd4caR9mDBDpNqaOKKaxzihBqrcEbIw6udJdjl7AM65kkHtHiTZxscFMUAgnfmgSlJTaP8axt49B7MhvErq/L4TakHSXMcxo3K2pHkzs1YYSHh0mlY5Dv6PijnnubTp2WyXAIm9d4jtIt9TQA5tgHPSLzzVl4Fg2ucDC9ECMbPIUfM8mmEH3+qIwWyD2iJK9NVtZBCt9cpwhBGoVJVatuEbVGdQdlbS9x63lVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DetB4lMCYSGFHbQ+S23R1nmvP6AkNk5bezLoeuNo2NQ=;
 b=CjYW9OA5MiITfPRLeAQJdBtRhIIzXkoXUeLTsylRqosW0eZ5fT1/96spS8RcdECNGzla6LAu1pnAvPEJws6jZTKutIQh25/plhP6g5Q+g2qq6lLH30tl6Z2xbjAY2McqQ7OhYemVvH39LQAyf05iqCUlnd9hdLqPZ2TydNKt76c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 06/17] xen/arm: Add boot and secondary CPU to NUMA system
Date: Tue, 10 Jan 2023 16:49:19 +0800
Message-ID: <20230110084930.1095203-7-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT017:EE_|GV2PR08MB7956:EE_|AM7EUR03FT020:EE_|AS8PR08MB6023:EE_
X-MS-Office365-Filtering-Correlation-Id: 6f6f6376-5ef1-4973-1930-08daf2e84397
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 aS+J3UKHWPXZoOTlJwyBHwOsVJ8zE3h4K0qY36ZEiw2IY+Pf0ED6kCVnVdrfC3unMz2KM62mpTtbZGeTF2tO844Xu5iSI77dqUiNvIcDSksnfQm4W5a54jG2UWxMUqRhN/8y2cbCOO7KzZgxsUYw2OTODNt2KxKSW1GL/dd8BXbVfeWLfkBaYRx1GsyTx3fRSnP0vliGuhPOfm6Uj90dzBoK5WXhM6jK8xHkNp6I/3GkF8FBT2+8LVWzT6V5L7/BdWt5tgkT+NF8lO/XqMsMCzGAcYHBulTSI9sMpTL1WlJq8vq21WNnH6O382DhiAxKjDms9TEI0cVbHKYvAlMXgw5xZKE0MhfvYAB0ehXEJEf+kGwocVHSKRmC+JmALb7HgclQc0zOpr0Y4wK7Z6J47zig7oH9BNc6za4jUITLZS80c41auYKQGMM+hz6vsVPrATH3uWlTn6jSBXUQLEKlneWG/RfUcBmQ42/wumqIqFcQFIxdN3LkROU3Ia/Hm/wOrXTE9n/5MXq17US2Z/fSvC8enSE8W8TaehuJZwu+GH1HHGFGxFFRDer+1e27bxQk3VCzf0xW1TuXpMUUcHKFMzhgAzCss6iuLiwryEdEQxcaENeVT+T6JW7P32ihYAWTNNRO/XJwrbD5IiiQ9jEjesuG9BujlpUW0yoiy0Dd/XrNI23Qro0pJgJD75sZ7dZ0HCEw+/Gtz1XSyi6IeH8GzQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(39860400002)(396003)(136003)(346002)(451199015)(46966006)(36840700001)(83380400001)(426003)(82310400005)(47076005)(316002)(86362001)(54906003)(6916009)(40480700001)(36756003)(7696005)(356005)(81166007)(82740400003)(186003)(336012)(26005)(2616005)(1076003)(2906002)(44832011)(478600001)(6666004)(5660300002)(8936002)(36860700001)(70206006)(41300700001)(4326008)(70586007)(8676002)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB7956
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	aa82329a-87e4-4135-0f00-08daf2e83c93
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GVNyi/5bICXOl0fQBUzMg1x3cmPpCFtJw1r5zASEgFs6re1mKVZzDYlPhnlISFnO20hcGGp9WdGRpt8cBiN5mqbxMTbnibjBg5+A3doKjH7phhCZUAN3EMUTHCGLeiGa37yCYgz9jVIADZHeNS/mCWtZHXV9DcsN220hgGwBG1PbJfvJ6YoUwG5NSZU7JAT4dRQQmENeYUCTtyubhYZ86YOuA4vv68LL+suzLAuLLQzt2SrpTPzWUJKxqlsUOvDEJzxgjkIkiilEHW+DM+PR7vS0e37UGHU2qTL3uL3Mr0cadlgJUqntvn2Xfr/m3CQgzRBh9pfsh/JmwIkRmzMltqFAWablQ2BECaoWBfd7Mtf9BKPm3JCCGJIhbtKC4Jykh/b2j1I+0Xt9pdehD5hNrOzZJl3Rz/NyJcdcUKg/YmSvV5TmVBnwHPum+RE6+FJncXSSfi2yZPej0K685FI3AWGBX+TUSllKkBKvOQ45FV7jp76hiDRQozKq0iTOcRDTSF9/OSN8RvSy1g06dNvf7mGqll1QYiFWXRZAVJNf7zwc8MMEmcWeh8499mhp/Gb2nHCQ+QmtoDIe9CSpROXuRytyWUbhHBZDtSg+CrHT4ITNGbzjlVFUirAgpVRE8VpAVHpVIBobu39YZear4q+wcuzpyfcTqO8AxPdYFNvqxdJRSdfV4KBPN2SPMtcEo74pSs+ya6tk1yQxao1DJwwhYw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(136003)(39860400002)(396003)(451199015)(46966006)(40470700004)(36840700001)(5660300002)(44832011)(8936002)(41300700001)(2906002)(36756003)(54906003)(83380400001)(8676002)(70586007)(70206006)(4326008)(316002)(6916009)(7696005)(26005)(81166007)(186003)(6666004)(86362001)(478600001)(107886003)(336012)(1076003)(2616005)(40480700001)(47076005)(82740400003)(426003)(82310400005)(36860700001)(40460700003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:20.6031
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f6f6376-5ef1-4973-1930-08daf2e84397
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6023

In this patch, we bring each CPU's NUMA node online and add the
CPU to its NUMA node. This gives NUMA-aware components the NUMA
affinity data they need to do their work.

To keep mostly the same behavior as x86, we use
numa_detect_cpu_node to online the node. The difference is that
we have already prepared cpu_to_node in dt_smp_init_cpus, so we
don't need to set up cpu_to_node in numa_detect_cpu_node.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v2 -> v3:
1. Use unsigned int instead of int for cpu id.
2. Use static inline for stub to do type check.
v1 -> v2:
1. Use numa_detect_cpu_node to online node.
2. Use macros instead of static inline functions to stub
   numa_detect_cpu_node.
---
 xen/arch/arm/include/asm/numa.h |  9 +++++++++
 xen/arch/arm/numa.c             | 10 ++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 24 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 3bc28416b4..e0c909cbb7 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -41,6 +41,7 @@ enum dt_numa_status {
 
 extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
+extern void numa_detect_cpu_node(unsigned int cpu);
 
 #else
 
@@ -74,6 +75,14 @@ static inline void numa_set_node(unsigned int cpu, nodeid_t node)
 {
 }
 
+static inline void numa_add_cpu(unsigned int cpu)
+{
+}
+
+static inline void numa_detect_cpu_node(unsigned int cpu)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index dcfcd85fcf..4dd7cf10ba 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -74,6 +74,16 @@ void __init numa_set_distance(nodeid_t from, nodeid_t to,
     node_distance_map[from][to] = distance;
 }
 
+void numa_detect_cpu_node(unsigned int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    node_set_online(node);
+}
+
 unsigned char __node_distance(nodeid_t from, nodeid_t to)
 {
     /* When NUMA is off, any distance will be treated as remote. */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90..8c02cf6cd4 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1115,6 +1115,11 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     for_each_present_cpu ( i )
     {
+        /* Detect and online node based on cpu_to_node[]. */
+        numa_detect_cpu_node(i);
+        /* Set up node_to_cpumask based on cpu_to_node[]. */
+        numa_add_cpu(i);
+
         if ( (num_online_cpus() < nr_cpu_ids) && !cpu_online(i) )
         {
             int ret = cpu_up(i);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474446.735686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOt-0000bL-6k; Tue, 10 Jan 2023 08:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474446.735686; Tue, 10 Jan 2023 08:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOt-0000b3-1n; Tue, 10 Jan 2023 08:54:31 +0000
Received: by outflank-mailman (input) for mailman id 474446;
 Tue, 10 Jan 2023 08:54:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOr-0005s6-Jv
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:29 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2074.outbound.protection.outlook.com [40.107.105.74])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 62a65bd2-90c4-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:54:26 +0100 (CET)
Received: from DB8PR06CA0045.eurprd06.prod.outlook.com (2603:10a6:10:120::19)
 by DBAPR08MB5861.eurprd08.prod.outlook.com (2603:10a6:10:1a3::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:23 +0000
Received: from DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:120:cafe::32) by DB8PR06CA0045.outlook.office365.com
 (2603:10a6:10:120::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT011.mail.protection.outlook.com (100.127.142.132) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:23 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Tue, 10 Jan 2023 08:54:23 +0000
Received: from f4bffd900317.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3AA26FAB-2024-4733-AA56-034F6642273B.1; 
 Tue, 10 Jan 2023 08:54:16 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f4bffd900317.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:16 +0000
Received: from AM6PR0502CA0059.eurprd05.prod.outlook.com
 (2603:10a6:20b:56::36) by AS2PR08MB8973.eurprd08.prod.outlook.com
 (2603:10a6:20b:5f9::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:13 +0000
Received: from AM7EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:56:cafe::93) by AM6PR0502CA0059.outlook.office365.com
 (2603:10a6:20b:56::36) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:13 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM7EUR03FT064.mail.protection.outlook.com (100.127.140.127) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:13 +0000
Received: from AZ-NEU-EX02.Emea.Arm.com (10.251.26.5) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:12 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX02.Emea.Arm.com
 (10.251.26.5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:12 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62a65bd2-90c4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xRix1bFfBrxEVUq1lAzsdtefn4r0M5T6xXFzQdn7bOs=;
 b=TByQ2A56Oj+s/Q6qPo4M1+1wWWTRJP8X2fVKh+IQHP3B/WKRsvf9jZ0VlDv6ES1VuxwSYkTmrlYSM2rmhGWmyKcpf599z/IdUUejg4sHtk0DXfXcQPbq4bJGpYibW9qv9153PefoZpFiCU6SaXKczprLOBeENXiyJcyospIs78s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 85e75a5faba983db
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iQbR3ZzRCvzDAoHp4L9ive+hsvhmUcqg97wd8xHLFRBHE3Jn3bTCmj7Za8dhzUoKW88Qi2CWZKqrsILSci+3yjTPhGpVmEFXMMtQprrYJ+M0ZTvu5GiPg3VUzSqXPEUsXvtxumzkJ6TMppWGroCmPI0xEYlIh7WfD6JcN7u84KTPaL32/dLOPPp9JgurcUwXFYuTV5BuShvQJvJ+K/rlnSl8hVrMoswB4ZIrELS5JoNY1uJJhaC3Px7VeceMkdh3w5slb35hMzFHB52ARVo3vrMvwi8RFtE3Bdf8F+KRpVpQFuX3FVjCAWNuOOO1bJ3j5MdXk8NoJriGVlySeufdXQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xRix1bFfBrxEVUq1lAzsdtefn4r0M5T6xXFzQdn7bOs=;
 b=GYLnOcdNRdhrW+SrPEnQPEETok8PHeTjMS+JWMVVMnv362bV++O/JD2R1SgpR9sJcnEjHA2X/k//u0XOFohBE+9woEiaYne/3gDmOlOLdlLhQ7K6cy9t0darbqAfdaogW3/9x1lh3BgqMn+Y9G2BPuTSwpcDkDIR2cqZ9kvfDFEIuXUAi4wcmFwH9hbZoOzYgqvvpUgqau4PHj63pk7gS00J71Gkzdm4XT7dZUNUOmST/E302/2eZJ0eyXioCFDsVS76zdYtXRbyK+FaZIbnzrwWLHCEMAqour7rQcRBBJn5aTJ7zICVU/Y2RQ2kYVlbYf/wWFxv1xY61j0KFtRzbQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xRix1bFfBrxEVUq1lAzsdtefn4r0M5T6xXFzQdn7bOs=;
 b=TByQ2A56Oj+s/Q6qPo4M1+1wWWTRJP8X2fVKh+IQHP3B/WKRsvf9jZ0VlDv6ES1VuxwSYkTmrlYSM2rmhGWmyKcpf599z/IdUUejg4sHtk0DXfXcQPbq4bJGpYibW9qv9153PefoZpFiCU6SaXKczprLOBeENXiyJcyospIs78s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 10/17] xen/arm: unified entry to parse all NUMA data from device tree
Date: Tue, 10 Jan 2023 16:49:23 +0800
Message-ID: <20230110084930.1095203-11-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	AM7EUR03FT064:EE_|AS2PR08MB8973:EE_|DBAEUR03FT011:EE_|DBAPR08MB5861:EE_
X-MS-Office365-Filtering-Correlation-Id: b7d5f511-aa9a-4fd2-2bbe-08daf2e84513
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8973
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5fb67014-4de0-43c8-fbb0-08daf2e83f34
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:23.1565
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b7d5f511-aa9a-4fd2-2bbe-08daf2e84513
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5861

In this function, we scan the whole device tree to parse the CPU node
ids, the memory node ids and the distance-map. early_scan_node already
invokes a handler to process memory nodes, but parsing the memory node
ids in that handler would mean embedding NUMA parsing code there, while
we would still have to scan the whole device tree for the CPU node ids
and the distance-map. So we parse the memory node ids in this function
as well. Another benefit is that this gives us a single entry point for
parsing all NUMA data from the device tree.
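
For reference, the kind of device tree data this scan consumes looks
roughly like the following (an illustrative fragment following the
generic devicetree NUMA binding; the node ids, addresses and distances
are made up):

```dts
cpus {
    #address-cells = <1>;
    #size-cells = <0>;

    cpu@0 {
        device_type = "cpu";
        reg = <0x0>;
        numa-node-id = <0>;
    };
};

memory@80000000 {
    device_type = "memory";
    reg = <0x0 0x80000000 0x0 0x80000000>;
    numa-node-id = <0>;
};

distance-map {
    compatible = "numa-distance-map-v1";
    /* <from-node to-node distance> triplets */
    distance-matrix = <0 0 10  0 1 20  1 0 20  1 1 10>;
};
```

The "device_type" property selects the cpu/memory parsers, while the
"numa-distance-map-v1" compatible selects the distance-map parser, which
matches the three branches in fdt_scan_numa_nodes below.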

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1->v2:
1. Fix typos in commit message.
2. Fix code style and align parameters.
3. Use strncmp instead of memcmp.
---
 xen/arch/arm/include/asm/numa.h |  1 +
 xen/arch/arm/numa_device_tree.c | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 923ffbfd42..1213d30ce0 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -44,6 +44,7 @@ extern enum dt_numa_status device_tree_numa;
 extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
 extern void numa_detect_cpu_node(unsigned int cpu);
+extern int numa_device_tree_init(const void *fdt);
 
 #else
 
diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 88056e7ef8..4009b9b6de 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -258,3 +258,33 @@ invalid_data:
     numa_fw_bad();
     return -EINVAL;
 }
+
+static int __init fdt_scan_numa_nodes(const void *fdt, int node,
+                                      const char *uname, int depth,
+                                      unsigned int address_cells,
+                                      unsigned int size_cells, void *data)
+{
+    int len, ret = 0;
+    const void *prop;
+
+    prop = fdt_getprop(fdt, node, "device_type", &len);
+    if ( prop )
+    {
+        if ( strncmp(prop, "cpu", len) == 0 )
+            ret = fdt_parse_numa_cpu_node(fdt, node);
+        else if ( strncmp(prop, "memory", len) == 0 )
+            ret = fdt_parse_numa_memory_node(fdt, node, uname,
+                                address_cells, size_cells);
+    }
+    else if ( fdt_node_check_compatible(fdt, node,
+                                        "numa-distance-map-v1") == 0 )
+        ret = fdt_parse_numa_distance_map_v1(fdt, node);
+
+    return ret;
+}
+
+/* Initialize NUMA from device tree */
+int __init numa_device_tree_init(const void *fdt)
+{
+    return device_tree_for_each_node(fdt, 0, fdt_scan_numa_nodes, NULL);
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:54:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:54:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474450.735697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOv-0000yq-HB; Tue, 10 Jan 2023 08:54:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474450.735697; Tue, 10 Jan 2023 08:54:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAOv-0000yX-CQ; Tue, 10 Jan 2023 08:54:33 +0000
Received: by outflank-mailman (input) for mailman id 474450;
 Tue, 10 Jan 2023 08:54:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOt-0005s6-KG
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:31 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2080.outbound.protection.outlook.com [40.107.21.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64a89fc5-90c4-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:54:29 +0100 (CET)
Received: from AS9PR0301CA0031.eurprd03.prod.outlook.com
 (2603:10a6:20b:469::31) by DBBPR08MB5994.eurprd08.prod.outlook.com
 (2603:10a6:10:20d::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:28 +0000
Received: from AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:469:cafe::b8) by AS9PR0301CA0031.outlook.office365.com
 (2603:10a6:20b:469::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT005.mail.protection.outlook.com (100.127.140.218) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:27 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Tue, 10 Jan 2023 08:54:27 +0000
Received: from bf9eea3806c2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 09505C98-CD7C-42E8-8219-4CA26EC3D753.1; 
 Tue, 10 Jan 2023 08:54:22 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bf9eea3806c2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:22 +0000
Received: from FR3P281CA0138.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:95::17)
 by PAVPR08MB9796.eurprd08.prod.outlook.com (2603:10a6:102:2f8::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:19 +0000
Received: from VI1EUR03FT039.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:95:cafe::60) by FR3P281CA0138.outlook.office365.com
 (2603:10a6:d10:95::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:19 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT039.mail.protection.outlook.com (100.127.144.77) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:18 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:17 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64a89fc5-90c4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0tafurYy+S0foZPSXVs4MXxTgGOhH6a719Sg+5VUNEA=;
 b=5/AeyROb+LeKOthAZWhDuLaaOHSdhHoqXe89J+OFwYmqaEHJbQ/p26ThxEdc7/oeHPuOc2WIbPW3IHdbJMS0SvF18Vbd+UMyH1PGVR6Yzq1OFIiafVYEWOnCoPZYnm9uDaCatD3x81Uh2oP/4KHrvvQnA9UgBFTO4xlJG0iIw3Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 827b35e3b0659489
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Yd640jTkooDsJVKnYpwGL2IyfeIc+q89tWKjGoRO2+++EWCtkw0cL5hOIvZ4TA9Oswvxi9QZYzLw5m73tnIV/FNqnfLkD5k54IpA33ovMJQytgg6n43eMZ20W+43lOCkWbNhs1YI4gDMz1O0TmQWyzXRjL7TJuvt4A7lLqHnlhJbb2wI2x3L5iJjYw8fSjw1M6R274S76TuxjHLtIxlGypGK8yEQCZNizc11d0Z9GIZfZMOwjit6EAHk8aNpiC71GVNyR74jwilKOHFI9E8+ocCKHC/FJy5f/1eVmIdDEEiQh/iaM3k6SrOp55qd1gQ9N/47akDEs2N3y2aHO1p6/g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0tafurYy+S0foZPSXVs4MXxTgGOhH6a719Sg+5VUNEA=;
 b=nsX61tnhkj6pgZa0K1lwVigNsujVBQSGZLIL+b1pqLR3UVX5bK3GejX/Xr1HJkvkxHZmZ3SZbyfGMg9qzb1qQtxbDqDCIEVQwE6/Xb9n7zHMohOVJBHRtvHhrx9T68KcUvC3vk/wFXclGyAj/4bu84rr8iCWuv9ciX3go+eNbrhDY6u6ifpEB738nOiDLDz7FMf8/dzU6EbYx56W1NVj69yjoaQOL11EgBmLKyJKwClLLNXgY4HpW1p9uJn/F2MsV4k6VClknhWz0DM9UeY05Z7xAgSqEJlxBWUhiE1ouXeIZLywVzj/NqWZrYAlxBmVTsQQu5sgPgEv4qZZs9mUXQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0tafurYy+S0foZPSXVs4MXxTgGOhH6a719Sg+5VUNEA=;
 b=5/AeyROb+LeKOthAZWhDuLaaOHSdhHoqXe89J+OFwYmqaEHJbQ/p26ThxEdc7/oeHPuOc2WIbPW3IHdbJMS0SvF18Vbd+UMyH1PGVR6Yzq1OFIiafVYEWOnCoPZYnm9uDaCatD3x81Uh2oP/4KHrvvQnA9UgBFTO4xlJG0iIw3Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 12/17] xen/arm: enable device tree based NUMA in system init
Date: Tue, 10 Jan 2023 16:49:25 +0800
Message-ID: <20230110084930.1095203-13-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT039:EE_|PAVPR08MB9796:EE_|AM7EUR03FT005:EE_|DBBPR08MB5994:EE_
X-MS-Office365-Filtering-Correlation-Id: 138bac64-1cf9-491f-337e-08daf2e847fe
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9796
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	df9ab331-527d-4f18-6fef-08daf2e84297
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:27.9897
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 138bac64-1cf9-491f-337e-08daf2e847fe
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB5994

With this patch, we can start to create a NUMA system that is based on
the device tree.
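
The RAM range computation that numa_init() performs over the boot
memory banks can be sketched in plain C (a minimal sketch: the bank
values are made-up examples, and paddr_t stands in for Xen's physical
address type):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct membank {
    paddr_t start, size;
};

/* Compute the [start, end) hull over all RAM banks, as numa_init() does. */
static void ram_range(const struct membank *bank, unsigned int nr,
                      paddr_t *ram_start, paddr_t *ram_end)
{
    unsigned int i;
    paddr_t start = bank[0].start;
    paddr_t end = bank[0].start + bank[0].size;

    for ( i = 1; i < nr; i++ )
    {
        paddr_t bank_start = bank[i].start;
        paddr_t bank_end = bank_start + bank[i].size;

        if ( bank_start < start )
            start = bank_start;
        if ( bank_end > end )
            end = bank_end;
    }

    *ram_start = start;
    *ram_end = end;
}
```

The resulting range is what gets handed (as frame numbers) to
numa_initmem_init() to build the memory-to-node mapping table.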

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1->v2:
1. Replace ~0 with INVALID_PADDR.
2. Only print error messages for invalid DTB data.
3. Remove an unnecessary return.
4. Remove the parameter of numa_init.
---
 xen/arch/arm/include/asm/numa.h |  5 +++
 xen/arch/arm/numa.c             | 57 +++++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c            |  7 ++++
 3 files changed, 69 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 1213d30ce0..34eec9378c 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -45,6 +45,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
+extern void numa_init(void);
 
 #else
 
@@ -86,6 +87,10 @@ static inline void numa_detect_cpu_node(unsigned int cpu)
 {
 }
 
+static inline void numa_init(void)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 3e02cec646..e9081d45ce 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -18,7 +18,11 @@
  *
  */
 #include <xen/init.h>
+#include <xen/device_tree.h>
+#include <xen/nodemask.h>
 #include <xen/numa.h>
+#include <xen/pfn.h>
+#include <xen/acpi.h>
 
 enum dt_numa_status __read_mostly device_tree_numa;
 
@@ -103,6 +107,59 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
 
 EXPORT_SYMBOL(__node_distance);
 
+void __init numa_init(void)
+{
+    unsigned int idx;
+    paddr_t ram_start = INVALID_PADDR;
+    paddr_t ram_size = 0;
+    paddr_t ram_end = 0;
+
+    /* NUMA has been turned off through Xen parameters */
+    if ( numa_off )
+        goto mem_init;
+
+    /* Initialize NUMA from device tree when system is not ACPI booted */
+    if ( acpi_disabled )
+    {
+        int ret = numa_device_tree_init(device_tree_flattened);
+        if ( ret )
+        {
+            numa_off = true;
+            if ( ret == -EINVAL )
+                printk(XENLOG_WARNING
+                       "Init NUMA from device tree failed, ret=%d\n", ret);
+        }
+    }
+    else
+    {
+        /* We don't support NUMA for ACPI boot currently */
+        printk(XENLOG_WARNING
+               "ACPI NUMA has not been supported yet, NUMA off!\n");
+        numa_off = true;
+    }
+
+mem_init:
+    /*
+     * Find the minimum and maximum address of RAM; NUMA will
+     * build a memory-to-node mapping table for the whole range.
+     */
+    ram_start = bootinfo.mem.bank[0].start;
+    ram_size  = bootinfo.mem.bank[0].size;
+    ram_end   = ram_start + ram_size;
+    for ( idx = 1 ; idx < bootinfo.mem.nr_banks; idx++ )
+    {
+        paddr_t bank_start = bootinfo.mem.bank[idx].start;
+        paddr_t bank_size = bootinfo.mem.bank[idx].size;
+        paddr_t bank_end = bank_start + bank_size;
+
+        ram_size  = ram_size + bank_size;
+        ram_start = min(ram_start, bank_start);
+        ram_end   = max(ram_end, bank_end);
+    }
+
+    numa_initmem_init(PFN_UP(ram_start), PFN_DOWN(ram_end));
+}
+
 int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
 {
     if ( idx >= bootinfo.mem.nr_banks )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 8c02cf6cd4..4cdc7e2edb 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1031,6 +1031,13 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Parse the ACPI tables for possible boot-time configuration */
     acpi_boot_table_init();
 
+    /*
+     * Try to initialize the NUMA system. If this fails, the
+     * system will fall back to a uniform system, which means
+     * the system has only one NUMA node.
+     */
+    numa_init();
+
     end_boot_allocator();
 
     /*
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:55:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:55:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 17/17] docs: update numa command line to support Arm
Date: Tue, 10 Jan 2023 16:49:30 +0800
Message-ID: <20230110084930.1095203-18-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The `numa` command line option is currently documented as x86-only. Remove
the x86 architecture annotation from the option now that Arm supports it
as well.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Update Arm NUMA status in SUPPORT.md to "Tech Preview".
---
 SUPPORT.md                        | 1 +
 docs/misc/xen-command-line.pandoc | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index 295369998e..b73d04a028 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -401,6 +401,7 @@ on embedded platforms and the x86 PV shim.
 Enables NUMA aware scheduling in Xen
 
     Status, x86: Supported
+    Status, Arm: Tech Preview
 
 ## Scalability
 
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 923910f553..40da980b67 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1878,7 +1878,7 @@ i.e. a limit on the number of guests it is possible to start each having
 assigned a device sharing a common interrupt line.  Accepts values between
 1 and 255.
 
-### numa (x86)
+### numa
 > `= on | off | fake=<integer> | noacpi`
 
 > Default: `on`
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:55:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:55:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Henry Wang <Henry.Wang@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 15/17] xen/arm: Set correct per-cpu cpu_core_mask
Date: Tue, 10 Jan 2023 16:49:28 +0800
Message-ID: <20230110084930.1095203-16-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

From: Henry Wang <Henry.Wang@arm.com>

In the common sysctl command XEN_SYSCTL_physinfo, cores_per_socket is
calculated from the cpu_core_mask of CPU0. Currently on Arm this is a
fixed value of 1 (as can be checked via `xl info`), which is not correct.
This is because, during the Arm CPU online process,
set_cpu_sibling_map() only sets the per-cpu cpu_core_mask for the CPU
itself.

cores_per_socket refers to the number of cores that belong to the same
socket (NUMA node). Therefore, this commit introduces a helper function,
numa_set_cpu_core_mask(cpu), which sets the per-cpu cpu_core_mask to the
CPUs in the same NUMA node as cpu. Calling this function at boot time
ensures a correct cpu_core_mask, and hence a correct cores_per_socket
value returned by XEN_SYSCTL_physinfo.
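The fix described above can be sketched in Python (the topology, helper
names, and NUMA_NO_NODE value below are invented for illustration; this is
not the Xen implementation):

```python
NUMA_NO_NODE = 0xFF  # sentinel for a CPU with no known node (assumed value)

# Invented topology: 8 CPUs split evenly across 2 NUMA nodes.
cpu_to_node = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}

def node_to_cpumask(node):
    """Return the set of CPUs belonging to `node` (mirrors node_to_cpumask)."""
    return {c for c, n in cpu_to_node.items() if n == node}

def set_cpu_core_mask(cpu, core_masks):
    """OR the CPUs of `cpu`'s node into its core mask, as the patch does."""
    node = cpu_to_node.get(cpu, NUMA_NO_NODE)
    if node == NUMA_NO_NODE:
        node = 0
    core_masks[cpu] |= node_to_cpumask(node)

# Before the fix each mask would contain only the CPU itself; after calling
# the helper for every online CPU, each mask covers its whole node.
core_masks = {c: set() for c in cpu_to_node}
for c in cpu_to_node:
    set_cpu_core_mask(c, core_masks)

# XEN_SYSCTL_physinfo derives cores_per_socket from CPU0's core mask.
cores_per_socket = len(core_masks[0])
```

With the per-node union in place, CPU0's mask contains all four CPUs of
node 0, so cores_per_socket is 4 rather than the fixed 1 seen before.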

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v1 -> v2:
1. New patch

---
 xen/arch/arm/include/asm/numa.h |  7 +++++++
 xen/arch/arm/numa.c             | 11 +++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 23 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index a0b8d7a11c..e66fb0a11f 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -46,6 +46,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
+extern void numa_set_cpu_core_mask(int cpu);
 
 /*
  * Device tree NUMA doesn't have architecural node id.
@@ -62,6 +63,12 @@ static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
 #define cpu_to_node(cpu) 0
 #define node_to_cpumask(node)   (cpu_online_map)
 
+static inline void numa_set_cpu_core_mask(int cpu)
+{
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &cpu_possible_map);
+}
+
 /*
  * TODO: make first_valid_mfn static when NUMA is supported on Arm, this
  * is required because the dummy helpers are using it.
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index e9081d45ce..ef245e39a8 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -52,6 +52,17 @@ int __init arch_numa_setup(const char *opt)
     return -EINVAL;
 }
 
+void numa_set_cpu_core_mask(int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &node_to_cpumask(node));
+}
+
 void __init numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 4cdc7e2edb..d45becedee 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1136,6 +1136,11 @@ void __init start_xen(unsigned long boot_phys_offset,
     }
 
     printk("Brought up %ld CPUs\n", (long)num_online_cpus());
+
+    /* Set per-cpu cpu_core_mask to cpus that belong to the same NUMA node. */
+    for_each_online_cpu ( i )
+        numa_set_cpu_core_mask(i);
+
     /* TODO: smp_cpus_done(); */
 
     /* This should be done in a vpmu driver but we do not have one yet. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:55:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:55:19 +0000
 2023 08:54:23 +0000
Received: from VI1EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a2:cafe::fa) by FR3P281CA0152.outlook.office365.com
 (2603:10a6:d10:a2::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:23 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT037.mail.protection.outlook.com (100.127.145.14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:22 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:20 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a8f16e9-90c4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A2FodgFJQlHN+jEOT1icBJJm4ueZE52nEYnjQXW8aSU=;
 b=5ngaVJMMfdKPRJlMeUfRk87msXv9iG3rhV8PHtIs0yLFbAQw+JyiFdmkhk9lmlnZDxvL0knByp0c8fDtUfkBtPGsDImLyA73TktN8cvfK4V+dBTGMkRW+Pb/DYrWdnfj1leJotguqtOh3r6CAM5ZkpdiEDXC3Qh4PeqOt3KzZFY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 48f0738268577f07
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Gujj+nN2Tey337Kxfu8v7LxzA1GHQGNgyOxScgmPs4VFlQSubR3PJqovA6W8jzFw1QJvS8hRPa2o4Bl0AseM1RIa2b65qX7NkR0xpaVo7yu8o8TDR8E/cvaiRfoCUVN5mBLp/r7Ss53ed1FA+2fxNeLw9+xPpBInTUxhvZOe4l2oI1W/VkNhNiL9jIrPG+zgGoncNTMpVNNiHcKmCrqAZDAX4ysxhwDzao8/G/gBwulqiC8PEYiXejY/RbeXTDy4h3gfr6QIs3u2dU7dfcFmcEQaizfLQ1+qlkUWQgOUV8VBra5dTs6GaVFb0kGdHG+30ez70uHnSei4VxaADegBZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A2FodgFJQlHN+jEOT1icBJJm4ueZE52nEYnjQXW8aSU=;
 b=eOcBKIioqxKMO2/XlvTtP7J6moNy8nfLr9220mQ29ujOm8cTT8YKQXZtkC3PID2Y7y+9d40qmDPsNVjnObb96n2Lz9rl5QRwnRJuM5Sgl7ZOmClXvnU5s2qhHZwW/I/lV9YAETaJ4lxclLwh1PulVG+d3EhT3/IoMToyMtaX62+prFI5ULh3gARcl5hMnZh+n4UIs6iREgG6ECu8Gui7zXrh4/0ICWcEBqWLIiBNeE0BLCyXEot2PBxj68S+ZhfjUWrz/G/Y7mq/VWqntoBhpX3w/0//5d9dLLk0iCeHFu5UxH3FEtG8Y+C/35XDHve4foKfV9CUsMxIhjowK5EsHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A2FodgFJQlHN+jEOT1icBJJm4ueZE52nEYnjQXW8aSU=;
 b=5ngaVJMMfdKPRJlMeUfRk87msXv9iG3rhV8PHtIs0yLFbAQw+JyiFdmkhk9lmlnZDxvL0knByp0c8fDtUfkBtPGsDImLyA73TktN8cvfK4V+dBTGMkRW+Pb/DYrWdnfj1leJotguqtOh3r6CAM5ZkpdiEDXC3Qh4PeqOt3KzZFY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 13/17] xen/arm: implement numa_node_to_arch_nid for device tree NUMA
Date: Tue, 10 Jan 2023 16:49:26 +0800
Message-ID: <20230110084930.1095203-14-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT037:EE_|AM9PR08MB5972:EE_|AM7EUR03FT013:EE_|AS2PR08MB9546:EE_
X-MS-Office365-Filtering-Correlation-Id: f51d712c-b003-4132-0f9a-08daf2e849c5
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 KRxgqa5aoyZuuqh55H8XFhW9AR9ToiSWX1xcdhlzXNSg8e2OnrXSELdc9ro+dfZrIeaDHQQfKY6jdl/hCHOfY9KTC7y+eBTH66fwp2yNFOZKgnBVfTYrFPQA2hKxFfYezNJogunfDSLH7gdAOaxgd12hoNYDg5xEKMBmO4kQoK19ihwUlgjZOoO3CaL4pE/9vAzPHzXP7oWFFN2Z+GYJujreylGNLJuUvPs1M5ywaXcPFA+AmS8lXgwu9yxiFkZQpBLAmkCL/8tw7P9O4UhH/9TBSAsE2li4WA9WSWOAt+frdMgoGqqtr13xaOU7w/KZ3jutEHKFbQzlxCXGXPqcRkiQj1HTBlNj0XcHOyxiZ9mrMh9U4G32ufaRBytgSeActm0RJ4iXOBD/LotuXkS1tkefsylm7b1INBzjHdTOgzR0mh+JxBQWPRI2GxX4ugpPU0x5e6Q8rL7Ujz51v+jcvWnr525AxdH9P8U4M5ZKhGLv5ioPHx+bAMyLkS5C1Py4O5t6d5qCPYXOhyqC4abojlK1JF9w8W2uvc/dgI8ndQM5VhHyWsedHVtDH4m/sDpX5PT1b5gjI10hnieKfAAAy4hW9yvxlsQqqGo5ySIfwkBBEc6GbK9T2ahrVIL1dEqi+XqvHZCaLfBvNHV/V2f0B8CM4OAAtQesZqgKjxaf2zXOaNQG5P+66t5ZbJVdBJ7c88GH2QhP2Q7zFKKirYq5Vw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(136003)(39860400002)(396003)(451199015)(46966006)(36840700001)(5660300002)(4744005)(44832011)(8936002)(41300700001)(2906002)(36756003)(54906003)(83380400001)(8676002)(70586007)(70206006)(4326008)(316002)(6916009)(7696005)(26005)(81166007)(186003)(6666004)(86362001)(478600001)(336012)(1076003)(2616005)(40480700001)(47076005)(82740400003)(356005)(426003)(82310400005)(36860700001)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5972
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fe8c08a8-489c-4652-ba7a-08daf2e844b3
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jssQLk84BYGbldLfDpzDNyi8hXCoJrKgNGph29yW253pBcFp8/j/11QDHeiM0ReBjjJHXFD2u7rz5vMTt1mITL6TYSBgKiNPMvim7Tju1F/hshB7lAsU90AlIaLuRX3Z6QEt9M6XV+DJNXTala26ADwGlCF49LpeJH1FO5jZ9Byerpts5v7vzs6INWnVw2srBvvA83bparPt04yodbjQbXEwMPqRQwa1hhw/IUK4WBixGdlnykCoIhQsOh8hjse1qi8lQKjofCAtmDOsNn22fcsOIXJncqydGh7fyxTH7czdF3UShQ5ulOMD+I9VoMWA9ANb0M6WLx7XsSsLn9x1iMSFtLjLJXG/V1xWY1MBYmKOdXzGPKFuOOSz/mNkuxML00oZggh8L/oWdyG7Pp8pJ7ChAuItRyzRm2fFpAkCulZ2vDRCyzG8h4EFSQETrtxSpk+15zlEPYsQmBAYGz5VYV1NZce+6P6vqTOZWesUbCgNZk4D7Wwd0mE1FJD6RDFsk8uvuumSr7TlV04tjSDaoQNl9JCyyeSXIeCCA2jt8VeTevL0DAcShX5fm2iScZ389gd4dbwecRMhIcByhGlP4D0JKWumK3tQANvlywQNxDuQtt3C0ECevmaE/xHHQobzZb56TGbtyvjIyJvBMULdh2wUaRdZ/AfkmkFOzSENRUXDvBgKbPmYVgLMa5Y/0ivcoxGD1nHlaj1UwWgmZ/HHHA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(39860400002)(136003)(376002)(346002)(451199015)(46966006)(36840700001)(40470700004)(36860700001)(1076003)(86362001)(5660300002)(4744005)(44832011)(186003)(2906002)(26005)(36756003)(40480700001)(7696005)(6666004)(83380400001)(478600001)(40460700003)(107886003)(70586007)(8676002)(4326008)(41300700001)(336012)(70206006)(47076005)(54906003)(81166007)(426003)(6916009)(316002)(82740400003)(8936002)(82310400005)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:30.9707
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f51d712c-b003-4132-0f9a-08daf2e849c5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9546

Device-tree-based NUMA doesn't have a proximity domain like ACPI
does, so we can return the node id directly as the arch nid.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Use numa_node_to_arch_nid instead of dummy node_to_pxm.
---
 xen/arch/arm/include/asm/numa.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 34eec9378c..a0b8d7a11c 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -47,6 +47,15 @@ extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
 
+/*
+ * Device tree NUMA doesn't have an architectural node id,
+ * so we can just return the node id as the arch nid.
+ */
+static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
+{
+    return n;
+}
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:55:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:55:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474478.735741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPh-0003t7-7W; Tue, 10 Jan 2023 08:55:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474478.735741; Tue, 10 Jan 2023 08:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPh-0003st-42; Tue, 10 Jan 2023 08:55:21 +0000
Received: by outflank-mailman (input) for mailman id 474478;
 Tue, 10 Jan 2023 08:55:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAP0-0005oC-BY
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:38 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2047.outbound.protection.outlook.com [40.107.7.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 682860ce-90c4-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:54:35 +0100 (CET)
Received: from AM5PR0601CA0030.eurprd06.prod.outlook.com
 (2603:10a6:203:68::16) by DU0PR08MB7437.eurprd08.prod.outlook.com
 (2603:10a6:10:354::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:34 +0000
Received: from AM7EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:68:cafe::b5) by AM5PR0601CA0030.outlook.office365.com
 (2603:10a6:203:68::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT006.mail.protection.outlook.com (100.127.141.21) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:33 +0000
Received: ("Tessian outbound baf1b7a96f25:v132");
 Tue, 10 Jan 2023 08:54:33 +0000
Received: from 43b39c5aa916.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5A10E8F6-007D-4BC4-A4BA-549E0BED9E3A.1; 
 Tue, 10 Jan 2023 08:54:26 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 43b39c5aa916.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:26 +0000
Received: from AM6PR10CA0030.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:89::43)
 by PAVPR08MB9139.eurprd08.prod.outlook.com (2603:10a6:102:30c::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:24 +0000
Received: from AM7EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:89:cafe::e3) by AM6PR10CA0030.outlook.office365.com
 (2603:10a6:209:89::43) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:24 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM7EUR03FT035.mail.protection.outlook.com (100.127.141.24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:24 +0000
Received: from AZ-NEU-EX02.Emea.Arm.com (10.251.26.5) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:23 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX02.Emea.Arm.com
 (10.251.26.5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:23 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 682860ce-90c4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OOWzVhGTnAA1jwh+NIzceBycXKs2uJrzXGAp2en5qCg=;
 b=ql6UJYSdbgZcWz8+l6VSXPK8eTaGaH0KPHn8TkWej7SiQQ8JTlaaYdkZ+9SxkLeSrGxxrowXNIyLy0GAkgC9BOPwhrehnbMgAvoMz0g3Pw6MxufzQVSPCn2kPsHGeYE8ulkfRvy1AmglSoWk+hxHw5Qet+b+sDCIziRfUUnxx08=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: f238819d235296e5
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gd7GKiDkGt9PFTjucYpmx4GyWQ3ML7j4he/JrXK4WgtoV1sgjMSOxLMl7MXYot9g+Dv+3Uq1/Z9JTgcPKqf3Sqi5sFtZ6wEdZ527Vsn6AklmcXqPHt/OZmg1IMck7X3PlkRWDqNCLPmH7mzE9n+XiJKUQGwPJ3u4VgXiGnbsfMB6tgqL/XQqiKmbqfM4gmem5Ym106HQm7Txlw9hwxJp2tjaP5NiuCjUY3qy8YGmhsvYuHRawjzZKmV4TOTSH4ifkOowalHzLG1biIbN26teZ3uP7RNFx9s8MU+E5xkSKJNq9zQbtcAgA/wSF1sb5BODpmGw9YD+xcnG9xNZYElf3A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OOWzVhGTnAA1jwh+NIzceBycXKs2uJrzXGAp2en5qCg=;
 b=At+zLhsyvGZFQ4O3UhMi+eTK/YWeXLjnUQZrVxPZBqtBKyY38ImSGDFOBYtneWUvvZB8r4kUyulW571NAL8jsdM1JpelPO8nkATmg8ELekBK2KMnXHP5igqp7wb1W0dAcKeQZZEr4SfYx3kPd+Te8Ab3BSNlQw59DWtvZxnujsfWR3QEoOdoZuguGpVCa7dbiq3D+WsSvlSictCwBHhC9Ac5A4He+52Qu8IHYpF9YKf6tsCqVwaEtA8w16onO+t5pfSePTK8WP7LsRdsbrzo5b1wkmmHnmATfgeg/ejh52s1Qrj+I9mv5aFt9ovYFhR/xOBGv/XSSudbDwQEWUWykQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OOWzVhGTnAA1jwh+NIzceBycXKs2uJrzXGAp2en5qCg=;
 b=ql6UJYSdbgZcWz8+l6VSXPK8eTaGaH0KPHn8TkWej7SiQQ8JTlaaYdkZ+9SxkLeSrGxxrowXNIyLy0GAkgC9BOPwhrehnbMgAvoMz0g3Pw6MxufzQVSPCn2kPsHGeYE8ulkfRvy1AmglSoWk+hxHw5Qet+b+sDCIziRfUUnxx08=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 14/17] xen/arm: use CONFIG_NUMA to gate node_online_map in smpboot
Date: Tue, 10 Jan 2023 16:49:27 +0800
Message-ID: <20230110084930.1095203-15-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	AM7EUR03FT035:EE_|PAVPR08MB9139:EE_|AM7EUR03FT006:EE_|DU0PR08MB7437:EE_
X-MS-Office365-Filtering-Correlation-Id: 4411b4f5-03de-41d3-14b1-08daf2e84b7a
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 yzynJM8x5/DVIwBjxxTpsnQWlFGNjs6xlB4sbUSGq4TipO54VwCaiNPESxMAnowximu87EVKQ9KSighewqWb1h6CA8vzQN2OGqcEommpKw+pXaJcxhdUac8Ly1/Ye6Tf44cNl6/HrWRCFwh4hK8mgaGqdOEd8ibYUS/wBm5ao8r8XIsCYg65OWmCObl2btkvRHqgp8784WTHhIOR9be9f3GYWfvohYwbm3wVwUKmpx95vYzJGWMxj860dgSHxQTR8A2YTbyheddY6jyaor02WwRir91wMj+QQCzYf/+aWMUikCN8/6DHez3qBh+Y4JKjZANUiGh1U1dj2sVlpfaxgGLLKivxSzkcsgRVc+u+oCKobrEUltI+RBf+jwKmLksPMX+mo/XnIL6j9ss5xbqximchZI7DnnNbOqIMzDYA7UFVKVX7VYjngRKPvGktC+NtUgXavGs+RRv5S406S7CT3esqJX4lGOTVrI+pBPA+GrOSyKiDdC5gYN4kNDBXLikbWClOId1t6lhcfTzJmP1pHfyofDbKReUOLTjGUXQGeoYEN9v/cc6OCY/twQHft88/Nq/KacpEEIFfk4vFh2dlRmXSkJLEbzZFuulOspslr11XxxdlgfqZXznyl0BDeBYJHUo9qlosllmR6JD1bkfU2/A+vYSNPv7yHiKNQyFBJeXhNsp+u2CfyXkl5PwcGOCi2xVG3Gq7WbRbNxdJPrkDPg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(346002)(376002)(396003)(136003)(451199015)(46966006)(36840700001)(356005)(81166007)(82740400003)(7696005)(86362001)(6916009)(478600001)(40480700001)(316002)(54906003)(4744005)(8676002)(2906002)(44832011)(70586007)(41300700001)(70206006)(4326008)(426003)(336012)(8936002)(36860700001)(47076005)(83380400001)(26005)(2616005)(186003)(82310400005)(1076003)(5660300002)(36756003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9139
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ba758de7-8bd5-4a2e-969f-08daf2e845e1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NNfJFR1/W4tHDaJY+Lk9QP3yBA+yjex+vi7DT3afDPklW5c8LfUZtJDYWlcqJthoTJNdaq775ZB5g6q8wv8NZUCs7X4am5hTkkgxxWpHAA19/ju7sqd4AbjXslfmitJ44typlTJhBCqUx8TrPDoR1hN63ZloBeVfPgdMCuD9RFXlWFWNCwpOjaRWxeijDEAGwS7Wy494hZOm8l41pQY3LnAaQV3F52eqNLr8yd46iivG3Qvwq/RVSUnBoztN3lszgzZZU6NWEw2JPFWs3Z/GktmrQ0LZPXHccw0uRcxAbT5bPZThrhcqs4lH1UkdTqaMNz+/efXeKfXOqwY1K/hs+9NcjnVXy324AOL2+paTovM+kvxHtU/2mq5+0rX65FJAkarP1r+XTIZ3pIfj4/OHcuqb3QaFucfy4DnYBEbcPkmrziaJpRUdBEU0+n5HjkurQbojTnNF0xyFiOO4ds/AEc68vMC/9+1HcscGsAgfwqESgPGi8nBhRgkcv0xRlLqQT0Asg1AfZH8utZ43e4p3ErerglgDm/h9OlQjTUIb67jfn3H1G57QpqnvQr+Lh59xo4h8AUdxOOgM1dZef3uHTShp80wlGNpAw55ni4b33tIkIRVscnvBEYiCrJxLHl+y+HTxCP08/OhZc2rDqEfETvhAay0rfXJQaYnl6GU1qjBmscXt5xXGNIi1xEhJPet/VA7FJPmEbzDMyEn4oNXUtw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(136003)(346002)(396003)(376002)(39860400002)(451199015)(40470700004)(46966006)(36840700001)(82310400005)(6916009)(83380400001)(36860700001)(478600001)(5660300002)(44832011)(4744005)(86362001)(36756003)(2906002)(40460700003)(316002)(54906003)(4326008)(82740400003)(26005)(186003)(336012)(8936002)(2616005)(1076003)(7696005)(40480700001)(426003)(107886003)(70586007)(8676002)(70206006)(41300700001)(47076005)(81166007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:33.8347
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4411b4f5-03de-41d3-14b1-08daf2e84b7a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7437

node_online_map in smpboot is still needed for Arm when NUMA is
turned off by Kconfig.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. No change.
---
 xen/arch/arm/smpboot.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 5ee6ab11e9..3ae359bbeb 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -41,8 +41,10 @@ integer_param("maxcpus", max_cpus);
 /* CPU logical map: map xen cpuid to an MPIDR */
 register_t __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = MPIDR_INVALID };
 
+#ifndef CONFIG_NUMA
 /* Fake one node for now. See also asm/numa.h */
 nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
+#endif
 
 /* Xen stack for bringing up the first CPU. */
 static unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:55:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474482.735752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPj-0004Gp-JP; Tue, 10 Jan 2023 08:55:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474482.735752; Tue, 10 Jan 2023 08:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPj-0004Gc-DX; Tue, 10 Jan 2023 08:55:23 +0000
Received: by outflank-mailman (input) for mailman id 474482;
 Tue, 10 Jan 2023 08:55:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAP7-0005s6-5R
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:45 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2070.outbound.protection.outlook.com [40.107.103.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6c85df74-90c4-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 09:54:43 +0100 (CET)
Received: from DB7PR02CA0006.eurprd02.prod.outlook.com (2603:10a6:10:52::19)
 by DB4PR08MB9864.eurprd08.prod.outlook.com (2603:10a6:10:3cf::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:40 +0000
Received: from DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:52:cafe::2a) by DB7PR02CA0006.outlook.office365.com
 (2603:10a6:10:52::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT032.mail.protection.outlook.com (100.127.142.185) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:40 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Tue, 10 Jan 2023 08:54:39 +0000
Received: from b094e260e48b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3D2E68E8-747C-4449-9205-D2B044856754.1; 
 Tue, 10 Jan 2023 08:54:34 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b094e260e48b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:34 +0000
Received: from AS8P189CA0042.EURP189.PROD.OUTLOOK.COM (2603:10a6:20b:458::12)
 by DU0PR08MB7638.eurprd08.prod.outlook.com (2603:10a6:10:31f::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:31 +0000
Received: from AM7EUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:458:cafe::af) by AS8P189CA0042.outlook.office365.com
 (2603:10a6:20b:458::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:31 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM7EUR03FT052.mail.protection.outlook.com (100.127.140.214) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:31 +0000
Received: from AZ-NEU-EX02.Emea.Arm.com (10.251.26.5) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:29 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX02.Emea.Arm.com
 (10.251.26.5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:28 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c85df74-90c4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A9xXZMbekjt/tql+kzQOe7eglF3L9li7hx3d+MOMfTs=;
 b=wStOQAtR9wr3xfjETfijXVkPoK6EjofHsjwj4GezKJ4hg3PfSwYIPFKFpA8QuZPm14Fa4ucszy2WmkJLHSnnBYKiKfiXsElSDF7OedleAmuwx4I8cADpSZod4MK5deiRnIWknCNiWiq7qfNnlhSWXgvtbo1QHzaRrtCZKYInBn0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8f436cb93478cc52
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cENsEWAd+moxKaD55/XfUjNcxkB3d0km+dUiiyBnjidvB9M2rKkXh6BORZaYgRq0+3e+d9IPlSUe0PexA2S0XB6qBM9wSloOYN/WVO0y00VEPRv/6VOLFPCybf0xZSI1CwSg3wSBrKmx9hRoOWSaXHB4HnRsWivFmeIFXq/zXUrUk6w52+WzSC39xZA3DwoATN/cKIsSfk4RBsufhhXYiGw+7RXauP4Dtl2HLzu5agewHRtVwmLzEv41JfkddPWVcgFPWAgUmtpwobTDao1K3g41rIF3OpOPmMyIT2MaODJ1cqjFi7SEqUVY8G7HNYeEs1X6WyPJYgNDkPk2e64dfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A9xXZMbekjt/tql+kzQOe7eglF3L9li7hx3d+MOMfTs=;
 b=GA35AQ0XgoLhZ3XdZD0y4jXu5wM4OddFCtVmlluTg9AwFPoIc3suuBCgGAYI4HjbOj6AG1VVazA/B2ofGNXXIHCFKD6SkE55pzZqPORysHZAqCgDmNfn6QrjDJE40ZqUNxFS77KSfgb9miy1UmTfYpkTKZkuSbfsP0RLLGMBT0CpodehwuDS/fyBbGQHCaBDLrD1UTNHhWCtY1tz9L8xGcJAFY4LZDgcoqiv5j5o7VvkoF9zpfUELqkfYEaEnt8mho6pywNVF3w3zJwne83lRf5hyR8ffKONPRFebEfj2bNL+rGeNffh17N1AxyQ1VzRpHzWpR1j0eOT2TLvsmbCvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A9xXZMbekjt/tql+kzQOe7eglF3L9li7hx3d+MOMfTs=;
 b=wStOQAtR9wr3xfjETfijXVkPoK6EjofHsjwj4GezKJ4hg3PfSwYIPFKFpA8QuZPm14Fa4ucszy2WmkJLHSnnBYKiKfiXsElSDF7OedleAmuwx4I8cADpSZod4MK5deiRnIWknCNiWiq7qfNnlhSWXgvtbo1QHzaRrtCZKYInBn0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 16/17] xen/arm: Provide Kconfig options for Arm to enable NUMA
Date: Tue, 10 Jan 2023 16:49:29 +0800
Message-ID: <20230110084930.1095203-17-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	AM7EUR03FT052:EE_|DU0PR08MB7638:EE_|DBAEUR03FT032:EE_|DB4PR08MB9864:EE_
X-MS-Office365-Filtering-Correlation-Id: 87f9161b-9e2a-4ef4-bbd3-08daf2e84f2b
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Ai2Nn49kHTWarRWtw0dy8bx2KRkaTSdeT2IEQHS6UjuznaMHQ5V+eNZqKeMGTFZhM2Dw2wWx3YmhADU92G3I/bg/9lEnJdubKjUjC8awrnVsZMKE2YmwBKeyKyx5iBuGJfTXWym79bGIkvWaSJZ3c8bburSI5SKXbpXNzth4NdMNSEq/vWnJDAOpN83FygPao93MBwpyplt+W3mrZQZaine+yu5a2iEDz1Ai2/QKBV+E7bmwEa38C+YGMWLs+TNTk4zqoyIVENWLwT/spNYA/tzzgc+rYdD0StLQgJDb1rpyh8BxA0In+WsfG0RktEkqF+jMbhflFczUphUekj3gwskhctfDnr4kfIRrqaTtB/Utro09kxKve44ZynoMLFMdhIcPErnEcrfwwvvU/vJ90FaPZNTfF/h1G0r3gU3NVkuCQmQXB0diCc7f+VEFDwhJCoE+RqHMbjwMLGNAIYBMiMiWGtFB8q3gKtahxW3+F5lP+Kj7U/ajs9lYwxChjScg4lOIMtbfSuetyfO4VpGmR+SgmT/v0e2EHxMR2wcCq6IN5nLfCvMX+uOQ/lrdlQ4/HScm31NfJxHHOwuMO2pl10LnLwJH3Y4Z3aCmWzkxVbKCK6LVo5GahWXuRbRqOB4ss79GX1GegkyjpLhD7TktJiSoR+wWeinFIa8usJN/XwMntY1jYODJaw0/HW6dbuuJamm8Be4/gtuoJWwgsaGYHQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(136003)(396003)(39860400002)(451199015)(36840700001)(46966006)(2906002)(44832011)(36860700001)(2616005)(5660300002)(7696005)(41300700001)(8936002)(36756003)(70586007)(70206006)(83380400001)(8676002)(356005)(4326008)(316002)(47076005)(86362001)(336012)(82310400005)(1076003)(426003)(40480700001)(26005)(81166007)(478600001)(186003)(54906003)(6916009)(82740400003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7638
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7767b79c-f5eb-4540-bbee-08daf2e84a03
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	V0m2pag9oMWbbSsjJbj1ryiPS2w1nNx8mcKAKATcBt6/8yU3xIZ/cHjNvtcKniHUiLU4V3w+Vl8ICyOqcIVtQJNEkTBtQBaPdeut62oB4ZGA+84jjC80sk1i/hd2CcbCK88TEJtjgk4p1RcIUdr3cg0IwtT1U156W6YHZjz9gtL5FqW35Ew5ELgTOVD2mv8chzbDfeopJf6xxyT8G5DKmevLakKW1sCFakFfHK24wnUsC3tS7xdV6I/x/fD8c5T2uMo8OSqql+oYA1pIraUVA3BLzTgmFgorhbVbuUyJYqB3tB4CRpcOu2l28xBlSMwj0DcKHqDCUmCL+ZLGZ3vmqGwbeSyTl9qVH1FwohP+oCoO0O96Re4c8uVHantGHeXTCmKV/7d4YayOLyQm0zED/MST3sgeYj/3LtKQ1b/D0GH68jlAXCd7UBviRTLyeXU+BhhEJIDA0JPLDnEqwQUky+KLH7fQiKlkaURHkFl3Yk+iuWib5cQn6i1dr82Yll9iYYrjUdvCNgt5owEWkV9M9ZrymE3bkJGlZ2VwLuPCGHhU7frJHqo1g/4HKRG/uPWNzlCKYsM9D6q+GHb6i2DKEIiEDG7v5b4NPXAEm4CSOOxLUjnsJsyeaeDmymb/NpvbpyFMrQuNjDlQ4oVK0QD8aipSBe7DnunJlns6k/9i3nzMkzgqXN2SuRcQjSgpN7zUBu0JG3rRxNtgI9EjNcZdsw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(346002)(396003)(39860400002)(136003)(376002)(451199015)(40470700004)(46966006)(36840700001)(107886003)(47076005)(70206006)(36860700001)(1076003)(70586007)(8676002)(82740400003)(41300700001)(86362001)(426003)(4326008)(336012)(2616005)(81166007)(478600001)(40460700003)(5660300002)(40480700001)(316002)(44832011)(6916009)(26005)(186003)(7696005)(2906002)(54906003)(36756003)(83380400001)(82310400005)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:40.0790
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 87f9161b-9e2a-4ef4-bbd3-08daf2e84f2b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB9864

Arm platforms support both ACPI and device tree. We don't want
users to have to select device tree NUMA or ACPI NUMA manually.
Instead, users should only need to enable NUMA for Arm, with
device tree NUMA or ACPI NUMA selected automatically depending
on which of the device tree and ACPI features is enabled. This
way, both kinds of NUMA support code can coexist in one Xen
binary, and Xen can check the feature flags to decide whether
device tree or ACPI provides the NUMA firmware tables.

So in this patch, we introduce a generic option, CONFIG_ARM_NUMA,
for users to enable NUMA for Arm, and a CONFIG_DEVICE_TREE_NUMA
option for ARM_NUMA to select when the HAS_DEVICE_TREE option is
enabled. Once ACPI NUMA for Arm is supported, ACPI_NUMA can be
selected here too.
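With the options above, a NUMA-enabled Arm build configuration would
contain, for example, the following fragment (ARM_NUMA is only
selectable when UNSUPPORTED is enabled, and it selects
DEVICE_TREE_NUMA, which in turn selects NUMA):

```text
CONFIG_UNSUPPORTED=y
CONFIG_ARM_NUMA=y
# selected automatically by ARM_NUMA:
CONFIG_DEVICE_TREE_NUMA=y
CONFIG_NUMA=y
```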

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Remove the condition of selecting DEVICE_TREE_NUMA.
---
 xen/arch/arm/Kconfig | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..e751ad50d1 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -39,6 +39,17 @@ config ACPI
 config ARM_EFI
 	bool
 
+config ARM_NUMA
+	bool "Arm NUMA (Non-Uniform Memory Access) Support (UNSUPPORTED)" if UNSUPPORTED
+	depends on HAS_DEVICE_TREE
+	select DEVICE_TREE_NUMA
+	help
+	  Enable Non-Uniform Memory Access (NUMA) for Arm architectures
+
+config DEVICE_TREE_NUMA
+	bool
+	select NUMA
+
 config GICV3
 	bool "GICv3 driver"
 	depends on !NEW_VGIC
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:55:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:55:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474487.735763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPl-0004fJ-WC; Tue, 10 Jan 2023 08:55:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474487.735763; Tue, 10 Jan 2023 08:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPl-0004eR-Rc; Tue, 10 Jan 2023 08:55:25 +0000
Received: by outflank-mailman (input) for mailman id 474487;
 Tue, 10 Jan 2023 08:55:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAP1-0005oC-60
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:39 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2040.outbound.protection.outlook.com [40.107.6.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 684407f9-90c4-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:54:36 +0100 (CET)
Received: from AS9PR07CA0032.eurprd07.prod.outlook.com (2603:10a6:20b:46b::26)
 by AS8PR08MB9930.eurprd08.prod.outlook.com (2603:10a6:20b:564::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:34 +0000
Received: from AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:46b:cafe::e8) by AS9PR07CA0032.outlook.office365.com
 (2603:10a6:20b:46b::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT025.mail.protection.outlook.com (100.127.140.199) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:34 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Tue, 10 Jan 2023 08:54:33 +0000
Received: from 3390037eb137.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2F7B9D46-02EF-44CB-AE26-D0F4CB766B0F.1; 
 Tue, 10 Jan 2023 08:54:26 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3390037eb137.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:26 +0000
Received: from FR3P281CA0159.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a2::19)
 by DU0PR08MB9875.eurprd08.prod.outlook.com (2603:10a6:10:423::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:15 +0000
Received: from VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a2:cafe::a3) by FR3P281CA0159.outlook.office365.com
 (2603:10a6:d10:a2::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:15 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT055.mail.protection.outlook.com (100.127.144.130) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:15 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:09 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 684407f9-90c4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JD7QrIfD9JHz3fTuQsCfz4zF77DrQ2+h/779Q8LcaQg=;
 b=lZ9tyBRNocxLf39gyHK2PqW0/wInK5F7UD9wUV46pvLMvVjblxMPVLloJt341jM5uNGD7puh7H76ky3qYG3tRd2+3Aj72TTxrTCf7BvAfyQYTwP3m8icL+GSXYynQYx69XaUtEySS9Wy5yzX1bzK7T7iTkp1LLKzLpBsF/4eZCw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6e8560d985ecbcc9
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Maa6RscqbOXNhihsTbC3V4BIiMLxNz4ex48uAer04W/YxSLBW4l34l9d84ao67Z+rljmRddGrOrEB0fBQz9tOJZ5Ewx1BaYSrHxCT83vaOJXV+7hHh+GsqLiuzZ5fM3Xowqsf9enwpemcM6Gss6UiMi3qm/AVOSnY0U9TGsTnjvtCHcZRNyM8a+i5pHt++rRpVtbF7CKqfIpm+riOg2Qa03h31aLkcRNXjZjiaAuoHtRQUckIPLXZnO92N5LKDBbzhRNoV813Ea4kuTl/gitNCiT6QnDga27W1eVHz/Id279lG4mCYH7JSsVTinbYKZut/vcPxSo2/YALU/7970VCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JD7QrIfD9JHz3fTuQsCfz4zF77DrQ2+h/779Q8LcaQg=;
 b=l8b/PBbDnukDhWNv9cJ0dia+1FmJCzQEZZ5O43lesbsfOdeqvSj+hdxdpp+hVKty5fUrQklIpicaM9iBJCEb4kL2Jnp8tx48S/MXtATUCjifaxUJoZwch/FnKikbXZXkh6iY9e2nn/zEnsvFyL2G2+OEpti+lbu1EDHO5+/QvHyHv4eTOeKiBGj4sCr7i3iRP6w8O9uL9b0P7P2vFTH1mc3CcsylUOL6mpST02dAuXcBZfJmlJ2dMCVx6CepzvseBoOn7WV418K3l9dMpuU9BijkexOTI/S0ieBe2/XGfjYBNbv83gzCHP5dYAisWx8Gp+JllFpAFxguwmsvz0NBkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JD7QrIfD9JHz3fTuQsCfz4zF77DrQ2+h/779Q8LcaQg=;
 b=lZ9tyBRNocxLf39gyHK2PqW0/wInK5F7UD9wUV46pvLMvVjblxMPVLloJt341jM5uNGD7puh7H76ky3qYG3tRd2+3Aj72TTxrTCf7BvAfyQYTwP3m8icL+GSXYynQYx69XaUtEySS9Wy5yzX1bzK7T7iTkp1LLKzLpBsF/4eZCw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 09/17] xen/arm: introduce a helper to parse device tree NUMA distance map
Date: Tue, 10 Jan 2023 16:49:22 +0800
Message-ID: <20230110084930.1095203-10-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT055:EE_|DU0PR08MB9875:EE_|AM7EUR03FT025:EE_|AS8PR08MB9930:EE_
X-MS-Office365-Filtering-Correlation-Id: c3ae5224-0cd0-4e27-de41-08daf2e84baf
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 SirlqwtpLwHm2mWKsAdonfyzexqmkx2lbTAa6R1FFEe4+2+b43/bJrXeYmWZj5M7CE6/XjyuAn7BGq89ebRYbbE81EzQzL5Ob8Bw7Tp5fbHFzSZSgNpVK3fM61RQ1j3XzN08LNXSwOgH9jWNHCX5jKZ4AGOu7KxIZppUDCgy6RIMoXYylyOnBOLEllRuTn/see1sL0G1vq3uLW1cED8d/ndCvcEGs66HaP90qmRcPtY/qrG+Mvmab+VyvaahwiP9+5qdsd2rocpNj9TNBO87Py2g4vXIiQdLy9yLujxNslYNFneegDlZmIkjieubwxhTxiBSPKUndNZ9Srih8ur+WNl0PKmFN3YEhiubM+WPUu2yF7+UCU9+hsNdKVlgzykPN9TmiMq6/mgky0QxAi/16YJvxI+XAO4go+6cm+1NkwwWHeZvFWESRawMwsM8+Dx0LpheydWvnTXTydbEY4oiIGZ/WmFfYhKmWSM9KzG9bCz7mePNXiERfUo0CMODjxnS9zFvZvsdr2y+vY4SL33yKLu6gyrmW1OyNqUK05ab7R3BBWpXY/rUXz6EerjKHQZxkVwpBK4J/2/pQErpw85DUu3JXvnhdpb/WEyWYDBz/ZMGVsfnCcunwRV8Ulv6n2E8u+oYCKmnDaK/IA5uU4ZbEB2yrg+ub+O0Iv/P2Y3CwUuNppmqdL/9fGq5M/qBt/Z1KJYlVf93etvMRfOwMYTuY+ymWo5xo9fKk4KuVRj3IuI=
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(396003)(136003)(39860400002)(346002)(451199015)(36840700001)(46966006)(356005)(81166007)(82740400003)(36860700001)(86362001)(54906003)(8676002)(4326008)(6916009)(41300700001)(70586007)(70206006)(8936002)(40480700001)(5660300002)(2906002)(44832011)(1076003)(2616005)(336012)(83380400001)(426003)(47076005)(82310400005)(966005)(186003)(6666004)(26005)(316002)(7696005)(478600001)(36756003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9875
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ec34f7cc-6957-4d43-a18f-08daf2e8408c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	dOx2p4Nbu+Qq5yYF0i5m0xeM29ARRXdYzMPCVCdajFldRVlNWJDW+KFKTum7j+koVXpD8KtLgucfKy+Steb0nwTcSSu2g5T0vquGTjyykTd6i5d6z8GHAaedbUiev+BWtAKkp3XU0sRSZFkZiruPHPsaZgV57dWz2c6tRMQdIAUbloxktZp7qoXtiPFq3lXFimvUmG9mTavjO7xziNxoZsWPVK4csh2HgHDWxyygym7m3oHmIAa8s8EM7ifuX4vdAP5yD6rbg20tyiCv9aeSMj+Sve3ue42SjyG6rH2eM9OCdP/rEqKALDW7xV8xKqrOdm/qS8K6FeDkraj2AJ/iiu8Smm53JIIO/UF3lQbmSFcHQ+5TgzRccl3I1b8nnDvC3uRO+vTeSVdByxiL2pKWQf8ztuBxhtQBxrK2JjSaWVR9jFwpljkAd3AVh56qxWWJE23A8XrvmaCHG8F516HL1LtEM3CYBso3VEg1tLNetj5d2VElyAnIbBiqASv8SofF+cSpHkOFJjiwoxG0g9QhfrAaZYl5i7a8h7xuQQBTBb9muwarmvwKqFnHP9IXQ8ysc2PcDb132yYosPayw9C42eLuBJDXbEdsnrMiGQEcKeeHOrGFL9UrFaObDIeWe8/HHBnTlY83WuvvY1CgmVhmxlSc668UbDKpZ7ZiOS68sSitWEhEGPxSqbPh7CF2i573Svlr1WWk/CiW3ROsSb0FDwZO/8eF7264B6gCpJZsZVI=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(346002)(376002)(451199015)(40470700004)(36840700001)(46966006)(2906002)(36860700001)(44832011)(5660300002)(7696005)(41300700001)(8936002)(36756003)(4326008)(83380400001)(70206006)(70586007)(8676002)(40460700003)(86362001)(47076005)(82310400005)(1076003)(336012)(2616005)(6916009)(426003)(26005)(81166007)(40480700001)(478600001)(186003)(966005)(82740400003)(316002)(6666004)(107886003)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:34.1716
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c3ae5224-0cd0-4e27-de41-08daf2e84baf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9930

A NUMA-aware device tree provides a "distance-map" node to
describe the distance between any two nodes. This patch
introduces a new helper to parse this distance map.
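For reference, the v1 binding encodes the map as <from to distance>
triplets; a hypothetical two-node system with symmetric distances (not
taken from any real platform) would look like:

```dts
distance-map {
        compatible = "numa-distance-map-v1";
        distance-matrix = <0 0 10>,
                          <0 1 20>,
                          <1 1 10>;
};
```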

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Get rid of useless braces.
2. Use new NUMA status helper.
3. Use PRIu32 to replace u in print messages.
4. Fix opposite = __node_distance(to, from).
5. Disable the DT NUMA info table when invalid data is found
   in the DTB.
---
 xen/arch/arm/numa_device_tree.c | 107 ++++++++++++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 793a410fd4..88056e7ef8 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -151,3 +151,110 @@ invalid_data:
     numa_fw_bad();
     return -EINVAL;
 }
+
+/* Parse NUMA distance map v1 */
+static int __init fdt_parse_numa_distance_map_v1(const void *fdt, int node)
+{
+    const struct fdt_property *prop;
+    const __be32 *matrix;
+    unsigned int i, entry_count;
+    int len;
+
+    printk(XENLOG_INFO "NUMA: parsing numa-distance-map\n");
+
+    prop = fdt_get_property(fdt, node, "distance-matrix", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "NUMA: No distance-matrix property in distance-map\n");
+        goto invalid_data;
+    }
+
+    if ( len % sizeof(__be32) != 0 )
+    {
+        printk(XENLOG_WARNING
+               "NUMA: distance-matrix in node is not a multiple of u32\n");
+        goto invalid_data;
+    }
+
+    entry_count = len / sizeof(__be32);
+    if ( entry_count == 0 )
+    {
+        printk(XENLOG_WARNING "NUMA: Invalid distance-matrix\n");
+        goto invalid_data;
+    }
+
+    matrix = (const __be32 *)prop->data;
+    for ( i = 0; i + 2 < entry_count; i += 3 )
+    {
+        unsigned int from, to, distance, opposite;
+
+        from = dt_next_cell(1, &matrix);
+        to = dt_next_cell(1, &matrix);
+        distance = dt_next_cell(1, &matrix);
+        if ( (from == to && distance != NUMA_LOCAL_DISTANCE) ||
+             (from != to && distance <= NUMA_LOCAL_DISTANCE) )
+        {
+            printk(XENLOG_WARNING
+                   "NUMA: Invalid distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+                   from, to, distance);
+            goto invalid_data;
+        }
+
+        printk(XENLOG_INFO "NUMA: distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+               from, to, distance);
+
+        /* Get opposite way distance */
+        opposite = __node_distance(to, from);
+        if ( opposite == 0 )
+        {
+            /* Neither direction has been set yet, set both */
+            numa_set_distance(from, to, distance);
+            numa_set_distance(to, from, distance);
+        }
+        else
+        {
+            /*
+             * The opposite direction has already been set. If it
+             * differs, it may be a firmware (device tree) bug.
+             */
+            if ( opposite != distance )
+            {
+                /*
+                 * The device tree NUMA distance-matrix binding:
+                 * https://www.kernel.org/doc/Documentation/devicetree/bindings/numa.txt
+                 * notes:
+                 * "Each entry represents distance from first node to
+                 *  second node. The distances are equal in either
+                 *  direction."
+                 *
+                 * That means the device tree does not permit asymmetric
+                 * distances. The ACPI spec, however, explicitly permits
+                 * this case:
+                 * "Except for the relative distance from a System Locality
+                 *  to itself, each relative distance is stored twice in the
+                 *  matrix. This provides the capability to describe the
+                 *  scenario where the relative distances for the two
+                 *  directions between System Localities is different."
+                 *
+                 * That means a real machine may have such a NUMA
+                 * configuration. So, place a WARNING here to let system
+                 * administrators know, in case they have deliberately bent
+                 * the device tree binding to support such a rare machine.
+                 */
+                printk(XENLOG_WARNING
+                       "NUMA: Asymmetric distances: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32", NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+                       from, to, distance, to, from, opposite);
+            }
+
+            /* The opposite direction is already set, only set this one */
+            numa_set_distance(from, to, distance);
+        }
+    }
+
+    return 0;
+
+invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:55:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:55:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474490.735768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPm-0004l2-Fj; Tue, 10 Jan 2023 08:55:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474490.735768; Tue, 10 Jan 2023 08:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPm-0004jr-7T; Tue, 10 Jan 2023 08:55:26 +0000
Received: by outflank-mailman (input) for mailman id 474490;
 Tue, 10 Jan 2023 08:55:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOt-0005oC-Vo
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:32 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2060.outbound.protection.outlook.com [40.107.241.60])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64252c3d-90c4-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:54:29 +0100 (CET)
Received: from DB3PR08CA0023.eurprd08.prod.outlook.com (2603:10a6:8::36) by
 AS8PR08MB8014.eurprd08.prod.outlook.com (2603:10a6:20b:573::7) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18; Tue, 10 Jan 2023 08:54:23 +0000
Received: from DBAEUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:0:cafe::7f) by DB3PR08CA0023.outlook.office365.com
 (2603:10a6:8::36) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT046.mail.protection.outlook.com (100.127.142.67) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:23 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Tue, 10 Jan 2023 08:54:23 +0000
Received: from 2c054ce7be9b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 905571E1-3E67-479C-9FC7-959B1C3C0CE3.1; 
 Tue, 10 Jan 2023 08:54:16 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2c054ce7be9b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:16 +0000
Received: from FR3P281CA0152.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a2::6) by
 AM8PR08MB5570.eurprd08.prod.outlook.com (2603:10a6:20b:1d2::7) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18; Tue, 10 Jan 2023 08:54:14 +0000
Received: from VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a2:cafe::9) by FR3P281CA0152.outlook.office365.com
 (2603:10a6:d10:a2::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:14 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT055.mail.protection.outlook.com (100.127.144.130) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:14 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:03 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64252c3d-90c4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M0LsJ5CSUqhuJ+jKKH9S6bS+xUQ/jDtaI6zjdX/HcC4=;
 b=WgH4ZFzxCtwfDNF2TYm+Q3vSBwWD9rEQb4TbvG0TOlzyuovBK158ivFlfQksCTuKqgZatB1Cz8A+f8Nn1k0lhiJm52dmcxykXXCdncAAz86PloSxBXTHHgv/9kiQifdq13ZBwyaeOKUJo8sfZksjaRJqCjlzvRGgjxX9NYVqzF0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7d9229752571d865
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R6jgDNrWZ3RsmHR7P4XYzN/NdlYbwODLleULfn9UNAYMir2VIF7zBlO7Ls27pwSvg5oIx/q7PCuQ5edJrwtAdwT46/QZLpb76MPqSVkitkvSFGOi5dH/qOtHb2jCJ9MFsvb6ObRYcQsrat8y8nVVPr4T9e0zHRAQZIXyu2z175WSSE240ZAeQWwh9Rt7GylPisUm6DxHRNA8UU0nOrLOyyszn323yM2xJ4M7zyoLvVqRkUdYPfNk7niwTR+f48MWsqnFLG/1lzgAfgWsEUK5roCx1xP9ZeHfWHsRuuxRWB4VXDMwUW0iQzBxRfHUrvuLlotJ/pLY+CjViN2X05XjoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M0LsJ5CSUqhuJ+jKKH9S6bS+xUQ/jDtaI6zjdX/HcC4=;
 b=UTf5mljz33ND6pukkxyxuZg5BTqGNlTcWhypwwBfqzDDXsDjDZpHn06FfjKnZZY86SOdGH2j6B7UDOIjlwSEbdT8JmHRd12NkYiMaWWfLbK2nwzQc4Qqm3aZ5vEDtpBvaLbADi3NwDn0EsHHvQCFiO5ZNnC85vutBlzp0iLltQT28FZ+Lgq98EfjjtKFKyve5sPE5a4LJEcP8g0zRIPp7y/FuwHvMEr8tpqx7rwDyWHTTuQbwSKfi371zw7v9OWw+2g4Kmgb20CCCI5kkchlG36l0dBxt2yU3TBTNw8i0fUcDXorvpBCmXD2zKbGxlTN8OXdn9Ag7xnfakOlb05NaQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M0LsJ5CSUqhuJ+jKKH9S6bS+xUQ/jDtaI6zjdX/HcC4=;
 b=WgH4ZFzxCtwfDNF2TYm+Q3vSBwWD9rEQb4TbvG0TOlzyuovBK158ivFlfQksCTuKqgZatB1Cz8A+f8Nn1k0lhiJm52dmcxykXXCdncAAz86PloSxBXTHHgv/9kiQifdq13ZBwyaeOKUJo8sfZksjaRJqCjlzvRGgjxX9NYVqzF0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 07/17] xen/arm: introduce a helper to parse device tree processor node
Date: Tue, 10 Jan 2023 16:49:20 +0800
Message-ID: <20230110084930.1095203-8-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT055:EE_|AM8PR08MB5570:EE_|DBAEUR03FT046:EE_|AS8PR08MB8014:EE_
X-MS-Office365-Filtering-Correlation-Id: 1f7a8fcb-cf41-4814-5cb6-08daf2e84566
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 1dn9LWEKyH0NMvNktotZQO1LMC0LUoKoNrL0OaDJ1rCVnG+XJJKj8ABBXnDcO40W00Rrb08qt9fWJgMn33fBt6OA/aWxVjazCfgpaelNlDGUn7ZrC1DIZDM3iTOudKyk9TOeiQOOfwV9UsjuWZ7oCRCnEhfY+woXV8k6EMT5ER5PSR/llt+2gv1QVoPSyIFiWNUYYkCKDgU+bUPrdoQdnO/FFfdAPU+gaYOiehCsfIGsPcWpHvLgkf7BAPSQmZXXzvfv7CdCVCRzU0eVVH8ZcRYr28fvtG7Gam5wfQIgclhxtcf7c3lLMlPItWnn/ySFmprS9ZRkP04vSAmCr33K1mzg9r/0q61ytraN74SS/l2h4maQRR44iMbjLVgf/9lIUh0CM2ACG9aslS4XDRQSFrsiKs4REGMk+Q3iIlELYdlgaLb+nvrX6Zdwc8FXOowA5NwsUZHIJqfGmql+lMNWe2k++lsnokavl65lsK3ppFG6bT1NtIAmcKsLRdS8j2D1zRtYs4pidexvMVU2Y2Y1TDLqtHnEh7YeUzD+vX2sSvmXDpqqOB/4qvkGH5JtCzclh8jtIDj0FxCgwnKPmvXqebMQ3GJKaOXlk9slre9qPUUMrhPXJWcsZt3BVnsWVOYr9ApVmfsGUw9OwBWwk5rHHdThKc1Jq5OQPxUGDvQGcrEcBFchpv677RV0So09ll1xS+b+PdQqO4VDf0IgCHKvB6c1EEkDSVLSHPP2uf1vmhQHdwTZKbx98Q2pKtU0yg1j
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(136003)(39860400002)(396003)(451199015)(46966006)(36840700001)(5660300002)(44832011)(8936002)(41300700001)(2906002)(36756003)(54906003)(83380400001)(8676002)(70586007)(70206006)(4326008)(316002)(6916009)(7696005)(26005)(81166007)(186003)(6666004)(86362001)(478600001)(336012)(1076003)(2616005)(40480700001)(47076005)(82740400003)(356005)(426003)(82310400005)(36860700001)(2004002)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5570
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	397c67c1-6fe2-499a-0e00-08daf2e83ff4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UO7Y7HN+aYDoyYCIxAQOUE9CJxW5LrZTDETgW7GLoZt54HIhkzx53jEXBrev3EaQMKlHSBh1DLNIj6UJxmJg5RYkIGZqMQUNT8B32pWJ1k/JJviFUm2M5dM19v1NZP76hSTWj5tgS4oJJm/cZfWethdMjYucOsEGa501MZeSn0OWfsWYzqVP84wrG/cx5+L5BiLBI39KQpIMCP03/H9Kps969UqXRUUMYg+zLNi+535naJQLnRSlI0LxECG+sr7x3qrKF9dXFvMIkd/J82lfRZ8s4GI3Z3cekFnoq3on3z30MqxiHIUSenddJg7BqyjOvYE4EbMCINs7sbo2fvn3w91YT/Wi20gzJkukTULDiaormFudITFD1ixepUiMsP6U4xkeByEyKWe5751RdEOKFGOyzGwWBdxbfswcULAoLJhPpMRCKoOobGcQdBRHxxyFinwo1eA3OyhLyFn2wscBCRR4vt3AsmcU6YpD9y9mbfuYV+u6rRUGdwt1svJgV/FY9hl0Do0jsygfsV/rMEjMn8m3MCv9TgWkM60rVZcJUVV91eIYTffds71wv5aCmY2MQu1EvdQSMNM7ps8jRaSRTi7yqRD8w3kAdZuDcNRNBpiMKYpGTleXlcLWrlj+Q82i4/AdVPzTWadeNK7qIuwadUDRGPO3jymwFvPeo1e+xLJQIFYd/dhDCqSZWoXklYO17mbSOiNYw++IMCCBV2j4vncFphA+6xy9EhZ9FMl7fIuZmdRmua7V6mYpkM4MXHNe
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(376002)(346002)(451199015)(36840700001)(46966006)(40470700004)(36756003)(81166007)(70206006)(4326008)(5660300002)(44832011)(8676002)(2906002)(8936002)(70586007)(83380400001)(82740400003)(36860700001)(86362001)(7696005)(478600001)(316002)(6666004)(6916009)(54906003)(40460700003)(40480700001)(82310400005)(41300700001)(336012)(107886003)(47076005)(1076003)(26005)(186003)(2616005)(426003)(2004002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:23.6861
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1f7a8fcb-cf41-4814-5cb6-08daf2e84566
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8014

Processor NUMA ID information is stored in device tree's processor
node as "numa-node-id". We need a new helper to parse this ID from
processor node. If we get this ID from processor node, this ID's
validity still need to be checked. Once we got a invalid NUMA ID
from any processor node, the device tree will be marked as NUMA
information invalid.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Move numa_disabled from fdt_numa_processor_affinity_init
   to fdt_parse_numa_cpu_node.
2. Move invalid NUMA id check to fdt_parse_numa_cpu_node.
3. Return ENODATA for normal dtb without NUMA info.
4. Use NUMA status helpers instead of SRAT functions.
---
 xen/arch/arm/Makefile           |  1 +
 xen/arch/arm/include/asm/numa.h |  2 ++
 xen/arch/arm/numa.c             |  2 +-
 xen/arch/arm/numa_device_tree.c | 64 +++++++++++++++++++++++++++++++++
 4 files changed, 68 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/numa_device_tree.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 9073398d6e..bbc68e3735 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -39,6 +39,7 @@ obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
 obj-$(CONFIG_NUMA) += numa.o
+obj-$(CONFIG_DEVICE_TREE_NUMA) += numa_device_tree.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += platform.o
diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e0c909cbb7..923ffbfd42 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -28,6 +28,8 @@ enum dt_numa_status {
     DT_NUMA_OFF,
 };
 
+extern enum dt_numa_status device_tree_numa;
+
 /*
  * In ACPI spec, 0-9 are the reserved values for node distance,
  * 10 indicates local node distance, 20 indicates remote node
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 4dd7cf10ba..3e02cec646 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -20,7 +20,7 @@
 #include <xen/init.h>
 #include <xen/numa.h>
 
-static enum dt_numa_status __read_mostly device_tree_numa;
+enum dt_numa_status __read_mostly device_tree_numa;
 
 static unsigned char __read_mostly
 node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
new file mode 100644
index 0000000000..c031053d24
--- /dev/null
+++ b/xen/arch/arm/numa_device_tree.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm Architecture support layer for device tree NUMA.
+ *
+ * Copyright (C) 2022 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <xen/init.h>
+#include <xen/nodemask.h>
+#include <xen/numa.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/device_tree.h>
+
+/* Callback for device tree processor affinity */
+static int __init fdt_numa_processor_affinity_init(nodeid_t node)
+{
+    numa_set_processor_nodes_parsed(node);
+    device_tree_numa = DT_NUMA_ON;
+
+    printk(KERN_INFO "DT: NUMA node %"PRIu8" processor parsed\n", node);
+
+    return 0;
+}
+
+/* Parse CPU NUMA node info */
+static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
+{
+    unsigned int nid;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 will return NUMA_NO_NODE when this CPU
+     * DT node doesn't have numa-node-id. This can help us to
+     * distinguish a bad DTB and a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_ERR "DT: CPU numa node id %u is invalid\n", nid);
+        numa_fw_bad();
+        return -EINVAL;
+    }
+
+    return fdt_numa_processor_affinity_init(nid);
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 08:55:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 08:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474495.735785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPy-0006B9-5P; Tue, 10 Jan 2023 08:55:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474495.735785; Tue, 10 Jan 2023 08:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAPy-0006As-1k; Tue, 10 Jan 2023 08:55:38 +0000
Received: by outflank-mailman (input) for mailman id 474495;
 Tue, 10 Jan 2023 08:55:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K1Mu=5H=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFAOs-0005oC-Ve
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 08:54:31 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2065.outbound.protection.outlook.com [40.107.105.65])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64046635-90c4-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 09:54:28 +0100 (CET)
Received: from DB8PR06CA0066.eurprd06.prod.outlook.com (2603:10a6:10:120::40)
 by PR3PR08MB5691.eurprd08.prod.outlook.com (2603:10a6:102:82::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:26 +0000
Received: from DBAEUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:120:cafe::8) by DB8PR06CA0066.outlook.office365.com
 (2603:10a6:10:120::40) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT030.mail.protection.outlook.com (100.127.142.197) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:26 +0000
Received: ("Tessian outbound baf1b7a96f25:v132");
 Tue, 10 Jan 2023 08:54:26 +0000
Received: from fdeca5795d80.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 61AE824B-371A-47A1-A627-6D734A3DBD48.1; 
 Tue, 10 Jan 2023 08:54:18 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fdeca5795d80.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Jan 2023 08:54:18 +0000
Received: from FR3P281CA0152.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a2::6) by
 PAXPR08MB6752.eurprd08.prod.outlook.com (2603:10a6:102:131::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 08:54:17 +0000
Received: from VI1EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a2:cafe::ac) by FR3P281CA0152.outlook.office365.com
 (2603:10a6:d10:a2::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.11 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:17 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VI1EUR03FT055.mail.protection.outlook.com (100.127.144.130) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.5986.18 via Frontend Transport; Tue, 10 Jan 2023 08:54:17 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.16; Tue, 10 Jan
 2023 08:54:14 +0000
Received: from ais-wip-ds.shanghai.arm.com (10.169.190.86) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2507.16 via Frontend
 Transport; Tue, 10 Jan 2023 08:54:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64046635-90c4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0gPXnqAOqOIvOq2SZnrFV2Z6dU7bFkyDMdqIXXj4RZE=;
 b=dLHUYEMdck3Q7r4mNSrbppoSU3bhtYwYDwjvaa+hA5dIeRLEJVpawJfFwo8QidcIDwOszznDj6q2Pqegjo2Xy5jOZcvVtf77336sjcFvo17oDHzsLyM/kEMUERFc9jXxeHrpsfW8ycI2WreiCr0tpXh8RmztRwLJfK1JKnWMIZY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: fdb4868fb50ca0cb
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ea+N/6nIfTQX1zQDTrZPfLnRLTDx0+uMl03K9gBFLueweRyVU5+svrm63ncWD0Nj5p/tN6c4w+BhBAPLdmw9DnQ0JRl3fpDxlyItIL0tPSPnEgWf7CLMn+JefwPqEbX0u2EBysX2QPcfF5tb9ZeEoVWCNYeJbe8hlFC3fAnhw5vb1gkgAXs9yiWQDe5ZVw8qSkWmn9r1VPxuxcgy1cfyADHBD7PzKJTuAGEAXkiLqAOxBU53jeprvYBWD27PQ28VSupj9ucH6IhIzKiHL2VFsTT2BLxTjGccR6kuky32XOM8L7LOkyfZRjuc+bUrJfDHPaJ3FTSX7glml6WNly2ofQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0gPXnqAOqOIvOq2SZnrFV2Z6dU7bFkyDMdqIXXj4RZE=;
 b=Yjs+/qYvY3LWbPPqt1nOwUT+ND+0HA/2TDbVwfwejFYNbHjUKu3FlpI1tOKXABfnIGw/e+B0KVTnlo9HDIeaBnyW3BIgh8e4kI/itxDwHhowP3aFACRITi6K+0lqxCGFs6iZ05TAFrcyWzyGI1hXNkjivc52DMHAGLukD5fAAVZRq1PoXhQ0j0zDXuqA3azmZ52UkLAai4u2K+ou0Bl2yI1HckamsJjobuFx8kOvWXp8Dbbs2NSyNjOy64Q988BNklB/9OOetFF87RqDQR7mHsdX5NKuQGlfnBDBgpgRJHZNAvXwg53zQtcezdLbwZFIQXPk5qner1XTYG6sg5Rg5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0gPXnqAOqOIvOq2SZnrFV2Z6dU7bFkyDMdqIXXj4RZE=;
 b=dLHUYEMdck3Q7r4mNSrbppoSU3bhtYwYDwjvaa+hA5dIeRLEJVpawJfFwo8QidcIDwOszznDj6q2Pqegjo2Xy5jOZcvVtf77336sjcFvo17oDHzsLyM/kEMUERFc9jXxeHrpsfW8ycI2WreiCr0tpXh8RmztRwLJfK1JKnWMIZY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com; pr=C
From: Wei Chen <wei.chen@arm.com>
To: <xen-devel@lists.xenproject.org>
CC: <nd@arm.com>, Wei Chen <wei.chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 11/17] xen/arm: keep guest still be NUMA unware
Date: Tue, 10 Jan 2023 16:49:24 +0800
Message-ID: <20230110084930.1095203-12-wei.chen@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230110084930.1095203-1-wei.chen@arm.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-TrafficTypeDiagnostic:
	VI1EUR03FT055:EE_|PAXPR08MB6752:EE_|DBAEUR03FT030:EE_|PR3PR08MB5691:EE_
X-MS-Office365-Filtering-Correlation-Id: aabd66b6-910a-4179-fa55-08daf2e8471c
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 WqGNSjQkcwE3hYhgl/i6TMKH6hKLRoqTgSWclNzJfnGSOP+jsoIm6RNyuxaM786eFrwi9upp3E/1vpFLtVTHpuTWEeFyFmweubKoOlVnUekjGEbYPioFVuTEQj9y1egjCcloOCh9FcP/r++oCEGHAno5WEQ675aJr6+A4iEzY4XAr6lUuSHflKUORdRVgIM8PWhhnZXg6qKbxLTjbKNSrp+ZeLT1Iv0H7GH3J/fTBiwAv/4UpDAReqTEbba+Y57cyPgHoer+4+ghJ0HoW6pNoQV8lDv+RS0QRTMhKBqNcB7BQRZnZt78vU+kcQS71EUwyJlUHDRciVJTtxYvEFhvStajLFMGFAXyoYIAQuKC+Vak5i0YbaN3rvcB9qhmbG8DCkkr18Ez+TX1amT6ZbcRXkJu6JTa4fba/NKUEUsUwmIwTuIVRV7XUz+9herG2n7kmslcs1kMQRowlg8HDl2aho6L2xgze58uX7NtyxSUlV7aaf+aDtkCJaoKpU2Zv2KEjz6/xMfbV1PtzG58Ya8DLe0tN00bkjuADq35lNp027nAploN6jsflbFJuQLdi7byCmy4KZn6nrzuy1PovYa0eaZ6LLMf10B+YyLiqRbRyRk03zNjR+YGESUlL3jVLEqjd4ysGLC+3y2JiJT96AUvil3S+aWPCnrAnMkh/jGwgjRGSTRkHj5qm3fT2TtVhBsXm5sUgt9G/NoVvIjgdGOTaQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(136003)(39860400002)(396003)(451199015)(46966006)(36840700001)(5660300002)(44832011)(8936002)(41300700001)(2906002)(36756003)(54906003)(8676002)(70586007)(70206006)(4326008)(316002)(6916009)(7696005)(26005)(81166007)(186003)(6666004)(86362001)(478600001)(336012)(1076003)(2616005)(40480700001)(47076005)(82740400003)(356005)(426003)(82310400005)(36860700001)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6752
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	317a562a-0aab-4700-6a6b-08daf2e8417d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ALt0lGf7OtNh/WjofST4kIl941WpNam3+GFl6K343w6KEQW/H7I+hAODHHZGVxzpt38jYmG/NT37eUmy8WwGlQkecs13NBYfqquQ7Lp1ycG7aBcgh5Xmk4HziZM535+2bAhfr6e5FK1V9eRSPoBTyssNincfZTqgdfh5r+y1qsMK1NbwrNYmQOXEwF68Wt944Btb7zl3QhpcdLnHnZMQA1H9EoWUml7Ix/0KCkmDMibjC2e+ME5Icd4FjZM47uqv3utbJfgNcc2JXv4AZ0S842CtxvQ9PDMY3ITOa/NaV7t5dSDQlbvxX8/jiihL7zLJh9hb1wNoVrcwrwzeaHb1E0Swh8dGneUQ1sQCRvYv0cqwv24e1GyVu41JSCM4BoVGq/HFcBThl11sGi5DMObJ5i5l4mNCZ+gt8cMIRu9fXhQLjI7EEmPPGKNm3nCKAAj5bXy38e3esnqtu6OM85U9t2JUVNmI3Ht6aspBmY0HZM7hPWvJmAlVko6nnBwDQN9W3MHLlMOsbtrCLb90ogksg2aUFh5JR0CZcANWHEQzp7epbVHtkCTGruv4l/E5iELaaap0zoV+/QOI+RalZq8Md/JtXHho69bN8cUKIClYX7ApUChD67+6jBc+2sNsWISz3DRF2iEcMRA+BmwkWLPU2v6j27ootAxhu2evIz6uXDrzw8mxa0MQaQbMwq55627D/QJHDv0zlcZvuNi9YE52zA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(376002)(346002)(451199015)(36840700001)(40470700004)(46966006)(82740400003)(81166007)(36860700001)(54906003)(8676002)(4326008)(70206006)(40460700003)(86362001)(6916009)(41300700001)(70586007)(316002)(8936002)(40480700001)(44832011)(5660300002)(2906002)(1076003)(2616005)(336012)(426003)(47076005)(82310400005)(186003)(7696005)(6666004)(478600001)(107886003)(26005)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 08:54:26.5762
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: aabd66b6-910a-4179-fa55-08daf2e8471c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5691

The NUMA information provided in the host Device-Tree
are only for Xen. For dom0, we want to hide them as they
may be different (for now, dom0 is still not aware of NUMA)
The CPU and memory nodes are recreated from scratch for the
domain. So we already skip the "numa-node-id" property for
these two types of nodes.

However, some devices like PCIe may have "numa-node-id"
property too. We have to skip them as well.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v1 -> v2:
1. Add Rb
---
 xen/arch/arm/domain_build.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 829cea8de8..8258048d0e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1185,6 +1185,10 @@ static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                 continue;
         }
 
+        /* Dom0 is currently NUMA unaware */
+        if ( dt_property_name_is_equal(prop, "numa-node-id") )
+            continue;
+
         res = fdt_property(kinfo->fdt, prop->name, prop_data, prop_len);
 
         if ( res )
@@ -2540,6 +2544,8 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
         DT_MATCH_TYPE("memory"),
         /* The memory mapped timer is not supported by Xen. */
         DT_MATCH_COMPATIBLE("arm,armv7-timer-mem"),
+        /* Numa info doesn't need to be exposed to Domain-0 */
+        DT_MATCH_COMPATIBLE("numa-distance-map-v1"),
         { /* sentinel */ },
     };
     static const struct dt_device_match timer_matches[] __initconst =
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 09:31:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 09:31:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474571.735803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAyk-0003OK-0r; Tue, 10 Jan 2023 09:31:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474571.735803; Tue, 10 Jan 2023 09:31:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFAyj-0003OD-R6; Tue, 10 Jan 2023 09:31:33 +0000
Received: by outflank-mailman (input) for mailman id 474571;
 Tue, 10 Jan 2023 09:31:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFAyi-0003O3-OR; Tue, 10 Jan 2023 09:31:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFAyi-0001ek-L6; Tue, 10 Jan 2023 09:31:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFAyi-000096-BA; Tue, 10 Jan 2023 09:31:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFAyi-0000qY-Al; Tue, 10 Jan 2023 09:31:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mMUeHK7Ly81MCiQFaeLTSciSQ0R5yxSlgKIOOypV8RY=; b=gZV62RInZoKruH/c5N7ELUq3sd
	ss91ERjuc2idEB7b+WTZzrGJd1b83QKuVPzuD30Waw9VO2kMlV7E6UO+hkV6xSCB2Y515RX+y9cRr
	JF9Y18hdojIwEDtMldO2Krtmdk5g091c50ob0kTGXqb6pgzCvDtUN6NkG/EjkWc775jI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175671-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175671: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=692d04a9ca429ca574d859fa8f43578e03b9f8b3
X-Osstest-Versions-That:
    xen=38525f6f73f906699f77a1af86c16b4eaad48e04
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 09:31:32 +0000

flight 175671 xen-unstable real [real]
flight 175692 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175671/
http://logs.test-lab.xenproject.org/osstest/logs/175692/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 175692-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop      fail blocked in 175651
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175651
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175651
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175651
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175651
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175651
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175651
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175651
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175651
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175651
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 175651
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175651
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175651
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  692d04a9ca429ca574d859fa8f43578e03b9f8b3
baseline version:
 xen                  38525f6f73f906699f77a1af86c16b4eaad48e04

Last test of basis   175651  2023-01-09 17:37:04 Z    0 days
Testing same since   175671  2023-01-10 00:08:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@cloud.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   38525f6f73..692d04a9ca  692d04a9ca429ca574d859fa8f43578e03b9f8b3 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 09:39:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 09:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474583.735824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFB63-0004GW-2e; Tue, 10 Jan 2023 09:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474583.735824; Tue, 10 Jan 2023 09:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFB63-0004GP-04; Tue, 10 Jan 2023 09:39:07 +0000
Received: by outflank-mailman (input) for mailman id 474583;
 Tue, 10 Jan 2023 09:39:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+HTt=5H=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFB61-0004GJ-Od
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 09:39:05 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2051.outbound.protection.outlook.com [40.107.21.51])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e6fdf76-90ca-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 10:39:03 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7386.eurprd04.prod.outlook.com (2603:10a6:102:85::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 09:39:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 09:39:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e6fdf76-90ca-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L5ot5e+fyTFJl10hHH2zNFBcd+L6Tp8bmnCN/NLItreBH1uf6vR78S9YUJHnxTp6f777fP3qXPIpb/zEd/QNsKlD1msvfVBLES2fE51BiQgzwTlg/KFf5Q3YxljM6TU81khylg3s4BDERJ24IGbkxRQJoIZFAPmaFBgQbJUeIHG8E5SGjt3kNXj6tmC6hOKWHY4eMYCBxGYkp2HAHhpPiEu5QOHw5DkUkfgkd09q7cWMek4APWYvbjj4NNwq+CjiYC65tJE8XgYrl9Cl9Qh+9siHmRPlbfMvV7uudwcaYluc939DTmQxzYF+Hzx9HH0STsgKTi7or0V5oSkm67TYbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cQeJ+AwqmXPde5CI4yD7WgZRUy/cEGl1FvE9Pxjw4uo=;
 b=Pb5kvVqbkdul5nXndrGV/glMMGxVlJLV1kxCJ8JeGneI6jiCUR9muX3qwug4nfFAKPVIgsOxkuYaS9/GAApaWcpNoGd+O2/vxq99QIgyV7RhnrR/jQE+UKD+8Ejjgi2FqWNW86te8ePn49lBSRhf7czh+oyv6jQFrtI2nrfcXCviyAwB47Mup8cKjv96z2JdlyXKlPE7b/Q2w7rsY8//lOmPi0CiQ7mZggaIQPuj7w7h2ugU1kxieHK7R49ZVSYyXoeqlGPDQnVh0YqEakyjgcbtIxRvbMeJgLLKfCh2VhoU9igl1XOtGpjllL7J9dCJMIiOIXgOAiQ1GFD3sW+LEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cQeJ+AwqmXPde5CI4yD7WgZRUy/cEGl1FvE9Pxjw4uo=;
 b=AteIyDUY35v6sNQXDEXADM+I5fLAivyxFCQkxJuHeIgbb0PJIMaUq7C5aXeJsLGfcCkY4UEskbxEvR4mct6MeIAvUso0NBOSyeVWWLitVEs70/JgGgqxh0r5z7NzUBr6WrXecwrkbd9s+9MbF1uCKCusHBAc5O0R2bJvAniierTYBi2FTY4zW7i2fYHbvRnmMpeund51zQ2KFE2rAHQwNVKnEeckTo9bWNBhXwsUikZT69tfTg642IMk4kaejXKbFFEdpM4VA1ZRxDtN67r6oY9i/2llJTIujh+Pnr1AA4S6S86g9H4Nlmt9LxjnmO7I55JdlZc1Kk1bfOuahZUteg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1653609a-42af-b7cd-9d30-fb2bd5721080@suse.com>
Date: Tue, 10 Jan 2023 10:38:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: Problem with pat_enable() and commit 72cbc8f04fe2
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Andrew Lutomirski <luto@kernel.org>,
 Dave Hansen <dave.hansen@linux.intel.com>,
 Peter Zijlstra <peterz@infradead.org>
References: <BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com>
 <03edcbc5-2dd7-1ddb-bafe-8412d8fc95aa@suse.com>
 <ba24157d-92fc-f472-9ef5-4eae3c63c12e@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ba24157d-92fc-f472-9ef5-4eae3c63c12e@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0113.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7386:EE_
X-MS-Office365-Filtering-Correlation-Id: 55782c39-384c-4f77-2050-08daf2ee8100
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(136003)(366004)(39860400002)(396003)(376002)(346002)(451199015)(53546011)(2906002)(6506007)(6512007)(26005)(6666004)(478600001)(31686004)(6486002)(186003)(6636002)(2616005)(316002)(54906003)(37006003)(66556008)(36756003)(66946007)(8676002)(66476007)(4326008)(41300700001)(5660300002)(8936002)(38100700002)(86362001)(83380400001)(6862004)(31696002)(98903001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 55782c39-384c-4f77-2050-08daf2ee8100
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 09:39:01.2691
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iMB3rXEu78pG8LBmIHS4jDVRaIiPhrtDa/I3UNm/tVSsJDkFEijEKrLZeJ3Ns1FjQadP39iWz2IN5Q3Bry4Uwg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7386

On 10.01.2023 06:59, Juergen Gross wrote:
> On 10.01.23 06:47, Juergen Gross wrote:
>> On 09.01.23 19:28, Michael Kelley (LINUX) wrote:
>>> I've come across a case with a VM running on Hyper-V that doesn't get
>>> MTRRs, but the PAT is functional.  (This is a Confidential VM using
>>> AMD's SEV-SNP encryption technology with the vTOM option.)  In this
>>> case, the changes in commit 72cbc8f04fe2 ("x86/PAT: Have pat_enabled()
>>> properly reflect state when running on Xen") apply.   pat_enabled() returns
>>> "true", but the MTRRs are not enabled.
>>>
>>> But with this commit, there's a problem.  Consider memremap() on a RAM
>>> region, called with MEMREMAP_WB plus MEMREMAP_DEC as the 3rd
>>> argument. Because of the request for a decrypted mapping,
>>> arch_memremap_can_ram_remap() returns false, and a new mapping
>>> must be created, which is appropriate.
>>>
>>> The following call stack results:
>>>
>>>    memremap()
>>>    arch_memremap_wb()
>>>    ioremap_cache()
>>>    __ioremap_caller()
>>>    memtype_reserve()  <--- pcm is _PAGE_CACHE_MODE_WB
>>>    pat_x_mtrr_type()  <-- only called after commit 72cbc8f04fe2
>>>
>>> pat_x_mtrr_type() returns _PAGE_CACHE_MODE_UC_MINUS because
>>> mtrr_type_lookup() fails.  As a result, memremap() erroneously creates the
>>> new mapping as uncached.   This uncached mapping is causing a significant
>>> performance problem in certain Hyper-V Confidential VM configurations.
>>>
>>> Any thoughts on resolving this?  Should memtype_reserve() be checking
>>> both pat_enabled() *and* whether MTRRs are enabled before calling
>>> pat_x_mtrr_type()?  Or does that defeat the purpose of commit
>>> 72cbc8f04fe2 in the Xen environment?
>>
>> I think pat_x_mtrr_type() should return _PAGE_CACHE_MODE_UC_MINUS only if
>> mtrr_type_lookup() is not failing and is returning a mode other than WB.

I agree.
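
For illustration, the suggested behavior could be modeled roughly like this (a simplified sketch only, with stand-in constants and a stubbed lookup, not the actual arch/x86/mm/pat code):

```c
/* Sketch of the suggested pat_x_mtrr_type() adjustment: only downgrade
 * a WB request to UC- when mtrr_type_lookup() succeeds AND reports a
 * type other than write-back.  A failed lookup (e.g. no MTRRs, as in
 * the Hyper-V SEV-SNP/vTOM case) leaves the mapping as WB. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum page_cache_mode {
    _PAGE_CACHE_MODE_WB,
    _PAGE_CACHE_MODE_UC_MINUS,
};

#define MTRR_TYPE_WRBACK   6
#define MTRR_TYPE_INVALID  0xFF   /* stand-in for "lookup failed" */

/* Stub standing in for mtrr_type_lookup(); returns MTRR_TYPE_INVALID
 * when MTRRs are disabled or the lookup fails. */
static uint8_t mtrr_type_lookup_stub(bool mtrr_enabled, uint8_t type)
{
    return mtrr_enabled ? type : MTRR_TYPE_INVALID;
}

/* Keep WB unless the MTRRs actively report a conflicting type. */
static enum page_cache_mode
pat_x_mtrr_type_sketch(bool mtrr_enabled, uint8_t mtrr_type)
{
    uint8_t t = mtrr_type_lookup_stub(mtrr_enabled, mtrr_type);

    if (t != MTRR_TYPE_INVALID && t != MTRR_TYPE_WRBACK)
        return _PAGE_CACHE_MODE_UC_MINUS;

    return _PAGE_CACHE_MODE_WB;
}
```

With this shape, a guest without MTRRs but with functional PAT would get the WB mapping it asked for, while systems whose MTRRs report a non-WB type still get the UC- downgrade.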

> Another idea would be to let the mtrr_type_lookup() stub in
> arch/x86/include/asm/mtrr.h return MTRR_TYPE_WRBACK, which would make it
> possible to simplify pud_set_huge() and pmd_set_huge() by removing the
> check for MTRR_TYPE_INVALID.

But that risks being misleading: when there are no MTRRs, there simply
is no default type (in the absence of inspecting other criteria).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 09:59:57 2023
Message-ID: <3c664437-7184-d4be-63e2-335942bb6a46@suse.com>
Date: Tue, 10 Jan 2023 10:59:43 +0100
Subject: Re: [PATCH v2 01/17] xen/arm: use NR_MEM_BANKS to override default
 NR_NODE_MEMBLKS
To: Wei Chen <wei.chen@arm.com>
Cc: nd@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-2-wei.chen@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110084930.1095203-2-wei.chen@arm.com>

On 10.01.2023 09:49, Wei Chen wrote:
> --- a/xen/arch/arm/include/asm/numa.h
> +++ b/xen/arch/arm/include/asm/numa.h
> @@ -3,9 +3,26 @@
>  
>  #include <xen/mm.h>
>  
> +#include <asm/setup.h>
> +
>  typedef u8 nodeid_t;
>  
> -#ifndef CONFIG_NUMA
> +#ifdef CONFIG_NUMA
> +
> +/*
> + * It is very likely that if you have more than 64 nodes, you may
> + * need a lot more than 2 regions per node. So, for Arm, we would
> + * just define NR_NODE_MEMBLKS as an alias to NR_MEM_BANKS.
> + * And in the future NR_MEM_BANKS will be bumped for new platforms,
> + * but for now leave NR_MEM_BANKS as it is on Arm. This avoids
> + * having different ways to define the value for NUMA vs non-NUMA.
> + *
> + * Further discussions can be found here:
> + * https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
> + */
> +#define NR_NODE_MEMBLKS NR_MEM_BANKS

But isn't NR_MEM_BANKS a system-wide setting, which doesn't really
make sense to use as a per-node one? IOW aren't you now allowing
NR_MEM_BANKS regions on each node, which taken together can be far
more than the NR_MEM_BANKS that you can deal with on the whole
system?
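The arithmetic behind this concern can be illustrated with a small sketch; the values of NR_MEM_BANKS and the node count here are made-up assumptions, not Xen's actual configuration:

```c
#include <assert.h>

/* Hypothetical illustration: NR_MEM_BANKS bounds the number of memory banks
 * on the whole system, while NR_NODE_MEMBLKS sizes per-node bookkeeping.
 * Aliasing the two lets each node claim the system-wide budget. */
#define NR_MEM_BANKS     128               /* assumed system-wide limit */
#define MAX_NUMNODES      64               /* assumed node count        */
#define NR_NODE_MEMBLKS  NR_MEM_BANKS      /* the patch's alias         */

/* Worst case the alias permits: every node populated to the per-node cap,
 * which together far exceeds what the system-wide limit can accommodate. */
static unsigned int worst_case_regions(void)
{
    return MAX_NUMNODES * NR_NODE_MEMBLKS;
}
```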

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 10:01:07 2023
Message-ID: <00df09fe-bd71-9204-18f2-6479c91dc4b0@suse.com>
Date: Tue, 10 Jan 2023 11:01:03 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Andrew Lutomirski <luto@kernel.org>,
 Dave Hansen <dave.hansen@linux.intel.com>,
 Peter Zijlstra <peterz@infradead.org>
References: <BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com>
 <03edcbc5-2dd7-1ddb-bafe-8412d8fc95aa@suse.com>
 <ba24157d-92fc-f472-9ef5-4eae3c63c12e@suse.com>
 <1653609a-42af-b7cd-9d30-fb2bd5721080@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: Problem with pat_enable() and commit 72cbc8f04fe2
In-Reply-To: <1653609a-42af-b7cd-9d30-fb2bd5721080@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 10.01.23 10:38, Jan Beulich wrote:
> On 10.01.2023 06:59, Juergen Gross wrote:
>> On 10.01.23 06:47, Juergen Gross wrote:
>>> On 09.01.23 19:28, Michael Kelley (LINUX) wrote:
>>>> I've come across a case with a VM running on Hyper-V that doesn't get
>>>> MTRRs, but the PAT is functional.  (This is a Confidential VM using
>>>> AMD's SEV-SNP encryption technology with the vTOM option.)  In this
>>>> case, the changes in commit 72cbc8f04fe2 ("x86/PAT: Have pat_enabled()
>>>> properly reflect state when running on Xen") apply.  pat_enabled() returns
>>>> "true", but the MTRRs are not enabled.
>>>>
>>>> But with this commit, there's a problem.  Consider memremap() on a RAM
>>>> region, called with MEMREMAP_WB plus MEMREMAP_DEC as the 3rd
>>>> argument.  Because of the request for a decrypted mapping,
>>>> arch_memremap_can_ram_remap() returns false, and a new mapping
>>>> must be created, which is appropriate.
>>>>
>>>> The following call stack results:
>>>>
>>>>     memremap()
>>>>     arch_memremap_wb()
>>>>     ioremap_cache()
>>>>     __ioremap_caller()
>>>>     memtype_reserve()   <--- pcm is _PAGE_CACHE_MODE_WB
>>>>     pat_x_mtrr_type()   <-- only called after commit 72cbc8f04fe2
>>>>
>>>> pat_x_mtrr_type() returns _PAGE_CACHE_MODE_UC_MINUS because
>>>> mtrr_type_lookup() fails.  As a result, memremap() erroneously creates the
>>>> new mapping as uncached.  This uncached mapping is causing a significant
>>>> performance problem in certain Hyper-V Confidential VM configurations.
>>>>
>>>> Any thoughts on resolving this?  Should memtype_reserve() be checking
>>>> both pat_enabled() *and* whether MTRRs are enabled before calling
>>>> pat_x_mtrr_type()?  Or does that defeat the purpose of commit
>>>> 72cbc8f04fe2 in the Xen environment?
>>>
>>> I think pat_x_mtrr_type() should return _PAGE_CACHE_MODE_UC_MINUS only if
>>> mtrr_type_lookup() is not failing and is returning a mode other than WB.
>
> I agree.
>
>> Another idea would be to let the mtrr_type_lookup() stub in
>> arch/x86/include/asm/mtrr.h return MTRR_TYPE_WRBACK, which would make it
>> possible to simplify pud_set_huge() and pmd_set_huge() by removing the
>> check for MTRR_TYPE_INVALID.
>
> But that risks being misleading: when there are no MTRRs, there
> simply is no default type (in the absence of inspecting other criteria).

I've sent a patch checking for MTRR_TYPE_INVALID in pat_x_mtrr_type(). This
seemed to be a less intrusive change.

The idea to modify the stub came up as a result of looking at mtrr_type_lookup()
use cases after writing my patch. All users now take an action if the returned
type is not WB and not INVALID. So it would be a modification tailored for
today's mtrr_type_lookup() users only.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 10:01:57 2023
Message-ID: <379a86b8-837c-9b95-7737-1446c1967666@suse.com>
Date: Tue, 10 Jan 2023 11:01:49 +0100
Subject: Re: [PATCH v2 17/17] docs: update numa command line to support Arm
To: Wei Chen <wei.chen@arm.com>
Cc: nd@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-18-wei.chen@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110084930.1095203-18-wei.chen@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: DBBPR09CA0031.eurprd09.prod.outlook.com
 (2603:10a6:10:d4::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 672bc308-14f8-40fd-bda3-08daf2f1b24b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 10:01:52.0569
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WvnTVH31MzmFEJ2zdc8g+YndeLbhJc39BGHbNSZ0AssLTuz8xrgA3tr2WzpWSlE3ziWG9vycEDwEa5ZXOAloYg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7403

On 10.01.2023 09:49, Wei Chen wrote:
> The current numa command in the documentation is x86-only. Remove
> the x86 arch limitation from the numa command in this patch.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

Of course not on its own, only on top of all the earlier patches.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 10:47:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 10:47:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474617.735886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFCAL-00050j-4A; Tue, 10 Jan 2023 10:47:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474617.735886; Tue, 10 Jan 2023 10:47:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFCAL-00050c-1c; Tue, 10 Jan 2023 10:47:37 +0000
Received: by outflank-mailman (input) for mailman id 474617;
 Tue, 10 Jan 2023 10:47:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Esb2=5H=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pFCAJ-00050W-Pu
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 10:47:35 +0000
Received: from mail-vs1-xe2e.google.com (mail-vs1-xe2e.google.com
 [2607:f8b0:4864:20::e2e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3048550d-90d4-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 11:47:34 +0100 (CET)
Received: by mail-vs1-xe2e.google.com with SMTP id t10so1219570vsr.3
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 02:47:34 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3048550d-90d4-11ed-91b6-6bf2151ebd3b
X-Received: by 2002:a05:6102:f8c:b0:3c9:8cc2:dd04 with SMTP id
 e12-20020a0561020f8c00b003c98cc2dd04mr7492001vsv.73.1673347653105; Tue, 10
 Jan 2023 02:47:33 -0800 (PST)
MIME-Version: 1.0
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <527727b2c9e26e6ef7714fe9a3fbe580caf1ae13.1673278109.git.oleksii.kurochko@gmail.com>
 <CAKmqyKNyRfAhyP-3uZwEf3OZEv5be4KNdGvNjUiQGu8w-vf_8g@mail.gmail.com> <CAKB00G3nVtcBppt2TJa-dFzz4TKqVT6B-1swjzkZwqsRkFxwsA@mail.gmail.com>
In-Reply-To: <CAKB00G3nVtcBppt2TJa-dFzz4TKqVT6B-1swjzkZwqsRkFxwsA@mail.gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 10 Jan 2023 20:47:06 +1000
Message-ID: <CAKmqyKMj30bdhb99YHJ5VaYaRbGiKMPa=YvLY_f8Wcggv-zv2w@mail.gmail.com>
Subject: Re: [PATCH v2 6/8] xen/riscv: introduce early_printk basic stuff
To: Bobby Eshleman <bobby.eshleman@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Connor Davis <connojdavis@gmail.com>, Gianluca Guida <gianluca@rivosinc.com>, 
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Oleksii Kurochko <oleksii.kurochko@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jan 10, 2023 at 5:29 PM Bobby Eshleman <bobby.eshleman@gmail.com> wrote:
>
>
>
>> On Mon, Jan 9, 2023 at 4:28 PM Alistair Francis <alistair23@gmail.com> wrote:
>>
>> On Tue, Jan 10, 2023 at 1:47 AM Oleksii Kurochko
>> <oleksii.kurochko@gmail.com> wrote:
>> >
>> > The patch introduces the basic early_printk functionality,
>> > which will be enough to print 'hello from C environment'.
>> > The early_printk() function was changed in comparison with the
>> > original, as common/ isn't being built yet, so there is no vscnprintf().
>> >
>> > Because printk() relies on a serial driver (like the ns16550 driver),
>> > and drivers require working virtual memory (ioremap()), there is no
>> > print functionality early in Xen boot.
>> >
>> > This commit adds an early printk implementation built on the putc SBI call.
>> >
>> > As sbi_console_putchar() is already planned for deprecation,
>> > it is used temporarily for now and will be removed or reworked
>> > once a real UART driver is ready.
>>
>> There was a discussion to add a new SBI putchar replacement. It
>> doesn't seem to be completed yet, but there might be an SBI
>> replacement for this in the future as well.
>>
>> Alistair
>
>
> Are you referring to the Debug Console Extension (EID #0x4442434E "DBCN")?
>
> https://lists.riscv.org/g/tech-prs/topic/96051183#84

That's the one!

Alistair

>
> Best,
> Bobby
>
>>
>> >
>> > Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
>> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> > ---
>> > Changes in V2:
>> >     - add license to early_printk.c
>> >     - add signed-off-by Bobby
>> >     - add RISCV_32 to Kconfig.debug to EARLY_PRINTK config
>> >     - update commit message
>> >     - order the files alphabetically in Makefile
>> > ---
>> >  xen/arch/riscv/Kconfig.debug              |  7 +++++
>> >  xen/arch/riscv/Makefile                   |  1 +
>> >  xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
>> >  xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
>> >  4 files changed, 53 insertions(+)
>> >  create mode 100644 xen/arch/riscv/early_printk.c
>> >  create mode 100644 xen/arch/riscv/include/asm/early_printk.h
>> >
>> > diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
>> > index e69de29bb2..6ba0bd1e5a 100644
>> > --- a/xen/arch/riscv/Kconfig.debug
>> > +++ b/xen/arch/riscv/Kconfig.debug
>> > @@ -0,0 +1,7 @@
>> > +config EARLY_PRINTK
>> > +    bool "Enable early printk config"
>> > +    default DEBUG
>> > +    depends on RISCV_64 || RISCV_32
>> > +    help
>> > +
>> > +      Enables early printk debug messages
>> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
>> > index fd916e1004..1a4f1a6015 100644
>> > --- a/xen/arch/riscv/Makefile
>> > +++ b/xen/arch/riscv/Makefile
>> > @@ -1,3 +1,4 @@
>> > +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>> >  obj-$(CONFIG_RISCV_64) += riscv64/
>> >  obj-y += sbi.o
>> >  obj-y += setup.o
>> > diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
>> > new file mode 100644
>> > index 0000000000..88da5169ed
>> > --- /dev/null
>> > +++ b/xen/arch/riscv/early_printk.c
>> > @@ -0,0 +1,33 @@
>> > +/* SPDX-License-Identifier: GPL-2.0 */
>> > +/*
>> > + * RISC-V early printk using SBI
>> > + *
>> > + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
>> > + */
>> > +#include <asm/sbi.h>
>> > +#include <asm/early_printk.h>
>> > +
>> > +/*
>> > + * TODO:
>> > + *   sbi_console_putchar is already planned for deprecation,
>> > + *   so it should be reworked to use the UART directly.
>> > + */
>> > +void early_puts(const char *s, size_t nr)
>> > +{
>> > +    while ( nr-- > 0 )
>> > +    {
>> > +        if (*s == '\n')
>> > +            sbi_console_putchar('\r');
>> > +        sbi_console_putchar(*s);
>> > +        s++;
>> > +    }
>> > +}
>> > +
>> > +void early_printk(const char *str)
>> > +{
>> > +    while (*str)
>> > +    {
>> > +        early_puts(str, 1);
>> > +        str++;
>> > +    }
>> > +}
>> > diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
>> > new file mode 100644
>> > index 0000000000..05106e160d
>> > --- /dev/null
>> > +++ b/xen/arch/riscv/include/asm/early_printk.h
>> > @@ -0,0 +1,12 @@
>> > +#ifndef __EARLY_PRINTK_H__
>> > +#define __EARLY_PRINTK_H__
>> > +
>> > +#include <xen/early_printk.h>
>> > +
>> > +#ifdef CONFIG_EARLY_PRINTK
>> > +void early_printk(const char *str);
>> > +#else
>> > +static inline void early_printk(const char *s) {};
>> > +#endif
>> > +
>> > +#endif /* __EARLY_PRINTK_H__ */
>> > --
>> > 2.38.1
>> >
>> >


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 11:55:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 11:55:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474623.735898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFDE9-00043f-6r; Tue, 10 Jan 2023 11:55:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474623.735898; Tue, 10 Jan 2023 11:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFDE9-00043Y-3a; Tue, 10 Jan 2023 11:55:37 +0000
Received: by outflank-mailman (input) for mailman id 474623;
 Tue, 10 Jan 2023 11:55:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFDE7-00043O-Ly; Tue, 10 Jan 2023 11:55:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFDE7-0005Aq-Hz; Tue, 10 Jan 2023 11:55:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFDE7-0005zw-3P; Tue, 10 Jan 2023 11:55:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFDE7-0002Xz-2v; Tue, 10 Jan 2023 11:55:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-arm64
Message-Id: <E1pFDE7-0002Xz-2v@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 11:55:35 +0000

branch xen-unstable
xenbranch xen-unstable
job build-arm64
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175697/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows one enclave to receive a notification in the ERESUME after the
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports to create enclave with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      also by some distros including Debian for 6 years even
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This is not needed ever since QEMU stopped detecting -liberty; this
      happened with the Meson switch but it is quite likely that the
      library was not really necessary years before.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching IDE_CFATA device to an IDE bus. Behaves like a
      CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting to a file with `-D`, but
      this requires enabling some logging item with `-d` as well to
      be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point : use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock(),
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off the logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file,
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
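The flush-instead-of-close behaviour can be sketched like this (`LogState` and `log_flags_changed` are hypothetical names, not the actual util/log.c code):

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for QEMU's logging state. */
typedef struct {
    FILE *file;
    int   log_flags;
} LogState;

/* When all log flags are turned off, flush the file but keep it open,
 * so that a later re-enable appends instead of truncating. */
static void log_flags_changed(LogState *s, int new_flags)
{
    s->log_flags = new_flags;
    if (new_flags == 0 && s->file != NULL) {
        fflush(s->file);   /* do not fclose(): keep append behaviour */
    }
}
```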
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      that use a pointer to it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
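The forward-declaration pattern used by this pair of commits can be illustrated in isolation; the struct bodies below are simplified stand-ins, not QEMU's real definitions:

```c
/* "qemu/typedefs.h" equivalent: declare the name without the body. */
typedef struct AccelState AccelState;

/* "hw/boards.h" equivalent: a pointer member only needs the forward
 * declaration, so the header no longer pulls in "qemu/accel.h". */
typedef struct MachineState {
    AccelState *accelerator;
} MachineState;

/* Only the .c file that dereferences AccelState (hw/core/machine.c
 * in the commit above) needs the full definition. */
struct AccelState {
    int id;   /* placeholder member for illustration */
};
```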


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-arm64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-arm64.xen-build --summary-out=tmp/175697.bisection-summary --basis-template=175623 --blessings=real,real-bisect,real-retry qemu-mainline build-arm64 xen-build
Searching for failure / basis pass:
 175681 fail [host=rochester0] / 175637 [host=laxton1] 175627 [host=rochester1] 175623 ok.
Failure / basis pass flights: 175681 / 175623
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 38525f6f73f906699f77a1af86c16b4eaad48e04
Basis pass d8d829b89dababf763ab33b8cdd852b2830db3cf 0ab12aa32462817f0a53fa6f6ce4baf664ef1713 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d8d829b89dababf763ab33b8cdd852b2830db3cf-33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 git://git.qemu.org/qemu.git#0ab12aa32462817f0a53fa6f6ce4baf664ef1713-aa96ab7c9df59c615ca82b49c9062819e0a1c287 git://xenbits.xen.org/osstest/seabios.git#645a64b4911d7cadf5749d7375544fc2384e70ba-645a64b4911d7cadf5749d7375544fc2384e70ba git://xenbits.xen.org/xen.git#2b21cbbb339fb14414f357a6683b1df74c36fda2-38525f6f73f906699f77a1af86c16b4eaad48e04
Loaded 24998 nodes in revision graph
Searching for test results:
 175619 pass irrelevant
 175623 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 0ab12aa32462817f0a53fa6f6ce4baf664ef1713 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175627 [host=rochester1]
 175636 [host=rochester1]
 175639 [host=laxton1]
 175637 [host=laxton1]
 175643 [host=rochester1]
 175647 fail d8d829b89dababf763ab33b8cdd852b2830db3cf d6271b657286de80260413684a1f2a63f44ea17b 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175654 fail irrelevant
 175664 fail irrelevant
 175676 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 0ab12aa32462817f0a53fa6f6ce4baf664ef1713 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175677 fail irrelevant
 175678 pass d8d829b89dababf763ab33b8cdd852b2830db3cf dd92cbb3665a47b9118cbd06a60ff0c75ad1f442 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175679 pass d8d829b89dababf763ab33b8cdd852b2830db3cf bf7a2ad8b6dfab4adb40db44022e4c424b56421e 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175672 fail 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 38525f6f73f906699f77a1af86c16b4eaad48e04
 175680 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 97f4effeb6083f3c15f48cf4eea0c16552a9330a 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175682 fail 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 38525f6f73f906699f77a1af86c16b4eaad48e04
 175685 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 46bda3e4de4f78f1d905107d7da34d97e1c3cb1d 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175687 pass d8d829b89dababf763ab33b8cdd852b2830db3cf dab30fbef3896bb652a09d46c37d3f55657cbcbb 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175688 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175681 fail 33a3408fbbf988aaa8ecc6e721cf83e3ae810e54 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 38525f6f73f906699f77a1af86c16b4eaad48e04
 175690 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175693 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175695 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175696 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175697 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Searching for interesting versions
 Result found: flight 175623 (pass), for basis pass
 Result found: flight 175672 (fail), for basis failure (at ancestor ~4)
 Repro found: flight 175676 (pass), for basis pass
 Repro found: flight 175681 (fail), for basis failure
 0 revisions at d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
No revisions left to test, checking graph state.
 Result found: flight 175688 (pass), for last pass
 Result found: flight 175690 (fail), for first failure
 Repro found: flight 175693 (pass), for last pass
 Repro found: flight 175695 (fail), for first failure
 Repro found: flight 175696 (pass), for last pass
 Repro found: flight 175697 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175697/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support for -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows one enclave to receive a notification in the ERESUME after the
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports creating enclaves with AEX-notify
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
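The enumeration above amounts to testing single CPUID register bits; a minimal bit-extraction sketch (`cpuid_eax_bit` is a hypothetical helper, and no real CPUID instruction is issued here):

```c
#include <stdbool.h>
#include <stdint.h>

/* CPUID.(EAX=0x12,ECX=0x1):EAX[10] - enclaves may be created with
 * AEX-notify; CPUID.(EAX=0x12,ECX=0x0):EAX[11] - EDECCSSA leaf. */
#define SGX_ATTR_AEX_NOTIFY_BIT 10
#define SGX_EDECCSSA_BIT        11

/* Test one bit of a raw EAX value returned by CPUID. */
static bool cpuid_eax_bit(uint32_t eax, unsigned bit)
{
    return (eax >> bit) & 1u;
}
```

On real hardware the EAX values would come from executing CPUID with the leaf/subleaf loaded; here they are just plain integers.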
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support for -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      also by some distros including Debian for 6 years even
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      when the script is rerun.  For preserve_env to work, the
      variables need to be empty, so move the default values to
      config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This has not been needed since QEMU stopped detecting -liberty; that
      happened with the Meson switch, but it is quite likely that the
      library had not really been necessary for years before that.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus.  It behaves
      like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting to a file with `-D`,
      but this also requires enabling some logging item with `-d` to be
      functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock(),
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off the logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file,
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      that use a pointer to it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-arm64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
175697: tolerable ALL FAIL

flight 175697 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/175697/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64                   6 xen-build               fail baseline untested


jobs:
 build-arm64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 12:20:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 12:20:44 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175684-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175684: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:guest-start/debian.repeat:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=49ff47269b71a762ca5de4595f6ec915043e05ce
X-Osstest-Versions-That:
    libvirt=ffd286ac6f25374d16f4eaa7ff64e30c77541b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 12:20:35 +0000

flight 175684 libvirt real [real]
flight 175698 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175684/
http://logs.test-lab.xenproject.org/osstest/logs/175698/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 175698-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175638
 test-arm64-arm64-libvirt-qcow2 17 guest-start/debian.repeat   fail like 175638
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175638
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175638
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              49ff47269b71a762ca5de4595f6ec915043e05ce
baseline version:
 libvirt              ffd286ac6f25374d16f4eaa7ff64e30c77541b41

Last test of basis   175638  2023-01-09 04:18:55 Z    1 days
Testing same since   175684  2023-01-10 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Jiang Jiacheng <jiangjiacheng@huawei.com>
  Ján Tomko <jtomko@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   ffd286ac6f..49ff47269b  49ff47269b71a762ca5de4595f6ec915043e05ce -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 12:32:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 12:32:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474652.735924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFDng-0000hj-0m; Tue, 10 Jan 2023 12:32:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474652.735924; Tue, 10 Jan 2023 12:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFDnf-0000hc-U3; Tue, 10 Jan 2023 12:32:19 +0000
Received: by outflank-mailman (input) for mailman id 474652;
 Tue, 10 Jan 2023 12:31:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e0qL=5H=epam.com=prvs=537488b4ae=dmytro_firsov@srs-se1.protection.inumbo.net>)
 id 1pFDmc-0000ge-Tw
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 12:31:15 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa3e1930-90e2-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 13:31:12 +0100 (CET)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 30ABkmv0026793;
 Tue, 10 Jan 2023 12:30:59 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2111.outbound.protection.outlook.com [104.47.17.111])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3n17err4t8-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 10 Jan 2023 12:30:58 +0000
Received: from AM9PR03MB7236.eurprd03.prod.outlook.com (2603:10a6:20b:260::7)
 by VI1PR03MB6254.eurprd03.prod.outlook.com (2603:10a6:800:13c::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 12:30:54 +0000
Received: from AM9PR03MB7236.eurprd03.prod.outlook.com
 ([fe80::68ef:4767:a0c1:1e3d]) by AM9PR03MB7236.eurprd03.prod.outlook.com
 ([fe80::68ef:4767:a0c1:1e3d%8]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 12:30:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa3e1930-90e2-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RGzgA29UvOkm1k4l4psEE/DOudvnTyBOgoEN2DTcBCuZlrQGPGQqLTcH0onnD/tS7jp9elG55GtWCrv064SPPcNg81DiJIxs3gVrtjtM4BA9cuBpuuJCXYx2zNHZ8VZHyfEnUduNE4abF8Oi5PUiSzUmVDaY7GUWBWAKFbm6OTdBA5lYWeG3leiLyEGNHQNVTY1v3xEBmC9bZNy1y8ofviUz1yGCUCxh55hdWuzbXlFCRKiNBfHhyoJIdxa1GypAAPs3Fz/d4H9t8Uf19N04VZuM4AweqPB9IJOfPdojFigAG2lh3AUYjRB5g6zngiROWUKKXdjtysu/rjIL8vPHyQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Lf7/dU6ySl6QfRKApcvqrJXLdvlk5ei2aKPHk45OpGY=;
 b=G0e/Mx3IOdM6akqM6KozJG52Dx4/FwLI5LWea/0zErQcst2HDqbAbhJKb7ARUhAWRTFmvdxFbyttmML1Lxj7y7FDLF0zfC1HcCrthlkbAZrSQNOWOm6Zgb3oKdUSjV6pu69KgXet92u+oQe046KFl/fc+Z4q5FxsX1taAyfh9FmOI3yVs98QSh/JX8FB/PsVFSqWcxvBwrOHnwEfG9ppqpj2KWdohSv+IJLqu7jV7s0WrfGiy7WdCMqXF94DApUuR+A38WynG4rxk60XzWRBtNPNShJoNVqak504afi+6K1Ga8ey/niMwoz1kkEnU1vGsUvfS8Fj9LuId6EgCaqaOQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Lf7/dU6ySl6QfRKApcvqrJXLdvlk5ei2aKPHk45OpGY=;
 b=nQQ74AdvvwPVJe7JKLRcwTUv33g8D+IElg+m1ulQwuyWkckzZ0cZzvBbexelZsG2wiMLDKG6pnlkCn7aIqG0FmghUqw4h2P+OxyGDNnG5j6gBYgyfWXQbzgZh/ImqosJ87Sa92ycfS2XRFTj4TcgKzwLJkS6+PjNSx4kSBA8j2q6tbGPsoNkZtQE8wi8WW9g/vKlupKt6EhMJhUTcoZnd79jHKvv66lu9oGq7YSSQNKwOTI/ac4R5tvpdMthH8KVsPLbhMDPkSQmwuMScuNhlgEe5QJS9IE05nvhJtO5qXqiJQrGSq3eC/KHvo5pWFIO9EfLQBa+oHwD+bp4N1/Y/Q==
From: Dmytro Firsov <Dmytro_Firsov@epam.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, Julien Grall <julien@xen.org>,
        Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "stefano.stabellini@amd.com" <stefano.stabellini@amd.com>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>,
        Bertrand Marquis <bertrand.marquis@arm.com>,
        "michal.orzel@amd.com" <michal.orzel@amd.com>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>
Subject: Re: [XEN v4] xen/arm: Probe the load/entry point address of an uImage
 correctly
Thread-Topic: [XEN v4] xen/arm: Probe the load/entry point address of an
 uImage correctly
Thread-Index: AQHZJA3n+pyXLJ9LxUm8V6NVh68Qyq6Xk1OA
Date: Tue, 10 Jan 2023 12:30:54 +0000
Message-ID: <ff1aa8c9-34a0-72a3-7a9e-c9a4fee93561@epam.com>
References: <20221221185300.5309-1-ayan.kumar.halder@amd.com>
 <e26768b7-99f7-f4e4-6ae5-094d17e1594a@xen.org>
 <20b15211-492b-713e-288c-14bd5e137ed7@gmail.com>
In-Reply-To: <20b15211-492b-713e-288c-14bd5e137ed7@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
msip_labels: 
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: AM9PR03MB7236:EE_|VI1PR03MB6254:EE_
x-ms-office365-filtering-correlation-id: 12fc838d-69c3-48a3-2a12-08daf30684a1
x-ld-processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <7A3D33046E4EBE488E4E03D2A1218C03@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM9PR03MB7236.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 12fc838d-69c3-48a3-2a12-08daf30684a1
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jan 2023 12:30:54.6541
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 79kGecerfhfBG0RfSJ0OwPqxhf+bGisESPibmJEMbxSFhfXAygPfKJbiZkd1cvLd2EqqN6pzE6phnFox1a4dNg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB6254
X-Proofpoint-ORIG-GUID: z3H8DiKX1ZkaGBd0ItTS3ZR0S0FCwxjP
X-Proofpoint-GUID: z3H8DiKX1ZkaGBd0ItTS3ZR0S0FCwxjP
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1
 definitions=2023-01-10_04,2023-01-10_03,2022-06-22_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 lowpriorityscore=0
 malwarescore=0 bulkscore=0 phishscore=0 spamscore=0 mlxlogscore=671
 suspectscore=0 adultscore=0 impostorscore=0 priorityscore=1501
 clxscore=1011 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2212070000 definitions=main-2301100076

On 09.01.23 11:36, Oleksandr Tyshchenko wrote:
>
>
> On 08.01.23 18:06, Julien Grall wrote:
>
> Hello Julien, Ayan, all
>
>> Hi Ayan,
>> ...
>> The changes look good to me (with a few of comments below). That
>> said, before acking the code, I would like an existing user of uImage
>> (maybe EPAM or Arm?) to confirm they are happy with the change.
>
> I have just re-checked current patch in our typical Xen based
> environment (no dom0less, Linux in Dom0) and didn't notice issues with
> it. But we use zImage for Dom0's kernel, so kernel_uimage_probe() is
> not called.
>
>
> I CCed Dmytro Firsov who is playing with Zephyr in Dom0 and *might*
> use uImage.

Hi Oleksandr, Julien, all

Current Xenutils/Zephyr Dom0 setup uses standard format for Zephyr on
arm64 which is zImage. Thus uImage changes will not affect me.



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 12:44:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 12:44:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474659.735935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFDyu-0002EL-2d; Tue, 10 Jan 2023 12:43:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474659.735935; Tue, 10 Jan 2023 12:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFDyt-0002EE-VN; Tue, 10 Jan 2023 12:43:55 +0000
Received: by outflank-mailman (input) for mailman id 474659;
 Tue, 10 Jan 2023 12:43:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFDys-0002E8-Ed
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 12:43:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFDyr-0006G4-RA; Tue, 10 Jan 2023 12:43:53 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.2.225]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFDyr-0007ny-Jr; Tue, 10 Jan 2023 12:43:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Tx8YHT2rVSoZMZQj6HL8AY94qQPEXLZ9E10ABuADxIY=; b=CLnkfRF4zuA3JULBhnH9cUKbnS
	Teuc0kc873AapYt249CbCrmkGwBa5YsT04PCud/4CX0f9MFnA5ZXWSTE81qt+5Y92jDDhNkNYOTwy
	r9Jwt8wd4tRjoX6iqrT3DQIabmS401VL4Wbmm0gZfFTmI3GvfIIt64hCxq2noiRN9FQc=;
Message-ID: <b5ea5c34-82ed-8845-616f-e702a432432c@xen.org>
Date: Tue, 10 Jan 2023 12:43:50 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
 <7990322c-639b-38d4-ff6c-221988532c33@xen.org>
 <eb2c4ad53e596e633f1d59de05db9fa9630f28ac.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <eb2c4ad53e596e633f1d59de05db9fa9630f28ac.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 10/01/2023 08:21, Oleksii wrote:
> Hi,

Hi Oleksii,

> On Mon, 2023-01-09 at 16:03 +0000, Julien Grall wrote:
>> Hi,
>>
>> On 09/01/2023 15:46, Oleksii Kurochko wrote:
>>> The patch introduce sbi_putchar() SBI call which is necessary
>>> to implement initial early_printk
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>> Changes in V2:
>>>       - add an explanatory comment about sbi_console_putchar()
>>> function.
>>>       - order the files alphabetically in Makefile
>>> ---
>>>    xen/arch/riscv/Makefile          |  1 +
>>>    xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>>>    xen/arch/riscv/sbi.c             | 44
>>> ++++++++++++++++++++++++++++++++
>>>    3 files changed, 79 insertions(+)
>>>    create mode 100644 xen/arch/riscv/include/asm/sbi.h
>>>    create mode 100644 xen/arch/riscv/sbi.c
>>>
>>> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
>>> index 5a67a3f493..fd916e1004 100644
>>> --- a/xen/arch/riscv/Makefile
>>> +++ b/xen/arch/riscv/Makefile
>>> @@ -1,4 +1,5 @@
>>>    obj-$(CONFIG_RISCV_64) += riscv64/
>>> +obj-y += sbi.o
>>>    obj-y += setup.o
>>>    
>>>    $(TARGET): $(TARGET)-syms
>>> diff --git a/xen/arch/riscv/include/asm/sbi.h
>>> b/xen/arch/riscv/include/asm/sbi.h
>>> new file mode 100644
>>> index 0000000000..34b53f8eaf
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/sbi.h
>>> @@ -0,0 +1,34 @@
>>> +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
>>> +/*
>>> + * Copyright (c) 2021 Vates SAS.
>>> + *
>>> + * Taken from xvisor, modified by Bobby Eshleman
>>> (bobby.eshleman@gmail.com).
>> Hmmm... I missed this one in v1. Is this mostly code from Bobby? If
>> so,
>> please update the commit message accordingly.
>>
>> FAOD, this comment applies for any future code you take from anyone.
>> I
>> will try to remember to mention it but please take pro-active action
>> to
>> check/mention where the code is coming from.
>>
> Sure, I will try to be more attentive next time.
> 
> Probably it is a little bit out of scope, but could you please share
> a link with me, or clarify in which cases I have to add a Copyright (c)
> line, whether I should add a new comment with "Modified by Oleksii ... ",
> or whether it is not necessary at all? Do I have to include any other
> information related to copyrights?

 From my experience, in Xen we tend to use the Signed-off-by tag rather
than an in-file copyright, so I don't know exactly what the rule would be.

Maybe George can help here?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 12:44:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 12:44:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474663.735946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFDzQ-0002jK-9Q; Tue, 10 Jan 2023 12:44:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474663.735946; Tue, 10 Jan 2023 12:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFDzQ-0002jD-6f; Tue, 10 Jan 2023 12:44:28 +0000
Received: by outflank-mailman (input) for mailman id 474663;
 Tue, 10 Jan 2023 12:44:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFDzP-0002hp-3x
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 12:44:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFDzP-0006HB-3C
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 12:44:27 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.2.225]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFDzO-0007xO-TP; Tue, 10 Jan 2023 12:44:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	References:Cc:To:From:Subject:MIME-Version:Date:Message-ID;
	bh=s89S+/0Tw5JkM6SbysGCe35/dE77wWYNqa5KmJ+xXaw=; b=CexUFMK0KKQqCvf4s09suUEkJg
	cE7CT1+Qw+92Wd9SvNOeRfGrECeKXvcUYPXJAHfKcStS9LrE008lN+9L2fwwmNUW5GDTtTSAVuFNu
	yDrGNMkOnPXydWd7e6w0UTuVshWezI75Yp+jAWSm4CLcPSlZO4ihiJFjpxQA20hWWTTE=;
Message-ID: <d4b40666-ee3d-759f-e36a-547aec7a5480@xen.org>
Date: Tue, 10 Jan 2023 12:44:24 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
From: Julien Grall <julien@xen.org>
To: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@cloud.com>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1673278109.git.oleksii.kurochko@gmail.com>
 <9b85a963db538e4735a9f99fc9090ad79508cb2c.1673278109.git.oleksii.kurochko@gmail.com>
 <7990322c-639b-38d4-ff6c-221988532c33@xen.org>
 <eb2c4ad53e596e633f1d59de05db9fa9630f28ac.camel@gmail.com>
 <b5ea5c34-82ed-8845-616f-e702a432432c@xen.org>
In-Reply-To: <b5ea5c34-82ed-8845-616f-e702a432432c@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

(+ George)

On 10/01/2023 12:43, Julien Grall wrote:
> 
> 
> On 10/01/2023 08:21, Oleksii wrote:
>> Hi,
> 
> Hi Oleksii,
> 
>> On Mon, 2023-01-09 at 16:03 +0000, Julien Grall wrote:
>>> Hi,
>>>
>>> On 09/01/2023 15:46, Oleksii Kurochko wrote:
>>>> The patch introduce sbi_putchar() SBI call which is necessary
>>>> to implement initial early_printk
>>>>
>>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>> ---
>>>> Changes in V2:
>>>>       - add an explanatory comment about sbi_console_putchar()
>>>> function.
>>>>       - order the files alphabetically in Makefile
>>>> ---
>>>>    xen/arch/riscv/Makefile          |  1 +
>>>>    xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>>>>    xen/arch/riscv/sbi.c             | 44
>>>> ++++++++++++++++++++++++++++++++
>>>>    3 files changed, 79 insertions(+)
>>>>    create mode 100644 xen/arch/riscv/include/asm/sbi.h
>>>>    create mode 100644 xen/arch/riscv/sbi.c
>>>>
>>>> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
>>>> index 5a67a3f493..fd916e1004 100644
>>>> --- a/xen/arch/riscv/Makefile
>>>> +++ b/xen/arch/riscv/Makefile
>>>> @@ -1,4 +1,5 @@
>>>>    obj-$(CONFIG_RISCV_64) += riscv64/
>>>> +obj-y += sbi.o
>>>>    obj-y += setup.o
>>>>    $(TARGET): $(TARGET)-syms
>>>> diff --git a/xen/arch/riscv/include/asm/sbi.h
>>>> b/xen/arch/riscv/include/asm/sbi.h
>>>> new file mode 100644
>>>> index 0000000000..34b53f8eaf
>>>> --- /dev/null
>>>> +++ b/xen/arch/riscv/include/asm/sbi.h
>>>> @@ -0,0 +1,34 @@
>>>> +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
>>>> +/*
>>>> + * Copyright (c) 2021 Vates SAS.
>>>> + *
>>>> + * Taken from xvisor, modified by Bobby Eshleman
>>>> (bobby.eshleman@gmail.com).
>>> Hmmm... I missed this one in v1. Is this mostly code from Bobby? If
>>> so,
>>> please update the commit message accordingly.
>>>
>>> FAOD, this comment applies to any future code you take from anyone.
>>> I will try to remember to mention it, but please take proactive
>>> action to check/mention where the code is coming from.
>>>
>> Sure, I will try to be more attentive next time.
>>
>> It is probably a little bit out of scope, but could you please share
>> a link or clarify in which cases I have to add a Copyright (c) notice,
>> whether I should add a new comment with "Modified by Oleksii ...", or
>> whether it is not necessary at all? Do I have to put some other
>> information related to copyrights?
> 
>  From my experience, in Xen we tend to use the Signed-off-by tag rather
> than an in-file copyright. So I don't know exactly what the rule would be.
> 
> Maybe George can help here?
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 12:47:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 12:47:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474672.735957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFE22-0003Qx-QT; Tue, 10 Jan 2023 12:47:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474672.735957; Tue, 10 Jan 2023 12:47:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFE22-0003Qq-Nj; Tue, 10 Jan 2023 12:47:10 +0000
Received: by outflank-mailman (input) for mailman id 474672;
 Tue, 10 Jan 2023 12:47:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFE21-0003Qg-HL
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 12:47:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFE21-0006Zp-6z; Tue, 10 Jan 2023 12:47:09 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.2.225]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFE21-00080n-1V; Tue, 10 Jan 2023 12:47:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=E7vvzdqWpXhtaNOOWAA3uWfbjk16x5NUFhROFipET3U=; b=F51I1WRIoYZ5jjV2lJfoZTQR/J
	H5yIaDVB41ItV6vKawADszOvw+tH94tqGcLBRCqLdb4EvcWvYsRSlszHAHv382d40M4K/tRuqvsoN
	Us4QgCXVNay/zDGKl9mw9VQM+gl7BHPNkkM59pDrDqEjEcZU2TZkvgD5/y4qD6FoJSYA=;
Message-ID: <23bcfd2b-7482-c181-8520-3d4945386b8d@xen.org>
Date: Tue, 10 Jan 2023 12:47:07 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host
 address not provided
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-7-Penny.Zheng@arm.com>
 <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
 <AM0PR08MB453041150588948050F718D4F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <6db41bd2-ab71-422a-4235-a9209e984915@xen.org>
 <AM0PR08MB4530048C87F24524BDE2DCF8F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <8ae9e898-55ba-7fba-6ccc-883bd8b3e7ee@xen.org>
 <AM0PR08MB4530F966B099071ED9428109F7FF9@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB4530F966B099071ED9428109F7FF9@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 10/01/2023 03:38, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, January 10, 2023 2:23 AM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
>> when host address not provided
>>
>> Hi Penny,
> 
> Hi Julien,
> 
>>
>> On 09/01/2023 11:58, Penny Zheng wrote:
>>>> -----Original Message-----
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: Monday, January 9, 2023 6:58 PM
>>>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-
>> devel@lists.xenproject.org
>>>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>>>> <sstabellini@kernel.org>; Bertrand Marquis
>>>> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
>>>> <Volodymyr_Babchuk@epam.com>
>>>> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
>>>> when host address not provided
>>>>
>>>>
>>>>
>>>> On 09/01/2023 07:49, Penny Zheng wrote:
>>>>> Hi Julien
>>>>
>>>> Hi Penny,
>>>>
>>>>> Happy new year~~~~
>>>>
>>>> Happy new year too!
>>>>
>>>>>> -----Original Message-----
>>>>>> From: Julien Grall <julien@xen.org>
>>>>>> Sent: Sunday, January 8, 2023 8:53 PM
>>>>>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-
>>>> devel@lists.xenproject.org
>>>>>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>>>>>> <sstabellini@kernel.org>; Bertrand Marquis
>>>>>> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
>>>>>> <Volodymyr_Babchuk@epam.com>
>>>>>> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to
>>>>>> owner when host address not provided
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>
>>>>> A few concerns explain why I didn't choose "struct meminfo" over
>>>>> the two pointers "struct membank*" and "struct meminfo*".
>>>>> 1) Memory usage is the main reason.
>>>>> If we use "struct meminfo" over the current "struct membank*" and
>>>>> "struct meminfo*", "struct shm_meminfo" will become an array of 256
>>>>> "struct shm_membank", with "struct shm_membank" also being a
>>>>> 256-item array; that is 256 * 256, too big for a structure, and if
>>>>> I remember correctly it leads to a "more than PAGE_SIZE" compile
>>>>> error.
>>>>
>>>> I am not aware of any place where we would restrict the size of kinfo
>>>> in upstream. Can you give me a pointer?
>>>>
>>>
>>> If I remember correctly, my first version of "struct shm_meminfo" was
>>> this "big" (256 * 256) structure, and it led to the whole xen binary
>>> being bigger than 2MB. ;\
>>
>> Ah so the problem is because shm_mem is used in bootinfo. Then I think we
>> should create a distinct structure when dealing with domain information.
>>
> 
> Yes, if I use the latter "struct shm_info", keeping the shm memory info out of the bootinfo,
> I think we could avoid the "bigger than 2MB" error.
> 
> Hmm, out of curiosity: is the way to create a "distinct" structure something like creating
> another section for these distinct structures in the lds, just like the existing .dtb section?

No. I meant defining a new structure (i.e. struct {}) that would be used 
in kernel_info. So you don't grow the one used by bootinfo.

>   
>>>
>>>>> FWIW, reworking meminfo and using a different structure both lead
>>>>> to sizing down the array, and hmmm, I don't know which size is
>>>>> suitable. That's why I prefer pointers and dynamic allocation.
>>>>
>>>> I would expect that in most cases, you will need only one bank when
>>>> the host address is not provided. So it seems a bit odd to me to
>>>> impose a "large" allocation for them.
>>>>
>>>
>>> Only if the user is not defining the size as something like (2^a +
>>> 2^b + 2^c + ...). ;\ So maybe 8 or 16 is enough?
>>> struct new_meminfo {
>>
>> "new" is a bit strange; the name would want to be changed. Or, maybe better,
>> the structure could be defined within the next structure and anonymized.
>>
>>>       unsigned int nr_banks;
>>>       struct membank bank[8];
>>> };
>>>
>>> Correct me if I'm wrong:
>>> The "struct shm_membank" you are suggesting is looking like this, right?
>>> struct shm_membank {
>>>       char shm_id[MAX_SHM_ID_LENGTH];
>>>       unsigned int nr_shm_borrowers;
>>>       struct new_meminfo shm_banks;
>>>       unsigned long total_size;
>>> };
>>
>> AFAIU, shm_membank would still be used to get the information from the
>> host device-tree. If so, then I am afraid this is not an option for me because
>> it would make the code that reserves memory more complex.
>>
>> Instead, we should create a separate structure that will only be used for
>> domain shared memory information.
>>
> 
> Ah, so you are suggesting we should extract the domain shared memory information only
> when dealing with the information from the host device-tree, something like this:
> struct shm_info {
>        char shm_id[MAX_SHM_ID_LENGTH];
>        unsigned int nr_shm_borrowers;
> }

I am not entirely sure what you are suggesting. So I will wait for the 
code to understand.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 13:09:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 13:09:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474678.735968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFENX-0005wy-Jm; Tue, 10 Jan 2023 13:09:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474678.735968; Tue, 10 Jan 2023 13:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFENX-0005wr-Ge; Tue, 10 Jan 2023 13:09:23 +0000
Received: by outflank-mailman (input) for mailman id 474678;
 Tue, 10 Jan 2023 13:09:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tiyo=5H=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pFENW-0005wk-BM
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 13:09:22 +0000
Received: from sonic316-54.consmr.mail.gq1.yahoo.com
 (sonic316-54.consmr.mail.gq1.yahoo.com [98.137.69.30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc845035-90e7-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 14:09:18 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic316.consmr.mail.gq1.yahoo.com with HTTP; Tue, 10 Jan 2023 13:09:16 +0000
Received: by hermes--production-bf1-5458f64d4-x4bxm (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID de83800b43f3da3b8dbb82366d9f0a6c; 
 Tue, 10 Jan 2023 13:09:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc845035-90e7-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673356156; bh=xPxK0N/c7OIZ3S47koGswtv5h8DdNmsgPJE9fA0o+04=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=jGCO6TUPD0nJtcyssryZVuYZtFVJcyPjBUjNQT2g7xLC4oLXqjMybRn4qd/7D37CoVgs0Dhv1euZ9rIocRF5JlwC54GWuYDgUCYnIjzM176wJC+W1dfuZoBiJyZWIkk0aVuTamKTLN1rXbNwKQAPYTyuPdwBBw73u/ZuQ3i36B43ggLVFD6cvrbGjwxbtkQ1GKwQXl7DXHzMXmU/PhfE0GygtLnXFNug6aW5rsa6d5FSLbN5z431GJKyle5qRwDDBEdt59yvGyrR21qkmdbjKhO7ynWowRAfWYip7n3gf/c4n7irg2qqt4Tpnf3mh8ckBX0z1JfkDi1rkQnbVpoUjg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673356156; bh=xZ1//y5QUMtnD0uQm4jIk+IlVK2ENED2z9z6FebXnH9=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=r9Vbs1RUBsIabd6vbS/4QDcb0iSI/TLZ5RvUocmMd3Btbu/t6nY6iyjKpxs27H4riOjC2kaxZuU7lqa6EUmtlCwhnDhgFZbvNwtfh1/ohzamU/xjU1y7z/94/9VGO1BNq9u6LeG/x7mUb28XfenQIljEyQdFIC737lK4aN2kiWYhXFN1ef32LziyfPC9q9K7+eHFgJebRprPWjkRP9Xtsmbqnyk2oGZGouKGVkBC/uYBwgTl47PuqAt8PQkChp4jIFSQOJ3abl1zX6TpIwNagMCQC5ZGp/t+a5KK5LewiskqphRQ8HAKdpI0p3znMDtWG67IliEZ601bNgsoU8/oMA==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <8d5a0277-cfb0-169f-671a-0437118f7afb@aol.com>
Date: Tue, 10 Jan 2023 08:09:10 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230110030331-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.20982 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 13894

On 1/10/2023 3:16 AM, Michael S. Tsirkin wrote:
> On Tue, Jan 10, 2023 at 02:08:34AM -0500, Chuck Zmudzinski wrote:
> > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > as noted in docs/igd-assign.txt in the Qemu source code.
> > 
> > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > a different slot. This problem often prevents the guest from booting.
> > 
> > The only available workaround is not good: Configure Xen HVM guests to use
> > the old and no longer maintained Qemu traditional device model available
> > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > 
> > To implement this feature in the Qemu upstream device model for Xen HVM
> > guests, introduce the following new functions, types, and macros:
> > 
> > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > * typedef XenPTQdevRealize function pointer
> > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > 
> > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > the xl toolstack with the gfx_passthru option enabled, which sets the
> > igd-passthru=on option to Qemu for the Xen HVM machine type.
> > 
> > The new xen_igd_reserve_slot function also needs to be implemented in
> > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > in which case it does nothing.
> > 
> > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > 
> > Move the call to xen_host_pci_device_get, and the associated error
> > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > initialize the device class and vendor values which enables the checks for
> > the Intel IGD to succeed. The verification that the host device is an
> > Intel IGD to be passed through is done by checking the domain, bus, slot,
> > and function values as well as by checking that gfx_passthru is enabled,
> > the device class is VGA, and the device vendor is Intel.
> > 
> > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> > ---
> > Notes that might be helpful to reviewers of patched code in hw/xen:
> > 
> > The new functions and types are based on recommendations from Qemu docs:
> > https://qemu.readthedocs.io/en/latest/devel/qom.html
> > 
> > Notes that might be helpful to reviewers of patched code in hw/i386:
> > 
> > The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> > not affect builds that do not have CONFIG_XEN defined.
> > 
> > xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> > existing function that is only true when Qemu is built with
> > xen-pci-passthrough enabled and the administrator has configured the Xen
> > HVM guest with Qemu's igd-passthru=on option.
> > 
> > v2: Remove From: <email address> tag at top of commit message
> > 
> > v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> > 
> >     if (is_igd_vga_passthrough(&s->real_device) &&
> >         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> > 
> >     is changed to
> > 
> >     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
> >         && (s->hostaddr.function == 0)) {
> > 
> >     I hoped that I could use the test in v2, since it matches the
> >     other tests for the Intel IGD in Qemu and Xen, but those tests
> >     do not work because the necessary data structures are not set with
> >     their values yet. So instead use the test that the administrator
> >     has enabled gfx_passthru and the device address on the host is
> >     02.0. This test does detect the Intel IGD correctly.
> > 
> > v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
> >     email address to match the address used by the same author in commits
> >     be9c61da and c0e86b76
> >     
> >     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> > 
> > v5: The patch of xen_pt.c was re-worked to allow a more consistent test
> >     for the Intel IGD that uses the same criteria as in other places.
> >     This involved moving the call to xen_host_pci_device_get from
> >     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
> >     Intel IGD in xen_igd_clear_slot:
> >     
> >     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
> >         && (s->hostaddr.function == 0)) {
> > 
> >     is changed to
> > 
> >     if (is_igd_vga_passthrough(&s->real_device) &&
> >         s->real_device.domain == 0 && s->real_device.bus == 0 &&
> >         s->real_device.dev == 2 && s->real_device.func == 0 &&
> >         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> > 
> >     Added an explanation for the move of xen_host_pci_device_get from
> >     xen_pt_realize to xen_igd_clear_slot to the commit message.
> > 
> >     Rebase.
> > 
> > v6: Fix logging by removing these lines from the move from xen_pt_realize
> >     to xen_igd_clear_slot that was done in v5:
> > 
> >     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
> >                " to devfn 0x%x\n",
> >                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
> >                s->dev.devfn);
> > 
> >     This log needs to be in xen_pt_realize because s->dev.devfn is not
> >     set yet in xen_igd_clear_slot.
> > 
> > v7: The v7 that was posted to the mailing list was incorrect. v8 is what
> >     v7 was intended to be.
> > 
> > v8: Inhibit out of context log message and needless processing by
> >     adding 2 lines at the top of the new xen_igd_clear_slot function:
> > 
> >     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
> >         return;
> > 
> >     Rebase. This removed an unnecessary header file from xen_pt.h 
> > 
> >  hw/i386/pc_piix.c    |  3 +++
> >  hw/xen/xen_pt.c      | 49 ++++++++++++++++++++++++++++++++++++--------
> >  hw/xen/xen_pt.h      | 16 +++++++++++++++
> >  hw/xen/xen_pt_stub.c |  4 ++++
> >  4 files changed, 63 insertions(+), 9 deletions(-)
> > 
> > diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> > index b48047f50c..bc5efa4f59 100644
> > --- a/hw/i386/pc_piix.c
> > +++ b/hw/i386/pc_piix.c
> > @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
> >      }
> >  
> >      pc_xen_hvm_init_pci(machine);
> > +    if (xen_igd_gfx_pt_enabled()) {
> > +        xen_igd_reserve_slot(pcms->bus);
> > +    }
> >      pci_create_simple(pcms->bus, -1, "xen-platform");
> >  }
> >  #endif
>
> I would even maybe go further and move the whole logic into
> xen_igd_reserve_slot. And I would even just name it
> xen_hvm_init_reserved_slots without worrying about the what
> or why at the pc level.  At this point it will be up to Xen maintainers.

I will try to do that for v9. That would reduce, rather than increase, the
technical debt. I actually wanted to move all the Xen-specific stuff in
pc_piix.c to xen-hvm.c, which would reduce the technical debt that
has accumulated over the years. I just couldn't see how to do it easily;
it looked like I would need to do violence to pc_init1. But, I suppose,
pc_init1 is the kind of function that has accumulated lots of technical
debt and needs some violence done to it! I will give it a try, but it
might take me a while. I think if it were easy, someone would have
done it by now.

Thanks,

Chuck

>
> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> > index 0ec7e52183..eff38155ef 100644
> > --- a/hw/xen/xen_pt.c
> > +++ b/hw/xen/xen_pt.c
> > @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
> >                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
> >                 s->dev.devfn);
> >  
> > -    xen_host_pci_device_get(&s->real_device,
> > -                            s->hostaddr.domain, s->hostaddr.bus,
> > -                            s->hostaddr.slot, s->hostaddr.function,
> > -                            errp);
> > -    if (*errp) {
> > -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> > -        return;
> > -    }
> > -
> >      s->is_virtfn = s->real_device.is_virtfn;
> >      if (s->is_virtfn) {
> >          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> > @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
> >      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
> >  }
> >  
> > +void xen_igd_reserve_slot(PCIBus *pci_bus)
> > +{
> > +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> > +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> > +}
> > +
> > +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> > +{
> > +    ERRP_GUARD();
> > +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> > +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> > +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> > +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> > +
> > +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
> > +        return;
> > +
> > +    xen_host_pci_device_get(&s->real_device,
> > +                            s->hostaddr.domain, s->hostaddr.bus,
> > +                            s->hostaddr.slot, s->hostaddr.function,
> > +                            errp);
> > +    if (*errp) {
> > +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> > +        return;
> > +    }
> > +
> > +    if (is_igd_vga_passthrough(&s->real_device) &&
> > +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
> > +        s->real_device.dev == 2 && s->real_device.func == 0 &&
> > +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>
> how about macros for these?
>
> #define XEN_PCI_IGD_DOMAIN 0
> #define XEN_PCI_IGD_BUS 0
> #define XEN_PCI_IGD_DEV 2
> #define XEN_PCI_IGD_FN 0
>
> > +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
>
> If you are going to do this, you should set it back
> either after pci_qdev_realize or in unrealize,
> for symmetry.
>
> > +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> > +    }
>
>
> > +    xpdc->pci_qdev_realize(qdev, errp);
> > +}
> > +
>
>
>
> >  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
> >  {
> >      DeviceClass *dc = DEVICE_CLASS(klass);
> >      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> >  
> > +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> > +    xpdc->pci_qdev_realize = dc->realize;
> > +    dc->realize = xen_igd_clear_slot;
> >      k->realize = xen_pt_realize;
> >      k->exit = xen_pt_unregister_device;
> >      k->config_read = xen_pt_pci_read_config;
> > @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
> >      .instance_size = sizeof(XenPCIPassthroughState),
> >      .instance_finalize = xen_pci_passthrough_finalize,
> >      .class_init = xen_pci_passthrough_class_init,
> > +    .class_size = sizeof(XenPTDeviceClass),
> >      .instance_init = xen_pci_passthrough_instance_init,
> >      .interfaces = (InterfaceInfo[]) {
> >          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> > diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> > index cf10fc7bbf..8c25932b4b 100644
> > --- a/hw/xen/xen_pt.h
> > +++ b/hw/xen/xen_pt.h
> > @@ -2,6 +2,7 @@
> >  #define XEN_PT_H
> >  
> >  #include "hw/xen/xen_common.h"
> > +#include "hw/pci/pci_bus.h"
> >  #include "xen-host-pci-device.h"
> >  #include "qom/object.h"
> >  
> > @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
> >  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
> >  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
> >  
> > +#define XEN_PT_DEVICE_CLASS(klass) \
> > +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> > +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> > +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> > +
> > +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> > +
> > +typedef struct XenPTDeviceClass {
> > +    PCIDeviceClass parent_class;
> > +    XenPTQdevRealize pci_qdev_realize;
> > +} XenPTDeviceClass;
> > +
> >  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> > +void xen_igd_reserve_slot(PCIBus *pci_bus);
> >  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
> >  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
> >                                             XenHostPCIDevice *dev);
> > @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
> >  
> >  #define XEN_PCI_INTEL_OPREGION 0xfc
> >  
> > +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
> > +
>
> I think you want to calculate this based on dev fn:
>
> #define XEN_PCI_IGD_SLOT_MASK \
> 	(0x1 << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
>
>
> >  typedef enum {
> >      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
> >      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> > diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> > index 2d8cac8d54..5c108446a8 100644
> > --- a/hw/xen/xen_pt_stub.c
> > +++ b/hw/xen/xen_pt_stub.c
> > @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
> >          error_setg(errp, "Xen PCI passthrough support not built in");
> >      }
> >  }
> > +
> > +void xen_igd_reserve_slot(PCIBus *pci_bus)
> > +{
> > +}
> > -- 
> > 2.39.0
>



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 13:33:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 13:33:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474709.736002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFEkv-0001nb-2Q; Tue, 10 Jan 2023 13:33:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474709.736002; Tue, 10 Jan 2023 13:33:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFEku-0001nU-VF; Tue, 10 Jan 2023 13:33:32 +0000
Received: by outflank-mailman (input) for mailman id 474709;
 Tue, 10 Jan 2023 13:33:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m4Du=5H=gmail.com=tushar.goel.dav@srs-se1.protection.inumbo.net>)
 id 1pFEkt-0001nO-As
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 13:33:31 +0000
Received: from mail-io1-xd35.google.com (mail-io1-xd35.google.com
 [2607:f8b0:4864:20::d35])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5e910951-90eb-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 14:33:30 +0100 (CET)
Received: by mail-io1-xd35.google.com with SMTP id p9so6044386iod.13
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 05:33:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e910951-90eb-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=V3B/LM8ZUGd4RO9vFOApASAS0OMkoE1zSbESnKwfo80=;
        b=TCfM9fZvFXL8U2n2TDqCunyJLRqPI+LzBxBx7rN4GmAw99Clh+0jHGKDPKg/ATX4Mm
         vYh1nBAXKRvPYKRfZTzWhxu1DbxogJToYli2+DAABy2AMsdKdQv+XiuFLrX8oqIQvi6k
         OayzLCeP+DnUIfT+ihn9/ItOu7YbAUoryvsvz39qjyYl397Y2T3aV08VuYWvCHL04IKH
         pC1TPaTpGnPIFI0+vGGSlHweYo7Qi9o2MNKz+d2l4perteGjlOOWXVanXn7pGMG1nq+J
         HM+uf9evLOroNbpdaeEKW3tfDYpICieOSLWAS5nzrSszenoKgCoUCVzBGUI3RjgH2gQL
         dCKA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=V3B/LM8ZUGd4RO9vFOApASAS0OMkoE1zSbESnKwfo80=;
        b=XCQvfBPw8dZ3bDOz6OF9+2p2l474nfCsfYhHrpSwrDzje8363uWJLlHVEVAuP58Krr
         rD/Oie9dwETmxLzvuENsv3W+B8BFXXETu5xmrvX1q2UCR9PATp06uXc2H6hF5t2P+hR9
         gwZWT0KZ35tmXRFgek4kEnK+JDlpAiXGqE+knG7D/DfvmcB9CopolhU/Ia9MNRXO5ckA
         UFlSHGeTbKGHBK9RhAPgaMnqu6W/Cx32POZZj7NFmRL/f3W535Q/brhCFxPpZ4ZmB8aA
         GmrVBdKVqn/B8NpLK9EBTGPXTVyJdKRQC3Heq4HMlo7biq4Gm27DjqRzyTskzJujitKq
         VgBg==
X-Gm-Message-State: AFqh2kqCd28Q0Mwzap3zFDWHkqC/w6X2SsPGPRp/MGMLhvzJ6hTJm2G2
	vcViRPQIF00XbmoMsEMGiY4EA1SA+tMa1wlQMbCabelWO64kRZnC
X-Google-Smtp-Source: AMrXdXvqZlt8jWkcNV1EmfKGs/OFDLp8qvjNoyUc8wXHMbDIhYCX91ej8+/jLFnzebcEoszfKJWR7VIyX2vNomoGDT8=
X-Received: by 2002:a02:ce97:0:b0:38c:886a:2140 with SMTP id
 y23-20020a02ce97000000b0038c886a2140mr7253968jaq.224.1673357608791; Tue, 10
 Jan 2023 05:33:28 -0800 (PST)
MIME-Version: 1.0
From: Tushar Goel <tushar.goel.dav@gmail.com>
Date: Tue, 10 Jan 2023 19:03:17 +0530
Message-ID: <CAFD1rPdT5Tod+qdit50EWBN6WyRuK2ybb2G2HmOAayAV7uyBuA@mail.gmail.com>
Subject: Usage of Xen Security Data in VulnerableCode
To: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hey,

We would like to integrate the Xen security data[1][2]
into VulnerableCode[3], which is a FOSS database of FOSS vulnerability data.
We were not able to determine under which license this security data is published.
We would be grateful to have your acknowledgement of the
usage of the Xen security data in VulnerableCode, and
to have some kind of licensing declaration from your side.

[1] - https://xenbits.xen.org/xsa/xsa.json
[2] - https://github.com/nexB/vulnerablecode/pull/1044
[3] - https://github.com/nexB/vulnerablecode

Regards,


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 13:46:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 13:46:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474716.736013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFEwt-0003O4-A5; Tue, 10 Jan 2023 13:45:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474716.736013; Tue, 10 Jan 2023 13:45:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFEwt-0003Nx-6L; Tue, 10 Jan 2023 13:45:55 +0000
Received: by outflank-mailman (input) for mailman id 474716;
 Tue, 10 Jan 2023 13:45:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dQxz=5H=citrix.com=prvs=367c7493a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFEwr-0003Nr-Ug
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 13:45:54 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 173c59d0-90ed-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 14:45:50 +0100 (CET)
Received: from mail-sn1nam02lp2043.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jan 2023 08:45:48 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5250.namprd03.prod.outlook.com (2603:10b6:a03:220::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 13:45:43 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 13:45:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 173c59d0-90ed-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673358351;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=XWb0VbvPm8jSmoBaGDxroV2OdrW4JuVkM0w5nXMnJO4=;
  b=X53CPFZKZmZMUsjzt1LZAHaN85Ric6YQ7NLLDg7U5uWowZslwbmc+ZiY
   PZKK0hcUavpVliurE0Qmcwc7hrMtpJNVZd31jibRDy5gwXWQZnkjZJZPT
   0nFFzt/klyGlKHox/DaXcgeU2rZpnsO9njAWfoy1Gqcy+MSeF7ie+sxdV
   Y=;
X-IronPort-RemoteIP: 104.47.57.43
X-IronPort-MID: 92348887
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:gUcC2Kvp9FjimHzRsW9w23dvfOfnVGdfMUV32f8akzHdYApBsoF/q
 tZmKWvUbKuMYTH8fo91ao6zpExTuMLQzd43TlBq+Cw3FXtH+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg0HVU/IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj5lv0gnRkPaoQ5AaHzyFOZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwBCFQSjWl2/2N4J2rSOByiNQDdpioM9ZK0p1g5Wmx4fcOZ7nmG/+P3vkBmTA6i4ZJAOrUY
 NcfZXx3dhPcbhZTO1ARTpUjgOOvgXq5eDpdwL6XjfNvvy6Pk0osj/6xa7I5efTTLSlRtm+eq
 njL4CLSBRYCOcbE4TGE7mitlqnEmiaTtIc6Refjqq460AL7Kmo7GFood0OcntKAkFO+UtdxF
 EBP5Q8Usv1nnKCsZpynN/Gim1aYowUcUsAWHOo37EeBw7T87AOQB2xCRTlEAPQ2uclzSTE02
 1uhm9LyGScpoLCTUWia9LqfsXW1Iyd9BXQPbjIeTBcUy8nupsc0lB2nczp4OKu8j9mwAjepx
 TmP9HI6n+9L0ZVN0Lin91fahT7qvoLOUgM++gTQWCSi8x99Y4mmIYev7DA38Mp9EWpQdXHZ1
 FBspiRUxLlm4U2l/MBVfNgwIQ==
IronPort-HdrOrdr: A9a23:rAjPc6iW7iWs32AbL5WfZVr7FXBQXh4ji2hC6mlwRA09TyX5ra
 2TdZUgpHrJYVMqMk3I9uruBEDtex3hHP1OkOss1NWZPDUO0VHARO1fBOPZqAEIcBeOldK1u5
 0AT0B/YueAd2STj6zBkXSF+wBL+qj6zEiq792usEuEVWtRGsVdB58SMHfiLqVxLjM2YqYRJd
 6nyedsgSGvQngTZtTTPAh/YwCSz+e78q4PeHQ9dmca1DU=
X-IronPort-AV: E=Sophos;i="5.96,315,1665460800"; 
   d="scan'208";a="92348887"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ArmuJbaWL5dF4AtiXAj/CyftoyrRfkeV9iVcuT1d9GFryF4AdBLyUxypgoUxmQbSJtdaTjhZQT2y029kZWSKirksNfaN95BQ2C7YOrWpWLPvBtaeI+tL6RnU5O9Blxy6fyEUXSmTa4Ootq5kD4g074t4cEv0EEIrOzMLWe0LRRf4NbEg0dOKtZZ7lmZ5TiGkwTTCU7iHbOWI3UjnQOyYJwLaaAAMHxTbl9S+bIaZMOybRv+avYU00fZvz6x6p2ULPke61FVA/42E3kfpnVzflag5Vg+ZaC7OwBmd0ox3rIdPuUhD7dppYu8qVBZ4ySlA4nB9ygxbW/C2n+cff1XloQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XWb0VbvPm8jSmoBaGDxroV2OdrW4JuVkM0w5nXMnJO4=;
 b=PfA4Cwqb50p7UHGZFL+DntnEHZ1jdbRqhgKgjK7SwFpcXalwp8eYlQxr0FfagFeEC+Zo2Tn+p26+ayxgzuYyWzlXGG25iL/AL6Kr2MLmtnngh8FkIBwrAMdOxspzr8BNKoHFw7yMb9NXaCLDVu1xfJf8ggx+qDxkVlOSpwHYeBOh0jTXq7+KPFUrVt4JIG+lOkiDbun23FbiBqujQbPV+oPLoZXTA2WuauLyIervS6+FC4M8eAsNWMekAXNPpcLw81vlgL9y8MgT3+yz+l5rJHJMCQmIAY02JepUy2DgeOL5jN6ELjQ0QKrgxi2W+7b+ucdeYxZykwkw517EPgKVnw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XWb0VbvPm8jSmoBaGDxroV2OdrW4JuVkM0w5nXMnJO4=;
 b=na61Dt86T2gfMO6VMx+5eOLCOkaeEj+gNN0HgNjPvqqxqtxwBq7shZ0yu0CdmTfKMnOHUn+hABCohX1rGZwWmDw3/wSZykndxqt0UTuYc3u+TwMypqHw+M5l1AUpQFH9gKKrreE8Nn+RtRXL9aZuF+8Z7hQpRourRtjFCM3WCiA=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Tushar Goel <tushar.goel.dav@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Xen Security <security@xen.org>
Subject: Re: Usage of Xen Security Data in VulnerableCode
Thread-Topic: Usage of Xen Security Data in VulnerableCode
Thread-Index: AQHZJPgyisjAiCiXW0uW9I6mR5F/Cq6XqbkA
Date: Tue, 10 Jan 2023 13:45:42 +0000
Message-ID: <7ddac120-29c5-d4fa-2bc7-9da6b1cf2dd9@citrix.com>
References:
 <CAFD1rPdT5Tod+qdit50EWBN6WyRuK2ybb2G2HmOAayAV7uyBuA@mail.gmail.com>
In-Reply-To:
 <CAFD1rPdT5Tod+qdit50EWBN6WyRuK2ybb2G2HmOAayAV7uyBuA@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BY5PR03MB5250:EE_
x-ms-office365-filtering-correlation-id: febf5bd6-f66d-4a11-89bc-08daf310f7d9
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 VsjqMAxcq2gMIfzzUCHg+451kfqN8lWUsm1BOFkODv4zYLI95k4v7cGcV+AJnYMnDPjYqiyWuMN0B0+bYzRQXqMJlI6AqNeGP0pa2TgWuzRT0wwtRJrm5ZfDL3l/+6wwDDoLCGwnMYEMs2gClwpufQWyT7Ot+wgztM5Pywryb1lcOGfUscoy+lVfu5m0g2mN/sWlac/MLTDB5B7/tH5nWBmGv6i0FEogoIHklET0DhJN6wdUUXu862l0O3ALIF4Xm0nOy073dMdUlY+x7aq86nv9caCgMn2DghDxeo1htsDPm5q/XykxezUseV4CLrHfSS1av/ERufsxtI66ehMG32CrPidurVuqg5+nbUzI3pVybPQu6/TVbqnTxhBJoRMNaZZ/UHOA0bQQpzJ85eR7BK1JeM8Oq0ekhc8qOQs/5tXsD3wFBk2OHwlAsGv7D9zuu0EfS/1u1mMAYhS2KFs5IH8Pgo6NkweQ5+l7aJ4D4ZBKXPHAxzzgCpD6nAQAwBqhFMaDn5zHX4gGZyJY7nraozCOfFatkWLuuDHQPp1AOQ6ZA8h0wZWYOG3HDT6cDdetxx4J4EZXD+mH3agPIkr46XWj1f+uu4lVEoji0ffBps61F3vM/mM8SHGBDCK8y0bu6iEfa47vR4LY0X5toDNkTIhWaFIP7H2mSincRdMpc17bsydZzCoJNzP+U1GcsfZyBCX9jJ8rbt0w2G7ovjQr9PtPiRhWVTmjteh7y9bSo0YihlzOnhMDpW9MR9Ioda2suL+YZ18LJ6tOzYv0Kmj3C0ZegsXGjRXF3tMtCyHNGt1LLGucs+sTYe3HCz4VSMFXGRBqjZ5OqezUf+n5mUj1nw==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(366004)(396003)(346002)(136003)(39860400002)(376002)(451199015)(8936002)(2906002)(5660300002)(41300700001)(71200400001)(4744005)(66446008)(4326008)(91956017)(316002)(8676002)(64756008)(76116006)(66556008)(66476007)(66946007)(15650500001)(110136005)(38070700005)(26005)(6512007)(2616005)(122000001)(38100700002)(31686004)(186003)(86362001)(83380400001)(31696002)(966005)(36756003)(82960400001)(478600001)(6486002)(53546011)(6506007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?RlNuRHI2TUFrUkRFK04xQkVoNm1vYy9CVkl0T2d2SE54Y2FtWTN2b2lQUXV6?=
 =?utf-8?B?VjA1blo2UWFzYU5FZ0VDcWFqQTFPaEJzU01vSVhGK0piRnB5b2NBZGk5b1FP?=
 =?utf-8?B?eVRUd3RZYmZReXVROG5OdW1iV1IrWmlXZ0dpN2czRUtYOGhqNi9acWZpUUFL?=
 =?utf-8?B?SVozbkhESkYyT0JmUE5sVG9yUFlNRUhQNmc1eEQ5Snd1dXY1TGdSS3E5QTZ2?=
 =?utf-8?B?a21ObmNxZGZaa2NmM0hucXlVYkJwQmFrRTlSTUFzZUdKbWRWaEdaWU5zZUEx?=
 =?utf-8?B?dThuMFJ0OUd4UmZYTlQreUZmZnZGZmpYS0NmU3N0d1BoK3Z4N2JVWUFRVVE4?=
 =?utf-8?B?Y0V3dEFzUHBZc29VTXRMQVE0cnZVT1VRd3F1Qk1BWXpPbTU2NCt5WEhHazFx?=
 =?utf-8?B?U3hXRDRBMGMrZmx5bmFFY3hiK1ZWSzVWdGc5YjRCaVVlVGpudm05WVo3czRB?=
 =?utf-8?B?MWNOSlJDYkFxUlpNN0VSaHVTZ0VsdjN0dklTYU04Z1N0T0tsRStwOFZFQmRV?=
 =?utf-8?B?eFI0dzJGQ1N0S09PV1BUTHM3enRMcFF0NkR5RGUwZm4rR2psVmRXTjJiK2xk?=
 =?utf-8?B?TnZhSW5OWVNmNDh1dzFvaExJR0YrTHNCdzFGbHhXeFZIMml3ME9CZHU1c1po?=
 =?utf-8?B?MDVRZ1U2WWNGbk02RE03YTFJeitEeityQjVadTF0R1JBL2tidHY1OHp5YlR5?=
 =?utf-8?B?eVltY2VHWU9LVk42Tk9oTGhRYUYvbXE2djRUaVlNM1NReWpwTEMyOVdLbENi?=
 =?utf-8?B?WTB6WHlSMitoWG1BYTBPajl3VEdJdkZqZlZ2YjZpeTA2eVhnT1FtTTFEaEJL?=
 =?utf-8?B?M1VRNmxRRnlaaEk0L1Y1bXphZFRUZTVVUUZLaFhOeFJzVkUvU2ZGS1dvREZF?=
 =?utf-8?B?VzJWTHdrSkhTb1RMVFBib3lJQXUrNmhqZlhJN2drUjQvaHBtLy9jeHc4czVz?=
 =?utf-8?B?R2JJak9JMlNHaW9qcFFuU3RKdEZHUzVRZzlSVExpZXhpSGE1VVRJbVRMaXo5?=
 =?utf-8?B?MlkzOHhjdG8xUXgwdFhkdHNSdzVsdnpGN21ySFNJdEU4RkZNRUFYbXR4WTZH?=
 =?utf-8?B?dzdWaEx4RTl2RmR3MXE2eWx4YWsyeTRxTXlGa0xpYTlJRnNvc0V4WFlzeWtx?=
 =?utf-8?B?b3MvcW1ZT0V4MW5PTGRFSDhOWkw5K1pHUTFZR05SZ29PUFl5dVovZFhzOEZN?=
 =?utf-8?B?KzY5QXE5M0RaaXNmSHd5WFZrT2svK2cvTFB2dEUrWlJMaFlVYzlPNmZHMEN0?=
 =?utf-8?B?UDZkaGdvYUM0WSs1Um1tWldRR1hsZnVCKytJb1QzbVVIaHkvVE9yZEhZRGhD?=
 =?utf-8?B?UWovUHFzaDR0YmNwZnh6eHgvRnBuL1VrNTk4elJlSk0yY3VCdEd1WWErZVVV?=
 =?utf-8?B?bUJDY1JsSEdMckxwS04vbis2WkFpMTB5bHowOU1YdGw0bHFTbHB3Q0wxdnlj?=
 =?utf-8?B?Vm16N3R0dGNUNWU5ZzVWNG0xS3M4c3ZZVzVPMmc0amtaQ3hUVGNDMVBoei9o?=
 =?utf-8?B?UzROdWNSTFlOUzhsNkErMTkzV0hQL3FDSGhMcGdYREIwdlhTVFU3UXdiREp5?=
 =?utf-8?B?bVB4UjBhNW1YZWZTZHhHZTNBZkJVeS9hdktHMHE5bmJleTR4ZWt0TjZuQWw4?=
 =?utf-8?B?QzY0eE9jRGgzanlGcnNsUmx6SDg2TzdEZVVuUEd3cHptQmFWT09UNVR0Y0tC?=
 =?utf-8?B?QUJ6ZFZ4bXl4N2dTOFQ2NEx3SGZ6YWp5RmVIMUdVNFdpcWUzWGtDTjJrSlVq?=
 =?utf-8?B?TWNqcUxjbWg1RGVvOXlOdkpiSlo1N1p4UkFkWnlQWS9ueGZnQUlFR2RBZkZ2?=
 =?utf-8?B?VlFhRWcrZnYvM1FxYTJzeG5ic001eW91MnBhVVNTZTk0aFZONndKdUNsM0dX?=
 =?utf-8?B?aGdYb2RURXJjYmtPWGFzK3RseVppcittWm5OeGNBZ2NNVXhPQnVYcTZCQzU5?=
 =?utf-8?B?NjdrUGhtMEpJVnNLS0U4ZW16S2NER3VPU1ZZNC9QSUcwQ2dOR0RJSGtwNTBD?=
 =?utf-8?B?WkZsVlVpcE5ZaThNSHJvN0ViMmJXMWNuZTBGK1hsRHBDakZQL3V5dUlRTEto?=
 =?utf-8?B?R2JjVHRGTmFJbW5TY2dEVjZZV2Zmdm04R0RtMGVzd0w3ckM0STBROHZReEwz?=
 =?utf-8?Q?oPjgy92QpDvj2HG6lX/qHciqM?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <7F47F6EBD01EA6479DC2ED3CDBCEC819@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	x9IYZizMc2cg96k2SiyzVYkVef9BdagCdfwPuqpYjp1gpOiV0b08QN/0/IjGqQVzFvkct+8cViPv3WdmN6H8TW390bWq0Y2Qr1+jFfVRV7w15mk8I+B4SkQm6M9TXzq/ih4BCMV1GMeFCoWyoTuP5Uh69a0+bJlJ/adRknJeObply9r9lOr/+VAzIJMUCbyghxPMrGOtQ98PhF4kBsYVsotkzGAi6rNKPfJW7XCki/6HMzfB8Ft80YrAnMwlGE2edPexmOWDeRTMEaGeoOXAT7adbSqB+QfaWvnVsbVXK60I5RMhOX54Iy1GLAxemQib90ysfD9IRZWSxI7DLMGaunKGTrv8+f9Vh+epowh8YpRfKVCwhQ8AiXv02nJYvzsIqKiBnpIELGSjAbA43sf35C3CouUn06aQ4OJqEHvTM2R6R2e0qOKoa1gj7gZoBQ5ykVAQIdRztrspOZgBJXOWaYyv8fH5irarT7HNS/soNdZ5R94NKPwVDx0vQUEbQaFvY9SwpqXCF7zJULGPPw2dyVLXH3cgy530yG9jmO/VDCGmmTs50XGx6FZNARFUZjwCEckhhIW0grcmztNOMejsKyzxZEhr6lvh7/sJH3+JNfIoXfKKYKost2PTAI5d1LS6j1IatIQddQby3DjV2jP4XrNR9iwpMlOQzRnOaOrab8Mkly5Ua6ACecknaCI7R6lN9+pl+gBGBRFBRU8zBdlht0eH/N1o93oBpycfJR6zheUfTrhHpPn/+e9XGtD55Ko7dru9uyUIeFhUtvQtD0N4OTcK0RXhqIg0t06NE4+Haq+V5TzuZmwxybp2efGxTZNdBPyBJUOmEHOMKY+p227vgA==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: febf5bd6-f66d-4a11-89bc-08daf310f7d9
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jan 2023 13:45:42.8976
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: b5UC8YmDpikbovroZxI/ADB9skZOJ36wzPpRz4vwzIFmSLXSzhB3Z7JcriYD05aY1OCaDDLZHkEJYSVtERjgaOl5DeYFPuMegmCsUcebXu8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5250

On 10/01/2023 1:33 pm, Tushar Goel wrote:
> Hey,
>
> We would like to integrate the xen security data[1][2] data
> in vulnerablecode[3] which is a FOSS db of FOSS vulnerability data.
> We were not able to know under which license this security data comes.
> We would be grateful to have your acknowledgement over
> usage of the xen security data in vulnerablecode and
> have some kind of licensing declaration from your side.
>
> [1] - https://xenbits.xen.org/xsa/xsa.json
> [2] - https://github.com/nexB/vulnerablecode/pull/1044
> [3] - https://github.com/nexB/vulnerablecode

Hmm, good question...

In practice, it is public domain, not least because we publish it to
Mitre and various public mailing lists, but I'm not aware of having
explicitly tried to choose a license.

Maybe we want to make it CC-BY-4 to require people to reference back to
the canonical upstream ?

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 14:20:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 14:20:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474724.736024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFFUI-0007g8-To; Tue, 10 Jan 2023 14:20:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474724.736024; Tue, 10 Jan 2023 14:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFFUI-0007g1-QR; Tue, 10 Jan 2023 14:20:26 +0000
Received: by outflank-mailman (input) for mailman id 474724;
 Tue, 10 Jan 2023 14:20:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NrXS=5H=zf.com=youssef.elmesdadi@srs-se1.protection.inumbo.net>)
 id 1pFFUH-0007ff-Bg
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 14:20:25 +0000
Received: from de-smtp-delivery-114.mimecast.com
 (de-smtp-delivery-114.mimecast.com [194.104.109.114])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb86f9b9-90f1-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 15:20:23 +0100 (CET)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2109.outbound.protection.outlook.com [104.47.17.109]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-41-ae_4dJ7pNuSO1jSvk4bUKQ-1; Tue, 10 Jan 2023 15:20:20 +0100
Received: from AM5PR0802MB2578.eurprd08.prod.outlook.com
 (2603:10a6:203:9e::22) by PA4PR08MB6047.eurprd08.prod.outlook.com
 (2603:10a6:102:e3::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 14:20:18 +0000
Received: from AM5PR0802MB2578.eurprd08.prod.outlook.com
 ([fe80::f7f1:3b45:d707:62b7]) by AM5PR0802MB2578.eurprd08.prod.outlook.com
 ([fe80::f7f1:3b45:d707:62b7%2]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 14:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb86f9b9-90f1-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zf.com; s=mczfcom20220728;
	t=1673360423;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:mime-version:mime-version:content-type:content-type;
	bh=/cGjmQeA5LXT2I8FikypkzEeMiaptjZePldsNjHLTvk=;
	b=ZI9gtnFDc+uInj7CBMdSYF4Zz8dxD9nEsiU+UHjoZ+XbQlvJE8tebgZxWvcxeGVf5+im82
	EstPNq0Z3ar6VQ9yBYHk8H90yh8CBL1WLYZK3/6DNiYQYvN3ce1zJsVwb+Czj+/79g0baE
	Rs6WS4xmdY1aofkrhsTSNqwzSfNsQUE=
X-MC-Unique: ae_4dJ7pNuSO1jSvk4bUKQ-1
From: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
Thread-Topic: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
Thread-Index: Adkk/qg7kRGuW6KDReaFxxX6SSJ/1g==
Date: Tue, 10 Jan 2023 14:20:18 +0000
Message-ID: <AM5PR0802MB25781717167B5BFC980BF2A49DFF9@AM5PR0802MB2578.eurprd08.prod.outlook.com>
Accept-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
msip_labels: MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_Enabled=true;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_SetDate=2023-01-10T14:20:17Z;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_Method=Privileged;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_Name=Internal sub2 (no
 marking);
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_SiteId=eb70b763-b6d7-4486-8555-8831709a784e;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_ActionId=eeca7847-45f1-4c89-8855-4db1e219313e;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_ContentBits=0
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: AM5PR0802MB2578:EE_|PA4PR08MB6047:EE_
x-ms-office365-filtering-correlation-id: 3b6818af-14ca-44af-c33f-08daf315cd30
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0
x-microsoft-antispam-message-info: XozK9bbrrn0qoH9EPF6QWM3hGHTVJqsoSh3MWK2ZpFukXThZ1zC8wRmU/n4I6xGel4jr+OiSYKdRfqQ5uqNFaQHBpHtm1OP135/PmflG83xYd7WsyqwSviYOVYi+AgLc5QqymMtU7keD482WPN6kgftvKE34PEjicEDRTUly5cNh/eLNqqwbzm5D/NX/093T7W2mepXSaMtk2HiJC/52BywCAxmJQ7W3PHTjmMoj34k2PqHj30BaQz2j5rENcD5fPy1lyhADhLm+G38kLgdDW7ZeN+cTdqTwcKfL8ti6J7D4+IAPRWNoMwzIvD+sjPgoOHuUY7ZT8EKYBOloX4tk6pfAI/7JYRf1xGhDS+bbM3h6zkbQtNzebZ/wWIW/rtrBr1eq9yNJQMRjLInbx5C/SXfZ7uU9j7eu5O0vOw1NY+0XHNmsFEQDEBEx0eQyTv95aUwoJmhSu7tPgCyF9gf0I2KdBH0MgkgeJeExIKoyfA86XgXdOvfr+uCIs7M4tmCEMWLcNlu0+v5ThQQ2E1PNQM51PVTrKQmJdsLy+2kLWuLjAITeo/BeGWOKp69BGM5YYDeXj+wFDS/5/Jfg6r/e33oh6sfk0LE3eQssr5mFd2NDmrfKC870l3nqMQwo/cFVTpgvvweCc64pePifAxxD+q4NOJT4Vxy2NlyyN9CjQt0iUMp6Fgo7P1xD+M/c7q4o+8qjj7S0eY7P5HQQtEluiQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM5PR0802MB2578.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(346002)(396003)(136003)(39860400002)(376002)(366004)(451199015)(71200400001)(86362001)(6916009)(316002)(8936002)(52536014)(2906002)(38100700002)(478600001)(82960400001)(19627235002)(66556008)(66476007)(66446008)(7696005)(64756008)(26005)(76116006)(66946007)(41300700001)(186003)(9686003)(8676002)(33656002)(6506007)(38070700005)(122000001)(55016003)(83380400001)(5660300002);DIR:OUT;SFP:1102
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: =?us-ascii?Q?wz0MOUEKUg3eF8xUCfV5oZOCIDuVDpQEXspbJFGwkbl2/V2Z2kmMF5jOkLIE?=
 =?us-ascii?Q?OAaZXAZtZio9gcwOWgtQW+tsCYRkzbo5AnyQ4XUm0ClSbP0yrshiwn4DcOM/?=
 =?us-ascii?Q?i2CjiAToaTu36fX3QHlG9UJZBjM1zegzoDaxpplLH3mleaydj5Dy6WShnFl0?=
 =?us-ascii?Q?uG9yHRPNj8c63IdhdiwbbiAxY3WWErHCLsfNInSU5EfNx5eef1vKUSqDSnNy?=
 =?us-ascii?Q?UXiNFf4IxHiOZihxXVVv1rTcM9YnQiYrm3ALof0TF+MJflPGXRd3pnygt09H?=
 =?us-ascii?Q?dt1iDUowMEcYBWz1HfMSrrg+qoGsJgnvW3GXmr4T8C9GBPbJ2POYsWC4ylCL?=
 =?us-ascii?Q?luW8qUhEbSbCz76ccIlWLlOYQcSye4cUv4ZOwjNjhaIq/F4Y8e/Ljd/javKE?=
 =?us-ascii?Q?OhbAoX+TR2j6h0wub2JJc1LKCmsZZOEwCY9P5e7W7cRT8jeeR3kKS/8V34Lt?=
 =?us-ascii?Q?iY67BfuhE5w8v0FH5SGObo6GF+5rsUOSYWX2a4nek7lM8K2pbfNdWmVhGCca?=
 =?us-ascii?Q?t/njdpqX48lyTiggTFMhpZJdvQJ7HrajLSxNF0QeKovNrzfXR4CiNxt/avIX?=
 =?us-ascii?Q?dxUpmrBiKIkzBdwrMvN6cHP3/HlIG6DV+k8BXKp1yIWuUyE2Flfx8Q3quXcs?=
 =?us-ascii?Q?n/0PLtxnX93iZRd1r8AOIJtLqiOchsEcu/uMlQBRo/QVNTASIFPKVENUGQq/?=
 =?us-ascii?Q?i6mWMEcs25qxH1C6vnZGb5W1SM9afXpwkn0hYMVNguOxKnQCKAstKVW/jxHC?=
 =?us-ascii?Q?DoYV2qqLpx6hiGHIrBPPMdW/mqGDD4vCrgs6SfTbGUhQjXiO3xDplxFDcjE5?=
 =?us-ascii?Q?VE/u3NPu8+av8S9UtxIkOJCaKCts/2zvirDjJ0vK2noc4+KFAzXyaV2Ou8Ls?=
 =?us-ascii?Q?F2uq0UklNfo9HRZCv6w502esmLC1s8wQTsp0hqHKakmzuuxuWsQIlWTsRclV?=
 =?us-ascii?Q?xyovW7rUlovxvmYBFeEK+XSyaNPWxFlgKREZelM+BO5cxNH7qzd1hKFTJ4by?=
 =?us-ascii?Q?E7ukfZzFJMEyQSXE72PB6po1T5aE3frQ29QEHyaZ7uYy1XWZVhffNhAf5JEa?=
 =?us-ascii?Q?bNcfZnfXcI8TUBt/kJRjPV62wrV83TpbTzz9/YhXgx157TF4XuhefnNCj679?=
 =?us-ascii?Q?8JpgL0nvBj++WDw+tvpUIuIM15Vl1kYwinDhkM8ypWjfu1q16mLDQdu67pLF?=
 =?us-ascii?Q?JFevruxtt0TUBhkWJa90yxDyQ/AGLhor85JIHzKxbu3X/QLe4YiFRNIQc0zm?=
 =?us-ascii?Q?Ie51Ma+qqjm2PEytOh5M0T48kqsxz4Tu8X6rBnC1CVvfAze8DoZzyL5ZlJBF?=
 =?us-ascii?Q?HkVTucjHlVGkl5PDTPa01WclfSULpbaddJLISipMeNdU8Lu4VRs3d6NGZ06l?=
 =?us-ascii?Q?vCE9smrqexuABEgAIGUOUNKVafavBXsbzbUCIDY6v2/hUu3MDNZFvUyEd41S?=
 =?us-ascii?Q?dJGD93sW9a3GM+pt7zYDyUM/1k+TFA8KkTS7Wm8GMtOsrWTZSUIAX8Q09v4+?=
 =?us-ascii?Q?oZl0Kj2toIGHIFooqTtO/9RcndhZ2CFu0SxXwqCFklNHv8Bf/R+cIEDF0p0S?=
 =?us-ascii?Q?XcmyCFAb2HMWNHNg527ZPEpy4iC/bu4kb1f1PAyV?=
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: VXmaIekqeDtV+ii4zkHKR0cjZAFYivVT3QqZi+xD6M2dd8B36cKWbh9gGhP/voyRsmXN/aTf7E0aTQRdl1IElVXlgoBfeAaxoXwxki1GMGgWa3u8W2jVRe9ksBJJklzVaxkCnIaBLOtPTCWxumLK01b7y+5F0KJUVkyN/q3dGLBcalooDWHx5Gy7VMU3JNl+eLS24jyuEtRW8I5JGtWkB7IGpr3brsynhaUn1zVTG6JZpJBw5bGEfIF7qUsClec9hJ9MaZFDv2Vi5dAzfFnf/NwEOoMEp6iTZw0GWgTU6a9GEl5yrhmTs6UmRnjCdRpiQERdy4jXorLzHlaj0aQqxDiDiRqiES1N8C8XPrNyGMtGJb+NtWHd216unGJrP3YSXVI97F4HL7RdXZW46+vm09pZHYQgeyDIqMP8NKs3Qlgp6jdsnUcprfXL5cxDu5NVCx0WLql2hV+K6FsqoKdFWLmRKrs5kQaK02tJfukPRZHbuu6sSGbw9Uf6Fjk4cGovwHvaKKZ1qojlmuZHYnoN9qa0wFwgFwO1wwQv8rD87UrXb7bMhuoGDzTYqT3o9OFc0yoLeu+xYTqN+vXgDrU9odQUkNYyi5RmtjAyRJtwjSpisgz7Ocrxc2e10FkiX4mV4tJBgSnnSje+POzGG5cCMD1RLMAxSrrSsi8RVlg2S/mds99Knk6Mp9T2dmtAF/x27kbA4EuttsyJGya0pUt3H+js6Nmd4sQoLdAY/ZRyloTsS2q6CEuAvBb0agVeUgu8XZQUJ9RQslAK3D2M1intGA==
X-OriginatorOrg: zf.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM5PR0802MB2578.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3b6818af-14ca-44af-c33f-08daf315cd30
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jan 2023 14:20:18.8109
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: eb70b763-b6d7-4486-8555-8831709a784e
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: NoUccbDIVHxkqpltGYfAZqaHsSbYy3SYNlLPlL29wEMg1jxySax1PI0Bza+tzJhSMMkT24ISK4SSWasbMxb+McshRAnPioBK9pjLbI7sEbg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6047
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: zf.com
Content-Language: de-DE
Content-Type: multipart/alternative;
	boundary="_000_AM5PR0802MB25781717167B5BFC980BF2A49DFF9AM5PR0802MB2578_"

--_000_AM5PR0802MB25781717167B5BFC980BF2A49DFF9AM5PR0802MB2578_
Content-Type: text/plain; charset=WINDOWS-1252
Content-Transfer-Encoding: 8bit

Hello,

I'm trying to measure the performance of Xen on the S32G3 microprocessor.
Unfortunately, after following the BSP Linux instructions to install Xen, I
found that xentrace is either not available or not compatible with the ARM
architecture. I have seen some studies on Xilinx platforms and how they made
xentrace work on ARM, but I have no resources or access to reproduce that
and make it work on my board. Any help would be appreciated; thanks in
advance.

I have some extra questions, and it would be helpful if you could share
your ideas with me:

  *   Is it possible to run a native application (C code) in a virtual
machine, turn on an LED, access the GPIO, or send messages over a CAN
interface?
  *   My board has no Ethernet and no external SD card; is there any method
I can use to build a kernel for an operating system on my laptop and
transfer it to the board?
  *   Do you have any detailed suggestions for measuring interrupt latency,
Xen overhead, and context-switch time (the time to switch from one VM to
another, which is what I wanted to measure with Xenalyze)?

Best regards
Youssef El Mesdadi


--_000_AM5PR0802MB25781717167B5BFC980BF2A49DFF9AM5PR0802MB2578_--



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:04:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:04:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474733.736040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGAp-0003kg-BM; Tue, 10 Jan 2023 15:04:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474733.736040; Tue, 10 Jan 2023 15:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGAp-0003kZ-8l; Tue, 10 Jan 2023 15:04:23 +0000
Received: by outflank-mailman (input) for mailman id 474733;
 Tue, 10 Jan 2023 15:04:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFGAn-0003kP-Bp; Tue, 10 Jan 2023 15:04:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFGAn-0001Do-9C; Tue, 10 Jan 2023 15:04:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFGAm-0006Hb-VC; Tue, 10 Jan 2023 15:04:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFGAm-0006TS-Ug; Tue, 10 Jan 2023 15:04:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gShHeZwpUPtVRY+/OJyvx/WPMcOr392XPikRBy+l5MI=; b=GknK0HyEBW4jXUaIfpHSGlvPVA
	4s7tyFxbJ4+Z9Wkik+n3rqr6Q2JARgHmBwD1zkq/z30JTpbFFwl6hJfem/tccPHjO0jbyjFF/x4Ix
	iWaUdBgvXW4AXDZNZbol45g5AifTUGvMXMAYu5k1rZ3TGp2WHjx8qr9JhYznKHBz1U1U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175691-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175691: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 15:04:20 +0000

flight 175691 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175691/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    2 days
Failing since        175627  2023-01-08 14:40:14 Z    2 days   10 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:18:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:18:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474745.736090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGOA-00063d-WD; Tue, 10 Jan 2023 15:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474745.736090; Tue, 10 Jan 2023 15:18:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGOA-00062R-OB; Tue, 10 Jan 2023 15:18:10 +0000
Received: by outflank-mailman (input) for mailman id 474745;
 Tue, 10 Jan 2023 15:18:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFGO9-0005G6-6Y
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:18:09 +0000
Received: from mail-lf1-x12e.google.com (mail-lf1-x12e.google.com
 [2a00:1450:4864:20::12e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fcf25e3e-90f9-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 16:18:08 +0100 (CET)
Received: by mail-lf1-x12e.google.com with SMTP id cf42so18983025lfb.1
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 07:18:08 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 y19-20020a0565123f1300b00498fc3d4cfdsm2203396lfa.189.2023.01.10.07.18.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 07:18:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcf25e3e-90f9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TVqSKi3nBOpTA6dXyBr5PLcPAxdksP3ri3d2O1B85MY=;
        b=e2u5zuLmD7mk4FxQARC1xRPXgyKyXnF3t4Yk95+zbf8PkkCQhdV7QU9z0YjZZsD6qI
         cmAw/MNs2iw1wZEbQjHLjeNOigxMPX6cmixeVIZaCwIHK51wKDb5nvrDcObYubWFzD+c
         S6Sz7Fw8ztE40Cb7wBCn1cyN9Tx72k28cD1nISynFxkcqfn4Dd87EuQkZavg0ihrrxKO
         ue29Tp3yoyJFBy4tf8D8kG02PwjrGx1CWtqcelIw+lqiZ0WQj1vSq9WBFKJJ6IJxDFin
         W7tiSBtmRyADQyhuTHq+BjTYMVQXlVtQdIoKUC6vyp7pzUrLAwgT/GusaQNxQLwX1nxD
         m7+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=TVqSKi3nBOpTA6dXyBr5PLcPAxdksP3ri3d2O1B85MY=;
        b=JCDwHbqrIfGjLo9FFcjyizeL3hFM3uyD34c+yAM/aCRpfGVo0iQpYh4Nwu+D7dYSYJ
         T8+l61DkQY7+Cnu8yIITv69qnVH+b8/dAX2iELTOqz3nMJX4Mf7nAoKm4/kAKbFIlG+m
         lkLnnl4npQg/B5Y5cb8tpzd4HF319z1T6LICFumeKaml5EhIM0laadYMhlnA+/wdKpgG
         Cu6lj8cK62hF8aqY4tpqvX6AFhZuys+66/8Xo/+1vPoIkI5VeuvHg6ahKFIGsUf3LPft
         LmTufIJR8RHIjzUTeLyoRI+vG3TEBvH/T+3c4hl5+LVhqfghIpevS5Q9O5qTVJg6yZ9b
         2lSQ==
X-Gm-Message-State: AFqh2kpIBfUlN8e0soFnwZICXrL4Z3XMY2aKlAw9ant4ErMWM9/2CEN1
	uUg1URpynkBpDLitIoyrGUFUdUwGM+6FGP12
X-Google-Smtp-Source: AMrXdXvEEnHVFscJyoj3h6ACK+mgIxiDwomDke1B0mbazKc+zfgoTaWIWDhMTSP1SuM7C7x5V83Fkw==
X-Received: by 2002:a05:6512:3d11:b0:4b5:2ef3:fd2a with SMTP id d17-20020a0565123d1100b004b52ef3fd2amr32153648lfv.47.1673363887859;
        Tue, 10 Jan 2023 07:18:07 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v3 4/6] xen/riscv: introduce sbi call to putchar to console
Date: Tue, 10 Jan 2023 17:17:57 +0200
Message-Id: <daa3dff9b0993b0e6706d7b0ebe2f0e5ef46d3d9.1673362493.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673362493.git.oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Bobby Eshleman <bobby.eshleman@gmail.com>

The SBI implementation for Xen was originally introduced by
Bobby Eshleman <bobby.eshleman@gmail.com>, but for simplicity
everything was removed except the SBI call for putting a
character to the console.

The patch introduces the sbi_console_putchar() SBI call, which is
necessary to implement the initial early_printk.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V3:
    - update the copyright year
    - rename the define __CPU_SBI_H__ to __ASM_RISCV_SBI_H__
    - fix indentations
    - change the author of the commit
---
Changes in V2:
    - add an explanatory comment about the sbi_console_putchar() function.
    - order the files alphabetically in Makefile
---
 xen/arch/riscv/Makefile          |  1 +
 xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
 xen/arch/riscv/sbi.c             | 45 ++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/sbi.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 5a67a3f493..fd916e1004 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
+obj-y += sbi.o
 obj-y += setup.o
 
 $(TARGET): $(TARGET)-syms
diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
new file mode 100644
index 0000000000..0e6820a4ed
--- /dev/null
+++ b/xen/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (c) 2021-2023 Vates SAS.
+ *
+ * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Taken/modified from Xvisor project with the following copyright:
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ */
+
+#ifndef __ASM_RISCV_SBI_H__
+#define __ASM_RISCV_SBI_H__
+
+#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
+
+struct sbiret {
+    long error;
+    long value;
+};
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
+                        unsigned long arg0, unsigned long arg1,
+                        unsigned long arg2, unsigned long arg3,
+                        unsigned long arg4, unsigned long arg5);
+
+/**
+ * Writes the given character to the console device.
+ *
+ * @param ch The data to be written to the console.
+ */
+void sbi_console_putchar(int ch);
+
+#endif /* __ASM_RISCV_SBI_H__ */
diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
new file mode 100644
index 0000000000..dc0eb44bc6
--- /dev/null
+++ b/xen/arch/riscv/sbi.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Taken and modified from the xvisor project with the copyright Copyright (c)
+ * 2019 Western Digital Corporation or its affiliates and author Anup Patel
+ * (anup.patel@wdc.com).
+ *
+ * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2021-2023 Vates SAS.
+ */
+
+#include <xen/errno.h>
+#include <asm/sbi.h>
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
+                        unsigned long arg0, unsigned long arg1,
+                        unsigned long arg2, unsigned long arg3,
+                        unsigned long arg4, unsigned long arg5)
+{
+    struct sbiret ret;
+
+    register unsigned long a0 asm ("a0") = arg0;
+    register unsigned long a1 asm ("a1") = arg1;
+    register unsigned long a2 asm ("a2") = arg2;
+    register unsigned long a3 asm ("a3") = arg3;
+    register unsigned long a4 asm ("a4") = arg4;
+    register unsigned long a5 asm ("a5") = arg5;
+    register unsigned long a6 asm ("a6") = fid;
+    register unsigned long a7 asm ("a7") = ext;
+
+    asm volatile ("ecall"
+              : "+r" (a0), "+r" (a1)
+              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
+              : "memory");
+    ret.error = a0;
+    ret.value = a1;
+
+    return ret;
+}
+
+void sbi_console_putchar(int ch)
+{
+    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
+}
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:18:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:18:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474746.736096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGOB-00068P-Ap; Tue, 10 Jan 2023 15:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474746.736096; Tue, 10 Jan 2023 15:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGOB-00065E-0t; Tue, 10 Jan 2023 15:18:11 +0000
Received: by outflank-mailman (input) for mailman id 474746;
 Tue, 10 Jan 2023 15:18:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFGOA-0005G6-AI
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:18:10 +0000
Received: from mail-lf1-x134.google.com (mail-lf1-x134.google.com
 [2a00:1450:4864:20::134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fd9ff4e1-90f9-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 16:18:09 +0100 (CET)
Received: by mail-lf1-x134.google.com with SMTP id v25so18922394lfe.12
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 07:18:09 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 y19-20020a0565123f1300b00498fc3d4cfdsm2203396lfa.189.2023.01.10.07.18.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 07:18:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd9ff4e1-90f9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+4b0mCw5TWzzOfe9k1UrEZyIfTtwrFP3j0Uq+Wi1atU=;
        b=XvQ0/I+z48juzNDTq8nzooq2ZPekH2xUSqZvlbohxfYhFJZ81wYOyb19SIA/idZOET
         1LSYl7gVq2SJgJDIeDVMmj8OLR6bvModgWrPzFBBngFOChV4+SYXxO9/gxkKwm2EUikJ
         6seJ1954bJ4UVpnCtNHN2g3te/FnsdUEgj92ZKZNWevHNUqWgTOrErEybjqAVHdDkoLg
         cy6MKyKYXyEsEChdUziQU2BxLcb1JXg+nnb6WWP7Ryh4veGy2x00te/wwlWrUhREEcG6
         JkJKI/ceOiTsKHPaG68MNGuHGhZ6o4ZTrfXwWbqiD1NNnfrV7hP9e148/5puEI9xrj7q
         ot3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=+4b0mCw5TWzzOfe9k1UrEZyIfTtwrFP3j0Uq+Wi1atU=;
        b=GPBQlco/ktccUHrALcHrWZC8Q81cWryEH0oKPRgTqzbo5pLOlamJTM1CpHX5B5v6o9
         CFjF6cw+efEn0Qw4oWxnZSQfo4e0n36/3TXjgaXicnhezlXDvqgjFJGo9Jrsz3x3qRFr
         mx7V5AeQDSXdr8ZaFAji02DMiokFIYD+EAOuiimZdGL/eN8U1vDL5OyVlAghlNkYIwer
         QwSJhXD4ukQuurrttGeq3eDHfbSatqcqWSxGVQO3u05otvwXLzqfXh5DVf7KZCYQOYAW
         3UgoXqsOZrOEqypvfPv69HP0XSn730OD/P9sp0rRSwVY4VswQB5nkBKd+zdsPnes5vzX
         Bdhw==
X-Gm-Message-State: AFqh2koRSiNbEksNL8Fon5RkYz5lwU3fA0EIP+m0SLzFbqaoZCv32KKH
	tdJ7W3WLcmwgRy9S0UOyFz+tgdCtwnYUU6cG
X-Google-Smtp-Source: AMrXdXsaqM3PTnIRlw1RNlmG8XdcZtW5YW9PgDIR2zm2mvrS1lTxNUQV4v/ttVeXp1U7869rBWk4Yw==
X-Received: by 2002:a05:6512:3d94:b0:4a4:68b8:f4f4 with SMTP id k20-20020a0565123d9400b004a468b8f4f4mr24770124lfv.58.1673363889023;
        Tue, 10 Jan 2023 07:18:09 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v3 5/6] xen/riscv: introduce early_printk basic stuff
Date: Tue, 10 Jan 2023 17:17:58 +0200
Message-Id: <25cd3586fa51980279f25a82eed5ded1622bc212.1673362493.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673362493.git.oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Bobby Eshleman <bobby.eshleman@gmail.com>

The patch introduces the basic early_printk functionality, which is
enough to print "hello from C environment". The early_printk()
function was changed compared with the original, as common code isn't
being built yet, so there is no vscnprintf().

Because printk() relies on a serial driver (such as the ns16550
driver), and drivers require working virtual memory (ioremap()),
there is no print functionality early in Xen boot.

This commit adds an early printk implementation built on the putchar
SBI call.

As sbi_console_putchar() is already planned for deprecation, it is
used temporarily for now and will be removed or reworked once a real
UART driver is ready.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V3:
    - reorder the "#include <header>" lines in early_printk.c
    - change the author of the commit
---
Changes in V2:
    - add license to early_printk.c
    - add signed-off-by Bobby
    - add the RISCV_32 dependency to the EARLY_PRINTK config in Kconfig.debug
    - update commit message
    - order the files alphabetically in Makefile
---
 xen/arch/riscv/Kconfig.debug              |  7 +++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
 xen/arch/riscv/setup.c                    |  4 +++
 5 files changed, 57 insertions(+)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..e21649781d 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,7 @@
+config EARLY_PRINTK
+    bool "Enable early printk"
+    default DEBUG
+    depends on RISCV_64 || RISCV_32
+    help
+
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index fd916e1004..1a4f1a6015 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..348c21bdaa
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/early_printk.h>
+#include <asm/sbi.h>
+
+/*
+ * TODO:
+ *   sbi_console_putchar is already planned for deprecation
+ *   so it should be reworked to use UART directly.
+ */
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if ( *s == '\n' )
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while ( *str )
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {}
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 13e24e2fe1..d09ffe1454 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,12 +1,16 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
 void __init noreturn start_xen(void)
 {
+    early_printk("Hello from C env\n");
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:18:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:18:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474741.736051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGO8-0005GS-H5; Tue, 10 Jan 2023 15:18:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474741.736051; Tue, 10 Jan 2023 15:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGO8-0005GE-EP; Tue, 10 Jan 2023 15:18:08 +0000
Received: by outflank-mailman (input) for mailman id 474741;
 Tue, 10 Jan 2023 15:18:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFGO6-0005Fu-Jq
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:18:06 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa8d639d-90f9-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 16:18:04 +0100 (CET)
Received: by mail-lf1-x12c.google.com with SMTP id m6so18936872lfj.11
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 07:18:04 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 y19-20020a0565123f1300b00498fc3d4cfdsm2203396lfa.189.2023.01.10.07.18.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 07:18:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa8d639d-90f9-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=YBsjKUp8Q0MJVa4U8jLERfOVLa8Dy11fP2YftT+HK8o=;
        b=kq/zQiOkycbQd2vJsbKmXLcsas+LrxbnyQrtti3+ZDmfMEZMkEWAGYSZN8yPXmTKUZ
         AQQ3E36PmaXQWHSzjpKqRaplR2iVAw8JkJHRnjXhnt3glv1pX4SltIBZJuTO3hMNHBu8
         bmOpy0E5keYh74mqjJzuiPVZA5elE1E9ToloXjt+VJt21ax86XZDbIaDp0aHp+VXtuHK
         uEW7GaGMn9maChekZ4CvlXtFl34uEQ+iwRdhFuwJScWBUws6219KwhImILBu2HLTRnVy
         8g3KoGY7M77bUEO/f+WuHcelqExclqk+++fylbW3z8LOyZ+faIcLbrKmkXvzZH7/TdcE
         rKRA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=YBsjKUp8Q0MJVa4U8jLERfOVLa8Dy11fP2YftT+HK8o=;
        b=22Lhz3XcryslhVXCJq4dBhyTMw/aqyZHvZLgvaxCOWC1oX/WndsbNH9XVl26Am6oln
         mOeJtqIZzZNfJWWe6txm1PIB25ibMCsWJP3WXzS7XVSlSFFF+/uIqDmv8JtgDEA7dPBE
         obel0aLcjQd1hZaScx9FEYCwY8CLp34wTHoiPOCqTIq0zyacy7JZfTpTJTybU112v+qt
         K3lVZ7e5P3MKPpPBSZwX08cuSF7h0nX+ENR/K8XdqhIIFa2FZnRzGe1ZMoQgzSDCZnPf
         v6tWxnIzEnRtiWol6IB/e1jazLJKPoQTmlOX3733o+nMNLPLZs9DJEycKkwihAJKmLhi
         kSOA==
X-Gm-Message-State: AFqh2kpLC8W/hB0p2AOvMin+xSJLt+k7el6GXv13O+3DJ40okQJLThvw
	PggheFIa3lv6iWDkuy7bj3kzNi6WgKUS3GNn
X-Google-Smtp-Source: AMrXdXvXpC0EKi01o6/2pO961wMBIghk3H0JjKUb10Q2NykRSL8BarfmO5dt1oElN7Azs6RW7MuXTg==
X-Received: by 2002:ac2:4ac3:0:b0:4b5:7e4c:dcea with SMTP id m3-20020ac24ac3000000b004b57e4cdceamr23303366lfp.51.1673363883688;
        Tue, 10 Jan 2023 07:18:03 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v3 0/6] Basic early_printk and smoke test implementation
Date: Tue, 10 Jan 2023 17:17:53 +0200
Message-Id: <cover.1673362493.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- the minimal set of headers and the changes inside them.
- the SBI (RISC-V Supervisor Binary Interface) bits necessary for a basic
  early_printk implementation.
- what is needed to set up the stack.
- an early_printk() function to print only strings.
- a RISC-V smoke test which checks that the "Hello from C env" message is
  present in serial.tmp.

---
Changes in V3:
    - Most of "[PATCH v2 7/8] xen/riscv: print hello message from C env"
      was merged with "[PATCH v2 3/8] xen/riscv: introduce stack stuff".
    - "[PATCH v2 7/8] xen/riscv: print hello message from C env" was
      merged with "[PATCH v2 6/8] xen/riscv: introduce early_printk basic
      stuff".
    - "[PATCH v2 5/8] xen/include: include <asm/types.h> in
      <xen/early_printk.h>" was removed as it has already been merged into
      mainline staging.
    - code style fixes.
---
Changes in V2:
    - update the patches' commit messages according to the mailing
      list comments
    - Remove unneeded types in <asm/types.h>
    - Introduce a definition of STACK_SIZE
    - order the files alphabetically in the Makefile
    - Add a license to early_printk.c
    - Add the RISCV_32 dependency to config EARLY_PRINTK in Kconfig.debug
    - Move the dockerfile changes to a separate config and send them as a
      separate patch to the mailing list.
    - Update test.yaml to wire up the smoke test
---

Bobby Eshleman (2):
  xen/riscv: introduce sbi call to putchar to console
  xen/riscv: introduce early_printk basic stuff

Oleksii Kurochko (4):
  xen/riscv: introduce dummy asm/init.h
  xen/riscv: introduce asm/types.h header file
  xen/riscv: introduce stack stuff
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 ++++++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 ++++++++++
 xen/arch/riscv/Kconfig.debug              |  7 ++++
 xen/arch/riscv/Makefile                   |  3 ++
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++
 xen/arch/riscv/include/asm/config.h       |  2 +
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
 xen/arch/riscv/include/asm/init.h         | 12 ++++++
 xen/arch/riscv/include/asm/sbi.h          | 34 +++++++++++++++++
 xen/arch/riscv/include/asm/types.h        | 22 +++++++++++
 xen/arch/riscv/riscv64/head.S             |  6 ++-
 xen/arch/riscv/sbi.c                      | 45 +++++++++++++++++++++++
 xen/arch/riscv/setup.c                    | 18 +++++++++
 13 files changed, 233 insertions(+), 1 deletion(-)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h
 create mode 100644 xen/arch/riscv/include/asm/init.h
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/include/asm/types.h
 create mode 100644 xen/arch/riscv/sbi.c
 create mode 100644 xen/arch/riscv/setup.c

-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:18:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:18:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474742.736059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGO9-0005PR-0L; Tue, 10 Jan 2023 15:18:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474742.736059; Tue, 10 Jan 2023 15:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGO8-0005Nl-Rl; Tue, 10 Jan 2023 15:18:08 +0000
Received: by outflank-mailman (input) for mailman id 474742;
 Tue, 10 Jan 2023 15:18:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFGO7-0005Fu-CE
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:18:07 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fb20730a-90f9-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 16:18:05 +0100 (CET)
Received: by mail-lf1-x135.google.com with SMTP id g13so18965052lfv.7
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 07:18:05 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 y19-20020a0565123f1300b00498fc3d4cfdsm2203396lfa.189.2023.01.10.07.18.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 07:18:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb20730a-90f9-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=u7sI+87rn5yOEodXtae6iqX6/XNkgcCetji6u3zffsM=;
        b=Wcd+e1vaZ7jCFWNGH/6ZmgG8lMqmO5xhPnKN3ktZfI/PnHoeOzqYtwbZUeK5YQQekf
         Bz4qNhSrO8EAwQih9+bzTsTAVeEmzIC9aVW4edgZNrLDVMu5nsQTnoi0MnDdYo4QpgwM
         IKV1zFPVZdfctGAafPDgpzEQrwSkQmMWNIpSk6PBpVShrD1B2JcgLwdidmx2wR0h16yr
         tceSmVuJcbWls4L16BoDbD9tsqkZHI986vyx2Q6P7KpHproyAV6Wj94ndIQ9M6qu+ZWQ
         tvNQiQutSpyc4deTvrAn5PGz1eJCn7JOjyJREbamG7WbXx9E+ZdhpH6eNaoNl8LwKgqu
         8INw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=u7sI+87rn5yOEodXtae6iqX6/XNkgcCetji6u3zffsM=;
        b=8J26y+XR2RHbK1gRxuPFHR2TgJKmYXmDqFmcwumAEraL4wnPTAWgL/9s/kuyvOtD7y
         cLztmqosxfjQnAsUdXhguB6+aYkA7qXYlRtKSvS3bFfn/3cZ4KGdlYT+bx0scig6m7zf
         hMeQF8y3WcXZylhnmgw2euRh7YWXJ+1+6l9egbtsfoVC01B/nsW850UyRUhGhnoTWWeG
         rpOlaf4mgZ1Y6uyHA58S4faly8+DkimGsPH+gRXRsQal6es5CuFg9SsXhFPhPS8MXOWf
         NgA/h1LYYMKwXK10muIjFt6/Eark+a3oRHDUf9wEubHexqklYRS5j4l+FmIDwj+llebW
         Oo/Q==
X-Gm-Message-State: AFqh2krfKYRRxYMRYmDfuj+9qLKMkonAQX/FwCk7nWU2UFLEinn9T/pM
	ABpjwf2URQvqmbEDpnwVcdJr6UTjJTLKUEUq
X-Google-Smtp-Source: AMrXdXu6J6iJ2CHmwuWTqgPYwkvNGDt8B1Nj6qG/RDyMjRIDl6ZmqknoP7CuZmRNXuvTQOHUt8Aeqw==
X-Received: by 2002:a05:6512:b27:b0:4b6:eaed:f18f with SMTP id w39-20020a0565120b2700b004b6eaedf18fmr21721242lfu.38.1673363884715;
        Tue, 10 Jan 2023 07:18:04 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v3 1/6] xen/riscv: introduce dummy asm/init.h
Date: Tue, 10 Jan 2023 17:17:54 +0200
Message-Id: <b1585373e39a7cbe023f485aa5a04b093e25ec80.1673362493.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673362493.git.oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The following patches require <xen/init.h>, which includes
<asm/init.h>.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V3:
    - Nothing changed
---
Changes in V2:
    - Add an explanatory comment to the commit message
---
 xen/arch/riscv/include/asm/init.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/init.h

diff --git a/xen/arch/riscv/include/asm/init.h b/xen/arch/riscv/include/asm/init.h
new file mode 100644
index 0000000000..237ec25e4e
--- /dev/null
+++ b/xen/arch/riscv/include/asm/init.h
@@ -0,0 +1,12 @@
+#ifndef _XEN_ASM_INIT_H
+#define _XEN_ASM_INIT_H
+
+#endif /* _XEN_ASM_INIT_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:18:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:18:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474744.736085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGOA-0005zt-Kw; Tue, 10 Jan 2023 15:18:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474744.736085; Tue, 10 Jan 2023 15:18:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGOA-0005z7-Fu; Tue, 10 Jan 2023 15:18:10 +0000
Received: by outflank-mailman (input) for mailman id 474744;
 Tue, 10 Jan 2023 15:18:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFGO8-0005G6-L7
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:18:08 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fc4704db-90f9-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 16:18:07 +0100 (CET)
Received: by mail-lf1-x131.google.com with SMTP id bt23so18918759lfb.5
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 07:18:07 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 y19-20020a0565123f1300b00498fc3d4cfdsm2203396lfa.189.2023.01.10.07.18.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 07:18:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc4704db-90f9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=k39j5USYdhYFET9upYnjISkYGgRbZovPpm53v/syNJU=;
        b=K0+5gplvcbHPCg9qsE3dFmBBiOcnrjo7e6fXQMSMTGWGGAkkmvAC0kMBUKdVS2/g/U
         odrC33pI1EF18bSUdwqhQKOffPEh/O4yIR5P8EOrxbxTeK/jDhC8EqSvgqu6kuQaeke5
         14M5BY4Pmc4xSv3ceefcgEzduhQHIbbd9E7WXat//1mle4drpoRcN9dj3077OkcNEQPo
         6fYJyXpFSwlGVYBmZLNwdLk6u9Pzlr0YpLMtwxwQOjvBqI+HkfJ8Q9oFx1gH/PFUqRwP
         xhYSl/sWvt8fli6ksgToIm3RPkCvqF47ovfn/wqXcCfP0GQjsHG2UwzEWzxmH1orYIRo
         EnLA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=k39j5USYdhYFET9upYnjISkYGgRbZovPpm53v/syNJU=;
        b=mP7fzz2MW/qWLlpjwZjRhZhmXMWDprpLQehnlXc6czbdVBoRnO0DRGlyhPe+BjbuVj
         0F6LUDUWb8wCFX6sN/dVeQtbPWrvnHqTm5RgAg/MvJjzA0TWJjQGks7hys6vU9AVIGam
         JLVrEvFvB8POwgOp0u8hJcA7i7oT/uhLRPlcH3q/zjNhGYxDhs6j2Nqi/p37Quncp7Bh
         +pa9Z3NeO0qt5M/jQyXKC4rd88DKRmTD/MrWDs/db+fSzaKHXemT793NQXIATUmIdu23
         Fh04BPUtyaLxEFDcVKL5JezKk2GA5VvjSDeU8hRrNtVwlOmEAws63jY1xbRUPa8dQPQC
         LmqA==
X-Gm-Message-State: AFqh2ko2X7k5DRI2zeWRRpebcawh/xYKCrVRhzaWfnpX/Sq/V9jNdnBY
	x5Szv3+MhWSjrjWFnpErV1kStT/gxmThqIFU
X-Google-Smtp-Source: AMrXdXv0v/qAB6MI2pI2Ns3onWDvlHjYdHBeV1ydTInGa27X5jo6bnrF+YFaC+5dYBGgv/vaRENlvA==
X-Received: by 2002:ac2:446b:0:b0:4b5:8504:feea with SMTP id y11-20020ac2446b000000b004b58504feeamr16675149lfl.24.1673363886777;
        Tue, 10 Jan 2023 07:18:06 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v3 3/6] xen/riscv: introduce stack stuff
Date: Tue, 10 Jan 2023 17:17:56 +0200
Message-Id: <7d89e3811e6ea4f307862d6552ad7c7e58176518.1673362493.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673362493.git.oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch introduces and sets up a boot stack in order to enter the C
environment.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V3:
    - reorder headers in alphabetical order
    - merge changes related to start_xen() function from "[PATCH v2 7/8]
      xen/riscv: print hello message from C env" to this patch
    - remove unneeded parentheses in definition of STACK_SIZE
---
Changes in V2:
    - introduce STACK_SIZE define.
    - use consistent padding between instruction mnemonic and operand(s)
---
 xen/arch/riscv/Makefile             |  1 +
 xen/arch/riscv/include/asm/config.h |  2 ++
 xen/arch/riscv/riscv64/head.S       |  6 +++++-
 xen/arch/riscv/setup.c              | 14 ++++++++++++++
 4 files changed, 22 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/setup.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 248f2cbb3e..5a67a3f493 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
+obj-y += setup.o
 
 $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 0370f865f3..763a922a04 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -43,6 +43,8 @@
 
 #define SMP_CACHE_BYTES (1 << 6)
 
+#define STACK_SIZE PAGE_SIZE
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 990edb70a0..d444dd8aad 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,4 +1,8 @@
         .section .text.header, "ax", %progbits
 
 ENTRY(start)
-        j  start
+        la      sp, cpu0_boot_stack
+        li      t0, STACK_SIZE
+        add     sp, sp, t0
+
+        tail    start_xen
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
new file mode 100644
index 0000000000..13e24e2fe1
--- /dev/null
+++ b/xen/arch/riscv/setup.c
@@ -0,0 +1,14 @@
+#include <xen/compile.h>
+#include <xen/init.h>
+
+/* Xen stack for bringing up the first CPU. */
+unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
+    __aligned(STACK_SIZE);
+
+void __init noreturn start_xen(void)
+{
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:18:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:18:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474743.736064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGO9-0005Re-7W; Tue, 10 Jan 2023 15:18:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474743.736064; Tue, 10 Jan 2023 15:18:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGO9-0005Qw-2b; Tue, 10 Jan 2023 15:18:09 +0000
Received: by outflank-mailman (input) for mailman id 474743;
 Tue, 10 Jan 2023 15:18:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFGO8-0005Fu-CU
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:18:08 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fba76ff9-90f9-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 16:18:06 +0100 (CET)
Received: by mail-lf1-x12c.google.com with SMTP id m6so18937020lfj.11
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 07:18:06 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 y19-20020a0565123f1300b00498fc3d4cfdsm2203396lfa.189.2023.01.10.07.18.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 07:18:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fba76ff9-90f9-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GDdmAaTmfRPA7/hoydJZPcOqFW6WO2Afa2sA0FosFio=;
        b=NEM8j9tyoYg5poO13W2TCJj3Us3bcZztoSFaeK51xO9v2vjzt/4x/SAKmRlJDzpp6J
         wD7Ws/2wnbVb8EYWs1Hn1RE9SPTb/mKzgWwvtZs814huxoBRsBgBvQsrrjxYUAXfzb4t
         o028CXIN+4Ofa4UM8B6PhJFmnIme18puLhWbYEmOMHbR97mTwtrmkBRIczssJEFhRO2s
         0QqzKSA+1r+qYq0W16LsB8Ou0hJ7lkXdjb9dYGdV9QBc319Uzz7TglhmqKn5z8gpc/t1
         FQKkFXRiRlWxpcCurzLJxbIkeUxR15z9QtlmUCA1lRKihUMUBoVkGSPyWBhO/oMZWJPw
         itXA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=GDdmAaTmfRPA7/hoydJZPcOqFW6WO2Afa2sA0FosFio=;
        b=bNUBWrFyWg1L0KawxXVYPn8CPmT6v89FiA1Q+1sqgIiuiAt3L7HMhbt+jKJdi4Sn86
         YqXRWiWKaFKtJZMXX1P860x+bufJfwq1B/NbtK6knXsOmjhPeIGjYQhq7RXLSsygQIEp
         b9ypry9Mc9qD1aDS8oRiaCGwsfzR5oH5JvaKCVG2xmjSdPeJQO82c0rVNN6l8kyz0DB5
         kFCFLfN78JoZPCSikzyQKn4kfXdzzAoq8noCZe48mHYX0dUDwJS6rKOPzJTmGSvuMFZ3
         4jV6euxS900YAQXDjEOIXJYLITbaaWUDi/OefFvWrz+CemExP1xfiozBqDRPwxPpSmHt
         1qYQ==
X-Gm-Message-State: AFqh2kq7p4NqOS7OPN62x3U7V3r3ob3j6ELD/lC5Irz1iZuTFJIztu6m
	Mkk71k9451NlZtlWHvOX4h33pwhv5mb6wj6T
X-Google-Smtp-Source: AMrXdXvWG7xcUGOUpgh3YFTt+CPFUf7TwcKA+mL7/QGv6LAVJKwcJ5XHoU93PSXd60PvcRq90rS7Bw==
X-Received: by 2002:ac2:5dce:0:b0:4b5:b988:b409 with SMTP id x14-20020ac25dce000000b004b5b988b409mr18504737lfq.21.1673363885739;
        Tue, 10 Jan 2023 07:18:05 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
Date: Tue, 10 Jan 2023 17:17:55 +0200
Message-Id: <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673362493.git.oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V3:
    - Nothing changed
---
Changes in V2:
    - Remove unneeded now types from <asm/types.h>
---
 xen/arch/riscv/include/asm/types.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/types.h

diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
new file mode 100644
index 0000000000..fbe352ef20
--- /dev/null
+++ b/xen/arch/riscv/include/asm/types.h
@@ -0,0 +1,22 @@
+#ifndef __RISCV_TYPES_H__
+#define __RISCV_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+#if defined(__SIZE_TYPE__)
+typedef __SIZE_TYPE__ size_t;
+#else
+typedef unsigned long size_t;
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_TYPES_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:18:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:18:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474747.736118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGOD-0006rO-Oj; Tue, 10 Jan 2023 15:18:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474747.736118; Tue, 10 Jan 2023 15:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGOD-0006qz-KG; Tue, 10 Jan 2023 15:18:13 +0000
Received: by outflank-mailman (input) for mailman id 474747;
 Tue, 10 Jan 2023 15:18:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFGOB-0005G6-E0
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:18:11 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe5a9a3b-90f9-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 16:18:10 +0100 (CET)
Received: by mail-lf1-x12a.google.com with SMTP id d30so14048152lfv.8
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 07:18:10 -0800 (PST)
Received: from fedora.. ([195.234.76.149]) by smtp.gmail.com with ESMTPSA id
 y19-20020a0565123f1300b00498fc3d4cfdsm2203396lfa.189.2023.01.10.07.18.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 07:18:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe5a9a3b-90f9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0syX+Z8ZsLO1K+qpK4hh/FiHzxvRksi171obTN3jJ5s=;
        b=Seey/dkCQRMk/iJocB/HBc0GgJnYzseGgxmfJDzEi5gt6+ob6lcNEkTYzUt+FwXLLG
         qXUCORaM5iafVcoX8VoTRr5loffmCABlS2bqkcTCtW9lt92+CRD8pzuuuDbRMuvo0nqN
         OFuf+ihwuoohMSMxzc7qjx0pZ7ksSdIjysXs3HvSSHe92nm61P0QwvswqVKUUd4i47Zz
         4cTmCmK0ftorLJzQvV2smQgFZfGCyDDBJYE1WLaJB4f9Ps1JnLDSJDbZ99oArjjNwrXO
         gFknI3JZAL8zCz4R5wsa2pkM6VMdxpcQBk3yYqCy2FpsVfUUOKGMzZegd8sSplxS4+it
         pWnw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0syX+Z8ZsLO1K+qpK4hh/FiHzxvRksi171obTN3jJ5s=;
        b=FZ75vsgLIvgYPXaIxSIiNlWf0Eav7MNooaOKN+ijmY08Mj8xshKi1LHNMiIL2rXRNt
         HVU2HUobHmZVPu0NtnkFcPoaV3dcthIlHawyk2+H0cNlyqE4nMJbox4sJemXF4bNGrGE
         4qVpisfsgZgNRL2HI6eT9fWxH59JV0T/+SWiau30q3nuJWs53TjCXmiB0q9bACs3a8Oa
         GUWJsN9O1bEHN/3+RLfJdagOchrbim0PHrvm1b9h7lZAhSIev6kObrOYDgtMQQ1Y545D
         +aq+GwgV9wrRrls1v/nR2JEz6Dr85YaWSVkyDDydK7/457cK0jeNWc73zZMZh4PMvrQe
         Gj1g==
X-Gm-Message-State: AFqh2kprWF0rW0Y1KmsaygF2UGSxVca1kOZA14SOAMMKVmVwC1wtrRLe
	8hmb8TpbynUEjBSh8TYrZXkZi6jJddwvo2Le
X-Google-Smtp-Source: AMrXdXsFr5BRkDWcCuVCxV1t9ah3Jggix5v9K7BgRPKnGRUnn6vsP+toxhtmFOCv3YC5BZh9d6NRBg==
X-Received: by 2002:a05:6512:400d:b0:4b5:936e:69df with SMTP id br13-20020a056512400d00b004b5936e69dfmr22497064lfb.53.1673363890034;
        Tue, 10 Jan 2023 07:18:10 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v3 6/6] automation: add RISC-V smoke test
Date: Tue, 10 Jan 2023 17:17:59 +0200
Message-Id: <7a7eeba57589465e34be00f3d031866abe53e466.1673362493.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673362493.git.oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the message 'Hello from C env' is present in the log
file, to make sure that the stack is set up and the C part of early
printk is working.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in V3:
    - Nothing changed
    - All of Stefano's comments from the Xen mailing list will be
      addressed in a separate patch outside this patch series.
---
Changes in V2:
    - Move the dockerfile changes to a separate patch and send it to
      the mailing list separately:
        [PATCH] automation: add qemu-system-riscv to riscv64.dockerfile
    - Update test.yaml to wire up smoke test
---
 automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index afd80adfe1..64f47a0ab9 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -54,6 +54,19 @@
   tags:
     - x86_64
 
+.qemu-riscv64:
+  extends: .test-jobs-common
+  variables:
+    CONTAINER: archlinux:riscv64
+    LOGFILE: qemu-smoke-riscv64.log
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - x86_64
+
 .yocto-test:
   extends: .test-jobs-common
   script:
@@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
   needs:
     - debian-unstable-clang-debug
 
+qemu-smoke-riscv64-gcc:
+  extends: .qemu-riscv64
+  script:
+    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - riscv64-cross-gcc
+
 # Yocto test jobs
 yocto-qemuarm64:
   extends: .yocto-test-arm64
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:25:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:25:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474783.736129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGV9-0001mX-MZ; Tue, 10 Jan 2023 15:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474783.736129; Tue, 10 Jan 2023 15:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGV9-0001mQ-IP; Tue, 10 Jan 2023 15:25:23 +0000
Received: by outflank-mailman (input) for mailman id 474783;
 Tue, 10 Jan 2023 15:25:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Y9r=5H=citrix.com=prvs=367aabe44=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pFGPK-0005Fu-9l
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:19:22 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2648f661-90fa-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 16:19:20 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2648f661-90fa-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673363960;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=1vxXrw+ay3HT99goYjfDAu2oDeD/VzXJvgzuA0z439k=;
  b=BFjmJjSMMtyf2feYHUvQz3HVeZR8VZDgt75a1kIiXvkjmXLdAsSpEHGB
   DicRcv+pzhLsE76AzSAtRbldUGJXTMlXiBh/TNqEGzihp4b3fYlEhuQKw
   sFuylNQhPT6kNkKWq7t5/iAV5g0yZzUrj8rNnxr3Buo7Zgiq3OFiTZZcu
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92366405
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:CRPCiaqOfry98Q54OLmzdnqhhyNeBmIeZRIvgKrLsJaIsI4StFCzt
 garIBmEaf3ZZjH0KdpxOtvj/U4DuJ7Ry4cxHAZrrXg9ESIaopuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpAFc+E0/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06W1wUmAWP6gR5weHziFNV/rzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAGkiTw6bnufu+rT4WNllpMMNd86xGIxK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFHU/rSn8/x7pX7WzRetFKSo7tx+2XJxRZ9+LPsLMDUapqBQsA9ckOw9
 zidoTqlWkxy2Nq32AGU3Cur2df0nCbVCbMWGbyc2aNRuQjGroAUIEJPDgbqyRWjsWahX/pPJ
 kpS/TAhxYAi+UruQtTjUhmQpH+fogVaS9dWC/c96gyG1uzT+QnxO4QfZmcfMpp87pZwHGF0k
 A/S9z/0OdBxmOS6aGyF77LMlzXxKxgcD2gsPiheaQRQtrEPv7oPph7IS99iFou8gdv0BSz8z
 li2kcQuu1kApZVVjvvmpDgrlxrp/8GUFVBtum07S0r/tmtEiJiZi5tEALQxxdJJN86nQ1aIp
 xDocODOvblVXflheMFgKdjh/Y1FBd7fa1UwYnY1RfHNEghBHFb9Fb28GBkkeC9U3j8sIFcFm
 nP7twJL/4N0N3C3d6JxaI/ZI510kvO6RYW9Ca2JN4Amjn1NmOmvpnkGiam4hj6FraTRuftnZ
 cfznTiEUR729piLPBLpHrxAgNfHNwg1xH/JRICT8vhU+eP2WZJhcp9caAHmRrlgvMu5TPD9r
 4432z2il08OD4UTo0D/reYuELz9BSFnXsir9pUKL7brz8gPMDhJNsI9CIgJI+RN95m5XM+Sl
 p1hcie0EGbCuEA=
IronPort-HdrOrdr: A9a23:dJU/Y634c0bgCnZ56wzLpwqjBAkkLtp133Aq2lEZdPWaSK2lfq
 eV7ZImPH7P+VEssRQb8+xoV5PsfZqxz/JICMwqTNSftOePghrVEGgg1/qe/9XYcxeOidK1rJ
 0QDZSWaueRMbEKt7ef3ODiKadY/DDvysnB7ts2jU0dLz2CDZsO0+4TMHf/LqQZfmd77LMCZe
 uhz/sCiTq8WGgdKv+2DmMCWIH41qf2vaOjTx4aJgItrDKDhzOw6LL8DnGjr2wjegIK77c+0H
 TP1zf07KW7s/2911v12mLJ445N8eGRuudrNYijitU1Nj6psAquaYh7MofyxAwInA==
X-IronPort-AV: E=Sophos;i="5.96,315,1665460800"; 
   d="scan'208";a="92366405"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH] tools: Fix build with recent QEMU, use "--enable-trace-backends"
Date: Tue, 10 Jan 2023 15:18:54 +0000
Message-ID: <20230110151854.50746-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The configure option "--enable-trace-backend" isn't accepted anymore;
we should use "--enable-trace-backends" instead, which was
introduced in 2014 and allows multiple backends.

"--enable-trace-backends" was introduced by:
    5b808275f3bb ("trace: Multi-backend tracing")
The backward-compatible option "--enable-trace-backend" was removed by
    10229ec3b0ff ("configure: remove backwards-compatibility and obsolete options")

As we already use ./configure options that wouldn't be accepted by
older versions of QEMU's configure, we simply use the new spelling
for the option and avoid trying to detect which spelling to use.

We already make use of "--firmwarepath=", which was introduced by
    3d5eecab4a5a ("Add --firmwarepath to configure"),
which postdates the new "--enable-trace-backends" spelling.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/Makefile b/tools/Makefile
index 9e28027835..4906fdbc23 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -218,9 +218,9 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
 	mkdir -p qemu-xen-build; \
 	cd qemu-xen-build; \
 	if $$source/scripts/tracetool.py --check-backend --backend log ; then \
-		enable_trace_backend='--enable-trace-backend=log'; \
+		enable_trace_backend='--enable-trace-backends=log'; \
 	elif $$source/scripts/tracetool.py --check-backend --backend stderr ; then \
-		enable_trace_backend='--enable-trace-backend=stderr'; \
+		enable_trace_backend='--enable-trace-backends=stderr'; \
 	else \
 		enable_trace_backend='' ; \
 	fi ; \
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:48:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 15:48:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474793.736145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGr4-0004DH-H5; Tue, 10 Jan 2023 15:48:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474793.736145; Tue, 10 Jan 2023 15:48:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFGr4-0004DA-Dm; Tue, 10 Jan 2023 15:48:02 +0000
Received: by outflank-mailman (input) for mailman id 474793;
 Tue, 10 Jan 2023 15:48:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EOFq=5H=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pFGr2-0004D4-SC
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 15:48:00 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 26e5a169-90fe-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 16:47:56 +0100 (CET)
Received: by mail-ej1-x635.google.com with SMTP id ud5so29802976ejc.4
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 07:47:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26e5a169-90fe-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=K9zFij6mcz/5EjbhUw4rVwr2bV1YHqurcc0Zhp61PdA=;
        b=lGnK5ikqxgW2LT21/K1L03RqZw22csPgayOQPywKPrXj/PUk/llgT4yYraB9McgByq
         t4fKoY4aqlf5mCVUsA4e6xu7RE0MiuWoNToUqjykUB0PuPYMT9wTvUZ+nxIZmqGUt54T
         9zcCo9Onl6n3i2/+cXtVUpB7wF5SEmcC/PDsa5E1OJflqcljdXHnu3TDPcYvWmFgU+vx
         UkA9Ema/mEn+vbDPxx80icJIA6+B4Bo2lyPzj3OeEVgE8oEGMDQsA4Skw+ToRK1GcR3a
         0kf/QuO0B0gNC8/sp1dlsFRKknnSMQbv82jltuQ7g1mVdPHm3mgL1VR9F6j0sgtQPDSA
         XWFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=K9zFij6mcz/5EjbhUw4rVwr2bV1YHqurcc0Zhp61PdA=;
        b=ql55Wj1/qhYKPbecTkYT7Uqe7sXmZEi1SgkcnayK+S8Rm602BAZxHZPUJyN3h3zRNS
         jkqTbVelpL3cPQla5R/3qR/+LIeUYlP1Q0dTk0SNz90Kj81sLua6juUFfsucpWxiAy9V
         s0XbDW1whtWZLM4RTgAxGipsFM/sUMWf3X0Rin/WeKpQ4n7XJuAoGoG5IsAcCtK2+E1k
         U51LDZVMzDPUzqZ7nzJeVgrusULlOY8e8lsZHqq4XHSc2mI3YCW7vnRznjzbaNkZvE4K
         ZJ5v+6yyhb9EA+qAi6347GWOmPmiLZXbn85Wksdbc0IZOLUW8dSOf4NQJZCl3fUSecqg
         jkIw==
X-Gm-Message-State: AFqh2kqHyE6zrzHc+WG4YbSQ9hiSjpaUq/c3VWao59+lNq5+0PlMfynQ
	qFocHpuNEds8A5KAdeqY4i+U9ELt9NWO6oAHsVc=
X-Google-Smtp-Source: AMrXdXtZCjmz2eEYGUuQtsjAmvIANrsSpe4d8dJvKJijER8DPcqmuDmusth+yawebeTjAUFsi8iKeAQdCI4B1/trC0s=
X-Received: by 2002:a17:906:3b5a:b0:82c:356c:c4c8 with SMTP id
 h26-20020a1709063b5a00b0082c356cc4c8mr3777148ejf.649.1673365676310; Tue, 10
 Jan 2023 07:47:56 -0800 (PST)
MIME-Version: 1.0
References: <20230110151854.50746-1-anthony.perard@citrix.com>
In-Reply-To: <20230110151854.50746-1-anthony.perard@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 10 Jan 2023 10:47:43 -0500
Message-ID: <CAKf6xptZruBdm+dRLtqan+uFTCyMdpscRRasQBeeiRno1WJXUQ@mail.gmail.com>
Subject: Re: [XEN PATCH] tools: Fix build with recent QEMU, use "--enable-trace-backends"
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 10, 2023 at 10:25 AM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> The configure option "--enable-trace-backend" isn't accepted anymore
> and we should use "--enable-trace-backends" instead, which was
> introduced in 2014 and allows multiple backends.
>
> "--enable-trace-backends" was introduced by:
>     5b808275f3bb ("trace: Multi-backend tracing")
> The backward-compatible option "--enable-trace-backend" was removed by
>     10229ec3b0ff ("configure: remove backwards-compatibility and obsolete options")
>
> As we already use ./configure options that wouldn't be accepted by
> older versions of QEMU's configure, we will simply use the new spelling
> for the option and avoid trying to detect which spelling to use.
>
> We already make use of "--firmwarepath=", which was introduced by
>     3d5eecab4a5a ("Add --firmwarepath to configure")
> and which already includes the new spelling "--enable-trace-backends".
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 15:57:57 2023
Message-ID: <2420503a-48d3-f975-beea-7e37e4ecd04a@suse.com>
Date: Tue, 10 Jan 2023 16:57:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 5/6] xen/riscv: introduce early_printk basic stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <25cd3586fa51980279f25a82eed5ded1622bc212.1673362493.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <25cd3586fa51980279f25a82eed5ded1622bc212.1673362493.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 10.01.2023 16:17, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/Kconfig.debug
> +++ b/xen/arch/riscv/Kconfig.debug
> @@ -0,0 +1,7 @@
> +config EARLY_PRINTK
> +    bool "Enable early printk"
> +    default DEBUG
> +    depends on RISCV_64 || RISCV_32

You're in a RISC-V-specific Kconfig - do you really need this line?

> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> + */
> +#include <asm/early_printk.h>
> +#include <asm/sbi.h>
> +
> +/*
> + * TODO:
> + *   sbi_console_putchar is already planned for deprecation
> + *   so it should be reworked to use UART directly.
> +*/
> +void early_puts(const char *s, size_t nr)
> +{
> +    while ( nr-- > 0 )
> +    {
> +        if (*s == '\n')

Nit (style): Missing blanks.

> +            sbi_console_putchar('\r');
> +        sbi_console_putchar(*s);
> +        s++;
> +    }
> +}
> +
> +void early_printk(const char *str)
> +{
> +    while (*str)

Again.

> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -1,12 +1,16 @@
>  #include <xen/compile.h>
>  #include <xen/init.h>
>  
> +#include <asm/early_printk.h>
> +
>  /* Xen stack for bringing up the first CPU. */
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
>      __aligned(STACK_SIZE);
>  
>  void __init noreturn start_xen(void)
>  {
> +    early_printk("Hello from C env\n");
> +
>      for ( ;; )
>          asm volatile ("wfi");

While this is only context here, it affects an earlier patch in the
series; this wants to be

    for ( ; ; )
        asm volatile ( "wfi" );

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 16:25:49 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH] automation: rename RISCV_64 container and jobs
Date: Tue, 10 Jan 2023 18:25:23 +0200
Message-Id: <cea2d287fd65033d8631bf9905ad00652bf11035.1673367923.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

All RISCV_64-related stuff was renamed to be consistent with
ARM (arm32 is cross-built, as RISCV_64 is).

The patch is based on the following patch series:
[PATCH *] Basic early_printk and smoke test implementation

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 ...v64.dockerfile => current-riscv64.dockerfile} |  0
 automation/gitlab-ci/build.yaml                  | 16 ++++++++--------
 automation/gitlab-ci/test.yaml                   |  4 ++--
 automation/scripts/containerize                  |  2 +-
 4 files changed, 11 insertions(+), 11 deletions(-)
 rename automation/build/archlinux/{riscv64.dockerfile => current-riscv64.dockerfile} (100%)

diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/current-riscv64.dockerfile
similarity index 100%
rename from automation/build/archlinux/riscv64.dockerfile
rename to automation/build/archlinux/current-riscv64.dockerfile
diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index 6784974619..7ccd153375 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -647,31 +647,31 @@ alpine-3.12-gcc-debug-arm64-boot-cpupools:
       CONFIG_BOOT_TIME_CPUPOOLS=y
 
 # RISC-V 64 cross-build
-riscv64-cross-gcc:
+archlinux-current-gcc-riscv64:
   extends: .gcc-riscv64-cross-build
   variables:
-    CONTAINER: archlinux:riscv64
+    CONTAINER: archlinux:current-riscv64
     KBUILD_DEFCONFIG: tiny64_defconfig
     HYPERVISOR_ONLY: y
 
-riscv64-cross-gcc-debug:
+archlinux-current-gcc-riscv64-debug:
   extends: .gcc-riscv64-cross-build-debug
   variables:
-    CONTAINER: archlinux:riscv64
+    CONTAINER: archlinux:current-riscv64
     KBUILD_DEFCONFIG: tiny64_defconfig
     HYPERVISOR_ONLY: y
 
-riscv64-cross-gcc-randconfig:
+archlinux-current-gcc-riscv64-randconfig:
   extends: .gcc-riscv64-cross-build
   variables:
-    CONTAINER: archlinux:riscv64
+    CONTAINER: archlinux:current-riscv64
     KBUILD_DEFCONFIG: tiny64_defconfig
     RANDCONFIG: y
 
-riscv64-cross-gcc-debug-randconfig:
+archlinux-current-gcc-riscv64-debug-randconfig:
   extends: .gcc-riscv64-cross-build-debug
   variables:
-    CONTAINER: archlinux:riscv64
+    CONTAINER: archlinux:current-riscv64
     KBUILD_DEFCONFIG: tiny64_defconfig
     RANDCONFIG: y
 
diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 64f47a0ab9..4ca3e54862 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -57,7 +57,7 @@
 .qemu-riscv64:
   extends: .test-jobs-common
   variables:
-    CONTAINER: archlinux:riscv64
+    CONTAINER: archlinux:current-riscv64
     LOGFILE: qemu-smoke-riscv64.log
   artifacts:
     paths:
@@ -252,7 +252,7 @@ qemu-smoke-riscv64-gcc:
   script:
     - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
   needs:
-    - riscv64-cross-gcc
+    - archlinux-current-gcc-riscv64
 
 # Yocto test jobs
 yocto-qemuarm64:
diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index 0f4645c4cc..9e508918bf 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -27,7 +27,7 @@ case "_${CONTAINER}" in
     _alpine) CONTAINER="${BASE}/alpine:3.12" ;;
     _alpine-arm64v8) CONTAINER="${BASE}/alpine:3.12-arm64v8" ;;
     _archlinux|_arch) CONTAINER="${BASE}/archlinux:current" ;;
-    _riscv64) CONTAINER="${BASE}/archlinux:riscv64" ;;
+    _riscv64) CONTAINER="${BASE}/archlinux:current-riscv64" ;;
     _centos7) CONTAINER="${BASE}/centos:7" ;;
     _centos72) CONTAINER="${BASE}/centos:7.2" ;;
     _fedora) CONTAINER="${BASE}/fedora:29";;
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 16:26:24 2023
Message-ID: <80b2ea4d-d52b-14c5-38ea-b8ab7e3a713c@suse.com>
Date: Tue, 10 Jan 2023 17:26:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] x86/vmx: Calculate model-specific LBRs once at start
 of day
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230109120828.344-1-andrew.cooper3@citrix.com>
 <20230109120828.344-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230109120828.344-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 09.01.2023 13:08, Andrew Cooper wrote:
> There is no point repeating this calculation at runtime, especially as it is
> in the fallback path of the WRSMR/RDMSR handlers.
> 
> Move the infrastructure higher in vmx.c to avoid forward declarations,
> renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
> these are model-specific only.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one nit:

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -396,6 +396,142 @@ void vmx_pi_hooks_deassign(struct domain *d)
>      domain_unpause(d);
>  }
>  
> +static const struct lbr_info {
> +    u32 base, count;
> +} p4_lbr[] = {
> +    { MSR_P4_LER_FROM_LIP,          1 },
> +    { MSR_P4_LER_TO_LIP,            1 },
> +    { MSR_P4_LASTBRANCH_TOS,        1 },
> +    { MSR_P4_LASTBRANCH_0_FROM_LIP, NUM_MSR_P4_LASTBRANCH_FROM_TO },
> +    { MSR_P4_LASTBRANCH_0_TO_LIP,   NUM_MSR_P4_LASTBRANCH_FROM_TO },
> +    { 0, 0 }
> +}, c2_lbr[] = {
> +    { MSR_IA32_LASTINTFROMIP,       1 },
> +    { MSR_IA32_LASTINTTOIP,         1 },
> +    { MSR_C2_LASTBRANCH_TOS,        1 },
> +    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_C2_LASTBRANCH_FROM_TO },
> +    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_C2_LASTBRANCH_FROM_TO },
> +    { 0, 0 }
> +}, nh_lbr[] = {
> +    { MSR_IA32_LASTINTFROMIP,       1 },
> +    { MSR_IA32_LASTINTTOIP,         1 },
> +    { MSR_NHL_LBR_SELECT,           1 },
> +    { MSR_NHL_LASTBRANCH_TOS,       1 },
> +    { MSR_P4_LASTBRANCH_0_FROM_LIP, NUM_MSR_P4_LASTBRANCH_FROM_TO },
> +    { MSR_P4_LASTBRANCH_0_TO_LIP,   NUM_MSR_P4_LASTBRANCH_FROM_TO },
> +    { 0, 0 }
> +}, sk_lbr[] = {
> +    { MSR_IA32_LASTINTFROMIP,       1 },
> +    { MSR_IA32_LASTINTTOIP,         1 },
> +    { MSR_NHL_LBR_SELECT,           1 },
> +    { MSR_NHL_LASTBRANCH_TOS,       1 },
> +    { MSR_SKL_LASTBRANCH_0_FROM_IP, NUM_MSR_SKL_LASTBRANCH },
> +    { MSR_SKL_LASTBRANCH_0_TO_IP,   NUM_MSR_SKL_LASTBRANCH },
> +    { MSR_SKL_LASTBRANCH_0_INFO,    NUM_MSR_SKL_LASTBRANCH },
> +    { 0, 0 }
> +}, at_lbr[] = {
> +    { MSR_IA32_LASTINTFROMIP,       1 },
> +    { MSR_IA32_LASTINTTOIP,         1 },
> +    { MSR_C2_LASTBRANCH_TOS,        1 },
> +    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
> +    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
> +    { 0, 0 }
> +}, sm_lbr[] = {
> +    { MSR_IA32_LASTINTFROMIP,       1 },
> +    { MSR_IA32_LASTINTTOIP,         1 },
> +    { MSR_SM_LBR_SELECT,            1 },
> +    { MSR_SM_LASTBRANCH_TOS,        1 },
> +    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
> +    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
> +    { 0, 0 }
> +}, gm_lbr[] = {
> +    { MSR_IA32_LASTINTFROMIP,       1 },
> +    { MSR_IA32_LASTINTTOIP,         1 },
> +    { MSR_SM_LBR_SELECT,            1 },
> +    { MSR_SM_LASTBRANCH_TOS,        1 },
> +    { MSR_GM_LASTBRANCH_0_FROM_IP,  NUM_MSR_GM_LASTBRANCH_FROM_TO },
> +    { MSR_GM_LASTBRANCH_0_TO_IP,    NUM_MSR_GM_LASTBRANCH_FROM_TO },
> +    { 0, 0 }
> +};
> +static const struct lbr_info * __ro_after_init model_specific_lbr;
> +
> +static const struct __init lbr_info *get_model_specific_lbr(void)

Please move __init:

static const struct lbr_info *__init get_model_specific_lbr(void)

Jan
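
Jan's nit concerns where an annotation like __init may sit in a declaration
returning a pointer.  A minimal compile-able sketch of the wrong and right
placements, using a harmless no-op attribute as a stand-in for Xen's real
__init section annotation (names here are illustrative, not Xen's):

```c
#include <stddef.h>

/* Stand-in for Xen's __init, so this sketch builds outside the tree. */
#define demo_init __attribute__((__unused__))

struct lbr_info {
    unsigned int base, count;
};

static const struct lbr_info nil_lbr[] = { { 0, 0 } };

/*
 * Wrong (as in the patch): "struct demo_init lbr_info" attaches the
 * attribute to the struct type, not to the function:
 *
 *   static const struct demo_init lbr_info *get_lbr(void);
 */

/* Right (Jan's suggestion): the attribute follows the '*', so it
 * annotates the function itself. */
static const struct lbr_info *demo_init get_lbr(void)
{
    return nil_lbr;
}
```

This mirrors the kernel/Xen convention of writing the section attribute
immediately before the function name in pointer-returning declarations.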


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 16:28:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 16:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474819.736190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFHUY-0002O0-Di; Tue, 10 Jan 2023 16:28:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474819.736190; Tue, 10 Jan 2023 16:28:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFHUY-0002Nt-A2; Tue, 10 Jan 2023 16:28:50 +0000
Received: by outflank-mailman (input) for mailman id 474819;
 Tue, 10 Jan 2023 16:28:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dQxz=5H=citrix.com=prvs=367c7493a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFHUW-0002Nj-N7
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 16:28:48 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7780029-9103-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 17:28:45 +0100 (CET)
Received: from mail-sn1nam02lp2040.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 Jan 2023 11:28:34 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6152.namprd03.prod.outlook.com (2603:10b6:208:319::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Tue, 10 Jan
 2023 16:28:30 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 16:28:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7780029-9103-11ed-b8d0-410ff93cb8f0
X-IronPort-RemoteIP: 104.47.57.40
X-IronPort-MID: 92002746
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,315,1665460800"; 
   d="scan'208";a="92002746"
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 1/2] x86/vmx: Calculate model-specific LBRs once at start
 of day
Thread-Topic: [PATCH 1/2] x86/vmx: Calculate model-specific LBRs once at start
 of day
Thread-Index: AQHZJCMapaYe8MLA1UOWoaXJDH0eEq6X2D6AgAAAnoA=
Date: Tue, 10 Jan 2023 16:28:28 +0000
Message-ID: <433003aa-e3a9-b717-76f2-c62b0a31003f@citrix.com>
References: <20230109120828.344-1-andrew.cooper3@citrix.com>
 <20230109120828.344-2-andrew.cooper3@citrix.com>
 <80b2ea4d-d52b-14c5-38ea-b8ab7e3a713c@suse.com>
In-Reply-To: <80b2ea4d-d52b-14c5-38ea-b8ab7e3a713c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BL1PR03MB6152:EE_
x-ms-office365-filtering-correlation-id: 3051b1bd-e71d-42d8-3f47-08daf327b491
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <852B1E7D7C17AC468791B1F49CE3F942@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3051b1bd-e71d-42d8-3f47-08daf327b491
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jan 2023 16:28:28.4569
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: sXPl21UYC0HtA2lVbGt2Azk93qhqmMczRSa0SaSlTxIHaLrFzLZFRmZ+puZGgZYWMExwqYqEXcSKwCX1vakRLTRw2e/iBv4yHUwIszg2zsI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6152

On 10/01/2023 4:26 pm, Jan Beulich wrote:
> On 09.01.2023 13:08, Andrew Cooper wrote:
>> There is no point repeating this calculation at runtime, especially as it is
>> in the fallback path of the WRSMR/RDMSR handlers.
>>
>> Move the infrastructure higher in vmx.c to avoid forward declarations,
>> renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
>> these are model-specific only.
>>
>> No practical change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one nit:
>
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -396,6 +396,142 @@ void vmx_pi_hooks_deassign(struct domain *d)
>>      domain_unpause(d);
>>  }
>>  
>> +static const struct lbr_info {
>> +    u32 base, count;
>> +} p4_lbr[] = {
>> +    { MSR_P4_LER_FROM_LIP,          1 },
>> +    { MSR_P4_LER_TO_LIP,            1 },
>> +    { MSR_P4_LASTBRANCH_TOS,        1 },
>> +    { MSR_P4_LASTBRANCH_0_FROM_LIP, NUM_MSR_P4_LASTBRANCH_FROM_TO },
>> +    { MSR_P4_LASTBRANCH_0_TO_LIP,   NUM_MSR_P4_LASTBRANCH_FROM_TO },
>> +    { 0, 0 }
>> +}, c2_lbr[] = {
>> +    { MSR_IA32_LASTINTFROMIP,       1 },
>> +    { MSR_IA32_LASTINTTOIP,         1 },
>> +    { MSR_C2_LASTBRANCH_TOS,        1 },
>> +    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_C2_LASTBRANCH_FROM_TO },
>> +    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_C2_LASTBRANCH_FROM_TO },
>> +    { 0, 0 }
>> +}, nh_lbr[] = {
>> +    { MSR_IA32_LASTINTFROMIP,       1 },
>> +    { MSR_IA32_LASTINTTOIP,         1 },
>> +    { MSR_NHL_LBR_SELECT,           1 },
>> +    { MSR_NHL_LASTBRANCH_TOS,       1 },
>> +    { MSR_P4_LASTBRANCH_0_FROM_LIP, NUM_MSR_P4_LASTBRANCH_FROM_TO },
>> +    { MSR_P4_LASTBRANCH_0_TO_LIP,   NUM_MSR_P4_LASTBRANCH_FROM_TO },
>> +    { 0, 0 }
>> +}, sk_lbr[] = {
>> +    { MSR_IA32_LASTINTFROMIP,       1 },
>> +    { MSR_IA32_LASTINTTOIP,         1 },
>> +    { MSR_NHL_LBR_SELECT,           1 },
>> +    { MSR_NHL_LASTBRANCH_TOS,       1 },
>> +    { MSR_SKL_LASTBRANCH_0_FROM_IP, NUM_MSR_SKL_LASTBRANCH },
>> +    { MSR_SKL_LASTBRANCH_0_TO_IP,   NUM_MSR_SKL_LASTBRANCH },
>> +    { MSR_SKL_LASTBRANCH_0_INFO,    NUM_MSR_SKL_LASTBRANCH },
>> +    { 0, 0 }
>> +}, at_lbr[] = {
>> +    { MSR_IA32_LASTINTFROMIP,       1 },
>> +    { MSR_IA32_LASTINTTOIP,         1 },
>> +    { MSR_C2_LASTBRANCH_TOS,        1 },
>> +    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
>> +    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
>> +    { 0, 0 }
>> +}, sm_lbr[] = {
>> +    { MSR_IA32_LASTINTFROMIP,       1 },
>> +    { MSR_IA32_LASTINTTOIP,         1 },
>> +    { MSR_SM_LBR_SELECT,            1 },
>> +    { MSR_SM_LASTBRANCH_TOS,        1 },
>> +    { MSR_C2_LASTBRANCH_0_FROM_IP,  NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
>> +    { MSR_C2_LASTBRANCH_0_TO_IP,    NUM_MSR_ATOM_LASTBRANCH_FROM_TO },
>> +    { 0, 0 }
>> +}, gm_lbr[] = {
>> +    { MSR_IA32_LASTINTFROMIP,       1 },
>> +    { MSR_IA32_LASTINTTOIP,         1 },
>> +    { MSR_SM_LBR_SELECT,            1 },
>> +    { MSR_SM_LASTBRANCH_TOS,        1 },
>> +    { MSR_GM_LASTBRANCH_0_FROM_IP,  NUM_MSR_GM_LASTBRANCH_FROM_TO },
>> +    { MSR_GM_LASTBRANCH_0_TO_IP,    NUM_MSR_GM_LASTBRANCH_FROM_TO },
>> +    { 0, 0 }
>> +};
>> +static const struct lbr_info * __ro_after_init model_specific_lbr;
>> +
>> +static const struct __init lbr_info *get_model_specific_lbr(void)
> Please move __init:
>
> static const struct lbr_info *__init get_model_specific_lbr(void)

Yeah, I noticed and fixed both style errors here.  Also an extraneous
space before __ro_after_init.

~Andrew
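
The pattern being agreed on above — look the model-specific LBR table up once
during boot and cache it in a pointer that is read-only thereafter — can be
sketched roughly as follows.  This is a simplified illustration, not Xen's
actual code: the MSR indices and family/model numbers are placeholders, and
the real pointer carries __ro_after_init while the lookup is __init.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for Xen's struct lbr_info: a base MSR index plus
 * the number of consecutive MSRs, ending with a { 0, 0 } sentinel. */
struct lbr_info {
    uint32_t base, count;
};

/* One illustrative per-model table (placeholder indices, not the real
 * MSR numbers). */
static const struct lbr_info nh_lbr[] = {
    { 0x1c9, 1 },   /* TOS, illustrative */
    { 0x680, 16 },  /* FROM stack, illustrative */
    { 0x6c0, 16 },  /* TO stack, illustrative */
    { 0, 0 }
};

/* Filled in exactly once at start of day; Xen marks the real pointer
 * __ro_after_init so it cannot be modified later. */
static const struct lbr_info *model_specific_lbr;

/* Boot-time lookup (the real function is __init).  Family/model values
 * here are illustrative only. */
static const struct lbr_info *get_model_specific_lbr(unsigned int family,
                                                     unsigned int model)
{
    if ( family == 6 )
        switch ( model )
        {
        case 0x1a: case 0x1e: case 0x1f: /* illustrative models */
            return nh_lbr;
        }

    return NULL; /* No model-specific LBRs known for this CPU. */
}

/* Called once during setup; the MSR intercept fallback path can then
 * just read model_specific_lbr instead of redoing the switch on every
 * RDMSR/WRMSR exit. */
static void lbr_init(unsigned int family, unsigned int model)
{
    model_specific_lbr = get_model_specific_lbr(family, model);
}
```

The point of the patch is exactly this hoisting: the switch runs once at
boot rather than on each trip through the MSR handlers' fallback path.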


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 16:28:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 16:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474820.736201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFHUf-0002gl-OK; Tue, 10 Jan 2023 16:28:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474820.736201; Tue, 10 Jan 2023 16:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFHUf-0002ge-LW; Tue, 10 Jan 2023 16:28:57 +0000
Received: by outflank-mailman (input) for mailman id 474820;
 Tue, 10 Jan 2023 16:28:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+HTt=5H=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFHUd-0002Nj-V0
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 16:28:56 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2042.outbound.protection.outlook.com [40.107.7.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df73bbd6-9103-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 17:28:53 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7087.eurprd04.prod.outlook.com (2603:10a6:800:12a::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 16:28:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 16:28:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df73bbd6-9103-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a7157543-3132-eda8-080a-ce8afcf43f68@suse.com>
Date: Tue, 10 Jan 2023 17:28:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/2] x86/vmx: Support for CPUs without model-specific LBR
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230109120828.344-1-andrew.cooper3@citrix.com>
 <20230109120828.344-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230109120828.344-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0094.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7087:EE_
X-MS-Office365-Filtering-Correlation-Id: 82541215-5bf8-4f39-97d2-08daf327c26d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 16:28:51.8832
 (UTC)

On 09.01.2023 13:08, Andrew Cooper wrote:
> Ice Lake (server at least) has both Arch LBR and model-specific LBR.  Sapphire
> Rapids does not have model-specific LBR at all.  I.e., on SPR and later,
> model_specific_lbr will always be NULL, so we must make changes to avoid
> reliably hitting the domain_crash().
> 
> The Arch LBR spec states that CPUs without model-specific LBR implement
> MSR_DBG_CTL.LBR by discarding writes and always returning 0.
> 
> Do this for any CPU for which we lack model-specific LBR information.
> 
> Adjust the now-stale comment, now that the Arch LBR spec has created a way to
> signal "no model specific LBR" to guests.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 16:37:55 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175702-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175702: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=717f35a9f2d883a74998f7deb3d2cdf95bddf039
X-Osstest-Versions-That:
    ovmf=82dd766f25225b0812bbac628c60d2b48f2346e4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 16:37:49 +0000

flight 175702 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175702/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 717f35a9f2d883a74998f7deb3d2cdf95bddf039
baseline version:
 ovmf                 82dd766f25225b0812bbac628c60d2b48f2346e4

Last test of basis   175686  2023-01-10 06:40:46 Z    0 days
Testing same since   175702  2023-01-10 14:40:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Moritz Fischer <moritzf@google.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   82dd766f25..717f35a9f2  717f35a9f2d883a74998f7deb3d2cdf95bddf039 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 16:38:12 2023
Message-ID: <9e32ffa1-1499-f9cd-7ca8-f9493b1269cb@suse.com>
Date: Tue, 10 Jan 2023 17:38:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
Content-Language: en-US
To: Wei Chen <wei.chen@arm.com>
Cc: nd@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-3-wei.chen@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110084930.1095203-3-wei.chen@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.01.2023 09:49, Wei Chen wrote:
> --- a/xen/arch/arm/include/asm/numa.h
> +++ b/xen/arch/arm/include/asm/numa.h
> @@ -22,6 +22,12 @@ typedef u8 nodeid_t;
>   */
>  #define NR_NODE_MEMBLKS NR_MEM_BANKS
>  
> +enum dt_numa_status {
> +    DT_NUMA_INIT,

I don't see any use of this. I also think the name isn't good, as INIT
can be taken for "initializer" as well as "initialized". Suggesting an
alternative would require knowing what the future plans with this are;
right now ...

> +    DT_NUMA_ON,
> +    DT_NUMA_OFF,
> +};

... the other two are also used only in a single file, at which point
their placing in a header is also questionable.

> --- /dev/null
> +++ b/xen/arch/arm/numa.c
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Arm Architecture support layer for NUMA.
> + *
> + * Copyright (C) 2021 Arm Ltd
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + *
> + */
> +#include <xen/init.h>
> +#include <xen/numa.h>
> +
> +static enum dt_numa_status __read_mostly device_tree_numa;

__ro_after_init?

> --- a/xen/arch/x86/include/asm/numa.h
> +++ b/xen/arch/x86/include/asm/numa.h
> @@ -12,7 +12,6 @@ extern unsigned int numa_node_to_arch_nid(nodeid_t n);
>  
>  #define ZONE_ALIGN (1UL << (MAX_ORDER+PAGE_SHIFT))
>  
> -extern bool numa_disabled(void);
>  extern nodeid_t setup_node(unsigned int pxm);
>  extern void srat_detect_node(int cpu);
>  
> --- a/xen/include/xen/numa.h
> +++ b/xen/include/xen/numa.h
> @@ -55,6 +55,7 @@ extern void numa_init_array(void);
>  extern void numa_set_node(unsigned int cpu, nodeid_t node);
>  extern void numa_initmem_init(unsigned long start_pfn, unsigned long end_pfn);
>  extern void numa_fw_bad(void);
> +extern bool numa_disabled(void);
>  
>  extern int arch_numa_setup(const char *opt);
>  extern bool arch_numa_unavailable(void);

How is this movement of a declaration related to the subject of the patch?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 16:47:30 2023
Message-ID: <9fd67aa2-0bd5-16a2-1e19-139504c2090f@suse.com>
Date: Tue, 10 Jan 2023 17:47:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Wei Chen <wei.chen@arm.com>
Cc: nd@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-4-wei.chen@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110084930.1095203-4-wei.chen@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7254ad4d-955c-434f-af84-08daf32a575f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 16:47:21.2504
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7943

On 10.01.2023 09:49, Wei Chen wrote:
> --- a/xen/arch/arm/include/asm/numa.h
> +++ b/xen/arch/arm/include/asm/numa.h
> @@ -28,6 +28,20 @@ enum dt_numa_status {
>      DT_NUMA_OFF,
>  };
>  
> +/*
> + * In the ACPI spec, 0-9 are reserved values for node distance,
> + * 10 indicates local node distance, and 20 indicates remote node
> + * distance. Setting the node distance map from device tree follows
> + * the ACPI definitions.
> + */
> +#define NUMA_DISTANCE_UDF_MIN   0
> +#define NUMA_DISTANCE_UDF_MAX   9
> +#define NUMA_LOCAL_DISTANCE     10
> +#define NUMA_REMOTE_DISTANCE    20

In the absence of a caller of numa_set_distance() it is entirely unclear
whether this tying to the values used by ACPI is actually appropriate.
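For reference, the validity rules the quoted hunk encodes (which hinge on the ACPI SLIT convention being questioned here) can be sketched as a standalone C fragment. The helper name and the re-declared constants are illustrative only, not the actual Xen code:

```c
#include <assert.h>
#include <stdbool.h>

/* Re-declared locally so the sketch is self-contained (see the quoted hunk). */
#define NUMA_DISTANCE_UDF_MAX 9    /* 0-9 are reserved by the ACPI spec */
#define NUMA_LOCAL_DISTANCE   10
#define NUMA_NO_DISTANCE      0xFF /* unreachable */

/* Hypothetical helper mirroring the checks in numa_set_distance(). */
bool numa_distance_is_valid(unsigned int from, unsigned int to,
                            unsigned int distance)
{
    /* 0xFF and above cannot be stored in the unsigned char distance map. */
    if ( distance >= NUMA_NO_DISTANCE )
        return false;

    /* The reserved range 0-9 is rejected. */
    if ( distance <= NUMA_DISTANCE_UDF_MAX )
        return false;

    /* A node's distance to itself must be the local distance. */
    if ( from == to && distance != NUMA_LOCAL_DISTANCE )
        return false;

    return true;
}
```

Note that the `distance >= NUMA_DISTANCE_UDF_MIN` half of the quoted condition is vacuously true for an unsigned value, so only the upper bound of the reserved range actually matters.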

> --- a/xen/arch/arm/numa.c
> +++ b/xen/arch/arm/numa.c
> @@ -2,7 +2,7 @@
>  /*
>   * Arm Architecture support layer for NUMA.
>   *
> - * Copyright (C) 2021 Arm Ltd
> + * Copyright (C) 2022 Arm Ltd

I don't think it makes sense to change such a copyright date after the
fact. And certainly not in an unrelated patch.

> @@ -22,6 +22,11 @@
>  
>  static enum dt_numa_status __read_mostly device_tree_numa;
>  
> +static unsigned char __read_mostly
> +node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
> +    { 0 }
> +};

__ro_after_init?

> @@ -42,3 +47,48 @@ int __init arch_numa_setup(const char *opt)
>  {
>      return -EINVAL;
>  }
> +
> +void __init numa_set_distance(nodeid_t from, nodeid_t to,
> +                              unsigned int distance)
> +{
> +    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
> +    {
> +        printk(KERN_WARNING
> +               "NUMA: invalid nodes: from=%"PRIu8" to=%"PRIu8" MAX=%"PRIu8"\n",
> +               from, to, MAX_NUMNODES);
> +        return;
> +    }
> +
> +    /* NUMA defines 0xff as an unreachable node and 0-9 are undefined */
> +    if ( distance >= NUMA_NO_DISTANCE ||
> +        (distance >= NUMA_DISTANCE_UDF_MIN &&

Nit: Indentation.

> +         distance <= NUMA_DISTANCE_UDF_MAX) ||
> +        (from == to && distance != NUMA_LOCAL_DISTANCE) )
> +    {
> +        printk(KERN_WARNING
> +               "NUMA: invalid distance: from=%"PRIu8" to=%"PRIu8" distance=%"PRIu32"\n",
> +               from, to, distance);
> +        return;
> +    }
> +
> +    node_distance_map[from][to] = distance;
> +}
> +
> +unsigned char __node_distance(nodeid_t from, nodeid_t to)
> +{
> +    /* When NUMA is off, any distance will be treated as remote. */
> +    if ( numa_disabled() )
> +        return NUMA_REMOTE_DISTANCE;
> +
> +    /*
> +     * Check whether the nodes are within the matrix range.
> +     * When any node is out of range, unless the from and to nodes are
> +     * the same, we treat them as unreachable (return 0xFF).
> +     */
> +    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )

I guess using ARRAY_SIZE() here would be more future-proof.
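A minimal, self-contained illustration of that suggestion (with placeholder values and names re-declared here; not the actual Xen sources, and the numa_disabled() short-circuit is omitted):

```c
#include <assert.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Small placeholder values, purely for illustration. */
#define MAX_NUMNODES         4
#define NUMA_LOCAL_DISTANCE  10
#define NUMA_NO_DISTANCE     0xFF

static unsigned char node_distance_map[MAX_NUMNODES][MAX_NUMNODES];

unsigned char node_distance(unsigned int from, unsigned int to)
{
    /*
     * Bounds-check against the array itself: if the map's dimensions
     * ever change, the check follows automatically.
     */
    if ( from >= ARRAY_SIZE(node_distance_map) ||
         to >= ARRAY_SIZE(node_distance_map[0]) )
        /* Out-of-range nodes are unreachable, except a node from itself. */
        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;

    return node_distance_map[from][to];
}
```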

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 16:58:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 16:58:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474853.736245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFHww-0008Ba-R1; Tue, 10 Jan 2023 16:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474853.736245; Tue, 10 Jan 2023 16:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFHww-0008BT-O1; Tue, 10 Jan 2023 16:58:10 +0000
Received: by outflank-mailman (input) for mailman id 474853;
 Tue, 10 Jan 2023 16:58:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+HTt=5H=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFHwv-0008BL-0l
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 16:58:09 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2080.outbound.protection.outlook.com [40.107.249.80])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f39dcc7e-9107-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 17:58:06 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8867.eurprd04.prod.outlook.com (2603:10a6:20b:42e::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 16:58:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 16:58:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f39dcc7e-9107-11ed-b8d0-410ff93cb8f0
Message-ID: <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
Date: Tue, 10 Jan 2023 17:58:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0119.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 09b03cab-4929-4747-73a8-08daf32bd6f8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 16:58:04.3815
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8867

On 10.01.2023 16:17, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V3:
>     - Nothing changed
> ---
> Changes in V2:
>     - Remove unneeded now types from <asm/types.h>

I guess you went a little too far: types used in common code, even if you
do not build that yet, will want declaring right away, imo. Of course I
should finally try to get rid of at least some of the being-phased-out
ones (s8 and s16 look to be relatively low-hanging fruit, for example,
and of these only s16 looks to be used in common code) ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:03:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 17:03:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474859.736256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFI1c-0001DB-DF; Tue, 10 Jan 2023 17:03:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474859.736256; Tue, 10 Jan 2023 17:03:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFI1c-0001D4-9w; Tue, 10 Jan 2023 17:03:00 +0000
Received: by outflank-mailman (input) for mailman id 474859;
 Tue, 10 Jan 2023 17:02:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+HTt=5H=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFI1b-0001Cy-7y
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 17:02:59 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2087.outbound.protection.outlook.com [40.107.241.87])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1267c03-9108-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 18:02:57 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8867.eurprd04.prod.outlook.com (2603:10a6:20b:42e::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Tue, 10 Jan
 2023 17:02:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.5986.018; Tue, 10 Jan 2023
 17:02:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1267c03-9108-11ed-b8d0-410ff93cb8f0
Message-ID: <891d0830-7fdc-202a-5f12-2364cae5bce5@suse.com>
Date: Tue, 10 Jan 2023 18:02:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 1/6] xen/riscv: introduce dummy asm/init.h
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <b1585373e39a7cbe023f485aa5a04b093e25ec80.1673362493.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b1585373e39a7cbe023f485aa5a04b093e25ec80.1673362493.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0101.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eac8c3eb-386c-4242-d14b-08daf32c847b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jan 2023 17:02:55.5976
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8867

Arm maintainers,

On 10.01.2023 16:17, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/init.h
> @@ -0,0 +1,12 @@
> +#ifndef _XEN_ASM_INIT_H
> +#define _XEN_ASM_INIT_H
> +
> +#endif /* _XEN_ASM_INIT_H */

Instead of having RISC-V introduce an empty stub matching what x86 has,
what would it take to empty Arm's as well, allowing both present ones to
go away and no new one to be introduced? The only thing you have in there
is struct init_info, which doesn't feel like it's properly placed in this
header (x86 has such stuff in setup.h) ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:13:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 17:13:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474865.736266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFIBJ-0002iM-BK; Tue, 10 Jan 2023 17:13:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474865.736266; Tue, 10 Jan 2023 17:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFIBJ-0002iF-8Y; Tue, 10 Jan 2023 17:13:01 +0000
Received: by outflank-mailman (input) for mailman id 474865;
 Tue, 10 Jan 2023 17:12:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFIBH-0002i5-Ec; Tue, 10 Jan 2023 17:12:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFIBH-0004ea-AD; Tue, 10 Jan 2023 17:12:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFIBG-000154-Ob; Tue, 10 Jan 2023 17:12:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFIBG-000269-OA; Tue, 10 Jan 2023 17:12:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175689-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175689: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5a41237ad1d4b62008f93163af1d9b1da90729d8
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 17:12:58 +0000

flight 175689 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175689/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                5a41237ad1d4b62008f93163af1d9b1da90729d8
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   94 days
Failing since        173470  2022-10-08 06:21:34 Z   94 days  198 attempts
Testing same since   175689  2023-01-10 07:51:40 Z    0 days    1 attempts

------------------------------------------------------------
3317 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505667 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:18:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 17:18:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474874.736278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFIH2-0003SN-5B; Tue, 10 Jan 2023 17:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474874.736278; Tue, 10 Jan 2023 17:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFIH2-0003SG-2G; Tue, 10 Jan 2023 17:18:56 +0000
Received: by outflank-mailman (input) for mailman id 474874;
 Tue, 10 Jan 2023 17:18:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dQxz=5H=citrix.com=prvs=367c7493a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFIH0-0003Rz-Lx
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 17:18:54 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d99b1fde-910a-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 18:18:52 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d99b1fde-910a-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673371131;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=nlcu1l1czYrvkQ6yAjB0ab/hxX/LNf+c8nxTwGFVhRM=;
  b=arkGuz02Y/TSqhJka3P0D7BPkhsPmZs/BAcU4ELfx/tMfyrNSAYwtAH+
   mXEiN9eQtMeNPCPEYSHLQIlBuVbnSthuw50vp8dc3IKpPcK+mncDFmETz
   x1y4EObzV/TGWZcgj5NTNIqtSxLkl+ka6NYRMftbZt9Tce9sGQhh+OUD/
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92390969
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:MnqQCa2gwGaBC4Zd3vbD5cBxkn2cJEfYwER7XKvMYLTBsI5bp2QGy
 WEfCDiCMq2JZzP1L9siPo6z/ENU78fcndRmSVNlpC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK5ULSfUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS9nuDgNyo4GlD5gVnPagQ1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfDX112
 tcoER00TxmDgfu9y5eUT8pHr5F2RCXrFNt3VnBIyDjYCbAtQIzZQrWM7thdtNsyrpkQR7CEP
 ZNfMGcxKkSbC/FMEg5/5JYWteGknHTgNRZfr0qYv/Ef6GnP1g1hlrPqNbI5f/TbH54ExhfG9
 woq+UzFJAxEHtu5xQGdsW+pg8LQuRK4eKM7QejQGvlC3wTImz175ActfUS/iem0jAi5Qd03A
 24+9zcqrKMy3Fe2VdS7VBq9yFaUsxhZV9dOHukS7ACW1rGS8wufHnIDTDNKdJohrsBeeNAx/
 gbXxZWzX2Up6eDLDyLGnluJkd+sESQJFkApVRYpdCoM49/6q4oWoRfsZf82RcZZkebJMT33x
 jmLqg03iLMSkdMH2s2HwLzXv96/jsOXF1Bov207Skrgt1okP9D9O+RE/HCBtZ59wJClok5tV
 ZTus+yX96gwAJ6Ej0Rhq81dTejyt55p3NAx6GOD/qXNFRz3oBZPnqgKulmSwXuF1e5aEQIFm
 GeJ5WtsCGZ7ZRNGl5NfbYOrENgNxqP9D9njXf28RoMQPcMrJF7fo3wzPBT4M4XRfK4Ey/lX1
 XCzKJjEMJrnIf4/kGreqxk1jdfHORzSNUuMHMumnnxLIJKVZWKPSKdtDbd9RrlR0U9wmy2Mq
 4w3H5LTm31ivBjWPnG/HXg7cQpbchDWxPne96RqSwJ0ClE6RDBwWqKMn+hJlk4Mt/09q9okN
 0qVAidwoGcTT1WeQelWQhiPsI/SYKs=
IronPort-HdrOrdr: A9a23:Xj6syKk04uxO5DNMXQHtoTXZa7XpDfIi3DAbv31ZSRFFG/Fw9v
 rDoB1/73TJYVkqN03I9ervBEDjexPhHO9OgLX5VI3KNGOKhILCFvAA0WKN+UyEJwTOssJbyK
 d8Y+xfJbTLfDxHZB/BkWuFL+o=
X-IronPort-AV: E=Sophos;i="5.96,315,1665460800"; 
   d="scan'208";a="92390969"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 4/8] x86: Initial support for WRMSRNS
Date: Tue, 10 Jan 2023 17:18:41 +0000
Message-ID: <20230110171845.20542-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230110171845.20542-1-andrew.cooper3@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

WRMSR Non-Serialising is an optimisation intended for cases where an MSR needs
updating, but architectural serialising properties are not needed.

It is anticipated that this will apply to most, if not all, MSRs modified on
context switch paths.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * New
---
 tools/libs/light/libxl_cpuid.c              |  1 +
 tools/misc/xen-cpuid.c                      |  1 +
 xen/arch/x86/include/asm/msr.h              | 12 ++++++++++++
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 4 files changed, 15 insertions(+)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index cbd4e511e8ab..8da78773a886 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -235,6 +235,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
         {"fzrm",         0x00000007,  1, CPUID_REG_EAX, 10,  1},
         {"fsrs",         0x00000007,  1, CPUID_REG_EAX, 11,  1},
         {"fsrcs",        0x00000007,  1, CPUID_REG_EAX, 12,  1},
+        {"wrmsrns",      0x00000007,  1, CPUID_REG_EAX, 19,  1},
 
         {"intel-psfd",   0x00000007,  2, CPUID_REG_EDX,  0,  1},
         {"mcdt-no",      0x00000007,  2, CPUID_REG_EDX,  5,  1},
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index ea7ff320e0e4..f482c4e28f30 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -189,6 +189,7 @@ static const char *const str_7a1[32] =
 
     [10] = "fzrm",          [11] = "fsrs",
     [12] = "fsrcs",
+    /* 18 */                [19] = "wrmsrns",
 };
 
 static const char *const str_e21a[32] =
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index dd1eee04a637..191e54068856 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -38,6 +38,18 @@ static inline void wrmsrl(unsigned int msr, __u64 val)
         wrmsr(msr, lo, hi);
 }
 
+/* Non-serialising WRMSR, when available.  Falls back to a serialising WRMSR. */
+static inline void wrmsr_ns(uint32_t msr, uint32_t lo, uint32_t hi)
+{
+    /*
+     * WRMSR is 2 bytes.  WRMSRNS is 3 bytes.  Pad WRMSR with a redundant CS
+     * prefix to avoid a trailing NOP.
+     */
+    alternative_input(".byte 0x2e; wrmsr",
+                      ".byte 0x0f,0x01,0xc6", X86_FEATURE_WRMSRNS,
+                      "c" (msr), "a" (lo), "d" (hi));
+}
+
 /* rdmsr with exception handling */
 #define rdmsr_safe(msr,val) ({\
     int rc_; \
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index ad7e89dd4c40..5444bc5d8374 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -281,6 +281,7 @@ XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /*A  AVX512 BFloat16 Instructions */
 XEN_CPUFEATURE(FZRM,         10*32+10) /*A  Fast Zero-length REP MOVSB */
 XEN_CPUFEATURE(FSRS,         10*32+11) /*A  Fast Short REP STOSB */
 XEN_CPUFEATURE(FSRCS,        10*32+12) /*A  Fast Short REP CMPSB/SCASB */
+XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*   WRMSR Non-Serialising */
 
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:18:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 17:18:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474878.736322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFIH5-0004TH-D0; Tue, 10 Jan 2023 17:18:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474878.736322; Tue, 10 Jan 2023 17:18:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFIH5-0004T2-8v; Tue, 10 Jan 2023 17:18:59 +0000
Received: by outflank-mailman (input) for mailman id 474878;
 Tue, 10 Jan 2023 17:18:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dQxz=5H=citrix.com=prvs=367c7493a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFIH3-0003S0-Pt
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 17:18:57 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc1f739e-910a-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 18:18:55 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc1f739e-910a-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673371135;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=OplqtOF2TCVepI2xNnGz4FHobV4vVNUxy+l935bIbyE=;
  b=N6+J8k6yv34EOg1DTFfVyosztgSqztT/sVz7NenGSHSyAiebnS15G7Nt
   18xjxRWzaXgQxzqBICXtTxTS/30NvvzrBbnbfFHbHETeQK/v0EgLRGclq
   YVYUl/biioriDNNaAyyP2OZQMO+JwmxcdNggiqGoza2oc4MStePNxPYUt
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91449594
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
IronPort-Data: A9a23:POXL86vAUteEsYTc3GrAQvmQR+fnVEVeMUV32f8akzHdYApBsoF/q
 tZmKTuEa/+PZDf9f4wjaoSw8R8D7JPTmNRiGlRrrn8zQStD+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg0HVU/IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj5lv0gnRkPaoQ5AaHzyFOZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwBBsARw+D3s2Nz+yWZ8hpjIc4CPLxI9ZK0p1g5Wmx4fcORJnCR+PB5MNC3Sd2jcdLdRrcT
 5NHM3w1Nk2GOkARfAdMYH49tL7Aan3XWjtUsl+K44Ew5HDe1ldZ27nxKtvFPNeNQK25m27J/
 z6arjmoXnn2MvTD6QSq+0i1v9X9gArDaNgrKZGU3Nx11Qj7Kms7V0RNCArTTeOColG6c8JSL
 QoT4CVGhYoY+VGvT9L9dwalu3PCtRkZM/JAHut/5AyTx6785weCGnNCXjNHcMYhtsI9WXotz
 FDhoj/yLWUx6vvPEyvbr+rK62PpUcQIEYMcTQMvQCIa44DMm45toz/uS9wgC4qOlMKgTFkc3
 Au2hCQ5grwSi+sC2KO64U3LjlqQm3TZcuImzl6JBzz4t2uVcKbgPtX1sgaDsZ6sOa7DFjG8U
 G44d99yBQzkJbWEj2SzTeoEB9lFDN7VYWSH0TaD83TMnglBGkJPn6gKu1mSx28zaK7onAMFh
 2eN0T69HLcJYBOXgVZfOupd8fgCw6n6DsjCXfvJdNdIaZUZXFbZo3o0NR/IgD2wyRJEfUQD1
 XGzK57E4ZEyUPoP8dZLb71Fje9DKt4WmQs/uqwXPzz4iOHDNRZ5uJ8OMUeUb/BR0U93iFy9z
 jqrDOPTk083eLSnMkHqHXs7cQhiwY4TWcqn9KS6t4erfmJbJY3WI6SNneJwKtE4wf89eyWh1
 ijVZ3K0AWHX3RXvQThmoFg4AF8zdf6TdU4GABE=
IronPort-HdrOrdr: A9a23:YMCwkqysP/VV8RpyfE9PKrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-IronPort-AV: E=Sophos;i="5.96,315,1665460800"; 
   d="scan'208";a="91449594"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/8] x86/prot-key: Enumeration for Protection Key Supervisor
Date: Tue, 10 Jan 2023 17:18:39 +0000
Message-ID: <20230110171845.20542-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230110171845.20542-1-andrew.cooper3@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Protection Key Supervisor works in a very similar way to Protection Key User,
except that instead of the PKRU register accessed via the {RD,WR}PKRU
instructions, the supervisor protection settings live in MSR_PKRS and are
accessed using normal {RD,WR}MSR instructions.

PKS has the same problematic interactions with PV guests as PKU (even more so,
in fact, given the guest kernel's CPL), so we'll only support this for HVM
guests for now.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_cpuid.c              | 1 +
 tools/misc/xen-cpuid.c                      | 2 +-
 xen/arch/x86/include/asm/cpufeature.h       | 1 +
 xen/arch/x86/include/asm/msr-index.h        | 2 ++
 xen/arch/x86/include/asm/x86-defns.h        | 1 +
 xen/include/public/arch-x86/cpufeatureset.h | 1 +
 6 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 2aa23225f42c..cbd4e511e8ab 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -211,6 +211,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
         {"avx512-vpopcntdq",0x00000007,0,CPUID_REG_ECX, 14,  1},
         {"rdpid",        0x00000007,  0, CPUID_REG_ECX, 22,  1},
         {"cldemote",     0x00000007,  0, CPUID_REG_ECX, 25,  1},
+        {"pks",          0x00000007,  0, CPUID_REG_ECX, 31,  1},
 
         {"avx512-4vnniw",0x00000007,  0, CPUID_REG_EDX,  2,  1},
         {"avx512-4fmaps",0x00000007,  0, CPUID_REG_EDX,  3,  1},
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index d5833e9ce879..ea7ff320e0e4 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -134,7 +134,7 @@ static const char *const str_7c0[32] =
     /* 24 */                   [25] = "cldemote",
     /* 26 */                   [27] = "movdiri",
     [28] = "movdir64b",        [29] = "enqcmd",
-    [30] = "sgx-lc",
+    [30] = "sgx-lc",           [31] = "pks",
 };
 
 static const char *const str_e7d[32] =
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 044cfd9f882d..0a301013c3d9 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -121,6 +121,7 @@
 #define cpu_has_movdiri         boot_cpu_has(X86_FEATURE_MOVDIRI)
 #define cpu_has_movdir64b       boot_cpu_has(X86_FEATURE_MOVDIR64B)
 #define cpu_has_enqcmd          boot_cpu_has(X86_FEATURE_ENQCMD)
+#define cpu_has_pks             boot_cpu_has(X86_FEATURE_PKS)
 
 /* CPUID level 0x80000007.edx */
 #define cpu_has_hw_pstate       boot_cpu_has(X86_FEATURE_HW_PSTATE)
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 0a8852f3c246..7615d8087f46 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -148,6 +148,8 @@
 #define MSR_PL3_SSP                         0x000006a7
 #define MSR_INTERRUPT_SSP_TABLE             0x000006a8
 
+#define MSR_PKRS                            0x000006e1
+
 #define MSR_X2APIC_FIRST                    0x00000800
 #define MSR_X2APIC_LAST                     0x000008ff
 
diff --git a/xen/arch/x86/include/asm/x86-defns.h b/xen/arch/x86/include/asm/x86-defns.h
index 42b5f382d438..fe1caba6f819 100644
--- a/xen/arch/x86/include/asm/x86-defns.h
+++ b/xen/arch/x86/include/asm/x86-defns.h
@@ -74,6 +74,7 @@
 #define X86_CR4_SMAP       0x00200000 /* enable SMAP */
 #define X86_CR4_PKE        0x00400000 /* enable PKE */
 #define X86_CR4_CET        0x00800000 /* Control-flow Enforcement Technology */
+#define X86_CR4_PKS        0x01000000 /* Protection Key Supervisor */
 
 /*
  * XSTATE component flags in XCR0
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 7915f5826f57..ad7e89dd4c40 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -227,6 +227,7 @@ XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
 XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*a  MOVDIRI instruction */
 XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*a  MOVDIR64B instruction */
 XEN_CPUFEATURE(ENQCMD,        6*32+29) /*   ENQCMD{,S} instructions */
+XEN_CPUFEATURE(PKS,           6*32+31) /*   Protection Key for Supervisor */
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(HW_PSTATE,     7*32+ 7) /*   Hardware Pstates */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:18:59 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/8] x86/boot: Sanitise PKRU on boot
Date: Tue, 10 Jan 2023 17:18:38 +0000
Message-ID: <20230110171845.20542-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230110171845.20542-1-andrew.cooper3@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

While the architectural reset value of the register is 0, it might not be 0
after kexec etc.  If PKEY0.{WD,AD} have leaked in from an earlier context,
construction of a PV dom0 will explode.

Sequencing-wise, this must come after setting CR4.PKE, and before we touch any
user mappings.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

For sequencing, it could also come after setting XCR0.PKRU too, but then we'd
need to construct an empty XSAVE area to XRSTOR from, and that would be even
more horrible to arrange.
---
 xen/arch/x86/cpu/common.c             | 3 +++
 xen/arch/x86/include/asm/cpufeature.h | 1 +
 xen/arch/x86/setup.c                  | 2 +-
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 0412dbc915e5..fe92f29c2dc6 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -936,6 +936,9 @@ void cpu_init(void)
 	write_debugreg(6, X86_DR6_DEFAULT);
 	write_debugreg(7, X86_DR7_DEFAULT);
 
+	if (cpu_has_pku)
+		wrpkru(0);
+
 	/*
 	 * If the platform is performing a Secure Launch via SKINIT, GIF is
 	 * clear to prevent external interrupts interfering with Secure
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index a3ad9ebee4e9..044cfd9f882d 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -109,6 +109,7 @@
 
 /* CPUID level 0x00000007:0.ecx */
 #define cpu_has_avx512_vbmi     boot_cpu_has(X86_FEATURE_AVX512_VBMI)
+#define cpu_has_pku             boot_cpu_has(X86_FEATURE_PKU)
 #define cpu_has_avx512_vbmi2    boot_cpu_has(X86_FEATURE_AVX512_VBMI2)
 #define cpu_has_gfni            boot_cpu_has(X86_FEATURE_GFNI)
 #define cpu_has_vaes            boot_cpu_has(X86_FEATURE_VAES)
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 566422600d94..6deadcf74763 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1798,7 +1798,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     if ( boot_cpu_has(X86_FEATURE_FSGSBASE) )
         set_in_cr4(X86_CR4_FSGSBASE);
 
-    if ( boot_cpu_has(X86_FEATURE_PKU) )
+    if ( cpu_has_pku )
         set_in_cr4(X86_CR4_PKE);
 
     if ( opt_invpcid && cpu_has_invpcid )
-- 
2.11.0
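[Editorial note] The effect of the sanitisation above can be sketched in plain C.  Everything here is a hypothetical mock — `mock_pkru` and a plain `cpu_has_pku` variable stand in for the real WRPKRU instruction and `boot_cpu_has()`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PKEY0_AD 0x1u /* PKEY0 Access Disable */
#define PKEY0_WD 0x2u /* PKEY0 Write Disable */

/* Hypothetical mock of the PKRU register; real code executes WRPKRU. */
static uint32_t mock_pkru = PKEY0_AD | PKEY0_WD; /* leaked in via kexec */

static bool cpu_has_pku = true; /* stands in for boot_cpu_has(X86_FEATURE_PKU) */

/*
 * Mirrors the hunk in cpu_init(): clear PKRU so stale PKEY0.{AD,WD}
 * bits cannot fault later accesses to user mappings.
 */
static void sanitise_pkru(void)
{
    if (cpu_has_pku)
        mock_pkru = 0;
}
```

With PKEY0.AD leaked in, any read through a pkey-0 user mapping would fault; after the clear, all 16 keys permit full access again, matching the architectural reset state.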



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:18:59 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH v2 6/8] x86/hvm: Enable guest access to MSR_PKRS
Date: Tue, 10 Jan 2023 17:18:43 +0000
Message-ID: <20230110171845.20542-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230110171845.20542-1-andrew.cooper3@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Have guest_{rd,wr}msr(), via hvm_{get,set}_reg(), access either the live
register, or stashed state, depending on context.  Include MSR_PKRS for
migration, and let the guest have full access.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Kevin Tian <kevin.tian@intel.com>

v2:
 * Rebase over the get/set_reg() infrastructure.
---
 xen/arch/x86/hvm/hvm.c     |  1 +
 xen/arch/x86/hvm/vmx/vmx.c | 17 +++++++++++++++++
 xen/arch/x86/msr.c         | 10 ++++++++++
 3 files changed, 28 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 927a221660e8..c6c1eea18003 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1333,6 +1333,7 @@ static int cf_check hvm_load_cpu_xsave_states(
 static const uint32_t msrs_to_send[] = {
     MSR_SPEC_CTRL,
     MSR_INTEL_MISC_FEATURES_ENABLES,
+    MSR_PKRS,
     MSR_IA32_BNDCFGS,
     MSR_IA32_XSS,
     MSR_VIRT_SPEC_CTRL,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b1f493f009fd..57827779c305 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -657,6 +657,11 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
     else
         vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
 
+    if ( cp->feat.pks )
+        vmx_clear_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+    else
+        vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+
  out:
     vmx_vmcs_exit(v);
 
@@ -2455,6 +2460,7 @@ static uint64_t cf_check vmx_get_reg(struct vcpu *v, unsigned int reg)
 {
     const struct vcpu *curr = current;
     struct domain *d = v->domain;
+    const struct vcpu_msrs *msrs = v->arch.msrs;
     uint64_t val = 0;
     int rc;
 
@@ -2471,6 +2477,9 @@ static uint64_t cf_check vmx_get_reg(struct vcpu *v, unsigned int reg)
         }
         return val;
 
+    case MSR_PKRS:
+        return (v == curr) ? rdpkrs() : msrs->pkrs;
+
     case MSR_SHADOW_GS_BASE:
         if ( v != curr )
             return v->arch.hvm.vmx.shadow_gs;
@@ -2499,6 +2508,8 @@ static uint64_t cf_check vmx_get_reg(struct vcpu *v, unsigned int reg)
 
 static void cf_check vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 {
+    const struct vcpu *curr = current;
+    struct vcpu_msrs *msrs = v->arch.msrs;
     struct domain *d = v->domain;
     int rc;
 
@@ -2514,6 +2525,12 @@ static void cf_check vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
             domain_crash(d);
         }
         return;
+
+    case MSR_PKRS:
+        msrs->pkrs = val;
+        if ( v == curr )
+            wrpkrs(val);
+        return;
     }
 
     /* Logic which maybe requires remote VMCS acquisition. */
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 317b154d244d..7ddf0078c3a2 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -325,6 +325,11 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         *val = 0;
         break;
 
+    case MSR_PKRS:
+        if ( !cp->feat.pks )
+            goto gp_fault;
+        goto get_reg;
+
     case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( !is_hvm_domain(d) || v != curr )
             goto gp_fault;
@@ -616,6 +621,11 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             break;
         goto gp_fault;
 
+    case MSR_PKRS:
+        if ( !cp->feat.pks || val != (uint32_t)val )
+            goto gp_fault;
+        goto set_reg;
+
     case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( !is_hvm_domain(d) || v != curr )
             goto gp_fault;
-- 
2.11.0
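[Editorial note] As a standalone illustration of the #GP condition in the guest_wrmsr() hunk above (the `val != (uint32_t)val` test), the width check can be factored out as follows.  This is a sketch only; Xen keeps the check inline:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * MSR_PKRS is architecturally 32 bits wide: a WRMSR with any of bits
 * 63:32 set must #GP.  This mirrors the "val != (uint32_t)val" check.
 */
static bool pkrs_write_ok(uint64_t val)
{
    return val == (uint32_t)val;
}
```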



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:19:00 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 3/8] x86/prot-key: Split PKRU infrastructure out of asm/processor.h
Date: Tue, 10 Jan 2023 17:18:40 +0000
Message-ID: <20230110171845.20542-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230110171845.20542-1-andrew.cooper3@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

asm/processor.h is in desperate need of splitting up, and protection key
functionality is only used in the emulator and the pagewalk.  Introduce a new
asm/prot-key.h and move the relevant content over.

Rename the PKRU_* constants to drop the user part and to use the architectural
terminology.

Drop the read_pkru_{ad,wd}() helpers entirely.  The pkru infix is about to
become wrong, and the sole user is shorter and easier to follow without the
helpers.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Mask pk_ar
---
 xen/arch/x86/cpu/common.c            |  1 +
 xen/arch/x86/include/asm/processor.h | 38 ------------------------------------
 xen/arch/x86/include/asm/prot-key.h  | 31 +++++++++++++++++++++++++++++
 xen/arch/x86/mm/guest_walk.c         |  9 ++++++---
 xen/arch/x86/x86_emulate.c           |  2 ++
 5 files changed, 40 insertions(+), 41 deletions(-)
 create mode 100644 xen/arch/x86/include/asm/prot-key.h

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index fe92f29c2dc6..2bcdd08b2fb5 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -11,6 +11,7 @@
 #include <asm/io.h>
 #include <asm/mpspec.h>
 #include <asm/apic.h>
+#include <asm/prot-key.h>
 #include <asm/random.h>
 #include <asm/setup.h>
 #include <asm/shstk.h>
diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
index 60b902060914..b95d2483212a 100644
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -374,44 +374,6 @@ static always_inline void set_in_cr4 (unsigned long mask)
     write_cr4(read_cr4() | mask);
 }
 
-static inline unsigned int rdpkru(void)
-{
-    unsigned int pkru;
-
-    asm volatile (".byte 0x0f,0x01,0xee"
-        : "=a" (pkru) : "c" (0) : "dx");
-
-    return pkru;
-}
-
-static inline void wrpkru(unsigned int pkru)
-{
-    asm volatile ( ".byte 0x0f, 0x01, 0xef"
-                   :: "a" (pkru), "d" (0), "c" (0) );
-}
-
-/* Macros for PKRU domain */
-#define PKRU_READ  (0)
-#define PKRU_WRITE (1)
-#define PKRU_ATTRS (2)
-
-/*
- * PKRU defines 32 bits, there are 16 domains and 2 attribute bits per
- * domain in pkru, pkeys is index to a defined domain, so the value of
- * pte_pkeys * PKRU_ATTRS + R/W is offset of a defined domain attribute.
- */
-static inline bool_t read_pkru_ad(uint32_t pkru, unsigned int pkey)
-{
-    ASSERT(pkey < 16);
-    return (pkru >> (pkey * PKRU_ATTRS + PKRU_READ)) & 1;
-}
-
-static inline bool_t read_pkru_wd(uint32_t pkru, unsigned int pkey)
-{
-    ASSERT(pkey < 16);
-    return (pkru >> (pkey * PKRU_ATTRS + PKRU_WRITE)) & 1;
-}
-
 static always_inline void __monitor(const void *eax, unsigned long ecx,
                                     unsigned long edx)
 {
diff --git a/xen/arch/x86/include/asm/prot-key.h b/xen/arch/x86/include/asm/prot-key.h
new file mode 100644
index 000000000000..63a2e22f3fa0
--- /dev/null
+++ b/xen/arch/x86/include/asm/prot-key.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2021-2022 Citrix Systems Ltd.
+ */
+#ifndef ASM_PROT_KEY_H
+#define ASM_PROT_KEY_H
+
+#include <xen/types.h>
+
+#define PKEY_AD 1 /* Access Disable */
+#define PKEY_WD 2 /* Write Disable */
+
+#define PKEY_WIDTH 2 /* Two bits per protection key */
+
+static inline uint32_t rdpkru(void)
+{
+    uint32_t pkru;
+
+    asm volatile ( ".byte 0x0f,0x01,0xee"
+                   : "=a" (pkru) : "c" (0) : "dx" );
+
+    return pkru;
+}
+
+static inline void wrpkru(uint32_t pkru)
+{
+    asm volatile ( ".byte 0x0f,0x01,0xef"
+                   :: "a" (pkru), "d" (0), "c" (0) );
+}
+
+#endif /* ASM_PROT_KEY_H */
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 70dacc477f9a..161a61b8f5ca 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -26,7 +26,9 @@
 #include <xen/paging.h>
 #include <xen/domain_page.h>
 #include <xen/sched.h>
+
 #include <asm/page.h>
+#include <asm/prot-key.h>
 #include <asm/guest_pt.h>
 #include <asm/hvm/emulate.h>
 
@@ -413,10 +415,11 @@ guest_walk_tables(const struct vcpu *v, struct p2m_domain *p2m,
          guest_pku_enabled(v) )
     {
         unsigned int pkey = guest_l1e_get_pkey(gw->l1e);
-        unsigned int pkru = rdpkru();
+        unsigned int pkr = rdpkru();
+        unsigned int pk_ar = (pkr >> (pkey * PKEY_WIDTH)) & (PKEY_AD | PKEY_WD);
 
-        if ( read_pkru_ad(pkru, pkey) ||
-             ((walk & PFEC_write_access) && read_pkru_wd(pkru, pkey) &&
+        if ( (pk_ar & PKEY_AD) ||
+             ((walk & PFEC_write_access) && (pk_ar & PKEY_WD) &&
               ((walk & PFEC_user_mode) || guest_wp_enabled(v))) )
         {
             gw->pfec |= PFEC_prot_key;
diff --git a/xen/arch/x86/x86_emulate.c b/xen/arch/x86/x86_emulate.c
index 720740f29b84..8c7d18521807 100644
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -12,8 +12,10 @@
 #include <xen/domain_page.h>
 #include <xen/err.h>
 #include <xen/event.h>
+
 #include <asm/x86_emulate.h>
 #include <asm/processor.h> /* current_cpu_info */
+#include <asm/prot-key.h>
 #include <asm/xstate.h>
 #include <asm/amd.h> /* cpu_has_amd_erratum() */
 #include <asm/debugreg.h>
-- 
2.11.0
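[Editorial note] The single shift/mask in the guest_walk_tables() hunk above replaces the dropped read_pkru_{ad,wd}() helpers.  Their equivalence can be checked in isolation — a standalone sketch, with the constants copied from the new asm/prot-key.h and the old helpers reproduced from the deleted asm/processor.h code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PKEY_AD    1 /* Access Disable */
#define PKEY_WD    2 /* Write Disable */
#define PKEY_WIDTH 2 /* Two bits per protection key */

/* New style: one shift/mask yielding both attribute bits at once. */
static unsigned int pk_ar(uint32_t pkr, unsigned int pkey)
{
    return (pkr >> (pkey * PKEY_WIDTH)) & (PKEY_AD | PKEY_WD);
}

/* Old helpers being removed, reproduced here for comparison. */
static bool read_pkru_ad(uint32_t pkru, unsigned int pkey)
{
    return (pkru >> (pkey * 2 + 0)) & 1;
}

static bool read_pkru_wd(uint32_t pkru, unsigned int pkey)
{
    return (pkru >> (pkey * 2 + 1)) & 1;
}
```

For example, with PKRU = 0x6, key 0 has WD set but not AD, and key 1 has AD set but not WD — both styles agree for every key.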



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:18:59 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH v2 0/8] x86: Protection Key Supervisor support
Date: Tue, 10 Jan 2023 17:18:37 +0000
Message-ID: <20230110171845.20542-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Only 14 months after v1...  This time with testing on real hardware (not that
any bugs were found) and against an updated XTF comprehensive pagewalk test.

  # time ./xtf-runner hvm64 pagetable-emulation~hap
  --- Xen Test Framework ---
  Environment: HVM 64bit (Long mode 4 levels)
  Test pagetable-emulation
    Info: Intel, Fam 6, Model 143, Stepping 3, paddr 46, vaddr 48
    Features: PSE PAE PGE PAT PSE36 PCID NX PAGE1G SMEP SMAP PKU PKS
    Paging mode heuristic: Hap
    Using physical addresses 0000000040000000 and 0000200000000000
  Test L1e
  Test L2e
  Test L2e Superpage
  Test L3e
  Test L3e Superpage
  Test L4e
  Test L4e Superpage
  Completed 28835840 tests
  Test result: SUCCESS

  Combined test results:
  test-hvm64-pagetable-emulation~hap       SUCCESS

  real                                     1m26.315s
  user                                     0m0.039s
  sys                                      0m0.086s

This test is going to get even more silly when CET-SS becomes supported.

Andrew Cooper (8):
  x86/boot: Sanitise PKRU on boot
  x86/prot-key: Enumeration for Protection Key Supervisor
  x86/prot-key: Split PKRU infrastructure out of asm/processor.h
  x86: Initial support for WRMSRNS
  x86/hvm: Context switch MSR_PKRS
  x86/hvm: Enable guest access to MSR_PKRS
  x86/pagewalk: Support PKS
  x86/hvm: Support PKS for HAP guests

 tools/libs/light/libxl_cpuid.c              |  2 +
 tools/misc/xen-cpuid.c                      |  3 +-
 xen/arch/x86/cpu/common.c                   |  6 ++
 xen/arch/x86/cpuid.c                        |  9 +++
 xen/arch/x86/hvm/hvm.c                      |  5 +-
 xen/arch/x86/hvm/vmx/vmx.c                  | 26 +++++++++
 xen/arch/x86/include/asm/cpufeature.h       |  2 +
 xen/arch/x86/include/asm/guest_pt.h         |  5 ++
 xen/arch/x86/include/asm/hvm/hvm.h          |  3 +
 xen/arch/x86/include/asm/msr-index.h        |  2 +
 xen/arch/x86/include/asm/msr.h              | 21 +++++++
 xen/arch/x86/include/asm/processor.h        | 38 -------------
 xen/arch/x86/include/asm/prot-key.h         | 85 +++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/x86-defns.h        |  1 +
 xen/arch/x86/mm/guest_walk.c                | 16 ++++--
 xen/arch/x86/msr.c                          | 10 ++++
 xen/arch/x86/setup.c                        |  6 +-
 xen/arch/x86/smpboot.c                      |  4 ++
 xen/arch/x86/x86_emulate.c                  |  2 +
 xen/include/public/arch-x86/cpufeatureset.h |  2 +
 20 files changed, 201 insertions(+), 47 deletions(-)
 create mode 100644 xen/arch/x86/include/asm/prot-key.h

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:19:02 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Date: Tue, 10 Jan 2023 17:18:42 +0000
Message-ID: <20230110171845.20542-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230110171845.20542-1-andrew.cooper3@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Under PKS, MSR_PKRS is available based on the CPUID policy alone, and is
usable independently of CR4.PKS.  See the large comment in prot-key.h for
details of the context switching arrangement.

Use WRMSRNS right away, as we don't care about serialising properties for
context switching this MSR.

Sanitise MSR_PKRS on boot.  In anticipation of wanting to use PKS for Xen in
the future, arrange for the sanitisation to occur prior to potentially setting
CR4.PKS; if PKEY0.{AD,WD} leak in from a previous context, we will triple
fault immediately on setting CR4.PKS.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Kevin Tian <kevin.tian@intel.com>

v2:
 * Use WRMSRNS
 * Sanitise MSR_PKRS on boot.
---
 xen/arch/x86/cpu/common.c           |  2 ++
 xen/arch/x86/hvm/vmx/vmx.c          |  9 +++++++
 xen/arch/x86/include/asm/msr.h      |  9 +++++++
 xen/arch/x86/include/asm/prot-key.h | 54 +++++++++++++++++++++++++++++++++++++
 xen/arch/x86/setup.c                |  4 +++
 xen/arch/x86/smpboot.c              |  4 +++
 6 files changed, 82 insertions(+)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 2bcdd08b2fb5..f44c907e8a43 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -58,6 +58,8 @@ static unsigned int forced_caps[NCAPINTS];
 
 DEFINE_PER_CPU(bool, full_gdt_loaded);
 
+DEFINE_PER_CPU(uint32_t, pkrs);
+
 void __init setup_clear_cpu_cap(unsigned int cap)
 {
 	const uint32_t *dfs;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 43a4865d1c76..b1f493f009fd 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -58,6 +58,7 @@
 #include <asm/event.h>
 #include <asm/mce.h>
 #include <asm/monitor.h>
+#include <asm/prot-key.h>
 #include <public/arch-x86/cpuid.h>
 
 static bool_t __initdata opt_force_ept;
@@ -536,6 +537,7 @@ static void vmx_restore_host_msrs(void)
 
 static void vmx_save_guest_msrs(struct vcpu *v)
 {
+    const struct cpuid_policy *cp = v->domain->arch.cpuid;
     struct vcpu_msrs *msrs = v->arch.msrs;
 
     /*
@@ -549,10 +551,14 @@ static void vmx_save_guest_msrs(struct vcpu *v)
         rdmsrl(MSR_RTIT_OUTPUT_MASK, msrs->rtit.output_mask);
         rdmsrl(MSR_RTIT_STATUS, msrs->rtit.status);
     }
+
+    if ( cp->feat.pks )
+        msrs->pkrs = rdpkrs_and_cache();
 }
 
 static void vmx_restore_guest_msrs(struct vcpu *v)
 {
+    const struct cpuid_policy *cp = v->domain->arch.cpuid;
     const struct vcpu_msrs *msrs = v->arch.msrs;
 
     write_gs_shadow(v->arch.hvm.vmx.shadow_gs);
@@ -569,6 +575,9 @@ static void vmx_restore_guest_msrs(struct vcpu *v)
         wrmsrl(MSR_RTIT_OUTPUT_MASK, msrs->rtit.output_mask);
         wrmsrl(MSR_RTIT_STATUS, msrs->rtit.status);
     }
+
+    if ( cp->feat.pks )
+        wrpkrs(msrs->pkrs);
 }
 
 void vmx_update_cpu_exec_control(struct vcpu *v)
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 191e54068856..7946b6b24c11 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -373,6 +373,15 @@ struct vcpu_msrs
         };
     } rtit;
 
+    /*
+     * 0x000006e1 - MSR_PKRS - Protection Key Supervisor.
+     *
+     * Exposed R/W to guests.  Xen doesn't use PKS yet, so only context
+     * switched per vcpu.  When in current context, live value is in hardware,
+     * and this value is stale.
+     */
+    uint32_t pkrs;
+
     /* 0x00000da0 - MSR_IA32_XSS */
     struct {
         uint64_t raw;
diff --git a/xen/arch/x86/include/asm/prot-key.h b/xen/arch/x86/include/asm/prot-key.h
index 63a2e22f3fa0..0dcd31b7ea68 100644
--- a/xen/arch/x86/include/asm/prot-key.h
+++ b/xen/arch/x86/include/asm/prot-key.h
@@ -5,8 +5,11 @@
 #ifndef ASM_PROT_KEY_H
 #define ASM_PROT_KEY_H
 
+#include <xen/percpu.h>
 #include <xen/types.h>
 
+#include <asm/msr.h>
+
 #define PKEY_AD 1 /* Access Disable */
 #define PKEY_WD 2 /* Write Disable */
 
@@ -28,4 +31,55 @@ static inline void wrpkru(uint32_t pkru)
                    :: "a" (pkru), "d" (0), "c" (0) );
 }
 
+/*
+ * Xen does not use PKS.
+ *
+ * Guest kernel use is expected to be one default key, except for tiny windows
+ * with a double write to switch to a non-default key in a permitted critical
+ * section.
+ *
+ * As such, we want MSR_PKRS un-intercepted.  Furthermore, as we only need it
+ * in Xen for emulation or migration purposes (i.e. possibly never in a
+ * domain's lifetime), we don't want to re-sync the hardware value on every
+ * vmexit.
+ *
+ * Therefore, we read and cache the guest value in ctxt_switch_from(), in the
+ * expectation that we can short-circuit the write in ctxt_switch_to().
+ * During regular operations in current context, the guest value is in
+ * hardware and the per-cpu cache is stale.
+ */
+DECLARE_PER_CPU(uint32_t, pkrs);
+
+static inline uint32_t rdpkrs(void)
+{
+    uint32_t pkrs, tmp;
+
+    rdmsr(MSR_PKRS, pkrs, tmp);
+
+    return pkrs;
+}
+
+static inline uint32_t rdpkrs_and_cache(void)
+{
+    return this_cpu(pkrs) = rdpkrs();
+}
+
+static inline void wrpkrs(uint32_t pkrs)
+{
+    uint32_t *this_pkrs = &this_cpu(pkrs);
+
+    if ( *this_pkrs != pkrs )
+    {
+        *this_pkrs = pkrs;
+
+        wrmsr_ns(MSR_PKRS, pkrs, 0);
+    }
+}
+
+static inline void wrpkrs_and_cache(uint32_t pkrs)
+{
+    this_cpu(pkrs) = pkrs;
+    wrmsr_ns(MSR_PKRS, pkrs, 0);
+}
+
 #endif /* ASM_PROT_KEY_H */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 6deadcf74763..567a0a42ac50 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -54,6 +54,7 @@
 #include <asm/spec_ctrl.h>
 #include <asm/guest.h>
 #include <asm/microcode.h>
+#include <asm/prot-key.h>
 #include <asm/pv/domain.h>
 
 /* opt_nosmp: If true, secondary processors are ignored. */
@@ -1804,6 +1805,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     if ( opt_invpcid && cpu_has_invpcid )
         use_invpcid = true;
 
+    if ( cpu_has_pks )
+        wrpkrs_and_cache(0); /* Must be before setting CR4.PKS */
+
     init_speculation_mitigations();
 
     init_idle_domain();
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 52beed9d8d6d..b26758c2c89f 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -42,6 +42,7 @@
 #include <asm/microcode.h>
 #include <asm/msr.h>
 #include <asm/mtrr.h>
+#include <asm/prot-key.h>
 #include <asm/setup.h>
 #include <asm/spec_ctrl.h>
 #include <asm/time.h>
@@ -364,6 +365,9 @@ void start_secondary(void *unused)
 
     /* Full exception support from here on in. */
 
+    if ( cpu_has_pks )
+        wrpkrs_and_cache(0); /* Must be before setting CR4.PKS */
+
     /* Safe to enable feature such as CR4.MCE with the IDT set up now. */
     write_cr4(mmu_cr4_features);
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:19:03 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 7/8] x86/pagewalk: Support PKS
Date: Tue, 10 Jan 2023 17:18:44 +0000
Message-ID: <20230110171845.20542-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230110171845.20542-1-andrew.cooper3@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

PKS is very similar to the existing PKU behaviour, operating on pagewalks for
any supervisor mapping.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/guest_pt.h | 5 +++++
 xen/arch/x86/include/asm/hvm/hvm.h  | 3 +++
 xen/arch/x86/mm/guest_walk.c        | 9 +++++----
 3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/include/asm/guest_pt.h b/xen/arch/x86/include/asm/guest_pt.h
index 6647ccfb8520..6802db2a415a 100644
--- a/xen/arch/x86/include/asm/guest_pt.h
+++ b/xen/arch/x86/include/asm/guest_pt.h
@@ -282,6 +282,11 @@ static always_inline bool guest_pku_enabled(const struct vcpu *v)
     return !is_pv_vcpu(v) && hvm_pku_enabled(v);
 }
 
+static always_inline bool guest_pks_enabled(const struct vcpu *v)
+{
+    return !is_pv_vcpu(v) && hvm_pks_enabled(v);
+}
+
 /* Helpers for identifying whether guest entries have reserved bits set. */
 
 /* Bits reserved because of maxphysaddr, and (lack of) EFER.NX */
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 93254651f2f5..65768c797ea7 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -407,6 +407,8 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value);
     ((v)->arch.hvm.guest_efer & EFER_NXE)
 #define hvm_pku_enabled(v) \
     (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_PKE))
+#define hvm_pks_enabled(v) \
+    (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_PKS))
 
 /* Can we use superpages in the HAP p2m table? */
 #define hap_has_1gb (!!(hvm_funcs.hap_capabilities & HVM_HAP_SUPERPAGE_1GB))
@@ -911,6 +913,7 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 #define hvm_smap_enabled(v) ((void)(v), false)
 #define hvm_nx_enabled(v) ((void)(v), false)
 #define hvm_pku_enabled(v) ((void)(v), false)
+#define hvm_pks_enabled(v) ((void)(v), false)
 
 #define arch_vcpu_block(v) ((void)(v))
 
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 161a61b8f5ca..76b4e0425887 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -406,16 +406,17 @@ guest_walk_tables(const struct vcpu *v, struct p2m_domain *p2m,
 #if GUEST_PAGING_LEVELS >= 4 /* 64-bit only... */
     /*
      * If all access checks are thus far ok, check Protection Key for 64bit
-     * data accesses to user mappings.
+     * data accesses.
      *
      * N.B. In the case that the walk ended with a superpage, the fabricated
      * gw->l1e contains the appropriate leaf pkey.
      */
-    if ( (ar & _PAGE_USER) && !(walk & PFEC_insn_fetch) &&
-         guest_pku_enabled(v) )
+    if ( !(walk & PFEC_insn_fetch) &&
+         ((ar & _PAGE_USER) ? guest_pku_enabled(v)
+                            : guest_pks_enabled(v)) )
     {
         unsigned int pkey = guest_l1e_get_pkey(gw->l1e);
-        unsigned int pkr = rdpkru();
+        unsigned int pkr = (ar & _PAGE_USER) ? rdpkru() : rdpkrs();
         unsigned int pk_ar = (pkr >> (pkey * PKEY_WIDTH)) & (PKEY_AD | PKEY_WD);
 
         if ( (pk_ar & PKEY_AD) ||
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 17:19:04 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 8/8] x86/hvm: Support PKS for HAP guests
Date: Tue, 10 Jan 2023 17:18:45 +0000
Message-ID: <20230110171845.20542-9-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230110171845.20542-1-andrew.cooper3@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

With all infrastructure in place, advertise the PKS CPUID bit to HAP guests,
and let them set CR4.PKS.

Experiment with a tweak to the layout of hvm_cr4_guest_valid_bits() so future
additions will be just a single added line.

The current context switching behaviour is tied to how VT-x works, so leave a
safety check in the short term.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpuid.c                        | 9 +++++++++
 xen/arch/x86/hvm/hvm.c                      | 4 +++-
 xen/include/public/arch-x86/cpufeatureset.h | 2 +-
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index acc2f606cea8..b22725c492e7 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -579,6 +579,15 @@ static void __init calculate_hvm_max_policy(void)
             __clear_bit(X86_FEATURE_XSAVES, hvm_featureset);
     }
 
+    /*
+     * Xen doesn't use PKS, so the guest support for it has opted to not use
+     * the VMCS load/save controls for efficiency reasons.  This depends on
+     * the exact vmentry/exit behaviour, so don't expose PKS in other
+     * situations until someone has cross-checked the behaviour for safety.
+     */
+    if ( !cpu_has_vmx )
+        __clear_bit(X86_FEATURE_PKS, hvm_featureset);
+
     guest_common_feature_adjustments(hvm_featureset);
 
     sanitise_featureset(hvm_featureset);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c6c1eea18003..606f0e864981 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -969,7 +969,9 @@ unsigned long hvm_cr4_guest_valid_bits(const struct domain *d)
             (p->feat.smep     ? X86_CR4_SMEP              : 0) |
             (p->feat.smap     ? X86_CR4_SMAP              : 0) |
             (p->feat.pku      ? X86_CR4_PKE               : 0) |
-            (cet              ? X86_CR4_CET               : 0));
+            (cet              ? X86_CR4_CET               : 0) |
+            (p->feat.pks      ? X86_CR4_PKS               : 0) |
+            0);
 }
 
 static int cf_check hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 5444bc5d8374..3b85bcca1537 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -227,7 +227,7 @@ XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
 XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*a  MOVDIRI instruction */
 XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*a  MOVDIR64B instruction */
 XEN_CPUFEATURE(ENQCMD,        6*32+29) /*   ENQCMD{,S} instructions */
-XEN_CPUFEATURE(PKS,           6*32+31) /*   Protection Key for Supervisor */
+XEN_CPUFEATURE(PKS,           6*32+31) /*H  Protection Key for Supervisor */
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(HW_PSTATE,     7*32+ 7) /*   Hardware Pstates */
-- 
2.11.0



  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=MdqVEpphdJImzJSpUifrpqkuvfmbJ6VAe+5o7miVXh4=;
  b=V0itUQyhzoOSmOkLCuNHT80dFQr0VjOcEk/q83Ysrn9B2XyA14M9wdGq
   eGinUEj9EcGt2FNdPBCg6YupcrNm3+xoAmXVjdQxGy16CScnhs82MsC36
   P6VwcmH+1RXF37AyInc4fvV5La7ShRcvZxzndZF2XDtPgibJ3H+eFrcSf
   s=;
X-IronPort-RemoteIP: 104.47.51.49
X-IronPort-MID: 94445988
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,315,1665460800"; 
   d="scan'208";a="94445988"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MdqVEpphdJImzJSpUifrpqkuvfmbJ6VAe+5o7miVXh4=;
 b=gkmNLeC1xBU48YLCErIwZT0o/AkVXDKRJ+XFA8fJYOoC/e6DsgVE5aUeOrJGgH/+59ovhrVcu5qie1An8tIdS5KZgqo5pXA+wRxiaaMZhQ3EcD8/x4S9EpMYq7zGLhoQTE+cwUPXA96hEL8LFqNm8vAKvQ1sdJ8LM+Y360SCPSA=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v2 6/8] x86/hvm: Enable guest access to MSR_PKRS
Thread-Topic: [PATCH v2 6/8] x86/hvm: Enable guest access to MSR_PKRS
Thread-Index: AQHZJReeV/6Z/BfYW0W8YgpP3+cAB66X8qqA
Date: Tue, 10 Jan 2023 18:07:39 +0000
Message-ID: <798bf587-06c2-bdf3-646d-bc3da2a9a566@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-7-andrew.cooper3@citrix.com>
In-Reply-To: <20230110171845.20542-7-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM6PR03MB5132:EE_
x-ms-office365-filtering-correlation-id: a2ce736f-a0a7-436c-859d-08daf3358fc8
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <0A177FF677FDDD42BEBD0973D4FB9674@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a2ce736f-a0a7-436c-859d-08daf3358fc8
X-MS-Exchange-CrossTenant-originalarrivaltime: 10 Jan 2023 18:07:39.6821
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5132

On 10/01/2023 5:18 pm, Andrew Cooper wrote:
> Have guest_{rd,wr}msr(), via hvm_{get,set}_reg(), access either the live
> register, or stashed state, depending on context.  Include MSR_PKRS for
> migration, and let the guest have full access.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Kevin Tian <kevin.tian@intel.com>
>
> v2:
>  * Rebase over the get/set_reg() infrastructure.
> ---
>  xen/arch/x86/hvm/hvm.c     |  1 +
>  xen/arch/x86/hvm/vmx/vmx.c | 17 +++++++++++++++++
>  xen/arch/x86/msr.c         | 10 ++++++++++
>  3 files changed, 28 insertions(+)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 927a221660e8..c6c1eea18003 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1333,6 +1333,7 @@ static int cf_check hvm_load_cpu_xsave_states(
>  static const uint32_t msrs_to_send[] = {
>      MSR_SPEC_CTRL,
>      MSR_INTEL_MISC_FEATURES_ENABLES,
> +    MSR_PKRS,
>      MSR_IA32_BNDCFGS,
>      MSR_IA32_XSS,
>      MSR_VIRT_SPEC_CTRL,

Needs the following hunk too:

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c6c1eea18003..86cab7aa2627 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1487,6 +1487,7 @@ static int cf_check hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
 
         case MSR_SPEC_CTRL:
         case MSR_INTEL_MISC_FEATURES_ENABLES:
+        case MSR_PKRS:
         case MSR_IA32_BNDCFGS:
         case MSR_IA32_XSS:
         case MSR_VIRT_SPEC_CTRL:

for the receive side of migration to work.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 19:16:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 19:16:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474938.736394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFK6V-00040S-40; Tue, 10 Jan 2023 19:16:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474938.736394; Tue, 10 Jan 2023 19:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFK6U-00040L-WF; Tue, 10 Jan 2023 19:16:11 +0000
Received: by outflank-mailman (input) for mailman id 474938;
 Tue, 10 Jan 2023 19:16:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ek+I=5H=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pFK6U-00040F-70
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 19:16:10 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3cc27a69-911b-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 20:16:08 +0100 (CET)
Received: by mail-ed1-x52b.google.com with SMTP id i9so19100517edj.4
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 11:16:08 -0800 (PST)
Received: from 2a02.2378.10b2.ddc1.ip.kyivstar.net
 ([2a02:2378:10b2:ddc1:8e79:90f9:af64:1818])
 by smtp.gmail.com with ESMTPSA id
 p1-20020aa7cc81000000b00499cc326a4fsm913311edt.9.2023.01.10.11.16.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Jan 2023 11:16:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cc27a69-911b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=o828iz6JTK7okONjfswkpoOBjebJS1MqqtT+ftr793o=;
        b=QE2F3WSP/YvMYIQ1z1dwdkfsDuVoyIZvAQiSfLeGX5hkDoY7H+ZU2iSARZsURKcO0v
         uWZ1lxiMXl/8uvtBSpSsuL9KmEv1lZKUGSLAY5+XJWmA5D1mY/7ILKFJODGBK/qLoUYD
         zyX0oRRGcEceJ4vFbUTlTbzBHSbWYZc/Xt+g+xij484C6veurxRUlv7prUwPCX+YUeF4
         YNBJX3aJfQDqOjfMaxf1tDjtiIDVQ1UnkBhcFdLjcBI1A5CzUCVbBRaRMubxouBPO7Qy
         5wmhWjSxbpt5Ch0m9S8YAFxfVreAjkyJlwciL3T0MEFzucUs8UFk9b/uC/ptod9diowH
         E90g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=o828iz6JTK7okONjfswkpoOBjebJS1MqqtT+ftr793o=;
        b=CpETLYbTmuWlAV/rGTO6ZP/QkRSilxp+xgj0OARQEzQ8Ec/b5A/42cdMGAL08Dvi9x
         FSqz5tiz3rN2BeG+y6tJIOshGRdUVl2ujOva3kwQvwCCqepj01+KqmJya7l7LC2sgIpW
         c0WhbhWkuUvaX35Fc/LOH/PngWiguiaw4UjB4MCfVl3zT8bucSvkxRVDch7Dk9RHfFyw
         Snyx2onMhLBxnRUSz+8F6Z6igj4HK3xmmGECDMIvax2n9QH5jAvrRsGg+vUUHg9lml4N
         ssTgvF9IkCOw0AsG9RKsGuWbWzo9DaUg6t6PLhIRmqnE8MD3CPz3x2TB6U2lgU5A5hMB
         3gYg==
X-Gm-Message-State: AFqh2kqLvZRRJrdhjN6jq6xxVt/MJprjtPkz/YyH9yH4GmEySawOFMCK
	jpKIp6fUuSp6CiRGLWI42RU=
X-Google-Smtp-Source: AMrXdXuHJIh9tZBmajvoQ7sCEV6H+1VltAy2kFAubizjyqbH5gL5D1lgW/EEXM/x741+O9swxLfkYQ==
X-Received: by 2002:a05:6402:501c:b0:48f:a9a2:29fa with SMTP id p28-20020a056402501c00b0048fa9a229famr23532127eda.2.1673378168420;
        Tue, 10 Jan 2023 11:16:08 -0800 (PST)
Message-ID: <e15997cf6e765cb23b706889b93ee35a90173a8c.camel@gmail.com>
Subject: Re: [PATCH v3 1/6] xen/riscv: introduce dummy asm/init.h
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
 <bertrand.marquis@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Gianluca Guida
	 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
	Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, 
	xen-devel@lists.xenproject.org
Date: Tue, 10 Jan 2023 21:16:06 +0200
In-Reply-To: <891d0830-7fdc-202a-5f12-2364cae5bce5@suse.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
	 <b1585373e39a7cbe023f485aa5a04b093e25ec80.1673362493.git.oleksii.kurochko@gmail.com>
	 <891d0830-7fdc-202a-5f12-2364cae5bce5@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

Hello Jan,

Sorry for breaking into the conversation.

On Tue, 2023-01-10 at 18:02 +0100, Jan Beulich wrote:
> Arm maintainers,
> 
> On 10.01.2023 16:17, Oleksii Kurochko wrote:
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/init.h
> > @@ -0,0 +1,12 @@
> > +#ifndef _XEN_ASM_INIT_H
> > +#define _XEN_ASM_INIT_H
> > +
> > +#endif /* _XEN_ASM_INIT_H */
> 
> instead of having RISC-V introduce an empty stub matching what x86
> has,

Have you had a chance to look at Andrew's answer (Re: [PATCH v1 0/8]
Basic early_printk and smoke test implementation)?
https://lore.kernel.org/xen-devel/299d913c-8095-ad90-ea3b-d46ef74d4fdc@citrix.com/#t

I agree with his point about using __has_include(), so that no empty
header stubs need to be produced for RISC-V, nor for future
architectures.

If the maintainers are OK with that, I can start using __has_include()
in the corresponding <xen/*.h> files.

> what would it take to empty Arm's as well, allowing both present ones
> to go away and no new one to be introduced? The only thing you have in
> there is struct init_info, which doesn't feel like it's properly
> placed in this header (x86 has such stuff in setup.h) ...
> 
> Jan

~Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 20:33:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 20:33:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474952.736411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFLIr-0003fP-RR; Tue, 10 Jan 2023 20:33:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474952.736411; Tue, 10 Jan 2023 20:33:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFLIr-0003fI-Nu; Tue, 10 Jan 2023 20:33:01 +0000
Received: by outflank-mailman (input) for mailman id 474952;
 Tue, 10 Jan 2023 20:33:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLIq-0003f8-57; Tue, 10 Jan 2023 20:33:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLIq-0000oz-2T; Tue, 10 Jan 2023 20:33:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLIp-0006GR-KT; Tue, 10 Jan 2023 20:32:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLIp-0003vE-K0; Tue, 10 Jan 2023 20:32:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WxNBTje/r+ocUZcBF/5/vf8Es6YebQKTgco6/Jra55s=; b=xOqF/RqylTnRjCwYdXjjYlcPVJ
	Q82aJ0G6cqNgjIyuTKEc7/in1kS+z3jCOkvQgGIuPd7clQjLc0ERE7yO0QbG2sG2KuD8xgRXmYNKn
	EsRQDOBrLZJEuxpHk9tlHwa+PMDpZjtPMWgQKB3vaIp/cQSUCyvkHeFLSyplvK5LtD7Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175707-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175707: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ec54ce1f1ab41b92782b37ae59e752fff0ef9c41
X-Osstest-Versions-That:
    ovmf=717f35a9f2d883a74998f7deb3d2cdf95bddf039
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 20:32:59 +0000

flight 175707 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175707/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ec54ce1f1ab41b92782b37ae59e752fff0ef9c41
baseline version:
 ovmf                 717f35a9f2d883a74998f7deb3d2cdf95bddf039

Last test of basis   175702  2023-01-10 14:40:41 Z    0 days
Testing same since   175707  2023-01-10 17:40:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  dann frazier <dann.frazier@canonical.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   717f35a9f2..ec54ce1f1a  ec54ce1f1ab41b92782b37ae59e752fff0ef9c41 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 20:49:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 20:49:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474960.736422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFLZ8-0005Es-8H; Tue, 10 Jan 2023 20:49:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474960.736422; Tue, 10 Jan 2023 20:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFLZ8-0005El-5P; Tue, 10 Jan 2023 20:49:50 +0000
Received: by outflank-mailman (input) for mailman id 474960;
 Tue, 10 Jan 2023 20:49:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLZ6-0005Eb-E4; Tue, 10 Jan 2023 20:49:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLZ6-0001GZ-Av; Tue, 10 Jan 2023 20:49:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLZ5-0006fv-Sl; Tue, 10 Jan 2023 20:49:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLZ5-0007oN-SC; Tue, 10 Jan 2023 20:49:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6aTQMh1gv7M3ZDeD+FNJa8KPUqql1IvnsLVX6lrPJTE=; b=C67I8/IXj3LjiVnJc87aAbsKlc
	M3F/1Ham3q9VZwB/jIJRBKxhDWD8cbB+Kn4z0k89gkRaTsbzw29xoohy9KzXgaPQHrAssCpKNDazO
	GSU9m8kirsfZBccF+ijS3WckxgbwIzV9BpnY8Ol31ynFvECIk7ogGc+7/fx3ig/EgjxI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175703-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175703: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 20:49:47 +0000

flight 175703 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175703/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623
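
Each blocking-failure line above has a fixed whitespace-separated shape: job name, step number, step name, status, and a comparison note against the baseline flight (175623). A minimal sketch of splitting such a line into labelled fields — the field names here are my own labels, not osstest terminology:

```python
# Split one osstest "blocking failures" line of the form:
#   build-amd64-xsm  6 xen-build  fail REGR. vs. 175623
# into labelled fields. Field names are illustrative, not osstest's own.
def parse_blocking(line):
    parts = line.split()
    return {
        "job": parts[0],            # e.g. "build-amd64-xsm"
        "step": int(parts[1]),      # step number within the job
        "name": parts[2],           # step name, e.g. "xen-build"
        "status": parts[3],         # "fail", "pass", ...
        "note": " ".join(parts[4:]) # e.g. "REGR. vs. 175623"
    }

rec = parse_blocking(" build-amd64-xsm 6 xen-build fail REGR. vs. 175623")
print(rec["job"], rec["status"], rec["note"])
# → build-amd64-xsm fail REGR. vs. 175623
```

Splitting on arbitrary whitespace is what makes this robust to the variable column padding osstest uses for alignment.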

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    2 days
Failing since        175627  2023-01-08 14:40:14 Z    2 days   11 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    1 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 
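
The jobs table condenses the whole flight into one status per job, which makes it easy to tally mechanically. A small sketch, using a few rows copied from the table above (in practice one would feed in the entire table):

```python
# Tally job outcomes from osstest's "jobs:" table. The rows below are a
# sample copied from the report; statuses are the last whitespace field.
from collections import Counter

jobs = """\
 build-amd64-xsm                                              fail
 build-amd64-pvops                                            pass
 test-amd64-amd64-xl                                          blocked
 test-amd64-i386-xl-vhd                                       blocked
"""

tally = Counter(line.split()[-1] for line in jobs.splitlines() if line.strip())
print(dict(tally))
# → {'fail': 1, 'pass': 1, 'blocked': 2}
```

For this flight the pattern is stark: every xen-build job failed, every pvops build passed, and everything downstream is blocked.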


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 21:01:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 21:01:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474969.736433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFLk5-0007co-EA; Tue, 10 Jan 2023 21:01:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474969.736433; Tue, 10 Jan 2023 21:01:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFLk5-0007ch-AR; Tue, 10 Jan 2023 21:01:09 +0000
Received: by outflank-mailman (input) for mailman id 474969;
 Tue, 10 Jan 2023 21:01:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLk3-0007cX-U4; Tue, 10 Jan 2023 21:01:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLk3-0001Vk-QD; Tue, 10 Jan 2023 21:01:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLk3-0006ur-77; Tue, 10 Jan 2023 21:01:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFLk3-00044X-6W; Tue, 10 Jan 2023 21:01:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R2D59ZFz1TIF0XrxWAVn2pBSUPs6xQBQ7HgIlQoPp9w=; b=P1xQ8rbSgtiAMCV/9BYHuFD3GV
	mV+1VKvuIdKrfbYGgJqhnL/XPMrDWKoAezjuZbULg9Hvgly6DyIs7btynyVCOVA9xctH6cuRxSTJ8
	C4dL6pcBVCZchPg2E+Z/GciV86XyY04/hysNHbKaQkprqBbTwA/qkEdl6eTgTAr//kOI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175694-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175694: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-migrupgrade:xen-install/src_host:fail:heisenbug
    xen-unstable:test-amd64-i386-migrupgrade:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=692d04a9ca429ca574d859fa8f43578e03b9f8b3
X-Osstest-Versions-That:
    xen=692d04a9ca429ca574d859fa8f43578e03b9f8b3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Jan 2023 21:01:07 +0000

flight 175694 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175694/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 175671 pass in 175694
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 175671 pass in 175694
 test-amd64-i386-migrupgrade  10 xen-install/src_host       fail pass in 175671
 test-amd64-i386-migrupgrade  11 xen-install/dst_host       fail pass in 175671
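
These steps are reported as intermittent rather than as regressions because each one failed in one flight but passed in another. A toy restatement of that rule — a deliberate simplification for illustration, not osstest's actual implementation:

```python
# Classify one test step from its per-flight outcomes. A step that both
# fails and passes across flights is treated as intermittent (heisenbug)
# rather than a regression. Simplified sketch, not osstest's real logic.
def classify(results):
    # results: mapping of flight number -> "pass" / "fail" for one step
    outcomes = set(results.values())
    if outcomes == {"pass"}:
        return "pass"
    if outcomes == {"fail"}:
        return "regression-candidate"
    return "intermittent"  # mixed pass/fail across flights

print(classify({175671: "fail", 175694: "pass"}))
# → intermittent
```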

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175671
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175671
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175671
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175671
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175671
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175671
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175671
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175671
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175671
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175671
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175671
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175671
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  692d04a9ca429ca574d859fa8f43578e03b9f8b3
baseline version:
 xen                  692d04a9ca429ca574d859fa8f43578e03b9f8b3

Last test of basis   175694  2023-01-10 09:33:52 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 21:30:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 21:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474979.736450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFMBw-0001lL-O1; Tue, 10 Jan 2023 21:29:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474979.736450; Tue, 10 Jan 2023 21:29:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFMBw-0001lE-Kc; Tue, 10 Jan 2023 21:29:56 +0000
Received: by outflank-mailman (input) for mailman id 474979;
 Tue, 10 Jan 2023 21:29:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+sVE=5H=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pFMBu-0001l8-Vl
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 21:29:55 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea9e8b68-912d-11ed-91b6-6bf2151ebd3b;
 Tue, 10 Jan 2023 22:29:51 +0100 (CET)
Received: by mail-wr1-x433.google.com with SMTP id z5so12097913wrt.6
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 13:29:51 -0800 (PST)
Received: from localhost.localdomain ([185.126.107.38])
 by smtp.gmail.com with ESMTPSA id
 m7-20020adffe47000000b002b880b6ef19sm12096616wrs.66.2023.01.10.13.29.48
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Tue, 10 Jan 2023 13:29:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea9e8b68-912d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=W6wbWAaLwPWIlIyedxb4xDOOJaJOFE6u3k4WgWE5z1Q=;
        b=NJZgrJVezNBUk1duCF+KjTkxDbZ0B2oYgQK83ujjYQRz+WNzlUbXFUmU9NDDTsM19M
         goUMPU68xScavEmrKhM2zb28Z0Uwxk8Ks3kUNckShLpzHgDUlR5ymQe1OZjxxTFSGJfy
         0mIRfytO1Hzan3AA8Tb7BY3X2jb4Bn5qRPiuxYu8Q2WvtI4lbPS4ceMTfBPmj03uewwZ
         DDGHmMz6s2PJdA2G8U1VcDFgyfp5CE0R5WzFgg1GOpY1yXdYtJPJVAryA7QY3qB9bevb
         mGS1tIJTmqUB7pSkXaebkhgeUEgr/3nV7vBsCwGWP/6JsZVQNg8uLSUdjUPpO9cHHeoC
         RSng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=W6wbWAaLwPWIlIyedxb4xDOOJaJOFE6u3k4WgWE5z1Q=;
        b=HFwMHq2BXnA3kA/2yCxRKKhIPUVpgTAceYy+CzruVZcZ/2FC0udJmqkW9kb2w6i+Tb
         807kf4nhz7XLIvcsQaoCFx2iBozeEqIWaEF4F1E37XZUvWovBarwZyC69+uescIo8YeJ
         UXq/uvPsIxiSt319Y/DdXKHhD5N/0BFtumqHXIChm2K4cYbYVA1fx6xuuypO6wgm5CNU
         gSR3k3F3nztJQn7GlN6UT+K8hBAdV4lX5cS5VfLHegZ1lVhNDEj4YRcGrAr0tHCxdYrF
         ph14mL2/nCbuCrPhiRZe3AArTofyrHF0tinFN8sdvQXK9UwAdJwTFdhQxk+tOzagJ2qZ
         LZGQ==
X-Gm-Message-State: AFqh2koAvaUySPI0q5uuJlVEVastbdMgvY6JY0RaI75vchY7PGd8I+TI
	hYZcbu/1Ad0fHD2XAFBgXgadhA==
X-Google-Smtp-Source: AMrXdXsUkev5VAsuQo1OfOr+0irzTxLF6bwT0XZWW8EQ21bjlIyp+8HkVhItOJf6wPYBdCkZvWs46A==
X-Received: by 2002:a5d:664e:0:b0:2bc:7ec3:8a8 with SMTP id f14-20020a5d664e000000b002bc7ec308a8mr5077500wrw.44.1673386190595;
        Tue, 10 Jan 2023 13:29:50 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	qemu-block@nongnu.org,
	qemu-arm@nongnu.org,
	qemu-ppc@nongnu.org,
	Richard Henderson <richard.henderson@linaro.org>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	ale@rev.ng,
	qemu-riscv@nongnu.org,
	xen-devel@lists.xenproject.org,
	Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [PATCH] bulk: Rename TARGET_FMT_plx -> HWADDR_FMT_plx
Date: Tue, 10 Jan 2023 22:29:47 +0100
Message-Id: <20230110212947.34557-1-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The 'hwaddr' type is defined in "exec/hwaddr.h" as:

    hwaddr is the type of a physical address
    (its size can be different from 'target_ulong').

All definitions use the 'HWADDR_' prefix, except TARGET_FMT_plx:

 $ fgrep define include/exec/hwaddr.h
 #define HWADDR_H
 #define HWADDR_BITS 64
 #define HWADDR_MAX UINT64_MAX
 #define TARGET_FMT_plx "%016" PRIx64
         ^^^^^^
 #define HWADDR_PRId PRId64
 #define HWADDR_PRIi PRIi64
 #define HWADDR_PRIo PRIo64
 #define HWADDR_PRIu PRIu64
 #define HWADDR_PRIx PRIx64
 #define HWADDR_PRIX PRIX64

Since hwaddr's size can be *different* from target_ulong's, it is
confusing to see one of its formats use the 'TARGET_FMT_' prefix,
which is normally reserved for the target_long / target_ulong types:

$ fgrep TARGET_FMT_ include/exec/cpu-defs.h
 #define TARGET_FMT_lx "%08x"
 #define TARGET_FMT_ld "%d"
 #define TARGET_FMT_lu "%u"
 #define TARGET_FMT_lx "%016" PRIx64
 #define TARGET_FMT_ld "%" PRId64
 #define TARGET_FMT_lu "%" PRIu64

Apparently this format was missed during commit a8170e5e97
("Rename target_phys_addr_t to hwaddr"), so complete the conversion
with a bulk rename:

 $ sed -i -e s/TARGET_FMT_plx/HWADDR_FMT_plx/g $(git grep -l TARGET_FMT_plx)
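The same substitution can be sanity-checked on a scratch file; the file
name and contents below are illustrative, not taken from the QEMU tree:

```shell
# Write a scratch file containing two uses of the old format macro.
cat > /tmp/plx_demo.c <<'EOF'
tlb_debug("paddr=0x" TARGET_FMT_plx "\n", addr);
printf("rom@" TARGET_FMT_plx, addr);
EOF

# Apply the same bulk rename as above (GNU sed -i edits in place).
sed -i -e 's/TARGET_FMT_plx/HWADDR_FMT_plx/g' /tmp/plx_demo.c

# Count renamed occurrences, and confirm none of the old name remain
# (grep -c exits nonzero when the count is 0, hence the || true).
grep -c 'HWADDR_FMT_plx' /tmp/plx_demo.c
grep -c 'TARGET_FMT_plx' /tmp/plx_demo.c || true
```

In the real tree the equivalent check is that `git grep -l TARGET_FMT_plx`
produces no output after the rename.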

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 accel/tcg/cputlb.c                  |  2 +-
 hw/arm/strongarm.c                  | 24 ++++++++++++------------
 hw/block/pflash_cfi01.c             |  2 +-
 hw/char/digic-uart.c                |  4 ++--
 hw/char/etraxfs_ser.c               |  4 ++--
 hw/core/loader.c                    |  8 ++++----
 hw/core/sysbus.c                    |  4 ++--
 hw/display/cirrus_vga.c             |  4 ++--
 hw/display/g364fb.c                 |  4 ++--
 hw/display/vga.c                    |  8 ++++----
 hw/dma/etraxfs_dma.c                | 14 +++++++-------
 hw/dma/pl330.c                      | 14 +++++++-------
 hw/dma/xilinx_axidma.c              |  4 ++--
 hw/dma/xlnx_csu_dma.c               |  4 ++--
 hw/i2c/mpc_i2c.c                    |  4 ++--
 hw/i386/multiboot.c                 |  8 ++++----
 hw/i386/xen/xen-hvm.c               |  8 ++++----
 hw/i386/xen/xen-mapcache.c          | 16 ++++++++--------
 hw/i386/xen/xen_platform.c          |  4 ++--
 hw/intc/arm_gicv3_dist.c            |  8 ++++----
 hw/intc/arm_gicv3_its.c             | 14 +++++++-------
 hw/intc/arm_gicv3_redist.c          |  8 ++++----
 hw/intc/exynos4210_combiner.c       | 10 +++++-----
 hw/misc/auxbus.c                    |  2 +-
 hw/misc/ivshmem.c                   |  6 +++---
 hw/misc/macio/mac_dbdma.c           |  4 ++--
 hw/misc/mst_fpga.c                  |  4 ++--
 hw/net/allwinner-sun8i-emac.c       |  4 ++--
 hw/net/allwinner_emac.c             |  4 ++--
 hw/net/fsl_etsec/etsec.c            |  4 ++--
 hw/net/fsl_etsec/rings.c            |  4 ++--
 hw/net/pcnet.c                      |  4 ++--
 hw/net/rocker/rocker.c              | 26 +++++++++++++-------------
 hw/net/rocker/rocker_desc.c         |  2 +-
 hw/net/xilinx_axienet.c             |  4 ++--
 hw/net/xilinx_ethlite.c             |  6 +++---
 hw/pci-bridge/pci_expander_bridge.c |  2 +-
 hw/pci-host/bonito.c                | 14 +++++++-------
 hw/pci-host/ppce500.c               |  4 ++--
 hw/pci/pci_host.c                   |  4 ++--
 hw/ppc/ppc4xx_sdram.c               |  2 +-
 hw/rtc/exynos4210_rtc.c             |  4 ++--
 hw/sh4/sh7750.c                     |  4 ++--
 hw/ssi/xilinx_spi.c                 |  4 ++--
 hw/ssi/xilinx_spips.c               |  8 ++++----
 hw/timer/digic-timer.c              |  4 ++--
 hw/timer/etraxfs_timer.c            |  2 +-
 hw/timer/exynos4210_mct.c           |  2 +-
 hw/timer/exynos4210_pwm.c           |  4 ++--
 hw/virtio/virtio-mmio.c             |  4 ++--
 hw/xen/xen_pt.c                     |  4 ++--
 include/exec/hwaddr.h               |  2 +-
 monitor/misc.c                      |  2 +-
 softmmu/memory.c                    | 18 +++++++++---------
 softmmu/memory_mapping.c            |  4 ++--
 softmmu/physmem.c                   | 10 +++++-----
 target/i386/monitor.c               |  6 +++---
 target/loongarch/tlb_helper.c       |  2 +-
 target/microblaze/op_helper.c       |  2 +-
 target/mips/tcg/sysemu/tlb_helper.c |  2 +-
 target/ppc/mmu-hash32.c             | 14 +++++++-------
 target/ppc/mmu-hash64.c             | 12 ++++++------
 target/ppc/mmu_common.c             | 26 +++++++++++++-------------
 target/ppc/mmu_helper.c             |  4 ++--
 target/riscv/cpu_helper.c           | 10 +++++-----
 target/riscv/monitor.c              |  2 +-
 target/sparc/ldst_helper.c          |  6 +++---
 target/sparc/mmu_helper.c           | 10 +++++-----
 target/tricore/helper.c             |  2 +-
 69 files changed, 227 insertions(+), 227 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 4948729917..4e040a1cb9 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1142,7 +1142,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
                                                 &xlat, &sz, full->attrs, &prot);
     assert(sz >= TARGET_PAGE_SIZE);
 
-    tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
+    tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" HWADDR_FMT_plx
               " prot=%x idx=%d\n",
               vaddr, full->phys_addr, prot, mmu_idx);
 
diff --git a/hw/arm/strongarm.c b/hw/arm/strongarm.c
index 39b8f01ac4..cc73145053 100644
--- a/hw/arm/strongarm.c
+++ b/hw/arm/strongarm.c
@@ -151,7 +151,7 @@ static uint64_t strongarm_pic_mem_read(void *opaque, hwaddr offset,
     case ICPR:
         return s->pending;
     default:
-        printf("%s: Bad register offset 0x" TARGET_FMT_plx "\n",
+        printf("%s: Bad register offset 0x" HWADDR_FMT_plx "\n",
                         __func__, offset);
         return 0;
     }
@@ -173,7 +173,7 @@ static void strongarm_pic_mem_write(void *opaque, hwaddr offset,
         s->int_idle = (value & 1) ? 0 : ~0;
         break;
     default:
-        printf("%s: Bad register offset 0x" TARGET_FMT_plx "\n",
+        printf("%s: Bad register offset 0x" HWADDR_FMT_plx "\n",
                         __func__, offset);
         break;
     }
@@ -333,7 +333,7 @@ static uint64_t strongarm_rtc_read(void *opaque, hwaddr addr,
                 ((qemu_clock_get_ms(rtc_clock) - s->last_hz) << 15) /
                 (1000 * ((s->rttr & 0xffff) + 1));
     default:
-        printf("%s: Bad register 0x" TARGET_FMT_plx "\n", __func__, addr);
+        printf("%s: Bad register 0x" HWADDR_FMT_plx "\n", __func__, addr);
         return 0;
     }
 }
@@ -375,7 +375,7 @@ static void strongarm_rtc_write(void *opaque, hwaddr addr,
         break;
 
     default:
-        printf("%s: Bad register 0x" TARGET_FMT_plx "\n", __func__, addr);
+        printf("%s: Bad register 0x" HWADDR_FMT_plx "\n", __func__, addr);
     }
 }
 
@@ -581,7 +581,7 @@ static uint64_t strongarm_gpio_read(void *opaque, hwaddr offset,
         return s->status;
 
     default:
-        printf("%s: Bad offset 0x" TARGET_FMT_plx "\n", __func__, offset);
+        printf("%s: Bad offset 0x" HWADDR_FMT_plx "\n", __func__, offset);
     }
 
     return 0;
@@ -626,7 +626,7 @@ static void strongarm_gpio_write(void *opaque, hwaddr offset,
         break;
 
     default:
-        printf("%s: Bad offset 0x" TARGET_FMT_plx "\n", __func__, offset);
+        printf("%s: Bad offset 0x" HWADDR_FMT_plx "\n", __func__, offset);
     }
 }
 
@@ -782,7 +782,7 @@ static uint64_t strongarm_ppc_read(void *opaque, hwaddr offset,
         return s->ppfr | ~0x7f001;
 
     default:
-        printf("%s: Bad offset 0x" TARGET_FMT_plx "\n", __func__, offset);
+        printf("%s: Bad offset 0x" HWADDR_FMT_plx "\n", __func__, offset);
     }
 
     return 0;
@@ -817,7 +817,7 @@ static void strongarm_ppc_write(void *opaque, hwaddr offset,
         break;
 
     default:
-        printf("%s: Bad offset 0x" TARGET_FMT_plx "\n", __func__, offset);
+        printf("%s: Bad offset 0x" HWADDR_FMT_plx "\n", __func__, offset);
     }
 }
 
@@ -1164,7 +1164,7 @@ static uint64_t strongarm_uart_read(void *opaque, hwaddr addr,
         return s->utsr1;
 
     default:
-        printf("%s: Bad register 0x" TARGET_FMT_plx "\n", __func__, addr);
+        printf("%s: Bad register 0x" HWADDR_FMT_plx "\n", __func__, addr);
         return 0;
     }
 }
@@ -1221,7 +1221,7 @@ static void strongarm_uart_write(void *opaque, hwaddr addr,
         break;
 
     default:
-        printf("%s: Bad register 0x" TARGET_FMT_plx "\n", __func__, addr);
+        printf("%s: Bad register 0x" HWADDR_FMT_plx "\n", __func__, addr);
     }
 }
 
@@ -1443,7 +1443,7 @@ static uint64_t strongarm_ssp_read(void *opaque, hwaddr addr,
         strongarm_ssp_fifo_update(s);
         return retval;
     default:
-        printf("%s: Bad register 0x" TARGET_FMT_plx "\n", __func__, addr);
+        printf("%s: Bad register 0x" HWADDR_FMT_plx "\n", __func__, addr);
         break;
     }
     return 0;
@@ -1509,7 +1509,7 @@ static void strongarm_ssp_write(void *opaque, hwaddr addr,
         break;
 
     default:
-        printf("%s: Bad register 0x" TARGET_FMT_plx "\n", __func__, addr);
+        printf("%s: Bad register 0x" HWADDR_FMT_plx "\n", __func__, addr);
         break;
     }
 }
diff --git a/hw/block/pflash_cfi01.c b/hw/block/pflash_cfi01.c
index 0cbc2fb4cb..36d68c70f6 100644
--- a/hw/block/pflash_cfi01.c
+++ b/hw/block/pflash_cfi01.c
@@ -645,7 +645,7 @@ static void pflash_write(PFlashCFI01 *pfl, hwaddr offset,
 
  error_flash:
     qemu_log_mask(LOG_UNIMP, "%s: Unimplemented flash cmd sequence "
-                  "(offset " TARGET_FMT_plx ", wcycle 0x%x cmd 0x%x value 0x%x)"
+                  "(offset " HWADDR_FMT_plx ", wcycle 0x%x cmd 0x%x value 0x%x)"
                   "\n", __func__, offset, pfl->wcycle, pfl->cmd, value);
 
  mode_read_array:
diff --git a/hw/char/digic-uart.c b/hw/char/digic-uart.c
index 00e5df5517..51d4e7db52 100644
--- a/hw/char/digic-uart.c
+++ b/hw/char/digic-uart.c
@@ -63,7 +63,7 @@ static uint64_t digic_uart_read(void *opaque, hwaddr addr,
     default:
         qemu_log_mask(LOG_UNIMP,
                       "digic-uart: read access to unknown register 0x"
-                      TARGET_FMT_plx "\n", addr << 2);
+                      HWADDR_FMT_plx "\n", addr << 2);
     }
 
     return ret;
@@ -101,7 +101,7 @@ static void digic_uart_write(void *opaque, hwaddr addr, uint64_t value,
     default:
         qemu_log_mask(LOG_UNIMP,
                       "digic-uart: write access to unknown register 0x"
-                      TARGET_FMT_plx "\n", addr << 2);
+                      HWADDR_FMT_plx "\n", addr << 2);
     }
 }
 
diff --git a/hw/char/etraxfs_ser.c b/hw/char/etraxfs_ser.c
index e8c3017724..8d6422dae4 100644
--- a/hw/char/etraxfs_ser.c
+++ b/hw/char/etraxfs_ser.c
@@ -113,7 +113,7 @@ ser_read(void *opaque, hwaddr addr, unsigned int size)
             break;
         default:
             r = s->regs[addr];
-            D(qemu_log("%s " TARGET_FMT_plx "=%x\n", __func__, addr, r));
+            D(qemu_log("%s " HWADDR_FMT_plx "=%x\n", __func__, addr, r));
             break;
     }
     return r;
@@ -127,7 +127,7 @@ ser_write(void *opaque, hwaddr addr,
     uint32_t value = val64;
     unsigned char ch = val64;
 
-    D(qemu_log("%s " TARGET_FMT_plx "=%x\n",  __func__, addr, value));
+    D(qemu_log("%s " HWADDR_FMT_plx "=%x\n",  __func__, addr, value));
     addr >>= 2;
     switch (addr)
     {
diff --git a/hw/core/loader.c b/hw/core/loader.c
index 0548830733..a18fb26469 100644
--- a/hw/core/loader.c
+++ b/hw/core/loader.c
@@ -1054,7 +1054,7 @@ ssize_t rom_add_file(const char *file, const char *fw_dir,
             rom->mr = mr;
             snprintf(devpath, sizeof(devpath), "/rom@%s", file);
         } else {
-            snprintf(devpath, sizeof(devpath), "/rom@" TARGET_FMT_plx, addr);
+            snprintf(devpath, sizeof(devpath), "/rom@" HWADDR_FMT_plx, addr);
         }
     }
 
@@ -1238,10 +1238,10 @@ static void rom_print_one_overlap_error(Rom *last_rom, Rom *rom)
         "\nThe following two regions overlap (in the %s address space):\n",
         rom_as_name(rom));
     error_printf(
-        "  %s (addresses 0x" TARGET_FMT_plx " - 0x" TARGET_FMT_plx ")\n",
+        "  %s (addresses 0x" HWADDR_FMT_plx " - 0x" HWADDR_FMT_plx ")\n",
         last_rom->name, last_rom->addr, last_rom->addr + last_rom->romsize);
     error_printf(
-        "  %s (addresses 0x" TARGET_FMT_plx " - 0x" TARGET_FMT_plx ")\n",
+        "  %s (addresses 0x" HWADDR_FMT_plx " - 0x" HWADDR_FMT_plx ")\n",
         rom->name, rom->addr, rom->addr + rom->romsize);
 }
 
@@ -1595,7 +1595,7 @@ HumanReadableText *qmp_x_query_roms(Error **errp)
                                    rom->romsize,
                                    rom->name);
         } else if (!rom->fw_file) {
-            g_string_append_printf(buf, "addr=" TARGET_FMT_plx
+            g_string_append_printf(buf, "addr=" HWADDR_FMT_plx
                                    " size=0x%06zx mem=%s name=\"%s\"\n",
                                    rom->addr, rom->romsize,
                                    rom->isrom ? "rom" : "ram",
diff --git a/hw/core/sysbus.c b/hw/core/sysbus.c
index 05c1da3d31..35f902b582 100644
--- a/hw/core/sysbus.c
+++ b/hw/core/sysbus.c
@@ -269,7 +269,7 @@ static void sysbus_dev_print(Monitor *mon, DeviceState *dev, int indent)
 
     for (i = 0; i < s->num_mmio; i++) {
         size = memory_region_size(s->mmio[i].memory);
-        monitor_printf(mon, "%*smmio " TARGET_FMT_plx "/" TARGET_FMT_plx "\n",
+        monitor_printf(mon, "%*smmio " HWADDR_FMT_plx "/" HWADDR_FMT_plx "\n",
                        indent, "", s->mmio[i].addr, size);
     }
 }
@@ -289,7 +289,7 @@ static char *sysbus_get_fw_dev_path(DeviceState *dev)
         }
     }
     if (s->num_mmio) {
-        return g_strdup_printf("%s@" TARGET_FMT_plx, qdev_fw_name(dev),
+        return g_strdup_printf("%s@" HWADDR_FMT_plx, qdev_fw_name(dev),
                                s->mmio[0].addr);
     }
     if (s->num_pio) {
diff --git a/hw/display/cirrus_vga.c b/hw/display/cirrus_vga.c
index 55c32e3e40..b80f98b6c4 100644
--- a/hw/display/cirrus_vga.c
+++ b/hw/display/cirrus_vga.c
@@ -2041,7 +2041,7 @@ static uint64_t cirrus_vga_mem_read(void *opaque,
     } else {
         val = 0xff;
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "cirrus: mem_readb 0x" TARGET_FMT_plx "\n", addr);
+                      "cirrus: mem_readb 0x" HWADDR_FMT_plx "\n", addr);
     }
     return val;
 }
@@ -2105,7 +2105,7 @@ static void cirrus_vga_mem_write(void *opaque,
         }
     } else {
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "cirrus: mem_writeb 0x" TARGET_FMT_plx " "
+                      "cirrus: mem_writeb 0x" HWADDR_FMT_plx " "
                       "value 0x%02" PRIx64 "\n", addr, mem_value);
     }
 }
diff --git a/hw/display/g364fb.c b/hw/display/g364fb.c
index caca86d773..2903cab82d 100644
--- a/hw/display/g364fb.c
+++ b/hw/display/g364fb.c
@@ -320,7 +320,7 @@ static uint64_t g364fb_ctrl_read(void *opaque,
                 break;
             default:
             {
-                error_report("g364: invalid read at [" TARGET_FMT_plx "]",
+                error_report("g364: invalid read at [" HWADDR_FMT_plx "]",
                              addr);
                 val = 0;
                 break;
@@ -424,7 +424,7 @@ static void g364fb_ctrl_write(void *opaque,
             break;
         default:
             error_report("g364: invalid write of 0x%" PRIx64
-                         " at [" TARGET_FMT_plx "]", val, addr);
+                         " at [" HWADDR_FMT_plx "]", val, addr);
             break;
         }
     }
diff --git a/hw/display/vga.c b/hw/display/vga.c
index 0cb26a791b..7a5fdff649 100644
--- a/hw/display/vga.c
+++ b/hw/display/vga.c
@@ -875,7 +875,7 @@ void vga_mem_writeb(VGACommonState *s, hwaddr addr, uint32_t val)
     uint32_t write_mask, bit_mask, set_mask;
 
 #ifdef DEBUG_VGA_MEM
-    printf("vga: [0x" TARGET_FMT_plx "] = 0x%02x\n", addr, val);
+    printf("vga: [0x" HWADDR_FMT_plx "] = 0x%02x\n", addr, val);
 #endif
     /* convert to VGA memory offset */
     memory_map_mode = (s->gr[VGA_GFX_MISC] >> 2) & 3;
@@ -909,7 +909,7 @@ void vga_mem_writeb(VGACommonState *s, hwaddr addr, uint32_t val)
             assert(addr < s->vram_size);
             s->vram_ptr[addr] = val;
 #ifdef DEBUG_VGA_MEM
-            printf("vga: chain4: [0x" TARGET_FMT_plx "]\n", addr);
+            printf("vga: chain4: [0x" HWADDR_FMT_plx "]\n", addr);
 #endif
             s->plane_updated |= mask; /* only used to detect font change */
             memory_region_set_dirty(&s->vram, addr, 1);
@@ -925,7 +925,7 @@ void vga_mem_writeb(VGACommonState *s, hwaddr addr, uint32_t val)
             }
             s->vram_ptr[addr] = val;
 #ifdef DEBUG_VGA_MEM
-            printf("vga: odd/even: [0x" TARGET_FMT_plx "]\n", addr);
+            printf("vga: odd/even: [0x" HWADDR_FMT_plx "]\n", addr);
 #endif
             s->plane_updated |= mask; /* only used to detect font change */
             memory_region_set_dirty(&s->vram, addr, 1);
@@ -1003,7 +1003,7 @@ void vga_mem_writeb(VGACommonState *s, hwaddr addr, uint32_t val)
             (((uint32_t *)s->vram_ptr)[addr] & ~write_mask) |
             (val & write_mask);
 #ifdef DEBUG_VGA_MEM
-        printf("vga: latch: [0x" TARGET_FMT_plx "] mask=0x%08x val=0x%08x\n",
+        printf("vga: latch: [0x" HWADDR_FMT_plx "] mask=0x%08x val=0x%08x\n",
                addr * 4, write_mask, val);
 #endif
         memory_region_set_dirty(&s->vram, addr << 2, sizeof(uint32_t));
diff --git a/hw/dma/etraxfs_dma.c b/hw/dma/etraxfs_dma.c
index c4334e87bf..8951864ed7 100644
--- a/hw/dma/etraxfs_dma.c
+++ b/hw/dma/etraxfs_dma.c
@@ -272,7 +272,7 @@ static void channel_load_d(struct fs_dma_ctrl *ctrl, int c)
 	hwaddr addr = channel_reg(ctrl, c, RW_SAVED_DATA);
 
 	/* Load and decode. FIXME: handle endianness.  */
-	D(printf("%s ch=%d addr=" TARGET_FMT_plx "\n", __func__, c, addr));
+	D(printf("%s ch=%d addr=" HWADDR_FMT_plx "\n", __func__, c, addr));
     cpu_physical_memory_read(addr, &ctrl->channels[c].current_d,
                              sizeof(ctrl->channels[c].current_d));
 
@@ -285,7 +285,7 @@ static void channel_store_c(struct fs_dma_ctrl *ctrl, int c)
 	hwaddr addr = channel_reg(ctrl, c, RW_GROUP_DOWN);
 
 	/* Encode and store. FIXME: handle endianness.  */
-	D(printf("%s ch=%d addr=" TARGET_FMT_plx "\n", __func__, c, addr));
+	D(printf("%s ch=%d addr=" HWADDR_FMT_plx "\n", __func__, c, addr));
 	D(dump_d(c, &ctrl->channels[c].current_d));
     cpu_physical_memory_write(addr, &ctrl->channels[c].current_c,
                               sizeof(ctrl->channels[c].current_c));
@@ -296,7 +296,7 @@ static void channel_store_d(struct fs_dma_ctrl *ctrl, int c)
 	hwaddr addr = channel_reg(ctrl, c, RW_SAVED_DATA);
 
 	/* Encode and store. FIXME: handle endianness.  */
-	D(printf("%s ch=%d addr=" TARGET_FMT_plx "\n", __func__, c, addr));
+	D(printf("%s ch=%d addr=" HWADDR_FMT_plx "\n", __func__, c, addr));
     cpu_physical_memory_write(addr, &ctrl->channels[c].current_d,
                               sizeof(ctrl->channels[c].current_d));
 }
@@ -574,7 +574,7 @@ static inline int channel_in_run(struct fs_dma_ctrl *ctrl, int c)
 
 static uint32_t dma_rinvalid (void *opaque, hwaddr addr)
 {
-        hw_error("Unsupported short raccess. reg=" TARGET_FMT_plx "\n", addr);
+        hw_error("Unsupported short raccess. reg=" HWADDR_FMT_plx "\n", addr);
         return 0;
 }
 
@@ -603,7 +603,7 @@ dma_read(void *opaque, hwaddr addr, unsigned int size)
 
 		default:
 			r = ctrl->channels[c].regs[addr];
-			D(printf ("%s c=%d addr=" TARGET_FMT_plx "\n",
+			D(printf ("%s c=%d addr=" HWADDR_FMT_plx "\n",
 				  __func__, c, addr));
 			break;
 	}
@@ -613,7 +613,7 @@ dma_read(void *opaque, hwaddr addr, unsigned int size)
 static void
 dma_winvalid (void *opaque, hwaddr addr, uint32_t value)
 {
-        hw_error("Unsupported short waccess. reg=" TARGET_FMT_plx "\n", addr);
+        hw_error("Unsupported short waccess. reg=" HWADDR_FMT_plx "\n", addr);
 }
 
 static void
@@ -686,7 +686,7 @@ dma_write(void *opaque, hwaddr addr,
 			break;
 
 	        default:
-			D(printf ("%s c=%d " TARGET_FMT_plx "\n",
+			D(printf ("%s c=%d " HWADDR_FMT_plx "\n",
 				__func__, c, addr));
 			break;
         }
diff --git a/hw/dma/pl330.c b/hw/dma/pl330.c
index e5d521c329..e7e67dd8b6 100644
--- a/hw/dma/pl330.c
+++ b/hw/dma/pl330.c
@@ -1373,7 +1373,7 @@ static void pl330_iomem_write(void *opaque, hwaddr offset,
             pl330_exec(s);
         } else {
             qemu_log_mask(LOG_GUEST_ERROR, "pl330: write of illegal value %u "
-                          "for offset " TARGET_FMT_plx "\n", (unsigned)value,
+                          "for offset " HWADDR_FMT_plx "\n", (unsigned)value,
                           offset);
         }
         break;
@@ -1384,7 +1384,7 @@ static void pl330_iomem_write(void *opaque, hwaddr offset,
         s->dbg[1] = value;
         break;
     default:
-        qemu_log_mask(LOG_GUEST_ERROR, "pl330: bad write offset " TARGET_FMT_plx
+        qemu_log_mask(LOG_GUEST_ERROR, "pl330: bad write offset " HWADDR_FMT_plx
                       "\n", offset);
         break;
     }
@@ -1409,7 +1409,7 @@ static inline uint32_t pl330_iomem_read_imp(void *opaque,
         chan_id = offset >> 5;
         if (chan_id >= s->num_chnls) {
             qemu_log_mask(LOG_GUEST_ERROR, "pl330: bad read offset "
-                          TARGET_FMT_plx "\n", offset);
+                          HWADDR_FMT_plx "\n", offset);
             return 0;
         }
         switch (offset & 0x1f) {
@@ -1425,7 +1425,7 @@ static inline uint32_t pl330_iomem_read_imp(void *opaque,
             return s->chan[chan_id].lc[1];
         default:
             qemu_log_mask(LOG_GUEST_ERROR, "pl330: bad read offset "
-                          TARGET_FMT_plx "\n", offset);
+                          HWADDR_FMT_plx "\n", offset);
             return 0;
         }
     }
@@ -1434,7 +1434,7 @@ static inline uint32_t pl330_iomem_read_imp(void *opaque,
         chan_id = offset >> 3;
         if (chan_id >= s->num_chnls) {
             qemu_log_mask(LOG_GUEST_ERROR, "pl330: bad read offset "
-                          TARGET_FMT_plx "\n", offset);
+                          HWADDR_FMT_plx "\n", offset);
             return 0;
         }
         switch ((offset >> 2) & 1) {
@@ -1456,7 +1456,7 @@ static inline uint32_t pl330_iomem_read_imp(void *opaque,
         chan_id = offset >> 2;
         if (chan_id >= s->num_chnls) {
             qemu_log_mask(LOG_GUEST_ERROR, "pl330: bad read offset "
-                          TARGET_FMT_plx "\n", offset);
+                          HWADDR_FMT_plx "\n", offset);
             return 0;
         }
         return s->chan[chan_id].fault_type;
@@ -1495,7 +1495,7 @@ static inline uint32_t pl330_iomem_read_imp(void *opaque,
         return s->debug_status;
     default:
         qemu_log_mask(LOG_GUEST_ERROR, "pl330: bad read offset "
-                      TARGET_FMT_plx "\n", offset);
+                      HWADDR_FMT_plx "\n", offset);
     }
     return 0;
 }
diff --git a/hw/dma/xilinx_axidma.c b/hw/dma/xilinx_axidma.c
index cbb8f0f169..6030c76435 100644
--- a/hw/dma/xilinx_axidma.c
+++ b/hw/dma/xilinx_axidma.c
@@ -456,7 +456,7 @@ static uint64_t axidma_read(void *opaque, hwaddr addr,
             break;
         default:
             r = s->regs[addr];
-            D(qemu_log("%s ch=%d addr=" TARGET_FMT_plx " v=%x\n",
+            D(qemu_log("%s ch=%d addr=" HWADDR_FMT_plx " v=%x\n",
                            __func__, sid, addr * 4, r));
             break;
     }
@@ -509,7 +509,7 @@ static void axidma_write(void *opaque, hwaddr addr,
             }
             break;
         default:
-            D(qemu_log("%s: ch=%d addr=" TARGET_FMT_plx " v=%x\n",
+            D(qemu_log("%s: ch=%d addr=" HWADDR_FMT_plx " v=%x\n",
                   __func__, sid, addr * 4, (unsigned)value));
             s->regs[addr] = value;
             break;
diff --git a/hw/dma/xlnx_csu_dma.c b/hw/dma/xlnx_csu_dma.c
index 1ce52ea5a2..88002698a1 100644
--- a/hw/dma/xlnx_csu_dma.c
+++ b/hw/dma/xlnx_csu_dma.c
@@ -211,7 +211,7 @@ static uint32_t xlnx_csu_dma_read(XlnxCSUDMA *s, uint8_t *buf, uint32_t len)
     if (result == MEMTX_OK) {
         xlnx_csu_dma_data_process(s, buf, len);
     } else {
-        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address " TARGET_FMT_plx
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address " HWADDR_FMT_plx
                       " for mem read", __func__, addr);
         s->regs[R_INT_STATUS] |= R_INT_STATUS_AXI_BRESP_ERR_MASK;
         xlnx_csu_dma_update_irq(s);
@@ -241,7 +241,7 @@ static uint32_t xlnx_csu_dma_write(XlnxCSUDMA *s, uint8_t *buf, uint32_t len)
     }
 
     if (result != MEMTX_OK) {
-        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address " TARGET_FMT_plx
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad address " HWADDR_FMT_plx
                       " for mem write", __func__, addr);
         s->regs[R_INT_STATUS] |= R_INT_STATUS_AXI_BRESP_ERR_MASK;
         xlnx_csu_dma_update_irq(s);
diff --git a/hw/i2c/mpc_i2c.c b/hw/i2c/mpc_i2c.c
index 845392505f..219c548402 100644
--- a/hw/i2c/mpc_i2c.c
+++ b/hw/i2c/mpc_i2c.c
@@ -224,7 +224,7 @@ static uint64_t mpc_i2c_read(void *opaque, hwaddr addr, unsigned size)
         break;
     }
 
-    DPRINTF("%s: addr " TARGET_FMT_plx " %02" PRIx32 "\n", __func__,
+    DPRINTF("%s: addr " HWADDR_FMT_plx " %02" PRIx32 "\n", __func__,
                                          addr, value);
     return (uint64_t)value;
 }
@@ -234,7 +234,7 @@ static void mpc_i2c_write(void *opaque, hwaddr addr,
 {
     MPCI2CState *s = opaque;
 
-    DPRINTF("%s: addr " TARGET_FMT_plx " val %08" PRIx64 "\n", __func__,
+    DPRINTF("%s: addr " HWADDR_FMT_plx " val %08" PRIx64 "\n", __func__,
                                              addr, value);
     switch (addr) {
     case MPC_I2C_ADR:
diff --git a/hw/i386/multiboot.c b/hw/i386/multiboot.c
index 963e29362e..3332712ab3 100644
--- a/hw/i386/multiboot.c
+++ b/hw/i386/multiboot.c
@@ -137,7 +137,7 @@ static void mb_add_mod(MultibootState *s,
     stl_p(p + MB_MOD_END,     end);
     stl_p(p + MB_MOD_CMDLINE, cmdline_phys);
 
-    mb_debug("mod%02d: "TARGET_FMT_plx" - "TARGET_FMT_plx,
+    mb_debug("mod%02d: "HWADDR_FMT_plx" - "HWADDR_FMT_plx,
              s->mb_mods_count, start, end);
 
     s->mb_mods_count++;
@@ -353,7 +353,7 @@ int load_multiboot(X86MachineState *x86ms,
             mb_add_mod(&mbs, mbs.mb_buf_phys + offs,
                        mbs.mb_buf_phys + offs + mb_mod_length, c);
 
-            mb_debug("mod_start: %p\nmod_end:   %p\n  cmdline: "TARGET_FMT_plx,
+            mb_debug("mod_start: %p\nmod_end:   %p\n  cmdline: "HWADDR_FMT_plx,
                      (char *)mbs.mb_buf + offs,
                      (char *)mbs.mb_buf + offs + mb_mod_length, c);
             g_free(one_file);
@@ -382,8 +382,8 @@ int load_multiboot(X86MachineState *x86ms,
     stl_p(bootinfo + MBI_MMAP_ADDR,   ADDR_E820_MAP);
 
     mb_debug("multiboot: entry_addr = %#x", mh_entry_addr);
-    mb_debug("           mb_buf_phys   = "TARGET_FMT_plx, mbs.mb_buf_phys);
-    mb_debug("           mod_start     = "TARGET_FMT_plx,
+    mb_debug("           mb_buf_phys   = "HWADDR_FMT_plx, mbs.mb_buf_phys);
+    mb_debug("           mod_start     = "HWADDR_FMT_plx,
              mbs.mb_buf_phys + mbs.offset_mods);
     mb_debug("           mb_mods_count = %d", mbs.mb_mods_count);
 
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index e4293d6d66..b9a6f7f538 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -516,13 +516,13 @@ static void xen_set_memory(struct MemoryListener *listener,
             if (xen_set_mem_type(xen_domid, mem_type,
                                  start_addr >> TARGET_PAGE_BITS,
                                  size >> TARGET_PAGE_BITS)) {
-                DPRINTF("xen_set_mem_type error, addr: "TARGET_FMT_plx"\n",
+                DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
                         start_addr);
             }
         }
     } else {
         if (xen_remove_from_physmap(state, start_addr, size) < 0) {
-            DPRINTF("physmapping does not exist at "TARGET_FMT_plx"\n", start_addr);
+            DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
         }
     }
 }
@@ -642,8 +642,8 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
 #endif
         if (errno == ENODATA) {
             memory_region_set_dirty(framebuffer, 0, size);
-            DPRINTF("xen: track_dirty_vram failed (0x" TARGET_FMT_plx
-                    ", 0x" TARGET_FMT_plx "): %s\n",
+            DPRINTF("xen: track_dirty_vram failed (0x" HWADDR_FMT_plx
+                    ", 0x" HWADDR_FMT_plx "): %s\n",
                     start_addr, start_addr + size, strerror(errno));
         }
         return;
diff --git a/hw/i386/xen/xen-mapcache.c b/hw/i386/xen/xen-mapcache.c
index a2f93096e7..1d0879d234 100644
--- a/hw/i386/xen/xen-mapcache.c
+++ b/hw/i386/xen/xen-mapcache.c
@@ -357,7 +357,7 @@ tryagain:
         entry->lock++;
         if (entry->lock == 0) {
             fprintf(stderr,
-                    "mapcache entry lock overflow: "TARGET_FMT_plx" -> %p\n",
+                    "mapcache entry lock overflow: "HWADDR_FMT_plx" -> %p\n",
                     entry->paddr_index, entry->vaddr_base);
             abort();
         }
@@ -404,7 +404,7 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
     if (!found) {
         fprintf(stderr, "%s, could not find %p\n", __func__, ptr);
         QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
-            DPRINTF("   "TARGET_FMT_plx" -> %p is present\n", reventry->paddr_index,
+            DPRINTF("   "HWADDR_FMT_plx" -> %p is present\n", reventry->paddr_index,
                     reventry->vaddr_req);
         }
         abort();
@@ -445,7 +445,7 @@ static void xen_invalidate_map_cache_entry_unlocked(uint8_t *buffer)
     if (!found) {
         DPRINTF("%s, could not find %p\n", __func__, buffer);
         QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
-            DPRINTF("   "TARGET_FMT_plx" -> %p is present\n", reventry->paddr_index, reventry->vaddr_req);
+            DPRINTF("   "HWADDR_FMT_plx" -> %p is present\n", reventry->paddr_index, reventry->vaddr_req);
         }
         return;
     }
@@ -503,7 +503,7 @@ void xen_invalidate_map_cache(void)
             continue;
         }
         fprintf(stderr, "Locked DMA mapping while invalidating mapcache!"
-                " "TARGET_FMT_plx" -> %p is present\n",
+                " "HWADDR_FMT_plx" -> %p is present\n",
                 reventry->paddr_index, reventry->vaddr_req);
     }
 
@@ -562,7 +562,7 @@ static uint8_t *xen_replace_cache_entry_unlocked(hwaddr old_phys_addr,
         entry = entry->next;
     }
     if (!entry) {
-        DPRINTF("Trying to update an entry for "TARGET_FMT_plx \
+        DPRINTF("Trying to update an entry for "HWADDR_FMT_plx \
                 "that is not in the mapcache!\n", old_phys_addr);
         return NULL;
     }
@@ -570,15 +570,15 @@ static uint8_t *xen_replace_cache_entry_unlocked(hwaddr old_phys_addr,
     address_index  = new_phys_addr >> MCACHE_BUCKET_SHIFT;
     address_offset = new_phys_addr & (MCACHE_BUCKET_SIZE - 1);
 
-    fprintf(stderr, "Replacing a dummy mapcache entry for "TARGET_FMT_plx \
-            " with "TARGET_FMT_plx"\n", old_phys_addr, new_phys_addr);
+    fprintf(stderr, "Replacing a dummy mapcache entry for "HWADDR_FMT_plx \
+            " with "HWADDR_FMT_plx"\n", old_phys_addr, new_phys_addr);
 
     xen_remap_bucket(entry, entry->vaddr_base,
                      cache_size, address_index, false);
     if (!test_bits(address_offset >> XC_PAGE_SHIFT,
                 test_bit_size >> XC_PAGE_SHIFT,
                 entry->valid_mapping)) {
-        DPRINTF("Unable to update a mapcache entry for "TARGET_FMT_plx"!\n",
+        DPRINTF("Unable to update a mapcache entry for "HWADDR_FMT_plx"!\n",
                 old_phys_addr);
         return NULL;
     }
diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
index 7db0d94ec2..66e6de31a6 100644
--- a/hw/i386/xen/xen_platform.c
+++ b/hw/i386/xen/xen_platform.c
@@ -445,7 +445,7 @@ static uint64_t platform_mmio_read(void *opaque, hwaddr addr,
                                    unsigned size)
 {
     DPRINTF("Warning: attempted read from physical address "
-            "0x" TARGET_FMT_plx " in xen platform mmio space\n", addr);
+            "0x" HWADDR_FMT_plx " in xen platform mmio space\n", addr);
 
     return 0;
 }
@@ -454,7 +454,7 @@ static void platform_mmio_write(void *opaque, hwaddr addr,
                                 uint64_t val, unsigned size)
 {
     DPRINTF("Warning: attempted write of 0x%"PRIx64" to physical "
-            "address 0x" TARGET_FMT_plx " in xen platform mmio space\n",
+            "address 0x" HWADDR_FMT_plx " in xen platform mmio space\n",
             val, addr);
 }
 
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
index d599fefcbc..35e850685c 100644
--- a/hw/intc/arm_gicv3_dist.c
+++ b/hw/intc/arm_gicv3_dist.c
@@ -564,7 +564,7 @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
         /* WO registers, return unknown value */
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: invalid guest read from WO register at offset "
-                      TARGET_FMT_plx "\n", __func__, offset);
+                      HWADDR_FMT_plx "\n", __func__, offset);
         *data = 0;
         return true;
     default:
@@ -773,7 +773,7 @@ static bool gicd_writel(GICv3State *s, hwaddr offset,
         /* RO registers, ignore the write */
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: invalid guest write to RO register at offset "
-                      TARGET_FMT_plx "\n", __func__, offset);
+                      HWADDR_FMT_plx "\n", __func__, offset);
         return true;
     default:
         return false;
@@ -838,7 +838,7 @@ MemTxResult gicv3_dist_read(void *opaque, hwaddr offset, uint64_t *data,
 
     if (!r) {
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid guest read at offset " TARGET_FMT_plx
+                      "%s: invalid guest read at offset " HWADDR_FMT_plx
                       " size %u\n", __func__, offset, size);
         trace_gicv3_dist_badread(offset, size, attrs.secure);
         /* The spec requires that reserved registers are RAZ/WI;
@@ -879,7 +879,7 @@ MemTxResult gicv3_dist_write(void *opaque, hwaddr offset, uint64_t data,
 
     if (!r) {
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid guest write at offset " TARGET_FMT_plx
+                      "%s: invalid guest write at offset " HWADDR_FMT_plx
                       " size %u\n", __func__, offset, size);
         trace_gicv3_dist_badwrite(offset, data, size, attrs.secure);
         /* The spec requires that reserved registers are RAZ/WI;
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
index 57c79da5c5..43dfd7a35c 100644
--- a/hw/intc/arm_gicv3_its.c
+++ b/hw/intc/arm_gicv3_its.c
@@ -1633,7 +1633,7 @@ static bool its_writel(GICv3ITSState *s, hwaddr offset,
             /* RO register, ignore the write */
             qemu_log_mask(LOG_GUEST_ERROR,
                           "%s: invalid guest write to RO register at offset "
-                          TARGET_FMT_plx "\n", __func__, offset);
+                          HWADDR_FMT_plx "\n", __func__, offset);
         }
         break;
     case GITS_CREADR + 4:
@@ -1643,7 +1643,7 @@ static bool its_writel(GICv3ITSState *s, hwaddr offset,
             /* RO register, ignore the write */
             qemu_log_mask(LOG_GUEST_ERROR,
                           "%s: invalid guest write to RO register at offset "
-                          TARGET_FMT_plx "\n", __func__, offset);
+                          HWADDR_FMT_plx "\n", __func__, offset);
         }
         break;
     case GITS_BASER ... GITS_BASER + 0x3f:
@@ -1675,7 +1675,7 @@ static bool its_writel(GICv3ITSState *s, hwaddr offset,
         /* RO registers, ignore the write */
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: invalid guest write to RO register at offset "
-                      TARGET_FMT_plx "\n", __func__, offset);
+                      HWADDR_FMT_plx "\n", __func__, offset);
         break;
     default:
         result = false;
@@ -1785,14 +1785,14 @@ static bool its_writell(GICv3ITSState *s, hwaddr offset,
             /* RO register, ignore the write */
             qemu_log_mask(LOG_GUEST_ERROR,
                           "%s: invalid guest write to RO register at offset "
-                          TARGET_FMT_plx "\n", __func__, offset);
+                          HWADDR_FMT_plx "\n", __func__, offset);
         }
         break;
     case GITS_TYPER:
         /* RO registers, ignore the write */
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: invalid guest write to RO register at offset "
-                      TARGET_FMT_plx "\n", __func__, offset);
+                      HWADDR_FMT_plx "\n", __func__, offset);
         break;
     default:
         result = false;
@@ -1851,7 +1851,7 @@ static MemTxResult gicv3_its_read(void *opaque, hwaddr offset, uint64_t *data,
 
     if (!result) {
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid guest read at offset " TARGET_FMT_plx
+                      "%s: invalid guest read at offset " HWADDR_FMT_plx
                       " size %u\n", __func__, offset, size);
         trace_gicv3_its_badread(offset, size);
         /*
@@ -1887,7 +1887,7 @@ static MemTxResult gicv3_its_write(void *opaque, hwaddr offset, uint64_t data,
 
     if (!result) {
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid guest write at offset " TARGET_FMT_plx
+                      "%s: invalid guest write at offset " HWADDR_FMT_plx
                       " size %u\n", __func__, offset, size);
         trace_gicv3_its_badwrite(offset, data, size);
         /*
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
index c92ceecc16..297f7f0263 100644
--- a/hw/intc/arm_gicv3_redist.c
+++ b/hw/intc/arm_gicv3_redist.c
@@ -601,7 +601,7 @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset,
         /* RO registers, ignore the write */
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: invalid guest write to RO register at offset "
-                      TARGET_FMT_plx "\n", __func__, offset);
+                      HWADDR_FMT_plx "\n", __func__, offset);
         return MEMTX_OK;
         /*
          * VLPI frame registers. We don't need a version check for
@@ -668,7 +668,7 @@ static MemTxResult gicr_writell(GICv3CPUState *cs, hwaddr offset,
         /* RO register, ignore the write */
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: invalid guest write to RO register at offset "
-                      TARGET_FMT_plx "\n", __func__, offset);
+                      HWADDR_FMT_plx "\n", __func__, offset);
         return MEMTX_OK;
         /*
          * VLPI frame registers. We don't need a version check for
@@ -727,7 +727,7 @@ MemTxResult gicv3_redist_read(void *opaque, hwaddr offset, uint64_t *data,
 
     if (r != MEMTX_OK) {
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid guest read at offset " TARGET_FMT_plx
+                      "%s: invalid guest read at offset " HWADDR_FMT_plx
                       " size %u\n", __func__, offset, size);
         trace_gicv3_redist_badread(gicv3_redist_affid(cs), offset,
                                    size, attrs.secure);
@@ -786,7 +786,7 @@ MemTxResult gicv3_redist_write(void *opaque, hwaddr offset, uint64_t data,
 
     if (r != MEMTX_OK) {
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: invalid guest write at offset " TARGET_FMT_plx
+                      "%s: invalid guest write at offset " HWADDR_FMT_plx
                       " size %u\n", __func__, offset, size);
         trace_gicv3_redist_badwrite(gicv3_redist_affid(cs), offset, data,
                                     size, attrs.secure);
diff --git a/hw/intc/exynos4210_combiner.c b/hw/intc/exynos4210_combiner.c
index a289510bdb..4ba448fdb1 100644
--- a/hw/intc/exynos4210_combiner.c
+++ b/hw/intc/exynos4210_combiner.c
@@ -120,7 +120,7 @@ exynos4210_combiner_read(void *opaque, hwaddr offset, unsigned size)
     default:
         if (offset >> 2 >= IIC_REGSET_SIZE) {
             hw_error("exynos4210.combiner: overflow of reg_set by 0x"
-                    TARGET_FMT_plx "offset\n", offset);
+                    HWADDR_FMT_plx "offset\n", offset);
         }
         val = s->reg_set[offset >> 2];
     }
@@ -184,19 +184,19 @@ static void exynos4210_combiner_write(void *opaque, hwaddr offset,
 
     if (req_quad_base_n >= IIC_NGRP) {
         hw_error("exynos4210.combiner: unallowed write access at offset 0x"
-                TARGET_FMT_plx "\n", offset);
+                HWADDR_FMT_plx "\n", offset);
         return;
     }
 
     if (reg_n > 1) {
         hw_error("exynos4210.combiner: unallowed write access at offset 0x"
-                TARGET_FMT_plx "\n", offset);
+                HWADDR_FMT_plx "\n", offset);
         return;
     }
 
     if (offset >> 2 >= IIC_REGSET_SIZE) {
         hw_error("exynos4210.combiner: overflow of reg_set by 0x"
-                TARGET_FMT_plx "offset\n", offset);
+                HWADDR_FMT_plx "offset\n", offset);
     }
     s->reg_set[offset >> 2] = val;
 
@@ -246,7 +246,7 @@ static void exynos4210_combiner_write(void *opaque, hwaddr offset,
         break;
     default:
         hw_error("exynos4210.combiner: unallowed write access at offset 0x"
-                TARGET_FMT_plx "\n", offset);
+                HWADDR_FMT_plx "\n", offset);
         break;
     }
 }
diff --git a/hw/misc/auxbus.c b/hw/misc/auxbus.c
index 8a8012f5f0..28d50d9d09 100644
--- a/hw/misc/auxbus.c
+++ b/hw/misc/auxbus.c
@@ -299,7 +299,7 @@ static void aux_slave_dev_print(Monitor *mon, DeviceState *dev, int indent)
 
     s = AUX_SLAVE(dev);
 
-    monitor_printf(mon, "%*smemory " TARGET_FMT_plx "/" TARGET_FMT_plx "\n",
+    monitor_printf(mon, "%*smemory " HWADDR_FMT_plx "/" HWADDR_FMT_plx "\n",
                    indent, "",
                    object_property_get_uint(OBJECT(s->mmio), "addr", NULL),
                    memory_region_size(s->mmio));
diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
index 8270db53cd..d66d912172 100644
--- a/hw/misc/ivshmem.c
+++ b/hw/misc/ivshmem.c
@@ -179,7 +179,7 @@ static void ivshmem_io_write(void *opaque, hwaddr addr,
 
     addr &= 0xfc;
 
-    IVSHMEM_DPRINTF("writing to addr " TARGET_FMT_plx "\n", addr);
+    IVSHMEM_DPRINTF("writing to addr " HWADDR_FMT_plx "\n", addr);
     switch (addr)
     {
         case INTRMASK:
@@ -207,7 +207,7 @@ static void ivshmem_io_write(void *opaque, hwaddr addr,
             }
             break;
         default:
-            IVSHMEM_DPRINTF("Unhandled write " TARGET_FMT_plx "\n", addr);
+            IVSHMEM_DPRINTF("Unhandled write " HWADDR_FMT_plx "\n", addr);
     }
 }
 
@@ -233,7 +233,7 @@ static uint64_t ivshmem_io_read(void *opaque, hwaddr addr,
             break;
 
         default:
-            IVSHMEM_DPRINTF("why are we reading " TARGET_FMT_plx "\n", addr);
+            IVSHMEM_DPRINTF("why are we reading " HWADDR_FMT_plx "\n", addr);
             ret = 0;
     }
 
diff --git a/hw/misc/macio/mac_dbdma.c b/hw/misc/macio/mac_dbdma.c
index efcc02609f..43bb1f56ba 100644
--- a/hw/misc/macio/mac_dbdma.c
+++ b/hw/misc/macio/mac_dbdma.c
@@ -704,7 +704,7 @@ static void dbdma_write(void *opaque, hwaddr addr,
     DBDMA_channel *ch = &s->channels[channel];
     int reg = (addr - (channel << DBDMA_CHANNEL_SHIFT)) >> 2;
 
-    DBDMA_DPRINTFCH(ch, "writel 0x" TARGET_FMT_plx " <= 0x%08"PRIx64"\n",
+    DBDMA_DPRINTFCH(ch, "writel 0x" HWADDR_FMT_plx " <= 0x%08"PRIx64"\n",
                     addr, value);
     DBDMA_DPRINTFCH(ch, "channel 0x%x reg 0x%x\n",
                     (uint32_t)addr >> DBDMA_CHANNEL_SHIFT, reg);
@@ -786,7 +786,7 @@ static uint64_t dbdma_read(void *opaque, hwaddr addr,
         break;
     }
 
-    DBDMA_DPRINTFCH(ch, "readl 0x" TARGET_FMT_plx " => 0x%08x\n", addr, value);
+    DBDMA_DPRINTFCH(ch, "readl 0x" HWADDR_FMT_plx " => 0x%08x\n", addr, value);
     DBDMA_DPRINTFCH(ch, "channel 0x%x reg 0x%x\n",
                     (uint32_t)addr >> DBDMA_CHANNEL_SHIFT, reg);
 
diff --git a/hw/misc/mst_fpga.c b/hw/misc/mst_fpga.c
index 2aaadfa966..7692825867 100644
--- a/hw/misc/mst_fpga.c
+++ b/hw/misc/mst_fpga.c
@@ -131,7 +131,7 @@ mst_fpga_readb(void *opaque, hwaddr addr, unsigned size)
 		return s->pcmcia1;
 	default:
 		printf("Mainstone - mst_fpga_readb: Bad register offset "
-			"0x" TARGET_FMT_plx "\n", addr);
+			"0x" HWADDR_FMT_plx "\n", addr);
 	}
 	return 0;
 }
@@ -185,7 +185,7 @@ mst_fpga_writeb(void *opaque, hwaddr addr, uint64_t value,
 		break;
 	default:
 		printf("Mainstone - mst_fpga_writeb: Bad register offset "
-			"0x" TARGET_FMT_plx "\n", addr);
+			"0x" HWADDR_FMT_plx "\n", addr);
 	}
 }
 
diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index ecc0245fe8..b861d8ff35 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -663,7 +663,7 @@ static uint64_t allwinner_sun8i_emac_read(void *opaque, hwaddr offset,
         break;
     default:
         qemu_log_mask(LOG_UNIMP, "allwinner-h3-emac: read access to unknown "
-                                 "EMAC register 0x" TARGET_FMT_plx "\n",
+                                 "EMAC register 0x" HWADDR_FMT_plx "\n",
                                   offset);
     }
 
@@ -760,7 +760,7 @@ static void allwinner_sun8i_emac_write(void *opaque, hwaddr offset,
         break;
     default:
         qemu_log_mask(LOG_UNIMP, "allwinner-h3-emac: write access to unknown "
-                                 "EMAC register 0x" TARGET_FMT_plx "\n",
+                                 "EMAC register 0x" HWADDR_FMT_plx "\n",
                                   offset);
     }
 }
diff --git a/hw/net/allwinner_emac.c b/hw/net/allwinner_emac.c
index ddddf35c45..372e5b66da 100644
--- a/hw/net/allwinner_emac.c
+++ b/hw/net/allwinner_emac.c
@@ -304,7 +304,7 @@ static uint64_t aw_emac_read(void *opaque, hwaddr offset, unsigned size)
     default:
         qemu_log_mask(LOG_UNIMP,
                       "allwinner_emac: read access to unknown register 0x"
-                      TARGET_FMT_plx "\n", offset);
+                      HWADDR_FMT_plx "\n", offset);
         ret = 0;
     }
 
@@ -407,7 +407,7 @@ static void aw_emac_write(void *opaque, hwaddr offset, uint64_t value,
     default:
         qemu_log_mask(LOG_UNIMP,
                       "allwinner_emac: write access to unknown register 0x"
-                      TARGET_FMT_plx "\n", offset);
+                      HWADDR_FMT_plx "\n", offset);
     }
 }
 
diff --git a/hw/net/fsl_etsec/etsec.c b/hw/net/fsl_etsec/etsec.c
index b75d8e3dce..c753bfb3a8 100644
--- a/hw/net/fsl_etsec/etsec.c
+++ b/hw/net/fsl_etsec/etsec.c
@@ -99,7 +99,7 @@ static uint64_t etsec_read(void *opaque, hwaddr addr, unsigned size)
         break;
     }
 
-    DPRINTF("Read  0x%08x @ 0x" TARGET_FMT_plx
+    DPRINTF("Read  0x%08x @ 0x" HWADDR_FMT_plx
             "                            : %s (%s)\n",
             ret, addr, reg->name, reg->desc);
 
@@ -276,7 +276,7 @@ static void etsec_write(void     *opaque,
         }
     }
 
-    DPRINTF("Write 0x%08x @ 0x" TARGET_FMT_plx
+    DPRINTF("Write 0x%08x @ 0x" HWADDR_FMT_plx
             " val:0x%08x->0x%08x : %s (%s)\n",
             (unsigned int)value, addr, before, reg->value,
             reg->name, reg->desc);
diff --git a/hw/net/fsl_etsec/rings.c b/hw/net/fsl_etsec/rings.c
index a32589e33b..788463f1b6 100644
--- a/hw/net/fsl_etsec/rings.c
+++ b/hw/net/fsl_etsec/rings.c
@@ -109,7 +109,7 @@ static void read_buffer_descriptor(eTSEC         *etsec,
 {
     assert(bd != NULL);
 
-    RING_DEBUG("READ Buffer Descriptor @ 0x" TARGET_FMT_plx"\n", addr);
+    RING_DEBUG("READ Buffer Descriptor @ 0x" HWADDR_FMT_plx"\n", addr);
     cpu_physical_memory_read(addr,
                              bd,
                              sizeof(eTSEC_rxtx_bd));
@@ -141,7 +141,7 @@ static void write_buffer_descriptor(eTSEC         *etsec,
         stl_be_p(&bd->bufptr, bd->bufptr);
     }
 
-    RING_DEBUG("Write Buffer Descriptor @ 0x" TARGET_FMT_plx"\n", addr);
+    RING_DEBUG("Write Buffer Descriptor @ 0x" HWADDR_FMT_plx"\n", addr);
     cpu_physical_memory_write(addr,
                               bd,
                               sizeof(eTSEC_rxtx_bd));
diff --git a/hw/net/pcnet.c b/hw/net/pcnet.c
index e63e524913..d456094575 100644
--- a/hw/net/pcnet.c
+++ b/hw/net/pcnet.c
@@ -908,11 +908,11 @@ static void pcnet_rdte_poll(PCNetState *s)
             s->csr[37] = nnrd >> 16;
 #ifdef PCNET_DEBUG
             if (bad) {
-                printf("pcnet: BAD RMD RECORDS AFTER 0x" TARGET_FMT_plx "\n",
+                printf("pcnet: BAD RMD RECORDS AFTER 0x" HWADDR_FMT_plx "\n",
                        crda);
             }
         } else {
-            printf("pcnet: BAD RMD RDA=0x" TARGET_FMT_plx "\n", crda);
+            printf("pcnet: BAD RMD RDA=0x" HWADDR_FMT_plx "\n", crda);
 #endif
         }
     }
diff --git a/hw/net/rocker/rocker.c b/hw/net/rocker/rocker.c
index cf54ddf49d..7ea8eb6ba5 100644
--- a/hw/net/rocker/rocker.c
+++ b/hw/net/rocker/rocker.c
@@ -815,7 +815,7 @@ static void rocker_io_writel(void *opaque, hwaddr addr, uint32_t val)
             }
             break;
         default:
-            DPRINTF("not implemented dma reg write(l) addr=0x" TARGET_FMT_plx
+            DPRINTF("not implemented dma reg write(l) addr=0x" HWADDR_FMT_plx
                     " val=0x%08x (ring %d, addr=0x%02x)\n",
                     addr, val, index, offset);
             break;
@@ -857,7 +857,7 @@ static void rocker_io_writel(void *opaque, hwaddr addr, uint32_t val)
         r->lower32 = 0;
         break;
     default:
-        DPRINTF("not implemented write(l) addr=0x" TARGET_FMT_plx
+        DPRINTF("not implemented write(l) addr=0x" HWADDR_FMT_plx
                 " val=0x%08x\n", addr, val);
         break;
     }
@@ -876,8 +876,8 @@ static void rocker_io_writeq(void *opaque, hwaddr addr, uint64_t val)
             desc_ring_set_base_addr(r->rings[index], val);
             break;
         default:
-            DPRINTF("not implemented dma reg write(q) addr=0x" TARGET_FMT_plx
-                    " val=0x" TARGET_FMT_plx " (ring %d, offset=0x%02x)\n",
+            DPRINTF("not implemented dma reg write(q) addr=0x" HWADDR_FMT_plx
+                    " val=0x" HWADDR_FMT_plx " (ring %d, offset=0x%02x)\n",
                     addr, val, index, offset);
             break;
         }
@@ -895,8 +895,8 @@ static void rocker_io_writeq(void *opaque, hwaddr addr, uint64_t val)
         rocker_port_phys_enable_write(r, val);
         break;
     default:
-        DPRINTF("not implemented write(q) addr=0x" TARGET_FMT_plx
-                " val=0x" TARGET_FMT_plx "\n", addr, val);
+        DPRINTF("not implemented write(q) addr=0x" HWADDR_FMT_plx
+                " val=0x" HWADDR_FMT_plx "\n", addr, val);
         break;
     }
 }
@@ -987,8 +987,8 @@ static const char *rocker_reg_name(void *opaque, hwaddr addr)
 static void rocker_mmio_write(void *opaque, hwaddr addr, uint64_t val,
                               unsigned size)
 {
-    DPRINTF("Write %s addr " TARGET_FMT_plx
-            ", size %u, val " TARGET_FMT_plx "\n",
+    DPRINTF("Write %s addr " HWADDR_FMT_plx
+            ", size %u, val " HWADDR_FMT_plx "\n",
             rocker_reg_name(opaque, addr), addr, size, val);
 
     switch (size) {
@@ -1060,7 +1060,7 @@ static uint32_t rocker_io_readl(void *opaque, hwaddr addr)
             ret = desc_ring_get_credits(r->rings[index]);
             break;
         default:
-            DPRINTF("not implemented dma reg read(l) addr=0x" TARGET_FMT_plx
+            DPRINTF("not implemented dma reg read(l) addr=0x" HWADDR_FMT_plx
                     " (ring %d, addr=0x%02x)\n", addr, index, offset);
             ret = 0;
             break;
@@ -1115,7 +1115,7 @@ static uint32_t rocker_io_readl(void *opaque, hwaddr addr)
         ret = (uint32_t)(r->switch_id >> 32);
         break;
     default:
-        DPRINTF("not implemented read(l) addr=0x" TARGET_FMT_plx "\n", addr);
+        DPRINTF("not implemented read(l) addr=0x" HWADDR_FMT_plx "\n", addr);
         ret = 0;
         break;
     }
@@ -1136,7 +1136,7 @@ static uint64_t rocker_io_readq(void *opaque, hwaddr addr)
             ret = desc_ring_get_base_addr(r->rings[index]);
             break;
         default:
-            DPRINTF("not implemented dma reg read(q) addr=0x" TARGET_FMT_plx
+            DPRINTF("not implemented dma reg read(q) addr=0x" HWADDR_FMT_plx
                     " (ring %d, addr=0x%02x)\n", addr, index, offset);
             ret = 0;
             break;
@@ -1165,7 +1165,7 @@ static uint64_t rocker_io_readq(void *opaque, hwaddr addr)
         ret = r->switch_id;
         break;
     default:
-        DPRINTF("not implemented read(q) addr=0x" TARGET_FMT_plx "\n", addr);
+        DPRINTF("not implemented read(q) addr=0x" HWADDR_FMT_plx "\n", addr);
         ret = 0;
         break;
     }
@@ -1174,7 +1174,7 @@ static uint64_t rocker_io_readq(void *opaque, hwaddr addr)
 
 static uint64_t rocker_mmio_read(void *opaque, hwaddr addr, unsigned size)
 {
-    DPRINTF("Read %s addr " TARGET_FMT_plx ", size %u\n",
+    DPRINTF("Read %s addr " HWADDR_FMT_plx ", size %u\n",
             rocker_reg_name(opaque, addr), addr, size);
 
     switch (size) {
diff --git a/hw/net/rocker/rocker_desc.c b/hw/net/rocker/rocker_desc.c
index f3068c9250..675383db36 100644
--- a/hw/net/rocker/rocker_desc.c
+++ b/hw/net/rocker/rocker_desc.c
@@ -104,7 +104,7 @@ static bool desc_ring_empty(DescRing *ring)
 bool desc_ring_set_base_addr(DescRing *ring, uint64_t base_addr)
 {
     if (base_addr & 0x7) {
-        DPRINTF("ERROR: ring[%d] desc base addr (0x" TARGET_FMT_plx
+        DPRINTF("ERROR: ring[%d] desc base addr (0x" HWADDR_FMT_plx
                 ") not 8-byte aligned\n", ring->index, base_addr);
         return false;
     }
diff --git a/hw/net/xilinx_axienet.c b/hw/net/xilinx_axienet.c
index 990ff3a1c2..7e00965323 100644
--- a/hw/net/xilinx_axienet.c
+++ b/hw/net/xilinx_axienet.c
@@ -524,7 +524,7 @@ static uint64_t enet_read(void *opaque, hwaddr addr, unsigned size)
             if (addr < ARRAY_SIZE(s->regs)) {
                 r = s->regs[addr];
             }
-            DENET(qemu_log("%s addr=" TARGET_FMT_plx " v=%x\n",
+            DENET(qemu_log("%s addr=" HWADDR_FMT_plx " v=%x\n",
                             __func__, addr * 4, r));
             break;
     }
@@ -630,7 +630,7 @@ static void enet_write(void *opaque, hwaddr addr,
             break;
 
         default:
-            DENET(qemu_log("%s addr=" TARGET_FMT_plx " v=%x\n",
+            DENET(qemu_log("%s addr=" HWADDR_FMT_plx " v=%x\n",
                            __func__, addr * 4, (unsigned)value));
             if (addr < ARRAY_SIZE(s->regs)) {
                 s->regs[addr] = value;
diff --git a/hw/net/xilinx_ethlite.c b/hw/net/xilinx_ethlite.c
index 6e09f7e422..99c22819ea 100644
--- a/hw/net/xilinx_ethlite.c
+++ b/hw/net/xilinx_ethlite.c
@@ -99,7 +99,7 @@ eth_read(void *opaque, hwaddr addr, unsigned int size)
         case R_RX_CTRL1:
         case R_RX_CTRL0:
             r = s->regs[addr];
-            D(qemu_log("%s " TARGET_FMT_plx "=%x\n", __func__, addr * 4, r));
+            D(qemu_log("%s " HWADDR_FMT_plx "=%x\n", __func__, addr * 4, r));
             break;
 
         default:
@@ -125,7 +125,7 @@ eth_write(void *opaque, hwaddr addr,
             if (addr == R_TX_CTRL1)
                 base = 0x800 / 4;
 
-            D(qemu_log("%s addr=" TARGET_FMT_plx " val=%x\n",
+            D(qemu_log("%s addr=" HWADDR_FMT_plx " val=%x\n",
                        __func__, addr * 4, value));
             if ((value & (CTRL_P | CTRL_S)) == CTRL_S) {
                 qemu_send_packet(qemu_get_queue(s->nic),
@@ -155,7 +155,7 @@ eth_write(void *opaque, hwaddr addr,
         case R_TX_LEN0:
         case R_TX_LEN1:
         case R_TX_GIE0:
-            D(qemu_log("%s addr=" TARGET_FMT_plx " val=%x\n",
+            D(qemu_log("%s addr=" HWADDR_FMT_plx " val=%x\n",
                        __func__, addr * 4, value));
             s->regs[addr] = value;
             break;
diff --git a/hw/pci-bridge/pci_expander_bridge.c b/hw/pci-bridge/pci_expander_bridge.c
index 870d9bab11..e752a21292 100644
--- a/hw/pci-bridge/pci_expander_bridge.c
+++ b/hw/pci-bridge/pci_expander_bridge.c
@@ -155,7 +155,7 @@ static char *pxb_host_ofw_unit_address(const SysBusDevice *dev)
     main_host_sbd = SYS_BUS_DEVICE(main_host);
 
     if (main_host_sbd->num_mmio > 0) {
-        return g_strdup_printf(TARGET_FMT_plx ",%x",
+        return g_strdup_printf(HWADDR_FMT_plx ",%x",
                                main_host_sbd->mmio[0].addr, position + 1);
     }
     if (main_host_sbd->num_pio > 0) {
diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
index f04f3ad668..e55e4d2950 100644
--- a/hw/pci-host/bonito.c
+++ b/hw/pci-host/bonito.c
@@ -254,7 +254,7 @@ static void bonito_writel(void *opaque, hwaddr addr,
 
     saddr = addr >> 2;
 
-    DPRINTF("bonito_writel "TARGET_FMT_plx" val %lx saddr %x\n",
+    DPRINTF("bonito_writel "HWADDR_FMT_plx" val %lx saddr %x\n",
             addr, val, saddr);
     switch (saddr) {
     case BONITO_BONPONCFG:
@@ -317,7 +317,7 @@ static uint64_t bonito_readl(void *opaque, hwaddr addr,
 
     saddr = addr >> 2;
 
-    DPRINTF("bonito_readl "TARGET_FMT_plx"\n", addr);
+    DPRINTF("bonito_readl "HWADDR_FMT_plx"\n", addr);
     switch (saddr) {
     case BONITO_INTISR:
         return s->regs[saddr];
@@ -342,7 +342,7 @@ static void bonito_pciconf_writel(void *opaque, hwaddr addr,
     PCIBonitoState *s = opaque;
     PCIDevice *d = PCI_DEVICE(s);
 
-    DPRINTF("bonito_pciconf_writel "TARGET_FMT_plx" val %lx\n", addr, val);
+    DPRINTF("bonito_pciconf_writel "HWADDR_FMT_plx" val %lx\n", addr, val);
     d->config_write(d, addr, val, 4);
 }
 
@@ -353,7 +353,7 @@ static uint64_t bonito_pciconf_readl(void *opaque, hwaddr addr,
     PCIBonitoState *s = opaque;
     PCIDevice *d = PCI_DEVICE(s);
 
-    DPRINTF("bonito_pciconf_readl "TARGET_FMT_plx"\n", addr);
+    DPRINTF("bonito_pciconf_readl "HWADDR_FMT_plx"\n", addr);
     return d->config_read(d, addr, 4);
 }
 
@@ -469,7 +469,7 @@ static uint32_t bonito_sbridge_pciaddr(void *opaque, hwaddr addr)
     regno = (cfgaddr & BONITO_PCICONF_REG_MASK_HW) >> BONITO_PCICONF_REG_OFFSET;
 
     if (idsel == 0) {
-        error_report("error in bonito pci config address 0x" TARGET_FMT_plx
+        error_report("error in bonito pci config address 0x" HWADDR_FMT_plx
                      ",pcimap_cfg=0x%x", addr, s->regs[BONITO_PCIMAP_CFG]);
         exit(1);
     }
@@ -489,7 +489,7 @@ static void bonito_spciconf_write(void *opaque, hwaddr addr, uint64_t val,
     uint32_t pciaddr;
     uint16_t status;
 
-    DPRINTF("bonito_spciconf_write "TARGET_FMT_plx" size %d val %lx\n",
+    DPRINTF("bonito_spciconf_write "HWADDR_FMT_plx" size %d val %lx\n",
             addr, size, val);
 
     pciaddr = bonito_sbridge_pciaddr(s, addr);
@@ -519,7 +519,7 @@ static uint64_t bonito_spciconf_read(void *opaque, hwaddr addr, unsigned size)
     uint32_t pciaddr;
     uint16_t status;
 
-    DPRINTF("bonito_spciconf_read "TARGET_FMT_plx" size %d\n", addr, size);
+    DPRINTF("bonito_spciconf_read "HWADDR_FMT_plx" size %d\n", addr, size);
 
     pciaddr = bonito_sbridge_pciaddr(s, addr);
 
diff --git a/hw/pci-host/ppce500.c b/hw/pci-host/ppce500.c
index 568849e930..38814247f2 100644
--- a/hw/pci-host/ppce500.c
+++ b/hw/pci-host/ppce500.c
@@ -189,7 +189,7 @@ static uint64_t pci_reg_read4(void *opaque, hwaddr addr,
         break;
     }
 
-    pci_debug("%s: win:%lx(addr:" TARGET_FMT_plx ") -> value:%x\n", __func__,
+    pci_debug("%s: win:%lx(addr:" HWADDR_FMT_plx ") -> value:%x\n", __func__,
               win, addr, value);
     return value;
 }
@@ -268,7 +268,7 @@ static void pci_reg_write4(void *opaque, hwaddr addr,
 
     win = addr & 0xfe0;
 
-    pci_debug("%s: value:%x -> win:%lx(addr:" TARGET_FMT_plx ")\n",
+    pci_debug("%s: value:%x -> win:%lx(addr:" HWADDR_FMT_plx ")\n",
               __func__, (unsigned)value, win, addr);
 
     switch (win) {
diff --git a/hw/pci/pci_host.c b/hw/pci/pci_host.c
index eaf217ff55..7f9f75239c 100644
--- a/hw/pci/pci_host.c
+++ b/hw/pci/pci_host.c
@@ -143,7 +143,7 @@ static void pci_host_config_write(void *opaque, hwaddr addr,
 {
     PCIHostState *s = opaque;
 
-    PCI_DPRINTF("%s addr " TARGET_FMT_plx " len %d val %"PRIx64"\n",
+    PCI_DPRINTF("%s addr " HWADDR_FMT_plx " len %d val %"PRIx64"\n",
                 __func__, addr, len, val);
     if (addr != 0 || len != 4) {
         return;
@@ -157,7 +157,7 @@ static uint64_t pci_host_config_read(void *opaque, hwaddr addr,
     PCIHostState *s = opaque;
     uint32_t val = s->config_reg;
 
-    PCI_DPRINTF("%s addr " TARGET_FMT_plx " len %d val %"PRIx32"\n",
+    PCI_DPRINTF("%s addr " HWADDR_FMT_plx " len %d val %"PRIx32"\n",
                 __func__, addr, len, val);
     return val;
 }
diff --git a/hw/ppc/ppc4xx_sdram.c b/hw/ppc/ppc4xx_sdram.c
index a24c80b1d2..4501fb28a5 100644
--- a/hw/ppc/ppc4xx_sdram.c
+++ b/hw/ppc/ppc4xx_sdram.c
@@ -500,7 +500,7 @@ static uint32_t sdram_ddr2_bcr(hwaddr ram_base, hwaddr ram_size)
         bcr = 0x8000;
         break;
     default:
-        error_report("invalid RAM size " TARGET_FMT_plx, ram_size);
+        error_report("invalid RAM size " HWADDR_FMT_plx, ram_size);
         return 0;
     }
     bcr |= ram_base >> 2 & 0xffe00000;
diff --git a/hw/rtc/exynos4210_rtc.c b/hw/rtc/exynos4210_rtc.c
index d1620c7a2a..2b8a38a296 100644
--- a/hw/rtc/exynos4210_rtc.c
+++ b/hw/rtc/exynos4210_rtc.c
@@ -374,7 +374,7 @@ static uint64_t exynos4210_rtc_read(void *opaque, hwaddr offset,
 
     default:
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "exynos4210.rtc: bad read offset " TARGET_FMT_plx,
+                      "exynos4210.rtc: bad read offset " HWADDR_FMT_plx,
                       offset);
         break;
     }
@@ -508,7 +508,7 @@ static void exynos4210_rtc_write(void *opaque, hwaddr offset,
 
     default:
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "exynos4210.rtc: bad write offset " TARGET_FMT_plx,
+                      "exynos4210.rtc: bad write offset " HWADDR_FMT_plx,
                       offset);
         break;
 
diff --git a/hw/sh4/sh7750.c b/hw/sh4/sh7750.c
index c77792d150..ebe0fd96d9 100644
--- a/hw/sh4/sh7750.c
+++ b/hw/sh4/sh7750.c
@@ -207,13 +207,13 @@ static void portb_changed(SH7750State *s, uint16_t prev)
 
 static void error_access(const char *kind, hwaddr addr)
 {
-    fprintf(stderr, "%s to %s (0x" TARGET_FMT_plx ") not supported\n",
+    fprintf(stderr, "%s to %s (0x" HWADDR_FMT_plx ") not supported\n",
             kind, regname(addr), addr);
 }
 
 static void ignore_access(const char *kind, hwaddr addr)
 {
-    fprintf(stderr, "%s to %s (0x" TARGET_FMT_plx ") ignored\n",
+    fprintf(stderr, "%s to %s (0x" HWADDR_FMT_plx ") ignored\n",
             kind, regname(addr), addr);
 }
 
diff --git a/hw/ssi/xilinx_spi.c b/hw/ssi/xilinx_spi.c
index b2819a7ff0..552927622f 100644
--- a/hw/ssi/xilinx_spi.c
+++ b/hw/ssi/xilinx_spi.c
@@ -232,7 +232,7 @@ spi_read(void *opaque, hwaddr addr, unsigned int size)
         break;
 
     }
-    DB_PRINT("addr=" TARGET_FMT_plx " = %x\n", addr * 4, r);
+    DB_PRINT("addr=" HWADDR_FMT_plx " = %x\n", addr * 4, r);
     xlx_spi_update_irq(s);
     return r;
 }
@@ -244,7 +244,7 @@ spi_write(void *opaque, hwaddr addr,
     XilinxSPI *s = opaque;
     uint32_t value = val64;
 
-    DB_PRINT("addr=" TARGET_FMT_plx " = %x\n", addr, value);
+    DB_PRINT("addr=" HWADDR_FMT_plx " = %x\n", addr, value);
     addr >>= 2;
     switch (addr) {
     case R_SRR:
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index 1e9dba2039..97009d3a5d 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -887,7 +887,7 @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
     case R_INTR_STATUS:
         ret = s->regs[addr] & IXR_ALL;
         s->regs[addr] = 0;
-        DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
+        DB_PRINT_L(0, "addr=" HWADDR_FMT_plx " = %x\n", addr * 4, ret);
         xilinx_spips_update_ixr(s);
         return ret;
     case R_INTR_MASK:
@@ -916,12 +916,12 @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
         if (!(s->regs[R_CONFIG] & R_CONFIG_ENDIAN)) {
             ret <<= 8 * shortfall;
         }
-        DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
+        DB_PRINT_L(0, "addr=" HWADDR_FMT_plx " = %x\n", addr * 4, ret);
         xilinx_spips_check_flush(s);
         xilinx_spips_update_ixr(s);
         return ret;
     }
-    DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4,
+    DB_PRINT_L(0, "addr=" HWADDR_FMT_plx " = %x\n", addr * 4,
                s->regs[addr] & mask);
     return s->regs[addr] & mask;
 
@@ -971,7 +971,7 @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
     XilinxSPIPS *s = opaque;
     bool try_flush = true;
 
-    DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr, (unsigned)value);
+    DB_PRINT_L(0, "addr=" HWADDR_FMT_plx " = %x\n", addr, (unsigned)value);
     addr >>= 2;
     switch (addr) {
     case R_CONFIG:
diff --git a/hw/timer/digic-timer.c b/hw/timer/digic-timer.c
index d5186f4454..973eab4386 100644
--- a/hw/timer/digic-timer.c
+++ b/hw/timer/digic-timer.c
@@ -76,7 +76,7 @@ static uint64_t digic_timer_read(void *opaque, hwaddr offset, unsigned size)
     default:
         qemu_log_mask(LOG_UNIMP,
                       "digic-timer: read access to unknown register 0x"
-                      TARGET_FMT_plx "\n", offset);
+                      HWADDR_FMT_plx "\n", offset);
     }
 
     return ret;
@@ -116,7 +116,7 @@ static void digic_timer_write(void *opaque, hwaddr offset,
     default:
         qemu_log_mask(LOG_UNIMP,
                       "digic-timer: read access to unknown register 0x"
-                      TARGET_FMT_plx "\n", offset);
+                      HWADDR_FMT_plx "\n", offset);
     }
 }
 
diff --git a/hw/timer/etraxfs_timer.c b/hw/timer/etraxfs_timer.c
index ecc2831baf..0205b49912 100644
--- a/hw/timer/etraxfs_timer.c
+++ b/hw/timer/etraxfs_timer.c
@@ -324,7 +324,7 @@ timer_write(void *opaque, hwaddr addr,
             t->rw_ack_intr = 0;
             break;
         default:
-            printf ("%s " TARGET_FMT_plx " %x\n",
+            printf ("%s " HWADDR_FMT_plx " %x\n",
                 __func__, addr, value);
             break;
     }
diff --git a/hw/timer/exynos4210_mct.c b/hw/timer/exynos4210_mct.c
index e175a9f5b9..c17b247da3 100644
--- a/hw/timer/exynos4210_mct.c
+++ b/hw/timer/exynos4210_mct.c
@@ -1445,7 +1445,7 @@ static void exynos4210_mct_write(void *opaque, hwaddr offset,
     case L0_ICNTO: case L1_ICNTO:
     case L0_FRCNTO: case L1_FRCNTO:
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "exynos4210.mct: write to RO register " TARGET_FMT_plx,
+                      "exynos4210.mct: write to RO register " HWADDR_FMT_plx,
                       offset);
         break;
 
diff --git a/hw/timer/exynos4210_pwm.c b/hw/timer/exynos4210_pwm.c
index 02924a9e5b..3528d0f33a 100644
--- a/hw/timer/exynos4210_pwm.c
+++ b/hw/timer/exynos4210_pwm.c
@@ -257,7 +257,7 @@ static uint64_t exynos4210_pwm_read(void *opaque, hwaddr offset,
 
     default:
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "exynos4210.pwm: bad read offset " TARGET_FMT_plx,
+                      "exynos4210.pwm: bad read offset " HWADDR_FMT_plx,
                       offset);
         break;
     }
@@ -352,7 +352,7 @@ static void exynos4210_pwm_write(void *opaque, hwaddr offset,
 
     default:
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "exynos4210.pwm: bad write offset " TARGET_FMT_plx,
+                      "exynos4210.pwm: bad write offset " HWADDR_FMT_plx,
                       offset);
         break;
 
diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
index 103260ec15..23ba625eb6 100644
--- a/hw/virtio/virtio-mmio.c
+++ b/hw/virtio/virtio-mmio.c
@@ -829,10 +829,10 @@ static char *virtio_mmio_bus_get_dev_path(DeviceState *dev)
     assert(section.mr);
 
     if (proxy_path) {
-        path = g_strdup_printf("%s/virtio-mmio@" TARGET_FMT_plx, proxy_path,
+        path = g_strdup_printf("%s/virtio-mmio@" HWADDR_FMT_plx, proxy_path,
                                section.offset_within_address_space);
     } else {
-        path = g_strdup_printf("virtio-mmio@" TARGET_FMT_plx,
+        path = g_strdup_printf("virtio-mmio@" HWADDR_FMT_plx,
                                section.offset_within_address_space);
     }
     memory_region_unref(section.mr);
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 0ec7e52183..8db0532632 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -434,7 +434,7 @@ static uint64_t xen_pt_bar_read(void *o, hwaddr addr,
     PCIDevice *d = o;
     /* if this function is called, that probably means that there is a
      * misconfiguration of the IOMMU. */
-    XEN_PT_ERR(d, "Should not read BAR through QEMU. @0x"TARGET_FMT_plx"\n",
+    XEN_PT_ERR(d, "Should not read BAR through QEMU. @0x"HWADDR_FMT_plx"\n",
                addr);
     return 0;
 }
@@ -443,7 +443,7 @@ static void xen_pt_bar_write(void *o, hwaddr addr, uint64_t val,
 {
     PCIDevice *d = o;
     /* Same comment as xen_pt_bar_read function */
-    XEN_PT_ERR(d, "Should not write BAR through QEMU. @0x"TARGET_FMT_plx"\n",
+    XEN_PT_ERR(d, "Should not write BAR through QEMU. @0x"HWADDR_FMT_plx"\n",
                addr);
 }
 
diff --git a/include/exec/hwaddr.h b/include/exec/hwaddr.h
index 8f16d179a8..50fbb2d96c 100644
--- a/include/exec/hwaddr.h
+++ b/include/exec/hwaddr.h
@@ -10,7 +10,7 @@
 
 typedef uint64_t hwaddr;
 #define HWADDR_MAX UINT64_MAX
-#define TARGET_FMT_plx "%016" PRIx64
+#define HWADDR_FMT_plx "%016" PRIx64
 #define HWADDR_PRId PRId64
 #define HWADDR_PRIi PRIi64
 #define HWADDR_PRIo PRIo64
diff --git a/monitor/misc.c b/monitor/misc.c
index bf3f1c67ca..fa0a42c261 100644
--- a/monitor/misc.c
+++ b/monitor/misc.c
@@ -566,7 +566,7 @@ static void memory_dump(Monitor *mon, int count, int format, int wsize,
 
     while (len > 0) {
         if (is_physical) {
-            monitor_printf(mon, TARGET_FMT_plx ":", addr);
+            monitor_printf(mon, HWADDR_FMT_plx ":", addr);
         } else {
             monitor_printf(mon, TARGET_FMT_lx ":", (target_ulong)addr);
         }
diff --git a/softmmu/memory.c b/softmmu/memory.c
index e05332d07f..9d64efca26 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1281,7 +1281,7 @@ static uint64_t unassigned_mem_read(void *opaque, hwaddr addr,
                                     unsigned size)
 {
 #ifdef DEBUG_UNASSIGNED
-    printf("Unassigned mem read " TARGET_FMT_plx "\n", addr);
+    printf("Unassigned mem read " HWADDR_FMT_plx "\n", addr);
 #endif
     return 0;
 }
@@ -1290,7 +1290,7 @@ static void unassigned_mem_write(void *opaque, hwaddr addr,
                                  uint64_t val, unsigned size)
 {
 #ifdef DEBUG_UNASSIGNED
-    printf("Unassigned mem write " TARGET_FMT_plx " = 0x%"PRIx64"\n", addr, val);
+    printf("Unassigned mem write " HWADDR_FMT_plx " = 0x%"PRIx64"\n", addr, val);
 #endif
 }
 
@@ -3220,9 +3220,9 @@ static void mtree_print_mr(const MemoryRegion *mr, unsigned int level,
             for (i = 0; i < level; i++) {
                 qemu_printf(MTREE_INDENT);
             }
-            qemu_printf(TARGET_FMT_plx "-" TARGET_FMT_plx
-                        " (prio %d, %s%s): alias %s @%s " TARGET_FMT_plx
-                        "-" TARGET_FMT_plx "%s",
+            qemu_printf(HWADDR_FMT_plx "-" HWADDR_FMT_plx
+                        " (prio %d, %s%s): alias %s @%s " HWADDR_FMT_plx
+                        "-" HWADDR_FMT_plx "%s",
                         cur_start, cur_end,
                         mr->priority,
                         mr->nonvolatile ? "nv-" : "",
@@ -3242,7 +3242,7 @@ static void mtree_print_mr(const MemoryRegion *mr, unsigned int level,
             for (i = 0; i < level; i++) {
                 qemu_printf(MTREE_INDENT);
             }
-            qemu_printf(TARGET_FMT_plx "-" TARGET_FMT_plx
+            qemu_printf(HWADDR_FMT_plx "-" HWADDR_FMT_plx
                         " (prio %d, %s%s): %s%s",
                         cur_start, cur_end,
                         mr->priority,
@@ -3329,8 +3329,8 @@ static void mtree_print_flatview(gpointer key, gpointer value,
     while (n--) {
         mr = range->mr;
         if (range->offset_in_region) {
-            qemu_printf(MTREE_INDENT TARGET_FMT_plx "-" TARGET_FMT_plx
-                        " (prio %d, %s%s): %s @" TARGET_FMT_plx,
+            qemu_printf(MTREE_INDENT HWADDR_FMT_plx "-" HWADDR_FMT_plx
+                        " (prio %d, %s%s): %s @" HWADDR_FMT_plx,
                         int128_get64(range->addr.start),
                         int128_get64(range->addr.start)
                         + MR_SIZE(range->addr.size),
@@ -3340,7 +3340,7 @@ static void mtree_print_flatview(gpointer key, gpointer value,
                         memory_region_name(mr),
                         range->offset_in_region);
         } else {
-            qemu_printf(MTREE_INDENT TARGET_FMT_plx "-" TARGET_FMT_plx
+            qemu_printf(MTREE_INDENT HWADDR_FMT_plx "-" HWADDR_FMT_plx
                         " (prio %d, %s%s): %s",
                         int128_get64(range->addr.start),
                         int128_get64(range->addr.start)
diff --git a/softmmu/memory_mapping.c b/softmmu/memory_mapping.c
index f6f0a829fd..d7f1d096e0 100644
--- a/softmmu/memory_mapping.c
+++ b/softmmu/memory_mapping.c
@@ -241,8 +241,8 @@ static void guest_phys_block_add_section(GuestPhysListener *g,
     }
 
 #ifdef DEBUG_GUEST_PHYS_REGION_ADD
-    fprintf(stderr, "%s: target_start=" TARGET_FMT_plx " target_end="
-            TARGET_FMT_plx ": %s (count: %u)\n", __func__, target_start,
+    fprintf(stderr, "%s: target_start=" HWADDR_FMT_plx " target_end="
+            HWADDR_FMT_plx ": %s (count: %u)\n", __func__, target_start,
             target_end, predecessor ? "joined" : "added", g->list->num);
 #endif
 }
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index edec095c7a..bf585e45a8 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -2475,7 +2475,7 @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
     MemTxResult res;
 
 #if defined(DEBUG_SUBPAGE)
-    printf("%s: subpage %p len %u addr " TARGET_FMT_plx "\n", __func__,
+    printf("%s: subpage %p len %u addr " HWADDR_FMT_plx "\n", __func__,
            subpage, len, addr);
 #endif
     res = flatview_read(subpage->fv, addr + subpage->base, attrs, buf, len);
@@ -2493,7 +2493,7 @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
     uint8_t buf[8];
 
 #if defined(DEBUG_SUBPAGE)
-    printf("%s: subpage %p len %u addr " TARGET_FMT_plx
+    printf("%s: subpage %p len %u addr " HWADDR_FMT_plx
            " value %"PRIx64"\n",
            __func__, subpage, len, addr, value);
 #endif
@@ -2507,7 +2507,7 @@ static bool subpage_accepts(void *opaque, hwaddr addr,
 {
     subpage_t *subpage = opaque;
 #if defined(DEBUG_SUBPAGE)
-    printf("%s: subpage %p %c len %u addr " TARGET_FMT_plx "\n",
+    printf("%s: subpage %p %c len %u addr " HWADDR_FMT_plx "\n",
            __func__, subpage, is_write ? 'w' : 'r', len, addr);
 #endif
 
@@ -2558,7 +2558,7 @@ static subpage_t *subpage_init(FlatView *fv, hwaddr base)
                           NULL, TARGET_PAGE_SIZE);
     mmio->iomem.subpage = true;
 #if defined(DEBUG_SUBPAGE)
-    printf("%s: %p base " TARGET_FMT_plx " len %08x\n", __func__,
+    printf("%s: %p base " HWADDR_FMT_plx " len %08x\n", __func__,
            mmio, base, TARGET_PAGE_SIZE);
 #endif
 
@@ -3703,7 +3703,7 @@ void mtree_print_dispatch(AddressSpaceDispatch *d, MemoryRegion *root)
         const char *names[] = { " [unassigned]", " [not dirty]",
                                 " [ROM]", " [watch]" };
 
-        qemu_printf("      #%d @" TARGET_FMT_plx ".." TARGET_FMT_plx
+        qemu_printf("      #%d @" HWADDR_FMT_plx ".." HWADDR_FMT_plx
                     " %s%s%s%s%s",
             i,
             s->offset_within_address_space,
diff --git a/target/i386/monitor.c b/target/i386/monitor.c
index 8e4b4d600c..ad5b7b8bb5 100644
--- a/target/i386/monitor.c
+++ b/target/i386/monitor.c
@@ -57,7 +57,7 @@ static void print_pte(Monitor *mon, CPUArchState *env, hwaddr addr,
 {
     addr = addr_canonical(env, addr);
 
-    monitor_printf(mon, TARGET_FMT_plx ": " TARGET_FMT_plx
+    monitor_printf(mon, HWADDR_FMT_plx ": " HWADDR_FMT_plx
                    " %c%c%c%c%c%c%c%c%c\n",
                    addr,
                    pte & mask,
@@ -258,8 +258,8 @@ static void mem_print(Monitor *mon, CPUArchState *env,
     prot1 = *plast_prot;
     if (prot != prot1) {
         if (*pstart != -1) {
-            monitor_printf(mon, TARGET_FMT_plx "-" TARGET_FMT_plx " "
-                           TARGET_FMT_plx " %c%c%c\n",
+            monitor_printf(mon, HWADDR_FMT_plx "-" HWADDR_FMT_plx " "
+                           HWADDR_FMT_plx " %c%c%c\n",
                            addr_canonical(env, *pstart),
                            addr_canonical(env, end),
                            addr_canonical(env, end - *pstart),
diff --git a/target/loongarch/tlb_helper.c b/target/loongarch/tlb_helper.c
index c6d1de50fe..cce1db1e0a 100644
--- a/target/loongarch/tlb_helper.c
+++ b/target/loongarch/tlb_helper.c
@@ -655,7 +655,7 @@ bool loongarch_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                      physical & TARGET_PAGE_MASK, prot,
                      mmu_idx, TARGET_PAGE_SIZE);
         qemu_log_mask(CPU_LOG_MMU,
-                      "%s address=%" VADDR_PRIx " physical " TARGET_FMT_plx
+                      "%s address=%" VADDR_PRIx " physical " HWADDR_FMT_plx
                       " prot %d\n", __func__, address, physical, prot);
         return true;
     } else {
diff --git a/target/microblaze/op_helper.c b/target/microblaze/op_helper.c
index 5b745d0928..f6378030b7 100644
--- a/target/microblaze/op_helper.c
+++ b/target/microblaze/op_helper.c
@@ -403,7 +403,7 @@ void mb_cpu_transaction_failed(CPUState *cs, hwaddr physaddr, vaddr addr,
     CPUMBState *env = &cpu->env;
 
     qemu_log_mask(CPU_LOG_INT, "Transaction failed: vaddr 0x%" VADDR_PRIx
-                  " physaddr 0x" TARGET_FMT_plx " size %d access type %s\n",
+                  " physaddr 0x" HWADDR_FMT_plx " size %d access type %s\n",
                   addr, physaddr, size,
                   access_type == MMU_INST_FETCH ? "INST_FETCH" :
                   (access_type == MMU_DATA_LOAD ? "DATA_LOAD" : "DATA_STORE"));
diff --git a/target/mips/tcg/sysemu/tlb_helper.c b/target/mips/tcg/sysemu/tlb_helper.c
index 9d16859c0a..e5e1e9dd3f 100644
--- a/target/mips/tcg/sysemu/tlb_helper.c
+++ b/target/mips/tcg/sysemu/tlb_helper.c
@@ -924,7 +924,7 @@ bool mips_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     switch (ret) {
     case TLBRET_MATCH:
         qemu_log_mask(CPU_LOG_MMU,
-                      "%s address=%" VADDR_PRIx " physical " TARGET_FMT_plx
+                      "%s address=%" VADDR_PRIx " physical " HWADDR_FMT_plx
                       " prot %d\n", __func__, address, physical, prot);
         break;
     default:
diff --git a/target/ppc/mmu-hash32.c b/target/ppc/mmu-hash32.c
index cc091c3e62..3976416840 100644
--- a/target/ppc/mmu-hash32.c
+++ b/target/ppc/mmu-hash32.c
@@ -346,24 +346,24 @@ static hwaddr ppc_hash32_htab_lookup(PowerPCCPU *cpu,
     ptem = (vsid << 7) | (pgidx >> 10);
 
     /* Page address translation */
-    qemu_log_mask(CPU_LOG_MMU, "htab_base " TARGET_FMT_plx
-            " htab_mask " TARGET_FMT_plx
-            " hash " TARGET_FMT_plx "\n",
+    qemu_log_mask(CPU_LOG_MMU, "htab_base " HWADDR_FMT_plx
+            " htab_mask " HWADDR_FMT_plx
+            " hash " HWADDR_FMT_plx "\n",
             ppc_hash32_hpt_base(cpu), ppc_hash32_hpt_mask(cpu), hash);
 
     /* Primary PTEG lookup */
-    qemu_log_mask(CPU_LOG_MMU, "0 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
+    qemu_log_mask(CPU_LOG_MMU, "0 htab=" HWADDR_FMT_plx "/" HWADDR_FMT_plx
             " vsid=%" PRIx32 " ptem=%" PRIx32
-            " hash=" TARGET_FMT_plx "\n",
+            " hash=" HWADDR_FMT_plx "\n",
             ppc_hash32_hpt_base(cpu), ppc_hash32_hpt_mask(cpu),
             vsid, ptem, hash);
     pteg_off = get_pteg_offset32(cpu, hash);
     pte_offset = ppc_hash32_pteg_search(cpu, pteg_off, 0, ptem, pte);
     if (pte_offset == -1) {
         /* Secondary PTEG lookup */
-        qemu_log_mask(CPU_LOG_MMU, "1 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
+        qemu_log_mask(CPU_LOG_MMU, "1 htab=" HWADDR_FMT_plx "/" HWADDR_FMT_plx
                 " vsid=%" PRIx32 " api=%" PRIx32
-                " hash=" TARGET_FMT_plx "\n", ppc_hash32_hpt_base(cpu),
+                " hash=" HWADDR_FMT_plx "\n", ppc_hash32_hpt_base(cpu),
                 ppc_hash32_hpt_mask(cpu), vsid, ptem, ~hash);
         pteg_off = get_pteg_offset32(cpu, ~hash);
         pte_offset = ppc_hash32_pteg_search(cpu, pteg_off, 1, ptem, pte);
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index b9b31fd276..900f906990 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -697,15 +697,15 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
 
     /* Page address translation */
     qemu_log_mask(CPU_LOG_MMU,
-            "htab_base " TARGET_FMT_plx " htab_mask " TARGET_FMT_plx
-            " hash " TARGET_FMT_plx "\n",
+            "htab_base " HWADDR_FMT_plx " htab_mask " HWADDR_FMT_plx
+            " hash " HWADDR_FMT_plx "\n",
             ppc_hash64_hpt_base(cpu), ppc_hash64_hpt_mask(cpu), hash);
 
     /* Primary PTEG lookup */
     qemu_log_mask(CPU_LOG_MMU,
-            "0 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
+            "0 htab=" HWADDR_FMT_plx "/" HWADDR_FMT_plx
             " vsid=" TARGET_FMT_lx " ptem=" TARGET_FMT_lx
-            " hash=" TARGET_FMT_plx "\n",
+            " hash=" HWADDR_FMT_plx "\n",
             ppc_hash64_hpt_base(cpu), ppc_hash64_hpt_mask(cpu),
             vsid, ptem,  hash);
     ptex = ppc_hash64_pteg_search(cpu, hash, sps, ptem, pte, pshift);
@@ -714,9 +714,9 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
         /* Secondary PTEG lookup */
         ptem |= HPTE64_V_SECONDARY;
         qemu_log_mask(CPU_LOG_MMU,
-                "1 htab=" TARGET_FMT_plx "/" TARGET_FMT_plx
+                "1 htab=" HWADDR_FMT_plx "/" HWADDR_FMT_plx
                 " vsid=" TARGET_FMT_lx " api=" TARGET_FMT_lx
-                " hash=" TARGET_FMT_plx "\n", ppc_hash64_hpt_base(cpu),
+                " hash=" HWADDR_FMT_plx "\n", ppc_hash64_hpt_base(cpu),
                 ppc_hash64_hpt_mask(cpu), vsid, ptem, ~hash);
 
         ptex = ppc_hash64_pteg_search(cpu, ~hash, sps, ptem, pte, pshift);
diff --git a/target/ppc/mmu_common.c b/target/ppc/mmu_common.c
index 8901f4d134..7235a4befe 100644
--- a/target/ppc/mmu_common.c
+++ b/target/ppc/mmu_common.c
@@ -252,7 +252,7 @@ static int ppc6xx_tlb_check(CPUPPCState *env, mmu_ctx_t *ctx,
     }
     if (best != -1) {
     done:
-        qemu_log_mask(CPU_LOG_MMU, "found TLB at addr " TARGET_FMT_plx
+        qemu_log_mask(CPU_LOG_MMU, "found TLB at addr " HWADDR_FMT_plx
                       " prot=%01x ret=%d\n",
                       ctx->raddr & TARGET_PAGE_MASK, ctx->prot, ret);
         /* Update page flags */
@@ -328,7 +328,7 @@ static int get_bat_6xx_tlb(CPUPPCState *env, mmu_ctx_t *ctx,
                 ctx->prot = prot;
                 ret = check_prot(ctx->prot, access_type);
                 if (ret == 0) {
-                    qemu_log_mask(CPU_LOG_MMU, "BAT %d match: r " TARGET_FMT_plx
+                    qemu_log_mask(CPU_LOG_MMU, "BAT %d match: r " HWADDR_FMT_plx
                                   " prot=%c%c\n", i, ctx->raddr,
                                   ctx->prot & PAGE_READ ? 'R' : '-',
                                   ctx->prot & PAGE_WRITE ? 'W' : '-');
@@ -403,9 +403,9 @@ static int get_segment_6xx_tlb(CPUPPCState *env, mmu_ctx_t *ctx,
         /* Check if instruction fetch is allowed, if needed */
         if (type != ACCESS_CODE || ctx->nx == 0) {
             /* Page address translation */
-            qemu_log_mask(CPU_LOG_MMU, "htab_base " TARGET_FMT_plx
-                    " htab_mask " TARGET_FMT_plx
-                    " hash " TARGET_FMT_plx "\n",
+            qemu_log_mask(CPU_LOG_MMU, "htab_base " HWADDR_FMT_plx
+                    " htab_mask " HWADDR_FMT_plx
+                    " hash " HWADDR_FMT_plx "\n",
                     ppc_hash32_hpt_base(cpu), ppc_hash32_hpt_mask(cpu), hash);
             ctx->hash[0] = hash;
             ctx->hash[1] = ~hash;
@@ -420,7 +420,7 @@ static int get_segment_6xx_tlb(CPUPPCState *env, mmu_ctx_t *ctx,
                 hwaddr curaddr;
                 uint32_t a0, a1, a2, a3;
 
-                qemu_log("Page table: " TARGET_FMT_plx " len " TARGET_FMT_plx
+                qemu_log("Page table: " HWADDR_FMT_plx " len " HWADDR_FMT_plx
                          "\n", ppc_hash32_hpt_base(cpu),
                          ppc_hash32_hpt_mask(cpu) + 0x80);
                 for (curaddr = ppc_hash32_hpt_base(cpu);
@@ -432,7 +432,7 @@ static int get_segment_6xx_tlb(CPUPPCState *env, mmu_ctx_t *ctx,
                     a2 = ldl_phys(cs->as, curaddr + 8);
                     a3 = ldl_phys(cs->as, curaddr + 12);
                     if (a0 != 0 || a1 != 0 || a2 != 0 || a3 != 0) {
-                        qemu_log(TARGET_FMT_plx ": %08x %08x %08x %08x\n",
+                        qemu_log(HWADDR_FMT_plx ": %08x %08x %08x %08x\n",
                                  curaddr, a0, a1, a2, a3);
                     }
                 }
@@ -578,14 +578,14 @@ static int mmu40x_get_physical_address(CPUPPCState *env, mmu_ctx_t *ctx,
         if (ret >= 0) {
             ctx->raddr = raddr;
             qemu_log_mask(CPU_LOG_MMU, "%s: access granted " TARGET_FMT_lx
-                          " => " TARGET_FMT_plx
+                          " => " HWADDR_FMT_plx
                           " %d %d\n", __func__, address, ctx->raddr, ctx->prot,
                           ret);
             return 0;
         }
     }
      qemu_log_mask(CPU_LOG_MMU, "%s: access refused " TARGET_FMT_lx
-                   " => " TARGET_FMT_plx
+                   " => " HWADDR_FMT_plx
                    " %d %d\n", __func__, address, raddr, ctx->prot, ret);
 
     return ret;
@@ -666,11 +666,11 @@ static int mmubooke_get_physical_address(CPUPPCState *env, mmu_ctx_t *ctx,
     if (ret >= 0) {
         ctx->raddr = raddr;
         qemu_log_mask(CPU_LOG_MMU, "%s: access granted " TARGET_FMT_lx
-                      " => " TARGET_FMT_plx " %d %d\n", __func__,
+                      " => " HWADDR_FMT_plx " %d %d\n", __func__,
                       address, ctx->raddr, ctx->prot, ret);
     } else {
          qemu_log_mask(CPU_LOG_MMU, "%s: access refused " TARGET_FMT_lx
-                       " => " TARGET_FMT_plx " %d %d\n", __func__,
+                       " => " HWADDR_FMT_plx " %d %d\n", __func__,
                        address, raddr, ctx->prot, ret);
     }
 
@@ -894,11 +894,11 @@ found_tlb:
     if (ret >= 0) {
         ctx->raddr = raddr;
          qemu_log_mask(CPU_LOG_MMU, "%s: access granted " TARGET_FMT_lx
-                       " => " TARGET_FMT_plx " %d %d\n", __func__, address,
+                       " => " HWADDR_FMT_plx " %d %d\n", __func__, address,
                        ctx->raddr, ctx->prot, ret);
     } else {
          qemu_log_mask(CPU_LOG_MMU, "%s: access refused " TARGET_FMT_lx
-                       " => " TARGET_FMT_plx " %d %d\n", __func__, address,
+                       " => " HWADDR_FMT_plx " %d %d\n", __func__, address,
                        raddr, ctx->prot, ret);
     }
 
diff --git a/target/ppc/mmu_helper.c b/target/ppc/mmu_helper.c
index 2a91f3f46a..64e30435f5 100644
--- a/target/ppc/mmu_helper.c
+++ b/target/ppc/mmu_helper.c
@@ -826,7 +826,7 @@ void helper_4xx_tlbwe_hi(CPUPPCState *env, target_ulong entry,
         tlb->prot &= ~PAGE_VALID;
     }
     tlb->PID = env->spr[SPR_40x_PID]; /* PID */
-    qemu_log_mask(CPU_LOG_MMU, "%s: set up TLB %d RPN " TARGET_FMT_plx
+    qemu_log_mask(CPU_LOG_MMU, "%s: set up TLB %d RPN " HWADDR_FMT_plx
                   " EPN " TARGET_FMT_lx " size " TARGET_FMT_lx
                   " prot %c%c%c%c PID %d\n", __func__,
                   (int)entry, tlb->RPN, tlb->EPN, tlb->size,
@@ -864,7 +864,7 @@ void helper_4xx_tlbwe_lo(CPUPPCState *env, target_ulong entry,
     if (val & PPC4XX_TLBLO_WR) {
         tlb->prot |= PAGE_WRITE;
     }
-    qemu_log_mask(CPU_LOG_MMU, "%s: set up TLB %d RPN " TARGET_FMT_plx
+    qemu_log_mask(CPU_LOG_MMU, "%s: set up TLB %d RPN " HWADDR_FMT_plx
                   " EPN " TARGET_FMT_lx
                   " size " TARGET_FMT_lx " prot %c%c%c%c PID %d\n", __func__,
                   (int)entry, tlb->RPN, tlb->EPN, tlb->size,
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 8ea3442b4a..9a28816521 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -1272,7 +1272,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
 
         qemu_log_mask(CPU_LOG_MMU,
                       "%s 1st-stage address=%" VADDR_PRIx " ret %d physical "
-                      TARGET_FMT_plx " prot %d\n",
+                      HWADDR_FMT_plx " prot %d\n",
                       __func__, address, ret, pa, prot);
 
         if (ret == TRANSLATE_SUCCESS) {
@@ -1285,7 +1285,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
 
             qemu_log_mask(CPU_LOG_MMU,
                     "%s 2nd-stage address=%" VADDR_PRIx " ret %d physical "
-                    TARGET_FMT_plx " prot %d\n",
+                    HWADDR_FMT_plx " prot %d\n",
                     __func__, im_address, ret, pa, prot2);
 
             prot &= prot2;
@@ -1295,7 +1295,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                                size, access_type, mode);
 
                 qemu_log_mask(CPU_LOG_MMU,
-                              "%s PMP address=" TARGET_FMT_plx " ret %d prot"
+                              "%s PMP address=" HWADDR_FMT_plx " ret %d prot"
                               " %d tlb_size " TARGET_FMT_lu "\n",
                               __func__, pa, ret, prot_pmp, tlb_size);
 
@@ -1320,7 +1320,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
 
         qemu_log_mask(CPU_LOG_MMU,
                       "%s address=%" VADDR_PRIx " ret %d physical "
-                      TARGET_FMT_plx " prot %d\n",
+                      HWADDR_FMT_plx " prot %d\n",
                       __func__, address, ret, pa, prot);
 
         if (ret == TRANSLATE_SUCCESS) {
@@ -1328,7 +1328,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                            size, access_type, mode);
 
             qemu_log_mask(CPU_LOG_MMU,
-                          "%s PMP address=" TARGET_FMT_plx " ret %d prot"
+                          "%s PMP address=" HWADDR_FMT_plx " ret %d prot"
                           " %d tlb_size " TARGET_FMT_lu "\n",
                           __func__, pa, ret, prot_pmp, tlb_size);
 
diff --git a/target/riscv/monitor.c b/target/riscv/monitor.c
index 17e63fab00..236f93b9f5 100644
--- a/target/riscv/monitor.c
+++ b/target/riscv/monitor.c
@@ -64,7 +64,7 @@ static void print_pte(Monitor *mon, int va_bits, target_ulong vaddr,
         return;
     }
 
-    monitor_printf(mon, TARGET_FMT_lx " " TARGET_FMT_plx " " TARGET_FMT_lx
+    monitor_printf(mon, TARGET_FMT_lx " " HWADDR_FMT_plx " " TARGET_FMT_lx
                    " %c%c%c%c%c%c%c\n",
                    addr_canonical(va_bits, vaddr),
                    paddr, size,
diff --git a/target/sparc/ldst_helper.c b/target/sparc/ldst_helper.c
index ec4fae78c3..a53580d9e4 100644
--- a/target/sparc/ldst_helper.c
+++ b/target/sparc/ldst_helper.c
@@ -430,12 +430,12 @@ static void sparc_raise_mmu_fault(CPUState *cs, hwaddr addr,
 
 #ifdef DEBUG_UNASSIGNED
     if (is_asi) {
-        printf("Unassigned mem %s access of %d byte%s to " TARGET_FMT_plx
+        printf("Unassigned mem %s access of %d byte%s to " HWADDR_FMT_plx
                " asi 0x%02x from " TARGET_FMT_lx "\n",
                is_exec ? "exec" : is_write ? "write" : "read", size,
                size == 1 ? "" : "s", addr, is_asi, env->pc);
     } else {
-        printf("Unassigned mem %s access of %d byte%s to " TARGET_FMT_plx
+        printf("Unassigned mem %s access of %d byte%s to " HWADDR_FMT_plx
                " from " TARGET_FMT_lx "\n",
                is_exec ? "exec" : is_write ? "write" : "read", size,
                size == 1 ? "" : "s", addr, env->pc);
@@ -490,7 +490,7 @@ static void sparc_raise_mmu_fault(CPUState *cs, hwaddr addr,
     CPUSPARCState *env = &cpu->env;
 
 #ifdef DEBUG_UNASSIGNED
-    printf("Unassigned mem access to " TARGET_FMT_plx " from " TARGET_FMT_lx
+    printf("Unassigned mem access to " HWADDR_FMT_plx " from " TARGET_FMT_lx
            "\n", addr, env->pc);
 #endif
 
diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index 919448a494..158ec2ae8f 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -230,7 +230,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     if (likely(error_code == 0)) {
         qemu_log_mask(CPU_LOG_MMU,
                       "Translate at %" VADDR_PRIx " -> "
-                      TARGET_FMT_plx ", vaddr " TARGET_FMT_lx "\n",
+                      HWADDR_FMT_plx ", vaddr " TARGET_FMT_lx "\n",
                       address, paddr, vaddr);
         tlb_set_page(cs, vaddr, paddr, prot, mmu_idx, page_size);
         return true;
@@ -356,27 +356,27 @@ void dump_mmu(CPUSPARCState *env)
     hwaddr pa;
     uint32_t pde;
 
-    qemu_printf("Root ptr: " TARGET_FMT_plx ", ctx: %d\n",
+    qemu_printf("Root ptr: " HWADDR_FMT_plx ", ctx: %d\n",
                 (hwaddr)env->mmuregs[1] << 4, env->mmuregs[2]);
     for (n = 0, va = 0; n < 256; n++, va += 16 * 1024 * 1024) {
         pde = mmu_probe(env, va, 2);
         if (pde) {
             pa = cpu_get_phys_page_debug(cs, va);
-            qemu_printf("VA: " TARGET_FMT_lx ", PA: " TARGET_FMT_plx
+            qemu_printf("VA: " TARGET_FMT_lx ", PA: " HWADDR_FMT_plx
                         " PDE: " TARGET_FMT_lx "\n", va, pa, pde);
             for (m = 0, va1 = va; m < 64; m++, va1 += 256 * 1024) {
                 pde = mmu_probe(env, va1, 1);
                 if (pde) {
                     pa = cpu_get_phys_page_debug(cs, va1);
                     qemu_printf(" VA: " TARGET_FMT_lx ", PA: "
-                                TARGET_FMT_plx " PDE: " TARGET_FMT_lx "\n",
+                                HWADDR_FMT_plx " PDE: " TARGET_FMT_lx "\n",
                                 va1, pa, pde);
                     for (o = 0, va2 = va1; o < 64; o++, va2 += 4 * 1024) {
                         pde = mmu_probe(env, va2, 0);
                         if (pde) {
                             pa = cpu_get_phys_page_debug(cs, va2);
                             qemu_printf("  VA: " TARGET_FMT_lx ", PA: "
-                                        TARGET_FMT_plx " PTE: "
+                                        HWADDR_FMT_plx " PTE: "
                                         TARGET_FMT_lx "\n",
                                         va2, pa, pde);
                         }
diff --git a/target/tricore/helper.c b/target/tricore/helper.c
index 1db32808e8..114685cce4 100644
--- a/target/tricore/helper.c
+++ b/target/tricore/helper.c
@@ -79,7 +79,7 @@ bool tricore_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                address, rw, mmu_idx);
 
     qemu_log_mask(CPU_LOG_MMU, "%s address=" TARGET_FMT_lx " ret %d physical "
-                  TARGET_FMT_plx " prot %d\n",
+                  HWADDR_FMT_plx " prot %d\n",
                   __func__, (target_ulong)address, ret, physical, prot);
 
     if (ret == TLBRET_MATCH) {
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 10 22:04:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 22:04:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474988.736461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFMjA-00068h-Hx; Tue, 10 Jan 2023 22:04:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474988.736461; Tue, 10 Jan 2023 22:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFMjA-00068a-FI; Tue, 10 Jan 2023 22:04:16 +0000
Received: by outflank-mailman (input) for mailman id 474988;
 Tue, 10 Jan 2023 22:04:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XyTY=5H=eik.bme.hu=balaton@srs-se1.protection.inumbo.net>)
 id 1pFMj9-00068U-Ll
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 22:04:15 +0000
Received: from zero.eik.bme.hu (zero.eik.bme.hu [152.66.115.2])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b6e89f3c-9132-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 23:04:12 +0100 (CET)
Received: from zero.eik.bme.hu (blah.eik.bme.hu [152.66.115.182])
 by localhost (Postfix) with SMTP id 3ACFA745712;
 Tue, 10 Jan 2023 23:01:53 +0100 (CET)
Received: by zero.eik.bme.hu (Postfix, from userid 432)
 id F3B30745706; Tue, 10 Jan 2023 23:01:52 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
 by zero.eik.bme.hu (Postfix) with ESMTP id F05E37456E3;
 Tue, 10 Jan 2023 23:01:52 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6e89f3c-9132-11ed-b8d0-410ff93cb8f0
Date: Tue, 10 Jan 2023 23:01:52 +0100 (CET)
From: BALATON Zoltan <balaton@eik.bme.hu>
To: =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>
cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>, 
    qemu-block@nongnu.org, qemu-arm@nongnu.org, qemu-ppc@nongnu.org, 
    Richard Henderson <richard.henderson@linaro.org>, 
    =?ISO-8859-15?Q?Alex_Benn=E9e?= <alex.bennee@linaro.org>, ale@rev.ng, 
    qemu-riscv@nongnu.org, xen-devel@lists.xenproject.org, 
    Thomas Huth <thuth@redhat.com>
Subject: Re: [PATCH] bulk: Rename TARGET_FMT_plx -> HWADDR_FMT_plx
In-Reply-To: <20230110212947.34557-1-philmd@linaro.org>
Message-ID: <d4ea8bf5-a669-eb33-6dd2-f37417dab1c7@eik.bme.hu>
References: <20230110212947.34557-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="3866299591-1251169997-1673388112=:35553"
X-Spam-Checker-Version: Sophos PMX: 6.4.8.2820816, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2023.1.10.215119, AntiVirus-Engine: 5.96.0, AntiVirus-Data: 2023.1.10.5960001
X-Spam-Flag: NO
X-Spam-Probability: 9%
X-Spam-Level: 
X-Spam-Status: No, score=9% required=50%


--3866299591-1251169997-1673388112=:35553
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8BIT

On Tue, 10 Jan 2023, Philippe Mathieu-Daudé wrote:
> The 'hwaddr' type is defined in "exec/hwaddr.h" as:
>
>    hwaddr is the type of a physical address
>   (its size can be different from 'target_ulong').
>
> All definitions use the 'HWADDR_' prefix, except TARGET_FMT_plx:
>
> $ fgrep define include/exec/hwaddr.h
> #define HWADDR_H
> #define HWADDR_BITS 64
> #define HWADDR_MAX UINT64_MAX
> #define TARGET_FMT_plx "%016" PRIx64
>         ^^^^^^
> #define HWADDR_PRId PRId64
> #define HWADDR_PRIi PRIi64
> #define HWADDR_PRIo PRIo64
> #define HWADDR_PRIu PRIu64
> #define HWADDR_PRIx PRIx64

Why are there both TARGET_FMT_plx and HWADDR_PRIx? Why not just use 
HWADDR_PRIx instead?

Regards,
BALATON Zoltan
--3866299591-1251169997-1673388112=:35553--


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 22:26:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 22:26:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.474995.736472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFN4c-00008r-AZ; Tue, 10 Jan 2023 22:26:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 474995.736472; Tue, 10 Jan 2023 22:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFN4c-00008k-7B; Tue, 10 Jan 2023 22:26:26 +0000
Received: by outflank-mailman (input) for mailman id 474995;
 Tue, 10 Jan 2023 22:26:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Esb2=5H=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pFN4b-00008e-5E
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 22:26:25 +0000
Received: from mail-vs1-xe31.google.com (mail-vs1-xe31.google.com
 [2607:f8b0:4864:20::e31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf660be4-9135-11ed-b8d0-410ff93cb8f0;
 Tue, 10 Jan 2023 23:26:22 +0100 (CET)
Received: by mail-vs1-xe31.google.com with SMTP id t10so3275214vsr.3
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 14:26:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf660be4-9135-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=IZhsa1BRRn0LWm2+qyDZA4kp5ryKIQ0bdpgv4P9ZnTE=;
        b=EbqBD4NEjXR7PbLaQ/7HfFBIicc1rO3MgjB5n1JfOsHIpY4+gURtFwcS/efzu6VnK8
         iWpEGAjKqM5TgHLahDGE8XAMe/E/WUbLyYPvzw9mq8XLHAwGbZo/rxWnSpEnC+bz3yxZ
         WQP4btPlXNpgQYLp1TD+syyCVtEgb66e71mdV4gn5e0GFqrU9aSDYFReL1DbCyIy65Ey
         tOe9Gk10VcffKCB/NaMwK+QdN0V7dCw9c4AOU2cItfhhI4E0PD/OIHx1Z8nOtp4rD3Ax
         Nqdlx10zGcKrJHtSzDMsQlwKgVotoB57ycFGCb5lQPsZ2g8vOiZ8KdKTDLM/XXTG/Yw1
         K8rw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=IZhsa1BRRn0LWm2+qyDZA4kp5ryKIQ0bdpgv4P9ZnTE=;
        b=PritcKdxWGpjaNjzIqAALEvVW+cFsnaoASfolJ83ahOe9W0swtZC977L/mhK9tWwTj
         jtw61OKxA9MCbWp9H0AX3oSym3idDkuSOquzqJKNJY1nlysIxHb022xp7aWWJxeG2akt
         fok0nBLOKEZ6G/80mPOzFxLkiC2Cmw2YOT7pZ6hvzpBhH19dX5RoAAyT87n/moZf7A/9
         F9W1Sz+QFg0bTOmfiSUAZNCLo0S+tT9lVCVPHfoFAYU2/vGYF3yajjazgqbiEiqf7Dt0
         v764Iml8J+pqeAU9FzgXxWg63AnPb8VSjUmqIwHl3PjZwFMScerLHXT1CpdZsZeVzrLB
         Hnnw==
X-Gm-Message-State: AFqh2kpxBEBnWto3hRMCF+5dIPWi8grh04ZxOAbjE1onNXAGpYx2MTpr
	aJ9EjgYG4hBsjHBV/nol+GeBoIQHzgFX21dAwaw=
X-Google-Smtp-Source: AMrXdXuWycCwf1rXJCknXxR3cUfEDUJV/lBBOmFmcQSlQ7u1K27uah59lY7T3vkBmtS4/+SMfJNopgy5YwuwOMBL0uc=
X-Received: by 2002:a67:6582:0:b0:3d0:d7ce:b383 with SMTP id
 z124-20020a676582000000b003d0d7ceb383mr119944vsb.72.1673389581218; Tue, 10
 Jan 2023 14:26:21 -0800 (PST)
MIME-Version: 1.0
References: <cover.1673362493.git.oleksii.kurochko@gmail.com> <7d89e3811e6ea4f307862d6552ad7c7e58176518.1673362493.git.oleksii.kurochko@gmail.com>
In-Reply-To: <7d89e3811e6ea4f307862d6552ad7c7e58176518.1673362493.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 11 Jan 2023 08:25:55 +1000
Message-ID: <CAKmqyKOe7Si2k6+ewZn3O6pb3TM7DQxxQZnedSL2Z3AZm2ox_w@mail.gmail.com>
Subject: Re: [PATCH v3 3/6] xen/riscv: introduce stack stuff
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 11, 2023 at 1:18 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> The patch introduces and sets up a stack in order to enter the C environment.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V3:
>     - reorder headers in alphabetical order
>     - merge changes related to start_xen() function from "[PATCH v2 7/8]
>       xen/riscv: print hello message from C env" to this patch
>     - remove unneeded parentheses in definition of STACK_SIZE
> ---
> Changes in V2:
>     - introduce STACK_SIZE define.
>     - use consistent padding between instruction mnemonic and operand(s)
> ---
>  xen/arch/riscv/Makefile             |  1 +
>  xen/arch/riscv/include/asm/config.h |  2 ++
>  xen/arch/riscv/riscv64/head.S       |  6 +++++-
>  xen/arch/riscv/setup.c              | 14 ++++++++++++++
>  4 files changed, 22 insertions(+), 1 deletion(-)
>  create mode 100644 xen/arch/riscv/setup.c
>
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 248f2cbb3e..5a67a3f493 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,4 +1,5 @@
>  obj-$(CONFIG_RISCV_64) += riscv64/
> +obj-y += setup.o
>
>  $(TARGET): $(TARGET)-syms
>         $(OBJCOPY) -O binary -S $< $@
> diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
> index 0370f865f3..763a922a04 100644
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -43,6 +43,8 @@
>
>  #define SMP_CACHE_BYTES (1 << 6)
>
> +#define STACK_SIZE PAGE_SIZE
> +
>  #endif /* __RISCV_CONFIG_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> index 990edb70a0..d444dd8aad 100644
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,4 +1,8 @@
>          .section .text.header, "ax", %progbits
>
>  ENTRY(start)
> -        j  start
> +        la      sp, cpu0_boot_stack
> +        li      t0, STACK_SIZE
> +        add     sp, sp, t0
> +
> +        tail    start_xen
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> new file mode 100644
> index 0000000000..13e24e2fe1
> --- /dev/null
> +++ b/xen/arch/riscv/setup.c
> @@ -0,0 +1,14 @@
> +#include <xen/compile.h>
> +#include <xen/init.h>
> +
> +/* Xen stack for bringing up the first CPU. */
> +unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
> +    __aligned(STACK_SIZE);
> +
> +void __init noreturn start_xen(void)
> +{
> +    for ( ;; )
> +        asm volatile ("wfi");
> +
> +    unreachable();
> +}
> --
> 2.38.1
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 10 22:31:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Jan 2023 22:31:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475001.736483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFN9E-0001Yu-Ro; Tue, 10 Jan 2023 22:31:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475001.736483; Tue, 10 Jan 2023 22:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFN9E-0001Yn-OT; Tue, 10 Jan 2023 22:31:12 +0000
Received: by outflank-mailman (input) for mailman id 475001;
 Tue, 10 Jan 2023 22:31:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFN9D-0001Yh-Mz
 for xen-devel@lists.xenproject.org; Tue, 10 Jan 2023 22:31:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFN96-0003Zs-Mt; Tue, 10 Jan 2023 22:31:04 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.2.225]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFN96-0004o6-Ft; Tue, 10 Jan 2023 22:31:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=nL0hgIXtLIRLqID3kiLWMDGxgcSsGiMcy7f4z5lq1Jg=; b=XG6kO7rHNnckMSBBPOWKDM3577
	TfGLpg+bfRuQBpU3vI3Y7JisSBzaUyg2I7O8SuvgF33UGVONjNJowJJhEl6jHnurU0E4r8AHn3UVD
	p5WQ2WZz8TubQbpW9pCsXA+P3TRnW/eIfU4VgGbrOKTL+fbw9EQ0sY+1s8Rjwi6zqJEI=;
Message-ID: <c7c5bac2-d9a9-6a1f-355f-c1088322e8ad@xen.org>
Date: Tue, 10 Jan 2023 22:31:01 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 1/6] xen/riscv: introduce dummy asm/init.h
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <b1585373e39a7cbe023f485aa5a04b093e25ec80.1673362493.git.oleksii.kurochko@gmail.com>
 <891d0830-7fdc-202a-5f12-2364cae5bce5@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <891d0830-7fdc-202a-5f12-2364cae5bce5@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 10/01/2023 17:02, Jan Beulich wrote:
> Arm maintainers,

Hi Jan,

> On 10.01.2023 16:17, Oleksii Kurochko wrote:
>> --- /dev/null
>> +++ b/xen/arch/riscv/include/asm/init.h
>> @@ -0,0 +1,12 @@
>> +#ifndef _XEN_ASM_INIT_H
>> +#define _XEN_ASM_INIT_H
>> +
>> +#endif /* _XEN_ASM_INIT_H */
> 
> instead of having RISC-V introduce an empty stub matching what x86 has,
> what would it take to empty Arm's as well, allowing both present ones to
> go away and no new one to be introduced? The only thing you have in there
> is struct init_info, which doesn't feel like it's properly placed in this
> header (x86 has such stuff in setup.h) ...

Yes. setup.h should be a good place even though the header is getting 
quite crowded.

I am happy to send a patch for it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 00:38:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 00:38:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475008.736493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFP85-0005WJ-1W; Wed, 11 Jan 2023 00:38:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475008.736493; Wed, 11 Jan 2023 00:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFP84-0005WC-V6; Wed, 11 Jan 2023 00:38:08 +0000
Received: by outflank-mailman (input) for mailman id 475008;
 Wed, 11 Jan 2023 00:38:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFP83-0005W2-3s; Wed, 11 Jan 2023 00:38:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFP82-0006rr-S9; Wed, 11 Jan 2023 00:38:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFP82-0003TC-BA; Wed, 11 Jan 2023 00:38:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFP82-0003vP-Ah; Wed, 11 Jan 2023 00:38:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=TWZ+qVXHQP3krp9rAGWxkij9uLoIozYj3G3z4UDpz1A=; b=AnXClTdQK836sQW5umF0qeJ6Vw
	SZUnYoA541gVyop/idHTXMrrvKYaM24Tdf8NvIYObl7BN0CwAHFTl7SN9iF2U7RYwFrjem8+oQ/AG
	uyjRm+Ss1GVEEaVUWvkAnFHXe1XYtx0GYUb+dAmZjwrlB47mdBHnt9jvFGdvnEZ+5VKw=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-armhf
Message-Id: <E1pFP82-0003vP-Ah@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 00:38:06 +0000

branch xen-unstable
xenbranch xen-unstable
job build-armhf
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175708/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows an enclave to receive a notification on ERESUME after an
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports creating enclaves with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      also by some distros including Debian for 6 years even
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This is not needed ever since QEMU stopped detecting -liberty; this
      happened with the Meson switch but it is quite likely that the
      library was not really necessary years before.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching an IDE_CFATA device to an IDE bus.  It behaves
      like a CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      The current logging code allows redirecting to a file with `-D`,
      but this requires enabling some logging item with `-d` as well
      to be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock(),
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off the logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file,
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior; in particular, having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-armhf.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-armhf.xen-build --summary-out=tmp/175710.bisection-summary --basis-template=175623 --blessings=real,real-bisect,real-retry qemu-mainline build-armhf xen-build
Searching for failure / basis pass:
 175703 fail [host=cubietruck-gleizes] / 175637 ok.
Failure / basis pass flights: 175703 / 175637
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 82dd766f25225b0812bbac628c60d2b48f2346e4 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 692d04a9ca429ca574d859fa8f43578e03b9f8b3
Basis pass d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d8d829b89dababf763ab33b8cdd852b2830db3cf-82dd766f25225b0812bbac628c60d2b48f2346e4 git://git.qemu.org/qemu.git#528d9f33cad5245c1099d77084c78bb2244d5143-aa96ab7c9df59c615ca82b49c9062819e0a1c287 git://xenbits.xen.org/osstest/seabios.git#645a64b4911d7cadf5749d7375544fc2384e70ba-645a64b4911d7cadf5749d7375544fc2384e70ba git://xenbits.xen.org/xen.git#2b21cbbb339fb14414f357a6683b1df74c36fda2-692d04a9ca429ca574d859fa8f43578e03b9f8b3
Loaded 24993 nodes in revision graph
Searching for test results:
 175627 [host=cubietruck-braque]
 175637 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175643 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175647 fail d8d829b89dababf763ab33b8cdd852b2830db3cf d6271b657286de80260413684a1f2a63f44ea17b 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175654 fail irrelevant
 175664 fail irrelevant
 175672 [host=cubietruck-picasso]
 175681 fail irrelevant
 175699 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175700 fail irrelevant
 175691 fail irrelevant
 175701 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175703 fail 82dd766f25225b0812bbac628c60d2b48f2346e4 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 692d04a9ca429ca574d859fa8f43578e03b9f8b3
 175704 fail irrelevant
 175705 pass d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175708 fail d8d829b89dababf763ab33b8cdd852b2830db3cf 3d83b78285d6e96636130f7d449fd02e2d4deee0 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175710 fail 82dd766f25225b0812bbac628c60d2b48f2346e4 aa96ab7c9df59c615ca82b49c9062819e0a1c287 645a64b4911d7cadf5749d7375544fc2384e70ba 692d04a9ca429ca574d859fa8f43578e03b9f8b3
Searching for interesting versions
 Result found: flight 175637 (pass), for basis pass
 Result found: flight 175703 (fail), for basis failure (at ancestor ~9)
 Repro found: flight 175705 (pass), for basis pass
 Repro found: flight 175710 (fail), for basis failure
 0 revisions at d8d829b89dababf763ab33b8cdd852b2830db3cf 528d9f33cad5245c1099d77084c78bb2244d5143 645a64b4911d7cadf5749d7375544fc2384e70ba 2b21cbbb339fb14414f357a6683b1df74c36fda2
No revisions left to test, checking graph state.
 Result found: flight 175637 (pass), for last pass
 Result found: flight 175643 (fail), for first failure
 Repro found: flight 175699 (pass), for last pass
 Repro found: flight 175701 (fail), for first failure
 Repro found: flight 175705 (pass), for last pass
 Repro found: flight 175708 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  3d83b78285d6e96636130f7d449fd02e2d4deee0
  Bug not present: 528d9f33cad5245c1099d77084c78bb2244d5143
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/175708/


  commit 3d83b78285d6e96636130f7d449fd02e2d4deee0
  Merge: 528d9f33ca fb418b51b7
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sun Jan 8 14:27:40 2023 +0000
  
      Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
      
      * Atomic memslot updates for KVM (Emanuele, David)
      * Always send errors to logfile when daemonized (Greg)
      * Add support for IDE CompactFlash card (Lubomir)
      * First round of build system cleanups (myself)
      * First round of feature removals (myself)
      * Reduce "qemu/accel.h" inclusion (Philippe)
      
      # gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
        i386: SGX: remove deprecated member of SGXInfo
        target/i386: Add SGX aex-notify and EDECCSSA support
        util: remove support -chardev tty and -chardev parport
        util: remove support for hex numbers with a scaling suffix
        KVM: remove support for kernel-irqchip=off
        docs: do not talk about past removal as happening in the future
        meson: accept relative symlinks in "meson introspect --installed" data
        meson: cleanup compiler detection
        meson: support meson 0.64 -Doptimization=plain
        configure: test all warnings
        tests/qapi-schema: remove Meson workaround
        meson: cleanup dummy-cpus.c rules
        meson: tweak hardening options for Windows
        configure: remove backwards-compatibility and obsolete options
        configure: preserve qemu-ga variables
        configure: cleanup $cpu tests
        configure: remove dead function
        configure: remove useless write_c_skeleton
        ide: Add "ide-cf" driver, a CompactFlash card
        ide: Add 8-bit data mode
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit fb418b51b7b43c34873f4b9af3da7031b7452115
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:02:48 2022 +0100
  
      i386: SGX: remove deprecated member of SGXInfo
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit d45f24fe7525d8a8aaa4ca6d9d214dc41819caa5
  Author: Kai Huang <kai.huang@intel.com>
  Date:   Wed Nov 9 15:48:34 2022 +1300
  
      target/i386: Add SGX aex-notify and EDECCSSA support
      
      The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
      allows one enclave to receive a notification in the ERESUME after the
      enclave exit due to an AEX.  EDECCSSA is a new SGX user leaf function
      (ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
      
      Whether the hardware supports creating enclaves with AEX-notify support
      is enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10].  The new EDECCSSA
      user leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
      
      Add support to expose the new SGX AEX-notify feature and the
      new EDECCSSA user leaf function to KVM guests.
      
      Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
      Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
      Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
      Signed-off-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6f9f630836df355b9ca3f4641e6b7be71f6af076
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:56:53 2022 +0100
  
      util: remove support -chardev tty and -chardev parport
      
      These were deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 8b902e3d2309595567e4957b96e971c4f3ca455e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:50:05 2022 +0100
  
      util: remove support for hex numbers with a scaling suffix
      
      This was deprecated in 6.0 and can now be removed.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit eaaaf8abdc9a9f3493f2cb6a751660dff3f9db57
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 10:39:32 2022 +0100
  
      KVM: remove support for kernel-irqchip=off
      
      -machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
      is the replacement that works, so remove the deprecated support for the former.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9d3f8b3247795ae8f482700bbbace04b04421d5b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Dec 16 11:05:20 2022 +0100
  
      docs: do not talk about past removal as happening in the future
      
      KVM guest support on 32-bit Arm hosts *has* been removed, so rephrase
      the sentence describing it.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f32eb0021a85efaca97f69b0e9201737562a8e4f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 13:25:00 2022 +0100
  
      meson: accept relative symlinks in "meson introspect --installed" data
      
      When installing shared libraries, as is the case for libvfio-user.so,
      Meson will include relative symbolic links in the output of
      "meson introspect --installed":
      
        {
          "libvfio-user.so": "/usr/local/lib64/libvfio-user.so",
          ...
        }
      
      In the case of scripts/symlink-install-tree.py, this will
      be a symbolic link to a symbolic link but, in any case, there is
      no issue in creating it.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e51340243687a2cd7ffcf0d6e2de030bed4b8720
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:15:06 2022 +0200
  
      meson: cleanup compiler detection
      
      Detect all compilers at the beginning of meson.build, and store
      the available languages in an array.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6a97f3939240977e66e90862419911666956a76a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:07:23 2022 +0100
  
      meson: support meson 0.64 -Doptimization=plain
      
      In Meson 0.64, the optimization built-in option now accepts the "plain" value,
      which will not set any optimization flags.  While QEMU does not check the
      contents of the option and therefore does not suffer any ill effect
      from the new value, it uses get_option to print the optimization flags
      in the summary.  Clean the code up to remove duplication, and check for
      -Doptimization=plain at the same time.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ca9b5c2ebf1aca87677a24c208bf3d0345c0b1aa
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 14:21:22 2022 +0200
  
      configure: test all warnings
      
      Some warnings are hardcoded in QEMU_CFLAGS and not tested.  There is
      no particular reason to single out these five, as many more -W flags are
      present on all the supported compilers.  For homogeneity when moving
      the detection to meson, make them use the same warn_flags infrastructure.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 7bef93ff064f540e24a36a31263ae3db2d06b3d2
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Dec 14 12:29:11 2022 +0100
  
      tests/qapi-schema: remove Meson workaround
      
      The referenced issue has been fixed since version 0.61, so remove the
      workaround.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9c9b85d705abdcce0b63f9182d8140dd67bd13fb
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Jul 22 10:43:00 2021 +0200
  
      meson: cleanup dummy-cpus.c rules
      
      Now that qtest is available on all targets including Windows, dummy-cpus.c
      is included unconditionally in the build.  It also does not need to be
      compiled per-target.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2d73fa74728dccde5cc29c4e56b4d781e4ead7c4
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Nov 2 13:03:51 2022 +0100
  
      meson: tweak hardening options for Windows
      
      meson.build has been enabling ASLR _only_ for debug builds since
      commit d2147e04f95f ("configure: move Windows flags detection to meson",
      2022-05-07); instead it was supposed to disable it for debug builds.
      
      However, the flag has been enabled for DLLs upstream for roughly 2
      years (https://sourceware.org/bugzilla/show_bug.cgi?id=19011), and
      also by some distros including Debian for 6 years even
      (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836365).
      
      Enable it unconditionally; we can fix the reversed logic of commit
      d2147e04f95f later if there are any reports, but for now just
      enable the hardening.
      
      Also add -Wl,--high-entropy-va, which also controls ASLR.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 10229ec3b0ff77c4894cefa312c21e65a761dcde
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:46 2022 +0200
  
      configure: remove backwards-compatibility and obsolete options
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 954ed68f9934a3e08f904acb93ce168505995e95
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 11:35:17 2022 +0200
  
      configure: preserve qemu-ga variables
      
      Ensure that qemu-ga variables set at configure time are kept
      later when the script is rerun.  For preserve_env to work,
      the variables need to be empty so move the default values
      to config-host.mak generation.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f9c77801f4992fae99392ccbb60596dfa1fcf04a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 12 15:27:03 2022 +0200
  
      configure: cleanup $cpu tests
      
      $cpu is derived from preprocessor defines rather than uname these days,
      so do not bother using isainfo on Solaris.  Likewise do not recognize
      BeOS's uname -m output.
      
      Keep the other, less OS-specific canonicalizations for the benefit
      of people using --cpu.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 91cd485a6dcbc8210666d19146fe73b8664f0418
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 18 10:17:25 2022 +0200
  
      configure: remove dead function
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c5634e822416e71e00f08f55a521362d8d21264d
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 20 14:20:06 2022 +0200
  
      configure: remove useless write_c_skeleton
      
      This is not needed ever since QEMU stopped detecting -liberty; this
      happened with the Meson switch but it is quite likely that the
      library was not really necessary years before.
      
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cec79db38df72ce74d0296b831e90547111bc13c
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:03:19 2022 +0100
  
      ide: Add "ide-cf" driver, a CompactFlash card
      
      This allows attaching IDE_CFATA device to an IDE bus. Behaves like a
      CompactFlash card in True IDE mode.
      
      Tested with:
      
        qemu-system-i386 \
          -device driver=ide-cf,drive=cf,bus=ide.0 \
          -drive id=cf,index=0,format=raw,if=none,file=cf.img
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120319.706885-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1ea17d228e582b1cfbf6f61e9da5fafef4063be8
  Author: Lubomir Rintel <lkundrak@v3.sk>
  Date:   Wed Nov 30 13:02:38 2022 +0100
  
      ide: Add 8-bit data mode
      
      CompactFlash uses features 0x01 and 0x81 to enable/disable 8-bit data
      path. Implement them.
      
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Message-Id: <20221130120238.706717-1-lkundrak@v3.sk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9b063b7ea697d796914b3651d15c3457b7b1135c
  Author: Greg Kurz <groug@kaod.org>
  Date:   Tue Nov 8 15:00:32 2022 +0100
  
      util/log: Always send errors to logfile when daemonized
      
      When QEMU is started with `-daemonize`, all stdio descriptors get
      redirected to `/dev/null`. This basically means that anything
      printed with error_report() and friends is lost.
      
      Current logging code allows redirecting to a file with `-D`, but
      this requires enabling some logging item with `-d` as well to
      be functional.
      
      Relax the check on the log flags when QEMU is daemonized, so that
      other users of stderr can benefit from the redirection, without the
      need to enable unwanted debug logs. Previous behaviour is retained
      for the non-daemonized case. The logic is unrolled as an `if` for
      better readability. The qemu_log_level and log_per_thread globals
      reflect the state we want to transition to at this point: use
      them instead of the intermediary locals for correctness.
      
      qemu_set_log_internal() is adapted to open a per-thread log file
      when '-d tid' is passed. This is done by hijacking qemu_try_lock()
      which seems simpler than refactoring the code.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-3-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 59bde2137445b63c822720d069d91d38190c6540
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Nov 8 15:00:31 2022 +0100
  
      util/log: do not close and reopen log files when flags are turned off
      
      log_append makes sure that if you turn off the logging (which clears
      log_flags and makes need_to_open_file false) the old log is not
      overwritten.  The use case is that if you remove or move the file
      QEMU will not keep writing to the old file.  However, this is
      not always the desired behavior, in particular having log_append==1
      after changing the file name makes little sense.
      
      When qemu_set_log_internal is called from the logfile monitor
      command, filename must be non-NULL and therefore changed_name must
      be true.  Therefore, the only case where the file is closed and
      need_to_open_file == false is indeed when log_flags becomes
      zero.  In this case, just flush the file and do not bother
      closing it, thus faking the same append behavior as previously.
      
      The behavioral change is that changing the logfile twice, for
      example log1 -> log2 -> log1, will cause log1 to be overwritten.
      This can simply be documented, since it is not a particularly
      surprising behavior.
      
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221025092119.236224-1-pbonzini@redhat.com>
      [groug: nullify global_file before actually closing the file]
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <20221108140032.1460307-2-groug@kaod.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit cc6ff741123216550997b12cdd991beeed47bd0d
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:41 2022 +0100
  
      hw: Reduce "qemu/accel.h" inclusion
      
      Move "qemu/accel.h" include from the heavily included
      "hw/boards.h" to hw/core/machine.c, the single file using
      the AccelState definition.
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-3-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3d277871f39d4de42f56b7b0cef5721e525b2d31
  Author: Philippe Mathieu-Daudé <philmd@linaro.org>
  Date:   Wed Nov 30 14:56:40 2022 +0100
  
      typedefs: Forward-declare AccelState
      
      Forward-declare AccelState in "qemu/typedefs.h" so structures
      using a reference of it (like MachineState in "hw/boards.h")
      don't have to include "qemu/accel.h".
      
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Fabiano Rosas <farosas@suse.de>
      Message-Id: <20221130135641.85328-2-philmd@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-armhf.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
175710: tolerable ALL FAIL

flight 175710 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/175710/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-armhf                   6 xen-build               fail baseline untested


jobs:
 build-armhf                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 00:39:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 00:39:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475017.736505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFP97-00067L-IB; Wed, 11 Jan 2023 00:39:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475017.736505; Wed, 11 Jan 2023 00:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFP97-00067E-Dg; Wed, 11 Jan 2023 00:39:13 +0000
Received: by outflank-mailman (input) for mailman id 475017;
 Wed, 11 Jan 2023 00:39:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFP96-000672-FZ; Wed, 11 Jan 2023 00:39:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFP96-0006sf-El; Wed, 11 Jan 2023 00:39:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFP96-0003Vw-4l; Wed, 11 Jan 2023 00:39:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFP96-0004PW-4J; Wed, 11 Jan 2023 00:39:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XACOe9Jsv3kvkMOy/6uZk5JMJ1rX1a1UMfnkoSfWIIE=; b=2CKg48WWqAJj2S9YWDd83uN4jW
	vcPXSu/yPLzdK41Mz1tn269mg8gmolyFKFRsXEAFxcQJqPR+yIfTRQNLgkSBUq7iSYPv5TUw2j2R3
	OCNgGsKGVbtgPzHrHRimJyFjHYO+N00bEIRBy4kUO0Osd31oFmqYhkhGpZ3tWFfrL8EI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175711-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175711: tolerable FAIL - PUSHED
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:heisenbug
X-Osstest-Versions-This:
    ovmf=fe405f08a09e9f2306c72aa23d8edfbcfaa23bff
X-Osstest-Versions-That:
    ovmf=ec54ce1f1ab41b92782b37ae59e752fff0ef9c41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 00:39:12 +0000

flight 175711 ovmf real [real]
flight 175713 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175711/
http://logs.test-lab.xenproject.org/osstest/logs/175713/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install  fail pass in 175713-retest

version targeted for testing:
 ovmf                 fe405f08a09e9f2306c72aa23d8edfbcfaa23bff
baseline version:
 ovmf                 ec54ce1f1ab41b92782b37ae59e752fff0ef9c41

Last test of basis   175707  2023-01-10 17:40:54 Z    0 days
Testing same since   175711  2023-01-10 21:40:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>
  Zachary Clark-Williams <zachary.clark-williams@intel.com>
  Zachary Clark-Williams <zclarkw112@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ec54ce1f1a..fe405f08a0  fe405f08a09e9f2306c72aa23d8edfbcfaa23bff -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 01:17:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 01:17:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475027.736516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFPkS-0000Fa-Eb; Wed, 11 Jan 2023 01:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475027.736516; Wed, 11 Jan 2023 01:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFPkS-0000FS-9K; Wed, 11 Jan 2023 01:17:48 +0000
Received: by outflank-mailman (input) for mailman id 475027;
 Wed, 11 Jan 2023 01:17:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFPkR-0000FH-5q; Wed, 11 Jan 2023 01:17:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFPkR-0006Qr-3a; Wed, 11 Jan 2023 01:17:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFPkQ-0004N4-Nj; Wed, 11 Jan 2023 01:17:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFPkQ-00009z-NM; Wed, 11 Jan 2023 01:17:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CdFPKlnrt47iZJh97eTAgD9msfM9iPH4ZbeGmGwRTwo=; b=Nu5Rm5Xcfj59PXa7HMgOXuBHuL
	gNFWMoKktOvdYXrGmizb76O6wJCypJUheKvFNevh3VNb8+3J9yjXpYOVaJiKWq7sfwNDu6vb/YN1s
	q6PN5Vsu+idrF/mGecSA0P+InBgWIxCevmZ8n0/prvGLiKYPbiFN2ug7Ontimq/fcdrU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175712-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175712: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4d975798e11579fdf405b348543061129e01b0fb
X-Osstest-Versions-That:
    xen=692d04a9ca429ca574d859fa8f43578e03b9f8b3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 01:17:46 +0000

flight 175712 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175712/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4d975798e11579fdf405b348543061129e01b0fb
baseline version:
 xen                  692d04a9ca429ca574d859fa8f43578e03b9f8b3

Last test of basis   175653  2023-01-09 18:05:52 Z    1 days
Testing same since   175712  2023-01-10 22:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   692d04a9ca..4d975798e1  4d975798e11579fdf405b348543061129e01b0fb -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 01:48:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 01:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475035.736527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFQDj-0003ce-OL; Wed, 11 Jan 2023 01:48:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475035.736527; Wed, 11 Jan 2023 01:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFQDj-0003cF-I5; Wed, 11 Jan 2023 01:48:03 +0000
Received: by outflank-mailman (input) for mailman id 475035;
 Wed, 11 Jan 2023 01:48:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFQDh-0003c3-Qv; Wed, 11 Jan 2023 01:48:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFQDh-00073r-Oe; Wed, 11 Jan 2023 01:48:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFQDh-00051m-4w; Wed, 11 Jan 2023 01:48:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFQDh-0000Yu-3W; Wed, 11 Jan 2023 01:48:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aP9M++HVXyuaYkEn0cw3AFsXunwLV5x9EYYiljqFap0=; b=5La9mAivIRVDUR4g8n78S/DdEq
	CwySFykiUxBPz2z3bFmzl46tX/vcX+wONPZI9ls8GWt8s943V7rRJmFSS94hgLGVTK/gDwsfcR7UI
	R3OlYKZ6qe1rn13maRv6vNeToO3zm9Rwv/xVh0o+3QW3BgojqvG3xjXRWwiYtjDlnpnQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175709-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175709: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 01:48:01 +0000

flight 175709 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175709/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    3 days
Failing since        175627  2023-01-08 14:40:14 Z    2 days   12 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    1 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 03:04:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 03:04:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475044.736538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFRPV-0003cG-8h; Wed, 11 Jan 2023 03:04:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475044.736538; Wed, 11 Jan 2023 03:04:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFRPV-0003c9-5K; Wed, 11 Jan 2023 03:04:17 +0000
Received: by outflank-mailman (input) for mailman id 475044;
 Wed, 11 Jan 2023 03:04:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFRPT-0003bz-OG; Wed, 11 Jan 2023 03:04:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFRPT-0000ig-KF; Wed, 11 Jan 2023 03:04:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFRPT-0006kp-8a; Wed, 11 Jan 2023 03:04:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFRPT-0002eW-7y; Wed, 11 Jan 2023 03:04:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+NxwE6NEMB60Od4y5e6p524A5gFTPx6MBJzagkp8g7g=; b=DYBTzz5WrmYgxpBF6g2kRVW+u4
	xnOpTEHuP2LCmCg6k/3Te/uhC10OkBOSq2C/UmawMDCLvyi2k/6aOK4r6eZwa8xE8adrMG5NNYWT5
	+owKGf7SKGtHb3CKm9M2xmdVqMOXlx36qXOJRadlwDKOdb5wbVDjcaOu1sIm1NSjOFbM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175706-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175706: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=40c18f363a0806d4f566e8a9a9bd2d7766a72cf5
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 03:04:15 +0000

flight 175706 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175706/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                40c18f363a0806d4f566e8a9a9bd2d7766a72cf5
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   95 days
Failing since        173470  2022-10-08 06:21:34 Z   94 days  199 attempts
Testing same since   175706  2023-01-10 17:40:05 Z    0 days    1 attempts

------------------------------------------------------------
3319 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505758 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 03:24:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 03:24:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475052.736549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFRj1-000608-TP; Wed, 11 Jan 2023 03:24:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475052.736549; Wed, 11 Jan 2023 03:24:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFRj1-000601-Q2; Wed, 11 Jan 2023 03:24:27 +0000
Received: by outflank-mailman (input) for mailman id 475052;
 Wed, 11 Jan 2023 03:24:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFRj0-0005zr-DL; Wed, 11 Jan 2023 03:24:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFRj0-0001CT-BW; Wed, 11 Jan 2023 03:24:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFRiz-0007Mv-TO; Wed, 11 Jan 2023 03:24:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFRiz-0001SR-Sk; Wed, 11 Jan 2023 03:24:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nH8HPizo21iSp5t+H++796J1pK2fD+QJmm4NQGEJzlo=; b=thXKFGGDI2DL5cOMUZM/H1669o
	qq4koSsQ53cCLI9+1u6rrrYJUDLzi2kvgGBni2oWXnHo1wm97GlbEOL7acebANKptMIvZiqZysKlH
	EPLfC4K3KY/uruGbdAFyNTL0I8HkitMrhHfzTYwizMQMntZTyYlw92HdTJM0knTdNsTw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175715-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 175715: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=c0f454c68329301447fd258e47824f7d402f19e9
X-Osstest-Versions-That:
    xtf=d1b8b7c312d2cf0e501ed43e88e45bba2c6986b5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 03:24:25 +0000

flight 175715 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175715/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  c0f454c68329301447fd258e47824f7d402f19e9
baseline version:
 xtf                  d1b8b7c312d2cf0e501ed43e88e45bba2c6986b5

Last test of basis   175508  2022-12-27 20:12:16 Z   14 days
Testing same since   175715  2023-01-11 01:41:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   d1b8b7c..c0f454c  c0f454c68329301447fd258e47824f7d402f19e9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 04:29:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 04:29:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475060.736560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFSk6-0003nD-Ha; Wed, 11 Jan 2023 04:29:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475060.736560; Wed, 11 Jan 2023 04:29:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFSk6-0003n6-EH; Wed, 11 Jan 2023 04:29:38 +0000
Received: by outflank-mailman (input) for mailman id 475060;
 Wed, 11 Jan 2023 04:29:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFSk5-0003mw-Se; Wed, 11 Jan 2023 04:29:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFSk5-0002ig-On; Wed, 11 Jan 2023 04:29:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFSk5-0002Lg-CU; Wed, 11 Jan 2023 04:29:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFSk5-0001TY-Bo; Wed, 11 Jan 2023 04:29:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3uOPmtWBUBTw35P2hc/+UUHYSMfCoiT/VICta7/Rfq0=; b=p1p5JkDAGtectHCy9g7WU0GmWK
	7MKcgKZHFtK7S1e6t3WyPx6rumAuh9LFXok16gZPyVtGWRCSvVTTRIxhidl9E5N8gEZbBDEXzCmCg
	pFrdW2/xNYx799u/SDTxeg6LAbE8Tg+EiTqYzg/ezvQVKG02cDvplztI1WjL3SvLhX4w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175716-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175716: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 04:29:37 +0000

flight 175716 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175716/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    3 days
Failing since        175627  2023-01-08 14:40:14 Z    2 days   13 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 06:37:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 06:37:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475069.736571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFUjE-0000GG-VF; Wed, 11 Jan 2023 06:36:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475069.736571; Wed, 11 Jan 2023 06:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFUjE-0000G4-R8; Wed, 11 Jan 2023 06:36:52 +0000
Received: by outflank-mailman (input) for mailman id 475069;
 Wed, 11 Jan 2023 06:36:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zIvS=5I=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFUjD-0000Fy-Lx
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 06:36:51 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5375e9bb-917a-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 07:36:49 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 38539767AF;
 Wed, 11 Jan 2023 06:36:48 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1446313591;
 Wed, 11 Jan 2023 06:36:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id uQRkAwBZvmPpDwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 11 Jan 2023 06:36:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5375e9bb-917a-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673419008; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JaBNeXv3JHHEZzeIDo3MVWFDAOvKZX193fiyQ4pPOJ4=;
	b=M4sHfpFQ+vLCruT3vRJ0x+AvFGg9BMQ713MwvhZ0a5lnXP0fEpLpQ6hRd90RzayXroq6MM
	ArkdSY5iCLNgTJKrmIM3DB+NBebmICWExJTTKs5ToLmL5kRyHwmLrHYXJEEURPj4fZZsd1
	X8A9Z8yoDxVET4JaYNd2/HfLt+rwRVI=
Message-ID: <27c0f7bd-b548-17f4-d675-7143e218fd65@suse.com>
Date: Wed, 11 Jan 2023 07:36:47 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 04/19] tools/xenstore: remove all watches when a domain
 has stopped
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-5-jgross@suse.com>
 <831c0e75-1a23-6210-9f5b-7212a6763dc3@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <831c0e75-1a23-6210-9f5b-7212a6763dc3@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------543lBimyMSr6wfImjmhUjh0V"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------543lBimyMSr6wfImjmhUjh0V
Content-Type: multipart/mixed; boundary="------------QnXaG4DvJ01tw3PmipuYzTR0";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <27c0f7bd-b548-17f4-d675-7143e218fd65@suse.com>
Subject: Re: [PATCH v2 04/19] tools/xenstore: remove all watches when a domain
 has stopped
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-5-jgross@suse.com>
 <831c0e75-1a23-6210-9f5b-7212a6763dc3@xen.org>
In-Reply-To: <831c0e75-1a23-6210-9f5b-7212a6763dc3@xen.org>

--------------QnXaG4DvJ01tw3PmipuYzTR0
Content-Type: multipart/mixed; boundary="------------XdaIJZ0eb0W0NGn75bMqIkpx"

--------------XdaIJZ0eb0W0NGn75bMqIkpx
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.12.22 20:01, Julien Grall wrote:
> Hi Juergen,
> 
> On 13/12/2022 16:00, Juergen Gross wrote:
>> When a domain has been released by Xen tools, remove all its
>> registered watches. This avoids sending watch events to the dead domain
>> when all the nodes related to it are being removed by the Xen tools.
> 
> AFAICT, the only user of the command in the tree is softreset. Would you be able
> to check this is still working as expected?

Seems to work fine.


Juergen
--------------XdaIJZ0eb0W0NGn75bMqIkpx
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------XdaIJZ0eb0W0NGn75bMqIkpx--

--------------QnXaG4DvJ01tw3PmipuYzTR0--

--------------543lBimyMSr6wfImjmhUjh0V
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO+WP8FAwAAAAAACgkQsN6d1ii/Ey+o
PQf+NNNGZzTLYRWZ6eDnIi+WZa7cfKJlPaRIpUXp69u6zH8fvwAKy4rVKyRRDaBXqK8+UWFQbBXe
pdJNXUGFtBrLumABe7IBdW5bx6qH0qT8azuL5TAXxEYNnVwaEvI8KZNaxgq5Y81p2LmVsNfXyBHT
hFqRDJt/pYj2eiBuJc6dRx6JFKyDBE9SPm2VvKG9KSS8d9qnfMEQ7/S/Tpe7g7597VlEv5bKjaPk
sg8g/G0X9RSoxsr6/RKW6IqrgTDHaToFd1mZa+8R+nBVo4elAIUgMagHdKu/NhjS60MrZBivOg5T
uNANbh1PYyUzdXJr8CAdcUUvb4zuN0uN6oeB+IuhjA==
=lgfh
-----END PGP SIGNATURE-----

--------------543lBimyMSr6wfImjmhUjh0V--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 06:41:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 06:41:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475077.736582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFUnd-0001h8-Jx; Wed, 11 Jan 2023 06:41:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475077.736582; Wed, 11 Jan 2023 06:41:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFUnd-0001h1-Gj; Wed, 11 Jan 2023 06:41:25 +0000
Received: by outflank-mailman (input) for mailman id 475077;
 Wed, 11 Jan 2023 06:41:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zIvS=5I=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFUnb-0001gv-SU
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 06:41:23 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f621b1ea-917a-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 07:41:22 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 5D52F767FD;
 Wed, 11 Jan 2023 06:41:21 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3CF0413591;
 Wed, 11 Jan 2023 06:41:21 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 9TdUDRFavmOtEQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 11 Jan 2023 06:41:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f621b1ea-917a-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673419281; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=+vW4S4y4+XfNKdBHLKvlFz/sa6uAIQgJqHzcbiqqzL0=;
	b=iB2JM4997ocQyGAojts/fvPXeOPTAFwDW43/uqWUvk4DGPQLIoVHXMyOMLoqRFjIBp66h/
	93M5g9g8ijuYGoPoALWGDXG/LhyNcuswtPNsSegxfLMmeF9/auv4SuPCz+2I2eAclZWSz1
	kcOGsO1Oh701vpx8iHqQVbvjWeU98eU=
Message-ID: <db863117-a383-4373-d43d-7072bdf57a96@suse.com>
Date: Wed, 11 Jan 2023 07:41:20 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 07/19] tools/xenstore: introduce dummy nodes for
 special watch paths
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-8-jgross@suse.com>
 <ef0ed925-5c07-a5c2-c7e6-f5a8ad21d480@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <ef0ed925-5c07-a5c2-c7e6-f5a8ad21d480@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------PKamcPk8MnVCPoaJSA6OwDn2"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------PKamcPk8MnVCPoaJSA6OwDn2
Content-Type: multipart/mixed; boundary="------------7q8kZOtFBX4uJ9pMUoKvfi7t";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <db863117-a383-4373-d43d-7072bdf57a96@suse.com>
Subject: Re: [PATCH v2 07/19] tools/xenstore: introduce dummy nodes for
 special watch paths
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-8-jgross@suse.com>
 <ef0ed925-5c07-a5c2-c7e6-f5a8ad21d480@xen.org>
In-Reply-To: <ef0ed925-5c07-a5c2-c7e6-f5a8ad21d480@xen.org>

--------------7q8kZOtFBX4uJ9pMUoKvfi7t
Content-Type: multipart/mixed; boundary="------------ITGsB4urG0ZsP17XW4b2tN4Y"

--------------ITGsB4urG0ZsP17XW4b2tN4Y
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.12.22 20:39, Julien Grall wrote:
> Hi Juergen,
> 
> On 13/12/2022 16:00, Juergen Gross wrote:
>> Instead of special casing the permission handling and watch event
>> firing for the special watch paths "@introduceDomain" and
>> "@releaseDomain", use static dummy nodes added to the data base when
>> starting Xenstore.
>>
>> The node accounting needs to reflect that change by adding the special
>> nodes in the domain_entry_fix() call in setup_structure().
>>
>> Note that this requires to rework the calls of fire_watches() for the
>> special events in order to avoid leaking memory.
>>
>> Move the check for a valid node name from get_node() to
>> get_node_canonicalized(), as it allows to use get_node() for the
>> special nodes, too.
>>
>> In order to avoid read and write accesses to the special nodes use a
>> special variant for obtaining the current node data for the permission
>> handling.
>>
>> This allows to simplify quite some code. In future sub-nodes of the
>> special nodes will be possible due to this change, allowing more fine
>> grained permission control of special events for specific domains.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - add get_spec_node()
>> - expand commit message (Julien Grall)
>> ---
>>   tools/xenstore/xenstored_control.c |   3 -
>>   tools/xenstore/xenstored_core.c    |  92 +++++++++-------
>>   tools/xenstore/xenstored_domain.c  | 162 ++++-------------------------
>>   tools/xenstore/xenstored_domain.h  |   6 --
>>   tools/xenstore/xenstored_watch.c   |  17 +--
>>   5 files changed, 80 insertions(+), 200 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_control.c
>> b/tools/xenstore/xenstored_control.c
>> index d1aaa00bf4..41e6992591 100644
>> --- a/tools/xenstore/xenstored_control.c
>> +++ b/tools/xenstore/xenstored_control.c
>> @@ -676,9 +676,6 @@ static const char *lu_dump_state(const void *ctx, struct
>> connection *conn)
>>       if (ret)
>>           goto out;
>>       ret = dump_state_connections(fp);
>> -    if (ret)
>> -        goto out;
>> -    ret = dump_state_special_nodes(fp);
>>       if (ret)
>>           goto out;
>>       ret = dump_state_nodes(fp, ctx);
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index 1650821922..f96714e1b8 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -616,7 +616,8 @@ static void get_acc_data(TDB_DATA *key, struct
>> node_account_data *acc)
>>   static unsigned int get_acc_domid(struct connection *conn, TDB_DATA *key,
>>                      unsigned int domid)
>>   {
>> -    return (!conn || key->dptr[0] == '/') ? domid : conn->id;
>> +    return (!conn || key->dptr[0] == '/' || key->dptr[0] == '@')
>
> The comment on top of get_acc_domid() needs to be updated.

Oh, yes.

>
>> +          ? domid : conn->id;
>>   }
>
> [...]
>
>> +static void fire_special_watches(const char *name)
>> +{
>> +    void *ctx = talloc_new(NULL);
>> +    struct node *node;
>> +
>> +    if (!ctx)
>> +        return;
>> +
>> +    node = read_node(NULL, ctx, name);
>> +
>> +    if (node)
>> +        fire_watches(NULL, ctx, name, node, true, NULL);
>
> NIT: I would consider to log an error (maybe only once) if 'node' is NULL. The
> purpose is to help debugging Xenstored.

I think we can log it always, as this is really a bad problem.


Juergen

--------------ITGsB4urG0ZsP17XW4b2tN4Y
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------ITGsB4urG0ZsP17XW4b2tN4Y--

--------------7q8kZOtFBX4uJ9pMUoKvfi7t--

--------------PKamcPk8MnVCPoaJSA6OwDn2
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO+WhAFAwAAAAAACgkQsN6d1ii/Ey/H
HAgAhOvLTDg5w53w2vG33Eum8ZngSadlt2L2Rjfn1Ylv7h9REeNHJQO39LYrXz6y8PcQ2pGcFVlY
h8giNYfid8dn2fxCtrNFX5yRcstPKc/vIxkB/v2KPBIGecuzk6/ZJlY+k6o/coQIl8H30M3+JZV8
0/TL7lcjGHyhH7LDWdU3q5MDXMaEWNrmAmv1uXzAlgSpto24bv/C0gnmHRT3ODUjGFO5flM9v0PF
eJ6R29srn8o+yZyRAmSFV373SnPNEe9iXAdbXvtG457XFDFYZRt/FaBDUrxTpJD7BATf+An0jaJt
UlQvAIHDNP8YS9SuN0ZO5Sc2lBO39Q+jHDfG/7Qtrg==
=idBP
-----END PGP SIGNATURE-----

--------------PKamcPk8MnVCPoaJSA6OwDn2--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 06:48:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 06:48:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475083.736593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFUuY-0002NC-9f; Wed, 11 Jan 2023 06:48:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475083.736593; Wed, 11 Jan 2023 06:48:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFUuY-0002N5-71; Wed, 11 Jan 2023 06:48:34 +0000
Received: by outflank-mailman (input) for mailman id 475083;
 Wed, 11 Jan 2023 06:48:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zIvS=5I=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFUuW-0002Mz-Pn
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 06:48:32 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f58832ca-917b-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 07:48:30 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 286BD6C4B3;
 Wed, 11 Jan 2023 06:48:30 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 04C7C13591;
 Wed, 11 Jan 2023 06:48:29 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id h0bgOr1bvmN6FAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 11 Jan 2023 06:48:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f58832ca-917b-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673419710; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=h/xsLmYu8PUeVBs7m9IadSmfo18cmPayuL3NLFVkB3A=;
	b=kVuiqtsle7luVDg0xLgVPsW+6c0N/l/5nO6yrFYmxt03hcO3QUfwz93XdyQ9QlIaZwS09d
	f9D9TRKlSkvU+i/1D94iJBulKdM9uuqGiFo9KPZIvcauM015+5IQL/YFj0z8qY2x25esVe
	6a5nBGkhiQFoFw4jIRos64DYmF4vlzE=
Message-ID: <163d32e1-7deb-e177-a334-c1dd27f3e2c5@suse.com>
Date: Wed, 11 Jan 2023 07:48:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 08/19] tools/xenstore: replace watch->relative_path
 with a prefix length
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-9-jgross@suse.com>
 <f8ae2b41-50e2-1c36-26e0-cdc0d54bd45d@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <f8ae2b41-50e2-1c36-26e0-cdc0d54bd45d@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------MncdTU0DTfpFocwTVofiMOMe"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------MncdTU0DTfpFocwTVofiMOMe
Content-Type: multipart/mixed; boundary="------------YWS0fT3dUpV2wVLegTZwDaFU";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <163d32e1-7deb-e177-a334-c1dd27f3e2c5@suse.com>
Subject: Re: [PATCH v2 08/19] tools/xenstore: replace watch->relative_path
 with a prefix length
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-9-jgross@suse.com>
 <f8ae2b41-50e2-1c36-26e0-cdc0d54bd45d@xen.org>
In-Reply-To: <f8ae2b41-50e2-1c36-26e0-cdc0d54bd45d@xen.org>

--------------YWS0fT3dUpV2wVLegTZwDaFU
Content-Type: multipart/mixed; boundary="------------NxgSjV8HyGqqKi4Gpuo4LrEL"

--------------NxgSjV8HyGqqKi4Gpuo4LrEL
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.12.22 20:42, Julien Grall wrote:
> Hi Juergen,
>
> On 13/12/2022 16:00, Juergen Gross wrote:
>> @@ -313,19 +302,19 @@ const char *dump_state_watches(FILE *fp, struct
>> connection *conn,
>>                                     unsigned int conn_id)
>>   {
>>       const char *ret = NULL;
>> +    const char *watch_path;
>>       struct watch *watch;
>>       struct xs_state_watch sw;
>>       struct xs_state_record_header head;
>> -    const char *path;
>>       head.type = XS_STATE_TYPE_WATCH;
>>       list_for_each_entry(watch, &conn->watches, list) {
>>           head.length = sizeof(sw);
>> +        watch_path = get_watch_path(watch, watch->node);
>
> It is not clear to me why you call get_watch_path() earlier and also rename the
> variable.
>
> I don't mind the new name, but it doesn't feel like it belongs to this patch as
> the code in dump_state_watches() would not be changed otherwise.

Both changes are the result of V1 of the series. Will undo them.


Juergen
--------------NxgSjV8HyGqqKi4Gpuo4LrEL--

--------------YWS0fT3dUpV2wVLegTZwDaFU--

--------------MncdTU0DTfpFocwTVofiMOMe
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO+W70FAwAAAAAACgkQsN6d1ii/Ey9z
2gf8Duyiba2HuQ1nQt/75qeCr90HHRt2R1NeDKWbNdHndactjt2JyEYlpAST5Er++swyzd4f8iuh
7ED/uOjRRdPXCuyUtqBdSdU15+xVlfUemXypyvkeVC6JcUvHmlXUS1h1zrquuQKoci14OE/C0EiN
VddJvJi1k6UfJzpTkZ4WYxQWlDv0M0G7Iw6PC3S7Ih7H3jRM08ZYb0nZYhm5pvGgdSFsXZt3HrWn
uvPyunUR9jMQ8q/UYQJ65F6pjtE8JeVo12kExBXb0WZ1rqDOsGeQEq3akSV25EEo8qAX0eEYKuov
De59w9lMRC4Ga8XyB+u+MTZOqGYGBmZQ4m3b9ilY6g==
=EvL5
-----END PGP SIGNATURE-----

--------------MncdTU0DTfpFocwTVofiMOMe--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 07:10:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 07:10:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475090.736604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFVFd-0005jj-53; Wed, 11 Jan 2023 07:10:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475090.736604; Wed, 11 Jan 2023 07:10:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFVFd-0005jc-2L; Wed, 11 Jan 2023 07:10:21 +0000
Received: by outflank-mailman (input) for mailman id 475090;
 Wed, 11 Jan 2023 07:10:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RBaj=5I=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pFVFb-0005jW-ME
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 07:10:19 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 004f5cb0-917f-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 08:10:17 +0100 (CET)
Received: by mail-wr1-x431.google.com with SMTP id d17so14095027wrs.2
 for <xen-devel@lists.xenproject.org>; Tue, 10 Jan 2023 23:10:17 -0800 (PST)
Received: from [192.168.30.216] ([81.0.6.76]) by smtp.gmail.com with ESMTPSA id
 c10-20020a056000104a00b002238ea5750csm15291197wrx.72.2023.01.10.23.10.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 10 Jan 2023 23:10:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 004f5cb0-917f-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=JLMrfUKmlGGDmWYHnk7ZCXAGeJsLRVrjdj3i8XFCUkM=;
        b=QsnvU3QjCsGrUrsWmZMal1dSye7ALlPTlWdR8roEyrDSpSTxK4uCYZRp8s0YPW7Pi+
         iNCH8BzYk3d67MKeCG1vRxV7zdeIFTl3dhKrDxswgfX1ppqfAPl/q5ksc0vXLhXpgmZ5
         gyZ14dlu5esz4Yna9dWY4Bupdu0Ra7GI6TtXdqPYjQetM3qHuHJPdCy9pMjWSYJ1aXu3
         RJEHUS+8Y+cJu+27SNrPJ853dDlN/4lYwMCZsHFppzlmfeFZ42cCme2GHrtMhJ1J6ZKL
         GrjNaiXAgexd1SaOpVMGq7zBPoqHafa7W63RaApTWq0/QMvDX4+KE5cgoHgXVFcvY9ko
         arRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=JLMrfUKmlGGDmWYHnk7ZCXAGeJsLRVrjdj3i8XFCUkM=;
        b=LU15ll/uVm9UhFnvkeWLXrr9ae+ifQ5mSXPn49PxmhkIoGzG79a1TTjMWORVGliLE+
         VJ/qkKn5nhjLq5VVo1AGr/uPMFMtANBlDD27MLvcrY9kHFgabHpdo4IPJwl7rLVeEFLr
         lcDBtcVaxP59ACE2Rbhw+KpC0YHhW3qoLZ28Ke2aSipvfRkBLZNIvmcinjw/ahjC3u0N
         lrcIJw1/5SgdlsClddpW0W/xf0k3d8tgv5eWHL5klj+RUo38mL8dJcTPaVOHzf7nDDae
         RBCNFdcXNzJUxLx8SNbNCqCvn0aBAv4RJmdzxSHbV5krbv9WJ0XK+pMq3zqN6mphtwmt
         +MDQ==
X-Gm-Message-State: AFqh2ko1j8Fr0STTI8CibvwQ3JNt+GjkyEoJ+Swp5K1ZHebTlDFlNfHB
	Nr9ORnPntetcTZnV9oY8HSWoIQ==
X-Google-Smtp-Source: AMrXdXsGPsc4OJU/t1NQdJK4GmcIPv1Y7o5mpehNbv/fNNrQ3c/gKOMp6+JozXH+1a+SCVhmx/wolA==
X-Received: by 2002:adf:f107:0:b0:284:5050:5e59 with SMTP id r7-20020adff107000000b0028450505e59mr33484997wro.29.1673421016818;
        Tue, 10 Jan 2023 23:10:16 -0800 (PST)
Message-ID: <92cd2724-6a14-deb1-923c-dad28de5e8c6@linaro.org>
Date: Wed, 11 Jan 2023 08:10:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] bulk: Rename TARGET_FMT_plx -> HWADDR_FMT_plx
Content-Language: en-US
To: BALATON Zoltan <balaton@eik.bme.hu>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
 qemu-block@nongnu.org, qemu-arm@nongnu.org, qemu-ppc@nongnu.org,
 Richard Henderson <richard.henderson@linaro.org>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>, ale@rev.ng,
 qemu-riscv@nongnu.org, xen-devel@lists.xenproject.org,
 Thomas Huth <thuth@redhat.com>
References: <20230110212947.34557-1-philmd@linaro.org>
 <d4ea8bf5-a669-eb33-6dd2-f37417dab1c7@eik.bme.hu>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <d4ea8bf5-a669-eb33-6dd2-f37417dab1c7@eik.bme.hu>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 10/1/23 23:01, BALATON Zoltan wrote:
> On Tue, 10 Jan 2023, Philippe Mathieu-Daudé wrote:
>> The 'hwaddr' type is defined in "exec/hwaddr.h" as:
>>
>>    hwaddr is the type of a physical address
>>   (its size can be different from 'target_ulong').
>>
>> All definitions use the 'HWADDR_' prefix, except TARGET_FMT_plx:
>>
>> $ fgrep define include/exec/hwaddr.h
>> #define HWADDR_H
>> #define HWADDR_BITS 64
>> #define HWADDR_MAX UINT64_MAX
>> #define TARGET_FMT_plx "%016" PRIx64
>>         ^^^^^^
>> #define HWADDR_PRId PRId64
>> #define HWADDR_PRIi PRIi64
>> #define HWADDR_PRIo PRIo64
>> #define HWADDR_PRIu PRIu64
>> #define HWADDR_PRIx PRIx64
> 
> Why are there both TARGET_FMT_plx and HWADDR_PRIx? Why not just use 
> HWADDR_PRIx instead?

Too lazy to specify the 0-digit alignment format I presume?


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 07:38:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 07:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475096.736614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFVgE-0008Ew-9n; Wed, 11 Jan 2023 07:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475096.736614; Wed, 11 Jan 2023 07:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFVgE-0008Ep-6f; Wed, 11 Jan 2023 07:37:50 +0000
Received: by outflank-mailman (input) for mailman id 475096;
 Wed, 11 Jan 2023 07:37:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFVgC-0008Ej-KO
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 07:37:48 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2087.outbound.protection.outlook.com [40.107.6.87])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d763f9af-9182-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 08:37:46 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9300.eurprd04.prod.outlook.com (2603:10a6:10:357::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 07:37:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 07:37:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d763f9af-9182-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ylhaq2dbWYtp2PCsAqbi6EZUATIrgXI3NU9ug2iNoHC01ScMdqf+JG+GmH9b2FXwFSYuWLoNHbTodEFHoy2pZNG63ubP8pfC2nGi9g2U7rGAflQ3WCpfMSRWVxxfDANmEI4eFtnyzYOo1YNlZzpCZWiHPlI9Qt2XGaLfPBtQCGu7iGF9frXU0PHjddpF7A0nrsWo9/A8Z8QHMg3D8XVmUlVf/nLP9nGTTth/WMofU4yZgwYF8Ubs6hSPNaHZO93xR/fpsPUWWizTaohzW4/4Uf/jpjpkTfeI4rO4EFQWh7ynr9GW0NNV2uE/Ic+BfPao00OQ6On9ZNJ9FTPvcwUE0Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vhz+97kEQewXrQMY8ev7cqCU+oOfgrtlWTWgYUbkSVk=;
 b=NYMN3PBk+1+retThxrA3rEPbAmqABTWtctjgGAYXMBKHBiICdnAOf9+5iC35svKuAX8bkpTLRt8lOnXibbGqxqIvIlpVQNrW6Gl32IVpGYi7UnCvH5BhKlLaz1lyzqDefHZYHEeIqlwOiYx9lGh1QlXg2eSfvkIdFrc/9LqbJ4MPupRrmITV1vmsY0cel0lXHFVTQhJF7CLqkGHVZk24p4L84V6F52bh3zhgPcNChxqs/4OCdifUnr9C2hoE0YSjRjwOhGhwZu3478ZU/UPNWcU+32iOo7AhBlDq6ADlPvfZ1BuHfAUPKZGsEs5F+a8PohP1ZpNYx7lyYE0+Et27QA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vhz+97kEQewXrQMY8ev7cqCU+oOfgrtlWTWgYUbkSVk=;
 b=rLfq79Wmkm5JJWSU/5KAXxtaOuzo6G8J5qbv8K6JYzCKAc0JfsDiTEGk9xID6UoTUQTzQPe0eTnunEcleE6Rr5ovF0Qtbt2vqyc+zFUOuDWM3C9qErj6zebG2gR7Ja3HRitbRFGigB3GAmybA/OlpojMIvxvk6k+e6qsOXdO793Dxpfpw+qPeQLRul72Y9F8nUWilBnJNkNKGgcyXV2ubX415/4VdL+LZ/3gDOhvxjy5IRtfSArSDtUfT/BcrrwtqzrakxQUqB+wzwjJYkbwhLfkYIt7DQtFIQ6sOtm7DgkFEBuzIpV50AGRwkXdje39lNriCfGSjGwnkRqloOwxcQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <da886d73-efe1-f5b2-9647-cf167741c8eb@suse.com>
Date: Wed, 11 Jan 2023 08:37:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 1/6] xen/riscv: introduce dummy asm/init.h
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <b1585373e39a7cbe023f485aa5a04b093e25ec80.1673362493.git.oleksii.kurochko@gmail.com>
 <891d0830-7fdc-202a-5f12-2364cae5bce5@suse.com>
 <e15997cf6e765cb23b706889b93ee35a90173a8c.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e15997cf6e765cb23b706889b93ee35a90173a8c.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0111.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9300:EE_
X-MS-Office365-Filtering-Correlation-Id: fd8ea0a5-f7d9-4e9d-15df-08daf3a6ba7f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com

On 10.01.2023 20:16, Oleksii wrote:
> Sorry for breaking into the conversation.

That's perfectly fine; no need to be sorry.

> On Tue, 2023-01-10 at 18:02 +0100, Jan Beulich wrote:
>> Arm maintainers,
>>
>> On 10.01.2023 16:17, Oleksii Kurochko wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/init.h
>>> @@ -0,0 +1,12 @@
>>> +#ifndef _XEN_ASM_INIT_H
>>> +#define _XEN_ASM_INIT_H
>>> +
>>> +#endif /* _XEN_ASM_INIT_H */
>>
>> instead of having RISC-V introduce an empty stub matching what x86
>> has,
> Have you had a chance to look at the answer (Re: [PATCH v1 0/8] Basic
> early_printk and smoke test implementation) of Andrew:
> https://lore.kernel.org/xen-devel/299d913c-8095-ad90-ea3b-d46ef74d4fdc@citrix.com/#t
> 
> I agree with his point regarding the usage of __has_include() to not
> produce empty headers stubs for RISCV and for future architectures too.

Sure, but as he said, that requires settling on a new toolchain baseline,
which is something we have failed to reach any conclusion on for a
considerable number of years. Plus, if we could get rid of this (then
optional) arch header altogether, it would imo be even better.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 08:06:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 08:06:05 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175714-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175714: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 08:05:58 +0000

flight 175714 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175714/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175694
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175694
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175694
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175694
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175694
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175694
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175694
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175694
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175694
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175694
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175694
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175694
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4d975798e11579fdf405b348543061129e01b0fb
baseline version:
 xen                  692d04a9ca429ca574d859fa8f43578e03b9f8b3

Last test of basis   175694  2023-01-10 09:33:52 Z    0 days
Testing same since   175714  2023-01-11 01:39:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   692d04a9ca..4d975798e1  4d975798e11579fdf405b348543061129e01b0fb -> master


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 08:44:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 08:44:08 +0000
Date: Wed, 11 Jan 2023 08:43:48 +0000 (UTC)
From: Jason Long <hack3rcon@yahoo.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <437182837.4659385.1673426628212@mail.yahoo.com>
In-Reply-To: <741943497.1243882.1671532529734@mail.yahoo.com>
References: <741943497.1243882.1671532529734.ref@mail.yahoo.com> <741943497.1243882.1671532529734@mail.yahoo.com>
Subject: Re: No package 'pixman-1' found
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Mailer: WebService/1.1.20982 YMailNorrin
Content-Length: 4205

Hello,
I installed the pixman package, but Xen can't find it.
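
For what it's worth, configure looks pixman up through pkg-config, so a
quick way to check is the following (the .pc directory below is only an
example; adjust it for your system):

```shell
# configure finds pixman via pkg-config, so first check whether
# pkg-config itself can see it:
pkg-config --exists pixman-1 && echo "pixman-1 found" || echo "pixman-1 missing"

# If it is missing even though the library is installed, the pixman-1.pc
# file lives outside pkg-config's search path.  Add its directory
# (the path below is only an example):
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:${PKG_CONFIG_PATH}"
pkg-config --modversion pixman-1 || echo "still not visible; install the pixman development package"
```

If pkg-config still cannot see it, the runtime library alone is installed
and the development package (headers plus the .pc file) is what is missing.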




On Tuesday, December 20, 2022 at 02:05:48 PM GMT+3:30, Jason Long <hack3rcon@yahoo.com> wrote:





Hello,
How can I solve this pixman error:


$ sudo ./configure
checking build system type... x86_64-pc-solaris2.11
checking host system type... x86_64-pc-solaris2.11
Will build the following subsystems:
  xen
  tools
  stubdom
  docs
configure: creating ./config.status
config.status: creating config/Toplevel.mk
config.status: creating config/Paths.mk
=== configuring in tools (/home/Hypervisor/xen-4.16.2/tools)
configure: running /bin/sh ./configure --disable-option-checking '--prefix=/usr/local'  --cache-file=/dev/null --srcdir=.
checking build system type... x86_64-pc-solaris2.11
checking host system type... x86_64-pc-solaris2.11
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for special C compiler options needed for large files... no
checking for _FILE_OFFSET_BITS value needed for large files... no
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking whether make sets $(MAKE)... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking for flex... /usr/bin/flex
checking for abi-dumper... no
checking for perl... /usr/bin/perl
checking for awk... /usr/bin/awk
checking for ocamlc... no
checking for ocaml... no
checking for ocamldep... no
checking for ocamlmktop... no
checking for ocamlmklib... no
checking for ocamldoc... no
checking for ocamlbuild... no
checking for ocamlfind... no
checking for gawk... /usr/bin/awk
checking for go... no
checking for checkpolicy... no
checking for bash... /usr/bin/bash
checking for python3... python3
checking for python3... /usr/bin/python3
checking for python3... /usr/bin/python3
checking for python version >= 2.6 ... yes
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/ggrep
checking for egrep... /usr/bin/ggrep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for python3-config... /usr/bin/python3-config
checking Python.h usability... yes
checking Python.h presence... yes
checking for Python.h... yes
checking for PyArg_ParseTuple... yes
checking whether Python setup.py brokenly enables -D_FORTIFY_SOURCE... no
checking for iasl... /usr/sbin/iasl
checking uuid/uuid.h usability... yes
checking uuid/uuid.h presence... yes
checking for uuid/uuid.h... yes
checking for uuid_clear in -luuid... yes
checking uuid.h usability... no
checking uuid.h presence... no
checking for uuid.h... no
checking curses.h usability... yes
checking curses.h presence... yes
checking for curses.h... yes
checking for clear in -lcurses... yes
checking ncurses.h usability... no
checking ncurses.h presence... no
checking for ncurses.h... no
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for glib... yes
checking for pixman... no
configure: error: Package requirements (pixman-1 >= 0.21.8) were not met:

No package 'pixman-1' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables pixman_CFLAGS
and pixman_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
configure: error: ./configure failed for tools



Thank you.
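For reference, the remedy the configure output itself suggests is usually enough here: point pkg-config at the directory holding pixman-1.pc, then re-run ./configure. The path below is only an example; locate the file on your system first (e.g. `find / -name 'pixman-1.pc' 2>/dev/null`):

```shell
# Example path only: substitute the directory that really contains
# pixman-1.pc on your system.
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
echo "PKG_CONFIG_PATH=$PKG_CONFIG_PATH"

# Verify before re-running ./configure:
pkg-config --exists 'pixman-1 >= 0.21.8' && echo "pixman-1 found" || echo "still missing"
```

If pkg-config still reports the package missing, the distribution's pixman development package (the one shipping the .pc file, not just the shared library) is probably not installed.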



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 08:48:07 2023
Message-ID: <06185fcfb8cbd849df4b033efa923b55c054738d.camel@gmail.com>
Subject: Re: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Wed, 11 Jan 2023 10:47:59 +0200
In-Reply-To: <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
	 <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
	 <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Tue, 2023-01-10 at 17:58 +0100, Jan Beulich wrote:
> On 10.01.2023 16:17, Oleksii Kurochko wrote:
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes in V3:
> >     - Nothing changed
> > ---
> > Changes in V2:
> >     - Remove unneeded now types from <asm/types.h>
>
> I guess you went a little too far: Types used in common code, even if you
It seems I didn't understand which of the types are needed.

In "[PATCH v1 2/8] xen/riscv: introduce asm/types.h header file" all the
types were introduced, as most of them are used in <xen/types.h>:
__{u|s}{8|16|32|64}. So it looks like the following types should be present
in <asm/types.h> from the start:
  typedef __signed__ char __s8;
  typedef unsigned char __u8;

  typedef __signed__ short __s16;
  typedef unsigned short __u16;

  typedef __signed__ int __s32;
  typedef unsigned int __u32;

  #if defined(__GNUC__) && !defined(__STRICT_ANSI__)
  #if defined(CONFIG_RISCV_32)
    typedef __signed__ long long __s64;
    typedef unsigned long long __u64;
  #elif defined (CONFIG_RISCV_64)
    typedef __signed__ long __s64;
    typedef unsigned long __u64;
  #endif
  #endif

Then it turns out that there is no sense in:
  typedef signed char s8;
  typedef unsigned char u8;

  typedef signed short s16;
  typedef unsigned short u16;

  typedef signed int s32;
  typedef unsigned int u32;

  typedef signed long long s64;
  typedef unsigned long long u64;
    OR
  typedef signed long s64;
  typedef unsigned long u64;
As I understand it, {u|s}int<N>_t should be used instead of them.

All other types, such as {v,p}addr_t and register_t, and the definitions
PRIvaddr, INVALID_PADDR, PRIpaddr and PRIregister should be present in
<asm/types.h> from the start.

Am I right?
> do not build that yet, will want declaring right away imo. Of course I
> should finally try and get rid of at least some of the being-phased-out
> ones (s8 and s16 look to be relatively low hanging fruit, for example,
> and of these only s16 looks to be used in common code) ...
>
> Jan
~Oleksii


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 08:59:47 2023
Message-ID: <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
Date: Wed, 11 Jan 2023 09:59:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-11-jgross@suse.com>
 <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 10/19] tools/xenstore: change per-domain node
 accounting interface
In-Reply-To: <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------KS7AxDwQIZ4uNKtSJykthUro"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------KS7AxDwQIZ4uNKtSJykthUro
Content-Type: multipart/mixed; boundary="------------ijqjNbtAm927E2rgdiy43jO6";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
Subject: Re: [PATCH v2 10/19] tools/xenstore: change per-domain node
 accounting interface
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-11-jgross@suse.com>
 <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
In-Reply-To: <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>

--------------ijqjNbtAm927E2rgdiy43jO6
Content-Type: multipart/mixed; boundary="------------dZLLH6SYO5JkkPHB2rrxCE6s"

--------------dZLLH6SYO5JkkPHB2rrxCE6s
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.12.22 21:15, Julien Grall wrote:
> Hi,
>
> On 13/12/2022 16:00, Juergen Gross wrote:
>> Rework the interface and the internals of the per-domain node
>> accounting:
>>
>> - rename the functions to domain_nbentry_*() in order to better match
>>    the related counter name
>>
>> - switch from node pointer to domid as interface, as all nodes have the
>>    owner filled in
>
> The downside is now you have may place open-coding "...->perms->p[0].id". IHMO
> this is making the code more complicated. So can you introduce a few wrappers
> that would take a node and then convert to the owner?

Okay.

>
>>
>> - use a common internal function for adding a value to the counter
>>
>> For the transaction case add a helper function to get the list head
>> of the per-transaction changed domains, enabling to eliminate the
>> transaction_entry_*() functions.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   tools/xenstore/xenstored_core.c        |  22 ++---
>>   tools/xenstore/xenstored_domain.c      | 122 +++++++++++++--------------
>>   tools/xenstore/xenstored_domain.h      |  10 +-
>>   tools/xenstore/xenstored_transaction.c |  15 +--
>>   tools/xenstore/xenstored_transaction.h |   7 +-
>>   5 files changed, 72 insertions(+), 104 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index f96714e1b8..61569cecbb 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -1459,7 +1459,7 @@ static void destroy_node_rm(struct connection *conn,
>> struct node *node)
>>   static int destroy_node(struct connection *conn, struct node *node)
>>   {
>>       destroy_node_rm(conn, node);
>> -    domain_entry_dec(conn, node);
>> +    domain_nbentry_dec(conn, node->perms.p[0].id);
>>       /*
>>        * It is not possible to easily revert the changes in a transaction.
>> @@ -1498,7 +1498,7 @@ static struct node *create_node(struct connection *conn,
>> const void *ctx,
>>       for (i = node; i; i = i->parent) {
>>           /* i->parent is set for each new node, so check quota. */
>>           if (i->parent &&
>> -            domain_entry(conn) >= quota_nb_entry_per_domain) {
>> +            domain_nbentry(conn) >= quota_nb_entry_per_domain) {
>>               ret = ENOSPC;
>>               goto err;
>>           }
>> @@ -1509,7 +1509,7 @@ static struct node *create_node(struct connection *conn,
>> const void *ctx,
>>           /* Account for new node */
>>           if (i->parent) {
>> -            if (domain_entry_inc(conn, i)) {
>> +            if (domain_nbentry_inc(conn, i->perms.p[0].id)) {
>>                   destroy_node_rm(conn, i);
>>                   return NULL;
>>               }
>> @@ -1662,7 +1662,7 @@ static int delnode_sub(const void *ctx, struct
>> connection *conn,
>>       watch_exact = strcmp(root, node->name);
>>       fire_watches(conn, ctx, node->name, node, watch_exact, NULL);
>> -    domain_entry_dec(conn, node);
>> +    domain_nbentry_dec(conn, node->perms.p[0].id);
>>       return WALK_TREE_RM_CHILDENTRY;
>>   }
>> @@ -1802,25 +1802,25 @@ static int do_set_perms(const void *ctx, struct
>> connection *conn,
>>           return EPERM;
>>       old_perms = node->perms;
>> -    domain_entry_dec(conn, node);
>> +    domain_nbentry_dec(conn, node->perms.p[0].id);
>>       node->perms = perms;
>> -    if (domain_entry_inc(conn, node)) {
>> +    if (domain_nbentry_inc(conn, node->perms.p[0].id)) {
>>           node->perms = old_perms;
>>           /*
>>            * This should never fail because we had a reference on the
>>            * domain before and Xenstored is single-threaded.
>>            */
>> -        domain_entry_inc(conn, node);
>> +        domain_nbentry_inc(conn, node->perms.p[0].id);
>>           return ENOMEM;
>>       }
>>       if (write_node(conn, node, false)) {
>>           int saved_errno = errno;
>> -        domain_entry_dec(conn, node);
>> +        domain_nbentry_dec(conn, node->perms.p[0].id);
>>           node->perms = old_perms;
>>           /* No failure possible as above. */
>> -        domain_entry_inc(conn, node);
>> +        domain_nbentry_inc(conn, node->perms.p[0].id);
>>           errno = saved_errno;
>>           return errno;
>> @@ -2392,7 +2392,7 @@ void setup_structure(bool live_update)
>>           manual_node("/tool/xenstored", NULL);
>>           manual_node("@releaseDomain", NULL);
>>           manual_node("@introduceDomain", NULL);
>> -        domain_entry_fix(dom0_domid, 5, true);
>> +        domain_nbentry_fix(dom0_domid, 5, true);
>>       }
>>       check_store();
>> @@ -3400,7 +3400,7 @@ void read_state_node(const void *ctx, const void *state)
>>       if (write_node_raw(NULL, &key, node, true))
>>           barf("write node error restoring node");
>> -    if (domain_entry_inc(&conn, node))
>> +    if (domain_nbentry_inc(&conn, node->perms.p[0].id))
>>           barf("node accounting error restoring node");
>>       talloc_free(node);
>> diff --git a/tools/xenstore/xenstored_domain.c
>> b/tools/xenstore/xenstored_domain.c
>> index 3216119e83..40b24056c5 100644
>> --- a/tools/xenstore/xenstored_domain.c
>> +++ b/tools/xenstore/xenstored_domain.c
>> @@ -249,7 +249,7 @@ static int domain_tree_remove_sub(const void *ctx, struct
>> connection *conn,
>>           domain->nbentry--;
>>           node->perms.p[0].id = priv_domid;
>>           node->acc.memory = 0;
>> -        domain_entry_inc(NULL, node);
>> +        domain_nbentry_inc(NULL, priv_domid);
>>           if (write_node_raw(NULL, &key, node, true)) {
>>               /* That's unfortunate. We only can try to continue. */
>>               syslog(LOG_ERR,
>> @@ -559,7 +559,7 @@ int acc_fix_domains(struct list_head *head, bool update)
>>       int cnt;
>>       list_for_each_entry(cd, head, list) {
>> -        cnt = domain_entry_fix(cd->domid, cd->nbentry, update);
>> +        cnt = domain_nbentry_fix(cd->domid, cd->nbentry, update);
>>           if (!update) {
>>               if (cnt >= quota_nb_entry_per_domain)
>>                   return ENOSPC;
>> @@ -604,18 +604,19 @@ static struct changed_domain
>> *acc_get_changed_domain(const void *ctx,
>>       return cd;
>>   }
>> -int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
>> -            unsigned int domid)
>> +static int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
>> +                   unsigned int domid)
>>   {
>>       struct changed_domain *cd;
>>       cd = acc_get_changed_domain(ctx, head, domid);
>>       if (!cd)
>> -        return errno;
>> +        return 0;
>> +    errno = 0;
>>       cd->nbentry += val;
>> -    return 0;
>> +    return cd->nbentry;
>
> You just introduced this helper in the previous patch (i.e. #9). So can you get
> the interface correct from the start? This will make easier to review the series.
>
> I don't mind too much if you add the static here. Although, it would have been
> nice if we avoid changing code just introduced.

Fine with me.

>
>>   }
>>   static void domain_conn_reset(struct domain *domain)
>> @@ -988,30 +989,6 @@ void domain_deinit(void)
>>           xenevtchn_unbind(xce_handle, virq_port);
>>   }
>> -int domain_entry_inc(struct connection *conn, struct node *node)
>> -{
>> -    struct domain *d;
>> -    unsigned int domid;
>> -
>> -    if (!node->perms.p)
>> -        return 0;
>> -
>> -    domid = node->perms.p[0].id;
>> -
>> -    if (conn && conn->transaction) {
>> -        transaction_entry_inc(conn->transaction, domid);
>> -    } else {
>> -        d = (conn && domid == conn->id && conn->domain) ? conn->domain
>> -            : find_or_alloc_existing_domain(domid);
>> -        if (d)
>> -            d->nbentry++;
>> -        else
>> -            return ENOMEM;
>> -    }
>> -
>> -    return 0;
>> -}
>> -
>>   /*
>>    * Check whether a domain was created before or after a specific generation
>>    * count (used for testing whether a node permission is older than a domain).
>> @@ -1079,62 +1056,67 @@ int domain_adjust_node_perms(struct node *node)
>>       return 0;
>>   }
>> -void domain_entry_dec(struct connection *conn, struct node *node)
>> +static int domain_nbentry_add(struct connection *conn, unsigned int domid,
>> +                  int add, bool dom_exists)
>
> The name of the variable suggests that that if it is false then it doesn't
> exists. However, looking at how you use it, it is more a "Can struct domain be
> allocated?". So I would rename it to "dom_alloc_allowed" or similar.

I'll name it "no_dom_alloc".

>
>>   {
>>       struct domain *d;
>> -    unsigned int domid;
>> -
>> -    if (!node->perms.p)
>> -        return;
>> +    struct list_head *head;
>> +    int ret;
>> -    domid = node->perms.p ? node->perms.p[0].id : conn->id;
>> +    if (conn && domid == conn->id && conn->domain)
>> +        d = conn->domain;
>> +    else if (dom_exists) {
>> +        d = find_domain_struct(domid);
>> +        if (!d) {
>> +            errno = ENOENT;
>> +            corrupt(conn, "Missing domain %u\n", domid);
>> +            return -1;
>> +        }
>> +    } else {
>> +        d = find_or_alloc_existing_domain(domid);
>> +        if (!d) {
>> +            errno = ENOMEM;
>> +            return -1;
>> +        }
>> +    }
>>       if (conn && conn->transaction) {
>> -        transaction_entry_dec(conn->transaction, domid);
>> -    } else {
>> -        d = (conn && domid == conn->id && conn->domain) ? conn->domain
>> -            : find_domain_struct(domid);
>> -        if (d) {
>> -            d->nbentry--;
>> -        } else {
>> -            errno = ENOENT;
>> -            corrupt(conn,
>> -                "Node \"%s\" owned by non-existing domain %u\n",
>> -                node->name, domid);
>> +        head = transaction_get_changed_domains(conn->transaction);
>> +        ret = acc_add_dom_nbentry(conn->transaction, head, add, domid);
>> +        if (errno) {
>> +            fail_transaction(conn->transaction);
>> +            return -1;
>>         }
>> +        return d->nbentry + ret;
>
> It is not entirely clear why you are return "d->nbentry + ret" here. If it is ...
>
>>     }
>> +
>> +    d->nbentry += add;
>> +
>> +    return d->nbentry;
>>   }
>> -int domain_entry_fix(unsigned int domid, int num, bool update)
>> +int domain_nbentry_inc(struct connection *conn, unsigned int domid)
>>   {
>> -    struct domain *d;
>> -    int cnt;
>> +    return (domain_nbentry_add(conn, domid, 1, false) < 0) ? errno : 0;
>> +}
>> -    if (update) {
>> -        d = find_domain_struct(domid);
>> -        assert(d);
>> -    } else {
>> -        /*
>> -         * We are called first with update == false in order to catch
>> -         * any error. So do a possible allocation and check for error
>> -         * only in this case, as in the case of update == true nothing
>> -         * can go wrong anymore as the allocation already happened.
>> -         */
>> -        d = find_or_alloc_existing_domain(domid);
>> -        if (!d)
>> -            return -1;
>> -    }
>> +int domain_nbentry_dec(struct connection *conn, unsigned int domid)
>> +{
>> +    return (domain_nbentry_add(conn, domid, -1, true) < 0) ? errno : 0;
>
> ... to make sure domain_nbentry_add() is not returning a negative value. Then it
> would not work.
>
> A good example imagine you have a transaction removing nodes from tree but not
> adding any. Then the "ret" would be negative.
>
> Meanwhile the nodes are also removed outside of the transaction. So the sum of
> "d->nbentry + ret" would be negative resulting to a failure here.

Thanks for catching this.

I think the correct way to handle this is to return max(d->nbentry + ret, 0) in
domain_nbentry_add(). The value might be imprecise, but always >= 0 and never
wrong outside of a transaction collision.

>
> Such change of behavior should pointed in the commit message. But then I am not
> convinced this should be part of this commit which is mainly reworking an
> interfac
ZSAoZS5nLiBubyBmdW5jdGlvbmFsIGNoYW5nZSBpcyBleHBlY3RlZCkuDQoNCg0KSnVlcmdl
bg0KDQo=
--------------dZLLH6SYO5JkkPHB2rrxCE6s
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------dZLLH6SYO5JkkPHB2rrxCE6s--

--------------ijqjNbtAm927E2rgdiy43jO6--

--------------KS7AxDwQIZ4uNKtSJykthUro
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO+enkFAwAAAAAACgkQsN6d1ii/Ey/s
wQf+PJOnejDtgeKL+GLHhCHx4937CxbZtbLEsK4OIDYN7VMAjOWlZ5q/Ws1PtdsOwFaC99aoF2Q6
TXl47ODFdDCUXE54u9aHkYBbTqV3VSfJZw2Lo1nzQjKN0W/mGcVLLTWeLY9y07LTW+/ejyu9kEla
9uDvvi72dm/HI27BY9aZsRw8g43erfVEQcjd/Ff/Zvs83iTj6n6OC4U5pNzxI/W8bwXTxv4Zrp9N
bZndpWR002znMNRoOKObXMJDdQxeaMCxRy9kusUPmY3NhLbTt8RFQtL2CtjeioPts8yvZgtRMKDZ
b8gHBXjuFPvFhHpY3004uX2tv9MjSxdwHY+wC7NDsw==
=z9Mn
-----END PGP SIGNATURE-----

--------------KS7AxDwQIZ4uNKtSJykthUro--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 09:07:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 09:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475137.736677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFX4f-0003E3-Be; Wed, 11 Jan 2023 09:07:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475137.736677; Wed, 11 Jan 2023 09:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFX4f-0003Dw-8z; Wed, 11 Jan 2023 09:07:09 +0000
Received: by outflank-mailman (input) for mailman id 475137;
 Wed, 11 Jan 2023 09:07:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zIvS=5I=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFX4e-0003Dq-B8
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 09:07:08 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 52a5e5d5-918f-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 10:07:07 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 93BED170A7;
 Wed, 11 Jan 2023 09:07:06 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6C5FE13591;
 Wed, 11 Jan 2023 09:07:06 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id iiIDGTp8vmOAWQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 11 Jan 2023 09:07:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52a5e5d5-918f-11ed-91b6-6bf2151ebd3b
Message-ID: <2c0b38d3-b254-6968-1492-35d7b8b475ad@suse.com>
Date: Wed, 11 Jan 2023 10:07:06 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 11/19] tools/xenstore: don't allow creating too many
 nodes in a transaction
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-12-jgross@suse.com>
 <04658e96-ec81-1dba-6829-3a52c69a27bb@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <04658e96-ec81-1dba-6829-3a52c69a27bb@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------6zV1cawd1gKaGdLLS6m0S5vW"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------6zV1cawd1gKaGdLLS6m0S5vW
Content-Type: multipart/mixed; boundary="------------4fNEaKrbQaeyNiC3QP8pEqpn";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <2c0b38d3-b254-6968-1492-35d7b8b475ad@suse.com>
Subject: Re: [PATCH v2 11/19] tools/xenstore: don't allow creating too many
 nodes in a transaction
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-12-jgross@suse.com>
 <04658e96-ec81-1dba-6829-3a52c69a27bb@xen.org>
In-Reply-To: <04658e96-ec81-1dba-6829-3a52c69a27bb@xen.org>

--------------4fNEaKrbQaeyNiC3QP8pEqpn
Content-Type: multipart/mixed; boundary="------------95dPM5R07imH7r3tCk1Hjnk9"

--------------95dPM5R07imH7r3tCk1Hjnk9
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.12.22 21:18, Julien Grall wrote:
> Hi,
> 
> On 13/12/2022 16:00, Juergen Gross wrote:
>> The accounting for the number of nodes of a domain in an active
>> transaction is not working correctly, as it allows creating an arbitrary
>> number of nodes. The transaction will finally fail due to exceeding
>> the number of nodes quota, but before closing the transaction an
>> unprivileged guest could cause Xenstore to use a lot of memory.
> 
> As per the discussion in v1, the commit message needs to be reworded.
> 
> I will look at this patch in more detail once I have reached the 2nd series.

I'll wait with the rewording until then.


Juergen
--------------95dPM5R07imH7r3tCk1Hjnk9--

--------------4fNEaKrbQaeyNiC3QP8pEqpn--

--------------6zV1cawd1gKaGdLLS6m0S5vW
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO+fDoFAwAAAAAACgkQsN6d1ii/Ey85
WQf7BFomOGWEfBpKGyU353SMG/DW8NJcbDP8CgVk3bkGErZApUc6V76Tl0mw1Azl3pi67kWNOno/
gWlCVvok1XNvRkGkKvji4A53P2nZqR8LLK2zUAJgnBITjx36afBr06fpvrCFSMV90JQB3dIkkstj
7QoQhGD5sC2vh3+H/5kc3lEyNwpwU7yBNkLau3WL4kcYJ9wQ3KAsCbWNjeQ1pvTfpKwfrF9sJIMu
TIt5zgo0cqidEMoFR0tU+g0rY84QxcJseQjmhJHu32Xf0CpGSGBIFFgiDVW5vP9GKTsQyEkGKYVz
tbgxUU5nHojxlMGZPqe/ANc6KpnL1aSQ+5ZC8RCNRw==
=HGFe
-----END PGP SIGNATURE-----

--------------6zV1cawd1gKaGdLLS6m0S5vW--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 09:08:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 09:08:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475143.736688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFX5w-0003mu-Lu; Wed, 11 Jan 2023 09:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475143.736688; Wed, 11 Jan 2023 09:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFX5w-0003mn-Iw; Wed, 11 Jan 2023 09:08:28 +0000
Received: by outflank-mailman (input) for mailman id 475143;
 Wed, 11 Jan 2023 09:08:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFX5v-0003mh-PX
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 09:08:27 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2040.outbound.protection.outlook.com [40.107.22.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7fbf4159-918f-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 10:08:23 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6803.eurprd04.prod.outlook.com (2603:10a6:208:187::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Wed, 11 Jan
 2023 09:08:22 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 09:08:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fbf4159-918f-11ed-b8d0-410ff93cb8f0
Message-ID: <1b6ee20d-2f32-ab38-83ec-69c33baf42fd@suse.com>
Date: Wed, 11 Jan 2023 10:08:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
 <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
 <06185fcfb8cbd849df4b033efa923b55c054738d.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <06185fcfb8cbd849df4b033efa923b55c054738d.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 11.01.2023 09:47, Oleksii wrote:
> On Tue, 2023-01-10 at 17:58 +0100, Jan Beulich wrote:
>> On 10.01.2023 16:17, Oleksii Kurochko wrote:
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>> Changes in V3:
>>>     - Nothing changed
>>> ---
>>> Changes in V2:
>>>     - Remove unneeded now types from <asm/types.h>
>>
>> I guess you went a little too far: Types used in common code, even if
>> you
> It looks like I didn't understand which of the types are needed.
> 
> In "[PATCH v1 2/8] xen/riscv: introduce asm/types.h header file" all
> types were introduced as most of them are used in <xen/types.h>:
> __{u|s}{8|16|32|64}. Therefore it looks like the following types in
> <asm/types.h> should be present from the start:
>   typedef __signed__ char __s8;
>   typedef unsigned char __u8;
> 
>   typedef __signed__ short __s16;
>   typedef unsigned short __u16;
> 
>   typedef __signed__ int __s32;
>   typedef unsigned int __u32;
> 
>   #if defined(__GNUC__) && !defined(__STRICT_ANSI__)
>   #if defined(CONFIG_RISCV_32)
>     typedef __signed__ long long __s64;
>     typedef unsigned long long __u64;
>   #elif defined (CONFIG_RISCV_64)
>     typedef __signed__ long __s64;
>     typedef unsigned long __u64;
>   #endif
>   #endif
> 
>  Then it turns out that there is no sense in:
>   typedef signed char s8;
>   typedef unsigned char u8;
> 
>   typedef signed short s16;
>   typedef unsigned short u16;
> 
>   typedef signed int s32;
>   typedef unsigned int u32;
> 
>   typedef signed long long s64;
>   typedef unsigned long long u64;
>     OR
>   typedef signed long s64;
>   typedef unsigned long u64;
> As I understand it, {u|s}int<N>_t should be used instead of them.

Hmm, the situation is worse than I thought (recalled) it was: You're
right, xen/types.h actually uses __{u,s}<N>. So I'm sorry for
misguiding you; we'll need to do more cleanup first for asm/types.h to
become smaller.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 09:27:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 09:27:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475149.736699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXNz-0006AZ-5f; Wed, 11 Jan 2023 09:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475149.736699; Wed, 11 Jan 2023 09:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXNz-0006AS-37; Wed, 11 Jan 2023 09:27:07 +0000
Received: by outflank-mailman (input) for mailman id 475149;
 Wed, 11 Jan 2023 09:27:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zIvS=5I=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFXNx-0006AM-F3
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 09:27:05 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 19ba45e0-9192-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 10:27:00 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 1B9F51715E;
 Wed, 11 Jan 2023 09:27:02 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E60EC1358A;
 Wed, 11 Jan 2023 09:27:01 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id mUbnNuWAvmOTZAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 11 Jan 2023 09:27:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19ba45e0-9192-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673429222; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=sC0uF9eGD69o848eLYrtexwMqzyMTiSrfsq2L3LBc6o=;
	b=OyjvChkMMyD2LS6IjAWh2I3qNttukzBbDcDJAtctZ3PdlMQH/UfvDd/tNRFJGfbYr58Q6j
	Arl3lpWMdFWeXE2pgv8Yc/juJmPMoUsrztRlCwzApQV+ZGeZ1zuN8kdrTRICA5VqD1gNQV
	A3j6wCCa2jPbMIJ0T2lf9qQr/RpCCgk=
Message-ID: <00c146ee-b0d4-55bd-3276-4894b26cd83c@suse.com>
Date: Wed, 11 Jan 2023 10:27:01 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-16-jgross@suse.com>
 <b7cfd35b-97ef-42eb-eceb-7f07cd72268c@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 15/19] tools/xenstore: switch hashtable to use the
 talloc framework
In-Reply-To: <b7cfd35b-97ef-42eb-eceb-7f07cd72268c@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------iFVyoi4bR64KLaI8CtSoZUfV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------iFVyoi4bR64KLaI8CtSoZUfV
Content-Type: multipart/mixed; boundary="------------KWXXITPtJ1KDuGIb5mxLsr06";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <00c146ee-b0d4-55bd-3276-4894b26cd83c@suse.com>
Subject: Re: [PATCH v2 15/19] tools/xenstore: switch hashtable to use the
 talloc framework
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-16-jgross@suse.com>
 <b7cfd35b-97ef-42eb-eceb-7f07cd72268c@xen.org>
In-Reply-To: <b7cfd35b-97ef-42eb-eceb-7f07cd72268c@xen.org>

--------------KWXXITPtJ1KDuGIb5mxLsr06
Content-Type: multipart/mixed; boundary="------------TrpUY8kPg6koMLqXntZ3bvG1"

--------------TrpUY8kPg6koMLqXntZ3bvG1
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.12.22 22:50, Julien Grall wrote:
> Hi Juergen,
> 
> On 13/12/2022 16:00, Juergen Gross wrote:
>> @@ -115,47 +117,32 @@ hashtable_expand(struct hashtable *h)
>>      if (h->primeindex == (prime_table_length - 1)) return 0;
>>      newsize = primes[++(h->primeindex)];
>> -    newtable = (struct entry **)calloc(newsize, sizeof(struct entry*));
>> -    if (NULL != newtable)
>> +    newtable = talloc_realloc(h, h->table, struct entry *, newsize);
>> +    if (!newtable)
>>      {
>> -        /* This algorithm is not 'stable'. ie. it reverses the list
>> -         * when it transfers entries between the tables */
>> -        for (i = 0; i < h->tablelength; i++) {
>> -            while (NULL != (e = h->table[i])) {
>> -                h->table[i] = e->next;
>> -                index = indexFor(newsize,e->h);
>> +        h->primeindex--;
>> +        return 0;
>> +    }
>> +
>> +    h->table = newtable;
>> +    memset(newtable + h->tablelength, 0,
>> +           (newsize - h->tablelength) * sizeof(*newtable));
>> +    for (i = 0; i < h->tablelength; i++) {
> 
> I understand this code is taken from the realloc path. However, isn't this
> algorithm also not stable? If so, then I think we move the comment here.

I'm fine with that, even if I don't see how it would matter. There is no
guarantee regarding the order of entries for a given index.

> 
>> +        for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
>> +            index = indexFor(newsize,e->h);
> 
> Missing space after ",".

Will fix.

> 
>> +            if (index == i)
>> +            {
>> +                pE = &(e->next);
>> +            }
>> +            else
>> +            {
>> +                *pE = e->next;
>>                  e->next = newtable[index];
>>                  newtable[index] = e;
>>              }
>>          }
>> -        free(h->table);
>> -        h->table = newtable;
>> -    }
>> -    /* Plan B: realloc instead */
>> -    else
>> -    {
>> -        newtable = (struct entry **)
>> -                   realloc(h->table, newsize * sizeof(struct entry *));
>> -        if (NULL == newtable) { (h->primeindex)--; return 0; }
>> -        h->table = newtable;
>> -        memset(newtable + h->tablelength, 0,
>> -               (newsize - h->tablelength) * sizeof(*newtable));
>> -        for (i = 0; i < h->tablelength; i++) {
>> -            for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
>> -                index = indexFor(newsize,e->h);
>> -                if (index == i)
>> -                {
>> -                    pE = &(e->next);
>> -                }
>> -                else
>> -                {
>> -                    *pE = e->next;
>> -                    e->next = newtable[index];
>> -                    newtable[index] = e;
>> -                }
>> -            }
>> -        }
>>      }
>> +
>>      h->tablelength = newsize;
>>      h->loadlimit   = (unsigned int)
>>          (((uint64_t)newsize * max_load_factor) / 100);
>> @@ -184,7 +171,7 @@ hashtable_insert(struct hashtable *h, void *k, void *v)
>>           * element may be ok. Next time we insert, we'll try expanding again.*/
>>          hashtable_expand(h);
>>      }
>> -    e = (struct entry *)calloc(1, sizeof(struct entry));
>> +    e = talloc_zero(h, struct entry);
>>      if (NULL == e) { --(h->entrycount); return 0; } /*oom*/
>>      e->h = hash(h,k);
>>      index = indexFor(h->tablelength,e->h);
>> @@ -238,8 +225,8 @@ hashtable_remove(struct hashtable *h, void *k)
>>              h->entrycount--;
>>              v = e->v;
>>              if (h->flags & HASHTABLE_FREE_KEY)
>> -                free(e->k);
>> -            free(e);
>> +                talloc_free(e->k);
>> +            talloc_free(e);
>>              return v;
>>          }
>>          pE = &(e->next);
>> @@ -280,25 +267,20 @@ void
>>  hashtable_destroy(struct hashtable *h)
>>  {
>>      unsigned int i;
>> -    struct entry *e, *f;
>> +    struct entry *e;
>>      struct entry **table = h->table;
>>      for (i = 0; i < h->tablelength; i++)
>>      {
>> -        e = table[i];
>> -        while (NULL != e)
>> +        for (e = table[i]; e; e = e->next)
>>          {
>> -            f = e;
>> -            e = e->next;
>>              if (h->flags & HASHTABLE_FREE_KEY)
>> -                free(f->k);
>> +                talloc_free(e->k);
>>              if (h->flags & HASHTABLE_FREE_VALUE)
>> -                free(f->v);
>> -            free(f);
> 
> AFAIU, the loop is reworked so you let talloc to free each element with the
> parent. Using a while loop is definitely cleaner, but now you will end up to
> have two separate loop for the elements.
> 
> There is a risk that the overall performance of hashtable_destroy() will be
> worse as the data accessed in one loop may not fit in the cache. So you will
> have to reload it on the second loop.
> 
> Therefore, I think it would be better to keep the loop as-is.

What about a completely different approach? I could make the key and value
talloc children of e when _inserting_ the element and the related flag is
set. This would reduce hashtable_destroy to a single talloc_free().


Juergen
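[Editor's note: the in-place rehash the patch hunk above adopts can be sketched outside the xenstore tree. Below is a minimal, self-contained version: plain `memset`/stack allocation stands in for `talloc_realloc`, and `struct entry` and `indexFor` are reduced to the essentials. Names mirror the quoted diff; everything else is illustrative, not the actual xenstore code.]

```c
#include <stdlib.h>
#include <string.h>

/* Stripped-down entry: cached hash value plus chain pointer. */
struct entry {
    unsigned int h;
    struct entry *next;
};

/* Bucket selection as in the quoted code, reduced to a modulo. */
static unsigned int indexFor(unsigned int tablelength, unsigned int hashvalue)
{
    return hashvalue % tablelength;
}

/*
 * Rehash after the table has already been grown from oldsize to newsize
 * slots.  Entries whose bucket index is unchanged stay in place (pE just
 * advances past them); entries that belong elsewhere are unlinked through
 * the pointer-to-pointer pE and pushed onto the head of their new bucket.
 */
static void rehash_in_place(struct entry **table, unsigned int oldsize,
                            unsigned int newsize)
{
    unsigned int i, index;
    struct entry **pE, *e;

    memset(table + oldsize, 0, (newsize - oldsize) * sizeof(*table));
    for (i = 0; i < oldsize; i++) {
        for (pE = &table[i], e = *pE; e != NULL; e = *pE) {
            index = indexFor(newsize, e->h);
            if (index == i) {
                pE = &e->next;          /* stays in this bucket */
            } else {
                *pE = e->next;          /* unlink from old chain */
                e->next = table[index]; /* push onto new bucket */
                table[index] = e;
            }
        }
    }
}
```

As discussed above, the algorithm is not "stable": entries that stay keep their relative order, but moved entries are pushed LIFO onto their new bucket. An entry pushed onto a not-yet-processed bucket is harmless, since its new index recomputes to the same value when that bucket's turn comes.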
--------------TrpUY8kPg6koMLqXntZ3bvG1
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------TrpUY8kPg6koMLqXntZ3bvG1--

--------------KWXXITPtJ1KDuGIb5mxLsr06--

--------------iFVyoi4bR64KLaI8CtSoZUfV
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO+gOUFAwAAAAAACgkQsN6d1ii/Ey+v
uQf+P5XFFJu4NP1Mq53Wvtwy8CyWDF2MCfNCX9ie51bqX/q8N/+KFNfHYdU1r0qybpGvcntcFhhM
tN571xBmd9SDFAmGUJ3m9G8ldVzY39zmXk2MXuDJ4YBw6g4BzmVPuTtUyv5yByVR9I5mo1CYpeN1
O/aCwC2OGZ/K3ThxQVBMchSAnELZgbSBFWnT7sSWEgduoEBIfI68RyqT3QoXYwD/2/9ixdo5JrYM
Pn+rYDjb0rdra8I0osBUSG+s6B9+GNxtrupkXfqfybhXf315jMlpTXVyEkbeGVaziTOQJGvc96r+
Z2ymWKtiapHPL372dgjUxKe4YGQxKskYSNJgXfZ3Bg==
=0rXS
-----END PGP SIGNATURE-----

--------------iFVyoi4bR64KLaI8CtSoZUfV--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 09:49:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 09:49:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475156.736710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXj4-0000DM-2X; Wed, 11 Jan 2023 09:48:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475156.736710; Wed, 11 Jan 2023 09:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXj3-0000DF-W6; Wed, 11 Jan 2023 09:48:53 +0000
Received: by outflank-mailman (input) for mailman id 475156;
 Wed, 11 Jan 2023 09:48:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFXj1-0000D9-Th
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 09:48:52 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2080.outbound.protection.outlook.com [40.107.241.80])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 242e3b18-9195-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 10:48:46 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7412.eurprd04.prod.outlook.com (2603:10a6:20b:1df::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Wed, 11 Jan
 2023 09:48:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 09:48:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 242e3b18-9195-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SF6NnTtLrsJr2SzrT7J8Ho/n+m94qAWfm430rj6MFMIQUc7tSb4DwdqnJ1R1NB+M4CuqIcznr9r9lZvWwsUK3H9ZKC+B/GX1dKg6o0KGMVKUGxMzeVneBe9TKA4567kypnxcAg0edQDfxeas6kiXvHmrvznX9XmkYb/aU9PdEWDX2kjbhSvKjUaK2SP5dABdy/SMKURs0UbhQ5s53K/YZViTZ7yK0xWQk0Dc2ZXQYva3ML1xEjDSyIuT5YY3EibWwcOEvNTY7Wm5qFiQRq91ipnlpb9IgE4x0Dfc9UI5KLo878KqSrJjm698vkYohttzjAEIrCKbepE4aEw6U0Np+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eoZCTYBDdYYKGrLaMH8wVWpss546JEA1sd9sUuWz8iw=;
 b=KiDqhxp581K872JKra2y430/iY5zm86ZMRDQE2WDQlk5LMf6dUWdvzTfNT1Mq5OvY2epGQA6Qu2Tv3FVgPgl01RXbziMaOBjvnppuN+Nnc5lOVaVH3t00hS+sgERaZ5DPdPj3cYPSWbF0RM3xHhgE1aIA3Lf56IJN8Xh3bh7zl5c1KDTPQyrxGvBIh0uHzGlVs/sG+Ye42CmMu6qrFEJwGeCxknI+EaDX6W8uOOjUHN0Fr5WgvAxGwVljU8F7xUUXdioXNTq3DWQOJ1yGR80JOxZ6BUxJggtzEjaIZKMjIKjhw9H/gDB3H31XhxqD+07/w0rgAzBZ7BXLj/umPVlqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eoZCTYBDdYYKGrLaMH8wVWpss546JEA1sd9sUuWz8iw=;
 b=nefqHG47PLsLd6GumTbaxX61c/yS4XwF6iPxBs2RXkRMykjGlkymE5HJYAkv66wKA6iWmSkKL4e015CHlvIH1ueDzYXEPeTJbwLvS+DIniFKfrUFv1Ajflvin0LGerI+yuUSLrggr0jhGwiSBbcrZV7e/HZhMy5FQF5lbdeTeFVlnD4HdqUWX9C6YSEhE7oYKPtUhf0xlRE/L6KiYgOFffN8NkwkGXO3qefrkwAT9JQqkkIMF9Tcjuj/mxooFtg3exIcxiRLr28K+RP/B9j+i2sqXclqcjI1gpN5Qe8n7xbJYsPbvYe0LHl7AL0D+hHDkgcHnjf/7Pzbcje9nG+H9g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <999009a1-bfef-3e02-09cd-865ff462c3d8@suse.com>
Date: Wed, 11 Jan 2023 10:48:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH] tools: Fix build with recent QEMU, use
 "--enable-trace-backends"
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230110151854.50746-1-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110151854.50746-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0162.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7412:EE_
X-MS-Office365-Filtering-Correlation-Id: f4672d97-838b-47fa-f263-08daf3b9082f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f4672d97-838b-47fa-f263-08daf3b9082f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 09:48:46.1174
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: J2EGgGcWWTDAM6KtGLMAXyZD5kBTFWXkbCUWPp34YYWygI6zdDUjafXfTogu6AdbTI66xIIeXkRulq5gPfd4xQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7412

On 10.01.2023 16:18, Anthony PERARD wrote:
> The configure option "--enable-trace-backend" isn't accepted anymore
> and we should use "--enable-trace-backends" instead, which was
> introduced in 2014 and allows multiple backends.
> 
> "--enable-trace-backends" was introduced by:
>     5b808275f3bb ("trace: Multi-backend tracing")
> The backward-compatible option "--enable-trace-backend" was removed by
>     10229ec3b0ff ("configure: remove backwards-compatibility and obsolete options")
> 
> As we already use ./configure options that wouldn't be accepted by
> older versions of QEMU's configure, we will simply use the new spelling
> for the option and avoid trying to detect which spelling to use.
> 
> We already make use of "--firmwarepath=" which was introduced by
>     3d5eecab4a5a ("Add --firmwarepath to configure")
> which already includes the new spelling for "--enable-trace-backends".
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

I've committed this, and I'll queue this for backporting unless you
(or anyone else) tell(s) me otherwise.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 09:52:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 09:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475162.736721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXmb-0001b7-IE; Wed, 11 Jan 2023 09:52:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475162.736721; Wed, 11 Jan 2023 09:52:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXmb-0001b0-Fa; Wed, 11 Jan 2023 09:52:33 +0000
Received: by outflank-mailman (input) for mailman id 475162;
 Wed, 11 Jan 2023 09:52:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFXmZ-0001ao-Ch; Wed, 11 Jan 2023 09:52:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFXmZ-0002tw-8k; Wed, 11 Jan 2023 09:52:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFXmY-0000kT-Sp; Wed, 11 Jan 2023 09:52:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFXmY-0004Iv-SD; Wed, 11 Jan 2023 09:52:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8DHp1X0Zv2wIFfRo6BA3+CmJ9GiFHdoswGNA2MKM0EY=; b=y71UQilISbE6BLhJyxm7IQRJJg
	j83CEZf9ENpH88a9ZV3SM2TcG7zVsHlHc8O4kdK/wQrVMpwEGAl6gePxwdoKEtssFkCPtqm9Oa4r3
	4LjlpEfSzrFxT4ZesBn7hwdH4FVpphFTSn8x2Ub0UuOohlRzTvalKNAkIhqIkMkZVpMw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175719-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175719: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 09:52:30 +0000

flight 175719 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175719/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    3 days
Failing since        175627  2023-01-08 14:40:14 Z    2 days   14 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    1 day     9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 10:00:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 10:00:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475171.736732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXuW-0003DF-G8; Wed, 11 Jan 2023 10:00:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475171.736732; Wed, 11 Jan 2023 10:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXuW-0003D8-DT; Wed, 11 Jan 2023 10:00:44 +0000
Received: by outflank-mailman (input) for mailman id 475171;
 Wed, 11 Jan 2023 10:00:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DUH=5I=casper.srs.infradead.org=BATV+193d99197280eaff698e+7080+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pFXuV-0003D1-Bx
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 10:00:43 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ccd4d554-9196-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 11:00:38 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pFXuT-00417y-Mq; Wed, 11 Jan 2023 10:00:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccd4d554-9196-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=gtUnWvOq0Tv3Rog3AzXlM5oDAQWs+ZNGSYH5rPHcNOw=; b=W/bVxTZCRA/CwxLvzV2a6CwDA2
	VozrtnsiBnXzk6jMDQuM5KtWTO6kWiv4R77u/QA95YIudRUknFdt+TVcA50WJonqoz1FOH76o13L3
	dUfSGsPD+2HCf+p685OHQBq4mpIabi7a5e3KprmmzVu6Y+HFahCvcPsB5EBqghbnDE5TaUPhUcGTJ
	WmFMM5JVaM/5ur5QBTofs19XG8Byz1B73Mvt0dO3qX9C8bbOJTqK80vddwt4Hf6TWMwaWEXzkz5Xc
	7KdSo2f6vxFSLZJ5etFSRnSn28Y3vaXvKD54nixu1SDAuFmDcK3yxUDQwbkjgjJ3UnJd+GHnoktVu
	q/GWZ4dA==;
Message-ID: <7cd31f8e7d5ec4658ab8e945c2b771fd55a0b101.camel@infradead.org>
Subject: Re: [PATCH v7 1/2] KVM: x86/cpuid: generalize
 kvm_update_kvm_cpuid_base() and also capture limit
From: David Woodhouse <dwmw2@infradead.org>
To: Paul Durrant <pdurrant@amazon.com>, x86@kernel.org, kvm@vger.kernel.org,
  xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Sean Christopherson <seanjc@google.com>, Paolo Bonzini
 <pbonzini@redhat.com>,  Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar
 <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>
Date: Wed, 11 Jan 2023 10:00:27 +0000
In-Reply-To: <20230106103600.528-2-pdurrant@amazon.com>
References: <20230106103600.528-1-pdurrant@amazon.com>
	 <20230106103600.528-2-pdurrant@amazon.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-f3qb3SVQBInmnod430iv"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-f3qb3SVQBInmnod430iv
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2023-01-06 at 10:35 +0000, Paul Durrant wrote:
> A subsequent patch will need to acquire the CPUID leaf range for emulated
> Xen so explicitly pass the signature of the hypervisor we're interested in
> to the new function. Also introduce a new kvm_hypervisor_cpuid structure
> so we can neatly store both the base and limit leaf indices.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---


Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>




--=-f3qb3SVQBInmnod430iv--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 10:01:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 10:01:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475172.736744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXuo-0003Xp-P5; Wed, 11 Jan 2023 10:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475172.736744; Wed, 11 Jan 2023 10:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXuo-0003Xg-MN; Wed, 11 Jan 2023 10:01:02 +0000
Received: by outflank-mailman (input) for mailman id 475172;
 Wed, 11 Jan 2023 10:01:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DUH=5I=casper.srs.infradead.org=BATV+193d99197280eaff698e+7080+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pFXun-0003Vu-8y
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 10:01:01 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d9798001-9196-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 11:01:00 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pFXuq-00419Q-AI; Wed, 11 Jan 2023 10:01:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9798001-9196-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=QTPVTKNqATKtQW82I4C8pCyzH/+bIzUEXeJ6Ghw5pok=; b=P3kcRxE7uzo+Y8dmUu/iJrK7hl
	F8osY8n0VaDEt2J7x4qBbijTBIWKJO2362zN+eZ1WVjg+nc0z3pim5m5VGIYfiuLovCRyHZ1bZeed
	b2y4kSw1J/x2MCWfWs03QJPo7bwKwvy2X56W4EDJvNEvfLNperiG6Y3y0vOjSaqxD3qQnXn2FlSmA
	Ej8vbmSY7sDXWvxuD8feT8IYJ4HphQ0M5jfFCUHj5sgXWJp7F1F5NajARqcFpJRlDogNXzJ8GgVaS
	Al2e6qISqxRSGJoECde9WfgDX1Z3ueZRCkIqVaTgF+i/wHGWKihxhXgYzXlB3XxOGZtKfAMr3Mph3
	zGoFDBbg==;
Message-ID: <477edcdff6507e35793c1cb9a97101d01425cff9.camel@infradead.org>
Subject: Re: [PATCH v7 2/2] KVM: x86/xen: update Xen CPUID Leaf 4 (tsc info)
 sub-leaves, if present
From: David Woodhouse <dwmw2@infradead.org>
To: Paul Durrant <pdurrant@amazon.com>, x86@kernel.org, kvm@vger.kernel.org,
  xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Sean Christopherson <seanjc@google.com>, Paolo Bonzini
 <pbonzini@redhat.com>,  Juergen Gross <jgross@suse.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Thomas Gleixner <tglx@linutronix.de>, Ingo
 Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>
Date: Wed, 11 Jan 2023 10:00:50 +0000
In-Reply-To: <20230106103600.528-3-pdurrant@amazon.com>
References: <20230106103600.528-1-pdurrant@amazon.com>
	 <20230106103600.528-3-pdurrant@amazon.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-sPgM3fl2soxIlK/U1gLd"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-sPgM3fl2soxIlK/U1gLd
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2023-01-06 at 10:36 +0000, Paul Durrant wrote:
> The scaling information in subleaf 1 should match the values set by KVM i=
n
> the 'vcpu_info' sub-structure 'time_info' (a.k.a. pvclock_vcpu_time_info)
> which is shared with the guest, but is not directly available to the VMM.
> The offset values are not set since a TSC offset is already applied.
> The TSC frequency should also be set in sub-leaf 2.
>=20
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>

--=-sPgM3fl2soxIlK/U1gLd
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Disposition: attachment; filename="smime.p7s"
Content-Transfer-Encoding: base64

MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwEAAKCCEkQw
ggYQMIID+KADAgECAhBNlCwQ1DvglAnFgS06KwZPMA0GCSqGSIb3DQEBDAUAMIGIMQswCQYDVQQG
EwJVUzETMBEGA1UECBMKTmV3IEplcnNleTEUMBIGA1UEBxMLSmVyc2V5IENpdHkxHjAcBgNVBAoT
FVRoZSBVU0VSVFJVU1QgTmV0d29yazEuMCwGA1UEAxMlVVNFUlRydXN0IFJTQSBDZXJ0aWZpY2F0
aW9uIEF1dGhvcml0eTAeFw0xODExMDIwMDAwMDBaFw0zMDEyMzEyMzU5NTlaMIGWMQswCQYDVQQG
EwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYD
VQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50
aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAyjztlApB/975Rrno1jvm2pK/KxBOqhq8gr2+JhwpKirSzZxQgT9tlC7zl6hn1fXjSo5MqXUf
ItMltrMaXqcESJuK8dtK56NCSrq4iDKaKq9NxOXFmqXX2zN8HHGjQ2b2Xv0v1L5Nk1MQPKA19xeW
QcpGEGFUUd0kN+oHox+L9aV1rjfNiCj3bJk6kJaOPabPi2503nn/ITX5e8WfPnGw4VuZ79Khj1YB
rf24k5Ee1sLTHsLtpiK9OjG4iQRBdq6Z/TlVx/hGAez5h36bBJMxqdHLpdwIUkTqT8se3ed0PewD
ch/8kHPo5fZl5u1B0ecpq/sDN/5sCG52Ds+QU5O5EwIDAQABo4IBZDCCAWAwHwYDVR0jBBgwFoAU
U3m/WqorSs9UgOHYm8Cd8rIDZsswHQYDVR0OBBYEFAnA8vwL2pTbX/4r36iZQs/J4K0AMA4GA1Ud
DwEB/wQEAwIBhjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEF
BQcDBDARBgNVHSAECjAIMAYGBFUdIAAwUAYDVR0fBEkwRzBFoEOgQYY/aHR0cDovL2NybC51c2Vy
dHJ1c3QuY29tL1VTRVJUcnVzdFJTQUNlcnRpZmljYXRpb25BdXRob3JpdHkuY3JsMHYGCCsGAQUF
BwEBBGowaDA/BggrBgEFBQcwAoYzaHR0cDovL2NydC51c2VydHJ1c3QuY29tL1VTRVJUcnVzdFJT
QUFkZFRydXN0Q0EuY3J0MCUGCCsGAQUFBzABhhlodHRwOi8vb2NzcC51c2VydHJ1c3QuY29tMA0G
CSqGSIb3DQEBDAUAA4ICAQBBRHUAqznCFfXejpVtMnFojADdF9d6HBA4kMjjsb0XMZHztuOCtKF+
xswhh2GqkW5JQrM8zVlU+A2VP72Ky2nlRA1GwmIPgou74TZ/XTarHG8zdMSgaDrkVYzz1g3nIVO9
IHk96VwsacIvBF8JfqIs+8aWH2PfSUrNxP6Ys7U0sZYx4rXD6+cqFq/ZW5BUfClN/rhk2ddQXyn7
kkmka2RQb9d90nmNHdgKrwfQ49mQ2hWQNDkJJIXwKjYA6VUR/fZUFeCUisdDe/0ABLTI+jheXUV1
eoYV7lNwNBKpeHdNuO6Aacb533JlfeUHxvBz9OfYWUiXu09sMAviM11Q0DuMZ5760CdO2VnpsXP4
KxaYIhvqPqUMWqRdWyn7crItNkZeroXaecG03i3mM7dkiPaCkgocBg0EBYsbZDZ8bsG3a08LwEsL
1Ygz3SBsyECa0waq4hOf/Z85F2w2ZpXfP+w8q4ifwO90SGZZV+HR/Jh6rEaVPDRF/CEGVqR1hiuQ
OZ1YL5ezMTX0ZSLwrymUE0pwi/KDaiYB15uswgeIAcA6JzPFf9pLkAFFWs1QNyN++niFhsM47qod
x/PL+5jR87myx5uYdBEQkkDc+lKB1Wct6ucXqm2EmsaQ0M95QjTmy+rDWjkDYdw3Ms6mSWE3Bn7i
5ZgtwCLXgAIe5W8mybM2JzCCBhQwggT8oAMCAQICEQDGvhmWZ0DEAx0oURL6O6l+MA0GCSqGSIb3
DQEBCwUAMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYD
VQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNlY3RpZ28g
UlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBMB4XDTIyMDEwNzAw
MDAwMFoXDTI1MDEwNjIzNTk1OVowJDEiMCAGCSqGSIb3DQEJARYTZHdtdzJAaW5mcmFkZWFkLm9y
ZzCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALQ3GpC2bomUqk+91wLYBzDMcCj5C9m6
oZaHwvmIdXftOgTbCJXADo6G9T7BBAebw2JV38EINgKpy/ZHh7htyAkWYVoFsFPrwHounto8xTsy
SSePMiPlmIdQ10BcVSXMUJ3Juu16GlWOnAMJY2oYfEzmE7uT9YgcBqKCo65pTFmOnR/VVbjJk4K2
xE34GC2nAdUQkPFuyaFisicc6HRMOYXPuF0DuwITEKnjxgNjP+qDrh0db7PAjO1D4d5ftfrsf+kd
RR4gKVGSk8Tz2WwvtLAroJM4nXjNPIBJNT4w/FWWc/5qPHJy2U+eITZ5LLE5s45mX2oPFknWqxBo
bQZ8a9dsZ3dSPZBvE9ZrmtFLrVrN4eo1jsXgAp1+p7bkfqd3BgBEmfsYWlBXO8rVXfvPgLs32VdV
NZxb/CDWPqBsiYv0Hv3HPsz07j5b+/cVoWqyHDKzkaVbxfq/7auNVRmPB3v5SWEsH8xi4Bez2V9U
KxfYCnqsjp8RaC2/khxKt0A552Eaxnz/4ly/2C7wkwTQnBmdlFYhAflWKQ03Ufiu8t3iBE3VJbc2
5oMrglj7TRZrmKq3CkbFnX0fyulB+kHimrt6PIWn7kgyl9aelIl6vtbhMA+l0nfrsORMa4kobqQ5
C5rveVgmcIad67EDa+UqEKy/GltUwlSh6xy+TrK1tzDvAgMBAAGjggHMMIIByDAfBgNVHSMEGDAW
gBQJwPL8C9qU21/+K9+omULPyeCtADAdBgNVHQ4EFgQUzMeDMcimo0oz8o1R1Nver3ZVpSkwDgYD
VR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHQYDVR0lBBYwFAYIKwYBBQUHAwQGCCsGAQUFBwMC
MEAGA1UdIAQ5MDcwNQYMKwYBBAGyMQECAQEBMCUwIwYIKwYBBQUHAgEWF2h0dHBzOi8vc2VjdGln
by5jb20vQ1BTMFoGA1UdHwRTMFEwT6BNoEuGSWh0dHA6Ly9jcmwuc2VjdGlnby5jb20vU2VjdGln
b1JTQUNsaWVudEF1dGhlbnRpY2F0aW9uYW5kU2VjdXJlRW1haWxDQS5jcmwwgYoGCCsGAQUFBwEB
BH4wfDBVBggrBgEFBQcwAoZJaHR0cDovL2NydC5zZWN0aWdvLmNvbS9TZWN0aWdvUlNBQ2xpZW50
QXV0aGVudGljYXRpb25hbmRTZWN1cmVFbWFpbENBLmNydDAjBggrBgEFBQcwAYYXaHR0cDovL29j
c3Auc2VjdGlnby5jb20wHgYDVR0RBBcwFYETZHdtdzJAaW5mcmFkZWFkLm9yZzANBgkqhkiG9w0B
AQsFAAOCAQEAyW6MUir5dm495teKqAQjDJwuFCi35h4xgnQvQ/fzPXmtR9t54rpmI2TfyvcKgOXp
qa7BGXNFfh1JsqexVkIqZP9uWB2J+uVMD+XZEs/KYNNX2PvIlSPrzIB4Z2wyIGQpaPLlYflrrVFK
v9CjT2zdqvy2maK7HKOQRt3BiJbVG5lRiwbbygldcALEV9ChWFfgSXvrWDZspnU3Gjw/rMHrGnql
Htlyebp3pf3fSS9kzQ1FVtVIDrL6eqhTwJxe+pXSMMqFiN0whpBtXdyDjzBtQTaZJ7zTT/vlehc/
tDuqZwGHm/YJy883Ll+GP3NvOkgaRGWEuYWJJ6hFCkXYjyR9IzCCBhQwggT8oAMCAQICEQDGvhmW
Z0DEAx0oURL6O6l+MA0GCSqGSIb3DQEBCwUAMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl
YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0
ZWQxPjA8BgNVBAMTNVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJl
IEVtYWlsIENBMB4XDTIyMDEwNzAwMDAwMFoXDTI1MDEwNjIzNTk1OVowJDEiMCAGCSqGSIb3DQEJ
ARYTZHdtdzJAaW5mcmFkZWFkLm9yZzCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALQ3
GpC2bomUqk+91wLYBzDMcCj5C9m6oZaHwvmIdXftOgTbCJXADo6G9T7BBAebw2JV38EINgKpy/ZH
h7htyAkWYVoFsFPrwHounto8xTsySSePMiPlmIdQ10BcVSXMUJ3Juu16GlWOnAMJY2oYfEzmE7uT
9YgcBqKCo65pTFmOnR/VVbjJk4K2xE34GC2nAdUQkPFuyaFisicc6HRMOYXPuF0DuwITEKnjxgNj
P+qDrh0db7PAjO1D4d5ftfrsf+kdRR4gKVGSk8Tz2WwvtLAroJM4nXjNPIBJNT4w/FWWc/5qPHJy
2U+eITZ5LLE5s45mX2oPFknWqxBobQZ8a9dsZ3dSPZBvE9ZrmtFLrVrN4eo1jsXgAp1+p7bkfqd3
BgBEmfsYWlBXO8rVXfvPgLs32VdVNZxb/CDWPqBsiYv0Hv3HPsz07j5b+/cVoWqyHDKzkaVbxfq/
7auNVRmPB3v5SWEsH8xi4Bez2V9UKxfYCnqsjp8RaC2/khxKt0A552Eaxnz/4ly/2C7wkwTQnBmd
lFYhAflWKQ03Ufiu8t3iBE3VJbc25oMrglj7TRZrmKq3CkbFnX0fyulB+kHimrt6PIWn7kgyl9ae
lIl6vtbhMA+l0nfrsORMa4kobqQ5C5rveVgmcIad67EDa+UqEKy/GltUwlSh6xy+TrK1tzDvAgMB
AAGjggHMMIIByDAfBgNVHSMEGDAWgBQJwPL8C9qU21/+K9+omULPyeCtADAdBgNVHQ4EFgQUzMeD
Mcimo0oz8o1R1Nver3ZVpSkwDgYDVR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHQYDVR0lBBYw
FAYIKwYBBQUHAwQGCCsGAQUFBwMCMEAGA1UdIAQ5MDcwNQYMKwYBBAGyMQECAQEBMCUwIwYIKwYB
BQUHAgEWF2h0dHBzOi8vc2VjdGlnby5jb20vQ1BTMFoGA1UdHwRTMFEwT6BNoEuGSWh0dHA6Ly9j
cmwuc2VjdGlnby5jb20vU2VjdGlnb1JTQUNsaWVudEF1dGhlbnRpY2F0aW9uYW5kU2VjdXJlRW1h
aWxDQS5jcmwwgYoGCCsGAQUFBwEBBH4wfDBVBggrBgEFBQcwAoZJaHR0cDovL2NydC5zZWN0aWdv
LmNvbS9TZWN0aWdvUlNBQ2xpZW50QXV0aGVudGljYXRpb25hbmRTZWN1cmVFbWFpbENBLmNydDAj
BggrBgEFBQcwAYYXaHR0cDovL29jc3Auc2VjdGlnby5jb20wHgYDVR0RBBcwFYETZHdtdzJAaW5m
cmFkZWFkLm9yZzANBgkqhkiG9w0BAQsFAAOCAQEAyW6MUir5dm495teKqAQjDJwuFCi35h4xgnQv
Q/fzPXmtR9t54rpmI2TfyvcKgOXpqa7BGXNFfh1JsqexVkIqZP9uWB2J+uVMD+XZEs/KYNNX2PvI
lSPrzIB4Z2wyIGQpaPLlYflrrVFKv9CjT2zdqvy2maK7HKOQRt3BiJbVG5lRiwbbygldcALEV9Ch
WFfgSXvrWDZspnU3Gjw/rMHrGnqlHtlyebp3pf3fSS9kzQ1FVtVIDrL6eqhTwJxe+pXSMMqFiN0w
hpBtXdyDjzBtQTaZJ7zTT/vlehc/tDuqZwGHm/YJy883Ll+GP3NvOkgaRGWEuYWJJ6hFCkXYjyR9
IzGCBMcwggTDAgEBMIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVz
dGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMT
NVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEA
xr4ZlmdAxAMdKFES+jupfjANBglghkgBZQMEAgEFAKCCAeswGAYJKoZIhvcNAQkDMQsGCSqGSIb3
DQEHATAcBgkqhkiG9w0BCQUxDxcNMjMwMTExMTAwMDUwWjAvBgkqhkiG9w0BCQQxIgQgNgV5GBjJ
lHIlCxWgQgdiehi/uQV1nV77Zej1Pn52QJAwgb0GCSsGAQQBgjcQBDGBrzCBrDCBljELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEYMBYG
A1UEChMPU2VjdGlnbyBMaW1pdGVkMT4wPAYDVQQDEzVTZWN0aWdvIFJTQSBDbGllbnQgQXV0aGVu
dGljYXRpb24gYW5kIFNlY3VyZSBFbWFpbCBDQQIRAMa+GZZnQMQDHShREvo7qX4wgb8GCyqGSIb3
DQEJEAILMYGvoIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVy
MRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNl
Y3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEAxr4Z
lmdAxAMdKFES+jupfjANBgkqhkiG9w0BAQEFAASCAgBwxZg89Rk6L/cCUA5Nx4P1NkkYNFssNrSZ
GUADoVY7R9peuMmOR0FteSbXSZqbiqnM11MxDDte2HhwCIJlNBGJtTj/doHrdBMCocWKJ99uBPgJ
4su7WohSzAWhr5oVJHOKIfo11jjX/t7tWJly7PFqcZSa3Jr0/d7RV5qPsueTYmcRBEgcjxO1akNE
ea8JSAISLKoxTUGgKx4KKrdik/1RV/rAi9nt40d4c2ltbVJV3M+ACo4msttBt6G4hF2cDpzjJz6E
of6y2C34eiM+9gJYkWokcrsKaLAW5TZpQjEwY9H2RZWZySZ8CeTWKxaX6vxdZgwLb7FUrTsaOMns
schYhvs9bVnZbv5Kqof9ZsMFTyJOuREWudGeymrQ8eAUGPx9tqiRZaYVq226xVQ1Wg9viCzbVxPF
UIAdkH+Q70e3yDILny9c2lkFMlO6xcCMzZv8DMEMPAuaEV8skUbZ9/53hx4J27ROWbqwJKKFORkf
c2adKpl0R7sZb69h28oyy3ibdUyF+vRTW6xGIiVvab9eV5xxvKGUaDqRrwtDGTyXtj8QgTON0/6j
s9EM9ZsJIfXv/Jm68aCLRSlZnVZZ6MGN7QHVQnHdcqNnzUf1rQybxT26Oe8w83azF1g3ggSHQoUO
q8Zm4PR/bksmIE31o22eRAT8T5EGh2YXTTNaByHpiQAAAAAAAA==


--=-sPgM3fl2soxIlK/U1gLd--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 10:04:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 10:04:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475185.736754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXxj-0004Ns-6K; Wed, 11 Jan 2023 10:04:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475185.736754; Wed, 11 Jan 2023 10:04:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFXxj-0004Nl-3V; Wed, 11 Jan 2023 10:04:03 +0000
Received: by outflank-mailman (input) for mailman id 475185;
 Wed, 11 Jan 2023 10:04:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EETQ=5I=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFXxh-0004Na-LC
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 10:04:01 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2048.outbound.protection.outlook.com [40.107.21.48])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42f6c161-9197-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 11:03:57 +0100 (CET)
Received: from AM7PR04CA0020.eurprd04.prod.outlook.com (2603:10a6:20b:110::30)
 by AM0PR08MB5524.eurprd08.prod.outlook.com (2603:10a6:208:181::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.9; Wed, 11 Jan
 2023 10:03:51 +0000
Received: from AM7EUR03FT040.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:110:cafe::bf) by AM7PR04CA0020.outlook.office365.com
 (2603:10a6:20b:110::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13 via Frontend
 Transport; Wed, 11 Jan 2023 10:03:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT040.mail.protection.outlook.com (100.127.140.128) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Wed, 11 Jan 2023 10:03:49 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Wed, 11 Jan 2023 10:03:49 +0000
Received: from 6cc083741620.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3732E506-27AA-445C-AA94-DE92036D4E13.1; 
 Wed, 11 Jan 2023 10:03:41 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6cc083741620.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 11 Jan 2023 10:03:41 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by GV2PR08MB8653.eurprd08.prod.outlook.com (2603:10a6:150:b9::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 10:03:35 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.018; Wed, 11 Jan 2023
 10:03:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42f6c161-9197-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1FdgGnzcTccqMF0pRyxh6JcyPPsUhfbZpJF0BWhSYwI=;
 b=TIxDaFxn7VrWvIsPdtNXfLZap/p1LryxhoEtS0b0pYQ2sGYoYSGQH7ALEJYdCsx8U0R+m08xGr0LZkkxx7LThfIdn7OVtbDHIrFDsKRCmRM1MKbfbUT1VgDB/aVTRoeIrjaY1VbKgUNF5wxUTSIscGAikQu0a0NmCYd4UgnbkjA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fj43aJZRLaybNAfMNbF/jQaIPxYGVlGrNgqjDJsyQ3Du3tAj+p29X8MH+HqvgxPrR1nKOkBedVNCszrWbcI1V6+LfEpEoxds+l2KuEmR8dBfIEs+pRzr+wRJcX6vc7W303i81weqkPFsrF9zubYR+IwWRfiRb6ofgfO+s9k7JoF7V6a8BFYcutCbvZXQ5tJdPzlkRQrMP5baK3XmGObrYT4NnTSVgrkKYIwmcEV5aRf5opkOXJYAtM2aIOyOCnfSXkxy1Jax+OoNmlT4njWubbuivBf0HU7UvJOBzGfGhKc7O2txv3WRl/Yoza5hIYJktDcv2DEw6eF1hMieO8pwYQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1FdgGnzcTccqMF0pRyxh6JcyPPsUhfbZpJF0BWhSYwI=;
 b=gfmHqUAa46aNnLC+saUoy8i6SQWYTCafw2wqNSia3zdrbDetvj8QmtZniLlBECFJtSAh5qX0JuF3mUyEP3fxQV202wg/+4LAmp5DtNmBD5ZM1xtjMJcg+PjCdOp0QselRYoUF0jnntZmJDrLaAwZOJTkdrcnoTxWIrWOmxyxbcYEKUFXC1sPw6VvFp2v41dZG8qoh+u+gcHdGX8vYGwj2kMK4qIHs50ctIjgSzeqepb+XC7t2riggDkBEYh5XBfJQx4VJY5lamoYZKA8JZcLyVpse0tw6Xq+d/j+xq6BdZWsf38i6DlB8YproVABf91eGJbTUI0x19x2VUtsQB4Rqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1FdgGnzcTccqMF0pRyxh6JcyPPsUhfbZpJF0BWhSYwI=;
 b=TIxDaFxn7VrWvIsPdtNXfLZap/p1LryxhoEtS0b0pYQ2sGYoYSGQH7ALEJYdCsx8U0R+m08xGr0LZkkxx7LThfIdn7OVtbDHIrFDsKRCmRM1MKbfbUT1VgDB/aVTRoeIrjaY1VbKgUNF5wxUTSIscGAikQu0a0NmCYd4UgnbkjA=
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2 01/17] xen/arm: use NR_MEM_BANKS to override default
 NR_NODE_MEMBLKS
Thread-Topic: [PATCH v2 01/17] xen/arm: use NR_MEM_BANKS to override default
 NR_NODE_MEMBLKS
Thread-Index: AQHZJNEWABvs02i6OUO2BkMLi8xulK6XauOAgAGQmoA=
Date: Wed, 11 Jan 2023 10:03:35 +0000
Message-ID:
 <PAXPR08MB742006EF0342233FC0ADC5729EFC9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-2-wei.chen@arm.com>
 <3c664437-7184-d4be-63e2-335942bb6a46@suse.com>
In-Reply-To: <3c664437-7184-d4be-63e2-335942bb6a46@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 07CD7E887D4AC040B11786254DD652AB.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	PAXPR08MB7420:EE_|GV2PR08MB8653:EE_|AM7EUR03FT040:EE_|AM0PR08MB5524:EE_
X-MS-Office365-Filtering-Correlation-Id: fcf160cd-4391-4129-9bb5-08daf3bb22ed
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 lecE4T+77r6OoSinuaugkRy0Rv/nC0FnrUfQh4kyCqKd+qFXXuTWT9RbSugQsmnQMLesMekJqyHHsRAp+ttksMhsRWEAPzA0RyIiff9mrAMlRTC6Cjxp8nNtV1xVeyppWEFBr6rvabnbkmlGFd2tKBntBgIA5BKJbijoKBe6ibTE+c72XLYEtHsXw8YKGHLc3gMSMJW9t7coUQqGnC9VJaw2tAwaUW746lBnxwgw18002HFX6OjOSPERyZK/BG/OXXA9xUVg7oUKYWFSSZ4/ET6fsbuMkxMI4QCyvmi5tcrHPUElYriuqvqSIMpK6D0swEk5kt4F55hRglx9/dxYcvhdSwCT+KEXCPMfXJVg0xOhMEqV7Z0RIJE5UmfnYjbqBujZpxlhsXI505q2KHdo2DdTBREByJswFzWLDFPQ16WaQylOc+VqEcMawfXuOCV+xLulVZ1WKRKxcXmuVKv8QwFggua/swvRDITBS6EBwXcbIoF/hmnJpCRaZ8ImXpgbC4zh+G2BkYhV4FkxjnheVCeY7BzWaLTWHqVkTcQXXVqOiVM/zeIiXJtSCYvAEQndev4u+0lNFFKUpj9rsYkZuIaSZ8tkBbHd8K93GPurWH2iSeVWiRKrX5gBNsuYKXyW9hhE08TuBUnRSPi7K5GxZEr28eDo5MeNfIEh2W3XVS/qu0EEsmluri3/6HgCXxm49M8jtSGmtLL84OlvQyeA7tTaYZGdkEAiLa/oO7npCGA=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(136003)(39860400002)(396003)(346002)(366004)(376002)(451199015)(122000001)(83380400001)(86362001)(38100700002)(2906002)(52536014)(5660300002)(41300700001)(8936002)(53546011)(55016003)(8676002)(186003)(9686003)(478600001)(26005)(6506007)(66476007)(66556008)(66946007)(4326008)(76116006)(64756008)(66446008)(316002)(54906003)(966005)(6916009)(71200400001)(7696005)(38070700005)(66899015)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8653
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2fd47b3b-e22c-453f-8527-08daf3bb1a62
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5VSa6Voml4XuKjfhIJwQY6ynh+GtVYmcIGkZ8mhOdhhtSm1qLPBgiv4mDRCrH/LGqkzFKgilEYjuLEP745NTiNhCpFSwIZ7wkFRv8eZm/Rm83RuTJdnW61n/E845WD8H8iJie01glOqhOnqsAX6K+EQTPKntTODS8MQqzOIkVXIlGQuY4qUi1adoOfXaNI6Hvg9fStWT/NHXIfMdRf8tcN8CMC+HW0N1tFOLG+6G53vg+x2cEUGMV61uKypjdpzbXFI9OK3bSdGvhwMpsoQ4hrXhv1rQGr12t+n5LdSlbdeEMAIXD/6I3z7cIeJOwgGWKLJDqHz7upzyPsMxSf/PmI3GR4RNhvN/0PaSjVVEWgEpLLxOvciJzHfzGFQ63Ydjj15Zx0uLqR54x9t9GALmy45SPm0D6JkPv/AxDAtO13uIvDNXKH5kLKpHFF/QZRvx8/jsH0Dw1chYL2OWmAo15E4kYFAKsF+E3b14Be/BU0aKorqXsUTJPDgj7O5el4V0N7Gg2FBJiy8uwIwSdoBDTldh0CtxnMSJhfXxwjjj64uh03LhW+Njcq+aob9mVDCvTZ8wqCxejD//3GN7UWaxm0frNHdMMWBnYtFVjKGUiZZFWRNRIZWpmtYSESL2egzKxxEuXMFIIYosKcDPqpAubBrRjOhkIXUklATNs+jBY+REkuSwSuzSpJlzQSsh4fnI7f67Lr/fvR+v0NEtVXfi9FCsFjhGbuRNdu+oPVy+scM=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(346002)(376002)(39860400002)(451199015)(36840700001)(40470700004)(46966006)(36860700001)(52536014)(186003)(8936002)(6862004)(82740400003)(26005)(9686003)(478600001)(82310400005)(966005)(2906002)(40460700003)(316002)(54906003)(70586007)(55016003)(40480700001)(5660300002)(336012)(7696005)(70206006)(8676002)(81166007)(33656002)(4326008)(356005)(6506007)(66899015)(41300700001)(53546011)(47076005)(86362001)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 10:03:49.6152
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fcf160cd-4391-4129-9bb5-08daf3bb22ed
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5524

SGkgSmFuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4NCj4gU2VudDogMjAyM+W5tDHmnIgxMOaXpSAxODowMA0K
PiBUbzogV2VpIENoZW4gPFdlaS5DaGVuQGFybS5jb20+DQo+IENjOiBuZCA8bmRAYXJtLmNvbT47
IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz47IEp1bGllbg0KPiBH
cmFsbCA8anVsaWVuQHhlbi5vcmc+OyBCZXJ0cmFuZCBNYXJxdWlzIDxCZXJ0cmFuZC5NYXJxdWlz
QGFybS5jb20+Ow0KPiBWb2xvZHlteXIgQmFiY2h1ayA8Vm9sb2R5bXlyX0JhYmNodWtAZXBhbS5j
b20+OyBBbmRyZXcgQ29vcGVyDQo+IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPjsgR2Vvcmdl
IER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBjaXRyaXguY29tPjsgV2VpDQo+IExpdSA8d2xAeGVuLm9y
Zz47IHhlbi1kZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZw0KPiBTdWJqZWN0OiBSZTogW1BBVENI
IHYyIDAxLzE3XSB4ZW4vYXJtOiB1c2UgTlJfTUVNX0JBTktTIHRvIG92ZXJyaWRlDQo+IGRlZmF1
bHQgTlJfTk9ERV9NRU1CTEtTDQo+IA0KPiBPbiAxMC4wMS4yMDIzIDA5OjQ5LCBXZWkgQ2hlbiB3
cm90ZToNCj4gPiAtLS0gYS94ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vbnVtYS5oDQo+ID4gKysr
IGIveGVuL2FyY2gvYXJtL2luY2x1ZGUvYXNtL251bWEuaA0KPiA+IEBAIC0zLDkgKzMsMjYgQEAN
Cj4gPg0KPiA+ICAjaW5jbHVkZSA8eGVuL21tLmg+DQo+ID4NCj4gPiArI2luY2x1ZGUgPGFzbS9z
ZXR1cC5oPg0KPiA+ICsNCj4gPiAgdHlwZWRlZiB1OCBub2RlaWRfdDsNCj4gPg0KPiA+IC0jaWZu
ZGVmIENPTkZJR19OVU1BDQo+ID4gKyNpZmRlZiBDT05GSUdfTlVNQQ0KPiA+ICsNCj4gPiArLyoN
Cj4gPiArICogSXQgaXMgdmVyeSBsaWtlbHkgdGhhdCBpZiB5b3UgaGF2ZSBtb3JlIHRoYW4gNjQg
bm9kZXMsIHlvdSBtYXkNCj4gPiArICogbmVlZCBhIGxvdCBtb3JlIHRoYW4gMiByZWdpb25zIHBl
ciBub2RlLiBTbywgZm9yIEFybSwgd2Ugd291bGQNCj4gPiArICoganVzdCBkZWZpbmUgTlJfTk9E
RV9NRU1CTEtTIGFzIGFuIGFsaWFzIHRvIE5SX01FTV9CQU5LUy4NCj4gPiArICogQW5kIGluIHRo
ZSBmdXR1cmUgTlJfTUVNX0JBTktTIHdpbGwgYmUgYnVtcGVkIGZvciBuZXcgcGxhdGZvcm1zLA0K
PiA+ICsgKiBidXQgZm9yIG5vdyBsZWF2ZSBOUl9NRU1fQkFOS1MgYXMgaXQgaXMgb24gQXJtLiBU
aGlzIGF2b2lkIHRvDQo+ID4gKyAqIGhhdmUgZGlmZmVyZW50IHdheSB0byBkZWZpbmUgdGhlIHZh
bHVlIGJhc2VkIE5VTUEgdnMgbm9uLU5VTUEuDQo+ID4gKyAqDQo+ID4gKyAqIEZ1cnRoZXIgZGlz
Y3Vzc2lvbnMgY2FuIGJlIGZvdW5kIGhlcmU6DQo+ID4gKyAqIGh0dHBzOi8vbGlzdHMueGVucHJv
amVjdC5vcmcvYXJjaGl2ZXMvaHRtbC94ZW4tZGV2ZWwvMjAyMS0NCj4gMDkvbXNnMDIzMjIuaHRt
bA0KPiA+ICsgKi8NCj4gPiArI2RlZmluZSBOUl9OT0RFX01FTUJMS1MgTlJfTUVNX0JBTktTDQo+
IA0KPiBCdXQgaXNuJ3QgTlJfTUVNX0JBTktTIGEgc3lzdGVtLXdpZGUgc2V0dGluZywgd2hpY2gg
ZG9lc24ndCByZWFsbHkNCj4gbWFrZSBzZW5zZSB0byB1c2UgYXMgYSBwZXItbm9kZSBvbmU/IElP
VyBhcmVuJ3QgeW91IG5vdyBhbGxvd2luZw0KPiBOUl9NRU1fQkFOS1MgcmVnaW9ucyBvbiBlYWNo
IG5vZGUsIHdoaWNoIHRha2VuIHRvZ2V0aGVyIHdpbGwgYmUNCj4gbXVjaCBtb3JlIHRoYW4gTlJf
TUVNX0JBTktTIHRoYXQgeW91IGNhbiBkZWFsIHdpdGggb24gdGhlIHdob2xlDQo+IHN5c3RlbT8N
Cj4gDQoNClRoYW5rcyBvZiBwb2ludGluZyBvdXQgdGhpcy4gWWVzIE5SX01FTV9CQU5LUyBhIHN5
c3RlbS13aWRlIHNldHRpbmcsDQpidXQgd2UganVzdCB1c2UgaXQgdG8gZGVmaW5lIHRoZSBNQVgg
bWVtb3J5IGJhbmtzIGZvciBzaW5nbGUgbm9kZSwNCnRoaXMgZG9lcyBub3QgbWVhbiB0aGF0IHRo
ZXJlIGFyZSByZWFsbHkgc28gbWFueSBiYW5rcyBvbiB0aGVzZSBub2Rlcy4NCldoZW4gYSBzeXN0
ZW0gb25seSBjb250YWlucyBvbmUgbm9kZSBOUl9NRU1fQkFOS1MgZXF1YWxzIHRvDQpOUl9OT0RF
X01FTUJMS1MuIE91ciBpZGVhIGlzIHRoYXQsIGlmIHRoZSB0b3RhbCBtZW1vcnkgYmFua3Mgb2YN
CmFsbCBub2RlcyBleGNlZWQgdGhlIE5SX01FTV9CQU5LUywgd2Ugd2lsbCBidW1wIE5SX01FTV9C
QU5LUy4NCg0KQnV0IEkgYW0gb3BlbiB0byB0aGlzIHF1ZXN0aW9uLCBpZiB0aGVyZSBhcmUgbW9y
ZSBzdWdnZXN0aW9ucyBmcm9tDQptYWludGFpbmVycy4NCg0KQ2hlZXJzLA0KV2VpIENoZW4NCg0K
DQo+IEphbg0K


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 10:22:25 2023
Message-ID: <0398c48c-cc5d-a4fb-354f-54e3c5900d18@gmail.com>
Date: Wed, 11 Jan 2023 12:22:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
To: Jan Beulich <jbeulich@suse.com>, Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
 <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
 <06185fcfb8cbd849df4b033efa923b55c054738d.camel@gmail.com>
 <1b6ee20d-2f32-ab38-83ec-69c33baf42fd@suse.com>
Content-Language: en-US
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <1b6ee20d-2f32-ab38-83ec-69c33baf42fd@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 1/11/23 11:08, Jan Beulich wrote:
> On 11.01.2023 09:47, Oleksii wrote:
>> On Tue, 2023-01-10 at 17:58 +0100, Jan Beulich wrote:
>>> On 10.01.2023 16:17, Oleksii Kurochko wrote:
>>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>> ---
>>>> Changes in V3:
>>>>      - Nothing changed
>>>> ---
>>>> Changes in V2:
>>>>      - Remove unneeded now types from <asm/types.h>
>>>
>>> I guess you went a little too far: Types used in common code, even if
>>> you
>> It looks like I didn't understand which of the types are needed.
>>
>> In "[PATCH v1 2/8] xen/riscv: introduce asm/types.h header file" all
>> types were introduced as most of them are used in <xen/types.h>:
>> __{u|s}{8|16|32|64}. Therefore, it looks like the following types in
>> <asm/types.h> should be present from the start:
>>    typedef __signed__ char __s8;
>>    typedef unsigned char __u8;
>>
>>    typedef __signed__ short __s16;
>>    typedef unsigned short __u16;
>>
>>    typedef __signed__ int __s32;
>>    typedef unsigned int __u32;
>>
>>    #if defined(__GNUC__) && !defined(__STRICT_ANSI__)
>>    #if defined(CONFIG_RISCV_32)
>>      typedef __signed__ long long __s64;
>>      typedef unsigned long long __u64;
>>    #elif defined (CONFIG_RISCV_64)
>>      typedef __signed__ long __s64;
>>      typedef unsigned long __u64;
>>    #endif
>>    #endif
>>
>>   Then it turns out that there is no sense in:
>>    typedef signed char s8;
>>    typedef unsigned char u8;
>>
>>    typedef signed short s16;
>>    typedef unsigned short u16;
>>
>>    typedef signed int s32;
>>    typedef unsigned int u32;
>>
>>    typedef signed long long s64;
>>    typedef unsigned long long u64;
>>      OR
>>    typedef signed long s64;
>>    typedef unsigned long u64;
>> As I understand, {u|s}int<N>_t should be used instead of them.
> 
> Hmm, the situation is worse than I thought (recalled) it was: You're
> right, xen/types.h actually uses __{u,s}<N>. So I'm sorry for
> misguiding you; we'll need to do more cleanup first for asm/types.h to
> become smaller.

What is the reason for not declaring __{u,s}<N> directly in xen/types.h 
as a type alias to {u,s}<N>?

IIUC __{u,s}<N> and {u,s}<N> are needed by code ported from Linux while 
Xen code should use {u|s}int<N>_t instead, right?

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 10:44:50 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Doug
 Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH] automation: rename RISCV_64 container and jobs
Thread-Topic: [PATCH] automation: rename RISCV_64 container and jobs
Thread-Index: AQHZJRAt1qKQYAWJX02qzojGezRz/a6ZCT0A
Date: Wed, 11 Jan 2023 10:44:30 +0000
Message-ID: <4f15fe80-9a60-76ce-9cf8-2143ea2d86eb@citrix.com>
References:
 <cea2d287fd65033d8631bf9905ad00652bf11035.1673367923.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <cea2d287fd65033d8631bf9905ad00652bf11035.1673367923.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-ID: <F509C742F10CBA41AA61294CB57F072E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 10/01/2023 4:25 pm, Oleksii Kurochko wrote:
> All RISCV_64-related stuff was renamed to be consistent with
> ARM (arm32 is cross build as RISCV_64).
>
> The patch is based on the following patch series:
> [PATCH *] Basic early_printk and smoke test implementation
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  ...v64.dockerfile => current-riscv64.dockerfile} |  0

This rename will break Xen 4.17

Now, as 4.17's RISC-V support was so token that it wasn't even properly
wired into CI, then this is probably fine.

But we absolutely do need to be aware of the consequence before taking
the patch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 11:05:40 2023
Message-ID: <4bdb36a2-fda6-1949-d7db-7344a801fa50@amd.com>
Date: Wed, 11 Jan 2023 12:05:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] automation: rename RISCV_64 container and jobs
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Oleksii Kurochko
	<oleksii.kurochko@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
References: <cea2d287fd65033d8631bf9905ad00652bf11035.1673367923.git.oleksii.kurochko@gmail.com>
 <4f15fe80-9a60-76ce-9cf8-2143ea2d86eb@citrix.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <4f15fe80-9a60-76ce-9cf8-2143ea2d86eb@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit



On 11/01/2023 11:44, Andrew Cooper wrote:
> 
> 
> On 10/01/2023 4:25 pm, Oleksii Kurochko wrote:
>> All RISCV_64-related stuff was renamed to be consistent with
>> ARM (arm32 is cross build as RISCV_64).
>>
>> The patch is based on the following patch series:
>> [PATCH *] Basic early_printk and smoke test implementation
>>
>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> ---
>>  ...v64.dockerfile => current-riscv64.dockerfile} |  0
> 
> This rename will break Xen 4.17
I guess only if we decide to remove the old container from the registry,
which we do not need to do. We already have a few containers in the registry
whose dockerfiles no longer appear in xen.git but are kept in GitLab for
backwards compatibility.

> 
> Now, as 4.17's RISC-V support was so token that it wasn't even properly
> wired into CI, then this is probably fine.
> 
> But we absolutely do need to be aware of the consequence before taking
> the patch.
> 
> ~Andrew

~Michal


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 11:31:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 11:31:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 680acd88-91a3-11ed-91b6-6bf2151ebd3b
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 14/16] xen/xenbus: move to_xenbus_device() to use container_of_const()
Date: Wed, 11 Jan 2023 12:30:16 +0100
Message-Id: <20230111113018.459199-15-gregkh@linuxfoundation.org>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230111113018.459199-1-gregkh@linuxfoundation.org>
References: <20230111113018.459199-1-gregkh@linuxfoundation.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The driver core is changing to pass some pointers as const, so move
to_xenbus_device() to use container_of_const() to handle this change.

to_xenbus_device() now properly keeps the const-ness of the pointer passed
into it, whereas before it could be lost.

Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 include/xen/xenbus.h | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index eaa932b99d8a..b31f77f9c50c 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -96,10 +96,7 @@ struct xenbus_device {
 	unsigned int spurious_threshold;
 };
 
-static inline struct xenbus_device *to_xenbus_device(struct device *dev)
-{
-	return container_of(dev, struct xenbus_device, dev);
-}
+#define to_xenbus_device(__dev)	container_of_const(__dev, struct xenbus_device, dev)
 
 struct xenbus_device_id
 {
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 11:38:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 11:38:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a1ddd81-91a4-11ed-b8d0-410ff93cb8f0
Message-ID: <644334f1-8e1d-3203-e942-0eb3ef5584a9@suse.com>
Date: Wed, 11 Jan 2023 12:38:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Oleksii <oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
 <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
 <06185fcfb8cbd849df4b033efa923b55c054738d.camel@gmail.com>
 <1b6ee20d-2f32-ab38-83ec-69c33baf42fd@suse.com>
 <0398c48c-cc5d-a4fb-354f-54e3c5900d18@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <0398c48c-cc5d-a4fb-354f-54e3c5900d18@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 11.01.2023 11:22, Xenia Ragiadakou wrote:
> 
> On 1/11/23 11:08, Jan Beulich wrote:
>> On 11.01.2023 09:47, Oleksii wrote:
>>> On Tue, 2023-01-10 at 17:58 +0100, Jan Beulich wrote:
>>>> On 10.01.2023 16:17, Oleksii Kurochko wrote:
>>>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>>> ---
>>>>> Changes in V3:
>>>>>      - Nothing changed
>>>>> ---
>>>>> Changes in V2:
>>>>>      - Remove unneeded now types from <asm/types.h>
>>>>
>>>> I guess you went a little too far: Types used in common code, even if
>>>> you
>>> It looks like I didn't understand which of the types are needed.
>>>
>>> In "[PATCH v1 2/8] xen/riscv: introduce asm/types.h header file" all
>>> types were introduced as most of them are used in <xen/types.h>:
>>> __{u|s}{8|16|32|64}. Thereby it looks like the following types in
>>> <asm/types.h> should be present from the start:
>>>    typedef __signed__ char __s8;
>>>    typedef unsigned char __u8;
>>>
>>>    typedef __signed__ short __s16;
>>>    typedef unsigned short __u16;
>>>
>>>    typedef __signed__ int __s32;
>>>    typedef unsigned int __u32;
>>>
>>>    #if defined(__GNUC__) && !defined(__STRICT_ANSI__)
>>>    #if defined(CONFIG_RISCV_32)
>>>      typedef __signed__ long long __s64;
>>>      typedef unsigned long long __u64;
>>>    #elif defined (CONFIG_RISCV_64)
>>>      typedef __signed__ long __s64;
>>>      typedef unsigned long __u64;
>>>    #endif
>>>    #endif
>>>
>>>   Then it turns out that there is no sense in:
>>>    typedef signed char s8;
>>>    typedef unsigned char u8;
>>>
>>>    typedef signed short s16;
>>>    typedef unsigned short u16;
>>>
>>>    typedef signed int s32;
>>>    typedef unsigned int u32;
>>>
>>>    typedef signed long long s64;
>>>    typedef unsigned long long u64;
>>>      OR
>>>    typedef signed long s64;
>>>    typedef unsigned long u64;
>>> As I understand it, {u|s}int<N>_t should be used instead of them.
>>
>> Hmm, the situation is worse than I thought (recalled) it was: You're
>> right, xen/types.h actually uses __{u,s}<N>. So I'm sorry for
>> misguiding you; we'll need to do more cleanup first for asm/types.h to
>> become smaller.
> 
> What is the reason for not declaring __{u,s}<N> directly in xen/types.h 
> as type aliases of {u,s}<N>?
> 
> IIUC __{u,s}<N> and {u,s}<N> are needed by code ported from Linux while 
> Xen code should use {u|s}int<N>_t instead, right?

Yes. Hence in the long run only Linux files should get to see __{u,s}<N>
and {u,s}<N>, perhaps from a new linux-types.h.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 11:40:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 11:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3a1b7d0-91a4-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Peter Zijlstra <peterz@infradead.org>, Joan Bruguera
	<joanbrugueram@gmail.com>
CC: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"x86@kernel.org" <x86@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
	Juergen Gross <jgross@suse.com>, "Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: Wake-up from suspend to RAM broken under `retbleed=stuff`
Thread-Topic: Wake-up from suspend to RAM broken under `retbleed=stuff`
Thread-Index: AQHZJa6+C7zgK+BmnkyxcC0/cTTk966ZF36A
Date: Wed, 11 Jan 2023 11:39:57 +0000
Message-ID: <aefed99b-6747-5dcc-65ec-6880f7c0d207@citrix.com>
References: <20230108030748.158120-1-joanbrugueram@gmail.com>
 <20230109040531.7888-1-joanbrugueram@gmail.com>
 <Y76bbtJn+jIV3pOz@hirez.programming.kicks-ass.net>
In-Reply-To: <Y76bbtJn+jIV3pOz@hirez.programming.kicks-ass.net>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <CB2D9FFBCB892B48AA3EB49271C7242A@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fa5edb11-4719-4c3d-2944-08daf3c89113
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jan 2023 11:39:57.8838
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: WBAE67blKJopvNNxCVLXsIDkZ5dQBDoa6whsJj3QEgw0yWavIv4M9cThC5Gc6mL6GNrwtLkoCJLWFXxnb/JpmkFmvGzkLOl75T2HQJfBwTs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5402

On 11/01/2023 11:20 am, Peter Zijlstra wrote:
> On Mon, Jan 09, 2023 at 04:05:31AM +0000, Joan Bruguera wrote:
>> This fixes wakeup for me on both QEMU and real HW
>> (just a proof of concept, don't merge)
>>
>> diff --git a/arch/x86/kernel/callthunks.c b/arch/x86/kernel/callthunks.c
>> index ffea98f9064b..8704bcc0ce32 100644
>> --- a/arch/x86/kernel/callthunks.c
>> +++ b/arch/x86/kernel/callthunks.c
>> @@ -7,6 +7,7 @@
>>  #include <linux/memory.h>
>>  #include <linux/moduleloader.h>
>>  #include <linux/static_call.h>
>> +#include <linux/suspend.h>
>>  
>>  #include <asm/alternative.h>
>>  #include <asm/asm-offsets.h>
>> @@ -150,6 +151,10 @@ static bool skip_addr(void *dest)
>>  	if (dest >= (void *)hypercall_page &&
>>  	    dest < (void*)hypercall_page + PAGE_SIZE)
>>  		return true;
>> +#endif
>> +#ifdef CONFIG_PM_SLEEP
>> +	if (dest == restore_processor_state)
>> +		return true;
>>  #endif
>>  	return false;
>>  }
>> diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
>> index 236447ee9beb..e667894936f7 100644
>> --- a/arch/x86/power/cpu.c
>> +++ b/arch/x86/power/cpu.c
>> @@ -281,6 +281,9 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
>>  /* Needed by apm.c */
>>  void notrace restore_processor_state(void)
>>  {
>> +	/* Restore GS before calling anything to avoid crash on call depth accounting */
>> +	native_wrmsrl(MSR_GS_BASE, saved_context.kernelmode_gs_base);
>> +
>>  	__restore_processor_state(&saved_context);
>>  }
> Yeah, I can see why, but I'm not really comfortable with this. TBH, I
> don't see how the whole resume code is correct to begin with. At the
> very least it needs a heavy dose of noinstr.
>
> Rafael, what cr3 is active when we call restore_processor_state()?
>
> Specifically, the problem is that I don't feel comfortable doing any
> sort of weird code until all the CR and segment registers have been
> restored, however, write_cr*() are paravirt functions that result in
> CALL, which then gives us a bit of a chicken and egg problem.
>
> I'm also wondering how well retbleed=stuff works on Xen, if at all. If
> we can ignore Xen, things are a little easier perhaps.

I really would like retbleed=stuff to work under Xen PV, because then we
can arrange to start turning off some even more expensive mitigations
that Xen does on behalf of guests.

I have a suspicion that these paths will be used under Xen PV, even if
only for dom0.  The abstractions for host S3/4/5 are not great.  That
said, at all points that guest code is executing, even after a logical
S3 resume, it will have a good GS_BASE.  (Assuming the teardown logic
doesn't self-clobber.)

The bigger issue with stuff accounting is that nothing AFAICT accounts
for the fact that any hypercall potentially empties the RSB in otherwise
synchronous program flow.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 11:44:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 11:44:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475231.736832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZWm-0000qm-7N; Wed, 11 Jan 2023 11:44:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475231.736832; Wed, 11 Jan 2023 11:44:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZWm-0000qf-4K; Wed, 11 Jan 2023 11:44:20 +0000
Received: by outflank-mailman (input) for mailman id 475231;
 Wed, 11 Jan 2023 11:44:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFZWk-0000qZ-MG
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 11:44:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFZWj-0005Uq-UA; Wed, 11 Jan 2023 11:44:17 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFZWj-0004U7-LW; Wed, 11 Jan 2023 11:44:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=IqfAabigwW7CFHzU0bZ0FY9KOz/BysPhEjw0xmgSgig=; b=Gy3Snw
	FPKTAkG1qUEP9B/Bco0jMaZx0BKx8PW1ffs1HugeclmkrTKpKNVlnrKChowYRzR8eXFv63DX5ljo1
	OBnOt9MsCW4mGAC1hU0Go3L28u6iv2S9k1EWjK/HuzVG/Fo1cUz7HGUK89Yp2o2TIj5XPYtUFQt8T
	RzG+oNgFc6Y=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH] xen: Remove the arch specific header init.h
Date: Wed, 11 Jan 2023 11:44:09 +0000
Message-Id: <20230111114409.7495-1-julien@xen.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Both the x86 and (soon) the RISC-V versions of init.h are empty. On Arm,
it contains a structure that should not be used by any common code.

The structure init_info is used to store information needed to set up
the CPU currently being brought up. setup.h seems to be a more suitable
home, even though that header is getting quite crowded.

Looking through the history, <asm/init.h> was introduced at the same
time as the ia64 port, because for some reason most of the macros
were duplicated. This was changed in 72c07f413879, and I don't
foresee any reason to require an arch-specific definition of init.h
in the near future.

Therefore remove asm/init.h for both x86 and Arm (moving the only
definition to setup.h). With that, RISC-V will not need to introduce
an empty header.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/arm/arm32/asm-offsets.c |  1 +
 xen/arch/arm/arm64/asm-offsets.c |  1 +
 xen/arch/arm/include/asm/init.h  | 20 --------------------
 xen/arch/arm/include/asm/setup.h |  8 ++++++++
 xen/arch/x86/acpi/power.c        |  1 -
 xen/arch/x86/include/asm/init.h  |  4 ----
 xen/include/xen/init.h           |  2 --
 7 files changed, 10 insertions(+), 27 deletions(-)
 delete mode 100644 xen/arch/arm/include/asm/init.h
 delete mode 100644 xen/arch/x86/include/asm/init.h

diff --git a/xen/arch/arm/arm32/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
index 2116ba5b95bf..05c692bb2822 100644
--- a/xen/arch/arm/arm32/asm-offsets.c
+++ b/xen/arch/arm/arm32/asm-offsets.c
@@ -11,6 +11,7 @@
 #include <public/xen.h>
 #include <asm/current.h>
 #include <asm/procinfo.h>
+#include <asm/setup.h>
 
 #define DEFINE(_sym, _val)                                                 \
     asm volatile ("\n.ascii\"==>#define " #_sym " %0 /* " #_val " */<==\"" \
diff --git a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c
index 280ddb55bfd4..7226cd9b2eb0 100644
--- a/xen/arch/arm/arm64/asm-offsets.c
+++ b/xen/arch/arm/arm64/asm-offsets.c
@@ -10,6 +10,7 @@
 #include <xen/bitops.h>
 #include <public/xen.h>
 #include <asm/current.h>
+#include <asm/setup.h>
 #include <asm/smccc.h>
 
 #define DEFINE(_sym, _val)                                                 \
diff --git a/xen/arch/arm/include/asm/init.h b/xen/arch/arm/include/asm/init.h
deleted file mode 100644
index 5ac8cf8797d6..000000000000
--- a/xen/arch/arm/include/asm/init.h
+++ /dev/null
@@ -1,20 +0,0 @@
-#ifndef _XEN_ASM_INIT_H
-#define _XEN_ASM_INIT_H
-
-struct init_info
-{
-    /* Pointer to the stack, used by head.S when entering in C */
-    unsigned char *stack;
-    /* Logical CPU ID, used by start_secondary */
-    unsigned int cpuid;
-};
-
-#endif /* _XEN_ASM_INIT_H */
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index fdbf68aadcaa..a926f30a2be4 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -168,6 +168,14 @@ int map_range_to_domain(const struct dt_device_node *dev,
 
 extern const char __ro_after_init_start[], __ro_after_init_end[];
 
+struct init_info
+{
+    /* Pointer to the stack, used by head.S when entering in C */
+    unsigned char *stack;
+    /* Logical CPU ID, used by start_secondary */
+    unsigned int cpuid;
+};
+
 #endif
 /*
  * Local variables:
diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index b76f673acb1a..d23335391c67 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -17,7 +17,6 @@
 #include <xen/sched.h>
 #include <asm/acpi.h>
 #include <asm/irq.h>
-#include <asm/init.h>
 #include <xen/spinlock.h>
 #include <xen/sched.h>
 #include <xen/domain.h>
diff --git a/xen/arch/x86/include/asm/init.h b/xen/arch/x86/include/asm/init.h
deleted file mode 100644
index 5295b35e6337..000000000000
--- a/xen/arch/x86/include/asm/init.h
+++ /dev/null
@@ -1,4 +0,0 @@
-#ifndef _XEN_ASM_INIT_H
-#define _XEN_ASM_INIT_H
-
-#endif /* _XEN_ASM_INIT_H */
diff --git a/xen/include/xen/init.h b/xen/include/xen/init.h
index 0af0e234ec80..1d7c0216bc80 100644
--- a/xen/include/xen/init.h
+++ b/xen/include/xen/init.h
@@ -1,8 +1,6 @@
 #ifndef _LINUX_INIT_H
 #define _LINUX_INIT_H
 
-#include <asm/init.h>
-
 /*
  * Mark functions and data as being only used at initialization
  * or exit time.
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 11:45:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 11:45:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475237.736843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZYC-0001PG-Hd; Wed, 11 Jan 2023 11:45:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475237.736843; Wed, 11 Jan 2023 11:45:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZYC-0001P9-Eb; Wed, 11 Jan 2023 11:45:48 +0000
Received: by outflank-mailman (input) for mailman id 475237;
 Wed, 11 Jan 2023 11:45:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFZYA-0001P0-TB
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 11:45:46 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2048.outbound.protection.outlook.com [40.107.8.48])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b820bfd-91a5-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 12:45:44 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9489.eurprd04.prod.outlook.com (2603:10a6:102:2c1::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 11:45:42 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 11:45:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b820bfd-91a5-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IuSovQhox+7pvm2LM8mMy3OCazP++cMoCFH5nB2NFR/zh0XCCcBiCqeWgs8l0N36/BPhH6LRRm8k2pgmP6ZM8H/LYd9lxQeZDFhqs1Fqt0f/lclHeWXjflCxkm9/pEuL9Xw05TquB6/53XmHW4j7hC7nNzOVneYQObzYJmlp7dfG7i9/KJb1JGzjdhpj6cCBSRWu+LufKzxmNrgbEanJCTqpo4E0dFvqvO9v312Vw/OvUs/2KYfmi4+8kFj1R+uVLw8r62MAPe5VSCSBZz/NV0l89o9fICgDDg4NDSJVVyTjRf5XF6vGC9dYIeuCaNtW++cqCUnZ5sbrrZcUNAUOdg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vJyBKwRcILZLfFiT4JKRJBdJChq++tQiSdJTUxAwsZY=;
 b=fAlE4NRljiXH9tSwhmUC6x1k2x0msNp8wokCaWCFHoQzerrlVxvA+Fb5fjy/BDD3woHZD9q/FYBhntTcbIX7X2Paw8qN7m9isccG6PivvOnN3ocXdxMpSK3mHsiYwwrL+RA2P2HlPL/rIdEsHJC4wuD0Yl5ClaEUDeDFYfHu7YBa7vI+foi+IGIXWXT/2Nwd0Z4pn1cxtRJxUQ4ZCFDN3GR1UYu3Bom1haH62TiPGwn3tkij/7aKUSVmdP47SKtUdrK8WdM+qB49vs/qbEKA38OFBkbjDC+OiX8JvBiW99dgbnYLcDbmKydRo3/hIvJ1gENf5iWSP4h44lzgFua+Eg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vJyBKwRcILZLfFiT4JKRJBdJChq++tQiSdJTUxAwsZY=;
 b=YIMH09P5xf84nJhadwq6Ng0PRQbK1DZD2febd4fZCf+ROeN1Ec2Z803GrqUTifeoXWkpREY/h4IcJiTxhKg5Ke9lkWE+05nUOwhBgpnsJtVmIFcdEkFYpgo5MPGAv374hXwVuavEvTvSDWiQ+qWq91OtBK3ULEHzmw+JF+4Fu4wb7Eo547ly/M8wiGFxHVA+bAp2xjIgQ/mvNg2cmqttF/1QuL40reEw6bP+myb+LahgB4NyeKA8jpOaGnHmEM1ofNIfCpeMSSlI5QGcsJ/n0eOkxssm0VhF25udU9jMs0YkWcLzblOXBg6uaxUx7OxtteEWUoNh1COlNAeEtVkxRQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f2c151af-b3ba-69e6-2878-3256971f5a9d@suse.com>
Date: Wed, 11 Jan 2023 12:45:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: Wake-up from suspend to RAM broken under `retbleed=stuff`
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "x86@kernel.org" <x86@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Juergen Gross <jgross@suse.com>, "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>, Peter Zijlstra
 <peterz@infradead.org>, Joan Bruguera <joanbrugueram@gmail.com>
References: <20230108030748.158120-1-joanbrugueram@gmail.com>
 <20230109040531.7888-1-joanbrugueram@gmail.com>
 <Y76bbtJn+jIV3pOz@hirez.programming.kicks-ass.net>
 <aefed99b-6747-5dcc-65ec-6880f7c0d207@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <aefed99b-6747-5dcc-65ec-6880f7c0d207@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0119.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9489:EE_
X-MS-Office365-Filtering-Correlation-Id: 43b48721-5c99-4d23-7067-08daf3c95e1c
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 43b48721-5c99-4d23-7067-08daf3c95e1c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 11:45:42.1119
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PKsUwEW9ReEg0+L9PyfNKN04LQMbr2R5om7+UCDS4ee0xhEr2fp5eyQWIoTHOan9I2uk5cxsBqRDy4Zpu+YwEQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9489

On 11.01.2023 12:39, Andrew Cooper wrote:
> The bigger issue with stuff accounting is that nothing AFAICT accounts
> for the fact that any hypercall potentially empties the RSB in otherwise
> synchronous program flow.

But that's not just at hypercall boundaries, but effectively anywhere
(i.e. whenever the hypervisor decides to de-schedule the vCPU)?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 11:47:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 11:47:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475243.736853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZZp-0001yc-T7; Wed, 11 Jan 2023 11:47:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475243.736853; Wed, 11 Jan 2023 11:47:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZZp-0001yV-QN; Wed, 11 Jan 2023 11:47:29 +0000
Received: by outflank-mailman (input) for mailman id 475243;
 Wed, 11 Jan 2023 11:47:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SHza=5I=citrix.com=prvs=3687a0f96=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFZZo-0001yH-C5
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 11:47:28 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b67f4aaa-91a5-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 12:47:25 +0100 (CET)
Received: from mail-mw2nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 Jan 2023 06:47:21 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA0PR03MB5402.namprd03.prod.outlook.com (2603:10b6:806:b7::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 11:47:20 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Wed, 11 Jan 2023
 11:47:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b67f4aaa-91a5-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673437645;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=N19BN8vT5mhkquQUDl8iaAN7tGdPH7TWLraXfCAokq4=;
  b=MUyENqgDZvQYfNHba/gnmG4+fo7Cdv1vTT9DaYAxhRsk64yIzpYCW8+t
   H43Q43oKx9Ougw7hMOUvi/aagLs3eiYveE3GYOKB/aLD6PDSAK6ur0PHa
   cNkRKXDfLpslBlzqzd+5jOZA82Ktk+hk76NwnyORn6Yg2CKNB2x8a3k1O
   I=;
X-IronPort-RemoteIP: 104.47.55.104
X-IronPort-MID: 92103583
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,315,1665460800"; 
   d="scan'208";a="92103583"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BfiyQEyH8cTyHJqKB7gCBeNqFp9O/35mjR7LDhgZKZ9XJA6eaizLuMtNvL5kSrnkfhdEEuYD/BVd+x4xBn516738vGNbCNsI4SDrcRgchH7u7iTtdN3rUbmI4dPAoL6eF/Go0sIVK1ZKbfElynSnVIZYjhurTdlOer3548emvKdaNmm//ZN6jZFOo7YpF5CvGYYsSgUEh8BIprdb3n8KhVRLJi5d7s3SDegw2Bh2izQPrO0wVwVOOfDp1aBSIKzBmXWqak3YEmZ49w5BUCXLDfneo4ak4i5Ol3vWiEocu/OmcsRUqiyWG+IxiVITs0EemsrG/L6PZqnJOFiUmKxorw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=N19BN8vT5mhkquQUDl8iaAN7tGdPH7TWLraXfCAokq4=;
 b=e6E9fzNRFrXmDA5YQJ0Ih2qkYU0jSd4olQhPb9cSCNiw8Maxf95QVstSobM+v6L97E7kyYWgdl9iurbr1EeOEQWBEbfwM8dAyUksLCSAqjbNqpJMn3m+7+nrD1d1gtWdXBaidNhocifEU7V3PGD3okRKcToHIXFJNG9RDMMlSOvmxy7QhYvlZo1nPqcVXd3LBPnHdFZ37gMdytadjR/W44kZttc3bLR0LDXEmwGY2uV6wW+b9dUlJA1NcQ9J+LAXIWgBvPaGQLBVS49dYQn6r4kvZ7j/kDq2TfgjTlFrO3QFMXS2R7aZaiM8HAi6La+WwGs1LvwSWY5l+EgcUyTeQA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N19BN8vT5mhkquQUDl8iaAN7tGdPH7TWLraXfCAokq4=;
 b=CFAljS5NFqm5wj/FKz/jhURHuaSzeS5oC6CYyNgniluVXO/+gx0U/WG5OROHRXejpoHJ4Z9Rhqb0IGpP3kzpWafOh3Gooj2lMLHiTLfpnSLN0Kaxfr0LDojeRmmHu4dCChscv/fj2qDQ9CNtbHTshpyzMQMNhSAgwm4wb2TBy6E=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, George Dunlap
	<George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, Oleksii Kurochko
	<oleksii.kurochko@gmail.com>
Subject: Re: [PATCH] xen: Remove the arch specific header init.h
Thread-Topic: [PATCH] xen: Remove the arch specific header init.h
Thread-Index: AQHZJbIRC1R7bPh5UkiB/FwWKG3LDa6ZGYaA
Date: Wed, 11 Jan 2023 11:47:20 +0000
Message-ID: <7590f300-49ef-8180-543d-96b5af06fbd5@citrix.com>
References: <20230111114409.7495-1-julien@xen.org>
In-Reply-To: <20230111114409.7495-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SA0PR03MB5402:EE_
x-ms-office365-filtering-correlation-id: ac591cdc-7851-48bd-baf8-08daf3c998bb
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <49306D701B4E8B4BA6AB7710D450EFE6@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ac591cdc-7851-48bd-baf8-08daf3c998bb
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jan 2023 11:47:20.2606
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: e0cP1qCANL8V7PRb9IOHNIH4+78gP9T+N/Ou7e63Of1B16dAJriTVcZszHUqpOsbjKBVpOL//oNA3noae29rPms+GHsTMuj3sk8YOLKVDKQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5402

On 11/01/2023 11:44 am, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Both the x86 and (soon) RISC-V versions of init.h are empty. On Arm, it
> contains a structure that should not be used by any common code.
>
> The structure init_info is used to store information to set up the CPU
> currently being brought up. setup.h seems to be more suitable even though
> the header is getting quite crowded.
>
> Looking through the history, <asm/init.h> was introduced at the same
> time as the ia64 port because for some reason most of the macros
> were duplicated. This was changed in 72c07f413879 and I don't
> foresee any reason to require an arch-specific definition for init.h
> in the near future.
>
> Therefore remove asm/init.h for both x86 and arm (the only definition
> is moved into setup.h). With that, RISC-V will not need to introduce
> an empty header.
>
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 12:06:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 12:06:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475258.736865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZsA-0004eB-RS; Wed, 11 Jan 2023 12:06:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475258.736865; Wed, 11 Jan 2023 12:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZsA-0004e4-Oi; Wed, 11 Jan 2023 12:06:26 +0000
Received: by outflank-mailman (input) for mailman id 475258;
 Wed, 11 Jan 2023 12:06:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ktLh=5I=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pFZs9-0004dy-0D
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 12:06:25 +0000
Received: from mail-vs1-xe2d.google.com (mail-vs1-xe2d.google.com
 [2607:f8b0:4864:20::e2d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5ddb441d-91a8-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 13:06:24 +0100 (CET)
Received: by mail-vs1-xe2d.google.com with SMTP id n190so11571631vsc.11
 for <xen-devel@lists.xenproject.org>; Wed, 11 Jan 2023 04:06:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ddb441d-91a8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=8jzO6WX6QmQF0cHfaz5VH2e/upxcA5uDmPtPD+mE8yM=;
        b=JrcRNYFE0uDC2Xu5K9OIsHIAZ14u1jygNIehEYnKypuMbMX7eOtVym54dbN95+iPkw
         cSUTQ71OzNX+sDMOJ2fN6FY2dMBawehNB/67DwZ9gqdEGki2V8BrvIomo97X+8ilBF+C
         pcluFd0Q6SuhvJl+MFXaT7YIoN4dSTKW5IwVhpTnfvdBqq8XwyjX1fKWHA45zt8Jhu3B
         zKIs0tx5TuAJzAwMpj9d1LVDavSXIgcxxJ4nivMJPYOK7D24Thl6Sumb/IOKa84YcXwy
         meTIVk02XxVomJSqLL4R04vOk8YQE37KdrNqrjgLF0nVYruCEv7ZYpVbqNgWv+owR5yr
         hfdA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=8jzO6WX6QmQF0cHfaz5VH2e/upxcA5uDmPtPD+mE8yM=;
        b=0SE1POvDeQxHKiNVDt9hNlKPa8WwWLjxZFv5wKpI1fm0TeZgfg9Rs5ELQe8U1D8w9L
         Iaqm0+Zcv9UzTbwu2qlcRNMR+l+ied0HUo5/9XUQKN8oNB8QcLJo4uIvwdvFUR/mkV/j
         37Tx4fXGpU3BpfgmpfID812OZS4GKZeocCxuIOIooSmiDPDWYvabLkSPYTNQQddeuQPq
         HNbJ/mFM2EmddXl/5bd6UnpNuX1oiOkrdTm+pnD3dYdgekGXrHfgt0GwQgGDAqIJ3U3K
         4PULFbixj1hbcQbYSjFZl6hGynB/VVbnzw1mzbSSCd0dFRzbDLgm0Z0NCc66SHoBKvpc
         n8OQ==
X-Gm-Message-State: AFqh2kp43x1uxAlhKqL611+NNJ3qo7Wl/ne0CtthCtGGCNDjDDS8WZPk
	s/pAI7QR9Yhz5W2ctZiQ2/UmOz01SK/2f6GeZsg=
X-Google-Smtp-Source: AMrXdXuW5oH7qCrf783SkWF+5Un00TFGbLY1x6jvHqrYZaHBs81C3ZwH8qykQGGpb/VF4zT3NPTLuYN3PX0w5aGyxwI=
X-Received: by 2002:a67:6582:0:b0:3d0:d7ce:b383 with SMTP id
 z124-20020a676582000000b003d0d7ceb383mr376193vsb.72.1673438782800; Wed, 11
 Jan 2023 04:06:22 -0800 (PST)
MIME-Version: 1.0
References: <20230111114409.7495-1-julien@xen.org>
In-Reply-To: <20230111114409.7495-1-julien@xen.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 11 Jan 2023 22:05:56 +1000
Message-ID: <CAKmqyKO9v9PhHFtkemg7Ta9ReCzP=Ozr2q=3zVjfY+dKE38XAQ@mail.gmail.com>
Subject: Re: [PATCH] xen: Remove the arch specific header init.h
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 11, 2023 at 9:44 PM Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Both the x86 and (soon) RISC-V versions of init.h are empty. On Arm, it
> contains a structure that should not be used by any common code.
>
> The structure init_info is used to store information to set up the CPU
> currently being brought up. setup.h seems to be more suitable even though
> the header is getting quite crowded.
>
> Looking through the history, <asm/init.h> was introduced at the same
> time as the ia64 port because for some reason most of the macros
> were duplicated. This was changed in 72c07f413879 and I don't
> foresee any reason to require an arch-specific definition for init.h
> in the near future.
>
> Therefore remove asm/init.h for both x86 and arm (the only definition
> is moved into setup.h). With that, RISC-V will not need to introduce
> an empty header.
>
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

>
> ---
> cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  xen/arch/arm/arm32/asm-offsets.c |  1 +
>  xen/arch/arm/arm64/asm-offsets.c |  1 +
>  xen/arch/arm/include/asm/init.h  | 20 --------------------
>  xen/arch/arm/include/asm/setup.h |  8 ++++++++
>  xen/arch/x86/acpi/power.c        |  1 -
>  xen/arch/x86/include/asm/init.h  |  4 ----
>  xen/include/xen/init.h           |  2 --
>  7 files changed, 10 insertions(+), 27 deletions(-)
>  delete mode 100644 xen/arch/arm/include/asm/init.h
>  delete mode 100644 xen/arch/x86/include/asm/init.h
>
> diff --git a/xen/arch/arm/arm32/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
> index 2116ba5b95bf..05c692bb2822 100644
> --- a/xen/arch/arm/arm32/asm-offsets.c
> +++ b/xen/arch/arm/arm32/asm-offsets.c
> @@ -11,6 +11,7 @@
>  #include <public/xen.h>
>  #include <asm/current.h>
>  #include <asm/procinfo.h>
> +#include <asm/setup.h>
>
>  #define DEFINE(_sym, _val)                                                 \
>      asm volatile ("\n.ascii\"==>#define " #_sym " %0 /* " #_val " */<==\"" \
> diff --git a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c
> index 280ddb55bfd4..7226cd9b2eb0 100644
> --- a/xen/arch/arm/arm64/asm-offsets.c
> +++ b/xen/arch/arm/arm64/asm-offsets.c
> @@ -10,6 +10,7 @@
>  #include <xen/bitops.h>
>  #include <public/xen.h>
>  #include <asm/current.h>
> +#include <asm/setup.h>
>  #include <asm/smccc.h>
>
>  #define DEFINE(_sym, _val)                                                 \
> diff --git a/xen/arch/arm/include/asm/init.h b/xen/arch/arm/include/asm/init.h
> deleted file mode 100644
> index 5ac8cf8797d6..000000000000
> --- a/xen/arch/arm/include/asm/init.h
> +++ /dev/null
> @@ -1,20 +0,0 @@
> -#ifndef _XEN_ASM_INIT_H
> -#define _XEN_ASM_INIT_H
> -
> -struct init_info
> -{
> -    /* Pointer to the stack, used by head.S when entering in C */
> -    unsigned char *stack;
> -    /* Logical CPU ID, used by start_secondary */
> -    unsigned int cpuid;
> -};
> -
> -#endif /* _XEN_ASM_INIT_H */
> -/*
> - * Local variables:
> - * mode: C
> - * c-file-style: "BSD"
> - * c-basic-offset: 4
> - * indent-tabs-mode: nil
> - * End:
> - */
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index fdbf68aadcaa..a926f30a2be4 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -168,6 +168,14 @@ int map_range_to_domain(const struct dt_device_node *dev,
>
>  extern const char __ro_after_init_start[], __ro_after_init_end[];
>
> +struct init_info
> +{
> +    /* Pointer to the stack, used by head.S when entering in C */
> +    unsigned char *stack;
> +    /* Logical CPU ID, used by start_secondary */
> +    unsigned int cpuid;
> +};
> +
>  #endif
>  /*
>   * Local variables:
> diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
> index b76f673acb1a..d23335391c67 100644
> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -17,7 +17,6 @@
>  #include <xen/sched.h>
>  #include <asm/acpi.h>
>  #include <asm/irq.h>
> -#include <asm/init.h>
>  #include <xen/spinlock.h>
>  #include <xen/sched.h>
>  #include <xen/domain.h>
> diff --git a/xen/arch/x86/include/asm/init.h b/xen/arch/x86/include/asm/init.h
> deleted file mode 100644
> index 5295b35e6337..000000000000
> --- a/xen/arch/x86/include/asm/init.h
> +++ /dev/null
> @@ -1,4 +0,0 @@
> -#ifndef _XEN_ASM_INIT_H
> -#define _XEN_ASM_INIT_H
> -
> -#endif /* _XEN_ASM_INIT_H */
> diff --git a/xen/include/xen/init.h b/xen/include/xen/init.h
> index 0af0e234ec80..1d7c0216bc80 100644
> --- a/xen/include/xen/init.h
> +++ b/xen/include/xen/init.h
> @@ -1,8 +1,6 @@
>  #ifndef _LINUX_INIT_H
>  #define _LINUX_INIT_H
>
> -#include <asm/init.h>
> -
>  /*
>   * Mark functions and data as being only used at initialization
>   * or exit time.
> --
> 2.38.1
>
>


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 12:12:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 12:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475264.736876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZxc-00064u-EY; Wed, 11 Jan 2023 12:12:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475264.736876; Wed, 11 Jan 2023 12:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFZxc-00064n-BX; Wed, 11 Jan 2023 12:12:04 +0000
Received: by outflank-mailman (input) for mailman id 475264;
 Wed, 11 Jan 2023 12:12:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zIvS=5I=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFZxa-00064h-NT
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 12:12:02 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 26b2dfea-91a9-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 13:12:00 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D516B17771;
 Wed, 11 Jan 2023 12:11:58 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id A009E13591;
 Wed, 11 Jan 2023 12:11:58 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id te2/JY6nvmPRQAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 11 Jan 2023 12:11:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26b2dfea-91a9-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673439118; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=B3qrf9YikvjYRWNKLSEsWiB+pp5U14J3LBizIBHQ/WI=;
	b=qamluBfRpiT5Np1/rixUryWJe96HqV9zqy4Ro3rxPqrVYaY0ZMMrR7IXIfYlhJhS9nMu9X
	P6iNwNfFfU4J7lWSa2YBRLKkrxCEd/kPAFvlPGDKOEl2BUezB/j+vHerKMbgMOEa+Ed8LZ
	SAY5C1Ga6iZ4ySRlt2CtUsvl9X5cHJE=
Message-ID: <2cc22d0a-105e-7569-9148-1267bee17181@suse.com>
Date: Wed, 11 Jan 2023 13:11:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 14/16] xen/xenbus: move to_xenbus_device() to use
 container_of_const()
Content-Language: en-US
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230111113018.459199-1-gregkh@linuxfoundation.org>
 <20230111113018.459199-15-gregkh@linuxfoundation.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230111113018.459199-15-gregkh@linuxfoundation.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------LCWxg77wBtn0gC0pubqm0PjV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------LCWxg77wBtn0gC0pubqm0PjV
Content-Type: multipart/mixed; boundary="------------gURSyyQPE0TXCrnge7VP9xW8";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
Message-ID: <2cc22d0a-105e-7569-9148-1267bee17181@suse.com>
Subject: Re: [PATCH v2 14/16] xen/xenbus: move to_xenbus_device() to use
 container_of_const()
References: <20230111113018.459199-1-gregkh@linuxfoundation.org>
 <20230111113018.459199-15-gregkh@linuxfoundation.org>
In-Reply-To: <20230111113018.459199-15-gregkh@linuxfoundation.org>

--------------gURSyyQPE0TXCrnge7VP9xW8
Content-Type: multipart/mixed; boundary="------------AjYhaO0p3szD2QFXvSvYRgYI"

--------------AjYhaO0p3szD2QFXvSvYRgYI
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 11.01.23 12:30, Greg Kroah-Hartman wrote:
> The driver core is changing to pass some pointers as const, so move
> to_xenbus_device() to use container_of_const() to handle this change.
>
> to_xenbus_device() now properly keeps the const-ness of the pointer passed
> into it, whereas before it could be lost.
>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Acked-by: Juergen Gross <jgross@suse.com>


Juergen
--------------AjYhaO0p3szD2QFXvSvYRgYI
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------AjYhaO0p3szD2QFXvSvYRgYI--

--------------gURSyyQPE0TXCrnge7VP9xW8--

--------------LCWxg77wBtn0gC0pubqm0PjV
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmO+p44FAwAAAAAACgkQsN6d1ii/Ey/e
bQgAjbPhju5lkpQbgda1vZ8/DpvZTWODueSnVZMfvIG/CesFP2mSzpp9g3IO1DoBcvKCWikBTQXw
gDEoDqaVvS72H5uybTNOAZTktXiXt9TLSt6Bkg1TKrW2G9jiTfmhGdNQgt2j5r9sbEvhSz1tFcso
Vq6KE7OXQiV5l5+hNDYt/kbpdtgFZpwJYqWJzBEhWcCtcz+ciIewf4wqKsWq/nStN150XFdCW4DJ
kj9IgVtW4054+V2+ROzcSNSRLpZ7pUtzlRQq8NjCGCy9WsNQy94u+sqYZx1C8JDjVkOerj9twghz
uSXwneu8DEwVojGB4sFYe0VvFvQABU6Q7CSqUMMPwA==
=4j1i
-----END PGP SIGNATURE-----

--------------LCWxg77wBtn0gC0pubqm0PjV--


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 12:14:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 12:14:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475270.736887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFa0N-0006g1-UH; Wed, 11 Jan 2023 12:14:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475270.736887; Wed, 11 Jan 2023 12:14:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFa0N-0006fu-QO; Wed, 11 Jan 2023 12:14:55 +0000
Received: by outflank-mailman (input) for mailman id 475270;
 Wed, 11 Jan 2023 12:14:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pFa0M-0006fk-O8
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 12:14:54 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2087.outbound.protection.outlook.com [40.107.247.87])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8d5d4fdd-91a9-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 13:14:52 +0100 (CET)
Received: from AS9PR06CA0457.eurprd06.prod.outlook.com (2603:10a6:20b:49a::18)
 by DB4PR08MB7959.eurprd08.prod.outlook.com (2603:10a6:10:38e::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Wed, 11 Jan
 2023 12:14:41 +0000
Received: from AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:49a:cafe::35) by AS9PR06CA0457.outlook.office365.com
 (2603:10a6:20b:49a::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13 via Frontend
 Transport; Wed, 11 Jan 2023 12:14:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT005.mail.protection.outlook.com (100.127.140.218) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Wed, 11 Jan 2023 12:14:40 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Wed, 11 Jan 2023 12:14:40 +0000
Received: from 83a73c8f75ae.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6480313E-C6CF-4E30-8788-17E969093945.1; 
 Wed, 11 Jan 2023 12:14:29 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 83a73c8f75ae.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 11 Jan 2023 12:14:29 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAVPR08MB9354.eurprd08.prod.outlook.com (2603:10a6:102:301::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 12:14:26 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Wed, 11 Jan 2023
 12:14:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d5d4fdd-91a9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0MvJYukd5RpqeKbJG/1RE6/DG6w+HNio7w46iNbYORc=;
 b=kjzIzJc4W1eYrJohHo2l5chO19gp0UVZwTXX0GIiwG6FbzMrcXASEbEiZ/+0qXPR/p4M3W2PhgH4M2uPh+5QqFUTd34rvJSTO+DbbBb/vMBi02e2O7s0RKaqGV14HDVnAKdO/j7Yc3lLoBDeTz+gRrzbw5qlBMz14o7XeqTbMYg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: e2bd239f0e166dd1
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wb4an7EHPilKYVUEzdozvgngNC7ureDTzteTt0XtSdgFemxw54FUlZYs1MjUuT3rDBMAaZSlONQirGu2geeB79NPH6W0xChxTTHLm+pXEC4S7Oizo1aLh5iRwrxC7EP9KZgLwUtspkZ2pFPIFx2/+ABO4Gx5qK3zqrBlTvSH0UaPbt4XVn8ivmtINEhQNz+8YIKJvvPWp/ldhT+LRbIbWCfjsGla2bZVU7Q9dVjZaupkUkcG+zsSLmQcOMKZPQCCwMsAfviAFTYYDyI3kl59JF8D0k8nZ7slcxSGYXEhiEoObMNoo0b+Do65pJaNKPoDb7O92NEyXAv8a5j8yV2Zww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0MvJYukd5RpqeKbJG/1RE6/DG6w+HNio7w46iNbYORc=;
 b=WTi5ZtlDZzjX1/HTXU/yDLDyCC6Jez9xh0+7KJ4VoCIjT9AkzzKXg9IguJbTF6Vij+K4a31j1YDTXjV4tKbwTzPhXm+0YesaQL2G2EodQloTf+bvkAxLHD3VN94EidaH/yXMTNRJVJALETgxPosbmaxikD0ZkCXbnhaaK/eTdRBb2Q/124tb+u5HG7dgXJ2Ep2F7hF+/9QPNGUSMFF8noSrCNmHAaRkNEIQZSE14/+81Sl4nWSolAEhGQp7isOK9WaAG5Lzfhn9VStV8HjtRKdHRvczEnx0rfHSJeMo+QlM1ES5clVdrBdjMgl7JxPFeYRlCe5XlR+O5YcceDfNWpQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0MvJYukd5RpqeKbJG/1RE6/DG6w+HNio7w46iNbYORc=;
 b=kjzIzJc4W1eYrJohHo2l5chO19gp0UVZwTXX0GIiwG6FbzMrcXASEbEiZ/+0qXPR/p4M3W2PhgH4M2uPh+5QqFUTd34rvJSTO+DbbBb/vMBi02e2O7s0RKaqGV14HDVnAKdO/j7Yc3lLoBDeTz+gRrzbw5qlBMz14o7XeqTbMYg=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH] xen: Remove the arch specific header init.h
Thread-Topic: [PATCH] xen: Remove the arch specific header init.h
Thread-Index: AQHZJbIaKE9+sa25KkC1qw/e7vlCUK6ZIQwA
Date: Wed, 11 Jan 2023 12:14:25 +0000
Message-ID: <69428FD5-7BFB-43BC-B2D7-F1C352365EB8@arm.com>
References: <20230111114409.7495-1-julien@xen.org>
In-Reply-To: <20230111114409.7495-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAVPR08MB9354:EE_|AM7EUR03FT005:EE_|DB4PR08MB7959:EE_
X-MS-Office365-Filtering-Correlation-Id: 4b9e937a-e57b-45e0-a153-08daf3cd6a9b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 9LOBkl0nCdjl4SDOPcoQgwqfEzzAU2tbLDrT5yJwistFtBAq5Wtylo49VybF4soTSZBrE5xjtpCBPvpTQ2ZpHYl4bz+TAPaAlwf0ZD7elg3398GkXhbm6VLjYalsre4K6m53P01Gxw5+yl8xBeMOxhetZZpEnNFgpWCUJzdN/xmVQ81pxOq84IR312li2Hz63xsnPqEeLG3b1WCqWmBlMi0jiseziUtOsjE/EJ+i7Bu3xofjVeCdvQHjGUpaueWvqHED0KfcJMSRcBhpWPJBb0VtIHyGsfLb39a2xPoEsVt89llPp3TgS1HGmtLDxaQdRt9U1QLYDagCeWudgOIem0dyNzEbTCEBl50r4cG4dFSWcUKHD0RLdzc5u6FQQhw5ITgAwedKjTEbtCqNeF4L5Ux9msm+ZPqwr4iPTJlw3rjCRMbZt9zkeRGb7HENeRCoizyD80ElJuOV3IwBRQkW00m9DKaihCtPmisKrrw4/l2kVNyW2DgNZH7zxZqjBEobPDRWyKoRFUoqRf1Pcu9hIlcXgtsOeXyZUhupwHEBTGLAifcfNh2EgvGJu5atkwVUI7kXjsHFQcA+qX7IlpZazPcqLoRq015J5Z3svUWUBrodquPOOUql2UKoOJDf7bwoQsK+ds+8Vxwie9Km1PWMAuqtGe3knXJ5bbz2AQaD0E8Bhw4Aju2TQtYBhL+Ecu3X+q0bw2oY30yr/1yhlCOpaC93ka3fSoRYVN/21sQoVtI=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(376002)(396003)(366004)(346002)(39860400002)(136003)(451199015)(7416002)(5660300002)(2906002)(8936002)(41300700001)(2616005)(8676002)(64756008)(4326008)(316002)(66476007)(66556008)(91956017)(86362001)(66946007)(6916009)(76116006)(54906003)(66446008)(71200400001)(36756003)(478600001)(6486002)(33656002)(6512007)(53546011)(26005)(6506007)(186003)(122000001)(38070700005)(38100700002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <14F03F888A259C4388E86CA88E34151A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9354
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	398c2b35-f106-4ce2-d2dc-08daf3cd61b9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Iks4Gadg3jehNMnx6XqQitH+GXnBfzYzoYNXNM6DqMVP0VIDRRTccpuegQZbgs9mN7yCIn4lge8QZ7iSzSM4t48RWSKKTICfsFjeihq1s90cg+t/yb3uWNV92+q7x6r+ZMXpQd18KAPNol09m4zY9IXPB1UJxZoqo8TK2408lSeflRFx0NAe0Aq+LEr1m6auMPTFN9ZYruAx9ZDQmVB6yobFvvJOYxpLIQ99RskEyO7WODVeQigARfM7WOt8WSB7DH+9dpv3sFvgulsUMFQ76zCrCho/ad7tKKuqTNMyeXocQlZl8qZ58O+Al5xCHKwlm3eY/EBshouOfj7LBBtNjBjp5PGsk3utzHEKOudLiYjJ/82O/BTtg9d1jPRX8JeuHCL3wyOjQXvOhkrXtRVsTHc+/vcO4HoFst+OO6CbXBtUWci8/xsOtr5Nk6Q4B6mqaLLq6hca5hcjNKhDpvyxuieaUx6FW0CfHEOBpGw3z/8cJrfjzlm41LgEKdqHbr9EnX+8lkVm/L/dAaCkXpE4hnbRGPXLvoCsl/hVjCxbMt+SoUuE4nwkSi1HrHhCXSKeoX45eL1jKc76IJ8GmrajzQmohFvrMaSqaQOWDGDEqgnUu2rW3FQhgLL9ej5QUGZYXHdY58JlsgxLcLFQKpnfg7FTUlvKSM+ECSaSh2V+T7GcmsM5uJ88GPTBGhoaC2lIFUB//mYZo0u2QGddlXOLFQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(376002)(39860400002)(396003)(346002)(136003)(451199015)(36840700001)(40470700004)(46966006)(36860700001)(478600001)(6486002)(6512007)(26005)(40460700003)(36756003)(186003)(356005)(47076005)(81166007)(86362001)(41300700001)(8936002)(6862004)(82310400005)(2906002)(5660300002)(82740400003)(54906003)(316002)(40480700001)(2616005)(70206006)(70586007)(8676002)(33656002)(336012)(6506007)(4326008)(107886003)(53546011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 12:14:40.8151
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b9e937a-e57b-45e0-a153-08daf3cd6a9b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB7959

> On 11 Jan 2023, at 11:44, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Both x86 and (soon) RISC-V version of init.h are empty. On Arm, it contains
> a structure that should not be used by any common code.
> 
> The structure init_info is used to store information to setup the CPU
> currently being brought-up. setup.h seems to be more suitable even though
> the header is getting quite crowded.
> 
> Looking through the history, <asm/init.h> was introduced at the same
> time as the ia64 port because for some reasons most of the macros
> where duplicated. This was changed in 72c07f413879 and I don't
> foresee any reason to require arch specific definition for init.h
> in the near future.
> 
> Therefore remove asm/init.h for both x86 and arm (the only definition
> is moved in setup.h). With that RISC-V will not need to introduce
> an empty header.
> 
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 

Hi Julien,

The arm part looks good to me, I’ve also built x86, arm32 and arm64 and
no problems found.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 12:25:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 12:25:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475277.736898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFaAD-0008E8-01; Wed, 11 Jan 2023 12:25:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475277.736898; Wed, 11 Jan 2023 12:25:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFaAC-0008E1-Ry; Wed, 11 Jan 2023 12:25:04 +0000
Received: by outflank-mailman (input) for mailman id 475277;
 Wed, 11 Jan 2023 12:25:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zIvS=5I=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFaAC-0008Dv-4c
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 12:25:04 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8e66b5f-91aa-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 13:25:02 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0523A177FF;
 Wed, 11 Jan 2023 12:25:02 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D519413591;
 Wed, 11 Jan 2023 12:25:01 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id xE+2Mp2qvmMUSAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 11 Jan 2023 12:25:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8e66b5f-91aa-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673439902; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=vZMspio6CqflV2FoZGlGgB3ql5H0jWaCysfX0yUnYDQ=;
	b=i1j/ru4dgYR90bYOPjq+JSLrzSFu3i9s/RaWWGUijp2M6fS3/bNXgZgzTq+7/Wkoxb+Vco
	YrOwY5iy3Qm217kyCcQYgZQdqOPNNKWA09rKXzprgg7CG/OpT7LMAK+Y7t62I9v+Y5dJbC
	qnsseM3aCsLj/JhMkaXRg2Vdz86A4+o=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	sstabellini@kernel.org
Subject: [GIT PULL] xen: branch for v6.2-rc4
Date: Wed, 11 Jan 2023 13:25:01 +0100
Message-Id: <20230111122501.21815-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.2-rc4-tag

xen: branch for v6.2-rc4

It contains:
- 2 cleanup patches
- a fix of a memory leak in the Xen pvcalls frontend driver
- a fix of a locking issue in the Xen hypervisor console driver

Thanks.

Juergen

 arch/x86/xen/p2m.c                  |  5 ----
 drivers/block/xen-blkback/xenbus.c  |  4 +--
 drivers/block/xen-blkfront.c        |  3 +--
 drivers/char/tpm/xen-tpmfront.c     |  3 +--
 drivers/gpu/drm/xen/xen_drm_front.c |  3 +--
 drivers/input/misc/xen-kbdfront.c   |  5 ++--
 drivers/net/xen-netback/xenbus.c    |  3 +--
 drivers/net/xen-netfront.c          |  4 +--
 drivers/pci/xen-pcifront.c          |  4 +--
 drivers/scsi/xen-scsifront.c        |  4 +--
 drivers/tty/hvc/hvc_xen.c           | 50 +++++++++++++++++++++++--------------
 drivers/usb/host/xen-hcd.c          |  4 +--
 drivers/video/fbdev/xen-fbfront.c   |  6 ++---
 drivers/xen/pvcalls-back.c          |  3 +--
 drivers/xen/pvcalls-front.c         |  7 +++---
 drivers/xen/xen-pciback/xenbus.c    |  4 +--
 drivers/xen/xen-scsiback.c          |  4 +--
 include/xen/xenbus.h                |  2 +-
 net/9p/trans_xen.c                  |  3 +--
 sound/xen/xen_snd_front.c           |  3 +--
 20 files changed, 54 insertions(+), 70 deletions(-)

Dawei Li (1):
      xen: make remove callback of xen driver void returned

Jiapeng Chong (1):
      x86/xen: Remove the unused function p2m_index()

Oleksii Moisieiev (1):
      xen/pvcalls: free active map buffer on pvcalls_front_free_map

Roger Pau Monne (1):
      hvc/xen: lock console list traversal


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 12:31:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 12:31:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475283.736908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFaFt-0001Eg-KD; Wed, 11 Jan 2023 12:30:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475283.736908; Wed, 11 Jan 2023 12:30:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFaFt-0001EZ-HN; Wed, 11 Jan 2023 12:30:57 +0000
Received: by outflank-mailman (input) for mailman id 475283;
 Wed, 11 Jan 2023 12:30:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Uwfa=5I=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pFaFr-0001ER-ME
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 12:30:55 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cac2bd1c-91ab-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 13:30:54 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id az20so17363314ejc.1
 for <xen-devel@lists.xenproject.org>; Wed, 11 Jan 2023 04:30:54 -0800 (PST)
Received: from [192.168.1.93] (adsl-211.37.6.0.tellas.gr. [37.6.0.211])
 by smtp.gmail.com with ESMTPSA id
 r18-20020a1709061bb200b0082000f8d871sm6178680ejg.152.2023.01.11.04.30.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 11 Jan 2023 04:30:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cac2bd1c-91ab-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Yjk1YZNghOMLZI3FrMEWJU4Cn7nFZUvUvjgjLRR3p2k=;
        b=AOjelN5z2UIcIODUnVgWrlBIX9DOy+sD3o6f/jPR74RP7uzVY3+4DfdoB61jI7tYzO
         Yh9qkGKx+gMTTmociq21tB0DYouTv4A3gmAQzXrS7xELrHHOyGcMIgl6HWqAPQSETSDL
         PkiF1MlDjNt2A4rKQvARidkk8pdSLm+/VhEfTEBwlF8uGfmUTHJqURnBwQzRDHHFXnLw
         K+uHPZWH4726AL1M+vylSEVP1x/bIZ3zxXydGTzagT95fZHACiApMt623VOnGSNnspFK
         RFaY56bPgFPeiOAulfAkikKH3BdqEdcAGD5yQ0OwfUrAJB908y3AruuYVMP/PAzZWE3o
         USPQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Yjk1YZNghOMLZI3FrMEWJU4Cn7nFZUvUvjgjLRR3p2k=;
        b=3vfa4hgEx7RyGeu5KhqSO7K++HP+BpeY/qrS+AEJ8VSdGf+EKUqanMZDSLfztD8daK
         cvnwqKDfcwHzxKIwwTx96lr6dCeozYbIKEEqemXIjdmljGtncSlk8mx+nQ86zpp/0GTg
         ScZ/E+zmbOUd4kvukUOzSLFpRRaBRO8wUlWJZeqZlqz9l3HXMKU5YjHWwmrdhqpLp0vX
         8//ZMGjOmdXvnU8AN8PagTSs/ExugjJHgfDTDI0Rn6JXDEcx+I8x2EEqm55h01vYPnBb
         zwtly+cGTqUb+7egxOugPDQhref9YWmQrge5Rb6npepsixaU0bhwfwCwHnXSVy/AUwqN
         AdCQ==
X-Gm-Message-State: AFqh2krGc/JL7e/QJOWjuncIuB/KwWCiBfVdo7Jt9bVBSZBfssl3Hdju
	AVDwEuzSRlrCcu8DihSsVFA=
X-Google-Smtp-Source: AMrXdXvNAsGU5q66/AH9qF3roK03cLm2BWTh8G7reASo6tOvzixzJEnl0XWV5s9mWLkvF68lyIWEPg==
X-Received: by 2002:a17:907:8d16:b0:7c4:fa17:71fe with SMTP id tc22-20020a1709078d1600b007c4fa1771femr9867416ejc.45.1673440254190;
        Wed, 11 Jan 2023 04:30:54 -0800 (PST)
Message-ID: <2acf07aa-ec56-99fc-765e-70fb7e753547@gmail.com>
Date: Wed, 11 Jan 2023 14:30:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Oleksii <oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
 <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
 <06185fcfb8cbd849df4b033efa923b55c054738d.camel@gmail.com>
 <1b6ee20d-2f32-ab38-83ec-69c33baf42fd@suse.com>
 <0398c48c-cc5d-a4fb-354f-54e3c5900d18@gmail.com>
 <644334f1-8e1d-3203-e942-0eb3ef5584a9@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <644334f1-8e1d-3203-e942-0eb3ef5584a9@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 1/11/23 13:38, Jan Beulich wrote:
> On 11.01.2023 11:22, Xenia Ragiadakou wrote:
>>
>> On 1/11/23 11:08, Jan Beulich wrote:
>>> On 11.01.2023 09:47, Oleksii wrote:
>>>> On Tue, 2023-01-10 at 17:58 +0100, Jan Beulich wrote:
>>>>> On 10.01.2023 16:17, Oleksii Kurochko wrote:
>>>>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>>>> ---
>>>>>> Changes in V3:
>>>>>>       - Nothing changed
>>>>>> ---
>>>>>> Changes in V2:
>>>>>>       - Remove unneeded now types from <asm/types.h>
>>>>>
>>>>> I guess you went a little too far: Types used in common code, even if
>>>>> you
>>>> It looks then I didn't understand which one of types are needed.
>>>>
>>>> In "[PATCH v1 2/8] xen/riscv: introduce asm/types.h header file" all
>>>> types were introduced as most of them are used in <xen/types.h>:
>>>> __{u|s}{8|16|32|64}. Thereby it looks like the following types in
>>>> <asm/types.h> should be present from the start:
>>>>     typedef __signed__ char __s8;
>>>>     typedef unsigned char __u8;
>>>>
>>>>     typedef __signed__ short __s16;
>>>>     typedef unsigned short __u16;
>>>>
>>>>     typedef __signed__ int __s32;
>>>>     typedef unsigned int __u32;
>>>>
>>>>     #if defined(__GNUC__) && !defined(__STRICT_ANSI__)
>>>>     #if defined(CONFIG_RISCV_32)
>>>>       typedef __signed__ long long __s64;
>>>>       typedef unsigned long long __u64;
>>>>     #elif defined (CONFIG_RISCV_64)
>>>>       typedef __signed__ long __s64;
>>>>       typedef unsigned long __u64;
>>>>     #endif
>>>>     #endif
>>>>
>>>>    Then it turns out that there is no any sense in:
>>>>     typedef signed char s8;
>>>>     typedef unsigned char u8;
>>>>
>>>>     typedef signed short s16;
>>>>     typedef unsigned short u16;
>>>>
>>>>     typedef signed int s32;
>>>>     typedef unsigned int u32;
>>>>
>>>>     typedef signed long long s64;
>>>>     typedef unsigned long long u64;
>>>>       OR
>>>>     typedef signed long s64;
>>>>     typedef unsigned long u64;
>>>> As I understand instead of them should be used: {u|s}int<N>_t.
>>>
>>> Hmm, the situation is worse than I thought (recalled) it was: You're
>>> right, xen/types.h actually uses __{u,s}<N>. So I'm sorry for mis-
>>> guiding you; we'll need to do more cleanup first for asm/types.h to
>>> become smaller.
>>
>> What is the reason for not declaring __{u,s}<N> directly in xen/types.h
>> as type alias to {u,s}<N>?
>>
>> IIUC __{u,s}<N> and {u,s}<N> are needed by code ported from Linux while
>> Xen code should use {u|s}int<N>_t instead, right?
> 
> Yes. Hence in the long run only Linux files should get to see __{u,s}<N>
> and {u,s}<N>, perhaps from a new linux-types.h.

Thanks for the clarification. Could you please also help me understand
why the __signed__ keyword is required when declaring __{u,s}<N>?
I mean, why can't __{u,s}<N> be declared using the signed type
specifier, just like {u,s}<N>?

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:17:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:17:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475289.736920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFayf-0005Yd-5G; Wed, 11 Jan 2023 13:17:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475289.736920; Wed, 11 Jan 2023 13:17:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFayf-0005YW-1O; Wed, 11 Jan 2023 13:17:13 +0000
Received: by outflank-mailman (input) for mailman id 475289;
 Wed, 11 Jan 2023 13:17:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFayc-0005YQ-KP
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:17:10 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2079.outbound.protection.outlook.com [40.107.20.79])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3de542c8-91b2-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 14:17:05 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8558.eurprd04.prod.outlook.com (2603:10a6:102:215::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Wed, 11 Jan
 2023 13:17:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:17:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3de542c8-91b2-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iqydBGDl+UjeI6g9kwruVpimJmwH6MWBS2j22656D2/KZucAvPQwjiFjQKZQHa2llmJekV1OdrbyzFBdLyI+x3ddxCoTqH5P6CrUTPhcG9IHCDYSXJfsnHVXGXU+1/VSCQC90FLwtOpUr0ZXgjUNZP2PnJOvtXX8+ReM8Yk8Ac1r+AiJmgpo+cPAj+3ATQrrys5CXP5qpQzudmV5qDYyX0/jwh3Q7fTDtWcWPo5Y0HhAHXmyq1IIsWJUkrNCYWtknAkVmAQq4pryIurkdV8twRVCYtTPLuZ4FjLpw53G5paAxffD8t6Ray3pabYLvDrvzpx5THRbezC8ZLMRvuw5rw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nzrFD3tt/PvzTVbC4ffQHf9Qv8d3GT/J1THhPr70VZI=;
 b=ZymPjonLrqRvxOx50s1IAaz35c+dj9erKUpeBfurteWSto2X3YST9IyFXlGKuEGXF2m5qxIIpTImtHKU9vRZRhDm7Rp8rjB8Jn+9xSgmdNoR6Apw+hM+n7whimlIqKnurVxZySvVpcHmYcMkr+1qZVJELwXKTayR7H4atQU/IEIyeu7hyNW7nzufO2Q4O6xo5EUr7tYm+Nw3vHx2pQqfit/ED66Dr4uqNfZAdB6sXmuZX6RzOm9H4gYqlfmwdFJTZiZ//Yk78rTSIcLTYmq43fZPYD5aaGvnoNRhmfW9XJSEgwHkSeBMvAhNEaKvENXjIrhrhkhr5Aoebr7tWej4yA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzrFD3tt/PvzTVbC4ffQHf9Qv8d3GT/J1THhPr70VZI=;
 b=i50kvVXFd+R2D4mVRVZy1Xp7LMahj/9XjDt511rYsNJtkFr6g44sQQGiVzkCZwtSBDX2P2Lrlkt8FcGy6SrPpy4P6W+zKOFCD6W+WJM4YEfh6aUE+Iz2gvsoNpLQ2gpYZbHJRUjgr6K2JZAXZsfQJG4rXIM/j/8HvYXqaBht2/Jkrfam15bVQ/Xam1NYfP0FVOItmg/F8zug+mxS7hHJonAQLwZs9DEsm9J3KCH9qcvj6MTg9Lv1GO7P/mxBt3AnKhZz3Z6O049NFbYZgBRjSmYYZJg9zGAgJAO2yXm0npimrvVEciBv5aFNJvdnqRh6wom/1j1pG8evBrJPyRBWZw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <72930382-b216-4ec3-6d29-889a3bf0bc6e@suse.com>
Date: Wed, 11 Jan 2023 14:17:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Oleksii <oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
 <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
 <06185fcfb8cbd849df4b033efa923b55c054738d.camel@gmail.com>
 <1b6ee20d-2f32-ab38-83ec-69c33baf42fd@suse.com>
 <0398c48c-cc5d-a4fb-354f-54e3c5900d18@gmail.com>
 <644334f1-8e1d-3203-e942-0eb3ef5584a9@suse.com>
 <2acf07aa-ec56-99fc-765e-70fb7e753547@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2acf07aa-ec56-99fc-765e-70fb7e753547@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0143.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8558:EE_
X-MS-Office365-Filtering-Correlation-Id: 49d1b50e-24e9-4f93-fca3-08daf3d62134
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 49d1b50e-24e9-4f93-fca3-08daf3d62134
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 13:17:03.3892
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IZ2wtJMfXsN17lPxBHcqi3wKsJ69OXJ/+HLe4Q3yAYHK4jSgT2LIwnzYBuTwG+hYAWQgzfpajQH2E4oS//F6Jg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8558

On 11.01.2023 13:30, Xenia Ragiadakou wrote:
> Could you please help me understand also 
> why __signed__ keyword is required when declaring __{u,s}<N>?
> I mean why __{u,s}<N> cannot be declared using the signed type 
> specifier, just like {u,s}<N>?

I'm afraid I can't, as this looks to have been (blindly?) imported
from Linux very, very long ago. Having already put time into going
through our own history, I'm afraid I don't want to go further and
try to spot the reason for you by going through Linux's as well.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:30:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:30:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475295.736931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbBC-0007rS-8g; Wed, 11 Jan 2023 13:30:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475295.736931; Wed, 11 Jan 2023 13:30:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbBC-0007rL-52; Wed, 11 Jan 2023 13:30:10 +0000
Received: by outflank-mailman (input) for mailman id 475295;
 Wed, 11 Jan 2023 13:30:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbBB-0007rB-KZ
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:30:09 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2045.outbound.protection.outlook.com [40.107.21.45])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 103d15a8-91b4-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 14:30:07 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8209.eurprd04.prod.outlook.com (2603:10a6:20b:3e4::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:30:05 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:30:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 103d15a8-91b4-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mFIy+QvfbTeOv3/reyp9xdENLhPruCS2CAq4iDulJs/Py44UBJ8IfJN6jXIDw4iaQzn5JOX2y7FUeigS1yVeSY2+6jVprV4DLEqj+cMnDEYGJsUFo4yOy+0zHndShbG6FKos6VGil2JOnVeh/2U/dkET0feYV+UxGAB9yheRB3QA9ncUfpifeVADu9DcQaFTanigGtyir11GNTW8IXXc5eZCEZS48UGQZHglIO7HHtLhtQ0WahqhGNUSyezkkku2hX66c92JmjE/d/A5pxWpPQ/rFUenUvb+sx6UVEpdiWcM40KtlHdXZMjkGHYThHvLIYjawUdHpxPFwvi4dUPUjg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0TX1CqU2wGeQ3+EII8UKIjdIZad4JftlhHN3FtONqdI=;
 b=M03ZLfKZZGG6vFBANJwOjMoK3aY9aX/EDpw6rikctdsPd5+rXLPZtuxT2npVkVLJcCspCBwV/MjFelW5X8u6pC/6A9KrpXZzoCp40kz0zqATN3M85aEkUkR6kvRAiWd8cP01VSGyovJxK5JbMdr43FkKmyBOwK1kUo4t1Mw4Ke886NOdEuMHIgPxn8h8pTg1X1v5MlZFiwHjsaqTYGh25cHED0iWJaJYXQPYvQfbHzID55l3VVi8GCcmShpKBBIgUxSRD+aAsA8lQUYpeBuK3+MLjNKAToIACzLsR6vC0LFgicesiyXaVsNdaSfoEzgt6cxxIf7hpSpQLLBtK6QrTA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0TX1CqU2wGeQ3+EII8UKIjdIZad4JftlhHN3FtONqdI=;
 b=jU1cqnh5cEsEYvAv+3zrIGHI1UlKcTDAcNBNmDy24aZKjtWGf5ssUoHn/wuZErLhwgrqOOantorv+Jf9cnxdPrcQqKZ6aewG2AZZNP+RScqrc8tvRpGbFJlYMOa3aV6Lcl++7n4EfemNKljNGBlXaq09tdZYAA2/CiaKEQPI/hMBP8ncmna/Y5UAt7tCetBpkw4VnHwCSSbz4gFvE0yexmwWfEac5hfAnZQdK1m2mCGwFjTYDUL7Xf6ilZhb7dEOrTboQo6YNg0H9D/d+5j14NGjSphIg29hZs/DlKGNo17d3gii7D+yFW36nHQ7WX5biWzqBf/0SxhkIjJP6b830w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <306563b1-c040-6e28-c50e-0d742c59cb7c@suse.com>
Date: Wed, 11 Jan 2023 14:30:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen: Remove the arch specific header init.h
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
References: <20230111114409.7495-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230111114409.7495-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0002.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8209:EE_
X-MS-Office365-Filtering-Correlation-Id: f59bd13b-b996-4e8e-7f70-08daf3d7f319
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f59bd13b-b996-4e8e-7f70-08daf3d7f319
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 13:30:04.9649
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OcbW7KA4WD4T3pi9QjJrf6WslnpNqJtvNqSoVJCRKwDK4vlYknMLPhYJnUZNY/YpvJpl3hHSBU3CivH7lePUcw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8209

On 11.01.2023 12:44, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Both the x86 and (soon) RISC-V versions of init.h are empty. On Arm, it
> contains a structure that should not be used by any common code.
> 
> The structure init_info is used to store the information needed to set
> up the CPU currently being brought up. setup.h seems more suitable,
> even though that header is getting quite crowded.
> 
> Looking through the history, <asm/init.h> was introduced at the same
> time as the ia64 port because, for some reason, most of the macros
> were duplicated. This was changed in 72c07f413879, and I don't
> foresee any reason to require an arch-specific init.h in the near
> future.
> 
> Therefore remove asm/init.h for both x86 and Arm (the only definition
> is moved into setup.h). With that, RISC-V will not need to introduce
> an empty header.
> 
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:35:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475302.736941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbGg-00006m-1n; Wed, 11 Jan 2023 13:35:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475302.736941; Wed, 11 Jan 2023 13:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbGf-00006f-VE; Wed, 11 Jan 2023 13:35:49 +0000
Received: by outflank-mailman (input) for mailman id 475302;
 Wed, 11 Jan 2023 13:35:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbGe-00006V-KK; Wed, 11 Jan 2023 13:35:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbGe-00085c-Gw; Wed, 11 Jan 2023 13:35:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbGe-0002vq-0i; Wed, 11 Jan 2023 13:35:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbGe-0001jJ-0J; Wed, 11 Jan 2023 13:35:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a4tPSNcJ1Q+cgVxfRTd+oLsh4xRBRuM072Zyzw6m+zg=; b=NKLu0nj4CmfGd3kT5MbFJyBY2a
	AFlXGD5uFiAh/0BH/FP7syUm+WwhqNoRN/RW25JTG51bBIHbf0f7C65gI5XEUll8XQRWXEDmOpTt5
	46liEoccksVlCVEI/37DsJ1TISZOTnp26g1f0rEPn1Tc8fCnXvPUiiIjjiKknyJdStvY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175717-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175717: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7dd4b804e08041ff56c88bdd8da742d14b17ed25
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 13:35:48 +0000

flight 175717 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175717/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7dd4b804e08041ff56c88bdd8da742d14b17ed25
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   95 days
Failing since        173470  2022-10-08 06:21:34 Z   95 days  200 attempts
Testing same since   175717  2023-01-11 03:09:17 Z    0 days    1 attempts

------------------------------------------------------------
3319 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505897 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:36:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475308.736953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbHE-0000cH-BV; Wed, 11 Jan 2023 13:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475308.736953; Wed, 11 Jan 2023 13:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbHE-0000cA-8X; Wed, 11 Jan 2023 13:36:24 +0000
Received: by outflank-mailman (input) for mailman id 475308;
 Wed, 11 Jan 2023 13:36:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbHC-0000bs-Ju; Wed, 11 Jan 2023 13:36:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbHC-00085z-IB; Wed, 11 Jan 2023 13:36:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbHC-0002xa-6v; Wed, 11 Jan 2023 13:36:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbHC-0002RX-6V; Wed, 11 Jan 2023 13:36:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N8yjvevQDF8meG17XgBT9ibHclfwJsOAcZlZRFo/MRc=; b=gaEw19yT7PrAz0gUMCYOFbbrLq
	ZgfnQY7HtBGWT0JO+7w0xPzI6mPqfF3qz9dsxQyz7hxraNucG2IPCHWQz3kpRFgv/g8l4f19eMz9V
	pGB3YdwWLovTQyXD89Qv+kHKKLdGtW4MMiDmaQmIwp+yJAuF29XwIKEMmbnpBO/w9MvU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175721-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175721: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e66d450b6e0ffec635639df993ab43ce28b3383f
X-Osstest-Versions-That:
    xen=4d975798e11579fdf405b348543061129e01b0fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 13:36:22 +0000

flight 175721 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175721/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e66d450b6e0ffec635639df993ab43ce28b3383f
baseline version:
 xen                  4d975798e11579fdf405b348543061129e01b0fb

Last test of basis   175712  2023-01-10 22:00:26 Z    0 days
Testing same since   175721  2023-01-11 10:03:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4d975798e1..e66d450b6e  e66d450b6e0ffec635639df993ab43ce28b3383f -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:51:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:51:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475318.736964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbVf-000339-LS; Wed, 11 Jan 2023 13:51:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475318.736964; Wed, 11 Jan 2023 13:51:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbVf-000332-Ij; Wed, 11 Jan 2023 13:51:19 +0000
Received: by outflank-mailman (input) for mailman id 475318;
 Wed, 11 Jan 2023 13:51:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbVe-00032w-SC
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:51:18 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2077.outbound.protection.outlook.com [40.107.241.77])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0498bb00-91b7-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 14:51:16 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7352.eurprd04.prod.outlook.com (2603:10a6:10:1a8::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:51:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:51:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0498bb00-91b7-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
Date: Wed, 11 Jan 2023 14:51:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/9] x86/shadow: misc tidying
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0050.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7352:EE_
X-MS-Office365-Filtering-Correlation-Id: 860045e7-638c-45f0-b423-08daf3dae759
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 860045e7-638c-45f0-b423-08daf3dae759
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 13:51:13.7275
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6Ew2zOJpg+hVd2QZJ1gxeJehLbYzl5A1S7TWQyhu/+5Pz/4lWIZ4eNqrs+4r1K/OWEC1dnglGVKzv/n/P7h51A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7352

... or so I hope. The main observation was that we still have both
hash_vcpu_foreach() and hash_domain_foreach(), where the latter was
introduced in 2014/15 to replace the former. Only some eight years
later can we now complete this conversion. Everything else addresses
other things noticed along the way.

Note that patch 8 conflicts contextually but not functionally with
patches 3 and 4 from "x86/mm: address aspects noticed during XSA-410
work".

1: replace sh_reset_l3_up_pointers()
2: drop hash_vcpu_foreach()
3: rename hash_domain_foreach()
4: drop a few uses of mfn_valid()
5: L2H shadow type is PV32-only
6: re-work 4-level SHADOW_FOREACH_L2E()
7: reduce effort of hash calculation
8: call sh_detach_old_tables() directly
9: harden shadow_size()

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:52:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:52:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475325.736975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbWn-0003dJ-3s; Wed, 11 Jan 2023 13:52:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475325.736975; Wed, 11 Jan 2023 13:52:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbWn-0003dC-12; Wed, 11 Jan 2023 13:52:29 +0000
Received: by outflank-mailman (input) for mailman id 475325;
 Wed, 11 Jan 2023 13:52:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbWl-0003d2-Vu
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:52:27 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2081.outbound.protection.outlook.com [40.107.13.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2db852a9-91b7-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 14:52:25 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8551.eurprd04.prod.outlook.com (2603:10a6:10:2d6::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:52:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:52:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2db852a9-91b7-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d91b5315-a5bb-a6ee-c9bb-58974c733a4e@suse.com>
Date: Wed, 11 Jan 2023 14:52:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 1/9] x86/shadow: replace sh_reset_l3_up_pointers()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0063.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8551:EE_
X-MS-Office365-Filtering-Correlation-Id: dfc66c8c-e5ab-4393-7521-08daf3db113f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dfc66c8c-e5ab-4393-7521-08daf3db113f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 13:52:24.0355
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8551

Rather than doing a separate hash walk (and then even using the vCPU
variant, which is to go away), clear the up-pointers right in
sh_unpin(), as an alternative to the (now further limited) enlisting on
a "free floating" list fragment. This utilizes the fact that such list
fragments are traversed only for multi-page shadows (in shadow_free()).
Furthermore, sh_terminate_list() is only a safeguard anyway, and isn't
in use in the common case (it actually does anything only in BIGMEM
configurations).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -116,6 +116,9 @@ struct shadow_domain {
     /* OOS */
     bool_t oos_active;
 
+    /* Domain is in the process of leaving SHOPT_LINUX_L3_TOPLEVEL mode. */
+    bool unpinning_l3;
+
 #ifdef CONFIG_HVM
     /* Has this domain ever used HVMOP_pagetable_dying? */
     bool_t pagetable_dying_op;
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2302,29 +2302,6 @@ void shadow_prepare_page_type_change(str
 
 /**************************************************************************/
 
-/* Reset the up-pointers of every L3 shadow to 0.
- * This is called when l3 shadows stop being pinnable, to clear out all
- * the list-head bits so the up-pointer field is properly inititalised. */
-static int cf_check sh_clear_up_pointer(
-    struct vcpu *v, mfn_t smfn, mfn_t unused)
-{
-    mfn_to_page(smfn)->up = 0;
-    return 0;
-}
-
-void sh_reset_l3_up_pointers(struct vcpu *v)
-{
-    static const hash_vcpu_callback_t callbacks[SH_type_unused] = {
-        [SH_type_l3_64_shadow] = sh_clear_up_pointer,
-    };
-
-    HASH_CALLBACKS_CHECK(SHF_L3_64);
-    hash_vcpu_foreach(v, SHF_L3_64, callbacks, INVALID_MFN);
-}
-
-
-/**************************************************************************/
-
 static void sh_update_paging_modes(struct vcpu *v)
 {
     struct domain *d = v->domain;
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -960,6 +960,8 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
         }
         if ( l4count > 2 * d->max_vcpus )
         {
+            d->arch.paging.shadow.unpinning_l3 = true;
+
             /* Unpin all the pinned l3 tables, and don't pin any more. */
             page_list_for_each_safe(sp, t, &d->arch.paging.shadow.pinned_shadows)
             {
@@ -967,7 +969,8 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
                     sh_unpin(d, page_to_mfn(sp));
             }
             d->arch.paging.shadow.opt_flags &= ~SHOPT_LINUX_L3_TOPLEVEL;
-            sh_reset_l3_up_pointers(v);
+
+            d->arch.paging.shadow.unpinning_l3 = false;
         }
     }
 #endif
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -497,11 +497,6 @@ void shadow_blow_tables(struct domain *d
  */
 int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn);
 
-/* Reset the up-pointers of every L3 shadow to 0.
- * This is called when l3 shadows stop being pinnable, to clear out all
- * the list-head bits so the up-pointer field is properly inititalised. */
-void sh_reset_l3_up_pointers(struct vcpu *v);
-
 /******************************************************************************
  * Flags used in the return value of the shadow_set_lXe() functions...
  */
@@ -721,7 +716,7 @@ static inline void sh_unpin(struct domai
 {
     struct page_list_head tmp_list, *pin_list;
     struct page_info *sp, *next;
-    unsigned int i, head_type;
+    unsigned int i, head_type, sz;
 
     ASSERT(mfn_valid(smfn));
     sp = mfn_to_page(smfn);
@@ -733,20 +728,30 @@ static inline void sh_unpin(struct domai
         return;
     sp->u.sh.pinned = 0;
 
-    /* Cut the sub-list out of the list of pinned shadows,
-     * stitching it back into a list fragment of its own. */
+    sz = shadow_size(head_type);
+
+    /*
+     * Cut the sub-list out of the list of pinned shadows, stitching
+     * multi-page shadows back into a list fragment of their own.
+     */
     pin_list = &d->arch.paging.shadow.pinned_shadows;
     INIT_PAGE_LIST_HEAD(&tmp_list);
-    for ( i = 0; i < shadow_size(head_type); i++ )
+    for ( i = 0; i < sz; i++ )
     {
         ASSERT(sp->u.sh.type == head_type);
         ASSERT(!i || !sp->u.sh.head);
         next = page_list_next(sp, pin_list);
         page_list_del(sp, pin_list);
-        page_list_add_tail(sp, &tmp_list);
+        if ( sz > 1 )
+            page_list_add_tail(sp, &tmp_list);
+        else if ( head_type == SH_type_l3_64_shadow &&
+                  d->arch.paging.shadow.unpinning_l3 )
+            sp->up = 0;
         sp = next;
     }
-    sh_terminate_list(&tmp_list);
+
+    if ( sz > 1 )
+        sh_terminate_list(&tmp_list);
 
     sh_put_ref(d, smfn, 0);
 }
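
To illustrate the resulting control flow outside of Xen, here is a minimal,
self-contained C model of the reworked unpin path. All types and helpers
below (struct page, list_init, list_del, list_add_tail, unpin) are simplified
stand-ins invented for this sketch, not Xen's actual page_list API; it only
mirrors the decision in the hunk above: multi-page shadows are stitched into
a list fragment of their own, while single-page L3 shadows get their
up-pointer cleared inline when the domain is leaving L3-pinning mode.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for Xen's shadow page structures. */
struct page {
    struct page *prev, *next;   /* linkage on a circular list */
    unsigned long up;           /* up-pointer / list-head bits */
    bool pinned;
};

static void list_init(struct page *head)
{
    head->prev = head->next = head;
}

static void list_del(struct page *p)
{
    p->prev->next = p->next;
    p->next->prev = p->prev;
    p->prev = p->next = NULL;
}

static void list_add_tail(struct page *p, struct page *head)
{
    p->prev = head->prev;
    p->next = head;
    head->prev->next = p;
    head->prev = p;
}

/*
 * Sketch of the reworked sh_unpin() loop: remove the sz pages of the
 * shadow from the pinned list; multi-page shadows go onto a fragment of
 * their own (which real code would then terminate), single-page L3
 * shadows get their up-pointer cleared inline while the domain is
 * leaving SHOPT_LINUX_L3_TOPLEVEL mode - replacing the old separate
 * hash walk (sh_reset_l3_up_pointers()).
 */
static void unpin(struct page *sp, unsigned int sz,
                  bool unpinning_l3, struct page *tmp_list)
{
    sp->pinned = false;
    list_init(tmp_list);
    for ( unsigned int i = 0; i < sz; i++ )
    {
        struct page *next = sp->next;

        list_del(sp);
        if ( sz > 1 )
            list_add_tail(sp, tmp_list);
        else if ( unpinning_l3 )
            sp->up = 0;
        sp = next;
    }
}
```

Under this model, unpinning a single-page shadow with unpinning_l3 set
zaps its up-pointer and leaves the fragment list empty, while a two-page
shadow ends up on the fragment list with its up-pointer untouched.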



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:52:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:52:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475329.736986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbXE-00049g-F3; Wed, 11 Jan 2023 13:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475329.736986; Wed, 11 Jan 2023 13:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbXE-00049Z-9u; Wed, 11 Jan 2023 13:52:56 +0000
Received: by outflank-mailman (input) for mailman id 475329;
 Wed, 11 Jan 2023 13:52:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbXD-000486-17; Wed, 11 Jan 2023 13:52:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbXC-00007H-S9; Wed, 11 Jan 2023 13:52:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbXC-0003jc-Ez; Wed, 11 Jan 2023 13:52:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFbXC-0005rG-EV; Wed, 11 Jan 2023 13:52:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WmcN1gR5iWVs5Cq2Jm0coIAsjVLXJHEr0jlramojE7k=; b=05Ha6d8vu7JIieARrR512kArVI
	GDEAJCTk/NQb35TbztCO4Ld/ro9jKEnZ5UYGvIXEn5RH9bT24xCb5C+4c/6aj2yZGWpfGz9tb/JPR
	+CCImAg8TY4oDci0idBOkkM3poQGspZbp/bGnKY/9jJlk67BHo7CYYS5ULFUmI5e4amE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175722-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175722: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 13:52:54 +0000

flight 175722 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175722/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    3 days
Failing since        175627  2023-01-08 14:40:14 Z    2 days   15 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:52:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:52:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475332.736997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbXH-0004RR-RI; Wed, 11 Jan 2023 13:52:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475332.736997; Wed, 11 Jan 2023 13:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbXH-0004RJ-Nm; Wed, 11 Jan 2023 13:52:59 +0000
Received: by outflank-mailman (input) for mailman id 475332;
 Wed, 11 Jan 2023 13:52:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbXG-0004QI-Lp
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:52:58 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2084.outbound.protection.outlook.com [40.107.13.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3fde7629-91b7-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 14:52:55 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8551.eurprd04.prod.outlook.com (2603:10a6:10:2d6::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:52:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:52:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3fde7629-91b7-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Xs5ktteQeAbjbF3OaY6w1lsapMxtJ+FEdx2EF+KXaa6Hg8XKaz4GOvsnsyPutwzKL8aBDDw44EXosht5S/FQ+QhYD4utSdOypESW8XhogJpvymhla8IqWakzLsoJJk9aGJ04BPGwkBtdgoncLruUdf2KbkfESdvI6xGg1PmAQSAlWPCqrYORBbG9DVtqovdIRMk1/RdlQcvkBmXTEp1E0/QAwqU7lxGhKXUmRl+qgWVd91lnWc/a6+FjNTo/de8A1txd5uPhv3cJuBESO0nxsxQ2s5Rtzv7jVAiZpHUrLT5bNZ2+zxkzYRnpl88U/YrSV9HLXGXgEDAouBWYyOG3Bw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rQVzcV/aXrmtFWKLwSeA0Wp+ghnodUCv9vHrPOZHrwE=;
 b=HczbtIbvSt7k/ToZ2QyGMzLYTX9L5rNUgcvNfo5aRLIBiXrVya3bu6RTir5TLbcl9FSmIhfursO2Flk5MJLqd3U2cb2NwEER4+rM9t+PvMgv/zzTLkJc8vZ0P2sJ1usGsVjGOde5El+njYW4JvXkEsFL0E0U908xNJXZigyYFi9Fb5va4jhC+n2LyDD2RNHnFDGQKn3EYwDjfvDJkSVumavXa+exn3r9DBkbcGCYggSxRjooUjUdiOvkD/EzF/kIESR6eb5UnccqyPMjRisjLXCyZIHXbJ1GqgIPQriTKbEMpUgiEVVR/e1NE6/mPqeLiKl4OAXzXNGnT0OILwL7Zw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rQVzcV/aXrmtFWKLwSeA0Wp+ghnodUCv9vHrPOZHrwE=;
 b=uuLYYp2LtOXjuTjT8YtjEvA90DwT0rzNHs1DifEbBz+1TlOpIeHmxxoelpKhRWFbZUbdEUbCDrU5+CwkPh8Rm6FFcBtp1UsGk/IIv3/khk6nC7kz7oFlB+LUoqmcOlYnS6mFfWRm1sC+9pB81XW3dM82Kw9Hi5LUOSE0pABS8MNgcpyxsigTlOqyNBRkdGpN8M05Uj8nLw2v29JWFz9pwMMqCn2x2DUfAnlXB8TLA1s2w5pzo+cbKc4GNk0dT8E4mWDOwe6SwjP/PDipBbDLEKPIdMBT4uZbmrUAbBuM4f+oPz8zZZ7ohuW78AZU5khiQnxWW5B3JI+cJziZ7/6Gqg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <17d7ca95-4c94-93e3-9a42-cc95512a66cc@suse.com>
Date: Wed, 11 Jan 2023 14:52:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 2/9] x86/shadow: drop hash_vcpu_foreach()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0064.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8551:EE_
X-MS-Office365-Filtering-Correlation-Id: d6b15711-5617-4149-eee7-08daf3db2365
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d6b15711-5617-4149-eee7-08daf3db2365
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 13:52:54.5335
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MS4y41vVZFF7t+NIy7x67Bf84MUqL61ubBx1bn82Z+aeb1Av2lyXupvPjeLD7C/kV6tyqgUnrbHL7/F55l8tPQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8551

The domain-based variant is easily usable by shadow_audit_tables(); all
that's needed is conversion of the callback functions.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1640,59 +1640,11 @@ bool shadow_hash_delete(struct domain *d
     return true;
 }
 
-typedef int (*hash_vcpu_callback_t)(struct vcpu *v, mfn_t smfn, mfn_t other_mfn);
 typedef int (*hash_domain_callback_t)(struct domain *d, mfn_t smfn, mfn_t other_mfn);
 
 #define HASH_CALLBACKS_CHECK(mask) \
     BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
 
-static void hash_vcpu_foreach(struct vcpu *v, unsigned int callback_mask,
-                              const hash_vcpu_callback_t callbacks[],
-                              mfn_t callback_mfn)
-/* Walk the hash table looking at the types of the entries and
- * calling the appropriate callback function for each entry.
- * The mask determines which shadow types we call back for, and the array
- * of callbacks tells us which function to call.
- * Any callback may return non-zero to let us skip the rest of the scan.
- *
- * WARNING: Callbacks MUST NOT add or remove hash entries unless they
- * then return non-zero to terminate the scan. */
-{
-    int i, done = 0;
-    struct domain *d = v->domain;
-    struct page_info *x;
-
-    ASSERT(paging_locked_by_me(d));
-
-    /* Can be called via p2m code &c after shadow teardown. */
-    if ( unlikely(!d->arch.paging.shadow.hash_table) )
-        return;
-
-    /* Say we're here, to stop hash-lookups reordering the chains */
-    ASSERT(d->arch.paging.shadow.hash_walking == 0);
-    d->arch.paging.shadow.hash_walking = 1;
-
-    for ( i = 0; i < SHADOW_HASH_BUCKETS; i++ )
-    {
-        /* WARNING: This is not safe against changes to the hash table.
-         * The callback *must* return non-zero if it has inserted or
-         * deleted anything from the hash (lookups are OK, though). */
-        for ( x = d->arch.paging.shadow.hash_table[i]; x; x = next_shadow(x) )
-        {
-            if ( callback_mask & (1 << x->u.sh.type) )
-            {
-                ASSERT(x->u.sh.type <= SH_type_max_shadow);
-                ASSERT(callbacks[x->u.sh.type] != NULL);
-                done = callbacks[x->u.sh.type](v, page_to_mfn(x),
-                                               callback_mfn);
-                if ( done ) break;
-            }
-        }
-        if ( done ) break;
-    }
-    d->arch.paging.shadow.hash_walking = 0;
-}
-
 static void hash_domain_foreach(struct domain *d,
                                 unsigned int callback_mask,
                                 const hash_domain_callback_t callbacks[],
@@ -3211,7 +3163,7 @@ int shadow_domctl(struct domain *d,
 void shadow_audit_tables(struct vcpu *v)
 {
     /* Dispatch table for getting per-type functions */
-    static const hash_vcpu_callback_t callbacks[SH_type_unused] = {
+    static const hash_domain_callback_t callbacks[SH_type_unused] = {
 #if SHADOW_AUDIT & (SHADOW_AUDIT_ENTRIES | SHADOW_AUDIT_ENTRIES_FULL)
 # ifdef CONFIG_HVM
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l1_table, 2),
@@ -3258,7 +3210,7 @@ void shadow_audit_tables(struct vcpu *v)
     HASH_CALLBACKS_CHECK(SHADOW_AUDIT & (SHADOW_AUDIT_ENTRIES |
                                          SHADOW_AUDIT_ENTRIES_FULL)
                          ? SHF_page_type_mask : 0);
-    hash_vcpu_foreach(v, mask, callbacks, INVALID_MFN);
+    hash_domain_foreach(v->domain, mask, callbacks, INVALID_MFN);
 }
 
 #ifdef CONFIG_PV
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -326,32 +326,32 @@ static void sh_audit_gw(struct vcpu *v,
     if ( mfn_valid(gw->l4mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l4mfn,
                                                 SH_type_l4_shadow))) )
-        (void) sh_audit_l4_table(v, smfn, INVALID_MFN);
+        sh_audit_l4_table(d, smfn, INVALID_MFN);
     if ( mfn_valid(gw->l3mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l3mfn,
                                                 SH_type_l3_shadow))) )
-        (void) sh_audit_l3_table(v, smfn, INVALID_MFN);
+        sh_audit_l3_table(d, smfn, INVALID_MFN);
 #endif /* PAE or 64... */
     if ( mfn_valid(gw->l2mfn) )
     {
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2_shadow))) )
-            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
+            sh_audit_l2_table(d, smfn, INVALID_MFN);
 #if GUEST_PAGING_LEVELS >= 4 /* 32-bit PV only */
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2h_shadow))) )
-            (void) sh_audit_l2_table(v, smfn, INVALID_MFN);
+            sh_audit_l2_table(d, smfn, INVALID_MFN);
 #endif
     }
     if ( mfn_valid(gw->l1mfn)
          && mfn_valid((smfn = get_shadow_status(d, gw->l1mfn,
                                                 SH_type_l1_shadow))) )
-        (void) sh_audit_l1_table(v, smfn, INVALID_MFN);
+        sh_audit_l1_table(d, smfn, INVALID_MFN);
     else if ( (guest_l2e_get_flags(gw->l2e) & _PAGE_PRESENT)
               && (guest_l2e_get_flags(gw->l2e) & _PAGE_PSE)
               && mfn_valid(
               (smfn = get_fl1_shadow_status(d, guest_l2e_get_gfn(gw->l2e)))) )
-        (void) sh_audit_fl1_table(v, smfn, INVALID_MFN);
+        sh_audit_fl1_table(d, smfn, INVALID_MFN);
 #endif /* SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES */
 }
 
@@ -3950,9 +3950,8 @@ static const char *sh_audit_flags(const
     return NULL;
 }
 
-int cf_check sh_audit_l1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
+int cf_check sh_audit_l1_table(struct domain *d, mfn_t sl1mfn, mfn_t x)
 {
-    struct domain *d = v->domain;
     guest_l1e_t *gl1e, *gp;
     shadow_l1e_t *sl1e;
     mfn_t mfn, gmfn, gl1mfn;
@@ -4019,7 +4018,7 @@ int cf_check sh_audit_l1_table(struct vc
     return done;
 }
 
-int cf_check sh_audit_fl1_table(struct vcpu *v, mfn_t sl1mfn, mfn_t x)
+int cf_check sh_audit_fl1_table(struct domain *d, mfn_t sl1mfn, mfn_t x)
 {
     guest_l1e_t *gl1e, e;
     shadow_l1e_t *sl1e;
@@ -4045,9 +4044,8 @@ int cf_check sh_audit_fl1_table(struct v
     return 0;
 }
 
-int cf_check sh_audit_l2_table(struct vcpu *v, mfn_t sl2mfn, mfn_t x)
+int cf_check sh_audit_l2_table(struct domain *d, mfn_t sl2mfn, mfn_t x)
 {
-    struct domain *d = v->domain;
     guest_l2e_t *gl2e, *gp;
     shadow_l2e_t *sl2e;
     mfn_t mfn, gmfn, gl2mfn;
@@ -4097,9 +4095,8 @@ int cf_check sh_audit_l2_table(struct vc
 }
 
 #if GUEST_PAGING_LEVELS >= 4
-int cf_check sh_audit_l3_table(struct vcpu *v, mfn_t sl3mfn, mfn_t x)
+int cf_check sh_audit_l3_table(struct domain *d, mfn_t sl3mfn, mfn_t x)
 {
-    struct domain *d = v->domain;
     guest_l3e_t *gl3e, *gp;
     shadow_l3e_t *sl3e;
     mfn_t mfn, gmfn, gl3mfn;
@@ -4145,9 +4142,8 @@ int cf_check sh_audit_l3_table(struct vc
     return 0;
 }
 
-int cf_check sh_audit_l4_table(struct vcpu *v, mfn_t sl4mfn, mfn_t x)
+int cf_check sh_audit_l4_table(struct domain *d, mfn_t sl4mfn, mfn_t x)
 {
-    struct domain *d = v->domain;
     guest_l4e_t *gl4e, *gp;
     shadow_l4e_t *sl4e;
     mfn_t mfn, gmfn, gl4mfn;
--- a/xen/arch/x86/mm/shadow/multi.h
+++ b/xen/arch/x86/mm/shadow/multi.h
@@ -83,19 +83,19 @@ SHADOW_INTERNAL_NAME(sh_remove_l3_shadow
 #if SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_l1_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl1mfn, mfn_t x);
+    (struct domain *d, mfn_t sl1mfn, mfn_t x);
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_fl1_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl1mfn, mfn_t x);
+    (struct domain *d, mfn_t sl1mfn, mfn_t x);
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_l2_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl2mfn, mfn_t x);
+    (struct domain *d, mfn_t sl2mfn, mfn_t x);
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_l3_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl3mfn, mfn_t x);
+    (struct domain *d, mfn_t sl3mfn, mfn_t x);
 int cf_check
 SHADOW_INTERNAL_NAME(sh_audit_l4_table, GUEST_LEVELS)
-    (struct vcpu *v, mfn_t sl4mfn, mfn_t x);
+    (struct domain *d, mfn_t sl4mfn, mfn_t x);
 #endif
 
 extern const struct paging_mode



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:53:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:53:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475344.737008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbXr-0005IV-6M; Wed, 11 Jan 2023 13:53:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475344.737008; Wed, 11 Jan 2023 13:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbXr-0005II-2H; Wed, 11 Jan 2023 13:53:35 +0000
Received: by outflank-mailman (input) for mailman id 475344;
 Wed, 11 Jan 2023 13:53:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbXp-00047w-ET
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:53:33 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2048.outbound.protection.outlook.com [40.107.13.48])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55891915-91b7-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 14:53:32 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8551.eurprd04.prod.outlook.com (2603:10a6:10:2d6::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:53:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:53:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55891915-91b7-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=URIUJvBUnhD4gku6qib2sVtrZYoKIBmoNUghFiwwgyeCoScB/mUwsrbzK8K5W4tyzqxwCr7zhRYfIpyZ8wMjLAjydK0/g42FepxwPItjpLnwUxtlHE5B8zWKCQSe1rl+StcCTQ2ryrmWj4K1nibMoXRGtdXpcRhlFYWvjSntKPo0gr/iFVEjLXj6o0XCP2NYW7o1YF+LhZ3DIc7wdbvBHCtMygtMAUo0bLZB9kmYbhCrLqpW65Tjigt6uQny5XOLok3DNLR3aCscbI8T/ylM5XWRxjMwbuotsWwjcR9Mqs41iCKajdhRCaelFc96sKMnDHZjTqa89u+iEyh6y332zw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zfV7FH4PlbLPIPiG+PzMORtLnBIDSYzDeJCNwEUQtis=;
 b=Ow+4oszuWC7+szzZOqmusakF1TkHARoostoEZWy7tldhwsEl6s3oM/wYm8fxzaUljzfMcDTC3MsJuoFlmbvzWUeRpmfdB/k+w51kAf4CsAWCi7WkjYzpMPAImpqx8hNwm9Qr5odM7Y5vv7TGUVqeeY/ULR/wKW5aN8god6ck7GQv46oeqzlVP+QUbooE1XVIhAbkm4GsJMbcnoUABlJtUGzxP+oXcBS6La2nlT3DMbrX6oFQuM4OynXvqO5t9az6MlVcL6yX0NXI1CVBNn0+5VGB7C41FNcBaUvP40rMlz+C2YSQebaW3saoDDuVZ2FQ/xJzDUokzqlqNIW551Dgag==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zfV7FH4PlbLPIPiG+PzMORtLnBIDSYzDeJCNwEUQtis=;
 b=bIS/MneUtTtRjQmNLzGJJdMBFWtz09BScvbX22KDROsyUmACdCZLYqsDabtUbLqaXt1rZlVpwtbXq0OK5TOzEyHLI0dfwpngM6eMsWqQQeAyk0YgGIniXBikGDzPDr7UfTFPdLml9TovrdhahCyBhx3IC+NZMhBNErmIg/GXxux1axKJJYJFd1EP3IreZc+CWZ1SauoCHQ/RiWl7VORU/JyQAzH0cEQVdoFV6ZvXhNO6lrJP5vEp3JX71aRPxiHcP5Q+zHpjs005pyMK3uOLfYnkOgiPcLpmDExMsIBZ/VL3L/zffCyt+fURzlqkIfsxoW2Ripa7FmJqw3p4BvKC4g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <dddaefaf-3112-e33a-4430-d3cc6b0ec2ca@suse.com>
Date: Wed, 11 Jan 2023 14:53:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 3/9] x86/shadow: rename hash_domain_foreach()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0181.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8551:EE_
X-MS-Office365-Filtering-Correlation-Id: c4b00fea-09ab-4c99-1dbb-08daf3db38fc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c4b00fea-09ab-4c99-1dbb-08daf3db38fc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 13:53:30.6874
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2tjwHUZPplVPns/+4HE8dfFY4HNDs2WIGua3tH8rxVltNrEzsV+Q6b/lnKvCSeZ2tvlqx2kXoT2cSzZs5DH/eQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8551

The "domain" in there has become meaningless; drop it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1640,15 +1640,15 @@ bool shadow_hash_delete(struct domain *d
     return true;
 }
 
-typedef int (*hash_domain_callback_t)(struct domain *d, mfn_t smfn, mfn_t other_mfn);
+typedef int (*hash_callback_t)(struct domain *d, mfn_t smfn, mfn_t other_mfn);
 
 #define HASH_CALLBACKS_CHECK(mask) \
     BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
 
-static void hash_domain_foreach(struct domain *d,
-                                unsigned int callback_mask,
-                                const hash_domain_callback_t callbacks[],
-                                mfn_t callback_mfn)
+static void hash_foreach(struct domain *d,
+                         unsigned int callback_mask,
+                         const hash_callback_t callbacks[],
+                         mfn_t callback_mfn)
 /* Walk the hash table looking at the types of the entries and
  * calling the appropriate callback function for each entry.
  * The mask determines which shadow types we call back for, and the array
@@ -1784,7 +1784,7 @@ int sh_remove_write_access(struct domain
                            unsigned long fault_addr)
 {
     /* Dispatch table for getting per-type functions */
-    static const hash_domain_callback_t callbacks[SH_type_unused] = {
+    static const hash_callback_t callbacks[SH_type_unused] = {
 #ifdef CONFIG_HVM
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_write_access_from_l1, 2),
         [SH_type_fl1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_write_access_from_l1, 2),
@@ -1969,7 +1969,7 @@ int sh_remove_write_access(struct domain
     else
         perfc_incr(shadow_writeable_bf);
     HASH_CALLBACKS_CHECK(SHF_L1_ANY | SHF_FL1_ANY);
-    hash_domain_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
+    hash_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
 
     /* If that didn't catch the mapping, then there's some non-pagetable
      * mapping -- ioreq page, grant mapping, &c. */
@@ -1997,7 +1997,7 @@ int sh_remove_all_mappings(struct domain
     struct page_info *page = mfn_to_page(gmfn);
 
     /* Dispatch table for getting per-type functions */
-    static const hash_domain_callback_t callbacks[SH_type_unused] = {
+    static const hash_callback_t callbacks[SH_type_unused] = {
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 2),
         [SH_type_fl1_32_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 2),
         [SH_type_l1_pae_shadow] = SHADOW_INTERNAL_NAME(sh_rm_mappings_from_l1, 3),
@@ -2021,7 +2021,7 @@ int sh_remove_all_mappings(struct domain
     /* Brute-force search of all the shadows, by walking the hash */
     perfc_incr(shadow_mappings_bf);
     HASH_CALLBACKS_CHECK(SHF_L1_ANY | SHF_FL1_ANY);
-    hash_domain_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
+    hash_foreach(d, SHF_L1_ANY | SHF_FL1_ANY, callbacks, gmfn);
 
     /* If that didn't catch the mapping, something is very wrong */
     if ( !sh_check_page_has_no_refs(page) )
@@ -2128,7 +2128,7 @@ void sh_remove_shadows(struct domain *d,
 
     /* Dispatch table for getting per-type functions: each level must
      * be called with the function to remove a lower-level shadow. */
-    static const hash_domain_callback_t callbacks[SH_type_unused] = {
+    static const hash_callback_t callbacks[SH_type_unused] = {
 #ifdef CONFIG_HVM
         [SH_type_l2_32_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 2),
         [SH_type_l2_pae_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 3),
@@ -2173,9 +2173,9 @@ void sh_remove_shadows(struct domain *d,
 
     /*
      * Lower-level shadows need to be excised from upper-level shadows. This
-     * call to hash_domain_foreach() looks dangerous but is in fact OK: each
-     * call will remove at most one shadow, and terminate immediately when
-     * it does remove it, so we never walk the hash after doing a deletion.
+     * call to hash_foreach() looks dangerous but is in fact OK: each call
+     * will remove at most one shadow, and terminate immediately when it does
+     * remove it, so we never walk the hash after doing a deletion.
      */
 #define DO_UNSHADOW(_type) do {                                         \
     t = (_type);                                                        \
@@ -2199,7 +2199,7 @@ void sh_remove_shadows(struct domain *d,
          (pg->shadow_flags & (1 << t)) )                                \
     {                                                                   \
         HASH_CALLBACKS_CHECK(SHF_page_type_mask);                       \
-        hash_domain_foreach(d, masks[t], callbacks, smfn);              \
+        hash_foreach(d, masks[t], callbacks, smfn);                     \
     }                                                                   \
 } while (0)
 
@@ -3163,7 +3163,7 @@ int shadow_domctl(struct domain *d,
 void shadow_audit_tables(struct vcpu *v)
 {
     /* Dispatch table for getting per-type functions */
-    static const hash_domain_callback_t callbacks[SH_type_unused] = {
+    static const hash_callback_t callbacks[SH_type_unused] = {
 #if SHADOW_AUDIT & (SHADOW_AUDIT_ENTRIES | SHADOW_AUDIT_ENTRIES_FULL)
 # ifdef CONFIG_HVM
         [SH_type_l1_32_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l1_table, 2),
@@ -3210,7 +3210,7 @@ void shadow_audit_tables(struct vcpu *v)
     HASH_CALLBACKS_CHECK(SHADOW_AUDIT & (SHADOW_AUDIT_ENTRIES |
                                          SHADOW_AUDIT_ENTRIES_FULL)
                          ? SHF_page_type_mask : 0);
-    hash_domain_foreach(v->domain, mask, callbacks, INVALID_MFN);
+    hash_foreach(v->domain, mask, callbacks, INVALID_MFN);
 }
 
 #ifdef CONFIG_PV



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:54:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475351.737019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbYI-0005tG-H2; Wed, 11 Jan 2023 13:54:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475351.737019; Wed, 11 Jan 2023 13:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbYI-0005t9-E7; Wed, 11 Jan 2023 13:54:02 +0000
Received: by outflank-mailman (input) for mailman id 475351;
 Wed, 11 Jan 2023 13:54:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbYG-0004QI-RZ
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:54:01 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2051.outbound.protection.outlook.com [40.107.13.51])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 653476f0-91b7-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 14:53:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8551.eurprd04.prod.outlook.com (2603:10a6:10:2d6::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:53:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:53:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 653476f0-91b7-11ed-b8d0-410ff93cb8f0
Message-ID: <cd8028ec-3188-3422-881e-28a3a6d8451c@suse.com>
Date: Wed, 11 Jan 2023 14:53:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 4/9] x86/shadow: drop a few uses of mfn_valid()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0119.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
hash table are all only ever written with valid MFNs or INVALID_MFN.
Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
these arrays.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
There are many more uses which can likely be replaced, but I think we're
better off doing this in piecemeal fashion.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -171,7 +171,7 @@ static void sh_oos_audit(struct domain *
         for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
         {
             mfn_t *oos = v->arch.paging.shadow.oos;
-            if ( !mfn_valid(oos[idx]) )
+            if ( mfn_eq(oos[idx], INVALID_MFN) )
                 continue;
 
             expected_idx = mfn_x(oos[idx]) % SHADOW_OOS_PAGES;
@@ -327,8 +327,7 @@ void oos_fixup_add(struct domain *d, mfn
             int i;
             for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
             {
-                if ( mfn_valid(oos_fixup[idx].smfn[i])
-                     && mfn_eq(oos_fixup[idx].smfn[i], smfn)
+                if ( mfn_eq(oos_fixup[idx].smfn[i], smfn)
                      && (oos_fixup[idx].off[i] == off) )
                     return;
             }
@@ -461,7 +460,7 @@ static void oos_hash_add(struct vcpu *v,
     idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
     oidx = idx;
 
-    if ( mfn_valid(oos[idx])
+    if ( !mfn_eq(oos[idx], INVALID_MFN)
          && (mfn_x(oos[idx]) % SHADOW_OOS_PAGES) == idx )
     {
         /* Punt the current occupant into the next slot */
@@ -470,8 +469,8 @@ static void oos_hash_add(struct vcpu *v,
         swap = 1;
         idx = (idx + 1) % SHADOW_OOS_PAGES;
     }
-    if ( mfn_valid(oos[idx]) )
-   {
+    if ( !mfn_eq(oos[idx], INVALID_MFN) )
+    {
         /* Crush the current occupant. */
         _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
         perfc_incr(shadow_unsync_evict);
@@ -607,7 +606,7 @@ void sh_resync_all(struct vcpu *v, int s
 
     /* First: resync all of this vcpu's oos pages */
     for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
-        if ( mfn_valid(oos[idx]) )
+        if ( !mfn_eq(oos[idx], INVALID_MFN) )
         {
             /* Write-protect and sync contents */
             _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
@@ -630,7 +629,7 @@ void sh_resync_all(struct vcpu *v, int s
 
         for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
         {
-            if ( !mfn_valid(oos[idx]) )
+            if ( mfn_eq(oos[idx], INVALID_MFN) )
                 continue;
 
             if ( skip )
@@ -2183,7 +2182,7 @@ void sh_remove_shadows(struct domain *d,
          !(pg->shadow_flags & (1 << t)) )                               \
         break;                                                          \
     smfn = shadow_hash_lookup(d, mfn_x(gmfn), t);                       \
-    if ( unlikely(!mfn_valid(smfn)) )                                   \
+    if ( unlikely(mfn_eq(smfn, INVALID_MFN)) )                          \
     {                                                                   \
         printk(XENLOG_G_ERR "gmfn %"PRI_mfn" has flags %#x"             \
                " but no type-%#x shadow\n",                             \
@@ -2751,7 +2750,7 @@ void shadow_teardown(struct domain *d, b
             int i;
             mfn_t *oos_snapshot = v->arch.paging.shadow.oos_snapshot;
             for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
-                if ( mfn_valid(oos_snapshot[i]) )
+                if ( !mfn_eq(oos_snapshot[i], INVALID_MFN) )
                 {
                     shadow_free(d, oos_snapshot[i]);
                     oos_snapshot[i] = INVALID_MFN;
@@ -2934,7 +2933,7 @@ static int shadow_one_bit_disable(struct
                 int i;
                 mfn_t *oos_snapshot = v->arch.paging.shadow.oos_snapshot;
                 for ( i = 0; i < SHADOW_OOS_PAGES; i++ )
-                    if ( mfn_valid(oos_snapshot[i]) )
+                    if ( !mfn_eq(oos_snapshot[i], INVALID_MFN) )
                     {
                         shadow_free(d, oos_snapshot[i]);
                         oos_snapshot[i] = INVALID_MFN;
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -110,7 +110,7 @@ get_fl1_shadow_status(struct domain *d,
 /* Look for FL1 shadows in the hash table */
 {
     mfn_t smfn = shadow_hash_lookup(d, gfn_x(gfn), SH_type_fl1_shadow);
-    ASSERT(!mfn_valid(smfn) || mfn_to_page(smfn)->u.sh.head);
+    ASSERT(mfn_eq(smfn, INVALID_MFN) || mfn_to_page(smfn)->u.sh.head);
     return smfn;
 }
 
@@ -2680,7 +2680,7 @@ static int cf_check sh_page_fault(
                 mfn_t smfn = pagetable_get_mfn(
                                  v->arch.paging.shadow.shadow_table[i]);
 
-                if ( mfn_valid(smfn) && (mfn_x(smfn) != 0) )
+                if ( mfn_x(smfn) )
                 {
                     used |= (mfn_to_page(smfn)->v.sh.back == mfn_x(gmfn));
 
@@ -3824,7 +3824,7 @@ static void cf_check sh_pagetable_dying(
                    : shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l2_pae_shadow);
         }
 
-        if ( mfn_valid(smfn) )
+        if ( !mfn_eq(smfn, INVALID_MFN) )
         {
             gmfn = _mfn(mfn_to_page(smfn)->v.sh.back);
             mfn_to_page(gmfn)->pagetable_dying = true;
@@ -3867,7 +3867,7 @@ static void cf_check sh_pagetable_dying(
     smfn = shadow_hash_lookup(d, mfn_x(gmfn), SH_type_l4_64_shadow);
 #endif
 
-    if ( mfn_valid(smfn) )
+    if ( !mfn_eq(smfn, INVALID_MFN) )
     {
         mfn_to_page(gmfn)->pagetable_dying = true;
         shadow_unhook_mappings(d, smfn, 1/* user pages only */);
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -769,8 +769,10 @@ get_shadow_status(struct domain *d, mfn_
 /* Look for shadows in the hash table */
 {
     mfn_t smfn = shadow_hash_lookup(d, mfn_x(gmfn), shadow_type);
-    ASSERT(!mfn_valid(smfn) || mfn_to_page(smfn)->u.sh.head);
+
+    ASSERT(mfn_eq(smfn, INVALID_MFN) || mfn_to_page(smfn)->u.sh.head);
     perfc_incr(shadow_get_shadow_status);
+
     return smfn;
 }
 



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:54:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:54:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475355.737030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbYc-0006Of-SR; Wed, 11 Jan 2023 13:54:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475355.737030; Wed, 11 Jan 2023 13:54:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbYc-0006OY-Nz; Wed, 11 Jan 2023 13:54:22 +0000
Received: by outflank-mailman (input) for mailman id 475355;
 Wed, 11 Jan 2023 13:54:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbYb-0004QI-0J
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:54:21 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2085.outbound.protection.outlook.com [40.107.13.85])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 711636b6-91b7-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 14:54:18 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8551.eurprd04.prod.outlook.com (2603:10a6:10:2d6::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:54:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:54:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 711636b6-91b7-11ed-b8d0-410ff93cb8f0
Message-ID: <af8ca228-473a-6777-4a4c-a474a5faec1f@suse.com>
Date: Wed, 11 Jan 2023 14:54:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 5/9] x86/shadow: L2H shadow type is PV32-only
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0017.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::27) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

Like for the various HVM-only types, save a little bit of code by suitably
"masking" this type out when !PV32.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I wasn't really sure whether it would be worthwhile to also update the
"#else" part of shadow_size(). Doing so would be a little tricky, as the
type to return 0 for has no name right now; I'd need to move down the
#undef to allow for that. Thoughts?
---
v2: Merely comment out the sh_type_to_size[] entry.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1740,9 +1740,11 @@ void sh_destroy_shadow(struct domain *d,
     case SH_type_fl1_64_shadow:
         SHADOW_INTERNAL_NAME(sh_destroy_l1_shadow, 4)(d, smfn);
         break;
+#ifdef CONFIG_PV32
     case SH_type_l2h_64_shadow:
         ASSERT(is_pv_32bit_domain(d));
         /* Fall through... */
+#endif
     case SH_type_l2_64_shadow:
         SHADOW_INTERNAL_NAME(sh_destroy_l2_shadow, 4)(d, smfn);
         break;
@@ -2095,7 +2097,9 @@ static int sh_remove_shadow_via_pointer(
 #endif
     case SH_type_l1_64_shadow:
     case SH_type_l2_64_shadow:
+#ifdef CONFIG_PV32
     case SH_type_l2h_64_shadow:
+#endif
     case SH_type_l3_64_shadow:
     case SH_type_l4_64_shadow:
         SHADOW_INTERNAL_NAME(sh_clear_shadow_entry, 4)(d, vaddr, pmfn);
@@ -2133,7 +2137,9 @@ void sh_remove_shadows(struct domain *d,
         [SH_type_l2_pae_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 3),
 #endif
         [SH_type_l2_64_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 4),
+#ifdef CONFIG_PV32
         [SH_type_l2h_64_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l1_shadow, 4),
+#endif
         [SH_type_l3_64_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l2_shadow, 4),
         [SH_type_l4_64_shadow] = SHADOW_INTERNAL_NAME(sh_remove_l3_shadow, 4),
     };
@@ -2146,7 +2152,9 @@ void sh_remove_shadows(struct domain *d,
 #endif
         [SH_type_l1_64_shadow] = SHF_L2H_64 | SHF_L2_64,
         [SH_type_l2_64_shadow] = SHF_L3_64,
+#ifdef CONFIG_PV32
         [SH_type_l2h_64_shadow] = SHF_L3_64,
+#endif
         [SH_type_l3_64_shadow] = SHF_L4_64,
     };
 
@@ -2210,7 +2218,9 @@ void sh_remove_shadows(struct domain *d,
 #endif
     DO_UNSHADOW(SH_type_l4_64_shadow);
     DO_UNSHADOW(SH_type_l3_64_shadow);
+#ifdef CONFIG_PV32
     DO_UNSHADOW(SH_type_l2h_64_shadow);
+#endif
     DO_UNSHADOW(SH_type_l2_64_shadow);
     DO_UNSHADOW(SH_type_l1_64_shadow);
 
@@ -3175,7 +3185,9 @@ void shadow_audit_tables(struct vcpu *v)
         [SH_type_l1_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l1_table, 4),
         [SH_type_fl1_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_fl1_table, 4),
         [SH_type_l2_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l2_table, 4),
+# ifdef CONFIG_PV32
         [SH_type_l2h_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l2_table, 4),
+# endif
         [SH_type_l3_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l3_table, 4),
         [SH_type_l4_64_shadow] = SHADOW_INTERNAL_NAME(sh_audit_l4_table, 4),
 #endif
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -56,7 +56,7 @@ const uint8_t sh_type_to_size[] = {
     [SH_type_l1_64_shadow]   = 1,
     [SH_type_fl1_64_shadow]  = 1,
     [SH_type_l2_64_shadow]   = 1,
-    [SH_type_l2h_64_shadow]  = 1,
+/*  [SH_type_l2h_64_shadow]  = 1,  PV32-only */
     [SH_type_l3_64_shadow]   = 1,
     [SH_type_l4_64_shadow]   = 1,
     [SH_type_p2m_table]      = 1,
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -97,6 +97,13 @@ static void sh_flush_local(const struct
     flush_local(guest_flush_tlb_flags(d));
 }
 
+#if GUEST_PAGING_LEVELS >= 4 && defined(CONFIG_PV32)
+#define ASSERT_VALID_L2(t) \
+    ASSERT((t) == SH_type_l2_shadow || (t) == SH_type_l2h_shadow)
+#else
+#define ASSERT_VALID_L2(t) ASSERT((t) == SH_type_l2_shadow)
+#endif
+
 /**************************************************************************/
 /* Hash table mapping from guest pagetables to shadows
  *
@@ -337,7 +344,7 @@ static void sh_audit_gw(struct vcpu *v,
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2_shadow))) )
             sh_audit_l2_table(d, smfn, INVALID_MFN);
-#if GUEST_PAGING_LEVELS >= 4 /* 32-bit PV only */
+#if GUEST_PAGING_LEVELS >= 4 && defined(CONFIG_PV32)
         if ( mfn_valid((smfn = get_shadow_status(d, gw->l2mfn,
                                                  SH_type_l2h_shadow))) )
             sh_audit_l2_table(d, smfn, INVALID_MFN);
@@ -859,13 +866,12 @@ do {
     int _i;                                                                 \
     int _xen = !shadow_mode_external(_dom);                                 \
     shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                         \
-    ASSERT(mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow ||\
-           mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2h_64_shadow);\
+    ASSERT_VALID_L2(mfn_to_page(_sl2mfn)->u.sh.type);                       \
     for ( _i = 0; _i < SHADOW_L2_PAGETABLE_ENTRIES; _i++ )                  \
     {                                                                       \
         if ( (!(_xen))                                                      \
              || !is_pv_32bit_domain(_dom)                                   \
-             || mfn_to_page(_sl2mfn)->u.sh.type != SH_type_l2h_64_shadow    \
+             || mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow     \
              || (_i < COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom)) )           \
         {                                                                   \
             (_sl2e) = _sp + _i;                                             \
@@ -992,6 +998,7 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
         }
         break;
 
+#ifdef CONFIG_PV32
         case SH_type_l2h_shadow:
             BUILD_BUG_ON(sizeof(l2_pgentry_t) != sizeof(shadow_l2e_t));
             if ( is_pv_32bit_domain(d) )
@@ -1002,6 +1009,8 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
                 unmap_domain_page(l2t);
             }
             break;
+#endif
+
         default: /* Do nothing */ break;
         }
     }
@@ -1123,11 +1132,13 @@ static shadow_l2e_t * shadow_get_and_cre
         shadow_l3e_t new_sl3e;
         unsigned int t = SH_type_l2_shadow;
 
+#ifdef CONFIG_PV32
         /* Tag compat L2 containing hypervisor (m2p) mappings */
         if ( is_pv_32bit_domain(d) &&
              guest_l4_table_offset(gw->va) == 0 &&
              guest_l3_table_offset(gw->va) == 3 )
             t = SH_type_l2h_shadow;
+#endif
 
         /* No l2 shadow installed: find and install it. */
         *sl2mfn = get_shadow_status(d, gw->l2mfn, t);
@@ -1337,11 +1348,7 @@ void sh_destroy_l2_shadow(struct domain
 
     SHADOW_DEBUG(DESTROY_SHADOW, "%"PRI_mfn"\n", mfn_x(smfn));
 
-#if GUEST_PAGING_LEVELS >= 4
-    ASSERT(t == SH_type_l2_shadow || t == SH_type_l2h_shadow);
-#else
-    ASSERT(t == SH_type_l2_shadow);
-#endif
+    ASSERT_VALID_L2(t);
     ASSERT(sp->u.sh.head);
 
     /* Record that the guest page isn't shadowed any more (in this type) */
@@ -1865,7 +1872,7 @@ int
 sh_map_and_validate_gl2he(struct vcpu *v, mfn_t gl2mfn,
                            void *new_gl2p, u32 size)
 {
-#if GUEST_PAGING_LEVELS >= 4
+#if GUEST_PAGING_LEVELS >= 4 && defined(CONFIG_PV32)
     return sh_map_and_validate(v, gl2mfn, new_gl2p, size,
                                 SH_type_l2h_shadow,
                                 shadow_l2_index,
@@ -3674,7 +3681,7 @@ void sh_clear_shadow_entry(struct domain
         shadow_set_l1e(d, ep, shadow_l1e_empty(), p2m_invalid, smfn);
         break;
     case SH_type_l2_shadow:
-#if GUEST_PAGING_LEVELS >= 4
+#if GUEST_PAGING_LEVELS >= 4 && defined(CONFIG_PV32)
     case SH_type_l2h_shadow:
 #endif
         shadow_set_l2e(d, ep, shadow_l2e_empty(), smfn);
@@ -4124,14 +4131,16 @@ int cf_check sh_audit_l3_table(struct do
 
         if ( SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES_MFNS )
         {
+            unsigned int t = SH_type_l2_shadow;
+
             gfn = guest_l3e_get_gfn(*gl3e);
             mfn = shadow_l3e_get_mfn(*sl3e);
-            gmfn = get_shadow_status(d, get_gfn_query_unlocked(
-                                        d, gfn_x(gfn), &p2mt),
-                                     (is_pv_32bit_domain(d) &&
-                                      guest_index(gl3e) == 3)
-                                     ? SH_type_l2h_shadow
-                                     : SH_type_l2_shadow);
+#ifdef CONFIG_PV32
+            if ( guest_index(gl3e) == 3 && is_pv_32bit_domain(d) )
+                t = SH_type_l2h_shadow;
+#endif
+            gmfn = get_shadow_status(
+                       d, get_gfn_query_unlocked(d, gfn_x(gfn), &p2mt), t);
             if ( !mfn_eq(gmfn, mfn) )
                 AUDIT_FAIL(3, "bad translation: gfn %" SH_PRI_gfn
                            " --> %" PRI_mfn " != mfn %" PRI_mfn,
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -209,6 +209,10 @@ extern void shadow_audit_tables(struct v
 #define SH_type_unused        10U
 #endif
 
+#ifndef CONFIG_PV32 /* Unused (but uglier to #ifdef above): */
+#undef SH_type_l2h_64_shadow
+#endif
+
 /*
  * What counts as a pinnable shadow?
  */
@@ -286,7 +290,11 @@ static inline void sh_terminate_list(str
 #define SHF_L1_64   (1u << SH_type_l1_64_shadow)
 #define SHF_FL1_64  (1u << SH_type_fl1_64_shadow)
 #define SHF_L2_64   (1u << SH_type_l2_64_shadow)
+#ifdef CONFIG_PV32
 #define SHF_L2H_64  (1u << SH_type_l2h_64_shadow)
+#else
+#define SHF_L2H_64  0
+#endif
 #define SHF_L3_64   (1u << SH_type_l3_64_shadow)
 #define SHF_L4_64   (1u << SH_type_l4_64_shadow)
 



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:55:11 2023
Message-ID: <27a7245c-f933-5b2b-5685-d9ba2dbd4a8c@suse.com>
Date: Wed, 11 Jan 2023 14:54:38 +0100
Subject: [PATCH v2 6/9] x86/shadow: re-work 4-level SHADOW_FOREACH_L2E()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>

First of all, move the almost-loop-invariant condition out of the loop,
transforming it into an adjusted loop boundary. Since the new local
variable wants to be "unsigned int" and wants a name which doesn't
violate name space rules, convert the loop induction variable accordingly.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -863,23 +863,20 @@ do {
 /* 64-bit l2: touch all entries except for PAE compat guests. */
 #define SHADOW_FOREACH_L2E(_sl2mfn, _sl2e, _gl2p, _done, _dom, _code)       \
 do {                                                                        \
-    int _i;                                                                 \
-    int _xen = !shadow_mode_external(_dom);                                 \
+    unsigned int i_, end_ = SHADOW_L2_PAGETABLE_ENTRIES;                    \
     shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                         \
     ASSERT_VALID_L2(mfn_to_page(_sl2mfn)->u.sh.type);                       \
-    for ( _i = 0; _i < SHADOW_L2_PAGETABLE_ENTRIES; _i++ )                  \
+    if ( !shadow_mode_external(_dom) &&                                     \
+         is_pv_32bit_domain(_dom) &&                                        \
+         mfn_to_page(_sl2mfn)->u.sh.type != SH_type_l2_64_shadow )          \
+        end_ = COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom);                    \
+    for ( i_ = 0; i_ < end_; ++i_ )                                         \
     {                                                                       \
-        if ( (!(_xen))                                                      \
-             || !is_pv_32bit_domain(_dom)                                   \
-             || mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow     \
-             || (_i < COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom)) )           \
-        {                                                                   \
-            (_sl2e) = _sp + _i;                                             \
-            if ( shadow_l2e_get_flags(*(_sl2e)) & _PAGE_PRESENT )           \
-                {_code}                                                     \
-            if ( _done ) break;                                             \
-            increment_ptr_to_guest_entry(_gl2p);                            \
-        }                                                                   \
+        (_sl2e) = _sp + i_;                                                 \
+        if ( shadow_l2e_get_flags(*(_sl2e)) & _PAGE_PRESENT )               \
+            { _code }                                                       \
+        if ( _done ) break;                                                 \
+        increment_ptr_to_guest_entry(_gl2p);                                \
     }                                                                       \
     unmap_domain_page(_sp);                                                 \
 } while (0)



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:56:41 2023
Message-ID: <4785b34b-2672-e3a8-8096-df1365b6b7b8@suse.com>
Date: Wed, 11 Jan 2023 14:56:30 +0100
Subject: [PATCH v2 7/9] x86/shadow: reduce effort of hash calculation
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>

The "n" input is a GFN/MFN value and hence bounded by the physical
address bits in use on a system. The hash quality won't improve by also
including the upper always-zero bits in the calculation. To keep things
as compile-time-constant as they were before, use PADDR_BITS (not
paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.

While there, also drop the unnecessary conversion to an array of
unsigned char; this moves the value off the stack altogether (at least
with optimization enabled).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was tempted to also change the type of "i" (to unsigned) right here,
but then thought this might be going too far ...
---
v2: Also eliminate the unsigned char * alias of "n".

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1397,10 +1397,13 @@ static unsigned int shadow_get_allocatio
 typedef u32 key_t;
 static inline key_t sh_hash(unsigned long n, unsigned int t)
 {
-    unsigned char *p = (unsigned char *)&n;
     key_t k = t;
     int i;
-    for ( i = 0; i < sizeof(n) ; i++ ) k = (u32)p[i] + (k<<6) + (k<<16) - k;
+
+    BUILD_BUG_ON(PADDR_BITS > BITS_PER_LONG + PAGE_SHIFT);
+    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++, n >>= 8 )
+        k = (uint8_t)n + (k << 6) + (k << 16) - k;
+
     return k % SHADOW_HASH_BUCKETS;
 }
 



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:57:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475378.737066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbbQ-0008LM-V4; Wed, 11 Jan 2023 13:57:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475378.737066; Wed, 11 Jan 2023 13:57:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbbQ-0008LF-Re; Wed, 11 Jan 2023 13:57:16 +0000
Received: by outflank-mailman (input) for mailman id 475378;
 Wed, 11 Jan 2023 13:57:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbbP-0008Kv-Ik
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:57:15 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2046.outbound.protection.outlook.com [40.107.6.46])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d904d9cd-91b7-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 14:57:13 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8073.eurprd04.prod.outlook.com (2603:10a6:10:24d::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:57:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:57:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d904d9cd-91b7-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8ed7a628-f64e-5512-efdb-4116a7b88a1d@suse.com>
Date: Wed, 11 Jan 2023 14:57:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 8/9] x86/shadow: call sh_detach_old_tables() directly
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0013.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

There's nothing really mode-specific in this function anymore: the
varying number of valid entries in v->arch.paging.shadow.shadow_table[]
is dealt with fine by the zero check, and we have other similar cases of
iterating through the full array in common.c. Hence there is no need for
multiple instances of the function, nor for calling it through a
function pointer.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I've retained the C++-style comment in the function as this style is
used elsewhere as well in shadow code. I wouldn't mind changing the
comment to conform to ./CODING_STYLE.
---
v2: New.

--- a/xen/arch/x86/include/asm/paging.h
+++ b/xen/arch/x86/include/asm/paging.h
@@ -98,7 +98,6 @@
 
 struct shadow_paging_mode {
 #ifdef CONFIG_SHADOW_PAGING
-    void          (*detach_old_tables     )(struct vcpu *v);
 #ifdef CONFIG_PV
     void          (*write_guest_entry     )(struct vcpu *v, intpte_t *p,
                                             intpte_t new, mfn_t gmfn);
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2264,6 +2264,29 @@ void shadow_prepare_page_type_change(str
     shadow_remove_all_shadows(d, page_to_mfn(page));
 }
 
+/*
+ * Removes v->arch.paging.shadow.shadow_table[].
+ * Does all appropriate management/bookkeeping/refcounting/etc...
+ */
+static void sh_detach_old_tables(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    unsigned int i;
+
+    ////
+    //// vcpu->arch.paging.shadow.shadow_table[]
+    ////
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.paging.shadow.shadow_table); ++i )
+    {
+        mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[i]);
+
+        if ( mfn_x(smfn) )
+            sh_put_ref(d, smfn, 0);
+        v->arch.paging.shadow.shadow_table[i] = pagetable_null();
+    }
+}
+
 /**************************************************************************/
 
 static void sh_update_paging_modes(struct vcpu *v)
@@ -2312,7 +2335,7 @@ static void sh_update_paging_modes(struc
     // First, tear down any old shadow tables held by this vcpu.
     //
     if ( v->arch.paging.mode )
-        v->arch.paging.mode->shadow.detach_old_tables(v);
+        sh_detach_old_tables(v);
 
 #ifdef CONFIG_HVM
     if ( is_hvm_domain(d) )
@@ -2700,7 +2723,7 @@ void shadow_vcpu_teardown(struct vcpu *v
     if ( !paging_mode_shadow(d) || !v->arch.paging.mode )
         goto out;
 
-    v->arch.paging.mode->shadow.detach_old_tables(v);
+    sh_detach_old_tables(v);
 #ifdef CONFIG_HVM
     if ( shadow_mode_external(d) )
     {
@@ -2935,7 +2958,7 @@ static int shadow_one_bit_disable(struct
         for_each_vcpu(d, v)
         {
             if ( v->arch.paging.mode )
-                v->arch.paging.mode->shadow.detach_old_tables(v);
+                sh_detach_old_tables(v);
             if ( !(v->arch.flags & TF_kernel_mode) )
                 make_cr3(v, pagetable_get_mfn(v->arch.guest_table_user));
             else
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3207,30 +3207,6 @@ sh_update_linear_entries(struct vcpu *v)
     sh_flush_local(d);
 }
 
-
-/*
- * Removes v->arch.paging.shadow.shadow_table[].
- * Does all appropriate management/bookkeeping/refcounting/etc...
- */
-static void cf_check sh_detach_old_tables(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    mfn_t smfn;
-    unsigned int i;
-
-    ////
-    //// vcpu->arch.paging.shadow.shadow_table[]
-    ////
-
-    for_each_shadow_table(v, i)
-    {
-        smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[i]);
-        if ( mfn_x(smfn) )
-            sh_put_ref(d, smfn, 0);
-        v->arch.paging.shadow.shadow_table[i] = pagetable_null();
-    }
-}
-
 static void cf_check sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
 /* Updates vcpu->arch.cr3 after the guest has changed CR3.
  * Paravirtual guests should set v->arch.guest_table (and guest_table_user,
@@ -4211,7 +4187,6 @@ const struct paging_mode sh_paging_mode
     .update_paging_modes           = shadow_update_paging_modes,
     .flush_tlb                     = shadow_flush_tlb,
     .guest_levels                  = GUEST_PAGING_LEVELS,
-    .shadow.detach_old_tables      = sh_detach_old_tables,
 #ifdef CONFIG_PV
     .shadow.write_guest_entry      = sh_write_guest_entry,
     .shadow.cmpxchg_guest_entry    = sh_cmpxchg_guest_entry,
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -236,7 +236,6 @@ static inline shadow_l4e_t shadow_l4e_fr
 #define sh_unhook_pae_mappings     INTERNAL_NAME(sh_unhook_pae_mappings)
 #define sh_unhook_64b_mappings     INTERNAL_NAME(sh_unhook_64b_mappings)
 #define sh_paging_mode             INTERNAL_NAME(sh_paging_mode)
-#define sh_detach_old_tables       INTERNAL_NAME(sh_detach_old_tables)
 #define sh_audit_l1_table          INTERNAL_NAME(sh_audit_l1_table)
 #define sh_audit_fl1_table         INTERNAL_NAME(sh_audit_fl1_table)
 #define sh_audit_l2_table          INTERNAL_NAME(sh_audit_l2_table)



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 13:58:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 13:58:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475386.737077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbc8-0000YC-9I; Wed, 11 Jan 2023 13:58:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475386.737077; Wed, 11 Jan 2023 13:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbc8-0000Y5-6b; Wed, 11 Jan 2023 13:58:00 +0000
Received: by outflank-mailman (input) for mailman id 475386;
 Wed, 11 Jan 2023 13:57:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbc6-0008Co-5B
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 13:57:58 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2068.outbound.protection.outlook.com [40.107.6.68])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f3a98656-91b7-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 14:57:57 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8073.eurprd04.prod.outlook.com (2603:10a6:10:24d::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 13:57:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 13:57:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3a98656-91b7-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
Date: Wed, 11 Jan 2023 14:57:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
In-Reply-To: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0175.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

Harden HVM=y release-build behavior against array overrun, by
(ab)using array_access_nospec(). This in particular guards against
e.g. SH_type_unused making it here unintentionally.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -27,6 +27,7 @@
 // been included...
 #include <asm/page.h>
 #include <xen/domain_page.h>
+#include <xen/nospec.h>
 #include <asm/x86_emulate.h>
 #include <asm/hvm/support.h>
 #include <asm/atomic.h>
@@ -368,7 +369,7 @@ shadow_size(unsigned int shadow_type)
 {
 #ifdef CONFIG_HVM
     ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
-    return sh_type_to_size[shadow_type];
+    return array_access_nospec(sh_type_to_size, shadow_type);
 #else
     ASSERT(shadow_type < SH_type_unused);
     return shadow_type != SH_type_none;



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:11:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:11:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475392.737087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbp1-00031X-En; Wed, 11 Jan 2023 14:11:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475392.737087; Wed, 11 Jan 2023 14:11:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFbp1-00031Q-Br; Wed, 11 Jan 2023 14:11:19 +0000
Received: by outflank-mailman (input) for mailman id 475392;
 Wed, 11 Jan 2023 14:11:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFbp0-00031K-1T
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:11:18 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2053.outbound.protection.outlook.com [40.107.8.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d02b5c98-91b9-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 15:11:17 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7350.eurprd04.prod.outlook.com (2603:10a6:10:1a9::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 14:11:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 14:11:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d02b5c98-91b9-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dpR0656fUdv6dGf4ZoB6GU9xbwXNK2jko6PvMOU43awudUlJYNAbAUhwBQp6JVcumw3vUrl/P3aOib8aMaXcKcT1sH+xcSQZiO+aggu+8dZtchBZ8FxUXoRcudvWC1YwYKaDqNKTRFI6pZA/hirXt0futXBfmuBsyLqvXwFQagiuhQ4cW17YDD237ZhyBIWfEzLPlyZGf9X93PvSAE8YUACm/ACcA36RMCSoyR2sAEmpCaA7ALIMmfW71FpEsx59aqz1xmvor1Cdy0V4amF8uXHEWXbZ0CUi968OIDnL8Tw6a4ssXP+9pEWM7IOFQTFbm3LHxLn8VD6hLor6lsXKpw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=samdL1jok7HXUArxo1y7qppuzAQBeGFzLeWWO6WNryw=;
 b=YeRUFxWIM8T3f8f898L0IfCTpTz0xbl1CKjrS8RJ0vWunLnsXXmFud4QM3dWMYRtSAkDAR1BSshenZI6fqxthVtDqkWLB25EwVAmlxnsAox6bR6uuzBNUZ3Af1Qjk+HAF861VNvybk80m9Sd1rxXdQqjXKcEyHKfOe1iIDtRuz6vFCph6gbWxaBklrM0nQojPS9oG3M5LcXvW75gt0sT9DqK0raYlLCiATcApufCFodgSUMWePax8IQwbfHcQr1/+OwhHeAYzMC2Ae5ouFQBufhhnrZOH3cN4nRR/16SS8S2e12hDHNJuRrzu1pbw079fp1ZddaNoYLUDJRyilydCA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=samdL1jok7HXUArxo1y7qppuzAQBeGFzLeWWO6WNryw=;
 b=qy3pTWyXdi8str371y7RgG/kAz7QYHMSATdajMZ1LN5OVZo5xxgYfrnW1UVuyJhpReB6t+xP3fv/qnzkc5h7nuRWZQ4EEgDe9ZL3GxHoaVsc68hYJKK6KZCR0S35rNZBg/ffN4vfeXul0+bFXA+8j2qI6Gn6p/p2vbid2EiEm5AF1ghuqauk4T9a7h1BpS62v4aVruVZJSzC8AKxUXmQReOvp/NPRXLsLh4Xm+Qy+U5jW6JiATq7P2Pm5I3Gh4ObVXGEiWOZ3D6pzgVEHmt1dCPMSeX4TB/sW/FRVGbFxsz958q4EfB1dTcqcLY876bYdkBL5bZai2fIA/PIOvzRqg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <52429554-1559-6bae-f3d9-d32152e763b4@suse.com>
Date: Wed, 11 Jan 2023 15:11:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 14/22] x86/domain_page: remove the fast paths when mfn is
 not in the directmap
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-15-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20221216114853.8227-15-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0129.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7350:EE_
X-MS-Office365-Filtering-Correlation-Id: ffee38c1-f8e3-41fa-d850-08daf3ddb2bc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ffee38c1-f8e3-41fa-d850-08daf3ddb2bc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 14:11:13.9482
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Vv4Jj+eVfRj1i4ZlZjKNS2weRiCxFH8dKmOS9FC/VfsVLEXB0ZTN0icO99iJyRyCmqcQHtE/8WkjPlYvpPM64Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7350

On 16.12.2022 12:48, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> When mfn is not in direct map, never use mfn_to_virt for any mappings.
> 
> We replace mfn_x(mfn) <= PFN_DOWN(__pa(HYPERVISOR_VIRT_END - 1)) with
> arch_mfns_in_direct_map(mfn, 1) because these two are equivalent. The
> extra comparison in arch_mfns_in_direct_map() looks different but because
> DIRECTMAP_VIRT_END is always higher, it does not make any difference.
> 
> Lastly, domain_page_map_to_mfn() needs to gain a special case for
> the PMAP.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

This looks plausible, but it cannot really be fully judged before the
mapcache_current_vcpu() aspects pointed out earlier have been sorted out.
As to using pmap - assuming you've done an audit and the number of
simultaneous mappings in use can be proven not to exceed the number of
slots available, can you please say so in the description?
I have to admit though that I'm wary - this isn't a per-CPU number of
slots aiui, but a global one. But then you also have a BUG_ON() there
restricting the use to early boot. The reasoning for this is also
missing (and might address my concern).
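
For illustration, the equivalence the commit message claims between the
old single-MFN comparison and the range-based check can be sketched in
plain C. This is a simplified model with made-up constants and helper
names, not Xen's real ones (the real code compares against
PFN_DOWN(__pa(HYPERVISOR_VIRT_END - 1)) resp. DIRECTMAP_VIRT_END):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed directmap boundary (hypothetical constant for this sketch). */
static const uint64_t dm_end_mfn = 0x100000;

/* Old-style check: a single MFN at or below the last directmap MFN. */
static bool old_style_check(uint64_t mfn)
{
    return mfn <= dm_end_mfn - 1;
}

/* New-style check: the whole range [mfn, mfn + nr) lies in the directmap. */
static bool mfns_in_direct_map(uint64_t mfn, unsigned long nr)
{
    /* Guard against wrap-around before comparing against the boundary. */
    return mfn + nr >= mfn && mfn + nr <= dm_end_mfn;
}
```

For nr == 1 the two checks agree for every MFN, which is the equivalence
the commit message relies on; the range form only differs for nr > 1.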

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:23:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475398.737098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFc15-0004XF-I5; Wed, 11 Jan 2023 14:23:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475398.737098; Wed, 11 Jan 2023 14:23:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFc15-0004X8-FN; Wed, 11 Jan 2023 14:23:47 +0000
Received: by outflank-mailman (input) for mailman id 475398;
 Wed, 11 Jan 2023 14:23:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFc14-0004X2-Cj
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:23:46 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2079.outbound.protection.outlook.com [40.107.22.79])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8cc1aadd-91bb-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 15:23:42 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7206.eurprd04.prod.outlook.com (2603:10a6:10:1a4::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 14:23:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 14:23:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cc1aadd-91bb-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DMBTdI2Yk2d2vY0X7k+8AX2NLn5sYOGt2PdfXWSDWsc=;
 b=WfwpB8t82bcuBjjGob6bqhj/rn7GVmqOjsrcqqYUWL0Wcg23TyexWaP3hyow4rGAZJSbk0mE9ZyjdZzpyg70DHQTBIlKWXdaXh3GXF38i5Zc28GTRWvmczZ7GeqtMzzHcUJoDRtlH91Y7hYbCphyyHz3nasoj4wIJZbtWDDtoJuyUIkmt0tMgR+Osb2tSYggGrMQKturG4sstC05G6q5xJ5Euww6nmXdAl4JRw8/ZEj/U80AlS45hH696oCutqqpWiDfFU/ZPOMRh4P2Uuv7jbMPdslhx3ht146Vow5V3moIKfHhXgarWCaIborx/q8intLRIqVyjkZiQ5NAN1/0zg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3ad82ad8-7762-8014-a55b-a6f8316f398e@suse.com>
Date: Wed, 11 Jan 2023 15:23:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 15/22] xen/page_alloc: add a path for xenheap when there
 is no direct map
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-16-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20221216114853.8227-16-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0150.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7206:EE_
X-MS-Office365-Filtering-Correlation-Id: 50cc3b31-62f9-43ea-fbf7-08daf3df6fd0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 50cc3b31-62f9-43ea-fbf7-08daf3df6fd0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 14:23:40.6351
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5lqnJrlm5LhEpBeEjbR8kWb+wu4A+6UwHibyjTtL25EGkoJxA4CdGiF+Y31Hoe0fu/hueRAycvKKFB00sDu9IA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7206

On 16.12.2022 12:48, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> When there is not an always-mapped direct map, xenheap allocations need
> to be mapped and unmapped on-demand.

Hmm, that's still putting mappings in the directmap, which I thought we
meant to be doing away with. If that's just a temporary step, then please
say so here.

>     I have left the call to map_pages_to_xen() and destroy_xen_mappings()
>     in the split heap for now. I am not entirely convinced this is necessary
>     because in that setup only the xenheap would be always mapped and
>     this doesn't contain any guest memory (aside from the grant table).
>     So map/unmapping for every allocation seems unnecessary.

But if you're unmapping, that heap won't be "always mapped" anymore. So
why would it need mapping initially?

>     Changes since Hongyan's version:
>         * Rebase
>         * Fix indentation in alloc_xenheap_pages()

Looks like you did in one of the two instances only, as ...

> @@ -2230,17 +2231,36 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>      if ( unlikely(pg == NULL) )
>          return NULL;
>  
> +    ret = page_to_virt(pg);
> +
> +    if ( !arch_has_directmap() &&
> +         map_pages_to_xen((unsigned long)ret, page_to_mfn(pg), 1UL << order,
> +                          PAGE_HYPERVISOR) )
> +        {
> +            /* Failed to map xenheap pages. */
> +            free_heap_pages(pg, order, false);
> +            return NULL;
> +        }

... this looks wrong.

An important aspect here is that to be sure of no recursion,
map_pages_to_xen() and destroy_xen_mappings() may no longer use Xen
heap pages. May be worth saying explicitly in the description (I can't
think of a good place in code where such a comment could be put _and_
be likely to be noticed at the right point in time).
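
The allocate-then-map pattern being discussed, including the error
unwinding on mapping failure, can be sketched as follows. This is a
simplified model with stubbed stand-ins (calloc/free in place of
alloc_heap_pages()/free_heap_pages(), a trivial map_pages()), not the
real Xen code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Stubbed stand-ins for the real Xen primitives (assumptions). */
static bool arch_has_directmap(void) { return false; }
static int map_pages(void *va, size_t nr) { (void)va; (void)nr; return 0; }

static void *sketch_alloc_xenheap_pages(unsigned int order)
{
    size_t nr = 1UL << order;
    void *pg = calloc(nr, 4096);   /* stands in for alloc_heap_pages() */

    if ( pg == NULL )
        return NULL;

    /* Without a directmap the fresh pages must be mapped explicitly... */
    if ( !arch_has_directmap() && map_pages(pg, nr) )
    {
        /* ...and on mapping failure the allocation must be unwound. */
        free(pg);                  /* stands in for free_heap_pages() */
        return NULL;
    }

    return pg;
}
```

The key point the review raises still holds in this shape: whatever
implements map_pages() must not itself allocate from the heap being set
up, or the allocation path recurses.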

>  void free_xenheap_pages(void *v, unsigned int order)
>  {
> +    unsigned long va = (unsigned long)v & PAGE_MASK;
> +
>      ASSERT_ALLOC_CONTEXT();
>  
>      if ( v == NULL )
>          return;
>  
> +    if ( !arch_has_directmap() &&
> +         destroy_xen_mappings(va, va + (1UL << (order + PAGE_SHIFT))) )
> +        dprintk(XENLOG_WARNING,
> +                "Error while destroying xenheap mappings at %p, order %u\n",
> +                v, order);

Doesn't failure here mean (intended) security henceforth isn't guaranteed
anymore? If so, a mere dprintk() can't really be sufficient to "handle"
the error.
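
One way to make such a failure fatal rather than merely logged - purely
illustrative, with stubbed helpers and a hard-coded page shift standing
in for the real primitives - would be:

```c
#include <stdio.h>
#include <stdlib.h>

/* Stub standing in for Xen's panic()/BUG_ON() machinery (assumption). */
static void fatal(const char *msg)
{
    fprintf(stderr, "%s\n", msg);
    abort();
}

/* Stub: pretend the unmap always succeeds in this sketch. */
static int destroy_mappings(unsigned long s, unsigned long e)
{
    (void)s; (void)e;
    return 0;
}

static void sketch_free_xenheap_pages(unsigned long va, unsigned int order)
{
    /*
     * A stale mapping left behind defeats the secret-hiding goal, so
     * treat failure as unrecoverable instead of only warning about it.
     * (12 here is an assumed PAGE_SHIFT.)
     */
    if ( destroy_mappings(va, va + (1UL << (order + 12))) )
        fatal("failed to destroy xenheap mapping");
}
```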

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:24:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:24:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475402.737110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFc1U-0004xC-U1; Wed, 11 Jan 2023 14:24:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475402.737110; Wed, 11 Jan 2023 14:24:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFc1U-0004x5-R2; Wed, 11 Jan 2023 14:24:12 +0000
Received: by outflank-mailman (input) for mailman id 475402;
 Wed, 11 Jan 2023 14:24:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ffGs=5I=citrix.com=prvs=36809819d=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1pFc1T-0004X2-MV
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:24:11 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9abdd617-91bb-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 15:24:08 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9abdd617-91bb-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673447048;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=w8iJbWumWsWSLAQupGB9uR+XhDb3DO8D3MIfiJPhdz8=;
  b=fvF4BX+LPrQIrY2PpHtLmmLlik6LfxTjvMHH0utzI0BqD0bxqb/oUMd5
   N3ynSJFBX6ODR5lUKO0xlFzG1LltL/LlsRiGuVjiAB81lItJuZvtYlSwB
   A1eMWha8DWiO/+SqxstiXRfd8Z2ybKbHaE2mP176YByTOrzifXjKxz0u7
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92126493
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,317,1665460800"; 
   d="scan'208";a="92126493"
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Sergey Dyasli <sergey.dyasli@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2] x86/ucode/AMD: apply the patch early on every logical thread
Date: Wed, 11 Jan 2023 14:23:29 +0000
Message-ID: <20230111142329.4379-1-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The original issue has been reported on AMD Bulldozer-based CPUs where
ucode loading loses the LWP feature bit in order to gain the IBPB bit.
LWP disabling is a per-SMT/CMT-core modification and needs to happen on
each sibling thread despite the shared microcode engine. Otherwise,
logical CPUs will end up with different cpuid capabilities.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=216211

Guests running under Xen happen not to be affected because the levelling
logic for the feature masking/override MSRs causes the LWP bit to fall
out, hiding the issue. The latest recommendation from AMD, after
discussing this bug, is to load ucode on every logical CPU.

In Linux kernel this issue has been addressed by e7ad18d1169c
("x86/microcode/AMD: Apply the patch early on every logical thread").
Follow the same approach in Xen.

Introduce SAME_UCODE match result and use it for early AMD ucode
loading. Late loading is still performed only on the first of SMT/CMT
siblings and only if a newer ucode revision has been provided (unless
allow_same option is specified).
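
As a rough illustration, the match policy described above can be
modelled like this (a simplified sketch outside of Xen, mirroring the
shape of the patched compare_revisions()):

```c
#include <stdbool.h>
#include <stdint.h>

enum match_result { OLD_UCODE, SAME_UCODE, NEW_UCODE };

/* Simplified model of compare_revisions() after this patch. */
static enum match_result compare_revisions(uint32_t old_rev, uint32_t new_rev,
                                           bool allow_same)
{
    if ( new_rev > old_rev )
        return NEW_UCODE;

    if ( new_rev == old_rev )
        return allow_same ? NEW_UCODE : SAME_UCODE;

    return OLD_UCODE;
}

/* Early loading accepts both NEW_UCODE and SAME_UCODE; late loading
 * (without allow_same) accepts only NEW_UCODE. */
static bool early_load_ok(enum match_result r)
{
    return r == NEW_UCODE || r == SAME_UCODE;
}
```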

Intel's side of things is modified for consistency but provides no
functional change.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v1 --> v2:
- Expanded the commit message with the levelling section
- Adjusted comment for OLD_UCODE

CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu/microcode/amd.c     | 16 +++++++++++++---
 xen/arch/x86/cpu/microcode/intel.c   |  9 +++++++--
 xen/arch/x86/cpu/microcode/private.h |  3 ++-
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index 4b097187a0..96db10a2e0 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -176,8 +176,13 @@ static enum microcode_match_result compare_revisions(
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
-    if ( opt_ucode_allow_same && new_rev == old_rev )
-        return NEW_UCODE;
+    if ( new_rev == old_rev )
+    {
+        if ( opt_ucode_allow_same )
+            return NEW_UCODE;
+        else
+            return SAME_UCODE;
+    }
 
     return OLD_UCODE;
 }
@@ -220,8 +225,13 @@ static int cf_check apply_microcode(const struct microcode_patch *patch)
     unsigned int cpu = smp_processor_id();
     struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);
     uint32_t rev, old_rev = sig->rev;
+    enum microcode_match_result result = microcode_fits(patch);
 
-    if ( microcode_fits(patch) != NEW_UCODE )
+    /*
+     * Allow application of the same revision to pick up SMT-specific changes
+     * even if the revision of the other SMT thread is already up-to-date.
+     */
+    if ( result != NEW_UCODE && result != SAME_UCODE )
         return -EINVAL;
 
     if ( check_final_patch_levels(sig) )
diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index f7fec4b4ed..59a99eee4e 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -232,8 +232,13 @@ static enum microcode_match_result compare_revisions(
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
-    if ( opt_ucode_allow_same && new_rev == old_rev )
-        return NEW_UCODE;
+    if ( new_rev == old_rev )
+    {
+        if ( opt_ucode_allow_same )
+            return NEW_UCODE;
+        else
+            return SAME_UCODE;
+    }
 
     /*
      * Treat pre-production as always applicable - anyone using pre-production
diff --git a/xen/arch/x86/cpu/microcode/private.h b/xen/arch/x86/cpu/microcode/private.h
index 73b095d5bf..626aeb4d08 100644
--- a/xen/arch/x86/cpu/microcode/private.h
+++ b/xen/arch/x86/cpu/microcode/private.h
@@ -6,7 +6,8 @@
 extern bool opt_ucode_allow_same;
 
 enum microcode_match_result {
-    OLD_UCODE, /* signature matched, but revision id is older or equal */
+    OLD_UCODE, /* signature matched, but revision id is older */
+    SAME_UCODE, /* signature matched, but revision id is the same */
     NEW_UCODE, /* signature matched, but revision id is newer */
     MIS_UCODE, /* signature mismatched */
 };
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:29:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:29:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475411.737121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFc6R-0005mU-IV; Wed, 11 Jan 2023 14:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475411.737121; Wed, 11 Jan 2023 14:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFc6R-0005mN-F3; Wed, 11 Jan 2023 14:29:19 +0000
Received: by outflank-mailman (input) for mailman id 475411;
 Wed, 11 Jan 2023 14:29:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFc6Q-0005mD-7z; Wed, 11 Jan 2023 14:29:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFc6Q-0001A4-5Q; Wed, 11 Jan 2023 14:29:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFc6P-0004qZ-Pz; Wed, 11 Jan 2023 14:29:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFc6P-0008Lr-PT; Wed, 11 Jan 2023 14:29:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+Vo5v3yU7TBDaCEYl+KLk7kGe7EGqUw64Budd18tvRs=; b=GKZBhFX48Z1x0nehXuSP8J2VBB
	xaZ/70mrePvHmfBx7YjWKOlpbJlwMtelUS01p8YIrYBvlWTwoYojQwN7Me699ZDxICAbnPmvJWADp
	qY85RWrjVMDS+tw4wK30V+3yCsTPRLqICZScy6ii7Sq2jGPYDRMYEwlq5zfHrmMR8xXk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175718-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175718: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=a5738ab74c2ab449522ddf661808262fc92e28d7
X-Osstest-Versions-That:
    libvirt=49ff47269b71a762ca5de4595f6ec915043e05ce
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 14:29:17 +0000

flight 175718 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175718/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175684
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 175684
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175684
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175684
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              a5738ab74c2ab449522ddf661808262fc92e28d7
baseline version:
 libvirt              49ff47269b71a762ca5de4595f6ec915043e05ce

Last test of basis   175684  2023-01-10 04:18:53 Z    1 days
Testing same since   175718  2023-01-11 04:20:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  Jiri Denemark <jdenemar@redhat.com>
  Laine Stump <laine@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   49ff47269b..a5738ab74c  a5738ab74c2ab449522ddf661808262fc92e28d7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:35:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475419.737131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcBv-0007CV-6u; Wed, 11 Jan 2023 14:34:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475419.737131; Wed, 11 Jan 2023 14:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcBv-0007CO-4I; Wed, 11 Jan 2023 14:34:59 +0000
Received: by outflank-mailman (input) for mailman id 475419;
 Wed, 11 Jan 2023 14:34:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFcBt-0007CI-1I
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:34:57 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2044.outbound.protection.outlook.com [40.107.247.44])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1d53642d-91bd-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 15:34:54 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8551.eurprd04.prod.outlook.com (2603:10a6:10:2d6::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 14:34:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 14:34:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d53642d-91bd-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8mwZanp8D+FPIIeYcPrZURlT8a36AzCP0jEaVSmpmQc=;
 b=0gD+XDQwekBSXjPtDpALB5pryxSDApFdH7Plm6Hz7skleE8EHMnNF4jNQKXdLm8dhe1oq1xkuQ1ptrU9i1y8yiMKvu7qUy8D8UnjLJ08gFRBsZtk9rKVdAIcvAumSenZtz8w15kmxkZ9Z4Fm9tM+fZxRljSB1giKcyItEpbfv22DaCO6tdR7pdQpeoFd60Km1UqC50pUoUncUeWSjBLQoGUr/0LBeU2VknQfQ7QSMXDI0Oew6/zp1Za/sti6uSB3Jncrz7f3PdJST8bKpWZaJYO/RTZieTQzXfC57WAfsWJSymCylWHBYth8He7SbuTsBGuS10PifHKjGpu0SRDx/A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <75a3787f-7fc0-8586-8a96-1eb2e94cf523@suse.com>
Date: Wed, 11 Jan 2023 15:34:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 16/22] x86/setup: leave early boot slightly earlier
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-17-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20221216114853.8227-17-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0172.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8551:EE_
X-MS-Office365-Filtering-Correlation-Id: 95409bce-a927-4bda-1185-08daf3e100bc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 95409bce-a927-4bda-1185-08daf3e100bc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 14:34:53.2956
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bz/zDgtsONFgN/kYUWxLokQhF/JN4Q9gOQhohN9qOxXxdFwQGd8jGduQmHACp0RFC4r04V1PSTB2LZAvZHsEgg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8551

On 16.12.2022 12:48, Julien Grall wrote:
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1648,6 +1648,22 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>  
>      numa_initmem_init(0, raw_max_page);
>  
> +    /*
> +     * When we do not have a direct map, memory for metadata of heap nodes in
> +     * init_node_heap() is allocated from xenheap, which needs to be mapped and
> +     * unmapped on demand. However, we cannot just take memory from the boot
> +     * allocator to create the PTEs while we are passing memory to the heap
> +     * allocator during end_boot_allocator().
> +     *
> +     * To solve this race, we need to leave early boot before
> +     * end_boot_allocator() so that Xen PTE pages are allocated from the heap
> +     * instead of the boot allocator. We can do this because the metadata for
> +     * the 1st node is statically allocated, and by the time we need memory to
> +     * create mappings for the 2nd node, we already have enough memory in the
> +     * heap allocator in the 1st node.
> +     */

Is this "enough" guaranteed, or merely a hope (and true in the common case,
but maybe not when the 1st node ends up having very little memory)?

> +    system_state = SYS_STATE_boot;
> +
>      if ( max_page - 1 > virt_to_mfn(HYPERVISOR_VIRT_END - 1) )
>      {
>          unsigned long limit = virt_to_mfn(HYPERVISOR_VIRT_END - 1);
> @@ -1677,8 +1693,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      else
>          end_boot_allocator();
>  
> -    system_state = SYS_STATE_boot;

I'm afraid I don't view this as viable - there are assumptions not just in
the page table allocation functions that SYS_STATE_boot (or higher) means
that end_boot_allocator() has run (e.g. acpi_os_map_memory()). You also do
this for x86 only. I think system_state wants leaving alone here, and an
arch specific approach wants creating for the page table allocation you
talk of.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:38:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475454.737177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcFX-0001PT-Co; Wed, 11 Jan 2023 14:38:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475454.737177; Wed, 11 Jan 2023 14:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcFX-0001PM-A9; Wed, 11 Jan 2023 14:38:43 +0000
Received: by outflank-mailman (input) for mailman id 475454;
 Wed, 11 Jan 2023 14:38:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pFcFV-0001NK-Nf
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:38:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id a2e338e5-91bd-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 15:38:39 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 33D9DFEC;
 Wed, 11 Jan 2023 06:39:20 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 766E63F71A;
 Wed, 11 Jan 2023 06:38:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2e338e5-91bd-11ed-b8d0-410ff93cb8f0
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH 0/8] SVE feature for arm guests
Date: Wed, 11 Jan 2023 14:38:18 +0000
Message-Id: <20230111143826.3224-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

This series introduces the possibility for Dom0 and DomU guests to use
sve/sve2 instructions.

The SVE feature introduces new instructions and registers to improve
performance of floating point operations.

The SVE feature is advertised via the SVE field of the ID_AA64PFR0_EL1
register; when available, the ID_AA64ZFR0_EL1 register provides additional
information about the implemented version and other SVE features.

The registers added by the SVE feature are Z0-Z31, P0-P15, FFR and ZCR_ELx.

Z0-Z31 are scalable vector registers whose size is implementation defined,
ranging from 128 bits up to a maximum of 2048 bits; the term vector length
is used to refer to this quantity.
P0-P15 are predicate registers whose size is the vector length divided by 8,
which is also the size of the FFR (First Fault Register).
ZCR_ELx is a register that controls and restricts the maximum vector length
used by exception level <x> and all lower exception levels, so for example
EL3 can restrict the vector length usable by EL3, EL2, EL1 and EL0.

The platform has a maximum implemented vector length, so whenever a value
above the implemented length is written to a ZCR register, the lower
(implemented) value is used instead. The RDVL instruction can be used to
check which vector length the HW is actually using after ZCR has been set.

For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is no
need to save them separately: saving Z0-Z31 implicitly saves V0-V31 as well.

SVE usage can be trapped using a flag in CPTR_EL2, hence in this series the
register is added to the domain state, so that only the guests that are not
allowed to use SVE are trapped.

This series introduces a command line parameter to enable Dom0 to use SVE
and to set its maximum vector length; the default is 0, which means the
guest is not allowed to use SVE. Values from 128 to 2048 mean the guest can
use SVE, with the selected value used as the maximum allowed vector length
(which may be lower if the implemented one is lower).
For DomUs, an XL parameter used in the same way is introduced, and a
dom0less DTB binding is created.

The context switch is the most critical part because there can be large
registers to save; in this series a simple approach is taken, and the
context is saved/restored on every switch for the guests that are allowed
to use SVE.


Luca Fancellu (8):
  xen/arm: enable SVE extension for Xen
  xen/arm: add sve_vl_bits field to domain
  xen/arm: Expose SVE feature to the guest
  xen/arm: add SVE exception class handling
  arm/sve: save/restore SVE context switch
  xen/arm: enable Dom0 to use SVE feature
  xen/tools: add sve parameter in XL configuration
  xen/arm: add sve property for dom0less domUs

 docs/man/xl.cfg.5.pod.in                 |  11 ++
 docs/misc/arm/device-tree/booting.txt    |   7 +
 docs/misc/xen-command-line.pandoc        |  12 ++
 tools/golang/xenlight/helpers.gen.go     |   2 +
 tools/golang/xenlight/types.gen.go       |   1 +
 tools/include/libxl.h                    |   5 +
 tools/libs/light/libxl_arm.c             |   2 +
 tools/libs/light/libxl_types.idl         |   1 +
 tools/xl/xl_parse.c                      |  10 ++
 xen/arch/arm/Kconfig                     |   3 +-
 xen/arch/arm/arm64/Makefile              |   1 +
 xen/arch/arm/arm64/cpufeature.c          |   7 +-
 xen/arch/arm/arm64/sve.c                 | 104 +++++++++++++
 xen/arch/arm/arm64/sve_asm.S             | 189 +++++++++++++++++++++++
 xen/arch/arm/arm64/vfp.c                 |  79 ++++++----
 xen/arch/arm/arm64/vsysreg.c             |  39 ++++-
 xen/arch/arm/cpufeature.c                |   6 +-
 xen/arch/arm/domain.c                    |  61 ++++++++
 xen/arch/arm/domain_build.c              |  11 ++
 xen/arch/arm/include/asm/arm64/sve.h     |  72 +++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h |   4 +
 xen/arch/arm/include/asm/arm64/vfp.h     |  10 ++
 xen/arch/arm/include/asm/cpufeature.h    |  14 ++
 xen/arch/arm/include/asm/domain.h        |   8 +
 xen/arch/arm/include/asm/processor.h     |   3 +
 xen/arch/arm/setup.c                     |   5 +-
 xen/arch/arm/traps.c                     |  46 ++++--
 xen/include/public/arch-arm.h            |   2 +
 xen/include/public/domctl.h              |   2 +-
 29 files changed, 661 insertions(+), 56 deletions(-)
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/arm64/sve_asm.S
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:38:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475455.737184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcFX-0001WW-Tg; Wed, 11 Jan 2023 14:38:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475455.737184; Wed, 11 Jan 2023 14:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcFX-0001UQ-N2; Wed, 11 Jan 2023 14:38:43 +0000
Received: by outflank-mailman (input) for mailman id 475455;
 Wed, 11 Jan 2023 14:38:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pFcFW-0001NK-Dl
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:38:42 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id a38f1210-91bd-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 15:38:40 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 618DA13D5;
 Wed, 11 Jan 2023 06:39:21 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5539A3F71A;
 Wed, 11 Jan 2023 06:38:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a38f1210-91bd-11ed-b8d0-410ff93cb8f0
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 1/8] xen/arm: enable SVE extension for Xen
Date: Wed, 11 Jan 2023 14:38:19 +0000
Message-Id: <20230111143826.3224-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>

Enable Xen to handle the SVE extension: add code in the cpufeature
module to handle the ZCR SVE register, and disable trapping of the SVE
feature on system boot; trapping will be restored later on vcpu
creation and scheduling. While there, correct the coding style of the
comment on coprocessor trapping.

Change the Kconfig entry to make the ARM64_SVE symbol selectable; by
default it is not selected.

Create the sve module and sve_asm.S, which contain the routines for
the SVE feature. This code is inspired by Linux and uses instruction
encodings in order to stay compatible with assemblers that do not
support SVE.

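For reference, the encoding approach can be modelled in plain C: the
assembler macro ORs the destination register number and the signed 6-bit
immediate into a fixed base opcode. The helper below is a hypothetical
stand-alone sketch of that computation (sve_rdvl_inst is not a name from
the patch):

```c
#include <stdint.h>

/* Model of the _sve_rdvl assembler macro: RDVL X<nx>, #<imm> is emitted
 * as the base opcode 0x04bf5000, with the destination register number in
 * bits [4:0] and the signed 6-bit immediate in bits [10:5]. */
static uint32_t sve_rdvl_inst(unsigned int nx, int imm)
{
    return 0x04bf5000U | nx | (((uint32_t)imm & 0x3fU) << 5);
}
```

With this model, `_sve_rdvl 0, 1` corresponds to emitting the instruction
word 0x04bf5020.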
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/arch/arm/Kconfig                     |  3 +-
 xen/arch/arm/arm64/Makefile              |  1 +
 xen/arch/arm/arm64/cpufeature.c          |  7 ++--
 xen/arch/arm/arm64/sve.c                 | 38 +++++++++++++++++++
 xen/arch/arm/arm64/sve_asm.S             | 48 ++++++++++++++++++++++++
 xen/arch/arm/cpufeature.c                |  6 ++-
 xen/arch/arm/domain.c                    |  4 ++
 xen/arch/arm/include/asm/arm64/sve.h     | 43 +++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
 xen/arch/arm/include/asm/cpufeature.h    | 14 +++++++
 xen/arch/arm/include/asm/domain.h        |  1 +
 xen/arch/arm/include/asm/processor.h     |  2 +
 xen/arch/arm/setup.c                     |  5 ++-
 xen/arch/arm/traps.c                     | 34 ++++++++++++-----
 14 files changed, 188 insertions(+), 19 deletions(-)
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/arm64/sve_asm.S
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c7f..2a5151f3c718 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -112,11 +112,10 @@ config ARM64_PTR_AUTH
 	  This feature is not supported in Xen.
 
 config ARM64_SVE
-	def_bool n
+	bool "Enable Scalable Vector Extension support" if EXPERT
 	depends on ARM_64
 	help
 	  Scalar Vector Extension support.
-	  This feature is not supported in Xen.
 
 config ARM64_MTE
 	def_bool n
diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 6d507da0d44d..1d59c3b0ec89 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -12,6 +12,7 @@ obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += smc.o
 obj-y += smpboot.o
+obj-$(CONFIG_ARM64_SVE) += sve.o sve_asm.o
 obj-y += traps.o
 obj-y += vfp.o
 obj-y += vsysreg.o
diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
index d9039d37b2d1..b4656ff4d80f 100644
--- a/xen/arch/arm/arm64/cpufeature.c
+++ b/xen/arch/arm/arm64/cpufeature.c
@@ -455,15 +455,11 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
-#if 0
-/* TODO: use this to sanitize SVE once we support it */
-
 static const struct arm64_ftr_bits ftr_zcr[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
 		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
 	ARM64_FTR_END,
 };
-#endif
 
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
@@ -603,6 +599,9 @@ void update_system_features(const struct cpuinfo_arm *new)
 
 	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
 
+	if ( cpu_has_sve )
+		SANITIZE_REG(zcr64, 0, zcr);
+
 	/*
 	 * Comment from Linux:
 	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
new file mode 100644
index 000000000000..326389278292
--- /dev/null
+++ b/xen/arch/arm/arm64/sve.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <xen/types.h>
+#include <asm/arm64/sve.h>
+#include <asm/arm64/sysregs.h>
+
+extern unsigned int sve_get_hw_vl(void);
+
+register_t compute_max_zcr(void)
+{
+    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
+    unsigned int hw_vl;
+
+    /*
+     * Impose the maximum SVE vector length; after this write,
+     * sve_get_hw_vl() returns the VL actually supported by the platform.
+     */
+    WRITE_SYSREG(zcr, ZCR_EL2);
+
+    /*
+     * Read the maximum VL, which could be lower than what we imposed above.
+     * hw_vl is in bytes, so multiply it by 8 before passing it to vl_to_zcr().
+     */
+    hw_vl = sve_get_hw_vl() * 8U;
+
+    return vl_to_zcr(hw_vl);
+}
+
+/* Takes a vector length in bits and returns the ZCR_ELx encoding */
+register_t vl_to_zcr(uint16_t vl)
+{
+    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
+}
diff --git a/xen/arch/arm/arm64/sve_asm.S b/xen/arch/arm/arm64/sve_asm.S
new file mode 100644
index 000000000000..4d1549344733
--- /dev/null
+++ b/xen/arch/arm/arm64/sve_asm.S
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE assembly routines
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ *
+ * Some macros and instruction encoding in this file are taken from linux 6.1.1,
+ * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
+ * version.
+ */
+
+/* Sanity-check macros to help avoid encoding garbage instructions */
+
+.macro _check_general_reg nr
+    .if (\nr) < 0 || (\nr) > 30
+        .error "Bad register number \nr."
+    .endif
+.endm
+
+.macro _check_num n, min, max
+    .if (\n) < (\min) || (\n) > (\max)
+        .error "Number \n out of range [\min,\max]"
+    .endif
+.endm
+
+/* SVE instruction encodings for non-SVE-capable assemblers */
+/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
+
+/* RDVL X\nx, #\imm */
+.macro _sve_rdvl nx, imm
+    _check_general_reg \nx
+    _check_num (\imm), -0x20, 0x1f
+    .inst 0x04bf5000                \
+        | (\nx)                     \
+        | (((\imm) & 0x3f) << 5)
+.endm
+
+/* Gets the current vector register size in bytes */
+GLOBAL(sve_get_hw_vl)
+    _sve_rdvl 0, 1
+    ret
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index c4ec38bb2554..83b84368f6d5 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -9,6 +9,7 @@
 #include <xen/init.h>
 #include <xen/smp.h>
 #include <xen/stop_machine.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
@@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
 
     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
 
+    if ( cpu_has_sve )
+        c->zcr64.bits[0] = compute_max_zcr();
+
     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
 
     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
@@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
     guest_cpuinfo.pfr64.mpam = 0;
     guest_cpuinfo.pfr64.mpam_frac = 0;
 
-    /* Hide SVE as Xen does not support it */
+    /* Hide SVE from guests by default */
     guest_cpuinfo.pfr64.sve = 0;
     guest_cpuinfo.zfr64.bits[0] = 0;
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 99577adb6c69..8ea3843ea8e8 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
     /* VGIC */
     gic_restore_state(n);
 
+    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
+
     /* VFP */
     vfp_restore_state(n);
 
@@ -548,6 +550,8 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
+    v->arch.cptr_el2 = get_default_cptr_flags();
+
     v->arch.hcr_el2 = get_default_hcr_flags();
 
     v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
new file mode 100644
index 000000000000..bd56e2f24230
--- /dev/null
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#ifndef _ARM_ARM64_SVE_H
+#define _ARM_ARM64_SVE_H
+
+#define SVE_VL_MAX_BITS (2048U)
+
+/* Vector length must be multiple of 128 */
+#define SVE_VL_MULTIPLE_VAL (128U)
+
+#ifdef CONFIG_ARM64_SVE
+
+register_t compute_max_zcr(void);
+register_t vl_to_zcr(uint16_t vl);
+
+#else /* !CONFIG_ARM64_SVE */
+
+static inline register_t compute_max_zcr(void)
+{
+    return 0;
+}
+
+static inline register_t vl_to_zcr(uint16_t vl)
+{
+    return 0;
+}
+
+#endif
+
+#endif /* _ARM_ARM64_SVE_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 463899951414..4cabb9eb4d5e 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -24,6 +24,7 @@
 #define ICH_EISR_EL2              S3_4_C12_C11_3
 #define ICH_ELSR_EL2              S3_4_C12_C11_5
 #define ICH_VMCR_EL2              S3_4_C12_C11_7
+#define ZCR_EL2                   S3_4_C1_C2_0
 
 #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
 #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index c62cf6293fd6..6d703e051906 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -32,6 +32,12 @@
 #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
+#ifdef CONFIG_ARM64_SVE
+#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
+#else
+#define cpu_has_sve       (0)
+#endif
+
 #ifdef CONFIG_ARM_32
 #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
 #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
@@ -323,6 +329,14 @@ struct cpuinfo_arm {
         };
     } isa64;
 
+    union {
+        register_t bits[1];
+        struct {
+            unsigned long len:4;
+            unsigned long __res0:60;
+        };
+    } zcr64;
+
     struct {
         register_t bits[1];
     } zfr64;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 0e310601e846..42eb5df320a7 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -190,6 +190,7 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
 
diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 1dd81d7d528f..0e38926b94db 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -583,6 +583,8 @@ void do_trap_guest_serror(struct cpu_user_regs *regs);
 
 register_t get_default_hcr_flags(void);
 
+register_t get_default_cptr_flags(void);
+
 /*
  * Synchronize SError unless the feature is selected.
  * This is relying on the SErrors are currently unmasked.
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90e3..5459cc4f5e62 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -135,10 +135,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_sve ? " SVE" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 061c92acbd68..45163fd3afb0 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -93,6 +93,21 @@ register_t get_default_hcr_flags(void)
              HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
+register_t get_default_cptr_flags(void)
+{
+    /*
+     * Trap all coprocessor registers (0-13) except cp10 and
+     * cp11 for VFP.
+     *
+     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
+     *
+     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
+     * RES1, i.e. they would trap whether we did this write or not.
+     */
+    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
+             HCPTR_TTA | HCPTR_TAM);
+}
+
 static enum {
     SERRORS_DIVERSE,
     SERRORS_PANIC,
@@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
 
 void init_traps(void)
 {
+    register_t cptr_bits = get_default_cptr_flags();
     /*
      * Setup Hyp vector base. Note they might get updated with the
      * branch predictor hardening.
@@ -135,17 +151,15 @@ void init_traps(void)
     /* Trap CP15 c15 used for implementation defined registers */
     WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
 
-    /* Trap all coprocessor registers (0-13) except cp10 and
-     * cp11 for VFP.
-     *
-     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
-     *
-     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
-     * RES1, i.e. they would trap whether we did this write or not.
+#ifdef CONFIG_ARM64_SVE
+    /*
+     * Don't trap SVE for now: Xen may need to access the ZCR register in the
+     * cpufeature code. Trapping is handled later, on vcpu creation/scheduling.
      */
-    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
-                 HCPTR_TTA | HCPTR_TAM,
-                 CPTR_EL2);
+    cptr_bits &= ~HCPTR_CP(8);
+#endif
+
+    WRITE_SYSREG(cptr_bits, CPTR_EL2);
 
     /*
      * Configure HCR_EL2 with the bare minimum to run Xen until a guest
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:38:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475456.737199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcFZ-0001v4-59; Wed, 11 Jan 2023 14:38:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475456.737199; Wed, 11 Jan 2023 14:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcFZ-0001u5-0e; Wed, 11 Jan 2023 14:38:45 +0000
Received: by outflank-mailman (input) for mailman id 475456;
 Wed, 11 Jan 2023 14:38:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pFcFX-0001NK-JO
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:38:43 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id a472b751-91bd-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 15:38:41 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E4BDE15DB;
 Wed, 11 Jan 2023 06:39:22 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 885FB3F71A;
 Wed, 11 Jan 2023 06:38:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a472b751-91bd-11ed-b8d0-410ff93cb8f0
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH 2/8] xen/arm: add sve_vl_bits field to domain
Date: Wed, 11 Jan 2023 14:38:20 +0000
Message-Id: <20230111143826.3224-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>

Add a sve_vl_bits field to the arch_domain and xen_arch_domainconfig
structures, so that the domain carries information about the SVE
feature and the number of SVE register bits allowed for it.

The field is also used to allow or forbid a domain to use SVE: a
value of zero means the guest is not allowed to use the feature.

When the guest is allowed to use SVE, the zcr_el2 register is updated
on context switch to restrict the domain to the chosen number of
bits, which is the minimum of the requested value and the value
supported by the platform.

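The checks described above (reject invalid lengths, lower over-large
requests to the system-safe value) can be sketched in plain C. The
helpers below mirror the patch's logic, but the Xen-specific types are
replaced and get_sys_vl_len() becomes an explicit parameter, so this is
an illustration rather than the actual hypervisor code:

```c
#include <stdbool.h>
#include <stdint.h>

#define SVE_VL_MAX_BITS     2048U
#define SVE_VL_MULTIPLE_VAL  128U  /* vector length must be a multiple of 128 */

/* Mirrors is_vl_valid(): a VL is acceptable if it is a multiple of 128
 * bits and does not exceed the architectural maximum of 2048 bits. */
static bool is_vl_valid(uint16_t vl)
{
    return ((vl % SVE_VL_MULTIPLE_VAL) == 0) && (vl <= SVE_VL_MAX_BITS);
}

/* Model of the clamping step: lower the requested VL to the system-safe
 * maximum (the value get_sys_vl_len() would return in Xen). */
static uint16_t sanitise_sve_vl(uint16_t requested, uint16_t sys_max)
{
    return (requested > sys_max) ? sys_max : requested;
}
```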
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/arch/arm/arm64/sve.c             |  9 ++++++
 xen/arch/arm/domain.c                | 45 ++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sve.h | 12 ++++++++
 xen/arch/arm/include/asm/domain.h    |  6 ++++
 xen/include/public/arch-arm.h        |  2 ++
 xen/include/public/domctl.h          |  2 +-
 6 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 326389278292..b7695834f4ba 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -6,6 +6,7 @@
  */
 
 #include <xen/types.h>
+#include <asm/cpufeature.h>
 #include <asm/arm64/sve.h>
 #include <asm/arm64/sysregs.h>
 
@@ -36,3 +37,11 @@ register_t vl_to_zcr(uint16_t vl)
 {
     return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
 }
+
+/* Get the system sanitized value for VL in bits */
+uint16_t get_sys_vl_len(void)
+{
+    /* The ZCR_ELx LEN field encodes the vector length as (LEN + 1) * 128 bits */
+    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
+            SVE_VL_MULTIPLE_VAL;
+}
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8ea3843ea8e8..27f38729302b 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -13,6 +13,7 @@
 #include <xen/wait.h>
 
 #include <asm/alternative.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
 #include <asm/current.h>
@@ -183,6 +184,11 @@ static void ctxt_switch_to(struct vcpu *n)
 
     WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
 
+#ifdef CONFIG_ARM64_SVE
+    if ( is_sve_domain(n->domain) )
+        WRITE_SYSREG(n->arch.zcr_el2, ZCR_EL2);
+#endif
+
     /* VFP */
     vfp_restore_state(n);
 
@@ -551,6 +557,11 @@ int arch_vcpu_create(struct vcpu *v)
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
     v->arch.cptr_el2 = get_default_cptr_flags();
+    if ( is_sve_domain(v->domain) )
+    {
+        v->arch.cptr_el2 &= ~HCPTR_CP(8);
+        v->arch.zcr_el2 = vl_to_zcr(v->domain->arch.sve_vl_bits);
+    }
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -595,6 +606,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     unsigned int max_vcpus;
     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
     unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
+    unsigned int sve_vl_bits = config->arch.sve_vl_bits;
 
     if ( (config->flags & ~flags_optional) != flags_required )
     {
@@ -603,6 +615,36 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    /* Check feature flags */
+    if ( sve_vl_bits > 0 ) {
+        unsigned int zcr_max_bits;
+
+        if ( !cpu_has_sve )
+        {
+            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
+            return -EINVAL;
+        }
+        else if ( !is_vl_valid(sve_vl_bits) )
+        {
+            dprintk(XENLOG_INFO, "Unsupported SVE vector length (%u)\n",
+                    sve_vl_bits);
+            return -EINVAL;
+        }
+        /*
+         * get_sys_vl_len() returns the vector length that is safe on all
+         * CPUs, so if the value specified by the user is above it, use the
+         * safe value instead.
+         */
+        zcr_max_bits = get_sys_vl_len();
+        if ( sve_vl_bits > zcr_max_bits )
+        {
+            config->arch.sve_vl_bits = zcr_max_bits;
+            dprintk(XENLOG_INFO,
+                    "SVE vector length lowered to %u, safe value among CPUs\n",
+                    zcr_max_bits);
+        }
+    }
+
     /* The P2M table must always be shared between the CPU and the IOMMU */
     if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
     {
@@ -745,6 +787,9 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_vpci_init(d)) != 0 )
         goto fail;
 
+    /* Copy sve_vl_bits from the domain configuration */
+    d->arch.sve_vl_bits = config->arch.sve_vl_bits;
+
     return 0;
 
 fail:
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index bd56e2f24230..f4a660e402ca 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -13,10 +13,17 @@
 /* Vector length must be multiple of 128 */
 #define SVE_VL_MULTIPLE_VAL (128U)
 
+static inline bool is_vl_valid(uint16_t vl)
+{
+    /* SVE vector length is multiple of 128 and maximum 2048 */
+    return ((vl % SVE_VL_MULTIPLE_VAL) == 0) && (vl <= SVE_VL_MAX_BITS);
+}
+
 #ifdef CONFIG_ARM64_SVE
 
 register_t compute_max_zcr(void);
 register_t vl_to_zcr(uint16_t vl);
+uint16_t get_sys_vl_len(void);
 
 #else /* !CONFIG_ARM64_SVE */
 
@@ -30,6 +37,11 @@ static inline register_t vl_to_zcr(uint16_t vl)
     return 0;
 }
 
+static inline uint16_t get_sys_vl_len(void)
+{
+    return 0;
+}
+
 #endif
 
 #endif /* _ARM_ARM64_SVE_H */
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 42eb5df320a7..e4794a9fd2ab 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -31,6 +31,8 @@ enum domain_type {
 
 #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
 
+#define is_sve_domain(d) ((d)->arch.sve_vl_bits > 0)
+
 /*
  * Is the domain using the host memory layout?
  *
@@ -114,6 +116,9 @@ struct arch_domain
     void *tee;
 #endif
 
+    /* max SVE vector length in bits */
+    uint16_t sve_vl_bits;
+
 }  __cacheline_aligned;
 
 struct arch_vcpu
@@ -190,6 +195,7 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t zcr_el2;
     register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 1528ced5097a..e18a075105f0 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -304,6 +304,8 @@ struct xen_arch_domainconfig {
     uint16_t tee_type;
     /* IN */
     uint32_t nr_spis;
+    /* IN */
+    uint16_t sve_vl_bits;
     /*
      * OUT
      * Based on the property clock-frequency in the DT timer node.
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 51be28c3de7c..616d7a1c070d 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -21,7 +21,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:38:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:38:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475458.737210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcFd-0002Ge-FU; Wed, 11 Jan 2023 14:38:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475458.737210; Wed, 11 Jan 2023 14:38:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcFd-0002GV-An; Wed, 11 Jan 2023 14:38:49 +0000
Received: by outflank-mailman (input) for mailman id 475458;
 Wed, 11 Jan 2023 14:38:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pFcFb-0001NK-Ku
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:38:47 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id a681ec48-91bd-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 15:38:44 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 51FE913D5;
 Wed, 11 Jan 2023 06:39:26 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 45D913F71A;
 Wed, 11 Jan 2023 06:38:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a681ec48-91bd-11ed-b8d0-410ff93cb8f0
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 5/8] arm/sve: save/restore SVE context switch
Date: Wed, 11 Jan 2023 14:38:23 +0000
Message-Id: <20230111143826.3224-6-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>

Save/restore the SVE context on context switch: allocate memory to
hold the Z0-Z31 registers, whose length is at most 2048 bits each,
and FFR, which can be at most 256 bits. The amount of memory
allocated depends on the vector length of the domain and on how many
bits the platform supports.

Save P0-P15, whose length is at most 256 bits each, into the fpregs
field of struct vfp_state: V0-V31 are contained in Z0-Z31, so that
space would otherwise be unused for an SVE domain.

Create a zcr_el1 field in arch_vcpu and save/restore the ZCR_EL1
value on context switch.

Remove headers from sve.c that are already included via xen/sched.h.

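A minimal sketch of the sizing arithmetic, assuming the same layout the
patch uses (32 Z registers of VL/8 bytes each, plus an FFR of VL/64
bytes); these mirror the static helpers introduced in sve.c:

```c
#include <stdint.h>

/* Each of the 32 Z registers holds VL bits, i.e. VL/8 bytes. */
static uint16_t sve_zreg_ctx_size(uint16_t vl)
{
    return (vl / 8U) * 32U;
}

/* FFR holds VL/8 bits, i.e. VL/64 bytes. */
static uint16_t sve_ffrreg_ctx_size(uint16_t vl)
{
    return vl / 64U;
}
```

For the maximum VL of 2048 bits this gives an 8 KiB Z-register area plus
32 bytes for FFR, which is what sve_context_init() would allocate in that
case.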
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/arch/arm/arm64/sve.c                 |  58 +++++++++-
 xen/arch/arm/arm64/sve_asm.S             | 141 +++++++++++++++++++++++
 xen/arch/arm/arm64/vfp.c                 |  79 +++++++------
 xen/arch/arm/domain.c                    |  12 ++
 xen/arch/arm/include/asm/arm64/sve.h     |  13 +++
 xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
 xen/arch/arm/include/asm/arm64/vfp.h     |  10 ++
 xen/arch/arm/include/asm/domain.h        |   1 +
 8 files changed, 280 insertions(+), 37 deletions(-)

diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index b7695834f4ba..c7b325700fe4 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -5,12 +5,29 @@
  * Copyright (C) 2022 ARM Ltd.
  */
 
-#include <xen/types.h>
-#include <asm/cpufeature.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
 #include <asm/arm64/sve.h>
-#include <asm/arm64/sysregs.h>
 
 extern unsigned int sve_get_hw_vl(void);
+extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
+extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
+                         int restore_ffr);
+
+static inline uint16_t sve_zreg_ctx_size(uint16_t vl)
+{
+    /*
+     * The size in bytes of the Z0-Z31 register file is computed from VL,
+     * which is in bits: each of the 32 registers is VL/8 bytes long.
+     */
+    return (vl / 8U) * 32U;
+}
+
+static inline uint16_t sve_ffrreg_ctx_size(uint16_t vl)
+{
+    /* The FFR register is VL/8 bits long, which in bytes is VL/64 */
+    return (vl / 64U);
+}
 
 register_t compute_max_zcr(void)
 {
@@ -45,3 +62,38 @@ uint16_t get_sys_vl_len(void)
     return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
             SVE_VL_MULTIPLE_VAL;
 }
+
+int sve_context_init(struct vcpu *v)
+{
+    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(v->domain->arch.sve_vl_bits) +
+                             sve_ffrreg_ctx_size(v->domain->arch.sve_vl_bits),
+                             L1_CACHE_BYTES);
+
+    if ( !ctx )
+        return -ENOMEM;
+
+    v->arch.vfp.sve_context = ctx;
+
+    return 0;
+}
+
+void sve_context_free(struct vcpu *v)
+{
+    xfree(v->arch.vfp.sve_context);
+}
+
+void sve_save_state(struct vcpu *v)
+{
+    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
+            (sve_zreg_ctx_size(v->domain->arch.sve_vl_bits) / sizeof(uint64_t));
+
+    sve_save_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
+}
+
+void sve_restore_state(struct vcpu *v)
+{
+    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
+            (sve_zreg_ctx_size(v->domain->arch.sve_vl_bits) / sizeof(uint64_t));
+
+    sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
+}
diff --git a/xen/arch/arm/arm64/sve_asm.S b/xen/arch/arm/arm64/sve_asm.S
index 4d1549344733..8c37d7bc95d5 100644
--- a/xen/arch/arm/arm64/sve_asm.S
+++ b/xen/arch/arm/arm64/sve_asm.S
@@ -17,6 +17,18 @@
     .endif
 .endm
 
+.macro _sve_check_zreg znr
+    .if (\znr) < 0 || (\znr) > 31
+        .error "Bad Scalable Vector Extension vector register number \znr."
+    .endif
+.endm
+
+.macro _sve_check_preg pnr
+    .if (\pnr) < 0 || (\pnr) > 15
+        .error "Bad Scalable Vector Extension predicate register number \pnr."
+    .endif
+.endm
+
 .macro _check_num n, min, max
     .if (\n) < (\min) || (\n) > (\max)
         .error "Number \n out of range [\min,\max]"
@@ -26,6 +38,54 @@
 /* SVE instruction encodings for non-SVE-capable assemblers */
 /* (pre binutils 2.28, all kernel capable clang versions support SVE) */
 
+/* STR (vector): STR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (vector): LDR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* STR (predicate): STR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (predicate): LDR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
 /* RDVL X\nx, #\imm */
 .macro _sve_rdvl nx, imm
     _check_general_reg \nx
@@ -35,11 +95,92 @@
         | (((\imm) & 0x3f) << 5)
 .endm
 
+/* RDFFR (unpredicated): RDFFR P\np.B */
+.macro _sve_rdffr np
+    _sve_check_preg \np
+    .inst 0x2519f000                \
+        | (\np)
+.endm
+
+/* WRFFR P\np.B */
+.macro _sve_wrffr np
+    _sve_check_preg \np
+    .inst 0x25289000                \
+        | ((\np) << 5)
+.endm
+
+.macro __for from:req, to:req
+    .if (\from) == (\to)
+        _for__body %\from
+    .else
+        __for %\from, %((\from) + ((\to) - (\from)) / 2)
+        __for %((\from) + ((\to) - (\from)) / 2 + 1), %\to
+    .endif
+.endm
+
+.macro _for var:req, from:req, to:req, insn:vararg
+    .macro _for__body \var:req
+        .noaltmacro
+        \insn
+        .altmacro
+    .endm
+
+    .altmacro
+    __for \from, \to
+    .noaltmacro
+
+    .purgem _for__body
+.endm
+
+.macro sve_save nxzffrctx, nxpctx, save_ffr
+    _for n, 0, 31, _sve_str_v \n, \nxzffrctx, \n - 32
+    _for n, 0, 15, _sve_str_p \n, \nxpctx, \n
+        cbz \save_ffr, 1f
+        _sve_rdffr 0
+        _sve_str_p 0, \nxzffrctx
+        _sve_ldr_p 0, \nxpctx
+        b 2f
+1:
+        str xzr, [x\nxzffrctx]      // Zero out FFR
+2:
+.endm
+
+.macro sve_load nxzffrctx, nxpctx, restore_ffr
+    _for n, 0, 31, _sve_ldr_v \n, \nxzffrctx, \n - 32
+        cbz \restore_ffr, 1f
+        _sve_ldr_p 0, \nxzffrctx
+        _sve_wrffr 0
+1:
+    _for n, 0, 15, _sve_ldr_p \n, \nxpctx, \n
+.endm
+
 /* Gets the current vector register size in bytes */
 GLOBAL(sve_get_hw_vl)
     _sve_rdvl 0, 1
     ret
 
+/*
+ * Save the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Save FFR if non-zero
+ */
+GLOBAL(sve_save_ctx)
+    sve_save 0, 1, x2
+    ret
+
+/*
+ * Load the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Restore FFR if non-zero
+ */
+GLOBAL(sve_load_ctx)
+    sve_load 0, 1, x2
+    ret
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index 47885e76baae..2d0d7c2e6ddb 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -2,29 +2,35 @@
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
 #include <asm/vfp.h>
+#include <asm/arm64/sve.h>
 
 void vfp_save_state(struct vcpu *v)
 {
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
-                 "stp q2, q3, [%1, #16 * 2]\n\t"
-                 "stp q4, q5, [%1, #16 * 4]\n\t"
-                 "stp q6, q7, [%1, #16 * 6]\n\t"
-                 "stp q8, q9, [%1, #16 * 8]\n\t"
-                 "stp q10, q11, [%1, #16 * 10]\n\t"
-                 "stp q12, q13, [%1, #16 * 12]\n\t"
-                 "stp q14, q15, [%1, #16 * 14]\n\t"
-                 "stp q16, q17, [%1, #16 * 16]\n\t"
-                 "stp q18, q19, [%1, #16 * 18]\n\t"
-                 "stp q20, q21, [%1, #16 * 20]\n\t"
-                 "stp q22, q23, [%1, #16 * 22]\n\t"
-                 "stp q24, q25, [%1, #16 * 24]\n\t"
-                 "stp q26, q27, [%1, #16 * 26]\n\t"
-                 "stp q28, q29, [%1, #16 * 28]\n\t"
-                 "stp q30, q31, [%1, #16 * 30]\n\t"
-                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_save_state(v);
+    else
+    {
+        asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                     "stp q2, q3, [%1, #16 * 2]\n\t"
+                     "stp q4, q5, [%1, #16 * 4]\n\t"
+                     "stp q6, q7, [%1, #16 * 6]\n\t"
+                     "stp q8, q9, [%1, #16 * 8]\n\t"
+                     "stp q10, q11, [%1, #16 * 10]\n\t"
+                     "stp q12, q13, [%1, #16 * 12]\n\t"
+                     "stp q14, q15, [%1, #16 * 14]\n\t"
+                     "stp q16, q17, [%1, #16 * 16]\n\t"
+                     "stp q18, q19, [%1, #16 * 18]\n\t"
+                     "stp q20, q21, [%1, #16 * 20]\n\t"
+                     "stp q22, q23, [%1, #16 * 22]\n\t"
+                     "stp q24, q25, [%1, #16 * 24]\n\t"
+                     "stp q26, q27, [%1, #16 * 26]\n\t"
+                     "stp q28, q29, [%1, #16 * 28]\n\t"
+                     "stp q30, q31, [%1, #16 * 30]\n\t"
+                     : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    }
 
     v->arch.vfp.fpsr = READ_SYSREG(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG(FPCR);
@@ -37,23 +43,28 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
-                 "ldp q2, q3, [%1, #16 * 2]\n\t"
-                 "ldp q4, q5, [%1, #16 * 4]\n\t"
-                 "ldp q6, q7, [%1, #16 * 6]\n\t"
-                 "ldp q8, q9, [%1, #16 * 8]\n\t"
-                 "ldp q10, q11, [%1, #16 * 10]\n\t"
-                 "ldp q12, q13, [%1, #16 * 12]\n\t"
-                 "ldp q14, q15, [%1, #16 * 14]\n\t"
-                 "ldp q16, q17, [%1, #16 * 16]\n\t"
-                 "ldp q18, q19, [%1, #16 * 18]\n\t"
-                 "ldp q20, q21, [%1, #16 * 20]\n\t"
-                 "ldp q22, q23, [%1, #16 * 22]\n\t"
-                 "ldp q24, q25, [%1, #16 * 24]\n\t"
-                 "ldp q26, q27, [%1, #16 * 26]\n\t"
-                 "ldp q28, q29, [%1, #16 * 28]\n\t"
-                 "ldp q30, q31, [%1, #16 * 30]\n\t"
-                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_restore_state(v);
+    else
+    {
+        asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                     "ldp q2, q3, [%1, #16 * 2]\n\t"
+                     "ldp q4, q5, [%1, #16 * 4]\n\t"
+                     "ldp q6, q7, [%1, #16 * 6]\n\t"
+                     "ldp q8, q9, [%1, #16 * 8]\n\t"
+                     "ldp q10, q11, [%1, #16 * 10]\n\t"
+                     "ldp q12, q13, [%1, #16 * 12]\n\t"
+                     "ldp q14, q15, [%1, #16 * 14]\n\t"
+                     "ldp q16, q17, [%1, #16 * 16]\n\t"
+                     "ldp q18, q19, [%1, #16 * 18]\n\t"
+                     "ldp q20, q21, [%1, #16 * 20]\n\t"
+                     "ldp q22, q23, [%1, #16 * 22]\n\t"
+                     "ldp q24, q25, [%1, #16 * 24]\n\t"
+                     "ldp q26, q27, [%1, #16 * 26]\n\t"
+                     "ldp q28, q29, [%1, #16 * 28]\n\t"
+                     "ldp q30, q31, [%1, #16 * 30]\n\t"
+                     : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    }
 
     WRITE_SYSREG(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG(v->arch.vfp.fpcr, FPCR);
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 27f38729302b..228cd2f7627e 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -159,6 +159,11 @@ static void ctxt_switch_from(struct vcpu *p)
     /* VFP */
     vfp_save_state(p);
 
+#ifdef CONFIG_ARM64_SVE
+    if ( is_sve_domain(p->domain) )
+        p->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
+#endif
+
     /* VGIC */
     gic_save_state(p);
 
@@ -186,7 +191,10 @@ static void ctxt_switch_to(struct vcpu *n)
 
 #ifdef CONFIG_ARM64_SVE
     if ( is_sve_domain(n->domain) )
+    {
+        WRITE_SYSREG(n->arch.zcr_el1, ZCR_EL1);
         WRITE_SYSREG(n->arch.zcr_el2, ZCR_EL2);
+    }
 #endif
 
     /* VFP */
@@ -559,6 +567,8 @@ int arch_vcpu_create(struct vcpu *v)
     v->arch.cptr_el2 = get_default_cptr_flags();
     if ( is_sve_domain(v->domain) )
     {
+        if ( (rc = sve_context_init(v)) != 0 )
+            goto fail;
         v->arch.cptr_el2 &= ~HCPTR_CP(8);
         v->arch.zcr_el2 = vl_to_zcr(v->domain->arch.sve_vl_bits);
     }
@@ -591,6 +601,8 @@ fail:
 
 void arch_vcpu_destroy(struct vcpu *v)
 {
+    if ( is_sve_domain(v->domain) )
+        sve_context_free(v);
     vcpu_timer_destroy(v);
     vcpu_vgic_free(v);
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index f4a660e402ca..28c31b329233 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -24,6 +24,10 @@ static inline bool is_vl_valid(uint16_t vl)
 register_t compute_max_zcr(void);
 register_t vl_to_zcr(uint16_t vl);
 uint16_t get_sys_vl_len(void);
+int sve_context_init(struct vcpu *v);
+void sve_context_free(struct vcpu *v);
+void sve_save_state(struct vcpu *v);
+void sve_restore_state(struct vcpu *v);
 
 #else /* !CONFIG_ARM64_SVE */
 
@@ -42,6 +46,15 @@ static inline uint16_t get_sys_vl_len(void)
     return 0;
 }
 
+static inline int sve_context_init(struct vcpu *v)
+{
+    return 0;
+}
+
+static inline void sve_context_free(struct vcpu *v) {}
+static inline void sve_save_state(struct vcpu *v) {}
+static inline void sve_restore_state(struct vcpu *v) {}
+
 #endif
 
 #endif /* _ARM_ARM64_SVE_H */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 4cabb9eb4d5e..3fdeb9d8cdef 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -88,6 +88,9 @@
 #ifndef ID_AA64ISAR2_EL1
 #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
 #endif
+#ifndef ZCR_EL1
+#define ZCR_EL1                     S3_0_C1_C2_0
+#endif
 
 /* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
 
diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
index e6e8c363bc16..8af714cb8ecc 100644
--- a/xen/arch/arm/include/asm/arm64/vfp.h
+++ b/xen/arch/arm/include/asm/arm64/vfp.h
@@ -6,7 +6,17 @@
 
 struct vfp_state
 {
+    /*
+     * When SVE is enabled for the guest, the fpregs memory is used to
+     * save/restore the P0-P15 registers; otherwise it holds the V0-V31
+     * registers.
+     */
     uint64_t fpregs[64] __vfp_aligned;
+    /*
+     * When SVE is enabled for the guest, sve_context contains memory to
+     * save/restore Z0-Z31 registers and FFR.
+     */
+    uint64_t *sve_context;
     register_t fpcr;
     register_t fpexc32_el2;
     register_t fpsr;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index e4794a9fd2ab..4d1066750a9b 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -195,6 +195,7 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t zcr_el1;
     register_t zcr_el2;
     register_t cptr_el2;
     register_t hcr_el2;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:45:18 2023
Message-ID: <71e7227c-347e-6439-f3ab-827f2e646c24@suse.com>
Date: Wed, 11 Jan 2023 15:39:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 17/22] x86/setup: vmap heap nodes when they are outside
 the direct map
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-18-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20221216114853.8227-18-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 16.12.2022 12:48, Julien Grall wrote:
> @@ -597,22 +598,43 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>          needed = 0;
>      }
>      else if ( *use_tail && nr >= needed &&
> -              arch_mfns_in_directmap(mfn + nr - needed, needed) &&
>                (!xenheap_bits ||
>                 !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>      {
> -        _heap[node] = mfn_to_virt(mfn + nr - needed);
> -        avail[node] = mfn_to_virt(mfn + nr - 1) +
> -                      PAGE_SIZE - sizeof(**avail) * NR_ZONES;
> -    }
> -    else if ( nr >= needed &&

By replacing these two well-formed lines with ...

> -              arch_mfns_in_directmap(mfn, needed) &&
> +        if ( arch_mfns_in_directmap(mfn + nr - needed, needed) )
> +        {
> +            _heap[node] = mfn_to_virt(mfn + nr - needed);
> +            avail[node] = mfn_to_virt(mfn + nr - 1) +
> +                          PAGE_SIZE - sizeof(**avail) * NR_ZONES;
> +        }
> +        else
> +        {
> +            mfn_t needed_start = _mfn(mfn + nr - needed);
> +
> +            _heap[node] = vmap_contig_pages(needed_start, needed);
> +            BUG_ON(!_heap[node]);
> +            avail[node] = (void *)(_heap[node]) + (needed << PAGE_SHIFT) -
> +                          sizeof(**avail) * NR_ZONES;
> +        }
> +    } else if ( nr >= needed &&

... this, you're not only violating style here, but you also ...

>                (!xenheap_bits ||
>                 !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )

... break indentation for these two lines.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:45:19 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH 7/8] xen/tools: add sve parameter in XL configuration
Date: Wed, 11 Jan 2023 14:38:25 +0000
Message-Id: <20230111143826.3224-8-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>

Add the sve parameter to the XL configuration to allow guests to use
the SVE feature.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/man/xl.cfg.5.pod.in             | 11 +++++++++++
 tools/golang/xenlight/helpers.gen.go |  2 ++
 tools/golang/xenlight/types.gen.go   |  1 +
 tools/include/libxl.h                |  5 +++++
 tools/libs/light/libxl_arm.c         |  2 ++
 tools/libs/light/libxl_types.idl     |  1 +
 tools/xl/xl_parse.c                  | 10 ++++++++++
 7 files changed, 32 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 024bceeb61b2..60412f7e32a0 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2903,6 +2903,17 @@ Currently, only the "sbsa_uart" model is supported for ARM.
 
 =back
 
+=item B<sve="NUMBER">
+
To enable SVE, specify a non-zero value that is a multiple of 128 and at most
2048. The value is the maximum SVE vector length, in bits, that the hypervisor
will enforce for this guest. If the platform supports only a smaller vector
length, the smaller value is used instead.
The default value of zero means this guest is not allowed to use SVE.
+
+=back
+
 =head3 x86
 
 =over 4
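
As a usage sketch, the option documented above would appear in a (hypothetical)
xl guest configuration as follows, capping the guest's SVE vector length at
256 bits:

```
name = "guest-sve"
# Allow SVE with a maximum vector length of 256 bits
sve = 256
```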
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 3ac4938858f2..7f3b1e758b00 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1117,6 +1117,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
+x.ArchArm.Sve = uint32(xc.arch_arm.sve)
 if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
@@ -1602,6 +1603,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
+xc.arch_arm.sve = C.uint32_t(x.ArchArm.Sve)
 if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 16ce879e3fb7..ed144325682e 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -537,6 +537,7 @@ TypeUnion DomainBuildInfoTypeUnion
 ArchArm struct {
 GicVersion GicVersion
 Vuart VuartType
+Sve uint32
 }
 ArchX86 struct {
 MsrRelaxed Defbool
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index d652895075a0..1057962e2e3f 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -278,6 +278,11 @@
  */
 #define LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE 1
 
+/*
+ * libxl_domain_build_info has the arch_arm.sve field.
+ */
+#define LIBXL_HAVE_BUILDINFO_ARCH_ARM_SVE 1
+
 /*
  * LIBXL_HAVE_SOFT_RESET indicates that libxl supports performing
  * 'soft reset' for domains and there is 'soft_reset' shutdown reason
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index ddc7b2a15975..31f30e054bf4 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -211,6 +211,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
+    config->arch.sve_vl_bits = d_config->b_info.arch_arm.sve;
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 0cfad8508dbd..27e22523c7c2 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -663,6 +663,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
                                ("vuart", libxl_vuart_type),
+                               ("sve", uint32),
                               ])),
     ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
                               ])),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 853e9f357a1a..49b2f28807e5 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2828,6 +2828,16 @@ skip_usbdev:
         }
     }
 
+    if (!xlu_cfg_get_long (config, "sve", &l, 0)) {
+        if (((l % 128) != 0) || (l > 2048)) {
+            fprintf(stderr,
+                    "Invalid sve value: %ld. Needs to be <= 2048 and multiple"
+                    " of 128\n", l);
+            exit(-ERROR_FAIL);
+        }
+        b_info->arch_arm.sve = l;
+    }
+
     parse_vkb_list(config, d_config);
 
     d_config->virtios = NULL;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:45:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:45:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475503.737243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcLv-0005Rh-R6; Wed, 11 Jan 2023 14:45:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475503.737243; Wed, 11 Jan 2023 14:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcLv-0005Ra-Nt; Wed, 11 Jan 2023 14:45:19 +0000
Received: by outflank-mailman (input) for mailman id 475503;
 Wed, 11 Jan 2023 14:45:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pFcFd-0000FC-FX
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:38:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id a8afd8c2-91bd-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 15:38:48 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 299D6FEC;
 Wed, 11 Jan 2023 06:39:30 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3849D3F71A;
 Wed, 11 Jan 2023 06:38:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8afd8c2-91bd-11ed-91b6-6bf2151ebd3b
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 8/8] xen/arm: add sve property for dom0less domUs
Date: Wed, 11 Jan 2023 14:38:26 +0000
Message-Id: <20230111143826.3224-9-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>

Add a device tree property in the dom0less domU configuration
to enable the guest to use SVE.

Update documentation.
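The constraints documented above (0 disables SVE; any non-zero value must be a multiple of 128 and no larger than 2048) can be sketched as a standalone check. The helper name below is illustrative, not code from this series:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative helper mirroring the documented constraints on the
 * "sve" property: 0 disables SVE; otherwise the value must be a
 * multiple of 128 and no larger than 2048. */
static bool sve_vl_is_valid(uint32_t vl)
{
    return (vl <= 2048) && ((vl % 128) == 0);
}
```

Note that 0 itself passes the arithmetic check; it simply means "SVE disabled" rather than an invalid value.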

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/misc/arm/device-tree/booting.txt | 7 +++++++
 xen/arch/arm/domain_build.c           | 7 +++++++
 2 files changed, 14 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 3879340b5e0a..3d1ce652317e 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -193,6 +193,13 @@ with the following properties:
     Optional. Handle to a xen,cpupool device tree node that identifies the
     cpupool where the guest will be started at boot.
 
+- sve
+
+    Optional. A number that, when above 0, enables SVE for this guest and
+    sets its maximum SVE vector length. The default value is 0, which means
+    the guest is not allowed to use SVE. The maximum allowed value is 2048,
+    and any non-zero value must be a multiple of 128.
+
 - xen,enhanced
 
     A string property. Possible property values are:
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 48c3fdc28063..05b2bfc9195f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3959,6 +3959,13 @@ void __init create_domUs(void)
             d_cfg.max_maptrack_frames = val;
         }
 
+        if ( dt_property_read_u32(node, "sve", &val) )
+        {
+            if ( val > UINT16_MAX )
+                panic("sve property value (%"PRIu32") overflow\n", val);
+            d_cfg.arch.sve_vl_bits = val;
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:45:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:45:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475504.737254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcLx-0005iQ-3v; Wed, 11 Jan 2023 14:45:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475504.737254; Wed, 11 Jan 2023 14:45:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcLx-0005iF-0s; Wed, 11 Jan 2023 14:45:21 +0000
Received: by outflank-mailman (input) for mailman id 475504;
 Wed, 11 Jan 2023 14:45:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pFcFY-0000FC-D1
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:38:44 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id a5a241cf-91bd-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 15:38:43 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1FD5FFEC;
 Wed, 11 Jan 2023 06:39:25 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2E32A3F71A;
 Wed, 11 Jan 2023 06:38:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5a241cf-91bd-11ed-91b6-6bf2151ebd3b
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 4/8] xen/arm: add SVE exception class handling
Date: Wed, 11 Jan 2023 14:38:22 +0000
Message-Id: <20230111143826.3224-5-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>

SVE has a new exception class with code 0x19. Introduce the new code
and handle the exception.
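For context, the exception class is carried in bits [31:26] of HSR (ESR_EL2), and 0x19 identifies a trapped SVE access. Decoding it can be sketched as follows; the helper name is illustrative, not Xen's exact code:

```c
#include <assert.h>
#include <stdint.h>

#define HSR_EC_SHIFT 26
#define HSR_EC_MASK  0x3fU
#define HSR_EC_SVE   0x19

/* Extract the exception class field from an HSR/ESR_EL2 value. */
static unsigned int hsr_ec(uint64_t hsr)
{
    return (unsigned int)((hsr >> HSR_EC_SHIFT) & HSR_EC_MASK);
}
```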

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/arch/arm/include/asm/processor.h |  1 +
 xen/arch/arm/traps.c                 | 12 ++++++++++++
 2 files changed, 13 insertions(+)

diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 0e38926b94db..625c2bd0cd6c 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -426,6 +426,7 @@
 #define HSR_EC_HVC64                0x16
 #define HSR_EC_SMC64                0x17
 #define HSR_EC_SYSREG               0x18
+#define HSR_EC_SVE                  0x19
 #endif
 #define HSR_EC_INSTR_ABORT_LOWER_EL 0x20
 #define HSR_EC_INSTR_ABORT_CURR_EL  0x21
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 45163fd3afb0..66e07197aea5 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2168,6 +2168,13 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_sysreg);
         do_sysreg(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        GUEST_BUG_ON(regs_mode_is_32bit(regs));
+        gprintk(XENLOG_WARNING,
+                "Domain id %d tried to use SVE while not allowed\n",
+                current->domain->domain_id);
+        inject_undef_exception(regs, hsr);
+        break;
 #endif
 
     case HSR_EC_INSTR_ABORT_LOWER_EL:
@@ -2197,6 +2204,11 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
     case HSR_EC_BRK:
         do_trap_brk(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        /* An SVE exception is a bug somewhere in hypervisor code */
+        printk("SVE trap at EL2.\n");
+        do_unexpected_trap("Hypervisor", regs);
+        break;
 #endif
     case HSR_EC_DATA_ABORT_CURR_EL:
     case HSR_EC_INSTR_ABORT_CURR_EL:
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:45:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:45:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475507.737265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcLz-00061K-Cx; Wed, 11 Jan 2023 14:45:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475507.737265; Wed, 11 Jan 2023 14:45:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcLz-000617-9H; Wed, 11 Jan 2023 14:45:23 +0000
Received: by outflank-mailman (input) for mailman id 475507;
 Wed, 11 Jan 2023 14:45:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pFcFX-0000FC-Cn
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:38:43 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id a51c0bc6-91bd-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 15:38:42 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 09004169E;
 Wed, 11 Jan 2023 06:39:24 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 16C5F3F71A;
 Wed, 11 Jan 2023 06:38:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a51c0bc6-91bd-11ed-91b6-6bf2151ebd3b
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 3/8] xen/arm: Expose SVE feature to the guest
Date: Wed, 11 Jan 2023 14:38:21 +0000
Message-Id: <20230111143826.3224-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>

When a guest is allowed to use SVE, expose the SVE features through
the identification registers.
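The ID_AA64PFR0_EL1 handling below splices only the 4-bit SVE field from the system value into the guest view. The mask-and-merge step can be sketched in isolation (the shift of 32 for the SVE field comes from the architecture; the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define ID_AA64PFR0_SVE_SHIFT 32
#define ID_AA64PFR0_SVE_WIDTH 4

/* Replace the SVE field of the guest-visible register value with the
 * field taken from the system register value, leaving all other
 * fields of the guest value untouched. */
static uint64_t splice_sve_field(uint64_t guest_val, uint64_t sys_val)
{
    uint64_t mask = ((UINT64_C(1) << ID_AA64PFR0_SVE_WIDTH) - 1)
                    << ID_AA64PFR0_SVE_SHIFT;

    return (guest_val & ~mask) | (sys_val & mask);
}
```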

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/arch/arm/arm64/vsysreg.c | 39 ++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 758750983c11..10048bb4d221 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -18,6 +18,7 @@
 
 #include <xen/sched.h>
 
+#include <asm/arm64/cpufeature.h>
 #include <asm/current.h>
 #include <asm/regs.h>
 #include <asm/traps.h>
@@ -295,7 +296,28 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
     GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
     GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
-    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+
+    case HSR_SYSREG_ID_AA64PFR0_EL1:
+    {
+        register_t guest_reg_value = guest_cpuinfo.pfr64.bits[0];
+
+        if ( is_sve_domain(v->domain) )
+        {
+            /* 4 is the SVE field width in id_aa64pfr0_el1 */
+            uint64_t mask = GENMASK(ID_AA64PFR0_SVE_SHIFT + 4 - 1,
+                                    ID_AA64PFR0_SVE_SHIFT);
+            /* sysval is the sve field on the system */
+            uint64_t sysval = cpuid_feature_extract_unsigned_field_width(
+                                system_cpuinfo.pfr64.bits[0],
+                                ID_AA64PFR0_SVE_SHIFT, 4);
+            guest_reg_value &= ~mask;
+            guest_reg_value |= (sysval << ID_AA64PFR0_SVE_SHIFT) & mask;
+        }
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
+
     GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
     GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
     GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
@@ -306,7 +328,20 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
     GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
     GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
-    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
+    case HSR_SYSREG_ID_AA64ZFR0_EL1:
+    {
+        /*
+         * When the guest has the SVE feature enabled, the whole id_aa64zfr0_el1
+         * needs to be exposed.
+         */
+        register_t guest_reg_value = guest_cpuinfo.zfr64.bits[0];
+        if ( is_sve_domain(v->domain) )
+            guest_reg_value = system_cpuinfo.zfr64.bits[0];
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
 
     /*
      * Those cases are catching all Reserved registers trapped by TID3 which
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:45:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:45:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475509.737271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcLz-00064Q-RZ; Wed, 11 Jan 2023 14:45:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475509.737271; Wed, 11 Jan 2023 14:45:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcLz-00063i-Ir; Wed, 11 Jan 2023 14:45:23 +0000
Received: by outflank-mailman (input) for mailman id 475509;
 Wed, 11 Jan 2023 14:45:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1gQc=5I=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pFcFb-0000FC-Er
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:38:47 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id a760cc4e-91bd-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 15:38:46 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D45BEFEC;
 Wed, 11 Jan 2023 06:39:27 -0800 (PST)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.195.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 78DB13F71A;
 Wed, 11 Jan 2023 06:38:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a760cc4e-91bd-11ed-91b6-6bf2151ebd3b
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH 6/8] xen/arm: enable Dom0 to use SVE feature
Date: Wed, 11 Jan 2023 14:38:24 +0000
Message-Id: <20230111143826.3224-7-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>

Add a command line parameter to allow Dom0 to use SVE resources. The
dom0_sve parameter enables the feature for that domain and sets the
maximum SVE vector length for Dom0.
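As the documentation below notes, the requested value is only an upper bound: the effective length is capped by what the hardware supports. A minimal sketch of that cap (hw_max_vl_bits is a placeholder name, not a Xen symbol):

```c
#include <assert.h>
#include <stdint.h>

/* Pick the effective vector length: the requested dom0_sve value,
 * capped at the platform's supported maximum. */
static uint16_t effective_sve_vl(uint16_t requested, uint16_t hw_max_vl_bits)
{
    return requested < hw_max_vl_bits ? requested : hw_max_vl_bits;
}
```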

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/misc/xen-command-line.pandoc    | 12 ++++++++++++
 xen/arch/arm/arm64/sve.c             |  5 +++++
 xen/arch/arm/domain_build.c          |  4 ++++
 xen/arch/arm/include/asm/arm64/sve.h |  4 ++++
 4 files changed, 25 insertions(+)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 923910f553c5..940a96f4207c 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -995,6 +995,18 @@ restrictions set up here. Note that the values to be specified here are
 ACPI PXM ones, not Xen internal node numbers. `relaxed` sets up vCPU
 affinities to prefer but be not limited to the specified node(s).
 
+### dom0_sve (arm)
+> `= <integer>`
+
+> Default: `0`
+
+Enable Arm SVE usage for the Dom0 domain and set its maximum SVE vector
+length. A value above 0 enables the feature for Dom0; otherwise the
+feature is disabled. Possible values range from 0 to a maximum of 2048
+and must be a multiple of 128; the value sets the maximum vector length.
+Please note that the specified value is a maximum allowed vector length, so
+if the platform supports only a lower value, the lower one will be chosen.
+
 ### dom0_vcpus_pin
 > `= <boolean>`
 
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index c7b325700fe4..9f8c5d21a59f 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -5,10 +5,15 @@
  * Copyright (C) 2022 ARM Ltd.
  */
 
+#include <xen/param.h>
 #include <xen/sched.h>
 #include <xen/sizes.h>
 #include <asm/arm64/sve.h>
 
+/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
+unsigned int __initdata opt_dom0_sve;
+integer_param("dom0_sve", opt_dom0_sve);
+
 extern unsigned int sve_get_hw_vl(void);
 extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
 extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 829cea8de84f..48c3fdc28063 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -26,6 +26,7 @@
 #include <asm/platform.h>
 #include <asm/psci.h>
 #include <asm/setup.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 #include <asm/domain_build.h>
 #include <xen/event.h>
@@ -4075,6 +4076,9 @@ void __init create_dom0(void)
     if ( iommu_enabled )
         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
 
+    if ( opt_dom0_sve > 0 )
+        dom0_cfg.arch.sve_vl_bits = opt_dom0_sve;
+
     dom0 = domain_create(0, &dom0_cfg, CDF_privileged | CDF_directmap);
     if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
         panic("Error creating domain 0\n");
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 28c31b329233..dc6e747cec9e 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -21,6 +21,8 @@ static inline bool is_vl_valid(uint16_t vl)
 
 #ifdef CONFIG_ARM64_SVE
 
+extern unsigned int opt_dom0_sve;
+
 register_t compute_max_zcr(void);
 register_t vl_to_zcr(uint16_t vl);
 uint16_t get_sys_vl_len(void);
@@ -31,6 +33,8 @@ void sve_restore_state(struct vcpu *v);
 
 #else /* !CONFIG_ARM64_SVE */
 
+#define opt_dom0_sve (0)
+
 static inline register_t compute_max_zcr(void)
 {
     return 0;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:48:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 14:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475561.737286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcOW-0008KZ-AO; Wed, 11 Jan 2023 14:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475561.737286; Wed, 11 Jan 2023 14:48:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFcOW-0008KS-7h; Wed, 11 Jan 2023 14:48:00 +0000
Received: by outflank-mailman (input) for mailman id 475561;
 Wed, 11 Jan 2023 14:47:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1UMm=5I=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFcOU-0008KI-Gc
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 14:47:58 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2077.outbound.protection.outlook.com [40.107.20.77])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id efd3e813-91be-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 15:47:57 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9399.eurprd04.prod.outlook.com (2603:10a6:102:2b3::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 14:47:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Wed, 11 Jan 2023
 14:47:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efd3e813-91be-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <41ae2fa8-27a3-72d4-ab20-100085c55ba2@suse.com>
Date: Wed, 11 Jan 2023 15:47:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 18/22] x86/setup: do not create valid mappings when
 directmap=no
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-19-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20221216114853.8227-19-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0166.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9399:EE_
X-MS-Office365-Filtering-Correlation-Id: cacf34b4-e117-4e45-05c6-08daf3e2d2e8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cacf34b4-e117-4e45-05c6-08daf3e2d2e8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jan 2023 14:47:55.4957
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: t+fKBe4ntvDO5V5ktY10qiZMy+PwQZmS6lpgIUmv7uudTm9J0fO2S/bSOJJ607m9v34U8ijRTD7Mj8Pz8SanYA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9399

On 16.12.2022 12:48, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> Create empty mappings in the second e820 pass. Also, destroy existing
> direct map mappings created in the first pass.

Could you remind me (perhaps say a word here) why these need creating then
in the first place?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:50:21 2023
Message-ID: <0b7489e6-3d7b-ddc3-fe8f-8dbea629bedd@gmail.com>
Date: Wed, 11 Jan 2023 16:50:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v3 2/6] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Oleksii <oleksii.kurochko@gmail.com>
References: <cover.1673362493.git.oleksii.kurochko@gmail.com>
 <ca2674739cfa71cae0bf084a7b471ad4518026d3.1673362493.git.oleksii.kurochko@gmail.com>
 <c333b5b0-f936-59f8-d962-79d449403e6c@suse.com>
 <06185fcfb8cbd849df4b033efa923b55c054738d.camel@gmail.com>
 <1b6ee20d-2f32-ab38-83ec-69c33baf42fd@suse.com>
 <0398c48c-cc5d-a4fb-354f-54e3c5900d18@gmail.com>
 <644334f1-8e1d-3203-e942-0eb3ef5584a9@suse.com>
 <2acf07aa-ec56-99fc-765e-70fb7e753547@gmail.com>
 <72930382-b216-4ec3-6d29-889a3bf0bc6e@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <72930382-b216-4ec3-6d29-889a3bf0bc6e@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/11/23 15:17, Jan Beulich wrote:
> On 11.01.2023 13:30, Xenia Ragiadakou wrote:
>> Could you please help me understand also
>> why __signed__ keyword is required when declaring __{u,s}<N>?
>> I mean why __{u,s}<N> cannot be declared using the signed type
>> specifier, just like {u,s}<N>?
> 
> I'm afraid I can't, as this looks to have been (blindly?) imported
> from Linux very, very long ago. Having put time into going through
> our own history, I'm afraid I don't want to go further and see
> whether I could spot the reason for you by going through Linux's.

Sorry, I was not aiming to drag you (or anyone) into spotting why Linux 
uses __signed__ when declaring __s<N>. AFAIU these types are exported 
and used in userspace, and maybe the reason is to support building with 
pre-standard C compilers.
I am just wondering why Xen, which is compiled with -std=c99, uses 
__signed__. If there is no reason, I think this difference between the 
declarations of __{u,s}<N> and {u,s}<N> is misleading and confusing (to 
me at least).
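For what it's worth, a minimal stand-alone illustration of the two
spellings being compared; the type names mirror the header under
discussion, but this snippet is illustrative, not the actual
asm/types.h:

```c
/* GCC's alternate keyword, as inherited from Linux; the
 * double-underscore form is accepted by GCC/Clang in every language
 * mode, including -std=c99. */
typedef __signed__ int __s32;
typedef unsigned int __u32;

/* The plain C99 spelling, as used for the non-underscored variants;
 * under any C99 compiler the two forms declare identical types. */
typedef signed int s32;
typedef unsigned int u32;
```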

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 14:57:00 2023
Message-ID: <24b26a73-1e45-d556-c286-1e9d56bd4c29@suse.com>
Date: Wed, 11 Jan 2023 15:56:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/8] x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig
 options
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-2-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104084502.61734-2-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.01.2023 09:44, Xenia Ragiadakou wrote:
> Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
> specific to each IOMMU technology to be separated and, when not required,
> stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
> implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
> enable IOMMU support for platforms that implement the Intel Virtualization
> Technology for Directed I/O.
> 
> Since, at this point, disabling any of them would cause Xen to not compile,
> the options are not visible to the user and are enabled by default if X86.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
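In Kconfig terms, the description above amounts to something like the
following sketch (illustrative only: file placement and any surrounding
symbols are omitted, and "enabled by default if X86" is expressed here
as a dependency plus default):

```kconfig
config AMD_IOMMU
	bool
	depends on X86
	default y

config INTEL_IOMMU
	bool
	depends on X86
	default y
```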




From xen-devel-bounces@lists.xenproject.org Wed Jan 11 15:03:43 2023
Message-ID: <bbb9b0c5-886a-0fa1-d44d-9144fb86efb0@suse.com>
Date: Wed, 11 Jan 2023 16:03:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 2/8] x86/iommu: amd_iommu_perdev_intremap is AMD-Vi
 specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-3-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104084502.61734-3-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.01.2023 09:44, Xenia Ragiadakou wrote:
> @@ -116,7 +115,11 @@ static int __init cf_check parse_iommu_param(const char *s)
>                  iommu_verbose = 1;
>          }
>          else if ( (val = parse_boolean("amd-iommu-perdev-intremap", s, ss)) >= 0 )
> +#ifdef CONFIG_AMD_IOMMU
>              amd_iommu_perdev_intremap = val;
> +#else
> +            no_config_param("AMD_IOMMU", "amd-iommu-perdev-intremap", s, ss);

The string literal wants to be "iommu", I think. Please see other uses of
no_config_param().

Jan
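To illustrate the suggestion, a stand-alone sketch follows;
no_config_param() below is a simplified stand-in for Xen's helper (its
signature and message format are assumptions, not copied from the
tree), with the string literal changed to name the top-level "iommu"
command-line option as the review suggests:

```c
#include <stdio.h>
#include <string.h>

static char warning[128];

/* Hypothetical stub standing in for Xen's no_config_param(); the real
 * helper logs to the console, this one records the message so the
 * effect is visible in a plain program. */
static void no_config_param(const char *cfg, const char *param,
                            const char *s, const char *e)
{
    snprintf(warning, sizeof(warning),
             "CONFIG_%s disabled - ignoring '%s=%.*s' setting",
             cfg, param, (int)(e - s), s);
}

/* Mimics the reviewed hunk: s..ss delimit the sub-option as the
 * "iommu=" parser would see it, and the second literal names the
 * command-line option the ignored sub-option belongs to. */
static void parse_example(void)
{
    const char *s = "amd-iommu-perdev-intremap";
    const char *ss = s + strlen(s);

    no_config_param("AMD_IOMMU", "iommu", s, ss);
}
```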


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 15:20:12 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ac0335c-91c3-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=6d3Rm4gSZWrkj+xvHGYHayuc1OKGY/sZHKxHJwdXIIk=;
        b=SgZSElsCSwvUXv5klue32nTPdvZ5NGHMGHqqi0k8IGfxXoJN7ehr6vZSWT4peBt2Lx
         n0SGoTiviqGvu9f9qsL0ePa+sa3r/KDXNw9w9fSXRKvuEwCWMWUMFA/Q7gJdto0KJhem
         Pfm/RvBlI4YWcFCYM2J9Ci68rNDSZISrUCK2MoS3/aF8ZLL430iDnpf67re5HebEztDY
         voTiUPCage/hCR0uJoQXW7iMu9U3MMH8fvDct7d1f4XMcypAmDebNrJXdCj/hXZh3Xw1
         fC/9tCNFEtaUb5rBtcUPwjiEo2OqbiDA+4dAZr1i3l0kGkwHvpSuAc2cLua1mlu/lehV
         EkJg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=6d3Rm4gSZWrkj+xvHGYHayuc1OKGY/sZHKxHJwdXIIk=;
        b=3gptv2IyGKMM4/VRX01JjzBnj3demfBmf8UDmVK+mEaSuJrdXbGhUSS510bZqTLv4D
         LLXvz8wa3yd8co5Kvoqi5pCAeqXsJrI3Ql71mmzcBpcvl9lMRhO7I94KD7Ej+VPIyjiy
         XBLBxH3IiWrFHNWnZyuYSs10BjHCIr/mqPtAFHwahtsmzglWcUuYpEYq1Kgpu8FQVd57
         8egc5DbDfeTrdhMqN9qkBP2i+GGROTQMJ2zGdZ2+4/jYSdKGHJJis+tQ2Z6bmEM8V/SP
         AlcxHlocOOjcCyRm3Z71XY9aqacCgGn1vxL3SySs4Dlgg2rTLlKqEueCNUelqYHy8yUl
         FDFA==
X-Gm-Message-State: AFqh2kqarTFbTqnBTWOdmEOtz585THYpxudq3dtBFDleYz2Ie1GcC2hn
	TuDSLXHN8Z57DJekK+iYjPTXrTFFA1U=
X-Google-Smtp-Source: AMrXdXu5pqUwZLrcRUNsu3uAbY+6t77DOkzBSBRMNM2P/IRed6PF/rekItGgxt8NXX/JvpvVqPLJew==
X-Received: by 2002:a17:906:279a:b0:7c1:10b4:4742 with SMTP id j26-20020a170906279a00b007c110b44742mr58538232ejc.55.1673450401063;
        Wed, 11 Jan 2023 07:20:01 -0800 (PST)
Message-ID: <57631dd0-0811-75af-12cc-9091228d2a6c@gmail.com>
Date: Wed, 11 Jan 2023 17:19:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2 2/8] x86/iommu: amd_iommu_perdev_intremap is AMD-Vi
 specific
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-3-burzalodowa@gmail.com>
 <bbb9b0c5-886a-0fa1-d44d-9144fb86efb0@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <bbb9b0c5-886a-0fa1-d44d-9144fb86efb0@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/11/23 17:03, Jan Beulich wrote:
> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>> @@ -116,7 +115,11 @@ static int __init cf_check parse_iommu_param(const char *s)
>>                   iommu_verbose = 1;
>>           }
>>           else if ( (val = parse_boolean("amd-iommu-perdev-intremap", s, ss)) >= 0 )
>> +#ifdef CONFIG_AMD_IOMMU
>>               amd_iommu_perdev_intremap = val;
>> +#else
>> +            no_config_param("AMD_IOMMU", "amd-iommu-perdev-intremap", s, ss);
> 
> The string literal wants to be "iommu", I think. Please see other uses of
> no_config_param().

Thanks for reviewing this. You are right. I will fix it.

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 15:40:47 2023
Message-ID: <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
Date: Wed, 11 Jan 2023 10:40:24 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, Philippe Mathieu-Daudé <philmd@linaro.org>,
 Bernhard Beschow <shentey@gmail.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230110030331-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 1/10/23 3:16 AM, Michael S. Tsirkin wrote:
> On Tue, Jan 10, 2023 at 02:08:34AM -0500, Chuck Zmudzinski wrote:
>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>> as noted in docs/igd-assign.txt in the Qemu source code.
>> 
>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>> a different slot. This problem often prevents the guest from booting.
>> 
>> The only available workaround is not good: Configure Xen HVM guests to use
>> the old and no longer maintained Qemu traditional device model available
>> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>> 
>> To implement this feature in the Qemu upstream device model for Xen HVM
>> guests, introduce the following new functions, types, and macros:
>> 
>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>> * typedef XenPTQdevRealize function pointer
>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>> 
>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>> the xl toolstack with the gfx_passthru option enabled, which sets the
>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>> 
>> The new xen_igd_reserve_slot function also needs to be implemented in
>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>> in which case it does nothing.
>> 
>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>> 
>> Move the call to xen_host_pci_device_get, and the associated error
>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>> initialize the device class and vendor values which enables the checks for
>> the Intel IGD to succeed. The verification that the host device is an
>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>> and function values as well as by checking that gfx_passthru is enabled,
>> the device class is VGA, and the device vendor is Intel.
>> 
>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>> ---
>> Notes that might be helpful to reviewers of patched code in hw/xen:
>> 
>> The new functions and types are based on recommendations from Qemu docs:
>> https://qemu.readthedocs.io/en/latest/devel/qom.html
>> 
>> Notes that might be helpful to reviewers of patched code in hw/i386:
>> 
>> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
>> not affect builds that do not have CONFIG_XEN defined.
>> 
>> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
>> existing function that is only true when Qemu is built with
>> xen-pci-passthrough enabled and the administrator has configured the Xen
>> HVM guest with Qemu's igd-passthru=on option.
>> 
>> v2: Remove From: <email address> tag at top of commit message
>> 
>> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
>> 
>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
>> 
>>     is changed to
>> 
>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>         && (s->hostaddr.function == 0)) {
>> 
>>     I hoped that I could use the test in v2, since it matches the
>>     other tests for the Intel IGD in Qemu and Xen, but those tests
>>     do not work because the necessary data structures are not set with
>>     their values yet. So instead use the test that the administrator
>>     has enabled gfx_passthru and the device address on the host is
>>     02.0. This test does detect the Intel IGD correctly.
>> 
>> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>>     email address to match the address used by the same author in commits
>>     be9c61da and c0e86b76
>>     
>>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
>> 
>> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>>     for the Intel IGD that uses the same criteria as in other places.
>>     This involved moving the call to xen_host_pci_device_get from
>>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>>     Intel IGD in xen_igd_clear_slot:
>>     
>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>         && (s->hostaddr.function == 0)) {
>> 
>>     is changed to
>> 
>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>> 
>>     Added an explanation for the move of xen_host_pci_device_get from
>>     xen_pt_realize to xen_igd_clear_slot to the commit message.
>> 
>>     Rebase.
>> 
>> v6: Fix logging by removing these lines from the move from xen_pt_realize
>>     to xen_igd_clear_slot that was done in v5:
>> 
>>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>>                " to devfn 0x%x\n",
>>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>                s->dev.devfn);
>> 
>>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>>     set yet in xen_igd_clear_slot.
>> 
>> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>>     v7 was intended to be.
>> 
>> v8: Inhibit an out-of-context log message and needless processing by
>>     adding 2 lines at the top of the new xen_igd_clear_slot function:
>> 
>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>         return;
>> 
>>     Rebase. This removed an unnecessary header file from xen_pt.h.
>> 
>>  hw/i386/pc_piix.c    |  3 +++
>>  hw/xen/xen_pt.c      | 49 ++++++++++++++++++++++++++++++++++++--------
>>  hw/xen/xen_pt.h      | 16 +++++++++++++++
>>  hw/xen/xen_pt_stub.c |  4 ++++
>>  4 files changed, 63 insertions(+), 9 deletions(-)
>> 
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index b48047f50c..bc5efa4f59 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>>      }
>>  
>>      pc_xen_hvm_init_pci(machine);
>> +    if (xen_igd_gfx_pt_enabled()) {
>> +        xen_igd_reserve_slot(pcms->bus);
>> +    }
>>      pci_create_simple(pcms->bus, -1, "xen-platform");
>>  }
>>  #endif
> 
> I would even maybe go further and move the whole logic into
> xen_igd_reserve_slot. And I would even just name it
> xen_hvm_init_reserved_slots without worrying about the what
> or why at the pc level.  At this point it will be up to Xen maintainers.

I see that doing that would require resolving the two pc_xen_hvm*
functions in pc_piix.c that are guarded by CONFIG_XEN and moving
them to an appropriate place such as xen-hvm.c.

That is along the lines of the work that Bernhard and Philippe
are doing, so I am Cc'ing them. My first inclination is just
to defer to them: I think the little patch I propose here for
pc_piix.c is eventually going to be moved out of pc_piix.c
by Bernhard in a future patch.

What they have been doing is very conservative, and I expect that
if and when Bernhard gets around to resolving those functions, they
will do it in a way that keeps the dependency of the xenfv machine
type on the pc machine type and the pc_init1 function.

What I would propose is to break the dependency of xenfv on the
pc_init1 function. That is, I would propose a xenfv_init function
in xen-hvm.c whose first version would be the current pc_init1,
so xenfv would still depend on many i440fx-type things, but with
that change Xen developers would be free to tweak xenfv_init
without affecting users of the pc machine type.

Would that be a good idea? If I get positive feedback for this
idea, I will put it on the table, probably initially as an RFC
patch.

Also, thanks, Michael, for your other suggestions for this patch
about using macros for the devfn constants.

Chuck

> 
>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> index 0ec7e52183..eff38155ef 100644
>> --- a/hw/xen/xen_pt.c
>> +++ b/hw/xen/xen_pt.c
>> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>                 s->dev.devfn);
>>  
>> -    xen_host_pci_device_get(&s->real_device,
>> -                            s->hostaddr.domain, s->hostaddr.bus,
>> -                            s->hostaddr.slot, s->hostaddr.function,
>> -                            errp);
>> -    if (*errp) {
>> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
>> -        return;
>> -    }
>> -
>>      s->is_virtfn = s->real_device.is_virtfn;
>>      if (s->is_virtfn) {
>>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
>> @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>>  }
>>  
>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>> +{
>> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
>> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
>> +}
>> +
>> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
>> +{
>> +    ERRP_GUARD();
>> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
>> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
>> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
>> +
>> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>> +        return;
>> +
>> +    xen_host_pci_device_get(&s->real_device,
>> +                            s->hostaddr.domain, s->hostaddr.bus,
>> +                            s->hostaddr.slot, s->hostaddr.function,
>> +                            errp);
>> +    if (*errp) {
>> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
>> +        return;
>> +    }
>> +
>> +    if (is_igd_vga_passthrough(&s->real_device) &&
>> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
>> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
>> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
> how about macros for these?
> 
> #define XEN_PCI_IGD_DOMAIN 0
> #define XEN_PCI_IGD_BUS 0
> #define XEN_PCI_IGD_DEV 2
> #define XEN_PCI_IGD_FN 0
> 
>> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> 
> If you are going to do this, you should set it back
> either after pci_qdev_realize or in unrealize,
> for symmetry.
> 
>> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
>> +    }
> 
> 
>> +    xpdc->pci_qdev_realize(qdev, errp);
>> +}
>> +
> 
> 
> 
>>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>>  {
>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>  
>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
>> +    xpdc->pci_qdev_realize = dc->realize;
>> +    dc->realize = xen_igd_clear_slot;
>>      k->realize = xen_pt_realize;
>>      k->exit = xen_pt_unregister_device;
>>      k->config_read = xen_pt_pci_read_config;
>> @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>>      .instance_size = sizeof(XenPCIPassthroughState),
>>      .instance_finalize = xen_pci_passthrough_finalize,
>>      .class_init = xen_pci_passthrough_class_init,
>> +    .class_size = sizeof(XenPTDeviceClass),
>>      .instance_init = xen_pci_passthrough_instance_init,
>>      .interfaces = (InterfaceInfo[]) {
>>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
>> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
>> index cf10fc7bbf..8c25932b4b 100644
>> --- a/hw/xen/xen_pt.h
>> +++ b/hw/xen/xen_pt.h
>> @@ -2,6 +2,7 @@
>>  #define XEN_PT_H
>>  
>>  #include "hw/xen/xen_common.h"
>> +#include "hw/pci/pci_bus.h"
>>  #include "xen-host-pci-device.h"
>>  #include "qom/object.h"
>>  
>> @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
>>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>>  
>> +#define XEN_PT_DEVICE_CLASS(klass) \
>> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
>> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
>> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
>> +
>> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
>> +
>> +typedef struct XenPTDeviceClass {
>> +    PCIDeviceClass parent_class;
>> +    XenPTQdevRealize pci_qdev_realize;
>> +} XenPTDeviceClass;
>> +
>>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
>> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>>                                             XenHostPCIDevice *dev);
>> @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
>>  
>>  #define XEN_PCI_INTEL_OPREGION 0xfc
>>  
>> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
>> +
> 
> I think you want to calculate this based on dev fn:
> 
> #define XEN_PCI_IGD_SLOT_MASK \
> 	(0x1 << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
> 
> 
>>  typedef enum {
>>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
>> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
>> index 2d8cac8d54..5c108446a8 100644
>> --- a/hw/xen/xen_pt_stub.c
>> +++ b/hw/xen/xen_pt_stub.c
>> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>>          error_setg(errp, "Xen PCI passthrough support not built in");
>>      }
>>  }
>> +
>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>> +{
>> +}
>> -- 
>> 2.39.0
> 



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 15:49:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 15:49:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475628.737369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFdM0-0001aM-J9; Wed, 11 Jan 2023 15:49:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475628.737369; Wed, 11 Jan 2023 15:49:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFdM0-0001aF-GL; Wed, 11 Jan 2023 15:49:28 +0000
Received: by outflank-mailman (input) for mailman id 475628;
 Wed, 11 Jan 2023 15:49:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFdLz-0001a5-Ds; Wed, 11 Jan 2023 15:49:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFdLz-0003GB-A5; Wed, 11 Jan 2023 15:49:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFdLz-0006im-37; Wed, 11 Jan 2023 15:49:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFdLz-0004bX-2e; Wed, 11 Jan 2023 15:49:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3+zwuy+EA5WrvQE6y1eJ9ZPpehJnYnLBd21Jz9Lu3K8=; b=38JxeEyTIXxLd/jcJeDSQKd1N0
	c93kXt/HHTKevL6jUrUONCYc2ybgrGidaNT1zPx0c4vKKOEwAAAIU4zfZeweTGlFIHST/nU75G3ke
	18bzNi6N53V7JUSsK+hrMI44P3pCMIuQ3CTRuabJ6SLDeScn9zfV5hnJ2iFEumrJsCEo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175720-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175720: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4d975798e11579fdf405b348543061129e01b0fb
X-Osstest-Versions-That:
    xen=4d975798e11579fdf405b348543061129e01b0fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 15:49:27 +0000

flight 175720 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175720/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-i386  7 xen-install              fail pass in 175714
 test-amd64-i386-pair         10 xen-install/src_host       fail pass in 175714
 test-amd64-i386-xl-vhd       22 guest-start.2              fail pass in 175714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175714
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175714
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175714
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175714
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175714
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175714
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175714
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175714
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175714
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175714
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4d975798e11579fdf405b348543061129e01b0fb
baseline version:
 xen                  4d975798e11579fdf405b348543061129e01b0fb

Last test of basis   175720  2023-01-11 08:10:04 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 16:59:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 16:59:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475636.737380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFeRm-0000fb-Qy; Wed, 11 Jan 2023 16:59:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475636.737380; Wed, 11 Jan 2023 16:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFeRm-0000fU-OO; Wed, 11 Jan 2023 16:59:30 +0000
Received: by outflank-mailman (input) for mailman id 475636;
 Wed, 11 Jan 2023 16:59:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFeRl-0000fO-DN
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 16:59:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFeRk-0005Kj-Kz; Wed, 11 Jan 2023 16:59:28 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.5.208]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFeRk-0006fG-Et; Wed, 11 Jan 2023 16:59:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=8JyxsweNSjFnsgr9I+UrD3dI3IlDhE6PVGuCmTDcsBg=; b=TqLmaPnp7KDDR415D3LtoZxJJw
	0FOqiwID9R7GQsFIX4jYfdvPb203GecYn9CX+uZSmnCXXvHb4W8CUQH8cwkwU2EfgOL12zMBu4ON2
	3EqlxTlE3StQipJaJdz2z34+Rk9yOF8Acg6W6d5YRPLwKvaD2Qy5OfNg5snJ3rAf2bWE=;
Message-ID: <3e4ce6c0-9949-1312-f492-913b7dd2cf18@xen.org>
Date: Wed, 11 Jan 2023 16:59:25 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 0/8] SVE feature for arm guests
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230111143826.3224-1-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 11/01/2023 14:38, Luca Fancellu wrote:
> This series introduces the possibility for Dom0 and DomU guests to use
> SVE/SVE2 instructions.
> 
> The SVE feature introduces new instructions and registers to improve the
> performance of floating-point operations.
> 
> The SVE feature is advertised via the SVE field of the ID_AA64PFR0_EL1
> register, and when available the ID_AA64ZFR0_EL1 register provides
> additional information about the implemented version and other SVE features.
> 
> New registers added by the SVE feature are Z0-Z31, P0-P15, FFR, ZCR_ELx.
> 
> Z0-Z31 are scalable vector registers whose size is implementation defined,
> ranging from 128 bits to a maximum of 2048; the term "vector length" will
> be used to refer to this quantity.
> P0-P15 are predicate registers whose size is the vector length divided by
> 8; the FFR (First Fault Register) has the same size.
> ZCR_ELx is a register that controls and restricts the maximum vector length
> used by exception level <x> and all lower exception levels; for example,
> EL3 can restrict the vector length usable by EL3, EL2, EL1 and EL0.
> 
> The platform has a maximum implemented vector length, so if a value written
> to the ZCR register exceeds the implemented length, the lower value is used
> instead. The RDVL instruction can be used to check which vector length the
> hardware is using after setting ZCR.
> 
> For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is no
> need to save them separately; saving Z0-Z31 implicitly saves V0-V31 as
> well.
> 
> SVE usage can be trapped using a flag in CPTR_EL2, hence in this series the
> register is added to the domain state, so that only guests that are not
> allowed to use SVE are trapped.
> 
> This series introduces a command-line parameter to allow Dom0 to use SVE
> and to set its maximum vector length; the default is 0, which means Dom0
> is not allowed to use SVE. Values from 128 to 2048 mean the guest can use
> SVE, with the selected value used as the maximum allowed vector length
> (which could be lower if the implemented one is lower).
> For DomUs, an XL parameter used in the same way is introduced, and a
> dom0less DTB binding is created.
> 
> The context switch is the most critical part because large registers may
> need to be saved; in this series a simple approach is used, and the context
> is saved/restored on every switch for guests that are allowed to use SVE.

This would be OK as an initial approach. But I would be worried about
officially supporting SVE because of the potentially large impact on other
users.

What's the long term plan?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 17:14:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 17:14:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475642.737392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFegM-00032F-5J; Wed, 11 Jan 2023 17:14:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475642.737392; Wed, 11 Jan 2023 17:14:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFegM-000328-1W; Wed, 11 Jan 2023 17:14:34 +0000
Received: by outflank-mailman (input) for mailman id 475642;
 Wed, 11 Jan 2023 17:14:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFegK-00031y-0j; Wed, 11 Jan 2023 17:14:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFegJ-0005g2-Nq; Wed, 11 Jan 2023 17:14:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFegJ-00010p-Eb; Wed, 11 Jan 2023 17:14:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFegJ-0005S0-E7; Wed, 11 Jan 2023 17:14:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xVuqWYu0r/UOSN8D56WeD5YbVyHqDEwYeGxvcqgHbz8=; b=TdXs0+t9gba1Gsqq8UyhQHqzu9
	Xl7J9ZdXOTmKYcaf2BRiOotkA01n+kQ7mVd30y8CPYmK36MYdtTnSP3Fv6PyVawoheW29fL2diN5t
	7MelQxAYtkpmJNHFT8ktRT4GHFKrguFSIEpgyhiPvQoYAoM7+zlylWxucOLwjBTisDkk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175725-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175725: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 17:14:31 +0000

flight 175725 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175725/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    3 days
Failing since        175627  2023-01-08 14:40:14 Z    3 days   16 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 17:16:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 17:16:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475651.737403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFeib-0003fD-NE; Wed, 11 Jan 2023 17:16:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475651.737403; Wed, 11 Jan 2023 17:16:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFeib-0003f6-JO; Wed, 11 Jan 2023 17:16:53 +0000
Received: by outflank-mailman (input) for mailman id 475651;
 Wed, 11 Jan 2023 17:16:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFeiZ-0003f0-Sl
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 17:16:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFeiZ-0005tZ-IU; Wed, 11 Jan 2023 17:16:51 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.5.208]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFeiZ-0007er-Ce; Wed, 11 Jan 2023 17:16:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2bzw1WLg844SYQ5XotTV+5/B3cmddNxNxoBmTA6gcb8=; b=1nF2crPSUbKxCLzolKIeIHrBeT
	cQn+vtmsWm2WbeErVdql+YJ4BsQqgbJCO3+bajdXDDUZIAZnxAMJGf4XVkiSzVuJREn8AkyPDXCWJ
	zVaA3M96h7m5xeQATefgrqOaU6cLpdGMlrqObZxFz+syQ/z6OkvFdnQ7rTBbuMQeXT3c=;
Message-ID: <e37e5564-e7b9-c9d2-1360-171c014649c7@xen.org>
Date: Wed, 11 Jan 2023 17:16:49 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 1/8] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <20230111143826.3224-2-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230111143826.3224-2-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

As this is an RFC, I will be mostly making general comments.

On 11/01/2023 14:38, Luca Fancellu wrote:
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 99577adb6c69..8ea3843ea8e8 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
>       /* VGIC */
>       gic_restore_state(n);
>   
> +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);

Shouldn't this need an isb() afterwards to ensure that any previously 
trapped registers will be accessible?

[...]

> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>   
>   void init_traps(void)
>   {
> +    register_t cptr_bits = get_default_cptr_flags();
>       /*
>        * Setup Hyp vector base. Note they might get updated with the
>        * branch predictor hardening.
> @@ -135,17 +151,15 @@ void init_traps(void)
>       /* Trap CP15 c15 used for implementation defined registers */
>       WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>   
> -    /* Trap all coprocessor registers (0-13) except cp10 and
> -     * cp11 for VFP.
> -     *
> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
> -     *
> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
> -     * RES1, i.e. they would trap whether we did this write or not.
> +#ifdef CONFIG_ARM64_SVE
> +    /*
> +     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
> +     * trapping again or not will be handled on vcpu creation/scheduling later
>        */

Instead of enabling it by default at boot, can we try to enable/disable 
it only when this is strictly needed?

> -    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
> -                 HCPTR_TTA | HCPTR_TAM,
> -                 CPTR_EL2);
> +    cptr_bits &= ~HCPTR_CP(8);
> +#endif
> +
> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
>   
>       /*
>        * Configure HCR_EL2 with the bare minimum to run Xen until a guest

Cheers,
-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 17:28:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 17:28:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475658.737414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFetL-0005Ag-M0; Wed, 11 Jan 2023 17:27:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475658.737414; Wed, 11 Jan 2023 17:27:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFetL-0005AZ-JE; Wed, 11 Jan 2023 17:27:59 +0000
Received: by outflank-mailman (input) for mailman id 475658;
 Wed, 11 Jan 2023 17:27:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFetK-0005AT-Pj
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 17:27:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFetK-000664-DN; Wed, 11 Jan 2023 17:27:58 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.5.208]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFetK-00081C-5p; Wed, 11 Jan 2023 17:27:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=vx4ET+fi0wgacgZzFl1B6OvOIT/Q1T/uIXWJTLhiJ8Q=; b=5J+JVCemmfBxg9Q1/lVjUbiCzh
	WsVQFLjqhgkuxqdU8/t02lCHRd7GRsg/OsLkrPjMNiEXub9hHbnJ56l8dV454aAs7sizhMCEN9oXD
	PWlvaA3z2bbnPcACg/oX4XsdpQBhnMzAUxzS6IlZP5nKYow7YR/yWlTTrHP7L3MARm5c=;
Message-ID: <91b5c7db-ec9b-efa6-f5cf-dc5e8b176db6@xen.org>
Date: Wed, 11 Jan 2023 17:27:55 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 2/8] xen/arm: add sve_vl_bits field to domain
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <20230111143826.3224-3-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230111143826.3224-3-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 11/01/2023 14:38, Luca Fancellu wrote:
> Add sve_vl_bits field to the arch_domain and xen_arch_domainconfig
> structures, to give the domain information about the SVE feature
> and the number of SVE register bits that are allowed for this
> domain.
> 
> The field is also used to allow or forbid a domain to use SVE,
> because a value equal to zero means the guest is not allowed to
> use the feature.
> 
> When the guest is allowed to use SVE, the zcr_el2 register is
> updated on context switch to restrict the domain to the allowed
> number of bits chosen; this value is the minimum of the chosen
> value and the platform-supported value.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>   xen/arch/arm/arm64/sve.c             |  9 ++++++
>   xen/arch/arm/domain.c                | 45 ++++++++++++++++++++++++++++
>   xen/arch/arm/include/asm/arm64/sve.h | 12 ++++++++
>   xen/arch/arm/include/asm/domain.h    |  6 ++++
>   xen/include/public/arch-arm.h        |  2 ++
>   xen/include/public/domctl.h          |  2 +-
>   6 files changed, 75 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 326389278292..b7695834f4ba 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -6,6 +6,7 @@
>    */
>   
>   #include <xen/types.h>
> +#include <asm/cpufeature.h>
>   #include <asm/arm64/sve.h>
>   #include <asm/arm64/sysregs.h>
>   
> @@ -36,3 +37,11 @@ register_t vl_to_zcr(uint16_t vl)
>   {
>       return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
>   }
> +
> +/* Get the system sanitized value for VL in bits */
> +uint16_t get_sys_vl_len(void)
> +{
> +    /* ZCR_ELx len field is ((len+1) * 128) = vector bits length */
> +    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
> +            SVE_VL_MULTIPLE_VAL;
> +}
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 8ea3843ea8e8..27f38729302b 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -13,6 +13,7 @@
>   #include <xen/wait.h>
>   
>   #include <asm/alternative.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpuerrata.h>
>   #include <asm/cpufeature.h>
>   #include <asm/current.h>
> @@ -183,6 +184,11 @@ static void ctxt_switch_to(struct vcpu *n)
>   
>       WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>   
> +#ifdef CONFIG_ARM64_SVE
> +    if ( is_sve_domain(n->domain) )
> +        WRITE_SYSREG(n->arch.zcr_el2, ZCR_EL2);
> +#endif

I would actually expect that is_sve_domain() returns false when SVE 
is not enabled. So can we simply remove the #ifdef?

> +
>       /* VFP */
>       vfp_restore_state(n);
>   
> @@ -551,6 +557,11 @@ int arch_vcpu_create(struct vcpu *v)
>       v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>   
>       v->arch.cptr_el2 = get_default_cptr_flags();
> +    if ( is_sve_domain(v->domain) )
> +    {
> +        v->arch.cptr_el2 &= ~HCPTR_CP(8);
> +        v->arch.zcr_el2 = vl_to_zcr(v->domain->arch.sve_vl_bits);
> +    }
>   
>       v->arch.hcr_el2 = get_default_hcr_flags();
>   
> @@ -595,6 +606,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>       unsigned int max_vcpus;
>       unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>       unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
> +    unsigned int sve_vl_bits = config->arch.sve_vl_bits;
>   
>       if ( (config->flags & ~flags_optional) != flags_required )
>       {
> @@ -603,6 +615,36 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>           return -EINVAL;
>       }
>   
> +    /* Check feature flags */
> +    if ( sve_vl_bits > 0 ) {
> +        unsigned int zcr_max_bits;
> +
> +        if ( !cpu_has_sve )
> +        {
> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
> +            return -EINVAL;
> +        }
> +        else if ( !is_vl_valid(sve_vl_bits) )
> +        {
> +            dprintk(XENLOG_INFO, "Unsupported SVE vector length (%u)\n",
> +                    sve_vl_bits);
> +            return -EINVAL;
> +        }
> +        /*
> +         * get_sys_vl_len() is the common safe value among all cpus, so if the
> +         * value specified by the user is above that value, use the safe value
> +         * instead.
> +         */
> +        zcr_max_bits = get_sys_vl_len();
> +        if ( sve_vl_bits > zcr_max_bits )
> +        {
> +            config->arch.sve_vl_bits = zcr_max_bits;
> +            dprintk(XENLOG_INFO,
> +                    "SVE vector length lowered to %u, safe value among CPUs\n",
> +                    zcr_max_bits);
> +        }

I don't think Xen should "downgrade" the value. Instead, this should be 
the tools' decision. So we want to find a different way to retrieve the 
value (Andrew may have some ideas here, as he was looking at it).
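The alternative being suggested can be sketched like this (hypothetical helper name; the real check would sit in arch_sanitise_domain_config()): reject an over-large vector length and let the toolstack pick a supported one, instead of silently lowering it:

```c
#include <assert.h>
#include <errno.h>

/*
 * Hypothetical helper: refuse a vector length above the system-safe
 * maximum rather than downgrading it, so the decision stays with the
 * tools, as suggested in the review.
 */
static int check_sve_vl_bits(unsigned int sve_vl_bits,
                             unsigned int zcr_max_bits)
{
    if (sve_vl_bits > zcr_max_bits)
        return -EINVAL; /* toolstack must choose a value <= zcr_max_bits */
    return 0;
}
```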

> +    }
> +
>       /* The P2M table must always be shared between the CPU and the IOMMU */
>       if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
>       {
> @@ -745,6 +787,9 @@ int arch_domain_create(struct domain *d,
>       if ( (rc = domain_vpci_init(d)) != 0 )
>           goto fail;
>   
> +    /* Copy sve_vl_bits to the domain configuration */
> +    d->arch.sve_vl_bits = config->arch.sve_vl_bits;
> +
>       return 0;
>   
>   fail:
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index bd56e2f24230..f4a660e402ca 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -13,10 +13,17 @@
>   /* Vector length must be multiple of 128 */
>   #define SVE_VL_MULTIPLE_VAL (128U)
>   
> +static inline bool is_vl_valid(uint16_t vl)
> +{
> +    /* SVE vector length is multiple of 128 and maximum 2048 */
> +    return ((vl % SVE_VL_MULTIPLE_VAL) == 0) && (vl <= SVE_VL_MAX_BITS);
> +}
> +
>   #ifdef CONFIG_ARM64_SVE
>   
>   register_t compute_max_zcr(void);
>   register_t vl_to_zcr(uint16_t vl);
> +uint16_t get_sys_vl_len(void);
>   
>   #else /* !CONFIG_ARM64_SVE */
>   
> @@ -30,6 +37,11 @@ static inline register_t vl_to_zcr(uint16_t vl)
>       return 0;
>   }
>   
> +static inline uint16_t get_sys_vl_len(void)
> +{
> +    return 0;
> +}
> +
>   #endif
>   
>   #endif /* _ARM_ARM64_SVE_H */
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index 42eb5df320a7..e4794a9fd2ab 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -31,6 +31,8 @@ enum domain_type {
>   
>   #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>   
> +#define is_sve_domain(d) ((d)->arch.sve_vl_bits > 0)
> +
>   /*
>    * Is the domain using the host memory layout?
>    *
> @@ -114,6 +116,9 @@ struct arch_domain
>       void *tee;
>   #endif
>   
> +    /* max SVE vector length in bits */
> +    uint16_t sve_vl_bits;
> +
>   }  __cacheline_aligned;
>   
>   struct arch_vcpu
> @@ -190,6 +195,7 @@ struct arch_vcpu
>       register_t tpidrro_el0;
>   
>       /* HYP configuration */
> +    register_t zcr_el2;
>       register_t cptr_el2;
>       register_t hcr_el2;
>       register_t mdcr_el2;
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 1528ced5097a..e18a075105f0 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -304,6 +304,8 @@ struct xen_arch_domainconfig {
>       uint16_t tee_type;
>       /* IN */
>       uint32_t nr_spis;
> +    /* IN */
> +    uint16_t sve_vl_bits;

Please spell out the padding explicitly (even though I know there is 
already some unmentioned padding in this structure).
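As an illustration of "spelling out the padding" (a hypothetical mock of the layout, not the real public header): every hole the ABI would otherwise leave implicit gets a named pad field, making the wire layout explicit:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mock of the layout, with all padding named explicitly. */
struct arch_domainconfig_sketch {
    uint8_t  gic_version;
    uint8_t  pad0;            /* explicit: was an implicit hole */
    uint16_t tee_type;
    uint32_t nr_spis;
    uint16_t sve_vl_bits;
    uint16_t pad1;            /* keeps clock_frequency 4-byte aligned */
    uint32_t clock_frequency; /* OUT */
};
```

The field set and ordering here are assumptions for illustration; the point is that no offset depends on compiler-inserted padding.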

>       /*
>        * OUT
>        * Based on the property clock-frequency in the DT timer node.
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 51be28c3de7c..616d7a1c070d 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -21,7 +21,7 @@
>   #include "hvm/save.h"
>   #include "memory.h"
>   
> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
>   
>   /*
>    * NB. xen_domctl.domain is an IN/OUT parameter for this operation.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 17:41:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 17:41:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475664.737425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFf66-0007U2-Pi; Wed, 11 Jan 2023 17:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475664.737425; Wed, 11 Jan 2023 17:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFf66-0007Tv-Mv; Wed, 11 Jan 2023 17:41:10 +0000
Received: by outflank-mailman (input) for mailman id 475664;
 Wed, 11 Jan 2023 17:41:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFf65-0007Tp-Sz
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 17:41:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFf65-0006Vr-ML; Wed, 11 Jan 2023 17:41:09 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.5.208]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFf65-0000Am-GH; Wed, 11 Jan 2023 17:41:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:Cc:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=fv3MXOu6EuolHTnQDcmuRLHyg/qPSeb3PzTxILBhHN8=; b=dv7SVcKhR+mtH7H4vfq+alyXYz
	01aYzpxEVaPQfeNfZcLvYAxF9K4MBEEIOGPyBXKjvNkmfFD/GdQHSQKRC9vlV5SG69HKWN5IxtZRJ
	NDhIpDFi62cv+0jd0oz4E8dKaMhRw9DbT9QfXo6YqPOuOyPBiFx8m8j2ldmddbTF3CiU=;
Message-ID: <3e7059c2-0d23-03f2-9a93-f88de09171f4@xen.org>
Date: Wed, 11 Jan 2023 17:41:08 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
Content-Language: en-US
To: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <AM5PR0802MB25781717167B5BFC980BF2A49DFF9@AM5PR0802MB2578.eurprd08.prod.outlook.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM5PR0802MB25781717167B5BFC980BF2A49DFF9@AM5PR0802MB2578.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

(+Stefano)

On 10/01/2023 14:20, El Mesdadi Youssef ESK UILD7 wrote:
> Hello,
> 
> I'm trying to measure the performance of Xen on the S32G3 microprocessor. Unfortunately, after following the BSP-Linux instructions to install Xen, I found that Xentrace is not available or not compatible with the ARM architecture. I have seen some studies on Xilinx boards and how they made Xentrace work on ARM, but I have no resources or access to reproduce that on my board. Any help would be appreciated; thanks in advance.

xentrace should work on upstream Xen. Which version did you try? 
Can you also clarify the error you are seeing?

> 
> I have some extra questions; it would be helpful if you could share your ideas with me:
> 
> 
>    *   Is it possible to run a native application (C code) on the virtual machine, turn on an LED, access the GPIO, or send some messages over the CAN interface?

Yes, if you assign the GPIO/LED/CAN interface to the guest (or provide 
a para-virtualized driver for it).

>    *   My board has no Ethernet and no external SD card; is there any method I can use to build a kernel for an operating system on my laptop and transfer it to the board?

I am confused: if you don't have network or an external SD card, then 
how did you first put Xen on your system?

In theory you could transfer the binary (using base64) via the serial 
console, but that's hackish. Instead, I would recommend speaking with 
the board vendor and asking them how you can upload your own software.

>    *   Any detailed suggestions for measuring interrupt latency, Xen overhead, and context-switch time (the time to switch from one VM to another, which is what I wanted to measure with Xenalyze)?

xentrace will be your best bet. Otherwise, you will need to implement 
custom tracing.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 17:45:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 17:45:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475671.737435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfAH-00086X-9f; Wed, 11 Jan 2023 17:45:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475671.737435; Wed, 11 Jan 2023 17:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfAH-00086Q-6x; Wed, 11 Jan 2023 17:45:29 +0000
Received: by outflank-mailman (input) for mailman id 475671;
 Wed, 11 Jan 2023 17:45:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFfAF-00086I-MW
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 17:45:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFfAE-0006eP-VP; Wed, 11 Jan 2023 17:45:26 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.5.208]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFfAE-0000Pa-Pp; Wed, 11 Jan 2023 17:45:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=a7QlcAv1z/+JZ8qrnZpz/POOL4amY4nEQ0WiQriXJG0=; b=YzmBMgq9bQ0gAKVpfTR/6CRZZQ
	Q6DrYuFUms0C/GOU31dBdYFRj9UmemBjdBTHKIjPjG3maRNDRoiTylzrj2oqKYZIzu//A5Cjx39cq
	0d5VH6vtFuLHR1Y91aMdv6eBUI34kUuncpKr8Gw8o6Kcxw+EsE+5Ek9V+dwFf7hetYkw=;
Message-ID: <4715f53c-ace8-5f45-edea-4391cc308524@xen.org>
Date: Wed, 11 Jan 2023 17:45:25 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 04/19] tools/xenstore: remove all watches when a domain
 has stopped
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-5-jgross@suse.com>
 <831c0e75-1a23-6210-9f5b-7212a6763dc3@xen.org>
 <27c0f7bd-b548-17f4-d675-7143e218fd65@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <27c0f7bd-b548-17f4-d675-7143e218fd65@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 11/01/2023 06:36, Juergen Gross wrote:
> On 20.12.22 20:01, Julien Grall wrote:
>> On 13/12/2022 16:00, Juergen Gross wrote:
>>> When a domain has been released by Xen tools, remove all its
>>> registered watches. This avoids sending watch events to the dead domain
>>> when all the nodes related to it are being removed by the Xen tools.
>>
>> AFAICT, the only user of the command in the tree is softreset. Would 
>> you be able to check this is still working as expected?
> 
> Seems to work fine.

Thanks for the confirmation! You can add my reviewed-by:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 17:46:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 17:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475677.737447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfBO-0000Cy-LB; Wed, 11 Jan 2023 17:46:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475677.737447; Wed, 11 Jan 2023 17:46:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfBO-0000Cr-Gr; Wed, 11 Jan 2023 17:46:38 +0000
Received: by outflank-mailman (input) for mailman id 475677;
 Wed, 11 Jan 2023 17:46:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFfBN-0000Cl-AD
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 17:46:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFfBM-0006pK-KA; Wed, 11 Jan 2023 17:46:36 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.5.208]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFfBM-0000QX-EY; Wed, 11 Jan 2023 17:46:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Jvy5eD9ZeUAtNcJWnt2kJUd5icqyY9UC1xK7nopURIA=; b=PO7N406FBi0TsjelWwOzO7r9YR
	Roq7YBxAWuL9bEYoFhZovjoKzOAfU0TdpgFOisQCpfwN2Jk7A5jyjebSQRPIWpGW7Gz75uZgzKEj8
	HSyopx0/r8TqNllC33M+wOxIUuRsJC+cziVxkVk/roePs42Bg0oWVYNB/MIETLsbKA2I=;
Message-ID: <0d56e193-c70d-e65a-f4a5-02babc608045@xen.org>
Date: Wed, 11 Jan 2023 17:46:35 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 07/19] tools/xenstore: introduce dummy nodes for
 special watch paths
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-8-jgross@suse.com>
 <ef0ed925-5c07-a5c2-c7e6-f5a8ad21d480@xen.org>
 <db863117-a383-4373-d43d-7072bdf57a96@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <db863117-a383-4373-d43d-7072bdf57a96@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 11/01/2023 06:41, Juergen Gross wrote:
> On 20.12.22 20:39, Julien Grall wrote:
>>> +static void fire_special_watches(const char *name)
>>> +{
>>> +    void *ctx = talloc_new(NULL);
>>> +    struct node *node;
>>> +
>>> +    if (!ctx)
>>> +        return;
>>> +
>>> +    node = read_node(NULL, ctx, name);
>>> +
>>> +    if (node)
>>> +        fire_watches(NULL, ctx, name, node, true, NULL);
>>
>> NIT: I would consider logging an error (maybe only once) if 'node' is 
>> NULL. The purpose is to help debug Xenstored.
> 
> I think we can log it always, as this is really a bad problem.

That works for me. If it is always logged, then I guess this would need 
to be a syslog message as well.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 17:48:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 17:48:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475683.737458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfDb-0000ph-0o; Wed, 11 Jan 2023 17:48:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475683.737458; Wed, 11 Jan 2023 17:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfDa-0000pa-Tw; Wed, 11 Jan 2023 17:48:54 +0000
Received: by outflank-mailman (input) for mailman id 475683;
 Wed, 11 Jan 2023 17:48:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFfDY-0000pQ-TV
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 17:48:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFfDY-0006r9-5g; Wed, 11 Jan 2023 17:48:52 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.5.208]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFfDX-0000TV-ND; Wed, 11 Jan 2023 17:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=hFdyGGRIGPGTdIo6+1cWsTsw+9Ohkpo48FAwiLTbEiM=; b=YX5IoEuFCotJcph4MvfYhCiTx6
	snYuEVlzWX1/2NaXSMJYGmgjfwjYWSetKGF49I+twJ7w1voQtD2QTmtlRXB88MoLzPAES5hIqhdqm
	aIvJLv467iv4y/HT/Ee323Ea4x/jXHdIfJ5owFPwbO5mpbRFb/hhkarr4xr6YL3sJagQ=;
Message-ID: <e9eeee72-ecd1-faaa-dc63-b57d50162bbf@xen.org>
Date: Wed, 11 Jan 2023 17:48:50 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 10/19] tools/xenstore: change per-domain node
 accounting interface
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-11-jgross@suse.com>
 <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
 <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 11/01/2023 08:59, Juergen Gross wrote:
>> ... to make sure domain_nbentry_add() is not returning a negative 
>> value. Otherwise it would not work.
>>
>> A good example: imagine you have a transaction removing nodes from the 
>> tree but not adding any. Then "ret" would be negative.
>>
>> Meanwhile, the nodes are also removed outside of the transaction. So 
>> the sum "d->nbentry + ret" would be negative, resulting in a failure 
>> here.
> 
> Thanks for catching this.
> 
> I think the correct way to handle this is to return max(d->nbentry + 
> ret, 0) in
> domain_nbentry_add(). The value might be imprecise, but always >= 0 and 
> never
> wrong outside of a transaction collision.

I am a bit confused by your proposal. If the return value is imprecise, 
then what's the point of returning max(...) instead of simply 0?
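For concreteness, the clamping under discussion can be sketched as follows (hypothetical signature; the real code operates on struct domain and its accounting fields):

```c
#include <assert.h>

/*
 * Sketch of the proposal: clamp the per-domain entry count at zero so
 * that nodes removed concurrently outside the transaction cannot drive
 * the accounting negative. The result may be imprecise during a
 * transaction collision, but it is never negative.
 */
static int nbentry_apply_delta(int nbentry, int delta)
{
    int sum = nbentry + delta;

    return sum > 0 ? sum : 0;
}
```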

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 17:50:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 17:50:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475689.737469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfF9-0002Dh-BT; Wed, 11 Jan 2023 17:50:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475689.737469; Wed, 11 Jan 2023 17:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfF9-0002Da-8a; Wed, 11 Jan 2023 17:50:31 +0000
Received: by outflank-mailman (input) for mailman id 475689;
 Wed, 11 Jan 2023 17:50:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFfF8-0002DU-LZ
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 17:50:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFfF8-0006u0-2H; Wed, 11 Jan 2023 17:50:30 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.5.208]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFfF7-0000UZ-Sw; Wed, 11 Jan 2023 17:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=sAeILFWTwtmh4b34OUGqh46wnnEdsZZkpDl1zLNmOog=; b=ox4ZhNHBLi2qpcwF7osKlnq+Dd
	2NtMsnlmRtSlvJvkz13HKdXVEn6P4n9DsuY7kEHO9dAY8EhgAu6pzlxz7GLz3aLPVTuQhJsgeMs6Q
	0kEmYUU+g6ugqizjq7MWPNvBu2ZVWgO438p2XXt60ewxb+GTtJpyKREyhqT9HDXe0ug8=;
Message-ID: <79a999c7-af1f-1cf2-6d01-ac70bdd1972d@xen.org>
Date: Wed, 11 Jan 2023 17:50:28 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 15/19] tools/xenstore: switch hashtable to use the
 talloc framework
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-16-jgross@suse.com>
 <b7cfd35b-97ef-42eb-eceb-7f07cd72268c@xen.org>
 <00c146ee-b0d4-55bd-3276-4894b26cd83c@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <00c146ee-b0d4-55bd-3276-4894b26cd83c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 11/01/2023 09:27, Juergen Gross wrote:
> On 20.12.22 22:50, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 13/12/2022 16:00, Juergen Gross wrote:
>>> @@ -115,47 +117,32 @@ hashtable_expand(struct hashtable *h)
>>>       if (h->primeindex == (prime_table_length - 1)) return 0;
>>>       newsize = primes[++(h->primeindex)];
>>> -    newtable = (struct entry **)calloc(newsize, sizeof(struct entry*));
>>> -    if (NULL != newtable)
>>> +    newtable = talloc_realloc(h, h->table, struct entry *, newsize);
>>> +    if (!newtable)
>>>       {
>>> -        /* This algorithm is not 'stable'. ie. it reverses the list
>>> -         * when it transfers entries between the tables */
>>> -        for (i = 0; i < h->tablelength; i++) {
>>> -            while (NULL != (e = h->table[i])) {
>>> -                h->table[i] = e->next;
>>> -                index = indexFor(newsize,e->h);
>>> +        h->primeindex--;
>>> +        return 0;
>>> +    }
>>> +
>>> +    h->table = newtable;
>>> +    memset(newtable + h->tablelength, 0,
>>> +           (newsize - h->tablelength) * sizeof(*newtable));
>>> +    for (i = 0; i < h->tablelength; i++) {
>>
>> I understand this code is taken from the realloc path. However, isn't 
>> this algorithm also unstable? If so, then I think we should move the 
>> comment here.
> 
> I'm fine with that, even if I don't see how it would matter. There is no
> guarantee regarding the order of entries for a given index.

That's a fair point. Feel free to ignore my comment then :).

>>> +            if (index == i)
>>> +            {
>>> +                pE = &(e->next);
>>> +            }
>>> +            else
>>> +            {
>>> +                *pE = e->next;
>>>                   e->next = newtable[index];
>>>                   newtable[index] = e;
>>>               }
>>>           }
>>> -        free(h->table);
>>> -        h->table = newtable;
>>> -    }
>>> -    /* Plan B: realloc instead */
>>> -    else
>>> -    {
>>> -        newtable = (struct entry **)
>>> -                   realloc(h->table, newsize * sizeof(struct entry *));
>>> -        if (NULL == newtable) { (h->primeindex)--; return 0; }
>>> -        h->table = newtable;
>>> -        memset(newtable + h->tablelength, 0,
>>> -               (newsize - h->tablelength) * sizeof(*newtable));
>>> -        for (i = 0; i < h->tablelength; i++) {
>>> -            for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
>>> -                index = indexFor(newsize,e->h);
>>> -                if (index == i)
>>> -                {
>>> -                    pE = &(e->next);
>>> -                }
>>> -                else
>>> -                {
>>> -                    *pE = e->next;
>>> -                    e->next = newtable[index];
>>> -                    newtable[index] = e;
>>> -                }
>>> -            }
>>> -        }
>>>       }
>>> +
>>>       h->tablelength = newsize;
>>>       h->loadlimit   = (unsigned int)
>>>           (((uint64_t)newsize * max_load_factor) / 100);
>>> @@ -184,7 +171,7 @@ hashtable_insert(struct hashtable *h, void *k, 
>>> void *v)
>>>            * element may be ok. Next time we insert, we'll try 
>>> expanding again.*/
>>>           hashtable_expand(h);
>>>       }
>>> -    e = (struct entry *)calloc(1, sizeof(struct entry));
>>> +    e = talloc_zero(h, struct entry);
>>>       if (NULL == e) { --(h->entrycount); return 0; } /*oom*/
>>>       e->h = hash(h,k);
>>>       index = indexFor(h->tablelength,e->h);
>>> @@ -238,8 +225,8 @@ hashtable_remove(struct hashtable *h, void *k)
>>>               h->entrycount--;
>>>               v = e->v;
>>>               if (h->flags & HASHTABLE_FREE_KEY)
>>> -                free(e->k);
>>> -            free(e);
>>> +                talloc_free(e->k);
>>> +            talloc_free(e);
>>>               return v;
>>>           }
>>>           pE = &(e->next);
>>> @@ -280,25 +267,20 @@ void
>>>   hashtable_destroy(struct hashtable *h)
>>>   {
>>>       unsigned int i;
>>> -    struct entry *e, *f;
>>> +    struct entry *e;
>>>       struct entry **table = h->table;
>>>       for (i = 0; i < h->tablelength; i++)
>>>       {
>>> -        e = table[i];
>>> -        while (NULL != e)
>>> +        for (e = table[i]; e; e = e->next)
>>>           {
>>> -            f = e;
>>> -            e = e->next;
>>>               if (h->flags & HASHTABLE_FREE_KEY)
>>> -                free(f->k);
>>> +                talloc_free(e->k);
>>>               if (h->flags & HASHTABLE_FREE_VALUE)
>>> -                free(f->v);
>>> -            free(f);
>>
>> AFAIU, the loop is reworked so you let talloc free each element 
>> with the parent. Using a while loop is definitely cleaner, but now you 
>> end up with two separate loops over the elements.
>>
>> There is a risk that the overall performance of hashtable_destroy() 
>> will be worse, as the data accessed in one loop may not fit in the 
>> cache. So you will have to reload it on the second loop.
>>
>> Therefore, I think it would be better to keep the loop as-is.
> 
> What about a completely different approach? I could make the key and value
> talloc children of e when _inserting_ the element and the related flag is
> set. This would reduce hashtable_destroy to a single talloc_free().

I am fine with that.
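For reference, the cache-locality concern above is about splitting the original single walk in two; in plain C (talloc aside) the one-pass destroy looks roughly like this stand-alone sketch (not the xenstored code; the return count is added only so the behaviour is observable):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-in for the hashtable bucket entries discussed above. */
struct entry {
    struct entry *next;
    void *k;
    void *v;
};

/*
 * One-pass destroy: free the key, the value and the entry itself while
 * the entry is still hot in cache, saving the next pointer before the
 * entry is freed. Returns the number of entries freed.
 */
static unsigned int destroy_chain(struct entry *e)
{
    unsigned int freed = 0;

    while (e) {
        struct entry *next = e->next; /* save before freeing e */

        free(e->k);
        free(e->v);
        free(e);
        e = next;
        freed++;
    }

    return freed;
}
```

The alternative finally agreed on (making key and value talloc children of the entry at insertion time) collapses all of this into a single talloc_free() of the hashtable.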

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 18:18:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 18:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475696.737479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfgF-0004tE-KW; Wed, 11 Jan 2023 18:18:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475696.737479; Wed, 11 Jan 2023 18:18:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFfgF-0004t7-He; Wed, 11 Jan 2023 18:18:31 +0000
Received: by outflank-mailman (input) for mailman id 475696;
 Wed, 11 Jan 2023 18:18:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FWTY=5I=citrix.com=prvs=368716087=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pFfgD-0004t1-Og
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 18:18:29 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 56dafc82-91dc-11ed-91b6-6bf2151ebd3b;
 Wed, 11 Jan 2023 19:18:27 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56dafc82-91dc-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673461107;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=8eWpl94CgVNDdkjjqPyLI3KBUVFTsyRmT98TKI665ck=;
  b=dsVsbTdsBg8z+f+jFXNZP3vNjqLkMPhb44ZHkLvG7RY29bzET60YdtSP
   iBQNCEVnD/coboH4592N9XjavAHrXLhNQvXGVd/+2s0S/lyVi5rY0c/pC
   50P5tZBpceim0o9ErPBKj5NRp8VaHPvQg+RUEt66/fW3+KB65l1xWyd0X
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92218242
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,317,1665460800"; 
   d="scan'208";a="92218242"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Anthony PERARD <anthony.perard@citrix.com>
Subject: [RFC PATCH] build: include/compat: figure out which other compat headers are needed
Date: Wed, 11 Jan 2023 18:17:03 +0000
Message-ID: <20230111181703.30991-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Some compat headers depend on other compat headers that may not have
been generated, depending on config options.

This would be a generic way to deal with such dependencies, instead of
    headers-$(call or $(CONFIG_TRACEBUFFER),$(CONFIG_HVM)) += compat/trace.h

This is just an RFC, as it only deals with "hvm_op.h", and nothing is
done yet to have make regenerate the new file when needed.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/include/Makefile | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 65be310eca..5e6de97841 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -34,6 +34,29 @@ headers-$(CONFIG_TRACEBUFFER) += compat/trace.h
 headers-$(CONFIG_XENOPROF) += compat/xenoprof.h
 headers-$(CONFIG_XSM_FLASK) += compat/xsm/flask_op.h
 
+# Find dependencies of compat headers.
+# e.g. hvm/hvm_op.h needs trace.h; but if CONFIG_TRACEBUFFER=n, trace.h would be missing.
+#
+# sed is used to remove ".." from the path, as it is unclear what else is available:
+# there is `realpath`, but it may not be available, e.g.
+#	realpath --relative-to=. -mL compat/hvm/../trace.h -> compat/trace.h
+# `make` also has a macro for that, $(abspath), but only in recent versions.
+#
+# The $(CC) line to generate deps is derived from $(cmd_compat_i)
+include $(obj)/.compat-header-deps.d
+$(obj)/.compat-header-deps.d: include/public/hvm/hvm_op.h
+	$(CC) -MM -MF $@.tmp $(filter-out -Wa$(comma)% -include %/include/xen/config.h,$(XEN_CFLAGS)) $<
+	for f in $$(cat $@.tmp | sed -r '1s/^[^:]*: //; s/ \\$$//'); do \
+	    echo $$f; \
+	done | sed -r \
+	    -e 's#.*/public#compat#; : p; s#/[^/]+/../#/#; t p; s#$$# \\#' \
+	    -e '1i headers-y-deps := \\' -e '$$a \ ' \
+	    > $@
+
+headers-y-deps := $(filter-out compat/xen-compat.h,$(headers-y-deps))
+# Add compat header dependencies and eliminate duplicates
+headers-y := $(sort $(headers-y) $(headers-y-deps))
+
 cppflags-y                := -include public/xen-compat.h -DXEN_GENERATING_COMPAT_HEADERS
 cppflags-$(CONFIG_X86)    += -m32
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Wed Jan 11 20:55:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 20:55:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475702.737490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFi85-0003OJ-My; Wed, 11 Jan 2023 20:55:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475702.737490; Wed, 11 Jan 2023 20:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFi85-0003OC-KQ; Wed, 11 Jan 2023 20:55:25 +0000
Received: by outflank-mailman (input) for mailman id 475702;
 Wed, 11 Jan 2023 20:55:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFi84-0003O2-QR; Wed, 11 Jan 2023 20:55:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFi84-0002ji-Oh; Wed, 11 Jan 2023 20:55:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFi84-0003ar-6B; Wed, 11 Jan 2023 20:55:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFi84-0006ZD-5h; Wed, 11 Jan 2023 20:55:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nJXva4DLyo0mJGHODWvDTKdBwuYfbzqXPKZjNPCzliw=; b=beLKrZLYHFNosBR0fiaM+d82EG
	CceGwcf8HH8dTY9cVqRKytm2UhcqTvKmHtjPwtQVxpiTO6d2KebOij+awGywOP1dNqhzaTPxCV50L
	tP5YQJ902h2jAbXoYnLgag628tXNtWaWsP0Oa6X5I3OoSEjCZo2F0eQJJ9HIQLs9qqoI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175728-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175728: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fd42170b152b19ff12121f3b63674e882c087849
X-Osstest-Versions-That:
    xen=e66d450b6e0ffec635639df993ab43ce28b3383f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 20:55:24 +0000

flight 175728 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175728/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fd42170b152b19ff12121f3b63674e882c087849
baseline version:
 xen                  e66d450b6e0ffec635639df993ab43ce28b3383f

Last test of basis   175721  2023-01-11 10:03:26 Z    0 days
Testing same since   175728  2023-01-11 18:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e66d450b6e..fd42170b15  fd42170b152b19ff12121f3b63674e882c087849 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 21:05:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 21:05:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475710.737502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFiI2-0004tf-MT; Wed, 11 Jan 2023 21:05:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475710.737502; Wed, 11 Jan 2023 21:05:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFiI2-0004tY-JP; Wed, 11 Jan 2023 21:05:42 +0000
Received: by outflank-mailman (input) for mailman id 475710;
 Wed, 11 Jan 2023 21:05:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SHza=5I=citrix.com=prvs=3687a0f96=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFiI0-0004tR-Q2
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 21:05:41 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b16e78ef-91f3-11ed-b8d0-410ff93cb8f0;
 Wed, 11 Jan 2023 22:05:37 +0100 (CET)
Received: from mail-dm6nam04lp2046.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 Jan 2023 16:05:15 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS7PR03MB5477.namprd03.prod.outlook.com (2603:10b6:5:2c4::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 21:05:13 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Wed, 11 Jan 2023
 21:05:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b16e78ef-91f3-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673471137;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=HAvb2Mg1S0uuzH1onFjGMe88MbP+3GiSjiHHn55VMlw=;
  b=eN4l+iinxuUzBuamNQVbVsDEiqPnWpDUzFnRss4JR6OH4X5upnUBGprE
   66R9TYcPATRdSF+QBgJZvnbsenm7qaJFjkDARP6g4DwAg2syPQ0lPRbRi
   CJX3XZea3lWQeEHCDmrgRe650WiJ4wuagqaHjsDF4nekJMXGEjJUjbk73
   E=;
X-IronPort-RemoteIP: 104.47.73.46
X-IronPort-MID: 92616230
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,317,1665460800"; 
   d="scan'208";a="92616230"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FKTT/sluGpc0hw0VdTyxzbS8NC0aVlYy8NO1NnfTAUn8UMQHh59MiSGRsHGqTesHZa5uJN93HFURMLiZrlAcPHoIiN0ngTN/obGMISM+eyNnLEMgkeSewFwMKLfPCtRwGTei3b+htU0cuS5ZAPrTjB2+lbF0aeWdxOXAAEPU9CVYzENLflersyIpwvJMoRkCKByQId8I6xmNtC/p/mdyZ6fvQtAlo5yTjkeYLOKR7ak6VaMjuIfHTYRU8p/0CL140WFxU0SVGtXi/puyOOgmvVG7PnLBSzqjhvLQOhlAvYcUSlNGm9en1XbBAJOCgq8359TyXPiur9SCy8ZWL2c5Ng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HAvb2Mg1S0uuzH1onFjGMe88MbP+3GiSjiHHn55VMlw=;
 b=C3QcbFnDP7NRP1TmWbFtPUcieeXF+o59O/KxfJhHF1nrlNhmaj7eQwln8AURsJ5y8af399MK2Y0ALLg9fJ/fhn62zNPBgp5VoqO18UZRag8uuR86XBGjvLg0qy/4XNyvqC9//XeyGihzlHVGpNYYVWqXR9m1ZH9SDpOwSgf/QlJWyqCXJ3+qSOI+thx02QrqVbi0J73R9n0qoUrroxhmGdqdnGAdveOVswba4jNeVN+ItcylDylKUPb0qOuJ2dHi6fzyzuB7zTDWketAO9Y6s0GtDcviPm/vD6CGTx1LvxX+amhXXq7aS3zjahCPm5W6VQZn4zNcw0dKydXgbPqS3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HAvb2Mg1S0uuzH1onFjGMe88MbP+3GiSjiHHn55VMlw=;
 b=hQIoaWbSIwzd+cnLZ6CgNa2Hh3Gy6n81GaFu7s2OnkNRs+P0uysFZTPzwCG2ocz+Aqs4GegBS4+ugsOihY2/bwNgKmUTkldfjZfJmISeWFj2grS+1uwea5U+eHyExWufqDfiyFmcpL65P244oPYVd1brcP+MYkyIg7bz27U7Ny0=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
Subject: Re: [PATCH v4 3/3] x86/vmx: implement Notify VM Exit
Thread-Topic: [PATCH v4 3/3] x86/vmx: implement Notify VM Exit
Thread-Index: AQHZDxB+OJsGEBOWPkStgY7+/Eiq966Z4qgA
Date: Wed, 11 Jan 2023 21:05:12 +0000
Message-ID: <553cbd97-f667-1549-4374-9385d3d53710@citrix.com>
References: <20221213163104.19066-1-roger.pau@citrix.com>
 <20221213163104.19066-4-roger.pau@citrix.com>
In-Reply-To: <20221213163104.19066-4-roger.pau@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DS7PR03MB5477:EE_
x-ms-office365-filtering-correlation-id: b032f9af-70fd-4ea8-e5cd-08daf41787f1
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <F69D438D5FF51349924DD3D90546B49F@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b032f9af-70fd-4ea8-e5cd-08daf41787f1
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jan 2023 21:05:12.8236
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: SWnl7+gqiOcOo1DiD/1APtZak1TLCD3UwsIb3jlZ8HJzFJskw89zynpz1ECeK2BRHLiCZ5LKLyZkOWFGwG/Sn8JIWFxRcABCgTOu1OBhZ+s=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5477

On 13/12/2022 4:31 pm, Roger Pau Monne wrote:
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index a0d5e8d6ab..3d7c471a3f 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1290,6 +1296,17 @@ static int construct_vmcs(struct vcpu *v)
>      v->arch.hvm.vmx.exception_bitmap = HVM_TRAP_MASK
>                | (paging_mode_hap(d) ? 0 : (1U << TRAP_page_fault))
>                | (v->arch.fully_eager_fpu ? 0 : (1U << TRAP_no_device));
> +    if ( cpu_has_vmx_notify_vm_exiting )
> +    {
> +        __vmwrite(NOTIFY_WINDOW, vm_notify_window);
> +        /*
> +         * Disable #AC and #DB interception: by using VM Notify Xen is
> +         * guaranteed to get a VM exit even if the guest manages to lock the
> +         * CPU.
> +         */
> +        v->arch.hvm.vmx.exception_bitmap &= ~((1U << TRAP_debug) |
> +                                              (1U << TRAP_alignment_check));
> +    }
>      vmx_update_exception_bitmap(v);
> 
>      v->arch.hvm.guest_cr[0] = X86_CR0_PE | X86_CR0_ET;
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index dabf4a3552..b11578777a 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1428,10 +1428,19 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
> 
>  void vmx_update_debug_state(struct vcpu *v)
>  {
> +    unsigned int mask = 1u << TRAP_int3;
> +
> +    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> +        /*
> +         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
> +         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
> +         */
> +        mask |= 1u << TRAP_debug;
> +
>      if ( v->arch.hvm.debug_state_latch )
> -        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
> +        v->arch.hvm.vmx.exception_bitmap |= mask;
>      else
> -        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
> +        v->arch.hvm.vmx.exception_bitmap &= ~mask;
> 
>      vmx_vmcs_enter(v);
>      vmx_update_exception_bitmap(v);
> @@ -4180,6 +4189,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>          switch ( vector )
>          {
>          case TRAP_debug:
> +            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
> +                goto exit_and_crash;

This breaks GDBSX and introspection.

For XSA-156, we were forced to intercept #DB unilaterally for safety,
but both GDBSX and Introspection can optionally intercept #DB for
logical reasons too.

i.e. we can legitimately end up here even on a system with VM Notify.

What I can't figure out is why you made any reference to MTF.  MTF has
absolutely nothing to do with TRAP_debug.

Furthermore, there's no CPU in practice that has VM Notify but lacks
MTF, so the head of vmx_update_debug_state() looks like dead code...

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 21:14:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 21:14:42 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175727-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175727: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 21:14:35 +0000

flight 175727 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175727/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    3 days
Failing since        175627  2023-01-08 14:40:14 Z    3 days   17 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    2 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 22:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 22:07:20 +0000
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"x86@kernel.org" <x86@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
	Juergen Gross <jgross@suse.com>, "Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>, Roger Pau Monne
	<roger.pau@citrix.com>, Peter Zijlstra <peterz@infradead.org>, Joan Bruguera
	<joanbrugueram@gmail.com>
Subject: Re: Wake-up from suspend to RAM broken under `retbleed=stuff`
Thread-Topic: Wake-up from suspend to RAM broken under `retbleed=stuff`
Thread-Index: AQHZJa6+C7zgK+BmnkyxcC0/cTTk966ZF36AgAABmICAAK2VAA==
Date: Wed, 11 Jan 2023 22:06:56 +0000
Message-ID: <cfe514f9-bf6c-292d-a481-48614aeb9dd6@citrix.com>
References: <20230108030748.158120-1-joanbrugueram@gmail.com>
 <20230109040531.7888-1-joanbrugueram@gmail.com>
 <Y76bbtJn+jIV3pOz@hirez.programming.kicks-ass.net>
 <aefed99b-6747-5dcc-65ec-6880f7c0d207@citrix.com>
 <f2c151af-b3ba-69e6-2878-3256971f5a9d@suse.com>
In-Reply-To: <f2c151af-b3ba-69e6-2878-3256971f5a9d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 11/01/2023 11:45 am, Jan Beulich wrote:
> On 11.01.2023 12:39, Andrew Cooper wrote:
>> The bigger issue with stuff accounting is that nothing AFAICT accounts
>> for the fact that any hypercall potentially empties the RSB in otherwise
>> synchronous program flow.
> But that's not just at hypercall boundaries, but effectively anywhere
> (i.e. whenever the hypervisor decides to de-schedule the vCPU)?

Correct, but it's only the RET instructions that reliably underflow the
RSB which can be usefully attacked.

The %rip at which Xen decides to de-schedule a vCPU is random from the
point of view of an attacker.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 22:22:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 22:22:27 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175723-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175723: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7dd4b804e08041ff56c88bdd8da742d14b17ed25
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Jan 2023 22:22:20 +0000

flight 175723 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175723/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7dd4b804e08041ff56c88bdd8da742d14b17ed25
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   96 days
Failing since        173470  2022-10-08 06:21:34 Z   95 days  201 attempts
Testing same since   175717  2023-01-11 03:09:17 Z    0 days    2 attempts

------------------------------------------------------------
3319 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505897 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 22:30:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 22:30:15 +0000
X-Inumbo-ID: 7f0a6dd6-91ff-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673476206;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=Tfci1Yft9hO6QbrKbFegdwobYO10oeVT7nn7iT/MQl4=;
  b=TPYap8oc8EmPdWt0QDtyvxKuRwsLN2OV9NHIRh0SWZrA1dTEGju+NJ0T
   LWsEhbx1K4Jqd7l/imqi+1fnU4A0sctycOnsg1nsUc8mi6BXjsdW8G89/
   40BPB635pUZpVd5XuhjOXDL5Lefsoq9hXyPZvGGC3z3gGvfLWNJ3Lxem7
   U=;
X-IronPort-RemoteIP: 104.47.57.176
X-IronPort-MID: 91140376
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:etQh+KIKNIBJC4T8FE+RG5QlxSXFcZb7ZxGr2PjKsXjdYENS0DIAy
 2AcXWCHP/aDZWbyfN8nb97i9E1VuMWEx4QxHVFlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHv+kUrWs1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPdwP9TlK6q4mhA5wVnPasjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5GAjl20
 P1GMwsgbzbboMyqnJejddJz05FLwMnDZOvzu1lG5BSAVLMKZM6GRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/dopTGMkmSd05C0WDbRUvWMSd9YgQCzo
 WXe8n6iKhobKMae2XyO9XfEaurnzHqiBNpJS+PQGvhC2HrUwnAjLh4sCXCLpdu2iHWPZPdYE
 hlBksYphe1onKCxdfHtUhv9rHOasxo0X9tLD/Z8+AyL0rDT4QuSGi4DVDEpQN4sudIyRDcq/
 kSUhN6vDjtq2JWXVHac+7G8vT60fy8PIgcqfjQYRAEI593ipoAbjR/VSNtnVqmvgbXdBjXY0
 z2M6i8kiN0uYdUj0qy6+RXLhmyqr52QFwotvFyIACSi8x9zY5Oja8qw81/H4P1cLYGfCF6co
 HwDnMvY5+cLZX2QqBGwrCw2NOnBz5643Pf02DaDw7FJG+yRxkOe
IronPort-HdrOrdr: A9a23:Jk9w9arwl7G7Ed4GeEkhl3caV5oleYIsimQD101hICG9E/b1qy
 nKpp8mPHDP5wr5NEtPpTnjAsm9qALnlKKdiLN5Vd3OYOCMghrKEGgN1/qG/xTQXwH46+5Bxe
 NBXsFFebnN5IFB/KTH3DU=
X-IronPort-AV: E=Sophos;i="5.96,318,1665460800"; 
   d="scan'208";a="91140376"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HO8e/QD+7uaOC5vz8NVQJXiyssWsxNTtXGAcxaSkv+MO/e1NDBa/57NE0IzRdiVt8OttfiNyCFB+44J8FGgtIm0Yp4gp0j0JSdCvtVgU5/T9ry4rAoAFf2w5E96NHsCot2pw56aSzVv+xr36+rBoa57ftSC5n2ZS+5Ii8nuY9islLbiYKiU/u7yEvmJ6MFEcChrSuvnarvpL3e5gkCRPcleL6+Ibsu0YCU1+nIuYrukfRLfMg81ED+sx/yyrVRBSl/BOBpn+jJq4CsoOnI5utPB5uaPlCTfUrYbyD+z+JJIEZdOOvOeO0xbomOaJXQU6zjx6EjgV/FNX+zMw8qW0cQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Tfci1Yft9hO6QbrKbFegdwobYO10oeVT7nn7iT/MQl4=;
 b=H20khbe0Ggu6T/0hGnNBKXWwMb8E55/cXcFpvepSHL5M6DSV32X5Y8aXEmUbxr0GVfUIdZDuhqqTAfkyg4C3aSfV2zjrt/VfWlkN29fwkstKyopD9srrja/Xll4ex3rIQo+ZCigqIQQb7CmWfE7bc6QTp8qMprVkZgZc9bwqo/N662KuUFuh06eRSJigy9Ipy1hdtSI6J9IpKsocR8vWCJRxRLfedBc1/vtsjd+0tW9njzVrru7w1IFDomHap609b6sviWoAw51D5vrSTcqPlOpsAkYYBFMS6aKokteP+1FHokIKbv6W6tThIuNhJpsRnA86qH9xv5qNUgmRA2ASIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Tfci1Yft9hO6QbrKbFegdwobYO10oeVT7nn7iT/MQl4=;
 b=KmRKLqSQNKOEGc7dNt1ettwHatY0keKRuupZfWWsf8D5bSJYR/1ejetVwduA16EkVnqw3r8XqrS8ACun9ewFrmvkVRtl5LtsV7UI8sza80klgfh7uNM5aGlVx1Et+nFf1ors6JywySe8tv2b/EwjAfIIwIfoAjMOPriOKLG4FOk=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, George Dunlap <George.Dunlap@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [Multiple reverts] [RFC PATCH] build: include/compat: figure out
 which other compat headers are needed
Thread-Topic: Re: [Multiple reverts] [RFC PATCH] build: include/compat: figure
 out which other compat headers are needed
Thread-Index: AQHZJekaQF9ZU0LVM0eT+jo0D4jURa6ZzKCA
Date: Wed, 11 Jan 2023 22:29:53 +0000
Message-ID: <5c7ffbe4-3c19-d748-9489-9a256faebb7a@citrix.com>
References: <20230111181703.30991-1-anthony.perard@citrix.com>
In-Reply-To: <20230111181703.30991-1-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 11/01/2023 6:17 pm, Anthony PERARD wrote:
> Some compat headers depends on other compat headers that may not have
> been generated due to config option.
>
> This would be a generic way to deal with deps, instead of
>     headers-$(call or $(CONFIG_TRACEBUFFER),$(CONFIG_HVM)) += compat/trace.h
>
> This is just an RFC, as it only deals with "hvm_op.h" and nothing is
> done to have make regenerate the new file when needed.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  xen/include/Makefile | 23 +++++++++++++++++++++++
>  1 file changed, 23 insertions(+)
>
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index 65be310eca..5e6de97841 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -34,6 +34,29 @@ headers-$(CONFIG_TRACEBUFFER) += compat/trace.h
>  headers-$(CONFIG_XENOPROF) += compat/xenoprof.h
>  headers-$(CONFIG_XSM_FLASK) += compat/xsm/flask_op.h
>  
> +# Find dependencies of compat headers.
> +# e.g. hvm/hvm_op.h needs trace.h; but if CONFIG_TRACEBUFFER=n, then trace.h would be missing.
> +#
> +# Using sed to remove ".." from path because unsure if something else is available
> +# There's `realpath`, but maynot be available
> +#	realpath --relative-to=. -mL compat/hvm/../trace.h -> compat/trace.h
> +# `make` also have macro for that $(abspath), only recent version.
> +#
> +# The $(CC) line to gen deps is derived from $(cmd_compat_i)
> +include $(obj)/.compat-header-deps.d
> +$(obj)/.compat-header-deps.d: include/public/hvm/hvm_op.h
> +	$(CC) -MM -MF $@.tmp $(filter-out -Wa$(comma)% -include %/include/xen/config.h,$(XEN_CFLAGS)) $<
> +	for f in $$(cat $@.tmp | sed -r '1s/^[^:]*: //; s/ \\$$//'); do \
> +	    echo $$f; \
> +	done | sed -r \
> +	    -e 's#.*/public#compat#; : p; s#/[^/]+/../#/#; t p; s#$$# \\#' \
> +	    -e '1i headers-y-deps := \\' -e '$$a \ ' \
> +	    > $@
> +
> +headers-y-deps := $(filter-out compat/xen-compat.h,$(headers-y-deps))
> +# Add compat header dependencies and eliminates duplicates
> +headers-y := $(sort $(headers-y) $(headers-y-deps))
> +
>  cppflags-y                := -include public/xen-compat.h -DXEN_GENERATING_COMPAT_HEADERS
>  cppflags-$(CONFIG_X86)    += -m32
>  

For posterity,
https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/3585379553 is
the issue in question.

In file included from arch/x86/hvm/hvm.c:82:
./include/compat/hvm/hvm_op.h:6:10: fatal error: ../trace.h: No such
file or directory
    6 | #include "../trace.h"
      |          ^~~~~~~~~~~~
compilation terminated.
make[4]: *** [Rules.mk:246: arch/x86/hvm/hvm.o] Error 1
make[3]: *** [Rules.mk:320: arch/x86/hvm] Error 2
make[3]: *** Waiting for unfinished jobs....

All public headers use "../" relative includes for traversing the
public/ hierarchy.  This cannot feasibly change given our "copy this
into your project" stance, but it means the compat headers have the same
structure under compat/.

This include is supposed to be including compat/trace.h but it was
actually picking up x86's asm/trace.h, hence the build failure now that
I've deleted the file.

This demonstrates that trying to be clever with -iquote is a mistake.
Nothing good can possibly come of having the <> and "" include paths
being different.  Therefore we must revert all uses of -iquote.

But, that isn't the only bug.

The real hvm_op.h legitimately includes the real trace.h, therefore the
compat hvm_op.h legitimately includes the compat trace.h too.  But
generation of compat trace.h was made asymmetric because of 2c8fabb223.

In hindsight, that's a public ABI breakage.  The current configuration
of this build of the hypervisor has no legitimate bearing on the headers
needing to be installed to /usr/include/xen.

Or put another way, it is a breakage to require Xen to have
CONFIG_COMPAT+CONFIG_TRACEBUFFER enabled in the build simply to get the
public API headers generated properly.

So 2c8fabb223 needs reverting too.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 23:16:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 23:16:27 +0000
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Thread-Topic: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Thread-Index: AQHZJcS6xQlTtpJxX0O79yAhgOZJ+66Z2cOA
Date: Wed, 11 Jan 2023 23:15:54 +0000
Message-ID: <c5d201ac-89ca-6baa-d685-5bef2497183f@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
In-Reply-To: <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 11/01/2023 1:57 pm, Jan Beulich wrote:
> Make HVM=y release build behavior prone against array overrun, by
> (ab)using array_access_nospec(). This is in particular to guard against
> e.g. SH_type_unused making it here unintentionally.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.
>
> --- a/xen/arch/x86/mm/shadow/private.h
> +++ b/xen/arch/x86/mm/shadow/private.h
> @@ -27,6 +27,7 @@
>  // been included...
>  #include <asm/page.h>
>  #include <xen/domain_page.h>
> +#include <xen/nospec.h>
>  #include <asm/x86_emulate.h>
>  #include <asm/hvm/support.h>
>  #include <asm/atomic.h>
> @@ -368,7 +369,7 @@ shadow_size(unsigned int shadow_type)
>  {
>  #ifdef CONFIG_HVM
>      ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
> -    return sh_type_to_size[shadow_type];
> +    return array_access_nospec(sh_type_to_size, shadow_type);

I don't think this is warranted.

First, if the commit message were accurate, then it's a problem for all
arrays of size SH_type_unused, yet you've only adjusted a single instance.

Secondly, if it were reliably 16 then we could do the basically-free
"type &= 15;" modification.  (It appears my change to do this
automatically hasn't been taken yet.), but we'll end up with lfence
variation here.

But the value isn't attacker controlled.  shadow_type always comes from
Xen's metadata about the guest, not the guest itself.  So I don't see
how this can conceivably be a speculative issue even in principle.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 11 23:56:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Jan 2023 23:56:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475753.737568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFkxY-0007Ls-Cu; Wed, 11 Jan 2023 23:56:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475753.737568; Wed, 11 Jan 2023 23:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFkxY-0007Ll-AA; Wed, 11 Jan 2023 23:56:44 +0000
Received: by outflank-mailman (input) for mailman id 475753;
 Wed, 11 Jan 2023 23:56:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SHza=5I=citrix.com=prvs=3687a0f96=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFkxW-0007Lf-Tm
 for xen-devel@lists.xenproject.org; Wed, 11 Jan 2023 23:56:42 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9685ab62-920b-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 00:56:40 +0100 (CET)
Received: from mail-bn7nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 Jan 2023 18:56:34 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6787.namprd03.prod.outlook.com (2603:10b6:510:123::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Wed, 11 Jan
 2023 23:56:31 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Wed, 11 Jan 2023
 23:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9685ab62-920b-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v2 8/9] x86/shadow: call sh_detach_old_tables() directly
Thread-Topic: [PATCH v2 8/9] x86/shadow: call sh_detach_old_tables() directly
Thread-Index: AQHZJcSfDC5mFmcp2UCTS4rgBv7O2q6Z5R0A
Date: Wed, 11 Jan 2023 23:56:30 +0000
Message-ID: <722642fa-4eb0-930a-9755-f7780da65eec@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <8ed7a628-f64e-5512-efdb-4116a7b88a1d@suse.com>
In-Reply-To: <8ed7a628-f64e-5512-efdb-4116a7b88a1d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 11/01/2023 1:57 pm, Jan Beulich wrote:
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -2264,6 +2264,29 @@ void shadow_prepare_page_type_change(str
>      shadow_remove_all_shadows(d, page_to_mfn(page));
>  }
>
> +/*
> + * Removes v->arch.paging.shadow.shadow_table[].
> + * Does all appropriate management/bookkeeping/refcounting/etc...
> + */
> +static void sh_detach_old_tables(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    unsigned int i;
> +
> +    ////
> +    //// vcpu->arch.paging.shadow.shadow_table[]
> +    ////

Honestly, I don't see what the point of this comment is at all.  I'd
suggest just dropping it as you move the function, which avoids the need
to debate over C++ comments.

Preferably with this done, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 00:00:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 00:00:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475759.737579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFl0r-0000jr-53; Thu, 12 Jan 2023 00:00:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475759.737579; Thu, 12 Jan 2023 00:00:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFl0r-0000jk-1Z; Thu, 12 Jan 2023 00:00:09 +0000
Received: by outflank-mailman (input) for mailman id 475759;
 Thu, 12 Jan 2023 00:00:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFl0p-0000ja-OP; Thu, 12 Jan 2023 00:00:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFl0p-0007P2-Lu; Thu, 12 Jan 2023 00:00:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFl0p-0000DM-FB; Thu, 12 Jan 2023 00:00:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFl0p-0005w0-El; Thu, 12 Jan 2023 00:00:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175729-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175729: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 00:00:07 +0000

flight 175729 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175729/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    4 days
Failing since        175627  2023-01-08 14:40:14 Z    3 days   18 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    2 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 00:02:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 00:02:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475768.737590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFl3A-0001Z9-NZ; Thu, 12 Jan 2023 00:02:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475768.737590; Thu, 12 Jan 2023 00:02:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFl3A-0001Z2-Kg; Thu, 12 Jan 2023 00:02:32 +0000
Received: by outflank-mailman (input) for mailman id 475768;
 Thu, 12 Jan 2023 00:02:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFl3A-0001Yu-3C
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 00:02:32 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66ab9fb2-920c-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 01:02:29 +0100 (CET)
Received: from mail-mw2nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 Jan 2023 19:02:26 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB5685.namprd03.prod.outlook.com (2603:10b6:510:42::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 00:02:25 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 00:02:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66ab9fb2-920c-11ed-b8d0-410ff93cb8f0
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v2 7/9] x86/shadow: reduce effort of hash calculation
Thread-Topic: [PATCH v2 7/9] x86/shadow: reduce effort of hash calculation
Thread-Index: AQHZJcSKbn2Oy+OF1EuP+6uJ3eHPoq6Z5sMA
Date: Thu, 12 Jan 2023 00:02:24 +0000
Message-ID: <28488645-1cf7-cb9b-ca03-f060f7947156@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <4785b34b-2672-e3a8-8096-df1365b6b7b8@suse.com>
In-Reply-To: <4785b34b-2672-e3a8-8096-df1365b6b7b8@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <82E358E44FF7894CB3C2763FDBE6DCA3@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 11/01/2023 1:56 pm, Jan Beulich wrote:
> The "n" input is a GFN/MFN value and hence bounded by the physical
> address bits in use on a system. The hash quality won't improve by also
> including the upper always-zero bits in the calculation. To keep things
> as compile-time-constant as they were before, use PADDR_BITS (not
> paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
>
> While there also drop the unnecessary conversion to an array of unsigned
> char, moving the value off the stack altogether (at least with
> optimization enabled).

I'm not sure this final bit in brackets is relevant.  It wouldn't be on
the stack without optimisations either, because ABI-wise, it will be in
%rsi.

I'd just drop it.

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 00:05:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 00:05:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475774.737601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFl65-0002Bz-6l; Thu, 12 Jan 2023 00:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475774.737601; Thu, 12 Jan 2023 00:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFl65-0002Bs-2z; Thu, 12 Jan 2023 00:05:33 +0000
Received: by outflank-mailman (input) for mailman id 475774;
 Thu, 12 Jan 2023 00:05:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFl63-0002Bm-VH
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 00:05:31 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d20d7956-920c-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 01:05:29 +0100 (CET)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 Jan 2023 19:05:26 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH7PR03MB7048.namprd03.prod.outlook.com (2603:10b6:510:2b2::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 00:05:23 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 00:05:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d20d7956-920c-11ed-b8d0-410ff93cb8f0
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v2 4/9] x86/shadow: drop a few uses of mfn_valid()
Thread-Topic: [PATCH v2 4/9] x86/shadow: drop a few uses of mfn_valid()
Thread-Index: AQHZJcQqh5dhxv4yjUqRQKHvfuBGaa6Z55gA
Date: Thu, 12 Jan 2023 00:05:22 +0000
Message-ID: <54297677-c176-9358-f101-3939d133c254@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <cd8028ec-3188-3422-881e-28a3a6d8451c@suse.com>
In-Reply-To: <cd8028ec-3188-3422-881e-28a3a6d8451c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 11/01/2023 1:53 pm, Jan Beulich wrote:
> v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
> v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
> hash table are all only ever written with valid MFNs or INVALID_MFN.
> Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
> these arrays.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Technically I acked this in v1 because the comment wasn't a code
comment, but Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
nevertheless


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 01:36:12 2023
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Beulich, Jan"
	<JBeulich@suse.com>, Roger Pau Monné
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, "Nakajima, Jun"
	<jun.nakajima@intel.com>
Subject: RE: [PATCH 1/2] x86/vmx: Calculate model-specific LBRs once at start
 of day
Thread-Topic: [PATCH 1/2] x86/vmx: Calculate model-specific LBRs once at start
 of day
Thread-Index: AQHZJCMqvD+lv3RMJEyTX9mVHWu5kK6aBACw
Date: Thu, 12 Jan 2023 01:35:46 +0000
Message-ID: <BN9PR11MB52766DB47FC1713963F966568CFD9@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20230109120828.344-1-andrew.cooper3@citrix.com>
 <20230109120828.344-2-andrew.cooper3@citrix.com>
In-Reply-To: <20230109120828.344-2-andrew.cooper3@citrix.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: Monday, January 9, 2023 8:08 PM
>
> There is no point repeating this calculation at runtime, especially as it is
> in the fallback path of the WRMSR/RDMSR handlers.
>
> Move the infrastructure higher in vmx.c to avoid forward declarations,
> renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
> these are model-specific only.
>
> No practical change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 01:39:15 2023
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Beulich, Jan"
	<JBeulich@suse.com>, Roger Pau Monné
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, "Nakajima, Jun"
	<jun.nakajima@intel.com>
Subject: RE: [PATCH 2/2] x86/vmx: Support for CPUs without model-specific LBR
Thread-Topic: [PATCH 2/2] x86/vmx: Support for CPUs without model-specific LBR
Thread-Index: AQHZJCMek6T83CtFrkaQg0ivmixH/a6aBOsw
Date: Thu, 12 Jan 2023 01:39:05 +0000
Message-ID: <BN9PR11MB5276D030EF1671A7509E84098CFD9@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20230109120828.344-1-andrew.cooper3@citrix.com>
 <20230109120828.344-3-andrew.cooper3@citrix.com>
In-Reply-To: <20230109120828.344-3-andrew.cooper3@citrix.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1b7dbf7e-00e3-4daa-75dd-08daf43dca8e
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jan 2023 01:39:05.4394
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: sW4mBx5ZfEf/jpuS48NFvgpBP4JH2K3Vo0SxZhylL9N2AI3zraZqHoXiLzra6n0l9q7ivAyrJBE04/otOcNqgA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW5PR11MB5930
X-OriginatorOrg: intel.com

> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: Monday, January 9, 2023 8:08 PM
>
> Ice Lake (server at least) has both Arch LBR and model-specific LBR.  Sapphire
> Rapids does not have model-specific LBR at all.  I.e. On SPR and later,
> model_specific_lbr will always be NULL, so we must make changes to avoid
> reliably hitting the domain_crash().
>
> The Arch LBR spec states that CPUs without model-specific LBR implement
> MSR_DBG_CTL.LBR by discarding writes and always returning 0.
>
> Do this for any CPU for which we lack model-specific LBR information.
>
> Adjust the now-stale comment, now that the Arch LBR spec has created a
> way to signal "no model specific LBR" to guests.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
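
[Editorial sketch, not part of the archived message or the actual Xen patch: the Arch LBR behaviour discussed in this review — on CPUs without model-specific LBR, writes to the DEBUGCTL LBR-enable bit are discarded and the bit always reads back as 0 — can be illustrated with hypothetical names as follows.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DEBUGCTL_LBR (1u << 0)  /* LBR-enable bit in the DEBUGCTL MSR */

/* Hypothetical per-vCPU state for this sketch only. */
struct vcpu_state {
    uint64_t debugctl;
    bool has_model_specific_lbr;  /* stands in for a non-NULL model_specific_lbr */
};

/* Write handler: lacking model-specific LBR info, silently discard the
 * LBR bit instead of reaching a domain_crash() path. */
static void wrmsr_debugctl(struct vcpu_state *v, uint64_t val)
{
    if (!v->has_model_specific_lbr)
        val &= ~(uint64_t)DEBUGCTL_LBR;  /* write to the bit is dropped */
    v->debugctl = val;
}

/* Read handler: on such CPUs the LBR bit was never latched, so it
 * always reads back as 0; other bits are returned unchanged. */
static uint64_t rdmsr_debugctl(const struct vcpu_state *v)
{
    return v->debugctl;
}
```

On an SPR-like vCPU (no model-specific LBR), writing DEBUGCTL_LBR and reading back yields 0 in that bit while other DEBUGCTL bits survive the round trip.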


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 01:43:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 01:43:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475793.737634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFmdE-00035N-SG; Thu, 12 Jan 2023 01:43:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475793.737634; Thu, 12 Jan 2023 01:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFmdE-00035G-N9; Thu, 12 Jan 2023 01:43:52 +0000
Received: by outflank-mailman (input) for mailman id 475793;
 Thu, 12 Jan 2023 01:43:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFmdD-000356-VN; Thu, 12 Jan 2023 01:43:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFmdD-00086m-S3; Thu, 12 Jan 2023 01:43:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFmdD-0002lA-38; Thu, 12 Jan 2023 01:43:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFmdD-0007Cd-2k; Thu, 12 Jan 2023 01:43:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bE95t7GDQv/FseEad72/n1KM9dMM/4VoobFzPh2blDI=; b=HWE//I5scQXz/NfSMiAy9cZ21y
	TWxJKkdWPcYpTvaDTuRR4993E9TFIODDEcHTE1atUT4I7DeInbLDfimqSYWPT5Q/U5cA21Lxu9714
	sXFMnjXbsbTvZAZOsG9so1y6XeRpyIFabGx84ydkvKwp8kuPuyYl7mByMIjUzC1J3x88=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175726-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175726: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e66d450b6e0ffec635639df993ab43ce28b3383f
X-Osstest-Versions-That:
    xen=4d975798e11579fdf405b348543061129e01b0fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 01:43:51 +0000

flight 175726 xen-unstable real [real]
flight 175731 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175726/
http://logs.test-lab.xenproject.org/osstest/logs/175731/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install   fail pass in 175731-retest
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 175731-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175720
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175720
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175720
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175720
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175720
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175720
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175720
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175720
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175720
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175720
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175720
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175720
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e66d450b6e0ffec635639df993ab43ce28b3383f
baseline version:
 xen                  4d975798e11579fdf405b348543061129e01b0fb

Last test of basis   175720  2023-01-11 08:10:04 Z    0 days
Testing same since   175726  2023-01-11 16:10:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4d975798e1..e66d450b6e  e66d450b6e0ffec635639df993ab43ce28b3383f -> master


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 02:41:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 02:41:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475806.737650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFnWM-0001GP-8P; Thu, 12 Jan 2023 02:40:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475806.737650; Thu, 12 Jan 2023 02:40:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFnWM-0001GI-3N; Thu, 12 Jan 2023 02:40:50 +0000
Received: by outflank-mailman (input) for mailman id 475806;
 Thu, 12 Jan 2023 02:40:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFnWK-0001Fs-P0; Thu, 12 Jan 2023 02:40:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFnWK-0001fF-NS; Thu, 12 Jan 2023 02:40:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFnWK-00045o-BT; Thu, 12 Jan 2023 02:40:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFnWK-0007Z8-At; Thu, 12 Jan 2023 02:40:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RBpF2YQuWouyMA6ecJteT0rCEyoKEo0jW6NTfV/Mfz8=; b=fc/hvl9U5vWMXdaxiCdNE6ertd
	TcfCu5llo+2/Mgb/V92GM/FHZTa4f5OKGCglYdohUcVlMRlNAfqjFr9TbhZbrnGj/5iFC/KuKykNJ
	Gwup40ZkGk42cQchmQyh+LWJ0JukINmv4qnBfJazdsV8ng3eP91tnyGRRlt8KHYSgj04=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175732-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175732: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=83d9679db057d5736c7b5a56db06bb6bb66c3914
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 02:40:48 +0000

flight 175732 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175732/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  83d9679db057d5736c7b5a56db06bb6bb66c3914
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175728  2023-01-11 18:00:27 Z    0 days
Testing same since   175732  2023-01-12 00:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fd42170b15..83d9679db0  83d9679db057d5736c7b5a56db06bb6bb66c3914 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 03:32:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 03:32:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475814.737660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFoKG-0006VS-4T; Thu, 12 Jan 2023 03:32:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475814.737660; Thu, 12 Jan 2023 03:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFoKG-0006VL-1W; Thu, 12 Jan 2023 03:32:24 +0000
Received: by outflank-mailman (input) for mailman id 475814;
 Thu, 12 Jan 2023 03:32:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFoKE-0006VB-Pv; Thu, 12 Jan 2023 03:32:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFoKE-0002t6-Ms; Thu, 12 Jan 2023 03:32:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFoKE-0006Nv-8l; Thu, 12 Jan 2023 03:32:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFoKE-0007KX-8G; Thu, 12 Jan 2023 03:32:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uphChg5XX2Eh7EnrKKYYKtqWo23Nnz4X21l/v6goLmI=; b=5RIPWrBqJRyKcr5D7PTs9c6180
	qesmbtPVHEqyjcZMrzxXGEO7jAVx1NwyFGfgnlMHOlXddaN/rxcYGVmVv9js//YMHG1TMFF4H+SED
	pCJydC3FHuPjIsMEu9WBhBVMkU/zjI4jXNdxINCDWekYncCkeJm0FKjO95GTvJaq/Kyk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175733-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175733: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 03:32:22 +0000

flight 175733 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175733/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175623
 build-amd64                   6 xen-build                fail REGR. vs. 175623
 build-i386                    6 xen-build                fail REGR. vs. 175623
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175623
 build-i386-xsm                6 xen-build                fail REGR. vs. 175623
 build-arm64                   6 xen-build                fail REGR. vs. 175623
 build-armhf                   6 xen-build                fail REGR. vs. 175623

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    4 days
Failing since        175627  2023-01-08 14:40:14 Z    3 days   19 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    2 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 05:49:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 05:49:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475822.737671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFqT2-0003Cj-UA; Thu, 12 Jan 2023 05:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475822.737671; Thu, 12 Jan 2023 05:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFqT2-0003Cc-Qu; Thu, 12 Jan 2023 05:49:36 +0000
Received: by outflank-mailman (input) for mailman id 475822;
 Thu, 12 Jan 2023 05:49:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hjhq=5J=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFqT1-0003CU-Hf
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 05:49:35 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e360f4a3-923c-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 06:49:33 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A944E3EBBA;
 Thu, 12 Jan 2023 05:49:32 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 822E5134AF;
 Thu, 12 Jan 2023 05:49:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id JmJXHmyfv2MCLAAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 12 Jan 2023 05:49:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e360f4a3-923c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673502572; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jCSkJ2BEMY4Qzmv5UmWOj9GwhY8ENcdcUr+TNtTtszk=;
	b=KypXs4wS45T6CBlA84cK0fwR+ved3fP2bsUGeCpIr+vQXCnlWakSM4zfgsXPI1MpXPI/vb
	A/XgIGIUBLO5rdvtmgDZ9NiOKXE2dHPIN7lco/qTo5BSD6HEHRutra4UEOFd+xpmUgkEBt
	tzT7OqUT0Sqnt8RXRVrsqbSp5hHo5vw=
Message-ID: <7ba1b191-ef89-1e0d-0e1b-0b24159a9eb9@suse.com>
Date: Thu, 12 Jan 2023 06:49:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 10/19] tools/xenstore: change per-domain node
 accounting interface
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-11-jgross@suse.com>
 <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
 <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
 <e9eeee72-ecd1-faaa-dc63-b57d50162bbf@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <e9eeee72-ecd1-faaa-dc63-b57d50162bbf@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------4Una2fEaNf807I0FAxj007AF"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------4Una2fEaNf807I0FAxj007AF
Content-Type: multipart/mixed; boundary="------------D1bTzj0mMMmwYj7UHgbYlVoj";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <7ba1b191-ef89-1e0d-0e1b-0b24159a9eb9@suse.com>
Subject: Re: [PATCH v2 10/19] tools/xenstore: change per-domain node
 accounting interface
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-11-jgross@suse.com>
 <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
 <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
 <e9eeee72-ecd1-faaa-dc63-b57d50162bbf@xen.org>
In-Reply-To: <e9eeee72-ecd1-faaa-dc63-b57d50162bbf@xen.org>

--------------D1bTzj0mMMmwYj7UHgbYlVoj
Content-Type: multipart/mixed; boundary="------------scW0k8HS13oUkWvjPCSjh6EM"

--------------scW0k8HS13oUkWvjPCSjh6EM
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 11.01.23 18:48, Julien Grall wrote:
> Hi Juergen,
> 
> On 11/01/2023 08:59, Juergen Gross wrote:
>>> ... to make sure domain_nbentry_add() is not returning a negative value. Then 
>>> it would not work.
>>>
>>> A good example imagine you have a transaction removing nodes from tree but 
>>> not adding any. Then the "ret" would be negative.
>>>
>>> Meanwhile the nodes are also removed outside of the transaction. So the sum 
>>> of "d->nbentry + ret" would be negative resulting to a failure here.
>>
>> Thanks for catching this.
>>
>> I think the correct way to handle this is to return max(d->nbentry + ret, 0) in
>> domain_nbentry_add(). The value might be imprecise, but always >= 0 and never
>> wrong outside of a transaction collision.
> 
> I am bit confused with your proposal. If the return value is imprecise, then 
> what's the point of returning max(...) instead of simply 0?

Please have a look at the use case especially in domain_nbentry(). Returning
always 0 would clearly break quota checks.


Juergen
--------------scW0k8HS13oUkWvjPCSjh6EM--

--------------D1bTzj0mMMmwYj7UHgbYlVoj--


--------------4Una2fEaNf807I0FAxj007AF--


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 06:23:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 06:23:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475833.737682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFqzf-0007Z2-Qq; Thu, 12 Jan 2023 06:23:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475833.737682; Thu, 12 Jan 2023 06:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFqzf-0007Yv-MN; Thu, 12 Jan 2023 06:23:19 +0000
Received: by outflank-mailman (input) for mailman id 475833;
 Thu, 12 Jan 2023 06:23:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=41ci=5J=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFqze-0007Y4-G7
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 06:23:18 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2064.outbound.protection.outlook.com [40.107.104.64])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 99122d91-9241-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 07:23:16 +0100 (CET)
Received: from FR3P281CA0150.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:95::19)
 by AS8PR08MB9600.eurprd08.prod.outlook.com (2603:10a6:20b:618::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 06:23:06 +0000
Received: from VI1EUR03FT039.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:95:cafe::84) by FR3P281CA0150.outlook.office365.com
 (2603:10a6:d10:95::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Thu, 12 Jan 2023 06:23:06 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT039.mail.protection.outlook.com (100.127.144.77) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Thu, 12 Jan 2023 06:23:05 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Thu, 12 Jan 2023 06:23:04 +0000
Received: from a7c287fc9c2f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 338A23BB-E575-4500-918C-2F41A2084C16.1; 
 Thu, 12 Jan 2023 06:22:54 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a7c287fc9c2f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 12 Jan 2023 06:22:54 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AS4PR08MB7686.eurprd08.prod.outlook.com (2603:10a6:20b:505::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 06:22:52 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 06:22:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99122d91-9241-11ed-91b6-6bf2151ebd3b
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
Thread-Topic: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
Thread-Index: AQHZJNEXNkX9GZxGhkCgYl8odWGCka6X2i6AgAJ2fxA=
Date: Thu, 12 Jan 2023 06:22:52 +0000
Message-ID:
 <PAXPR08MB7420E482CACC741B1BA976569EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-3-wei.chen@arm.com>
 <9e32ffa1-1499-f9cd-7ca8-f9493b1269cb@suse.com>
In-Reply-To: <9e32ffa1-1499-f9cd-7ca8-f9493b1269cb@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 11 January 2023 0:38
> To: Wei Chen <Wei.Chen@arm.com>
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Wei
> Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v2 02/17] xen/arm: implement helpers to get and update
> NUMA status
> 
> On 10.01.2023 09:49, Wei Chen wrote:
> > --- a/xen/arch/arm/include/asm/numa.h
> > +++ b/xen/arch/arm/include/asm/numa.h
> > @@ -22,6 +22,12 @@ typedef u8 nodeid_t;
> >   */
> >  #define NR_NODE_MEMBLKS NR_MEM_BANKS
> >
> > +enum dt_numa_status {
> > +    DT_NUMA_INIT,
> 
> I don't see any use of this. I also think the name isn't good, as INIT
> can be taken for "initializer" as well as "initialized". Suggesting an
> alternative would require knowing what the future plans with this are;
> right now ...
> 

static enum dt_numa_status __read_mostly device_tree_numa;
We use DT_NUMA_INIT to indicate device_tree_numa is in a default value
(System's initial value, hasn't done initialization). Maybe rename it
To DT_NUMA_UNINIT be better?

> > +    DT_NUMA_ON,
> > +    DT_NUMA_OFF,
> > +};
> 
> ... the other two are also used only in a single file, at which point
> their placing in a header is also questionable.
>

This is a good point, I will move them to that file.

> > --- /dev/null
> > +++ b/xen/arch/arm/numa.c
> > @@ -0,0 +1,44 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Arm Architecture support layer for NUMA.
> > + *
> > + * Copyright (C) 2021 Arm Ltd
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License version 2 as
> > + * published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> > + *
> > + */
> > +#include <xen/init.h>
> > +#include <xen/numa.h>
> > +
> > +static enum dt_numa_status __read_mostly device_tree_numa;
> 
> __ro_after_init?
>

OK.
 
> > --- a/xen/arch/x86/include/asm/numa.h
> > +++ b/xen/arch/x86/include/asm/numa.h
> > @@ -12,7 +12,6 @@ extern unsigned int numa_node_to_arch_nid(nodeid_t n);
> >
> >  #define ZONE_ALIGN (1UL << (MAX_ORDER+PAGE_SHIFT))
> >
> > -extern bool numa_disabled(void);
> >  extern nodeid_t setup_node(unsigned int pxm);
> >  extern void srat_detect_node(int cpu);
> >
> > --- a/xen/include/xen/numa.h
> > +++ b/xen/include/xen/numa.h
> > @@ -55,6 +55,7 @@ extern void numa_init_array(void);
> >  extern void numa_set_node(unsigned int cpu, nodeid_t node);
> >  extern void numa_initmem_init(unsigned long start_pfn, unsigned long
> end_pfn);
> >  extern void numa_fw_bad(void);
> > +extern bool numa_disabled(void);
> >
> >  extern int arch_numa_setup(const char *opt);
> >  extern bool arch_numa_unavailable(void);
> 
> How is this movement of a declaration related to the subject of the patch?
>

Can I add some messages in commit log to say something like "As we have
Implemented numa_disabled for Arm, so we move numa_disabled to common header"?

Cheers,
Wei Chen
 
> Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 06:32:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 06:32:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475839.737692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFr8A-0000co-Mt; Thu, 12 Jan 2023 06:32:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475839.737692; Thu, 12 Jan 2023 06:32:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFr8A-0000ch-JP; Thu, 12 Jan 2023 06:32:06 +0000
Received: by outflank-mailman (input) for mailman id 475839;
 Thu, 12 Jan 2023 06:32:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=41ci=5J=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pFr89-0000cb-84
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 06:32:05 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2073.outbound.protection.outlook.com [40.107.105.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d1d0274c-9242-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 07:32:00 +0100 (CET)
Received: from FR3P281CA0197.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a5::7) by
 AM7PR08MB5320.eurprd08.prod.outlook.com (2603:10a6:20b:103::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.9; Thu, 12 Jan
 2023 06:31:59 +0000
Received: from VI1EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a5:cafe::ae) by FR3P281CA0197.outlook.office365.com
 (2603:10a6:d10:a5::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Thu, 12 Jan 2023 06:31:58 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT007.mail.protection.outlook.com (100.127.144.86) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Thu, 12 Jan 2023 06:31:58 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Thu, 12 Jan 2023 06:31:57 +0000
Received: from c44bca52d657.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 52929A84-EA63-4D54-8244-C6087760E8A6.1; 
 Thu, 12 Jan 2023 06:31:47 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c44bca52d657.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 12 Jan 2023 06:31:46 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by DB3PR08MB9036.eurprd08.prod.outlook.com (2603:10a6:10:430::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 06:31:43 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 06:31:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1d0274c-9242-11ed-b8d0-410ff93cb8f0
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v2 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index: AQHZJNEfrPi0xon1hUWBAoFjpzsCdK6X3MQAgAJ3JNA=
Date: Thu, 12 Jan 2023 06:31:43 +0000
Message-ID:
 <PAXPR08MB7420A4E3DA252F9F37450EDA9EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-4-wei.chen@arm.com>
 <9fd67aa2-0bd5-16a2-1e19-139504c2090f@suse.com>
In-Reply-To: <9fd67aa2-0bd5-16a2-1e19-139504c2090f@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB3PR08MB9036
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f52fbe73-671b-44d3-72c9-08daf466abe6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	llFdW67WBQNIY/bMXjhHtf+A39xsijzTaYemvmFOLEPq/itJkUKWpbaaf43PiBZy1tAbiUKZj0TtRGqh2QF+ciZuOzPX2dT3RY/Gs8y6dVdr0z5zN3IiBQsnEX0Z6E3sn0lEGAn8YeO5ezw6hQcIiEvqLpOKIMtDpit0Av5dDR2G5sCcVjIs+XZOwnbB6SQbisxv+Y43pPUSzgIp7l+KGjdVnePePoI0dVBVuOjG3aFSyb04ktgVFXXDQxmqmDSLoKhXSSaiwCuYht/2g1W9psbooJOE6AxPtFSz/1lrBx3dztbAlgfbZkPUQ2IB0ZMc569Inc7iyP0mOuEuOpOxLBgsfPFqEd6boMHM02GLXSC9hL4Ml2fOn7YPkQwNo7XjMGcsF8NrYwVXNF6DyOfv5E5DcikFrXHtpFnTuE+pn368oRREj8hO42YFo0OEYXu9H3g7jUcA/9WVfXHS1+xsZ5lqjixJExt+T9UxUROX9js0T5dNfhnVUcTIQN934Ecdm+kzmp6OsFmpKADqkn1n0WzRtalALhhFT/gOiQMEVJA6kBjMNPbluVxOwwaanyLunYvrfHn4cQaJYBW/GVOJX2TV9MOwTgcAxkC1IV7tG4ISV3p2fVtZWOdcIKi2ZS5ZcA5JIGJmN/uQmFR6lteoE/Z4y10XJMUCLOjK9Ye0NYZuvPKby+vnrt3nfGGpvND6RU0ns8HZ/u+H7ceiq2pvU4vN+q1Vbc3HCsxnLkaESkE=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(136003)(396003)(376002)(39860400002)(346002)(451199015)(40470700004)(46966006)(36840700001)(336012)(82740400003)(7696005)(52536014)(86362001)(356005)(33656002)(316002)(40460700003)(9686003)(53546011)(6506007)(186003)(26005)(4326008)(8676002)(54906003)(81166007)(70206006)(70586007)(478600001)(41300700001)(40480700001)(36860700001)(55016003)(5660300002)(47076005)(6862004)(8936002)(2906002)(82310400005)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 06:31:58.2714
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8699eb11-5d92-4dce-24bc-08daf466b4d6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5320

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 11 January 2023 0:47
> To: Wei Chen <Wei.Chen@arm.com>
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Wei
> Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v2 03/17] xen/arm: implement node distance helpers for
> Arm
> 
> On 10.01.2023 09:49, Wei Chen wrote:
> > --- a/xen/arch/arm/include/asm/numa.h
> > +++ b/xen/arch/arm/include/asm/numa.h
> > @@ -28,6 +28,20 @@ enum dt_numa_status {
> >      DT_NUMA_OFF,
> >  };
> >
> > +/*
> > + * In ACPI spec, 0-9 are the reserved values for node distance,
> > + * 10 indicates local node distance, 20 indicates remote node
> > + * distance. Set node distance map in device tree will follow
> > + * the ACPI's definition.
> > + */
> > +#define NUMA_DISTANCE_UDF_MIN   0
> > +#define NUMA_DISTANCE_UDF_MAX   9
> > +#define NUMA_LOCAL_DISTANCE     10
> > +#define NUMA_REMOTE_DISTANCE    20
> 
> In the absence of a caller of numa_set_distance() it is entirely unclear
> whether this tying to ACPI used values is actually appropriate.
> 

From the kernel's NUMA device tree binding, it seems DT NUMA reuses the
ACPI-defined values for distances [1].

> > --- a/xen/arch/arm/numa.c
> > +++ b/xen/arch/arm/numa.c
> > @@ -2,7 +2,7 @@
> >  /*
> >   * Arm Architecture support layer for NUMA.
> >   *
> > - * Copyright (C) 2021 Arm Ltd
> > + * Copyright (C) 2022 Arm Ltd
> 
> I don't think it makes sense to change such after the fact. And certainly
> not in an unrelated patch.
> 

I will restore it, and add an SPDX header.

> > @@ -22,6 +22,11 @@
> >
> >  static enum dt_numa_status __read_mostly device_tree_numa;
> >
> > +static unsigned char __read_mostly
> > +node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
> > +    { 0 }
> > +};
> 
> __ro_after_init?
> 

Yes.

> > @@ -42,3 +47,48 @@ int __init arch_numa_setup(const char *opt)
> >  {
> >      return -EINVAL;
> >  }
> > +
> > +void __init numa_set_distance(nodeid_t from, nodeid_t to,
> > +                              unsigned int distance)
> > +{
> > +    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
> > +    {
> > +        printk(KERN_WARNING
> > +               "NUMA: invalid nodes: from=%"PRIu8" to=%"PRIu8"
> MAX=%"PRIu8"\n",
> > +               from, to, MAX_NUMNODES);
> > +        return;
> > +    }
> > +
> > +    /* NUMA defines 0xff as an unreachable node and 0-9 are undefined
> */
> > +    if ( distance >= NUMA_NO_DISTANCE ||
> > +        (distance >= NUMA_DISTANCE_UDF_MIN &&
> 
> Nit: Indentation.
> 

Ok.

> > +         distance <= NUMA_DISTANCE_UDF_MAX) ||
> > +        (from == to && distance != NUMA_LOCAL_DISTANCE) )
> > +    {
> > +        printk(KERN_WARNING
> > +               "NUMA: invalid distance: from=%"PRIu8" to=%"PRIu8"
> distance=%"PRIu32"\n",
> > +               from, to, distance);
> > +        return;
> > +    }
> > +
> > +    node_distance_map[from][to] = distance;
> > +}
> > +
> > +unsigned char __node_distance(nodeid_t from, nodeid_t to)
> > +{
> > +    /* When NUMA is off, any distance will be treated as remote. */
> > +    if ( numa_disabled() )
> > +        return NUMA_REMOTE_DISTANCE;
> > +
> > +    /*
> > +     * Check whether the nodes are in the matrix range.
> > +     * When any node is out of range, except from and to nodes are the
> > +     * same, we treat them as unreachable (return 0xFF)
> > +     */
> > +    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
> 
> I guess using ARRAY_SIZE() here would be more future-proof.
> 

I will use it in the next version.

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/numa.txt

Thanks,
Wei Chen

> Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 07:04:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 07:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475846.737704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFrdg-00048G-EF; Thu, 12 Jan 2023 07:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475846.737704; Thu, 12 Jan 2023 07:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFrdg-000489-A3; Thu, 12 Jan 2023 07:04:40 +0000
Received: by outflank-mailman (input) for mailman id 475846;
 Thu, 12 Jan 2023 07:04:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFrdf-00047z-Mr; Thu, 12 Jan 2023 07:04:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFrdf-00087N-KL; Thu, 12 Jan 2023 07:04:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFrdf-0002rc-3m; Thu, 12 Jan 2023 07:04:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFrdf-0000fh-3J; Thu, 12 Jan 2023 07:04:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7EZAOK/ePhuZPSB5cKOXYlvM8LHVvlyCtk2hpjW8SVE=; b=lK822K0y5PTiNcyLOce97hIx9N
	QXh8bvEp1GGRR6GGghZ4YuBzju31ATiqbjEbN3qk0p2NxOmMpRZUxtUYRq28qQnvcba8pUPMhxr9/
	Y7XEaD3NLmEs7IrOBKvJD7ItpazFsfNtT6XpeQH2bIYgTxhFeHs1MaAtDWoS9N19pgVk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175730-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175730: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7dd4b804e08041ff56c88bdd8da742d14b17ed25
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 07:04:39 +0000

flight 175730 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175730/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7dd4b804e08041ff56c88bdd8da742d14b17ed25
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   96 days
Failing since        173470  2022-10-08 06:21:34 Z   96 days  202 attempts
Testing same since   175717  2023-01-11 03:09:17 Z    1 days    3 attempts

------------------------------------------------------------
3319 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 505897 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 07:42:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 07:42:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475854.737714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFsEP-0008PG-9l; Thu, 12 Jan 2023 07:42:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475854.737714; Thu, 12 Jan 2023 07:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFsEP-0008P9-6r; Thu, 12 Jan 2023 07:42:37 +0000
Received: by outflank-mailman (input) for mailman id 475854;
 Thu, 12 Jan 2023 07:42:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFsEN-0008P3-Uz
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 07:42:36 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2084.outbound.protection.outlook.com [40.107.14.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac896ff4-924c-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 08:42:33 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8936.eurprd04.prod.outlook.com (2603:10a6:10:2e3::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 07:42:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 07:42:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac896ff4-924c-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FkMgOd0XJmfbjlMJ8p7VTxFagW30Y6OKXGXHMz5HqA/SMJLGPqBjVxjgRr7GNkDxu3+ZlqfPyPPuNGjwNZvuMuLiaQVgM7AzfXnJRWKlW4xIC83dHK8bSmy9Qma6/oTvMz7bBz87ZMFYsyK50LGlA56VC+1RV50HU11DY5TTJi3e3RtKNFRq3KxSRl/UXcfc2Lrx1VaEdE1oPHuLz7ttazinwAe1dYWHzsg3lIUqgpEgHy/x4RE/GKDVHZbbwHHDXw8fOvVXXEsRPAhEwqR9WQciiMSKaI9o1YOSRk+ARBb1G5cXw5TReUnKXy1sqc3rGVTqb6Qz1gxW+29pKAH8Vg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ynWFmUUDTwFMF3VF0PVzuuKnBb1tVw4O41z9x4c9Hmg=;
 b=ED13Bm7XDb6/7fhGMEnjUbVMim+8EyQiHY5J+s35rF9nyB3eyuR5UMHXFfi3g+3miQYvJ0S1TN8tD4yIZ32ZFpNxPFwPU9At6RYX/3VC5ENMw7rQ5N7mR2Yf1bz/EDDGFJ6KabQNK+XgEwlaUI9qLAkLtrJzmxOig6oTOZJEcmPNjwfnclHfwclM2Ij6EnDMozBZYQay2S2o1Z3IOHQwmtqpxQSUTD0QMnh76/6AN6qj2wz/GWfuefMwkwRKHWJJyqyg/pwf4rn1z+D2AU3qCY2lzPxEBL0/45a/FJFmshDocX2ya+Wlj7+F+QXVdwy0e4SqCyis0Y4kpStI6RKTSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ynWFmUUDTwFMF3VF0PVzuuKnBb1tVw4O41z9x4c9Hmg=;
 b=onhVBgryRSrb89w2hqXDOSU0nqEUIFxCqqNkQ6blp2CNBHdpbvtDVe0wCLjNZ1NAbV9O00Km/WqVEqgV+N/GWEyfsXq2jMW0RADrsgGMguxP/Qkr5ao+q95IsbegcGS8J8BxP390UusaSfNKsjj2jPvJ4V9Ip2qdAA7X9g3Hime1OxpQDooK1X0Xd8SIKVHOjifQF5MKyrQLc2IjsbeahgIc4HQY9/cXbRWDFSTof2jTEH6R/cPQdAdd04BL2Di8gBdOgbXE8Bb2RZpCuAqT2kbTmkFdqCcuS72FRq5B11ccNQ47vyLw1eCuV9LUal4nm+POETL2oNYNKeGNJV2g0A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <70d620c9-7fd7-e442-15f6-af75eff33669@suse.com>
Date: Thu, 12 Jan 2023 08:42:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 3/3] x86/vmx: implement Notify VM Exit
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>
References: <20221213163104.19066-1-roger.pau@citrix.com>
 <20221213163104.19066-4-roger.pau@citrix.com>
 <553cbd97-f667-1549-4374-9385d3d53710@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <553cbd97-f667-1549-4374-9385d3d53710@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0067.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8936:EE_
X-MS-Office365-Filtering-Correlation-Id: 1ebb17b6-404e-4087-52a3-08daf4708fb8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1ebb17b6-404e-4087-52a3-08daf4708fb8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 07:42:31.3894
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NCXIpTFGsUBO5JkVSBpQCA3MHDmBRADZZKgC2kY5VOs8YnMwHmaD+e7bVNg+JSCsEnkV4pJ1swVAyetyMWW+Fg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8936

On 11.01.2023 22:05, Andrew Cooper wrote:
> On 13/12/2022 4:31 pm, Roger Pau Monne wrote:
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -1428,10 +1428,19 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
>>  
>>  void vmx_update_debug_state(struct vcpu *v)
>>  {
>> +    unsigned int mask = 1u << TRAP_int3;
>> +
>> +    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
>> +        /*
>> +         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
>> +         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
>> +         */
>> +        mask |= 1u << TRAP_debug;
>> +
>>      if ( v->arch.hvm.debug_state_latch )
>> -        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
>> +        v->arch.hvm.vmx.exception_bitmap |= mask;
>>      else
>> -        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
>> +        v->arch.hvm.vmx.exception_bitmap &= ~mask;
>>  
>>      vmx_vmcs_enter(v);
>>      vmx_update_exception_bitmap(v);
>> @@ -4180,6 +4189,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>          switch ( vector )
>>          {
>>          case TRAP_debug:
>> +            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
>> +                goto exit_and_crash;
> 
> This breaks GDBSX and introspection.
> 
> For XSA-156, we were forced to intercept #DB unilaterally for safety,
> but both GDBSX and Introspection can optionally intercept #DB for
> logical reasons too.
> 
> i.e. we can legitimately end up here even on a system with VM Notify.
> 
> 
> What I can't figure out is why this made any reference to MTF.  MTF has
> absolutely nothing to do with TRAP_debug.

Looking back I see that the two seemingly asymmetric conditions puzzled
me during review, but for some reason I didn't question the MTF part
as a whole; I simply wasn't sure and hence left it to the VMX
maintainers. You're right, though: that part of the condition wants
deleting from vmx_update_debug_state() (on top of deleting the entire
if() above).

> Furthermore, there's no CPU in practice that has VM Notify but lacks
> MTF, so the head of vmx_update_debug_state() looks like dead code...

"No CPU in practice" is not an applicable argument as long as the spec
doesn't spell out a connection between the two features. When running
virtualized ourselves, any valid feature combination may be encountered
(seeing that we similarly surface feature combinations to guests for
which no real hardware equivalent exists).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 07:46:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 07:46:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475861.737726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFsIA-0000gL-S2; Thu, 12 Jan 2023 07:46:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475861.737726; Thu, 12 Jan 2023 07:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFsIA-0000gE-P0; Thu, 12 Jan 2023 07:46:30 +0000
Received: by outflank-mailman (input) for mailman id 475861;
 Thu, 12 Jan 2023 07:46:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFsI9-0000g8-Ps
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 07:46:29 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2058.outbound.protection.outlook.com [40.107.13.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38c74036-924d-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 08:46:28 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9656.eurprd04.prod.outlook.com (2603:10a6:20b:478::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 07:46:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 07:46:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38c74036-924d-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j4H4N8WRQ2tEzPALPcFJFmT1prtTp02cRF0chXsVul01B4sOMBPy8KD6DfdiCp+UrbQw/Viw2JZ/cVZqYwGoRwu4Sxequ/uA4P4Xh/qhZy09v+sqWkFb6GFlHJWG9hfUwgRakVmZvJLCzfFglsGntvlagjUXrrbb6FIsjYgNcRfbs3EBgcoasb8xK6ihA3IL0ATscI1QKAMcINv0+ylQhWlKKk6tryD4ts56qDIgx0VSRN74atPcVcWJk2DExddH8asC1KFqkNTRrXZkX9JRv6/s5vvF/HLQlHHONWaiGbTFWmlz0X08mQ85r7ysrbjN58Zu4DxkVzAnZukI2mLwTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OxTiy6rSNDxEOG8bK3nQOB5HR719EEuYnrmIBeGlZNE=;
 b=OWlC+wemmu7ahqBg8/h/gF96wOk4vPuGoDfZ9zJVsBkiMfOp/dOsyrfdRxrNxZoVLipWlcCkwB31dDeHI5GIbb9MpI3wnV6zk50zZzsLD/gq/JaT4T7sX6pZ1Jo1MU2MvBPJNfDeVYsSXh3Wdy+Ib6qM2BeHK0BPjKs3l+IaUy/LdSEbHphgX2RD34Wx+elbFeBDD5c75JxiN2irlApahvwiCQ507mK3Xk8Q8lpqT/qfoUJMnAY/91kbX2Jn+g40w0a3bOHaT6oilIAIcRx+LcrNDbUhxyD4k+evjHB5vN3j3xDBNSZShse/vmOMtWABu+wU7OfueK+SSELaZQDgHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OxTiy6rSNDxEOG8bK3nQOB5HR719EEuYnrmIBeGlZNE=;
 b=I77rv1tIN+ezpf77q95QdkAvqTOIGsAVKspx6CB0Sljon0hDMxxwniyrQ2eNsBZlHbI7+0OVn+bHi4rwbejvQqMlfzyX+NPnP2LRDHH8FtbL/RIU7/NNgVUChQzPAxhgD6V9oTJarSgG6uF5j5VRQAT3mq0iD7gpSv9F28w+Hd/LZ24sIi+kwFjgHCDQb6/Rh/UGzaixh5VCasAlpSraeV6qQb1F5TaD0o88UFxxRDZYStB2ud3E1K2dTSLOnD2d4HF9SBVjMCtsRcy018OhN1PlId3eTUl5JHpRWgCQk7sb22cAcweEUvAqhSEtRRvLjGICebO4pbDyV0M6FlhMlA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>
Date: Thu, 12 Jan 2023 08:46:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [Multiple reverts] [RFC PATCH] build: include/compat: figure out
 which other compat headers are needed
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230111181703.30991-1-anthony.perard@citrix.com>
 <5c7ffbe4-3c19-d748-9489-9a256faebb7a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5c7ffbe4-3c19-d748-9489-9a256faebb7a@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0032.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9656:EE_
X-MS-Office365-Filtering-Correlation-Id: 449dff49-5e5f-430c-a992-08daf4711b80
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 449dff49-5e5f-430c-a992-08daf4711b80
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 07:46:25.7963
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ISy/a1CtDoLoTgQTBmO8jwhoYpSqT6Cz3w5wqKBGNGmHeZ0cyJT1Gtm3/t5XDNykeLCKqs8PjRKa/4D1sbgbxQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9656

On 11.01.2023 23:29, Andrew Cooper wrote:
> For posterity,
> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/3585379553 is
> the issue in question.
> 
> In file included from arch/x86/hvm/hvm.c:82:
> ./include/compat/hvm/hvm_op.h:6:10: fatal error: ../trace.h: No such
> file or directory
>     6 | #include "../trace.h"
>       |          ^~~~~~~~~~~~
> compilation terminated.
> make[4]: *** [Rules.mk:246: arch/x86/hvm/hvm.o] Error 1
> make[3]: *** [Rules.mk:320: arch/x86/hvm] Error 2
> make[3]: *** Waiting for unfinished jobs....
> 
> 
> All public headers use "../" relative includes for traversing the
> public/ hierarchy.  This cannot feasibly change given our "copy this
> into your project" stance, but it means the compat headers have the same
> structure under compat/.
> 
> This include is supposed to pick up compat/trace.h, but it was
> actually picking up x86's asm/trace.h, hence the build failure now that
> I've deleted the file.
> 
> This demonstrates that trying to be clever with -iquote is a mistake. 
> Nothing good can possibly come of having the <> and "" include paths
> being different.  Therefore we must revert all uses of -iquote.

I'm afraid I can't see the connection between use of -iquote and the bug
here.

> But, that isn't the only bug.
> 
> The real hvm_op.h legitimately includes the real trace.h, therefore the
> compat hvm_op.h legitimately includes the compat trace.h too.  But
> generation of compat trace.h was made asymmetric because of 2c8fabb223.
> 
> In hindsight, that's a public ABI breakage.  The current configuration
> of this build of the hypervisor has no legitimate bearing on the headers
> needing to be installed to /usr/include/xen.
> 
> Or put another way, it is a breakage to require Xen to have
> CONFIG_COMPAT+CONFIG_TRACEBUFFER enabled in the build simply to get the
> public API headers generated properly.

There are no public API headers which are generated. The compat headers
are generated solely for Xen's internal purposes (and hence there's also
no public ABI breakage). Since generation is slow, avoiding the
generation of headers not needed during the build is helpful.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 08:02:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 08:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475870.737737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFsXp-0003c3-Ha; Thu, 12 Jan 2023 08:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475870.737737; Thu, 12 Jan 2023 08:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFsXp-0003bv-El; Thu, 12 Jan 2023 08:02:41 +0000
Received: by outflank-mailman (input) for mailman id 475870;
 Thu, 12 Jan 2023 08:02:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFsXn-0003bi-VN
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 08:02:39 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2073.outbound.protection.outlook.com [40.107.103.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a039cee-924f-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 09:02:37 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9058.eurprd04.prod.outlook.com (2603:10a6:102:231::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Thu, 12 Jan
 2023 08:02:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 08:02:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a039cee-924f-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=njjk/Y+h9vzUI94IBkMpSJU9UTcFA6CnxwLiEjbrq5P8Ex/6qLhk+g1Q25v+gQmnMB2sHsdz29SDzMPZMsXiRzT/d+iH+GqwgWl/6Ndr/8NFKnH9BgyaHCXYRwUGd/P6oh9kaqzNRrSrvY2VrPrBcm+an0HnkkU04eMx13m4QAn0FPK2sswZfU43E7ENuV/27OtXvSe2YU2eR+atTVJ0riGbodvULCWNHdLQ6ByOyyGkR5E8IjG34s6K727ohcjDVgnRLQfshnu4YR0kdFho71DyO8XwcPefNNGI1MoOmqrbe8IMq3qr1pAyLuPKYEJhiLHcT4jRhaleXz88hmtqNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KW1je6TKGecW+nUZ6pRPQuvvrcIDDFkiUqbq4QqiJXg=;
 b=cFYrt1OtqzTfVigIgCRWWdRT00z3Gby0BWZdljt8jFFzTlXcFbOk64Z9Gc/5i/kQ0DhmfSkDUh5oaF/K2/tYXNKROxFc5E65NqRXyU06BqxDHeRZ0qekwA3RY/RS8PYJhyplJLTLspwlEsjxGTY/3+tO8+diRd8dLlvx/q43eOzcz+4dIQEUWg9Y5CC/KzpPAxNSCEQT0azRCH9ZqIyOTB33IysmhVUUJyGw0M8y/6nYDSYWwjlKxG+Ia+AjPpgjgfqz7mVu6ak9hYKnSmRay+nd5GNIBNIFzmG6/PRGI3OdMEd91iiu+ICbLcLtrv0QTM2Izwi7XaFRcfpgiNNN1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KW1je6TKGecW+nUZ6pRPQuvvrcIDDFkiUqbq4QqiJXg=;
 b=Xk7Uv767EggzijOpRjJd1HfNAhUvoTARckuo1bSib8InkmozuUFQAu4w92Ov2iSrjNWpbhAacJADMxM7IL4rzQ7QWr91lyPCWeS6cE6D/XFKkTbEE0kdpoGXPVqbEfwcijnKCKgRwQ6xYN4X46RFam0BJhvCYz6HsYsqy9+AlMerCgtVOkuPo2+iY808gK3OmByRUJ4G3sVyXnxLPTxJ0ZtFlXKVlVJzxoZ3akF4rbu4eeDheEReIOWYB+/h4IuA3NLit1qhU/i9GixrXI1mEw0p6OsD/KsoQPcVVN2hObbrRhGefuECfcM/QKlEAwr0xXSWcmcU0iLHQBUhxVt8gw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cbddd240-cf1f-b2e5-059f-4cc920d7f3b2@suse.com>
Date: Thu, 12 Jan 2023 09:02:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [RFC PATCH] build: include/compat: figure out which other compat
 headers are needed
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20230111181703.30991-1-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230111181703.30991-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0121.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9058:EE_
X-MS-Office365-Filtering-Correlation-Id: 7bf2e183-8486-4c62-18d9-08daf4735c56
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0

On 11.01.2023 19:17, Anthony PERARD wrote:
> Some compat headers depend on other compat headers that may not have
> been generated due to config options.
> 
> This would be a generic way to deal with deps, instead of
>     headers-$(call or $(CONFIG_TRACEBUFFER),$(CONFIG_HVM)) += compat/trace.h

But it would generate dependency headers even if there's only a fake dependency,
as is specifically the case for hvm_op.h vs trace.h (the compat header only
really needs public/trace.h, which it gets from the inclusion of the original
hvm_op.h). Avoiding the generation of unnecessary compat headers is solely to
speed up the build. If that wasn't an issue, I'd say we simply generate all
headers at all times. In particular ...

> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -34,6 +34,29 @@ headers-$(CONFIG_TRACEBUFFER) += compat/trace.h
>  headers-$(CONFIG_XENOPROF) += compat/xenoprof.h
>  headers-$(CONFIG_XSM_FLASK) += compat/xsm/flask_op.h
>  
> +# Find dependencies of compat headers.
> +# e.g. hvm/hvm_op.h needs trace.h; but if CONFIG_TRACEBUFFER=n, then trace.h would be missing.
> +#
> +# Using sed to remove ".." from the path because it's unclear whether something else is available.
> +# There's `realpath`, but it may not be available:
> +#	realpath --relative-to=. -mL compat/hvm/../trace.h -> compat/trace.h
> +# `make` also has a macro for that, $(abspath), but only in recent versions.
> +#
> +# The $(CC) line to gen deps is derived from $(cmd_compat_i)
> +include $(obj)/.compat-header-deps.d
> +$(obj)/.compat-header-deps.d: include/public/hvm/hvm_op.h
> +	$(CC) -MM -MF $@.tmp $(filter-out -Wa$(comma)% -include %/include/xen/config.h,$(XEN_CFLAGS)) $<

... this removal of the config.h inclusion is to avoid introducing any
dependencies on CONFIG_* in the public headers (of course we'd expect such
to be caught during review).
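As an aside, the normalization the hunk's comment describes (collapsing
compat/hvm/../trace.h to compat/trace.h without relying on realpath or
$(abspath)) can be done with a single sed expression; this is an
illustration, not necessarily the exact script the series would use:

```shell
# Collapse one "component/../" pair, as the quoted comment's example needs:
p='compat/hvm/../trace.h'
normalized=$(printf '%s\n' "$p" | sed -r 's,[^/]+/\.\./,,')
echo "$normalized"
```

This prints compat/trace.h; paths with several ".." components would need the substitution repeated until nothing changes.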

I'll try my alternative approach next, and post a patch if successful. I am,
however, aware that this also won't deal with all theoretically possible
cases; I think though that the remaining cases might then better be dealt
with by manually recorded dependencies (kind of along the lines of your

headers-$(call or $(CONFIG_TRACEBUFFER),$(CONFIG_HVM)) += compat/trace.h

in the description).

> +	for f in $$(cat $@.tmp | sed -r '1s/^[^:]*: //; s/ \\$$//'); do \

I'm curious: Why "cat" instead of passing the file as argument to "sed"?
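(For reference, sed takes the file as an operand directly, so the cat is
redundant. A minimal sketch with a fabricated `$(CC) -MM`-style file —
the filename and header names are made up for illustration; note that in
a Makefile recipe each `$` below would be written `$$`:)

```shell
# Fabricated dependency output standing in for $@.tmp:
cat > deps.tmp <<'EOF'
hvm_op.o: public/hvm/hvm_op.h \
 public/trace.h \
 public/xen.h
EOF

# Strip the "target:" prefix on line 1 and trailing backslashes,
# once via a pipeline and once passing the file to sed directly:
piped=$(cat deps.tmp | sed -r '1s/^[^:]*: //; s/ \\$//')
direct=$(sed -r '1s/^[^:]*: //; s/ \\$//' deps.tmp)

[ "$direct" = "$piped" ] && echo equivalent
rm -f deps.tmp
```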

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 08:08:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 08:08:43 +0000
Message-ID: <5a802657-a6e8-9cc3-fefb-09a7e68d1e5e@suse.com>
Date: Thu, 12 Jan 2023 09:08:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-3-wei.chen@arm.com>
 <9e32ffa1-1499-f9cd-7ca8-f9493b1269cb@suse.com>
 <PAXPR08MB7420E482CACC741B1BA976569EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <PAXPR08MB7420E482CACC741B1BA976569EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 12.01.2023 07:22, Wei Chen wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 2023年1月11日 0:38
>>
>> On 10.01.2023 09:49, Wei Chen wrote:
>>> --- a/xen/arch/arm/include/asm/numa.h
>>> +++ b/xen/arch/arm/include/asm/numa.h
>>> @@ -22,6 +22,12 @@ typedef u8 nodeid_t;
>>>   */
>>>  #define NR_NODE_MEMBLKS NR_MEM_BANKS
>>>
>>> +enum dt_numa_status {
>>> +    DT_NUMA_INIT,
>>
>> I don't see any use of this. I also think the name isn't good, as INIT
>> can be taken for "initializer" as well as "initialized". Suggesting an
>> alternative would require knowing what the future plans with this are;
>> right now ...
>>
> 
> static enum dt_numa_status __read_mostly device_tree_numa;

There's no DT_NUMA_INIT here. You _imply_ that it has a value of zero.

> We use DT_NUMA_INIT to indicate that device_tree_numa still has its default
> value (the system's initial value, i.e. initialization hasn't been done yet).
> Maybe renaming it to DT_NUMA_UNINIT would be better?

Perhaps, yes.

>>> --- a/xen/arch/x86/include/asm/numa.h
>>> +++ b/xen/arch/x86/include/asm/numa.h
>>> @@ -12,7 +12,6 @@ extern unsigned int numa_node_to_arch_nid(nodeid_t n);
>>>
>>>  #define ZONE_ALIGN (1UL << (MAX_ORDER+PAGE_SHIFT))
>>>
>>> -extern bool numa_disabled(void);
>>>  extern nodeid_t setup_node(unsigned int pxm);
>>>  extern void srat_detect_node(int cpu);
>>>
>>> --- a/xen/include/xen/numa.h
>>> +++ b/xen/include/xen/numa.h
>>> @@ -55,6 +55,7 @@ extern void numa_init_array(void);
>>>  extern void numa_set_node(unsigned int cpu, nodeid_t node);
>>>  extern void numa_initmem_init(unsigned long start_pfn, unsigned long
>> end_pfn);
>>>  extern void numa_fw_bad(void);
>>> +extern bool numa_disabled(void);
>>>
>>>  extern int arch_numa_setup(const char *opt);
>>>  extern bool arch_numa_unavailable(void);
>>
>> How is this movement of a declaration related to the subject of the patch?
>>
> 
> Can I add something to the commit log, saying e.g. "As we have
> implemented numa_disabled for Arm, move numa_disabled to a common header"?

See your own patch 3, where you have a similar statement (albeit you mean
"declaration" there, not "definition"). However, right now numa_disabled()
is a #define on Arm, so the declaration becoming common isn't really
warranted. In fact it'll get in the way of converting function-like macros
to inline functions for MISRA.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 08:11:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 08:11:27 +0000
Message-ID: <ee06b9a7-bfc7-e6f1-f2f6-f73a1fb42d6d@suse.com>
Date: Thu, 12 Jan 2023 09:11:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-4-wei.chen@arm.com>
 <9fd67aa2-0bd5-16a2-1e19-139504c2090f@suse.com>
 <PAXPR08MB7420A4E3DA252F9F37450EDA9EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <PAXPR08MB7420A4E3DA252F9F37450EDA9EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 12.01.2023 07:31, Wei Chen wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 2023年1月11日 0:47
>>
>> On 10.01.2023 09:49, Wei Chen wrote:
>>> --- a/xen/arch/arm/include/asm/numa.h
>>> +++ b/xen/arch/arm/include/asm/numa.h
>>> @@ -28,6 +28,20 @@ enum dt_numa_status {
>>>      DT_NUMA_OFF,
>>>  };
>>>
>>> +/*
>>> + * In the ACPI spec, 0-9 are reserved values for node distance,
>>> + * 10 indicates the local node distance, and 20 indicates a
>>> + * remote node distance. Setting the node distance map in device
>>> + * tree will follow ACPI's definition.
>>> + */
>>> +#define NUMA_DISTANCE_UDF_MIN   0
>>> +#define NUMA_DISTANCE_UDF_MAX   9
>>> +#define NUMA_LOCAL_DISTANCE     10
>>> +#define NUMA_REMOTE_DISTANCE    20
>>
>> In the absence of a caller of numa_set_distance() it is entirely unclear
>> whether this tying to ACPI used values is actually appropriate.
>>
> 
> From the kernel's NUMA device tree binding, it seems DT NUMA reuses the
> values ACPI uses for distances [1].

I can't find any mention of ACPI in that doc, so the example values used
there matching ACPI's may also be coincidental. In no event can a Linux
kernel doc serve as a DT specification. If values are to match ACPI's, I
expect a DT spec to actually say so.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 08:29:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 08:29:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 376ae5b2-9253-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DL/iYYTLzQHhiT5NJsqQUflNwBq/XnQTi7AVoBheryk=;
 b=Oa/yaxVGAUB0Rj19ZnEEe+KIJVFwq+RaRRFQTGC3a/lvsJcNQ5aCiQGehDRh7mhntZixha5GnP4RJqMYL5UpIbhV15RyjU4EMkbk1b1VTwbS510Zeh7hZnm1kjzjehil0l3lpjwy/Ts6Z1e+5v3pkhJ2rF7/lF4K8+xdIO3h0BM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bmG8eWe4dlobd7eEWvDmTZd6ThgCekBpIr2vPOLvYVWkPTbQ87EmQTfEezkcRQiKS1L3/KDYqQkUw4TJeX3fBrkr/Rzm7lTLG2b90r0QsbM/JgHxhuldHxRjOsjr5NHhCbv3X2ZyZeR8Et2V/ac0TMKmmBSVWRQcQ7BkLZ54D7HxXdn34R1M/Sdp3EcLMQwDlTbGiNyxMiUNbWIDRXDdR10S//hbSQZsIJgmSI9hWqPPy57Vy8hjxuBMuET5/ISirZoQBSagMdTXe74nedqkFATMy5hDrSqBOn+HGdamuRXVrtvdidCaeHWBxGLmpC1w/0KYjBTsmEfr6xSy5GmRIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DL/iYYTLzQHhiT5NJsqQUflNwBq/XnQTi7AVoBheryk=;
 b=Cyzf6+iSsykWj/xpmqtOspoNxPEgVedyjRhrPJkIWY3gBQP6SnNDMufJiz1xjNylUFSXOGVwdS2JaJ64BHZuHAlcL7KR+mHBzsqMrNhYCBe+v3NKo5YCHaNv2x9VqbzhEfMMKoF2ueyrAs6qKfemzor/fl50qIVeA+2sPUjbNCnHFExijiJ/KtyXgAojXyUa5b1b8toWNkuP82+5q9deHlZgs9BsASBbNs4yvJ3fICJWm7c0ezdkERO0aOEKdc+LM+Bn7kd0J/DLVsLaa7FpOoOMvL7ydYIHQAx4cxZhmiruXtw3XUCNFX1GFlKHj/Qou7K3oYS8svFN/UKgnwjRTw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DL/iYYTLzQHhiT5NJsqQUflNwBq/XnQTi7AVoBheryk=;
 b=Oa/yaxVGAUB0Rj19ZnEEe+KIJVFwq+RaRRFQTGC3a/lvsJcNQ5aCiQGehDRh7mhntZixha5GnP4RJqMYL5UpIbhV15RyjU4EMkbk1b1VTwbS510Zeh7hZnm1kjzjehil0l3lpjwy/Ts6Z1e+5v3pkhJ2rF7/lF4K8+xdIO3h0BM=
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v2 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index: AQHZJNEfrPi0xon1hUWBAoFjpzsCdK6X3MQAgAJ3JNCAAB1cgIAAAqfA
Date: Thu, 12 Jan 2023 08:29:05 +0000
Message-ID:
 <PAXPR08MB7420F074E71EF0C02A5DD6F99EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-4-wei.chen@arm.com>
 <9fd67aa2-0bd5-16a2-1e19-139504c2090f@suse.com>
 <PAXPR08MB7420A4E3DA252F9F37450EDA9EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <ee06b9a7-bfc7-e6f1-f2f6-f73a1fb42d6d@suse.com>
In-Reply-To: <ee06b9a7-bfc7-e6f1-f2f6-f73a1fb42d6d@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: EC1657997C37974EB9B39B4D0E357F87.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	PAXPR08MB7420:EE_|GV2PR08MB8750:EE_|DBAEUR03FT043:EE_|AS8PR08MB6504:EE_
X-MS-Office365-Filtering-Correlation-Id: 25ee3c03-5407-423c-a805-08daf47719f0
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 nZAtoqf1S4HJ9o1Bo7u3yCtS59qUo6OjyhGbfiG6lTJBBqvHIUazKzmeFjc9NqqD9Y0ggtks/+sAkfiblwFZR7LTG5GXGZWMLQnzMrqIIIZGtOjcnhbZo6RFi6gRdJ9N6kc0P6rqdeAFRjmOJNBrPL/XsFxT/MQimaBo2xDf8FtMzaftF4fxj2XDnso4TIHZTIPJ+RA4wlowxv7flrdfuOZhjzhCAhbASmmhCatToXr9Cfo0Ad9OO0mWecjZ4OIytrqDAY5W9R58H9d6lAdGqF9MzmTmkIH2nAX5rskdF3zCaLBGCG9F6dhE/Izam/fLuiSNZ3qOF+Nva1VZMkW/VDxhTPZ2sJERgDD454r+cGGchPlw+yNZhqRqN38Rk1cDtfelah+8x18/bic+uPjiY5U7LH0tbilrVhPr451QfVLYUnaHtsnXZ18a/U7bjIIk6bSZ5mGHGgJirHZuV/asWWCSScQE0dwv9pLsIha/uJIdvehlCdMKt/R/lrj2psmlh9UsNUuzg7+9xFSaPtKu37HPUTld5QEqW39CIGQeIR8H4Z906mpWzqkz1v5pdVShicGsHy2qGbu+98sVoLbe4eYQWAZpJDRrYBi7gXIAgCZgERAyzdTmoBBiJWKXQqtsyD68/8BQX+Y2HN3lTHOIN2boxBMR/pcj8tIqkeS/8tMi9pwSsxNuz3/g/G7GnXy55APLa7fc0dZ32RB+9xO33almGhN1J9QPwgXv6hsxJtM=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(396003)(366004)(136003)(376002)(346002)(451199015)(38100700002)(122000001)(66476007)(66446008)(4326008)(64756008)(86362001)(54906003)(6916009)(66556008)(66946007)(76116006)(38070700005)(33656002)(8676002)(41300700001)(8936002)(55016003)(316002)(2906002)(5660300002)(52536014)(7696005)(83380400001)(186003)(966005)(53546011)(71200400001)(26005)(6506007)(478600001)(9686003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8750
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	25a5b978-5db1-47f6-3465-08daf47711a8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	S+eI4lkpq/2RXdrnSGuzWUZ+Bfw8gbONIkYgezQeWC7O4NiKGTRweG8hJkmuF7wxhSUPEVyuKqXHaS4CuBFSh5mAlJOF3SS29yiBHVEHYnoXF5cjJVcz+nTeJrOEUm3C/7t1rmHwI8Br34MZkaY5kYjm+rqexoU76iTro+lpUBE0Fr1mqsr3lrwLYnj9X42kch7LhYSB+Cw6jjLcK0GNAqqU2fvBsC53VXlErL+Z6fhbuIzOjOHDi0HxkvNuhAVA7ezohDB0rQGzxQQy1895DlZzJMitDDKHwnsM+4+JeToL9tdgDV0DcmOzxWeuH8AwMSLHKICK/Cm9HoIZlD3xWWuHUX0cM1LzY6K4ll7fq6MJvDzp18W2oXPEQpv/LKEFa1rs7oNtvlXh+WzUM8NB5pahWt3Vqxs2nmpHuO8NXcbIdCO9zbquGwYsvVYub3ZCKfWKAH2ks6XRJRkhWvnW/k8w0z4n3D6dGuNiUhGP3kf188smr8Hl/tt90JDf/KkuChaz6KMMXF37OVqZ3EOhQzYhjkwpRySeec3LO2EbOySe/9nKTQMI4oXsukcctQlVXbW1sVDfStziUrEwFv6mVNyUL02j2dSPjJ/aEymmSNtFzceAhMcsmUoZNjyIs0cduzBQR1r7z3/Evvo6S9OCS80wPcrW7NHixEgpp/rInKlfVZvu0LUOJ51B9H7gZ/ssqegHEORmex+Rvx7t2aqdfdOBVR7byypriIICIwpMbUE=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(346002)(136003)(376002)(39860400002)(396003)(451199015)(46966006)(40470700004)(36840700001)(70586007)(6862004)(8936002)(70206006)(36860700001)(4326008)(52536014)(8676002)(82310400005)(81166007)(5660300002)(356005)(33656002)(86362001)(2906002)(316002)(40460700003)(54906003)(83380400001)(26005)(7696005)(41300700001)(336012)(47076005)(478600001)(82740400003)(186003)(40480700001)(53546011)(55016003)(966005)(9686003)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 08:29:19.9834
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 25ee3c03-5407-423c-a805-08daf47719f0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6504

SGkgSmFuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4NCj4gU2VudDogMjAyM+W5tDHmnIgxMuaXpSAxNjoxMQ0K
PiBUbzogV2VpIENoZW4gPFdlaS5DaGVuQGFybS5jb20+DQo+IENjOiBuZCA8bmRAYXJtLmNvbT47
IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz47IEp1bGllbg0KPiBH
cmFsbCA8anVsaWVuQHhlbi5vcmc+OyBCZXJ0cmFuZCBNYXJxdWlzIDxCZXJ0cmFuZC5NYXJxdWlz
QGFybS5jb20+Ow0KPiBWb2xvZHlteXIgQmFiY2h1ayA8Vm9sb2R5bXlyX0JhYmNodWtAZXBhbS5j
b20+OyBBbmRyZXcgQ29vcGVyDQo+IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPjsgR2Vvcmdl
IER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBjaXRyaXguY29tPjsgV2VpDQo+IExpdSA8d2xAeGVuLm9y
Zz47IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPjsgeGVuLQ0KPiBkZXZl
bEBsaXN0cy54ZW5wcm9qZWN0Lm9yZw0KPiBTdWJqZWN0OiBSZTogW1BBVENIIHYyIDAzLzE3XSB4
ZW4vYXJtOiBpbXBsZW1lbnQgbm9kZSBkaXN0YW5jZSBoZWxwZXJzIGZvcg0KPiBBcm0NCj4gDQo+
IE9uIDEyLjAxLjIwMjMgMDc6MzEsIFdlaSBDaGVuIHdyb3RlOg0KPiA+PiAtLS0tLU9yaWdpbmFs
IE1lc3NhZ2UtLS0tLQ0KPiA+PiBGcm9tOiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+
DQo+ID4+IFNlbnQ6IDIwMjPlubQx5pyIMTHml6UgMDo0Nw0KPiA+Pg0KPiA+PiBPbiAxMC4wMS4y
MDIzIDA5OjQ5LCBXZWkgQ2hlbiB3cm90ZToNCj4gPj4+IC0tLSBhL3hlbi9hcmNoL2FybS9pbmNs
dWRlL2FzbS9udW1hLmgNCj4gPj4+ICsrKyBiL3hlbi9hcmNoL2FybS9pbmNsdWRlL2FzbS9udW1h
LmgNCj4gPj4+IEBAIC0yOCw2ICsyOCwyMCBAQCBlbnVtIGR0X251bWFfc3RhdHVzIHsNCj4gPj4+
ICAgICAgRFRfTlVNQV9PRkYsDQo+ID4+PiAgfTsNCj4gPj4+DQo+ID4+PiArLyoNCj4gPj4+ICsg
KiBJbiBBQ1BJIHNwZWMsIDAtOSBhcmUgdGhlIHJlc2VydmVkIHZhbHVlcyBmb3Igbm9kZSBkaXN0
YW5jZSwNCj4gPj4+ICsgKiAxMCBpbmRpY2F0ZXMgbG9jYWwgbm9kZSBkaXN0YW5jZSwgMjAgaW5k
aWNhdGVzIHJlbW90ZSBub2RlDQo+ID4+PiArICogZGlzdGFuY2UuIFNldCBub2RlIGRpc3RhbmNl
IG1hcCBpbiBkZXZpY2UgdHJlZSB3aWxsIGZvbGxvdw0KPiA+Pj4gKyAqIHRoZSBBQ1BJJ3MgZGVm
aW5pdGlvbi4NCj4gPj4+ICsgKi8NCj4gPj4+ICsjZGVmaW5lIE5VTUFfRElTVEFOQ0VfVURGX01J
TiAgIDANCj4gPj4+ICsjZGVmaW5lIE5VTUFfRElTVEFOQ0VfVURGX01BWCAgIDkNCj4gPj4+ICsj
ZGVmaW5lIE5VTUFfTE9DQUxfRElTVEFOQ0UgICAgIDEwDQo+ID4+PiArI2RlZmluZSBOVU1BX1JF
TU9URV9ESVNUQU5DRSAgICAyMA0KPiA+Pg0KPiA+PiBJbiB0aGUgYWJzZW5jZSBvZiBhIGNhbGxl
ciBvZiBudW1hX3NldF9kaXN0YW5jZSgpIGl0IGlzIGVudGlyZWx5DQo+IHVuY2xlYXINCj4gPj4g
d2hldGhlciB0aGlzIHR5aW5nIHRvIEFDUEkgdXNlZCB2YWx1ZXMgaXMgYWN0dWFsbHkgYXBwcm9w
cmlhdGUuDQo+ID4+DQo+ID4NCj4gPiBGcm9tIEtlcm5lbCdzIE5VTUEgZGV2aWNlIHRyZWUgYmlu
ZGluZywgaXQgc2VlbXMgRFQgTlVNQSBhcmUgcmV1c2luZw0KPiA+IEFDUEkgdXNlZCB2YWx1ZXMg
Zm9yIGRpc3RhbmNlcyBbMV0uDQo+IA0KPiBJIGNhbid0IGZpbmQgYW55IG1lbnRpb24gb2YgQUNQ
SSBpbiB0aGF0IGRvYywgc28gdGhlIGV4YW1wbGUgdmFsdWVzIHVzZWQNCj4gdGhlcmUgbWF0Y2hp
bmcgQUNQSSdzIG1heSBhbHNvIGJlIGNvaW5jaWRlbnRhbC4gSW4gbm8gZXZlbnQgY2FuIGEgTGlu
dXgNCj4ga2VybmVsIGRvYyBzZXJ2ZSBhcyBEVCBzcGVjaWZpY2F0aW9uLiBJZiB2YWx1ZXMgYXJl
IHRvIG1hdGNoIEFDUEkncywgSQ0KPiBleHBlY3QgYSBEVCBzcGVjIHRvIGFjdHVhbGx5IHNheSBz
by4NCj4gDQoNClVuZm9ydHVuYXRlbHksIHRoZSBsYXRlc3QgZGV2aWNlIHRyZWUgc3BlYyBkb2Vz
bid0IGhhdmUgYW55IE5VTUENCmRlc2NyaXB0aW9uIFsxXS4gQnV0IGlmIHdlIGRlZmluZSBkaWZm
ZXJlbnQgdmFsdWVzIGZvciBEVCBOVU1BIGluIFhlbiwNCndlIG1heSBoYXZlIGFuIGluY29tcGF0
aWJsZSB3aXRoIExpbnV4IERULg0KDQpbMV0gaHR0cHM6Ly9naXRodWIuY29tL2RldmljZXRyZWUt
b3JnL2RldmljZXRyZWUtc3BlY2lmaWNhdGlvbi9yZWxlYXNlcy9kb3dubG9hZC92MC40LXJjMS9k
ZXZpY2V0cmVlLXNwZWNpZmljYXRpb24tdjAuNC1yYzEucGRmDQoNCkNoZWVycywNCldlaSBDaGVu
DQoNCj4gSmFuDQo=


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 09:15:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 09:15:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475895.737781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFtff-0004HH-Lm; Thu, 12 Jan 2023 09:14:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475895.737781; Thu, 12 Jan 2023 09:14:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFtff-0004HA-J8; Thu, 12 Jan 2023 09:14:51 +0000
Received: by outflank-mailman (input) for mailman id 475895;
 Thu, 12 Jan 2023 09:14:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFtfe-0004H4-6w
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 09:14:50 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2073.outbound.protection.outlook.com [40.107.21.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8e39a3dc-9259-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 10:14:47 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7382.eurprd04.prod.outlook.com (2603:10a6:10:1ab::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Thu, 12 Jan
 2023 09:14:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 09:14:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e39a3dc-9259-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XRsLghl/IFzYgH2nEzacVx6KLftGuSR1OxRR7td4dwmZf6aPqiMHn6iGcC9d/ls6yqjCzt8jy/K9RUqeClUS8CgtfLDVq6e1DHDbiJ8vmbbqbLxKD9ASDnMVNpe9EI8vrQFzZjMJ5qHj6iTFNrnW150/UzHT+tlJM6lsdE0XT1I1A9DRqn4PTBn13RP9OVQlpE4s0l5Aw9UXbTKw8o+4uZyVHCzx24MLmxK+L7xxA1PnJonsIt0hI4nq3ENSKNig4DyNJw7WmRbfKnlohDtaqQV5BTTVeU2XEl3dEyo3ntqls1CCgsG+f6lMuU9tpJJs0WzIqJ1NV6RDiotseuBDzg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NrXq9sANmJGnfgim5bzEJCucEceEnyBbmFCH112mVlw=;
 b=KAgJ7K5nkN+HnnNbITGHrr+uR+Y+mbyVrnNGi/C2/B1OD/gda7th4c5YMDLDB6WKVkaNoEJNHWPrbiADv3Ord034MS8+3IHvSUctfAkOLsmKi4pZt8m6fKZT5H6ReRme5TjdW3fnRXw8hB42xVeCk+7a2aTzotyqHooccGNZhKqY4isxAhaCrnwyC0KCeRUT6dYR0m2c47AtZ11RnbR1gaXQ/TXmgDdhDOd1wKY1lLzsLvo1uP2f5pjfeB1p1DCUPTcTquGfqXnWW8d7s1NtXkbrsmCnXpL8BPsS+NUYVKWQJz47ODrsbp7RloK1l7oE2GKqI3RE52amhdKVHpCUXQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NrXq9sANmJGnfgim5bzEJCucEceEnyBbmFCH112mVlw=;
 b=ryLa0WHYESd5FZ50y9A/EtXrXImIaL3jDhIAXGoxJxEjYb0VIUjPzGgulsCkSZttZWftcr9PZYsdMgOAp/yKspoVOw5JwKEbgjtrfPQHs2SYeNJpLLAGFznJpBeMCrjhAftsCdNiLaKAtXxZwpwmld/kc4EVkLhE0KjB1st1tZhSY0NQXZB3tlo28EXd7FdEx+P2zwfbFJ7kzFR/cVomVXyeMHCn7SvU3+61E1gVMwy6kdv3861+KtflHHbbOhhYFhTXfbGY130xmo9EBCq+iry5/DABuhuikPIW2SKS+pENwNWqmc4GOhjrlN7K6C+pd0TIcejdYdqkNuumE/PnRw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ee06d3ff-c73f-baa7-479c-7c9995156526@suse.com>
Date: Thu, 12 Jan 2023 10:14:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [Multiple reverts] [RFC PATCH] build: include/compat: figure out
 which other compat headers are needed
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230111181703.30991-1-anthony.perard@citrix.com>
 <5c7ffbe4-3c19-d748-9489-9a256faebb7a@citrix.com>
 <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>
In-Reply-To: <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0050.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7382:EE_
X-MS-Office365-Filtering-Correlation-Id: 8f904e25-e11a-4909-6bc1-08daf47d7144
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9Waj5xBUZWvfirsEQF7pvdNrFtDu+opzeZ6Ikrr9EIklgUubgBpvOlOQP6aAK5vEwnJjEg3Pn0p69pjZK2fvXFP+abeVBDkYyk8OsZcvqAQrLpiVIarb6/e6L71Dudl5C8ff3Sc8tUpkNpOWdspUZPQd/wi9hZjAhLDQptToSnzE+n39CTAf3kCOdI2nRfzZH2jTrpXVC6eiiRKrTI/skDB7DdJxSkiymSLhA5VhdRlDqf3g3Z1V4qNq2CfgZxRbApiR7Hk+H5oFxGNyoGIzkb6UBaPro6NE8xIIKCQOW2wZPylYdGM/76fLXpjgRet3eN2GOfJVjqb9RxI5w7bL09ZGJG6WaVdjiGrYjKV68t2f4Xpeo0BYKQ8DH7lSzngzAR9Qkc43RLy46zR02EwcWmhzI//bbaoIXn6w1tmf2EdVG0Q2VJrM9vnbXPcCub5In0Fil6c0dVXLsBbcM9Zvxr/1E2z1lU6SzBgPYo+Aba1Q6hOJ0JeX1V3WK+JTCarmPEWM4Szzaow5tH0KcWfZPDsKeAHpVpXuN5QG90Qtex0u2HjKT4EiiVWXXntCrg8bxZETxHELcyrA2kmyRxilsvio60gmjmBBsK32nmbB+eoPZsQWQCZv30+6H3W6Vgiiq4KDaiSR0uxcgzWkDlJOHUfVr7aoNDmI0Y7Pnwz+/iKGA1Fjto0tR3cDq3KRA5b7LKqs1T4xW2rVjjP/awXhTpFR0sWYYdbAEqzrcrryNNfmYuL5HgWGjWhpQVMlvEohutKnJlHHqL/Jc6x6LV4mbw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(396003)(346002)(366004)(136003)(376002)(451199015)(53546011)(6486002)(478600001)(6506007)(6666004)(186003)(6512007)(26005)(66476007)(316002)(31686004)(966005)(6916009)(4326008)(66946007)(54906003)(8676002)(2616005)(66556008)(38100700002)(83380400001)(41300700001)(5660300002)(8936002)(31696002)(86362001)(36756003)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?eHdhYXlkd21JNFEwdVlRaHl5YVV3ZURwUzF6b3FnNG5kSlNRVFJQUjF1VE1X?=
 =?utf-8?B?SzQ1b0ZOeXF4eEtFOFc0K3ZqbkFWcUIremFmZEM1THFuZng3S2RJOUR3Y3or?=
 =?utf-8?B?U29CTWNoUmZzemRWa0xKa2pQak9mZ3EzNEJhNzZXSXZMQndEUVBSOFl0NXNk?=
 =?utf-8?B?NG4xZXFDVE14dThpS0puRld4Q0hyYm12NUFaZEd5dGFwQTN5emtNbks4RFBj?=
 =?utf-8?B?Yis0bm5aSnNldnE5RjRqZ2dJdDVPY0pUcThySUNSSU1TdVFKbG44RnNIN1hC?=
 =?utf-8?B?enlVSkR3ZXlsZWRqZnBYZEdoN1RQV3RpZ3loR3labkoxUHJQdHZ3SVFTNHJQ?=
 =?utf-8?B?ZkZjRHk0aVNSUnBmUEtmeElYTXlTMFUrajhkd1JqQzdPSG9lVHdoUzlaU2k3?=
 =?utf-8?B?dGJlQ3RRMWhYSVp3cU9ud2ErajdOUGIwTENtazlTc21YRnB1bTMwemdveHlH?=
 =?utf-8?B?WGxqQWlLaHRuQjNaZzlzTGlHSTMrdU1hbjR3OFdUZXlxcTQycERIdy9IN0dF?=
 =?utf-8?B?QUxYMDN6TzBrc1lZbjRibjd6MlF0M21JQ1ppdnJHMXhoRnhLQWpBZTZ1d1Va?=
 =?utf-8?B?YmxQUEFXZytTcGxMNlVqQWM5NFM0UEE5aXdPQWZiTEtDOVZuNllzcVBIUDVE?=
 =?utf-8?B?dDVJMm1WQTlpUGRHUXBrQnhEQnRNVks2UFhlQ05QNGovNW9lbG1EL1hJLzlB?=
 =?utf-8?B?ejlJN3JXYnZLZTdFNlRacnBGRFI0dUxqckdzMzlWbUJib3cvQmEwTWFaVjdz?=
 =?utf-8?B?clVXMkpYVStpM0E1cSt2TEo2TjFVZFZiZ1dFbnE4UjJrM0lZNUtUeStxUnBM?=
 =?utf-8?B?amsvRDNrR1RFU0ozWHdKNHFEV1hOcEhOQkhEcVZhb0ZyNXVFKzlaRHE5aytk?=
 =?utf-8?B?WXR3a0hHSW4rb0NqQjRhM1RhMlVsdjY0dVBzd2EyNmRkbnI5cnZxNFlEWHZU?=
 =?utf-8?B?QVR5V256Szl5Z2N2TS9FTHhteWlzT2NGOURJU0R1bGNmVCtGQjdORGtYL3NV?=
 =?utf-8?B?YjZuV2RVVkdmZG12aDJ2Q0JUamp4WFAvNC9YNVh3dGFCN0lCejd5K0xKeElO?=
 =?utf-8?B?d056c2ZFT0VpR2RCUkJ2eUdTSFk4akRmSzRoMnFmeTdZbEkwL2tPZTlYUTk4?=
 =?utf-8?B?Z3VXQUVDWjFkSTVLeXRDdDFzMFlBak1FS1VMWklhRHlSOVEvSGdmcVgvYWx5?=
 =?utf-8?B?Y1RYbFdXdjB3bVVEb3BmMFBMUFJEWFR4THdVZThjZnQvTkhXczNic3BXS2RW?=
 =?utf-8?B?THNEb1pHOFQ0Y3hBUE1YU1Z4bHhBSzdrZjBacVZwUkxjQ0RmVVE2NStJb2I3?=
 =?utf-8?B?QkhqMHlwZFNYTFpiZ2dGeWk3SHlGblFsMmhMOXkvR3JwbzBaSjZyblZ1KzZI?=
 =?utf-8?B?S1VLWUZsa0VjaFlJSnpNSDlhVmMzSENKTUpaR0QrN0dxeTVzZElTdi9vY011?=
 =?utf-8?B?NDJTdXU3NDFYSmpzNW51SXNYNzZLT0Q3azB4aENKZG52cHNPUHo2b2hUN241?=
 =?utf-8?B?ckc2dU1qS1IyeERjbSs0bXVLdDJEUXJpNTJNbmQ3VTcrakdhUTEvcjZDSlF4?=
 =?utf-8?B?NkV6Vk5uTUpiS1gvTUNlSTVXa0hVQ0k2MEZON3ZuU0RVbnBQdWtKVlkyU01m?=
 =?utf-8?B?azBJYlJVUEUwdVgvdGRLempyNkt4REIxY0padVBqTkV3WVNTcFp2cWI0L0FJ?=
 =?utf-8?B?VjBHLzV1b2F5RVhMeVZmMmJPUnFoMFdJY29TK0ZnOTZyTUdrMVkyNjhucE5x?=
 =?utf-8?B?eFJ0MXUveWxKc2Zvc3lsZEtuQ05KcWNmTncrL05HVFJLYTZpbGFTNGpNYWcy?=
 =?utf-8?B?d3pLWFd0R3hWZVZGNlBMUDhYMVlTSGJXYmZ1OHQ4bklnRzh6aTBydEpOeSty?=
 =?utf-8?B?NjkxbHJwNXNqV3V4YUkvYmN1dlhPVDNBaEs5dXBDcjJQSEtoVTAzVW0rcVk0?=
 =?utf-8?B?bENmbnlUME9LMUthYzFuajYveGd2a0tUdmZOa3oxbk5tdmt1SGxiYXpzOWN4?=
 =?utf-8?B?TGNpRzJuNFNWaTdNZ29oc1FFaDdVbXZBTHpGR2hCdEh3aDVMdUV4cUIwN3p4?=
 =?utf-8?B?RjJ0a0ZIVGxaUEgraEhxZm1sVlZPUjFKcUNFZzRPU1BPOGkyQVpJTXlNbktR?=
 =?utf-8?Q?vP5Ew4SkaDw8qP2ctv5hO+64J?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f904e25-e11a-4909-6bc1-08daf47d7144
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 09:14:43.7713
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QzuHKY2TJ7VSHLQINA6QOgZbuNKmc4Jml9UP7OhxsAIdpzxQFbxx2lOAD26gZKk24PgGI+arMNCY4q41zEFuRQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7382

On 12.01.2023 08:46, Jan Beulich wrote:
> On 11.01.2023 23:29, Andrew Cooper wrote:
>> For posterity,
>> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/3585379553 is
>> the issue in question.
>>
>> In file included from arch/x86/hvm/hvm.c:82:
>> ./include/compat/hvm/hvm_op.h:6:10: fatal error: ../trace.h: No such
>> file or directory
>>     6 | #include "../trace.h"
>>       |          ^~~~~~~~~~~~
>> compilation terminated.
>> make[4]: *** [Rules.mk:246: arch/x86/hvm/hvm.o] Error 1
>> make[3]: *** [Rules.mk:320: arch/x86/hvm] Error 2
>> make[3]: *** Waiting for unfinished jobs....
>>
>>
>> All public headers use "../" relative includes for traversing the
>> public/ hierarchy.  This cannot feasibly change given our "copy this
>> into your project" stance, but it means the compat headers have the same
>> structure under compat/.
>>
>> This include is supposed to be including compat/trace.h but it was
>> actually picking up x86's asm/trace.h, hence the build failure now that
>> I've deleted the file.
>>
>> This demonstrates that trying to be clever with -iquote is a mistake. 
>> Nothing good can possibly come of having the <> and "" include paths
>> being different.  Therefore we must revert all uses of -iquote.
> 
> I'm afraid I can't see the connection between use of -iquote and the bug
> here.

In fact I think the issue was caused by

CFLAGS += -I$(srctree)/arch/x86/include/asm/mach-generic
CFLAGS += -I$(srctree)/arch/x86/include/asm/mach-default

which allowed the compiler, when seeing "../trace.h", to pick up
arch/x86/include/asm/trace.h. No -iquote in sight here; all that
happens is that #include "..." falls back to the -I specified
paths when the file cannot be found via the ""-only paths.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 09:17:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 09:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475902.737791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFtid-0004wY-6j; Thu, 12 Jan 2023 09:17:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475902.737791; Thu, 12 Jan 2023 09:17:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFtid-0004wR-46; Thu, 12 Jan 2023 09:17:55 +0000
Received: by outflank-mailman (input) for mailman id 475902;
 Thu, 12 Jan 2023 09:17:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFtib-0004wL-LD
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 09:17:53 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2086.outbound.protection.outlook.com [40.107.247.86])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fcf42ad4-9259-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 10:17:51 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9572.eurprd04.prod.outlook.com (2603:10a6:102:24f::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 09:17:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 09:17:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcf42ad4-9259-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jy+oa6BwyNonTn+EQbtnnvjI+/E/m1RPpYTRYt6gFnixa5VQS/rs2yVL7hcMkWXT0lvkrEXZP+m9dUVlnhm2o7J5Z12fSjet91bdt2OI3309qBcIpzqPd3TXf03Z9tRA1nFCXfxOkaBNaJXDVgfwanx496eX6sCRZn+FovQb6bIM4s4VrFQqZ4HF+B7RhgcNVxE4C10CUw/TttAf8r+0Q8tYz0YLn9PK0kXGeiq0tOOijpMm6GnnlcEIuZ/G/nT3MvmM3cWQ5V3RnEEvRnICIjakbbGKhhq4xz0+qYml8oUaaffBD1zB7L5Xx1jK2V3yjC/nBkztfE+Jw7z5dSC5ZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rcCUI5uWmX3KBPnNjDCJtytod74+7velJJnLKEUL42Y=;
 b=DLfvGMCPDMAd8rRKJybtkvjpXNAHYZH5iztKw67/tktcHCLAy7swa8xV0B/Rrf22nznWZbtVZ+C2HCnCzBrSHozcCyEzAhdQzKu2igtKdG4dneuoBHLy4h0u3+/hMy34ruTU9kvlOlHOOTHXuky8VaanuUruaDgYcaUSZVqohuuuy57WvQuneTxh475/BenumxU8qtIS/JkCPWT6MqviJqUwKjam2XjRX60x3NKYz2YQIBse/SHgo8OWGuM4DaeyEyKE602BuQvmJBjR2eNzmtcAwXSX6OgDGcdpzdRIN3QRlUtuqnJkNf3FIBcVftJ3Ij61zja5QvG7URaBO29hZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rcCUI5uWmX3KBPnNjDCJtytod74+7velJJnLKEUL42Y=;
 b=BMyok6OSAjEc5Z5VUw24mYIL/Sn80If2h/URnC2DSMbEVgFVYu+j32C/qe6hzTL7+EMlXB7dr7HioGbrj9y7AgI25GOI0PacVlumrXxBUv1V+s60Gzo/Vj7L4LfpukCWhPClAbpcVfacY4b3vNoBFvvc4WFbuR7PR5dmC57ANYuJcJENn+QORzd6OtCL2ThvlwLvK3GxEwRa+HDfRz9/GKCgU8q8her3YROAXn9bq+nBPnMKENVFKr/7HKJk8KSJxIa8wBaKtVUbLf5sqlj+c64BgOnPwbsS3wW6gqQvW1npnR4oQkg+RjhZ9Z7HKVQMGxbLmAin8svlRtfRTurriA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <992a28e1-bfcd-97ef-b3d5-c7341846b3ad@suse.com>
Date: Thu, 12 Jan 2023 10:17:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] include/compat: produce stubs for headers not otherwise
 generated
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0114.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

Public headers can include other public headers. Such interdependencies
are retained in their compat counterparts. Since some compat headers are
generated only in certain configurations, the referenced headers still
need to exist. The lack thereof was observed with hvm/hvm_op.h needing
trace.h, generation of which depends on TRACEBUFFER=y. Generate empty
stubs in such cases (as generating the full headers is relatively slow
and hence better avoided). A change to .config followed by an
incremental (re-)build is covered by the respective .*.cmd file no
longer matching the command to be used, which results in the necessary
re-creation of the (possibly stub) header.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
There may be differing views on which commit this actually fixes, hence
I'd prefer to omit a Fixes: tag here. The issue was exposed by
4c5edd2449bc ("xen: Drop $ARCH/trace.h").
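
As an editorial aside, the stubbing idea can be sketched in shell (file
names illustrative, mirroring the TRACEBUFFER=n / XENOPROF=n case; the
echo matches what cmd_stub_h in the patch below writes):

```shell
# Sketch: a configuration that does not generate compat/trace.h still
# needs the file to exist for "../trace.h" includes from other compat
# headers to resolve, so an empty stub is produced instead of running
# the (slow) full generation.
mkdir -p compat
for h in trace.h xenoprof.h; do      # stand-in for $(headers-n)
    echo '/* empty */' > "compat/$h" # what cmd_stub_h emits
done
cat compat/trace.h
```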

--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -34,6 +34,8 @@ headers-$(CONFIG_TRACEBUFFER) += compat/
 headers-$(CONFIG_XENOPROF) += compat/xenoprof.h
 headers-$(CONFIG_XSM_FLASK) += compat/xsm/flask_op.h
 
+headers-n := $(filter-out $(headers-y),$(headers-n) $(headers-))
+
 cppflags-y                := -include public/xen-compat.h -DXEN_GENERATING_COMPAT_HEADERS
 cppflags-$(CONFIG_X86)    += -m32
 
@@ -43,13 +45,16 @@ public-$(CONFIG_X86) := $(wildcard $(src
 public-$(CONFIG_ARM) := $(wildcard $(srcdir)/public/arch-arm/*.h $(srcdir)/public/arch-arm/*/*.h)
 
 .PHONY: all
-all: $(addprefix $(obj)/,$(headers-y))
+all: $(addprefix $(obj)/,$(headers-y) $(headers-n))
 
 quiet_cmd_compat_h = GEN     $@
 cmd_compat_h = \
     $(PYTHON) $(srctree)/tools/compat-build-header.py <$< $(patsubst $(obj)/%,%,$@) >>$@.new; \
     mv -f $@.new $@
 
+quiet_cmd_stub_h = GEN     $@
+cmd_stub_h = echo '/* empty */' >$@
+
 quiet_cmd_compat_i = CPP     $@
 cmd_compat_i = $(CPP) $(filter-out -Wa$(comma)% -include %/include/xen/config.h,$(XEN_CFLAGS)) $(cppflags-y) -o $@ $<
 
@@ -69,6 +74,13 @@ targets += $(headers-y)
 $(obj)/compat/%.h: $(obj)/compat/%.i $(srctree)/tools/compat-build-header.py FORCE
 	$(call if_changed,compat_h)
 
+# Placeholders may be needed in case files in $(headers-y) include files we
+# don't otherwise generate.  Real dependencies would need spelling out explicitly,
+# for them to appear in $(headers-y) instead.
+targets += $(headers-n)
+$(addprefix $(obj)/,$(headers-n)): FORCE
+	$(call if_changed,stub_h)
+
 .PRECIOUS: $(obj)/compat/%.i
 targets += $(patsubst %.h, %.i, $(headers-y))
 $(obj)/compat/%.i: $(obj)/compat/%.c FORCE


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 09:27:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 09:27:50 +0000
Date: Thu, 12 Jan 2023 09:27:34 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, George Dunlap
	<George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Multiple reverts] [RFC PATCH] build: include/compat: figure out
 which other compat headers are needed
Message-ID: <Y7/Shv1qyi0jgrai@perard.uk.xensource.com>
References: <20230111181703.30991-1-anthony.perard@citrix.com>
 <5c7ffbe4-3c19-d748-9489-9a256faebb7a@citrix.com>
 <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>

On Thu, Jan 12, 2023 at 08:46:23AM +0100, Jan Beulich wrote:
> On 11.01.2023 23:29, Andrew Cooper wrote:
> > In file included from arch/x86/hvm/hvm.c:82:
> > ./include/compat/hvm/hvm_op.h:6:10: fatal error: ../trace.h: No such
> > file or directory
> >  6 | #include "../trace.h"
> >  | ^~~~~~~~~~~~
> > compilation terminated.
> > make[4]: *** [Rules.mk:246: arch/x86/hvm/hvm.o] Error 1
> > make[3]: *** [Rules.mk:320: arch/x86/hvm] Error 2
> > make[3]: *** Waiting for unfinished jobs....
> > 
> > 
> > All public headers use "../" relative includes for traversing the
> > public/ hierarchy. This cannot feasibly change given our "copy this
> > into your project" stance, but it means the compat headers have the same
> > structure under compat/.
> > 
> > This include is supposed to be including compat/trace.h but it was
> > actually picking up x86's asm/trace.h, hence the build failure now that
> > I've deleted the file.
> > 
> > This demonstrates that trying to be clever with -iquote is a mistake.
> > Nothing good can possibly come of having the <> and "" include paths
> > being different. Therefore we must revert all uses of -iquote.
> 
> I'm afraid I can't see the connection between use of -iquote and the bug
> here.

Me neither, -iquote isn't used on that object's command line.
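
As an editorial illustration of the include-resolution point above (all
paths hypothetical): a quoted include such as "../trace.h" is searched
relative to the directory of the *including* file first, so a compat
header is meant to pick up its compat sibling, not asm/trace.h.

```shell
# Re-create the layout in miniature: the compat header tree mirrors the
# public one, and compat/hvm/hvm_op.h includes "../trace.h".
mkdir -p include/compat/hvm include/asm
printf '#include "../trace.h"\n' > include/compat/hvm/hvm_op.h
printf '/* compat trace */\n'    > include/compat/trace.h
printf '/* asm trace */\n'       > include/asm/trace.h
# A quote-include is first resolved against the includer's directory:
includer=include/compat/hvm/hvm_op.h
resolved="$(dirname "$includer")/../trace.h"
cat "$resolved"
```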

> > But, that isn't the only bug.
> > 
> > The real hvm_op.h legitimately includes the real trace.h, therefore the
> > compat hvm_op.h legitimately includes the compat trace.h too. But
> > generation of compat trace.h was made asymmetric because of 2c8fabb223.
> > 
> > In hindsight, that's a public ABI breakage. The current configuration
> > of this build of the hypervisor has no legitimate bearing on the headers
> > needing to be installed to /usr/include/xen.
> > 
> > Or put another way, it is a breakage to require Xen to have
> > CONFIG_COMPAT+CONFIG_TRACEBUFFER enabled in the build simply to get the
> > public API headers generated properly.
> 
> There are no public API headers which are generated. The compat headers
> are generated solely for Xen's internal purposes (and hence there's also
> no public ABI breakage). Since generation is slow, avoiding the
> generation of ones not needed during the build is helpful.

If only we could make the generation faster:
    https://lore.kernel.org/xen-devel/20220614162248.40278-5-anthony.perard@citrix.com/
    That patch takes care of the slower part of the generation (slower,
    at least, for some compat headers).

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 09:32:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 09:32:08 +0000
Message-ID: <50fd3ca1-c1ca-a21e-8211-ed1ce3353555@suse.com>
Date: Thu, 12 Jan 2023 10:31:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 7/9] x86/shadow: reduce effort of hash calculation
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <4785b34b-2672-e3a8-8096-df1365b6b7b8@suse.com>
 <28488645-1cf7-cb9b-ca03-f060f7947156@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <28488645-1cf7-cb9b-ca03-f060f7947156@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 12.01.2023 01:02, Andrew Cooper wrote:
> On 11/01/2023 1:56 pm, Jan Beulich wrote:
>> The "n" input is a GFN/MFN value and hence bounded by the physical
>> address bits in use on a system. The hash quality won't improve by also
>> including the upper always-zero bits in the calculation. To keep things
>> as compile-time-constant as they were before, use PADDR_BITS (not
>> paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
>>
>> While there also drop the unnecessary conversion to an array of unsigned
>> char, moving the value off the stack altogether (at least with
>> optimization enabled).
> 
> I'm not sure this final bit in brackets is relevant.  It wouldn't be on
> the stack without optimisations either, because ABI-wise, it will be in
> %rsi.

Without optimization, whether an inline function is actually inlined is
unclear. When it is, what register (or stack slot) an argument lives in
is simply unknown. When it is made an out-of-line function, the compiler
may very well spill register arguments to the stack first thing, just to
make all arguments (everywhere, not just in this function) consistently
live in memory.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

Jan
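
As an editorial note, the loop-bound arithmetic behind the quoted "8 to
5" can be sketched as follows. The constants are assumptions for x86-64
(not taken from the patch itself): a GFN/MFN is a frame number, so it
occupies at most PADDR_BITS - PAGE_SHIFT bits, and a byte-wise hash need
not visit the remaining always-zero bytes of an unsigned long.

```shell
# Assumed x86-64 values; the 8 -> 5 reduction matches the commit message.
PADDR_BITS=52     # physical address width
PAGE_SHIFT=12     # 4k pages
old_iterations=8  # sizeof(unsigned long)
new_iterations=$(( (PADDR_BITS - PAGE_SHIFT + 7) / 8 ))
echo "$old_iterations -> $new_iterations"
```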


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 09:33:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 09:33:55 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175734-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175734: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-i386:guest-start/freebsd.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fd42170b152b19ff12121f3b63674e882c087849
X-Osstest-Versions-That:
    xen=e66d450b6e0ffec635639df993ab43ce28b3383f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 09:33:50 +0000

flight 175734 xen-unstable real [real]
flight 175738 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175734/
http://logs.test-lab.xenproject.org/osstest/logs/175738/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-i386 21 guest-start/freebsd.repeat fail pass in 175738-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175726
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175726
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175726
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175726
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175726
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175726
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175726
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175726
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175726
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175726
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175726
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175726
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fd42170b152b19ff12121f3b63674e882c087849
baseline version:
 xen                  e66d450b6e0ffec635639df993ab43ce28b3383f

Last test of basis   175726  2023-01-11 16:10:12 Z    0 days
Testing same since   175734  2023-01-12 01:53:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e66d450b6e..fd42170b15  fd42170b152b19ff12121f3b63674e882c087849 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 09:37:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 09:37:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475932.737840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFu1i-0000oQ-Qa; Thu, 12 Jan 2023 09:37:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475932.737840; Thu, 12 Jan 2023 09:37:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFu1i-0000oJ-Nq; Thu, 12 Jan 2023 09:37:38 +0000
Received: by outflank-mailman (input) for mailman id 475932;
 Thu, 12 Jan 2023 09:37:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFu1h-0000oD-CY
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 09:37:37 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2055.outbound.protection.outlook.com [40.107.14.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id be8e1c19-925c-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 10:37:35 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9669.eurprd04.prod.outlook.com (2603:10a6:10:316::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 09:37:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 09:37:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be8e1c19-925c-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <958e4f6e-9a2d-850b-3663-40c777932e03@suse.com>
Date: Thu, 12 Jan 2023 10:37:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 8/9] x86/shadow: call sh_detach_old_tables() directly
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <8ed7a628-f64e-5512-efdb-4116a7b88a1d@suse.com>
 <722642fa-4eb0-930a-9755-f7780da65eec@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <722642fa-4eb0-930a-9755-f7780da65eec@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0119.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9669:EE_
X-MS-Office365-Filtering-Correlation-Id: c49402e0-6ea5-4b96-c857-08daf480a176
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c49402e0-6ea5-4b96-c857-08daf480a176
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 09:37:33.0597
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gt2Fk5dCX8UWvoUsfz24FV6A4/u+hJ2pplwAT8S/oN7zrZ9o1jAAHlIla/O3NQrSAytz1qoEZJ62nCL2V+1XfQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9669

On 12.01.2023 00:56, Andrew Cooper wrote:
> On 11/01/2023 1:57 pm, Jan Beulich wrote:
>> --- a/xen/arch/x86/mm/shadow/common.c
>> +++ b/xen/arch/x86/mm/shadow/common.c
>> @@ -2264,6 +2264,29 @@ void shadow_prepare_page_type_change(str
>>      shadow_remove_all_shadows(d, page_to_mfn(page));
>>  }
>>  
>> +/*
>> + * Removes v->arch.paging.shadow.shadow_table[].
>> + * Does all appropriate management/bookkeeping/refcounting/etc...
>> + */
>> +static void sh_detach_old_tables(struct vcpu *v)
>> +{
>> +    struct domain *d = v->domain;
>> +    unsigned int i;
>> +
>> +    ////
>> +    //// vcpu->arch.paging.shadow.shadow_table[]
>> +    ////
> 
> Honestly, I don't see what the point of this comment is at all.  I'd
> suggest just dropping it as you move the function, which avoids the need
> to debate over C++ comments.

As said in the remark, this style of comments is used elsewhere as well,
to indicate what data structure a certain piece of code in a function is
updating. Earlier on the function here also played with
vcpu->arch.paging.shadow.guest_vtable, at which point having such comments
was certainly not entirely useless.

> Preferably with this done, Acked-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

Thanks. I guess I'll drop it then; should the function become more
involved again, we could clearly resurrect comments in whatever shape is
then deemed best.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 09:47:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 09:47:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475938.737851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFuBE-0002Je-NP; Thu, 12 Jan 2023 09:47:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475938.737851; Thu, 12 Jan 2023 09:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFuBE-0002JX-Ke; Thu, 12 Jan 2023 09:47:28 +0000
Received: by outflank-mailman (input) for mailman id 475938;
 Thu, 12 Jan 2023 09:47:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFuBE-0002JR-2L
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 09:47:28 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2066.outbound.protection.outlook.com [40.107.6.66])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ed35518-925e-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 10:47:26 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9354.eurprd04.prod.outlook.com (2603:10a6:10:36c::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 09:47:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 09:47:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ed35518-925e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U3kU3/K17peqCx/MgJ7ZCFWAjEM+gleocgwPRMQS8WgxMZgR8WDXu+VGKxRQCJbqcmK7Zjzco3KcNGmN9vwpgsCiujqs6/wz1yMSYUoilw5Md6Ovi0UzPC9deIfWJUVD2eCzNjjUg0C8OBHtp5OnsOc2ITnhHCKWJ9MMsStbHNg7VyrVH8xp0yM6XXY0oxTpdQd3RxvZvV7rmne+8MiHykGlPP08Fcks7lWjIzFjIqoCzrr+SlHSANwFPDF4ZV+QO/2A/qekOHUWM7hQJXaiiIiBFgAnfzVSpzHOgKuw5qlBGDOxolJq7OUZb3mdU+YGn9vszEPE45j3tx5ni1PxrQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=a+eX77cSMdawL4nZwmXFSn5kJuDxSijDU8TSMLh4aOU=;
 b=Vj2T75Y6kJC8vFFoWvW1lPJX8fARo2Gwy46zGK7ECriCfIw+btZzog4YY1ar0dOuK+loyDnucw/7rc/SXieSdigHiXP6/AJ9SvBEKz07ff7anp9D1Iwx26VIgR7XUyDCGKdC7uenOafgYLvB6CjEpuv9kifpkA3wo219QTi/B+pHsZpNMXH41qBfYwxJ+rb0LJW0bRVwBq3FRsuLwsJZ+Q22Z5thoBLBs/F1+Ov5ZEklGz5sfpduQoFX6jZDnUC5uSEAJnoAIQqgGOJ5qEMu/LMR+5QCQZ83km4J9GXme/j27RX1FSPCryZ1Ic2ln1JWQZfh0fSeiOTbJUBFwBtpDw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a+eX77cSMdawL4nZwmXFSn5kJuDxSijDU8TSMLh4aOU=;
 b=TSRRFsi7ywMB8QbrvVw9MTI78UT+01Rg+tAe1ceFaDK2KpoiQGqwqUfJzl7ApICnHWvpoWXsvVzDIKOiRnMDLPck1TfGBfFvBFBz49974a3JTw3pTCEsY0xIFoZyGtBW6547dJ93mHvU9KVtEZc8Eh4O+azT1J2mUcNbB0veh6Sxt0Z9lUXXf8JdLzLG0RjsZLhKesL//qr1DiACgek3z72yK4NdeF7AG0yDqljsb3DD3gcg5zH5jjWq6GFBpsH7TXiN7evLMVZh57+lWUIVZpIaC2b3hNBrPmrs/HX/s2Ejxg3uLXUuD8tiqMuYhYI/MRkkvq/iFdporpSQYP5qWA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a438f16b-7d45-d7e4-2191-4ed7b2077785@suse.com>
Date: Thu, 12 Jan 2023 10:47:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
 <c5d201ac-89ca-6baa-d685-5bef2497183f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c5d201ac-89ca-6baa-d685-5bef2497183f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0148.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9354:EE_
X-MS-Office365-Filtering-Correlation-Id: 98776e95-58f4-4b2a-5320-08daf482018e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	i8U4sVuMCz19Juq6VjpHen1TNAHw/QDPgdKb0tVrFWL/fluvKV3plclQVSr1Oc/m2p1EZSfWnmFXuPJB+/i3docE1MyNB/m+FKAMI66BMFTWg/gzXjEHVt4hD6+FKlB4GglLHBgoB/l0C1ihJniUtL7PgxP6Y12Qt5RJCwv5bNPPK1HZFhap1CuLL90s8c7W6iOXsEqHwAviwSgFt+c9L7K3S5sHIoUenT5Ns4EB3uIz9hpOZf0o24Cjxdta98iUABgcFaM9p4canR7+TemKV2Ks22zXnFaSoVSdmDVlCLjZuqFPxPVA8CdrB7Sk+iG7aOTxFX3bQHQ22UTZwS9Vti3a+RI/1jblvFafbjDW/Vdn0GZ6dTh0UHRW9pS7ong+meUVc2Rp4Qzg7+Uwrp25RcWBm2Tk+eodqnMjEMxieZ5CVlA4GEAxVENft2H6tbjmEJiiSEcA9aL0dGShsfgIAOJMI3lFZJAPbDjeeeV6U6js6Ald0nIbdycJ9JieNDpvW/qVvIxShfkL0g4mqM68t94J2Gv4OT04f6hQSPmy8IlvDNomA7v9+euceZ49GdIaZlOwvmFlMg1pkEHSrkzJYUuuK6zACw2mR0Whxn+ZILTMbigyuI1lJGEGF25a1bVhJbS11UagkOF84ntNmKkiok2sK7zWRPuzfbBcp41AwXLA6r6juWz3GWn/uj5oxxM4x0M4pyVlghsOnvUi3r7efVXr7rbdxLVssy6AoOieiqM=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(136003)(376002)(346002)(366004)(396003)(39860400002)(451199015)(8936002)(26005)(5660300002)(2906002)(41300700001)(66556008)(4326008)(316002)(8676002)(6916009)(66946007)(66476007)(54906003)(6512007)(38100700002)(2616005)(31686004)(86362001)(186003)(31696002)(83380400001)(53546011)(36756003)(6506007)(478600001)(6486002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?U2F6dDZVbStpK1hhY2VoM0VubnFRTnZiNVpUNGhxM1BFcGxDSjZ1Njh4Y1Bu?=
 =?utf-8?B?eFBZZXZ4S0EwelhXWWVBc3hXaEtQQ3BwWWJyMUlJczVyZ3FvcktVbzkxUkJl?=
 =?utf-8?B?QzZhZWRZNWRER2FHb3pnZzU3RzlqWVIwS2NCbFRYUTE5ZWJYNjNBN043QUdD?=
 =?utf-8?B?ZWs2K21sNi9rZGFTSjRFVWc5SVZrTnJJdmFvQVZCMFVWR2ZuNDNlNlBNOHZT?=
 =?utf-8?B?bm9nUmZxdklscDVVa2sxd2FuUmlWZ3VaWTBiaFc3c0NtUS9oVnZKaUM4bWVG?=
 =?utf-8?B?djgzM1dSWHFHYTFOdVNmb2tBUDAxUXNrRGJQUGFzcncxdnV2Nk0vKzh6bnpG?=
 =?utf-8?B?V2tQODdIOEpQMzJsNmV5d210My8rcU9TOFhqLzl1Vzh1UnFxRjEwbS9TNTZB?=
 =?utf-8?B?dkpja3BxUDhUQ2lBVUFmMUVqNEdQM0RBbHVrMGxaeTNQdTlSZzFjTG5rTUtO?=
 =?utf-8?B?aGxiQytDQStqMGtSMWxDbThUbUJhQjdrczRmb2VBOW9FMnJ6bzFaQlRRRTl0?=
 =?utf-8?B?MGxVSFAyMFNYeFhzUEsrR0crVGJtTVVyY0pGcTl1dmM4a1luMmdYMEJKaW8v?=
 =?utf-8?B?RzEwYTFyNlppLzdpREtNcDNrSU5OWnF3VXhBSmRrc0YwT0d4MmdUT01na3Z1?=
 =?utf-8?B?czRaZ1JaSndieVJwdmdnenZDL0t3UGo3OTlGM0prcHhHdzBRVzN3ZnowSFVS?=
 =?utf-8?B?NHdDSVhaRTZyOUxDWXgzbDg2ZkNqdXArZm5nNXU1aFFUSGZyeWh3ZkdTUnZk?=
 =?utf-8?B?NmRUMEhRTFJMc3ZocHQxTnh0QTA0OXR2bGIrN3ZPcXo4WEowcXNZbHNmaU4w?=
 =?utf-8?B?Q280N05YRnUybVE0cDd2cXpvb3JJVzVpSHFoTHo0aHVFOExJZFNHSEtJUkx5?=
 =?utf-8?B?S1UxSEloc1RsOEdKR0dVeGJGYmdKSlBFYk1KNVg1SS9VamI3M2F6bzJKY1M5?=
 =?utf-8?B?ay92Y3NxK0VScWQxNUFMeGJwTittSVN4enlESFdTZ0FGZ2NjZ1dBUkFxMk01?=
 =?utf-8?B?YUJmZTZkeGNwTENHMGNXa3ZqYjM0NlM4YWRVcWxja1VBMm5kRjhxNWUrRHJB?=
 =?utf-8?B?R3FYcldZemhCdU1lazlvcmpwQUNZeDFhc2oxaVRGVEtlOUZhNDd5blpzeFVY?=
 =?utf-8?B?cGhpMDBkZEQ2NUR4Z2RuVHczUVZNRDdGZmwrUlJza2t1cWRMVkUyaGtEWERR?=
 =?utf-8?B?ZzY4Qmo1RFZSSWZpb0xiT2tkSVBNVXBtSGg3ZnJMUGJBWFhIS2xtZHp6ZFhx?=
 =?utf-8?B?ZlVSV0NiMm1vaEp4VS8wUHZ5S01XalEvYVRCUlJYb3pBUVRtaVpGUFNUc0o1?=
 =?utf-8?B?ZzNJMG55bHFYOFIvNGJucTNjTGJDaVJaMktJOXhMc1lCZlZmWTlBOUYwOG4v?=
 =?utf-8?B?UGpmaUhpOXMycUNBVitmbUYzSG1zY2tDa2VrOUpxNHVFOU43TS9EWGp1T25G?=
 =?utf-8?B?OGFtb21kSVFZeW80bG5yNjdLaU5SQ2hwUkpST21TWFNsTDQ1Nk4wVU54M29l?=
 =?utf-8?B?MUhWSVp5cFFkV2FoajJDZysrL0VhOXR0OExpR1gzRjdWSTFIRkEybGZDWXJr?=
 =?utf-8?B?cTNvdW0rNTk1SlEyYmg5YkVKSEtYVXJ3cGI2Wk92QmVDMU9tVnBCYm9vSktW?=
 =?utf-8?B?azgwVkJkcEJMeTNhS29EeVZGcXhTN0dNSUxxYUtEdnowZDJjazJkL1NlalhW?=
 =?utf-8?B?czVpcFpkYUpDRWdZUUxWb0hLWHFKYURCNVJ3bDY2UVh3MjFvVlIwblprbnox?=
 =?utf-8?B?ZjZJb2d0S3BkM0FWZ2xFOUlERmsrNFkySDJRcUxQNlpuM0pJMDhaMlpVRitT?=
 =?utf-8?B?dnRKNGt0Mk5UcXozVnRSd1V4V2E1NklaZHZTbEhGeWtpZFM3MndkVmsrWDBY?=
 =?utf-8?B?Tis2UHVacW51VlRseU9pRDdUV0E2VkJsaXVnTDl1eThjcU4yR0N5enpTa3Rp?=
 =?utf-8?B?SGNXUHo2MUdZU0o5UkNiN0gzZkdtaGV0cEJDNyt6R0JJMzZoQzZIU2c1S01J?=
 =?utf-8?B?MnVBazJtS21FL2R4eGdOK280N1pNQzlJK0RGeHRhVnNEN0d2SGxkcDRpc05Y?=
 =?utf-8?B?eW9BSC95MEppRy93ZmhMeFRQem1GdDgwT2xVNG51TG9QelV5U2toNVRvZ2dU?=
 =?utf-8?Q?WgD0piJ5M3FxOHWzDythdP6gY?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 98776e95-58f4-4b2a-5320-08daf482018e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 09:47:23.6939
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jIKv8uldeje7ySxPXiz+QR02rgtYRQ5/lvNJZvGX/ykNUo8dSyJnXxrO0O+rRLiV6fmL8z9j3ThV15QrldkzLQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9354

On 12.01.2023 00:15, Andrew Cooper wrote:
> On 11/01/2023 1:57 pm, Jan Beulich wrote:
>> Make HVM=y release build behavior robust against array overrun, by
>> (ab)using array_access_nospec(). This is in particular to guard against
>> e.g. SH_type_unused making it here unintentionally.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: New.
>>
>> --- a/xen/arch/x86/mm/shadow/private.h
>> +++ b/xen/arch/x86/mm/shadow/private.h
>> @@ -27,6 +27,7 @@
>>  // been included...
>>  #include <asm/page.h>
>>  #include <xen/domain_page.h>
>> +#include <xen/nospec.h>
>>  #include <asm/x86_emulate.h>
>>  #include <asm/hvm/support.h>
>>  #include <asm/atomic.h>
>> @@ -368,7 +369,7 @@ shadow_size(unsigned int shadow_type)
>>  {
>>  #ifdef CONFIG_HVM
>>      ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
>> -    return sh_type_to_size[shadow_type];
>> +    return array_access_nospec(sh_type_to_size, shadow_type);
> 
> I don't think this is warranted.
> 
> First, if the commit message were accurate, then it's a problem for all
> arrays of size SH_type_unused, yet you've only adjusted a single instance.

Because I think the risk is higher here than for other arrays. In
other cases we have suitable build-time checks (HASH_CALLBACKS_CHECK()
in particular) which would trip upon inappropriate use of one of the
types which are aliased to SH_type_unused when !HVM.

> Secondly, if it were reliably 16 then we could do the basically-free
> "type &= 15;" modification.  (It appears my change to do this
> automatically hasn't been taken yet.)  But we'll end up with the lfence
> variant here.

How could anything be "reliably 16"? Such enums can change at any time:
They did when making HVM types conditional, and they will again when
adding types needed for 5-level paging.

> But the value isn't attacker controlled.  shadow_type always comes from
> Xen's metadata about the guest, not the guest itself.  So I don't see
> how this can conceivably be a speculative issue even in principle.

I didn't say anything about there being a speculative issue here. It
is for this very reason that I wrote "(ab)using".

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 09:47:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 09:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475939.737862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFuBL-0002aE-03; Thu, 12 Jan 2023 09:47:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475939.737862; Thu, 12 Jan 2023 09:47:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFuBK-0002a5-SQ; Thu, 12 Jan 2023 09:47:34 +0000
Received: by outflank-mailman (input) for mailman id 475939;
 Thu, 12 Jan 2023 09:47:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pFuBJ-0002Z7-6h
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 09:47:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFuBE-0003vp-Gv; Thu, 12 Jan 2023 09:47:28 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pFuBE-0008Ef-92; Thu, 12 Jan 2023 09:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Bs3VulGjSb/JK4kg4WXicdltLKKVB699Ov+/7rXfTI4=; b=Xxqt8VT0J7UimeZLqZEbSmXnYM
	TNpO7edn4KvvVQzxI9s5rLgvC+G5TJcY7O4JllikI7/kuFnvYgajSGD+osZd4RBsORqB+ZKaR7D+W
	KIsCOn+G/bddOGYsRezbGhaaHfaXACQyEs9ysf3kxFeMGlCfR5Pu9SCZlnCeFog0DmtU=;
Message-ID: <71e806f4-8bd2-dd47-67b9-958bb9061c7b@xen.org>
Date: Thu, 12 Jan 2023 09:47:25 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>, Wei Chen <Wei.Chen@arm.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-4-wei.chen@arm.com>
 <9fd67aa2-0bd5-16a2-1e19-139504c2090f@suse.com>
 <PAXPR08MB7420A4E3DA252F9F37450EDA9EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <ee06b9a7-bfc7-e6f1-f2f6-f73a1fb42d6d@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <ee06b9a7-bfc7-e6f1-f2f6-f73a1fb42d6d@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 12/01/2023 08:11, Jan Beulich wrote:
> On 12.01.2023 07:31, Wei Chen wrote:
>>> -----Original Message-----
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: 2023年1月11日 0:47
>>>
>>> On 10.01.2023 09:49, Wei Chen wrote:
>>>> --- a/xen/arch/arm/include/asm/numa.h
>>>> +++ b/xen/arch/arm/include/asm/numa.h
>>>> @@ -28,6 +28,20 @@ enum dt_numa_status {
>>>>       DT_NUMA_OFF,
>>>>   };
>>>>
>>>> +/*
>>>> + * In the ACPI spec, 0-9 are reserved values for node distance,
>>>> + * 10 indicates the local node distance, and 20 indicates a remote
>>>> + * node distance. Node distance maps set in device tree follow
>>>> + * ACPI's definition.

Looking at the ACPI spec, I agree that the local node distance is 
defined to 10. But I couldn't find any mention of the value 20.

How is NUMA_REMOTE_DISTANCE meant to be used? Is this a default 
value? If so, maybe we should add "DEFAULT" to the name.

>>>> + */
>>>> +#define NUMA_DISTANCE_UDF_MIN   0
>>>> +#define NUMA_DISTANCE_UDF_MAX   9
>>>> +#define NUMA_LOCAL_DISTANCE     10
>>>> +#define NUMA_REMOTE_DISTANCE    20
>>>
>>> In the absence of a caller of numa_set_distance() it is entirely unclear
>>> whether this tying to ACPI used values is actually appropriate.
>>>
>>
>>  From Kernel's NUMA device tree binding, it seems DT NUMA are reusing
>> ACPI used values for distances [1].
> 
> I can't find any mention of ACPI in that doc, so the example values used
> there matching ACPI's may also be coincidental. In no event can a Linux
> kernel doc serve as DT specification. 

The directory Documentation/devicetree is the de-facto place where all 
the bindings are described. This is used by most (not to say all) users.

I vaguely remember there were plans in the past to move the bindings out 
of the kernel, but it looks like this has not been done. Still, they tend 
to be reviewed independently from the kernel.

So, as Wei pointed out, if we don't follow them then we will not be able 
to re-use shipped Device-Trees.

> If values are to match ACPI's, I
> expect a DT spec to actually say so.

I don't think it is necessary to say that. Bindings are not meant to 
change, and a developer can rely on the local distance value not 
changing with the current bindings.

So IMHO, it is OK to assume that the local distance value is the same 
between ACPI and DT. That would definitely simplify the parsing and code.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 10:04:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 10:04:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475950.737872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFuRi-0005ME-9p; Thu, 12 Jan 2023 10:04:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475950.737872; Thu, 12 Jan 2023 10:04:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFuRi-0005M7-73; Thu, 12 Jan 2023 10:04:30 +0000
Received: by outflank-mailman (input) for mailman id 475950;
 Thu, 12 Jan 2023 10:04:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFuRg-0005M1-SY
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 10:04:28 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2084.outbound.protection.outlook.com [40.107.105.84])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7fb6baa2-9260-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 11:04:27 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8296.eurprd04.prod.outlook.com (2603:10a6:20b:3f2::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 12 Jan
 2023 10:04:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 10:04:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fb6baa2-9260-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EqR63RmIp6gVTn3ktjENM1dhrnlCeFRojk6W10Q2TQkY/e6nwPHuQAHhQJZkCQvdCK+PDbEBNz0qUuyLFVaQaI9kL8tM+dwT/eQj4JPiZACp55eKz5sAbUJfmXso6+dVBb4cO3UlpddEdPU8wWImlXxx31q30no5iexlVgyyRiOvwLVafb+qBL8W9e/VjJrN08qI+MagvyqoonkX7NQMuGPuZAz3ii1TF45q9/zoNPhNvGPoA+bPtcnWbqJuS60da4zM94UfliwkCPuvYwm0Kilem3zw1dxoX2imNunf515RvL2SkijTXkArolJq0mHLUefoHZDrJVXbRxHM4MJn8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QgD9SrKU0oo0baPtu3i+s2mTRlx/++FDvuMpUVol4ro=;
 b=IewZvXYDF5GVWq3k8p0dQf3ysQdAvAjpC+S5Rpp9WG/oLBt7c0zT+tTM+dlVngd0Uk5bMRyaLWw2tA85vSWiDTc96YIbD0P1ARhIqkHxvbdVEFT+aVkBISqapKKV+XnQg0se63+Ap1E6AJHbB9KKgw64kdkyaeQkZtMLyq7mSE1aBkJqX8Fhu0iZQ7yTMoauuoVMZtdJy99Wup8eU7xzQoyIO8acllsXpFohCsxExiwXVxfGk3+EqjBq7cJk6pyRZt2qzeCiDbNRtrf8JMHTvZzxnCmzfob2Wx2X2P6AjmGCaaw8sTUQi4x3kuBlR4hhusKyYhnUs89VV8kQBqPhZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QgD9SrKU0oo0baPtu3i+s2mTRlx/++FDvuMpUVol4ro=;
 b=NyG91mPXeMRa/3rbFtcizpWc4T8SzeDp314aGHDEz333ZLKJqTwbrFRyyqM/UScQOVmFtk6bQzy0jhdKii57Or2lG4g8CYzvmB1sKHEO8r5MFCwpcUiAQpAYT9SJJlvBER48GRDoE12d9NqCrsger1XZZHIWQvlVW7duILiwBGjJ3/lr5KPhPSX0bpHIw6LI/qcZVpP3k5qBi8n+Z9VfgrysDFRdp3bCnqyibLeXSLid7HweU272menbr6dx8RnZKn+HIgRjeULApSr31Ky6/GNUT+k+vdugE4o3fJrSBQiZ1orcX3HIl8tYXkTYmv1QETJ6+1fe9RHQ3gSzcp0R3g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3729733d-fa73-5221-4922-3abe40ffbbb2@suse.com>
Date: Thu, 12 Jan 2023 11:04:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [Multiple reverts] [RFC PATCH] build: include/compat: figure out
 which other compat headers are needed
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230111181703.30991-1-anthony.perard@citrix.com>
 <5c7ffbe4-3c19-d748-9489-9a256faebb7a@citrix.com>
 <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>
 <Y7/Shv1qyi0jgrai@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Y7/Shv1qyi0jgrai@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0012.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8296:EE_
X-MS-Office365-Filtering-Correlation-Id: 002118a1-0164-4a99-e757-08daf48462bd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Tbwk2N+alil9JCgcU0nc4sex94QmAeLu0JdCyETp4X+L/NwZO9Vjv8u2k87CiMQgW2Kmz9C56S/BvtAaK0i8UQEmGKys+n714q9pv5dRrXYV6ITQzGZ4jT5377l80EVYRCV9UfpxEIo5wYEaYXXnqsmq2aje/qMGR/fsSI//HUIfCRHs9e34JNb5uTCnYzQpXd6ywHGSgvnAoNBz16jyEl3Y99HjqFMN6T+b6Lp/aUd61IT1ZveFjz3A/C1NBxih51e8f87h8t6pPJqMsmMU4JQ/qnLJyHDREz5LHOo7JdOJWEKUdg51Z3QLlzDg94FVc8AgncmUL2ZEIj9k6UMCFvT6JOSqAwQ4BSHxoyHVLVEdNSya7qpYOqn0H+8EFizOUpi5/zG6Vd0nvm5pV4WNltXlPQ6ozGbUvP2TjWrSjQlNYscgKgVMwO7uIEkK84ZwbZ8cvVRI5brxUNUO32ox30dgjc33YDRAus+2eTd9lvdXGtSnsj7ER52tq9ZhQC42zZKNtcXn0OsCinTvv+kAylBuFZ0/ihnPp6YqEJ3blDriHZVR4RHqrPmTccWtskezRvTQDyCuRRVg/RDKfJMOqMFn2LWfGqk7jlW43hLsa0nMJhkfR3GLI5jxzorGCfWQjVqLYXfxqbL7co3nMbdxdkuW8p1Mv6S3CTyyCBVbElgmeNuMigdWLMsB396EYnoSfqZdmlsatZBTAvHBWDwKbNS4tkJ16DIYmoKLBXAcVJbQtgjr1nBeB+RgqcZUjHL1g+bsUZjghUcenwsvnWCeiQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(376002)(396003)(366004)(39860400002)(346002)(136003)(451199015)(5660300002)(8936002)(41300700001)(38100700002)(36756003)(2906002)(31696002)(86362001)(478600001)(6486002)(31686004)(966005)(26005)(6512007)(186003)(6506007)(53546011)(8676002)(6916009)(4326008)(66556008)(66476007)(316002)(66946007)(2616005)(54906003)(6666004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MGFITmJoRk9nL1B6dS9DejBuWnhBdkRLSGNhTVBUQk9rRGdmSEMzellsWWNk?=
 =?utf-8?B?UTJ3eTlEd2ttQ1g0SW8zNVh4K1ZjVHZmcE0rQ2diMDBhdHpVbXZZczZPb0sy?=
 =?utf-8?B?eU9sRkt0SnUrMEsycVlXL2ZxK1FvTk5GQS9Ia0FXYnQ2TVN1VGdSY3M0TFcy?=
 =?utf-8?B?UzZmc2lkL3h4cmtMVysxd2kvK1F0NVJPZWZFREdOV210dTdVQ1JEVldvUHVP?=
 =?utf-8?B?NUNMa1Vwc1llbHRuRHpzU2I1akJjQ250MDc2VFdFRkJZYVlWN1RNY0lZUytY?=
 =?utf-8?B?eWFWUUEvZkVxb1RCWmZycGl4UElxZU1oNE1ISlNqaEMzVXZnWkRnYVRBd0Fk?=
 =?utf-8?B?R3JqRzhPSmMrS0w4UzlvT085TGx2UkZDQ3hBYUgvMm0yQy9qdXlBSlV2M2V5?=
 =?utf-8?B?M2dxdmxvbVlBeVFXc0doeGU3REsyRzZZSThNNmFxU3RseFZLa01aSXp1NFhn?=
 =?utf-8?B?TU5pb1htQjIydlZOWE43bkxVQkEvQTJRbzA0VnBCREx4blhJN1J2a0hiamdH?=
 =?utf-8?B?V0Z3TEV0RjJWTUpBQlVZK2I0OXllM2lQNW5UMUJFcXkrdnRzaTIyejloNEpS?=
 =?utf-8?B?YUdsUDdtVENWRzNNZHlvTjM1Smh1cFNZTGIzYy9SNkhtSVBkaldSeDZiQnoz?=
 =?utf-8?B?andXZjYza2JLTm8wUzJSakJEVzNkKzNpZ0tTQzM3dzE5UUFSMzRXRGZ2Z3Nv?=
 =?utf-8?B?aWpCclpQbG4vKzczNVY4ZFgzZUp4Qmc2RE1xbzZXMnp6aDNWUGp3VTQrMDl2?=
 =?utf-8?B?QUxFYzdtWUQrdnZ1QmkvU1VIcGFVMUc3bmNPSmpmRVczdWxMVWJmSzB0WDVz?=
 =?utf-8?B?Q3VTeXpGbFU2bFIraXJvSThXa0dQdWI3TGlxY04xTlpXYnEzK0ltU3RyVTRi?=
 =?utf-8?B?dGhCVmVSSnZHbGQycmdBaUx5V3Y4T0Y4ZFAveW5HS1g0NDdOZmtXWVpwdEV3?=
 =?utf-8?B?bklLdmtOd0loUFc0eVNpT3FwWTVaaVFHY1paM2lwdnNYZU00Tlc1bG84VHJv?=
 =?utf-8?B?QjlYTno1Y1FqTk81Qy9BVzMvSlAvbjhjQ1RkbnFVbUxKVEpXRFVKNTJjTEt3?=
 =?utf-8?B?TXJqMlZSdnQ4b2JkUVpEZEFBM0Y2VHNWZmdMVGplYUkrdi95Sk9lSWpPQW5q?=
 =?utf-8?B?aXliRmxyYTNORHJ6b2dsa2xNcWcwKytPUWZGNUFMaVJyb3I3ZkF4NGRoZ0pS?=
 =?utf-8?B?STBIMi9HNHVsNFl3ZEhHZ200ZCtZdG9seW01S3RldWFYZ29rR0JXWEdyblNs?=
 =?utf-8?B?bVRaUm9xNlRpUW41c0R1TzBZWlJ0TnVWdG1SN21icERQaXFoWGNLN2VPby9n?=
 =?utf-8?B?dDlURzZabGFFRmlSZWdHem85ekhHbUp2R1dvV0tmYkZab3RMUzhNdkVyOFBF?=
 =?utf-8?B?WHB2ZkdteWhVamdueDAzTk9BSVRuc3A3aTd6a0FaKzAyb1ErRllQZFdmQ1pp?=
 =?utf-8?B?cEVGNGdMa0NwS05USGRQNGhSaEdlZllQRktTemFDYXB2U2xTL1FYZ0tja0Qz?=
 =?utf-8?B?b2VqQldjY2FRYWphSW4vb3M0eklUU1NvakNQY0FhODJRYjFQZGxjVlR5TXkv?=
 =?utf-8?B?M29obSttU2gxWVBjQVpoN1IwRWlWN3BJbzJEUVU2WXNEVWY5SGFFT2kyUk9m?=
 =?utf-8?B?NHl2bXVhSkpScE9JMXFtRDh2RFptbi9CSUZabk1mV1JIdmRTUksxZ1lzRnVF?=
 =?utf-8?B?OFRVMmRyKzFWVzF3ZnVTSHJGZXVhZ2ZTZVd5d2p1VmpLT2tDVlEwK01BSHBG?=
 =?utf-8?B?ZVVpQVZVaHZWbzZjMlMvM3ZLTE9pNUdjalVQazhjWXZhWE9UczFDYkt2S1JR?=
 =?utf-8?B?eWNkbmdIRzNaNmIrNVo2dWFGTm94VEZrQmd1azQ2ZXd6TGJWUk1acVNaMW9j?=
 =?utf-8?B?dHpzSDdBUXFVRFRmZm93bFBzdCtFbVNmVmwrYmxYTGs2L3Y1KzY5ZDlkc2ZL?=
 =?utf-8?B?MXBUY3JLQysvVUplaWFkbGZSNlEvR0pxcGk4SzR0YkhtcW5MVlAwRUg0NWJT?=
 =?utf-8?B?KzNoUk9ic0pldW94ZFJQdkFPajZyQUpzL1lnS3lvcmRyeVVxbGtndXpLbFpn?=
 =?utf-8?B?MHdMMVhTeFlWVXJsVFlTeTN5eVBQRzNhRGZER0xnbEtvbFRkRHhaRjFLejZX?=
 =?utf-8?Q?4AewLeO8IzaLALO9uoRmkzUCg?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 002118a1-0164-4a99-e757-08daf48462bd
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 10:04:25.7694
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IpVnERO8BwG7337sIiKtRKywsjKC2FxdcC6hUdoRgRgvsoOA74/mPcMX1R2L0QS9RcTH3+nobZ+R3kWzKBafKQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8296

On 12.01.2023 10:27, Anthony PERARD wrote:
> On Thu, Jan 12, 2023 at 08:46:23AM +0100, Jan Beulich wrote:
>> On 11.01.2023 23:29, Andrew Cooper wrote:
>>> The real hvm_op.h legitimately includes the real trace.h, therefore the
>>> compat hvm_op.h legitimately includes the compat trace.h too.  But
>>> generation of compat trace.h was made asymmetric because of 2c8fabb223.
>>>
>>> In hindsight, that's a public ABI breakage.  The current configuration
>>> of this build of the hypervisor has no legitimate bearing on the headers
>>> needing to be installed to /usr/include/xen.
>>>
>>> Or put another way, it is a breakage to require Xen to have
>>> CONFIG_COMPAT+CONFIG_TRACEBUFFER enabled in the build simply to get the
>>> public API headers generated properly.
>>
>> There are no public API headers which are generated. The compat headers
>> are generated solely for Xen's internal purposes (and hence there's also
>> no public ABI breakage). Since generation is slow, avoiding generating
>> ones not needed during the build is helpful.
> 
> If only we could do the generation faster:
>     https://lore.kernel.org/xen-devel/20220614162248.40278-5-anthony.perard@citrix.com/
>     patch which takes care of the slower part of the generation (slower
>     at least for some compat headers).

Right, and I still have this in my folder waiting for a review (by someone
who knows Perl better than, e.g., I do). Maybe you want to put an item on
the agenda of today's community call to see whether we can nominate someone
with enough Perl knowledge?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 10:18:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 10:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475957.737884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFufL-00072i-FN; Thu, 12 Jan 2023 10:18:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475957.737884; Thu, 12 Jan 2023 10:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFufL-00072b-CS; Thu, 12 Jan 2023 10:18:35 +0000
Received: by outflank-mailman (input) for mailman id 475957;
 Thu, 12 Jan 2023 10:18:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e0AC=5J=citrix.com=prvs=3691819c0=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pFufJ-00072V-HS
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 10:18:33 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74c02d28-9262-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 11:18:30 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74c02d28-9262-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673518710;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Fht+EqOzpwz5ylkFBu+NCRnrKL+oPCraZHKG17BnUUM=;
  b=WPJDil11KVitLgAqLuEb1avTWuJr42jUNFg5QMpV3hyJKijjqRF07NDR
   psLrg3eGP7BJuvT8mUs4HXoBrNMNS+0RGV4Bq1o6a3UK0trv6MK10q8JJ
   Esl8IZ33dIzZ5tAyG7RCUnl6ci/q+MSAm2O7hLYeo8BOuXfSu16G0T3aT
   w=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 94746319
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.83
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,319,1665460800"; 
   d="scan'208";a="94746319"
Date: Thu, 12 Jan 2023 10:18:17 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] include/compat: produce stubs for headers not otherwise
 generated
Message-ID: <Y7/eaTzkecTCW2oN@perard.uk.xensource.com>
References: <992a28e1-bfcd-97ef-b3d5-c7341846b3ad@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <992a28e1-bfcd-97ef-b3d5-c7341846b3ad@suse.com>

On Thu, Jan 12, 2023 at 10:17:47AM +0100, Jan Beulich wrote:
> Public headers can include other public headers. Such interdependencies
> are retained in their compat counterparts. Since some compat headers are
> generated only in certain configurations, the referenced headers still
> need to exist. The lack thereof was observed with hvm/hvm_op.h needing
> trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
> empty stubs in such cases (as generating the extra headers is relatively
> slow and hence better to avoid). Changes to .config and incrementally
> (re-)building is covered by the respective .*.cmd then no longer
> matching the command to be used, resulting in the necessary re-creation
> of the (possibly stub) header.
> 
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

This patch also takes care of "removing" compat headers that are no
longer needed due to a change in .config, which is good to have (even
if it only removes their content).

Thanks,

-- 
Anthony PERARD
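
The stub mechanism described in the quoted commit message can be pictured
with a minimal sketch. File names and contents here are illustrative only,
not the real generated files (the generated compat headers live roughly
under the build tree's compat/ include directory and carry more boilerplate):

```c
/* compat/trace.h -- when TRACEBUFFER=n this header is emitted as an
 * empty stub: it exists only so that include chains keep resolving. */

/* compat/hvm/hvm_op.h -- mirrors its public counterpart, retaining the
 * cross-header dependency: */
#include "compat/trace.h"   /* resolves whether trace.h is a stub or real */
```

A later change to .config that flips TRACEBUFFER makes the command recorded
in the corresponding .*.cmd file no longer match the one to be run, so the
next incremental build regenerates the header as either a stub or the full
version, as the commit message describes.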


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 10:31:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 10:31:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475963.737895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFurn-0000y0-Jd; Thu, 12 Jan 2023 10:31:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475963.737895; Thu, 12 Jan 2023 10:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFurn-0000xt-Gb; Thu, 12 Jan 2023 10:31:27 +0000
Received: by outflank-mailman (input) for mailman id 475963;
 Thu, 12 Jan 2023 10:31:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFurm-0000xn-Qn
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 10:31:27 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 41303408-9264-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 11:31:22 +0100 (CET)
Received: from mail-mw2nam04lp2175.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Jan 2023 05:31:16 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5583.namprd03.prod.outlook.com (2603:10b6:a03:28e::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 10:31:12 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 10:31:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41303408-9264-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673519482;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=fgzrP60jBfGrPQdP/DsLEQGZ1cWMKjUiLE91ksIgTHc=;
  b=JI4sHJFzIR5sWt75qRaNkAeGjZUPqPbXOsxx64t3S3JDlA344yxP2r3j
   AhJE6Jujd8/iChJidMuF4ICL0aQbMZV0SqL+ukdwEBe5AumMFMPjOePLM
   Q4VxxMwiEAsAZVe55xIlR7ML6XrwwDYB5uxXnANXPhCKrDaWBqhBh4D/c
   g=;
X-IronPort-RemoteIP: 104.47.73.175
X-IronPort-MID: 91215629
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.96,319,1665460800"; 
   d="scan'208";a="91215629"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fgzrP60jBfGrPQdP/DsLEQGZ1cWMKjUiLE91ksIgTHc=;
 b=VWhPrww/PH1ilJQdK4Pqxn56W8yFFgctKvJNFVqws5ztJ6wI8YXNYLkU19W4vFOT7l2PfOw9xBeSYfZJPNvKpOSZ283HTq6AUeclkKeL06Fk/xeCAFgIbEFb5UAE4bSibLs4MB8PuUrJAugY6I80dEJp1fifGgjnDSUaYtJNitY=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Thread-Topic: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Thread-Index: AQHZJcS6xQlTtpJxX0O79yAhgOZJ+66Z2cOAgACwbwCAAAw+gA==
Date: Thu, 12 Jan 2023 10:31:11 +0000
Message-ID: <71e7ba34-f1ea-16fd-ec01-bb2aa454270c@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
 <c5d201ac-89ca-6baa-d685-5bef2497183f@citrix.com>
 <a438f16b-7d45-d7e4-2191-4ed7b2077785@suse.com>
In-Reply-To: <a438f16b-7d45-d7e4-2191-4ed7b2077785@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5583:EE_
x-ms-office365-filtering-correlation-id: 6ae85160-01c9-475f-f99b-08daf4882017
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <D87378AB7FF01843A7B8D2CAE6A245D6@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ae85160-01c9-475f-f99b-08daf4882017
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jan 2023 10:31:11.7362
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 3rtaEAab+OuZHdVhCRTDSj7jTqESYBkzTjMzHXwBR79QrguMcGUeMNd0QBG0cYYfJJIGdndMtxMIFtP3ehhd9JxipQ3ymtDmCMgABEgzf7k=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5583

On 12/01/2023 9:47 am, Jan Beulich wrote:
> On 12.01.2023 00:15, Andrew Cooper wrote:
>> On 11/01/2023 1:57 pm, Jan Beulich wrote:
>>> Make HVM=y release build behavior prone against array overrun, by
>>> (ab)using array_access_nospec(). This is in particular to guard against
>>> e.g. SH_type_unused making it here unintentionally.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> v2: New.
>>>
>>> --- a/xen/arch/x86/mm/shadow/private.h
>>> +++ b/xen/arch/x86/mm/shadow/private.h
>>> @@ -27,6 +27,7 @@
>>>  // been included...
>>>  #include <asm/page.h>
>>>  #include <xen/domain_page.h>
>>> +#include <xen/nospec.h>
>>>  #include <asm/x86_emulate.h>
>>>  #include <asm/hvm/support.h>
>>>  #include <asm/atomic.h>
>>> @@ -368,7 +369,7 @@ shadow_size(unsigned int shadow_type)
>>>  {
>>>  #ifdef CONFIG_HVM
>>>      ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
>>> -    return sh_type_to_size[shadow_type];
>>> +    return array_access_nospec(sh_type_to_size, shadow_type);
>> I don't think this is warranted.
>>
>> First, if the commit message were accurate, then it's a problem for all
>> arrays of size SH_type_unused, yet you've only adjusted a single instance.
> Because I think the risk is higher here than for other arrays. In
> other cases we have suitable build-time checks (HASH_CALLBACKS_CHECK()
> in particular) which would trip upon inappropriate use of one of the
> types which are aliased to SH_type_unused when !HVM.
>
>> Secondly, if it were reliably 16 then we could do the basically-free
>> "type &= 15;" modification.  (It appears my change to do this
>> automatically hasn't been taken yet.), but we'll end up with lfence
>> variation here.
> How could anything be "reliably 16"? Such enums can change at any time:
> They did when making HVM types conditional, and they will again when
> adding types needed for 5-level paging.
>
>> But the value isn't attacker controlled.  shadow_type always comes from
>> Xen's metadata about the guest, not the guest itself.  So I don't see
>> how this can conceivably be a speculative issue even in principle.
> I didn't say anything about there being a speculative issue here. It
> is for this very reason that I wrote "(ab)using".

Then it is entirely wrong to be using a nospec accessor.

Speculation problems are subtle enough, without false uses of the safety
helpers.

If you want to "harden" against runtime architectural errors, you want
to up the ASSERT() to a BUG(), which will execute faster than sticking
and hiding an lfence in here, and not hide what is obviously a major
malfunction in the shadow pagetable logic.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 10:43:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 10:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475969.737906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFv34-0002VT-M9; Thu, 12 Jan 2023 10:43:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475969.737906; Thu, 12 Jan 2023 10:43:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFv34-0002VM-IY; Thu, 12 Jan 2023 10:43:06 +0000
Received: by outflank-mailman (input) for mailman id 475969;
 Thu, 12 Jan 2023 10:43:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFv33-0002VG-NL
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 10:43:06 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2064.outbound.protection.outlook.com [40.107.22.64])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e3a96eac-9265-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 11:43:03 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7930.eurprd04.prod.outlook.com (2603:10a6:10:1ea::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 12 Jan
 2023 10:43:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 10:43:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3a96eac-9265-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5L1kQiY5uTx4IWBhotb8+wwYea9jsU5PILmCT8+5Gzg=;
 b=H9M9jengHjLLdLorjfj0vq+ZKQTTYHYIxseaHGjkKm1wS1aGrpc00uxrgMQVDzjOhx7aJI6fQgm9UGctQve0efbTn55hl9IPAjy45cpoWivLHOCLN2GF+6yIB4JUc0N/K1AxPxUu1SzgPZw5rTlKeLLpdruhbuwzwxwy5jDkbQS0eYDK9Ox+SIBxb4LPCn6RtQO3vKNZr+qFF03SzshFNTYFVEiSfr7NPF8lR67MgB7Y5n+Vv+v7ywM8p+Nhito2ywX0nXNbmhSTEnQ77/ydSNQfA4CBP/bmGHq6HFgN1mcaf6uzOyyk+wM7ptsJwAuCRH9D7OLO0lBN7nWuD3UJAA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b49793f8-55be-0746-815d-ab7b627d3baa@suse.com>
Date: Thu, 12 Jan 2023 11:42:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
 <c5d201ac-89ca-6baa-d685-5bef2497183f@citrix.com>
 <a438f16b-7d45-d7e4-2191-4ed7b2077785@suse.com>
 <71e7ba34-f1ea-16fd-ec01-bb2aa454270c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <71e7ba34-f1ea-16fd-ec01-bb2aa454270c@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0125.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7930:EE_
X-MS-Office365-Filtering-Correlation-Id: 7d5e894f-fbb9-4339-65a1-08daf489c5fd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7d5e894f-fbb9-4339-65a1-08daf489c5fd
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 10:42:59.7480
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QKH1B+LjzoOvQ+9yq1SSc4Hvd9q6obpIzDvvc9oxJYi9dCvVrBz5waTTRMkrfdaJ4PsWtlarKMIso5iuBC+yww==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7930

On 12.01.2023 11:31, Andrew Cooper wrote:
> On 12/01/2023 9:47 am, Jan Beulich wrote:
>> On 12.01.2023 00:15, Andrew Cooper wrote:
>>> On 11/01/2023 1:57 pm, Jan Beulich wrote:
>>>> Make HVM=y release build behavior robust against array overrun, by
>>>> (ab)using array_access_nospec(). This is in particular to guard against
>>>> e.g. SH_type_unused making it here unintentionally.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> v2: New.
>>>>
>>>> --- a/xen/arch/x86/mm/shadow/private.h
>>>> +++ b/xen/arch/x86/mm/shadow/private.h
>>>> @@ -27,6 +27,7 @@
>>>>  // been included...
>>>>  #include <asm/page.h>
>>>>  #include <xen/domain_page.h>
>>>> +#include <xen/nospec.h>
>>>>  #include <asm/x86_emulate.h>
>>>>  #include <asm/hvm/support.h>
>>>>  #include <asm/atomic.h>
>>>> @@ -368,7 +369,7 @@ shadow_size(unsigned int shadow_type)
>>>>  {
>>>>  #ifdef CONFIG_HVM
>>>>      ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
>>>> -    return sh_type_to_size[shadow_type];
>>>> +    return array_access_nospec(sh_type_to_size, shadow_type);
>>> I don't think this is warranted.
>>>
>>> First, if the commit message were accurate, then it's a problem for all
>>> arrays of size SH_type_unused, yet you've only adjusted a single instance.
>> Because I think the risk is higher here than for other arrays. In
>> other cases we have suitable build-time checks (HASH_CALLBACKS_CHECK()
>> in particular) which would trip upon inappropriate use of one of the
>> types which are aliased to SH_type_unused when !HVM.
>>
>>> Secondly, if it were reliably 16 then we could do the basically-free
>>> "type &= 15;" modification.  (It appears my change to do this
>>> automatically hasn't been taken yet.)  But we'll end up with an lfence
>>> variation here.
>> How could anything be "reliably 16"? Such enums can change at any time:
>> They did when making HVM types conditional, and they will again when
>> adding types needed for 5-level paging.
>>
>>> But the value isn't attacker controlled.  shadow_type always comes from
>>> Xen's metadata about the guest, not the guest itself.  So I don't see
>>> how this can conceivably be a speculative issue even in principle.
>> I didn't say anything about there being a speculative issue here. It
>> is for this very reason that I wrote "(ab)using".
> 
> Then it is entirely wrong to be using a nospec accessor.
> 
> Speculation problems are subtle enough, without false uses of the safety
> helpers.
> 
> If you want to "harden" against runtime architectural errors, you want
> to up the ASSERT() to a BUG(), which will execute faster than sticking
> and hiding an lfence in here, and not hide what is obviously a major
> malfunction in the shadow pagetable logic.

I should have commented on this earlier already: What lfence are you
talking about? As to BUG() - the goal here specifically is to avoid a
crash in release builds, by forcing the function to return zero (via
having it use array slot zero for out of range indexes).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 10:46:32 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [RFC PATCH 1/8] xen/arm: enable SVE extension for Xen
Date: Thu, 12 Jan 2023 10:46:04 +0000
Message-ID: <85F9C725-816A-46EE-AD0C-2725AE13F14C@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <20230111143826.3224-2-luca.fancellu@arm.com>
 <e37e5564-e7b9-c9d2-1360-171c014649c7@xen.org>
In-Reply-To: <e37e5564-e7b9-c9d2-1360-171c014649c7@xen.org>



> On 11 Jan 2023, at 17:16, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> As this is an RFC, I will be mostly making general comments.

Hi Julien,

Thank you.

> 
> On 11/01/2023 14:38, Luca Fancellu wrote:
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 99577adb6c69..8ea3843ea8e8 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
>>      /* VGIC */
>>      gic_restore_state(n);
>>  +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
> 
> Shouldn't this need an isb() afterwards to ensure that anything previously trapped will be accessible?

Yes, you are right. Would it be ok for you if I move this before gic_restore_state(), because it has an isb() inside? This would limit the isb() usage. I could also put a comment so we don't forget it.

Otherwise I will add the barrier.


> 
> [...]
> 
>> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>>   void init_traps(void)
>>  {
>> +    register_t cptr_bits = get_default_cptr_flags();
>>      /*
>>       * Setup Hyp vector base. Note they might get updated with the
>>       * branch predictor hardening.
>> @@ -135,17 +151,15 @@ void init_traps(void)
>>      /* Trap CP15 c15 used for implementation defined registers */
>>      WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>>  -    /* Trap all coprocessor registers (0-13) except cp10 and
>> -     * cp11 for VFP.
>> -     *
>> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
>> -     *
>> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
>> -     * RES1, i.e. they would trap whether we did this write or not.
>> +#ifdef CONFIG_ARM64_SVE
>> +    /*
>> +     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
>> +     * trapping again or not will be handled on vcpu creation/scheduling later
>>       */
> 
> Instead of enabling by default at boot, can we try to enable/disable only when this is strictly needed?

Yes, we could un-trap inside compute_max_zcr() just before accessing SVE resources and trap it again when finished. Would this approach be ok for you?

> 
>> -    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
>> -                 HCPTR_TTA | HCPTR_TAM,
>> -                 CPTR_EL2);
>> +    cptr_bits &= ~HCPTR_CP(8);
>> +#endif
>> +
>> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
>>      /*
>>       * Configure HCR_EL2 with the bare minimum to run Xen until a guest
> 
> Cheers,
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Jan 12 10:54:44 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [RFC PATCH 2/8] xen/arm: add sve_vl_bits field to domain
Date: Thu, 12 Jan 2023 10:54:16 +0000
Message-ID: <9168CB2A-A1F1-43E0-9DAD-BB31AD3979E0@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <20230111143826.3224-3-luca.fancellu@arm.com>
 <91b5c7db-ec9b-efa6-f5cf-dc5e8b176db6@xen.org>
In-Reply-To: <91b5c7db-ec9b-efa6-f5cf-dc5e8b176db6@xen.org>

> On 11 Jan 2023, at 17:27, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 11/01/2023 14:38, Luca Fancellu wrote:
>> Add sve_vl_bits field to arch_domain and xen_arch_domainconfig
>> structure, to allow the domain to have an information about the
>> SVE feature and the number of SVE register bits that are allowed
>> for this domain.
>> The field is used also to allow or forbid a domain to use SVE,
>> because a value equal to zero means the guest is not allowed to
>> use the feature.
>> When the guest is allowed to use SVE, the zcr_el2 register is
>> updated on context switch to restict the domain on the allowed
>> number of bits chosen, this value is the minimum among the chosen
>> value and the platform supported value.
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>>  xen/arch/arm/arm64/sve.c             |  9 ++++++
>>  xen/arch/arm/domain.c                | 45 ++++++++++++++++++++++++++++
>>  xen/arch/arm/include/asm/arm64/sve.h | 12 ++++++++
>>  xen/arch/arm/include/asm/domain.h    |  6 ++++
>>  xen/include/public/arch-arm.h        |  2 ++
>>  xen/include/public/domctl.h          |  2 +-
>>  6 files changed, 75 insertions(+), 1 deletion(-)
>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>> index 326389278292..b7695834f4ba 100644
>> --- a/xen/arch/arm/arm64/sve.c
>> +++ b/xen/arch/arm/arm64/sve.c
>> @@ -6,6 +6,7 @@
>>   */
>>    #include <xen/types.h>
>> +#include <asm/cpufeature.h>
>>  #include <asm/arm64/sve.h>
>>  #include <asm/arm64/sysregs.h>
>>  @@ -36,3 +37,11 @@ register_t vl_to_zcr(uint16_t vl)
>>  {
>>      return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
>>  }
>> +
>> +/* Get the system sanitized value for VL in bits */
>> +uint16_t get_sys_vl_len(void)
>> +{
>> +    /* ZCR_ELx len field is ((len+1) * 128) = vector bits length */
>> +    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>> +            SVE_VL_MULTIPLE_VAL;
>> +}
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 8ea3843ea8e8..27f38729302b 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -13,6 +13,7 @@
>>  #include <xen/wait.h>
>>    #include <asm/alternative.h>
>> +#include <asm/arm64/sve.h>
>>  #include <asm/cpuerrata.h>
>>  #include <asm/cpufeature.h>
>>  #include <asm/current.h>
>> @@ -183,6 +184,11 @@ static void ctxt_switch_to(struct vcpu *n)
>>        WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>  +#ifdef CONFIG_ARM64_SVE
>> +    if ( is_sve_domain(n->domain) )
>> +        WRITE_SYSREG(n->arch.zcr_el2, ZCR_EL2);
>> +#endif
> 
> I would actually expect that is_sve_domain() returns false when the SVE is not enabled. So can we simply remove the #ifdef?

I was tricked by it too, the arm32 build will fail because it doesn't know ZCR_EL2

> 
>> +
>>      /* VFP */
>>      vfp_restore_state(n);
>>  @@ -551,6 +557,11 @@ int arch_vcpu_create(struct vcpu *v)
>>      v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>>        v->arch.cptr_el2 = get_default_cptr_flags();
>> +    if ( is_sve_domain(v->domain) )
>> +    {
>> +        v->arch.cptr_el2 &= ~HCPTR_CP(8);
>> +        v->arch.zcr_el2 = vl_to_zcr(v->domain->arch.sve_vl_bits);
>> +    }
>>        v->arch.hcr_el2 = get_default_hcr_flags();
>>  @@ -595,6 +606,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>      unsigned int max_vcpus;
>>      unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>      unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>> +    unsigned int sve_vl_bits = config->arch.sve_vl_bits;
>>        if ( (config->flags & ~flags_optional) != flags_required )
>>      {
>> @@ -603,6 +615,36 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>          return -EINVAL;
>>      }
>>  +    /* Check feature flags */
>> +    if ( sve_vl_bits > 0 ) {
>> +        unsigned int zcr_max_bits;
>> +
>> +        if ( !cpu_has_sve )
>> +        {
>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>> +            return -EINVAL;
>> +        }
>> +        else if ( !is_vl_valid(sve_vl_bits) )
>> +        {
>> +            dprintk(XENLOG_INFO, "Unsupported SVE vector length (%u)\n",
>> +                    sve_vl_bits);
>> +            return -EINVAL;
>> +        }
>> +        /*
>> +         * get_sys_vl_len() is the common safe value among all cpus, so if the
>> +         * value specified by the user is above that value, use the safe value
>> +         * instead.
>> +         */
>> +        zcr_max_bits = get_sys_vl_len();
>> +        if ( sve_vl_bits > zcr_max_bits )
>> +        {
>> +            config->arch.sve_vl_bits = zcr_max_bits;
>> +            dprintk(XENLOG_INFO,
>> +                    "SVE vector length lowered to %u, safe value among CPUs\n",
>> +                    zcr_max_bits);
>> +        }
> 
> I don't think Xen should "downgrade" the value. Instead, this should be the decision from the tools. So we want to find a different way to reproduce the value (Andrew may have some ideas here as he was looking at it).

Can you explain this in more details? By the tools you mean xl? This would be ok for DomUs, but how to do it for Dom0?
I'll wait suggestions also from Andrew.

> 
>> +    }
>> +
>>      /* The P2M table must always be shared between the CPU and the IOMMU */
>>      if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
>>      {
>> @@ -745,6 +787,9 @@ int arch_domain_create(struct domain *d,
>>      if ( (rc = domain_vpci_init(d)) != 0 )
>>          goto fail;
>>  +    /* Copy sve_vl_bits to the domain configuration */
>> +    d->arch.sve_vl_bits = config->arch.sve_vl_bits;
>> +
>>      return 0;
>>    fail:
>> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
>> index bd56e2f24230..f4a660e402ca 100644
>> --- a/xen/arch/arm/include/asm/arm64/sve.h
>> +++ b/xen/arch/arm/include/asm/arm64/sve.h
>> @@ -13,10 +13,17 @@
>>  /* Vector length must be multiple of 128 */
>>  #define SVE_VL_MULTIPLE_VAL (128U)
>>  +static inline bool is_vl_valid(uint16_t vl)
>> +{
>> +    /* SVE vector length is multiple of 128 and maximum 2048 */
>> +    return ((vl % SVE_VL_MULTIPLE_VAL) == 0) && (vl <= SVE_VL_MAX_BITS);
>> +}
>> +
>>  #ifdef CONFIG_ARM64_SVE
>>    register_t compute_max_zcr(void);
>>  register_t vl_to_zcr(uint16_t vl);
>> +uint16_t get_sys_vl_len(void);
>>    #else /* !CONFIG_ARM64_SVE */
>>  @@ -30,6 +37,11 @@ static inline register_t vl_to_zcr(uint16_t vl)
>>      return 0;
>>  }
>>  +static inline uint16_t get_sys_vl_len(void)
>> +{
>> +    return 0;
>> +}
>> +
>>  #endif
>>    #endif /* _ARM_ARM64_SVE_H */
>> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
>> index 42eb5df320a7..e4794a9fd2ab 100644
>> --- a/xen/arch/arm/include/asm/domain.h
>> +++ b/xen/arch/arm/include/asm/domain.h
>> @@ -31,6 +31,8 @@ enum domain_type {
>>    #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>  +#define is_sve_domain(d) ((d)->arch.sve_vl_bits > 0)
>> +
>>  /*
>>   * Is the domain using the host memory layout?
>>   *
>> @@ -114,6 +116,9 @@ struct arch_domain
>>      void *tee;
>>  #endif
>>  +    /* max SVE vector length in bits */
>> +    uint16_t sve_vl_bits;
>> +
>>  }  __cacheline_aligned;
>>    struct arch_vcpu
>> @@ -190,6 +195,7 @@ struct arch_vcpu
>>      register_t tpidrro_el0;
>>        /* HYP configuration */
>> +    register_t zcr_el2;
>>      register_t cptr_el2;
>>      register_t hcr_el2;
>>      register_t mdcr_el2;
>> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
>> index 1528ced5097a..e18a075105f0 100644
>> --- a/xen/include/public/arch-arm.h
>> +++ b/xen/include/public/arch-arm.h
>> @@ -304,6 +304,8 @@ struct xen_arch_domainconfig {
>>      uint16_t tee_type;
>>      /* IN */
>>      uint32_t nr_spis;
>> +    /* IN */
>> +    uint16_t sve_vl_bits;
> 
> Please spell out the padding clearly (even though I know there is one in this structure that is not mentioned).

Ok I will

> 
>>      /*
>>       * OUT
>>       * Based on the property clock-frequency in the DT timer node.
>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>> index 51be28c3de7c..616d7a1c070d 100644
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -21,7 +21,7 @@
>>  #include "hvm/save.h"
>>  #include "memory.h"
>>  -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
>> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
>>    /*
>>   * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:02:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:02:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475989.737939 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvMF-0006JL-0x; Thu, 12 Jan 2023 11:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475989.737939; Thu, 12 Jan 2023 11:02:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvME-0006JE-UA; Thu, 12 Jan 2023 11:02:54 +0000
Received: by outflank-mailman (input) for mailman id 475989;
 Thu, 12 Jan 2023 11:02:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFvMD-0006Is-Jo
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 11:02:53 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a6d495ad-9268-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 12:02:50 +0100 (CET)
Received: from mail-mw2nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Jan 2023 06:02:39 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5553.namprd03.prod.outlook.com (2603:10b6:208:285::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 12 Jan
 2023 11:02:37 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 11:02:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Anthony
 Perard <anthony.perard@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [Multiple reverts] [RFC PATCH] build: include/compat: figure out
 which other compat headers are needed
Thread-Topic: [Multiple reverts] [RFC PATCH] build: include/compat: figure out
 which other compat headers are needed
Thread-Index: AQHZJloKRT09WL2lFUW3wxQFePvpLa6ang0A
Date: Thu, 12 Jan 2023 11:02:37 +0000
Message-ID: <cf9b3be0-b195-83d8-875f-3ef70e5d9c3b@citrix.com>
References: <20230111181703.30991-1-anthony.perard@citrix.com>
 <5c7ffbe4-3c19-d748-9489-9a256faebb7a@citrix.com>
 <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>
In-Reply-To: <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <4F1A9C49D02BBB4986E7DD1143835BF5@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jan 2023 11:02:37.0685
 (UTC)

On 12/01/2023 7:46 am, Jan Beulich wrote:
> On 11.01.2023 23:29, Andrew Cooper wrote:
>> For posterity,
>> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/3585379553 is
>> the issue in question.
>>
>> In file included from arch/x86/hvm/hvm.c:82:
>> ./include/compat/hvm/hvm_op.h:6:10: fatal error: ../trace.h: No such
>> file or directory
>>     6 | #include "../trace.h"
>>       |          ^~~~~~~~~~~~
>> compilation terminated.
>> make[4]: *** [Rules.mk:246: arch/x86/hvm/hvm.o] Error 1
>> make[3]: *** [Rules.mk:320: arch/x86/hvm] Error 2
>> make[3]: *** Waiting for unfinished jobs....
>>
>>
>> All public headers use "../" relative includes for traversing the
>> public/ hierarchy.  This cannot feasibly change given our "copy this
>> into your project" stance, but it means the compat headers have the same
>> structure under compat/.
>>
>> This include is supposed to be including compat/trace.h but it was
>> actually picking up x86's asm/trace.h, hence the build failure now that
>> I've deleted the file.
>>
>> This demonstrates that trying to be clever with -iquote is a mistake.
>> Nothing good can possibly come of having the <> and "" include paths
>> being different.  Therefore we must revert all uses of -iquote.
> I'm afraid I can't see the connection between use of -iquote and the bug
> here.

So I had concluded (wrongly) that it was to do with an asymmetry of
include paths, but it's not.  <../$x> would behave the same, even if it
is a bit more obviously wrong.

The actual problem is the use of ./ or ../ because, despite how they
read, they are never relative to the current file.  The contents between
the "" or <> is treated as a string literal and not interpreted by CPP.

So furthermore, the public headers are buggy in their use of ../ because
it is an implicit dependency on -I/path/to/xen/headers/dir/ being
earlier on the include path than other dirs with these fairly generic
and not-xen-prefixed file names.

I still think that include path asymmetry is bad idea and wants
reverting, but I agree it's not the source of this bug.

>> But, that isn't the only bug.
>>
>> The real hvm_op.h legitimately includes the real trace.h, therefore the
>> compat hvm_op.h legitimately includes the compat trace.h too.  But
>> generation of compat trace.h was made asymmetric because of 2c8fabb223.
>>
>> In hindsight, that's a public ABI breakage.  The current configuration
>> of this build of the hypervisor has no legitimate bearing on the headers
>> needing to be installed to /usr/include/xen.
>>
>> Or put another way, it is a breakage to require Xen to have
>> CONFIG_COMPAT+CONFIG_TRACEBUFFER enabled in the build simply to get the
>> public API headers generated properly.
> There are no public API headers which are generated. The compat headers
> are generate solely for Xen's internal purposes (and hence there's also
> no public ABI breakage).

Wow, I really was decaffeinated when working on this...

Yeah, it's not that either.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:03:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:03:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.475995.737950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvN4-0006qP-AY; Thu, 12 Jan 2023 11:03:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 475995.737950; Thu, 12 Jan 2023 11:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvN4-0006qI-71; Thu, 12 Jan 2023 11:03:46 +0000
Received: by outflank-mailman (input) for mailman id 475995;
 Thu, 12 Jan 2023 11:03:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AeS6=5J=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pFvN3-0006Is-5A
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 11:03:45 +0000
Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com
 [2607:f8b0:4864:20::432])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c5ea41de-9268-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 12:03:42 +0100 (CET)
Received: by mail-pf1-x432.google.com with SMTP id 20so8111254pfu.13
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 03:03:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5ea41de-9268-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=YRMIxmrMjxFmZ7KhnuQyQ4LQN+JFHEuz988Y3hBKhAY=;
        b=AE2aKbGlog+nDLpzx6+73PFOKcFOlk7OrHENBBiyt8FPDfJtjJp9iazQfWNk1uJlM0
         QhMv2+Ux9eLUuLYjgMZWUkvqrbE7es6WjuMPIEs60T4Ls+D9q0/MzjqYBaO5D/adYkBZ
         nLgcDr0VnC6f+kKHzeT+m/f66v9V7jsPZreL9Ref5zsYSVjUQMk+NCI/f3XlF9FD4h0p
         wOJxBBhwgklj8lzeQO0DR7laeFBaDePYrZfypE63u0O/Ijnim3Qg/vKTEBQfhWb/FDZ3
         bh9hcjDTqiCPNZ6NKrduBHZtkSr3wrmKYWvOgX+Ta+5LYTfWRaUENMdezVR8C8dPh7E4
         HGZw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=YRMIxmrMjxFmZ7KhnuQyQ4LQN+JFHEuz988Y3hBKhAY=;
        b=MWEzk3J0pR90zo8mkEotZveLo40WJB85ILbcUHVTlUWgZ+4SW8MbxFd7GoswKgW8UP
         G8vsDzhlq4SET6nLabvoI7pM6DH2KuwX/2h2nbq7sb8qA493hEbGAMY6npR7cEv/pi8+
         gqWzbalSbgDTh2GGt3IITdzOPqkS7x2pnQm5YQuiYsPtqf5ihEvjEIcaEJBHxil24yge
         HMV7HTEdaxU5mWodBpNMkrW82HyBJE5MtPK0IpIg6gzG67ueePZ2lSBrILf8e5gRMQO+
         UVWQVU/B/OXkSfuEH3L270i8MUL4aRKXtENEW6PrXg8SwA27zBSuGj4s1mGmx/zz79lp
         L7vw==
X-Gm-Message-State: AFqh2kr8A6DC9LDR0c7I4GcW/fbwjCu1a9Yf1d+M3y6b1SRM65hNyL1N
	EPVUd6Sk2d8JmVz3ej7WwZ98x6fGGa67i9qntdHsVg==
X-Google-Smtp-Source: AMrXdXu5V4bX22nwNZjWYHOvPbmvbmEPrM5ZbLD1wZubhZ6omG79rXH/n1MH1qTGUPIgVjvQP0rVQQGio1U3g6Ik/Ao=
X-Received: by 2002:a63:e20b:0:b0:479:18a:8359 with SMTP id
 q11-20020a63e20b000000b00479018a8359mr4968225pgh.105.1673521421294; Thu, 12
 Jan 2023 03:03:41 -0800 (PST)
MIME-Version: 1.0
References: <20230110212947.34557-1-philmd@linaro.org> <d4ea8bf5-a669-eb33-6dd2-f37417dab1c7@eik.bme.hu>
In-Reply-To: <d4ea8bf5-a669-eb33-6dd2-f37417dab1c7@eik.bme.hu>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 12 Jan 2023 11:03:29 +0000
Message-ID: <CAFEAcA-5054ZzubQbZuzb8=D1QfbfKgqwnr48r2kOx5p4=Rchg@mail.gmail.com>
Subject: Re: [PATCH] bulk: Rename TARGET_FMT_plx -> HWADDR_FMT_plx
To: BALATON Zoltan <balaton@eik.bme.hu>
Cc: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>, 
	qemu-devel@nongnu.org, qemu-block@nongnu.org, qemu-arm@nongnu.org, 
	qemu-ppc@nongnu.org, Richard Henderson <richard.henderson@linaro.org>, 
	=?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, ale@rev.ng, 
	qemu-riscv@nongnu.org, xen-devel@lists.xenproject.org, 
	Thomas Huth <thuth@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 10 Jan 2023 at 22:04, BALATON Zoltan <balaton@eik.bme.hu> wrote:
>
> On Tue, 10 Jan 2023, Philippe Mathieu-Daud=C3=A9 wrote:
> > The 'hwaddr' type is defined in "exec/hwaddr.h" as:
> >
> >    hwaddr is the type of a physical address
> >   (its size can be different from 'target_ulong').
> >
> > All definitions use the 'HWADDR_' prefix, except TARGET_FMT_plx:
> >
> > $ fgrep define include/exec/hwaddr.h
> > #define HWADDR_H
> > #define HWADDR_BITS 64
> > #define HWADDR_MAX UINT64_MAX
> > #define TARGET_FMT_plx "%016" PRIx64
> >         ^^^^^^
> > #define HWADDR_PRId PRId64
> > #define HWADDR_PRIi PRIi64
> > #define HWADDR_PRIo PRIo64
> > #define HWADDR_PRIu PRIu64
> > #define HWADDR_PRIx PRIx64
>
> Why are there both TARGET_FMT_plx and HWADDR_PRIx? Why not just use
> HWADDR_PRIx instead?

TARGET_FMT_plx is part of a family of defines for printing
target_foo types; the rest are in cpu-defs.h. These all include the
'%' character. This is more convenient to use, but it's also out of
line with the C standard format macros like PRIx64.
The HWADDR_* macros take the approach of aligning with how you
use the C standard format macros.

As usual in QEMU, where there are two different ways of doing
things, it's probably because one of them is a lot older than
the other and written by a different person. In theory it would
be nice to apply some consistency here but it rarely seems
worth the effort of the bulk code edit.

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:22:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:22:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476001.737961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvei-00010W-Ti; Thu, 12 Jan 2023 11:22:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476001.737961; Thu, 12 Jan 2023 11:22:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvei-00010P-QI; Thu, 12 Jan 2023 11:22:00 +0000
Received: by outflank-mailman (input) for mailman id 476001;
 Thu, 12 Jan 2023 11:21:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFveh-00010J-Kt
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 11:21:59 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2051.outbound.protection.outlook.com [40.107.22.51])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 53828e1f-926b-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 12:21:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8780.eurprd04.prod.outlook.com (2603:10a6:20b:40b::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 11:21:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 11:21:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53828e1f-926b-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aIzc44BPZjWDGItMctV5mLhr6+8J7s1YMwyB1U454FWL5hQdt/a8Yr1cQe08gS6VR3qyGL4b5sxb4Do/jGB86lF56sEF7LedZpdiGrOg3Ve9smncYQtNCIQ94+VSEDbkJNSuwCTnLzwKmsI3Luu7cBpwH7USFUMESLVpu0apLX7e2dQQFFHsQoYxHfsVwf6qg0Wu0VFIpkhUmg2G+5Vqbh1Q1QsjN1aFbq7MIWYmSEaC/n0kA9vp0XJugPWzjhiiYa2+aAUdTQH9uTsEZnpytyQXwrN2tD9I/+qw37f5Bi6I+H1QFzxnmNv9SCEAFldGWxlMhvKL00QZ0nOnkYzL9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ia2sbsa1r4vbw0b9jirnccNid+oPcdF6VmK/PXk2qcM=;
 b=UPjeHVUUehM0p27YdnRAP7+KKu6wXmKiUPuy4Tnz7cIL/EX9IkGhmTxKvv22oE93ep3RSVHrKNhoyZT55ec7dCuA/weHcHGFfRuN+xlswdGUZpgxcITbZ/0Ji2RxndApBYaOTfBCr3hIofS6pxZOiPN3msa8BgIvYxHZ8fjZExsAB+ZDZQivhJShZJasqmW8gPl7MkroQtnt+oQutxDimwvKh8qLIUSW1oO3dJilqHofTq9hKaQbluXs55n4i/LSk9+xsn+w+lWHB3kmP9VvffyQNIvmzr0Ir6tLH+yLTdEjneJ9jl/E5/xLHDkbh8aKkzvgbn1uZoXOdIe60bKbPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ia2sbsa1r4vbw0b9jirnccNid+oPcdF6VmK/PXk2qcM=;
 b=HSQgqFs2uS1Zx+LPtivBSJqlC1X0AF9jz/nf6vatNevpsp4XohxA4FepVBl4vHysUZeVnD/OibSqxOg5I78eRYjOM8olw6dq6PieqACoWHDWRxVZ2Em2Yow2jPc17YL7shPsrIMnwcWwRJSkSrnxR/+WY+i0Zlt2R+lzuD/Nzm1ef27w7thURcUdtvJ6KBcesJBav0PIScikisy3nHJ69MImJyJQ7CyYNLPvjjL3TccG2nw54OPEVJZ5ZDJNj6T0OSjcXEM0xtPpHMUxZFms+V2ovyjba9Aax6NHDmH4tRIZD5pKRjB8W7TCfTIVNPxta639/0u/XLIBaR/+sc76JQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b6351405-81ff-4ce3-2e94-aef1aee3c40c@suse.com>
Date: Thu, 12 Jan 2023 12:21:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [Multiple reverts] [RFC PATCH] build: include/compat: figure out
 which other compat headers are needed
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230111181703.30991-1-anthony.perard@citrix.com>
 <5c7ffbe4-3c19-d748-9489-9a256faebb7a@citrix.com>
 <750ad2e8-a5be-d9c0-846e-41bb64c195fe@suse.com>
 <cf9b3be0-b195-83d8-875f-3ef70e5d9c3b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cf9b3be0-b195-83d8-875f-3ef70e5d9c3b@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0172.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8780:EE_
X-MS-Office365-Filtering-Correlation-Id: 5d6c5186-a8e0-4a0d-4c90-08daf48f369c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	W/ALIPkcwl+tyPK+ctEVA+CASAs/84mk1W7QZuk9bUKpBB+nFxApNwoHIDHTdDhcjL2DqdNsOVfwjyuWDRi4AcF+7EFJgOaUQonDJMxiXI4YooskuMl/XXwoadVVu3MYZw1qgec0gzNCKyGzKzesK4dfUnUHeHPbxxii2n67R8uCUZ04pN20uOgoMhS8msfb6BLPdxMUbL1kRooxpdfscvVqmbYQaNf4/czSs9ckJRShqW2kyQu8WziHpJLNnldPOViXZSEPpnaj7mKZvqD94Qu5tR+sfacV9SOIn07CYbene4WgRnHlbzPD2qEI+sD7c56o0Xundkbkbbun7KeU9V0EUphyTQYh4Ev/iPL7tkFE6kfihcPJuqpshGwZciYICMjqF2kSDRogJhZ1Rm9GLG/0t91Q3Swf/UTs9b5xrY67DuINFv93hEl4s76/u3/vrKClYUbUmnZt80urzzJDcb1bIGfD1CQRKVINcfMJzT+otIJGd/kSx1B/iHxry3Rkq4yQf2NR6tYSz4O2U/JBC7goQdboPao26aazDCU7G5CwBaCAa0PsFC+CoObXhGhG1QnH1Y8MJK1304ZOPlSgKx/aRAahq2rxAc0uTas9guSWXUlPBRH8yZ9b86zrlq19EL8PT0/9tOtFVehvwgdS52JpCB66b0tjBS8cNrMAPLI+gK3xWGvPOOy76jC56mDxOKR5JbJ0vIU22PH6FFYi2w3vMsfbVUliwKVobuV8+xCCS5MIse4cyb5U45LUna8F7Hf6hygGX3bLiLC1FnYIEQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(366004)(346002)(376002)(396003)(39860400002)(136003)(451199015)(2906002)(83380400001)(478600001)(66476007)(66556008)(5660300002)(66946007)(8936002)(53546011)(26005)(36756003)(186003)(6916009)(6506007)(31686004)(6512007)(6486002)(2616005)(41300700001)(38100700002)(8676002)(54906003)(86362001)(966005)(316002)(4326008)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UDdDQXFVaFNkbUx0dEJ6NGlvTko3TW1pVStxQ2NMZmF6ZEg5UjdGRXFsOENX?=
 =?utf-8?B?bW9uSE1YY2FHNXFCbm1BZm1aTkxvamRRamVRZXN1NEluMmo5a3c4amFWUEZ2?=
 =?utf-8?B?NllZb1lCRENtNWgzL0V0QkNDQmtmWFJPNkNlejlJK09oczQ5VUR4QzFnVFZN?=
 =?utf-8?B?WjhoWDk2OTJKS0xXV2pkRkt4ZU0zV0tBa1N6M2RVZmRTUVJaaHh2WGphYmxE?=
 =?utf-8?B?M1FsN2hwUEk0ZVN6TmtCc3J2M1FLT20rNXJ2VnNoczlPRzUxWXh3azh1dUxt?=
 =?utf-8?B?a3BPN2E0a0QvY0dNYzhqcFpKN2lPcXpkVWNYd3daa2NhaEwreU5rRGVmQU1w?=
 =?utf-8?B?OHNRdGVEeVdqZFRwd1Nib0s2OUVhUFpsTVhOQ3plaWVsR2lqb1cxNDV0MTMv?=
 =?utf-8?B?YWhEODlVamwzUVVJdEdFSThYOXN4eXYySnQ4N3RibDdKK2RBbWJnMzBqOE1G?=
 =?utf-8?B?VmtHZXU2V0NxWG5ndHFZRkpXdDlFNjJKUVpPYTE0dEJ4ckFEeURWRE1pLzBY?=
 =?utf-8?B?ODkxdnA2YWVwZ1RqaDFST1U5eWttZWo3RXpGNmJtNFBLUGFseWdmOU1Qcngy?=
 =?utf-8?B?NWEyWFlRMEZXbFpiYVUvQnBOeDgzSko1NmgwbG1mVUphOThpbjVocmlXUGVP?=
 =?utf-8?B?dWJLYlNJdnBEZ2VwK3JqNnBjR21MblZBTTJwYVA4Z2hLK3hGU2plQUVycHA0?=
 =?utf-8?B?MEJSa1d1VjBLSFFKNVhKRXdQY2oySGpsNUlha0VpYW8xdHFPY3N6dHVMQk82?=
 =?utf-8?B?SWtqSmdKSTNGZHM2OENjamVkbTI0eDQzUkV3RWFkcE1RcFR3b0h0K0hZOHdp?=
 =?utf-8?B?K0pSaSs4YVdmeXBrU1grNjRoZVIzRitFRFBEbWlFL2ZKaDluSXhIaXg5dS9M?=
 =?utf-8?B?RWt1bzFGbUN2Um9KQ01qSUlzS2RUby8ySnpRSmpFdlpjZHk3MjV5ZDU2QVBa?=
 =?utf-8?B?YTdYbDMxdUFUQW1QaStUVjZ6bUNDVGl1Tzg0VkFMbDNtcE5SZDBIaFduVktM?=
 =?utf-8?B?eWlBTHkvVm5vTDhvWEl2b0VjeXRoQUJWVmRIL29JM0dWR1N4V3JQWlcxeGVX?=
 =?utf-8?B?dnAwcjIxOEZRNVlUTndWdDBha2VwYXNBWjhIZHhxOWZHN013L2JORUFCRUwr?=
 =?utf-8?B?UFlCcis3M1VzbHFDaUtESnZmNEord0xVa0hiK0cyZ3VhK2RYa1Vab2lQT1or?=
 =?utf-8?B?eVdrZkx0aFl3aDYxZzJKT2ZNa2NEc2JoSEY4YUtlNHN3RzgzMk85U3BBdktX?=
 =?utf-8?B?V2tweFJDZWdwako3MnBWVGNrdVQ3R09yWUMxQXJkRFJzUFVoakJsV0doaFdl?=
 =?utf-8?B?NmlsM2FCTkxuajl6OHIzSGp0NFIwemxENzIxRmFvcldxZ01tdlVwcTdrR0dP?=
 =?utf-8?B?eFNpRHBmeDFoNmNkZXBVWmFIbmpiVm1jV3JPcGxwSk9nWkdNcjh3QU03LzRk?=
 =?utf-8?B?aW1BTUNDZVkrSHpiTEVLVzdGVi9zZXdWYXh1MFdPc0tYTlpHM0VqbDY4UXdJ?=
 =?utf-8?B?QmtUR2ljWTNYVHFHZnl4Q2crNVE4WEtZOE1wTlZBVHUrYm5IQlptTjUrSnFx?=
 =?utf-8?B?Z1RWRmM3QlMwYWFMdlpEa3pLMlRTNnJUOE5udnlLOEh4U3Z0b1dVTEp5cjZo?=
 =?utf-8?B?M0lQSks4ZTJha1ArRSsrdFhXaUROMVkxU3hUcEhybXl5QnIwaXpGRWlueW1q?=
 =?utf-8?B?eHNNcHNxVnk2VTdmdGlzY1J3N0M4ei9pRmxWTzhxV3Iyd29tL3FJVVMySDd3?=
 =?utf-8?B?dE1LL1d4WStYcVBYYXI4dElrK2xnRkx3R1BUS1JSZkZYTG1mNXdiaVNIam1y?=
 =?utf-8?B?RVlBQ1NYNitMY1JaOVZibG8xSkxiQkVhblhxK1cyemJXRDdGNG9iSmNHQ2dZ?=
 =?utf-8?B?RUxxQ0xyUXhDU0JpVVFwbnI1VTdXM1hEOE5UN3IwTDNST21ITm9vWHY4THdC?=
 =?utf-8?B?SGVmUjJlR1hCOXpkNC83WVlWQzVJbmx6RG13K3BUNVlCWGV6VW5CcnB4aW5R?=
 =?utf-8?B?UkZ2TGhUc2hCSFhlNTF0THFSWTFJYUNYd3VYQTdUVllFbVpzcU91Z3IzcGVD?=
 =?utf-8?B?MlR5YmRxOEFWY0VJeHpaVldoN2lSQXd6VUFVeFZXZVIweXJ5UlNYa0krRU52?=
 =?utf-8?Q?y0qHTzwg8NI/Qv/Pj7RfpKd0x?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5d6c5186-a8e0-4a0d-4c90-08daf48f369c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 11:21:56.1934
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ghDypdTZbnglhhC8S69x3+Pd31G/e+CHW7Myyc5k5+TFqSc6alyMUhltP8Y6YZ6ODp9wSX2xaWNHOvoT+2tx4w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8780

On 12.01.2023 12:02, Andrew Cooper wrote:
> On 12/01/2023 7:46 am, Jan Beulich wrote:
>> On 11.01.2023 23:29, Andrew Cooper wrote:
>>> For posterity,
>>> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/3585379553 is
>>> the issue in question.
>>>
>>> In file included from arch/x86/hvm/hvm.c:82:
>>> ./include/compat/hvm/hvm_op.h:6:10: fatal error: ../trace.h: No such
>>> file or directory
>>>     6 | #include "../trace.h"
>>>       |          ^~~~~~~~~~~~
>>> compilation terminated.
>>> make[4]: *** [Rules.mk:246: arch/x86/hvm/hvm.o] Error 1
>>> make[3]: *** [Rules.mk:320: arch/x86/hvm] Error 2
>>> make[3]: *** Waiting for unfinished jobs....
>>>
>>>
>>> All public headers use "../" relative includes for traversing the
>>> public/ hierarchy.  This cannot feasibly change given our "copy this
>>> into your project" stance, but it means the compat headers have the same
>>> structure under compat/.
>>>
>>> This include is supposed to be including compat/trace.h but it was
>>> actually picking up x86's asm/trace.h, hence the build failure now that
>>> I've deleted the file.
>>>
>>> This demonstrates that trying to be clever with -iquote is a mistake. 
>>> Nothing good can possibly come of having the <> and "" include paths
>>> being different.  Therefore we must revert all uses of -iquote.
>> I'm afraid I can't see the connection between use of -iquote and the bug
>> here.
> 
> So I had concluded (wrongly) that it was to do with an asymmetry of
> include paths, but it's not.  <../$x> would behave the same, even if it
> is a bit more obviously wrong.
> 
> The actual problem is the use of ./ or ../ because, despite how they
> read, they are never relative to the current file.  The contents between
> the "" or <> is treated as a string literal and not interpreted by CPP.

First of all, the C spec says nothing about how searching is performed;
it's all implementation-defined.

Gcc documentation in turn is quite explicit: "By default, the
preprocessor looks for header files included by the quote form of the
directive #include "file" first relative to the directory of the
current file, and then ..." This is behavior I know from all compilers
I've ever worked with, so while not part of the C standard it is
(hopefully) something we can rely upon (or specify as a requirement
for people wanting to use the headers unmodified).

So I agree using ../ inside angle brackets would be bogus at best, but
I think using it inside double quotes is generically okay.
Hence ...

> So furthermore, the public headers are buggy in their use of ../ because
> it is an implicit dependency on -I/path/to/xen/headers/dir/ being
> earlier on the include path than other dirs with these fairly generic
> and not-xen-prefixed file names.

... I don't see any bugginess here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:31:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476008.737971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvnj-0002c8-TP; Thu, 12 Jan 2023 11:31:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476008.737971; Thu, 12 Jan 2023 11:31:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvnj-0002c1-Qg; Thu, 12 Jan 2023 11:31:19 +0000
Received: by outflank-mailman (input) for mailman id 476008;
 Thu, 12 Jan 2023 11:31:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFvni-0002bv-Ag
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 11:31:18 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2083.outbound.protection.outlook.com [40.107.15.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a09b9092-926c-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 12:31:17 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6879.eurprd04.prod.outlook.com (2603:10a6:803:132::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Thu, 12 Jan
 2023 11:31:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 11:31:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a09b9092-926c-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LZjvuQn+kyTQ9VHkz1bASGQ98kxRESvVFRsyTm5IdVYW1Ns/IHF6HJ0huUB7RmJXfcSktVW0eoVxpYt4qOWwp9gSAySjpyWdz6eqih+DEfFnCjT2rDpJ++WddyOuCeGjzzkvFJHBGrAM2vDUN/VhVcLg8lKAsEzD07BcOWw/y0rq1A8nTq2/URWy+PuP+LmicAiKYXEeZsxDqQwR0O7I2oeqfnL92JChgb+VLhnsJQ34KC6R5PASQ1t9Q0SQLatBYVcMAoRzGPwR0NWM2Izhczr2RFJe8luxYGkBG+Yqj2D9fSNpyWb5CDXer9iUTb1qFp3eIW+ipMWxp0Gji7SGfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QzJ8GnLtgQgzREvZeQzHoLZ4aqo9Q7Nx3aQJCiyMG5o=;
 b=Vd0Pk4gpqU4IQcjsv3JGdfKZBt+mvG06AE2xSN6xBwxI7VomPzynAzOWu/8P7plTcdTvcfMtr0UIFYj8sswKIoFs+piWwCLJ00HVEJTVA9jc64obySV53WgU/9djvRvIc+QES+Dt6tHJrjBc4L99GHBQ66mN1EEQPkE+L9dD8LTzrYs9L36Sw6sGjQKA8odKV+70RAcu4ipqZ+9BPn3gxOgLRC2RPSzL/vXB1t5yOLTFOoorz/uBoeoXEaySwVFxYmNCtCWw6fG8Pd6nQgq/6iWLKUvT57WMkyDRIjE1hO7rPLUJMi0MSJSsRcEIpVmv2PlL5wwObiJVGhaaI7oF2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QzJ8GnLtgQgzREvZeQzHoLZ4aqo9Q7Nx3aQJCiyMG5o=;
 b=WNmnQF9FZhx5ImVZR1r9DzZrxH2K7mgl9dK0O5ENooteDitRGOR4OBSJXukD6+cabDzK3uK/J9wP1tc/Dm+DeghHGj35hQdbW6GNzHYvPlDXdHcB/DQPVDIk8IYDRHeSwaFWAYwXILRMI+Xq37uTP5gQE+kfMHZrGQpIBF2yTwl+7GcpRrM7ypfxJdgWKGkvoD0+bdhm1cx4t4BoJbqi2JNDtDTbBkbpttB3sxHWmODVBd4FQua61L8p/Hq+Am198AAGkwwPxwWC7fNEt1wvcn1k+cq1boKjS98PmYZk+gGR00zk2pVK3GM60cS4H3ew7EIKriTs0aIDLbeK5oMgjA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
Date: Thu, 12 Jan 2023 12:31:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-4-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104084502.61734-4-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0076.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6879:EE_
X-MS-Office365-Filtering-Correlation-Id: f5fb5c42-3cb9-4f64-f9d9-08daf490839e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7WxWf3LLiyl6ZYtiVA/ZtiqVYz/mViB9MHBKfvVRIOdfLycXVFKEow7IF4y+vznppajpXQP5XxR1nKWJg9TgIg+dwKty6dFEPhG5bIUEmPJ7gKH1C64uPZEj86AwOfmoHwLETo5bKrRq3nvyFFt6Hvewz+H6Dd9WwFvkdA/q+if5L3FdTKlUtV234mEz5esbNroV6hGjst+uX5Llh3yli9Wp67gcZmb52syF4N8GcFfOz0wOFqEXtlCCqvwEbkGuamzC30dvwK95X+IVzFADp7eYhpiBg1RL+tf97YnOWv8Ik2rFspaxDIyxAyXwzerHrwkg9jPcdkTQ2GP3F+XREMWvl9zgAjdtnT5aayERiZfBPqzyHZIW3QdG/Pw0+s7VKndKzswNzjFv8wzCKgM17GGmnEibrsZuOtjZO14A84izQ7WXXY9qsLheP2GMrM++vrrcDxxcdHl1IkPSuIhsn7nKCyvuxFObGoJnInYGpqOHJ7pjBY967fA09daAuGBhAlbjePjQo59kPO4A+PPoqfAsmvlXTTnN+hpVcGzT4vuHtbF7WYZ8O7nhkgb5OloovC2S+6nVIssvdDDoqz4+VIbfTKcOK/sJc6vwR8bvQefRw9IlyQodVQwvIRgx/RMpHswNAQdUNrWPsKyIlkgEVV98+lDO+1Y1Wjqua0Z68J1GfxIZ17kQP7bjYJFAtm9JL5tHQPJ9Q9duSooWgzkIqL8pg6t+bZZ+w81F4ZHnhJM=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(396003)(346002)(366004)(136003)(376002)(451199015)(53546011)(6486002)(478600001)(6506007)(6666004)(186003)(6512007)(26005)(66476007)(316002)(31686004)(6916009)(4326008)(66946007)(54906003)(8676002)(2616005)(66556008)(38100700002)(41300700001)(5660300002)(8936002)(31696002)(86362001)(36756003)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dzk1NkZwV3paclJUM3dXYUJIU2FnNmRxT0VvdjRkNks2VXVFNUFKeFJ4eEdU?=
 =?utf-8?B?RG5TRWEwdXhnNUpoYTdGQmxRNVU2dHBlU0Z1QkV4bk54cjhtc0ZlL2VJeHps?=
 =?utf-8?B?L0NIOHBQelJmcXBZSi9wY1lRS0NrZmU5eWlJQlJreGFEUWlWMTZxSVhWWUNl?=
 =?utf-8?B?RjNYZmRGSzBmb01yUGYwSHJkY3k5WEVmeS9rdnlITHljVTgzY29ZNnc2RkU4?=
 =?utf-8?B?c0pwUWEyR3Q3aG8weDNXYU1LVnl6akkxUHU4a1FHVzF5aWtaeWpqNW5QQ3JL?=
 =?utf-8?B?WkdYUVJjVmN5K29XUjVmdStRcW9LUVpJYWNtTGRpclZjbEhadDhlbndGUkpM?=
 =?utf-8?B?NllYK2UvZmQ3UU5PR3lySkdzamlTV3RoN2xRSHR6ZGlMNWlPZncrV0swYlU4?=
 =?utf-8?B?TzBnREhGeXg1aS9KNCtKU0d1dGlaQU9hU1hvMmpNbDNVMWhEa1lvcmhQVlBq?=
 =?utf-8?B?MTdTWnI1Mzc2N3BmbnlhM3lyOGJkbncwVmIrRFBrbDlIZlJNK2ZWWEJ0R1o1?=
 =?utf-8?B?YWtBeVdHcmZmQ3VvVkx6MHRJMGxYTGhZdWgvNHptYkVHMnV0eEh3ekhWZEZm?=
 =?utf-8?B?Wi9mWE5KUHlNZTFoZHpmZkp4ajczM09vVjUxb0xUNW1CQlgwd2E2bUVHOGRW?=
 =?utf-8?B?dnlSQ1JtUWZCd0ZwbVlWZnBUeWdDZnFuelg0TzF0R2FVUFZuQWlhaUo1cktR?=
 =?utf-8?B?VDFUUm42czMzVGxLSGNDVDZLTUk0U3JhOGJUR2UvcW5Ud3pqejMwdFZ3SytJ?=
 =?utf-8?B?NWRvKzVTcGs0TzFFN2hPYnpYSkRWWFhNSHNRSlZaU1Jlc1V3MDcyZlp3Z1JN?=
 =?utf-8?B?b3BqREtDbkQxTkVJOE9nbk5qK0lZSVR0TjVsNEgzN01NTkFqRm0wdllPTFFY?=
 =?utf-8?B?SWxqMjc1dGI3YTMycitEOFhhTExIUUpXbnUxZ1RnYlQzUjR4UnlSTlV4UTZX?=
 =?utf-8?B?WHhkQ0p4dXphNlpLU0JFWXhmTTkzOHBDQ013ODR2Sklsd1NuNjVWN204ZStK?=
 =?utf-8?B?OENvSFYwVmZJc0N0UG9iQmlXVG1ONHUzQ0pSUGY1cWNHRXNtTTlqWUNjQXdN?=
 =?utf-8?B?am1rU3daN2NJMitzbk0yZGlRKzVYcm5hbXp1WW9MSzN4K0NvWnRHdkh4NFc1?=
 =?utf-8?B?Ym5lb3g2K1FvWnc5RllsZlhteHNFU2ZETnRmZGNqWmZtNFFBbWRyTnZGOVQz?=
 =?utf-8?B?aFFFSkxUektWdGN0c1Q3REd6bkNVN3FQbEVHVlQxR1pvOGgyZ1FpR2twZTRy?=
 =?utf-8?B?aHlPaGRTY25yWWtVUGlTZ0pCQlRNK00vQVdmeEMwWU5xZGhRTFhGR1h0RDk1?=
 =?utf-8?B?ODlWSUR0bUg0cytaU1dBU1laOHRLVmhNdS9sK0FjSlZjQ0l5NDNvbGE4ZXJu?=
 =?utf-8?B?amRUUVRqL0prdUc2RUorWlREaXlhdFF0anZqbERFSTlnNVlMM0lKTHY2RWNT?=
 =?utf-8?B?VW1uMWNLS1Z4WU4vRUtIQ0pMVXIwcU9xZDB1QTRCbkxPLzNYU0VXdDk2Ulcz?=
 =?utf-8?B?OGtidnZxeDRzNUNLa203MEZWL0V2RUtGbjR4eURWeDdOV1JvcEx1dFZyRnU1?=
 =?utf-8?B?YjJXZHpVR3FHcEpvb21zUU5WZGdzUExld29mZG5URFlBdktVNnhnVW9senhx?=
 =?utf-8?B?VkRJa0xmakZYWjkrUnRzeDU3TUJjdlk0QmVLcGRQK0QrT0h0dnhHYytmeUUx?=
 =?utf-8?B?bnkrazF0VWZoT3JQWTRiMHhMeHM0bm9HbHR0QTV4UUk5ZTRVU2NSUzNhTW9Q?=
 =?utf-8?B?MFNrTWpWYmoxRDF2QkoydDh2UVpTSGVvQXk3WFR6a2IxQ0FxTkpTWE43dFFR?=
 =?utf-8?B?TlRxMnBVbnRLMDRMckVzRThwa1czWXhRSUI4akxTMWUvQnBOMnBSNkJLL29y?=
 =?utf-8?B?bkxJMmRVU0NNUnJybEgvcEdXVGlzQUhEU1JBalU4UUZxcHpaN0JIT0FicXFx?=
 =?utf-8?B?U2NQUnN4SExFUzQ0Umg2OFlqeFBZUko3d04wK1orWGVTeWxMVmhVOG83SUox?=
 =?utf-8?B?dGRwREFlT0gwVHdIQ1ljT2lFbDNKWDJOVzhEamxXVVl6cXJLdU9uWTg0TU9s?=
 =?utf-8?B?Tlk1dldNTENtS25tRzlBSWV3NEdPSGxOTWRjMTgzaXU3cUFDVlFwU2hnN3Nt?=
 =?utf-8?Q?RG3KOwTWv1NBgboFj9h0fouua?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f5fb5c42-3cb9-4f64-f9d9-08daf490839e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 11:31:14.8767
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZoJSsRkGJ4Zsg6wkORHO95NRfcsh3O0IROMWEkD83765hF7/elG6s5h7ZuuAkWo2BMlBjivX9bLxelUi1uZEKg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6879

On 04.01.2023 09:44, Xenia Ragiadakou wrote:
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -82,11 +82,13 @@ static int __init cf_check parse_iommu_param(const char *s)
>          else if ( ss == s + 23 && !strncmp(s, "quarantine=scratch-page", 23) )
>              iommu_quarantine = IOMMU_quarantine_scratch_page;
>  #endif
> -#ifdef CONFIG_X86
> +#ifdef CONFIG_INTEL_IOMMU
>          else if ( (val = parse_boolean("igfx", s, ss)) >= 0 )
>              iommu_igfx = val;
>          else if ( (val = parse_boolean("qinval", s, ss)) >= 0 )
>              iommu_qinval = val;
> +#endif

You want to use no_config_param() here as well then.

> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>     iommu_intremap_restricted,
>     iommu_intremap_full,
>  } iommu_intremap;
> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>  #else
>  # define iommu_intremap false
> +#endif
> +
> +#ifdef CONFIG_INTEL_IOMMU
> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
> +#else
>  # define iommu_snoop false
>  #endif

Do these declarations really need touching? In patch 2 you didn't move
amd_iommu_perdev_intremap's declaration either.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:38:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:38:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476014.737982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvuA-0003J8-J2; Thu, 12 Jan 2023 11:37:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476014.737982; Thu, 12 Jan 2023 11:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvuA-0003J1-GM; Thu, 12 Jan 2023 11:37:58 +0000
Received: by outflank-mailman (input) for mailman id 476014;
 Thu, 12 Jan 2023 11:37:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFvu9-0003Iv-A7
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 11:37:57 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2075.outbound.protection.outlook.com [40.107.247.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8df43904-926d-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 12:37:55 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8186.eurprd04.prod.outlook.com (2603:10a6:10:25f::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 11:37:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 11:37:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8df43904-926d-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b73afacf-a23a-7e51-9bd3-b90b3eb484bd@suse.com>
Date: Thu, 12 Jan 2023 12:37:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] x86/acpi: separate AMD-Vi and VT-d specific
 functions
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-5-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104084502.61734-5-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0135.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8186:EE_
X-MS-Office365-Filtering-Correlation-Id: a1a08cc4-2ac2-4f17-73a5-08daf4917121
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a1a08cc4-2ac2-4f17-73a5-08daf4917121
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 11:37:53.3674
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8186

On 04.01.2023 09:44, Xenia Ragiadakou wrote:
> The functions acpi_dmar_init() and acpi_dmar_zap/reinstate() are
> VT-d specific while the function acpi_ivrs_init() is AMD-Vi specific.
> To eliminate dead code, they need to be guarded under CONFIG_INTEL_IOMMU
> and CONFIG_AMD_IOMMU, respectively.
> 
> Instead of adding #ifdef guards around the function calls, implement them
> as empty static inline functions.
> 
> Take the opportunity to move the declarations of acpi_dmar_zap/reinstate() to
> the arch specific header.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

While I'm not opposed to acking the change in this form, I have a
question first:

> --- a/xen/arch/x86/include/asm/acpi.h
> +++ b/xen/arch/x86/include/asm/acpi.h
> @@ -140,8 +140,22 @@ extern u32 pmtmr_ioport;
>  extern unsigned int pmtmr_width;
>  
>  void acpi_iommu_init(void);
> +
> +#ifdef CONFIG_INTEL_IOMMU
>  int acpi_dmar_init(void);
> +void acpi_dmar_zap(void);
> +void acpi_dmar_reinstate(void);
> +#else
> +static inline int acpi_dmar_init(void) { return -ENODEV; }
> +static inline void acpi_dmar_zap(void) {}
> +static inline void acpi_dmar_reinstate(void) {}
> +#endif

Leaving aside my request to drop that part of patch 3, you've kept the
VT-d declarations in the common header there, which I consider
correct, knowing that VT-d was also used on IA-64 at the time. As a
result, I would suppose the movement might better be done in the other
direction here.

> +#ifdef CONFIG_AMD_IOMMU
>  int acpi_ivrs_init(void);
> +#else
> +static inline int acpi_ivrs_init(void) { return -ENODEV; }
> +#endif

For AMD, on the other hand, without a second architecture re-using
their IOMMU, moving into the x86-specific header is certainly fine,
even though there's a slim chance that this may need moving in the
other direction down the road.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:40:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:40:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476020.737994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvwC-0004RZ-1S; Thu, 12 Jan 2023 11:40:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476020.737994; Thu, 12 Jan 2023 11:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFvwB-0004R7-TC; Thu, 12 Jan 2023 11:40:03 +0000
Received: by outflank-mailman (input) for mailman id 476020;
 Thu, 12 Jan 2023 11:40:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFvwA-00048u-Dx; Thu, 12 Jan 2023 11:40:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFvwA-0006Pr-9Y; Thu, 12 Jan 2023 11:40:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFvw9-0005Ii-Ib; Thu, 12 Jan 2023 11:40:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFvw9-0006Ov-I9; Thu, 12 Jan 2023 11:40:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175736-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175736: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=12a3bee3899cdba8b637a7286f24ade1214b6420
X-Osstest-Versions-That:
    libvirt=a5738ab74c2ab449522ddf661808262fc92e28d7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 11:40:01 +0000

flight 175736 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175736/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175718
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 175718
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175718
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175718
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              12a3bee3899cdba8b637a7286f24ade1214b6420
baseline version:
 libvirt              a5738ab74c2ab449522ddf661808262fc92e28d7

Last test of basis   175718  2023-01-11 04:20:41 Z    1 days
Testing same since   175736  2023-01-12 04:18:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jiri Denemark <jdenemar@redhat.com>
  Weblate <noreply@weblate.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   a5738ab74c..12a3bee389  12a3bee3899cdba8b637a7286f24ade1214b6420 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:49:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:49:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476055.738023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFw4w-0006XM-Ej; Thu, 12 Jan 2023 11:49:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476055.738023; Thu, 12 Jan 2023 11:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFw4w-0006XF-AB; Thu, 12 Jan 2023 11:49:06 +0000
Received: by outflank-mailman (input) for mailman id 476055;
 Thu, 12 Jan 2023 11:49:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QgwW=5J=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pFw4v-0006X7-12
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 11:49:05 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1be0c42f-926f-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 12:49:02 +0100 (CET)
Received: by mail-ej1-x631.google.com with SMTP id u9so44174752ejo.0
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 03:49:02 -0800 (PST)
Received: from [192.168.1.93] (adsl-139.109.242.225.tellas.gr.
 [109.242.225.139]) by smtp.gmail.com with ESMTPSA id
 ky17-20020a170907779100b0084d3d6500b1sm5683416ejc.101.2023.01.12.03.49.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 12 Jan 2023 03:49:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1be0c42f-926f-11ed-b8d0-410ff93cb8f0
X-Received: by 2002:a17:906:bc47:b0:78d:f455:3110 with SMTP id s7-20020a170906bc4700b0078df4553110mr58672489ejv.56.1673524142226;
        Thu, 12 Jan 2023 03:49:02 -0800 (PST)
Message-ID: <f4771b3d-63e8-a44b-bdaf-4e2823f43fb8@gmail.com>
Date: Thu, 12 Jan 2023 13:49:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Paul Durrant <paul@xen.org>, Roger Pau Monné
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-4-burzalodowa@gmail.com>
 <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/12/23 13:31, Jan Beulich wrote:
> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>> --- a/xen/drivers/passthrough/iommu.c
>> +++ b/xen/drivers/passthrough/iommu.c
>> @@ -82,11 +82,13 @@ static int __init cf_check parse_iommu_param(const char *s)
>>           else if ( ss == s + 23 && !strncmp(s, "quarantine=scratch-page", 23) )
>>               iommu_quarantine = IOMMU_quarantine_scratch_page;
>>   #endif
>> -#ifdef CONFIG_X86
>> +#ifdef CONFIG_INTEL_IOMMU
>>           else if ( (val = parse_boolean("igfx", s, ss)) >= 0 )
>>               iommu_igfx = val;
>>           else if ( (val = parse_boolean("qinval", s, ss)) >= 0 )
>>               iommu_qinval = val;
>> +#endif
> 
> You want to use no_config_param() here as well then.

Yes. I will fix it.

> 
>> --- a/xen/include/xen/iommu.h
>> +++ b/xen/include/xen/iommu.h
>> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>>      iommu_intremap_restricted,
>>      iommu_intremap_full,
>>   } iommu_intremap;
>> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>   #else
>>   # define iommu_intremap false
>> +#endif
>> +
>> +#ifdef CONFIG_INTEL_IOMMU
>> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>> +#else
>>   # define iommu_snoop false
>>   #endif
> 
> Do these declarations really need touching? In patch 2 you didn't move
> amd_iommu_perdev_intremap's either.

Ok, I will revert this change (as I did in v2 of patch 2) since it is 
not needed.

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:53:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:53:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476061.738033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFw8v-0007vk-VP; Thu, 12 Jan 2023 11:53:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476061.738033; Thu, 12 Jan 2023 11:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFw8v-0007vd-Qp; Thu, 12 Jan 2023 11:53:13 +0000
Received: by outflank-mailman (input) for mailman id 476061;
 Thu, 12 Jan 2023 11:53:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFw8t-0007vX-Ob
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 11:53:11 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2089.outbound.protection.outlook.com [40.107.8.89])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aee1dec5-926f-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 12:53:09 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7440.eurprd04.prod.outlook.com (2603:10a6:800:1b2::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 11:53:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 11:53:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aee1dec5-926f-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fa7e38e2-2a44-6792-ea81-472a4d2b2100@suse.com>
Date: Thu, 12 Jan 2023 12:53:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [RFC 5/7] x86/iommu: the code addressing CVE-2011-1898 is VT-d
 specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20221219063456.2017996-1-burzalodowa@gmail.com>
 <20221219063456.2017996-6-burzalodowa@gmail.com>
 <5a78ab9f-ce3a-a4f5-9768-725bd9f8a745@citrix.com>
 <13b1c5a5-35c3-a413-3458-ce39c7799a10@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <13b1c5a5-35c3-a413-3458-ce39c7799a10@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0210.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7440:EE_
X-MS-Office365-Filtering-Correlation-Id: cde16e8b-caa7-4064-9d4b-08daf4939182
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8NQtk6IyJZg3oh27fFPagSRJ8MIJU3NRAK2fqbJNtpyLt4PpCuooXD63b7dyDFEChI9AIMIY/ui3cotdlF5Ppnrlflvg7rJhYMbuosN/Yedh5JyQIDrlDSnyfUZgy0qaFZUucKyLQKwVCFxndGPxlwZDE0461UL+bvayvsJ2CphVImZDEx5ZeMh0NxkKuF90OHheL72gXZRWIw7iU0VBK3NV69awrFeMv7ppHomLyBNyGUphO7BqTT4BXt3Cddly9G6L6lmCtNfC/zrKsDeUG5/fKfW9eWWzSkW9ShqqDgZWoTYs9tjgSzlO/QczRfSFSSOX5GuhTMU/2FMrvWq1JcpKpOmRLXwcbwVMQI9BlWF0s0D+JX7ZHFcCXd6Ir8+2Vh2QIooiwgHtjRqFuUS8+F4mFHhDTBPdVWBy5mREeKYyikGRPEfPguay7oJT7k8CHfQeNvnlAB93idOs582QNxcT9GWytmROZhcoU/EAqzs3lmfTIABSbaj1nh8eXaEHAa+vQZVx8c124L9tFcA4Shcmepr2ouiF0vcdGvC94A25lFeGAuJgpnOriIg5CTmf8xVnYsmunmFldbPPMPpvefpe3CuDaxJ3EMRTFaSPXMzkan+rKm5i/8qpBY6mZ+z7/Ahin4wQBlQR+IE3n4FwnULYFmdv20I3xPuTuwNGgIBriEjjIfbk2ZwKR/d/OLFeq1naT3U3EQJ+EU640chubO4WVIV96QdJadkRpG9zNUs=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(396003)(136003)(39860400002)(376002)(346002)(366004)(451199015)(26005)(186003)(36756003)(8936002)(53546011)(6512007)(31686004)(2616005)(6486002)(6506007)(478600001)(66946007)(66556008)(5660300002)(66476007)(316002)(4326008)(31696002)(41300700001)(38100700002)(110136005)(54906003)(86362001)(8676002)(83380400001)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YzNTaGxSQnRLbmtwcEw5YUJjYXlUUllzRE10V3dLNGlubTZ3NFRreUhyZmho?=
 =?utf-8?B?RStKSHFlc3hkMDVkazZlc1F6QnRPYWJKZzczcU1IOFMyeXloNFF0VXM2S0Vi?=
 =?utf-8?B?WTk3Z09CWXc4alJNcDZqMlJtczFzb1dNc3c5ZlkwMWdWRmF6YnprWDNsRlJF?=
 =?utf-8?B?NTFBeGZCeDl0QVBOcnZUMUZwL1dLZEFTWm9NOHI4TURnYTRBOTM4bHYwSkta?=
 =?utf-8?B?RmpKWVQxcTd6WkJxengyMTRmYyswRG1EVXZQVjFuTUdnZ3lpMk5SRWlBdVJW?=
 =?utf-8?B?b0dMK1NLTjVBQkNrcUFISGoxSW5JRS9QTFZWYWJBSmdMalM2Q20yVUlUeHRD?=
 =?utf-8?B?TXdsZ25HUHIzTTNCVHFHYS9VVUxPQit3WTdmekVobUxtTy9MaCs5ckFFTDN0?=
 =?utf-8?B?M3hOMkVSQVdodm5uWk5Vdmw1V3ZGQVB2T2lnczdoL0pTQUtoaG10VGxHS3VX?=
 =?utf-8?B?eXhjUGlUcUZMamlRU0pGaXdIVFYwNHA0NWl6NlVHRm9Xc0Z5UEhoN3R2WjYw?=
 =?utf-8?B?TmJjK1JmeDJnTnJmbGRhb2lTRTVnUnRHNDc0U1JhVU5nZG9jUFVTM0FHSmgz?=
 =?utf-8?B?TzIvcjd3U3lFRUs0YmhQTWxmYnVQZ0V2VzhxVk90dmFTSDZDOEd5NzVacjdG?=
 =?utf-8?B?S1RoZWQrSXRDcHVhdXYzZFIxRmdqRzQzYkhSclVlV2MzZlhjZ1ZFaEdSK045?=
 =?utf-8?B?bW9qZ1E2Q1c1VFh1eWlFUnhqS1Q1aWtBcEVBWWlaZExmb3p5cnBhNHAxeW9i?=
 =?utf-8?B?aTQ3R2ZPejBnK0lrVVdmV3FQTjZKZ2IyUS94ZXc2OFg2RVRvUWtpa3dCYnJm?=
 =?utf-8?B?NDVtUllLbkliN3ZpV2ZXYk1MVlIrR2lFNHgxN04rbkZrdnFISWhOTDdtZXVq?=
 =?utf-8?B?WlN2UGk5U3dpV1VnV2pOQ1ZoRmFSZFVpbnNNK0lmaExzSnl0cVVhZ2dGUi8y?=
 =?utf-8?B?SEJJL1dFMnZuTm9GOWlxN2pyQi90czhicGJ5SnpORnN5K2VJdzVtd3RlcGY3?=
 =?utf-8?B?MnpuQzZJenRMTjRLTHZjZ0doSmtDV0pid3FpYW5uS2FKcDZtd0t0TEh2eTZh?=
 =?utf-8?B?YTZ6czdYd2pTdTJMVm51U2dmL2ZoRUoxd2phRTR6VWU2SDJzbndlYXBoTzVD?=
 =?utf-8?B?ZVZuaitoT2lKb3I2SktrU3ZDcHluNXNxdERGZWtZdWR1V0RYN2paK253eXBr?=
 =?utf-8?B?dDlxbjdwK1FVTUc3NUdJc1R0YVZ6TTRoMDJoajhTK2Q3aHQwSFZnb2VvUzk0?=
 =?utf-8?B?eVFEQlc2VnZXcmRSSCsyQTR5dW5kKzVmY2IydjRRTXMvOGtIUkdHd3AwRW45?=
 =?utf-8?B?OUcwRDZoWTIwWTZkZ1FkdDhGNUVldExoMHV0cktBdG1kS3NDd0l6MHI2NWJ0?=
 =?utf-8?B?T0JzbkxvVXpEaUhMKzJtRERMb3VlOUY1d1NtWm1aM1N2dkRWUWZNU1NNSEwx?=
 =?utf-8?B?UGdGZS9RNjc1SkFGUVJzczhMZGJPVHFiQ2kvbWpiQ0dmZTFjcmlENEpPVEx1?=
 =?utf-8?B?cnJFOUNLSzluWUordEkwU0tnMFdIaGYyYzVObGZhczIzaFFES3RWVTZYT2ZV?=
 =?utf-8?B?RC9wTm84aC9xczdUQ2UzT3FqV0VXeDlPUVhOaEFGY0lGYjBkSi84NElQeVdm?=
 =?utf-8?B?VC8rMlhycmlsajNKUHgwMFAxTjdrbGI4ZkhwZ0NSZkNrVlc4Lyt6RUdXZjNi?=
 =?utf-8?B?NzJCWUd4SnBQSHR0RmpDSnBjNXUxdkVac0NEbVcrZFV0ZlNkWlNEbWRENDM0?=
 =?utf-8?B?UFZTTmFqV3J2T3dEbWlkS084c2NXN0t4b3dHS21Md1d6KzVFeXJpTmhVK2Qx?=
 =?utf-8?B?Z01aUWhabTdxeVY4em1aaHZwdTZKb1RXbXpoczdGVjJrTmJjT0ZUTU5lR2lv?=
 =?utf-8?B?K2JvQmkwOXBxdjgxeG1YbGpoa0lXTXRYSjhnSFhFdkhRQWozOTlrMllTWGYw?=
 =?utf-8?B?OEdBbENCb1ZqVitSQ0RpaU10c2ExZGhqM0s0Z0VPSFhtOUxHdXUvWXN1Y3BO?=
 =?utf-8?B?dEZFOU9ZMVZMWjR0bk1hNFRYK3p2OXdqbVdrYWlzR0hMODdIYnFLcDRDK29y?=
 =?utf-8?B?b1p1Y0FDcFRuRG5HcHUzWEdGNlVNWU5NdDhTelB3RlBsU2dIekY1bHFDelpO?=
 =?utf-8?Q?Av9/nSy5QKxildrD5lriwqZM2?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cde16e8b-caa7-4064-9d4b-08daf4939182
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 11:53:06.6847
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ubN3oshNbdXO5GANCBV427504NqdoKSO9GXiIOLfsqWpW3MuoTDL7FKYQ1wQChDrdm1o4LNYLRd9WiTmwEWmKQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7440

On 21.12.2022 16:22, Xenia Ragiadakou wrote:
> 
> On 12/20/22 13:09, Andrew Cooper wrote:
>> On 19/12/2022 6:34 am, Xenia Ragiadakou wrote:
>>> The variable untrusted_msi indicates whether the system is vulnerable to
>>> CVE-2011-1898. This vulnerability is VT-d specific.
>>> Place the code that addresses the issue under CONFIG_INTEL_VTD.
>>>
>>> No functional change intended.
>>>
>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>
>> Actually, this variable is pretty bogus.  I think I'd like to delete it
>> entirely.

The important difference between Intel and AMD was that Intel initially
supplied DMA-remap-only IOMMUs, while AMD had intremap from the beginning.
Hence Intel hardware could be unsafe by default, whereas on AMD an admin
would need to come and turn off intremap. Deleting the variable would be
okay only if we declared Xen security-unsupported on intremap-less Intel
hardware. Extending coverage to AMD wouldn't seem unreasonable to me, if
we knew that there were people turning off intremap _and_ caring about
this particular class of attack. With no-one having complained in over
10 years, perhaps there's no-one of this kind ...

> Nevertheless, I don't think that it would be appropriate to be done as 
> part of this series.

I agree, but I'll want to comment on v2 nevertheless, rather than simply
ack-ing it.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 11:59:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 11:59:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476067.738044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwEo-0000EM-JH; Thu, 12 Jan 2023 11:59:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476067.738044; Thu, 12 Jan 2023 11:59:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwEo-0000EF-GT; Thu, 12 Jan 2023 11:59:18 +0000
Received: by outflank-mailman (input) for mailman id 476067;
 Thu, 12 Jan 2023 11:59:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+83i=5J=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pFwEn-0000E8-JV
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 11:59:17 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2062.outbound.protection.outlook.com [40.107.21.62])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 88028134-9270-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 12:59:13 +0100 (CET)
Received: from DB6PR0202CA0014.eurprd02.prod.outlook.com (2603:10a6:4:29::24)
 by VI1PR08MB5357.eurprd08.prod.outlook.com (2603:10a6:803:12e::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 11:59:10 +0000
Received: from DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:29:cafe::32) by DB6PR0202CA0014.outlook.office365.com
 (2603:10a6:4:29::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13 via Frontend
 Transport; Thu, 12 Jan 2023 11:59:09 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT034.mail.protection.outlook.com (100.127.142.97) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Thu, 12 Jan 2023 11:59:09 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Thu, 12 Jan 2023 11:59:09 +0000
Received: from 1ec22c67a624.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4F09958B-D486-4548-ABFF-F62E25853036.1; 
 Thu, 12 Jan 2023 11:59:02 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1ec22c67a624.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 12 Jan 2023 11:59:02 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAWPR08MB9662.eurprd08.prod.outlook.com (2603:10a6:102:2e0::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 11:58:59 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 11:58:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88028134-9270-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b04EwoN0FW7LGyusLHilN8gQmZ/uqEa7Olpc5LoLThk=;
 b=xcRhboPg/QbZ/qTZiDwo3418jUvNCChBM7sUxlS3dsSxwZjm5X9uVX75cHQLEPt3nmObOfcBbQNn+2VrwSrlLxplgiLNngWMLbMspM3hqEZolZy6KUYNaWUsvcNSx56wSffTkXP2QGJwmgAvWm5YPJ8dsBbvi6qXaG5R00pFblQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 413c25dcfd4db4d7
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SFM3f6U2kaE7sTxEq03FHGCSeM6+aLbBQStIl6dC7jVpGkq5EWvMJ89GvHnMWnDixVb+Uqvr+a7877iY2kD2G9Xuf+viBTX7MVmGCfHsQ4ULurKh8DHxws2vOolb0fUU6QTwRsjfkPXhi63r4R/geIaXCnNkTL6dsOk4LGWhCzOI7p2oJ3wl2UWBpIbf628b240BJ/6qDF5hpJd3PYq91OPzhYw0Fy+gobDUtl9tFO+sqqojjyRII8Z6FRoGM1aOpucNG8zTKcbfQcjjmnMCET/zUVPOYcshyfGaKNLE2wiQm3RnoQw2ACVFYOUk0OqBDOuX/PQgkV6vm7u3zg2KpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=b04EwoN0FW7LGyusLHilN8gQmZ/uqEa7Olpc5LoLThk=;
 b=ASCSPYcB/N2HLZD/UCfmWMZWgk+Y09BRJ7iZw0xJG18av38atFkaGXq87r0dQIVGtVBdTQ+s/cmnpvWg82Ulcx9Da7X71AWyOrsQ7NBO3LWyzZ4WQZibQESvEpZzICRm5wdS0ec6esQ9OuIFpqMWAetruuWxCa91xaJxGiAR0jsfJQPinMWgxAz3PvyyneAGhANhjc44uzKOSJPkVX+ganiALp7wUpH/wwQ4DMvCBwD/pVuN8opUy22cKxI9+5nFuEEx7Onu9521t8NnV2Yp7Mgv2rmuUO/94a18lZ57daan1xf5PyN7VVxV8lD22NZRZirN9fZTO2u1DWxLloMZBw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b04EwoN0FW7LGyusLHilN8gQmZ/uqEa7Olpc5LoLThk=;
 b=xcRhboPg/QbZ/qTZiDwo3418jUvNCChBM7sUxlS3dsSxwZjm5X9uVX75cHQLEPt3nmObOfcBbQNn+2VrwSrlLxplgiLNngWMLbMspM3hqEZolZy6KUYNaWUsvcNSx56wSffTkXP2QGJwmgAvWm5YPJ8dsBbvi6qXaG5R00pFblQ=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [RFC PATCH 0/8] SVE feature for arm guests
Thread-Topic: [RFC PATCH 0/8] SVE feature for arm guests
Thread-Index: AQHZJcp9wvywjfWnlk+ZpWy3G+b6x66ZcImAgAE+VAA=
Date: Thu, 12 Jan 2023 11:58:57 +0000
Message-ID: <EB12FEDD-F3EC-401A-9648-77D7B28F6750@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <3e4ce6c0-9949-1312-f492-913b7dd2cf18@xen.org>
In-Reply-To: <3e4ce6c0-9949-1312-f492-913b7dd2cf18@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAWPR08MB9662:EE_|DBAEUR03FT034:EE_|VI1PR08MB5357:EE_
X-MS-Office365-Filtering-Correlation-Id: 261e2b72-f7d9-420d-4aa6-08daf49469f6
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 hrfVuMOki6+0+okyFsiT4iWZzu+BE5c/pjKIkEeZ8A2LIZvbnbh9jQw4EEJdvv/JrXq0dEbtxnc74YrbiAOYRQeY2jMim9Gp6os6d+yeui9VpgBwc22ZbuOY0Emi2WkMwJMhcPJ9tc+gv+tK4FQ1UUg+pMNM2BSIgCQxNxjSLPqJvX/ih70ig51z2ZVDTMXvq5XBhv4FaaO5c3ernuHQxiDeIk2r63NYIXnHWxBLy/pYMPJkR+Q1BLbi6J7nGqiFYNjsHA8Z/9z27wv2jt8BwFvs+egshKkEkxlzw87IqGh8aqYtTn3P1s4XEHbkbpwiNbVoGb15Ff5n4xTTY0y2JJxbRpjVeVPOxggbRwMoHAe59ui3cAeD8pCKGxOMTpK/XkSM0wnMthKhnqWqT78xRkieXKgn1OpKgulA0peaPv5js46D4ghg71iQ/Zw5hlySlmtEObUnvB/y2TfjRiATTvl5jafdFY9Y6MqpaIsqwPjjU+yZLN6hL6v5o2YgANpC1ybBamHMlNI4sz+a7Lb0VQOqNRXiT1I8YnOtS0OVPuBDTNjg6P4fdmfwJ4Hm5mejZzupTa/9tRG5OsYFdzR8zzApKNS/NhCuE2jK6UUNKOe9xEgV4m2Sq/6K69dBAyMWxyVi8gQZFkEtTThnlkNrmAbukuCjjpVLpnPcVCur/X1wBbxa1qDoey/wvLyV4rKsWCm/Edi2uhrBqvn7fGtl0MQfhrwLg+oW9NycKIronI4=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(346002)(136003)(366004)(396003)(376002)(39860400002)(451199015)(36756003)(33656002)(38070700005)(5660300002)(86362001)(54906003)(6486002)(316002)(6506007)(71200400001)(6916009)(66446008)(7416002)(2906002)(478600001)(76116006)(64756008)(91956017)(66946007)(66476007)(8676002)(41300700001)(4326008)(8936002)(66556008)(53546011)(38100700002)(122000001)(6512007)(2616005)(26005)(186003)(83380400001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <BEEE4A712A1EC04C9C429B54D8A457AC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9662
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bb99040d-916e-4f58-0ef3-08daf49462e4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AePsOndc3Qcq5slBXYbvF9RAM0RktP7k0LCxG5fqKhMNhbHReC10ypZMFB070ZYIFKEAvq6gnfvTj6v98/s9nXJ7FuMjDvcd4M3EWqxYeDCKaFBuDZGi7Lg3R/E5a8pRZ5BsF66KMLO0QNgFWNSaekeGX+1xrVRB7PD9oYpQq9ajdXp9fz1V9X2GHyViy4BXEmSaEhYteS//4LtAp2GuaN9GmDaA2LyC8zF6tUhsYeVLCRuVNPc74yRXkK91JQM7mWPYOBgl5et0OwslJ3KoMEQQYfzPHttmjdwhhCJnYgNPje8sZw9/n7SWX87BES96scNNQSquk4Abl32OQoWjSgSPxqW94k8IjkVHwlQUw8vZg9vubx+CQ2vPvVl9VH4jvf1U4YZB0uIAZ6b7ofgqPkOQpOIraJpQ0Yf4Qsjf/W7TzrYOkBEMOAwZu/zETBTuP9ZYsWZpKtFt8us0mnIrgbg2vj6kpi5o9NKdTjhkSWhnssx9D0tDTEjJvyi1CBzotgwqm+UqufidpxCi+fZN9fZr0seQcXzgg+j0QLLwEAqlOY1K6E7Rd40qZ7K/YN4P0C7bSvZLqqKdlCNBpPSToybV6zhmBNJZrXh844hZXYCdnT5IYy/kEdGRH9or5hAzYVuns+fHdr2oGs/Lr8Af1FN17kG1/LbH5zt9hs5UvY0eDliDQVLoMVtMmahammcNd2bVQ/flVRCL1ONTp+T+Eg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(376002)(346002)(396003)(136003)(451199015)(36840700001)(40470700004)(46966006)(83380400001)(70586007)(70206006)(8936002)(6862004)(6506007)(4326008)(336012)(82740400003)(33656002)(40460700003)(36860700001)(47076005)(8676002)(53546011)(41300700001)(86362001)(82310400005)(478600001)(40480700001)(316002)(6512007)(6486002)(81166007)(2616005)(107886003)(2906002)(54906003)(26005)(5660300002)(36756003)(356005)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 11:59:09.6432
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 261e2b72-f7d9-420d-4aa6-08daf49469f6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5357



> On 11 Jan 2023, at 16:59, Julien Grall <julien@xen.org> wrote:
>
> Hi Luca,
>
> On 11/01/2023 14:38, Luca Fancellu wrote:
>> This series introduces the possibility for Dom0 and DomU guests to use
>> sve/sve2 instructions.
>> The SVE feature introduces new instructions and registers to improve the
>> performance of floating-point operations.
>> The SVE feature is advertised via the SVE field of the ID_AA64PFR0_EL1
>> register and, when available, the ID_AA64ZFR0_EL1 register provides
>> additional information about the implemented version and other SVE
>> features.
>> New registers added by the SVE feature are Z0-Z31, P0-P15, FFR and
>> ZCR_ELx.
>> Z0-Z31 are scalable vector registers whose size is implementation-defined
>> and ranges from 128 bits to a maximum of 2048; the term vector length
>> will be used to refer to this quantity.
>> P0-P15 are predicate registers whose size is the vector length divided
>> by 8; the FFR (First Fault Register) has the same size.
>> ZCR_ELx is a register that can control and restrict the maximum vector
>> length used by the <x> exception level and all lower exception levels,
>> so for example EL3 can restrict the vector length usable by EL3, 2, 1
>> and 0.
>> The platform has a maximum implemented vector length, so for every value
>> written to the ZCR register, if this value is above the implemented
>> length, the lower value will be used. The RDVL instruction can be used
>> to check what vector length the HW is using after setting ZCR.
>> For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is
>> no need to save them separately: saving Z0-Z31 implicitly also saves
>> V0-V31.
>> SVE usage can be trapped using a flag in CPTR_EL2, hence in this series
>> the register is added to the domain state, to be able to trap only the
>> guests that are not allowed to use SVE.
>> This series introduces a command-line parameter to enable Dom0 to use
>> SVE and to set its maximum vector length, which by default is 0, meaning
>> the guest is not allowed to use SVE. Values from 128 to 2048 mean the
>> guest can use SVE, with the selected value used as the maximum allowed
>> vector length (which could be lower if the implemented one is lower).
>> For DomUs, an XL parameter used in the same way is introduced, and a
>> dom0less DTB binding is created.
>> The context switch is the most critical part because there can be big
>> registers to save; in this series an easy approach is used and the
>> context is saved/restored every time for the guests that are allowed to
>> use SVE.
>
> This would be OK for an initial approach. But I would be wary of
> officially supporting SVE because of the potentially large impact on
> other users.
>
> What's the long-term plan?

Hi Julien,

For the future we can plan some work and decide together how to handle the
context switch; we might need some suggestions from you (Arm maintainers)
to design that part in the best way from both a functional and a security
perspective.

For now we might flag the feature as unsupported, explaining in the
Kconfig help text that switching between SVE and non-SVE guests, or
between SVE guests, might add latency compared to switching between
non-SVE guests.

What do you think?

Cheers,
Luca

>
> Cheers,
>
> --
> Julien Grall




From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:02:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 12:02:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476081.738055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwHT-0001qz-Eq; Thu, 12 Jan 2023 12:02:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476081.738055; Thu, 12 Jan 2023 12:02:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwHT-0001qs-Ba; Thu, 12 Jan 2023 12:02:03 +0000
Received: by outflank-mailman (input) for mailman id 476081;
 Thu, 12 Jan 2023 12:02:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFwHS-0001qm-0w
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 12:02:02 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2052.outbound.protection.outlook.com [40.107.21.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ea233db1-9270-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 13:01:59 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6984.eurprd04.prod.outlook.com (2603:10a6:20b:de::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Thu, 12 Jan
 2023 12:01:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 12:01:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea233db1-9270-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jUKktZSxN0XogMcel4W4RqwZNc8E5YPKahgi0z7g1bEpmrjViS7R80xY+uPVXCVFxJJuuIZBfOvzhZ89TeZxzLxxAjp7VPnlVNf3lAMgHhEq8nRuKEGhaiPfGTtGNH4JvhekXtPkpGZPapoMq2PjLveJiseAnNlhWg4Ht+t9EBEVG5S1bbLQMxUrBo8+8/b/i0KNlOMm54a/xJSCZESo9CJU50m98zgA6fgoQhhE3FnpBov6GydgE4c045e1rA020Ghkxw134dkOwY7mPmVBIrh6foQ0ghOarD9boiYArGTv9mxlXHDFgXBwRrRcJx9BLpTGQYatl1Lp0Vit79OiFg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z5xqAFELM6aeYRPpdl2iJ0uN5YRwxvbcSRYvU8IYJrA=;
 b=dy1RfBJUlMAii7gQ0y8Vet9+1T2XriaOj0PETE4ecuahZWWGb9uwe2NKe3z+GOTnIj0QVsgopIECC30/AMraaZz9UzPKmhz8PtdVf0xGv3Hkd2EInm2WUaxd+eDEx+jzmscmjDpcfdwUjSMz/Myf88gt4znLQLzW5MrsXF9JDaappILjkZnVfj7PS7MIYJ0VUK1ViWqIZ0TxhW/5wejIdQmhSoDQm8sgj3WS6uP8fpixggWDJWeqknvF4gbz4n9qb5y3dPZp7TDEjDC1ZsjCTV8UX3OOMFMqRHi+x90NQ9khMtQXgI2htxDIbyzmYsNHksGprSYT6/IpDtEByXC/XA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z5xqAFELM6aeYRPpdl2iJ0uN5YRwxvbcSRYvU8IYJrA=;
 b=guJ2ePpDvdZP3D8gJbIk+BCBais8+K7AlH4HcpzE00zNo9pYt+46e4ARKVNJ/0dD9xJU2rDZRVXuGvNBzBwVcxkJmaoqZ9Lspi7iLRdbnZ5fTaCPabwvtLRYBETJ/gUXWp+TeozXDFTcsOQWvKPnn7nVJWnzLOmAmZ0f+MLZiHh0ilH/L5Hc9t6eYilggO7CxRy0iyn7JrezKp8aHV6K408nRuz+9RIcZO3bT6r6PTRLgI/rmlp04YgDARcZKhGGREBFHpiEaU51apQF5lmv17LMVkdPvYNUW4XEqxfFY/lgPUv9W/7hw7s0gN3gyP4oa1WykaQUZ9xkBs0d2fG4Dg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a71e7cc5-3f86-0397-613f-a796a0309d42@suse.com>
Date: Thu, 12 Jan 2023 13:01:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 5/8] x86/iommu: the code addressing CVE-2011-1898 is
 VT-d specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-6-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104084502.61734-6-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0051.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6984:EE_
X-MS-Office365-Filtering-Correlation-Id: f4ffbd98-65e0-41b3-d127-08daf494cd88
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:

On 04.01.2023 09:44, Xenia Ragiadakou wrote:
> The variable untrusted_msi indicates whether the system is vulnerable to
> CVE-2011-1898. This vulnerability is VT-d specific.

As per the reply by Andrew to v1, this vulnerability is generic to intremap-
incapable or intremap-disabled configurations. You want to say so. In turn
I wonder whether instead of the changes you're making you wouldn't want to
move the definition of the variable to xen/drivers/passthrough/x86/iommu.c.
A useful further step might be to guard its definition (not necessarily
its declaration; see replies to earlier patches) by CONFIG_PV instead (of
course I understand that's largely orthogonal to your series here, yet it
would fit easily with moving the definition).

> --- a/xen/arch/x86/include/asm/iommu.h
> +++ b/xen/arch/x86/include/asm/iommu.h
> @@ -127,7 +127,9 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
>                             unsigned int flag);
>  void iommu_identity_map_teardown(struct domain *d);
>  
> +#ifdef CONFIG_INTEL_IOMMU
>  extern bool untrusted_msi;
> +#endif

As per above / earlier comments I don't think this part is needed in any
event.

> --- a/xen/arch/x86/pv/hypercall.c
> +++ b/xen/arch/x86/pv/hypercall.c
> @@ -193,8 +193,10 @@ void pv_ring1_init_hypercall_page(void *p)
>  
>  void do_entry_int82(struct cpu_user_regs *regs)
>  {
> +#ifdef CONFIG_INTEL_IOMMU
>      if ( unlikely(untrusted_msi) )
>          check_for_unexpected_msi((uint8_t)regs->entry_vector);
> +#endif
>  
>      _pv_hypercall(regs, true /* compat */);
>  }
> diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
> index ae01285181..8f2fb36770 100644
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -406,11 +406,13 @@ ENTRY(int80_direct_trap)
>  .Lint80_cr3_okay:
>          sti
>  
> +#ifdef CONFIG_INTEL_IOMMU
>          cmpb  $0,untrusted_msi(%rip)
>  UNLIKELY_START(ne, msi_check)
>          movl  $0x80,%edi
>          call  check_for_unexpected_msi
>  UNLIKELY_END(msi_check)
> +#endif
>  
>          movq  STACK_CPUINFO_FIELD(current_vcpu)(%rbx), %rbx
>  



From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:08:35 2023
Message-ID: <d55b5784-5ebd-d799-9a81-33e2901f4025@gmail.com>
Date: Thu, 12 Jan 2023 14:08:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2 4/8] x86/acpi: separate AMD-Vi and VT-d specific
 functions
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-5-burzalodowa@gmail.com>
 <b73afacf-a23a-7e51-9bd3-b90b3eb484bd@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <b73afacf-a23a-7e51-9bd3-b90b3eb484bd@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/12/23 13:37, Jan Beulich wrote:
> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>> The functions acpi_dmar_init() and acpi_dmar_zap/reinstate() are
>> VT-d specific while the function acpi_ivrs_init() is AMD-Vi specific.
>> To eliminate dead code, they need to be guarded under CONFIG_INTEL_IOMMU
>> and CONFIG_AMD_IOMMU, respectively.
>>
>> Instead of adding #ifdef guards around the function calls, implement them
>> as empty static inline functions.
>>
>> Take the opportunity to move the declarations of acpi_dmar_zap/reinstate() to
>> the arch specific header.
>>
>> No functional change intended.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> While I'm not opposed to ack the change in this form, I have a question
> first:
> 
>> --- a/xen/arch/x86/include/asm/acpi.h
>> +++ b/xen/arch/x86/include/asm/acpi.h
>> @@ -140,8 +140,22 @@ extern u32 pmtmr_ioport;
>>   extern unsigned int pmtmr_width;
>>   
>>   void acpi_iommu_init(void);
>> +
>> +#ifdef CONFIG_INTEL_IOMMU
>>   int acpi_dmar_init(void);
>> +void acpi_dmar_zap(void);
>> +void acpi_dmar_reinstate(void);
>> +#else
>> +static inline int acpi_dmar_init(void) { return -ENODEV; }
>> +static inline void acpi_dmar_zap(void) {}
>> +static inline void acpi_dmar_reinstate(void) {}
>> +#endif
> 
> Leaving aside my request to drop that part of patch 3, you've kept
> declarations for VT-d in the common header there. Which I consider
> correct, knowing that VT-d was also used on IA-64 at the time. As
> a result I would suppose movement might better be done in the other
> direction here.

I moved it to the x86-specific header because acpi_dmar_init() was 
declared there.
I can move all of them to the common header.

> 
>> +#ifdef CONFIG_AMD_IOMMU
>>   int acpi_ivrs_init(void);
>> +#else
>> +static inline int acpi_ivrs_init(void) { return -ENODEV; }
>> +#endif
> 
> For AMD, otoh, without there being a 2nd architecture re-using
> their IOMMU, moving into the x86-specific header is certainly fine,
> no matter that there's a slim chance that this may need moving the
> other direction down the road.

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:16:30 2023
Message-ID: <aa20eb4d-7b18-9bbf-718f-2fe5fa896713@suse.com>
Date: Thu, 12 Jan 2023 13:16:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/8] x86/iommu: call pi_update_irte through an
 hvm_function callback
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-7-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104084502.61734-7-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.01.2023 09:45, Xenia Ragiadakou wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2143,6 +2143,14 @@ static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
>      return pi_test_pir(vec, &v->arch.hvm.vmx.pi_desc);
>  }
>  
> +static int cf_check vmx_pi_update_irte(const struct vcpu *v,
> +                                       const struct pirq *pirq, uint8_t gvec)
> +{
> +    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
> +
> +    return pi_update_irte(pi_desc, pirq, gvec);
> +}

This being the only caller of pi_update_irte(), I don't see the point in
having the extra wrapper. Adjust pi_update_irte() such that it can be
used as the intended hook directly. Plus perhaps prefix it with vtd_.

> @@ -2591,6 +2599,8 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
>      .tsc_scaling = {
>          .max_ratio = VMX_TSC_MULTIPLIER_MAX,
>      },
> +
> +    .pi_update_irte = vmx_pi_update_irte,

You want to install this hook only when iommu_intpost (i.e. the only case
when it can actually be called), and only when INTEL_IOMMU=y (avoiding the
need for an inline stub of pi_update_irte() or whatever its final name is
going to be).

> @@ -250,6 +252,9 @@ struct hvm_function_table {
>          /* Architecture function to setup TSC scaling ratio */
>          void (*setup)(struct vcpu *v);
>      } tsc_scaling;
> +
> +    int (*pi_update_irte)(const struct vcpu *v,
> +                          const struct pirq *pirq, uint8_t gvec);
>  };

Please can this be moved higher up, e.g. next to .

> @@ -774,6 +779,16 @@ static inline void hvm_set_nonreg_state(struct vcpu *v,
>          alternative_vcall(hvm_funcs.set_nonreg_state, v, nrs);
>  }
>  
> +static inline int hvm_pi_update_irte(const struct vcpu *v,
> +                                     const struct pirq *pirq, uint8_t gvec)
> +{
> +    if ( hvm_funcs.pi_update_irte )
> +        return alternative_call(hvm_funcs.pi_update_irte, v, pirq, gvec);
> +
> +    return -EOPNOTSUPP;

I don't think the conditional is needed, at least not with the other
suggested adjustments. Plus the way alternative patching works, a NULL
hook will be converted to some equivalent of BUG() anyway, so
ASSERT_UNREACHABLE() should also be unnecessary.

> +}
> +
> +
>  #else  /* CONFIG_HVM */

Please don't add double blank lines.

> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> @@ -146,6 +146,17 @@ static inline void pi_clear_sn(struct pi_desc *pi_desc)
>      clear_bit(POSTED_INTR_SN, &pi_desc->control);
>  }
>  
> +#ifdef CONFIG_INTEL_IOMMU
> +int pi_update_irte(const struct pi_desc *pi_desc,
> +                   const struct pirq *pirq, const uint8_t gvec);
> +#else
> +static inline int pi_update_irte(const struct pi_desc *pi_desc,
> +                                 const struct pirq *pirq, const uint8_t gvec)
> +{
> +    return -EOPNOTSUPP;
> +}
> +#endif

This still is a VT-d function, so I think its declaration would better
remain in asm/iommu.h.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:22:42 2023
Message-ID: <d829f6e1-a032-931c-dd39-865b2f4f09fd@suse.com>
Date: Thu, 12 Jan 2023 13:22:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 7/8] x86/dpci: move hvm_dpci_isairq_eoi() to generic
 HVM code
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-8-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104084502.61734-8-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
 =?utf-8?B?MzRjZmtPS2xmQStVWkpGeHd0MUM1T0xEc3hreGVGekFXQ1hJb0lRNkZzbGUw?=
 =?utf-8?B?dm5kNnV0SEFjeWF1VHQzQUtqL3JqVE91L3pzcG03K0JXMUhmdEVsTzUxeTVP?=
 =?utf-8?B?NVdaamZPTFl5Smx6NGtqQWpmTFBRNllaaTFRZ3RHM2NoN1BqZlhnenJQVVd1?=
 =?utf-8?B?Qk05RUVvUk5oVStiZHBOTmp3VHZhYVNlTjJqUU9vRkZaczY1VXo3dDFTWkd2?=
 =?utf-8?B?ejV0dFRXUFhzZ1BnR0RjZFdwS2tLNzdWRUxNK09jdTBEZk9YWEpyZUxIV2Vz?=
 =?utf-8?B?MG5EK3BlWEU1MFN1R1BtbVN5VFJ0Sk4wNzUzRjhNdnB0WmVReDh2NzZMNi9l?=
 =?utf-8?B?UEdYQkhyZDl6RkFBYUNXNWFtUEsxV0R5ejhiaEg3R3duOWZqS0hveFI3Vkll?=
 =?utf-8?B?ZzJEcU1CSzJwRUpOcmk3MXIzTTYwK3JFWFhvQzlBMm5PZ0ZGek82Zzhmb09S?=
 =?utf-8?B?WHlRdXRSdng3R1NZd3dJTHhwbHVLSlZVZldEQ0kxbEFYdlVxMlRVeXdiYTY1?=
 =?utf-8?B?ejlwd2tuOUNyaTJzYmJzZFVDZGp6ZldTL1VFMzUzNHV6dGFNSmcyclhJaE0z?=
 =?utf-8?B?Y2l3bVBxL0FBR0REMnFHSVVKaXZ2cXdYMFpyMHFsQTRtaXNvNGV2S01OMUVs?=
 =?utf-8?B?Z0tTSlQyMjFPZElPTE9hd1BNVTlhbTBsYVp6b3U1MUJRVXE2b1dESFFRMUZQ?=
 =?utf-8?B?YlUyY0p2R2psN3JhVTJ4TWNycUM0QlBWOTIyYXdOSFFodGlDRlo4SnNUU3Qx?=
 =?utf-8?B?VXBkNmNkVCt0UU5UL052REg5R1dsbzVBNVRsSUI3enp5Q0p5a0cvc1o1WFlo?=
 =?utf-8?B?SnlzcjhXQVlvT3BFM1RjNHNrNTRlN1Y0Z0pTU2NWKzkyTzFkTWpaTWp3Zk94?=
 =?utf-8?B?TWNVZTN6S0IzL1YwQ1ZYUlE5a2tjRnYvdWdTa3EveE1OY0gyT3RsUWVZdy9K?=
 =?utf-8?B?SzJrSW94aFlPZ2NBR0FmRlNQeGhjQlJ2R01kWGNVaG0ySUZmdG8zWmtvSVg2?=
 =?utf-8?Q?pXB0n38Jkkja/0j6ykedCBk9B?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 426fb9b6-27a0-4c27-1525-08daf497aa62
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 12:22:26.3546
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IKic1qhwsTsQ/qaT0BTl/JJGBK1iASHxF15qVMrgrURTe/Tug+Trx4T2EWOGsjZPkDtjK61suOwwV0BJxAwkoQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8242

On 04.01.2023 09:45, Xenia Ragiadakou wrote:
> The function hvm_dpci_isairq_eoi() has no dependencies on VT-d driver code
> and can be moved from xen/drivers/passthrough/vtd/x86/hvm.c to
> xen/drivers/passthrough/x86/hvm.c, along with the corresponding copyrights.
> 
> Remove the now empty xen/drivers/passthrough/vtd/x86/hvm.c.
> 
> Since the function is used only in this file, declare it static.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with a couple of cosmetic suggestions since you're touching this code
anyway:

> @@ -924,6 +925,48 @@ static void hvm_gsi_eoi(struct domain *d, unsigned int gsi)
>      hvm_pirq_eoi(pirq);
>  }
>  
> +static int cf_check _hvm_dpci_isairq_eoi(
> +    struct domain *d, struct hvm_pirq_dpci *pirq_dpci, void *arg)
> +{
> +    struct hvm_irq *hvm_irq = hvm_domain_irq(d);

I think this could become pointer-to-const.

> +    unsigned int isairq = (long)arg;
> +    const struct dev_intx_gsi_link *digl;
> +
> +    list_for_each_entry ( digl, &pirq_dpci->digl_list, list )
> +    {
> +        unsigned int link = hvm_pci_intx_link(digl->device, digl->intx);
> +
> +        if ( hvm_irq->pci_link.route[link] == isairq )
> +        {
> +            hvm_pci_intx_deassert(d, digl->device, digl->intx);
> +            if ( --pirq_dpci->pending == 0 )
> +                pirq_guest_eoi(dpci_pirq(pirq_dpci));
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +static void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq)
> +{
> +    struct hvm_irq_dpci *dpci = NULL;

And this too.

> +    ASSERT(isairq < NR_ISAIRQS);
> +    if ( !is_iommu_enabled(d) )

A blank line between the above two would be nice.

> +        return;
> +
> +    write_lock(&d->event_lock);
> +
> +    dpci = domain_get_irq_dpci(d);
> +
> +    if ( dpci && test_bit(isairq, dpci->isairq_map) )
> +    {
> +        /* Multiple mirq may be mapped to one isa irq */
> +        pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
> +    }
> +    write_unlock(&d->event_lock);

For symmetry with the code above, this could do with a blank line ahead of it.
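
Taken together, the cosmetic suggestions might look like the following illustrative diff on top of the quoted hunk (untested, context approximate; whether const propagates cleanly through hvm_domain_irq() and domain_get_irq_dpci() would need checking):

```diff
@@ static int cf_check _hvm_dpci_isairq_eoi(
-    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
+    const struct hvm_irq *hvm_irq = hvm_domain_irq(d);
@@ static void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq)
-    struct hvm_irq_dpci *dpci = NULL;
+    const struct hvm_irq_dpci *dpci = NULL;
 
     ASSERT(isairq < NR_ISAIRQS);
+
     if ( !is_iommu_enabled(d) )
@@
     }
+
     write_unlock(&d->event_lock);
```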

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:37:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 12:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476107.738099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwpT-0007K4-Vg; Thu, 12 Jan 2023 12:37:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476107.738099; Thu, 12 Jan 2023 12:37:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwpT-0007Jx-SA; Thu, 12 Jan 2023 12:37:11 +0000
Received: by outflank-mailman (input) for mailman id 476107;
 Thu, 12 Jan 2023 12:37:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFwpS-0007Jr-Dr
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 12:37:10 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2046.outbound.protection.outlook.com [40.107.22.46])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d448a08a-9275-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 13:37:09 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7649.eurprd04.prod.outlook.com (2603:10a6:20b:2d8::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Thu, 12 Jan
 2023 12:37:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 12:37:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d448a08a-9275-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XyVBY/W5EUfzWR936HNMzRbE942tx6bMwkYbmpsrKKboggy+vgzm5lQmS5x0pGnEKUDxzuSx70H4lNdypZz3ayJjZNre+/89F4KFBCnRGanA3C2Oo8X56WFLoCCHsKfb6zHRCcDF8fnkZNwuKhNYwSkjJ6IiESOSnEDbTR7dxkwickoyVivKrqUmH0a3Bo0LtUK9dTT+Ni3rR8RhOgtPdAKGgmTly5EteFwxD1WLKq/9tuZ5AZLssB6VCZNJcmb1U5Q976TdCkRJB1PeumMfq1nscg+VI07KK+/gGvS0FJmXf5VhYbSZvhtb/9ROhNSz4i+QAXXO5HwcSGZLJBgkwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LoZp68UQ9//guHuBEsj9AbqZ0JtSUVb8y30mnObDHNI=;
 b=RE6Htba0Wdxgz9BV6iznpu//oC2oNLXWcWBjFiDWDYYwBDbOYARzRYHFU9plV8H/Du4wNkLbiF6BeysCBJvmohwY9K2ULwouM5BZYZHRkbJs3EQK37vu4sMmVOObgLZHcjhLCWqdRfImzZ1eFk6cn+anzpZrD9qRzBTN1YwRHcgq4+m8HCYQU1WT9VlNx7F54jQnzfA/t0N4DHyHKnzazE6a/yb9kNAeyCxGjx8mQJ7QN/wiMW38VCgBtn183ktZ3YEWNeq7rsE7FEWiqd2ze7EMlecbHajdQej/hlZ4zIBztEfkmSDPyZxxEuQTWRQa9F6cQzxIDTKpMuuB46/LCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LoZp68UQ9//guHuBEsj9AbqZ0JtSUVb8y30mnObDHNI=;
 b=AiaAxBxs52nvu8yUtA0kTM6LBfMiFxV6unVVOlYUpFIxh/4i+2bo0/fpF7Y1t1QU0W0H0FRgYWac61VRXMh8kVMvYB0xKEyf/or0ImInJIjVXy2Zp25+WKi2kSIUQiMlI4yZfYLuRvFD9VXTzHaddVWrfarejW19j/FnRXpFzAeJVnRIrK0icap0I5us2nY53bjdL1heRX3iHbfY+UNbW7wnI6TNA0sj1ecmAnjE8PtXj1ylgQxXcGXIVcRpUEOIxt8Q+iEIEO42qDWOwBgpzHDxJxP198nvwHUsHe3ybyO+lGGx+JU/qv2fPRRB07kvNDVD1KSdLicM1I01dGgx1w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6c5a4c07-e942-a683-8579-a0f9d5971c7b@suse.com>
Date: Thu, 12 Jan 2023 13:37:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/8] x86/iommu: call pi_update_irte through an
 hvm_function callback
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-7-burzalodowa@gmail.com>
 <aa20eb4d-7b18-9bbf-718f-2fe5fa896713@suse.com>
In-Reply-To: <aa20eb4d-7b18-9bbf-718f-2fe5fa896713@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0128.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB7649:EE_
X-MS-Office365-Filtering-Correlation-Id: c3f5bd65-2411-4105-fb10-08daf499b78c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c3f5bd65-2411-4105-fb10-08daf499b78c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 12:37:07.4708
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: S6MVq4JAluMvF28RoBxvkzc0KQuxvXPpDZkJdpbv5SFT0a0CGduoKQOrZOa9iddBjZtxZLeBkqB01zBOE91+kQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB7649

On 12.01.2023 13:16, Jan Beulich wrote:
> On 04.01.2023 09:45, Xenia Ragiadakou wrote:
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -2143,6 +2143,14 @@ static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
>>      return pi_test_pir(vec, &v->arch.hvm.vmx.pi_desc);
>>  }
>>  
>> +static int cf_check vmx_pi_update_irte(const struct vcpu *v,
>> +                                       const struct pirq *pirq, uint8_t gvec)
>> +{
>> +    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
>> +
>> +    return pi_update_irte(pi_desc, pirq, gvec);
>> +}
> 
> This being the only caller of pi_update_irte(), I don't see the point in
> having the extra wrapper. Adjust pi_update_irte() such that it can be
> used as the intended hook directly. Plus perhaps prefix it with vtd_.

Plus move it to vtd/x86/hvm.c (!HVM builds shouldn't need it), albeit I
realize this could be done independently of your work. In principle the
function shouldn't be VT-d specific (and could hence live in x86/hvm.c),
as msi_msg_write_remap_rte() is already available as an IOMMU hook anyway,
provided struct pi_desc turns out to be compatible with what's going to be
needed for AMD.
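
For illustration, the suggested adjustment might look roughly like this (untested sketch; the existing pi_update_irte() signature is assumed from the wrapper quoted above):

```diff
-int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-                   const uint8_t gvec)
+int cf_check vtd_pi_update_irte(const struct vcpu *v,
+                                const struct pirq *pirq, uint8_t gvec)
 {
+    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
+
     /* ... existing body, already operating on pi_desc ... */
```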

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:38:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 12:38:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476113.738110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwr4-0007tM-9I; Thu, 12 Jan 2023 12:38:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476113.738110; Thu, 12 Jan 2023 12:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwr4-0007tF-6W; Thu, 12 Jan 2023 12:38:50 +0000
Received: by outflank-mailman (input) for mailman id 476113;
 Thu, 12 Jan 2023 12:38:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFwr2-0007t9-Ut
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 12:38:48 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2077.outbound.protection.outlook.com [40.107.7.77])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e22cb03-9276-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 13:38:46 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7686.eurprd04.prod.outlook.com (2603:10a6:20b:290::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 12:38:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 12:38:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e22cb03-9276-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NwYYy558c98kKk98O+VctJnG1a/FElC7VxN26+COLLHPvK+rj6dBwOPWoz+hVhkpHYRksOKSVt6gacEBICkqrrkwd/c/JK85wCuBkiR6j+PPcFGMfSgwyaOgBtu1YUZms5rjH/sKxt1Wd8y03EWpzAXVgMvmorfLwJ7ctuTHxWWIoRD6hFr/+ji3b+c85OdWGXNtp/NW65B2+VDJxIb1sob/qRa3+H0/7qrsOPp0iGu3QYKLp0ChGCHejgUTNGx3NJ0vqce0qpgOE7t7sWaigmdOnpsBSz6yaAiUOztG1jriivWAgDPRKWYzv5YK4MuinH/sDiUZdqcFh4KmmLJJPw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vGTUujg26/aFOTeDzWoP7dUsYAG6NhFV+q1kSVgBxUY=;
 b=R6Qfg45Ro9N0ZDO6mZsymbSRSFhj9EJzecGoO/rmKQj3N46WG7zS8rDmX5HGQahgu/OZbLjZ7Lcc1ihfUClNGQ0ncsVTZefjDJi9f7XyYUoaOGwae6CQv7OYSavWHUmE/3PhUEvcHJW/JF36FgpACUxVUO5JIOkkCNOESbMav8eykQP0i4+8uCjD3dGiqN0ltoPvpq/+TnHshqDjL99ta48t1f4X+wyzeVJfa6zDUanLPS6rSgmZQBLb6m/7k+NYxNMWH0iXix9hePmyrJjc7oUMkagU11+MkoBv4XcKz+b+5fTzhhBEC+RCwdZKTu1X/3g+aLR9hMZDIO2AHgjhFA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vGTUujg26/aFOTeDzWoP7dUsYAG6NhFV+q1kSVgBxUY=;
 b=269nBU0bpNIUlbXMysfetnF9UIAotVC6dWipK6HZxVfanykl5y7iRRSEoDxcZ/Zspz8E2BK3dTdaQKSGq+MRYTg+362C3wVBcSSogRfkMKXzOpqQCtfjLR+YgTupWHOR1/jjdjyBvudL0f9tZj+J+xUw3waDnfo4MhOH87H13T2xnTydUo5oC8n2V45TUutkzN1c5aqor1SGtym0YoHq7i8wDcRD4v6iPCy1ld+YtRprJCR5mj1LAU0LzLwVyh0cyTVdjRgxaCK2eaP1H8xfgeG2RkHNhJT/SIZd19UudbiZOZYoR8ryNRQ8GMN3N2lzgEG3XfOcvTe1CPLy3/Cy7Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4770d744-ed33-4246-4ba3-8e1fe8877587@suse.com>
Date: Thu, 12 Jan 2023 13:38:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 8/8] x86/iommu: make AMD-Vi and Intel VT-d support
 configurable
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-9-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230104084502.61734-9-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0180.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7686:EE_
X-MS-Office365-Filtering-Correlation-Id: 51ddaf7c-6ad3-4504-9b15-08daf499f147
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 51ddaf7c-6ad3-4504-9b15-08daf499f147
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 12:38:44.6677
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /ks0kE073OVeUq8ffrRGwU97MACRqCSg5surG7N+37tgqSWQe2YeBIE6rz3H4ZpdE1KShfmUJiJK7hsC1WZr1w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7686

On 04.01.2023 09:45, Xenia Ragiadakou wrote:
> Provide the user with configuration control over the IOMMU support by making
> AMD_IOMMU and INTEL_IOMMU options user selectable and able to be turned off.
> 
> However, there are cases where the IOMMU support is required, for instance for
> a system with more than 254 CPUs. In order to prevent users from unknowingly
> disabling it and ending up with a broken hypervisor, make the support user
> selectable only if EXPERT is enabled.
> 
> To preserve the current default configuration of an x86 system, both options
> depend on X86 and default to Y.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
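
(For reference, the kind of Kconfig shape being described would look roughly like this — illustrative only; the prompt and help text are made up, not taken from the patch:)

```kconfig
config INTEL_IOMMU
	bool "Intel VT-d IOMMU support" if EXPERT
	depends on X86
	default y
	help
	  Include support for the Intel VT-d IOMMU. Systems with more
	  than 254 CPUs require an IOMMU; only turn this off if you
	  know the target hardware does not need it.
```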




From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:40:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 12:40:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476120.738120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwsz-0000uR-OH; Thu, 12 Jan 2023 12:40:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476120.738120; Thu, 12 Jan 2023 12:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwsz-0000uK-LT; Thu, 12 Jan 2023 12:40:49 +0000
Received: by outflank-mailman (input) for mailman id 476120;
 Thu, 12 Jan 2023 12:40:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFwsy-0000u9-Fj
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 12:40:48 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2088.outbound.protection.outlook.com [40.107.7.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 556ab6f9-9276-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 13:40:45 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7686.eurprd04.prod.outlook.com (2603:10a6:20b:290::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 12:40:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 12:40:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 556ab6f9-9276-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N9vy4f6aVVyOv0raB1LK7GyUtZGZYVL8KuARyOmpMMZCiQ5ydEKE4an1zI+2ztS4N9RyFFaBhImORDP9+fHVamdVtOLYp/DH8opt7zLZYj+QzE1dO+fDBts6bja1zdV8X/+3QGgWVFiRQXxbwjxJA2sgQcgsHi5GktmXOqpIvYzEcrh0Y8Pjbw69/cM9uVZA8hBzwPW1X1GsCPO/sjCfrNageRuYEGOw+XWOMOKanP4V0TC1tOOvlHOR5o5xhF2MR5NaSI42y00ouxq4fYVu2t+ZJgbzhs4kkE+5/nEZMrxuMiKguhiLsB6qd0BZ81zSAYnV+TgPy+rG06abzQfX3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DWGd4cOKC7AWrDXtTKfwpBoWoOay9JCrx3MBYUZ5DJg=;
 b=UvaqhYY9oGmDiNKzq0qXCpD49EqRH3nnzZ7i54nm8gSZdloEO4/TLqoOIO9xUULYlCli7VFg4TYyg6P0N8OypIqxMYkHTQsjwWhl2wnVkQKoGf5K9wCFy3cFFd2DxJoNJnzniYLNod4Hm3rI7PlY8RppTIJMhsYwfPRLGMiNHdH73Jh0Ai3dCDRQ2SYgDaYrDYmij6YtPyCU+Dy2/gyT90cId1zorHUQ0li195YrCZqUicemGL6znMNyHQxv0NdIn/nn0sS7hSweBA/rhojdWrx2GfjO1Dt/JrNKwQ/gqzCv6NprxCBxwZwH6cjr9dXUGtUs/cMmk+L61gWi4x01uw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DWGd4cOKC7AWrDXtTKfwpBoWoOay9JCrx3MBYUZ5DJg=;
 b=KDaYXao1qlwsbNf4XG+XArbEaioH1t6cEGV3aEKjcix8YWN+hQndiEo0DIEudFiTFTK/WvWVTGqtpg2yHJnPH6XNApJ9K+/LOtmF7xnIfn2CqAB0qcKr70jRr+PsevItnitcrgYwlUpgpcXuxPxy27fzuhCOZPGIAylAfg+ilrLHgAzGibVwiOcO3R0D/t83GgoTkZ2IGQF3KLLNWvSj6dAKyHooIRpPrpeCgpN2qBpDAIv/2mp4pXMsdIJzoLxjQsT/xMP4lxFHRuW9u/8Vo1rlA56OAwmFm3yjeUnhV0QYExgWUQc1GIyeHzDmAcQyeYLUZgm+RrLNcqpXO3hnLA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c68f96eb-641b-8317-eff3-272efc296d05@suse.com>
Date: Thu, 12 Jan 2023 13:40:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] x86/acpi: separate AMD-Vi and VT-d specific
 functions
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-5-burzalodowa@gmail.com>
 <b73afacf-a23a-7e51-9bd3-b90b3eb484bd@suse.com>
 <d55b5784-5ebd-d799-9a81-33e2901f4025@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d55b5784-5ebd-d799-9a81-33e2901f4025@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0135.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7686:EE_
X-MS-Office365-Filtering-Correlation-Id: d3cc2a75-d9ea-406d-5b43-08daf49a396a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d3cc2a75-d9ea-406d-5b43-08daf49a396a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 12:40:45.3008
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: y6xdcugRQpeOeK9Nlt5mxMQNlJBjlx4srd2Oe2YV1TQoUwaOdHMS4tGIuRfZ0fqtAa2HBrmAdd0wbGp2xTGuPw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7686

On 12.01.2023 13:08, Xenia Ragiadakou wrote:
> On 1/12/23 13:37, Jan Beulich wrote:
>> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>>> --- a/xen/arch/x86/include/asm/acpi.h
>>> +++ b/xen/arch/x86/include/asm/acpi.h
>>> @@ -140,8 +140,22 @@ extern u32 pmtmr_ioport;
>>>   extern unsigned int pmtmr_width;
>>>   
>>>   void acpi_iommu_init(void);
>>> +
>>> +#ifdef CONFIG_INTEL_IOMMU
>>>   int acpi_dmar_init(void);
>>> +void acpi_dmar_zap(void);
>>> +void acpi_dmar_reinstate(void);
>>> +#else
>>> +static inline int acpi_dmar_init(void) { return -ENODEV; }
>>> +static inline void acpi_dmar_zap(void) {}
>>> +static inline void acpi_dmar_reinstate(void) {}
>>> +#endif
>>
>> Leaving aside my request to drop that part of patch 3, you've kept
>> declarations for VT-d in the common header there. Which I consider
>> correct, knowing that VT-d was also used on IA-64 at the time. As
>> a result I would suppose movement might better be done in the other
>> direction here.
> 
> I moved it to the x86-specific header because acpi_dmar_init() was 
> declared there.
> I can move all of them to the common header.

I would prefer you to do so, yes, unless of course others object.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:42:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 12:42:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476126.738132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwuE-0001Tq-2X; Thu, 12 Jan 2023 12:42:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476126.738132; Thu, 12 Jan 2023 12:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwuD-0001Tj-Vr; Thu, 12 Jan 2023 12:42:05 +0000
Received: by outflank-mailman (input) for mailman id 476126;
 Thu, 12 Jan 2023 12:42:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QgwW=5J=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pFwuD-0001TW-6u
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 12:42:05 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 844231d1-9276-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 13:42:04 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id v6so1432346ejg.6
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 04:42:04 -0800 (PST)
Received: from [192.168.1.93] (adsl-211.37.6.0.tellas.gr. [37.6.0.211])
 by smtp.gmail.com with ESMTPSA id
 lb25-20020a170907785900b007c00323cc23sm7412168ejc.27.2023.01.12.04.42.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 12 Jan 2023 04:42:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 844231d1-9276-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=bzEU3EbHAD/6VGyAoPP7dH3w4Ub3KtjPw7ftvjmLQG8=;
        b=Z4PwmN8YlYegibwxRAtmfuNDbreknXyS0sAdpFuhAHazPjMQlAREdv9GT4fVQgkFid
         jfQ5eGtGS0eGFmI8Y5j8l6+waWhcO0xBd5+enJS9SefGd1vffTxZXriNkcBG7vC51vFK
         USsZ1jBcXOMT6AM9CsdDogh/S3djMsYZEpCrnTte77zAca30LZciGhyeh9tFCoF2cO7s
         4vqYtrggs1rYju2CiOXDfpdA+ypP4zccj7CuwNL2XWeCEE8i30jaY851lkEWi1xoiU59
         ZKtm5uN1veaShHkEljfKs7js5YYmd/l890BO7+dg1VPo4CZ7Tf3bkxhWh25QWX221XB1
         4/fA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=bzEU3EbHAD/6VGyAoPP7dH3w4Ub3KtjPw7ftvjmLQG8=;
        b=0fWJlhWY3Nc4Gpg6FORYpDLiGhhxS9JE/dVp/VGjW0pJdwTY8qQ8YRO2hdqALD1VXy
         05yt6PtV8Ju9u24Bk+hTkNSio7l7Q6H6vQkLOPEceiThxWin9QitZ5kuKKIi2cBx+iKW
         tHZngQYyLTjkor87WjVaoAZQWdqD404vIGoruWT+T5a2Y7R78GN85FPT86Kszte/qMOL
         QkYGMWcS9G4t1JpD/skR3sRYWqAp4ad5Pfo1uAydEX5dDmGmOQbx8X2yrdktuAPPdp/i
         PzbCkbt+oaoYMLQo6MJab5qLB8p9oOtkfgTsXcgXmlv3RzdieLiYUbbVSaNFaRFMEpAr
         d50g==
X-Gm-Message-State: AFqh2koGynFlg2lFYSWt/y8iyib1kPl8ILp1Rsr8EaEGWhac6QQhdm32
	VYOJxopIAURaE8Hck7uYhW4=
X-Google-Smtp-Source: AMrXdXtw+Y4PHPVT8RHEy5KBfbeevUJNjxyjGcweIMGMhyHA8gbCWsTFuIqC9siCC8Kefq704EUoHQ==
X-Received: by 2002:a17:906:39d8:b0:847:410:ecff with SMTP id i24-20020a17090639d800b008470410ecffmr58815369eje.16.1673527323793;
        Thu, 12 Jan 2023 04:42:03 -0800 (PST)
Message-ID: <2f5699a7-83c0-68c8-3303-c77d443f3fe7@gmail.com>
Date: Thu, 12 Jan 2023 14:42:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2 5/8] x86/iommu: the code addressing CVE-2011-1898 is
 VT-d specific
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-6-burzalodowa@gmail.com>
 <a71e7cc5-3f86-0397-613f-a796a0309d42@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <a71e7cc5-3f86-0397-613f-a796a0309d42@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/12/23 14:01, Jan Beulich wrote:
> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>> The variable untrusted_msi indicates whether the system is vulnerable to
>> CVE-2011-1898. This vulnerability is VT-d specific.
> 
> As per the reply by Andrew to v1, this vulnerability is generic to intremap-
> incapable or intremap-disabled configurations. You want to say so. In turn
> I wonder whether instead of the changes you're making you wouldn't want to
> move the definition of the variable to xen/drivers/passthrough/x86/iommu.c.
> A useful further step might be to guard its definition (not necessarily
> its declaration; see replies to earlier patches) by CONFIG_PV instead (of
> course I understand that's largely orthogonal to your series here, yet it
> would fit easily with moving the definition).

Sure I can do that.

> 
>> --- a/xen/arch/x86/include/asm/iommu.h
>> +++ b/xen/arch/x86/include/asm/iommu.h
>> @@ -127,7 +127,9 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
>>                              unsigned int flag);
>>   void iommu_identity_map_teardown(struct domain *d);
>>   
>> +#ifdef CONFIG_INTEL_IOMMU
>>   extern bool untrusted_msi;
>> +#endif
> 
> As per above / earlier comments I don't think this part is needed in any
> event.
> 
>> --- a/xen/arch/x86/pv/hypercall.c
>> +++ b/xen/arch/x86/pv/hypercall.c
>> @@ -193,8 +193,10 @@ void pv_ring1_init_hypercall_page(void *p)
>>   
>>   void do_entry_int82(struct cpu_user_regs *regs)
>>   {
>> +#ifdef CONFIG_INTEL_IOMMU
>>       if ( unlikely(untrusted_msi) )
>>           check_for_unexpected_msi((uint8_t)regs->entry_vector);
>> +#endif
>>   
>>       _pv_hypercall(regs, true /* compat */);
>>   }
>> diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
>> index ae01285181..8f2fb36770 100644
>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -406,11 +406,13 @@ ENTRY(int80_direct_trap)
>>   .Lint80_cr3_okay:
>>           sti
>>   
>> +#ifdef CONFIG_INTEL_IOMMU
>>           cmpb  $0,untrusted_msi(%rip)
>>   UNLIKELY_START(ne, msi_check)
>>           movl  $0x80,%edi
>>           call  check_for_unexpected_msi
>>   UNLIKELY_END(msi_check)
>> +#endif
>>   
>>           movq  STACK_CPUINFO_FIELD(current_vcpu)(%rbx), %rbx
>>   
> 

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:48:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 12:48:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476132.738143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwzx-0002CM-Pd; Thu, 12 Jan 2023 12:48:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476132.738143; Thu, 12 Jan 2023 12:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFwzx-0002CF-L2; Thu, 12 Jan 2023 12:48:01 +0000
Received: by outflank-mailman (input) for mailman id 476132;
 Thu, 12 Jan 2023 12:47:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFwzv-0002C9-Sl
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 12:47:59 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2081.outbound.protection.outlook.com [40.107.20.81])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 579c3eaa-9277-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 13:47:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6940.eurprd04.prod.outlook.com (2603:10a6:10:11a::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Thu, 12 Jan
 2023 12:47:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 12:47:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 579c3eaa-9277-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XkXJKszHgtwScjrSDfu1kesMGdUZT17Z1KT5a404LKcOKJE7BUk5FtxtPckmWJbOSjErs44bmzOXiuVFCd6Qmfe+UUuFzC8TTfLxbUJfZxGWriK6Dys/c1bYNBfdTArwiIhHoSREEPTF30QBowpmBW4o/ZbdhVemufEOlra29UxW+zF+ijDIi1EwslRgMVDGcsv4U4cgUtP6Qw1zMIGXcJ+Mjlhk/RHMUQ09ez/WNZAUh425WRF3ObJMAb3CZYqYHJcfNbuNb5BkTqx4fSaJp1hU91RYB20v58sDSUFMZJvCVV3c1QMo+9jwK8D/TiQG9wvyzE3dR6VOoQCiPIQ86A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+5nq5y+s6CvaJEydMt/0aW9bpCFjZb/jW5gJ8rxtVn4=;
 b=Zpc83K78loaQ81jVqnhR3/txwi9hJ+sTdGZlkzAEZNzKsFIRHQQkMdXo4JwtoZNP4+Mw72GuU+7Ukq9ktxbyqv9PpEQ4n+AYgkj+d4AcgNst/QZxCQ0o9Uo9f1KHjdSyP8PAUewlJ0hgcRjVoOyRKeEEpMPHytmX3IPFO7Op5X3gXMxq4ZWHynJ112RY0Yy2Y1gZYZvp6KkU2BK6RdRSH18SVqFpgvvNm5hbiHVfGUnwKGENjasAtJg70CQmAgi5IxHG5SNQmVEJRjrUUmEl2m8Xo87rCHksyUcMPyU0mxcnYKAsYI3AMn1G/UceAllN9y8lei+ZNU784HuPUP98cw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+5nq5y+s6CvaJEydMt/0aW9bpCFjZb/jW5gJ8rxtVn4=;
 b=MC0W8ulI+5baRyVmOgIiHEmjiZyZ9YmjmP1KUkBExzDxHeiYOHWH+yBfgXGxm+JzRKQ29UqmQlhU2Ew+9XrUECZJgrAa3zBLVhH5HSGbm+ZPMaiHAsFSQt3HVt6+Nnw2+f1qYe4WMWoYAfRAbCHV/zUmeBwFaH5hpQ886sPpWhbk/W09vsM/L1c8TSwoxwfwvuyC1uiK49BGHN9Dgs2XoE4QDto8yoPH0nbxCBgJQoo3EhZITvfjllhidm6PohMWkJV3ejIFKNTnLFUbExLQr8mAo+KSr81FQgsJ9uOIQygaih4jPSJBtFAvRhSpl4dF8X1Mc73s8uPfPr0F/t+bBA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <27d830b1-a32b-1368-3c0e-e5de15da5000@suse.com>
Date: Thu, 12 Jan 2023 13:47:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/8] x86/boot: Sanitise PKRU on boot
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110171845.20542-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0030.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6940:EE_
X-MS-Office365-Filtering-Correlation-Id: a73ac689-367b-43e0-2b90-08daf49b3aa6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a73ac689-367b-43e0-2b90-08daf49b3aa6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 12:47:56.8985
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gfXggpVTfweQlmHh47YLjNXDQJeTvFqfa3hvz4Vr0eNwEDZoixkX+EO06dzur1QYSUIOOeN5YYJMZvfxhTYIlg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6940

On 10.01.2023 18:18, Andrew Cooper wrote:
> While the reset value of the register is 0, it might not be after kexec/etc.
> If PKEY0.{WD,AD} have leaked in from an earlier context, construction of a PV
> dom0 will explode.
> 
> Sequencing wise, this must come after setting CR4.PKE, and before we touch any
> user mappings.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> For sequencing, it could also come after setting XCR0.PKRU too, but then we'd
> need to construct an empty XSAVE area to XRSTOR from, and that would be even
> more horrible to arrange.

That would be ugly for other reasons as well, I think.

> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -936,6 +936,9 @@ void cpu_init(void)
>  	write_debugreg(6, X86_DR6_DEFAULT);
>  	write_debugreg(7, X86_DR7_DEFAULT);
>  
> +	if (cpu_has_pku)
> +		wrpkru(0);

What about the BSP during S3 resume? Shouldn't we play safe there too, just
in case?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 12:58:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 12:58:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476138.738153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFx9q-0003n2-Kw; Thu, 12 Jan 2023 12:58:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476138.738153; Thu, 12 Jan 2023 12:58:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFx9q-0003mv-IH; Thu, 12 Jan 2023 12:58:14 +0000
Received: by outflank-mailman (input) for mailman id 476138;
 Thu, 12 Jan 2023 12:58:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFx9o-0003mp-Hv
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 12:58:12 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2085.outbound.protection.outlook.com [40.107.21.85])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c4736321-9278-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 13:58:11 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8841.eurprd04.prod.outlook.com (2603:10a6:20b:408::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Thu, 12 Jan
 2023 12:58:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 12:58:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4736321-9278-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oFMYx3rn7wtE7mWuxpi/794htkqQyjJ0vlySlU2qcRTcQJ4FxyNljbMp0E8GS//SQ/y5JAReoJk72WmrjI60I+ZSxVZOPp4lWCns5H2rac63B/kWIENwIMhiU2ghWne59gp+1IpJYS0878JbKnLP6Ba6bXwnXCesCtznfyVd65j3Ufy57lqpiefwaUEGUxnG+Qbn9CL9FE6AdKh6QcCoS3SbwQVnsc1b5v/MK+CJdnbxLTsv/HiqdXZbpASNT7WrNI66qlJqYdoK34sL952pL4LYjkmssevxidDcTuIm4beN9qQJ2AwUywzM9a9QM1l+DtMs9G5i4LvDijWLZFezHw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=D26GuqKYX6gPt9im9t3yBAAtCPZw3qa90PikQLoD9uQ=;
 b=H6U9yEeQFWjs+9MBoUe84yf70a98Q/oYuuEWdYR0HdzejNJVQYMJe+CRkyMJ8wOvEUohg99QRr1Mv0VOee61AhX1CRh12JANJ53FCDepBWkEcufXwwqda4//Q2GYrfDZd0oyGuMYv43Gd3+5vfRMIICY5MQInB0hJYWxrOm57yymKCteQUJT/e45ohmLBOwOiWJKAGU9uW64+6WKc//fAlvarSMuRYZGoq5CCELZNEH2akE9NboXzXbM9es+g+bKql3EZL+LfBw6uJ5Zv4C6La/vnsXN5Zfesr3O+w5gD/v2uGGSKRthjZMf/4VfhM+lZpsH0UlMB9s9pe+yzCDNfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=D26GuqKYX6gPt9im9t3yBAAtCPZw3qa90PikQLoD9uQ=;
 b=ryFLB0Bo1ntp1y1U565RyrjJ5IqHj/2mphcHaoBE2pd6s18ufHkqpZ22jmukQTi8nhxABb/QwpXOub2FnaOTFI+3k0WOC7oXGF9XDtqom5d1ba7I3f0JcxP47EB1SoW/FupB+6pM2xrkwk0PTxA9T7pLfLSi4YwO+YI33KfxvuuiRE+HFc6cOJmRxVx2QbqMmvcPzdWG5nvhLmcQtglpSUFHN1W2sxs/5QnUEzLxqcRbIxSLn5GDYRl7vTym+436CVDCqmgrHftUGq1VdVamDfVD+vROgsoiaeW+TXoaKhC1i7cV8Px6n1PNasmfEYSbOrzQ5Oya3PAXpX4kgNyA0A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <97d16968-57fa-0114-1a93-4d0d253b8172@suse.com>
Date: Thu, 12 Jan 2023 13:58:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] x86: Initial support for WRMSRNS
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110171845.20542-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0006.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8841:EE_
X-MS-Office365-Filtering-Correlation-Id: bf3dde32-d52d-4efc-d948-08daf49ca6f0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bf3dde32-d52d-4efc-d948-08daf49ca6f0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 12:58:08.6254
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OizL/KZwFk0fTX27Bk3LG5sj7lWZ6O4wkAD/5M84DaafQRLShKDGaic/F+AZfKbrtFWswOmku5eqfeI7u7X55g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8841

On 10.01.2023 18:18, Andrew Cooper wrote:
> WRMSR Non-Serialising is an optimisation intended for cases where an MSR needs
> updating, but architectural serialising properties are not needed.
> 
> It is anticipated that this will apply to most if not all MSRs modified on
> context switch paths.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

This will allow me to drop half of what the respective emulator patch
consists of, which I have yet to post (but which in turn sits on top
of many other already posted emulator patches). Comparing with that
patch, one nit though:

> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -189,6 +189,7 @@ static const char *const str_7a1[32] =
>  
>      [10] = "fzrm",          [11] = "fsrs",
>      [12] = "fsrcs",
> +    /* 18 */                [19] = "wrmsrns",
>  };

We commonly leave a blank line to indicate discontiguous entries.

> --- a/xen/arch/x86/include/asm/msr.h
> +++ b/xen/arch/x86/include/asm/msr.h
> @@ -38,6 +38,18 @@ static inline void wrmsrl(unsigned int msr, __u64 val)
>          wrmsr(msr, lo, hi);
>  }
>  
> +/* Non-serialising WRMSR, when available.  Falls back to a serialising WRMSR. */
> +static inline void wrmsr_ns(uint32_t msr, uint32_t lo, uint32_t hi)
> +{
> +    /*
> +     * WRMSR is 2 bytes.  WRMSRNS is 3 bytes.  Pad WRMSR with a redundant CS
> +     * prefix to avoid a trailing NOP.
> +     */
> +    alternative_input(".byte 0x2e; wrmsr",
> +                      ".byte 0x0f,0x01,0xc6", X86_FEATURE_WRMSRNS,
> +                      "c" (msr), "a" (lo), "d" (hi));
> +}

No wrmsrl_ns() and/or wrmsr_ns_safe() variants right away?

Do you have any indication that a CS prefix is the least risky one to
use here (or in general)? Recognizing that segment prefixes have
gained alternative meanings in certain contexts, I would otherwise
wonder whether an address or operand size prefix wouldn't be more
suitable.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 13:11:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 13:11:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476145.738165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFxMM-0006Nd-SU; Thu, 12 Jan 2023 13:11:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476145.738165; Thu, 12 Jan 2023 13:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFxMM-0006NW-Pg; Thu, 12 Jan 2023 13:11:10 +0000
Received: by outflank-mailman (input) for mailman id 476145;
 Thu, 12 Jan 2023 13:11:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFxML-0006NQ-SJ
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 13:11:09 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2046.outbound.protection.outlook.com [40.107.22.46])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8fd9a662-927a-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 14:11:01 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9322.eurprd04.prod.outlook.com (2603:10a6:10:355::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 13:10:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 13:10:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fd9a662-927a-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DsEj8FBMnPzSm+X7RHzWq9Ui07w5qduz2StPOet9pvZeYATGdqrYqcdd2ajcDp/RN3zmRL7sRO91eu+RzgsJCeILCF9CDv1g7ohRxL3jC+V2X3LyYD5zWvGmoW3wN7AfCm0yp0QrDTft5Yx6fTHFMiUL4KREQYVU9G7Ilgc7sa8IGYDcpAqv6BXX+fiy9OEgObmaeXPN0nHsnHUn2CwVQqYASJzEFYJgzOoPZJcQJAGCxct2Uek4hVeNlTUkacS7dskik4bf3CVTknEEYDlWjXYlwdT7qvmnn5lj3+8GfV3PV5O6NTRR5aVfuv4pWGF57FcsABxJBFMOdkDiWTRAMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Y8pvCEsvK9DpiCgrrDOLt2avq5yLrFutttlavfM94sM=;
 b=ejICGx2iUpleIm22rl3I8wReweV4VPHxr8y7hbifV2jVNHYBKIiwRoWKGWdqnOHf7574Hv5lnydXtHSLSe0bYqYkb+P71Ae4tmHgbqiO1gQA+tvPO8WBrEBAi/Dw/f0yb+uZZCRxH+Hif4M/Zb72xFvZv17yRHEsr4fAV7GjBzVDe1kHtSuSdcQSrntebGcf8QDHIyMrtZLVm3sTq5Nnwk1aH/xGneb9nvByt91GtxBTrPjCGfsHh20AnDApDN3akjTB2XyNIuDjOoaHdlKxUrAWJk2zhYPvgvLjmB6w6cOROAfiDi93fQBq6145mcs39lgSFGLYkHdofngNSLldfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y8pvCEsvK9DpiCgrrDOLt2avq5yLrFutttlavfM94sM=;
 b=LMegPTetlWDlGmug1H9BtNzMomCE9PVRL08RkmupY7y3BemrPLRiVPbPLWgXvdBweGuSS7kbsDySEUMbqhuZ9xEgB3EhK2rQN8JV7jFf2JQ243yHgeZxOGNtNqy/WTXLTgMoFu4BkmBGeSCCzbnc+MVMBI1cj5yYUiFxKsVtNou2mOTsi1lsVvd9ot0C3y02s1ERcizppBxA8b8ZIfRn4Sc03C//G1MClUJcVrRQnkt4KxEylFOMTgptJupkUIr6TX9JJDWViCARk4YUeP2bvArQAFFiHlhbUIisaYk9jVgcIh0IjXWLDiCSyhouy2vo2zywCgWnN4Mqq6mUJE3epA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <af2b74b2-8f37-223c-b830-c2bb3bc6d467@suse.com>
Date: Thu, 12 Jan 2023 14:10:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-6-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110171845.20542-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0054.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9322:EE_
X-MS-Office365-Filtering-Correlation-Id: 32d6084f-7bd5-462b-cc0b-08daf49e72bb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 32d6084f-7bd5-462b-cc0b-08daf49e72bb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 13:10:59.4672
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Pi7mg7qTRKVFM+6T2EL1e/u7UxBwOkPFXZqO+7y2WsHZ9uWeG4hylSKW1ty9WAEqzemQIeHzCuwoD1+yDM/agA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9322

On 10.01.2023 18:18, Andrew Cooper wrote:
> +static inline void wrpkrs(uint32_t pkrs)
> +{
> +    uint32_t *this_pkrs = &this_cpu(pkrs);
> +
> +    if ( *this_pkrs != pkrs )
> +    {
> +        *this_pkrs = pkrs;
> +
> +        wrmsr_ns(MSR_PKRS, pkrs, 0);
> +    }
> +}
> +
> +static inline void wrpkrs_and_cache(uint32_t pkrs)
> +{
> +    this_cpu(pkrs) = pkrs;
> +    wrmsr_ns(MSR_PKRS, pkrs, 0);
> +}

Just to confirm - there's no anticipation of uses of this in async
contexts, i.e. there's no concern about the ordering of cache vs hardware
writes?

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -54,6 +54,7 @@
>  #include <asm/spec_ctrl.h>
>  #include <asm/guest.h>
>  #include <asm/microcode.h>
> +#include <asm/prot-key.h>
>  #include <asm/pv/domain.h>
>  
>  /* opt_nosmp: If true, secondary processors are ignored. */
> @@ -1804,6 +1805,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      if ( opt_invpcid && cpu_has_invpcid )
>          use_invpcid = true;
>  
> +    if ( cpu_has_pks )
> +        wrpkrs_and_cache(0); /* Must be before setting CR4.PKS */

Same question here as for PKRU wrt the BSP during S3 resume.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 13:26:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 13:26:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476151.738176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFxbJ-0008AA-7U; Thu, 12 Jan 2023 13:26:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476151.738176; Thu, 12 Jan 2023 13:26:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFxbJ-0008A3-4j; Thu, 12 Jan 2023 13:26:37 +0000
Received: by outflank-mailman (input) for mailman id 476151;
 Thu, 12 Jan 2023 13:26:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFxbI-00089u-CK
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 13:26:36 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2084.outbound.protection.outlook.com [40.107.104.84])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bbeee11e-927c-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 14:26:34 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Thu, 12 Jan
 2023 13:26:32 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 13:26:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbeee11e-927c-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PIxgZpukXDuuEhe/KP2JHX7ZArgrhbQ5xGGRtJ2Dk4bUEJIkAdusV5++PK4DeE0CwSfw0VTHh7JdwNBgzI768ZP7TTXniFAaUP3l5VianpM16VplY2Iy3uVSc3RnfftvzjTn+88I2UTALCgrj7ad6vK2U8gTjjP8gPBDLGStbZc1cTollKPs9rHjbymkbEwsstLoVKxDUKyYABxskZcDrwfzqLRAlxNr4RssXlKf71g9kxT60x8eCWju+TnSaIGNGu782b1g6NSSd7BeuXiFoCnhFCV097iMwoglmX1rWA2mDiDh7cbhHWaE87GkvHFxFGVm/8nPCunzWDdl6w3tcw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hH7GRJMJLboLqQxfrRCU7JHmsgaKy1telQ3E6ETmio0=;
 b=g00V5ZNpwNG69VkcoIYGvUStRUXt0OvuIY46caJpHof0em34B4GfvR6VS7aMeSbd/aaGHJeRR5v1wExTVoSVkOC+yV1NiJs51lMOwofb/nLWnKb/NphLR0ZOi364j0QGffyMJvAJxwVo4ZBtdJqGJ9H2pt75wJwKyZ0rmIyvQLG4RnEUlETy8L1IR26a7mJXahX4Q+g7iMJq4dDKX0nQsAbkq7ohn2Yqd51FcPE6CGuPZycENcWBt2WL8/0fcYga/JzPxCBUBL1v/24SmbKYmQKEVwsb8TsX3XooxanGgZpv7nC40Hn8buvbZ0N2Vkr0sZgvOzonCtyOus0egr+G6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hH7GRJMJLboLqQxfrRCU7JHmsgaKy1telQ3E6ETmio0=;
 b=ADP+3Gi6h4pkTQbne3+Wdo56xlqncPdciOW7iNPGt8ab2ZoqCBiisHemY6WspMshPFrGY4mqFX5WWCVFCNvXPciGEbX2DXLCIvHKL4apAQQTJfmnYkXlstqJpcLklHkMOVO9BcPvS/8PL21z2RsDxyo2qAXKVrhNOY2bD89lpSeL1HEdKDH48TWb5wx42Le43BbryG6EwCcxsSKAg5A1SMzjiq/0qyz157wEhBRLwvvz2MzIAKrZZYo26Z8kz5quVZtoYoweUca3+9TN4dJjFEytw7McLJlZZAWWMOtuEfLpmg9Lqi8mWf7pxGS1y641mDgen5ubs1FpFJQTUNiIsw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f64f0bbf-8f47-e897-eb7c-51f11c9ea4a8@suse.com>
Date: Thu, 12 Jan 2023 14:26:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/8] x86/hvm: Enable guest access to MSR_PKRS
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-7-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230110171845.20542-7-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0054.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8988:EE_
X-MS-Office365-Filtering-Correlation-Id: 59f23e35-1ada-4d3c-ed19-08daf4a09eeb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 59f23e35-1ada-4d3c-ed19-08daf4a09eeb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 13:26:32.6271
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: i0CiWlnBcHvK/2ZpkbpERTqgOW/F8XJWihxCP9Au+m2zv/asqsQdLRloshM5ev2egCn3XNAtleapekkEXv8pNA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8988

On 10.01.2023 18:18, Andrew Cooper wrote:
> @@ -2471,6 +2477,9 @@ static uint64_t cf_check vmx_get_reg(struct vcpu *v, unsigned int reg)
>          }
>          return val;
>  
> +    case MSR_PKRS:
> +        return (v == curr) ? rdpkrs() : msrs->pkrs;

Nothing here or ...

> @@ -2514,6 +2525,12 @@ static void cf_check vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>              domain_crash(d);
>          }
>          return;
> +
> +    case MSR_PKRS:
> +        msrs->pkrs = val;
> +        if ( v == curr )
> +            wrpkrs(val);
> +        return;

... here is VMX or (if we were to support it, just as an abstract
consideration) HVM specific. Which makes me wonder why this needs
handling in [gs]et_reg() in the first place. I guess I'm still not
fully in sync with your longer term plans here ...

The other thing I'd like to understand (and having an answer to this
would have been better before re-applying my R-b to this re-based
logic) concerns the lack of feature checks here. hvm_get_reg() can be
called from other than guest_rdmsr(); for an example see
arch_get_info_guest().

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 13:28:05 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175735-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175735: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 13:28:03 +0000

flight 175735 qemu-mainline real [real]
flight 175742 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175735/
http://logs.test-lab.xenproject.org/osstest/logs/175742/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 175623
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 175623

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair     10 xen-install/src_host fail in 175742 pass in 175735
 test-amd64-i386-pair        11 xen-install/dst_host fail pass in 175742-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175623
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175623
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    4 days
Failing since        175627  2023-01-08 14:40:14 Z    3 days   20 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    2 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2556 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 13:59:19 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 4/8] x86: Initial support for WRMSRNS
Thread-Topic: [PATCH v2 4/8] x86: Initial support for WRMSRNS
Thread-Index: AQHZJRefYU5gP8dknEybN3m0hZOPA66awNaAgAAQ7QA=
Date: Thu, 12 Jan 2023 13:58:40 +0000
Message-ID: <2e568a8d-02d6-5761-8b55-c37a8de1be0e@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-5-andrew.cooper3@citrix.com>
 <97d16968-57fa-0114-1a93-4d0d253b8172@suse.com>
In-Reply-To: <97d16968-57fa-0114-1a93-4d0d253b8172@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
 =?utf-8?B?dk5FTEwxUVB0bUFLb2FveFoyN0RTcWkyaVRmdHRvNzRvbGJIZWtxcXZSWkNS?=
 =?utf-8?B?QkE9PQ==?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <D52A8C1EF6E1B949B113D2F9810B1A28@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	O2r9A88FXUoL9rckE2jDDLrn7EKAerBklpUssImVG2eh5XoD5/lCOyLUMFJPVkKUPKICGPfHotx5DRTj+qI9cuepu2HWe7EZTXRHvaw32i+yaEUbje9dthMagIHcKJe2+VF8sVN5dk3yRj45fSIm37b/tgHkmvatKMUf25bjvvKfa629EavvY17rZPoGprDM/plOiyDKl/qfV1gMPYTofqVM/qdKi7cIoX/ZGd/0wJc/GcKfFvi64+W31nqWNY1L221D57URg1oGALoHZ3oJeIZ/b8Sk0TTBKc3H+isYr2yX74t2e8u3+oxEZ6N6kMn6XknihB2vSSxK7MVXHdHeJ5X93G6h03CTwhQ9IvWw/LtoYVNhIaESxTvo/nUoQCcFPR/sogUE2+cT/GZkqNJ5g9LRQag7jnJr2j5hGC/E9am+WfWH/iT7EAAxU+3EKEq40+G5S+0CCLOBCaiz0mNW8YxTwF5Z0dIE+V3BZyDRzOt16WhJb1ztybrIvsCeqRbL4sD9XQCAZGbw+d5U/fqnmi7KHhYzWrpCPkP15BMPlJl1REfzRyJ5b4kyM3GzG4HtfP6hiYb3rUoAgZUK02bp7ZnRlPkYE3qR846luNJ/ApEIWkNZsrX/4Pjwln+L39159ONaSKeXZvDeMLO39qq+XcDYSyCQb6mf1Idj2Js1lxjcvWOiuWjAwZX/168SRmXjF2yPUHksH7H5HvPavA+JjYTqbjN5ddV5b6HSoS9Ers+/8z7XWdWxAnTtPhTJgmYGxx/elIXeeF4O4U77wyJDZg8Y7IlmNqTNkIjmF2Rz9bnLrqY9K75/s+CNXnfUWR3V
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d84b2e79-3fee-48de-1e5f-08daf4a51c64
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jan 2023 13:58:40.8982
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 3o8hyJJPdi16K8unOx+q3sOII2IMmoUDtTU9/GFem9b91Ri1Nnxkd2Jw1JPhcgn57tyUTl5njAzFkanZr9oqNmGnxhkTHOqs8CJ9nUlJpuk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5025

On 12/01/2023 12:58 pm, Jan Beulich wrote:
> On 10.01.2023 18:18, Andrew Cooper wrote:
>> WRMSR Non-Serialising is an optimisation intended for cases where an MSR needs
>> updating, but architectural serialising properties are not needed.
>>
>> It is anticipated that this will apply to most if not all MSRs modified on
>> context switch paths.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> This will allow me to drop half of what the respective emulator patch
> consists of, which I'm yet to post (but which in turn is sitting on
> top of many other already posted emulator patches). Comparing with
> that patch, one nit though:

I did wonder if you had some stuff queued up.  I do need to get back to
reviewing.

>
>> --- a/tools/misc/xen-cpuid.c
>> +++ b/tools/misc/xen-cpuid.c
>> @@ -189,6 +189,7 @@ static const char *const str_7a1[32] =
>>  
>>      [10] = "fzrm",          [11] = "fsrs",
>>      [12] = "fsrcs",
>> +    /* 18 */                [19] = "wrmsrns",
>>  };
> We commonly leave a blank line to indicate dis-contiguous entries.

Oops yes.  Will fix.

>
>> --- a/xen/arch/x86/include/asm/msr.h
>> +++ b/xen/arch/x86/include/asm/msr.h
>> @@ -38,6 +38,18 @@ static inline void wrmsrl(unsigned int msr, __u64 val)
>>          wrmsr(msr, lo, hi);
>>  }
>>  
>> +/* Non-serialising WRMSR, when available.  Falls back to a serialising WRMSR. */
>> +static inline void wrmsr_ns(uint32_t msr, uint32_t lo, uint32_t hi)
>> +{
>> +    /*
>> +     * WRMSR is 2 bytes.  WRMSRNS is 3 bytes.  Pad WRMSR with a redundant CS
>> +     * prefix to avoid a trailing NOP.
>> +     */
>> +    alternative_input(".byte 0x2e; wrmsr",
>> +                      ".byte 0x0f,0x01,0xc6", X86_FEATURE_WRMSRNS,
>> +                      "c" (msr), "a" (lo), "d" (hi));
>> +}
> No wrmsrl_ns() and/or wrmsr_ns_safe() variants right away?

I still have a branch cleaning up MSR handling, which has been pending
since the Nanjing XenSummit, which makes some of those disappear.

But no - I wasn't planning to introduce helpers ahead of them being needed.

> Do you have any indications towards a CS prefix being the least risky
> one to use here (or in general)?

Yes.

Remember it's the prefix recommended for, and used by,
-mbranches-within-32B-boundaries to work around the Skylake jmp errata.

And based on this justification, it's also the prefix we use for padding
on various jmp/call's for retpoline inlining purposes.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:01:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:01:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476173.738212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFy99-0005Si-He; Thu, 12 Jan 2023 14:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476173.738212; Thu, 12 Jan 2023 14:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFy99-0005Sb-EB; Thu, 12 Jan 2023 14:01:35 +0000
Received: by outflank-mailman (input) for mailman id 476173;
 Thu, 12 Jan 2023 14:01:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFy97-0005ST-Ce
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:01:33 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2054.outbound.protection.outlook.com [40.107.249.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e238e7c-9281-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 15:01:32 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8537.eurprd04.prod.outlook.com (2603:10a6:20b:434::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Thu, 12 Jan
 2023 14:01:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 14:01:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e238e7c-9281-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j4TDR/EjdVzfF8gJg0EwKmIRx9k81maO44+S6RcKj1t3j2StYeyE1iT2Yvpk1kCyjW40YZEbUXfZ4hbbOppJUle9ZaBj2e6J7pTqKQtog0C0Qu1Ir+5xOzkwb7ChH9QPU2WprvITr1gdkUcgarlQT1PWQmOu92r/lAJFfIrpqSlhHScSPXGtYoGuQBnELh8gO46M1fAqV4eiLxBfeeUAlr+l5C1Xe//+HNzmd34ZRLEXbYEUf+AjB0sLMI8/BXfHLM+TjDTJB6yT3grUcfktouxnFrhrLYNukROYxNgm2CZ2EOLiE5WdiZZLFkVizWSMQVhw1i2D2gyKM6bl++KP/w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fJjRH7h1ZjdWbrus5g79m3fp7lbTH6/IpDJ5Io/pxKM=;
 b=AWtY/kf6gOv0a8lqD+5AbBSayEOxxh9yffB+smNLJmYACkhhR8NE5RirPRTKN6Pgbij7tIyJnWHystLIskD+gtZWsfX+500csi+L+SJvIe/kd/esfX5+xmMgbnCvTV+GB95u/67WG4C3bgbx+ZW6Yx+BFcpD6/Pyy/YzOuTWs2XDTcNg7O62ZTVWFYOZKfAR/G5D4F2wojXX3RVay7ppRJhB5RAFqxTViPwtLVIigGN/Ubxq95r+T50vvfVEu8za5ILlnfGljY+Sdm+GgjDrolgdWKKtvsHDTb08NEfEdutDqgCXfvcCHQdyp+htQKj6dHblNcoWLHQ0pkircfpy2Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fJjRH7h1ZjdWbrus5g79m3fp7lbTH6/IpDJ5Io/pxKM=;
 b=jbdShx4+n1bDwLTGX3IcqKaBTFI5zYaOMvghJo+3nxm+1GJTosbmWPLBycfjl+tgjddA+u/mjQn5KdsLnpU0LTDyU39NfCXSQkhtlt9tjpzNbih/PojPt7yb2olTApEggukO9TsfmzsFWXnlHBw5gv/Tf3taWHLXnlMDLdAYbFy3oyPVvFluQ4YpJtA9UtG6XnK3Rx4XXpEEs7X9BGS9HUEyi130A+EthdWOxoQRlTOrGy4vySpvoXJbRCXI32/wiDIRz/Bo4i6s8FihjJsQX61yHfXYkP5oNz/f3ljcrYuJLDxqS8xwU7AUJpLCEGHdqhIwmFOMABb0+fEI0mESIg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com>
Date: Thu, 12 Jan 2023 15:01:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] include/types: move stdlib.h-kind types to common header
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0078.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8537:EE_
X-MS-Office365-Filtering-Correlation-Id: 174b2b5e-88b0-4110-edfd-08daf4a58132
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2Ix20JLXTC4lsH0nmgDqIrtyB3WuTv/6nuvTkNJn1AavHsz7DnII/YuzWCn1X3JAsDE+taojlV4kUnWBPCtkS3B4f8aPgXoTldp33Z1VRI51uMCaie69kp2iXyBwg9LTWSRFIzMCRUntZZrdXw+XDiMjfBrd17vStQKquITjCpIPY49xH5A8V2wfhUuRPeDzzQl+KAPEECCJzvtptdBi1+n6yk77TopEjkUoDdbzmCiuyaJaPSHzg3BYG+8g6QT6ktk0Q/qKWUyrYxMMht7+R+X984w4KDqtaRIiYJYXYpNyh3r8Zhdeig1pSImOQkK7uAXTOS7VZOaCBnA2wnhc1ApbvqXYdVUMhNCT+CJq+P93CvTk0Ms9MAQbV5VQiAHqQd1CeoB6RL8z2/DUI10xT+TpvJRhMErFghKR6uRd45/Y5G6VARihsl++youmO84XnvQ9QjRutquQWEC0/IHb5t3JsG2jyymwkbi7WRBp0YMMZVstmjhhplBeRhB1zAE9DXFPHpnXT7XLPKLMzJchC/G9A39S8xv8MG6HlXayVG95MRjcU3sTsLPfo6CSWtzvYqlo6HssGUUhNJzkbFgQRqfr9rvDDK/rzskn9wW2EqN3EHe9B2s4lndMEMp8Yi6GnIx2gp1v15cVwtT+rmGLK9X4fMpN+e9MIAySTSNEFPMKsVlaP2mDZi7cFd3p4lpMyvhxAnXYKOQZCqbRHvAZlk99m5AURS+bnRTRgCM3tUA=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(396003)(376002)(136003)(366004)(39860400002)(346002)(451199015)(8936002)(5660300002)(41300700001)(86362001)(38100700002)(36756003)(2906002)(31696002)(6486002)(478600001)(31686004)(26005)(186003)(6512007)(6506007)(8676002)(6916009)(4326008)(66476007)(66946007)(316002)(66556008)(2616005)(54906003)(6666004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VU9zU2pWR0xqYlBoZEU2dS9XNDhDUWMyMjFHdEVPdjdybjhnYVdURDE1SXFu?=
 =?utf-8?B?amdreVAxWWFTMXovdjVoN1RnZjNxRWkrdkd2dGdpOGovdHNKdFJhdnVJa0xC?=
 =?utf-8?B?REZUaTFTbzNqT1pHZ2hheGxpSVBiMEh0T29sQ2c1KytQSDRlblF6Y0MzUXQx?=
 =?utf-8?B?VnI0blkwbTFXVDZxQkFCalVJcFJFenRuTXkzSEs2dXpDb3A4SXhBc25ZZEdU?=
 =?utf-8?B?ejRsNE5CeUtLNUJ2dXpyMWVib0V5cXZ1UlN6RlRFNFdXeE15N3ZTOFFEc2pV?=
 =?utf-8?B?MGpuZkdNVmpaeHlWQjRmVWdPTlgxamdxenF1SUt6bWtGZUVTcC9Sc3h1alZM?=
 =?utf-8?B?K21VeUJsNjA3a0M3K1lMVmF4NmJPVXowUGR4alhwRWhZaGlKVHFISkRDcFNr?=
 =?utf-8?B?VzE0OW9LN1B0MnJTSGRDTUhpdWN1ck5IU0lYSWtEeWwxUDB5QkQremFrUVN4?=
 =?utf-8?B?UU05Z3hFZnFNejZjenNIQnVvcDZxczBoQXN5amVUOEl5ajhaNytQVldicWF2?=
 =?utf-8?B?WXlVeXQ2cXg3R2NXTGNtVlFFYVNKQkdYSWtFazc2UzdreUdZZGdsS1JtaDFa?=
 =?utf-8?B?eVdxdmNwRzNtOUhiTktQMjBJdXErYnlLV0pVa0tPQjY1eFdaK29kWGhCMEh6?=
 =?utf-8?B?UkI4Rm5QNTRlQWdxMVZkM1MzQVBTTDcwVDdyQjdQUHlvRmxLN2VHMVNGcDli?=
 =?utf-8?B?amlEcG5qODJHdC9VYzk4VWx0aXI4dUd4U0FkT09XTWxSdkZQcVJqM3VOTUdB?=
 =?utf-8?B?WHFsM0VZNnEybzBobVV2eVM3OGxjejRWZkl4ODFhNC9Ta0I4S2R5VGpLek5j?=
 =?utf-8?B?bkc5dlY3MFBucUFlcnozbUN2aGIzMkxxOG5JRjdBM3c3L1doNnk5b1FjVjlV?=
 =?utf-8?B?UGlURko5cUlRRmliMEF6R3lpR09BQVB5MlVwR0R2L3ZGZU5MK0tvb1ErWEIx?=
 =?utf-8?B?Y1pBTStFa0FKUndjaUJNcnpoMlRqYmppSTVQY2ZhQWJlU28yUjlkNDhza3pI?=
 =?utf-8?B?VG1sWEd3MENCLzU5bCtGeEZkNEpOeTEzSHRYOTB6a1duQWVab2ZseE11SWJW?=
 =?utf-8?B?TGJWQ2Fpc2t0THh5bU10Z083ZEE3aXVnV3MvQkZ3a3ltWkk4YVdRNHgyYkwv?=
 =?utf-8?B?Y2FWNEFwYVlHNnVTUzhoVzU2YzNZakRDTmExMEpsWG5yY1k3MTRiUFFyV0Vm?=
 =?utf-8?B?TlRPUjIxTTFKYXByS29wZEw2ZkFaMW9pYkZ0ajgycXB0WGtDTjZLSThSOHJU?=
 =?utf-8?B?Zzk2K1NkYTdjTUE5MWtrMWpwVlE2T05mSUF0dWFJUUdHWEp4QUlMNFdjdUF1?=
 =?utf-8?B?ckVvdS9mLzVBRnRqMmIydkQ2RU1jVndHYUhXeXpyd21MOFErOC80RnNRanZu?=
 =?utf-8?B?YytWTThsYjVRU1dsTFZCZGhtSHcvS3ZiYkxRN0IwWVlITEkwQ2NVUnRQN3Rv?=
 =?utf-8?B?WjZMQzJBL3U4YTg1WnRPTFhTcEJudUtJTk53clB0L09DUUdyRmk5UHRxdTJU?=
 =?utf-8?B?MGFBNVNLTFZwUHFJTUlFNzNlOWtEdEVRc2d0STM4b3phelNuQ2tCM1dTekRa?=
 =?utf-8?B?NlVaMHh5Z1dnOTRQOVc0R3dTR1NwLzR2QUEzcTNROGFlajR5WUN3bGJhV0F6?=
 =?utf-8?B?QkxQSHlSWDA1MVQvanlCRy90dmU5aGFqaUVtcUh5M1ZreGFlNm9iMjQ4UWdl?=
 =?utf-8?B?VTU5L1VoaHRmZ1VOemsyRjViSWdUVS9rZlc4dkNJQ09JOXI3YkhtdlFhcUZW?=
 =?utf-8?B?NmhBd0cyVzRRTUloMjRvTE1NM1p0T2RzdVZ3SXQ5K1JGUHpiZ203TU16SjRG?=
 =?utf-8?B?MFdZSHpRUE9CLzMxeWRSUWlXbFViTmZKTDBDWGszUjUvdjNoTkh3ZCt6ZlBh?=
 =?utf-8?B?NThEcE5aQ3psK3o0UWN2bXdKa3ZNTzUwYVU3Zi9hN0VBdE5EUUYzUXhwYjdt?=
 =?utf-8?B?bEZFU0d1U0J0WDJQYTBpb2F2VllaYnZNNVpCRTJnaE1vUHg4VkVPeDYrMnBR?=
 =?utf-8?B?TVRZQXgzK0k3c3Z0SnFTM3g4MzFaRVgwUExMOUcra1FjVmFQTzJhVmhZMks4?=
 =?utf-8?B?cHZTWkhsellYajlOL0xtQzlSa2IxT0oyUHRnVStTM2RBWlowOVk4bUVyaW9C?=
 =?utf-8?Q?FDrOcCzhW4k2xl2MT6eaPuSok?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 174b2b5e-88b0-4110-edfd-08daf4a58132
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 14:01:30.2444
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KrjkouWn3IvIMEgsXaPcfIWdvk4FmrhV4OVeTScxXFSMoXCbN4Tg3h6PdmE6ZrYfgl+YowGg9GfO+CZ/O0/rPw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8537

size_t, ssize_t, and ptrdiff_t are all expected to be uniformly defined
on any ports Xen might gain. In particular I hope new ports can rely on
__SIZE_TYPE__ and __PTRDIFF_TYPE__ being made available by the compiler.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
This is just to start with some hopefully uncontroversial low-hanging fruit.

--- a/xen/arch/arm/include/asm/types.h
+++ b/xen/arch/arm/include/asm/types.h
@@ -54,19 +54,6 @@ typedef u64 register_t;
 #define PRIregister "016lx"
 #endif
 
-#if defined(__SIZE_TYPE__)
-typedef __SIZE_TYPE__ size_t;
-#else
-typedef unsigned long size_t;
-#endif
-typedef signed long ssize_t;
-
-#if defined(__PTRDIFF_TYPE__)
-typedef __PTRDIFF_TYPE__ ptrdiff_t;
-#else
-typedef signed long ptrdiff_t;
-#endif
-
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ARM_TYPES_H__ */
--- a/xen/arch/x86/include/asm/types.h
+++ b/xen/arch/x86/include/asm/types.h
@@ -32,19 +32,6 @@ typedef unsigned long paddr_t;
 #define INVALID_PADDR (~0UL)
 #define PRIpaddr "016lx"
 
-#if defined(__SIZE_TYPE__)
-typedef __SIZE_TYPE__ size_t;
-#else
-typedef unsigned long size_t;
-#endif
-typedef signed long ssize_t;
-
-#if defined(__PTRDIFF_TYPE__)
-typedef __PTRDIFF_TYPE__ ptrdiff_t;
-#else
-typedef signed long ptrdiff_t;
-#endif
-
 #endif /* __ASSEMBLY__ */
 
 #endif /* __X86_TYPES_H__ */
--- a/xen/include/xen/types.h
+++ b/xen/include/xen/types.h
@@ -5,6 +5,19 @@
 
 #include <asm/types.h>
 
+#if defined(__SIZE_TYPE__)
+typedef __SIZE_TYPE__ size_t;
+#else
+typedef unsigned long size_t;
+#endif
+typedef signed long ssize_t;
+
+#if defined(__PTRDIFF_TYPE__)
+typedef __PTRDIFF_TYPE__ ptrdiff_t;
+#else
+typedef signed long ptrdiff_t;
+#endif
+
 #define BITS_TO_LONGS(bits) \
     (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
 #define DECLARE_BITMAP(name,bits) \


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:12:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:12:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476179.738223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyJe-00073O-Hz; Thu, 12 Jan 2023 14:12:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476179.738223; Thu, 12 Jan 2023 14:12:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyJe-00073H-Ex; Thu, 12 Jan 2023 14:12:26 +0000
Received: by outflank-mailman (input) for mailman id 476179;
 Thu, 12 Jan 2023 14:12:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFyJd-000737-Oc; Thu, 12 Jan 2023 14:12:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFyJd-0001n0-MB; Thu, 12 Jan 2023 14:12:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFyJd-0005Ma-99; Thu, 12 Jan 2023 14:12:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFyJd-0002Jc-8h; Thu, 12 Jan 2023 14:12:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=namEvxOZfSMy9Qw2UTPJQBrnqATYnSxvYuHP95LxpDQ=; b=f68pLMVRDgzvE6TARXVFtQZkOd
	CLJiKZiBtmevV9M9BtoBYGQFzd0Gm3pfhhvRPJ0r1JWHKTrAwWMwB42I+4XFuGLd1tO+OIdePVrGH
	SxCabVlk8nDbOAAxZj/vxhy4ZhM5BS2qwrM8UoF9eNvqhpOMrTgH934UEWIlPWZOMNS8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175741-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175741: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=661489874e87c0f6e21ac298b039aab9379f6ee0
X-Osstest-Versions-That:
    xen=83d9679db057d5736c7b5a56db06bb6bb66c3914
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 14:12:25 +0000

flight 175741 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175741/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  661489874e87c0f6e21ac298b039aab9379f6ee0
baseline version:
 xen                  83d9679db057d5736c7b5a56db06bb6bb66c3914

Last test of basis   175732  2023-01-12 00:00:28 Z    0 days
Testing same since   175741  2023-01-12 11:01:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   83d9679db0..661489874e  661489874e87c0f6e21ac298b039aab9379f6ee0 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:16:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:16:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476188.738234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyNS-0007nq-61; Thu, 12 Jan 2023 14:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476188.738234; Thu, 12 Jan 2023 14:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyNS-0007nj-2D; Thu, 12 Jan 2023 14:16:22 +0000
Received: by outflank-mailman (input) for mailman id 476188;
 Thu, 12 Jan 2023 14:16:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFyNQ-0007nd-Rs
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:16:20 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id adfb10cf-9283-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 15:16:19 +0100 (CET)
Received: from mail-mw2nam12lp2043.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Jan 2023 09:16:16 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6460.namprd03.prod.outlook.com (2603:10b6:303:123::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 14:16:11 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 14:16:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adfb10cf-9283-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673532979;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=M4MLRHfiI+tlOzl0M5z8zfqWd7k4vE8XvdoGTLWQuxY=;
  b=cVdErYBqvZUkMNOOlHQ97ABmgLi+5ulgVnpqg2PLfLcUu7rG+DLnbli4
   /T8E/bCnv6bhSNE4ET2lBNRowcxdAMKwNqoElJmp9GWJVBx2uu2emON7e
   Ei+he8edGhfeyyNjk29TE3YMYB08xk6jS5b43xanUS6lhJ6uB1q425U7G
   0=;
X-IronPort-RemoteIP: 104.47.66.43
X-IronPort-MID: 92355094
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:aA0EwaMaYTT0sazvrR2clsFynXyQoLVcMsEvi/4bfWQNrUp21GdVz
 DFNC2jUOPiJZjSjfd1xady2oxwD75LdmNMxTwto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQA+KmU4YoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9Suv3rRC9H5qyo42tB5wZmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0rlQXmZop
 N0RETtXTyqvv/yn3Y6wEtA506zPLOGzVG8ekldJ6GiDSNMZG9XESaiM4sJE1jAtgMwIBezZe
 8cSdTtoalLHfgFLPVAUTpk5mY9EhFGmK2Ee9A3T+/RxvzO7IA9ZidABNPL8fNCQSNoTtUGfv
 m/cpEzyAw0ANczZwj2Amp6prr6Vxn6mANNOfFG+3tNQmwO/1DE6NDtIWRid8cCopHCeZvsKf
 iT4/QJr98De7neDTNPwQhm5q36spQMHVpxbFOhSwB6J4rrZ5UCeHGdsZj1Mdt0g8tM3TDoC1
 1mVktevDjtq2JWFRHTY+rqKoDeaPSkOMXREdSICVREC4dTovMc0lB2nczp4OKu8j9mwHC6qx
 TmP9XI6n+9L0ZVN0Lin91fahT7qvoLOUgM++gTQWCSi8x99Y4mmIYev7DA38Mp9EWpQdXHZ1
 FBspiRUxLpm4U2l/MBVfNgwIQ==
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 6/8] x86/hvm: Enable guest access to MSR_PKRS
Thread-Topic: [PATCH v2 6/8] x86/hvm: Enable guest access to MSR_PKRS
Thread-Index: AQHZJReeV/6Z/BfYW0W8YgpP3+cAB66ayMYAgAAN4oA=
Date: Thu, 12 Jan 2023 14:16:11 +0000
Message-ID: <f1e2d71d-5fad-962d-a7a0-af1044f40de6@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-7-andrew.cooper3@citrix.com>
 <f64f0bbf-8f47-e897-eb7c-51f11c9ea4a8@suse.com>
In-Reply-To: <f64f0bbf-8f47-e897-eb7c-51f11c9ea4a8@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <5D3B55E9AB659345A5920CAE2C5408F6@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 12/01/2023 1:26 pm, Jan Beulich wrote:
> On 10.01.2023 18:18, Andrew Cooper wrote:
>> @@ -2471,6 +2477,9 @@ static uint64_t cf_check vmx_get_reg(struct vcpu *v, unsigned int reg)
>>          }
>>          return val;
>>  
>> +    case MSR_PKRS:
>> +        return (v == curr) ? rdpkrs() : msrs->pkrs;
> Nothing here or ...
>
>> @@ -2514,6 +2525,12 @@ static void cf_check vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>>              domain_crash(d);
>>          }
>>          return;
>> +
>> +    case MSR_PKRS:
>> +        msrs->pkrs = val;
>> +        if ( v == curr )
>> +            wrpkrs(val);
>> +        return;
> ... here is VMX or (if we were to support it, just as an abstract
> consideration) HVM specific. Which makes me wonder why this needs
> handling in [gs]et_reg() in the first place. I guess I'm still not
> fully in sync with your longer term plans here ...

If (when) AMD implement it, the AMD form will be vmcb->pkrs and
not msrs->pkrs, because like all other paging controls, they'll be
swapped automatically by VMRUN.

(I don't know this for certain, but I'm happy to bet on it, given a
decade of consistency in this regard.)

> The other thing I'd like to understand (and having an answer to this
> would have been better before re-applying my R-b to this re-based
> logic) is towards the lack of feature checks here. hvm_get_reg()
> can be called from other than guest_rdmsr(), for an example see
> arch_get_info_guest().

The point is to separate auditing logic (wants to be implemented only
once) from data shuffling logic (is the value in a register, or the MSR
lists, or VMCB/VMCS or struct vcpu, etc).  It is always the caller's
responsibility to confirm that REG exists, and that VAL is suitable for REG.

arch_get_info_guest() passes MSR_SHADOW_GS_BASE, which exists
unilaterally (because we don't technically do !LM correctly).

But this is all discussed in the comment by the function prototypes.
I'm not sure how to make that any clearer than it already is.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:22:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:22:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476194.738245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyTS-0000pL-Qa; Thu, 12 Jan 2023 14:22:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476194.738245; Thu, 12 Jan 2023 14:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyTS-0000pE-Ni; Thu, 12 Jan 2023 14:22:34 +0000
Received: by outflank-mailman (input) for mailman id 476194;
 Thu, 12 Jan 2023 14:22:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pFyTR-0000p6-Im
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:22:33 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8b2b0e69-9284-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 15:22:30 +0100 (CET)
Received: from mail-sn1nam02lp2045.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Jan 2023 09:22:25 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5760.namprd03.prod.outlook.com (2603:10b6:a03:2d3::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 12 Jan
 2023 14:22:23 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 14:22:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b2b0e69-9284-11ed-b8d0-410ff93cb8f0
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Roger Pau
 Monne <roger.pau@citrix.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Subject: Re: [PATCH] include/types: move stdlib.h-kind types to common header
Thread-Topic: [PATCH] include/types: move stdlib.h-kind types to common header
Thread-Index: AQHZJo5jBKk6QTlSGEGpBgNCxgedxK6a1XaA
Date: Thu, 12 Jan 2023 14:22:23 +0000
Message-ID: <b0119998-b196-8319-fee8-de3231b14d8e@citrix.com>
References: <5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com>
In-Reply-To: <5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <5C2EF54913AB7A41AF27243F8AB6DF05@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 12/01/2023 2:01 pm, Jan Beulich wrote:
> size_t, ssize_t, and ptrdiff_t are all expected to be uniformly defined
> on any ports Xen might gain. In particular I hope new ports can rely on
> __SIZE_TYPE__ and __PTRDIFF_TYPE__ being made available by the compiler.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thank you for starting this.

> ---
> This is just to start with some hopefully uncontroversial low hanging fruit.

However, I'd advocate going one step further and making a real
xen/stddef.h header to match our existing stdbool and stdarg, now that
we have fully divorced ourselves from the compiler-provided freestanding
headers.

This way, the types are declared in the usual place in a C environment.

I was then also going to use this approach to start breaking up
xen/lib.h, which is a dumping ground of far too much stuff.  In
particular, when we have a stddef.h, I think it is entirely reasonable
to move things like ARRAY_SIZE/count_args()/etc. into it, because they
are entirely standard in the Xen codebase.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476203.738265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjp-0002kN-Vj; Thu, 12 Jan 2023 14:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476203.738265; Thu, 12 Jan 2023 14:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjp-0002jB-Q6; Thu, 12 Jan 2023 14:39:29 +0000
Received: by outflank-mailman (input) for mailman id 476203;
 Thu, 12 Jan 2023 14:39:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdGT=5J=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pFyjo-0002av-Gj
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:39:28 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e7f02495-9286-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 15:39:23 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pFyjo-0057ze-0K; Thu, 12 Jan 2023 14:39:28 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E5F5E300BE3;
 Thu, 12 Jan 2023 15:39:12 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id BD2812CC8C6A4; Thu, 12 Jan 2023 15:39:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7f02495-9286-11ed-b8d0-410ff93cb8f0
Message-ID: <20230112143825.762830112@infradead.org>
User-Agent: quilt/0.66
Date: Thu, 12 Jan 2023 15:31:45 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com
Subject: [RFC][PATCH 4/6] x86/power: Sprinkle some noinstr
References: <20230112143141.645645775@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Ensure no compiler instrumentation sneaks in while restoring the CPU
state. Specifically, we can't handle CALL/RET until GS is restored.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/power/cpu.c |   13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -192,7 +192,7 @@ static void fix_processor_context(void)
  * The asm code that gets us here will have restored a usable GDT, although
  * it will be pointing to the wrong alias.
  */
-static void notrace __restore_processor_state(struct saved_context *ctxt)
+static __always_inline void __restore_processor_state(struct saved_context *ctxt)
 {
 	struct cpuinfo_x86 *c;
 
@@ -235,6 +235,13 @@ static void notrace __restore_processor_
 	loadsegment(fs, __KERNEL_PERCPU);
 #endif
 
+	/*
+	 * Definitely wrong, but at this point we should have at least enough
+	 * to do CALL/RET (consider SKL callthunks) and this avoids having
+	 * to deal with the noinstr explosion for now :/
+	 */
+	instrumentation_begin();
+
 	/* Restore the TSS, RO GDT, LDT, and usermode-relevant MSRs. */
 	fix_processor_context();
 
@@ -276,10 +283,12 @@ static void notrace __restore_processor_
 	 * because some of the MSRs are "emulated" in microcode.
 	 */
 	msr_restore_context(ctxt);
+
+	instrumentation_end();
 }
 
 /* Needed by apm.c */
-void notrace restore_processor_state(void)
+void noinstr restore_processor_state(void)
 {
 	__restore_processor_state(&saved_context);
 }




From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476206.738311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjs-0003pV-Qb; Thu, 12 Jan 2023 14:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476206.738311; Thu, 12 Jan 2023 14:39:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjs-0003p8-Kk; Thu, 12 Jan 2023 14:39:32 +0000
Received: by outflank-mailman (input) for mailman id 476206;
 Thu, 12 Jan 2023 14:39:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdGT=5J=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pFyjq-0002av-J2
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:39:30 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e82b8c26-9286-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 15:39:25 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pFyjT-0040tJ-0q; Thu, 12 Jan 2023 14:39:07 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 6DA1B30067B;
 Thu, 12 Jan 2023 15:39:13 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id C69932CC949B5; Thu, 12 Jan 2023 15:39:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e82b8c26-9286-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=w9+PAdJk73UH2Fl3uPAAs4wgk5kyO76VlH+ufUk/PX4=; b=dGCr6CbAN+FGZxrxNRDZdAytzQ
	ohdZmJ2pWqhrruEFxAeISbBJzfYzIpqoJ3L/tRSUW3gBGjkO2s4tKa2W7ElzbOgRY0hRHRAMqDn7v
	I08YMwa446QJGSFX9BK9EJUds/SAXIdeHq/1QrE0yC9DLJmA54/yUU0pMsKnK/GKurujDhdUdMZgy
	2+E0yvI3gp0qTRUUgSCZVL/zjtjL6j9B4gmAO+rUVP/3waaiBSWmCmPLQGQ2RCXfbx35gYfLuHUHj
	rMAwsTWq/CMa37B4ktxnJOl8GYiROHox2K2kVc6oMMpelLmGsftbM+7NBgYgsXqBejFVKGscqqsL5
	Y+V/ZZZg==;
Message-ID: <20230112143825.822328168@infradead.org>
User-Agent: quilt/0.66
Date: Thu, 12 Jan 2023 15:31:46 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com
Subject: [RFC][PATCH 5/6] PM / hibernate: Add minimal noinstr annotations
References: <20230112143141.645645775@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

When resuming, there must not be any code between swsusp_arch_suspend()
and restore_processor_state(), since the CPU state is ill-defined at
that point in time.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/power/hibernate.c |   30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -280,6 +280,32 @@ __weak int arch_resume_nosmt(void)
 	return 0;
 }
 
+static noinstr int suspend_and_restore(void)
+{
+	int error;
+
+	/*
+	 * Strictly speaking swsusp_arch_suspend() should be noinstr too but it
+	 * is typically written in asm, as such, assume it is good and shut up
+	 * the validator.
+	 */
+	instrumentation_begin();
+	error = swsusp_arch_suspend();
+	instrumentation_end();
+
+	/*
+	 * Architecture resume code 'returns' from the swsusp_arch_suspend()
+	 * call and resumes execution here with some very dodgy machine state.
+	 *
+	 * Compiler instrumentation between these two calls (or in
+	 * restore_processor_state() for that matter) will make life *very*
+	 * interesting indeed.
+	 */
+	restore_processor_state();
+
+	return error;
+}
+
 /**
  * create_image - Create a hibernation image.
  * @platform_mode: Whether or not to use the platform driver.
@@ -323,9 +349,7 @@ static int create_image(int platform_mod
 	in_suspend = 1;
 	save_processor_state();
 	trace_suspend_resume(TPS("machine_suspend"), PM_EVENT_HIBERNATE, true);
-	error = swsusp_arch_suspend();
-	/* Restore control flow magically appears here */
-	restore_processor_state();
+	error = suspend_and_restore();
 	trace_suspend_resume(TPS("machine_suspend"), PM_EVENT_HIBERNATE, false);
 	if (error)
 		pr_err("Error %d creating image\n", error);




From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476205.738298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjr-0003UZ-H2; Thu, 12 Jan 2023 14:39:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476205.738298; Thu, 12 Jan 2023 14:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjr-0003T0-C2; Thu, 12 Jan 2023 14:39:31 +0000
Received: by outflank-mailman (input) for mailman id 476205;
 Thu, 12 Jan 2023 14:39:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdGT=5J=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pFyjp-0002av-Ir
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:39:29 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e7f83730-9286-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 15:39:25 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pFyjS-0040tE-0z; Thu, 12 Jan 2023 14:39:06 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id EA5E2300C50;
 Thu, 12 Jan 2023 15:39:12 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id B77922CC949A1; Thu, 12 Jan 2023 15:39:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7f83730-9286-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=+hVGbhW+kq8WeUTnRFVIQ4y6xF6mTUDTY1iojLvcO/Q=; b=VD2YCgkTmE2CQ4Y8uw29qiC2TE
	R80ib0ot/jJ/vDVx1nc81VA/UdFYRNG0RCJX9If+JwIwax/MdSzvs+229sqAm51rlqFP4ZtmBL6KE
	ZGwzJaeaNt97uYlCPg3WWmvJH6/Qc2TRui52REAxdkfvawrPGc1AlBYy8r3QDtkibYhajuP5B7DDl
	e4VUHACu7Phw+gAfdvDwUrseQHxbEogW6ZFrf4DGNzODQW73kd/B4Gn8iBVxRaIIdPZwruHvk8R/J
	U+pz1lXQCECLx95TWt3y1l8JrPMkbqVDOu1nEJOjWfQjLPnHBvOtaLz8Bxx4rBAruF+c4Yb+M8UAF
	+VcR5hIg==;
Message-ID: <20230112143825.644480983@infradead.org>
User-Agent: quilt/0.66
Date: Thu, 12 Jan 2023 15:31:43 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com
Subject: [RFC][PATCH 2/6] x86/power: Inline write_cr[04]()
References: <20230112143141.645645775@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Since we can't do CALL/RET until GS is restored and CR[04] pinning is
of dubious value in this code path, simply write the stored values.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/power/cpu.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -208,11 +208,11 @@ static void notrace __restore_processor_
 #else
 /* CONFIG X86_64 */
 	native_wrmsrl(MSR_EFER, ctxt->efer);
-	native_write_cr4(ctxt->cr4);
+	asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
 #endif
 	native_write_cr3(ctxt->cr3);
 	native_write_cr2(ctxt->cr2);
-	native_write_cr0(ctxt->cr0);
+	asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
 
 	/* Restore the IDT. */
 	native_load_idt(&ctxt->idt);




From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476207.738322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjw-0004Ac-4I; Thu, 12 Jan 2023 14:39:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476207.738322; Thu, 12 Jan 2023 14:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjv-0004AT-Vt; Thu, 12 Jan 2023 14:39:35 +0000
Received: by outflank-mailman (input) for mailman id 476207;
 Thu, 12 Jan 2023 14:39:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdGT=5J=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pFyjv-0002av-69
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:39:35 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eda5cb2e-9286-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 15:39:33 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pFyjS-0040tF-12; Thu, 12 Jan 2023 14:39:06 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E5EEE30084B;
 Thu, 12 Jan 2023 15:39:12 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id B487C2CC8C6A5; Thu, 12 Jan 2023 15:39:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eda5cb2e-9286-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=sHDPsq2uABjtfq0P5gTU9bTPKMFTr0x7rU8OwfasSIg=; b=LI9y8vBQdis0SSWa52KOS2vrQI
	DwEgo+snsApY9aPqeiehYRoGQAQPCfcXdVF/jnQvKmoDaZgOMjKpFPoGTQGrbyME7YpvvPqVAN8Au
	xhJ4YL/E4PAO+PAZBX22U4wBu1/QoMzS7R97C0oTdBAOCx8hnEzZ/GJ8x43Z2EedpvXr/NLaJ1WcW
	Xn4115CcVMUdvp59cZr83pNUcEW6riGQbkak/hhi3nIgdfg9mi2Jm6QWhhq8mm98rpvZdtPfTsVha
	Tn3o9kjumy/5q3a/JwjaPtKaLtr7A4nKrKiZq5C++dGVHnqAjuh5olHmuMnpLddjUizxu999OLs+z
	VmxWPwnQ==;
Message-ID: <20230112143825.584639584@infradead.org>
User-Agent: quilt/0.66
Date: Thu, 12 Jan 2023 15:31:42 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com
Subject: [RFC][PATCH 1/6] x86/power: De-paravirt restore_processor_state()
References: <20230112143141.645645775@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Since Xen PV doesn't use restore_processor_state(), and we're going to
have to avoid CALL/RET until at least GS is restored, de-paravirt the
easy bits.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/power/cpu.c |   24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -197,25 +197,25 @@ static void notrace __restore_processor_
 	struct cpuinfo_x86 *c;
 
 	if (ctxt->misc_enable_saved)
-		wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
+		native_wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
 	/*
 	 * control registers
 	 */
 	/* cr4 was introduced in the Pentium CPU */
 #ifdef CONFIG_X86_32
 	if (ctxt->cr4)
-		__write_cr4(ctxt->cr4);
+		native_write_cr4(ctxt->cr4);
 #else
 /* CONFIG X86_64 */
-	wrmsrl(MSR_EFER, ctxt->efer);
-	__write_cr4(ctxt->cr4);
+	native_wrmsrl(MSR_EFER, ctxt->efer);
+	native_write_cr4(ctxt->cr4);
 #endif
-	write_cr3(ctxt->cr3);
-	write_cr2(ctxt->cr2);
-	write_cr0(ctxt->cr0);
+	native_write_cr3(ctxt->cr3);
+	native_write_cr2(ctxt->cr2);
+	native_write_cr0(ctxt->cr0);
 
 	/* Restore the IDT. */
-	load_idt(&ctxt->idt);
+	native_load_idt(&ctxt->idt);
 
 	/*
 	 * Just in case the asm code got us here with the SS, DS, or ES
@@ -230,7 +230,7 @@ static void notrace __restore_processor_
 	 * handlers or in complicated helpers like load_gs_index().
 	 */
 #ifdef CONFIG_X86_64
-	wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
+	native_wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
 #else
 	loadsegment(fs, __KERNEL_PERCPU);
 #endif
@@ -246,15 +246,15 @@ static void notrace __restore_processor_
 	loadsegment(ds, ctxt->es);
 	loadsegment(es, ctxt->es);
 	loadsegment(fs, ctxt->fs);
-	load_gs_index(ctxt->gs);
+	native_load_gs_index(ctxt->gs);
 
 	/*
 	 * Restore FSBASE and GSBASE after restoring the selectors, since
 	 * restoring the selectors clobbers the bases.  Keep in mind
 	 * that MSR_KERNEL_GS_BASE is horribly misnamed.
 	 */
-	wrmsrl(MSR_FS_BASE, ctxt->fs_base);
-	wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
+	native_wrmsrl(MSR_FS_BASE, ctxt->fs_base);
+	native_wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
 #else
 	loadsegment(gs, ctxt->gs);
 #endif




From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476204.738276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjq-0002rB-BK; Thu, 12 Jan 2023 14:39:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476204.738276; Thu, 12 Jan 2023 14:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjq-0002nW-2V; Thu, 12 Jan 2023 14:39:30 +0000
Received: by outflank-mailman (input) for mailman id 476204;
 Thu, 12 Jan 2023 14:39:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdGT=5J=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pFyjo-0002av-UJ
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:39:28 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e881a6da-9286-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 15:39:24 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pFyjo-0057zn-9C; Thu, 12 Jan 2023 14:39:28 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 6E776300AFB;
 Thu, 12 Jan 2023 15:39:13 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id CB2472CC949B7; Thu, 12 Jan 2023 15:39:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e881a6da-9286-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=LNbZyqdS6MCOXN5DhvrakKEz6kKRrs7/+Ia2/9U4HAc=; b=G5AA9h/yH9+nJcJaV99Ad2Ey8F
	k5XWYtZamnH9v88AQNk//cUWHI313YsUcDWkBRHwsEnZ3YwO8uX31UVbbHNGiSckhXek4//+Wnz8l
	sFL6gBapz0F4s5kPWpNrhL2lnWrU7jUGCNGjtvEK9GN2wjN3Qp6ppjE4WmTl7v9s6AsO4FwK2CMiy
	BksgEqIsTHg+IvsVuWE/RCyauvJNvXu9ihIdqTkl7iNXaBSeD4jwxobQPOMLpk/dWsh0TlC086R3I
	6S5sM/g0/l5h+kcMN6smqUUGVPcaJ0N69KsfH2p+TSKM/cDaDIhyRd6fHrKzfHFtn9N3WXaseHucg
	0YtdygGg==;
Message-ID: <20230112143825.881829388@infradead.org>
User-Agent: quilt/0.66
Date: Thu, 12 Jan 2023 15:31:47 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com
Subject: [RFC][PATCH 6/6] x86/power: Seal restore_processor_state()
References: <20230112143141.645645775@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Disallow indirect branches to restore_processor_state().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/suspend_64.h |    4 ++++
 arch/x86/power/cpu.c              |    2 +-
 arch/x86/power/hibernate_asm_64.S |    4 ++++
 include/linux/suspend.h           |    4 ++++
 4 files changed, 13 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/suspend_64.h
+++ b/arch/x86/include/asm/suspend_64.h
@@ -9,6 +9,7 @@
 
 #include <asm/desc.h>
 #include <asm/fpu/api.h>
+#include <asm/ibt.h>
 
 /*
  * Image of the saved processor state, used by the low level ACPI suspend to
@@ -61,4 +62,7 @@ struct saved_context {
 extern char core_restore_code[];
 extern char restore_registers[];
 
+#define restore_processor_state restore_processor_state
+extern __noendbr void restore_processor_state(void);
+
 #endif /* _ASM_X86_SUSPEND_64_H */
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -288,7 +288,7 @@ static __always_inline void __restore_pr
 }
 
 /* Needed by apm.c */
-void noinstr restore_processor_state(void)
+void __noendbr noinstr restore_processor_state(void)
 {
 	__restore_processor_state(&saved_context);
 }
--- a/arch/x86/power/hibernate_asm_64.S
+++ b/arch/x86/power/hibernate_asm_64.S
@@ -23,6 +23,10 @@
 #include <asm/frame.h>
 #include <asm/nospec-branch.h>
 
+.pushsection .discard.noendbr
+.quad	restore_processor_state
+.popsection
+
 	 /* code below belongs to the image kernel */
 	.align PAGE_SIZE
 SYM_FUNC_START(restore_registers)
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -9,6 +9,7 @@
 #include <linux/mm.h>
 #include <linux/freezer.h>
 #include <asm/errno.h>
+#include <asm/suspend.h>
 
 #ifdef CONFIG_VT
 extern void pm_set_vt_switch(int);
@@ -483,7 +484,10 @@ extern struct mutex system_transition_mu
 
 #ifdef CONFIG_PM_SLEEP
 void save_processor_state(void);
+
+#ifndef restore_processor_state
 void restore_processor_state(void);
+#endif
 
 /* kernel/power/main.c */
 extern int register_pm_notifier(struct notifier_block *nb);




From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476202.738262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjp-0002el-LJ; Thu, 12 Jan 2023 14:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476202.738262; Thu, 12 Jan 2023 14:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjp-0002dz-GL; Thu, 12 Jan 2023 14:39:29 +0000
Received: by outflank-mailman (input) for mailman id 476202;
 Thu, 12 Jan 2023 14:39:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdGT=5J=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pFyjm-0002av-PO
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:39:28 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e7f3caf7-9286-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 15:39:23 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pFyjn-0057zf-Vo; Thu, 12 Jan 2023 14:39:28 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E8514300C22;
 Thu, 12 Jan 2023 15:39:12 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id AF0012C539AAE; Thu, 12 Jan 2023 15:39:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7f3caf7-9286-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Subject:Cc:To:From:Date:Message-ID:
	Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=dxKp4F/ZypynYhxVQrwpFBE7F8I1udIiZ+qCB0euLH8=; b=Qw76boxsic6bAdPZ+sbUQ2gqH+
	toujE3QAL7Ysrb/kR6kbNWK8+/9lcxeR5snVxN72gLXv54orCAYheJOMSUJTxsIOq6nlVdd9M05yx
	xzAl7+DGS02CDXkNHHjeh7e9nMs5aY0qzl9lI/Ybug1C2cN+L9cUSAyH9scpEzK8O5SsQgYiCg6Y5
	jAK6oY9jXCznsqioTj6bpA+gfUlBJJaabAP8t+fNLnSC0fudkg27IKcOz4kzbhxN1B+GtqFgHO05t
	iG2Tq/6Nx3QTZSzkBCmPsHHW89g1u9fdQigJA6ONkHFfsmbKlG6YcpbLdRrHDBiKoAXxjMOx3j3S3
	79pDM38g==;
Message-ID: <20230112143141.645645775@infradead.org>
User-Agent: quilt/0.66
Date: Thu, 12 Jan 2023 15:31:41 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com
Subject: [RFC][PATCH 0/6] x86: Fix suspend vs retbleed=stuff

Hi,

I'm thinking these few patches should do the trick -- but I've only compiled
them and looked at the resulting asm output; I've not actually run them.

Joan, could you kindly test?

The last (two) patches are optional fixes and should probably not go into /urgent.



From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476201.738256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjp-0002bN-D2; Thu, 12 Jan 2023 14:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476201.738256; Thu, 12 Jan 2023 14:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFyjp-0002bG-8O; Thu, 12 Jan 2023 14:39:29 +0000
Received: by outflank-mailman (input) for mailman id 476201;
 Thu, 12 Jan 2023 14:39:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdGT=5J=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pFyjm-0002au-Sz
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 14:39:27 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e8a2ca71-9286-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 15:39:25 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pFyjS-0040tD-11; Thu, 12 Jan 2023 14:39:06 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id EC89A300C6F;
 Thu, 12 Jan 2023 15:39:12 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id BAAF42CC8C6A1; Thu, 12 Jan 2023 15:39:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8a2ca71-9286-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=T4GElGxaw5tEx2Lk6kKh2cwaswlijZP7pkxV7zD03yo=; b=dWjbmZzwf1WthR8TZ23ii1la+e
	yq8CeiWkZUYxPcP3zDoAdsjLQyUEFEwNBnojffsrWtXWwvZrVG3BvxUWfoXFoIemIzZBR3dchks7+
	30N+pLzv0Pq6fFF2X2CREioQEU6u9ZSI0QUhXVxyjoBTLRnevuU+BOKb90OE9rtHZxNFKkdIcFvCY
	0iBTtIbHpzJidWwd/VU+wg+DB3e1nBkUqzz+681wC9mH4sjUCyB8tmW0qAnc1AaOtsfRWMMqx6gco
	h5cNcMDHwQrxlAKs+W9Qc14+/gh6t51ElNBLBGLZWjnaQvksAwmhxVjKsZ7qqyNZZupTDL6qmnpHt
	IQaqbR8A==;
Message-ID: <20230112143825.704223863@infradead.org>
User-Agent: quilt/0.66
Date: Thu, 12 Jan 2023 15:31:44 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com
Subject: [RFC][PATCH 3/6] x86/callthunk: No callthunk for restore_processor_state()
References: <20230112143141.645645775@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

From: Joan Bruguera <joanbrugueram@gmail.com>

When resuming from suspend the CPU state is not yet coherent, so trying
to run callthunks there cannot work; specifically, GS is not set up yet.

Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230109040531.7888-1-joanbrugueram@gmail.com
---
 arch/x86/kernel/callthunks.c |    5 +++++
 arch/x86/power/cpu.c         |    3 +++
 2 files changed, 8 insertions(+)

--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -7,6 +7,7 @@
 #include <linux/memory.h>
 #include <linux/moduleloader.h>
 #include <linux/static_call.h>
+#include <linux/suspend.h>
 
 #include <asm/alternative.h>
 #include <asm/asm-offsets.h>
@@ -151,6 +152,10 @@ static bool skip_addr(void *dest)
 	    dest < (void*)hypercall_page + PAGE_SIZE)
 		return true;
 #endif
+#ifdef CONFIG_PM_SLEEP
+	if (dest == restore_processor_state)
+		return true;
+#endif
 	return false;
 }
 




From xen-devel-bounces@lists.xenproject.org Thu Jan 12 14:56:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 14:56:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476257.738333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFz0Y-0008To-Ok; Thu, 12 Jan 2023 14:56:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476257.738333; Thu, 12 Jan 2023 14:56:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFz0Y-0008Th-LE; Thu, 12 Jan 2023 14:56:46 +0000
Received: by outflank-mailman (input) for mailman id 476257;
 Thu, 12 Jan 2023 14:56:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFz0X-0008TX-AP; Thu, 12 Jan 2023 14:56:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFz0X-0002uC-88; Thu, 12 Jan 2023 14:56:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pFz0W-0007Jk-T9; Thu, 12 Jan 2023 14:56:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pFz0W-0002Ga-SY; Thu, 12 Jan 2023 14:56:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dno1s/MtE3UryhhZT0F6rt+55OZJnxvaD340+cMODaI=; b=CZb6/XM7d802a/yM20lm7sBrrd
	7ozlB4ayivH15WJ67rRC0m23ck7QDHiG44MNKm0c/eOSQctBY/4/8OzQiDYqVgsbFYaDMIDsN4kK+
	GYuH82mNvFQ1Wty/6mty7eJXw7JSvUM18qR9kST0ok2vAd/xa8SoY71Wsbi2v7uh4eqg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175740-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175740: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e5ec3ba409b5baa9cf429cc25fdf3c8d1b8dcef0
X-Osstest-Versions-That:
    ovmf=fe405f08a09e9f2306c72aa23d8edfbcfaa23bff
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 14:56:44 +0000

flight 175740 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175740/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e5ec3ba409b5baa9cf429cc25fdf3c8d1b8dcef0
baseline version:
 ovmf                 fe405f08a09e9f2306c72aa23d8edfbcfaa23bff

Last test of basis   175711  2023-01-10 21:40:39 Z    1 days
Testing same since   175740  2023-01-12 10:40:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   fe405f08a0..e5ec3ba409  e5ec3ba409b5baa9cf429cc25fdf3c8d1b8dcef0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 15:04:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 15:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476265.738344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFz8L-0001Ze-Im; Thu, 12 Jan 2023 15:04:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476265.738344; Thu, 12 Jan 2023 15:04:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFz8L-0001ZX-Fe; Thu, 12 Jan 2023 15:04:49 +0000
Received: by outflank-mailman (input) for mailman id 476265;
 Thu, 12 Jan 2023 15:04:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFz8J-0001ZR-Mc
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 15:04:47 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2089.outbound.protection.outlook.com [40.107.249.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 735551ab-928a-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 16:04:46 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7175.eurprd04.prod.outlook.com (2603:10a6:20b:111::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Thu, 12 Jan
 2023 15:04:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 15:04:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 735551ab-928a-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KuldrY+qPLuZXonB87KKYRWXovbNal9Dwf3+kMZUVlxsHTiaV+wdkVsuzQCrof+NmNl/r/ECYLumwc14S8EkIIfcnF7vbib5Q+1aeFqje/WyE09e06t9vr+4GEy6lXhR/reDKHJYr35ti2dYRytaWWuQ7Fp6YqUHivNb5odtzBaWx7w6vGfCdfwLNHtgY/P+OT3qDrWO5alBtliYu+Oo7bEk8o3ah9adfPgThdj9N0QKBHmHrbUR+QA6EGn/Ab/udsMWC8txPBTC2GhiaFKXY339iwdmU0ui6UxkL2VX1YfSpAA8LlXVX0zLyT/BWH5+jkzXKjpIbGlgGa3jgNjmZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cON83jzL3+W5FDpvJD3UKo0qwkEWtPsnqG2BWdw41YE=;
 b=YumPUN26mEpJgNJYNPP2X12mQlJtgLIMU1hGWiUCw4Rgs9j1Qpuph4/eqG3a51wDelBt/2NFxjaROqDoFQY5loTtcYfmYUZFN0FREr+gq1pWkgnx5oeT5EPyYpUKrw6m3fR2da0hgD2F0S/OEAzZpqBMRDWWR59g//FcNnI1O3hRoIZAXZpBDYnjz1JoA9LGKdP3FnrA5IW+PIRqn0MwhBLtMzYyWwO690OUIofam5xj9KZt7t7SZRxU638eYcg+7hMN2kn39UtoiEOgO37EbJI79WLwfGPJzUi2Jzzk4KKByjofmRfwGz2bxwqvSk1cdSg6+qdXxNg6fxyH81Dz7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cON83jzL3+W5FDpvJD3UKo0qwkEWtPsnqG2BWdw41YE=;
 b=tF+sRzYy9XvM1IiHHIfDIqjUzXZvwZOZxvNEl7UkerhmLbXCBtnkjL7osJIe54uZl5Jtbyg5FNUlche9Y9lkXLMxnT2Le3udcmSHlcUMyA2OnadB1NonBVeZvvV6AABePg11ImuAqLKWQxFx8UVN5qIUvxbK+0+QAuaHOXWc3/dVwXN7CzHCVsw8jeZysv87GqZLOEOcQc4ERTJsg+Sy32H4oxRDzMX8+8epqKCBc6WqXfIk8GQg0GqCpFUuo/wyEDl0LwEyYBaCGhPCZc7uWsirLASXIbEBa9t9L5lOybN6e7HzvUj7m5aaDvQ1z/lTIK2j1F1+VthiMgvMKllhKg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9c757207-1f67-5cef-d314-e6959d5782da@suse.com>
Date: Thu, 12 Jan 2023 16:04:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] include/types: move stdlib.h-kind types to common header
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com>
 <b0119998-b196-8319-fee8-de3231b14d8e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b0119998-b196-8319-fee8-de3231b14d8e@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0068.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB7175:EE_
X-MS-Office365-Filtering-Correlation-Id: 16e53b17-06c3-42b2-7e6b-08daf4ae5685
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XfUdTJyd1hwgFuo4WJHqDv1E8EGzsX52d5Q/AzHqIdUuYgyUVPKZlDa5I9XFV6YZEc36a/QBg7yniJuObbDvVMCTFoITVDxDB2dqn2dnfDeXKItfG6hOfXcGYe3wkn42YVyjkIQqq+DBjwyOhVfg+2YmXo1C2cYDtYKyAUWBRJ3yvQ7ve0AP/eEpJo+hM3xmFwdOEtNVDTvqtEHOh37W1ld+2Q9PEbRmkHnw70FgCOKbWk96KWmcGRA8H6geG1O3KGzBj5nlSim0/q43rIRgLYkRTTXMuYRFX9pCJk8KbPd1PbcQeTSE8UB4Lm4vjqqT0lXA9Wy6ZgzOP1ExlY0jQ+GDvFSgS1muGQm8eXxXV8e3p4axCLQNr4K2m2WoBSGubQ+5wdSPGWMWzPXg2yWf16jioFEvab9+7L3unFFQjpFr6vW93YcNiD3rsVxBrBY1DEbjFM/V2fTCkhiDZLCj0jJm8I5NDJwm9nCN8PvCTVuRI/KqzpcGCSWCpBUM7XIUfKk0vxCsi2tN9t8nINchjq6tksAy6EVCkqq7OVwUEhxAGKvocTMcwRxYe2ovLORhrQZVfA5yY0BmyE80HiTuDK8RrK788K1coStg8WaQX+LIDzzvMudf8uL8ObiT0TZzcSaNP21/4LfppLymOFqcGaTJLZaQpr3zv5gS84TszAufVSD4s056IGVA7/bqp5yxyYP7vTIXXyt+2RDMtXdtCNcc51UutU4DkKgOOHnJd0w=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(376002)(136003)(366004)(39860400002)(396003)(346002)(451199015)(6512007)(478600001)(186003)(6486002)(31686004)(26005)(66556008)(66476007)(66946007)(316002)(53546011)(54906003)(2616005)(6506007)(8676002)(6916009)(4326008)(38100700002)(8936002)(5660300002)(41300700001)(86362001)(31696002)(36756003)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZW1TMjZZMFFLbzFvOTJESDZzSGdmYUtweTB6Vjc3b0loN0wzOCtIQ0lkN1FX?=
 =?utf-8?B?c1lTbmVMQTJtTTMyQk9aTlRmd1RxbUZmazlMRmZQdURyR2R1MWdlV1FLeVh1?=
 =?utf-8?B?bytiMHRHeTRhdmRIdmhBamxTWm54T3lMd2Z6TEZkdWxqbWNkZ3FWTUJUSnU1?=
 =?utf-8?B?VGZUTG1ndDNDV1FpTWowVkI4NFpUS216UnV3a1lZblNhdDdIWVhtZzFBR2tL?=
 =?utf-8?B?dGkrZXI0UStwZXd4bXI5N1JPZ1lscU1NQXNDcXczeUFCRWdlM3IyeTNJT3Zx?=
 =?utf-8?B?UWNTU1hCb3g2dm45eEFHRTNMY1B4THlCZFhOMFljb0Rkckc2a1hkSk5BZ1Zr?=
 =?utf-8?B?Q2s5ZnJZZkFwZmg1c1czanJ0ajg1UGlVYXdMeTBqT1FlTE41dEZzMDFSMUc1?=
 =?utf-8?B?M1M0QlFBYm9HV3g0R1BpRkJMNXB2d1Y2NEJGaFdlM2ZPcTVuMnY0dmROOWxw?=
 =?utf-8?B?cTRGWU9xdjhRWmE4ekhOazRrcmszNTFEVVNaRFh6bTFuU0NZOXptQmgydWd6?=
 =?utf-8?B?OUJUUTFHY3lhWEhYWVpobG55cDg3SURaR0doU3ByMnliT0tuNllHTXpndTFE?=
 =?utf-8?B?aVBiNEtlME1DQ09zZ3JzR2UxaGZZdERzWnIzKzAyZTU5c0NJYlA0NDNJMDFw?=
 =?utf-8?B?ZWlNUnpWUW5TUnlyR2llUWNxR255OVpqSkZycytEY3BJUG9PMEM4UjErOFVh?=
 =?utf-8?B?UWlzb1NjRkhKb3FtVDdLY1I4dlVUWUg3YTBrVzZiME16cGxDQVdUcysvZzJm?=
 =?utf-8?B?a3dKU2h4dnIwOTF6cTBlbTJWeVNaVEVaQUcrVG5XS1piNHlEUTB5dVVaQkJI?=
 =?utf-8?B?YzhhYVhSN0lkY3RGcWRHWW9nVGhLY1BEdEp0c0c1ZVl1YjE1bW5LbVpVQ2lI?=
 =?utf-8?B?UzZsa1JxeEQ0Y0hianZzQ3NEcDJJUjhEQVphMW5UQWhQTWcza0xPM0RHOGFs?=
 =?utf-8?B?WEVoQXhMeEc4cHU2NCtzR0ZXZkQrUjlQclFUNG9MVkhHWEVucG4ySkgvK1hv?=
 =?utf-8?B?WW0wTHBndWVhaWdBbXVrU2FOdFJFRFpaUjd4MnZKMWdtb0R4VnIwejl6OE1N?=
 =?utf-8?B?TTAzcVZ5d2NIUTZwcE8xUWY3OTlHbnMvdXJ3ZlNDY0JGUzBpT3YzSXlGTTF6?=
 =?utf-8?B?bmJIQUt6VnlQcHkxU0NDYWo4a093Nmx5RE1zVlR3OXBFeEhlNnk5NnZLRGlh?=
 =?utf-8?B?MGtyaGtyWlB2UnFoZC9OamRJbGxKV1hoS1pxeEZzdmZhVmlsalphY2t3SmI4?=
 =?utf-8?B?NDhZQnhLR1IyNkU5NkJvbE1Fa1ZQeG1uNXJCcElqTUFoaWdhWm5RaXd3SlN6?=
 =?utf-8?B?RXpCU1UzQWliTGZxbDJPMFZVME9HK1V0RlJRRExuRnRjODUvNVlSa3h0azF6?=
 =?utf-8?B?SjFkZUt5d2dnSHdVY2F2T0RoVVZNYkZENGErcUw4aHFUSVMxN29MNUhVbk9S?=
 =?utf-8?B?NndIUGJYVTFPaUhHU1Z2aTcveGl6MFljN3RCcElLOUJYTGZjbWFacE4ybGgw?=
 =?utf-8?B?b2J6Z3NBU3J2dnhLcVdEVldHOU5XdWQ4Rkx1RGZ0WmdhVy9KVGh4c2xZdit3?=
 =?utf-8?B?Q2JsRUxDVTZoTHZkcnAyMnlkQitsNnllUUoxdCtsMnBUazBkTGZnUHgyQVdD?=
 =?utf-8?B?UXhMaXNvb3pDcUtHWXJOSUM0ZVBhZHVwdWlsUmh2VWk1ZklzZWh2SlZ3a1Ji?=
 =?utf-8?B?TWNMMDVmVzFRWk1iYzRUZGltdEFlV0txN09MUW9DazE1UW5nTThQbHhLN3B6?=
 =?utf-8?B?L3ljTExPbVF3U2tQdVZ2NHVwZHgzTDVCQ2tCaWxLWkZQRW8xYzRHMGhKQUsv?=
 =?utf-8?B?TUE0MTNVTWVjRVFZN1ZzWDNKcnFQNitXQ3QyODZPQU1wd2F0eFdjZ2tuV3Zl?=
 =?utf-8?B?QmdKckZ2ZlN3cEpSVmhPQXBsMVFoTUVmS0RxMEpxcTRYN2VyNCtBR05sQWpp?=
 =?utf-8?B?TDZGc0FTTXZNOU5FdzE3ZmczZ0lIdTFhOElPOUY5c05NOHhBYmpsTXpQcHR4?=
 =?utf-8?B?bEJGNGJGc0ljL2VkWHhWcDIvL3c1ZGRzblE5SWdIc3RjK1QzS1E3ZGZ5UDlU?=
 =?utf-8?B?SFZkczF6ZC84eVI2cjhyMENQU09tbEY5dWdsWStpRDI0WUd2SGV4TERJRXk0?=
 =?utf-8?Q?Mo9l8Y9iAXN2hbpljWmplcV0G?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 16e53b17-06c3-42b2-7e6b-08daf4ae5685
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 15:04:44.3797
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rat0xXX4wRDi3xtNtzCX4jO2rncDCwiffOZFJybCeGV8WgxD5ZeWAWFKAaUqm1x+ksAf4vi4WLORT/b3unQ3MA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB7175

On 12.01.2023 15:22, Andrew Cooper wrote:
> On 12/01/2023 2:01 pm, Jan Beulich wrote:
>> size_t, ssize_t, and ptrdiff_t are all expected to be uniformly defined
>> on any ports Xen might gain. In particular I hope new ports can rely on
>> __SIZE_TYPE__ and __PTRDIFF_TYPE__ being made available by the compiler.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

>> ---
>> This is just to start with some hopefully uncontroversial low hanging fruit.
> 
> However, I'd advocate going one step further and making xen/stddef.h a
> real header to match our existing stdbool and stdarg, now that we have
> fully divorced ourselves from the compiler-provided freestanding
> headers.

Hmm, to be honest I'm not convinced. It'll be interesting to see what
other maintainers think about moving further away from Linux's basic
model.

> This way, the types are declared in the usual place in a C environment.
> 
> I was then also going to use this approach to start breaking up
> xen/lib.h, which is a dumping ground for far too much stuff.  In
> particular, once we have a stddef.h, I think it is entirely reasonable
> to move things like ARRAY_SIZE()/count_args()/etc. into it, because
> they are entirely standard in the Xen codebase.

Yet these aren't what people would expect to live there. If we
introduce further std*.h headers, then I think they should strictly
conform to what the C standard expects to be put there.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 15:11:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 15:11:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476271.738355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFzEf-00034T-8G; Thu, 12 Jan 2023 15:11:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476271.738355; Thu, 12 Jan 2023 15:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFzEf-00034M-5P; Thu, 12 Jan 2023 15:11:21 +0000
Received: by outflank-mailman (input) for mailman id 476271;
 Thu, 12 Jan 2023 15:11:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFzEd-00034G-Qq
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 15:11:19 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2045.outbound.protection.outlook.com [40.107.13.45])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d58f2a5-928b-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 16:11:18 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7407.eurprd04.prod.outlook.com (2603:10a6:800:1ab::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 12 Jan
 2023 15:11:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 15:11:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d58f2a5-928b-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bp0z1NsmVxnz3F9l+1K5a9hswCCRWph4PNgAi0cubC7sEx0FN2Sl9owKW6IKF14qgJZDMjJ8pnY3dKLVn4Pth2tU4BWAKTt7ozY1GFcvKiMIw2zeEtsxjxO9xdUjKRJSf6gcAUDNOY09ZHMaTUPjczg3HZUPD9jqfE4IwSA9ecYexpoakB7vAmRy0Zo3D94kKCIzXNf7NTaAYn5zt9cjQBzo+RIt2uh+TOgPuVCdVPhLMSv28EeKSFNNXjnLL2i/dbZAwW6AZnzxQ+MEsYXx+VMSNn7708vdOwCr/TP+pP9IwN0xSyd66hoP/HzJszAXD4TPFhsVQOSGxog9MohIbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=s/iseJp9/k6CNN8jq1PHuSQPPN3JG/b5ahqOgQKVN0w=;
 b=EaoRJJsFgz6+nCJllJ3Iufrxrb6eDwDZBHIDBgC/HID25iEXO/YUd7yoDBBwuITVO2KP5f7BJVUddVzuJsoWuAmICmjtJfnkjFojHgZC+3J3OY0Fb17V4K8bsQXWj/vqoLHhCd/2QpiRxnzNZBdE87by1lTuzlMnWDgZiKJHZdCGl33cEzmlazYj1rd7aWNlSgpABS9ax9GP5gE1JJyQZG/uZWPtnGFDPtRgHF1ZvbXIjAzqch0CfKnS4eMBAFEVJiTrKkcVg4wOfyzs1kjZI/vnRNxTS0Xy6bvfBqggQ7l5v8fQxRusM38ILUSB9+/+4ecxohszrhWO6mGXPmsdfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s/iseJp9/k6CNN8jq1PHuSQPPN3JG/b5ahqOgQKVN0w=;
 b=iDy1uu6UppiEAsViEG57A7mItYWQTZ9PBZP0kVRc8CpJpGCBTE/lANEHY/iFXPZlnfaA8TvXrWoIdBEf1j+z5QJ8J26VwoT4u0x/SzsydSyMHGK7snL3pXS3eQmPL8L1yzbete3uGbAkU8/+3epT35FY7HpoWITrEQEIsqW/kK9KxN8qjuISwhU/tKLUxkE5Ug/MxSlcLlq6Q3xRYUi2zDzyIJYT+QxaS+SUi2IhqG/YzxuOYDkz5Nub+Wv4DZSVQabO6D99flPmHIQ7n83vUB4hPxB50YYhqTdby7TO7HHiB/9M2Wm4karFB4BAcTi5mOT8MtUhhQGzPvoxo9Q1Dw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <86519ac5-1f37-f3a9-f586-9c41f0ef66cc@suse.com>
Date: Thu, 12 Jan 2023 16:11:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] x86: Initial support for WRMSRNS
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-5-andrew.cooper3@citrix.com>
 <97d16968-57fa-0114-1a93-4d0d253b8172@suse.com>
 <2e568a8d-02d6-5761-8b55-c37a8de1be0e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2e568a8d-02d6-5761-8b55-c37a8de1be0e@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM0PR10CA0072.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:15::25) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7407:EE_
X-MS-Office365-Filtering-Correlation-Id: 36098c67-ace9-4efd-7587-08daf4af3ff8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BzTzKzwUb/lEl/FKSiaS6PYSya7uHcK4pqt9uJvYlfLrWyhhmbsWxd2lY1kulaeHJ8Y1kAH44DdqXJrp2FiFQn47VGtcYeewhP6JbAgwO+ceEsZ2FaImhE40k8PyrpzgZYTl2f4Ceftg18Hb20mejcJcyF5LLldNyr1uFta8Mo0QzZQ/VoyQtjiXgWMyvofDZXkvzbI0QVzAsQqV8fhQJiOdIBs/dVM8hZjLho4Xz64XYq6F0UfcnVt+pxRfF0NgZhJxNCWrtD8BQU0umq18vqulpyKn1TcwvmTcCj3lrWetoDZcTl4b0u3aTtOkqgZRFCd+s8o7CJO1cfeFhozIJfjFCtqDuWtW+pxri6UUTRsvWkmWUiXKbhSLdAEEhryzctB4ztdID1WuoLS2GFf0EnOV2vF6RLPx69mkxidSmorwaJFyHOFP7xbxL2yEtMip6rnCbOYbdPtk8nDIO6rgE3pz/pBqcHxPlMD5k/BELHir0Htoyk9wyEbI/nQtDbLfMlUJzbhscglTbBx9JV6+9M0KkEoxD2JF6mwhFZNn6nyP6RqpPDoC+hBTR2clyX2Mpge/wP0AXPayhrJO6tVtGtjuMYeUN5ZNptH0jWVWPtBgFHEiFd4f1LYOLlx0qtglds/cvqqtTqk6yA3YGLKkQrUdRy280O2ONjaRUSY2nZ84sQkaUfTT23HvG/KaBg2k7O43UeGzcxLXJ2ThmKLjpvyeWcKMMgyG/jUMQAGV3uI=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(366004)(376002)(39860400002)(346002)(136003)(396003)(451199015)(478600001)(6486002)(41300700001)(38100700002)(6512007)(86362001)(31696002)(316002)(54906003)(2616005)(66556008)(26005)(66946007)(186003)(66476007)(4326008)(36756003)(5660300002)(53546011)(6506007)(2906002)(31686004)(83380400001)(4744005)(6916009)(8676002)(8936002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TlBYdGdVL1E1ZjhiU0tqb2V6MHBzRHd0emRuT0Z0LzRWeW9iMjdUN3Y0R2VH?=
 =?utf-8?B?b3pSYWF6SlFyK20vYk1SdUJxYUVpdTRtU284M044ZE9VWWVpNS90Z3puaEMv?=
 =?utf-8?B?Ui9RK20wQjJFcEFwKzZqQkRBd21MaWhCUWUwb2IrdkNzUzRIdlFFVk5WZnRV?=
 =?utf-8?B?VUNseEJhck1QWWNpWll2c2UvWEFvWjFtVk1nSmhMaWdQNjZoN3dGQmlmdS85?=
 =?utf-8?B?U3IvcGxCMzVlMktNQ2FsUDBNRkZET2VPenR6WjhJdjdWTmNBVks2NFRTZzFE?=
 =?utf-8?B?Mk9pVWlUcmR5TWpxZzZ4VWI4eHlVNGQ4MmVUSi9yTzFxVjlZeFRSZWdFQW5p?=
 =?utf-8?B?RGd0QWpiSnd1Q2NieEtYRWNsTW5LVG84a1pGeWkzMEcrK0piUDZualhQakF5?=
 =?utf-8?B?WVIrNGFMUkROVk4yZWFJbjJoUENqZHBCQ1B5NlFWQUdQV2MrYkhHL00zb0tI?=
 =?utf-8?B?VnZZRlF5djE1dWh6NE80WHhoMWUzZ2hsUHFMT1R4SWIrMzV1SUM4blhFK1dE?=
 =?utf-8?B?elJCVHRjSnd3K3ZvSE1BcDhKQ3dmUE5zOEtHbWllUzZPd3BZU0kyYlQxM2lD?=
 =?utf-8?B?YmpJbVFMMTZPTE54aTYvRGt6aU9qdVJPZzlsM0Z1QVQ3VUdRTlk1S3NPc1lF?=
 =?utf-8?B?QW9KUWpGd3BqUE9BcGFkbTV3N1BzMVAwbXU1QzB6eGhocFdFaTNQckVsbTA2?=
 =?utf-8?B?eW5EQkMvUVNzMHdaNkl5aXU3b04zc2pWeEJMMmhVV2RhY3VONHpkU1lTMVhS?=
 =?utf-8?B?UE1iU1RIcDcwWmc3VTg4NlpjcWZVZTZtb3JOYTRJNzBWMFJBMVZWdUw4MlRx?=
 =?utf-8?B?cnRsZW1DK3BPWm1VSDBETGN4UC9nY1ZSSzBvZlVGd0Vaang2YU9aemp5aFox?=
 =?utf-8?B?Sk1MSkFMR1hmTVhGdUNkeDZ6N1VGNUtZWG92cjJPbXVwWkYrVkNIeEc3dzhY?=
 =?utf-8?B?WVRNT2htUFFQQ0pUenRBeEV6bm5DNnp1YXcwL3hOMG53L1phM1FiYkxPMzdk?=
 =?utf-8?B?N2t1cUF5UkJ3N2N2QWEvclBIaW9DODZ6azU5YnJZWFVJeFQzTERGQWJQUGpQ?=
 =?utf-8?B?ejFmTHl6bmVyaWE4b1Joa0pvK3NQeTlDZFhXNitEOTZaVm4rQ1JrOGF1R3RQ?=
 =?utf-8?B?M0d4RjZQSDhyYm9PU0lId2JMV1lkekdYQXNWVHU3cWZKNkZMcUdWNm94dWc4?=
 =?utf-8?B?ZWxHK3FMSTlIMVIwS0xyVDNkR0lUWm9EWkg2MExkTVNMa296VlJ1UWd5dkxO?=
 =?utf-8?B?bmlwbWhQYU9WNHo2ZGFybnpRS0UwbTRoWUpZdHZ1SUo3R2Y2aXFjUDNkaWdh?=
 =?utf-8?B?SlhMMC9zcjNGMG9MOElyc0lqbFpkaHhKbWxoV2d6dkFWREFweFc0V20yQ0E0?=
 =?utf-8?B?WWNZS3I3TW1CUm1tN1drdWZqSmgxeFVhMHI0UWZ6NTR3aFRDVCtMN0htK0hx?=
 =?utf-8?B?bG9iTEVKVFcwMDZHc0YxMVRuMFBRczZzTDJwMTA3d3NZYkNpdlcrUklHU1lN?=
 =?utf-8?B?dmcyRjRpSXdvSklkU1lxSEM1NzduanQ3d3lxMndJUkJIamNnVWZTYjd3bDkx?=
 =?utf-8?B?SkxsZ1ByemdQZnlKWkEvQ09yLy8zd1NscXZNSmVLWlhGMDdYQnBvQUgyK3M3?=
 =?utf-8?B?TlpHYVpuQUhBS0RlSkt6b2xwMUxPY2diTWc3NWtKb3JIcnN3N3JMWVEyT0Zz?=
 =?utf-8?B?SXdvK1RTUi9JcnprQkR5eFJGNWRGWUlXK2RPY3pqOG9wd2R4TWpGVklYYlpR?=
 =?utf-8?B?OVRWczhZUXd1dUFUMUtxNTQwZkowVE50TWR5amJCdjY3ZUc4ZjhPTXVQK2Jp?=
 =?utf-8?B?OFRnUytPRjBrWmpMVXJtZlZHMEx2UTBFNFBmRFNWaWVQYjNyOFJpN3VLaTgw?=
 =?utf-8?B?bUVwS3psR0hSV3VCck83dG5DNHRUS2VWbmQ5VzUzOEZoMEYrb2JxcEhMWFhL?=
 =?utf-8?B?bDhWbGN0dTZyemtEbUN3UTNrbWtZQ1FxS1Y2MnVuOTlOdVl0ajlJaHVNOEVN?=
 =?utf-8?B?SGlZMmlGdkFOaVpBRkw2SmVxN1czL0NlbGw3d21vZzQ5bzFUYkNxdEpsdERM?=
 =?utf-8?B?QkVWMDZRYjA0eDlITWVLQ3NHQldidWtoZGRYRXlQZXhKRDZrT0dnVThMTjB6?=
 =?utf-8?Q?otb1p2JdV72jL9SSEJz5o8zsY?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 36098c67-ace9-4efd-7587-08daf4af3ff8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 15:11:15.7610
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Mark2qVHJWiUqsRBhXbosdB/otXq7Ac5iATBfQGSDH9eu9gPtQHM8IEccRKmKWI/qqwH+rzCOIY2SnlJZAHbOA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7407

On 12.01.2023 14:58, Andrew Cooper wrote:
> On 12/01/2023 12:58 pm, Jan Beulich wrote:
>> Do you have any indications towards a CS prefix being the least risky
>> one to use here (or in general)?
> 
> Yes.
> 
> Remember it's the prefix recommended for, and used by,
> -mbranches-within-32B-boundaries to work around the Skylake jmp errata.
> 
> And based on this justification, it's also the prefix we use for padding
> on various jmp/call instructions for retpoline inlining purposes.

While I'm okay with the reply, I'd like to point out that in those cases
an address- or operand-size prefix simply could not have been used, as
the insns in question have explicit operands which would be affected.
That is unlike the case here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 15:17:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 15:17:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476275.738366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFzKC-0003ky-Sl; Thu, 12 Jan 2023 15:17:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476275.738366; Thu, 12 Jan 2023 15:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFzKC-0003kr-Pj; Thu, 12 Jan 2023 15:17:04 +0000
Received: by outflank-mailman (input) for mailman id 476275;
 Thu, 12 Jan 2023 15:17:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFzKA-0003kl-Tc
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 15:17:02 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2089.outbound.protection.outlook.com [40.107.22.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29ceb453-928c-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 16:17:01 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7752.eurprd04.prod.outlook.com (2603:10a6:20b:288::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 15:17:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 15:17:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29ceb453-928c-11ed-91b6-6bf2151ebd3b
Message-ID: <f19eb724-19ac-552f-37f2-cc10d3db9144@suse.com>
Date: Thu, 12 Jan 2023 16:16:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/8] x86/hvm: Enable guest access to MSR_PKRS
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-7-andrew.cooper3@citrix.com>
 <f64f0bbf-8f47-e897-eb7c-51f11c9ea4a8@suse.com>
 <f1e2d71d-5fad-962d-a7a0-af1044f40de6@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f1e2d71d-5fad-962d-a7a0-af1044f40de6@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 12.01.2023 15:16, Andrew Cooper wrote:
> On 12/01/2023 1:26 pm, Jan Beulich wrote:
>> The other thing I'd like to understand (and having an answer to this
>> would have been better before re-applying my R-b to this re-based
>> logic) is towards the lack of feature checks here. hvm_get_reg()
>> can be called from other than guest_rdmsr(), for an example see
>> arch_get_info_guest().
> 
> The point is to separate auditing logic (which wants to be implemented
> only once) from data shuffling logic (whether the value is in a register,
> the MSR lists, the VMCB/VMCS, struct vcpu, etc).  It is always the caller's
> responsibility to confirm that REG exists, and that VAL is suitable for REG.
> 
> arch_get_info_guest() passes MSR_SHADOW_GS_BASE, which exists
> unconditionally (because we don't technically handle !LM correctly.)
> 
> 
> But this is all discussed in the comment by the function prototypes. 
> I'm not sure how to make that any clearer than it already is.

Okay, and I'm sorry for having looked only at the definitions (where I
found no helpful comment) but not at the declarations. That is certainly
sufficient to confirm that my R-b can remain as you already had it.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 15:21:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 15:21:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476283.738377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFzOb-0005E9-IL; Thu, 12 Jan 2023 15:21:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476283.738377; Thu, 12 Jan 2023 15:21:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFzOb-0005E2-Fe; Thu, 12 Jan 2023 15:21:37 +0000
Received: by outflank-mailman (input) for mailman id 476283;
 Thu, 12 Jan 2023 15:21:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hjhq=5J=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pFzOa-0005Dv-4b
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 15:21:36 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ccad159d-928c-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 16:21:34 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 4D54D40269;
 Thu, 12 Jan 2023 15:21:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E045513776;
 Thu, 12 Jan 2023 15:21:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 1KxjNX0lwGM6IwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 12 Jan 2023 15:21:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccad159d-928c-11ed-91b6-6bf2151ebd3b
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH] x86/paravirt: merge activate_mm and dup_mmap callbacks
Date: Thu, 12 Jan 2023 16:21:32 +0100
Message-Id: <20230112152132.4399-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The two paravirt callbacks .mmu.activate_mm and .mmu.dup_mmap share the
same implementation in all cases: for Xen PV guests they pin the PGD of
the new mm_struct, and in all other cases they are a NOP.

So merge them into a common callback .mmu.enter_mmap (named in contrast
to the already existing .mmu.exit_mmap).

As the first parameter of the old callbacks isn't used, drop it from the
replacement.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/mmu_context.h    |  4 ++--
 arch/x86/include/asm/paravirt.h       | 14 +++-----------
 arch/x86/include/asm/paravirt_types.h |  7 ++-----
 arch/x86/kernel/paravirt.c            |  3 +--
 arch/x86/xen/mmu_pv.c                 | 12 ++----------
 5 files changed, 10 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index b8d40ddeab00..6a14b6c2165c 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -134,7 +134,7 @@ extern void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 
 #define activate_mm(prev, next)			\
 do {						\
-	paravirt_activate_mm((prev), (next));	\
+	paravirt_enter_mmap(next);		\
 	switch_mm((prev), (next), NULL);	\
 } while (0);
 
@@ -167,7 +167,7 @@ static inline void arch_dup_pkeys(struct mm_struct *oldmm,
 static inline int arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
 {
 	arch_dup_pkeys(oldmm, mm);
-	paravirt_arch_dup_mmap(oldmm, mm);
+	paravirt_enter_mmap(mm);
 	return ldt_dup_context(oldmm, mm);
 }
 
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 73e9522db7c1..07bbdceaf35a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -332,16 +332,9 @@ static inline void tss_update_io_bitmap(void)
 }
 #endif
 
-static inline void paravirt_activate_mm(struct mm_struct *prev,
-					struct mm_struct *next)
+static inline void paravirt_enter_mmap(struct mm_struct *next)
 {
-	PVOP_VCALL2(mmu.activate_mm, prev, next);
-}
-
-static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
-					  struct mm_struct *mm)
-{
-	PVOP_VCALL2(mmu.dup_mmap, oldmm, mm);
+	PVOP_VCALL1(mmu.enter_mmap, next);
 }
 
 static inline int paravirt_pgd_alloc(struct mm_struct *mm)
@@ -787,8 +780,7 @@ extern void default_banner(void);
 
 #ifndef __ASSEMBLY__
 #ifndef CONFIG_PARAVIRT_XXL
-static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
-					  struct mm_struct *mm)
+static inline void paravirt_enter_mmap(struct mm_struct *mm)
 {
 }
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 8c1da419260f..71bf64b963df 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -164,11 +164,8 @@ struct pv_mmu_ops {
 	unsigned long (*read_cr3)(void);
 	void (*write_cr3)(unsigned long);
 
-	/* Hooks for intercepting the creation/use of an mm_struct. */
-	void (*activate_mm)(struct mm_struct *prev,
-			    struct mm_struct *next);
-	void (*dup_mmap)(struct mm_struct *oldmm,
-			 struct mm_struct *mm);
+	/* Hook for intercepting the creation/use of an mm_struct. */
+	void (*enter_mmap)(struct mm_struct *mm);
 
 	/* Hooks for allocating and freeing a pagetable top-level */
 	int  (*pgd_alloc)(struct mm_struct *mm);
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 327757afb027..ff1109b9c6cd 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -352,8 +352,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.make_pte		= PTE_IDENT,
 	.mmu.make_pgd		= PTE_IDENT,
 
-	.mmu.dup_mmap		= paravirt_nop,
-	.mmu.activate_mm	= paravirt_nop,
+	.mmu.enter_mmap		= paravirt_nop,
 
 	.mmu.lazy_mode = {
 		.enter		= paravirt_nop,
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index ee29fb558f2e..b3b8d289b9ab 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -885,14 +885,7 @@ void xen_mm_unpin_all(void)
 	spin_unlock(&pgd_lock);
 }
 
-static void xen_activate_mm(struct mm_struct *prev, struct mm_struct *next)
-{
-	spin_lock(&next->page_table_lock);
-	xen_pgd_pin(next);
-	spin_unlock(&next->page_table_lock);
-}
-
-static void xen_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
+static void xen_enter_mmap(struct mm_struct *mm)
 {
 	spin_lock(&mm->page_table_lock);
 	xen_pgd_pin(mm);
@@ -2153,8 +2146,7 @@ static const typeof(pv_ops) xen_mmu_ops __initconst = {
 		.make_p4d = PV_CALLEE_SAVE(xen_make_p4d),
 #endif
 
-		.activate_mm = xen_activate_mm,
-		.dup_mmap = xen_dup_mmap,
+		.enter_mmap = xen_enter_mmap,
 		.exit_mmap = xen_exit_mmap,
 
 		.lazy_mode = {
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jan 12 15:44:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 15:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476290.738388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFzkA-0007pF-9g; Thu, 12 Jan 2023 15:43:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476290.738388; Thu, 12 Jan 2023 15:43:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFzkA-0007p8-6k; Thu, 12 Jan 2023 15:43:54 +0000
Received: by outflank-mailman (input) for mailman id 476290;
 Thu, 12 Jan 2023 15:43:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QgwW=5J=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pFzk8-0007p2-Io
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 15:43:52 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e922c277-928f-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 16:43:51 +0100 (CET)
Received: by mail-ej1-x62c.google.com with SMTP id tz12so45760565ejc.9
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 07:43:50 -0800 (PST)
Received: from [192.168.1.93] (adsl-67.109.242.138.tellas.gr. [109.242.138.67])
 by smtp.gmail.com with ESMTPSA id
 j18-20020a17090623f200b007e57d6e3724sm7580527ejg.116.2023.01.12.07.43.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 12 Jan 2023 07:43:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e922c277-928f-11ed-91b6-6bf2151ebd3b
Message-ID: <4bc3f2f6-9bf4-5810-89e3-526470e72d85@gmail.com>
Date: Thu, 12 Jan 2023 17:43:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Content-Language: en-US
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-4-burzalodowa@gmail.com>
 <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
 <f4771b3d-63e8-a44b-bdaf-4e2823f43fb8@gmail.com>
In-Reply-To: <f4771b3d-63e8-a44b-bdaf-4e2823f43fb8@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 1/12/23 13:49, Xenia Ragiadakou wrote:
> 
> On 1/12/23 13:31, Jan Beulich wrote:
>> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>>> --- a/xen/drivers/passthrough/iommu.c
>>> +++ b/xen/drivers/passthrough/iommu.c
>>> @@ -82,11 +82,13 @@ static int __init cf_check parse_iommu_param(const char *s)
>>>           else if ( ss == s + 23 && !strncmp(s, "quarantine=scratch-page", 23) )
>>>               iommu_quarantine = IOMMU_quarantine_scratch_page;
>>>   #endif
>>> -#ifdef CONFIG_X86
>>> +#ifdef CONFIG_INTEL_IOMMU
>>>           else if ( (val = parse_boolean("igfx", s, ss)) >= 0 )
>>>               iommu_igfx = val;
>>>           else if ( (val = parse_boolean("qinval", s, ss)) >= 0 )
>>>               iommu_qinval = val;
>>> +#endif
>>
>> You want to use no_config_param() here as well then.
> 
> Yes. I will fix it.
> 
>>
>>> --- a/xen/include/xen/iommu.h
>>> +++ b/xen/include/xen/iommu.h
>>> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>>>      iommu_intremap_restricted,
>>>      iommu_intremap_full,
>>>   } iommu_intremap;
>>> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>   #else
>>>   # define iommu_intremap false
>>> +#endif
>>> +
>>> +#ifdef CONFIG_INTEL_IOMMU
>>> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>> +#else
>>>   # define iommu_snoop false
>>>   #endif
>>
>> Do these declarations really need touching? In patch 2 you didn't move
>> amd_iommu_perdev_intremap's either.
> 
> Ok, I will revert this change (as I did in v2 of patch 2) since it is 
> not needed.

Actually, my patch was changing the current behavior by defining 
iommu_snoop as false when !INTEL_IOMMU.

IIUC, there is no control over snoop behavior when using the AMD IOMMU. 
Hence iommu_snoop should evaluate to true for the AMD IOMMU. However, 
when using the Intel IOMMU the user can disable it via the "iommu" 
command line param, right?

If that's the case, then iommu_snoop needs to be moved from vtd/iommu.c 
to x86/iommu.c, and the iommu_snoop assignment via the iommu param needs 
to be guarded by CONFIG_INTEL_IOMMU.

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 15:53:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 15:53:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476297.738399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFztR-0000u4-6g; Thu, 12 Jan 2023 15:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476297.738399; Thu, 12 Jan 2023 15:53:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pFztR-0000tx-3r; Thu, 12 Jan 2023 15:53:29 +0000
Received: by outflank-mailman (input) for mailman id 476297;
 Thu, 12 Jan 2023 15:53:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pFztP-0000tr-DH
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 15:53:27 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2043.outbound.protection.outlook.com [40.107.22.43])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3f604efb-9291-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 16:53:25 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8993.eurprd04.prod.outlook.com (2603:10a6:20b:42c::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 15:53:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 15:53:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f604efb-9291-11ed-b8d0-410ff93cb8f0
Message-ID: <d4105a37-e24f-96b6-f0f3-5990768fa8f5@suse.com>
Date: Thu, 12 Jan 2023 16:53:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-4-burzalodowa@gmail.com>
 <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
 <f4771b3d-63e8-a44b-bdaf-4e2823f43fb8@gmail.com>
 <4bc3f2f6-9bf4-5810-89e3-526470e72d85@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <4bc3f2f6-9bf4-5810-89e3-526470e72d85@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0056.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 12.01.2023 16:43, Xenia Ragiadakou wrote:
> On 1/12/23 13:49, Xenia Ragiadakou wrote:
>> On 1/12/23 13:31, Jan Beulich wrote:
>>> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>>>> --- a/xen/include/xen/iommu.h
>>>> +++ b/xen/include/xen/iommu.h
>>>> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>>>>      iommu_intremap_restricted,
>>>>      iommu_intremap_full,
>>>>   } iommu_intremap;
>>>> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>   #else
>>>>   # define iommu_intremap false
>>>> +#endif
>>>> +
>>>> +#ifdef CONFIG_INTEL_IOMMU
>>>> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>> +#else
>>>>   # define iommu_snoop false
>>>>   #endif
>>>
>>> Do these declarations really need touching? In patch 2 you didn't move
>>> amd_iommu_perdev_intremap's either.
>>
>> Ok, I will revert this change (as I did in v2 of patch 2) since it is 
>> not needed.
> 
> Actually, my patch was altering the current behavior by defining 
> iommu_snoop as false when !INTEL_IOMMU.
> 
> IIUC, there is no control over snoop behavior when using the AMD iommu. 
> Hence, iommu_snoop should evaluate to true for AMD iommu.
> However, when using the INTEL iommu the user can disable it via the 
> "iommu" param, right?

That's the intended behavior, yes, but right now we allow the option
to also affect behavior on AMD - perhaps wrongly so, as there's one
use outside of VT-x and VT-d code. But of course the option is
documented to be there for VT-d only, so one can view it as user
error if it's used on a non-VT-d system.

> If that's the case then iommu_snoop needs to be moved from vtd/iommu.c 
> to x86/iommu.c and iommu_snoop assignment via iommu param needs to be 
> guarded by CONFIG_INTEL_IOMMU.

Or #define to true when !INTEL_IOMMU and keep the variable where it
is.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 16:52:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 16:52:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476326.738430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG0o5-0000Wm-Si; Thu, 12 Jan 2023 16:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476326.738430; Thu, 12 Jan 2023 16:52:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG0o5-0000Wf-Pk; Thu, 12 Jan 2023 16:52:01 +0000
Received: by outflank-mailman (input) for mailman id 476326;
 Thu, 12 Jan 2023 16:52:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pG0o4-0000WZ-Lo
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 16:52:00 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c248aa2-9299-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 17:51:57 +0100 (CET)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Jan 2023 11:51:54 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS0PR03MB7226.namprd03.prod.outlook.com (2603:10b6:8:124::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 12 Jan
 2023 16:51:51 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 16:51:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c248aa2-9299-11ed-b8d0-410ff93cb8f0
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Thread-Topic: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Thread-Index: AQHZJRefeExwWeppzE65VXS63DgRAq6axG6AgAA9twA=
Date: Thu, 12 Jan 2023 16:51:51 +0000
Message-ID: <3ac6a4d6-44db-d248-4440-6e71aa14ad93@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-6-andrew.cooper3@citrix.com>
 <af2b74b2-8f37-223c-b830-c2bb3bc6d467@suse.com>
In-Reply-To: <af2b74b2-8f37-223c-b830-c2bb3bc6d467@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-ID: <F860553EB376E94F92F6D69BCBA2E5CE@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 12/01/2023 1:10 pm, Jan Beulich wrote:
> On 10.01.2023 18:18, Andrew Cooper wrote:
>> +static inline void wrpkrs(uint32_t pkrs)
>> +{
>> +    uint32_t *this_pkrs = &this_cpu(pkrs);
>> +
>> +    if ( *this_pkrs != pkrs )
>> +    {
>> +        *this_pkrs = pkrs;
>> +
>> +        wrmsr_ns(MSR_PKRS, pkrs, 0);
>> +    }
>> +}
>> +
>> +static inline void wrpkrs_and_cache(uint32_t pkrs)
>> +{
>> +    this_cpu(pkrs) = pkrs;
>> +    wrmsr_ns(MSR_PKRS, pkrs, 0);
>> +}
> Just to confirm - there's no anticipation of uses of this in async
> contexts, i.e. there's no concern about the ordering of cache vs hardware
> writes?

No.  The only thing modifying MSR_PKRS does is change how the pagewalk
works for the current thread (specifically, the determination of Access
Rights).  There is no relevance outside of the core, especially for
Xen's local copy of the register value.

What WRMSRNS does guarantee is that older instructions will complete
before the MSR gets updated, and that subsequent instructions won't
start, so WRMSRNS acts "atomically" with respect to instruction order.

Also remember that not all WRMSRs are serialising.  e.g. the X2APIC MSRs
are explicitly not, and this is an oversight in practice for
MSR_X2APIC_ICR at least.

>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -54,6 +54,7 @@
>>  #include <asm/spec_ctrl.h>
>>  #include <asm/guest.h>
>>  #include <asm/microcode.h>
>> +#include <asm/prot-key.h>
>>  #include <asm/pv/domain.h>
>>  
>>  /* opt_nosmp: If true, secondary processors are ignored. */
>> @@ -1804,6 +1805,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>      if ( opt_invpcid && cpu_has_invpcid )
>>          use_invpcid = true;
>>  
>> +    if ( cpu_has_pks )
>> +        wrpkrs_and_cache(0); /* Must be before setting CR4.PKS */
> Same question here as for PKRU wrt the BSP during S3 resume.

I had reasoned not, but it turns out that I'm wrong.

It's important to reset the cache back to 0 here.  (Handling PKRU is
different - I'll follow up on the other email..)

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 16:52:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 16:52:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476331.738441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG0op-00012h-AH; Thu, 12 Jan 2023 16:52:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476331.738441; Thu, 12 Jan 2023 16:52:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG0op-00012a-77; Thu, 12 Jan 2023 16:52:47 +0000
Received: by outflank-mailman (input) for mailman id 476331;
 Thu, 12 Jan 2023 16:52:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Lmgi=5J=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pG0oo-00012U-3y
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 16:52:46 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2051.outbound.protection.outlook.com [40.107.247.51])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8856c038-9299-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 17:52:43 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7341.eurprd04.prod.outlook.com (2603:10a6:800:1a6::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 12 Jan
 2023 16:52:42 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Thu, 12 Jan 2023
 16:52:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8856c038-9299-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OGI2V6B29Nomn7kpeBIuZQ6fUCJd5K38tLss5IEkkyzavQm4Nj4bR8S5Im6BnwFxCLIUa/3eJi2R8+5jCb5N6GgiNkHo3WIJPuJ6HbVkY/zfaPxlFK3QW6dU7Vu5pOiLMadwf+Rvg71vnnNTk2M2coUawo76gkudpROwCWtNlWkV45woj4z4uw9pKuET6PynJX5eb61SR0IBWlyksq014MY5kOSoFHkyLVXtxpsAIXp8AWypTV/d/qm0w4BFnlpDqOsMhs3EgUsn2LuL+xGQBJmTrFAhpfkrjArFpODOiGYXK5Q0Uk/aiUJqUNJptfsPmfvvGZzViN92LggFW+5e8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3OC2hKCRm5O4+uIq6PdR97qMCPjE2gjFQTP+5K44Pbs=;
 b=XZtIGJjbU+XV5lk8/atUraN69AwGDT+opTz3N5t4hUbL/iPK/oi5P9bdmi7pJtW96W/PvYfgQiFCNfTN1dV8lPeC6q8qKMg77jwcOy1z4qyPQ3X3xLGpK+sG59NknJWed5sS25hAOavH0LwVbImlRtxfR4zWfp9woOV3ham2y1xccLLa/w8Wm7CDM6vvEFYAm0o+aJmzbwRaETZkFh7/hDvJc1sYMh+0m/som90qTwBB2KI1b2rJTI5jb0M0VxFf4FpU1kGbfPlVKWiuSP9ZSbYTfVp5xVujzvCdNULNKYUU/daEDyOQjpggAReAz8MtRrQklxk9JL+1sIrzUm35/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3OC2hKCRm5O4+uIq6PdR97qMCPjE2gjFQTP+5K44Pbs=;
 b=TtYxYQ0UEOrfyJce82LpEgcPqNEnLdGUnaobsJj5do5mYkg2z800YuCCAgeN+vN5woMjceI+5MuJyEXqNKFZZYCpr/JEDfZn3hi1FrrO7WRV71ZDM/ke3Y2zs+cEPHpdTKu5NBtSyDWP2JN0yKoCnfY+dVa1nU6LPEWhLZIMXLSkve5krXtrOislr68og8uA7M+TJ80KfjHDnPwBm8jW7VMxZt7Wn7ePLb5bPMq0veqN1IVaCN4oMYhpbjNkQw5tWfQ72J8gJSulXfAuz8r+VDC4CB5XwtR3tVk8LT++X5dI7a6ISxDmGkxF/Pqo5rdITMnf8Ub6TR07cpOtTf7Qzg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d77c1a7a-9d15-491d-38fa-99739f20bebd@suse.com>
Date: Thu, 12 Jan 2023 17:52:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
From: Jan Beulich <jbeulich@suse.com>
Subject: Proposal for consistent Kconfig usage by the hypervisor build system
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Language: en-US
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0068.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7341:EE_
X-MS-Office365-Filtering-Correlation-Id: 9201fa6a-66b8-4742-316f-08daf4bd6ba2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JlAhjOVvsrhesHo9s5tFhvTr0RhRLPPb5lt+GvKUYyE0LEWAtaHVI2E1gEH1way7fbZHmBPrCsMPFrLlUa5iDoalBA6O2hvhx+ZG6cUImi36xWtM775+h8RoTe9FMkf9bTaV5A7QauAuzskQGqhU+Log+Lf3P1r9n6/8KZVfFRRwM4wujhRbiRJYBCdykOs5Fle3U8tvwig5etD048gx4NIOcNB9IllNRdA6dulAOJ7z28aiW0zCQwatVntTTHhKZKSGYl0xFPicAWsd9uMlE3uQkjI8yRh2K+upXFwEVjUAVVZc3aajD7ITmXOB1dtDOLu0ANPgGqd3qnoXDQRm6W1sr5KT9FV1bYIjEB2/0O7RbRWDUh5oT6FfhHZ32NHs1uW6smrnTco9S5SQ4+XfjpzHbDuALwphLZ9aNcGeCAvXG3aVH/Sg+hEa3glUN5ugsV+DRbGTssB69zqYoPUHUdZnYQQrxHDfrhWG683gGiYpqPB+RQos/oOiGiEXxdZHw7m+6VEzRBIlP0VkshexjnohK73hDrgGgpML9Y2Z2uzSVRIMDAAOMN1m/DBGwI+kSpRk/zIQjXsm1LkwALdmLulGi+KbBkEXQUxxE/Zej9caRYudZOXgZMKejM8M0jvoyuNmvFE8ycvzdgz2Dsjpoyn5EejCS3Dun5ck/VcZ8nZbrRXpWdnzJmhzIptBycMQYgs9X3bCQlLoelru7kuTT97opBQhVvlG3G8SKk0zSE0=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9201fa6a-66b8-4742-316f-08daf4bd6ba2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Jan 2023 16:52:41.9853
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pgnvxyyty6O414W2mBmEwMtGvnpMKwyg3hgwT2qUGmYuH57gF/Nb8vh1O3MeyIcykZOw9Zx2e+TSuOy3MSLcHQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7341

(re-sending with REST on Cc, as requested at the community call)

At present we use a mix of Makefile-driven and Kconfig-driven capability checks
for the tool chain components involved in building the hypervisor.  Which
approach is used where is partly a result of the relatively late introduction of
Kconfig into the build system, but in other places simply a result of the
differing tastes of individual contributors.  Switching to a uniform model,
however, has drawbacks as well:
 - A uniformly Makefile-based model is not in line with Linux, where Kconfig
   actually comes from (at least as far as we're concerned; there may be
   earlier origins).  This model is also disliked by some community members.
 - A uniformly Kconfig-based model suffers from a weakness of Kconfig in that
   dependent options are silently turned off when their dependencies aren't
   met.  This has the undesirable effect that a carefully crafted .config may
   be silently converted to one with features turned off that were intended to
   be on.  While this could be deemed expected behavior when the dependency is
   itself an option selected by the person configuring the hypervisor, it
   certainly can be surprising when the dependency is an auto-detected tool
   chain capability.  Furthermore, there's no automatic re-running of Kconfig
   when any part of the tool chain changes.  (Despite knowing of this in
   principle, I've still been hit by it more than once in the past: If one
   rebuilds a tree which wasn't touched for a while, and some time has already
   passed since the update to the newer component, one may not immediately
   make the connection.)

Therefore I'd like to propose that we use an intermediate model: Detected tool
chain capabilities (and the like) may only be used to control optimization
(including their use as dependencies of optimization controls) and to establish
the defaults of options.  They may not be used to control functionality; in
particular, they may not be specified as a dependency of an option controlling
functionality.  This way, unless defaults are overridden, things will build, and
non-default settings will be honored (albeit potentially resulting in a build
failure).

For example

config AS_VMX
	def_bool $(as-instr,vmcall)

would be okay (as long as we have fallback code to deal with the case of too
old an assembler; raising the baseline there is a separate topic), but,
instead of what we have currently,

config XEN_SHSTK
	bool "Supervisor Shadow Stacks"
	default HAS_AS_CET_SS

would be the way to go.
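
For contrast, the dependency-based shape this would replace looks roughly like
the following (a sketch; the option's exact current wording in the tree may
differ):

config XEN_SHSTK
	bool "Supervisor Shadow Stacks"
	depends on HAS_AS_CET_SS

With "depends on", a too-old assembler silently hides the option; with
"default", the option remains visible, and deliberately enabling it without
the assembler support surfaces as a build failure instead of a silently
dropped feature.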

It was additionally suggested that, for a better user experience, unmet
dependencies which are known to result in build failures (failures which at
times may be hard to associate back with their original cause) be re-checked by
Makefile-based logic, leading to an early build failure with a comprehensible
error message.  Personally I'd prefer these to be mere warnings (first and
foremost to avoid failing the build just because of a broken or stale check),
but I can see that warnings might be overlooked when there's a lot of other
output.  In any event we may want to figure out an approach which makes
sufficiently sure that the Makefile and Kconfig checks don't go out of sync.
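
To make the suggestion concrete, such a Makefile-side re-check might look
roughly like the sketch below.  This is illustrative only: the as-insn-check
helper is a hypothetical name standing in for whatever probe mirrors Kconfig's
$(as-instr ...) macro, and is not existing Xen build logic.

ifeq ($(CONFIG_XEN_SHSTK),y)
  # Hypothetical helper: expands to "y" iff the assembler accepts the
  # given instruction, mirroring the Kconfig-side $(as-instr ...) probe.
  ifneq ($(call as-insn-check,setssbsy),y)
    $(warning CONFIG_XEN_SHSTK=y but the assembler lacks CET-SS support; expect the build to fail)
  endif
endif

Whether this uses $(warning ...) or $(error ...) is exactly the
warning-versus-failure policy question raised above.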

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 17:07:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 17:07:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476340.738451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG13C-0002z7-HV; Thu, 12 Jan 2023 17:07:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476340.738451; Thu, 12 Jan 2023 17:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG13C-0002z0-El; Thu, 12 Jan 2023 17:07:38 +0000
Received: by outflank-mailman (input) for mailman id 476340;
 Thu, 12 Jan 2023 17:07:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pG13B-0002yu-2q
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 17:07:37 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a414363-929b-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 18:07:33 +0100 (CET)
Received: from mail-mw2nam04lp2173.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Jan 2023 12:07:32 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA1PR03MB6530.namprd03.prod.outlook.com (2603:10b6:806:1c5::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 17:07:26 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 17:07:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a414363-929b-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673543254;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=GsS+vEKyi7Jr/4YI9jEMDy8equIHYDeAWTzYCbpamrY=;
  b=XgXkhdYBuYsjNUbgtwTPoNV0x7c+Ak3PC0HNgfdjMTOrf1J8huBb3UDB
   Sqvo52ejpYZjmISPJux/MUSuzXpkHQA1SfEBkOPSnwmZgXY/bM/Kpqbn9
   B2xhItYUXLASJLxQSXXFnM0t5pYkCITCDoQrlEbiwePIUeHvQUDwAiC27
   I=;
X-IronPort-RemoteIP: 104.47.73.173
X-IronPort-MID: 92390044
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,211,1669093200"; 
   d="scan'208";a="92390044"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a26L1f1qeZR/LgFNerpF3BvEYpEUmnXN1F+Jk1WYp4Sd7GZ1ySkATuu8Or67O9eGsGbdQjuyWxV8cAJebTYkx5dmxGihZdUgxoWVccwi7qhfbjX2/35AIF8y2m9BgV8p+5MheLr2bYuFa92TZuDdAJ8rwXEDllPQrKn1MU3CYqNoLqNtVuOEu+uqHYFRaOrMzgDgDTfYGg0MnBRCsEN9/436/9RVEqJAT6CPY1G6+Pnox0e6RVRmvbb85zxN7RC2bKMe1NAc2k34JobUe9Rv0Y7iyO1WWwo77vY96ZXMChEzjuSBhnCYTnlwOIDzEVfrq3twRny6LpnvvWEcAIMXEg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GsS+vEKyi7Jr/4YI9jEMDy8equIHYDeAWTzYCbpamrY=;
 b=f/vQtnhDpeCfp0swOI/RDsDsNP7aEyu11Wkq1k1TL42xHIo0Mg0aBAuqRRTeLBXVi7eIhf1CEq0yJpkpy8C8bM1WbFYRlxMCHWBRRIQxcArHm0dlQoNrxnKJvaMql9TrIxsIE84h7NFu/GgoaOgecaLahf4xGaFO83Ylcv3cI8vRrEliMdS1EG2etp9wLmOUMiAEfVopjC9sxlbbraDQZphAaFkSOaguSoHZk5hhAQrkDjPrrdncc9+sm09EkPh7pdCX+2Pg1RcmOwqw7v0isSL+Z5CldCy0CCZhL16cUNaZ6Xw6RYpRhG9/scG9fN25QAqYGLi8x+8pN9GBiSXoew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GsS+vEKyi7Jr/4YI9jEMDy8equIHYDeAWTzYCbpamrY=;
 b=X8Fh5+TWYI4ux8pYEXQHwW/q0IkgWrqVqdPkczAFbSo/AEKkZwi95cf2qUQk1ISTLX6QzfjSakpszE3N9niZ/G+bbFdusKzqkCXYmq3y2dKszE5K+Nsy9sm9pde7Ry5521FlgIjLo7Uu7jA/fkEjRCz5csm87jhlD7OopHFuu2I=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 1/8] x86/boot: Sanitise PKRU on boot
Thread-Topic: [PATCH v2 1/8] x86/boot: Sanitise PKRU on boot
Thread-Index: AQHZJReg9aEubjLXpkqfuIdyn2CTs66avfyAgABIhQA=
Date: Thu, 12 Jan 2023 17:07:26 +0000
Message-ID: <fcfe9344-ceac-aa80-404b-55fb7a75fdeb@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-2-andrew.cooper3@citrix.com>
 <27d830b1-a32b-1368-3c0e-e5de15da5000@suse.com>
In-Reply-To: <27d830b1-a32b-1368-3c0e-e5de15da5000@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SA1PR03MB6530:EE_
x-ms-office365-filtering-correlation-id: 384042a8-bb77-4210-585c-08daf4bf7b1b
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <1725C87D6B4F0B4EB1EFA19C0AE04CAF@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 384042a8-bb77-4210-585c-08daf4bf7b1b
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jan 2023 17:07:26.7198
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: PMXvwRffPYagNf/An7YPzszloUCVf8Y8CJxNfXkBhqfzGeuzoGx4p5bsV66E/L9Lsl5/WGSjorOOdscIEjig8gds+GHvB8mQ0KXU1BUoqEw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6530

On 12/01/2023 12:47 pm, Jan Beulich wrote:
> On 10.01.2023 18:18, Andrew Cooper wrote:
>> While the reset value of the register is 0, it might not be after kexec/etc.
>> If PKEY0.{WD,AD} have leaked in from an earlier context, construction of a PV
>> dom0 will explode.
>>
>> Sequencing wise, this must come after setting CR4.PKE, and before we touch any
>> user mappings.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> For sequencing, it could also come after setting XCR0.PKRU too, but then we'd
>> need to construct an empty XSAVE area to XRSTOR from, and that would be even
>> more horrible to arrange.
> That would be ugly for other reasons as well, I think.

Yeah - I absolutely don't want to go down this route.

>
>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -936,6 +936,9 @@ void cpu_init(void)
>>  	write_debugreg(6, X86_DR6_DEFAULT);
>>  	write_debugreg(7, X86_DR7_DEFAULT);
>>  
>> +	if (cpu_has_pku)
>> +		wrpkru(0);
> What about the BSP during S3 resume? Shouldn't we play safe there too, just
> in case?

Out of S3, I think it's reasonable to rely on proper reset values, and
for pkru, and any issues of it being "wrong" should be fixed when we
reload d0v0's XSAVE state.


That said, I'm wanting to try and merge parts of the boot and S3 paths
because we're finding no end of errors/oversights, not least because we
have no automated testing of S3 suspend/resume.  Servers typically don't
implement it, and fixes either come from code inspection, or Qubes
noticing (which is absolutely better than nothing, but not a great
reflection on Xen).

But to merge these things, I first need to finish the work to make
microcode loading properly early, and then fix up some of the feature
detection paths, and cleanly separate feature detection from applying
the chosen configuration, at which point I hope the latter part will be
reusable on the S3 resume path.

I don't expect this work to happen imminently...

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 18:15:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 18:15:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476347.738463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG26g-0002Ik-Fx; Thu, 12 Jan 2023 18:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476347.738463; Thu, 12 Jan 2023 18:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG26g-0002Id-DF; Thu, 12 Jan 2023 18:15:18 +0000
Received: by outflank-mailman (input) for mailman id 476347;
 Thu, 12 Jan 2023 18:15:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pG26e-0002IX-9I
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 18:15:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pG26d-000845-4z; Thu, 12 Jan 2023 18:15:15 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.11.96]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pG26c-00013C-U6; Thu, 12 Jan 2023 18:15:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=0T6CmS7Lj1GKJYo6rIwFronkiooLLAtVG9BDyYYAS1E=; b=Y7UtwD7uT2UscZgSumLEQvRnSX
	Rri3vynsOQIGbRL3mH3YyYuSnXyQFINTykD2NkIG9YerZz8IIjjRR1zxeXW76hOwYQpRUxy40tnnH
	dMZQIaPGEpgvMGfWMJkpJy1YdbLfpws/+7AGOGioQ4hpw5em90p6vmvamOT68VaSGaU4=;
Message-ID: <cb03bc33-48d3-c41c-b383-b3a2c5a57def@xen.org>
Date: Thu, 12 Jan 2023 18:15:12 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] include/types: move stdlib.h-kind types to common header
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
References: <5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 12/01/2023 14:01, Jan Beulich wrote:
> size_t, ssize_t, and ptrdiff_t are all expected to be uniformly defined
> on any ports Xen might gain. In particular I hope new ports can rely on
> __SIZE_TYPE__ and __PTRDIFF_TYPE__ being made available by the compiler.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

I also don't have any strong opinion either way about continuing to use 
types.h or introducing stddef.h.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 18:24:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 18:24:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476354.738474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG2Fm-0003p4-H6; Thu, 12 Jan 2023 18:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476354.738474; Thu, 12 Jan 2023 18:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG2Fm-0003ox-Dv; Thu, 12 Jan 2023 18:24:42 +0000
Received: by outflank-mailman (input) for mailman id 476354;
 Thu, 12 Jan 2023 18:24:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W0PO=5J=citrix.com=prvs=369126fba=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pG2Fl-0003oq-0j
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 18:24:41 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5e6d618e-92a6-11ed-91b6-6bf2151ebd3b;
 Thu, 12 Jan 2023 19:24:39 +0100 (CET)
Received: from mail-bn8nam12lp2177.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Jan 2023 13:24:30 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6234.namprd03.prod.outlook.com (2603:10b6:510:ea::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Thu, 12 Jan
 2023 18:24:27 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.018; Thu, 12 Jan 2023
 18:24:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e6d618e-92a6-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673547879;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=Kv7SYk6PubM4TjXQAt73Phr45Sl1rUK9hEL18EWdA8M=;
  b=gU/hPu8G/A299bUS1O3Jj3UnFl7+fhFEIsBGJe90Od8rMG27z8G/wu3E
   P7TCczeO74GxJXaxvbgdNYqVCiyBotJfce8Dpem9OGZ2jaG7ek9RgfMDo
   DvgCdp5tV0IFMNlBjX+FPn58yavY3O43Hn0ijAYE+imas4iFmUWoepqVO
   Q=;
X-IronPort-RemoteIP: 104.47.55.177
X-IronPort-MID: 91829565
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,211,1669093200"; 
   d="scan'208";a="91829565"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HlwwafbivKOmOpmlXKehDy7zDfPclirSjXA4yRz52Y4hA2tuZBzcaj94w14mGbI95y5iPxxiek/FlY0fzYtwWFn/6wzY3Q9Hw8890NXHd0MfCHFxC5nx7oM06QaMrCNM1ZC5U0dUF1U44jN4qJbKi/eyTU12UxQ+bMAV+CHNQqjDi54TD2t/XDJ7egOgT1Ncl0IdckXmd37OP5AcT/Rk4+5HPO+pQmAjLeHZ3S4vcoL1fWtjKRu7De18soKKB6Dmgx6vbYfeVzVj3U833WwfuiMlSMv1UsZafcosgTY9DZTrKrnDzduCG4W9U3FOhJkRlTHrpgKmiqSTPx0djstrKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Kv7SYk6PubM4TjXQAt73Phr45Sl1rUK9hEL18EWdA8M=;
 b=BIu9vqyGry3CbuvTv7zvsV3X1uu4ZXdzk6sFluufQ3jVYjm3Q1FTzEqDZF85ZpLeJKVzfNkresRz9OY0jJEYwbLaoBkhCb3UQGApdvQ9YHV/tx/ajYCNS8bHAGcd+/TtNH7t5YbKdji1vStngiF2Jy4pKpE3KlVnnsnQ8rgbh3nnWcUgkhMgdHeT4Ji0MC+SIY07WBcTpW00C2sN7XHytNNfoCFMry88DlD+WxEggd3uVyB+zROteGpJtoBWXUy8RBUXczznagUtTCSByFSDuVy4+k9QnBahHzKUGkAOekykbH7LbZuA0pcoEzScbH5lVpH/OFbSmLColXYfM9lKkw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Kv7SYk6PubM4TjXQAt73Phr45Sl1rUK9hEL18EWdA8M=;
 b=M3qEW9ktdFlxw5VFEO3IzE5VacXYSz9WQrgQ412ZLfJRiaNSd8pjQItM5/AnYQ34iKIHJQd8L4YrnTYH5arWn2fW1LCQ72IF2iO9NQfE6M00F+jz2oekIfRkUsQrBeyNawMokvwoBLR3L6xBKv8TRBrekclKNGMclEDbRqWFQUw=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Xenia Ragiadakou <burzalodowa@gmail.com>, Jan Beulich <jbeulich@suse.com>
CC: Paul Durrant <paul@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Thread-Topic: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Thread-Index: AQHZIBjlmMJMZF8p3USyswha31M+3K6asoyAgAAE+gCAAEGaAIAALOIA
Date: Thu, 12 Jan 2023 18:24:26 +0000
Message-ID: <0e21b24e-d715-bd04-f98c-4cdd53f129ee@citrix.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-4-burzalodowa@gmail.com>
 <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
 <f4771b3d-63e8-a44b-bdaf-4e2823f43fb8@gmail.com>
 <4bc3f2f6-9bf4-5810-89e3-526470e72d85@gmail.com>
In-Reply-To: <4bc3f2f6-9bf4-5810-89e3-526470e72d85@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|PH0PR03MB6234:EE_
x-ms-office365-filtering-correlation-id: d9b95448-ec16-4a02-e5d3-08daf4ca3d03
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <EC30C1AFCA31CA4C8CE408C3327EF0FC@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d9b95448-ec16-4a02-e5d3-08daf4ca3d03
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jan 2023 18:24:26.9958
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6234

On 12/01/2023 3:43 pm, Xenia Ragiadakou wrote:
>
> On 1/12/23 13:49, Xenia Ragiadakou wrote:
>>
>> On 1/12/23 13:31, Jan Beulich wrote:
>>> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>>>
>>>> --- a/xen/include/xen/iommu.h
>>>> +++ b/xen/include/xen/iommu.h
>>>> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>>>>     iommu_intremap_restricted,
>>>>     iommu_intremap_full,
>>>>  } iommu_intremap;
>>>> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>  #else
>>>>  # define iommu_intremap false
>>>> +#endif
>>>> +
>>>> +#ifdef CONFIG_INTEL_IOMMU
>>>> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>> +#else
>>>>  # define iommu_snoop false
>>>>  #endif
>>>
>>> Do these declarations really need touching? In patch 2 you didn't move
>>> amd_iommu_perdev_intremap's either.
>>
>> Ok, I will revert this change (as I did in v2 of patch 2) since it is
>> not needed.
>
> Actually, my patch was altering the current behavior by defining
> iommu_snoop as false when !INTEL_IOMMU.
>
> IIUC, there is no control over snoop behavior when using the AMD
> iommu. Hence, iommu_snoop should evaluate to true for AMD iommu.
> However, when using the INTEL iommu the user can disable it via the
> "iommu" param, right?
>
> If that's the case then iommu_snoop needs to be moved from vtd/iommu.c
> to x86/iommu.c and iommu_snoop assignment via iommu param needs to be
> guarded by CONFIG_INTEL_IOMMU.
>

Pretty much everything Xen thinks it knows about iommu_snoop is broken.

AMD IOMMUs have had this capability since the outset, but it's the FC
bit (Force Coherent).  On Intel, the capability is optional, and
typically differs between IOMMUs in the same system.

Treating iommu_snoop as a single global is buggy, because (when
available) it's always a per-SBDF control.  It is used to take a TLP and
force it to be coherent even when the device was trying to issue a
non-coherent access.

Intel systems typically have a dedicated IOMMU for the IGD, which always
issues coherent accesses (its memory access happens as an adjunct to the
LLC, not as something that communicates with the memory controller
directly), so the IOMMU doesn't offer snoop control, and Xen "levels"
this down to "the system can't do snoop control".

Xen is very confused when it comes to cacheability correctness.  I still
have a pile of post-XSA-402 work pending, and it needs to start with
splitting Xen's idea of "domain can use reduced cacheability" from
"domain has a device", and work incrementally from there.

But in terms of snoop_control, it's strictly necessary for the cases
where the guest kernel thinks it is using reduced cacheability, but it
isn't because of something the hypervisor has done.  But beyond that,
forcing snoop behind the back of a guest which is using reduced
cacheability is just a waste of performance.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 18:51:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 18:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476360.738485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG2fS-0007HW-KS; Thu, 12 Jan 2023 18:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476360.738485; Thu, 12 Jan 2023 18:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG2fS-0007HP-FD; Thu, 12 Jan 2023 18:51:14 +0000
Received: by outflank-mailman (input) for mailman id 476360;
 Thu, 12 Jan 2023 18:51:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG2fR-0007H9-Gv; Thu, 12 Jan 2023 18:51:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG2fR-0000Z1-Ct; Thu, 12 Jan 2023 18:51:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG2fR-0001gl-1P; Thu, 12 Jan 2023 18:51:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pG2fR-0006qF-0w; Thu, 12 Jan 2023 18:51:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=omginXvNBLOzUs2ZsmhzlB+XKQAWMNz7i4pLjS0v7no=; b=Hl02TrQgCx5g226QD/rLHADuH6
	S5RhCbq+oE/ejbzuzz1LhWUMB1wjENvXoOWPrIPAXMdA8ZxHRV0P45388eT1t16SDpAf0DYBbJvJ1
	T8kpXhajjKnxNtFTTt0OOOF/g/ozkGStK1Oi7KKloLnreUyzTryRDQ6ix6ATPWImWIck=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175747-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175747: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
X-Osstest-Versions-That:
    ovmf=e5ec3ba409b5baa9cf429cc25fdf3c8d1b8dcef0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 18:51:13 +0000

flight 175747 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175747/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
baseline version:
 ovmf                 e5ec3ba409b5baa9cf429cc25fdf3c8d1b8dcef0

Last test of basis   175740  2023-01-12 10:40:46 Z    0 days
Testing same since   175747  2023-01-12 16:10:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dionna Glaze <dionnaglaze@google.com>
  Jiewen Yao <Jiewen.yao@intel.com>
  Sophia Wolf <phiawolf@google.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e5ec3ba409..9d70d8f20d  9d70d8f20d0feee1d232cbf86fc87147ce92c2cb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 19:18:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 19:18:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476368.738496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG365-0001if-P7; Thu, 12 Jan 2023 19:18:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476368.738496; Thu, 12 Jan 2023 19:18:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG365-0001iY-Lc; Thu, 12 Jan 2023 19:18:45 +0000
Received: by outflank-mailman (input) for mailman id 476368;
 Thu, 12 Jan 2023 19:18:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=b756=5J=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pG363-0001iS-VO
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 19:18:44 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ebac9d24-92ad-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 20:18:40 +0100 (CET)
Received: by mail-ej1-x630.google.com with SMTP id vm8so47155552ejc.2
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 11:18:40 -0800 (PST)
Received: from [127.0.0.1] (dynamic-092-224-135-062.92.224.pool.telefonica.de.
 [92.224.135.62]) by smtp.gmail.com with ESMTPSA id
 b6-20020aa7d486000000b0048447efe3fcsm7496599edr.84.2023.01.12.11.18.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 12 Jan 2023 11:18:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebac9d24-92ad-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=znKsrcQI5eTxWw6koP5BmwfUjNK/TzPXrZHro2GI5WA=;
        b=Wl+jtIsFriABXsvXf6S4odSPfbDt4hEJ1RHb2+gg78+xqikxqT2QFN6fUVlX1YGsB/
         68iX7EMUXp7M0bFRIpJVRdi2t47OELjPLBINU8GPoNwSaFkQtPzloRXWZ5xfKW15dceu
         f+sdSnMC61oWlOVD4tCqBsqRrxU+zrGJvZONVIo+9BMG70Ev+WccXuSeZaCkJJ9hQAT3
         gPE2LWi50yvQUGxEgY9G1hkYO19i4qd6KZiJgWcP5ZAW26fK1/UvJGBiQNuzl1j6sFnL
         7rxwbMNyICpcg0CO4UYk+IeCVVf0sxcE4K30C3aCVpjOaALIVPm+7fOcII+oIqzSJv0/
         8vjA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=znKsrcQI5eTxWw6koP5BmwfUjNK/TzPXrZHro2GI5WA=;
        b=MY4wu97sGqVmTGVeTJEp9DpUvsB1hIHP3NkN6aD2n0wFiVZ6EjtUZiuVgHciymo9dJ
         GYCFWHPtWzUXRhrXqgVTcFxk5StqqtBeP+XoEPe8YdGDV123kuLE+/xH+5PxK8zoHDsO
         OOwODM29kDEAjf/i2LbHEOWJYRe218aU8iftm4mMTWnUuoTRsDw//r2IUMVFSeN+Kgii
         WAEhyshEZPV4OtzaZWlnmsLt7QREpvXgw0tNcZbhixIIYtH1AsYEMIfJcbjB/cNjZksJ
         OwFfk+uGBJBM2K7FU74zyP5DQCKnytMtSTuDHXQjWNXKiiFhrqFZx/4Y/pFntc73306b
         ePfw==
X-Gm-Message-State: AFqh2kpWzrHfJSXr1mY8f6P9YGexISw89mQwafPqxh1Gg0x5VWPrViX+
	CQ/cNMzQ03vtWseHz4cMFFs=
X-Google-Smtp-Source: AMrXdXv56uNTCYYXFq6lM/TortLCbgJXTPuzaeqNqvVDij57HiG55FXcZJQVWBmgtkxxMnqNNAZcRQ==
X-Received: by 2002:a17:907:9a98:b0:7c1:d4c:f08c with SMTP id km24-20020a1709079a9800b007c10d4cf08cmr68854870ejc.4.1673551120068;
        Thu, 12 Jan 2023 11:18:40 -0800 (PST)
Date: Thu, 12 Jan 2023 19:18:33 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>, "Michael S. Tsirkin" <mst@redhat.com>
CC: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
In-Reply-To: <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com> <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com> <20230110030331-mutt-send-email-mst@kernel.org> <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
Message-ID: <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



Am 11=2E Januar 2023 15:40:24 UTC schrieb Chuck Zmudzinski <brchuckz@aol=
=2Ecom>:
>On 1/10/23 3:16=E2=80=AFAM, Michael S=2E Tsirkin wrote:
>> On Tue, Jan 10, 2023 at 02:08:34AM -0500, Chuck Zmudzinski wrote:
>>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>>> as noted in docs/igd-assign=2Etxt in the Qemu source code=2E
>>>=20
>>> Currently, when the xl toolstack is used to configure a Xen HVM guest =
with
>>> Intel IGD passthrough to the guest with the Qemu upstream device model=
,
>>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will o=
ccupy
>>> a different slot=2E This problem often prevents the guest from booting=
=2E
>>>=20
>>> The only available workaround is not good: Configure Xen HVM guests to=
 use
>>> the old and no longer maintained Qemu traditional device model availab=
le
>>> from xenbits=2Exen=2Eorg which does reserve slot 2 for the Intel IGD=
=2E
>>>=20
>>> To implement this feature in the Qemu upstream device model for Xen HV=
M
>>> guests, introduce the following new functions, types, and macros:
>>>=20
>>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_D=
EVICE
>>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLAS=
S
>>> * typedef XenPTQdevRealize function pointer
>>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve sl=
ot 2
>>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>>>=20
>>> The new xen_igd_reserve_slot function uses the existing slot_reserved_=
mask
>>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured u=
sing
>>> the xl toolstack with the gfx_passthru option enabled, which sets the
>>> igd-passthru=3Don option to Qemu for the Xen HVM machine type=2E
>>>=20
>>> The new xen_igd_reserve_slot function also needs to be implemented in
>>> hw/xen/xen_pt_stub=2Ec to prevent FTBFS during the link stage for the =
case
>>> when Qemu is configured with --enable-xen and --disable-xen-pci-passth=
rough,
>>> in which case it does nothing=2E
>>>=20
>>> The new xen_igd_clear_slot function overrides qdev->realize of the par=
ent
>>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI b=
us
>>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>>> created in hw/i386/pc_piix=2Ec for the case when igd-passthru=3Don=2E
>>>=20
>>> Move the call to xen_host_pci_device_get, and the associated error
>>> handling, from xen_pt_realize to the new xen_igd_clear_slot function t=
o
>>> initialize the device class and vendor values which enables the checks=
 for
>>> the Intel IGD to succeed=2E The verification that the host device is a=
n
>>> Intel IGD to be passed through is done by checking the domain, bus, sl=
ot,
>>> and function values as well as by checking that gfx_passthru is enable=
d,
>>> the device class is VGA, and the device vendor in Intel=2E
>>>=20
>>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol=2Ecom>
>>> ---
>>> Notes that might be helpful to reviewers of patched code in hw/xen:
>>>=20
>>> The new functions and types are based on recommendations from Qemu doc=
s:
>>> https://qemu=2Ereadthedocs=2Eio/en/latest/devel/qom=2Ehtml
>>>=20
>>> Notes that might be helpful to reviewers of patched code in hw/i386:
>>>=20
>>> The small patch to hw/i386/pc_piix=2Ec is protected by CONFIG_XEN so i=
t does
>>> not affect builds that do not have CONFIG_XEN defined=2E
>>>=20
>>> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix=2Ec file is an
>>> existing function that is only true when Qemu is built with
>>> xen-pci-passthrough enabled and the administrator has configured the X=
en
>>> HVM guest with Qemu's igd-passthru=3Don option=2E
>>>=20
>>> v2: Remove From: <email address> tag at top of commit message
>>>=20
>>> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
>>>=20
>>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>>         (s->real_device=2Evendor_id =3D=3D PCI_VENDOR_ID_INTEL)) {
>>>=20
>>>     is changed to
>>>=20
>>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr=2Eslot =3D=3D 2)
>>>         && (s->hostaddr=2Efunction =3D=3D 0)) {
>>>=20
>>>     I hoped that I could use the test in v2, since it matches the
>>>     other tests for the Intel IGD in Qemu and Xen, but those tests
>>>     do not work because the necessary data structures are not set with
>>>     their values yet=2E So instead use the test that the administrator
>>>     has enabled gfx_passthru and the device address on the host is
>>>     02=2E0=2E This test does detect the Intel IGD correctly=2E
>>>=20
>>> v4: Use brchuckz@aol=2Ecom instead of brchuckz@netscape=2Enet for the =
author's
>>>     email address to match the address used by the same author in comm=
its
>>>     be9c61da and c0e86b76
>>>    =20
>>>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
>>>=20
>>> v5: The patch of xen_pt=2Ec was re-worked to allow a more consistent t=
est
>>>     for the Intel IGD that uses the same criteria as in other places=
=2E
>>>     This involved moving the call to xen_host_pci_device_get from
>>>     xen_pt_realize to xen_igd_clear_slot and updating the checks for t=
he
>>>     Intel IGD in xen_igd_clear_slot:
>>>    =20
>>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr=2Eslot =3D=3D 2)
>>>         && (s->hostaddr=2Efunction =3D=3D 0)) {
>>>=20
>>>     is changed to
>>>=20
>>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>>         s->real_device=2Edomain =3D=3D 0 && s->real_device=2Ebus =3D=
=3D 0 &&
>>>         s->real_device=2Edev =3D=3D 2 && s->real_device=2Efunc =3D=3D =
0 &&
>>>         s->real_device=2Evendor_id =3D=3D PCI_VENDOR_ID_INTEL) {
>>>=20
>>>     Added an explanation for the move of xen_host_pci_device_get from
>>>     xen_pt_realize to xen_igd_clear_slot to the commit message=2E
>>>=20
>>>     Rebase=2E
>>>=20
>>> v6: Fix logging by removing these lines from the move from xen_pt_real=
ize
>>>     to xen_igd_clear_slot that was done in v5:
>>>=20
>>>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x=2E%d"
>>>                " to devfn 0x%x\n",
>>>                s->hostaddr=2Ebus, s->hostaddr=2Eslot, s->hostaddr=2Efu=
nction,
>>>                s->dev=2Edevfn);
>>>=20
>>>     This log needs to be in xen_pt_realize because s->dev=2Edevfn is n=
ot
>>>     set yet in xen_igd_clear_slot=2E
>>>=20
>>> v7: The v7 that was posted to the mailing list was incorrect=2E v8 is =
what
>>>     v7 was intended to be=2E
>>>=20
>>> v8: Inhibit out of context log message and needless processing by
>>>     adding 2 lines at the top of the new xen_igd_clear_slot function:
>>>=20
>>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>>         return;
>>>=20
>>>     Rebase=2E This removed an unnecessary header file from xen_pt=2Eh=
=20
>>>=20
>>>  hw/i386/pc_piix=2Ec    |  3 +++
>>>  hw/xen/xen_pt=2Ec      | 49 ++++++++++++++++++++++++++++++++++++-----=
---
>>>  hw/xen/xen_pt=2Eh      | 16 +++++++++++++++
>>>  hw/xen/xen_pt_stub=2Ec |  4 ++++
>>>  4 files changed, 63 insertions(+), 9 deletions(-)
>>>=20
>>> diff --git a/hw/i386/pc_piix=2Ec b/hw/i386/pc_piix=2Ec
>>> index b48047f50c=2E=2Ebc5efa4f59 100644
>>> --- a/hw/i386/pc_piix=2Ec
>>> +++ b/hw/i386/pc_piix=2Ec
>>> @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>>>      }
>>> =20
>>>      pc_xen_hvm_init_pci(machine);
>>> +    if (xen_igd_gfx_pt_enabled()) {
>>> +        xen_igd_reserve_slot(pcms->bus);
>>> +    }
>>>      pci_create_simple(pcms->bus, -1, "xen-platform");
>>>  }
>>>  #endif
>>=20
>> I would even maybe go further and move the whole logic into
>> xen_igd_reserve_slot=2E And I would even just name it
>> xen_hvm_init_reserved_slots without worrying about the what
>> or why at the pc level=2E  At this point it will be up to Xen maintaine=
rs=2E
>
>I see that doing this would mean resolving the two pc_xen_hvm*
>functions in pc_piix.c that are guarded by CONFIG_XEN and
>moving them to an appropriate place such as xen-hvm.c.
>
>That is along the lines of the work that Bernhard and Philippe
>are doing, so I am Cc'ing them. My first inclination is just
>to defer to them: I think the little patch I propose here to
>pc_piix.c is eventually going to be moved out of pc_piix.c
>by Bernhard in a future patch.
>
>What they have been doing is very conservative, and I expect
>if and when Bernhard gets here to resolve those functions, they
>will do it in a way that keeps the dependency of the xenfv machine
>type on the pc machine type and the pc_init1 function.
>
>What I would propose would be to break the dependency of xenfv
>on the pc_init1 function. That is, I would propose having a
>xenfv_init function in xen-hvm.c, and the first version would
>be the current version of pc_init1, so xenfv would still depend
>on many i440fx type things, but with the change xen developers
>would be free to tweak xenfv_init without affecting the users
>of the pc machine type.
>
>Would that be a good idea? If I get positive feedback for this
>idea, I will put it on the table, probably initially as an RFC
>patch.

In various patches I've been decoupling 1/ PIIX3 from Xen and 2/ QEMU's Xen
integration code from the PC machine. My idea is to confine all wiring for a
PIIX based PC machine using Xen in pc_piix.c. The pc_xen_hvm* functions seem
to do exactly that, so I'd leave them there, at least for now.

What I would like to avoid is for the Xen integration code to make
assumptions that an x86 or PC machine is always based on i440fx or PIIX3.

I like Michael's idea of going one step further, both in terms of the
approach and the reasoning.

Best regards,
Bernhard

>Also, thanks, Michael, for your other suggestions for this patch
>about using macros for the devfn constants.
>
>Chuck
>
>>
>>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>>> index 0ec7e52183..eff38155ef 100644
>>> --- a/hw/xen/xen_pt.c
>>> +++ b/hw/xen/xen_pt.c
>>> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>>>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>>                 s->dev.devfn);
>>>
>>> -    xen_host_pci_device_get(&s->real_device,
>>> -                            s->hostaddr.domain, s->hostaddr.bus,
>>> -                            s->hostaddr.slot, s->hostaddr.function,
>>> -                            errp);
>>> -    if (*errp) {
>>> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
>>> -        return;
>>> -    }
>>> -
>>>      s->is_virtfn = s->real_device.is_virtfn;
>>>      if (s->is_virtfn) {
>>>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
>>> @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>>>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>>>  }
>>>
>>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>>> +{
>>> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
>>> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
>>> +}
>>> +
>>> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
>>> +{
>>> +    ERRP_GUARD();
>>> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
>>> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
>>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
>>> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
>>> +
>>> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>> +        return;
>>> +
>>> +    xen_host_pci_device_get(&s->real_device,
>>> +                            s->hostaddr.domain, s->hostaddr.bus,
>>> +                            s->hostaddr.slot, s->hostaddr.function,
>>> +                            errp);
>>> +    if (*errp) {
>>> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
>>> +        return;
>>> +    }
>>> +
>>> +    if (is_igd_vga_passthrough(&s->real_device) &&
>>> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
>>> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>>
>> how about macros for these?
>>
>> #define XEN_PCI_IGD_DOMAIN 0
>> #define XEN_PCI_IGD_BUS 0
>> #define XEN_PCI_IGD_DEV 2
>> #define XEN_PCI_IGD_FN 0
>>
>>> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
>>
>> If you are going to do this, you should set it back
>> either after pci_qdev_realize or in unrealize,
>> for symmetry.
>>
>>> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
>>> +    }
>>
>>
>>> +    xpdc->pci_qdev_realize(qdev, errp);
>>> +}
>>> +
>>
>>
>>
>>>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>>>  {
>>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>
>>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
>>> +    xpdc->pci_qdev_realize = dc->realize;
>>> +    dc->realize = xen_igd_clear_slot;
>>>      k->realize = xen_pt_realize;
>>>      k->exit = xen_pt_unregister_device;
>>>      k->config_read = xen_pt_pci_read_config;
>>> @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>>>      .instance_size = sizeof(XenPCIPassthroughState),
>>>      .instance_finalize = xen_pci_passthrough_finalize,
>>>      .class_init = xen_pci_passthrough_class_init,
>>> +    .class_size = sizeof(XenPTDeviceClass),
>>>      .instance_init = xen_pci_passthrough_instance_init,
>>>      .interfaces = (InterfaceInfo[]) {
>>>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
>>> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
>>> index cf10fc7bbf..8c25932b4b 100644
>>> --- a/hw/xen/xen_pt.h
>>> +++ b/hw/xen/xen_pt.h
>>> @@ -2,6 +2,7 @@
>>>  #define XEN_PT_H
>>>
>>>  #include "hw/xen/xen_common.h"
>>> +#include "hw/pci/pci_bus.h"
>>>  #include "xen-host-pci-device.h"
>>>  #include "qom/object.h"
>>>
>>> @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
>>>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>>>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>>>
>>> +#define XEN_PT_DEVICE_CLASS(klass) \
>>> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
>>> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
>>> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
>>> +
>>> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
>>> +
>>> +typedef struct XenPTDeviceClass {
>>> +    PCIDeviceClass parent_class;
>>> +    XenPTQdevRealize pci_qdev_realize;
>>> +} XenPTDeviceClass;
>>> +
>>>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
>>> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>>>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>>>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>>>                                             XenHostPCIDevice *dev);
>>> @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
>>>
>>>  #define XEN_PCI_INTEL_OPREGION 0xfc
>>>
>>> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
>>> +
>>
>> I think you want to calculate this based on dev fn:
>>
>> #define XEN_PCI_IGD_SLOT_MASK \
>> 	(0x1 << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
>>
>>
>>>  typedef enum {
>>>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>>>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
>>> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
>>> index 2d8cac8d54..5c108446a8 100644
>>> --- a/hw/xen/xen_pt_stub.c
>>> +++ b/hw/xen/xen_pt_stub.c
>>> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>>>          error_setg(errp, "Xen PCI passthrough support not built in");
>>>      }
>>>  }
>>> +
>>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>>> +{
>>> +}
>>> --
>>> 2.39.0
>>
>


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 19:34:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 19:34:55 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175746-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175746: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=661489874e87c0f6e21ac298b039aab9379f6ee0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 19:34:44 +0000

flight 175746 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175746/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  661489874e87c0f6e21ac298b039aab9379f6ee0

Last test of basis   175741  2023-01-12 11:01:58 Z    0 days
Testing same since   175746  2023-01-12 16:03:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   661489874e..6bec713f87  6bec713f871f21c6254a5783c1e39867ea828256 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 19:38:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 19:38:11 +0000
Message-ID: <f7ae4b88-1dff-3b47-36e4-84fc3c1f80e1@xen.org>
Date: Thu, 12 Jan 2023 19:38:06 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 08/18] xen/arm32: head: Introduce an helper to flush
 the TLBs
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221212095523.52683-1-julien@xen.org>
 <20221212095523.52683-9-julien@xen.org>
 <95e9eff5-038d-923f-1afe-4f2d72bde5b3@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <95e9eff5-038d-923f-1afe-4f2d72bde5b3@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 14/12/2022 14:24, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 12/12/2022 10:55, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The sequence for flushing the TLBs is 4 instruction long and often
>> requires an explanation how it works.
>>
>> So create an helper and use it in the boot code (switch_ttbr() is left
> Here and in title: s/an helper/a helper/

Done.

>> alone for now).
> Could you explain why?

So we need to decide what we expect from switch_ttbr(). E.g. if Xen is 
relocated to a different address, should the caller take care of the 
instruction/branch predictor flush?

I have expanded it to "switch_ttbr() is left alone until we decide the 
semantics of the call".

>>
>> Note that in secondary_switched, we were also flushing the instruction
>> cache and branch predictor. Neither of them was necessary because:
>>      * We are only supporting IVIPT cache on arm32, so the instruction
>>        cache flush is only necessary when executable code is modified.
>>        None of the boot code is doing that.
>>      * The instruction cache is not invalidated and misprediction is not
>>        a problem at boot.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Apart from that, the patch is good, so:
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Thanks!

> 
>>
>> ---
>>      Changes in v3:
>>          * Fix typo
>>          * Update the documentation
>>          * Rename the argument from tmp1 to tmp
>> ---
>>   xen/arch/arm/arm32/head.S | 30 +++++++++++++++++-------------
>>   1 file changed, 17 insertions(+), 13 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>> index 40c1d7502007..315abbbaebec 100644
>> --- a/xen/arch/arm/arm32/head.S
>> +++ b/xen/arch/arm/arm32/head.S
>> @@ -66,6 +66,20 @@
>>           add   \rb, \rb, r10
>>   .endm
>>
>> +/*
>> + * Flush local TLBs
>> + *
>> + * @tmp:    Scratch register
> As you are respinning a series anyway, could you add just one space after @tmp:?

Ok.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 19:45:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 19:45:00 +0000
Message-ID: <12462e10-ae39-b198-b159-9c621eb5119d@oracle.com>
Date: Thu, 12 Jan 2023 14:44:07 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, linux-kernel@vger.kernel.org,
        x86@kernel.org, virtualization@lists.linux-foundation.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
        Borislav Petkov <bp@alien8.de>,
        Dave Hansen <dave.hansen@linux.intel.com>,
        "H. Peter Anvin" <hpa@zytor.com>,
        "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
        Alexey Makhalov <amakhalov@vmware.com>,
        VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
        xen-devel@lists.xenproject.org
References: <20230112152132.4399-1-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH] x86/paravirt: merge activate_mm and dup_mmap callbacks
In-Reply-To: <20230112152132.4399-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY5PR10MB6237
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.219,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1
 definitions=2023-01-12_12,2023-01-12_01,2022-06-22_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 malwarescore=0
 adultscore=0 spamscore=0 mlxscore=0 mlxlogscore=999 phishscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2212070000 definitions=main-2301120141
X-Proofpoint-ORIG-GUID: VD9Nbg3F-a_L5jYivbIFN7Xhc7-rzmbt
X-Proofpoint-GUID: VD9Nbg3F-a_L5jYivbIFN7Xhc7-rzmbt


On 1/12/23 10:21 AM, Juergen Gross wrote:
> The two paravirt callbacks .mmu.activate_mm and .mmu.dup_mmap share
> the same implementation in all cases: for Xen PV guests they pin the
> PGD of the new mm_struct, and for all other cases they are a NOP.
>
> So merge them into a common callback, .mmu.enter_mmap (in contrast to
> the corresponding, already existing .mmu.exit_mmap).
>
> As the first parameter of the old callbacks isn't used, drop it from
> the replacement one.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Thu Jan 12 20:12:12 2023
Message-ID: <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
Date: Thu, 12 Jan 2023 15:11:54 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: Bernhard Beschow <shentey@gmail.com>, "Michael S. Tsirkin"
 <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 1/12/23 2:18 PM, Bernhard Beschow wrote:
> 
> 
> Am 11. Januar 2023 15:40:24 UTC schrieb Chuck Zmudzinski <brchuckz@aol.com>:
>>On 1/10/23 3:16 AM, Michael S. Tsirkin wrote:
>>> On Tue, Jan 10, 2023 at 02:08:34AM -0500, Chuck Zmudzinski wrote:
>>>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>>>> as noted in docs/igd-assign.txt in the Qemu source code.
>>>> 
>>>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>>>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>>>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>>>> a different slot. This problem often prevents the guest from booting.
>>>> 
>>>> The only available workaround is not good: Configure Xen HVM guests to use
>>>> the old and no longer maintained Qemu traditional device model available
>>>> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>>>> 
>>>> To implement this feature in the Qemu upstream device model for Xen HVM
>>>> guests, introduce the following new functions, types, and macros:
>>>> 
>>>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>>>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>>>> * typedef XenPTQdevRealize function pointer
>>>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>>>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>>>> 
>>>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>>>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>>>> the xl toolstack with the gfx_passthru option enabled, which sets the
>>>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>>>> 
>>>> The new xen_igd_reserve_slot function also needs to be implemented in
>>>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>>>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>>>> in which case it does nothing.
>>>> 
>>>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>>>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>>>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>>>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>>>> 
>>>> Move the call to xen_host_pci_device_get, and the associated error
>>>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>>>> initialize the device class and vendor values which enables the checks for
>>>> the Intel IGD to succeed. The verification that the host device is an
>>>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>>>> and function values as well as by checking that gfx_passthru is enabled,
>>>> the device class is VGA, and the device vendor is Intel.
>>>> 
>>>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>>>> ---
>>>> Notes that might be helpful to reviewers of patched code in hw/xen:
>>>> 
>>>> The new functions and types are based on recommendations from Qemu docs:
>>>> https://qemu.readthedocs.io/en/latest/devel/qom.html
>>>> 
>>>> Notes that might be helpful to reviewers of patched code in hw/i386:
>>>> 
>>>> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
>>>> not affect builds that do not have CONFIG_XEN defined.
>>>> 
>>>> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
>>>> existing function that is only true when Qemu is built with
>>>> xen-pci-passthrough enabled and the administrator has configured the Xen
>>>> HVM guest with Qemu's igd-passthru=on option.
>>>> 
>>>> v2: Remove From: <email address> tag at top of commit message
>>>> 
>>>> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
>>>> 
>>>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>>>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
>>>> 
>>>>     is changed to
>>>> 
>>>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>>>         && (s->hostaddr.function == 0)) {
>>>> 
>>>>     I hoped that I could use the test in v2, since it matches the
>>>>     other tests for the Intel IGD in Qemu and Xen, but those tests
>>>>     do not work because the necessary data structures are not set with
>>>>     their values yet. So instead use the test that the administrator
>>>>     has enabled gfx_passthru and the device address on the host is
>>>>     02.0. This test does detect the Intel IGD correctly.
>>>> 
>>>> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>>>>     email address to match the address used by the same author in commits
>>>>     be9c61da and c0e86b76
>>>>     
>>>>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
>>>> 
>>>> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>>>>     for the Intel IGD that uses the same criteria as in other places.
>>>>     This involved moving the call to xen_host_pci_device_get from
>>>>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>>>>     Intel IGD in xen_igd_clear_slot:
>>>>     
>>>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>>>         && (s->hostaddr.function == 0)) {
>>>> 
>>>>     is changed to
>>>> 
>>>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>>>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>>>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>>>>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>>>> 
>>>>     Added an explanation for the move of xen_host_pci_device_get from
>>>>     xen_pt_realize to xen_igd_clear_slot to the commit message.
>>>> 
>>>>     Rebase.
>>>> 
>>>> v6: Fix logging by removing these lines from the move from xen_pt_realize
>>>>     to xen_igd_clear_slot that was done in v5:
>>>> 
>>>>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>>>>                " to devfn 0x%x\n",
>>>>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>>>                s->dev.devfn);
>>>> 
>>>>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>>>>     set yet in xen_igd_clear_slot.
>>>> 
>>>> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>>>>     v7 was intended to be.
>>>> 
>>>> v8: Inhibit out of context log message and needless processing by
>>>>     adding 2 lines at the top of the new xen_igd_clear_slot function:
>>>> 
>>>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>>>         return;
>>>> 
>>>>     Rebase. This removed an unnecessary header file from xen_pt.h 
>>>> 
>>>>  hw/i386/pc_piix.c    |  3 +++
>>>>  hw/xen/xen_pt.c      | 49 ++++++++++++++++++++++++++++++++++++--------
>>>>  hw/xen/xen_pt.h      | 16 +++++++++++++++
>>>>  hw/xen/xen_pt_stub.c |  4 ++++
>>>>  4 files changed, 63 insertions(+), 9 deletions(-)
>>>> 
>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>> index b48047f50c..bc5efa4f59 100644
>>>> --- a/hw/i386/pc_piix.c
>>>> +++ b/hw/i386/pc_piix.c
>>>> @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>>>>      }
>>>>  
>>>>      pc_xen_hvm_init_pci(machine);
>>>> +    if (xen_igd_gfx_pt_enabled()) {
>>>> +        xen_igd_reserve_slot(pcms->bus);
>>>> +    }
>>>>      pci_create_simple(pcms->bus, -1, "xen-platform");
>>>>  }
>>>>  #endif
>>> 
>>> I would even maybe go further and move the whole logic into
>>> xen_igd_reserve_slot. And I would even just name it
>>> xen_hvm_init_reserved_slots without worrying about the what
>>> or why at the pc level.  At this point it will be up to Xen maintainers.
>>
>>I see that doing that would mean resolving the two pc_xen_hvm*
>>functions in pc_piix.c that are guarded by CONFIG_XEN and
>>moving them to an appropriate place such as xen-hvm.c.
>>
>>That is along the lines of the work that Bernhard and Philippe
>>are doing, so I am Cc'ing them. My first inclination is just
>>to defer to them: I think the little patch I propose here for
>>pc_piix.c is eventually going to be moved out of pc_piix.c
>>by Bernhard in a future patch.
>>
>>What they have been doing is very conservative, and I expect
>>if and when Bernhard gets here to resolve those functions, they
>>will do it in a way that keeps the dependency of the xenfv machine
>>type on the pc machine type and the pc_init1 function.
>>
>>What I would propose would be to break the dependency of xenfv
>>on the pc_init1 function. That is, I would propose having a
>>xenfv_init function in xen-hvm.c, and the first version would
>>be the current version of pc_init1, so xenfv would still depend
>>on many i440fx type things, but with the change xen developers
>>would be free to tweak xenfv_init without affecting the users
>>of the pc machine type.
>>
>>Would that be a good idea? If I get positive feedback for this
>>idea, I will put it on the table, probably initially as an RFC
>>patch.
> 
> In various patches I've been decoupling 1/ PIIX3 from Xen and 2/ QEMU's Xen integration code from the PC machine. My idea is to confine all wiring for a PIIX based PC machine using Xen in pc_piix.c. The pc_xen_hvm* functions seem to do exactly that, so I'd leave them there, at least for now.
> 
> What I would like to avoid is for the Xen integration code to make assumptions that an x86 or PC machine is always based on i440fx or PIIX3.

I think what you are saying is that if I try to move the logic of my patch to xen-hvm.c, as Michael suggests, I should not move or copy any PIIX3 code into the Xen integration code. Instead, I should access it via an appropriate QOM interface to the code in pc_piix.c and move only Xen-specific things, such as the content of my patch, into the Xen integration code. I can try to do that for a v9 of my patch. It might take me a little while (I am not a professional coder), so I will leave v8 of my patch as is until I have a patch ready to move it out of pc_piix.c the QOM way.

Thanks,

Chuck

> 
> I like Michael's idea of going one step further, both in terms of the approach and the reasoning.
> 
> Best regards,
> Bernhard
> 
>>Also, thanks, Michael, for your other suggestions for this patch
>>about using macros for the devfn constants.
>>
>>Chuck
>>
>>> 
>>>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>>>> index 0ec7e52183..eff38155ef 100644
>>>> --- a/hw/xen/xen_pt.c
>>>> +++ b/hw/xen/xen_pt.c
>>>> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>>>>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>>>                 s->dev.devfn);
>>>>  
>>>> -    xen_host_pci_device_get(&s->real_device,
>>>> -                            s->hostaddr.domain, s->hostaddr.bus,
>>>> -                            s->hostaddr.slot, s->hostaddr.function,
>>>> -                            errp);
>>>> -    if (*errp) {
>>>> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
>>>> -        return;
>>>> -    }
>>>> -
>>>>      s->is_virtfn = s->real_device.is_virtfn;
>>>>      if (s->is_virtfn) {
>>>>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
>>>> @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>>>>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>>>>  }
>>>>  
>>>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>>>> +{
>>>> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
>>>> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
>>>> +}
>>>> +
>>>> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
>>>> +{
>>>> +    ERRP_GUARD();
>>>> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
>>>> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
>>>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
>>>> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
>>>> +
>>>> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>>> +        return;
>>>> +
>>>> +    xen_host_pci_device_get(&s->real_device,
>>>> +                            s->hostaddr.domain, s->hostaddr.bus,
>>>> +                            s->hostaddr.slot, s->hostaddr.function,
>>>> +                            errp);
>>>> +    if (*errp) {
>>>> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
>>>> +        return;
>>>> +    }
>>>> +
>>>> +    if (is_igd_vga_passthrough(&s->real_device) &&
>>>> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>>> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
>>>> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>>> 
>>> how about macros for these?
>>> 
>>> #define XEN_PCI_IGD_DOMAIN 0
>>> #define XEN_PCI_IGD_BUS 0
>>> #define XEN_PCI_IGD_DEV 2
>>> #define XEN_PCI_IGD_FN 0
>>> 
>>>> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
>>> 
>>> If you are going to do this, you should set it back
>>> either after pci_qdev_realize or in unrealize,
>>> for symmetry.
>>> 
>>>> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
>>>> +    }
>>> 
>>> 
>>>> +    xpdc->pci_qdev_realize(qdev, errp);
>>>> +}
>>>> +
>>> 
>>> 
>>> 
>>>>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>>>>  {
>>>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>>>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>>  
>>>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
>>>> +    xpdc->pci_qdev_realize = dc->realize;
>>>> +    dc->realize = xen_igd_clear_slot;
>>>>      k->realize = xen_pt_realize;
>>>>      k->exit = xen_pt_unregister_device;
>>>>      k->config_read = xen_pt_pci_read_config;
>>>> @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>>>>      .instance_size = sizeof(XenPCIPassthroughState),
>>>>      .instance_finalize = xen_pci_passthrough_finalize,
>>>>      .class_init = xen_pci_passthrough_class_init,
>>>> +    .class_size = sizeof(XenPTDeviceClass),
>>>>      .instance_init = xen_pci_passthrough_instance_init,
>>>>      .interfaces = (InterfaceInfo[]) {
>>>>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
>>>> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
>>>> index cf10fc7bbf..8c25932b4b 100644
>>>> --- a/hw/xen/xen_pt.h
>>>> +++ b/hw/xen/xen_pt.h
>>>> @@ -2,6 +2,7 @@
>>>>  #define XEN_PT_H
>>>>  
>>>>  #include "hw/xen/xen_common.h"
>>>> +#include "hw/pci/pci_bus.h"
>>>>  #include "xen-host-pci-device.h"
>>>>  #include "qom/object.h"
>>>>  
>>>> @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
>>>>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>>>>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>>>>  
>>>> +#define XEN_PT_DEVICE_CLASS(klass) \
>>>> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
>>>> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
>>>> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
>>>> +
>>>> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
>>>> +
>>>> +typedef struct XenPTDeviceClass {
>>>> +    PCIDeviceClass parent_class;
>>>> +    XenPTQdevRealize pci_qdev_realize;
>>>> +} XenPTDeviceClass;
>>>> +
>>>>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
>>>> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>>>>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>>>>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>>>>                                             XenHostPCIDevice *dev);
>>>> @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
>>>>  
>>>>  #define XEN_PCI_INTEL_OPREGION 0xfc
>>>>  
>>>> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
>>>> +
>>> 
>>> I think you want to calculate this based on dev fn:
>>> 
>>> #define XEN_PCI_IGD_SLOT_MASK \
>>> 	(0x1 << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
>>> 
>>> 
>>>>  typedef enum {
>>>>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>>>>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
>>>> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
>>>> index 2d8cac8d54..5c108446a8 100644
>>>> --- a/hw/xen/xen_pt_stub.c
>>>> +++ b/hw/xen/xen_pt_stub.c
>>>> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>>>>          error_setg(errp, "Xen PCI passthrough support not built in");
>>>>      }
>>>>  }
>>>> +
>>>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>>>> +{
>>>> +}
>>>> -- 
>>>> 2.39.0
>>> 
>>



From xen-devel-bounces@lists.xenproject.org Thu Jan 12 20:48:30 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175737-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175737: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-coresched-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/src_host:fail:regression
    linux-linus:test-amd64-amd64-pair:xen-boot/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-bios:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine-uefi:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-shadow:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-pvshim:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e8f60cd7db24f94f2dbed6bec30dd16a68fc0828
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Jan 2023 20:48:14 +0000

flight 175737 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175737/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 12 xen-boot/src_host       fail REGR. vs. 173462
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host       fail REGR. vs. 173462
 test-amd64-coresched-amd64-xl  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot          fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-pair        12 xen-boot/src_host        fail REGR. vs. 173462
 test-amd64-amd64-pair        13 xen-boot/dst_host        fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-examine-bios  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-win7-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start           fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-qemuu-nested-amd  8 xen-boot            fail REGR. vs. 173462
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-amd  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-freebsd12-amd64  8 xen-boot             fail REGR. vs. 173462
 test-amd64-amd64-examine-uefi  8 reboot                  fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-ws16-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64  8 xen-boot             fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-pygrub       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-amd64-amd64-xl-shadow    8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvhv2-intel  8 xen-boot              fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-pvshim    8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-amd64-amd64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 8 xen-boot fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-amd64-amd64-xl-qemuu-ovmf-amd64  8 xen-boot         fail REGR. vs. 173462
 test-amd64-amd64-examine      8 reboot                   fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail REGR. vs. 173462
 test-amd64-amd64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                e8f60cd7db24f94f2dbed6bec30dd16a68fc0828
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   97 days
Failing since        173470  2022-10-08 06:21:34 Z   96 days  203 attempts
Testing same since   175737  2023-01-12 07:08:51 Z    0 days    1 attempts

------------------------------------------------------------
3320 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                fail    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 fail    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                fail    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 506287 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 21:51:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 21:51:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476410.738562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG5Tp-0004bw-1V; Thu, 12 Jan 2023 21:51:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476410.738562; Thu, 12 Jan 2023 21:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG5To-0004bp-UX; Thu, 12 Jan 2023 21:51:24 +0000
Received: by outflank-mailman (input) for mailman id 476410;
 Thu, 12 Jan 2023 21:51:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GYVK=5J=chromium.org=keescook@srs-se1.protection.inumbo.net>)
 id 1pG5Tn-0004bj-6w
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 21:51:23 +0000
Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com
 [2607:f8b0:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3e934cf7-92c3-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 22:51:19 +0100 (CET)
Received: by mail-pl1-x634.google.com with SMTP id p24so21526718plw.11
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 13:51:19 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
 by smtp.gmail.com with ESMTPSA id
 im22-20020a170902bb1600b0018bc4493005sm12607234plb.269.2023.01.12.13.51.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 12 Jan 2023 13:51:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e934cf7-92c3-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=CWezmmHa6oj6qJtMomh+1hZZGWqGQrSyq0C7qN1MANE=;
        b=NrhE8GeB0gPtUpLJmL0nObCWCBwo04pER0QWC7YeN486f0tyfa3g3S7BtDUHlIbXFW
         zwrs+/K+DcCOmOq9CHZZri2ktplzmFfjR3ymQ4DGsJ6K3GoZmmlnw6aXM90nHdEUvKT6
         K3lDHI2HhmuViAyN/43XcHXcUD0CiNOTpAm0o=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CWezmmHa6oj6qJtMomh+1hZZGWqGQrSyq0C7qN1MANE=;
        b=vkivjw+NIDq0yduZIFqA+Mjhb7co8dPMtWQU3A8zhU78+O4FqITOwhE4twm6t0dkjt
         +s0TFRPMUPO/Z1/e6i9PtxPHVyE4aKoLZNxOv38zpCy2UbzvGu+yovXSevdqIgsW4T4P
         KfUZJLSOnxySu1frhgK8fOBBLF1jAwhSkjhtKh+JITwrQeXjwfqKPYGebAY5iTvGl5ma
         OhAlDUKJJUSUEsPhTKU2g7RhmxfRxHXBr8EiiDnqm9EdeRWM75cY9tW2ej51mdxmSTrf
         xnVYGv1ztN6yiaWyfEtEdR9t3XgnkhMyZkXOGmimUsmz+S1CDUAV78KaSgLYYU7dtmlr
         6Hyg==
X-Gm-Message-State: AFqh2kqkLwe+NkSymyEogo68IcKeVzQvx0ArTnTKxQwAoD0IoIu8CDcP
	x+4+mmjTGhHKqxrrtjqrJb1urQ==
X-Google-Smtp-Source: AMrXdXvH368zgSN+CqWpi3idJyL7uyuYLfW6YfP3XhLcEJ233wBi4Eh2ch09l2nZrresK14l3RKZWQ==
X-Received: by 2002:a17:902:ba82:b0:189:86f0:70a2 with SMTP id k2-20020a170902ba8200b0018986f070a2mr37242231pls.43.1673560277885;
        Thu, 12 Jan 2023 13:51:17 -0800 (PST)
Date: Thu, 12 Jan 2023 13:51:16 -0800
From: Kees Cook <keescook@chromium.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>, mark.rutland@arm.com
Subject: Re: [RFC][PATCH 2/6] x86/power: Inline write_cr[04]()
Message-ID: <202301121351.B0CE02B@keescook>
References: <20230112143141.645645775@infradead.org>
 <20230112143825.644480983@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230112143825.644480983@infradead.org>

On Thu, Jan 12, 2023 at 03:31:43PM +0100, Peter Zijlstra wrote:
> Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> of dubious value in this code path, simply write the stored values.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Kees Cook <keescook@chromium.org>

-- 
Kees Cook


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 22:04:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 22:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476416.738572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG5g2-0006HN-5h; Thu, 12 Jan 2023 22:04:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476416.738572; Thu, 12 Jan 2023 22:04:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG5g2-0006HG-2r; Thu, 12 Jan 2023 22:04:02 +0000
Received: by outflank-mailman (input) for mailman id 476416;
 Thu, 12 Jan 2023 22:04:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pG5fz-0006Gr-V4
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 22:04:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pG5fz-0005CO-Eg; Thu, 12 Jan 2023 22:03:59 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pG5fz-00023U-7x; Thu, 12 Jan 2023 22:03:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=XuCioJJ2tqNj9WXBy6FbHR37qa0/LlKycYs7DN0nRMs=; b=uaBmPeFTamDNWUYJOXCJOXIaDJ
	ZPhUzzYH9X/oDERKm64upgGtRN2mZq39P2YCktEFiB+K+CFWKdI1BBQPACDabzTbqYmQQV5OBpjTM
	l+/Jdax6jMyMrfNwPMG0SmeYQeryp/lrUuN84fLc8QGs3cCRCNGVYy60d5affXUe741A=;
Message-ID: <05007c7e-50ad-829e-21e2-6e26a7b01f96@xen.org>
Date: Thu, 12 Jan 2023 22:03:57 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, michal.orzel@amd.com,
 Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221212095523.52683-1-julien@xen.org>
 <20221212095523.52683-16-julien@xen.org>
 <alpine.DEB.2.22.394.2212121734150.3075842@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 15/18] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
In-Reply-To: <alpine.DEB.2.22.394.2212121734150.3075842@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/12/2022 01:41, Stefano Stabellini wrote:
>> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
>> index fdbf68aadcaa..e7a80fecec14 100644
>> --- a/xen/arch/arm/include/asm/setup.h
>> +++ b/xen/arch/arm/include/asm/setup.h
>> @@ -168,6 +168,17 @@ int map_range_to_domain(const struct dt_device_node *dev,
>>   
>>   extern const char __ro_after_init_start[], __ro_after_init_end[];
>>   
>> +extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
>> +
>> +#ifdef CONFIG_ARM_64
>> +extern DEFINE_BOOT_PAGE_TABLE(boot_first_id);
>> +#endif
>> +extern DEFINE_BOOT_PAGE_TABLE(boot_second_id);
>> +extern DEFINE_BOOT_PAGE_TABLE(boot_third_id);
> 
> This is more a matter of taste but I would either:
> - define extern all BOOT_PAGE_TABLEs here both ARM64 and ARM32 with
>    #ifdefs

A grep of BOOT_PAGE_TABLE shows that they are all defined in setup.h.

> - or define all the ARM64 only BOOT_PAGE_TABLE in arm64/mm.h and all the
>    ARM32 only BOOT_PAGE_TABLE in arm32/mm.h
>
> Right now we have a mix, as we have boot_first_id with a #ifdef here
> and we have xen_pgtable in arm64/mm.h

We are talking about two distinct sets of page-tables. One is used at 
runtime (i.e. xen_pgtable) and the others are for boot/SMP bring-up.

So adding the boot_* declarations in setup.h is correct. As I wrote 
earlier, setup.h would need a split. But this is not something I really 
want to handle here...

> 
> Also we are missing boot_second and boot_third. We might as well be
> consistent and declare them all?

My plan is really to kill boot_second and boot_third. So I don't really 
want to export them right now (even temporarily).

In any case, I don't think such change belongs in this patch (it is 
already complex enough).

>> +/* Find where Xen will be residing at runtime and return an PT entry */
>> +lpae_t pte_of_xenaddr(vaddr_t);
>> +
>>   #endif
>>   /*
>>    * Local variables:
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 0cf7ad4f0e8c..39e0d9e03c9c 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -93,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
>>   
>>   #ifdef CONFIG_ARM_64
>>   #define HYP_PT_ROOT_LEVEL 0
>> -static DEFINE_PAGE_TABLE(xen_pgtable);
>> +DEFINE_PAGE_TABLE(xen_pgtable);
>>   static DEFINE_PAGE_TABLE(xen_first);
>>   #define THIS_CPU_PGTABLE xen_pgtable
>>   #else
>> @@ -388,7 +388,7 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
>>           invalidate_icache();
>>   }
>>   
>> -static inline lpae_t pte_of_xenaddr(vaddr_t va)
>> +lpae_t pte_of_xenaddr(vaddr_t va)
>>   {
>>       paddr_t ma = va + phys_offset;
>>   
>> @@ -495,6 +495,8 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
>>   
>>       phys_offset = boot_phys_offset;
>>   
>> +    arch_setup_page_tables();
>> +
>>   #ifdef CONFIG_ARM_64
>>       pte = pte_of_xenaddr((uintptr_t)xen_first);
>>       pte.pt.table = 1;
>> -- 
>> 2.38.1
>>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 22:55:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Jan 2023 22:55:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476422.738584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG6Tx-0003Xw-ST; Thu, 12 Jan 2023 22:55:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476422.738584; Thu, 12 Jan 2023 22:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG6Tx-0003Xp-Pe; Thu, 12 Jan 2023 22:55:37 +0000
Received: by outflank-mailman (input) for mailman id 476422;
 Thu, 12 Jan 2023 22:55:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=b756=5J=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pG6Tv-0003Xj-M0
 for xen-devel@lists.xenproject.org; Thu, 12 Jan 2023 22:55:36 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 37696762-92cc-11ed-b8d0-410ff93cb8f0;
 Thu, 12 Jan 2023 23:55:32 +0100 (CET)
Received: by mail-ej1-x62c.google.com with SMTP id tz12so48365592ejc.9
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 14:55:32 -0800 (PST)
Received: from [127.0.0.1] (dynamic-092-224-135-062.92.224.pool.telefonica.de.
 [92.224.135.62]) by smtp.gmail.com with ESMTPSA id
 g9-20020a17090604c900b0085ca279966esm2907135eja.119.2023.01.12.14.55.30
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 12 Jan 2023 14:55:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37696762-92cc-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gcBQah7LIkYMj8F1zQ+LBzI0aneJEGKZfwDsIYdF1Lo=;
        b=UsUA5pX1vdg/jHwhIuedEWTeDhL4HsuHBBxu2MOy/ZZHOi6httqHVC1dV6chqa6yKG
         ehfVJkeA8NYLyVsGzaKZ6T+zHH7kHSIwxcrxhlBnt2PCBhWpmmBjZIQVtjv+T3Y1dL/q
         C2U/PX9JJvmjQBmbydRokghip8jXsSGZ3/bGYCHJatLzLmodRBENxhbraHZwQN5+B3Jp
         X0F3VrP4WBAToQQ5Cyt1LdMYJ03U4df9my9EXoYKHUTjwtwBvMZhKtpsv4EZmF/VCBBm
         L8PV1CKze1Uc7U9q43f9N03S88VreZhU4PXtdv1j9p8bDtIBrB/zZx6rlobAv+RdOnmV
         ak6A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=gcBQah7LIkYMj8F1zQ+LBzI0aneJEGKZfwDsIYdF1Lo=;
        b=uflLX5ZiFrJRTii/8+3SjqSTdFeBYyMDQ7X2b/j8Sr3nLGDtOdrDbZMn6E7uS3yZdb
         kkIXBlOO7s+G3Z0w1hXlj38JcNygA+ojJIlU1EmKchxy4xEBnEkAwx7GCNTzGhUDLlsn
         DjYwgoFsDLGOngqp9T6uo2FP6cflkqzOhtPFzonq/F9Heea+dW+CW7EQ1kWlk85ACCs8
         vWwGkzljfvlV5QoVmF8PV54w2p97UV2EHOxmZ1gISk/s9sRc/ZxF1ZhLdW0LoVWUoAHQ
         6NjMvVF4XZ1GlA7KtypJXBRnN606ijIk9rjMD14uLs+6qZ1MwO5Ulm6JKXWcjHoBchb/
         W1lA==
X-Gm-Message-State: AFqh2kotrHDaT/FGAnFp64CqRssEql83YMCJsXHBCbsKELgOzapposYF
	JVIShI37MclBdeOejnStsjU=
X-Google-Smtp-Source: AMrXdXslORGdXnAT5Qh+ZwtZAi1hOcSbTOS/be198kJia8qVDipsp9tC7Yv3brA25lVSSsq4szwbvg==
X-Received: by 2002:a17:906:2dd9:b0:861:7cb0:ec64 with SMTP id h25-20020a1709062dd900b008617cb0ec64mr5950513eji.67.1673564131426;
        Thu, 12 Jan 2023 14:55:31 -0800 (PST)
Date: Thu, 12 Jan 2023 22:55:25 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Chuck Zmudzinski <brchuckz@aol.com>, "Michael S. Tsirkin" <mst@redhat.com>
CC: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org,
 Philippe Mathieu-Daudé <philmd@linaro.org>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
In-Reply-To: <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com> <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com> <20230110030331-mutt-send-email-mst@kernel.org> <a6994521-68d5-a05b-7be2-a8c605733467@aol.com> <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com> <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
Message-ID: <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8



On 12 January 2023 20:11:54 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>On 1/12/23 2:18 PM, Bernhard Beschow wrote:
>>
>>
>> On 11 January 2023 15:40:24 UTC, Chuck Zmudzinski <brchuckz@aol.com> wrote:
>>>On 1/10/23 3:16 AM, Michael S. Tsirkin wrote:
>>>> On Tue, Jan 10, 2023 at 02:08:34AM -0500, Chuck Zmudzinski wrote:
>>>>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>>>>> as noted in docs/igd-assign.txt in the Qemu source code.
>>>>>
>>>>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>>>>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>>>>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>>>>> a different slot. This problem often prevents the guest from booting.
>>>>>
>>>>> The only available workaround is not good: Configure Xen HVM guests to use
>>>>> the old and no longer maintained Qemu traditional device model available
>>>>> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>>>>>
>>>>> To implement this feature in the Qemu upstream device model for Xen HVM
>>>>> guests, introduce the following new functions, types, and macros:
>>>>>
>>>>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>>>>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>>>>> * typedef XenPTQdevRealize function pointer
>>>>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>>>>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>>>>>
>>>>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>>>>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>>>>> the xl toolstack with the gfx_passthru option enabled, which sets the
>>>>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>>>>>
>>>>> The new xen_igd_reserve_slot function also needs to be implemented in
>>>>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>>>>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>>>>> in which case it does nothing.
>>>>>
>>>>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>>>>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>>>>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>>>>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>>>>>
>>>>> Move the call to xen_host_pci_device_get, and the associated error
>>>>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>>>>> initialize the device class and vendor values which enables the checks for
>>>>> the Intel IGD to succeed. The verification that the host device is an
>>>>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>>>>> and function values as well as by checking that gfx_passthru is enabled,
>>>>> the device class is VGA, and the device vendor is Intel.
>>>>>
>>>>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>>>>> ---
>>>>> Notes that might be helpful to reviewers of patched code in hw/xen:
>>>>>
>>>>> The new functions and types are based on recommendations from Qemu docs:
>>>>> https://qemu.readthedocs.io/en/latest/devel/qom.html
>>>>>
>>>>> Notes that might be helpful to reviewers of patched code in hw/i386:
>>>>>
>>>>> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
>>>>> not affect builds that do not have CONFIG_XEN defined.
>>>>>
>>>>> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
>>>>> existing function that is only true when Qemu is built with
>>>>> xen-pci-passthrough enabled and the administrator has configured the Xen
>>>>> HVM guest with Qemu's igd-passthru=on option.
>>>>>
>>>>> v2: Remove From: <email address> tag at top of commit message
>>>>>
>>>>> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
>>>>>
>>>>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>>>>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
>>>>>
>>>>>     is changed to
>>>>>
>>>>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>>>>         && (s->hostaddr.function == 0)) {
>>>>>
>>>>>     I hoped that I could use the test in v2, since it matches the
>>>>>     other tests for the Intel IGD in Qemu and Xen, but those tests
>>>>>     do not work because the necessary data structures are not set with
>>>>>     their values yet. So instead use the test that the administrator
>>>>>     has enabled gfx_passthru and the device address on the host is
>>>>>     02.0. This test does detect the Intel IGD correctly.
>>>>>
>>>>> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>>>>>     email address to match the address used by the same author in commits
>>>>>     be9c61da and c0e86b76
>>>>>
>>>>>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
>>>>>
>>>>> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>>>>>     for the Intel IGD that uses the same criteria as in other places.
>>>>>     This involved moving the call to xen_host_pci_device_get from
>>>>>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>>>>>     Intel IGD in xen_igd_clear_slot:
>>>>>
>>>>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>>>>         && (s->hostaddr.function == 0)) {
>>>>>
>>>>>     is changed to
>>>>>
>>>>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>>>>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>>>>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>>>>>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>>>>>
>>>>>     Added an explanation for the move of xen_host_pci_device_get from
>>>>>     xen_pt_realize to xen_igd_clear_slot to the commit message.
>>>>>
>>>>>     Rebase.
>>>>>
>>>>> v6: Fix logging by removing these lines from the move from xen_pt_realize
>>>>>     to xen_igd_clear_slot that was done in v5:
>>>>>
>>>>>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>>>>>                " to devfn 0x%x\n",
>>>>>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>>>>                s->dev.devfn);
>>>>>
>>>>>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>>>>>     set yet in xen_igd_clear_slot.
>>>>>
>>>>> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>>>>>     v7 was intended to be.
>>>>>
>>>>> v8: Inhibit out of context log message and needless processing by
>>>>>     adding 2 lines at the top of the new xen_igd_clear_slot function:
>>>>>
>>>>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>>>>         return;
>>>>>
>>>>>     Rebase. This removed an unnecessary header file from xen_pt.h
>>>>>
>>>>>  hw/i386/pc_piix.c    |  3 +++
>>>>>  hw/xen/xen_pt.c      | 49 ++++++++++++++++++++++++++++++++++++--------
>>>>>  hw/xen/xen_pt.h      | 16 +++++++++++++++
>>>>>  hw/xen/xen_pt_stub.c |  4 ++++
>>>>>  4 files changed, 63 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>>> index b48047f50c..bc5efa4f59 100644
>>>>> --- a/hw/i386/pc_piix.c
>>>>> +++ b/hw/i386/pc_piix.c
>>>>> @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>>>>>      }
>>>>>
>>>>>      pc_xen_hvm_init_pci(machine);
>>>>> +    if (xen_igd_gfx_pt_enabled()) {
>>>>> +        xen_igd_reserve_slot(pcms->bus);
>>>>> +    }
>>>>>      pci_create_simple(pcms->bus, -1, "xen-platform");
>>>>>  }
>>>>>  #endif
>>>>
>>>> I would even maybe go further and move the whole logic into
>>>> xen_igd_reserve_slot. And I would even just name it
>>>> xen_hvm_init_reserved_slots without worrying about the what
>>>> or why at the pc level.  At this point it will be up to Xen maintainers.
>>>
>>>I see that doing that would require resolving the two pc_xen_hvm*
>>>functions in pc_piix.c that are guarded by CONFIG_XEN and
>>>moving them to an appropriate place such as xen-hvm.c.
>>>
>>>That is along the lines of the work that Bernhard and Philippe
>>>are doing, so I am Cc'ing them. My first inclination is just
>>>to defer to them: I think the little patch I propose here for
>>>pc_piix.c is eventually going to be moved out of pc_piix.c
>>>by Bernhard in a future patch.
>>>
>>>What they have been doing is very conservative, and I expect that
>>>if and when Bernhard gets here to resolve those functions, they
>>>will do it in a way that keeps the dependency of the xenfv machine
>>>type on the pc machine type and the pc_init1 function.
>>>
>>>What I would propose is to break the dependency of xenfv
>>>on the pc_init1 function. That is, I would propose having a
>>>xenfv_init function in xen-hvm.c, whose first version would
>>>be the current version of pc_init1, so xenfv would still depend
>>>on many i440fx type things, but with the change xen developers
>>>would be free to tweak xenfv_init without affecting the users
>>>of the pc machine type.
>>>
>>>Would that be a good idea? If I get positive feedback for this
>>>idea, I will put it on the table, probably initially as an RFC
>>>patch.
>>
>> In various patches I've been decoupling 1/ PIIX3 from Xen and 2/ QEMU's
>> Xen integration code from the PC machine. My idea is to confine all
>> wiring for a PIIX based PC machine using Xen to pc_piix.c. The
>> pc_xen_hvm* functions seem to do exactly that, so I'd leave them there,
>> at least for now.
>>
>> What I would like to avoid is for the Xen integration code to make
>> assumptions that an x86 or PC machine is always based on i440fx or PIIX3.
>
>I think what you are saying is that if I try to move the logic of my
>patch to xen-hvm.c, as Michael suggests, I should not move or copy any
>piix3 code to the Xen integration code, but access it via an appropriate
>qom interface to the code in pc_piix.c, and only move Xen specific things
>such as the content of my patch to the Xen integration code. I can try to
>do that for a v9 of my patch. It might take me a little while (I am not a
>professional coder), so I will just leave v8 of my patch as is for now
>until I have a patch ready to move it out of pc_piix.c the qom way.

I think the change Michael suggests is very minimalistic: move the if
condition around xen_igd_reserve_slot() into the function itself and
always call it there unconditionally -- basically turning three lines
into one. Since xen_igd_reserve_slot() seems very problem specific,
Michael further suggests renaming it to something more general. All in
all, no big changes required.

Best regards,
Bernhard

>
>Thanks,
>
>Chuck
>
>>
>> I like Michael's idea of going one step further, both in terms of the
>> approach and the reasoning.
>>
>> Best regards,
>> Bernhard
>>
>>>Also, thanks, Michael, for your other suggestions for this patch
>>>about using macros for the devfn constants.
>>>
>>>Chuck
>>>
>>>>
>>>>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>>>>> index 0ec7e52183..eff38155ef 100644
>>>>> --- a/hw/xen/xen_pt.c
>>>>> +++ b/hw/xen/xen_pt.c
>>>>> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>>>>>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>>>>                 s->dev.devfn);
>>>>>
>>>>> -    xen_host_pci_device_get(&s->real_device,
>>>>> -                            s->hostaddr.domain, s->hostaddr.bus,
>>>>> -                            s->hostaddr.slot, s->hostaddr.function,
>>>>> -                            errp);
>>>>> -    if (*errp) {
>>>>> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
>>>>> -        return;
>>>>> -    }
>>>>> -
>>>>>      s->is_virtfn = s->real_device.is_virtfn;
>>>>>      if (s->is_virtfn) {
>>>>>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
>>>>> @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>>>>>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>>>>>  }
>>>>>
>>>>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>>>>> +{
>>>>> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
>>>>> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
>>>>> +}
>>>>> +
>>>>> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
>>>>> +{
>>>>> +    ERRP_GUARD();
>>>>> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
>>>>> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
>>>>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
>>>>> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
>>>>> +
>>>>> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>>>> +        return;
>>>>> +
>>>>> +    xen_host_pci_device_get(&s->real_device,
>>>>> +                            s->hostaddr.domain, s->hostaddr.bus,
>>>>> +                            s->hostaddr.slot, s->hostaddr.function,
>>>>> +                            errp);
>>>>> +    if (*errp) {
>>>>> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
>>>>> +        return;
>>>>> +    }
>>>>> +
>>>>> +    if (is_igd_vga_passthrough(&s->real_device) &&
>>>>> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>>>> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
>>>>> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>>>>
>>>> how about macros for these?
>>>>=20
>>>> #define XEN_PCI_IGD_DOMAIN 0
>>>> #define XEN_PCI_IGD_BUS 0
>>>> #define XEN_PCI_IGD_DEV 2
>>>> #define XEN_PCI_IGD_FN 0
>>>>
>>>>> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
>>>>
>>>> If you are going to do this, you should set it back
>>>> either after pci_qdev_realize or in unrealize,
>>>> for symmetry.
>>>>
>>>>> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
>>>>> +    }
>>>>
>>>>
>>>>> +    xpdc->pci_qdev_realize(qdev, errp);
>>>>> +}
>>>>> +
>>>>
>>>>
>>>>
>>>>>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>>>>>  {
>>>>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>>>>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>>>>
>>>>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
>>>>> +    xpdc->pci_qdev_realize = dc->realize;
>>>>> +    dc->realize = xen_igd_clear_slot;
>>>>>      k->realize = xen_pt_realize;
>>>>>      k->exit = xen_pt_unregister_device;
>>>>>      k->config_read = xen_pt_pci_read_config;
>>>>> @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>>>>>      .instance_size = sizeof(XenPCIPassthroughState),
>>>>>      .instance_finalize = xen_pci_passthrough_finalize,
>>>>>      .class_init = xen_pci_passthrough_class_init,
>>>>> +    .class_size = sizeof(XenPTDeviceClass),
>>>>>      .instance_init = xen_pci_passthrough_instance_init,
>>>>>      .interfaces = (InterfaceInfo[]) {
>>>>>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
>>>>> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
>>>>> index cf10fc7bbf..8c25932b4b 100644
>>>>> --- a/hw/xen/xen_pt.h
>>>>> +++ b/hw/xen/xen_pt.h
>>>>> @@ -2,6 +2,7 @@
>>>>>  #define XEN_PT_H
>>>>>
>>>>>  #include "hw/xen/xen_common.h"
>>>>> +#include "hw/pci/pci_bus.h"
>>>>>  #include "xen-host-pci-device.h"
>>>>>  #include "qom/object.h"
>>>>>
>>>>> @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
>>>>>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>>>>>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>>>>>
>>>>> +#define XEN_PT_DEVICE_CLASS(klass) \
>>>>> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
>>>>> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
>>>>> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
>>>>> +
>>>>> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
>>>>> +
>>>>> +typedef struct XenPTDeviceClass {
>>>>> +    PCIDeviceClass parent_class;
>>>>> +    XenPTQdevRealize pci_qdev_realize;
>>>>> +} XenPTDeviceClass;
>>>>> +
>>>>>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
>>>>> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>>>>>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>>>>>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>>>>>                                             XenHostPCIDevice *dev);
>>>>> @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
>>>>>
>>>>>  #define XEN_PCI_INTEL_OPREGION 0xfc
>>>>>
>>>>> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
>>>>> +
>>>>
>>>> I think you want to calculate this based on dev fn:
>>>>
>>>> #define XEN_PCI_IGD_SLOT_MASK \
>>>> 	(0x1 << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
>>>>
>>>>
>>>>>  typedef enum {
>>>>>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>>>>>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
>>>>> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
>>>>> index 2d8cac8d54..5c108446a8 100644
>>>>> --- a/hw/xen/xen_pt_stub.c
>>>>> +++ b/hw/xen/xen_pt_stub.c
>>>>> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>>>>>          error_setg(errp, "Xen PCI passthrough support not built in");
>>>>>      }
>>>>>  }
>>>>> +
>>>>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>>>>> +{
>>>>> +}
>>>>> --
>>>>> 2.39.0
>>>>
>>>
>


From xen-devel-bounces@lists.xenproject.org Thu Jan 12 23:04:09 2023
Date: Thu, 12 Jan 2023 18:03:34 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Bernhard Beschow <shentey@gmail.com>
Cc: Chuck Zmudzinski <brchuckz@aol.com>, qemu-devel@nongnu.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org,
	Philippe Mathieu-Daudé <philmd@linaro.org>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230112180314-mutt-send-email-mst@kernel.org>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
MIME-Version: 1.0
In-Reply-To: <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:
> I think the change Michael suggests is very minimalistic: Move the if
> condition around xen_igd_reserve_slot() into the function itself and
> always call it there unconditionally -- basically turning three lines
> into one. Since xen_igd_reserve_slot() seems very problem specific,
> Michael further suggests renaming it to something more general. All
> in all, no big changes required.

yes, exactly.

-- 
MST



From xen-devel-bounces@lists.xenproject.org Thu Jan 12 23:15:27 2023
Message-ID: <a607223f-1cd5-5b32-4d9b-500496745786@xen.org>
Date: Thu, 12 Jan 2023 23:15:17 +0000
MIME-Version: 1.0
To: Jan Beulich <jbeulich@suse.com>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-6-julien@xen.org>
 <ca02a313-0fa2-8041-3e8f-d467c3e99fb6@suse.com>
 <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
 <ade9f97d-aa28-bd7e-552c-35bd707bab29@suse.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 05/22] x86/srat: vmap the pages for acpi_slit
In-Reply-To: <ade9f97d-aa28-bd7e-552c-35bd707bab29@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 04/01/2023 10:23, Jan Beulich wrote:
> On 23.12.2022 12:31, Julien Grall wrote:
>> On 20/12/2022 15:30, Jan Beulich wrote:
>>> On 16.12.2022 12:48, Julien Grall wrote:
>>>> From: Hongyan Xia <hongyxia@amazon.com>
>>>>
>>>> This avoids the assumption that boot pages are in the direct map.
>>>>
>>>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> However, ...
>>>
>>>> --- a/xen/arch/x86/srat.c
>>>> +++ b/xen/arch/x86/srat.c
>>>> @@ -139,7 +139,8 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
>>>>    		return;
>>>>    	}
>>>>    	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>>> -	acpi_slit = mfn_to_virt(mfn_x(mfn));
>>>> +	acpi_slit = vmap_contig_pages(mfn, PFN_UP(slit->header.length));
>>>
>>> ... with the increased use of vmap space the VA range used will need
>>> growing. And that's perhaps better done ahead of time than late.
>>
>> I will have a look to increase the vmap().
>>
>>>
>>>> +	BUG_ON(!acpi_slit);
>>>
>>> Similarly relevant for the earlier patch: It would be nice if boot
>>> failure for optional things like NUMA data could be avoided.
>>
>> If you can't map (or allocate the memory), then you are probably in a
>> very bad situation because both should really not fail at boot.
>>
>> So I think this is correct to crash early because the admin will be able
>> to look what went wrong. Otherwise, it may be missed in the noise.
> 
> Well, I certainly can see one taking this view. However, at least in
> principle allocation (or mapping) may fail _because_ of NUMA issues.

Right. I read this as the user will likely want to add "numa=off" on the 
command line.

> At which point it would be better to boot with NUMA support turned off

I have to disagree with "better" here. This may work for a user with a
handful of hosts. But for a large-scale setup, you will really want an
early failure rather than a host booting with an expected feature
disabled (the NUMA issue may be broken hardware).

It is better to fail and then ask the user to specify "numa=off". At
least the person made a conscious decision to turn off the feature.

I am curious to hear the opinion from the others.

Cheers,

-- 
Julien Grall


	jgo9eg5gifsYxaE9aasLrYqObg/81s9If+MTJPC+C5+Y1b7fz954jwuP74wDNVHz4FjmyXBsR1BXf
	Qmkt7Vwx9kBaA7rsM11pWHsLbxoWWwwn6FKbmtDREXmvRF/2ZCty47QvKzCQ0hgt0ops=;
Message-ID: <a99a8246-bc80-07b9-dacc-f117ace37027@xen.org>
Date: Thu, 12 Jan 2023 23:20:05 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 06/22] x86: map/unmap pages in restore_all_guests
To: Jan Beulich <jbeulich@suse.com>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-7-julien@xen.org>
 <478e04bc-6ff7-de01-dfb9-55d579228152@suse.com>
 <f84d30cb-e743-60f8-a496-603323b79f37@xen.org>
 <01584e11-36ca-7836-85ad-bba9351af46e@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <01584e11-36ca-7836-85ad-bba9351af46e@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 04/01/2023 10:27, Jan Beulich wrote:
> On 23.12.2022 13:22, Julien Grall wrote:
>> Hi,
>>
>> On 22/12/2022 11:12, Jan Beulich wrote:
>>> On 16.12.2022 12:48, Julien Grall wrote:
>>>> --- a/xen/arch/x86/x86_64/entry.S
>>>> +++ b/xen/arch/x86/x86_64/entry.S
>>>> @@ -165,7 +165,24 @@ restore_all_guest:
>>>>            and   %rsi, %rdi
>>>>            and   %r9, %rsi
>>>>            add   %rcx, %rdi
>>>> -        add   %rcx, %rsi
>>>> +
>>>> +         /*
>>>> +          * Without a direct map, we have to map first before copying. We only
>>>> +          * need to map the guest root table but not the per-CPU root_pgt,
>>>> +          * because the latter is still a xenheap page.
>>>> +          */
>>>> +        pushq %r9
>>>> +        pushq %rdx
>>>> +        pushq %rax
>>>> +        pushq %rdi
>>>> +        mov   %rsi, %rdi
>>>> +        shr   $PAGE_SHIFT, %rdi
>>>> +        callq map_domain_page
>>>> +        mov   %rax, %rsi
>>>> +        popq  %rdi
>>>> +        /* Stash the pointer for unmapping later. */
>>>> +        pushq %rax
>>>> +
>>>>            mov   $ROOT_PAGETABLE_FIRST_XEN_SLOT, %ecx
>>>>            mov   root_table_offset(SH_LINEAR_PT_VIRT_START)*8(%rsi), %r8
>>>>            mov   %r8, root_table_offset(SH_LINEAR_PT_VIRT_START)*8(%rdi)
>>>> @@ -177,6 +194,14 @@ restore_all_guest:
>>>>            sub   $(ROOT_PAGETABLE_FIRST_XEN_SLOT - \
>>>>                    ROOT_PAGETABLE_LAST_XEN_SLOT - 1) * 8, %rdi
>>>>            rep movsq
>>>> +
>>>> +        /* Unmap the page. */
>>>> +        popq  %rdi
>>>> +        callq unmap_domain_page
>>>> +        popq  %rax
>>>> +        popq  %rdx
>>>> +        popq  %r9
>>>
>>> While the PUSH/POP are part of what I dislike here, I think this wants
>>> doing differently: Establish a mapping when putting in place a new guest
>>> page table, and use the pointer here. This could be a new per-domain
>>> mapping, to limit its visibility.
>>
>> I have looked at a per-domain approach and this looks way more complex
>> than the few concise lines here (not mentioning the extra amount of
>> memory).
> 
> Yes, I do understand that would be a more intrusive change.

I could be persuaded to look at a more intrusive change if there is a 
good reason to do it. To me, at the moment, it mostly seems to be a 
matter of taste.

So what would we gain from a per-domain mapping?

> 
>> So I am not convinced this is worth the effort here.
>>
>> I don't have another approach in mind. So do you dislike this
>> approach to the point that it will be nacked?
> 
> I guess I wouldn't nack it, but I also wouldn't provide an ack.
> I'm curious
> what Andrew or Roger think here...

Unfortunately Roger is on parental leave for the next couple of months. 
It would be good to make some progress beforehand. Andrew, what do you 
think?

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 00:45:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 00:45:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476447.738628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG8Bx-0000i4-9u; Fri, 13 Jan 2023 00:45:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476447.738628; Fri, 13 Jan 2023 00:45:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG8Bx-0000hx-6H; Fri, 13 Jan 2023 00:45:09 +0000
Received: by outflank-mailman (input) for mailman id 476447;
 Fri, 13 Jan 2023 00:45:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG8Bv-0000hn-7D; Fri, 13 Jan 2023 00:45:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG8Bv-00018w-47; Fri, 13 Jan 2023 00:45:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG8Bu-0001rK-LU; Fri, 13 Jan 2023 00:45:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pG8Bu-0005i5-L1; Fri, 13 Jan 2023 00:45:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ylj5uHL3wfZLgCDl6litaVJEVZ5nRVoL6tsb8N63YQU=; b=6bBnj/VsFCFXEHSHoJlkM17+k8
	unG5RsxRcqDvuaEQ8KuFpTZQNsa+aqhJmBemdG4d6O02fJxmCiEa6I7pNQIbXt9EtJOY6Qz+8+b3i
	of/y/F5vfzrLP+H9iZO4N3QCFCbjGOkDABeTO0cmiXiRc5DbPvyhPPf+bLpfMS4Lv9yM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175743-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175743: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:xen-boot:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-multivcpu:debian-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:debian-di-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
X-Osstest-Versions-That:
    qemuu=0ab12aa32462817f0a53fa6f6ce4baf664ef1713
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Jan 2023 00:45:06 +0000

flight 175743 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175743/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 175735 pass in 175743
 test-amd64-i386-pair     11 xen-install/dst_host fail in 175735 pass in 175743
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 175735 pass in 175743
 test-amd64-i386-pair         10 xen-install/src_host       fail pass in 175735
 test-armhf-armhf-xl-rtds      8 xen-boot                   fail pass in 175735
 test-armhf-armhf-xl-multivcpu 12 debian-install            fail pass in 175735
 test-armhf-armhf-libvirt-qcow2 12 debian-di-install        fail pass in 175735
 test-armhf-armhf-libvirt-raw 12 debian-di-install          fail pass in 175735

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail in 175735 like 175623
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 175735 like 175623
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 175735 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 175735 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 175735 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 175735 never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail in 175735 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 175735 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175623
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175623
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175623
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175623
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175623
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287
baseline version:
 qemuu                0ab12aa32462817f0a53fa6f6ce4baf664ef1713

Last test of basis   175623  2023-01-07 23:09:00 Z    5 days
Failing since        175627  2023-01-08 14:40:14 Z    4 days   21 attempts
Testing same since   175654  2023-01-09 18:08:58 Z    3 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bernhard Beschow <shentey@gmail.com>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai Huang <kai.huang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Lei Xiang <leixiang@kylinos.cn>
  leixiang <leixiang@kylinos.cn>
  Liuxiangdong <liuxiangdong5@huawei.com>
  Longpeng <longpeng2@huawei.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nikita Ivanov <nivanov@cloudlinux.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yanan Wang <wangyanan55@huawei.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Zeng Chi <zengchi@kylinos.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   0ab12aa324..aa96ab7c9d  aa96ab7c9df59c615ca82b49c9062819e0a1c287 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 00:48:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 00:48:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476456.738639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG8F6-0001Sw-Vf; Fri, 13 Jan 2023 00:48:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476456.738639; Fri, 13 Jan 2023 00:48:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG8F6-0001Sp-RE; Fri, 13 Jan 2023 00:48:24 +0000
Received: by outflank-mailman (input) for mailman id 476456;
 Fri, 13 Jan 2023 00:48:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8zPp=5K=kernel.org=pr-tracker-bot@srs-se1.protection.inumbo.net>)
 id 1pG8F5-0001Sj-R7
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 00:48:23 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa01f384-92db-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 01:48:21 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0FB8B621EA;
 Fri, 13 Jan 2023 00:48:20 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPS id 74801C433F0;
 Fri, 13 Jan 2023 00:48:19 +0000 (UTC)
Received: from aws-us-west-2-korg-oddjob-1.ci.codeaurora.org
 (localhost.localdomain [127.0.0.1])
 by aws-us-west-2-korg-oddjob-1.ci.codeaurora.org (Postfix) with ESMTP id
 62452C395D4; Fri, 13 Jan 2023 00:48:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa01f384-92db-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1673570899;
	bh=FU5yKit5p//SKeorja80X9Jb/gpyvO/lOFikrwDWQv4=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=FY5CGGvJ4n5TmKgXhzDzTXmr5RzlbCLO6snnSa02rVHLm1bmG0ruq6HbDlpch9vo0
	 Hz+km26bAq6gzSeAx2QrOvrYp9UnaLUX62WUBoUWmTFmXXidQcjHH5rIYT2sAwDCBR
	 xEP2m4WiGtmcaY9VoH2/GH+74OUI6nC0GBY3YT3SexRfeA0H1wxrcXVvzY0wPwYUA7
	 Hl/w+6OPPrnRalGiSUitC9owv4Pdq2qdXzXAyGPgC8LJ1C58OVAnAmeBkXxNQgXvTJ
	 /GtMByWvbUZ05liYY4Z2srBJZfD3/BIf80E37LHgvX4aG7XQyveOUOBWbIdwuJaYGh
	 DA8E/DMq6FnHw==
Subject: Re: [GIT PULL] xen: branch for v6.2-rc4
From: pr-tracker-bot@kernel.org
In-Reply-To: <20230111122501.21815-1-jgross@suse.com>
References: <20230111122501.21815-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20230111122501.21815-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.2-rc4-tag
X-PR-Tracked-Commit-Id: f57034cedeb6e00256313a2a6ee67f974d709b0b
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: bad8c4a850eaf386df681d951e3afc06bf1c7cf8
Message-Id: <167357089939.28490.16784668804617629143.pr-tracker-bot@kernel.org>
Date: Fri, 13 Jan 2023 00:48:19 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org

The pull request you sent on Wed, 11 Jan 2023 13:25:01 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.2-rc4-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/bad8c4a850eaf386df681d951e3afc06bf1c7cf8

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 02:19:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 02:19:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476464.738650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG9f5-0001cu-Ju; Fri, 13 Jan 2023 02:19:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476464.738650; Fri, 13 Jan 2023 02:19:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG9f5-0001cm-F3; Fri, 13 Jan 2023 02:19:19 +0000
Received: by outflank-mailman (input) for mailman id 476464;
 Fri, 13 Jan 2023 02:19:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG9f4-0001cc-9E; Fri, 13 Jan 2023 02:19:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG9f4-0002Zs-6I; Fri, 13 Jan 2023 02:19:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG9f3-0003uW-S8; Fri, 13 Jan 2023 02:19:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pG9f3-0004Nm-Rg; Fri, 13 Jan 2023 02:19:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9G/KpU8ZPBrAx9W0B1V/z+p7xoWczZbnBtCAvEXuwZc=; b=YdG4T2JxmU2IMnin4Vcz+pk3EB
	3+wjccst8ZgKhdPXyUWxjbLKXI75HtK5KlwQke0J2gjsszcXHbhSmXIfg513NxfBAuD28EnyFQsqz
	nZwWRGqH6nYxOigizAHBs5K/w3ptakLa9n1lg0xHod5mSiuGHZymkxEo4pqc1b7eLPis=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175748-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175748: trouble: broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:host-install(5):broken:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3edca52ce736297d7fcf293860cd94ef62638052
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Jan 2023 02:19:17 +0000

flight 175748 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175748/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl           5 host-install(5)        broken REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3edca52ce736297d7fcf293860cd94ef62638052
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    0 days
Testing same since   175748  2023-01-12 20:01:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-step test-armhf-armhf-xl host-install(5)

Not pushing.

------------------------------------------------------------
commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 02:25:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 02:25:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476472.738661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG9kz-00039S-94; Fri, 13 Jan 2023 02:25:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476472.738661; Fri, 13 Jan 2023 02:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pG9kz-00039L-6K; Fri, 13 Jan 2023 02:25:25 +0000
Received: by outflank-mailman (input) for mailman id 476472;
 Fri, 13 Jan 2023 02:25:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG9kx-00039B-V9; Fri, 13 Jan 2023 02:25:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG9kx-0002hr-Oh; Fri, 13 Jan 2023 02:25:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pG9kx-000427-Ae; Fri, 13 Jan 2023 02:25:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pG9kx-0006y2-A9; Fri, 13 Jan 2023 02:25:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bSGeAMgCbPZiIB84nhVLIQTCIrFZ99E+lv+woF4if0g=; b=vuFOjzl61HRyFSf4j0KFoYFqS3
	Xst+DIlBJuiOZofEK5Gny5oMkU0spwX7M5jzs45teU6LdI75cn9uGyj19xWsViWdWtFg4NC96jORe
	TpIEfGtcm1SQbhAlyfYiAenSnWv0t/MbeVOMiZiwWZ+BWqRLVgjyV+b0tJlfl6YHxMAA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175739-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175739: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:host-install(5):broken:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=83d9679db057d5736c7b5a56db06bb6bb66c3914
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Jan 2023 02:25:23 +0000

flight 175739 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175739/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-multivcpu  5 host-install(5)       broken REGR. vs. 175734
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 175734
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175734
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175734
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175734
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175734
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175734
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175734
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175734
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175734
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175734
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175734
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175734
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175734
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  83d9679db057d5736c7b5a56db06bb6bb66c3914
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    1 days
Testing same since   175739  2023-01-12 09:38:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-multivcpu broken
broken-step test-armhf-armhf-xl-multivcpu host-install(5)

Not pushing.

------------------------------------------------------------
commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 02:57:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 02:57:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476481.738671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGAGJ-0006u6-Tl; Fri, 13 Jan 2023 02:57:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476481.738671; Fri, 13 Jan 2023 02:57:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGAGJ-0006tz-Qx; Fri, 13 Jan 2023 02:57:47 +0000
Received: by outflank-mailman (input) for mailman id 476481;
 Fri, 13 Jan 2023 02:57:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LVbI=5K=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pGAGI-0006tt-Gq
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 02:57:46 +0000
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com
 [66.111.4.29]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0c470ef4-92ee-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 03:57:44 +0100 (CET)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 0E4895C0126;
 Thu, 12 Jan 2023 21:57:42 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute1.internal (MEProxy); Thu, 12 Jan 2023 21:57:42 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 12 Jan 2023 21:57:40 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c470ef4-92ee-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:message-id:mime-version:reply-to:sender:subject
	:subject:to:to; s=fm2; t=1673578662; x=1673665062; bh=rlS+RQUHqs
	IC3xC0I9xDTjyCMxue/C52cDH/kXBJLMA=; b=C+ElV/kndUUGybHtXj6FDOybbi
	8tkS4VoLc7k2v08A4l6p9OeozmKpI6dqtnmYQNx3xdALBt0Q6aCa5aYXqLiNo1e+
	mc8jq5mmRvDg/jJ3LiAG3G/3VK72sd1kjwl4nRALrQN7mo/hmKXkuzVdIIMPN8j4
	AaB28hv+kvaAo66eHtAn8LKAMLchSlx9dkpewe/ySeX3Su8BVqIEoUKrriJZGNwS
	3tZJHpgn4L2xICKg1koNqRMJBJWxtH31K0+xvr9k1zAnNaVH5+MNK1AKaEmB6B0r
	sz8R4GJU3ZXijqD9dNTxcuAO6ERBmPtD0+9S2Rj4NiHbOI3/ycOi3lweybhg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:message-id:mime-version
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=1673578662; x=
	1673665062; bh=rlS+RQUHqsIC3xC0I9xDTjyCMxue/C52cDH/kXBJLMA=; b=l
	tIjns746EPB75MWgq7jph9EOTuXM1Zp1v0RYKgtNcgV+hr8gjo3W3lzDexiyUZ48
	ukO8fEyplAh2Hhf/EIcMQN0FUVAWn7kXrbyephnLZsoPC8Gyl/E1b9qoSKPxhxNF
	cVTYo4a4ETi0PBELGfJW09zKTxeB8MnLdgdthwE5eJkP7pwmEOYOyLr8/QcFzd53
	mUVr+vvEFFh09pChiXR6C8KwqvJcr27KmlLcqVCb5UWeuZkIGwjhq105b0y7LQ6Q
	7P0u3dBPJqbW4rG3tXQoU417up4XUGBHdT1tHaOkSavKBxsXQYK8WjDu8Dm8KIYT
	EpHCqh62p/1Od+CaO1AfA==
X-ME-Sender: <xms:pcjAY4TrV4rbZavQ2GMAvIQwxFsHr_vHfpuIRsF-dcDMr7mmLsRbbA>
    <xme:pcjAY1z7rcJFpE9KZ84xwMRpeqUB_YxBenrL1HvAh2ZhRsAwybwPGxjsqC9uOHqg8
    evx5OsbG_GmUw>
X-ME-Received: <xmr:pcjAY10GGaxVpVPzsrXBElzKxiZgX1D2GKEOw4QyXl1eFsNRlbf5hbmu8W4xYKlw9CjbqMYe4oFAUVGCFuCMxKhnf7Mz4c8uwA>
X-ME-Proxy: <xmx:pcjAY8Cv_orcabPmI9qc3Bdv6Zp_w_H41P7jnIsdJxZ1Z8sKOWfLVg>
    <xmx:pcjAYxgZm8Glhoi_vv1Kw36sFdlpbBQWLHbmknfBCNJmTuMqdyOr4A>
    <xmx:pcjAY4rbmzoxPNljZt3ZdOKaaQiDW04pisxhfq6ATDd4BnbdctPtwg>
    <xmx:psjAYxKy65CSBUn8UrRPHU-iJsG9x6sQqN9gL_9xHmTQ1pjX3y79XA>
Feedback-ID: i1568416f:Fastmail
Date: Fri, 13 Jan 2023 03:57:37 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Cc: regressions@lists.linux.dev
Subject: S3 under Xen regression between 6.1.1 and 6.1.3
Message-ID: <Y8DIodWQGm99RA+E@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="5xaRNTg0gQ8/F2e0"
Content-Disposition: inline


--5xaRNTg0gQ8/F2e0
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 13 Jan 2023 03:57:37 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Cc: regressions@lists.linux.dev
Subject: S3 under Xen regression between 6.1.1 and 6.1.3

Hi,

6.1.3 as PV dom0 crashes when attempting to suspend. 6.1.1 works. The
crash:

    [  348.284004] PM: suspend entry (deep)
    [  348.289532] Filesystems sync: 0.005 seconds
    [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
    [  348.292457] OOM killer disabled.
    [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
    [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
    [  348.749228] PM: suspend devices took 0.352 seconds
    [  348.769713] ACPI: EC: interrupt blocked
    [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
    [  348.816080] #PF: supervisor read access in kernel mode
    [  348.816081] #PF: error_code(0x0000) - not-present page
    [  348.816083] PGD 0 P4D 0
    [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
    [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
    [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
    [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20
    [  348.816100] Code: 44 00 00 48 8b 05 04 a3 82 02 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 0f 1f 44 00 00 48 8b 05 fc 9d 82 02 <8b> 40 1c c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f
    [  348.816103] RSP: e02b:ffffc90042537d08 EFLAGS: 00010246
    [  348.816105] RAX: 0000000000000000 RBX: 0000000000000003 RCX: 20c49ba5e353f7cf
    [  348.816106] RDX: 000000000000cd19 RSI: 000000000002ee9a RDI: 002a051ed42d7694
    [  348.816108] RBP: 0000000000000003 R08: ffffc90042537ca0 R09: ffffffff82c5e468
    [  348.816110] R10: 0000000000007ff0 R11: 0000000000000000 R12: 0000000000000000
    [  348.816111] R13: fffffffffffffff2 R14: ffff88812206e6c0 R15: ffff88812206e6e0
    [  348.816121] FS:  00007cb49b01eb80(0000) GS:ffff888189400000(0000) knlGS:0000000000000000
    [  348.816123] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
    [  348.816124] CR2: 000000000000001c CR3: 000000012231a000 CR4: 0000000000050660
    [  348.816131] Call Trace:
    [  348.816133]  <TASK>
    [  348.816134]  acpi_pm_prepare+0x1a/0x50
    [  348.816141]  suspend_enter+0x94/0x360
    [  348.816146]  suspend_devices_and_enter+0x198/0x2b0
    [  348.816150]  enter_state+0x18d/0x1f5
    [  348.816155]  pm_suspend.cold+0x20/0x6b
    [  348.816159]  state_store+0x27/0x60
    [  348.816163]  kernfs_fop_write_iter+0x125/0x1c0
    [  348.816169]  new_sync_write+0x105/0x190
    [  348.816176]  vfs_write+0x211/0x2a0
    [  348.816180]  ksys_write+0x67/0xe0
    [  348.816183]  do_syscall_64+0x59/0x90
    [  348.816188]  ? do_syscall_64+0x69/0x90
    [  348.816192]  ? exc_page_fault+0x76/0x170
    [  348.816195]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
    [  348.816200] RIP: 0033:0x7cb49c1412f7
    [  348.816203] Code: 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
    [  348.816204] RSP: 002b:00007ffc125f63f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
    [  348.816206] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007cb49c1412f7
    [  348.816208] RDX: 0000000000000004 RSI: 00007ffc125f64e0 RDI: 0000000000000004
    [  348.816209] RBP: 00007ffc125f64e0 R08: 00005c83d772bca0 R09: 000000000000000d
    [  348.816210] R10: 00005c83d7727eb0 R11: 0000000000000246 R12: 0000000000000004
    [  348.816211] R13: 00005c83d77272d0 R14: 0000000000000004 R15: 00007cb49c213700
    [  348.816213]  </TASK>
    [  348.816214] Modules linked in: loop vfat fat snd_hda_codec_hdmi snd_sof_pci_intel_tgl snd_sof_intel_hda_common soundwire_intel soundwire_generic_allocation soundwire_cadence snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi soundwire_bus snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine snd_hda_intel snd_intel_dspcfg snd_intel_sdw_acpi iTCO_wdt intel_pmc_bxt ee1004 iTCO_vendor_support intel_rapl_msr snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device iwlwifi snd_pcm pcspkr joydev processor_thermal_device_pci_legacy processor_thermal_device snd_timer snd cfg80211 processor_thermal_rfim i2c_i801 processor_thermal_mbox i2c_smbus idma64 rfkill processor_thermal_rapl soundcore intel_rapl_common int340x_thermal_zone intel_soc_dts_iosf igen6_edac intel_hid intel_pmc_core intel_scu_pltdrv sparse_keymap fuse xenfs ip_tables dm_thin_pool
    [  348.816259]  dm_persistent_data dm_bio_prison dm_crypt i915 crct10dif_pclmul crc32_pclmul crc32c_intel polyval_clmulni polyval_generic drm_buddy nvme video wmi drm_display_helper nvme_core xhci_pci xhci_pci_renesas ghash_clmulni_intel hid_multitouch sha512_ssse3 serio_raw nvme_common cec xhci_hcd ttm i2c_hid_acpi i2c_hid pinctrl_tigerlake xen_acpi_processor xen_privcmd xen_pciback xen_blkback xen_gntalloc xen_gntdev xen_evtchn uinput
    [  348.816281] CR2: 000000000000001c
    [  348.816283] ---[ end trace 0000000000000000 ]---
    [  348.867991] RIP: e030:acpi_get_wakeup_address+0xc/0x20
    [  348.867996] Code: 44 00 00 48 8b 05 04 a3 82 02 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 0f 1f 44 00 00 48 8b 05 fc 9d 82 02 <8b> 40 1c c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f
    [  348.867998] RSP: e02b:ffffc90042537d08 EFLAGS: 00010246
    [  348.867999] RAX: 0000000000000000 RBX: 0000000000000003 RCX: 20c49ba5e353f7cf
    [  348.868000] RDX: 000000000000cd19 RSI: 000000000002ee9a RDI: 002a051ed42d7694
    [  348.868001] RBP: 0000000000000003 R08: ffffc90042537ca0 R09: ffffffff82c5e468
    [  348.868001] R10: 0000000000007ff0 R11: 0000000000000000 R12: 0000000000000000
    [  348.868002] R13: fffffffffffffff2 R14: ffff88812206e6c0 R15: ffff88812206e6e0
    [  348.868008] FS:  00007cb49b01eb80(0000) GS:ffff888189400000(0000) knlGS:0000000000000000
    [  348.868009] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
    [  348.868009] CR2: 000000000000001c CR3: 000000012231a000 CR4: 0000000000050660
    [  348.868014] Kernel panic - not syncing: Fatal exception
    [  348.868031] Kernel Offset: disabled

Looking at git log between those two versions, and the
acpi_get_wakeup_address() function, I suspect it's this change (but I
have _not_ tested it):

commit b1898793777fe10a31c160bb8bc385d6eea640c6
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Nov 23 12:45:23 2022 +0100

    x86/boot: Skip realmode init code when running as Xen PV guest

    [ Upstream commit f1e525009493cbd569e7c8dd7d58157855f8658d ]

    When running as a Xen PV guest there is no need for setting up the
    realmode trampoline, as realmode isn't supported in this environment.

    Trying to setup the trampoline has been proven to be problematic in
    some cases, especially when trying to debug early boot problems with
    Xen requiring to keep the EFI boot-services memory mapped (some
    firmware variants seem to claim basically all memory below 1Mb for boot
    services).

    Introduce new x86_platform_ops operations for that purpose, which can
    be set to a NOP by the Xen PV specific kernel boot code.

      [ bp: s/call_init_real_mode/do_init_real_mode/ ]

    Fixes: 084ee1c641a0 ("x86, realmode: Relocator for realmode code")
    Suggested-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: https://lore.kernel.org/r/20221123114523.3467-1-jgross@suse.com
    Signed-off-by: Sasha Levin <sashal@kernel.org>


# regzbot introduced v6.1.1..v6.1.3
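For context, the oops is consistent with acpi_get_wakeup_address() reading a
field at a small offset (0x1c) through a NULL real-mode header pointer, which
would stay unset when the realmode trampoline setup is skipped. The following
is a minimal, self-contained C model of that failure mode and the obvious
guard; the struct layout, field names, and the _sketch suffix are illustrative
stand-ins, not the exact kernel code, and whether a NULL check is the right
fix is for the maintainers to decide:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the kernel's real-mode header: the wakeup
 * trampoline address lives at a small offset from the header base, so a
 * read through a NULL base faults at a small address (compare the oops:
 * "NULL pointer dereference, address: 000000000000001c"). */
struct real_mode_header_sketch {
    uint32_t text_start;
    uint32_t ro_end;
    uint32_t wakeup_start; /* field read by the accessor below */
};

/* Stays NULL when realmode init is skipped, as in the Xen PV case after
 * the suspected commit. */
static struct real_mode_header_sketch *real_mode_header_sketch = NULL;

/* Accessor with a NULL guard added; the unguarded version would simply
 * dereference the pointer and crash exactly as in the log above. */
static unsigned long acpi_get_wakeup_address_sketch(void)
{
    if (real_mode_header_sketch == NULL)
        return 0; /* no wakeup trampoline available */
    return (unsigned long)real_mode_header_sketch->wakeup_start;
}
```

With the guard, the S3 path would see address 0 instead of faulting, which the
ACPI code could then reject gracefully.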

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--5xaRNTg0gQ8/F2e0
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmPAyKIACgkQ24/THMrX
1yxjsQf/W72vRWCEJuBqXmvxNSIdKZyzkRUf51j+IiWDY8g3j4abHSBvgzVe4trM
6KKhKa6yk3ThfRKe/vnHLx3932/DIi43vJEmSSOb26/De4ncspyUPK7IHgZsKaRd
hbUzf4b3XSNfaiQYVGABDScMf4+7715T2TfKtTPKMM1SYYMqY9Rgz1wszcnhsd02
TkSfdV1Hx6SvdrYdoNDjYuRhTp+u5gwcTl2G265NPqADE3IT62/GabkvSoPTfZ4w
jdoYhdmeBJwrx6g0M/gaiGTBHKPRn+I5RsFqYem13Cx5Vhk1BLBaBulCTo2faxWk
Rz+Yki0pER8kY09DKiibjifxxdDlQA==
=i4hv
-----END PGP SIGNATURE-----

--5xaRNTg0gQ8/F2e0--


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 04:14:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 04:14:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476489.738683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGBSi-0006g2-Hp; Fri, 13 Jan 2023 04:14:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476489.738683; Fri, 13 Jan 2023 04:14:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGBSi-0006fv-DU; Fri, 13 Jan 2023 04:14:40 +0000
Received: by outflank-mailman (input) for mailman id 476489;
 Fri, 13 Jan 2023 04:14:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2WVX=5K=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pGBSg-0006fn-Ee
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 04:14:39 +0000
Received: from sonic305-19.consmr.mail.gq1.yahoo.com
 (sonic305-19.consmr.mail.gq1.yahoo.com [98.137.64.82])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c7fbf237-92f8-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 05:14:34 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic305.consmr.mail.gq1.yahoo.com with HTTP; Fri, 13 Jan 2023 04:14:31 +0000
Received: by hermes--production-ne1-5648bd7666-x5zv9 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 90236039d2e62b2abb2e002068f6b7b4; 
 Fri, 13 Jan 2023 04:14:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7fbf237-92f8-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673583271; bh=4hASBuEGunDT4zjWth2rvDl2Y4xCmYLJ9R5aycKiqec=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=tONNFQpBv6sDKV/ugSmglztfJJL0yYEkcO0dJcg4BFyP/xJY9NduoRr9Hn34HBoWNnbJbxJvJzsaT6waPYcUHbDFwd1ZqYfdcmQ6HuhbyE4g3Xm1iHjCIf6WzXznonLD6A9PGVBKs7eeYc+jNpRFAEeeqSIU/k6/2+2q/YsGs4RhTx8BPvanyUxqNehnaVvxLPPrlRe02kbt2hYGGB7R0a9+je1pij9eoX8PZoFZ+HkqocB0tB6zUmLq+xMyjHDOH8LxETQC0GSLUhnjYfP5hjbbyf4hqNUrc6zjsMHSSJUPnbVN16BOFi0MA58igCZAuxIe+WvoWscyxoAy1o/8Yw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673583271; bh=Tv4gbSx1lQwSkmBRYj+8e9bM3qtZfTwV2DimFH6eZWy=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=UlFlx8AdCnY9gXRLo+jltAqKbwiOFsksnU2zXen6M1DCZ686j8CIDK8hck9ltidKZVXRkLIH4zGWTR4mYF92gq7HDcHKyFpJ80W52RZBv1LaXOUbGMdF3m0O5UfaXcsa9ZQL94s23cxFeyY6jCv+qViH2gFbhjuR7/ThJ03xm0iFvYM8BWqqAA8lOQrZvn6rKqekMGenaixoYQsJtctmguxtibHEnvP/kUf7xkermgFlWLR5sno3IFSDQibYgmdjXYxIFZ4yU0PcqiAzJhTV4p3f6+1arbzi7LUQJJ5l/PN6AqyeIaBsDUdmuTTR/LtiGG8yrgTaNt7J1R1OAolE3w==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
Date: Thu, 12 Jan 2023 23:14:26 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: "Michael S. Tsirkin" <mst@redhat.com>,
 Bernhard Beschow <shentey@gmail.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, Philippe Mathieu-Daudé <philmd@linaro.org>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230112180314-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 605

On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:
> On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:
>> I think the change Michael suggests is very minimalistic: Move the if
>> condition around xen_igd_reserve_slot() into the function itself and
>> always call it there unconditionally -- basically turning three lines
>> into one. Since xen_igd_reserve_slot() seems very problem specific,
>> Michael further suggests to rename it to something more general. All
>> in all no big changes required.
> 
> yes, exactly.
> 

OK, got it. I can do that along with the other suggestions.

Thanks.
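For the record, the refactor agreed on above can be sketched as follows. The
types and state flags here are illustrative stand-ins for the QEMU/Xen code,
not the real API; the point is only that the precondition check moves inside
the function, so call sites invoke it unconditionally:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins; the real code uses QEMU's PCIBus and Xen state. */
typedef struct PCIBus { bool slot2_reserved; } PCIBus;

static bool xen_mode_attach  = true;  /* running as a Xen guest */
static bool igd_gfx_passthru = true;  /* IGD passthrough requested */

/* After the refactor the function checks its own preconditions and is a
 * no-op otherwise, so callers no longer wrap the call in an if. */
static void xen_igd_reserve_slot(PCIBus *pci_bus)
{
    if (!xen_mode_attach || !igd_gfx_passthru)
        return;
    pci_bus->slot2_reserved = true;  /* reserve slot 2 for the Intel IGD */
}
```

This keeps the problem-specific logic in one place and lets generic PC setup
code call the hook without knowing about the Xen IGD case.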


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:29:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476496.738705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdQ-00067l-7H; Fri, 13 Jan 2023 05:29:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476496.738705; Fri, 13 Jan 2023 05:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdQ-00067e-4F; Fri, 13 Jan 2023 05:29:48 +0000
Received: by outflank-mailman (input) for mailman id 476496;
 Fri, 13 Jan 2023 05:29:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdP-0005sP-37
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:29:47 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 494582fe-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:29:44 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0128613D5;
 Thu, 12 Jan 2023 21:30:26 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 791993F587;
 Thu, 12 Jan 2023 21:29:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 494582fe-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v2 01/40] xen/arm: remove xen_phys_start and xenheap_phys_end from config.h
Date: Fri, 13 Jan 2023 13:28:34 +0800
Message-Id: <20230113052914.3845596-2-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

These two variables are stale: they are only declared in config.h, have
no definition, and no code uses them. So in this patch, we remove them
from config.h.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
v1 -> v2:
1. Add Ab.
---
 xen/arch/arm/include/asm/config.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 0fefed1b8a..25a625ff08 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -172,8 +172,6 @@
 #define STACK_SIZE  (PAGE_SIZE << STACK_ORDER)
 
 #ifndef __ASSEMBLY__
-extern unsigned long xen_phys_start;
-extern unsigned long xenheap_phys_end;
 extern unsigned long frametable_virt_end;
 #endif
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:29:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476495.738693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdO-0005sX-Qf; Fri, 13 Jan 2023 05:29:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476495.738693; Fri, 13 Jan 2023 05:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdO-0005sQ-O2; Fri, 13 Jan 2023 05:29:46 +0000
Received: by outflank-mailman (input) for mailman id 476495;
 Fri, 13 Jan 2023 05:29:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdM-0005sJ-Vj
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:29:45 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 47aff3bf-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:29:42 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 21D57FEC;
 Thu, 12 Jan 2023 21:30:23 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 570693F587;
 Thu, 12 Jan 2023 21:29:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47aff3bf-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 00/41] xen/arm: Add Armv8-R64 MPU support to Xen - Part#1
Date: Fri, 13 Jan 2023 13:28:33 +0800
Message-Id: <20230113052914.3845596-1-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Armv8-R architecture profile was designed to support use cases
that have a high sensitivity to deterministic execution (e.g. fuel
injection, brake control, drive trains, motor control, etc.).

Arm announced Armv8-R in 2013; it is the latest generation Arm
architecture targeted at the Real-time profile. It introduces
virtualization at the highest security level while retaining the
Protected Memory System Architecture (PMSA), based on a Memory
Protection Unit (MPU). In 2020, Arm announced the Cortex-R82,
the first Arm 64-bit Cortex-R processor based on Armv8-R64.
The latest Armv8-R64 documentation can be found at [1]. The
features of the Armv8-R64 architecture are:
  - An exception model that is compatible with the Armv8-A model
  - Virtualization with support for guest operating systems
  - PMSA virtualization using MPUs in EL2
  - Support for the 64-bit A64 instruction set
  - Support for up to 48-bit physical addressing
  - Support for three Exception Levels (ELs):
        - Secure EL2 - the highest privilege
        - Secure EL1 - Rich OS (MMU) or RTOS (MPU)
        - Secure EL0 - application workloads
  - Support for only a single Security state - Secure
  - MPU in EL1 and EL2 is configurable; MMU in EL1 is configurable

This patch series implements Armv8-R64 MPU support
for Xen, based on the discussion in
"Proposal for Porting Xen to Armv8-R64 - DraftC" [2].

We will implement Armv8-R64 and MPU support in three stages:
1. Boot Xen itself to the idle thread, without creating any guests.
2. Support booting MPU and MMU domains on Armv8-R64 Xen.
3. Support SMP and other advanced Xen features on Armv8-R64.

As guest support is not implemented in this part#1 series of MPU
support, Xen cannot create any guests at boot time. So in this
patch series, we provide an extra DNM (do-not-merge) commit at the
end for users to test booting Xen to idle on an MPU system.

We have split these patches into several parts; this series is
part#1. v1 is at [3], and the full PoC can be found at [4]. More
software for Armv8-R64 can be found at [5].

[1] https://developer.arm.com/documentation/ddi0600/latest
[2] https://lists.xenproject.org/archives/html/xen-devel/2022-05/msg00643.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg00289.html
[4] https://gitlab.com/xen-project/people/weic/xen/-/tree/integration/mpu_v2
[5] https://armv8r64-refstack.docs.arm.com/en/v5.0/

Penny Zheng (28):
  xen/mpu: build up start-of-day Xen MPU memory region map
  xen/mpu: introduce helpers for MPU enablement
  xen/mpu: introduce unified function setup_early_uart to map early UART
  xen/arm64: head: Jump to the runtime mapping in enable_mm()
  xen/arm: introduce setup_mm_mappings
  xen/mpu: plump virt/maddr/mfn convertion in MPU system
  xen/mpu: introduce helper access_protection_region
  xen/mpu: populate a new region in Xen MPU mapping table
  xen/mpu: plump early_fdt_map in MPU systems
  xen/arm: move MMU-specific setup_mm to setup_mmu.c
  xen/mpu: implement MPU version of setup_mm in setup_mpu.c
  xen/mpu: initialize frametable in MPU system
  xen/mpu: introduce "mpu,xxx-memory-section"
  xen/mpu: map MPU guest memory section before static memory
    initialization
  xen/mpu: destroy an existing entry in Xen MPU memory mapping table
  xen/mpu: map device memory resource in MPU system
  xen/mpu: map boot module section in MPU system
  xen/mpu: introduce mpu_memory_section_contains for address range check
  xen/mpu: disable VMAP sub-system for MPU systems
  xen/mpu: disable FIXMAP in MPU system
  xen/mpu: implement MPU version of ioremap_xxx
  xen/mpu: free init memory in MPU system
  xen/mpu: destroy boot modules and early FDT mapping in MPU system
  xen/mpu: Use secure hypervisor timer for AArch64v8R
  xen/mpu: move MMU specific P2M code to p2m_mmu.c
  xen/mpu: implement setup_virt_paging for MPU system
  xen/mpu: re-order xen_mpumap in arch_init_finialize
  xen/mpu: add Kconfig option to enable Armv8-R AArch64 support

Wei Chen (13):
  xen/arm: remove xen_phys_start and xenheap_phys_end from config.h
  xen/arm: make ARM_EFI selectable for Arm64
  xen/arm: adjust Xen TLB helpers for Armv8-R64 PMSA
  xen/arm: add an option to define Xen start address for Armv8-R
  xen/arm64: prepare for moving MMU related code from head.S
  xen/arm64: move MMU related code from head.S to head_mmu.S
  xen/arm64: add .text.idmap for Xen identity map sections
  xen/arm: use PA == VA for EARLY_UART_VIRTUAL_ADDRESS on Armv-8R
  xen/arm: decouple copy_from_paddr with FIXMAP
  xen/arm: split MMU and MPU config files from config.h
  xen/arm: move MMU-specific memory management code to mm_mmu.c/mm_mmu.h
  xen/arm: check mapping status and attributes for MPU copy_from_paddr
  xen/mpu: make Xen boot to idle on MPU systems(DNM)

 xen/arch/arm/Kconfig                      |   44 +-
 xen/arch/arm/Makefile                     |   17 +-
 xen/arch/arm/arm64/Makefile               |    5 +
 xen/arch/arm/arm64/head.S                 |  466 +----
 xen/arch/arm/arm64/head_mmu.S             |  399 ++++
 xen/arch/arm/arm64/head_mpu.S             |  394 ++++
 xen/arch/arm/bootfdt.c                    |   13 +-
 xen/arch/arm/domain_build.c               |    4 +
 xen/arch/arm/include/asm/alternative.h    |   15 +
 xen/arch/arm/include/asm/arm64/flushtlb.h |   25 +
 xen/arch/arm/include/asm/arm64/macros.h   |   51 +
 xen/arch/arm/include/asm/arm64/mpu.h      |  174 ++
 xen/arch/arm/include/asm/arm64/sysregs.h  |   77 +
 xen/arch/arm/include/asm/config.h         |  105 +-
 xen/arch/arm/include/asm/config_mmu.h     |  112 +
 xen/arch/arm/include/asm/config_mpu.h     |   25 +
 xen/arch/arm/include/asm/cpregs.h         |    4 +-
 xen/arch/arm/include/asm/cpuerrata.h      |   12 +
 xen/arch/arm/include/asm/cpufeature.h     |    7 +
 xen/arch/arm/include/asm/early_printk.h   |   13 +
 xen/arch/arm/include/asm/fixmap.h         |   28 +-
 xen/arch/arm/include/asm/flushtlb.h       |   22 +
 xen/arch/arm/include/asm/mm.h             |   78 +-
 xen/arch/arm/include/asm/mm_mmu.h         |   77 +
 xen/arch/arm/include/asm/mm_mpu.h         |   54 +
 xen/arch/arm/include/asm/p2m.h            |   27 +-
 xen/arch/arm/include/asm/p2m_mmu.h        |   28 +
 xen/arch/arm/include/asm/processor.h      |   13 +
 xen/arch/arm/include/asm/setup.h          |   39 +
 xen/arch/arm/kernel.c                     |   31 +-
 xen/arch/arm/mm.c                         | 1340 +-----------
 xen/arch/arm/mm_mmu.c                     | 1376 +++++++++++++
 xen/arch/arm/mm_mpu.c                     | 1056 ++++++++++
 xen/arch/arm/p2m.c                        | 2282 +--------------------
 xen/arch/arm/p2m_mmu.c                    | 2257 ++++++++++++++++++++
 xen/arch/arm/p2m_mpu.c                    |  274 +++
 xen/arch/arm/platforms/Kconfig            |   16 +-
 xen/arch/arm/setup.c                      |  394 +---
 xen/arch/arm/setup_mmu.c                  |  391 ++++
 xen/arch/arm/setup_mpu.c                  |  208 ++
 xen/arch/arm/time.c                       |   14 +-
 xen/arch/arm/traps.c                      |    2 +
 xen/arch/arm/xen.lds.S                    |   10 +-
 xen/arch/x86/Kconfig                      |    1 +
 xen/common/Kconfig                        |    6 +
 xen/common/Makefile                       |    2 +-
 xen/include/xen/vmap.h                    |   93 +-
 47 files changed, 7500 insertions(+), 4581 deletions(-)
 create mode 100644 xen/arch/arm/arm64/head_mmu.S
 create mode 100644 xen/arch/arm/arm64/head_mpu.S
 create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h
 create mode 100644 xen/arch/arm/include/asm/config_mmu.h
 create mode 100644 xen/arch/arm/include/asm/config_mpu.h
 create mode 100644 xen/arch/arm/include/asm/mm_mmu.h
 create mode 100644 xen/arch/arm/include/asm/mm_mpu.h
 create mode 100644 xen/arch/arm/include/asm/p2m_mmu.h
 create mode 100644 xen/arch/arm/mm_mmu.c
 create mode 100644 xen/arch/arm/mm_mpu.c
 create mode 100644 xen/arch/arm/p2m_mmu.c
 create mode 100644 xen/arch/arm/p2m_mpu.c
 create mode 100644 xen/arch/arm/setup_mmu.c
 create mode 100644 xen/arch/arm/setup_mpu.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:29:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476497.738716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdS-0006NI-ES; Fri, 13 Jan 2023 05:29:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476497.738716; Fri, 13 Jan 2023 05:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdS-0006N9-Ar; Fri, 13 Jan 2023 05:29:50 +0000
Received: by outflank-mailman (input) for mailman id 476497;
 Fri, 13 Jan 2023 05:29:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdR-0005sP-38
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:29:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4ac93ded-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:29:47 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 90E26FEC;
 Thu, 12 Jan 2023 21:30:28 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 5D0063F587;
 Thu, 12 Jan 2023 21:29:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ac93ded-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 02/40] xen/arm: make ARM_EFI selectable for Arm64
Date: Fri, 13 Jan 2023 13:28:35 +0800
Message-Id: <20230113052914.3845596-3-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Currently, ARM_EFI is unconditionally selected by ARM_64.
Even if users know for sure that their images will not be
started in an EFI environment, they cannot disable EFI
support for Arm64. This means there will be about 3K lines
of unused code in their images.

So in this patch, we make ARM_EFI selectable for Arm64, and
based on that, we can use CONFIG_ARM_EFI to gate the EFI
specific code in head.S for those images that will not be
booted in an EFI environment.
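
For example, a user building an image that will only ever be loaded
directly (never as an EFI application) could turn the new option off;
a hypothetical .config fragment might look like:

```
# Xen .config fragment (illustrative only)
CONFIG_ARM_64=y
# CONFIG_ARM_EFI is not set
```

With ARM_EFI disabled, the EFI stub and the PE/COFF header code in
head.S are compiled out.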

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. New patch
---
 xen/arch/arm/Kconfig      | 10 ++++++++--
 xen/arch/arm/arm64/head.S | 15 +++++++++++++--
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..ace7178c9a 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -7,7 +7,6 @@ config ARM_64
 	def_bool y
 	depends on !ARM_32
 	select 64BIT
-	select ARM_EFI
 	select HAS_FAST_MULTIPLY
 
 config ARM
@@ -37,7 +36,14 @@ config ACPI
 	  an alternative to device tree on ARM64.
 
 config ARM_EFI
-	bool
+	bool "UEFI boot service support"
+	depends on ARM_64
+	default y
+	help
+	  This option provides support for boot services through
+	  UEFI firmware. A UEFI stub is provided to allow Xen to
+	  be booted as an EFI application. This is only useful for
+	  Xen that may run on systems that have UEFI firmware.
 
 config GICV3
 	bool "GICv3 driver"
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index ad014716db..93f9b0b9d5 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -22,8 +22,11 @@
 
 #include <asm/page.h>
 #include <asm/early_printk.h>
+
+#ifdef CONFIG_ARM_EFI
 #include <efi/efierr.h>
 #include <asm/arm64/efibind.h>
+#endif
 
 #define PT_PT     0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
 #define PT_MEM    0xf7d /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=0 P=1 */
@@ -172,8 +175,10 @@ efi_head:
         .byte   0x52
         .byte   0x4d
         .byte   0x64
-        .long   pe_header - efi_head        /* Offset to the PE header. */
-
+#ifndef CONFIG_ARM_EFI
+        .long   0                    /* 0 means no PE header. */
+#else
+        .long   pe_header - efi_head /* Offset to the PE header. */
         /*
          * Add the PE/COFF header to the file.  The address of this header
          * is at offset 0x3c in the file, and is part of Linux "Image"
@@ -279,6 +284,8 @@ section_table:
         .short  0                /* NumberOfLineNumbers  (0 for executables) */
         .long   0xe0500020       /* Characteristics (section flags) */
         .align  5
+#endif /* CONFIG_ARM_EFI */
+
 real_start:
         /* BSS should be zeroed when booting without EFI */
         mov   x26, #0                /* x26 := skip_zero_bss */
@@ -913,6 +920,8 @@ putn:   ret
 ENTRY(lookup_processor_type)
         mov  x0, #0
         ret
+
+#ifdef CONFIG_ARM_EFI
 /*
  *  Function to transition from EFI loader in C, to Xen entry point.
  *  void noreturn efi_xen_start(void *fdt_ptr, uint32_t fdt_size);
@@ -971,6 +980,8 @@ ENTRY(efi_xen_start)
         b     real_start_efi
 ENDPROC(efi_xen_start)
 
+#endif /* CONFIG_ARM_EFI */
+
 /*
  * Local variables:
  * mode: ASM
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:29:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476498.738727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdV-0006fl-Nn; Fri, 13 Jan 2023 05:29:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476498.738727; Fri, 13 Jan 2023 05:29:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdV-0006fc-J4; Fri, 13 Jan 2023 05:29:53 +0000
Received: by outflank-mailman (input) for mailman id 476498;
 Fri, 13 Jan 2023 05:29:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdT-0005sP-Sg
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:29:51 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4c66a87e-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:29:49 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2C4AD13D5;
 Thu, 12 Jan 2023 21:30:31 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id EC0E93F587;
 Thu, 12 Jan 2023 21:29:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c66a87e-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 03/40] xen/arm: adjust Xen TLB helpers for Armv8-R64 PMSA
Date: Fri, 13 Jan 2023 13:28:36 +0800
Message-Id: <20230113052914.3845596-4-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

From the Arm ARM Supplement for Armv8-R AArch64 (DDI 0600A) [1],
section D1.6.2 "TLB maintenance instructions", we know that
Armv8-R AArch64 permits an implementation to cache stage 1
VMSAv8-64 and stage 2 PMSAv8-64 attributes as a common entry
for the Secure EL1&0 translation regime. But Xen itself runs
with stage 1 PMSAv8-64 on Armv8-R AArch64, and EL2 MPU updates
for stage 1 PMSAv8-64 are not cached in TLB entries. So we
don't need any TLB invalidation for Xen itself in EL2.

So in this patch, we stub the Xen TLB helpers with empty static
inline functions on MPU systems (PMSA), but still keep the guest
TLB helpers, because when a guest runs in EL1 with VMSAv8-64
(MMU), guest TLB invalidation is still needed. We still need a
policy to distinguish MPU and MMU guests; this will be added
later with guest support for Armv8-R AArch64.

[1] https://developer.arm.com/documentation/ddi0600/ac

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. No change.
---
 xen/arch/arm/include/asm/arm64/flushtlb.h | 25 +++++++++++++++++++++++
 xen/arch/arm/include/asm/flushtlb.h       | 22 ++++++++++++++++++++
 2 files changed, 47 insertions(+)

diff --git a/xen/arch/arm/include/asm/arm64/flushtlb.h b/xen/arch/arm/include/asm/arm64/flushtlb.h
index 7c54315187..fe445f6831 100644
--- a/xen/arch/arm/include/asm/arm64/flushtlb.h
+++ b/xen/arch/arm/include/asm/arm64/flushtlb.h
@@ -51,6 +51,8 @@ TLB_HELPER(flush_all_guests_tlb_local, alle1);
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
 TLB_HELPER(flush_all_guests_tlb, alle1is);
 
+#ifndef CONFIG_HAS_MPU
+
 /* Flush all hypervisor mappings from the TLB of the local processor. */
 TLB_HELPER(flush_xen_tlb_local, alle2);
 
@@ -66,6 +68,29 @@ static inline void __flush_xen_tlb_one(vaddr_t va)
     asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
 }
 
+#else
+
+/*
+ * When Xen is running with stage 1 PMSAv8-64 on MPU systems. The EL2 MPU
+ * updates for stage1 PMSAv8-64 will not be cached in TLB entries. So we
+ * don't need any TLB invalidation for Xen itself in EL2. See Arm ARM
+ * Supplement of Armv8-R AArch64 (DDI 0600A), section D1.6.2 TLB maintenance
+ * instructions for more details.
+ */
+static inline void flush_xen_tlb_local(void)
+{
+}
+
+static inline void  __flush_xen_tlb_one_local(vaddr_t va)
+{
+}
+
+static inline void __flush_xen_tlb_one(vaddr_t va)
+{
+}
+
+#endif /* CONFIG_HAS_MPU */
+
 #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/arm/include/asm/flushtlb.h b/xen/arch/arm/include/asm/flushtlb.h
index 125a141975..4b8bf65281 100644
--- a/xen/arch/arm/include/asm/flushtlb.h
+++ b/xen/arch/arm/include/asm/flushtlb.h
@@ -28,6 +28,7 @@ static inline void page_set_tlbflush_timestamp(struct page_info *page)
 /* Flush specified CPUs' TLBs */
 void arch_flush_tlb_mask(const cpumask_t *mask);
 
+#ifndef CONFIG_HAS_MPU
 /*
  * Flush a range of VA's hypervisor mappings from the TLB of the local
  * processor.
@@ -66,6 +67,27 @@ static inline void flush_xen_tlb_range_va(vaddr_t va,
     isb();
 }
 
+#else
+
+/*
+ * When Xen is running with stage 1 PMSAv8-64 on MPU systems. The EL2 MPU
+ * updates for stage1 PMSAv8-64 will not be cached in TLB entries. So we
+ * don't need any TLB invalidation for Xen itself in EL2. See Arm ARM
+ * Supplement of Armv8-R AArch64 (DDI 0600A), section D1.6.2 TLB maintenance
+ * instructions for more details.
+ */
+static inline void flush_xen_tlb_range_va_local(vaddr_t va,
+                                                unsigned long size)
+{
+}
+
+static inline void flush_xen_tlb_range_va(vaddr_t va,
+                                          unsigned long size)
+{
+}
+
+#endif /* CONFIG_HAS_MPU */
+
 #endif /* __ASM_ARM_FLUSHTLB_H__ */
 /*
  * Local variables:
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:29:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:29:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476499.738738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdZ-0006zY-1g; Fri, 13 Jan 2023 05:29:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476499.738738; Fri, 13 Jan 2023 05:29:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdY-0006zP-Sq; Fri, 13 Jan 2023 05:29:56 +0000
Received: by outflank-mailman (input) for mailman id 476499;
 Fri, 13 Jan 2023 05:29:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdW-0005sP-M0
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:29:54 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4e16b03c-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:29:52 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 07767FEC;
 Thu, 12 Jan 2023 21:30:34 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 884413F587;
 Thu, 12 Jan 2023 21:29:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e16b03c-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"Jiamei . Xie" <jiamei.xie@arm.com>
Subject: [PATCH v2 04/40] xen/arm: add an option to define Xen start address for Armv8-R
Date: Fri, 13 Jan 2023 13:28:37 +0800
Message-Id: <20230113052914.3845596-5-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

On Armv8-A, Xen has a fixed virtual start address (which is also
its link address) for all Armv8-A platforms. On an MMU-based
system, Xen can map its load address to this virtual start
address, so on Armv8-A platforms the Xen start address does not
need to be configurable. But on Armv8-R platforms, there is no
MMU to map the load address to a fixed virtual address, and
different platforms have very different address space layouts.
So Xen cannot use a fixed physical address on MPU-based systems,
and the start address needs to be configurable.

In this patch we introduce a Kconfig option for users to define
the default Xen start address for Armv8-R. Users can enter the
address at configuration time, or select a tailored platform
config file from arch/arm/configs.

And as we introduce Armv8-R platforms to Xen, the existing Arm64
platforms should not be listed in the Armv8-R platform list, so
we add a !ARM_V8R dependency to these platforms.
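
For example, to build Xen for an Armv8-R platform whose RAM starts at
some platform-specific base, a user could set the option in their
.config; the address below is a made-up example, not a value for any
real board:

```
# Xen .config fragment (address is illustrative only)
CONFIG_ARM_V8R=y
CONFIG_XEN_START_ADDRESS=0x200000
```

The chosen value must be page-aligned, as noted in the Kconfig help
text.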

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Jiamei.Xie <jiamei.xie@arm.com>
---
v1 -> v2:
1. Remove the platform header fvp_baser.h.
2. Remove the default start address for fvp_baser64.
3. Remove the description of default address from commit log.
4. Change HAS_MPU to ARM_V8R for the Xen start address dependency.
   Whether or not an Armv8-R board has an MPU, it always needs to
   specify the start address.
---
 xen/arch/arm/Kconfig           |  8 ++++++++
 xen/arch/arm/platforms/Kconfig | 16 +++++++++++++---
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index ace7178c9a..c6b6b612d1 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -145,6 +145,14 @@ config TEE
 	  This option enables generic TEE mediators support. It allows guests
 	  to access real TEE via one of TEE mediators implemented in XEN.
 
+config XEN_START_ADDRESS
+	hex "Xen start address: keep default to use platform defined address"
+	default 0
+	depends on ARM_V8R
+	help
+	  This option allows to set the customized address at which Xen will be
+	  linked on MPU systems. This address must be aligned to a page size.
+
 source "arch/arm/tee/Kconfig"
 
 config STATIC_SHM
diff --git a/xen/arch/arm/platforms/Kconfig b/xen/arch/arm/platforms/Kconfig
index c93a6b2756..0904793a0b 100644
--- a/xen/arch/arm/platforms/Kconfig
+++ b/xen/arch/arm/platforms/Kconfig
@@ -1,6 +1,7 @@
 choice
 	prompt "Platform Support"
 	default ALL_PLAT
+	default FVP_BASER if ARM_V8R
 	---help---
 	Choose which hardware platform to enable in Xen.
 
@@ -8,13 +9,14 @@ choice
 
 config ALL_PLAT
 	bool "All Platforms"
+	depends on !ARM_V8R
 	---help---
 	Enable support for all available hardware platforms. It doesn't
 	automatically select any of the related drivers.
 
 config QEMU
 	bool "QEMU aarch virt machine support"
-	depends on ARM_64
+	depends on ARM_64 && !ARM_V8R
 	select GICV3
 	select HAS_PL011
 	---help---
@@ -23,7 +25,7 @@ config QEMU
 
 config RCAR3
 	bool "Renesas RCar3 support"
-	depends on ARM_64
+	depends on ARM_64 && !ARM_V8R
 	select HAS_SCIF
 	select IPMMU_VMSA
 	---help---
@@ -31,14 +33,22 @@ config RCAR3
 
 config MPSOC
 	bool "Xilinx Ultrascale+ MPSoC support"
-	depends on ARM_64
+	depends on ARM_64 && !ARM_V8R
 	select HAS_CADENCE_UART
 	select ARM_SMMU
 	---help---
 	Enable all the required drivers for Xilinx Ultrascale+ MPSoC
 
+config FVP_BASER
+	bool "Fixed Virtual Platform BaseR support"
+	depends on ARM_V8R
+	help
+	  Enable platform specific configurations for Fixed Virtual
+	  Platform BaseR
+
 config NO_PLAT
 	bool "No Platforms"
+	depends on !ARM_V8R
 	---help---
 	Do not enable specific support for any platform.
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:29:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:29:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476500.738743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdZ-00073w-E4; Fri, 13 Jan 2023 05:29:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476500.738743; Fri, 13 Jan 2023 05:29:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdZ-00073n-8b; Fri, 13 Jan 2023 05:29:57 +0000
Received: by outflank-mailman (input) for mailman id 476500;
 Fri, 13 Jan 2023 05:29:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdY-0005sJ-3x
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:29:56 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 4fa41a76-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:29:55 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 97960FEC;
 Thu, 12 Jan 2023 21:30:36 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 631913F587;
 Thu, 12 Jan 2023 21:29:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fa41a76-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related code from head.S
Date: Fri, 13 Jan 2023 13:28:38 +0800
Message-Id: <20230113052914.3845596-6-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

We want to reuse head.S for MPU systems, but some of its code is
implemented for MMU systems only. We will move such code into a
separate MMU-specific file. Before doing that, this patch does some
preparation to make the follow-up changes easier to review:
1. Fix the indentation of code comments.
2. Export some symbols that will be accessed outside of file
   scope.
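
Point 2 is done with the assembler-level ENTRY() helper. As a rough
illustration only (the real definition lives in Xen's asm headers and
may differ in detail), such a macro conventionally expands to a global,
aligned label:

```
/* Illustrative sketch of a conventional ENTRY()-style GAS macro.
 * Not Xen's actual definition; shown only to explain what "export
 * a symbol beyond file scope" means at the assembler level. */
.macro FUNC_ENTRY name
        .globl \name          /* make the symbol visible to other files */
        .align 4              /* keep entry points aligned */
\name:
.endm
```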

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. New patch.
---
 xen/arch/arm/arm64/head.S | 40 +++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 93f9b0b9d5..b2214bc5e3 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -136,22 +136,22 @@
         add \xb, \xb, x20
 .endm
 
-        .section .text.header, "ax", %progbits
-        /*.aarch64*/
+.section .text.header, "ax", %progbits
+/*.aarch64*/
 
-        /*
-         * Kernel startup entry point.
-         * ---------------------------
-         *
-         * The requirements are:
-         *   MMU = off, D-cache = off, I-cache = on or off,
-         *   x0 = physical address to the FDT blob.
-         *
-         * This must be the very first address in the loaded image.
-         * It should be linked at XEN_VIRT_START, and loaded at any
-         * 4K-aligned address.  All of text+data+bss must fit in 2MB,
-         * or the initial pagetable code below will need adjustment.
-         */
+/*
+ * Kernel startup entry point.
+ * ---------------------------
+ *
+ * The requirements are:
+ *   MMU = off, D-cache = off, I-cache = on or off,
+ *   x0 = physical address to the FDT blob.
+ *
+ * This must be the very first address in the loaded image.
+ * It should be linked at XEN_VIRT_START, and loaded at any
+ * 4K-aligned address.  All of text+data+bss must fit in 2MB,
+ * or the initial pagetable code below will need adjustment.
+ */
 
 GLOBAL(start)
         /*
@@ -586,7 +586,7 @@ ENDPROC(cpu_init)
  *
  * Clobbers x0 - x4
  */
-create_page_tables:
+ENTRY(create_page_tables)
         /* Prepare the page-tables for mapping Xen */
         ldr   x0, =XEN_VIRT_START
         create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
@@ -680,7 +680,7 @@ ENDPROC(create_page_tables)
  *
  * Clobbers x0 - x3
  */
-enable_mmu:
+ENTRY(enable_mmu)
         PRINT("- Turning on paging -\r\n")
 
         /*
@@ -714,7 +714,7 @@ ENDPROC(enable_mmu)
  *
  * Clobbers x0 - x1
  */
-remove_identity_mapping:
+ENTRY(remove_identity_mapping)
         /*
          * Find the zeroeth slot used. Remove the entry from zeroeth
          * table if the slot is not XEN_ZEROETH_SLOT.
@@ -775,7 +775,7 @@ ENDPROC(remove_identity_mapping)
  *
  * Clobbers x0 - x3
  */
-setup_fixmap:
+ENTRY(setup_fixmap)
 #ifdef CONFIG_EARLY_PRINTK
         /* Add UART to the fixmap table */
         ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
@@ -871,7 +871,7 @@ ENDPROC(init_uart)
  * x0: Nul-terminated string to print.
  * x23: Early UART base address
  * Clobbers x0-x1 */
-puts:
+ENTRY(puts)
         early_uart_ready x23, 1
         ldrb  w1, [x0], #1           /* Load next char */
         cbz   w1, 1f                 /* Exit on nul */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:30:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:30:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476505.738759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCde-0007sg-NM; Fri, 13 Jan 2023 05:30:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476505.738759; Fri, 13 Jan 2023 05:30:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCde-0007s4-Im; Fri, 13 Jan 2023 05:30:02 +0000
Received: by outflank-mailman (input) for mailman id 476505;
 Fri, 13 Jan 2023 05:30:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdd-0005sJ-FF
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:01 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 517a1b38-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:29:58 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B35B1FEC;
 Thu, 12 Jan 2023 21:30:39 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id F37C33F587;
 Thu, 12 Jan 2023 21:29:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 517a1b38-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v2 06/40] xen/arm64: move MMU related code from head.S to head_mmu.S
Date: Fri, 13 Jan 2023 13:28:39 +0800
Message-Id: <20230113052914.3845596-7-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

head.S contains a lot of MMU-specific code that will not be used on
MPU systems. Gating it with #ifdef would make the code messy and hard
to maintain, so we move the MMU-related code to head_mmu.S and keep
the common code in head.S.

Some assembly macros will later be shared between the MMU and MPU
code, so we move them to macros.h.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v1 -> v2:
1. Move macros to macros.h
2. Remove the indentation modification
3. Duplicate "fail" instead of exporting it.
---
 xen/arch/arm/arm64/Makefile             |   3 +
 xen/arch/arm/arm64/head.S               | 383 ------------------------
 xen/arch/arm/arm64/head_mmu.S           | 372 +++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/macros.h |  51 ++++
 4 files changed, 426 insertions(+), 383 deletions(-)
 create mode 100644 xen/arch/arm/arm64/head_mmu.S

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 6d507da0d4..22da2f54b5 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -8,6 +8,9 @@ obj-y += domctl.o
 obj-y += domain.o
 obj-y += entry.o
 obj-y += head.o
+ifneq ($(CONFIG_HAS_MPU),y)
+obj-y += head_mmu.o
+endif
 obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += smc.o
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index b2214bc5e3..5cfa47279b 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -28,17 +28,6 @@
 #include <asm/arm64/efibind.h>
 #endif
 
-#define PT_PT     0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
-#define PT_MEM    0xf7d /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=0 P=1 */
-#define PT_MEM_L3 0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
-#define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
-#define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
-
-/* Convenience defines to get slot used by Xen mapping. */
-#define XEN_ZEROETH_SLOT    zeroeth_table_offset(XEN_VIRT_START)
-#define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
-#define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
-
 #define __HEAD_FLAG_PAGE_SIZE   ((PAGE_SHIFT - 10) / 2)
 
 #define __HEAD_FLAG_PHYS_BASE   1
@@ -85,57 +74,6 @@
  *  x30 - lr
  */
 
-#ifdef CONFIG_EARLY_PRINTK
-/*
- * Macro to print a string to the UART, if there is one.
- *
- * Clobbers x0 - x3
- */
-#define PRINT(_s)          \
-        mov   x3, lr ;     \
-        adr   x0, 98f ;    \
-        bl    puts    ;    \
-        mov   lr, x3 ;     \
-        RODATA_STR(98, _s)
-
-/*
- * Macro to print the value of register \xb
- *
- * Clobbers x0 - x4
- */
-.macro print_reg xb
-        mov   x0, \xb
-        mov   x4, lr
-        bl    putn
-        mov   lr, x4
-.endm
-
-#else /* CONFIG_EARLY_PRINTK */
-#define PRINT(s)
-
-.macro print_reg xb
-.endm
-
-#endif /* !CONFIG_EARLY_PRINTK */
-
-/*
- * Pseudo-op for PC relative adr <reg>, <symbol> where <symbol> is
- * within the range +/- 4GB of the PC.
- *
- * @dst: destination register (64 bit wide)
- * @sym: name of the symbol
- */
-.macro  adr_l, dst, sym
-        adrp \dst, \sym
-        add  \dst, \dst, :lo12:\sym
-.endm
-
-/* Load the physical address of a symbol into xb */
-.macro load_paddr xb, sym
-        ldr \xb, =\sym
-        add \xb, \xb, x20
-.endm
-
 .section .text.header, "ax", %progbits
 /*.aarch64*/
 
@@ -500,296 +438,6 @@ cpu_init:
         ret
 ENDPROC(cpu_init)
 
-/*
- * Macro to find the slot number at a given page-table level
- *
- * slot:     slot computed
- * virt:     virtual address
- * lvl:      page-table level
- */
-.macro get_table_slot, slot, virt, lvl
-        ubfx  \slot, \virt, #XEN_PT_LEVEL_SHIFT(\lvl), #XEN_PT_LPAE_SHIFT
-.endm
-
-/*
- * Macro to create a page table entry in \ptbl to \tbl
- *
- * ptbl:    table symbol where the entry will be created
- * tbl:     table symbol to point to
- * virt:    virtual address
- * lvl:     page-table level
- * tmp1:    scratch register
- * tmp2:    scratch register
- * tmp3:    scratch register
- *
- * Preserves \virt
- * Clobbers \tmp1, \tmp2, \tmp3
- *
- * Also use x20 for the phys offset.
- *
- * Note that all parameters using registers should be distinct.
- */
-.macro create_table_entry, ptbl, tbl, virt, lvl, tmp1, tmp2, tmp3
-        get_table_slot \tmp1, \virt, \lvl   /* \tmp1 := slot in \tlb */
-
-        load_paddr \tmp2, \tbl
-        mov   \tmp3, #PT_PT                 /* \tmp3 := right for linear PT */
-        orr   \tmp3, \tmp3, \tmp2           /*          + \tlb paddr */
-
-        adr_l \tmp2, \ptbl
-
-        str   \tmp3, [\tmp2, \tmp1, lsl #3]
-.endm
-
-/*
- * Macro to create a mapping entry in \tbl to \phys. Only mapping in 3rd
- * level table (i.e page granularity) is supported.
- *
- * ptbl:     table symbol where the entry will be created
- * virt:    virtual address
- * phys:    physical address (should be page aligned)
- * tmp1:    scratch register
- * tmp2:    scratch register
- * tmp3:    scratch register
- * type:    mapping type. If not specified it will be normal memory (PT_MEM_L3)
- *
- * Preserves \virt, \phys
- * Clobbers \tmp1, \tmp2, \tmp3
- *
- * Note that all parameters using registers should be distinct.
- */
-.macro create_mapping_entry, ptbl, virt, phys, tmp1, tmp2, tmp3, type=PT_MEM_L3
-        and   \tmp3, \phys, #THIRD_MASK     /* \tmp3 := PAGE_ALIGNED(phys) */
-
-        get_table_slot \tmp1, \virt, 3      /* \tmp1 := slot in \tlb */
-
-        mov   \tmp2, #\type                 /* \tmp2 := right for section PT */
-        orr   \tmp2, \tmp2, \tmp3           /*          + PAGE_ALIGNED(phys) */
-
-        adr_l \tmp3, \ptbl
-
-        str   \tmp2, [\tmp3, \tmp1, lsl #3]
-.endm
-
-/*
- * Rebuild the boot pagetable's first-level entries. The structure
- * is described in mm.c.
- *
- * After the CPU enables paging it will add the fixmap mapping
- * to these page tables, however this may clash with the 1:1
- * mapping. So each CPU must rebuild the page tables here with
- * the 1:1 in place.
- *
- * Inputs:
- *   x19: paddr(start)
- *   x20: phys offset
- *
- * Clobbers x0 - x4
- */
-ENTRY(create_page_tables)
-        /* Prepare the page-tables for mapping Xen */
-        ldr   x0, =XEN_VIRT_START
-        create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
-        create_table_entry boot_first, boot_second, x0, 1, x1, x2, x3
-        create_table_entry boot_second, boot_third, x0, 2, x1, x2, x3
-
-        /* Map Xen */
-        adr_l x4, boot_third
-
-        lsr   x2, x19, #THIRD_SHIFT  /* Base address for 4K mapping */
-        lsl   x2, x2, #THIRD_SHIFT
-        mov   x3, #PT_MEM_L3         /* x2 := Section map */
-        orr   x2, x2, x3
-
-        /* ... map of vaddr(start) in boot_third */
-        mov   x1, xzr
-1:      str   x2, [x4, x1]           /* Map vaddr(start) */
-        add   x2, x2, #PAGE_SIZE     /* Next page */
-        add   x1, x1, #8             /* Next slot */
-        cmp   x1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512 entries per page */
-        b.lt  1b
-
-        /*
-         * If Xen is loaded at exactly XEN_VIRT_START then we don't
-         * need an additional 1:1 mapping, the virtual mapping will
-         * suffice.
-         */
-        cmp   x19, #XEN_VIRT_START
-        bne   1f
-        ret
-1:
-        /*
-         * Setup the 1:1 mapping so we can turn the MMU on. Note that
-         * only the first page of Xen will be part of the 1:1 mapping.
-         */
-
-        /*
-         * Find the zeroeth slot used. If the slot is not
-         * XEN_ZEROETH_SLOT, then the 1:1 mapping will use its own set of
-         * page-tables from the first level.
-         */
-        get_table_slot x0, x19, 0       /* x0 := zeroeth slot */
-        cmp   x0, #XEN_ZEROETH_SLOT
-        beq   1f
-        create_table_entry boot_pgtable, boot_first_id, x19, 0, x0, x1, x2
-        b     link_from_first_id
-
-1:
-        /*
-         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
-         * then the 1:1 mapping will use its own set of page-tables from
-         * the second level.
-         */
-        get_table_slot x0, x19, 1      /* x0 := first slot */
-        cmp   x0, #XEN_FIRST_SLOT
-        beq   1f
-        create_table_entry boot_first, boot_second_id, x19, 1, x0, x1, x2
-        b     link_from_second_id
-
-1:
-        /*
-         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
-         * 1:1 mapping will use its own set of page-tables from the
-         * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to handle
-         * it.
-         */
-        get_table_slot x0, x19, 2     /* x0 := second slot */
-        cmp   x0, #XEN_SECOND_SLOT
-        beq   virtphys_clash
-        create_table_entry boot_second, boot_third_id, x19, 2, x0, x1, x2
-        b     link_from_third_id
-
-link_from_first_id:
-        create_table_entry boot_first_id, boot_second_id, x19, 1, x0, x1, x2
-link_from_second_id:
-        create_table_entry boot_second_id, boot_third_id, x19, 2, x0, x1, x2
-link_from_third_id:
-        create_mapping_entry boot_third_id, x19, x19, x0, x1, x2
-        ret
-
-virtphys_clash:
-        /* Identity map clashes with boot_third, which we cannot handle yet */
-        PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
-        b     fail
-ENDPROC(create_page_tables)
-
-/*
- * Turn on the Data Cache and the MMU. The function will return on the 1:1
- * mapping. In other word, the caller is responsible to switch to the runtime
- * mapping.
- *
- * Clobbers x0 - x3
- */
-ENTRY(enable_mmu)
-        PRINT("- Turning on paging -\r\n")
-
-        /*
-         * The state of the TLBs is unknown before turning on the MMU.
-         * Flush them to avoid stale one.
-         */
-        tlbi  alle2                  /* Flush hypervisor TLBs */
-        dsb   nsh
-
-        /* Write Xen's PT's paddr into TTBR0_EL2 */
-        load_paddr x0, boot_pgtable
-        msr   TTBR0_EL2, x0
-        isb
-
-        mrs   x0, SCTLR_EL2
-        orr   x0, x0, #SCTLR_Axx_ELx_M  /* Enable MMU */
-        orr   x0, x0, #SCTLR_Axx_ELx_C  /* Enable D-cache */
-        dsb   sy                     /* Flush PTE writes and finish reads */
-        msr   SCTLR_EL2, x0          /* now paging is enabled */
-        isb                          /* Now, flush the icache */
-        ret
-ENDPROC(enable_mmu)
-
-/*
- * Remove the 1:1 map from the page-tables. It is not easy to keep track
- * where the 1:1 map was mapped, so we will look for the top-level entry
- * exclusive to the 1:1 map and remove it.
- *
- * Inputs:
- *   x19: paddr(start)
- *
- * Clobbers x0 - x1
- */
-ENTRY(remove_identity_mapping)
-        /*
-         * Find the zeroeth slot used. Remove the entry from zeroeth
-         * table if the slot is not XEN_ZEROETH_SLOT.
-         */
-        get_table_slot x1, x19, 0       /* x1 := zeroeth slot */
-        cmp   x1, #XEN_ZEROETH_SLOT
-        beq   1f
-        /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */
-        ldr   x0, =boot_pgtable         /* x0 := root table */
-        str   xzr, [x0, x1, lsl #3]
-        b     identity_mapping_removed
-
-1:
-        /*
-         * Find the first slot used. Remove the entry for the first
-         * table if the slot is not XEN_FIRST_SLOT.
-         */
-        get_table_slot x1, x19, 1       /* x1 := first slot */
-        cmp   x1, #XEN_FIRST_SLOT
-        beq   1f
-        /* It is not in slot XEN_FIRST_SLOT, remove the entry. */
-        ldr   x0, =boot_first           /* x0 := first table */
-        str   xzr, [x0, x1, lsl #3]
-        b     identity_mapping_removed
-
-1:
-        /*
-         * Find the second slot used. Remove the entry for the first
-         * table if the slot is not XEN_SECOND_SLOT.
-         */
-        get_table_slot x1, x19, 2       /* x1 := second slot */
-        cmp   x1, #XEN_SECOND_SLOT
-        beq   identity_mapping_removed
-        /* It is not in slot 1, remove the entry */
-        ldr   x0, =boot_second          /* x0 := second table */
-        str   xzr, [x0, x1, lsl #3]
-
-identity_mapping_removed:
-        /* See asm/arm64/flushtlb.h for the explanation of the sequence. */
-        dsb   nshst
-        tlbi  alle2
-        dsb   nsh
-        isb
-
-        ret
-ENDPROC(remove_identity_mapping)
-
-/*
- * Map the UART in the fixmap (when earlyprintk is used) and hook the
- * fixmap table in the page tables.
- *
- * The fixmap cannot be mapped in create_page_tables because it may
- * clash with the 1:1 mapping.
- *
- * Inputs:
- *   x20: Physical offset
- *   x23: Early UART base physical address
- *
- * Clobbers x0 - x3
- */
-ENTRY(setup_fixmap)
-#ifdef CONFIG_EARLY_PRINTK
-        /* Add UART to the fixmap table */
-        ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
-        create_mapping_entry xen_fixmap, x0, x23, x1, x2, x3, type=PT_DEV_L3
-#endif
-        /* Map fixmap into boot_second */
-        ldr   x0, =FIXMAP_ADDR(0)
-        create_table_entry boot_second, xen_fixmap, x0, 2, x1, x2, x3
-        /* Ensure any page table updates made above have occurred. */
-        dsb   nshst
-
-        ret
-ENDPROC(setup_fixmap)
-
 /*
  * Setup the initial stack and jump to the C world
  *
@@ -818,37 +466,6 @@ fail:   PRINT("- Boot failed -\r\n")
         b     1b
 ENDPROC(fail)
 
-GLOBAL(_end_boot)
-
-/*
- * Switch TTBR
- *
- * x0    ttbr
- *
- * TODO: This code does not comply with break-before-make.
- */
-ENTRY(switch_ttbr)
-        dsb   sy                     /* Ensure the flushes happen before
-                                      * continuing */
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
-        isb
-
-        msr    TTBR0_EL2, x0
-
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
-        isb
-
-        ret
-ENDPROC(switch_ttbr)
-
 #ifdef CONFIG_EARLY_PRINTK
 /*
  * Initialize the UART. Should only be called on the boot CPU.
diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
new file mode 100644
index 0000000000..e2c8f07140
--- /dev/null
+++ b/xen/arch/arm/arm64/head_mmu.S
@@ -0,0 +1,372 @@
+/*
+ * xen/arch/arm/head_mmu.S
+ *
+ * Start-of-day code for an ARMv8-A.
+ *
+ * Ian Campbell <ian.campbell@citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * Based on ARMv7-A head.S by
+ * Tim Deegan <tim@xen.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/page.h>
+#include <asm/early_printk.h>
+
+#define PT_PT     0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
+#define PT_MEM    0xf7d /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=0 P=1 */
+#define PT_MEM_L3 0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
+#define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
+#define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
+
+/* Convenience defines to get slot used by Xen mapping. */
+#define XEN_ZEROETH_SLOT    zeroeth_table_offset(XEN_VIRT_START)
+#define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
+#define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
+
+/*
+ * Macro to find the slot number at a given page-table level
+ *
+ * slot:     slot computed
+ * virt:     virtual address
+ * lvl:      page-table level
+ */
+.macro get_table_slot, slot, virt, lvl
+        ubfx  \slot, \virt, #XEN_PT_LEVEL_SHIFT(\lvl), #XEN_PT_LPAE_SHIFT
+.endm
+
+/*
+ * Macro to create a page table entry in \ptbl to \tbl
+ *
+ * ptbl:    table symbol where the entry will be created
+ * tbl:     table symbol to point to
+ * virt:    virtual address
+ * lvl:     page-table level
+ * tmp1:    scratch register
+ * tmp2:    scratch register
+ * tmp3:    scratch register
+ *
+ * Preserves \virt
+ * Clobbers \tmp1, \tmp2, \tmp3
+ *
+ * Also use x20 for the phys offset.
+ *
+ * Note that all parameters using registers should be distinct.
+ */
+.macro create_table_entry, ptbl, tbl, virt, lvl, tmp1, tmp2, tmp3
+        get_table_slot \tmp1, \virt, \lvl   /* \tmp1 := slot in \tlb */
+
+        load_paddr \tmp2, \tbl
+        mov   \tmp3, #PT_PT                 /* \tmp3 := right for linear PT */
+        orr   \tmp3, \tmp3, \tmp2           /*          + \tlb paddr */
+
+        adr_l \tmp2, \ptbl
+
+        str   \tmp3, [\tmp2, \tmp1, lsl #3]
+.endm
+
+/*
+ * Macro to create a mapping entry in \tbl to \phys. Only mapping in 3rd
+ * level table (i.e page granularity) is supported.
+ *
+ * ptbl:     table symbol where the entry will be created
+ * virt:    virtual address
+ * phys:    physical address (should be page aligned)
+ * tmp1:    scratch register
+ * tmp2:    scratch register
+ * tmp3:    scratch register
+ * type:    mapping type. If not specified it will be normal memory (PT_MEM_L3)
+ *
+ * Preserves \virt, \phys
+ * Clobbers \tmp1, \tmp2, \tmp3
+ *
+ * Note that all parameters using registers should be distinct.
+ */
+.macro create_mapping_entry, ptbl, virt, phys, tmp1, tmp2, tmp3, type=PT_MEM_L3
+        and   \tmp3, \phys, #THIRD_MASK     /* \tmp3 := PAGE_ALIGNED(phys) */
+
+        get_table_slot \tmp1, \virt, 3      /* \tmp1 := slot in \tlb */
+
+        mov   \tmp2, #\type                 /* \tmp2 := right for section PT */
+        orr   \tmp2, \tmp2, \tmp3           /*          + PAGE_ALIGNED(phys) */
+
+        adr_l \tmp3, \ptbl
+
+        str   \tmp2, [\tmp3, \tmp1, lsl #3]
+.endm
+
+.section .text.header, "ax", %progbits
+/*.aarch64*/
+
+/*
+ * Rebuild the boot pagetable's first-level entries. The structure
+ * is described in mm.c.
+ *
+ * After the CPU enables paging it will add the fixmap mapping
+ * to these page tables, however this may clash with the 1:1
+ * mapping. So each CPU must rebuild the page tables here with
+ * the 1:1 in place.
+ *
+ * Inputs:
+ *   x19: paddr(start)
+ *   x20: phys offset
+ *
+ * Clobbers x0 - x4
+ */
+ENTRY(create_page_tables)
+        /* Prepare the page-tables for mapping Xen */
+        ldr   x0, =XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
+        create_table_entry boot_first, boot_second, x0, 1, x1, x2, x3
+        create_table_entry boot_second, boot_third, x0, 2, x1, x2, x3
+
+        /* Map Xen */
+        adr_l x4, boot_third
+
+        lsr   x2, x19, #THIRD_SHIFT  /* Base address for 4K mapping */
+        lsl   x2, x2, #THIRD_SHIFT
+        mov   x3, #PT_MEM_L3         /* x2 := Section map */
+        orr   x2, x2, x3
+
+        /* ... map of vaddr(start) in boot_third */
+        mov   x1, xzr
+1:      str   x2, [x4, x1]           /* Map vaddr(start) */
+        add   x2, x2, #PAGE_SIZE     /* Next page */
+        add   x1, x1, #8             /* Next slot */
+        cmp   x1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512 entries per page */
+        b.lt  1b
+
+        /*
+         * If Xen is loaded at exactly XEN_VIRT_START then we don't
+         * need an additional 1:1 mapping, the virtual mapping will
+         * suffice.
+         */
+        cmp   x19, #XEN_VIRT_START
+        bne   1f
+        ret
+1:
+        /*
+         * Setup the 1:1 mapping so we can turn the MMU on. Note that
+         * only the first page of Xen will be part of the 1:1 mapping.
+         */
+
+        /*
+         * Find the zeroeth slot used. If the slot is not
+         * XEN_ZEROETH_SLOT, then the 1:1 mapping will use its own set of
+         * page-tables from the first level.
+         */
+        get_table_slot x0, x19, 0       /* x0 := zeroeth slot */
+        cmp   x0, #XEN_ZEROETH_SLOT
+        beq   1f
+        create_table_entry boot_pgtable, boot_first_id, x19, 0, x0, x1, x2
+        b     link_from_first_id
+
+1:
+        /*
+         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
+         * then the 1:1 mapping will use its own set of page-tables from
+         * the second level.
+         */
+        get_table_slot x0, x19, 1      /* x0 := first slot */
+        cmp   x0, #XEN_FIRST_SLOT
+        beq   1f
+        create_table_entry boot_first, boot_second_id, x19, 1, x0, x1, x2
+        b     link_from_second_id
+
+1:
+        /*
+         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
+         * 1:1 mapping will use its own set of page-tables from the
+         * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to handle
+         * it.
+         */
+        get_table_slot x0, x19, 2     /* x0 := second slot */
+        cmp   x0, #XEN_SECOND_SLOT
+        beq   virtphys_clash
+        create_table_entry boot_second, boot_third_id, x19, 2, x0, x1, x2
+        b     link_from_third_id
+
+link_from_first_id:
+        create_table_entry boot_first_id, boot_second_id, x19, 1, x0, x1, x2
+link_from_second_id:
+        create_table_entry boot_second_id, boot_third_id, x19, 2, x0, x1, x2
+link_from_third_id:
+        create_mapping_entry boot_third_id, x19, x19, x0, x1, x2
+        ret
+
+virtphys_clash:
+        /* Identity map clashes with boot_third, which we cannot handle yet */
+        PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
+        b     fail
+ENDPROC(create_page_tables)
+
+/*
+ * Turn on the Data Cache and the MMU. The function will return on the 1:1
+ * mapping. In other word, the caller is responsible to switch to the runtime
+ * mapping.
+ *
+ * Clobbers x0 - x3
+ */
+ENTRY(enable_mmu)
+        PRINT("- Turning on paging -\r\n")
+
+        /*
+         * The state of the TLBs is unknown before turning on the MMU.
+         * Flush them to avoid stale one.
+         */
+        tlbi  alle2                  /* Flush hypervisor TLBs */
+        dsb   nsh
+
+        /* Write Xen's PT's paddr into TTBR0_EL2 */
+        load_paddr x0, boot_pgtable
+        msr   TTBR0_EL2, x0
+        isb
+
+        mrs   x0, SCTLR_EL2
+        orr   x0, x0, #SCTLR_Axx_ELx_M  /* Enable MMU */
+        orr   x0, x0, #SCTLR_Axx_ELx_C  /* Enable D-cache */
+        dsb   sy                     /* Flush PTE writes and finish reads */
+        msr   SCTLR_EL2, x0          /* now paging is enabled */
+        isb                          /* Now, flush the icache */
+        ret
+ENDPROC(enable_mmu)
+
+/*
+ * Remove the 1:1 map from the page-tables. It is not easy to keep track
+ * where the 1:1 map was mapped, so we will look for the top-level entry
+ * exclusive to the 1:1 map and remove it.
+ *
+ * Inputs:
+ *   x19: paddr(start)
+ *
+ * Clobbers x0 - x1
+ */
+ENTRY(remove_identity_mapping)
+        /*
+         * Find the zeroeth slot used. Remove the entry from zeroeth
+         * table if the slot is not XEN_ZEROETH_SLOT.
+         */
+        get_table_slot x1, x19, 0       /* x1 := zeroeth slot */
+        cmp   x1, #XEN_ZEROETH_SLOT
+        beq   1f
+        /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */
+        ldr   x0, =boot_pgtable         /* x0 := root table */
+        str   xzr, [x0, x1, lsl #3]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the first slot used. Remove the entry for the first
+         * table if the slot is not XEN_FIRST_SLOT.
+         */
+        get_table_slot x1, x19, 1       /* x1 := first slot */
+        cmp   x1, #XEN_FIRST_SLOT
+        beq   1f
+        /* It is not in slot XEN_FIRST_SLOT, remove the entry. */
+        ldr   x0, =boot_first           /* x0 := first table */
+        str   xzr, [x0, x1, lsl #3]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the second slot used. Remove the entry for the first
+         * table if the slot is not XEN_SECOND_SLOT.
+         */
+        get_table_slot x1, x19, 2       /* x1 := second slot */
+        cmp   x1, #XEN_SECOND_SLOT
+        beq   identity_mapping_removed
+        /* It is not in slot 1, remove the entry */
+        ldr   x0, =boot_second          /* x0 := second table */
+        str   xzr, [x0, x1, lsl #3]
+
+identity_mapping_removed:
+        /* See asm/arm64/flushtlb.h for the explanation of the sequence. */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        ret
+ENDPROC(remove_identity_mapping)
+
+/*
+ * Map the UART in the fixmap (when earlyprintk is used) and hook the
+ * fixmap table in the page tables.
+ *
+ * The fixmap cannot be mapped in create_page_tables because it may
+ * clash with the 1:1 mapping.
+ *
+ * Inputs:
+ *   x20: Physical offset
+ *   x23: Early UART base physical address
+ *
+ * Clobbers x0 - x3
+ */
+ENTRY(setup_fixmap)
+#ifdef CONFIG_EARLY_PRINTK
+        /* Add UART to the fixmap table */
+        ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
+        create_mapping_entry xen_fixmap, x0, x23, x1, x2, x3, type=PT_DEV_L3
+#endif
+        /* Map fixmap into boot_second */
+        ldr   x0, =FIXMAP_ADDR(0)
+        create_table_entry boot_second, xen_fixmap, x0, 2, x1, x2, x3
+        /* Ensure any page table updates made above have occurred. */
+        dsb   nshst
+
+        ret
+ENDPROC(setup_fixmap)
+
+/* Fail-stop */
+fail:   PRINT("- Boot failed -\r\n")
+1:      wfe
+        b     1b
+ENDPROC(fail)
+
+GLOBAL(_end_boot)
+
+/*
+ * Switch TTBR
+ *
+ * x0    ttbr
+ *
+ * TODO: This code does not comply with break-before-make.
+ */
+ENTRY(switch_ttbr)
+        dsb   sy                     /* Ensure the flushes happen before
+                                      * continuing */
+        isb                          /* Ensure synchronization with previous
+                                      * changes to text */
+        tlbi   alle2                 /* Flush hypervisor TLB */
+        ic     iallu                 /* Flush I-cache */
+        dsb    sy                    /* Ensure completion of TLB flush */
+        isb
+
+        msr    TTBR0_EL2, x0
+
+        isb                          /* Ensure synchronization with previous
+                                      * changes to text */
+        tlbi   alle2                 /* Flush hypervisor TLB */
+        ic     iallu                 /* Flush I-cache */
+        dsb    sy                    /* Ensure completion of TLB flush */
+        isb
+
+        ret
+ENDPROC(switch_ttbr)
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/macros.h b/xen/arch/arm/include/asm/arm64/macros.h
index 140e223b4c..f28c124e66 100644
--- a/xen/arch/arm/include/asm/arm64/macros.h
+++ b/xen/arch/arm/include/asm/arm64/macros.h
@@ -32,6 +32,57 @@
         hint    #22
     .endm
 
+#ifdef CONFIG_EARLY_PRINTK
+/*
+ * Macro to print a string to the UART, if there is one.
+ *
+ * Clobbers x0 - x3
+ */
+#define PRINT(_s)          \
+        mov   x3, lr ;     \
+        adr   x0, 98f ;    \
+        bl    puts    ;    \
+        mov   lr, x3 ;     \
+        RODATA_STR(98, _s)
+
+/*
+ * Macro to print the value of register \xb
+ *
+ * Clobbers x0 - x4
+ */
+.macro print_reg xb
+        mov   x0, \xb
+        mov   x4, lr
+        bl    putn
+        mov   lr, x4
+.endm
+
+#else /* CONFIG_EARLY_PRINTK */
+#define PRINT(s)
+
+.macro print_reg xb
+.endm
+
+#endif /* !CONFIG_EARLY_PRINTK */
+
+/*
+ * Pseudo-op for PC relative adr <reg>, <symbol> where <symbol> is
+ * within the range +/- 4GB of the PC.
+ *
+ * @dst: destination register (64 bit wide)
+ * @sym: name of the symbol
+ */
+.macro  adr_l, dst, sym
+        adrp \dst, \sym
+        add  \dst, \dst, :lo12:\sym
+.endm
+
+/* Load the physical address of a symbol into xb */
+.macro load_paddr xb, sym
+        ldr \xb, =\sym
+        add \xb, \xb, x20
+.endm
+
 /*
  * Register aliases.
  */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:30:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:30:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476506.738771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdg-0008If-Kx; Fri, 13 Jan 2023 05:30:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476506.738771; Fri, 13 Jan 2023 05:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdg-0008IK-6N; Fri, 13 Jan 2023 05:30:04 +0000
Received: by outflank-mailman (input) for mailman id 476506;
 Fri, 13 Jan 2023 05:30:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCde-0005sP-Ki
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 52eff47f-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:30:00 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4DA8413D5;
 Thu, 12 Jan 2023 21:30:42 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 194803F587;
 Thu, 12 Jan 2023 21:29:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52eff47f-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity map sections
Date: Fri, 13 Jan 2023 13:28:40 +0800
Message-Id: <20230113052914.3845596-8-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Only the first 4KB of the Xen image will be mapped as identity
(PA == VA). At the moment, Xen guarantees this by placing
everything that needs to be used in the identity mapping
in head.S before _end_boot, and checking at link time that this
fits in 4KB.

In the previous patch, we moved the MMU code out of head.S into
a new file and gave it the .text.header section to keep all
identity-mapped code within the first 4KB. However, the order of
these two files within that 4KB depends on the build tools.
Currently the Makefile lists the objects so that head.S comes
first, but another build tool could produce a different order.

In this patch we introduce .text.idmap for head_mmu.S and place
this section right after .text.header, ensuring the code of
head_mmu.S always follows the code of head.S.

After this, some code that does not belong to the identity map
is still included before _end_boot, because _end_boot has moved
to head_mmu.S and therefore all code in head.S precedes it. So
this patch also adds a .text section directive at the spot in
head.S where _end_boot used to be; everything after it in head.S
is no longer part of the identity map section.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. New patch.
---
 xen/arch/arm/arm64/head.S     | 6 ++++++
 xen/arch/arm/arm64/head_mmu.S | 2 +-
 xen/arch/arm/xen.lds.S        | 1 +
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 5cfa47279b..782bd1f94c 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -466,6 +466,12 @@ fail:   PRINT("- Boot failed -\r\n")
         b     1b
 ENDPROC(fail)
 
+/*
+ * For code that does not need to be in the identity map section,
+ * we put it back in the normal .text section.
+ */
+.section .text, "ax", %progbits
+
 #ifdef CONFIG_EARLY_PRINTK
 /*
  * Initialize the UART. Should only be called on the boot CPU.
diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
index e2c8f07140..6ff13c751c 100644
--- a/xen/arch/arm/arm64/head_mmu.S
+++ b/xen/arch/arm/arm64/head_mmu.S
@@ -105,7 +105,7 @@
         str   \tmp2, [\tmp3, \tmp1, lsl #3]
 .endm
 
-.section .text.header, "ax", %progbits
+.section .text.idmap, "ax", %progbits
 /*.aarch64*/
 
 /*
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 92c2984052..bc45ea2c65 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -33,6 +33,7 @@ SECTIONS
   .text : {
         _stext = .;            /* Text section */
        *(.text.header)
+       *(.text.idmap)
 
        *(.text.cold)
        *(.text.unlikely .text.*_unlikely .text.unlikely.*)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:30:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:30:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476508.738780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdh-0000Gs-SS; Fri, 13 Jan 2023 05:30:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476508.738780; Fri, 13 Jan 2023 05:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdh-0000FT-Nc; Fri, 13 Jan 2023 05:30:05 +0000
Received: by outflank-mailman (input) for mailman id 476508;
 Fri, 13 Jan 2023 05:30:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdg-0005sJ-74
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:04 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 54999a60-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:03 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E6A64FEC;
 Thu, 12 Jan 2023 21:30:44 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A95933F587;
 Thu, 12 Jan 2023 21:30:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54999a60-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 08/40] xen/arm: use PA == VA for EARLY_UART_VIRTUAL_ADDRESS on Armv8-R
Date: Fri, 13 Jan 2023 13:28:41 +0800
Message-Id: <20230113052914.3845596-9-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

There is no VMSA support on Armv8-R AArch64, so we cannot map the
early UART to FIXMAP_CONSOLE. Instead, we use PA == VA to define
EARLY_UART_VIRTUAL_ADDRESS on Armv8-R AArch64.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. New patch
---
 xen/arch/arm/include/asm/early_printk.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
index c5149b2976..44a230853f 100644
--- a/xen/arch/arm/include/asm/early_printk.h
+++ b/xen/arch/arm/include/asm/early_printk.h
@@ -15,10 +15,22 @@
 
 #ifdef CONFIG_EARLY_PRINTK
 
+#ifdef CONFIG_ARM_V8R
+
+/*
+ * For Armv8-R, there is no VMSA support in EL2, so we use VA == PA
+ * for EARLY_UART_VIRTUAL_ADDRESS.
+ */
+#define EARLY_UART_VIRTUAL_ADDRESS CONFIG_EARLY_UART_BASE_ADDRESS
+
+#else
+
 /* need to add the uart address offset in page to the fixmap address */
 #define EARLY_UART_VIRTUAL_ADDRESS \
     (FIXMAP_ADDR(FIXMAP_CONSOLE) + (CONFIG_EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
 
+#endif /* CONFIG_ARM_V8R */
+
 #endif /* !CONFIG_EARLY_PRINTK */
 
 #endif
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:30:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:30:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476511.738794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdk-0000wu-97; Fri, 13 Jan 2023 05:30:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476511.738794; Fri, 13 Jan 2023 05:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdk-0000w2-2b; Fri, 13 Jan 2023 05:30:08 +0000
Received: by outflank-mailman (input) for mailman id 476511;
 Fri, 13 Jan 2023 05:30:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdi-0005sJ-MC
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:06 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 5610a0b4-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:05 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 835E5FEC;
 Thu, 12 Jan 2023 21:30:47 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4E56A3F587;
 Thu, 12 Jan 2023 21:30:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5610a0b4-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 09/40] xen/arm: decouple copy_from_paddr from FIXMAP
Date: Fri, 13 Jan 2023 13:28:42 +0800
Message-Id: <20230113052914.3845596-10-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

copy_from_paddr maps a page into Xen's FIXMAP_MISC area for
temporary access. But systems that do not support VMSA cannot
implement set_fixmap/clear_fixmap, which means they cannot always
use the same virtual address for the source.

To handle this, we introduce two helpers to decouple copy_from_paddr
from set_fixmap/clear_fixmap. map_page_to_xen_misc still returns
the same virtual address as before on VMSA systems, and may return
a different address on non-VMSA systems.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. New patch
---
 xen/arch/arm/include/asm/setup.h |  4 ++++
 xen/arch/arm/kernel.c            | 13 +++++++------
 xen/arch/arm/mm.c                | 12 ++++++++++++
 3 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2b..4f39a1aa0a 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -119,6 +119,10 @@ extern struct bootinfo bootinfo;
 
 extern domid_t max_init_domid;
 
+/* Map a page to misc area */
+void *map_page_to_xen_misc(mfn_t mfn, unsigned int attributes);
+/* Unmap the page from misc area */
+void unmap_page_from_xen_misc(void);
 void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len);
 
 size_t estimate_efi_size(unsigned int mem_nr_banks);
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 23b840ea9e..0475d8fae7 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -49,18 +49,19 @@ struct minimal_dtb_header {
  */
 void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len)
 {
-    void *src = (void *)FIXMAP_ADDR(FIXMAP_MISC);
-
-    while (len) {
+    while ( len )
+    {
+        void *src;
         unsigned long l, s;
 
-        s = paddr & (PAGE_SIZE-1);
+        s = paddr & (PAGE_SIZE - 1);
         l = min(PAGE_SIZE - s, len);
 
-        set_fixmap(FIXMAP_MISC, maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC);
+        src = map_page_to_xen_misc(maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC);
+        ASSERT(src != NULL);
         memcpy(dst, src + s, l);
         clean_dcache_va_range(dst, l);
-        clear_fixmap(FIXMAP_MISC);
+        unmap_page_from_xen_misc();
 
         paddr += l;
         dst += l;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0fc6f2992d..8f15814c5e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -355,6 +355,18 @@ void clear_fixmap(unsigned int map)
     BUG_ON(res != 0);
 }
 
+void *map_page_to_xen_misc(mfn_t mfn, unsigned int attributes)
+{
+    set_fixmap(FIXMAP_MISC, mfn, attributes);
+
+    return fix_to_virt(FIXMAP_MISC);
+}
+
+void unmap_page_from_xen_misc(void)
+{
+    clear_fixmap(FIXMAP_MISC);
+}
+
 void flush_page_to_ram(unsigned long mfn, bool sync_icache)
 {
     void *v = map_domain_page(_mfn(mfn));
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:30:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:30:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476520.738804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdr-000263-JA; Fri, 13 Jan 2023 05:30:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476520.738804; Fri, 13 Jan 2023 05:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCdr-00025i-E3; Fri, 13 Jan 2023 05:30:15 +0000
Received: by outflank-mailman (input) for mailman id 476520;
 Fri, 13 Jan 2023 05:30:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCdo-0005sJ-WF
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:13 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 59547816-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:11 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EECB0FEC;
 Thu, 12 Jan 2023 21:30:52 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7B4CD3F587;
 Thu, 12 Jan 2023 21:30:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59547816-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <penny.zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory region map
Date: Fri, 13 Jan 2023 13:28:44 +0800
Message-Id: <20230113052914.3845596-12-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Penny Zheng <penny.zheng@arm.com>

The start-of-day Xen MPU memory region layout is as follows:

xen_mpumap[0] : Xen text
xen_mpumap[1] : Xen read-only data
xen_mpumap[2] : Xen read-only after init data
xen_mpumap[3] : Xen read-write data
xen_mpumap[4] : Xen BSS
......
xen_mpumap[max_xen_mpumap - 2]: Xen init data
xen_mpumap[max_xen_mpumap - 1]: Xen init text

max_xen_mpumap refers to the number of regions supported by the EL2 MPU.
The layout must stay consistent with what is described in xen.lds.S,
otherwise the code needs adjusting.

As the MMU and MPU systems use different functions to create the boot
MMU/MPU memory management data, instead of introducing extra #ifdefs
into the main code flow we introduce a neutral name,
prepare_early_mappings, for both, which also replaces
create_page_tables on the MMU side.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/arm64/Makefile              |   2 +
 xen/arch/arm/arm64/head.S                |  17 +-
 xen/arch/arm/arm64/head_mmu.S            |   4 +-
 xen/arch/arm/arm64/head_mpu.S            | 323 +++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/mpu.h     |  63 +++++
 xen/arch/arm/include/asm/arm64/sysregs.h |  49 ++++
 xen/arch/arm/mm_mpu.c                    |  48 ++++
 xen/arch/arm/xen.lds.S                   |   4 +
 8 files changed, 502 insertions(+), 8 deletions(-)
 create mode 100644 xen/arch/arm/arm64/head_mpu.S
 create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h
 create mode 100644 xen/arch/arm/mm_mpu.c

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 22da2f54b5..438c9737ad 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -10,6 +10,8 @@ obj-y += entry.o
 obj-y += head.o
 ifneq ($(CONFIG_HAS_MPU),y)
 obj-y += head_mmu.o
+else
+obj-y += head_mpu.o
 endif
 obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 782bd1f94c..145e3d53dc 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -68,9 +68,9 @@
  *  x24 -
  *  x25 -
  *  x26 - skip_zero_bss (boot cpu only)
- *  x27 -
- *  x28 -
- *  x29 -
+ *  x27 - region selector (mpu only)
+ *  x28 - prbar (mpu only)
+ *  x29 - prlar (mpu only)
  *  x30 - lr
  */
 
@@ -82,7 +82,7 @@
  * ---------------------------
  *
  * The requirements are:
- *   MMU = off, D-cache = off, I-cache = on or off,
+ *   MMU/MPU = off, D-cache = off, I-cache = on or off,
  *   x0 = physical address to the FDT blob.
  *
  * This must be the very first address in the loaded image.
@@ -252,7 +252,12 @@ real_start_efi:
 
         bl    check_cpu_mode
         bl    cpu_init
-        bl    create_page_tables
+
+        /*
+         * Create boot memory management data: page tables for MMU systems
+         * and memory regions for MPU systems.
+         */
+        bl    prepare_early_mappings
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
@@ -310,7 +315,7 @@ GLOBAL(init_secondary)
 #endif
         bl    check_cpu_mode
         bl    cpu_init
-        bl    create_page_tables
+        bl    prepare_early_mappings
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
index 6ff13c751c..2346f755df 100644
--- a/xen/arch/arm/arm64/head_mmu.S
+++ b/xen/arch/arm/arm64/head_mmu.S
@@ -123,7 +123,7 @@
  *
  * Clobbers x0 - x4
  */
-ENTRY(create_page_tables)
+ENTRY(prepare_early_mappings)
         /* Prepare the page-tables for mapping Xen */
         ldr   x0, =XEN_VIRT_START
         create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
@@ -208,7 +208,7 @@ virtphys_clash:
         /* Identity map clashes with boot_third, which we cannot handle yet */
         PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
         b     fail
-ENDPROC(create_page_tables)
+ENDPROC(prepare_early_mappings)
 
 /*
  * Turn on the Data Cache and the MMU. The function will return on the 1:1
diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
new file mode 100644
index 0000000000..0b97ce4646
--- /dev/null
+++ b/xen/arch/arm/arm64/head_mpu.S
@@ -0,0 +1,323 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Start-of-day code for an Armv8-R AArch64 MPU system.
+ */
+
+#include <asm/arm64/mpu.h>
+#include <asm/early_printk.h>
+#include <asm/page.h>
+
+/*
+ * One entry in the Xen MPU memory region mapping table (xen_mpumap) is a
+ * structure of type pr_t, which is 16 bytes in size, so the entry offset
+ * is a left shift by 4.
+ */
+#define MPU_ENTRY_SHIFT         0x4
+
+#define REGION_SEL_MASK         0xf
+
+#define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
+#define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
+#define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
+
+#define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
+
+/*
+ * Macro to round up the section address to be PAGE_SIZE aligned.
+ * Each section (e.g. .text, .data, etc.) in xen.lds.S is page-aligned,
+ * which is usually guarded with ". = ALIGN(PAGE_SIZE)" at the head
+ * or at the end.
+ */
+.macro roundup_section, xb
+        add   \xb, \xb, #(PAGE_SIZE-1)
+        and   \xb, \xb, #PAGE_MASK
+.endm
+
+/*
+ * Macro to create a new MPU memory region entry, which is a structure
+ * of pr_t, in \prmap.
+ *
+ * Inputs:
+ * prmap:   mpu memory region map table symbol
+ * sel:     region selector
+ * prbar:   preserve value for PRBAR_EL2
+ * prlar:   preserve value for PRLAR_EL2
+ *
+ * Clobbers \tmp1, \tmp2
+ *
+ */
+.macro create_mpu_entry prmap, sel, prbar, prlar, tmp1, tmp2
+    mov   \tmp2, \sel
+    lsl   \tmp2, \tmp2, #MPU_ENTRY_SHIFT
+    adr_l \tmp1, \prmap
+    /* Write the first 8 bytes(prbar_t) of pr_t */
+    str   \prbar, [\tmp1, \tmp2]
+
+    add   \tmp2, \tmp2, #8
+    /* Write the last 8 bytes(prlar_t) of pr_t */
+    str   \prlar, [\tmp1, \tmp2]
+.endm
+
+/*
+ * Macro to store the maximum number of regions supported by the EL2 MPU
+ * in max_xen_mpumap, as read from MPUIR_EL2.
+ *
+ * Outputs:
+ * nr_regions: preserve the maximum number of regions supported by the EL2 MPU
+ *
+ * Clobbers \tmp1
+ *
+ */
+.macro read_max_el2_regions, nr_regions, tmp1
+    load_paddr \tmp1, max_xen_mpumap
+    mrs   \nr_regions, MPUIR_EL2
+    isb
+    str   \nr_regions, [\tmp1]
+.endm
+
+/*
+ * Macro to prepare and set a MPU memory region
+ *
+ * Inputs:
+ * base:        base address symbol (should be page-aligned)
+ * limit:       limit address symbol
+ * sel:         region selector
+ * prbar:       store computed PRBAR_EL2 value
+ * prlar:       store computed PRLAR_EL2 value
+ * attr_prbar:  PRBAR_EL2-related memory attributes. If not specified it will be REGION_DATA_PRBAR
+ * attr_prlar:  PRLAR_EL2-related memory attributes. If not specified it will be REGION_NORMAL_PRLAR
+ *
+ * Clobbers \tmp1
+ *
+ */
+.macro prepare_xen_region, base, limit, sel, prbar, prlar, tmp1, attr_prbar=REGION_DATA_PRBAR, attr_prlar=REGION_NORMAL_PRLAR
+    /* Prepare value for PRBAR_EL2 reg and preserve it in \prbar.*/
+    load_paddr \prbar, \base
+    and   \prbar, \prbar, #MPU_REGION_MASK
+    mov   \tmp1, #\attr_prbar
+    orr   \prbar, \prbar, \tmp1
+
+    /* Prepare value for PRLAR_EL2 reg and preserve it in \prlar.*/
+    load_paddr \prlar, \limit
+    /* Round up limit address to be PAGE_SIZE aligned */
+    roundup_section \prlar
+    /* Limit address should be inclusive */
+    sub   \prlar, \prlar, #1
+    and   \prlar, \prlar, #MPU_REGION_MASK
+    mov   \tmp1, #\attr_prlar
+    orr   \prlar, \prlar, \tmp1
+
+    mov   x27, \sel
+    mov   x28, \prbar
+    mov   x29, \prlar
+    /*
+     * x27, x28 and x29 are the dedicated input
+     * registers for the function write_pr.
+     */
+    bl    write_pr
+.endm
+
+.section .text.idmap, "ax", %progbits
+
+/*
+ * ENTRY to configure an EL2 MPU memory region.
+ * ARMv8-R AArch64 supports at most 255 MPU protection regions.
+ * See section G1.3.18 of the reference manual for ARMv8-R AArch64:
+ * PRBAR<n>_EL2 and PRLAR<n>_EL2 provide access to the EL2 MPU region
+ * determined by the value of 'n' and PRSELR_EL2.REGION as
+ * PRSELR_EL2.REGION<7:4>:n (n = 0, 1, 2, ..., 15).
+ * For example to access regions from 16 to 31 (0b10000 to 0b11111):
+ * - Set PRSELR_EL2 to 0b1xxxx
+ * - Region 16 configuration is accessible through PRBAR0_EL2 and PRLAR0_EL2
+ * - Region 17 configuration is accessible through PRBAR1_EL2 and PRLAR1_EL2
+ * - Region 18 configuration is accessible through PRBAR2_EL2 and PRLAR2_EL2
+ * - ...
+ * - Region 31 configuration is accessible through PRBAR15_EL2 and PRLAR15_EL2
+ *
+ * Inputs:
+ * x27: region selector
+ * x28: preserve value for PRBAR_EL2
+ * x29: preserve value for PRLAR_EL2
+ *
+ */
+ENTRY(write_pr)
+    msr   PRSELR_EL2, x27
+    dsb   sy
+    and   x27, x27, #REGION_SEL_MASK
+    cmp   x27, #0
+    bne   1f
+    msr   PRBAR0_EL2, x28
+    msr   PRLAR0_EL2, x29
+    b     out
+1:
+    cmp   x27, #1
+    bne   2f
+    msr   PRBAR1_EL2, x28
+    msr   PRLAR1_EL2, x29
+    b     out
+2:
+    cmp   x27, #2
+    bne   3f
+    msr   PRBAR2_EL2, x28
+    msr   PRLAR2_EL2, x29
+    b     out
+3:
+    cmp   x27, #3
+    bne   4f
+    msr   PRBAR3_EL2, x28
+    msr   PRLAR3_EL2, x29
+    b     out
+4:
+    cmp   x27, #4
+    bne   5f
+    msr   PRBAR4_EL2, x28
+    msr   PRLAR4_EL2, x29
+    b     out
+5:
+    cmp   x27, #5
+    bne   6f
+    msr   PRBAR5_EL2, x28
+    msr   PRLAR5_EL2, x29
+    b     out
+6:
+    cmp   x27, #6
+    bne   7f
+    msr   PRBAR6_EL2, x28
+    msr   PRLAR6_EL2, x29
+    b     out
+7:
+    cmp   x27, #7
+    bne   8f
+    msr   PRBAR7_EL2, x28
+    msr   PRLAR7_EL2, x29
+    b     out
+8:
+    cmp   x27, #8
+    bne   9f
+    msr   PRBAR8_EL2, x28
+    msr   PRLAR8_EL2, x29
+    b     out
+9:
+    cmp   x27, #9
+    bne   10f
+    msr   PRBAR9_EL2, x28
+    msr   PRLAR9_EL2, x29
+    b     out
+10:
+    cmp   x27, #10
+    bne   11f
+    msr   PRBAR10_EL2, x28
+    msr   PRLAR10_EL2, x29
+    b     out
+11:
+    cmp   x27, #11
+    bne   12f
+    msr   PRBAR11_EL2, x28
+    msr   PRLAR11_EL2, x29
+    b     out
+12:
+    cmp   x27, #12
+    bne   13f
+    msr   PRBAR12_EL2, x28
+    msr   PRLAR12_EL2, x29
+    b     out
+13:
+    cmp   x27, #13
+    bne   14f
+    msr   PRBAR13_EL2, x28
+    msr   PRLAR13_EL2, x29
+    b     out
+14:
+    cmp   x27, #14
+    bne   15f
+    msr   PRBAR14_EL2, x28
+    msr   PRLAR14_EL2, x29
+    b     out
+15:
+    msr   PRBAR15_EL2, x28
+    msr   PRLAR15_EL2, x29
+out:
+    isb
+    ret
+ENDPROC(write_pr)
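The PRSELR_EL2.REGION<7:4>:n scheme that write_pr relies on can be sketched in C. This is an illustrative helper, not Xen code; it assumes REGION_SEL_MASK is 0xf, as the dispatch above implies:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Split an MPU region number into the value programmed into PRSELR_EL2
 * and the PRBAR<n>_EL2/PRLAR<n>_EL2 pair index n. write_pr writes the
 * whole region number to PRSELR_EL2, then uses only the low 4 bits
 * (REGION_SEL_MASK, assumed 0xf) to pick the register pair.
 */
static void mpu_region_select(uint8_t region, uint8_t *prselr, uint8_t *n)
{
    *prselr = region;   /* whole region number goes into PRSELR_EL2 */
    *n = region & 0xf;  /* low 4 bits select PRBAR<n>_EL2/PRLAR<n>_EL2 */
}
```

For instance, region 18 is reached by programming PRSELR_EL2 with 18 and then writing PRBAR2_EL2/PRLAR2_EL2, matching the example in the comment above.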
+
+/*
+ * Static start-of-day Xen EL2 MPU memory region layout.
+ *
+ *     xen_mpumap[0] : Xen text
+ *     xen_mpumap[1] : Xen read-only data
+ *     xen_mpumap[2] : Xen read-only after init data
+ *     xen_mpumap[3] : Xen read-write data
+ *     xen_mpumap[4] : Xen BSS
+ *     ......
+ *     xen_mpumap[max_xen_mpumap - 2]: Xen init data
+ *     xen_mpumap[max_xen_mpumap - 1]: Xen init text
+ *
+ * Clobbers x0 - x6
+ *
+ * This layout shall be kept consistent with what is described in
+ * xen.lds.S, or the code below will need adjustment.
+ * It shall also follow the rule of placing fixed MPU memory regions at
+ * the front and the others at the rear; here, the latter mainly refers
+ * to boot-only regions, like the Xen init text region.
+ */
+ENTRY(prepare_early_mappings)
+    /* Save LR, as write_pr will be called later like a nested function call */
+    mov   x6, lr
+
+    /* x0: region sel */
+    mov   x0, xzr
+    /* Xen text section. */
+    prepare_xen_region _stext, _etext, x0, x1, x2, x3, attr_prbar=REGION_TEXT_PRBAR
+    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
+
+    add   x0, x0, #1
+    /* Xen read-only data section. */
+    prepare_xen_region _srodata, _erodata, x0, x1, x2, x3, attr_prbar=REGION_RO_PRBAR
+    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
+
+    add   x0, x0, #1
+    /* Xen read-only after init data section. */
+    prepare_xen_region __ro_after_init_start, __ro_after_init_end, x0, x1, x2, x3
+    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
+
+    add   x0, x0, #1
+    /* Xen read-write data section. */
+    prepare_xen_region __data_begin, __init_begin, x0, x1, x2, x3
+    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
+
+    read_max_el2_regions x5, x3 /* x5: max_mpumap */
+    sub   x5, x5, #1
+    /* Xen init text section. */
+    prepare_xen_region _sinittext, _einittext, x5, x1, x2, x3, attr_prbar=REGION_TEXT_PRBAR
+    create_mpu_entry xen_mpumap, x5, x1, x2, x3, x4
+
+    sub   x5, x5, #1
+    /* Xen init data section. */
+    prepare_xen_region __init_data_begin, __init_end, x5, x1, x2, x3
+    create_mpu_entry xen_mpumap, x5, x1, x2, x3, x4
+
+    add   x0, x0, #1
+    /* Xen BSS section. */
+    prepare_xen_region __bss_start, __bss_end, x0, x1, x2, x3
+    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
+
+    /* Update next_fixed_region_idx and next_transient_region_idx */
+    load_paddr x3, next_fixed_region_idx
+    add   x0, x0, #1
+    str   x0, [x3]
+    load_paddr x4, next_transient_region_idx
+    sub   x5, x5, #1
+    str   x5, [x4]
+
+    mov   lr, x6
+    ret
+ENDPROC(prepare_early_mappings)
+
+GLOBAL(_end_boot)
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
new file mode 100644
index 0000000000..c945dd53db
--- /dev/null
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * mpu.h: Arm Memory Protection Region definitions.
+ */
+
+#ifndef __ARM64_MPU_H__
+#define __ARM64_MPU_H__
+
+#define MPU_REGION_SHIFT  6
+#define MPU_REGION_ALIGN  (_AC(1, UL) << MPU_REGION_SHIFT)
+#define MPU_REGION_MASK   (~(MPU_REGION_ALIGN - 1))
+
+/*
+ * MPUIR_EL2.Region identifies the number of regions supported by the EL2 MPU.
+ * It is an 8-bit field, so at most 255 MPU memory regions are supported.
+ */
+#define ARM_MAX_MPU_MEMORY_REGIONS 255
+
+#ifndef __ASSEMBLY__
+
+/* Protection Region Base Address Register */
+typedef union {
+    struct __packed {
+        unsigned long xn:2;       /* Execute-Never */
+        unsigned long ap:2;       /* Access Permission */
+        unsigned long sh:2;       /* Shareability */
+        unsigned long base:42;    /* Base Address */
+        unsigned long pad:16;
+    } reg;
+    uint64_t bits;
+} prbar_t;
+
+/* Protection Region Limit Address Register */
+typedef union {
+    struct __packed {
+        unsigned long en:1;     /* Region enable */
+        unsigned long ai:3;     /* Memory Attribute Index */
+        unsigned long ns:1;     /* Not-Secure */
+        unsigned long res:1;    /* Reserved 0 by hardware */
+        unsigned long limit:42; /* Limit Address */
+        unsigned long pad:16;
+    } reg;
+    uint64_t bits;
+} prlar_t;
+
+/* MPU Protection Region */
+typedef struct {
+    prbar_t prbar;
+    prlar_t prlar;
+} pr_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ARM64_MPU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
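The prbar_t/prlar_t layout above lends itself to a small packing helper. The sketch below is illustrative only: it mirrors the bitfields defined in this header, assumes a compiler that lays bitfields out from bit 0 upwards (as GCC does on AArch64), and `pr_of_addr` with its 64-byte granule is a made-up helper, not Xen's actual region constructor:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the prbar_t/prlar_t layout above, for illustration. */
typedef union {
    struct {
        uint64_t xn:2, ap:2, sh:2, base:42, pad:16;
    } reg;
    uint64_t bits;
} prbar_t;

typedef union {
    struct {
        uint64_t en:1, ai:3, ns:1, res:1, limit:42, pad:16;
    } reg;
    uint64_t bits;
} prlar_t;

typedef struct { prbar_t prbar; prlar_t prlar; } pr_t;

/* Pack a 64-byte-granule region [base, limit) into a PRBAR/PRLAR pair. */
static pr_t pr_of_addr(uint64_t base, uint64_t limit)
{
    pr_t pr;

    pr.prbar.bits = 0;
    pr.prlar.bits = 0;
    pr.prbar.reg.base = base >> 6;          /* bits [47:6] of the base    */
    pr.prlar.reg.limit = (limit - 1) >> 6;  /* inclusive limit address    */
    pr.prlar.reg.en = 1;                    /* enable the region          */
    return pr;
}
```

For a region [0x200000, 0x240000), this yields PRBAR bits 0x200000 and PRLAR bits 0x23FFC1 (limit 0x23FFC0 with the enable bit set), showing how the 6-bit MPU_REGION_SHIFT granularity falls out of the field layout.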
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 4638999514..aca9bca5b1 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -458,6 +458,55 @@
 #define ZCR_ELx_LEN_SIZE             9
 #define ZCR_ELx_LEN_MASK             0x1ff
 
+/* System registers for Armv8-R AArch64 */
+#ifdef CONFIG_HAS_MPU
+
+/* EL2 MPU Protection Region Base Address Register encode */
+#define PRBAR_EL2   S3_4_C6_C8_0
+#define PRBAR0_EL2  S3_4_C6_C8_0
+#define PRBAR1_EL2  S3_4_C6_C8_4
+#define PRBAR2_EL2  S3_4_C6_C9_0
+#define PRBAR3_EL2  S3_4_C6_C9_4
+#define PRBAR4_EL2  S3_4_C6_C10_0
+#define PRBAR5_EL2  S3_4_C6_C10_4
+#define PRBAR6_EL2  S3_4_C6_C11_0
+#define PRBAR7_EL2  S3_4_C6_C11_4
+#define PRBAR8_EL2  S3_4_C6_C12_0
+#define PRBAR9_EL2  S3_4_C6_C12_4
+#define PRBAR10_EL2 S3_4_C6_C13_0
+#define PRBAR11_EL2 S3_4_C6_C13_4
+#define PRBAR12_EL2 S3_4_C6_C14_0
+#define PRBAR13_EL2 S3_4_C6_C14_4
+#define PRBAR14_EL2 S3_4_C6_C15_0
+#define PRBAR15_EL2 S3_4_C6_C15_4
+
+/* EL2 MPU Protection Region Limit Address Register encode */
+#define PRLAR_EL2   S3_4_C6_C8_1
+#define PRLAR0_EL2  S3_4_C6_C8_1
+#define PRLAR1_EL2  S3_4_C6_C8_5
+#define PRLAR2_EL2  S3_4_C6_C9_1
+#define PRLAR3_EL2  S3_4_C6_C9_5
+#define PRLAR4_EL2  S3_4_C6_C10_1
+#define PRLAR5_EL2  S3_4_C6_C10_5
+#define PRLAR6_EL2  S3_4_C6_C11_1
+#define PRLAR7_EL2  S3_4_C6_C11_5
+#define PRLAR8_EL2  S3_4_C6_C12_1
+#define PRLAR9_EL2  S3_4_C6_C12_5
+#define PRLAR10_EL2 S3_4_C6_C13_1
+#define PRLAR11_EL2 S3_4_C6_C13_5
+#define PRLAR12_EL2 S3_4_C6_C14_1
+#define PRLAR13_EL2 S3_4_C6_C14_5
+#define PRLAR14_EL2 S3_4_C6_C15_1
+#define PRLAR15_EL2 S3_4_C6_C15_5
+
+/* MPU Protection Region Selection Register encode */
+#define PRSELR_EL2 S3_4_C6_C2_1
+
+/* MPU Type registers encode */
+#define MPUIR_EL2 S3_4_C0_C0_4
+
+#endif
+
 /* Access to system registers */
 
 #define WRITE_SYSREG64(v, name) do {                    \
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
new file mode 100644
index 0000000000..43e9a1be4d
--- /dev/null
+++ b/xen/arch/arm/mm_mpu.c
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/arch/arm/mm_mpu.c
+ *
+ * MPU based memory management code for Armv8-R AArch64.
+ *
+ * Copyright (C) 2022 Arm Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/init.h>
+#include <xen/page-size.h>
+#include <asm/arm64/mpu.h>
+
+/* Xen MPU memory region mapping table. */
+pr_t __aligned(PAGE_SIZE) __section(".data.page_aligned")
+     xen_mpumap[ARM_MAX_MPU_MEMORY_REGIONS];
+
+/* Index into MPU memory region map for fixed regions, ascending from zero. */
+uint64_t __ro_after_init next_fixed_region_idx;
+/*
+ * Index into MPU memory region map for transient regions (e.g. boot-only
+ * regions), descending from max_xen_mpumap.
+ */
+uint64_t __ro_after_init next_transient_region_idx;
+
+/* Maximum number of supported MPU memory regions by the EL2 MPU. */
+uint64_t __ro_after_init max_xen_mpumap;
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
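The two-ended index scheme that next_fixed_region_idx and next_transient_region_idx implement can be modelled as follows. This is an illustrative sketch, not the Xen implementation; the helper names and the full-map check are assumptions:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Fixed regions are allocated ascending from zero; transient (boot-only)
 * regions are allocated descending from the top of the map, so the two
 * kinds never interleave and the boot-only tail is easy to reclaim later.
 */
static uint64_t next_fixed_region_idx;      /* grows upwards   */
static uint64_t next_transient_region_idx;  /* grows downwards */

static void mpumap_init(uint64_t max_regions)
{
    next_fixed_region_idx = 0;
    next_transient_region_idx = max_regions - 1;
}

static int64_t alloc_fixed_region(void)
{
    if ( next_fixed_region_idx > next_transient_region_idx )
        return -1;                          /* the two ends have met */
    return (int64_t)next_fixed_region_idx++;
}

static int64_t alloc_transient_region(void)
{
    if ( next_transient_region_idx < next_fixed_region_idx )
        return -1;                          /* the two ends have met */
    return (int64_t)next_transient_region_idx--;
}
```

With an 8-region map, fixed allocations return 0, 1, 2, ... while transient allocations return 7, 6, 5, ..., mirroring the layout comment in head.S where init text/data occupy the last slots.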
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index bc45ea2c65..79965a3c17 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -91,6 +91,8 @@ SECTIONS
       __ro_after_init_end = .;
   } : text
 
+  . = ALIGN(PAGE_SIZE);
+  __data_begin = .;
   .data.read_mostly : {
        /* Exception table */
        __start___ex_table = .;
@@ -157,7 +159,9 @@ SECTIONS
        *(.altinstr_replacement)
   } :text
   . = ALIGN(PAGE_SIZE);
+
   .init.data : {
+       __init_data_begin = .;            /* Init data */
        *(.init.rodata)
        *(.init.rodata.*)
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476538.738820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCie-0004cF-Ij; Fri, 13 Jan 2023 05:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476538.738820; Fri, 13 Jan 2023 05:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCie-0004bk-Cw; Fri, 13 Jan 2023 05:35:12 +0000
Received: by outflank-mailman (input) for mailman id 476538;
 Fri, 13 Jan 2023 05:35:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCe7-0005sJ-A3
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:31 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 649eb9cb-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:30 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D6F2813D5;
 Thu, 12 Jan 2023 21:31:11 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 246403F587;
 Thu, 12 Jan 2023 21:30:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 649eb9cb-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 17/40] xen/mpu: plumb virt/maddr/mfn conversion in MPU system
Date: Fri, 13 Jan 2023 13:28:50 +0800
Message-Id: <20230113052914.3845596-18-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

virt_to_maddr and maddr_to_virt are used widely in Xen code. So even
though there is no VMSA in an MPU system, we keep the same interface
names to preserve the common code flow.

We move the existing virt/maddr conversion from mm.h to mm_mmu.h.
The MPU version of the virt/maddr conversion is simple: it returns
the input address as the output.

We also override virt_to_mfn/mfn_to_virt in the source file mm_mpu.c,
the same way as in mm_mmu.c.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/mm.h     | 26 --------------------------
 xen/arch/arm/include/asm/mm_mmu.h | 26 ++++++++++++++++++++++++++
 xen/arch/arm/include/asm/mm_mpu.h | 13 +++++++++++++
 xen/arch/arm/mm_mpu.c             |  6 ++++++
 4 files changed, 45 insertions(+), 26 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 9b4c07d965..e29158028a 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -250,32 +250,6 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 /* Page-align address and convert to frame number format */
 #define paddr_to_pfn_aligned(paddr)    paddr_to_pfn(PAGE_ALIGN(paddr))
 
-static inline paddr_t __virt_to_maddr(vaddr_t va)
-{
-    uint64_t par = va_to_par(va);
-    return (par & PADDR_MASK & PAGE_MASK) | (va & ~PAGE_MASK);
-}
-#define virt_to_maddr(va)   __virt_to_maddr((vaddr_t)(va))
-
-#ifdef CONFIG_ARM_32
-static inline void *maddr_to_virt(paddr_t ma)
-{
-    ASSERT(is_xen_heap_mfn(maddr_to_mfn(ma)));
-    ma -= mfn_to_maddr(directmap_mfn_start);
-    return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
-}
-#else
-static inline void *maddr_to_virt(paddr_t ma)
-{
-    ASSERT((mfn_to_pdx(maddr_to_mfn(ma)) - directmap_base_pdx) <
-           (DIRECTMAP_SIZE >> PAGE_SHIFT));
-    return (void *)(XENHEAP_VIRT_START -
-                    (directmap_base_pdx << PAGE_SHIFT) +
-                    ((ma & ma_va_bottom_mask) |
-                     ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
-}
-#endif
-
 /*
  * Translate a guest virtual address to a machine address.
  * Return the fault information if the translation has failed else 0.
diff --git a/xen/arch/arm/include/asm/mm_mmu.h b/xen/arch/arm/include/asm/mm_mmu.h
index a5e63d8af8..6d7e5ddde7 100644
--- a/xen/arch/arm/include/asm/mm_mmu.h
+++ b/xen/arch/arm/include/asm/mm_mmu.h
@@ -23,6 +23,32 @@ extern uint64_t init_ttbr;
 extern void setup_directmap_mappings(unsigned long base_mfn,
                                      unsigned long nr_mfns);
 
+static inline paddr_t __virt_to_maddr(vaddr_t va)
+{
+    uint64_t par = va_to_par(va);
+    return (par & PADDR_MASK & PAGE_MASK) | (va & ~PAGE_MASK);
+}
+#define virt_to_maddr(va)   __virt_to_maddr((vaddr_t)(va))
+
+#ifdef CONFIG_ARM_32
+static inline void *maddr_to_virt(paddr_t ma)
+{
+    ASSERT(is_xen_heap_mfn(maddr_to_mfn(ma)));
+    ma -= mfn_to_maddr(directmap_mfn_start);
+    return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
+}
+#else
+static inline void *maddr_to_virt(paddr_t ma)
+{
+    ASSERT((mfn_to_pdx(maddr_to_mfn(ma)) - directmap_base_pdx) <
+           (DIRECTMAP_SIZE >> PAGE_SHIFT));
+    return (void *)(XENHEAP_VIRT_START -
+                    (directmap_base_pdx << PAGE_SHIFT) +
+                    ((ma & ma_va_bottom_mask) |
+                     ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
+}
+#endif
+
 #endif /* __ARCH_ARM_MM_MMU__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/mm_mpu.h b/xen/arch/arm/include/asm/mm_mpu.h
index 1f3cff7743..3a4b07f187 100644
--- a/xen/arch/arm/include/asm/mm_mpu.h
+++ b/xen/arch/arm/include/asm/mm_mpu.h
@@ -4,6 +4,19 @@
 
 #define setup_mm_mappings(boot_phys_offset) ((void)(boot_phys_offset))
 
+static inline paddr_t __virt_to_maddr(vaddr_t va)
+{
+    /* In MPU system, VA == PA. */
+    return (paddr_t)va;
+}
+#define virt_to_maddr(va)   __virt_to_maddr((vaddr_t)(va))
+
+static inline void *maddr_to_virt(paddr_t ma)
+{
+    /* In MPU system, VA == PA. */
+    return (void *)ma;
+}
+
 #endif /* __ARCH_ARM_MM_MPU__ */
 
 /*
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 87a12042cc..c9e17ab6da 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -29,6 +29,12 @@
 pr_t __aligned(PAGE_SIZE) __section(".data.page_aligned")
      xen_mpumap[ARM_MAX_MPU_MEMORY_REGIONS];
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+#undef mfn_to_virt
+#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn))
+
 /* Index into MPU memory region map for fixed regions, ascending from zero. */
 uint64_t __ro_after_init next_fixed_region_idx;
 /*
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476541.738852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCig-0005Km-PZ; Fri, 13 Jan 2023 05:35:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476541.738852; Fri, 13 Jan 2023 05:35:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCig-0005Iy-Ie; Fri, 13 Jan 2023 05:35:14 +0000
Received: by outflank-mailman (input) for mailman id 476541;
 Fri, 13 Jan 2023 05:35:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeG-0005sJ-O8
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:40 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 6a214224-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:39 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3131213D5;
 Thu, 12 Jan 2023 21:31:21 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7319F3F587;
 Thu, 12 Jan 2023 21:30:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a214224-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 20/40] xen/mpu: plumb early_fdt_map in MPU systems
Date: Fri, 13 Jan 2023 13:28:53 +0800
Message-Id: <20230113052914.3845596-21-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In MPU systems, the device tree binary can either be packed into the
Xen image through CONFIG_DTB_FILE, or provided by the bootloader
through x0.

In MPU systems, each section in xen.lds.S is PAGE_SIZE-aligned. So, in
order not to overlap the preceding BSS section, the dtb section must be
made page-aligned too. We add ". = ALIGN(PAGE_SIZE);" at the head of
the dtb section to achieve this.

In this commit, we map the early FDT with a transient MPU memory region
at the rear, using REGION_HYPERVISOR_BOOT.
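The page-aligned window that the mapping in this patch covers can be sketched numerically. Everything in this sketch is an assumption for illustration: 4K pages, a 2MB MAX_FDT_SIZE, and local reimplementations of round_pgdown/round_pgup; only the start-rounding/size-rounding logic mirrors the patch below:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT      12                  /* assumed 4K pages          */
#define PAGE_SIZE       (1UL << PAGE_SHIFT)
#define PAGE_MASK       (~(PAGE_SIZE - 1))
#define MAX_FDT_SIZE    (2UL << 20)         /* assumed 2MB FDT size cap  */

static uint64_t round_pgdown(uint64_t addr)
{
    return addr & PAGE_MASK;
}

static uint64_t round_pgup(uint64_t addr)
{
    return (addr + PAGE_SIZE - 1) & PAGE_MASK;
}

/* Compute the page-aligned window mapped for an FDT at fdt_paddr. */
static void fdt_map_window(uint64_t fdt_paddr, uint64_t *start,
                           uint64_t *nr_pages)
{
    *start = round_pgdown(fdt_paddr);
    *nr_pages = round_pgup(MAX_FDT_SIZE) >> PAGE_SHIFT;
}
```

So an FDT handed over at, say, 0x40001234 is mapped from 0x40001000 for the full MAX_FDT_SIZE window, after which the header's magic and totalsize fields can be validated.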

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/arm64/mpu.h |  5 +++
 xen/arch/arm/mm_mpu.c                | 63 +++++++++++++++++++++++++---
 xen/arch/arm/xen.lds.S               |  5 ++-
 3 files changed, 67 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index fcde6ad0db..b85e420a90 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -45,18 +45,22 @@
  * [3:4] Execute Never
  * [5:6] Access Permission
  * [7]   Region Present
+ * [8]   Boot-only Region
  */
 #define _REGION_AI_BIT            0
 #define _REGION_XN_BIT            3
 #define _REGION_AP_BIT            5
 #define _REGION_PRESENT_BIT       7
+#define _REGION_BOOTONLY_BIT      8
 #define _REGION_XN                (2U << _REGION_XN_BIT)
 #define _REGION_RO                (2U << _REGION_AP_BIT)
 #define _REGION_PRESENT           (1U << _REGION_PRESENT_BIT)
+#define _REGION_BOOTONLY          (1U << _REGION_BOOTONLY_BIT)
 #define REGION_AI_MASK(x)         (((x) >> _REGION_AI_BIT) & 0x7U)
 #define REGION_XN_MASK(x)         (((x) >> _REGION_XN_BIT) & 0x3U)
 #define REGION_AP_MASK(x)         (((x) >> _REGION_AP_BIT) & 0x3U)
 #define REGION_RO_MASK(x)         (((x) >> _REGION_AP_BIT) & 0x2U)
+#define REGION_BOOTONLY_MASK(x)   (((x) >> _REGION_BOOTONLY_BIT) & 0x1U)
 
 /*
  * _REGION_NORMAL is convenience define. It is not meant to be used
@@ -68,6 +72,7 @@
 #define REGION_HYPERVISOR_RO      (_REGION_NORMAL|_REGION_XN|_REGION_RO)
 
 #define REGION_HYPERVISOR         REGION_HYPERVISOR_RW
+#define REGION_HYPERVISOR_BOOT    (REGION_HYPERVISOR_RW|_REGION_BOOTONLY)
 
 #define INVALID_REGION            (~0UL)
 
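The boot-only flag plumbing added in this hunk can be illustrated with a minimal sketch. Only the _REGION_BOOTONLY encoding comes from the patch; mark_bootonly is a hypothetical helper used here to show how a caller would tag a mapping for the transient (tail) part of the map:

```c
#include <assert.h>
#include <stdint.h>

/* Re-statement of the flag encoding from the hunk above. */
#define _REGION_BOOTONLY_BIT      8
#define _REGION_BOOTONLY          (1U << _REGION_BOOTONLY_BIT)
#define REGION_BOOTONLY_MASK(x)   (((x) >> _REGION_BOOTONLY_BIT) & 0x1U)

/* Hypothetical helper: tag an existing flags word as boot-only. */
static unsigned int mark_bootonly(unsigned int flags)
{
    return flags | _REGION_BOOTONLY;
}
```

xen_mpumap_update_entry then tests REGION_BOOTONLY_MASK(flags) to decide between next_transient_region_idx and next_fixed_region_idx, so a flags word without bit 8 set always lands in the fixed (front) part of the map.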
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 08720a7c19..b34dbf4515 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -20,11 +20,16 @@
  */
 
 #include <xen/init.h>
+#include <xen/libfdt/libfdt.h>
 #include <xen/mm.h>
 #include <xen/page-size.h>
+#include <xen/pfn.h>
+#include <xen/sizes.h>
 #include <xen/spinlock.h>
 #include <asm/arm64/mpu.h>
+#include <asm/early_printk.h>
 #include <asm/page.h>
+#include <asm/setup.h>
 
 #ifdef NDEBUG
 static inline void
@@ -62,6 +67,8 @@ uint64_t __ro_after_init max_xen_mpumap;
 
 static DEFINE_SPINLOCK(xen_mpumap_lock);
 
+static paddr_t dtb_paddr;
+
 /* Write a MPU protection region */
 #define WRITE_PROTECTION_REGION(sel, pr, prbar_el2, prlar_el2) ({       \
     uint64_t _sel = sel;                                                \
@@ -403,7 +410,16 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
 
         /* During boot time, the default index is next_fixed_region_idx. */
         if ( system_state <= SYS_STATE_active )
-            idx = next_fixed_region_idx;
+        {
+            /*
+             * If it is a boot-only region (e.g. the region for the early
+             * FDT), it shall be added from the tail, so that it can be
+             * reorganized at late init.
+             */
+            if ( REGION_BOOTONLY_MASK(flags) )
+                idx = next_transient_region_idx;
+            else
+                idx = next_fixed_region_idx;
+        }
 
         xen_mpumap[idx] = pr_of_xenaddr(base, limit - 1, REGION_AI_MASK(flags));
         /* Set permission */
@@ -465,14 +481,51 @@ int map_pages_to_xen(unsigned long virt,
                              mfn_to_maddr(mfn_add(mfn, nr_mfns)), flags);
 }
 
-/* TODO: Implementation on the first usage */
-void dump_hyp_walk(vaddr_t addr)
+void * __init early_fdt_map(paddr_t fdt_paddr)
 {
+    void *fdt_virt;
+    uint32_t size;
+
+    /*
+     * Check whether the physical FDT address is set and meets the minimum
+     * alignment requirement. We rely on MIN_FDT_ALIGN being at least
+     * 8 bytes, so that we can always access the magic and size fields
+     * of the FDT header after mapping the first chunk; double-check
+     * that this is indeed the case.
+     */
+    BUILD_BUG_ON(MIN_FDT_ALIGN < 8);
+    if ( !fdt_paddr || fdt_paddr % MIN_FDT_ALIGN )
+        return NULL;
+
+    dtb_paddr = fdt_paddr;
+    /*
+     * In MPU systems, the device tree binary can be packed into the Xen
+     * image through CONFIG_DTB_FILE, or provided by the bootloader
+     * through x0. Map the FDT with a transient MPU memory region of
+     * MAX_FDT_SIZE; after that, we can validate its magic and size.
+     */
+    if ( map_pages_to_xen(round_pgdown(fdt_paddr),
+                          maddr_to_mfn(round_pgdown(fdt_paddr)),
+                          round_pgup(MAX_FDT_SIZE) >> PAGE_SHIFT,
+                          REGION_HYPERVISOR_BOOT) )
+        panic("Unable to map the device-tree.\n");
+
+    /* VA == PA */
+    fdt_virt = maddr_to_virt(fdt_paddr);
+
+    if ( fdt_magic(fdt_virt) != FDT_MAGIC )
+        return NULL;
+
+    size = fdt_totalsize(fdt_virt);
+    if ( size > MAX_FDT_SIZE )
+        return NULL;
+
+    return fdt_virt;
 }
 
-void * __init early_fdt_map(paddr_t fdt_paddr)
+/* TODO: Implementation on the first usage */
+void dump_hyp_walk(vaddr_t addr)
 {
-    return NULL;
 }
 
 void __init remove_early_mappings(void)
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 79965a3c17..0565e22a1f 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -218,7 +218,10 @@ SECTIONS
   _end = . ;
 
   /* Section for the device tree blob (if any). */
-  .dtb : { *(.dtb) } :text
+  .dtb : {
+      . = ALIGN(PAGE_SIZE);
+      *(.dtb)
+  } :text
 
   DWARF2_DEBUG_SECTIONS
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476539.738826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCie-0004kC-Rt; Fri, 13 Jan 2023 05:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476539.738826; Fri, 13 Jan 2023 05:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCie-0004hh-Mb; Fri, 13 Jan 2023 05:35:12 +0000
Received: by outflank-mailman (input) for mailman id 476539;
 Fri, 13 Jan 2023 05:35:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCe7-0005sP-2P
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:31 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 62be1f31-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:30:27 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BCB2013D5;
 Thu, 12 Jan 2023 21:31:08 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 0A5563F587;
 Thu, 12 Jan 2023 21:30:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62be1f31-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 16/40] xen/arm: introduce setup_mm_mappings
Date: Fri, 13 Jan 2023 13:28:49 +0800
Message-Id: <20230113052914.3845596-17-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Function setup_pagetables is responsible for boot-time pagetable setup
in MMU systems.
But in MPU systems, we have already built up the start-of-day Xen MPU
memory region mapping at the very beginning in assembly.

So, in order to keep a single code flow in arm/setup.c, we introduce
setup_mm_mappings, with a more generic name, which acts as an empty
stub in MPU systems.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/mm.h     |  2 ++
 xen/arch/arm/include/asm/mm_mpu.h | 16 ++++++++++++++++
 xen/arch/arm/setup.c              |  2 +-
 3 files changed, 19 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/include/asm/mm_mpu.h

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 1b9fdb6ff5..9b4c07d965 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -243,6 +243,8 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 
 #ifndef CONFIG_HAS_MPU
 #include <asm/mm_mmu.h>
+#else
+#include <asm/mm_mpu.h>
 #endif
 
 /* Page-align address and convert to frame number format */
diff --git a/xen/arch/arm/include/asm/mm_mpu.h b/xen/arch/arm/include/asm/mm_mpu.h
new file mode 100644
index 0000000000..1f3cff7743
--- /dev/null
+++ b/xen/arch/arm/include/asm/mm_mpu.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ARCH_ARM_MM_MPU__
+#define __ARCH_ARM_MM_MPU__
+
+#define setup_mm_mappings(boot_phys_offset) ((void)(boot_phys_offset))
+
+#endif /* __ARCH_ARM_MM_MPU__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90..d7d200179c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1003,7 +1003,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Initialize traps early allow us to get backtrace when an error occurred */
     init_traps();
 
-    setup_pagetables(boot_phys_offset);
+    setup_mm_mappings(boot_phys_offset);
 
     smp_clear_cpu_maps();
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476537.738814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCie-0004Xv-8Y; Fri, 13 Jan 2023 05:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476537.738814; Fri, 13 Jan 2023 05:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCie-0004Xo-4p; Fri, 13 Jan 2023 05:35:12 +0000
Received: by outflank-mailman (input) for mailman id 476537;
 Fri, 13 Jan 2023 05:35:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCe6-0005sP-22
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:30 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 60f1f780-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:30:24 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A2B5AFEC;
 Thu, 12 Jan 2023 21:31:05 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A4D773F587;
 Thu, 12 Jan 2023 21:30:20 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60f1f780-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 15/40] xen/arm: move MMU-specific memory management code to mm_mmu.c/mm_mmu.h
Date: Fri, 13 Jan 2023 13:28:48 +0800
Message-Id: <20230113052914.3845596-16-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

To make the code more readable and maintainable, move MMU-specific
memory management code from mm.c to mm_mmu.c, and MMU-specific
definitions from mm.h to mm_mmu.h.
Later, mm_mpu.h and mm_mpu.c will be created to hold MPU-specific
memory management code.
This avoids scattering #ifdefs throughout the memory management code
and header files.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/Makefile             |    5 +
 xen/arch/arm/include/asm/mm.h     |   19 +-
 xen/arch/arm/include/asm/mm_mmu.h |   35 +
 xen/arch/arm/mm.c                 | 1352 +---------------------------
 xen/arch/arm/mm_mmu.c             | 1376 +++++++++++++++++++++++++++++
 xen/arch/arm/mm_mpu.c             |   67 ++
 6 files changed, 1488 insertions(+), 1366 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/mm_mmu.h
 create mode 100644 xen/arch/arm/mm_mmu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4d076b278b..21188b207f 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -37,6 +37,11 @@ obj-y += kernel.init.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mem_access.o
 obj-y += mm.o
+ifneq ($(CONFIG_HAS_MPU), y)
+obj-y += mm_mmu.o
+else
+obj-y += mm_mpu.o
+endif
 obj-y += monitor.o
 obj-y += p2m.o
 obj-y += percpu.o
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 68adcac9fa..1b9fdb6ff5 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -154,13 +154,6 @@ struct page_info
 #define _PGC_need_scrub   _PGC_allocated
 #define PGC_need_scrub    PGC_allocated
 
-extern mfn_t directmap_mfn_start, directmap_mfn_end;
-extern vaddr_t directmap_virt_end;
-#ifdef CONFIG_ARM_64
-extern vaddr_t directmap_virt_start;
-extern unsigned long directmap_base_pdx;
-#endif
-
 #ifdef CONFIG_ARM_32
 #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
 #define is_xen_heap_mfn(mfn) ({                                 \
@@ -192,8 +185,6 @@ extern unsigned long total_pages;
 
 #define PDX_GROUP_SHIFT SECOND_SHIFT
 
-/* Boot-time pagetable setup */
-extern void setup_pagetables(unsigned long boot_phys_offset);
 /* Map FDT in boot pagetable */
 extern void *early_fdt_map(paddr_t fdt_paddr);
 /* Remove early mappings */
@@ -203,12 +194,6 @@ extern void remove_early_mappings(void);
 extern int init_secondary_pagetables(int cpu);
 /* Switch secondary CPUS to its own pagetables and finalise MMU setup */
 extern void mmu_init_secondary_cpu(void);
-/*
- * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous,
- * always-mapped memory. Base must be 32MB aligned and size a multiple of 32MB.
- * For Arm64, map the region in the directmap area.
- */
-extern void setup_directmap_mappings(unsigned long base_mfn, unsigned long nr_mfns);
 /* Map a frame table to cover physical addresses ps through pe */
 extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
 /* map a physical range in virtual memory */
@@ -256,6 +241,10 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 #define vmap_to_mfn(va)     maddr_to_mfn(virt_to_maddr((vaddr_t)va))
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
+#ifndef CONFIG_HAS_MPU
+#include <asm/mm_mmu.h>
+#endif
+
 /* Page-align address and convert to frame number format */
 #define paddr_to_pfn_aligned(paddr)    paddr_to_pfn(PAGE_ALIGN(paddr))
 
diff --git a/xen/arch/arm/include/asm/mm_mmu.h b/xen/arch/arm/include/asm/mm_mmu.h
new file mode 100644
index 0000000000..a5e63d8af8
--- /dev/null
+++ b/xen/arch/arm/include/asm/mm_mmu.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ARCH_ARM_MM_MMU__
+#define __ARCH_ARM_MM_MMU__
+
+extern mfn_t directmap_mfn_start, directmap_mfn_end;
+extern vaddr_t directmap_virt_end;
+#ifdef CONFIG_ARM_64
+extern vaddr_t directmap_virt_start;
+extern unsigned long directmap_base_pdx;
+#endif
+
+/* Boot-time pagetable setup */
+extern void setup_pagetables(unsigned long boot_phys_offset);
+#define setup_mm_mappings(boot_phys_offset) setup_pagetables(boot_phys_offset)
+
+/* Non-boot CPUs use this to find the correct pagetables. */
+extern uint64_t init_ttbr;
+/*
+ * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous,
+ * always-mapped memory. Base must be 32MB aligned and size a multiple of 32MB.
+ * For Arm64, map the region in the directmap area.
+ */
+extern void setup_directmap_mappings(unsigned long base_mfn,
+                                     unsigned long nr_mfns);
+
+#endif /* __ARCH_ARM_MM_MMU__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 8f15814c5e..e1ce2a62dc 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -2,371 +2,24 @@
 /*
  * xen/arch/arm/mm.c
  *
- * MMU code for an ARMv7-A with virt extensions.
+ * Memory management common code for MMU and MPU system.
  *
  * Tim Deegan <tim@xen.org>
  * Copyright (c) 2011 Citrix Systems.
  */
 
 #include <xen/domain_page.h>
-#include <xen/errno.h>
 #include <xen/grant_table.h>
-#include <xen/guest_access.h>
-#include <xen/init.h>
-#include <xen/libfdt/libfdt.h>
-#include <xen/mm.h>
-#include <xen/pfn.h>
-#include <xen/pmap.h>
 #include <xen/sched.h>
-#include <xen/sizes.h>
 #include <xen/types.h>
-#include <xen/vmap.h>
 
 #include <xsm/xsm.h>
 
-#include <asm/fixmap.h>
-#include <asm/setup.h>
-
-#include <public/memory.h>
-
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef virt_to_mfn
-#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
-#undef mfn_to_virt
-#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn))
-
-#ifdef NDEBUG
-static inline void
-__attribute__ ((__format__ (__printf__, 1, 2)))
-mm_printk(const char *fmt, ...) {}
-#else
-#define mm_printk(fmt, args...)             \
-    do                                      \
-    {                                       \
-        dprintk(XENLOG_ERR, fmt, ## args);  \
-        WARN();                             \
-    } while (0)
-#endif
-
-/* Static start-of-day pagetables that we use before the allocators
- * are up. These are used by all CPUs during bringup before switching
- * to the CPUs own pagetables.
- *
- * These pagetables have a very simple structure. They include:
- *  - 2MB worth of 4K mappings of xen at XEN_VIRT_START, boot_first and
- *    boot_second are used to populate the tables down to boot_third
- *    which contains the actual mapping.
- *  - a 1:1 mapping of xen at its current physical address. This uses a
- *    section mapping at whichever of boot_{pgtable,first,second}
- *    covers that physical address.
- *
- * For the boot CPU these mappings point to the address where Xen was
- * loaded by the bootloader. For secondary CPUs they point to the
- * relocated copy of Xen for the benefit of secondary CPUs.
- *
- * In addition to the above for the boot CPU the device-tree is
- * initially mapped in the boot misc slot. This mapping is not present
- * for secondary CPUs.
- *
- * Finally, if EARLY_PRINTK is enabled then xen_fixmap will be mapped
- * by the CPU once it has moved off the 1:1 mapping.
- */
-DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
-#ifdef CONFIG_ARM_64
-DEFINE_BOOT_PAGE_TABLE(boot_first);
-DEFINE_BOOT_PAGE_TABLE(boot_first_id);
-#endif
-DEFINE_BOOT_PAGE_TABLE(boot_second_id);
-DEFINE_BOOT_PAGE_TABLE(boot_third_id);
-DEFINE_BOOT_PAGE_TABLE(boot_second);
-DEFINE_BOOT_PAGE_TABLE(boot_third);
-
-/* Main runtime page tables */
-
-/*
- * For arm32 xen_pgtable are per-PCPU and are allocated before
- * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs.
- *
- * xen_second, xen_fixmap and xen_xenmap are always shared between all
- * PCPUs.
- */
-
-#ifdef CONFIG_ARM_64
-#define HYP_PT_ROOT_LEVEL 0
-static DEFINE_PAGE_TABLE(xen_pgtable);
-static DEFINE_PAGE_TABLE(xen_first);
-#define THIS_CPU_PGTABLE xen_pgtable
-#else
-#define HYP_PT_ROOT_LEVEL 1
-/* Per-CPU pagetable pages */
-/* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
-DEFINE_PER_CPU(lpae_t *, xen_pgtable);
-#define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
-/* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
-static DEFINE_PAGE_TABLE(cpu0_pgtable);
-#endif
-
-/* Common pagetable leaves */
-/* Second level page table used to cover Xen virtual address space */
-static DEFINE_PAGE_TABLE(xen_second);
-/* Third level page table used for fixmap */
-DEFINE_BOOT_PAGE_TABLE(xen_fixmap);
-/*
- * Third level page table used to map Xen itself with the XN bit set
- * as appropriate.
- */
-static DEFINE_PAGE_TABLE(xen_xenmap);
-
-/* Non-boot CPUs use this to find the correct pagetables. */
-uint64_t init_ttbr;
-
-static paddr_t phys_offset;
-
-/* Limits of the Xen heap */
-mfn_t directmap_mfn_start __read_mostly = INVALID_MFN_INITIALIZER;
-mfn_t directmap_mfn_end __read_mostly;
-vaddr_t directmap_virt_end __read_mostly;
-#ifdef CONFIG_ARM_64
-vaddr_t directmap_virt_start __read_mostly;
-unsigned long directmap_base_pdx __read_mostly;
-#endif
-
 unsigned long frametable_base_pdx __read_mostly;
-unsigned long frametable_virt_end __read_mostly;
 
 unsigned long max_page;
 unsigned long total_pages;
 
-extern char __init_begin[], __init_end[];
-
-/* Checking VA memory layout alignment. */
-static void __init __maybe_unused build_assertions(void)
-{
-    /* 2MB aligned regions */
-    BUILD_BUG_ON(XEN_VIRT_START & ~SECOND_MASK);
-    BUILD_BUG_ON(FIXMAP_ADDR(0) & ~SECOND_MASK);
-    /* 1GB aligned regions */
-#ifdef CONFIG_ARM_32
-    BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK);
-#else
-    BUILD_BUG_ON(DIRECTMAP_VIRT_START & ~FIRST_MASK);
-#endif
-    /* Page table structure constraints */
-#ifdef CONFIG_ARM_64
-    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
-#endif
-    BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
-#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
-    BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
-#endif
-    /*
-     * The boot code expects the regions XEN_VIRT_START, FIXMAP_ADDR(0),
-     * BOOT_FDT_VIRT_START to use the same 0th (arm64 only) and 1st
-     * slot in the page tables.
-     */
-#define CHECK_SAME_SLOT(level, virt1, virt2) \
-    BUILD_BUG_ON(level##_table_offset(virt1) != level##_table_offset(virt2))
-
-#ifdef CONFIG_ARM_64
-    CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, FIXMAP_ADDR(0));
-    CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, BOOT_FDT_VIRT_START);
-#endif
-    CHECK_SAME_SLOT(first, XEN_VIRT_START, FIXMAP_ADDR(0));
-    CHECK_SAME_SLOT(first, XEN_VIRT_START, BOOT_FDT_VIRT_START);
-
-#undef CHECK_SAME_SLOT
-}
-
-static lpae_t *xen_map_table(mfn_t mfn)
-{
-    /*
-     * During early boot, map_domain_page() may be unusable. Use the
-     * PMAP to map temporarily a page-table.
-     */
-    if ( system_state == SYS_STATE_early_boot )
-        return pmap_map(mfn);
-
-    return map_domain_page(mfn);
-}
-
-static void xen_unmap_table(const lpae_t *table)
-{
-    /*
-     * During early boot, xen_map_table() will not use map_domain_page()
-     * but the PMAP.
-     */
-    if ( system_state == SYS_STATE_early_boot )
-        pmap_unmap(table);
-    else
-        unmap_domain_page(table);
-}
-
-void dump_pt_walk(paddr_t ttbr, paddr_t addr,
-                  unsigned int root_level,
-                  unsigned int nr_root_tables)
-{
-    static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
-    const mfn_t root_mfn = maddr_to_mfn(ttbr);
-    const unsigned int offsets[4] = {
-        zeroeth_table_offset(addr),
-        first_table_offset(addr),
-        second_table_offset(addr),
-        third_table_offset(addr)
-    };
-    lpae_t pte, *mapping;
-    unsigned int level, root_table;
-
-#ifdef CONFIG_ARM_32
-    BUG_ON(root_level < 1);
-#endif
-    BUG_ON(root_level > 3);
-
-    if ( nr_root_tables > 1 )
-    {
-        /*
-         * Concatenated root-level tables. The table number will be
-         * the offset at the previous level. It is not possible to
-         * concatenate a level-0 root.
-         */
-        BUG_ON(root_level == 0);
-        root_table = offsets[root_level - 1];
-        printk("Using concatenated root table %u\n", root_table);
-        if ( root_table >= nr_root_tables )
-        {
-            printk("Invalid root table offset\n");
-            return;
-        }
-    }
-    else
-        root_table = 0;
-
-    mapping = xen_map_table(mfn_add(root_mfn, root_table));
-
-    for ( level = root_level; ; level++ )
-    {
-        if ( offsets[level] > XEN_PT_LPAE_ENTRIES )
-            break;
-
-        pte = mapping[offsets[level]];
-
-        printk("%s[0x%03x] = 0x%"PRIpaddr"\n",
-               level_strs[level], offsets[level], pte.bits);
-
-        if ( level == 3 || !pte.walk.valid || !pte.walk.table )
-            break;
-
-        /* For next iteration */
-        xen_unmap_table(mapping);
-        mapping = xen_map_table(lpae_get_mfn(pte));
-    }
-
-    xen_unmap_table(mapping);
-}
-
-void dump_hyp_walk(vaddr_t addr)
-{
-    uint64_t ttbr = READ_SYSREG64(TTBR0_EL2);
-
-    printk("Walking Hypervisor VA 0x%"PRIvaddr" "
-           "on CPU%d via TTBR 0x%016"PRIx64"\n",
-           addr, smp_processor_id(), ttbr);
-
-    dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1);
-}
-
-lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr)
-{
-    lpae_t e = (lpae_t) {
-        .pt = {
-            .valid = 1,           /* Mappings are present */
-            .table = 0,           /* Set to 1 for links and 4k maps */
-            .ai = attr,
-            .ns = 1,              /* Hyp mode is in the non-secure world */
-            .up = 1,              /* See below */
-            .ro = 0,              /* Assume read-write */
-            .af = 1,              /* No need for access tracking */
-            .ng = 1,              /* Makes TLB flushes easier */
-            .contig = 0,          /* Assume non-contiguous */
-            .xn = 1,              /* No need to execute outside .text */
-            .avail = 0,           /* Reference count for domheap mapping */
-        }};
-    /*
-     * For EL2 stage-1 page table, up (aka AP[1]) is RES1 as the translation
-     * regime applies to only one exception level (see D4.4.4 and G4.6.1
-     * in ARM DDI 0487B.a). If this changes, remember to update the
-     * hard-coded values in head.S too.
-     */
-
-    switch ( attr )
-    {
-    case MT_NORMAL_NC:
-        /*
-         * ARM ARM: Overlaying the shareability attribute (DDI
-         * 0406C.b B3-1376 to 1377)
-         *
-         * A memory region with a resultant memory type attribute of Normal,
-         * and a resultant cacheability attribute of Inner Non-cacheable,
-         * Outer Non-cacheable, must have a resultant shareability attribute
-         * of Outer Shareable, otherwise shareability is UNPREDICTABLE.
-         *
-         * On ARMv8 sharability is ignored and explicitly treated as Outer
-         * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable.
-         */
-        e.pt.sh = LPAE_SH_OUTER;
-        break;
-    case MT_DEVICE_nGnRnE:
-    case MT_DEVICE_nGnRE:
-        /*
-         * Shareability is ignored for non-Normal memory, Outer is as
-         * good as anything.
-         *
-         * On ARMv8 sharability is ignored and explicitly treated as Outer
-         * Shareable for any device memory type.
-         */
-        e.pt.sh = LPAE_SH_OUTER;
-        break;
-    default:
-        e.pt.sh = LPAE_SH_INNER;  /* Xen mappings are SMP coherent */
-        break;
-    }
-
-    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK));
-
-    lpae_set_mfn(e, mfn);
-
-    return e;
-}
-
-/* Map a 4k page in a fixmap entry */
-void set_fixmap(unsigned int map, mfn_t mfn, unsigned int flags)
-{
-    int res;
-
-    res = map_pages_to_xen(FIXMAP_ADDR(map), mfn, 1, flags);
-    BUG_ON(res != 0);
-}
-
-/* Remove a mapping from a fixmap entry */
-void clear_fixmap(unsigned int map)
-{
-    int res;
-
-    res = destroy_xen_mappings(FIXMAP_ADDR(map), FIXMAP_ADDR(map) + PAGE_SIZE);
-    BUG_ON(res != 0);
-}
-
-void *map_page_to_xen_misc(mfn_t mfn, unsigned int attributes)
-{
-    set_fixmap(FIXMAP_MISC, mfn, attributes);
-
-    return fix_to_virt(FIXMAP_MISC);
-}
-
-void unmap_page_from_xen_misc(void)
-{
-    clear_fixmap(FIXMAP_MISC);
-}
-
 void flush_page_to_ram(unsigned long mfn, bool sync_icache)
 {
     void *v = map_domain_page(_mfn(mfn));
@@ -386,878 +39,6 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
         invalidate_icache();
 }
 
-static inline lpae_t pte_of_xenaddr(vaddr_t va)
-{
-    paddr_t ma = va + phys_offset;
-
-    return mfn_to_xen_entry(maddr_to_mfn(ma), MT_NORMAL);
-}
-
-void * __init early_fdt_map(paddr_t fdt_paddr)
-{
-    /* We are using 2MB superpage for mapping the FDT */
-    paddr_t base_paddr = fdt_paddr & SECOND_MASK;
-    paddr_t offset;
-    void *fdt_virt;
-    uint32_t size;
-    int rc;
-
-    /*
-     * Check whether the physical FDT address is set and meets the minimum
-     * alignment requirement. Since we are relying on MIN_FDT_ALIGN to be at
-     * least 8 bytes so that we always access the magic and size fields
-     * of the FDT header after mapping the first chunk, double check if
-     * that is indeed the case.
-     */
-    BUILD_BUG_ON(MIN_FDT_ALIGN < 8);
-    if ( !fdt_paddr || fdt_paddr % MIN_FDT_ALIGN )
-        return NULL;
-
-    /* The FDT is mapped using 2MB superpage */
-    BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M);
-
-    rc = map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr),
-                          SZ_2M >> PAGE_SHIFT,
-                          PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
-    if ( rc )
-        panic("Unable to map the device-tree.\n");
-
-
-    offset = fdt_paddr % SECOND_SIZE;
-    fdt_virt = (void *)BOOT_FDT_VIRT_START + offset;
-
-    if ( fdt_magic(fdt_virt) != FDT_MAGIC )
-        return NULL;
-
-    size = fdt_totalsize(fdt_virt);
-    if ( size > MAX_FDT_SIZE )
-        return NULL;
-
-    if ( (offset + size) > SZ_2M )
-    {
-        rc = map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M,
-                              maddr_to_mfn(base_paddr + SZ_2M),
-                              SZ_2M >> PAGE_SHIFT,
-                              PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
-        if ( rc )
-            panic("Unable to map the device-tree\n");
-    }
-
-    return fdt_virt;
-}
-
-void __init remove_early_mappings(void)
-{
-    int rc;
-
-    /* destroy the _PAGE_BLOCK mapping */
-    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
-                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
-                             _PAGE_BLOCK);
-    BUG_ON(rc);
-}
-
-/*
- * After boot, Xen page-tables should not contain mapping that are both
- * Writable and eXecutables.
- *
- * This should be called on each CPU to enforce the policy.
- */
-static void xen_pt_enforce_wnx(void)
-{
-    WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2);
-    /*
-     * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized
-     * before flushing the TLBs.
-     */
-    isb();
-    flush_xen_tlb_local();
-}
-
-extern void switch_ttbr(uint64_t ttbr);
-
-/* Clear a translation table and clean & invalidate the cache */
-static void clear_table(void *table)
-{
-    clear_page(table);
-    clean_and_invalidate_dcache_va_range(table, PAGE_SIZE);
-}
-
-/* Boot-time pagetable setup.
- * Changes here may need matching changes in head.S */
-void __init setup_pagetables(unsigned long boot_phys_offset)
-{
-    uint64_t ttbr;
-    lpae_t pte, *p;
-    int i;
-
-    phys_offset = boot_phys_offset;
-
-#ifdef CONFIG_ARM_64
-    p = (void *) xen_pgtable;
-    p[0] = pte_of_xenaddr((uintptr_t)xen_first);
-    p[0].pt.table = 1;
-    p[0].pt.xn = 0;
-    p = (void *) xen_first;
-#else
-    p = (void *) cpu0_pgtable;
-#endif
-
-    /* Map xen second level page-table */
-    p[0] = pte_of_xenaddr((uintptr_t)(xen_second));
-    p[0].pt.table = 1;
-    p[0].pt.xn = 0;
-
-    /* Break up the Xen mapping into 4k pages and protect them separately. */
-    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
-    {
-        vaddr_t va = XEN_VIRT_START + (i << PAGE_SHIFT);
-
-        if ( !is_kernel(va) )
-            break;
-        pte = pte_of_xenaddr(va);
-        pte.pt.table = 1; /* 4k mappings always have this bit set */
-        if ( is_kernel_text(va) || is_kernel_inittext(va) )
-        {
-            pte.pt.xn = 0;
-            pte.pt.ro = 1;
-        }
-        if ( is_kernel_rodata(va) )
-            pte.pt.ro = 1;
-        xen_xenmap[i] = pte;
-    }
-
-    /* Initialise xen second level entries ... */
-    /* ... Xen's text etc */
-
-    pte = pte_of_xenaddr((vaddr_t)xen_xenmap);
-    pte.pt.table = 1;
-    xen_second[second_table_offset(XEN_VIRT_START)] = pte;
-
-    /* ... Fixmap */
-    pte = pte_of_xenaddr((vaddr_t)xen_fixmap);
-    pte.pt.table = 1;
-    xen_second[second_table_offset(FIXMAP_ADDR(0))] = pte;
-
-#ifdef CONFIG_ARM_64
-    ttbr = (uintptr_t) xen_pgtable + phys_offset;
-#else
-    ttbr = (uintptr_t) cpu0_pgtable + phys_offset;
-#endif
-
-    switch_ttbr(ttbr);
-
-    xen_pt_enforce_wnx();
-
-#ifdef CONFIG_ARM_32
-    per_cpu(xen_pgtable, 0) = cpu0_pgtable;
-#endif
-}
-
-static void clear_boot_pagetables(void)
-{
-    /*
-     * Clear the copy of the boot pagetables. Each secondary CPU
-     * rebuilds these itself (see head.S).
-     */
-    clear_table(boot_pgtable);
-#ifdef CONFIG_ARM_64
-    clear_table(boot_first);
-    clear_table(boot_first_id);
-#endif
-    clear_table(boot_second);
-    clear_table(boot_third);
-}
-
-#ifdef CONFIG_ARM_64
-int init_secondary_pagetables(int cpu)
-{
-    clear_boot_pagetables();
-
-    /* Set init_ttbr for this CPU coming up. All CPus share a single setof
-     * pagetables, but rewrite it each time for consistency with 32 bit. */
-    init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
-    clean_dcache(init_ttbr);
-    return 0;
-}
-#else
-int init_secondary_pagetables(int cpu)
-{
-    lpae_t *first;
-
-    first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
-
-    if ( !first )
-    {
-        printk("CPU%u: Unable to allocate the first page-table\n", cpu);
-        return -ENOMEM;
-    }
-
-    /* Initialise root pagetable from root of boot tables */
-    memcpy(first, cpu0_pgtable, PAGE_SIZE);
-    per_cpu(xen_pgtable, cpu) = first;
-
-    if ( !init_domheap_mappings(cpu) )
-    {
-        printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu);
-        per_cpu(xen_pgtable, cpu) = NULL;
-        free_xenheap_page(first);
-        return -ENOMEM;
-    }
-
-    clear_boot_pagetables();
-
-    /* Set init_ttbr for this CPU coming up */
-    init_ttbr = __pa(first);
-    clean_dcache(init_ttbr);
-
-    return 0;
-}
-#endif
-
-/* MMU setup for secondary CPUS (which already have paging enabled) */
-void mmu_init_secondary_cpu(void)
-{
-    xen_pt_enforce_wnx();
-}
-
-#ifdef CONFIG_ARM_32
-/*
- * Set up the direct-mapped xenheap:
- * up to 1GB of contiguous, always-mapped memory.
- */
-void __init setup_directmap_mappings(unsigned long base_mfn,
-                                     unsigned long nr_mfns)
-{
-    int rc;
-
-    rc = map_pages_to_xen(XENHEAP_VIRT_START, _mfn(base_mfn), nr_mfns,
-                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
-    if ( rc )
-        panic("Unable to setup the directmap mappings.\n");
-
-    /* Record where the directmap is, for translation routines. */
-    directmap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
-}
-#else /* CONFIG_ARM_64 */
-/* Map the region in the directmap area. */
-void __init setup_directmap_mappings(unsigned long base_mfn,
-                                     unsigned long nr_mfns)
-{
-    int rc;
-
-    /* First call sets the directmap physical and virtual offset. */
-    if ( mfn_eq(directmap_mfn_start, INVALID_MFN) )
-    {
-        unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
-
-        directmap_mfn_start = _mfn(base_mfn);
-        directmap_base_pdx = mfn_to_pdx(_mfn(base_mfn));
-        /*
-         * The base address may not be aligned to the first level
-         * size (e.g. 1GB when using 4KB pages). This would prevent
-         * superpage mappings for all the regions because the virtual
-         * address and machine address should both be suitably aligned.
-         *
-         * Prevent that by offsetting the start of the directmap virtual
-         * address.
-         */
-        directmap_virt_start = DIRECTMAP_VIRT_START +
-            (base_mfn - mfn_gb) * PAGE_SIZE;
-    }
-
-    if ( base_mfn < mfn_x(directmap_mfn_start) )
-        panic("cannot add directmap mapping at %lx below heap start %lx\n",
-              base_mfn, mfn_x(directmap_mfn_start));
-
-    rc = map_pages_to_xen((vaddr_t)__mfn_to_virt(base_mfn),
-                          _mfn(base_mfn), nr_mfns,
-                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
-    if ( rc )
-        panic("Unable to setup the directmap mappings.\n");
-}
-#endif
-
-/* Map a frame table to cover physical addresses ps through pe */
-void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
-{
-    unsigned long nr_pdxs = mfn_to_pdx(mfn_add(maddr_to_mfn(pe), -1)) -
-                            mfn_to_pdx(maddr_to_mfn(ps)) + 1;
-    unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
-    mfn_t base_mfn;
-    const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
-    int rc;
-
-    frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
-    /* Round up to 2M or 32M boundary, as appropriate. */
-    frametable_size = ROUNDUP(frametable_size, mapping_size);
-    base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
-
-    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
-                          frametable_size >> PAGE_SHIFT,
-                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
-    if ( rc )
-        panic("Unable to setup the frametable mappings.\n");
-
-    memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
-    memset(&frame_table[nr_pdxs], -1,
-           frametable_size - (nr_pdxs * sizeof(struct page_info)));
-
-    frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pdxs * sizeof(struct page_info));
-}
-
-void *__init arch_vmap_virt_end(void)
-{
-    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
-}
-
-/*
- * This function should only be used to remap device address ranges
- * TODO: add a check to verify this assumption
- */
-void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes)
-{
-    mfn_t mfn = _mfn(PFN_DOWN(pa));
-    unsigned int offs = pa & (PAGE_SIZE - 1);
-    unsigned int nr = PFN_UP(offs + len);
-    void *ptr = __vmap(&mfn, nr, 1, 1, attributes, VMAP_DEFAULT);
-
-    if ( ptr == NULL )
-        return NULL;
-
-    return ptr + offs;
-}
-
-void *ioremap(paddr_t pa, size_t len)
-{
-    return ioremap_attr(pa, len, PAGE_HYPERVISOR_NOCACHE);
-}
-
-static int create_xen_table(lpae_t *entry)
-{
-    mfn_t mfn;
-    void *p;
-    lpae_t pte;
-
-    if ( system_state != SYS_STATE_early_boot )
-    {
-        struct page_info *pg = alloc_domheap_page(NULL, 0);
-
-        if ( pg == NULL )
-            return -ENOMEM;
-
-        mfn = page_to_mfn(pg);
-    }
-    else
-        mfn = alloc_boot_pages(1, 1);
-
-    p = xen_map_table(mfn);
-    clear_page(p);
-    xen_unmap_table(p);
-
-    pte = mfn_to_xen_entry(mfn, MT_NORMAL);
-    pte.pt.table = 1;
-    write_pte(entry, pte);
-
-    return 0;
-}
-
-#define XEN_TABLE_MAP_FAILED 0
-#define XEN_TABLE_SUPER_PAGE 1
-#define XEN_TABLE_NORMAL_PAGE 2
-
-/*
- * Take the currently mapped table, find the corresponding entry,
- * and map the next table, if available.
- *
- * The read_only parameters indicates whether intermediate tables should
- * be allocated when not present.
- *
- * Return values:
- *  XEN_TABLE_MAP_FAILED: Either read_only was set and the entry
- *  was empty, or allocating a new page failed.
- *  XEN_TABLE_NORMAL_PAGE: next level mapped normally
- *  XEN_TABLE_SUPER_PAGE: The next entry points to a superpage.
- */
-static int xen_pt_next_level(bool read_only, unsigned int level,
-                             lpae_t **table, unsigned int offset)
-{
-    lpae_t *entry;
-    int ret;
-    mfn_t mfn;
-
-    entry = *table + offset;
-
-    if ( !lpae_is_valid(*entry) )
-    {
-        if ( read_only )
-            return XEN_TABLE_MAP_FAILED;
-
-        ret = create_xen_table(entry);
-        if ( ret )
-            return XEN_TABLE_MAP_FAILED;
-    }
-
-    /* The function xen_pt_next_level is never called at the 3rd level */
-    if ( lpae_is_mapping(*entry, level) )
-        return XEN_TABLE_SUPER_PAGE;
-
-    mfn = lpae_get_mfn(*entry);
-
-    xen_unmap_table(*table);
-    *table = xen_map_table(mfn);
-
-    return XEN_TABLE_NORMAL_PAGE;
-}
-
-/* Sanity check of the entry */
-static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
-                               unsigned int flags)
-{
-    /* Sanity check when modifying an entry. */
-    if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
-    {
-        /* We don't allow modifying an invalid entry. */
-        if ( !lpae_is_valid(entry) )
-        {
-            mm_printk("Modifying invalid entry is not allowed.\n");
-            return false;
-        }
-
-        /* We don't allow modifying a table entry */
-        if ( !lpae_is_mapping(entry, level) )
-        {
-            mm_printk("Modifying a table entry is not allowed.\n");
-            return false;
-        }
-
-        /* We don't allow changing memory attributes. */
-        if ( entry.pt.ai != PAGE_AI_MASK(flags) )
-        {
-            mm_printk("Modifying memory attributes is not allowed (0x%x -> 0x%x).\n",
-                      entry.pt.ai, PAGE_AI_MASK(flags));
-            return false;
-        }
-
-        /* We don't allow modifying entry with contiguous bit set. */
-        if ( entry.pt.contig )
-        {
-            mm_printk("Modifying entry with contiguous bit set is not allowed.\n");
-            return false;
-        }
-    }
-    /* Sanity check when inserting a mapping */
-    else if ( flags & _PAGE_PRESENT )
-    {
-        /* We should be here with a valid MFN. */
-        ASSERT(!mfn_eq(mfn, INVALID_MFN));
-
-        /*
-         * We don't allow replacing any valid entry.
-         *
-         * Note that the function xen_pt_update() relies on this
-         * assumption and will skip the TLB flush. The function will need
-         * to be updated if the check is relaxed.
-         */
-        if ( lpae_is_valid(entry) )
-        {
-            if ( lpae_is_mapping(entry, level) )
-                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
-                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
-            else
-                mm_printk("Trying to replace a table with a mapping.\n");
-            return false;
-        }
-    }
-    /* Sanity check when removing a mapping. */
-    else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
-    {
-        /* We should be here with an invalid MFN. */
-        ASSERT(mfn_eq(mfn, INVALID_MFN));
-
-        /* We don't allow removing a table */
-        if ( lpae_is_table(entry, level) )
-        {
-            mm_printk("Removing a table is not allowed.\n");
-            return false;
-        }
-
-        /* We don't allow removing a mapping with contiguous bit set. */
-        if ( entry.pt.contig )
-        {
-            mm_printk("Removing entry with contiguous bit set is not allowed.\n");
-            return false;
-        }
-    }
-    /* Sanity check when populating the page-table. No check so far. */
-    else
-    {
-        ASSERT(flags & _PAGE_POPULATE);
-        /* We should be here with an invalid MFN */
-        ASSERT(mfn_eq(mfn, INVALID_MFN));
-    }
-
-    return true;
-}
-
-/* Update an entry at the level @target. */
-static int xen_pt_update_entry(mfn_t root, unsigned long virt,
-                               mfn_t mfn, unsigned int target,
-                               unsigned int flags)
-{
-    int rc;
-    unsigned int level;
-    lpae_t *table;
-    /*
-     * The intermediate page tables are read-only when the MFN is not valid
-     * and we are not populating page table.
-     * This means we either modify permissions or remove an entry.
-     */
-    bool read_only = mfn_eq(mfn, INVALID_MFN) && !(flags & _PAGE_POPULATE);
-    lpae_t pte, *entry;
-
-    /* convenience aliases */
-    DECLARE_OFFSETS(offsets, (paddr_t)virt);
-
-    /* _PAGE_POPULATE and _PAGE_PRESENT should never be set together. */
-    ASSERT((flags & (_PAGE_POPULATE|_PAGE_PRESENT)) != (_PAGE_POPULATE|_PAGE_PRESENT));
-
-    table = xen_map_table(root);
-    for ( level = HYP_PT_ROOT_LEVEL; level < target; level++ )
-    {
-        rc = xen_pt_next_level(read_only, level, &table, offsets[level]);
-        if ( rc == XEN_TABLE_MAP_FAILED )
-        {
-            /*
-             * We are here because xen_pt_next_level has failed to map
-             * the intermediate page table (e.g the table does not exist
-             * and the pt is read-only). It is a valid case when
-             * removing a mapping as it may not exist in the page table.
-             * In this case, just ignore it.
-             */
-            if ( flags & (_PAGE_PRESENT|_PAGE_POPULATE) )
-            {
-                mm_printk("%s: Unable to map level %u\n", __func__, level);
-                rc = -ENOENT;
-                goto out;
-            }
-            else
-            {
-                rc = 0;
-                goto out;
-            }
-        }
-        else if ( rc != XEN_TABLE_NORMAL_PAGE )
-            break;
-    }
-
-    if ( level != target )
-    {
-        mm_printk("%s: Shattering superpage is not supported\n", __func__);
-        rc = -EOPNOTSUPP;
-        goto out;
-    }
-
-    entry = table + offsets[level];
-
-    rc = -EINVAL;
-    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
-        goto out;
-
-    /* If we are only populating page-table, then we are done. */
-    rc = 0;
-    if ( flags & _PAGE_POPULATE )
-        goto out;
-
-    /* We are removing the page */
-    if ( !(flags & _PAGE_PRESENT) )
-        memset(&pte, 0x00, sizeof(pte));
-    else
-    {
-        /* We are inserting a mapping => Create new pte. */
-        if ( !mfn_eq(mfn, INVALID_MFN) )
-        {
-            pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
-
-            /*
-             * First and second level pages set pte.pt.table = 0, but
-             * third level entries set pte.pt.table = 1.
-             */
-            pte.pt.table = (level == 3);
-        }
-        else /* We are updating the permission => Copy the current pte. */
-            pte = *entry;
-
-        /* Set permission */
-        pte.pt.ro = PAGE_RO_MASK(flags);
-        pte.pt.xn = PAGE_XN_MASK(flags);
-        /* Set contiguous bit */
-        pte.pt.contig = !!(flags & _PAGE_CONTIG);
-    }
-
-    write_pte(entry, pte);
-
-    rc = 0;
-
-out:
-    xen_unmap_table(table);
-
-    return rc;
-}
-
-/* Return the level where mapping should be done */
-static int xen_pt_mapping_level(unsigned long vfn, mfn_t mfn, unsigned long nr,
-                                unsigned int flags)
-{
-    unsigned int level;
-    unsigned long mask;
-
-    /*
-      * Don't take into account the MFN when removing mapping (i.e
-      * MFN_INVALID) to calculate the correct target order.
-      *
-      * Per the Arm Arm, `vfn` and `mfn` must be both superpage aligned.
-      * They are or-ed together and then checked against the size of
-      * each level.
-      *
-      * `left` is not included and checked separately to allow
-      * superpage mapping even if it is not properly aligned (the
-      * user may have asked to map 2MB + 4k).
-      */
-     mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
-     mask |= vfn;
-
-     /*
-      * Always use level 3 mapping unless the caller request block
-      * mapping.
-      */
-     if ( likely(!(flags & _PAGE_BLOCK)) )
-         level = 3;
-     else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) &&
-               (nr >= BIT(FIRST_ORDER, UL)) )
-         level = 1;
-     else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) &&
-               (nr >= BIT(SECOND_ORDER, UL)) )
-         level = 2;
-     else
-         level = 3;
-
-     return level;
-}
-
-#define XEN_PT_4K_NR_CONTIG 16
-
-/*
- * Check whether the contiguous bit can be set. Return the number of
- * contiguous entry allowed. If not allowed, return 1.
- */
-static unsigned int xen_pt_check_contig(unsigned long vfn, mfn_t mfn,
-                                        unsigned int level, unsigned long left,
-                                        unsigned int flags)
-{
-    unsigned long nr_contig;
-
-    /*
-     * Allow the contiguous bit to set when the caller requests block
-     * mapping.
-     */
-    if ( !(flags & _PAGE_BLOCK) )
-        return 1;
-
-    /*
-     * We don't allow to remove mapping with the contiguous bit set.
-     * So shortcut the logic and directly return 1.
-     */
-    if ( mfn_eq(mfn, INVALID_MFN) )
-        return 1;
-
-    /*
-     * The number of contiguous entries varies depending on the page
-     * granularity used. The logic below assumes 4KB.
-     */
-    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
-
-    /*
-     * In order to enable the contiguous bit, we should have enough entries
-     * to map left and both the virtual and physical address should be
-     * aligned to the size of 16 translation tables entries.
-     */
-    nr_contig = BIT(XEN_PT_LEVEL_ORDER(level), UL) * XEN_PT_4K_NR_CONTIG;
-
-    if ( (left < nr_contig) || ((mfn_x(mfn) | vfn) & (nr_contig - 1)) )
-        return 1;
-
-    return XEN_PT_4K_NR_CONTIG;
-}
-
-static DEFINE_SPINLOCK(xen_pt_lock);
-
-static int xen_pt_update(unsigned long virt,
-                         mfn_t mfn,
-                         /* const on purpose as it is used for TLB flush */
-                         const unsigned long nr_mfns,
-                         unsigned int flags)
-{
-    int rc = 0;
-    unsigned long vfn = virt >> PAGE_SHIFT;
-    unsigned long left = nr_mfns;
-
-    /*
-     * For arm32, page-tables are different on each CPUs. Yet, they share
-     * some common mappings. It is assumed that only common mappings
-     * will be modified with this function.
-     *
-     * XXX: Add a check.
-     */
-    const mfn_t root = maddr_to_mfn(READ_SYSREG64(TTBR0_EL2));
-
-    /*
-     * The hardware was configured to forbid mapping both writeable and
-     * executable.
-     * When modifying/creating mapping (i.e _PAGE_PRESENT is set),
-     * prevent any update if this happen.
-     */
-    if ( (flags & _PAGE_PRESENT) && !PAGE_RO_MASK(flags) &&
-         !PAGE_XN_MASK(flags) )
-    {
-        mm_printk("Mappings should not be both Writeable and Executable.\n");
-        return -EINVAL;
-    }
-
-    if ( flags & _PAGE_CONTIG )
-    {
-        mm_printk("_PAGE_CONTIG is an internal only flag.\n");
-        return -EINVAL;
-    }
-
-    if ( !IS_ALIGNED(virt, PAGE_SIZE) )
-    {
-        mm_printk("The virtual address is not aligned to the page-size.\n");
-        return -EINVAL;
-    }
-
-    spin_lock(&xen_pt_lock);
-
-    while ( left )
-    {
-        unsigned int order, level, nr_contig, new_flags;
-
-        level = xen_pt_mapping_level(vfn, mfn, left, flags);
-        order = XEN_PT_LEVEL_ORDER(level);
-
-        ASSERT(left >= BIT(order, UL));
-
-        /*
-         * Check if we can set the contiguous mapping and update the
-         * flags accordingly.
-         */
-        nr_contig = xen_pt_check_contig(vfn, mfn, level, left, flags);
-        new_flags = flags | ((nr_contig > 1) ? _PAGE_CONTIG : 0);
-
-        for ( ; nr_contig > 0; nr_contig-- )
-        {
-            rc = xen_pt_update_entry(root, vfn << PAGE_SHIFT, mfn, level,
-                                     new_flags);
-            if ( rc )
-                break;
-
-            vfn += 1U << order;
-            if ( !mfn_eq(mfn, INVALID_MFN) )
-                mfn = mfn_add(mfn, 1U << order);
-
-            left -= (1U << order);
-        }
-
-        if ( rc )
-            break;
-    }
-
-    /*
-     * The TLBs flush can be safely skipped when a mapping is inserted
-     * as we don't allow mapping replacement (see xen_pt_check_entry()).
-     *
-     * For all the other cases, the TLBs will be flushed unconditionally
-     * even if the mapping has failed. This is because we may have
-     * partially modified the PT. This will prevent any unexpected
-     * behavior afterwards.
-     */
-    if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) )
-        flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
-
-    spin_unlock(&xen_pt_lock);
-
-    return rc;
-}
-
-int map_pages_to_xen(unsigned long virt,
-                     mfn_t mfn,
-                     unsigned long nr_mfns,
-                     unsigned int flags)
-{
-    return xen_pt_update(virt, mfn, nr_mfns, flags);
-}
-
-int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
-{
-    return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
-}
-
-int destroy_xen_mappings(unsigned long s, unsigned long e)
-{
-    ASSERT(IS_ALIGNED(s, PAGE_SIZE));
-    ASSERT(IS_ALIGNED(e, PAGE_SIZE));
-    ASSERT(s <= e);
-    return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, 0);
-}
-
-int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
-{
-    ASSERT(IS_ALIGNED(s, PAGE_SIZE));
-    ASSERT(IS_ALIGNED(e, PAGE_SIZE));
-    ASSERT(s <= e);
-    return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, flags);
-}
-
-/* Release all __init and __initdata ranges to be reused */
-void free_init_memory(void)
-{
-    paddr_t pa = virt_to_maddr(__init_begin);
-    unsigned long len = __init_end - __init_begin;
-    uint32_t insn;
-    unsigned int i, nr = len / sizeof(insn);
-    uint32_t *p;
-    int rc;
-
-    rc = modify_xen_mappings((unsigned long)__init_begin,
-                             (unsigned long)__init_end, PAGE_HYPERVISOR_RW);
-    if ( rc )
-        panic("Unable to map RW the init section (rc = %d)\n", rc);
-
-    /*
-     * From now on, init will not be used for execution anymore,
-     * so nuke the instruction cache to remove entries related to init.
-     */
-    invalidate_icache_local();
-
-#ifdef CONFIG_ARM_32
-    /* udf instruction i.e (see A8.8.247 in ARM DDI 0406C.c) */
-    insn = 0xe7f000f0;
-#else
-    insn = AARCH64_BREAK_FAULT;
-#endif
-    p = (uint32_t *)__init_begin;
-    for ( i = 0; i < nr; i++ )
-        *(p + i) = insn;
-
-    rc = destroy_xen_mappings((unsigned long)__init_begin,
-                              (unsigned long)__init_end);
-    if ( rc )
-        panic("Unable to remove the init section (rc = %d)\n", rc);
-
-    init_domheap_pages(pa, pa + len);
-    printk("Freed %ldkB init memory.\n", (long)(__init_end-__init_begin)>>10);
-}
-
 void arch_dump_shared_mem_info(void)
 {
 }
@@ -1319,137 +100,6 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_unlock(&d->page_alloc_lock);
 }
 
-int xenmem_add_to_physmap_one(
-    struct domain *d,
-    unsigned int space,
-    union add_to_physmap_extra extra,
-    unsigned long idx,
-    gfn_t gfn)
-{
-    mfn_t mfn = INVALID_MFN;
-    int rc;
-    p2m_type_t t;
-    struct page_info *page = NULL;
-
-    switch ( space )
-    {
-    case XENMAPSPACE_grant_table:
-        rc = gnttab_map_frame(d, idx, gfn, &mfn);
-        if ( rc )
-            return rc;
-
-        /* Need to take care of the reference obtained in gnttab_map_frame(). */
-        page = mfn_to_page(mfn);
-        t = p2m_ram_rw;
-
-        break;
-    case XENMAPSPACE_shared_info:
-        if ( idx != 0 )
-            return -EINVAL;
-
-        mfn = virt_to_mfn(d->shared_info);
-        t = p2m_ram_rw;
-
-        break;
-    case XENMAPSPACE_gmfn_foreign:
-    {
-        struct domain *od;
-        p2m_type_t p2mt;
-
-        od = get_pg_owner(extra.foreign_domid);
-        if ( od == NULL )
-            return -ESRCH;
-
-        if ( od == d )
-        {
-            put_pg_owner(od);
-            return -EINVAL;
-        }
-
-        rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
-        if ( rc )
-        {
-            put_pg_owner(od);
-            return rc;
-        }
-
-        /* Take reference to the foreign domain page.
-         * Reference will be released in XENMEM_remove_from_physmap */
-        page = get_page_from_gfn(od, idx, &p2mt, P2M_ALLOC);
-        if ( !page )
-        {
-            put_pg_owner(od);
-            return -EINVAL;
-        }
-
-        if ( p2m_is_ram(p2mt) )
-            t = (p2mt == p2m_ram_rw) ? p2m_map_foreign_rw : p2m_map_foreign_ro;
-        else
-        {
-            put_page(page);
-            put_pg_owner(od);
-            return -EINVAL;
-        }
-
-        mfn = page_to_mfn(page);
-
-        put_pg_owner(od);
-        break;
-    }
-    case XENMAPSPACE_dev_mmio:
-        rc = map_dev_mmio_page(d, gfn, _mfn(idx));
-        return rc;
-
-    default:
-        return -ENOSYS;
-    }
-
-    /*
-     * Map at new location. Here we need to map xenheap RAM page differently
-     * because we need to store the valid GFN and make sure that nothing was
-     * mapped before (the stored GFN is invalid). And these actions need to be
-     * performed with the P2M lock held. The guest_physmap_add_entry() is just
-     * a wrapper on top of p2m_set_entry().
-     */
-    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
-        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
-    else
-    {
-        struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-        p2m_write_lock(p2m);
-        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
-        {
-            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
-            if ( !rc )
-                page_set_xenheap_gfn(mfn_to_page(mfn), gfn);
-        }
-        else
-            /*
-             * Mandate the caller to first unmap the page before mapping it
-             * again. This is to prevent Xen creating an unwanted hole in
-             * the P2M. For instance, this could happen if the firmware stole
-             * a RAM address for mapping the shared_info page into but forgot
-             * to unmap it afterwards.
-             */
-            rc = -EBUSY;
-        p2m_write_unlock(p2m);
-    }
-
-    /*
-     * For XENMAPSPACE_gmfn_foreign if we failed to add the mapping, we need
-     * to drop the reference we took earlier. In all other cases we need to
-     * drop any reference we took earlier (perhaps indirectly).
-     */
-    if ( space == XENMAPSPACE_gmfn_foreign ? rc : page != NULL )
-    {
-        ASSERT(page != NULL);
-        put_page(page);
-    }
-
-    return rc;
-}
-
 long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( op )
diff --git a/xen/arch/arm/mm_mmu.c b/xen/arch/arm/mm_mmu.c
new file mode 100644
index 0000000000..72b4909766
--- /dev/null
+++ b/xen/arch/arm/mm_mmu.c
@@ -0,0 +1,1376 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * xen/arch/arm/mm.c
+ *
+ * MMU code for an ARMv7-A with virt extensions.
+ *
+ * Tim Deegan <tim@xen.org>
+ * Copyright (c) 2011 Citrix Systems.
+ */
+
+#include <xen/domain_page.h>
+#include <xen/errno.h>
+#include <xen/grant_table.h>
+#include <xen/guest_access.h>
+#include <xen/init.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/mm.h>
+#include <xen/pfn.h>
+#include <xen/pmap.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
+#include <xen/types.h>
+#include <xen/vmap.h>
+
+#include <xsm/xsm.h>
+
+#include <asm/fixmap.h>
+#include <asm/setup.h>
+
+#include <public/memory.h>
+
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+#undef mfn_to_virt
+#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn))
+
+#ifdef NDEBUG
+static inline void
+__attribute__ ((__format__ (__printf__, 1, 2)))
+mm_printk(const char *fmt, ...) {}
+#else
+#define mm_printk(fmt, args...)             \
+    do                                      \
+    {                                       \
+        dprintk(XENLOG_ERR, fmt, ## args);  \
+        WARN();                             \
+    } while (0)
+#endif
+
+/* Static start-of-day pagetables that we use before the allocators
+ * are up. These are used by all CPUs during bringup before switching
+ * to the CPU's own pagetables.
+ *
+ * These pagetables have a very simple structure. They include:
+ *  - 2MB worth of 4K mappings of xen at XEN_VIRT_START, boot_first and
+ *    boot_second are used to populate the tables down to boot_third
+ *    which contains the actual mapping.
+ *  - a 1:1 mapping of xen at its current physical address. This uses a
+ *    section mapping at whichever of boot_{pgtable,first,second}
+ *    covers that physical address.
+ *
+ * For the boot CPU these mappings point to the address where Xen was
+ * loaded by the bootloader. For secondary CPUs they point to the
+ * relocated copy of Xen.
+ *
+ * In addition to the above for the boot CPU the device-tree is
+ * initially mapped in the boot misc slot. This mapping is not present
+ * for secondary CPUs.
+ *
+ * Finally, if EARLY_PRINTK is enabled then xen_fixmap will be mapped
+ * by the CPU once it has moved off the 1:1 mapping.
+ */
+DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
+#ifdef CONFIG_ARM_64
+DEFINE_BOOT_PAGE_TABLE(boot_first);
+DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+#endif
+DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+DEFINE_BOOT_PAGE_TABLE(boot_third_id);
+DEFINE_BOOT_PAGE_TABLE(boot_second);
+DEFINE_BOOT_PAGE_TABLE(boot_third);
+
+/* Main runtime page tables */
+
+/*
+ * For arm32, xen_pgtable is per-PCPU and is allocated before
+ * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs.
+ *
+ * xen_second, xen_fixmap and xen_xenmap are always shared between all
+ * PCPUs.
+ */
+
+#ifdef CONFIG_ARM_64
+#define HYP_PT_ROOT_LEVEL 0
+static DEFINE_PAGE_TABLE(xen_pgtable);
+static DEFINE_PAGE_TABLE(xen_first);
+#define THIS_CPU_PGTABLE xen_pgtable
+#else
+#define HYP_PT_ROOT_LEVEL 1
+/* Per-CPU pagetable pages */
+/* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
+DEFINE_PER_CPU(lpae_t *, xen_pgtable);
+#define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
+/* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
+static DEFINE_PAGE_TABLE(cpu0_pgtable);
+#endif
+
+/* Common pagetable leaves */
+/* Second level page table used to cover Xen virtual address space */
+static DEFINE_PAGE_TABLE(xen_second);
+/* Third level page table used for fixmap */
+DEFINE_BOOT_PAGE_TABLE(xen_fixmap);
+/*
+ * Third level page table used to map Xen itself with the XN bit set
+ * as appropriate.
+ */
+static DEFINE_PAGE_TABLE(xen_xenmap);
+
+/* Non-boot CPUs use this to find the correct pagetables. */
+uint64_t init_ttbr;
+
+static paddr_t phys_offset;
+
+/* Limits of the Xen heap */
+mfn_t directmap_mfn_start __read_mostly = INVALID_MFN_INITIALIZER;
+mfn_t directmap_mfn_end __read_mostly;
+vaddr_t directmap_virt_end __read_mostly;
+#ifdef CONFIG_ARM_64
+vaddr_t directmap_virt_start __read_mostly;
+unsigned long directmap_base_pdx __read_mostly;
+#endif
+
+unsigned long frametable_virt_end __read_mostly;
+
+extern char __init_begin[], __init_end[];
+
+/* Checking VA memory layout alignment. */
+static void __init __maybe_unused build_assertions(void)
+{
+    /* 2MB aligned regions */
+    BUILD_BUG_ON(XEN_VIRT_START & ~SECOND_MASK);
+    BUILD_BUG_ON(FIXMAP_ADDR(0) & ~SECOND_MASK);
+    /* 1GB aligned regions */
+#ifdef CONFIG_ARM_32
+    BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK);
+#else
+    BUILD_BUG_ON(DIRECTMAP_VIRT_START & ~FIRST_MASK);
+#endif
+    /* Page table structure constraints */
+#ifdef CONFIG_ARM_64
+    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
+#endif
+    BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
+#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
+    BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
+#endif
+    /*
+     * The boot code expects the regions XEN_VIRT_START, FIXMAP_ADDR(0),
+     * BOOT_FDT_VIRT_START to use the same 0th (arm64 only) and 1st
+     * slot in the page tables.
+     */
+#define CHECK_SAME_SLOT(level, virt1, virt2) \
+    BUILD_BUG_ON(level##_table_offset(virt1) != level##_table_offset(virt2))
+
+#ifdef CONFIG_ARM_64
+    CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, FIXMAP_ADDR(0));
+    CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, BOOT_FDT_VIRT_START);
+#endif
+    CHECK_SAME_SLOT(first, XEN_VIRT_START, FIXMAP_ADDR(0));
+    CHECK_SAME_SLOT(first, XEN_VIRT_START, BOOT_FDT_VIRT_START);
+
+#undef CHECK_SAME_SLOT
+}
+
+static lpae_t *xen_map_table(mfn_t mfn)
+{
+    /*
+     * During early boot, map_domain_page() may be unusable. Use the
+     * PMAP to temporarily map a page-table.
+     */
+    if ( system_state == SYS_STATE_early_boot )
+        return pmap_map(mfn);
+
+    return map_domain_page(mfn);
+}
+
+static void xen_unmap_table(const lpae_t *table)
+{
+    /*
+     * During early boot, xen_map_table() will not use map_domain_page()
+     * but the PMAP.
+     */
+    if ( system_state == SYS_STATE_early_boot )
+        pmap_unmap(table);
+    else
+        unmap_domain_page(table);
+}
+
+void dump_pt_walk(paddr_t ttbr, paddr_t addr,
+                  unsigned int root_level,
+                  unsigned int nr_root_tables)
+{
+    static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
+    const mfn_t root_mfn = maddr_to_mfn(ttbr);
+    const unsigned int offsets[4] = {
+        zeroeth_table_offset(addr),
+        first_table_offset(addr),
+        second_table_offset(addr),
+        third_table_offset(addr)
+    };
+    lpae_t pte, *mapping;
+    unsigned int level, root_table;
+
+#ifdef CONFIG_ARM_32
+    BUG_ON(root_level < 1);
+#endif
+    BUG_ON(root_level > 3);
+
+    if ( nr_root_tables > 1 )
+    {
+        /*
+         * Concatenated root-level tables. The table number will be
+         * the offset at the previous level. It is not possible to
+         * concatenate a level-0 root.
+         */
+        BUG_ON(root_level == 0);
+        root_table = offsets[root_level - 1];
+        printk("Using concatenated root table %u\n", root_table);
+        if ( root_table >= nr_root_tables )
+        {
+            printk("Invalid root table offset\n");
+            return;
+        }
+    }
+    else
+        root_table = 0;
+
+    mapping = xen_map_table(mfn_add(root_mfn, root_table));
+
+    for ( level = root_level; ; level++ )
+    {
+        if ( offsets[level] > XEN_PT_LPAE_ENTRIES )
+            break;
+
+        pte = mapping[offsets[level]];
+
+        printk("%s[0x%03x] = 0x%"PRIpaddr"\n",
+               level_strs[level], offsets[level], pte.bits);
+
+        if ( level == 3 || !pte.walk.valid || !pte.walk.table )
+            break;
+
+        /* For next iteration */
+        xen_unmap_table(mapping);
+        mapping = xen_map_table(lpae_get_mfn(pte));
+    }
+
+    xen_unmap_table(mapping);
+}
+
+void dump_hyp_walk(vaddr_t addr)
+{
+    uint64_t ttbr = READ_SYSREG64(TTBR0_EL2);
+
+    printk("Walking Hypervisor VA 0x%"PRIvaddr" "
+           "on CPU%d via TTBR 0x%016"PRIx64"\n",
+           addr, smp_processor_id(), ttbr);
+
+    dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1);
+}
+
+lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr)
+{
+    lpae_t e = (lpae_t) {
+        .pt = {
+            .valid = 1,           /* Mappings are present */
+            .table = 0,           /* Set to 1 for links and 4k maps */
+            .ai = attr,
+            .ns = 1,              /* Hyp mode is in the non-secure world */
+            .up = 1,              /* See below */
+            .ro = 0,              /* Assume read-write */
+            .af = 1,              /* No need for access tracking */
+            .ng = 1,              /* Makes TLB flushes easier */
+            .contig = 0,          /* Assume non-contiguous */
+            .xn = 1,              /* No need to execute outside .text */
+            .avail = 0,           /* Reference count for domheap mapping */
+        }};
+    /*
+     * For EL2 stage-1 page table, up (aka AP[1]) is RES1 as the translation
+     * regime applies to only one exception level (see D4.4.4 and G4.6.1
+     * in ARM DDI 0487B.a). If this changes, remember to update the
+     * hard-coded values in head.S too.
+     */
+
+    switch ( attr )
+    {
+    case MT_NORMAL_NC:
+        /*
+         * ARM ARM: Overlaying the shareability attribute (DDI
+         * 0406C.b B3-1376 to 1377)
+         *
+         * A memory region with a resultant memory type attribute of Normal,
+         * and a resultant cacheability attribute of Inner Non-cacheable,
+         * Outer Non-cacheable, must have a resultant shareability attribute
+         * of Outer Shareable, otherwise shareability is UNPREDICTABLE.
+         *
+         * On ARMv8 shareability is ignored and explicitly treated as Outer
+         * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable.
+         */
+        e.pt.sh = LPAE_SH_OUTER;
+        break;
+    case MT_DEVICE_nGnRnE:
+    case MT_DEVICE_nGnRE:
+        /*
+         * Shareability is ignored for non-Normal memory, Outer is as
+         * good as anything.
+         *
+         * On ARMv8 shareability is ignored and explicitly treated as Outer
+         * Shareable for any device memory type.
+         */
+        e.pt.sh = LPAE_SH_OUTER;
+        break;
+    default:
+        e.pt.sh = LPAE_SH_INNER;  /* Xen mappings are SMP coherent */
+        break;
+    }
+
+    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK));
+
+    lpae_set_mfn(e, mfn);
+
+    return e;
+}
+
+/* Map a 4k page in a fixmap entry */
+void set_fixmap(unsigned int map, mfn_t mfn, unsigned int flags)
+{
+    int res;
+
+    res = map_pages_to_xen(FIXMAP_ADDR(map), mfn, 1, flags);
+    BUG_ON(res != 0);
+}
+
+/* Remove a mapping from a fixmap entry */
+void clear_fixmap(unsigned int map)
+{
+    int res;
+
+    res = destroy_xen_mappings(FIXMAP_ADDR(map), FIXMAP_ADDR(map) + PAGE_SIZE);
+    BUG_ON(res != 0);
+}
+
+void *map_page_to_xen_misc(mfn_t mfn, unsigned int attributes)
+{
+    set_fixmap(FIXMAP_MISC, mfn, attributes);
+
+    return fix_to_virt(FIXMAP_MISC);
+}
+
+void unmap_page_from_xen_misc(void)
+{
+    clear_fixmap(FIXMAP_MISC);
+}
+
+static inline lpae_t pte_of_xenaddr(vaddr_t va)
+{
+    paddr_t ma = va + phys_offset;
+
+    return mfn_to_xen_entry(maddr_to_mfn(ma), MT_NORMAL);
+}
+
+void * __init early_fdt_map(paddr_t fdt_paddr)
+{
+    /* We are using a 2MB superpage for mapping the FDT */
+    paddr_t base_paddr = fdt_paddr & SECOND_MASK;
+    paddr_t offset;
+    void *fdt_virt;
+    uint32_t size;
+    int rc;
+
+    /*
+     * Check whether the physical FDT address is set and meets the minimum
+     * alignment requirement. We rely on MIN_FDT_ALIGN being at least
+     * 8 bytes so that we can always access the magic and size fields
+     * of the FDT header after mapping the first chunk; double-check
+     * that this is indeed the case.
+     */
+    BUILD_BUG_ON(MIN_FDT_ALIGN < 8);
+    if ( !fdt_paddr || fdt_paddr % MIN_FDT_ALIGN )
+        return NULL;
+
+    /* The FDT is mapped using a 2MB superpage */
+    BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M);
+
+    rc = map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr),
+                          SZ_2M >> PAGE_SHIFT,
+                          PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
+    if ( rc )
+        panic("Unable to map the device-tree.\n");
+
+    offset = fdt_paddr % SECOND_SIZE;
+    fdt_virt = (void *)BOOT_FDT_VIRT_START + offset;
+
+    if ( fdt_magic(fdt_virt) != FDT_MAGIC )
+        return NULL;
+
+    size = fdt_totalsize(fdt_virt);
+    if ( size > MAX_FDT_SIZE )
+        return NULL;
+
+    if ( (offset + size) > SZ_2M )
+    {
+        rc = map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M,
+                              maddr_to_mfn(base_paddr + SZ_2M),
+                              SZ_2M >> PAGE_SHIFT,
+                              PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
+        if ( rc )
+            panic("Unable to map the device-tree\n");
+    }
+
+    return fdt_virt;
+}
+
+void __init remove_early_mappings(void)
+{
+    int rc;
+
+    /* destroy the _PAGE_BLOCK mapping */
+    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
+                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
+                             _PAGE_BLOCK);
+    BUG_ON(rc);
+}
+
+/*
+ * After boot, Xen page-tables should not contain mappings that are both
+ * Writable and eXecutable.
+ *
+ * This should be called on each CPU to enforce the policy.
+ */
+static void xen_pt_enforce_wnx(void)
+{
+    WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2);
+    /*
+     * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized
+     * before flushing the TLBs.
+     */
+    isb();
+    flush_xen_tlb_local();
+}
+
+extern void switch_ttbr(uint64_t ttbr);
+
+/* Clear a translation table and clean & invalidate the cache */
+static void clear_table(void *table)
+{
+    clear_page(table);
+    clean_and_invalidate_dcache_va_range(table, PAGE_SIZE);
+}
+
+/*
+ * Boot-time pagetable setup.
+ * Changes here may need matching changes in head.S.
+ */
+void __init setup_pagetables(unsigned long boot_phys_offset)
+{
+    uint64_t ttbr;
+    lpae_t pte, *p;
+    int i;
+
+    phys_offset = boot_phys_offset;
+
+#ifdef CONFIG_ARM_64
+    p = (void *) xen_pgtable;
+    p[0] = pte_of_xenaddr((uintptr_t)xen_first);
+    p[0].pt.table = 1;
+    p[0].pt.xn = 0;
+    p = (void *) xen_first;
+#else
+    p = (void *) cpu0_pgtable;
+#endif
+
+    /* Map xen second level page-table */
+    p[0] = pte_of_xenaddr((uintptr_t)(xen_second));
+    p[0].pt.table = 1;
+    p[0].pt.xn = 0;
+
+    /* Break up the Xen mapping into 4k pages and protect them separately. */
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+    {
+        vaddr_t va = XEN_VIRT_START + (i << PAGE_SHIFT);
+
+        if ( !is_kernel(va) )
+            break;
+        pte = pte_of_xenaddr(va);
+        pte.pt.table = 1; /* 4k mappings always have this bit set */
+        if ( is_kernel_text(va) || is_kernel_inittext(va) )
+        {
+            pte.pt.xn = 0;
+            pte.pt.ro = 1;
+        }
+        if ( is_kernel_rodata(va) )
+            pte.pt.ro = 1;
+        xen_xenmap[i] = pte;
+    }
+
+    /* Initialise xen second level entries ... */
+    /* ... Xen's text etc */
+
+    pte = pte_of_xenaddr((vaddr_t)xen_xenmap);
+    pte.pt.table = 1;
+    xen_second[second_table_offset(XEN_VIRT_START)] = pte;
+
+    /* ... Fixmap */
+    pte = pte_of_xenaddr((vaddr_t)xen_fixmap);
+    pte.pt.table = 1;
+    xen_second[second_table_offset(FIXMAP_ADDR(0))] = pte;
+
+#ifdef CONFIG_ARM_64
+    ttbr = (uintptr_t) xen_pgtable + phys_offset;
+#else
+    ttbr = (uintptr_t) cpu0_pgtable + phys_offset;
+#endif
+
+    switch_ttbr(ttbr);
+
+    xen_pt_enforce_wnx();
+
+#ifdef CONFIG_ARM_32
+    per_cpu(xen_pgtable, 0) = cpu0_pgtable;
+#endif
+}
+
+static void clear_boot_pagetables(void)
+{
+    /*
+     * Clear the copy of the boot pagetables. Each secondary CPU
+     * rebuilds these itself (see head.S).
+     */
+    clear_table(boot_pgtable);
+#ifdef CONFIG_ARM_64
+    clear_table(boot_first);
+    clear_table(boot_first_id);
+#endif
+    clear_table(boot_second);
+    clear_table(boot_third);
+}
+
+#ifdef CONFIG_ARM_64
+int init_secondary_pagetables(int cpu)
+{
+    clear_boot_pagetables();
+
+    /*
+     * Set init_ttbr for this CPU coming up. All CPUs share a single set
+     * of pagetables, but rewrite it each time for consistency with
+     * 32-bit.
+     */
+    init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
+    clean_dcache(init_ttbr);
+    return 0;
+}
+#else
+int init_secondary_pagetables(int cpu)
+{
+    lpae_t *first;
+
+    first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
+
+    if ( !first )
+    {
+        printk("CPU%u: Unable to allocate the first page-table\n", cpu);
+        return -ENOMEM;
+    }
+
+    /* Initialise root pagetable from root of boot tables */
+    memcpy(first, cpu0_pgtable, PAGE_SIZE);
+    per_cpu(xen_pgtable, cpu) = first;
+
+    if ( !init_domheap_mappings(cpu) )
+    {
+        printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu);
+        per_cpu(xen_pgtable, cpu) = NULL;
+        free_xenheap_page(first);
+        return -ENOMEM;
+    }
+
+    clear_boot_pagetables();
+
+    /* Set init_ttbr for this CPU coming up */
+    init_ttbr = __pa(first);
+    clean_dcache(init_ttbr);
+
+    return 0;
+}
+#endif
+
+/* MMU setup for secondary CPUs (which already have paging enabled) */
+void mmu_init_secondary_cpu(void)
+{
+    xen_pt_enforce_wnx();
+}
+
+#ifdef CONFIG_ARM_32
+/*
+ * Set up the direct-mapped xenheap:
+ * up to 1GB of contiguous, always-mapped memory.
+ */
+void __init setup_directmap_mappings(unsigned long base_mfn,
+                                     unsigned long nr_mfns)
+{
+    int rc;
+
+    rc = map_pages_to_xen(XENHEAP_VIRT_START, _mfn(base_mfn), nr_mfns,
+                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
+    if ( rc )
+        panic("Unable to setup the directmap mappings.\n");
+
+    /* Record where the directmap is, for translation routines. */
+    directmap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
+}
+#else /* CONFIG_ARM_64 */
+/* Map the region in the directmap area. */
+void __init setup_directmap_mappings(unsigned long base_mfn,
+                                     unsigned long nr_mfns)
+{
+    int rc;
+
+    /* First call sets the directmap physical and virtual offset. */
+    if ( mfn_eq(directmap_mfn_start, INVALID_MFN) )
+    {
+        unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
+
+        directmap_mfn_start = _mfn(base_mfn);
+        directmap_base_pdx = mfn_to_pdx(_mfn(base_mfn));
+        /*
+         * The base address may not be aligned to the first level
+         * size (e.g. 1GB when using 4KB pages). This would prevent
+         * superpage mappings for all the regions because the virtual
+         * address and machine address should both be suitably aligned.
+         *
+         * Prevent that by offsetting the start of the directmap virtual
+         * address.
+         */
+        directmap_virt_start = DIRECTMAP_VIRT_START +
+            (base_mfn - mfn_gb) * PAGE_SIZE;
+    }
+
+    if ( base_mfn < mfn_x(directmap_mfn_start) )
+        panic("cannot add directmap mapping at %lx below heap start %lx\n",
+              base_mfn, mfn_x(directmap_mfn_start));
+
+    rc = map_pages_to_xen((vaddr_t)__mfn_to_virt(base_mfn),
+                          _mfn(base_mfn), nr_mfns,
+                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
+    if ( rc )
+        panic("Unable to setup the directmap mappings.\n");
+}
+#endif
+
+/* Map a frame table to cover physical addresses ps through pe */
+void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
+{
+    unsigned long nr_pdxs = mfn_to_pdx(mfn_add(maddr_to_mfn(pe), -1)) -
+                            mfn_to_pdx(maddr_to_mfn(ps)) + 1;
+    unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
+    mfn_t base_mfn;
+    const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
+    int rc;
+
+    frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
+    /* Round up to 2M or 32M boundary, as appropriate. */
+    frametable_size = ROUNDUP(frametable_size, mapping_size);
+    base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
+
+    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
+                          frametable_size >> PAGE_SHIFT,
+                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
+    if ( rc )
+        panic("Unable to setup the frametable mappings.\n");
+
+    memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
+    memset(&frame_table[nr_pdxs], -1,
+           frametable_size - (nr_pdxs * sizeof(struct page_info)));
+
+    frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pdxs * sizeof(struct page_info));
+}
+
+void *__init arch_vmap_virt_end(void)
+{
+    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
+}
+
+/*
+ * This function should only be used to remap device address ranges
+ * TODO: add a check to verify this assumption
+ */
+void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes)
+{
+    mfn_t mfn = _mfn(PFN_DOWN(pa));
+    unsigned int offs = pa & (PAGE_SIZE - 1);
+    unsigned int nr = PFN_UP(offs + len);
+    void *ptr = __vmap(&mfn, nr, 1, 1, attributes, VMAP_DEFAULT);
+
+    if ( ptr == NULL )
+        return NULL;
+
+    return ptr + offs;
+}
+
+void *ioremap(paddr_t pa, size_t len)
+{
+    return ioremap_attr(pa, len, PAGE_HYPERVISOR_NOCACHE);
+}
+
+static int create_xen_table(lpae_t *entry)
+{
+    mfn_t mfn;
+    void *p;
+    lpae_t pte;
+
+    if ( system_state != SYS_STATE_early_boot )
+    {
+        struct page_info *pg = alloc_domheap_page(NULL, 0);
+
+        if ( pg == NULL )
+            return -ENOMEM;
+
+        mfn = page_to_mfn(pg);
+    }
+    else
+        mfn = alloc_boot_pages(1, 1);
+
+    p = xen_map_table(mfn);
+    clear_page(p);
+    xen_unmap_table(p);
+
+    pte = mfn_to_xen_entry(mfn, MT_NORMAL);
+    pte.pt.table = 1;
+    write_pte(entry, pte);
+
+    return 0;
+}
+
+#define XEN_TABLE_MAP_FAILED 0
+#define XEN_TABLE_SUPER_PAGE 1
+#define XEN_TABLE_NORMAL_PAGE 2
+
+/*
+ * Take the currently mapped table, find the corresponding entry,
+ * and map the next table, if available.
+ *
+ * The read_only parameter indicates whether intermediate tables should
+ * be allocated when not present.
+ *
+ * Return values:
+ *  XEN_TABLE_MAP_FAILED: Either read_only was set and the entry
+ *  was empty, or allocating a new page failed.
+ *  XEN_TABLE_NORMAL_PAGE: The next level was mapped normally.
+ *  XEN_TABLE_SUPER_PAGE: The next entry points to a superpage.
+ */
+static int xen_pt_next_level(bool read_only, unsigned int level,
+                             lpae_t **table, unsigned int offset)
+{
+    lpae_t *entry;
+    int ret;
+    mfn_t mfn;
+
+    entry = *table + offset;
+
+    if ( !lpae_is_valid(*entry) )
+    {
+        if ( read_only )
+            return XEN_TABLE_MAP_FAILED;
+
+        ret = create_xen_table(entry);
+        if ( ret )
+            return XEN_TABLE_MAP_FAILED;
+    }
+
+    /* The function xen_pt_next_level is never called at the 3rd level */
+    if ( lpae_is_mapping(*entry, level) )
+        return XEN_TABLE_SUPER_PAGE;
+
+    mfn = lpae_get_mfn(*entry);
+
+    xen_unmap_table(*table);
+    *table = xen_map_table(mfn);
+
+    return XEN_TABLE_NORMAL_PAGE;
+}
+
+/* Sanity check of the entry */
+static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
+                               unsigned int flags)
+{
+    /* Sanity check when modifying an entry. */
+    if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
+    {
+        /* We don't allow modifying an invalid entry. */
+        if ( !lpae_is_valid(entry) )
+        {
+            mm_printk("Modifying invalid entry is not allowed.\n");
+            return false;
+        }
+
+        /* We don't allow modifying a table entry */
+        if ( !lpae_is_mapping(entry, level) )
+        {
+            mm_printk("Modifying a table entry is not allowed.\n");
+            return false;
+        }
+
+        /* We don't allow changing memory attributes. */
+        if ( entry.pt.ai != PAGE_AI_MASK(flags) )
+        {
+            mm_printk("Modifying memory attributes is not allowed (0x%x -> 0x%x).\n",
+                      entry.pt.ai, PAGE_AI_MASK(flags));
+            return false;
+        }
+
+        /* We don't allow modifying entry with contiguous bit set. */
+        if ( entry.pt.contig )
+        {
+            mm_printk("Modifying entry with contiguous bit set is not allowed.\n");
+            return false;
+        }
+    }
+    /* Sanity check when inserting a mapping */
+    else if ( flags & _PAGE_PRESENT )
+    {
+        /* We should be here with a valid MFN. */
+        ASSERT(!mfn_eq(mfn, INVALID_MFN));
+
+        /*
+         * We don't allow replacing any valid entry.
+         *
+         * Note that the function xen_pt_update() relies on this
+         * assumption and will skip the TLB flush. The function will need
+         * to be updated if the check is relaxed.
+         */
+        if ( lpae_is_valid(entry) )
+        {
+            if ( lpae_is_mapping(entry, level) )
+                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
+                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
+            else
+                mm_printk("Trying to replace a table with a mapping.\n");
+            return false;
+        }
+    }
+    /* Sanity check when removing a mapping. */
+    else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
+    {
+        /* We should be here with an invalid MFN. */
+        ASSERT(mfn_eq(mfn, INVALID_MFN));
+
+        /* We don't allow removing a table */
+        if ( lpae_is_table(entry, level) )
+        {
+            mm_printk("Removing a table is not allowed.\n");
+            return false;
+        }
+
+        /* We don't allow removing a mapping with contiguous bit set. */
+        if ( entry.pt.contig )
+        {
+            mm_printk("Removing entry with contiguous bit set is not allowed.\n");
+            return false;
+        }
+    }
+    /* Sanity check when populating the page-table. No check so far. */
+    else
+    {
+        ASSERT(flags & _PAGE_POPULATE);
+        /* We should be here with an invalid MFN */
+        ASSERT(mfn_eq(mfn, INVALID_MFN));
+    }
+
+    return true;
+}
+
+/* Update an entry at the level @target. */
+static int xen_pt_update_entry(mfn_t root, unsigned long virt,
+                               mfn_t mfn, unsigned int target,
+                               unsigned int flags)
+{
+    int rc;
+    unsigned int level;
+    lpae_t *table;
+    /*
+     * The intermediate page-tables are read-only when the MFN is not valid
+     * and we are not populating the page-tables.
+     * This means we either modify permissions or remove an entry.
+     */
+    bool read_only = mfn_eq(mfn, INVALID_MFN) && !(flags & _PAGE_POPULATE);
+    lpae_t pte, *entry;
+
+    /* convenience aliases */
+    DECLARE_OFFSETS(offsets, (paddr_t)virt);
+
+    /* _PAGE_POPULATE and _PAGE_PRESENT should never be set together. */
+    ASSERT((flags & (_PAGE_POPULATE|_PAGE_PRESENT)) != (_PAGE_POPULATE|_PAGE_PRESENT));
+
+    table = xen_map_table(root);
+    for ( level = HYP_PT_ROOT_LEVEL; level < target; level++ )
+    {
+        rc = xen_pt_next_level(read_only, level, &table, offsets[level]);
+        if ( rc == XEN_TABLE_MAP_FAILED )
+        {
+            /*
+             * We are here because xen_pt_next_level has failed to map
+             * the intermediate page-table (e.g. the table does not exist
+             * and the page-tables are read-only). This is a valid case
+             * when removing a mapping as it may not exist in the
+             * page-tables. In this case, just ignore it.
+             */
+            if ( flags & (_PAGE_PRESENT|_PAGE_POPULATE) )
+            {
+                mm_printk("%s: Unable to map level %u\n", __func__, level);
+                rc = -ENOENT;
+                goto out;
+            }
+            else
+            {
+                rc = 0;
+                goto out;
+            }
+        }
+        else if ( rc != XEN_TABLE_NORMAL_PAGE )
+            break;
+    }
+
+    if ( level != target )
+    {
+        mm_printk("%s: Shattering superpage is not supported\n", __func__);
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    entry = table + offsets[level];
+
+    rc = -EINVAL;
+    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
+        goto out;
+
+    /* If we are only populating page-table, then we are done. */
+    rc = 0;
+    if ( flags & _PAGE_POPULATE )
+        goto out;
+
+    /* We are removing the page */
+    if ( !(flags & _PAGE_PRESENT) )
+        memset(&pte, 0x00, sizeof(pte));
+    else
+    {
+        /* We are inserting a mapping => Create new pte. */
+        if ( !mfn_eq(mfn, INVALID_MFN) )
+        {
+            pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
+
+            /*
+             * First and second level pages set pte.pt.table = 0, but
+             * third level entries set pte.pt.table = 1.
+             */
+            pte.pt.table = (level == 3);
+        }
+        else /* We are updating the permission => Copy the current pte. */
+            pte = *entry;
+
+        /* Set permission */
+        pte.pt.ro = PAGE_RO_MASK(flags);
+        pte.pt.xn = PAGE_XN_MASK(flags);
+        /* Set contiguous bit */
+        pte.pt.contig = !!(flags & _PAGE_CONTIG);
+    }
+
+    write_pte(entry, pte);
+
+    rc = 0;
+
+out:
+    xen_unmap_table(table);
+
+    return rc;
+}
+
+/* Return the level where mapping should be done */
+static int xen_pt_mapping_level(unsigned long vfn, mfn_t mfn, unsigned long nr,
+                                unsigned int flags)
+{
+    unsigned int level;
+    unsigned long mask;
+
+    /*
+     * Don't take into account the MFN when removing a mapping (i.e.
+     * the MFN is INVALID_MFN) to calculate the correct target order.
+     *
+     * Per the Arm ARM, `vfn` and `mfn` must both be superpage aligned.
+     * They are or-ed together and then checked against the size of
+     * each level.
+     *
+     * `left` is not included in the mask and is checked separately to
+     * allow superpage mappings even if the length is not properly
+     * aligned (the user may have asked to map 2MB + 4k).
+     */
+    mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
+    mask |= vfn;
+
+    /*
+     * Always use level 3 mapping unless the caller requests block
+     * mapping.
+     */
+    if ( likely(!(flags & _PAGE_BLOCK)) )
+        level = 3;
+    else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) &&
+              (nr >= BIT(FIRST_ORDER, UL)) )
+        level = 1;
+    else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) &&
+              (nr >= BIT(SECOND_ORDER, UL)) )
+        level = 2;
+    else
+        level = 3;
+
+    return level;
+
+#define XEN_PT_4K_NR_CONTIG 16
+
+/*
+ * Check whether the contiguous bit can be set. Return the number of
+ * contiguous entries allowed. If not allowed, return 1.
+ */
+static unsigned int xen_pt_check_contig(unsigned long vfn, mfn_t mfn,
+                                        unsigned int level, unsigned long left,
+                                        unsigned int flags)
+{
+    unsigned long nr_contig;
+
+    /*
+     * Allow the contiguous bit to be set when the caller requests a block
+     * mapping.
+     */
+    if ( !(flags & _PAGE_BLOCK) )
+        return 1;
+
+    /*
+     * We don't allow removing mappings with the contiguous bit set, so
+     * shortcut the logic and directly return 1.
+     */
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        return 1;
+
+    /*
+     * The number of contiguous entries varies depending on the page
+     * granularity used. The logic below assumes 4KB.
+     */
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    /*
+     * In order to enable the contiguous bit, we need enough entries to
+     * map `left` and both the virtual and physical addresses must be
+     * aligned to the size of 16 translation table entries.
+     */
+    nr_contig = BIT(XEN_PT_LEVEL_ORDER(level), UL) * XEN_PT_4K_NR_CONTIG;
+
+    if ( (left < nr_contig) || ((mfn_x(mfn) | vfn) & (nr_contig - 1)) )
+        return 1;
+
+    return XEN_PT_4K_NR_CONTIG;
+}
+
+static DEFINE_SPINLOCK(xen_pt_lock);
+
+static int xen_pt_update(unsigned long virt,
+                         mfn_t mfn,
+                         /* const on purpose as it is used for TLB flush */
+                         const unsigned long nr_mfns,
+                         unsigned int flags)
+{
+    int rc = 0;
+    unsigned long vfn = virt >> PAGE_SHIFT;
+    unsigned long left = nr_mfns;
+
+    /*
+     * For arm32, page-tables are different on each CPU. Yet, they share
+     * some common mappings. It is assumed that only common mappings
+     * will be modified with this function.
+     *
+     * XXX: Add a check.
+     */
+    const mfn_t root = maddr_to_mfn(READ_SYSREG64(TTBR0_EL2));
+
+    /*
+     * The hardware was configured to forbid mappings that are both
+     * writeable and executable.
+     * When modifying/creating a mapping (i.e. _PAGE_PRESENT is set),
+     * prevent any update that would create one.
+     */
+    if ( (flags & _PAGE_PRESENT) && !PAGE_RO_MASK(flags) &&
+         !PAGE_XN_MASK(flags) )
+    {
+        mm_printk("Mappings should not be both Writeable and Executable.\n");
+        return -EINVAL;
+    }
+
+    if ( flags & _PAGE_CONTIG )
+    {
+        mm_printk("_PAGE_CONTIG is an internal only flag.\n");
+        return -EINVAL;
+    }
+
+    if ( !IS_ALIGNED(virt, PAGE_SIZE) )
+    {
+        mm_printk("The virtual address is not aligned to the page-size.\n");
+        return -EINVAL;
+    }
+
+    spin_lock(&xen_pt_lock);
+
+    while ( left )
+    {
+        unsigned int order, level, nr_contig, new_flags;
+
+        level = xen_pt_mapping_level(vfn, mfn, left, flags);
+        order = XEN_PT_LEVEL_ORDER(level);
+
+        ASSERT(left >= BIT(order, UL));
+
+        /*
+         * Check if we can set the contiguous mapping and update the
+         * flags accordingly.
+         */
+        nr_contig = xen_pt_check_contig(vfn, mfn, level, left, flags);
+        new_flags = flags | ((nr_contig > 1) ? _PAGE_CONTIG : 0);
+
+        for ( ; nr_contig > 0; nr_contig-- )
+        {
+            rc = xen_pt_update_entry(root, vfn << PAGE_SHIFT, mfn, level,
+                                     new_flags);
+            if ( rc )
+                break;
+
+            vfn += 1U << order;
+            if ( !mfn_eq(mfn, INVALID_MFN) )
+                mfn = mfn_add(mfn, 1U << order);
+
+            left -= (1U << order);
+        }
+
+        if ( rc )
+            break;
+    }
+
+    /*
+     * The TLBs flush can be safely skipped when a mapping is inserted
+     * as we don't allow mapping replacement (see xen_pt_check_entry()).
+     *
+     * For all the other cases, the TLBs will be flushed unconditionally
+     * even if the mapping has failed. This is because we may have
+     * partially modified the PT. This will prevent any unexpected
+     * behavior afterwards.
+     */
+    if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) )
+        flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
+
+    spin_unlock(&xen_pt_lock);
+
+    return rc;
+}
+
+int map_pages_to_xen(unsigned long virt,
+                     mfn_t mfn,
+                     unsigned long nr_mfns,
+                     unsigned int flags)
+{
+    return xen_pt_update(virt, mfn, nr_mfns, flags);
+}
+
+int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
+{
+    return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
+}
+
+int destroy_xen_mappings(unsigned long s, unsigned long e)
+{
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE));
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE));
+    ASSERT(s <= e);
+    return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, 0);
+}
+
+int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
+{
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE));
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE));
+    ASSERT(s <= e);
+    return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, flags);
+}
+
+/* Release all __init and __initdata ranges to be reused */
+void free_init_memory(void)
+{
+    paddr_t pa = virt_to_maddr(__init_begin);
+    unsigned long len = __init_end - __init_begin;
+    uint32_t insn;
+    unsigned int i, nr = len / sizeof(insn);
+    uint32_t *p;
+    int rc;
+
+    rc = modify_xen_mappings((unsigned long)__init_begin,
+                             (unsigned long)__init_end, PAGE_HYPERVISOR_RW);
+    if ( rc )
+        panic("Unable to map RW the init section (rc = %d)\n", rc);
+
+    /*
+     * From now on, init will not be used for execution anymore,
+     * so nuke the instruction cache to remove entries related to init.
+     */
+    invalidate_icache_local();
+
+#ifdef CONFIG_ARM_32
+    /* UDF instruction (see A8.8.247 in ARM DDI 0406C.c) */
+    insn = 0xe7f000f0;
+#else
+    insn = AARCH64_BREAK_FAULT;
+#endif
+    p = (uint32_t *)__init_begin;
+    for ( i = 0; i < nr; i++ )
+        *(p + i) = insn;
+
+    rc = destroy_xen_mappings((unsigned long)__init_begin,
+                              (unsigned long)__init_end);
+    if ( rc )
+        panic("Unable to remove the init section (rc = %d)\n", rc);
+
+    init_domheap_pages(pa, pa + len);
+    printk("Freed %ldkB init memory.\n", (long)(__init_end-__init_begin)>>10);
+}
+
+int xenmem_add_to_physmap_one(
+    struct domain *d,
+    unsigned int space,
+    union add_to_physmap_extra extra,
+    unsigned long idx,
+    gfn_t gfn)
+{
+    mfn_t mfn = INVALID_MFN;
+    int rc;
+    p2m_type_t t;
+    struct page_info *page = NULL;
+
+    switch ( space )
+    {
+    case XENMAPSPACE_grant_table:
+        rc = gnttab_map_frame(d, idx, gfn, &mfn);
+        if ( rc )
+            return rc;
+
+        /* Need to take care of the reference obtained in gnttab_map_frame(). */
+        page = mfn_to_page(mfn);
+        t = p2m_ram_rw;
+
+        break;
+    case XENMAPSPACE_shared_info:
+        if ( idx != 0 )
+            return -EINVAL;
+
+        mfn = virt_to_mfn(d->shared_info);
+        t = p2m_ram_rw;
+
+        break;
+    case XENMAPSPACE_gmfn_foreign:
+    {
+        struct domain *od;
+        p2m_type_t p2mt;
+
+        od = get_pg_owner(extra.foreign_domid);
+        if ( od == NULL )
+            return -ESRCH;
+
+        if ( od == d )
+        {
+            put_pg_owner(od);
+            return -EINVAL;
+        }
+
+        rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        if ( rc )
+        {
+            put_pg_owner(od);
+            return rc;
+        }
+
+        /*
+         * Take a reference on the foreign domain page. The reference
+         * will be released in XENMEM_remove_from_physmap.
+         */
+        page = get_page_from_gfn(od, idx, &p2mt, P2M_ALLOC);
+        if ( !page )
+        {
+            put_pg_owner(od);
+            return -EINVAL;
+        }
+
+        if ( p2m_is_ram(p2mt) )
+            t = (p2mt == p2m_ram_rw) ? p2m_map_foreign_rw : p2m_map_foreign_ro;
+        else
+        {
+            put_page(page);
+            put_pg_owner(od);
+            return -EINVAL;
+        }
+
+        mfn = page_to_mfn(page);
+
+        put_pg_owner(od);
+        break;
+    }
+    case XENMAPSPACE_dev_mmio:
+        rc = map_dev_mmio_page(d, gfn, _mfn(idx));
+        return rc;
+
+    default:
+        return -ENOSYS;
+    }
+
+    /*
+     * Map at new location. Here we need to map xenheap RAM page differently
+     * because we need to store the valid GFN and make sure that nothing was
+     * mapped before (the stored GFN is invalid). And these actions need to be
+     * performed with the P2M lock held. The guest_physmap_add_entry() is just
+     * a wrapper on top of p2m_set_entry().
+     */
+    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
+        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
+    else
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        p2m_write_lock(p2m);
+        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
+        {
+            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
+            if ( !rc )
+                page_set_xenheap_gfn(mfn_to_page(mfn), gfn);
+        }
+        else
+            /*
+             * Mandate the caller to first unmap the page before mapping it
+             * again. This is to prevent Xen creating an unwanted hole in
+             * the P2M. For instance, this could happen if the firmware stole
+             * a RAM address for mapping the shared_info page into but forgot
+             * to unmap it afterwards.
+             */
+            rc = -EBUSY;
+        p2m_write_unlock(p2m);
+    }
+
+    /*
+     * For XENMAPSPACE_gmfn_foreign if we failed to add the mapping, we need
+     * to drop the reference we took earlier. In all other cases we need to
+     * drop any reference we took earlier (perhaps indirectly).
+     */
+    if ( space == XENMAPSPACE_gmfn_foreign ? rc : page != NULL )
+    {
+        ASSERT(page != NULL);
+        put_page(page);
+    }
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 43e9a1be4d..87a12042cc 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -20,8 +20,10 @@
  */
 
 #include <xen/init.h>
+#include <xen/mm.h>
 #include <xen/page-size.h>
 #include <asm/arm64/mpu.h>
+#include <asm/page.h>
 
 /* Xen MPU memory region mapping table. */
 pr_t __aligned(PAGE_SIZE) __section(".data.page_aligned")
@@ -38,6 +40,71 @@ uint64_t __ro_after_init next_transient_region_idx;
 /* Maximum number of supported MPU memory regions by the EL2 MPU. */
 uint64_t __ro_after_init max_xen_mpumap;
 
+/* TODO: Implement on first use */
+void dump_hyp_walk(vaddr_t addr)
+{
+}
+
+void * __init early_fdt_map(paddr_t fdt_paddr)
+{
+    return NULL;
+}
+
+void __init remove_early_mappings(void)
+{
+}
+
+int init_secondary_pagetables(int cpu)
+{
+    return -ENOSYS;
+}
+
+void mmu_init_secondary_cpu(void)
+{
+}
+
+void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes)
+{
+    return NULL;
+}
+
+void *ioremap(paddr_t pa, size_t len)
+{
+    return NULL;
+}
+
+int map_pages_to_xen(unsigned long virt,
+                     mfn_t mfn,
+                     unsigned long nr_mfns,
+                     unsigned int flags)
+{
+    return -ENOSYS;
+}
+
+int destroy_xen_mappings(unsigned long s, unsigned long e)
+{
+    return -ENOSYS;
+}
+
+int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
+{
+    return -ENOSYS;
+}
+
+void free_init_memory(void)
+{
+}
+
+int xenmem_add_to_physmap_one(
+    struct domain *d,
+    unsigned int space,
+    union add_to_physmap_extra extra,
+    unsigned long idx,
+    gfn_t gfn)
+{
+    return -ENOSYS;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476540.738847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCig-0005G1-EZ; Fri, 13 Jan 2023 05:35:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476540.738847; Fri, 13 Jan 2023 05:35:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCig-0005ET-84; Fri, 13 Jan 2023 05:35:14 +0000
Received: by outflank-mailman (input) for mailman id 476540;
 Fri, 13 Jan 2023 05:35:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCf8-0005sP-Hv
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:34 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 89986dc7-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:32 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0DDCEFEC;
 Thu, 12 Jan 2023 21:32:14 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4FAFA3F587;
 Thu, 12 Jan 2023 21:31:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89986dc7-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 36/40] xen/mpu: Use secure hypervisor timer for AArch64v8R
Date: Fri, 13 Jan 2023 13:29:09 +0800
Message-Id: <20230113052914.3845596-37-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Armv8-R AArch64 architecture has only the Secure state, so Xen, which
runs at Secure EL2, has to use the Secure EL2 hypervisor timer.

This patch introduces a Kconfig option, ARM_SECURE_STATE. With this new
option we can redefine the timer's system register names to match the
Security state Xen runs in, while keeping the timer code flow unchanged.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/Kconfig                     |  7 +++++++
 xen/arch/arm/include/asm/arm64/sysregs.h | 21 ++++++++++++++++++++-
 xen/arch/arm/include/asm/cpregs.h        |  4 ++--
 xen/arch/arm/time.c                      | 14 +++++++-------
 4 files changed, 36 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 91491341c4..ee942a33bc 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -47,6 +47,13 @@ config ARM_EFI
 	  be booted as an EFI application. This is only useful for
 	  Xen that may run on systems that have UEFI firmware.
 
+config ARM_SECURE_STATE
+	bool "Xen will run in Arm Secure State"
+	depends on ARM_V8R
+	help
+	  In this state, a Processing Element (PE) can access the secure
+	  physical address space, and the secure copy of banked registers.
+
 config GICV3
 	bool "GICv3 driver"
 	depends on !NEW_VGIC
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index c46daf6f69..9546e8e3d0 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -458,7 +458,6 @@
 #define ZCR_ELx_LEN_SIZE             9
 #define ZCR_ELx_LEN_MASK             0x1ff
 
-/* System registers for Armv8-R AArch64 */
 #ifdef CONFIG_HAS_MPU
 
 /* EL2 MPU Protection Region Base Address Register encode */
@@ -510,6 +509,26 @@
 
 #endif
 
+#ifdef CONFIG_ARM_SECURE_STATE
+/*
+ * The Armv8-R AArch64 architecture always executes code in Secure
+ * state with EL2 as the highest Exception level.
+ *
+ * Hypervisor timer registers for Secure EL2.
+ */
+#define CNTHPS_TVAL_EL2  S3_4_C14_C5_0
+#define CNTHPS_CTL_EL2   S3_4_C14_C5_1
+#define CNTHPS_CVAL_EL2  S3_4_C14_C5_2
+#define CNTHPx_TVAL_EL2  CNTHPS_TVAL_EL2
+#define CNTHPx_CTL_EL2   CNTHPS_CTL_EL2
+#define CNTHPx_CVAL_EL2  CNTHPS_CVAL_EL2
+#else
+/* Hypervisor timer registers for Non-Secure EL2. */
+#define CNTHPx_TVAL_EL2  CNTHP_TVAL_EL2
+#define CNTHPx_CTL_EL2   CNTHP_CTL_EL2
+#define CNTHPx_CVAL_EL2  CNTHP_CVAL_EL2
+#endif /* CONFIG_ARM_SECURE_STATE */
+
 /* Access to system registers */
 
 #define WRITE_SYSREG64(v, name) do {                    \
diff --git a/xen/arch/arm/include/asm/cpregs.h b/xen/arch/arm/include/asm/cpregs.h
index 6b083de204..a704677fbc 100644
--- a/xen/arch/arm/include/asm/cpregs.h
+++ b/xen/arch/arm/include/asm/cpregs.h
@@ -374,8 +374,8 @@
 #define CLIDR_EL1               CLIDR
 #define CNTFRQ_EL0              CNTFRQ
 #define CNTHCTL_EL2             CNTHCTL
-#define CNTHP_CTL_EL2           CNTHP_CTL
-#define CNTHP_CVAL_EL2          CNTHP_CVAL
+#define CNTHPx_CTL_EL2          CNTHP_CTL
+#define CNTHPx_CVAL_EL2         CNTHP_CVAL
 #define CNTKCTL_EL1             CNTKCTL
 #define CNTPCT_EL0              CNTPCT
 #define CNTP_CTL_EL0            CNTP_CTL
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 433d7be909..3bba733b83 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -196,13 +196,13 @@ int reprogram_timer(s_time_t timeout)
 
     if ( timeout == 0 )
     {
-        WRITE_SYSREG(0, CNTHP_CTL_EL2);
+        WRITE_SYSREG(0, CNTHPx_CTL_EL2);
         return 1;
     }
 
     deadline = ns_to_ticks(timeout) + boot_count;
-    WRITE_SYSREG64(deadline, CNTHP_CVAL_EL2);
-    WRITE_SYSREG(CNTx_CTL_ENABLE, CNTHP_CTL_EL2);
+    WRITE_SYSREG64(deadline, CNTHPx_CVAL_EL2);
+    WRITE_SYSREG(CNTx_CTL_ENABLE, CNTHPx_CTL_EL2);
     isb();
 
     /* No need to check for timers in the past; the Generic Timer fires
@@ -213,7 +213,7 @@ int reprogram_timer(s_time_t timeout)
 /* Handle the firing timer */
 static void htimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    if ( unlikely(!(READ_SYSREG(CNTHP_CTL_EL2) & CNTx_CTL_PENDING)) )
+    if ( unlikely(!(READ_SYSREG(CNTHPx_CTL_EL2) & CNTx_CTL_PENDING)) )
         return;
 
     perfc_incr(hyp_timer_irqs);
@@ -222,7 +222,7 @@ static void htimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
     raise_softirq(TIMER_SOFTIRQ);
 
     /* Disable the timer to avoid more interrupts */
-    WRITE_SYSREG(0, CNTHP_CTL_EL2);
+    WRITE_SYSREG(0, CNTHPx_CTL_EL2);
 }
 
 static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
@@ -281,7 +281,7 @@ void init_timer_interrupt(void)
     /* Do not let the VMs program the physical timer, only read the physical counter */
     WRITE_SYSREG(CNTHCTL_EL2_EL1PCTEN, CNTHCTL_EL2);
     WRITE_SYSREG(0, CNTP_CTL_EL0);    /* Physical timer disabled */
-    WRITE_SYSREG(0, CNTHP_CTL_EL2);   /* Hypervisor's timer disabled */
+    WRITE_SYSREG(0, CNTHPx_CTL_EL2);   /* Hypervisor's timer disabled */
     isb();
 
     request_irq(timer_irq[TIMER_HYP_PPI], 0, htimer_interrupt,
@@ -301,7 +301,7 @@ void init_timer_interrupt(void)
 static void deinit_timer_interrupt(void)
 {
     WRITE_SYSREG(0, CNTP_CTL_EL0);    /* Disable physical timer */
-    WRITE_SYSREG(0, CNTHP_CTL_EL2);   /* Disable hypervisor's timer */
+    WRITE_SYSREG(0, CNTHPx_CTL_EL2);   /* Disable hypervisor's timer */
     isb();
 
     release_irq(timer_irq[TIMER_HYP_PPI], NULL);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476542.738870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCip-00068n-4N; Fri, 13 Jan 2023 05:35:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476542.738870; Fri, 13 Jan 2023 05:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCio-00068S-Vu; Fri, 13 Jan 2023 05:35:22 +0000
Received: by outflank-mailman (input) for mailman id 476542;
 Fri, 13 Jan 2023 05:35:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCfJ-0005sJ-L5
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:45 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 91180261-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:31:44 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8100DFEC;
 Thu, 12 Jan 2023 21:32:26 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 04C333F587;
 Thu, 12 Jan 2023 21:31:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91180261-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 40/40] xen/mpu: add Kconfig option to enable Armv8-R AArch64 support
Date: Fri, 13 Jan 2023 13:29:13 +0800
Message-Id: <20230113052914.3845596-41-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a Kconfig option to enable Armv8-R64 architecture support.
ARM_V8R selects STATIC_MEMORY and HAS_MPU by default, because Armv8-R64
provides only PMSAv8-64 at Secure EL2 and supports only statically
configured systems.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/Kconfig | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index ee942a33bc..dc93b805a6 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -9,6 +9,15 @@ config ARM_64
 	select 64BIT
 	select HAS_FAST_MULTIPLY
 
+config ARM_V8R
+       bool "ARMv8-R AArch64 architecture support (UNSUPPORTED)" if UNSUPPORTED
+       default n
+       select STATIC_MEMORY
+       depends on ARM_64
+       help
+         This option enables Armv8-R profile for Arm64. Enabling this option
+         results in selecting MPU.
+
 config ARM
 	def_bool y
 	select HAS_ALTERNATIVE if !ARM_V8R
@@ -68,6 +77,10 @@ config HAS_ITS
         bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
         depends on GICV3 && !NEW_VGIC && !ARM_32
 
+config HAS_MPU
+	bool "Protected Memory System Architecture"
+	depends on ARM_V8R
+
 config HVM
         def_bool y
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476566.738881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCj4-0007NV-Mn; Fri, 13 Jan 2023 05:35:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476566.738881; Fri, 13 Jan 2023 05:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCj4-0007NN-Ii; Fri, 13 Jan 2023 05:35:38 +0000
Received: by outflank-mailman (input) for mailman id 476566;
 Fri, 13 Jan 2023 05:35:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeO-0005sP-7n
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:48 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 6dd4dc0f-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:30:45 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7405113D5;
 Thu, 12 Jan 2023 21:31:27 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A74C73F587;
 Thu, 12 Jan 2023 21:30:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dd4dc0f-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 22/40] xen/mpu: implement MPU version of setup_mm in setup_mpu.c
Date: Fri, 13 Jan 2023 13:28:55 +0800
Message-Id: <20230113052914.3845596-23-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On an MPU system, system RAM must be statically partitioned into sections
by function in the Device Tree from the very beginning, including the
static xenheap, guest memory sections, boot-module sections, etc. Using a
single virtually contiguous memory region to direct-map the whole of
system RAM is therefore not applicable on an MPU system.

Introduce setup_static_mappings() to set up MPU memory region mappings
section by section, based on the static configuration in the Device Tree.
This commit covers only the static xenheap mapping, implemented in
setup_staticheap_mappings(); mappings for the other static memory
sections will be introduced later.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/Makefile             |  2 +
 xen/arch/arm/include/asm/mm_mpu.h |  5 +++
 xen/arch/arm/mm_mpu.c             | 41 ++++++++++++++++++
 xen/arch/arm/setup_mpu.c          | 70 +++++++++++++++++++++++++++++++
 4 files changed, 118 insertions(+)
 create mode 100644 xen/arch/arm/setup_mpu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index adeb17b7ab..23dfbc3333 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -53,6 +53,8 @@ obj-y += psci.o
 obj-y += setup.o
 ifneq ($(CONFIG_HAS_MPU), y)
 obj-y += setup_mmu.o
+else
+obj-y += setup_mpu.o
 endif
 obj-y += shutdown.o
 obj-y += smp.o
diff --git a/xen/arch/arm/include/asm/mm_mpu.h b/xen/arch/arm/include/asm/mm_mpu.h
index 3a4b07f187..fe6a828a50 100644
--- a/xen/arch/arm/include/asm/mm_mpu.h
+++ b/xen/arch/arm/include/asm/mm_mpu.h
@@ -3,6 +3,11 @@
 #define __ARCH_ARM_MM_MPU__
 
 #define setup_mm_mappings(boot_phys_offset) ((void)(boot_phys_offset))
+/*
+ * Function setup_static_mappings() sets up MPU memory region mapping
+ * section by section based on static configuration in Device Tree.
+ */
+extern void setup_static_mappings(void);
 
 static inline paddr_t __virt_to_maddr(vaddr_t va)
 {
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index b34dbf4515..f057ee26df 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -523,6 +523,47 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
     return fdt_virt;
 }
 
+/*
+ * Heap must be statically configured in Device Tree through
+ * "xen,static-heap" in MPU system.
+ */
+static void __init setup_staticheap_mappings(void)
+{
+    unsigned int bank = 0;
+
+    for ( ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
+    {
+        if ( bootinfo.reserved_mem.bank[bank].type == MEMBANK_STATIC_HEAP )
+        {
+            paddr_t bank_start = round_pgup(
+                                 bootinfo.reserved_mem.bank[bank].start);
+            paddr_t bank_size = round_pgdown(
+                                bootinfo.reserved_mem.bank[bank].size);
+
+            /* Map static heap with fixed MPU memory region */
+
+            if ( map_pages_to_xen(bank_start, maddr_to_mfn(bank_start),
+                                  bank_size >> PAGE_SHIFT,
+                                  REGION_HYPERVISOR) )
+                panic("mpu: failed to map static heap\n");
+        }
+    }
+}
+
+/*
+ * System RAM is statically partitioned into different functionality
+ * section in Device Tree, including static xenheap, guest memory
+ * section, boot-module section, etc.
+ * Function setup_static_mappings sets up MPU memory region mapping
+ * section by section.
+ */
+void __init setup_static_mappings(void)
+{
+    setup_staticheap_mappings();
+
+    /* TODO: guest memory section, device memory section, boot-module section, etc */
+}
+
 /* TODO: Implementation on the first usage */
 void dump_hyp_walk(vaddr_t addr)
 {
diff --git a/xen/arch/arm/setup_mpu.c b/xen/arch/arm/setup_mpu.c
new file mode 100644
index 0000000000..ca0d8237d5
--- /dev/null
+++ b/xen/arch/arm/setup_mpu.c
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xen/arch/arm/setup_mpu.c
+ *
+ * Early bringup code for an Armv8-R with virt extensions.
+ *
+ * Copyright (C) 2022 Arm Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/init.h>
+#include <xen/mm.h>
+#include <xen/pfn.h>
+#include <asm/mm_mpu.h>
+#include <asm/page.h>
+#include <asm/setup.h>
+
+void __init setup_mm(void)
+{
+    paddr_t ram_start = ~0, ram_end = 0, ram_size = 0;
+    unsigned int bank;
+
+    if ( !bootinfo.mem.nr_banks )
+        panic("No memory bank\n");
+
+    init_pdx();
+
+    populate_boot_allocator();
+
+    total_pages = 0;
+    for ( bank = 0 ; bank < bootinfo.mem.nr_banks; bank++ )
+    {
+        paddr_t bank_start = round_pgup(bootinfo.mem.bank[bank].start);
+        paddr_t bank_size = bootinfo.mem.bank[bank].size;
+        paddr_t bank_end = round_pgdown(bank_start + bank_size);
+
+        ram_size = ram_size + bank_size;
+        ram_start = min(ram_start, bank_start);
+        ram_end = max(ram_end, bank_end);
+    }
+
+    setup_static_mappings();
+
+    total_pages += ram_size >> PAGE_SHIFT;
+    max_page = PFN_DOWN(ram_end);
+
+    setup_frametable_mappings(ram_start, ram_end);
+
+    init_staticmem_pages();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476570.738892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCj6-0007gU-0Y; Fri, 13 Jan 2023 05:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476570.738892; Fri, 13 Jan 2023 05:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCj5-0007g1-SF; Fri, 13 Jan 2023 05:35:39 +0000
Received: by outflank-mailman (input) for mailman id 476570;
 Fri, 13 Jan 2023 05:35:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCfD-0005sJ-FV
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:40 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 8ba636c7-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:31:35 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6987DFEC;
 Thu, 12 Jan 2023 21:32:17 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 6B5C43F587;
 Thu, 12 Jan 2023 21:31:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ba636c7-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 37/40] xen/mpu: move MMU specific P2M code to p2m_mmu.c
Date: Fri, 13 Jan 2023 13:29:10 +0800
Message-Id: <20230113052914.3845596-38-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current P2M implementation is designed for MMU systems; only a small
part of it, such as the P2M pool and IPA handling, can be shared with MPU
systems. Move the MMU-specific code into p2m_mmu.c, place stub functions
in p2m_mpu.c to be implemented on first use, and keep the generic code in
p2m.c.

Also move MMU-specific definitions, such as P2M_ROOT_LEVEL and the
p2m_tlb_flush_sync() declaration, into p2m_mmu.h.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/Makefile              |    5 +
 xen/arch/arm/include/asm/p2m.h     |   17 +-
 xen/arch/arm/include/asm/p2m_mmu.h |   28 +
 xen/arch/arm/p2m.c                 | 2276 +--------------------------
 xen/arch/arm/p2m_mmu.c             | 2295 ++++++++++++++++++++++++++++
 xen/arch/arm/p2m_mpu.c             |  191 +++
 6 files changed, 2528 insertions(+), 2284 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/p2m_mmu.h
 create mode 100644 xen/arch/arm/p2m_mmu.c
 create mode 100644 xen/arch/arm/p2m_mpu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index c949661590..ea650db52b 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -44,6 +44,11 @@ obj-y += mm_mpu.o
 endif
 obj-y += monitor.o
 obj-y += p2m.o
+ifneq ($(CONFIG_HAS_MPU), y)
+obj-y += p2m_mmu.o
+else
+obj-y += p2m_mpu.o
+endif
 obj-y += percpu.o
 obj-y += platform.o
 obj-y += platform_hypercall.o
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 91df922e1c..a430aca232 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -14,17 +14,6 @@
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
 
-#ifdef CONFIG_ARM_64
-extern unsigned int p2m_root_order;
-extern unsigned int p2m_root_level;
-#define P2M_ROOT_ORDER    p2m_root_order
-#define P2M_ROOT_LEVEL p2m_root_level
-#else
-/* First level P2M is always 2 consecutive pages */
-#define P2M_ROOT_ORDER    1
-#define P2M_ROOT_LEVEL 1
-#endif
-
 struct domain;
 
 extern void memory_type_changed(struct domain *);
@@ -162,6 +151,10 @@ typedef enum {
 #endif
 #include <xen/p2m-common.h>
 
+#ifndef CONFIG_HAS_MPU
+#include <asm/p2m_mmu.h>
+#endif
+
 static inline bool arch_acquire_resource_check(struct domain *d)
 {
     /*
@@ -252,8 +245,6 @@ static inline int p2m_is_write_locked(struct p2m_domain *p2m)
     return rw_is_write_locked(&p2m->lock);
 }
 
-void p2m_tlb_flush_sync(struct p2m_domain *p2m);
-
 /* Look up the MFN corresponding to a domain's GFN. */
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
diff --git a/xen/arch/arm/include/asm/p2m_mmu.h b/xen/arch/arm/include/asm/p2m_mmu.h
new file mode 100644
index 0000000000..a0f2440336
--- /dev/null
+++ b/xen/arch/arm/include/asm/p2m_mmu.h
@@ -0,0 +1,28 @@
+#ifndef _XEN_P2M_MMU_H
+#define _XEN_P2M_MMU_H
+
+#ifdef CONFIG_ARM_64
+extern unsigned int p2m_root_order;
+extern unsigned int p2m_root_level;
+#define P2M_ROOT_ORDER    p2m_root_order
+#define P2M_ROOT_LEVEL p2m_root_level
+#else
+/* First level P2M is always 2 consecutive pages */
+#define P2M_ROOT_ORDER    1
+#define P2M_ROOT_LEVEL 1
+#endif
+
+struct p2m_domain;
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m);
+
+#endif /* _XEN_P2M_MMU_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 948f199d84..42f51051e0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1,36 +1,9 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#include <xen/cpu.h>
-#include <xen/domain_page.h>
-#include <xen/iocap.h>
-#include <xen/ioreq.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
-#include <xen/softirq.h>
 
-#include <asm/alternative.h>
 #include <asm/event.h>
-#include <asm/flushtlb.h>
-#include <asm/guest_walk.h>
 #include <asm/page.h>
-#include <asm/traps.h>
-
-#define MAX_VMID_8_BIT  (1UL << 8)
-#define MAX_VMID_16_BIT (1UL << 16)
-
-#define INVALID_VMID 0 /* VMID 0 is reserved */
-
-#ifdef CONFIG_ARM_64
-unsigned int __read_mostly p2m_root_order;
-unsigned int __read_mostly p2m_root_level;
-static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
-/* VMID is by default 8 bit width on AArch64 */
-#define MAX_VMID       max_vmid
-#else
-/* VMID is always 8 bit width on AArch32 */
-#define MAX_VMID        MAX_VMID_8_BIT
-#endif
-
-#define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
 
 /*
  * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
@@ -38,50 +11,6 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
  */
 unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
 
-static mfn_t __read_mostly empty_root_mfn;
-
-static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
-{
-    return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
-}
-
-static struct page_info *p2m_alloc_page(struct domain *d)
-{
-    struct page_info *pg;
-
-    /*
-     * For hardware domain, there should be no limit in the number of pages that
-     * can be allocated, so that the kernel may take advantage of the extended
-     * regions. Hence, allocate p2m pages for hardware domains from heap.
-     */
-    if ( is_hardware_domain(d) )
-    {
-        pg = alloc_domheap_page(NULL, 0);
-        if ( pg == NULL )
-            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
-    }
-    else
-    {
-        spin_lock(&d->arch.paging.lock);
-        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
-        spin_unlock(&d->arch.paging.lock);
-    }
-
-    return pg;
-}
-
-static void p2m_free_page(struct domain *d, struct page_info *pg)
-{
-    if ( is_hardware_domain(d) )
-        free_domheap_page(pg);
-    else
-    {
-        spin_lock(&d->arch.paging.lock);
-        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
-        spin_unlock(&d->arch.paging.lock);
-    }
-}
-
 /* Return the size of the pool, in bytes. */
 int arch_get_paging_mempool_size(struct domain *d, uint64_t *size)
 {
@@ -186,441 +115,10 @@ int p2m_teardown_allocation(struct domain *d)
     return ret;
 }
 
-/* Unlock the flush and do a P2M TLB flush if necessary */
-void p2m_write_unlock(struct p2m_domain *p2m)
-{
-    /*
-     * The final flush is done with the P2M write lock taken to avoid
-     * someone else modifying the P2M wbefore the TLB invalidation has
-     * completed.
-     */
-    p2m_tlb_flush_sync(p2m);
-
-    write_unlock(&p2m->lock);
-}
-
-void p2m_dump_info(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    p2m_read_lock(p2m);
-    printk("p2m mappings for domain %d (vmid %d):\n",
-           d->domain_id, p2m->vmid);
-    BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]);
-    printk("  1G mappings: %ld (shattered %ld)\n",
-           p2m->stats.mappings[1], p2m->stats.shattered[1]);
-    printk("  2M mappings: %ld (shattered %ld)\n",
-           p2m->stats.mappings[2], p2m->stats.shattered[2]);
-    printk("  4K mappings: %ld\n", p2m->stats.mappings[3]);
-    p2m_read_unlock(p2m);
-}
-
 void memory_type_changed(struct domain *d)
 {
 }
 
-void dump_p2m_lookup(struct domain *d, paddr_t addr)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
-
-    printk("P2M @ %p mfn:%#"PRI_mfn"\n",
-           p2m->root, mfn_x(page_to_mfn(p2m->root)));
-
-    dump_pt_walk(page_to_maddr(p2m->root), addr,
-                 P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
-}
-
-/*
- * p2m_save_state and p2m_restore_state work in pair to workaround
- * ARM64_WORKAROUND_AT_SPECULATE. p2m_save_state will set-up VTTBR to
- * point to the empty page-tables to stop allocating TLB entries.
- */
-void p2m_save_state(struct vcpu *p)
-{
-    p->arch.sctlr = READ_SYSREG(SCTLR_EL1);
-
-    if ( cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) )
-    {
-        WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR_EL2);
-        /*
-         * Ensure VTTBR_EL2 is correctly synchronized so we can restore
-         * the next vCPU context without worrying about AT instruction
-         * speculation.
-         */
-        isb();
-    }
-}
-
-void p2m_restore_state(struct vcpu *n)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(n->domain);
-    uint8_t *last_vcpu_ran;
-
-    if ( is_idle_vcpu(n) )
-        return;
-
-    WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1);
-    WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2);
-
-    /*
-     * ARM64_WORKAROUND_AT_SPECULATE: VTTBR_EL2 should be restored after all
-     * registers associated with the EL1/EL0 translation regime have been
-     * synchronized.
-     */
-    asm volatile(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_AT_SPECULATE));
-    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
-
-    last_vcpu_ran = &p2m->last_vcpu_ran[smp_processor_id()];
-
-    /*
-     * While we are restoring an out-of-context translation regime
-     * we still need to ensure:
-     *  - VTTBR_EL2 is synchronized before flushing the TLBs
-     *  - All registers for EL1 are synchronized before executing an AT
-     *    instruction targeting S1/S2.
-     */
-    isb();
-
-    /*
-     * Flush local TLB for the domain to prevent wrong TLB translation
-     * when running multiple vCPUs of the same domain on a single pCPU.
-     */
-    if ( *last_vcpu_ran != INVALID_VCPU_ID && *last_vcpu_ran != n->vcpu_id )
-        flush_guest_tlb_local();
-
-    *last_vcpu_ran = n->vcpu_id;
-}
-
-/*
- * Force a synchronous P2M TLB flush.
- *
- * Must be called with the p2m lock held.
- */
-static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
-{
-    unsigned long flags = 0;
-    uint64_t ovttbr;
-
-    ASSERT(p2m_is_write_locked(p2m));
-
-    /*
-     * ARM only provides an instruction to flush TLBs for the current
-     * VMID. So switch to the VTTBR of a given P2M if different.
-     */
-    ovttbr = READ_SYSREG64(VTTBR_EL2);
-    if ( ovttbr != p2m->vttbr )
-    {
-        uint64_t vttbr;
-
-        local_irq_save(flags);
-
-        /*
-         * ARM64_WORKAROUND_AT_SPECULATE: We need to stop AT from allocating
-         * TLB entries because the context is partially modified. We
-         * only need the VMID for flushing the TLBs, so we can generate
-         * a new VTTBR with the VMID to flush and the empty root table.
-         */
-        if ( !cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) )
-            vttbr = p2m->vttbr;
-        else
-            vttbr = generate_vttbr(p2m->vmid, empty_root_mfn);
-
-        WRITE_SYSREG64(vttbr, VTTBR_EL2);
-
-        /* Ensure VTTBR_EL2 is synchronized before flushing the TLBs */
-        isb();
-    }
-
-    flush_guest_tlb();
-
-    if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
-    {
-        WRITE_SYSREG64(ovttbr, VTTBR_EL2);
-        /* Ensure VTTBR_EL2 is back in place before continuing. */
-        isb();
-        local_irq_restore(flags);
-    }
-
-    p2m->need_flush = false;
-}
-
-void p2m_tlb_flush_sync(struct p2m_domain *p2m)
-{
-    if ( p2m->need_flush )
-        p2m_force_tlb_flush_sync(p2m);
-}
-
-/*
- * Find and map the root page table. The caller is responsible for
- * unmapping the table.
- *
- * The function will return NULL if the offset of the root table is
- * invalid.
- */
-static lpae_t *p2m_get_root_pointer(struct p2m_domain *p2m,
-                                    gfn_t gfn)
-{
-    unsigned long root_table;
-
-    /*
-     * While the root table index is the offset from the previous level,
-     * we can't use (P2M_ROOT_LEVEL - 1) because the root level might be
-     * 0. Yet we still want to check if all the unused bits are zeroed.
-     */
-    root_table = gfn_x(gfn) >> (XEN_PT_LEVEL_ORDER(P2M_ROOT_LEVEL) +
-                                XEN_PT_LPAE_SHIFT);
-    if ( root_table >= P2M_ROOT_PAGES )
-        return NULL;
-
-    return __map_domain_page(p2m->root + root_table);
-}
-
-/*
- * Lookup the mem access settings for the given GFN in the radix tree.
- * The entry associated with the GFN is assumed to be valid.
- */
-static p2m_access_t p2m_mem_access_radix_get(struct p2m_domain *p2m, gfn_t gfn)
-{
-    void *ptr;
-
-    if ( !p2m->mem_access_enabled )
-        return p2m->default_access;
-
-    ptr = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
-    if ( !ptr )
-        return p2m_access_rwx;
-    else
-        return radix_tree_ptr_to_int(ptr);
-}
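For context on the lookup above: the radix tree stores small integers such as p2m_access_t values directly in pointer slots, so no per-entry allocation is needed. A minimal standalone sketch of that tag-in-the-low-bits encoding (the helper names here are illustrative, not Xen's exact implementation):

```c
#include <assert.h>

/*
 * Sketch of stashing a small integer in a radix-tree pointer slot: shift
 * the value up and tag the low bits so the result can never be confused
 * with a real, word-aligned pointer.
 */
static void *access_int_to_ptr(int val)
{
    return (void *)(((unsigned long)val << 2) | 0x2);
}

static int access_ptr_to_int(void *ptr)
{
    return (int)((unsigned long)ptr >> 2);
}
```

Xen's radix_tree_int_to_ptr()/radix_tree_ptr_to_int() helpers follow the same idea; the tag bit is what lets radix_tree_lookup() results be decoded unambiguously.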
-
-/*
- * In the case of the P2M, the valid bit is used for another purpose. Use
- * the type to check whether an entry is valid.
- */
-static inline bool p2m_is_valid(lpae_t pte)
-{
-    return pte.p2m.type != p2m_invalid;
-}
-
-/*
- * lpae_is_* helpers don't check whether the valid bit is set in the
- * PTE. Provide our own overlay to check the valid bit.
- */
-static inline bool p2m_is_mapping(lpae_t pte, unsigned int level)
-{
-    return p2m_is_valid(pte) && lpae_is_mapping(pte, level);
-}
-
-static inline bool p2m_is_superpage(lpae_t pte, unsigned int level)
-{
-    return p2m_is_valid(pte) && lpae_is_superpage(pte, level);
-}
-
-#define GUEST_TABLE_MAP_FAILED 0
-#define GUEST_TABLE_SUPER_PAGE 1
-#define GUEST_TABLE_NORMAL_PAGE 2
-
-static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry);
-
-/*
- * Take the currently mapped table, find the corresponding GFN entry,
- * and map the next table, if available. The previous table will be
- * unmapped if the next level was mapped (e.g. GUEST_TABLE_NORMAL_PAGE
- * is returned).
- *
- * When read_only is set, missing intermediate tables will not be
- * allocated.
- *
- * Return values:
- *  GUEST_TABLE_MAP_FAILED: Either read_only was set and the entry
- *  was empty, or allocating a new page failed.
- *  GUEST_TABLE_NORMAL_PAGE: next level mapped normally
- *  GUEST_TABLE_SUPER_PAGE: The next entry points to a superpage.
- */
-static int p2m_next_level(struct p2m_domain *p2m, bool read_only,
-                          unsigned int level, lpae_t **table,
-                          unsigned int offset)
-{
-    lpae_t *entry;
-    int ret;
-    mfn_t mfn;
-
-    entry = *table + offset;
-
-    if ( !p2m_is_valid(*entry) )
-    {
-        if ( read_only )
-            return GUEST_TABLE_MAP_FAILED;
-
-        ret = p2m_create_table(p2m, entry);
-        if ( ret )
-            return GUEST_TABLE_MAP_FAILED;
-    }
-
-    /* The function p2m_next_level is never called at the 3rd level */
-    ASSERT(level < 3);
-    if ( p2m_is_mapping(*entry, level) )
-        return GUEST_TABLE_SUPER_PAGE;
-
-    mfn = lpae_get_mfn(*entry);
-
-    unmap_domain_page(*table);
-    *table = map_domain_page(mfn);
-
-    return GUEST_TABLE_NORMAL_PAGE;
-}
-
-/*
- * Get the details of a given gfn.
- *
- * If the entry is present, the associated MFN will be returned and the
- * access and type filled up. The page_order will correspond to the
- * order of the mapping in the page table (i.e. it could be a superpage).
- *
- * If the entry is not present, INVALID_MFN will be returned and the
- * page_order will be set according to the order of the invalid range.
- *
- * valid will contain the value of bit[0] (i.e. the valid bit) of the
- * entry.
- */
-mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
-                    p2m_type_t *t, p2m_access_t *a,
-                    unsigned int *page_order,
-                    bool *valid)
-{
-    paddr_t addr = gfn_to_gaddr(gfn);
-    unsigned int level = 0;
-    lpae_t entry, *table;
-    int rc;
-    mfn_t mfn = INVALID_MFN;
-    p2m_type_t _t;
-    DECLARE_OFFSETS(offsets, addr);
-
-    ASSERT(p2m_is_locked(p2m));
-    BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
-
-    /* Allow t to be NULL */
-    t = t ?: &_t;
-
-    *t = p2m_invalid;
-
-    if ( valid )
-        *valid = false;
-
-    /* XXX: Check if the mapping is lower than the mapped gfn */
-
-    /* This gfn is higher than the highest the p2m map currently holds */
-    if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
-    {
-        for ( level = P2M_ROOT_LEVEL; level < 3; level++ )
-            if ( (gfn_x(gfn) & (XEN_PT_LEVEL_MASK(level) >> PAGE_SHIFT)) >
-                 gfn_x(p2m->max_mapped_gfn) )
-                break;
-
-        goto out;
-    }
-
-    table = p2m_get_root_pointer(p2m, gfn);
-
-    /*
-     * The table should always be non-NULL because the gfn is below
-     * p2m->max_mapped_gfn and the root table pages are always present.
-     */
-    if ( !table )
-    {
-        ASSERT_UNREACHABLE();
-        level = P2M_ROOT_LEVEL;
-        goto out;
-    }
-
-    for ( level = P2M_ROOT_LEVEL; level < 3; level++ )
-    {
-        rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
-        if ( rc == GUEST_TABLE_MAP_FAILED )
-            goto out_unmap;
-        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
-            break;
-    }
-
-    entry = table[offsets[level]];
-
-    if ( p2m_is_valid(entry) )
-    {
-        *t = entry.p2m.type;
-
-        if ( a )
-            *a = p2m_mem_access_radix_get(p2m, gfn);
-
-        mfn = lpae_get_mfn(entry);
-        /*
-         * The entry may point to a superpage. Find the MFN associated
-         * to the GFN.
-         */
-        mfn = mfn_add(mfn,
-                      gfn_x(gfn) & ((1UL << XEN_PT_LEVEL_ORDER(level)) - 1));
-
-        if ( valid )
-            *valid = lpae_is_valid(entry);
-    }
-
-out_unmap:
-    unmap_domain_page(table);
-
-out:
-    if ( page_order )
-        *page_order = XEN_PT_LEVEL_ORDER(level);
-
-    return mfn;
-}
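The walk in p2m_get_entry() indexes one table per level using the offsets from DECLARE_OFFSETS. As a rough standalone model, assuming a 4K granule where each level resolves 9 address bits (the helper name and constants are illustrative, not Xen's exact macros):

```c
#include <assert.h>
#include <stdint.h>

/* Each LPAE level resolves 9 bits with a 4K granule. */
#define LPAE_SHIFT 9U

/*
 * Offset into the table at the given level for an address: level 3
 * covers bits 12-20, level 2 bits 21-29, level 1 bits 30-38, etc.
 */
static unsigned int level_offset(uint64_t addr, unsigned int level)
{
    unsigned int shift = 12U + LPAE_SHIFT * (3U - level);

    return (addr >> shift) & ((1U << LPAE_SHIFT) - 1);
}
```

For example, a 2M-aligned address such as 0x200000 selects entry 1 of the level-2 table while its level-3 offset is 0, which is why a level-2 block entry can map it as a superpage.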
-
-mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
-{
-    mfn_t mfn;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    p2m_read_lock(p2m);
-    mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL, NULL);
-    p2m_read_unlock(p2m);
-
-    return mfn;
-}
-
-struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
-                                        p2m_type_t *t)
-{
-    struct page_info *page;
-    p2m_type_t p2mt;
-    mfn_t mfn = p2m_lookup(d, gfn, &p2mt);
-
-    if ( t )
-        *t = p2mt;
-
-    if ( !p2m_is_any_ram(p2mt) )
-        return NULL;
-
-    if ( !mfn_valid(mfn) )
-        return NULL;
-
-    page = mfn_to_page(mfn);
-
-    /*
-     * get_page won't work on a foreign mapping because the page doesn't
-     * belong to the current domain.
-     */
-    if ( p2m_is_foreign(p2mt) )
-    {
-        struct domain *fdom = page_get_owner_and_reference(page);
-        ASSERT(fdom != NULL);
-        ASSERT(fdom != d);
-        return page;
-    }
-
-    return get_page(page, d) ? page : NULL;
-}
-
 int guest_physmap_mark_populate_on_demand(struct domain *d,
                                           unsigned long gfn,
                                           unsigned int order)
@@ -634,1780 +132,16 @@ unsigned long p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn,
     return 0;
 }
 
-static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
-{
-    /* First apply type permissions */
-    switch ( t )
-    {
-    case p2m_ram_rw:
-        e->p2m.xn = 0;
-        e->p2m.write = 1;
-        break;
-
-    case p2m_ram_ro:
-        e->p2m.xn = 0;
-        e->p2m.write = 0;
-        break;
-
-    case p2m_iommu_map_rw:
-    case p2m_map_foreign_rw:
-    case p2m_grant_map_rw:
-    case p2m_mmio_direct_dev:
-    case p2m_mmio_direct_nc:
-    case p2m_mmio_direct_c:
-        e->p2m.xn = 1;
-        e->p2m.write = 1;
-        break;
-
-    case p2m_iommu_map_ro:
-    case p2m_map_foreign_ro:
-    case p2m_grant_map_ro:
-    case p2m_invalid:
-        e->p2m.xn = 1;
-        e->p2m.write = 0;
-        break;
-
-    case p2m_max_real_type:
-        BUG();
-        break;
-    }
-
-    /* Then restrict with access permissions */
-    switch ( a )
-    {
-    case p2m_access_rwx:
-        break;
-    case p2m_access_wx:
-        e->p2m.read = 0;
-        break;
-    case p2m_access_rw:
-        e->p2m.xn = 1;
-        break;
-    case p2m_access_w:
-        e->p2m.read = 0;
-        e->p2m.xn = 1;
-        break;
-    case p2m_access_rx:
-    case p2m_access_rx2rw:
-        e->p2m.write = 0;
-        break;
-    case p2m_access_x:
-        e->p2m.write = 0;
-        e->p2m.read = 0;
-        break;
-    case p2m_access_r:
-        e->p2m.write = 0;
-        e->p2m.xn = 1;
-        break;
-    case p2m_access_n:
-    case p2m_access_n2rwx:
-        e->p2m.read = e->p2m.write = 0;
-        e->p2m.xn = 1;
-        break;
-    }
-}
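The two switch statements in p2m_set_permission() implement a simple rule: the p2m type establishes the base permissions, and the mem-access setting applied afterwards can only tighten them, never widen them. A toy standalone model of that ordering (the struct and helper names are simplified stand-ins, not Xen's types):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified permission triple; Xen packs these into the LPAE PTE. */
struct perm {
    bool read, write, exec;
};

/* Base permissions for a read-write RAM page (cf. p2m_ram_rw). */
static struct perm type_ram_rw(void)
{
    return (struct perm){ .read = true, .write = true, .exec = true };
}

/* Restrict with an access setting equivalent to p2m_access_r. */
static struct perm restrict_access_r(struct perm p)
{
    p.write = false;
    p.exec = false;
    return p;
}
```

Applying the access restriction second means a read-only mem-access setting wins even on a read-write type, which is exactly the behaviour the code above relies on for memaccess.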
-
-static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a)
-{
-    /*
-     * sh, xn and write bits will be defined in the following switches
-     * based on mattr and t.
-     */
-    lpae_t e = (lpae_t) {
-        .p2m.af = 1,
-        .p2m.read = 1,
-        .p2m.table = 1,
-        .p2m.valid = 1,
-        .p2m.type = t,
-    };
-
-    BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
-
-    switch ( t )
-    {
-    case p2m_mmio_direct_dev:
-        e.p2m.mattr = MATTR_DEV;
-        e.p2m.sh = LPAE_SH_OUTER;
-        break;
-
-    case p2m_mmio_direct_c:
-        e.p2m.mattr = MATTR_MEM;
-        e.p2m.sh = LPAE_SH_OUTER;
-        break;
-
-    /*
-     * ARM ARM: Overlaying the shareability attribute (DDI
-     * 0406C.b B3-1376 to 1377)
-     *
-     * A memory region with a resultant memory type attribute of Normal,
-     * and a resultant cacheability attribute of Inner Non-cacheable,
-     * Outer Non-cacheable, must have a resultant shareability attribute
-     * of Outer Shareable, otherwise shareability is UNPREDICTABLE.
-     *
-     * On ARMv8 shareability is ignored and explicitly treated as Outer
-     * Shareable for Normal Inner Non-cacheable, Outer Non-cacheable.
-     * See the note for table D4-40, in page 1788 of the ARM DDI 0487A.j.
-     */
-    case p2m_mmio_direct_nc:
-        e.p2m.mattr = MATTR_MEM_NC;
-        e.p2m.sh = LPAE_SH_OUTER;
-        break;
-
-    default:
-        e.p2m.mattr = MATTR_MEM;
-        e.p2m.sh = LPAE_SH_INNER;
-    }
-
-    p2m_set_permission(&e, t, a);
-
-    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK));
-
-    lpae_set_mfn(e, mfn);
-
-    return e;
-}
-
-/* Generate table entry with correct attributes. */
-static lpae_t page_to_p2m_table(struct page_info *page)
+void __init p2m_restrict_ipa_bits(unsigned int ipa_bits)
 {
     /*
-     * The access value does not matter because the hardware will ignore
-     * the permission fields for table entries.
-     *
-     * We use p2m_ram_rw so the entry has a valid type. This is important
-     * for p2m_is_valid() to return true on table entries.
+     * Calculate the minimum of the maximum IPA bits that any external entity
+     * can support.
      */
-    return mfn_to_p2m_entry(page_to_mfn(page), p2m_ram_rw, p2m_access_rwx);
-}
-
-static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool clean_pte)
-{
-    write_pte(p, pte);
-    if ( clean_pte )
-        clean_dcache(*p);
-}
-
-static inline void p2m_remove_pte(lpae_t *p, bool clean_pte)
-{
-    lpae_t pte;
-
-    memset(&pte, 0x00, sizeof(pte));
-    p2m_write_pte(p, pte, clean_pte);
-}
-
-/* Allocate a new page table page and hook it in via the given entry. */
-static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
-{
-    struct page_info *page;
-    lpae_t *p;
-
-    ASSERT(!p2m_is_valid(*entry));
-
-    page = p2m_alloc_page(p2m->domain);
-    if ( page == NULL )
-        return -ENOMEM;
-
-    page_list_add(page, &p2m->pages);
-
-    p = __map_domain_page(page);
-    clear_page(p);
-
-    if ( p2m->clean_pte )
-        clean_dcache_va_range(p, PAGE_SIZE);
-
-    unmap_domain_page(p);
-
-    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte);
-
-    return 0;
-}
-
-static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
-                                    p2m_access_t a)
-{
-    int rc;
-
-    if ( !p2m->mem_access_enabled )
-        return 0;
-
-    if ( p2m_access_rwx == a )
-    {
-        radix_tree_delete(&p2m->mem_access_settings, gfn_x(gfn));
-        return 0;
-    }
-
-    rc = radix_tree_insert(&p2m->mem_access_settings, gfn_x(gfn),
-                           radix_tree_int_to_ptr(a));
-    if ( rc == -EEXIST )
-    {
-        /* If a setting already exists, change it to the new one */
-        radix_tree_replace_slot(
-            radix_tree_lookup_slot(
-                &p2m->mem_access_settings, gfn_x(gfn)),
-            radix_tree_int_to_ptr(a));
-        rc = 0;
-    }
-
-    return rc;
+    if ( ipa_bits < p2m_ipa_bits )
+        p2m_ipa_bits = ipa_bits;
 }
 
-/*
- * Put any references on the single 4K page referenced by pte.
- * TODO: Handle superpages, for now we only take special references for leaf
- * pages (specifically foreign ones, which can't be super mapped today).
- */
-static void p2m_put_l3_page(const lpae_t pte)
-{
-    mfn_t mfn = lpae_get_mfn(pte);
-
-    ASSERT(p2m_is_valid(pte));
-
-    /*
-     * TODO: Handle other p2m types
-     *
-     * It's safe to do the put_page here because page_alloc will
-     * flush the TLBs if the page is reallocated before the end of
-     * this loop.
-     */
-    if ( p2m_is_foreign(pte.p2m.type) )
-    {
-        ASSERT(mfn_valid(mfn));
-        put_page(mfn_to_page(mfn));
-    }
-    /* Detect the xenheap page and mark the stored GFN as invalid. */
-    else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) )
-        page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN);
-}
-
-/* Free lpae sub-tree behind an entry */
-static void p2m_free_entry(struct p2m_domain *p2m,
-                           lpae_t entry, unsigned int level)
-{
-    unsigned int i;
-    lpae_t *table;
-    mfn_t mfn;
-    struct page_info *pg;
-
-    /* Nothing to do if the entry is invalid. */
-    if ( !p2m_is_valid(entry) )
-        return;
-
-    if ( p2m_is_superpage(entry, level) || (level == 3) )
-    {
-#ifdef CONFIG_IOREQ_SERVER
-        /*
-         * If this gets called then either the entry was replaced by an entry
-         * with a different base (valid case) or the shattering of a superpage
-         * has failed (error case).
-         * So, at worst, the spurious mapcache invalidation might be sent.
-         */
-        if ( p2m_is_ram(entry.p2m.type) &&
-             domain_has_ioreq_server(p2m->domain) )
-            ioreq_request_mapcache_invalidate(p2m->domain);
-#endif
-
-        p2m->stats.mappings[level]--;
-        /* Only level-3 (4K) entries hold a page reference to put. */
-        if ( level == 3 )
-            p2m_put_l3_page(entry);
-        return;
-    }
-
-    table = map_domain_page(lpae_get_mfn(entry));
-    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
-        p2m_free_entry(p2m, *(table + i), level + 1);
-
-    unmap_domain_page(table);
-
-    /*
-     * Make sure all the references in the TLB have been removed before
-     * freeing the intermediate page table.
-     * XXX: Should we defer the free of the page table to avoid the
-     * flush?
-     */
-    p2m_tlb_flush_sync(p2m);
-
-    mfn = lpae_get_mfn(entry);
-    ASSERT(mfn_valid(mfn));
-
-    pg = mfn_to_page(mfn);
-
-    page_list_del(pg, &p2m->pages);
-    p2m_free_page(p2m->domain, pg);
-}
-
-static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
-                                unsigned int level, unsigned int target,
-                                const unsigned int *offsets)
-{
-    struct page_info *page;
-    unsigned int i;
-    lpae_t pte, *table;
-    bool rv = true;
-
-    /* Convenience aliases */
-    mfn_t mfn = lpae_get_mfn(*entry);
-    unsigned int next_level = level + 1;
-    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
-
-    /*
-     * This should only be called with target != level and the entry is
-     * a superpage.
-     */
-    ASSERT(level < target);
-    ASSERT(p2m_is_superpage(*entry, level));
-
-    page = p2m_alloc_page(p2m->domain);
-    if ( !page )
-        return false;
-
-    page_list_add(page, &p2m->pages);
-    table = __map_domain_page(page);
-
-    /*
-     * We are either splitting a first level 1G page into 512 second level
-     * 2M pages, or a second level 2M page into 512 third level 4K pages.
-     */
-    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
-    {
-        lpae_t *new_entry = table + i;
-
-        /*
-         * Use the content of the superpage entry and override
-         * the necessary fields, so the correct permissions are kept.
-         */
-        pte = *entry;
-        lpae_set_mfn(pte, mfn_add(mfn, i << level_order));
-
-        /*
-         * First and second level pages set p2m.table = 0, but third
-         * level entries set p2m.table = 1.
-         */
-        pte.p2m.table = (next_level == 3);
-
-        write_pte(new_entry, pte);
-    }
-
-    /* Update stats */
-    p2m->stats.shattered[level]++;
-    p2m->stats.mappings[level]--;
-    p2m->stats.mappings[next_level] += XEN_PT_LPAE_ENTRIES;
-
-    /*
-     * Continue to shatter the superpage down to the level at which we
-     * want to make the change.
-     * This is done outside the loop to avoid checking the offset to
-     * know whether the entry should be shattered for every entry.
-     */
-    if ( next_level != target )
-        rv = p2m_split_superpage(p2m, table + offsets[next_level],
-                                 level + 1, target, offsets);
-
-    if ( p2m->clean_pte )
-        clean_dcache_va_range(table, PAGE_SIZE);
-
-    unmap_domain_page(table);
-
-    /*
-     * Even if we failed, we should install the newly allocated LPAE
-     * entry. The caller will be in charge of freeing the sub-tree.
-     */
-    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte);
-
-    return rv;
-}
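The shattering loop in p2m_split_superpage() boils down to two pieces of arithmetic, sketched here as standalone helpers under the assumption of 9 bits per level (illustrative names, not Xen code): sub-entry i inherits the superpage's attributes but maps mfn + (i << next_level_order), and only level-3 entries set the table bit.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* MFN mapped by sub-entry i after splitting one level of a superpage. */
static uint64_t sub_entry_mfn(uint64_t mfn, unsigned int next_level_order,
                              unsigned int i)
{
    return mfn + ((uint64_t)i << next_level_order);
}

/*
 * In the LPAE format, level-1/2 block entries have table = 0 while
 * level-3 page entries have table = 1.
 */
static bool sub_entry_table_bit(unsigned int next_level)
{
    return next_level == 3;
}
```

So splitting a 2M superpage (next_level_order of 0, i.e. 4K pages at level 3) produces 512 consecutive 4K entries, while splitting a 1G superpage (next_level_order of 9) produces 512 consecutive 2M block entries.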
-
-/*
- * Insert an entry in the p2m. This should be called with a mapping
- * equal to a page/superpage (4K, 2M, 1G).
- */
-static int __p2m_set_entry(struct p2m_domain *p2m,
-                           gfn_t sgfn,
-                           unsigned int page_order,
-                           mfn_t smfn,
-                           p2m_type_t t,
-                           p2m_access_t a)
-{
-    unsigned int level = 0;
-    unsigned int target = 3 - (page_order / XEN_PT_LPAE_SHIFT);
-    lpae_t *entry, *table, orig_pte;
-    int rc;
-    /* A mapping is removed if the MFN is invalid. */
-    bool removing_mapping = mfn_eq(smfn, INVALID_MFN);
-    DECLARE_OFFSETS(offsets, gfn_to_gaddr(sgfn));
-
-    ASSERT(p2m_is_write_locked(p2m));
-
-    /*
-     * Check if the level target is valid: we only support
-     * 4K - 2M - 1G mapping.
-     */
-    ASSERT(target > 0 && target <= 3);
-
-    table = p2m_get_root_pointer(p2m, sgfn);
-    if ( !table )
-        return -EINVAL;
-
-    for ( level = P2M_ROOT_LEVEL; level < target; level++ )
-    {
-        /*
-         * Don't try to allocate intermediate page table if the mapping
-         * is about to be removed.
-         */
-        rc = p2m_next_level(p2m, removing_mapping,
-                            level, &table, offsets[level]);
-        if ( rc == GUEST_TABLE_MAP_FAILED )
-        {
-            /*
-             * We are here because p2m_next_level has failed to map
-             * the intermediate page table (e.g the table does not exist
-             * and the p2m tree is read-only). It is a valid case
-             * when removing a mapping as it may not exist in the
-             * page table. In this case, just ignore it.
-             */
-            rc = removing_mapping ?  0 : -ENOENT;
-            goto out;
-        }
-        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
-            break;
-    }
-
-    entry = table + offsets[level];
-
-    /*
-     * If we are here with level < target, we must be at a leaf node,
-     * and we need to break up the superpage.
-     */
-    if ( level < target )
-    {
-        /* We need to split the original page. */
-        lpae_t split_pte = *entry;
-
-        ASSERT(p2m_is_superpage(*entry, level));
-
-        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
-        {
-            /*
-             * The current super-page is still in-place, so re-increment
-             * the stats.
-             */
-            p2m->stats.mappings[level]++;
-
-            /* Free the allocated sub-tree */
-            p2m_free_entry(p2m, split_pte, level);
-
-            rc = -ENOMEM;
-            goto out;
-        }
-
-        /*
-         * Follow the break-before-make sequence to update the entry.
-         * For more details see (D4.7.1 in ARM DDI 0487A.j).
-         */
-        p2m_remove_pte(entry, p2m->clean_pte);
-        p2m_force_tlb_flush_sync(p2m);
-
-        p2m_write_pte(entry, split_pte, p2m->clean_pte);
-
-        /* then move to the level we want to make real changes */
-        for ( ; level < target; level++ )
-        {
-            rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
-
-            /*
-             * The entry should be found and either be a table
-             * or a superpage if level 3 is not targeted
-             */
-            ASSERT(rc == GUEST_TABLE_NORMAL_PAGE ||
-                   (rc == GUEST_TABLE_SUPER_PAGE && target < 3));
-        }
-
-        entry = table + offsets[level];
-    }
-
-    /*
-     * We should always arrive here at the correct level because
-     * all the intermediate tables have been installed if necessary.
-     */
-    ASSERT(level == target);
-
-    orig_pte = *entry;
-
-    /*
-     * The radix-tree can only work on 4KB pages. This is only used when
-     * memaccess is enabled and during shutdown.
-     */
-    ASSERT(!p2m->mem_access_enabled || page_order == 0 ||
-           p2m->domain->is_dying);
-    /*
-     * The access type should always be p2m_access_rwx when the mapping
-     * is removed.
-     */
-    ASSERT(!mfn_eq(INVALID_MFN, smfn) || (a == p2m_access_rwx));
-    /*
-     * Update the mem access permission before updating the P2M, so we
-     * don't have to revert the mapping if it fails.
-     */
-    rc = p2m_mem_access_radix_set(p2m, sgfn, a);
-    if ( rc )
-        goto out;
-
-    /*
-     * Always remove the entry in order to follow the break-before-make
-     * sequence when updating the translation table (D4.7.1 in ARM DDI
-     * 0487A.j).
-     */
-    if ( lpae_is_valid(orig_pte) || removing_mapping )
-        p2m_remove_pte(entry, p2m->clean_pte);
-
-    if ( removing_mapping )
-        /* Flush can be deferred if the entry is removed */
-        p2m->need_flush |= !!lpae_is_valid(orig_pte);
-    else
-    {
-        lpae_t pte = mfn_to_p2m_entry(smfn, t, a);
-
-        if ( level < 3 )
-            pte.p2m.table = 0; /* Superpage entry */
-
-        /*
-         * It is necessary to flush the TLB before writing the new entry
-         * to keep coherency when the previous entry was valid.
-         *
-         * However, it could be deferred when only the permissions are
-         * changed (e.g in case of memaccess).
-         */
-        if ( lpae_is_valid(orig_pte) )
-        {
-            if ( likely(!p2m->mem_access_enabled) ||
-                 P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
-                p2m_force_tlb_flush_sync(p2m);
-            else
-                p2m->need_flush = true;
-        }
-        else if ( !p2m_is_valid(orig_pte) ) /* new mapping */
-            p2m->stats.mappings[level]++;
-
-        p2m_write_pte(entry, pte, p2m->clean_pte);
-
-        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
-                                      gfn_add(sgfn, (1UL << page_order) - 1));
-        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
-    }
-
-    if ( is_iommu_enabled(p2m->domain) &&
-         (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
-    {
-        unsigned int flush_flags = 0;
-
-        if ( lpae_is_valid(orig_pte) )
-            flush_flags |= IOMMU_FLUSHF_modified;
-        if ( lpae_is_valid(*entry) )
-            flush_flags |= IOMMU_FLUSHF_added;
-
-        rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
-                               1UL << page_order, flush_flags);
-    }
-    else
-        rc = 0;
-
-    /*
-     * Free the entry only if the original pte was valid and the base
-     * is different (to avoid freeing when permission is changed).
-     */
-    if ( p2m_is_valid(orig_pte) &&
-         !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
-        p2m_free_entry(p2m, orig_pte, level);
-
-out:
-    unmap_domain_page(table);
-
-    return rc;
-}
-
-int p2m_set_entry(struct p2m_domain *p2m,
-                  gfn_t sgfn,
-                  unsigned long nr,
-                  mfn_t smfn,
-                  p2m_type_t t,
-                  p2m_access_t a)
-{
-    int rc = 0;
-
-    /*
-     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
-     * be dropped in relinquish_p2m_mapping(). As the P2M will still
-     * be accessible afterwards, we need to prevent mappings from being
-     * added while the domain is dying.
-     */
-    if ( unlikely(p2m->domain->is_dying) )
-        return -ENOMEM;
-
-    while ( nr )
-    {
-        unsigned long mask;
-        unsigned long order;
-
-        /*
-         * Don't take into account the MFN when removing a mapping (i.e.
-         * INVALID_MFN) to calculate the correct target order.
-         *
-         * XXX: Support superpage mappings if nr is not aligned to a
-         * superpage size.
-         */
-        mask = !mfn_eq(smfn, INVALID_MFN) ? mfn_x(smfn) : 0;
-        mask |= gfn_x(sgfn) | nr;
-
-        /* Always map 4k by 4k when memaccess is enabled */
-        if ( unlikely(p2m->mem_access_enabled) )
-            order = THIRD_ORDER;
-        else if ( !(mask & ((1UL << FIRST_ORDER) - 1)) )
-            order = FIRST_ORDER;
-        else if ( !(mask & ((1UL << SECOND_ORDER) - 1)) )
-            order = SECOND_ORDER;
-        else
-            order = THIRD_ORDER;
-
-        rc = __p2m_set_entry(p2m, sgfn, order, smfn, t, a);
-        if ( rc )
-            break;
-
-        sgfn = gfn_add(sgfn, (1 << order));
-        if ( !mfn_eq(smfn, INVALID_MFN) )
-           smfn = mfn_add(smfn, (1 << order));
-
-        nr -= (1 << order);
-    }
-
-    return rc;
-}
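The order selection in p2m_set_entry() folds the GFN, the MFN (when valid) and the remaining count into a single mask and picks the largest mapping size to which all three are aligned. A standalone sketch of that logic, assuming a 4K granule with 9 bits per level (the memaccess special case that forces 4K mappings is omitted here):

```c
#include <assert.h>

/* Mapping orders with a 4K granule: 4K, 2M and 1G. */
#define THIRD_ORDER  0U
#define SECOND_ORDER 9U
#define FIRST_ORDER  18U

/*
 * Pick the largest order such that gfn, mfn (when valid) and nr are all
 * aligned to it; any set low bit in the combined mask rules an order out.
 */
static unsigned int pick_order(unsigned long gfn, unsigned long mfn,
                               unsigned long nr, int mfn_is_valid)
{
    unsigned long mask = (mfn_is_valid ? mfn : 0) | gfn | nr;

    if ( !(mask & ((1UL << FIRST_ORDER) - 1)) )
        return FIRST_ORDER;
    if ( !(mask & ((1UL << SECOND_ORDER) - 1)) )
        return SECOND_ORDER;
    return THIRD_ORDER;
}
```

Ignoring the MFN on removal (the INVALID_MFN case) is what allows a large unmap request to proceed at superpage granularity even though INVALID_MFN itself is never aligned.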
-
-/* Invalidate all entries in the table. The p2m should be write locked. */
-static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn)
-{
-    lpae_t *table;
-    unsigned int i;
-
-    ASSERT(p2m_is_write_locked(p2m));
-
-    table = map_domain_page(mfn);
-
-    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
-    {
-        lpae_t pte = table[i];
-
-        /*
-         * Writing an entry can be expensive because it may involve
-         * cleaning the cache. So avoid updating the entry if the valid
-         * bit is already cleared.
-         */
-        if ( !pte.p2m.valid )
-            continue;
-
-        pte.p2m.valid = 0;
-
-        p2m_write_pte(&table[i], pte, p2m->clean_pte);
-    }
-
-    unmap_domain_page(table);
-
-    p2m->need_flush = true;
-}
-
-/*
- * Invalidate all entries in the root page-tables. This is
- * useful to get a fault on entry and take an action.
- *
- * p2m_invalidate_root() should not be called when the P2M is shared with
- * the IOMMU because it will cause IOMMU faults.
- */
-void p2m_invalidate_root(struct p2m_domain *p2m)
-{
-    unsigned int i;
-
-    ASSERT(!iommu_use_hap_pt(p2m->domain));
-
-    p2m_write_lock(p2m);
-
-    for ( i = 0; i < P2M_ROOT_LEVEL; i++ )
-        p2m_invalidate_table(p2m, page_to_mfn(p2m->root + i));
-
-    p2m_write_unlock(p2m);
-}
-
-/*
- * Resolve any translation fault due to change in the p2m. This
- * includes break-before-make and valid bit cleared.
- */
-bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    unsigned int level = 0;
-    bool resolved = false;
-    lpae_t entry, *table;
-
-    /* Convenience aliases */
-    DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
-
-    p2m_write_lock(p2m);
-
-    /* This gfn is higher than the highest the p2m map currently holds */
-    if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
-        goto out;
-
-    table = p2m_get_root_pointer(p2m, gfn);
-    /*
-     * The table should always be non-NULL because the gfn is below
-     * p2m->max_mapped_gfn and the root table pages are always present.
-     */
-    if ( !table )
-    {
-        ASSERT_UNREACHABLE();
-        goto out;
-    }
-
-    /*
-     * Go down the page-tables until an entry has the valid bit unset or
-     * a block/page entry has been hit.
-     */
-    for ( level = P2M_ROOT_LEVEL; level <= 3; level++ )
-    {
-        int rc;
-
-        entry = table[offsets[level]];
-
-        if ( level == 3 )
-            break;
-
-        /* Stop as soon as we hit an entry with the valid bit unset. */
-        if ( !lpae_is_valid(entry) )
-            break;
-
-        rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
-        if ( rc == GUEST_TABLE_MAP_FAILED )
-            goto out_unmap;
-        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
-            break;
-    }
-
-    /*
-     * If the valid bit of the entry is set, it means someone was playing with
-     * the Stage-2 page table. Nothing to do here; mark the fault as resolved.
-     */
-    if ( lpae_is_valid(entry) )
-    {
-        resolved = true;
-        goto out_unmap;
-    }
-
-    /*
-     * The valid bit is unset. If the entry is still not valid then the
-     * fault cannot be resolved; exit and report it.
-     */
-    if ( !p2m_is_valid(entry) )
-        goto out_unmap;
-
-    /*
-     * Now we have an entry with valid bit unset, but still valid from
-     * the P2M point of view.
-     *
-     * If an entry is pointing to a table, each entry of the table will
-     * have their valid bit cleared. This allows a function to clear the
-     * full p2m with just a couple of writes. The valid bit will then be
-     * propagated on the fault.
-     * If an entry is pointing to a block/page, no work to do for now.
-     */
-    if ( lpae_is_table(entry, level) )
-        p2m_invalidate_table(p2m, lpae_get_mfn(entry));
-
-    /*
-     * Now that the work on the entry is done, set the valid bit to prevent
-     * another fault on that entry.
-     */
-    resolved = true;
-    entry.p2m.valid = 1;
-
-    p2m_write_pte(table + offsets[level], entry, p2m->clean_pte);
-
-    /*
-     * No need to flush the TLBs as the modified entry had the valid bit
-     * unset.
-     */
-
-out_unmap:
-    unmap_domain_page(table);
-
-out:
-    p2m_write_unlock(p2m);
-
-    return resolved;
-}
-
-int p2m_insert_mapping(struct domain *d, gfn_t start_gfn, unsigned long nr,
-                       mfn_t mfn, p2m_type_t t)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc;
-
-    p2m_write_lock(p2m);
-    rc = p2m_set_entry(p2m, start_gfn, nr, mfn, t, p2m->default_access);
-    p2m_write_unlock(p2m);
-
-    return rc;
-}
-
-static inline int p2m_remove_mapping(struct domain *d,
-                                     gfn_t start_gfn,
-                                     unsigned long nr,
-                                     mfn_t mfn)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    unsigned long i;
-    int rc;
-
-    p2m_write_lock(p2m);
-    /*
-     * Before removing the GFN-MFN mapping for any RAM pages, make sure
-     * that there is no difference between what is already mapped and what
-     * is requested to be unmapped.
-     * If they don't match, bail out early. For instance, this could happen
-     * if two CPUs are requesting to unmap the same P2M entry concurrently.
-     */
-    for ( i = 0; i < nr; )
-    {
-        unsigned int cur_order;
-        p2m_type_t t;
-        mfn_t mfn_return = p2m_get_entry(p2m, gfn_add(start_gfn, i), &t, NULL,
-                                         &cur_order, NULL);
-
-        if ( p2m_is_any_ram(t) &&
-             (!mfn_valid(mfn) || !mfn_eq(mfn_add(mfn, i), mfn_return)) )
-        {
-            rc = -EILSEQ;
-            goto out;
-        }
-
-        i += (1UL << cur_order) -
-             ((gfn_x(start_gfn) + i) & ((1UL << cur_order) - 1));
-    }
-
-    rc = p2m_set_entry(p2m, start_gfn, nr, INVALID_MFN,
-                       p2m_invalid, p2m_access_rwx);
-
-out:
-    p2m_write_unlock(p2m);
-
-    return rc;
-}
-
-int map_regions_p2mt(struct domain *d,
-                     gfn_t gfn,
-                     unsigned long nr,
-                     mfn_t mfn,
-                     p2m_type_t p2mt)
-{
-    return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
-}
-
-int unmap_regions_p2mt(struct domain *d,
-                       gfn_t gfn,
-                       unsigned long nr,
-                       mfn_t mfn)
-{
-    return p2m_remove_mapping(d, gfn, nr, mfn);
-}
-
-int map_mmio_regions(struct domain *d,
-                     gfn_t start_gfn,
-                     unsigned long nr,
-                     mfn_t mfn)
-{
-    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
-}
-
-int unmap_mmio_regions(struct domain *d,
-                       gfn_t start_gfn,
-                       unsigned long nr,
-                       mfn_t mfn)
-{
-    return p2m_remove_mapping(d, start_gfn, nr, mfn);
-}
-
-int map_dev_mmio_page(struct domain *d, gfn_t gfn, mfn_t mfn)
-{
-    int res;
-
-    if ( !iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn)) )
-        return 0;
-
-    res = p2m_insert_mapping(d, gfn, 1, mfn, p2m_mmio_direct_c);
-    if ( res < 0 )
-    {
-        printk(XENLOG_G_ERR "Unable to map MFN %#"PRI_mfn" in %pd\n",
-               mfn_x(mfn), d);
-        return res;
-    }
-
-    return 0;
-}
-
-int guest_physmap_add_entry(struct domain *d,
-                            gfn_t gfn,
-                            mfn_t mfn,
-                            unsigned long page_order,
-                            p2m_type_t t)
-{
-    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
-}
-
-int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
-                              unsigned int page_order)
-{
-    return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
-}
-
-int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
-                          unsigned long gfn, mfn_t mfn)
-{
-    struct page_info *page = mfn_to_page(mfn);
-    int rc;
-
-    ASSERT(arch_acquire_resource_check(d));
-
-    if ( !get_page(page, fd) )
-        return -EINVAL;
-
-    /*
-     * It is valid to always use p2m_map_foreign_rw here because, if this
-     * gets called, then d != fd. The case where d == fd would have been
-     * rejected by rcu_lock_remote_domain_by_id() earlier. Add an ASSERT()
-     * to catch incorrect usage in the future.
-     */
-    ASSERT(d != fd);
-
-    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
-    if ( rc )
-        put_page(page);
-
-    return rc;
-}
-
-static struct page_info *p2m_allocate_root(void)
-{
-    struct page_info *page;
-    unsigned int i;
-
-    page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
-    if ( page == NULL )
-        return NULL;
-
-    /* Clear all first level pages */
-    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
-        clear_and_clean_page(page + i);
-
-    return page;
-}
-
-static int p2m_alloc_table(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    p2m->root = p2m_allocate_root();
-    if ( !p2m->root )
-        return -ENOMEM;
-
-    p2m->vttbr = generate_vttbr(p2m->vmid, page_to_mfn(p2m->root));
-
-    /*
-     * Make sure that all TLBs corresponding to the new VMID are flushed
-     * before using it
-     */
-    p2m_write_lock(p2m);
-    p2m_force_tlb_flush_sync(p2m);
-    p2m_write_unlock(p2m);
-
-    return 0;
-}
-
-
-static spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
-
-/*
- * VTTBR_EL2 VMID field is 8 or 16 bits. AArch64 may support 16-bit VMID.
- * Using a bitmap here limits us to 256 or 65536 (for AArch64) concurrent
- * domains. The bitmap space will be allocated dynamically based on
- * whether 8 or 16 bit VMIDs are supported.
- */
-static unsigned long *vmid_mask;
-
-static void p2m_vmid_allocator_init(void)
-{
-    /*
-     * Allocate space for vmid_mask based on MAX_VMID.
-     */
-    vmid_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(MAX_VMID));
-
-    if ( !vmid_mask )
-        panic("Could not allocate VMID bitmap space\n");
-
-    set_bit(INVALID_VMID, vmid_mask);
-}
-
-static int p2m_alloc_vmid(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    int rc, nr;
-
-    spin_lock(&vmid_alloc_lock);
-
-    nr = find_first_zero_bit(vmid_mask, MAX_VMID);
-
-    ASSERT(nr != INVALID_VMID);
-
-    if ( nr == MAX_VMID )
-    {
-        rc = -EBUSY;
-        printk(XENLOG_ERR "p2m.c: dom%d: VMID pool exhausted\n", d->domain_id);
-        goto out;
-    }
-
-    set_bit(nr, vmid_mask);
-
-    p2m->vmid = nr;
-
-    rc = 0;
-
-out:
-    spin_unlock(&vmid_alloc_lock);
-    return rc;
-}
-
-static void p2m_free_vmid(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    spin_lock(&vmid_alloc_lock);
-    if ( p2m->vmid != INVALID_VMID )
-        clear_bit(p2m->vmid, vmid_mask);
-
-    spin_unlock(&vmid_alloc_lock);
-}
-
-int p2m_teardown(struct domain *d, bool allow_preemption)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    unsigned long count = 0;
-    struct page_info *pg;
-    unsigned int i;
-    int rc = 0;
-
-    if ( page_list_empty(&p2m->pages) )
-        return 0;
-
-    p2m_write_lock(p2m);
-
-    /*
-     * We are about to free the intermediate page-tables, so clear the
-     * root to prevent any walk from using them.
-     */
-    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
-        clear_and_clean_page(p2m->root + i);
-
-    /*
-     * The domain will not be scheduled anymore, so in theory we should
-     * not need to flush the TLBs. Do it as a safety measure.
-     *
-     * Note that all the devices have already been de-assigned. So we don't
-     * need to flush the IOMMU TLB here.
-     */
-    p2m_force_tlb_flush_sync(p2m);
-
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-    {
-        p2m_free_page(p2m->domain, pg);
-        count++;
-        /* Arbitrarily preempt every 512 iterations */
-        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
-        {
-            rc = -ERESTART;
-            break;
-        }
-    }
-
-    p2m_write_unlock(p2m);
-
-    return rc;
-}
-
-void p2m_final_teardown(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    /* p2m not actually initialized */
-    if ( !p2m->domain )
-        return;
-
-    /*
-     * No need to call relinquish_p2m_mapping() here because
-     * p2m_final_teardown() is called either after domain_relinquish_resources()
-     * where relinquish_p2m_mapping() has been called, or from failure path of
-     * domain_create()/arch_domain_create() where mappings that require
-     * p2m_put_l3_page() should never be created. For the latter case, also see
-     * comment on top of the p2m_set_entry() for more info.
-     */
-
-    BUG_ON(p2m_teardown(d, false));
-    ASSERT(page_list_empty(&p2m->pages));
-
-    while ( p2m_teardown_allocation(d) == -ERESTART )
-        continue; /* No preemption support here */
-    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
-
-    if ( p2m->root )
-        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
-
-    p2m->root = NULL;
-
-    p2m_free_vmid(d);
-
-    radix_tree_destroy(&p2m->mem_access_settings, NULL);
-
-    p2m->domain = NULL;
-}
-
-int p2m_init(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc;
-    unsigned int cpu;
-
-    rwlock_init(&p2m->lock);
-    spin_lock_init(&d->arch.paging.lock);
-    INIT_PAGE_LIST_HEAD(&p2m->pages);
-    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
-
-    p2m->vmid = INVALID_VMID;
-    p2m->max_mapped_gfn = _gfn(0);
-    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
-
-    p2m->default_access = p2m_access_rwx;
-    p2m->mem_access_enabled = false;
-    radix_tree_init(&p2m->mem_access_settings);
-
-    /*
-     * Some IOMMUs don't support coherent PT walk. When the p2m is
-     * shared with the IOMMU, Xen has to make sure that the PT changes
-     * have reached the memory.
-     */
-    p2m->clean_pte = is_iommu_enabled(d) &&
-        !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
-
-    /*
-     * Make sure that the type chosen is able to store any vCPU ID
-     * between 0 and the maximum number of virtual CPUs supported, as
-     * well as INVALID_VCPU_ID.
-     */
-    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPUS);
-    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < INVALID_VCPU_ID);
-
-    for_each_possible_cpu(cpu)
-       p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
-
-    /*
-     * "Trivial" initialisation is now complete.  Set the backpointer so
-     * p2m_teardown() and friends know to do something.
-     */
-    p2m->domain = d;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc )
-        return rc;
-
-    rc = p2m_alloc_table(d);
-    if ( rc )
-        return rc;
-
-    /*
-     * Hardware using GICv2 needs to create a P2M mapping of the 8KB GICv2
-     * area when the domain is created. Considering the worst case for page
-     * tables and keeping a buffer, populate 16 pages in the P2M pages pool
-     * here. For GICv3, the above-mentioned P2M mapping is not necessary,
-     * but since the 16 pages allocated here would not be wasted, populate
-     * these pages unconditionally.
-     */
-    spin_lock(&d->arch.paging.lock);
-    rc = p2m_set_allocation(d, 16, NULL);
-    spin_unlock(&d->arch.paging.lock);
-    if ( rc )
-        return rc;
-
-    return 0;
-}
-
-/*
- * The function will go through the p2m and remove page references where
- * required. The mappings will be removed from the p2m.
- *
- * XXX: See whether the mapping can be left intact in the p2m.
- */
-int relinquish_p2m_mapping(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    unsigned long count = 0;
-    p2m_type_t t;
-    int rc = 0;
-    unsigned int order;
-    gfn_t start, end;
-
-    BUG_ON(!d->is_dying);
-    /* No mappings can be added in the P2M after the P2M lock is released. */
-    p2m_write_lock(p2m);
-
-    start = p2m->lowest_mapped_gfn;
-    end = gfn_add(p2m->max_mapped_gfn, 1);
-
-    for ( ; gfn_x(start) < gfn_x(end);
-          start = gfn_next_boundary(start, order) )
-    {
-        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order, NULL);
-
-        count++;
-        /*
-         * Arbitrarily preempt every 512 iterations.
-         */
-        if ( !(count % 512) && hypercall_preempt_check() )
-        {
-            rc = -ERESTART;
-            break;
-        }
-
-        /*
-         * p2m_set_entry will take care of removing the reference on the
-         * page when necessary and removing the mapping from the p2m.
-         */
-        if ( !mfn_eq(mfn, INVALID_MFN) )
-        {
-            /*
-             * For a valid mapping, the start will always be aligned, as
-             * the entry will be removed whilst relinquishing.
-             */
-            rc = __p2m_set_entry(p2m, start, order, INVALID_MFN,
-                                 p2m_invalid, p2m_access_rwx);
-            if ( unlikely(rc) )
-            {
-                printk(XENLOG_G_ERR "Unable to remove mapping gfn=%#"PRI_gfn" order=%u from the p2m of domain %d\n", gfn_x(start), order, d->domain_id);
-                break;
-            }
-        }
-    }
-
-    /*
-     * Update lowest_mapped_gfn so on the next call we still start where
-     * we stopped.
-     */
-    p2m->lowest_mapped_gfn = start;
-
-    p2m_write_unlock(p2m);
-
-    return rc;
-}
-
-int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    gfn_t next_block_gfn;
-    gfn_t start = *pstart;
-    mfn_t mfn = INVALID_MFN;
-    p2m_type_t t;
-    unsigned int order;
-    int rc = 0;
-    /* Counter for preemption */
-    unsigned short count = 0;
-
-    /*
-     * The cache flush operation will invalidate the RAM assigned to the
-     * guest in a given range. It will not modify the page tables, and
-     * flushing the cache whilst a page is used by another CPU is
-     * harmless. So using a read-lock is fine here.
-     */
-    p2m_read_lock(p2m);
-
-    start = gfn_max(start, p2m->lowest_mapped_gfn);
-    end = gfn_min(end, gfn_add(p2m->max_mapped_gfn, 1));
-
-    next_block_gfn = start;
-
-    while ( gfn_x(start) < gfn_x(end) )
-    {
-        /*
-         * Cleaning the cache for the P2M may take a long time, so we
-         * need to be able to preempt. We will arbitrarily preempt every
-         * time the count reaches 512 or above.
-         *
-         * The count will be incremented by:
-         *  - 1 for each region skipped
-         *  - 10 for each page requiring a flush
-         */
-        if ( count >= 512 )
-        {
-            if ( softirq_pending(smp_processor_id()) )
-            {
-                rc = -ERESTART;
-                break;
-            }
-            count = 0;
-        }
-
-        /*
-         * We want to flush page by page as:
-         *  - it may not be possible to map the full block (can be up to 1GB)
-         *    in Xen memory
-         *  - we may want to do fine-grained preemption, as flushing
-         *    multiple pages in one go may take a long time
-         *
-         * As p2m_get_entry is able to return the size of the mapping
-         * in the p2m, it is pointless to execute it for each page.
-         *
-         * We can optimize it by tracking the gfn of the next
-         * block. So we will only call p2m_get_entry for each block (can
-         * be up to 1GB).
-         */
-        if ( gfn_eq(start, next_block_gfn) )
-        {
-            bool valid;
-
-            mfn = p2m_get_entry(p2m, start, &t, NULL, &order, &valid);
-            next_block_gfn = gfn_next_boundary(start, order);
-
-            if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_any_ram(t) || !valid )
-            {
-                count++;
-                start = next_block_gfn;
-                continue;
-            }
-        }
-
-        count += 10;
-
-        flush_page_to_ram(mfn_x(mfn), false);
-
-        start = gfn_add(start, 1);
-        mfn = mfn_add(mfn, 1);
-    }
-
-    if ( rc != -ERESTART )
-        invalidate_icache();
-
-    p2m_read_unlock(p2m);
-
-    *pstart = start;
-
-    return rc;
-}
-
-/*
- * Clean & invalidate RAM associated to the guest vCPU.
- *
- * The function can only work with the current vCPU and should be called
- * with IRQ enabled as the vCPU could get preempted.
- */
-void p2m_flush_vm(struct vcpu *v)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
-    int rc;
-    gfn_t start = _gfn(0);
-
-    ASSERT(v == current);
-    ASSERT(local_irq_is_enabled());
-    ASSERT(v->arch.need_flush_to_ram);
-
-    do
-    {
-        rc = p2m_cache_flush_range(v->domain, &start, _gfn(ULONG_MAX));
-        if ( rc == -ERESTART )
-            do_softirq();
-    } while ( rc == -ERESTART );
-
-    if ( rc != 0 )
-        gprintk(XENLOG_WARNING,
-                "P2M has not been correctly cleaned (rc = %d)\n",
-                rc);
-
-    /*
-     * Invalidate the p2m to track which pages were modified by the guest
-     * between calls of p2m_flush_vm().
-     */
-    p2m_invalidate_root(p2m);
-
-    v->arch.need_flush_to_ram = false;
-}
-
-/*
- * See note at ARMv7 ARM B1.14.4 (DDI 0406C.c) (TL;DR: S/W ops are not
- * easily virtualized).
- *
- * Main problems:
- *  - S/W ops are local to a CPU (not broadcast)
- *  - We have line migration behind our back (speculation)
- *  - System caches don't support S/W at all (damn!)
- *
- * In the face of the above, the best we can do is to try and convert
- * S/W ops to VA ops. Because the guest is not allowed to infer the S/W
- * to PA mapping, it can only use S/W to nuke the whole cache, which is
- * rather a good thing for us.
- *
- * Also, it is only used when turning caches on/off ("The expected
- * usage of the cache maintenance instructions that operate by set/way
- * is associated with the powerdown and powerup of caches, if this is
- * required by the implementation.").
- *
- * We use the following policy:
- *  - If we trap a S/W operation, we enable VM trapping to detect
- *    caches being turned on/off, and do a full clean.
- *
- *  - We flush the caches on both caches being turned on and off.
- *
- *  - Once the caches are enabled, we stop trapping VM ops.
- */
-void p2m_set_way_flush(struct vcpu *v, struct cpu_user_regs *regs,
-                       const union hsr hsr)
-{
-    /* This function can only work with the current vCPU. */
-    ASSERT(v == current);
-
-    if ( iommu_use_hap_pt(current->domain) )
-    {
-        gprintk(XENLOG_ERR,
-                "The cache should be flushed by VA rather than by set/way.\n");
-        inject_undef_exception(regs, hsr);
-        return;
-    }
-
-    if ( !(v->arch.hcr_el2 & HCR_TVM) )
-    {
-        v->arch.need_flush_to_ram = true;
-        vcpu_hcr_set_flags(v, HCR_TVM);
-    }
-}
-
-void p2m_toggle_cache(struct vcpu *v, bool was_enabled)
-{
-    bool now_enabled = vcpu_has_cache_enabled(v);
-
-    /* This function can only work with the current vCPU. */
-    ASSERT(v == current);
-
-    /*
-     * If switching the MMU+caches on, need to invalidate the caches.
-     * If switching it off, need to clean the caches.
-     * Clean + invalidate does the trick always.
-     */
-    if ( was_enabled != now_enabled )
-        v->arch.need_flush_to_ram = true;
-
-    /* Caches are now on, stop trapping VM ops (until a S/W op) */
-    if ( now_enabled )
-        vcpu_hcr_clear_flags(v, HCR_TVM);
-}
-
-mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
-{
-    return p2m_lookup(d, gfn, NULL);
-}
-
-struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
-                                    unsigned long flags)
-{
-    struct domain *d = v->domain;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct page_info *page = NULL;
-    paddr_t maddr = 0;
-    uint64_t par;
-    mfn_t mfn;
-    p2m_type_t t;
-
-    /*
-     * XXX: To support a different vCPU, we would need to load the
-     * VTTBR_EL2, TTBR0_EL1, TTBR1_EL1 and SCTLR_EL1
-     */
-    if ( v != current )
-        return NULL;
-
-    /*
-     * The lock is here to protect us against the break-before-make
-     * sequence used when updating the entry.
-     */
-    p2m_read_lock(p2m);
-    par = gvirt_to_maddr(va, &maddr, flags);
-    p2m_read_unlock(p2m);
-
-    /*
-     * gvirt_to_maddr may fail if the entry does not have the valid bit
-     * set. Fall back to the second method:
-     *  1) Translate the VA to IPA using a software lookup -> the Stage-1
-     *     page-table may not be accessible because the stage-2 entries
-     *     may have the valid bit unset.
-     *  2) Software lookup of the MFN
-     *
-     * Note that when memaccess is enabled, we instead directly call
-     * p2m_mem_access_check_and_get_page(...). Because the function is a
-     * variant of the methods described above, it will be able to
-     * handle entries with valid bit unset.
-     *
-     * TODO: Integrate more nicely memaccess with the rest of the
-     * function.
-     * TODO: Use the fault error in PAR_EL1 to avoid pointless
-     *  translation.
-     */
-    if ( par )
-    {
-        paddr_t ipa;
-        unsigned int s1_perms;
-
-        /*
-         * When memaccess is enabled, the GVA-to-MADDR translation may
-         * have failed because of a permission fault.
-         */
-        if ( p2m->mem_access_enabled )
-            return p2m_mem_access_check_and_get_page(va, flags, v);
-
-        /*
-         * The software stage-1 table walk can still fail, e.g, if the
-         * GVA is not mapped.
-         */
-        if ( !guest_walk_tables(v, va, &ipa, &s1_perms) )
-        {
-            dprintk(XENLOG_G_DEBUG,
-                    "%pv: Failed to walk page-table va %#"PRIvaddr"\n", v, va);
-            return NULL;
-        }
-
-        mfn = p2m_lookup(d, gaddr_to_gfn(ipa), &t);
-        if ( mfn_eq(INVALID_MFN, mfn) || !p2m_is_ram(t) )
-            return NULL;
-
-        /*
-         * Check permissions that are assumed by the caller. For instance,
-         * in case of guestcopy, the caller assumes that the translated
-         * page can be accessed with the requested permissions. If this
-         * is not the case, we should fail.
-         *
-         * Please note that we do not check for the GV2M_EXEC
-         * permission. This is fine because the hardware-based translation
-         * instruction does not test for execute permissions.
-         */
-        if ( (flags & GV2M_WRITE) && !(s1_perms & GV2M_WRITE) )
-            return NULL;
-
-        if ( (flags & GV2M_WRITE) && t != p2m_ram_rw )
-            return NULL;
-    }
-    else
-        mfn = maddr_to_mfn(maddr);
-
-    if ( !mfn_valid(mfn) )
-    {
-        dprintk(XENLOG_G_DEBUG, "%pv: Invalid MFN %#"PRI_mfn"\n",
-                v, mfn_x(mfn));
-        return NULL;
-    }
-
-    page = mfn_to_page(mfn);
-    ASSERT(page);
-
-    if ( unlikely(!get_page(page, d)) )
-    {
-        dprintk(XENLOG_G_DEBUG, "%pv: Failed to acquire the MFN %#"PRI_mfn"\n",
-                v, mfn_x(maddr_to_mfn(maddr)));
-        return NULL;
-    }
-
-    return page;
-}
-
-void __init p2m_restrict_ipa_bits(unsigned int ipa_bits)
-{
-    /*
-     * Calculate the minimum of the maximum IPA bits that any external entity
-     * can support.
-     */
-    if ( ipa_bits < p2m_ipa_bits )
-        p2m_ipa_bits = ipa_bits;
-}
-
-/* VTCR value to be configured by all CPUs. Set only once by the boot CPU */
-static register_t __read_mostly vtcr;
-
-static void setup_virt_paging_one(void *data)
-{
-    WRITE_SYSREG(vtcr, VTCR_EL2);
-
-    /*
-     * ARM64_WORKAROUND_AT_SPECULATE: We want to keep the TLBs free from
-     * entries related to EL1/EL0 translation regime until a guest vCPU
-     * is running. For that, we need to set up VTTBR to point to an empty
-     * page-table and turn on stage-2 translation. The TLB entries
-     * associated with EL1/EL0 translation regime will also be flushed in case
-     * an AT instruction was speculated before hand.
-     */
-    if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) )
-    {
-        WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR_EL2);
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_VM, HCR_EL2);
-        isb();
-
-        flush_all_guests_tlb_local();
-    }
-}
-
-void __init setup_virt_paging(void)
-{
-    /* Setup Stage 2 address translation */
-    register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
-
-#ifdef CONFIG_ARM_32
-    if ( p2m_ipa_bits < 40 )
-        panic("P2M: Not able to support %u-bit IPA at the moment\n",
-              p2m_ipa_bits);
-
-    printk("P2M: 40-bit IPA\n");
-    p2m_ipa_bits = 40;
-    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
-    val |= VTCR_SL0(0x1); /* P2M starts at first level */
-#else /* CONFIG_ARM_64 */
-    static const struct {
-        unsigned int pabits; /* Physical Address Size */
-        unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
-        unsigned int root_order; /* Page order of the root of the p2m */
-        unsigned int sl0;    /* Desired SL0, maximum in comment */
-    } pa_range_info[] __initconst = {
-        /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
-        /*      PA size, t0sz(min), root-order, sl0(max) */
-        [0] = { 32,      32/*32*/,  0,          1 },
-        [1] = { 36,      28/*28*/,  0,          1 },
-        [2] = { 40,      24/*24*/,  1,          1 },
-        [3] = { 42,      22/*22*/,  3,          1 },
-        [4] = { 44,      20/*20*/,  0,          2 },
-        [5] = { 48,      16/*16*/,  0,          2 },
-        [6] = { 52,      12/*12*/,  4,          2 },
-        [7] = { 0 }  /* Invalid */
-    };
-
-    unsigned int i;
-    unsigned int pa_range = 0x10; /* Larger than any possible value */
-
-    /*
-     * Restrict "p2m_ipa_bits" if needed. As the P2M table is always
-     * configured with IPA bits == PA bits, compare against "pabits".
-     */
-    if ( pa_range_info[system_cpuinfo.mm64.pa_range].pabits < p2m_ipa_bits )
-        p2m_ipa_bits = pa_range_info[system_cpuinfo.mm64.pa_range].pabits;
-
-    /*
-     * CPU info sanitization made sure we support 16-bit VMIDs only if
-     * all cores support them.
-     */
-    if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
-        max_vmid = MAX_VMID_16_BIT;
-
-    /* Choose a suitable "pa_range" according to the resulting "p2m_ipa_bits". */
-    for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
-    {
-        if ( p2m_ipa_bits == pa_range_info[i].pabits )
-        {
-            pa_range = i;
-            break;
-        }
-    }
-
-    /* pa_range is 4 bits but we don't support all modes */
-    if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
-        panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
-
-    val |= VTCR_PS(pa_range);
-    val |= VTCR_TG0_4K;
-
-    /* Set the VS bit only if 16 bit VMID is supported. */
-    if ( MAX_VMID == MAX_VMID_16_BIT )
-        val |= VTCR_VS;
-    val |= VTCR_SL0(pa_range_info[pa_range].sl0);
-    val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
-
-    p2m_root_order = pa_range_info[pa_range].root_order;
-    p2m_root_level = 2 - pa_range_info[pa_range].sl0;
-    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
-
-    printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n",
-           p2m_ipa_bits,
-           pa_range_info[pa_range].pabits,
-           ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
-#endif
-    printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n",
-           4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
-
-    p2m_vmid_allocator_init();
-
-    /* It is not allowed to concatenate a level zero root */
-    BUG_ON( P2M_ROOT_LEVEL == 0 && P2M_ROOT_ORDER > 0 );
-    vtcr = val;
-
-    /*
-     * ARM64_WORKAROUND_AT_SPECULATE requires the root table to be
-     * allocated with all entries zeroed.
-     */
-    if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) )
-    {
-        struct page_info *root;
-
-        root = p2m_allocate_root();
-        if ( !root )
-            panic("Unable to allocate root table for ARM64_WORKAROUND_AT_SPECULATE\n");
-
-        empty_root_mfn = page_to_mfn(root);
-    }
-
-    setup_virt_paging_one(NULL);
-    smp_call_function(setup_virt_paging_one, NULL, 1);
-}
-
-static int cpu_virt_paging_callback(struct notifier_block *nfb,
-                                    unsigned long action,
-                                    void *hcpu)
-{
-    switch ( action )
-    {
-    case CPU_STARTING:
-        ASSERT(system_state != SYS_STATE_boot);
-        setup_virt_paging_one(NULL);
-        break;
-    default:
-        break;
-    }
-
-    return NOTIFY_DONE;
-}
-
-static struct notifier_block cpu_virt_paging_nfb = {
-    .notifier_call = cpu_virt_paging_callback,
-};
-
-static int __init cpu_virt_paging_init(void)
-{
-    register_cpu_notifier(&cpu_virt_paging_nfb);
-
-    return 0;
-}
-/*
- * Initialization of the notifier has to be done in the init rather than the
- * presmp_init phase because the registered notifier is used to set up virtual
- * paging for non-boot CPUs after the initial virtual paging for all CPUs is
- * already set up, i.e. when a non-boot CPU is hotplugged after the system has
- * booted. In other words, the notifier should be registered after the virtual
- * paging is initially set up (setup_virt_paging() is called from start_xen()).
- * This is required because the vtcr config value has to be set before the
- * notifier can fire.
-__initcall(cpu_virt_paging_init);
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/p2m_mmu.c b/xen/arch/arm/p2m_mmu.c
new file mode 100644
index 0000000000..88a9d8f392
--- /dev/null
+++ b/xen/arch/arm/p2m_mmu.c
@@ -0,0 +1,2295 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <xen/cpu.h>
+#include <xen/domain_page.h>
+#include <xen/iocap.h>
+#include <xen/ioreq.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/softirq.h>
+
+#include <asm/alternative.h>
+#include <asm/event.h>
+#include <asm/flushtlb.h>
+#include <asm/guest_walk.h>
+#include <asm/page.h>
+#include <asm/traps.h>
+
+#define MAX_VMID_8_BIT  (1UL << 8)
+#define MAX_VMID_16_BIT (1UL << 16)
+
+#define INVALID_VMID 0 /* VMID 0 is reserved */
+
+#ifdef CONFIG_ARM_64
+static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
+/* VMID is by default 8 bit width on AArch64 */
+#define MAX_VMID       max_vmid
+#else
+/* VMID is always 8 bit width on AArch32 */
+#define MAX_VMID        MAX_VMID_8_BIT
+#endif
+
+#ifdef CONFIG_ARM_64
+unsigned int __read_mostly p2m_root_order;
+unsigned int __read_mostly p2m_root_level;
+#endif
+
+#define P2M_ROOT_PAGES    (1<<P2M_ROOT_ORDER)
+
+static mfn_t __read_mostly empty_root_mfn;
+
+static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
+{
+    return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
+}
+
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For the hardware domain, there should be no limit on the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
+/* Unlock the flush and do a P2M TLB flush if necessary */
+void p2m_write_unlock(struct p2m_domain *p2m)
+{
+    /*
+     * The final flush is done with the P2M write lock taken to avoid
+     * someone else modifying the P2M before the TLB invalidation has
+     * completed.
+     */
+    p2m_tlb_flush_sync(p2m);
+
+    write_unlock(&p2m->lock);
+}
+
+void p2m_dump_info(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m_read_lock(p2m);
+    printk("p2m mappings for domain %d (vmid %d):\n",
+           d->domain_id, p2m->vmid);
+    BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]);
+    printk("  1G mappings: %ld (shattered %ld)\n",
+           p2m->stats.mappings[1], p2m->stats.shattered[1]);
+    printk("  2M mappings: %ld (shattered %ld)\n",
+           p2m->stats.mappings[2], p2m->stats.shattered[2]);
+    printk("  4K mappings: %ld\n", p2m->stats.mappings[3]);
+    p2m_read_unlock(p2m);
+}
+
+void dump_p2m_lookup(struct domain *d, paddr_t addr)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
+
+    printk("P2M @ %p mfn:%#"PRI_mfn"\n",
+           p2m->root, mfn_x(page_to_mfn(p2m->root)));
+
+    dump_pt_walk(page_to_maddr(p2m->root), addr,
+                 P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
+}
+
+/*
+ * p2m_save_state() and p2m_restore_state() work as a pair to work around
+ * ARM64_WORKAROUND_AT_SPECULATE. p2m_save_state() will set up VTTBR to
+ * point to the empty page-tables so no new TLB entries are allocated.
+ */
+void p2m_save_state(struct vcpu *p)
+{
+    p->arch.sctlr = READ_SYSREG(SCTLR_EL1);
+
+    if ( cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) )
+    {
+        WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR_EL2);
+        /*
+         * Ensure VTTBR_EL2 is correctly synchronized so we can restore
+         * the next vCPU context without worrying about AT instruction
+         * speculation.
+         */
+        isb();
+    }
+}
+
+void p2m_restore_state(struct vcpu *n)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(n->domain);
+    uint8_t *last_vcpu_ran;
+
+    if ( is_idle_vcpu(n) )
+        return;
+
+    WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1);
+    WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2);
+
+    /*
+     * ARM64_WORKAROUND_AT_SPECULATE: VTTBR_EL2 should be restored after all
+     * registers associated with the EL1/EL0 translation regime have been
+     * synchronized.
+     */
+    asm volatile(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_AT_SPECULATE));
+    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
+
+    last_vcpu_ran = &p2m->last_vcpu_ran[smp_processor_id()];
+
+    /*
+     * While we are restoring an out-of-context translation regime
+     * we still need to ensure:
+     *  - VTTBR_EL2 is synchronized before flushing the TLBs
+     *  - All registers for EL1 are synchronized before executing an AT
+     *    instruction targeting S1/S2.
+     */
+    isb();
+
+    /*
+     * Flush local TLB for the domain to prevent wrong TLB translation
+     * when running multiple vCPUs of the same domain on a single pCPU.
+     */
+    if ( *last_vcpu_ran != INVALID_VCPU_ID && *last_vcpu_ran != n->vcpu_id )
+        flush_guest_tlb_local();
+
+    *last_vcpu_ran = n->vcpu_id;
+}
+
+/*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    unsigned long flags = 0;
+    uint64_t ovttbr;
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    /*
+     * ARM only provides an instruction to flush TLBs for the current
+     * VMID. So switch to the VTTBR of a given P2M if different.
+     */
+    ovttbr = READ_SYSREG64(VTTBR_EL2);
+    if ( ovttbr != p2m->vttbr )
+    {
+        uint64_t vttbr;
+
+        local_irq_save(flags);
+
+        /*
+         * ARM64_WORKAROUND_AT_SPECULATE: We need to stop AT from allocating
+         * TLB entries because the context is partially modified. We
+         * only need the VMID for flushing the TLBs, so we can generate
+         * a new VTTBR with the VMID to flush and the empty root table.
+         */
+        if ( !cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) )
+            vttbr = p2m->vttbr;
+        else
+            vttbr = generate_vttbr(p2m->vmid, empty_root_mfn);
+
+        WRITE_SYSREG64(vttbr, VTTBR_EL2);
+
+        /* Ensure VTTBR_EL2 is synchronized before flushing the TLBs */
+        isb();
+    }
+
+    flush_guest_tlb();
+
+    if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
+    {
+        WRITE_SYSREG64(ovttbr, VTTBR_EL2);
+        /* Ensure VTTBR_EL2 is back in place before continuing. */
+        isb();
+        local_irq_restore(flags);
+    }
+
+    p2m->need_flush = false;
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    if ( p2m->need_flush )
+        p2m_force_tlb_flush_sync(p2m);
+}
+
+/*
+ * Find and map the root page table. The caller is responsible for
+ * unmapping the table.
+ *
+ * The function will return NULL if the offset of the root table is
+ * invalid.
+ */
+static lpae_t *p2m_get_root_pointer(struct p2m_domain *p2m,
+                                    gfn_t gfn)
+{
+    unsigned long root_table;
+
+    /*
+     * While the root table index is the offset from the previous level,
+     * we can't use (P2M_ROOT_LEVEL - 1) because the root level might be
+     * 0. Yet we still want to check if all the unused bits are zeroed.
+     */
+    root_table = gfn_x(gfn) >> (XEN_PT_LEVEL_ORDER(P2M_ROOT_LEVEL) +
+                                XEN_PT_LPAE_SHIFT);
+    if ( root_table >= P2M_ROOT_PAGES )
+        return NULL;
+
+    return __map_domain_page(p2m->root + root_table);
+}
+
+/*
+ * Look up the memory access setting for a domain's GFN in the radix tree.
+ * The entry associated with the GFN is assumed to be valid.
+ */
+static p2m_access_t p2m_mem_access_radix_get(struct p2m_domain *p2m, gfn_t gfn)
+{
+    void *ptr;
+
+    if ( !p2m->mem_access_enabled )
+        return p2m->default_access;
+
+    ptr = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
+    if ( !ptr )
+        return p2m_access_rwx;
+    else
+        return radix_tree_ptr_to_int(ptr);
+}
+
+/*
+ * In the case of the P2M, the valid bit is used for other purposes. Use
+ * the type to check whether an entry is valid.
+ */
+static inline bool p2m_is_valid(lpae_t pte)
+{
+    return pte.p2m.type != p2m_invalid;
+}
+
+/*
+ * lpae_is_* helpers don't check whether the valid bit is set in the
+ * PTE. Provide our own overlay to check the valid bit.
+ */
+static inline bool p2m_is_mapping(lpae_t pte, unsigned int level)
+{
+    return p2m_is_valid(pte) && lpae_is_mapping(pte, level);
+}
+
+static inline bool p2m_is_superpage(lpae_t pte, unsigned int level)
+{
+    return p2m_is_valid(pte) && lpae_is_superpage(pte, level);
+}
+
+#define GUEST_TABLE_MAP_FAILED 0
+#define GUEST_TABLE_SUPER_PAGE 1
+#define GUEST_TABLE_NORMAL_PAGE 2
+
+static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry);
+
+/*
+ * Take the currently mapped table, find the corresponding GFN entry,
+ * and map the next table, if available. The previous table will be
+ * unmapped if the next level was mapped (e.g. GUEST_TABLE_NORMAL_PAGE
+ * is returned).
+ *
+ * The read_only parameter indicates whether intermediate tables may
+ * be allocated when not present: when true, no table is allocated.
+ *
+ * Return values:
+ *  GUEST_TABLE_MAP_FAILED: Either read_only was set and the entry
+ *  was empty, or allocating a new page failed.
+ *  GUEST_TABLE_NORMAL_PAGE: next level mapped normally
+ *  GUEST_TABLE_SUPER_PAGE: The next entry points to a superpage.
+ */
+static int p2m_next_level(struct p2m_domain *p2m, bool read_only,
+                          unsigned int level, lpae_t **table,
+                          unsigned int offset)
+{
+    lpae_t *entry;
+    int ret;
+    mfn_t mfn;
+
+    entry = *table + offset;
+
+    if ( !p2m_is_valid(*entry) )
+    {
+        if ( read_only )
+            return GUEST_TABLE_MAP_FAILED;
+
+        ret = p2m_create_table(p2m, entry);
+        if ( ret )
+            return GUEST_TABLE_MAP_FAILED;
+    }
+
+    /* The function p2m_next_level is never called at the 3rd level */
+    ASSERT(level < 3);
+    if ( p2m_is_mapping(*entry, level) )
+        return GUEST_TABLE_SUPER_PAGE;
+
+    mfn = lpae_get_mfn(*entry);
+
+    unmap_domain_page(*table);
+    *table = map_domain_page(mfn);
+
+    return GUEST_TABLE_NORMAL_PAGE;
+}
+
+/*
+ * Get the details of a given gfn.
+ *
+ * If the entry is present, the associated MFN will be returned and the
+ * access and type filled up. The page_order will correspond to the
+ * order of the mapping in the page table (i.e. it could be a superpage).
+ *
+ * If the entry is not present, INVALID_MFN will be returned and the
+ * page_order will be set according to the order of the invalid range.
+ *
+ * valid will contain the value of bit[0] (i.e. the valid bit) of the
+ * entry.
+ */
+mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                    p2m_type_t *t, p2m_access_t *a,
+                    unsigned int *page_order,
+                    bool *valid)
+{
+    paddr_t addr = gfn_to_gaddr(gfn);
+    unsigned int level = 0;
+    lpae_t entry, *table;
+    int rc;
+    mfn_t mfn = INVALID_MFN;
+    p2m_type_t _t;
+    DECLARE_OFFSETS(offsets, addr);
+
+    ASSERT(p2m_is_locked(p2m));
+    BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
+
+    /* Allow t to be NULL */
+    t = t ?: &_t;
+
+    *t = p2m_invalid;
+
+    if ( valid )
+        *valid = false;
+
+    /* XXX: Check if the mapping is lower than the mapped gfn */
+
+    /* This gfn is higher than the highest gfn the p2m map currently holds */
+    if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
+    {
+        for ( level = P2M_ROOT_LEVEL; level < 3; level++ )
+            if ( (gfn_x(gfn) & (XEN_PT_LEVEL_MASK(level) >> PAGE_SHIFT)) >
+                 gfn_x(p2m->max_mapped_gfn) )
+                break;
+
+        goto out;
+    }
+
+    table = p2m_get_root_pointer(p2m, gfn);
+
+    /*
+     * The table should always be non-NULL because the gfn is below
+     * p2m->max_mapped_gfn and the root table pages are always present.
+     */
+    if ( !table )
+    {
+        ASSERT_UNREACHABLE();
+        level = P2M_ROOT_LEVEL;
+        goto out;
+    }
+
+    for ( level = P2M_ROOT_LEVEL; level < 3; level++ )
+    {
+        rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
+        if ( rc == GUEST_TABLE_MAP_FAILED )
+            goto out_unmap;
+        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
+            break;
+    }
+
+    entry = table[offsets[level]];
+
+    if ( p2m_is_valid(entry) )
+    {
+        *t = entry.p2m.type;
+
+        if ( a )
+            *a = p2m_mem_access_radix_get(p2m, gfn);
+
+        mfn = lpae_get_mfn(entry);
+        /*
+         * The entry may point to a superpage. Find the MFN associated
+         * to the GFN.
+         */
+        mfn = mfn_add(mfn,
+                      gfn_x(gfn) & ((1UL << XEN_PT_LEVEL_ORDER(level)) - 1));
+
+        if ( valid )
+            *valid = lpae_is_valid(entry);
+    }
+
+out_unmap:
+    unmap_domain_page(table);
+
+out:
+    if ( page_order )
+        *page_order = XEN_PT_LEVEL_ORDER(level);
+
+    return mfn;
+}
+
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
+{
+    mfn_t mfn;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m_read_lock(p2m);
+    mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL, NULL);
+    p2m_read_unlock(p2m);
+
+    return mfn;
+}
+
+struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                        p2m_type_t *t)
+{
+    struct page_info *page;
+    p2m_type_t p2mt;
+    mfn_t mfn = p2m_lookup(d, gfn, &p2mt);
+
+    if ( t )
+        *t = p2mt;
+
+    if ( !p2m_is_any_ram(p2mt) )
+        return NULL;
+
+    if ( !mfn_valid(mfn) )
+        return NULL;
+
+    page = mfn_to_page(mfn);
+
+    /*
+     * get_page won't work on foreign mapping because the page doesn't
+     * belong to the current domain.
+     */
+    if ( p2m_is_foreign(p2mt) )
+    {
+        struct domain *fdom = page_get_owner_and_reference(page);
+        ASSERT(fdom != NULL);
+        ASSERT(fdom != d);
+        return page;
+    }
+
+    return get_page(page, d) ? page : NULL;
+}
+
+static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
+{
+    /* First apply type permissions */
+    switch ( t )
+    {
+    case p2m_ram_rw:
+        e->p2m.xn = 0;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_ram_ro:
+        e->p2m.xn = 0;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_iommu_map_rw:
+    case p2m_map_foreign_rw:
+    case p2m_grant_map_rw:
+    case p2m_mmio_direct_dev:
+    case p2m_mmio_direct_nc:
+    case p2m_mmio_direct_c:
+        e->p2m.xn = 1;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_iommu_map_ro:
+    case p2m_map_foreign_ro:
+    case p2m_grant_map_ro:
+    case p2m_invalid:
+        e->p2m.xn = 1;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_max_real_type:
+        BUG();
+        break;
+    }
+
+    /* Then restrict with access permissions */
+    switch ( a )
+    {
+    case p2m_access_rwx:
+        break;
+    case p2m_access_wx:
+        e->p2m.read = 0;
+        break;
+    case p2m_access_rw:
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_w:
+        e->p2m.read = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_rx:
+    case p2m_access_rx2rw:
+        e->p2m.write = 0;
+        break;
+    case p2m_access_x:
+        e->p2m.write = 0;
+        e->p2m.read = 0;
+        break;
+    case p2m_access_r:
+        e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_n:
+    case p2m_access_n2rwx:
+        e->p2m.read = e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    }
+}
+
+static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a)
+{
+    /*
+     * sh, xn and write bit will be defined in the following switches
+     * based on mattr and t.
+     */
+    lpae_t e = (lpae_t) {
+        .p2m.af = 1,
+        .p2m.read = 1,
+        .p2m.table = 1,
+        .p2m.valid = 1,
+        .p2m.type = t,
+    };
+
+    BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
+
+    switch ( t )
+    {
+    case p2m_mmio_direct_dev:
+        e.p2m.mattr = MATTR_DEV;
+        e.p2m.sh = LPAE_SH_OUTER;
+        break;
+
+    case p2m_mmio_direct_c:
+        e.p2m.mattr = MATTR_MEM;
+        e.p2m.sh = LPAE_SH_OUTER;
+        break;
+
+    /*
+     * ARM ARM: Overlaying the shareability attribute (DDI
+     * 0406C.b B3-1376 to 1377)
+     *
+     * A memory region with a resultant memory type attribute of Normal,
+     * and a resultant cacheability attribute of Inner Non-cacheable,
+     * Outer Non-cacheable, must have a resultant shareability attribute
+     * of Outer Shareable, otherwise shareability is UNPREDICTABLE.
+     *
+     * On ARMv8 shareability is ignored and explicitly treated as Outer
+     * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable.
+     * See the note for table D4-40, in page 1788 of the ARM DDI 0487A.j.
+     */
+    case p2m_mmio_direct_nc:
+        e.p2m.mattr = MATTR_MEM_NC;
+        e.p2m.sh = LPAE_SH_OUTER;
+        break;
+
+    default:
+        e.p2m.mattr = MATTR_MEM;
+        e.p2m.sh = LPAE_SH_INNER;
+    }
+
+    p2m_set_permission(&e, t, a);
+
+    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK));
+
+    lpae_set_mfn(e, mfn);
+
+    return e;
+}
+
+/* Generate table entry with correct attributes. */
+static lpae_t page_to_p2m_table(struct page_info *page)
+{
+    /*
+     * The access value does not matter because the hardware will ignore
+     * the permission fields for table entries.
+     *
+     * We use p2m_ram_rw so the entry has a valid type. This is important
+     * for p2m_is_valid() to return true for table entries.
+     */
+    return mfn_to_p2m_entry(page_to_mfn(page), p2m_ram_rw, p2m_access_rwx);
+}
+
+static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool clean_pte)
+{
+    write_pte(p, pte);
+    if ( clean_pte )
+        clean_dcache(*p);
+}
+
+static inline void p2m_remove_pte(lpae_t *p, bool clean_pte)
+{
+    lpae_t pte;
+
+    memset(&pte, 0x00, sizeof(pte));
+    p2m_write_pte(p, pte, clean_pte);
+}
+
+/* Allocate a new page table page and hook it in via the given entry. */
+static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
+{
+    struct page_info *page;
+    lpae_t *p;
+
+    ASSERT(!p2m_is_valid(*entry));
+
+    page = p2m_alloc_page(p2m->domain);
+    if ( page == NULL )
+        return -ENOMEM;
+
+    page_list_add(page, &p2m->pages);
+
+    p = __map_domain_page(page);
+    clear_page(p);
+
+    if ( p2m->clean_pte )
+        clean_dcache_va_range(p, PAGE_SIZE);
+
+    unmap_domain_page(p);
+
+    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte);
+
+    return 0;
+}
+
+static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
+                                    p2m_access_t a)
+{
+    int rc;
+
+    if ( !p2m->mem_access_enabled )
+        return 0;
+
+    if ( p2m_access_rwx == a )
+    {
+        radix_tree_delete(&p2m->mem_access_settings, gfn_x(gfn));
+        return 0;
+    }
+
+    rc = radix_tree_insert(&p2m->mem_access_settings, gfn_x(gfn),
+                           radix_tree_int_to_ptr(a));
+    if ( rc == -EEXIST )
+    {
+        /* If a setting already exists, change it to the new one */
+        radix_tree_replace_slot(
+            radix_tree_lookup_slot(
+                &p2m->mem_access_settings, gfn_x(gfn)),
+            radix_tree_int_to_ptr(a));
+        rc = 0;
+    }
+
+    return rc;
+}
+
+/*
+ * Put any references on the single 4K page referenced by pte.
+ * TODO: Handle superpages, for now we only take special references for leaf
+ * pages (specifically foreign ones, which can't be super mapped today).
+ */
+static void p2m_put_l3_page(const lpae_t pte)
+{
+    mfn_t mfn = lpae_get_mfn(pte);
+
+    ASSERT(p2m_is_valid(pte));
+
+    /*
+     * TODO: Handle other p2m types
+     *
+     * It's safe to do the put_page here because page_alloc will
+     * flush the TLBs if the page is reallocated before the end of
+     * this loop.
+     */
+    if ( p2m_is_foreign(pte.p2m.type) )
+    {
+        ASSERT(mfn_valid(mfn));
+        put_page(mfn_to_page(mfn));
+    }
+    /* Detect the xenheap page and mark the stored GFN as invalid. */
+    else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) )
+        page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN);
+}
+
+/* Free lpae sub-tree behind an entry */
+static void p2m_free_entry(struct p2m_domain *p2m,
+                           lpae_t entry, unsigned int level)
+{
+    unsigned int i;
+    lpae_t *table;
+    mfn_t mfn;
+    struct page_info *pg;
+
+    /* Nothing to do if the entry is invalid. */
+    if ( !p2m_is_valid(entry) )
+        return;
+
+    if ( p2m_is_superpage(entry, level) || (level == 3) )
+    {
+#ifdef CONFIG_IOREQ_SERVER
+        /*
+         * If this gets called then either the entry was replaced by an entry
+         * with a different base (valid case) or the shattering of a superpage
+         * has failed (error case).
+         * So, at worst, a spurious mapcache invalidation might be sent.
+         */
+        if ( p2m_is_ram(entry.p2m.type) &&
+             domain_has_ioreq_server(p2m->domain) )
+            ioreq_request_mapcache_invalidate(p2m->domain);
+#endif
+
+        p2m->stats.mappings[level]--;
+        /* Only 4K pages carry references; nothing more to do for superpages. */
+        if ( level == 3 )
+            p2m_put_l3_page(entry);
+        return;
+    }
+
+    table = map_domain_page(lpae_get_mfn(entry));
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+        p2m_free_entry(p2m, *(table + i), level + 1);
+
+    unmap_domain_page(table);
+
+    /*
+     * Make sure all the references in the TLB have been removed before
+     * freeing the intermediate page table.
+     * XXX: Should we defer the free of the page table to avoid the
+     * flush?
+     */
+    p2m_tlb_flush_sync(p2m);
+
+    mfn = lpae_get_mfn(entry);
+    ASSERT(mfn_valid(mfn));
+
+    pg = mfn_to_page(mfn);
+
+    page_list_del(pg, &p2m->pages);
+    p2m_free_page(p2m->domain, pg);
+}
+
+static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
+                                unsigned int level, unsigned int target,
+                                const unsigned int *offsets)
+{
+    struct page_info *page;
+    unsigned int i;
+    lpae_t pte, *table;
+    bool rv = true;
+
+    /* Convenience aliases */
+    mfn_t mfn = lpae_get_mfn(*entry);
+    unsigned int next_level = level + 1;
+    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
+
+    /*
+     * This should only be called with target != level and the entry is
+     * a superpage.
+     */
+    ASSERT(level < target);
+    ASSERT(p2m_is_superpage(*entry, level));
+
+    page = p2m_alloc_page(p2m->domain);
+    if ( !page )
+        return false;
+
+    page_list_add(page, &p2m->pages);
+    table = __map_domain_page(page);
+
+    /*
+     * We are either splitting a first level 1G page into 512 second level
+     * 2M pages, or a second level 2M page into 512 third level 4K pages.
+     */
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+    {
+        lpae_t *new_entry = table + i;
+
+        /*
+         * Use the content of the superpage entry and override
+         * the necessary fields, so the correct permissions are kept.
+         */
+        pte = *entry;
+        lpae_set_mfn(pte, mfn_add(mfn, i << level_order));
+
+        /*
+         * First and second level pages set p2m.table = 0, but third
+         * level entries set p2m.table = 1.
+         */
+        pte.p2m.table = (next_level == 3);
+
+        write_pte(new_entry, pte);
+    }
+
+    /* Update stats */
+    p2m->stats.shattered[level]++;
+    p2m->stats.mappings[level]--;
+    p2m->stats.mappings[next_level] += XEN_PT_LPAE_ENTRIES;
+
+    /*
+     * Shatter the superpage down to the level at which we want to
+     * make the changes.
+     * This is done outside the loop to avoid checking the offset to
+     * know whether the entry should be shattered for every entry.
+     */
+    if ( next_level != target )
+        rv = p2m_split_superpage(p2m, table + offsets[next_level],
+                                 level + 1, target, offsets);
+
+    if ( p2m->clean_pte )
+        clean_dcache_va_range(table, PAGE_SIZE);
+
+    unmap_domain_page(table);
+
+    /*
+     * Even if we failed, we should install the newly allocated LPAE
+     * entry. The caller will be in charge of freeing the sub-tree.
+     */
+    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte);
+
+    return rv;
+}
+
+/*
+ * Insert an entry in the p2m. This should be called with a mapping
+ * equal to a page/superpage (4K, 2M, 1G).
+ */
+static int __p2m_set_entry(struct p2m_domain *p2m,
+                           gfn_t sgfn,
+                           unsigned int page_order,
+                           mfn_t smfn,
+                           p2m_type_t t,
+                           p2m_access_t a)
+{
+    unsigned int level = 0;
+    unsigned int target = 3 - (page_order / XEN_PT_LPAE_SHIFT);
+    lpae_t *entry, *table, orig_pte;
+    int rc;
+    /* A mapping is removed if the MFN is invalid. */
+    bool removing_mapping = mfn_eq(smfn, INVALID_MFN);
+    DECLARE_OFFSETS(offsets, gfn_to_gaddr(sgfn));
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    /*
+     * Check if the level target is valid: we only support
+     * 4K - 2M - 1G mapping.
+     */
+    ASSERT(target > 0 && target <= 3);
+
+    table = p2m_get_root_pointer(p2m, sgfn);
+    if ( !table )
+        return -EINVAL;
+
+    for ( level = P2M_ROOT_LEVEL; level < target; level++ )
+    {
+        /*
+         * Don't try to allocate intermediate page table if the mapping
+         * is about to be removed.
+         */
+        rc = p2m_next_level(p2m, removing_mapping,
+                            level, &table, offsets[level]);
+        if ( rc == GUEST_TABLE_MAP_FAILED )
+        {
+            /*
+             * We are here because p2m_next_level has failed to map
+             * the intermediate page table (e.g. the table does not exist
+             * and the p2m tree is read-only). It is a valid case
+             * when removing a mapping as it may not exist in the
+             * page table. In this case, just ignore it.
+             */
+            rc = removing_mapping ? 0 : -ENOENT;
+            goto out;
+        }
+        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
+            break;
+    }
+
+    entry = table + offsets[level];
+
+    /*
+     * If we are here with level < target, we must be at a leaf node,
+     * and we need to break up the superpage.
+     */
+    if ( level < target )
+    {
+        /* We need to split the original page. */
+        lpae_t split_pte = *entry;
+
+        ASSERT(p2m_is_superpage(*entry, level));
+
+        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+        {
+            /*
+             * The current super-page is still in-place, so re-increment
+             * the stats.
+             */
+            p2m->stats.mappings[level]++;
+
+            /* Free the allocated sub-tree */
+            p2m_free_entry(p2m, split_pte, level);
+
+            rc = -ENOMEM;
+            goto out;
+        }
+
+        /*
+         * Follow the break-before-make sequence to update the entry.
+         * For more details see (D4.7.1 in ARM DDI 0487A.j).
+         */
+        p2m_remove_pte(entry, p2m->clean_pte);
+        p2m_force_tlb_flush_sync(p2m);
+
+        p2m_write_pte(entry, split_pte, p2m->clean_pte);
+
+        /* then move to the level we want to make real changes */
+        for ( ; level < target; level++ )
+        {
+            rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
+
+            /*
+             * The entry should be found and either be a table
+             * or a superpage if level 3 is not targeted
+             */
+            ASSERT(rc == GUEST_TABLE_NORMAL_PAGE ||
+                   (rc == GUEST_TABLE_SUPER_PAGE && target < 3));
+        }
+
+        entry = table + offsets[level];
+    }
+
+    /*
+     * We should always arrive here at the correct level because
+     * all the intermediate tables have been installed if necessary.
+     */
+    ASSERT(level == target);
+
+    orig_pte = *entry;
+
+    /*
+     * The radix tree can only work on 4KB pages. This is only used when
+     * memaccess is enabled and during shutdown.
+     */
+    ASSERT(!p2m->mem_access_enabled || page_order == 0 ||
+           p2m->domain->is_dying);
+    /*
+     * The access type should always be p2m_access_rwx when the mapping
+     * is removed.
+     */
+    ASSERT(!mfn_eq(INVALID_MFN, smfn) || (a == p2m_access_rwx));
+    /*
+     * Update the mem access permission before updating the P2M, so we
+     * don't have to revert the P2M update if setting the permission fails.
+     */
+    rc = p2m_mem_access_radix_set(p2m, sgfn, a);
+    if ( rc )
+        goto out;
+
+    /*
+     * Always remove the entry in order to follow the break-before-make
+     * sequence when updating the translation table (D4.7.1 in ARM DDI
+     * 0487A.j).
+     */
+    if ( lpae_is_valid(orig_pte) || removing_mapping )
+        p2m_remove_pte(entry, p2m->clean_pte);
+
+    if ( removing_mapping )
+        /* Flush can be deferred if the entry is removed */
+        p2m->need_flush |= !!lpae_is_valid(orig_pte);
+    else
+    {
+        lpae_t pte = mfn_to_p2m_entry(smfn, t, a);
+
+        if ( level < 3 )
+            pte.p2m.table = 0; /* Superpage entry */
+
+        /*
+         * It is necessary to flush the TLB before writing the new entry
+         * to keep coherency when the previous entry was valid.
+         *
+         * However, the flush can be deferred when only the permissions
+         * are changed (e.g. in the memaccess case).
+         */
+        if ( lpae_is_valid(orig_pte) )
+        {
+            if ( likely(!p2m->mem_access_enabled) ||
+                 P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
+                p2m_force_tlb_flush_sync(p2m);
+            else
+                p2m->need_flush = true;
+        }
+        else if ( !p2m_is_valid(orig_pte) ) /* new mapping */
+            p2m->stats.mappings[level]++;
+
+        p2m_write_pte(entry, pte, p2m->clean_pte);
+
+        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
+                                      gfn_add(sgfn, (1UL << page_order) - 1));
+        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
+    }
+
+    if ( is_iommu_enabled(p2m->domain) &&
+         (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
+    {
+        unsigned int flush_flags = 0;
+
+        if ( lpae_is_valid(orig_pte) )
+            flush_flags |= IOMMU_FLUSHF_modified;
+        if ( lpae_is_valid(*entry) )
+            flush_flags |= IOMMU_FLUSHF_added;
+
+        rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
+                               1UL << page_order, flush_flags);
+    }
+    else
+        rc = 0;
+
+    /*
+     * Free the entry only if the original pte was valid and the base
+     * is different (to avoid freeing when permission is changed).
+     */
+    if ( p2m_is_valid(orig_pte) &&
+         !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
+        p2m_free_entry(p2m, orig_pte, level);
+
+out:
+    unmap_domain_page(table);
+
+    return rc;
+}
+
+int p2m_set_entry(struct p2m_domain *p2m,
+                  gfn_t sgfn,
+                  unsigned long nr,
+                  mfn_t smfn,
+                  p2m_type_t t,
+                  p2m_access_t a)
+{
+    int rc = 0;
+
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible afterwards, we need to prevent mappings from being
+     * added while the domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
+    while ( nr )
+    {
+        unsigned long mask;
+        unsigned long order;
+
+        /*
+         * Don't take the MFN into account when removing a mapping (i.e.
+         * it is INVALID_MFN) to calculate the correct target order.
+         *
+         * XXX: Support superpage mappings if nr is not aligned to a
+         * superpage size.
+         */
+        mask = !mfn_eq(smfn, INVALID_MFN) ? mfn_x(smfn) : 0;
+        mask |= gfn_x(sgfn) | nr;
+
+        /* Always map 4k by 4k when memaccess is enabled */
+        if ( unlikely(p2m->mem_access_enabled) )
+            order = THIRD_ORDER;
+        else if ( !(mask & ((1UL << FIRST_ORDER) - 1)) )
+            order = FIRST_ORDER;
+        else if ( !(mask & ((1UL << SECOND_ORDER) - 1)) )
+            order = SECOND_ORDER;
+        else
+            order = THIRD_ORDER;
+
+        rc = __p2m_set_entry(p2m, sgfn, order, smfn, t, a);
+        if ( rc )
+            break;
+
+        sgfn = gfn_add(sgfn, (1 << order));
+        if ( !mfn_eq(smfn, INVALID_MFN) )
+            smfn = mfn_add(smfn, (1 << order));
+
+        nr -= (1 << order);
+    }
+
+    return rc;
+}
+
+/* Invalidate all entries in the table. The p2m should be write locked. */
+static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn)
+{
+    lpae_t *table;
+    unsigned int i;
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    table = map_domain_page(mfn);
+
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+    {
+        lpae_t pte = table[i];
+
+        /*
+         * Writing an entry can be expensive because it may involve
+         * cleaning the cache. So avoid updating the entry if the valid
+         * bit is already cleared.
+         */
+        if ( !pte.p2m.valid )
+            continue;
+
+        pte.p2m.valid = 0;
+
+        p2m_write_pte(&table[i], pte, p2m->clean_pte);
+    }
+
+    unmap_domain_page(table);
+
+    p2m->need_flush = true;
+}
+
+/*
+ * Invalidate all entries in the root page-tables. This is
+ * useful to get a fault on entry and take an action.
+ *
+ * p2m_invalidate_root() should not be called when the P2M is shared with
+ * the IOMMU because it would cause an IOMMU fault.
+ */
+void p2m_invalidate_root(struct p2m_domain *p2m)
+{
+    unsigned int i;
+
+    ASSERT(!iommu_use_hap_pt(p2m->domain));
+
+    p2m_write_lock(p2m);
+
+    for ( i = 0; i < P2M_ROOT_LEVEL; i++ )
+        p2m_invalidate_table(p2m, page_to_mfn(p2m->root + i));
+
+    p2m_write_unlock(p2m);
+}
+
+/*
+ * Resolve any translation fault due to change in the p2m. This
+ * includes break-before-make and valid bit cleared.
+ */
+bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned int level = 0;
+    bool resolved = false;
+    lpae_t entry, *table;
+
+    /* Convenience aliases */
+    DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
+
+    p2m_write_lock(p2m);
+
+    /* This gfn is higher than the highest gfn the p2m currently maps */
+    if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
+        goto out;
+
+    table = p2m_get_root_pointer(p2m, gfn);
+    /*
+     * The table should always be non-NULL because the gfn is below
+     * p2m->max_mapped_gfn and the root table pages are always present.
+     */
+    if ( !table )
+    {
+        ASSERT_UNREACHABLE();
+        goto out;
+    }
+
+    /*
+     * Go down the page-tables until an entry has the valid bit unset or
+     * a block/page entry has been hit.
+     */
+    for ( level = P2M_ROOT_LEVEL; level <= 3; level++ )
+    {
+        int rc;
+
+        entry = table[offsets[level]];
+
+        if ( level == 3 )
+            break;
+
+        /* Stop as soon as we hit an entry with the valid bit unset. */
+        if ( !lpae_is_valid(entry) )
+            break;
+
+        rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
+        if ( rc == GUEST_TABLE_MAP_FAILED )
+            goto out_unmap;
+        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
+            break;
+    }
+
+    /*
+     * If the valid bit of the entry is set, it means someone was playing with
+     * the Stage-2 page table. Nothing to do; just mark the fault as resolved.
+     */
+    if ( lpae_is_valid(entry) )
+    {
+        resolved = true;
+        goto out_unmap;
+    }
+
+    /*
+     * The valid bit is unset. If the entry is still not valid then the fault
+     * cannot be resolved, exit and report it.
+     */
+    if ( !p2m_is_valid(entry) )
+        goto out_unmap;
+
+    /*
+     * Now we have an entry with valid bit unset, but still valid from
+     * the P2M point of view.
+     *
+     * If an entry is pointing to a table, each entry of the table will
+     * have its valid bit cleared. This allows a function to clear the
+     * full p2m with just a couple of writes. The valid bit will then be
+     * propagated on the next fault.
+     * If an entry is pointing to a block/page, no work to do for now.
+     */
+    if ( lpae_is_table(entry, level) )
+        p2m_invalidate_table(p2m, lpae_get_mfn(entry));
+
+    /*
+     * Now that the work on the entry is done, set the valid bit to prevent
+     * another fault on that entry.
+     */
+    resolved = true;
+    entry.p2m.valid = 1;
+
+    p2m_write_pte(table + offsets[level], entry, p2m->clean_pte);
+
+    /*
+     * No need to flush the TLBs as the modified entry had the valid bit
+     * unset.
+     */
+
+out_unmap:
+    unmap_domain_page(table);
+
+out:
+    p2m_write_unlock(p2m);
+
+    return resolved;
+}
+
+int p2m_insert_mapping(struct domain *d, gfn_t start_gfn, unsigned long nr,
+                       mfn_t mfn, p2m_type_t t)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
+
+    p2m_write_lock(p2m);
+    rc = p2m_set_entry(p2m, start_gfn, nr, mfn, t, p2m->default_access);
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+static inline int p2m_remove_mapping(struct domain *d,
+                                     gfn_t start_gfn,
+                                     unsigned long nr,
+                                     mfn_t mfn)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long i;
+    int rc;
+
+    p2m_write_lock(p2m);
+    /*
+     * Before removing the GFN - MFN mapping for any RAM pages make sure
+     * that there is no difference between what is already mapped and what
+     * is requested to be unmapped.
+     * If they don't match, bail out early. For instance, this could happen
+     * if two CPUs are requesting to unmap the same P2M entry concurrently.
+     */
+    for ( i = 0; i < nr; )
+    {
+        unsigned int cur_order;
+        p2m_type_t t;
+        mfn_t mfn_return = p2m_get_entry(p2m, gfn_add(start_gfn, i), &t, NULL,
+                                         &cur_order, NULL);
+
+        if ( p2m_is_any_ram(t) &&
+             (!mfn_valid(mfn) || !mfn_eq(mfn_add(mfn, i), mfn_return)) )
+        {
+            rc = -EILSEQ;
+            goto out;
+        }
+
+        i += (1UL << cur_order) -
+             ((gfn_x(start_gfn) + i) & ((1UL << cur_order) - 1));
+    }
+
+    rc = p2m_set_entry(p2m, start_gfn, nr, INVALID_MFN,
+                       p2m_invalid, p2m_access_rwx);
+
+out:
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt)
+{
+    return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
+}
+
+int unmap_regions_p2mt(struct domain *d,
+                       gfn_t gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return p2m_remove_mapping(d, gfn, nr, mfn);
+}
+
+int map_mmio_regions(struct domain *d,
+                     gfn_t start_gfn,
+                     unsigned long nr,
+                     mfn_t mfn)
+{
+    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
+}
+
+int unmap_mmio_regions(struct domain *d,
+                       gfn_t start_gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return p2m_remove_mapping(d, start_gfn, nr, mfn);
+}
+
+int map_dev_mmio_page(struct domain *d, gfn_t gfn, mfn_t mfn)
+{
+    int res;
+
+    if ( !iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn)) )
+        return 0;
+
+    res = p2m_insert_mapping(d, gfn, 1, mfn, p2m_mmio_direct_c);
+    if ( res < 0 )
+    {
+        printk(XENLOG_G_ERR "Unable to map MFN %#"PRI_mfn" in %pd\n",
+               mfn_x(mfn), d);
+        return res;
+    }
+
+    return 0;
+}
+
+int guest_physmap_add_entry(struct domain *d,
+                            gfn_t gfn,
+                            mfn_t mfn,
+                            unsigned long page_order,
+                            p2m_type_t t)
+{
+    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
+}
+
+int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                              unsigned int page_order)
+{
+    return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
+}
+
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    struct page_info *page = mfn_to_page(mfn);
+    int rc;
+
+    ASSERT(arch_acquire_resource_check(d));
+
+    if ( !get_page(page, fd) )
+        return -EINVAL;
+
+    /*
+     * It is valid to always use p2m_map_foreign_rw here because, if this
+     * gets called, then d != fd. The case where d == fd would have been
+     * rejected by rcu_lock_remote_domain_by_id() earlier. Add an ASSERT()
+     * to catch incorrect usage in the future.
+     */
+    ASSERT(d != fd);
+
+    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
+    if ( rc )
+        put_page(page);
+
+    return rc;
+}
+
+static struct page_info *p2m_allocate_root(void)
+{
+    struct page_info *page;
+    unsigned int i;
+
+    page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
+    if ( page == NULL )
+        return NULL;
+
+    /* Clear both first level pages */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(page + i);
+
+    return page;
+}
+
+static int p2m_alloc_table(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m->root = p2m_allocate_root();
+    if ( !p2m->root )
+        return -ENOMEM;
+
+    p2m->vttbr = generate_vttbr(p2m->vmid, page_to_mfn(p2m->root));
+
+    /*
+     * Make sure that all TLBs corresponding to the new VMID are flushed
+     * before using it.
+     */
+    p2m_write_lock(p2m);
+    p2m_force_tlb_flush_sync(p2m);
+    p2m_write_unlock(p2m);
+
+    return 0;
+}
+
+static spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * VTTBR_EL2 VMID field is 8 or 16 bits. AArch64 may support 16-bit VMID.
+ * Using a bitmap here limits us to 256 or 65536 (for AArch64) concurrent
+ * domains. The bitmap space will be allocated dynamically based on
+ * whether 8 or 16 bit VMIDs are supported.
+ */
+static unsigned long *vmid_mask;
+
+static void p2m_vmid_allocator_init(void)
+{
+    /*
+     * Allocate space for vmid_mask based on MAX_VMID.
+     */
+    vmid_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(MAX_VMID));
+
+    if ( !vmid_mask )
+        panic("Could not allocate VMID bitmap space\n");
+
+    set_bit(INVALID_VMID, vmid_mask);
+}
+
+static int p2m_alloc_vmid(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc, nr;
+
+    spin_lock(&vmid_alloc_lock);
+
+    nr = find_first_zero_bit(vmid_mask, MAX_VMID);
+
+    ASSERT(nr != INVALID_VMID);
+
+    if ( nr == MAX_VMID )
+    {
+        rc = -EBUSY;
+        printk(XENLOG_ERR "p2m.c: dom%d: VMID pool exhausted\n", d->domain_id);
+        goto out;
+    }
+
+    set_bit(nr, vmid_mask);
+
+    p2m->vmid = nr;
+
+    rc = 0;
+
+out:
+    spin_unlock(&vmid_alloc_lock);
+    return rc;
+}
+
+static void p2m_free_vmid(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    spin_lock(&vmid_alloc_lock);
+    if ( p2m->vmid != INVALID_VMID )
+        clear_bit(p2m->vmid, vmid_mask);
+
+    spin_unlock(&vmid_alloc_lock);
+}
+
+int p2m_teardown(struct domain *d, bool allow_preemption)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
+    struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk from using them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purposes.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        p2m_free_page(p2m->domain, pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    /* p2m not actually initialized */
+    if ( !p2m->domain )
+        return;
+
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
+    ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
+
+    if ( p2m->root )
+        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
+
+    p2m->root = NULL;
+
+    p2m_free_vmid(d);
+
+    radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
+    p2m->domain = NULL;
+}
+
+int p2m_init(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
+    unsigned int cpu;
+
+    rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
+    INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
+
+    p2m->vmid = INVALID_VMID;
+    p2m->max_mapped_gfn = _gfn(0);
+    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
+
+    p2m->default_access = p2m_access_rwx;
+    p2m->mem_access_enabled = false;
+    radix_tree_init(&p2m->mem_access_settings);
+
+    /*
+     * Some IOMMUs don't support coherent PT walk. When the p2m is
+     * shared with the CPU, Xen has to make sure that the PT changes have
+     * reached the memory.
+     */
+    p2m->clean_pte = is_iommu_enabled(d) &&
+        !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
+
+    /*
+     * Make sure that the type chosen is able to store any vCPU ID
+     * between 0 and the maximum number of virtual CPUs supported, as
+     * well as INVALID_VCPU_ID.
+     */
+    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPUS);
+    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < INVALID_VCPU_ID);
+
+    for_each_possible_cpu(cpu)
+        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
+
+    /*
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
+     */
+    p2m->domain = d;
+
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of the 8KB GICv2
+     * area when the domain is created. Considering the worst case for page
+     * tables and keeping a buffer, populate 16 pages to the P2M pages pool
+     * here. For GICv3, the above-mentioned P2M mapping is not necessary, but
+     * since the 16 pages allocated here would not be wasted, populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
+    return 0;
+}
+
+/*
+ * The function will go through the p2m and remove page references where
+ * required. The mappings will be removed from the p2m.
+ *
+ * XXX: See whether the mapping can be left intact in the p2m.
+ */
+int relinquish_p2m_mapping(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
+    p2m_type_t t;
+    int rc = 0;
+    unsigned int order;
+    gfn_t start, end;
+
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
+    p2m_write_lock(p2m);
+
+    start = p2m->lowest_mapped_gfn;
+    end = gfn_add(p2m->max_mapped_gfn, 1);
+
+    for ( ; gfn_x(start) < gfn_x(end);
+          start = gfn_next_boundary(start, order) )
+    {
+        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order, NULL);
+
+        count++;
+        /*
+         * Arbitrarily preempt every 512 iterations.
+         */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+
+        /*
+         * p2m_set_entry will take care of removing reference on page
+         * when it is necessary and removing the mapping in the p2m.
+         */
+        if ( !mfn_eq(mfn, INVALID_MFN) )
+        {
+            /*
+             * For a valid mapping, start will always be aligned as the
+             * entry will be removed whilst relinquishing.
+             */
+            rc = __p2m_set_entry(p2m, start, order, INVALID_MFN,
+                                 p2m_invalid, p2m_access_rwx);
+            if ( unlikely(rc) )
+            {
+                printk(XENLOG_G_ERR
+                       "Unable to remove mapping gfn=%#"PRI_gfn" order=%u from the p2m of domain %d\n",
+                       gfn_x(start), order, d->domain_id);
+                break;
+            }
+        }
+    }
+
+    /*
+     * Update lowest_mapped_gfn so on the next call we still start where
+     * we stopped.
+     */
+    p2m->lowest_mapped_gfn = start;
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    gfn_t next_block_gfn;
+    gfn_t start = *pstart;
+    mfn_t mfn = INVALID_MFN;
+    p2m_type_t t;
+    unsigned int order;
+    int rc = 0;
+    /* Counter for preemption */
+    unsigned short count = 0;
+
+    /*
+     * The cache flush operation will invalidate the RAM assigned to the
+     * guest in a given range. It will not modify the page tables, and
+     * flushing the cache whilst a page is used by another CPU is
+     * fine. So using a read-lock is fine here.
+     */
+    p2m_read_lock(p2m);
+
+    start = gfn_max(start, p2m->lowest_mapped_gfn);
+    end = gfn_min(end, gfn_add(p2m->max_mapped_gfn, 1));
+
+    next_block_gfn = start;
+
+    while ( gfn_x(start) < gfn_x(end) )
+    {
+        /*
+         * Cleaning the cache for the P2M may take a long time. So we
+         * need to be able to preempt. We will arbitrarily preempt every
+         * time count reaches 512 or above.
+         *
+         * The count will be incremented by:
+         *  - 1 on region skipped
+         *  - 10 for each page requiring a flush
+         */
+        if ( count >= 512 )
+        {
+            if ( softirq_pending(smp_processor_id()) )
+            {
+                rc = -ERESTART;
+                break;
+            }
+            count = 0;
+        }
+
+        /*
+         * We want to flush page by page as:
+         *  - it may not be possible to map the full block (can be up to 1GB)
+         *    in Xen memory
+         *  - we may want to do fine-grained preemption as flushing multiple
+         *    pages in one go may take a long time
+         *
+         * As p2m_get_entry is able to return the size of the mapping
+         * in the p2m, it is pointless to execute it for each page.
+         *
+         * We can optimize it by tracking the gfn of the next
+         * block. So we will only call p2m_get_entry for each block (can
+         * be up to 1GB).
+         */
+        if ( gfn_eq(start, next_block_gfn) )
+        {
+            bool valid;
+
+            mfn = p2m_get_entry(p2m, start, &t, NULL, &order, &valid);
+            next_block_gfn = gfn_next_boundary(start, order);
+
+            if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_any_ram(t) || !valid )
+            {
+                count++;
+                start = next_block_gfn;
+                continue;
+            }
+        }
+
+        count += 10;
+
+        flush_page_to_ram(mfn_x(mfn), false);
+
+        start = gfn_add(start, 1);
+        mfn = mfn_add(mfn, 1);
+    }
+
+    if ( rc != -ERESTART )
+        invalidate_icache();
+
+    p2m_read_unlock(p2m);
+
+    *pstart = start;
+
+    return rc;
+}
+
+/*
+ * Clean & invalidate RAM associated to the guest vCPU.
+ *
+ * The function can only work with the current vCPU and should be called
+ * with IRQ enabled as the vCPU could get preempted.
+ */
+void p2m_flush_vm(struct vcpu *v)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+    int rc;
+    gfn_t start = _gfn(0);
+
+    ASSERT(v == current);
+    ASSERT(local_irq_is_enabled());
+    ASSERT(v->arch.need_flush_to_ram);
+
+    do
+    {
+        rc = p2m_cache_flush_range(v->domain, &start, _gfn(ULONG_MAX));
+        if ( rc == -ERESTART )
+            do_softirq();
+    } while ( rc == -ERESTART );
+
+    if ( rc != 0 )
+        gprintk(XENLOG_WARNING,
+                "P2M has not been correctly cleaned (rc = %d)\n",
+                rc);
+
+    /*
+     * Invalidate the p2m to track which pages were modified by the guest
+     * between calls of p2m_flush_vm().
+     */
+    p2m_invalidate_root(p2m);
+
+    v->arch.need_flush_to_ram = false;
+}
+
+/*
+ * See note at ARMv7 ARM B1.14.4 (DDI 0406C.c) (TL;DR: S/W ops are not
+ * easily virtualized).
+ *
+ * Main problems:
+ *  - S/W ops are local to a CPU (not broadcast)
+ *  - We have line migration behind our back (speculation)
+ *  - System caches don't support S/W at all (damn!)
+ *
+ * In the face of the above, the best we can do is to try and convert
+ * S/W ops to VA ops. Because the guest is not allowed to infer the S/W
+ * to PA mapping, it can only use S/W to nuke the whole cache, which is
+ * rather a good thing for us.
+ *
+ * Also, it is only used when turning caches on/off ("The expected
+ * usage of the cache maintenance instructions that operate by set/way
+ * is associated with the powerdown and powerup of caches, if this is
+ * required by the implementation.").
+ *
+ * We use the following policy:
+ *  - If we trap a S/W operation, we enable VM trapping to detect
+ *  caches being turned on/off, and do a full clean.
+ *
+ *  - We flush the caches on both caches being turned on and off.
+ *
+ *  - Once the caches are enabled, we stop trapping VM ops.
+ */
+void p2m_set_way_flush(struct vcpu *v, struct cpu_user_regs *regs,
+                       const union hsr hsr)
+{
+    /* This function can only work with the current vCPU. */
+    ASSERT(v == current);
+
+    if ( iommu_use_hap_pt(current->domain) )
+    {
+        gprintk(XENLOG_ERR,
+                "The cache should be flushed by VA rather than by set/way.\n");
+        inject_undef_exception(regs, hsr);
+        return;
+    }
+
+    if ( !(v->arch.hcr_el2 & HCR_TVM) )
+    {
+        v->arch.need_flush_to_ram = true;
+        vcpu_hcr_set_flags(v, HCR_TVM);
+    }
+}
+
+void p2m_toggle_cache(struct vcpu *v, bool was_enabled)
+{
+    bool now_enabled = vcpu_has_cache_enabled(v);
+
+    /* This function can only work with the current vCPU. */
+    ASSERT(v == current);
+
+    /*
+     * If switching the MMU+caches on, need to invalidate the caches.
+     * If switching it off, need to clean the caches.
+     * Clean + invalidate does the trick always.
+     */
+    if ( was_enabled != now_enabled )
+        v->arch.need_flush_to_ram = true;
+
+    /* Caches are now on, stop trapping VM ops (until a S/W op) */
+    if ( now_enabled )
+        vcpu_hcr_clear_flags(v, HCR_TVM);
+}
+
+mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
+{
+    return p2m_lookup(d, gfn, NULL);
+}
+
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
+                                    unsigned long flags)
+{
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct page_info *page = NULL;
+    paddr_t maddr = 0;
+    uint64_t par;
+    mfn_t mfn;
+    p2m_type_t t;
+
+    /*
+     * XXX: To support a different vCPU, we would need to load the
+     * VTTBR_EL2, TTBR0_EL1, TTBR1_EL1 and SCTLR_EL1.
+     */
+    if ( v != current )
+        return NULL;
+
+    /*
+     * The lock is here to protect us against the break-before-make
+     * sequence used when updating the entry.
+     */
+    p2m_read_lock(p2m);
+    par = gvirt_to_maddr(va, &maddr, flags);
+    p2m_read_unlock(p2m);
+
+    /*
+     * gvirt_to_maddr may fail if the entry does not have the valid bit
+     * set. Fallback to the second method:
+     *  1) Translate the VA to IPA using software lookup -> Stage-1 page-table
+     *  may not be accessible because the stage-2 entries may have valid
+     *  bit unset.
+     *  2) Software lookup of the MFN
+     *
+     * Note that when memaccess is enabled, we instead call directly
+     * p2m_mem_access_check_and_get_page(...). Because the function is a
+     * variant of the methods described above, it will be able to
+     * handle entries with valid bit unset.
+     *
+     * TODO: Integrate more nicely memaccess with the rest of the
+     * function.
+     * TODO: Use the fault error in PAR_EL1 to avoid pointless
+     *  translation.
+     */
+    if ( par )
+    {
+        paddr_t ipa;
+        unsigned int s1_perms;
+
+        /*
+         * When memaccess is enabled, the GVA to MADDR translation may
+         * have failed because of a permission fault.
+         */
+        if ( p2m->mem_access_enabled )
+            return p2m_mem_access_check_and_get_page(va, flags, v);
+
+        /*
+         * The software stage-1 table walk can still fail, e.g, if the
+         * GVA is not mapped.
+         */
+        if ( !guest_walk_tables(v, va, &ipa, &s1_perms) )
+        {
+            dprintk(XENLOG_G_DEBUG,
+                    "%pv: Failed to walk page-table va %#"PRIvaddr"\n", v, va);
+            return NULL;
+        }
+
+        mfn = p2m_lookup(d, gaddr_to_gfn(ipa), &t);
+        if ( mfn_eq(INVALID_MFN, mfn) || !p2m_is_ram(t) )
+            return NULL;
+
+        /*
+         * Check the permissions that are assumed by the caller. For instance,
+         * in the case of guestcopy, the caller assumes that the translated
+         * page can be accessed with the requested permissions. If this
+         * is not the case, we should fail.
+         *
+         * Please note that we do not check for the GV2M_EXEC
+         * permission. This is fine because the hardware-based translation
+         * instruction does not test for execute permissions.
+         */
+        if ( (flags & GV2M_WRITE) && !(s1_perms & GV2M_WRITE) )
+            return NULL;
+
+        if ( (flags & GV2M_WRITE) && t != p2m_ram_rw )
+            return NULL;
+    }
+    else
+        mfn = maddr_to_mfn(maddr);
+
+    if ( !mfn_valid(mfn) )
+    {
+        dprintk(XENLOG_G_DEBUG, "%pv: Invalid MFN %#"PRI_mfn"\n",
+                v, mfn_x(mfn));
+        return NULL;
+    }
+
+    page = mfn_to_page(mfn);
+    ASSERT(page);
+
+    if ( unlikely(!get_page(page, d)) )
+    {
+        dprintk(XENLOG_G_DEBUG, "%pv: Failed to acquire the MFN %#"PRI_mfn"\n",
+                v, mfn_x(maddr_to_mfn(maddr)));
+        return NULL;
+    }
+
+    return page;
+}
+
+/* VTCR value to be configured by all CPUs. Set only once by the boot CPU */
+static register_t __read_mostly vtcr;
+
+static void setup_virt_paging_one(void *data)
+{
+    WRITE_SYSREG(vtcr, VTCR_EL2);
+
+    /*
+     * ARM64_WORKAROUND_AT_SPECULATE: We want to keep the TLBs free from
+     * entries related to EL1/EL0 translation regime until a guest vCPU
+     * is running. For that, we need to set up VTTBR to point to an empty
+     * page-table and turn on stage-2 translation. The TLB entries
+     * associated with the EL1/EL0 translation regime will also be flushed in
+     * case an AT instruction was speculated beforehand.
+     */
+    if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) )
+    {
+        WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR_EL2);
+        WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_VM, HCR_EL2);
+        isb();
+
+        flush_all_guests_tlb_local();
+    }
+}
+
+void __init setup_virt_paging(void)
+{
+    /* Setup Stage 2 address translation */
+    register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
+
+#ifdef CONFIG_ARM_32
+    if ( p2m_ipa_bits < 40 )
+        panic("P2M: Not able to support %u-bit IPA at the moment\n",
+              p2m_ipa_bits);
+
+    printk("P2M: 40-bit IPA\n");
+    p2m_ipa_bits = 40;
+    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
+    val |= VTCR_SL0(0x1); /* P2M starts at first level */
+#else /* CONFIG_ARM_64 */
+    const struct {
+        unsigned int pabits; /* Physical Address Size */
+        unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
+        unsigned int root_order; /* Page order of the root of the p2m */
+        unsigned int sl0;    /* Desired SL0, maximum in comment */
+    } pa_range_info[] = {
+        /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
+        /*      PA size, t0sz(min), root-order, sl0(max) */
+        [0] = { 32,      32/*32*/,  0,          1 },
+        [1] = { 36,      28/*28*/,  0,          1 },
+        [2] = { 40,      24/*24*/,  1,          1 },
+        [3] = { 42,      22/*22*/,  3,          1 },
+        [4] = { 44,      20/*20*/,  0,          2 },
+        [5] = { 48,      16/*16*/,  0,          2 },
+        [6] = { 52,      12/*12*/,  4,          2 },
+        [7] = { 0 }  /* Invalid */
+    };
+
+    unsigned int i;
+    unsigned int pa_range = 0x10; /* Larger than any possible value */
+
+    /*
+     * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
+     * with IPA bits == PA bits, compare against "pabits".
+     */
+    if ( pa_range_info[system_cpuinfo.mm64.pa_range].pabits < p2m_ipa_bits )
+        p2m_ipa_bits = pa_range_info[system_cpuinfo.mm64.pa_range].pabits;
+
+    /*
+     * CPU info sanitization made sure we support 16-bit VMIDs only if all
+     * cores support them.
+     */
+    if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
+        max_vmid = MAX_VMID_16_BIT;
+
+    /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
+    for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
+    {
+        if ( p2m_ipa_bits == pa_range_info[i].pabits )
+        {
+            pa_range = i;
+            break;
+        }
+    }
+
+    /* pa_range is 4 bits but we don't support all modes */
+    if ( pa_range >= ARRAY_SIZE(pa_range_info) ||
+         !pa_range_info[pa_range].pabits )
+        panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
+
+    val |= VTCR_PS(pa_range);
+    val |= VTCR_TG0_4K;
+
+    /* Set the VS bit only if 16 bit VMID is supported. */
+    if ( MAX_VMID == MAX_VMID_16_BIT )
+        val |= VTCR_VS;
+    val |= VTCR_SL0(pa_range_info[pa_range].sl0);
+    val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
+
+    p2m_root_order = pa_range_info[pa_range].root_order;
+    p2m_root_level = 2 - pa_range_info[pa_range].sl0;
+    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
+
+    printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n",
+           p2m_ipa_bits,
+           pa_range_info[pa_range].pabits,
+           ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
+#endif
+    printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n",
+           4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
+
+    p2m_vmid_allocator_init();
+
+    /* It is not allowed to concatenate a level zero root */
+    BUG_ON( P2M_ROOT_LEVEL == 0 && P2M_ROOT_ORDER > 0 );
+    vtcr = val;
+
+    /*
+     * ARM64_WORKAROUND_AT_SPECULATE requires the root table to be
+     * allocated with all entries zeroed.
+     */
+    if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) )
+    {
+        struct page_info *root;
+
+        root = p2m_allocate_root();
+        if ( !root )
+            panic("Unable to allocate root table for ARM64_WORKAROUND_AT_SPECULATE\n");
+
+        empty_root_mfn = page_to_mfn(root);
+    }
+
+    setup_virt_paging_one(NULL);
+    smp_call_function(setup_virt_paging_one, NULL, 1);
+}
+
+static int cpu_virt_paging_callback(struct notifier_block *nfb,
+                                    unsigned long action,
+                                    void *hcpu)
+{
+    switch ( action )
+    {
+    case CPU_STARTING:
+        ASSERT(system_state != SYS_STATE_boot);
+        setup_virt_paging_one(NULL);
+        break;
+    default:
+        break;
+    }
+
+    return NOTIFY_DONE;
+}
+
+static struct notifier_block cpu_virt_paging_nfb = {
+    .notifier_call = cpu_virt_paging_callback,
+};
+
+static int __init cpu_virt_paging_init(void)
+{
+    register_cpu_notifier(&cpu_virt_paging_nfb);
+
+    return 0;
+}
+/*
+ * The notifier has to be registered in the init phase rather than the
+ * presmp_init phase, because it is used to set up virtual paging for
+ * non-boot CPUs after the initial virtual paging for all CPUs has already
+ * been set up, i.e. when a non-boot CPU is hotplugged after boot. In other
+ * words, the notifier must be registered after virtual paging is initially
+ * set up (setup_virt_paging() is called from start_xen()), as the vtcr
+ * value has to be computed before the notifier can fire.
+ */
+__initcall(cpu_virt_paging_init);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/p2m_mpu.c b/xen/arch/arm/p2m_mpu.c
new file mode 100644
index 0000000000..0a95d58111
--- /dev/null
+++ b/xen/arch/arm/p2m_mpu.c
@@ -0,0 +1,191 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <xen/lib.h>
+#include <xen/mm-frame.h>
+#include <xen/sched.h>
+
+#include <asm/p2m.h>
+
+/* TODO: Implement on the first usage */
+void p2m_write_unlock(struct p2m_domain *p2m)
+{
+}
+
+void p2m_dump_info(struct domain *d)
+{
+}
+
+void dump_p2m_lookup(struct domain *d, paddr_t addr)
+{
+}
+
+void p2m_save_state(struct vcpu *p)
+{
+}
+
+void p2m_restore_state(struct vcpu *n)
+{
+}
+
+mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                    p2m_type_t *t, p2m_access_t *a,
+                    unsigned int *page_order,
+                    bool *valid)
+{
+    return INVALID_MFN;
+}
+
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
+{
+    return INVALID_MFN;
+}
+
+struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                        p2m_type_t *t)
+{
+    return NULL;
+}
+
+int p2m_set_entry(struct p2m_domain *p2m,
+                  gfn_t sgfn,
+                  unsigned long nr,
+                  mfn_t smfn,
+                  p2m_type_t t,
+                  p2m_access_t a)
+{
+    return -ENOSYS;
+}
+
+void p2m_invalidate_root(struct p2m_domain *p2m)
+{
+}
+
+bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn)
+{
+    return false;
+}
+
+int p2m_insert_mapping(struct domain *d, gfn_t start_gfn, unsigned long nr,
+                       mfn_t mfn, p2m_type_t t)
+{
+    return -ENOSYS;
+}
+
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt)
+{
+    return -ENOSYS;
+}
+
+int unmap_regions_p2mt(struct domain *d,
+                       gfn_t gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return -ENOSYS;
+}
+
+int map_mmio_regions(struct domain *d,
+                     gfn_t start_gfn,
+                     unsigned long nr,
+                     mfn_t mfn)
+{
+    return -ENOSYS;
+}
+
+int unmap_mmio_regions(struct domain *d,
+                       gfn_t start_gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return -ENOSYS;
+}
+
+int map_dev_mmio_page(struct domain *d, gfn_t gfn, mfn_t mfn)
+{
+    return -ENOSYS;
+}
+
+int guest_physmap_add_entry(struct domain *d,
+                            gfn_t gfn,
+                            mfn_t mfn,
+                            unsigned long page_order,
+                            p2m_type_t t)
+{
+    return -ENOSYS;
+}
+
+int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                              unsigned int page_order)
+{
+    return -ENOSYS;
+}
+
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    return -ENOSYS;
+}
+
+int p2m_teardown(struct domain *d, bool allow_preemption)
+{
+    return -ENOSYS;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+}
+
+int p2m_init(struct domain *d)
+{
+    return -ENOSYS;
+}
+
+int relinquish_p2m_mapping(struct domain *d)
+{
+    return -ENOSYS;
+}
+
+int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end)
+{
+    return -ENOSYS;
+}
+
+void p2m_flush_vm(struct vcpu *v)
+{
+}
+
+void p2m_set_way_flush(struct vcpu *v, struct cpu_user_regs *regs,
+                       const union hsr hsr)
+{
+}
+
+void p2m_toggle_cache(struct vcpu *v, bool was_enabled)
+{
+}
+
+mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
+{
+    return INVALID_MFN;
+}
+
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
+                                    unsigned long flags)
+{
+    return NULL;
+}
+
+void __init setup_virt_paging(void)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476571.738898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCj6-0007qE-LD; Fri, 13 Jan 2023 05:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476571.738898; Fri, 13 Jan 2023 05:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCj6-0007no-FC; Fri, 13 Jan 2023 05:35:40 +0000
Received: by outflank-mailman (input) for mailman id 476571;
 Fri, 13 Jan 2023 05:35:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCec-0005sJ-EQ
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 772af550-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:31:01 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 00DCFFEC;
 Thu, 12 Jan 2023 21:31:43 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 42AD23F587;
 Thu, 12 Jan 2023 21:30:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 772af550-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 27/40] xen/mpu: map device memory resource in MPU system
Date: Fri, 13 Jan 2023 13:29:00 +0800
Message-Id: <20230113052914.3845596-28-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

In an MPU system, we cannot afford to map a new MPU memory region for
each new device: doing so would exhaust the limited number of MPU
memory regions very quickly.

So we introduce `mpu,device-memory-section` for users to statically
configure the whole system device memory in the Device Tree, with the
least number of memory regions. This section shall cover all devices
used by Xen, like the `UART`, `GIC`, etc.

Before we map `mpu,device-memory-section` with device memory attributes
and permissions (REGION_HYPERVISOR_NOCACHE), we shall destroy the mapping
for the early UART, which was set up in assembly at boot time, to avoid
MPU memory region overlap.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/arm64/mpu.h |  6 ++++--
 xen/arch/arm/include/asm/setup.h     |  1 +
 xen/arch/arm/mm_mpu.c                | 14 +++++++++++++-
 xen/arch/arm/setup_mpu.c             |  5 +++++
 4 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index c1dea1c8e9..8e8679bc82 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -69,10 +69,11 @@
 #define REGION_TRANSIENT_MASK(x)  (((x) >> _REGION_TRANSIENT_BIT) & 0x3U)
 
 /*
- * _REGION_NORMAL is convenience define. It is not meant to be used
- * outside of this header.
+ * _REGION_NORMAL and _REGION_DEVICE are convenience defines. They are not
+ * meant to be used outside of this header.
  */
 #define _REGION_NORMAL            (MT_NORMAL|_REGION_PRESENT)
+#define _REGION_DEVICE            (_REGION_XN|_REGION_PRESENT)
 
 #define REGION_HYPERVISOR_RW      (_REGION_NORMAL|_REGION_XN)
 #define REGION_HYPERVISOR_RO      (_REGION_NORMAL|_REGION_XN|_REGION_RO)
@@ -80,6 +81,7 @@
 #define REGION_HYPERVISOR         REGION_HYPERVISOR_RW
 #define REGION_HYPERVISOR_BOOT    (REGION_HYPERVISOR_RW|_REGION_BOOTONLY)
 #define REGION_HYPERVISOR_SWITCH  (REGION_HYPERVISOR_RW|_REGION_SWITCH)
+#define REGION_HYPERVISOR_NOCACHE (_REGION_DEVICE|MT_DEVICE_nGnRE|_REGION_SWITCH)
 
 #define INVALID_REGION            (~0UL)
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 3581f8f990..b7a2225c25 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -194,6 +194,7 @@ struct init_info
 /* Index of MPU memory section */
 enum mpu_section_info {
     MSINFO_GUEST,
+    MSINFO_DEVICE,
     MSINFO_MAX
 };
 
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 3a0d110b13..1566ba60af 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -73,6 +73,7 @@ struct page_info *frame_table;
 
 static const unsigned int mpu_section_mattr[MSINFO_MAX] = {
     REGION_HYPERVISOR_SWITCH,
+    REGION_HYPERVISOR_NOCACHE,
 };
 
 /* Write a MPU protection region */
@@ -673,8 +674,19 @@ void __init setup_static_mappings(void)
     setup_staticheap_mappings();
 
     for ( uint8_t i = MSINFO_GUEST; i < MSINFO_MAX; i++ )
+    {
+#ifdef CONFIG_EARLY_PRINTK
+        if ( i == MSINFO_DEVICE )
+            /*
+             * Destroy early UART mapping before mapping device memory section.
+             * WARNING: console will be inaccessible temporarily.
+             */
+            destroy_xen_mappings(CONFIG_EARLY_UART_BASE_ADDRESS,
+                                 CONFIG_EARLY_UART_BASE_ADDRESS + EARLY_UART_SIZE);
+#endif
         map_mpu_memory_section_on_boot(i, mpu_section_mattr[i]);
-    /* TODO: device memory section, boot-module section, etc */
+    }
+    /* TODO: boot-module section, etc */
 }
 
 /* Map a frame table to cover physical addresses ps through pe */
diff --git a/xen/arch/arm/setup_mpu.c b/xen/arch/arm/setup_mpu.c
index 09a38a34a4..ec05542f68 100644
--- a/xen/arch/arm/setup_mpu.c
+++ b/xen/arch/arm/setup_mpu.c
@@ -29,6 +29,7 @@
 
 const char *mpu_section_info_str[MSINFO_MAX] = {
     "mpu,guest-memory-section",
+    "mpu,device-memory-section",
 };
 
 /*
@@ -47,6 +48,10 @@ struct mpuinfo __initdata mpuinfo;
  * through "xen,static-mem" property in MPU system. "mpu,guest-memory-section"
  * limits the scattering of "xen,static-mem", as users could not define
  * a "xen,static-mem" outside "mpu,guest-memory-section".
+ *
+ * "mpu,device-memory-section": this section describes the device memory
+ * layout, using the least number of memory regions, for all devices in the
+ * system that will be used by Xen, like the `UART`, `GIC`, etc.
  */
 static int __init process_mpu_memory_section(const void *fdt, int node,
                                              const char *name, void *data,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476579.738914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjF-0000Gf-0s; Fri, 13 Jan 2023 05:35:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476579.738914; Fri, 13 Jan 2023 05:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjE-0000GW-Rv; Fri, 13 Jan 2023 05:35:48 +0000
Received: by outflank-mailman (input) for mailman id 476579;
 Fri, 13 Jan 2023 05:35:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCev-0005sJ-GW
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:21 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 8277fd1c-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:31:20 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D92BBFEC;
 Thu, 12 Jan 2023 21:32:01 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1F50D3F587;
 Thu, 12 Jan 2023 21:31:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8277fd1c-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 32/40] xen/mpu: implement MPU version of ioremap_xxx
Date: Fri, 13 Jan 2023 13:29:05 +0800
Message-Id: <20230113052914.3845596-33-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The ioremap_xxx functions are normally used to remap device address ranges
in an MMU system during device driver initialization.

However, in an MPU system, virtual translation is not supported: the
device memory layout is statically configured in the Device Tree and
mapped at a very early stage.
So here we only add a check to verify this assumption.

To tolerate the few cases where the function is called to map a region
for a temporary copy, like ioremap_wc() during kernel image loading, a
region attribute mismatch is treated as a warning rather than an error.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/arm64/mpu.h |  1 +
 xen/arch/arm/include/asm/mm.h        | 16 ++++-
 xen/arch/arm/include/asm/mm_mpu.h    |  2 +
 xen/arch/arm/mm_mpu.c                | 88 ++++++++++++++++++++++++----
 xen/include/xen/vmap.h               | 12 ++++
 5 files changed, 106 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index 8e8679bc82..b4e50a9a0e 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -82,6 +82,7 @@
 #define REGION_HYPERVISOR_BOOT    (REGION_HYPERVISOR_RW|_REGION_BOOTONLY)
 #define REGION_HYPERVISOR_SWITCH  (REGION_HYPERVISOR_RW|_REGION_SWITCH)
 #define REGION_HYPERVISOR_NOCACHE (_REGION_DEVICE|MT_DEVICE_nGnRE|_REGION_SWITCH)
+#define REGION_HYPERVISOR_WC      (_REGION_DEVICE|MT_NORMAL_NC)
 
 #define INVALID_REGION            (~0UL)
 
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 7969ec9f98..fa44cfc50d 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -14,6 +14,10 @@
 # error "unknown ARM variant"
 #endif
 
+#if defined(CONFIG_HAS_MPU)
+# include <asm/arm64/mpu.h>
+#endif
+
 /* Align Xen to a 2 MiB boundary. */
 #define XEN_PADDR_ALIGN (1 << 21)
 
@@ -198,19 +202,25 @@ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
 /* map a physical range in virtual memory */
 void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned int attributes);
 
+#ifndef CONFIG_HAS_MPU
+#define DEFINE_ATTRIBUTE(var)   (PAGE_##var)
+#else
+#define DEFINE_ATTRIBUTE(var)   (REGION_##var)
+#endif
+
 static inline void __iomem *ioremap_nocache(paddr_t start, size_t len)
 {
-    return ioremap_attr(start, len, PAGE_HYPERVISOR_NOCACHE);
+    return ioremap_attr(start, len, DEFINE_ATTRIBUTE(HYPERVISOR_NOCACHE));
 }
 
 static inline void __iomem *ioremap_cache(paddr_t start, size_t len)
 {
-    return ioremap_attr(start, len, PAGE_HYPERVISOR);
+    return ioremap_attr(start, len, DEFINE_ATTRIBUTE(HYPERVISOR));
 }
 
 static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 {
-    return ioremap_attr(start, len, PAGE_HYPERVISOR_WC);
+    return ioremap_attr(start, len, DEFINE_ATTRIBUTE(HYPERVISOR_WC));
 }
 
 /* XXX -- account for base */
diff --git a/xen/arch/arm/include/asm/mm_mpu.h b/xen/arch/arm/include/asm/mm_mpu.h
index eebd5b5d35..5aa61c43b6 100644
--- a/xen/arch/arm/include/asm/mm_mpu.h
+++ b/xen/arch/arm/include/asm/mm_mpu.h
@@ -2,6 +2,8 @@
 #ifndef __ARCH_ARM_MM_MPU__
 #define __ARCH_ARM_MM_MPU__
 
+#include <asm/arm64/mpu.h>
+
 #define setup_mm_mappings(boot_phys_offset) ((void)(boot_phys_offset))
 /*
  * Function setup_static_mappings() sets up MPU memory region mapping
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index ea64aa38e4..7b54c87acf 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -712,32 +712,100 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
            frametable_size - (nr_pdxs * sizeof(struct page_info)));
 }
 
-/* TODO: Implementation on the first usage */
-void dump_hyp_walk(vaddr_t addr)
+static bool region_attribute_match(pr_t *region, unsigned int attributes)
 {
+    if ( region->prbar.reg.ap != REGION_AP_MASK(attributes) )
+    {
+        printk(XENLOG_ERR "region permission is not matched (0x%x -> 0x%x)\n",
+               region->prbar.reg.ap, REGION_AP_MASK(attributes));
+        return false;
+    }
+
+    if ( region->prbar.reg.xn != REGION_XN_MASK(attributes) )
+    {
+        printk(XENLOG_ERR "region execution permission is not matched (0x%x -> 0x%x)\n",
+               region->prbar.reg.xn, REGION_XN_MASK(attributes));
+        return false;
+    }
+
+    if ( region->prlar.reg.ai != REGION_AI_MASK(attributes) )
+    {
+        printk(XENLOG_ERR "region memory attributes is not matched (0x%x -> 0x%x)\n",
+               region->prlar.reg.ai, REGION_AI_MASK(attributes));
+        return false;
+    }
+
+    return true;
 }
 
-void __init remove_early_mappings(void)
+static bool check_region_and_attributes(paddr_t pa, size_t len,
+                                        unsigned int attributes,
+                                        const char *prefix)
+{
+    pr_t *region;
+    int rc;
+    uint64_t idx;
+
+    rc = mpumap_contain_region(xen_mpumap, max_xen_mpumap, pa, pa + len - 1,
+                               &idx);
+    if ( rc != MPUMAP_REGION_FOUND && rc != MPUMAP_REGION_INCLUSIVE )
+    {
+        region_printk("%s: range 0x%"PRIpaddr" - 0x%"PRIpaddr" has not been properly mapped\n",
+                      prefix, pa, pa + len - 1);
+        return false;
+    }
+
+    region = &xen_mpumap[idx];
+    /*
+     * To tolerate the few cases where the function is called to remap for
+     * a temporary copy, like ioremap_wc() during kernel image loading, an
+     * attribute mismatch is treated as a warning rather than an error.
+     */
+    if ( !region_attribute_match(region, attributes) )
+        printk(XENLOG_WARNING
+               "mpu: %s: range 0x%"PRIpaddr" - 0x%"PRIpaddr" attributes mismatched\n",
+               prefix, pa, pa + len - 1);
+
+    return true;
+}
+
+/*
+ * This function is normally used to remap device address ranges in an
+ * MMU system.
+ * However, in an MPU system, virtual translation is not supported and
+ * device memory is statically configured in the FDT and mapped at a very
+ * early stage.
+ * So here we only add a check to verify this assumption.
+ */
+void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes)
 {
+    if ( !check_region_and_attributes(pa, len, attributes, "ioremap") )
+        return NULL;
+
+    return maddr_to_virt(pa);
 }
 
-int init_secondary_pagetables(int cpu)
+void *ioremap(paddr_t pa, size_t len)
 {
-    return -ENOSYS;
+    return ioremap_attr(pa, len, REGION_HYPERVISOR_NOCACHE);
 }
 
-void mmu_init_secondary_cpu(void)
+/* TODO: Implementation on the first usage */
+void dump_hyp_walk(vaddr_t addr)
 {
 }
 
-void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes)
+void __init remove_early_mappings(void)
 {
-    return NULL;
 }
 
-void *ioremap(paddr_t pa, size_t len)
+int init_secondary_pagetables(int cpu)
+{
+    return -ENOSYS;
+}
+
+void mmu_init_secondary_cpu(void)
 {
-    return NULL;
 }
 
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
index 2e3ae0ca6a..fc56d02fc8 100644
--- a/xen/include/xen/vmap.h
+++ b/xen/include/xen/vmap.h
@@ -89,15 +89,27 @@ static inline void vfree(void *va)
     ASSERT_UNREACHABLE();
 }
 
+#ifdef CONFIG_HAS_MPU
+void __iomem *ioremap(paddr_t, size_t);
+#else
 void __iomem *ioremap(paddr_t, size_t)
 {
     ASSERT_UNREACHABLE();
     return NULL;
 }
+#endif
 
 static inline void iounmap(void __iomem *va)
 {
+#ifdef CONFIG_HAS_MPU
+    /*
+     * ioremap() and iounmap() come as a pair; as ioremap() only performs
+     * checks on an MPU system, iounmap() does nothing and simply returns.
+     */
+    return;
+#else
     ASSERT_UNREACHABLE();
+#endif
 }
 
 static inline void *arch_vmap_virt_end(void)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:35:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476581.738925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjH-0000gx-AW; Fri, 13 Jan 2023 05:35:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476581.738925; Fri, 13 Jan 2023 05:35:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjH-0000go-73; Fri, 13 Jan 2023 05:35:51 +0000
Received: by outflank-mailman (input) for mailman id 476581;
 Fri, 13 Jan 2023 05:35:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeE-0005sJ-1z
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:38 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 68527d9e-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:36 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 17767FEC;
 Thu, 12 Jan 2023 21:31:18 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 58DCA3F587;
 Thu, 12 Jan 2023 21:30:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68527d9e-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 19/40] xen/mpu: populate a new region in Xen MPU mapping table
Date: Fri, 13 Jan 2023 13:28:52 +0800
Message-Id: <20230113052914.3845596-20-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The new helper xen_mpumap_update() is responsible for updating an entry
in the Xen MPU memory mapping table, whether creating a new entry, or
updating or destroying an existing one.

This commit only covers populating a new entry in the Xen MPU mapping
table (xen_mpumap). The others will be introduced in following commits.

In xen_mpumap_update_entry(), we first check that the requested address
range [base, limit) is not already mapped. Then we use pr_of_xenaddr()
to build up the MPU memory region structure (pr_t).
Lastly, we set the memory attributes and permissions based on the
variable @flags.

To summarize all region attributes in the single variable @flags, its
layout is as follows:
[0:2] Memory attribute Index
[3:4] Execute Never
[5:6] Access Permission
[7]   Region Present
We also provide a set of definitions (REGION_HYPERVISOR_RW, etc.) that
combine the memory attributes and permissions for common combinations.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/arm64/mpu.h |  72 +++++++
 xen/arch/arm/mm_mpu.c                | 276 ++++++++++++++++++++++++++-
 2 files changed, 340 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index c945dd53db..fcde6ad0db 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -16,6 +16,61 @@
  */
 #define ARM_MAX_MPU_MEMORY_REGIONS 255
 
+/* Access permission attributes. */
+/* Read/Write at EL2, No Access at EL1/EL0. */
+#define AP_RW_EL2 0x0
+/* Read/Write at EL2/EL1/EL0 all levels. */
+#define AP_RW_ALL 0x1
+/* Read-only at EL2, No Access at EL1/EL0. */
+#define AP_RO_EL2 0x2
+/* Read-only at EL2/EL1/EL0 all levels. */
+#define AP_RO_ALL 0x3
+
+/*
+ * Execute never.
+ * Stage 1 EL2 translation regime.
+ * XN[1] determines whether execution of the instruction fetched from the MPU
+ * memory region is permitted.
+ * Stage 2 EL1/EL0 translation regime.
+ * XN[0] determines whether execution of the instruction fetched from the MPU
+ * memory region is permitted.
+ */
+#define XN_DISABLED    0x0
+#define XN_P2M_ENABLED 0x1
+#define XN_ENABLED     0x2
+
+/*
+ * Layout of the flags used for updating Xen MPU region attributes
+ * [0:2] Memory attribute Index
+ * [3:4] Execute Never
+ * [5:6] Access Permission
+ * [7]   Region Present
+ */
+#define _REGION_AI_BIT            0
+#define _REGION_XN_BIT            3
+#define _REGION_AP_BIT            5
+#define _REGION_PRESENT_BIT       7
+#define _REGION_XN                (2U << _REGION_XN_BIT)
+#define _REGION_RO                (2U << _REGION_AP_BIT)
+#define _REGION_PRESENT           (1U << _REGION_PRESENT_BIT)
+#define REGION_AI_MASK(x)         (((x) >> _REGION_AI_BIT) & 0x7U)
+#define REGION_XN_MASK(x)         (((x) >> _REGION_XN_BIT) & 0x3U)
+#define REGION_AP_MASK(x)         (((x) >> _REGION_AP_BIT) & 0x3U)
+#define REGION_RO_MASK(x)         (((x) >> _REGION_AP_BIT) & 0x2U)
+
+/*
+ * _REGION_NORMAL is convenience define. It is not meant to be used
+ * outside of this header.
+ */
+#define _REGION_NORMAL            (MT_NORMAL|_REGION_PRESENT)
+
+#define REGION_HYPERVISOR_RW      (_REGION_NORMAL|_REGION_XN)
+#define REGION_HYPERVISOR_RO      (_REGION_NORMAL|_REGION_XN|_REGION_RO)
+
+#define REGION_HYPERVISOR         REGION_HYPERVISOR_RW
+
+#define INVALID_REGION            (~0UL)
+
 #ifndef __ASSEMBLY__
 
 /* Protection Region Base Address Register */
@@ -49,6 +104,23 @@ typedef struct {
     prlar_t prlar;
 } pr_t;
 
+/* Access to set the base address of an MPU protection region (pr_t). */
+#define pr_set_base(pr, paddr) ({                           \
+    pr_t *_pr = pr;                                         \
+    _pr->prbar.reg.base = ((paddr) >> MPU_REGION_SHIFT);    \
+})
+
+/* Access to set the limit address of an MPU protection region (pr_t). */
+#define pr_set_limit(pr, paddr) ({                          \
+    pr_t *_pr = pr;                                         \
+    _pr->prlar.reg.limit = ((paddr) >> MPU_REGION_SHIFT);   \
+})
+
+#define region_is_valid(pr) ({                              \
+    pr_t *_pr = pr;                                         \
+    _pr->prlar.reg.en;                                      \
+})
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ARM64_MPU_H__ */
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index f2b494449c..08720a7c19 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -22,9 +22,23 @@
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/page-size.h>
+#include <xen/spinlock.h>
 #include <asm/arm64/mpu.h>
 #include <asm/page.h>
 
+#ifdef NDEBUG
+static inline void
+__attribute__ ((__format__ (__printf__, 1, 2)))
+region_printk(const char *fmt, ...) {}
+#else
+#define region_printk(fmt, args...)         \
+    do                                      \
+    {                                       \
+        dprintk(XENLOG_ERR, fmt, ## args);  \
+        WARN();                             \
+    } while (0)
+#endif
+
 /* Xen MPU memory region mapping table. */
 pr_t __aligned(PAGE_SIZE) __section(".data.page_aligned")
      xen_mpumap[ARM_MAX_MPU_MEMORY_REGIONS];
@@ -46,6 +60,8 @@ uint64_t __ro_after_init next_transient_region_idx;
 /* Maximum number of supported MPU memory regions by the EL2 MPU. */
 uint64_t __ro_after_init max_xen_mpumap;
 
+static DEFINE_SPINLOCK(xen_mpumap_lock);
+
 /* Write a MPU protection region */
 #define WRITE_PROTECTION_REGION(sel, pr, prbar_el2, prlar_el2) ({       \
     uint64_t _sel = sel;                                                \
@@ -73,6 +89,28 @@ uint64_t __ro_after_init max_xen_mpumap;
     _pr;                                                                \
 })
 
+/*
+ * At boot time, fixed MPU regions (e.g. the Xen text section) are
+ * added at the front of the table, indexed by next_fixed_region_idx,
+ * while boot-only regions (e.g. the early FDT) are added at the rear,
+ * indexed by next_transient_region_idx.
+ * As more and more MPU regions are added, once the two indexes meet
+ * and pass each other, the whole set of EL2 MPU memory regions has
+ * been exhausted.
+ */
+static bool __init xen_boot_mpu_regions_is_full(void)
+{
+    return next_transient_region_idx < next_fixed_region_idx;
+}
+
+static void __init update_boot_xen_mpumap_idx(uint64_t idx)
+{
+    if ( idx == next_transient_region_idx )
+        next_transient_region_idx--;
+    else
+        next_fixed_region_idx++;
+}
+
 /*
  * Access MPU protection region, including both read/write operations.
  * Armv8-R AArch64 at most supports 255 MPU protection regions.
@@ -197,6 +235,236 @@ static void access_protection_region(bool read, pr_t *pr_read,
     }
 }
 
+/*
+ * Standard entry for building up the structure of an MPU memory region (pr_t).
+ * It is equivalent to mfn_to_xen_entry in the MMU system.
+ * base and limit both refer to inclusive addresses.
+ */
+static inline pr_t pr_of_xenaddr(paddr_t base, paddr_t limit, unsigned int attr)
+{
+    prbar_t prbar;
+    prlar_t prlar;
+    pr_t region;
+
+    /* Build up value for PRBAR_EL2. */
+    prbar = (prbar_t) {
+        .reg = {
+            .ap = AP_RW_EL2,  /* Read/Write at EL2, no access at EL1/EL0. */
+            .xn = XN_ENABLED, /* No need to execute outside .text */
+        }};
+
+    switch ( attr )
+    {
+    case MT_NORMAL_NC:
+        /*
+         * ARM ARM: Overlaying the shareability attribute (DDI
+         * 0406C.b B3-1376 to 1377)
+         *
+         * A memory region with a resultant memory type attribute of normal,
+         * and a resultant cacheability attribute of Inner non-cacheable,
+         * outer non-cacheable, must have a resultant shareability attribute
+         * of outer shareable, otherwise shareability is UNPREDICTABLE.
+         *
+         * On ARMv8 shareability is ignored and explicitly treated as outer
+         * shareable for normal inner non-cacheable, outer non-cacheable.
+         */
+        prbar.reg.sh = LPAE_SH_OUTER;
+        break;
+    case MT_DEVICE_nGnRnE:
+    case MT_DEVICE_nGnRE:
+        /*
+         * Shareability is ignored for non-normal memory, Outer is as
+         * good as anything.
+         *
+         * On ARMv8 shareability is ignored and explicitly treated as outer
+         * shareable for any device memory type.
+         */
+        prbar.reg.sh = LPAE_SH_OUTER;
+        break;
+    default:
+        /* Xen mappings are SMP coherent */
+        prbar.reg.sh = LPAE_SH_INNER;
+        break;
+    }
+
+    /* Build up value for PRLAR_EL2. */
+    prlar = (prlar_t) {
+        .reg = {
+            .ns = 0,        /* Hyp mode is in secure world */
+            .ai = attr,
+            .en = 1,        /* Region enabled */
+        }};
+
+    /* Build up MPU memory region. */
+    region = (pr_t) {
+        .prbar = prbar,
+        .prlar = prlar,
+    };
+
+    /* Set base address and limit address. */
+    pr_set_base(&region, base);
+    pr_set_limit(&region, limit);
+
+    return region;
+}
+
+#define MPUMAP_REGION_FAILED    0
+#define MPUMAP_REGION_FOUND     1
+#define MPUMAP_REGION_INCLUSIVE 2
+#define MPUMAP_REGION_OVERLAP   3
+
+/*
+ * Check whether the memory range [base, limit] is mapped in the MPU memory
+ * region table \mpu. Only the address range is considered; memory attributes
+ * and permissions are not considered here.
+ * If a match is found, the associated index will be filled in.
+ * If the entry is not present, INVALID_REGION will be set in \index.
+ *
+ * Note that the parameters \base and \limit must both refer to
+ * inclusive addresses.
+ *
+ * Return values:
+ *  MPUMAP_REGION_FAILED: no mapping and no overlapping
+ *  MPUMAP_REGION_FOUND: an exact match in address was found
+ *  MPUMAP_REGION_INCLUSIVE: an inclusive match in address was found
+ *  MPUMAP_REGION_OVERLAP: overlaps with an existing mapping
+ */
+static int mpumap_contain_region(pr_t *mpu, uint64_t nr_regions,
+                                 paddr_t base, paddr_t limit, uint64_t *index)
+{
+    uint64_t i = 0;
+    uint64_t _index = INVALID_REGION;
+
+    /* Allow index to be NULL */
+    index = index ?: &_index;
+
+    for ( ; i < nr_regions; i++ )
+    {
+        paddr_t iter_base = pr_get_base(&mpu[i]);
+        paddr_t iter_limit = pr_get_limit(&mpu[i]);
+
+        /* Found an exact valid match */
+        if ( (iter_base == base) && (iter_limit == limit) &&
+             region_is_valid(&mpu[i]) )
+        {
+            *index = i;
+            return MPUMAP_REGION_FOUND;
+        }
+
+        /* No overlapping */
+        if ( (iter_limit < base) || (iter_base > limit) )
+            continue;
+        /* Inclusive and valid */
+        else if ( (base >= iter_base) && (limit <= iter_limit) &&
+                  region_is_valid(&mpu[i]) )
+        {
+            *index = i;
+            return MPUMAP_REGION_INCLUSIVE;
+        }
+        else
+        {
+            region_printk("Range 0x%"PRIpaddr" - 0x%"PRIpaddr" overlaps with the existing region 0x%"PRIpaddr" - 0x%"PRIpaddr"\n",
+                          base, limit, iter_base, iter_limit);
+            return MPUMAP_REGION_OVERLAP;
+        }
+    }
+
+    return MPUMAP_REGION_FAILED;
+}
+
+/*
+ * Update an entry at the index @idx.
+ * @base:  base address
+ * @limit: limit address (exclusive)
+ * @flags: region attributes, should be a combination of REGION_HYPERVISOR_xx
+ */
+static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
+                                   unsigned int flags)
+{
+    uint64_t idx;
+    int rc;
+
+    rc = mpumap_contain_region(xen_mpumap, max_xen_mpumap, base, limit - 1,
+                               &idx);
+    if ( rc == MPUMAP_REGION_OVERLAP )
+        return -EINVAL;
+
+    /* We are inserting a mapping => Create new region. */
+    if ( flags & _REGION_PRESENT )
+    {
+        if ( rc != MPUMAP_REGION_FAILED )
+            return -EINVAL;
+
+        if ( xen_boot_mpu_regions_is_full() )
+        {
+            region_printk("There is no room left in EL2 MPU memory region mapping\n");
+            return -ENOMEM;
+        }
+
+        /* During boot time, the default index is next_fixed_region_idx. */
+        if ( system_state <= SYS_STATE_active )
+            idx = next_fixed_region_idx;
+
+        xen_mpumap[idx] = pr_of_xenaddr(base, limit - 1, REGION_AI_MASK(flags));
+        /* Set permission */
+        xen_mpumap[idx].prbar.reg.ap = REGION_AP_MASK(flags);
+        xen_mpumap[idx].prbar.reg.xn = REGION_XN_MASK(flags);
+
+        /* Update and enable the region */
+        access_protection_region(false, NULL, (const pr_t*)(&xen_mpumap[idx]),
+                                 idx);
+
+        if ( system_state <= SYS_STATE_active )
+            update_boot_xen_mpumap_idx(idx);
+    }
+
+    return 0;
+}
+
+static int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags)
+{
+    int rc;
+
+    /*
+     * The hardware is configured to forbid mappings that are both
+     * writeable and executable.
+     * When modifying/creating a mapping (i.e. _REGION_PRESENT is set),
+     * prevent any update if this would happen.
+     */
+    if ( (flags & _REGION_PRESENT) && !REGION_RO_MASK(flags) &&
+         !REGION_XN_MASK(flags) )
+    {
+        region_printk("Mappings should not be both Writeable and Executable.\n");
+        return -EINVAL;
+    }
+
+    if ( !IS_ALIGNED(base, PAGE_SIZE) || !IS_ALIGNED(limit, PAGE_SIZE) )
+    {
+        region_printk("base address 0x%"PRIpaddr", or limit address 0x%"PRIpaddr" is not page aligned.\n",
+                      base, limit);
+        return -EINVAL;
+    }
+
+    spin_lock(&xen_mpumap_lock);
+
+    rc = xen_mpumap_update_entry(base, limit, flags);
+
+    spin_unlock(&xen_mpumap_lock);
+
+    return rc;
+}
+
+int map_pages_to_xen(unsigned long virt,
+                     mfn_t mfn,
+                     unsigned long nr_mfns,
+                     unsigned int flags)
+{
+    ASSERT(virt == mfn_to_maddr(mfn));
+
+    return xen_mpumap_update(mfn_to_maddr(mfn),
+                             mfn_to_maddr(mfn_add(mfn, nr_mfns)), flags);
+}
+
 /* TODO: Implementation on the first usage */
 void dump_hyp_walk(vaddr_t addr)
 {
@@ -230,14 +498,6 @@ void *ioremap(paddr_t pa, size_t len)
     return NULL;
 }
 
-int map_pages_to_xen(unsigned long virt,
-                     mfn_t mfn,
-                     unsigned long nr_mfns,
-                     unsigned int flags)
-{
-    return -ENOSYS;
-}
-
 int destroy_xen_mappings(unsigned long s, unsigned long e)
 {
     return -ENOSYS;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:53 2023
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 10/40] xen/arm: split MMU and MPU config files from config.h
Date: Fri, 13 Jan 2023 13:28:43 +0800
Message-Id: <20230113052914.3845596-11-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Xen defines some global configuration macros for Arm in
config.h. We still want to use it for Armv8-R systems, but
it contains some address-related macros that are defined
for MMU systems. These macros will not be used by MPU
systems, and gating them with CONFIG_HAS_MPU ifdefery
would result in messy, hard-to-read and hard-to-maintain
code.

So we keep the common definitions in config.h, but move
the virtual-address-related definitions to a new file,
config_mmu.h, and use a new file, config_mpu.h, to store
definitions for MPU systems. To avoid spreading #ifdef
everywhere, we keep the same definition names for MPU
systems, like XEN_VIRT_START and HYPERVISOR_VIRT_START,
but the definition contents are MPU specific.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
v1 -> v2:
1. Remove duplicated FIXMAP definitions from config_mmu.h
---
 xen/arch/arm/include/asm/config.h     | 103 +++--------------------
 xen/arch/arm/include/asm/config_mmu.h | 112 ++++++++++++++++++++++++++
 xen/arch/arm/include/asm/config_mpu.h |  25 ++++++
 3 files changed, 147 insertions(+), 93 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/config_mmu.h
 create mode 100644 xen/arch/arm/include/asm/config_mpu.h

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 25a625ff08..86d8142959 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -48,6 +48,12 @@
 
 #define INVALID_VCPU_ID MAX_VIRT_CPUS
 
+/* Used for calculating PDX */
+#ifdef CONFIG_ARM_64
+#define FRAMETABLE_SIZE        GB(32)
+#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
+#endif
+
 #define __LINUX_ARM_ARCH__ 7
 #define CONFIG_AEABI
 
@@ -71,99 +77,10 @@
 #include <xen/const.h>
 #include <xen/page-size.h>
 
-/*
- * Common ARM32 and ARM64 layout:
- *   0  -   2M   Unmapped
- *   2M -   4M   Xen text, data, bss
- *   4M -   6M   Fixmap: special-purpose 4K mapping slots
- *   6M -  10M   Early boot mapping of FDT
- *   10M - 12M   Livepatch vmap (if compiled in)
- *
- * ARM32 layout:
- *   0  -  12M   <COMMON>
- *
- *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
- * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
- *                    space
- *
- *   1G -   2G   Xenheap: always-mapped memory
- *   2G -   4G   Domheap: on-demand-mapped
- *
- * ARM64 layout:
- * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
- *   0  -  12M   <COMMON>
- *
- *   1G -   2G   VMAP: ioremap and early_ioremap
- *
- *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
- *
- * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
- *  Unused
- *
- * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
- *  1:1 mapping of RAM
- *
- * 0x0000850000000000 - 0x0000ffffffffffff (123TB, L0 slots [266..511])
- *  Unused
- */
-
-#define XEN_VIRT_START         _AT(vaddr_t,0x00200000)
-#define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
-
-#define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
-#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
-
-#ifdef CONFIG_LIVEPATCH
-#define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
-#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
-#endif
-
-#define HYPERVISOR_VIRT_START  XEN_VIRT_START
-
-#ifdef CONFIG_ARM_32
-
-#define CONFIG_SEPARATE_XENHEAP 1
-
-#define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
-#define FRAMETABLE_SIZE        MB(128-32)
-#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
-#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
-
-#define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
-#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
-
-#define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
-#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
-
-#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
-#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
-
-#define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
-
-/* Number of domheap pagetable pages required at the second level (2MB mappings) */
-#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
-
-#else /* ARM_64 */
-
-#define SLOT0_ENTRY_BITS  39
-#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
-#define SLOT0_ENTRY_SIZE  SLOT0(1)
-
-#define VMAP_VIRT_START  GB(1)
-#define VMAP_VIRT_SIZE   GB(1)
-
-#define FRAMETABLE_VIRT_START  GB(32)
-#define FRAMETABLE_SIZE        GB(32)
-#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
-
-#define DIRECTMAP_VIRT_START   SLOT0(256)
-#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
-#define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
-
-#define XENHEAP_VIRT_START     directmap_virt_start
-
-#define HYPERVISOR_VIRT_END    DIRECTMAP_VIRT_END
-
+#ifdef CONFIG_HAS_MPU
+#include <asm/config_mpu.h>
+#else
+#include <asm/config_mmu.h>
 #endif
 
 #define NR_hypercalls 64
diff --git a/xen/arch/arm/include/asm/config_mmu.h b/xen/arch/arm/include/asm/config_mmu.h
new file mode 100644
index 0000000000..c12ff25cf4
--- /dev/null
+++ b/xen/arch/arm/include/asm/config_mmu.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/******************************************************************************
+ * config_mmu.h
+ *
+ * A Linux-style configuration list, which can only be included by config.h
+ */
+
+#ifndef __ARM_CONFIG_MMU_H__
+#define __ARM_CONFIG_MMU_H__
+
+/*
+ * Common ARM32 and ARM64 layout:
+ *   0  -   2M   Unmapped
+ *   2M -   4M   Xen text, data, bss
+ *   4M -   6M   Fixmap: special-purpose 4K mapping slots
+ *   6M -  10M   Early boot mapping of FDT
+ *   10M - 12M   Livepatch vmap (if compiled in)
+ *
+ * ARM32 layout:
+ *   0  -  12M   <COMMON>
+ *
+ *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
+ * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
+ *                    space
+ *
+ *   1G -   2G   Xenheap: always-mapped memory
+ *   2G -   4G   Domheap: on-demand-mapped
+ *
+ * ARM64 layout:
+ * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
+ *   0  -  12M   <COMMON>
+ *
+ *   1G -   2G   VMAP: ioremap and early_ioremap
+ *
+ *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
+ *
+ * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
+ *  Unused
+ *
+ * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
+ *  1:1 mapping of RAM
+ *
+ * 0x0000850000000000 - 0x0000ffffffffffff (123TB, L0 slots [266..511])
+ *  Unused
+ */
+
+#define XEN_VIRT_START         _AT(vaddr_t,0x00200000)
+#define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
+
+#define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
+#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
+
+#ifdef CONFIG_LIVEPATCH
+#define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
+#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
+#endif
+
+#define HYPERVISOR_VIRT_START  XEN_VIRT_START
+
+#ifdef CONFIG_ARM_32
+
+#define CONFIG_SEPARATE_XENHEAP 1
+
+#define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
+#define FRAMETABLE_SIZE        MB(128-32)
+#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
+#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
+
+#define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
+#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
+
+#define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
+#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
+
+#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
+#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
+
+#define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
+
+/* Number of domheap pagetable pages required at the second level (2MB mappings) */
+#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
+
+#else /* ARM_64 */
+
+#define SLOT0_ENTRY_BITS  39
+#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
+#define SLOT0_ENTRY_SIZE  SLOT0(1)
+
+#define VMAP_VIRT_START  GB(1)
+#define VMAP_VIRT_SIZE   GB(1)
+
+#define FRAMETABLE_VIRT_START  GB(32)
+
+#define DIRECTMAP_VIRT_START   SLOT0(256)
+#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
+#define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
+
+#define XENHEAP_VIRT_START     directmap_virt_start
+
+#define HYPERVISOR_VIRT_END    DIRECTMAP_VIRT_END
+
+#endif
+
+#endif /* __ARM_CONFIG_MMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/config_mpu.h b/xen/arch/arm/include/asm/config_mpu.h
new file mode 100644
index 0000000000..6b52b11ef7
--- /dev/null
+++ b/xen/arch/arm/include/asm/config_mpu.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * config_mpu.h: A Linux-style configuration list for Arm MPU systems,
+ *               which can only be included by config.h
+ */
+
+#ifndef __ARM_CONFIG_MPU_H__
+#define __ARM_CONFIG_MPU_H__
+
+#define XEN_START_ADDRESS CONFIG_XEN_START_ADDRESS
+
+/*
+ * All MPU platforms need to provide a XEN_START_ADDRESS for the linker.
+ * This address indicates where the Xen image will be loaded and run
+ * from, and it must be aligned to PAGE_SIZE.
+ */
+#if (XEN_START_ADDRESS % PAGE_SIZE) != 0
+#error "XEN_START_ADDRESS must be aligned to PAGE_SIZE"
+#endif
+
+#define XEN_VIRT_START         _AT(paddr_t, XEN_START_ADDRESS)
+
+#define HYPERVISOR_VIRT_START  XEN_VIRT_START
+
+#endif /* __ARM_CONFIG_MPU_H__ */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:54 2023
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 13/40] xen/mpu: introduce unified function setup_early_uart to map early UART
Date: Fri, 13 Jan 2023 13:28:46 +0800
Message-Id: <20230113052914.3845596-14-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On MMU systems, we map the UART in the fixmap (when earlyprintk is
used). On MPU systems, however, we map the UART with a transient MPU
memory region.

So we introduce a new unified function, setup_early_uart, to replace
the previous setup_fixmap.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/arm64/head.S               |  2 +-
 xen/arch/arm/arm64/head_mmu.S           |  4 +-
 xen/arch/arm/arm64/head_mpu.S           | 52 +++++++++++++++++++++++++
 xen/arch/arm/include/asm/early_printk.h |  1 +
 4 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 7f3f973468..a92883319d 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -272,7 +272,7 @@ primary_switched:
          * afterwards.
          */
         bl    remove_identity_mapping
-        bl    setup_fixmap
+        bl    setup_early_uart
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
         ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
index b59c40495f..a19b7c873d 100644
--- a/xen/arch/arm/arm64/head_mmu.S
+++ b/xen/arch/arm/arm64/head_mmu.S
@@ -312,7 +312,7 @@ ENDPROC(remove_identity_mapping)
  *
  * Clobbers x0 - x3
  */
-ENTRY(setup_fixmap)
+ENTRY(setup_early_uart)
 #ifdef CONFIG_EARLY_PRINTK
         /* Add UART to the fixmap table */
         ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
@@ -325,7 +325,7 @@ ENTRY(setup_fixmap)
         dsb   nshst
 
         ret
-ENDPROC(setup_fixmap)
+ENDPROC(setup_early_uart)
 
 /* Fail-stop */
 fail:   PRINT("- Boot failed -\r\n")
diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
index e2ac69b0cc..72d1e0863d 100644
--- a/xen/arch/arm/arm64/head_mpu.S
+++ b/xen/arch/arm/arm64/head_mpu.S
@@ -18,8 +18,10 @@
 #define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
 #define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
 #define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
+#define REGION_DEVICE_PRBAR     0x22    /* SH=10 AP=00 XN=10 */
 
 #define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
+#define REGION_DEVICE_PRLAR     0x09    /* NS=0 ATTR=100 EN=1 */
 
 /*
  * Macro to round up the section address to be PAGE_SIZE aligned
@@ -334,6 +336,56 @@ ENTRY(enable_mm)
     ret
 ENDPROC(enable_mm)
 
+/*
+ * Map the early UART with a new transient MPU memory region.
+ *
+ * x27: region selector
+ * x28: prbar
+ * x29: prlar
+ *
+ * Clobbers x0 - x4
+ *
+ */
+ENTRY(setup_early_uart)
+#ifdef CONFIG_EARLY_PRINTK
+    /* Save LR, as write_pr will be called later like a nested function. */
+    mov   x3, lr
+
+    /*
+     * The MPU region for the early UART is a transient region, since it
+     * will be replaced by the specific device memory layout when the FDT
+     * is parsed.
+    load_paddr x0, next_transient_region_idx
+    ldr   x4, [x0]
+
+    ldr   x28, =CONFIG_EARLY_UART_BASE_ADDRESS
+    and   x28, x28, #MPU_REGION_MASK
+    mov   x1, #REGION_DEVICE_PRBAR
+    orr   x28, x28, x1
+
+    ldr x29, =(CONFIG_EARLY_UART_BASE_ADDRESS + EARLY_UART_SIZE)
+    roundup_section x29
+    /* Limit address is inclusive */
+    sub   x29, x29, #1
+    and   x29, x29, #MPU_REGION_MASK
+    mov   x2, #REGION_DEVICE_PRLAR
+    orr   x29, x29, x2
+
+    mov   x27, x4
+    bl    write_pr
+
+    /* Create a new entry in xen_mpumap for early UART */
+    create_mpu_entry xen_mpumap, x4, x28, x29, x1, x2
+
+    /* Update next_transient_region_idx */
+    sub   x4, x4, #1
+    str   x4, [x0]
+
+    mov   lr, x3
+    ret
+#endif
+ENDPROC(setup_early_uart)
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
index 44a230853f..d87623e6d5 100644
--- a/xen/arch/arm/include/asm/early_printk.h
+++ b/xen/arch/arm/include/asm/early_printk.h
@@ -22,6 +22,7 @@
  * for EARLY_UART_VIRTUAL_ADDRESS.
  */
 #define EARLY_UART_VIRTUAL_ADDRESS CONFIG_EARLY_UART_BASE_ADDRESS
+#define EARLY_UART_SIZE            0x1000
 
 #else
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:35:56 2023
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 34/40] xen/mpu: free init memory in MPU system
Date: Fri, 13 Jan 2023 13:29:07 +0800
Message-Id: <20230113052914.3845596-35-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit implements free_init_memory in the MPU system, keeping the
same strategy as the MMU system.

In order to insert BRK instructions into the init code section, which
provoke a fault on purpose if any stale code is executed, we first need
to change the init code section permission to RW.
Function modify_xen_mappings is introduced to modify the permission of
an existing valid MPU memory region.

Then we nuke the instruction cache to remove entries related to the
init text.
Finally, we destroy the two MPU memory regions covering init text and
init data using destroy_xen_mappings.
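The sequence above — remap init text RW, stamp break instructions over both init sections, then destroy the mappings — can be sketched in plain C. This is a minimal sketch: the break-instruction value and the buffer standing in for an init section are illustrative assumptions, not Xen's actual symbols.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for AARCH64_BREAK_FAULT (illustrative value only). */
#define FAKE_BREAK_INSN 0xd4200000u

/*
 * Stamp a break instruction over every 32-bit word of [start, end) so
 * that any stale pointer into the freed init sections faults at once.
 */
static void poison_init_section(uint32_t *start, uint32_t *end)
{
    for (uint32_t *p = start; p < end; p++)
        *p = FAKE_BREAK_INSN;
}
```

In the real patch this runs once per init section (text and data), each pass followed by destroy_xen_mappings() on that section's bounds.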

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/mm_mpu.c | 85 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 83 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 0b720004ee..de0c7d919a 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -20,6 +20,7 @@
  */
 
 #include <xen/init.h>
+#include <xen/kernel.h>
 #include <xen/libfdt/libfdt.h>
 #include <xen/mm.h>
 #include <xen/page-size.h>
@@ -77,6 +78,8 @@ static const unsigned int mpu_section_mattr[MSINFO_MAX] = {
     REGION_HYPERVISOR_BOOT,
 };
 
+extern char __init_data_begin[], __init_end[];
+
 /* Write a MPU protection region */
 #define WRITE_PROTECTION_REGION(sel, pr, prbar_el2, prlar_el2) ({       \
     uint64_t _sel = sel;                                                \
@@ -443,8 +446,41 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
     if ( rc == MPUMAP_REGION_OVERLAP )
         return -EINVAL;
 
+    /* We are updating the permission. */
+    if ( (flags & _REGION_PRESENT) && (rc == MPUMAP_REGION_FOUND ||
+                                       rc == MPUMAP_REGION_INCLUSIVE) )
+    {
+
+        /*
+         * Currently, we only support modifying a *WHOLE* MPU memory region;
+         * part-region modification is not supported, as in the worst case it
+         * would leave three fragments after modification.
+         * Part-region modification will be introduced only when an actual
+         * use case comes up.
+         */
+        if ( rc == MPUMAP_REGION_INCLUSIVE )
+        {
+            region_printk("mpu: part-region modification is not supported\n");
+            return -EINVAL;
+        }
+
+        /* We don't allow changing memory attributes. */
+        if ( xen_mpumap[idx].prlar.reg.ai != REGION_AI_MASK(flags) )
+        {
+            region_printk("Modifying memory attributes is not allowed (0x%x -> 0x%x).\n",
+                          xen_mpumap[idx].prlar.reg.ai, REGION_AI_MASK(flags));
+            return -EINVAL;
+        }
+
+        /* Set new permission */
+        xen_mpumap[idx].prbar.reg.ap = REGION_AP_MASK(flags);
+        xen_mpumap[idx].prbar.reg.xn = REGION_XN_MASK(flags);
+
+        access_protection_region(false, NULL, (const pr_t *)(&xen_mpumap[idx]),
+                                 idx);
+    }
     /* We are inserting a mapping => Create new region. */
-    if ( flags & _REGION_PRESENT )
+    else if ( flags & _REGION_PRESENT )
     {
         if ( rc != MPUMAP_REGION_FAILED )
             return -EINVAL;
@@ -831,11 +867,56 @@ void mmu_init_secondary_cpu(void)
 
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
 {
-    return -ENOSYS;
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE));
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE));
+    ASSERT(s <= e);
+    return xen_mpumap_update(s, e, flags);
 }
 
 void free_init_memory(void)
 {
+    /* Kernel init text section. */
+    paddr_t init_text = virt_to_maddr(_sinittext);
+    paddr_t init_text_end = round_pgup(virt_to_maddr(_einittext));
+    /* Kernel init data. */
+    paddr_t init_data = virt_to_maddr(__init_data_begin);
+    paddr_t init_data_end = round_pgup(virt_to_maddr(__init_end));
+    unsigned long init_section[4] = {(unsigned long)init_text,
+                                     (unsigned long)init_text_end,
+                                     (unsigned long)init_data,
+                                     (unsigned long)init_data_end};
+    unsigned int nr_init = 2;
+    uint32_t insn = AARCH64_BREAK_FAULT;
+    unsigned int i = 0, j = 0;
+
+    /* Change kernel init text section to RW. */
+    modify_xen_mappings((unsigned long)init_text,
+                        (unsigned long)init_text_end, REGION_HYPERVISOR_RW);
+
+    /*
+     * From now on, init will not be used for execution anymore,
+     * so nuke the instruction cache to remove entries related to init.
+     */
+    invalidate_icache_local();
+
+    /* Destroy two MPU memory regions referring init text and init data. */
+    for ( ; i < nr_init; i++ )
+    {
+        uint32_t *p;
+        unsigned int nr;
+        int rc;
+
+        i = 2 * i;
+        p = (uint32_t *)init_section[i];
+        nr = (init_section[i + 1] - init_section[i]) / sizeof(uint32_t);
+
+        for ( j = 0; j < nr; j++ )
+            *(p + j) = insn;
+
+        rc = destroy_xen_mappings(init_section[i], init_section[i + 1]);
+        if ( rc < 0 )
+            panic("Unable to remove the init section (rc = %d)\n", rc);
+    }
 }
 
 int xenmem_add_to_physmap_one(
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476601.738969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjW-0002wE-TN; Fri, 13 Jan 2023 05:36:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476601.738969; Fri, 13 Jan 2023 05:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjW-0002w3-Ny; Fri, 13 Jan 2023 05:36:06 +0000
Received: by outflank-mailman (input) for mailman id 476601;
 Fri, 13 Jan 2023 05:36:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeQ-0005sJ-1W
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:50 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 6fbbe079-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:48 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8DBB513D5;
 Thu, 12 Jan 2023 21:31:30 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id CFA2A3F587;
 Thu, 12 Jan 2023 21:30:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fbbe079-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 23/40] xen/mpu: initialize frametable in MPU system
Date: Fri, 13 Jan 2023 13:28:56 +0800
Message-Id: <20230113052914.3845596-24-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen uses the page as the smallest granularity for memory management,
and we want to follow the same concept in the MPU system.
That is, the page_info structure and the frametable, which stores and
manages page_info, are also required in the MPU system.

In the MPU system, since there is no virtual address translation
(VA == PA), we cannot use a fixed virtual address (FRAMETABLE_VIRT_START)
to map the frametable like the MMU system does.
Instead, we define a variable "struct page_info *frame_table" as the
frametable pointer, and ask the boot allocator to allocate memory for it.

Once the frametable is successfully initialized, the conversion between
machine frame number/machine address/"virtual address" and the page_info
structure is ready too, e.g. mfn_to_page/maddr_to_page/virt_to_page.
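With VA == PA, the conversion chain collapses into plain arithmetic. Below is a minimal host-side sketch; PAGE_SHIFT, the identity pdx mapping, and the fake RAM base are simplifying assumptions — the real code goes through mfn_to_pdx() and maddr_to_mfn():

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Simplified page_info; the real structure carries ownership state etc. */
struct page_info { uint64_t flags; };

static struct page_info *frame_table;   /* set by boot-time allocation */
static uint64_t frametable_base_pdx;    /* pdx of the first covered frame */

/* With VA == PA and an identity pdx map, virt -> page is pure arithmetic. */
static struct page_info *virt_to_page_sketch(const void *v)
{
    uint64_t pdx = (uintptr_t)v >> PAGE_SHIFT;  /* maddr == vaddr */
    return frame_table + pdx - frametable_base_pdx;
}
```

The subtraction of frametable_base_pdx is what lets the table cover only the RAM range handed to setup_frametable_mappings() rather than all of the address space.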

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/mm.h     | 15 ---------------
 xen/arch/arm/include/asm/mm_mmu.h | 16 ++++++++++++++++
 xen/arch/arm/include/asm/mm_mpu.h | 17 +++++++++++++++++
 xen/arch/arm/mm_mpu.c             | 25 +++++++++++++++++++++++++
 4 files changed, 58 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index e29158028a..7969ec9f98 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -176,7 +176,6 @@ struct page_info
 
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
-#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 /* PDX of the first page in the frame table. */
 extern unsigned long frametable_base_pdx;
 
@@ -280,20 +279,6 @@ static inline uint64_t gvirt_to_maddr(vaddr_t va, paddr_t *pa,
 #define virt_to_mfn(va)     __virt_to_mfn(va)
 #define mfn_to_virt(mfn)    __mfn_to_virt(mfn)
 
-/* Convert between Xen-heap virtual addresses and page-info structures. */
-static inline struct page_info *virt_to_page(const void *v)
-{
-    unsigned long va = (unsigned long)v;
-    unsigned long pdx;
-
-    ASSERT(va >= XENHEAP_VIRT_START);
-    ASSERT(va < directmap_virt_end);
-
-    pdx = (va - XENHEAP_VIRT_START) >> PAGE_SHIFT;
-    pdx += mfn_to_pdx(directmap_mfn_start);
-    return frame_table + pdx - frametable_base_pdx;
-}
-
 static inline void *page_to_virt(const struct page_info *pg)
 {
     return mfn_to_virt(mfn_x(page_to_mfn(pg)));
diff --git a/xen/arch/arm/include/asm/mm_mmu.h b/xen/arch/arm/include/asm/mm_mmu.h
index 6d7e5ddde7..bc1b04c4c7 100644
--- a/xen/arch/arm/include/asm/mm_mmu.h
+++ b/xen/arch/arm/include/asm/mm_mmu.h
@@ -23,6 +23,8 @@ extern uint64_t init_ttbr;
 extern void setup_directmap_mappings(unsigned long base_mfn,
                                      unsigned long nr_mfns);
 
+#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
+
 static inline paddr_t __virt_to_maddr(vaddr_t va)
 {
     uint64_t par = va_to_par(va);
@@ -49,6 +51,20 @@ static inline void *maddr_to_virt(paddr_t ma)
 }
 #endif
 
+/* Convert between Xen-heap virtual addresses and page-info structures. */
+static inline struct page_info *virt_to_page(const void *v)
+{
+    unsigned long va = (unsigned long)v;
+    unsigned long pdx;
+
+    ASSERT(va >= XENHEAP_VIRT_START);
+    ASSERT(va < directmap_virt_end);
+
+    pdx = (va - XENHEAP_VIRT_START) >> PAGE_SHIFT;
+    pdx += mfn_to_pdx(directmap_mfn_start);
+    return frame_table + pdx - frametable_base_pdx;
+}
+
 #endif /* __ARCH_ARM_MM_MMU__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/mm_mpu.h b/xen/arch/arm/include/asm/mm_mpu.h
index fe6a828a50..eebd5b5d35 100644
--- a/xen/arch/arm/include/asm/mm_mpu.h
+++ b/xen/arch/arm/include/asm/mm_mpu.h
@@ -9,6 +9,8 @@
  */
 extern void setup_static_mappings(void);
 
+extern struct page_info *frame_table;
+
 static inline paddr_t __virt_to_maddr(vaddr_t va)
 {
     /* In MPU system, VA == PA. */
@@ -22,6 +24,21 @@ static inline void *maddr_to_virt(paddr_t ma)
     return (void *)ma;
 }
 
+/* Convert a virtual address to its page-info structure. */
+static inline struct page_info *virt_to_page(const void *v)
+{
+    unsigned long va = (unsigned long)v;
+    unsigned long pdx;
+
+    /*
+     * In the MPU system VA == PA, so virt_to_maddr() returns the
+     * input address unchanged.
+     */
+    pdx = mfn_to_pdx(maddr_to_mfn(virt_to_maddr(va)));
+
+    return frame_table + pdx - frametable_base_pdx;
+}
+
 #endif /* __ARCH_ARM_MM_MPU__ */
 
 /*
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index f057ee26df..7b282be4fb 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -69,6 +69,8 @@ static DEFINE_SPINLOCK(xen_mpumap_lock);
 
 static paddr_t dtb_paddr;
 
+struct page_info *frame_table;
+
 /* Write a MPU protection region */
 #define WRITE_PROTECTION_REGION(sel, pr, prbar_el2, prlar_el2) ({       \
     uint64_t _sel = sel;                                                \
@@ -564,6 +566,29 @@ void __init setup_static_mappings(void)
     /* TODO: guest memory section, device memory section, boot-module section, etc */
 }
 
+/* Map a frame table to cover physical addresses ps through pe */
+void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
+{
+    mfn_t base_mfn;
+    unsigned long nr_pdxs = mfn_to_pdx(mfn_add(maddr_to_mfn(pe), -1)) -
+                            mfn_to_pdx(maddr_to_mfn(ps)) + 1;
+    unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
+
+    frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
+    frametable_size = ROUNDUP(frametable_size, PAGE_SIZE);
+    /*
+     * Since VA == PA in MPU and we've already setup Xenheap mapping
+     * in setup_staticheap_mappings(), we could easily deduce the
+     * "virtual address" of frame table.
+     */
+    base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 1);
+    frame_table = (struct page_info *)mfn_to_virt(base_mfn);
+
+    memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
+    memset(&frame_table[nr_pdxs], -1,
+           frametable_size - (nr_pdxs * sizeof(struct page_info)));
+}
+
 /* TODO: Implementation on the first usage */
 void dump_hyp_walk(vaddr_t addr)
 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476605.738979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjZ-0003Nc-Cv; Fri, 13 Jan 2023 05:36:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476605.738979; Fri, 13 Jan 2023 05:36:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjZ-0003N8-8e; Fri, 13 Jan 2023 05:36:09 +0000
Received: by outflank-mailman (input) for mailman id 476605;
 Fri, 13 Jan 2023 05:36:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeZ-0005sJ-DT
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:59 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 754da6db-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:58 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DB5B413D5;
 Thu, 12 Jan 2023 21:31:39 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 2914E3F587;
 Thu, 12 Jan 2023 21:30:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 754da6db-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 26/40] xen/mpu: destroy an existing entry in Xen MPU memory mapping table
Date: Fri, 13 Jan 2023 13:28:59 +0800
Message-Id: <20230113052914.3845596-27-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit expands xen_mpumap_update/xen_mpumap_update_entry to include
destroying an existing entry.

We introduce a new helper, control_mpu_region_from_index(), to
enable/disable an MPU region based on its index. If the index falls
within [0, 31], we can quickly toggle the region through PRENR_EL2, which
provides direct access to the PRLAR_EL2.EN bits of the first 32 EL2 MPU
regions.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/arm64/mpu.h     | 20 ++++++
 xen/arch/arm/include/asm/arm64/sysregs.h |  3 +
 xen/arch/arm/mm_mpu.c                    | 77 ++++++++++++++++++++++--
 3 files changed, 95 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index 0044bbf05d..c1dea1c8e9 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -16,6 +16,8 @@
  */
 #define ARM_MAX_MPU_MEMORY_REGIONS 255
 
+#define MPU_PRENR_BITS    32
+
 /* Access permission attributes. */
 /* Read/Write at EL2, No Access at EL1/EL0. */
 #define AP_RW_EL2 0x0
@@ -132,6 +134,24 @@ typedef struct {
     _pr->prlar.reg.en;                                      \
 })
 
+/*
+ * Access to get base address of MPU protection region(pr_t).
+ * The base address shall be zero extended.
+ */
+#define pr_get_base(pr) ({                                  \
+    pr_t *_pr = pr;                                         \
+    (uint64_t)_pr->prbar.reg.base << MPU_REGION_SHIFT;      \
+})
+
+/*
+ * Access to get limit address of MPU protection region(pr_t).
+ * The limit address shall be concatenated with 0x3f.
+ */
+#define pr_get_limit(pr) ({                                        \
+    pr_t *_pr = pr;                                                \
+    (uint64_t)((_pr->prlar.reg.limit << MPU_REGION_SHIFT) | 0x3f); \
+})
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ARM64_MPU_H__ */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index aca9bca5b1..c46daf6f69 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -505,6 +505,9 @@
 /* MPU Type registers encode */
 #define MPUIR_EL2 S3_4_C0_C0_4
 
+/* MPU Protection Region Enable Register encode */
+#define PRENR_EL2 S3_4_C6_C1_1
+
 #endif
 
 /* Access to system registers */
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index d2e19e836c..3a0d110b13 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -385,6 +385,45 @@ static int mpumap_contain_region(pr_t *mpu, uint64_t nr_regions,
     return MPUMAP_REGION_FAILED;
 }
 
+/* Disable or enable EL2 MPU memory region at index #index */
+static void control_mpu_region_from_index(uint64_t index, bool enable)
+{
+    pr_t region;
+
+    access_protection_region(true, &region, NULL, index);
+    if ( (region_is_valid(&region) && enable) ||
+         (!region_is_valid(&region) && !enable) )
+    {
+        printk(XENLOG_WARNING
+               "mpu: MPU memory region[%lu] is already %s\n", index,
+               enable ? "enabled" : "disabled");
+        return;
+    }
+
+    /*
+     * Armv8-R AArch64 provides PRENR_EL2 for direct access to the
+     * PRLAR_EL2.EN bits of EL2 MPU regions 0 to 31.
+     */
+    if ( index < MPU_PRENR_BITS )
+    {
+        uint64_t orig, after;
+
+        orig = READ_SYSREG(PRENR_EL2);
+        if ( enable )
+            /* Set respective bit */
+            after = orig | (1UL << index);
+        else
+            /* Clear respective bit */
+            after = orig & (~(1UL << index));
+        WRITE_SYSREG(after, PRENR_EL2);
+    }
+    else
+    {
+        region.prlar.reg.en = enable ? 1 : 0;
+        access_protection_region(false, NULL, (const pr_t *)&region, index);
+    }
+}
+
 /*
  * Update an entry at the index @idx.
  * @base:  base address
@@ -449,6 +488,30 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
         if ( system_state <= SYS_STATE_active )
             update_boot_xen_mpumap_idx(idx);
     }
+    else
+    {
+        /*
+         * Currently, we only support destroying a *WHOLE* MPU memory region;
+         * part-region removal is not supported, as in the worst case it
+         * would leave two fragments after destruction.
+         * Part-region removal will be introduced only when an actual
+         * use case comes up.
+         */
+        if ( rc == MPUMAP_REGION_INCLUSIVE )
+        {
+            region_printk("mpu: part-region removing is not supported\n");
+            return -EINVAL;
+        }
+
+        /* We are removing the region */
+        if ( rc != MPUMAP_REGION_FOUND )
+            return -EINVAL;
+
+        control_mpu_region_from_index(idx, false);
+
+        /* Clear the corresponding MPU memory region entry. */
+        memset(&xen_mpumap[idx], 0, sizeof(pr_t));
+    }
 
     return 0;
 }
@@ -589,6 +652,15 @@ static void __init map_mpu_memory_section_on_boot(enum mpu_section_info type,
     }
 }
 
+int destroy_xen_mappings(unsigned long s, unsigned long e)
+{
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE));
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE));
+    ASSERT(s <= e);
+
+    return xen_mpumap_update(s, e, 0);
+}
+
 /*
  * System RAM is statically partitioned into different functionality
  * section in Device Tree, including static xenheap, guest memory
@@ -656,11 +728,6 @@ void *ioremap(paddr_t pa, size_t len)
     return NULL;
 }
 
-int destroy_xen_mappings(unsigned long s, unsigned long e)
-{
-    return -ENOSYS;
-}
-
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
 {
     return -ENOSYS;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476606.738986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjZ-0003RQ-O8; Fri, 13 Jan 2023 05:36:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476606.738986; Fri, 13 Jan 2023 05:36:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjZ-0003Qh-He; Fri, 13 Jan 2023 05:36:09 +0000
Received: by outflank-mailman (input) for mailman id 476606;
 Fri, 13 Jan 2023 05:36:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCe1-0005sJ-HA
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 5ee55659-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:20 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 48308FEC;
 Thu, 12 Jan 2023 21:31:02 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8A0113F587;
 Thu, 12 Jan 2023 21:30:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ee55659-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 14/40] xen/arm64: head: Jump to the runtime mapping in enable_mm()
Date: Fri, 13 Jan 2023 13:28:47 +0800
Message-Id: <20230113052914.3845596-15-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

At the moment, on MMU systems, enable_mm() returns to an address in the
1:1 mapping, and each caller is then responsible for switching to the
virtual runtime mapping, after which remove_identity_mapping() is called
to remove all 1:1 mappings.

Since remove_identity_mapping() is not necessary on MPU systems, and we
want to avoid creating an empty stub for MPU while keeping a single code
flow in arm64/head.S, we move the mapping switch and the call to
remove_identity_mapping() into enable_mm() on MMU systems.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/arm64/head.S     | 28 +++++++++++++---------------
 xen/arch/arm/arm64/head_mmu.S | 33 ++++++++++++++++++++++++++++++---
 2 files changed, 43 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index a92883319d..6358305f03 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -258,20 +258,15 @@ real_start_efi:
          * and memory regions for MPU systems.
          */
         bl    prepare_early_mappings
+        /*
+         * Address in the runtime mapping to jump to after the
+         * MMU/MPU is enabled
+         */
+        ldr   lr, =primary_switched
         /* Turn on MMU or MPU */
-        bl    enable_mm
+        b     enable_mm
 
-        /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        ldr   x0, =primary_switched
-        br    x0
 primary_switched:
-        /*
-         * The 1:1 map may clash with other parts of the Xen virtual memory
-         * layout. As it is not used anymore, remove it completely to
-         * avoid having to worry about replacing existing mapping
-         * afterwards.
-         */
-        bl    remove_identity_mapping
         bl    setup_early_uart
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -317,11 +312,14 @@ GLOBAL(init_secondary)
         bl    check_cpu_mode
         bl    cpu_init
         bl    prepare_early_mappings
-        bl    enable_mm
 
-        /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        ldr   x0, =secondary_switched
-        br    x0
+        /*
+         * Address in the runtime mapping to jump to after the
+         * MMU/MPU is enabled
+         */
+        ldr   lr, =secondary_switched
+        b     enable_mm
+
 secondary_switched:
         /*
          * Non-boot CPUs need to move on to the proper pagetables, which were
diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
index a19b7c873d..c9e83bbe2d 100644
--- a/xen/arch/arm/arm64/head_mmu.S
+++ b/xen/arch/arm/arm64/head_mmu.S
@@ -211,9 +211,11 @@ virtphys_clash:
 ENDPROC(prepare_early_mappings)
 
 /*
- * Turn on the Data Cache and the MMU. The function will return on the 1:1
- * mapping. In other word, the caller is responsible to switch to the runtime
- * mapping.
+ * Turn on the Data Cache and the MMU. The function will return
+ * to the virtual address provided in LR (e.g. the runtime mapping).
+ *
+ * Inputs:
+ * lr(x30): Virtual address to return to
  *
  * Clobbers x0 - x3
  */
@@ -238,6 +240,31 @@ ENTRY(enable_mm)
         dsb   sy                     /* Flush PTE writes and finish reads */
         msr   SCTLR_EL2, x0          /* now paging is enabled */
         isb                          /* Now, flush the icache */
+
+        /*
+         * The MMU is turned on and we are in the 1:1 mapping. Switch
+         * to the runtime mapping.
+         */
+        ldr   x0, =1f
+        br    x0
+1:
+        /*
+         * The 1:1 map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to
+         * avoid having to worry about replacing existing mapping
+         * afterwards.
+         *
+         * On return this will jump to the virtual address requested by
+         * the caller
+         */
+        b     remove_identity_mapping
+
+        /*
+         * This point should be unreachable, because the "ret" in
+         * remove_identity_mapping already returns to the address held
+         * in LR. Keep a "ret" here as a safety net in case that "ret"
+         * is removed in the future.
+         */
         ret
 ENDPROC(enable_mm)
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476608.739002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjc-00047g-6P; Fri, 13 Jan 2023 05:36:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476608.739002; Fri, 13 Jan 2023 05:36:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjc-00047Q-0Z; Fri, 13 Jan 2023 05:36:12 +0000
Received: by outflank-mailman (input) for mailman id 476608;
 Fri, 13 Jan 2023 05:36:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeC-0005sP-4U
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:36 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 66628c11-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:30:33 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F12CCFEC;
 Thu, 12 Jan 2023 21:31:14 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3E9B43F587;
 Thu, 12 Jan 2023 21:30:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66628c11-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 18/40] xen/mpu: introduce helper access_protection_region
Date: Fri, 13 Jan 2023 13:28:51 +0800
Message-Id: <20230113052914.3845596-19-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Each EL2 MPU protection region could be configured using PRBAR<n>_EL2 and
PRLAR<n>_EL2.

This commit introduces a new helper, access_protection_region(), to access
an EL2 MPU protection region, covering both read and write operations.

As explained in section G1.3.18 of the reference manual for Armv8-R
AArch64, the system registers PRBAR<n>_EL2 and PRLAR<n>_EL2 provide access
to the EL2 MPU region determined by the value of 'n' and PRSELR_EL2.REGION,
as PRSELR_EL2.REGION<7:4>:n (n = 0, 1, 2, ..., 15).
For example, to access regions 16 to 31:
- Set PRSELR_EL2 to 0b1xxxx
- Region 16 configuration is accessible through PRBAR0_EL2 and PRLAR0_EL2
- Region 17 configuration is accessible through PRBAR1_EL2 and PRLAR1_EL2
- Region 18 configuration is accessible through PRBAR2_EL2 and PRLAR2_EL2
- ...
- Region 31 configuration is accessible through PRBAR15_EL2 and PRLAR15_EL2

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/mm_mpu.c | 151 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 151 insertions(+)

diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index c9e17ab6da..f2b494449c 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -46,6 +46,157 @@ uint64_t __ro_after_init next_transient_region_idx;
 /* Maximum number of supported MPU memory regions by the EL2 MPU. */
 uint64_t __ro_after_init max_xen_mpumap;
 
+/* Write a MPU protection region */
+#define WRITE_PROTECTION_REGION(sel, pr, prbar_el2, prlar_el2) ({       \
+    uint64_t _sel = sel;                                                \
+    const pr_t *_pr = pr;                                               \
+    asm volatile(                                                       \
+        "msr "__stringify(PRSELR_EL2)", %0;" /* Selects the region */   \
+        "dsb sy;"                                                       \
+        "msr "__stringify(prbar_el2)", %1;" /* Write PRBAR<n>_EL2 */    \
+        "msr "__stringify(prlar_el2)", %2;" /* Write PRLAR<n>_EL2 */    \
+        "dsb sy;"                                                       \
+        : : "r" (_sel), "r" (_pr->prbar.bits), "r" (_pr->prlar.bits));  \
+})
+
+/* Read a MPU protection region */
+#define READ_PROTECTION_REGION(sel, prbar_el2, prlar_el2) ({            \
+    uint64_t _sel = sel;                                                \
+    pr_t _pr;                                                           \
+    asm volatile(                                                       \
+        "msr "__stringify(PRSELR_EL2)", %2;" /* Selects the region */   \
+        "dsb sy;"                                                       \
+        "mrs %0, "__stringify(prbar_el2)";" /* Read PRBAR<n>_EL2 */     \
+        "mrs %1, "__stringify(prlar_el2)";" /* Read PRLAR<n>_EL2 */     \
+        "dsb sy;"                                                       \
+        : "=r" (_pr.prbar.bits), "=r" (_pr.prlar.bits) : "r" (_sel));   \
+    _pr;                                                                \
+})
+
+/*
+ * Access an MPU protection region, for both read and write operations.
+ * Armv8-R AArch64 supports at most 255 MPU protection regions.
+ * As described in section G1.3.18 of the reference manual for Armv8-R
+ * AArch64, PRBAR<n>_EL2 and PRLAR<n>_EL2 provide access to the EL2 MPU
+ * region determined by the value of 'n' and PRSELR_EL2.REGION, as
+ * PRSELR_EL2.REGION<7:4>:n (n = 0, 1, 2, ..., 15).
+ * For example, to access regions 16 to 31 (0b10000 to 0b11111):
+ * - Set PRSELR_EL2 to 0b1xxxx
+ * - Region 16 configuration is accessible through PRBAR0_ELx and PRLAR0_ELx
+ * - Region 17 configuration is accessible through PRBAR1_ELx and PRLAR1_ELx
+ * - Region 18 configuration is accessible through PRBAR2_ELx and PRLAR2_ELx
+ * - ...
+ * - Region 31 configuration is accessible through PRBAR15_ELx and PRLAR15_ELx
+ *
+ * @read: if it is read operation.
+ * @pr_read: mpu protection region returned by read op.
+ * @pr_write: const mpu protection region passed through write op.
+ * @sel: mpu protection region selector
+ */
+static void access_protection_region(bool read, pr_t *pr_read,
+                                     const pr_t *pr_write, uint64_t sel)
+{
+    switch ( sel & 0xf )
+    {
+    case 0:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR0_EL2, PRLAR0_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR0_EL2, PRLAR0_EL2);
+        break;
+    case 1:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR1_EL2, PRLAR1_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR1_EL2, PRLAR1_EL2);
+        break;
+    case 2:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR2_EL2, PRLAR2_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR2_EL2, PRLAR2_EL2);
+        break;
+    case 3:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR3_EL2, PRLAR3_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR3_EL2, PRLAR3_EL2);
+        break;
+    case 4:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR4_EL2, PRLAR4_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR4_EL2, PRLAR4_EL2);
+        break;
+    case 5:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR5_EL2, PRLAR5_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR5_EL2, PRLAR5_EL2);
+        break;
+    case 6:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR6_EL2, PRLAR6_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR6_EL2, PRLAR6_EL2);
+        break;
+    case 7:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR7_EL2, PRLAR7_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR7_EL2, PRLAR7_EL2);
+        break;
+    case 8:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR8_EL2, PRLAR8_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR8_EL2, PRLAR8_EL2);
+        break;
+    case 9:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR9_EL2, PRLAR9_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR9_EL2, PRLAR9_EL2);
+        break;
+    case 10:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR10_EL2, PRLAR10_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR10_EL2, PRLAR10_EL2);
+        break;
+    case 11:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR11_EL2, PRLAR11_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR11_EL2, PRLAR11_EL2);
+        break;
+    case 12:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR12_EL2, PRLAR12_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR12_EL2, PRLAR12_EL2);
+        break;
+    case 13:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR13_EL2, PRLAR13_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR13_EL2, PRLAR13_EL2);
+        break;
+    case 14:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR14_EL2, PRLAR14_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR14_EL2, PRLAR14_EL2);
+        break;
+    case 15:
+        if ( read )
+            *pr_read = READ_PROTECTION_REGION(sel, PRBAR15_EL2, PRLAR15_EL2);
+        else
+            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR15_EL2, PRLAR15_EL2);
+        break;
+    }
+}
+
 /* TODO: Implementation on the first usage */
 void dump_hyp_walk(vaddr_t addr)
 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476613.739013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjf-0004gG-JU; Fri, 13 Jan 2023 05:36:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476613.739013; Fri, 13 Jan 2023 05:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjf-0004fj-DW; Fri, 13 Jan 2023 05:36:15 +0000
Received: by outflank-mailman (input) for mailman id 476613;
 Fri, 13 Jan 2023 05:36:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCfF-0005sP-7Y
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8d74ef45-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:38 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 838A813D5;
 Thu, 12 Jan 2023 21:32:20 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id C55C63F587;
 Thu, 12 Jan 2023 21:31:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d74ef45-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 38/40] xen/mpu: implement setup_virt_paging for MPU system
Date: Fri, 13 Jan 2023 13:29:11 +0800
Message-Id: <20230113052914.3845596-39-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For MMU systems, setup_virt_paging() is used to configure stage 2 address
translation, e.g. the number of IPA bits, VMID bits, etc. This function also
initializes the VMID allocator for later VM creation.

In addition to the IPA bits and VMID bits, on MPU systems setup_virt_paging()
is also responsible for determining the default EL1/EL0 translation regime.
Armv8-R AArch64 supports the following memory translation regimes:
- PMSAv8-64 at both EL1/EL0 and EL2
- PMSAv8-64 or VMSAv8-64 at EL1/EL0 and PMSAv8-64 at EL2
The default is VMSAv8-64, unless the platform cannot support it, which is
checked against the MSA_frac field of Memory Model Feature Register 0
(ID_AA64MMFR0_EL1).

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/arm64/sysregs.h |  6 ++
 xen/arch/arm/include/asm/cpufeature.h    |  7 ++
 xen/arch/arm/include/asm/p2m.h           | 18 +++++
 xen/arch/arm/include/asm/processor.h     | 13 ++++
 xen/arch/arm/p2m.c                       | 28 ++++++++
 xen/arch/arm/p2m_mmu.c                   | 38 ----------
 xen/arch/arm/p2m_mpu.c                   | 91 ++++++++++++++++++++++--
 7 files changed, 159 insertions(+), 42 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 9546e8e3d0..7d4f959dae 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -507,6 +507,12 @@
 /* MPU Protection Region Enable Register encode */
 #define PRENR_EL2 S3_4_C6_C1_1
 
+/* Virtualization Secure Translation Control Register */
+#define VSTCR_EL2  S3_4_C2_C6_2
+#define VSTCR_EL2_RES1_SHIFT 31
+#define VSTCR_EL2_SA_SHIFT   30
+#define VSTCR_EL2_SC_SHIFT   20
+
 #endif
 
 #ifdef CONFIG_ARM_SECURE_STATE
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index c62cf6293f..513e5b9918 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -244,6 +244,12 @@ struct cpuinfo_arm {
             unsigned long tgranule_16K:4;
             unsigned long tgranule_64K:4;
             unsigned long tgranule_4K:4;
+#ifdef CONFIG_ARM_V8R
+            unsigned long __res:16;
+            unsigned long msa:4;
+            unsigned long msa_frac:4;
+            unsigned long __res0:8;
+#else
             unsigned long tgranule_16k_2:4;
             unsigned long tgranule_64k_2:4;
             unsigned long tgranule_4k_2:4;
@@ -251,6 +257,7 @@ struct cpuinfo_arm {
             unsigned long __res0:8;
             unsigned long fgt:4;
             unsigned long ecv:4;
+#endif
 
             /* MMFR1 */
             unsigned long hafdbs:4;
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index a430aca232..cd28a9091a 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -14,9 +14,27 @@
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
 
+#define MAX_VMID_8_BIT  (1UL << 8)
+#define MAX_VMID_16_BIT (1UL << 16)
+
+#define INVALID_VMID 0 /* VMID 0 is reserved */
+
+#ifdef CONFIG_ARM_64
+extern unsigned int max_vmid;
+/* VMID is by default 8 bit width on AArch64 */
+#define MAX_VMID       max_vmid
+#else
+/* VMID is always 8 bit width on AArch32 */
+#define MAX_VMID        MAX_VMID_8_BIT
+#endif
+
+extern spinlock_t vmid_alloc_lock;
+extern unsigned long *vmid_mask;
+
 struct domain;
 
 extern void memory_type_changed(struct domain *);
+extern void p2m_vmid_allocator_init(void);
 
 /* Per-p2m-table state */
 struct p2m_domain {
diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 1dd81d7d52..d866421d88 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -388,6 +388,12 @@
 
 #define VTCR_RES1       (_AC(1,UL)<<31)
 
+#ifdef CONFIG_ARM_V8R
+#define VTCR_MSA_VMSA   (_AC(0x1,UL)<<31)
+#define VTCR_MSA_PMSA   ~(_AC(0x1,UL)<<31)
+#define NSA_SEL2        ~(_AC(0x1,UL)<<30)
+#endif
+
 /* HCPTR Hyp. Coprocessor Trap Register */
 #define HCPTR_TAM       ((_AC(1,U)<<30))
 #define HCPTR_TTA       ((_AC(1,U)<<20))        /* Trap trace registers */
@@ -447,6 +453,13 @@
 #define MM64_VMID_16_BITS_SUPPORT   0x2
 #endif
 
+#ifdef CONFIG_ARM_V8R
+#define MM64_MSA_PMSA_SUPPORT       0xf
+#define MM64_MSA_FRAC_NONE_SUPPORT  0x0
+#define MM64_MSA_FRAC_PMSA_SUPPORT  0x1
+#define MM64_MSA_FRAC_VMSA_SUPPORT  0x2
+#endif
+
 #ifndef __ASSEMBLY__
 
 extern register_t __cpu_logical_map[];
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 42f51051e0..0d0063aa2e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -4,6 +4,21 @@
 
 #include <asm/event.h>
 #include <asm/page.h>
+#include <asm/p2m.h>
+
+#ifdef CONFIG_ARM_64
+unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
+#endif
+
+spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * VTTBR_EL2 VMID field is 8 or 16 bits. AArch64 may support 16-bit VMID.
+ * Using a bitmap here limits us to 256 or 65536 (for AArch64) concurrent
+ * domains. The bitmap space will be allocated dynamically based on
+ * whether 8 or 16 bit VMIDs are supported.
+ */
+unsigned long *vmid_mask;
 
 /*
  * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
@@ -142,6 +157,19 @@ void __init p2m_restrict_ipa_bits(unsigned int ipa_bits)
         p2m_ipa_bits = ipa_bits;
 }
 
+void p2m_vmid_allocator_init(void)
+{
+    /*
+     * allocate space for vmid_mask based on MAX_VMID
+     */
+    vmid_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(MAX_VMID));
+
+    if ( !vmid_mask )
+        panic("Could not allocate VMID bitmap space\n");
+
+    set_bit(INVALID_VMID, vmid_mask);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/p2m_mmu.c b/xen/arch/arm/p2m_mmu.c
index 88a9d8f392..7e1afd0bb3 100644
--- a/xen/arch/arm/p2m_mmu.c
+++ b/xen/arch/arm/p2m_mmu.c
@@ -14,20 +14,6 @@
 #include <asm/page.h>
 #include <asm/traps.h>
 
-#define MAX_VMID_8_BIT  (1UL << 8)
-#define MAX_VMID_16_BIT (1UL << 16)
-
-#define INVALID_VMID 0 /* VMID 0 is reserved */
-
-#ifdef CONFIG_ARM_64
-static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
-/* VMID is by default 8 bit width on AArch64 */
-#define MAX_VMID       max_vmid
-#else
-/* VMID is always 8 bit width on AArch32 */
-#define MAX_VMID        MAX_VMID_8_BIT
-#endif
-
 #ifdef CONFIG_ARM_64
 unsigned int __read_mostly p2m_root_order;
 unsigned int __read_mostly p2m_root_level;
@@ -1516,30 +1502,6 @@ static int p2m_alloc_table(struct domain *d)
     return 0;
 }
 
-
-static spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
-
-/*
- * VTTBR_EL2 VMID field is 8 or 16 bits. AArch64 may support 16-bit VMID.
- * Using a bitmap here limits us to 256 or 65536 (for AArch64) concurrent
- * domains. The bitmap space will be allocated dynamically based on
- * whether 8 or 16 bit VMIDs are supported.
- */
-static unsigned long *vmid_mask;
-
-static void p2m_vmid_allocator_init(void)
-{
-    /*
-     * allocate space for vmid_mask based on MAX_VMID
-     */
-    vmid_mask = xzalloc_array(unsigned long, BITS_TO_LONGS(MAX_VMID));
-
-    if ( !vmid_mask )
-        panic("Could not allocate VMID bitmap space\n");
-
-    set_bit(INVALID_VMID, vmid_mask);
-}
-
 static int p2m_alloc_vmid(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
diff --git a/xen/arch/arm/p2m_mpu.c b/xen/arch/arm/p2m_mpu.c
index 0a95d58111..77b4bc9221 100644
--- a/xen/arch/arm/p2m_mpu.c
+++ b/xen/arch/arm/p2m_mpu.c
@@ -2,8 +2,95 @@
 #include <xen/lib.h>
 #include <xen/mm-frame.h>
 #include <xen/sched.h>
+#include <xen/warning.h>
 
 #include <asm/p2m.h>
+#include <asm/processor.h>
+#include <asm/sysregs.h>
+
+void __init setup_virt_paging(void)
+{
+    uint64_t val = 0;
+    bool p2m_vmsa = true;
+
+    /* PA size */
+    const unsigned int pa_range_info[] = { 32, 36, 40, 42, 44, 48, 52, 0, /* Invalid */ };
+
+    /*
+     * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
+     * with IPA bits == PA bits, compare against "pabits".
+     */
+    if ( pa_range_info[system_cpuinfo.mm64.pa_range] < p2m_ipa_bits )
+        p2m_ipa_bits = pa_range_info[system_cpuinfo.mm64.pa_range];
+
+    /* In Armv8-R, the hypervisor runs in Secure EL2. */
+    val &= NSA_SEL2;
+
+    /*
+     * ARMv8-R AArch64 could have the following memory system
+     * configurations:
+     * - PMSAv8-64 at EL1 and EL2
+     * - PMSAv8-64 or VMSAv8-64 at EL1 and PMSAv8-64 at EL2
+     *
+     * In ARMv8-R, the only permitted value is
+     * 0b1111(MM64_MSA_PMSA_SUPPORT).
+     */
+    if ( system_cpuinfo.mm64.msa == MM64_MSA_PMSA_SUPPORT )
+    {
+        if ( system_cpuinfo.mm64.msa_frac == MM64_MSA_FRAC_NONE_SUPPORT )
+            goto fault;
+
+        if ( system_cpuinfo.mm64.msa_frac != MM64_MSA_FRAC_VMSA_SUPPORT )
+        {
+            p2m_vmsa = false;
+            warning_add("No support for VMSAv8-64 at EL1 on this platform.\n");
+        }
+    }
+    else
+        goto fault;
+
+    /*
+     * If the platform supports both PMSAv8-64 and VMSAv8-64 at EL1,
+     * then VTCR_EL2.MSA determines the EL1 memory system
+     * architecture.
+     * By default we set VTCR_EL2.MSA to select VMSAv8-64, unless the
+     * platform only supports PMSAv8-64.
+     */
+    if ( !p2m_vmsa )
+        val &= VTCR_MSA_PMSA;
+    else
+        val |= VTCR_MSA_VMSA;
+
+    /*
+     * cpuinfo sanitization makes sure we support 16bits VMID only if
+     * all cores are supporting it.
+     */
+    if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
+        max_vmid = MAX_VMID_16_BIT;
+
+    /* Set the VS bit only if 16 bit VMID is supported. */
+    if ( MAX_VMID == MAX_VMID_16_BIT )
+        val |= VTCR_VS;
+
+    p2m_vmid_allocator_init();
+
+    WRITE_SYSREG(val, VTCR_EL2);
+
+    /*
+     * All stage 2 translations for the Secure PA space access the
+     * Secure PA space, so we keep SA bit as 0.
+     *
+     * Stage 2 NS configuration is checked against stage 1 NS configuration
+     * in EL1&0 translation regime for the given address, and generate a
+     * fault if they are different. So we set SC bit as 1.
+     */
+    WRITE_SYSREG(1 << VSTCR_EL2_RES1_SHIFT | 1 << VSTCR_EL2_SC_SHIFT, VSTCR_EL2);
+
+    return;
+
+fault:
+    panic("Hardware with no PMSAv8-64 support in any translation regime.\n");
+}
 
 /* TODO: Implement on the first usage */
 void p2m_write_unlock(struct p2m_domain *p2m)
@@ -177,10 +264,6 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
     return NULL;
 }
 
-void __init setup_virt_paging(void)
-{
-}
-
 /*
  * Local variables:
  * mode: C
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476616.739019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjg-0004rw-Bw; Fri, 13 Jan 2023 05:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476616.739019; Fri, 13 Jan 2023 05:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjg-0004pb-5Y; Fri, 13 Jan 2023 05:36:16 +0000
Received: by outflank-mailman (input) for mailman id 476616;
 Fri, 13 Jan 2023 05:36:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCer-0005sP-Th
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:17 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7fe06f02-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:16 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B7A0CFEC;
 Thu, 12 Jan 2023 21:31:57 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id F18763F587;
 Thu, 12 Jan 2023 21:31:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fe06f02-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 31/40] xen/mpu: disable FIXMAP in MPU system
Date: Fri, 13 Jan 2023 13:29:04 +0800
Message-Id: <20230113052914.3845596-32-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

FIXMAP in MMU systems is used for special-purpose 4K mappings, such as
mapping the early UART and temporarily mapping source code for copying
(copy_from_paddr), etc. As there is no VMSA in an MPU system, we do not
support FIXMAP there.

We introduce !CONFIG_HAS_FIXMAP to provide empty stubs for MPU systems.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/Kconfig              |  3 ++-
 xen/arch/arm/include/asm/fixmap.h | 28 +++++++++++++++++++++++++---
 xen/common/Kconfig                |  3 +++
 3 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 9230c8b885..91491341c4 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -13,9 +13,10 @@ config ARM
 	def_bool y
 	select HAS_ALTERNATIVE if !ARM_V8R
 	select HAS_DEVICE_TREE
+	select HAS_FIXMAP if !ARM_V8R
 	select HAS_PASSTHROUGH
 	select HAS_PDX
-	select HAS_PMAP
+	select HAS_PMAP if !ARM_V8R
 	select IOMMU_FORCE_PT_SHARE
 	select HAS_VMAP if !ARM_V8R
 
diff --git a/xen/arch/arm/include/asm/fixmap.h b/xen/arch/arm/include/asm/fixmap.h
index d0c9a52c8c..f0f4eb57ac 100644
--- a/xen/arch/arm/include/asm/fixmap.h
+++ b/xen/arch/arm/include/asm/fixmap.h
@@ -4,9 +4,6 @@
 #ifndef __ASM_FIXMAP_H
 #define __ASM_FIXMAP_H
 
-#include <xen/acpi.h>
-#include <xen/pmap.h>
-
 /* Fixmap slots */
 #define FIXMAP_CONSOLE  0  /* The primary UART */
 #define FIXMAP_MISC     1  /* Ephemeral mappings of hardware */
@@ -22,6 +19,11 @@
 
 #ifndef __ASSEMBLY__
 
+#ifdef CONFIG_HAS_FIXMAP
+
+#include <xen/acpi.h>
+#include <xen/pmap.h>
+
 /*
  * Direct access to xen_fixmap[] should only happen when {set,
  * clear}_fixmap() is unusable (e.g. where we would end up to
@@ -43,6 +45,26 @@ static inline unsigned int virt_to_fix(vaddr_t vaddr)
     return ((vaddr - FIXADDR_START) >> PAGE_SHIFT);
 }
 
+#else /* !CONFIG_HAS_FIXMAP */
+
+static inline void set_fixmap(unsigned int map, mfn_t mfn,
+                              unsigned int attributes)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void clear_fixmap(unsigned int map)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline unsigned int virt_to_fix(vaddr_t vaddr)
+{
+    ASSERT_UNREACHABLE();
+    return -EINVAL;
+}
+#endif /* !CONFIG_HAS_FIXMAP */
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ASM_FIXMAP_H */
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index ba16366a4b..680dc6f59c 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -43,6 +43,9 @@ config HAS_EX_TABLE
 config HAS_FAST_MULTIPLY
 	bool
 
+config HAS_FIXMAP
+	bool
+
 config HAS_IOPORTS
 	bool
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476619.739031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCji-0005Ez-06; Fri, 13 Jan 2023 05:36:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476619.739031; Fri, 13 Jan 2023 05:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjh-0005Dn-O5; Fri, 13 Jan 2023 05:36:17 +0000
Received: by outflank-mailman (input) for mailman id 476619;
 Fri, 13 Jan 2023 05:36:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCez-0005sP-21
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8417391e-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:23 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B3E5C13D5;
 Thu, 12 Jan 2023 21:32:04 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 40C9B3F587;
 Thu, 12 Jan 2023 21:31:20 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8417391e-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 33/40] xen/arm: check mapping status and attributes for MPU copy_from_paddr
Date: Fri, 13 Jan 2023 13:29:06 +0800
Message-Id: <20230113052914.3845596-34-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

map_page_to_xen_misc/unmap_page_from_xen_misc were introduced to temporarily
map a page into the Xen misc area to gain access to it. However, on MPU
systems, all resources are statically configured in the Device Tree and are
already mapped at a very early boot stage.

When enabling map_page_to_xen_misc for copy_from_paddr on MPU systems, we
only need to check whether the given paddr is properly mapped.
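As a rough model of that behaviour, the MPU flavour of the helper degenerates into a containment check plus a direct address translation. A minimal sketch, with a hypothetical helper name and an invented region layout (nothing here is the actual Xen API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t paddr_t;

#define PAGE_SIZE 4096u

/* Hypothetical stand-in for one statically configured MPU region. */
static const struct { paddr_t start, size; } misc_region = {
    0x40000000u, 0x100000u
};

/* Return true iff [pa, pa + len) lies inside the pre-mapped region. */
static bool check_region(paddr_t pa, size_t len)
{
    return pa >= misc_region.start &&
           pa + len <= misc_region.start + misc_region.size;
}

/*
 * MPU flavour of map_page_to_xen_misc: no new mapping is created; we only
 * verify the page is already covered, then return its virtual address
 * (identity-mapped here, for the sketch).
 */
static void *map_page_to_xen_misc_sketch(paddr_t pa)
{
    if ( !check_region(pa & ~(paddr_t)(PAGE_SIZE - 1), PAGE_SIZE) )
        return NULL;

    return (void *)(uintptr_t)pa;
}
```

The matching unmap is a no-op, since nothing was mapped in the first place.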

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/kernel.c |  2 +-
 xen/arch/arm/mm_mpu.c | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index ee7144ec13..ce2b3347d7 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -57,7 +57,7 @@ void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len)
         s = paddr & (PAGE_SIZE - 1);
         l = min(PAGE_SIZE - s, len);
 
-        src = map_page_to_xen_misc(maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC);
+        src = map_page_to_xen_misc(maddr_to_mfn(paddr), DEFINE_ATTRIBUTE(HYPERVISOR_WC));
         ASSERT(src != NULL);
         memcpy(dst, src + s, l);
         clean_dcache_va_range(dst, l);
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 7b54c87acf..0b720004ee 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -790,6 +790,27 @@ void *ioremap(paddr_t pa, size_t len)
     return ioremap_attr(pa, len, REGION_HYPERVISOR_NOCACHE);
 }
 
+/*
+ * On MPU systems, due to the limited number of MPU memory regions, all
+ * resources are statically configured in the Device Tree and mapped at a
+ * very early stage; dynamic temporary page mappings are not allowed.
+ * So in map_page_to_xen_misc, we only need to check whether the page is
+ * already properly mapped with #attributes.
+ */
+void *map_page_to_xen_misc(mfn_t mfn, unsigned int attributes)
+{
+    paddr_t pa = mfn_to_maddr(mfn);
+
+    if ( !check_region_and_attributes(pa, PAGE_SIZE, attributes, "map_to_misc") )
+        return NULL;
+
+    return maddr_to_virt(pa);
+}
+
+void unmap_page_from_xen_misc(void)
+{
+}
+
 /* TODO: Implementation on the first usage */
 void dump_hyp_walk(vaddr_t addr)
 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476620.739037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjj-0005JM-0U; Fri, 13 Jan 2023 05:36:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476620.739037; Fri, 13 Jan 2023 05:36:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCji-0005HE-7S; Fri, 13 Jan 2023 05:36:18 +0000
Received: by outflank-mailman (input) for mailman id 476620;
 Fri, 13 Jan 2023 05:36:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCej-0005sP-Nm
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:09 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7ad5d262-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:07 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 34C54FEC;
 Thu, 12 Jan 2023 21:31:49 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7676F3F587;
 Thu, 12 Jan 2023 21:31:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ad5d262-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 29/40] xen/mpu: introduce mpu_memory_section_contains for address range check
Date: Fri, 13 Jan 2023 13:29:02 +0800
Message-Id: <20230113052914.3845596-30-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We have already introduced "mpu,xxx-memory-section" to limit system/domain
configuration, so we shall add checks to verify the user's configuration.

We shall check that every guest boot module is within the boot module
section, including the kernel module (BOOTMOD_KERNEL), the device tree
passthrough module (BOOTMOD_GUEST_DTB), and the ramdisk module
(BOOTMOD_RAMDISK).

We shall also check that any guest RAM defined through "xen,static-mem" is
within the guest memory section.

The function mpu_memory_section_contains is introduced to perform the above
checks.
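The containment test itself can be sketched as follows. The types are simplified and the bank layout is invented for illustration (on the real system the banks come from the parsed Device Tree, per section type):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct bank { paddr_t start, size; };

/*
 * Hypothetical guest-memory-section banks, as if parsed from an
 * "mpu,guest-memory-section" property in the Device Tree.
 */
static const struct bank guest_banks[] = {
    { 0x60000000u, 0x10000000u },
    { 0x80000000u, 0x08000000u },
};

/* True iff [s, e] is fully contained in one configured bank. */
static bool section_contains(paddr_t s, paddr_t e)
{
    unsigned int i;

    for ( i = 0; i < sizeof(guest_banks) / sizeof(guest_banks[0]); i++ )
    {
        paddr_t bank_start = guest_banks[i].start;
        paddr_t bank_end = bank_start + guest_banks[i].size;

        if ( s >= bank_start && e <= bank_end )
            return true;
    }

    return false;
}
```

Note that a range straddling two adjacent banks is rejected: it must fit entirely within a single bank.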

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/domain_build.c      |  4 ++++
 xen/arch/arm/include/asm/setup.h |  2 ++
 xen/arch/arm/kernel.c            | 18 ++++++++++++++++++
 xen/arch/arm/setup_mpu.c         | 22 ++++++++++++++++++++++
 4 files changed, 46 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 829cea8de8..f48a3f679f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -546,6 +546,10 @@ static mfn_t __init acquire_static_memory_bank(struct domain *d,
                d, *psize);
         return INVALID_MFN;
     }
+#ifdef CONFIG_HAS_MPU
+    if ( !mpu_memory_section_contains(*pbase, *pbase + *psize, MSINFO_GUEST) )
+        return INVALID_MFN;
+#endif
 
     smfn = maddr_to_mfn(*pbase);
     res = acquire_domstatic_pages(d, smfn, PFN_DOWN(*psize), 0);
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 61f24b5848..d4c1336597 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -209,6 +209,8 @@ extern struct mpuinfo mpuinfo;
 
 extern int process_mpuinfo(const void *fdt, int node, uint32_t address_cells,
                            uint32_t size_cells);
+extern bool mpu_memory_section_contains(paddr_t s, paddr_t e,
+                                        enum mpu_section_info type);
 #endif /* CONFIG_HAS_MPU */
 
 #endif
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 0475d8fae7..ee7144ec13 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -467,6 +467,12 @@ int __init kernel_probe(struct kernel_info *info,
                 mod = boot_module_find_by_addr_and_kind(
                         BOOTMOD_KERNEL, kernel_addr);
                 info->kernel_bootmodule = mod;
+#ifdef CONFIG_HAS_MPU
+                if ( !mpu_memory_section_contains(mod->start,
+                                                  mod->start + mod->size,
+                                                  MSINFO_BOOTMODULE) )
+                    return -EINVAL;
+#endif
             }
             else if ( dt_device_is_compatible(node, "multiboot,ramdisk") )
             {
@@ -477,6 +483,12 @@ int __init kernel_probe(struct kernel_info *info,
                 dt_get_range(&val, node, &initrd_addr, &size);
                 info->initrd_bootmodule = boot_module_find_by_addr_and_kind(
                         BOOTMOD_RAMDISK, initrd_addr);
+#ifdef CONFIG_HAS_MPU
+                mod = info->initrd_bootmodule;
+                if ( !mpu_memory_section_contains(mod->start,
+                                                  mod->start + mod->size, MSINFO_BOOTMODULE) )
+                    return -EINVAL;
+#endif
             }
             else if ( dt_device_is_compatible(node, "multiboot,device-tree") )
             {
@@ -489,6 +501,12 @@ int __init kernel_probe(struct kernel_info *info,
                 dt_get_range(&val, node, &dtb_addr, &size);
                 info->dtb_bootmodule = boot_module_find_by_addr_and_kind(
                         BOOTMOD_GUEST_DTB, dtb_addr);
+#ifdef CONFIG_HAS_MPU
+                mod = info->dtb_bootmodule;
+                if ( !mpu_memory_section_contains(mod->start,
+                                                  mod->start + mod->size, MSINFO_BOOTMODULE) )
+                    return -EINVAL;
+#endif
             }
             else
                 continue;
diff --git a/xen/arch/arm/setup_mpu.c b/xen/arch/arm/setup_mpu.c
index 160934bf86..f7d74ea604 100644
--- a/xen/arch/arm/setup_mpu.c
+++ b/xen/arch/arm/setup_mpu.c
@@ -130,6 +130,28 @@ void __init setup_mm(void)
     init_staticmem_pages();
 }
 
+bool __init mpu_memory_section_contains(paddr_t s, paddr_t e,
+                                        enum mpu_section_info type)
+{
+    unsigned int i = 0;
+
+    for ( ; i < mpuinfo.sections[type].nr_banks; i++ )
+    {
+        paddr_t section_start = mpuinfo.sections[type].bank[i].start;
+        paddr_t section_size = mpuinfo.sections[type].bank[i].size;
+        paddr_t section_end = section_start + section_size;
+
+        /* range inclusive */
+        if ( s >= section_start && e <= section_end )
+            return true;
+    }
+
+    printk(XENLOG_ERR
+           "mpu: invalid range configuration 0x%"PRIpaddr" - 0x%"PRIpaddr"; it shall be within %s\n",
+           s, e, mpu_section_info_str[type]);
+    return false;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476623.739042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjj-0005S6-Gb; Fri, 13 Jan 2023 05:36:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476623.739042; Fri, 13 Jan 2023 05:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCji-0005OI-QJ; Fri, 13 Jan 2023 05:36:18 +0000
Received: by outflank-mailman (input) for mailman id 476623;
 Fri, 13 Jan 2023 05:36:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCfI-0005sP-BQ
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:44 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8f4ed25c-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:41 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9D68D13D5;
 Thu, 12 Jan 2023 21:32:23 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id DF3EB3F587;
 Thu, 12 Jan 2023 21:31:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f4ed25c-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 39/40] xen/mpu: re-order xen_mpumap in arch_init_finialize
Date: Fri, 13 Jan 2023 13:29:12 +0800
Message-Id: <20230113052914.3845596-40-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the function init_done, we have finished booting and we do the final
clean-up work, including marking the section .data.ro_after_init read-only,
freeing the init text and init data sections, etc.

On MPU systems, besides the above operations, we also need to re-order the
Xen MPU memory region mapping table (xen_mpumap).

In xen_mpumap, we have two types of MPU memory regions: fixed memory regions
and switching memory regions.
Fixed memory regions are regions which won't change after boot, like the
Xen .text section, while switching regions (i.e. device memory) are regions
that get switched out when the idle vcpu leaves hypervisor mode, and
switched in when the idle vcpu enters hypervisor mode. They were added at
the tail during the boot stage.
To save the trouble of hunting down each switching region in the
time-sensitive context switch, we re-order xen_mpumap to keep the fixed
regions at the front, with the switching ones right behind them.

We define an MPU memory region mapping table (sw_mpumap) to store all
switching regions. After disabling them at their original positions, we
re-enable them at their re-ordered positions.
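The re-ordering amounts to a stable compaction of the region table: fixed regions keep the front slots, switching regions are collected from the tail and re-inserted directly behind them. A minimal sketch under that assumption, with a simplified pr_t where a plain `sw` flag stands in for prlar.reg.sw (names and layout are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_REGIONS 8

/* Simplified MPU region entry: valid bit, needs-switching flag, id. */
typedef struct { bool valid, sw; int id; } pr_t;

/*
 * Compact all switching regions to sit right after the fixed ones,
 * preserving their relative order; returns the new total region count.
 * Assumes fixed regions already occupy slots [0, nr_fixed) and everything
 * beyond is either a switching region or an invalid (empty) slot.
 */
static unsigned int reorder(pr_t *map, unsigned int n, unsigned int nr_fixed)
{
    pr_t sw[MAX_REGIONS];
    unsigned int nr_sw = 0, i;

    /* Collect and disable switching regions wherever they ended up. */
    for ( i = nr_fixed; i < n; i++ )
    {
        if ( map[i].valid && map[i].sw )
        {
            sw[nr_sw++] = map[i];
            memset(&map[i], 0, sizeof(pr_t));
        }
    }

    /* Re-enable them directly behind the fixed regions. */
    for ( i = 0; i < nr_sw; i++ )
        map[nr_fixed + i] = sw[i];

    return nr_fixed + nr_sw;
}
```

After this, context switch only needs the contiguous run of slots starting at nr_fixed, instead of scanning the whole table for the `sw` flag.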

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/include/asm/arm64/mpu.h |   5 ++
 xen/arch/arm/include/asm/mm_mpu.h    |   1 +
 xen/arch/arm/include/asm/setup.h     |   2 +
 xen/arch/arm/mm_mpu.c                | 110 +++++++++++++++++++++++++++
 xen/arch/arm/setup.c                 |  13 +---
 xen/arch/arm/setup_mmu.c             |  16 ++++
 xen/arch/arm/setup_mpu.c             |  20 +++++
 7 files changed, 155 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index b4e50a9a0e..e058f36435 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -155,6 +155,11 @@ typedef struct {
     (uint64_t)((_pr->prlar.reg.limit << MPU_REGION_SHIFT) | 0x3f); \
 })
 
+#define region_needs_switching_on_ctxt(pr) ({               \
+    pr_t *_pr = pr;                                         \
+    _pr->prlar.reg.sw;                                      \
+})
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ARM64_MPU_H__ */
diff --git a/xen/arch/arm/include/asm/mm_mpu.h b/xen/arch/arm/include/asm/mm_mpu.h
index 5aa61c43b6..f8f54eb901 100644
--- a/xen/arch/arm/include/asm/mm_mpu.h
+++ b/xen/arch/arm/include/asm/mm_mpu.h
@@ -10,6 +10,7 @@
  * section by section based on static configuration in Device Tree.
  */
 extern void setup_static_mappings(void);
+extern int reorder_xen_mpumap(void);
 
 extern struct page_info *frame_table;
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index d4c1336597..39cd95553d 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -182,6 +182,8 @@ int map_range_to_domain(const struct dt_device_node *dev,
 
 extern const char __ro_after_init_start[], __ro_after_init_end[];
 
+extern void arch_init_finialize(void);
+
 struct init_info
 {
     /* Pointer to the stack, used by head.S when entering in C */
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 118bb11d1a..434ed872c1 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -80,6 +80,25 @@ static const unsigned int mpu_section_mattr[MSINFO_MAX] = {
 
 extern char __init_data_begin[], __init_end[];
 
+/*
+ * MPU memory mapping table recording regions that need switching in/out
+ * during vcpu context switch.
+ */
+static pr_t *sw_mpumap;
+static uint64_t nr_sw_mpumap;
+
+/*
+ * After reordering, nr_xen_mpumap records number of regions for Xen fixed
+ * memory mapping
+ */
+static uint64_t nr_xen_mpumap;
+
+/*
+ * After reordering, nr_cpu_mpumap records number of EL2 valid
+ * MPU memory regions
+ */
+static uint64_t nr_cpu_mpumap;
+
 /* Write a MPU protection region */
 #define WRITE_PROTECTION_REGION(sel, pr, prbar_el2, prlar_el2) ({       \
     uint64_t _sel = sel;                                                \
@@ -847,6 +866,97 @@ void unmap_page_from_xen_misc(void)
 {
 }
 
+void dump_hyp_mapping(void)
+{
+    uint64_t i = 0;
+    pr_t region;
+
+    for ( i = 0; i < nr_cpu_mpumap; i++ )
+    {
+        access_protection_region(true, &region, NULL, i);
+        printk(XENLOG_INFO
+               "MPU memory region [%lu]: 0x%"PRIpaddr" - 0x%"PRIpaddr".\n",
+               i, pr_get_base(&region), pr_get_limit(&region));
+    }
+}
+
+/* Standard entry to dynamically allocate MPU memory region mapping table. */
+static pr_t *alloc_mpumap(void)
+{
+    pr_t *map;
+
+    /*
+     * An MPU memory region structure (pr_t) takes 16 bytes; even with the
+     * maximum of 255 supported EL2 MPU protection regions, the MPU table
+     * takes up less than 4KB (PAGE_SIZE).
+     */
+    map = alloc_xenheap_pages(0, 0);
+    if ( map == NULL )
+        return NULL;
+
+    clear_page(map);
+    return map;
+}
+
+/*
+ * Switching regions (i.e. device memory) are regions that get switched out
+ * when the idle vcpu leaves hypervisor mode, and switched in when the idle
+ * vcpu enters hypervisor mode. They're added at the tail during the boot
+ * stage. To save the trouble of hunting down each switching region in the
+ * time-sensitive context switch, we re-order xen_mpumap to keep the fixed
+ * regions at the front, with the switching ones right behind them.
+ */
+int reorder_xen_mpumap(void)
+{
+    uint64_t i;
+
+    sw_mpumap = alloc_mpumap();
+    if ( !sw_mpumap )
+        return -ENOMEM;
+
+    /* Record the switching regions in sw_mpumap. */
+    for ( i = next_transient_region_idx - 1; i < max_xen_mpumap; i++ )
+    {
+        pr_t *region;
+
+        region = &xen_mpumap[i];
+        if ( region_is_valid(region) && region_needs_switching_on_ctxt(region) )
+        {
+            sw_mpumap[nr_sw_mpumap++] = xen_mpumap[i];
+
+            /*
+             * Disable it temporarily, to re-enable it later at its
+             * new, re-ordered position.
+             * WARNING: since the device memory section, as a switching
+             * region, gets disabled temporarily, the console will become
+             * inaccessible for a short time.
+             */
+            control_mpu_region_from_index(i, false);
+            memset(&xen_mpumap[i], 0, sizeof(pr_t));
+        }
+    }
+
+    /* Put switching regions after fixed regions */
+    i = 0;
+    nr_cpu_mpumap = nr_xen_mpumap = next_fixed_region_idx;
+    do
+    {
+        access_protection_region(false, NULL,
+                                 (const pr_t*)(&sw_mpumap[i]),
+                                 nr_cpu_mpumap);
+        nr_cpu_mpumap++;
+    } while ( ++i < nr_sw_mpumap );
+
+    /*
+     * Now, xen_mpumap becomes a tight mapping, with fixed region at front and
+     * switching ones after fixed ones.
+     */
+    printk(XENLOG_INFO "Xen EL2 MPU memory region mapping after re-order.\n");
+    dump_hyp_mapping();
+
+    return 0;
+}
+
 /* TODO: Implementation on the first usage */
 void dump_hyp_walk(vaddr_t addr)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 49ba998f68..b21fc4b8e2 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -61,23 +61,12 @@ domid_t __read_mostly max_init_domid;
 
 static __used void init_done(void)
 {
-    int rc;
-
     /* Must be done past setting system_state. */
     unregister_init_virtual_region();
 
     free_init_memory();
 
-    /*
-     * We have finished booting. Mark the section .data.ro_after_init
-     * read-only.
-     */
-    rc = modify_xen_mappings((unsigned long)&__ro_after_init_start,
-                             (unsigned long)&__ro_after_init_end,
-                             PAGE_HYPERVISOR_RO);
-    if ( rc )
-        panic("Unable to mark the .data.ro_after_init section read-only (rc = %d)\n",
-              rc);
+    arch_init_finialize();
 
     startup_cpu_idle_loop();
 }
diff --git a/xen/arch/arm/setup_mmu.c b/xen/arch/arm/setup_mmu.c
index 611a60633e..5b7a5de086 100644
--- a/xen/arch/arm/setup_mmu.c
+++ b/xen/arch/arm/setup_mmu.c
@@ -365,6 +365,22 @@ void __init discard_initial_modules(void)
     remove_early_mappings();
 }
 
+void arch_init_finialize(void)
+{
+    int rc;
+
+    /*
+     * We have finished booting. Mark the section .data.ro_after_init
+     * read-only.
+     */
+    rc = modify_xen_mappings((unsigned long)&__ro_after_init_start,
+                             (unsigned long)&__ro_after_init_end,
+                             PAGE_HYPERVISOR_RO);
+    if ( rc )
+        panic("Unable to mark the .data.ro_after_init section read-only (rc = %d)\n",
+              rc);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/setup_mpu.c b/xen/arch/arm/setup_mpu.c
index f47f1f39ee..b510780cde 100644
--- a/xen/arch/arm/setup_mpu.c
+++ b/xen/arch/arm/setup_mpu.c
@@ -178,6 +178,26 @@ void __init discard_initial_modules(void)
     remove_early_mappings();
 }
 
+void arch_init_finialize(void)
+{
+    int rc;
+
+    /*
+     * We have finished booting. Mark the section .data.ro_after_init
+     * read-only.
+     */
+    rc = modify_xen_mappings((unsigned long)&__ro_after_init_start,
+                             (unsigned long)&__ro_after_init_end,
+                             REGION_HYPERVISOR_RO);
+    if ( rc )
+        panic("mpu: Unable to mark the .data.ro_after_init section read-only (rc = %d)\n",
+              rc);
+
+    rc = reorder_xen_mpumap();
+    if ( rc )
+        panic("mpu: Failed to reorder Xen MPU memory mapping (rc = %d)\n", rc);
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476626.739053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjl-0005wq-Fa; Fri, 13 Jan 2023 05:36:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476626.739053; Fri, 13 Jan 2023 05:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjk-0005sH-Q6; Fri, 13 Jan 2023 05:36:20 +0000
Received: by outflank-mailman (input) for mailman id 476626;
 Fri, 13 Jan 2023 05:36:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeK-0005sJ-LY
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:44 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 6c024867-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:42 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4BB60FEC;
 Thu, 12 Jan 2023 21:31:24 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8D3473F587;
 Thu, 12 Jan 2023 21:30:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c024867-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 21/40] xen/arm: move MMU-specific setup_mm to setup_mmu.c
Date: Fri, 13 Jan 2023 13:28:54 +0800
Message-Id: <20230113052914.3845596-22-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

setup_mm is used by Xen to set up the memory management subsystem, such as
the boot allocator, direct-mapping, xenheap, frametable and static memory
pages. Some components, like the boot allocator, can be inherited seamlessly
on MPU systems, while others, like the xenheap, need a different
implementation, and some, like direct-mapping, cannot be applied to MPU
systems at all.

In this commit, we move setup_mm and its related functions and variables to
setup_mmu.c, in preparation for implementing an MPU version of setup_mm in
future commits.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/Makefile            |   3 +
 xen/arch/arm/include/asm/setup.h |   5 +
 xen/arch/arm/setup.c             | 326 +---------------------------
 xen/arch/arm/setup_mmu.c         | 350 +++++++++++++++++++++++++++++++
 4 files changed, 362 insertions(+), 322 deletions(-)
 create mode 100644 xen/arch/arm/setup_mmu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 21188b207f..adeb17b7ab 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -51,6 +51,9 @@ obj-y += physdev.o
 obj-y += processor.o
 obj-y += psci.o
 obj-y += setup.o
+ifneq ($(CONFIG_HAS_MPU), y)
+obj-y += setup_mmu.o
+endif
 obj-y += shutdown.o
 obj-y += smp.o
 obj-y += smpboot.o
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 4f39a1aa0a..8f353b67f8 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -158,6 +158,11 @@ struct bootcmdline *boot_cmdline_find_by_kind(bootmodule_kind kind);
 struct bootcmdline * boot_cmdline_find_by_name(const char *name);
 const char *boot_module_kind_as_string(bootmodule_kind kind);
 
+extern void init_pdx(void);
+extern void init_staticmem_pages(void);
+extern void populate_boot_allocator(void);
+extern void setup_mm(void);
+
 extern uint32_t hyp_traps_vector[];
 void init_traps(void);
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index d7d200179c..3ebf9e9a5c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -2,7 +2,7 @@
 /*
  * xen/arch/arm/setup.c
  *
- * Early bringup code for an ARMv7-A with virt extensions.
+ * Early bringup code for an ARMv7-A/Armv8-R AArch64 with virt extensions.
  *
  * Tim Deegan <tim@xen.org>
  * Copyright (c) 2011 Citrix Systems.
@@ -57,11 +57,6 @@ struct cpuinfo_arm __read_mostly system_cpuinfo;
 bool __read_mostly acpi_disabled;
 #endif
 
-#ifdef CONFIG_ARM_32
-static unsigned long opt_xenheap_megabytes __initdata;
-integer_param("xenheap_megabytes", opt_xenheap_megabytes);
-#endif
-
 domid_t __read_mostly max_init_domid;
 
 static __used void init_done(void)
@@ -455,138 +450,6 @@ static void * __init relocate_fdt(paddr_t dtb_paddr, size_t dtb_size)
     return fdt;
 }
 
-#ifdef CONFIG_ARM_32
-/*
- * Returns the end address of the highest region in the range s..e
- * with required size and alignment that does not conflict with the
- * modules from first_mod to nr_modules.
- *
- * For non-recursive callers first_mod should normally be 0 (all
- * modules and Xen itself) or 1 (all modules but not Xen).
- */
-static paddr_t __init consider_modules(paddr_t s, paddr_t e,
-                                       uint32_t size, paddr_t align,
-                                       int first_mod)
-{
-    const struct bootmodules *mi = &bootinfo.modules;
-    int i;
-    int nr;
-
-    s = (s+align-1) & ~(align-1);
-    e = e & ~(align-1);
-
-    if ( s > e ||  e - s < size )
-        return 0;
-
-    /* First check the boot modules */
-    for ( i = first_mod; i < mi->nr_mods; i++ )
-    {
-        paddr_t mod_s = mi->module[i].start;
-        paddr_t mod_e = mod_s + mi->module[i].size;
-
-        if ( s < mod_e && mod_s < e )
-        {
-            mod_e = consider_modules(mod_e, e, size, align, i+1);
-            if ( mod_e )
-                return mod_e;
-
-            return consider_modules(s, mod_s, size, align, i+1);
-        }
-    }
-
-    /* Now check any fdt reserved areas. */
-
-    nr = fdt_num_mem_rsv(device_tree_flattened);
-
-    for ( ; i < mi->nr_mods + nr; i++ )
-    {
-        paddr_t mod_s, mod_e;
-
-        if ( fdt_get_mem_rsv(device_tree_flattened,
-                             i - mi->nr_mods,
-                             &mod_s, &mod_e ) < 0 )
-            /* If we can't read it, pretend it doesn't exist... */
-            continue;
-
-        /* fdt_get_mem_rsv returns length */
-        mod_e += mod_s;
-
-        if ( s < mod_e && mod_s < e )
-        {
-            mod_e = consider_modules(mod_e, e, size, align, i+1);
-            if ( mod_e )
-                return mod_e;
-
-            return consider_modules(s, mod_s, size, align, i+1);
-        }
-    }
-
-    /*
-     * i is the current bootmodule we are evaluating, across all
-     * possible kinds of bootmodules.
-     *
-     * When retrieving the corresponding reserved-memory addresses, we
-     * need to index the bootinfo.reserved_mem bank starting from 0, and
-     * only counting the reserved-memory modules. Hence, we need to use
-     * i - nr.
-     */
-    nr += mi->nr_mods;
-    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
-    {
-        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
-        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
-
-        if ( s < r_e && r_s < e )
-        {
-            r_e = consider_modules(r_e, e, size, align, i + 1);
-            if ( r_e )
-                return r_e;
-
-            return consider_modules(s, r_s, size, align, i + 1);
-        }
-    }
-    return e;
-}
-
-/*
- * Find a contiguous region that fits in the static heap region with
- * required size and alignment, and return the end address of the region
- * if found otherwise 0.
- */
-static paddr_t __init fit_xenheap_in_static_heap(uint32_t size, paddr_t align)
-{
-    unsigned int i;
-    paddr_t end = 0, aligned_start, aligned_end;
-    paddr_t bank_start, bank_size, bank_end;
-
-    for ( i = 0 ; i < bootinfo.reserved_mem.nr_banks; i++ )
-    {
-        if ( bootinfo.reserved_mem.bank[i].type != MEMBANK_STATIC_HEAP )
-            continue;
-
-        bank_start = bootinfo.reserved_mem.bank[i].start;
-        bank_size = bootinfo.reserved_mem.bank[i].size;
-        bank_end = bank_start + bank_size;
-
-        if ( bank_size < size )
-            continue;
-
-        aligned_end = bank_end & ~(align - 1);
-        aligned_start = (aligned_end - size) & ~(align - 1);
-
-        if ( aligned_start > bank_start )
-            /*
-             * Allocate the xenheap as high as possible to keep low-memory
-             * available (assuming the admin supplied region below 4GB)
-             * for other use (e.g. domain memory allocation).
-             */
-            end = max(end, aligned_end);
-    }
-
-    return end;
-}
-#endif
-
 /*
  * Return the end of the non-module region starting at s. In other
  * words return s the start of the next modules after s.
@@ -621,7 +484,7 @@ static paddr_t __init next_module(paddr_t s, paddr_t *end)
     return lowest;
 }
 
-static void __init init_pdx(void)
+void __init init_pdx(void)
 {
     paddr_t bank_start, bank_size, bank_end;
 
@@ -666,7 +529,7 @@ static void __init init_pdx(void)
 }
 
 /* Static memory initialization */
-static void __init init_staticmem_pages(void)
+void __init init_staticmem_pages(void)
 {
 #ifdef CONFIG_STATIC_MEMORY
     unsigned int bank;
@@ -700,7 +563,7 @@ static void __init init_staticmem_pages(void)
  * allocator with the corresponding regions only, but with Xenheap excluded
  * on arm32.
  */
-static void __init populate_boot_allocator(void)
+void __init populate_boot_allocator(void)
 {
     unsigned int i;
     const struct meminfo *banks = &bootinfo.mem;
@@ -769,187 +632,6 @@ static void __init populate_boot_allocator(void)
     }
 }
 
-#ifdef CONFIG_ARM_32
-static void __init setup_mm(void)
-{
-    paddr_t ram_start, ram_end, ram_size, e, bank_start, bank_end, bank_size;
-    paddr_t static_heap_end = 0, static_heap_size = 0;
-    unsigned long heap_pages, xenheap_pages, domheap_pages;
-    unsigned int i;
-    const uint32_t ctr = READ_CP32(CTR);
-
-    if ( !bootinfo.mem.nr_banks )
-        panic("No memory bank\n");
-
-    /* We only supports instruction caches implementing the IVIPT extension. */
-    if ( ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) == ICACHE_POLICY_AIVIVT )
-        panic("AIVIVT instruction cache not supported\n");
-
-    init_pdx();
-
-    ram_start = bootinfo.mem.bank[0].start;
-    ram_size  = bootinfo.mem.bank[0].size;
-    ram_end   = ram_start + ram_size;
-
-    for ( i = 1; i < bootinfo.mem.nr_banks; i++ )
-    {
-        bank_start = bootinfo.mem.bank[i].start;
-        bank_size = bootinfo.mem.bank[i].size;
-        bank_end = bank_start + bank_size;
-
-        ram_size  = ram_size + bank_size;
-        ram_start = min(ram_start,bank_start);
-        ram_end   = max(ram_end,bank_end);
-    }
-
-    total_pages = ram_size >> PAGE_SHIFT;
-
-    if ( bootinfo.static_heap )
-    {
-        for ( i = 0 ; i < bootinfo.reserved_mem.nr_banks; i++ )
-        {
-            if ( bootinfo.reserved_mem.bank[i].type != MEMBANK_STATIC_HEAP )
-                continue;
-
-            bank_start = bootinfo.reserved_mem.bank[i].start;
-            bank_size = bootinfo.reserved_mem.bank[i].size;
-            bank_end = bank_start + bank_size;
-
-            static_heap_size += bank_size;
-            static_heap_end = max(static_heap_end, bank_end);
-        }
-
-        heap_pages = static_heap_size >> PAGE_SHIFT;
-    }
-    else
-        heap_pages = total_pages;
-
-    /*
-     * If the user has not requested otherwise via the command line
-     * then locate the xenheap using these constraints:
-     *
-     *  - must be contiguous
-     *  - must be 32 MiB aligned
-     *  - must not include Xen itself or the boot modules
-     *  - must be at most 1GB or 1/32 the total RAM in the system (or static
-          heap if enabled) if less
-     *  - must be at least 32M
-     *
-     * We try to allocate the largest xenheap possible within these
-     * constraints.
-     */
-    if ( opt_xenheap_megabytes )
-        xenheap_pages = opt_xenheap_megabytes << (20-PAGE_SHIFT);
-    else
-    {
-        xenheap_pages = (heap_pages/32 + 0x1fffUL) & ~0x1fffUL;
-        xenheap_pages = max(xenheap_pages, 32UL<<(20-PAGE_SHIFT));
-        xenheap_pages = min(xenheap_pages, 1UL<<(30-PAGE_SHIFT));
-    }
-
-    do
-    {
-        e = bootinfo.static_heap ?
-            fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)) :
-            consider_modules(ram_start, ram_end,
-                             pfn_to_paddr(xenheap_pages),
-                             32<<20, 0);
-        if ( e )
-            break;
-
-        xenheap_pages >>= 1;
-    } while ( !opt_xenheap_megabytes && xenheap_pages > 32<<(20-PAGE_SHIFT) );
-
-    if ( ! e )
-        panic("Not enough space for xenheap\n");
-
-    domheap_pages = heap_pages - xenheap_pages;
-
-    printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages%s)\n",
-           e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages,
-           opt_xenheap_megabytes ? ", from command-line" : "");
-    printk("Dom heap: %lu pages\n", domheap_pages);
-
-    /*
-     * We need some memory to allocate the page-tables used for the
-     * directmap mappings. So populate the boot allocator first.
-     *
-     * This requires us to set directmap_mfn_{start, end} first so the
-     * direct-mapped Xenheap region can be avoided.
-     */
-    directmap_mfn_start = _mfn((e >> PAGE_SHIFT) - xenheap_pages);
-    directmap_mfn_end = mfn_add(directmap_mfn_start, xenheap_pages);
-
-    populate_boot_allocator();
-
-    setup_directmap_mappings(mfn_x(directmap_mfn_start), xenheap_pages);
-
-    /* Frame table covers all of RAM region, including holes */
-    setup_frametable_mappings(ram_start, ram_end);
-    max_page = PFN_DOWN(ram_end);
-
-    /*
-     * The allocators may need to use map_domain_page() (such as for
-     * scrubbing pages). So we need to prepare the domheap area first.
-     */
-    if ( !init_domheap_mappings(smp_processor_id()) )
-        panic("CPU%u: Unable to prepare the domheap page-tables\n",
-              smp_processor_id());
-
-    /* Add xenheap memory that was not already added to the boot allocator. */
-    init_xenheap_pages(mfn_to_maddr(directmap_mfn_start),
-                       mfn_to_maddr(directmap_mfn_end));
-
-    init_staticmem_pages();
-}
-#else /* CONFIG_ARM_64 */
-static void __init setup_mm(void)
-{
-    const struct meminfo *banks = &bootinfo.mem;
-    paddr_t ram_start = INVALID_PADDR;
-    paddr_t ram_end = 0;
-    paddr_t ram_size = 0;
-    unsigned int i;
-
-    init_pdx();
-
-    /*
-     * We need some memory to allocate the page-tables used for the directmap
-     * mappings. But some regions may contain memory already allocated
-     * for other uses (e.g. modules, reserved-memory...).
-     *
-     * For simplicity, add all the free regions in the boot allocator.
-     */
-    populate_boot_allocator();
-
-    total_pages = 0;
-
-    for ( i = 0; i < banks->nr_banks; i++ )
-    {
-        const struct membank *bank = &banks->bank[i];
-        paddr_t bank_end = bank->start + bank->size;
-
-        ram_size = ram_size + bank->size;
-        ram_start = min(ram_start, bank->start);
-        ram_end = max(ram_end, bank_end);
-
-        setup_directmap_mappings(PFN_DOWN(bank->start),
-                                 PFN_DOWN(bank->size));
-    }
-
-    total_pages += ram_size >> PAGE_SHIFT;
-
-    directmap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
-    directmap_mfn_start = maddr_to_mfn(ram_start);
-    directmap_mfn_end = maddr_to_mfn(ram_end);
-
-    setup_frametable_mappings(ram_start, ram_end);
-    max_page = PFN_DOWN(ram_end);
-
-    init_staticmem_pages();
-}
-#endif
-
 static bool __init is_dom0less_mode(void)
 {
     struct bootmodules *mods = &bootinfo.modules;
diff --git a/xen/arch/arm/setup_mmu.c b/xen/arch/arm/setup_mmu.c
new file mode 100644
index 0000000000..7e5d87f8bd
--- /dev/null
+++ b/xen/arch/arm/setup_mmu.c
@@ -0,0 +1,350 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/arch/arm/setup_mmu.c
+ *
+ * Early bringup code for an ARMv7-A with virt extensions.
+ *
+ * Tim Deegan <tim@xen.org>
+ * Copyright (c) 2011 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/init.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/mm.h>
+#include <xen/param.h>
+#include <xen/pfn.h>
+#include <asm/page.h>
+#include <asm/setup.h>
+
+#ifdef CONFIG_ARM_32
+static unsigned long opt_xenheap_megabytes __initdata;
+integer_param("xenheap_megabytes", opt_xenheap_megabytes);
+
+/*
+ * Returns the end address of the highest region in the range s..e
+ * with required size and alignment that does not conflict with the
+ * modules from first_mod to nr_modules.
+ *
+ * For non-recursive callers first_mod should normally be 0 (all
+ * modules and Xen itself) or 1 (all modules but not Xen).
+ */
+static paddr_t __init consider_modules(paddr_t s, paddr_t e,
+                                       uint32_t size, paddr_t align,
+                                       int first_mod)
+{
+    const struct bootmodules *mi = &bootinfo.modules;
+    int i;
+    int nr;
+
+    s = (s+align-1) & ~(align-1);
+    e = e & ~(align-1);
+
+    if ( s > e ||  e - s < size )
+        return 0;
+
+    /* First check the boot modules */
+    for ( i = first_mod; i < mi->nr_mods; i++ )
+    {
+        paddr_t mod_s = mi->module[i].start;
+        paddr_t mod_e = mod_s + mi->module[i].size;
+
+        if ( s < mod_e && mod_s < e )
+        {
+            mod_e = consider_modules(mod_e, e, size, align, i+1);
+            if ( mod_e )
+                return mod_e;
+
+            return consider_modules(s, mod_s, size, align, i+1);
+        }
+    }
+
+    /* Now check any fdt reserved areas. */
+
+    nr = fdt_num_mem_rsv(device_tree_flattened);
+
+    for ( ; i < mi->nr_mods + nr; i++ )
+    {
+        paddr_t mod_s, mod_e;
+
+        if ( fdt_get_mem_rsv(device_tree_flattened,
+                             i - mi->nr_mods,
+                             &mod_s, &mod_e ) < 0 )
+            /* If we can't read it, pretend it doesn't exist... */
+            continue;
+
+        /* fdt_get_mem_rsv returns length */
+        mod_e += mod_s;
+
+        if ( s < mod_e && mod_s < e )
+        {
+            mod_e = consider_modules(mod_e, e, size, align, i+1);
+            if ( mod_e )
+                return mod_e;
+
+            return consider_modules(s, mod_s, size, align, i+1);
+        }
+    }
+
+    /*
+     * i is the current bootmodule we are evaluating, across all
+     * possible kinds of bootmodules.
+     *
+     * When retrieving the corresponding reserved-memory addresses, we
+     * need to index the bootinfo.reserved_mem bank starting from 0, and
+     * only counting the reserved-memory modules. Hence, we need to use
+     * i - nr.
+     */
+    nr += mi->nr_mods;
+    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
+    {
+        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
+        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
+
+        if ( s < r_e && r_s < e )
+        {
+            r_e = consider_modules(r_e, e, size, align, i + 1);
+            if ( r_e )
+                return r_e;
+
+            return consider_modules(s, r_s, size, align, i + 1);
+        }
+    }
+    return e;
+}
+
+/*
+ * Find a contiguous region that fits in the static heap region with
+ * required size and alignment, and return the end address of the region
+ * if found otherwise 0.
+ */
+static paddr_t __init fit_xenheap_in_static_heap(uint32_t size, paddr_t align)
+{
+    unsigned int i;
+    paddr_t end = 0, aligned_start, aligned_end;
+    paddr_t bank_start, bank_size, bank_end;
+
+    for ( i = 0 ; i < bootinfo.reserved_mem.nr_banks; i++ )
+    {
+        if ( bootinfo.reserved_mem.bank[i].type != MEMBANK_STATIC_HEAP )
+            continue;
+
+        bank_start = bootinfo.reserved_mem.bank[i].start;
+        bank_size = bootinfo.reserved_mem.bank[i].size;
+        bank_end = bank_start + bank_size;
+
+        if ( bank_size < size )
+            continue;
+
+        aligned_end = bank_end & ~(align - 1);
+        aligned_start = (aligned_end - size) & ~(align - 1);
+
+        if ( aligned_start > bank_start )
+            /*
+             * Allocate the xenheap as high as possible to keep low-memory
+             * available (assuming the admin supplied region below 4GB)
+             * for other use (e.g. domain memory allocation).
+             */
+            end = max(end, aligned_end);
+    }
+
+    return end;
+}
+
+void __init setup_mm(void)
+{
+    paddr_t ram_start, ram_end, ram_size, e, bank_start, bank_end, bank_size;
+    paddr_t static_heap_end = 0, static_heap_size = 0;
+    unsigned long heap_pages, xenheap_pages, domheap_pages;
+    unsigned int i;
+    const uint32_t ctr = READ_CP32(CTR);
+
+    if ( !bootinfo.mem.nr_banks )
+        panic("No memory bank\n");
+
+    /* We only supports instruction caches implementing the IVIPT extension. */
+    if ( ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) == ICACHE_POLICY_AIVIVT )
+        panic("AIVIVT instruction cache not supported\n");
+
+    init_pdx();
+
+    ram_start = bootinfo.mem.bank[0].start;
+    ram_size  = bootinfo.mem.bank[0].size;
+    ram_end   = ram_start + ram_size;
+
+    for ( i = 1; i < bootinfo.mem.nr_banks; i++ )
+    {
+        bank_start = bootinfo.mem.bank[i].start;
+        bank_size = bootinfo.mem.bank[i].size;
+        bank_end = bank_start + bank_size;
+
+        ram_size  = ram_size + bank_size;
+        ram_start = min(ram_start,bank_start);
+        ram_end   = max(ram_end,bank_end);
+    }
+
+    total_pages = ram_size >> PAGE_SHIFT;
+
+    if ( bootinfo.static_heap )
+    {
+        for ( i = 0 ; i < bootinfo.reserved_mem.nr_banks; i++ )
+        {
+            if ( bootinfo.reserved_mem.bank[i].type != MEMBANK_STATIC_HEAP )
+                continue;
+
+            bank_start = bootinfo.reserved_mem.bank[i].start;
+            bank_size = bootinfo.reserved_mem.bank[i].size;
+            bank_end = bank_start + bank_size;
+
+            static_heap_size += bank_size;
+            static_heap_end = max(static_heap_end, bank_end);
+        }
+
+        heap_pages = static_heap_size >> PAGE_SHIFT;
+    }
+    else
+        heap_pages = total_pages;
+
+    /*
+     * If the user has not requested otherwise via the command line
+     * then locate the xenheap using these constraints:
+     *
+     *  - must be contiguous
+     *  - must be 32 MiB aligned
+     *  - must not include Xen itself or the boot modules
+     *  - must be at most 1GB or 1/32 the total RAM in the system (or static
+          heap if enabled) if less
+     *  - must be at least 32M
+     *
+     * We try to allocate the largest xenheap possible within these
+     * constraints.
+     */
+    if ( opt_xenheap_megabytes )
+        xenheap_pages = opt_xenheap_megabytes << (20-PAGE_SHIFT);
+    else
+    {
+        xenheap_pages = (heap_pages/32 + 0x1fffUL) & ~0x1fffUL;
+        xenheap_pages = max(xenheap_pages, 32UL<<(20-PAGE_SHIFT));
+        xenheap_pages = min(xenheap_pages, 1UL<<(30-PAGE_SHIFT));
+    }
+
+    do
+    {
+        e = bootinfo.static_heap ?
+            fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)) :
+            consider_modules(ram_start, ram_end,
+                             pfn_to_paddr(xenheap_pages),
+                             32<<20, 0);
+        if ( e )
+            break;
+
+        xenheap_pages >>= 1;
+    } while ( !opt_xenheap_megabytes && xenheap_pages > 32<<(20-PAGE_SHIFT) );
+
+    if ( ! e )
+        panic("Not enough space for xenheap\n");
+
+    domheap_pages = heap_pages - xenheap_pages;
+
+    printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages%s)\n",
+           e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages,
+           opt_xenheap_megabytes ? ", from command-line" : "");
+    printk("Dom heap: %lu pages\n", domheap_pages);
+
+    /*
+     * We need some memory to allocate the page-tables used for the
+     * directmap mappings. So populate the boot allocator first.
+     *
+     * This requires us to set directmap_mfn_{start, end} first so the
+     * direct-mapped Xenheap region can be avoided.
+     */
+    directmap_mfn_start = _mfn((e >> PAGE_SHIFT) - xenheap_pages);
+    directmap_mfn_end = mfn_add(directmap_mfn_start, xenheap_pages);
+
+    populate_boot_allocator();
+
+    setup_directmap_mappings(mfn_x(directmap_mfn_start), xenheap_pages);
+
+    /* Frame table covers all of RAM region, including holes */
+    setup_frametable_mappings(ram_start, ram_end);
+    max_page = PFN_DOWN(ram_end);
+
+    /*
+     * The allocators may need to use map_domain_page() (such as for
+     * scrubbing pages). So we need to prepare the domheap area first.
+     */
+    if ( !init_domheap_mappings(smp_processor_id()) )
+        panic("CPU%u: Unable to prepare the domheap page-tables\n",
+              smp_processor_id());
+
+    /* Add xenheap memory that was not already added to the boot allocator. */
+    init_xenheap_pages(mfn_to_maddr(directmap_mfn_start),
+                       mfn_to_maddr(directmap_mfn_end));
+
+    init_staticmem_pages();
+}
+#else /* CONFIG_ARM_64 */
+void __init setup_mm(void)
+{
+    const struct meminfo *banks = &bootinfo.mem;
+    paddr_t ram_start = INVALID_PADDR;
+    paddr_t ram_end = 0;
+    paddr_t ram_size = 0;
+    unsigned int i;
+
+    init_pdx();
+
+    /*
+     * We need some memory to allocate the page-tables used for the directmap
+     * mappings. But some regions may contain memory already allocated
+     * for other uses (e.g. modules, reserved-memory...).
+     *
+     * For simplicity, add all the free regions in the boot allocator.
+     */
+    populate_boot_allocator();
+
+    total_pages = 0;
+
+    for ( i = 0; i < banks->nr_banks; i++ )
+    {
+        const struct membank *bank = &banks->bank[i];
+        paddr_t bank_end = bank->start + bank->size;
+
+        ram_size = ram_size + bank->size;
+        ram_start = min(ram_start, bank->start);
+        ram_end = max(ram_end, bank_end);
+
+        setup_directmap_mappings(PFN_DOWN(bank->start),
+                                 PFN_DOWN(bank->size));
+    }
+
+    total_pages += ram_size >> PAGE_SHIFT;
+
+    directmap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
+    directmap_mfn_start = maddr_to_mfn(ram_start);
+    directmap_mfn_end = maddr_to_mfn(ram_end);
+
+    setup_frametable_mappings(ram_start, ram_end);
+    max_page = PFN_DOWN(ram_end);
+
+    init_staticmem_pages();
+}
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476632.739078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjp-000782-IB; Fri, 13 Jan 2023 05:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476632.739078; Fri, 13 Jan 2023 05:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjp-00076b-A9; Fri, 13 Jan 2023 05:36:25 +0000
Received: by outflank-mailman (input) for mailman id 476632;
 Fri, 13 Jan 2023 05:36:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCds-0005sP-RV
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:16 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 5b2e49e9-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:30:14 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 149F913D5;
 Thu, 12 Jan 2023 21:30:56 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 5679A3F587;
 Thu, 12 Jan 2023 21:30:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b2e49e9-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 12/40] xen/mpu: introduce helpers for MPU enablement
Date: Fri, 13 Jan 2023 13:28:45 +0800
Message-Id: <20230113052914.3845596-13-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We need a new helper for Xen to enable the MPU at boot time.
The new helper is semantically consistent with the original enable_mmu.

If the Background region is enabled, the MPU uses the default memory map
as the Background region for generating memory attributes when the MPU
is disabled. Since the default memory map of the Armv8-R AArch64
architecture is IMPLEMENTATION DEFINED, we always turn off the
Background region.

In this patch, we also introduce a neutral name, enable_mm, for Xen to
enable the MMU/MPU. This helps us keep a single code flow in head.S.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/arm64/head.S     |  5 +++--
 xen/arch/arm/arm64/head_mmu.S |  4 ++--
 xen/arch/arm/arm64/head_mpu.S | 19 +++++++++++++++++++
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 145e3d53dc..7f3f973468 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -258,7 +258,8 @@ real_start_efi:
          * and memory regions for MPU systems.
          */
         bl    prepare_early_mappings
-        bl    enable_mmu
+        /* Turn on MMU or MPU */
+        bl    enable_mm
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
         ldr   x0, =primary_switched
@@ -316,7 +317,7 @@ GLOBAL(init_secondary)
         bl    check_cpu_mode
         bl    cpu_init
         bl    prepare_early_mappings
-        bl    enable_mmu
+        bl    enable_mm
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
         ldr   x0, =secondary_switched
diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
index 2346f755df..b59c40495f 100644
--- a/xen/arch/arm/arm64/head_mmu.S
+++ b/xen/arch/arm/arm64/head_mmu.S
@@ -217,7 +217,7 @@ ENDPROC(prepare_early_mappings)
  *
  * Clobbers x0 - x3
  */
-ENTRY(enable_mmu)
+ENTRY(enable_mm)
         PRINT("- Turning on paging -\r\n")
 
         /*
@@ -239,7 +239,7 @@ ENTRY(enable_mmu)
         msr   SCTLR_EL2, x0          /* now paging is enabled */
         isb                          /* Now, flush the icache */
         ret
-ENDPROC(enable_mmu)
+ENDPROC(enable_mm)
 
 /*
  * Remove the 1:1 map from the page-tables. It is not easy to keep track
diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
index 0b97ce4646..e2ac69b0cc 100644
--- a/xen/arch/arm/arm64/head_mpu.S
+++ b/xen/arch/arm/arm64/head_mpu.S
@@ -315,6 +315,25 @@ ENDPROC(prepare_early_mappings)
 
 GLOBAL(_end_boot)
 
+/*
+ * Enable EL2 MPU and data cache
+ * If the Background region is enabled, then the MPU uses the default memory
+ * map as the Background region for generating the memory
+ * attributes when MPU is disabled.
+ * Since the default memory map of the Armv8-R AArch64 architecture is
+ * IMPLEMENTATION DEFINED, we intend to turn off the Background region here.
+ */
+ENTRY(enable_mm)
+    mrs   x0, SCTLR_EL2
+    orr   x0, x0, #SCTLR_Axx_ELx_M    /* Enable MPU */
+    orr   x0, x0, #SCTLR_Axx_ELx_C    /* Enable D-cache */
+    orr   x0, x0, #SCTLR_Axx_ELx_WXN  /* Enable WXN */
+    dsb   sy
+    msr   SCTLR_EL2, x0
+    isb
+    ret
+ENDPROC(enable_mm)
+
 /*
  * Local variables:
  * mode: ASM
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476633.739083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjq-0007Ix-FE; Fri, 13 Jan 2023 05:36:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476633.739083; Fri, 13 Jan 2023 05:36:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjq-0007Gx-5P; Fri, 13 Jan 2023 05:36:26 +0000
Received: by outflank-mailman (input) for mailman id 476633;
 Fri, 13 Jan 2023 05:36:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCen-0005sJ-3P
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:13 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 7d730806-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:31:12 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 95A01FEC;
 Thu, 12 Jan 2023 21:31:53 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 90BF33F587;
 Thu, 12 Jan 2023 21:31:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d730806-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 30/40] xen/mpu: disable VMAP sub-system for MPU systems
Date: Fri, 13 Jan 2023 13:29:03 +0800
Message-Id: <20230113052914.3845596-31-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On MMU systems, VMAP is used to remap a range of normal or device
memory to another virtual address with new attributes for a specific
purpose, such as the ALTERNATIVE feature. Since MPU systems have no
virtual address translation, VMAP cannot be supported there.

This patch therefore disables VMAP on MPU systems. Features that
depend on VMAP, such as ALTERNATIVE and the CPU errata framework,
are disabled at the same time.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/Kconfig                   |  3 +-
 xen/arch/arm/Makefile                  |  2 +-
 xen/arch/arm/include/asm/alternative.h | 15 +++++
 xen/arch/arm/include/asm/cpuerrata.h   | 12 ++++
 xen/arch/arm/setup.c                   |  7 +++
 xen/arch/x86/Kconfig                   |  1 +
 xen/common/Kconfig                     |  3 +
 xen/common/Makefile                    |  2 +-
 xen/include/xen/vmap.h                 | 81 ++++++++++++++++++++++++--
 9 files changed, 119 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index c6b6b612d1..9230c8b885 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -11,12 +11,13 @@ config ARM_64
 
 config ARM
 	def_bool y
-	select HAS_ALTERNATIVE
+	select HAS_ALTERNATIVE if !ARM_V8R
 	select HAS_DEVICE_TREE
 	select HAS_PASSTHROUGH
 	select HAS_PDX
 	select HAS_PMAP
 	select IOMMU_FORCE_PT_SHARE
+	select HAS_VMAP if !ARM_V8R
 
 config ARCH_DEFCONFIG
 	string
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 23dfbc3333..c949661590 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_HAS_VPCI) += vpci.o
 
 obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
 obj-y += bootfdt.init.o
-obj-y += cpuerrata.o
+obj-$(CONFIG_HAS_ALTERNATIVE) += cpuerrata.o
 obj-y += cpufeature.o
 obj-y += decode.o
 obj-y += device.o
diff --git a/xen/arch/arm/include/asm/alternative.h b/xen/arch/arm/include/asm/alternative.h
index 1eb4b60fbb..bc23d1d34f 100644
--- a/xen/arch/arm/include/asm/alternative.h
+++ b/xen/arch/arm/include/asm/alternative.h
@@ -8,6 +8,7 @@
 
 #ifndef __ASSEMBLY__
 
+#include <xen/errno.h>
 #include <xen/types.h>
 #include <xen/stringify.h>
 
@@ -28,8 +29,22 @@ typedef void (*alternative_cb_t)(const struct alt_instr *alt,
 				 const uint32_t *origptr, uint32_t *updptr,
 				 int nr_inst);
 
+#ifdef CONFIG_HAS_ALTERNATIVE
 void apply_alternatives_all(void);
 int apply_alternatives(const struct alt_instr *start, const struct alt_instr *end);
+#else
+static inline void apply_alternatives_all(void)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline int apply_alternatives(const struct alt_instr *start,
+                                     const struct alt_instr *end)
+{
+    ASSERT_UNREACHABLE();
+    return -EINVAL;
+}
+#endif /* !CONFIG_HAS_ALTERNATIVE */
 
 #define ALTINSTR_ENTRY(feature, cb)					      \
 	" .word 661b - .\n"				/* label           */ \
diff --git a/xen/arch/arm/include/asm/cpuerrata.h b/xen/arch/arm/include/asm/cpuerrata.h
index 8d7e7b9375..5d97f33763 100644
--- a/xen/arch/arm/include/asm/cpuerrata.h
+++ b/xen/arch/arm/include/asm/cpuerrata.h
@@ -4,8 +4,20 @@
 #include <asm/cpufeature.h>
 #include <asm/alternative.h>
 
+#ifdef CONFIG_HAS_ALTERNATIVE
 void check_local_cpu_errata(void);
 void enable_errata_workarounds(void);
+#else
+static inline void check_local_cpu_errata(void)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void enable_errata_workarounds(void)
+{
+    ASSERT_UNREACHABLE();
+}
+#endif /* !CONFIG_HAS_ALTERNATIVE */
 
 #define CHECK_WORKAROUND_HELPER(erratum, feature, arch)         \
 static inline bool check_workaround_##erratum(void)             \
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 3ebf9e9a5c..0eac33e68c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -721,7 +721,9 @@ void __init start_xen(unsigned long boot_phys_offset,
      */
     system_state = SYS_STATE_boot;
 
+#ifdef CONFIG_HAS_VMAP
     vm_init();
+#endif
 
     if ( acpi_disabled )
     {
@@ -753,11 +755,13 @@ void __init start_xen(unsigned long boot_phys_offset,
     nr_cpu_ids = smp_get_max_cpus();
     printk(XENLOG_INFO "SMP: Allowing %u CPUs\n", nr_cpu_ids);
 
+#ifdef CONFIG_HAS_ALTERNATIVE
     /*
      * Some errata relies on SMCCC version which is detected by psci_init()
      * (called from smp_init_cpus()).
      */
     check_local_cpu_errata();
+#endif
 
     check_local_cpu_features();
 
@@ -824,12 +828,15 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     do_initcalls();
 
+
+#ifdef CONFIG_HAS_ALTERNATIVE
     /*
      * It needs to be called after do_initcalls to be able to use
      * stop_machine (tasklets initialized via an initcall).
      */
     apply_alternatives_all();
     enable_errata_workarounds();
+#endif
     enable_cpu_features();
 
     /* Create initial domain 0. */
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 6a7825f4ba..7f072cc603 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -28,6 +28,7 @@ config X86
 	select HAS_UBSAN
 	select HAS_VPCI if HVM
 	select NEEDS_LIBELF
+	select HAS_VMAP
 
 config ARCH_DEFCONFIG
 	string
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index f1ea3199c8..ba16366a4b 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -61,6 +61,9 @@ config HAS_SCHED_GRANULARITY
 config HAS_UBSAN
 	bool
 
+config HAS_VMAP
+	bool
+
 config MEM_ACCESS_ALWAYS_ON
 	bool
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 9a3a12b12d..9d991effb2 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -50,7 +50,7 @@ obj-$(CONFIG_TRACEBUFFER) += trace.o
 obj-y += version.o
 obj-y += virtual_region.o
 obj-y += vm_event.o
-obj-y += vmap.o
+obj-$(CONFIG_HAS_VMAP) += vmap.o
 obj-y += vsprintf.o
 obj-y += wait.o
 obj-bin-y += warning.init.o
diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
index b0f7632e89..2e3ae0ca6a 100644
--- a/xen/include/xen/vmap.h
+++ b/xen/include/xen/vmap.h
@@ -1,15 +1,17 @@
-#if !defined(__XEN_VMAP_H__) && defined(VMAP_VIRT_START)
+#if !defined(__XEN_VMAP_H__) && (defined(VMAP_VIRT_START) || !defined(CONFIG_HAS_VMAP))
 #define __XEN_VMAP_H__
 
-#include <xen/mm-frame.h>
-#include <xen/page-size.h>
-
 enum vmap_region {
     VMAP_DEFAULT,
     VMAP_XEN,
     VMAP_REGION_NR,
 };
 
+#ifdef CONFIG_HAS_VMAP
+
+#include <xen/mm-frame.h>
+#include <xen/page-size.h>
+
 void vm_init_type(enum vmap_region type, void *start, void *end);
 
 void *__vmap(const mfn_t *mfn, unsigned int granularity, unsigned int nr,
@@ -38,4 +40,75 @@ static inline void vm_init(void)
     vm_init_type(VMAP_DEFAULT, (void *)VMAP_VIRT_START, arch_vmap_virt_end());
 }
 
+#else /* !CONFIG_HAS_VMAP */
+
+static inline void vm_init_type(enum vmap_region type, void *start, void *end)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void *__vmap(const mfn_t *mfn, unsigned int granularity,
+                           unsigned int nr, unsigned int align,
+                           unsigned int flags, enum vmap_region type)
+{
+    ASSERT_UNREACHABLE();
+    return NULL;
+}
+
+static inline void *vmap(const mfn_t *mfn, unsigned int nr)
+{
+    ASSERT_UNREACHABLE();
+    return NULL;
+}
+
+static inline void vunmap(const void *va)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void *vmalloc(size_t size)
+{
+    ASSERT_UNREACHABLE();
+    return NULL;
+}
+
+static inline void *vmalloc_xen(size_t size)
+{
+    ASSERT_UNREACHABLE();
+    return NULL;
+}
+
+static inline void *vzalloc(size_t size)
+{
+    ASSERT_UNREACHABLE();
+    return NULL;
+}
+
+static inline void vfree(void *va)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void __iomem *ioremap(paddr_t pa, size_t len)
+{
+    ASSERT_UNREACHABLE();
+    return NULL;
+}
+
+static inline void iounmap(void __iomem *va)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void *arch_vmap_virt_end(void)
+{
+    ASSERT_UNREACHABLE();
+    return NULL;
+}
+
+static inline void vm_init(void)
+{
+    ASSERT_UNREACHABLE();
+}
+#endif  /* !CONFIG_HAS_VMAP */
 #endif /* __XEN_VMAP_H__ */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476634.739087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjr-0007T3-IN; Fri, 13 Jan 2023 05:36:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476634.739087; Fri, 13 Jan 2023 05:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjr-0007Ow-35; Fri, 13 Jan 2023 05:36:27 +0000
Received: by outflank-mailman (input) for mailman id 476634;
 Fri, 13 Jan 2023 05:36:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCfN-0005sP-8M
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 9291ca8e-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:47 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1CA57FEC;
 Thu, 12 Jan 2023 21:32:29 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id DCB763F587;
 Thu, 12 Jan 2023 21:31:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9291ca8e-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/mpu: make Xen boot to idle on MPU systems(DNM)
Date: Fri, 13 Jan 2023 13:29:14 +0800
Message-Id: <20230113052914.3845596-42-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Guest support is not yet implemented in the part#1 series of MPU
support, so Xen cannot create any guest at boot time. This patch
therefore makes Xen boot to an idle state on MPU systems, so that
reviewers can test the part#1 series.

THIS PATCH IS ONLY FOR TESTING, NOT FOR REVIEWING.

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/mm_mpu.c |  3 +++
 xen/arch/arm/setup.c  | 21 ++++++++++++---------
 xen/arch/arm/traps.c  |  2 ++
 3 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 434ed872c1..73d5779ab4 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -32,6 +32,9 @@
 #include <asm/page.h>
 #include <asm/setup.h>
 
+/* Non-boot CPUs use this to find the correct pagetables. */
+uint64_t init_ttbr;
+
 #ifdef NDEBUG
 static inline void
 __attribute__ ((__format__ (__printf__, 1, 2)))
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index b21fc4b8e2..d04ad8f838 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -803,16 +803,19 @@ void __init start_xen(unsigned long boot_phys_offset,
 #endif
     enable_cpu_features();
 
-    /* Create initial domain 0. */
-    if ( !is_dom0less_mode() )
-        create_dom0();
-    else
-        printk(XENLOG_INFO "Xen dom0less mode detected\n");
-
-    if ( acpi_disabled )
+    if ( !IS_ENABLED(CONFIG_ARM_V8R) )
     {
-        create_domUs();
-        alloc_static_evtchn();
+        /* Create initial domain 0. */
+        if ( !is_dom0less_mode() )
+            create_dom0();
+        else
+            printk(XENLOG_INFO "Xen dom0less mode detected\n");
+
+        if ( acpi_disabled )
+        {
+            create_domUs();
+            alloc_static_evtchn();
+        }
     }
 
     /*
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 061c92acbd..2444f7f6d8 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -963,7 +963,9 @@ void vcpu_show_registers(const struct vcpu *v)
     ctxt.ifsr32_el2 = v->arch.ifsr;
 #endif
 
+#ifndef CONFIG_HAS_MPU
     ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
+#endif
 
     _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476636.739098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjt-0007mA-R9; Fri, 13 Jan 2023 05:36:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476636.739098; Fri, 13 Jan 2023 05:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjs-0007iT-Ml; Fri, 13 Jan 2023 05:36:28 +0000
Received: by outflank-mailman (input) for mailman id 476636;
 Fri, 13 Jan 2023 05:36:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCf5-0005sP-Dh
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:31 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 87c8b44c-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:29 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E7EFAFEC;
 Thu, 12 Jan 2023 21:32:10 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 35CBE3F587;
 Thu, 12 Jan 2023 21:31:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87c8b44c-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 35/40] xen/mpu: destroy boot modules and early FDT mapping in MPU system
Date: Fri, 13 Jan 2023 13:29:08 +0800
Message-Id: <20230113052914.3845596-36-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On MMU systems, the memory of boot modules, such as the kernel and
initramfs, is freed back into the heap. This is not applicable on MPU
systems: the heap must be statically configured in the device tree,
so its base and size cannot change. Instead, on MPU systems we destroy
the MPU memory regions of the boot modules.

In the MPU version of remove_early_mappings, we likewise destroy the
MPU memory region of the early FDT mapping.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/mm_mpu.c    |  4 ++++
 xen/arch/arm/setup.c     | 25 -------------------------
 xen/arch/arm/setup_mmu.c | 25 +++++++++++++++++++++++++
 xen/arch/arm/setup_mpu.c | 26 ++++++++++++++++++++++++++
 4 files changed, 55 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index de0c7d919a..118bb11d1a 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -854,6 +854,10 @@ void dump_hyp_walk(vaddr_t addr)
 
 void __init remove_early_mappings(void)
 {
+    /* The early FDT was mapped with size MAX_FDT_SIZE in early_fdt_map(). */
+    if ( destroy_xen_mappings(round_pgdown(dtb_paddr),
+                              round_pgup(dtb_paddr + MAX_FDT_SIZE)) )
+        panic("Unable to destroy early Device-Tree mapping.\n");
 }
 
 int init_secondary_pagetables(int cpu)
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 0eac33e68c..49ba998f68 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -412,31 +412,6 @@ const char * __init boot_module_kind_as_string(bootmodule_kind kind)
     }
 }
 
-void __init discard_initial_modules(void)
-{
-    struct bootmodules *mi = &bootinfo.modules;
-    int i;
-
-    for ( i = 0; i < mi->nr_mods; i++ )
-    {
-        paddr_t s = mi->module[i].start;
-        paddr_t e = s + PAGE_ALIGN(mi->module[i].size);
-
-        if ( mi->module[i].kind == BOOTMOD_XEN )
-            continue;
-
-        if ( !mfn_valid(maddr_to_mfn(s)) ||
-             !mfn_valid(maddr_to_mfn(e)) )
-            continue;
-
-        fw_unreserved_regions(s, e, init_domheap_pages, 0);
-    }
-
-    mi->nr_mods = 0;
-
-    remove_early_mappings();
-}
-
 /* Relocate the FDT in Xen heap */
 static void * __init relocate_fdt(paddr_t dtb_paddr, size_t dtb_size)
 {
diff --git a/xen/arch/arm/setup_mmu.c b/xen/arch/arm/setup_mmu.c
index 7e5d87f8bd..611a60633e 100644
--- a/xen/arch/arm/setup_mmu.c
+++ b/xen/arch/arm/setup_mmu.c
@@ -340,6 +340,31 @@ void __init setup_mm(void)
 }
 #endif
 
+void __init discard_initial_modules(void)
+{
+    struct bootmodules *mi = &bootinfo.modules;
+    int i;
+
+    for ( i = 0; i < mi->nr_mods; i++ )
+    {
+        paddr_t s = mi->module[i].start;
+        paddr_t e = s + PAGE_ALIGN(mi->module[i].size);
+
+        if ( mi->module[i].kind == BOOTMOD_XEN )
+            continue;
+
+        if ( !mfn_valid(maddr_to_mfn(s)) ||
+             !mfn_valid(maddr_to_mfn(e)) )
+            continue;
+
+        fw_unreserved_regions(s, e, init_domheap_pages, 0);
+    }
+
+    mi->nr_mods = 0;
+
+    remove_early_mappings();
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/setup_mpu.c b/xen/arch/arm/setup_mpu.c
index f7d74ea604..f47f1f39ee 100644
--- a/xen/arch/arm/setup_mpu.c
+++ b/xen/arch/arm/setup_mpu.c
@@ -152,6 +152,32 @@ bool __init mpu_memory_section_contains(paddr_t s, paddr_t e,
     return false;
 }
 
+void __init discard_initial_modules(void)
+{
+    unsigned int i = 0;
+
+    /*
+     * The xenheap on MPU systems must be statically configured in the
+     * FDT, so its base address and size cannot change and it cannot
+     * accept memory freed from boot modules.
+     * Instead, disable the MPU memory region of the boot module
+     * section, since it is no longer used after boot.
+     */
+    for ( ; i < mpuinfo.sections[MSINFO_BOOTMODULE].nr_banks; i++ )
+    {
+        paddr_t start = mpuinfo.sections[MSINFO_BOOTMODULE].bank[i].start;
+        paddr_t size = mpuinfo.sections[MSINFO_BOOTMODULE].bank[i].size;
+        int rc;
+
+        rc = destroy_xen_mappings(start, start + size);
+        if ( rc )
+            panic("mpu: Unable to destroy boot module section 0x%"PRIpaddr"- 0x%"PRIpaddr"\n",
+                  start, start + size);
+    }
+
+    remove_early_mappings();
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476642.739116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjy-0000ed-83; Fri, 13 Jan 2023 05:36:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476642.739116; Fri, 13 Jan 2023 05:36:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCjy-0000dE-2Q; Fri, 13 Jan 2023 05:36:34 +0000
Received: by outflank-mailman (input) for mailman id 476642;
 Fri, 13 Jan 2023 05:36:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeX-0005sP-BN
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:57 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 73673be4-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:30:55 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C178BFEC;
 Thu, 12 Jan 2023 21:31:36 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 0F3863F587;
 Thu, 12 Jan 2023 21:30:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73673be4-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 25/40] xen/mpu: map MPU guest memory section before static memory initialization
Date: Fri, 13 Jan 2023 13:28:58 +0800
Message-Id: <20230113052914.3845596-26-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The previous commit introduced a new device tree property,
"mpu,guest-memory-section", to define the MPU guest memory section,
which mitigates the scattering of statically configured guest RAM.

We only need to set up an MPU memory region mapping for the MPU guest
memory section to gain access to all guest RAM, and this must happen
before static memory initialization (init_staticmem_pages()).

The MPU memory region for the guest memory section is switched out
when the idle vcpu leaves hypervisor mode, to avoid region overlap if
the vcpu later enters guest mode, and switched back in when the idle
vcpu enters hypervisor mode. We introduce a bit, region.prlar.sw
(in struct pr_t), to indicate this behaviour.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/arm64/mpu.h | 14 ++++++---
 xen/arch/arm/mm_mpu.c                | 47 +++++++++++++++++++++++++---
 2 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index b85e420a90..0044bbf05d 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -45,22 +45,26 @@
  * [3:4] Execute Never
  * [5:6] Access Permission
  * [7]   Region Present
- * [8]   Boot-only Region
+ * [8:9] 0b00: Fixed Region; 0b01: Boot-only Region;
+ *       0b10: Region needs switching out/in during vcpu context switch;
  */
 #define _REGION_AI_BIT            0
 #define _REGION_XN_BIT            3
 #define _REGION_AP_BIT            5
 #define _REGION_PRESENT_BIT       7
-#define _REGION_BOOTONLY_BIT      8
+#define _REGION_TRANSIENT_BIT     8
 #define _REGION_XN                (2U << _REGION_XN_BIT)
 #define _REGION_RO                (2U << _REGION_AP_BIT)
 #define _REGION_PRESENT           (1U << _REGION_PRESENT_BIT)
-#define _REGION_BOOTONLY          (1U << _REGION_BOOTONLY_BIT)
+#define _REGION_BOOTONLY          (1U << _REGION_TRANSIENT_BIT)
+#define _REGION_SWITCH            (2U << _REGION_TRANSIENT_BIT)
 #define REGION_AI_MASK(x)         (((x) >> _REGION_AI_BIT) & 0x7U)
 #define REGION_XN_MASK(x)         (((x) >> _REGION_XN_BIT) & 0x3U)
 #define REGION_AP_MASK(x)         (((x) >> _REGION_AP_BIT) & 0x3U)
 #define REGION_RO_MASK(x)         (((x) >> _REGION_AP_BIT) & 0x2U)
 #define REGION_BOOTONLY_MASK(x)   (((x) >> _REGION_BOOTONLY_BIT) & 0x1U)
+#define REGION_SWITCH_MASK(x)     (((x) >> _REGION_TRANSIENT_BIT) & 0x2U)
+#define REGION_TRANSIENT_MASK(x)  (((x) >> _REGION_TRANSIENT_BIT) & 0x3U)
 
 /*
  * _REGION_NORMAL is convenience define. It is not meant to be used
@@ -73,6 +77,7 @@
 
 #define REGION_HYPERVISOR         REGION_HYPERVISOR_RW
 #define REGION_HYPERVISOR_BOOT    (REGION_HYPERVISOR_RW|_REGION_BOOTONLY)
+#define REGION_HYPERVISOR_SWITCH  (REGION_HYPERVISOR_RW|_REGION_SWITCH)
 
 #define INVALID_REGION            (~0UL)
 
@@ -98,7 +103,8 @@ typedef union {
         unsigned long ns:1;     /* Not-Secure */
         unsigned long res:1;    /* Reserved 0 by hardware */
         unsigned long limit:42; /* Limit Address */
-        unsigned long pad:16;
+        unsigned long pad:15;
+        unsigned long sw:1;     /* Region gets switched out/in during vcpu context switch? */
     } reg;
     uint64_t bits;
 } prlar_t;
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 7b282be4fb..d2e19e836c 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -71,6 +71,10 @@ static paddr_t dtb_paddr;
 
 struct page_info *frame_table;
 
+static const unsigned int mpu_section_mattr[MSINFO_MAX] = {
+    REGION_HYPERVISOR_SWITCH,
+};
+
 /* Write a MPU protection region */
 #define WRITE_PROTECTION_REGION(sel, pr, prbar_el2, prlar_el2) ({       \
     uint64_t _sel = sel;                                                \
@@ -414,10 +418,13 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
         if ( system_state <= SYS_STATE_active )
         {
             /*
-             * If it is a boot-only region (i.e. region for early FDT),
-             * it shall be added from the tail for late init re-organizing
+             * If it is a transient region, including boot-only region
+             * (i.e. region for early FDT), and region which needs switching
+             * in/out during vcpu context switch(i.e. region for guest memory
+             * section), it shall be added from the tail for late init
+             * re-organizing
              */
-            if ( REGION_BOOTONLY_MASK(flags) )
+            if ( REGION_TRANSIENT_MASK(flags) )
                 idx = next_transient_region_idx;
             else
                 idx = next_fixed_region_idx;
@@ -427,6 +434,13 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
         /* Set permission */
         xen_mpumap[idx].prbar.reg.ap = REGION_AP_MASK(flags);
         xen_mpumap[idx].prbar.reg.xn = REGION_XN_MASK(flags);
+        /*
+         * The sw bit marks a region that is switched out when the idle
+         * vcpu leaves hypervisor mode and switched back in when the
+         * idle vcpu re-enters it.
+         */
+        if ( REGION_SWITCH_MASK(flags) )
+            xen_mpumap[idx].prlar.reg.sw = 1;
 
         /* Update and enable the region */
         access_protection_region(false, NULL, (const pr_t*)(&xen_mpumap[idx]),
@@ -552,6 +566,29 @@ static void __init setup_staticheap_mappings(void)
     }
 }
 
+static void __init map_mpu_memory_section_on_boot(enum mpu_section_info type,
+                                                  unsigned int flags)
+{
+    unsigned int i = 0;
+
+    for ( ; i < mpuinfo.sections[type].nr_banks; i++ )
+    {
+        paddr_t start = round_pgup(
+                        mpuinfo.sections[type].bank[i].start);
+        paddr_t size = round_pgdown(mpuinfo.sections[type].bank[i].size);
+
+        /*
+         * Map MPU memory section with transient MPU memory region,
+         * as they are either boot-only, or will be switched out/in
+         * during vcpu context switch(i.e. guest memory section).
+         */
+        if ( map_pages_to_xen(start, maddr_to_mfn(start), size >> PAGE_SHIFT,
+                              flags) )
+            panic("mpu: failed to map MPU memory section %s\n",
+                  mpu_section_info_str[type]);
+    }
+}
+
 /*
  * System RAM is statically partitioned into different functionality
  * section in Device Tree, including static xenheap, guest memory
@@ -563,7 +600,9 @@ void __init setup_static_mappings(void)
 {
     setup_staticheap_mappings();
 
-    /* TODO: guest memory section, device memory section, boot-module section, etc */
+    for ( uint8_t i = MSINFO_GUEST; i < MSINFO_MAX; i++ )
+        map_mpu_memory_section_on_boot(i, mpu_section_mattr[i]);
+    /* TODO: device memory section, boot-module section, etc */
 }
 
 /* Map a frame table to cover physical addresses ps through pe */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476650.739133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCk2-0001aL-3K; Fri, 13 Jan 2023 05:36:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476650.739133; Fri, 13 Jan 2023 05:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCk1-0001YR-Ub; Fri, 13 Jan 2023 05:36:37 +0000
Received: by outflank-mailman (input) for mailman id 476650;
 Fri, 13 Jan 2023 05:36:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeT-0005sJ-63
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:30:53 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 71a1f7f5-9303-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 06:30:52 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A77B5FEC;
 Thu, 12 Jan 2023 21:31:33 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id E97423F587;
 Thu, 12 Jan 2023 21:30:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71a1f7f5-9303-11ed-91b6-6bf2151ebd3b
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 24/40] xen/mpu: introduce "mpu,xxx-memory-section"
Date: Fri, 13 Jan 2023 13:28:57 +0800
Message-Id: <20230113052914.3845596-25-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In an MPU system, all kinds of resources, including system and domain
resources, must be statically configured in the Device Tree, e.g.
guest RAM must be statically allocated through the "xen,static-mem"
property under the domain node.

However, due to the limited number of MPU protection regions and the wide
variety of resources, we could easily exhaust all MPU protection regions
very quickly. So we introduce a set of new properties,
"mpu,xxx-memory-section", to mitigate the impact. Each property limits
the available host address range of one kind of system/domain resource.

This commit also introduces "mpu,guest-memory-section" as an example, to
limit the scattering of static memory used as guest RAM. In an MPU system,
guest RAM must not only be statically configured through the
"xen,static-mem" property, but must also be defined inside the
"mpu,guest-memory-section" range.
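
As an illustration (hypothetical addresses, not taken from this patch), a
/chosen fragment could carve out a single 512MB window that every
"xen,static-mem" range must then fall inside:

```dts
chosen {
    /* Every "xen,static-mem" range must lie inside this window. */
    mpu,guest-memory-section = <0x0 0x40000000 0x0 0x20000000>;
};
```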

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/bootfdt.c           | 13 ++++---
 xen/arch/arm/include/asm/setup.h | 24 +++++++++++++
 xen/arch/arm/setup_mpu.c         | 58 ++++++++++++++++++++++++++++++++
 3 files changed, 91 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 0085c28d74..d7a5dd0ede 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -59,10 +59,10 @@ void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
     *size = dt_next_cell(size_cells, cell);
 }
 
-static int __init device_tree_get_meminfo(const void *fdt, int node,
-                                          const char *prop_name,
-                                          u32 address_cells, u32 size_cells,
-                                          void *data, enum membank_type type)
+int __init device_tree_get_meminfo(const void *fdt, int node,
+                                   const char *prop_name,
+                                   u32 address_cells, u32 size_cells,
+                                   void *data, enum membank_type type)
 {
     const struct fdt_property *prop;
     unsigned int i, banks;
@@ -315,6 +315,11 @@ static int __init process_chosen_node(const void *fdt, int node,
         bootinfo.static_heap = true;
     }
 
+#ifdef CONFIG_HAS_MPU
+    if ( process_mpuinfo(fdt, node, address_cells, size_cells) )
+        return -EINVAL;
+#endif
+
     printk("Checking for initrd in /chosen\n");
 
     prop = fdt_get_property(fdt, node, "linux,initrd-start", &len);
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 8f353b67f8..3581f8f990 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -172,6 +172,11 @@ void device_tree_get_reg(const __be32 **cell, u32 address_cells,
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
 
+int device_tree_get_meminfo(const void *fdt, int node,
+                            const char *prop_name,
+                            u32 address_cells, u32 size_cells,
+                            void *data, enum membank_type type);
+
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
@@ -185,6 +190,25 @@ struct init_info
     unsigned int cpuid;
 };
 
+#ifdef CONFIG_HAS_MPU
+/* Index of MPU memory section */
+enum mpu_section_info {
+    MSINFO_GUEST,
+    MSINFO_MAX
+};
+
+extern const char *mpu_section_info_str[MSINFO_MAX];
+
+struct mpuinfo {
+    struct meminfo sections[MSINFO_MAX];
+};
+
+extern struct mpuinfo mpuinfo;
+
+extern int process_mpuinfo(const void *fdt, int node, uint32_t address_cells,
+                           uint32_t size_cells);
+#endif /* CONFIG_HAS_MPU */
+
 #endif
 /*
  * Local variables:
diff --git a/xen/arch/arm/setup_mpu.c b/xen/arch/arm/setup_mpu.c
index ca0d8237d5..09a38a34a4 100644
--- a/xen/arch/arm/setup_mpu.c
+++ b/xen/arch/arm/setup_mpu.c
@@ -20,12 +20,70 @@
  */
 
 #include <xen/init.h>
+#include <xen/libfdt/libfdt.h>
 #include <xen/mm.h>
 #include <xen/pfn.h>
 #include <asm/mm_mpu.h>
 #include <asm/page.h>
 #include <asm/setup.h>
 
+const char *mpu_section_info_str[MSINFO_MAX] = {
+    "mpu,guest-memory-section",
+};
+
+/*
+ * mpuinfo stores mpu memory section info, which is configured under
+ * "mpu,xxx-memory-section" in Device Tree.
+ */
+struct mpuinfo __initdata mpuinfo;
+
+/*
+ * Due to the limited number of MPU protection regions and the wide
+ * variety of resources, "mpu,xxx-memory-section" is introduced to
+ * mitigate the impact. Each property limits the available host
+ * address range of one kind of system/domain resource.
+ *
+ * "mpu,guest-memory-section": guest RAM must be statically allocated
+ * through the "xen,static-mem" property in an MPU system.
+ * "mpu,guest-memory-section" limits the scattering of "xen,static-mem",
+ * as users cannot define a "xen,static-mem" range outside it.
+ */
+static int __init process_mpu_memory_section(const void *fdt, int node,
+                                             const char *name, void *data,
+                                             uint32_t address_cells,
+                                             uint32_t size_cells)
+{
+    if ( !fdt_get_property(fdt, node, name, NULL) )
+        return -EINVAL;
+
+    return device_tree_get_meminfo(fdt, node, name, address_cells, size_cells,
+                                   data, MEMBANK_DEFAULT);
+}
+
+int __init process_mpuinfo(const void *fdt, int node,
+                           uint32_t address_cells, uint32_t size_cells)
+{
+    uint8_t idx = 0;
+    const char *prop_name;
+
+    for ( ; idx < MSINFO_MAX; idx++ )
+    {
+        prop_name = mpu_section_info_str[idx];
+
+        printk("Checking for %s in /chosen\n", prop_name);
+
+        if ( process_mpu_memory_section(fdt, node, prop_name,
+                                        &mpuinfo.sections[idx],
+                                        address_cells, size_cells) )
+        {
+            printk(XENLOG_ERR "fdt: failed to process %s\n", prop_name);
+            return -EINVAL;
+        }
+    }
+
+    return 0;
+}
+
 void __init setup_mm(void)
 {
     paddr_t ram_start = ~0, ram_end = 0, ram_size = 0;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 05:36:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 05:36:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476652.739139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCk2-0001if-T2; Fri, 13 Jan 2023 05:36:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476652.739139; Fri, 13 Jan 2023 05:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGCk2-0001gS-Ip; Fri, 13 Jan 2023 05:36:38 +0000
Received: by outflank-mailman (input) for mailman id 476652;
 Fri, 13 Jan 2023 05:36:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGCeg-0005sP-Hr
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 05:31:06 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 78f4a527-9303-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 06:31:04 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1B20FFEC;
 Thu, 12 Jan 2023 21:31:46 -0800 (PST)
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com
 [10.169.190.94])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 5C9923F587;
 Thu, 12 Jan 2023 21:31:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78f4a527-9303-11ed-b8d0-410ff93cb8f0
From: Penny Zheng <Penny.Zheng@arm.com>
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com,
	Penny Zheng <Penny.Zheng@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Penny Zheng <penny.zheng@arm.com>
Subject: [PATCH v2 28/40] xen/mpu: map boot module section in MPU system
Date: Fri, 13 Jan 2023 13:29:01 +0800
Message-Id: <20230113052914.3845596-29-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In an MPU system, we cannot afford to map a new MPU memory region for each
new guest boot module, as that would exhaust the limited MPU memory regions
very quickly.

So we introduce `mpu,boot-module-section` for users to statically configure
one big memory section, or very few memory sections, for all guests' boot
modules. Users shall make sure that any guest boot module defined in the
Device Tree is within the section, including the kernel module
(BOOTMOD_KERNEL), the device tree passthrough module (BOOTMOD_GUEST_DTB),
and the ramdisk module (BOOTMOD_RAMDISK).
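
As an illustration (hypothetical addresses), a /chosen fragment could
reserve one 256MB window in which all guests' kernel, guest DTB and
ramdisk images must be placed:

```dts
chosen {
    /* All BOOTMOD_KERNEL / BOOTMOD_GUEST_DTB / BOOTMOD_RAMDISK
       images must lie inside this window. */
    mpu,boot-module-section = <0x0 0x80000000 0x0 0x10000000>;
};
```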

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/arch/arm/include/asm/setup.h | 1 +
 xen/arch/arm/mm_mpu.c            | 2 +-
 xen/arch/arm/setup_mpu.c         | 7 +++++++
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index b7a2225c25..61f24b5848 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -195,6 +195,7 @@ struct init_info
 enum mpu_section_info {
     MSINFO_GUEST,
     MSINFO_DEVICE,
+    MSINFO_BOOTMODULE,
     MSINFO_MAX
 };
 
diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
index 1566ba60af..ea64aa38e4 100644
--- a/xen/arch/arm/mm_mpu.c
+++ b/xen/arch/arm/mm_mpu.c
@@ -74,6 +74,7 @@ struct page_info *frame_table;
 static const unsigned int mpu_section_mattr[MSINFO_MAX] = {
     REGION_HYPERVISOR_SWITCH,
     REGION_HYPERVISOR_NOCACHE,
+    REGION_HYPERVISOR_BOOT,
 };
 
 /* Write a MPU protection region */
@@ -686,7 +687,6 @@ void __init setup_static_mappings(void)
 #endif
         map_mpu_memory_section_on_boot(i, mpu_section_mattr[i]);
     }
-    /* TODO: boot-module section, etc */
 }
 
 /* Map a frame table to cover physical addresses ps through pe */
diff --git a/xen/arch/arm/setup_mpu.c b/xen/arch/arm/setup_mpu.c
index ec05542f68..160934bf86 100644
--- a/xen/arch/arm/setup_mpu.c
+++ b/xen/arch/arm/setup_mpu.c
@@ -30,6 +30,7 @@
 const char *mpu_section_info_str[MSINFO_MAX] = {
     "mpu,guest-memory-section",
     "mpu,device-memory-section",
+    "mpu,boot-module-section",
 };
 
 /*
@@ -52,6 +53,12 @@ struct mpuinfo __initdata mpuinfo;
  * "mpu,device-memory-section": this section draws the device memory layout
  * with the least number of memory regions for all devices in system that will
  * be used in Xen, like `UART`, `GIC`, etc.
+ *
+ * "mpu,boot-module-section": this property uses one big memory section, or
+ * very few memory sections, to describe all guests' boot modules. Users shall
+ * make sure that any guest boot module defined in the Device Tree is within
+ * the section, including the kernel module (BOOTMOD_KERNEL), device tree
+ * passthrough module (BOOTMOD_GUEST_DTB), and ramdisk module (BOOTMOD_RAMDISK).
  */
 static int __init process_mpu_memory_section(const void *fdt, int node,
                                              const char *name, void *data,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 07:30:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 07:30:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476800.739156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGEW6-0000g7-1m; Fri, 13 Jan 2023 07:30:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476800.739156; Fri, 13 Jan 2023 07:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGEW5-0000g0-VG; Fri, 13 Jan 2023 07:30:21 +0000
Received: by outflank-mailman (input) for mailman id 476800;
 Fri, 13 Jan 2023 07:30:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3K7w=5K=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pGEW4-0000fs-6f
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 07:30:20 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1fb243a0-9314-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 08:30:17 +0100 (CET)
Received: by mail-ej1-x635.google.com with SMTP id az20so31211390ejc.1
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 23:30:16 -0800 (PST)
Received: from [192.168.1.93] (adsl-67.109.242.138.tellas.gr. [109.242.138.67])
 by smtp.gmail.com with ESMTPSA id
 15-20020a170906310f00b00738795e7d9bsm8137799ejx.2.2023.01.12.23.30.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 12 Jan 2023 23:30:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fb243a0-9314-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Dn+bnoVvZ/UNa2MNEKNx9JDAsWdqmdui9lxIIlE1MoI=;
        b=ERYoLubf+XYSFgtjTsuNsN78yVI5OR7hy9Rhy5f9SNbUS9RrwMSxadHxbg+IcEbT6f
         Qttx/TIVC3u3ubL1hLSd3Dc593o+ZN+xxx+ETmd4j0I8/zZdV5HF5qbLQo3EOzAgmgEb
         cKrc6cJaW17xvKf5jhhwfJTo+lyG+wJzAlq4ZHIcBSNYsmSHLFetzZJy60UEpw3HWjtw
         nJiZ5iHg7TFlXUBcPvZPlPJP4Esu3k9NHBc7wN94cb6gh9P0L4sk5FY6aWfoZHuHu+nZ
         ouC1ndJdJ7wOSrJ+F/0975aBxDjGnWOab89RyIHB/jpoLAQS3qi15EfkYvfxtdqEIq7a
         cHOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Dn+bnoVvZ/UNa2MNEKNx9JDAsWdqmdui9lxIIlE1MoI=;
        b=LX4M5TWCAwDhjdv9fSsc2sppQsX7Vu5uXjSxzEk1YlnPNOxlTLrrsdclCEn1yAPfJm
         E2vgZWdET9zG6SvvUPSOaKaJEeKSoPMNSkJlkPWacGRDyyQgQQyVdoXJjIJn4LBqgY77
         lGjk6eFMRP8ItthS8FUJ/D2MVGuwiJzuC/pY2/lIrh1Su1liYCHqnMo9X2uR8I8px23o
         OEeKvitvmfDT+I7DcA6TALDZBHnRM/aL2wWFTHlvVHu4vqbb+ydmaeRJTG636fZmdXEU
         zpvN/blKRBN9/yx1A707b/i7oSjvNljTSveNMHYVG+Rb9Z41QzH0vo7vvGAa0WwhwhBo
         q3zQ==
X-Gm-Message-State: AFqh2kpFaGx5Yb06N/w/0UPCmPcu4wTYRU9GhjHIY2DdcVOkEa/ztsR4
	mnbPgnaAfYYkY4GB82RFNsk=
X-Google-Smtp-Source: AMrXdXurQr12GQ4mPcB+mw5MYVR4QBl1VK3fQ0CGEhbv0alr4vZjdHUQ0FdJw+2Od22YIV7/yowTDA==
X-Received: by 2002:a17:906:6a05:b0:7c1:28a7:f7a0 with SMTP id qw5-20020a1709066a0500b007c128a7f7a0mr98645100ejc.31.1673595015958;
        Thu, 12 Jan 2023 23:30:15 -0800 (PST)
Message-ID: <04e5b744-bb61-b3d1-7d60-97bb710f7584@gmail.com>
Date: Fri, 13 Jan 2023 09:30:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2 6/8] x86/iommu: call pi_update_irte through an
 hvm_function callback
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-7-burzalodowa@gmail.com>
 <aa20eb4d-7b18-9bbf-718f-2fe5fa896713@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <aa20eb4d-7b18-9bbf-718f-2fe5fa896713@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/12/23 14:16, Jan Beulich wrote:
> On 04.01.2023 09:45, Xenia Ragiadakou wrote:
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -2143,6 +2143,14 @@ static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
>>       return pi_test_pir(vec, &v->arch.hvm.vmx.pi_desc);
>>   }
>>   
>> +static int cf_check vmx_pi_update_irte(const struct vcpu *v,
>> +                                       const struct pirq *pirq, uint8_t gvec)
>> +{
>> +    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
>> +
>> +    return pi_update_irte(pi_desc, pirq, gvec);
>> +}
> 
> This being the only caller of pi_update_irte(), I don't see the point in
> having the extra wrapper. Adjust pi_update_irte() such that it can be
> used as the intended hook directly. Plus perhaps prefix it with vtd_.

Ok I will remove the extra wrapper.

> 
>> @@ -2591,6 +2599,8 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
>>       .tsc_scaling = {
>>           .max_ratio = VMX_TSC_MULTIPLIER_MAX,
>>       },
>> +
>> +    .pi_update_irte = vmx_pi_update_irte,
> 
> You want to install this hook only when iommu_intpost (i.e. the only case
> when it can actually be called, and only when INTEL_IOMMU=y (avoiding the
> need for an inline stub of pi_update_irte() or whatever its final name is
> going to be.

Ok will do.

> 
>> @@ -250,6 +252,9 @@ struct hvm_function_table {
>>           /* Architecture function to setup TSC scaling ratio */
>>           void (*setup)(struct vcpu *v);
>>       } tsc_scaling;
>> +
>> +    int (*pi_update_irte)(const struct vcpu *v,
>> +                          const struct pirq *pirq, uint8_t gvec);
>>   };
> 
> Please can this be moved higher up, e.g. next to .

Right after handle_eoi would be ok? or higher up?

> 
>> @@ -774,6 +779,16 @@ static inline void hvm_set_nonreg_state(struct vcpu *v,
>>           alternative_vcall(hvm_funcs.set_nonreg_state, v, nrs);
>>   }
>>   
>> +static inline int hvm_pi_update_irte(const struct vcpu *v,
>> +                                     const struct pirq *pirq, uint8_t gvec)
>> +{
>> +    if ( hvm_funcs.pi_update_irte )
>> +        return alternative_call(hvm_funcs.pi_update_irte, v, pirq, gvec);
>> +
>> +    return -EOPNOTSUPP;
> 
> I don't think the conditional is needed, at least not with the other
> suggested adjustments. Plus the way alternative patching works, a NULL
> hook will be converted to some equivalent of BUG() anyway, so
> ASSERT_UNREACHABLE() should also be unnecessary.

Ok will remove it.

> 
>> +}
>> +
>> +
>>   #else  /* CONFIG_HVM */
> 
> Please don't add double blank lines.

Ok will fix.

> 
>> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
>> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
>> @@ -146,6 +146,17 @@ static inline void pi_clear_sn(struct pi_desc *pi_desc)
>>       clear_bit(POSTED_INTR_SN, &pi_desc->control);
>>   }
>>   
>> +#ifdef CONFIG_INTEL_IOMMU
>> +int pi_update_irte(const struct pi_desc *pi_desc,
>> +                   const struct pirq *pirq, const uint8_t gvec);
>> +#else
>> +static inline int pi_update_irte(const struct pi_desc *pi_desc,
>> +                                 const struct pirq *pirq, const uint8_t gvec)
>> +{
>> +    return -EOPNOTSUPP;
>> +}
>> +#endif
> 
> This still is a VT-d function, so I think its declaration would better
> remain in asm/iommu.h.
> 
> Jan

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 07:45:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 07:45:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476808.739167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGEkK-0002IZ-C3; Fri, 13 Jan 2023 07:45:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476808.739167; Fri, 13 Jan 2023 07:45:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGEkK-0002IS-98; Fri, 13 Jan 2023 07:45:04 +0000
Received: by outflank-mailman (input) for mailman id 476808;
 Fri, 13 Jan 2023 07:45:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3K7w=5K=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pGEkI-0002IK-CK
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 07:45:02 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2f6a43a6-9316-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 08:45:01 +0100 (CET)
Received: by mail-ed1-x534.google.com with SMTP id z11so30101520ede.1
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 23:45:01 -0800 (PST)
Received: from [192.168.1.93] (adsl-67.109.242.138.tellas.gr. [109.242.138.67])
 by smtp.gmail.com with ESMTPSA id
 ew7-20020a056402538700b0049b58744f93sm1498910edb.81.2023.01.12.23.44.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 12 Jan 2023 23:45:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f6a43a6-9316-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=JsGO+7TGWvFuZLMUYRp7Wdn3kI5JHbZiEPOnzCuFKAA=;
        b=e7+23y1+z7Eka7I4AFEjEWQN+gzh0v5BS8ZlEZjTf/hCAPe7mhvu3EujcQSwpnlUiS
         T231oXDgjfZ8R3GkQQyPSw6FN6PzSu1RyJ9tWU4l+fWrk979mCn64dSW/Z/qp4BHbY66
         krWSikRkmDAhK++cONo5thNJLDORD64Te7ZKJLda1jroqo6BcCW52qJkCyr8h+ErBa5u
         ID2mHj3dXUPWvnBsPDr8ZRWm3SnjITOiCmd3yPNY/3o8yC7O2gyTxMAxcvNpXFimEGN8
         CmjXzaX9Idoh5TpcLTCYk7OhDpzP2mO8dpNqEWjMmzwGUwXWTQvc5U0jftZpzCOV3Eym
         WtbQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=JsGO+7TGWvFuZLMUYRp7Wdn3kI5JHbZiEPOnzCuFKAA=;
        b=tcksvOYNHMO6TEsZ/4/p0cM191xXKTAfahkLTtuKz6R4eLJ0b7P7qve6QN7CZkrHJc
         2LgNCP7d6lwlWS/nDz/Dvtk37ly7Agr5o+623lip1OvusQ4TU7mBKI2rx7s27pauBsQq
         On+bLJoHjmmj5ncFb0CT2ZTieveXzOu6GvP/tLEXvl5e8ZY3m9VSuq58zEccDNjP7zmD
         dOGEb6b7jHgnRBqKeePMyISxx4BTqkd6H+h48sb8Rwwt0odzwnMSUYUk0+BwGsn6iQjS
         IiuYHnsSWJdWa92fAnR/naZW4V/CmIWEPq9OIEuaP6N+/fssYSq47fmzTWuha48ppN7W
         F8Dg==
X-Gm-Message-State: AFqh2krEtIUWcWX9I5Y524mREzn5nvbr/6NB0A02FGHxOQGbIZigaOfz
	i+zAJwNfZXTbWQBencA/D+I=
X-Google-Smtp-Source: AMrXdXv4kOKjqyHRZKSZO51hVTmb/pAlGTR4NvvYIuzi2SgJz/H8LFjwKZ/tn3y+mxEJwPuxNMDjwQ==
X-Received: by 2002:a05:6402:e9b:b0:479:8313:3008 with SMTP id h27-20020a0564020e9b00b0047983133008mr59127596eda.0.1673595900901;
        Thu, 12 Jan 2023 23:45:00 -0800 (PST)
Message-ID: <e16daaf6-5f6a-d0a3-802c-0bfc0b6876a7@gmail.com>
Date: Fri, 13 Jan 2023 09:44:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2 6/8] x86/iommu: call pi_update_irte through an
 hvm_function callback
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-7-burzalodowa@gmail.com>
 <aa20eb4d-7b18-9bbf-718f-2fe5fa896713@suse.com>
 <6c5a4c07-e942-a683-8579-a0f9d5971c7b@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <6c5a4c07-e942-a683-8579-a0f9d5971c7b@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/12/23 14:37, Jan Beulich wrote:
> On 12.01.2023 13:16, Jan Beulich wrote:
>> On 04.01.2023 09:45, Xenia Ragiadakou wrote:
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -2143,6 +2143,14 @@ static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
>>>       return pi_test_pir(vec, &v->arch.hvm.vmx.pi_desc);
>>>   }
>>>   
>>> +static int cf_check vmx_pi_update_irte(const struct vcpu *v,
>>> +                                       const struct pirq *pirq, uint8_t gvec)
>>> +{
>>> +    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
>>> +
>>> +    return pi_update_irte(pi_desc, pirq, gvec);
>>> +}
>>
>> This being the only caller of pi_update_irte(), I don't see the point in
>> having the extra wrapper. Adjust pi_update_irte() such that it can be
>> used as the intended hook directly. Plus perhaps prefix it with vtd_.
> 
> Plus move it to vtd/x86/hvm.c (!HVM builds shouldn't need it), albeit I
> realize this could be done independent of your work. In principle the
> function shouldn't be VT-d specific (and could hence live in x86/hvm.c),
> as msi_msg_write_remap_rte() is already available as IOMMU hook anyway,
> provided struct pi_desc turns out compatible with what's going to be
> needed for AMD.

Since the posted interrupt descriptor is vmx-specific while 
msi_msg_write_remap_rte is iommu-specific, can I propose the following:

- Keep the name as is (i.e. vmx_pi_update_irte) and keep its definition 
in xen/arch/x86/hvm/vmx/vmx.c

- Open-code pi_update_irte() inside the body of vmx_pi_update_irte(), but 
replace the Intel-specific msi_msg_write_remap_rte() with the generic 
iommu_update_ire_from_msi().

Does this approach make sense?

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 07:47:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 07:47:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476806.739178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGEmu-0002tM-SU; Fri, 13 Jan 2023 07:47:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476806.739178; Fri, 13 Jan 2023 07:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGEmu-0002tF-NT; Fri, 13 Jan 2023 07:47:44 +0000
Received: by outflank-mailman (input) for mailman id 476806;
 Fri, 13 Jan 2023 07:39:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E9J+=5K=gmail.com=joanbrugueram@srs-se1.protection.inumbo.net>)
 id 1pGEf9-0001P9-DM
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 07:39:43 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 706c3330-9315-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 08:39:41 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id
 bg13-20020a05600c3c8d00b003d9712b29d2so18166471wmb.2
 for <xen-devel@lists.xenproject.org>; Thu, 12 Jan 2023 23:39:40 -0800 (PST)
Received: from solpc.. (67.pool90-171-92.dynamic.orange.es. [90.171.92.67])
 by smtp.gmail.com with ESMTPSA id
 h19-20020a05600c351300b003d9a86a13bfsm26207681wmq.28.2023.01.12.23.39.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 12 Jan 2023 23:39:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 706c3330-9315-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3M4PJa8bRUQ4Sc+Wzz/wLpc5Oc/26l8w9l+YmujHJ60=;
        b=P1LDNQgqNtOTOXUgMU4kngFnoQqhy9Oq7QtZJYVR4R8y3LYth3y7NREJn0SEQjzIpt
         Y9vIhsduHpDGCLyv9tAMdFmQ4j/dh5+D1JbS+VYHbQaThnd4MoacI8qcpGg/4l1gOW2/
         lQrabY4P5iQTl0yf1gv9Y2/MwMldp7MEwXewivv1y4ZenzLvn0tSTfIwYpnphHcVWJ9o
         LgTArCu7JAcefSAtZYvgnHmnLGcJRBDBWePUwVhXgAQZ0yfJ9mjGeoNyp8CQcP2qCdqL
         NQ/rs73ktpBLscWTNQE4C8ddF4lzEPDW27yBO0IKttw+AhYeje1S34W4z4dieZeoD7PI
         saKg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=3M4PJa8bRUQ4Sc+Wzz/wLpc5Oc/26l8w9l+YmujHJ60=;
        b=J3KptU6qbcXUxcVzzQY50SLyC+dvTptdi35cmyGSLqo+z2wfg94aaxFZl9G/cclAvh
         5yZfRfEEWAYsPiDFPfnzMA3F5Zd/ld0X1gStAipWrSVV950BkQrrB8k8F1C2+j3sATTZ
         +B4tfie7KIH7rFBDdu+0Y7mvYc7QQJJhTQDMEVktU6nwyF6R44DMsjcKScpstsumEKo+
         6A/WrCnAreUo7F6soJJ9htDcQgO4UIYpj/IuDJDE9ZmBGRsIfktzceEmGZZdQAZpi9wh
         L3VMpTxJj2R/3fL+Yj5QUgAaCXs4tTLd2JY+VaScS92SFGXE8OGmdXJNnvQkiF6oSzlN
         /u5Q==
X-Gm-Message-State: AFqh2kpgieYZAeBiWgYVImTuXdRT8TkKyVQISRvwO1BG/iUO4CeUkfaQ
	Dtzq2hVgvpcDvWJWYwhp0G8=
X-Google-Smtp-Source: AMrXdXs3tLo1GHegPJMdV9HWu8qZmoNTQpGBWl1/Ydv1V8mjdLCceMWAUbQlN+gWfV6Ebj/DmKEHOw==
X-Received: by 2002:a05:600c:500e:b0:3cf:88c3:d008 with SMTP id n14-20020a05600c500e00b003cf88c3d008mr61730943wmr.28.1673595580235;
        Thu, 12 Jan 2023 23:39:40 -0800 (PST)
From: Joan Bruguera <joanbrugueram@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org,
	Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>,
	mark.rutland@arm.com,
	x86@kernel.org
Subject: Re: [RFC][PATCH 0/6] x86: Fix suspend vs retbleed=stuff
Date: Fri, 13 Jan 2023 07:39:38 +0000
Message-Id: <20230113073938.1066227-1-joanbrugueram@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230112143141.645645775@infradead.org>
References: <20230112143141.645645775@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi Peter,

I tried your patches on both QEMU and on my two (real) computers where
s2ram with `retbleed=stuff` was failing, and they wake up fine now.

However, I think a few minor points still need review:

(1) I got a build error due to a symbol conflict between the
    `restore_registers` in `arch/x86/include/asm/suspend_64.h` and the
    one in `drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.c`.

    (I fixed it by renaming the one in `hw_gpio.c`, but an `allmodconfig`
     build is worth doing just in case there's something else.)

(2) Tracing with QEMU, I still see two `sarq $5, %gs:0x1337B33F` before
    `%gs` is restored. Those correspond to the calls from
    `secondary_startup_64` in `arch/x86/kernel/head_64.S` to
    `verify_cpu` and `sev_verify_cbit`.
    They don't cause a crash, but they look suspicious; are they correct?

    (There are also some `sarq`s in the call to `early_setup_idt` from
    `secondary_startup_64`, but there `%gs` is restored immediately
    beforehand.)

    I've attached an annotated QEMU log of those below, in case it's useful.
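For context on those `sarq`/`shlq` pairs: as I understand the call-depth
accounting that `retbleed=stuff` inserts (this is my reading of the scheme,
in a toy model with names of my own invention, not the kernel's actual code),
a per-CPU counter starts with only bit 63 set, each call thunk shifts five
sign bits in from the top, and each return thunk shifts five bits back out
and refills the RSB when the counter reaches zero:

```c
#include <stdint.h>

/* Toy model of the retbleed=stuff call-depth counter (names are mine).
 * The per-CPU counter starts with only bit 63 set.  Each call thunk
 * executes `sarq $5` on it, shifting five more sign bits in from the
 * top; each return thunk executes `shlq $5` and stuffs the RSB when
 * the result becomes zero, i.e. when returns outnumber tracked calls. */

static uint64_t depth = 1ULL << 63;

void on_call(void)            /* models: sarq $5, %gs:call_depth */
{
    depth = (uint64_t)((int64_t)depth >> 5);
}

int on_ret(void)              /* models: shlq $5, %gs:call_depth; jz <stuff> */
{
    depth <<= 5;
    return depth == 0;        /* 1 => return thunk would refill the RSB */
}
```

This is why a `sarq` executed with the wrong `%gs` base is worrying even
when it doesn't fault: it corrupts whatever that stale `%gs` offset points
at, and leaves the real counter unbalanced.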

Regards,
- Joan

QEMU wakeup log:

# 32-bit code elided. Next line calls `secondary_startup_64` from `startup_64`
0x0009a0d0:  ff 25 2a 2f 00 00        jmpq     *0x2f2a(%rip)
# Next line is `call verify_cpu` from `secondary_startup_64`
0xffffffff9a800070:  e8 f1 00 00 00           callq    0xffffffff9a800166
# This next `sarq` does not have the correct GS set?
#     RAX=0000000080050033 RBX=0000000000000800 RCX=00000000c0000080 RDX=0000000000000000
#     RSI=0000000000000000 RDI=0000000000000001 RBP=0000000000000000 RSP=000000000009e018
#     R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000000
#     R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=0000000000000000
#     RIP=ffffffff9a800166 RFL=00200097 [--S-APC] CPL=0 II=0 A20=1 SMM=0 HLT=0
#     ES =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     CS =0010 0000000000000000 ffffffff 00af9b00 DPL=0 CS64 [-RA]
#     SS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     DS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     FS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     GS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT
#     TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy
#     GDT=     0000000000098030 0000001f
#     IDT=     0000000000000000 00000000
#     CR0=80050033 CR2=0000000000000000 CR3=000000000009c000 CR4=000006b0
#     DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
#     DR6=00000000ffff0ff0 DR7=0000000000000400
#     CCS=0000000000000095 CCD=fffffffffffff6ff CCO=EFLAGS
#     EFER=0000000000000d01
0xffffffff9a800166:  65 48 c1 3c 25 90 29 03  sarq     $5, %gs:0x32990
0xffffffff9a80016e:  00 05
0xffffffff9a800170:  66 0f 1f 00              nopw     (%rax)
0xffffffff9a800174:  9c                       pushfq   
0xffffffff9a800175:  6a 00                    pushq    $0
0xffffffff9a800177:  9d                       popfq    
0xffffffff9a800178:  b8 00 00 00 00           movl     $0, %eax
0xffffffff9a80017d:  0f a2                    cpuid    
0xffffffff9a80017f:  83 f8 01                 cmpl     $1, %eax
0xffffffff9a800182:  0f 82 d2 00 00 00        jb       0xffffffff9a80025a
0xffffffff9a800188:  66 31 ff                 xorw     %di, %di
0xffffffff9a80018b:  81 fb 41 75 74 68        cmpl     $0x68747541, %ebx
0xffffffff9a800191:  75 16                    jne      0xffffffff9a8001a9
0xffffffff9a800193:  81 fa 65 6e 74 69        cmpl     $0x69746e65, %edx
0xffffffff9a800199:  75 0e                    jne      0xffffffff9a8001a9
0xffffffff9a80019b:  81 f9 63 41 4d 44        cmpl     $0x444d4163, %ecx
0xffffffff9a8001a1:  75 06                    jne      0xffffffff9a8001a9
0xffffffff9a8001a3:  66 bf 01 00              movw     $1, %di
0xffffffff9a8001a7:  eb 4d                    jmp      0xffffffff9a8001f6
0xffffffff9a8001f6:  b8 01 00 00 00           movl     $1, %eax
0xffffffff9a8001fb:  0f a2                    cpuid    
0xffffffff9a8001fd:  81 e2 61 81 00 07        andl     $0x7008161, %edx
0xffffffff9a800203:  81 f2 61 81 00 07        xorl     $0x7008161, %edx
0xffffffff9a800209:  75 4f                    jne      0xffffffff9a80025a
0xffffffff9a80020b:  b8 00 00 00 80           movl     $0x80000000, %eax
0xffffffff9a800210:  0f a2                    cpuid    
0xffffffff9a800212:  3d 01 00 00 80           cmpl     $0x80000001, %eax
0xffffffff9a800217:  72 41                    jb       0xffffffff9a80025a
0xffffffff9a800219:  b8 01 00 00 80           movl     $0x80000001, %eax
0xffffffff9a80021e:  0f a2                    cpuid    
0xffffffff9a800220:  81 e2 00 00 00 20        andl     $0x20000000, %edx
0xffffffff9a800226:  81 f2 00 00 00 20        xorl     $0x20000000, %edx
0xffffffff9a80022c:  75 2c                    jne      0xffffffff9a80025a
0xffffffff9a80022e:  b8 01 00 00 00           movl     $1, %eax
0xffffffff9a800233:  0f a2                    cpuid    
0xffffffff9a800235:  81 e2 00 00 00 06        andl     $0x6000000, %edx
0xffffffff9a80023b:  81 fa 00 00 00 06        cmpl     $0x6000000, %edx
0xffffffff9a800241:  74 22                    je       0xffffffff9a800265
0xffffffff9a800265:  9d                       popfq    
0xffffffff9a800266:  31 c0                    xorl     %eax, %eax
0xffffffff9a800268:  e9 23 24 d4 00           jmp      0xffffffff9b542690
0xffffffff9b542690:  f3 0f 1e fa              endbr64  
0xffffffff9b542694:  65 48 c1 24 25 90 29 03  shlq     $5, %gs:0x32990
0xffffffff9b54269c:  00 05
0xffffffff9b54269e:  74 02                    je       0xffffffff9b5426a2
0xffffffff9b5426a2:  e8 01 00 00 00           callq    0xffffffff9b5426a8
0xffffffff9b5426a8:  e8 01 00 00 00           callq    0xffffffff9b5426ae
0xffffffff9b5426ae:  e8 01 00 00 00           callq    0xffffffff9b5426b4
0xffffffff9b5426b4:  e8 01 00 00 00           callq    0xffffffff9b5426ba
0xffffffff9b5426ba:  e8 01 00 00 00           callq    0xffffffff9b5426c0
0xffffffff9b5426c0:  e8 01 00 00 00           callq    0xffffffff9b5426c6
0xffffffff9b5426c6:  e8 01 00 00 00           callq    0xffffffff9b5426cc
0xffffffff9b5426cc:  e8 01 00 00 00           callq    0xffffffff9b5426d2
0xffffffff9b5426d2:  e8 01 00 00 00           callq    0xffffffff9b5426d8
0xffffffff9b5426d8:  e8 01 00 00 00           callq    0xffffffff9b5426de
0xffffffff9b5426de:  e8 01 00 00 00           callq    0xffffffff9b5426e4
0xffffffff9b5426e4:  e8 01 00 00 00           callq    0xffffffff9b5426ea
0xffffffff9b5426ea:  e8 01 00 00 00           callq    0xffffffff9b5426f0
0xffffffff9b5426f0:  e8 01 00 00 00           callq    0xffffffff9b5426f6
0xffffffff9b5426f6:  e8 01 00 00 00           callq    0xffffffff9b5426fc
0xffffffff9b5426fc:  e8 01 00 00 00           callq    0xffffffff9b542702
0xffffffff9b542702:  48 81 c4 80 00 00 00     addq     $0x80, %rsp
0xffffffff9b542709:  65 48 c7 04 25 90 29 03  movq     $-1, %gs:0x32990
0xffffffff9b542711:  00 ff ff ff ff
# Returns from `verify_cpu`
0xffffffff9b542716:  c3                       retq     
0xffffffff9a800075:  48 8b 04 25 38 2e 64 9c  movq     0xffffffff9c642e38, %rax
0xffffffff9a80007d:  48 05 00 00 61 1c        addq     $0x1c610000, %rax
0xffffffff9a800083:  0f 20 e1                 movq     %cr4, %rcx
0xffffffff9a800086:  83 e1 40                 andl     $0x40, %ecx
0xffffffff9a800089:  81 c9 a0 00 00 00        orl      $0xa0, %ecx
0xffffffff9a80008f:  f7 05 87 bf 6c 01 01 00  testl    $1, 0x16cbf87(%rip)
0xffffffff9a800097:  00 00
0xffffffff9a800099:  74 06                    je       0xffffffff9a8000a1
0xffffffff9a8000a1:  0f 22 e1                 movq     %rcx, %cr4
0xffffffff9a8000a4:  48 03 05 65 9f e1 01     addq     0x1e19f65(%rip), %rax
0xffffffff9a8000ab:  56                       pushq    %rsi
0xffffffff9a8000ac:  48 89 c7                 movq     %rax, %rdi
# Next line is `call sev_verify_cbit` from `secondary_startup_64`
0xffffffff9a8000af:  e8 c2 01 00 00           callq    0xffffffff9a800276
# This next `sarq` does not have the correct GS set?
#     RAX=0000000002e10000 RBX=0000000000000800 RCX=00000000000000a0 RDX=0000000006000000
#     RSI=0000000000000000 RDI=0000000002e10000 RBP=0000000000000000 RSP=000000000009e018
#     R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000000
#     R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=0000000000000000
#     RIP=ffffffff9a8000af RFL=00200007 [-----PC] CPL=0 II=0 A20=1 SMM=0 HLT=0
#     ES =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     CS =0010 0000000000000000 ffffffff 00af9b00 DPL=0 CS64 [-RA]
#     SS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     DS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     FS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     GS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT
#     TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy
#     GDT=     0000000000098030 0000001f
#     IDT=     0000000000000000 00000000
#     CR0=80050033 CR2=0000000000000000 CR3=000000000009c000 CR4=000000a0
#     DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
#     DR6=00000000ffff0ff0 DR7=0000000000000400
#     CCS=ffffffffe6800000 CCD=0000000002e10000 CCO=ADDQ
#     EFER=0000000000000d01
0xffffffff9a800276:  65 48 c1 3c 25 90 29 03  sarq     $5, %gs:0x32990
0xffffffff9a80027e:  00 05
0xffffffff9a800280:  66 0f 1f 00              nopw     (%rax)
0xffffffff9a800284:  48 8b 35 ad 2b e4 01     movq     0x1e42bad(%rip), %rsi
0xffffffff9a80028b:  48 85 f6                 testq    %rsi, %rsi
0xffffffff9a80028e:  74 4b                    je       0xffffffff9a8002db
0xffffffff9a8002db:  48 89 f8                 movq     %rdi, %rax
0xffffffff9a8002de:  e9 ad 23 d4 00           jmp      0xffffffff9b542690
0xffffffff9b542690:  f3 0f 1e fa              endbr64  
0xffffffff9b542694:  65 48 c1 24 25 90 29 03  shlq     $5, %gs:0x32990
0xffffffff9b54269c:  00 05
0xffffffff9b54269e:  74 02                    je       0xffffffff9b5426a2
# Returns from `sev_verify_cbit`
0xffffffff9b5426a0:  c3                       retq     
0xffffffff9a8000b4:  5e                       popq     %rsi
0xffffffff9a8000b5:  0f 22 d8                 movq     %rax, %cr3
0xffffffff9a8000b8:  0f 20 e1                 movq     %cr4, %rcx
0xffffffff9a8000bb:  48 89 c8                 movq     %rcx, %rax
0xffffffff9a8000be:  48 81 f1 80 00 00 00     xorq     $0x80, %rcx
0xffffffff9a8000c5:  0f 22 e1                 movq     %rcx, %cr4
0xffffffff9a8000c8:  0f 22 e0                 movq     %rax, %cr4
0xffffffff9a8000cb:  48 c7 c0 d4 00 80 9a     movq     $-0x657fff2c, %rax
0xffffffff9a8000d2:  ff e0                    jmpq     *%rax
0xffffffff9a8000d4:  0f 01 15 25 9f e1 01     lgdtq    0x1e19f25(%rip)
0xffffffff9a8000db:  31 c0                    xorl     %eax, %eax
0xffffffff9a8000dd:  8e d8                    movl     %eax, %ds
0xffffffff9a8000df:  8e d0                    movl     %eax, %ss
0xffffffff9a8000e1:  8e c0                    movl     %eax, %es
0xffffffff9a8000e3:  8e e0                    movl     %eax, %fs
0xffffffff9a8000e5:  8e e8                    movl     %eax, %gs
0xffffffff9a8000e7:  b9 01 01 00 c0           movl     $0xc0000101, %ecx
0xffffffff9a8000ec:  8b 05 36 e5 fa 01        movl     0x1fae536(%rip), %eax
0xffffffff9a8000f2:  8b 15 34 e5 fa 01        movl     0x1fae534(%rip), %edx
# Restores GS in `secondary_startup_64`
0xffffffff9a8000f8:  0f 30                    wrmsr    
# Processor state after is:
#     RAX=00000000c7a00000 RBX=0000000000000800 RCX=00000000c0000101 RDX=00000000ffff97b9
#     RSI=0000000000000000 RDI=0000000002e10000 RBP=0000000000000000 RSP=000000000009e020
#     R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000000
#     R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=0000000000000000
#     RIP=ffffffff9a8000fa RFL=00200046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
#     ES =0000 0000000000000000 00000000 00000000
#     CS =0010 0000000000000000 ffffffff 00af9b00 DPL=0 CS64 [-RA]
#     SS =0000 0000000000000000 00000000 00000000
#     DS =0000 0000000000000000 00000000 00000000
#     FS =0000 0000000000000000 00000000 00000000
#     GS =0000 ffff97b9c7a00000 00000000 00000000
#     LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT
#     TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy
#     GDT=     ffff97b9c7a0b000 0000007f
#     IDT=     0000000000000000 00000000
#     CR0=80050033 CR2=0000000000000000 CR3=0000000002e10000 CR4=000000a0
#     DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
#     DR6=00000000ffff0ff0 DR7=0000000000000400
#     CCS=0000000000000081 CCD=0000000000000020 CCO=CLR
#     EFER=0000000000000d01
0xffffffff9a8000fa:  48 8b 25 37 e5 fa 01     movq     0x1fae537(%rip), %rsp
0xffffffff9a800101:  56                       pushq    %rsi
# Next line is `call early_setup_idt` from `secondary_startup_64`
0xffffffff9a800102:  e8 9f 0f 00 00           callq    0xffffffff9a8010a6
0xffffffff9a8010a6:  65 48 c1 3c 25 90 29 03  sarq     $5, %gs:0x32990
0xffffffff9a8010ae:  00 05
0xffffffff9a8010b0:  66 0f 1f 00              nopw     (%rax)
0xffffffff9a8010b4:  e8 2d af 08 00           callq    0xffffffff9a88bfe6
0xffffffff9a88bfe6:  65 48 c1 3c 25 90 29 03  sarq     $5, %gs:0x32990
0xffffffff9a88bfee:  00 05
0xffffffff9a88bff0:  66 0f 1f 00              nopw     (%rax)
0xffffffff9a88bff4:  bf 03 00 00 00           movl     $3, %edi
0xffffffff9a88bff9:  e8 18 68 f7 ff           callq    0xffffffff9a802816
0xffffffff9a802816:  65 48 c1 3c 25 90 29 03  sarq     $5, %gs:0x32990
0xffffffff9a80281e:  00 05
0xffffffff9a802820:  f3 0f 1e fa              endbr64  
0xffffffff9a802824:  8b 15 3e 98 6c 01        movl     0x16c983e(%rip), %edx
# ... more stuff inside `early_setup_idt` elided
0xffffffff9a800107:  5e                       popq     %rsi
0xffffffff9a800108:  b8 01 00 00 80           movl     $0x80000001, %eax
0xffffffff9a80010d:  0f a2                    cpuid    
0xffffffff9a80010f:  89 d7                    movl     %edx, %edi
0xffffffff9a800111:  b9 80 00 00 c0           movl     $0xc0000080, %ecx
0xffffffff9a800116:  0f 32                    rdmsr    
0xffffffff9a800118:  89 c2                    movl     %eax, %edx
0xffffffff9a80011a:  0f ba e8 00              btsl     $0, %eax
0xffffffff9a80011e:  0f ba e7 14              btl      $0x14, %edi
0xffffffff9a800122:  73 0d                    jae      0xffffffff9a800131
0xffffffff9a800124:  0f ba e8 0b              btsl     $0xb, %eax
0xffffffff9a800128:  48 0f ba 2d 8f 9f e1 01  btsq     $0x3f, 0x1e19f8f(%rip)
0xffffffff9a800130:  3f
0xffffffff9a800131:  39 d0                    cmpl     %edx, %eax
0xffffffff9a800133:  74 04                    je       0xffffffff9a800139
0xffffffff9a800139:  b8 33 00 05 80           movl     $0x80050033, %eax
0xffffffff9a80013e:  0f 22 c0                 movq     %rax, %cr0
0xffffffff9a800141:  6a 00                    pushq    $0
0xffffffff9a800143:  9d                       popfq    
0xffffffff9a800144:  48 89 f7                 movq     %rsi, %rdi
0xffffffff9a800147:  68 5a 01 80 9a           pushq    $-0x657ffea6
0xffffffff9a80014c:  31 ed                    xorl     %ebp, %ebp
0xffffffff9a80014e:  48 8b 05 cb e4 fa 01     movq     0x1fae4cb(%rip), %rax
0xffffffff9a800155:  6a 10                    pushq    $0x10
0xffffffff9a800157:  50                       pushq    %rax
0xffffffff9a800158:  48 cb                    lretq    
0xffffffff9a86db70:  f3 0f 1e fa              endbr64  
# START wakeup_long64
0xffffffff9a86db74:  48 8b 04 25 90 0a 63 9c  movq     0xffffffff9c630a90, %rax
0xffffffff9a86db7c:  48 ba f0 de bc 9a 78 56  movabsq  $0x123456789abcdef0, %rdx
0xffffffff9a86db84:  34 12
0xffffffff9a86db86:  48 39 d0                 cmpq     %rdx, %rax
0xffffffff9a86db89:  74 0c                    je       0xffffffff9a86db97
0xffffffff9a86db97:  66 b8 18 00              movw     $0x18, %ax
0xffffffff9a86db9b:  8e d0                    movl     %eax, %ss
0xffffffff9a86db9d:  8e d8                    movl     %eax, %ds
0xffffffff9a86db9f:  8e c0                    movl     %eax, %es
0xffffffff9a86dba1:  8e e0                    movl     %eax, %fs
# This clears GS again
0xffffffff9a86dba3:  8e e8                    movl     %eax, %gs
# Processor state after is:
#     RAX=123456789abc0018 RBX=0000000000000000 RCX=00000000c0000080 RDX=123456789abcdef0
#     RSI=0000000000000000 RDI=0000000000000000 RBP=0000000000000000 RSP=ffffffff9cff3fd8
#     R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000000
#     R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=0000000000000000
#     RIP=ffffffff9a86dba5 RFL=00000046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
#     ES =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     CS =0010 0000000000000000 ffffffff 00af9b00 DPL=0 CS64 [-RA]
#     SS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     DS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     FS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     GS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT
#     TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy
#     GDT=     ffff97b9c7a0b000 0000007f
#     IDT=     ffffffff9c604000 000001ff
#     CR0=80050033 CR2=0000000000000000 CR3=0000000002e10000 CR4=000000a0
#     DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
#     DR6=00000000ffff0ff0 DR7=0000000000000400
#     CCS=0000000000000044 CCD=0000000000000000 CCO=EFLAGS
#     EFER=0000000000000d01
0xffffffff9a86dba5:  48 8b 24 25 88 0a 63 9c  movq     0xffffffff9c630a88, %rsp
0xffffffff9a86dbad:  48 8b 1c 25 78 0a 63 9c  movq     0xffffffff9c630a78, %rbx
0xffffffff9a86dbb5:  48 8b 3c 25 70 0a 63 9c  movq     0xffffffff9c630a70, %rdi
0xffffffff9a86dbbd:  48 8b 34 25 68 0a 63 9c  movq     0xffffffff9c630a68, %rsi
0xffffffff9a86dbc5:  48 8b 2c 25 60 0a 63 9c  movq     0xffffffff9c630a60, %rbp
0xffffffff9a86dbcd:  48 8b 04 25 80 0a 63 9c  movq     0xffffffff9c630a80, %rax
0xffffffff9a86dbd5:  ff e0                    jmpq     *%rax
# START `.Lresume_point` in `do_suspend_lowlevel`
0xffffffff9a86dc90:  48 c7 c0 e0 53 0e 9d     movq     $-0x62f1ac20, %rax
0xffffffff9a86dc97:  48 8b 98 e0 00 00 00     movq     0xe0(%rax), %rbx
0xffffffff9a86dc9e:  0f 22 e3                 movq     %rbx, %cr4
0xffffffff9a86dca1:  48 8b 98 d8 00 00 00     movq     0xd8(%rax), %rbx
0xffffffff9a86dca8:  0f 22 db                 movq     %rbx, %cr3
0xffffffff9a86dcab:  48 8b 98 d0 00 00 00     movq     0xd0(%rax), %rbx
0xffffffff9a86dcb2:  0f 22 d3                 movq     %rbx, %cr2
0xffffffff9a86dcb5:  48 8b 98 c8 00 00 00     movq     0xc8(%rax), %rbx
0xffffffff9a86dcbc:  0f 22 c3                 movq     %rbx, %cr0
0xffffffff9a86dcbf:  ff b0 90 00 00 00        pushq    0x90(%rax)
0xffffffff9a86dcc5:  9d                       popfq    
0xffffffff9a86dcc6:  48 8b a0 98 00 00 00     movq     0x98(%rax), %rsp
0xffffffff9a86dccd:  48 8b 68 20              movq     0x20(%rax), %rbp
0xffffffff9a86dcd1:  48 8b 70 68              movq     0x68(%rax), %rsi
0xffffffff9a86dcd5:  48 8b 78 70              movq     0x70(%rax), %rdi
0xffffffff9a86dcd9:  48 8b 58 28              movq     0x28(%rax), %rbx
0xffffffff9a86dcdd:  48 8b 48 58              movq     0x58(%rax), %rcx
0xffffffff9a86dce1:  48 8b 50 60              movq     0x60(%rax), %rdx
0xffffffff9a86dce5:  4c 8b 40 48              movq     0x48(%rax), %r8
0xffffffff9a86dce9:  4c 8b 48 40              movq     0x40(%rax), %r9
0xffffffff9a86dced:  4c 8b 50 38              movq     0x38(%rax), %r10
0xffffffff9a86dcf1:  4c 8b 58 30              movq     0x30(%rax), %r11
0xffffffff9a86dcf5:  4c 8b 60 18              movq     0x18(%rax), %r12
0xffffffff9a86dcf9:  4c 8b 68 10              movq     0x10(%rax), %r13
0xffffffff9a86dcfd:  4c 8b 70 08              movq     8(%rax), %r14
0xffffffff9a86dd01:  4c 8b 38                 movq     (%rax), %r15
0xffffffff9a86dd04:  31 c0                    xorl     %eax, %eax
0xffffffff9a86dd06:  48 83 c4 08              addq     $8, %rsp
# Jumps to `restore_processor_state`
0xffffffff9a86dd0a:  e9 31 ed cb 00           jmp      0xffffffff9b52ca40
0xffffffff9b52ca40:  55                       pushq    %rbp
0xffffffff9b52ca41:  48 89 e5                 movq     %rsp, %rbp
0xffffffff9b52ca44:  41 57                    pushq    %r15
0xffffffff9b52ca46:  41 56                    pushq    %r14
0xffffffff9b52ca48:  41 55                    pushq    %r13
0xffffffff9b52ca4a:  41 54                    pushq    %r12
0xffffffff9b52ca4c:  53                       pushq    %rbx
0xffffffff9b52ca4d:  48 83 ec 20              subq     $0x20, %rsp
0xffffffff9b52ca51:  80 3d c4 8a bb 01 00     cmpb     $0, 0x1bb8ac4(%rip)
0xffffffff9b52ca58:  74 15                    je       0xffffffff9b52ca6f
0xffffffff9b52ca5a:  48 8b 05 67 8a bb 01     movq     0x1bb8a67(%rip), %rax
0xffffffff9b52ca61:  b9 a0 01 00 00           movl     $0x1a0, %ecx
0xffffffff9b52ca66:  48 89 c2                 movq     %rax, %rdx
0xffffffff9b52ca69:  48 c1 ea 20              shrq     $0x20, %rdx
0xffffffff9b52ca6d:  0f 30                    wrmsr    
0xffffffff9b52ca6f:  48 8b 05 6a 8a bb 01     movq     0x1bb8a6a(%rip), %rax
0xffffffff9b52ca76:  b9 80 00 00 c0           movl     $0xc0000080, %ecx
0xffffffff9b52ca7b:  48 89 c2                 movq     %rax, %rdx
0xffffffff9b52ca7e:  48 c1 ea 20              shrq     $0x20, %rdx
0xffffffff9b52ca82:  0f 30                    wrmsr    
0xffffffff9b52ca84:  48 8b 05 35 8a bb 01     movq     0x1bb8a35(%rip), %rax
0xffffffff9b52ca8b:  0f 22 e0                 movq     %rax, %cr4
0xffffffff9b52ca8e:  48 89 05 2b 8a bb 01     movq     %rax, 0x1bb8a2b(%rip)
0xffffffff9b52ca95:  48 8b 05 1c 8a bb 01     movq     0x1bb8a1c(%rip), %rax
0xffffffff9b52ca9c:  0f 22 d8                 movq     %rax, %cr3
0xffffffff9b52ca9f:  48 8b 05 0a 8a bb 01     movq     0x1bb8a0a(%rip), %rax
0xffffffff9b52caa6:  0f 22 d0                 movq     %rax, %cr2
0xffffffff9b52caa9:  48 8b 05 f8 89 bb 01     movq     0x1bb89f8(%rip), %rax
0xffffffff9b52cab0:  0f 22 c0                 movq     %rax, %cr0
0xffffffff9b52cab3:  48 89 05 ee 89 bb 01     movq     %rax, 0x1bb89ee(%rip)
0xffffffff9b52caba:  0f 01 1d 35 8a bb 01     lidtq    0x1bb8a35(%rip)
0xffffffff9b52cac1:  b8 18 00 00 00           movl     $0x18, %eax
0xffffffff9b52cac6:  8e d0                    movl     %eax, %ss
0xffffffff9b52cac8:  b8 2b 00 00 00           movl     $0x2b, %eax
0xffffffff9b52cacd:  89 c2                    movl     %eax, %edx
0xffffffff9b52cacf:  8e da                    movl     %edx, %ds
0xffffffff9b52cad1:  8e c0                    movl     %eax, %es
0xffffffff9b52cad3:  48 8b 05 b6 89 bb 01     movq     0x1bb89b6(%rip), %rax
0xffffffff9b52cada:  b9 01 01 00 c0           movl     $0xc0000101, %ecx
0xffffffff9b52cadf:  48 89 c2                 movq     %rax, %rdx
0xffffffff9b52cae2:  48 c1 ea 20              shrq     $0x20, %rdx
# Restores GS inside `__restore_processor_state`. Processor state after is:
#     RAX=ffff97b9c7a00000 RBX=ffff97b9c5b6be00 RCX=00000000c0000101 RDX=00000000ffff97b9
#     RSI=ffffffffd43c95f9 RDI=0000000000000004 RBP=ffffad4e8062fca0 RSP=ffffad4e8062fc58
#     R8 =0000000000000004 R9 =0000000021bee048 R10=00000000aaaaaaab R11=0000000000000005
#     R12=0000000000000000 R13=0000000000000000 R14=0000000000000004 R15=ffff97b9c5929020
#     RIP=ffffffff9b52cae8 RFL=00000003 [------C] CPL=0 II=0 A20=1 SMM=0 HLT=0
#     ES =002b 0000000000000000 ffffffff 00cff300 DPL=3 DS   [-WA]
#     CS =0010 0000000000000000 ffffffff 00af9b00 DPL=0 CS64 [-RA]
#     SS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     DS =002b 0000000000000000 ffffffff 00cff300 DPL=3 DS   [-WA]
#     FS =0018 0000000000000000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     GS =0018 ffff97b9c7a00000 ffffffff 00cf9300 DPL=0 DS   [-WA]
#     LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT
#     TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy
#     GDT=     ffff97b9c7a0b000 0000007f
#     IDT=     fffffe0000000000 00000fff
#     CR0=80050033 CR2=000000000049304a CR3=0000000005b58000 CR4=000006f0
#     DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
#     DR6=00000000ffff0ff0 DR7=0000000000000400
#     CCS=00000001ffff2f73 CCD=00000000ffff97b9 CCO=SARQ
#     EFER=0000000000000d01
0xffffffff9b52cae6:  0f 30                    wrmsr    


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:08:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 08:08:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476823.739189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGF7A-00061T-20; Fri, 13 Jan 2023 08:08:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476823.739189; Fri, 13 Jan 2023 08:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGF79-00061M-V3; Fri, 13 Jan 2023 08:08:39 +0000
Received: by outflank-mailman (input) for mailman id 476823;
 Fri, 13 Jan 2023 08:08:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O/z9=5K=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pGF78-00061G-NB
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 08:08:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7add2999-9319-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 09:08:37 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E56295F986;
 Fri, 13 Jan 2023 08:08:35 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C182E13913;
 Fri, 13 Jan 2023 08:08:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 7oPDLYMRwWNaSwAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 13 Jan 2023 08:08:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7add2999-9319-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673597315; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=m4sELtLR4erGC6Nt+c21AO/Z8JBEn7P+96bMsRdB8wQ=;
	b=CJ8UhLWnNyD8jGmtMk023GJAiIicP6C8Z5knrrklLdTD9RU1kAOi5K01zM0Kwotiv1EC7h
	RgUXeE1OlbvVzL+1ymvzbzCSMmvn5cyNT1v7PXuYfcw9lsFMiCMeKSWqBExVL/1WR0oZsY
	mrtsmsCQayAtWrVoAfjMi1p9ACM9tWI=
Message-ID: <bdea54df-59dc-3d4d-dd0c-8c45403dea24@suse.com>
Date: Fri, 13 Jan 2023 09:08:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Cc: regressions@lists.linux.dev
References: <Y8DIodWQGm99RA+E@mail-itl>
From: Juergen Gross <jgross@suse.com>
Subject: Re: S3 under Xen regression between 6.1.1 and 6.1.3
In-Reply-To: <Y8DIodWQGm99RA+E@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------78E4ABhTHO0B4lG8jyzNTay2"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------78E4ABhTHO0B4lG8jyzNTay2
Content-Type: multipart/mixed; boundary="------------LaTlo7JDqFZbSkQGkk2AzUwV";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Cc: regressions@lists.linux.dev
Message-ID: <bdea54df-59dc-3d4d-dd0c-8c45403dea24@suse.com>
Subject: Re: S3 under Xen regression between 6.1.1 and 6.1.3
References: <Y8DIodWQGm99RA+E@mail-itl>
In-Reply-To: <Y8DIodWQGm99RA+E@mail-itl>

--------------LaTlo7JDqFZbSkQGkk2AzUwV
Content-Type: multipart/mixed; boundary="------------JTCEnxmZjr8uUX0QsykuUQXk"

--------------JTCEnxmZjr8uUX0QsykuUQXk
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTMuMDEuMjMgMDM6NTcsIE1hcmVrIE1hcmN6eWtvd3NraS1Hw7NyZWNraSB3cm90ZToN
Cj4gSGksDQo+IA0KPiA2LjEuMyBhcyBQViBkb20wIGNyYXNoZXMgd2hlbiBhdHRlbXB0aW5n
IHRvIHN1c3BlbmQuIDYuMS4xIHdvcmtzLiBUaGUNCj4gY3Jhc2g6DQo+IA0KPiAgICAgIFsg
IDM0OC4yODQwMDRdIFBNOiBzdXNwZW5kIGVudHJ5IChkZWVwKQ0KPiAgICAgIFsgIDM0OC4y
ODk1MzJdIEZpbGVzeXN0ZW1zIHN5bmM6IDAuMDA1IHNlY29uZHMNCj4gICAgICBbICAzNDgu
MjkxNTQ1XSBGcmVlemluZyB1c2VyIHNwYWNlIHByb2Nlc3NlcyAuLi4gKGVsYXBzZWQgMC4w
MDAgc2Vjb25kcykgZG9uZS4NCj4gICAgICBbICAzNDguMjkyNDU3XSBPT00ga2lsbGVyIGRp
c2FibGVkLg0KPiAgICAgIFsgIDM0OC4yOTI0NjJdIEZyZWV6aW5nIHJlbWFpbmluZyBmcmVl
emFibGUgdGFza3MgLi4uIChlbGFwc2VkIDAuMTA0IHNlY29uZHMpIGRvbmUuDQo+ICAgICAg
WyAgMzQ4LjM5NjYxMl0gcHJpbnRrOiBTdXNwZW5kaW5nIGNvbnNvbGUocykgKHVzZSBub19j
b25zb2xlX3N1c3BlbmQgdG8gZGVidWcpDQo+ICAgICAgWyAgMzQ4Ljc0OTIyOF0gUE06IHN1
c3BlbmQgZGV2aWNlcyB0b29rIDAuMzUyIHNlY29uZHMNCj4gICAgICBbICAzNDguNzY5NzEz
XSBBQ1BJOiBFQzogaW50ZXJydXB0IGJsb2NrZWQNCj4gICAgICBbICAzNDguODE2MDc3XSBC
VUc6IGtlcm5lbCBOVUxMIHBvaW50ZXIgZGVyZWZlcmVuY2UsIGFkZHJlc3M6IDAwMDAwMDAw
MDAwMDAwMWMNCj4gICAgICBbICAzNDguODE2MDgwXSAjUEY6IHN1cGVydmlzb3IgcmVhZCBh
Y2Nlc3MgaW4ga2VybmVsIG1vZGUNCj4gICAgICBbICAzNDguODE2MDgxXSAjUEY6IGVycm9y
X2NvZGUoMHgwMDAwKSAtIG5vdC1wcmVzZW50IHBhZ2UNCj4gICAgICBbICAzNDguODE2MDgz
XSBQR0QgMCBQNEQgMA0KPiAgICAgIFsgIDM0OC44MTYwODZdIE9vcHM6IDAwMDAgWyMxXSBQ
UkVFTVBUIFNNUCBOT1BUSQ0KPiAgICAgIFsgIDM0OC44MTYwODldIENQVTogMCBQSUQ6IDY3
NjQgQ29tbTogc3lzdGVtZC1zbGVlcCBOb3QgdGFpbnRlZCA2LjEuMy0xLmZjMzIucXViZXMu
eDg2XzY0ICMxDQo+ICAgICAgWyAgMzQ4LjgxNjA5Ml0gSGFyZHdhcmUgbmFtZTogU3RhciBM
YWJzIFN0YXJCb29rL1N0YXJCb29rLCBCSU9TIDguMDEgMDcvMDMvMjAyMg0KPiAgICAgIFsg
IDM0OC44MTYwOTNdIFJJUDogZTAzMDphY3BpX2dldF93YWtldXBfYWRkcmVzcysweGMvMHgy
MA0KPiAgICAgIFsgIDM0OC44MTYxMDBdIENvZGU6IDQ0IDAwIDAwIDQ4IDhiIDA1IDA0IGEz
IDgyIDAyIGMzIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNj
IGNjIGNjIGNjIGNjIDBmIDFmIDQ0IDAwIDAwIDQ4IDhiIDA1IGZjIDlkIDgyIDAyIDw4Yj4g
NDAgMWMgYzMgY2MgY2MgY2MgY2MgNjYgNjYgMmUgMGYgMWYgODQgMDAgMDAgMDAgMDAgMDAg
OTAgMGYgMWYNCj4gICAgICBbICAzNDguODE2MTAzXSBSU1A6IGUwMmI6ZmZmZmM5MDA0MjUz
N2QwOCBFRkxBR1M6IDAwMDEwMjQ2DQo+ICAgICAgWyAgMzQ4LjgxNjEwNV0gUkFYOiAwMDAw
MDAwMDAwMDAwMDAwIFJCWDogMDAwMDAwMDAwMDAwMDAwMyBSQ1g6IDIwYzQ5YmE1ZTM1M2Y3
Y2YNCj4gICAgICBbICAzNDguODE2MTA2XSBSRFg6IDAwMDAwMDAwMDAwMGNkMTkgUlNJOiAw
MDAwMDAwMDAwMDJlZTlhIFJESTogMDAyYTA1MWVkNDJkNzY5NA0KPiAgICAgIFsgIDM0OC44
MTYxMDhdIFJCUDogMDAwMDAwMDAwMDAwMDAwMyBSMDg6IGZmZmZjOTAwNDI1MzdjYTAgUjA5
OiBmZmZmZmZmZjgyYzVlNDY4DQo+ICAgICAgWyAgMzQ4LjgxNjExMF0gUjEwOiAwMDAwMDAw
MDAwMDA3ZmYwIFIxMTogMDAwMDAwMDAwMDAwMDAwMCBSMTI6IDAwMDAwMDAwMDAwMDAwMDAN
Cj4gICAgICBbICAzNDguODE2MTExXSBSMTM6IGZmZmZmZmZmZmZmZmZmZjIgUjE0OiBmZmZm
ODg4MTIyMDZlNmMwIFIxNTogZmZmZjg4ODEyMjA2ZTZlMA0KPiAgICAgIFsgIDM0OC44MTYx
MjFdIEZTOiAgMDAwMDdjYjQ5YjAxZWI4MCgwMDAwKSBHUzpmZmZmODg4MTg5NDAwMDAwKDAw
MDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANCj4gICAgICBbICAzNDguODE2MTIzXSBDUzog
IGUwMzAgRFM6IDAwMDAgRVM6IDAwMDAgQ1IwOiAwMDAwMDAwMDgwMDUwMDMzDQo+ICAgICAg
WyAgMzQ4LjgxNjEyNF0gQ1IyOiAwMDAwMDAwMDAwMDAwMDFjIENSMzogMDAwMDAwMDEyMjMx
YTAwMCBDUjQ6IDAwMDAwMDAwMDAwNTA2NjANCj4gICAgICBbICAzNDguODE2MTMxXSBDYWxs
IFRyYWNlOg0KPiAgICAgIFsgIDM0OC44MTYxMzNdICA8VEFTSz4NCj4gICAgICBbICAzNDgu
ODE2MTM0XSAgYWNwaV9wbV9wcmVwYXJlKzB4MWEvMHg1MA0KPiAgICAgIFsgIDM0OC44MTYx
NDFdICBzdXNwZW5kX2VudGVyKzB4OTQvMHgzNjANCj4gICAgICBbICAzNDguODE2MTQ2XSAg
c3VzcGVuZF9kZXZpY2VzX2FuZF9lbnRlcisweDE5OC8weDJiMA0KPiAgICAgIFsgIDM0OC44
MTYxNTBdICBlbnRlcl9zdGF0ZSsweDE4ZC8weDFmNQ0KPiAgICAgIFsgIDM0OC44MTYxNTVd
ICBwbV9zdXNwZW5kLmNvbGQrMHgyMC8weDZiDQo+ICAgICAgWyAgMzQ4LjgxNjE1OV0gIHN0
YXRlX3N0b3JlKzB4MjcvMHg2MA0KPiAgICAgIFsgIDM0OC44MTYxNjNdICBrZXJuZnNfZm9w
X3dyaXRlX2l0ZXIrMHgxMjUvMHgxYzANCj4gICAgICBbICAzNDguODE2MTY5XSAgbmV3X3N5
bmNfd3JpdGUrMHgxMDUvMHgxOTANCj4gICAgICBbICAzNDguODE2MTc2XSAgdmZzX3dyaXRl
KzB4MjExLzB4MmEwDQo+ICAgICAgWyAgMzQ4LjgxNjE4MF0gIGtzeXNfd3JpdGUrMHg2Ny8w
eGUwDQo+ICAgICAgWyAgMzQ4LjgxNjE4M10gIGRvX3N5c2NhbGxfNjQrMHg1OS8weDkwDQo+
ICAgICAgWyAgMzQ4LjgxNjE4OF0gID8gZG9fc3lzY2FsbF82NCsweDY5LzB4OTANCj4gICAg
ICBbICAzNDguODE2MTkyXSAgPyBleGNfcGFnZV9mYXVsdCsweDc2LzB4MTcwDQo+ICAgICAg
WyAgMzQ4LjgxNjE5NV0gIGVudHJ5X1NZU0NBTExfNjRfYWZ0ZXJfaHdmcmFtZSsweDYzLzB4
Y2QNCj4gICAgICBbICAzNDguODE2MjAwXSBSSVA6IDAwMzM6MHg3Y2I0OWMxNDEyZjcNCj4g
ICAgICBbICAzNDguODE2MjAzXSBDb2RlOiAwZCAwMCBmNyBkOCA2NCA4OSAwMiA0OCBjNyBj
MCBmZiBmZiBmZiBmZiBlYiBiNyAwZiAxZiAwMCBmMyAwZiAxZSBmYSA2NCA4YiAwNCAyNSAx
OCAwMCAwMCAwMCA4NSBjMCA3NSAxMCBiOCAwMSAwMCAwMCAwMCAwZiAwNSA8NDg+IDNkIDAw
IGYwIGZmIGZmIDc3IDUxIGMzIDQ4IDgzIGVjIDI4IDQ4IDg5IDU0IDI0IDE4IDQ4IDg5IDc0
IDI0DQo+ICAgICAgWyAgMzQ4LjgxNjIwNF0gUlNQOiAwMDJiOjAwMDA3ZmZjMTI1ZjYzZjgg
RUZMQUdTOiAwMDAwMDI0NiBPUklHX1JBWDogMDAwMDAwMDAwMDAwMDAwMQ0KPiAgICAgIFsg
IDM0OC44MTYyMDZdIFJBWDogZmZmZmZmZmZmZmZmZmZkYSBSQlg6IDAwMDAwMDAwMDAwMDAw
MDQgUkNYOiAwMDAwN2NiNDljMTQxMmY3DQo+ICAgICAgWyAgMzQ4LjgxNjIwOF0gUkRYOiAw
MDAwMDAwMDAwMDAwMDA0IFJTSTogMDAwMDdmZmMxMjVmNjRlMCBSREk6IDAwMDAwMDAwMDAw
MDAwMDQNCj4gICAgICBbICAzNDguODE2MjA5XSBSQlA6IDAwMDA3ZmZjMTI1ZjY0ZTAgUjA4
OiAwMDAwNWM4M2Q3NzJiY2EwIFIwOTogMDAwMDAwMDAwMDAwMDAwZA0KPiAgICAgIFsgIDM0
OC44MTYyMTBdIFIxMDogMDAwMDVjODNkNzcyN2ViMCBSMTE6IDAwMDAwMDAwMDAwMDAyNDYg
UjEyOiAwMDAwMDAwMDAwMDAwMDA0DQo+ICAgICAgWyAgMzQ4LjgxNjIxMV0gUjEzOiAwMDAw
NWM4M2Q3NzI3MmQwIFIxNDogMDAwMDAwMDAwMDAwMDAwNCBSMTU6IDAwMDA3Y2I0OWMyMTM3
MDANCj4gICAgICBbICAzNDguODE2MjEzXSAgPC9UQVNLPg0KPiAgICAgIFsgIDM0OC44MTYy
MTRdIE1vZHVsZXMgbGlua2VkIGluOiBsb29wIHZmYXQgZmF0IHNuZF9oZGFfY29kZWNfaGRt
aSBzbmRfc29mX3BjaV9pbnRlbF90Z2wgc25kX3NvZl9pbnRlbF9oZGFfY29tbW9uIHNvdW5k
d2lyZV9pbnRlbCBzb3VuZHdpcmVfZ2VuZXJpY19hbGxvY2F0aW9uIHNvdW5kd2lyZV9jYWRl
bmNlIHNuZF9zb2ZfaW50ZWxfaGRhIHNuZF9zb2ZfcGNpIHNuZF9zb2ZfeHRlbnNhX2RzcCBz
bmRfc29mIHNuZF9zb2ZfdXRpbHMgc25kX3NvY19oZGFjX2hkYSBzbmRfaGRhX2V4dF9jb3Jl
IHNuZF9zb2NfYWNwaV9pbnRlbF9tYXRjaCBzbmRfc29jX2FjcGkgc291bmR3aXJlX2J1cyBz
bmRfaGRhX2NvZGVjX3JlYWx0ZWsgc25kX2hkYV9jb2RlY19nZW5lcmljIGxlZHRyaWdfYXVk
aW8gc25kX3NvY19jb3JlIHNuZF9jb21wcmVzcyBhYzk3X2J1cyBzbmRfcGNtX2RtYWVuZ2lu
ZSBzbmRfaGRhX2ludGVsIHNuZF9pbnRlbF9kc3BjZmcgc25kX2ludGVsX3Nkd19hY3BpIGlU
Q09fd2R0IGludGVsX3BtY19ieHQgZWUxMDA0IGlUQ09fdmVuZG9yX3N1cHBvcnQgaW50ZWxf
cmFwbF9tc3Igc25kX2hkYV9jb2RlYyBzbmRfaGRhX2NvcmUgc25kX2h3ZGVwIHNuZF9zZXEg
c25kX3NlcV9kZXZpY2UgaXdsd2lmaSBzbmRfcGNtIHBjc3BrciBqb3lkZXYgcHJvY2Vzc29y
X3RoZXJtYWxfZGV2aWNlX3BjaV9sZWdhY3kgcHJvY2Vzc29yX3RoZXJtYWxfZGV2aWNlIHNu
ZF90aW1lciBzbmQgY2ZnODAyMTEgcHJvY2Vzc29yX3RoZXJtYWxfcmZpbSBpMmNfaTgwMSBw
cm9jZXNzb3JfdGhlcm1hbF9tYm94IGkyY19zbWJ1cyBpZG1hNjQgcmZraWxsIHByb2Nlc3Nv
cl90aGVybWFsX3JhcGwgc291bmRjb3JlIGludGVsX3JhcGxfY29tbW9uIGludDM0MHhfdGhl
cm1hbF96b25lIGludGVsX3NvY19kdHNfaW9zZiBpZ2VuNl9lZGFjIGludGVsX2hpZCBpbnRl
bF9wbWNfY29yZSBpbnRlbF9zY3VfcGx0ZHJ2IHNwYXJzZV9rZXltYXAgZnVzZSB4ZW5mcyBp
cF90YWJsZXMgZG1fdGhpbl9wb29sDQo+ICAgICAgaWMjMiBQYXJ0MQ0KPiAgICAgIFsgIDM0
OC44MTYyNTldICBkbV9wZXJzaXN0ZW50X2RhdGEgZG1fYmlvX3ByaXNvbiBkbV9jcnlwdCBp
OTE1IGNyY3QxMGRpZl9wY2xtdWwgY3JjMzJfcGNsbXVsIGNyYzMyY19pbnRlbCBwb2x5dmFs
X2NsbXVsbmkgcG9seXZhbF9nZW5lcmljIGRybV9idWRkeSBudm1lIHZpZGVvIHdtaSBkcm1f
ZGlzcGxheV9oZWxwZXIgbnZtZV9jb3JlIHhoY2lfcGNpIHhoY2lfcGNpX3JlbmVzYXMgZ2hh
c2hfY2xtdWxuaV9pbnRlbCBoaWRfbXVsdGl0b3VjaCBzaGE1MTJfc3NzZTMgc2VyaW9fcmF3
IG52bWVfY29tbW9uIGNlYyB4aGNpX2hjZCB0dG0gaTJjX2hpZF9hY3BpIGkyY19oaWQgcGlu
Y3RybF90aWdlcmxha2UgeGVuX2FjcGlfcHJvY2Vzc29yIHhlbl9wcml2Y21kIHhlbl9wY2li
YWNrIHhlbl9ibGtiYWNrIHhlbl9nbnRhbGxvYyB4ZW5fZ250ZGV2IHhlbl9ldnRjaG4gdWlu
cHV0DQo+ICAgICAgWyAgMzQ4LjgxNjI4MV0gQ1IyOiAwMDAwMDAwMDAwMDAwMDFjDQo+ICAg
ICAgWyAgMzQ4LjgxNjI4M10gLS0tWyBlbmQgdHJhY2UgMDAwMDAwMDAwMDAwMDAwMCBdLS0t
DQo+ICAgICAgWyAgMzQ4Ljg2Nzk5MV0gUklQOiBlMDMwOmFjcGlfZ2V0X3dha2V1cF9hZGRy
ZXNzKzB4Yy8weDIwDQo+ICAgICAgWyAgMzQ4Ljg2Nzk5Nl0gQ29kZTogNDQgMDAgMDAgNDgg
OGIgMDUgMDQgYTMgODIgMDIgYzMgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2Mg
Y2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgMGYgMWYgNDQgMDAgMDAgNDggOGIgMDUgZmMgOWQg
ODIgMDIgPDhiPiA0MCAxYyBjMyBjYyBjYyBjYyBjYyA2NiA2NiAyZSAwZiAxZiA4NCAwMCAw
MCAwMCAwMCAwMCA5MCAwZiAxZg0KPiAgICAgIFsgIDM0OC44Njc5OThdIFJTUDogZTAyYjpm
ZmZmYzkwMDQyNTM3ZDA4IEVGTEFHUzogMDAwMTAyNDYNCj4gICAgICBbICAzNDguODY3OTk5
XSBSQVg6IDAwMDAwMDAwMDAwMDAwMDAgUkJYOiAwMDAwMDAwMDAwMDAwMDAzIFJDWDogMjBj
NDliYTVlMzUzZjdjZg0KPiAgICAgIFsgIDM0OC44NjgwMDBdIFJEWDogMDAwMDAwMDAwMDAw
Y2QxOSBSU0k6IDAwMDAwMDAwMDAwMmVlOWEgUkRJOiAwMDJhMDUxZWQ0MmQ3Njk0DQo+ICAg
ICAgWyAgMzQ4Ljg2ODAwMV0gUkJQOiAwMDAwMDAwMDAwMDAwMDAzIFIwODogZmZmZmM5MDA0
MjUzN2NhMCBSMDk6IGZmZmZmZmZmODJjNWU0NjgNCj4gICAgICBbICAzNDguODY4MDAxXSBS
MTA6IDAwMDAwMDAwMDAwMDdmZjAgUjExOiAwMDAwMDAwMDAwMDAwMDAwIFIxMjogMDAwMDAw
MDAwMDAwMDAwMA0KPiAgICAgIFsgIDM0OC44NjgwMDJdIFIxMzogZmZmZmZmZmZmZmZmZmZm
MiBSMTQ6IGZmZmY4ODgxMjIwNmU2YzAgUjE1OiBmZmZmODg4MTIyMDZlNmUwDQo+ICAgICAg
WyAgMzQ4Ljg2ODAwOF0gRlM6ICAwMDAwN2NiNDliMDFlYjgwKDAwMDApIEdTOmZmZmY4ODgx
ODk0MDAwMDAoMDAwMCkga25sR1M6MDAwMDAwMDAwMDAwMDAwMA0KPiAgICAgIFsgIDM0OC44
NjgwMDldIENTOiAgZTAzMCBEUzogMDAwMCBFUzogMDAwMCBDUjA6IDAwMDAwMDAwODAwNTAw
MzMNCj4gICAgICBbICAzNDguODY4MDA5XSBDUjI6IDAwMDAwMDAwMDAwMDAwMWMgQ1IzOiAw
MDAwMDAwMTIyMzFhMDAwIENSNDogMDAwMDAwMDAwMDA1MDY2MA0KPiAgICAgIFsgIDM0OC44
NjgwMTRdIEtlcm5lbCBwYW5pYyAtIG5vdCBzeW5jaW5nOiBGYXRhbCBleGNlcHRpb24NCj4g
ICAgICBbICAzNDguODY4MDMxXSBLZXJuZWwgT2Zmc2V0OiBkaXNhYmxlZA0KPiANCj4gTG9v
a2luZyBhdCBnaXQgbG9nIGJldHdlZW4gdGhvc2UgdHdvIHZlcnNpb25zLCBhbmQgdGhlDQo+
IGFjcGlfZ2V0X3dha2V1cF9hZGRyZXNzKCkgZnVuY3Rpb24sIEkgc3VzcGVjdCBpdCdzIHRo
aXMgY2hhbmdlIChidXQgSQ0KPiBoYXZlIF9ub3RfIHRlc3RlZCBpdCk6DQo+IA0KPiBjb21t
aXQgYjE4OTg3OTM3NzdmZTEwYTMxYzE2MGJiOGJjMzg1ZDZlZWE2NDBjNg0KPiBBdXRob3I6
IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4NCj4gRGF0ZTogICBXZWQgTm92IDIz
IDEyOjQ1OjIzIDIwMjIgKzAxMDANCj4gDQo+ICAgICAgeDg2L2Jvb3Q6IFNraXAgcmVhbG1v
ZGUgaW5pdCBjb2RlIHdoZW4gcnVubmluZyBhcyBYZW4gUFYgZ3Vlc3QNCj4gICAgICANCj4g
ICAgICBbIFVwc3RyZWFtIGNvbW1pdCBmMWU1MjUwMDk0OTNjYmQ1NjllN2M4ZGQ3ZDU4MTU3
ODU1Zjg2NThkIF0NCg0KWWVzLCB5b3UgYXJlIHJpZ2h0Lg0KDQpDb3VsZCB5b3UgcGxlYXNl
IHRlc3QgdGhlIGF0dGFjaGVkIHBhdGNoPyBJdCBpcyBmb3IgdXBzdHJlYW0sIGJ1dCBJIHRo
aW5rIGl0DQpzaG91bGQgYXBwbHkgdG8gNi4xLjMsIHRvby4NCg0KDQpKdWVyZ2VuDQoNCg0K

--------------JTCEnxmZjr8uUX0QsykuUQXk
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-x86-acpi-fix-suspend-with-Xen.patch"
Content-Disposition: attachment;
 filename="0001-x86-acpi-fix-suspend-with-Xen.patch"
Content-Transfer-Encoding: base64

RnJvbSA0MDgzM2I2NzAxMDI2YTM3MjQzYmRhOTBiYmQwNTNjNTg5NjM4NDRkIE1vbiBTZXAg
MTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+
ClRvOiB4ODZAa2VybmVsLm9yZwpUbzogbGludXgta2VybmVsQHZnZXIua2VybmVsLm9yZwpU
bzogbGludXgtcG1Admdlci5rZXJuZWwub3JnCkNjOiBUaG9tYXMgR2xlaXhuZXIgPHRnbHhA
bGludXRyb25peC5kZT4KQ2M6IEluZ28gTW9sbmFyIDxtaW5nb0ByZWRoYXQuY29tPgpDYzog
Qm9yaXNsYXYgUGV0a292IDxicEBhbGllbjguZGU+CkNjOiBEYXZlIEhhbnNlbiA8ZGF2ZS5o
YW5zZW5AbGludXguaW50ZWwuY29tPgpDYzogIkguIFBldGVyIEFudmluIiA8aHBhQHp5dG9y
LmNvbT4KQ2M6ICJSYWZhZWwgSi4gV3lzb2NraSIgPHJhZmFlbEBrZXJuZWwub3JnPgpDYzog
TGVuIEJyb3duIDxsZW4uYnJvd25AaW50ZWwuY29tPgpDYzogUGF2ZWwgTWFjaGVrIDxwYXZl
bEB1Y3cuY3o+CkNjOiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+CkNjOiBTdGVm
YW5vIFN0YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+CkNjOiBPbGVrc2FuZHIg
VHlzaGNoZW5rbyA8b2xla3NhbmRyX3R5c2hjaGVua29AZXBhbS5jb20+CkNjOiB4ZW4tZGV2
ZWxAbGlzdHMueGVucHJvamVjdC5vcmcKRGF0ZTogRnJpLCAxMyBKYW4gMjAyMyAwODozNzo0
NSArMDEwMApTdWJqZWN0OiBbUEFUQ0hdIHg4Ni9hY3BpOiBmaXggc3VzcGVuZCB3aXRoIFhl
bgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVR5cGU6IHRleHQvcGxhaW47IGNoYXJzZXQ9
VVRGLTgKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogOGJpdAoKQ29tbWl0IGYxZTUyNTAw
OTQ5MyAoIng4Ni9ib290OiBTa2lwIHJlYWxtb2RlIGluaXQgY29kZSB3aGVuIHJ1bm5pbmcg
YXMKWGVuIFBWIGd1ZXN0IikgbWlzc2VkIG9uZSBjb2RlIHBhdGggYWNjZXNzaW5nIHJlYWxf
bW9kZV9oZWFkZXIsIGxlYWRpbmcKdG8gZGVyZWZlcmVuY2luZyBOVUxMIHdoZW4gc3VzcGVu
ZGluZyB0aGUgc3lzdGVtIHVuZGVyIFhlbjoKCiAgICBbICAzNDguMjg0MDA0XSBQTTogc3Vz
cGVuZCBlbnRyeSAoZGVlcCkKICAgIFsgIDM0OC4yODk1MzJdIEZpbGVzeXN0ZW1zIHN5bmM6
IDAuMDA1IHNlY29uZHMKICAgIFsgIDM0OC4yOTE1NDVdIEZyZWV6aW5nIHVzZXIgc3BhY2Ug
cHJvY2Vzc2VzIC4uLiAoZWxhcHNlZCAwLjAwMCBzZWNvbmRzKSBkb25lLgogICAgWyAgMzQ4
LjI5MjQ1N10gT09NIGtpbGxlciBkaXNhYmxlZC4KICAgIFsgIDM0OC4yOTI0NjJdIEZyZWV6
aW5nIHJlbWFpbmluZyBmcmVlemFibGUgdGFza3MgLi4uIChlbGFwc2VkIDAuMTA0IHNlY29u
ZHMpIGRvbmUuCiAgICBbICAzNDguMzk2NjEyXSBwcmludGs6IFN1c3BlbmRpbmcgY29uc29s
ZShzKSAodXNlIG5vX2NvbnNvbGVfc3VzcGVuZCB0byBkZWJ1ZykKICAgIFsgIDM0OC43NDky
MjhdIFBNOiBzdXNwZW5kIGRldmljZXMgdG9vayAwLjM1MiBzZWNvbmRzCiAgICBbICAzNDgu
NzY5NzEzXSBBQ1BJOiBFQzogaW50ZXJydXB0IGJsb2NrZWQKICAgIFsgIDM0OC44MTYwNzdd
IEJVRzoga2VybmVsIE5VTEwgcG9pbnRlciBkZXJlZmVyZW5jZSwgYWRkcmVzczogMDAwMDAw
MDAwMDAwMDAxYwogICAgWyAgMzQ4LjgxNjA4MF0gI1BGOiBzdXBlcnZpc29yIHJlYWQgYWNj
ZXNzIGluIGtlcm5lbCBtb2RlCiAgICBbICAzNDguODE2MDgxXSAjUEY6IGVycm9yX2NvZGUo
MHgwMDAwKSAtIG5vdC1wcmVzZW50IHBhZ2UKICAgIFsgIDM0OC44MTYwODNdIFBHRCAwIFA0
RCAwCiAgICBbICAzNDguODE2MDg2XSBPb3BzOiAwMDAwIFsjMV0gUFJFRU1QVCBTTVAgTk9Q
VEkKICAgIFsgIDM0OC44MTYwODldIENQVTogMCBQSUQ6IDY3NjQgQ29tbTogc3lzdGVtZC1z
bGVlcCBOb3QgdGFpbnRlZCA2LjEuMy0xLmZjMzIucXViZXMueDg2XzY0ICMxCiAgICBbICAz
NDguODE2MDkyXSBIYXJkd2FyZSBuYW1lOiBTdGFyIExhYnMgU3RhckJvb2svU3RhckJvb2ss
IEJJT1MgOC4wMSAwNy8wMy8yMDIyCiAgICBbICAzNDguODE2MDkzXSBSSVA6IGUwMzA6YWNw
aV9nZXRfd2FrZXVwX2FkZHJlc3MrMHhjLzB4MjAKCkZpeCB0aGF0IGJ5IGFkZGluZyBhbiBp
bmRpcmVjdGlvbiBmb3IgYWNwaV9nZXRfd2FrZXVwX2FkZHJlc3MoKSB3aGljaApYZW4gUFYg
ZG9tMCBjYW4gdXNlIHRvIHJldHVybiBhIGR1bW15IG5vbi16ZXJvIHdha2V1cCBhZGRyZXNz
ICh0aGlzCmFkZHJlc3Mgd29uJ3QgZXZlciBiZSB1c2VkLCBhcyB0aGUgcmVhbCBzdXNwZW5k
IGhhbmRsaW5nIGlzIGRvbmUgYnkgdGhlCmh5cGVydmlzb3IpLgoKRml4ZXM6IGYxZTUyNTAw
OTQ5MyAoIng4Ni9ib290OiBTa2lwIHJlYWxtb2RlIGluaXQgY29kZSB3aGVuIHJ1bm5pbmcg
YXMgWGVuIFBWIGd1ZXN0IikKUmVwb3J0ZWQtYnk6IE1hcmVrIE1hcmN6eWtvd3NraS1Hw7Ny
ZWNraSA8bWFybWFyZWtAaW52aXNpYmxldGhpbmdzbGFiLmNvbT4KU2lnbmVkLW9mZi1ieTog
SnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgotLS0KIGFyY2gveDg2L2luY2x1ZGUv
YXNtL2FjcGkuaCAgfCAyICstCiBhcmNoL3g4Ni9rZXJuZWwvYWNwaS9zbGVlcC5jIHwgMyAr
Ky0KIGluY2x1ZGUveGVuL2FjcGkuaCAgICAgICAgICAgfCA5ICsrKysrKysrKwogMyBmaWxl
cyBjaGFuZ2VkLCAxMiBpbnNlcnRpb25zKCspLCAyIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdp
dCBhL2FyY2gveDg2L2luY2x1ZGUvYXNtL2FjcGkuaCBiL2FyY2gveDg2L2luY2x1ZGUvYXNt
L2FjcGkuaAppbmRleCA2NTA2NGQ5ZjdmYTYuLjEzNzI1OWZmOGYwMyAxMDA2NDQKLS0tIGEv
YXJjaC94ODYvaW5jbHVkZS9hc20vYWNwaS5oCisrKyBiL2FyY2gveDg2L2luY2x1ZGUvYXNt
L2FjcGkuaApAQCAtNjEsNyArNjEsNyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgYWNwaV9kaXNh
YmxlX3BjaSh2b2lkKQogZXh0ZXJuIGludCAoKmFjcGlfc3VzcGVuZF9sb3dsZXZlbCkodm9p
ZCk7CiAKIC8qIFBoeXNpY2FsIGFkZHJlc3MgdG8gcmVzdW1lIGFmdGVyIHdha2V1cCAqLwot
dW5zaWduZWQgbG9uZyBhY3BpX2dldF93YWtldXBfYWRkcmVzcyh2b2lkKTsKK2V4dGVybiB1
bnNpZ25lZCBsb25nICgqYWNwaV9nZXRfd2FrZXVwX2FkZHJlc3MpKHZvaWQpOwogCiAvKgog
ICogQ2hlY2sgaWYgdGhlIENQVSBjYW4gaGFuZGxlIEMyIGFuZCBkZWVwZXIKZGlmZiAtLWdp
dCBhL2FyY2gveDg2L2tlcm5lbC9hY3BpL3NsZWVwLmMgYi9hcmNoL3g4Ni9rZXJuZWwvYWNw
aS9zbGVlcC5jCmluZGV4IDNiN2Y0Y2RiZjJlMC4uMWEzY2Q1ZTI0Y2QwIDEwMDY0NAotLS0g
YS9hcmNoL3g4Ni9rZXJuZWwvYWNwaS9zbGVlcC5jCisrKyBiL2FyY2gveDg2L2tlcm5lbC9h
Y3BpL3NsZWVwLmMKQEAgLTMzLDEwICszMywxMSBAQCBzdGF0aWMgY2hhciB0ZW1wX3N0YWNr
WzQwOTZdOwogICogUmV0dXJucyB0aGUgcGh5c2ljYWwgYWRkcmVzcyB3aGVyZSB0aGUga2Vy
bmVsIHNob3VsZCBiZSByZXN1bWVkIGFmdGVyIHRoZQogICogc3lzdGVtIGF3YWtlcyBmcm9t
IFMzLCBlLmcuIGZvciBwcm9ncmFtbWluZyBpbnRvIHRoZSBmaXJtd2FyZSB3YWtpbmcgdmVj
dG9yLgogICovCi11bnNpZ25lZCBsb25nIGFjcGlfZ2V0X3dha2V1cF9hZGRyZXNzKHZvaWQp
CitzdGF0aWMgdW5zaWduZWQgbG9uZyB4ODZfYWNwaV9nZXRfd2FrZXVwX2FkZHJlc3Modm9p
ZCkKIHsKIAlyZXR1cm4gKCh1bnNpZ25lZCBsb25nKShyZWFsX21vZGVfaGVhZGVyLT53YWtl
dXBfc3RhcnQpKTsKIH0KK3Vuc2lnbmVkIGxvbmcgKCphY3BpX2dldF93YWtldXBfYWRkcmVz
cykodm9pZCkgPSB4ODZfYWNwaV9nZXRfd2FrZXVwX2FkZHJlc3M7CiAKIC8qKgogICogeDg2
X2FjcGlfZW50ZXJfc2xlZXBfc3RhdGUgLSBlbnRlciBzbGVlcCBzdGF0ZQpkaWZmIC0tZ2l0
IGEvaW5jbHVkZS94ZW4vYWNwaS5oIGIvaW5jbHVkZS94ZW4vYWNwaS5oCmluZGV4IGIxZTEx
ODYzMTQ0ZC4uN2UxZTVkYmZiNzdjIDEwMDY0NAotLS0gYS9pbmNsdWRlL3hlbi9hY3BpLmgK
KysrIGIvaW5jbHVkZS94ZW4vYWNwaS5oCkBAIC01Niw2ICs1NiwxMiBAQCBzdGF0aWMgaW5s
aW5lIGludCB4ZW5fYWNwaV9zdXNwZW5kX2xvd2xldmVsKHZvaWQpCiAJcmV0dXJuIDA7CiB9
CiAKK3N0YXRpYyBpbmxpbmUgdW5zaWduZWQgbG9uZyB4ZW5fYWNwaV9nZXRfd2FrZXVwX2Fk
ZHJlc3Modm9pZCkKK3sKKwkvKiBKdXN0IHJldHVybiBhIGR1bW15IG5vbi16ZXJvIHZhbHVl
LCBpdCB3aWxsIG5ldmVyIGJlIHVzZWQuICovCisJcmV0dXJuIDE7Cit9CisKIHN0YXRpYyBp
bmxpbmUgdm9pZCB4ZW5fYWNwaV9zbGVlcF9yZWdpc3Rlcih2b2lkKQogewogCWlmICh4ZW5f
aW5pdGlhbF9kb21haW4oKSkgewpAQCAtNjUsNiArNzEsOSBAQCBzdGF0aWMgaW5saW5lIHZv
aWQgeGVuX2FjcGlfc2xlZXBfcmVnaXN0ZXIodm9pZCkKIAkJCSZ4ZW5fYWNwaV9ub3RpZnlf
aHlwZXJ2aXNvcl9leHRlbmRlZF9zbGVlcCk7CiAKIAkJYWNwaV9zdXNwZW5kX2xvd2xldmVs
ID0geGVuX2FjcGlfc3VzcGVuZF9sb3dsZXZlbDsKKyNpZmRlZiBDT05GSUdfQUNQSV9TTEVF
UAorCQlhY3BpX2dldF93YWtldXBfYWRkcmVzcyA9IHhlbl9hY3BpX2dldF93YWtldXBfYWRk
cmVzczsKKyNlbmRpZgogCX0KIH0KICNlbHNlCi0tIAoyLjM1LjMKCg==
--------------JTCEnxmZjr8uUX0QsykuUQXk
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------JTCEnxmZjr8uUX0QsykuUQXk--

--------------LaTlo7JDqFZbSkQGkk2AzUwV--

--------------78E4ABhTHO0B4lG8jyzNTay2
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPBEYMFAwAAAAAACgkQsN6d1ii/Ey/1
igf9FogeEHwsEjMohtO440DwmpQMGvUnv4pOEMrtP6NkUzx7UuOD//OrqK54yAPU4/assRZWBqB2
Er6qVknG6D4n5TEJ79znvPedLcmsQtxuRC2BRZBUpIDYxaf1lpc23mvPEsur5BQNvBVR18i3ERb+
GCI11aRyqVXcdk4/TMHKrrlB73TEw3ZbD2dSa9+9E3P3M4nuR89jVvgGR/tawSg6gcWXjJcP4WkA
toQByX0NloIqCMydj+lgnAwPxrWoMwDDsGWb6ntcNBiURbjH0JLcPyxE+LoweLUUdWlviRB4CJ0k
Vk1D9e+Q0VtE2+8AArEunaErMPsRYpVb+AYIC3TClQ==
=4rhg
-----END PGP SIGNATURE-----

--------------78E4ABhTHO0B4lG8jyzNTay2--


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:10:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 08:10:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476829.739200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGF8m-0007MN-Cl; Fri, 13 Jan 2023 08:10:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476829.739200; Fri, 13 Jan 2023 08:10:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGF8m-0007MG-9S; Fri, 13 Jan 2023 08:10:20 +0000
Received: by outflank-mailman (input) for mailman id 476829;
 Fri, 13 Jan 2023 08:10:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3K7w=5K=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pGF8k-0007M6-GO
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 08:10:18 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b70e21f8-9319-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 09:10:17 +0100 (CET)
Received: by mail-ej1-x62c.google.com with SMTP id vm8so50474685ejc.2
 for <xen-devel@lists.xenproject.org>; Fri, 13 Jan 2023 00:10:17 -0800 (PST)
Received: from [192.168.1.93] (adsl-139.109.242.225.tellas.gr.
 [109.242.225.139]) by smtp.gmail.com with ESMTPSA id
 g2-20020a170906538200b0085c3f08081esm3431892ejo.13.2023.01.13.00.10.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Jan 2023 00:10:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b70e21f8-9319-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=jLVRvKluyxNeRdZUwXoZTpFf6e3KVsHC6MuFrshYUQ0=;
        b=G6jZ6mSMyAArHIk9Tdtxxg+YG704AD1UGfK5U0JGphQYJM76y2WwU8nc5m4BspUsF6
         97+f6kArUNV7v1pKh9SIbqSCE6oOxtWgqO0ZXtRbDnpTdu21IHdv2TJ6or/UszF/YmnK
         JFaK7tkm0D6FJbsBlNPdf/ct+7He99JsXXgewxt29E8na7m4XGTC1CpWI5dkGWR6XewO
         JFfM6hK4EBCTFEpqahq30L23PEMbEggmft19pd33MWLWSKtQH/bAsXilz5knCkib6o5R
         RyMkeo0YICUPx/JRwwcHGRWhBWEuIL0S9zP14JNOGEU+Kt6rpZjhqb3gKzJ3FDdMRDtr
         Gv/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=jLVRvKluyxNeRdZUwXoZTpFf6e3KVsHC6MuFrshYUQ0=;
        b=P/sbUcdV08FVME/8/pr9f2m5V7JEOQvYJdGaIQ6X9CtgBuofkYbtQ/DME6odKKD8KL
         qkxvjUUfoN/IhMUYYbPia5y1flar+fspP7EcC6zeO0kDUVvsDLjfmBaJsvsU1Zwxm2S1
         ESufJihYwZRTMiL0uNTZzeVtFPf42xz+nYdtDhb1VIzviPDoHcIwOlD0E8Kf3HENO8XX
         s690L1Zp8o4PM7vm/9XwAEKSyQRQPc+VyifRIEqpsPWRHpFQR88YQ5pwX/N/d7t3bYw8
         KZaejpKYivtnqATG94g2HEthcvfeNe2hQUn/0zkUhhVK2sRXXdg8nh6V34MD75s2nQt0
         +ioA==
X-Gm-Message-State: AFqh2kqS1glJTg1JQ+292TrfqFN9OllFUchpgRsnoY94AHKnMGRHg2na
	0Rp7qVqi4w50+evE1bpPmqA=
X-Google-Smtp-Source: AMrXdXttuK5yqRd+vPPA3pxoKjyWE0YytZTEfIF+frYYOQiH1tDMfrHNDZrJ2I7iiGeJEqMlqzLwYQ==
X-Received: by 2002:a17:906:86cc:b0:869:3b49:35c2 with SMTP id j12-20020a17090686cc00b008693b4935c2mr2134775ejy.61.1673597417044;
        Fri, 13 Jan 2023 00:10:17 -0800 (PST)
Message-ID: <cf9a8caa-1ebd-b6cb-a1f8-c43fbe5ee381@gmail.com>
Date: Fri, 13 Jan 2023 10:10:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Paul Durrant <paul@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-4-burzalodowa@gmail.com>
 <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
 <f4771b3d-63e8-a44b-bdaf-4e2823f43fb8@gmail.com>
 <4bc3f2f6-9bf4-5810-89e3-526470e72d85@gmail.com>
 <d4105a37-e24f-96b6-f0f3-5990768fa8f5@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <d4105a37-e24f-96b6-f0f3-5990768fa8f5@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 1/12/23 17:53, Jan Beulich wrote:
> On 12.01.2023 16:43, Xenia Ragiadakou wrote:
>> On 1/12/23 13:49, Xenia Ragiadakou wrote:
>>> On 1/12/23 13:31, Jan Beulich wrote:
>>>> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>>>>> --- a/xen/include/xen/iommu.h
>>>>> +++ b/xen/include/xen/iommu.h
>>>>> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>>>>>       iommu_intremap_restricted,
>>>>>       iommu_intremap_full,
>>>>>    } iommu_intremap;
>>>>> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>>    #else
>>>>>    # define iommu_intremap false
>>>>> +#endif
>>>>> +
>>>>> +#ifdef CONFIG_INTEL_IOMMU
>>>>> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>> +#else
>>>>>    # define iommu_snoop false
>>>>>    #endif
>>>>
>>>> Do these declarations really need touching? In patch 2 you didn't move
>>>> amd_iommu_perdev_intremap's either.
>>>
>>> Ok, I will revert this change (as I did in v2 of patch 2) since it is
>>> not needed.
>>
>> Actually, my patch was altering the current behavior by defining
>> iommu_snoop as false when !INTEL_IOMMU.
>>
>> IIUC, there is no control over snoop behavior when using the AMD iommu.
>> Hence, iommu_snoop should evaluate to true for AMD iommu.
>> However, when using the INTEL iommu the user can disable it via the
>> "iommu" param, right?
> 
> That's the intended behavior, yes, but right now we allow the option
> to also affect behavior on AMD - perhaps wrongly so, as there's one
> use outside of VT-x and VT-d code. But of course the option is
> documented to be there for VT-d only, so one can view it as user
> error if it's used on a non-VT-d system.
> 
>> If that's the case then iommu_snoop needs to be moved from vtd/iommu.c
>> to x86/iommu.c and iommu_snoop assignment via iommu param needs to be
>> guarded by CONFIG_INTEL_IOMMU.
> 
> Or #define to true when !INTEL_IOMMU and keep the variable where it
> is.

Given the current implementation, if defined to true, it will be true 
even when !iommu_enabled.

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:14:11 2023
Message-ID: <457cc617-0cd2-fef8-c095-925c3c56c597@suse.com>
Date: Fri, 13 Jan 2023 09:14:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [RFC][PATCH 1/6] x86/power: De-paravirt restore_processor_state()
Content-Language: en-US
To: Peter Zijlstra <peterz@infradead.org>, x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org, "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>, Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com
References: <20230112143141.645645775@infradead.org>
 <20230112143825.584639584@infradead.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230112143825.584639584@infradead.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.01.23 15:31, Peter Zijlstra wrote:
> Since Xen PV doesn't use restore_processor_state(), and we're going to
> have to avoid CALL/RET until at least GS is restored, de-paravirt the
> easy bits.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Juergen Gross <jgross@suse.com>

with one remark: save_processor_state() could be changed the same way.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:39:45 2023
Message-ID: <43cd6b65-0131-47a8-ce45-cc5d7ea25685@suse.com>
Date: Fri, 13 Jan 2023 09:39:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-4-burzalodowa@gmail.com>
 <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
 <f4771b3d-63e8-a44b-bdaf-4e2823f43fb8@gmail.com>
 <4bc3f2f6-9bf4-5810-89e3-526470e72d85@gmail.com>
 <d4105a37-e24f-96b6-f0f3-5990768fa8f5@suse.com>
 <cf9a8caa-1ebd-b6cb-a1f8-c43fbe5ee381@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cf9a8caa-1ebd-b6cb-a1f8-c43fbe5ee381@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13.01.2023 09:10, Xenia Ragiadakou wrote:
> 
> On 1/12/23 17:53, Jan Beulich wrote:
>> On 12.01.2023 16:43, Xenia Ragiadakou wrote:
>>> On 1/12/23 13:49, Xenia Ragiadakou wrote:
>>>> On 1/12/23 13:31, Jan Beulich wrote:
>>>>> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>>>>>> --- a/xen/include/xen/iommu.h
>>>>>> +++ b/xen/include/xen/iommu.h
>>>>>> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>>>>>>       iommu_intremap_restricted,
>>>>>>       iommu_intremap_full,
>>>>>>    } iommu_intremap;
>>>>>> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>>>    #else
>>>>>>    # define iommu_intremap false
>>>>>> +#endif
>>>>>> +
>>>>>> +#ifdef CONFIG_INTEL_IOMMU
>>>>>> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>>> +#else
>>>>>>    # define iommu_snoop false
>>>>>>    #endif
>>>>>
>>>>> Do these declarations really need touching? In patch 2 you didn't move
>>>>> amd_iommu_perdev_intremap's either.
>>>>
>>>> Ok, I will revert this change (as I did in v2 of patch 2) since it is
>>>> not needed.
>>>
>>> Actually, my patch was altering the current behavior by defining
>>> iommu_snoop as false when !INTEL_IOMMU.
>>>
>>> IIUC, there is no control over snoop behavior when using the AMD iommu.
>>> Hence, iommu_snoop should evaluate to true for AMD iommu.
>>> However, when using the INTEL iommu the user can disable it via the
>>> "iommu" param, right?
>>
>> That's the intended behavior, yes, but right now we allow the option
>> to also affect behavior on AMD - perhaps wrongly so, as there's one
>> use outside of VT-x and VT-d code. But of course the option is
>> documented to be there for VT-d only, so one can view it as user
>> error if it's used on a non-VT-d system.
>>
>>> If that's the case then iommu_snoop needs to be moved from vtd/iommu.c
>>> to x86/iommu.c and iommu_snoop assignment via iommu param needs to be
>>> guarded by CONFIG_INTEL_IOMMU.
>>
>> Or #define to true when !INTEL_IOMMU and keep the variable where it
>> is.
> 
> Given the current implementation, if defined to true, it will be true 
> even when !iommu_enabled.

Which is supposed to be benign; I'm about to send a patch to actually
make it benign in shadow code as well (which is the one place where I
notice it isn't right now).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:41:51 2023
Message-ID: <749f6412-ff36-c5bf-8b5d-c866aa47cc39@suse.com>
Date: Fri, 13 Jan 2023 09:41:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and
 iommu_snoop are VT-d specific
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Paul Durrant <paul@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Xenia Ragiadakou <burzalodowa@gmail.com>
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-4-burzalodowa@gmail.com>
 <f2d68a4d-b9b3-7700-961d-f6888edfb858@suse.com>
 <f4771b3d-63e8-a44b-bdaf-4e2823f43fb8@gmail.com>
 <4bc3f2f6-9bf4-5810-89e3-526470e72d85@gmail.com>
 <0e21b24e-d715-bd04-f98c-4cdd53f129ee@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <0e21b24e-d715-bd04-f98c-4cdd53f129ee@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.01.2023 19:24, Andrew Cooper wrote:
> On 12/01/2023 3:43 pm, Xenia Ragiadakou wrote:
>>
>> On 1/12/23 13:49, Xenia Ragiadakou wrote:
>>>
>>> On 1/12/23 13:31, Jan Beulich wrote:
>>>> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>>>>
>>>>> --- a/xen/include/xen/iommu.h
>>>>> +++ b/xen/include/xen/iommu.h
>>>>> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>>>>>      iommu_intremap_restricted,
>>>>>      iommu_intremap_full,
>>>>>   } iommu_intremap;
>>>>> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>>   #else
>>>>>   # define iommu_intremap false
>>>>> +#endif
>>>>> +
>>>>> +#ifdef CONFIG_INTEL_IOMMU
>>>>> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>> +#else
>>>>>   # define iommu_snoop false
>>>>>   #endif
>>>>
>>>> Do these declarations really need touching? In patch 2 you didn't move
>>>> amd_iommu_perdev_intremap's either.
>>>
>>> Ok, I will revert this change (as I did in v2 of patch 2) since it is
>>> not needed.
>>
>> Actually, my patch was altering the current behavior by defining
>> iommu_snoop as false when !INTEL_IOMMU.
>>
>> IIUC, there is no control over snoop behavior when using the AMD
>> IOMMU. Hence, iommu_snoop should evaluate to true for the AMD IOMMU.
>> However, when using the Intel IOMMU the user can disable it via the
>> "iommu" param, right?
>>
>> If that's the case then iommu_snoop needs to be moved from vtd/iommu.c
>> to x86/iommu.c and iommu_snoop assignment via iommu param needs to be
>> guarded by CONFIG_INTEL_IOMMU.
>>
> 
> Pretty much everything Xen thinks it knows about iommu_snoop is broken.
> 
> AMD IOMMUs have had this capability since the outset, but it's the FC
> bit (Force Coherent).  On Intel, the capability is optional, and
> typically differs between IOMMUs in the same system.
> 
> Treating iommu_snoop as a single global is buggy, because (when
> available) it's always a per-SBDF control.  It is used to take a TLP and
> force it to be coherent even when the device was trying to issue a
> non-coherent access.
> 
> Intel systems typically have a dedicated IOMMU for the IGD, which always
> issues coherent accesses (its memory access happens as an adjunct to the
> LLC, not as something that communicates with the memory controller
> directly), so the IOMMU doesn't offer snoop control, and Xen "levels"
> this down to "the system can't do snoop control".
> 
> 
> Xen is very confused when it comes to cacheability correctness.  I still
> have a pile of post-XSA-402 work pending, and it needs to start with
> splitting Xen's idea of "domain can use reduced cacheability" from
> "domain has a device", and work incrementally from there.
> 
> But in terms of snoop_control, it's strictly necessary for the cases
> where the guest kernel thinks it is using reduced cacheability, but it
> isn't because of something the hypervisor has done.  But beyond that,
> forcing snoop behind the back of a guest which is using reduced
> cacheability is just a waste of performance.

I guess I agree with most/all of what you say, but that's all orthogonal to
Xenia's work (and also to the patch I'm about to send to address the
one issue that I've spotted while reviewing Xenia's patch).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:45:04 2023
Message-ID: <f8823ca1-450f-7522-d5db-41f124195ab3@xen.org>
Date: Fri, 13 Jan 2023 08:44:54 +0000
Subject: Re: [RFC PATCH 0/8] SVE feature for arm guests
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <3e4ce6c0-9949-1312-f492-913b7dd2cf18@xen.org>
 <EB12FEDD-F3EC-401A-9648-77D7B28F6750@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <EB12FEDD-F3EC-401A-9648-77D7B28F6750@arm.com>

Hi Luca,

On 12/01/2023 11:58, Luca Fancellu wrote:
>> On 11 Jan 2023, at 16:59, Julien Grall <julien@xen.org> wrote:
>> On 11/01/2023 14:38, Luca Fancellu wrote:
>>> This series introduces the possibility for Dom0 and DomU guests to use
>>> SVE/SVE2 instructions.
>>> The SVE feature introduces new instructions and registers to improve
>>> performance of floating point operations.
>>> The SVE feature is advertised via the SVE field of the ID_AA64PFR0_EL1
>>> register, and when available the ID_AA64ZFR0_EL1 register provides
>>> additional information about the implemented version and other SVE
>>> features.
>>> The registers added by the SVE feature are Z0-Z31, P0-P15, FFR and
>>> ZCR_ELx.
>>> Z0-Z31 are scalable vector registers whose size is implementation
>>> defined and ranges from 128 bits to a maximum of 2048 bits; the term
>>> "vector length" will be used to refer to this quantity.
>>> P0-P15 are predicate registers whose size is the vector length divided
>>> by 8; the FFR (First Fault Register) has the same size.
>>> ZCR_ELx is a register that controls and restricts the maximum vector
>>> length used by exception level <x> and all lower exception levels, so
>>> for example EL3 can restrict the vector length usable by EL3,2,1,0.
>>> The platform has a maximum implemented vector length, so if a value
>>> written to the ZCR register is above the implemented length, the lower
>>> value will be used instead. The RDVL instruction can be used to check
>>> which vector length the HW is using after setting ZCR.
>>> For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is
>>> no need to save them separately; saving Z0-Z31 implicitly saves V0-V31
>>> as well.
>>> SVE usage can be trapped using a flag in CPTR_EL2, hence in this series
>>> the register is added to the domain state, to be able to trap only the
>>> guests that are not allowed to use SVE.
>>> This series introduces a command line parameter to enable Dom0 to use
>>> SVE and to set its maximum vector length, which by default is 0,
>>> meaning the guest is not allowed to use SVE. Values from 128 to 2048
>>> mean the guest can use SVE, with the selected value used as the maximum
>>> allowed vector length (which could be lower if the implemented one is
>>> lower).
>>> For DomUs, an XL parameter with the same semantics is introduced and a
>>> dom0less DTB binding is created.
>>> The context switch is the most critical part because there can be large
>>> registers to save; in this series a simple approach is used and the
>>> context is saved/restored every time for the guests that are allowed to
>>> use SVE.
>>
>> This would be OK for an initial approach. But I would be worried about officially supporting SVE because of the potentially large impact on other users.
>>
>> What's the long term plan?
> 
> Hi Julien,
> 
> For the future we can plan some work and decide together how to handle the context switch;
> we might need some suggestions from you (Arm maintainers) to design that part in the best
> way from a functional and security perspective.
I think SVE will need to be lazily saved/restored. So on context switch, 
we would record that the context still belongs to a previous domain. The 
first time the current domain tries to access SVE, we would load it.

> 
> For now we might flag the feature as unsupported, explaining in the Kconfig help that switching
> between SVE and non-SVE guests, or between SVE guests, might add latency compared to
> switching between non-SVE guests.

I am OK with that. I actually like the idea of spelling it out, because 
that helps us remember what the gaps in the code are :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:46:58 2023
Message-ID: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
Date: Fri, 13 Jan 2023 09:46:49 +0100
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, Xenia Ragiadakou <burzalodowa@gmail.com>,
 George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] x86/shadow: MMIO treatment

While reviewing Xenia's change to iommu_snoop placement, I've
spotted a shortcoming in _sh_propagate(), the fixing of which made
me notice (again) a second kind of issue.

1: sanitize iommu_snoop usage
2: further correct MMIO handling in _sh_propagate()

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:47:48 2023
Message-ID: <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
Date: Fri, 13 Jan 2023 09:47:40 +0100
Subject: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, Xenia Ragiadakou <burzalodowa@gmail.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
In-Reply-To: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0175.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7433:EE_
X-MS-Office365-Filtering-Correlation-Id: c8992c1a-1ff5-48e3-afb4-08daf542d54b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uPegB6XImAJuvmJpNRvfuo3EQ8pI8B1Yz4N6b5aBhPvl3m6NePkT3E+5ZJgz9mxDUqDvspc4sXNvMz+S3nWfJO3EJRm5rG6cpUXq28xS34tZXkZZ9NMPhVPpLPDdMQI7gKK3xyrV98EbrsZRTAcAlo/9ZoTZTt/N7Fa/CQG/jCvifm+RzpDFcq7A/ebzG6LTKiXZvHr/OFyRAR3mq2ooeSy1PDX8oJfLnwq/t0iEknFIhpwGC342jWrSlvmDbSvxuz3QKTMToMMNe93mIKi8mc2V0QSQvGYRQwa4337713lSRNlgX1U+MwKFhE5pikuu0skIkIkW/HYbOcMied8vYzpAQQr+j10mkGxpc1hA3bjRfJLHjt0iFli8Klrypc4J+HZVjavYvQb6m+W2j1o0F4aTzcWzbPgxwJeZF9+IbfwXhq/77Rwfs5rql4SdRE3s2culqMSQLjWr9LwLo8qFVHibRQBKOJk2yJpqATXICRXlsrevKpzaezpElq1yzja0wmd7iYUXXQOJz259KeocV2y3UpFeWmprdb/5QyiuQbAUYc6iDIW9dWpbXwggsK0k/WCguMT3lfyUZXdiXF0+tDF+rZPHGhJifJRDlcTf+ADm4L3ZdnlLTB6KIZSFu3e81Gu8e9t+izt+E0mlxR4jA6iifqQ0jOX/DiaPXW3MII8W/tnn42ZPs/eGHjJJ7n5tyoSJz40cITFouPSnexK7cAKe+dL+lz1prx+b1RjUxRcCfDq7R5N58+VceNNBeIGFvaSg8zJZIpJGifUpEORYPw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(136003)(376002)(396003)(366004)(346002)(451199015)(6512007)(6486002)(966005)(478600001)(6506007)(31686004)(186003)(26005)(66946007)(66476007)(316002)(66556008)(54906003)(2616005)(6916009)(4326008)(8676002)(86362001)(38100700002)(8936002)(5660300002)(41300700001)(83380400001)(31696002)(36756003)(2906002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:

First of all, the variable is meaningful only when an IOMMU is in use for
a guest. Qualify the check accordingly, as is done elsewhere. Furthermore,
the controlling command line option is supposed to take effect on VT-d
only. Since command line parsing happens before we know whether we're
going to use VT-d, force the variable back to true when instead running
with AMD IOMMU(s).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was first considering adding the extra check to the outermost
enclosing if(), but I guess that would break the (questionable) case of
assigning MMIO ranges directly by address. The way it's done now also
better fits the existing checks, in particular the ones in p2m-ept.c.

Note that the #ifndef is put there in anticipation of iommu_snoop
becoming a #define when !IOMMU_INTEL (see
https://lists.xen.org/archives/html/xen-devel/2023-01/msg00103.html
and replies).

In _sh_propagate() I'm further puzzled: The iomem_access_permitted()
check certainly suggests very bad things could happen if it returned
false (i.e. in the implicit "else" case). The assumption looks to be
that no bad "target_mfn" can make it there. But overall things might end
up looking more sane (and being cheaper) if "mmio_mfn" were simply used
instead.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -571,7 +571,7 @@ _sh_propagate(struct vcpu *v,
                             gfn_to_paddr(target_gfn),
                             mfn_to_maddr(target_mfn),
                             X86_MT_UC);
-                else if ( iommu_snoop )
+                else if ( is_iommu_enabled(d) && iommu_snoop )
                     sflags |= pat_type_2_pte_flags(X86_MT_WB);
                 else
                     sflags |= get_pat_flags(v,
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
     if ( !acpi_disabled )
     {
         ret = acpi_dmar_init();
+
+#ifndef iommu_snoop
+        /* A command line override for snoop control affects VT-d only. */
+        if ( ret )
+            iommu_snoop = true;
+#endif
+
         if ( ret == -ENODEV )
             ret = acpi_ivrs_init();
     }



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:48:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 08:48:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476870.739277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGFja-0005jy-02; Fri, 13 Jan 2023 08:48:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476870.739277; Fri, 13 Jan 2023 08:48:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGFjZ-0005jr-SW; Fri, 13 Jan 2023 08:48:21 +0000
Received: by outflank-mailman (input) for mailman id 476870;
 Fri, 13 Jan 2023 08:48:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGFjY-00056k-KH
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 08:48:20 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2072.outbound.protection.outlook.com [40.107.6.72])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 07466450-931f-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 09:48:19 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7433.eurprd04.prod.outlook.com (2603:10a6:102:86::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 08:48:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 08:48:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07466450-931f-11ed-91b6-6bf2151ebd3b
Message-ID: <b05d3911-a6c7-68f1-0e48-255630ab6516@suse.com>
Date: Fri, 13 Jan 2023 09:48:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 2/2] x86/shadow: further correct MMIO handling in
 _sh_propagate()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, Xenia Ragiadakou <burzalodowa@gmail.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
In-Reply-To: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

While c61a6f74f80e ("x86: enforce consistent cachability of MMIO
mappings") correctly converted one !mfn_valid() check there, two others
were wrongly left untouched: Both cachability control and log-dirty
tracking ought to be uniformly handled/excluded for all (non-)MMIO
ranges, not just ones qualifiable by mfn_valid().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Note that this is orthogonal to there looking to be plans to undo other
aspects of said commit (XSA-154).

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -543,8 +543,7 @@ _sh_propagate(struct vcpu *v,
      * caching attributes in the shadows to match what was asked for.
      */
     if ( (level == 1) && is_hvm_domain(d) &&
-         (!mfn_valid(target_mfn) ||
-          !is_special_page(mfn_to_page(target_mfn))) )
+         (mmio_mfn || !is_special_page(mfn_to_page(target_mfn))) )
     {
         int type;
 
@@ -655,8 +654,7 @@ _sh_propagate(struct vcpu *v,
      * (We handle log-dirty entirely inside the shadow code, without using the
      * p2m_ram_logdirty p2m type: only HAP uses that.)
      */
-    if ( level == 1 && unlikely(shadow_mode_log_dirty(d)) &&
-         mfn_valid(target_mfn) )
+    if ( level == 1 && unlikely(shadow_mode_log_dirty(d)) && !mmio_mfn )
     {
         if ( ft & FETCH_TYPE_WRITE )
             paging_mark_dirty(d, target_mfn);



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:53:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 08:53:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476879.739288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGFoY-0007JA-Ke; Fri, 13 Jan 2023 08:53:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476879.739288; Fri, 13 Jan 2023 08:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGFoY-0007J3-HY; Fri, 13 Jan 2023 08:53:30 +0000
Received: by outflank-mailman (input) for mailman id 476879;
 Fri, 13 Jan 2023 08:53:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGFoW-0007Ix-S9
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 08:53:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGFoV-0004ch-Mc; Fri, 13 Jan 2023 08:53:27 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.6.109]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGFoV-0001bx-G3; Fri, 13 Jan 2023 08:53:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <0d736370-5dd9-637d-c6d2-74dfb7e4209e@xen.org>
Date: Fri, 13 Jan 2023 08:53:25 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 1/8] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <20230111143826.3224-2-luca.fancellu@arm.com>
 <e37e5564-e7b9-c9d2-1360-171c014649c7@xen.org>
 <85F9C725-816A-46EE-AD0C-2725AE13F14C@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <85F9C725-816A-46EE-AD0C-2725AE13F14C@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 12/01/2023 10:46, Luca Fancellu wrote:
> 
> 
>> On 11 Jan 2023, at 17:16, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> As this is an RFC, I will be mostly making general comments.
> 
> Hi Julien,
> 
> Thank you.
> 
>>
>> On 11/01/2023 14:38, Luca Fancellu wrote:
>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>> index 99577adb6c69..8ea3843ea8e8 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
>>>       /* VGIC */
>>>       gic_restore_state(n);
>>>   +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>
>> Shouldn't this need an isb() afterwards to ensure that any previously trapped registers will be accessible?
> 
> Yes, you are right. Would it be ok for you if I move this before gic_restore_state(), since it has
> an isb() inside? This is to limit isb() usage. I could also add a comment so it isn't forgotten.

I would prefer that we not rely on gic_restore_state() having an 
isb(), because this could change in the future (although that's unlikely).

Looking at the context switch code, I think we can move the call to 
restore the floating point registers towards the end of the helper and 
use one of the existing isb()s for our purpose.


>>> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>>>     void init_traps(void)
>>>   {
>>> +    register_t cptr_bits = get_default_cptr_flags();
>>>       /*
>>>        * Setup Hyp vector base. Note they might get updated with the
>>>        * branch predictor hardening.
>>> @@ -135,17 +151,15 @@ void init_traps(void)
>>>       /* Trap CP15 c15 used for implementation defined registers */
>>>       WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>>>   -    /* Trap all coprocessor registers (0-13) except cp10 and
>>> -     * cp11 for VFP.
>>> -     *
>>> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
>>> -     *
>>> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
>>> -     * RES1, i.e. they would trap whether we did this write or not.
>>> +#ifdef CONFIG_ARM64_SVE
>>> +    /*
>>> +     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
>>> +     * trapping again or not will be handled on vcpu creation/scheduling later
>>>        */
>>
>> Instead of enabling by default at boot, can we try to enable/disable only when this is strictly needed?
> 
> Yes, we could un-trap inside compute_max_zcr() just before accessing SVE resources and trap
> again when finished. Would this approach be ok for you?

Yes.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 08:55:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 08:55:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476885.739299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGFq0-0007qv-VA; Fri, 13 Jan 2023 08:55:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476885.739299; Fri, 13 Jan 2023 08:55:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGFq0-0007qo-RQ; Fri, 13 Jan 2023 08:55:00 +0000
Received: by outflank-mailman (input) for mailman id 476885;
 Fri, 13 Jan 2023 08:54:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGFpy-0007qi-N7
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 08:54:58 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2073.outbound.protection.outlook.com [40.107.21.73])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f44c7f60-931f-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 09:54:57 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7433.eurprd04.prod.outlook.com (2603:10a6:102:86::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 08:54:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 08:54:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f44c7f60-931f-11ed-91b6-6bf2151ebd3b
Message-ID: <d88834a2-dde3-5438-e5a2-2bdfb25be4c3@suse.com>
Date: Fri, 13 Jan 2023 09:54:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 00/41] xen/arm: Add Armv8-R64 MPU support to Xen -
 Part#1
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 13.01.2023 06:28, Penny Zheng wrote:
>  xen/arch/x86/Kconfig                      |    1 +
>  xen/common/Kconfig                        |    6 +
>  xen/common/Makefile                       |    2 +-
>  xen/include/xen/vmap.h                    |   93 +-

I would like to take a look at these non-Arm changes, but wading through
40 patches just to find them doesn't seem very reasonable, and the patch
titles don't look like they'd help there either. For such a pretty large
series, could you please help non-Arm folks by pointing out in some way
where the non-Arm changes actually are?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:04:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:04:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476891.739310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGFz2-0000xe-Qk; Fri, 13 Jan 2023 09:04:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476891.739310; Fri, 13 Jan 2023 09:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGFz2-0000xX-Ns; Fri, 13 Jan 2023 09:04:20 +0000
Received: by outflank-mailman (input) for mailman id 476891;
 Fri, 13 Jan 2023 09:04:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGFz1-0000xR-Vr
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:04:19 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2052.outbound.protection.outlook.com [40.107.247.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42864f57-9321-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 10:04:17 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6902.eurprd04.prod.outlook.com (2603:10a6:20b:107::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 09:04:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 09:04:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42864f57-9321-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R3Z99oOIQvg+/iWYUa00yVVIpSEWpgjmm+TB0v8EqQKCUpVZSIjFu2dGBcitcJpPG9+lX+4OUh2I338OhFQfP0aooxGgOIVuEIN6q8z5yWCMsCQ08CE0HlA94PGp8+MBElJIEDi1OvCrH+3zS8hTnARAgcgFxv3bFX2z+bti2M/b8esTXO5uWoOLK7nb0hwEM/Bw76s7xTHeQYhGga8Gq81ryfVhq3HlPD7VlQCoutY54HbP5d4Wcms3FTph/elB5KxWa2nzf+0ZPT3OlpAtJw87jBNw2JC1QJdxT7Dan1ElCctXGhGs/sKHQdG5kvi+Idg0TYd9WlWU2n9qk5ymig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ikpCPRyBnn2PQBXs6zphf8ZMYfTSU9wBgd0hKfIBIVE=;
 b=miLAhXV4kWJ/uqD6Z5WrDDBjm7u/AN+zogL5m4M+g2G1jpNo7at8v8LAe8Y+VqpAQskEPsz4p0MC3AeJGlj1HPDEXi4BEjcgtKpSmxhN60QMFHpHaQ9W8gr7stPgOKyeB4C6hpLYMxX5fDyI11HaaM19axeJZ7Iwefc3o8Y/QR1WNUHGPMiUvBEG9hmCXMCulwfZ2blPAOslWIdzx27irc12+E96U1vFXe2r1dt3C7ZT++qEyHvQKFw9ROuSIu8vOlBjBnNXq9h3W5SyRjJjP3tIA5Bh/sAR3zmJUA4ma/gJWNaRckzjyoVueSXYIOMyDoSzwuE8bgVWOuIPORWPuA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ikpCPRyBnn2PQBXs6zphf8ZMYfTSU9wBgd0hKfIBIVE=;
 b=yzwomtxAUiCecr9YolNHIKyXR6YVJ7eUZIQVeDT41/QTD1VefZmnJrXs4p1iX2knX4TbRrUfEFYYzTUpM8JwUDSfJxxrRDicB0QCabUnbIUt1gTBgoHYoow1zqiKIBGFi8z8WqIrc9RA4JSzMP8XlEBczBV7f5hBjU4lPkidyGzPKK3hrDe8JLgcruJca9jyWM0UHTUpLDw1d+q59Zoa0Y+ljsqikOqCgZat4JKUUMfP0IynuS8OQyN8o7LjFBFxy3ZJgmfIXgzSt8PgS58IcSFGyzNprDW5j54g1LbYp/HqAnzv33A133mz7RpUGZQomWRbLQM+3IheagoS/wNElA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <635fb465-9350-70b0-3c99-cbb87bea58d1@suse.com>
Date: Fri, 13 Jan 2023 10:04:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/8] x86/boot: Sanitise PKRU on boot
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-2-andrew.cooper3@citrix.com>
 <27d830b1-a32b-1368-3c0e-e5de15da5000@suse.com>
 <fcfe9344-ceac-aa80-404b-55fb7a75fdeb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <fcfe9344-ceac-aa80-404b-55fb7a75fdeb@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0161.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6902:EE_
X-MS-Office365-Filtering-Correlation-Id: 938a1971-08e9-4977-3102-08daf545256e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 938a1971-08e9-4977-3102-08daf545256e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 09:04:15.9779
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Wdfp9WTkKFcaZPyjRTrOBkFpx0kVavrIoglAWsSONpPEn88RD2/z7TN/ENDaiIw/4qFGiy9o2Sod4GILdYthFQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6902

On 12.01.2023 18:07, Andrew Cooper wrote:
> On 12/01/2023 12:47 pm, Jan Beulich wrote:
>> On 10.01.2023 18:18, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/cpu/common.c
>>> +++ b/xen/arch/x86/cpu/common.c
>>> @@ -936,6 +936,9 @@ void cpu_init(void)
>>>  	write_debugreg(6, X86_DR6_DEFAULT);
>>>  	write_debugreg(7, X86_DR7_DEFAULT);
>>>  
>>> +	if (cpu_has_pku)
>>> +		wrpkru(0);
>> What about the BSP during S3 resume? Shouldn't we play safe there too, just
>> in case?
> 
> Out of S3, I think it's reasonable to rely on proper reset values for
> pkru, and any issues of it being "wrong" should be fixed when we
> reload d0v0's XSAVE state.

Fair enough:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:06:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:06:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476897.739321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGG1M-0001X9-6i; Fri, 13 Jan 2023 09:06:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476897.739321; Fri, 13 Jan 2023 09:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGG1M-0001X2-3l; Fri, 13 Jan 2023 09:06:44 +0000
Received: by outflank-mailman (input) for mailman id 476897;
 Fri, 13 Jan 2023 09:06:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGG1K-0001Ww-TR
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:06:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGG1J-0004u8-L5; Fri, 13 Jan 2023 09:06:41 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.6.109]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGG1J-0002Po-EF; Fri, 13 Jan 2023 09:06:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=fWx2t2vjKMyHxD03g44wYPIrZcah6J8RTFBmmy80OyA=; b=Wd8DAAT3pfpZVu1BJCIPCD/ESu
	ammoZJQUlsAKLmvYxBOriIVM16PPEDKdlKjM+Iiq8DFawrfGW6+dglWbu7bz1OtpsrUTzLc2rWWL+
	mSlVg6gIDo/PyG8eAlwjR1pha5T++rNalfwiyTXIE0BrCVbfmKCxSM0FlQoEyiSIG9Bk=;
Message-ID: <096b4129-ace6-01b6-85c1-b153d3bc4ada@xen.org>
Date: Fri, 13 Jan 2023 09:06:38 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 2/8] xen/arm: add sve_vl_bits field to domain
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <20230111143826.3224-3-luca.fancellu@arm.com>
 <91b5c7db-ec9b-efa6-f5cf-dc5e8b176db6@xen.org>
 <9168CB2A-A1F1-43E0-9DAD-BB31AD3979E0@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <9168CB2A-A1F1-43E0-9DAD-BB31AD3979E0@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 12/01/2023 10:54, Luca Fancellu wrote:
>> On 11 Jan 2023, at 17:27, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> On 11/01/2023 14:38, Luca Fancellu wrote:
>>> Add an sve_vl_bits field to the arch_domain and xen_arch_domainconfig
>>> structures, to give the domain information about the SVE feature and
>>> the number of SVE register bits that are allowed for this domain.
>>> The field is also used to allow or forbid a domain to use SVE, since
>>> a value of zero means the guest is not allowed to use the feature.
>>> When the guest is allowed to use SVE, the zcr_el2 register is updated
>>> on context switch to restrict the domain to the allowed number of
>>> bits; this value is the minimum of the chosen value and the platform
>>> supported value.
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>>   xen/arch/arm/arm64/sve.c             |  9 ++++++
>>>   xen/arch/arm/domain.c                | 45 ++++++++++++++++++++++++++++
>>>   xen/arch/arm/include/asm/arm64/sve.h | 12 ++++++++
>>>   xen/arch/arm/include/asm/domain.h    |  6 ++++
>>>   xen/include/public/arch-arm.h        |  2 ++
>>>   xen/include/public/domctl.h          |  2 +-
>>>   6 files changed, 75 insertions(+), 1 deletion(-)
>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>> index 326389278292..b7695834f4ba 100644
>>> --- a/xen/arch/arm/arm64/sve.c
>>> +++ b/xen/arch/arm/arm64/sve.c
>>> @@ -6,6 +6,7 @@
>>>    */
>>>     #include <xen/types.h>
>>> +#include <asm/cpufeature.h>
>>>   #include <asm/arm64/sve.h>
>>>   #include <asm/arm64/sysregs.h>
>>>   @@ -36,3 +37,11 @@ register_t vl_to_zcr(uint16_t vl)
>>>   {
>>>       return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
>>>   }
>>> +
>>> +/* Get the system sanitized value for VL in bits */
>>> +uint16_t get_sys_vl_len(void)
>>> +{
>>> +    /* ZCR_ELx len field is ((len+1) * 128) = vector bits length */
>>> +    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>>> +            SVE_VL_MULTIPLE_VAL;
>>> +}
>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>> index 8ea3843ea8e8..27f38729302b 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -13,6 +13,7 @@
>>>   #include <xen/wait.h>
>>>     #include <asm/alternative.h>
>>> +#include <asm/arm64/sve.h>
>>>   #include <asm/cpuerrata.h>
>>>   #include <asm/cpufeature.h>
>>>   #include <asm/current.h>
>>> @@ -183,6 +184,11 @@ static void ctxt_switch_to(struct vcpu *n)
>>>         WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>>   +#ifdef CONFIG_ARM64_SVE
>>> +    if ( is_sve_domain(n->domain) )
>>> +        WRITE_SYSREG(n->arch.zcr_el2, ZCR_EL2);
>>> +#endif
>>
>> I would actually expect that is_sve_domain() returns false when the SVE is not enabled. So can we simply remove the #ifdef?
> 
> I was tricked by it too: the arm32 build will fail because it doesn't know ZCR_EL2

Ok. In which case, I think we should move the call to sve_restore_state().

> 
>>
>>> +
>>>       /* VFP */
>>>       vfp_restore_state(n);
>>>   @@ -551,6 +557,11 @@ int arch_vcpu_create(struct vcpu *v)
>>>       v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>>>         v->arch.cptr_el2 = get_default_cptr_flags();
>>> +    if ( is_sve_domain(v->domain) )
>>> +    {
>>> +        v->arch.cptr_el2 &= ~HCPTR_CP(8);
>>> +        v->arch.zcr_el2 = vl_to_zcr(v->domain->arch.sve_vl_bits);
>>> +    }
>>>         v->arch.hcr_el2 = get_default_hcr_flags();
>>>   @@ -595,6 +606,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>       unsigned int max_vcpus;
>>>       unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>       unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>> +    unsigned int sve_vl_bits = config->arch.sve_vl_bits;
>>>         if ( (config->flags & ~flags_optional) != flags_required )
>>>       {
>>> @@ -603,6 +615,36 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>           return -EINVAL;
>>>       }
>>>   +    /* Check feature flags */
>>> +    if ( sve_vl_bits > 0 ) {
>>> +        unsigned int zcr_max_bits;
>>> +
>>> +        if ( !cpu_has_sve )
>>> +        {
>>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>> +            return -EINVAL;
>>> +        }
>>> +        else if ( !is_vl_valid(sve_vl_bits) )
>>> +        {
>>> +            dprintk(XENLOG_INFO, "Unsupported SVE vector length (%u)\n",
>>> +                    sve_vl_bits);
>>> +            return -EINVAL;
>>> +        }
>>> +        /*
>>> +         * get_sys_vl_len() is the common safe value among all cpus, so if the
>>> +         * value specified by the user is above that value, use the safe value
>>> +         * instead.
>>> +         */
>>> +        zcr_max_bits = get_sys_vl_len();
>>> +        if ( sve_vl_bits > zcr_max_bits )
>>> +        {
>>> +            config->arch.sve_vl_bits = zcr_max_bits;
>>> +            dprintk(XENLOG_INFO,
>>> +                    "SVE vector length lowered to %u, safe value among CPUs\n",
>>> +                    zcr_max_bits);
>>> +        }
>>
>> I don't think Xen should "downgrade" the value. Instead, this should be the decision from the tools. So we want to find a different way to reproduce the value (Andrew may have some ideas here as he was looking at it).
> 
> Can you explain this in more details?

I would extend XEN_SYSCTL_physinfo to return the maximum SVE vector
length supported by the hardware.

Libxl would then read the value and compare to what the user requested. 
This would then be up to the toolstack to decide what to do.

> By the tools you mean xl?

I think libxl should do strict checking. If we also want to reduce what 
the user requested, then this part should be in xl.

> This would be ok for DomUs, but how to do it for Dom0?

Then we should fail to create dom0 and say the vector-length requested 
is not supported.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:10:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:10:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476904.739332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGG5G-00030p-SB; Fri, 13 Jan 2023 09:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476904.739332; Fri, 13 Jan 2023 09:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGG5G-00030i-Na; Fri, 13 Jan 2023 09:10:46 +0000
Received: by outflank-mailman (input) for mailman id 476904;
 Fri, 13 Jan 2023 09:10:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGG5F-00030a-G0
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:10:45 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2086.outbound.protection.outlook.com [40.107.241.86])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 28b545e1-9322-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 10:10:44 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7982.eurprd04.prod.outlook.com (2603:10a6:102:c4::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 09:10:42 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 09:10:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28b545e1-9322-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jKLQI6nQJU9ELfeMLlKuGkgHq0aOumlQKZ/SVt+oVi+Jg6iGTrk5xjHpmeyX0kMkqVR0S9cl1TRqhWhoTAY3SHI3wofe4k+wr+7wkso1qBdIKkju5Q9d2Cm88OqU31AipNSvqvbnmwuz5/tSzHD70onqEg/g2/QVm7WKo266pzYLMua0Hx286SSq6EiUe028jYX3W4y0swlzQsMRLy/LzYHa86+OBZ/HSBD3pC3l4hwDn0DcWQ2G0ZZIVC+RAtgWDl0rqF9LLEinqrEbvjFjPeBbTsqJ9ZqIgTgaPhIL/nNki8O8cAff+4nzu6BhtAvHyH6wzT2et8BJSA2X5gemzQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mEpZ5SzWvp8KWJ5Cil7cQwtmvjg2LQtZa/sb2AiU7zQ=;
 b=NhD8RoYuxXj7DsFGRgqC+AVuLkEQxYfHxbDWY4uJFuNLsEHSTWjbaPx9RBpK4cgej7HMwFFVqOvkHBPDY7kBRGxtZXKD9A0njaj3tdJSiKZX05ubTdGtv4Yn5H4DHzb/nbzl00Aq8nl67kD0SR0uI3oSrSLukTQ1pgoyAv1PWur0WD3Rh0SM4byLosCPHQ5M47N5m5BRSSC+q65ZJoNamcLAkJRwrgXtPrz2vqQMZOHk9yNqFupxSP0nvZ9J9LZq0EzliE8N0igdmIE0epNpX9UQhVINTAuj+j1gLYcFLR1B6ZMbWWzgB6f32tMONCWHgrZq6p92E/s2V105y1I1Gw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mEpZ5SzWvp8KWJ5Cil7cQwtmvjg2LQtZa/sb2AiU7zQ=;
 b=Wy5liNjL5lg9ONxR46u6ajEaXY8UsR1z7nEiFEobovRznBMx8jZp45NFAr3TjnZ0jGu8tZr2R9nlVOBfcSniayW+05VgyuH6BgSJXiMUQtFf5unYtqq7sc8SkOawMy6/rziACO3CkEWcPNJn1HM49SgG70ANso/PgUJWV9HUh1qbS19OKH1mtOqTsJxjAoPtxPs5whcVr7YxjZs9RDm5UcfXKs3w/j71QyOZzbgJoXMvW/VaLLtqjyFqKISLdNeGZSNVwsngf6mS+Zcqrg1QpE4dSoy/lbhSNkDP2tYs/BU05G/EIwdRpa+ENXHDDjL3F5EKXklYAOGzYH0Ncx1UwA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <07b71b77-6979-6bc6-508f-4041a978392c@suse.com>
Date: Fri, 13 Jan 2023 10:10:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/8] x86/iommu: call pi_update_irte through an
 hvm_function callback
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-7-burzalodowa@gmail.com>
 <aa20eb4d-7b18-9bbf-718f-2fe5fa896713@suse.com>
 <04e5b744-bb61-b3d1-7d60-97bb710f7584@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <04e5b744-bb61-b3d1-7d60-97bb710f7584@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0078.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7982:EE_
X-MS-Office365-Filtering-Correlation-Id: 07d6bbbe-8dea-4fa1-2872-08daf5460b63
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 07d6bbbe-8dea-4fa1-2872-08daf5460b63
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 09:10:41.5628
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xIjdgTy8ky4RU2G2clgdBkr6ewoSUORa9M6BcZi/BwpMD6d39bq3nje03QXDiO+KMhRr8IXkB8/ADoXuSg4dWA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7982

On 13.01.2023 08:30, Xenia Ragiadakou wrote:
> On 1/12/23 14:16, Jan Beulich wrote:
>> On 04.01.2023 09:45, Xenia Ragiadakou wrote:
>>> @@ -250,6 +252,9 @@ struct hvm_function_table {
>>>           /* Architecture function to setup TSC scaling ratio */
>>>           void (*setup)(struct vcpu *v);
>>>       } tsc_scaling;
>>> +
>>> +    int (*pi_update_irte)(const struct vcpu *v,
>>> +                          const struct pirq *pirq, uint8_t gvec);
>>>   };
>>
>> Please can this be moved higher up, e.g. next to .
> 
> Right after handle_eoi would be ok?

Yes. I'm sorry for sending out the earlier mail without completing the
sentence.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:13:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:13:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476910.739342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGG7Z-0003ZT-6C; Fri, 13 Jan 2023 09:13:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476910.739342; Fri, 13 Jan 2023 09:13:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGG7Z-0003ZM-3Z; Fri, 13 Jan 2023 09:13:09 +0000
Received: by outflank-mailman (input) for mailman id 476910;
 Fri, 13 Jan 2023 09:13:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGG7X-0003Z0-Ev
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:13:07 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2080.outbound.protection.outlook.com [40.107.22.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7d889a5a-9322-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 10:13:06 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8743.eurprd04.prod.outlook.com (2603:10a6:10:2e1::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 09:13:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 09:13:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d889a5a-9322-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UqMdE8TRkZlwe7RW6FFwmS0gNI30pCf2CvesE5gndbc=;
 b=C1pWcBHviAzQxhhQWHuRancXfrzEh2SWBXSgsD86XdSiJWcbvPiJNl3V+korMFDBe5nz9E0kfl8gNSuePg9cMg9GMb8IPXbFMrqE8GQo7JI4PCdgO5vCSdzVlKAkaXoVmqMheBExgIo3zcFhkfchcYNRGnl+rIe+HdTxC6XIHkCSX7cxII412Ockh6FNBwi4EhIqMLgoiH4JF8QxdAbxLoxpOmGacd7+X9tWur8T5utfd6R4Qi0ScDqQeapgJM6GwmELcazBCEHF6xjDxQqFBRQMlbLt4SAhiJIdyvnSwO5ywiqd4YBNAK8LxTLJjEFLP7HRU8bIROxoGcpm0S0w8g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4144bf32-1eaf-b85c-7a0a-b734d6267a77@suse.com>
Date: Fri, 13 Jan 2023 10:13:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/8] x86/iommu: call pi_update_irte through an
 hvm_function callback
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230104084502.61734-1-burzalodowa@gmail.com>
 <20230104084502.61734-7-burzalodowa@gmail.com>
 <aa20eb4d-7b18-9bbf-718f-2fe5fa896713@suse.com>
 <6c5a4c07-e942-a683-8579-a0f9d5971c7b@suse.com>
 <e16daaf6-5f6a-d0a3-802c-0bfc0b6876a7@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e16daaf6-5f6a-d0a3-802c-0bfc0b6876a7@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0128.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8743:EE_
X-MS-Office365-Filtering-Correlation-Id: fca0c8c2-de8a-4587-6b42-08daf546600c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fca0c8c2-de8a-4587-6b42-08daf546600c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 09:13:03.6007
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BRjpHAZr4ok9SOz3sC30aCuMc2lZgKlM26DnkFHy46/KdJ0lXkNF432kxb8dmtPewJVAg97sAvVPVASp8iX1Xw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8743

On 13.01.2023 08:44, Xenia Ragiadakou wrote:
> 
> On 1/12/23 14:37, Jan Beulich wrote:
>> On 12.01.2023 13:16, Jan Beulich wrote:
>>> On 04.01.2023 09:45, Xenia Ragiadakou wrote:
>>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>>> @@ -2143,6 +2143,14 @@ static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
>>>>       return pi_test_pir(vec, &v->arch.hvm.vmx.pi_desc);
>>>>   }
>>>>   
>>>> +static int cf_check vmx_pi_update_irte(const struct vcpu *v,
>>>> +                                       const struct pirq *pirq, uint8_t gvec)
>>>> +{
>>>> +    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
>>>> +
>>>> +    return pi_update_irte(pi_desc, pirq, gvec);
>>>> +}
>>>
>>> This being the only caller of pi_update_irte(), I don't see the point in
>>> having the extra wrapper. Adjust pi_update_irte() such that it can be
>>> used as the intended hook directly. Plus perhaps prefix it with vtd_.
>>
>> Plus move it to vtd/x86/hvm.c (!HVM builds shouldn't need it), albeit I
>> realize this could be done independent of your work. In principle the
>> function shouldn't be VT-d specific (and could hence live in x86/hvm.c),
>> as msi_msg_write_remap_rte() is already available as IOMMU hook anyway,
>> provided struct pi_desc turns out compatible with what's going to be
>> needed for AMD.
> 
> Since the posted interrupt descriptor is vmx specific while 
> msi_msg_write_remap_rte is iommu specific, can I propose the following:
> 
> - Keep the name as is (i.e. vmx_pi_update_irte) and keep its definition 
> in xen/arch/x86/hvm/vmx/vmx.c
> 
> - Open code pi_update_irte() inside the body of vmx_pi_update_irte() but 
> replace the Intel-specific msi_msg_write_remap_rte() with the generic 
> iommu_update_ire_from_msi().
> 
> Does this approach make sense?

Why not - it decouples one place from the assumed "CPU vendor" == "IOMMU vendor".

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:16:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:16:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476916.739353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGAQ-0004Bz-KC; Fri, 13 Jan 2023 09:16:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476916.739353; Fri, 13 Jan 2023 09:16:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGAQ-0004Bs-HT; Fri, 13 Jan 2023 09:16:06 +0000
Received: by outflank-mailman (input) for mailman id 476916;
 Fri, 13 Jan 2023 09:16:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGGAP-0004Bk-Ia
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:16:05 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2079.outbound.protection.outlook.com [40.107.22.79])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7a2b533-9322-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 10:16:04 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7982.eurprd04.prod.outlook.com (2603:10a6:102:c4::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 09:16:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 09:16:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7a2b533-9322-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H/EGN5qboOgBGC012glyK/C4KBiGb90uJnhpWogn/v8=;
 b=F35NrIeC6MkYr1JyXwHRdc35hUgz/BtKqWEBdolVOMK6g2U0Y0u7SNBKjZxdPL9lLnv2qiFBwcmef3595DDnS2jFPD/3j78RjA/cK0dDPZC5BqdzrdjvYKX3GLiYtu08ZhhiD57wVpqYH0SKjHl+pxCfd0n6CEAT/tU/z4/4mbs4DxnPDnUzD7F1TeZRosrsrqFRaLVsuC4ql/0LluJJPTrUrRc10tYyakeTUYGJOYMhzY07EXlieiibXL72USoFE4eB87gKEDfInnPslzpgJheqdIuw71xSUQu4LibxNfErdxb8A9JhtCLZCBFDWQRwxyrf9ZjASa2EZ2KIOttVlA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <386a58bc-6d16-475e-ffd5-a3340adbe4b3@suse.com>
Date: Fri, 13 Jan 2023 10:16:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 05/22] x86/srat: vmap the pages for acpi_slit
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-6-julien@xen.org>
 <ca02a313-0fa2-8041-3e8f-d467c3e99fb6@suse.com>
 <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
 <ade9f97d-aa28-bd7e-552c-35bd707bab29@suse.com>
 <a607223f-1cd5-5b32-4d9b-500496745786@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a607223f-1cd5-5b32-4d9b-500496745786@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0137.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7982:EE_
X-MS-Office365-Filtering-Correlation-Id: 80f1fb61-c02d-4e89-af96-08daf546cac0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 80f1fb61-c02d-4e89-af96-08daf546cac0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 09:16:02.9799
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sGZX8WnNxv2RMUv7dCtdwCYNxaJXgAzDXqFC1TDi5QBfAkHdi6XaYCu4V68N/6Mslownwlaq23MhCf+hLO6yqg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7982

On 13.01.2023 00:15, Julien Grall wrote:
> Hi,
> 
> On 04/01/2023 10:23, Jan Beulich wrote:
>> On 23.12.2022 12:31, Julien Grall wrote:
>>> On 20/12/2022 15:30, Jan Beulich wrote:
>>>> On 16.12.2022 12:48, Julien Grall wrote:
>>>>> From: Hongyan Xia <hongyxia@amazon.com>
>>>>>
>>>>> This avoids the assumption that boot pages are in the direct map.
>>>>>
>>>>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> However, ...
>>>>
>>>>> --- a/xen/arch/x86/srat.c
>>>>> +++ b/xen/arch/x86/srat.c
>>>>> @@ -139,7 +139,8 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
>>>>>    		return;
>>>>>    	}
>>>>>    	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>>>> -	acpi_slit = mfn_to_virt(mfn_x(mfn));
>>>>> +	acpi_slit = vmap_contig_pages(mfn, PFN_UP(slit->header.length));
>>>>
>>>> ... with the increased use of vmap space the VA range used will need
>>>> growing. And that's perhaps better done ahead of time than late.
>>>
>>> I will have a look at increasing the vmap area.
>>>
>>>>
>>>>> +	BUG_ON(!acpi_slit);
>>>>
>>>> Similarly relevant for the earlier patch: It would be nice if boot
>>>> failure for optional things like NUMA data could be avoided.
>>>
>>> If you can't map (or allocate the memory), then you are probably in a
>>> very bad situation because both should really not fail at boot.
>>>
>>> So I think it is correct to crash early because the admin will be able
>>> to look at what went wrong. Otherwise, it may be missed in the noise.
>>
>> Well, I certainly can see one taking this view. However, at least in
>> principle allocation (or mapping) may fail _because_ of NUMA issues.
> 
> Right. I read this as the user will likely want to add "numa=off" on the 
> command line.
> 
>> At which point it would be better to boot with NUMA support turned off
> I have to disagree with "better" here. This may work for a user with a 
> handful of hosts. But for large scale setup, you will really want a 
> failure early rather than having a host booting with an expected feature 
> disabled (the NUMA issues may be due to broken HW).
> 
> It is better to fail and then ask the user to specify "numa=off". At
> least the person made a conscious decision to turn off the feature.

Yet how would the observing admin make the connection between the BUG_ON()
that triggered and the need to add "numa=off" to the command line,
without knowing Xen internals?

> I am curious to hear the opinion from the others.

So am I.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:16:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:16:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476923.739365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGAw-0004mN-1z; Fri, 13 Jan 2023 09:16:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476923.739365; Fri, 13 Jan 2023 09:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGAv-0004mF-TZ; Fri, 13 Jan 2023 09:16:37 +0000
Received: by outflank-mailman (input) for mailman id 476923;
 Fri, 13 Jan 2023 09:16:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGGAu-0004m8-OZ
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:16:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGGAq-0005Fu-Bm; Fri, 13 Jan 2023 09:16:32 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.6.109]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGGAq-0002uK-4q; Fri, 13 Jan 2023 09:16:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ZGKf6fIGE2gy8ttV8y3ZORWY2suqBf57lLpjVK0Yhlc=; b=1zhs7qSpXsZ/RpAQu/u7YXdOtb
	gokDF5itbiwal6sAKol24GAZEPZJMRO/Awap2HuWGdKFIUe9IeBcDfuekGvwVdUfbtgRk92Wrmy0K
	kOcUYpPGwELNmPfTs52oZjw0p7dwqRYCVpx/FwFehrgF6Y6rgpj6Utkk94J+iHpbZzH0=;
Message-ID: <3e3b8ada-3fb0-2bbf-e6a2-1aea712132e1@xen.org>
Date: Fri, 13 Jan 2023 09:16:29 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 00/41] xen/arm: Add Armv8-R64 MPU support to Xen -
 Part#1
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>, Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <d88834a2-dde3-5438-e5a2-2bdfb25be4c3@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d88834a2-dde3-5438-e5a2-2bdfb25be4c3@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/01/2023 08:54, Jan Beulich wrote:
> On 13.01.2023 06:28, Penny Zheng wrote:
>>   xen/arch/x86/Kconfig                      |    1 +
>>   xen/common/Kconfig                        |    6 +
>>   xen/common/Makefile                       |    2 +-
>>   xen/include/xen/vmap.h                    |   93 +-
> 
> I would like to take a look at these non-Arm changes, but I view it as not
> very reasonable to wade through 40 patches just to find those changes.

Right, but that's the purpose of the different CC list on each patch. 
AFAICT, Penny respected that, and you should have been CC'd on the three 
patches (#30, #31, #32) touching common/x86 code.

> The
> titles don't look to help in that either. For such pretty large series,
> could you please help non-Arm folks by pointing out in some way where the
> non-Arm changes actually are?

See above. I am not entirely sure what else you are requesting here. Do 
you want Penny to explicitly list the modified patches in the cover 
letter?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:17:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:17:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476929.739375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGBh-0005L2-9h; Fri, 13 Jan 2023 09:17:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476929.739375; Fri, 13 Jan 2023 09:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGBh-0005Kv-75; Fri, 13 Jan 2023 09:17:25 +0000
Received: by outflank-mailman (input) for mailman id 476929;
 Fri, 13 Jan 2023 09:17:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGGBg-0005Kf-6J
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:17:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGGBe-0005Gz-L1; Fri, 13 Jan 2023 09:17:22 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.6.109]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGGBe-0002v5-Ea; Fri, 13 Jan 2023 09:17:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=wgdQi0ayQyWcdd8JT7eyCXgDG7VV2ZJGqmkz8o6Fxb4=; b=PL6Iu2N/o8wWMB+b+i+ifYyeqw
	vorf4vhns6+Q94F5tmkmAXiIALuyp444GSdHOhjKaRP3vD5hCRwauphtEhUhcV4vXAOjNYvd1M9E2
	tkjqzCV6K2gAEA9nevejoxQWXhHyghakYQHaVj/JJ9XZ0iWnXKxMfKjpe58IlzyFDis8=;
Message-ID: <85d1bc6b-dda6-c65e-a13b-78ea4baa943f@xen.org>
Date: Fri, 13 Jan 2023 09:17:20 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 05/22] x86/srat: vmap the pages for acpi_slit
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-6-julien@xen.org>
 <ca02a313-0fa2-8041-3e8f-d467c3e99fb6@suse.com>
 <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
 <ade9f97d-aa28-bd7e-552c-35bd707bab29@suse.com>
 <a607223f-1cd5-5b32-4d9b-500496745786@xen.org>
 <386a58bc-6d16-475e-ffd5-a3340adbe4b3@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <386a58bc-6d16-475e-ffd5-a3340adbe4b3@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 13/01/2023 09:16, Jan Beulich wrote:
> On 13.01.2023 00:15, Julien Grall wrote:
>> Hi,
>>
>> On 04/01/2023 10:23, Jan Beulich wrote:
>>> On 23.12.2022 12:31, Julien Grall wrote:
>>>> On 20/12/2022 15:30, Jan Beulich wrote:
>>>>> On 16.12.2022 12:48, Julien Grall wrote:
>>>>>> From: Hongyan Xia <hongyxia@amazon.com>
>>>>>>
>>>>>> This avoids the assumption that boot pages are in the direct map.
>>>>>>
>>>>>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> However, ...
>>>>>
>>>>>> --- a/xen/arch/x86/srat.c
>>>>>> +++ b/xen/arch/x86/srat.c
>>>>>> @@ -139,7 +139,8 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
>>>>>>     		return;
>>>>>>     	}
>>>>>>     	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>>>>> -	acpi_slit = mfn_to_virt(mfn_x(mfn));
>>>>>> +	acpi_slit = vmap_contig_pages(mfn, PFN_UP(slit->header.length));
>>>>>
>>>>> ... with the increased use of vmap space the VA range used will need
>>>>> growing. And that's perhaps better done ahead of time than late.
>>>>
>>>> I will have a look to increase the vmap().
>>>>
>>>>>
>>>>>> +	BUG_ON(!acpi_slit);
>>>>>
>>>>> Similarly relevant for the earlier patch: It would be nice if boot
>>>>> failure for optional things like NUMA data could be avoided.
>>>>
>>>> If you can't map (or allocate the memory), then you are probably in a
>>>> very bad situation because both should really not fail at boot.
>>>>
>>>> So I think it is correct to crash early because the admin will be able
>>>> to look at what went wrong. Otherwise, it may be missed in the noise.
>>>
>>> Well, I certainly can see one taking this view. However, at least in
>>> principle allocation (or mapping) may fail _because_ of NUMA issues.
>>
>> Right. I read this as the user will likely want to add "numa=off" on the
>> command line.
>>
>>> At which point it would be better to boot with NUMA support turned off
>> I have to disagree with "better" here. This may work for a user with a
>> handful of hosts. But for a large-scale setup, you will really want an
>> early failure rather than having a host boot with an expected feature
>> disabled (the NUMA issues may be caused by broken HW).
>>
>> It is better to fail and then ask the user to specify "numa=off". At
>> least the person made a conscious decision to turn off the feature.
> 
> Yet how would the observing admin make the connection from the BUG_ON()
> that triggered and the need to add "numa=off" to the command line,
> without knowing Xen internals?

I am happy to switch to a panic() that suggests turning off NUMA.

> 
>> I am curious to hear the opinion from the others.
> 
> So am I.
> 
> Jan

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:18:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:18:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476935.739387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGCH-0005t3-Hs; Fri, 13 Jan 2023 09:18:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476935.739387; Fri, 13 Jan 2023 09:18:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGCH-0005sw-Eo; Fri, 13 Jan 2023 09:18:01 +0000
Received: by outflank-mailman (input) for mailman id 476935;
 Fri, 13 Jan 2023 09:18:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TZVY=5K=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pGGCE-0005mi-Hm
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:18:00 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2a6a04bd-9323-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 10:17:57 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pGGCF-005xMC-Ns; Fri, 13 Jan 2023 09:17:59 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 518FB300094;
 Fri, 13 Jan 2023 10:17:46 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 088172013A2A1; Fri, 13 Jan 2023 10:17:46 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a6a04bd-9323-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=0yGEeUtjPEYmvw01tTdJaP6xQOH4LDHoqrSGg4IbmmE=; b=b7GY0Fs4Kh53b766Ze4bZu1Jc3
	sQKmwNr2hglGmJZJeqnF5joOSCDOx/qOi5bgtrJcB8WFpuJ7T/vyAl8+mvecgBYhnxnO8kMgyVhNN
	v6eGdd6cwqv6aDki48ixTT8FYF5Bo9vWzEsufvVsogSfgELN6F63++l4Tzd22nVG8g1l0pwBPKGUZ
	A3lGv9kjETaf0T7YYyxcnZmF0zSy4O/IuXY3ykZGulJhVGuYa7iuM6gad8wGL8NJpmfLyHe10Qn53
	L4tqvf37g6tcW6x7srF1FtAQdXPdGN10AJOJwjFgzHtEJBO5WP16pPTYtGMQSPOZNqUmxfdGlxZpV
	w393kA+A==;
Date: Fri, 13 Jan 2023 10:17:45 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	x86@kernel.org
Subject: Re: [RFC][PATCH 0/6] x86: Fix suspend vs retbleed=stuff
Message-ID: <Y8EhucZfQ2IyJtnU@hirez.programming.kicks-ass.net>
References: <20230112143141.645645775@infradead.org>
 <20230113073938.1066227-1-joanbrugueram@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230113073938.1066227-1-joanbrugueram@gmail.com>

On Fri, Jan 13, 2023 at 07:39:38AM +0000, Joan Bruguera wrote:
> Hi Peter,
> 
> I tried your patches on both QEMU and my two (real) computers where
> s2ram with `retbleed=stuff` was failing and they wake up fine now.

Yay \o/

> However, I think some minor reviews are needed:
> 
> (1) I got a build error due to a symbol conflict between the
>     `restore_registers` in `arch/x86/include/asm/suspend_64.h` and the
>     one in `drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.c`.
> 
>     (I fixed it by renaming the one in `hw_gpio.c`, but it's worth
>      running an `allmodconfig` build just in case there's something else)

Urgh, must be my .config for not spotting that, will fix!

> (2) Tracing with QEMU I still see two `sarq $5, %gs:0x1337B33F` before
>     `%gs` is restored. Those correspond to the calls from
>     `secondary_startup_64` in `arch/x86/kernel/head_64.S` to
>     `verify_cpu` and `sev_verify_cbit`.
>     Those don't cause a crash but look suspicious, are they correct?
> 
>     (There are also some `sarq`s in the call to `early_setup_idt` from
>     `secondary_startup_64`, but `%gs` is restored immediately before)

OK, I'll have a look, thanks!


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:22:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:22:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476943.739397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGH1-0007O3-4b; Fri, 13 Jan 2023 09:22:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476943.739397; Fri, 13 Jan 2023 09:22:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGH1-0007Nw-1t; Fri, 13 Jan 2023 09:22:55 +0000
Received: by outflank-mailman (input) for mailman id 476943;
 Fri, 13 Jan 2023 09:22:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGGH0-0007Nq-64
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:22:54 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2040.outbound.protection.outlook.com [40.107.8.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id da6e05ff-9323-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 10:22:52 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7826.eurprd04.prod.outlook.com (2603:10a6:20b:234::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 09:22:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 09:22:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da6e05ff-9323-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fKNvTp7e5dVmVgOUTTvVrKoRus+Sx6Tb37yxoEJIcOEudNqDPF0x8w7GLIJ6YsG//L0MZaGUKkDTBp8bmH1NzSNkzbCHvfqXZCqo7bQPmFlA1P5LzYiWcf1kwOCqcH+pPKPGHCZar0WxMRf226igflCuKakmo3yzHLm42FGCvEmzJjLn4DlaJLvWIamzQE8QYHioAXmBTsMukXao220U5kSkLIO4f/zTXQRJeFy3VD/tkzQnm83IoSMBo0Z6wF7whEYjTSHgnMPMdCg1QQ3hiJeRS+SyO35Q5RWWNjIDpfhNdKT8ifpWnYcDir+pMSuXyFfKEaMwazPHCgUtRk9IJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wDB8wfV51oHCbqMgc/qPmfMU7RxBN/N8M94+yck2TJs=;
 b=ZZ5fGDpDL5dkl6QoIpnJB/bcVgdGLXaNMukJnMb57BB87xUibpPuGn3mkeDfJfX/rGoC/GCBSMvV7ICmMI1wgOVnMZIe7PAX/7BhG8LyqWMUx8amA7KofPmqvQ4elxS+NDC7MiJFQlp9N5MUTvpcnu7ZVXhoSOFz/nHLjCTHffzBQwXf8JQ1GnDg3TxsmEz9ikD5wUVWd2nInruyIvHpn+c2G79+UXrQuTJMllJES8ln/8PIuHl/4xACAuInEx1TahLXLvZRMH5maaXhxqtSyRxMUmxVMojLPZMWUVkio7nnuo/BBMj6yYvgNM8YJ+Or2lY/OGUHTn0h36pRVvQfTA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wDB8wfV51oHCbqMgc/qPmfMU7RxBN/N8M94+yck2TJs=;
 b=RL3TP5H2LHuLcc/CTGCGRVVtTM5gT/lqpW7eK7/Ot2804Iw0BI2HlE9C9PJ2lTS6Fa9xSE96l7PsHDhDdABZ1XwXncDkUmrn2SEYM9NwwrUeHPS4itjVO9VIOnZ6WfOKNzmBqXcpHDN/YVpvMi364lKGEWzHonvTMINWSV1Ud5tcIduKMiJ2wXfhNlH8+CPU3UHoHaFX42EAga14Eu4dNL1j/yMSuz/ceS+wThgg5ZzuN03J/0MIe5cp/Air1IsuxAQ4clbBIJlnSE7IzR2nENRVijymexzWkVvN10FpQtqP9UlfNumigWEi6ibqzfqAOVoR4hx06Df3rr64FZMlVA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <aa2c8649-4acd-bcf4-d547-e3609bb1a0a2@suse.com>
Date: Fri, 13 Jan 2023 10:22:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 06/22] x86: map/unmap pages in restore_all_guests
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-7-julien@xen.org>
 <478e04bc-6ff7-de01-dfb9-55d579228152@suse.com>
 <f84d30cb-e743-60f8-a496-603323b79f37@xen.org>
 <01584e11-36ca-7836-85ad-bba9351af46e@suse.com>
 <a99a8246-bc80-07b9-dacc-f117ace37027@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a99a8246-bc80-07b9-dacc-f117ace37027@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0006.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7826:EE_
X-MS-Office365-Filtering-Correlation-Id: babb946d-7a03-42bf-fd93-08daf547bd4c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RhYmXyd6BpMDp8vW9RKyo7ik18Gh2BEpCOWpzls/Jr+APNCZaYVhEUL4vCoUjGuurn+dp2UG/U6N7ZaHwxZtplXda13y+/rJdjgtekF0suJ4vfaAX0jUhDhjITjSC33CNdORQZ8W6HEWqGFIlMo4tOZBqZSS3XxDV4yx5P46F757ig2wju3roibB3TXjj/z5b4djln2ROj2ku7LABngi0WLZz2ERw0onWInftz4kw7hjbPTl6zFK4fgLZFciwdldit+5rO5Rg2AIhYUKSpy+GXM5H8/Z8qgI1mjoDSLE/oYvNj6fX8DUsLKZTfh2m8+16FqFf26kW1FEV+tZvR1EJzVpZL8fGzTiktQEm0+FB0lP3+n3FL4YjSmkUlkc3wGc+O+/xjFrcbox1XwZFPEpfAeci3EbL16oYNZ/nXSHqn26keo0AXAue4oD1MwrbtfHX+3dX5/yHiZhZ0AcfEqF0TMTor3IScg6Esr9AAK6dIN6GZk7/yu4sdtaAc0kBh422eDPGmuQ5hbo8gmSsNjn2kQCOyg71uY3mxI4RF7oVgFCiR0xIFfB+mCasZGPD/HvDE2oMTYS9ABLpCdzJ5Hdix0o1d37FvlFzN1f9JSXUO2RyfSieqqmY5JeaWRExnvF8t9Djh7LlDo261bg4ofs/XbAA1qKnRCDxa2GtNZ/5DeWgWN9ItW09pBFDL/NBCLFeS/4UBb4kPAfekYNTc0azvFz2WORdZLv5vJ3ye3/v2Y=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(366004)(376002)(136003)(396003)(346002)(451199015)(53546011)(66476007)(8676002)(66946007)(66556008)(6486002)(478600001)(26005)(6512007)(186003)(6506007)(4326008)(6916009)(83380400001)(41300700001)(5660300002)(8936002)(2906002)(38100700002)(36756003)(31696002)(2616005)(54906003)(316002)(31686004)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ckJtZHJGVmxOVTc3ekExOEp2cTZnZEwvdTFTazRjKzJTZklmR3FXU0krTUpr?=
 =?utf-8?B?THdaMDdOQnF3ZnNOaXNNbEpTeDhGRDJWQ1gvcVFNZWNEUkNtZWJ5aUNFSUsy?=
 =?utf-8?B?NUZ2cXI2TlZnMVFuYkhPbGJ6TkdMWjZjRFdFUFJUa3lMZGRTY3hSTll0M1dF?=
 =?utf-8?B?TDBmTlBXK2IvbVVZM0Z2aWpCL0RlTVV3bWlSczlCc0pDZEtubGNncmNhN1lD?=
 =?utf-8?B?dW9qdEdNeFRpQTBnaHExZ1hRSGtWRnRmNE1aRnprRjRBV3lzVEtTeHd3UzFn?=
 =?utf-8?B?dlJyVitzZ3NGL1oxbldmNVBoV05uTVdETnZMTUpMTWx1L2VNdGJ3Q2xVcjBy?=
 =?utf-8?B?NXRxL0NBSFVySzFiMDFPUjRtTXl6RFFodXhkeTd2eHM0TWN3L1liNzk4MTBJ?=
 =?utf-8?B?clJkZXJKaXVYb3BIUzhJZ2diUXdwY255S2VhenNSNndkUTc1MUdkTzJ4NEYw?=
 =?utf-8?B?UlJnYjNyOGJwR0FNejBVSStxTCt0Sy95Ung3WktubVF5bHY2cUsyVXE3Yk5x?=
 =?utf-8?B?OG5mbmlLK2kwNE8xeTlMS2FGSkYwTnlBODY3a09JV01PWGd4TUE2akNlTFdE?=
 =?utf-8?B?ZlQ5UjBRVTZoUS82ams4WG51SDdTQUNJb2piMjJXNS9kbzNEbGJJeU9BNGtx?=
 =?utf-8?B?STJiNFZpUnBKb0FQRVREdnk2ZWVCeXR5cnhKa3RMY0owNnpIbVVSRGNWS1Vn?=
 =?utf-8?B?dXJjQVNXL1pFK1ZTMDJVeWVjRjNjRHBNK1RPRDNPbDNKZFB2bzdXN3lMYnlH?=
 =?utf-8?B?TG85cTFVYlNsUE1FS2djV2Y3dW1kQUtIUi90cWZpVWRxZW5sL3B3WmZJTXlj?=
 =?utf-8?B?Q2dRYUwwYzJZRm1FNGJzMTAvTitkRE5UUitNVkRIUHQrdzF2ZGVjYldXWFVQ?=
 =?utf-8?B?dGg1Ymo2VnI3Zm9rczIvTExReklPdHJWekF6cFEwVHFnNHZlM1dKa3JoNjNm?=
 =?utf-8?B?Y1hWZlZSTW1raldjVDJ1Y1k3WDRWUEI5b0xXVndJZEY3NEpLb3ZtaVRCa3dw?=
 =?utf-8?B?Ukg1cCtuS0cyVXZLRUhFU1BDT3hDSVd0Y3ExbDV0MzRvc1BISU9DcWdOeFlu?=
 =?utf-8?B?YTJWTjlKVjdhaFpaY2F0NGtNZ0hEd0tlRytoTFpiTURPWDdkWUdFdThtWWVO?=
 =?utf-8?B?Z1ZjNHYrbkZ4ZlM2dVdyZnlmaXBEOG01ZWR5VWxXUjY1RFRhNFZ6d0R0R3po?=
 =?utf-8?B?anBDbldKdnJGbFpsNkc1UlVUV2VNUXpxS1FjMjJVMnk0NjJNSUcxd2k5UC9B?=
 =?utf-8?B?eUcwSlBJSXQ3M0pPR3E3MnB6bmc4S2NzT3hMYi8xUzdjcVBVZXB2RDIza2xF?=
 =?utf-8?B?Sk1FK2pUMDJ3Q3BubmZLcVlnUWJ3ZStna1RFbFFRN0hWdE84SWdHMjZrMzdZ?=
 =?utf-8?B?Z000bEJ3d212S3NUUlNNb2Z6aVd6aEpKblhseTJFTUxBeG44S2ZLRm52UFdZ?=
 =?utf-8?B?VG1jMmVWUXNUV1dLemJUMHR2ci9Cc2wwRnVqRUVxOHZPTU90azlWWi8rSk0w?=
 =?utf-8?B?UEthdjgyLzFRUVQ2UVB2MG9QUHF0cGhtVWI2T3d0VmZSYTJmZ3orSEdGc0Zp?=
 =?utf-8?B?eHBBNmlaejJLK056NVMzM21FYWtWOHpLanNSNm91QUhyTTNuSlJ3aG9NZGNn?=
 =?utf-8?B?bmNzcTRIczZqTkFKVnpEcnl0ck9jbnlHN3BxS2R6T0hycXVjYkVMTHNFWE5t?=
 =?utf-8?B?eUtlVEJmaExxNXBMZENhbGVYSHplbysvOUJPZDFzZmppYlhGTndCWlMrRWpL?=
 =?utf-8?B?NmdVbGpBMUNJOXUwVkRZTjJWc0hldGRIcklKR09zSWxFcy80UjBCWWVSZk9i?=
 =?utf-8?B?bGtWZnNneERIVFBuSlVjSGpLQTZoT0Y3TG1lZkRyYStoUXFvUzFUWnh0ZkVT?=
 =?utf-8?B?VWNEcWJvNjd2cDRvdlNZMnVHSlAxem1uejNzTlIrcTkvN3A2ZlYrVHZKUU9Q?=
 =?utf-8?B?Mld3VEZpemxnbU1iajlHUlA3ZWFhOGd1Z1JxOUQ3Z042UlBNV05xUnRSSDBm?=
 =?utf-8?B?ZVdoenkyMFdKSmorcHNmMjFCMHNkd2dxcUI3STFCaXIzZ1I0WklyYnNqdUFZ?=
 =?utf-8?B?UUY0a3NzNUFiVDZyQVMyODJlUyt5NTVjRk9oTWFEaVdpYUp5SUFlYVZVQ3ll?=
 =?utf-8?Q?eK1pnC6yvjFXx6+5fenXDRIoQ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: babb946d-7a03-42bf-fd93-08daf547bd4c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 09:22:49.5643
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pgipjtgocTbwMMNNGZGKU2LxI4JUZdRr93ZVsIVeoxFa2JOWMoOOA26Lpf0/5HakLhy/Aj0bM0zmn1Gcbth8HQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7826

On 13.01.2023 00:20, Julien Grall wrote:
> On 04/01/2023 10:27, Jan Beulich wrote:
>> On 23.12.2022 13:22, Julien Grall wrote:
>>> On 22/12/2022 11:12, Jan Beulich wrote:
>>>> On 16.12.2022 12:48, Julien Grall wrote:
>>>>> --- a/xen/arch/x86/x86_64/entry.S
>>>>> +++ b/xen/arch/x86/x86_64/entry.S
>>>>> @@ -165,7 +165,24 @@ restore_all_guest:
>>>>>            and   %rsi, %rdi
>>>>>            and   %r9, %rsi
>>>>>            add   %rcx, %rdi
>>>>> -        add   %rcx, %rsi
>>>>> +
>>>>> +         /*
>>>>> +          * Without a direct map, we have to map first before copying. We only
>>>>> +          * need to map the guest root table but not the per-CPU root_pgt,
>>>>> +          * because the latter is still a xenheap page.
>>>>> +          */
>>>>> +        pushq %r9
>>>>> +        pushq %rdx
>>>>> +        pushq %rax
>>>>> +        pushq %rdi
>>>>> +        mov   %rsi, %rdi
>>>>> +        shr   $PAGE_SHIFT, %rdi
>>>>> +        callq map_domain_page
>>>>> +        mov   %rax, %rsi
>>>>> +        popq  %rdi
>>>>> +        /* Stash the pointer for unmapping later. */
>>>>> +        pushq %rax
>>>>> +
>>>>>            mov   $ROOT_PAGETABLE_FIRST_XEN_SLOT, %ecx
>>>>>            mov   root_table_offset(SH_LINEAR_PT_VIRT_START)*8(%rsi), %r8
>>>>>            mov   %r8, root_table_offset(SH_LINEAR_PT_VIRT_START)*8(%rdi)
>>>>> @@ -177,6 +194,14 @@ restore_all_guest:
>>>>>            sub   $(ROOT_PAGETABLE_FIRST_XEN_SLOT - \
>>>>>                    ROOT_PAGETABLE_LAST_XEN_SLOT - 1) * 8, %rdi
>>>>>            rep movsq
>>>>> +
>>>>> +        /* Unmap the page. */
>>>>> +        popq  %rdi
>>>>> +        callq unmap_domain_page
>>>>> +        popq  %rax
>>>>> +        popq  %rdx
>>>>> +        popq  %r9
>>>>
>>>> While the PUSH/POP are part of what I dislike here, I think this wants
>>>> doing differently: Establish a mapping when putting in place a new guest
>>>> page table, and use the pointer here. This could be a new per-domain
>>>> mapping, to limit its visibility.
>>>
>>> I have looked at a per-domain approach and this looks way more complex
>>> than the few concise lines here (not mentioning the extra amount of
>>> memory).
>>
>> Yes, I do understand that would be a more intrusive change.
> 
> I could be persuaded to look at a more intrusive change if there is a 
> good reason to do it. To me, at the moment, it mostly seems a matter of 
> taste.
> 
> So what would we gain from a per-domain mapping?

Rather than mapping/unmapping once per hypervisor entry/exit, we'd
map just once per context switch. Plus we'd save ugly/fragile assembly
code (apart from the push/pop, I also dislike calling C functions
from assembly when they aren't really meant to be called this way: while
these two may indeed be unlikely to ever change, any such change comes
with the risk of the assembly callers being missed - the compiler
won't tell you that e.g. argument types/count no longer match the
parameters).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:29:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476949.739409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGMt-00082c-QS; Fri, 13 Jan 2023 09:28:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476949.739409; Fri, 13 Jan 2023 09:28:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGMt-00082V-N2; Fri, 13 Jan 2023 09:28:59 +0000
Received: by outflank-mailman (input) for mailman id 476949;
 Fri, 13 Jan 2023 09:28:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGGMs-00082P-92
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:28:58 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2052.outbound.protection.outlook.com [40.107.6.52])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b41db013-9324-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 10:28:57 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7155.eurprd04.prod.outlook.com (2603:10a6:208:194::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Fri, 13 Jan
 2023 09:28:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 09:28:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b41db013-9324-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MRPDa3ISoGiwOFJlO2rccatHZIPXoOGTfiWkUfSiaK6x3IQDmF7W/l3xUzN+zCbRAdwmLVEEZl1VcNxvzHEYFJzCDEMBhkfFaWY3R6Je74cuq0vRAite3NmD6VDrJMG4ieIgLp819/9BMCfNfDA8CmJuTDvygejHa1VlHOTYGBU54Y+5hxO4veHdfXeYv6XMlsObqrYXlmV+HBcCcg5sLFkFWrUV40Tt2qTlyz5S8KhPEO7RiTc/M/izWSvjNTU8RIQHUKXOctTQ6lfk+JvsQDS1YCQ46zGl12KBf7yIoLruhVC2lP2QvYaD4r3PNSxW1G0Wo3jjlffENt5EshWxkQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=r8/yZwUvgzKW7WO3dXhJMiOlt47lC9rB4MQQYCZDUdA=;
 b=WSCqyVsqdDBDjDW/fYWFR/dcIMs+oDIhwsJuDaaAnOXu7QF87aXhEoHz8GcQtgdt24MlhztUuvjTRsVTmO6R5MKLdTEdkOvKVfDxG+0PNwiT1oAiwjc1DjQ9Vu5VN7JL3isbb1hfp4yeaUig/G02CKABvnr80zjT628Rge7m+h0FhCPZbVKQwazqo3NwNMCQdu1cM3iGqJhq6wXjorThZ81722HQBVQ5q//RBCrOkzdwiLKnbVyE+x+maQIpxZsy3xuAa3WvgaWMZE1VoOw1oB9vLupVNGfoXGYrb1ricFWuNAo4xXR8pkjfevkIzc7TP49CqAQtVKZg7jiwEmobug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r8/yZwUvgzKW7WO3dXhJMiOlt47lC9rB4MQQYCZDUdA=;
 b=XghSvrbwy5ceTFljK2C4vlLwXziGG9MZtbHdQdqzyn7mpFMdUiKJK6mWsDhWUMjGUap+mkp2B59Rh9qcRjYM9qRXXLrappMpTJPgkBmchwER/wiqeHeZaPahl9PehpwSEkItUKSHEkGnF73Rz9pgXF9oEq6rUVjkBqCa1cvi9P8+fu628bt9CQ/E4OZVbbqVtRFg0L/wTDevd/S28hQGbkTfleagyYMsqpSMJo65MTi7aaLctaMVd2Ew6DZyZvF4IEhoeeEskcvdHowdXbSsOKvSPpNGMXvAjNcToS8ij2R8ydBCT7von44na5JfdtAfpGsAz1ntRaEJGXiG/+r5Fw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f6ba7a75-993d-e860-1eb6-112c7d8580c4@suse.com>
Date: Fri, 13 Jan 2023 10:28:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 00/41] xen/arm: Add Armv8-R64 MPU support to Xen -
 Part#1
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, Penny Zheng <Penny.Zheng@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <d88834a2-dde3-5438-e5a2-2bdfb25be4c3@suse.com>
 <3e3b8ada-3fb0-2bbf-e6a2-1aea712132e1@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <3e3b8ada-3fb0-2bbf-e6a2-1aea712132e1@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0143.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7155:EE_
X-MS-Office365-Filtering-Correlation-Id: 943e3125-c8c7-4fb3-898b-08daf548975d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	w3Yd9RZMILvvuQE98swOi1TeSwHTo8jzM3M/7RGqK1SrO0kCThPcrEBkBqRsQc02k7TjfpYFd1h7IdSG05jSucmDdQ+h0h9+auntaUQaCCwvTE1AJUhu2malysSeePEs/8f7qLCRiwk6RcdsmT9zshIZcvYuWCwYeOIo8Wh5FKePs166DvMScUcxyI3EF+en5Fv9+Tzdt+crKUP2RtTmGchdrFWG1MEt7/Pvz3ond2K2xFh0CsymxJfQMPrSi6JAPjNQfwY/TiFVgg1PShwDEwZgn/+s/o52ja7ipghINV4i4vJ+/RJ7sAO533kiSD4mqhe58N+iELw4kH/Q5B0rfqftB1mpLmbKngO+yYAVx3IPwZoxdHbs3daRIpVrkEfWEmeRAfGYRZbPFA89C9dlhL1LfWs153UzrIX7B30B0d0BZgUkYxDj/aXShLk8X1K/P61d2Hbwjf4zKPmGGZdFEPw8iKTk3CZTcPpoZIH34Od9CaygGUSLkteBF9GXCa1REzgs76XQwNO03UVjLD4hecR9cGSKJddgW3gI/ky33BvN94B/ajTArLVj7YQtVWmwbFFnZ0nEAS6f6jXHZ9lNMsJv7I5hD9PzfJXTdkibhfjuvvOiSiqpNEvhDSNlEQhgimhK9yb+dpSFOOQfcEDdn/jdhBrOkSzOjsi2TpX0PSvXPz9YgbGe1AOkxN9+I0yj6w2cXILVJ8mM4w9VKNrZZkX0go5N/J4CXFFYxepLRntANKvSSWiUVRlwYgbk0K8B
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(346002)(39860400002)(366004)(396003)(376002)(136003)(451199015)(31686004)(26005)(6486002)(478600001)(54906003)(53546011)(6512007)(186003)(316002)(2616005)(8936002)(8676002)(4326008)(6916009)(66946007)(66556008)(66476007)(41300700001)(6506007)(83380400001)(2906002)(5660300002)(7416002)(31696002)(38100700002)(86362001)(36756003)(17413003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cmp2ZUlHcWJZRDZTZnQxaGM2ZEtsdDhENzBzNHpGWFdBSFgrYUlPUHJNU09t?=
 =?utf-8?B?dEp5amlGZGY3aG9iZzk1THhydHFrN1F6elR4L2dRQWVTSnVtaHl1cnozUDQ1?=
 =?utf-8?B?dkl6OFpGRDlTdzJKMkpvajcwR2FNRnpQSjRxaVcwc1V5Z1JtR3lLaDAvRFlG?=
 =?utf-8?B?VVhyMHNabWtPQlZrVDFlcG1zVVpEUGlWcU8vUlNGbFRyU3V5M2VDK1pxajVD?=
 =?utf-8?B?R2ZVNTk3b1NCTk5iUWJhN0ZoejJHazhJSnpZdnE5V1RLUE1nbmNQMThXMXdF?=
 =?utf-8?B?MUUwaUhrVGhONDFva0toRlVaM0cxRnNPTDRoZFU2MFVxMHR2SUtmOWJML1VD?=
 =?utf-8?B?ZmphUjlnVTNJby8xU2Q4a1dUaFBlVVZWRVRQTHVCL1BTUjdqM1pXTHgraVQr?=
 =?utf-8?B?SjdYYk10UWEvY004OVd3d1lVZVAyUjhMbElyVGlhT1lHSEREV09vTjNGWStZ?=
 =?utf-8?B?KzgybVJ1RTd6eGk5enpQUmYwTGF1TzRtVVdpeGdmUVNrNGVMV3B4U295Ry9M?=
 =?utf-8?B?dDVXbkEvbkJMRHBpeTJNR3ZJWnBLaGliS3ZmM2Y4QnlYZkt6WmVRYXhkSTgx?=
 =?utf-8?B?L2RNTTZRNFZlcGJydHA4T1pqRDJGd3ljNUl6TFFPMXVSNGlxd0NiZXVtZzNN?=
 =?utf-8?B?eElwZFJ6Y0RMVEZpVXRocUZiME9TM2p4Q1IxMU94V3RwSGVJWDJyRFJIMUZT?=
 =?utf-8?B?ci95ajJFWkI4cWJpeWhPcFg3aHpua1RiL2drQzFwYWxDQXJ1UythdFpkenV4?=
 =?utf-8?B?NUY3WVBkRVh3Vzd5dVRGMHNqTGwzYnJSUHZOOXozZGU0VTIxYzhQeEJ0OVov?=
 =?utf-8?B?SmRhS2NCWkFheTQ2MHRISStxYkJTZGlXQVhzcWtESjJneU1JZDVrVmRTSWZq?=
 =?utf-8?B?ZzAvZUdiLzVjcVVqRkJMdjlJa3IreFh3ZjJmM04zZFh4cXFXczlNbFRYL2ow?=
 =?utf-8?B?RXNjOFptQ0VReHYvaTBLLzhxL1IyNjBtdHE3VWE2b3BmOGlCRW5EUkJzSzk1?=
 =?utf-8?B?czBaeUF0SjJWd0xMZnZkSzhPenRwcDNzbk1LSWxNSG12bEpCbXp2QkQyQkk2?=
 =?utf-8?B?RkpMbTNGNVYwRzBCdThmZ0FIanAxQWZiYjhkREpEbXRJQnFCd0ZoWmhVZ1V5?=
 =?utf-8?B?d3docHRpdWdldDlleWYrUFNiMENyaHFNWHdxYUd1Zkh6Z2ROQ1pSNGtFeVI0?=
 =?utf-8?B?SnhZRWVPN1prM1k0UkNkaUlXbkNrejJFV3Iwa25QMDlNeU5NYjh1OEpHL2th?=
 =?utf-8?B?SkJuZWlmWUpaZ1Y2ZkFYNDl4VWZJY3ZBK2gwSVZ1em5Ga2E3eGx0Q1l4MDc0?=
 =?utf-8?B?Z0s5dm1HbWNvSElienV4VXhaKzU4QzZLeEg4dEhUcnJNU1lkY0VaVk1rR2F3?=
 =?utf-8?B?MVcvS1AzNFkvU0ZaS1VUaFQrM2hxRHpVSjZ5NEEzeGVqeXlvQjJmd29hSThW?=
 =?utf-8?B?QVVtK1RjRzRDUFhnbHpnY1BCSndxSWNGR3JYYUpkeitNYjFCRmpzRTE4c0VO?=
 =?utf-8?B?NWRaRG9IVm5qMUpjSXdCcHFRS3N2aEV6Q0Q1OGpTdEtsalVWK05hMytHTEpN?=
 =?utf-8?B?OUJNWENMNENhaS9XblpxT3dRSDU3L0NzL1NGOGFpNHRtY3ZsUTVEUjdGWlg0?=
 =?utf-8?B?UTVPMWljdGQ1Rk9XdDVLMno1UFY4LzZXQmV5RWcvLzREdWpDMVdnRTZ0WjNR?=
 =?utf-8?B?aFlGamprWDJqWDd0WmovQ2FqYVk3bWZTaG56azVudEFtdkhwK25FQWlETk9n?=
 =?utf-8?B?SXdtY01SQkJXRHg5L1ltRjZwTnVwSUxxMlNFWnUxMzlWMW5PZng3enlSR2RX?=
 =?utf-8?B?Q2l2VDN5WDR4RjNYTktSbDBDMlRwUG1NK2JQd252aFFKanpzcmxmQUp5Mzhu?=
 =?utf-8?B?VnBzRlJKN1RmWXhHWlVuTXNLR25XMDUzN3p4SlFlU0lwQkdlM3M5WGU0T3RW?=
 =?utf-8?B?aEZLK1hFODlHRm1qUUVmQmZ0SzlxL09qSHR2cjRCQ2ZmTzR0amQycEExT3pi?=
 =?utf-8?B?aVFGN25GT1hsNVk4WkRJclIvbko1UE1ielZaTmhtYlVYV0QyeUYrUUNSQVl1?=
 =?utf-8?B?VG1oMzJQVjRRMjYxVWdsVmFteGxnRFlsY0VLbzY3R0hYb1owbTRRSjVkRGhr?=
 =?utf-8?Q?uXyMZear+2BwLfdsLWgUabwKk?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 943e3125-c8c7-4fb3-898b-08daf548975d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 09:28:55.3849
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pN3GRhbfAMkYSp84rKi/iFne+zbFApvDDAIU7U5oY/XvTnxVPG0vc7ns88U8ror8W/CM2aO9ZLk9viAM4Pknaw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7155

On 13.01.2023 10:16, Julien Grall wrote:
> On 13/01/2023 08:54, Jan Beulich wrote:
>> On 13.01.2023 06:28, Penny Zheng wrote:
>>>   xen/arch/x86/Kconfig                      |    1 +
>>>   xen/common/Kconfig                        |    6 +
>>>   xen/common/Makefile                       |    2 +-
>>>   xen/include/xen/vmap.h                    |   93 +-
>>
>> I would like to take a look at these non-Arm changes, but I view it as not
>> very reasonable to wade through 40 patches just to find those changes.
> 
> Right, but that's the purpose of the different CC list on each patch. 
> AFAICT, Penny respected that and you should have been CC'd on the three 
> patches (#30, #31, #32) touching common/x86 code.

Right, but I have no way to immediately see which patches I have been
CC'd on. Unlike you (iiuc) I'm subscribed to the list, and hence mails
all look the same whether or not I'm CC'd. Then again, I only now
realize that there are ways to filter what I've got - I'm sorry for
not having thought of this earlier.

>> The
>> titles don't look to help with that either. For such a pretty large series,
>> could you please help non-Arm folks by pointing out in some way where the
>> non-Arm changes actually are?
> 
> See above. I am not entirely sure what else you are requesting here. Do 
> you want Penny to be explicit and list the modified patches in the cover 
> letter?

For a large series mostly touching Arm code, calling out the
"outliers" (when patch titles don't make this clear) could certainly
help. It's not like I'm asking for this to be done everywhere.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:33:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:33:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476956.739420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGR4-00013q-F5; Fri, 13 Jan 2023 09:33:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476956.739420; Fri, 13 Jan 2023 09:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGR4-00013j-BC; Fri, 13 Jan 2023 09:33:18 +0000
Received: by outflank-mailman (input) for mailman id 476956;
 Fri, 13 Jan 2023 09:33:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NGZF=5K=redhat.com=imammedo@srs-se1.protection.inumbo.net>)
 id 1pGGR3-00013d-1h
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:33:17 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4def6a57-9325-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 10:33:15 +0100 (CET)
Received: from mail-ej1-f71.google.com (mail-ej1-f71.google.com
 [209.85.218.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-144-PhUiYbgXPaKVDQegHcb74Q-1; Fri, 13 Jan 2023 04:33:13 -0500
Received: by mail-ej1-f71.google.com with SMTP id
 sd1-20020a1709076e0100b00810be49e7afso14985849ejc.22
 for <xen-devel@lists.xenproject.org>; Fri, 13 Jan 2023 01:33:13 -0800 (PST)
Received: from imammedo.users.ipa.redhat.com (nat-pool-brq-t.redhat.com.
 [213.175.37.10]) by smtp.gmail.com with ESMTPSA id
 a17-20020a170906369100b007c0f2c4cdffsm8257387ejc.44.2023.01.13.01.33.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 13 Jan 2023 01:33:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4def6a57-9325-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673602394;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EGmXSGcryLyMwWBJBAGRPqNc83FeWcTOUw9/1eaT3RA=;
	b=fnz3gJuqJqcOzEh4EGw3XDs0F+yCs2Hg+zkDZpnGQlO9Rin9cXyIdONpZ46JgJ9KecAIpw
	SOrb1V0YncpCtxHbsK5wtKQKhzRQUB/jkW9QAmqU9yea9QqBU7z44yRfPImc4/heXETYMU
	9k/eSkDRSz0lBoEyv5Dpt/KQ6NduaTE=
X-MC-Unique: PhUiYbgXPaKVDQegHcb74Q-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=EGmXSGcryLyMwWBJBAGRPqNc83FeWcTOUw9/1eaT3RA=;
        b=yJIrt3mBMcYRxDTT+v6KVW5WLROf6CBHPLfAs310hZ9lxEMyhHk55NJlLEPf3G8DZt
         reIy2devi7mxN3D66fiSeYaibDTCLkQKqB2e+Qnnxrx6sx25b5CnyssJygc4V/dgvtgF
         n9+k2AuNbUXjHBOyefwaEXueaT25H0DswN1x6/WVMrgjnNrJZQWbEEuGcPM0V0PeC8YD
         abI86OvcPRbjh9eWItsyvTB5vFpzEFXWteePmg0jYtMiyEgnlbopxMNk5cLjPzkHLDHO
         IFqhPKblVV2yso7uez3giVr3Z//V5LdVbvXSZnGeUDnzmYX9qZ3KmxeK8k3Ye/u+KiPo
         viAw==
X-Gm-Message-State: AFqh2krAORvoTMaFNeEOnbHDhbokJeR8Zb/2zR5Ro+XhuuRXdvB64+vd
	ThEtDjuJz5NWokmixD6FZ9CmM8125eUxpjR33SkFjl/0Bths72q1MDQaVWnvaPBXQxRRuKIWKLg
	DLeQruwkx/szoHbwJ9jtM0F1apiw=
X-Received: by 2002:a17:906:6d14:b0:7c1:765:9cfc with SMTP id m20-20020a1709066d1400b007c107659cfcmr2434449ejr.34.1673602392309;
        Fri, 13 Jan 2023 01:33:12 -0800 (PST)
X-Google-Smtp-Source: AMrXdXvMyiZunNV7hT53Eb0xO5saoqrfpCr+V6WcY8sAbjY0cHEl4y6/dAWivhbcVBq6Aq75LiToFA==
X-Received: by 2002:a17:906:6d14:b0:7c1:765:9cfc with SMTP id m20-20020a1709066d1400b007c107659cfcmr2434437ejr.34.1673602392137;
        Fri, 13 Jan 2023 01:33:12 -0800 (PST)
Date: Fri, 13 Jan 2023 10:33:10 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org, Stefano Stabellini
 <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, Paul
 Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, Richard
 Henderson <richard.henderson@linaro.org>, Eduardo Habkost
 <eduardo@habkost.net>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <philmd@linaro.org>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
In-Reply-To: <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
	<a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
	<20230110030331-mutt-send-email-mst@kernel.org>
	<a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
	<D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
	<9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
	<7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
	<20230112180314-mutt-send-email-mst@kernel.org>
	<128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.36; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, 12 Jan 2023 23:14:26 -0500
Chuck Zmudzinski <brchuckz@aol.com> wrote:

> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:
> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:
> >> I think the change Michael suggests is very minimalistic: Move the if
> >> condition around xen_igd_reserve_slot() into the function itself and
> >> always call it there unconditionally -- basically turning three lines
> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> >> Michael further suggests to rename it to something more general. All
> >> in all no big changes required.
> >
> > yes, exactly.
> >
> 
> OK, got it. I can do that along with the other suggestions.

Have you considered, instead of reservation, putting a slot check in the
device model: if it's the Intel IGD being passed through, fail at realize
time if it can't take the required slot (with an error directing the user
to fix the command line)? That could be less complicated than dealing
with slot reservations, at the cost of being less convenient.

>
> Thanks.
>



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:34:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:34:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476962.739430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGSY-0001dD-Oc; Fri, 13 Jan 2023 09:34:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476962.739430; Fri, 13 Jan 2023 09:34:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGSY-0001d6-Lb; Fri, 13 Jan 2023 09:34:50 +0000
Received: by outflank-mailman (input) for mailman id 476962;
 Fri, 13 Jan 2023 09:34:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3K7w=5K=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pGGSW-0001d0-Q7
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:34:48 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 852c210c-9325-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 10:34:47 +0100 (CET)
Received: by mail-ej1-x635.google.com with SMTP id fy8so50835968ejc.13
 for <xen-devel@lists.xenproject.org>; Fri, 13 Jan 2023 01:34:47 -0800 (PST)
Received: from [192.168.1.93] (adsl-67.109.242.138.tellas.gr. [109.242.138.67])
 by smtp.gmail.com with ESMTPSA id
 ov38-20020a170906fc2600b0084d4733c428sm5870772ejb.88.2023.01.13.01.34.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Jan 2023 01:34:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 852c210c-9325-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=+agMQNymtgBfMwDGibkZkEB7lupOmqeMBhkfaRI7Ro4=;
        b=e1QOtJbRjfxEFvoY4KnO0q5mrF5qA8udxXmHK2i8t9M4dOP0LQX7VK9l9xdcOf0fD+
         skKyYdLFKuBzEcYJ4nwliNX+YIGT+LmbBaKlYu5whuxlUdjLT5tegqpuNJfvjtTG7hPx
         XnO+2FzRnLrwCVksWo58A47lFAQPjYXCi1JCOX4GmWioztMFFa38Px0R1XS6pEccGURu
         86PlEDwotJc+lZi8Wm8GpafiWb0GaSPHCV435FPE/k0mdQIF2raCo12KZgEB9x2gsF+i
         xZLYrdB9C2bThrBg4O+LbvLjE7Z46MBeMfypcDVfO74hCGE02r2oim1o5AdBNNQ5JaIK
         P0Sg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=+agMQNymtgBfMwDGibkZkEB7lupOmqeMBhkfaRI7Ro4=;
        b=pp0r0zQpW/fuJx3sDdrln0oBQDtue3g+evXMxH6qM/LZTCMfpRurc+BT4yd7cZvO4v
         D58lRaNuNp+Kp3rKVz7ghBAK+9doZ7yZcwbbGoEAoD/7bEkt79eTh+7OjfYEnpE6JkQ/
         AOjeFGtIIzEB4rBPybgB5marrGt+HVEm4IgPLQCfVLbR6KSksc7RZrPjUJxfykbAmlg7
         ASXNedozR7Gv6RwNOF/QvpPJ8qN99yQa+hCcbHCiNsOJQCB/7OeBTkB6c/WOqEeHuKGd
         bDIM1Y0J+Zal8xBjseV3V6K09GVfX9S6YkZ4isxSToy7Y7ONDsP6SYNvl08uLWkPEfTl
         Ykqg==
X-Gm-Message-State: AFqh2krX6nom1NMqjywaWoTLuDLVK7vydmtsCzh5jKePkjCTUqqw5hVx
	65jauFMuWoDrqoF1IAHmt+A=
X-Google-Smtp-Source: AMrXdXuOjo1cGolaL56+vHwOmKnnQ9tDwGxUhDqTHAdUMJEPNBcM18KjiJpXT20G0tmUFgXtdPF9BQ==
X-Received: by 2002:a17:907:a705:b0:7bd:ece7:ae66 with SMTP id vw5-20020a170907a70500b007bdece7ae66mr74161558ejc.34.1673602487151;
        Fri, 13 Jan 2023 01:34:47 -0800 (PST)
Message-ID: <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
Date: Fri, 13 Jan 2023 11:34:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/13/23 10:47, Jan Beulich wrote:
> First of all, the variable is meaningful only when an IOMMU is in use for
> a guest. Qualify the check accordingly, like done elsewhere. Furthermore
> the controlling command line option is supposed to take effect on VT-d
> only. Since command line parsing happens before we know whether we're
> going to use VT-d, force the variable back to true when instead running
> with AMD IOMMU(s).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I was first considering adding the extra check to the outermost
> enclosing if(), but I guess that would break the (questionable) case of
> assigning MMIO ranges directly by address. The way it's done now also
> better fits the existing checks, in particular the ones in p2m-ept.c.
> 
> Note that the #ifndef is put there in anticipation of iommu_snoop
> becoming a #define when !IOMMU_INTEL (see
> https://lists.xen.org/archives/html/xen-devel/2023-01/msg00103.html
> and replies).
> 
> In _sh_propagate() I'm further puzzled: The iomem_access_permitted()
> check certainly suggests very bad things could happen if it returned
> false (i.e. in the implicit "else" case). The assumption looks to be
> that no bad "target_mfn" can make it there. But overall things might
> end up looking more sane (and being cheaper) when simply using
> "mmio_mfn" instead.
> 
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -571,7 +571,7 @@ _sh_propagate(struct vcpu *v,
>                               gfn_to_paddr(target_gfn),
>                               mfn_to_maddr(target_mfn),
>                               X86_MT_UC);
> -                else if ( iommu_snoop )
> +                else if ( is_iommu_enabled(d) && iommu_snoop )
>                       sflags |= pat_type_2_pte_flags(X86_MT_WB);
>                   else
>                       sflags |= get_pat_flags(v,
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>       if ( !acpi_disabled )
>       {
>           ret = acpi_dmar_init();
> +
> +#ifndef iommu_snoop
> +        /* A command line override for snoop control affects VT-d only. */
> +        if ( ret )
> +            iommu_snoop = true;
> +#endif
> +

Why is iommu_snoop forced here when the IOMMU is not enabled?
This change is confusing because later on, in iommu_setup(), iommu_snoop
will be set back to false in the case of !iommu_enabled.

>           if ( ret == -ENODEV )
>               ret = acpi_ivrs_init();
>       }
> 

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:40:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:40:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.476968.739442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGXU-0002HG-AU; Fri, 13 Jan 2023 09:39:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 476968.739442; Fri, 13 Jan 2023 09:39:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGXU-0002H9-7W; Fri, 13 Jan 2023 09:39:56 +0000
Received: by outflank-mailman (input) for mailman id 476968;
 Fri, 13 Jan 2023 09:39:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGGXS-0002H3-Ak
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:39:54 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2068.outbound.protection.outlook.com [40.107.22.68])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3a84e663-9326-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 10:39:52 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6836.eurprd04.prod.outlook.com (2603:10a6:208:187::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5944.19; Fri, 13 Jan
 2023 09:39:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 09:39:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a84e663-9326-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y8VLkJpbQS5VMt4rLubAdEWvi8g2Q8uTMdKyQM6UiFi4t9wO83JUowhX145in+cXWfSek1snOzvCGJi0tgMRqaJajlOQZLj0jcjynbVgZm1e+dtTE/GtlT2UHndqiinB+bqtQdezd8Dc1QznUsUvEPdCrvsdNWU1mLY7ZteqAmDTQmLkD5rJIuRemHBSza8/ExEzZz2OazmiCOKoufp1QjaWP6Niz0U6lqpjeXppMNYsddSIiSKf/fajz4aVadIYUuTDy3Ouf6SvCnL1Yp1c2wXXXLhTqbXQeYwERYfZpcaD8st25oecp/9p6ncpQsNLFI754OIRwjmfqeN8tHK3sw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0PvwRsuGbDTMrLZH4ThkVMgO5KX/PDF4NeJItln5ucw=;
 b=TkDkZWfRnGk4ncp5zGaIfZEgivl/XS3FyxTc8EWwfydlSgDS1XGKLUTEcOEY7aH6YaCV7LpH+xTd7ACitY9ClkrFVN8naqJi2aGO4KMfVxlGFmvqBRD19blH2dGsqIcypmSevalarhf/VAx9i2y0J4zuT5SG7hWTznLMHpy78LNF4czCRSnXbDPReVkRsnTmcOfxHEYHszrLjfnLhyn2H3SnkoxDIUkSg5wMSw0GxuTuGIF3i1uVOHyL+piJ1u24pDLdWPMK0MhVWl43bffESMwFGl/Usxl3kYAvhzBV0mc58R+lt4w8GoFwSYkH2wzOYQwgfUwWY5dT76byJEdsxw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0PvwRsuGbDTMrLZH4ThkVMgO5KX/PDF4NeJItln5ucw=;
 b=O7vLUuSIFbMCUQm6CKj6SrpF/p4OYbnnlYgKbPha3xZU0vw8muyK4l1BbpV+0+RvJHWWVq/9dx6fo5hp7zM1MkNkGl1JtHz63jRhN0MfMAcrjYjO4mWN1qsRLrvC+9Y64zGq0Mx4lWJYuPeigXy8wfO5VAftU5lVPaUh676qruZ1yroKwTeq8vDwc3pyjDd1kWjyLNhiKbYD3g4debn8Dsx5Sad9ZR9PFzZpGP7IStD+k8NbEthIamgwF2v+fTkW3UFU5yFBqXKh3SAiw/9i8saJD0dxA4EG2zB0M869HC3t/kzzEkI3OMr2GMSrn9PQcbq4Pw2I+CVkJ7iBH4Xh3w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <07e1099b-8ef8-d28d-f46d-561ec27cab02@suse.com>
Date: Fri, 13 Jan 2023 10:39:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 30/40] xen/mpu: disable VMAP sub-system for MPU systems
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-31-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113052914.3845596-31-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0138.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6836:EE_
X-MS-Office365-Filtering-Correlation-Id: 19b8f678-3657-42b9-90c6-08daf54a1dce
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QVaQOVkS+ZCqDJ20XBfdhWTSjWCZKId51T2mVxZ0XGJfxVZ0paSTLwe1rKmuI3+xKSz4WTqGQiC+R8EP8c1f2IgIOniE9IdnkhHZAAL11+DaEeKWcAcWex+UVuI2OJ2YFU1+9d8o2tOu3GJy75ZuBIkvY/T/iuxrPHDw3KA1adJ4UFqhanT1ghJ+JOOItE37+ErTjDQD0JpPbaL58oMwh8VKBKkUfDnwMFOcL9w2T/6shNC/0vDB3Fnb0mHp3MbkVISHTVcdVDLIyfS1YeIfON3yBXfOigc6rjl5GQ+qxn+j77XJ4S8u4jdZLJPYyDJCP9YEkVrwq7fXb2ORz3hp/4awqkEVti5Y36kJe7MAXUmLe2a214NNDjEzuMWP4LXGBhi5FPA/ICeVzRNbSnGJ5usP4Y4EfetGZOnlG897eKGgweTiKUNFuWRCT5mS+nTS/GyXwlbZleRGOPF/HUI8JRvDxp3pjF2J76AV4xMuWEomD5dxMU2VN9rFJhX7YZ1upcsYd+zbqjeBjVy1bUjXltkg9jkJyiiMOowXiXw2KVoOEhGs3PyWWwhIiEs6xAAABFgrpdFiMvDfZIdkZU+rsB5v2FcwTJ8VswYVDcAqPhosp8UYJeAZSvkI3auQP3v7/CTI6alwIT6QAae9antItGyFqXEWYZDnS7bfzK2N+R+cjDZ+6wX+pE7MVvcB83q7J8qqHV4ODA8qtudXXWYSCX1+7fCVvW9QUH4EJA7DXmA=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(376002)(346002)(39860400002)(366004)(136003)(396003)(451199015)(26005)(2616005)(83380400001)(86362001)(38100700002)(36756003)(31696002)(31686004)(66556008)(54906003)(6916009)(2906002)(8936002)(4326008)(8676002)(5660300002)(7416002)(53546011)(66476007)(66946007)(316002)(6512007)(478600001)(41300700001)(6486002)(186003)(6506007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YlpnVEx3SHFuYWZNcFBSejJsSyszcUxkRmZwbENubEJLYytHek5yVDkzb3pR?=
 =?utf-8?B?Qk5WU0p3cmFkUElkelR1T1g2SW95NzhybTR5YitvMEpCR1o1MmZnR3NnMUQ2?=
 =?utf-8?B?ZkVTRXl3dWtKNG53RlBNeGRubHV6bWlvd3g4U1JHV0ZDZjU5V0JHelpTenRU?=
 =?utf-8?B?b21MSEdpWXhvMm5DVFVCd2VBRkt2U0taQXpCRHRQUk9wK0FCbHdSSFB2WXh2?=
 =?utf-8?B?Y3cwSjFqTTZkenlXV3BRdjlVMVhMQ2xmbzRRZ25QRWlIaDh1bnUwbFNobG1x?=
 =?utf-8?B?dEZDcWxjM1FwcUNpTUd2c0pLUitpenZwTDVHVnFWUjJPRUk3UHVsOWZ6eUto?=
 =?utf-8?B?ODFVYTFQOTZmazJrN1VuOFVES3ZGQlUra0UrdHJ2VitnTzZKclRBc1IxOC83?=

On 13.01.2023 06:29, Penny Zheng wrote:
> VMAP, in an MMU system, is used to remap a range of normal or
> device memory to another virtual address with new attributes for
> a specific purpose, like the ALTERNATIVE feature. Since there is
> no virtual address translation support in an MPU system, we
> cannot support VMAP there.
> 
> So in this patch, we disable VMAP for MPU systems; some features
> that depend on VMAP, like ALTERNATIVE and CPU ERRATA, also need
> to be disabled at the same time.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>  xen/arch/arm/Kconfig                   |  3 +-
>  xen/arch/arm/Makefile                  |  2 +-
>  xen/arch/arm/include/asm/alternative.h | 15 +++++
>  xen/arch/arm/include/asm/cpuerrata.h   | 12 ++++
>  xen/arch/arm/setup.c                   |  7 +++
>  xen/arch/x86/Kconfig                   |  1 +
>  xen/common/Kconfig                     |  3 +
>  xen/common/Makefile                    |  2 +-
>  xen/include/xen/vmap.h                 | 81 ++++++++++++++++++++++++--
>  9 files changed, 119 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index c6b6b612d1..9230c8b885 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -11,12 +11,13 @@ config ARM_64
>  
>  config ARM
>  	def_bool y
> -	select HAS_ALTERNATIVE
> +	select HAS_ALTERNATIVE if !ARM_V8R

Judging from the connection you make in the description, I think this
wants to be "if HAS_VMAP".
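
Concretely, the entry would then read roughly like this (a sketch only,
using the HAS_VMAP option this series introduces):

```kconfig
config ARM
	def_bool y
	select HAS_ALTERNATIVE if HAS_VMAP
```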

>  	select HAS_DEVICE_TREE
>  	select HAS_PASSTHROUGH
>  	select HAS_PDX
>  	select HAS_PMAP
>  	select IOMMU_FORCE_PT_SHARE
> +	select HAS_VMAP if !ARM_V8R

I think entries here are intended to be sorted alphabetically.

> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -28,6 +28,7 @@ config X86
>  	select HAS_UBSAN
>  	select HAS_VPCI if HVM
>  	select NEEDS_LIBELF
> +	select HAS_VMAP

Here they are certainly meant to be.

> --- a/xen/include/xen/vmap.h
> +++ b/xen/include/xen/vmap.h
> @@ -1,15 +1,17 @@
> -#if !defined(__XEN_VMAP_H__) && defined(VMAP_VIRT_START)
> +#if !defined(__XEN_VMAP_H__) && (defined(VMAP_VIRT_START) || !defined(CONFIG_HAS_VMAP))
>  #define __XEN_VMAP_H__
>  
> -#include <xen/mm-frame.h>
> -#include <xen/page-size.h>
> -
>  enum vmap_region {
>      VMAP_DEFAULT,
>      VMAP_XEN,
>      VMAP_REGION_NR,
>  };
>  
> +#ifdef CONFIG_HAS_VMAP
> +
> +#include <xen/mm-frame.h>
> +#include <xen/page-size.h>
> +
>  void vm_init_type(enum vmap_region type, void *start, void *end);
>  
>  void *__vmap(const mfn_t *mfn, unsigned int granularity, unsigned int nr,
> @@ -38,4 +40,75 @@ static inline void vm_init(void)
>      vm_init_type(VMAP_DEFAULT, (void *)VMAP_VIRT_START, arch_vmap_virt_end());
>  }
>  
> +#else /* !CONFIG_HAS_VMAP */
> +
> +static inline void vm_init_type(enum vmap_region type, void *start, void *end)
> +{
> +    ASSERT_UNREACHABLE();
> +}

Do you really need this and all other inline stubs? Imo the goal ought
to be to have as few of them as possible: The one above won't be
referenced if you further make LIVEPATCH depend on HAS_VMAP (which I
think you need to do anyway), and the only other call to the function
is visible in context above (i.e. won't be used either when !HAS_VMAP).
In other cases merely having a declaration (but no definition) may be
sufficient, as the compiler may be able to eliminate calls.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:42:35 2023
Message-ID: <e0eaad42-70ab-e27f-53eb-382bc15ae095@suse.com>
Date: Fri, 13 Jan 2023 10:42:26 +0100
Subject: Re: [PATCH v2 31/40] xen/mpu: disable FIXMAP in MPU system
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-32-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113052914.3845596-32-Penny.Zheng@arm.com>

On 13.01.2023 06:29, Penny Zheng wrote:
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -43,6 +43,9 @@ config HAS_EX_TABLE
>  config HAS_FAST_MULTIPLY
>  	bool
>  
> +config HAS_FIXMAP
> +	bool

I think it'll end up misleading if this option is not selected by x86
as well. So imo you either add that, or you move the option to an Arm-
specific Kconfig.
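
With the former, the x86 entry would gain roughly (a sketch):

```kconfig
config X86
	def_bool y
	select HAS_FIXMAP
```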

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:49:50 2023
Message-ID: <0cdb6601-c549-fd0f-18f1-6106ed3a7e2a@suse.com>
Date: Fri, 13 Jan 2023 10:49:42 +0100
Subject: Re: [PATCH v2 32/40] xen/mpu: implement MPU version of ioremap_xxx
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-33-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113052914.3845596-33-Penny.Zheng@arm.com>

On 13.01.2023 06:29, Penny Zheng wrote:
> --- a/xen/include/xen/vmap.h
> +++ b/xen/include/xen/vmap.h
> @@ -89,15 +89,27 @@ static inline void vfree(void *va)
>      ASSERT_UNREACHABLE();
>  }
>  
> +#ifdef CONFIG_HAS_MPU
> +void __iomem *ioremap(paddr_t, size_t);
> +#else
>  void __iomem *ioremap(paddr_t, size_t)
>  {
>      ASSERT_UNREACHABLE();
>      return NULL;
>  }
> +#endif

If, as per the comment on the earlier patch, a mere declaration isn't
sufficient, the earlier patch will need to make the stub static inline.
I'm actually surprised you didn't see a build failure from it not being
so. At this point I then actually question why the stub function
isn't being dropped again here (assuming it needs putting in place
at all earlier on).

Furthermore, once you want a declaration for the function here as well,
I think it would be better to consolidate both declarations: It's
awkward to have to remember to update two instances, in case any
changes are necessary.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:53:21 2023
Message-ID: <009d00d8-4ba9-167d-271f-0dde655415fa@xen.org>
Date: Fri, 13 Jan 2023 09:53:16 +0000
Subject: Re: [PATCH v2 10/19] tools/xenstore: change per-domain node
 accounting interface
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-11-jgross@suse.com>
 <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
 <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
 <e9eeee72-ecd1-faaa-dc63-b57d50162bbf@xen.org>
 <7ba1b191-ef89-1e0d-0e1b-0b24159a9eb9@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <7ba1b191-ef89-1e0d-0e1b-0b24159a9eb9@suse.com>

Hi Juergen,

On 12/01/2023 05:49, Juergen Gross wrote:
> On 11.01.23 18:48, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 11/01/2023 08:59, Juergen Gross wrote:
>>>> ... to make sure domain_nbentry_add() is not returning a negative 
>>>> value. Then it would not work.
>>>>
>>>> A good example: imagine you have a transaction removing nodes from 
>>>> the tree but not adding any. Then "ret" would be negative.
>>>>
>>>> Meanwhile the nodes are also removed outside of the transaction, so 
>>>> the sum of "d->nbentry + ret" would be negative, resulting in a 
>>>> failure here.
>>>
>>> Thanks for catching this.
>>>
>>> I think the correct way to handle this is to return max(d->nbentry + 
>>> ret, 0) in
>>> domain_nbentry_add(). The value might be imprecise, but always >= 0 
>>> and never
>>> wrong outside of a transaction collision.
>>
>> I am a bit confused by your proposal. If the return value is 
>> imprecise, then what's the point of returning max(...) instead of 
>> simply 0?
> 
> Please have a look at the use case especially in domain_nbentry(). 
> Returning
> always 0 would clearly break quota checks.

I am a bit concerned that we would have code checking the quota based 
on an imprecise value.

At the moment, I don't have a better suggestion. But we should at least 
document in the code when we think the value is imprecise and explain 
why bypassing the quota check is OK (IOW who will check it?).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:54:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477012.739502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGlA-0007Ho-SW; Fri, 13 Jan 2023 09:54:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477012.739502; Fri, 13 Jan 2023 09:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGlA-0007Hh-Pl; Fri, 13 Jan 2023 09:54:04 +0000
Received: by outflank-mailman (input) for mailman id 477012;
 Fri, 13 Jan 2023 09:54:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGGl9-0007DD-UR
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:54:04 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2044.outbound.protection.outlook.com [40.107.13.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 35b4ebe3-9328-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 10:54:03 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9250.eurprd04.prod.outlook.com (2603:10a6:10:351::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 09:54:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 09:54:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35b4ebe3-9328-11ed-91b6-6bf2151ebd3b
Message-ID: <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
Date: Fri, 13 Jan 2023 10:53:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0039.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 13.01.2023 10:34, Xenia Ragiadakou wrote:
> 
> On 1/13/23 10:47, Jan Beulich wrote:
>> First of all the variable is meaningful only when an IOMMU is in use for
>> a guest. Qualify the check accordingly, like done elsewhere. Furthermore
>> the controlling command line option is supposed to take effect on VT-d
>> only. Since command line parsing happens before we know whether we're
>> going to use VT-d, force the variable back to true when instead running
>> with AMD IOMMU(s).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I first considered adding the extra check to the outermost
>> enclosing if(), but I guess that would break the (questionable) case of
>> assigning MMIO ranges directly by address. The way it's done now also
>> better fits the existing checks, in particular the ones in p2m-ept.c.
>>
>> Note that the #ifndef is put there in anticipation of iommu_snoop
>> becoming a #define when !IOMMU_INTEL (see
>> https://lists.xen.org/archives/html/xen-devel/2023-01/msg00103.html
>> and replies).
>>
>> In _sh_propagate() I'm further puzzled: the iomem_access_permitted()
>> check certainly suggests very bad things could happen if it returned false
>> (i.e. in the implicit "else" case). The assumption looks to be that no
>> bad "target_mfn" can make it there. But overall things might end up
>> looking more sane (and being cheaper) when simply using "mmio_mfn"
>> instead.
>>
>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
>> @@ -571,7 +571,7 @@ _sh_propagate(struct vcpu *v,
>>                               gfn_to_paddr(target_gfn),
>>                               mfn_to_maddr(target_mfn),
>>                               X86_MT_UC);
>> -                else if ( iommu_snoop )
>> +                else if ( is_iommu_enabled(d) && iommu_snoop )
>>                       sflags |= pat_type_2_pte_flags(X86_MT_WB);
>>                   else
>>                       sflags |= get_pat_flags(v,
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>       if ( !acpi_disabled )
>>       {
>>           ret = acpi_dmar_init();
>> +
>> +#ifndef iommu_snoop
>> +        /* A command line override for snoop control affects VT-d only. */
>> +        if ( ret )
>> +            iommu_snoop = true;
>> +#endif
>> +
> 
> Why is iommu_snoop forced here when the IOMMU is not enabled?
> This change is confusing because later on, in iommu_setup(), iommu_snoop 
> will be set to false in the !iommu_enabled case.

Counter question: Why is it being set to false there? I see no reason at
all. On the same basis as here, I'd actually expect it to be set back to
true in such a case. Which, however, would be a benign change now that
all uses of the variable are properly qualified. Which in turn is why I
thought I'd leave that other place alone.
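The ordering being discussed can be modeled with a toy sketch; only the flag name comes from the patch, while the stubs stand in for acpi_dmar_init() and the command line handling:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the real flag; may have been cleared on the command line. */
static bool iommu_snoop;

/* Stub for acpi_dmar_init(): 0 on success (VT-d present), negative otherwise. */
static int acpi_dmar_init_stub(bool vtd_present)
{
    return vtd_present ? 0 : -1;
}

/*
 * The command line override is meant to affect VT-d only.  Since parsing
 * happens before we know which IOMMU is in use, force the flag back to
 * true when DMAR parsing fails (e.g. on a system with AMD IOMMUs).
 */
static void acpi_iommu_init_stub(bool vtd_present)
{
    int ret = acpi_dmar_init_stub(vtd_present);

    if ( ret )
        iommu_snoop = true;
}
```

On this model, a user-supplied "no snoop" override survives only when VT-d is actually in use.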

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 09:57:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 09:57:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477021.739514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGop-0007xI-Di; Fri, 13 Jan 2023 09:57:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477021.739514; Fri, 13 Jan 2023 09:57:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGop-0007xB-8w; Fri, 13 Jan 2023 09:57:51 +0000
Received: by outflank-mailman (input) for mailman id 477021;
 Fri, 13 Jan 2023 09:57:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O/z9=5K=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pGGoo-0007x5-3T
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 09:57:50 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bbbdf79a-9328-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 10:57:48 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id AF21E253AF;
 Fri, 13 Jan 2023 09:57:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8E96013913;
 Fri, 13 Jan 2023 09:57:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id t5SKIRorwWPaBgAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 13 Jan 2023 09:57:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbbdf79a-9328-11ed-b8d0-410ff93cb8f0
Message-ID: <b7c26a7b-0db9-61f2-df4c-6aed325927a5@suse.com>
Date: Fri, 13 Jan 2023 10:57:46 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 10/19] tools/xenstore: change per-domain node
 accounting interface
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-11-jgross@suse.com>
 <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
 <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
 <e9eeee72-ecd1-faaa-dc63-b57d50162bbf@xen.org>
 <7ba1b191-ef89-1e0d-0e1b-0b24159a9eb9@suse.com>
 <009d00d8-4ba9-167d-271f-0dde655415fa@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <009d00d8-4ba9-167d-271f-0dde655415fa@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------Uyeak0oqpO00idGXm6iMd0du"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------Uyeak0oqpO00idGXm6iMd0du
Content-Type: multipart/mixed; boundary="------------5Mf4DjpLiz3hlP5CwQzejHyR";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <b7c26a7b-0db9-61f2-df4c-6aed325927a5@suse.com>
Subject: Re: [PATCH v2 10/19] tools/xenstore: change per-domain node
 accounting interface
References: <20221213160045.28170-1-jgross@suse.com>
 <20221213160045.28170-11-jgross@suse.com>
 <da814fed-c177-b0ee-32be-ef0656692c82@xen.org>
 <05871696-1638-82d0-8d55-9088b4bb9a18@suse.com>
 <e9eeee72-ecd1-faaa-dc63-b57d50162bbf@xen.org>
 <7ba1b191-ef89-1e0d-0e1b-0b24159a9eb9@suse.com>
 <009d00d8-4ba9-167d-271f-0dde655415fa@xen.org>
In-Reply-To: <009d00d8-4ba9-167d-271f-0dde655415fa@xen.org>

--------------5Mf4DjpLiz3hlP5CwQzejHyR
Content-Type: multipart/mixed; boundary="------------AwP04EUSeiwXsn7hEGlUf8M0"

--------------AwP04EUSeiwXsn7hEGlUf8M0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 13.01.23 10:53, Julien Grall wrote:
> Hi Juergen,
> 
> On 12/01/2023 05:49, Juergen Gross wrote:
>> On 11.01.23 18:48, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 11/01/2023 08:59, Juergen Gross wrote:
>>>>> ... to make sure domain_nbentry_add() is not returning a negative value.
>>>>> Otherwise it would not work.
>>>>>
>>>>> A good example: imagine you have a transaction removing nodes from the
>>>>> tree but not adding any. Then "ret" would be negative.
>>>>>
>>>>> Meanwhile the nodes are also removed outside of the transaction. So the
>>>>> sum of "d->nbentry + ret" would be negative, resulting in a failure here.
>>>>
>>>> Thanks for catching this.
>>>>
>>>> I think the correct way to handle this is to return max(d->nbentry + ret, 0)
>>>> in domain_nbentry_add(). The value might be imprecise, but always >= 0 and
>>>> never wrong outside of a transaction collision.
>>>
>>> I am a bit confused by your proposal. If the return value is imprecise,
>>> then what's the point of returning max(...) instead of simply 0?
>>
>> Please have a look at the use case, especially in domain_nbentry(). Always
>> returning 0 would clearly break quota checks.
> 
> I am a bit concerned that we would have code checking the quota based on an
> imprecise value.
> 
> At the moment, I don't have a better suggestion. But we should at least
> document in the code when we think the value is imprecise and explain why
> bypassing the quota check is OK (IOW who will check it?).

The imprecise value will never be too low, it can only be too high (i.e. 0
instead of negative), and that will only happen in a transaction which can't
succeed.

Adding a comment is a good idea, though.


Juergen
--------------AwP04EUSeiwXsn7hEGlUf8M0--

--------------5Mf4DjpLiz3hlP5CwQzejHyR--

--------------Uyeak0oqpO00idGXm6iMd0du
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPBKxoFAwAAAAAACgkQsN6d1ii/Ey8D
3gf+M3w+cEW9DbgNQ1jyjEnPNnUyObjc9ANE4P7zjDWq0LU7tNeKt1SoDjIj+SCKCD7bXTT6JuC3
PGVHcfPn1EXal8ud7Ej4FANsyo4UplNU1XnlPkVBfTsG2jRSBVb2I0liJy0aX23I1BkRG6TqIn/l
C79CprcxtysJuuLxxfdB3WQ2G6owdzvm7NxgJcAjFN4wYCIv5TxSnIaihTQu1npsRyZLzAT7Ta+v
9gSTNtInkzSvNR5F11SvsvW3JyywFWUvnA4kZxXGY1BJG2GZ54t8gmv3ZEUs+5OguYmE0PIx1RJQ
W3tYgC0DnH0ydOv9m9Ze5yDBYkgYBd6f/7sSvGL4Vg==
=5b36
-----END PGP SIGNATURE-----

--------------Uyeak0oqpO00idGXm6iMd0du--


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:04:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:04:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477027.739524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGvP-00016p-6F; Fri, 13 Jan 2023 10:04:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477027.739524; Fri, 13 Jan 2023 10:04:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGvP-00016i-3C; Fri, 13 Jan 2023 10:04:39 +0000
Received: by outflank-mailman (input) for mailman id 477027;
 Fri, 13 Jan 2023 10:04:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGGvN-00016c-SW
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:04:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGGvM-0006SL-SQ; Fri, 13 Jan 2023 10:04:36 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.6.109]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGGvM-00054W-Mx; Fri, 13 Jan 2023 10:04:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <f2638016-1943-1552-7cf2-b2a39b0660ba@xen.org>
Date: Fri, 13 Jan 2023 10:04:35 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/arm: Add 0x prefix when printing memory size in
 construct_domU
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Ayan Kumar Halder <ayankuma@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230103102519.26224-1-michal.orzel@amd.com>
 <alpine.DEB.2.22.394.2301041546230.4079@ubuntu-linux-20-04-desktop>
 <1264e5cc-1960-95d3-5ecb-d6f23d194aa4@xen.org>
 <29460d07-cd43-7415-7125-6ed01f3c2920@amd.com>
 <c80f90d7-d3b5-1b13-d809-9506ff5414e4@xen.org>
 <35d590fb-4b96-70dd-a60b-1c8d7cc8f2d6@citrix.com>
 <383485e0-37ac-9b0c-f0c5-18e50cf7905b@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <383485e0-37ac-9b0c-f0c5-18e50cf7905b@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 06/01/2023 10:41, Michal Orzel wrote:
> 
> 
> On 05/01/2023 13:15, Andrew Cooper wrote:
>>
>>
>> On 05/01/2023 11:19 am, Julien Grall wrote:
>>> On 05/01/2023 09:59, Ayan Kumar Halder wrote:
>>>> Hi Julien,
>>>
>>> Hi,
>>>
>>>> I have a clarification.
>>>>
>>>> On 05/01/2023 09:26, Julien Grall wrote:
>>>>> CAUTION: This message has originated from an External Source. Please
>>>>> use proper judgment and caution when opening attachments, clicking
>>>>> links, or responding to this email.
>>>>>
>>>>>
>>>>> Hi Stefano,
>>>>>
>>>>> On 04/01/2023 23:47, Stefano Stabellini wrote:
>>>>>> On Tue, 3 Jan 2023, Michal Orzel wrote:
>>>>>>> Printing a memory size in hex without a 0x prefix can be misleading, so
>>>>>>> add it. Also, take the opportunity to adhere to 80 chars line length
>>>>>>> limit by moving the printk arguments to the next line.
>>>>>>>
>>>>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>>>>>> ---
>>>>>>> Changes in v2:
>>>>>>>    - was: "Print memory size in decimal in construct_domU"
>>>>>>>    - stick to hex but add a 0x prefix
>>>>>>>    - adhere to 80 chars line length limit
>>>>>>
>>>>>> Honestly I prefer decimal but also hex is fine.
>>>>>
>>>>> decimal is perfect for very small values, but as we print the amount in
>>>>> KB it will become a big mess. Here some examples (decimal first, then
>>>>> hexadecimal):
>>>>>
>>>>>    512MB: 524288 vs 0x80000
>>>>>    555MB: 568320 vs 0x8ac00
>>>>>    1GB: 1048576 vs 0x100000
>>>>>    512GB: 536870912 vs 0x20000000
>>>>>    1TB: 1073741824 vs 0x40000000
>>>>>
>>>>> For power of two values, you might be able to find your way with
>>>>> decimal. It is more difficult for non power of two unless you have a
>>>>> calculator in hand.
>>>>>
>>>>> The other option I suggested in v1 is to print the amount in KB/GB/MB.
>>>>> Would that be better?
>>>>>
>>>>> That said, to be honest, I am not entirely sure why we are actually
>>>>> printing in KB. It would seems strange that someone would create a
>>>>> guest
>>>>> with memory not aligned to 1MB.
>>>>
>>>> For an RTOS (Zephyr or FreeRTOS), it should be possible for guests to
>>>> have less than 1 MB of memory, shouldn't it?
>>>
>>> Yes. So does XTF. But most of the users are likely going to allocate at
>>> least 1MB (or even 2MB to reduce the TLB pressure).
>>>
>>> So it would be better to print the value in a way that is more
>>> meaningful for the majority of the users.
>>>
>>>>> So I would consider to check the size is 1MB-aligned and then print the
>>>
>>> I will retract my suggestion to check the size. There is technically
>>> no restriction on running a guest with a size not aligned to 1MB,
>>> although it would still seem strange.
>>
>> I have a need to extend tools/tests/tsx with a VM that is a single 4k
>> page.  Something which can execute CPUID in the context of a VM and
>> cross-check the results with what the "toolstack" (test) tried to configure.
>>
>> Xen is buggy if it cannot operate a VM which looks like that, and a
>> bonus of explicitly testing like this is that it helps to remove
>> inappropriate checks.
> I can see we are all settled that it is fully ok to boot guests with a memory size of less than 1MB.
> The 'memory' dt parameter for dom0less domUs has to be specified in KB, and that is the smallest
> common unit, so it makes the process of cross-checking easier. Stefano is ok with either decimal or
> hex, Julien wanted hex (hence my v2), and I lean towards hex as well. Let's not forget that this
> patch aims to fix a misleading print that was missing the 0x prefix and could cause someone to
> read the value as decimal.

I have committed it with my acked-by.

Cheers,

-- 
Julien Grall
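[Editorial note: the decimal/hex pairs discussed in the thread can be checked with a small standalone C program. This is an illustration only; `render` is a hypothetical helper, unrelated to the actual Xen printk.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustration only: render a memory size (given in MB) the two ways
 * discussed above -- the amount in KB, in decimal and in hex with the
 * "0x" prefix the patch adds ("%#lx" emits that prefix). */
void render(unsigned long mb, char *dec, char *hex, size_t n)
{
    unsigned long kb = mb * 1024UL; /* the KB value the printk receives */

    snprintf(dec, n, "%lu", kb);
    snprintf(hex, n, "%#lx", kb);
}
```

For 555MB this yields "568320" vs "0x8ac00", matching the table quoted in the thread.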


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:06:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:06:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477033.739535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGxX-0001fx-GT; Fri, 13 Jan 2023 10:06:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477033.739535; Fri, 13 Jan 2023 10:06:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGGxX-0001fq-Dq; Fri, 13 Jan 2023 10:06:51 +0000
Received: by outflank-mailman (input) for mailman id 477033;
 Fri, 13 Jan 2023 10:06:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGGxV-0001fk-JM
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:06:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGGxV-0006VY-Bn; Fri, 13 Jan 2023 10:06:49 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.6.109]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGGxV-0005BZ-4w; Fri, 13 Jan 2023 10:06:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=JRxBYx/YUF4D8/wWQVmasMas3TMqyWh4Aj5kGQ8fr9c=; b=PhpK+vTCsmVqOtK0ROww7AANIl
	HWejeFyADa4MPIGyWncWmpkO++2DyectF3k1S2gI1TUrnGc6hb9Aj28edxJtP82LzCN9xBpynhAjk
	CLee0SoLX1YK7f5Zt+tzrV/poxLyF4wwTQDuPfPe/ZfUlTuOqlD4vtBCp+Ro6jxzpYIo=;
Message-ID: <2e8a80d6-b45d-f852-1e54-7c6e0ae4f2fd@xen.org>
Date: Fri, 13 Jan 2023 10:06:47 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 01/40] xen/arm: remove xen_phys_start and
 xenheap_phys_end from config.h
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-2-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-2-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> These two variables are stale: they only have declarations in
> config.h, they have no definition, and no code uses them. So in this
> patch, we remove them from config.h.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Acked-by: Julien Grall <jgrall@amazon.com>

I was going to commit this patch; however, it technically needs your 
signed-off-by, as you are the sender of this new version.

If you confirm your signed-off-by, then I can commit without a resend.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:10:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:10:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477040.739546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH1S-00036q-WB; Fri, 13 Jan 2023 10:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477040.739546; Fri, 13 Jan 2023 10:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH1S-00036j-TQ; Fri, 13 Jan 2023 10:10:54 +0000
Received: by outflank-mailman (input) for mailman id 477040;
 Fri, 13 Jan 2023 10:10:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGH1S-00036d-1Z
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:10:54 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2055.outbound.protection.outlook.com [40.107.103.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8f0faa43-932a-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 11:10:52 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9523.eurprd04.prod.outlook.com (2603:10a6:10:2f6::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Fri, 13 Jan
 2023 10:10:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 10:10:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f0faa43-932a-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eMtW6vDs5RTymkdjl8yDRs/w/O7LDNjYjK/8jBtnfwGbIs9zoJk1WV4NiFlFoR2Q+c4iPcmbfwRHTgm9mwTH3kugk9AFkgC6IaA8Edakh15oRMEWeg9DTV7JYOxqiXEqXL3iSgFE0xb0cQe3NMPKCkn+7P+5F8WNbfQHtZNcal+2yVVdWDP30+sLwNsa17dFMecmEGoztTfA2x7BMgw5lxEmgW6gbUle9xv0wNgcAxKLyPPDYMKDYpZlPgE5qcvENUnVHtqOPfcMC+YqW0qgXFNBWn8Q90AnTqvba5Q/RE4D997BbNcQjdvhVexzlh+M+434qe0Lf94pQwwsvHscTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZPle0OiTX6gcbg4/T+rGFfpPHXHPkX+AcLreLlEi0Iw=;
 b=DazdDQubrR+lzhX4V/R2ar1hxtwQW3SraFL+3gTzyiFZq5yz+vdmj0+JbWNGe3PYBmYw5JgbzHxQMObN18pYZjbwb4HKF9TuNGqb0uVM88r98qHRX2wMVX+2i8M3NEgI8fPuuCI4IEFxcaTAKSj0PLW6FXBwTVwTTJQtTPFRzar44minLR5K5MvHPeHTZWi1sXslP6i8m5FpuGSeW2D58GZPEZYxKZyL8eyMi6oYqHj/PzyttRFTajAvNbfQJKJT7Oi9+7FIAl0GJUMlgsc81ewJYLcEsN4tvaGTDX2Huoja6YPbdCgf0CVlSVP2KwBeHnt9xwWMAeBS/xzRgZKviQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZPle0OiTX6gcbg4/T+rGFfpPHXHPkX+AcLreLlEi0Iw=;
 b=FGwisb5M5++3i0C3B+y4c3ac18+K686+uoigFmNx/3sWLGLCZqj+tXpj3Kq+/nOcu/7hnyJbyzwI1fB/Ls/0AMu2mAAANTUQVzpHk/H6KBvCUv39i7E/hp3l9pswBZkXnu9zuihBUHsDP98UxMgJXl9CjGQiBCajgwO5bv4C1z4z3o9Ngr99z8VgrqZe8q3eSgbzBLFtv0j/ccAHEj1s+yUaetONNMKymH2B88vJ1O46l3WtEQ73c2h+sD6y1aTMc87JsglRTDp0Ix5z0FIONaPW+lyeB2CPOhett0O1i5kAlZ+YNetjmb1a64/x40pE3Y8Z2gQfwQcIQnTfPiD41Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a19dba22-1e75-473d-dba8-cc676b6594aa@suse.com>
Date: Fri, 13 Jan 2023 11:10:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 31/40] xen/mpu: disable FIXMAP in MPU system
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-32-Penny.Zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113052914.3845596-32-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0172.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB9523:EE_
X-MS-Office365-Filtering-Correlation-Id: 2f45cc95-6b3d-471a-47d5-08daf54e71e1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2f45cc95-6b3d-471a-47d5-08daf54e71e1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 10:10:49.4752
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OPv/YZ7bspYYllCF3RshEbZ6MgP4v8giNMbkCjsWvbw9JJdCezUiHg4oSBN2M7rvZUzCndKRF1Zvktk9lgI6fA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9523

On 13.01.2023 06:29, Penny Zheng wrote:
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -13,9 +13,10 @@ config ARM
>  	def_bool y
>  	select HAS_ALTERNATIVE if !ARM_V8R
>  	select HAS_DEVICE_TREE
> +	select HAS_FIXMAP if !ARM_V8R
>  	select HAS_PASSTHROUGH
>  	select HAS_PDX
> -	select HAS_PMAP
> +	select HAS_PMAP if !ARM_V8R
>  	select IOMMU_FORCE_PT_SHARE
>  	select HAS_VMAP if !ARM_V8R

Thinking about it - wouldn't it make sense to fold HAS_VMAP and HAS_FIXMAP
into a single HAS_MMU?

Jan
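[Editorial note: the folding Jan suggests might look something like the sketch below. This is an illustration of the idea only, not a tested patch; `HAS_MMU` is a hypothetical symbol, and existing users of HAS_FIXMAP/HAS_PMAP/HAS_VMAP would then key off the new option.]

```
config HAS_MMU
	bool

config ARM
	def_bool y
	select HAS_ALTERNATIVE if !ARM_V8R
	select HAS_DEVICE_TREE
	select HAS_MMU if !ARM_V8R
	select HAS_PASSTHROUGH
	select HAS_PDX
	select IOMMU_FORCE_PT_SHARE
```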


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477046.739558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2D-0003dF-8k; Fri, 13 Jan 2023 10:11:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477046.739558; Fri, 13 Jan 2023 10:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2D-0003d8-5g; Fri, 13 Jan 2023 10:11:41 +0000
Received: by outflank-mailman (input) for mailman id 477046;
 Fri, 13 Jan 2023 10:11:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2B-0003cv-Ru
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2B-0006cb-JC; Fri, 13 Jan 2023 10:11:39 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2B-0005Ty-9h; Fri, 13 Jan 2023 10:11:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=YhEWC1pOoPWCZQofIj01JkZYD5T1f8EmgftS4PFxN4U=; b=IdGznh+1OpnY911Q3qV2fbnAWB
	BjRFMzM5QKLzmKw+6LaC0/nSDU7ojHmaMCf81sRwo5hGpBHGSAjPuLo9wpkwy1tZ7iz8da/jz9yvY
	ed03tMMQjqjtqVH32dxKb6Xm6E8qShEXnyT338hOf2PctssDe2fmA9ktj2jo6Vlq8vU8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on
Date: Fri, 13 Jan 2023 10:11:22 +0000
Message-Id: <20230113101136.479-1-julien@xen.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Hi all,

Currently, Xen on Arm switches TTBR whilst the MMU is on. This is
similar to replacing existing mappings with new ones, so we need to
follow a break-before-make sequence.

To switch the TTBR, we need to temporarily disable the MMU before
updating it. This means the page-tables must contain an identity
mapping.
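Sketched as pseudocode (reconstructed from the description in this
cover letter, not the actual implementation; all names are illustrative):

```
/* Pseudocode: switching TTBR with the MMU briefly off. */
add_identity_mapping(old_tables);     /* ID map valid before the switch */
add_identity_mapping(new_tables);     /* ...and after it                */
branch_to_identity_mapped_code();     /* PC stays valid with MMU off    */
mmu_off();                            /* dsb; clear SCTLR_EL2.M; isb    */
write_ttbr(new_tables);
flush_local_tlbs();
mmu_on();                             /* dsb; set SCTLR_EL2.M; isb     */
branch_to_runtime_mapping();
```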

The current memory layout is not very flexible and has a higher chance
of clashing with the identity mapping.

On Arm64, we have plenty of unused virtual address space. Therefore, we
can simply reshuffle the layout to leave the first part of the virtual
address space empty.

On Arm32, the virtual address space is already quite full. Even if we
found space, a dynamic layout would be necessary, so a different
approach is needed. The chosen one is to have a temporary mapping that
is used to jump from the ID mapping to the runtime mapping (or vice
versa). The temporary mapping overlaps with the domheap area, as the
latter should not be in use while switching the MMU on/off.

The Arm32 part is not yet addressed and will be handled in a follow-up
series.

After this series, most of Xen's page-table code should be compliant
with the Arm Arm. The last two issues I am aware of are:
 - domheap: Mappings are replaced without using the Break-Before-Make
   approach.
 - The cache is not cleaned/invalidated when updating the page-tables
   with Data cache off (like during early boot).

The long-term plan is to get rid of the boot_* page-tables and then
directly use the runtime pages. This means that, for coloring, we will
need to build the page-tables in the relocated Xen rather than the
currently-running Xen.

For convenience, I pushed a branch with everything applied:

https://xenbits.xen.org/git-http/people/julieng/xen-unstable.git
branch boot-pt-rework-v4

Cheers,

Julien Grall (14):
  xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush
  xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB
    flush by VA
  xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush
  xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush
  xen/arm: Clean-up the memory layout
  xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
  xen/arm32: head: Jump to the runtime mapping in enable_mmu()
  xen/arm32: head: Introduce an helper to flush the TLBs
  xen/arm32: head: Remove restriction where to load Xen
  xen/arm32: head: Widen the use of the temporary mapping
  xen/arm64: Rework the memory layout
  xen/arm64: mm: Introduce helpers to prepare/enable/disable the
    identity mapping
  xen/arm64: mm: Rework switch_ttbr()
  xen/arm64: smpboot: Directly switch to the runtime page-tables

 xen/arch/arm/arm32/head.S                 | 283 ++++++++++++++--------
 xen/arch/arm/arm32/smpboot.c              |   4 +
 xen/arch/arm/arm64/Makefile               |   1 +
 xen/arch/arm/arm64/head.S                 |  82 ++++---
 xen/arch/arm/arm64/mm.c                   | 160 ++++++++++++
 xen/arch/arm/arm64/smpboot.c              |  15 +-
 xen/arch/arm/include/asm/arm32/flushtlb.h |  27 ++-
 xen/arch/arm/include/asm/arm32/mm.h       |   4 +
 xen/arch/arm/include/asm/arm64/flushtlb.h |  56 +++--
 xen/arch/arm/include/asm/arm64/mm.h       |  13 +
 xen/arch/arm/include/asm/config.h         |  72 ++++--
 xen/arch/arm/include/asm/flushtlb.h       |  10 +-
 xen/arch/arm/include/asm/mm.h             |   2 +
 xen/arch/arm/include/asm/setup.h          |  11 +
 xen/arch/arm/include/asm/smp.h            |   1 +
 xen/arch/arm/mm.c                         |  33 ++-
 xen/arch/arm/smpboot.c                    |   1 +
 17 files changed, 566 insertions(+), 209 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mm.c

-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477047.739569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2E-0003sz-H5; Fri, 13 Jan 2023 10:11:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477047.739569; Fri, 13 Jan 2023 10:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2E-0003ss-DS; Fri, 13 Jan 2023 10:11:42 +0000
Received: by outflank-mailman (input) for mailman id 477047;
 Fri, 13 Jan 2023 10:11:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2D-0003d3-4M
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2C-0006co-RG; Fri, 13 Jan 2023 10:11:40 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2C-0005Ty-Io; Fri, 13 Jan 2023 10:11:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Epx2xru5giDtyGeSu7KBQPAEsvy0qbPPMNGZosRt7DU=; b=3Mip3amkdLbFQBIIeWWOi8/59R
	2/U5bDwI4Utqz8IJwqNI+hR53AZuSgf1FOa4i7zHxZiFnTxgTvyifsPRqIHqsD8/UwNQuIHkBfyYx
	M6XZ6XIciwCaoHVxsoDBgfSBm2OfvKyADnYrjYZ2YWo3VFY0O8Pl5Wjt7JqbtzSVRgeI=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush
Date: Fri, 13 Jan 2023 10:11:23 +0000
Message-Id: <20230113101136.479-2-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Per D5-4929 in ARM DDI 0487H.a:
"A DSB NSH is sufficient to ensure completion of TLB maintenance
 instructions that apply to a single PE. A DSB ISH is sufficient to
 ensure completion of TLB maintenance instructions that apply to PEs
 in the same Inner Shareable domain.
"

This means the barrier after local TLB flushes can be reduced to
non-shareable scope.

Note that the scope of the barrier in the workaround has not been
changed, because Linux v6.1-rc8 also uses 'ish' and I couldn't find
anything in the Neoverse N1 documentation suggesting that 'nsh' would
be sufficient.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----

    I have used an older version of the Arm Arm because the explanation
    in the latest (ARM DDI 0487I.a) is less obvious. I reckon the paragraph
    about DSB in D8.13.8 is missing the shareability. But this is implied
    in B2.3.11:

    "If the required access types of the DSB is reads and writes, the
     following instructions issued by PEe before the DSB are complete for
     the required shareability domain:

     [...]

     — All TLB maintenance instructions.
    "

    Changes in v4:
        - Fix typos
        - Don't modify the shareability in the workaround
        - Add Michal's reviewed-by tag

    Changes in v3:
        - Patch added
---
 xen/arch/arm/include/asm/arm64/flushtlb.h | 25 ++++++++++++++---------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/flushtlb.h b/xen/arch/arm/include/asm/arm64/flushtlb.h
index 7c5431518741..a40693b08dd3 100644
--- a/xen/arch/arm/include/asm/arm64/flushtlb.h
+++ b/xen/arch/arm/include/asm/arm64/flushtlb.h
@@ -12,8 +12,9 @@
  * ARM64_WORKAROUND_REPEAT_TLBI:
  * Modification of the translation table for a virtual address might lead to
  * read-after-read ordering violation.
- * The workaround repeats TLBI+DSB operation for all the TLB flush operations.
- * While this is stricly not necessary, we don't want to take any risk.
+ * The workaround repeats TLBI+DSB ISH operation for all the TLB flush
+ * operations. While this is strictly not necessary, we don't want to
+ * take any risk.
  *
  * For Xen page-tables the ISB will discard any instructions fetched
  * from the old mappings.
@@ -21,12 +22,16 @@
  * For the Stage-2 page-tables the ISB ensures the completion of the DSB
  * (and therefore the TLB invalidation) before continuing. So we know
  * the TLBs cannot contain an entry for a mapping we may have removed.
+ *
+ * Note that for local TLB flush, using non-shareable (nsh) is sufficient
+ * (see D5-4929 in ARM DDI 0487H.a). However, the memory barrier
+ * for the workaround is left as inner-shareable to match Linux.
  */
-#define TLB_HELPER(name, tlbop)                  \
+#define TLB_HELPER(name, tlbop, sh)              \
 static inline void name(void)                    \
 {                                                \
     asm volatile(                                \
-        "dsb  ishst;"                            \
+        "dsb  "  # sh  "st;"                     \
         "tlbi "  # tlbop  ";"                    \
         ALTERNATIVE(                             \
             "nop; nop;",                         \
@@ -34,25 +39,25 @@ static inline void name(void)                    \
             "tlbi "  # tlbop  ";",               \
             ARM64_WORKAROUND_REPEAT_TLBI,        \
             CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \
-        "dsb  ish;"                              \
+        "dsb  "  # sh  ";"                       \
         "isb;"                                   \
         : : : "memory");                         \
 }
 
 /* Flush local TLBs, current VMID only. */
-TLB_HELPER(flush_guest_tlb_local, vmalls12e1);
+TLB_HELPER(flush_guest_tlb_local, vmalls12e1, nsh);
 
 /* Flush innershareable TLBs, current VMID only */
-TLB_HELPER(flush_guest_tlb, vmalls12e1is);
+TLB_HELPER(flush_guest_tlb, vmalls12e1is, ish);
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
-TLB_HELPER(flush_all_guests_tlb_local, alle1);
+TLB_HELPER(flush_all_guests_tlb_local, alle1, nsh);
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
-TLB_HELPER(flush_all_guests_tlb, alle1is);
+TLB_HELPER(flush_all_guests_tlb, alle1is, ish);
 
 /* Flush all hypervisor mappings from the TLB of the local processor. */
-TLB_HELPER(flush_xen_tlb_local, alle2);
+TLB_HELPER(flush_xen_tlb_local, alle2, nsh);
 
 /* Flush TLB of local processor for address va. */
 static inline void  __flush_xen_tlb_one_local(vaddr_t va)
-- 
2.38.1
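[Editorial note: the new `sh` parameter works via the preprocessor's stringification operator, which splices "nsh"/"ish" into the asm string. A host-side sketch, not Xen code; the asm body is abbreviated to "..." and the helper names are illustrative:]

```c
#include <assert.h>
#include <string.h>

/* Host-side sketch (not Xen code): the reworked TLB_HELPER builds its
 * barrier mnemonics by stringifying the new "sh" parameter, so the
 * same macro body yields "dsb nshst" for local flushes and
 * "dsb ishst" for inner-shareable ones. Adjacent string literals are
 * concatenated by the compiler. */
#define BARRIERS(sh) "dsb " #sh "st; ... dsb " #sh "; isb;"

/* Return the composed strings for inspection. */
const char *barriers_nsh(void) { return BARRIERS(nsh); }
const char *barriers_ish(void) { return BARRIERS(ish); }
```

Passing a different `sh` token therefore changes only the shareability suffix of both DSBs, which is exactly what the patch exploits.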



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477048.739579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2F-0004A7-Ts; Fri, 13 Jan 2023 10:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477048.739579; Fri, 13 Jan 2023 10:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2F-00049y-QW; Fri, 13 Jan 2023 10:11:43 +0000
Received: by outflank-mailman (input) for mailman id 477048;
 Fri, 13 Jan 2023 10:11:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2E-0003t1-Eo
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2E-0006d1-3y; Fri, 13 Jan 2023 10:11:42 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2D-0005Ty-Rv; Fri, 13 Jan 2023 10:11:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=vzfTzczjLXV/PEg7iDoDX2AUbfuIDhGYbEkSsXpjSeE=; b=CTJBH7q+3tEciLNn32G6x2pXxh
	v7WFKd3NEiPWlcGbm5f5hXiAs+5kZBGlPmpt4OHD2dQ5i01kl1uVLr6RLXw2gNlSFpZV6vJvqON8B
	DEuBa1JY11p3Og8QqrJ90ogMAeJKUT9LkfpLCyqQgdad2QrArUn3iWit5t8QDS3DtDeM=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA
Date: Fri, 13 Jan 2023 10:11:24 +0000
Message-Id: <20230113101136.479-3-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Looking at the Neoverse N1 errata document, it is not clear to me
why the TLBI repeat workaround is not applied for TLB flush by VA.

The TLB flush by VA helpers are used in flush_xen_tlb_range_va_local()
and flush_xen_tlb_range_va(). So if the range size is a fixed size smaller
than PAGE_SIZE, the compiler could remove the loop and therefore
replicate the sequence described in erratum 1286807.

So the TLBI repeat workaround should also be applied for the TLB flush
by VA helpers.

Fixes: 22e323d115d8 ("xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807")
Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----
    This was spotted while looking at reducing the scope of the memory
    barriers. I don't have any HW affected.

    Changes in v4:
        - Add Michal's reviewed-by tag

    Changes in v3:
        - Patch added
---
 xen/arch/arm/include/asm/arm64/flushtlb.h | 31 +++++++++++++++++------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/flushtlb.h b/xen/arch/arm/include/asm/arm64/flushtlb.h
index a40693b08dd3..7f81cbbb93f9 100644
--- a/xen/arch/arm/include/asm/arm64/flushtlb.h
+++ b/xen/arch/arm/include/asm/arm64/flushtlb.h
@@ -44,6 +44,27 @@ static inline void name(void)                    \
         : : : "memory");                         \
 }
 
+/*
+ * Flush TLB by VA. This will likely be used in a loop, so the caller
+ * is responsible for using the appropriate memory barriers before/after
+ * the sequence.
+ *
+ * See above about the ARM64_WORKAROUND_REPEAT_TLBI sequence.
+ */
+#define TLB_HELPER_VA(name, tlbop)               \
+static inline void name(vaddr_t va)              \
+{                                                \
+    asm volatile(                                \
+        "tlbi "  # tlbop  ", %0;"                \
+        ALTERNATIVE(                             \
+            "nop; nop;",                         \
+            "dsb  ish;"                          \
+            "tlbi "  # tlbop  ", %0;",           \
+            ARM64_WORKAROUND_REPEAT_TLBI,        \
+            CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \
+        : : "r" (va >> PAGE_SHIFT) : "memory");  \
+}
+
 /* Flush local TLBs, current VMID only. */
 TLB_HELPER(flush_guest_tlb_local, vmalls12e1, nsh);
 
@@ -60,16 +81,10 @@ TLB_HELPER(flush_all_guests_tlb, alle1is, ish);
 TLB_HELPER(flush_xen_tlb_local, alle2, nsh);
 
 /* Flush TLB of local processor for address va. */
-static inline void  __flush_xen_tlb_one_local(vaddr_t va)
-{
-    asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
-}
+TLB_HELPER_VA(__flush_xen_tlb_one_local, vae2);
 
 /* Flush TLB of all processors in the inner-shareable domain for address va. */
-static inline void __flush_xen_tlb_one(vaddr_t va)
-{
-    asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
-}
+TLB_HELPER_VA(__flush_xen_tlb_one, vae2is);
 
 #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
 /*
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477049.739584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2G-0004E1-AB; Fri, 13 Jan 2023 10:11:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477049.739584; Fri, 13 Jan 2023 10:11:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2G-0004DT-4j; Fri, 13 Jan 2023 10:11:44 +0000
Received: by outflank-mailman (input) for mailman id 477049;
 Fri, 13 Jan 2023 10:11:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2F-00049S-JG
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2F-0006dD-Cp; Fri, 13 Jan 2023 10:11:43 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2F-0005Ty-4l; Fri, 13 Jan 2023 10:11:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=S5CSEJlBmASUGIMShN3J0tQmDdGXRbDxfQ6mQC3mU80=; b=ulnD1k8Awt/9sxbOaG9O+ol4Ir
	6x/Fwn9EklXfrM8GdEKuPGCMtMxAm8ZCiRDlyHOnDeEA9vE4g49APkobuRvJBI1pptfi60BIApyID
	VumJLZiqrePbqAEKDJFnWWtca5/DnEfFlAfECpa4qz0BiHTmIAZWPylwcHBl88IYJP8I=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush
Date: Fri, 13 Jan 2023 10:11:25 +0000
Message-Id: <20230113101136.479-4-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Per G5-9224 in ARM DDI 0487I.a:

"A DSB NSH is sufficient to ensure completion of TLB maintenance
 instructions that apply to a single PE. A DSB ISH is sufficient to
 ensure completion of TLB maintenance instructions that apply to PEs
 in the same Inner Shareable domain.
"

This is quoting the Armv8 specification because I couldn't find an
explicit statement in the Armv7 specification. Instead, I could find
bits in various places that confirm the same behaviour.

Furthermore, Linux has been using 'nsh' since 2013 (62cbbc42e001
"ARM: tlb: reduce scope of barrier domains for TLB invalidation").

This means the barrier after local TLB flushes can be reduced to
non-shareable scope.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----

    Changes in v4:
        - Add Michal's reviewed-by tag

    Changes in v3:
        - Patch added
---
 xen/arch/arm/include/asm/arm32/flushtlb.h | 27 +++++++++++++----------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm32/flushtlb.h b/xen/arch/arm/include/asm/arm32/flushtlb.h
index 9085e6501153..7ae6a12f8155 100644
--- a/xen/arch/arm/include/asm/arm32/flushtlb.h
+++ b/xen/arch/arm/include/asm/arm32/flushtlb.h
@@ -15,30 +15,33 @@
  * For the Stage-2 page-tables the ISB ensures the completion of the DSB
  * (and therefore the TLB invalidation) before continuing. So we know
  * the TLBs cannot contain an entry for a mapping we may have removed.
+ *
+ * Note that for local TLB flush, using non-shareable (nsh) is sufficient
+ * (see G5-9224 in ARM DDI 0487I.a).
  */
-#define TLB_HELPER(name, tlbop) \
-static inline void name(void)   \
-{                               \
-    dsb(ishst);                 \
-    WRITE_CP32(0, tlbop);       \
-    dsb(ish);                   \
-    isb();                      \
+#define TLB_HELPER(name, tlbop, sh) \
+static inline void name(void)       \
+{                                   \
+    dsb(sh ## st);                  \
+    WRITE_CP32(0, tlbop);           \
+    dsb(sh);                        \
+    isb();                          \
 }
 
 /* Flush local TLBs, current VMID only */
-TLB_HELPER(flush_guest_tlb_local, TLBIALL);
+TLB_HELPER(flush_guest_tlb_local, TLBIALL, nsh);
 
 /* Flush inner shareable TLBs, current VMID only */
-TLB_HELPER(flush_guest_tlb, TLBIALLIS);
+TLB_HELPER(flush_guest_tlb, TLBIALLIS, ish);
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
-TLB_HELPER(flush_all_guests_tlb_local, TLBIALLNSNH);
+TLB_HELPER(flush_all_guests_tlb_local, TLBIALLNSNH, nsh);
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
-TLB_HELPER(flush_all_guests_tlb, TLBIALLNSNHIS);
+TLB_HELPER(flush_all_guests_tlb, TLBIALLNSNHIS, ish);
 
 /* Flush all hypervisor mappings from the TLB of the local processor. */
-TLB_HELPER(flush_xen_tlb_local, TLBIALLH);
+TLB_HELPER(flush_xen_tlb_local, TLBIALLH, nsh);
 
 /* Flush TLB of local processor for address va. */
 static inline void __flush_xen_tlb_one_local(vaddr_t va)
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477050.739601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2H-0004eY-Ll; Fri, 13 Jan 2023 10:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477050.739601; Fri, 13 Jan 2023 10:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2H-0004dF-E5; Fri, 13 Jan 2023 10:11:45 +0000
Received: by outflank-mailman (input) for mailman id 477050;
 Fri, 13 Jan 2023 10:11:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2G-0004RI-OW
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2G-0006dU-Lr; Fri, 13 Jan 2023 10:11:44 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2G-0005Ty-Dp; Fri, 13 Jan 2023 10:11:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=AlqTneJ4LSbMwjz1i6xFwfom8gh69DQnrqOXDMM4PVs=; b=0hhPFhOI2/yxluEn6Ulv0PJ20B
	bgPfex6S69Ngb71ds7CfOllSilwOFxXnG3J9b3nzTPOvxApEMOcQ1sHDxwVpmIdhm5rN1bwdOoPNW
	0TDJMTs17niug8uO9ojs6Ey8OdwyNwxHcCSt6cjoc9sG5skH3AdCaHWlvjsn+9j5SSKM=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush
Date: Fri, 13 Jan 2023 10:11:26 +0000
Message-Id: <20230113101136.479-5-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, flush_xen_tlb_range_va{,_local}() are using system-wide
memory barriers. This is quite expensive and unnecessary.

For the local version, a non-shareable barrier is sufficient.
For the SMP version, an inner-shareable barrier is sufficient.

Furthermore, the initial barrier only needs to be a store barrier.

For the full explanation of the sequence see asm/arm{32,64}/flushtlb.h.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----
    Changes in v4:
        - Add Michal's reviewed-by tag

    Changes in v3:
        - Patch added
---
 xen/arch/arm/include/asm/flushtlb.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/include/asm/flushtlb.h b/xen/arch/arm/include/asm/flushtlb.h
index 125a141975e0..e45fb6d97b02 100644
--- a/xen/arch/arm/include/asm/flushtlb.h
+++ b/xen/arch/arm/include/asm/flushtlb.h
@@ -37,13 +37,14 @@ static inline void flush_xen_tlb_range_va_local(vaddr_t va,
 {
     vaddr_t end = va + size;
 
-    dsb(sy); /* Ensure preceding are visible */
+    /* See asm/arm{32,64}/flushtlb.h for the explanation of the sequence. */
+    dsb(nshst); /* Ensure prior page-tables updates have completed */
     while ( va < end )
     {
         __flush_xen_tlb_one_local(va);
         va += PAGE_SIZE;
     }
-    dsb(sy); /* Ensure completion of the TLB flush */
+    dsb(nsh); /* Ensure the TLB invalidation has completed */
     isb();
 }
 
@@ -56,13 +57,14 @@ static inline void flush_xen_tlb_range_va(vaddr_t va,
 {
     vaddr_t end = va + size;
 
-    dsb(sy); /* Ensure preceding are visible */
+    /* See asm/arm{32,64}/flushtlb.h for the explanation of the sequence. */
+    dsb(ishst); /* Ensure prior page-tables updates have completed */
     while ( va < end )
     {
         __flush_xen_tlb_one(va);
         va += PAGE_SIZE;
     }
-    dsb(sy); /* Ensure completion of the TLB flush */
+    dsb(ish); /* Ensure the TLB invalidation has completed */
     isb();
 }
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477051.739613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2J-00052u-Th; Fri, 13 Jan 2023 10:11:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477051.739613; Fri, 13 Jan 2023 10:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2J-00052d-Q8; Fri, 13 Jan 2023 10:11:47 +0000
Received: by outflank-mailman (input) for mailman id 477051;
 Fri, 13 Jan 2023 10:11:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2I-0004rn-3X
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2H-0006dq-Ut; Fri, 13 Jan 2023 10:11:45 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2H-0005Ty-Mv; Fri, 13 Jan 2023 10:11:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=eMpoSuMM+8Mk7BiwAO9152ruWQvicYh/plE8HC6pYuc=; b=wzBBV/FeUrqxkG5oxwzYMh0XSj
	c7eo/3x4s/aK6QzZlkLyf08ftYtLT7gwXCFwPnsae4y1RLhhoSnVVyXUaEFR8ZdjX8N56/8F9JNnV
	Zyw2JqCt5vHMWZy5isltR4HYBwxYM3TvhkLWRr0vL2T2RaQEh1jKkYJvlh37Vh1YILF8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH v4 05/14] xen/arm: Clean-up the memory layout
Date: Fri, 13 Jan 2023 10:11:27 +0000
Message-Id: <20230113101136.479-6-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

In a follow-up patch, the base address for the common mappings will
vary between arm32 and arm64. To avoid any duplication, define
every mapping in the common region from the previous one.

Take the opportunity to:
    * add missing *_SIZE for FIXMAP_VIRT_* and XEN_VIRT_*
    * switch to MB()/GB() to avoid hexadecimal (easier to read)

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----
    Changes in v4:
        - Add Michal's reviewed-by tag
        - Fix typo in the commit message

    Changes in v3:
        - Switch more macros to use MB()/GB()
        - Remove duplicated sentence in the commit message

    Changes in v2:
        - Use _AT(vaddr_t, ...) to build on 32-bit.
        - Drop COMMON_VIRT_START
---
 xen/arch/arm/include/asm/config.h | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 0fefed1b8aa9..87851e677701 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -107,14 +107,19 @@
  *  Unused
  */
 
-#define XEN_VIRT_START         _AT(vaddr_t,0x00200000)
-#define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
+#define XEN_VIRT_START          _AT(vaddr_t, MB(2))
+#define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
 
-#define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
-#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
+#define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
+#define FIXMAP_VIRT_SIZE        _AT(vaddr_t, MB(2))
+
+#define FIXMAP_ADDR(n)          (FIXMAP_VIRT_START + (n) * PAGE_SIZE)
+
+#define BOOT_FDT_VIRT_START     (FIXMAP_VIRT_START + FIXMAP_VIRT_SIZE)
+#define BOOT_FDT_VIRT_SIZE      _AT(vaddr_t, MB(4))
 
 #ifdef CONFIG_LIVEPATCH
-#define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
+#define LIVEPATCH_VMAP_START    (BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE)
 #define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
 #endif
 
@@ -124,18 +129,18 @@
 
 #define CONFIG_SEPARATE_XENHEAP 1
 
-#define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
+#define FRAMETABLE_VIRT_START  _AT(vaddr_t, MB(32))
 #define FRAMETABLE_SIZE        MB(128-32)
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
 #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
-#define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
+#define VMAP_VIRT_START        _AT(vaddr_t, MB(256))
 #define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
 
-#define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
+#define XENHEAP_VIRT_START     _AT(vaddr_t, GB(1))
 #define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
 
-#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
+#define DOMHEAP_VIRT_START     _AT(vaddr_t, GB(2))
 #define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
 
 #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477052.739623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2L-0005Ji-8z; Fri, 13 Jan 2023 10:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477052.739623; Fri, 13 Jan 2023 10:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2L-0005Iy-4I; Fri, 13 Jan 2023 10:11:49 +0000
Received: by outflank-mailman (input) for mailman id 477052;
 Fri, 13 Jan 2023 10:11:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2J-00051X-9T
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2J-0006eE-4D; Fri, 13 Jan 2023 10:11:47 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2I-0005Ty-Sa; Fri, 13 Jan 2023 10:11:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=5DxH3rJatNojgfrSjlsm9PmDUaxsl/RbdM8hJOLXQ5o=; b=AoU7QgyyLyJvn6mtgRKQFFNXLb
	vfDhyUNGNhGVHut5c/4xvPkGF8WB190KjSAtfmqMmyXWer39pvfhbNIHHAxvOBySVb9haN44qDL1y
	p+8ZpbNSJMJwggr6bfqyAMYF2gfe9oE4/EhRUNoMPsP4j5bWZMdxtd4lQgqzNCSd/Eco=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
Date: Fri, 13 Jan 2023 10:11:28 +0000
Message-Id: <20230113101136.479-7-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

"ldr rX, =<label>" is used to load a value from the literal pool. This
implies a memory access.

This can be avoided by using the mov_w macro, which encodes the value in
the immediates of two instructions.

So replace all "ldr rX, =<label>" with "mov_w rX, <label>".

No functional changes intended.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

----
    Changes in v4:
        * Add Stefano's reviewed-by tag
        * Add missing space
        * Add Michal's reviewed-by tag

    Changes in v3:
        * Patch added
---
 xen/arch/arm/arm32/head.S | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 5c1044710386..b680a4553fb6 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -62,7 +62,7 @@
 .endm
 
 .macro load_paddr rb, sym
-        ldr   \rb, =\sym
+        mov_w \rb, \sym
         add   \rb, \rb, r10
 .endm
 
@@ -149,7 +149,7 @@ past_zImage:
         mov   r8, r2                 /* r8 := DTB base address */
 
         /* Find out where we are */
-        ldr   r0, =start
+        mov_w r0, start
         adr   r9, start              /* r9  := paddr (start) */
         sub   r10, r9, r0            /* r10 := phys-offset */
 
@@ -170,7 +170,7 @@ past_zImage:
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        ldr   r0, =primary_switched
+        mov_w r0, primary_switched
         mov   pc, r0
 primary_switched:
         /*
@@ -190,7 +190,7 @@ primary_switched:
         /* Setup the arguments for start_xen and jump to C world */
         mov   r0, r10                /* r0 := Physical offset */
         mov   r1, r8                 /* r1 := paddr(FDT) */
-        ldr   r2, =start_xen
+        mov_w r2, start_xen
         b     launch
 ENDPROC(start)
 
@@ -198,7 +198,7 @@ GLOBAL(init_secondary)
         cpsid aif                    /* Disable all interrupts */
 
         /* Find out where we are */
-        ldr   r0, =start
+        mov_w r0, start
         adr   r9, start              /* r9  := paddr (start) */
         sub   r10, r9, r0            /* r10 := phys-offset */
 
@@ -227,7 +227,7 @@ GLOBAL(init_secondary)
 
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        ldr   r0, =secondary_switched
+        mov_w r0, secondary_switched
         mov   pc, r0
 secondary_switched:
         /*
@@ -236,7 +236,7 @@ secondary_switched:
          *
          * XXX: This is not compliant with the Arm Arm.
          */
-        ldr   r4, =init_ttbr         /* VA of HTTBR value stashed by CPU 0 */
+        mov_w r4, init_ttbr          /* VA of HTTBR value stashed by CPU 0 */
         ldrd  r4, r5, [r4]           /* Actual value */
         dsb
         mcrr  CP64(r4, r5, HTTBR)
@@ -254,7 +254,7 @@ secondary_switched:
 #endif
         PRINT("- Ready -\r\n")
         /* Jump to C world */
-        ldr   r2, =start_secondary
+        mov_w r2, start_secondary
         b     launch
 ENDPROC(init_secondary)
 
@@ -297,8 +297,8 @@ ENDPROC(check_cpu_mode)
  */
 zero_bss:
         PRINT("- Zero BSS -\r\n")
-        ldr   r0, =__bss_start       /* r0 := vaddr(__bss_start) */
-        ldr   r1, =__bss_end         /* r1 := vaddr(__bss_start) */
+        mov_w r0, __bss_start        /* r0 := vaddr(__bss_start) */
+        mov_w r1, __bss_end          /* r1 := vaddr(__bss_end) */
 
         mov   r2, #0
 1:      str   r2, [r0], #4
@@ -330,8 +330,8 @@ cpu_init:
 
 cpu_init_done:
         /* Set up memory attribute type tables */
-        ldr   r0, =MAIR0VAL
-        ldr   r1, =MAIR1VAL
+        mov_w r0, MAIR0VAL
+        mov_w r1, MAIR1VAL
         mcr   CP32(r0, HMAIR0)
         mcr   CP32(r1, HMAIR1)
 
@@ -341,10 +341,10 @@ cpu_init_done:
          * PT walks are write-back, write-allocate in both cache levels,
          * Full 32-bit address space goes through this table.
          */
-        ldr   r0, =(TCR_RES1|TCR_SH0_IS|TCR_ORGN0_WBWA|TCR_IRGN0_WBWA|TCR_T0SZ(0))
+        mov_w r0, (TCR_RES1|TCR_SH0_IS|TCR_ORGN0_WBWA|TCR_IRGN0_WBWA|TCR_T0SZ(0))
         mcr   CP32(r0, HTCR)
 
-        ldr   r0, =HSCTLR_SET
+        mov_w r0, HSCTLR_SET
         mcr   CP32(r0, HSCTLR)
         isb
 
@@ -452,7 +452,7 @@ ENDPROC(cpu_init)
  */
 create_page_tables:
         /* Prepare the page-tables for mapping Xen */
-        ldr   r0, =XEN_VIRT_START
+        mov_w r0, XEN_VIRT_START
         create_table_entry boot_pgtable, boot_second, r0, 1
         create_table_entry boot_second, boot_third, r0, 2
 
@@ -576,7 +576,7 @@ remove_identity_mapping:
         cmp   r1, #XEN_FIRST_SLOT
         beq   1f
         /* It is not in slot 0, remove the entry */
-        ldr   r0, =boot_pgtable      /* r0 := root table */
+        mov_w r0, boot_pgtable       /* r0 := root table */
         lsl   r1, r1, #3             /* r1 := Slot offset */
         strd  r2, r3, [r0, r1]
         b     identity_mapping_removed
@@ -590,7 +590,7 @@ remove_identity_mapping:
         cmp   r1, #XEN_SECOND_SLOT
         beq   identity_mapping_removed
         /* It is not in slot 1, remove the entry */
-        ldr   r0, =boot_second       /* r0 := second table */
+        mov_w r0, boot_second        /* r0 := second table */
         lsl   r1, r1, #3             /* r1 := Slot offset */
         strd  r2, r3, [r0, r1]
 
@@ -620,7 +620,7 @@ ENDPROC(remove_identity_mapping)
 setup_fixmap:
 #if defined(CONFIG_EARLY_PRINTK)
         /* Add UART to the fixmap table */
-        ldr   r0, =EARLY_UART_VIRTUAL_ADDRESS
+        mov_w r0, EARLY_UART_VIRTUAL_ADDRESS
         create_mapping_entry xen_fixmap, r0, r11, type=PT_DEV_L3
 #endif
         /* Map fixmap into boot_second */
@@ -643,7 +643,7 @@ ENDPROC(setup_fixmap)
  * Clobbers r3
  */
 launch:
-        ldr   r3, =init_data
+        mov_w r3, init_data
         add   r3, #INITINFO_stack    /* Find the boot-time stack */
         ldr   sp, [r3]
         add   sp, #STACK_SIZE        /* (which grows down from the top). */
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477053.739628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2L-0005Ns-N0; Fri, 13 Jan 2023 10:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477053.739628; Fri, 13 Jan 2023 10:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2L-0005Lt-Ed; Fri, 13 Jan 2023 10:11:49 +0000
Received: by outflank-mailman (input) for mailman id 477053;
 Fri, 13 Jan 2023 10:11:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2K-0005Cx-GV
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2K-0006eb-9r; Fri, 13 Jan 2023 10:11:48 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2K-0005Ty-20; Fri, 13 Jan 2023 10:11:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/jrRR52w9ey73zmvhw3eXGFrkt9IUeVOh86M9ieopvE=; b=Bpuwqn98yCEHVU+RLPiCGxjZ4A
	rXvYc02ZZAMIKHk05htoXUdR1ib/j3d7xp2zwU4Htm3fteYn5lyWVMQQyNuKa85GkoYHEMvxY9EZY
	0mii1PO9exL1hk7cHrfbW9qrInlzu9u8neCC+OKz6xKBEBdMIbd3zVOiAmy3oc6TLOrM=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in enable_mmu()
Date: Fri, 13 Jan 2023 10:11:29 +0000
Message-Id: <20230113101136.479-8-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, enable_mmu() will return to an address in the 1:1 mapping
and each path is responsible for switching to the runtime mapping.

In a follow-up patch, the logic to switch to the runtime mapping
will become more complex. So, to avoid more code/comment duplication,
move the switch into enable_mmu().

Lastly, take the opportunity to replace the loads from the literal pool
with mov_w.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v4:
        - Add Stefano's reviewed-by tag

    Changes in v3:
        - Fix typo in the commit message

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 50 +++++++++++++++++++++++----------------
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index b680a4553fb6..50ad6c948be2 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -167,19 +167,11 @@ past_zImage:
         bl    check_cpu_mode
         bl    cpu_init
         bl    create_page_tables
-        bl    enable_mmu
 
-        /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        mov_w r0, primary_switched
-        mov   pc, r0
+        /* Address in the runtime mapping to jump to after the MMU is enabled */
+        mov_w lr, primary_switched
+        b     enable_mmu
 primary_switched:
-        /*
-         * The 1:1 map may clash with other parts of the Xen virtual memory
-         * layout. As it is not used anymore, remove it completely to
-         * avoid having to worry about replacing existing mapping
-         * afterwards.
-         */
-        bl    remove_identity_mapping
         bl    setup_fixmap
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -223,12 +215,10 @@ GLOBAL(init_secondary)
         bl    check_cpu_mode
         bl    cpu_init
         bl    create_page_tables
-        bl    enable_mmu
 
-
-        /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
-        mov_w r0, secondary_switched
-        mov   pc, r0
+        /* Address in the runtime mapping to jump to after the MMU is enabled */
+        mov_w lr, secondary_switched
+        b     enable_mmu
 secondary_switched:
         /*
          * Non-boot CPUs need to move on to the proper pagetables, which were
@@ -523,9 +513,12 @@ virtphys_clash:
 ENDPROC(create_page_tables)
 
 /*
- * Turn on the Data Cache and the MMU. The function will return on the 1:1
- * mapping. In other word, the caller is responsible to switch to the runtime
- * mapping.
+ * Turn on the Data Cache and the MMU. The function will return
+ * to the virtual address provided in LR (e.g. the runtime mapping).
+ *
+ * Inputs:
+ *   r9 : paddr(start)
+ *   lr : Virtual address to return to
  *
  * Clobbers r0 - r3
  */
@@ -551,7 +544,24 @@ enable_mmu:
         dsb                          /* Flush PTE writes and finish reads */
         mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
         isb                          /* Now, flush the icache */
-        mov   pc, lr
+
+        /*
+         * The MMU is turned on and we are in the 1:1 mapping. Switch
+         * to the runtime mapping.
+         */
+        mov_w r0, 1f
+        mov   pc, r0
+1:
+        /*
+         * The 1:1 map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to
+         * avoid having to worry about replacing existing mapping
+         * afterwards.
+         *
+         * On return this will jump to the virtual address requested by
+         * the caller.
+         */
+        b     remove_identity_mapping
 ENDPROC(enable_mmu)
 
 /*
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477054.739644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2N-0005pV-DI; Fri, 13 Jan 2023 10:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477054.739644; Fri, 13 Jan 2023 10:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2N-0005oq-5P; Fri, 13 Jan 2023 10:11:51 +0000
Received: by outflank-mailman (input) for mailman id 477054;
 Fri, 13 Jan 2023 10:11:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2L-0005P0-NI
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2L-0006ep-Fe; Fri, 13 Jan 2023 10:11:49 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2L-0005Ty-7e; Fri, 13 Jan 2023 10:11:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ANX4uKeCMRVYaXkbuMkjBprVFaVY+sMMAHGSSjC0sGw=; b=dPsTKjOE8q39aZlEKcGaRYj8pi
	tsaflRTAnwNBfhsfgnCX2Vms/yRRLbEN6g0iPhut/UddBFIdBPNx2nZTFLw+SbStTmzJGrmHK5oHF
	S9fk1Q8mV2suAWKmk8TEVVlcTvYHzS6M4Jqf8qgMhV+teUggvnBZ4xhESGtwutw/LO+4=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 08/14] xen/arm32: head: Introduce a helper to flush the TLBs
Date: Fri, 13 Jan 2023 10:11:30 +0000
Message-Id: <20230113101136.479-9-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

The sequence for flushing the TLBs is 4 instructions long and often
requires an explanation of how it works.

So create a helper and use it in the boot code (switch_ttbr() is left
alone until we decide on the semantics of the call).

Note that in secondary_switched, we were also flushing the instruction
cache and branch predictor. Neither of them was necessary because:
    * We are only supporting IVIPT cache on arm32, so the instruction
      cache flush is only necessary when executable code is modified.
      None of the boot code is doing that.
    * The instruction cache is not invalidated and misprediction is not
      a problem at boot.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v4:
        - Expand the commit message to explain why switch_ttbr() is
          not updated.
        - Remove extra spaces in the comment
        - Fix typo in the commit message

    Changes in v3:
        * Fix typo
        * Update the documentation
        * Rename the argument from tmp1 to tmp
---
 xen/arch/arm/arm32/head.S | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 50ad6c948be2..67b910808b74 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -66,6 +66,20 @@
         add   \rb, \rb, r10
 .endm
 
+/*
+ * Flush local TLBs
+ *
+ * @tmp: Scratch register
+ *
+ * See asm/arm32/flushtlb.h for the explanation of the sequence.
+ */
+.macro flush_xen_tlb_local tmp
+        dsb   nshst
+        mcr   CP32(\tmp, TLBIALLH)
+        dsb   nsh
+        isb
+.endm
+
 /*
  * Common register usage in this file:
  *   r0  -
@@ -232,11 +246,7 @@ secondary_switched:
         mcrr  CP64(r4, r5, HTTBR)
         dsb
         isb
-        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
-        mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
-        mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
-        dsb                          /* Ensure completion of TLB+BP flush */
-        isb
+        flush_xen_tlb_local r0
 
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
@@ -529,8 +539,7 @@ enable_mmu:
          * The state of the TLBs is unknown before turning on the MMU.
          * Flush them to avoid stale one.
          */
-        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLBs */
-        dsb   nsh
+        flush_xen_tlb_local r0
 
         /* Write Xen's PT's paddr into the HTTBR */
         load_paddr r0, boot_pgtable
@@ -605,12 +614,7 @@ remove_identity_mapping:
         strd  r2, r3, [r0, r1]
 
 identity_mapping_removed:
-        /* See asm/arm32/flushtlb.h for the explanation of the sequence. */
-        dsb   nshst
-        mcr   CP32(r0, TLBIALLH)
-        dsb   nsh
-        isb
-
+        flush_xen_tlb_local r0
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:11:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:11:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477055.739651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2O-0005y5-4V; Fri, 13 Jan 2023 10:11:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477055.739651; Fri, 13 Jan 2023 10:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGH2N-0005vn-MO; Fri, 13 Jan 2023 10:11:51 +0000
Received: by outflank-mailman (input) for mailman id 477055;
 Fri, 13 Jan 2023 10:11:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGH2M-0005kJ-Qu
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:11:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2M-0006f1-Od; Fri, 13 Jan 2023 10:11:50 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2M-0005Ty-DI; Fri, 13 Jan 2023 10:11:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=NC2ekxnVww5wg4SYyZ+ZoYGc9aeqnQEz5JZ2U/1m/VM=; b=N8m9sAtxqZPbFvAI9ThWmffC/t
	WGk+5ykLdBpoKsZ1JhTePi1n/rE0HYYbXj3b76eqca/GLEU6DrIQkWHQ7s3qGz79fGKdv0ECQUZlL
	oB0Teq27B2bfp4avx+fGhWLiGdV5fmf0XKLpt6RC1efFEv9PyoNMjJ2s20OKn0eVcZN4=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to load Xen
Date: Fri, 13 Jan 2023 10:11:31 +0000
Message-Id: <20230113101136.479-10-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, bootloaders can load Xen anywhere in memory except the
region 2MB - 4MB. While I am not aware of any issue, we have no way
to tell bootloaders to avoid that region.

In addition to that, in the future, Xen may grow over 2MB if we
enable features like UBSAN or GCOV. To avoid widening the restriction
on the load address, it would be better to get rid of it.

When the identity mapping clashes with the Xen runtime mapping,
we need an extra indirection to be able to replace the identity
mapping with the Xen runtime mapping.

Reserve a new memory region that will be used to temporarily map Xen.
For convenience, the new area re-uses the same first slot as the
domheap, which is used for per-CPU temporary mappings after a CPU
has booted.

Furthermore, directly map boot_second (which covers Xen and more)
to the temporary area. This avoids allocating an extra page-table
for the second level and will be helpful for follow-up patches (we
will want to use the fixmap whilst in the temporary mapping).

Lastly, some parts of the code now need to know whether the temporary
mapping was created. So reserve r12 to store this information.
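As a rough sketch (not part of the patch), the aliasing performed by the
TEMPORARY_AREA_ADDR() macro introduced below can be modelled in Python,
assuming 1 GiB first-level slots as on arm32 LPAE. The slot index and the
sample address are hypothetical, chosen only to show the substitution:

```python
LEVEL1_SHIFT = 30                      # 1 GiB first-level slots (arm32 LPAE)
LEVEL1_OFFSET_MASK = (1 << LEVEL1_SHIFT) - 1

def temporary_area_addr(addr: int, temporary_first_slot: int) -> int:
    """Keep the offset within the 1 GiB first-level slot, but substitute
    the temporary (domheap) slot index, producing an alias of the same
    second-level mappings under a different first-level slot."""
    return (addr & LEVEL1_OFFSET_MASK) | (temporary_first_slot << LEVEL1_SHIFT)

# Hypothetical values, for illustration only:
xen_virt_start = 0x0020_0000           # lives in first-level slot 0 here
temp = temporary_area_addr(xen_virt_start, temporary_first_slot=2)
assert temp >> LEVEL1_SHIFT == 2       # aliased into the temporary slot
assert temp & LEVEL1_OFFSET_MASK == xen_virt_start & LEVEL1_OFFSET_MASK
```

Because only the first-level index changes, linking boot_second under the
temporary slot makes the second-level offsets line up, which is why the
build assertions below check that the second-level slots match.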

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
    Changes in v4:
        - Remove spurious newline

    Changes in v3:
        - Remove the ASSERT() in init_domheap_mappings() because it was
          bogus (secondary CPU root tables are initialized to the CPU0
          root table so the entry will be valid). Also, it is not
          related to this patch as the CPU0 root table are rebuilt
          during boot. The ASSERT() will be re-introduced later.

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S         | 139 ++++++++++++++++++++++++++----
 xen/arch/arm/include/asm/config.h |  14 +++
 xen/arch/arm/mm.c                 |  14 +++
 3 files changed, 152 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 67b910808b74..3800efb44169 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -35,6 +35,9 @@
 #define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
 #define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
 
+/* Offset between the early boot xen mapping and the runtime xen mapping */
+#define XEN_TEMPORARY_OFFSET      (TEMPORARY_XEN_VIRT_START - XEN_VIRT_START)
+
 #if defined(CONFIG_EARLY_PRINTK) && defined(CONFIG_EARLY_PRINTK_INC)
 #include CONFIG_EARLY_PRINTK_INC
 #endif
@@ -94,7 +97,7 @@
  *   r9  - paddr(start)
  *   r10 - phys offset
  *   r11 - UART address
- *   r12 -
+ *   r12 - Temporary mapping created
  *   r13 - SP
  *   r14 - LR
  *   r15 - PC
@@ -445,6 +448,9 @@ ENDPROC(cpu_init)
  *   r9 : paddr(start)
  *   r10: phys offset
  *
+ * Output:
+ *   r12: Was a temporary mapping created?
+ *
  * Clobbers r0 - r4, r6
  *
  * Register usage within this function:
@@ -484,7 +490,11 @@ create_page_tables:
         /*
          * Setup the 1:1 mapping so we can turn the MMU on. Note that
          * only the first page of Xen will be part of the 1:1 mapping.
+         *
+         * In all the cases, we will link boot_third_id. So create the
+         * mapping in advance.
          */
+        create_mapping_entry boot_third_id, r9, r9
 
         /*
          * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
@@ -501,8 +511,7 @@ create_page_tables:
         /*
          * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
          * 1:1 mapping will use its own set of page-tables from the
-         * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to handle
-         * it.
+         * third level.
          */
         get_table_slot r1, r9, 2     /* r1 := second slot */
         cmp   r1, #XEN_SECOND_SLOT
@@ -513,13 +522,33 @@ create_page_tables:
 link_from_second_id:
         create_table_entry boot_second_id, boot_third_id, r9, 2
 link_from_third_id:
-        create_mapping_entry boot_third_id, r9, r9
+        /* Good news, we are not clashing with Xen virtual mapping */
+        mov   r12, #0                /* r12 := temporary mapping not created */
         mov   pc, lr
 
 virtphys_clash:
-        /* Identity map clashes with boot_third, which we cannot handle yet */
-        PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
-        b     fail
+        /*
+         * The identity map clashes with boot_third. Link boot_first_id and
+         * map Xen to a temporary mapping. See switch_to_runtime_mapping
+         * for more details.
+         */
+        PRINT("- Virt and Phys addresses clash  -\r\n")
+        PRINT("- Create temporary mapping -\r\n")
+
+        /*
+         * This will override the link to boot_second in XEN_FIRST_SLOT.
+         * The page-tables are not live yet. So no need to use
+         * break-before-make.
+         */
+        create_table_entry boot_pgtable, boot_second_id, r9, 1
+        create_table_entry boot_second_id, boot_third_id, r9, 2
+
+        /* Map boot_second (covers Xen mappings) to the temporary 1st slot */
+        mov_w r0, TEMPORARY_XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, 1
+
+        mov   r12, #1                /* r12 := temporary mapping created */
+        mov   pc, lr
 ENDPROC(create_page_tables)
 
 /*
@@ -528,9 +557,10 @@ ENDPROC(create_page_tables)
  *
  * Inputs:
  *   r9 : paddr(start)
+ *  r12 : Was the temporary mapping created?
  *   lr : Virtual address to return to
  *
- * Clobbers r0 - r3
+ * Clobbers r0 - r5
  */
 enable_mmu:
         PRINT("- Turning on paging -\r\n")
@@ -558,21 +588,79 @@ enable_mmu:
          * The MMU is turned on and we are in the 1:1 mapping. Switch
          * to the runtime mapping.
          */
-        mov_w r0, 1f
-        mov   pc, r0
+        mov   r5, lr                /* Save LR before overwriting it */
+        mov_w lr, 1f                /* Virtual address in the runtime mapping */
+        b     switch_to_runtime_mapping
 1:
+        mov   lr, r5                /* Restore LR */
         /*
-         * The 1:1 map may clash with other parts of the Xen virtual memory
-         * layout. As it is not used anymore, remove it completely to
-         * avoid having to worry about replacing existing mapping
-         * afterwards.
+         * At this point, either the 1:1 map or the temporary mapping
+         * will be present. The former may clash with other parts of the
+         * Xen virtual memory layout. As neither of them is used
+         * anymore, remove them completely to avoid having to worry
+         * about replacing existing mappings afterwards.
          *
          * On return this will jump to the virtual address requested by
          * the caller.
          */
-        b     remove_identity_mapping
+        teq   r12, #0
+        beq   remove_identity_mapping
+        b     remove_temporary_mapping
 ENDPROC(enable_mmu)
 
+/*
+ * Switch to the runtime mapping. The logic depends on whether the
+ * runtime virtual region is clashing with the physical address
+ *
+ *  - If it is not clashing, we can directly jump to the address in
+ *    the runtime mapping.
+ *  - If it is clashing, create_page_tables() would have mapped Xen to
+ *    a temporary virtual address. We need to switch to the temporary
+ *    mapping so we can remove the identity mapping and map Xen at the
+ *    correct position.
+ *
+ * Inputs
+ *    r9: paddr(start)
+ *   r12: Was a temporary mapping created?
+ *    lr: Address in the runtime mapping to jump to
+ *
+ * Clobbers r0 - r4
+ */
+switch_to_runtime_mapping:
+        /*
+         * Jump to the runtime mapping if the virt and phys are not
+         * clashing
+         */
+        teq   r12, #0
+        beq   ready_to_switch
+
+        /* We are still in the 1:1 mapping. Jump to the temporary Virtual address. */
+        mov_w r0, 1f
+        add   r0, r0, #XEN_TEMPORARY_OFFSET /* r0 := address in temporary mapping */
+        mov   pc, r0
+
+1:
+        /* Remove boot_second_id */
+        mov   r2, #0
+        mov   r3, #0
+        adr_l r0, boot_pgtable
+        get_table_slot r1, r9, 1            /* r1 := first slot */
+        lsl   r1, r1, #3                    /* r1 := first slot offset */
+        strd  r2, r3, [r0, r1]
+
+        flush_xen_tlb_local r0
+
+        /* Map boot_second into boot_pgtable */
+        mov_w r0, XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, 1
+
+        /* Ensure any page table updates are visible before continuing */
+        dsb   nsh
+
+ready_to_switch:
+        mov   pc, lr
+ENDPROC(switch_to_runtime_mapping)
+
 /*
  * Remove the 1:1 map from the page-tables. It is not easy to keep track
  * where the 1:1 map was mapped, so we will look for the top-level entry
@@ -618,6 +706,27 @@ identity_mapping_removed:
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
 
+/*
+ * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
+ *
+ * Clobbers r0 - r1
+ */
+remove_temporary_mapping:
+        /* r2:r3 := invalid page-table entry */
+        mov   r2, #0
+        mov   r3, #0
+
+        adr_l r0, boot_pgtable
+        mov_w r1, TEMPORARY_XEN_VIRT_START
+        get_table_slot r1, r1, 1     /* r1 := first slot */
+        lsl   r1, r1, #3             /* r1 := first slot offset */
+        strd  r2, r3, [r0, r1]
+
+        flush_xen_tlb_local r0
+
+        mov  pc, lr
+ENDPROC(remove_temporary_mapping)
+
 /*
  * Map the UART in the fixmap (when earlyprintk is used) and hook the
  * fixmap table in the page tables.
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 87851e677701..6c1b762e976d 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -148,6 +148,20 @@
 /* Number of domheap pagetable pages required at the second level (2MB mappings) */
 #define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
 
+/*
+ * The temporary area overlaps with the domheap area. This may
+ * be used to create an alias of the first slot containing Xen mappings
+ * when turning on/off the MMU.
+ */
+#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
+
+/* Calculate the address in the temporary area */
+#define TEMPORARY_AREA_ADDR(addr)                           \
+     (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                    \
+      (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
+
+#define TEMPORARY_XEN_VIRT_START    TEMPORARY_AREA_ADDR(XEN_VIRT_START)
+
 #else /* ARM_64 */
 
 #define SLOT0_ENTRY_BITS  39
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0fc6f2992dd1..9ebc2d0f609e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -167,6 +167,9 @@ static void __init __maybe_unused build_assertions(void)
 #define CHECK_SAME_SLOT(level, virt1, virt2) \
     BUILD_BUG_ON(level##_table_offset(virt1) != level##_table_offset(virt2))
 
+#define CHECK_DIFFERENT_SLOT(level, virt1, virt2) \
+    BUILD_BUG_ON(level##_table_offset(virt1) == level##_table_offset(virt2))
+
 #ifdef CONFIG_ARM_64
     CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, FIXMAP_ADDR(0));
     CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, BOOT_FDT_VIRT_START);
@@ -174,7 +177,18 @@ static void __init __maybe_unused build_assertions(void)
     CHECK_SAME_SLOT(first, XEN_VIRT_START, FIXMAP_ADDR(0));
     CHECK_SAME_SLOT(first, XEN_VIRT_START, BOOT_FDT_VIRT_START);
 
+    /*
+     * For arm32, the temporary mapping will re-use the domheap
+     * first slot and the second slots will match.
+     */
+#ifdef CONFIG_ARM_32
+    CHECK_SAME_SLOT(first, TEMPORARY_XEN_VIRT_START, DOMHEAP_VIRT_START);
+    CHECK_DIFFERENT_SLOT(first, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START);
+    CHECK_SAME_SLOT(second, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START);
+#endif
+
 #undef CHECK_SAME_SLOT
+#undef CHECK_DIFFERENT_SLOT
 }
 
 static lpae_t *xen_map_table(mfn_t mfn)
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:21:15 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, Xenia Ragiadakou <burzalodowa@gmail.com>, George
 Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 2/2] x86/shadow: further correct MMIO handling in
 _sh_propagate()
Date: Fri, 13 Jan 2023 10:20:50 +0000
Message-ID: <18b606fd-e354-b43a-fa5c-bbdede7b7091@citrix.com>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <b05d3911-a6c7-68f1-0e48-255630ab6516@suse.com>
In-Reply-To: <b05d3911-a6c7-68f1-0e48-255630ab6516@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-ID: <FD56D441B254994AB99AFB3AF1A8F01F@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 13/01/2023 8:48 am, Jan Beulich wrote:
> While c61a6f74f80e ("x86: enforce consistent cachability of MMIO
> mappings") correctly converted one !mfn_valid() check there, two others
> were wrongly left untouched: Both cachability control and log-dirty
> tracking ought to be uniformly handled/excluded for all (non-)MMIO
> ranges, not just ones qualifiable by mfn_valid().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:23:08 2023
Message-ID: <acc496fc-ee23-1836-1a15-e549e462773e@suse.com>
Date: Fri, 13 Jan 2023 11:23:00 +0100
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, Xenia Ragiadakou <burzalodowa@gmail.com>,
 George Dunlap <george.dunlap@citrix.com>, Paul Durrant <paul@xen.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
In-Reply-To: <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

(I missed CCing Paul on the original submission)

On 13.01.2023 09:47, Jan Beulich wrote:
> First of all the variable is meaningful only when an IOMMU is in use for
> a guest. Qualify the check accordingly, like done elsewhere. Furthermore
> the controlling command line option is supposed to take effect on VT-d
> only. Since command line parsing happens before we know whether we're
> going to use VT-d, force the variable back to set when instead running
> with AMD IOMMU(s).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I first considered adding the extra check to the outermost
> enclosing if(), but I guess that would break the (questionable) case of
> assigning MMIO ranges directly by address. The way it's done now also
> better fits the existing checks, in particular the ones in p2m-ept.c.
> 
> Note that the #ifndef is put there in anticipation of iommu_snoop
> becoming a #define when !IOMMU_INTEL (see
> https://lists.xen.org/archives/html/xen-devel/2023-01/msg00103.html
> and replies).
> 
> In _sh_propagate() I'm further puzzled: The iomem_access_permitted()
> certainly suggests very bad things could happen if it returned false
> (i.e. in the implicit "else" case). The assumption looks to be that no
> bad "target_mfn" can make it there. But overall things might end up
> looking more sane (and being cheaper) when simply using "mmio_mfn"
> instead.
> 
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -571,7 +571,7 @@ _sh_propagate(struct vcpu *v,
>                              gfn_to_paddr(target_gfn),
>                              mfn_to_maddr(target_mfn),
>                              X86_MT_UC);
> -                else if ( iommu_snoop )
> +                else if ( is_iommu_enabled(d) && iommu_snoop )
>                      sflags |= pat_type_2_pte_flags(X86_MT_WB);
>                  else
>                      sflags |= get_pat_flags(v,
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>      if ( !acpi_disabled )
>      {
>          ret = acpi_dmar_init();
> +
> +#ifndef iommu_snoop
> +        /* A command line override for snoop control affects VT-d only. */
> +        if ( ret )
> +            iommu_snoop = true;
> +#endif
> +
>          if ( ret == -ENODEV )
>              ret = acpi_ivrs_init();
>      }
> 
> 
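The `#ifndef iommu_snoop` guard above relies on a preprocessor trick: once the variable becomes a `#define` for !IOMMU_INTEL builds (per the linked discussion), the guarded run-time write drops out of the build entirely. A hedged C sketch of that pattern — the `CONFIG_HAS_VTD` symbol and the function shape here are illustrative, not Xen's actual code:

```c
#include <assert.h>
#include <stdbool.h>

#ifdef CONFIG_HAS_VTD
/* VT-d present: a real variable the command line may clear. */
static bool iommu_snoop = true;
#else
/* VT-d compiled out: collapse to a constant; writes make no sense. */
#define iommu_snoop true
#endif

/* Sketch of the acpi_iommu_init() hunk: if VT-d initialisation failed
 * (dmar_init_ret != 0), force the tunable back on, since the command
 * line override is meant to affect VT-d only. */
static void reset_snoop_if_not_vtd(int dmar_init_ret)
{
#ifndef iommu_snoop
    if ( dmar_init_ret )
        iommu_snoop = true;
#else
    (void)dmar_init_ret;       /* constant: nothing to restore */
#endif
}
```

When `iommu_snoop` is a macro, the `#ifndef` branch is never compiled, so no stray assignment to a constant can slip in.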



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:30:23 2023
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary mapping
Date: Fri, 13 Jan 2023 10:11:32 +0000
Message-Id: <20230113101136.479-11-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, the temporary mapping is only used when the virtual
runtime region of Xen clashes with the physical region.

In follow-up patches, we will rework how secondary CPU bring-up works
and it will be convenient to use the fixmap area for accessing
the root page-table (it is per-cpu).

Rework the code to use the temporary mapping whenever the Xen physical
address does not overlap with the temporary mapping area.

This also has the advantage of simplifying the logic to identity-map
Xen.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----

Even though this patch rewrites part of the previous patch, I decided
to keep them separate to help the review.

The follow-up patches are still in draft at the moment. I still haven't
found a way to split them nicely without requiring too much more work
on the coloring side.

I have provided some medium-term goals in the cover letter.
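To illustrate the decision the reworked head.S makes: whether the temporary mapping can be used depends only on which first-level (1GB) slot Xen's load address occupies. A rough C model follows — the slot widths match arm32 LPAE, but the value of TEMPORARY_AREA_FIRST_SLOT and the helper names are illustrative, not the real constants from Xen's headers:

```c
#include <assert.h>
#include <stdint.h>

/* Rough model of head.S's get_table_slot: with arm32 LPAE, a
 * first-level entry covers 1GB (bits 31:30) and a second-level
 * entry covers 2MB (bits 29:21). */
static unsigned int get_table_slot(uint32_t paddr, int level)
{
    return (level == 1) ? (paddr >> 30) : ((paddr >> 21) & 0x1ff);
}

/* Illustrative value only: assume the temporary area sits in
 * first-level slot 0. */
#define TEMPORARY_AREA_FIRST_SLOT 0

/*
 * Mirrors the "cmp r1, #TEMPORARY_AREA_FIRST_SLOT; bne
 * use_temporary_mapping" logic: the temporary mapping is usable
 * exactly when Xen's load address does NOT fall in the slot the
 * temporary area itself occupies.
 */
static int can_use_temporary_mapping(uint32_t xen_paddr)
{
    return get_table_slot(xen_paddr, 1) != TEMPORARY_AREA_FIRST_SLOT;
}
```

So under this assumed layout, a Xen loaded in the second gigabyte can use the temporary mapping, while one loaded in the first gigabyte (clashing with the temporary area) cannot.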

    Changes in v3:
        - Resolve conflicts after switching from "ldr rX, <label>" to
          "mov_w rX, <label>" in a previous patch

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 82 +++++++--------------------------------
 1 file changed, 15 insertions(+), 67 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 3800efb44169..ce858e9fc4da 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -459,7 +459,6 @@ ENDPROC(cpu_init)
 create_page_tables:
         /* Prepare the page-tables for mapping Xen */
         mov_w r0, XEN_VIRT_START
-        create_table_entry boot_pgtable, boot_second, r0, 1
         create_table_entry boot_second, boot_third, r0, 2
 
         /* Setup boot_third: */
@@ -479,67 +478,37 @@ create_page_tables:
         cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
         blo   1b
 
-        /*
-         * If Xen is loaded at exactly XEN_VIRT_START then we don't
-         * need an additional 1:1 mapping, the virtual mapping will
-         * suffice.
-         */
-        cmp   r9, #XEN_VIRT_START
-        moveq pc, lr
-
         /*
          * Setup the 1:1 mapping so we can turn the MMU on. Note that
          * only the first page of Xen will be part of the 1:1 mapping.
-         *
-         * In all the cases, we will link boot_third_id. So create the
-         * mapping in advance.
          */
+        create_table_entry boot_pgtable, boot_second_id, r9, 1
+        create_table_entry boot_second_id, boot_third_id, r9, 2
         create_mapping_entry boot_third_id, r9, r9
 
         /*
-         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
-         * then the 1:1 mapping will use its own set of page-tables from
-         * the second level.
+         * Find the first slot used. If the slot is not the same
+         * as XEN_TMP_FIRST_SLOT, then we will want to switch
+         * to the temporary mapping before jumping to the runtime
+         * virtual mapping.
          */
         get_table_slot r1, r9, 1     /* r1 := first slot */
-        cmp   r1, #XEN_FIRST_SLOT
-        beq   1f
-        create_table_entry boot_pgtable, boot_second_id, r9, 1
-        b     link_from_second_id
-
-1:
-        /*
-         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
-         * 1:1 mapping will use its own set of page-tables from the
-         * third level.
-         */
-        get_table_slot r1, r9, 2     /* r1 := second slot */
-        cmp   r1, #XEN_SECOND_SLOT
-        beq   virtphys_clash
-        create_table_entry boot_second, boot_third_id, r9, 2
-        b     link_from_third_id
+        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
+        bne   use_temporary_mapping
 
-link_from_second_id:
-        create_table_entry boot_second_id, boot_third_id, r9, 2
-link_from_third_id:
-        /* Good news, we are not clashing with Xen virtual mapping */
+        mov_w r0, XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, 1
         mov   r12, #0                /* r12 := temporary mapping not created */
         mov   pc, lr
 
-virtphys_clash:
+use_temporary_mapping:
         /*
-         * The identity map clashes with boot_third. Link boot_first_id and
-         * map Xen to a temporary mapping. See switch_to_runtime_mapping
-         * for more details.
+         * The identity mapping is not using the first slot
+         * TEMPORARY_AREA_FIRST_SLOT. Create a temporary mapping.
+         * See switch_to_runtime_mapping for more details.
          */
-        PRINT("- Virt and Phys addresses clash  -\r\n")
         PRINT("- Create temporary mapping -\r\n")
 
-        /*
-         * This will override the link to boot_second in XEN_FIRST_SLOT.
-         * The page-tables are not live yet. So no need to use
-         * break-before-make.
-         */
         create_table_entry boot_pgtable, boot_second_id, r9, 1
         create_table_entry boot_second_id, boot_third_id, r9, 2
 
@@ -675,33 +644,12 @@ remove_identity_mapping:
         /* r2:r3 := invalid page-table entry */
         mov   r2, #0x0
         mov   r3, #0x0
-        /*
-         * Find the first slot used. Remove the entry for the first
-         * table if the slot is not XEN_FIRST_SLOT.
-         */
+        /* Find the first slot used and remove it */
         get_table_slot r1, r9, 1     /* r1 := first slot */
-        cmp   r1, #XEN_FIRST_SLOT
-        beq   1f
-        /* It is not in slot 0, remove the entry */
         mov_w r0, boot_pgtable       /* r0 := root table */
         lsl   r1, r1, #3             /* r1 := Slot offset */
         strd  r2, r3, [r0, r1]
-        b     identity_mapping_removed
-
-1:
-        /*
-         * Find the second slot used. Remove the entry for the first
-         * table if the slot is not XEN_SECOND_SLOT.
-         */
-        get_table_slot r1, r9, 2     /* r1 := second slot */
-        cmp   r1, #XEN_SECOND_SLOT
-        beq   identity_mapping_removed
-        /* It is not in slot 1, remove the entry */
-        mov_w r0, boot_second        /* r0 := second table */
-        lsl   r1, r1, #3             /* r1 := Slot offset */
-        strd  r2, r3, [r0, r1]
 
-identity_mapping_removed:
         flush_xen_tlb_local r0
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:30:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477125.739706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHKH-0004Ie-GK; Fri, 13 Jan 2023 10:30:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477125.739706; Fri, 13 Jan 2023 10:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHKH-0004GB-9f; Fri, 13 Jan 2023 10:30:21 +0000
Received: by outflank-mailman (input) for mailman id 477125;
 Fri, 13 Jan 2023 10:30:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGHKF-0003z0-4K
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:30:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGHKE-0007E5-TM; Fri, 13 Jan 2023 10:30:18 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2R-0005Ty-73; Fri, 13 Jan 2023 10:11:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=HfxRMuStf8J9x3+8lAwIIz4sUR/UeTxH+PxqDsWAcB8=; b=rN7tqu6psH/l5Pw0wNl+W1h5i6
	0OeDJPl+XRzxilNNoF4tyjJhprYUoHHVjsS1EtRqoU3pBvFHykYb1r1XuGFx4h/H4rWLQIY8g290V
	EqPkRYcw5B1gy0JZHPQBXiZhm116/ZtVqdAcy48MrgvPuspa4Rkmz3inklmb67q923PE=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
Date: Fri, 13 Jan 2023 10:11:35 +0000
Message-Id: <20230113101136.479-14-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, switch_ttbr() switches the TTBR whilst the MMU is
still on.

Switching TTBR is like replacing existing mappings with new ones. So
we need to follow the break-before-make sequence.

In this case, it means the MMU needs to be switched off while the
TTBR is updated. In order to disable the MMU, we need to first
jump to an identity mapping.

Rename switch_ttbr() to switch_ttbr_id() and create a helper on
top to temporarily map the identity mapping and call switch_ttbr_id()
via the identity address.

switch_ttbr_id() is now reworked to temporarily turn off the MMU
before updating the TTBR.

We also need to make sure the helper switch_ttbr_id() is part of the
identity mapping. So move _end_boot past it.

The arm32 code will use a different approach. So this issue is for now
only resolved on arm64.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v4:
        - Don't modify setup_pagetables() as we don't handle arm32.
        - Move the clearing of the boot page tables in an earlier patch
        - Fix the numbering

    Changes in v2:
        - Remove the arm32 changes. This will be addressed differently
        - Re-instate the instruction cache flush. This is not strictly
          necessary but is kept for safety.
        - Use "dsb ish"  rather than "dsb sy".


    TODO:
        * Handle the case where the runtime Xen is loaded at a different
          position for cache coloring. This will be dealt with separately.
---
 xen/arch/arm/arm64/head.S     | 50 +++++++++++++++++++++++------------
 xen/arch/arm/arm64/mm.c       | 30 +++++++++++++++++++++
 xen/arch/arm/include/asm/mm.h |  2 ++
 xen/arch/arm/mm.c             |  2 --
 4 files changed, 65 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 663f5813b12e..5efd442b24af 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -816,30 +816,46 @@ ENDPROC(fail)
  * Switch TTBR
  *
  * x0    ttbr
- *
- * TODO: This code does not comply with break-before-make.
  */
-ENTRY(switch_ttbr)
-        dsb   sy                     /* Ensure the flushes happen before
-                                      * continuing */
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
+ENTRY(switch_ttbr_id)
+        /* 1) Ensure any previous read/write have completed */
+        dsb    ish
+        isb
+
+        /* 2) Turn off MMU */
+        mrs    x1, SCTLR_EL2
+        bic    x1, x1, #SCTLR_Axx_ELx_M
+        msr    SCTLR_EL2, x1
+        isb
+
+        /*
+         * 3) Flush the TLBs.
+         * See asm/arm64/flushtlb.h for the explanation of the sequence.
+         */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        /* 4) Update the TTBR */
+        msr   TTBR0_EL2, x0
         isb
 
-        msr    TTBR0_EL2, x0
+        /*
+         * 5) Flush I-cache
+         * This should not be necessary but it is kept for safety.
+         */
+        ic     iallu
+        isb
 
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
+        /* 6) Turn on the MMU */
+        mrs   x1, SCTLR_EL2
+        orr   x1, x1, #SCTLR_Axx_ELx_M  /* Enable MMU */
+        msr   SCTLR_EL2, x1
         isb
 
         ret
-ENDPROC(switch_ttbr)
+ENDPROC(switch_ttbr_id)
 
 #ifdef CONFIG_EARLY_PRINTK
 /*
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
index 798ae93ad73c..2ede4e75ae33 100644
--- a/xen/arch/arm/arm64/mm.c
+++ b/xen/arch/arm/arm64/mm.c
@@ -120,6 +120,36 @@ void update_identity_mapping(bool enable)
     BUG_ON(rc);
 }
 
+extern void switch_ttbr_id(uint64_t ttbr);
+
+typedef void (switch_ttbr_fn)(uint64_t ttbr);
+
+void __init switch_ttbr(uint64_t ttbr)
+{
+    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
+    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;
+    lpae_t pte;
+
+    /* Enable the identity mapping in the boot page tables */
+    update_identity_mapping(true);
+    /* Enable the identity mapping in the runtime page tables */
+    pte = pte_of_xenaddr((vaddr_t)switch_ttbr_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    pte.pt.ro = 1;
+    write_pte(&xen_third_id[third_table_offset(id_addr)], pte);
+
+    /* Switch TTBR */
+    fn(ttbr);
+
+    /*
+     * Disable the identity mapping in the runtime page tables.
+     * Note it is not necessary to disable it in the boot page tables
+     * because they are not going to be used by this CPU anymore.
+     */
+    update_identity_mapping(false);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 68adcac9fa8d..bff6923f3ea9 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -196,6 +196,8 @@ extern unsigned long total_pages;
 extern void setup_pagetables(unsigned long boot_phys_offset);
 /* Map FDT in boot pagetable */
 extern void *early_fdt_map(paddr_t fdt_paddr);
+/* Switch to the new root page-tables */
+extern void switch_ttbr(uint64_t ttbr);
 /* Remove early mappings */
 extern void remove_early_mappings(void);
 /* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr to the
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 39e0d9e03c9c..b9c698088b25 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -476,8 +476,6 @@ static void xen_pt_enforce_wnx(void)
     flush_xen_tlb_local();
 }
 
-extern void switch_ttbr(uint64_t ttbr);
-
 /* Clear a translation table and clean & invalidate the cache */
 static void clear_table(void *table)
 {
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:30:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477122.739689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHKG-0003zL-EX; Fri, 13 Jan 2023 10:30:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477122.739689; Fri, 13 Jan 2023 10:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHKG-0003zE-Bt; Fri, 13 Jan 2023 10:30:20 +0000
Received: by outflank-mailman (input) for mailman id 477122;
 Fri, 13 Jan 2023 10:30:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGHKE-0003yk-Sv
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:30:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGHKE-0007Ds-JN; Fri, 13 Jan 2023 10:30:18 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2O-0005Ty-S2; Fri, 13 Jan 2023 10:11:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=IpZOOt3yK+fcKusl8jA4izYoHf69veEzI6Yhzry45P0=; b=sX1xkm2XfBzxt2Wqt7t80Sd1c/
	Yq3uLvAD2snqrNtd1i7Eh80QrqQXLylyepuxzgNY4S7bqCYWMyKSKStrnEi+u0AI6y9c/zJXTtaEk
	9flh27K8T95Bt3xUn2y8R2lV5Ro2hopCXmn0cQejsH4F/plP6AsD9oyVA7jyaIIyZsiE=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 11/14] xen/arm64: Rework the memory layout
Date: Fri, 13 Jan 2023 10:11:33 +0000
Message-Id: <20230113101136.479-12-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Xen is currently not fully compliant with the Arm Arm because it will
switch the TTBR with the MMU on.

In order to be compliant, we need to disable the MMU before
switching the TTBR. The implication is the page-tables should
contain an identity mapping of the code switching the TTBR.

In most cases we expect Xen to be loaded in low memory. I am aware
of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
To give us some slack, consider that Xen may be loaded in the first 2TB
of the physical address space.

The memory layout is reshuffled to keep the first four slots of the zeroeth
level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
tweak of the boot code because XEN_VIRT_START cannot be used as an
immediate.

This reshuffle will make it trivial to create a 1:1 mapping when Xen is
loaded below 2TB.

Signed-off-by: Julien Grall <jgrall@amazon.com>
----
    Changes in v4:
        - Correct the documentation
        - The start address is 2TB, so slot0 is 4 not 2.

    Changes in v2:
        - Reword the commit message
        - Load Xen at 2TB + 2MB
        - Update the documentation to reflect the new layout
---
 xen/arch/arm/arm64/head.S         |  3 ++-
 xen/arch/arm/include/asm/config.h | 35 ++++++++++++++++++++-----------
 xen/arch/arm/mm.c                 | 11 +++++-----
 3 files changed, 31 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 4a3f87117c83..663f5813b12e 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -607,7 +607,8 @@ create_page_tables:
          * need an additional 1:1 mapping, the virtual mapping will
          * suffice.
          */
-        cmp   x19, #XEN_VIRT_START
+        ldr   x0, =XEN_VIRT_START
+        cmp   x19, x0
         bne   1f
         ret
 1:
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 6c1b762e976d..c5d407a7495f 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -72,15 +72,12 @@
 #include <xen/page-size.h>
 
 /*
- * Common ARM32 and ARM64 layout:
+ * ARM32 layout:
  *   0  -   2M   Unmapped
  *   2M -   4M   Xen text, data, bss
  *   4M -   6M   Fixmap: special-purpose 4K mapping slots
  *   6M -  10M   Early boot mapping of FDT
- *   10M - 12M   Livepatch vmap (if compiled in)
- *
- * ARM32 layout:
- *   0  -  12M   <COMMON>
+ *  10M -  12M   Livepatch vmap (if compiled in)
  *
  *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
  * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
@@ -90,14 +87,22 @@
  *   2G -   4G   Domheap: on-demand-mapped
  *
  * ARM64 layout:
- * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
- *   0  -  12M   <COMMON>
+ * 0x0000000000000000 - 0x00001fffffffffff (2TB, L0 slots [0..3])
+ *  Reserved to identity map Xen
+ *
+ * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
+ *  (Relative offsets)
+ *   0  -   2M   Unmapped
+ *   2M -   4M   Xen text, data, bss
+ *   4M -   6M   Fixmap: special-purpose 4K mapping slots
+ *   6M -  10M   Early boot mapping of FDT
+ *  10M -  12M   Livepatch vmap (if compiled in)
  *
  *   1G -   2G   VMAP: ioremap and early_ioremap
  *
  *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
  *
- * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
+ * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [5..255])
  *  Unused
  *
  * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
@@ -107,7 +112,17 @@
  *  Unused
  */
 
+#ifdef CONFIG_ARM_32
 #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
+#else
+
+#define SLOT0_ENTRY_BITS  39
+#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
+#define SLOT0_ENTRY_SIZE  SLOT0(1)
+
+#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
+#endif
+
 #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
 
 #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
@@ -164,10 +179,6 @@
 
 #else /* ARM_64 */
 
-#define SLOT0_ENTRY_BITS  39
-#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
-#define SLOT0_ENTRY_SIZE  SLOT0(1)
-
 #define VMAP_VIRT_START  GB(1)
 #define VMAP_VIRT_SIZE   GB(1)
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 9ebc2d0f609e..0cf7ad4f0e8c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -153,7 +153,7 @@ static void __init __maybe_unused build_assertions(void)
 #endif
     /* Page table structure constraints */
 #ifdef CONFIG_ARM_64
-    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
+    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START) < 2);
 #endif
     BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
 #ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
@@ -496,10 +496,11 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     phys_offset = boot_phys_offset;
 
 #ifdef CONFIG_ARM_64
-    p = (void *) xen_pgtable;
-    p[0] = pte_of_xenaddr((uintptr_t)xen_first);
-    p[0].pt.table = 1;
-    p[0].pt.xn = 0;
+    pte = pte_of_xenaddr((uintptr_t)xen_first);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] = pte;
+
     p = (void *) xen_first;
 #else
     p = (void *) cpu0_pgtable;
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:30:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477124.739705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHKH-0004B3-6k; Fri, 13 Jan 2023 10:30:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477124.739705; Fri, 13 Jan 2023 10:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHKG-00049i-U6; Fri, 13 Jan 2023 10:30:20 +0000
Received: by outflank-mailman (input) for mailman id 477124;
 Fri, 13 Jan 2023 10:30:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGHKF-0003yp-0f
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:30:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGHKE-0007E0-Qu; Fri, 13 Jan 2023 10:30:18 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2S-0005Ty-Cg; Fri, 13 Jan 2023 10:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=8i3ZQPALAa1NIhy5GgDUeuudwiDWrPE6WnHwf0+BofU=; b=iOSV4qKmc7M0vSiiTlP6CwY44V
	bI/hTKMjBVsiXNF/jcVfZryUW88GAPRPJf8rBbfz9XsfWRsCKa2uk70gOXFbzSSh81rVh6L058Tux
	E5HLb0huVqe5D1qnV90rwx7BYuqHi/vHgT/CN+5h8q2NJXoPdm0kJonT5hUMagnX/3V0=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the runtime page-tables
Date: Fri, 13 Jan 2023 10:11:36 +0000
Message-Id: <20230113101136.479-15-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Switching TTBR while the MMU is on is not safe. Now that the identity
mapping will not clash with the rest of the memory layout, we can avoid
creating temporary page-tables every time a CPU is brought up.

The arm32 code will use a different approach. So this issue is for now
only resolved on arm64.

Signed-off-by: Julien Grall <jgrall@amazon.com>
----
    Changes in v4:
        - Somehow I forgot to send it in v3. So re-include it.

    Changes in v2:
        - Remove arm32 code
---
 xen/arch/arm/arm32/smpboot.c   |  4 ++++
 xen/arch/arm/arm64/head.S      | 29 +++++++++--------------------
 xen/arch/arm/arm64/smpboot.c   | 15 ++++++++++++++-
 xen/arch/arm/include/asm/smp.h |  1 +
 xen/arch/arm/smpboot.c         |  1 +
 5 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/arm32/smpboot.c b/xen/arch/arm/arm32/smpboot.c
index e7368665d50d..518e9f9c7e70 100644
--- a/xen/arch/arm/arm32/smpboot.c
+++ b/xen/arch/arm/arm32/smpboot.c
@@ -21,6 +21,10 @@ int arch_cpu_up(int cpu)
     return platform_cpu_up(cpu);
 }
 
+void arch_cpu_up_finish(void)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 5efd442b24af..a61b4d3c2738 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -308,6 +308,7 @@ real_start_efi:
         bl    check_cpu_mode
         bl    cpu_init
         bl    create_page_tables
+        load_paddr x0, boot_pgtable
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
@@ -365,29 +366,14 @@ GLOBAL(init_secondary)
 #endif
         bl    check_cpu_mode
         bl    cpu_init
-        bl    create_page_tables
+        load_paddr x0, init_ttbr
+        ldr   x0, [x0]
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
         ldr   x0, =secondary_switched
         br    x0
 secondary_switched:
-        /*
-         * Non-boot CPUs need to move on to the proper pagetables, which were
-         * setup in init_secondary_pagetables.
-         *
-         * XXX: This is not compliant with the Arm Arm.
-         */
-        ldr   x4, =init_ttbr         /* VA of TTBR0_EL2 stashed by CPU 0 */
-        ldr   x4, [x4]               /* Actual value */
-        dsb   sy
-        msr   TTBR0_EL2, x4
-        dsb   sy
-        isb
-        tlbi  alle2
-        dsb   sy                     /* Ensure completion of TLB flush */
-        isb
-
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
         ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
@@ -672,9 +658,13 @@ ENDPROC(create_page_tables)
  * mapping. In other word, the caller is responsible to switch to the runtime
  * mapping.
  *
- * Clobbers x0 - x3
+ * Inputs:
+ *   x0 : Physical address of the page tables.
+ *
+ * Clobbers x0 - x4
  */
 enable_mmu:
+        mov   x4, x0
         PRINT("- Turning on paging -\r\n")
 
         /*
@@ -685,8 +675,7 @@ enable_mmu:
         dsb   nsh
 
         /* Write Xen's PT's paddr into TTBR0_EL2 */
-        load_paddr x0, boot_pgtable
-        msr   TTBR0_EL2, x0
+        msr   TTBR0_EL2, x4
         isb
 
         mrs   x0, SCTLR_EL2
diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
index 694fbf67e62a..9637f424699e 100644
--- a/xen/arch/arm/arm64/smpboot.c
+++ b/xen/arch/arm/arm64/smpboot.c
@@ -106,10 +106,23 @@ int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
 
 int arch_cpu_up(int cpu)
 {
+    int rc;
+
     if ( !smp_enable_ops[cpu].prepare_cpu )
         return -ENODEV;
 
-    return smp_enable_ops[cpu].prepare_cpu(cpu);
+    update_identity_mapping(true);
+
+    rc = smp_enable_ops[cpu].prepare_cpu(cpu);
+    if ( rc )
+        update_identity_mapping(false);
+
+    return rc;
+}
+
+void arch_cpu_up_finish(void)
+{
+    update_identity_mapping(false);
 }
 
 /*
diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index 8133d5c29572..a37ca55bff2c 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -25,6 +25,7 @@ extern void noreturn stop_cpu(void);
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
 extern int arch_cpu_up(int cpu);
+extern void arch_cpu_up_finish(void);
 
 int cpu_up_send_sgi(int cpu);
 
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae2286906..4a89b3a8345b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -500,6 +500,7 @@ int __cpu_up(unsigned int cpu)
     init_data.cpuid = ~0;
     smp_up_cpu = MPIDR_INVALID;
     clean_dcache(smp_up_cpu);
+    arch_cpu_up_finish();
 
     if ( !cpu_online(cpu) )
     {
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:30:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477126.739717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHKH-0004QH-VO; Fri, 13 Jan 2023 10:30:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477126.739717; Fri, 13 Jan 2023 10:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHKH-0004Mp-Nw; Fri, 13 Jan 2023 10:30:21 +0000
Received: by outflank-mailman (input) for mailman id 477126;
 Fri, 13 Jan 2023 10:30:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGHKF-0003z7-7r
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:30:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGHKE-0007EA-VJ; Fri, 13 Jan 2023 10:30:18 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGH2Q-0005Ty-1Q; Fri, 13 Jan 2023 10:11:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=N8uZe3IORrYF4V/diPY6pxEUwpQmQNhoC9Ue9CL4ohs=; b=a4DPAQKyLTK6bQDVBtjFoi0lax
	XlmMfKprQqWrrbLbdfH+G7kAs1eZuBXcEFPVGF2+JNCugb++p2o/VzDmOISwKOerHCyF5x9Wgrt8r
	PXox5WyhgC5XDy0yAdaQrbTjX7767OGCdJuIar1mRp+uOV9YL3Ghagc6Q6iGnxgm+0HQ=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping
Date: Fri, 13 Jan 2023 10:11:34 +0000
Message-Id: <20230113101136.479-13-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230113101136.479-1-julien@xen.org>
References: <20230113101136.479-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

In follow-up patches we will need to have part of Xen identity mapped in
order to safely switch the TTBR.

On some platforms, the identity mapping may have to start at address 0. If
we always kept the identity region mapped, a NULL pointer dereference
would then access a valid mapping.

It would be possible to relocate Xen to avoid clashing with address 0.
However, the identity mapping is only meant to be used in a few limited
places. Therefore it is better to keep the identity region invalid
most of the time.

Two new external helpers are introduced:
    - arch_setup_page_tables() will set up the page-tables so it is
      easy to create the mapping afterwards.
    - update_identity_mapping() will create/remove the identity mapping.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v4:
        - Fix typo in a comment
        - Clarify which page-tables are updated

    Changes in v2:
        - Remove the arm32 part
        - Use a different logic for the boot page tables and the runtime
          ones because Xen may be running in a different place.
---
 xen/arch/arm/arm64/Makefile         |   1 +
 xen/arch/arm/arm64/mm.c             | 130 ++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   4 +
 xen/arch/arm/include/asm/arm64/mm.h |  13 +++
 xen/arch/arm/include/asm/setup.h    |  11 +++
 xen/arch/arm/mm.c                   |   6 +-
 6 files changed, 163 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mm.c

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 6d507da0d44d..28481393e98f 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -10,6 +10,7 @@ obj-y += entry.o
 obj-y += head.o
 obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += mm.o
 obj-y += smc.o
 obj-y += smpboot.o
 obj-y += traps.o
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
new file mode 100644
index 000000000000..798ae93ad73c
--- /dev/null
+++ b/xen/arch/arm/arm64/mm.c
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <xen/init.h>
+#include <xen/mm.h>
+
+#include <asm/setup.h>
+
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
+static DEFINE_PAGE_TABLE(xen_first_id);
+static DEFINE_PAGE_TABLE(xen_second_id);
+static DEFINE_PAGE_TABLE(xen_third_id);
+
+/*
+ * The identity mapping may start at physical address 0. So we don't want
+ * to keep it mapped longer than necessary.
+ *
+ * When this is called, we are still using the boot_pgtable.
+ *
+ * We need to prepare the identity mapping for both the boot page tables
+ * and runtime page tables.
+ *
+ * The logic to create the entry is slightly different because Xen may
+ * be running at a different location at runtime.
+ */
+static void __init prepare_boot_identity_mapping(void)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    lpae_t pte;
+    DECLARE_OFFSETS(id_offsets, id_addr);
+
+    /*
+     * We will be re-using the boot ID tables. They may not have been
+     * zeroed but they should be unlinked. So it is fine to use
+     * clear_page().
+     */
+    clear_page(boot_first_id);
+    clear_page(boot_second_id);
+    clear_page(boot_third_id);
+
+    if ( id_offsets[0] != 0 )
+        panic("Cannot handle ID mapping above 512GB\n");
+
+    /* Link first ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_first_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_pgtable[id_offsets[0]], pte);
+
+    /* Link second ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_second_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_first_id[id_offsets[1]], pte);
+
+    /* Link third ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_third_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_second_id[id_offsets[2]], pte);
+
+    /* The mapping in the third table will be created at a later stage */
+}
+
+static void __init prepare_runtime_identity_mapping(void)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    lpae_t pte;
+    DECLARE_OFFSETS(id_offsets, id_addr);
+
+    if ( id_offsets[0] != 0 )
+        panic("Cannot handle ID mapping above 512GB\n");
+
+    /* Link first ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_first_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_pgtable[id_offsets[0]], pte);
+
+    /* Link second ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_second_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_first_id[id_offsets[1]], pte);
+
+    /* Link third ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_third_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_second_id[id_offsets[2]], pte);
+
+    /* The mapping in the third table will be created at a later stage */
+}
+
+void __init arch_setup_page_tables(void)
+{
+    prepare_boot_identity_mapping();
+    prepare_runtime_identity_mapping();
+}
+
+void update_identity_mapping(bool enable)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    int rc;
+
+    if ( enable )
+        rc = map_pages_to_xen(id_addr, maddr_to_mfn(id_addr), 1,
+                              PAGE_HYPERVISOR_RX);
+    else
+        rc = destroy_xen_mappings(id_addr, id_addr + PAGE_SIZE);
+
+    BUG_ON(rc);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 8bfc906e7178..856f2dbec4ad 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -18,6 +18,10 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 
 bool init_domheap_mappings(unsigned int cpu);
 
+static inline void arch_setup_page_tables(void)
+{
+}
+
 #endif /* __ARM_ARM32_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/arm64/mm.h b/xen/arch/arm/include/asm/arm64/mm.h
index aa2adac63189..e7059a36bf17 100644
--- a/xen/arch/arm/include/asm/arm64/mm.h
+++ b/xen/arch/arm/include/asm/arm64/mm.h
@@ -1,6 +1,8 @@
 #ifndef __ARM_ARM64_MM_H__
 #define __ARM_ARM64_MM_H__
 
+extern DEFINE_PAGE_TABLE(xen_pgtable);
+
 /*
  * On ARM64, all the RAM is currently direct mapped in Xen.
  * Hence return always true.
@@ -10,6 +12,17 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
     return true;
 }
 
+void arch_setup_page_tables(void);
+
+/*
+ * Enable/disable the identity mapping in the live page-tables (i.e.
+ * the ones pointed to by TTBR_EL2).
+ *
+ * Note that nested calls (e.g. enable=true, enable=true) are not
+ * supported.
+ */
+void update_identity_mapping(bool enable);
+
 #endif /* __ARM_ARM64_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2be4..66b27f2b57c1 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -166,6 +166,17 @@ u32 device_tree_get_u32(const void *fdt, int node,
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
+extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
+
+#ifdef CONFIG_ARM_64
+extern DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+#endif
+extern DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+extern DEFINE_BOOT_PAGE_TABLE(boot_third_id);
+
+/* Find where Xen will be residing at runtime and return a PT entry */
+lpae_t pte_of_xenaddr(vaddr_t);
+
 extern const char __ro_after_init_start[], __ro_after_init_end[];
 
 struct init_info
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0cf7ad4f0e8c..39e0d9e03c9c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -93,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
 
 #ifdef CONFIG_ARM_64
 #define HYP_PT_ROOT_LEVEL 0
-static DEFINE_PAGE_TABLE(xen_pgtable);
+DEFINE_PAGE_TABLE(xen_pgtable);
 static DEFINE_PAGE_TABLE(xen_first);
 #define THIS_CPU_PGTABLE xen_pgtable
 #else
@@ -388,7 +388,7 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
         invalidate_icache();
 }
 
-static inline lpae_t pte_of_xenaddr(vaddr_t va)
+lpae_t pte_of_xenaddr(vaddr_t va)
 {
     paddr_t ma = va + phys_offset;
 
@@ -495,6 +495,8 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 
     phys_offset = boot_phys_offset;
 
+    arch_setup_page_tables();
+
 #ifdef CONFIG_ARM_64
     pte = pte_of_xenaddr((uintptr_t)xen_first);
     pte.pt.table = 1;
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:39:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477179.739762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHTO-0007yY-GE; Fri, 13 Jan 2023 10:39:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477179.739762; Fri, 13 Jan 2023 10:39:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHTO-0007yR-DW; Fri, 13 Jan 2023 10:39:46 +0000
Received: by outflank-mailman (input) for mailman id 477179;
 Fri, 13 Jan 2023 10:39:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f25I=5K=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pGHTM-0007yJ-Nj
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:39:44 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2059.outbound.protection.outlook.com [40.107.241.59])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9688d219-932e-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 11:39:42 +0100 (CET)
Received: from DB3PR06CA0010.eurprd06.prod.outlook.com (2603:10a6:8:1::23) by
 PAVPR08MB9435.eurprd08.prod.outlook.com (2603:10a6:102:317::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18; Fri, 13 Jan 2023 10:39:38 +0000
Received: from DBAEUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:1:cafe::bc) by DB3PR06CA0010.outlook.office365.com
 (2603:10a6:8:1::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Fri, 13 Jan 2023 10:39:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT031.mail.protection.outlook.com (100.127.142.173) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 10:39:38 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Fri, 13 Jan 2023 10:39:38 +0000
Received: from cf384ea11bf4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 EDD1FBEC-2517-4900-B832-14673476D204.1; 
 Fri, 13 Jan 2023 10:39:27 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cf384ea11bf4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 10:39:27 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by DBAPR08MB5574.eurprd08.prod.outlook.com (2603:10a6:10:1ab::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 10:39:24 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%7]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 10:39:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9688d219-932e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=raE3IienIKJdUrPrC4yNOVKHkWXRO/TA3YJbKrq+SzQ=;
 b=SgLOifJWhlgYeg4m9qecvihR2govRVj/d26cLy7Z5OZrpmt+eeo/53Dfmq4SsXu4FcoC7HSa1s6c8+g2laY+Pr8W17wa44RxPp+Twbj5TOxqqGEz1uebzNEO5QvDzZ9jg5BU79z24SWoZh7oFTf9kTLXQOWpH4TYIeRZRg3yJ0E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PK2wynzKW4Xeo0ylysB8WUG9WUNw3hx2Ex5O5G4uCx7Snh3gQjZuSIqlX6ppl1j8QJsntQGa9QC7QLnjp3ig0PRxYbromGk7Jhr9f43UkOTrH9qAUqnhCkI6Iw/a1r0p69zd8MIbfk6/dqjmHnbjF1kue8bQXvGmSHVJGaI4FfwwJhGydWnxP0usxmamdRiX9gBQbamx12paf9Kom0CyegEdtryfxKDp7KteZ101ENeXbpXNbG20EfoQ7wGZzqW1B1+k9kWOb+NhaqX5PpIY2B+UlimYHKTVOkXQmHGfbfBHaY6F3K0TTBqjhMHl7GX+lHGAxWzUxFhl+Ahr6wZ1DA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=raE3IienIKJdUrPrC4yNOVKHkWXRO/TA3YJbKrq+SzQ=;
 b=QLh5KEdy2lW1V5o2QVBpCaO4/s1FG88oS0LoW695awKl/IEViW+JVb8krZcmM+a09Lz7W3pIjgstpR1/qTGiDX0rrPiJBVl+yTu3qKlsjOD5oUlk44ONsN9mGuD7hwb5P4jOHVjt7av6PQc7rr+cZ0moC+vVvJnE0GUKrjAxEwF2p99AnI7l9FHCubATUaXHWNMotcb1AVPM69zqwnqx2akXVCSizdmkJGAPqpM0KZyQbR0Uiwd3PQSPU8usUpZW3ujYmxKIyOMNFJMCPO4dSc43Il2S8f34gwBdKvg/o4BVDy0PmYHLCvKLObOTaZOIxvfV+rI9QDEPDYXN2FM4mw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=raE3IienIKJdUrPrC4yNOVKHkWXRO/TA3YJbKrq+SzQ=;
 b=SgLOifJWhlgYeg4m9qecvihR2govRVj/d26cLy7Z5OZrpmt+eeo/53Dfmq4SsXu4FcoC7HSa1s6c8+g2laY+Pr8W17wa44RxPp+Twbj5TOxqqGEz1uebzNEO5QvDzZ9jg5BU79z24SWoZh7oFTf9kTLXQOWpH4TYIeRZRg3yJ0E=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Julien Grall <jgrall@amazon.com>
Subject: RE: [PATCH v2 01/40] xen/arm: remove xen_phys_start and
 xenheap_phys_end from config.h
Thread-Topic: [PATCH v2 01/40] xen/arm: remove xen_phys_start and
 xenheap_phys_end from config.h
Thread-Index: AQHZJxAZFWT7HTzU/UKS259EJJCvG66cH12AgAAI1eA=
Date: Fri, 13 Jan 2023 10:39:24 +0000
Message-ID:
 <AM0PR08MB4530E854859433AEBA73CF44F7C29@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-2-Penny.Zheng@arm.com>
 <2e8a80d6-b45d-f852-1e54-7c6e0ae4f2fd@xen.org>
In-Reply-To: <2e8a80d6-b45d-f852-1e54-7c6e0ae4f2fd@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 2749E7C5D13F2F4FA5185745007A3E26.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|DBAPR08MB5574:EE_|DBAEUR03FT031:EE_|PAVPR08MB9435:EE_
X-MS-Office365-Filtering-Correlation-Id: c6d4871c-47c3-4927-5237-08daf5527868
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 8B8uaCvKsrC8+Ki13Ab578NhBJ4kVmv4BlAQMO01geTLr1/vISyXsyrV7cKoqIdfJ1kTYr/v1q5kgsWhPdw+3sd1qUCZyrp1LsOcO7AjIG3SVT6U8Cp4LLbnZyHxBo1GASSwaZWNDqw/2hJJwfekRNS+G/v4U2d3IpAduE3vlqenhzrjVw9aoukY7Vsu1A4ydpF8mPiNVycwooFu6nPucgng1qAtNMZAytOn6pAnnejtoY+Kzi8NvwgzdthdTnJ+CUuVgnuuqdragzgJowm4RLNBfDNS7XqGRXy3/+sZRFqCeRIUnbI/a6Oy6ELrRhypImthrC4MgxjUv5A4YIc3lW9ACi0xpUoFUCU+beTBI1YV6by1GVvLgSCpSfT69o8vlZV3T3s3K9V5YyASwG9tjwnBaMHaLwdVembUqnQqKFSfd0ZNSjCxCTGJlT7PTq62fwh9RUsb5lgDotln9TTh/afRE+t1gyoRmsfeyOgxJDav1Lqri5DuPygszuCMTuHEb236TlNamL9vEuyVy/6iDylc+B74b5Hpj6Nza0rm+KnJUhRvcauSYhGfQZ/Puu7WcLu8Y5A4QeOykep0YH0350LDJJiOVFxFrrhV6UmoX6y1YCUstYiBsN/7RE36sYE8xAO+nsS44XyXo1S7L/gcINTIQXd6WzYmQwpsRzdqnrmcaLtpwHHe4pr2jdM+wODLVIb0A7jIwZgngcJUtoHYPQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB4530.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(396003)(366004)(39860400002)(376002)(346002)(136003)(451199015)(478600001)(86362001)(33656002)(316002)(110136005)(2906002)(54906003)(71200400001)(7696005)(83380400001)(186003)(38100700002)(38070700005)(122000001)(55016003)(53546011)(9686003)(8936002)(66476007)(64756008)(52536014)(76116006)(6506007)(66446008)(8676002)(4326008)(66556008)(5660300002)(66946007)(26005)(4744005)(41300700001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5574
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f8f14d69-f38c-432e-eca6-08daf5527030
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0pOn5lOEt2SqcKMohJfCk+sd46V3I7wzcxQ2U7HHiU6iiM/s/iufzltOt3aaj5Nxe9UK4Lv6eNirf6EOvFf74w2QaxLwooeUq7RwHd/HIEjAXXRIRbKMLMPRrBNAVVsweVCpFKonC0TzVrpZVv1X+ho/HkO8J5t7F1y5pWWN0lVZeeNPSjzPCpLc+wv3f4wz5DRHI0hesZg8Eg5NHQix45fb/VcCInddpwwa1V19aOxEDX4OYH9UMD+22NwFBHMal7DPfQwkejErtXdDW48yh5SilyK8OE7Zx9JYa07z/y4iUVjhjOlZvRfTAmdAV+KzKmkgeAJvYLRE4E7cQKvruBnaRE9nvtSINmwBsoFkMj/YKB6lkcAM3bjuoC+fMiQAmACqnD0XFurMk3OeimA+e0HoRhgM7kBEef6hMeUMPdYf6q1jdC9IBqpM66WjMTMDn8X3PPGW4SENb25aWZApCt8GCt5h2DcyyzJwYanM/ROZzxfq8YGMve/B9C4/jvaNUfWpUK62GgK4HHxwqr559+utDyBs175VKN8OIRNXCHVnCBpuULyWa1Yy4nZ4UcijhK0WOdceZfLWC+6C5FbtmPzNERolNhONB5PWpqNYrNvirahCModrgfcp724KKeTVzhv08JKeOuJSKacqOat5yHn6XH4l3Hos2BupbJpa9CTvAHwlEwloWc4LhzWbqqWum70PJ7tyPI1h+IxmWztbhw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(376002)(136003)(346002)(39860400002)(451199015)(46966006)(40470700004)(36840700001)(110136005)(81166007)(54906003)(86362001)(316002)(52536014)(8936002)(8676002)(2906002)(356005)(478600001)(70206006)(70586007)(26005)(107886003)(82740400003)(53546011)(7696005)(40460700003)(41300700001)(4326008)(186003)(6506007)(9686003)(33656002)(40480700001)(55016003)(83380400001)(47076005)(5660300002)(36860700001)(336012)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 10:39:38.2416
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c6d4871c-47c3-4927-5237-08daf5527868
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9435

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Friday, January 13, 2023 6:07 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Julien Grall
> <jgrall@amazon.com>
> Subject: Re: [PATCH v2 01/40] xen/arm: remove xen_phys_start and
> xenheap_phys_end from config.h
> 
> Hi Penny,

Hi Julien

> 
> On 13/01/2023 05:28, Penny Zheng wrote:
> > From: Wei Chen <wei.chen@arm.com>
> >
> > These two variables are stale variables, they only have declarations
> > in config.h, they don't have any definition and no any code is using
> > these two variables. So in this patch, we remove them from config.h.
> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > Acked-by: Julien Grall <jgrall@amazon.com>
> 
> I was going to commit this patch, however this technically needs your signed-
> off-by as the sender of this new version.
> 
> If you confirm your signed-off-by, then I can commit without a resending.
> 

Yes, I confirm, thx

> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:45:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:45:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477187.739772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHZ6-00012j-9y; Fri, 13 Jan 2023 10:45:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477187.739772; Fri, 13 Jan 2023 10:45:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHZ6-00012c-7H; Fri, 13 Jan 2023 10:45:40 +0000
Received: by outflank-mailman (input) for mailman id 477187;
 Fri, 13 Jan 2023 10:45:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tFx8=5K=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pGHZ5-00012W-GN
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:45:39 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2068.outbound.protection.outlook.com [40.107.223.68])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6972e169-932f-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 11:45:37 +0100 (CET)
Received: from DM6PR10CA0020.namprd10.prod.outlook.com (2603:10b6:5:60::33) by
 DS7PR12MB8418.namprd12.prod.outlook.com (2603:10b6:8:e9::16) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.19; Fri, 13 Jan 2023 10:45:34 +0000
Received: from DM6NAM11FT079.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:60:cafe::40) by DM6PR10CA0020.outlook.office365.com
 (2603:10b6:5:60::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Fri, 13 Jan 2023 10:45:34 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT079.mail.protection.outlook.com (10.13.173.4) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 10:45:33 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 13 Jan
 2023 04:45:33 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 13 Jan
 2023 04:45:33 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 13 Jan 2023 04:45:31 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6972e169-932f-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YIPNvKLW+F3LOwqzbRlMR++7ST4WDbxUeBqiLRjPinkjYzhoEwZ9SNTorRjyukA3Fu9si1M9rUzWOcQX/rn0GYRXU1UFdiGrqrBOmb8wO5KqcpAnOwTONE4FBUW/v+LagHW2VFl1iHhV1Lgkkj8HqtHxl7OcxPHXl0nEPPcZ8uY7lx+TUfDmAZsJJjCqTBlTQCAHSW10JdqO7Py9JLNi8lPO9Do5I2DESpVL+uWNlRHoRKmXmlbV3zEyjbR4C8bdw+wSdZjkLEcSjj622Q4UVGn0w8kHfyr4sSmL8MdoNwpxa/cLc6uqtZpUS00zy6tfMQPGUsOhF9HzmBbgitG6vw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=E9J6x8qredz53eStATv7nQ4sBPEGPs+D2tq/zbtF/2A=;
 b=kuqpX078QN1k7tWNgwF2L7UFfTjeiTNZTJnbf9lRHifkmn0X66MIDIw649oXnOxSelUcXJtnjMY6Mt+bUgCxxDd8WGX0S6jHyqmkN9vAzKwufh+eQhKQTOqAERUgYimHeS9Jvj8/IDt0fnzctVevL/fGnYmbxQM388uS2ipqhtGKWRc3ZUZSNFyyoQh3u61q3rw5Usque8GcUg1lg38ImgXP4M2OumqUt7wAeFQlSWD7Mwor+6eDOdy68nXg5guZQEYfJUQ5AcTvVRhp7FlC3W7EtOU8vz4g+SRNUC8Bm0hqMrTb0485RWssQkwYZbXUgYipFlYnsSHOJeUUZ8UxKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E9J6x8qredz53eStATv7nQ4sBPEGPs+D2tq/zbtF/2A=;
 b=lJde9UFukwOg4h2kRNdy1fe8aqjJkZInyBp4+PHPv9IpqJX7W8HELGptmdotRnS3b1D2UbOIONiu4AsCgO1HOHnD7P3W9rEEcNqyYQW+yrDujA66cIDly8jWaBZqNCIpg064mJmQEOKfBmnzv79GBoBbqLap9Rn/2+wbEpjArMc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <092f9d85-954c-281a-8738-de82f90be248@amd.com>
Date: Fri, 13 Jan 2023 11:45:31 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with
 "mov_w rX, <label>"
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-7-julien@xen.org>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230113101136.479-7-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT079:EE_|DS7PR12MB8418:EE_
X-MS-Office365-Filtering-Correlation-Id: ed363fb9-498d-4eef-8f4e-08daf5534c73
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hqkqeZ6M0pnS0Q1O/G0xwaNCk+PoHkQjbvp+2DarfJNBCrmeyltfLhUzkz0iDsMKTRIEC7cxHy806ZHxw+5JEiinV72HgHTpdHXqlZxNuk9luiBF5rNpcju8jZBVo79apL2gMdwWYbUlLq3w+RHjS4XdDOHcvTmLoPhU4M23jOjroIq7GnvXrAKh0rfZn0pB7QD/UETbSmK5sStjPbexaxL5sCdIoBk0U/CvFJ/DmAvDYYZDFhWCli2D0vYWuoOdWlee/wiJ1JB7hx/LNsI/LdThJgERH3a8rl42TitmJ08GdorQBLsUtSGtvX9mWM1R6D4pJg7WR8UAcV5vbbKkIIa1uCBOQ7QYXs1YVWK2CDtIeWA1gLuMsIAq7ev2PPAnqpV/aCv1Od4XKP8ecGGDju2VRLn3n02S7t+8YcSHizUiArjWhGMbBRYYajcikJVCv1MyJLYAAJXrRxVL9tqW4JXyyOcYx1wq7oEzgO0tHde3lA1plgxEVSotlFaHtQGBxfDKEPnh2RFSym2eO4+yTluKoK4oUxeqtROhWyJg1nvvDciq70yMyN6aHaRc1C/aK+o0DQAM+dFUhh/KyiAUzl5fABnuqiYgXo/CyhsyYfKyODa/x+ExkwfVap5U+BbjD7PPltf28/Yo0dzLEQvay1KRdNbvnycT7bCEPmCp/kpTjlyHSiSqT9UXZnuZsikj44WShSUU40UEw6DE1uKMFxiaeNn3wHmgaunjbUxgnLdSnEiwrWQAXOFxZIfI6bU872q7TXv2odLGZYnwjqcN9w==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(39860400002)(136003)(346002)(376002)(451199015)(46966006)(36840700001)(40470700004)(8936002)(31686004)(8676002)(41300700001)(70586007)(70206006)(4326008)(54906003)(16576012)(316002)(2616005)(82310400005)(44832011)(36756003)(2906002)(5660300002)(110136005)(40460700003)(4744005)(40480700001)(86362001)(26005)(31696002)(82740400003)(478600001)(81166007)(53546011)(356005)(186003)(36860700001)(336012)(47076005)(426003)(83380400001)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 10:45:33.9297
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ed363fb9-498d-4eef-8f4e-08daf5534c73
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT079.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB8418



On 13/01/2023 11:11, Julien Grall wrote:
> Caution: This message originated from an External Source. Use proper caution when opening attachments, clicking links, or responding.
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> "ldr rX, =<label>" is used to load a value from the literal pool. This
> implies a memory access.
> 
> This can be avoided by using the macro mov_w, which encodes the value in
> the immediates of two instructions.
> 
> So replace all "ldr rX, =<label>" with "mov_w rX, <label>".
> 
> No functional changes intended.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> ----
>     Changes in v4:
>         * Add Stefano's reviewed-by tag
>         * Add missing space
>         * Add Michal's reviewed-by tag
It looks like you forgot to add it, so to make b4 happy:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
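The substitution discussed in this thread can be illustrated as follows (a sketch only; `some_label` is a placeholder name, and Xen's actual `mov_w` macro may differ in detail):

```asm
    /* Before: loads the address from the literal pool (a memory access). */
    ldr   r0, =some_label

    /* After: mov_w encodes the 32-bit value in the immediates of two
     * instructions, avoiding the load. A plausible expansion is: */
    movw  r0, #:lower16:some_label   @ low 16 bits
    movt  r0, #:upper16:some_label   @ high 16 bits
```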



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:47:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:47:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477193.739783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHaO-0001Zh-KY; Fri, 13 Jan 2023 10:47:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477193.739783; Fri, 13 Jan 2023 10:47:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHaO-0001Za-Ho; Fri, 13 Jan 2023 10:47:00 +0000
Received: by outflank-mailman (input) for mailman id 477193;
 Fri, 13 Jan 2023 10:46:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tFx8=5K=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pGHaN-0001ZS-FL
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:46:59 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2060.outbound.protection.outlook.com [40.107.223.60])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99202e7b-932f-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 11:46:57 +0100 (CET)
Received: from DM6PR02CA0084.namprd02.prod.outlook.com (2603:10b6:5:1f4::25)
 by CY8PR12MB7415.namprd12.prod.outlook.com (2603:10b6:930:5d::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 10:46:54 +0000
Received: from DM6NAM11FT110.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1f4:cafe::69) by DM6PR02CA0084.outlook.office365.com
 (2603:10b6:5:1f4::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Fri, 13 Jan 2023 10:46:54 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT110.mail.protection.outlook.com (10.13.173.205) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 10:46:54 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 13 Jan
 2023 04:46:53 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 13 Jan
 2023 04:46:53 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 13 Jan 2023 04:46:52 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99202e7b-932f-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OA00ajGrj3QoB53DYE/ipfMFJa2jdtFnwgd+QlFEwxxnUJ8Qq1t2S9Whyl8iG1FzFiMYol5XtvGlqbQd9ad4NkRLcKGaJb/qkL9pmAij4eTPZr2XWfZBAoCiRGsX3u34EUFmwAK0Gkt3F/oPsPFnBYLj/TCxVMS5v2ZcNVz07uvX049DhR7DqPlML2tBbHB4gq8nAnUL7Vp64xT4+KeWhVZorqsOaKm6RWzHiNhvTdjeYA8aV/kzhY8jwcAccOdjSvNr403c3fllvuNqLgSAxVv88o9U/vuXuR8y36Hrk5R8XS6fAe0bkjt5LmssJSSKlHP3KC12pFZC+o+mWGGPuQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FzbfDr1VZnamU4LdrVFf8z88bEG70m4uommrJcHbGlw=;
 b=BwPtxvCFAChHM9V3va98qYla1TZoHzV75HQFSkTYmF7KMM74zUKtqWAvO5XrjNovs3GgCrPYEzBP9PHCalOAlRwMJV6s1nwsqX1CdNrkEFBBDoErVEeR4thVpgOKI1gSr3adXwipJH/1b+CPSGR8nFwvqzFpga+sWv2K4XUZ+Pn+R/U2XyyV0fepA8kRm9393KCcHWH18Mu8Ak8mvbkIzmRfqiRlGAST18zkDSxPYpDIzOiJZhW+O2bu8XGNdtwGihScrXB3g64t06Oa2DQZWsyNRHNI+pTHWCUblSUEra+k94aT4r4dK7UrXJgOIBww+WCJZYR/g58qF8cWgGicaw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FzbfDr1VZnamU4LdrVFf8z88bEG70m4uommrJcHbGlw=;
 b=wyK2nhqgcKAnrW1rGKEjgLfqZ5GoaAWPIeLUz95cv9IWyTuOFk427r+YoNIWdxoiWImwMpOmr8Sg8UXNNiNQEKTQ/EshEr+eizJ3tP8B5upkIOQt4orpeiJo3bKicmVP/Qh0CB5p0fnSqJHZbqaf/LF1F5p3hA/CYqXvNyuRDbg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <7c5a0c31-4793-58c1-dc6b-9c63acc37153@amd.com>
Date: Fri, 13 Jan 2023 11:46:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 08/14] xen/arm32: head: Introduce a helper to flush
 the TLBs
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-9-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230113101136.479-9-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT110:EE_|CY8PR12MB7415:EE_
X-MS-Office365-Filtering-Correlation-Id: c6aebba4-b1d6-4847-fa1d-08daf5537c31
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	itCWU5CNtzgmLjL4V/oZGFfLnjpEeW2HpkWjAmgUhs0RSzOLpMuKoCTSVYYfM17odz4fsTyD0hNNYSuxH9af3O7R2P9tAQKUMypAFhMUrkgRIzP+BuuhhGEtD0nhEC23njTiTFkgmwP3JkC1BtsKa0fI6UHzKrLsQN5pl0NmJ5QNQurbRb8igLjvDMn6KVxGc5WsC9YNJHo2t6S9YXvIX/X0NPoNbHCjoB2XmHmnM478pODxvO1M7IfLkj+LG986UigfOcQEyvpduX6Q7qQvM2XgFyhgpROU5AtkcvtM1Y59eCNtwLw2RsYMU/D9wUiQpCq1tamexLf8BHr+hILV93VIL6C8gho1ptDVrYifIKwiIp2r98NoWk6PpHbdd1DEr/8wvwuI9UyqecgelVg3CcP8va353oZ4AaAfr5f5vrZMFR/jXcFv5S3Gtu9oJ6bn+TYzksygNMnzWEzkbg5qRrMBwwJQ6FCifWiQSUFT4/+Gq0MhGAFtgT0vA0iRPLxARswM2UmUFHy/cxMflXbz/Xb2g06pTfjJvVx3lEgu3+RO5En5fdOJ/ExHF3b8IBRGpONBpIEPAwLj+xX1xSA+XwQdm1P6w5/JltSVpbJDnNZ/Ctk/K8i1Wuax8zej8UB812HR0tCEztUzKJ1Cn3rfpNGR7CIeTR3Aw4vcCp7E6hTEngm9+S/ZCn+cnZQlUGApatpfu9ZlxRlyVZ97bhqOHvXcrwkvFXaWWumS/dK1gYPCy2xUsQ9hp2H9lVcstyihEE4SGUxmyu79VXDZFirKvw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(396003)(39860400002)(346002)(136003)(451199015)(36840700001)(40470700004)(46966006)(82740400003)(478600001)(356005)(41300700001)(47076005)(81166007)(426003)(31696002)(40460700003)(316002)(110136005)(16576012)(2616005)(54906003)(86362001)(40480700001)(336012)(70586007)(26005)(186003)(82310400005)(36756003)(70206006)(5660300002)(53546011)(36860700001)(2906002)(8936002)(4326008)(44832011)(31686004)(8676002)(83380400001)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 10:46:54.0306
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c6aebba4-b1d6-4847-fa1d-08daf5537c31
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT110.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB7415



On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The sequence for flushing the TLBs is 4 instructions long and often
> requires an explanation of how it works.
> 
> So create a helper and use it in the boot code (switch_ttbr() is left
> alone until we decide the semantics of the call).
> 
> Note that in secondary_switched, we were also flushing the instruction
> cache and branch predictor. Neither of them was necessary because:
>     * We are only supporting IVIPT cache on arm32, so the instruction
>       cache flush is only necessary when executable code is modified.
>       None of the boot code is doing that.
>     * The branch predictor does not need to be invalidated because
>       misprediction is not a problem at boot.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
>     Changes in v4:
>         - Expand the commit message to explain why switch_ttbr() is
>           not updated.
>         - Remove extra spaces in the comment
>         - Fix typo in the commit message

Thanks,
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
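The four-instruction TLB flush sequence that the patch wraps in a helper looks roughly like this (a sketch based on common arm32 hypervisor-mode code, not copied from the patch itself):

```asm
    /* Ensure prior page-table updates are visible before the flush. */
    dsb   nsh
    /* TLBIALLH: invalidate all hypervisor-mode TLB entries. */
    mcr   p15, 4, r0, c8, c7, 0
    /* Complete the flush, then synchronize the instruction stream. */
    dsb   nsh
    isb
```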




From xen-devel-bounces@lists.xenproject.org Fri Jan 13 10:47:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 10:47:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477195.739794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHai-0001yF-SE; Fri, 13 Jan 2023 10:47:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477195.739794; Fri, 13 Jan 2023 10:47:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHai-0001y8-PR; Fri, 13 Jan 2023 10:47:20 +0000
Received: by outflank-mailman (input) for mailman id 477195;
 Fri, 13 Jan 2023 10:47:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pGHah-0001wP-Co
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 10:47:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGHah-0007ja-2d; Fri, 13 Jan 2023 10:47:19 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.6.109]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pGHag-0006pR-Rx; Fri, 13 Jan 2023 10:47:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ilTuiZWtxXIivaigHd1bbnuUetdYbfKeHsMzi0wizV4=; b=Myhhjgh9dcLWpoJ/jz5dA2t9hr
	HMYB2U9+q0UjvzaPfbndv8MgKPC7C9WBGv8FKVDBdmDyegLbucagsfD4HnlKkDzR8uY8HfhG3Z+bH
	NpohIu9NSt8nh6Cc5Zb7GG0QK98LLIeDmpVc6xL6JZksWo7dw/IXI1ITdib18htKl04U=;
Message-ID: <66db4978-3f2c-b46f-48d5-0607e849deef@xen.org>
Date: Fri, 13 Jan 2023 10:47:16 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with
 "mov_w rX, <label>"
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-7-julien@xen.org>
 <092f9d85-954c-281a-8738-de82f90be248@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <092f9d85-954c-281a-8738-de82f90be248@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/01/2023 10:45, Michal Orzel wrote:
> 
> 
> On 13/01/2023 11:11, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> "ldr rX, =<label>" is used to load a value from the literal pool. This
>> implies a memory access.
>>
>> This can be avoided by using the macro mov_w, which encodes the value in
>> the immediates of two instructions.
>>
>> So replace all "ldr rX, =<label>" with "mov_w rX, <label>".
>>
>> No functional changes intended.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>
>> ----
>>      Changes in v4:
>>          * Add Stefano's reviewed-by tag
>>          * Add missing space
>>          * Add Michal's reviewed-by tag
> It looks like you forgot to add it, so to make b4 happy:

Ah yes. Sorry.

> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Thanks! I also forgot to replace the "----" with "---" before sending 
:/. I am not planning to respin the series just for that.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 11:04:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 11:04:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477205.739806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHrW-0004aE-DB; Fri, 13 Jan 2023 11:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477205.739806; Fri, 13 Jan 2023 11:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHrW-0004a7-9d; Fri, 13 Jan 2023 11:04:42 +0000
Received: by outflank-mailman (input) for mailman id 477205;
 Fri, 13 Jan 2023 11:04:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGHrU-0004Zx-Il; Fri, 13 Jan 2023 11:04:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGHrU-000881-Eu; Fri, 13 Jan 2023 11:04:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGHrU-0006uE-3y; Fri, 13 Jan 2023 11:04:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGHrU-0008Hw-3T; Fri, 13 Jan 2023 11:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VamfJMo+T+0x6nFfr5Jx8XYSIyxvtxLoD3kBJIrNR8c=; b=KkBcNnfeoO2aExcrwmqs//jglg
	JFKPmEr4Fu9Cegg00tNymwV3GPD0+jxghMIy2T91uAKVU1QINGlw5AQ/tIRrtnZ6BF+yUU7bIeNo7
	wsuX2DZWmBAZrOdNYej6jMDHvifJIJLTxbBPpFTOZ2xsgoUGLC3Ga4+m1eAlfRC9kfpc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175750-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175750: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=3db29dcac23da85486704ef9e7a8e7217f7829cd
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Jan 2023 11:04:40 +0000

flight 175750 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175750/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743
 build-arm64                   6 xen-build                fail REGR. vs. 175743
 build-armhf                   6 xen-build                fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                3db29dcac23da85486704ef9e7a8e7217f7829cd
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    0 days
Testing same since   175750  2023-01-13 06:38:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 762 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 11:06:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 11:06:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477214.739817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHsq-0005Bd-TV; Fri, 13 Jan 2023 11:06:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477214.739817; Fri, 13 Jan 2023 11:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGHsq-0005BW-QA; Fri, 13 Jan 2023 11:06:04 +0000
Received: by outflank-mailman (input) for mailman id 477214;
 Fri, 13 Jan 2023 11:06:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGHsp-0005BK-2P; Fri, 13 Jan 2023 11:06:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGHsp-0008CE-0q; Fri, 13 Jan 2023 11:06:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGHso-0006vx-Ki; Fri, 13 Jan 2023 11:06:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGHso-0000tI-KI; Fri, 13 Jan 2023 11:06:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UQ01UA/RStceBAH8G6xElJ+5/p60/5gVlSWRi/ob0ss=; b=WtrZtvA+jFWNGfsJQZFR1sNuNZ
	N6HKlPbsRTgn8/800BcT7tMF59hFlRd0aTEbCnZva+LsEx3WHDBNcSePKiu/ZReYkIjvinbApFoer
	a5+XFsX63Yn6QXTlDGC9/2LDRqHmXM9IuLToiGlzUB5S6mamtyMAOvLhDjg2e5kK0Yw8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175751-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175751: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-arm64:xen-build:fail:regression
    linux-linus:build-armhf:xen-build:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=d9fc1511728c15df49ff18e49a494d00f78b7cd4
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Jan 2023 11:06:02 +0000

flight 175751 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175751/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 173462
 build-amd64-xsm               6 xen-build                fail REGR. vs. 173462
 build-i386-xsm                6 xen-build                fail REGR. vs. 173462
 build-i386                    6 xen-build                fail REGR. vs. 173462
 build-arm64                   6 xen-build                fail REGR. vs. 173462
 build-armhf                   6 xen-build                fail REGR. vs. 173462
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd11-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd12-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a

version targeted for testing:
 linux                d9fc1511728c15df49ff18e49a494d00f78b7cd4
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   97 days
Failing since        173470  2022-10-08 06:21:34 Z   97 days  204 attempts
Testing same since   175751  2023-01-13 06:42:32 Z    0 days    1 attempts

------------------------------------------------------------
3336 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-freebsd11-amd64                             blocked 
 test-amd64-amd64-freebsd12-amd64                             blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-amd64-xl-vhd                                      blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 509453 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 11:16:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 11:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477222.739828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGI2o-0006hy-Rp; Fri, 13 Jan 2023 11:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477222.739828; Fri, 13 Jan 2023 11:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGI2o-0006hr-Od; Fri, 13 Jan 2023 11:16:22 +0000
Received: by outflank-mailman (input) for mailman id 477222;
 Fri, 13 Jan 2023 11:16:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGI2m-0006hl-Ff
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 11:16:20 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2067.outbound.protection.outlook.com [40.107.6.67])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b23f8c97-9333-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 12:16:16 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8735.eurprd04.prod.outlook.com (2603:10a6:102:21f::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 11:16:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 11:16:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b23f8c97-9333-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RGpEkSXGdQVIHQMXQzHfEl4ErKzjkw2QJlnSl/GQZvBzop0eS+Y47o/wDGUatGBARyup6rXWjzx3YeauERmSrYfyXF1N0nvUHLJQAZS3QlQctwxNO1LpoKX2lT920kXxWmtnvHNdb+jjzTbI16M7y3SQFuUWeet9lia6EqRG+JlXDAIP0uyZEn+OVBEQ17c8ng4ENvGx0859tQlsmgRaXCGXNkwuJumkQMOJiEw2sT1eMSq/XBeiwqx+aglDO9megL/MdWfGqHjs67jtqQoBtKdK3dLz84MaJ5krTC8wrjmsnLjTVvQDQ+VTeMCisLv4LtkJhKtKvA3VNPjnuSBKQw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MiObNWxEp4WmdX8RLhIZ2MNwRpmKorvD7g/cRGflOKI=;
 b=Hh7OQ8XaffSmTEO4itByrh1ip/7d+n7kfs9EVNgajMr5CTg+3xwoNGiJQ0qq4RuGO6TMDT2WxljFR23n4gORExtZuR5vaxxj9MDDTI73kjJ1Kt5dd4JENZBy9qs1d7psRdQqNJjuazzaLGEMYrI8EE0RXayCqVePT7eWAYa1QyGyr+dbiMNt2B6L3zukINgITaalG0z2WfXuvDB5ZCwqIgm04+liqoRCOwCj+PRHoGVXTsE8RiI+ge9r5awy0fwIUJ/flBdnObLnioQjr+4thEOIy7Q8O/io1oV2pnh+dgIdAEmIn6RNdQMDdq1VD3c+KE9+1w8krXCsDAuCu2P18w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MiObNWxEp4WmdX8RLhIZ2MNwRpmKorvD7g/cRGflOKI=;
 b=qo7IvFX97JaA/aXaYnpYXG8hPqY2HwMfHfKlcrsk1Iik9oOwk4JK356Bbn12GHnXfur9OEO7L8tpTKPyvjwOsQ9H3r24t5slP6M8PkNjp2eptcMaqVqjLcJCZfsN7y0BY/IdEhjknJn3PoaEEWyAj4sD1LCvkqR8Sp8l8ER4HTFSlzAp5gSTmhbGAxbyG1gKQ0/cRQcYrgTNwnryNl5d7Qvc+9QI5129Ny3d3QJB+bGjXDijT9g6M0tGxWRS5MAXUQDpVgxBmsMTJkkN70WUwjVbDoHtX25BQ/HCoWFftPGzTGlzdl77oMP8oz9VtLDS8g4on6BKjfkGsHiqf3ARDw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <09573105-b441-9099-eda8-cc1e407da67d@suse.com>
Date: Fri, 13 Jan 2023 12:16:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [linux-linus test] 175751: regressions - FAIL
Content-Language: en-US
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-175751-mainreport@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <osstest-175751-mainreport@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0028.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8735:EE_
X-MS-Office365-Filtering-Correlation-Id: bc16bc85-843a-4dcd-0b81-08daf5579545
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bc16bc85-843a-4dcd-0b81-08daf5579545
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 11:16:14.3042
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KWsD6lJ2DO55USZ0NKP5VY5XSFgdSX/npRTmRtgCZxUrcKzTvJN85joIUBbGKTk/QfCk68Tm7MaALb2zrX3nDg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8735

On 13.01.2023 12:06, osstest service owner wrote:
> flight 175751 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/175751/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-amd64                   6 xen-build                fail REGR. vs. 173462
>  build-amd64-xsm               6 xen-build                fail REGR. vs. 173462
>  build-i386-xsm                6 xen-build                fail REGR. vs. 173462
>  build-i386                    6 xen-build                fail REGR. vs. 173462
>  build-arm64                   6 xen-build                fail REGR. vs. 173462
>  build-armhf                   6 xen-build                fail REGR. vs. 173462
>  build-arm64-pvops             6 kernel-build             fail REGR. vs. 173462

Looking at just one of the logs I find

/usr/bin/wget -c -O newlib-1.16.0.tar.gz http://xenbits.xen.org/xen-extfiles/newlib-1.16.0.tar.gz
--2023-01-13 07:41:15--  http://xenbits.xen.org/xen-extfiles/newlib-1.16.0.tar.gz
Resolving cache (cache)... 172.16.148.6
Connecting to cache (cache)|172.16.148.6|:3128... failed: Connection refused.
make[1]: *** [Makefile:90: newlib-1.16.0.tar.gz] Error 4
make[1]: Leaving directory '/home/osstest/build.175751.build-amd64/xen/stubdom'
make: *** [Makefile:73: build-stubdom] Error 2

Let's hope this was merely a networking issue and that the fetch will
work again next time round.
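A transient fetch failure like the one above can be told apart from a genuine
build regression mechanically, by looking for connection-level errors in the
log. A minimal sketch (a hypothetical helper, not part of osstest or the test
harness):

```python
import re

# Excerpt from the stubdom build log quoted above.
LOG = """\
Connecting to cache (cache)|172.16.148.6|:3128... failed: Connection refused.
make[1]: *** [Makefile:90: newlib-1.16.0.tar.gz] Error 4
"""

# Connection-level errors from wget point at infrastructure trouble
# (proxy/network), not at a regression in the tested tree.
TRANSIENT_PATTERNS = (
    r"Connection refused",
    r"Connection timed out",
    r"Temporary failure in name resolution",
)

def is_transient_fetch_failure(log: str) -> bool:
    """Return True if the log shows a download/proxy failure."""
    return any(re.search(p, log) for p in TRANSIENT_PATTERNS)
```

A classifier like this would let a report distinguish "retry the flight" from
"bisect the tree".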

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 11:36:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 11:36:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477229.739839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIMT-0000ks-Mn; Fri, 13 Jan 2023 11:36:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477229.739839; Fri, 13 Jan 2023 11:36:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIMT-0000kl-JO; Fri, 13 Jan 2023 11:36:41 +0000
Received: by outflank-mailman (input) for mailman id 477229;
 Fri, 13 Jan 2023 11:36:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=irsc=5K=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pGIMT-0000kf-3o
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 11:36:41 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2059.outbound.protection.outlook.com [40.107.241.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8ab7eb46-9336-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 12:36:38 +0100 (CET)
Received: from FR3P281CA0135.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:94::19)
 by GVXPR08MB7751.eurprd08.prod.outlook.com (2603:10a6:150:7::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 11:36:35 +0000
Received: from VI1EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:94:cafe::d5) by FR3P281CA0135.outlook.office365.com
 (2603:10a6:d10:94::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Fri, 13 Jan 2023 11:36:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT063.mail.protection.outlook.com (100.127.144.155) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.18 via Frontend Transport; Fri, 13 Jan 2023 11:36:34 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Fri, 13 Jan 2023 11:36:34 +0000
Received: from 753a91522c15.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0F19391D-4363-44C9-8AF5-AFE72EE338FD.1; 
 Fri, 13 Jan 2023 11:36:27 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 753a91522c15.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 11:36:27 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PA4PR08MB5967.eurprd08.prod.outlook.com (2603:10a6:102:e7::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 11:36:26 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 11:36:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ab7eb46-9336-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JwvPxYqJhARnyusAQLA8bWR4iDsYbq8TXKcUsQPOS0A=;
 b=RVb98TLAd3sY0+9C/TygRA855EweOBYA8xp5rxonnvMjZe8zXEw1MrAycp1vZYBuBgvr+Zq7VFSwsfGa/BAwKmXGCGt5CDqMqvdY6h6d8rQ+jSpCOZM9yksI2JqQYwFw8QEgVXAfD/A1HXOfudUWLO/akhEVUvhi6w4TymAGqTU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hawn/fzl3gqQ9w85h9x1MqJDJk9eEVUrStnjih6XTUU8WUK3e5JzkGx9IouyIXxxRfojI6T3uTCobHnKVPZXVPoYNshpcVsSPxE4NzjeCvqqFSPlcdfpvPaT2kvpwCfjNOr6mMqz/dfIwGrbM2zPdcZdcirlliLxMj2vkiv549o2LzSiEeXxOk/h4k7vZ8VlOmiSVbLXVmMcQ32FGQZE7UjRg1lxq8x4F2N39n//jxUGsgK0klbghuHL5XJsbBgCoHcb87Prc0NJ40l9NZjYCoqoWmuVwHD59c4Zm9s+Tf0L2kVcF2e0Uzfl+ZiKQVCTVFhSF12Ku7wvn0juxlyEiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JwvPxYqJhARnyusAQLA8bWR4iDsYbq8TXKcUsQPOS0A=;
 b=mTdbmYCiHxQqFOUU87j6Aln6fGexFiG0zbAWnX3P3B09Xn8Lgy3/l03yNrPpCNSo/KlBzODnXegE5JNQ9c6I6xk9H8z8xO+zsg6UtZXNMzRaOjakT5oRoaNj87qonX9asauBl8C76B+JOYdviJps1Z4hVjpCgMo3HXPhgYb+a6g2FEtvRRVAWOeSNbfMEzKvLrSvxOR7y13cUGU/kdq5I8Uuiuw9oWVgXBzw2BU6oGFxI3Gj8+p9EN2DqPKAUJn444JKpx1Nk9dx+sXasywO35x9qHB0g3TdFYA9iWhD7reeqH+HVmV9hOPPg+vnsdW6ATbL5unSFUDpcutwcfyxXg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JwvPxYqJhARnyusAQLA8bWR4iDsYbq8TXKcUsQPOS0A=;
 b=RVb98TLAd3sY0+9C/TygRA855EweOBYA8xp5rxonnvMjZe8zXEw1MrAycp1vZYBuBgvr+Zq7VFSwsfGa/BAwKmXGCGt5CDqMqvdY6h6d8rQ+jSpCOZM9yksI2JqQYwFw8QEgVXAfD/A1HXOfudUWLO/akhEVUvhi6w4TymAGqTU=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: RE: [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for
 local TLB flush
Thread-Topic: [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier
 for local TLB flush
Thread-Index: AQHZJzd9qGEAItqZ/Emo59lWSJotia6cNK4A
Date: Fri, 13 Jan 2023 11:36:26 +0000
Message-ID:
 <AS8PR08MB7991E178E1AAADEE8BB9974B92C29@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-2-julien@xen.org>
In-Reply-To: <20230113101136.479-2-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 93B66E3C6812794FB170ACB640DA9580.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PA4PR08MB5967:EE_|VI1EUR03FT063:EE_|GVXPR08MB7751:EE_
X-MS-Office365-Filtering-Correlation-Id: dbe77d1c-62b7-4f66-4824-08daf55a6cc7
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB5967
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2cbc640c-77a6-4d83-3bb6-08daf55a67f4
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 11:36:34.6270
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dbe77d1c-62b7-4f66-4824-08daf55a6cc7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR08MB7751

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 01/14] xen/arm64: flushtlb: Reduce scope of barrier for
> local TLB flush
>
> From: Julien Grall <jgrall@amazon.com>
>
> Per D5-4929 in ARM DDI 0487H.a:
> "A DSB NSH is sufficient to ensure completion of TLB maintenance
>  instructions that apply to a single PE. A DSB ISH is sufficient to
>  ensure completion of TLB maintenance instructions that apply to PEs
>  in the same Inner Shareable domain.
> "
>
> This means barrier after local TLB flushes could be reduced to
> non-shareable.
>
> Note that the scope of the barrier in the workaround has not been
> changed because Linux v6.1-rc8 is also using 'ish' and I couldn't
> find anything in the Neoverse N1 suggesting that a 'nsh' would
> be sufficient.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

I've tested this patch on FVP. The dom0 boots fine, linux & zephyr
guests can be started and destroyed without issue, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 11:55:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 11:55:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477235.739849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIeT-00038s-6y; Fri, 13 Jan 2023 11:55:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477235.739849; Fri, 13 Jan 2023 11:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIeT-00038l-4A; Fri, 13 Jan 2023 11:55:17 +0000
Received: by outflank-mailman (input) for mailman id 477235;
 Fri, 13 Jan 2023 11:55:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bX0/=5K=citrix.com=prvs=37021d3d6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pGIeR-00038f-KI
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 11:55:15 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 21975da1-9339-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 12:55:12 +0100 (CET)
Received: from mail-co1nam11lp2173.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 13 Jan 2023 06:55:09 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5360.namprd03.prod.outlook.com (2603:10b6:208:1e9::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 11:55:05 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.019; Fri, 13 Jan 2023
 11:55:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21975da1-9339-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673610912;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=7Qz8tI34H2+ukCB5uzp/ISWQQFOVurHXR+A8+ylwlXk=;
  b=VJhnsAxcV55Ec3bHo1vqHFxpWjdxA4lVgg5MpWIksiRhrRtDjVo+1L2r
   QYH+0RzG278d/dmrJJFY2ZFrIuIDIw2TQw9Nson8GAjL2KTaIyTfRqHjj
   fxhrbfc3VZPAeWoeJIvPwHnIxbI58zuiVZ3ZiEWPW8Zb5484NibU4hjPp
   Y=;
X-IronPort-RemoteIP: 104.47.56.173
X-IronPort-MID: 92462271
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,213,1669093200"; 
   d="scan'208";a="92462271"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=S1dD1UrxKS1+uGE6Bfj+jvelwwTjtacnzAg549SG0IwfSi58bLsX0fxQlNGJ8AZlPAfuSY0hahkP7xbfgXY9bcYnb7xeebTgs0APRdZqjLpWCbfgZxJHE4tAGNXVqj6f1t4XY4mIkicJLh1FiLyJIKAWOWRhblLEtar/zyeUhoQXx+zsEU/AE/WWAHASYpY+DkP/jEdYwXFfsfqM8J8m3AQG5DjRtM5HS7GrBb9/P+aj7o/wxBxyrcXJ8dvLV0lF8WL9F5weWjrjLGGUihqisOqApDQ9GFQu3tZNK7kQ7KGtEySLlsfUmp+vyqUVHveQ2wDNygIENfntOQCG0cIziQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7Qz8tI34H2+ukCB5uzp/ISWQQFOVurHXR+A8+ylwlXk=;
 b=ROTe+Ykr2ax0xkWar8CjS4CjR+e5xchtUOY1op43tnh/vNFWzSo3G1xM49vwnPoqWPtOwQC7JdKGFWW6s2RGUqccRK6YXL8fBwva6JzNvDnFd9Jv/S8QHJXtaVtsKLnaKzlkR7LF0+SI3JiLGnxW4jHPVvwnkYC3GHvt8Eaf/KiH6qjfdPY5SOj7d2nptyvrSAJMMq0dV/Y3U9A+T4KaPP+DZ5eUE8up9rFoojuW0jQRMBXqHEG6iCc6JwrxhzLYaiRNWQlN10f4p9Ks1+O/XJQJFPnrTFGuvO7oWckj3QsbjUX+VTETPRcMSImYFGeTyVviM/Hy7RzlUB+VP1m16w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7Qz8tI34H2+ukCB5uzp/ISWQQFOVurHXR+A8+ylwlXk=;
 b=IfLdk5rqZFjSttHVPorSshyIHcuTRAWj5ygEYadv11dXwgXMNr6HdVvPZ5E6BFN12Y8AQGaRK1noF+RxHl/vD58fs/wC1wof1tJO0sivcuOQOqRaDGavTzC6aJ9gJpHLsV4hBNxDPbRBdLjKCvt1WtGguPYUbhqYtDLSlBb3My4=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, Xenia Ragiadakou <burzalodowa@gmail.com>, George
 Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Thread-Topic: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Thread-Index: AQHZJyu6jkZvmiKMqEGYr+zksqnvb66cPWcA
Date: Fri, 13 Jan 2023 11:55:05 +0000
Message-ID: <f24da4a8-4df8-0ec3-32f9-41f134b87d67@citrix.com>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
In-Reply-To: <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 13/01/2023 8:47 am, Jan Beulich wrote:

As far as the subject goes, I really wouldn't call this "sanitise".  The
behaviour is crazy, and broken.

"Make shadow consistent with how HAP works" feels somewhat better.

> First of all the variable is meaningful only when an IOMMU is in use for
> a guest. Qualify the check accordingly, like done elsewhere. Furthermore
> the controlling command line option is supposed to take effect on VT-d
> only. Since command line parsing happens before we know whether we're
> going to use VT-d, force the variable back to set when instead running
> with AMD IOMMU(s).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I was first considering to add the extra check to the outermost
> enclosing if(), but I guess that would break the (questionable) case of
> assigning MMIO ranges directly by address. The way it's done now also
> better fits the existing checks, in particular the ones in p2m-ept.c.
>
> Note that the #ifndef is put there in anticipation of iommu_snoop
> becoming a #define when !IOMMU_INTEL (see
> https://lists.xen.org/archives/html/xen-devel/2023-01/msg00103.html
> and replies).
>
> In _sh_propagate() I'm further puzzled: The iomem_access_permitted()
> certainly suggests very bad things could happen if it returned false
> (i.e. in the implicit "else" case). The assumption looks to be that no
> bad "target_mfn" can make it there. But overall things might end up
> looking more sane (and being cheaper) when simply using "mmio_mfn"
> instead.

That entire block looks suspect.  For one, I can't see why the ASSERT()
is correct; we have literally just (conditionally) added CACHE_ATTR to
pass_thru_flags and pulled everything across from gflags into sflags.

It can also halve its number of external calls by rearranging the
if/else chain and making better use of the type variable.

>
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c

Just out of context here is a comment which says VT-d but really means
IOMMU.  It probably wants adjusting in the context of this change.

> @@ -571,7 +571,7 @@ _sh_propagate(struct vcpu *v,
>                              gfn_to_paddr(target_gfn),
>                              mfn_to_maddr(target_mfn),
>                              X86_MT_UC);
> -                else if ( iommu_snoop )
> +                else if ( is_iommu_enabled(d) && iommu_snoop )
>                      sflags |= pat_type_2_pte_flags(X86_MT_WB);

Hmm...  This is still one reasonably expensive nop; the PTE flags for WB
are 0.

>                  else
>                      sflags |= get_pat_flags(v,
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>      if ( !acpi_disabled )
>      {
>          ret = acpi_dmar_init();
> +
> +#ifndef iommu_snoop
> +        /* A command line override for snoop control affects VT-d only. */
> +        if ( ret )
> +            iommu_snoop = true;
> +#endif

I really don't think this is a good idea.  If nothing else, you're
reinforcing the notion that this logic is somehow acceptable.

If instead the comment read something like:

/* This logic is pretty bogus, but necessary for now.  iommu_snoop as a
control is only wired up for VT-d (which may be conditionally compiled
out), and while AMD can control coherency, Xen forces coherent accesses
unilaterally so iommu_snoop needs to report true on all AMD systems for
logic elsewhere in Xen to behave correctly. */

That at least highlights that it is a giant bodge.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 11:55:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 11:55:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477236.739861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIei-0003SQ-JL; Fri, 13 Jan 2023 11:55:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477236.739861; Fri, 13 Jan 2023 11:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIei-0003SH-GK; Fri, 13 Jan 2023 11:55:32 +0000
Received: by outflank-mailman (input) for mailman id 477236;
 Fri, 13 Jan 2023 11:55:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3K7w=5K=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pGIeg-0003RT-NX
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 11:55:30 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c3291cb-9339-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 12:55:28 +0100 (CET)
Received: by mail-ej1-x632.google.com with SMTP id qk9so51791285ejc.3
 for <xen-devel@lists.xenproject.org>; Fri, 13 Jan 2023 03:55:28 -0800 (PST)
Received: from [192.168.1.93] (adsl-67.109.242.138.tellas.gr. [109.242.138.67])
 by smtp.gmail.com with ESMTPSA id
 lb21-20020a170907785500b0084c4657120fsm8512626ejc.55.2023.01.13.03.55.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Jan 2023 03:55:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c3291cb-9339-11ed-b8d0-410ff93cb8f0
Message-ID: <b76a7834-9868-c5c2-e058-89911a552c80@gmail.com>
Date: Fri, 13 Jan 2023 13:55:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
 <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

(CC Paul as well)

On 1/13/23 11:53, Jan Beulich wrote:
> On 13.01.2023 10:34, Xenia Ragiadakou wrote:
>>
>> On 1/13/23 10:47, Jan Beulich wrote:
>>> First of all the variable is meaningful only when an IOMMU is in use for
>>> a guest. Qualify the check accordingly, like done elsewhere. Furthermore
>>> the controlling command line option is supposed to take effect on VT-d
>>> only. Since command line parsing happens before we know whether we're
>>> going to use VT-d, force the variable back to set when instead running
>>> with AMD IOMMU(s).
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> I was first considering to add the extra check to the outermost
>>> enclosing if(), but I guess that would break the (questionable) case of
>>> assigning MMIO ranges directly by address. The way it's done now also
>>> better fits the existing checks, in particular the ones in p2m-ept.c.
>>>
>>> Note that the #ifndef is put there in anticipation of iommu_snoop
>>> becoming a #define when !IOMMU_INTEL (see
>>> https://lists.xen.org/archives/html/xen-devel/2023-01/msg00103.html
>>> and replies).
>>>
>>> In _sh_propagate() I'm further puzzled: The iomem_access_permitted()
>>> certainly suggests very bad things could happen if it returned false
>>> (i.e. in the implicit "else" case). The assumption looks to be that no
>>> bad "target_mfn" can make it there. But overall things might end up
>>> looking more sane (and being cheaper) when simply using "mmio_mfn"
>>> instead.
>>>
>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>> @@ -571,7 +571,7 @@ _sh_propagate(struct vcpu *v,
>>>                                gfn_to_paddr(target_gfn),
>>>                                mfn_to_maddr(target_mfn),
>>>                                X86_MT_UC);
>>> -                else if ( iommu_snoop )
>>> +                else if ( is_iommu_enabled(d) && iommu_snoop )
>>>                        sflags |= pat_type_2_pte_flags(X86_MT_WB);
>>>                    else
>>>                        sflags |= get_pat_flags(v,
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>>        if ( !acpi_disabled )
>>>        {
>>>            ret = acpi_dmar_init();
>>> +
>>> +#ifndef iommu_snoop
>>> +        /* A command line override for snoop control affects VT-d only. */
>>> +        if ( ret )
>>> +            iommu_snoop = true;
>>> +#endif
>>> +
>>
>> Why here iommu_snoop is forced when iommu is not enabled?
>> This change is confusing because later on, in iommu_setup, iommu_snoop
>> will be set to false in the case of !iommu_enabled.
> 
> Counter question: Why is it being set to false there? I see no reason at
> all. On the same basis as here, I'd actually expect it to be set back to
> true in such a case. Which, however, would be a benign change now that
> all uses of the variable are properly qualified. Which in turn is why I
> thought I'd leave that other place alone.

I think I got confused... AFAIU, with the IOMMU disabled, snooping cannot 
be enforced, i.e. iommu_snoop=true means that snooping is enforced by the 
IOMMU (that's why we need to check that the IOMMU is enabled for the 
guest). So if the IOMMU is disabled, how can iommu_snoop be true?

Also, in vmx_do_resume(), iommu_snoop is used without checking if the 
iommu is enabled.

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 11:56:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 11:56:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477248.739872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIg5-0004H9-Vc; Fri, 13 Jan 2023 11:56:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477248.739872; Fri, 13 Jan 2023 11:56:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIg5-0004H2-R1; Fri, 13 Jan 2023 11:56:57 +0000
Received: by outflank-mailman (input) for mailman id 477248;
 Fri, 13 Jan 2023 11:56:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qv1V=5K=citrix.com=prvs=370028da5=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1pGIg3-0004Gr-VK
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 11:56:55 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e261930-9339-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 12:56:53 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e261930-9339-11ed-b8d0-410ff93cb8f0
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Sergey Dyasli <sergey.dyasli@citrix.com>, Wei Liu <wl@xen.org>, "Anthony
 PERARD" <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, "Stefano Stabellini" <sstabellini@kernel.org>,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: [PATCH] tools/xen-ucode: print information about currently loaded ucode
Date: Fri, 13 Jan 2023 11:56:30 +0000
Message-ID: <20230113115630.22264-1-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Currently it's impossible to get the CPU's microcode revision after late
loading without looking into the Xen logs, which is not always convenient.
Add an option to the xen-ucode tool to print the currently loaded ucode
version, and also print it in the usage info.

Add a new platform op in order to get the required data from Xen.
Print the CPU signature and processor flags as well.

Example output:
    Intel:
    Current CPU signature is: 06-55-04 (raw 0x50654)
    Current CPU microcode revision is: 0x2006e05
    Current CPU processor flags are: 0x1

    AMD:
    Current CPU signature is: fam19h (raw 0xa00f11)
    Current CPU microcode revision is: 0xa0011a8

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Juergen Gross <jgross@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
---
 tools/include/xenctrl.h           |  1 +
 tools/libs/ctrl/xc_misc.c         |  5 +++
 tools/misc/xen-ucode.c            | 68 +++++++++++++++++++++++++++++++
 xen/arch/x86/platform_hypercall.c | 32 +++++++++++++++
 xen/include/public/platform.h     | 14 +++++++
 xen/include/xlat.lst              |  1 +
 6 files changed, 121 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 23037874d3..e9911da5ea 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1185,6 +1185,7 @@ typedef uint32_t xc_node_to_node_dist_t;
 int xc_physinfo(xc_interface *xch, xc_physinfo_t *info);
 int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo);
+int xc_platform_op(xc_interface *xch, struct xen_platform_op *op);
 int xc_microcode_update(xc_interface *xch, const void *buf, size_t len);
 int xc_numainfo(xc_interface *xch, unsigned *max_nodes,
                 xc_meminfo_t *meminfo, uint32_t *distance);
diff --git a/tools/libs/ctrl/xc_misc.c b/tools/libs/ctrl/xc_misc.c
index 265f15ec2d..d03c240d14 100644
--- a/tools/libs/ctrl/xc_misc.c
+++ b/tools/libs/ctrl/xc_misc.c
@@ -226,6 +226,11 @@ int xc_microcode_update(xc_interface *xch, const void *buf, size_t len)
     return ret;
 }
 
+int xc_platform_op(xc_interface *xch, struct xen_platform_op *op)
+{
+    return do_platform_op(xch, op);
+}
+
 int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo)
 {
diff --git a/tools/misc/xen-ucode.c b/tools/misc/xen-ucode.c
index ad32face2b..c4cb4fbb50 100644
--- a/tools/misc/xen-ucode.c
+++ b/tools/misc/xen-ucode.c
@@ -12,6 +12,67 @@
 #include <fcntl.h>
 #include <xenctrl.h>
 
+static const char *intel_id = "GenuineIntel";
+static const char *amd_id   = "AuthenticAMD";
+
+void show_curr_cpu(FILE *f)
+{
+    int ret;
+    xc_interface *xch;
+    struct xen_platform_op op_cpu = {0}, op_ucode = {0};
+    struct xenpf_pcpu_version *cpu_ver = &op_cpu.u.pcpu_version;
+    struct xenpf_ucode_version *ucode_ver = &op_ucode.u.ucode_version;
+    bool intel = false, amd = false;
+
+    xch = xc_interface_open(0, 0, 0);
+    if ( xch == NULL )
+        return;
+
+    op_cpu.cmd = XENPF_get_cpu_version;
+    op_cpu.interface_version = XENPF_INTERFACE_VERSION;
+    op_cpu.u.pcpu_version.xen_cpuid = 0;
+
+    ret = xc_platform_op(xch, &op_cpu);
+    if ( ret )
+        return;
+
+    op_ucode.cmd = XENPF_get_ucode_version;
+    op_ucode.interface_version = XENPF_INTERFACE_VERSION;
+    op_ucode.u.ucode_version.xen_cpuid = 0;
+
+    ret = xc_platform_op(xch, &op_ucode);
+    if ( ret )
+        return;
+
+    if ( memcmp(cpu_ver->vendor_id, intel_id,
+                sizeof(cpu_ver->vendor_id)) == 0 )
+        intel = true;
+    else if ( memcmp(cpu_ver->vendor_id, amd_id,
+                     sizeof(cpu_ver->vendor_id)) == 0 )
+        amd = true;
+
+    if ( intel )
+    {
+        fprintf(f, "Current CPU signature is: %02x-%02x-%02x (raw %#x)\n",
+                   cpu_ver->family, cpu_ver->model, cpu_ver->stepping,
+                   ucode_ver->cpu_signature);
+    }
+    else if ( amd )
+    {
+        fprintf(f, "Current CPU signature is: fam%xh (raw %#x)\n",
+                   cpu_ver->family, ucode_ver->cpu_signature);
+    }
+
+    if ( intel || amd )
+        fprintf(f, "Current CPU microcode revision is: %#x\n",
+                   ucode_ver->ucode_revision);
+
+    if ( intel )
+        fprintf(f, "Current CPU processor flags are: %#x\n", ucode_ver->pf);
+
+    xc_interface_close(xch);
+}
+
 int main(int argc, char *argv[])
 {
     int fd, ret;
@@ -20,11 +81,18 @@ int main(int argc, char *argv[])
     struct stat st;
     xc_interface *xch;
 
+    if ( argc >= 2 && !strcmp(argv[1], "show-cpu-info") )
+    {
+        show_curr_cpu(stdout);
+        return 0;
+    }
+
     if ( argc < 2 )
     {
         fprintf(stderr,
                 "xen-ucode: Xen microcode updating tool\n"
                 "Usage: %s <microcode blob>\n", argv[0]);
+        show_curr_cpu(stderr);
         exit(2);
     }
 
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 08ab2fea62..fcb7d81db1 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -640,6 +640,38 @@ ret_t do_platform_op(
     }
     break;
 
+    case XENPF_get_ucode_version:
+    {
+        struct xenpf_ucode_version *ver = &op->u.ucode_version;
+
+        if ( !get_cpu_maps() )
+        {
+            ret = -EBUSY;
+            break;
+        }
+
+        if ( (ver->xen_cpuid >= nr_cpu_ids) || !cpu_online(ver->xen_cpuid) )
+        {
+            ver->cpu_signature = 0;
+            ver->pf = 0;
+            ver->ucode_revision = 0;
+        }
+        else
+        {
+            const struct cpu_signature *sig = &per_cpu(cpu_sig, ver->xen_cpuid);
+
+            ver->cpu_signature = sig->sig;
+            ver->pf = sig->pf;
+            ver->ucode_revision = sig->rev;
+        }
+
+        put_cpu_maps();
+
+        if ( __copy_field_to_guest(u_xenpf_op, op, u.ucode_version) )
+            ret = -EFAULT;
+    }
+    break;
+
     case XENPF_cpu_online:
     {
         int cpu = op->u.cpu_ol.cpuid;
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 14784dfa77..1a24dc24c0 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -610,6 +610,19 @@ DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
 typedef struct dom0_vga_console_info xenpf_dom0_console_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_dom0_console_t);
 
+#define XENPF_get_ucode_version 65
+struct xenpf_ucode_version {
+    uint32_t xen_cpuid;       /* IN:  CPU number to get the revision from.  */
+                              /*      Return data should be equal among all */
+                              /*      the CPUs.                             */
+    uint32_t cpu_signature;   /* OUT: CPU signature (CPUID.1.EAX).          */
+    uint32_t pf;              /* OUT: Processor Flags.                      */
+                              /*      Only applicable to Intel.             */
+    uint32_t ucode_revision;  /* OUT: Microcode Revision.                   */
+};
+typedef struct xenpf_ucode_version xenpf_ucode_version_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_ucode_version_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -641,6 +654,7 @@ struct xen_platform_op {
         xenpf_resource_op_t           resource_op;
         xenpf_symdata_t               symdata;
         xenpf_dom0_console_t          dom0_console;
+        xenpf_ucode_version_t         ucode_version;
         uint8_t                       pad[128];
     } u;
 };
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index d601a8a984..164f700eb6 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -157,6 +157,7 @@
 ?	xenpf_pcpuinfo			platform.h
 ?	xenpf_pcpu_version		platform.h
 ?	xenpf_resource_entry		platform.h
+?	xenpf_ucode_version		platform.h
 ?	pmu_data			pmu.h
 ?	pmu_params			pmu.h
 !	sched_poll			sched.h
-- 
2.17.1
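For reference, the raw value printed as cpu_signature above is CPUID leaf 1
EAX; the family/model/stepping fields reported by show_curr_cpu() follow the
standard encoding, including the extended family/model adjustments. A minimal
decoding sketch (helper names are illustrative, not part of the patch):

```c
#include <stdint.h>

/* Decode CPUID.1.EAX into display family/model/stepping.  Field layout:
 * [3:0] stepping, [7:4] model, [11:8] family, [19:16] extended model,
 * [27:20] extended family. */
static unsigned int sig_family(uint32_t sig)
{
    unsigned int fam = (sig >> 8) & 0xf;

    if ( fam == 0xf )                         /* extended family applies */
        fam += (sig >> 20) & 0xff;

    return fam;
}

static unsigned int sig_model(uint32_t sig)
{
    unsigned int base_fam = (sig >> 8) & 0xf;
    unsigned int model = (sig >> 4) & 0xf;

    if ( base_fam == 0x6 || base_fam == 0xf ) /* extended model applies */
        model |= ((sig >> 16) & 0xf) << 4;

    return model;
}

static unsigned int sig_stepping(uint32_t sig)
{
    return sig & 0xf;
}
```

E.g. the Haswell signature 0x000306c3 decodes as family 6, model 0x3c,
stepping 3; the AMD signature 0x00800f82 decodes as family 0x17, model 0x08.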



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 12:12:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 12:12:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477263.739882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIus-00079S-H5; Fri, 13 Jan 2023 12:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477263.739882; Fri, 13 Jan 2023 12:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGIus-00079L-E2; Fri, 13 Jan 2023 12:12:14 +0000
Received: by outflank-mailman (input) for mailman id 477263;
 Fri, 13 Jan 2023 12:12:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=grKZ=5K=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pGIur-00079F-Aj
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 12:12:13 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2050.outbound.protection.outlook.com [40.107.6.50])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80b8d274-933b-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 13:12:09 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8962.eurprd04.prod.outlook.com (2603:10a6:20b:42d::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 12:12:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 12:12:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80b8d274-933b-11ed-b8d0-410ff93cb8f0
Message-ID: <512d8768-28f6-d9d6-c1cc-18c5fbf2a636@suse.com>
Date: Fri, 13 Jan 2023 13:12:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
 <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
 <b76a7834-9868-c5c2-e058-89911a552c80@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b76a7834-9868-c5c2-e058-89911a552c80@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 13.01.2023 12:55, Xenia Ragiadakou wrote:
> On 1/13/23 11:53, Jan Beulich wrote:
>> On 13.01.2023 10:34, Xenia Ragiadakou wrote:
>>> On 1/13/23 10:47, Jan Beulich wrote:
>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>>>        if ( !acpi_disabled )
>>>>        {
>>>>            ret = acpi_dmar_init();
>>>> +
>>>> +#ifndef iommu_snoop
>>>> +        /* A command line override for snoop control affects VT-d only. */
>>>> +        if ( ret )
>>>> +            iommu_snoop = true;
>>>> +#endif
>>>> +
>>>
>>> Why here iommu_snoop is forced when iommu is not enabled?
>>> This change is confusing because later on, in iommu_setup, iommu_snoop
>>> will be set to false in the case of !iommu_enabled.
>>
>> Counter question: Why is it being set to false there? I see no reason at
>> all. On the same basis as here, I'd actually expect it to be set back to
true in such a case. Which, however, would be a benign change now that
>> all uses of the variable are properly qualified. Which in turn is why I
>> thought I'd leave that other place alone.
> 
> I think I got confused... AFAIU with disabled iommu snooping cannot be 
> enforced i.e iommu_snoop=true translates to snooping is enforced by the 
> iommu (that's why we need to check that the iommu is enabled for the 
> guest). So if the iommu is disabled how can iommu_snoop be true?

For a non-existent (or disabled) IOMMU the value of this boolean simply
is irrelevant. Or to put it in other words, when there's no active
IOMMU, it doesn't matter whether it would actually snoop.

> Also, in vmx_do_resume(), iommu_snoop is used without checking if the 
> iommu is enabled.

It only looks to be - a domain having a PCI device implies it having
IOMMU enabled for it. And indeed in that case we'd like to avoid the
effort for domains which have IOMMU support enabled for them, but which
have no devices assigned to them.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 12:24:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 12:24:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477269.739894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJ6w-0000EI-Jv; Fri, 13 Jan 2023 12:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477269.739894; Fri, 13 Jan 2023 12:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJ6w-0000EB-H8; Fri, 13 Jan 2023 12:24:42 +0000
Received: by outflank-mailman (input) for mailman id 477269;
 Fri, 13 Jan 2023 12:24:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xRhM=5K=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pGJ6v-0000E5-2t
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 12:24:41 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2058.outbound.protection.outlook.com [40.107.244.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ed5d26a-933d-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 13:24:39 +0100 (CET)
Received: from BN9PR03CA0439.namprd03.prod.outlook.com (2603:10b6:408:113::24)
 by PH0PR12MB5466.namprd12.prod.outlook.com (2603:10b6:510:d7::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 12:24:35 +0000
Received: from BL02EPF0000C404.namprd05.prod.outlook.com
 (2603:10b6:408:113:cafe::3b) by BN9PR03CA0439.outlook.office365.com
 (2603:10b6:408:113::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Fri, 13 Jan 2023 12:24:35 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF0000C404.mail.protection.outlook.com (10.167.241.6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Fri, 13 Jan 2023 12:24:34 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 13 Jan
 2023 06:24:34 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 13 Jan
 2023 06:24:34 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 13 Jan 2023 06:24:32 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ed5d26a-933d-11ed-91b6-6bf2151ebd3b
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<michal.orzel@amd.com>, Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Subject: [XEN v5] xen/arm: Probe the load/entry point address of an uImage correctly
Date: Fri, 13 Jan 2023 12:24:23 +0000
Message-ID: <20230113122423.22902-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain

Currently, kernel_uimage_probe() does not read the load/entry point address
set in the uImage header. Thus, info->zimage.start is 0 (default value). This
causes kernel_zimage_place() to treat the binary (contained within the
uImage) as a position independent executable, and thus to load it at an
incorrect address.

The correct approach is to read "uimage.load" and set info->zimage.start,
ensuring that the binary is loaded at the correct address, and to read
"uimage.ep" to set info->entry (i.e. the kernel entry address).

If the user provides a load address (i.e. "uimage.load") of 0x0, the image
is treated as a position independent executable. Xen can load such an image
at any address it considers appropriate. A position independent executable
cannot have a fixed entry point address.

This behavior is applicable for both arm32 and arm64 platforms.

Earlier, on both arm32 and arm64 platforms, Xen ignored the load and entry
point addresses set in the uImage header. With this commit, Xen will use
them, making its handling of uImage headers consistent with U-Boot's.

Users who want to use Xen with statically partitioned domains can provide
non-zero load and entry addresses for the dom0/domU kernel. The load and
entry addresses provided must be within the memory region allocated by
Xen.

A deviation from U-Boot's behaviour is that we consider a load address of
0x0 to denote that the image supports position independent execution. This
makes the behaviour consistent across uImage and zImage.
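The rule described in the paragraphs above can be sketched as a small pure
function (a simplification for illustration, not the actual Xen code; the
zero-entry fallback mirrors the behaviour of kernel_zimage_load()):

```c
#include <stdint.h>

/* load/ep are the big-endian-decoded uimage.load and uimage.ep fields.
 * A load address of 0x0 marks the image as position independent, in
 * which case a non-zero entry point is rejected; otherwise a zero entry
 * point falls back to the load address.  Returns the effective entry
 * address, 0 when Xen is free to pick it, or -1 on error. */
static int64_t effective_entry(uint32_t load, uint32_t ep)
{
    if ( load == 0 )
        return ep == 0 ? 0 : -1;   /* PIE: Xen picks load and entry */

    return ep != 0 ? ep : load;
}
```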

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from v1 :-
1. Added a check to ensure load address and entry address are the same.
2. Considered load address == 0x0 as position independent execution.
3. Ensured that the uImage header interpretation is consistent across
arm32 and arm64.

v2 :-
1. Mentioned the change in existing behavior in booting.txt.
2. Updated booting.txt with a new section to document "Booting Guests".

v3 :-
1. Removed the constraint that the entry point should be the same as the load
address. Thus, Xen uses both the load address and entry point to determine
where the image is to be copied and the start address.
2. Updated documentation to denote that load address and start address
should be within the memory region allocated by Xen.
3. Added constraint that user cannot provide entry point for a position
independent executable (PIE) image.

v4 :-
1. Explicitly mentioned the version in booting.txt from when the uImage
probing behavior has changed.
2. Logged the requested load address and entry point parsed from the uImage
header.
3. Some style issues.

 docs/misc/arm/booting.txt         | 26 ++++++++++++++++
 xen/arch/arm/include/asm/kernel.h |  2 +-
 xen/arch/arm/kernel.c             | 49 +++++++++++++++++++++++++++++--
 3 files changed, 73 insertions(+), 4 deletions(-)

diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
index 3e0c03e065..aeb0123e8d 100644
--- a/docs/misc/arm/booting.txt
+++ b/docs/misc/arm/booting.txt
@@ -23,6 +23,28 @@ The exceptions to this on 32-bit ARM are as follows:
 
 There are no exception on 64-bit ARM.
 
+Booting Guests
+--------------
+
+Xen supports the legacy image header[3], zImage protocol for 32-bit
+ARM Linux[1] and Image protocol defined for ARM64[2].
+
+Until Xen 4.17, in the case of the legacy image protocol, Xen ignored
+the load address and entry point specified in the header. This has now changed.
+
+Now, Xen loads the image at the load address provided in the header
+and uses the entry point as the kernel start address.
+
+A deviation from U-Boot is that Xen treats "load address == 0x0" as
+position independent execution (PIE). Thus, Xen will load such an image
+at an address it considers appropriate. Also, the user cannot specify
+the entry point of a PIE image since the start address cannot be
+predetermined.
+
+Users who want to use Xen with statically partitioned domains can provide
+fixed non-zero load and start addresses for the dom0/domU kernel. The load
+address and start address specified by the user must be within the memory
+region allocated by Xen.
 
 Firmware/bootloader requirements
 --------------------------------
@@ -39,3 +61,7 @@ Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/t
 
 [2] linux/Documentation/arm64/booting.rst
 Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst
+
+[3] legacy format header
+Latest version: https://source.denx.de/u-boot/u-boot/-/blob/master/include/image.h#L315
+https://linux.die.net/man/1/mkimage
diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
index 5bb30c3f2f..4617cdc83b 100644
--- a/xen/arch/arm/include/asm/kernel.h
+++ b/xen/arch/arm/include/asm/kernel.h
@@ -72,7 +72,7 @@ struct kernel_info {
 #ifdef CONFIG_ARM_64
             paddr_t text_offset; /* 64-bit Image only */
 #endif
-            paddr_t start; /* 32-bit zImage only */
+            paddr_t start; /* Must be 0 for 64-bit Image */
         } zimage;
     };
 };
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 23b840ea9e..0b7f591857 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -127,7 +127,7 @@ static paddr_t __init kernel_zimage_place(struct kernel_info *info)
     paddr_t load_addr;
 
 #ifdef CONFIG_ARM_64
-    if ( info->type == DOMAIN_64BIT )
+    if ( (info->type == DOMAIN_64BIT) && (info->zimage.start == 0) )
         return info->mem.bank[0].start + info->zimage.text_offset;
 #endif
 
@@ -162,7 +162,12 @@ static void __init kernel_zimage_load(struct kernel_info *info)
     void *kernel;
     int rc;
 
-    info->entry = load_addr;
+    /*
+     * If the image does not have a fixed entry point, then use the load
+     * address as the entry point.
+     */
+    if ( info->entry == 0 )
+        info->entry = load_addr;
 
     place_modules(info, load_addr, load_addr + len);
 
@@ -223,10 +228,38 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
     if ( len > size - sizeof(uimage) )
         return -EINVAL;
 
+    info->zimage.start = be32_to_cpu(uimage.load);
+    info->entry = be32_to_cpu(uimage.ep);
+
+    /*
+     * While uboot considers 0x0 to be a valid load/start address, for Xen
+     * to maintain parity with zImage, we consider 0x0 to denote a position
+     * independent image. That means Xen is free to load such an image at
+     * any valid address.
+     */
+    if ( info->zimage.start == 0 )
+        printk(XENLOG_INFO
+               "No load address provided. Xen will decide where to load it.\n");
+    else
+        printk(XENLOG_INFO
+               "Provided load address: %"PRIpaddr" and entry address: %"PRIpaddr"\n",
+               info->zimage.start, info->entry);
+
+    /*
+     * If the image supports position independent execution, the user cannot
+     * provide an entry point, as Xen will load such an image at an address
+     * of its choosing. Thus, we return an error.
+     */
+    if ( (info->zimage.start == 0) && (info->entry != 0) )
+    {
+        printk(XENLOG_ERR
+               "Entry point cannot be non-zero for a PIE image.\n");
+        return -EINVAL;
+    }
+
     info->zimage.kernel_addr = addr + sizeof(uimage);
     info->zimage.len = len;
 
-    info->entry = info->zimage.start;
     info->load = kernel_zimage_load;
 
 #ifdef CONFIG_ARM_64
@@ -366,6 +399,7 @@ static int __init kernel_zimage64_probe(struct kernel_info *info,
     info->zimage.kernel_addr = addr;
     info->zimage.len = end - start;
     info->zimage.text_offset = zimage.text_offset;
+    info->zimage.start = 0;
 
     info->load = kernel_zimage_load;
 
@@ -436,6 +470,15 @@ int __init kernel_probe(struct kernel_info *info,
     u64 kernel_addr, initrd_addr, dtb_addr, size;
     int rc;
 
+    /*
+     * We need to initialize info->entry to 0. This field may be populated
+     * during kernel_xxx_probe() if the image has a fixed entry point (e.g.
+     * uimage.ep).
+     * We will use this to determine if the image has a fixed entry point or
+     * whether the load address should be used as the entry point.
+     */
+    info->entry = 0;
+
     /* domain is NULL only for the hardware domain */
     if ( domain == NULL )
     {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 12:27:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 12:27:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477274.739905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJA3-0000oy-2n; Fri, 13 Jan 2023 12:27:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477274.739905; Fri, 13 Jan 2023 12:27:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJA3-0000or-07; Fri, 13 Jan 2023 12:27:55 +0000
Received: by outflank-mailman (input) for mailman id 477274;
 Fri, 13 Jan 2023 12:27:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xRhM=5K=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pGJA1-0000oh-Im
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 12:27:53 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2058.outbound.protection.outlook.com [40.107.237.58])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b0eedc29-933d-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 13:27:50 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by CY8PR12MB8196.namprd12.prod.outlook.com (2603:10b6:930:78::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 12:27:48 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%4]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 12:27:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0eedc29-933d-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dpFxG4jCr/SKmLuflJBR9stC570YowG1ZQoxv0txNJK2hbBgObxxFQMetCxkx0s8eFmhut1ojz0yTjFLDkPHdGGdrA8BUKLB/R2TiMt8M1hMwxNoePrM7OeI6V85tg6/S8SWJ6BPhsIa1LltGu/5B6GUBniUYvZ8g+n59UbNFhRGIB92tcn1oWf0oBmx32tBYh6aEWU5YeoofZ6jmw1F8vTb2CwL/EyaO00zernFQL363AlANuY9LnJEE8u6/kz0uyDe/FJbMgmiuyk++8+2BGRCQtdz2BolmelBDs2ix5okV79a8mEV5dz/ZEnOb3fD/34ytp2N5c/LYEcBIjbrKQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rsqW6VEFQgcSxHB4O0hC9OpgzP3TyDZ46hREDKuHIuY=;
 b=WWf4fouXmaovKcSuDABGWj/BcGu5f4W29d37vSf+7ZQcOTsovzqOLzH8hRlOkGjOs8vjDozDHaiTI1G+kAhPUJMJ46bNz4ixx81YDrDKPNUtlc0LPzV/Uhs03xfhj0ddQLvikWDZBE9FwOkwCMbJWTcvj9f/Iqg0PlNBucVq1+DKwXma7GrzjwmA9uGUrWYtNUakNpPGF2Ct9AlYJXcvvxP736kh5PcLTQqqHBU1ZjJxrpZouGdrEco41+sjvaPqaIeqzDyHMyk1ThsSAuanfBbA9mvKa5zmUFHQdgZc1e3mjhjgfjW9iYsLzKkWcAF/cOq9LtpwZ9dUnJQbKVgjjA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rsqW6VEFQgcSxHB4O0hC9OpgzP3TyDZ46hREDKuHIuY=;
 b=Zx/X6WnOYVrIN54sYNo9cd6WBoOlxlT0wMjaAqqOkL+nfbDuetJxxsnidmFvIuvGmX3XujvlmCAyjozxlIfD8X1wu0BPM/EajY5nW0XbqQkxtgrwgvgSZDe3/NvcjoW5cuCtvzDePzvwypMkgoaWHsfKcnK489QTq6ODa48K4c8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <52139018-abfe-4083-c955-70d832aa3d1c@amd.com>
Date: Fri, 13 Jan 2023 12:27:41 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v4] xen/arm: Probe the load/entry point address of an uImage
 correctly
To: Dmytro Firsov <Dmytro_Firsov@epam.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>, Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "stefano.stabellini@amd.com" <stefano.stabellini@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 "michal.orzel@amd.com" <michal.orzel@amd.com>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>
References: <20221221185300.5309-1-ayan.kumar.halder@amd.com>
 <e26768b7-99f7-f4e4-6ae5-094d17e1594a@xen.org>
 <20b15211-492b-713e-288c-14bd5e137ed7@gmail.com>
 <ff1aa8c9-34a0-72a3-7a9e-c9a4fee93561@epam.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <ff1aa8c9-34a0-72a3-7a9e-c9a4fee93561@epam.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0127.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::19) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|CY8PR12MB8196:EE_
X-MS-Office365-Filtering-Correlation-Id: 619c01b0-a6bb-450e-6128-08daf5619453
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 619c01b0-a6bb-450e-6128-08daf5619453
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 12:27:47.7659
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: S0Gulg7exijUM77ZV6Ws4zqpMQJnxLPJWObIx8pX7aaHERAvhJn4ITDynpskmWuOwlam79wllG+Xr0Uz2NCMJQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8196

Hi All,

On 10/01/2023 12:30, Dmytro Firsov wrote:
> On 09.01.23 11:36, Oleksandr Tyshchenko wrote:
>>
>> On 08.01.23 18:06, Julien Grall wrote:
>>
>> Hello Julien, Ayan, all
>>
>>> Hi Ayan,
>>> ...
>>> The changes look good to me (with a few of comments below). That
>>> said, before acking the code, I would like an existing user of uImage
>>> (maybe EPAM or Arm?) to confirm they are happy with the change.
>> I have just re-checked the current patch in our typical Xen-based
>> environment (no dom0less, Linux in Dom0) and didn't notice any issues
>> with it. But we use zImage for Dom0's kernel, so kernel_uimage_probe() is
>> not called.
>>
>>
>> I CCed Dmytro Firsov who is playing with Zephyr in Dom0 and *might*
>> use uImage.
> Hi Oleksandr, Julien, all
>
> The current Xenutils/Zephyr Dom0 setup uses the standard format for Zephyr
> on arm64, which is zImage. Thus the uImage changes will not affect me.

Many thanks for the confirmation.

I have addressed all of Julien's comments and sent out "[XEN v5] 
xen/arm: Probe the load/entry point address of an uImage correctly".

- Ayan

>


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 12:54:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 12:54:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477283.739916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJZE-00049x-7I; Fri, 13 Jan 2023 12:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477283.739916; Fri, 13 Jan 2023 12:53:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJZE-00049q-3q; Fri, 13 Jan 2023 12:53:56 +0000
Received: by outflank-mailman (input) for mailman id 477283;
 Fri, 13 Jan 2023 12:53:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGJZD-00049k-CM
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 12:53:55 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2086.outbound.protection.outlook.com [40.107.103.86])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5481ee68-9341-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 13:53:52 +0100 (CET)
Received: from DB7PR02CA0016.eurprd02.prod.outlook.com (2603:10a6:10:52::29)
 by DU2PR08MB10105.eurprd08.prod.outlook.com (2603:10a6:10:46c::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 12:53:50 +0000
Received: from DBAEUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:52:cafe::b8) by DB7PR02CA0016.outlook.office365.com
 (2603:10a6:10:52::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13 via Frontend
 Transport; Fri, 13 Jan 2023 12:53:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT055.mail.protection.outlook.com (100.127.142.171) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 12:53:49 +0000
Received: ("Tessian outbound 0d7b2ab0f13d:v132");
 Fri, 13 Jan 2023 12:53:49 +0000
Received: from a4eb5f8d0530.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1D55CA4F-896E-4A48-BD0E-3B7EB780770B.1; 
 Fri, 13 Jan 2023 12:53:42 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a4eb5f8d0530.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 12:53:42 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB9607.eurprd08.prod.outlook.com (2603:10a6:10:449::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 12:53:37 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 12:53:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5481ee68-9341-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EOCzE38qMEN6kl5i4vxqfCt4TmImBh08mDzsr8cyH/k=;
 b=IfYC3nLuAVlEDnrh7hK0YD3z7J5UIuOKgi1Kr+BDzusr1/iRtlFtAfG4seN0+ELJAETvK27pi4tZEEXfL18cIsAk0MbOdZLCALdoFcBZ1SpecWgXl38J+WTetg/4j1GJRMci8clen+vdUIuwPwNvJi7PSH3cyDm+tnU9kcrpLH8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: c0104c1567a6cc9d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dFo7mQSuazCVK+idQCXgQ4qg+ddui+Zlp1IrO6yuajIwByI2L/vLFy97Mzw1suN8zLJxjVsb6AlSviuuTkwosXJrE3XXU5B9oAs0GW/E27dJj0ZjLRNU+6oNM4PKM6osR7GnfkksNFR25vcuLpOtFg46JIqxMuR0HynWIDG/qjIgdd1xX/JAtYPRmceLsfEJXwkkiSjqVBjNJrUmDIkLbgheJlP04jbEqLb+0fTMoGZcLty0Len08Qt0GC4qNJEm/EQSMNZ45R/B4gVW9bPHOxP47i10CeUY1588F2x7/P5rcIzq/H4erbE0gQi0/qyY5qxGinuOJCcN5L69eE864g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EOCzE38qMEN6kl5i4vxqfCt4TmImBh08mDzsr8cyH/k=;
 b=YABHPW+Fjyvd/quSlqlmEzS0SmSJ7rk5awh9Q80ATidxlncanAWQ2KyYRGuzq1exmVOI8FbHsCNdaFBKxRC3I2guZcd8S2zPBEzCKKJwD/Ba5o6kHVAxPemD8bj6fh+HppFEt84RpPk7IDimGpXHw9aRnwz/uLUyc8IruYfCKrvZQtyuuP4/+ughQ3kuLXRKhpAiiuQgZb9R6vof9oHvdgSW58jwIMZn3mcpSh2n8qdglxUz/FP9O3aHOZacH3oC4ccpIYZQhm1Qr3BT4t+tHTKVnbquPynLpKkGZSf2RANG5VswgX47zV6vEThgl+HU6bBpvkpOImv7l+ZZ39catw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EOCzE38qMEN6kl5i4vxqfCt4TmImBh08mDzsr8cyH/k=;
 b=IfYC3nLuAVlEDnrh7hK0YD3z7J5UIuOKgi1Kr+BDzusr1/iRtlFtAfG4seN0+ELJAETvK27pi4tZEEXfL18cIsAk0MbOdZLCALdoFcBZ1SpecWgXl38J+WTetg/4j1GJRMci8clen+vdUIuwPwNvJi7PSH3cyDm+tnU9kcrpLH8=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [RFC PATCH 1/8] xen/arm: enable SVE extension for Xen
Thread-Topic: [RFC PATCH 1/8] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZJcp25sHFUmTjSUKVOX9zw5fwA66ZdWWAgAElHQCAAXLngIAAQw0A
Date: Fri, 13 Jan 2023 12:53:35 +0000
Message-ID: <3D8C1980-E88F-497D-AACE-06438743C83B@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <20230111143826.3224-2-luca.fancellu@arm.com>
 <e37e5564-e7b9-c9d2-1360-171c014649c7@xen.org>
 <85F9C725-816A-46EE-AD0C-2725AE13F14C@arm.com>
 <0d736370-5dd9-637d-c6d2-74dfb7e4209e@xen.org>
In-Reply-To: <0d736370-5dd9-637d-c6d2-74dfb7e4209e@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB9607:EE_|DBAEUR03FT055:EE_|DU2PR08MB10105:EE_
X-MS-Office365-Filtering-Correlation-Id: 7bc9ea40-4e98-46cf-68e2-08daf565379c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <795045D60640C34D9C546595BC8AB8B8@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9607
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	54e391e9-1423-4595-32fc-08daf5652ee7
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xL5os4dGa4YQ++eP5HVzj7/dVdFyRtjSyvBUwsCVZKC3Pky5xh2jepHXdF3agssKbZwj9v6jpbHHBkmW6Xag2lOio6rhpWnuQ/YlEqHE3nODs/EtqA5OlsCC9rhisAZyTkcr04P/ASUiY4I/1a5KQhFDPT/Dl+qEtw+N4FhmLklDcsld9Yi/mL7Y0B7B5u0nnS+gcMyyBtFTmJ+l/9Rn0X/glCdZHJCgPKiQFvQtu3M8XMbOv0d9d8nrTVtOC4vBxeNWEG6t0aHBkKZU0SK8jZT5UF1HTl5NtDkQdRKbM+07KWLuDvskud1M4pfEINJcS4ya7fVsFlus3cmRHQoFhIXcuE2u9AI7Kj7SJ3WcVQfhnt3+SokVvreCI8r92aifD35z0gOC+1Ub2GB68tgujdcxQu0VGb0+trzsjALS+ogyXUCrtcxPf20CtpGMAAZzWk8WkJfFZ12EinhTS065/NcMPBOAV9aFMt/5ehQhJrEORfJfdrLv0P2T/hHhyDf1I5kzQ6Qh8Vi2M/L+4qVUifz3IjuT/6rtNarncsMHrb4AGHk8IaQqulqZntQiDc9XeE6875GlQX0T+D8uzs6T8pXzjZnTX/nKEymGMMA4P54LJdRkOxvN8hdok33UojKZPTh/mxwQTAAjEBdG3EVUMabdd9v8TiahDo32Tyg0fOfwHEF9yWtDDdQdfALdIZN3sWbVVPZvR3Awm3wWpcH55A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(346002)(376002)(451199015)(46966006)(36840700001)(40470700004)(82740400003)(81166007)(356005)(36860700001)(70206006)(4326008)(70586007)(8676002)(54906003)(41300700001)(33656002)(86362001)(47076005)(8936002)(40480700001)(83380400001)(107886003)(5660300002)(316002)(2616005)(336012)(6512007)(2906002)(40460700003)(82310400005)(6486002)(53546011)(186003)(6506007)(26005)(478600001)(6862004)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 12:53:49.9845
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7bc9ea40-4e98-46cf-68e2-08daf565379c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB10105

> On 13 Jan 2023, at 08:53, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 12/01/2023 10:46, Luca Fancellu wrote:
>>> On 11 Jan 2023, at 17:16, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Luca,
>>> 
>>> As this is an RFC, I will be mostly making general comments.
>> Hi Julien,
>> Thank you.
>>> 
>>> On 11/01/2023 14:38, Luca Fancellu wrote:
>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>> index 99577adb6c69..8ea3843ea8e8 100644
>>>> --- a/xen/arch/arm/domain.c
>>>> +++ b/xen/arch/arm/domain.c
>>>> @@ -181,6 +181,8 @@ static void ctxt_switch_to(struct vcpu *n)
>>>>      /* VGIC */
>>>>      gic_restore_state(n);
>>>>  +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>> 
>>> Shouldn't this need an isb() afterwards do ensure that any previously trapped will be accessible?
>> Yes you are right, would it be ok for you if I move this before gic_restore_state because it inside
>> has an isb()? This to limit the isb() usage. I could put also a comment to don’t forget it.
> 
> I would rather prefer if we don't rely on gic_restore_state() to have an isb() because this could change in the future (although unlikely).
> 
> Looking at the context switch code, I think we can move the call to restore the floating register towards the end of the helper and use one of the existing isb() for our purpose.

Sounds good to me

> 
> 
>>>> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>>>>    void init_traps(void)
>>>>  {
>>>> +    register_t cptr_bits = get_default_cptr_flags();
>>>>      /*
>>>>       * Setup Hyp vector base. Note they might get updated with the
>>>>       * branch predictor hardening.
>>>> @@ -135,17 +151,15 @@ void init_traps(void)
>>>>      /* Trap CP15 c15 used for implementation defined registers */
>>>>      WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>>>>  -    /* Trap all coprocessor registers (0-13) except cp10 and
>>>> -     * cp11 for VFP.
>>>> -     *
>>>> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
>>>> -     *
>>>> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
>>>> -     * RES1, i.e. they would trap whether we did this write or not.
>>>> +#ifdef CONFIG_ARM64_SVE
>>>> +    /*
>>>> +     * Don't trap SVE now, Xen might need to access ZCR reg in cpufeature code,
>>>> +     * trapping again or not will be handled on vcpu creation/scheduling later
>>>> +     */
>>> 
>>> Instead of enable by default at boot, can we try to enable/disable only when this is strictly needed?
>> Yes we could un-trap inside compute_max_zcr() just before accessing SVE resources and trap it
>> again when finished. Would it be ok for you this approach?
> 
> Yes.
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 12:58:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 12:58:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477288.739927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJd4-0004lg-PN; Fri, 13 Jan 2023 12:57:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477288.739927; Fri, 13 Jan 2023 12:57:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJd4-0004lZ-MR; Fri, 13 Jan 2023 12:57:54 +0000
Received: by outflank-mailman (input) for mailman id 477288;
 Fri, 13 Jan 2023 12:57:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wa25=5K=zf.com=youssef.elmesdadi@srs-se1.protection.inumbo.net>)
 id 1pGJd3-0004lT-LL
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 12:57:53 +0000
Received: from de-smtp-delivery-114.mimecast.com
 (de-smtp-delivery-114.mimecast.com [194.104.109.114])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e267a9f2-9341-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 13:57:50 +0100 (CET)
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2050.outbound.protection.outlook.com [104.47.2.50]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-43-E39RiVznMZKprJ2LD2wYHQ-1; Fri, 13 Jan 2023 13:56:55 +0100
Received: from AM5PR0802MB2578.eurprd08.prod.outlook.com
 (2603:10a6:203:9e::22) by AS4PR08MB7902.eurprd08.prod.outlook.com
 (2603:10a6:20b:51d::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Fri, 13 Jan
 2023 12:56:38 +0000
Received: from AM5PR0802MB2578.eurprd08.prod.outlook.com
 ([fe80::f7f1:3b45:d707:62b7]) by AM5PR0802MB2578.eurprd08.prod.outlook.com
 ([fe80::f7f1:3b45:d707:62b7%2]) with mapi id 15.20.6002.012; Fri, 13 Jan 2023
 12:56:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e267a9f2-9341-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zf.com; s=mczfcom20220728;
	t=1673614669;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OMUsDPjysjWhzAncie4fu+q+QTc09bE4gxqQRsqN9X0=;
	b=dAwbw9rYY/wJ7bfJkNpEha8u4TMMswWtjF8VXo2PNVH0u9dFRc1I7ZxTXa5zLCeYXPT9WV
	eqjp09fNHiNM2YYZNDCxXrwZqpC8NqqVa5aoLSpTYiWQpsMGODCj7VJfrg0iZJGga7adOr
	0EfzqAX4YJl/u7qq2nsg1uA5MHpBI3U=
X-MC-Unique: E39RiVznMZKprJ2LD2wYHQ-1
From: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
Thread-Topic: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
Thread-Index: Adkk/qg7kRGuW6KDReaFxxX6SSJ/1gA5Tq8AAFmZfbA=
Date: Fri, 13 Jan 2023 12:56:37 +0000
Message-ID: <AM5PR0802MB2578A1389424064D6884588E9DC29@AM5PR0802MB2578.eurprd08.prod.outlook.com>
References: <AM5PR0802MB25781717167B5BFC980BF2A49DFF9@AM5PR0802MB2578.eurprd08.prod.outlook.com>
 <3e7059c2-0d23-03f2-9a93-f88de09171f4@xen.org>
In-Reply-To: <3e7059c2-0d23-03f2-9a93-f88de09171f4@xen.org>
Accept-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
MIME-Version: 1.0
X-OriginatorOrg: zf.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM5PR0802MB2578.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 07d9372f-a365-415d-36d6-08daf5659bbe
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Jan 2023 12:56:37.9626
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: eb70b763-b6d7-4486-8555-8831709a784e
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7902
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: zf.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hello Julien,

>>> xentrace should work on upstream Xen. What version did you try?

While building my image using the BSP Linux from NXP, the version that was downloaded is Xen 4.14.

>>> Can you also clarify the error you are seeing?

The error I receive when typing xentrace is: "Command not found". I assume that in this Xen version, xentrace is not compatible with the ARM architecture.

My question is: is there any newer version of Xen that supports xentrace on ARM? If yes, how could I install it? Xen 4.14 was installed automatically by adding this ( DISTRO_FEATURES_append += "xen" ) to the local.conf file while creating my image.

Or is there any source for xentrace that is compatible with ARM on GitHub, which I could download and compile myself?

>>> Yes if you assign (or provide a para-virtualized driver for) the GPIO/LED/CAN interface to the guest.

Is there any tutorial that could help me create those drivers? And how complicated is it to create them?

Or can they be assigned just with PCI passthrough?

Thank you so much for the help.

Cheers,
Youssef El Mesdadi

-----Original Message-----
From: Julien Grall <julien@xen.org>
Sent: Wednesday, 11 January 2023 18:41
To: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>; xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)

(+Stefano)

On 10/01/2023 14:20, El Mesdadi Youssef ESK UILD7 wrote:
>>> Hello,
> 
> I'm trying to measure the performance of Xen on the S32G3 microprocessor. Unfortunately, after following the BSP Linux instructions to install Xen, I found that xentrace is not available or not compatible with the ARM architecture. I have seen some studies on Xilinx, and how they made xentrace work on ARM, but I have no resources or access to get that and make it work on my board. If there is any help I would appreciate it, and thanks in advance.

xentrace should work on upstream Xen. What version did you try?
Can you also clarify the error you are seeing?

> 
>>> I have some extra questions, and it would be helpful if you shared your 
>>> ideas with me,
> 
> 
>    *   Is it possible to run a native application (C code) on the virtual machine, turn on an LED, have access to the GPIO, or send some messages using the CAN interface?

Yes, if you assign (or provide a para-virtualized driver for) the GPIO/LED/CAN interface to the guest.

>    *   My board has no Ethernet and no external SD card. Is there any method I can use to build a kernel for an operating system on my laptop, and transfer it to the board?

I am confused: if you don't have network access or an external SD card, then how did you first put Xen on your system?

In theory you could transfer the binary (using base64) via the serial console. But that's hackish. Instead, I would recommend speaking with the board vendor and asking them how you can upload your own software.

>    *   Any suggestions in detail on how to measure the interrupt latency, Xen overhead, and context switch time (the time to switch from one VM to another; that's what I wanted to measure with Xenalyze)?

xentrace will be your best bet. Otherwise, you will need to implement custom tracing.

Cheers,

--
Julien Grall
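[Archive note: the "transfer the binary (using base64) via the serial console" idea suggested above can be sketched as the following console workflow. File names are purely illustrative; in practice the encoded text would be streamed through the board's serial console (e.g. via a terminal program) and decoded on the other side.]

```shell
# Stand-in for the real kernel/Xen image (illustrative only)
head -c 1024 /dev/urandom > kernel.bin
# Encode on the build machine; the .b64 file is plain ASCII,
# so it survives a paste through a serial console
base64 kernel.bin > kernel.b64
# ... send kernel.b64 over the serial console ...
# Decode on the board and verify integrity
base64 -d kernel.b64 > kernel.out
cmp -s kernel.bin kernel.out && echo "transfer ok"
```

As Julien notes, this is hackish; the vendor's supported upload path is preferable.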



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 13:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477295.739937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJfI-0006B6-6A; Fri, 13 Jan 2023 13:00:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477295.739937; Fri, 13 Jan 2023 13:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJfI-0006Az-3C; Fri, 13 Jan 2023 13:00:12 +0000
Received: by outflank-mailman (input) for mailman id 477295;
 Fri, 13 Jan 2023 13:00:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TZVY=5K=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pGJfF-0006AR-M6
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 13:00:10 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34f1a767-9342-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 14:00:08 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pGJfH-0066XJ-Al; Fri, 13 Jan 2023 13:00:12 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E5685300642;
 Fri, 13 Jan 2023 13:59:56 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id A17DB20B8E4E3; Fri, 13 Jan 2023 13:59:56 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34f1a767-9342-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=sfJlOYO8oT2oPc3YcXscu+eA6dh/Ct/nDZR0RvIAf6Y=; b=XiIJ6939tE1F6/5MIyvqQRsyX4
	zPQinpFwfCbUsr/ZeCKo0iPXa9gyR5C0u68GCX0LDeo4dwCSAuqGb1HcuNOc8EyIGcJlVk6QLr1sG
	3cTf/k7bOSKHZ5APi0Ip5t1q5jaPxnaaPwBGJY5teGYcBo0PPK9+hh1ipyYmWLNoBt/u2z+4wFvt0
	a/Pk3KzwPw7M9ERS7wWijK4ZoSmJCRqII4YP6dO0Z1jEHF5lhvvIIMmcRws26s7y6QzGOCEIPkv2J
	3hZs7m/RgwXavhyuPQO6FvjYJwAN6OjHM1gmDFTiNaUlXMGJb/cdlCfeemQxfBC4UgHBaVDU5QWeu
	9aIi9Flg==;
Date: Fri, 13 Jan 2023 13:59:56 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	x86@kernel.org
Subject: Re: [RFC][PATCH 0/6] x86: Fix suspend vs retbleed=stuff
Message-ID: <Y8FVzPWQOHl0H4CY@hirez.programming.kicks-ass.net>
References: <20230112143141.645645775@infradead.org>
 <20230113073938.1066227-1-joanbrugueram@gmail.com>
 <Y8EhucZfQ2IyJtnU@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Y8EhucZfQ2IyJtnU@hirez.programming.kicks-ass.net>

On Fri, Jan 13, 2023 at 10:17:46AM +0100, Peter Zijlstra wrote:

> > (2) Tracing with QEMU I still see two `sarq $5, %gs:0x1337B33F` before
> >     `%gs` is restored. Those correspond to the calls from
> >     `secondary_startup_64` in `arch/x86/kernel/head_64.S` to
> >     `verify_cpu` and `sev_verify_cbit`.
> >     Those don't cause a crash but look suspicious, are they correct?
> > 
> >     (There are also some `sarq`s in the call to `early_setup_idt` from
> >     `secondary_startup_64`, but `%gs` is restored immediately before)
> 
> OK, I'll have a look, thanks!

Definitely fishy and I'm not sure why SMP bringup doesn't burn. Trying
to figure out what to do about this.

One thing I noticed is that trampoline_start already does verify_cpu,
and perhaps we can make startup_64 also do it; then secondary_startup_64
doesn't have to do it (and the realmode trampolines aren't patched).

Doing that would also require pushing the whole SEV thing into the
trampoline, which then also gets rid of sev_verify_cbit, I think.

But this definitely needs more thinking -- this is not an area I've
poked at much before.


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:02:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 13:02:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477305.739948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJhD-0006oa-Mq; Fri, 13 Jan 2023 13:02:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477305.739948; Fri, 13 Jan 2023 13:02:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJhD-0006oT-KB; Fri, 13 Jan 2023 13:02:11 +0000
Received: by outflank-mailman (input) for mailman id 477305;
 Fri, 13 Jan 2023 13:02:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGJhB-0006o4-Fj
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 13:02:09 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2084.outbound.protection.outlook.com [40.107.20.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b81a98d-9342-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 14:02:07 +0100 (CET)
Received: from FR2P281CA0146.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:98::7) by
 AM9PR08MB6641.eurprd08.prod.outlook.com (2603:10a6:20b:306::8) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.12; Fri, 13 Jan 2023 13:01:55 +0000
Received: from VI1EUR03FT021.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:98:cafe::b8) by FR2P281CA0146.outlook.office365.com
 (2603:10a6:d10:98::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Fri, 13 Jan 2023 13:01:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT021.mail.protection.outlook.com (100.127.144.91) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 13:01:54 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Fri, 13 Jan 2023 13:01:54 +0000
Received: from 29ec6b98b14f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3E914AE9-2269-4D47-95D5-EA95F4F28A24.1; 
 Fri, 13 Jan 2023 13:01:43 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 29ec6b98b14f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 13:01:43 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by GV1PR08MB7939.eurprd08.prod.outlook.com (2603:10a6:150:8c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 13:01:38 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 13:01:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b81a98d-9342-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YydpJQ5BqAochuT5p5Ad58KsgXhuW3CfYok9+73yKz4=;
 b=yyL4AuS+98Lq2HsLwBEGZh2HeYvLXXcxeNOSdQbe+Zp3BC5sX3mN9nnNpdKkm5Y3oQw9gTxPmzMmipi+cdFAAGWIMOYmpbJvW+Zs0tpQpvE6Al61tbLSygxXHdg/tBg/cFPWgU0oWPuN6xRRCNQ4+lZAE4c09nfN6z66sK9IRIA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6b5028978a5c9f95
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UxYQ65e2ASmXXScWqOMUwdF4sohoJ8U8TACjyraNlxRfhrpG0CmAykbd1cNcb+J9syqZPCykn3WM42+xYmiSMqQLDrNJGeRhXKPYCAB3lzklTLGTSdQ7DO4s8+qBDGK9tOBYAvtz4U1tq7j+PqihHhVFfaRZ0vmqvsks+nTIywB5VMvS5U8VjfX3+dhJGplD/YgkpeT0AcdyWpePqdkmkR/K9K6t8sqpa+zvYeC2jfrKWxgRfj4icaAI0NXHn3u6qNmDNgKC4Jx+k87HXhSNqoW1g/fUcqM9qhlgsvbwvGp1qwwwkerdSl3O/7bOzQxxM7S1gyDabqSdL6Uvwzd3xg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YydpJQ5BqAochuT5p5Ad58KsgXhuW3CfYok9+73yKz4=;
 b=HT4in+xpV8iEJepsXbnNI4TNc6eUNpXJqGLs2g2PlmQqaeJ/Ovwk7sHG6VC2MArOSU5MxVv67k9MuRSjxwsknBJoysDx1XeRP2Z3ZMhgNNrGmizcBtyIT84YFO/BhArX13BLMCVu0qH2+T2jOjl1H5eCKdZKeTuRlmTn9qgbRfCSMG0ro+lHKlGjyeRI3gC/NZAJS8HtyL7zXHdjDe9VPqjR/EqLNdJktwsrLD5h3ph9bguoYmXwhIDz96X4EQVePusZreDpen8jqM3SDXqMQrc8FS7/TI7W28je1wNQwFsNpGN725yz9sJX54SK3j1HK3On/wnxwmS6Z8BsU21JAw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [RFC PATCH 2/8] xen/arm: add sve_vl_bits field to domain
Thread-Topic: [RFC PATCH 2/8] xen/arm: add sve_vl_bits field to domain
Thread-Index: AQHZJcqDGP094PnLfE+ehr42WQkWI66ZeH+AgAEkTQCAAXROAIAAQZoA
Date: Fri, 13 Jan 2023 13:01:37 +0000
Message-ID: <31CD6EFE-D460-40AF-AABA-D28772635288@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <20230111143826.3224-3-luca.fancellu@arm.com>
 <91b5c7db-ec9b-efa6-f5cf-dc5e8b176db6@xen.org>
 <9168CB2A-A1F1-43E0-9DAD-BB31AD3979E0@arm.com>
 <096b4129-ace6-01b6-85c1-b153d3bc4ada@xen.org>
In-Reply-To: <096b4129-ace6-01b6-85c1-b153d3bc4ada@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <32D4C7F170D1014D9CAF4FDC82146705@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7939
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2ef58ddb-ec29-4345-70d9-08daf5664e47
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	DyTvWELsstQIq+YRWfRLufFXsso4IghZ7lfasVDnUO4p3KdZ9w4fg0PMSUk9NekIox62WwKjOFvZnzxzL+NnYbzQpHbOo9uv0yu6fWzqNWzldrTq9cZp3av2RiVBh8g80IpRWEeGpp9r/wOwSFAPRysPme/qc4P0txWUtRlwzTSciGkVkw1n9MRWuA5jqxcVdQ7JxIzBV5GPAn5THdiiTldYaURRW6Gi31B+zP3IN1+2RJ9rQSYe0QBOLTuEghrjKDPxa6B+HV69j1uyVfNqzNaW/mLT7xrwiHHGq76pcYn7tWpJckcvyGE5yZSf3HLBKrl1mSW0D0nlw0k0qRQUgYN6sQr9s8oZYENaSEVNoyedYRQY9uAckxCWAvZWZcPLo/6V6AS6NaXwzbFQs4egAcTR8ptGfK4xmiULeM82vWoY20v2v/u9nuPVEnkU7EHVkVlKkQGq4eYJzwaXL8+Sd0XpB/AbxR4TNScHUG1dqkxZ3IrQOPleMWQPuUFgE7u0c6SsMT3oC/G64oSFuyAKqfbv/+cLxq34309M4wSBM4XPoVEyqOf6rM/WAtsSVtltsZgpZcVTSbd55dR7gfaez9EPYRyL0TDp0ar3tgIvIrysR7qroWBM4N++vCk3hO7l7xH47gGaEqID6A6TKjMidvFPtdAgZRHmh30eG7xJjucY5+QBoI6gkRcIPJPtkRHQoKQstQAuDiL/5O5qaUDYdQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(136003)(346002)(39860400002)(396003)(376002)(451199015)(36840700001)(40470700004)(46966006)(336012)(70206006)(70586007)(8676002)(356005)(4326008)(316002)(81166007)(2616005)(54906003)(33656002)(40480700001)(2906002)(36860700001)(5660300002)(82740400003)(47076005)(40460700003)(41300700001)(36756003)(6862004)(83380400001)(478600001)(8936002)(186003)(6506007)(6486002)(26005)(6512007)(86362001)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 13:01:54.6436
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 318c1bd7-4eb7-45f1-f68c-08daf5665892
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6641

>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>> index 8ea3843ea8e8..27f38729302b 100644
>>>> --- a/xen/arch/arm/domain.c
>>>> +++ b/xen/arch/arm/domain.c
>>>> @@ -13,6 +13,7 @@
>>>>  #include <xen/wait.h>
>>>>    #include <asm/alternative.h>
>>>> +#include <asm/arm64/sve.h>
>>>>  #include <asm/cpuerrata.h>
>>>>  #include <asm/cpufeature.h>
>>>>  #include <asm/current.h>
>>>> @@ -183,6 +184,11 @@ static void ctxt_switch_to(struct vcpu *n)
>>>>        WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>>>  +#ifdef CONFIG_ARM64_SVE
>>>> +    if ( is_sve_domain(n->domain) )
>>>> +        WRITE_SYSREG(n->arch.zcr_el2, ZCR_EL2);
>>>> +#endif
>>> 
>>> I would actually expect that is_sve_domain() returns false when the SVE is not enabled. So can we simply remove the #ifdef?
>> I was tricked by it too, the arm32 build will fail because it doesn’t know ZCR_EL2
> 
> Ok. In which case, I think we should move the call to sve_restore_state().

Ok for me, I will move the zcr_el2 introduction together with the context switch code introduced by the patch
later.

> 
>>> 
>>>> +
>>>>      /* VFP */
>>>>      vfp_restore_state(n);
>>>>  @@ -551,6 +557,11 @@ int arch_vcpu_create(struct vcpu *v)
>>>>      v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>>>>        v->arch.cptr_el2 = get_default_cptr_flags();
>>>> +    if ( is_sve_domain(v->domain) )
>>>> +    {
>>>> +        v->arch.cptr_el2 &= ~HCPTR_CP(8);
>>>> +        v->arch.zcr_el2 = vl_to_zcr(v->domain->arch.sve_vl_bits);
>>>> +    }
>>>>        v->arch.hcr_el2 = get_default_hcr_flags();
>>>>  @@ -595,6 +606,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>      unsigned int max_vcpus;
>>>>      unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>>      unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>>> +    unsigned int sve_vl_bits = config->arch.sve_vl_bits;
>>>>        if ( (config->flags & ~flags_optional) != flags_required )
>>>>      {
>>>> @@ -603,6 +615,36 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>          return -EINVAL;
>>>>      }
>>>>  +    /* Check feature flags */
>>>> +    if ( sve_vl_bits > 0 ) {
>>>> +        unsigned int zcr_max_bits;
>>>> +
>>>> +        if ( !cpu_has_sve )
>>>> +        {
>>>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>>> +            return -EINVAL;
>>>> +        }
>>>> +        else if ( !is_vl_valid(sve_vl_bits) )
>>>> +        {
>>>> +            dprintk(XENLOG_INFO, "Unsupported SVE vector length (%u)\n",
>>>> +                    sve_vl_bits);
>>>> +            return -EINVAL;
>>>> +        }
>>>> +        /*
>>>> +         * get_sys_vl_len() is the common safe value among all cpus, so if the
>>>> +         * value specified by the user is above that value, use the safe value
>>>> +         * instead.
>>>> +         */
>>>> +        zcr_max_bits = get_sys_vl_len();
>>>> +        if ( sve_vl_bits > zcr_max_bits )
>>>> +        {
>>>> +            config->arch.sve_vl_bits = zcr_max_bits;
>>>> +            dprintk(XENLOG_INFO,
>>>> +                    "SVE vector length lowered to %u, safe value among CPUs\n",
>>>> +                    zcr_max_bits);
>>>> +        }
>>> 
>>> I don't think Xen should "downgrade" the value. Instead, this should be the decision from the tools. So we want to find a different way to reproduce the value (Andrew may have some ideas here as he was looking at it).
>> Can you explain this in more detail?
> 
> I would extend XEN_SYSCTL_physinfo to return the maximum SVE vector length supported by the hardware.
> 
> Libxl would then read the value and compare to what the user requested. This would then be up to the toolstack to decide what to do.

Sounds good. Looking into struct xen_sysctl_physinfo, it seems that I might be the first user of the arch_capabilities field
(as well as introducing a new one for the VL value). Where can I put the define for the arch_capabilities flag?
Is it ok inside sysctl.h, something along these lines:

#define XEN_SYSCTL_PHYSCAP_ARM_SVE (1u << 0)

or

#define XEN_SYSCTL_PHYSCAP_ARM_SVE (1u)

And, is it ok to put the VL value in struct xen_sysctl_physinfo even if it’s just for arm64?


> 
>> By the tools you mean xl?
> 
> I think libxl should do strict checking. If we also want to reduce what the user requested, then this part should be in xl.
> 
>> This would be ok for DomUs, but how to do it for Dom0?
> Then we should fail to create dom0 and say the vector-length requested is not supported.

Fine for me

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:08:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 13:08:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477311.739959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJmt-0007Se-CW; Fri, 13 Jan 2023 13:08:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477311.739959; Fri, 13 Jan 2023 13:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJmt-0007SX-9p; Fri, 13 Jan 2023 13:08:03 +0000
Received: by outflank-mailman (input) for mailman id 477311;
 Fri, 13 Jan 2023 13:08:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3K7w=5K=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pGJmr-0007SR-Qe
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 13:08:01 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4d9d7d52-9343-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 14:07:59 +0100 (CET)
Received: by mail-ej1-x62f.google.com with SMTP id hw16so40350568ejc.10
 for <xen-devel@lists.xenproject.org>; Fri, 13 Jan 2023 05:07:59 -0800 (PST)
Received: from [192.168.1.93] (adsl-67.109.242.138.tellas.gr. [109.242.138.67])
 by smtp.gmail.com with ESMTPSA id
 wl21-20020a170907311500b0084d37cc06fesm7394568ejb.94.2023.01.13.05.07.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Jan 2023 05:07:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d9d7d52-9343-11ed-b8d0-410ff93cb8f0
Message-ID: <4f1d289a-7c3b-c4a1-34bc-1e8bd62a416a@gmail.com>
Date: Fri, 13 Jan 2023 15:07:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
 <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
 <b76a7834-9868-c5c2-e058-89911a552c80@gmail.com>
 <512d8768-28f6-d9d6-c1cc-18c5fbf2a636@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <512d8768-28f6-d9d6-c1cc-18c5fbf2a636@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/13/23 14:12, Jan Beulich wrote:
> On 13.01.2023 12:55, Xenia Ragiadakou wrote:
>> On 1/13/23 11:53, Jan Beulich wrote:
>>> On 13.01.2023 10:34, Xenia Ragiadakou wrote:
>>>> On 1/13/23 10:47, Jan Beulich wrote:
>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>>>>         if ( !acpi_disabled )
>>>>>         {
>>>>>             ret = acpi_dmar_init();
>>>>> +
>>>>> +#ifndef iommu_snoop
>>>>> +        /* A command line override for snoop control affects VT-d only. */
>>>>> +        if ( ret )
>>>>> +            iommu_snoop = true;
>>>>> +#endif
>>>>> +
>>>>
>>>> Why is iommu_snoop forced here when the iommu is not enabled?
>>>> This change is confusing because, later on in iommu_setup(), iommu_snoop
>>>> will be set to false in the case of !iommu_enabled.
>>>
>>> Counter question: Why is it being set to false there? I see no reason at
>>> all. On the same basis as here, I'd actually expect it to be set back to
>>> true in such a case. Which, however, would be a benign change now that
>>> all uses of the variable are properly qualified. Which in turn is why I
>>> thought I'd leave that other place alone.
>>
>> I think I got confused... AFAIU, with the iommu disabled, snooping cannot
>> be enforced, i.e. iommu_snoop=true translates to "snooping is enforced by
>> the iommu" (that's why we need to check that the iommu is enabled for the
>> guest). So if the iommu is disabled, how can iommu_snoop be true?
> 
> For a non-existent (or disabled) IOMMU the value of this boolean simply
> is irrelevant. Or to put it in other words, when there's no active
> IOMMU, it doesn't matter whether it would actually snoop.

The variable iommu_snoop is initialized to true.
Also, the above change does not prevent it from being overwritten 
through the cmdline iommu param in iommu_setup().
I'm afraid I still cannot understand why the change above is needed.

> 
>> Also, in vmx_do_resume(), iommu_snoop is used without checking if the
>> iommu is enabled.
> 
> It only looks to be - a domain having a PCI device implies it having
> IOMMU enabled for it. And indeed in that case we'd like to avoid the
> effort for domains which have IOMMU support enabled for them, but which
> have no devices assigned to them.
> 
> Jan

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:17:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 13:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477317.739971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJvU-0000VH-9w; Fri, 13 Jan 2023 13:16:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477317.739971; Fri, 13 Jan 2023 13:16:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGJvU-0000VA-66; Fri, 13 Jan 2023 13:16:56 +0000
Received: by outflank-mailman (input) for mailman id 477317;
 Fri, 13 Jan 2023 13:16:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B8NH=5K=gmail.com=mingo.kernel.org@srs-se1.protection.inumbo.net>)
 id 1pGJvS-0000V4-ST
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 13:16:54 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8b4da49a-9344-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 14:16:52 +0100 (CET)
Received: by mail-ej1-x634.google.com with SMTP id cf18so45878436ejb.5
 for <xen-devel@lists.xenproject.org>; Fri, 13 Jan 2023 05:16:52 -0800 (PST)
Received: from gmail.com ([31.46.242.235]) by smtp.gmail.com with ESMTPSA id
 b26-20020aa7dc1a000000b00499c3ca6a0dsm4114514edu.10.2023.01.13.05.16.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 13 Jan 2023 05:16:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 8b4da49a-9344-11ed-b8d0-410ff93cb8f0
Sender: Ingo Molnar <mingo.kernel.org@gmail.com>
Date: Fri, 13 Jan 2023 14:16:44 +0100
From: Ingo Molnar <mingo@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>,
	Kees Cook <keescook@chromium.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com
Subject: Re: [RFC][PATCH 2/6] x86/power: Inline write_cr[04]()
Message-ID: <Y8FZvLq+MeQ7A+lI@gmail.com>
References: <20230112143141.645645775@infradead.org>
 <20230112143825.644480983@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230112143825.644480983@infradead.org>


* Peter Zijlstra <peterz@infradead.org> wrote:

> Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> of dubious value in this code path, simply write the stored values.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/x86/power/cpu.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> --- a/arch/x86/power/cpu.c
> +++ b/arch/x86/power/cpu.c
> @@ -208,11 +208,11 @@ static void notrace __restore_processor_
>  #else
>  /* CONFIG X86_64 */
>  	native_wrmsrl(MSR_EFER, ctxt->efer);
> -	native_write_cr4(ctxt->cr4);
> +	asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");

>  #endif
>  	native_write_cr3(ctxt->cr3);
>  	native_write_cr2(ctxt->cr2);
> -	native_write_cr0(ctxt->cr0);
> +	asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");

Yeah, so CR pinning protects against easily accessible 'gadget'
functions that exploits can call to disable HW protection features in the
CR register.

__restore_processor_state() might be such a gadget if an exploit can pass 
in a well-prepared 'struct saved_context' on the stack.

Can we set up cr0/cr4 after we have a proper GS, or is that a 
chicken-and-egg scenario?

Thanks,

	Ingo


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:22:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 13:22:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477324.739982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGK0x-0001yg-1h; Fri, 13 Jan 2023 13:22:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477324.739982; Fri, 13 Jan 2023 13:22:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGK0w-0001yZ-Un; Fri, 13 Jan 2023 13:22:34 +0000
Received: by outflank-mailman (input) for mailman id 477324;
 Fri, 13 Jan 2023 13:22:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=irsc=5K=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pGK0v-0001yT-Qn
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 13:22:33 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2053.outbound.protection.outlook.com [40.107.7.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55bf7214-9345-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 14:22:32 +0100 (CET)
Received: from DU2PR04CA0183.eurprd04.prod.outlook.com (2603:10a6:10:28d::8)
 by AS2PR08MB9785.eurprd08.prod.outlook.com (2603:10a6:20b:606::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 13:22:30 +0000
Received: from DBAEUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28d:cafe::53) by DU2PR04CA0183.outlook.office365.com
 (2603:10a6:10:28d::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13 via Frontend
 Transport; Fri, 13 Jan 2023 13:22:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT051.mail.protection.outlook.com (100.127.142.148) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 13:22:29 +0000
Received: ("Tessian outbound baf1b7a96f25:v132");
 Fri, 13 Jan 2023 13:22:29 +0000
Received: from 25f775ffcf63.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AB7B6E75-2B01-4AC5-9ACB-84FE2FEA166A.1; 
 Fri, 13 Jan 2023 13:22:19 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 25f775ffcf63.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 13:22:19 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by GV1PR08MB7938.eurprd08.prod.outlook.com (2603:10a6:150:8d::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 13:22:17 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 13:22:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55bf7214-9345-11ed-91b6-6bf2151ebd3b
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: RE: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat
 workaround for TLB flush by VA
Thread-Topic: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat
 workaround for TLB flush by VA
Thread-Index: AQHZJzd61qXKtXVMLEiNLp0lWxtmx66cPDEA
Date: Fri, 13 Jan 2023 13:22:16 +0000
Message-ID:
 <AS8PR08MB79914FDE4B386F24E1CBA40F92C29@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-3-julien@xen.org>
In-Reply-To: <20230113101136.479-3-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 13:22:29.8200
 (UTC)

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat
> workaround for TLB flush by VA
>
> From: Julien Grall <jgrall@amazon.com>
>
> Looking at the Neoverse N1 errata document, it is not clear to me
> why the TLBI repeat workaround is not applied for TLB flush by VA.
>
> The TLB flush by VA helpers are used in flush_xen_tlb_range_va_local()
> and flush_xen_tlb_range_va(). So if the range size is a fixed size smaller
> than a PAGE_SIZE, it would be possible for the compiler to remove the loop
> and therefore replicate the sequence described in erratum 1286807.
>
> So the TLBI repeat workaround should also be applied for the TLB flush
> by VA helpers.
>=20
> Fixes: 22e323d115d8 ("xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>
> ----
>     This was spotted while looking at reducing the scope of the memory
>     barriers. I don't have any HW affected.

Seeing this scissors line comment, I tried to test this patch using basically
the same approach that I did for patch#1 on every board that I can find,
including some Neoverse N1 boards, and this patch looks good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:25:02 2023
Message-ID: <da973e5a-3a1b-3e99-ebf9-e462915eb338@suse.com>
Date: Fri, 13 Jan 2023 14:24:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
 <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
 <b76a7834-9868-c5c2-e058-89911a552c80@gmail.com>
 <512d8768-28f6-d9d6-c1cc-18c5fbf2a636@suse.com>
 <4f1d289a-7c3b-c4a1-34bc-1e8bd62a416a@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <4f1d289a-7c3b-c4a1-34bc-1e8bd62a416a@gmail.com>

On 13.01.2023 14:07, Xenia Ragiadakou wrote:
> 
> On 1/13/23 14:12, Jan Beulich wrote:
>> On 13.01.2023 12:55, Xenia Ragiadakou wrote:
>>> On 1/13/23 11:53, Jan Beulich wrote:
>>>> On 13.01.2023 10:34, Xenia Ragiadakou wrote:
>>>>> On 1/13/23 10:47, Jan Beulich wrote:
>>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>>>>>         if ( !acpi_disabled )
>>>>>>         {
>>>>>>             ret = acpi_dmar_init();
>>>>>> +
>>>>>> +#ifndef iommu_snoop
>>>>>> +        /* A command line override for snoop control affects VT-d only. */
>>>>>> +        if ( ret )
>>>>>> +            iommu_snoop = true;
>>>>>> +#endif
>>>>>> +
>>>>>
>>>>> Why is iommu_snoop forced here when the IOMMU is not enabled?
>>>>> This change is confusing because later on, in iommu_setup(), iommu_snoop
>>>>> will be set to false in the case of !iommu_enabled.
>>>>
>>>> Counter question: Why is it being set to false there? I see no reason at
>>>> all. On the same basis as here, I'd actually expect it to be set back to
>>>> true in such a case. Which, however, would be a benign change now that
>>>> all uses of the variable are properly qualified. Which in turn is why I
>>>> thought I'd leave that other place alone.
>>>
>>> I think I got confused... AFAIU, with the IOMMU disabled, snooping cannot
>>> be enforced, i.e. iommu_snoop=true means that snooping is enforced by the
>>> IOMMU (that's why we need to check that the IOMMU is enabled for the
>>> guest). So if the IOMMU is disabled, how can iommu_snoop be true?
>>
>> For a non-existent (or disabled) IOMMU the value of this boolean simply
>> is irrelevant. Or to put it in other words, when there's no active
>> IOMMU, it doesn't matter whether it would actually snoop.
> 
> The variable iommu_snoop is initialized to true.
> Also, the above change does not prevent it from being overwritten 
> through the cmdline iommu param in iommu_setup().

Command line parsing happens earlier (and in parse_iommu_param(), not in
iommu_setup()). iommu_setup() can further overwrite it on its error path,
but as said that's benign then.

> I'm afraid I still cannot understand why the change above is needed.

When using an AMD IOMMU, with how things work right now the variable ought
to always be true (hence why I've suggested that, when !INTEL_IOMMU, this
simply becomes a #define to true). See also Andrew's comments here and/or
on your patch.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:44:52 2023
Date: Fri, 13 Jan 2023 14:44:33 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, regressions@lists.linux.dev
Subject: Re: S3 under Xen regression between 6.1.1 and 6.1.3
Message-ID: <Y8FgQyVvpUXqumvS@mail-itl>
References: <Y8DIodWQGm99RA+E@mail-itl>
 <bdea54df-59dc-3d4d-dd0c-8c45403dea24@suse.com>
In-Reply-To: <bdea54df-59dc-3d4d-dd0c-8c45403dea24@suse.com>


On Fri, Jan 13, 2023 at 09:08:35AM +0100, Juergen Gross wrote:
> On 13.01.23 03:57, Marek Marczykowski-Górecki wrote:
> > Hi,
> >
> > 6.1.3 as PV dom0 crashes when attempting to suspend. 6.1.1 works. The
> > crash:
> >
> >      [  348.284004] PM: suspend entry (deep)
> >      [  348.289532] Filesystems sync: 0.005 seconds
> >      [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
> >      [  348.292457] OOM killer disabled.
> >      [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
> >      [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
> >      [  348.749228] PM: suspend devices took 0.352 seconds
> >      [  348.769713] ACPI: EC: interrupt blocked
> >      [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
> >      [  348.816080] #PF: supervisor read access in kernel mode
> >      [  348.816081] #PF: error_code(0x0000) - not-present page
> >      [  348.816083] PGD 0 P4D 0
> >      [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
> >      [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
> >      [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
> >      [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20
> >      [  348.816100] Code: 44 00 00 48 8b 05 04 a3 82 02 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 0f 1f 44 00 00 48 8b 05 fc 9d 82 02 <8b> 40 1c c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f
> >      [  348.816103] RSP: e02b:ffffc90042537d08 EFLAGS: 00010246
> >      [  348.816105] RAX: 0000000000000000 RBX: 0000000000000003 RCX: 20c49ba5e353f7cf
> >      [  348.816106] RDX: 000000000000cd19 RSI: 000000000002ee9a RDI: 002a051ed42d7694
> >      [  348.816108] RBP: 0000000000000003 R08: ffffc90042537ca0 R09: ffffffff82c5e468
> >      [  348.816110] R10: 0000000000007ff0 R11: 0000000000000000 R12: 0000000000000000
> >      [  348.816111] R13: fffffffffffffff2 R14: ffff88812206e6c0 R15: ffff88812206e6e0
> >      [  348.816121] FS:  00007cb49b01eb80(0000) GS:ffff888189400000(0000) knlGS:0000000000000000
> >      [  348.816123] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
> >      [  348.816124] CR2: 000000000000001c CR3: 000000012231a000 CR4: 0000000000050660
> >      [  348.816131] Call Trace:
> >      [  348.816133]  <TASK>
> >      [  348.816134]  acpi_pm_prepare+0x1a/0x50
> >      [  348.816141]  suspend_enter+0x94/0x360
> >      [  348.816146]  suspend_devices_and_enter+0x198/0x2b0
> >      [  348.816150]  enter_state+0x18d/0x1f5
> >      [  348.816155]  pm_suspend.cold+0x20/0x6b
> >      [  348.816159]  state_store+0x27/0x60
> >      [  348.816163]  kernfs_fop_write_iter+0x125/0x1c0
> >      [  348.816169]  new_sync_write+0x105/0x190
> >      [  348.816176]  vfs_write+0x211/0x2a0
> >      [  348.816180]  ksys_write+0x67/0xe0
> >      [  348.816183]  do_syscall_64+0x59/0x90
> >      [  348.816188]  ? do_syscall_64+0x69/0x90
> >      [  348.816192]  ? exc_page_fault+0x76/0x170
> >      [  348.816195]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
> >      [  348.816200] RIP: 0033:0x7cb49c1412f7
> >      [  348.816203] Code: 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
> >      [  348.816204] RSP: 002b:00007ffc125f63f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> >      [  348.816206] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007cb49c1412f7
> >      [  348.816208] RDX: 0000000000000004 RSI: 00007ffc125f64e0 RDI: 0000000000000004
> >      [  348.816209] RBP: 00007ffc125f64e0 R08: 00005c83d772bca0 R09: 000000000000000d
> >      [  348.816210] R10: 00005c83d7727eb0 R11: 0000000000000246 R12: 0000000000000004
> >      [  348.816211] R13: 00005c83d77272d0 R14: 0000000000000004 R15: 00007cb49c213700
> >      [  348.816213]  </TASK>
> >      [  348.816214] Modules linked in: loop vfat fat snd_hda_codec_hdmi snd_sof_pci_intel_tgl snd_sof_intel_hda_common soundwire_intel soundwire_generic_allocation soundwire_cadence snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi soundwire_bus snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine snd_hda_intel snd_intel_dspcfg snd_intel_sdw_acpi iTCO_wdt intel_pmc_bxt ee1004 iTCO_vendor_support intel_rapl_msr snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device iwlwifi snd_pcm pcspkr joydev processor_thermal_device_pci_legacy processor_thermal_device snd_timer snd cfg80211 processor_thermal_rfim i2c_i801 processor_thermal_mbox i2c_smbus idma64 rfkill processor_thermal_rapl soundcore intel_rapl_common int340x_thermal_zone intel_soc_dts_iosf igen6_edac intel_hid intel_pmc_core intel_scu_pltdrv sparse_keymap fuse xenfs ip_tables dm_thin_pool
> >      ic#2 Part1
> >      [  348.816259]  dm_persistent_data dm_bio_prison dm_crypt i915 crct10dif_pclmul crc32_pclmul crc32c_intel polyval_clmulni polyval_generic drm_buddy nvme video wmi drm_display_helper nvme_core xhci_pci xhci_pci_renesas ghash_clmulni_intel hid_multitouch sha512_ssse3 serio_raw nvme_common cec xhci_hcd ttm i2c_hid_acpi i2c_hid pinctrl_tigerlake xen_acpi_processor xen_privcmd xen_pciback xen_blkback xen_gntalloc xen_gntdev xen_evtchn uinput
> >      [  348.816281] CR2: 000000000000001c
> >      [  348.816283] ---[ end trace 0000000000000000 ]---
> >      [  348.867991] RIP: e030:acpi_get_wakeup_address+0xc/0x20
> >      [  348.867996] Code: 44 00 00 48 8b 05 04 a3 82 02 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 0f 1f 44 00 00 48 8b 05 fc 9d 82 02 <8b> 40 1c c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f
> >      [  348.867998] RSP: e02b:ffffc90042537d08 EFLAGS: 00010246
> >      [  348.867999] RAX: 0000000000000000 RBX: 0000000000000003 RCX: 20c49ba5e353f7cf
> >      [  348.868000] RDX: 000000000000cd19 RSI: 000000000002ee9a RDI: 002a051ed42d7694
> >      [  348.868001] RBP: 0000000000000003 R08: ffffc90042537ca0 R09: ffffffff82c5e468
> >      [  348.868001] R10: 0000000000007ff0 R11: 0000000000000000 R12: 0000000000000000
> >      [  348.868002] R13: fffffffffffffff2 R14: ffff88812206e6c0 R15: ffff88812206e6e0
> >      [  348.868008] FS:  00007cb49b01eb80(0000) GS:ffff888189400000(0000) knlGS:0000000000000000
> >      [  348.868009] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
> >      [  348.868009] CR2: 000000000000001c CR3: 000000012231a000 CR4: 0000000000050660
> >      [  348.868014] Kernel panic - not syncing: Fatal exception
> >      [  348.868031] Kernel Offset: disabled
> >
> > Looking at git log between those two versions, and the
> > acpi_get_wakeup_address() function, I suspect it's this change (but I
> > have _not_ tested it):
> >
> > commit b1898793777fe10a31c160bb8bc385d6eea640c6
> > Author: Juergen Gross <jgross@suse.com>
> > Date:   Wed Nov 23 12:45:23 2022 +0100
> >=20
> >      x86/boot: Skip realmode init code when running as Xen PV guest
> >      [ Upstream commit f1e525009493cbd569e7c8dd7d58157855f8658d ]
>
> Yes, you are right.
>
> Could you please test the attached patch? It is for upstream, but I think it
> should apply to 6.1.3, too.

Yes, this works (you can take it as my T-by), thanks!

But, unrelated to this bug, I did get a message like the one in https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg107609.html
(WARNING: CPU: 1 PID: 0 at arch/x86/mm/tlb.c:523 switch_mm_irqs_off+0x230/0x4a0)

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:46:58 2023
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KWNQwsRg/GOr5bGVWWnRpiinWAuRSulfnSR9HsX7M1EJdx7URHw9xpOBQxu1kkuWoEjIfVccv0Fmk1zE3C98OjVlfl2VOB30C1r0xaYOX7JfDhaRyclDQzwHChUqVqY2/KoAPm2wvehpE0cZoz/s1NgKEzCOGwL/3gWdzL6RKy5AqXXplxRCB7ftQmuVrVhnYnsFU60BzZLOKJWXNYJ0iclNXyrMDx5V/ImbtRLHu0k+H0GZ8zuDT6YqXrz2hAW4rEM9yX0G63436/dNsa7/ktAVYfIicXs7auYk09a/+WX03pQSQyX27n4SNYOESfca6k9OoEeb+n9EyJpRL/zToA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wAEK+RqGvxPVZAACK1CWck7z0+Hezw0Fa0EVFz32+os=;
 b=jQ+iNa4jmfmGlKhVMdNFDLlVs3jQGq76B6x9ckclbHJ1yJGzGY4EG2et/uI0YdS4GMBXYoslsREzVBEk1eNPAC3uHUAxLPZHl4lE5EWRLAowoeZWeChiq6NsFOJeSFV9SnVbk+RK6343m0XCaigB8ibscUHKsxNbPvyTqtKYCqkxTf8dw6MfI7r3ccVFnzwrOZQSh0Q4r8fHRxVN0C7NcRkSL6RlZgnCJjSMcw+pIiXG9UP1AZxYxs9tU5UykKbMx24fAGaAHBK32vkDh5EPZyQiiePVRz8+gsc+JSmt+9/Km3qmKJURApCAv+Ajc4+DWCF9cxXwPICs2lHQAFME7Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wAEK+RqGvxPVZAACK1CWck7z0+Hezw0Fa0EVFz32+os=;
 b=lb++dB6IqYIcRSDLPYg9mKOYc0kxY08iomH+MXnIhlY1GeFg5YHc3xdMaB/Ck4EJzCb3wI3PbeiAnJH9KPmiN7/5AKCRBwvuFvdOMYOf1X1ltyikj9vhI+PCoBl7dhrgG9iVtDxfLyXiOts8BfioanQkdCswFcRdxIO16dt9cwY=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: RE: [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for
 local TLB flush
Thread-Topic: [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier
 for local TLB flush
Thread-Index: AQHZJzd7+kqmWE84JkGI+nHeBx9+r66cUqgQ
Date: Fri, 13 Jan 2023 13:46:31 +0000
Message-ID:
 <AS8PR08MB7991982E6F5F7AA43820997492C29@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-4-julien@xen.org>
In-Reply-To: <20230113101136.479-4-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: C468B3F3EA06A44FB4C928D8C493E1D8.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AM8PR08MB6529:EE_|AM7EUR03FT010:EE_|AS8PR08MB7864:EE_
X-MS-Office365-Filtering-Correlation-Id: 10b25347-dd02-4815-2ab5-08daf56c9c63
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 TA5FRyu8TSfsa5didhA3+9j0E9kkVAlt0fkTJIApyKb9PBBJ2wK7v3TzwP5xwjw+d3ToLnzLyfRsPYdvW+4SKOF9biH+EOMPPECPAoiw/pWzEFZCe7g5x7FRntVYJPdCkG0rf6ziU4lpsMrNzimZ+5LiwU+OhKvooYS+Rl5BjwpbC9LmJdKAllKcNp4yiNQVvX6NQHqsELvFQU/t/bijqY9VrAw94X1EqMwbXR0xEjY4wZlOGpAxc2Q271hkp2RHqznq17oMsVayvloWlKqUx065bZ2jseWdoqs4twabspzbYtQ4HAhZcxA/+5x9mhWzu92tOIzwZ+GVd7tKn7uHVKhF2XtVGktpXcrB3AlR9vWop/fyLiXLNl0UfioXTip6d2vAV4gV/MrR0DWBwaht0XScITDZBSA8xyhK9PvMOduyxOVpOjSEq8Ig0m0sv+pIa/oydag/I49kbHQNEwUHp9teJ6CyNEgrEjlWvziOBkOcCYIzo0+x3vCWGoRsNPscX9dt4/r3bGQE6cCoouvU0on/ko3ZevFDPWFUGlkhQXmp61OVGoyST6ECsMVkZmE3zONkNLAS04gnrMHjRkfV9LKdT9lsz7SG7lyL56NXQqVyvZaF6qLvW6pIyszoNJAMMKppKzHDQKhcOyKnqUdWKUugTSze6JvPxndgd8l316wM+s0qMNxCZqRPFb+ulkHlJNLTEJLyUGGPL3bs4a209g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(136003)(39860400002)(346002)(366004)(376002)(396003)(451199015)(38070700005)(86362001)(2906002)(5660300002)(122000001)(38100700002)(33656002)(4326008)(8676002)(64756008)(66446008)(66476007)(66556008)(9686003)(66946007)(6506007)(186003)(76116006)(26005)(55016003)(71200400001)(7696005)(316002)(54906003)(110136005)(478600001)(8936002)(52536014)(41300700001)(83380400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6529
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	896964f0-74bd-47bc-7023-08daf56c942d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	db/lHVoEpw/tcbkNXW6IXsAuiMxYaILbqNFHVVvGOoktGcXafw0cJW+4Xq1PxP9h6sMpdWuo5/eWL1yHFEBE3NUDm/zWCMNE4q83Hu72kltLVmOYULQyLvpqojf2tNSiJOYygS88APw5jV/JTkNT7vvBRRQsgU2LF+NUkbC9M0WLojdCUldUZ4YdYpJ5HCsiRz9fHua6+R2oh2KzPUXRyS3rmYfyqCMdDtz3i04rbzYz9iBIXpEnc1T9mE9icE49z2kNJOWq4nKSC/J4jGI1YGlhf5GeNju9Bvd1TF4kI3cRXU97b2RCJTNn8Oh0sTW2OyhT9fMhwCl/3mIk8iO1f9uHdW/ia2c3YMdu5W0SDlUCfWtnAbfaI8AEu5aE5gvAoJXgrx+k9M71OfWlGcPqvf6oEx7jZiXIlTM+QiCxJEv7c2/KGAeQAm4LCYcvGno5Zl42Kyu5lWNe1ak0cUBmd6DG35+Q9Tc6/sb+G75QT1RgJe5FT9BPHpScaqv048H8dgAMUKdizbOLBpneI69ZT0D0uQHJ7NOPrO2hf2Fkczg3agmvzFZaMx7zcBC8DJFLqOdVmJq/n3vPuCfd5WnfacVHFmAlT++zYnxt/Db+I8YCUv5NKxmaJz/Z4/VMMwnWbFLV69kLZbQxcUJ0vKGlLjtHx1HxMpSQjL+KxikTz13dcD53Ogk4VdhlxoIzXByFTZEuFR2xJbdFEpM2ngERgQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(396003)(376002)(346002)(136003)(451199015)(36840700001)(46966006)(40470700004)(8936002)(70206006)(70586007)(41300700001)(8676002)(4326008)(52536014)(54906003)(6506007)(2906002)(316002)(82310400005)(336012)(40460700003)(36860700001)(5660300002)(33656002)(7696005)(55016003)(82740400003)(86362001)(478600001)(186003)(26005)(83380400001)(9686003)(81166007)(110136005)(356005)(47076005)(40480700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 13:46:45.4739
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 10b25347-dd02-4815-2ab5-08daf56c9c63
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7864

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 03/14] xen/arm32: flushtlb: Reduce scope of barrier for
> local TLB flush
>
> From: Julien Grall <jgrall@amazon.com>
>
> Per G5-9224 in ARM DDI 0487I.a:
>
> "A DSB NSH is sufficient to ensure completion of TLB maintenance
>  instructions that apply to a single PE. A DSB ISH is sufficient to
>  ensure completion of TLB maintenance instructions that apply to PEs
>  in the same Inner Shareable domain.
> "
>
> This is quoting the Armv8 specification because I couldn't find an
> explicit statement in the Armv7 specification. Instead, I could find
> bits in various places that confirm the same implementation.
>
> Furthermore, Linux has been using 'nsh' since 2013 (62cbbc42e001
> "ARM: tlb: reduce scope of barrier domains for TLB invalidation").
>
> This means the barrier after local TLB flushes could be reduced to
> non-shareable.
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

I've tested this patch on FVP in arm32 execution mode using the same
testing approach as for patch #1, and this patch is good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:53:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 13:53:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477350.740027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKUt-00073I-AW; Fri, 13 Jan 2023 13:53:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477350.740027; Fri, 13 Jan 2023 13:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKUt-00073B-66; Fri, 13 Jan 2023 13:53:31 +0000
Received: by outflank-mailman (input) for mailman id 477350;
 Fri, 13 Jan 2023 13:53:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3K7w=5K=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pGKUr-000733-K5
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 13:53:29 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a7b09a0f-9349-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 14:53:27 +0100 (CET)
Received: by mail-ej1-x62e.google.com with SMTP id hw16so40635353ejc.10
 for <xen-devel@lists.xenproject.org>; Fri, 13 Jan 2023 05:53:27 -0800 (PST)
Received: from [192.168.1.93] (adsl-67.109.242.138.tellas.gr. [109.242.138.67])
 by smtp.gmail.com with ESMTPSA id
 o19-20020a17090611d300b008373f9ea148sm8577505eja.71.2023.01.13.05.53.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Jan 2023 05:53:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7b09a0f-9349-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=XfbsPCf4QDNMjAwRmYwhV2JtZkhuVz5jXrfCBZH3w68=;
        b=RQVH34rgzWxspqNqRJeD4SdAhyOEKZZFN8fB0pbbvyJOm1P4dO3UaxdEnPIsIopgA4
         aS5ungxtyjFk1azFT1OvjKabThgmszDa6r5/S0X6zZ56iwILiUWaNz8Bv4PegcEthujw
         z6HlOcffLLP3/++K9LZ/E/0rYaY+5txn4X9U2+Mfn7ELE+Kf2G1Ea+ydj8Gb7atUlHy9
         Nin16KYEDj1/QAej7EPhOB7TS9xFiYjrWbVfvxkDz0fFjBkJAHR+jZb8UvMmfwKbTD9j
         OaKdqGFuBq2NpVVgIwQBS7ppuExe/v0SgEndVWUFLDHn/e1V8SsG3tag8dS0t+74SAU2
         T3kA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=XfbsPCf4QDNMjAwRmYwhV2JtZkhuVz5jXrfCBZH3w68=;
        b=04/nUVNgxWI7zz5pdkR7fHG/xZzFJuzEuoJDeiD1LPQyTr9D2pr7cqfNAvzInhU4CP
         /HBlvdaVs+PPuTdx0Fs4miG9ZioIjlnyt+EBOD+rs856Ocpo4z5MMH1+3O34yFCvWIZf
         y7GoAKYR7fi0IN21YBp/I50QUhGrsnsjV3yINICOhiQITQ5xYU30Zxykym2CtPZCtVJT
         n77pNDmdZSDqAbc1NDcAaMZGtQrUwhTp0p12y5agnHMswgaQS05WL+rVUFq5HtqnkJuh
         opBnrZ6A7xaRZZmqFbFYIdCrfPyNGCR74kJMpDDTKahCVSTuHCyXfRtJdAOlLCdOm2O2
         JHSg==
X-Gm-Message-State: AFqh2kontRscJ6sw++piTQ7dSmyz/mzei8dqFgGpJf401qLEwOZ3Gg9r
	ymw5UZymQZYPkfC4yKSZ9y0=
X-Google-Smtp-Source: AMrXdXslRJEn+JGQ+7bFQ5c9Az3+fHgP9Vz+oeV94pFtnqdwHpFNDepZikc8Mc0n5MvJaiDdkVyFtw==
X-Received: by 2002:a17:907:8a16:b0:7c1:458b:a947 with SMTP id sc22-20020a1709078a1600b007c1458ba947mr106475440ejc.26.1673618006857;
        Fri, 13 Jan 2023 05:53:26 -0800 (PST)
Message-ID: <8e1ecdf9-c5b2-2977-b4fc-a64cf04c765b@gmail.com>
Date: Fri, 13 Jan 2023 15:53:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
 <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
 <b76a7834-9868-c5c2-e058-89911a552c80@gmail.com>
 <512d8768-28f6-d9d6-c1cc-18c5fbf2a636@suse.com>
 <4f1d289a-7c3b-c4a1-34bc-1e8bd62a416a@gmail.com>
 <da973e5a-3a1b-3e99-ebf9-e462915eb338@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <da973e5a-3a1b-3e99-ebf9-e462915eb338@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/13/23 15:24, Jan Beulich wrote:
> On 13.01.2023 14:07, Xenia Ragiadakou wrote:
>>
>> On 1/13/23 14:12, Jan Beulich wrote:
>>> On 13.01.2023 12:55, Xenia Ragiadakou wrote:
>>>> On 1/13/23 11:53, Jan Beulich wrote:
>>>>> On 13.01.2023 10:34, Xenia Ragiadakou wrote:
>>>>>> On 1/13/23 10:47, Jan Beulich wrote:
>>>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>>>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>>>>>>          if ( !acpi_disabled )
>>>>>>>          {
>>>>>>>              ret = acpi_dmar_init();
>>>>>>> +
>>>>>>> +#ifndef iommu_snoop
>>>>>>> +        /* A command line override for snoop control affects VT-d only. */
>>>>>>> +        if ( ret )
>>>>>>> +            iommu_snoop = true;
>>>>>>> +#endif
>>>>>>> +
>>>>>>
>>>>>> Why is iommu_snoop forced here when the IOMMU is not enabled?
>>>>>> This change is confusing because later on, in iommu_setup(), iommu_snoop
>>>>>> will be set to false in the case of !iommu_enabled.
>>>>>
>>>>> Counter question: Why is it being set to false there? I see no reason at
>>>>> all. On the same basis as here, I'd actually expect it to be set back to
>>>>> true in such a case. Which, however, would be a benign change now that
>>>>> all uses of the variable are properly qualified. Which in turn is why I
>>>>> thought I'd leave that other place alone.
>>>>
>>>> I think I got confused... AFAIU, with the IOMMU disabled, snooping cannot
>>>> be enforced, i.e. iommu_snoop=true means that snooping is enforced by the
>>>> IOMMU (that's why we need to check that the IOMMU is enabled for the
>>>> guest). So if the IOMMU is disabled, how can iommu_snoop be true?
>>>
>>> For a non-existent (or disabled) IOMMU the value of this boolean simply
>>> is irrelevant. Or to put it in other words, when there's no active
>>> IOMMU, it doesn't matter whether it would actually snoop.
>>
>> The variable iommu_snoop is initialized to true.
>> Also, the above change does not prevent it from being overwritten
>> through the cmdline iommu param in iommu_setup().
> 
> Command line parsing happens earlier (and in parse_iommu_param(), not in
> iommu_setup()). iommu_setup() can further overwrite it on its error path,
> but as said that's benign then.

You are right. I misunderstood.

> 
>> I'm afraid I still cannot understand why the change above is needed.
> 
> When using an AMD IOMMU, with how things work right now, the variable ought
> to always be true (hence why I've suggested that, when !INTEL_IOMMU, this
> simply become a #define to true). See also Andrew's comments here and/or
> on your patch.

OK, I see: so this change is specific to the AMD IOMMU, and once iommu_snoop
becomes a #define, it won't be needed anymore, right?

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:53:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 13:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477354.740036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKVK-0007W0-Gr; Fri, 13 Jan 2023 13:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477354.740036; Fri, 13 Jan 2023 13:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKVK-0007Vt-E7; Fri, 13 Jan 2023 13:53:58 +0000
Received: by outflank-mailman (input) for mailman id 477354;
 Fri, 13 Jan 2023 13:53:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=irsc=5K=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pGKVI-0007VX-Hq
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 13:53:56 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2066.outbound.protection.outlook.com [40.107.249.66])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b7c37e1f-9349-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 14:53:54 +0100 (CET)
Received: from DB9PR06CA0022.eurprd06.prod.outlook.com (2603:10a6:10:1db::27)
 by AS8PR08MB5928.eurprd08.prod.outlook.com (2603:10a6:20b:29b::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Fri, 13 Jan
 2023 13:53:47 +0000
Received: from DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1db:cafe::2c) by DB9PR06CA0022.outlook.office365.com
 (2603:10a6:10:1db::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Fri, 13 Jan 2023 13:53:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT029.mail.protection.outlook.com (100.127.142.181) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 13:53:46 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Fri, 13 Jan 2023 13:53:46 +0000
Received: from caec6e40b2e0.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 972D5A17-5443-4258-889B-4637368571EC.1; 
 Fri, 13 Jan 2023 13:53:41 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id caec6e40b2e0.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 13:53:41 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB10072.eurprd08.prod.outlook.com (2603:10a6:20b:634::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 13:53:39 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 13:53:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7c37e1f-9349-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PpFEWSanXFdEVYh+Sv0tOwepH2BkicRhfafa+WZtzbg=;
 b=6RU5wBe5wfBNhf4UMOTZGpIJRQXPs3p//5TC33mZTUhHbmkG6RNzuq8hCgVrIyG/ZGMtWeXnPdRiP+h0Nx5udxq9qXPh72s5f2CKzZE3pDNAvG4GyxCME3C+wslVJa4UF1LBNjfbdilU5uh7jdRBQZTYqWv4/xhz8CIIhZUER18=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SyZwaEGf1tgUW9FdYNB0Iaq+J8WVCZsjZBjHKF0RIuPgkqZ2bBao1JNT8dhsp9KbTAmdM4PCruoVhwbTdAK00wcptm5jfQ+9fO6gdaC6DYCfETYsxKMALX8mQDEwtOrS3V1mMdMN+wE8hrfkAwGAbZBxrpp8e4jimg/J0oSDAPPsX2dllkPqBObefQ34hjQEFvGGCvyPSMBhaXzERUEgZRB0Iqq6hbDYjFBjXxzItvuJci8F8dTkDejSJTdqAQhQBS/xScKxfmLRSDhN6wU1Rq9itceE9//SxWgzdRpfaPphARJzozpBad5uxLo11z4mRx/luRO20rxlMtFkFuC5uw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PpFEWSanXFdEVYh+Sv0tOwepH2BkicRhfafa+WZtzbg=;
 b=mgwRqjt+QNOBOVXJIAkFEB6OBHQDwOut+lIAtlmPeex16GLfh4PhotuXM/tewm5VP2WVCwL2Nsao1J16tzXp3IwuKBLmFoqTblWPFYrxMjUTPc3cimsE215oyZCb3fQLRwZ3U8dT3eEgHZG0Zuv1nl3WFCMYnD4WqSobgHVRC6Saj8DIFAm3eAAmAKK+YH4kGgXwaIQJxvlmIBOf7VlbClGS0yt0P+jAYQm5zBOcKBZsWxTc6GflAlvE/zSA1AmYCMyrGyNmGdDSF9JbaTfGlP1DavWCtVRAJXg9rQp+cAssvcTHQCDU9KIufvrWdW3thQGlOztxOw94HCQCx95JVw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PpFEWSanXFdEVYh+Sv0tOwepH2BkicRhfafa+WZtzbg=;
 b=6RU5wBe5wfBNhf4UMOTZGpIJRQXPs3p//5TC33mZTUhHbmkG6RNzuq8hCgVrIyG/ZGMtWeXnPdRiP+h0Nx5udxq9qXPh72s5f2CKzZE3pDNAvG4GyxCME3C+wslVJa4UF1LBNjfbdilU5uh7jdRBQZTYqWv4/xhz8CIIhZUER18=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: RE: [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for
 the TLB range flush
Thread-Topic: [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for
 the TLB range flush
Thread-Index: AQHZJzeBX/yRWyvkUk23hxgjDc8lvK6cV9vg
Date: Fri, 13 Jan 2023 13:53:39 +0000
Message-ID:
 <AS8PR08MB79910CD967147561A16F4A8692C29@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-5-julien@xen.org>
In-Reply-To: <20230113101136.479-5-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: D329D57E6119674DAB3ED3E7F9337EFE.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB10072:EE_|DBAEUR03FT029:EE_|AS8PR08MB5928:EE_
X-MS-Office365-Filtering-Correlation-Id: 6624ff04-e8f3-49ca-81f9-08daf56d976c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 H7ok0TmIkJiCNsOhR56x1LETt5435Ca2K4Any9bnXfvowdLRYOm6otNj1fEv6LH8AcXuFUrt/NvcXinxXCUXKI+sRWdsptR91LavMkaFPM0IgiiAoUfg19ELtZyp2HcLslKcq5PBtW63BJHbnTXeq3DPKa2SdilVmxBN/4osZERSaGgom00Pb1lTGtSPWfCgoTR1A8w/RwVxp2qnSoGLFmdS6yG8z0a5mLhTN0RwiiDHpU2SD4fybsRir7AfC/THykdm/MiA/lDbjX2YAjVGRRZNJ9xsB3VOKcJjaiIE1S8NgS1qt3ZfXgutC1gzilKNymszelkgViLZauFsTL9cEEa91kx5IIT0SZHpH1rUuJodBmIMVY/XwJ3CJrXSDjuFC63NW1mYixAiESknV56zj2265H55RRWb0eaBTq+9o1A7zKkvF6BTEL+7Tqj5TIxF4T0JYCkNNxw+n2k4m734mr+ndd0R1ftbcXYGHdNfsMOhwDew/52IqtAR3K0scuOCjI+Dwiz6Tf42BqpKUkqsIsjM/BBNC503ecC9VK+lQHBgOiaECjWsQFBQZchEieUpHgx60bbpARK8c356ENL2r+Jf3VrJ3kzRRfAbRs2xgvtN12Az+tpO/EFzblUmQr399cyJVTqxkXhIBA95cA5SrW+7oXR6utB09ILerW53M/0kqLSfsQ49+G9XMSlrAWhL
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(396003)(376002)(346002)(366004)(136003)(451199015)(8936002)(76116006)(66556008)(41300700001)(66476007)(8676002)(66946007)(66446008)(4326008)(52536014)(64756008)(38070700005)(54906003)(6506007)(2906002)(316002)(4744005)(5660300002)(33656002)(7696005)(55016003)(71200400001)(86362001)(478600001)(186003)(26005)(83380400001)(9686003)(110136005)(38100700002)(122000001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB10072
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	19a2fa18-c84e-47b6-0ab2-08daf56d933f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6T4A5Y8hbmS6r0hNHa5ylEOyvXHK9bU+gcda8v4ldXnap36cznaK0QKqw7OJH+TUyOzk9ERn+LjX+P0cdFSDp9/6/fobaHaVfw62qMpmpHJXpcLWNctFOjtGizZXJkV2BLfDL9wVo0GijP7p4+S7HjDbh1zOPMQpd+LXMIQx9Dd1bhBy9INsnXxtnaUa03YYxl7btYBMnysYarlgzolxC/Nr+dsiItUs7MpHXJ1Drv6VDEF0SrBxRO//nrxlm7OtdqohD4dgYrhqBfhIPxLI+Uy0FFV2JsVqzHUZtE0XepGGGqda8q35W8H4x8UcDF0VS0mdd8jqofrPCdMcZLESJJGTGzFRgqjhETYTteVRUVq9y8aF/2dDYxBdHkAl/osjSUuj10beztEt1s7goksjlL4SKFNB01N0rEvud/jVwHS2tbxCKwWtWBLxNqri3T18G+PVbVwE9+M6z+rtH+jhDkyX7eTo3EPoIBuQNZgbK3GOaHepIqa7WwIOCEUnlupwaNpaS+8LCyfbkIVQ+Jn5a4i3ePxkZTw5LJvO4p5h3BWWIPvmu3vh50p0i5rTqR3qsmax4NGth46quhwlKLiOMAPtu/btA1rUUNkUNi2M2dnaQQ/MEW+nrl/tO1s7cHYXErY2Y4pAILVyHv3bb+QXPlUX7zXbJcb+4d7lN3wsPU3Snd0MHlAAPuhESakSjTyk
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(136003)(396003)(39860400002)(451199015)(46966006)(40470700004)(36840700001)(70206006)(52536014)(70586007)(4326008)(8936002)(36860700001)(8676002)(81166007)(82310400005)(5660300002)(4744005)(356005)(33656002)(86362001)(110136005)(26005)(2906002)(316002)(40460700003)(54906003)(83380400001)(7696005)(41300700001)(336012)(47076005)(478600001)(82740400003)(186003)(55016003)(9686003)(40480700001)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 13:53:46.6901
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6624ff04-e8f3-49ca-81f9-08daf56d976c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB5928

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 04/14] xen/arm: flushtlb: Reduce scope of barrier for
> the TLB range flush
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, flush_xen_tlb_range_va{,_local}() are using a system
> wide memory barrier. This is quite expensive and unnecessary.
>
> For the local version, a non-shareable barrier is sufficient.
> For the SMP version, an inner-shareable barrier is sufficient.
>
> Furthermore, the initial barrier only needs to be a store barrier.
>
> For the full explanation of the sequence see asm/arm{32,64}/flushtlb.h.
>=20
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 13:58:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 13:58:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477362.740048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKZQ-0008Js-4p; Fri, 13 Jan 2023 13:58:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477362.740048; Fri, 13 Jan 2023 13:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKZQ-0008Jl-1z; Fri, 13 Jan 2023 13:58:12 +0000
Received: by outflank-mailman (input) for mailman id 477362;
 Fri, 13 Jan 2023 13:58:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=irsc=5K=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pGKZP-0008JK-L8
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 13:58:11 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2053.outbound.protection.outlook.com [40.107.8.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4f6524b5-934a-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 14:58:09 +0100 (CET)
Received: from DB8PR03CA0010.eurprd03.prod.outlook.com (2603:10a6:10:be::23)
 by VE1PR08MB5646.eurprd08.prod.outlook.com (2603:10a6:800:1a9::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 13:58:04 +0000
Received: from DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::10) by DB8PR03CA0010.outlook.office365.com
 (2603:10a6:10:be::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Fri, 13 Jan 2023 13:58:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT029.mail.protection.outlook.com (100.127.142.181) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 13:58:03 +0000
Received: ("Tessian outbound b1d3ffe56e73:v132");
 Fri, 13 Jan 2023 13:58:03 +0000
Received: from 663159713901.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1CC50B1A-AAEB-4ED1-BC6C-E4AFF7F9A264.1; 
 Fri, 13 Jan 2023 13:57:53 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 663159713901.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 13:57:53 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB9867.eurprd08.prod.outlook.com (2603:10a6:20b:5ab::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Fri, 13 Jan
 2023 13:57:50 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 13:57:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f6524b5-934a-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B/GM2z1yhQNqoAab/lYuXdYgVG6mkMHu1optzOZzNL8=;
 b=gnopeH7AD3VjeAl+wWfLAmRF/PrLAwWivDXRvsDIupPcV7yNT05YXTSF+OXgMpMDajHC6kneLNr2yOsodPkONBDhmc3CpijDR05eSQNmonMP8+L+fLQBbiwiyaoZkxTkDE6J7x+jlPWGO71jbtADrEwF7ixmGFqOy70C3z9R4AY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZUmroZoJvpzTOeY4njWlqNRmS6BWAdqN4Lpsf8XNUzM3MfIAK7mX681P8oLQKbCXJf0HMp7sIbhKxkEBKrGiObrCWhfv4DPu1EDHlW+WKrJ0ERCYSVWTRZ8Of3TUCsUrmLZZ/lNrJERpY3YpQWllFsVoByaomCGsK9k/bZfODuFlvNGzNGGXrh9Lqi+th8JQtDsgVxxQrCgUiiOWIwyVYaxZCRtVRfOX2OCrnEGH+cW5ea+ECEmURcsqG/RPOvlp+Pgcfy8SJBeO1thPb9pORvFV2MU543fd/WmyiKPLyBK8dNPpba5CnnZN7/1xmRi4dMY35NMZsGdHzsIv9nJCuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=B/GM2z1yhQNqoAab/lYuXdYgVG6mkMHu1optzOZzNL8=;
 b=OfREyhr203NLFQWKvLHII8CPV1puJDECQEYRdijCsaQNfqboDsIG2E2WVtQ7YrWfcRMIYtybk8/2FZUcuweeUYHQc6UikoskGC7JgEcNzAjMVKJ6jT10p2VQEmlSQSJLgrGlzCQE6erCUac6YlKJOLIp5aEoVImnfq0nETBPa0Ah8tKD+6hy9dSeM0gtE1hiICYr9aZXe0kBhSBz4ojdIY6KyFsqQZzPW8m6Klq0hrx1BOKbEcFUMHfC4YHQGcvuG+5BV+NGNmvlXA3aeFnNBxEghrRWuA1BI3sK/xsaF9fjS/63q4V0mgVOxoAvNDQke68auUI+2WBzNS7HZewDJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B/GM2z1yhQNqoAab/lYuXdYgVG6mkMHu1optzOZzNL8=;
 b=gnopeH7AD3VjeAl+wWfLAmRF/PrLAwWivDXRvsDIupPcV7yNT05YXTSF+OXgMpMDajHC6kneLNr2yOsodPkONBDhmc3CpijDR05eSQNmonMP8+L+fLQBbiwiyaoZkxTkDE6J7x+jlPWGO71jbtADrEwF7ixmGFqOy70C3z9R4AY=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: RE: [PATCH v4 05/14] xen/arm: Clean-up the memory layout
Thread-Topic: [PATCH v4 05/14] xen/arm: Clean-up the memory layout
Thread-Index: AQHZJzeLCnyOeiX40USI61rJsfnYCK6cQS6g
Date: Fri, 13 Jan 2023 13:57:50 +0000
Message-ID:
 <AS8PR08MB7991F693573B0DF694104DD592C29@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-6-julien@xen.org>
In-Reply-To: <20230113101136.479-6-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 058876024C00A441BEF41B750339BB1A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB9867:EE_|DBAEUR03FT029:EE_|VE1PR08MB5646:EE_
X-MS-Office365-Filtering-Correlation-Id: b41d8795-cf71-4974-6d41-08daf56e30a9
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 fgM6KzmEwM52yYF4j8RgJrvrwa2xWrSJaCAS/NE//l1jeLuRKuHn6nZtSl8PFivJBJ92ZOxh8G6gtAWcUh8DQ3GYqViEPbApc9zGT/UugAX8ONMhECk08cS3flzCiA1YunINNa7/cpIi7KsmsqFZrrdeanKFVXxLfvfm5f287sPU6i1jf5pZaUDay53b0ikMioLVt/i1bwrI+LpQK/1j2VZq5x/VZoK4nGoQ+ByAbGZui5PyBX+VZTENzO7KLAySwB3CMluN9N+T2XKuYilZ0sqyqneDaMxCc3gYffiDiOtJYkfONjxRx98VIOoFO0qtYVL6ZqY2/nJMlo01Stt4qeHVE3GVFLsb5JnuHTQ7rrIQ5ouJC/cSx1M/yjMVt9e5AdYIJCNdD6XOl2iNv42X+/Tpd9zJE9Sap0xfsqIOLPLAyX90mvXJ5dGZhbx62TRkIxPAodGfY9NETaPudft8ePbOBP2fSJ1WWbH+20+eUxSA+RUhKid1se+4sqTNGujq247AGRXsARsXN4IZDggrcYs+fX4r4X00SXk0Eylhn8dOZE85k6Qwidb8njmikKqL7uy2R+c9pffTvop0oISH4Beoz8sxGx5wpKLG/wbIczNHY89adb0tP7InV8QH/Yae/8GFil3jc58S4ScxJk52rl2M/P0xNquwvEjKPGnmTH3t3unHy+WMulJcNOmdgigM3/AYBNnFHhdPIO64tdX+lA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(396003)(366004)(376002)(39860400002)(136003)(346002)(451199015)(8936002)(66946007)(76116006)(66556008)(4326008)(64756008)(66446008)(86362001)(52536014)(41300700001)(66476007)(8676002)(5660300002)(4744005)(186003)(110136005)(478600001)(26005)(54906003)(2906002)(33656002)(316002)(71200400001)(7696005)(9686003)(55016003)(6506007)(38100700002)(83380400001)(122000001)(38070700005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9867
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	97997d2e-6f81-41f9-8e6d-08daf56e28b6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	vyu9pMYpZwFV2NgNcYvXdH/ETAeaDjtU0de713EfG4CJ8JEUqBxy/+ZiahUOWlBUoC/xglKnA8GxD8xX5kd+42G3/skYOJdEXFMG9c/2j/TLW3hY421fR7VBzKcWOn6ZN8nEM+Ni1KV6wZdpbz0YeOmx0mtymCMmHPW8H5L8esyOQGg8kvgX+LsCeWiwtrPT9vByW9pyzruVHCy4b6A+8b52tf9K8CrwScdmB+cK0IICIMgVw7KEUHgCnUY0R5Aki+U3Cvpy955O1y8UY9xWMY77MSWY7M1gJT9jS+kIrWsK2wwS8UikTnh0uSuAzUaE/cPjxEoeI+bzhypYMfpHpED++hcIi/4jeiSZV1e7yLeImmZ3u/p/0CamCfSfI8Q+gxpVW2QpyqTRGYu7kTw9PtO2oBwjhHUQLfZ76ViQ/proRKKOpTRzYq3Q37Ms7QeauVO/0SItlsZrA9JNdg2UOM5vdm6UcL2NN63Q7eUXgmZ5fOgnrGaLKTVDetcrN3n30BCEuExcboKd1F974pugcXUvYsO0+29J62xQYndW7mzsIhwj/5DIHKhIZt5mQvD2Ep+YrdW4VRoRq/12ggUZjGyg4boI0u7yI+qOKlXTMwr+fN8ohhlrWVYzMGnt4sRzIAfDpRczLIYWc4NMtY+kvtL4LJzkpmrrtvHjGVJg23BYNxUieYScnCwdctV4VSkLgbKcqv9iXgTq/Q21Dx8vJA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(376002)(136003)(346002)(39860400002)(451199015)(40470700004)(46966006)(36840700001)(86362001)(82740400003)(2906002)(36860700001)(81166007)(6506007)(40460700003)(47076005)(33656002)(40480700001)(55016003)(82310400005)(110136005)(54906003)(7696005)(316002)(356005)(70206006)(70586007)(4744005)(83380400001)(4326008)(336012)(41300700001)(8676002)(9686003)(8936002)(5660300002)(26005)(52536014)(478600001)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 13:58:03.7807
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b41d8795-cf71-4974-6d41-08daf56e30a9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5646

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 05/14] xen/arm: Clean-up the memory layout
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> In a follow-up patch, the base address for the common mappings will
> vary between arm32 and arm64. To avoid any duplication, define
> every mapping in the common region from the previous one.
> 
> Take the opportunity to:
>     * add missing *_SIZE for FIXMAP_VIRT_* and XEN_VIRT_*
>     * switch to MB()/GB() to avoid hexadecimal (easier to read)
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 14:06:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 14:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477369.740059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKhE-0001W6-Vl; Fri, 13 Jan 2023 14:06:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477369.740059; Fri, 13 Jan 2023 14:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKhE-0001Vy-SU; Fri, 13 Jan 2023 14:06:16 +0000
Received: by outflank-mailman (input) for mailman id 477369;
 Fri, 13 Jan 2023 14:06:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O/z9=5K=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pGKhE-0001Vp-4C
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 14:06:16 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6f8e0808-934b-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 15:06:12 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B856E256B8;
 Fri, 13 Jan 2023 14:06:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4F2591358A;
 Fri, 13 Jan 2023 14:06:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id bXH8EVNlwWPnDQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 13 Jan 2023 14:06:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f8e0808-934b-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673618771; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=qerTG7yVEdATebY6zaKCk2qNuAQ7K4by6XyL0WSCPFo=;
	b=mlMz7tpaDKH5ON0g3Sl4mmBovvApom2MngsecURVvLNhBb2UjbPkYxhetfIhGhxNoH0ZBB
	Z8F0qUptHNhmYOLMOX7rYBaAOAWfUUHinRaV/sD4EEA2d5dDZF+YutajoQc1DwcLPCLOU8
	TEZAJnoIvy9o4hrjiLIPqIQCjVeXCZk=
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org,
	x86@kernel.org,
	linux-pm@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	Len Brown <len.brown@intel.com>,
	Pavel Machek <pavel@ucw.cz>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH] x86/acpi: fix suspend with Xen
Date: Fri, 13 Jan 2023 15:06:10 +0100
Message-Id: <20230113140610.7132-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Commit f1e525009493 ("x86/boot: Skip realmode init code when running as
Xen PV guest") missed one code path accessing real_mode_header, leading
to dereferencing NULL when suspending the system under Xen:

    [  348.284004] PM: suspend entry (deep)
    [  348.289532] Filesystems sync: 0.005 seconds
    [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
    [  348.292457] OOM killer disabled.
    [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
    [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
    [  348.749228] PM: suspend devices took 0.352 seconds
    [  348.769713] ACPI: EC: interrupt blocked
    [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
    [  348.816080] #PF: supervisor read access in kernel mode
    [  348.816081] #PF: error_code(0x0000) - not-present page
    [  348.816083] PGD 0 P4D 0
    [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
    [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
    [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
    [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20

Fix that by adding an indirection for acpi_get_wakeup_address() which
Xen PV dom0 can use to return a dummy non-zero wakeup address (this
address won't ever be used, as the real suspend handling is done by the
hypervisor).

Fixes: f1e525009493 ("x86/boot: Skip realmode init code when running as Xen PV guest")
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/acpi.h  | 2 +-
 arch/x86/kernel/acpi/sleep.c | 3 ++-
 include/xen/acpi.h           | 9 +++++++++
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
index 65064d9f7fa6..137259ff8f03 100644
--- a/arch/x86/include/asm/acpi.h
+++ b/arch/x86/include/asm/acpi.h
@@ -61,7 +61,7 @@ static inline void acpi_disable_pci(void)
 extern int (*acpi_suspend_lowlevel)(void);
 
 /* Physical address to resume after wakeup */
-unsigned long acpi_get_wakeup_address(void);
+extern unsigned long (*acpi_get_wakeup_address)(void);
 
 /*
  * Check if the CPU can handle C2 and deeper
diff --git a/arch/x86/kernel/acpi/sleep.c b/arch/x86/kernel/acpi/sleep.c
index 3b7f4cdbf2e0..1a3cd5e24cd0 100644
--- a/arch/x86/kernel/acpi/sleep.c
+++ b/arch/x86/kernel/acpi/sleep.c
@@ -33,10 +33,11 @@ static char temp_stack[4096];
  * Returns the physical address where the kernel should be resumed after the
  * system awakes from S3, e.g. for programming into the firmware waking vector.
  */
-unsigned long acpi_get_wakeup_address(void)
+static unsigned long x86_acpi_get_wakeup_address(void)
 {
 	return ((unsigned long)(real_mode_header->wakeup_start));
 }
+unsigned long (*acpi_get_wakeup_address)(void) = x86_acpi_get_wakeup_address;
 
 /**
  * x86_acpi_enter_sleep_state - enter sleep state
diff --git a/include/xen/acpi.h b/include/xen/acpi.h
index b1e11863144d..7e1e5dbfb77c 100644
--- a/include/xen/acpi.h
+++ b/include/xen/acpi.h
@@ -56,6 +56,12 @@ static inline int xen_acpi_suspend_lowlevel(void)
 	return 0;
 }
 
+static inline unsigned long xen_acpi_get_wakeup_address(void)
+{
+	/* Just return a dummy non-zero value, it will never be used. */
+	return 1;
+}
+
 static inline void xen_acpi_sleep_register(void)
 {
 	if (xen_initial_domain()) {
@@ -65,6 +71,9 @@ static inline void xen_acpi_sleep_register(void)
 			&xen_acpi_notify_hypervisor_extended_sleep);
 
 		acpi_suspend_lowlevel = xen_acpi_suspend_lowlevel;
+#ifdef CONFIG_ACPI_SLEEP
+		acpi_get_wakeup_address = xen_acpi_get_wakeup_address;
+#endif
 	}
 }
 #else
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 14:20:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 14:20:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477375.740070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKvC-0003vr-7L; Fri, 13 Jan 2023 14:20:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477375.740070; Fri, 13 Jan 2023 14:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGKvC-0003vk-4R; Fri, 13 Jan 2023 14:20:42 +0000
Received: by outflank-mailman (input) for mailman id 477375;
 Fri, 13 Jan 2023 14:20:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O/z9=5K=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pGKvA-0003vc-Dc
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 14:20:40 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7421f966-934d-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 15:20:39 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id CE4B16B5B2;
 Fri, 13 Jan 2023 14:20:37 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AEFEA1358A;
 Fri, 13 Jan 2023 14:20:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id CvoaKbVowWM7FQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 13 Jan 2023 14:20:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7421f966-934d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673619637; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=M55lXHF5YsSdUG/Me3dcSgrQaqLgE3puKEayCSDhibc=;
	b=KvvrhBhBCxHA+uo+pnzMYt6lyxfpHUn58mvV21jqWKoKcZqS1vDCEjig02RMrXWDwtgD2S
	v9QBA/inqJt46/e2EXnj3//KuLirBLkxSyWzq3LUra6yOolqYJJp1hz/w/SPSfngXGHCD+
	NYSLY0hmlIQZbf4rBlTeh37LRv+yjrA=
Message-ID: <47ae3bbb-6468-4282-1789-72eedaa54814@suse.com>
Date: Fri, 13 Jan 2023 15:20:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: S3 under Xen regression between 6.1.1 and 6.1.3
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, regressions@lists.linux.dev
References: <Y8DIodWQGm99RA+E@mail-itl>
 <bdea54df-59dc-3d4d-dd0c-8c45403dea24@suse.com> <Y8FgQyVvpUXqumvS@mail-itl>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <Y8FgQyVvpUXqumvS@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------6kkXqBrUY0FyOXKni8Ph3nMz"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------6kkXqBrUY0FyOXKni8Ph3nMz
Content-Type: multipart/mixed; boundary="------------0ddX8tnnEtC6EZA20X32Cia6";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, regressions@lists.linux.dev
Message-ID: <47ae3bbb-6468-4282-1789-72eedaa54814@suse.com>
Subject: Re: S3 under Xen regression between 6.1.1 and 6.1.3
References: <Y8DIodWQGm99RA+E@mail-itl>
 <bdea54df-59dc-3d4d-dd0c-8c45403dea24@suse.com> <Y8FgQyVvpUXqumvS@mail-itl>
In-Reply-To: <Y8FgQyVvpUXqumvS@mail-itl>

--------------0ddX8tnnEtC6EZA20X32Cia6
Content-Type: multipart/mixed; boundary="------------BtFof0gK3r9ZHyWf2ElM4AQq"

--------------BtFof0gK3r9ZHyWf2ElM4AQq
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTMuMDEuMjMgMTQ6NDQsIE1hcmVrIE1hcmN6eWtvd3NraS1Hw7NyZWNraSB3cm90ZToN
Cj4gQnV0LCB1bnJlbGF0ZWQgdG8gdGhpcyBidWcsIGl0IGRpZCBnZXQgbWVzc2FnZSBsaWtl
IGluIGh0dHBzOi8vd3d3Lm1haWwtYXJjaGl2ZS5jb20veGVuLWRldmVsQGxpc3RzLnhlbnBy
b2plY3Qub3JnL21zZzEwNzYwOS5odG1sDQo+IChXQVJOSU5HOiBDUFU6IDEgUElEOiAwIGF0
IGFyY2gveDg2L21tL3RsYi5jOjUyMyBzd2l0Y2hfbW1faXJxc19vZmYrMHgyMzAvMHg0YTAp
DQo+IA0KDQpIbW0sIGlzIGFwcGx5aW5nIHRoZSBhdHRhY2hlZCBwYXRjaCBtYWtpbmcgYW55
IGRpZmZlcmVuY2UgaGVyZT8NCg0KDQpKdWVyZ2VuDQo=
--------------BtFof0gK3r9ZHyWf2ElM4AQq
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-x86-xen-don-t-let-xen_pv_play_dead-return.patch"
Content-Disposition: attachment;
 filename="0001-x86-xen-don-t-let-xen_pv_play_dead-return.patch"
Content-Transfer-Encoding: base64

RnJvbSBhNGNiMDQyOWUxZDZkNGNhNDI3MDVkOTcyMTY4MjQyMGFlMDUzODUxIE1vbiBTZXAg
MTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+
ClRvOiB4ODZAa2VybmVsLm9yZwpUbzogbGludXgta2VybmVsQHZnZXIua2VybmVsLm9yZwpD
YzogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpDYzogQm9yaXMgT3N0cm92c2t5
IDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT4KQ2M6IFRob21hcyBHbGVpeG5lciA8dGds
eEBsaW51dHJvbml4LmRlPgpDYzogSW5nbyBNb2xuYXIgPG1pbmdvQHJlZGhhdC5jb20+CkNj
OiBCb3Jpc2xhdiBQZXRrb3YgPGJwQGFsaWVuOC5kZT4KQ2M6IERhdmUgSGFuc2VuIDxkYXZl
LmhhbnNlbkBsaW51eC5pbnRlbC5jb20+CkNjOiAiSC4gUGV0ZXIgQW52aW4iIDxocGFAenl0
b3IuY29tPgpDYzogeGVuLWRldmVsQGxpc3RzLnhlbnByb2plY3Qub3JnCkRhdGU6IFRodSwg
MjQgTm92IDIwMjIgMDg6MDk6NDUgKzAxMDAKU3ViamVjdDogW1BBVENIIDEvMl0geDg2L3hl
bjogZG9uJ3QgbGV0IHhlbl9wdl9wbGF5X2RlYWQoKSByZXR1cm4KCkEgZnVuY3Rpb24gY2Fs
bGVkIHZpYSB0aGUgcGFyYXZpcnQgcGxheV9kZWFkKCkgaG9vayBzaG91bGQgbm90IHJldHVy
bgp0byB0aGUgY2FsbGVyLgoKeGVuX3B2X3BsYXlfZGVhZCgpIGhhcyBhIHByb2JsZW0gaW4g
dGhpcyByZWdhcmQsIGFzIGl0IGN1cnJlbnRseSB3aWxsCnJldHVybiBpbiBjYXNlIGFuIG9m
ZmxpbmVkIGNwdSBpcyBicm91Z2h0IHRvIGxpZmUgYWdhaW4uIFRoaXMgY2FuIGJlCmNoYW5n
ZWQgb25seSBieSBkb2luZyBiYXNpY2FsbHkgYSBsb25nam1wKCkgdG8gY3B1X2JyaW5ndXBf
YW5kX2lkbGUoKSwKYXMgdGhlIGh5cGVyY2FsbCBmb3IgYnJpbmdpbmcgZG93biB0aGUgY3B1
IHdpbGwganVzdCByZXR1cm4gd2hlbiB0aGUKY3B1IGlzIGNvbWluZyB1cCBhZ2Fpbi4gSnVz
dCByZS1pbml0aWFsaXppbmcgdGhlIGNwdSBpc24ndCBwb3NzaWJsZSwKYXMgdGhlIFhlbiBo
eXBlcnZpc29yIHdpbGwgZGVueSB0aGF0IG9wZXJhdGlvbi4KClNvIGludHJvZHVjZSB4ZW5f
Y3B1X2JyaW5ndXBfYWdhaW4oKSByZXNldHRpbmcgdGhlIHN0YWNrIGFuZCBjYWxsaW5nCmNw
dV9icmluZ3VwX2FuZF9pZGxlKCksIHdoaWNoIGNhbiBiZSBjYWxsZWQgYWZ0ZXIgSFlQRVJW
SVNPUl92Y3B1X29wKCkKaW4geGVuX3B2X3BsYXlfZGVhZCgpLgoKU2lnbmVkLW9mZi1ieTog
SnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgotLS0KIGFyY2gveDg2L3hlbi9zbXAu
aCAgICAgIHwgIDIgKysKIGFyY2gveDg2L3hlbi9zbXBfcHYuYyAgIHwgMTMgKystLS0tLS0t
LS0tLQogYXJjaC94ODYveGVuL3hlbi1oZWFkLlMgfCAgNyArKysrKysrCiAzIGZpbGVzIGNo
YW5nZWQsIDExIGluc2VydGlvbnMoKyksIDExIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBh
L2FyY2gveDg2L3hlbi9zbXAuaCBiL2FyY2gveDg2L3hlbi9zbXAuaAppbmRleCBiZDAyZjlk
NTAxMDcuLjZlOTBhMzEyMDY3YiAxMDA2NDQKLS0tIGEvYXJjaC94ODYveGVuL3NtcC5oCisr
KyBiL2FyY2gveDg2L3hlbi9zbXAuaApAQCAtMjEsNiArMjEsOCBAQCB2b2lkIHhlbl9zbXBf
c2VuZF9yZXNjaGVkdWxlKGludCBjcHUpOwogdm9pZCB4ZW5fc21wX3NlbmRfY2FsbF9mdW5j
dGlvbl9pcGkoY29uc3Qgc3RydWN0IGNwdW1hc2sgKm1hc2spOwogdm9pZCB4ZW5fc21wX3Nl
bmRfY2FsbF9mdW5jdGlvbl9zaW5nbGVfaXBpKGludCBjcHUpOwogCit2b2lkIHhlbl9jcHVf
YnJpbmd1cF9hZ2Fpbih1bnNpZ25lZCBsb25nIHN0YWNrKTsKKwogc3RydWN0IHhlbl9jb21t
b25faXJxIHsKIAlpbnQgaXJxOwogCWNoYXIgKm5hbWU7CmRpZmYgLS1naXQgYS9hcmNoL3g4
Ni94ZW4vc21wX3B2LmMgYi9hcmNoL3g4Ni94ZW4vc21wX3B2LmMKaW5kZXggNDgwYmU4MmU5
YjdiLi5iNDBiMjQzODJmZTMgMTAwNjQ0Ci0tLSBhL2FyY2gveDg2L3hlbi9zbXBfcHYuYwor
KysgYi9hcmNoL3g4Ni94ZW4vc21wX3B2LmMKQEAgLTM4NSwxNyArMzg1LDggQEAgc3RhdGlj
IHZvaWQgeGVuX3B2X3BsYXlfZGVhZCh2b2lkKSAvKiB1c2VkIG9ubHkgd2l0aCBIT1RQTFVH
X0NQVSAqLwogewogCXBsYXlfZGVhZF9jb21tb24oKTsKIAlIWVBFUlZJU09SX3ZjcHVfb3Ao
VkNQVU9QX2Rvd24sIHhlbl92Y3B1X25yKHNtcF9wcm9jZXNzb3JfaWQoKSksIE5VTEwpOwot
CWNwdV9icmluZ3VwKCk7Ci0JLyoKLQkgKiBjb21taXQgNGIwYzBmMjk0ICh0aWNrOiBDbGVh
bnVwIE5PSFogcGVyIGNwdSBkYXRhIG9uIGNwdSBkb3duKQotCSAqIGNsZWFycyBjZXJ0YWlu
IGRhdGEgdGhhdCB0aGUgY3B1X2lkbGUgbG9vcCAod2hpY2ggY2FsbGVkIHVzCi0JICogYW5k
IHRoYXQgd2UgcmV0dXJuIGZyb20pIGV4cGVjdHMuIFRoZSBvbmx5IHdheSB0byBnZXQgdGhh
dAotCSAqIGRhdGEgYmFjayBpcyB0byBjYWxsOgotCSAqLwotCXRpY2tfbm9oel9pZGxlX2Vu
dGVyKCk7Ci0JdGlja19ub2h6X2lkbGVfc3RvcF90aWNrX3Byb3RlY3RlZCgpOwotCi0JY3B1
aHBfb25saW5lX2lkbGUoQ1BVSFBfQVBfT05MSU5FX0lETEUpOworCXhlbl9jcHVfYnJpbmd1
cF9hZ2FpbigodW5zaWduZWQgbG9uZyl0YXNrX3B0X3JlZ3MoY3VycmVudCkpOworCUJVRygp
OwogfQogCiAjZWxzZSAvKiAhQ09ORklHX0hPVFBMVUdfQ1BVICovCmRpZmYgLS1naXQgYS9h
cmNoL3g4Ni94ZW4veGVuLWhlYWQuUyBiL2FyY2gveDg2L3hlbi94ZW4taGVhZC5TCmluZGV4
IGZmYWE2MjE2N2Y2ZS4uZTM2ZWE0MjY4YmQyIDEwMDY0NAotLS0gYS9hcmNoL3g4Ni94ZW4v
eGVuLWhlYWQuUworKysgYi9hcmNoL3g4Ni94ZW4veGVuLWhlYWQuUwpAQCAtNzYsNiArNzYs
MTMgQEAgU1lNX0NPREVfU1RBUlQoYXNtX2NwdV9icmluZ3VwX2FuZF9pZGxlKQogCiAJY2Fs
bCBjcHVfYnJpbmd1cF9hbmRfaWRsZQogU1lNX0NPREVfRU5EKGFzbV9jcHVfYnJpbmd1cF9h
bmRfaWRsZSkKKworU1lNX0NPREVfU1RBUlQoeGVuX2NwdV9icmluZ3VwX2FnYWluKQorCVVO
V0lORF9ISU5UX0ZVTkMKKwltb3YJJXJkaSwgJXJzcAorCVVOV0lORF9ISU5UX1JFR1MKKwlj
YWxsCWNwdV9icmluZ3VwX2FuZF9pZGxlCitTWU1fQ09ERV9FTkQoeGVuX2NwdV9icmluZ3Vw
X2FnYWluKQogLnBvcHNlY3Rpb24KICNlbmRpZgogI2VuZGlmCi0tIAoyLjM1LjMKCg==

From xen-devel-bounces@lists.xenproject.org Fri Jan 13 14:23:06 2023
Message-ID: <1fc41018-b27f-61f5-632d-3a36e5460590@suse.com>
Date: Fri, 13 Jan 2023 15:22:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
 <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
 <b76a7834-9868-c5c2-e058-89911a552c80@gmail.com>
 <512d8768-28f6-d9d6-c1cc-18c5fbf2a636@suse.com>
 <4f1d289a-7c3b-c4a1-34bc-1e8bd62a416a@gmail.com>
 <da973e5a-3a1b-3e99-ebf9-e462915eb338@suse.com>
 <8e1ecdf9-c5b2-2977-b4fc-a64cf04c765b@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <8e1ecdf9-c5b2-2977-b4fc-a64cf04c765b@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.01.2023 14:53, Xenia Ragiadakou wrote:
> On 1/13/23 15:24, Jan Beulich wrote:
>> On 13.01.2023 14:07, Xenia Ragiadakou wrote:
>>> On 1/13/23 14:12, Jan Beulich wrote:
>>>> On 13.01.2023 12:55, Xenia Ragiadakou wrote:
>>>>> On 1/13/23 11:53, Jan Beulich wrote:
>>>>>> On 13.01.2023 10:34, Xenia Ragiadakou wrote:
>>>>>>> On 1/13/23 10:47, Jan Beulich wrote:
>>>>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>>>>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>>>>>>>          if ( !acpi_disabled )
>>>>>>>>          {
>>>>>>>>              ret = acpi_dmar_init();
>>>>>>>> +
>>>>>>>> +#ifndef iommu_snoop
>>>>>>>> +        /* A command line override for snoop control affects VT-d only. */
>>>>>>>> +        if ( ret )
>>>>>>>> +            iommu_snoop = true;
>>>>>>>> +#endif
>>>>>>>> +
>>>>>>>
>>>>>>> Why here iommu_snoop is forced when iommu is not enabled?
>>>>>>> This change is confusing because later on, in iommu_setup, iommu_snoop
>>>>>>> will be set to false in the case of !iommu_enabled.
>>>>>>
>>>>>> Counter question: Why is it being set to false there? I see no reason at
>>>>>> all. On the same basis as here, I'd actually expect it to be set back to
>>>>>> true in such a case. Which, however, would be a benign change now that
>>>>>> all uses of the variable are properly qualified. Which in turn is why I
>>>>>> thought I'd leave that other place alone.
>>>>>
>>>>> I think I got confused... AFAIU, with the iommu disabled, snooping cannot
>>>>> be enforced, i.e. iommu_snoop=true means that snooping is enforced by the
>>>>> iommu (that's why we need to check that the iommu is enabled for the
>>>>> guest). So if the iommu is disabled, how can iommu_snoop be true?
>>>>
>>>> For a non-existent (or disabled) IOMMU the value of this boolean simply
>>>> is irrelevant. Or to put it in other words, when there's no active
>>>> IOMMU, it doesn't matter whether it would actually snoop.
>>>
>>> The variable iommu_snoop is initialized to true.
>>> Also, the above change does not prevent it from being overwritten
>>> through the cmdline iommu param in iommu_setup().
>>
>> Command line parsing happens earlier (and in parse_iommu_param(), not in
>> iommu_setup()). iommu_setup() can further overwrite it on its error path,
>> but as said that's benign then.
> 
> You are right. I misunderstood.
> 
>>
>>> I'm afraid I still cannot understand why the change above is needed.
>>
>> When using an AMD IOMMU, with how things work right now the variable ought
>> to always be true (hence why I've suggested that when !INTEL_IOMMU, this
>> simply become a #define to true). See also Andrew's comments here and/or
>> on your patch.
> 
> Ok I see, so this change is specific to AMD iommu and when iommu_snoop 
> becomes a #define, this change won't be needed anymore, right?

Well, the (source) code change will still be needed; it'll simply be
compiled out for the case where iommu_snoop is a #define (which it
looks like we agree it will be when !INTEL_IOMMU).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 14:25:31 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175752-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175752: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:host-install(5):broken:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=229ebd517b9df0e2d2f9e3ea50b57ca716334826
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Jan 2023 14:25:28 +0000

flight 175752 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175752/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl           5 host-install(5)        broken REGR. vs. 175746
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  229ebd517b9df0e2d2f9e3ea50b57ca716334826
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    0 days
Failing since        175748  2023-01-12 20:01:56 Z    0 days    2 attempts
Testing same since   175752  2023-01-13 07:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-step test-armhf-armhf-xl host-install(5)

Not pushing.

------------------------------------------------------------
commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRSMR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 14:36:22 2023
Message-ID: <9c7cd0e7-c1b5-322e-b89b-b4927ea568bd@gmail.com>
Date: Fri, 13 Jan 2023 16:36:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <6596d648-6400-7907-bc21-8074dc244247@gmail.com>
 <88e3ec77-587a-ae68-a634-fed1fa917cd7@suse.com>
 <b76a7834-9868-c5c2-e058-89911a552c80@gmail.com>
 <512d8768-28f6-d9d6-c1cc-18c5fbf2a636@suse.com>
 <4f1d289a-7c3b-c4a1-34bc-1e8bd62a416a@gmail.com>
 <da973e5a-3a1b-3e99-ebf9-e462915eb338@suse.com>
 <8e1ecdf9-c5b2-2977-b4fc-a64cf04c765b@gmail.com>
 <1fc41018-b27f-61f5-632d-3a36e5460590@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <1fc41018-b27f-61f5-632d-3a36e5460590@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/13/23 16:22, Jan Beulich wrote:
> On 13.01.2023 14:53, Xenia Ragiadakou wrote:
>> On 1/13/23 15:24, Jan Beulich wrote:
>>> On 13.01.2023 14:07, Xenia Ragiadakou wrote:
>>>> On 1/13/23 14:12, Jan Beulich wrote:
>>>>> On 13.01.2023 12:55, Xenia Ragiadakou wrote:
>>>>>> On 1/13/23 11:53, Jan Beulich wrote:
>>>>>>> On 13.01.2023 10:34, Xenia Ragiadakou wrote:
>>>>>>>> On 1/13/23 10:47, Jan Beulich wrote:
>>>>>>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>>>>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>>>>>>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>>>>>>>>           if ( !acpi_disabled )
>>>>>>>>>           {
>>>>>>>>>               ret = acpi_dmar_init();
>>>>>>>>> +
>>>>>>>>> +#ifndef iommu_snoop
>>>>>>>>> +        /* A command line override for snoop control affects VT-d only. */
>>>>>>>>> +        if ( ret )
>>>>>>>>> +            iommu_snoop = true;
>>>>>>>>> +#endif
>>>>>>>>> +
>>>>>>>>
>>>>>>>> Why is iommu_snoop forced here when the IOMMU is not enabled?
>>>>>>>> This change is confusing because later on, in iommu_setup(), iommu_snoop
>>>>>>>> will be set to false in the case of !iommu_enabled.
>>>>>>>
>>>>>>> Counter question: Why is it being set to false there? I see no reason at
>>>>>>> all. On the same basis as here, I'd actually expect it to be set back to
>>>>>>> true in such a case. Which, however, would be a benign change now that
>>>>>>> all uses of the variable are properly qualified. Which in turn is why I
>>>>>>> thought I'd leave that other place alone.
>>>>>>
>>>>>> I think I got confused... AFAIU, with the IOMMU disabled, snooping cannot
>>>>>> be enforced, i.e. iommu_snoop=true translates to "snooping is enforced by
>>>>>> the IOMMU" (that's why we need to check that the IOMMU is enabled for the
>>>>>> guest). So if the IOMMU is disabled, how can iommu_snoop be true?
>>>>>
>>>>> For a non-existent (or disabled) IOMMU the value of this boolean simply
>>>>> is irrelevant. Or to put it in other words, when there's no active
>>>>> IOMMU, it doesn't matter whether it would actually snoop.
>>>>
>>>> The variable iommu_snoop is initialized to true.
>>>> Also, the above change does not prevent it from being overwritten
>>>> through the cmdline iommu param in iommu_setup().
>>>
>>> Command line parsing happens earlier (and in parse_iommu_param(), not in
>>> iommu_setup()). iommu_setup() can further overwrite it on its error path,
>>> but as said that's benign then.
>>
>> You are right. I misunderstood.
>>
>>>
>>>> I'm afraid I still cannot understand why the change above is needed.
>>>
>>> When using an AMD IOMMU, with how things work right now the variable ought
>>> to always be true (hence why I've suggested that when !INTEL_IOMMU, this
>>> simply become a #define to true). See also Andrew's comments here and/or
>>> on your patch.
>>
>> Ok, I see. So this change is specific to the AMD IOMMU, and when iommu_snoop
>> becomes a #define, this change won't be needed anymore, right?
> 
> Well the (source) code change will still be needed; it'll simply be
> compiled out for the case where iommu_snoop is a #define (which it
> looks like we agree it will be when !INTEL_IOMMU).

Yes. Actually, I was thinking of moving the setup of iommu_snoop out of 
the X86 #ifdef and making it depend only on INTEL_IOMMU, since for !X86 
it only matters that it is defined.

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 14:59:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 14:59:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477405.740114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGLWD-0000pS-3V; Fri, 13 Jan 2023 14:58:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477405.740114; Fri, 13 Jan 2023 14:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGLWD-0000pL-0V; Fri, 13 Jan 2023 14:58:57 +0000
Received: by outflank-mailman (input) for mailman id 477405;
 Fri, 13 Jan 2023 14:58:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGLWB-0000pF-AA
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 14:58:55 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061.outbound.protection.outlook.com [40.107.21.61])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb1ddf31-9352-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 15:58:52 +0100 (CET)
Received: from FR3P281CA0126.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:94::16)
 by GV1PR08MB8009.eurprd08.prod.outlook.com (2603:10a6:150:9b::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 14:58:50 +0000
Received: from VI1EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:94:cafe::1) by FR3P281CA0126.outlook.office365.com
 (2603:10a6:d10:94::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Fri, 13 Jan 2023 14:58:49 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT014.mail.protection.outlook.com (100.127.145.17) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 14:58:49 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Fri, 13 Jan 2023 14:58:48 +0000
Received: from 3965f8f287ee.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A9F2DF37-6BD4-49C4-B726-5435679549B2.1; 
 Fri, 13 Jan 2023 14:58:41 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3965f8f287ee.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 14:58:41 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB8658.eurprd08.prod.outlook.com (2603:10a6:20b:564::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 14:58:40 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 14:58:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb1ddf31-9352-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gPUmg80n0eB51azLxyi5e7OIc/4qg3yfsmv3NZ6ZT8c=;
 b=bqJVFPZBL0q2F5ArHaZpYewyM09Y6+Q0PsGrHeiINCC4z1jjMp7oWjFak/cKi1OuHQ6GgMC5pc2pObz1SU/7AzkIYpc0eSK/CBy/lbsMwKJO7WfeULu6GhPOS5cRNpYUYjLpZh95HJBIppbZdPrdMXq4GLVztj1AEL70btl/jGs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4852022f2091c209
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F/F8stLFm3SmCDoQh30GV6qqxr8OoteSBbEbDAnS62QCrZ9XL5b2TKMZMjw0Cvi5ChZIqIT74wkb2/T/xTEnds/SYzp7vcLUCs7Y6pLVN80I0r8q5KLHYu3povfEwSVtEFh/YldcHzxjVkCDcoBm5YAXUzjIXJ02o6fIW1bCTEsSodTj+i5Ij5W2ukN7HoD3rg8Ygithssvj//YM1/LeblqEAUJBz66LRmZZ6XWrmf5hm7hgE35OpNCk+yCMoXEzgpkaxK8i+kfziIy2Hq3JdCVwAxi9jCWhoEkgzP13894HpzihFv9WGFOfnHC8o7+fvUsnsl0GH3NyRv/KJ3vYMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gPUmg80n0eB51azLxyi5e7OIc/4qg3yfsmv3NZ6ZT8c=;
 b=hlk2oOFlbkpbKwyWae4OgTi4qP3MKiE7sn8ziF7QbwNoVW+a8R549FtyLPmlbDqI7tRLqGMEtNLc13o6Y2a9nDoZmbJUxsM0ti0iwO3ZP06nMMDCG49dZnnEufzeqdpKIaq4dS1mtAIN+/HFSivdjTO1u64A4b2dnr2MhISJVWgXk+nvQni7xeeYPMaBo4sh3sbffBdRsmOoPwGL5PxxMg2fVQ5IaiaKva8hyu3Bg9hIZRTuPUmnHY2NPzcwJMlt9/z2Qt84iaBV6l+0y0AWvGbfprROuZnZjWSg42c3gI+U15rxLTpFOK/Mu9u1QADN4quAkn5KM/2nBeiA75rkBQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gPUmg80n0eB51azLxyi5e7OIc/4qg3yfsmv3NZ6ZT8c=;
 b=bqJVFPZBL0q2F5ArHaZpYewyM09Y6+Q0PsGrHeiINCC4z1jjMp7oWjFak/cKi1OuHQ6GgMC5pc2pObz1SU/7AzkIYpc0eSK/CBy/lbsMwKJO7WfeULu6GhPOS5cRNpYUYjLpZh95HJBIppbZdPrdMXq4GLVztj1AEL70btl/jGs=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to
 load Xen
Thread-Topic: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to
 load Xen
Thread-Index: AQHZJzd+ffGmBsJNbUGFhc+Z6Z3UHa6ccI0A
Date: Fri, 13 Jan 2023 14:58:39 +0000
Message-ID: <859E7F71-489B-4DD3-A4B2-9AD0DE19837D@arm.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-10-julien@xen.org>
In-Reply-To: <20230113101136.479-10-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB8658:EE_|VI1EUR03FT014:EE_|GV1PR08MB8009:EE_
X-MS-Office365-Filtering-Correlation-Id: c5ab2b11-75e5-4243-47c7-08daf576ada5
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Bbg3jPJx2o/rkLpGJCUuTrlXFiYC25euffCsYOOFGQycz+qzNvb7tuTjiiQwrbeD3Aq/6rQNTFEAbTikcQqcXi7w3s2CGIkcJEsSPaR0cTvaF1mUIm0qJ8at4PlgwIFoZQO63XigRXQroE9j0t8wlstQlC3ldU7W03pFla0o/4BpcHL00CuYqO9fyjX7ewaYoaEDi8v9IkPYRrGYUc+rRep04NKX4DLi95sQYNzfEMaDueaO5QlHYagYzVWnfwu5i4BSLEsdish1F8GqIgkrVXspjJSa4TAyXmw36oYV956mEC5zzCe9brQAKEvuJW7O80QWCOtNTtRxTeP7t9YhazyjXy5ANAQCSxLqgbTD8PwSPTknKWIP216dWUL90LQ1OaHl9JIvr00hlv1lnafC1Asad85ud3nY2n7zoBiKyxCyR48UjHYSRVenZUr9wxBlQeJ1WISE+zw+TaiEFmB+p94XTtAG+1rmEksxVQ4PSRVS7pwq9OXWafaGfgnsKoC0F2JCPJMhNnxvI5lvF2zhtUS8MFcMqzeeHtKI9Qji1jHEu81B0kuCKdu4X4yf44ap5+o90jYyAMyJ4r8r+Ez9VVn/TMO4Q0UQOg004O0wb5wNpIDc+Kj15EiAexwtUx+PSBYyBweVBh06+7aAnp2hH8/8y+ZD90kVqKPM+HRGtZ5KboyX2CPnWc8C8QEtIp+3S0Yxyj8NhSUQS/w2+gVhSwLhVKzfILu4VwL4z106d7Q=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(136003)(366004)(346002)(376002)(39860400002)(396003)(451199015)(5660300002)(33656002)(53546011)(316002)(2616005)(2906002)(6506007)(54906003)(36756003)(122000001)(38100700002)(71200400001)(86362001)(6486002)(6512007)(186003)(478600001)(26005)(91956017)(8936002)(76116006)(4326008)(6916009)(66446008)(8676002)(66946007)(64756008)(66476007)(38070700005)(66556008)(41300700001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <E7E633D68D0614429767735F154981A0@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8658
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4a112096-14f2-4fbd-6e9e-08daf576a809
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BQ5nC9yweiVmiKykkRAL2GEs7Cy+z2c3j9ZZIFaLpIYk6dM8lledup9xLTdTXj1K0vqQCRjygnRLHhDGOBp5DM0TMhpd6k6WDAcv5PwUdva4HV/09uUIs2n1KYdY/MYCQDpAhoI7ox85SnSFdHh9gojTgMRTVnZ6bQD1KAA4gryS4S682EKnVKGH0bE0L4bXogrbPVpoyixpLMzKubiREF4JCsm7RiyBjXmYh8ZbZRuV/uhmLqpFvjmV/oZv8F34oxBkP5GBB/eaoe3e/u/mLL1YwbBu1K61tK0sEJz+/njfxvEI6lFhD8WDPHfEXjdR97vTipFTCgYzOs+GOlPArJxd4zlj8kXddOwdWaZKwZUd7DanPAvy5+BNMjwGb1PP+7hFcEQrJQeDukwbG/XPV3AJXstinQE7qrukeZiLjsqHEYzlCD0pUMqOm3NygJn5QugAJlpGAR9u3Jikh17zW1jEOnngE1iT4IgHg4fhsm5q6P3x31laE2896Oy0WfzSGA+T8f2zehuPP/7Ta2APCj1xU3jb3C5nSiE4FP3T+4Z26Dn41T039QbOeR7BrC7VFtss/qktcnAytdl8VKkiSeYzKatpPnJkb2MEGiUzBx4XnLA0CWuNKnWGTqlKtiqTHjXw/pidtcz+/hfOx8DA0+sy1S0iTN8/1aCTxrFcgXdiJYPvhn5Yw0gPeTTOy549elQmAFm8ijOMbgsIf2UI+g==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(39860400002)(396003)(136003)(451199015)(36840700001)(40470700004)(46966006)(478600001)(36756003)(86362001)(70586007)(33656002)(40460700003)(8676002)(70206006)(4326008)(41300700001)(82740400003)(81166007)(356005)(36860700001)(54906003)(107886003)(186003)(6486002)(6506007)(53546011)(82310400005)(6862004)(26005)(5660300002)(316002)(47076005)(8936002)(40480700001)(2906002)(2616005)(336012)(6512007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 14:58:49.3489
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c5ab2b11-75e5-4243-47c7-08daf576ada5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8009



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, bootloaders can load Xen anywhere in memory but the
> region 2MB - 4MB. While I am not aware of any issue, we have no way
> to tell the bootloader to avoid that region.
> 
> In addition to that, in the future, Xen may grow over 2MB if we
> enable feature like UBSAN or GCOV. To avoid widening the restriction
> on the load address, it would be better to get rid of it.
> 
> When the identity mapping is clashing with the Xen runtime mapping,
> we need an extra indirection to be able to replace the identity
> mapping with the Xen runtime mapping.
> 
> Reserve a new memory region that will be used to temporarily map Xen.
> For convenience, the new area is re-using the same first slot as the
> domheap which is used for per-cpu temporary mapping after a CPU has
> booted.
> 
> Furthermore, directly map boot_second (which cover Xen and more)
> to the temporary area. This will avoid to allocate an extra page-table
> for the second-level and will helpful for follow-up patches (we will
> want to use the fixmap whilst in the temporary mapping).
> 
> Lastly, some part of the code now needs to know whether the temporary
> mapping was created. So reserve r12 to store this information.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ----

Hi Julien,

> 
> +/*
> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
> + *
> + * Clobbers r0 - r1

NIT: r0 - r3?

> + */
> +remove_temporary_mapping:
> +        /* r2:r3 := invalid page-table entry */
> +        mov   r2, #0
> +        mov   r3, #0
> +
> +        adr_l r0, boot_pgtable
> +        mov_w r1, TEMPORARY_XEN_VIRT_START
> +        get_table_slot r1, r1, 1     /* r1 := first slot */
> +        lsl   r1, r1, #3             /* r1 := first slot offset */
> +        strd  r2, r3, [r0, r1]
> +
> +        flush_xen_tlb_local r0
> +
> +        mov  pc, lr
> +ENDPROC(remove_temporary_mapping)
> +

The rest looks good to me, I've also built for arm64/32 and test this patch on fvp aarch32 mode,
booting Dom0 and creating/running/destroying some guests.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>

Cheers,
Luca


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 15:23:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 15:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477411.740124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGLtf-00045s-0Z; Fri, 13 Jan 2023 15:23:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477411.740124; Fri, 13 Jan 2023 15:23:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGLte-00045l-U4; Fri, 13 Jan 2023 15:23:10 +0000
Received: by outflank-mailman (input) for mailman id 477411;
 Fri, 13 Jan 2023 15:23:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGLtd-00045Z-8j; Fri, 13 Jan 2023 15:23:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGLtd-0006G7-5X; Fri, 13 Jan 2023 15:23:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGLtc-000456-Qh; Fri, 13 Jan 2023 15:23:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGLtc-0008BO-Q8; Fri, 13 Jan 2023 15:23:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eNw71msEf3Et1HXskS/sDm8oxe37CZ34hKZf74xR174=; b=fnGFoVO2HaMqY3zt/EUSFwzgUZ
	jRRV/4PXMpn0YXTGBkTUmajK2/IhZJk3JOqSGm9FnjIrovUWMvvD02C1cnIyd/nKmT8JkfXPvg82T
	S3yvgXP4bFZ89QLchJkoISNT8Gehfq3AJM2K0RCtS719s61Po3GcxsAt0T7G7wfM0QJI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175749-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175749: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-arndale:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl-vhd:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-libvirt:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl-cubietruck:host-install(5):broken:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-arm64:xen-build:fail:regression
    xen-unstable:build-arm64-xsm:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:test-armhf-armhf-xl-credit1:debian-install:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:test-armhf-armhf-xl-credit2:debian-install:fail:regression
    xen-unstable:test-armhf-armhf-xl:debian-install:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:host-install(5):broken:allowable
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Jan 2023 15:23:08 +0000

flight 175749 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175749/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-libvirt-qcow2    <job status>                 broken
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 test-armhf-armhf-xl-arndale   5 host-install(5)        broken REGR. vs. 175734
 test-armhf-armhf-libvirt-raw  5 host-install(5)        broken REGR. vs. 175734
 test-armhf-armhf-xl-vhd       5 host-install(5)        broken REGR. vs. 175734
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)      broken REGR. vs. 175734
 test-armhf-armhf-libvirt      5 host-install(5)        broken REGR. vs. 175734
 test-armhf-armhf-xl-multivcpu  5 host-install(5)       broken REGR. vs. 175734
 test-armhf-armhf-xl-cubietruck  5 host-install(5)      broken REGR. vs. 175734
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-arm64                   6 xen-build                fail REGR. vs. 175734
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 test-armhf-armhf-xl-credit1  12 debian-install           fail REGR. vs. 175734
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175734
 test-armhf-armhf-xl-credit2  12 debian-install           fail REGR. vs. 175734
 test-armhf-armhf-xl          12 debian-install           fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      5 host-install(5)        broken REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    1 days
Failing since        175739  2023-01-12 09:38:44 Z    1 days    2 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     broken  
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      broken  
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)
broken-step test-armhf-armhf-libvirt host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-xl-cubietruck host-install(5)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to go to C environment
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 15:38:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 15:38:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477429.740153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGM87-0006Ss-TF; Fri, 13 Jan 2023 15:38:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477429.740153; Fri, 13 Jan 2023 15:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGM87-0006Sl-QV; Fri, 13 Jan 2023 15:38:07 +0000
Received: by outflank-mailman (input) for mailman id 477429;
 Fri, 13 Jan 2023 15:38:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGM86-0006Gn-QZ
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 15:38:07 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2074.outbound.protection.outlook.com [40.107.247.74])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 446e8b4f-9358-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 16:38:03 +0100 (CET)
Received: from FR0P281CA0098.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a9::8) by
 DBBPR08MB6220.eurprd08.prod.outlook.com (2603:10a6:10:205::5) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13; Fri, 13 Jan 2023 15:37:56 +0000
Received: from VI1EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a9:cafe::55) by FR0P281CA0098.outlook.office365.com
 (2603:10a6:d10:a9::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Fri, 13 Jan 2023 15:37:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT011.mail.protection.outlook.com (100.127.144.181) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 15:37:55 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Fri, 13 Jan 2023 15:37:54 +0000
Received: from 8a3998fc363e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1395C56A-9C20-4941-A1BE-010D949EB51C.1; 
 Fri, 13 Jan 2023 15:37:44 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8a3998fc363e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 15:37:44 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAXPR08MB6432.eurprd08.prod.outlook.com (2603:10a6:102:154::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 15:37:41 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 15:37:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 446e8b4f-9358-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ShCHXF58vn4RgDMtCTcfwMPk5Oqhr6eypMGbteKbhvs=;
 b=N4vochqwh0u+EJ/AgKA9HNInlySN5USaOoPhhyG5sr1ywmJnH/9XEsHmPUoi2v+NEBHuTpSByikIZ4Lx6JCy5av7+AN/92ffi4ylNDvTvCfakFIe64v9OxovlySqcgChfO7cjYwEG+vj3YXmqF3yvRM6wjxyZqVLM/jPNov/w6M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 43b8b73a28a68a60
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YhnqlZp9v9qGrMq7I2tgdcIwVGvMrgKYCyTU+e467GBp+r8ilNGG05DLu8TvE92o4M2M4mR38MXMcUzh65brIiWCVtqhctnt1Lv6ZkKuzm1uDvEGyRCuFJNkbQYvh7i2ekFNDkRDZrEz1oTy6LP+ekCplnTnkarKz/92CDYM/vt7NOKAaD4F2VAZjP7Gvq/kIl9DlFcSM6J38tvaHS0qTbYrTZRKbj62Dzo2B6RRXP2LsJoZwy0W1G7pqGEueewUS4vf5tgO6KPeeQMHasgU+Xf+Jdv1ujO+3abbJ7BPTmL833MRLO7PmzIs71XP0OD7dgQ2ULdRdIuhEIx1450fAA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ShCHXF58vn4RgDMtCTcfwMPk5Oqhr6eypMGbteKbhvs=;
 b=oJa7ue/cUW7kMpNA/vnwwmpJYcI/kqkoii4508KGknF4/gqR0CDA/8A4ChiWKXS3z/Sp2j7sExi3X7IMDtGz9EAkSEYq5BfbuDO+PeLXahjh0OhZb/qy4nluZudGkuc7tQLbC0hwM2Tgty5fuYogZ3X34Yt482tV8n5n9DgS+Hwip4/7K5JyJKwmtcTmiKMQ7MjShfsfEIWAL0kgpr5QNICLFb5AVnmwcZxLlyDrHZYJxt9V42pPgddp0e3ddf291I5kG42Cnb69ovcTwLHrPkeE1gsW4e40BTNf3u76UDUU1Kpalb65zvUfgAtsGYk8/+HbXK6s6arH8emceslXfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ShCHXF58vn4RgDMtCTcfwMPk5Oqhr6eypMGbteKbhvs=;
 b=N4vochqwh0u+EJ/AgKA9HNInlySN5USaOoPhhyG5sr1ywmJnH/9XEsHmPUoi2v+NEBHuTpSByikIZ4Lx6JCy5av7+AN/92ffi4ylNDvTvCfakFIe64v9OxovlySqcgChfO7cjYwEG+vj3YXmqF3yvRM6wjxyZqVLM/jPNov/w6M=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary
 mapping
Thread-Topic: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary
 mapping
Thread-Index: AQHZJzoSeyoTqT1kWEmelvocRfmip66ce3AA
Date: Fri, 13 Jan 2023 15:37:41 +0000
Message-ID: <01DE3414-4523-4029-AB8D-A1F6DEF145B3@arm.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-11-julien@xen.org>
In-Reply-To: <20230113101136.479-11-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAXPR08MB6432:EE_|VI1EUR03FT011:EE_|DBBPR08MB6220:EE_
X-MS-Office365-Filtering-Correlation-Id: 073d67c2-77ec-4ed5-5efe-08daf57c23e1
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 YIsuJGnz3v0yxJcu6DRQvT/ngFHMUzcV6yvnOe5qp0w01E7qi64xZOtPIxY8cQZRpL3d+8yy6oSPmIMw8yXUvZN4Z/KA6t01VmSP7Qp8YheJG6rG4LFRp/u/K6HOmcOpOaxRofT/uw3VLbLgI6AqQaIcXZ54HFfy30HnzAf0xSW894FNDY/xgcrkAT/wxwMZJteL0Anvig4sOnr0UKs9GD0p36ubSgFyahY0MaTWGQTNOKh1NvhCHSvSE4N9G80AwO1vVjKV6L6j7zhhw0NJaKLO2sLHPRwVy2jMD2Pq9amiPAa65wOZTrhpq7VNG0XVF2JEb9+dyUMW9usToZ3acuMra46/k6UGQ++TR8yZJuH8MZbWI2ExnDpU+UpAXlkFOVR76YudStZGC07xkGEig/iDVetiEWE8X8nNBl13tSKvJ5r8nRAGV6CjQEIvw180VPTCXq36UScBjoqDf1xTLvIHD2uZxVLh57AWv7iTows5oSFG4icOW747tLKP2a53qckVrWIruB6eGJ8ndr5WWnbLF5DdQi9mWMHK78O0072SN1txir9JiXIXis4xtLzudNKiumfSKF3aHFaIkP83DzSLsIrmpOVx4gzxzPP4eaNvR7ZaIYHag0ZeOmlDDOKtvShogQ0et9R5K8amP2tc7EZE2MsOS/KdWmP60/a6qHEJcBvZY8cscNwlBLu+CKuMmqUMkAq/o34WDe0jlXN99ffWur2XWSRmAdyXB+CahxcFt9/mRsTHL9GcAXxT2ajiWwGgMzXq8kj2rspMATYyGXflMl8kJCj4QFE8nzM7eoA=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(376002)(136003)(366004)(346002)(39860400002)(396003)(451199015)(36756003)(6506007)(6512007)(186003)(53546011)(33656002)(4744005)(6486002)(71200400001)(38070700005)(26005)(122000001)(38100700002)(86362001)(83380400001)(2616005)(478600001)(2906002)(8936002)(5660300002)(76116006)(316002)(66946007)(91956017)(6916009)(54906003)(66556008)(66476007)(66446008)(41300700001)(8676002)(4326008)(64756008)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <B990AE93D746CC47B74BD4202A9EE9C8@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6432
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	628b87d6-0b62-41ff-c6be-08daf57c1b8c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NagYE5yHPeaD8AqH8ovi/PAGUZs0uRQKsM7IjLw8NkbM3pfIAbOwcasWIr1YL6viXZV5JQUA7H3CwRGb/jZoKdfQ+7Nd6pbm+np4XHVLG0yAUn8D0hpRSzUKX+pgQo/vSDJ7O205muRAp+7jd+d08vHmzM0D8VZBlaNPiKv2pR+GwweG41YQ8Ctg74sjIn1fXiSfLmSxaNBCLnXJ9EgerfqWEtYA9z5BA52mH8nnMIoNPNpzOqZzVQZ3v33uCEGkAgZgbbYGtd0Cn76cTpzTSYQXdWIrRtjLAwr2Eb3Ro7qbuk1BRIfqEOVK7DKmiB9ikZsT4UfQwmQHYLRdnPsQkEs4PGenFuKVSH+FKXCqJkN1UH/am1r8XQ6m3/nM4mZYOJEif8ghnCf6AuoEMghryi2dD2wMSosk+52fdyBLpVz7LlpNSlVSyduThhxZbroLD/WKyzSG8+K83LLmjFKf3LHB3IQ/NHpllbEYedEebLUBF9h0xDJRyKqj50XN96e8lPMAgPTDSv/mABK8D9BU+aKAgGJPsN2NIlce6n/GPpwzvfsQdtT5qeEkF080brlKpllH4YyysZU5eCxv3yaU2erK8KXb2ArEtLXU+P/jv0n4rk3jSDgJPe4lBhxTi5SYl8yOsSzj3BSU3H2b/Gskf777s4gKUugnUl8NjHFD+GSSV1SbE1CsrthBvBOyx6+iiKSp7QjzDDTHLXoH4e+JFQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(39860400002)(376002)(346002)(136003)(451199015)(40470700004)(36840700001)(46966006)(70206006)(70586007)(54906003)(8676002)(4326008)(6862004)(8936002)(316002)(478600001)(4744005)(5660300002)(6506007)(2906002)(6512007)(26005)(186003)(2616005)(53546011)(336012)(47076005)(107886003)(83380400001)(33656002)(40480700001)(36860700001)(40460700003)(81166007)(82310400005)(356005)(86362001)(6486002)(41300700001)(36756003)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 15:37:55.2131
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 073d67c2-77ec-4ed5-5efe-08daf57c23e1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6220

DQoNCj4gT24gMTMgSmFuIDIwMjMsIGF0IDEwOjExLCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4u
b3JnPiB3cm90ZToNCj4gDQo+IEZyb206IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+
DQo+IA0KPiBBdCB0aGUgbW9tZW50LCB0aGUgdGVtcG9yYXJ5IG1hcHBpbmcgaXMgb25seSB1c2Vk
IHdoZW4gdGhlIHZpcnR1YWwNCj4gcnVudGltZSByZWdpb24gb2YgWGVuIGlzIGNsYXNoaW5nIHdp
dGggdGhlIHBoeXNpY2FsIHJlZ2lvbi4NCj4gDQo+IEluIGZvbGxvdy11cCBwYXRjaGVzLCB3ZSB3
aWxsIHJld29yayBob3cgc2Vjb25kYXJ5IENQVSBicmluZy11cCB3b3Jrcw0KPiBhbmQgaXQgd2ls
bCBiZSBjb252ZW5pZW50IHRvIHVzZSB0aGUgZml4bWFwIGFyZWEgZm9yIGFjY2Vzc2luZw0KPiB0
aGUgcm9vdCBwYWdlLXRhYmxlIChpdCBpcyBwZXItY3B1KS4NCj4gDQo+IFJld29yayB0aGUgY29k
ZSB0byB1c2UgdGVtcG9yYXJ5IG1hcHBpbmcgd2hlbiB0aGUgWGVuIHBoeXNpY2FsIGFkZHJlc3MN
Cj4gaXMgbm90IG92ZXJsYXBwaW5nIHdpdGggdGhlIHRlbXBvcmFyeSBtYXBwaW5nLg0KPiANCj4g
VGhpcyBhbHNvIGhhcyB0aGUgYWR2YW50YWdlIHRvIHNpbXBsaWZ5IHRoZSBsb2dpYyB0byBpZGVu
dGl0eSBtYXANCj4gWGVuLg0KPiANCj4gU2lnbmVkLW9mZi1ieTogSnVsaWVuIEdyYWxsIDxqZ3Jh
bGxAYW1hem9uLmNvbT4NCj4gDQo+IC0tLS0NCg0KUmV2aWV3ZWQtYnk6IEx1Y2EgRmFuY2VsbHUg
PGx1Y2EuZmFuY2VsbHVAYXJtLmNvbT4NCg0KSeKAmXZlIGFsc28gYnVpbHQgZm9yIGFybTMyIGFu
ZCB0ZXN0IHRoaXMgcGF0Y2ggb24gZnZwIGFhcmNoMzIgbW9kZSwNCmJvb3RpbmcgRG9tMCBhbmQg
Y3JlYXRpbmcvcnVubmluZy9kZXN0cm95aW5nIHNvbWUgZ3Vlc3RzDQoNClRlc3RlZC1ieTogTHVj
YSBGYW5jZWxsdSA8bHVjYS5mYW5jZWxsdUBhcm0uY29tPg0KDQoNCg==


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 15:59:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 15:59:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477470.740181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGMSY-0001vU-3M; Fri, 13 Jan 2023 15:59:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477470.740181; Fri, 13 Jan 2023 15:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGMSX-0001vN-WC; Fri, 13 Jan 2023 15:59:14 +0000
Received: by outflank-mailman (input) for mailman id 477470;
 Fri, 13 Jan 2023 15:59:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGMSW-0001uz-1v
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 15:59:12 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2089.outbound.protection.outlook.com [40.107.20.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3798980f-935b-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 16:59:10 +0100 (CET)
Received: from FR3P281CA0149.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:95::20)
 by DU0PR08MB8496.eurprd08.prod.outlook.com (2603:10a6:10:403::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 15:58:57 +0000
Received: from VI1EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:95:cafe::aa) by FR3P281CA0149.outlook.office365.com
 (2603:10a6:d10:95::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Fri, 13 Jan 2023 15:58:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT062.mail.protection.outlook.com (100.127.145.26) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 15:58:56 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Fri, 13 Jan 2023 15:58:56 +0000
Received: from 81341a76693d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 54A43F98-019A-4BA8-A382-E0DA14D79A81.1; 
 Fri, 13 Jan 2023 15:58:45 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 81341a76693d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 15:58:45 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAVPR08MB8847.eurprd08.prod.outlook.com (2603:10a6:102:2fd::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 15:58:42 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 15:58:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3798980f-935b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=clfzVTHlDVmaPbdKsxMGI5DA11N6Bv2LitzaPGgp0Js=;
 b=gNtdpQ8b/CnURpuS5gqvPFCOm4M95Ip/Qy31IhMhw40SlgK/FaJBU3jf1djSqQ+skXIIxqcUak+f++6ZGZsVAUklJgjAFv+wQw2F3Z2GLlB+MYaWmzHOkU15bBEFb3nWSKMTyJW+liFtPTasj9IhVCvxQ73LF37slY3kMJvsEEI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: ef19e6cca0ba1596
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TV9NAC+i/YRM15z50dViSHOVRC/Xl0Abg6Y7+5Gy3cXXRoX9WBcGCuOmKxJsiiDjF6uSvGkthaQ5Oier1ya2zXTnDRi5C9RPkpJykXE4fF8pgKD0YR4M7/R+u7tNrP6Juh8cbVg4ldfn5DQxxudEP8mD91assmoC7vC7sD9Q6q8r3bamgZynynWjJ0IvwemSHBcwpL2IrhHgWDnWHQIAoYIe0ejBM5okk6ubFYpThc3tDFUQ7zhqV91OxWMChCYypVaeEEmhvn+osyruw18Wdp1USPMDd/+Dk1PiVG7aS3wX3c5fKgWoxlelp5mCZDi0BjP9OuTVG/rvMt2WqWE49A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=clfzVTHlDVmaPbdKsxMGI5DA11N6Bv2LitzaPGgp0Js=;
 b=N1fMGWb2SaYEvluVNemtrlkc38nrYU7WMLkXodPz+GnTobAhK+IhUVg4cgufhbME1CjBqF+alFvVz09pcNXSrxviOHPBI7UaR9rUnZaX7n7K+Etq9aduWm+i9o2Gf+wwNnBB14+jFPWhG3FgOIGdVHeV0/M4wL1dYTOB2+wy9ShWBh2rO2lI3jo/No2xSTJLmeep9R05KgSY9FwZEUGR+9I2oT/4XGhWSb8SWLpO1s9gY4/G/c70+eQVQ3BKDuMZgV1rWGFpRbiD55cCf3/SOpSmtxwoRgkrGUC6puqs0pOfTD4yrsA3UzmenA8oVPAL2y00PKJBSbgNXLlOuzRlQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=clfzVTHlDVmaPbdKsxMGI5DA11N6Bv2LitzaPGgp0Js=;
 b=gNtdpQ8b/CnURpuS5gqvPFCOm4M95Ip/Qy31IhMhw40SlgK/FaJBU3jf1djSqQ+skXIIxqcUak+f++6ZGZsVAUklJgjAFv+wQw2F3Z2GLlB+MYaWmzHOkU15bBEFb3nWSKMTyJW+liFtPTasj9IhVCvxQ73LF37slY3kMJvsEEI=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 11/14] xen/arm64: Rework the memory layout
Thread-Topic: [PATCH v4 11/14] xen/arm64: Rework the memory layout
Thread-Index: AQHZJzoOgS7RtMe29EyEfBRXKSrWh66cgU+A
Date: Fri, 13 Jan 2023 15:58:42 +0000
Message-ID: <0733C610-F5F4-48B3-A78D-5D9D22C96F0B@arm.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-12-julien@xen.org>
In-Reply-To: <20230113101136.479-12-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAVPR08MB8847:EE_|VI1EUR03FT062:EE_|DU0PR08MB8496:EE_
X-MS-Office365-Filtering-Correlation-Id: 5f5b7ba1-8350-4386-9b80-08daf57f13ae
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 kvL4L0HOoxRF1D4L/ACzp2YQJPs0YroA+E2AsEGva8jl8QVJ2IwF3fQ188HQoVTjTZkVL+gbxftV5Oc8WGum+IENvp13Jj9dG5MnWVGudCVcvSvduCqofq/sXvIAQw8Ze60w6U33sPod07ArNGU6DUZjobjAP+hXmr06lW5Meyalo/o5FPzXTLa71Qi+sv05+a/KSj2f8zMKmT1ELhKHv5L0fhLCpoKGPht+ya7u5C5a5z4BvSlBCcv5X9+Qz6vj0xp7/xb1b3Ok511CEoaqtXVP32/UbwcRgRk6/M45mm/YdjmU35Puwu3MsoLBfavK0gGIDVlrZvUH8oDJjGhtewCPQ6648ESkxrBTlON8V1P6vokEyhacS9Y1p9GqSZLxsaIXBfiFWm7pR47ATY8viTz/QQUlmOL6mLnT4BavT1TjAV2miyxA0NTNao4Jx2t0/qLSHK3C1LX3SmtTCCvrXzezUI8iBH9R1CeZWhCHs0RlzprX20C+rSR6lyuocCVtj52dxiBOiQ3aOoHXmKeBr3inXeZIciAvfQ0KciHsCIpo6woV2Cx7BKzo8rbgCIhG/vU6riTBW5d8TjN0OdW5hLgM7uGZMFMFICsUrIFSqn3eN7rBqfYdl2fs5Vq4mVulTvB5WEYsEME2Og+HouLl4DEHdbvrBq0GALL4bDcn6fYMsBGnyk1AA3Mx/1TLSCDAElbgdVXpLnlqERdfwVm5sKJf/kJrM72/ugLHugcGl34=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(366004)(39860400002)(136003)(346002)(376002)(396003)(451199015)(83380400001)(91956017)(33656002)(76116006)(64756008)(66446008)(4326008)(66556008)(8676002)(66476007)(66946007)(8936002)(36756003)(41300700001)(2616005)(71200400001)(6512007)(186003)(6486002)(38100700002)(26005)(478600001)(38070700005)(6916009)(122000001)(54906003)(6506007)(53546011)(316002)(86362001)(5660300002)(2906002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <790FCEA043C5C9478AA722FE69CB2527@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB8847
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	383ab280-6c50-4002-1a60-08daf57f0b3b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9lCaziK8+YVNX4jQKesJEEGnBiJFRXXrwhFskvz7DY8mXb46vocYGOht8n9WD1khgdRO/M0X15Jx7o/nJwQhCaslaoj7i+sF+B3d5gjz6Df0Ssla4pclib9kniOKCdiRTwid3O5AhqT6YzIu+joqpFtlr7npJLDOo6PnSb7/gcr8WGmp6OsF13dJSDBLooQJhplf35WxRmM0QpUSEWwLwXzPSItrnhVCtvFcUjJ2L6ml42DdIk6UNe170UhVVdRgmAbWBdHmeWJMIerOTVcwbPfEm7ihnC3abWgcgh2p3u6A4n77vzZ87uGUs0XF+psMCI1MMZQs2IvHAU11jYsJpYEHCDBiQXk18xL9FGxRmGcLWoDNZv89zFZe/xZltpkhZ6G1QWoNUSvBB8eRqP89tEKpKUopGJy47ZVroM3Gdjbjrf9OoNu+T0DgsaL3dJBcZ1zHEx+nqJV4LLx4UkHKBFCWY005VOFZlq0WMcLUQ3ir4sx2jJS162J/4e3cLTH1GAuCKHN5W4vB/Ql7pF6o+jpN18OPb+CvJBEs/Ko1P/MV3vXTFzq87zdqnkA/sHctNkd/VtqQxcchnwEJDz5bscdV9emNO21i+PhJT5hiK5eU2UKQUjfkpLHgZBEiym1+QORorCugwxWV1qSxisdZkavLZ05gdtdO2BbSR6GtmgBuoQQPdRHNVVTThVBPpUvg6WBq+JtKBgoJfS9Wyzdulw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(346002)(376002)(136003)(39860400002)(451199015)(40470700004)(46966006)(36840700001)(33656002)(40480700001)(40460700003)(47076005)(4326008)(336012)(41300700001)(8676002)(70586007)(83380400001)(5660300002)(70206006)(8936002)(6862004)(6486002)(478600001)(26005)(186003)(53546011)(6512007)(107886003)(2616005)(316002)(36756003)(356005)(82310400005)(54906003)(86362001)(2906002)(82740400003)(36860700001)(81166007)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 15:58:56.5114
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f5b7ba1-8350-4386-9b80-08daf57f13ae
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8496

DQoNCj4gT24gMTMgSmFuIDIwMjMsIGF0IDEwOjExLCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4u
b3JnPiB3cm90ZToNCj4gDQo+IEZyb206IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+
DQo+IA0KPiBYZW4gaXMgY3VycmVudGx5IG5vdCBmdWxseSBjb21wbGlhbnQgd2l0aCB0aGUgQXJt
IEFybSBiZWNhdXNlIGl0IHdpbGwNCj4gc3dpdGNoIHRoZSBUVEJSIHdpdGggdGhlIE1NVSBvbi4N
Cj4gDQo+IEluIG9yZGVyIHRvIGJlIGNvbXBsaWFudCwgd2UgbmVlZCB0byBkaXNhYmxlIHRoZSBN
TVUgYmVmb3JlDQo+IHN3aXRjaGluZyB0aGUgVFRCUi4gVGhlIGltcGxpY2F0aW9uIGlzIHRoZSBw
YWdlLXRhYmxlcyBzaG91bGQNCj4gY29udGFpbiBhbiBpZGVudGl0eSBtYXBwaW5nIG9mIHRoZSBj
b2RlIHN3aXRjaGluZyB0aGUgVFRCUi4NCj4gDQo+IEluIG1vc3Qgb2YgdGhlIGNhc2Ugd2UgZXhw
ZWN0IFhlbiB0byBiZSBsb2FkZWQgaW4gbG93IG1lbW9yeS4gSSBhbSBhd2FyZQ0KPiBvZiBvbmUg
cGxhdGZvcm0gKGkuZSBBTUQgU2VhdHRsZSkgd2hlcmUgdGhlIG1lbW9yeSBzdGFydCBhYm92ZSA1
MTJHQi4NCj4gVG8gZ2l2ZSB1cyBzb21lIHNsYWNrLCBjb25zaWRlciB0aGF0IFhlbiBtYXkgYmUg
bG9hZGVkIGluIHRoZSBmaXJzdCAyVEINCj4gb2YgdGhlIHBoeXNpY2FsIGFkZHJlc3Mgc3BhY2Uu
DQo+IA0KPiBUaGUgbWVtb3J5IGxheW91dCBpcyByZXNodWZmbGVkIHRvIGtlZXAgdGhlIGZpcnN0
IHR3byBzbG90cyBvZiB0aGUgemVyb2V0aA0KPiBsZXZlbCBmcmVlLiBYZW4gd2lsbCBub3cgYmUg
bG9hZGVkIGF0ICgyVEIgKyAyTUIpLiBUaGlzIHJlcXVpcmVzIGEgc2xpZ2h0DQo+IHR3ZWFrIG9m
IHRoZSBib290IGNvZGUgYmVjYXVzZSBYRU5fVklSVF9TVEFSVCBjYW5ub3QgYmUgdXNlZCBhcyBh
bg0KPiBpbW1lZGlhdGUuDQo+IA0KPiBUaGlzIHJlc2h1ZmZsZSB3aWxsIG1ha2UgdHJpdmlhbCB0
byBjcmVhdGUgYSAxOjEgbWFwcGluZyB3aGVuIFhlbiBpcw0KPiBsb2FkZWQgYmVsb3cgMlRCLg0K
PiANCj4gU2lnbmVkLW9mZi1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCj4g
LS0tLQ0KDQpSZXZpZXdlZC1ieTogTHVjYSBGYW5jZWxsdSA8bHVjYS5mYW5jZWxsdUBhcm0uY29t
Pg0KDQpJ4oCZdmUgYWxzbyBidWlsdCBmb3IgYXJtNjQgYW5kIHRlc3QgdGhpcyBwYXRjaCBvbiBm
dnAsIGJvb3RpbmcgRG9tMA0KYW5kIGNyZWF0aW5nL3J1bm5pbmcvZGVzdHJveWluZyBzb21lIGd1
ZXN0cw0KDQpUZXN0ZWQtYnk6IEx1Y2EgRmFuY2VsbHUgPGx1Y2EuZmFuY2VsbHVAYXJtLmNvbT4N
Cg0KDQo=


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 16:26:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 16:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477477.740192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGMt6-0005nA-7F; Fri, 13 Jan 2023 16:26:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477477.740192; Fri, 13 Jan 2023 16:26:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGMt6-0005n3-4P; Fri, 13 Jan 2023 16:26:40 +0000
Received: by outflank-mailman (input) for mailman id 477477;
 Fri, 13 Jan 2023 16:26:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGMt4-0005mx-Ro
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 16:26:38 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2085.outbound.protection.outlook.com [40.107.7.85])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0d1c8853-935f-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 17:26:37 +0100 (CET)
Received: from AM6PR05CA0022.eurprd05.prod.outlook.com (2603:10a6:20b:2e::35)
 by DBAPR08MB5575.eurprd08.prod.outlook.com (2603:10a6:10:1a6::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Fri, 13 Jan
 2023 16:26:32 +0000
Received: from VI1EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::ea) by AM6PR05CA0022.outlook.office365.com
 (2603:10a6:20b:2e::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Fri, 13 Jan 2023 16:26:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT027.mail.protection.outlook.com (100.127.144.103) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 16:26:31 +0000
Received: ("Tessian outbound baf1b7a96f25:v132");
 Fri, 13 Jan 2023 16:26:30 +0000
Received: from c07ecf60950f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6E0AB64B-D8A1-4DB7-A1A2-92EFAFE1BD74.1; 
 Fri, 13 Jan 2023 16:26:21 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c07ecf60950f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 16:26:21 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DB9PR08MB9540.eurprd08.prod.outlook.com (2603:10a6:10:451::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Fri, 13 Jan
 2023 16:26:16 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 16:26:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d1c8853-935f-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AaFig45RHzspCsEkQFelcF4LE8EvQLogngsVM32N+RU=;
 b=YgRGVKO1nKw1D6BpurIpbPTPsg3fDI6w4ahe0My5VlhebsaKVl6aKM6XIb5xOodex5RZMeMWtv+Qwn9JGsgdQNdXBy9a2twPF6lK3j9fLbKkNocZlvz3y5OBD406u0YYb/t5gfh0iIHGmrAXTSvHd5AEqanQQLkQFZbmdeJejak=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 58e900f2ef7e89df
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B/9kuoAUBPzFWSuSHK5+v7PW64sjuKqyfPdSzJRaRKQ1z6buN9nQawkvFBg4FqFBigtysuhG3uFMMx98mjvZTCwue3yaQ2O4h0/MF57MR+XVL9eQ4s9jSqdsUDtrnSp4yiQN/wfydGGQXy12ZBPSRzqcMrMW2xPgW20Ajp2hhzLtWO2743QFupILR/cRPP7SvoPZYioslGoigo6C98xkigsV53GgTgc5JxhMWljmYSzsj4ZU1THHVQCOI1U422kC++uFWWNKMG5aM7zyO9pPYkAHUoW0Zq4PVOj//YXz9uyh4zdkAuLieylsXtsQIdzeBDwbQbxJRFk7F15WO6y8Og==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AaFig45RHzspCsEkQFelcF4LE8EvQLogngsVM32N+RU=;
 b=KEywhFIONAWXj70Q6Nx3tGmnZsaSGG36RF8pW/lx6lGWHuWWWJ6mUpBs3z/Ysrt1Zu+H5pZs3fsonhmnXUAizSFbKkUP8dKj9xRkpKb9C3/CnBY65qYx8yHP811wfr99dFLf3WDJFQlrPmafwDYWL/7uVgjhMmW0/nOeE5VsJvuuRWWP0YU7l+OB+LInzjwY4TRtaOTcjgACmB7Nl76ZH7pkLm7XwNEW6OtHWwZ4BAcACpRCQSrFu4JuR7sYbw2w/wAVssCiGejkXGi/0cFQWR9B0nnLMGS4I5wXvjnJIK5ldRKlrA6UUHr9MaTjodjYmc5FYu+zkk/QRstcfnwgxw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AaFig45RHzspCsEkQFelcF4LE8EvQLogngsVM32N+RU=;
 b=YgRGVKO1nKw1D6BpurIpbPTPsg3fDI6w4ahe0My5VlhebsaKVl6aKM6XIb5xOodex5RZMeMWtv+Qwn9JGsgdQNdXBy9a2twPF6lK3j9fLbKkNocZlvz3y5OBD406u0YYb/t5gfh0iIHGmrAXTSvHd5AEqanQQLkQFZbmdeJejak=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Topic: [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Index: AQHZJzoTvCUimx3Q2EmG6peGw8mIYK6ciQOA
Date: Fri, 13 Jan 2023 16:26:16 +0000
Message-ID: <F2BF5056-664C-40E5-B7DD-AE158DE3CF67@arm.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-13-julien@xen.org>
In-Reply-To: <20230113101136.479-13-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DB9PR08MB9540:EE_|VI1EUR03FT027:EE_|DBAPR08MB5575:EE_
X-MS-Office365-Filtering-Correlation-Id: c794fc00-f2f5-4971-11be-08daf582ede1
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <CBACC3ED31B5D74BAA43ADB9CF9EB56F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9540
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	52702472-b9b9-4d45-4756-08daf582e4ed
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 16:26:31.0756
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c794fc00-f2f5-4971-11be-08daf582ede1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5575

> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
>
> On some platforms, the identity mapping may have to start at 0. If we always
> keep the identity region mapped, a NULL pointer dereference would lead to
> an access to a valid mapping.
>
> It would be possible to relocate Xen to avoid clashing with address 0.
> However, the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
>
> Two new external helpers are introduced:
>    - arch_setup_page_tables() will set up the page-tables so it is
>      easy to create the mapping afterwards.
>    - update_identity_mapping() will create/remove the identity mapping
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> ----

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also built for arm32/64 and tested this patch on FVP, booting Dom0
and creating/running/destroying some guests.

Tested-by: Luca Fancellu <luca.fancellu@arm.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 16:37:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 16:37:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477485.740203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGN3F-0007Ke-8M; Fri, 13 Jan 2023 16:37:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477485.740203; Fri, 13 Jan 2023 16:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGN3F-0007KX-5Y; Fri, 13 Jan 2023 16:37:09 +0000
Received: by outflank-mailman (input) for mailman id 477485;
 Fri, 13 Jan 2023 16:37:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LVbI=5K=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pGN3D-0007KR-RM
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 16:37:08 +0000
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com
 [66.111.4.29]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 83078aaa-9360-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 17:37:05 +0100 (CET)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id DDEE55C0059;
 Fri, 13 Jan 2023 11:37:03 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Fri, 13 Jan 2023 11:37:03 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 13 Jan 2023 11:37:02 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83078aaa-9360-11ed-91b6-6bf2151ebd3b
Date: Fri, 13 Jan 2023 17:36:58 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, regressions@lists.linux.dev
Subject: Re: S3 under Xen regression between 6.1.1 and 6.1.3
Message-ID: <Y8GIqncmmhFvJ2lB@mail-itl>
References: <Y8DIodWQGm99RA+E@mail-itl>
 <bdea54df-59dc-3d4d-dd0c-8c45403dea24@suse.com>
 <Y8FgQyVvpUXqumvS@mail-itl>
 <47ae3bbb-6468-4282-1789-72eedaa54814@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="dbTMTVjy0FuC/Odd"
Content-Disposition: inline
In-Reply-To: <47ae3bbb-6468-4282-1789-72eedaa54814@suse.com>


--dbTMTVjy0FuC/Odd
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 13 Jan 2023 17:36:58 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, regressions@lists.linux.dev
Subject: Re: S3 under Xen regression between 6.1.1 and 6.1.3

On Fri, Jan 13, 2023 at 03:20:37PM +0100, Juergen Gross wrote:
> On 13.01.23 14:44, Marek Marczykowski-Górecki wrote:
> > But, unrelated to this bug, it did get message like in https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg107609.html
> > (WARNING: CPU: 1 PID: 0 at arch/x86/mm/tlb.c:523 switch_mm_irqs_off+0x230/0x4a0)
> >
>
> Hmm, is applying the attached patch making any difference here?

No, it's still there.
Some further observations:
 - it happens only on first suspend after system startup (at least in a
   few attempts I made now)
 - it's logged during suspend, just after "Disabling non-boot CPUs", not
   resume

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--dbTMTVjy0FuC/Odd
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmPBiKoACgkQ24/THMrX
1yyd8QgAkXK6/TiUJCvW1+gpgjmZVdI+lp+aFyeu8Q1xDJQys+r2ngbEpNemxiPW
Kj0NPmj2pUO9KlXcEslvAd6E/38w/fHdwr6g2luKN7oU3vtLG0TIF+1ha/cLmqMq
7pwYrhCCtfIFx85LWoTk5ocgB1U5TNdbE7GCP3TtZLuqWRJbj6YRjVAGEpeN51Ju
KPS5kgai4JOqXJgtwQ6tISbPjEfyPaR+hyniziVO8uWSjyWkBw1aUqi8v7BK5Vsl
bCHn4Z8BnQFWGLPdq1u3UTUIp9SFLS6L+amdktHQOWKTiR2aLIdj7GxYJXBGt4KN
EdQJO5OmWBe1Fgtrc7ZG5dckfsTijQ==
=di4t
-----END PGP SIGNATURE-----

--dbTMTVjy0FuC/Odd--


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 16:51:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 16:51:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477492.740214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGNGm-0001FL-Do; Fri, 13 Jan 2023 16:51:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477492.740214; Fri, 13 Jan 2023 16:51:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGNGm-0001FE-Ap; Fri, 13 Jan 2023 16:51:08 +0000
Received: by outflank-mailman (input) for mailman id 477492;
 Fri, 13 Jan 2023 16:51:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGNGl-0001F8-Bc
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 16:51:07 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2054.outbound.protection.outlook.com [40.107.21.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 76222831-9362-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 17:51:02 +0100 (CET)
Received: from FR0P281CA0015.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:15::20)
 by AS8PR08MB8136.eurprd08.prod.outlook.com (2603:10a6:20b:561::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Fri, 13 Jan
 2023 16:50:53 +0000
Received: from VI1EUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:15:cafe::4b) by FR0P281CA0015.outlook.office365.com
 (2603:10a6:d10:15::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Fri, 13 Jan 2023 16:50:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT036.mail.protection.outlook.com (100.127.144.159) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 16:50:52 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Fri, 13 Jan 2023 16:50:52 +0000
Received: from 3fcef95ad4d8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8D287D5E-50E5-477C-A203-62FFAA9CC2EC.1; 
 Fri, 13 Jan 2023 16:50:41 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3fcef95ad4d8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 16:50:41 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB6551.eurprd08.prod.outlook.com (2603:10a6:20b:319::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Fri, 13 Jan
 2023 16:50:39 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 16:50:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76222831-9362-11ed-91b6-6bf2151ebd3b
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 308f1ba78188b80a
X-CR-MTA-TID: 64aa7808
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
Thread-Topic: [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
Thread-Index: AQHZJzoRxQQAKbugEU6tZrnWUCTSoK6cj9MA
Date: Fri, 13 Jan 2023 16:50:38 +0000
Message-ID: <0F9A8368-8C5F-4FFF-977D-2401BD81C39A@arm.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-14-julien@xen.org>
In-Reply-To: <20230113101136.479-14-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB6551:EE_|VI1EUR03FT036:EE_|AS8PR08MB8136:EE_
X-MS-Office365-Filtering-Correlation-Id: b05b17e3-0c5a-4772-d096-08daf5865503
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <9199AC8108E1AC409927014928DAF4DF@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6551
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	59062414-42c8-4d32-d1f4-08daf5864cd2
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 16:50:52.5988
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b05b17e3-0c5a-4772-d096-08daf5865503
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8136

> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
> still on.
>
> Switching TTBR is like replacing existing mappings with new ones. So
> we need to follow the break-before-make sequence.
>
> In this case, it means the MMU needs to be switched off while the
> TTBR is updated. In order to disable the MMU, we need to first
> jump to an identity mapping.
>
> Rename switch_ttbr() to switch_ttbr_id() and create a helper on
> top to temporarily map the identity mapping and call switch_ttbr()
> via the identity address.
>
> switch_ttbr_id() is now reworked to temporarily turn off the MMU
> before updating the TTBR.
>
> We also need to make sure the helper switch_ttbr() is part of the
> identity mapping. So move _end_boot past it.
>
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

The sequence looks OK to me, as does the reasoning about barriers and
register dependencies discussed in the previous version.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I’ve also built for arm32/64 and tested this patch on FVP, booting Dom0
and creating/running/destroying some guests.

Tested-by: Luca Fancellu <luca.fancellu@arm.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 17:18:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 17:18:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477498.740225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGNgY-0003yU-GA; Fri, 13 Jan 2023 17:17:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477498.740225; Fri, 13 Jan 2023 17:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGNgY-0003yN-DL; Fri, 13 Jan 2023 17:17:46 +0000
Received: by outflank-mailman (input) for mailman id 477498;
 Fri, 13 Jan 2023 17:17:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TZVY=5K=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pGNgW-0003yF-QB
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 17:17:45 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2faede74-9366-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 18:17:41 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pGNgZ-006HaL-9N; Fri, 13 Jan 2023 17:17:47 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 80FF7300094;
 Fri, 13 Jan 2023 18:17:32 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 59EA620A72A62; Fri, 13 Jan 2023 18:17:32 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2faede74-9366-11ed-91b6-6bf2151ebd3b
Date: Fri, 13 Jan 2023 18:17:32 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>, x86@kernel.org,
	Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>, mark.rutland@arm.com
Subject: Re: [RFC][PATCH 2/6] x86/power: Inline write_cr[04]()
Message-ID: <Y8GSLDhgMlE96P6+@hirez.programming.kicks-ass.net>
References: <20230112143141.645645775@infradead.org>
 <20230112143825.644480983@infradead.org>
 <Y8FZvLq+MeQ7A+lI@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Y8FZvLq+MeQ7A+lI@gmail.com>

On Fri, Jan 13, 2023 at 02:16:44PM +0100, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> > of dubious value in this code path, simply write the stored values.
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> >  arch/x86/power/cpu.c |    4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > --- a/arch/x86/power/cpu.c
> > +++ b/arch/x86/power/cpu.c
> > @@ -208,11 +208,11 @@ static void notrace __restore_processor_
> >  #else
> >  /* CONFIG X86_64 */
> >  	native_wrmsrl(MSR_EFER, ctxt->efer);
> > -	native_write_cr4(ctxt->cr4);
> > +	asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
> 
> >  #endif
> >  	native_write_cr3(ctxt->cr3);
> >  	native_write_cr2(ctxt->cr2);
> > -	native_write_cr0(ctxt->cr0);
> > +	asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
> 
> Yeah, so CR pinning protects against easily accessible 'gadget' 
> functions that exploits can call to disable HW protection features in the 
> CR register.
> 
> __restore_processor_state() might be such a gadget if an exploit can pass 
> in a well-prepared 'struct saved_context' on the stack.

Given the extent of saved_context, I think it's a total loss. The best we
can do is something like the last patch here, which disallows indirect
calls of this function entirely (on appropriate builds/hardware).

> Can we set up cr0/cr4 after we have a proper GS, or is that a 
> chicken-and-egg scenario?

Can be done, but given the state we're in, I'd rather have the simplest
possible rules; calling out to functions with dodgy CR[04] is
'suboptimal' as well.

If people really worry about this, I suppose we can call the full
native_write_cr4() later to double-check the value in the context or
something.


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 17:43:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 17:43:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477506.740236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGO5E-0007Lr-IV; Fri, 13 Jan 2023 17:43:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477506.740236; Fri, 13 Jan 2023 17:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGO5E-0007Lk-Ew; Fri, 13 Jan 2023 17:43:16 +0000
Received: by outflank-mailman (input) for mailman id 477506;
 Fri, 13 Jan 2023 17:43:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGO5C-0007LO-68
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 17:43:14 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2042.outbound.protection.outlook.com [40.107.14.42])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bf9b033a-9369-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 18:43:11 +0100 (CET)
Received: from DUZPR01CA0067.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:3c2::6) by AS8PR08MB9410.eurprd08.prod.outlook.com
 (2603:10a6:20b:5a9::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 17:43:04 +0000
Received: from DBAEUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3c2:cafe::b9) by DUZPR01CA0067.outlook.office365.com
 (2603:10a6:10:3c2::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Fri, 13 Jan 2023 17:43:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT012.mail.protection.outlook.com (100.127.142.126) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 17:43:04 +0000
Received: ("Tessian outbound 0d7b2ab0f13d:v132");
 Fri, 13 Jan 2023 17:43:04 +0000
Received: from bd75016779ac.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E26E6BE6-1B3C-4763-9B35-805A38E04C15.1; 
 Fri, 13 Jan 2023 17:42:53 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bd75016779ac.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 17:42:53 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB10172.eurprd08.prod.outlook.com (2603:10a6:20b:628::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 17:42:51 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 17:42:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf9b033a-9369-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2RAPshMN3xIQ1BrKS7cc5QTA8+y3tTA+a4KJWLglJ1w=;
 b=WaobxEtpcWB/1wlsU26S8P9ZVSJUDuUFMQENvFc4Hu54G6jJf9JTexiqysV7foViWA1Rv4G6+Y6J5PPCyldaURpuP79Emo3/bZhqY0chnPGyj6cXdfCltuRjiVbSFvmkDHWQkAjg6n1mNRrfParTrJ5S4PqZbhahTCEEwAdVqrc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8660c8561e6f19b6
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the
 runtime page-tables
Thread-Topic: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the
 runtime page-tables
Thread-Index: AQHZJzoPSgWN5vNZ70iPu83PQrLhuK6cnmWA
Date: Fri, 13 Jan 2023 17:42:49 +0000
Message-ID: <8A0AD684-FB21-46B3-A0C9-DE0BF67030D0@arm.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-15-julien@xen.org>
In-Reply-To: <20230113101136.479-15-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB10172:EE_|DBAEUR03FT012:EE_|AS8PR08MB9410:EE_
X-MS-Office365-Filtering-Correlation-Id: ae7e3c85-0ee0-4342-f887-08daf58d9fb6
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <D638175A4F42D1408630DC62195C5145@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB10172
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f8cc7704-515f-4340-6392-08daf58d969c
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 17:43:04.5108
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ae7e3c85-0ee0-4342-f887-08daf58d9fb6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9410

> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Switching TTBR while the MMU is on is not safe. Now that the identity
> mapping will not clash with the rest of the memory layout, we can avoid
> creating temporary page-tables every time a CPU is brought up.
> 
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ----
>    Changes in v4:
>        - Somehow I forgot to send it in v3. So re-include it.
> 
>    Changes in v2:
>        - Remove arm32 code
> ---

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I've also built for arm32/64 and tested this patch (and so this series) on FVP, N1SDP, Raspberry Pi 4, and Juno,
booting Dom0 and creating/running/destroying some guests; on a first try everything works.

Tested-by: Luca Fancellu <luca.fancellu@arm.com>

I've left the boards testing all night, so on Monday I will be 100% sure this series
is not introducing any issue.


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 17:56:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 17:56:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477513.740247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGOI2-0000Ys-RL; Fri, 13 Jan 2023 17:56:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477513.740247; Fri, 13 Jan 2023 17:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGOI2-0000Yl-Oa; Fri, 13 Jan 2023 17:56:30 +0000
Received: by outflank-mailman (input) for mailman id 477513;
 Fri, 13 Jan 2023 17:56:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Kou=5K=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pGOI1-0000Yf-MM
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 17:56:29 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2060.outbound.protection.outlook.com [40.107.241.60])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9a3372fb-936b-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 18:56:27 +0100 (CET)
Received: from FR0P281CA0129.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:97::15)
 by AS4PR08MB7904.eurprd08.prod.outlook.com (2603:10a6:20b:51f::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Fri, 13 Jan
 2023 17:56:25 +0000
Received: from VI1EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:97:cafe::eb) by FR0P281CA0129.outlook.office365.com
 (2603:10a6:d10:97::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Fri, 13 Jan 2023 17:56:25 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT046.mail.protection.outlook.com (100.127.144.113) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Fri, 13 Jan 2023 17:56:24 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Fri, 13 Jan 2023 17:56:23 +0000
Received: from 25bd8c2327b6.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D158C65D-991A-46B8-96ED-7E9DFBD759AE.1; 
 Fri, 13 Jan 2023 17:56:17 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 25bd8c2327b6.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Jan 2023 17:56:17 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS2PR08MB8312.eurprd08.prod.outlook.com (2603:10a6:20b:557::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 13 Jan
 2023 17:56:15 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.018; Fri, 13 Jan 2023
 17:56:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a3372fb-936b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ftqnv430xgPDOb82cDspMu86/q6iSrZZG8X7J02lw+A=;
 b=G97E0oKf89gyXSz7C0ZfVbji1lZRXPdPhkJ/33Ga9Kge3HQAEB5nGnr+t82VU1ddcgaAtj/Cx840V+cLKibYD8nNq95qQxb+aO9XgNPvesUXsIVCSMc8fBnWbjiVR8mqFZKFN3tBGFhkJCOj5H7DQxSm43rCZO07CrrB5QW9Uak=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: aafe227caa084974
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Michal Orzel <michal.orzel@amd.com>
Subject: Re: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat
 workaround for TLB flush by VA
Thread-Topic: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat
 workaround for TLB flush by VA
Thread-Index: AQHZJzd3IAQCrI1EqUyCJy1AVoViEq6coiuA
Date: Fri, 13 Jan 2023 17:56:14 +0000
Message-ID: <C8CAEAFE-50FB-4DBB-8EDC-8AB87920EB06@arm.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-3-julien@xen.org>
In-Reply-To: <20230113101136.479-3-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS2PR08MB8312:EE_|VI1EUR03FT046:EE_|AS4PR08MB7904:EE_
X-MS-Office365-Filtering-Correlation-Id: 36861094-4ad7-40d2-c52d-08daf58f7c6f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <ED224D10F910A54E94D4FE340F015308@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8312
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8cea8d3d-2ee7-4ede-be62-08daf58f76b5
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Jan 2023 17:56:24.2081
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 36861094-4ad7-40d2-c52d-08daf58f7c6f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7904



> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Looking at the Neoverse N1 errata document, it is not clear to me
> why the TLBI repeat workaround is not applied for TLB flush by VA.
> 
> The TBL flush by VA helpers are used in flush_xen_tlb_range_va_local()
> and flush_xen_tlb_range_va(). So if the range size if a fixed size smaller

NIT: is a fixed size




From xen-devel-bounces@lists.xenproject.org Fri Jan 13 19:40:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 19:40:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477529.740258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGPug-00032C-D0; Fri, 13 Jan 2023 19:40:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477529.740258; Fri, 13 Jan 2023 19:40:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGPug-000325-8s; Fri, 13 Jan 2023 19:40:30 +0000
Received: by outflank-mailman (input) for mailman id 477529;
 Fri, 13 Jan 2023 19:40:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8P1x=5K=gmail.com=rjwysocki@srs-se1.protection.inumbo.net>)
 id 1pGPuf-00031z-5C
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 19:40:29 +0000
Received: from mail-ej1-f49.google.com (mail-ej1-f49.google.com
 [209.85.218.49]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 212f2262-937a-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 20:40:27 +0100 (CET)
Received: by mail-ej1-f49.google.com with SMTP id ss4so47352118ejb.11
 for <xen-devel@lists.xenproject.org>; Fri, 13 Jan 2023 11:40:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 212f2262-937a-11ed-91b6-6bf2151ebd3b
X-Gm-Message-State: AFqh2krOFQZrY18Ot6rjfj70FToRHUkZuzzWvWGwLPE8a0mkf6K9zv2O
	KCBSMjeZ07LxZyNCXEsqyv7A0dBxBwKdSL1YgHE=
X-Google-Smtp-Source: AMrXdXvn2X6x4HoxAN6K2w1MSEvmKDvbqX2OyOqsOjj0s6TYIEhLLev1mfoqRKdTYIEqIwH2b1rnxH1eGYQbqtwdUl8=
X-Received: by 2002:a17:907:29c3:b0:84d:4b8e:efc with SMTP id
 ev3-20020a17090729c300b0084d4b8e0efcmr1445143ejc.390.1673638826641; Fri, 13
 Jan 2023 11:40:26 -0800 (PST)
MIME-Version: 1.0
References: <20230113140610.7132-1-jgross@suse.com>
In-Reply-To: <20230113140610.7132-1-jgross@suse.com>
From: "Rafael J. Wysocki" <rafael@kernel.org>
Date: Fri, 13 Jan 2023 20:40:15 +0100
Message-ID: <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, linux-pm@vger.kernel.org, 
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>, 
	"Rafael J. Wysocki" <rafael@kernel.org>, Len Brown <len.brown@intel.com>, Pavel Machek <pavel@ucw.cz>, 
	Stefano Stabellini <sstabellini@kernel.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, xen-devel@lists.xenproject.org, 
	Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, Jan 13, 2023 at 3:06 PM Juergen Gross <jgross@suse.com> wrote:
>
> Commit f1e525009493 ("x86/boot: Skip realmode init code when running as
> Xen PV guest") missed one code path accessing real_mode_header, leading
> to dereferencing NULL when suspending the system under Xen:
>
>     [  348.284004] PM: suspend entry (deep)
>     [  348.289532] Filesystems sync: 0.005 seconds
>     [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
>     [  348.292457] OOM killer disabled.
>     [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
>     [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
>     [  348.749228] PM: suspend devices took 0.352 seconds
>     [  348.769713] ACPI: EC: interrupt blocked
>     [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
>     [  348.816080] #PF: supervisor read access in kernel mode
>     [  348.816081] #PF: error_code(0x0000) - not-present page
>     [  348.816083] PGD 0 P4D 0
>     [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
>     [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
>     [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
>     [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20
>
> Fix that by adding an indirection for acpi_get_wakeup_address() which
> Xen PV dom0 can use to return a dummy non-zero wakeup address (this
> address won't ever be used, as the real suspend handling is done by the
> hypervisor).

How exactly does this help?

> Fixes: f1e525009493 ("x86/boot: Skip realmode init code when running as Xen PV guest")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/include/asm/acpi.h  | 2 +-
>  arch/x86/kernel/acpi/sleep.c | 3 ++-
>  include/xen/acpi.h           | 9 +++++++++
>  3 files changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
> index 65064d9f7fa6..137259ff8f03 100644
> --- a/arch/x86/include/asm/acpi.h
> +++ b/arch/x86/include/asm/acpi.h
> @@ -61,7 +61,7 @@ static inline void acpi_disable_pci(void)
>  extern int (*acpi_suspend_lowlevel)(void);
>
>  /* Physical address to resume after wakeup */
> -unsigned long acpi_get_wakeup_address(void);
> +extern unsigned long (*acpi_get_wakeup_address)(void);
>
>  /*
>   * Check if the CPU can handle C2 and deeper
> diff --git a/arch/x86/kernel/acpi/sleep.c b/arch/x86/kernel/acpi/sleep.c
> index 3b7f4cdbf2e0..1a3cd5e24cd0 100644
> --- a/arch/x86/kernel/acpi/sleep.c
> +++ b/arch/x86/kernel/acpi/sleep.c
> @@ -33,10 +33,11 @@ static char temp_stack[4096];
>   * Returns the physical address where the kernel should be resumed after the
>   * system awakes from S3, e.g. for programming into the firmware waking vector.
>   */
> -unsigned long acpi_get_wakeup_address(void)
> +static unsigned long x86_acpi_get_wakeup_address(void)
>  {
>         return ((unsigned long)(real_mode_header->wakeup_start));
>  }
> +unsigned long (*acpi_get_wakeup_address)(void) = x86_acpi_get_wakeup_address;
>
>  /**
>   * x86_acpi_enter_sleep_state - enter sleep state
> diff --git a/include/xen/acpi.h b/include/xen/acpi.h
> index b1e11863144d..7e1e5dbfb77c 100644
> --- a/include/xen/acpi.h
> +++ b/include/xen/acpi.h
> @@ -56,6 +56,12 @@ static inline int xen_acpi_suspend_lowlevel(void)
>         return 0;
>  }
>
> +static inline unsigned long xen_acpi_get_wakeup_address(void)
> +{
> +       /* Just return a dummy non-zero value, it will never be used. */
> +       return 1;
> +}
> +
>  static inline void xen_acpi_sleep_register(void)
>  {
>         if (xen_initial_domain()) {
> @@ -65,6 +71,9 @@ static inline void xen_acpi_sleep_register(void)
>                         &xen_acpi_notify_hypervisor_extended_sleep);
>
>                 acpi_suspend_lowlevel = xen_acpi_suspend_lowlevel;
> +#ifdef CONFIG_ACPI_SLEEP
> +               acpi_get_wakeup_address = xen_acpi_get_wakeup_address;
> +#endif
>         }
>  }
>  #else
> --
> 2.35.3
>


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 21:31:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 21:31:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477540.740269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGReE-0005Pu-8Q; Fri, 13 Jan 2023 21:31:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477540.740269; Fri, 13 Jan 2023 21:31:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGReE-0005Pn-5I; Fri, 13 Jan 2023 21:31:38 +0000
Received: by outflank-mailman (input) for mailman id 477540;
 Fri, 13 Jan 2023 21:31:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2WVX=5K=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pGReD-0005Pe-5D
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 21:31:37 +0000
Received: from sonic316-55.consmr.mail.gq1.yahoo.com
 (sonic316-55.consmr.mail.gq1.yahoo.com [98.137.69.31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5609782-9389-11ed-b8d0-410ff93cb8f0;
 Fri, 13 Jan 2023 22:31:34 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic316.consmr.mail.gq1.yahoo.com with HTTP; Fri, 13 Jan 2023 21:31:30 +0000
Received: by hermes--production-ne1-5648bd7666-rmxcl (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID a9fd17d81623f8a2df35139df8a71637; 
 Fri, 13 Jan 2023 21:31:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5609782-9389-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673645490; bh=zSS9bg22QrDSgJE/e9kb2PDlrQOOObt1PP1nAHObc1Y=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=pqyKqsArVAcVi1TrG0HJ4FWK3gdVGBK9YNKZdem7dmGoLf8HEfE9kVBIPJaOKbDxikQeCXSsapNXVFY+O1i3m53SK8EdE6LjWgnC1Zm3e2rx7ecCnf5zOginfDO8vt5pjrsQvd7q7YSlUh8Vms/1OYcJqkyLBOPbfhTd97e/WrLv7qokqdTXVJ71RFlB79VmQBect1gqgy703blyyLtd4aZLNvC1YTwbRso+vGFCutPnmjEKM4FD8nWejLS32Pd1NWgxIfxGcp845yyfxtbG1pf7F5gJA/Jkg6kCxqWLeO1ycikifdf2zjp4/ap+k3kSWTziwu3pXDkjqNjGZRbzQw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673645490; bh=VKGF0VO/0VjR57ED+43hWAQheJ7Aqc6aXborCs1N5YA=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=oH0oU4dcyrx4/8f8pqIDKM9Qg1KDTTMAgCnNvrkcVV8INuX8dnIlEAyxMqAhv1XH9ymocfrVGYE4aJdtTiJjkf51xYK3FmuJ54mWe94gFZriXdBKFIJgFKpuMscS3AMxuTG3mPHm92buOB37fKxYPebvNZQKF6kgZvq1J20o0FhWdAg3ENcT0+uL3qUnySy9HOl00fzTjLMsSJoB0kbUBSQ7VBOfN1PYL2m23QdXpEpkjfQqUiUtamH81LALRDT09ym3ObfY6RU6oARLBj1j+gGjCrgTd8DDgvWzFmdY+eiMuYNoOsOL+QQe2cQJa8uWU1XYUrVK2l5XTx78+oOhSQ==
X-YMail-OSG: xC1oVbAVM1lS749HRPIGTcqAcF_WTml09OpNX5v1hv7ie2F6eIxLFNG2s.UbZhm
 .5TdgaGXi_J2BPZQz2ahLR0C.SHmnXVzeK7z8ro86RQgSOgvgeDQTotYJLVfX8a8Cbhn8DcbaPBq
 9nQaP4r8LwekD6vf_ZCm9XSP2pW.phlkggvZprO_e7_V_67DSja92zb2iuQYkaQn0TyRMbJHHMni
 6J0C2YzSDJyrk1b.JOZpBPFxC32CDUU9v.ebjL3KiiX6n4mmSU2SJYdvtvJ_OZTc7T7jL3ZMcfEg
 R3gsK2rWaQ1LaPO1hzUTk4Sg8ZXMuvb2RzY3Pn31rHghfZ620PadguxbY4vJfPxa11w5a_8h3x_E
 3NcJVwX4jt3dEpwFXHd5A_QZrUVAeVGWcuzrho7LoJ7zCHIUuVT.2VrwoQCgnjSCvYBq6axbIldT
 0RENBMdqjyY8DksE9jLSBdOdUaSV16snZlusWreA4clCun.MjRZSi2ZXfG9iI5sjLtRVNqNRFWaN
 p.R8w_2qW0qS32uP74J9EG.O5JoCE7Oy6.4OP_w5owD47XnY5stiQuUNFSKMTccJcFB1iz5mliQ.
 rGZexjlakp3labgBKudZmXoFU1PXyDFIVDIfw.PxAO.3.AICRhzprxcsXN4yYzHuqD.Oofn3FSJe
 vb0Bk0ATPrXZO9lXmAScXhu1qlfqW1yVEEQGFToJQ2THrx.BFQYcWAlKqa3XG1ycKkTkiHU1dFT2
 0QsDABPw76RjwPcxA4fVb.MF6maYRyDsgFAY1Nw2UUuW_T7m2WxK62L9PP9NdKboQU8ujSdoo1Ug
 52kpsqeOb9D0EyCGuSQm6rw2xThdLKMSVfDVG23FDYXm96olpsaqT0Q1rNglZFYTlJ2RfzLgZky7
 JOf21Ay7Kn9F9iPoapncy_1PR90Wo7aTZ1hjnoYY3WmUQSHtK.vcAEoDLus9V5XUgBUT1dBoAorj
 TVEbKiOY90rnz3G7zuET4yqpf8iDkTVvVKsVwL9xeIxArCOS3uwJ2.0wvOP2uBvJKjx6U4oghW73
 WzV846QDUGSGpBcOTDFdxhaDpPPlp46wIprxJeGy_ALC4xNx8Suub.iZ2gCTIk2zjeH2lgQVuUYJ
 N0YiBqNiq90R1c6ikIykklF_1DLXJiMYcqvbeW4AduoCy35fTik2nOgL238WTA3X7mos6INsvE6.
 FBMCkPnbtkn6Gn9aHEXSfLzPUxywkNqwfHrLGSvG0E5YlSYqXNx1cNvMI4d5ShJdfhZHAr3vqM40
 sLqIuA3hjl1vOgPkTih7JcdBXlSKFdZZYopeknm3XLZ0rv4hUBL_.q4RhSRb0WjE2jlwLDkO41Ng
 8b2sFo.IlA6lKK4qlEzjjLpmLA7c1hOW5VqOvYQ64oGKMfh.tOM1giHXvmnPJRSol0b6YZwRPn.y
 Hg2hn8.3EwdD4x4Mim1GfgNPstO6HCvIrENcN5ScgbsScY_4kW3b4svoedD.efW67_48kniRh6cN
 51LVzw1.tCbm1PYjbhsr3e9_N29sd7usejKUCR53upZ1rxxvFlOfjC9aL571QUI1iE5TrC5SCi6V
 r3g_cnQfHjSf.gMI9L7WfftsstdRs9LLyk9Ow5VqDWfA49FHiOERFcZRXP8Qp.cUa42Lcz9NzWMO
 mTH3YX1aSTBJW2h796FPf460LUGvFiIIQc7GmcUOWXdcA6f_8Yd3JHimtZyqWiJfmbMsZ09pe9SQ
 0CZWQm_ySINs04qVyEJPQo162lulW.MnRh_iWMcZ3z.zyngR_ubAo_LoL7eGG_G0mrttcYcVDWqa
 qDWMd9Jqj55yS7mHd3MKKY7AqTnzKhFQJar1oVuHFg5pkQr1bBJrTbhim2wgIph3MNsCm.53fiOL
 nnU9_kOGtVS9NUJAgISu_7K5Es42j3owsl_ETBjNTIRm6l_dyBtjrHteZA21AnlO9gF3QAjRwH91
 _YNtXNQUaqttY0InnVpb673ZG4avuPBOz8GtC7QW_cStKySk0m3GvHT.KozyPToyIUr3KlbaWWy2
 4Yy8_Kqlk_CMYYF2BXggRh9W2RWtsq2rkprWpthrbn0npzqZfKkr6pVjQ2W.nEWZdmRmWNdzn4C7
 VQEt8UoG6Q.oBfpSX75FRIq76F4Q2IlIA.V.8z.FI3eRItQQOeiaOSVgWmk1nMtzWLyqxT7o5Xq0
 XuWnLecQ1nqn0S2F5KSHvOyntFuxV1NFBVTLjyWlE6lOanb2SVSruw6tASR6X4SarkjIaJctKZM0
 Sv84n_u9IOfo-
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
Date: Fri, 13 Jan 2023 16:31:26 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: Igor Mammedov <imammedo@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
 <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 2561

On 1/13/23 4:33 AM, Igor Mammedov wrote:
> On Thu, 12 Jan 2023 23:14:26 -0500
> Chuck Zmudzinski <brchuckz@aol.com> wrote:
> 
>> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:
>> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:  
>> >> I think the change Michael suggests is very minimalistic: Move the if
>> >> condition around xen_igd_reserve_slot() into the function itself and
>> >> always call it there unconditionally -- basically turning three lines
>> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
>> >> Michael further suggests to rename it to something more general. All
>> >> in all no big changes required.  
>> > 
>> > yes, exactly.
>> >   
>> 
>> OK, got it. I can do that along with the other suggestions.
> 
> have you considered, instead of reservation, putting a slot check in the device model
> and, if it's the intel igd being passed through, failing at realize time if it can't take the
> required slot (with an error directing the user to fix the command line)?

Yes, but the core pci code already fails at realize time with a
useful error message if the user tries to use slot 2 for the igd,
because the xen platform device occupies slot 2. The user can fix
this without patching qemu, but fixing it on the command line is
not the best way to solve the problem: the user would need to
hotplug the xen platform device via a command line option instead
of having it added by the pc_xen_hvm_init functions almost
immediately after the pci bus is created, and that delay in adding
the xen platform device degrades startup performance of the guest.

> That could be less complicated than dealing with slot reservations at the cost of
> being less convenient.

And also at the cost of reduced startup performance.

However, the performance hit can be avoided by assigning slot 3
instead of slot 2 to the xen platform device when igd passthrough
is configured on the command line, instead of doing slot
reservation. There would still be less convenience and, for libxl
users, no easy way to configure the command line so that the igd
can still have slot 2 without a hacky and error-prone patch to
libxl to deal with this problem.

I did post a patch on xen-devel to fix this in libxl, but so far it
has not been reviewed, and I noted in that patch that reserving
slot 2 for the igd in qemu is less prone to coding errors and
easier to maintain than the patch that would be required to
implement the fix in libxl.


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 22:53:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 22:53:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477546.740280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGSvB-00050E-8a; Fri, 13 Jan 2023 22:53:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477546.740280; Fri, 13 Jan 2023 22:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGSvB-000507-5p; Fri, 13 Jan 2023 22:53:13 +0000
Received: by outflank-mailman (input) for mailman id 477546;
 Fri, 13 Jan 2023 22:53:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LVbI=5K=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pGSv9-000501-MU
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 22:53:11 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0aa51f95-9395-11ed-91b6-6bf2151ebd3b;
 Fri, 13 Jan 2023 23:53:08 +0100 (CET)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id 433D95C00B6;
 Fri, 13 Jan 2023 17:53:05 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute6.internal (MEProxy); Fri, 13 Jan 2023 17:53:05 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 13 Jan 2023 17:53:02 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0aa51f95-9395-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1673650385; x=
	1673736785; bh=lKCV0ycAY0SFtq1SYJ0F5dfFWAI3C/PKOUtKuY5T64U=; b=O
	KFd4cQE64z7byI+zm0c8nD+rqAxFmbGCCZ3Vh/GS10spMDBCdu2RoohH4GfizyP7
	vMHfF9Q3WqmBl2tCCbTlvQDj9C1AU71gp0LMeVLamWIKmQOJVj2WgoTcd2glt9CD
	NJU93AEM0M81j6mGmV+a78IrElGBu8J+WY2pq9Rph2PggMehhJ6Lxs8XMLC8+1xh
	dPPhBk46PubiUYUtfMLpaPP2foH7VbmabHbuSpPNeu7qlLnMzTf+BBW3Xn2dtM2u
	cHgF2KFFaOcui/CsmFJQNBOgoIuHOECmvhFMGxX+AdzcNb0brcCxrcIhfsuGLrL8
	jVBLMoKogB89F8lLoGumA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm3; t=1673650385; x=1673736785; bh=lKCV0ycAY0SFtq1SYJ0F5dfFWAI3
	C/PKOUtKuY5T64U=; b=h7tHLk4So03Ss4ZbzU95hbw5XdcLitSlfxF8xEqBfY3j
	joqrFC7z74bAP8YAbP5esYISC2yGIU8b1Egx55R5djKpVEJSvNdibHDK/PK9zCGF
	uOS8o3PHCloegp5m1Ft21Sarose/LPAvAQvydu7FsvL5YMgF3f77vlyV9z95U4vm
	2L/cmX66SFqzKcEUawqtUm1D1ikbDK+A0pHO88VUEl3q5K9TCjVn8xei29jJOiZa
	aqdN75PUUdaKyJvuaB6Raic4lKaYE7MtPNxwJP4FMeP+WLad4Rr3GBf1S33e/Eq0
	88eN3GjPn6Ggd+zIshPGDz8T8T9PfoyZ+f6JrITEyg==
X-ME-Sender: <xms:0ODBY8DY6RuCSKy1hKs8ei14XotYeEgNfvDM51g94ptZLn5i3BJQNQ>
    <xme:0ODBY-jlQ5TmHwJi7OFBAhA4mttip_uggmMKVULoKV-TlKfkIkumA4MHMTnCSTYF7
    Fo9YqvBg-Vb_w>
X-ME-Received: <xmr:0ODBY_mrtzrdIrJ9DzrMo5QCBskDEtngeyvUB4IrnxQSIPGrbl9NtRMrR6X-EjYvJTmQq547ut3C11F1URb8S6wZWIpb8SA67g>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrleelgddtgecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefgudel
    teefvefhfeehieetleeihfejhfeludevteetkeevtedtvdegueetfeejudenucevlhhush
    htvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhes
    ihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:0ODBYyxbJcG7imjuq8Dklu6XNbYfj9boTjfAqr2_DvCLFB5rN_rtKA>
    <xmx:0ODBYxQepT3qyQlR4yI0nr6pPJ7s4J7b8tINsEXME9CfYWK1N_-OIg>
    <xmx:0ODBY9ZiqivvK03r8PZFSikxrh3ofaufnbSumwH3KxvqEhXTH6DbzQ>
    <xmx:0eDBYwLcidQbQg_3PNESAxuO6rWVzK0gKXNvMRH9QqCW17WYpgZuLg>
Feedback-ID: i1568416f:Fastmail
Date: Fri, 13 Jan 2023 23:52:59 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Juergen Gross <jgross@suse.com>, linux-kernel@vger.kernel.org,
	x86@kernel.org, linux-pm@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Len Brown <len.brown@intel.com>,
	Pavel Machek <pavel@ucw.cz>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen
Message-ID: <Y8Hgy4UHxKqh0T2T@mail-itl>
References: <20230113140610.7132-1-jgross@suse.com>
 <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="4m4xLSz0QPn9icEC"
Content-Disposition: inline
In-Reply-To: <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>


--4m4xLSz0QPn9icEC
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 13 Jan 2023 23:52:59 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Juergen Gross <jgross@suse.com>, linux-kernel@vger.kernel.org,
	x86@kernel.org, linux-pm@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Len Brown <len.brown@intel.com>,
	Pavel Machek <pavel@ucw.cz>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen

On Fri, Jan 13, 2023 at 08:40:15PM +0100, Rafael J. Wysocki wrote:
> On Fri, Jan 13, 2023 at 3:06 PM Juergen Gross <jgross@suse.com> wrote:
> >
> > Commit f1e525009493 ("x86/boot: Skip realmode init code when running as
> > Xen PV guest") missed one code path accessing real_mode_header, leading
> > to dereferencing NULL when suspending the system under Xen:
> >
> >     [  348.284004] PM: suspend entry (deep)
> >     [  348.289532] Filesystems sync: 0.005 seconds
> >     [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
> >     [  348.292457] OOM killer disabled.
> >     [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
> >     [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
> >     [  348.749228] PM: suspend devices took 0.352 seconds
> >     [  348.769713] ACPI: EC: interrupt blocked
> >     [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
> >     [  348.816080] #PF: supervisor read access in kernel mode
> >     [  348.816081] #PF: error_code(0x0000) - not-present page
> >     [  348.816083] PGD 0 P4D 0
> >     [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
> >     [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
> >     [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
> >     [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20
> >
> > Fix that by adding an indirection for acpi_get_wakeup_address() which
> > Xen PV dom0 can use to return a dummy non-zero wakeup address (this
> > address won't ever be used, as the real suspend handling is done by the
> > hypervisor).
>
> How exactly does this help?

By not calling acpi_get_wakeup_address() (renamed by the patch to
x86_acpi_get_wakeup_address()) during PV dom0 suspend, which would
otherwise access the uninitialized real_mode_header.

I confirm this patch fixes the issue.
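For readers following along, the mechanism can be sketched in plain C: the patch replaces the direct call with a function pointer that the Xen registration path overrides, so the NULL real_mode_header is never dereferenced on PV dom0. This is a user-space sketch with stand-in types, not the actual kernel code.

```c
#include <stddef.h>

/* Stand-in for the kernel's real-mode header; NULL under Xen PV,
 * because realmode init is skipped there. */
struct real_mode_hdr { unsigned long wakeup_start; };
static struct real_mode_hdr *real_mode_header = NULL;

/* Default implementation: dereferences real_mode_header, which is
 * exactly the NULL dereference shown in the oops above. */
static unsigned long x86_acpi_get_wakeup_address(void)
{
    return real_mode_header->wakeup_start;
}

/* The patch turns the direct call into an indirect one... */
unsigned long (*acpi_get_wakeup_address)(void) = x86_acpi_get_wakeup_address;

/* ...so the Xen side can install a harmless dummy: the value is never
 * used, since the hypervisor handles the real suspend/resume. */
static unsigned long xen_acpi_get_wakeup_address(void)
{
    return 1;
}

static void xen_acpi_sleep_register(int xen_initial_domain)
{
    if (xen_initial_domain)
        acpi_get_wakeup_address = xen_acpi_get_wakeup_address;
}
```

Callers keep invoking acpi_get_wakeup_address() unchanged; only the registration path decides which implementation runs.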

> > Fixes: f1e525009493 ("x86/boot: Skip realmode init code when running as Xen PV guest")
> > Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > ---
> >  arch/x86/include/asm/acpi.h  | 2 +-
> >  arch/x86/kernel/acpi/sleep.c | 3 ++-
> >  include/xen/acpi.h           | 9 +++++++++
> >  3 files changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
> > index 65064d9f7fa6..137259ff8f03 100644
> > --- a/arch/x86/include/asm/acpi.h
> > +++ b/arch/x86/include/asm/acpi.h
> > @@ -61,7 +61,7 @@ static inline void acpi_disable_pci(void)
> >  extern int (*acpi_suspend_lowlevel)(void);
> >
> >  /* Physical address to resume after wakeup */
> > -unsigned long acpi_get_wakeup_address(void);
> > +extern unsigned long (*acpi_get_wakeup_address)(void);
> >
> >  /*
> >   * Check if the CPU can handle C2 and deeper
> > diff --git a/arch/x86/kernel/acpi/sleep.c b/arch/x86/kernel/acpi/sleep.c
> > index 3b7f4cdbf2e0..1a3cd5e24cd0 100644
> > --- a/arch/x86/kernel/acpi/sleep.c
> > +++ b/arch/x86/kernel/acpi/sleep.c
> > @@ -33,10 +33,11 @@ static char temp_stack[4096];
> >   * Returns the physical address where the kernel should be resumed after the
> >   * system awakes from S3, e.g. for programming into the firmware waking vector.
> >   */
> > -unsigned long acpi_get_wakeup_address(void)
> > +static unsigned long x86_acpi_get_wakeup_address(void)
> >  {
> >         return ((unsigned long)(real_mode_header->wakeup_start));
> >  }
> > +unsigned long (*acpi_get_wakeup_address)(void) = x86_acpi_get_wakeup_address;
> >
> >  /**
> >   * x86_acpi_enter_sleep_state - enter sleep state
> > diff --git a/include/xen/acpi.h b/include/xen/acpi.h
> > index b1e11863144d..7e1e5dbfb77c 100644
> > --- a/include/xen/acpi.h
> > +++ b/include/xen/acpi.h
> > @@ -56,6 +56,12 @@ static inline int xen_acpi_suspend_lowlevel(void)
> >         return 0;
> >  }
> >
> > +static inline unsigned long xen_acpi_get_wakeup_address(void)
> > +{
> > +       /* Just return a dummy non-zero value, it will never be used. */
> > +       return 1;
> > +}
> > +
> >  static inline void xen_acpi_sleep_register(void)
> >  {
> >         if (xen_initial_domain()) {
> > @@ -65,6 +71,9 @@ static inline void xen_acpi_sleep_register(void)
> >                         &xen_acpi_notify_hypervisor_extended_sleep);
> >
> >                 acpi_suspend_lowlevel = xen_acpi_suspend_lowlevel;
> > +#ifdef CONFIG_ACPI_SLEEP
> > +               acpi_get_wakeup_address = xen_acpi_get_wakeup_address;
> > +#endif
> >         }
> >  }
> >  #else
> > --
> > 2.35.3
> >

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--4m4xLSz0QPn9icEC
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmPB4MsACgkQ24/THMrX
1yw2swgAk/5GqFX39AKLVH88v8kknH2HL3VTuIzqJY8+1FDQ5c7aWElrJp/WmgGR
EaqALVjRocUTHRX6/rIyg5l7R3drsEOfNLa8tNd5jlWWXRDwHlrf+eXBTlQZs9zX
a8mLaFADPonMM4XkiPfQutk/qpklvNf2ijAEfDiwsbODHa28Uzif2ysL2ZlTF2CA
rVxwY4YY9ndRkaBSR7Y/UCMP5P/f7NtbjD2UPcZVDHl0xNp89geMBuSNVHVEcMXn
tOivOdQ6IpczX6bCME1R3ewDNud+aFznTPyW9Q+ZQmYPLKXF+nzO1lQR5a/gBxyi
y1OfQ7J0s3r6JaQYvmF6O7JCgHNQ+A==
=8flr
-----END PGP SIGNATURE-----

--4m4xLSz0QPn9icEC--


From xen-devel-bounces@lists.xenproject.org Fri Jan 13 23:08:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 23:08:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477558.740335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAH-0007gm-Vb; Fri, 13 Jan 2023 23:08:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477558.740335; Fri, 13 Jan 2023 23:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAH-0007gZ-Rh; Fri, 13 Jan 2023 23:08:49 +0000
Received: by outflank-mailman (input) for mailman id 477558;
 Fri, 13 Jan 2023 23:08:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bX0/=5K=citrix.com=prvs=37021d3d6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pGTAG-00076h-3b
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 23:08:48 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 39d65cdf-9397-11ed-b8d0-410ff93cb8f0;
 Sat, 14 Jan 2023 00:08:45 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39d65cdf-9397-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673651326;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=ymMfRlm9ht6BlDD3wEaaTOA+gMUaexzdxRl9xxZ1XxA=;
  b=Mkwb1Z8APJ6Lw68XKw6Jh1Ii48sWj77hXDJoAwTjVYKbjucnfHuajbZ9
   UvPirSrHejODlzEU079k6n3LiykS2yG+WVVFZI27CwvQChwF0q1HM1Dd7
   jn+d76eNoHIAs8y2hAYjjKK4nvWCs3nxUBuy/eFCEsqrCibGIZfzECV7u
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92558109
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:hkB5ea85lXWNwxu0xsPUDrUDo36TJUtcMsCJ2f8bNWPcYEJGY0x3m
 DBOCj+FOK2LYzb3fdsibN7l9EgGu8eEy4IxTFNqpHo8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKucYHsZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kIw1BjOkGlA5AdmPKkQ5AO2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklp6
 OUjBxkjfyrTuKHx2pOWTOhWockKeZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAr3/zaTBH7nmSorI6+TP7xw1tyrn9dtHSf7RmQO0ExBvF9
 juergwVBDkkOvG45B7V1UmPi+7PzHLlfIUPGbe3o6sCbFq7mTVIVUx+uUGAieKilke0VtZbK
 koV0ikjt64/8AqsVNaVdwK8iG6JuFgbQdU4O+8n7ACAzILE7gDfAXILJhZjQtE7sM49RRQxy
 0SE2djuAFRHoLCTDH6Q6LqQhTezIjQOa38PYzceSgkI6MWlp5s85i8jVf46TvTz1IesX2itn
 XbT9nNWa6gvYdAj8Liixn/urSOW9qeKCRQUywPWZEWox1YsDGK6XLBE+WQ3/N4ZctnCEwbf4
 CNd8ySNxLtQVM/QzURhVM1IRej0vKjdbVUwlHY1R/EcGyKRF2lPlGy6yBV3Pw9XP8kNYlcFi
 2eD6FoKtPe/0JZHBJKbgr5d6Oxwl8AM7fy/CpjpgiNmO/CdjjOv8iB0flK31GvwikUqmqxXE
 c7FLp3yVipKWPw3lWHeqwIhPVgDn3BW+I8ubcqjk0TPPUS2ORZ5tovpwHPRN7tkvctoUS3e8
 spFNtvi9vmseLSWX8UjyqZKdQpiBSFiVfjLRzl/KrbrzvxORDtwVJc8ANoJJ+RYokiivr6Wo
 CDlCxIGkASXaL+uAVziV02PoYjHBf5XxU/X9wR1Vbp08xDPubqS0Zo=
IronPort-HdrOrdr: A9a23:2pe1SKht1mQuaZv+2QSfBpMnDXBQXtQji2hC6mlwRA09TyX4ra
 yTdZEgviMc5wx/ZJhNo7690cu7IU80hKQV3WB5B97LNmTbUQCTXeJfBOXZsljdMhy72ulB1b
 pxN4hSYeeAaWSSVPyKgjWFLw==
X-IronPort-AV: E=Sophos;i="5.97,215,1669093200"; 
   d="scan'208";a="92558109"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH v2 4/5] xen/version: Fold build_id handling into xenver_varbuf_op()
Date: Fri, 13 Jan 2023 23:08:34 +0000
Message-ID: <20230113230835.29356-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230113230835.29356-1-andrew.cooper3@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

struct xen_build_id and struct xen_varbuf are identical from an ABI point of
view, so XENVER_build_id can reuse xenver_varbuf_op() rather than having its
own almost-identical copy of the logic.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>

v2:
 * New
---
 xen/common/kernel.c          | 49 +++++++++++++-------------------------------
 xen/include/public/version.h |  5 ++++-
 2 files changed, 18 insertions(+), 36 deletions(-)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index cc5d8247b04d..7ab410ac7c28 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -476,9 +476,22 @@ static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     struct xen_varbuf user_str;
     const char *str = NULL;
     size_t sz;
+    int rc;
 
     switch ( cmd )
     {
+    case XENVER_build_id:
+    {
+        unsigned int local_sz;
+
+        rc = xen_build_id((const void **)&str, &local_sz);
+        if ( rc )
+            return rc;
+
+        sz = local_sz;
+        goto have_len;
+    }
+
     case XENVER_extraversion2:
         str = xen_extra_version();
         break;
@@ -502,6 +515,7 @@ static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     sz = strlen(str);
 
+ have_len:
     if ( sz > KB(64) ) /* Arbitrary limit.  Avoid long-running operations. */
         return -E2BIG;
 
@@ -703,41 +717,6 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     }
 
     case XENVER_build_id:
-    {
-        xen_build_id_t build_id;
-        unsigned int sz;
-        int rc;
-        const void *p;
-
-        if ( deny )
-            return -EPERM;
-
-        /* Only return size. */
-        if ( !guest_handle_is_null(arg) )
-        {
-            if ( copy_from_guest(&build_id, arg, 1) )
-                return -EFAULT;
-
-            if ( build_id.len == 0 )
-                return -EINVAL;
-        }
-
-        rc = xen_build_id(&p, &sz);
-        if ( rc )
-            return rc;
-
-        if ( guest_handle_is_null(arg) )
-            return sz;
-
-        if ( sz > build_id.len )
-            return -ENOBUFS;
-
-        if ( copy_to_guest_offset(arg, offsetof(xen_build_id_t, buf), p, sz) )
-            return -EFAULT;
-
-        return sz;
-    }
-
     case XENVER_extraversion2:
     case XENVER_capabilities2:
     case XENVER_changeset2:
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 9287b5d3ff50..ee32d4c6b30b 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -124,8 +124,10 @@ typedef char xen_commandline_t[1024];
 /*
  * Return value is the number of bytes written, or XEN_Exx on error.
  * Calling with empty parameter returns the size of build_id.
+ *
+ * Note: structure only kept for backwards compatibility.  Xen operates in
+ * terms of xen_varbuf_t.
  */
-#define XENVER_build_id 10
 struct xen_build_id {
         uint32_t        len; /* IN: size of buf[]. */
         unsigned char   buf[XEN_FLEX_ARRAY_DIM];
@@ -164,6 +166,7 @@ typedef struct xen_varbuf xen_varbuf_t;
  * effect.  e.g. Xen has no control over the formatting used for the command
  * line.
  */
+#define XENVER_build_id      10
 #define XENVER_extraversion2 11
 #define XENVER_capabilities2 12
 #define XENVER_changeset2    13
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 23:08:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 23:08:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477555.740302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAE-0006uO-SW; Fri, 13 Jan 2023 23:08:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477555.740302; Fri, 13 Jan 2023 23:08:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAE-0006uH-PI; Fri, 13 Jan 2023 23:08:46 +0000
Received: by outflank-mailman (input) for mailman id 477555;
 Fri, 13 Jan 2023 23:08:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bX0/=5K=citrix.com=prvs=37021d3d6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pGTAD-0006f0-2Z
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 23:08:45 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 39beebee-9397-11ed-91b6-6bf2151ebd3b;
 Sat, 14 Jan 2023 00:08:43 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39beebee-9397-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673651324;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=4vPnULWearCAs5pJV9RHnY3KcBHt0ecxfzxh751czBA=;
  b=T47e5geixP0LDAUbJiuKUIJsmM7xR01DRcBHnc+zLwbPh+1KpP5XmWmB
   kNlZRpGxtXJaH0ljXaK2qS+TJT4KK7riwRuixHHS375VufyLtn2f44mDQ
   gq88C9qpDfe8ezmaZ8fYfkVphcyZnWu615x+gE1BfXwBLJrADYCO+5yLZ
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92558107
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:01YKR66vw12gg3ZN85htwAxRtBvHchMFZxGqfqrLsTDasY5as4F+v
 mIbWWGGO/yLN2DzKN12bISypxxT65DSmIcwQAVvqHg9Hi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraBYnoqLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+45wehBtC5gZlPakS5weC/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m0
 uEBEx8kQAG51931h6iaZ+9tvOMuBZy+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrlD5fydVtxS+oq0v7nKI5AdwzKLsIJzefdniqcB9zxzF+
 zKfpzuR7hcybNy/k2qDry2Vu+7ztmCgR785EaKV36s/6LGU7jNKU0BHPbehmtGikVK3Ud9bL
 00S+wItoLI0+UjtScPyNzWnpFaUsxhaXMBfe8U49QWMx6z88wufQG8eQVZpSvYrqcs3TjwCz
 UKSkpXiAjkHmK2YTzeR+6mZqRu2ODMJNikSaCkcVwwH7tL/5oYpgXryos1LSfDvyIevQHepn
 m7M9XJl71kOsSIV/4Km5Gvoqhy9nMj2DUkvxyjRX1iC4yosMeZJeLeUBUjnAedoddjGFQTe4
 iRfwqBy/8hVU8jTyXXlrPElWejwuq3baGC0bUtHRcFJyti7x5K0kWm8ChlaLVwhDMsLcCSBj
 KT76VIIv8870JdHgMZKj2ON5ycCl/KI+SzNDKy8Uza3SsEZmPW71C9vf1WM+GvmjVIhl6oyU
 b/CL5n3Uy1GWfU/nGPtLwv47VPM7nlurV4/uLihl0j3uVZgTCP9pUg53KumMblisfLsTPT9+
 NdDLcqaoyizo8WnChQ7BbU7dAhQRVBiXMCeliCiXrLbSuaQMD17WqC5LHJIU9ANopm5Yc+Ro
 C/sAh4FlgKh7ZAFQC3TAk1ehHrUdc4XhRoG0eYEZD5EB1BLjV6T0Zoi
IronPort-HdrOrdr: A9a23:hJsw66DyzM2ffzflHemi55DYdb4zR+YMi2TDtnocdfUxSKelfq
 +V88jzuSWbtN9yYhEdcKG7WZVoKEm0nfQZ3WB7B8bAYOCJghrMEKhSqafk3j38C2nf24dmpM
 NdmnFFeb/NMWQ=
X-IronPort-AV: E=Sophos;i="5.97,215,1669093200"; 
   d="scan'208";a="92558107"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH v2 1/5] xen/version: Drop bogus return values for XENVER_platform_parameters
Date: Fri, 13 Jan 2023 23:08:31 +0000
Message-ID: <20230113230835.29356-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230113230835.29356-1-andrew.cooper3@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

A split in virtual address space is only applicable for x86 PV guests.
Furthermore, the information returned for x86 64bit PV guests is wrong.

Explain the problem in version.h, stating the other information that PV guests
need to know.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>

The only reason this does not get an XSA is that Xen does not have any form
of KASLR.

v2:
 * Retain the useless return value for 64bit PV guests in release builds only.
 * Rewrite comments.
---
 xen/common/kernel.c          | 20 +++++++++++++++++---
 xen/include/public/version.h | 27 +++++++++++++++++++++++++++
 2 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 4f268d83e3cb..f7b1f65f373c 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -518,11 +518,15 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     
     case XENVER_platform_parameters:
     {
+        const struct vcpu *curr = current;
+
 #ifdef CONFIG_COMPAT
-        if ( current->hcall_compat )
+        if ( curr->hcall_compat )
         {
             compat_platform_parameters_t params = {
-                .virt_start = HYPERVISOR_COMPAT_VIRT_START(current->domain),
+                .virt_start = is_pv_vcpu(curr)
+                            ? HYPERVISOR_COMPAT_VIRT_START(curr->domain)
+                            : 0,
             };
 
             if ( copy_to_guest(arg, &params, 1) )
@@ -532,7 +536,17 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 #endif
         {
             xen_platform_parameters_t params = {
-                .virt_start = HYPERVISOR_VIRT_START,
+                /*
+                 * Out of an abundance of caution, retain the useless return
+                 * value for 64bit PV guests, but in release builds only.
+                 *
+                 * This is not expected to cause any problems, but if it does,
+                 * the developer impacted will be the one best suited to fix
+                 * the caller not to issue this hypercall.
+                 */
+                .virt_start = !IS_ENABLED(CONFIG_DEBUG) && is_pv_vcpu(curr)
+                              ? HYPERVISOR_VIRT_START
+                              : 0,
             };
 
             if ( copy_to_guest(arg, &params, 1) )
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 0ff8bd9077c6..cbc4ef7a46e6 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -42,6 +42,33 @@ typedef char xen_capabilities_info_t[1024];
 typedef char xen_changeset_info_t[64];
 #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
 
+/*
+ * This API is problematic.
+ *
+ * It is only applicable to guests which share pagetables with Xen (x86 PV
+ * guests), but unfortunately has leaked into other guest types and
+ * architectures with an expectation of never failing.
+ *
+ * It is intended to identify the virtual address split between guest kernel
+ * and Xen.
+ *
+ * For 32bit PV guests, there is a split, and it is variable (between two
+ * fixed bounds), and this boundary is reported to guests.  The detail missing
+ * from the hypercall is that the second boundary is the 32bit architectural
+ * boundary at 4G.
+ *
+ * For 64bit PV guests, Xen lives at the bottom of the upper canonical range.
+ * This hypercall happens to report the architectural boundary, not the one
+ * which would be necessary to make a variable split work.  As such, this
+ * hypercall is entirely useless for 64bit PV guests, and all inspected
+ * implementations at the time of writing were found to have compile time
+ * expectations about the split.
+ *
+ * For architectures where this hypercall is implemented, for backwards
+ * compatibility with the expectation of the hypercall never failing Xen will
+ * return 0 instead of failing with -ENOSYS in cases where the guest should
+ * not be making the hypercall.
+ */
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
     xen_ulong_t virt_start;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 23:08:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 23:08:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477556.740313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAG-00079o-4f; Fri, 13 Jan 2023 23:08:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477556.740313; Fri, 13 Jan 2023 23:08:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAG-00079g-1B; Fri, 13 Jan 2023 23:08:48 +0000
Received: by outflank-mailman (input) for mailman id 477556;
 Fri, 13 Jan 2023 23:08:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bX0/=5K=citrix.com=prvs=37021d3d6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pGTAE-0006f0-Aj
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 23:08:46 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3a792e11-9397-11ed-91b6-6bf2151ebd3b;
 Sat, 14 Jan 2023 00:08:45 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a792e11-9397-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673651326;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=RRasjkNoe7Enmu6lqfWgkYl+d2SZ3qpC6O2iTzVGXP8=;
  b=MdsQVZDSP/WK9hJEdeK4w8UpXEB4V8TvFo2ZhEWiO5PE3rTtpqjzlZQk
   9k/blNT+K+qe4e5Zel/A4z7eUF3YEkr8XJ+pPgTOH+r3Ic9TtXQUi/5yt
   fcWaEahhLl2QAs8UY2Ab94+hZsiM/Bad37Rb0oVhzMNW8b3l0TBE8ys2O
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92558108
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:adSghqlVPqX1OGO2gMPF/C/o5gxYJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xJMWmiDbvrfZzbwfNsibduwo0pXvJaGyt5iS1E/qS0yEiMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icf3grHmeIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7auaVA8w5ARkPqgS5QSGyxH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 dsFMQsiRz6SvbKJg/Wha8Rqh/4EIeC+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglHWdTFCpU3Tjq0w+2XJlyR60aT3McqTcduPLSlQthfC+
 z+Wpjypav0cHIDEyzGqoi/yurHelBnkR4YYEaz7/fE/1TV/wURMUUZLBDNXu8KRiFO6Wt9ZA
 1wZ/Gwpt6da3FOvZsnwWVu/unHslhQRQcZKGus2rgSE0LPJ4h2xD3IBCDVGbbQOisgyQjA70
 06TqPngDzdvrb69RGqU8/GfqjbaETMOMWYIaCsATA0Ey9ruuoc+ilTIVNkLOIyfg8DxGDrw6
 yuXtyV4jLIW5eYb2qP+8V3ZjjaEopnSUhVz9gjRRnii7A5yeMiifYPA1LTAxa8edsDDFADH5
 SVa3ZHEt4jiEK1higSqXfw2M5iH9szVD36bm39CGZgb0DmErivLkZ9r3N1uGKt4Gp9aJmS0P
 xGP4lo5CIx7ZyXzM/IuC26lI4FzlPW7S4y4PhzBRoAWCqWdYjNr682HiaS4+2n22HYhnqgkU
 XtwWZb9VC1KYUiLIdffegv87VPI7npkrY8rbcqnpylLKJLHDJJvdZ8LMUGVcscy576erQPe/
 r53bpXVkEsEDL2vOnmOqub/yGzmylBiVfjLRzF/LLbfcmKK5kl8YxMu/V/RU9M8xPkE/gs51
 nq8RlVZ2DLCaY7vcG23hoRYQOq3B/5X9CtrVRHAyH70gxDPl67ztvZAH3b2FJF7nNFeIQlcF
 qdbKp3RX6oXFFwqOV01NPHAkWCrTzzz7SrmAsZvSGFXk0JIL+ARxuLZQw==
IronPort-HdrOrdr: A9a23:YUNK7qGnqKuFCqYgpLqE5seALOsnbusQ8zAXPiFKJSC9F/byqy
 nAppsmPHPP5gr5OktBpTnwAsi9qBrnnPYejLX5Vo3SPzUO1lHYSb1K3M/PxCDhBj271sM179
 YFT0GmMqyTMWRH
X-IronPort-AV: E=Sophos;i="5.97,215,1669093200"; 
   d="scan'208";a="92558108"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH v2 2/5] xen/version: Calculate xen_capabilities_info once at boot
Date: Fri, 13 Jan 2023 23:08:32 +0000
Message-ID: <20230113230835.29356-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230113230835.29356-1-andrew.cooper3@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The arch_get_xen_caps() infrastructure is horribly inefficient, for something
that is constant after features have been resolved on boot.

Every instance used snprintf() to format constants into a string (which gets
shorter when %d gets resolved!), which gets double buffered on the stack.

Switch to using string literals with the "3.0" inserted - these numbers
haven't changed in 18 years (the Xen 3.0 release was Dec 5th 2005).

Use initcalls to format the data into xen_cap_info, which is deliberately not
of type xen_capabilities_info_t because a 1k array is a silly overhead for
storing a maximum of 77 chars (the x86 version) and isn't liable to need any
more space in the foreseeable future.

This speeds up the XENVER_capabilities hypercall, but the purpose of the
change is to allow us to introduce a better XENVER_* API that doesn't force us
to put a 1k buffer on the stack.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>

v2:
 * New

If Xen had strncpy(), then the hunk in do_xen_version() could read:

  if ( deny )
     memset(info, 0, sizeof(info));
  else
     strncpy(info, xen_cap_info, sizeof(info));

to avoid double processing the start of the buffer, but given the ABI (must
write 1k chars into the guest), I cannot see any way of taking info off the
stack without some kind of strncpy_to_guest() API.

Moving to __initcall() also allows new architectures to not implement this
API, and I'm going to recommend strongly that they don't.  It's a very dubious
way of signalling about 3 bits of info to the toolstack, and inefficient to
use (the toolstack has to do string parsing on the result to figure out
whether PV64/PV32/HVM is available).
---
 xen/arch/arm/setup.c        | 19 ++++++-------------
 xen/arch/x86/setup.c        | 31 ++++++++++---------------------
 xen/common/kernel.c         |  3 ++-
 xen/include/xen/hypercall.h |  2 --
 xen/include/xen/version.h   |  2 ++
 5 files changed, 20 insertions(+), 37 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90e3..b71b4bc506e0 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1194,24 +1194,17 @@ void __init start_xen(unsigned long boot_phys_offset,
     switch_stack_and_jump(idle_vcpu[0]->arch.cpu_info, init_done);
 }
 
-void arch_get_xen_caps(xen_capabilities_info_t *info)
+static int __init init_xen_cap_info(void)
 {
-    /* Interface name is always xen-3.0-* for Xen-3.x. */
-    int major = 3, minor = 0;
-    char s[32];
-
-    (*info)[0] = '\0';
-
 #ifdef CONFIG_ARM_64
-    snprintf(s, sizeof(s), "xen-%d.%d-aarch64 ", major, minor);
-    safe_strcat(*info, s);
+    safe_strcat(xen_cap_info, "xen-3.0-aarch64 ");
 #endif
     if ( cpu_has_aarch32 )
-    {
-        snprintf(s, sizeof(s), "xen-%d.%d-armv7l ", major, minor);
-        safe_strcat(*info, s);
-    }
+        safe_strcat(xen_cap_info, "xen-3.0-armv7l ");
+
+    return 0;
 }
+__initcall(init_xen_cap_info);
 
 /*
  * Local variables:
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 566422600d94..f80821469ece 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1956,35 +1956,24 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     unreachable();
 }
 
-void arch_get_xen_caps(xen_capabilities_info_t *info)
+static int __init cf_check init_xen_cap_info(void)
 {
-    /* Interface name is always xen-3.0-* for Xen-3.x. */
-    int major = 3, minor = 0;
-    char s[32];
-
-    (*info)[0] = '\0';
-
     if ( IS_ENABLED(CONFIG_PV) )
     {
-        snprintf(s, sizeof(s), "xen-%d.%d-x86_64 ", major, minor);
-        safe_strcat(*info, s);
+        safe_strcat(xen_cap_info, "xen-3.0-x86_64 ");
 
         if ( opt_pv32 )
-        {
-            snprintf(s, sizeof(s), "xen-%d.%d-x86_32p ", major, minor);
-            safe_strcat(*info, s);
-        }
+            safe_strcat(xen_cap_info, "xen-3.0-x86_32p ");
     }
     if ( hvm_enabled )
-    {
-        snprintf(s, sizeof(s), "hvm-%d.%d-x86_32 ", major, minor);
-        safe_strcat(*info, s);
-        snprintf(s, sizeof(s), "hvm-%d.%d-x86_32p ", major, minor);
-        safe_strcat(*info, s);
-        snprintf(s, sizeof(s), "hvm-%d.%d-x86_64 ", major, minor);
-        safe_strcat(*info, s);
-    }
+        safe_strcat(xen_cap_info,
+                    "hvm-3.0-x86_32 "
+                    "hvm-3.0-x86_32p "
+                    "hvm-3.0-x86_64 ");
+
+    return 0;
 }
+__initcall(init_xen_cap_info);
 
 int __hwdom_init xen_in_range(unsigned long mfn)
 {
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f7b1f65f373c..4fa1d6710115 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -30,6 +30,7 @@ enum system_state system_state = SYS_STATE_early_boot;
 
 xen_commandline_t saved_cmdline;
 static const char __initconst opt_builtin_cmdline[] = CONFIG_CMDLINE;
+char __ro_after_init xen_cap_info[128];
 
 static int assign_integer_param(const struct kernel_param *param, uint64_t val)
 {
@@ -509,7 +510,7 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         memset(info, 0, sizeof(info));
         if ( !deny )
-            arch_get_xen_caps(&info);
+            safe_strcpy(info, xen_cap_info);
 
         if ( copy_to_guest(arg, info, ARRAY_SIZE(info)) )
             return -EFAULT;
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index f307dfb59760..15b6be6ec818 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -56,6 +56,4 @@ common_vcpu_op(int cmd,
     struct vcpu *v,
     XEN_GUEST_HANDLE_PARAM(void) arg);
 
-void arch_get_xen_caps(xen_capabilities_info_t *info);
-
 #endif /* __XEN_HYPERCALL_H__ */
diff --git a/xen/include/xen/version.h b/xen/include/xen/version.h
index 93c58773630c..4856ad1b446d 100644
--- a/xen/include/xen/version.h
+++ b/xen/include/xen/version.h
@@ -19,6 +19,8 @@ const char *xen_deny(void);
 const char *xen_build_info(void);
 int xen_build_id(const void **p, unsigned int *len);
 
+extern char xen_cap_info[128];
+
 #ifdef BUILD_ID
 void xen_build_init(void);
 int xen_build_id_check(const Elf_Note *n, unsigned int n_sz,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 23:08:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 23:08:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477557.740317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAG-0007Cq-GK; Fri, 13 Jan 2023 23:08:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477557.740317; Fri, 13 Jan 2023 23:08:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAG-0007CU-A0; Fri, 13 Jan 2023 23:08:48 +0000
Received: by outflank-mailman (input) for mailman id 477557;
 Fri, 13 Jan 2023 23:08:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bX0/=5K=citrix.com=prvs=37021d3d6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pGTAF-0006f0-Nb
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 23:08:47 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b387538-9397-11ed-91b6-6bf2151ebd3b;
 Sat, 14 Jan 2023 00:08:46 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b387538-9397-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673651327;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=RhJxNddOZnuA9cAZ20nrQNAY+XT5cis1ShPg10NhTgQ=;
  b=ar9kl0ICAkpxo5bHeQ8Whdm+c0Ow8TAH91Mxi/7Sd8q0ABpwWZ6polsz
   BROa5BPFXCzGJMaZx/LAU0jRaL6RX9KtHQbg/PQSxgDg68fl0B3ZQ5YmJ
   vX0TNisyji8oWdgUN72yugyouEUXJLmRbbk9nfbyxplbzndwfWWKyaqUl
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92558110
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:R7iNCKLMS5z0TbkDFE+RO5UlxSXFcZb7ZxGr2PjKsXjdYENS12NRz
 WFJD2+Ca/+DYGrwet9zboS29ksDv5fUxtdmSQdlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHv+kUrWs1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPcwP9TlK6q4mhA5wVlPawjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5QHHh12
 tU2OQpdUVPEncObxrOBbeRF05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWteGknHTgNRZfr0qYv/Ef6GnP1g1hlrPqNbI5f/TbH5gIzh/B/
 goq+UzrPzU8O/unkQGUqF2vhPbhgyDkV4cdQejQGvlC3wTImz175ActfVmmpfi0jGauVtQZL
 FYbkgIxqYAi+UrtScPyNzW0r3KJsQQVc8ZBGO09rgeWw+zb5BjxLmoNSDJbecElnMAzTD0uk
 FSOmrvBCSR0tbyJSVqU7rqOsS6pIi8RMHMDYikfCwAC5rHLsIw1yx7CUNtnOKq0lcHuXyH9x
 SiQqyozjKlVitQEv42g5kzOiT+oopnPTyY26x/RU2bj6Rl2DKaHTYG17VnQ7d5bMZ2UCFKGu
 RA5d9O2tb5US8vXzWrUHbtLRevyjxqYDNHCqXlyBqIO3hq8wS6cPsdKwRx4JX1OP+9RLFcFf
 3TvVRNtCI57ZSX1NvIoPd7qUqzG3oC7S427C6m8gs5mJ8EoKVTZpHwGiVu4hTiFraQ6rU0o1
 X53m+6IBG1SN6loxSHeqww1ge5ynXBWKY8+qPnGI/WbPVm2PiT9pU8tagfmUwzAxPrsTP/p2
 9heLdCW7B5UTffzZCLamaZKcw9RcyNnVcGu+5UMHgJmHuaBMDhxY8I9PJt7I9A190irvrqgE
 o6Btr9wlwOk2CyvxfSiYXF/crL/NauTXlpiVRHAyW2AgiB5Ca72tfd3SnfCVeV/nACV5aIuH
 qZtlgTpKqgndwkrDBxEM8es9N0/Kkz17e9MVgL8CAUCk1dbb1Sh0rfZksHHrUHi0gLfWRMCn
 oCd
IronPort-HdrOrdr: A9a23:0elSxart9ZL3gN78Y++HzcsaV5oleYIsimQD101hICG9E/b1qy
 nKpp8mPHDP5wr5NEtPpTnjAsm9qALnlKKdiLN5Vd3OYOCMghrKEGgN1/qG/xTQXwH46+5Bxe
 NBXsFFebnN5IFB/KTH3DU=
X-IronPort-AV: E=Sophos;i="5.97,215,1669093200"; 
   d="scan'208";a="92558110"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Daniel Smith
	<dpsmith@apertussolutions.com>, Jason Andryuk <jandryuk@gmail.com>
Subject: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_* subops
Date: Fri, 13 Jan 2023 23:08:33 +0000
Message-ID: <20230113230835.29356-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230113230835.29356-1-andrew.cooper3@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Recently in XenServer, we have encountered problems caused by both
XENVER_extraversion and XENVER_commandline having fixed bounds.

More than just the invariant size, the APIs/ABIs are also broken by
typedef-ing an array, and by using an unqualified 'char', which has
implementation-specific signedness.
Provide brand new ops, which are capable of expressing variable length
strings, and mark the older ops as broken.

This fixes all issues around XENVER_extraversion being longer than 15 chars.
More work is required to remove other assumptions about XENVER_commandline
being 1023 chars long.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>
CC: Daniel Smith <dpsmith@apertussolutions.com>
CC: Jason Andryuk <jandryuk@gmail.com>

v2:
 * Remove xen_capabilities_info_t from the stack now that arch_get_xen_caps()
   has gone.
 * Use an arbitrary limit check much lower than INT_MAX.
 * Use "buf" rather than "string" terminology.
 * Expand the API comment.

Tested by forcing XENVER_extraversion to be 20 chars long, and confirming that
an untruncated version can be obtained.
---
 xen/common/kernel.c          | 62 +++++++++++++++++++++++++++++++++++++++++++
 xen/include/public/version.h | 63 ++++++++++++++++++++++++++++++++++++++++++--
 xen/include/xlat.lst         |  1 +
 xen/xsm/flask/hooks.c        |  4 +++
 4 files changed, 128 insertions(+), 2 deletions(-)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 4fa1d6710115..cc5d8247b04d 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -24,6 +24,7 @@
 CHECK_build_id;
 CHECK_compile_info;
 CHECK_feature_info;
+CHECK_varbuf;
 #endif
 
 enum system_state system_state = SYS_STATE_early_boot;
@@ -470,6 +471,59 @@ static int __init cf_check param_init(void)
 __initcall(param_init);
 #endif
 
+static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct xen_varbuf user_str;
+    const char *str = NULL;
+    size_t sz;
+
+    switch ( cmd )
+    {
+    case XENVER_extraversion2:
+        str = xen_extra_version();
+        break;
+
+    case XENVER_changeset2:
+        str = xen_changeset();
+        break;
+
+    case XENVER_commandline2:
+        str = saved_cmdline;
+        break;
+
+    case XENVER_capabilities2:
+        str = xen_cap_info;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return -ENODATA;
+    }
+
+    sz = strlen(str);
+
+    if ( sz > KB(64) ) /* Arbitrary limit.  Avoid long-running operations. */
+        return -E2BIG;
+
+    if ( guest_handle_is_null(arg) ) /* Length request */
+        return sz;
+
+    if ( copy_from_guest(&user_str, arg, 1) )
+        return -EFAULT;
+
+    if ( user_str.len == 0 )
+        return -EINVAL;
+
+    if ( sz > user_str.len )
+        return -ENOBUFS;
+
+    if ( copy_to_guest_offset(arg, offsetof(struct xen_varbuf, buf),
+                              str, sz) )
+        return -EFAULT;
+
+    return sz;
+}
+
 long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     bool_t deny = !!xsm_xen_version(XSM_OTHER, cmd);
@@ -683,6 +737,14 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         return sz;
     }
+
+    case XENVER_extraversion2:
+    case XENVER_capabilities2:
+    case XENVER_changeset2:
+    case XENVER_commandline2:
+        if ( deny )
+            return -EPERM;
+        return xenver_varbuf_op(cmd, arg);
     }
 
     return -ENOSYS;
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index cbc4ef7a46e6..9287b5d3ff50 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -19,12 +19,20 @@
 /* arg == NULL; returns major:minor (16:16). */
 #define XENVER_version      0
 
-/* arg == xen_extraversion_t. */
+/*
+ * arg == xen_extraversion_t.
+ *
+ * This API/ABI is broken.  Use XENVER_extraversion2 instead.
+ */
 #define XENVER_extraversion 1
 typedef char xen_extraversion_t[16];
 #define XEN_EXTRAVERSION_LEN (sizeof(xen_extraversion_t))
 
-/* arg == xen_compile_info_t. */
+/*
+ * arg == xen_compile_info_t.
+ *
+ * This API/ABI is broken and truncates data.
+ */
 #define XENVER_compile_info 2
 struct xen_compile_info {
     char compiler[64];
@@ -34,10 +42,20 @@ struct xen_compile_info {
 };
 typedef struct xen_compile_info xen_compile_info_t;
 
+/*
+ * arg == xen_capabilities_info_t.
+ *
+ * This API/ABI is broken.  Use XENVER_capabilities2 instead.
+ */
 #define XENVER_capabilities 3
 typedef char xen_capabilities_info_t[1024];
 #define XEN_CAPABILITIES_INFO_LEN (sizeof(xen_capabilities_info_t))
 
+/*
+ * arg == xen_changeset_info_t.
+ *
+ * This API/ABI is broken.  Use XENVER_changeset2 instead.
+ */
 #define XENVER_changeset 4
 typedef char xen_changeset_info_t[64];
 #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
@@ -95,6 +113,11 @@ typedef struct xen_feature_info xen_feature_info_t;
  */
 #define XENVER_guest_handle 8
 
+/*
+ * arg == xen_commandline_t.
+ *
+ * This API/ABI is broken.  Use XENVER_commandline2 instead.
+ */
 #define XENVER_commandline 9
 typedef char xen_commandline_t[1024];
 
@@ -110,6 +133,42 @@ struct xen_build_id {
 };
 typedef struct xen_build_id xen_build_id_t;
 
+/*
+ * Container for an arbitrary variable length buffer.
+ */
+struct xen_varbuf {
+    uint32_t len;                          /* IN:  size of buf[] in bytes. */
+    unsigned char buf[XEN_FLEX_ARRAY_DIM]; /* OUT: requested data.         */
+};
+typedef struct xen_varbuf xen_varbuf_t;
+
+/*
+ * arg == xen_varbuf_t
+ *
+ * Equivalent to the original ops, but with a non-truncating API/ABI.
+ *
+ * These hypercalls can fail for a number of reasons.  All callers must handle
+ * -XEN_xxx return values appropriately.
+ *
+ * Passing arg == NULL is a request for size, which will be signalled with a
+ * non-negative return value.  Note: a return size of 0 may be legitimate for
+ * the requested subop.
+ *
+ * Otherwise, the input xen_varbuf_t provides the size of the following
+ * buffer.  Xen will fill the buffer, and return the number of bytes written
+ * (e.g. if the input buffer was longer than necessary).
+ *
+ * Some subops may return binary data.  Some subops may be expected to return
+ * textual data.  These are returned without a NUL terminator, and while the
+ * contents are expected to be ASCII/UTF-8, Xen makes no guarantees to this
+ * effect.  e.g. Xen has no control over the formatting used for the command
+ * line.
+ */
+#define XENVER_extraversion2 11
+#define XENVER_capabilities2 12
+#define XENVER_changeset2    13
+#define XENVER_commandline2  14
+
 #endif /* __XEN_PUBLIC_VERSION_H__ */
 
 /*
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index d601a8a98421..762c8a77fb27 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -172,6 +172,7 @@
 ?	build_id                        version.h
 ?	compile_info                    version.h
 ?	feature_info                    version.h
+?	varbuf                          version.h
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
 ?	flask_access			xsm/flask_op.h
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 78225f68c15c..a671dcd0322e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1777,15 +1777,18 @@ static int cf_check flask_xen_version(uint32_t op)
         /* These sub-ops ignore the permission checks and return data. */
         return 0;
     case XENVER_extraversion:
+    case XENVER_extraversion2:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_EXTRAVERSION, NULL);
     case XENVER_compile_info:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_COMPILE_INFO, NULL);
     case XENVER_capabilities:
+    case XENVER_capabilities2:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_CAPABILITIES, NULL);
     case XENVER_changeset:
+    case XENVER_changeset2:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_CHANGESET, NULL);
     case XENVER_pagesize:
@@ -1795,6 +1798,7 @@ static int cf_check flask_xen_version(uint32_t op)
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_GUEST_HANDLE, NULL);
     case XENVER_commandline:
+    case XENVER_commandline2:
         return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
                             VERSION__XEN_COMMANDLINE, NULL);
     case XENVER_build_id:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 23:08:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 23:08:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477554.740291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAD-0006fI-Lk; Fri, 13 Jan 2023 23:08:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477554.740291; Fri, 13 Jan 2023 23:08:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAD-0006fB-Iv; Fri, 13 Jan 2023 23:08:45 +0000
Received: by outflank-mailman (input) for mailman id 477554;
 Fri, 13 Jan 2023 23:08:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bX0/=5K=citrix.com=prvs=37021d3d6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pGTAC-0006f0-8M
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 23:08:44 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 376e8d7e-9397-11ed-91b6-6bf2151ebd3b;
 Sat, 14 Jan 2023 00:08:42 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 376e8d7e-9397-11ed-91b6-6bf2151ebd3b
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92558106
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,215,1669093200"; 
   d="scan'208";a="92558106"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Daniel Smith
	<dpsmith@apertussolutions.com>, Jason Andryuk <jandryuk@gmail.com>
Subject: [PATCH v2 0/5]  Fix truncation of various XENVER_* subops
Date: Fri, 13 Jan 2023 23:08:30 +0000
Message-ID: <20230113230835.29356-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

See patch 3 for details of the problem.  The other patches fix further
errors found while investigating.

Some patches were committed straight from v1.  Several new patches add
further cleanup.

Andrew Cooper (5):
  xen/version: Drop bogus return values for XENVER_platform_parameters
  xen/version: Calculate xen_capabilities_info once at boot
  xen/version: Introduce non-truncating XENVER_* subops
  xen/version: Fold build_id handling into xenver_varbuf_op()
  xen/version: Misc style fixes

 xen/arch/arm/setup.c         |  19 ++----
 xen/arch/x86/setup.c         |  31 ++++------
 xen/common/kernel.c          | 139 ++++++++++++++++++++++++++++++-------------
 xen/common/version.c         |   4 +-
 xen/include/public/version.h |  95 ++++++++++++++++++++++++++++-
 xen/include/xen/hypercall.h  |   2 -
 xen/include/xen/version.h    |   2 +
 xen/include/xlat.lst         |   1 +
 xen/xsm/flask/hooks.c        |   4 ++
 9 files changed, 214 insertions(+), 83 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 13 23:08:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Jan 2023 23:08:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477559.740346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAJ-0007xs-AC; Fri, 13 Jan 2023 23:08:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477559.740346; Fri, 13 Jan 2023 23:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGTAJ-0007wo-5R; Fri, 13 Jan 2023 23:08:51 +0000
Received: by outflank-mailman (input) for mailman id 477559;
 Fri, 13 Jan 2023 23:08:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bX0/=5K=citrix.com=prvs=37021d3d6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pGTAH-00076h-6T
 for xen-devel@lists.xenproject.org; Fri, 13 Jan 2023 23:08:49 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3a90c6ee-9397-11ed-b8d0-410ff93cb8f0;
 Sat, 14 Jan 2023 00:08:47 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a90c6ee-9397-11ed-b8d0-410ff93cb8f0
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92558111
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,215,1669093200"; 
   d="scan'208";a="92558111"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH v2 5/5] xen/version: Misc style fixes
Date: Fri, 13 Jan 2023 23:08:35 +0000
Message-ID: <20230113230835.29356-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230113230835.29356-1-andrew.cooper3@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 xen/common/kernel.c  | 11 +++++------
 xen/common/version.c |  4 ++--
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 7ab410ac7c28..f1d4a66d8885 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -1,6 +1,6 @@
 /******************************************************************************
  * kernel.c
- * 
+ *
  * Copyright (c) 2002-2005 K A Fraser
  */
 
@@ -540,7 +540,7 @@ static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    bool_t deny = !!xsm_xen_version(XSM_OTHER, cmd);
+    bool deny = xsm_xen_version(XSM_OTHER, cmd);
 
     switch ( cmd )
     {
@@ -584,7 +584,7 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -EFAULT;
         return 0;
     }
-    
+
     case XENVER_platform_parameters:
     {
         const struct vcpu *curr = current;
@@ -623,9 +623,8 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         }
 
         return 0;
-        
     }
-    
+
     case XENVER_changeset:
     {
         xen_changeset_info_t chgset;
@@ -652,7 +651,7 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             if ( VM_ASSIST(d, pae_extended_cr3) )
                 fi.submap |= (1U << XENFEAT_pae_pgdir_above_4gb);
             if ( paging_mode_translate(d) )
-                fi.submap |= 
+                fi.submap |=
                     (1U << XENFEAT_writable_page_tables) |
                     (1U << XENFEAT_auto_translated_physmap);
             if ( is_hardware_domain(d) )
diff --git a/xen/common/version.c b/xen/common/version.c
index d32013520863..8e738672debe 100644
--- a/xen/common/version.c
+++ b/xen/common/version.c
@@ -209,11 +209,11 @@ void __init xen_build_init(void)
             }
         }
     }
-#endif
+#endif /* CONFIG_X86 */
     if ( !rc )
         printk(XENLOG_INFO "build-id: %*phN\n", build_id_len, build_id_p);
 }
-#endif
+#endif /* BUILD_ID */
 /*
  * Local variables:
  * mode: C
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Sat Jan 14 00:51:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 00:51:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477593.740357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGUlu-0003sk-9U; Sat, 14 Jan 2023 00:51:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477593.740357; Sat, 14 Jan 2023 00:51:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGUlu-0003sd-6N; Sat, 14 Jan 2023 00:51:46 +0000
Received: by outflank-mailman (input) for mailman id 477593;
 Sat, 14 Jan 2023 00:51:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8OqO=5L=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pGUls-0003sX-8j
 for xen-devel@lists.xenproject.org; Sat, 14 Jan 2023 00:51:44 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2041.outbound.protection.outlook.com [40.107.15.41])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9c9ccf20-93a5-11ed-91b6-6bf2151ebd3b;
 Sat, 14 Jan 2023 01:51:42 +0100 (CET)
Received: from DUZPR01CA0009.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:3c3::16) by AS8PR08MB8063.eurprd08.prod.outlook.com
 (2603:10a6:20b:54c::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Sat, 14 Jan
 2023 00:51:29 +0000
Received: from DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3c3:cafe::ca) by DUZPR01CA0009.outlook.office365.com
 (2603:10a6:10:3c3::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Sat, 14 Jan 2023 00:51:29 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT063.mail.protection.outlook.com (100.127.142.255) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Sat, 14 Jan 2023 00:51:28 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Sat, 14 Jan 2023 00:51:28 +0000
Received: from 5ffa8a0298a9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B93D039F-7B67-4F8C-8C97-D30E0F2B5522.1; 
 Sat, 14 Jan 2023 00:51:18 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5ffa8a0298a9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 14 Jan 2023 00:51:18 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB4PR08MB8007.eurprd08.prod.outlook.com (2603:10a6:10:38d::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Sat, 14 Jan
 2023 00:51:15 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Sat, 14 Jan 2023
 00:51:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c9ccf20-93a5-11ed-91b6-6bf2151ebd3b
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with
 "mov_w rX, <label>"
Thread-Topic: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>"
 with "mov_w rX, <label>"
Thread-Index: AQHZJzeChOboto8YKEqIP7caZFvxO66dC3QQ
Date: Sat, 14 Jan 2023 00:51:15 +0000
Message-ID:
 <AS8PR08MB799188C9E6044BDC202474B392C39@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-7-julien@xen.org>
In-Reply-To: <20230113101136.479-7-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 7C10AD5821D67E4C9A6F3E554D69FCD9.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB4PR08MB8007:EE_|DBAEUR03FT063:EE_|AS8PR08MB8063:EE_
X-MS-Office365-Filtering-Correlation-Id: 7a034420-ba2b-4354-5f7c-08daf5c978a9
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB8007
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	cf8012fb-4b48-4c7d-627f-08daf5c9709d
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jan 2023 00:51:28.7787
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7a034420-ba2b-4354-5f7c-08daf5c978a9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8063

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 06/14] xen/arm32: head: Replace "ldr rX, =<label>" with
> "mov_w rX, <label>"
>
> From: Julien Grall <jgrall@amazon.com>
>
> "ldr rX, =<label>" is used to load a value from the literal pool. This
> implies a memory access.
>
> This can be avoided by using the macro mov_w, which encodes the value in
> the immediates of two instructions.
>
> So replace all "ldr rX, =<label>" with "mov_w rX, <label>".
>
> No functional changes intended.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
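
For context (my illustration, not code from the patch): the "ldr rX, =<label>"
pseudo-instruction makes the assembler place the 32-bit constant in a nearby
literal pool and emit a PC-relative load, whereas the mov_w macro builds the
constant in the register with a movw/movt pair, so no data access is needed.
Roughly:

```asm
    @ Literal-pool form: one load instruction plus a data word,
    @ fetched from memory at runtime.
    ldr   r0, =my_label            @ becomes: ldr r0, [pc, #offset]
                                   @ ...plus a ".word my_label" in the pool

    @ mov_w form: two instructions, no memory access. The macro is
    @ roughly equivalent to:
    movw  r0, #:lower16:my_label   @ r0[15:0]  = low half of the address
    movt  r0, #:upper16:my_label   @ r0[31:16] = high half of the address
```

(`my_label` is a placeholder; the patch applies this to the existing labels
in head.S.)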

I've tested this patch on FVP in arm32 execution mode, and
this patch is good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 01:34:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 01:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477599.740368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGVQm-0006V2-BY; Sat, 14 Jan 2023 01:34:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477599.740368; Sat, 14 Jan 2023 01:34:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGVQm-0006Uu-7N; Sat, 14 Jan 2023 01:34:00 +0000
Received: by outflank-mailman (input) for mailman id 477599;
 Sat, 14 Jan 2023 01:33:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8OqO=5L=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pGVQk-0006Um-Tq
 for xen-devel@lists.xenproject.org; Sat, 14 Jan 2023 01:33:58 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2041.outbound.protection.outlook.com [40.107.103.41])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 82ccbd6c-93ab-11ed-91b6-6bf2151ebd3b;
 Sat, 14 Jan 2023 02:33:56 +0100 (CET)
Received: from DU2PR04CA0264.eurprd04.prod.outlook.com (2603:10a6:10:28e::29)
 by DB3PR08MB8985.eurprd08.prod.outlook.com (2603:10a6:10:43f::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Sat, 14 Jan
 2023 01:33:54 +0000
Received: from DBAEUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28e:cafe::76) by DU2PR04CA0264.outlook.office365.com
 (2603:10a6:10:28e::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.14 via Frontend
 Transport; Sat, 14 Jan 2023 01:33:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT047.mail.protection.outlook.com (100.127.143.25) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Sat, 14 Jan 2023 01:33:53 +0000
Received: ("Tessian outbound 0d7b2ab0f13d:v132");
 Sat, 14 Jan 2023 01:33:53 +0000
Received: from c959b49a63f9.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A63AFBE1-0F15-4DAD-9901-489AF8756DE1.1; 
 Sat, 14 Jan 2023 01:33:43 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c959b49a63f9.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 14 Jan 2023 01:33:43 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by GV2PR08MB8318.eurprd08.prod.outlook.com (2603:10a6:150:b5::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Sat, 14 Jan
 2023 01:33:39 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Sat, 14 Jan 2023
 01:33:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82ccbd6c-93ab-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OTjK6UGNM9axJgPFI4jB7B6Gis3T+zUgCHh8Qdg4OWY=;
 b=aF+0wynU70tejRijfxW0GVPVudNU4T3nUxQnWSVLYxOiv6BVKFJjEDS+gLevrkoa5pZHls2QBhaxW2zii8fd0tnwAV4k6b1O1jnD5iOdhi9iLvOlzTtFgGPmsHBzAoQDekYH5Lc/QotZu5+FnurQpc53aux4OtIuhxAi6/YCBzk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bOY+Glw9p4udld8+47Si4bUzm0ckqOswZZUp7YRHUG5Gh47foMJ7xWRb7xk1JG5zy/afUdlXpXF7+TBwLQ2YWtGg/q7o1n8/b99/a4mu/7cJfsL9L2Rpo//G3PNNDOipSetuT2rjC8VwCjaC5NdhfQ97WPVgJHVd/s+of8hAPmG8O9AHcWs7oMeCMOhI0fQ+UuP1gCbdGXQqosqdVJZ36CADSuD2+g2hBWP675NhvHJ413qnN5xMiOfyvRc6FaSYYP3qODSDwaqR1yhsRnJ+4m5zSebLefCF4yfrNv1VEnHt67O/0iCyxFunG0S+bNiAEMskvx+VINi3dLBjZX+TbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OTjK6UGNM9axJgPFI4jB7B6Gis3T+zUgCHh8Qdg4OWY=;
 b=LOO/zDiKXOzIeoJoS3DEzl3sR1mO8q3bkpUFsOTG6MeywZIkHagpFonrGZelWj/yE1ibSPZQRi7k0e9xgCsug6GFo7qsNVSzLLko6cGdiBnM8ww6ZFE21RHGBZ7kTFBsUUyi6WvICSnGU7FNYHZOnoPgaWAVjOprgJ6Y5htDYDT5BFDHmswkXISxK3z9NMgniV64QaUFx012ilUgGxyfulpZJj3cm69LgO76xRGfjra1ijYoUjaOVf+TmAgR0FxL2rSHXFEDRSvw5fuoQTARGBoCytGW9aqumNYCpK16+YpTSX/NCD/AGGsKRYz7h4dhnk+2Eq1+mkE6dTf91tr1wQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OTjK6UGNM9axJgPFI4jB7B6Gis3T+zUgCHh8Qdg4OWY=;
 b=aF+0wynU70tejRijfxW0GVPVudNU4T3nUxQnWSVLYxOiv6BVKFJjEDS+gLevrkoa5pZHls2QBhaxW2zii8fd0tnwAV4k6b1O1jnD5iOdhi9iLvOlzTtFgGPmsHBzAoQDekYH5Lc/QotZu5+FnurQpc53aux4OtIuhxAi6/YCBzk=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in
 enable_mmu()
Thread-Topic: [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in
 enable_mmu()
Thread-Index: AQHZJzd9G8hkGu/MR0eHbN40F8df366dDCsw
Date: Sat, 14 Jan 2023 01:33:39 +0000
Message-ID:
 <AS8PR08MB79916F35468472BC5D27050092C39@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-8-julien@xen.org>
In-Reply-To: <20230113101136.479-8-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9079C52B9CDB554C8FCDE8B4F0079B2E.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|GV2PR08MB8318:EE_|DBAEUR03FT047:EE_|DB3PR08MB8985:EE_
X-MS-Office365-Filtering-Correlation-Id: df4af43a-019d-4ccb-1ae2-08daf5cf655f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 vLe7XCrhDi1P5vFkexryqyx/I4hM6A8Kupt42BqLFhhMsOWG/JC6auAzi7vl60X6kDEVll6hOhaeI/c4EQJbxciuQJJOwoZQqZxtkAEpQyWgPho4xOoyPd+8YxDnyRP9Z/pFrgViNMDORnGU3eYdBtfqbV02oA5r50MI05vV2C5rFH/yw4Tp0BmHXmmG5GG+lWEJWik7/bUWWuGttWgUTGLDSv9umcxjGZYkHl9Yno5tutIfBKZ97BW/5Ny/+3Em4SfDSoN34f3ZcAXBFC/75j7wqGCiFWbgSrJr85pTBhjtejzLhW+VIeIj2gTkqDiR3DMNnT4UYLlOCqg1SCWL8LbWINxR/esku5dIqrNKmTT86xfpSM3I9GSI9vHdpzU5uGDMgBCgHdzh6a9EHQR/JogQHqPUlQ6GrynpFaIYKEof5tpZZQnLvAfL84RnBFAA7gArFSX1ooM4O6gCjeBPwJ7zgyghua0VX2iXurTekw2E3uQlyIPKi/FtWvapkuUjwDJUn51OwmRM49GzdOCR3xFLIvzjrgVvPkcaKkhCC7pECc1nEez5IO+jgPNqgdleQyEjvg+ehXhL12awX+lvB9aCU+G95uwRsZ69KuUjOEtZpkMvCAJCv8xaV2EbF5BaCNwtWIKv0w1ApIkPELhX2Cbi6DZeBJdbDOHhQnT4Iixm/394ccOjjrlcrFKcbngqTT/b7xkZOWHBZxx4eiFNlQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(376002)(396003)(366004)(136003)(346002)(451199015)(83380400001)(9686003)(54906003)(71200400001)(86362001)(110136005)(33656002)(55016003)(7696005)(6506007)(38070700005)(5660300002)(4744005)(122000001)(38100700002)(26005)(186003)(316002)(76116006)(2906002)(66446008)(478600001)(41300700001)(8936002)(66556008)(64756008)(52536014)(66476007)(8676002)(66946007)(4326008);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8318
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	dd20ada4-669d-4fe0-2223-08daf5cf5d07
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Khe4ktwGrLQmeHkr61pneaYz2H3vS9wOEs/QC1n+S48qXfVF0ikvyZkabvgUedPGHS4YcAHKqQSNJfEndGI8DYlkrlgHmgdSEa8vE1p+nU8mduoyFxn1wFa1z02FtuHhMyYoxVAAL0h0oZFdSkal6I75D1CIWpeX3RKBxDuQRDU4z6AIdZ58wSK3gsUyvZZt5cOgU93u6x3WBQXbzZ5pzv6uPW88AlaLcMIg8Kaz7565fGDqMNWRScwFNv+uYBD5Cor072MJpOGJV++a2pkLh7WXbm+UMBuwKeBNWl3LBssbJbPsX/dXWD7vUv8w0Z5GH5hhC8R83r2FWRLYTdypyKT5NuZQuAFp14FLIjPdI9WonN91smpj7sbg3CiSfgFrYWMptPP4Zws5/WUCSlxoVUHcs/YYmCfJXXLqMbWwfHiZmDfLAw5II1xF2zm6e1AUSTFbMPW0R+DUGWcDm/LiAPXCRG3hSQhhsEi+BfGdNfm3tsmp/sJSnv7o03CnNGWI5CWaDMvv41oB7B92Gyug8gPOvGH5YWYbc91mVjDPYPCOcqp30w2AoylL8jDHDLL/+uAaiEfdFycB/iR6Ou6auad4nCINAZ2cPm7C0DD8aJztseC8CaMEK2UWsy6rbOjWhxDINg6BVUxZlqus84G0AaRL7d7EDsVI8gQh4R9VFcpoi3Q4lbdihm15DQdRbAtcwzLJx9yG4V3rYHvtz1QCKw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(136003)(396003)(39860400002)(451199015)(46966006)(36840700001)(40470700004)(7696005)(33656002)(26005)(478600001)(4744005)(356005)(2906002)(9686003)(186003)(41300700001)(336012)(86362001)(316002)(4326008)(8676002)(70586007)(70206006)(47076005)(40460700003)(81166007)(5660300002)(55016003)(83380400001)(40480700001)(8936002)(82310400005)(52536014)(110136005)(82740400003)(6506007)(54906003)(107886003)(36860700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jan 2023 01:33:53.4113
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: df4af43a-019d-4ccb-1ae2-08daf5cf655f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB3PR08MB8985

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 07/14] xen/arm32: head: Jump to the runtime mapping in
> enable_mmu()
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, enable_mmu() will return to an address in the 1:1 mapping,
> and each path is responsible for switching to the runtime mapping.
>
> In a follow-up patch, the logic to switch to the runtime mapping will
> become more complex. So to avoid further code/comment duplication, move
> the switch into enable_mmu().
>
> Lastly, take the opportunity to replace the load from the literal pool
> with mov_w.
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
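
To sketch the idea (names and details are mine, not the actual patch): the
caller passes the runtime-mapping address to continue at, and enable_mmu()
jumps there itself once the MMU is on, instead of returning into the 1:1
mapping and leaving each caller to do the switch:

```asm
    @ Caller: request that execution resume at the runtime (virtual)
    @ address after the MMU has been enabled.
    mov_w lr, runtime_entry        @ hypothetical runtime-mapping label
    b     enable_mmu

enable_mmu:
    @ ... program TTBR/SCTLR and turn the MMU on ...
    @ Jump straight to the runtime mapping rather than returning
    @ into the 1:1 mapping:
    mov   pc, lr
```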

I've tested this patch on FVP in arm32 execution mode, and
this patch is good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 02:17:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 02:17:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477606.740379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGW6G-0002nE-RV; Sat, 14 Jan 2023 02:16:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477606.740379; Sat, 14 Jan 2023 02:16:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGW6G-0002n6-L0; Sat, 14 Jan 2023 02:16:52 +0000
Received: by outflank-mailman (input) for mailman id 477606;
 Sat, 14 Jan 2023 02:16:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8OqO=5L=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pGW6E-0002n0-OA
 for xen-devel@lists.xenproject.org; Sat, 14 Jan 2023 02:16:50 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2076.outbound.protection.outlook.com [40.107.22.76])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7f14e528-93b1-11ed-b8d0-410ff93cb8f0;
 Sat, 14 Jan 2023 03:16:47 +0100 (CET)
Received: from AS8P189CA0025.EURP189.PROD.OUTLOOK.COM (2603:10a6:20b:31f::26)
 by DB4PR08MB7935.eurprd08.prod.outlook.com (2603:10a6:10:379::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Sat, 14 Jan
 2023 02:16:42 +0000
Received: from AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:31f:cafe::f4) by AS8P189CA0025.outlook.office365.com
 (2603:10a6:20b:31f::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.16 via Frontend
 Transport; Sat, 14 Jan 2023 02:16:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT056.mail.protection.outlook.com (100.127.140.107) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Sat, 14 Jan 2023 02:16:41 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Sat, 14 Jan 2023 02:16:40 +0000
Received: from f8e558cb977a.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 08BC1FCF-5E4B-44FA-B701-649627159BB4.1; 
 Sat, 14 Jan 2023 02:16:34 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f8e558cb977a.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 14 Jan 2023 02:16:34 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS2PR08MB8746.eurprd08.prod.outlook.com (2603:10a6:20b:55e::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Sat, 14 Jan
 2023 02:16:33 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Sat, 14 Jan 2023
 02:16:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f14e528-93b1-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=usH70UPyi66+A0kDY5WODCtqwfRUynUMRZuCEGEjch0=;
 b=II/Bfmg9FB1N951GhzUAmoo2mbW9baLGGxy5IZocyIK5ltNHbLaMAmrrJ9nU7Q39sCZhtg38coVWTQ/cmvyTKc5XDi/yE8ewKnKbg2cw2iE+KgelOePDy8zdAU4VcJlAZ2tE40c7Ia64PLnBIX1Nqi4AALbZYjMNbBJydE3W0oQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LfN3v8flniCq7AAE8XBa0VIiVmB9IkCiVcnbQVylGV7uHhcK50+o811z1Gm4lvdZeJt8asIw20S7GdAuJbW6WqamO/qS+r/lhBwfKsx33ns8OBdvoSRPzZkhqLuPd04gdbzd/Tt+OguHqJQ7k74J7G5YtkMmNLbw+9nSeAEdvkZEEZM2BCmHSfoHBHBw2yJGWccpaJfwjLIue+TboEGqgAoNTIhktvuyhZOGlNUR3Vlh0aXGS3ZzRHTaLXGO7zwYe3oQ+8ZxssoEl7FSrAPOOJBLYD8+0EJWnFVeu1kQylJ9JY1cjvLS+T84Ykaw7vDpupOxMQhtYFvD7VZltOPDRQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=usH70UPyi66+A0kDY5WODCtqwfRUynUMRZuCEGEjch0=;
 b=D/Qgq31TYgunEuwHXopbZ0voDrS0Tlh5BPGsAhIhhAv3mDjKhacGY+Gp/+eb5jrCmxOuMNrdAsBB7qi2Miwf4r1sLpPPXuukho5520NwkJKSFS3axliCKJZkIt4Ru+XlS/h/AM+/XEIn7GJRlcBAwZXh+lzFxK7KKj/T2RWqYpNLLFicDHk6z2AFSOWUwJv4nn4q6a3f2lvMo586Lp6CK9nW0CsnoxNvsIMEu+VNOTyc8e4xcrLDIB6UFJ7hWH8Li7k5GvZAGERiKjJDHhi53KNanVFey9qFkMsIwFrJarRR6DKMZqn2j5k1nBeV1XuuIBD+6M3a4C9wNg5i5ZyA2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=usH70UPyi66+A0kDY5WODCtqwfRUynUMRZuCEGEjch0=;
 b=II/Bfmg9FB1N951GhzUAmoo2mbW9baLGGxy5IZocyIK5ltNHbLaMAmrrJ9nU7Q39sCZhtg38coVWTQ/cmvyTKc5XDi/yE8ewKnKbg2cw2iE+KgelOePDy8zdAU4VcJlAZ2tE40c7Ia64PLnBIX1Nqi4AALbZYjMNbBJydE3W0oQ=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush
 the TLBs
Thread-Topic: [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush
 the TLBs
Thread-Index: AQHZJzeD+kLxmSVjqEKBUhQr2u05s66dC6eg
Date: Sat, 14 Jan 2023 02:16:32 +0000
Message-ID:
 <AS8PR08MB7991DC731D328F291043BB6F92C39@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-9-julien@xen.org>
In-Reply-To: <20230113101136.479-9-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: D832F04FED09504FA83717A80094C7D7.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS2PR08MB8746:EE_|AM7EUR03FT056:EE_|DB4PR08MB7935:EE_
X-MS-Office365-Filtering-Correlation-Id: d26e0d94-1afd-41d7-74db-08daf5d55ff4
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 f2gjBxfbt8dE0RDWXBpN6dtCBh2L6wytkGZTl1xsihvTPKGYqJirhm+230NMhap+UiwSQxg9ocsWEccszui3EXRQG70BHv+Fb7iScoeugQz376Tdlf0B81xQRM0Por6fJ2WCM47XUEox8LX99NioYSfBGGI4pIbkI+gfi/shY1aplJwqMIsadAGVj2IZ2eNbfD+ZO08riwgpfhWR61QkPccER6YRI0v+FQ2PgqxULrfa/r48UrYoIWCFAVL/nKSqVNAq4JhNh0a49+DEuDRIAFWjvYHutBkIkTfp2Q/PZ8ol9FDpTmHYB7OqfnoViWuqg+YXi2NlCHnZTNdYkMV7CNZkHwJza0gdTGvOFyPmBLPJIHmiMdsFObVEQXHL4FUHXvHbLglKBzBytjlc/i0xU+9ct7N8fpfTpLU9CAYLqwTp51sSpXUiWlPG/RRMyWRAmgNeZiBvsW1d/XgvQfQXJarx/xwyr0i/6pv1DJutUHlcn9BZPrjtLOLTfNesSUKyI6viygVUy/jFnHT3N/vNdv96YI6Y4O4d+1zTpIiDsXhGEpQoqS6faJUNVfpZL2TRymsoxY4bKFmqxWy72nmBn7CdtzVsKwzytsS/CfC/NU6nD/aGcU+0AKJma6VDdcwgXg8dljHQkyUNgLtFmG4OPnIFgkQ8MDbkenQ28rCkBihqIQsJI61oy4ecARmpvMH+HxQ/1WBpPLUYUUhwpOPDZg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(136003)(39860400002)(396003)(366004)(451199015)(38100700002)(66446008)(71200400001)(52536014)(4326008)(8936002)(122000001)(66946007)(66476007)(64756008)(66556008)(8676002)(5660300002)(4744005)(76116006)(33656002)(86362001)(316002)(110136005)(38070700005)(26005)(2906002)(54906003)(83380400001)(7696005)(41300700001)(478600001)(186003)(55016003)(9686003)(6506007);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8746
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d5f0cfac-69cc-4f98-9794-08daf5d55a7b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	akIp2qXj8wc2tR8XD62FxL2CEyOb9+gwRrWIPSrbFeGPcG2YdG7mF/gqKHehY9vMHWql/sWh2XTbwKL6JeaPahZ9oJhprPgdCwhUNTP7gJRYNt3XpvrsutXWU1JhKcCiNbudT5KcUxqcZe31z6YMhQvIUjgLtFg777iUSUH9WZ/Lm3/h0dhjQGqOkeC18tPJuVSZL+II3NTuEV65m+IK43LUzFeFCqWbKKYCVcfc6FLRq9wi2aPhWoK3c2TkJm8NItnC3UYOn1gPkv6vZZkUnq1Ukc3B1moNHo+PNieSR7T5l2b5ob6jj+CURprOwM3X0FkQZTk4ouIBIqTutmv1tA6GwoG6q8tcPBqhRM9ncv4CtLm+0aCC2GeBsPX1JixoYXQB8/eYHkWqCOs7vEmCZdgrMiUEoxZDxU2z/rz1z7X2AR8KA0ao/c9Hs3Mn8PHx+JuMcKadnJcGS35ftO6hNAT8AScx0s4K0BsmFi5TbdomidW4MMzlG8m0yEGdxwVpnTV9C8sAguoSNnWtej8SVNK4Hk6j9BAkjLKtJL/2uE9XRPBnZe6B3W+kPPFmEto/iTSPxCE56GU3aWRdU2gneKOOMSAhO0MJzQdmNjFIcaEzJiX8ceFdMHCJwM/jSJU1myA3se4AxcmeHZm0K0yxdC8vxJ9m9Xi+UDtZfWZobaIacCTbEhmWGGlQFD8Q3pj1fS+8akjJ/40B8mmeYotO5A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199015)(46966006)(36840700001)(40470700004)(54906003)(47076005)(55016003)(110136005)(83380400001)(336012)(356005)(70586007)(33656002)(70206006)(81166007)(36860700001)(8936002)(82310400005)(82740400003)(4326008)(40460700003)(8676002)(2906002)(52536014)(5660300002)(107886003)(316002)(6506007)(86362001)(41300700001)(478600001)(186003)(9686003)(26005)(7696005)(40480700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jan 2023 02:16:41.2416
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d26e0d94-1afd-41d7-74db-08daf5d55ff4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB7935

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v4 08/14] xen/arm32: head: Introduce an helper to flush the
> TLBs
>
> From: Julien Grall <jgrall@amazon.com>
>
> The sequence for flushing the TLBs is 4 instructions long and often
> requires an explanation of how it works.
>
> So create a helper and use it in the boot code (switch_ttbr() is left
> alone until we decide the semantics of the call).
>
> Note that in secondary_switched, we were also flushing the instruction
> cache and branch predictor. Neither of them was necessary because:
>     * We are only supporting IVIPT caches on arm32, so the instruction
>       cache flush is only necessary when executable code is modified.
>       None of the boot code is doing that.
>     * The instruction cache is not invalidated, and misprediction is not
>       a problem at boot.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
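
For readers following along, a plausible expansion of such a 4-instruction
helper (my sketch, not the patch itself; the exact CP15 operation depends on
the translation regime Xen runs in, here assumed to be HYP mode):

```asm
    @ Invalidate the local TLBs and make the effect visible before
    @ any further translation-dependent instruction executes.
    mov   r0, #0
    mcr   p15, 4, r0, c8, c7, 0   @ TLBIALLH: invalidate all hyp-mode TLB
                                  @ entries on this CPU
    dsb   nsh                     @ wait for the invalidation to complete
    isb                           @ resynchronize the instruction stream
```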

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I've also tested this patch on FVP in arm32 execution mode, and
this patch is good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 02:55:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 02:55:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477612.740390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGWhi-00071C-KS; Sat, 14 Jan 2023 02:55:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477612.740390; Sat, 14 Jan 2023 02:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGWhi-000715-H7; Sat, 14 Jan 2023 02:55:34 +0000
Received: by outflank-mailman (input) for mailman id 477612;
 Sat, 14 Jan 2023 02:55:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NrQy=5L=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pGWhg-00070z-CR
 for xen-devel@lists.xenproject.org; Sat, 14 Jan 2023 02:55:32 +0000
Received: from sonic316-55.consmr.mail.gq1.yahoo.com
 (sonic316-55.consmr.mail.gq1.yahoo.com [98.137.69.31])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6cfb932-93b6-11ed-91b6-6bf2151ebd3b;
 Sat, 14 Jan 2023 03:55:29 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic316.consmr.mail.gq1.yahoo.com with HTTP; Sat, 14 Jan 2023 02:55:27 +0000
Received: by hermes--production-bf1-6cb45cc684-d6p4g (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 340913b5a67483ed7fa44c35b1b92259; 
 Sat, 14 Jan 2023 02:55:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6cfb932-93b6-11ed-91b6-6bf2151ebd3b
Message-ID: <87b831e1-6646-fd9b-2b5e-be61b9ec527d@aol.com>
Date: Fri, 13 Jan 2023 21:55:22 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230110030331-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 13466

On 1/10/23 3:16 AM, Michael S. Tsirkin wrote:
> On Tue, Jan 10, 2023 at 02:08:34AM -0500, Chuck Zmudzinski wrote:
>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>> as noted in docs/igd-assign.txt in the Qemu source code.
>> 
>> Currently, when the xl toolstack is used to configure a Xen HVM guest with
>> Intel IGD passthrough to the guest with the Qemu upstream device model,
>> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>> a different slot. This problem often prevents the guest from booting.
>> 
>> The only available workaround is not good: Configure Xen HVM guests to use
>> the old and no longer maintained Qemu traditional device model available
>> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>> 
>> To implement this feature in the Qemu upstream device model for Xen HVM
>> guests, introduce the following new functions, types, and macros:
>> 
>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>> * typedef XenPTQdevRealize function pointer
>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>> 
>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>> the xl toolstack with the gfx_passthru option enabled, which sets the
>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>> 
>> The new xen_igd_reserve_slot function also needs to be implemented in
>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>> in which case it does nothing.
>> 
>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>> 
>> Move the call to xen_host_pci_device_get, and the associated error
>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>> initialize the device class and vendor values which enables the checks for
>> the Intel IGD to succeed. The verification that the host device is an
>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>> and function values as well as by checking that gfx_passthru is enabled,
>> the device class is VGA, and the device vendor is Intel.
>> 
>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>> ---
>> Notes that might be helpful to reviewers of patched code in hw/xen:
>> 
>> The new functions and types are based on recommendations from Qemu docs:
>> https://qemu.readthedocs.io/en/latest/devel/qom.html
>> 
>> Notes that might be helpful to reviewers of patched code in hw/i386:
>> 
>> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
>> not affect builds that do not have CONFIG_XEN defined.
>> 
>> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
>> existing function that is only true when Qemu is built with
>> xen-pci-passthrough enabled and the administrator has configured the Xen
>> HVM guest with Qemu's igd-passthru=on option.
>> 
>> v2: Remove From: <email address> tag at top of commit message
>> 
>> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
>> 
>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
>> 
>>     is changed to
>> 
>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>         && (s->hostaddr.function == 0)) {
>> 
>>     I hoped that I could use the test in v2, since it matches the
>>     other tests for the Intel IGD in Qemu and Xen, but those tests
>>     do not work because the necessary data structures are not set with
>>     their values yet. So instead use the test that the administrator
>>     has enabled gfx_passthru and the device address on the host is
>>     02.0. This test does detect the Intel IGD correctly.
>> 
>> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>>     email address to match the address used by the same author in commits
>>     be9c61da and c0e86b76
>>     
>>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
>> 
>> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>>     for the Intel IGD that uses the same criteria as in other places.
>>     This involved moving the call to xen_host_pci_device_get from
>>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>>     Intel IGD in xen_igd_clear_slot:
>>     
>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>         && (s->hostaddr.function == 0)) {
>> 
>>     is changed to
>> 
>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>> 
>>     Added an explanation for the move of xen_host_pci_device_get from
>>     xen_pt_realize to xen_igd_clear_slot to the commit message.
>> 
>>     Rebase.
>> 
>> v6: Fix logging by removing these lines from the move from xen_pt_realize
>>     to xen_igd_clear_slot that was done in v5:
>> 
>>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>>                " to devfn 0x%x\n",
>>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>                s->dev.devfn);
>> 
>>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>>     set yet in xen_igd_clear_slot.
>> 
>> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>>     v7 was intended to be.
>> 
>> v8: Inhibit out of context log message and needless processing by
>>     adding 2 lines at the top of the new xen_igd_clear_slot function:
>> 
>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>         return;
>> 
>>     Rebase. This removed an unnecessary header file from xen_pt.h 
>> 
>>  hw/i386/pc_piix.c    |  3 +++
>>  hw/xen/xen_pt.c      | 49 ++++++++++++++++++++++++++++++++++++--------
>>  hw/xen/xen_pt.h      | 16 +++++++++++++++
>>  hw/xen/xen_pt_stub.c |  4 ++++
>>  4 files changed, 63 insertions(+), 9 deletions(-)
>> 
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index b48047f50c..bc5efa4f59 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -405,6 +405,9 @@ static void pc_xen_hvm_init(MachineState *machine)
>>      }
>>  
>>      pc_xen_hvm_init_pci(machine);
>> +    if (xen_igd_gfx_pt_enabled()) {
>> +        xen_igd_reserve_slot(pcms->bus);
>> +    }
>>      pci_create_simple(pcms->bus, -1, "xen-platform");
>>  }
>>  #endif
> 
> I would even maybe go further and move the whole logic into
> xen_igd_reserve_slot. And I would even just name it
> xen_hvm_init_reserved_slots without worrying about the what
> or why at the pc level.  At this point it will be up to Xen maintainers.
> 
>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> index 0ec7e52183..eff38155ef 100644
>> --- a/hw/xen/xen_pt.c
>> +++ b/hw/xen/xen_pt.c
>> @@ -780,15 +780,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>                 s->dev.devfn);
>>  
>> -    xen_host_pci_device_get(&s->real_device,
>> -                            s->hostaddr.domain, s->hostaddr.bus,
>> -                            s->hostaddr.slot, s->hostaddr.function,
>> -                            errp);
>> -    if (*errp) {
>> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
>> -        return;
>> -    }
>> -
>>      s->is_virtfn = s->real_device.is_virtfn;
>>      if (s->is_virtfn) {
>>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
>> @@ -950,11 +941,50 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>>  }
>>  
>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>> +{
>> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
>> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
>> +}
>> +
>> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
>> +{
>> +    ERRP_GUARD();
>> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
>> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
>> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
>> +
>> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>> +        return;
>> +
>> +    xen_host_pci_device_get(&s->real_device,
>> +                            s->hostaddr.domain, s->hostaddr.bus,
>> +                            s->hostaddr.slot, s->hostaddr.function,
>> +                            errp);
>> +    if (*errp) {
>> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
>> +        return;
>> +    }
>> +
>> +    if (is_igd_vga_passthrough(&s->real_device) &&
>> +        s->real_device.domain == 0 && s->real_device.bus == 0 &&
>> +        s->real_device.dev == 2 && s->real_device.func == 0 &&
>> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
> how about macros for these?
> 
> #define XEN_PCI_IGD_DOMAIN 0
> #define XEN_PCI_IGD_BUS 0
> #define XEN_PCI_IGD_DEV 2
> #define XEN_PCI_IGD_FN 0
> 
>> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> 
> If you are going to do this, you should set it back
> either after pci_qdev_realize or in unrealize,
> for symmetry.

I presume you are talking about the log here. The clearing of
the bit must be done before pci_qdev_realize because the slot
is assigned in pci_qdev_realize. If the bit is not cleared *before*
calling pci_qdev_realize, the igd will not be assigned slot 2.
Doing that would defeat the whole purpose of the patch.

I suppose I could move the log message after pci_qdev_realize
for symmetry, that would not change how the patch works. 

> 
>> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
>> +    }
> 
> 
>> +    xpdc->pci_qdev_realize(qdev, errp);
>> +}
>> +
> 
> 
> 
>>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>>  {
>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>  
>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
>> +    xpdc->pci_qdev_realize = dc->realize;
>> +    dc->realize = xen_igd_clear_slot;
>>      k->realize = xen_pt_realize;
>>      k->exit = xen_pt_unregister_device;
>>      k->config_read = xen_pt_pci_read_config;
>> @@ -977,6 +1007,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>>      .instance_size = sizeof(XenPCIPassthroughState),
>>      .instance_finalize = xen_pci_passthrough_finalize,
>>      .class_init = xen_pci_passthrough_class_init,
>> +    .class_size = sizeof(XenPTDeviceClass),
>>      .instance_init = xen_pci_passthrough_instance_init,
>>      .interfaces = (InterfaceInfo[]) {
>>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
>> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
>> index cf10fc7bbf..8c25932b4b 100644
>> --- a/hw/xen/xen_pt.h
>> +++ b/hw/xen/xen_pt.h
>> @@ -2,6 +2,7 @@
>>  #define XEN_PT_H
>>  
>>  #include "hw/xen/xen_common.h"
>> +#include "hw/pci/pci_bus.h"
>>  #include "xen-host-pci-device.h"
>>  #include "qom/object.h"
>>  
>> @@ -40,7 +41,20 @@ typedef struct XenPTReg XenPTReg;
>>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>>  
>> +#define XEN_PT_DEVICE_CLASS(klass) \
>> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
>> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
>> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
>> +
>> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
>> +
>> +typedef struct XenPTDeviceClass {
>> +    PCIDeviceClass parent_class;
>> +    XenPTQdevRealize pci_qdev_realize;
>> +} XenPTDeviceClass;
>> +
>>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
>> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>>                                             XenHostPCIDevice *dev);
>> @@ -75,6 +89,8 @@ typedef int (*xen_pt_conf_byte_read)
>>  
>>  #define XEN_PCI_INTEL_OPREGION 0xfc
>>  
>> +#define XEN_PCI_IGD_SLOT_MASK 0x4UL /* Intel IGD slot_reserved_mask */
>> +
> 
> I think you want to calculate this based on dev fn:
> 
> #define XEN_PCI_IGD_SLOT_MASK \
> 	(0x1 << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
> 
> 
>>  typedef enum {
>>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
>> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
>> index 2d8cac8d54..5c108446a8 100644
>> --- a/hw/xen/xen_pt_stub.c
>> +++ b/hw/xen/xen_pt_stub.c
>> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>>          error_setg(errp, "Xen PCI passthrough support not built in");
>>      }
>>  }
>> +
>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>> +{
>> +}
>> -- 
>> 2.39.0
> 



From xen-devel-bounces@lists.xenproject.org Sat Jan 14 05:40:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 05:40:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477619.740401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGZGl-00067s-I9; Sat, 14 Jan 2023 05:39:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477619.740401; Sat, 14 Jan 2023 05:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGZGl-00067l-Dl; Sat, 14 Jan 2023 05:39:55 +0000
Received: by outflank-mailman (input) for mailman id 477619;
 Sat, 14 Jan 2023 05:39:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NrQy=5L=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pGZGk-00067f-0Q
 for xen-devel@lists.xenproject.org; Sat, 14 Jan 2023 05:39:54 +0000
Received: from sonic315-8.consmr.mail.gq1.yahoo.com
 (sonic315-8.consmr.mail.gq1.yahoo.com [98.137.65.32])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id db5ad0c2-93cd-11ed-b8d0-410ff93cb8f0;
 Sat, 14 Jan 2023 06:39:49 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic315.consmr.mail.gq1.yahoo.com with HTTP; Sat, 14 Jan 2023 05:39:46 +0000
Received: by hermes--production-bf1-6cb45cc684-d52fx (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID e8ce94261cee09d281f8d4a72d11a3b7; 
 Sat, 14 Jan 2023 05:39:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db5ad0c2-93cd-11ed-b8d0-410ff93cb8f0
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org,
	qemu-stable@nongnu.org
Subject: [PATCH v9] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Date: Sat, 14 Jan 2023 00:39:33 -0500
Message-Id: <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz.ref@aol.com>
Content-Length: 13631

Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
as noted in docs/igd-assign.txt in the Qemu source code.

Currently, when the xl toolstack is used to configure a Xen HVM guest with
Intel IGD passthrough to the guest with the Qemu upstream device model,
a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
a different slot. This problem often prevents the guest from booting.

The only available workaround is not good: Configure Xen HVM guests to use
the old and no longer maintained Qemu traditional device model available
from xenbits.xen.org which does reserve slot 2 for the Intel IGD.

To implement this feature in the Qemu upstream device model for Xen HVM
guests, introduce the following new functions, types, and macros:

* XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
* XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
* typedef XenPTQdevRealize function pointer
* XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
* xen_igd_reserve_slot and xen_igd_clear_slot functions

Michael Tsirkin:
* Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
  XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK

The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
the xl toolstack with the gfx_passthru option enabled, which sets the
igd-passthru=on option to Qemu for the Xen HVM machine type.

The new xen_igd_reserve_slot function also needs to be implemented in
hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
in which case it does nothing.

The new xen_igd_clear_slot function overrides qdev->realize of the parent
PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
created in hw/i386/pc_piix.c for the case when igd-passthru=on.

Move the call to xen_host_pci_device_get, and the associated error
handling, from xen_pt_realize to the new xen_igd_clear_slot function to
initialize the device class and vendor values which enables the checks for
the Intel IGD to succeed. The verification that the host device is an
Intel IGD to be passed through is done by checking the domain, bus, slot,
and function values as well as by checking that gfx_passthru is enabled,
the device class is VGA, and the device vendor is Intel.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Notes that might be helpful to reviewers of patched code in hw/xen:

The new functions and types are based on recommendations from Qemu docs:
https://qemu.readthedocs.io/en/latest/devel/qom.html

Notes that might be helpful to reviewers of patched code in hw/i386:

The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
not affect builds that do not have CONFIG_XEN defined.

xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
existing function that is only true when Qemu is built with
xen-pci-passthrough enabled and the administrator has configured the Xen
HVM guest with Qemu's igd-passthru=on option.

v2: Remove From: <email address> tag at top of commit message

v3: Changed the test for the Intel IGD in xen_igd_clear_slot:

    if (is_igd_vga_passthrough(&s->real_device) &&
        (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {

    is changed to

    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    I hoped that I could use the test in v2, since it matches the
    other tests for the Intel IGD in Qemu and Xen, but those tests
    do not work because the necessary data structures are not set with
    their values yet. So instead use the test that the administrator
    has enabled gfx_passthru and the device address on the host is
    02.0. This test does detect the Intel IGD correctly.

v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
    email address to match the address used by the same author in commits
    be9c61da and c0e86b76
    
    Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc

v5: The patch of xen_pt.c was re-worked to allow a more consistent test
    for the Intel IGD that uses the same criteria as in other places.
    This involved moving the call to xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot and updating the checks for the
    Intel IGD in xen_igd_clear_slot:
    
    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    is changed to

    if (is_igd_vga_passthrough(&s->real_device) &&
        s->real_device.domain == 0 && s->real_device.bus == 0 &&
        s->real_device.dev == 2 && s->real_device.func == 0 &&
        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {

    Added an explanation for the move of xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot to the commit message.

    Rebase.

v6: Fix logging by removing these lines from the move from xen_pt_realize
    to xen_igd_clear_slot that was done in v5:

    XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
               " to devfn 0x%x\n",
               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
               s->dev.devfn);

    This log needs to be in xen_pt_realize because s->dev.devfn is not
    set yet in xen_igd_clear_slot.

v7: The v7 that was posted to the mailing list was incorrect. v8 is what
    v7 was intended to be.

v8: Inhibit out of context log message and needless processing by
    adding 2 lines at the top of the new xen_igd_clear_slot function:

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
        return;

    Rebase. This removed an unnecessary header file from xen_pt.h 

v9: Move check for xen_igd_gfx_pt_enabled() from pc_piix.c to xen_pt.c

    Move #include "hw/pci/pci_bus.h" from xen_pt.h to xen_pt.c

    Introduce macros for the IGD devfn constants and use them to compute
    the value of XEN_PCI_IGD_SLOT_MASK

    Also use the new macros at an appropriate place in xen_pt_realize

    Add Cc: to stable - This has been broken for a long time, ever since
                        support for igd-passthru was added to Qemu 7
                        years ago.

    Mention new macros in the commit message (Michael Tsirkin)

    N.B.: I could not follow the suggestion to move the statement
    pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK; to after
    pci_qdev_realize for symmetry. Doing that results in an error when
    creating the guest:
    
    libxl: error: libxl_qmp.c:1837:qmp_ev_parse_error_messages: Domain 4:PCI: slot 2 function 0 not available for xen-pci-passthrough, reserved
    libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 4:libxl__device_pci_add failed for PCI device 0:0:2.0 (rc -28)
    libxl: error: libxl_create.c:1921:domcreate_attach_devices: Domain 4:unable to add pci devices

 hw/i386/pc_piix.c    |  1 +
 hw/xen/xen_pt.c      | 61 ++++++++++++++++++++++++++++++++++++--------
 hw/xen/xen_pt.h      | 20 +++++++++++++++
 hw/xen/xen_pt_stub.c |  4 +++
 4 files changed, 75 insertions(+), 11 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index b48047f50c..8fc96eb63b 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -405,6 +405,7 @@ static void pc_xen_hvm_init(MachineState *machine)
     }
 
     pc_xen_hvm_init_pci(machine);
+    xen_igd_reserve_slot(pcms->bus);
     pci_create_simple(pcms->bus, -1, "xen-platform");
 }
 #endif
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 0ec7e52183..51f100f64a 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -57,6 +57,7 @@
 #include <sys/ioctl.h>
 
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_bus.h"
 #include "hw/qdev-properties.h"
 #include "hw/qdev-properties-system.h"
 #include "hw/xen/xen.h"
@@ -780,15 +781,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
                s->dev.devfn);
 
-    xen_host_pci_device_get(&s->real_device,
-                            s->hostaddr.domain, s->hostaddr.bus,
-                            s->hostaddr.slot, s->hostaddr.function,
-                            errp);
-    if (*errp) {
-        error_append_hint(errp, "Failed to \"open\" the real pci device");
-        return;
-    }
-
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
@@ -803,8 +795,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
     s->io_listener = xen_pt_io_listener;
 
     /* Setup VGA bios for passthrough GFX */
-    if ((s->real_device.domain == 0) && (s->real_device.bus == 0) &&
-        (s->real_device.dev == 2) && (s->real_device.func == 0)) {
+    if ((s->real_device.domain == XEN_PCI_IGD_DOMAIN) &&
+        (s->real_device.bus == XEN_PCI_IGD_BUS) &&
+        (s->real_device.dev == XEN_PCI_IGD_DEV) &&
+        (s->real_device.func == XEN_PCI_IGD_FN)) {
         if (!is_igd_vga_passthrough(&s->real_device)) {
             error_setg(errp, "Need to enable igd-passthru if you're trying"
                     " to passthrough IGD GFX");
@@ -950,11 +944,55 @@ static void xen_pci_passthrough_instance_init(Object *obj)
     PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
 }
 
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+    if (!xen_igd_gfx_pt_enabled())
+        return;
+
+    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
+    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
+}
+
+static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
+{
+    ERRP_GUARD();
+    PCIDevice *pci_dev = (PCIDevice *)qdev;
+    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
+    PCIBus *pci_bus = pci_get_bus(pci_dev);
+
+    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
+        return;
+
+    xen_host_pci_device_get(&s->real_device,
+                            s->hostaddr.domain, s->hostaddr.bus,
+                            s->hostaddr.slot, s->hostaddr.function,
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
+        return;
+    }
+
+    if (is_igd_vga_passthrough(&s->real_device) &&
+        s->real_device.domain == XEN_PCI_IGD_DOMAIN &&
+        s->real_device.bus == XEN_PCI_IGD_BUS &&
+        s->real_device.dev == XEN_PCI_IGD_DEV &&
+        s->real_device.func == XEN_PCI_IGD_FN &&
+        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
+        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
+        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
+    }
+    xpdc->pci_qdev_realize(qdev, errp);
+}
+
 static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
+    xpdc->pci_qdev_realize = dc->realize;
+    dc->realize = xen_igd_clear_slot;
     k->realize = xen_pt_realize;
     k->exit = xen_pt_unregister_device;
     k->config_read = xen_pt_pci_read_config;
@@ -977,6 +1015,7 @@ static const TypeInfo xen_pci_passthrough_info = {
     .instance_size = sizeof(XenPCIPassthroughState),
     .instance_finalize = xen_pci_passthrough_finalize,
     .class_init = xen_pci_passthrough_class_init,
+    .class_size = sizeof(XenPTDeviceClass),
     .instance_init = xen_pci_passthrough_instance_init,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_CONVENTIONAL_PCI_DEVICE },
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index cf10fc7bbf..e184699740 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -40,7 +40,20 @@ typedef struct XenPTReg XenPTReg;
 #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
 OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
 
+#define XEN_PT_DEVICE_CLASS(klass) \
+    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
+#define XEN_PT_DEVICE_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
+
+typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
+
+typedef struct XenPTDeviceClass {
+    PCIDeviceClass parent_class;
+    XenPTQdevRealize pci_qdev_realize;
+} XenPTDeviceClass;
+
 uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void xen_igd_reserve_slot(PCIBus *pci_bus);
 void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
                                            XenHostPCIDevice *dev);
@@ -75,6 +88,13 @@ typedef int (*xen_pt_conf_byte_read)
 
 #define XEN_PCI_INTEL_OPREGION 0xfc
 
+#define XEN_PCI_IGD_DOMAIN 0
+#define XEN_PCI_IGD_BUS 0
+#define XEN_PCI_IGD_DEV 2
+#define XEN_PCI_IGD_FN 0
+#define XEN_PCI_IGD_SLOT_MASK \
+    (1UL << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
+
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
     XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
index 2d8cac8d54..5c108446a8 100644
--- a/hw/xen/xen_pt_stub.c
+++ b/hw/xen/xen_pt_stub.c
@@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
         error_setg(errp, "Xen PCI passthrough support not built in");
     }
 }
+
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Sat Jan 14 08:03:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 08:03:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477633.740430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGbVl-0004JE-5P; Sat, 14 Jan 2023 08:03:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477633.740430; Sat, 14 Jan 2023 08:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGbVl-0004J7-2a; Sat, 14 Jan 2023 08:03:33 +0000
Received: by outflank-mailman (input) for mailman id 477633;
 Sat, 14 Jan 2023 08:03:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGbVk-0004Ix-4H; Sat, 14 Jan 2023 08:03:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGbVk-0005fM-2k; Sat, 14 Jan 2023 08:03:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGbVj-0000bx-LA; Sat, 14 Jan 2023 08:03:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGbVj-0003Ze-Kh; Sat, 14 Jan 2023 08:03:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=URQ8Ij/K/NZ1t+3ghfVQrav5mJLmfPHDvfMPYkrJajs=; b=Xzg4k2YDeDMuxf4i1wotiiCBFR
	+Ejsj3GkAbVhGO7HS89C1ELyJuYukWf2AckrGSonGAbq1/Wo/8XdNZIxKFEjLxb0vwNoFJXq4U+Qo
	hF4G3J22COmaQnX5AAuYZbGBjzyTyQwVaFCu0YM3HI6IR4QuWRiF7cbqgAWbPSe5pgbE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175833-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175833: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 08:03:31 +0000

flight 175833 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175833/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-armhf                   5 host-build-prep          fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    1 days
Failing since        175748  2023-01-12 20:01:56 Z    1 days    3 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header requires to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 09:54:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 09:54:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477658.740480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGdET-0007A5-QG; Sat, 14 Jan 2023 09:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477658.740480; Sat, 14 Jan 2023 09:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGdET-00079y-NU; Sat, 14 Jan 2023 09:53:49 +0000
Received: by outflank-mailman (input) for mailman id 477658;
 Sat, 14 Jan 2023 09:53:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGdES-00079o-AL; Sat, 14 Jan 2023 09:53:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGdES-0008CS-7g; Sat, 14 Jan 2023 09:53:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGdER-00039S-TT; Sat, 14 Jan 2023 09:53:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGdER-0002g7-T8; Sat, 14 Jan 2023 09:53:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L8ECTvSbVDlt466ZJ0b/JPBOeLPoV3zCtbbaUVf+mlg=; b=jUXAv6YOqiT1LT8N441nNAjzPs
	evLrktpgAFtGDG56JkybYhxIOiw61zcI81jCWDAMc4oXDm6WUojMGUP/Fyz+usank+5hM95zHBzRn
	jes4LIOABK95xzJD2zP70dPvEMhWtCngMTRO4KugCyw8XXzCyHJApCp0qb8p4SBbvla0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175844-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175844: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 09:53:47 +0000

flight 175844 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175844/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-armhf                   5 host-build-prep          fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    1 days
Failing since        175748  2023-01-12 20:01:56 Z    1 days    4 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header requires to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 13:14:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 13:14:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477677.740491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGgMK-0001KY-DT; Sat, 14 Jan 2023 13:14:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477677.740491; Sat, 14 Jan 2023 13:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGgMK-0001KR-Ak; Sat, 14 Jan 2023 13:14:08 +0000
Received: by outflank-mailman (input) for mailman id 477677;
 Sat, 14 Jan 2023 13:14:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGgMJ-0001KH-32; Sat, 14 Jan 2023 13:14:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGgMI-0004Dm-UW; Sat, 14 Jan 2023 13:14:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGgMI-0007of-HH; Sat, 14 Jan 2023 13:14:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGgMI-0004Bc-Gs; Sat, 14 Jan 2023 13:14:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bZSy6gXlffr6nYMEiqkEYFZ1/8VukK7GwWIEPhehU3w=; b=PI+VrqPxPnjjoq+rE4XMEdwb0o
	kO1YBIu3N7uXGB0PX3EheUBjvbDuja5Vz4mu/MywgFQKqonxm+O3LH4UpDD2xQcRv/Dhojms/82U2
	ebmxADFCsYwCE4uR3Ey51w2O/4GuJDt64K55t+uG2RBCJdTKxCa9bfBKELuPeoirkHp0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175835-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175835: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:host-build-prep:fail:regression
    qemu-mainline:test-arm64-arm64-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=886fb67020e32ce6a2cf7049c6f017acf1f0d69a
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 13:14:06 +0000

flight 175835 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175835/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-amd64                   6 xen-build                fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743
 build-armhf                   5 host-build-prep          fail REGR. vs. 175743
 test-arm64-arm64-xl-vhd      12 debian-di-install        fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    1 days
Failing since        175750  2023-01-13 06:38:52 Z    1 days    2 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 14:11:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 14:11:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477687.740507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGhFn-0007lD-Mn; Sat, 14 Jan 2023 14:11:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477687.740507; Sat, 14 Jan 2023 14:11:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGhFn-0007l6-K0; Sat, 14 Jan 2023 14:11:27 +0000
Received: by outflank-mailman (input) for mailman id 477687;
 Sat, 14 Jan 2023 14:11:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGhFl-0007kg-RB; Sat, 14 Jan 2023 14:11:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGhFl-0005uw-O1; Sat, 14 Jan 2023 14:11:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGhFl-0000kR-I5; Sat, 14 Jan 2023 14:11:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGhFl-0003aa-Hk; Sat, 14 Jan 2023 14:11:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OgeZLbUi15jiDVxyJPHxmVie4MTJVOxlDeXeMmjLGSo=; b=FK2m0HnQbInAn1LHJOKx9FSjXC
	vn2+stMc2tK/0K80aBdhZAmz8CDjuj4qLQ6obzY6Cq9TlVTmDnjIDYxpyGCB6PAP2IrPm3Zdg0hC5
	HlP8Buxfq+JclziXCqoHbvYR53ldwOBeipoF87tFCRlpnvL3eQ9vzW1MS0s4/v/VzWEs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175847-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175847: regressions - trouble: blocked/broken/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 14:11:25 +0000

flight 175847 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175847/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175746
 build-amd64                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    1 days
Failing since        175748  2023-01-12 20:01:56 Z    1 days    5 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  fail    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without a 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to the 80-character line
    length limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool, which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But the section is below a page size (i.e. 4KB), so take a
    shortcut and check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
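
    A check of this shape can be expressed with a linker-script ASSERT along
    the following lines (a sketch only; the symbol names and message are
    illustrative, not the actual contents of Xen's arm linker script):

    ```
    /* Fail the link if .text.header spills beyond a single page, so the
     * identity mapping of one page is guaranteed to cover all of it,
     * including any literal pool emitted at the end of the section. */
    ASSERT(_end_text_header - _start <= PAGE_SIZE,
           ".text.header does not fit in a single page")
    ```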

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the surrounding
    lines. This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. on SPR
    and later, model_specific_lbr will always be NULL, so we must make
    changes to avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
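
    The described behaviour (discard writes, read back 0) can be sketched as
    a small standalone model of the WRMSR fallback; the names wrmsr_debugctl,
    struct lbr_info and DEBUGCTLMSR_LBR are hypothetical stand-ins, not the
    actual vmx.c code:

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define DEBUGCTLMSR_LBR (1u << 0)

    /* Hypothetical per-CPU model information: NULL when the CPU has no
     * model-specific LBR (e.g. Sapphire Rapids and later). */
    struct lbr_info { int dummy; /* ... MSR list ... */ };

    /* Sketch of the WRMSR path for MSR_DEBUGCTL: on CPUs lacking
     * model-specific LBR, the Arch LBR spec says the LBR bit ignores
     * writes and reads as 0, so mask it out rather than crashing the
     * domain. */
    static uint64_t wrmsr_debugctl(const struct lbr_info *model_specific_lbr,
                                   uint64_t val)
    {
        if (model_specific_lbr == NULL)
            val &= ~(uint64_t)DEBUGCTLMSR_LBR;  /* discard the LBR bit */
        return val;  /* value actually latched for the vCPU */
    }

    int main(void)
    {
        /* No model-specific LBR: the LBR bit is silently dropped. */
        assert(wrmsr_debugctl(NULL, DEBUGCTLMSR_LBR | 0x40) == 0x40);

        /* With LBR information present, the bit is preserved. */
        struct lbr_info info = { 0 };
        assert(wrmsr_debugctl(&info, DEBUGCTLMSR_LBR) == DEBUGCTLMSR_LBR);

        return 0;
    }
    ```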

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as
    it is in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removes the final instances of obfuscation via the
    DO() macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 14:30:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 14:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477696.740519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGhYI-0001r1-Bc; Sat, 14 Jan 2023 14:30:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477696.740519; Sat, 14 Jan 2023 14:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGhYI-0001qu-8m; Sat, 14 Jan 2023 14:30:34 +0000
Received: by outflank-mailman (input) for mailman id 477696;
 Sat, 14 Jan 2023 14:30:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGhYG-0001qk-Up; Sat, 14 Jan 2023 14:30:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGhYG-0006SO-QG; Sat, 14 Jan 2023 14:30:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGhYG-0001FA-D2; Sat, 14 Jan 2023 14:30:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGhYG-0001hS-CU; Sat, 14 Jan 2023 14:30:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LEQTQLeE+8LzeXDHuy68Ny/ANwPPLnE/yj5AQSxgvDI=; b=b2nF0m+oT8tv9UbVab52augrcD
	jZliL5fgk2DupjQEV8jcMAHimQHjfSjCs+hNopWFaYI/5BYeCjH1NbTqnS+0W8/sLwyQzemLXNqwR
	OUtBnksoSUBsLNPisEJvfwe4VSK0nvBU1eNctMKOC7QzP4etU2KVZ1OkNQMy8At0I65Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175834-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175834: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-armhf:host-build-prep:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 14:30:32 +0000

flight 175834 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175834/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-armhf                   5 host-build-prep          fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    2 days
Failing since        175739  2023-01-12 09:38:44 Z    2 days    3 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 17:14:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 17:14:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477706.740530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGk72-0000v5-HX; Sat, 14 Jan 2023 17:14:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477706.740530; Sat, 14 Jan 2023 17:14:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGk72-0000uy-EQ; Sat, 14 Jan 2023 17:14:36 +0000
Received: by outflank-mailman (input) for mailman id 477706;
 Sat, 14 Jan 2023 17:14:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yAkF=5L=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pGk71-0000us-1n
 for xen-devel@lists.xenproject.org; Sat, 14 Jan 2023 17:14:35 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on2089.outbound.protection.outlook.com [40.107.212.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e8b39106-942e-11ed-91b6-6bf2151ebd3b;
 Sat, 14 Jan 2023 18:14:32 +0100 (CET)
Received: from DS7PR03CA0148.namprd03.prod.outlook.com (2603:10b6:5:3b4::33)
 by IA1PR12MB7637.namprd12.prod.outlook.com (2603:10b6:208:427::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Sat, 14 Jan
 2023 17:14:29 +0000
Received: from DS1PEPF0000E630.namprd02.prod.outlook.com
 (2603:10b6:5:3b4:cafe::fa) by DS7PR03CA0148.outlook.office365.com
 (2603:10b6:5:3b4::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.16 via Frontend
 Transport; Sat, 14 Jan 2023 17:14:29 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DS1PEPF0000E630.mail.protection.outlook.com (10.167.17.134) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Sat, 14 Jan 2023 17:14:28 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Sat, 14 Jan
 2023 11:14:28 -0600
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Sat, 14 Jan 2023 11:14:27 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8b39106-942e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U09ZFGpT4xGD2M5j0l7EJan/vdLmpxFpItQ2edpQR01RM3YNubvOOYILitpJb6FimQrzdZqbxJRs/Qj93PfdpEf5BWCb+DpHYehhtIQ5ghHEJmcejwP9yBCUpDMaBDHc/1rV1OuUzQl9ha2MVw49cxezso89h3/tAf8zF55/wDYKgRceaBHAXY+vi6Ld0znMBlCwu06GIhL13UNq3WV6sxHJshuO7WkxXYqsaWdZgIHb4P0G+3HRtsryfZs0hR+76aiJZ3vNZhxWPQlD1QzfRJVCcNOFFFmZlffMjSbph17xEWlc1pjOrTPWIHtqzuly21YP9j07UYWGVPSajbdJqg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wz3KoBol57cyMe/+ZKbAqUlUeRlvIvDF9TO+e1H4Wlw=;
 b=WuXkhSXfE9zJVSUSkGY1rePj3xiWCPivz3wWzfAG3W9rHhdIeqG2u2yPj4SGlzC2e5bWpTgTc98OLwu0PSphd8kIcKE327nIBDbShgbB9qb9VFdDn5/Mfvgc8LP+OX+TRJgz/3cdTjRDPON/tABLvKTJhuhTMlykwupTtzDEGPMVYdX3HnoylDpCHhOUals74b/fRxaHYARs60FlIpajTuvI2DImw2y7V2CDDOTysSVFRmzPBBma/rqBRz35e0Ke/ANkMO8ChyDQ0Nd3d2My9ibWzUEcESP4tb+cbb8SEbTAnr5J9GJ2oBC6R4AP4nhsavhuwZ4o1FW7qhNPu0BK6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wz3KoBol57cyMe/+ZKbAqUlUeRlvIvDF9TO+e1H4Wlw=;
 b=Alt5yCAEJLDdoFxwnMvtEjPupeO0uiQjcrBgMze/AFr23ivOHO/PdZPY49aI6hTDJRV2LMYsoxEseols8x9msYWlCqV1CdevV8UiND5MIG4Q2pnWn5G4qlUZsvtwg73K7OHBaiJ4Xptw/UfAuH2jR0xv2DdM0R9FVvioIIyOVjk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <5a4eacef-9c66-fa75-2220-504a06c9306a@amd.com>
Date: Sat, 14 Jan 2023 12:14:26 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [RFC PATCH 14/21] xen/arm: vIOMMU: IOMMU device tree node for
 dom0
Content-Language: en-US
To: Rahul Singh <rahul.singh@arm.com>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1669888522.git.rahul.singh@arm.com>
 <544b8450c977f6d005f1d9adee8e0ff33b9bd3ec.1669888522.git.rahul.singh@arm.com>
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <544b8450c977f6d005f1d9adee8e0ff33b9bd3ec.1669888522.git.rahul.singh@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E630:EE_|IA1PR12MB7637:EE_
X-MS-Office365-Filtering-Correlation-Id: 1edc2ad8-aeac-40d8-c051-08daf652cb98
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jan 2023 17:14:28.9102
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1edc2ad8-aeac-40d8-c051-08daf652cb98
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E630.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB7637

On 12/1/22 11:02, Rahul Singh wrote:
> XEN will create an IOMMU device tree node in the device tree
> to enable the dom0 to discover the virtual SMMUv3 during dom0 boot.
> IOMMU device tree node will only be created when cmdline option viommu
> is enabled.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/arch/arm/domain_build.c       | 94 +++++++++++++++++++++++++++++++
>  xen/arch/arm/include/asm/viommu.h |  1 +
>  2 files changed, 95 insertions(+)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index a5295e8c3e..b82121beb5 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2233,6 +2233,95 @@ int __init make_chosen_node(const struct kernel_info *kinfo)
>      return res;
>  }
> 
> +#ifdef CONFIG_VIRTUAL_IOMMU
> +static int make_hwdom_viommu_node(const struct kernel_info *kinfo)

This should have the __init attribute


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 17:58:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 17:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477712.740542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGknp-0005BE-Sk; Sat, 14 Jan 2023 17:58:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477712.740542; Sat, 14 Jan 2023 17:58:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGknp-0005B7-Ll; Sat, 14 Jan 2023 17:58:49 +0000
Received: by outflank-mailman (input) for mailman id 477712;
 Sat, 14 Jan 2023 17:58:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGknp-0005Ax-7t; Sat, 14 Jan 2023 17:58:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGknp-0003NW-3q; Sat, 14 Jan 2023 17:58:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGkno-0005la-Ql; Sat, 14 Jan 2023 17:58:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGkno-0006TT-QM; Sat, 14 Jan 2023 17:58:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Fv+0ydfWjUbExyGk0dIAeTSM/MbjmBwtI7R/T4nWLu4=; b=lNiU12XhV9NdVUofurULxuooA3
	OAvKJ97qzBQ3BQ/CUYEjTqL8emyC/zuY16sK0bDjEwwsDBSJ3/ckpudWFDMy5+4QgpRR4ehxl2caM
	zZOJPlBtQ8iJH0++mwLTrVvjEymyW9yxt7Hubz+nwXvKqbD8C//3aa6RiDFAijQ/n9nM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175848-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175848: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-amd64:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:<job status>:broken:regression
    qemu-mainline:build-armhf:host-install(4):broken:regression
    qemu-mainline:build-amd64:host-build-prep:fail:regression
    qemu-mainline:build-amd64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-armhf-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64-xsm:host-build-prep:fail:regression
    qemu-mainline:build-arm64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64:host-build-prep:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=886fb67020e32ce6a2cf7049c6f017acf1f0d69a
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 17:58:48 +0000

flight 175848 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175848/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175743
 build-amd64                   5 host-build-prep          fail REGR. vs. 175743
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175743
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-arm64                   5 host-build-prep          fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    2 days
Failing since        175750  2023-01-13 06:38:52 Z    1 days    3 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               fail    
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 18:50:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 18:50:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477721.740552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGlbR-0002mD-Ox; Sat, 14 Jan 2023 18:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477721.740552; Sat, 14 Jan 2023 18:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGlbR-0002lc-K7; Sat, 14 Jan 2023 18:50:05 +0000
Received: by outflank-mailman (input) for mailman id 477721;
 Sat, 14 Jan 2023 18:50:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGlbQ-0002dB-N9; Sat, 14 Jan 2023 18:50:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGlbQ-0004bJ-JA; Sat, 14 Jan 2023 18:50:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGlbQ-0006ta-6W; Sat, 14 Jan 2023 18:50:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGlbQ-0002FS-66; Sat, 14 Jan 2023 18:50:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UXR5UhliE04Wr4m/z+zgPNtmcR7JlaE2fwRVdczJbMI=; b=gtjOHbHMastCSTiZQ1/UfZv6Dz
	tAWgtpEDMqgXJrXUQDMeF49sk5u5xSr1scaYm9knBw1SRpsunc21GCkU7EVjc30PS7/KmDcSm+ycN
	N57BltBLQqSskeBJX68iTiKtqS1cpzEL/Qf3CpF3zqQOelbb3Y3p3YTMas5izdUpb7yY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175850-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175850: regressions - trouble: blocked/broken/fail
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-i386-pvops:host-build-prep:fail:regression
    xen-unstable:build-armhf-pvops:host-build-prep:fail:regression
    xen-unstable:build-amd64-prev:host-build-prep:fail:regression
    xen-unstable:build-amd64-xtf:host-build-prep:fail:regression
    xen-unstable:build-amd64-pvops:host-build-prep:fail:regression
    xen-unstable:build-i386-xsm:host-build-prep:fail:regression
    xen-unstable:build-amd64-xsm:host-build-prep:fail:regression
    xen-unstable:build-amd64:host-build-prep:fail:regression
    xen-unstable:build-arm64-pvops:host-build-prep:fail:regression
    xen-unstable:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable:build-arm64:host-build-prep:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-i386:host-build-prep:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 18:50:04 +0000

flight 175850 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175850/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175734
 build-i386-pvops              5 host-build-prep          fail REGR. vs. 175734
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-amd64-prev              5 host-build-prep          fail REGR. vs. 175734
 build-amd64-xtf               5 host-build-prep          fail REGR. vs. 175734
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-i386-xsm                5 host-build-prep          fail REGR. vs. 175734
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-amd64                   5 host-build-prep          fail REGR. vs. 175734
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-arm64                   5 host-build-prep          fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-i386                    5 host-build-prep          fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    2 days
Failing since        175739  2023-01-12 09:38:44 Z    2 days    4 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              fail    
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to go to C environment
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 20:14:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 20:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477729.740563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGmvE-0002Zi-Vm; Sat, 14 Jan 2023 20:14:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477729.740563; Sat, 14 Jan 2023 20:14:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGmvE-0002Zb-Rd; Sat, 14 Jan 2023 20:14:36 +0000
Received: by outflank-mailman (input) for mailman id 477729;
 Sat, 14 Jan 2023 20:14:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGmvD-0002ZR-PE; Sat, 14 Jan 2023 20:14:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGmvD-0006Qe-MC; Sat, 14 Jan 2023 20:14:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGmvD-0000II-AS; Sat, 14 Jan 2023 20:14:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGmvD-0001Yo-A0; Sat, 14 Jan 2023 20:14:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KqTMZiLQ46ogPWO4vO5TSyFHHksgvI2952+0IOVeBds=; b=uHQ+rYwpF69k87x2SajyILtz9Q
	Am7cSgspXjgzASFD6VBASpuxmy5GhO+cvG23NYzTsFgrR2zRv59IvNb0JRMsK+pgsdvogt60uSEd4
	lZiALQcZ8WCjTKxg0JVTD66DZWJyewyEyPPWE18UznqWxUl0gVruAtKK4Wi9OqTlnzKY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175836-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175836: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-armhf:host-build-prep:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=97ec4d559d939743e8af83628be5af8da610d9dc
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 20:14:35 +0000

flight 175836 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175836/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 build-amd64                   6 xen-build                fail REGR. vs. 173462
 build-amd64-xsm               6 xen-build                fail REGR. vs. 173462
 build-i386-xsm                6 xen-build                fail REGR. vs. 173462
 build-i386                    6 xen-build                fail REGR. vs. 173462
 build-armhf                   5 host-build-prep          fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd11-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd12-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                97ec4d559d939743e8af83628be5af8da610d9dc
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z   99 days
Failing since        173470  2022-10-08 06:21:34 Z   98 days  205 attempts
Testing same since   175836  2023-01-14 07:10:03 Z    0 days    1 attempts

------------------------------------------------------------
3360 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-freebsd11-amd64                             blocked 
 test-amd64-amd64-freebsd12-amd64                             blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 blocked 
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-amd64-xl-vhd                                      blocked 
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

(No revision log; it would be 513749 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 22:02:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 22:02:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477737.740574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGob3-0004ni-NO; Sat, 14 Jan 2023 22:01:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477737.740574; Sat, 14 Jan 2023 22:01:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGob3-0004nb-Jg; Sat, 14 Jan 2023 22:01:53 +0000
Received: by outflank-mailman (input) for mailman id 477737;
 Sat, 14 Jan 2023 22:01:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGob2-0004nR-FU; Sat, 14 Jan 2023 22:01:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGob2-0000q6-BW; Sat, 14 Jan 2023 22:01:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGob2-0002bt-0M; Sat, 14 Jan 2023 22:01:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGob1-0000Y4-W5; Sat, 14 Jan 2023 22:01:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b/0ZTLmuxX+IeIljr75bW/1ddt3PimFwFkvltoEhWc0=; b=oneJM0SD7DS6NaU15JwQ0I57a9
	drcdzMq3VG5rB7SfBvEVmZpZMRIRyOsWsNFvUQD4dqZoOLWtxycZka8CWBsOa0+3KAaSboBu0Navt
	E/7T9cxCShhdZ4CtRwlrd/1bCwwUTiybHP6UmIlIZxMf93fAbFa5z6pU6hUGdsARsXp8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175851-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175851: regressions - trouble: blocked/broken
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-build-prep:fail:regression
    xen-unstable-smoke:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 22:01:51 +0000

flight 175851 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175851/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   5 host-build-prep          fail REGR. vs. 175746
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    2 days
Failing since        175748  2023-01-12 20:01:56 Z    2 days    6 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jan 14 23:16:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Jan 2023 23:16:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477746.740584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGpkc-0003WH-7p; Sat, 14 Jan 2023 23:15:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477746.740584; Sat, 14 Jan 2023 23:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGpkc-0003WA-57; Sat, 14 Jan 2023 23:15:50 +0000
Received: by outflank-mailman (input) for mailman id 477746;
 Sat, 14 Jan 2023 23:15:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGpkb-0003W0-E2; Sat, 14 Jan 2023 23:15:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGpkb-0002RZ-8p; Sat, 14 Jan 2023 23:15:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGpka-0004Ds-Qo; Sat, 14 Jan 2023 23:15:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGpka-0000Sh-QI; Sat, 14 Jan 2023 23:15:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R2/qtTlIveayBGERlxmsvhzfi1GWtuxCyuexRMPFCb4=; b=dTRNo976X1perFtqj4Fh7Tw7Nf
	+r7xGMLUUlQ+o8YWVkLd68MV1FU+rsIQrN2z0DNIz8YQcXLtjvIqTyl/+TDDlprDr8k6YDwCHBumC
	9p4YFq5NL2RCrzwQWUc0ic/IC9c4UNTtuUEb2mOFGXavA4D325VLS2LPSNUiTOO/XvTM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175853-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175853: regressions - trouble: blocked/broken
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-i386-pvops:host-build-prep:fail:regression
    xen-unstable:build-armhf-pvops:host-build-prep:fail:regression
    xen-unstable:build-amd64-prev:host-build-prep:fail:regression
    xen-unstable:build-amd64-xtf:host-build-prep:fail:regression
    xen-unstable:build-amd64-pvops:host-build-prep:fail:regression
    xen-unstable:build-i386-xsm:host-build-prep:fail:regression
    xen-unstable:build-amd64-xsm:host-build-prep:fail:regression
    xen-unstable:build-i386-prev:host-build-prep:fail:regression
    xen-unstable:build-amd64:host-build-prep:fail:regression
    xen-unstable:build-arm64-pvops:host-build-prep:fail:regression
    xen-unstable:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable:build-arm64:host-build-prep:fail:regression
    xen-unstable:build-i386:host-build-prep:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Jan 2023 23:15:48 +0000

flight 175853 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175853/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175734
 build-i386-pvops              5 host-build-prep          fail REGR. vs. 175734
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-amd64-prev              5 host-build-prep          fail REGR. vs. 175734
 build-amd64-xtf               5 host-build-prep          fail REGR. vs. 175734
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-i386-xsm                5 host-build-prep          fail REGR. vs. 175734
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-i386-prev               5 host-build-prep          fail REGR. vs. 175734
 build-amd64                   5 host-build-prep          fail REGR. vs. 175734
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-arm64                   5 host-build-prep          fail REGR. vs. 175734
 build-i386                    5 host-build-prep          fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    2 days
Failing since        175739  2023-01-12 09:38:44 Z    2 days    5 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 03:19:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 03:19:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477756.740602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGtYF-00018i-Hp; Sun, 15 Jan 2023 03:19:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477756.740602; Sun, 15 Jan 2023 03:19:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGtYF-00018I-Bk; Sun, 15 Jan 2023 03:19:19 +0000
Received: by outflank-mailman (input) for mailman id 477756;
 Sun, 15 Jan 2023 03:19:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGtYD-000188-G4; Sun, 15 Jan 2023 03:19:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGtYD-0007cQ-EG; Sun, 15 Jan 2023 03:19:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGtYD-00015e-7X; Sun, 15 Jan 2023 03:19:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGtYD-0006Ki-70; Sun, 15 Jan 2023 03:19:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JOMp3kwGcnC4xjEptDmQZINnYZUc4lCsnfNYz1P9YQY=; b=B2UpVFML5bEcX7zYaPJJfvDpmr
	r+vhDTQboHjVvAY7RM6hG2GiogeMcwea+uOieSW0h1fDPfm8FElAwOQSAH7POF9M0kMniXRd7AjE4
	aZOKBopzFtm/m3VMRdEWMXbfRNG/n5lDUzPJG9sHmblV1IC4NA0jKjE/8pN9hAH5/fjM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175854-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175854: regressions - trouble: blocked/broken
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-build-prep:fail:regression
    xen-unstable-smoke:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 03:19:17 +0000

flight 175854 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175854/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   5 host-build-prep          fail REGR. vs. 175746
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    2 days
Failing since        175748  2023-01-12 20:01:56 Z    2 days    7 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB), so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 06:17:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 06:17:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477764.740612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGwK5-0001z6-1Y; Sun, 15 Jan 2023 06:16:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477764.740612; Sun, 15 Jan 2023 06:16:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGwK4-0001yz-VA; Sun, 15 Jan 2023 06:16:52 +0000
Received: by outflank-mailman (input) for mailman id 477764;
 Sun, 15 Jan 2023 06:16:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGwK2-0001yp-UM; Sun, 15 Jan 2023 06:16:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGwK2-0003z0-QM; Sun, 15 Jan 2023 06:16:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGwK2-0004uk-8C; Sun, 15 Jan 2023 06:16:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGwK2-0005O5-7g; Sun, 15 Jan 2023 06:16:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iT2B943mWtaY4DcnoUFt4deaanw3Ra3skYqpzmvbZak=; b=yU1VYFdcqyQ8ttcckbiH7H0q5F
	awoJg6pULy0/B7wEWs5M+/HCv3naU2Zp27A9DWrV5F7nKeOMr66Agwt5rgMVzVUJrO6ft8rhc9cjq
	JgrhaJs8IalPTZz5KXusxvZN7whj7jkZQvZ2CRn/i6GvqkpT9KkQwuIsho5a+yZNjWlY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175852-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175852: regressions - trouble: blocked/broken
X-Osstest-Failures:
    qemu-mainline:build-amd64:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-amd64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:build-i386-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf:host-install(4):broken:regression
    qemu-mainline:build-amd64:host-build-prep:fail:regression
    qemu-mainline:build-amd64-xsm:host-build-prep:fail:regression
    qemu-mainline:build-amd64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-armhf-pvops:host-build-prep:fail:regression
    qemu-mainline:build-i386-xsm:host-build-prep:fail:regression
    qemu-mainline:build-i386:host-build-prep:fail:regression
    qemu-mainline:build-i386-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64-xsm:host-build-prep:fail:regression
    qemu-mainline:build-arm64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64:host-build-prep:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=886fb67020e32ce6a2cf7049c6f017acf1f0d69a
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 06:16:50 +0000

flight 175852 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175852/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175743
 build-amd64                   5 host-build-prep          fail REGR. vs. 175743
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175743
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-i386-xsm                5 host-build-prep          fail REGR. vs. 175743
 build-i386                    5 host-build-prep          fail REGR. vs. 175743
 build-i386-pvops              5 host-build-prep          fail REGR. vs. 175743
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175743
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-arm64                   5 host-build-prep          fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    2 days
Failing since        175750  2023-01-13 06:38:52 Z    1 day     4 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 07:27:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 07:27:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477773.740624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGxPm-0000gW-8C; Sun, 15 Jan 2023 07:26:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477773.740624; Sun, 15 Jan 2023 07:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGxPm-0000gO-4W; Sun, 15 Jan 2023 07:26:50 +0000
Received: by outflank-mailman (input) for mailman id 477773;
 Sun, 15 Jan 2023 07:26:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxPk-0000gE-QU; Sun, 15 Jan 2023 07:26:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxPk-0005Pr-MR; Sun, 15 Jan 2023 07:26:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxPk-0006UK-3Y; Sun, 15 Jan 2023 07:26:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxPk-00024s-2y; Sun, 15 Jan 2023 07:26:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FLqsdjP7HzkGumFPlayCaXH0F3uh14UAkFIOvojCeKc=; b=Tfm+RTZrFqkhvpI4rH180LCbVu
	M1i3sTy2plGMnBFMqOMO1O7SOMGWs7CQSGRZ6evc1CD312UVO4rVzqh6dJgzZ1/P9GqVaqd2SPV+t
	wGph8do62ncc7TEyffaU9igqfge1UoeZuh1hfCreFCd/cKLA5fs9jQu7hl+yZUuQKIxk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175855-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175855: regressions - trouble: blocked/broken
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-amd64-xtf:host-build-prep:fail:regression
    xen-unstable:build-i386-pvops:host-build-prep:fail:regression
    xen-unstable:build-armhf-pvops:host-build-prep:fail:regression
    xen-unstable:build-amd64-prev:host-build-prep:fail:regression
    xen-unstable:build-amd64-pvops:host-build-prep:fail:regression
    xen-unstable:build-i386-xsm:host-build-prep:fail:regression
    xen-unstable:build-amd64-xsm:host-build-prep:fail:regression
    xen-unstable:build-arm64-pvops:host-build-prep:fail:regression
    xen-unstable:build-i386-prev:host-build-prep:fail:regression
    xen-unstable:build-amd64:host-build-prep:fail:regression
    xen-unstable:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable:build-arm64:host-build-prep:fail:regression
    xen-unstable:build-i386:host-build-prep:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 07:26:48 +0000

flight 175855 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175855/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175734
 build-amd64-xtf               5 host-build-prep          fail REGR. vs. 175734
 build-i386-pvops              5 host-build-prep          fail REGR. vs. 175734
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-amd64-prev              5 host-build-prep          fail REGR. vs. 175734
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-i386-xsm                5 host-build-prep          fail REGR. vs. 175734
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-i386-prev               5 host-build-prep          fail REGR. vs. 175734
 build-amd64                   5 host-build-prep          fail REGR. vs. 175734
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-arm64                   5 host-build-prep          fail REGR. vs. 175734
 build-i386                    5 host-build-prep          fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    3 days
Failing since        175739  2023-01-12 09:38:44 Z    2 days    6 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              broken  
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-job build-i386-xsm broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to go to C environment
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 07:39:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 07:39:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477781.740635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGxbr-0002GW-Eg; Sun, 15 Jan 2023 07:39:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477781.740635; Sun, 15 Jan 2023 07:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGxbr-0002GP-BY; Sun, 15 Jan 2023 07:39:19 +0000
Received: by outflank-mailman (input) for mailman id 477781;
 Sun, 15 Jan 2023 07:39:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxbp-0002GF-Vd; Sun, 15 Jan 2023 07:39:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxbp-0005cF-TA; Sun, 15 Jan 2023 07:39:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxbp-0006mU-El; Sun, 15 Jan 2023 07:39:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxbp-0004t2-EJ; Sun, 15 Jan 2023 07:39:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YGujkrJqBrmmO6xrCq/l/gyTe7xko5Vu8++ak/ndkjM=; b=C1r88OsX8WM0sOLtlWtSfxyW0Z
	qCaHFqfIxvQ0CopnQdGzD8v0cLwvAYX9yc1bY/Pft2jzoTd3Ii0Yoi3G1N93eYKqDKqfcVSvj8KdW
	bWa9MrZ5YaiWn/F5YyLMQpcJQiiPilKsai9YaiKnVrJGztKNZiCvSjwmqMANpMNWzZkE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175860-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175860: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ba08910df1071bf5ade987529d9becb38d14a14a
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 07:39:17 +0000

flight 175860 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175860/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ba08910df1071bf5ade987529d9becb38d14a14a
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Testing same since   175860  2023-01-15 07:11:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 08:03:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 08:03:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477795.740655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGxyy-000640-RL; Sun, 15 Jan 2023 08:03:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477795.740655; Sun, 15 Jan 2023 08:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGxyy-00063t-OC; Sun, 15 Jan 2023 08:03:12 +0000
Received: by outflank-mailman (input) for mailman id 477795;
 Sun, 15 Jan 2023 08:03:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxyx-00063j-Q3; Sun, 15 Jan 2023 08:03:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxyx-0006jc-PC; Sun, 15 Jan 2023 08:03:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxyx-0007Gq-FW; Sun, 15 Jan 2023 08:03:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGxyx-0007f2-F5; Sun, 15 Jan 2023 08:03:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z+eW3aMeQLiIZtqkBm9r7eZHr8H2glchYfgJ+I4r1eY=; b=SaDp3vEXLloK6HKzFEdrRHWnjn
	xI5YPuqorLqYvW2JgOzZQqtBRBpnwEeVGUPi2IQ/Y0TBy/9atoARTd1sJoOJ5ofcZvpm30+729pZB
	UofCOkOnLH6TuF3ZnyD0HBYV7Tk1UQAc6yKDflslhb652HHj0j6oiSb3lNNTgWnmR4lk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175862-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175862: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ba08910df1071bf5ade987529d9becb38d14a14a
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 08:03:11 +0000

flight 175862 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175862/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ba08910df1071bf5ade987529d9becb38d14a14a
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Testing same since   175860  2023-01-15 07:11:07 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 08:37:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 08:37:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477808.740681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGyVc-00014Y-Hv; Sun, 15 Jan 2023 08:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477808.740681; Sun, 15 Jan 2023 08:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGyVc-00014R-F0; Sun, 15 Jan 2023 08:36:56 +0000
Received: by outflank-mailman (input) for mailman id 477808;
 Sun, 15 Jan 2023 08:36:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGyVb-00014H-CO; Sun, 15 Jan 2023 08:36:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGyVb-0007SP-BX; Sun, 15 Jan 2023 08:36:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGyVb-00080r-4x; Sun, 15 Jan 2023 08:36:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGyVb-00042w-4R; Sun, 15 Jan 2023 08:36:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=q/y4o9XuxDPlq+qxz0Wx4KcNvQz4LJjuuobeGHueClg=; b=OG547N4iAGomMf9mwcuaGY8yBK
	KNVZ2RgiocbEPHgEPkJhMVgugeI+uAt1kHrdqje/vOV6bPeYXdmHDZkTiBJDNze+YqMh1hgG4aHZM
	emQnc9+dMzLapxwsly6VEl/vlg/mnPbzL53dERPjAN/WvMEpuBXQpZyMRFNo/SfWZ2U4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175865-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175865: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ba08910df1071bf5ade987529d9becb38d14a14a
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 08:36:55 +0000

flight 175865 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175865/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ba08910df1071bf5ade987529d9becb38d14a14a
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Testing same since   175860  2023-01-15 07:11:07 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 09:30:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 09:30:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477816.740692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGzLN-00075Z-F6; Sun, 15 Jan 2023 09:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477816.740692; Sun, 15 Jan 2023 09:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGzLN-00075S-BW; Sun, 15 Jan 2023 09:30:25 +0000
Received: by outflank-mailman (input) for mailman id 477816;
 Sun, 15 Jan 2023 09:30:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGzLM-00075I-Fj; Sun, 15 Jan 2023 09:30:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGzLM-0000DF-8r; Sun, 15 Jan 2023 09:30:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGzLL-0000od-Tw; Sun, 15 Jan 2023 09:30:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGzLL-0008JB-TQ; Sun, 15 Jan 2023 09:30:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dEYspIBjmrW5rRQV/60UUAkBTDANfvTOCM4osArGWt8=; b=3pJa1NNCnnB0IxzppFktizRa4E
	His53q/Uo5SlsIzITohCfbTVuUFlvI1bWG7VZb4vQrsvJiqaDW47H3WUlzoU8ZAqpcyKFWEUdhIzx
	5MUV/GFy52lFxywb9fXsgyM3Cwxo6JZEHJmlK6p18Ouglh23MGowLhE/CThcOXdi6dAY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175868-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175868: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ba08910df1071bf5ade987529d9becb38d14a14a
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 09:30:23 +0000

flight 175868 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175868/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ba08910df1071bf5ade987529d9becb38d14a14a
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Testing same since   175860  2023-01-15 07:11:07 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 10:10:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 10:10:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477824.740702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGzxU-0002Jo-Dz; Sun, 15 Jan 2023 10:09:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477824.740702; Sun, 15 Jan 2023 10:09:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pGzxU-0002Jh-BO; Sun, 15 Jan 2023 10:09:48 +0000
Received: by outflank-mailman (input) for mailman id 477824;
 Sun, 15 Jan 2023 10:09:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGzxT-0002JX-D3; Sun, 15 Jan 2023 10:09:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGzxT-00014V-9e; Sun, 15 Jan 2023 10:09:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pGzxS-0001i7-Qa; Sun, 15 Jan 2023 10:09:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pGzxS-00023a-QA; Sun, 15 Jan 2023 10:09:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BXEYQTqAa/gnE7P/KeYQixf6XxAjKNoA83jCdHfvVI4=; b=AqTzOWUu00reuT1RNTFO/UPju9
	QqZLqgQsz3q/oLfQ8eTqGQhGyvTxPfDSVmo+hX8tLhH8+ixvPkGjdh/P/9Qav1jGSXZdwm8RqiCnc
	KfyR22M1HhHULwDMbwF4EIUlfzCfPXqi0wxSzQkCrGhWX6uD5Jrut/2at8VJDssQjsOs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175869-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175869: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ba08910df1071bf5ade987529d9becb38d14a14a
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 10:09:46 +0000

flight 175869 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175869/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ba08910df1071bf5ade987529d9becb38d14a14a
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Testing same since   175860  2023-01-15 07:11:07 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 10:32:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 10:32:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477833.740714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH0Jo-0005kh-E3; Sun, 15 Jan 2023 10:32:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477833.740714; Sun, 15 Jan 2023 10:32:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH0Jo-0005ka-BC; Sun, 15 Jan 2023 10:32:52 +0000
Received: by outflank-mailman (input) for mailman id 477833;
 Sun, 15 Jan 2023 10:32:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH0Jm-0005kO-Df; Sun, 15 Jan 2023 10:32:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH0Jm-0001bP-AL; Sun, 15 Jan 2023 10:32:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH0Jl-0002Pc-Ui; Sun, 15 Jan 2023 10:32:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH0Jl-0004d1-UG; Sun, 15 Jan 2023 10:32:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9LLdKjj0nLZf3izhEx4/hnp2XUM6fAHT0ZhdB2a1m6w=; b=K6oT/ZU3xLe/LPXz6LYEhZUWor
	wii+JXPHiHBCxFwCDD5IM3UeoRDuNqTvtUtXk9XpXyuaOdO4nBTvUU2fBcuZFZrVkU1XpkB4O7Yfw
	998Q5nZuXjAsSh8IaIKKvROIcu1K5aRx5LdRO0qhXKsBxu/i2nPH9SNTp6+GkC09QVV0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175870-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175870: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ba08910df1071bf5ade987529d9becb38d14a14a
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 10:32:49 +0000

flight 175870 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175870/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ba08910df1071bf5ade987529d9becb38d14a14a
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Testing same since   175860  2023-01-15 07:11:07 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:00:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:00:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477841.740725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH0kg-0000kJ-LB; Sun, 15 Jan 2023 11:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477841.740725; Sun, 15 Jan 2023 11:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH0kg-0000kC-IR; Sun, 15 Jan 2023 11:00:38 +0000
Received: by outflank-mailman (input) for mailman id 477841;
 Sun, 15 Jan 2023 11:00:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH0ke-0000k2-W7; Sun, 15 Jan 2023 11:00:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH0ke-0002GI-RU; Sun, 15 Jan 2023 11:00:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH0ke-00035v-Ft; Sun, 15 Jan 2023 11:00:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH0ke-000670-FT; Sun, 15 Jan 2023 11:00:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e1kj1Mfmkcfi4nnYoDTaDsDqJyAhK0VU32nvbh3EeFw=; b=F3h4dPMVrHe7WnFiPayFud38zI
	y7yNvxAA7OJHrgA2z6Xwd7tFeWlmQUwwbqqXwu9HhOU8x2x9Utl8hRlkGkhkZGD+LdUn4NR1hnqM+
	zpylr657hWik9Micjl2mSvygn3oGwvjRiaSep6327+LJL55xDgPLpmfG36aEqs1TbveE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175871-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175871: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 11:00:36 +0000

flight 175871 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175871/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days    7 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
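
The fix this commit describes boils down to propagating the status returned by the protocol installation instead of discarding it. A minimal sketch of that error-propagation pattern, in illustrative Python rather than EDK2 C, with hypothetical names (install_protocol_interface stands in for gBS->InstallProtocolInterface, and the status codes are placeholders):

```python
EFI_SUCCESS = 0
EFI_OUT_OF_RESOURCES = 9  # placeholder status code for this sketch

def install_protocol_interface(db, guid, interface):
    """Toy stand-in for gBS->InstallProtocolInterface(): register an
    interface (possibly None) under a GUID, or fail with a status code."""
    if guid in db:
        return EFI_OUT_OF_RESOURCES  # simulate a failed installation
    db[guid] = interface
    return EFI_SUCCESS

def install_qemu_fw_cfg_tables(db):
    """Model of the fixed flow: the installation status is checked and
    returned to the caller instead of being silently dropped."""
    status = install_protocol_interface(db, "QemuAcpiTableNotify", None)
    if status != EFI_SUCCESS:
        return status  # propagate the error so the caller can handle it
    return EFI_SUCCESS
```
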

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installs the QemuAcpiTableNotifyProtocol at a
    wrong position. It should be installed before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the value returned by
    installing the QemuAcpiTableNotifyProtocol is not checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices for
    storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:20:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:20:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477849.740735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH13t-000397-AU; Sun, 15 Jan 2023 11:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477849.740735; Sun, 15 Jan 2023 11:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH13t-000390-7o; Sun, 15 Jan 2023 11:20:29 +0000
Received: by outflank-mailman (input) for mailman id 477849;
 Sun, 15 Jan 2023 11:20:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH13r-00038q-8p; Sun, 15 Jan 2023 11:20:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH13r-0002iv-5i; Sun, 15 Jan 2023 11:20:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH13q-0003pD-O2; Sun, 15 Jan 2023 11:20:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH13q-0003P9-Na; Sun, 15 Jan 2023 11:20:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WOnEZZzIX6QY6wDgqVS6uIsDsGwnySSHFvnulnHBpt0=; b=EAmml/KTu3S6pvbj/slFUMyYFV
	lIsu9H7e9LbnmQyyDles8XYLBwEQhnAwY0uWACcBDV3P6JjkqhK0StrXa1ioxVm6Fet4eKOriH908
	zuw9iN9rqr9eha/DXoFvjKnGkecliK8EDZk68r1+VU2nEvmTYoS5UgbWYr+eckE+eYTc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175857-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175857: regressions - trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-amd64:host-build-prep:fail:regression
    xen-unstable-smoke:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 11:20:26 +0000

flight 175857 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175857/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-amd64                   5 host-build-prep          fail REGR. vs. 175746
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    2 days
Failing since        175748  2023-01-12 20:01:56 Z    2 days    8 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    1 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we only check that some part of .text.header is part
    of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
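
The shortcut the commit takes, checking that the whole .text.header section fits in one page rather than auditing each symbol individually, can be modelled as a simple size check. This is an illustrative Python sketch; the real check lives in the Arm linker script, and the 4KB page size is an assumption of this sketch, as the commit itself notes:

```python
PAGE_SIZE = 4096  # 4KB pages assumed here, purely for illustration

def text_header_fits_in_page(start, end, page_size=PAGE_SIZE):
    """Model of the link-time assertion: the whole .text.header section,
    including the literal pool at its end, must fit within one page so
    that it is fully covered by the identity mapping."""
    return (end - start) <= page_size
```
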

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
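
The behaviour this commit implements for CPUs without model-specific LBR, discarding MSR_DBG_CTL.LBR writes and always reading back 0 instead of crashing the domain, can be sketched as follows. This is an illustrative Python model, not Xen's actual vmx.c code; the class and function names are hypothetical:

```python
class Vcpu:
    """Toy vCPU: model_specific_lbr is None on CPUs (e.g. SPR and later)
    for which no model-specific LBR information exists."""
    def __init__(self, model_specific_lbr):
        self.model_specific_lbr = model_specific_lbr
        self.debugctl = 0

def wrmsr_debugctl(vcpu, value):
    if vcpu.model_specific_lbr is None:
        return  # discard the write, per the Arch LBR spec behaviour
    vcpu.debugctl = value

def rdmsr_debugctl(vcpu):
    if vcpu.model_specific_lbr is None:
        return 0  # reads always return 0 without model-specific LBR
    return vcpu.debugctl
```
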

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRSMR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:31:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477857.740746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EN-0004d1-B9; Sun, 15 Jan 2023 11:31:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477857.740746; Sun, 15 Jan 2023 11:31:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EN-0004cu-8V; Sun, 15 Jan 2023 11:31:19 +0000
Received: by outflank-mailman (input) for mailman id 477857;
 Sun, 15 Jan 2023 11:31:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytYW=5M=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1pH1EL-0004ci-7y
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 11:31:17 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1ee07df1-94c8-11ed-91b6-6bf2151ebd3b;
 Sun, 15 Jan 2023 12:31:15 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id hw16so50183816ejc.10
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 03:31:15 -0800 (PST)
Received: from dsemenets-HP-EliteBook-850-G8-Notebook-PC.. ([91.219.254.73])
 by smtp.gmail.com with ESMTPSA id
 uj42-20020a170907c9aa00b0084d4e612a22sm7459961ejc.67.2023.01.15.03.31.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 03:31:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ee07df1-94c8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=7XAphPrE39ytJWfoWHxTfI0tyCLfujC+d0fBUUNoF74=;
        b=JvrZoBJe8PUuaG7x7fP/I2qQZ6QMKEQwzm4Mt9TAn2RN3tK8xK0iaP5t4zvPDvxqAP
         osDi9CDWl3oyGvvcVbJwLRJyboMwTw4OSi3USoEpL871jdYj1m9suv12yu03yrASQiq7
         DdySg1woP5U1uPfDV8EAt3IYuUulbHpuQl61SinOBV4Xl2fWkpGKndlDtU5MZJHPpjjS
         VlMULBEYoGvbjeAADFyymegoJrQNV1T1+UM/7yJ2Ax7s4Qi+YbWc0GQ4TJFcNn5W0pqi
         khZec1Tr+VbpgP1GpQhFbCttQkli8Jd3TiCyJ8XsrzhOH4ObE88LE+swNZm1EX+7pNYp
         lANw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=7XAphPrE39ytJWfoWHxTfI0tyCLfujC+d0fBUUNoF74=;
        b=d7Gz2+Hee2Yla3n/xx6GEximInfEZhEyvWn8F4bVO0qE5QlpEL5F+lWwjBvBtcztfe
         uCaNtrBs5TDiQtMD8lYc9PVNoue7gsWr4fDokU0y0SFHNzs+5YQTdE5HdJmY2w80gA8D
         LtqPY9iklX7uTcOZtNEZUVp6sPpYe7yjKCzQiteL30Ry47Gv1cyBrVNERlDm/vhuJCru
         enr/Fks9QU53R56TPhbPCn+T/lcQQryqUxRcGtIaPyhSfqqUiiKNIsj/eUyo8TtW3D5X
         +1ypLb/TFO18d4yeQN6kvM0Lef/YJuKJtTkp5NeHusKQYExUoY3eo7riiZJEXtWt+r/a
         ZVlg==
X-Gm-Message-State: AFqh2krlCZdzqeJe/8CegZ65uZX6TEIwdAvqoxPSpy1roJHG7iHREuoO
	YKARkDQZMLeSHH96R28CanBws0FeIUuV8pGC
X-Google-Smtp-Source: AMrXdXtzJ3uR1zUwQqYLE9gXRcxLuTgDDYkx+NCVgp+9bAVY6JpmUT1R8r3hUxngS28aUXRSPJRdWw==
X-Received: by 2002:a17:907:7707:b0:844:c651:ce4b with SMTP id kw7-20020a170907770700b00844c651ce4bmr71819279ejc.33.1673782274232;
        Sun, 15 Jan 2023 03:31:14 -0800 (PST)
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH v3 00/10] PCID server
Date: Sun, 15 Jan 2023 13:31:01 +0200
Message-Id: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

The pcid server is used if a domain has a passthrough PCI controller and we
want to assign some device to another domain.
The pcid server should be launched in the domain that owns the PCI controller,
where it processes requests from other domains.
The pcid server is needed if the domain which owns the PCI controller is not
Domain-0.

Changes:
The patch with subject "remove xenstore entries on vchan server closure" was
removed from the patchset because it has already been merged.
The patch with subject "Add pcid daemon to xl" was split into several patches
because it was very large.
Fixed a crash in some cases.
Fixed a memory leak.
Fixed false client detection.

Dmytro Semenets (6):
  tools/libs/light: Add vchan support to libxl
  tools/xl: Add pcid daemon to xl
  tools/libs/light: pcid: implement is_device_assigned command
  tools/libs/light: pcid: implement reset_device command
  tools/libs/light: pcid: implement resource_list command
  tools/libs/light: pcid: implement write_bdf command

Oleksandr Andrushchenko (2):
  tools: allow vchan XenStore paths more then 64 bytes long
  tools/libs/light: pcid: implement list_assignable command

Volodymyr Babchuk (2):
  tools/light: pci: describe [MAKE|REVERT]_ASSIGNABLE commands
  tools/light: pci: move assign/revert logic to pcid

 tools/configure                               |    8 +-
 tools/configure.ac                            |    1 +
 tools/helpers/Makefile                        |    4 +-
 tools/hotplug/FreeBSD/rc.d/xlpcid.in          |   75 ++
 tools/hotplug/Linux/init.d/xlpcid.in          |   76 ++
 tools/hotplug/Linux/systemd/Makefile          |    1 +
 .../hotplug/Linux/systemd/xenpcid.service.in  |   10 +
 tools/hotplug/NetBSD/rc.d/xlpcid.in           |   75 ++
 tools/include/pcid.h                          |  228 ++++
 tools/libs/light/Makefile                     |    8 +-
 tools/libs/light/libxl_pci.c                  |  673 +++++-----
 tools/libs/light/libxl_pcid.c                 | 1110 +++++++++++++++++
 tools/libs/light/libxl_vchan.c                |  496 ++++++++
 tools/libs/light/libxl_vchan.h                |   94 ++
 tools/libs/vchan/init.c                       |   28 +-
 tools/xl/Makefile                             |    5 +-
 tools/xl/xl.h                                 |    1 +
 tools/xl/xl_cmdtable.c                        |    7 +
 tools/xl/xl_pcid.c                            |   81 ++
 19 files changed, 2613 insertions(+), 368 deletions(-)
 create mode 100644 tools/hotplug/FreeBSD/rc.d/xlpcid.in
 create mode 100644 tools/hotplug/Linux/init.d/xlpcid.in
 create mode 100644 tools/hotplug/Linux/systemd/xenpcid.service.in
 create mode 100644 tools/hotplug/NetBSD/rc.d/xlpcid.in
 create mode 100644 tools/include/pcid.h
 create mode 100644 tools/libs/light/libxl_pcid.c
 create mode 100644 tools/libs/light/libxl_vchan.c
 create mode 100644 tools/libs/light/libxl_vchan.h
 create mode 100644 tools/xl/xl_pcid.c

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:31:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477858.740753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EN-0004jp-QG; Sun, 15 Jan 2023 11:31:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477858.740753; Sun, 15 Jan 2023 11:31:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EN-0004j8-Lz; Sun, 15 Jan 2023 11:31:19 +0000
Received: by outflank-mailman (input) for mailman id 477858;
 Sun, 15 Jan 2023 11:31:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytYW=5M=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1pH1EM-0004co-En
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 11:31:18 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f70f17e-94c8-11ed-b8d0-410ff93cb8f0;
 Sun, 15 Jan 2023 12:31:16 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id u19so61973708ejm.8
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 03:31:16 -0800 (PST)
Received: from dsemenets-HP-EliteBook-850-G8-Notebook-PC.. ([91.219.254.73])
 by smtp.gmail.com with ESMTPSA id
 uj42-20020a170907c9aa00b0084d4e612a22sm7459961ejc.67.2023.01.15.03.31.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 03:31:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f70f17e-94c8-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TBqwkdTRqUwsY27EiP5YbesMgj7n7qTjOsQBFo8GYrE=;
        b=QwaIpJdnEWMrDV9ZVxPKuUyl7Pi8ECOWt8hyB4G2cSC+mNK0H+LMpIrDm2cl90+hbU
         FFJOD2E7WpDSliUnmb/Dno2Af+sHkwT7PBW0UjtFKHQBC6b/twmJ4C9M1A5JqRUsAu5l
         TRJJ4MGHYIiCRTZCKsT/2ueklyDo5xRdUHW4dGQgrAOxcqYckEAqT+oFDYbtZpkHJPyU
         +ppjHBSSKJzCM6T4WY74pT6ivefE3azDHm1XOqDF96/RcwYPNnbE10fS6L39V8gSbUrC
         t35/J+zp1j4PHLmdh2isTy1sLJ6LpQRrP7kySnCnHmBmGQw07ag6qnzT6r3S2fuRbX8O
         gnqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=TBqwkdTRqUwsY27EiP5YbesMgj7n7qTjOsQBFo8GYrE=;
        b=mlwivffz3BUF05TY7GwQYHoOK1EFyknsEHzrmZkOGEmmHQzF9I3rz+oFylMAvF6g3Q
         Sdv7Crn0GITZt8dXp4cw/sYcdy5kjbXZgDAqWThe/s6q9FclEsnBYhJSK2SWiQEKR7w/
         jxayyi403g9mnK8FFCkEV532/IC913lsifyIY2R+DRgs342TWo3C0Qq1imhy3aWZt64g
         fr6gQzI9pFn6CTK7P8iwqqbFUqR1DC0bzaYFDLM/ZtmG4Gsse5BzpVxJ9DvoQCqxyFN7
         v807fx4FEO1zfsJwYkpxO9cUfDO+EBCuY77ZP+GbihL3YIN4OZg9iwZRE4Nd9cLPHtab
         J2jA==
X-Gm-Message-State: AFqh2kpvFdx4piZ1C8JBS7V6pNM/yR7X+i8HO4F9w4wwpZwIgP2wmMxK
	HVrf3wiWbtmTEDNeTV0SWkhRrT9LaX0d+2bQ
X-Google-Smtp-Source: AMrXdXvZ32YsPLgOdTO+otsGGHFp+YCDuOV+0rKNF6of4w6hnrg24wnd22SuHOKktSTOClOxf4TYPw==
X-Received: by 2002:a17:906:2816:b0:7c0:d452:2e74 with SMTP id r22-20020a170906281600b007c0d4522e74mr75892077ejc.4.1673782275466;
        Sun, 15 Jan 2023 03:31:15 -0800 (PST)
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Dmytro Semenets <dmytro_semenets@epam.com>
Subject: [RFC PATCH v3 01/10] tools: allow vchan XenStore paths more than 64 bytes long
Date: Sun, 15 Jan 2023 13:31:02 +0200
Message-Id: <20230115113111.1207605-2-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

The current vchan implementation, while dealing with XenStore paths,
allocates a 64-byte buffer on the stack, which may not be enough for
some use-cases. Allocate the buffer dynamically instead, sized for the
maximum allowed XenStore path length, XENSTORE_ABS_PATH_MAX.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
---
 tools/libs/vchan/init.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/tools/libs/vchan/init.c b/tools/libs/vchan/init.c
index 9195bd3b98..fec650dc32 100644
--- a/tools/libs/vchan/init.c
+++ b/tools/libs/vchan/init.c
@@ -249,7 +249,7 @@ static int init_xs_srv(struct libxenvchan *ctrl, int domain, const char* xs_base
 	int ret = -1;
 	struct xs_handle *xs;
 	struct xs_permissions perms[2];
-	char buf[64];
+	char *buf;
 	char ref[16];
 	char* domid_str = NULL;
 	xs_transaction_t xs_trans = XBT_NULL;
@@ -259,6 +259,10 @@ static int init_xs_srv(struct libxenvchan *ctrl, int domain, const char* xs_base
 	if (!ctrl->xs_path)
 		return -1; 
 
+	buf = malloc(XENSTORE_ABS_PATH_MAX);
+	if (!buf)
+		return -1;
+
 	xs = xs_open(0);
 	if (!xs)
 		goto fail;
@@ -280,14 +284,14 @@ retry_transaction:
 		goto fail_xs_open;
 
 	snprintf(ref, sizeof ref, "%d", ring_ref);
-	snprintf(buf, sizeof buf, "%s/ring-ref", xs_base);
+	snprintf(buf, XENSTORE_ABS_PATH_MAX, "%s/ring-ref", xs_base);
 	if (!xs_write(xs, xs_trans, buf, ref, strlen(ref)))
 		goto fail_xs_open;
 	if (!xs_set_permissions(xs, xs_trans, buf, perms, 2))
 		goto fail_xs_open;
 
 	snprintf(ref, sizeof ref, "%d", ctrl->event_port);
-	snprintf(buf, sizeof buf, "%s/event-channel", xs_base);
+	snprintf(buf, XENSTORE_ABS_PATH_MAX, "%s/event-channel", xs_base);
 	if (!xs_write(xs, xs_trans, buf, ref, strlen(ref)))
 		goto fail_xs_open;
 	if (!xs_set_permissions(xs, xs_trans, buf, perms, 2))
@@ -303,6 +307,7 @@ retry_transaction:
 	free(domid_str);
 	xs_close(xs);
  fail:
+	free(buf);
 	return ret;
 }
 
@@ -419,13 +424,20 @@ struct libxenvchan *libxenvchan_client_init(struct xentoollog_logger *logger,
 {
 	struct libxenvchan *ctrl = malloc(sizeof(struct libxenvchan));
 	struct xs_handle *xs = NULL;
-	char buf[64];
+	char *buf;
 	char *ref;
 	int ring_ref;
 	unsigned int len;
 
 	if (!ctrl)
-		return 0;
+		return NULL;
+
+	buf = malloc(XENSTORE_ABS_PATH_MAX);
+	if (!buf) {
+		free(ctrl);
+		return NULL;
+	}
+
 	ctrl->ring = NULL;
 	ctrl->event = NULL;
 	ctrl->gnttab = NULL;
@@ -436,8 +448,9 @@ struct libxenvchan *libxenvchan_client_init(struct xentoollog_logger *logger,
 	if (!xs)
 		goto fail;
 
+
 // find xenstore entry
-	snprintf(buf, sizeof buf, "%s/ring-ref", xs_path);
+	snprintf(buf, XENSTORE_ABS_PATH_MAX, "%s/ring-ref", xs_path);
 	ref = xs_read(xs, 0, buf, &len);
 	if (!ref)
 		goto fail;
@@ -445,7 +458,7 @@ struct libxenvchan *libxenvchan_client_init(struct xentoollog_logger *logger,
 	free(ref);
 	if (!ring_ref)
 		goto fail;
-	snprintf(buf, sizeof buf, "%s/event-channel", xs_path);
+	snprintf(buf, XENSTORE_ABS_PATH_MAX, "%s/event-channel", xs_path);
 	ref = xs_read(xs, 0, buf, &len);
 	if (!ref)
 		goto fail;
@@ -475,6 +488,7 @@ struct libxenvchan *libxenvchan_client_init(struct xentoollog_logger *logger,
  out:
 	if (xs)
 		xs_close(xs);
+	free(buf);
 	return ctrl;
  fail:
 	libxenvchan_close(ctrl);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:31:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477859.740769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EP-00057b-2g; Sun, 15 Jan 2023 11:31:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477859.740769; Sun, 15 Jan 2023 11:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EO-00057Q-VE; Sun, 15 Jan 2023 11:31:20 +0000
Received: by outflank-mailman (input) for mailman id 477859;
 Sun, 15 Jan 2023 11:31:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytYW=5M=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1pH1EN-0004ci-6R
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 11:31:19 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 204756f6-94c8-11ed-91b6-6bf2151ebd3b;
 Sun, 15 Jan 2023 12:31:17 +0100 (CET)
Received: by mail-ej1-x62a.google.com with SMTP id bk15so4531259ejb.9
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 03:31:17 -0800 (PST)
Received: from dsemenets-HP-EliteBook-850-G8-Notebook-PC.. ([91.219.254.73])
 by smtp.gmail.com with ESMTPSA id
 uj42-20020a170907c9aa00b0084d4e612a22sm7459961ejc.67.2023.01.15.03.31.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 03:31:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 204756f6-94c8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=03O/96QsOqq4aLYpGvEXu2Bk6X9D5zYH6rdH2Rhto3Q=;
        b=GjzY+W7G5a/MctJVU7CBCLUw9wATIetZIrBiEbrCoWfdM4C5Xx4AQ+5oCPnJqxnJ8Y
         lMxpnHXt+q6wsWEZpjHNqpxCj3ujMJ+9Rx+doJP2ijRd2SjwHL8E8tz7u7cm+whw1Jgq
         XZK/G4Asx6nRXjw5A+OZVjHZY15XRjsIWV6N/CwVM95WfIcpfEG0wmywFnuwn0qH6Uk9
         vKGSodzrl7SME9bv/hCB/YGmjdaoGkuXRxBjgjVUVc8KG9wvEWdhDoVB5tp8bcvVlw2R
         WVdtN7/L2c3SU8/QH32B2pYtJOEx5IeKY6wgvwBN7TZsf48r8a0pldoel4zqO9yyt975
         8t2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=03O/96QsOqq4aLYpGvEXu2Bk6X9D5zYH6rdH2Rhto3Q=;
        b=jxwtBWZMbrinlQf/fsJ3CU4VOMM9F4MAlM0wyD05bLIBrHY4ri1BDVkGWDIrQwaHYR
         z5jgztyOLk1II0NRSDC8IPP6naDKKrDFH4bZczceohQeEbXWKzJxjGvTIkuVGWW8uaje
         tEJdtFgjyGCH40R0e5nMy0bqk3IRNtZRQU1aXynlvyjRgBQV4mca8QsVP/wAXyo+ZhBn
         7+Tw473JlOCLGatSo8PfPyi0dmgMcIn5N/vkgaEHXTvE3Mrzkenztlk7jtjfUTGXWmFg
         WBAfxErVEKEp1suH0mGwfW18SnjcknTbo5SJFM4KQMomeffRREtbDMhP4rg1B2Lj8Rbv
         tLng==
X-Gm-Message-State: AFqh2kphWD8lWnPhzds5umrJcMEy0qze8ACisq8epyGMXyR9WmC15LWr
	apec6yfpfQLNBP7cAaM6eX0oZIGpbv6R30K3
X-Google-Smtp-Source: AMrXdXs1MGroZp8qjceei9NllSlJPHzb4YXfmzo63afyVz0Tkk0Cxgv03c1A8+LUozMTfi+OtgAAAw==
X-Received: by 2002:a17:906:edb0:b0:84d:3623:bf5e with SMTP id sa16-20020a170906edb000b0084d3623bf5emr27799097ejb.24.1673782276552;
        Sun, 15 Jan 2023 03:31:16 -0800 (PST)
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>
Subject: [RFC PATCH v3 02/10] tools/libs/light: Add vchan support to libxl
Date: Sun, 15 Jan 2023 13:31:03 +0200
Message-Id: <20230115113111.1207605-3-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

Add the possibility to send commands from libxl and execute them on the
server side. The libxl vchan layer adds support for JSON message
processing.

Using libxl vchan prevents a libxl client from reading and writing the
local sysfs directly: instead of calling read and write functions
on sysfs, the client sends requests to the server, then receives
and processes the responses.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Signed-off-by: Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>
Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
---
 tools/helpers/Makefile         |   4 +-
 tools/libs/light/Makefile      |   7 +-
 tools/libs/light/libxl_vchan.c | 488 +++++++++++++++++++++++++++++++++
 tools/libs/light/libxl_vchan.h |  92 +++++++
 tools/xl/Makefile              |   3 +-
 5 files changed, 588 insertions(+), 6 deletions(-)
 create mode 100644 tools/libs/light/libxl_vchan.c
 create mode 100644 tools/libs/light/libxl_vchan.h

diff --git a/tools/helpers/Makefile b/tools/helpers/Makefile
index 09590eb5b6..2bf52d055d 100644
--- a/tools/helpers/Makefile
+++ b/tools/helpers/Makefile
@@ -20,7 +20,7 @@ $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
 $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenstore)
 $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenlight)
 $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
-xen-init-dom0: LDLIBS += $(call xenlibs-ldlibs,ctrl toollog store light)
+xen-init-dom0: LDLIBS += $(call xenlibs-ldlibs,ctrl toollog store light vchan)
 
 INIT_XENSTORE_DOMAIN_OBJS = init-xenstore-domain.o init-dom-json.o
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
@@ -29,7 +29,7 @@ $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenstore)
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenlight)
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h
-init-xenstore-domain: LDLIBS += $(call xenlibs-ldlibs,toollog store ctrl guest light)
+init-xenstore-domain: LDLIBS += $(call xenlibs-ldlibs,toollog store ctrl guest light vchan)
 
 INIT_DOM0LESS_OBJS = init-dom0less.o init-dom-json.o
 $(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 4fddcc6f51..0941ad2cf4 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -71,6 +71,7 @@ OBJS-y += libxl.o
 OBJS-y += libxl_create.o
 OBJS-y += libxl_dm.o
 OBJS-y += libxl_pci.o
+OBJS-y += libxl_vchan.o
 OBJS-y += libxl_dom.o
 OBJS-y += libxl_exec.o
 OBJS-y += libxl_xshelp.o
@@ -170,7 +171,7 @@ LDLIBS-y += $(PTHREAD_LIBS)
 LDLIBS-y += -lyajl
 LDLIBS += $(LDLIBS-y)
 
-$(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
+$(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) $(CFLAGS_libxenevtchn) -include $(XEN_ROOT)/tools/config.h
 $(ACPI_OBJS) $(ACPI_PIC_OBJS): CFLAGS += -I. -DLIBACPI_STDUTILS=\"$(CURDIR)/libxl_x86_acpi.h\"
 $(TEST_PROG_OBJS) _libxl.api-for-check: CFLAGS += $(CFLAGS_libxentoollog) $(CFLAGS_libxentoolcore)
 libxl_dom.o libxl_dom.opic: CFLAGS += -I$(XEN_ROOT)/tools  # include libacpi/x86.h
@@ -241,13 +242,13 @@ libxenlight_test.so: $(PIC_OBJS) $(LIBXL_TEST_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LDLIBS) $(APPEND_LDFLAGS)
 
 test_%: test_%.o test_common.o libxenlight_test.so
-	$(CC) $(LDFLAGS) -o $@ $^ $(filter-out %libxenlight.so, $(LDLIBS_libxenlight)) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) -lyajl $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $^ $(filter-out %libxenlight.so, $(LDLIBS_libxenlight)) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) $(LDLIBS_libxenvchan) -lyajl $(APPEND_LDFLAGS)
 
 libxl-save-helper: $(SAVE_HELPER_OBJS) libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxentoollog) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
 
 testidl: testidl.o libxenlight.so
-	$(CC) $(LDFLAGS) -o $@ testidl.o $(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ testidl.o $(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) $(LDLIBS_libxenvchan) $(APPEND_LDFLAGS)
 
 install:: $(LIBHEADERS) libxl-save-helper
 	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
diff --git a/tools/libs/light/libxl_vchan.c b/tools/libs/light/libxl_vchan.c
new file mode 100644
index 0000000000..7611816f52
--- /dev/null
+++ b/tools/libs/light/libxl_vchan.c
@@ -0,0 +1,488 @@
+/*
+ * Vchan support for JSON messages processing
+ *
+ * Copyright (C) 2021 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+#include "libxl_vchan.h"
+
+#define VCHAN_EOM       "\r\n"
+/*
+ * http://xenbits.xen.org/docs/unstable/misc/xenstore-paths.html
+ * 1.4.4 Domain Controlled Paths
+ * 1.4.4.1 ~/data [w]
+ * A domain writable path. Available for arbitrary domain use.
+ */
+#define VCHAN_SRV_DIR   "/local/domain"
+
+struct vchan_state {
+    struct libxenvchan *ctrl;
+
+    /* Server domain ID. */
+    libxl_domid domid;
+
+    /* XenStore path of the server with the ring buffer and event channel. */
+    char *xs_path;
+
+    int select_fd;
+
+    /* GC used for state's lifetime allocations, such as rx_buf. */
+    libxl__gc *gc;
+    /* Receive buffer. */
+    char *rx_buf;
+    /* Current allocated size. */
+    size_t rx_buf_size;
+    /* Actual data in the buffer. */
+    size_t rx_buf_used;
+
+    /* YAJL generator used to parse and create requests/replies. */
+    yajl_gen gen;
+};
+
+int libxl__vchan_field_add_string(libxl__gc *gc, yajl_gen gen,
+                                  const char *field, char *val)
+{
+    libxl__json_object *result;
+
+    libxl__yajl_gen_asciiz(gen, field);
+    result = libxl__json_object_alloc(gc, JSON_STRING);
+    result->u.string = val;
+    return libxl__json_object_to_yajl_gen(gc, gen, result);
+}
+
+static libxl__json_object *libxl__vchan_arg_new(libxl__gc *gc,
+                                                libxl__json_node_type type,
+                                                libxl__json_object *args,
+                                                char *key)
+{
+    libxl__json_map_node *arg;
+    libxl__json_object *obj;
+
+    obj = libxl__json_object_alloc(gc, type);
+
+    GCNEW(arg);
+
+    arg->map_key = key;
+    arg->obj = obj;
+
+    flexarray_append(args->u.map, arg);
+
+    return obj;
+}
+
+void libxl__vchan_arg_add_string(libxl__gc *gc, libxl__json_object *args,
+                                 char *key, char *val)
+{
+    libxl__json_object *obj = libxl__vchan_arg_new(gc, JSON_STRING, args, key);
+
+    obj->u.string = val;
+}
+
+void libxl__vchan_arg_add_bool(libxl__gc *gc, libxl__json_object *args,
+                               char *key, bool val)
+{
+    libxl__json_object *obj = libxl__vchan_arg_new(gc, JSON_BOOL, args, key);
+
+    obj->u.b = val;
+}
+
+static void reset_yajl_generator(struct vchan_state *state)
+{
+    yajl_gen_clear(state->gen);
+    yajl_gen_reset(state->gen, NULL);
+}
+
+void vchan_dump_gen(libxl__gc *gc, yajl_gen gen)
+{
+    const unsigned char *buf = NULL;
+    size_t len = 0;
+
+    yajl_gen_get_buf(gen, &buf, &len);
+    LOG(DEBUG, "%s\n", buf);
+}
+
+void vchan_dump_state(libxl__gc *gc, struct vchan_state *state)
+{
+    vchan_dump_gen(gc, state->gen);
+}
+
+/*
+ * Find a JSON object and store it in o_r.
+ * return ERROR_NOTFOUND if no object is found.
+ */
+static int vchan_get_next_msg(libxl__gc *gc, struct vchan_state *state,
+                              libxl__json_object **o_r)
+{
+    size_t len;
+    char *end = NULL;
+    const size_t eoml = sizeof(VCHAN_EOM) - 1;
+    libxl__json_object *o = NULL;
+
+    if (!state->rx_buf_used)
+        return ERROR_NOTFOUND;
+
+    /* Search for the end of a message which is CRLF. */
+    end = memmem(state->rx_buf, state->rx_buf_used, VCHAN_EOM, eoml);
+    if (!end)
+        return ERROR_NOTFOUND;
+
+    len = (end - state->rx_buf) + eoml;
+
+    LOGD(DEBUG, state->domid, "parsing %zuB: '%.*s'", len, (int)len,
+         state->rx_buf);
+
+    /* Replace \r by \0 so that libxl__json_parse can use strlen */
+    state->rx_buf[len - eoml] = '\0';
+
+    o = libxl__json_parse(gc, state->rx_buf);
+    state->rx_buf_used -= len;
+    if (!o) {
+        LOGD(ERROR, state->domid, "Parse error");
+        /*
+         * In case of parsing error get back to a known state:
+         * reset the buffer and continue reading.
+         */
+        return ERROR_INVAL;
+    }
+
+    memmove(state->rx_buf, state->rx_buf + len, state->rx_buf_used);
+
+    LOGD(DEBUG, state->domid, "JSON object received: %s", JSON(o));
+
+    *o_r = o;
+
+    return 0;
+}
+
+static int vchan_process_packet(libxl__gc *gc, struct vchan_info *vchan,
+                                libxl__json_object **resp_result)
+{
+    while (true) {
+        struct vchan_state *state = vchan->state;
+        int rc;
+        ssize_t r;
+
+        if (!libxenvchan_is_open(state->ctrl))
+            return ERROR_FAIL;
+
+        /* Check if the buffer still has space or increase its size. */
+        if (state->rx_buf_size - state->rx_buf_used < vchan->receive_buf_size) {
+            size_t newsize = state->rx_buf_size * 2 + vchan->receive_buf_size;
+
+            if (newsize > vchan->max_buf_size) {
+                LOGD(ERROR, state->domid,
+                     "receive buffer is too big (%zu > %zu)",
+                     newsize, vchan->max_buf_size);
+                return ERROR_NOMEM;
+            }
+
+            state->rx_buf_size = newsize;
+            state->rx_buf = libxl__realloc(state->gc, state->rx_buf,
+                                           state->rx_buf_size);
+        }
+
+        do {
+            libxl__json_object *msg;
+
+            r = libxenvchan_read(state->ctrl,
+                                 state->rx_buf + state->rx_buf_used,
+                                 state->rx_buf_size - state->rx_buf_used);
+
+            if (r < 0) {
+                LOGED(ERROR, state->domid, "error reading");
+                return ERROR_FAIL;
+            } else if (r == 0)
+                continue;
+
+            LOG(DEBUG, "received %zdB: '%.*s'", r,
+                (int)r, state->rx_buf + state->rx_buf_used);
+
+            state->rx_buf_used += r;
+            assert(state->rx_buf_used <= state->rx_buf_size);
+
+            /* parse rx buffer to find one json object */
+            rc = vchan_get_next_msg(gc, state, &msg);
+            if ((rc == ERROR_INVAL) || (rc == ERROR_NOTFOUND))
+                continue;
+            if (rc)
+                return rc;
+
+            if (resp_result)
+                return vchan->handle_response(gc, msg, resp_result);
+            else {
+                reset_yajl_generator(state);
+                return vchan->handle_request(gc, state->gen, msg);
+            }
+        } while (libxenvchan_data_ready(state->ctrl));
+    }
+
+    return 0;
+}
+
+static int vchan_write(libxl__gc *gc, struct vchan_state *state, char *cmd)
+{
+    size_t len;
+    int ret;
+
+    len = strlen(cmd);
+    while (len) {
+        ret = libxenvchan_write(state->ctrl, cmd, len);
+        if (ret < 0) {
+            LOGE(ERROR, "vchan write failed");
+            return ERROR_FAIL;
+        }
+        cmd += ret;
+        len -= ret;
+    }
+    return 0;
+}
+
+libxl__json_object *vchan_send_command(libxl__gc *gc, struct vchan_info *vchan,
+                                       char *cmd, libxl__json_object *args)
+{
+    libxl__json_object *result;
+    char *request;
+    int ret;
+
+    reset_yajl_generator(vchan->state);
+    request = vchan->prepare_request(gc, vchan->state->gen, cmd, args);
+    if (!request)
+        return NULL;
+
+    ret = vchan_write(gc, vchan->state, request);
+    if (ret < 0)
+        return NULL;
+
+    ret = vchan_write(gc, vchan->state, VCHAN_EOM);
+    if (ret < 0)
+        return NULL;
+
+    ret = vchan_process_packet(gc, vchan, &result);
+    if (ret < 0)
+        return NULL;
+
+    return result;
+}
+
+int vchan_process_command(libxl__gc *gc, struct vchan_info *vchan)
+{
+    char *json_str;
+    int ret;
+
+    ret = vchan_process_packet(gc, vchan, NULL);
+    if (ret)
+        return ret;
+
+    json_str = vchan->prepare_response(gc, vchan->state->gen);
+    if (!json_str)
+        return ERROR_INVAL;
+
+    ret = vchan_write(gc, vchan->state, json_str);
+    if (ret)
+        return ret;
+
+    return vchan_write(gc, vchan->state, VCHAN_EOM);
+}
+
+static libxl_domid vchan_find_server(libxl__gc *gc, char *xs_dir, char *xs_file)
+{
+    char **domains;
+    unsigned int i, n;
+    libxl_domid domid = DOMID_INVALID;
+
+    domains = libxl__xs_directory(gc, XBT_NULL, xs_dir, &n);
+    if (domains == NULL)
+        goto out;
+
+    for (i = 0; i < n; i++) {
+        const char *tmp;
+        int d;
+
+        if (sscanf(domains[i], "%d", &d) != 1)
+            continue;
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/%d/data/%s", xs_dir, d, xs_file));
+        /* Found the domain where the server lives. */
+        if (tmp) {
+            domid = d;
+            break;
+        }
+    }
+
+out:
+    return domid;
+}
+
+static int vchan_init_client(libxl__gc *gc, struct vchan_state *state,
+                             bool is_server)
+{
+    if (is_server) {
+        state->ctrl = libxenvchan_server_init(NULL, state->domid,
+                                              state->xs_path, 0, 0);
+        if (!state->ctrl) {
+            perror("Couldn't initialize vchan server");
+            exit(1);
+        }
+
+    } else {
+        state->ctrl = libxenvchan_client_init(CTX->lg, state->domid,
+                                              state->xs_path);
+        if (!state->ctrl) {
+            LOGE(ERROR, "Couldn't initialize vchan client");
+            return ERROR_FAIL;
+        }
+    }
+
+    state->ctrl->blocking = 1;
+    state->select_fd = libxenvchan_fd_for_select(state->ctrl);
+    if (state->select_fd < 0) {
+        LOGE(ERROR, "Couldn't read file descriptor for vchan client");
+        return ERROR_FAIL;
+    }
+
+    LOG(DEBUG, "Initialized vchan %s, XenStore at %s",
+        is_server ? "server" : "client", state->xs_path);
+
+    return 0;
+}
+
+struct vchan_state *vchan_init_new_state(libxl__gc *gc, libxl_domid domid,
+                                         char *vchan_xs_path, bool is_server)
+{
+    struct vchan_state *state;
+    yajl_gen gen;
+    int ret;
+
+    gen = libxl_yajl_gen_alloc(NULL);
+    if (!gen) {
+        LOGE(ERROR, "Failed to allocate yajl generator");
+        return NULL;
+    }
+
+#if HAVE_YAJL_V2
+    /* Disable beautify for data */
+    yajl_gen_config(gen, yajl_gen_beautify, 0);
+#endif
+
+    state = libxl__zalloc(gc, sizeof(*state));
+    state->domid = domid;
+    state->xs_path = vchan_xs_path;
+    state->gc = gc;
+    ret = vchan_init_client(gc, state, is_server);
+    if (ret) {
+        yajl_gen_free(gen);
+        return NULL;
+    }
+
+    state->gen = gen;
+
+    return state;
+}
+
+char *vchan_get_server_xs_path(libxl__gc *gc, libxl_domid domid, char *srv_name)
+{
+    return GCSPRINTF(VCHAN_SRV_DIR "/%d/data/%s", domid, srv_name);
+}
+
+/*
+ * Wait for the server to create the ring and event channel:
+ * between the moment we create the XS folder and the moment we start
+ * watching it, the server may already have created the ring and
+ * event channel entries. Thus, we cannot watch reliably here without
+ * races, so poll for both entries to be created.
+ */
+static int vchan_wait_server_available(libxl__gc *gc, const char *xs_path)
+{
+    char *xs_ring, *xs_evt;
+    int timeout_ms = 5000;
+
+    xs_ring = GCSPRINTF("%s/ring-ref", xs_path);
+    xs_evt = GCSPRINTF("%s/event-channel", xs_path);
+
+    while (timeout_ms) {
+        unsigned int len;
+        void *file;
+        int entries = 0;
+
+        file = xs_read(CTX->xsh, XBT_NULL, xs_ring, &len);
+        if (file) {
+            entries++;
+            free(file);
+        }
+
+        file = xs_read(CTX->xsh, XBT_NULL, xs_evt, &len);
+        if (file) {
+            entries++;
+            free(file);
+        }
+
+        if (entries == 2)
+            return 0;
+
+        timeout_ms -= 10;
+        usleep(10000);
+    }
+
+    return ERROR_TIMEDOUT;
+}
+
+struct vchan_state *vchan_new_client(libxl__gc *gc, char *srv_name)
+{
+    libxl_domid domid;
+    char *xs_path, *vchan_xs_path;
+    libxl_uuid uuid;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    domid = vchan_find_server(gc, VCHAN_SRV_DIR, srv_name);
+    if (domid == DOMID_INVALID) {
+        LOGE(ERROR, "Can't find vchan server");
+        return NULL;
+    }
+
+    xs_path = vchan_get_server_xs_path(gc, domid, srv_name);
+    LOG(DEBUG, "vchan server at %s\n", xs_path);
+
+    /* Generate unique client id. */
+    libxl_uuid_generate(&uuid);
+
+    vchan_xs_path = GCSPRINTF("%s/" LIBXL_UUID_FMT, xs_path,
+                              LIBXL_UUID_BYTES((uuid)));
+
+    if (!xs_mkdir(ctx->xsh, XBT_NULL, vchan_xs_path)) {
+        LOG(ERROR, "Can't create xs_dir at %s", vchan_xs_path);
+        return NULL;
+    }
+
+    if (vchan_wait_server_available(gc, vchan_xs_path)) {
+        LOG(ERROR, "Failed to wait for the server to come up at %s",
+            vchan_xs_path);
+        return NULL;
+    }
+
+    return vchan_init_new_state(gc, domid, vchan_xs_path, false);
+}
+
+void vchan_fini_one(libxl__gc *gc, struct vchan_state *state)
+{
+    if (!state)
+        return;
+
+    LOG(DEBUG, "Closing vchan");
+    libxenvchan_close(state->ctrl);
+
+    yajl_gen_free(state->gen);
+}
diff --git a/tools/libs/light/libxl_vchan.h b/tools/libs/light/libxl_vchan.h
new file mode 100644
index 0000000000..0968875298
--- /dev/null
+++ b/tools/libs/light/libxl_vchan.h
@@ -0,0 +1,92 @@
+/*
+    Common definitions for JSON messages processing by vchan
+    Copyright (C) 2021 EPAM Systems Inc.
+
+    This library is free software; you can redistribute it and/or
+    modify it under the terms of the GNU Lesser General Public
+    License as published by the Free Software Foundation; either
+    version 2.1 of the License, or (at your option) any later version.
+
+    This library is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+    Lesser General Public License for more details.
+
+    You should have received a copy of the GNU Lesser General Public
+    License along with this library; If not, see <http://www.gnu.org/licenses/>.
+*/
+
+#ifndef LIBXL_VCHAN_H
+#define LIBXL_VCHAN_H
+
+#include <libxenvchan.h>
+
+struct vchan_state;
+
+struct vchan_info {
+    struct vchan_state *state;
+
+    /* Process a request and produce the result by adding JSON objects to gen. */
+    int (*handle_request)(libxl__gc *gc, yajl_gen gen,
+                      const libxl__json_object *request);
+    /* Convert the prepared response into JSON string. */
+    char *(*prepare_response)(libxl__gc *gc, yajl_gen gen);
+
+    /* Prepare request as JSON string which will be sent. */
+    char *(*prepare_request)(libxl__gc *gc, yajl_gen gen, char *request,
+                             libxl__json_object *args);
+    /* Handle response and produce the output suitable for the requester. */
+    int (*handle_response)(libxl__gc *gc, const libxl__json_object *response,
+                           libxl__json_object **result);
+
+    /* Handle new client connection on the server side. */
+    int (*handle_new_client)(libxl__gc *gc);
+
+    /* Buffer info. */
+    size_t receive_buf_size;
+    size_t max_buf_size;
+};
+
+int libxl__vchan_field_add_string(libxl__gc *gc, yajl_gen hand,
+                                  const char *field, char *val);
+
+static inline libxl__json_object *libxl__vchan_start_args(libxl__gc *gc)
+{
+    return libxl__json_object_alloc(gc, JSON_MAP);
+}
+
+void libxl__vchan_arg_add_string(libxl__gc *gc, libxl__json_object *args,
+                                 char *key, char *val);
+void libxl__vchan_arg_add_bool(libxl__gc *gc, libxl__json_object *args,
+                               char *key, bool val);
+
+libxl__json_object *vchan_send_command(libxl__gc *gc, struct vchan_info *vchan,
+                                       char *cmd, libxl__json_object *args);
+
+void vchan_reset_generator(struct vchan_state *state);
+
+int vchan_process_command(libxl__gc *gc, struct vchan_info *vchan);
+
+char *vchan_get_server_xs_path(libxl__gc *gc, libxl_domid domid, char *srv_name);
+
+struct vchan_state *vchan_init_new_state(libxl__gc *gc, libxl_domid domid,
+                                         char *vchan_xs_path, bool is_server);
+
+struct vchan_state *vchan_new_client(libxl__gc *gc, char *srv_name);
+
+void vchan_fini_one(libxl__gc *gc, struct vchan_state *state);
+
+void vchan_dump_state(libxl__gc *gc, struct vchan_state *state);
+void vchan_dump_gen(libxl__gc *gc, yajl_gen gen);
+
+#endif /* LIBXL_VCHAN_H */
+
+/*
+ * Local variables:
+ *  mode: C
+ *  c-file-style: "linux"
+ *  indent-tabs-mode: t
+ *  c-basic-offset: 8
+ *  tab-width: 8
+ * End:
+ */
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index 5f7aa5f46c..da4591b6a9 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -15,6 +15,7 @@ LDFLAGS += $(PTHREAD_LDFLAGS)
 CFLAGS_XL += $(CFLAGS_libxenlight)
 CFLAGS_XL += $(CFLAGS_libxenutil)
 CFLAGS_XL += -Wshadow
+CFLAGS_XL += $(CFLAGS_libxenvchan)
 
 XL_OBJS-$(CONFIG_X86) = xl_psr.o
 XL_OBJS = xl.o xl_cmdtable.o xl_sxp.o xl_utils.o $(XL_OBJS-y)
@@ -33,7 +34,7 @@ $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs i
 all: xl
 
 xl: $(XL_OBJS)
-	$(CC) $(LDFLAGS) -o $@ $(XL_OBJS) $(LDLIBS_libxenutil) $(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -lyajl $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $(XL_OBJS) $(LDLIBS_libxenutil) $(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) $(LDLIBS_libxenvchan) -lyajl $(APPEND_LDFLAGS)
 
 .PHONY: install
 install: all
-- 
2.34.1



From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>
Subject: [RFC PATCH v3 03/10] tools/xl: Add pcid daemon to xl
Date: Sun, 15 Jan 2023 13:31:04 +0200
Message-Id: <20230115113111.1207605-4-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

Add draft version of pcid server (based on vchan-node2), which can receive
messages from the client.

Add essential functionality to handle the pcid protocol:
- define required constants
- prepare for handling remote requests
- prepare and send an error packet

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Signed-off-by: Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>
---
 tools/configure                               |   8 +-
 tools/configure.ac                            |   1 +
 tools/hotplug/FreeBSD/rc.d/xlpcid.in          |  75 +++
 tools/hotplug/Linux/init.d/xlpcid.in          |  76 ++++
 tools/hotplug/Linux/systemd/Makefile          |   1 +
 .../hotplug/Linux/systemd/xenpcid.service.in  |  10 +
 tools/hotplug/NetBSD/rc.d/xlpcid.in           |  75 +++
 tools/include/pcid.h                          |  94 ++++
 tools/libs/light/Makefile                     |   1 +
 tools/libs/light/libxl_pci.c                  | 128 ++++++
 tools/libs/light/libxl_pcid.c                 | 428 ++++++++++++++++++
 tools/xl/Makefile                             |   4 +-
 tools/xl/xl.h                                 |   1 +
 tools/xl/xl_cmdtable.c                        |   7 +
 tools/xl/xl_pcid.c                            |  81 ++++
 15 files changed, 986 insertions(+), 4 deletions(-)
 create mode 100644 tools/hotplug/FreeBSD/rc.d/xlpcid.in
 create mode 100644 tools/hotplug/Linux/init.d/xlpcid.in
 create mode 100644 tools/hotplug/Linux/systemd/xenpcid.service.in
 create mode 100644 tools/hotplug/NetBSD/rc.d/xlpcid.in
 create mode 100644 tools/include/pcid.h
 create mode 100644 tools/libs/light/libxl_pcid.c
 create mode 100644 tools/xl/xl_pcid.c

diff --git a/tools/configure b/tools/configure
index dae377c982..0cd6edb6ca 100755
--- a/tools/configure
+++ b/tools/configure
@@ -2455,7 +2455,7 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu
 
 
 
-ac_config_files="$ac_config_files ../config/Tools.mk hotplug/common/hotplugpath.sh hotplug/FreeBSD/rc.d/xencommons hotplug/FreeBSD/rc.d/xendriverdomain hotplug/Linux/init.d/sysconfig.xencommons hotplug/Linux/init.d/sysconfig.xendomains hotplug/Linux/init.d/xen-watchdog hotplug/Linux/init.d/xencommons hotplug/Linux/init.d/xendomains hotplug/Linux/init.d/xendriverdomain hotplug/Linux/launch-xenstore hotplug/Linux/vif-setup hotplug/Linux/xen-hotplug-common.sh hotplug/Linux/xendomains hotplug/NetBSD/rc.d/xencommons hotplug/NetBSD/rc.d/xendriverdomain ocaml/libs/xs/paths.ml ocaml/xenstored/paths.ml ocaml/xenstored/oxenstored.conf"
+ac_config_files="$ac_config_files ../config/Tools.mk hotplug/common/hotplugpath.sh hotplug/FreeBSD/rc.d/xencommons hotplug/FreeBSD/rc.d/xendriverdomain hotplug/FreeBSD/rc.d/xlpcid hotplug/Linux/init.d/sysconfig.xencommons hotplug/Linux/init.d/sysconfig.xendomains hotplug/Linux/init.d/xlpcid hotplug/Linux/init.d/xen-watchdog hotplug/Linux/init.d/xencommons hotplug/Linux/init.d/xendomains hotplug/Linux/init.d/xendriverdomain hotplug/Linux/launch-xenstore hotplug/Linux/vif-setup hotplug/Linux/xen-hotplug-common.sh hotplug/Linux/xendomains hotplug/NetBSD/rc.d/xencommons hotplug/NetBSD/rc.d/xendriverdomain hotplug/NetBSD/rc.d/xlpcid ocaml/libs/xs/paths.ml ocaml/xenstored/paths.ml ocaml/xenstored/oxenstored.conf"
 
 ac_config_headers="$ac_config_headers config.h"
 
@@ -10081,7 +10081,7 @@ fi
 
 if test "x$systemd" = "xy"; then :
 
-    ac_config_files="$ac_config_files hotplug/Linux/systemd/proc-xen.mount hotplug/Linux/systemd/xen-init-dom0.service hotplug/Linux/systemd/xen-qemu-dom0-disk-backend.service hotplug/Linux/systemd/xen-watchdog.service hotplug/Linux/systemd/xenconsoled.service hotplug/Linux/systemd/xendomains.service hotplug/Linux/systemd/xendriverdomain.service hotplug/Linux/systemd/xenstored.service"
+    ac_config_files="$ac_config_files hotplug/Linux/systemd/proc-xen.mount hotplug/Linux/systemd/xen-init-dom0.service hotplug/Linux/systemd/xen-qemu-dom0-disk-backend.service hotplug/Linux/systemd/xen-watchdog.service hotplug/Linux/systemd/xenconsoled.service hotplug/Linux/systemd/xendomains.service hotplug/Linux/systemd/xendriverdomain.service hotplug/Linux/systemd/xenstored.service hotplug/Linux/systemd/xenpcid.service"
 
 
 fi
@@ -10946,8 +10946,10 @@ do
     "hotplug/common/hotplugpath.sh") CONFIG_FILES="$CONFIG_FILES hotplug/common/hotplugpath.sh" ;;
     "hotplug/FreeBSD/rc.d/xencommons") CONFIG_FILES="$CONFIG_FILES hotplug/FreeBSD/rc.d/xencommons" ;;
     "hotplug/FreeBSD/rc.d/xendriverdomain") CONFIG_FILES="$CONFIG_FILES hotplug/FreeBSD/rc.d/xendriverdomain" ;;
+    "hotplug/FreeBSD/rc.d/xlpcid") CONFIG_FILES="$CONFIG_FILES hotplug/FreeBSD/rc.d/xlpcid" ;;
     "hotplug/Linux/init.d/sysconfig.xencommons") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/init.d/sysconfig.xencommons" ;;
     "hotplug/Linux/init.d/sysconfig.xendomains") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/init.d/sysconfig.xendomains" ;;
+    "hotplug/Linux/init.d/xlpcid") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/init.d/xlpcid" ;;
     "hotplug/Linux/init.d/xen-watchdog") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/init.d/xen-watchdog" ;;
     "hotplug/Linux/init.d/xencommons") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/init.d/xencommons" ;;
     "hotplug/Linux/init.d/xendomains") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/init.d/xendomains" ;;
@@ -10958,6 +10960,7 @@ do
     "hotplug/Linux/xendomains") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/xendomains" ;;
     "hotplug/NetBSD/rc.d/xencommons") CONFIG_FILES="$CONFIG_FILES hotplug/NetBSD/rc.d/xencommons" ;;
     "hotplug/NetBSD/rc.d/xendriverdomain") CONFIG_FILES="$CONFIG_FILES hotplug/NetBSD/rc.d/xendriverdomain" ;;
+    "hotplug/NetBSD/rc.d/xlpcid") CONFIG_FILES="$CONFIG_FILES hotplug/NetBSD/rc.d/xlpcid" ;;
     "ocaml/libs/xs/paths.ml") CONFIG_FILES="$CONFIG_FILES ocaml/libs/xs/paths.ml" ;;
     "ocaml/xenstored/paths.ml") CONFIG_FILES="$CONFIG_FILES ocaml/xenstored/paths.ml" ;;
     "ocaml/xenstored/oxenstored.conf") CONFIG_FILES="$CONFIG_FILES ocaml/xenstored/oxenstored.conf" ;;
@@ -10970,6 +10973,7 @@ do
     "hotplug/Linux/systemd/xendomains.service") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/systemd/xendomains.service" ;;
     "hotplug/Linux/systemd/xendriverdomain.service") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/systemd/xendriverdomain.service" ;;
     "hotplug/Linux/systemd/xenstored.service") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/systemd/xenstored.service" ;;
+    "hotplug/Linux/systemd/xenpcid.service") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/systemd/xenpcid.service" ;;
 
   *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;;
   esac
diff --git a/tools/configure.ac b/tools/configure.ac
index 3a2f6a2da9..d2b22e94a9 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -489,6 +489,7 @@ AS_IF([test "x$systemd" = "xy"], [
     hotplug/Linux/systemd/xendomains.service
     hotplug/Linux/systemd/xendriverdomain.service
     hotplug/Linux/systemd/xenstored.service
+    hotplug/Linux/systemd/xenpcid.service
     ])
 ])
 
diff --git a/tools/hotplug/FreeBSD/rc.d/xlpcid.in b/tools/hotplug/FreeBSD/rc.d/xlpcid.in
new file mode 100644
index 0000000000..2817bfaeed
--- /dev/null
+++ b/tools/hotplug/FreeBSD/rc.d/xlpcid.in
@@ -0,0 +1,75 @@
+#! /bin/bash
+#
+# xlpcid
+#
+# description: Run xlpcid daemon
+### BEGIN INIT INFO
+# Provides:          xlpcid
+# Short-Description: Start/stop xlpcid
+# Description:       Run xlpcid daemon
+### END INIT INFO
+#
+
+. @XEN_SCRIPT_DIR@/hotplugpath.sh
+
+xencommons_config=@CONFIG_DIR@/@CONFIG_LEAF_DIR@
+
+test -f $xencommons_config/xencommons && . $xencommons_config/xencommons
+
+XLPCID_PIDFILE="@XEN_RUN_DIR@/xlpcid.pid"
+
+# Source function library.
+if [ -e  /etc/init.d/functions ] ; then
+    . /etc/init.d/functions
+elif [ -e /lib/lsb/init-functions ] ; then
+    . /lib/lsb/init-functions
+    success () {
+        log_success_msg $*
+    }
+    failure () {
+        log_failure_msg $*
+    }
+else
+    success () {
+        echo $*
+    }
+    failure () {
+        echo $*
+    }
+fi
+
+start() {
+  echo Starting xl pcid...
+  ${sbindir}/xl pcid --pidfile=$XLPCID_PIDFILE $XLPCID_ARGS
+}
+
+stop() {
+  echo Stopping xl pcid...
+  if read 2>/dev/null <$XLPCID_PIDFILE pid; then
+    kill $pid
+    while kill -9 $pid >/dev/null 2>&1; do sleep 1; done
+    rm -f $XLPCID_PIDFILE
+  fi
+}
+
+case "$1" in
+  start)
+    start
+    ;;
+  stop)
+    stop
+    ;;
+  restart)
+    stop
+    start
+    ;;
+  status)
+    ;;
+  condrestart)
+    stop
+    start
+    ;;
+  *)
+    echo $"Usage: $0 {start|stop|status|restart|condrestart}"
+    exit 1
+esac
diff --git a/tools/hotplug/Linux/init.d/xlpcid.in b/tools/hotplug/Linux/init.d/xlpcid.in
new file mode 100644
index 0000000000..dce660098c
--- /dev/null
+++ b/tools/hotplug/Linux/init.d/xlpcid.in
@@ -0,0 +1,76 @@
+#! /bin/bash
+#
+# xlpcid
+#
+# description: Run xlpcid daemon
+### BEGIN INIT INFO
+# Provides:          xlpcid
+# Short-Description: Start/stop xlpcid
+# Description:       Run xlpcid daemon
+### END INIT INFO
+#
+
+. @XEN_SCRIPT_DIR@/hotplugpath.sh
+
+xencommons_config=@CONFIG_DIR@/@CONFIG_LEAF_DIR@
+
+test -f $xencommons_config/xencommons && . $xencommons_config/xencommons
+
+XLPCID_PIDFILE="@XEN_RUN_DIR@/xlpcid.pid"
+
+# Source function library.
+if [ -e  /etc/init.d/functions ] ; then
+    . /etc/init.d/functions
+elif [ -e /lib/lsb/init-functions ] ; then
+    . /lib/lsb/init-functions
+    success () {
+        log_success_msg $*
+    }
+    failure () {
+        log_failure_msg $*
+    }
+else
+    success () {
+        echo $*
+    }
+    failure () {
+        echo $*
+    }
+fi
+
+start() {
+  echo Starting xl pcid...
+  ${sbindir}/xl pcid --pidfile=$XLPCID_PIDFILE $XLPCID_ARGS
+}
+
+stop() {
+  echo Stopping xl pcid...
+  if read 2>/dev/null <$XLPCID_PIDFILE pid; then
+    kill $pid
+    while kill -9 $pid >/dev/null 2>&1; do sleep 1; done
+    rm -f $XLPCID_PIDFILE
+  fi
+}
+
+case "$1" in
+  start)
+    start
+    ;;
+  stop)
+    stop
+    ;;
+  restart)
+    stop
+    start
+    ;;
+  status)
+    ;;
+  condrestart)
+    stop
+    start
+    ;;
+  *)
+    echo $"Usage: $0 {start|stop|status|restart|condrestart}"
+    exit 1
+esac
+
diff --git a/tools/hotplug/Linux/systemd/Makefile b/tools/hotplug/Linux/systemd/Makefile
index e29889156d..49f0f87296 100644
--- a/tools/hotplug/Linux/systemd/Makefile
+++ b/tools/hotplug/Linux/systemd/Makefile
@@ -12,6 +12,7 @@ XEN_SYSTEMD_SERVICE += xendomains.service
 XEN_SYSTEMD_SERVICE += xen-watchdog.service
 XEN_SYSTEMD_SERVICE += xen-init-dom0.service
 XEN_SYSTEMD_SERVICE += xendriverdomain.service
+XEN_SYSTEMD_SERVICE += xenpcid.service
 
 ALL_XEN_SYSTEMD :=	$(XEN_SYSTEMD_MODULES)  \
 			$(XEN_SYSTEMD_MOUNT)	\
diff --git a/tools/hotplug/Linux/systemd/xenpcid.service.in b/tools/hotplug/Linux/systemd/xenpcid.service.in
new file mode 100644
index 0000000000..49c57f635a
--- /dev/null
+++ b/tools/hotplug/Linux/systemd/xenpcid.service.in
@@ -0,0 +1,10 @@
+[Unit]
+Description=Xen PCI host daemon
+ConditionVirtualization=xen
+
+[Service]
+Type=forking
+ExecStart=@sbindir@/xl pcid
+
+[Install]
+WantedBy=multi-user.target
diff --git a/tools/hotplug/NetBSD/rc.d/xlpcid.in b/tools/hotplug/NetBSD/rc.d/xlpcid.in
new file mode 100644
index 0000000000..2817bfaeed
--- /dev/null
+++ b/tools/hotplug/NetBSD/rc.d/xlpcid.in
@@ -0,0 +1,75 @@
+#! /bin/bash
+#
+# xlpcid
+#
+# description: Run xlpcid daemon
+### BEGIN INIT INFO
+# Provides:          xlpcid
+# Short-Description: Start/stop xlpcid
+# Description:       Run xlpcid daemon
+### END INIT INFO
+#
+
+. @XEN_SCRIPT_DIR@/hotplugpath.sh
+
+xencommons_config=@CONFIG_DIR@/@CONFIG_LEAF_DIR@
+
+test -f $xencommons_config/xencommons && . $xencommons_config/xencommons
+
+XLPCID_PIDFILE="@XEN_RUN_DIR@/xlpcid.pid"
+
+# Source function library.
+if [ -e  /etc/init.d/functions ] ; then
+    . /etc/init.d/functions
+elif [ -e /lib/lsb/init-functions ] ; then
+    . /lib/lsb/init-functions
+    success () {
+        log_success_msg $*
+    }
+    failure () {
+        log_failure_msg $*
+    }
+else
+    success () {
+        echo $*
+    }
+    failure () {
+        echo $*
+    }
+fi
+
+start() {
+  echo Starting xl pcid...
+  ${sbindir}/xl pcid --pidfile=$XLPCID_PIDFILE $XLPCID_ARGS
+}
+
+stop() {
+  echo Stopping xl pcid...
+  if read 2>/dev/null <$XLPCID_PIDFILE pid; then
+    kill $pid
+    while kill -9 $pid >/dev/null 2>&1; do sleep 1; done
+    rm -f $XLPCID_PIDFILE
+  fi
+}
+
+case "$1" in
+  start)
+    start
+    ;;
+  stop)
+    stop
+    ;;
+  restart)
+    stop
+    start
+    ;;
+  status)
+    ;;
+  condrestart)
+    stop
+    start
+    ;;
+  *)
+    echo $"Usage: $0 {start|stop|status|restart|condrestart}"
+    exit 1
+esac
diff --git a/tools/include/pcid.h b/tools/include/pcid.h
new file mode 100644
index 0000000000..6506b18d25
--- /dev/null
+++ b/tools/include/pcid.h
@@ -0,0 +1,94 @@
+/*
+    Common definitions for Xen PCI client-server protocol.
+    Copyright (C) 2021 EPAM Systems Inc.
+
+    This library is free software; you can redistribute it and/or
+    modify it under the terms of the GNU Lesser General Public
+    License as published by the Free Software Foundation; either
+    version 2.1 of the License, or (at your option) any later version.
+
+    This library is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+    Lesser General Public License for more details.
+
+    You should have received a copy of the GNU Lesser General Public
+    License along with this library; If not, see <http://www.gnu.org/licenses/>.
+*/
+
+#ifndef PCID_H
+#define PCID_H
+
+#define PCID_SRV_NAME           "pcid"
+#define PCID_XS_TOKEN           "pcid-token"
+
+#define PCI_RECEIVE_BUFFER_SIZE 4096
+#define PCI_MAX_SIZE_RX_BUF     MB(1)
+
+/*
+ *******************************************************************************
+ * Common request and response structures used by the pcid remote protocol are
+ * described below.
+ *******************************************************************************
+ * Request:
+ * +-------------+--------------+----------------------------------------------+
+ * | Field       | Type         | Comment                                      |
+ * +-------------+--------------+----------------------------------------------+
+ * | cmd         | string       | String identifying the command               |
+ * +-------------+--------------+----------------------------------------------+
+ *
+ * Response:
+ * +-------------+--------------+----------------------------------------------+
+ * | Field       | Type         | Comment                                      |
+ * +-------------+--------------+----------------------------------------------+
+ * | resp        | string       | Command string as in the request             |
+ * +-------------+--------------+----------------------------------------------+
+ * | error       | string       | "okay", "failed"                             |
+ * +-------------+--------------+----------------------------------------------+
+ * | error_desc  | string       | Optional error description string            |
+ * +-------------+--------------+----------------------------------------------+
+ *
+ * Notes.
+ * 1. Every request and response must contain the above mandatory structures.
+ * 2. If a bad packet or an unknown command is received by the server side,
+ * a valid reply with the corresponding error code must be sent to the client.
+ *
+ * Requests and responses, which require SBDF as part of their payload, must
+ * use the following convention for encoding SBDF value:
+ *
+ * pci_device object:
+ * +-------------+--------------+----------------------------------------------+
+ * | Field       | Type         | Comment                                      |
+ * +-------------+--------------+----------------------------------------------+
+ * | sbdf        | string       | SBDF string in form SSSS:BB:DD.F             |
+ * +-------------+--------------+----------------------------------------------+
+ */
+
+#define PCID_MSG_FIELD_CMD      "cmd"
+
+#define PCID_MSG_FIELD_RESP     "resp"
+#define PCID_MSG_FIELD_ERR      "error"
+#define PCID_MSG_FIELD_ERR_DESC "error_desc"
+
+/* pci_device object fields. */
+#define PCID_MSG_FIELD_SBDF     "sbdf"
+
+#define PCID_MSG_ERR_OK         "okay"
+#define PCID_MSG_ERR_FAILED     "failed"
+#define PCID_MSG_ERR_NA         "NA"
+
+#define PCID_SBDF_FMT           "%04x:%02x:%02x.%01x"
+
+int libxl_pcid_process(libxl_ctx *ctx);
+
+#endif /* PCID_H */
+
+/*
+ * Local variables:
+ *  mode: C
+ *  c-file-style: "linux"
+ *  indent-tabs-mode: t
+ *  c-basic-offset: 8
+ *  tab-width: 8
+ * End:
+ */
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 0941ad2cf4..72997eaac9 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -71,6 +71,7 @@ OBJS-y += libxl.o
 OBJS-y += libxl_create.o
 OBJS-y += libxl_dm.o
 OBJS-y += libxl_pci.o
+OBJS-y += libxl_pcid.o
 OBJS-y += libxl_vchan.o
 OBJS-y += libxl_dom.o
 OBJS-y += libxl_exec.o
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index f4c4f17545..b0c6de88ba 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -18,6 +18,10 @@
 
 #include "libxl_internal.h"
 
+#include <pcid.h>
+
+#include "libxl_vchan.h"
+
 #define PCI_BDF                "%04x:%02x:%02x.%01x"
 #define PCI_BDF_SHORT          "%02x:%02x.%01x"
 #define PCI_BDF_VDEVFN         "%04x:%02x:%02x.%01x@%02x"
@@ -25,6 +29,130 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
+static int pci_handle_response(libxl__gc *gc,
+                               const libxl__json_object *response,
+                               libxl__json_object **result)
+{
+    const libxl__json_object *command_obj;
+    const libxl__json_object *err_obj;
+    char *command_name;
+    int ret = 0;
+
+    *result = NULL;
+
+    command_obj = libxl__json_map_get(PCID_MSG_FIELD_RESP, response, JSON_STRING);
+    if (!command_obj) {
+        /* This is an unsupported or bad response. */
+        return 0;
+    }
+
+    err_obj = libxl__json_map_get(PCID_MSG_FIELD_ERR, response, JSON_STRING);
+    if (!err_obj) {
+        /* Bad packet without error code field. */
+        return 0;
+    }
+
+    if (strcmp(err_obj->u.string, PCID_MSG_ERR_OK) != 0) {
+        const libxl__json_object *err_desc_obj;
+
+        /* The response may contain an optional error string. */
+        err_desc_obj = libxl__json_map_get(PCID_MSG_FIELD_ERR_DESC,
+                                           response, JSON_STRING);
+        if (err_desc_obj)
+            LOG(ERROR, "%s", err_desc_obj->u.string);
+        else
+            LOG(ERROR, "%s", err_obj->u.string);
+        return ERROR_FAIL;
+    }
+
+    command_name = command_obj->u.string;
+    LOG(DEBUG, "command: %s", command_name);
+
+    return ret;
+}
+
+#define CONVERT_YAJL_GEN_TO_STATUS(gen) \
+    ((gen) == yajl_gen_status_ok ? yajl_status_ok : yajl_status_error)
+
+static char *pci_prepare_request(libxl__gc *gc, yajl_gen gen, char *cmd,
+                             libxl__json_object *args)
+{
+    const unsigned char *buf;
+    libxl_yajl_length len;
+    yajl_gen_status sts;
+    yajl_status ret;
+    char *request = NULL;
+    int rc;
+
+    ret = CONVERT_YAJL_GEN_TO_STATUS(yajl_gen_map_open(gen));
+    if (ret != yajl_status_ok)
+        return NULL;
+
+    rc = libxl__vchan_field_add_string(gc, gen, PCID_MSG_FIELD_CMD, cmd);
+    if (rc)
+        return NULL;
+
+    if (args) {
+        int idx = 0;
+        libxl__json_map_node *node = NULL;
+
+        assert(args->type == JSON_MAP);
+        for (idx = 0; idx < args->u.map->count; idx++) {
+            if (flexarray_get(args->u.map, idx, (void**)&node) != 0)
+                break;
+
+            ret = CONVERT_YAJL_GEN_TO_STATUS(libxl__yajl_gen_asciiz(gen, node->map_key));
+            if (ret != yajl_status_ok)
+                return NULL;
+            ret = libxl__json_object_to_yajl_gen(gc, gen, node->obj);
+            if (ret != yajl_status_ok)
+                return NULL;
+        }
+    }
+    ret = CONVERT_YAJL_GEN_TO_STATUS(yajl_gen_map_close(gen));
+    if (ret != yajl_status_ok)
+        return NULL;
+
+    sts = yajl_gen_get_buf(gen, &buf, &len);
+    if (sts != yajl_gen_status_ok)
+        return NULL;
+
+    request = libxl__sprintf(gc, "%s", buf);
+
+    vchan_dump_gen(gc, gen);
+
+    return request;
+}
+
+struct vchan_info *pci_vchan_get_client(libxl__gc *gc);
+struct vchan_info *pci_vchan_get_client(libxl__gc *gc)
+{
+    struct vchan_info *vchan;
+
+    vchan = libxl__zalloc(gc, sizeof(*vchan));
+    if (!vchan)
+        goto out;
+    vchan->state = vchan_new_client(gc, PCID_SRV_NAME);
+    if (!(vchan->state)) {
+        vchan = NULL;
+        goto out;
+    }
+
+    vchan->handle_response = pci_handle_response;
+    vchan->prepare_request = pci_prepare_request;
+    vchan->receive_buf_size = PCI_RECEIVE_BUFFER_SIZE;
+    vchan->max_buf_size = PCI_MAX_SIZE_RX_BUF;
+
+out:
+    return vchan;
+}
+
+void pci_vchan_free(libxl__gc *gc, struct vchan_info *vchan);
+void pci_vchan_free(libxl__gc *gc, struct vchan_info *vchan)
+{
+    vchan_fini_one(gc, vchan->state);
+}
+
 static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
diff --git a/tools/libs/light/libxl_pcid.c b/tools/libs/light/libxl_pcid.c
new file mode 100644
index 0000000000..958fe387f9
--- /dev/null
+++ b/tools/libs/light/libxl_pcid.c
@@ -0,0 +1,428 @@
+/*
+    Utils for xl pcid daemon
+
+    Copyright (C) 2021 EPAM Systems Inc.
+
+    This library is free software; you can redistribute it and/or
+    modify it under the terms of the GNU Lesser General Public
+    License as published by the Free Software Foundation; either
+    version 2.1 of the License, or (at your option) any later version.
+
+    This library is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+    Lesser General Public License for more details.
+
+    You should have received a copy of the GNU Lesser General Public
+    License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define _GNU_SOURCE  // required for strchrnul()
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+#include "libxl_vchan.h"
+
+#include <libxl_utils.h>
+#include <libxlutil.h>
+
+#include <xenstore.h>
+
+#include <libxl.h>
+#include <libxl_json.h>
+#include <dirent.h>
+
+#include <pthread.h>
+#include <pcid.h>
+
+#define DOM0_ID 0
+
+struct vchan_client {
+    XEN_LIST_ENTRY(struct vchan_client) list;
+
+    /* This is the watch entry fired for this client. */
+    char **watch_ret;
+    /* Length of the watch_ret[XS_WATCH_PATH]. */
+    size_t watch_len;
+
+    struct vchan_info info;
+
+    /*
+     * This context is used by the processing loop to create its own gc
+     * and use it while processing commands, so we do not run out of memory.
+     */
+    libxl_ctx *ctx;
+    /* This gc holds all allocations made for the client itself. */
+    libxl__gc gc[1];
+    pthread_t run_thread;
+};
+
+static XEN_LIST_HEAD(clients_list, struct vchan_client) vchan_clients;
+
+static pthread_mutex_t vchan_client_mutex;
+
+static int make_error_reply(libxl__gc *gc, yajl_gen gen, char *desc,
+                            char *command_name)
+{
+    int rc;
+
+    rc = libxl__vchan_field_add_string(gc, gen, PCID_MSG_FIELD_RESP,
+                                       command_name);
+    if (rc)
+        return rc;
+
+    rc = libxl__vchan_field_add_string(gc, gen, PCID_MSG_FIELD_ERR,
+                                       PCID_MSG_ERR_FAILED);
+    if (rc)
+        return rc;
+
+    rc = libxl__vchan_field_add_string(gc, gen, PCID_MSG_FIELD_ERR_DESC, desc);
+    if (rc)
+        return rc;
+
+    return 0;
+}
+
+static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
+                               const libxl__json_object *request)
+{
+    const libxl__json_object *command_obj;
+    libxl__json_object *command_response = NULL;
+    char *command_name;
+    int ret = 0;
+
+    yajl_gen_map_open(gen);
+
+    command_obj = libxl__json_map_get(PCID_MSG_FIELD_CMD, request, JSON_STRING);
+    if (!command_obj) {
+        /* This is an unsupported or bad request. */
+        ret = make_error_reply(gc, gen, "Unsupported request or bad packet",
+                               PCID_MSG_ERR_NA);
+        goto out;
+    }
+
+    command_name = command_obj->u.string;
+
+    /*
+     * No commands are supported yet: make an error reply and take
+     * the error path.
+     */
+    ret = make_error_reply(gc, gen, "Unsupported command",
+                           command_name);
+    if (!ret)
+        ret = ERROR_NOTFOUND;
+
+    if (ret) {
+        /*
+         * The command handler on error must provide a valid response,
+         * so we don't need to add any other field below.
+         */
+        ret = 0;
+        goto out;
+    }
+
+    if (command_response) {
+        ret = libxl__json_object_to_yajl_gen(gc, gen, command_response);
+        if (ret)
+            goto out;
+    }
+
+    ret = libxl__vchan_field_add_string(gc, gen, PCID_MSG_FIELD_RESP,
+                                        command_name);
+    if (ret)
+        goto out;
+
+    ret = libxl__vchan_field_add_string(gc, gen, PCID_MSG_FIELD_ERR,
+                                        PCID_MSG_ERR_OK);
+out:
+    yajl_gen_map_close(gen);
+
+    vchan_dump_gen(gc, gen);
+
+    return ret;
+}
+
+static char *pcid_prepare_response(libxl__gc *gc, yajl_gen gen)
+{
+    const unsigned char *buf;
+    libxl_yajl_length len;
+    yajl_gen_status sts;
+    char *reply = NULL;
+
+    sts = yajl_gen_get_buf(gen, &buf, &len);
+    if (sts != yajl_gen_status_ok)
+        goto out;
+
+    reply = libxl__sprintf(gc, "%s", buf);
+
+    vchan_dump_gen(gc, gen);
+
+out:
+    return reply;
+}
+
+static void server_fini_one(libxl__gc *gc, struct vchan_client *client)
+{
+    pthread_mutex_lock(&vchan_client_mutex);
+    XEN_LIST_REMOVE(client, list);
+    pthread_mutex_unlock(&vchan_client_mutex);
+
+    GC_FREE;
+    free(client->watch_ret);
+    free(client);
+}
+
+static bool is_vchan_exist(libxl_ctx *ctx, char *watch_dir)
+{
+    char **dir = NULL;
+    unsigned int nb;
+    bool ret = false;
+
+    dir = xs_directory(ctx->xsh, XBT_NULL, watch_dir, &nb);
+    if (dir) {
+        free(dir);
+        ret = true;
+    }
+    return ret;
+}
+
+static void *client_thread(void *arg)
+{
+    struct vchan_client *client = arg;
+
+    while (true) {
+        int ret;
+        /*
+         * Create a fresh garbage collector for each packet we
+         * process and dispose it right away, so allocations do
+         * not accumulate while the loop runs.
+         */
+        GC_INIT(client->ctx);
+        ret = vchan_process_command(gc, &client->info);
+        GC_FREE;
+
+        if ((ret == ERROR_NOTFOUND) || (ret == ERROR_INVAL))
+            continue;
+        if (ret < 0)
+            break;
+    }
+    vchan_fini_one(client->gc, client->info.state);
+    server_fini_one(client->gc, client);
+    return NULL;
+}
+
+#define DEFAULT_THREAD_STACKSIZE (24 * 1024)
+/* NetBSD doesn't have PTHREAD_STACK_MIN. */
+#ifndef PTHREAD_STACK_MIN
+#define PTHREAD_STACK_MIN 0
+#endif
+
+#define READ_THREAD_STACKSIZE                           \
+    ((DEFAULT_THREAD_STACKSIZE < PTHREAD_STACK_MIN) ?   \
+    PTHREAD_STACK_MIN : DEFAULT_THREAD_STACKSIZE)
+
+static bool init_client_thread(libxl__gc *gc, struct vchan_client *new_client)
+{
+
+    sigset_t set, old_set;
+    pthread_attr_t attr;
+    static size_t stack_size;
+#ifdef USE_DLSYM
+    size_t (*getsz)(pthread_attr_t *attr);
+#endif
+
+    if (pthread_attr_init(&attr) != 0)
+        return false;
+    if (!stack_size) {
+#ifdef USE_DLSYM
+        getsz = dlsym(RTLD_DEFAULT, "__pthread_get_minstack");
+        if (getsz)
+            stack_size = getsz(&attr);
+#endif
+        if (stack_size < READ_THREAD_STACKSIZE)
+            stack_size = READ_THREAD_STACKSIZE;
+    }
+    if (pthread_attr_setstacksize(&attr, stack_size) != 0) {
+        pthread_attr_destroy(&attr);
+        return false;
+    }
+
+    sigfillset(&set);
+    pthread_sigmask(SIG_SETMASK, &set, &old_set);
+
+    if (pthread_create(&new_client->run_thread, &attr, client_thread,
+                       new_client) != 0) {
+        pthread_sigmask(SIG_SETMASK, &old_set, NULL);
+        pthread_attr_destroy(&attr);
+        return false;
+    }
+    pthread_sigmask(SIG_SETMASK, &old_set, NULL);
+    pthread_attr_destroy(&attr);
+
+    return true;
+}
+
+static void init_new_client(libxl_ctx *ctx, libxl__gc *gc,
+                            struct clients_list *list, char **watch_ret)
+{
+    struct vchan_client *new_client;
+    char *xs_path = watch_ret[XS_WATCH_PATH];
+
+    LOG(DEBUG, "New client at \"%s\"", xs_path);
+
+    new_client = malloc(sizeof(*new_client));
+    if (!new_client) {
+        LOGE(ERROR, "Failed to allocate new client at \"%s\"", xs_path);
+        return;
+    }
+
+    memset(new_client, 0, sizeof(*new_client));
+
+    new_client->watch_ret = watch_ret;
+    new_client->watch_len = strlen(xs_path);
+    new_client->ctx = ctx;
+    /*
+     * Initialize this client's own GC, so its memory can be disposed
+     * of later. Use it from now on.
+     */
+    LIBXL_INIT_GC(new_client->gc[0], ctx);
+
+    new_client->info.state = vchan_init_new_state(new_client->gc, DOM0_ID,
+                                                  xs_path, true);
+    if (!(new_client->info.state)) {
+        LOGE(ERROR, "Failed to add new client at \"%s\"", xs_path);
+        server_fini_one(new_client->gc, new_client);
+        return;
+    }
+
+    new_client->info.handle_request = pcid_handle_request;
+    new_client->info.prepare_response = pcid_prepare_response;
+    new_client->info.receive_buf_size = PCI_RECEIVE_BUFFER_SIZE;
+    new_client->info.max_buf_size = PCI_MAX_SIZE_RX_BUF;
+
+    if (!init_client_thread(new_client->gc, new_client)) {
+        LOGE(ERROR, "Failed to create client's thread for \"%s\"", xs_path);
+        server_fini_one(new_client->gc, new_client);
+        return;
+    }
+
+    pthread_mutex_lock(&vchan_client_mutex);
+    XEN_LIST_INSERT_HEAD(&vchan_clients, new_client, list);
+    pthread_mutex_unlock(&vchan_client_mutex);
+}
+
+static void terminate_clients(void)
+{
+    struct vchan_client *client;
+    pthread_t thread;
+
+    /*
+     * Don't hold the mutex across pthread_join(): exiting client
+     * threads take it in server_fini_one() to unlink themselves.
+     */
+    for (;;) {
+        pthread_mutex_lock(&vchan_client_mutex);
+        client = XEN_LIST_FIRST(&vchan_clients);
+        if (client)
+            thread = client->run_thread;
+        pthread_mutex_unlock(&vchan_client_mutex);
+        if (!client)
+            break;
+        pthread_join(thread, NULL);
+    }
+}
+
+int libxl_pcid_process(libxl_ctx *ctx)
+{
+    GC_INIT(ctx);
+    char *xs_path, *str;
+    char **watch_ret;
+    unsigned int watch_num;
+    libxl_domid domid;
+    int ret;
+
+    pthread_mutex_init(&vchan_client_mutex, NULL);
+
+    str = xs_read(ctx->xsh, 0, "domid", NULL);
+    if (!str) {
+        LOGE(ERROR, "Can't read own domid");
+        ret = -ENOENT;
+        goto out;
+    }
+
+    ret = sscanf(str, "%d", &domid);
+    free(str);
+    if (ret != 1) {
+        LOG(ERROR, "Own domid is not an integer");
+        ret = -EINVAL;
+        goto out;
+    }
+
+    xs_path = vchan_get_server_xs_path(gc, domid, PCID_SRV_NAME);
+
+    /* Recreate the base folder: remove all leftovers. */
+    ret = libxl__xs_rm_checked(gc, XBT_NULL, xs_path);
+    if (ret)
+        goto out;
+
+    if (!xs_mkdir(CTX->xsh, XBT_NULL, xs_path)) {
+        LOGE(ERROR, "xenstore mkdir failed: `%s'", xs_path);
+        ret = ERROR_FAIL;
+        goto out;
+    }
+
+    /* Wait for vchan client to create a new UUID under the server's folder. */
+    if (!xs_watch(CTX->xsh, xs_path, PCID_XS_TOKEN)) {
+        LOGE(ERROR, "xs_watch (%s) failed", xs_path);
+        ret = ERROR_FAIL;
+        goto out;
+    }
+
+    while ((watch_ret = xs_read_watch(CTX->xsh, &watch_num))) {
+        struct vchan_client *client;
+        size_t len;
+        bool found;
+
+        /*
+         * Any change under the base directory will fire an event, so we need
+         * to filter if this is indeed a new client or it is because vchan
+         * server creates nodes under its UUID.
+         *
+         * Never try to instantiate a vchan server right under xs_path.
+         */
+        if (!strcmp(watch_ret[XS_WATCH_PATH], xs_path))
+            continue;
+
+        found = false;
+        len = strlen(watch_ret[XS_WATCH_PATH]);
+
+        pthread_mutex_lock(&vchan_client_mutex);
+        XEN_LIST_FOREACH(client, &vchan_clients, list) {
+            str = client->watch_ret[XS_WATCH_PATH];
+
+            if (strstr(watch_ret[XS_WATCH_PATH], str)) {
+                /*
+                 * Base path is a substring of the current path, so it can be:
+                 *  - a new node with different name, but starting with str
+                 *  - a subnode under str, so it will have '/' after str
+                 *  - same string
+                 */
+                if (len == client->watch_len) {
+                    found = true;
+                    break;
+                }
+                if (len > client->watch_len) {
+                    if (watch_ret[XS_WATCH_PATH][client->watch_len] == '/') {
+                        found = true;
+                        break;
+                    }
+                }
+            }
+        }
+        pthread_mutex_unlock(&vchan_client_mutex);
+
+        if (!found && is_vchan_exist(ctx, watch_ret[XS_WATCH_PATH]))
+            init_new_client(ctx, gc, &vchan_clients, watch_ret);
+    }
+
+    xs_unwatch(CTX->xsh, xs_path, PCID_XS_TOKEN);
+
+out:
+    terminate_clients();
+    GC_FREE;
+    pthread_mutex_destroy(&vchan_client_mutex);
+    return ret;
+}
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index da4591b6a9..e17550e678 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -22,7 +22,7 @@ XL_OBJS = xl.o xl_cmdtable.o xl_sxp.o xl_utils.o $(XL_OBJS-y)
 XL_OBJS += xl_parse.o xl_cpupool.o xl_flask.o
 XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o
 XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o
-XL_OBJS += xl_info.o xl_console.o xl_misc.o
+XL_OBJS += xl_info.o xl_console.o xl_misc.o xl_pcid.o
 XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o
 XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o
 
@@ -34,7 +34,7 @@ $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs i
 all: xl
 
 xl: $(XL_OBJS)
-	$(CC) $(LDFLAGS) -o $@ $(XL_OBJS) $(LDLIBS_libxenutil) $(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) $(LDLIBS_libxenvchan) -lyajl $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $(XL_OBJS) $(LDLIBS_libxenstore) $(LDLIBS_libxenutil) $(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) $(LDLIBS_libxenvchan) -lyajl $(APPEND_LDFLAGS)
 
 .PHONY: install
 install: all
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 72538d6a81..98a44c12e9 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -201,6 +201,7 @@ int main_loadpolicy(int argc, char **argv);
 int main_remus(int argc, char **argv);
 #endif
 int main_devd(int argc, char **argv);
+int main_pcid(int argc, char **argv);
 #if defined(__i386__) || defined(__x86_64__)
 int main_psr_hwinfo(int argc, char **argv);
 int main_psr_cmt_attach(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 35182ca196..54574a7ed3 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -545,6 +545,13 @@ const struct cmd_spec cmd_table[] = {
       "-F                      Run in the foreground.\n"
       "-p, --pidfile [FILE]    Write PID to pidfile when daemonizing.",
     },
+    { "pcid",
+      &main_pcid, 0, 1,
+      "Daemon that acts as a server for libxl PCI clients",
+      "[options]",
+      "-f                      Run in the foreground.\n"
+      "-p, --pidfile [FILE]    Write PID to pidfile when daemonizing.",
+    },
 #if defined(__i386__) || defined(__x86_64__)
     { "psr-hwinfo",
       &main_psr_hwinfo, 0, 1,
diff --git a/tools/xl/xl_pcid.c b/tools/xl/xl_pcid.c
new file mode 100644
index 0000000000..a5d38e672f
--- /dev/null
+++ b/tools/xl/xl_pcid.c
@@ -0,0 +1,81 @@
+/*
+    Pcid daemon that acts as a server for libxl PCI clients
+
+    Copyright (C) 2021 EPAM Systems Inc.
+
+    This library is free software; you can redistribute it and/or
+    modify it under the terms of the GNU Lesser General Public
+    License as published by the Free Software Foundation; either
+    version 2.1 of the License, or (at your option) any later version.
+
+    This library is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+    Lesser General Public License for more details.
+
+    You should have received a copy of the GNU Lesser General Public
+    License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define _GNU_SOURCE  // required for strchrnul()
+
+#include <libxl_utils.h>
+#include <libxlutil.h>
+
+#include "xl.h"
+#include "xl_utils.h"
+#include "xl_parse.h"
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <errno.h>
+
+#include <pcid.h>
+#include <xenstore.h>
+
+/*
+ * TODO: make this code ready for a multi-threaded environment.
+ * The code currently assumes that a client in a given domain issues
+ * only one request to the server at a time. In the future we must
+ * handle several in-flight requests, possibly from different domains,
+ * at the same time, which requires proper synchronization of the
+ * global state.
+ */
+
+int main_pcid(int argc, char *argv[])
+{
+    int opt = 0, daemonize = 1, ret;
+    const char *pidfile = NULL;
+    static const struct option opts[] = {
+        {"pidfile", 1, 0, 'p'},
+        COMMON_LONG_OPTS,
+        {0, 0, 0, 0}
+    };
+
+    SWITCH_FOREACH_OPT(opt, "fp:", opts, "pcid", 0) {
+    case 'f':
+        daemonize = 0;
+        break;
+    case 'p':
+        pidfile = optarg;
+        break;
+    }
+
+    if (daemonize) {
+        ret = do_daemonize("xlpcid", pidfile);
+        if (ret) {
+            ret = (ret == 1) ? 0 : ret;
+            goto out_daemon;
+        }
+    }
+
+    /* The server loop's status becomes our exit code. */
+    ret = libxl_pcid_process(ctx);
+
+
+out_daemon:
+    exit(ret);
+}
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477861.740786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1ER-0005Ug-TY; Sun, 15 Jan 2023 11:31:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477861.740786; Sun, 15 Jan 2023 11:31:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1ER-0005U4-Lv; Sun, 15 Jan 2023 11:31:23 +0000
Received: by outflank-mailman (input) for mailman id 477861;
 Sun, 15 Jan 2023 11:31:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytYW=5M=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1pH1EP-0004co-R9
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 11:31:21 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2199714c-94c8-11ed-b8d0-410ff93cb8f0;
 Sun, 15 Jan 2023 12:31:19 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id ud5so62065038ejc.4
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 03:31:19 -0800 (PST)
Received: from dsemenets-HP-EliteBook-850-G8-Notebook-PC.. ([91.219.254.73])
 by smtp.gmail.com with ESMTPSA id
 uj42-20020a170907c9aa00b0084d4e612a22sm7459961ejc.67.2023.01.15.03.31.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 03:31:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2199714c-94c8-11ed-b8d0-410ff93cb8f0
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH v3 04/10] tools/libs/light: pcid: implement list_assignable command
Date: Sun, 15 Jan 2023 13:31:05 +0200
Message-Id: <20230115113111.1207605-5-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 tools/include/pcid.h          | 19 ++++++++++++
 tools/libs/light/libxl_pci.c  | 54 ++++++++++++++++++++++-----------
 tools/libs/light/libxl_pcid.c | 56 ++++++++++++++++++++++++++++++-----
 3 files changed, 103 insertions(+), 26 deletions(-)

diff --git a/tools/include/pcid.h b/tools/include/pcid.h
index 6506b18d25..452bdc11cf 100644
--- a/tools/include/pcid.h
+++ b/tools/include/pcid.h
@@ -79,6 +79,25 @@
 
 #define PCID_SBDF_FMT           "%04x:%02x:%02x.%01x"
 
+/*
+ *******************************************************************************
+ * List assignable devices
+ *
+ * This command lists PCI devices that can be passed through to a guest domain.
+ *
+ * Request (see other mandatory fields above):
+ *  - "cmd" field of the request must be set to "list_assignable".
+ *
+ * Response (see other mandatory fields above):
+ *  - "resp" field of the response must be set to "list_assignable".
+ * Command specific response data:
+ * +-------------+--------------+----------------------------------------------+
+ * | devices     | list         | List of pci_device objects                   |
+ * +-------------+--------------+----------------------------------------------+
+ */
+#define PCID_CMD_LIST_ASSIGNABLE        "list_assignable"
+#define PCID_MSG_FIELD_DEVICES          "devices"
+
 int libxl_pcid_process(libxl_ctx *ctx);
 
 #endif /* PCID_H */
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index b0c6de88ba..321543f5bf 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -29,6 +29,18 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
+static int process_list_assignable(libxl__gc *gc,
+                                   const libxl__json_object *response,
+                                   libxl__json_object **result)
+{
+    *result = (libxl__json_object *)libxl__json_map_get(PCID_MSG_FIELD_DEVICES,
+                                                        response, JSON_ARRAY);
+    if (!*result)
+        return ERROR_INVAL;
+
+    return 0;
+}
+
 static int pci_handle_response(libxl__gc *gc,
                                const libxl__json_object *response,
                                libxl__json_object **result)
@@ -68,6 +80,9 @@ static int pci_handle_response(libxl__gc *gc,
     command_name = command_obj->u.string;
     LOG(DEBUG, "command: %s", command_name);
 
+    if (strcmp(command_name, PCID_CMD_LIST_ASSIGNABLE) == 0)
+        ret = process_list_assignable(gc, response, result);
+
     return ret;
 }
 
@@ -124,8 +139,7 @@ static char *pci_prepare_request(libxl__gc *gc, yajl_gen gen, char *cmd,
     return request;
 }
 
-struct vchan_info *pci_vchan_get_client(libxl__gc *gc);
-struct vchan_info *pci_vchan_get_client(libxl__gc *gc)
+static struct vchan_info *pci_vchan_get_client(libxl__gc *gc)
 {
     struct vchan_info *vchan;
 
@@ -147,8 +161,7 @@ out:
     return vchan;
 }
 
-void pci_vchan_free(libxl__gc *gc, struct vchan_info *vchan);
-void pci_vchan_free(libxl__gc *gc, struct vchan_info *vchan)
+static void pci_vchan_free(libxl__gc *gc, struct vchan_info *vchan)
 {
     vchan_fini_one(gc, vchan->state);
 }
@@ -561,26 +574,29 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
     libxl_device_pci *pcis = NULL, *new;
-    struct dirent *de;
-    DIR *dir;
+    struct vchan_info *vchan;
+    libxl__json_object *result, *dev_obj;
+    int i;
 
     *num = 0;
 
-    dir = opendir(SYSFS_PCIBACK_DRIVER);
-    if (NULL == dir) {
-        if (errno == ENOENT) {
-            LOG(ERROR, "Looks like pciback driver not loaded");
-        } else {
-            LOGE(ERROR, "Couldn't open %s", SYSFS_PCIBACK_DRIVER);
-        }
+    vchan = pci_vchan_get_client(gc);
+    if (!vchan)
         goto out;
-    }
 
-    while((de = readdir(dir))) {
+    result = vchan_send_command(gc, vchan, PCID_CMD_LIST_ASSIGNABLE, NULL);
+    if (!result)
+        goto vchan_free;
+
+    for (i = 0; (dev_obj = libxl__json_array_get(result, i)); i++) {
+        const char *sbdf_str = libxl__json_object_get_string(dev_obj);
         unsigned int dom, bus, dev, func;
-        char *name;
+        const char *name;
+
+        if (!sbdf_str)
+            continue;
 
-        if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
+        if (sscanf(sbdf_str, PCID_SBDF_FMT, &dom, &bus, &dev, &func) != 4)
             continue;
 
         new = realloc(pcis, ((*num) + 1) * sizeof(*new));
@@ -602,7 +618,9 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         (*num)++;
     }
 
-    closedir(dir);
+vchan_free:
+    pci_vchan_free(gc, vchan);
+
 out:
     GC_FREE;
     return pcis;
diff --git a/tools/libs/light/libxl_pcid.c b/tools/libs/light/libxl_pcid.c
index 958fe387f9..bab08b72cf 100644
--- a/tools/libs/light/libxl_pcid.c
+++ b/tools/libs/light/libxl_pcid.c
@@ -84,6 +84,41 @@ static int make_error_reply(libxl__gc *gc, yajl_gen gen, char *desc,
     return 0;
 }
 
+static int process_list_assignable(libxl__gc *gc, yajl_gen gen,
+                                   char *command_name,
+                                   const struct libxl__json_object *request,
+                                   struct libxl__json_object **response)
+{
+    struct dirent *de;
+    DIR *dir = NULL;
+
+    dir = opendir(SYSFS_PCI_DEV);
+    if (dir == NULL) {
+        make_error_reply(gc, gen, strerror(errno), command_name);
+        return ERROR_FAIL;
+    }
+
+    libxl__yajl_gen_asciiz(gen, PCID_MSG_FIELD_DEVICES);
+
+    *response = libxl__json_object_alloc(gc, JSON_ARRAY);
+
+    while ((de = readdir(dir))) {
+        unsigned int dom, bus, dev, func;
+
+        if (sscanf(de->d_name, PCID_SBDF_FMT, &dom, &bus, &dev, &func) != 4)
+            continue;
+
+        struct libxl__json_object *node =
+            libxl__json_object_alloc(gc, JSON_STRING);
+        node->u.string = libxl__strdup(gc, de->d_name);
+        flexarray_append((*response)->u.array, node);
+    }
+
+    closedir(dir);
+
+    return 0;
+}
+
 static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
                                const libxl__json_object *request)
 {
@@ -104,14 +139,19 @@ static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
 
     command_name = command_obj->u.string;
 
-    /*
-     * This is an unsupported command: make a reply and proceed over
-     * the error path.
-     */
-    ret = make_error_reply(gc, gen, "Unsupported command",
-                           command_name);
-    if (!ret)
-        ret = ERROR_NOTFOUND;
+    if (strcmp(command_name, PCID_CMD_LIST_ASSIGNABLE) == 0)
+        ret = process_list_assignable(gc, gen, command_name,
+                                      request, &command_response);
+    else {
+        /*
+         * This is an unsupported command: make a reply and proceed over
+         * the error path.
+         */
+        ret = make_error_reply(gc, gen, "Unsupported command",
+                               command_name);
+        if (!ret)
+            ret = ERROR_NOTFOUND;
+    }
 
     if (ret) {
         /*
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477862.740790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1ES-0005cC-EJ; Sun, 15 Jan 2023 11:31:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477862.740790; Sun, 15 Jan 2023 11:31:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1ES-0005aq-7m; Sun, 15 Jan 2023 11:31:24 +0000
Received: by outflank-mailman (input) for mailman id 477862;
 Sun, 15 Jan 2023 11:31:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytYW=5M=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1pH1EQ-0004ci-BW
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 11:31:22 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 22474e46-94c8-11ed-91b6-6bf2151ebd3b;
 Sun, 15 Jan 2023 12:31:20 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id cf18so55655629ejb.5
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 03:31:20 -0800 (PST)
Received: from dsemenets-HP-EliteBook-850-G8-Notebook-PC.. ([91.219.254.73])
 by smtp.gmail.com with ESMTPSA id
 uj42-20020a170907c9aa00b0084d4e612a22sm7459961ejc.67.2023.01.15.03.31.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 03:31:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22474e46-94c8-11ed-91b6-6bf2151ebd3b
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [RFC PATCH v3 05/10] tools/light: pci: describe [MAKE|REVERT]_ASSIGNABLE commands
Date: Sun, 15 Jan 2023 13:31:06 +0200
Message-Id: <20230115113111.1207605-6-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

Add protocol definitions for two more commands: one to make a PCI
device assignable, and another to revert it to its original state.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 tools/include/pcid.h | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/tools/include/pcid.h b/tools/include/pcid.h
index 452bdc11cf..118f8105cf 100644
--- a/tools/include/pcid.h
+++ b/tools/include/pcid.h
@@ -98,6 +98,44 @@
 #define PCID_CMD_LIST_ASSIGNABLE        "list_assignable"
 #define PCID_MSG_FIELD_DEVICES          "devices"
 
+/*
+ *******************************************************************************
+ * Make device assignable
+ *
+ * This command makes the given device assignable by ensuring that the
+ * OS will not try to access it.
+ *
+ * Request (see other mandatory fields above):
+ *  - "cmd" field of the request must be set to "make_assignable".
+ *  - "sbdf" SBDF of the device, in the format defined by PCID_SBDF_FMT.
+ *  - "rebind" = true if the daemon needs to save the original driver
+ *    name, so the device can later be rebound to it.
+ *
+ * Response (see other mandatory fields above):
+ *  - "resp" field of the response must be set to "make_assignable".
+ */
+#define PCID_CMD_MAKE_ASSIGNABLE        "make_assignable"
+#define PCID_MSG_FIELD_REBIND           "rebind"
+
+/*
+ *******************************************************************************
+ * Revert device from assignable state
+ *
+ * This command reverts the effect of the "make_assignable" command,
+ * so the device can be used by the OS again.
+ *
+ * Request (see other mandatory fields above):
+ *  - "cmd" field of the request must be set to "revert_assignable".
+ *  - "sbdf" SBDF of the device, in the format defined by PCID_SBDF_FMT.
+ *  - "rebind" = true if the daemon needs to rebind the device to its
+ *    original driver, whose name was saved by the "make_assignable" command.
+ *
+ * Response (see other mandatory fields above):
+ *  - "resp" field of the response must be set to "revert_assignable".
+ */
+#define PCID_CMD_REVERT_ASSIGNABLE      "revert_assignable"
+
+
 int libxl_pcid_process(libxl_ctx *ctx);
 
 #endif /* PCID_H */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:31:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477863.740810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1ET-00063t-QO; Sun, 15 Jan 2023 11:31:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477863.740810; Sun, 15 Jan 2023 11:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1ET-000630-HR; Sun, 15 Jan 2023 11:31:25 +0000
Received: by outflank-mailman (input) for mailman id 477863;
 Sun, 15 Jan 2023 11:31:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytYW=5M=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1pH1ER-0004ci-SL
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 11:31:23 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 239eae73-94c8-11ed-91b6-6bf2151ebd3b;
 Sun, 15 Jan 2023 12:31:23 +0100 (CET)
Received: by mail-ej1-x62a.google.com with SMTP id bk15so4531517ejb.9
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 03:31:23 -0800 (PST)
Received: from dsemenets-HP-EliteBook-850-G8-Notebook-PC.. ([91.219.254.73])
 by smtp.gmail.com with ESMTPSA id
 uj42-20020a170907c9aa00b0084d4e612a22sm7459961ejc.67.2023.01.15.03.31.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 03:31:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 239eae73-94c8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=X6pojTqohgbfc6HlNckJ9zlssPABsEQZhoi9N0xRTxw=;
        b=BcMKaFa411QobFIhInxDjJUI8JkLC8GEhG0hTQH8KxYKVDH0SstOxoy9CFMAMbBH/R
         Xo48qU5Bf7Zghmb3isT6DkmMkSG2CYv5dZSGKJU5lg6jLESu8I5fQmWH/krF/D8s2VOB
         UszVy4v09Xpc2AfGnx1SF7OnjTRuUWZwPehpGBWnVk+cB1iGEIhmMTJkE5rb/O545w3a
         lr9w/hgGCMXc73EADqXkudFaP9L557UesX7qiuUv3iohljLoFDByox/qI8Ccvv6vQVPw
         g3R7fotNvFWJX4GUqLkBeO1+dfBOk64iXr48pDknJX1PKSqjXXDclGVTpF8S0Qs5qDVt
         Z3dQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=X6pojTqohgbfc6HlNckJ9zlssPABsEQZhoi9N0xRTxw=;
        b=WzpTkqOl2dG8XmOoZGvTtGe5NI+4lRTCxaMQAYNIXLYa769kpFN9QrnR9CcLJvJaz3
         7YF5sQHPQ5Khsylq3+SkI/afoGwTdATpNTn1rrXG6cA+3KpvvdepdB/2f0Py66W/Hc5p
         OAyIgp4utvPW4CAOCf5FXbifTzn9iYWUH83H9iwqqlX4byWJcDaubTfXL7/gCepnJOx0
         Zo2n2+bjxWFctHJJLrJRVRmOeOFkyxRCmzrfVvLdwm9yeECX/0TYIZ+lU9WPW351/BA6
         Uy8tiVI0ubkhxcQiFfdMsu3Wo8TZ6katNsWRWscQYQpsHO4HK5ZQ0z49ep/9QK+Zozlc
         1JvA==
X-Gm-Message-State: AFqh2krgs44zzqBuy1nUiRuUEFwiuB2+tfdW4Jp9kVAQV2/UnLPnWw8t
	0aq6CVRXytqeRp8Vo9Jd7iB655QhNVkvKFq6
X-Google-Smtp-Source: AMrXdXvOSIf8FV/MCs4zS31QZqNbKyoeuGhPIH8lGD3/tJRCRp+7UYM91FAowYKZ1MYzHrDOMgLtiw==
X-Received: by 2002:a17:906:b053:b0:7ad:ca80:5669 with SMTP id bj19-20020a170906b05300b007adca805669mr89988229ejb.64.1673782282413;
        Sun, 15 Jan 2023 03:31:22 -0800 (PST)
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH v3 07/10] tools/libs/light: pcid: implement is_device_assigned command
Date: Sun, 15 Jan 2023 13:31:08 +0200
Message-Id: <20230115113111.1207605-8-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>
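
This patch implements the "is_device_assigned" command on the pcid side
and switches the libxl client code to use it. A hypothetical exchange
over the pcid protocol might look like this sketch (command and field
names are taken from pcid.h; the SBDF value and the message framing are
assumptions):

```python
import json

# Hypothetical query: is device 0000:03:00.0 currently assigned to pciback?
request = {
    "cmd": "is_device_assigned",   # PCID_CMD_IS_ASSIGNED
    "sbdf": "0000:03:00.0",        # formatted per PCID_SBDF_FMT (assumed value)
}

# The daemon echoes the command in "resp" and reports a boolean "result"
# (PCID_MSG_FIELD_RESULT) indicating the assignment state.
response = {"resp": "is_device_assigned", "result": True}

print(json.dumps(request))
print(json.dumps(response))
```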

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
---
 tools/include/pcid.h          | 19 ++++++++++++++++
 tools/libs/light/libxl_pci.c  | 43 +++++++++++++++++++----------------
 tools/libs/light/libxl_pcid.c | 34 ++++++++++++++++++++++++++-
 3 files changed, 75 insertions(+), 21 deletions(-)

diff --git a/tools/include/pcid.h b/tools/include/pcid.h
index 118f8105cf..983e067dfc 100644
--- a/tools/include/pcid.h
+++ b/tools/include/pcid.h
@@ -135,6 +135,25 @@
  */
 #define PCID_CMD_REVERT_ASSIGNABLE      "revert_assignable"
 
+/*
+ *******************************************************************************
+ * Check whether a device is assigned
+ *
+ * This command checks whether the given device is assigned to pciback.
+ *
+ * Request (see other mandatory fields above):
+ *  - "cmd" field of the request must be set to "is_device_assigned".
+ *  - "sbdf" SBDF of the device, in the format defined by PCID_SBDF_FMT.
+ *
+ * Response (see other mandatory fields above):
+ *  - "resp" field of the response must be set to "is_device_assigned".
+ * Command specific response data:
+ * +-------------+--------------+----------------------------------------------+
+ * | result      | bool         | true if the device is assigned               |
+ * +-------------+--------------+----------------------------------------------+
+ */
+#define PCID_CMD_IS_ASSIGNED            "is_device_assigned"
+#define PCID_MSG_FIELD_RESULT           "result"
 
 int libxl_pcid_process(libxl_ctx *ctx);
 
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 0351a0d3df..d68cb1986f 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -86,7 +86,9 @@ static int pci_handle_response(libxl__gc *gc,
         *result = libxl__json_object_alloc(gc, JSON_NULL);
     else if (strcmp(command_name, PCID_CMD_REVERT_ASSIGNABLE) == 0)
         *result = libxl__json_object_alloc(gc, JSON_NULL);
-
+    else if (strcmp(command_name, PCID_CMD_IS_ASSIGNED) == 0)
+        *result = (libxl__json_object *)libxl__json_map_get(PCID_MSG_FIELD_RESULT,
+                                                          response, JSON_BOOL);
     return ret;
 }
 
@@ -753,30 +755,31 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
 
 static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
 {
-    char * spath;
+    struct vchan_info *vchan;
     int rc;
-    struct stat st;
+    libxl__json_object *args, *result;
 
-    if ( access(SYSFS_PCIBACK_DRIVER, F_OK) < 0 ) {
-        if ( errno == ENOENT ) {
-            LOG(ERROR, "Looks like pciback driver is not loaded");
-        } else {
-            LOGE(ERROR, "Can't access "SYSFS_PCIBACK_DRIVER);
-        }
-        return -1;
+    vchan = pci_vchan_get_client(gc);
+    if (!vchan) {
+        rc = ERROR_NOT_READY;
+        goto out;
     }
 
-    spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->domain, pci->bus,
-                      pci->dev, pci->func);
-    rc = lstat(spath, &st);
+    args = libxl__vchan_start_args(gc);
 
-    if( rc == 0 )
-        return 1;
-    if ( rc < 0 && errno == ENOENT )
-        return 0;
-    LOGE(ERROR, "Accessing %s", spath);
-    return -1;
+    libxl__vchan_arg_add_string(gc, args, PCID_MSG_FIELD_SBDF,
+                                GCSPRINTF(PCID_SBDF_FMT, pci->domain,
+                                          pci->bus, pci->dev, pci->func));
+
+    result = vchan_send_command(gc, vchan, PCID_CMD_IS_ASSIGNED, args);
+    if (result)
+        rc = result->u.b;
+    else
+        rc = ERROR_FAIL;
+    pci_vchan_free(gc, vchan);
+
+out:
+    return rc;
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
diff --git a/tools/libs/light/libxl_pcid.c b/tools/libs/light/libxl_pcid.c
index d8245195ee..d72beed405 100644
--- a/tools/libs/light/libxl_pcid.c
+++ b/tools/libs/light/libxl_pcid.c
@@ -147,7 +147,7 @@ static int pciback_dev_is_assigned(libxl__gc *gc, unsigned int domain,
     if (rc < 0 && errno == ENOENT)
         return 0;
     LOGE(ERROR, "Accessing %s", spath);
-    return -1;
+    return 0;
 }
 
 #define PCID_INFO_PATH		"pcid"
@@ -335,6 +335,35 @@ static int pciback_dev_assign(libxl__gc *gc, unsigned int domain,
     return 0;
 }
 
+static int process_pciback_dev_is_assigned(libxl__gc *gc, yajl_gen gen,
+                                   char *command_name,
+                                   const struct libxl__json_object *request,
+                                   struct libxl__json_object **response)
+{
+    const struct libxl__json_object *json_o;
+    unsigned int dom, bus, dev, func;
+    int rc;
+
+    json_o = libxl__json_map_get(PCID_MSG_FIELD_SBDF, request, JSON_STRING);
+    if (!json_o) {
+        make_error_reply(gc, gen, "No mandatory parameter 'sbdf'", command_name);
+        return ERROR_FAIL;
+    }
+
+    if (sscanf(libxl__json_object_get_string(json_o), PCID_SBDF_FMT,
+               &dom, &bus, &dev, &func) != 4) {
+        make_error_reply(gc, gen, "Can't parse SBDF", command_name);
+        return ERROR_FAIL;
+    }
+    rc = pciback_dev_is_assigned(gc, dom, bus, dev, func);
+    if (rc < 0)
+        return ERROR_FAIL;
+    libxl__yajl_gen_asciiz(gen, PCID_MSG_FIELD_RESULT);
+    *response = libxl__json_object_alloc(gc, JSON_BOOL);
+    (*response)->u.b = rc;
+    return 0;
+}
+
 static int process_make_assignable(libxl__gc *gc, yajl_gen gen,
                                    char *command_name,
                                    const struct libxl__json_object *request,
@@ -538,6 +567,9 @@ static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
     else if (strcmp(command_name, PCID_CMD_REVERT_ASSIGNABLE) == 0)
        ret = process_revert_assignable(gc, gen, command_name,
                                      request, &command_response);
+    else if (strcmp(command_name, PCID_CMD_IS_ASSIGNED) == 0)
+       ret = process_pciback_dev_is_assigned(gc, gen, command_name,
+                                     request, &command_response);
     else {
         /*
          * This is an unsupported command: make a reply and proceed over
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:31:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477864.740822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EV-0006TQ-1v; Sun, 15 Jan 2023 11:31:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477864.740822; Sun, 15 Jan 2023 11:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EU-0006SY-Ty; Sun, 15 Jan 2023 11:31:26 +0000
Received: by outflank-mailman (input) for mailman id 477864;
 Sun, 15 Jan 2023 11:31:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytYW=5M=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1pH1ET-0004co-DB
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 11:31:25 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 230e1483-94c8-11ed-b8d0-410ff93cb8f0;
 Sun, 15 Jan 2023 12:31:22 +0100 (CET)
Received: by mail-ej1-x62e.google.com with SMTP id mp20so15434679ejc.7
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 03:31:22 -0800 (PST)
Received: from dsemenets-HP-EliteBook-850-G8-Notebook-PC.. ([91.219.254.73])
 by smtp.gmail.com with ESMTPSA id
 uj42-20020a170907c9aa00b0084d4e612a22sm7459961ejc.67.2023.01.15.03.31.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 03:31:20 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 230e1483-94c8-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qQf6nMYZTsKyVSis1dtkaKRd/WNynJi6oS+H2pCRPK4=;
        b=b1fWQURdMrLlTHhc7988e863g0gmdekRAYutqRUOdN7cPy7Sfb4P1c2FmKM5UqsfbS
         g0E1qVTXCwtotjcXHLtDOwcl8leftSP7cd8N5AUjN+opJPHyytKGsgd93TGhQzRXEOdJ
         oHc8YhzncL8xzpPNL+C1+sMxrmNoynlScFryOp2odfrc4cJaqQmOORSFLj5eiDeC64Dw
         clRgm1ZHjjUrM1YIqC2pwY+lGuKoHbQrDLHt5seQ1hl8GLHn+W56auWL1Wv3emVDtSad
         O8sF80wekUj08Onzt3XfJ+4wFUuH+guOY+PqFW00AQNn1K3jtniK+DPekeuXt47DIUIq
         kr2g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=qQf6nMYZTsKyVSis1dtkaKRd/WNynJi6oS+H2pCRPK4=;
        b=6Gq6iIggLxRuuR3/fC5E7IVUQCmbYcvyjw4jaag0VaMFK0VjuneA26u8tuYFKHoAzQ
         Xq4LqerL33hcLiMfXo85hRbmJt9OW4HgfzGYmbthXQTYUlrDo+uhNRMMGAMl0DdWVEpc
         8Fnq+blmDLtQy/GVn4ChOK9Ovcoo4u6aa1fq/mTGfVS4+EyZTIEWOnyn/N6i4zc+v5nk
         Q76XYP4lOHWYspXorf+n3zPkrI7fDiDgo7iaRBaAxRbPsRMBFlmxS34ITwvVybIgRwZ4
         MNig4HOI3G10Bj9MUqza4rzQcFSsQd3ZmSZm1IRG1UTUJxR1Oalblo67ynSNBFkUSDI0
         58wA==
X-Gm-Message-State: AFqh2kqOP2DCZRrTtZAKx22TRmcwYQ/1FYMkA0woX0GeFBUO04HaXv9v
	bdnMfOffb77thEUepsLFxfho6TWNl57H5m0v
X-Google-Smtp-Source: AMrXdXvys0v7on+HORafcG2tSLqa7aLVQ7y7zFyuXcAIURFw/5cESvXLmPgsZxWNaTgSkvni09S3Og==
X-Received: by 2002:a17:906:4f02:b0:86e:4067:b699 with SMTP id t2-20020a1709064f0200b0086e4067b699mr3656816eju.4.1673782281236;
        Sun, 15 Jan 2023 03:31:21 -0800 (PST)
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH v3 06/10] tools/light: pci: move assign/revert logic to pcid
Date: Sun, 15 Jan 2023 13:31:07 +0200
Message-Id: <20230115113111.1207605-7-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

Implement the MAKE_ASSIGNABLE and REVERT_ASSIGNABLE commands in pcid
in the same way as they were implemented in libxl_pci.c.

Replace the original logic in libxl_pci.c with calls to the
appropriate functions in pcid.

This is quite a large patch, as a lot of code was moved from
libxl_pci.c to libxl_pcid.c.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 tools/libs/light/libxl_pci.c  | 292 +++++--------------------
 tools/libs/light/libxl_pcid.c | 396 ++++++++++++++++++++++++++++++++++
 2 files changed, 454 insertions(+), 234 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 321543f5bf..0351a0d3df 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -82,6 +82,10 @@ static int pci_handle_response(libxl__gc *gc,
 
     if (strcmp(command_name, PCID_CMD_LIST_ASSIGNABLE) == 0)
        ret = process_list_assignable(gc, response, result);
+    else if (strcmp(command_name, PCID_CMD_MAKE_ASSIGNABLE) == 0)
+        *result = libxl__json_object_alloc(gc, JSON_NULL);
+    else if (strcmp(command_name, PCID_CMD_REVERT_ASSIGNABLE) == 0)
+        *result = libxl__json_object_alloc(gc, JSON_NULL);
 
     return ret;
 }
@@ -636,44 +640,6 @@ void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
     free(list);
 }
 
-/* Unbind device from its current driver, if any.  If driver_path is non-NULL,
- * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
-                            char **driver_path)
-{
-    char * spath, *dp = NULL;
-    struct stat st;
-
-    spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func);
-    if ( !lstat(spath, &st) ) {
-        /* Find the canonical path to the driver. */
-        dp = libxl__zalloc(gc, PATH_MAX);
-        dp = realpath(spath, dp);
-        if ( !dp ) {
-            LOGE(ERROR, "realpath() failed");
-            return -1;
-        }
-
-        LOG(DEBUG, "Driver re-plug path: %s", dp);
-
-        /* Unbind from the old driver */
-        spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
-            LOGE(ERROR, "Couldn't unbind device");
-            return -1;
-        }
-    }
-
-    if ( driver_path )
-        *driver_path = dp;
-
-    return 0;
-}
-
 static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
@@ -785,49 +751,6 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
     return false;
 }
 
-/*
- * A brief comment about slots.  I don't know what slots are for; however,
- * I have by experimentation determined:
- * - Before a device can be bound to pciback, its BDF must first be listed
- *   in pciback/slots
- * - The way to get the BDF listed there is to write BDF to
- *   pciback/new_slot
- * - Writing the same BDF to pciback/new_slot is not idempotent; it results
- *   in two entries of the BDF in pciback/slots
- * It's not clear whether having two entries in pciback/slots is a problem
- * or not.  Just to be safe, this code does the conservative thing, and
- * first checks to see if there is a slot, adding one only if one does not
- * already exist.
- */
-
-/* Scan through /sys/.../pciback/slots looking for pci's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
-{
-    FILE *f;
-    int rc = 0;
-    unsigned dom, bus, dev, func;
-
-    f = fopen(SYSFS_PCIBACK_DRIVER"/slots", "r");
-
-    if (f == NULL) {
-        LOGE(ERROR, "Couldn't open %s", SYSFS_PCIBACK_DRIVER"/slots");
-        return ERROR_FAIL;
-    }
-
-    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
-        if (dom == pci->domain
-            && bus == pci->bus
-            && dev == pci->dev
-            && func == pci->func) {
-            rc = 1;
-            goto out;
-        }
-    }
-out:
-    fclose(f);
-    return rc;
-}
-
 static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
 {
     char * spath;
@@ -856,133 +779,34 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
-{
-    int rc;
-
-    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
-        LOGE(ERROR, "Error checking for pciback slot");
-        return ERROR_FAIL;
-    } else if (rc == 0) {
-        if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pci) < 0 ) {
-            LOGE(ERROR, "Couldn't bind device to pciback!");
-            return ERROR_FAIL;
-        }
-    }
-
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
-        LOGE(ERROR, "Couldn't bind device to pciback!");
-        return ERROR_FAIL;
-    }
-    return 0;
-}
-
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
-{
-    /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
-        LOG(ERROR, "Couldn't unbind device!");
-        return ERROR_FAIL;
-    }
-
-    /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
-        if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pci) < 0 ) {
-            LOGE(ERROR, "Couldn't remove pciback slot");
-            return ERROR_FAIL;
-        }
-    }
-    return 0;
-}
-
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_device_pci *pci,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    unsigned dom, bus, dev, func;
-    char *spath, *driver_path = NULL;
-    const char *name;
+    struct vchan_info *vchan;
     int rc;
-    struct stat st;
-
-    /* Local copy for convenience */
-    dom = pci->domain;
-    bus = pci->bus;
-    dev = pci->dev;
-    func = pci->func;
-    name = pci->name;
-
-    /* Sanitise any name that is set */
-    if (name) {
-        unsigned int i, n = strlen(name);
+    libxl__json_object *args, *result;
 
-        if (n > 64) { /* Reasonable upper bound on name length */
-            LOG(ERROR, "Name too long");
-            return ERROR_FAIL;
-        }
-
-        for (i = 0; i < n; i++) {
-            if (!isgraph(name[i])) {
-                LOG(ERROR, "Names may only include printable characters");
-                return ERROR_FAIL;
-            }
-        }
-    }
-
-    /* See if the device exists */
-    spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
-    if ( lstat(spath, &st) ) {
-        LOGE(ERROR, "Couldn't lstat %s", spath);
-        return ERROR_FAIL;
-    }
-
-    /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pci);
-    if ( rc < 0 ) {
-        return ERROR_FAIL;
-    }
-    if ( rc ) {
-        LOG(WARN, PCI_BDF" already assigned to pciback", dom, bus, dev, func);
-        goto name;
+    vchan = pci_vchan_get_client(gc);
+    if (!vchan) {
+        rc = ERROR_NOT_READY;
+        goto out;
     }
 
-    /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
-        LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
-            dom, bus, dev, func);
-        return ERROR_FAIL;
-    }
+    args = libxl__vchan_start_args(gc);
 
-    /* Store driver_path for rebinding to dom0 */
-    if ( rebind ) {
-        if ( driver_path ) {
-            pci_info_xs_write(gc, pci, "driver_path", driver_path);
-        } else if ( (driver_path =
-                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
-            LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
-                dom, bus, dev, func, driver_path);
-        } else {
-            LOG(WARN, PCI_BDF" not bound to a driver, will not be rebound.",
-                dom, bus, dev, func);
-        }
-    } else {
-        pci_info_xs_remove(gc, pci, "driver_path");
-    }
+    libxl__vchan_arg_add_string(gc, args, PCID_MSG_FIELD_SBDF,
+                                GCSPRINTF(PCID_SBDF_FMT, pci->domain,
+                                          pci->bus, pci->dev, pci->func));
+    libxl__vchan_arg_add_bool(gc, args, PCID_MSG_FIELD_REBIND, rebind);
 
-    if ( pciback_dev_assign(gc, pci) ) {
-        LOG(ERROR, "Couldn't bind device to pciback!");
-        return ERROR_FAIL;
+    result = vchan_send_command(gc, vchan, PCID_CMD_MAKE_ASSIGNABLE, args);
+    if (!result) {
+        rc = ERROR_FAIL;
+        goto vchan_free;
     }
 
-name:
-    if (name)
-        pci_info_xs_write(gc, pci, "name", name);
-    else
-        pci_info_xs_remove(gc, pci, "name");
-
     /*
      * DOMID_IO is just a sentinel domain, without any actual mappings,
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
@@ -990,12 +814,15 @@ name:
      */
     rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
-    if ( rc < 0 ) {
-        LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
-        return ERROR_FAIL;
-    }
+    if ( rc < 0 )
+        LOG(ERROR, "failed to quarantine "PCI_BDF, pci->domain, pci->bus,
+            pci->dev, pci->func);
 
-    return 0;
+vchan_free:
+    pci_vchan_free(gc, vchan);
+
+out:
+    return rc;
 }
 
 static int name2bdf(libxl__gc *gc, libxl_device_pci *pci)
@@ -1038,13 +865,8 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc;
-    char *driver_path;
-
-    /* If the device is named then we need to look up the BDF */
-    if (pci->name) {
-        rc = name2bdf(gc, pci);
-        if (rc) return rc;
-    }
+    struct vchan_info *vchan;
+    libxl__json_object *args, *temp_obj, *result;
 
     /* De-quarantine */
     rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
@@ -1054,41 +876,43 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
-    /* Unbind from pciback */
-    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
-        return ERROR_FAIL;
-    } else if ( rc ) {
-        pciback_dev_unassign(gc, pci);
-    } else {
-        LOG(WARN, "Not bound to pciback");
+    vchan = pci_vchan_get_client(gc);
+    if (!vchan) {
+        rc = ERROR_NOT_READY;
+        goto out;
     }
 
-    /* Rebind if necessary */
-    driver_path = pci_info_xs_read(gc, pci, "driver_path");
+    args = libxl__json_object_alloc(gc, JSON_MAP);
+    temp_obj = libxl__json_object_alloc(gc, JSON_STRING);
+    if (!temp_obj) {
+        rc = ERROR_NOMEM;
+        goto vchan_free;
+    }
+    temp_obj->u.string = GCSPRINTF(PCID_SBDF_FMT, pci->domain, pci->bus,
+                                   pci->dev, pci->func);
+    flexarray_append_pair(args->u.map, PCID_MSG_FIELD_SBDF, temp_obj);
 
-    if ( driver_path ) {
-        if ( rebind ) {
-            LOG(INFO, "Rebinding to driver at %s", driver_path);
+    /* Append the "rebind" flag to the same argument map. */
+    temp_obj = libxl__json_object_alloc(gc, JSON_BOOL);
+    if (!temp_obj) {
+        rc = ERROR_NOMEM;
+        goto vchan_free;
+    }
 
-            if ( sysfs_write_bdf(gc,
-                                 GCSPRINTF("%s/bind", driver_path),
-                                 pci) < 0 ) {
-                LOGE(ERROR, "Couldn't bind device to %s", driver_path);
-                return -1;
-            }
+    temp_obj->u.b = rebind;
+    flexarray_append_pair(args->u.map, PCID_MSG_FIELD_REBIND, temp_obj);
 
-            pci_info_xs_remove(gc, pci, "driver_path");
-        }
-    } else {
-        if ( rebind ) {
-            LOG(WARN,
-                "Couldn't find path for original driver; not rebinding");
-        }
+    result = vchan_send_command(gc, vchan, PCID_CMD_REVERT_ASSIGNABLE, args);
+    if (!result) {
+        rc = ERROR_FAIL;
+        goto vchan_free;
     }
 
-    pci_info_xs_remove(gc, pci, "name");
+vchan_free:
+    pci_vchan_free(gc, vchan);
 
-    return 0;
+out:
+    return rc;
 }
 
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
diff --git a/tools/libs/light/libxl_pcid.c b/tools/libs/light/libxl_pcid.c
index bab08b72cf..d8245195ee 100644
--- a/tools/libs/light/libxl_pcid.c
+++ b/tools/libs/light/libxl_pcid.c
@@ -38,6 +38,8 @@
 
 #define DOM0_ID 0
 
+#define PCI_BDF                "%04x:%02x:%02x.%01x"
+
 struct vchan_client {
     XEN_LIST_ENTRY(struct vchan_client) list;
 
@@ -119,6 +121,394 @@ static int process_list_assignable(libxl__gc *gc, yajl_gen gen,
     return 0;
 }
 
+static int pciback_dev_is_assigned(libxl__gc *gc, unsigned int domain,
+				   unsigned int bus, unsigned int dev,
+				   unsigned int func)
+{
+    char * spath;
+    int rc;
+    struct stat st;
+
+    if (access(SYSFS_PCIBACK_DRIVER, F_OK) < 0) {
+        if (errno == ENOENT) {
+            LOG(ERROR, "Looks like pciback driver is not loaded");
+        } else {
+            LOGE(ERROR, "Can't access "SYSFS_PCIBACK_DRIVER);
+        }
+        return -1;
+    }
+
+    spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
+		      domain, bus, dev, func);
+    rc = lstat(spath, &st);
+
+    if (rc == 0)
+        return 1;
+    if (rc < 0 && errno == ENOENT)
+        return 0;
+    LOGE(ERROR, "Accessing %s", spath);
+    return -1;
+}
+
+#define PCID_INFO_PATH		"pcid"
+#define PCID_BDF_XSPATH         "%04x-%02x-%02x-%01x"
+
+static char *pcid_info_xs_path(libxl__gc *gc, unsigned int domain,
+			       unsigned int bus, unsigned int dev,
+			       unsigned int func, const char *node)
+{
+    return node ?
+        GCSPRINTF(PCID_INFO_PATH"/"PCID_BDF_XSPATH"/%s",
+                  domain, bus, dev, func, node) :
+        GCSPRINTF(PCID_INFO_PATH"/"PCID_BDF_XSPATH,
+                  domain, bus, dev, func);
+}
+
+
+static int pcid_info_xs_write(libxl__gc *gc, unsigned int domain,
+			       unsigned int bus, unsigned int dev,
+			       unsigned int func, const char *node,
+			      const char *val)
+{
+    char *path = pcid_info_xs_path(gc, domain, bus, dev, func, node);
+    int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
+
+    if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
+
+    return rc;
+}
+
+static char *pcid_info_xs_read(libxl__gc *gc, unsigned int domain,
+			       unsigned int bus, unsigned int dev,
+			       unsigned int func, const char *node)
+{
+    char *path = pcid_info_xs_path(gc, domain, bus, dev, func, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
+}
+
+static void pcid_info_xs_remove(libxl__gc *gc, unsigned int domain,
+			       unsigned int bus, unsigned int dev,
+			       unsigned int func, const char *node)
+{
+    char *path = pcid_info_xs_path(gc, domain, bus, dev, func, node);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    /* Remove the xenstore entry */
+    xs_rm(ctx->xsh, XBT_NULL, path);
+}
+
+
+/* Write the standard BDF into the sysfs path given by sysfs_path. */
+static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
+			   unsigned int domain, unsigned int bus,
+			   unsigned int dev, unsigned int func)
+{
+    int rc, fd;
+    char *buf;
+
+    fd = open(sysfs_path, O_WRONLY);
+    if (fd < 0) {
+        LOGE(ERROR, "Couldn't open %s", sysfs_path);
+        return ERROR_FAIL;
+    }
+
+    buf = GCSPRINTF(PCI_BDF, domain, bus, dev, func);
+    rc = write(fd, buf, strlen(buf));
+    /* Annoying to have two if's, but we need the errno */
+    if (rc < 0)
+        LOGE(ERROR, "write to %s returned %d", sysfs_path, rc);
+    close(fd);
+
+    if (rc < 0)
+        return ERROR_FAIL;
+
+    return 0;
+}
+
+
+/* Unbind device from its current driver, if any.  If driver_path is non-NULL,
+ * store the path to the original driver in it. */
+static int sysfs_dev_unbind(libxl__gc *gc, unsigned int domain,
+			    unsigned int bus, unsigned int dev,
+			    unsigned int func,
+                            char **driver_path)
+{
+    char * spath, *dp = NULL;
+    struct stat st;
+
+    spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
+                           domain, bus, dev, func);
+    if (!lstat(spath, &st)) {
+        /* Find the canonical path to the driver. */
+        dp = libxl__zalloc(gc, PATH_MAX);
+        dp = realpath(spath, dp);
+        if ( !dp ) {
+            LOGE(ERROR, "realpath() failed");
+            return -1;
+        }
+
+        LOG(DEBUG, "Driver re-plug path: %s", dp);
+
+        /* Unbind from the old driver */
+        spath = GCSPRINTF("%s/unbind", dp);
+        if (sysfs_write_bdf(gc, spath, domain, bus, dev, func) < 0) {
+            LOGE(ERROR, "Couldn't unbind device");
+            return -1;
+        }
+    }
+
+    if (driver_path)
+        *driver_path = dp;
+
+    return 0;
+}
+
+/*
+ * A brief comment about slots.  I don't know what slots are for; however,
+ * I have by experimentation determined:
+ * - Before a device can be bound to pciback, its BDF must first be listed
+ *   in pciback/slots
+ * - The way to get the BDF listed there is to write BDF to
+ *   pciback/new_slot
+ * - Writing the same BDF to pciback/new_slot is not idempotent; it results
+ *   in two entries of the BDF in pciback/slots
+ * It's not clear whether having two entries in pciback/slots is a problem
+ * or not.  Just to be safe, this code does the conservative thing, and
+ * first checks to see if there is a slot, adding one only if one does not
+ * already exist.
+ */
+
+/* Scan through /sys/.../pciback/slots looking for the device's BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, unsigned int domain,
+			      unsigned int bus, unsigned int dev,
+			      unsigned int func)
+{
+    FILE *f;
+    int rc = 0;
+    unsigned s_domain, s_bus, s_dev, s_func;
+
+    f = fopen(SYSFS_PCIBACK_DRIVER"/slots", "r");
+
+    if (f == NULL) {
+        LOGE(ERROR, "Couldn't open %s", SYSFS_PCIBACK_DRIVER"/slots");
+        return ERROR_FAIL;
+    }
+
+    while (fscanf(f, "%x:%x:%x.%d\n",
+		  &s_domain, &s_bus, &s_dev, &s_func) == 4) {
+        if (s_domain == domain &&
+            s_bus == bus &&
+            s_dev == dev &&
+            s_func == func) {
+            rc = 1;
+            goto out;
+        }
+    }
+out:
+    fclose(f);
+    return rc;
+}
+
+static int pciback_dev_assign(libxl__gc *gc, unsigned int domain,
+			      unsigned int bus, unsigned int dev,
+			      unsigned int func)
+{
+    int rc;
+
+    if ( (rc = pciback_dev_has_slot(gc, domain, bus, dev, func)) < 0 ) {
+        LOGE(ERROR, "Error checking for pciback slot");
+        return ERROR_FAIL;
+    } else if (rc == 0) {
+        if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
+                             domain, bus, dev, func) < 0 ) {
+            LOGE(ERROR, "Couldn't add pciback slot!");
+            return ERROR_FAIL;
+        }
+    }
+
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind",
+			 domain, bus, dev, func) < 0 ) {
+        LOGE(ERROR, "Couldn't bind device to pciback!");
+        return ERROR_FAIL;
+    }
+    return 0;
+}
+
+static int process_make_assignable(libxl__gc *gc, yajl_gen gen,
+                                   char *command_name,
+                                   const struct libxl__json_object *request,
+                                   struct libxl__json_object **response)
+{
+    struct stat st;
+    const struct libxl__json_object *json_o;
+    unsigned int dom, bus, dev, func;
+    int rc;
+    bool rebind;
+    char *spath, *driver_path = NULL;
+
+    json_o = libxl__json_map_get(PCID_MSG_FIELD_SBDF, request, JSON_STRING);
+    if (!json_o) {
+        make_error_reply(gc, gen, "No mandatory parameter 'sbdf'", command_name);
+        return ERROR_FAIL;
+    }
+
+    if (sscanf(libxl__json_object_get_string(json_o), PCID_SBDF_FMT,
+	       &dom, &bus, &dev, &func) != 4) {
+        make_error_reply(gc, gen, "Can't parse SBDF", command_name);
+        return ERROR_FAIL;
+    }
+
+    json_o = libxl__json_map_get(PCID_MSG_FIELD_REBIND, request, JSON_BOOL);
+    if (!json_o) {
+        make_error_reply(gc, gen, "No mandatory parameter 'rebind'", command_name);
+        return ERROR_FAIL;
+    }
+
+    rebind = libxl__json_object_get_bool(json_o);
+
+    /* See if the device exists */
+    spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
+    if ( lstat(spath, &st) ) {
+        make_error_reply(gc, gen, strerror(errno), command_name);
+        LOGE(ERROR, "Couldn't lstat %s", spath);
+        return ERROR_FAIL;
+    }
+
+    /* Check to see if it's already assigned to pciback */
+    rc = pciback_dev_is_assigned(gc, dom, bus, dev, func);
+    if (rc < 0) {
+        make_error_reply(gc, gen, "Can't check if device is assigned",
+			 command_name);
+        return ERROR_FAIL;
+    }
+    if (rc) {
+        LOG(WARN, PCI_BDF" already assigned to pciback", dom, bus, dev, func);
+        goto done;
+    }
+
+    /* Check to see if there's already a driver that we need to unbind from */
+    if (sysfs_dev_unbind(gc, dom, bus, dev, func, &driver_path)) {
+        LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
+            dom, bus, dev, func);
+        return ERROR_FAIL;
+    }
+
+    /* Store driver_path for rebinding back */
+    if (rebind) {
+        if (driver_path) {
+            pcid_info_xs_write(gc, dom, bus, dev, func, "driver_path",
+			       driver_path);
+        } else if ( (driver_path =
+                     pcid_info_xs_read(gc, dom, bus, dev, func,
+				       "driver_path")) != NULL ) {
+            LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
+                dom, bus, dev, func, driver_path);
+        } else {
+            LOG(WARN, PCI_BDF" not bound to a driver, will not be rebound.",
+                dom, bus, dev, func);
+        }
+    } else {
+        pcid_info_xs_remove(gc, dom, bus, dev, func, "driver_path");
+    }
+
+    if (pciback_dev_assign(gc, dom, bus, dev, func)) {
+        LOG(ERROR, "Couldn't bind device to pciback!");
+        return ERROR_FAIL;
+    }
+
+done:
+    return 0;
+}
+
+static int pciback_dev_unassign(libxl__gc *gc, unsigned int domain,
+			      unsigned int bus, unsigned int dev,
+			      unsigned int func)
+{
+    /* Remove from pciback */
+    if ( sysfs_dev_unbind(gc, domain, bus, dev, func, NULL) < 0 ) {
+        LOG(ERROR, "Couldn't unbind device!");
+        return ERROR_FAIL;
+    }
+
+    /* Remove slot if necessary */
+    if ( pciback_dev_has_slot(gc, domain, bus, dev, func) > 0 ) {
+        if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
+                             domain, bus, dev, func) < 0 ) {
+            LOGE(ERROR, "Couldn't remove pciback slot");
+            return ERROR_FAIL;
+        }
+    }
+    return 0;
+}
+
+static int process_revert_assignable(libxl__gc *gc, yajl_gen gen,
+                                   char *command_name,
+                                   const struct libxl__json_object *request,
+                                   struct libxl__json_object **response)
+{
+    const struct libxl__json_object *json_o;
+    unsigned int dom, bus, dev, func;
+    int rc;
+    bool rebind;
+    char *driver_path = NULL;
+
+    json_o = libxl__json_map_get(PCID_MSG_FIELD_SBDF, request, JSON_STRING);
+    if (!json_o) {
+        make_error_reply(gc, gen, "No mandatory parameter 'sbdf'", command_name);
+        return ERROR_FAIL;
+    }
+
+    if (sscanf(libxl__json_object_get_string(json_o), PCID_SBDF_FMT,
+	       &dom, &bus, &dev, &func) != 4) {
+        make_error_reply(gc, gen, "Can't parse SBDF", command_name);
+        return ERROR_FAIL;
+    }
+
+    json_o = libxl__json_map_get(PCID_MSG_FIELD_REBIND, request, JSON_BOOL);
+    if (!json_o) {
+        make_error_reply(gc, gen, "No mandatory parameter 'rebind'", command_name);
+        return ERROR_FAIL;
+    }
+
+    rebind = libxl__json_object_get_bool(json_o);
+
+    /* Unbind from pciback */
+    if ( (rc = pciback_dev_is_assigned(gc, dom, bus, dev, func)) < 0 ) {
+        make_error_reply(gc, gen, "Can't unbind from pciback", command_name);
+        return ERROR_FAIL;
+    } else if ( rc ) {
+        pciback_dev_unassign(gc, dom, bus, dev, func);
+    } else {
+        LOG(WARN, "Not bound to pciback");
+    }
+
+    /* Rebind if necessary */
+    driver_path = pcid_info_xs_read(gc, dom, bus, dev, func, "driver_path");
+
+    if ( driver_path ) {
+        if ( rebind ) {
+            LOG(INFO, "Rebinding to driver at %s", driver_path);
+
+            if ( sysfs_write_bdf(gc,
+                                 GCSPRINTF("%s/bind", driver_path),
+                                 dom, bus, dev, func) < 0 ) {
+                LOGE(ERROR, "Couldn't bind device to %s", driver_path);
+                return -1;
+            }
+
+            pcid_info_xs_remove(gc, dom, bus, dev, func, "driver_path");
+        }
+    } else {
+        if ( rebind ) {
+            LOG(WARN,
+                "Couldn't find path for original driver; not rebinding");
+        }
+    }
+
+    return 0;
+}
+
 static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
                                const libxl__json_object *request)
 {
@@ -142,6 +532,12 @@ static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
     if (strcmp(command_name, PCID_CMD_LIST_ASSIGNABLE) == 0)
        ret = process_list_assignable(gc, gen, command_name,
                                      request, &command_response);
+    else if (strcmp(command_name, PCID_CMD_MAKE_ASSIGNABLE) == 0)
+       ret = process_make_assignable(gc, gen, command_name,
+                                     request, &command_response);
+    else if (strcmp(command_name, PCID_CMD_REVERT_ASSIGNABLE) == 0)
+       ret = process_revert_assignable(gc, gen, command_name,
+                                     request, &command_response);
     else {
         /*
          * This is an unsupported command: make a reply and proceed over
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:27 2023
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH v3 08/10] tools/libs/light: pcid: implement reset_device command
Date: Sun, 15 Jan 2023 13:31:09 +0200
Message-Id: <20230115113111.1207605-9-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
---
 tools/include/pcid.h          | 15 ++++++++
 tools/libs/light/libxl_pci.c  | 52 +++++++++++----------------
 tools/libs/light/libxl_pcid.c | 66 +++++++++++++++++++++++++++++++++++
 3 files changed, 102 insertions(+), 31 deletions(-)

diff --git a/tools/include/pcid.h b/tools/include/pcid.h
index 983e067dfc..63ac0bcac9 100644
--- a/tools/include/pcid.h
+++ b/tools/include/pcid.h
@@ -155,6 +155,21 @@
 #define PCID_CMD_IS_ASSIGNED            "is_device_assigned"
 #define PCID_MSG_FIELD_RESULT           "result"
 
+/*
+ *******************************************************************************
+ * Reset PCI device
+ *
+ * This command resets the given PCI device.
+ *
+ * Request (see other mandatory fields above):
+ *  - "cmd" field of the request must be set to "reset_device".
+ *  - "sbdf" SBDF of the device in the format defined by PCID_SBDF_FMT.
+ *
+ * Response (see other mandatory fields above):
+ *  - "resp" field of the response must be set to "reset_device".
+ */
+#define PCID_CMD_RESET_DEVICE            "reset_device"
+
 int libxl_pcid_process(libxl_ctx *ctx);
 
 #endif /* PCID_H */
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index d68cb1986f..e9f0bad442 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -89,6 +89,8 @@ static int pci_handle_response(libxl__gc *gc,
     else if (strcmp(command_name, PCID_CMD_IS_ASSIGNED) == 0)
         *result = (libxl__json_object *)libxl__json_map_get(PCID_MSG_FIELD_RESULT,
                                                           response, JSON_BOOL);
+    else if (strcmp(command_name, PCID_CMD_RESET_DEVICE) == 0)
+        *result = libxl__json_object_alloc(gc, JSON_NULL);
     return ret;
 }
 
@@ -1518,38 +1520,26 @@ out:
 static int libxl__device_pci_reset(libxl__gc *gc, unsigned int domain, unsigned int bus,
                                    unsigned int dev, unsigned int func)
 {
-    char *reset;
-    int fd, rc;
-
-    reset = GCSPRINTF("%s/do_flr", SYSFS_PCIBACK_DRIVER);
-    fd = open(reset, O_WRONLY);
-    if (fd >= 0) {
-        char *buf = GCSPRINTF(PCI_BDF, domain, bus, dev, func);
-        rc = write(fd, buf, strlen(buf));
-        if (rc < 0)
-            LOGD(ERROR, domain, "write to %s returned %d", reset, rc);
-        close(fd);
-        return rc < 0 ? rc : 0;
-    }
-    if (errno != ENOENT)
-        LOGED(ERROR, domain, "Failed to access pciback path %s", reset);
-    reset = GCSPRINTF("%s/"PCI_BDF"/reset", SYSFS_PCI_DEV, domain, bus, dev, func);
-    fd = open(reset, O_WRONLY);
-    if (fd >= 0) {
-        rc = write(fd, "1", 1);
-        if (rc < 0)
-            LOGED(ERROR, domain, "write to %s returned %d", reset, rc);
-        close(fd);
-        return rc < 0 ? rc : 0;
-    }
-    if (errno == ENOENT) {
-        LOGD(ERROR, domain,
-             "The kernel doesn't support reset from sysfs for PCI device "PCI_BDF,
-             domain, bus, dev, func);
-    } else {
-        LOGED(ERROR, domain, "Failed to access reset path %s", reset);
+    struct vchan_info *vchan;
+    int rc = 0;
+    libxl__json_object *args, *result;
+
+    vchan = pci_vchan_get_client(gc);
+    if (!vchan) {
+        rc = ERROR_NOT_READY;
+        goto out;
     }
-    return -1;
+    args = libxl__vchan_start_args(gc);
+
+    libxl__vchan_arg_add_string(gc, args, PCID_MSG_FIELD_SBDF,
+            GCSPRINTF(PCID_SBDF_FMT, domain, bus, dev, func));
+    result = vchan_send_command(gc, vchan, PCID_CMD_RESET_DEVICE, args);
+    if (!result)
+        rc = ERROR_FAIL;
+    pci_vchan_free(gc, vchan);
+
+ out:
+    return rc;
 }
 
 int libxl__device_pci_setdefault(libxl__gc *gc, uint32_t domid,
diff --git a/tools/libs/light/libxl_pcid.c b/tools/libs/light/libxl_pcid.c
index d72beed405..80bcd3c63e 100644
--- a/tools/libs/light/libxl_pcid.c
+++ b/tools/libs/light/libxl_pcid.c
@@ -364,6 +364,69 @@ static int process_pciback_dev_is_assigned(libxl__gc *gc, yajl_gen gen,
     return 0;
 }
 
+static int device_pci_reset(libxl__gc *gc, unsigned int domain, unsigned int bus,
+                                   unsigned int dev, unsigned int func)
+{
+    char *reset;
+    int fd, rc;
+
+    reset = GCSPRINTF("%s/do_flr", SYSFS_PCIBACK_DRIVER);
+    fd = open(reset, O_WRONLY);
+    if (fd >= 0) {
+        char *buf = GCSPRINTF(PCI_BDF, domain, bus, dev, func);
+        rc = write(fd, buf, strlen(buf));
+        if (rc < 0)
+            LOGD(ERROR, domain, "write to %s returned %d", reset, rc);
+        close(fd);
+        return rc < 0 ? rc : 0;
+    }
+    if (errno != ENOENT)
+        LOGED(ERROR, domain, "Failed to access pciback path %s", reset);
+    reset = GCSPRINTF("%s/"PCI_BDF"/reset", SYSFS_PCI_DEV, domain, bus, dev, func);
+    fd = open(reset, O_WRONLY);
+    if (fd >= 0) {
+        rc = write(fd, "1", 1);
+        if (rc < 0)
+            LOGED(ERROR, domain, "write to %s returned %d", reset, rc);
+        close(fd);
+        return rc < 0 ? rc : 0;
+    }
+    if (errno == ENOENT) {
+        LOGD(ERROR, domain,
+             "The kernel doesn't support reset from sysfs for PCI device "PCI_BDF,
+             domain, bus, dev, func);
+    } else {
+        LOGED(ERROR, domain, "Failed to access reset path %s", reset);
+    }
+    return -1;
+}
+
+static int process_device_pci_reset(libxl__gc *gc, yajl_gen gen,
+                                   char *command_name,
+                                   const struct libxl__json_object *request,
+                                   struct libxl__json_object **response)
+{
+    const struct libxl__json_object *json_o;
+    unsigned int dom, bus, dev, func;
+    int rc;
+
+    json_o = libxl__json_map_get(PCID_MSG_FIELD_SBDF, request, JSON_STRING);
+    if (!json_o) {
+        make_error_reply(gc, gen, "No mandatory parameter 'sbdf'", command_name);
+        return ERROR_FAIL;
+    }
+
+    if (sscanf(libxl__json_object_get_string(json_o), PCID_SBDF_FMT,
+               &dom, &bus, &dev, &func) != 4) {
+        make_error_reply(gc, gen, "Can't parse SBDF", command_name);
+        return ERROR_FAIL;
+    }
+    rc = device_pci_reset(gc, dom, bus, dev, func);
+    if (rc < 0)
+        return ERROR_FAIL;
+    return rc;
+}
+
 static int process_make_assignable(libxl__gc *gc, yajl_gen gen,
                                    char *command_name,
                                    const struct libxl__json_object *request,
@@ -570,6 +633,9 @@ static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
     else if (strcmp(command_name, PCID_CMD_IS_ASSIGNED) == 0)
        ret = process_pciback_dev_is_assigned(gc, gen, command_name,
                                      request, &command_response);
+    else if (strcmp(command_name, PCID_CMD_RESET_DEVICE) == 0)
+       ret = process_device_pci_reset(gc, gen, command_name,
+                                     request, &command_response);
     else {
         /*
          * This is an unsupported command: make a reply and proceed over
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:30 2023
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH v3 10/10] tools/libs/light: pcid: implement write_bdf command
Date: Sun, 15 Jan 2023 13:31:11 +0200
Message-Id: <20230115113111.1207605-11-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
---
 tools/include/pcid.h          | 17 +++++++++
 tools/libs/light/libxl_pci.c  | 67 ++++++++++++++++++-----------------
 tools/libs/light/libxl_pcid.c | 38 ++++++++++++++++++++
 3 files changed, 90 insertions(+), 32 deletions(-)

diff --git a/tools/include/pcid.h b/tools/include/pcid.h
index 833b6c7f3e..2c1bd0727e 100644
--- a/tools/include/pcid.h
+++ b/tools/include/pcid.h
@@ -181,6 +181,23 @@
 #define PCID_RESULT_KEY_IOMEM           "iomem"
 #define PCID_RESULT_KEY_IRQS            "irqs"
 
+/*
+ *******************************************************************************
+ * Write BDF values to the pciback sysfs path
+ *
+ * This command writes the device's BDF to the named pciback sysfs attribute.
+ *
+ * Request (see other mandatory fields above):
+ *  - "cmd" field of the request must be set to "write_bdf".
+ *  - "sbdf" SBDF of the device in the format defined by PCID_SBDF_FMT.
+ *  - "name" name of the pciback driver sysfs attribute to write to.
+ *
+ * Response (see other mandatory fields above):
+ *  - "resp" field of the response must be set to "write_bdf".
+ */
+#define PCID_CMD_WRITE_BDF               "write_bdf"
+#define PCID_MSG_FIELD_NAME              "name"
+
 /*
  *******************************************************************************
  * Reset PCI device
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 2e7bd2eae5..a9a641829a 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -94,6 +94,8 @@ static int pci_handle_response(libxl__gc *gc,
     else if (strcmp(command_name, PCID_CMD_RESOURCE_LIST) == 0)
         *result = (libxl__json_object *)libxl__json_map_get(PCID_MSG_FIELD_RESOURCES,
                 response, JSON_MAP);
+    else if (strcmp(command_name, PCID_CMD_WRITE_BDF) == 0)
+        *result = libxl__json_object_alloc(gc, JSON_NULL);
     return ret;
 }
 
@@ -511,33 +513,6 @@ static bool is_pci_in_array(libxl_device_pci *pcis, int num,
     return i < num;
 }
 
-/* Write the standard BDF into the sysfs path given by sysfs_path. */
-static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pci)
-{
-    int rc, fd;
-    char *buf;
-
-    fd = open(sysfs_path, O_WRONLY);
-    if (fd < 0) {
-        LOGE(ERROR, "Couldn't open %s", sysfs_path);
-        return ERROR_FAIL;
-    }
-
-    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
-                    pci->dev, pci->func);
-    rc = write(fd, buf, strlen(buf));
-    /* Annoying to have two if's, but we need the errno */
-    if (rc < 0)
-        LOGE(ERROR, "write to %s returned %d", sysfs_path, rc);
-    close(fd);
-
-    if (rc < 0)
-        return ERROR_FAIL;
-
-    return 0;
-}
-
 #define PCI_INFO_PATH "/libxl/pci"
 
 static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
@@ -1384,6 +1359,36 @@ static bool pci_supp_legacy_irq(void)
 #endif
 }
 
+static int pciback_write_bdf(libxl__gc *gc, char *name, libxl_device_pci *pci)
+{
+    struct vchan_info *vchan;
+    int rc;
+    libxl__json_object *args, *result;
+
+    vchan = pci_vchan_get_client(gc);
+    if (!vchan) {
+        rc = ERROR_NOT_READY;
+        goto out;
+    }
+
+    args = libxl__vchan_start_args(gc);
+
+    libxl__vchan_arg_add_string(gc, args, PCID_MSG_FIELD_SBDF,
+            GCSPRINTF(PCID_SBDF_FMT, pci->domain,
+                pci->bus, pci->dev, pci->func));
+    libxl__vchan_arg_add_string(gc, args, PCID_MSG_FIELD_NAME, name);
+
+    result = vchan_send_command(gc, vchan, PCID_CMD_WRITE_BDF, args);
+    if (!result) {
+        rc = ERROR_FAIL;
+        goto vchan_free;
+    }
+    rc = 0;
+vchan_free:
+    pci_vchan_free(gc, vchan);
+out:
+    return rc;
+}
+
 static void pci_add_dm_done(libxl__egc *egc,
                             pci_add_state *pas,
                             int rc)
@@ -1421,8 +1426,9 @@ static void pci_add_dm_done(libxl__egc *egc,
     libxl__vchan_arg_add_integer(gc, args, PCID_MSG_FIELD_DOMID, domid);
 
     result = vchan_send_command(gc, vchan, PCID_CMD_RESOURCE_LIST, args);
+    pci_vchan_free(gc, vchan);
-    if (!result)
-        goto vchan_free;
+    if (!result) {
+        rc = ERROR_FAIL;
+        goto out;
+    }
     value = libxl__json_map_get(PCID_RESULT_KEY_IOMEM, result, JSON_ARRAY);
 
     /* stubdomain is always running by now, even at create time */
@@ -1483,8 +1489,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
-        if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pci) < 0 ) {
+        if (pciback_write_bdf(gc, "permissive", pci)) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1512,8 +1517,6 @@ out_no_irq:
         rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
-vchan_free:
-    pci_vchan_free(gc, vchan);
 out:
     libxl__ev_time_deregister(gc, &pas->timeout);
     libxl__ev_time_deregister(gc, &pas->timeout_retries);
diff --git a/tools/libs/light/libxl_pcid.c b/tools/libs/light/libxl_pcid.c
index d968071224..66b433d2bf 100644
--- a/tools/libs/light/libxl_pcid.c
+++ b/tools/libs/light/libxl_pcid.c
@@ -257,6 +257,41 @@ static int pciback_dev_is_assigned(libxl__gc *gc, unsigned int domain,
     return 0;
 }
 
+static int process_pciback_write_bdf(libxl__gc *gc, yajl_gen gen,
+                                   char *command_name,
+                                   const struct libxl__json_object *request,
+                                   struct libxl__json_object **response)
+{
+    const struct libxl__json_object *json_o;
+    unsigned int dom, bus, dev, func;
+    int rc = 0;
+    const char *name;
+    char *spath;
+
+    json_o = libxl__json_map_get(PCID_MSG_FIELD_SBDF, request, JSON_STRING);
+    if (!json_o) {
+        make_error_reply(gc, gen, "No mandatory parameter 'sbdf'", command_name);
+        return ERROR_FAIL;
+    }
+
+    if (sscanf(libxl__json_object_get_string(json_o), PCID_SBDF_FMT,
+           &dom, &bus, &dev, &func) != 4) {
+        make_error_reply(gc, gen, "Can't parse SBDF", command_name);
+        return ERROR_FAIL;
+    }
+
+    json_o = libxl__json_map_get(PCID_MSG_FIELD_NAME, request, JSON_STRING);
+    if (!json_o) {
+        make_error_reply(gc, gen, "No mandatory parameter 'name'", command_name);
+        return ERROR_FAIL;
+    }
+
+    name = libxl__json_object_get_string(json_o);
+    spath = GCSPRINTF("%s/%s", SYSFS_PCIBACK_DRIVER, name);
+    rc = sysfs_write_bdf(gc, spath, dom, bus, dev, func);
+    if (rc)
+        make_error_reply(gc, gen, "Write to pciback sysfs file failed",
+                         command_name);
+    return rc;
+}
+
 #define PCID_INFO_PATH		"pcid"
 #define PCID_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 
@@ -746,6 +781,9 @@ static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
     else if (strcmp(command_name, PCID_CMD_RESOURCE_LIST) == 0)
        ret = process_list_resources(gc, gen, command_name,
                                      request, &command_response);
+    else if (strcmp(command_name, PCID_CMD_WRITE_BDF) == 0)
+       ret = process_pciback_write_bdf(gc, gen, command_name,
+                                     request, &command_response);
     else {
         /*
          * This is an unsupported command: make a reply and proceed over
-- 
2.34.1
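
For illustration, the "write_bdf" exchange documented in the pcid.h comment above can be modeled as a JSON request/response pair. This is only a sketch of the documented fields: "cmd", "sbdf" and "name" in the request and "resp" in the response come from the doc comment; the exact on-wire envelope (framing, any message id) is not shown in this chunk and is deliberately omitted.

```python
import json

def make_write_bdf_request(sbdf: str, name: str) -> str:
    """Build a pcid 'write_bdf' request body (documented fields only)."""
    return json.dumps({
        "cmd": "write_bdf",   # PCID_CMD_WRITE_BDF
        "sbdf": sbdf,         # formatted per PCID_SBDF_FMT, e.g. "0000:03:00.0"
        "name": name,         # pciback sysfs file name, e.g. "permissive"
    })

def is_write_bdf_response(raw: str) -> bool:
    """A response matches this command if 'resp' echoes 'write_bdf'."""
    return json.loads(raw).get("resp") == "write_bdf"
```

For example, setting the permissive flag would send `make_write_bdf_request("0000:03:00.0", "permissive")` and accept any reply for which `is_write_bdf_response()` is true.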



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:31:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477867.740852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EY-0007Fx-O6; Sun, 15 Jan 2023 11:31:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477867.740852; Sun, 15 Jan 2023 11:31:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1EY-0007EO-A9; Sun, 15 Jan 2023 11:31:30 +0000
Received: by outflank-mailman (input) for mailman id 477867;
 Sun, 15 Jan 2023 11:31:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytYW=5M=gmail.com=dmitry.semenets@srs-se1.protection.inumbo.net>)
 id 1pH1EV-0004co-Ne
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 11:31:27 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24d8fbed-94c8-11ed-b8d0-410ff93cb8f0;
 Sun, 15 Jan 2023 12:31:25 +0100 (CET)
Received: by mail-ej1-x635.google.com with SMTP id kt14so3138294ejc.3
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 03:31:25 -0800 (PST)
Received: from dsemenets-HP-EliteBook-850-G8-Notebook-PC.. ([91.219.254.73])
 by smtp.gmail.com with ESMTPSA id
 uj42-20020a170907c9aa00b0084d4e612a22sm7459961ejc.67.2023.01.15.03.31.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 03:31:24 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24d8fbed-94c8-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+mtALKQlPwVEF5U0ik7VRrk0fFuHPK8gOgYu9sCiUsM=;
        b=CE/3ULVB9SpcW6LlUxdmDTxQectR15DDiHVLnD89g9HUpoRoGsdV12uwSxniPJ1jnU
         BXf2msA30m3zan9lIOwObqXPV3SkvPeIzxr8HDGiUF86QIMY/NOvm72ke6qglA2eOUmb
         DTypfSN/covIcd7c4Zay4nnzRuFGrgRNWsipmNZJDjcpYSaXAanIHbc/4j8I5BK58nLn
         zSiQAVfV8lpgh0UZUTReA9h08ayhW7P/QndYf77Ok4P0ffTr7lb3HsIk63ZkhbhzjaCX
         egOJkXcpezAFuUJ66pmcKCXO/v1+sgS8VxDF8DuUny3KuOThUxHjYVMTjYJMcCVTsYLa
         4dtQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=+mtALKQlPwVEF5U0ik7VRrk0fFuHPK8gOgYu9sCiUsM=;
        b=IsL1ihP/pMIxdLwHeLrVEc/oyLkAUH16YMB4ypuftsp9ZtnqplbcnPPpJ03LA+q8SW
         L+k1RaRM6JiZbJxfJHJbJwJIYlxKE5E9YzNAzmZpeL+hKSrXM8wn4GbimFQ1OFWnDFNl
         Ydyu5Xh02IRS8KIUuElZzoFP63tTzLjIVYJVY8LVPSTGIB4XrNgdseaRt1Ao+/Zm0Ouk
         TK95u9AdExGdFvAC6hzKcMQvFy7XoDZ0gWpxIJOaJTIVXfHgT44Ua9NV4vnzNvxUgANv
         lOv4af7WqZUK7y2REkuKnQ33eBBh5DeR0h5FEprkWKklANjeMk4FgDxHQkWRbnb3zOiF
         l57A==
X-Gm-Message-State: AFqh2kp4Y1ev+SOgsELayMrjQoms+shNuE3eAmPkSd2LnjsrGNaNjPGG
	lRxTtCHaPSp9RunHfiNDM/9oITs+PIBUvTmr
X-Google-Smtp-Source: AMrXdXsSAXNShODj7QVtcBkVZdHwv7RQ9FqDmQAFZzvvyzYxA7FcyDE/dmJoJZTz3YV19UI4Vtu+TQ==
X-Received: by 2002:a17:906:e81:b0:7c1:962e:cf23 with SMTP id p1-20020a1709060e8100b007c1962ecf23mr70469799ejf.37.1673782284482;
        Sun, 15 Jan 2023 03:31:24 -0800 (PST)
From: Dmytro Semenets <dmitry.semenets@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Dmytro Semenets <dmytro_semenets@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [RFC PATCH v3 09/10] tools/libs/light: pcid: implement resource_list command
Date: Sun, 15 Jan 2023 13:31:10 +0200
Message-Id: <20230115113111.1207605-10-dmitry.semenets@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Dmytro Semenets <dmytro_semenets@epam.com>

Implement the "resource_list" pcid command: the pcid daemon reads a PCI
device's iomem regions and legacy IRQ from sysfs and returns them to
libxl over the vchan, so that libxl no longer parses sysfs directly in
pci_add_dm_done().

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
---
 tools/include/pcid.h           |  26 ++++++++
 tools/libs/light/libxl_pci.c   |  63 ++++++++++---------
 tools/libs/light/libxl_pcid.c  | 110 +++++++++++++++++++++++++++++++++
 tools/libs/light/libxl_vchan.c |   8 +++
 tools/libs/light/libxl_vchan.h |   4 +-
 5 files changed, 180 insertions(+), 31 deletions(-)

diff --git a/tools/include/pcid.h b/tools/include/pcid.h
index 63ac0bcac9..833b6c7f3e 100644
--- a/tools/include/pcid.h
+++ b/tools/include/pcid.h
@@ -155,6 +155,32 @@
 #define PCID_CMD_IS_ASSIGNED            "is_device_assigned"
 #define PCID_MSG_FIELD_RESULT           "result"
 
+/*
+ *******************************************************************************
+ * Get device resources
+ *
+ * This command returns the resource list of a device
+ *
+ * Request (see other mandatory fields above):
+ *  - "cmd" field of the request must be set to "resource_list".
+ *  - "sbdf" SBDF of the device in the format defined by PCID_SBDF_FMT.
+ *
+ * Response (see other mandatory fields above):
+ *  - "resp" field of the response must be set to "resource_list".
+ * Command specific response data:
+ * +-------------+--------------+----------------------------------------------+
+ * | resources   | map          | key 'iomem' - list of memory regions         |
+ * |             |              | key 'irqs' - list of irqs                    |
+ * +-------------+--------------+----------------------------------------------+
+ */
+#define PCID_CMD_RESOURCE_LIST          "resource_list"
+/* Arguments */
+#define PCID_MSG_FIELD_DOMID            "domid"
+/* Result */
+#define PCID_MSG_FIELD_RESOURCES        "resources"
+#define PCID_RESULT_KEY_IOMEM           "iomem"
+#define PCID_RESULT_KEY_IRQS            "irqs"
+
 /*
  *******************************************************************************
  * Reset PCI device
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index e9f0bad442..2e7bd2eae5 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -81,16 +81,19 @@ static int pci_handle_response(libxl__gc *gc,
     LOG(DEBUG, "command: %s", command_name);
 
     if (strcmp(command_name, PCID_CMD_LIST_ASSIGNABLE) == 0)
-       ret = process_list_assignable(gc, response, result);
+        ret = process_list_assignable(gc, response, result);
     else if (strcmp(command_name, PCID_CMD_MAKE_ASSIGNABLE) == 0)
         *result = libxl__json_object_alloc(gc, JSON_NULL);
     else if (strcmp(command_name, PCID_CMD_REVERT_ASSIGNABLE) == 0)
         *result = libxl__json_object_alloc(gc, JSON_NULL);
     else if (strcmp(command_name, PCID_CMD_IS_ASSIGNED) == 0)
         *result = (libxl__json_object *)libxl__json_map_get(PCID_MSG_FIELD_RESULT,
-                                                          response, JSON_BOOL);
+                response, JSON_BOOL);
     else if (strcmp(command_name, PCID_CMD_RESET_DEVICE) == 0)
         *result = libxl__json_object_alloc(gc, JSON_NULL);
+    else if (strcmp(command_name, PCID_CMD_RESOURCE_LIST) == 0)
+        *result = (libxl__json_object *)libxl__json_map_get(PCID_MSG_FIELD_RESOURCES,
+                response, JSON_MAP);
     return ret;
 }
 
@@ -1388,14 +1391,21 @@ static void pci_add_dm_done(libxl__egc *egc,
     STATE_AO_GC(pas->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     libxl_domid domid = pas->pci_domid;
-    char *sysfs_path;
-    FILE *f;
     unsigned long long start, end, flags, size;
     int irq, i;
     int r;
     uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
     uint32_t domainid = domid;
     bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
+    struct vchan_info *vchan;
+    libxl__json_object *result;
+    libxl__json_object *args;
+    const libxl__json_object *value;
+    libxl__json_object *res_obj;
+
+    vchan = pci_vchan_get_client(gc);
+    if (!vchan) {
+        rc = ERROR_NOT_READY;
+        goto out;
+    }
 
     /* Convenience aliases */
     bool starting = pas->starting;
@@ -1404,25 +1414,27 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
 
-    if (rc) goto out;
+    args = libxl__vchan_start_args(gc);
+    libxl__vchan_arg_add_string(gc, args, PCID_MSG_FIELD_SBDF,
+                                GCSPRINTF(PCID_SBDF_FMT, pci->domain,
+                                          pci->bus, pci->dev, pci->func));
+    libxl__vchan_arg_add_integer(gc, args, PCID_MSG_FIELD_DOMID, domid);
+
+    result = vchan_send_command(gc, vchan, PCID_CMD_RESOURCE_LIST, args);
+    if (!result)
+        goto vchan_free;
+    value = libxl__json_map_get(PCID_RESULT_KEY_IOMEM, result, JSON_ARRAY);
 
     /* stubdomain is always running by now, even at create time */
     if (isstubdom)
         starting = false;
-
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                           pci->bus, pci->dev, pci->func);
-    f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
-
-    if (f == NULL) {
-        LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
-        rc = ERROR_FAIL;
-        goto out;
-    }
     for (i = 0; i < PROC_PCI_NUM_RESOURCES; i++) {
-        if (fscanf(f, "0x%llx 0x%llx 0x%llx\n", &start, &end, &flags) != 3)
+        if ((res_obj = libxl__json_array_get(value, i)) == NULL)
+            continue;
+        const char *iomem_str = libxl__json_object_get_string(res_obj);
+        if (sscanf(iomem_str, "0x%llx 0x%llx 0x%llx\n", &start, &end, &flags) != 3)
             continue;
         size = end - start + 1;
         if (start) {
@@ -1432,7 +1444,6 @@ static void pci_add_dm_done(libxl__egc *egc,
                     LOGED(ERROR, domainid,
                           "xc_domain_ioport_permission 0x%llx/0x%llx (error %d)",
                           start, size, r);
-                    fclose(f);
                     rc = ERROR_FAIL;
                     goto out;
                 }
@@ -1443,29 +1454,21 @@ static void pci_add_dm_done(libxl__egc *egc,
                     LOGED(ERROR, domainid,
                           "xc_domain_iomem_permission 0x%llx/0x%llx (error %d)",
                           start, size, r);
-                    fclose(f);
                     rc = ERROR_FAIL;
                     goto out;
                 }
             }
         }
     }
-    fclose(f);
     if (!pci_supp_legacy_irq())
         goto out_no_irq;
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                                pci->bus, pci->dev, pci->func);
-    f = fopen(sysfs_path, "r");
-    if (f == NULL) {
-        LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
-        goto out_no_irq;
-    }
-    if ((fscanf(f, "%u", &irq) == 1) && irq) {
+    value = libxl__json_map_get(PCID_RESULT_KEY_IRQS, result, JSON_ARRAY);
+    if ((res_obj = libxl__json_array_get(value, 0)) &&
+            (irq = libxl__json_object_get_integer(res_obj))) {
         r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
         if (r < 0) {
             LOGED(ERROR, domainid, "xc_physdev_map_pirq irq=%d (error=%d)",
                   irq, r);
-            fclose(f);
             rc = ERROR_FAIL;
             goto out;
         }
@@ -1473,12 +1476,10 @@ static void pci_add_dm_done(libxl__egc *egc,
         if (r < 0) {
             LOGED(ERROR, domainid,
                   "xc_domain_irq_permission irq=%d (error=%d)", irq, r);
-            fclose(f);
             rc = ERROR_FAIL;
             goto out;
         }
     }
-    fclose(f);
 
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
@@ -1511,6 +1512,8 @@ out_no_irq:
         rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
+vchan_free:
+    pci_vchan_free(gc, vchan);
 out:
     libxl__ev_time_deregister(gc, &pas->timeout);
     libxl__ev_time_deregister(gc, &pas->timeout_retries);
diff --git a/tools/libs/light/libxl_pcid.c b/tools/libs/light/libxl_pcid.c
index 80bcd3c63e..d968071224 100644
--- a/tools/libs/light/libxl_pcid.c
+++ b/tools/libs/light/libxl_pcid.c
@@ -40,6 +40,10 @@
 
 #define PCI_BDF                "%04x:%02x:%02x.%01x"
 
+static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
+        unsigned int domain, unsigned int bus,
+        unsigned int dev, unsigned int func);
+
 struct vchan_client {
     XEN_LIST_ENTRY(struct vchan_client) list;
 
@@ -121,6 +125,109 @@ static int process_list_assignable(libxl__gc *gc, yajl_gen gen,
     return 0;
 }
 
+static bool pci_supp_legacy_irq(void)
+{
+#ifdef CONFIG_PCI_SUPP_LEGACY_IRQ
+    return true;
+#else
+    return false;
+#endif
+}
+
+static int process_list_resources(libxl__gc *gc, yajl_gen gen,
+                                   char *command_name,
+                                   const struct libxl__json_object *request,
+                                   struct libxl__json_object **response)
+{
+    struct libxl__json_object *iomem =
+                 libxl__json_object_alloc(gc, JSON_ARRAY);
+    struct libxl__json_object *irqs =
+                 libxl__json_object_alloc(gc, JSON_ARRAY);
+    const struct libxl__json_object *json_sbdf;
+    const struct libxl__json_object *json_domid;
+    unsigned int dom, bus, dev, func;
+    libxl_domid domainid;
+    char *sysfs_path;
+    FILE *f;
+    unsigned long long start, end, flags;
+    int irq, i;
+    int rc = 0;
+    libxl__json_map_node *map_node = NULL;
+
+    json_sbdf = libxl__json_map_get(PCID_MSG_FIELD_SBDF, request, JSON_STRING);
+    if (!json_sbdf) {
+        make_error_reply(gc, gen, "No mandatory parameter 'sbdf'", command_name);
+        return ERROR_FAIL;
+    }
+    if (sscanf(libxl__json_object_get_string(json_sbdf), PCID_SBDF_FMT,
+               &dom, &bus, &dev, &func) != 4) {
+        make_error_reply(gc, gen, "Can't parse SBDF", command_name);
+        return ERROR_FAIL;
+    }
+
+    json_domid = libxl__json_map_get(PCID_MSG_FIELD_DOMID, request, JSON_INTEGER);
+    if (!json_domid) {
+        make_error_reply(gc, gen, "No mandatory parameter 'domid'", command_name);
+        return ERROR_FAIL;
+    }
+    domainid = libxl__json_object_get_integer(json_domid);
+
+    libxl__yajl_gen_asciiz(gen, PCID_MSG_FIELD_RESOURCES);
+    *response = libxl__json_object_alloc(gc, JSON_MAP);
+
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", dom, bus, dev, func);
+    f = fopen(sysfs_path, "r");
+    start = 0;
+    irq = 0;
+
+    if (f == NULL) {
+        LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+    for (i = 0; i < PROC_PCI_NUM_RESOURCES; i++) {
+        if (fscanf(f, "0x%llx 0x%llx 0x%llx\n", &start, &end, &flags) != 3)
+            continue;
+        if (start) {
+            struct libxl__json_object *node =
+                libxl__json_object_alloc(gc, JSON_STRING);
+
+            node->u.string = GCSPRINTF("0x%llx 0x%llx 0x%llx", start, end, flags);
+            flexarray_append(iomem->u.array, node);
+        }
+    }
+    fclose(f);
+    if (!pci_supp_legacy_irq())
+        goto out_no_irq;
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", dom, bus, dev, func);
+    f = fopen(sysfs_path, "r");
+    if (f == NULL) {
+        LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
+        goto out_no_irq;
+    }
+    if ((fscanf(f, "%u", &irq) == 1) && irq) {
+            struct libxl__json_object *node =
+                libxl__json_object_alloc(gc, JSON_INTEGER);
+
+            node->u.i = irq;
+            flexarray_append(irqs->u.array, node);
+    }
+    fclose(f);
+
+    GCNEW(map_node);
+    map_node->map_key = libxl__strdup(gc, PCID_RESULT_KEY_IRQS);
+    map_node->obj = irqs;
+    flexarray_append((*response)->u.map, map_node);
+out_no_irq:
+    GCNEW(map_node);
+    map_node->map_key = libxl__strdup(gc, PCID_RESULT_KEY_IOMEM);
+    map_node->obj = iomem;
+    flexarray_append((*response)->u.map, map_node);
+    rc = 0;
+out:
+    return rc;
+}
+
 static int pciback_dev_is_assigned(libxl__gc *gc, unsigned int domain,
 				   unsigned int bus, unsigned int dev,
 				   unsigned int func)
@@ -636,6 +743,9 @@ static int pcid_handle_request(libxl__gc *gc, yajl_gen gen,
     else if (strcmp(command_name, PCID_CMD_RESET_DEVICE) == 0)
        ret = process_device_pci_reset(gc, gen, command_name,
                                      request, &command_response);
+    else if (strcmp(command_name, PCID_CMD_RESOURCE_LIST) == 0)
+       ret = process_list_resources(gc, gen, command_name,
+                                     request, &command_response);
     else {
         /*
          * This is an unsupported command: make a reply and proceed over
diff --git a/tools/libs/light/libxl_vchan.c b/tools/libs/light/libxl_vchan.c
index 7611816f52..a1beda9e1b 100644
--- a/tools/libs/light/libxl_vchan.c
+++ b/tools/libs/light/libxl_vchan.c
@@ -99,6 +99,14 @@ void libxl__vchan_arg_add_bool(libxl__gc *gc, libxl__json_object *args,
     obj->u.b = val;
 }
 
+void libxl__vchan_arg_add_integer(libxl__gc *gc, libxl__json_object *args,
+                                  char *key, int val)
+{
+    libxl__json_object *obj = libxl__vchan_arg_new(gc, JSON_INTEGER, args, key);
+
+    obj->u.i = val;
+}
+
 static void reset_yajl_generator(struct vchan_state *state)
 {
     yajl_gen_clear(state->gen);
diff --git a/tools/libs/light/libxl_vchan.h b/tools/libs/light/libxl_vchan.h
index 0968875298..07f0db4e93 100644
--- a/tools/libs/light/libxl_vchan.h
+++ b/tools/libs/light/libxl_vchan.h
@@ -58,7 +58,9 @@ static inline libxl__json_object *libxl__vchan_start_args(libxl__gc *gc)
 void libxl__vchan_arg_add_string(libxl__gc *gc, libxl__json_object *args,
                                  char *key, char *val);
 void libxl__vchan_arg_add_bool(libxl__gc *gc, libxl__json_object *args,
                                char *key, bool val);
+void libxl__vchan_arg_add_integer(libxl__gc *gc, libxl__json_object *args,
+                                  char *key, int val);
 
 libxl__json_object *vchan_send_command(libxl__gc *gc, struct vchan_info *vchan,
                                        char *cmd, libxl__json_object *args);
-- 
2.34.1
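
As a rough model of what process_list_resources does with the sysfs "resource" file (a sketch, not the libxl implementation): each line carries start, end and flags in hex, entries with a zero start address are skipped exactly as the C loop skips them, and the consumer in pci_add_dm_done computes the region size as end - start + 1.

```python
def parse_pci_resources(text: str):
    """Parse sysfs PCI 'resource' lines of the form 'start end flags' (hex)."""
    iomem = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue                      # mirrors the fscanf(...) != 3 skip
        start, end, flags = (int(p, 16) for p in parts)
        if start == 0:
            continue                      # unused BAR, skipped like the C loop
        iomem.append({"start": start, "end": end,
                      "size": end - start + 1, "flags": flags})
    return iomem
```

This is the same per-line contract the daemon serializes into the "iomem" array and that pci_add_dm_done re-parses with sscanf on the libxl side.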



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 11:46:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 11:46:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477928.740867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1TL-0003gI-CK; Sun, 15 Jan 2023 11:46:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477928.740867; Sun, 15 Jan 2023 11:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH1TL-0003gB-9c; Sun, 15 Jan 2023 11:46:47 +0000
Received: by outflank-mailman (input) for mailman id 477928;
 Sun, 15 Jan 2023 11:46:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH1TK-0003g1-86; Sun, 15 Jan 2023 11:46:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH1TK-0003M0-58; Sun, 15 Jan 2023 11:46:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH1TJ-0004Rb-TB; Sun, 15 Jan 2023 11:46:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH1TJ-0003lh-Si; Sun, 15 Jan 2023 11:46:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OqEIzWYU0lDHvvDqwWjKBBSaX5ZUh+PkDv3qcPI7kcs=; b=NVrwGJUK5PRZWEKeV/zZiPY7Vg
	e+PbwDbnfV42DoGGqaXV/M3rpzG1Kn6PAWhKBLltkQsHGs0rsABNcjmBTQliEu8LT7gFrjXeztf1w
	72ANrsZobNPzRET4BwcpuNt9FCXOcqg+25fEiq/21zTA4UunXIu6FJxcmz2NeGlBOrBI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175872-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175872: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 11:46:45 +0000

flight 175872 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175872/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days    8 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    returned value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 12:33:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 12:33:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477939.740885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH2CP-0000XC-2n; Sun, 15 Jan 2023 12:33:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477939.740885; Sun, 15 Jan 2023 12:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH2CO-0000X5-VV; Sun, 15 Jan 2023 12:33:20 +0000
Received: by outflank-mailman (input) for mailman id 477939;
 Sun, 15 Jan 2023 12:33:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH2CN-0000VO-C3; Sun, 15 Jan 2023 12:33:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH2CN-0004PY-8f; Sun, 15 Jan 2023 12:33:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH2CN-0005cU-17; Sun, 15 Jan 2023 12:33:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH2CN-0002eS-0j; Sun, 15 Jan 2023 12:33:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6iZcoW5C/mNAPigV8mfZUA3l325VHlKo+WsOKwo1SdE=; b=Z2xM4BlitVrWEEwzOpJvfE3vNg
	Tf4iRNdTE5Hvy8RX+o5bQI1GOl3g1g2KB7qt0Lt9xrzVdsVzfC7zB/PMvUlzZ+b60waZYIHGj8/Xy
	6Sx8pATFIgX8T2f/m8tnDR+qM4WHt6MIItTVNtZkGkHe865s8i+pupsH+gT3lwIaqoG0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175875-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175875: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 12:33:19 +0000

flight 175875 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175875/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days    9 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    returned value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 13:01:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 13:01:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477947.740896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH2dA-0003s8-8E; Sun, 15 Jan 2023 13:01:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477947.740896; Sun, 15 Jan 2023 13:01:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH2dA-0003s1-43; Sun, 15 Jan 2023 13:01:00 +0000
Received: by outflank-mailman (input) for mailman id 477947;
 Sun, 15 Jan 2023 13:00:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH2d8-0003rr-UC; Sun, 15 Jan 2023 13:00:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH2d8-00053G-SR; Sun, 15 Jan 2023 13:00:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH2d8-0006KK-Gd; Sun, 15 Jan 2023 13:00:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH2d8-0003OH-GH; Sun, 15 Jan 2023 13:00:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XM39g+9WSSaPJyyRPR9e6G9bSn29sIY2TjK5OzpMVss=; b=kHoTvfgXsJQvKaoelJ805DCfGO
	Rt9r6qxosep9Td2//rMUo055qZCmmH4pilqiMVazsoTSaNYMwLpssNcsPBnSPT4j109Vd/SsSIiRQ
	uhmr1XJrdWFKzB/8vpSmyMauQvWP9Rjx5Bbh/PXi1Wg47ROhxO2sUc8/ftqR7b9kOtuQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175876-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175876: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 13:00:58 +0000

flight 175876 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175876/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   10 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    returned value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 13:31:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 13:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477956.740907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH36S-0007D7-LD; Sun, 15 Jan 2023 13:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477956.740907; Sun, 15 Jan 2023 13:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH36S-0007D0-Hl; Sun, 15 Jan 2023 13:31:16 +0000
Received: by outflank-mailman (input) for mailman id 477956;
 Sun, 15 Jan 2023 13:31:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Jr4=5M=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pH36Q-0007Cu-Uo
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 13:31:15 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df3663ce-94d8-11ed-91b6-6bf2151ebd3b;
 Sun, 15 Jan 2023 14:31:11 +0100 (CET)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 674715C0062;
 Sun, 15 Jan 2023 08:31:09 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Sun, 15 Jan 2023 08:31:09 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 15 Jan 2023 08:31:07 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df3663ce-94d8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm3; t=1673789469; x=
	1673875869; bh=+SXSZ20hXLrjTTJOPMKMT76P9OAHK56Wb+bD0RGRY6A=; b=h
	Mi5o70zxeg2tr6OK0LM+gvqj2csyZ0UpZ4YHspmyA61iM5VCRwVYXQ3MhktJfNfi
	wId881BcTaEc8tkHi20OKscHjQHmgKg2CkDR1pM91KzrnF4++CJ1n71T63iYx8T4
	oFh98DlbpGbuH9rpKFub4nNGCAa37fyUicuywBNWq7YrRsViYPAoro3LKWKMEu7k
	nu5/+H7mRS/fSDegBG6pKJkzLjS6seXwWAehRFpwLpCxmRjNcz65+fg/Ri7hX5Ub
	JUtWZ2OonyTs52+fwy3xyLaSK9osMvMoBPZiYZaNTWU7sIK1YrOUTm8F+odKBRz3
	8muj+LFVBbciYWXneuAXA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm3; t=1673789469; x=1673875869; bh=+SXSZ20hXLrjTTJOPMKMT76P9OAH
	K56Wb+bD0RGRY6A=; b=Y2MK6Hc2dieOOOMhdkaB7k0XCdOQlVPTn4wjwXtFcauE
	a9v9fvAiKh9R+nCaY3wxpOm3x/jB7VjXuzBB8KQPfn2OXwAgD5K/d2FiPAwKhfvJ
	hZEK36sPjIxi0/UIsyQEXaTVzJTXoLac99Krw3aru9ZgTs/gwUm4Gs3YswIoFF/e
	wnJ128ethOM4Zzd5x+yWzI/CC5XOwgvVAx/vldKD2qOOrQG7msbbrIkE6QrNzDeI
	3kbwhm+POXrK9Py64JdDDMntq9QpvsNCkdNWIgTlYypk71PXrZ4ranD8R8CvgYHv
	W3YV3hNrLOcOiZnAUSlrepTAz76QSe1/S48t0bPYHA==
X-ME-Sender: <xms:HADEYwsPHscSFtDF7wFn29G8FWS5IOKXWc3uyGMBE4S9kT44QeKrSw>
    <xme:HADEY9dlann9Llf38U-VoGzB8WlxOPj5T8Qx20MBsUJL9JEbeKx1j4W6cBP9zYjix
    B0QiyBFOnun7w>
X-ME-Received: <xmr:HADEY7zeYJUulTVSizoYiZQ3U-CMn3ERrUIotLqH67Ps6oWEDlAKVekG3ykMIke8bUgO_SRxHEfZ-JY7_JH3BD59HPhVBg9LMg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddtvddgheegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepueek
    teetgefggfekudehteegieeljeejieeihfejgeevhfetgffgteeuteetueetnecuffhomh
    grihhnpehgihhthhhusgdrtghomhenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhm
X-ME-Proxy: <xmx:HADEYzMqj0Ty7WfAMcyZ6RROAUQsyoQqnTNFY6C3wyuXytC9h1brVw>
    <xmx:HADEYw-OJi7H5ZgptihbzRMBEOqpBrxRXkhb-pO3gnxqAnIfSgpREg>
    <xmx:HADEY7WSxsIrLmfyhp7VTgQ2yIT3STCwcLcHovaL417im0X9QzmnTg>
    <xmx:HQDEY7Y93-IVKrVse5pErUp4l2AKFjCZ4Bd5Ng5tGX8_Mm6lz3BDzQ>
Feedback-ID: i1568416f:Fastmail
Date: Sun, 15 Jan 2023 14:31:03 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Ard Biesheuvel <ardb@kernel.org>, linux-efi@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Peter Jones <pjones@redhat.com>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Kees Cook <keescook@chromium.org>,
	Anton Vorontsov <anton@enomsg.org>,
	Colin Cross <ccross@android.com>, Tony Luck <tony.luck@intel.com>
Subject: Re: [PATCH v2 5/6] efi: xen: Implement memory descriptor lookup
 based on hypercall
Message-ID: <Y8QAF5K4ZLJxzPni@mail-itl>
References: <20221003112625.972646-1-ardb@kernel.org>
 <20221003112625.972646-6-ardb@kernel.org>
 <Yzr/1s9CbA0CClmt@itl-email>
 <CAMj1kXEXhDXRSnBp8P=urFj8UzzeRtYS9V8Tdt9GSrZTnGRFhA@mail.gmail.com>
 <YzsMYfEwmjHwVheb@itl-email>
 <CAMj1kXHR1FfD+ipG4RtbOezx+s_Jo6JwG4fpT5XUmvoqHTctLA@mail.gmail.com>
 <YzsWAnD7q9qeBoBn@mail-itl>
 <Yzsii72GWWvc5tRD@itl-email>
 <YzsjYHirK+SXUjGl@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Brnh0jZ4ScsEj+y3"
Content-Disposition: inline
In-Reply-To: <YzsjYHirK+SXUjGl@mail-itl>


--Brnh0jZ4ScsEj+y3
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sun, 15 Jan 2023 14:31:03 +0100
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Ard Biesheuvel <ardb@kernel.org>, linux-efi@vger.kernel.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Peter Jones <pjones@redhat.com>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Kees Cook <keescook@chromium.org>,
	Anton Vorontsov <anton@enomsg.org>,
	Colin Cross <ccross@android.com>, Tony Luck <tony.luck@intel.com>
Subject: Re: [PATCH v2 5/6] efi: xen: Implement memory descriptor lookup
 based on hypercall

On Mon, Oct 03, 2022 at 08:01:03PM +0200, Marek Marczykowski-Górecki wrote:
> On Mon, Oct 03, 2022 at 01:57:14PM -0400, Demi Marie Obenour wrote:
> > On Mon, Oct 03, 2022 at 07:04:02PM +0200, Marek Marczykowski-Górecki wrote:
> > > On Mon, Oct 03, 2022 at 06:37:19PM +0200, Ard Biesheuvel wrote:
> > > > On Mon, 3 Oct 2022 at 18:23, Demi Marie Obenour
> > > > <demi@invisiblethingslab.com> wrote:
> > > > >
> > > > > On Mon, Oct 03, 2022 at 05:59:52PM +0200, Ard Biesheuvel wrote:
> > > > > > On Mon, 3 Oct 2022 at 17:29, Demi Marie Obenour
> > > > > > <demi@invisiblethingslab.com> wrote:
> > > > > > >
> > > > > > > On Mon, Oct 03, 2022 at 01:26:24PM +0200, Ard Biesheuvel wrote:
> > > > > > > > Xen on x86 boots dom0 in EFI mode but without providing a memory map.
> > > > > > > > This means that some sanity checks we would like to perform on
> > > > > > > > configuration tables or other data structures in memory are not
> > > > > > > > currently possible. Xen does, however, expose EFI memory descriptor info
> > > > > > > > via a Xen hypercall, so let's wire that up instead.
> > > > > > > >
> > > > > > > > Co-developed-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> > > > > > > > Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> > > > > > > > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > > > > > > > ---
> > > > > > > >  drivers/firmware/efi/efi.c |  5 ++-
> > > > > > > >  drivers/xen/efi.c          | 34 ++++++++++++++++++++
> > > > > > > >  include/linux/efi.h        |  1 +
> > > > > > > >  3 files changed, 39 insertions(+), 1 deletion(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
> > > > > > > > index 55bd3f4aab28..2c12b1a06481 100644
> > > > > > > > --- a/drivers/firmware/efi/efi.c
> > > > > > > > +++ b/drivers/firmware/efi/efi.c
> > > > > > > > @@ -456,7 +456,7 @@ void __init efi_find_mirror(void)
> > > > > > > >   * and if so, populate the supplied memory descriptor with the appropriate
> > > > > > > >   * data.
> > > > > > > >   */
> > > > > > > > -int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
> > > > > > > > +int __efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
> > > > > > > >  {
> > > > > > > >       efi_memory_desc_t *md;
> > > > > > > >
> > > > > > > > @@ -485,6 +485,9 @@ int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
> > > > > > > >       return -ENOENT;
> > > > > > > >  }
> > > > > > > >
> > > > > > > > +extern int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
> > > > > > > > +      __weak __alias(__efi_mem_desc_lookup);
> > > > > > > > +
> > > > > > > >  /*
> > > > > > > >   * Calculate the highest address of an efi memory descriptor.
> > > > > > > >   */
> > > > > > > > diff --git a/drivers/xen/efi.c b/drivers/xen/efi.c
> > > > > > > > index d1ff2186ebb4..74f3f6d8cdc8 100644
> > > > > > > > --- a/drivers/xen/efi.c
> > > > > > > > +++ b/drivers/xen/efi.c
> > > > > > > > @@ -26,6 +26,7 @@
> > > > > > > >
> > > > > > > >  #include <xen/interface/xen.h>
> > > > > > > >  #include <xen/interface/platform.h>
> > > > > > > > +#include <xen/page.h>
> > > > > > > >  #include <xen/xen.h>
> > > > > > > >  #include <xen/xen-ops.h>
> > > > > > > >
> > > > > > > > @@ -292,3 +293,36 @@ void __init xen_efi_runtime_setup(void)
> > > > > > > >       efi.get_next_high_mono_count    = xen_efi_get_next_high_mono_count;
> > > > > > > >       efi.reset_system                = xen_efi_reset_system;
> > > > > > > >  }
> > > > > > > > +
> > > > > > > > +int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
> > > > > > > > +{
> > > > > > > > +     static_assert(XEN_PAGE_SHIFT == EFI_PAGE_SHIFT,
> > > > > > > > +                   "Mismatch between EFI_PAGE_SHIFT and XEN_PAGE_SHIFT");
> > > > > > > > +     struct xen_platform_op op = {
> > > > > > > > +             .cmd = XENPF_firmware_info,
> > > > > > > > +             .u.firmware_info = {
> > > > > > > > +                     .type = XEN_FW_EFI_INFO,
> > > > > > > > +                     .index = XEN_FW_EFI_MEM_INFO,
> > > > > > > > +                     .u.efi_info.mem.addr = phys_addr,
> > > > > > > > +                     .u.efi_info.mem.size = U64_MAX - phys_addr,
> > > > > > > > +             }
> > > > > > > > +     };
> > > > > > > > +     union xenpf_efi_info *info = &op.u.firmware_info.u.efi_info;
> > > > > > > > +     int rc;
> > > > > > > > +
> > > > > > > > +     if (!efi_enabled(EFI_PARAVIRT) || efi_enabled(EFI_MEMMAP))
> > > > > > > > +             return __efi_mem_desc_lookup(phys_addr, out_md);
> > > > > > > > +
> > > > > > > > +     rc = HYPERVISOR_platform_op(&op);
> > > > > > > > +     if (rc) {
> > > > > > > > +             pr_warn("Failed to lookup header 0x%llx in Xen memory map: error %d\n",
> > > > > > > > +                     phys_addr, rc);
> > > > > > > > +     }
> > > > > > > > +
> > > > > > > > +     out_md->phys_addr       = info->mem.addr;
> > > > > > >
> > > > > > > This will be equal to phys_addr, not the actual start of the memory
> > > > > > > region.
> > > > > > >
> > > > > > > > +     out_md->num_pages       = info->mem.size >> EFI_PAGE_SHIFT;
> > > > > > >
> > > > > > > Similarly, this will be the number of bytes in the memory region
> > > > > > > after phys_addr, not the total number of bytes in the region.  These two
> > > > > > > differences mean that this function is not strictly equivalent to the
> > > > > > > original efi_mem_desc_lookup().
> > > > > > >
> > > > > > > I am not sure if this matters in practice, but I thought you would want
> > > > > > > to be aware of it.
> > > > > >
> > > > > > This is a bit disappointing. Is there no way to obtain this
> > > > > > information via a Xen hypercall?
> > > > >
> > > > > It is possible, but doing so is very complex (it essentially requires a
> > > > > binary search).  This really should be fixed on the Xen side.
> > > > >
> > > > > > In any case, it means we'll need to round down phys_addr to page size
> > > > > > at the very least.
> > > > >
> > > > > That makes sense.  Are there any callers that will be broken even with
> > > > > this rounding?
> > > >
> > > > As far as I can tell, it should work fine. The only thing to double
> > > > check is whether we are not creating spurious error messages from
> > > > efi_arch_mem_reserve() this way, but as far as I can tell, that should
> > > > be fine too.
> > > >
> > > > Is there anyone at your end that can give this a spin on an actual
> > > > Xen/x86 system?
> > >
> > > Demi, if you open a PR with this at
> > > https://github.com/QubesOS/qubes-linux-kernel/pulls, I can run it
> > > through our CI - (at least) one of the machines has ESRT table. AFAIR
> > > your test laptop has it too.
> >
> > Just this patch or the whole series?
>
> Whole series.

I have tested the series as in
https://github.com/QubesOS/qubes-linux-kernel/pull/681 and it seems to
work great.
Note the series there differs from this thread, and is marked as "v3" - I
assume (but haven't verified) it has changes requested in this thread
applied. Demi, can you confirm? If so, you can probably send this v3,
and feel free to include my Tested-by (unless you make significant
changes, ofc).

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--Brnh0jZ4ScsEj+y3
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmPEABcACgkQ24/THMrX
1yzidQf/fw6/BjJE0q8FQMmaZtoPGu8ikWosCRW2C8jXxcCr6Mygrhlu7Ddn6RBC
yBJmyuuvl5xEd3TS5h0Xd6iRE/DEQqIinE+FwLNejsNiQkiGIBDHCEJaFox6B2f4
NdXMzUYIXdIXarC5rhWIUO75Zbc61+H6MO6iuyEDhHtOlQ7QZTDvBH9/EevPNNeY
tHQqDhUkz7v9CczneoyqzAFADJXxLueIYD6sBdwEeVlNHMurS41DsLxRNuOV/ADf
e/3pYFOpvBYBWIj8NXTUoyBrNwU3NtE+Yj6cXbB7UQxSNa6TXc+lmZkntbQawPcQ
xJKIyGBbO/okzjrADNmJhkggn0OGiA==
=pAmZ
-----END PGP SIGNATURE-----

--Brnh0jZ4ScsEj+y3--


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 14:01:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 14:01:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477964.740918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH3ZB-0002E2-4t; Sun, 15 Jan 2023 14:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477964.740918; Sun, 15 Jan 2023 14:00:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH3ZB-0002Dv-1V; Sun, 15 Jan 2023 14:00:57 +0000
Received: by outflank-mailman (input) for mailman id 477964;
 Sun, 15 Jan 2023 14:00:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH3Z9-0002Dl-Nb; Sun, 15 Jan 2023 14:00:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH3Z9-0006OV-6t; Sun, 15 Jan 2023 14:00:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH3Z8-0007kw-NX; Sun, 15 Jan 2023 14:00:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH3Z8-0003eM-N7; Sun, 15 Jan 2023 14:00:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nU4mZa/foIJ+HpTeNf/PiFSoR5fx47cwhz/pGUBbItc=; b=KEkWlJGRSf23WOPN04rvOQHiP1
	XW9emGl0+zW7uHN4x4UyvrXvC9nkKt/B0/YSgtA/18MdDcWRNBgVr8hYpKuL04w5yGqMlAYylku99
	VUB4IRzdBLWTU9Q3uRfraiwGNwarsH7WbjZEyyYF6oWPAT8x7tqiRPdwoxm/GqUdjp00=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175877-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175877: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 14:00:54 +0000

flight 175877 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175877/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   11 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installs the QemuAcpiTableNotifyProtocol at a
    wrong position. It should be called before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    returned value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything, beyond the
    scope of the InstallQemuFwCfgTables(). So a local variable will
    suffice for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything, beyond the
    scope of the InstallCloudHvTablesTdx (). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 14:13:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 14:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477972.740929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH3lM-0003kB-8M; Sun, 15 Jan 2023 14:13:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477972.740929; Sun, 15 Jan 2023 14:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH3lM-0003k4-5P; Sun, 15 Jan 2023 14:13:32 +0000
Received: by outflank-mailman (input) for mailman id 477972;
 Sun, 15 Jan 2023 14:13:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hCbC=5M=casper.srs.infradead.org=BATV+708f9baf63a8fc202797+7084+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pH3lG-0003jy-J5
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 14:13:30 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3ba8953-94de-11ed-b8d0-410ff93cb8f0;
 Sun, 15 Jan 2023 15:13:23 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pH3ks-007tbH-UF; Sun, 15 Jan 2023 14:13:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3ba8953-94de-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=OOTmTEZ5tSYiLE0Aufj+nlndsvBbI3BEiPky8VVv93w=; b=KwUOt/Nw5GaPANM4wZ26yailqU
	2IrfIAai3ZAYJBbtzFDTwfW6uMtL+7SJRiVSfYRNVXhyBYD9bgE58ZgrMMXKIyrR1pIAvQ4U2irfg
	ZnRaY1LJfeyTNtqBdr4IYkbgAyMEh1cTRwl9yr/GbPwTiLeucfOboLMJar99XxmqNVYE+zgDTWPt4
	JwFk8ZzbJGj35RUjyWkeReh0Soo6DSzsPJEWNyggCoHKdOSm4/CIm4+PlVqkICTGb6AcunGoG15fr
	OTejiZc++bXwB2vmAiO21saI28Eb/veLOiSLjRTqlpnGdXYKkXaeUumREQmjilHWm96BikBCcJXPM
	HFG87D4Q==;
Message-ID: <55f581345df465a73d6469f44d3512d9ccac7ffc.camel@infradead.org>
Subject: Re: [patch V2 30/46] x86/xen: Wrap XEN MSI management into irqdomain
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  xen-devel <xen-devel@lists.xensource.com>
Cc: Dimitri Sivanich <sivanich@hpe.com>, linux-hyperv@vger.kernel.org, Steve
 Wahl <steve.wahl@hpe.com>, linux-pci@vger.kernel.org, "K. Y. Srinivasan"
 <kys@microsoft.com>,  Dan Williams <dan.j.williams@intel.com>, Wei Liu
 <wei.liu@kernel.org>, Stephen Hemminger <sthemmin@microsoft.com>, Baolu Lu
 <baolu.lu@intel.com>, Marc Zyngier <maz@kernel.org>, x86@kernel.org, Jason
 Gunthorpe <jgg@mellanox.com>, Megha Dey <megha.dey@intel.com>,
 xen-devel@lists.xenproject.org, Kevin Tian <kevin.tian@intel.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>,  Haiyang Zhang
 <haiyangz@microsoft.com>, Alex Williamson <alex.williamson@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Bjorn Helgaas
 <bhelgaas@google.com>, Dave Jiang <dave.jiang@intel.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Jon Derrick <jonathan.derrick@intel.com>,
 Juergen Gross <jgross@suse.com>, Russ Anderson <rja@hpe.com>,  Greg
 Kroah-Hartman <gregkh@linuxfoundation.org>,
 iommu@lists.linux-foundation.org, Jacob Pan <jacob.jun.pan@intel.com>, 
 "Rafael J. Wysocki" <rafael@kernel.org>
Date: Sun, 15 Jan 2023 14:12:48 +0000
In-Reply-To: <20200826112333.622352798@linutronix.de>
References: <20200826111628.794979401@linutronix.de>
	 <20200826112333.622352798@linutronix.de>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-DTVWzgYGVALze2v8okE9"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-DTVWzgYGVALze2v8okE9
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-08-26 at 13:16 +0200, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
>=20
> To allow utilizing the irq domain pointer in struct device it is necessar=
y
> to make XEN/MSI irq domain compatible.
>=20
> While the right solution would be to truly convert XEN to irq domains, th=
is
> is an exercise which is not possible for mere mortals with limited XENolo=
gy.
>=20
> Provide a plain irqdomain wrapper around XEN. While this is blatant
> violation of the irqdomain design, it's the only solution for a XEN igora=
nt
> person to make progress on the issue which triggered this change.
>=20
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Acked-by: Juergen Gross <jgross@suse.com>

I think it broke MSI-X support, because xen_pci_msi_domain_info is
lacking a .flags = MSI_FLAG_PCI_MSIX?

> ---
> Note: This is completely untested, but it compiles so it must be perfect.


I'm working on making it simple for you to test that, by hosting Xen
HVM guests natively in qemu (under KVM¹).

But I'm absolutely not going to try hacking on both guest and host side
at the same time when I'm trying to ensure compatibility; that way
lies madness.

So for now I'm going to test qemu with older kernels, and maybe someone
(Jürgen?) can test MSI-X to PIRQ support under real Xen. FWIW, if I add
the missing MSI_FLAG_PCI_MSIX flag then under my qemu I get:

 38:       3180          0  xen-pirq    -msi-x     ens4-rx-0
 39:          0       3610  xen-pirq    -msi-x     ens4-tx-0
 40:          1          0  xen-pirq    -msi-x     ens4

But without the flags I get:

[    8.464212] e1000e 0000:00:04.0 ens4: Failed to initialize MSI interrupts.  Falling back to legacy interrupts.

¹ https://lore.kernel.org/qemu-devel/20230110122042.1562155-1-dwmw2@infradead.org/



--=-DTVWzgYGVALze2v8okE9--


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 14:30:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 14:30:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477979.740940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH41y-00066G-N0; Sun, 15 Jan 2023 14:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477979.740940; Sun, 15 Jan 2023 14:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH41y-000669-KH; Sun, 15 Jan 2023 14:30:42 +0000
Received: by outflank-mailman (input) for mailman id 477979;
 Sun, 15 Jan 2023 14:30:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH41x-00065z-If; Sun, 15 Jan 2023 14:30:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH41x-00071h-GK; Sun, 15 Jan 2023 14:30:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH41x-0008Rz-6j; Sun, 15 Jan 2023 14:30:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH41x-0003pS-6E; Sun, 15 Jan 2023 14:30:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ngfig6XkFPQitjQ/hWGyzrXkeFXaGziklY8fPybHAEo=; b=s5lvOLGKyfhYxVi2sqW7+RVihM
	v0NuDjpY5nyr4k6djzpdvoffiUZiRS+SECBP2x/e873f7BOAwwoS8+A6wPG7Ve+9LNcgoBUBnXslq
	PyH0TeiPsqxA7xdPPIzip+tRNjH1I9paBkBk2+VSWY26waH36Tc839zxV2enymBcLavI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175878-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175878: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 14:30:41 +0000

flight 175878 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175878/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   12 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    return value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable will
    suffice for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 14:45:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 14:45:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.477988.740951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH4G4-0007i3-4H; Sun, 15 Jan 2023 14:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 477988.740951; Sun, 15 Jan 2023 14:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH4G4-0007hw-1G; Sun, 15 Jan 2023 14:45:16 +0000
Received: by outflank-mailman (input) for mailman id 477988;
 Sun, 15 Jan 2023 14:45:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH4G2-0007hi-TG; Sun, 15 Jan 2023 14:45:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH4G2-0007Ep-Op; Sun, 15 Jan 2023 14:45:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH4G2-0000JK-7O; Sun, 15 Jan 2023 14:45:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH4G2-0007Fx-6t; Sun, 15 Jan 2023 14:45:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3DIxEPVwboFB/T/UYt/kt+Wjbr/qDipa47zle3hqfok=; b=OhE4prpcXOdctEMj8OcMWrwtf4
	qcr6ef7KXbwZxV+qHNeR+5Ev8Qe4j4C6yS/lCvpmmrWfj4n4CWYFGSYwNL1PNhE9o8BRAFGS1oe0J
	IF8PKcxp8DrB+lltGpAVRSkIm6Vc+cnBZDvyjKJBXBWKBer8YbwbS+W5CSArUgsnLZrY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175858-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175858: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-amd64:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:<job status>:broken:regression
    qemu-mainline:build-i386:<job status>:broken:regression
    qemu-mainline:build-i386-pvops:<job status>:broken:regression
    qemu-mainline:build-amd64:host-build-prep:fail:regression
    qemu-mainline:build-amd64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-armhf-pvops:host-build-prep:fail:regression
    qemu-mainline:build-i386:host-build-prep:fail:regression
    qemu-mainline:build-i386-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64-xsm:host-build-prep:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 14:45:14 +0000

flight 175858 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175858/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-amd64                   5 host-build-prep          fail REGR. vs. 175743
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-i386                    5 host-build-prep          fail REGR. vs. 175743
 build-i386-pvops              5 host-build-prep          fail REGR. vs. 175743
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    3 days
Failing since        175750  2023-01-13 06:38:52 Z    2 days    5 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    1 day     4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               fail    
 build-amd64                                                  broken  
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf-pvops broken
broken-job build-i386 broken
broken-job build-i386-pvops broken

Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 15:02:08 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175879-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175879: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 15:01:55 +0000

flight 175879 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175879/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   13 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installs the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the value returned by
    installing the QemuAcpiTableNotifyProtocol should be checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log message from
    InstallQemuFwCfgTables after the ACPI tables are successfully installed.
    This patch adds the log back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle stored in mQemuAcpiHandle is not needed for anything beyond
    the scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle stored in mChAcpiHandle is not needed for anything beyond
    the scope of InstallCloudHvTablesTdx (). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 15:57:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 15:57:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478006.740975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH5OB-00076l-S9; Sun, 15 Jan 2023 15:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478006.740975; Sun, 15 Jan 2023 15:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH5OB-00076e-Oc; Sun, 15 Jan 2023 15:57:43 +0000
Received: by outflank-mailman (input) for mailman id 478006;
 Sun, 15 Jan 2023 15:57:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH5OB-00076U-31; Sun, 15 Jan 2023 15:57:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH5OA-0000RF-VU; Sun, 15 Jan 2023 15:57:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH5OA-0001vF-Jb; Sun, 15 Jan 2023 15:57:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH5OA-0006pE-J5; Sun, 15 Jan 2023 15:57:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mgpVVzxIquw6khx5MgTfM+GYhFc+EOg+LRfD6+8UrwE=; b=si/Q753/GXW7Mgdqc1R7+n7Rik
	exeK0PRvnQ6BgZseJmsf5iZvxNs7wmPnifox6dArq1GRwh9xyCfVrbStYp+aiq52y8J2CaMmLNZz3
	k2bcNqtrrHfMEgvohdqxxWJVDqDy2ucFr6py2X/lpH6h0CxVCP9+BlUFjKyClvoKjoZE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175881-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175881: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 15:57:42 +0000

flight 175881 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175881/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    2 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   14 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value was
    not checked after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle stored in mQemuAcpiHandle is not needed for anything beyond
    the scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle stored in mChAcpiHandle is not needed for anything beyond
    the scope of InstallCloudHvTablesTdx (). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 16:31:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 16:31:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478015.740987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH5uI-0003SN-E9; Sun, 15 Jan 2023 16:30:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478015.740987; Sun, 15 Jan 2023 16:30:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH5uI-0003SG-A6; Sun, 15 Jan 2023 16:30:54 +0000
Received: by outflank-mailman (input) for mailman id 478015;
 Sun, 15 Jan 2023 16:30:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH5uH-0003S5-HX; Sun, 15 Jan 2023 16:30:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH5uH-0001gp-E8; Sun, 15 Jan 2023 16:30:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH5uH-0002en-1c; Sun, 15 Jan 2023 16:30:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH5uH-0007Nw-18; Sun, 15 Jan 2023 16:30:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cEHa04LMzOM6ZhSxgC6s8MRVUQqSeOQHnaCrAFEVId4=; b=tNvUHKGn96qu4h/+8WM6ZPHZbl
	zLysCPEEu2WzXhj9Clkrf85g+ZkUIJ9fvv64WeKsg6ndQGeI7RE2/Ao/iQKGKim70dm62oZTYKqhU
	0EKxmNym7RpHlokb4ZeAGR4o1juixNLmZO8hdpWHtfu4MCmCiD5sHM2vgXzhZHPaih8M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175882-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175882: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 16:30:53 +0000

flight 175882 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175882/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   15 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value was
    not checked after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle stored in mQemuAcpiHandle is not needed for anything beyond
    the scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle stored in mChAcpiHandle is not needed for anything beyond
    the scope of InstallCloudHvTablesTdx (). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 17:00:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 17:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478024.740997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH6NA-0006vc-Lo; Sun, 15 Jan 2023 17:00:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478024.740997; Sun, 15 Jan 2023 17:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH6NA-0006vV-Io; Sun, 15 Jan 2023 17:00:44 +0000
Received: by outflank-mailman (input) for mailman id 478024;
 Sun, 15 Jan 2023 17:00:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH6N8-0006vK-Fk; Sun, 15 Jan 2023 17:00:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH6N8-0002MJ-DH; Sun, 15 Jan 2023 17:00:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH6N8-0003IN-1Y; Sun, 15 Jan 2023 17:00:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH6N8-0008O3-14; Sun, 15 Jan 2023 17:00:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lk84tiNnOIdlWCHQK+TNH5NRsGqZVSII4KVx+j6q2L8=; b=JpXJxzuYBibJwfutYmiRMg8FkM
	Ok7t9L9DebxfLZMrM6Ege+H783zbCMmhHJgBkvcuGmioBMAiZPnxQ/ta3YsorbUDQua5Pd0ltvy2p
	oytP+R5QzQdbuNwHjS9Ih+rQTTlT3zgDW3sk9kyVQrlhy6S79LNf7h368DFT+gP0SD/w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175883-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175883: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 17:00:42 +0000

flight 175883 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175883/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   16 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
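The fix described in the commit above is a plain error-propagation pattern: the status of the protocol installation must be returned instead of dropped. A minimal C sketch of that pattern, assuming a simplified stand-in for the edk2 EFI_STATUS machinery (the type, macros, and function names below are illustrative, not the real OvmfPkg code):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the edk2 status type and codes. */
typedef uint64_t EFI_STATUS;
#define EFI_SUCCESS          0u
#define EFI_OUT_OF_RESOURCES 9u
#define EFI_ERROR(s) ((s) != EFI_SUCCESS)

/* Hypothetical installation step; the parameter lets us simulate
 * the failure the commit message says may occur. */
EFI_STATUS install_notify_protocol(int simulate_failure)
{
    return simulate_failure ? EFI_OUT_OF_RESOURCES : EFI_SUCCESS;
}

/* Before the fix the status was ignored; after the fix it is
 * propagated so the caller can handle the failure. */
EFI_STATUS install_tables(int simulate_failure)
{
    EFI_STATUS status = install_notify_protocol(simulate_failure);

    if (EFI_ERROR(status))
        return status;   /* propagate the error to the caller */

    /* ... remaining table installation would go here ... */
    return EFI_SUCCESS;
}
```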

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value of
    installing the QemuAcpiTableNotifyProtocol should be checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The mQemuAcpiHandle handle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable will
    suffice for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The mChAcpiHandle handle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
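The commit above relies on the fact that a protocol can exist in the protocol database with a NULL interface pointer, so a dedicated notify structure is unnecessary. A toy sketch of that idea, assuming an invented in-memory "database" (the struct, array, and function names are illustrative, not the real edk2 protocol-handler API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A tiny stand-in for the protocol database. */
struct protocol_entry {
    const char *guid;   /* stand-in for an EFI_GUID */
    void *interface;    /* a NULL interface is legitimate */
};

static struct protocol_entry db[8];
static int db_count;

/* Record a protocol; a NULL interface is explicitly allowed, which
 * is what lets a bare "notification" protocol carry no data. */
int install_protocol(const char *guid, void *interface)
{
    if (db_count >= 8)
        return -1;
    db[db_count].guid = guid;
    db[db_count].interface = interface;  /* may be NULL */
    db_count++;
    return 0;
}

/* Presence in the database is the signal; the interface pointer is
 * never dereferenced. */
int protocol_present(const char *guid)
{
    for (int i = 0; i < db_count; i++)
        if (strcmp(db[i].guid, guid) == 0)
            return 1;
    return 0;
}
```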

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 17:21:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 17:21:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478033.741009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH6gg-00010K-Fk; Sun, 15 Jan 2023 17:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478033.741009; Sun, 15 Jan 2023 17:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH6gg-00010D-Au; Sun, 15 Jan 2023 17:20:54 +0000
Received: by outflank-mailman (input) for mailman id 478033;
 Sun, 15 Jan 2023 17:20:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH6gf-000100-9Q; Sun, 15 Jan 2023 17:20:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH6gf-0002sq-6F; Sun, 15 Jan 2023 17:20:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH6ge-0003mi-Qx; Sun, 15 Jan 2023 17:20:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH6ge-0004VF-QX; Sun, 15 Jan 2023 17:20:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QcgbHffCdcJemCaVq13FI2GRUb7C0EwomkSA9onvymE=; b=eep26ACowPKN8SuTYtaJhbfQqe
	5AXLDJtdVQ1n4vFGvHL9tMaS/vajGV0Edfr6a/OytYH5VEDZrJqoaznGUo0awjNmx0q63ZeooQJ3F
	B4Oh/VonPVfDfvff1r7oUcXeTaY+MoKRTqaTO/jPYBhrplqt7HXFRyfDgYusMwma9z5g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175874-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175874: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 17:20:52 +0000

flight 175874 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175874/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    3 days
Failing since        175748  2023-01-12 20:01:56 Z    2 days    9 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    1 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing a memory size in hex without a 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to the 80-character line
    length limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
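The formatting point in the commit above can be shown with standard C, where the '#' conversion flag produces the 0x prefix. format_size() is a hypothetical helper for illustration, not the actual construct_domU() printk change:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Render a byte count in hex with an explicit 0x prefix; without
 * the prefix, "40000000" is easy to misread as a decimal number. */
void format_size(char *buf, size_t len, uint64_t bytes)
{
    snprintf(buf, len, "%#" PRIx64, bytes); /* '#' adds the 0x */
}
```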

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header requires to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)
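The ABI argument in the last commit above boils down to: in every environment Xen operates in, "unsigned int" and uint32_t have the same size, so the field type switch is layout-neutral while making the width explicit. A hedged sketch with a stand-in struct (the field names mimic the public header but this is not the real xen_feature_info definition):

```c
#include <assert.h>
#include <stdint.h>

/* Before: implicit-width fields. */
struct feature_info_old { unsigned int submap_idx; unsigned int submap; };

/* After: explicit 32-bit fields. */
struct feature_info_new { uint32_t submap_idx; uint32_t submap; };

/* Compile-time check of the premise for this build target. */
_Static_assert(sizeof(unsigned int) == sizeof(uint32_t),
               "unsigned int and uint32_t differ on this target");
```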


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 17:52:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 17:52:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478043.741026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH7BS-0004Pk-Tq; Sun, 15 Jan 2023 17:52:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478043.741026; Sun, 15 Jan 2023 17:52:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH7BS-0004Pd-Ol; Sun, 15 Jan 2023 17:52:42 +0000
Received: by outflank-mailman (input) for mailman id 478043;
 Sun, 15 Jan 2023 17:52:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH7BR-0004PT-RS; Sun, 15 Jan 2023 17:52:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH7BR-0003hC-P8; Sun, 15 Jan 2023 17:52:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH7BR-0004YA-7r; Sun, 15 Jan 2023 17:52:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH7BR-0006Yw-7R; Sun, 15 Jan 2023 17:52:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Nw2jQxmkPEQknA7bv4eQo7MskQeE2CO/ho6HckU4jRY=; b=VM97fQPD2J9oOSfAetbFa5n9DH
	CyUFtCImKih0kxCIP4GT7XaNCQHDtono+/qYLayaabmUaAWkyXIKlWskCLp5yXC/4JUooim++QXIm
	Zhgx+WV7Gy6xHplt1aOvIcuuZe5DZfr0ahbdmUdxTo1MnhOSuTMhxCzsDZdf2eiUFUFc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175884-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175884: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 17:52:41 +0000

flight 175884 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175884/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   17 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value of
    installing the QemuAcpiTableNotifyProtocol should be checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The mQemuAcpiHandle handle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices for
    storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The mChAcpiHandle handle is not needed for anything beyond the scope
    of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
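
A before/after sketch of the .dsc change being described; the include
path and surrounding conditional content are assumptions based on the
commit title, not the exact file contents:

```
# Before (broken): TPM support is silently dropped when
# SECURE_BOOT_ENABLE is FALSE, because the include is nested
# inside the secure boot conditional.
!if $(SECURE_BOOT_ENABLE) == TRUE
  # ... secure boot components ...
!include OvmfPkg/Include/Dsc/OvmfTpmSecurityStub.dsc.inc
!endif

# After (fixed): the TPM include stands on its own, outside the !if,
# so TPM support no longer depends on secure boot being enabled.
!if $(SECURE_BOOT_ENABLE) == TRUE
  # ... secure boot components ...
!endif
!include OvmfPkg/Include/Dsc/OvmfTpmSecurityStub.dsc.inc
```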


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 18:31:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 18:31:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478051.741037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH7mX-0000Q4-P7; Sun, 15 Jan 2023 18:31:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478051.741037; Sun, 15 Jan 2023 18:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH7mX-0000Px-KQ; Sun, 15 Jan 2023 18:31:01 +0000
Received: by outflank-mailman (input) for mailman id 478051;
 Sun, 15 Jan 2023 18:31:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH7mX-0000Pn-5B; Sun, 15 Jan 2023 18:31:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH7mX-0004fh-1g; Sun, 15 Jan 2023 18:31:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH7mW-0005SZ-Mc; Sun, 15 Jan 2023 18:31:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH7mW-0003vP-M7; Sun, 15 Jan 2023 18:31:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ulLhbKJtH34nJ8x58oCENYW65KcZrx+arR0YSh/EWUo=; b=ua/Kn4mfWoxVGkvc8XHh36Ac0k
	5lusc+p8XjINxR2+pdC/pruuwubfddlgKx+eYLvLukGNlHCZEn1xFi4u3d37d72ZQNViCbk0aVgmU
	12IPuBZWUDIFsSFUWpwx+FDbKuTBicuJOdnGdbH8vVV+ommk6yGDba4MB4f9d6mo/EDs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175887-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175887: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 18:31:00 +0000

flight 175887 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175887/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   18 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value of
    installing the QemuAcpiTableNotifyProtocol was not checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after the ACPI tables are successfully installed. This patch adds the
    log back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The mQemuAcpiHandle handle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices for
    storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The mChAcpiHandle handle is not needed for anything beyond the scope
    of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 19:06:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 19:06:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478060.741047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH8KM-00045c-IL; Sun, 15 Jan 2023 19:05:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478060.741047; Sun, 15 Jan 2023 19:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH8KM-00045V-FY; Sun, 15 Jan 2023 19:05:58 +0000
Received: by outflank-mailman (input) for mailman id 478060;
 Sun, 15 Jan 2023 19:05:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8KL-00045L-Qc; Sun, 15 Jan 2023 19:05:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8KL-0005TZ-O6; Sun, 15 Jan 2023 19:05:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8KL-0006KM-E9; Sun, 15 Jan 2023 19:05:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8KL-00085E-Di; Sun, 15 Jan 2023 19:05:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JG/7nylkl+V6DjWxIpoB83DGXsyBSpKX1H//Ju2zQFw=; b=1InijQMZP40K81b5fiqdPbjTnp
	yIYFD+URhwvS6raREWBaGs733qY6AfjQ5UZkwALdMhLSJYju2KvTnDOtC6wwTGxuUVuQGkEwR44jk
	PKflWvAU27gsSBjIUdlXFFUGJ6rPRGeZaSX2NNVRWkisBZdPLhW3m/rYcPY14d2fpLPg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175888-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175888: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 19:05:57 +0000

flight 175888 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175888/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   19 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value of
    installing the QemuAcpiTableNotifyProtocol was not checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after the ACPI tables are successfully installed. This patch adds the
    log back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The mQemuAcpiHandle handle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices for
    storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The mChAcpiHandle handle is not needed for anything beyond the scope
    of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org/show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 19:37:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 19:37:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478068.741058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH8om-0007Xz-0n; Sun, 15 Jan 2023 19:37:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478068.741058; Sun, 15 Jan 2023 19:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH8ol-0007Xs-U6; Sun, 15 Jan 2023 19:37:23 +0000
Received: by outflank-mailman (input) for mailman id 478068;
 Sun, 15 Jan 2023 19:37:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8ok-0007Xi-UB; Sun, 15 Jan 2023 19:37:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8ok-0006Io-Qy; Sun, 15 Jan 2023 19:37:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8ok-00079c-Ds; Sun, 15 Jan 2023 19:37:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8ok-0003SQ-DN; Sun, 15 Jan 2023 19:37:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xES5DGuhTFLeUZxDkq5WTQVrJ9/wsOrgyacprSkGM24=; b=IqI9SZGoEzthuEfW8BGZQ9SsGV
	XMJvn/nDX3X2zg3RrrcwlHJ8eFsVf99GKRmHCpmAh8vpCB7/LWrwkaiE/L26gVLRIEPx4yqHozZVX
	jWuZlwl1BpzpWcOrcOSlFIfvJjG7VYetQ5l2Dg0C2hjqfSHomiU5i41PP0onT/pplSv0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175859-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175859: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7c698440524117dca7534592db0e7f465ae4d0bb
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 19:37:22 +0000

flight 175859 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175859/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 build-amd64                   6 xen-build                fail REGR. vs. 173462
 build-amd64-xsm               6 xen-build                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 build-i386-xsm                6 xen-build                fail REGR. vs. 173462
 build-i386                    6 xen-build                fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd11-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd12-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7c698440524117dca7534592db0e7f465ae4d0bb
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  100 days
Failing since        173470  2022-10-08 06:21:34 Z   99 days  206 attempts
Testing same since   175859  2023-01-15 07:10:18 Z    0 days    1 attempts

------------------------------------------------------------
3366 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-freebsd11-amd64                             blocked 
 test-amd64-amd64-freebsd12-amd64                             blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 blocked 
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-amd64-xl-vhd                                      blocked 
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 514531 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 19:47:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 19:47:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478077.741070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH8yU-0000fa-3M; Sun, 15 Jan 2023 19:47:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478077.741070; Sun, 15 Jan 2023 19:47:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH8yU-0000fT-0Z; Sun, 15 Jan 2023 19:47:26 +0000
Received: by outflank-mailman (input) for mailman id 478077;
 Sun, 15 Jan 2023 19:47:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8yS-0000fJ-IL; Sun, 15 Jan 2023 19:47:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8yS-0006h0-Af; Sun, 15 Jan 2023 19:47:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8yR-0007SF-U8; Sun, 15 Jan 2023 19:47:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH8yR-0002V0-Tk; Sun, 15 Jan 2023 19:47:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CeZdf8X2MnsqnAEQMG819HtpPusWaul/2ArYO6GnsOg=; b=NHXxkW+RWuiah++S42ohCnCPk2
	Z36Gj05FsY8cA+XvP6C3gTXPS7z0wtfZA0JePHuBBbPovkZBVrbUssB18XmzCdud8Ru2F33q65P7J
	1n234csI8qruIDkgjiThkET2mtB3er5y075AeFxRdBXIsmXac+9kr8s2+jUJGOv9c6V0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175889-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175889: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 19:47:23 +0000

flight 175889 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175889/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   20 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at a
    wrong position. It should be called before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    return value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices for
    storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous: NULL
    protocol interfaces have been used repeatedly in edk2, since a protocol
    instance can exist in the protocol database with a NULL associated
    interface. Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 19:56:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 19:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478085.741080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH96f-00029J-U4; Sun, 15 Jan 2023 19:55:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478085.741080; Sun, 15 Jan 2023 19:55:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH96f-00029C-RK; Sun, 15 Jan 2023 19:55:53 +0000
Received: by outflank-mailman (input) for mailman id 478085;
 Sun, 15 Jan 2023 19:55:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH96e-00028z-Az; Sun, 15 Jan 2023 19:55:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH96e-0006sD-71; Sun, 15 Jan 2023 19:55:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pH96d-0007fL-QR; Sun, 15 Jan 2023 19:55:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pH96d-0005tz-Pz; Sun, 15 Jan 2023 19:55:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bkHI5uL70X5wDWBsRi32EsC2pOlruXVgphSocxwlFOk=; b=kCqaLl1VqfLrAN2zxPz0ma5TNA
	iLzRvuvVAR6yPxHHj0haHbBYwKJQJ7PwckP2WywW5Wk+7ipyAjeFF1LwTvHRXjAaxmPdas1J7gIV3
	DFW6AgjgGPoNJFiFKMEZwlIyX6uttS3pqYgPeXW4KGj0YS6rj/itn+A9k64O61m5BerE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175861-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175861: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 19:55:51 +0000

flight 175861 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175861/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175734
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175734
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175734
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    3 days
Failing since        175739  2023-01-12 09:38:44 Z    3 days    7 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    2 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
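
The hidden, default-on options described above would look roughly like the following Kconfig fragment. This is a sketch: the option names are from the commit, but the exact file layout and any help text are not shown in the message (a bool with no prompt string is invisible to the user, matching the commit's description).

```kconfig
config AMD_IOMMU
	bool
	depends on X86
	default y

config INTEL_IOMMU
	bool
	depends on X86
	default y
```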

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C
    environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 20:28:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 20:28:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478096.741108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH9bq-0005uZ-RZ; Sun, 15 Jan 2023 20:28:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478096.741108; Sun, 15 Jan 2023 20:28:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH9bq-0005uS-Ox; Sun, 15 Jan 2023 20:28:06 +0000
Received: by outflank-mailman (input) for mailman id 478096;
 Sun, 15 Jan 2023 20:28:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hCbC=5M=casper.srs.infradead.org=BATV+708f9baf63a8fc202797+7084+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pH9bn-0005fF-Uj
 for xen-devel@lists.xen.org; Sun, 15 Jan 2023 20:28:05 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1960c826-9513-11ed-b8d0-410ff93cb8f0;
 Sun, 15 Jan 2023 21:28:00 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pH9bj-0087DO-2j; Sun, 15 Jan 2023 20:27:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1960c826-9513-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=hma09GUeOo7cUxAt1yb4jq2ttpTMOL2/HqikVNdDUyU=; b=ix779t5rx2ljo0GSoIwBMqI3Rp
	OWtSaolAWth33HaIaWD3AnpEL8qegqJKX0izXE4LzOvJbpL2q30iMd6YS2QAB6Ei/NelMPWV15Cka
	uLbXor+xf30XtMcF08VOmroAMeL4zq6BEll0+8XgCwbf7EYZdRZz1vkfUiA2gOZw3w/DNYCl8kH45
	HxL4qdy7h6nim4STkmhMYD8iJWbBsfGtOkZQLGR5Z+D1aA6SbufljKwQAent9aoQUuEzJ7Ab7sdkR
	rCi8FW30V+GbtGMcfTbZ9woeyz+sWAqtKVde/yuzu+o0TrbThbRYuOdu5VitP3E51U5OWa27g4+zr
	HJaSIzWA==;
Message-ID: <b3e86a1c3c1b524bccdb71c837cb65fdf51495ff.camel@infradead.org>
Subject: Re: [patch V2 30/46] x86/xen: Wrap XEN MSI management into irqdomain
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  xen-devel <xen-devel@lists.xen.org>
Cc: Dimitri Sivanich <sivanich@hpe.com>, linux-hyperv@vger.kernel.org, Steve
 Wahl <steve.wahl@hpe.com>, linux-pci@vger.kernel.org, "K. Y. Srinivasan"
 <kys@microsoft.com>,  Dan Williams <dan.j.williams@intel.com>, Wei Liu
 <wei.liu@kernel.org>, Stephen Hemminger <sthemmin@microsoft.com>, Baolu Lu
 <baolu.lu@intel.com>, Marc Zyngier <maz@kernel.org>, x86@kernel.org, Jason
 Gunthorpe <jgg@mellanox.com>, Megha Dey <megha.dey@intel.com>,
 xen-devel@lists.xenproject.org, Kevin Tian <kevin.tian@intel.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>,  Haiyang Zhang
 <haiyangz@microsoft.com>, Alex Williamson <alex.williamson@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Bjorn Helgaas
 <bhelgaas@google.com>, Dave Jiang <dave.jiang@intel.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Jon Derrick <jonathan.derrick@intel.com>,
 Juergen Gross <jgross@suse.com>, Russ Anderson <rja@hpe.com>,  Greg
 Kroah-Hartman <gregkh@linuxfoundation.org>,
 iommu@lists.linux-foundation.org, Jacob Pan <jacob.jun.pan@intel.com>, 
 "Rafael J. Wysocki" <rafael@kernel.org>
Date: Sun, 15 Jan 2023 20:27:44 +0000
In-Reply-To: <55f581345df465a73d6469f44d3512d9ccac7ffc.camel@infradead.org>
References: <20200826111628.794979401@linutronix.de>
	 <20200826112333.622352798@linutronix.de>
	 <55f581345df465a73d6469f44d3512d9ccac7ffc.camel@infradead.org>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-fjKh0loIuJRGzJfL12nv"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-fjKh0loIuJRGzJfL12nv
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sun, 2023-01-15 at 14:12 +0000, David Woodhouse wrote:
> On Wed, 2020-08-26 at 13:16 +0200, Thomas Gleixner wrote:
> > From: Thomas Gleixner <tglx@linutronix.de>
> >
> > To allow utilizing the irq domain pointer in struct device it is
> > necessary to make XEN/MSI irq domain compatible.
> >
> > While the right solution would be to truly convert XEN to irq domains,
> > this is an exercise which is not possible for mere mortals with limited
> > XENology.
> >
> > Provide a plain irqdomain wrapper around XEN. While this is a blatant
> > violation of the irqdomain design, it's the only solution for a
> > XEN-ignorant person to make progress on the issue which triggered this
> > change.
> >
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Acked-by: Juergen Gross <jgross@suse.com>
>
> I think it broke MSI-X support, because xen_pci_msi_domain_info is
> lacking a .flags = MSI_FLAGS_PCI_MSIX?

Hm, I think it only actually *broke* with commit 99f3d27976 ("PCI/MSI:
Reject MSI-X early") from November last year. So 6.1 was OK and we have
time to fix it in 6.2.

Confirmed on real Xen at least that a Fedora 37 install with a 6.0.7
kernel works fine, then on upgrading to Rawhide's 6.2-rc3 it dies with

[   41.498694] ena 0000:00:03.0 (unnamed net_device) (uninitialized): Failed to enable MSI-X. irq_cnt -524
[   41.498705] ena 0000:00:03.0: Can not reserve msix vectors
[   41.498712] ena 0000:00:03.0: Failed to enable and set the admin interrupts

> > ---
> > Note: This is completely untested, but it compiles so it must be
> > perfect.
>
>
> I'm working on making it simple for you to test that, by hosting Xen
> HVM guests natively in qemu (under KVM¹).
>
> But I'm absolutely not going to try hacking on both guest and host side
> at the same time when I'm trying to ensure compatibility — that way
> lies madness.
>
> So for now I'm going to test qemu with older kernels, and maybe someone
> (Jürgen? can test MSI-X to PIRQ support under real Xen?) FWIW if I add
> the missing MSI_FLAGS_PCI_MSIX flag then under my qemu I get:
>
>  38:       3180          0  xen-pirq    -msi-x     ens4-rx-0
>  39:          0       3610  xen-pirq    -msi-x     ens4-tx-0
>  40:          1          0  xen-pirq    -msi-x     ens4
>
> But without the flags I get:
>
> [    8.464212] e1000e 0000:00:04.0 ens4: Failed to initialize MSI interrupts.  Falling back to legacy interrupts.
>
> ¹ https://lore.kernel.org/qemu-devel/20230110122042.1562155-1-dwmw2@infradead.org/


--=-fjKh0loIuJRGzJfL12nv--


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 20:28:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 20:28:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478095.741098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH9bp-0005fa-KA; Sun, 15 Jan 2023 20:28:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478095.741098; Sun, 15 Jan 2023 20:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pH9bp-0005fT-GS; Sun, 15 Jan 2023 20:28:05 +0000
Received: by outflank-mailman (input) for mailman id 478095;
 Sun, 15 Jan 2023 20:28:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hCbC=5M=casper.srs.infradead.org=BATV+708f9baf63a8fc202797+7084+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pH9bn-0005fE-U5
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 20:28:04 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1960c826-9513-11ed-b8d0-410ff93cb8f0;
 Sun, 15 Jan 2023 21:28:00 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pH9bj-0087DO-2j; Sun, 15 Jan 2023 20:27:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1960c826-9513-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=hma09GUeOo7cUxAt1yb4jq2ttpTMOL2/HqikVNdDUyU=; b=ix779t5rx2ljo0GSoIwBMqI3Rp
	OWtSaolAWth33HaIaWD3AnpEL8qegqJKX0izXE4LzOvJbpL2q30iMd6YS2QAB6Ei/NelMPWV15Cka
	uLbXor+xf30XtMcF08VOmroAMeL4zq6BEll0+8XgCwbf7EYZdRZz1vkfUiA2gOZw3w/DNYCl8kH45
	HxL4qdy7h6nim4STkmhMYD8iJWbBsfGtOkZQLGR5Z+D1aA6SbufljKwQAent9aoQUuEzJ7Ab7sdkR
	rCi8FW30V+GbtGMcfTbZ9woeyz+sWAqtKVde/yuzu+o0TrbThbRYuOdu5VitP3E51U5OWa27g4+zr
	HJaSIzWA==;
Message-ID: <b3e86a1c3c1b524bccdb71c837cb65fdf51495ff.camel@infradead.org>
Subject: Re: [patch V2 30/46] x86/xen: Wrap XEN MSI management into irqdomain
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  xen-devel <xen-devel@lists.xen.org>
Cc: Dimitri Sivanich <sivanich@hpe.com>, linux-hyperv@vger.kernel.org, Steve
 Wahl <steve.wahl@hpe.com>, linux-pci@vger.kernel.org, "K. Y. Srinivasan"
 <kys@microsoft.com>,  Dan Williams <dan.j.williams@intel.com>, Wei Liu
 <wei.liu@kernel.org>, Stephen Hemminger <sthemmin@microsoft.com>, Baolu Lu
 <baolu.lu@intel.com>, Marc Zyngier <maz@kernel.org>, x86@kernel.org, Jason
 Gunthorpe <jgg@mellanox.com>, Megha Dey <megha.dey@intel.com>,
 xen-devel@lists.xenproject.org, Kevin Tian <kevin.tian@intel.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>,  Haiyang Zhang
 <haiyangz@microsoft.com>, Alex Williamson <alex.williamson@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Bjorn Helgaas
 <bhelgaas@google.com>, Dave Jiang <dave.jiang@intel.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Jon Derrick <jonathan.derrick@intel.com>,
 Juergen Gross <jgross@suse.com>, Russ Anderson <rja@hpe.com>,  Greg
 Kroah-Hartman <gregkh@linuxfoundation.org>,
 iommu@lists.linux-foundation.org, Jacob Pan <jacob.jun.pan@intel.com>, 
 "Rafael J. Wysocki" <rafael@kernel.org>
Date: Sun, 15 Jan 2023 20:27:44 +0000
In-Reply-To: <55f581345df465a73d6469f44d3512d9ccac7ffc.camel@infradead.org>
References: <20200826111628.794979401@linutronix.de>
	 <20200826112333.622352798@linutronix.de>
	 <55f581345df465a73d6469f44d3512d9ccac7ffc.camel@infradead.org>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-fjKh0loIuJRGzJfL12nv"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-fjKh0loIuJRGzJfL12nv
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sun, 2023-01-15 at 14:12 +0000, David Woodhouse wrote:
> On Wed, 2020-08-26 at 13:16 +0200, Thomas Gleixner wrote:
> > From: Thomas Gleixner <tglx@linutronix.de>
> >=20
> > To allow utilizing the irq domain pointer in struct device it is necess=
ary
> > to make XEN/MSI irq domain compatible.
> >=20
> > While the right solution would be to truly convert XEN to irq domains, =
this
> > is an exercise which is not possible for mere mortals with limited XENo=
logy.
> >=20
> > Provide a plain irqdomain wrapper around XEN. While this is blatant
> > violation of the irqdomain design, it's the only solution for a XEN igo=
rant
> > person to make progress on the issue which triggered this change.
> >=20
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Acked-by: Juergen Gross <jgross@suse.com>
>=20
> I think it broke MSI-X support, because xen_pci_msi_domain_info is
> lacking a .flags =3D MSI_FLAGS_PCI_MSIX?

Hm, I think it only actually *broke* with commit 99f3d27976 ("PCI/MSI:
Reject MSI-X early") from November last year. So 6.1 was OK and we have
time to fix it in 6.2.

Confirmed on real Xen at least that a Fedora 37 install with a 6.0.7
kernel works fine, then on upgrading to Rawhide's 6.2-rc3 it dies with=20

[   41.498694] ena 0000:00:03.0 (unnamed net_device) (uninitialized): Faile=
d to enable MSI-X. irq_cnt -524
[   41.498705] ena 0000:00:03.0: Can not reserve msix vectors
[   41.498712] ena 0000:00:03.0: Failed to enable and set the admin interru=
pts

> > ---
> > Note: This is completely untested, but it compiles so it must be perfect.
>
>
> I'm working on making it simple for you to test that, by hosting Xen
> HVM guests natively in qemu (under KVM¹).
>
> But I'm absolutely not going to try hacking on both guest and host side
> at the same time when I'm trying to ensure compatibility — that way
> lies madness.
>
> So for now I'm going to test qemu with older kernels, and maybe someone
> (Jürgen? can test MSI-X to PIRQ support under real Xen?) FWIW if I add
> the missing MSI_FLAGS_PCI_MSIX flag then under my qemu I get:
>
>  38:       3180          0  xen-pirq    -msi-x     ens4-rx-0
>  39:          0       3610  xen-pirq    -msi-x     ens4-tx-0
>  40:          1          0  xen-pirq    -msi-x     ens4
>
> But without the flags I get:
>
> [    8.464212] e1000e 0000:00:04.0 ens4: Failed to initialize MSI interrupts.  Falling back to legacy interrupts.
>
> ¹ https://lore.kernel.org/qemu-devel/20230110122042.1562155-1-dwmw2@infradead.org/



From xen-devel-bounces@lists.xenproject.org Sun Jan 15 20:54:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 20:54:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478137.741145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHA1i-0002BW-Kf; Sun, 15 Jan 2023 20:54:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478137.741145; Sun, 15 Jan 2023 20:54:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHA1i-0002BP-Hy; Sun, 15 Jan 2023 20:54:50 +0000
Received: by outflank-mailman (input) for mailman id 478137;
 Sun, 15 Jan 2023 20:54:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHA1g-0002BF-V7; Sun, 15 Jan 2023 20:54:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHA1g-0008Lo-TV; Sun, 15 Jan 2023 20:54:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHA1g-0000kY-Jk; Sun, 15 Jan 2023 20:54:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHA1g-0001BU-JD; Sun, 15 Jan 2023 20:54:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j/wxHesimeIUV8fZ45K73C8DjRVe57xkzu6hIKEyuQ0=; b=Tii8wG2D+eAEPZPPqurBYom04R
	/4vg3I32zf625ZaM2sleVgwMqfqfYzG95nuqzLanxdW7MJSspZ8P9R+HxR/dOP4HnFkEWRRMn0j9u
	wRdACnXEENTE14mvy7eCs84sPLchPnHRWVbIYCMZm7y8HXI+wJMPMxYJ26VFg1BgHM+0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175892-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175892: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 20:54:48 +0000

flight 175892 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175892/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   21 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installs the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    return value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable will suffice
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 21:10:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 21:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478147.741163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHAGR-0003zs-1B; Sun, 15 Jan 2023 21:10:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478147.741163; Sun, 15 Jan 2023 21:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHAGQ-0003z3-Tz; Sun, 15 Jan 2023 21:10:02 +0000
Received: by outflank-mailman (input) for mailman id 478147;
 Sun, 15 Jan 2023 21:10:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHAGQ-0003o2-1r; Sun, 15 Jan 2023 21:10:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHAGP-0000C2-Vd; Sun, 15 Jan 2023 21:10:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHAGP-00016g-Pw; Sun, 15 Jan 2023 21:10:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHAGP-0004VN-PQ; Sun, 15 Jan 2023 21:10:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yVm0uHRXlAQL6KedBpjY87YdaAOBOBFi5jHLWW/ILHA=; b=ON6iWWKKLngsvVFRMva91yRJJA
	ANeZavsNYNZ9hwjz2ZFhwcHYc811aSJlIq12nmfnTSz+tKpTlgFk0lDDKW92ZoBiBhB8gyrZTRkte
	26BhrLTamp/d6HDfP5G+r1r7u1pjP1t/kPmWZN5e2qxfmrsslLe5z8p1/1YmqrPLczbA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175886-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175886: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:host-install(5):broken:heisenbug
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 21:10:01 +0000

flight 175886 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175886/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken
 build-amd64                   6 xen-build                fail REGR. vs. 175746

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           5 host-install(5)          broken pass in 175874

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl         15 migrate-support-check fail in 175874 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 175874 never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    3 days
Failing since        175748  2023-01-12 20:01:56 Z    3 days   10 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-step test-armhf-armhf-xl host-install(5)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header requires to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 21:30:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 21:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478157.741179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHAaA-0006zU-Ne; Sun, 15 Jan 2023 21:30:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478157.741179; Sun, 15 Jan 2023 21:30:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHAaA-0006zN-L1; Sun, 15 Jan 2023 21:30:26 +0000
Received: by outflank-mailman (input) for mailman id 478157;
 Sun, 15 Jan 2023 21:30:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHAa9-0006zD-H4; Sun, 15 Jan 2023 21:30:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHAa9-0000jh-Dn; Sun, 15 Jan 2023 21:30:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHAa9-0001Y2-6i; Sun, 15 Jan 2023 21:30:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHAa9-0003h5-6G; Sun, 15 Jan 2023 21:30:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LpWKcTNkCTqWwOevG8HQ4XVOLXQeM2wNnICEGjajvxU=; b=Ec3QtKTZBffGvhGvKGWfH4348i
	TxMBxY1Z7qPpPMqC6Fjkljen3lTVjtVYpQOkyEf/gMWK3axrC+hDdOSdTHY+//+dWqOwdRt5f22B2
	LROg1JNAqekb1ctZAVfFgP+orLKFJKmEPtZ1gkFKUq3I+ctq8sGxuH+SdTuj3AhFpufQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175897-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175897: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 21:30:25 +0000

flight 175897 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175897/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   22 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installs the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value of
    installing the QemuAcpiTableNotifyProtocol should be checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log message from
    InstallQemuFwCfgTables after ACPI tables are successfully installed.
    This patch adds the log back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 22:00:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 22:00:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478166.741190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHB2z-00021w-AV; Sun, 15 Jan 2023 22:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478166.741190; Sun, 15 Jan 2023 22:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHB2z-00021p-7c; Sun, 15 Jan 2023 22:00:13 +0000
Received: by outflank-mailman (input) for mailman id 478166;
 Sun, 15 Jan 2023 22:00:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHB2x-00021f-QL; Sun, 15 Jan 2023 22:00:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHB2x-0001Ni-LQ; Sun, 15 Jan 2023 22:00:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHB2x-0002C9-Bw; Sun, 15 Jan 2023 22:00:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHB2x-0003LE-BR; Sun, 15 Jan 2023 22:00:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qstFmdNJcYzMRoRxkXduNM7f1ZP2vNG5TeDrgTRiNrg=; b=CP4QedMRkUe5pYZdCNbvBY7bA4
	RIurHyV/wVL22sPH4x3Tab1lO2mzUV8HLSbgUBziUbPGIGLFxylCoCZVEn2KCKlqTWxcOJCeAU2sH
	SpBf7gX4OFren/4+wDbi16utv1nIMlyjqtb1BbFAt2dXWq6pN95MF0I0PfXmYuBmY53g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175899-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175899: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 22:00:11 +0000

flight 175899 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175899/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   23 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   17 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installs the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value of
    installing the QemuAcpiTableNotifyProtocol should be checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log message from
    InstallQemuFwCfgTables after ACPI tables are successfully installed.
    This patch adds the log back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 22:14:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 22:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478174.741202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHBGy-0003ac-KM; Sun, 15 Jan 2023 22:14:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478174.741202; Sun, 15 Jan 2023 22:14:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHBGy-0003aV-HC; Sun, 15 Jan 2023 22:14:40 +0000
Received: by outflank-mailman (input) for mailman id 478174;
 Sun, 15 Jan 2023 22:14:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hCbC=5M=casper.srs.infradead.org=BATV+708f9baf63a8fc202797+7084+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pHBGv-0003aN-Cc
 for xen-devel@lists.xenproject.org; Sun, 15 Jan 2023 22:14:38 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe05a741-9521-11ed-91b6-6bf2151ebd3b;
 Sun, 15 Jan 2023 23:14:35 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHBGr-008Am9-UN; Sun, 15 Jan 2023 22:14:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe05a741-9521-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=USOJ+cxlk1PyqjpZN81t13zkmrDPpT6u+SQIj14OMZE=; b=BsRaAFzKtc/sGee4NxKo+j1BWV
	mECSjylTlTXAhsO1+Rg9dRLzLeN9yhebdXuaQJVEdFgb7pdCn9eE347Ou2vseswC/OobK6Czyaquy
	ROqu/NxtbcGiKueFDffZQ7HUqQPoUIfzqXzy4SEzg5pUkUoGaUHLxAtCU88lQRmQn6g9lzNP7SFpv
	UfRq1mRUI7fr9ntUgBtbXIu6StNGgLcSUJSuAZtMjM+DPlG2X8z0YtzM5OSsBr7IRAWpjT3hxwst/
	wDEjRp+/TkQGeUNn9wwkvvXuT4lVRteAnWcfAExHEpZ3xjPa0003P2GhdKUsGqUnG1InNZ5XXxBk8
	5e0y1eFQ==;
Message-ID: <4bffa69a949bfdc92c4a18e5a1c3cbb3b94a0d32.camel@infradead.org>
Subject: [PATCH] x86/xen: Set MSI_FLAG_PCI_MSIX support in Xen MSI domain
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  xen-devel
 <xen-devel@lists.xenproject.org>, Juergen Gross <jgross@suse.com>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>,  linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>, Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>, Michael Ellerman
 <mpe@ellerman.id.au>, Christophe Leroy <christophe.leroy@csgroup.eu>, 
 linuxppc-dev@lists.ozlabs.org, "Ahmed S. Darwish" <darwi@linutronix.de>, 
 Reinette Chatre <reinette.chatre@intel.com>
Date: Sun, 15 Jan 2023 22:14:19 +0000
In-Reply-To: <20221111122015.631728309@linutronix.de>
References: <20221111120501.026511281@linutronix.de>
	 <20221111122015.631728309@linutronix.de>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-wFNK7/8gDbzPnO/xJG0z"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-wFNK7/8gDbzPnO/xJG0z
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The Xen MSI → PIRQ magic does support MSI-X, so advertise it.

(In fact it's better off with MSI-X than MSI, because it's actually
broken by design for 32-bit MSI, since it puts the high bits of the
PIRQ# into the high 32 bits of the MSI message address, instead of the
Extended Destination ID field, which is in bits 4-11.)

Strictly speaking, this really fixes the much older commit 2e4386eba0c0
("x86/xen: Wrap XEN MSI management into irqdomain"), which failed to set
the flag. But that never really mattered until __pci_enable_msix_range()
started to check and bail out early. So in 6.2-rc we see failures, e.g.
failing to bring up networking on an Amazon EC2 m4.16xlarge instance:

[   41.498694] ena 0000:00:03.0 (unnamed net_device) (uninitialized): Failed to enable MSI-X. irq_cnt -524
[   41.498705] ena 0000:00:03.0: Can not reserve msix vectors
[   41.498712] ena 0000:00:03.0: Failed to enable and set the admin interrupts

Side note: this is the first bug found, and the first patch tested, by
running Xen guests under QEMU/KVM instead of under actual Xen.

Fixes: 99f3d2797657 ("PCI/MSI: Reject MSI-X early")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/pci/xen.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index b94f727251b6..790550479831 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -433,6 +433,7 @@ static struct msi_domain_ops xen_pci_msi_domain_ops = {
 };
 
 static struct msi_domain_info xen_pci_msi_domain_info = {
+	.flags			= MSI_FLAG_PCI_MSIX,
 	.ops			= &xen_pci_msi_domain_ops,
 };
 
-- 
2.34.1
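[Editorial aside, not part of the submitted patch: the address-layout problem described above can be sketched in plain C. The macros and helper names below are hypothetical and only assume the general x86 MSI address format — low 8 destination bits in address bits 19:12, Extended Destination ID in bits 11:4 — they are not the kernel's actual Xen PIRQ encoding.]

```c
/* Sketch: two ways to encode an ID wider than 8 bits into an MSI address.
 * Assumed layout: base 0xfee00000, destination ID in bits 19:12,
 * Extended Destination ID in bits 11:4. Names are illustrative only. */
#include <stdint.h>
#include <assert.h>

#define MSI_ADDR_BASE        0xfee00000u
#define MSI_ADDR_DESTID(x)   (((uint32_t)(x) & 0xffu) << 12)        /* bits 19:12 */
#define MSI_ADDR_EXTDEST(x)  ((((uint32_t)(x) >> 8) & 0xffu) << 4)  /* bits 11:4  */

/* Correct form: high bits of the ID go into the Extended Destination ID
 * field, so the whole address still fits in 32 bits. */
static uint64_t msi_addr_ext_dest(uint32_t id)
{
    return MSI_ADDR_BASE | MSI_ADDR_DESTID(id) | MSI_ADDR_EXTDEST(id);
}

/* Broken-for-32-bit form: high bits of the ID land in the upper 32 bits
 * of the message address, which plain 32-bit MSI cannot express. */
static uint64_t msi_addr_high_bits(uint32_t id)
{
    return ((uint64_t)(id >> 8) << 32) | MSI_ADDR_BASE | MSI_ADDR_DESTID(id);
}
```

For IDs below 256 both encodings agree; once the ID needs more than 8 bits, only the first still fits a 32-bit address, which is why the scheme described above is better off with 64-bit-capable MSI-X.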





--=-wFNK7/8gDbzPnO/xJG0z--


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 22:55:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 22:55:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478181.741213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHBtt-0007r4-KL; Sun, 15 Jan 2023 22:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478181.741213; Sun, 15 Jan 2023 22:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHBtt-0007qx-Hj; Sun, 15 Jan 2023 22:54:53 +0000
Received: by outflank-mailman (input) for mailman id 478181;
 Sun, 15 Jan 2023 22:54:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHBtr-0007qn-Q0; Sun, 15 Jan 2023 22:54:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHBtr-0002YD-MU; Sun, 15 Jan 2023 22:54:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHBtr-0003Nt-AA; Sun, 15 Jan 2023 22:54:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHBtr-00013e-9g; Sun, 15 Jan 2023 22:54:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NUqapgtLT87IaF/VNAmMA1QEml9nKGOculdgN7tggGQ=; b=z7+eMGoRPRSHzkFtKxL2zo+A3i
	6ITHm2K0AlMzsrAR4VtehiVNrh/OF1hI6pHxXYltsRPLFOy2N1xtmNanFF0ibk7HzrzV+apFPeJfP
	3R8VBEU1isK0KWCiiQ9g+nSNWjPmrVSNxxq8e2PbFqIS8gsgd5stWln2jLWQikzs6+WQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175901-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175901: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 22:54:51 +0000

flight 175901 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175901/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   24 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   18 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be installed before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value of
    installing the QemuAcpiTableNotifyProtocol was not checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log message from
    InstallQemuFwCfgTables that is emitted after the ACPI tables are
    successfully installed. This patch adds the log back after all
    operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 23:43:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 23:43:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478190.741224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHCep-0004kG-DQ; Sun, 15 Jan 2023 23:43:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478190.741224; Sun, 15 Jan 2023 23:43:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHCep-0004k9-8o; Sun, 15 Jan 2023 23:43:23 +0000
Received: by outflank-mailman (input) for mailman id 478190;
 Sun, 15 Jan 2023 23:43:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHCeo-0004jz-MX; Sun, 15 Jan 2023 23:43:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHCeo-0003Xe-Ik; Sun, 15 Jan 2023 23:43:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHCeo-0004YJ-9D; Sun, 15 Jan 2023 23:43:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHCeo-0007Om-8l; Sun, 15 Jan 2023 23:43:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/7N+DgHJG5MhFMxkkgVKro7gD3vVYYrjTv4ncnjnmks=; b=oVbvicItOnEU86/Q9ti2dFiREA
	Drwp6Gtk6G4qsXlOU4Z428y+u19GMNIrkqSr/7+fqflCefdTT+zTah+zdWjly20X+JESGkq/36bI7
	WV11hBsTkhHMuADW5VdXcVWFDr4Lez6l6jTyiSDISbAOE12B3qzPmSpTC/y6xqy9VyRc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175902-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175902: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=7cd55f300915af8759bdf1687af7e3a7f4d4f13c
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 23:43:22 +0000

flight 175902 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175902/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    0 days   25 attempts
Testing same since   175871  2023-01-15 10:40:40 Z    0 days   19 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installs the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    return value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle in mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle in mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent from secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 15 23:56:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Jan 2023 23:56:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478198.741235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHCrd-0006Hv-If; Sun, 15 Jan 2023 23:56:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478198.741235; Sun, 15 Jan 2023 23:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHCrd-0006Hn-DQ; Sun, 15 Jan 2023 23:56:37 +0000
Received: by outflank-mailman (input) for mailman id 478198;
 Sun, 15 Jan 2023 23:56:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHCrc-0006Hc-FR; Sun, 15 Jan 2023 23:56:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHCrc-00041D-D7; Sun, 15 Jan 2023 23:56:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHCrc-0004rt-5z; Sun, 15 Jan 2023 23:56:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHCrc-0002s3-5R; Sun, 15 Jan 2023 23:56:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sDr52VtbUMwkbYPqdrJPr4fDtv92j4B1nS8DUKWaZWY=; b=4KjsISgJVMRakpi3hmMlJI6GoD
	wBS2g/B459C0Rmrl33bjpyzLTtrOdTPWrsxCp9k91h5nMGw3v2Hi8+JOidrGME1Y1qtwfmkbujqaB
	2pYCaasX+hqWJpu01Pxem0Jr3fvlyIibwGAMxFVojdqdP667xZQyle6WlSVFbX/Nqr1A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175900: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Jan 2023 23:56:36 +0000

flight 175900 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175900/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746
 build-armhf                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    3 days
Failing since        175748  2023-01-12 20:01:56 Z    3 days   11 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool, which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping, but it is below a page size (i.e. 4KB), so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e., on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 00:44:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 00:44:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478208.741252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHDbO-0003kN-To; Mon, 16 Jan 2023 00:43:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478208.741252; Mon, 16 Jan 2023 00:43:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHDbO-0003kG-Ql; Mon, 16 Jan 2023 00:43:54 +0000
Received: by outflank-mailman (input) for mailman id 478208;
 Mon, 16 Jan 2023 00:43:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHDbM-0003k6-Oj; Mon, 16 Jan 2023 00:43:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHDbM-0005f2-N5; Mon, 16 Jan 2023 00:43:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHDbM-0005yq-8c; Mon, 16 Jan 2023 00:43:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHDbM-0007C1-8C; Mon, 16 Jan 2023 00:43:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mioe6hGoDLtHgBURUAS2Er08vwwy3837O47z8Kn8HCo=; b=4mOXnoxVMulhR9uX6SOm82uHCG
	Gh/KJLE54ZZMnSZSKbLjPoHnZoSmjrGi3gGlVD0r5qtWPd2cKVeGxu+nE8kvVMdD3LbynLXOWF7Uq
	RxbxkVIKd3eR92Q6mEQ3DscG1Y/jvvMjHpCIOZN/RPYZKoP4i7K4yTeQBmZS7tUMM8J8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175904-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175904: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 00:43:52 +0000

flight 175904 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175904/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746
 build-armhf                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    3 days
Failing since        175748  2023-01-12 20:01:56 Z    3 days   12 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool, which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping, but it is below a page size (i.e. 4KB), so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e., on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 01:11:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 01:11:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478219.741266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHE25-0005OP-Du; Mon, 16 Jan 2023 01:11:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478219.741266; Mon, 16 Jan 2023 01:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHE25-0005OI-9s; Mon, 16 Jan 2023 01:11:29 +0000
Received: by outflank-mailman (input) for mailman id 478219;
 Mon, 16 Jan 2023 01:11:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHE24-0005O8-NK; Mon, 16 Jan 2023 01:11:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHE24-0004hd-JQ; Mon, 16 Jan 2023 01:11:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHE24-0006bL-9b; Mon, 16 Jan 2023 01:11:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHE24-0005iG-9B; Mon, 16 Jan 2023 01:11:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jf6lYvRXk1aIT3e4fRswvdyQf8M2Su1PAsWsqMP33Qw=; b=3SD2VZaJ3E/zHdHkhaUOCDxbVy
	7yEVdI+78ojPQ9PoKwBRUJmGY3j3Q7UXvIOONrSSiHS8CCbz38fRrArxetGfTpIpu646GYe8l15uu
	vCWq7ZpY9i3xBudYR7LKucjLc9OcXVeq1v4T3qgMwCdMggMbCB1wOq3IGhPD6XX0BeNw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175880-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175880: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:host-install(5):broken:regression
    qemu-mainline:test-armhf-armhf-xl-multivcpu:host-install(5):broken:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:host-install(5):broken:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:host-install(5):broken:allowable
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=886fb67020e32ce6a2cf7049c6f017acf1f0d69a
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 01:11:28 +0000

flight 175880 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175880/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-qcow2  <job status>                 broken
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl-multivcpu   <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)      broken REGR. vs. 175743
 test-armhf-armhf-xl-vhd       5 host-install(5)        broken REGR. vs. 175743
 test-armhf-armhf-xl-multivcpu  5 host-install(5)       broken REGR. vs. 175743
 test-armhf-armhf-libvirt-raw  5 host-install(5)        broken REGR. vs. 175743
 build-amd64                   6 xen-build                fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      5 host-install(5)        broken REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175743
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    3 days
Failing since        175750  2023-01-13 06:38:52 Z    2 days    6 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    1 day     5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     broken  
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      broken  
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)

Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 01:34:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 01:34:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478227.741279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHENz-0007si-9d; Mon, 16 Jan 2023 01:34:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478227.741279; Mon, 16 Jan 2023 01:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHENz-0007sb-6C; Mon, 16 Jan 2023 01:34:07 +0000
Received: by outflank-mailman (input) for mailman id 478227;
 Mon, 16 Jan 2023 01:34:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHENx-0007sR-N1; Mon, 16 Jan 2023 01:34:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHENx-0005Nq-J0; Mon, 16 Jan 2023 01:34:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHENx-000791-5b; Mon, 16 Jan 2023 01:34:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHENx-0003Mc-5A; Mon, 16 Jan 2023 01:34:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sR2RmCsIuFB1fySIMF/hE4z6c5d/dtW7y8kpzrsrlCg=; b=c/NYsiZI7kEtU9mMz9SmlIGnLo
	2ACz6ZBPJVq0ncdG2oQ48F6PWF46UVwKDmJYBjCUyQyuy/rBF1g7K6cfKwxzxwyZ2WaJ6MzE37xG0
	wPrS3eIJHckWml3zwRO6aNHibk1c9zxlLZizLMvVQpCaqo7yooOtfdc+1GFhv7Pgh/58=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175890-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175890: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:build-armhf:xen-build:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 01:34:05 +0000

flight 175890 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175890/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734
 build-armhf                   6 xen-build                fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    3 days
Failing since        175739  2023-01-12 09:38:44 Z    3 days    8 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    2 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
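A minimal sketch of the shape of such a hash (constants and names here are illustrative stand-ins, not Xen's actual shadow code): the loop is bounded by the number of meaningful bytes in a frame number rather than sizeof(unsigned long), and bytes are extracted by shifting instead of aliasing the value through a stack array of unsigned char.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical stand-ins: 52 physical address bits and 4KiB pages mean
 * a frame number fits in 40 bits, i.e. 5 bytes instead of 8.
 */
#define PADDR_BITS   52
#define PAGE_SHIFT   12
#define HASH_BUCKETS 251

static unsigned int frame_hash(unsigned long n)
{
    unsigned int i, k = 0;

    /* Iterate only over the bytes that can be non-zero. */
    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++, n >>= 8 )
        k = (uint8_t)n + (k << 6) + (k << 16) - k;

    return k % HASH_BUCKETS;
}
```

With these constants the bound evaluates to 5 iterations at compile time, matching the 8-to-5 reduction described above.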

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
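The idea can be sketched as follows (types and names are simplified stand-ins for Xen's typesafe mfn_t, not the real implementation): when a slot is known to hold either a valid MFN or INVALID_MFN, a plain comparison suffices in place of the costlier mfn_valid() lookup.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-in for Xen's typesafe mfn_t; illustrative only. */
typedef struct { uint64_t m; } mfn_t;

#define INVALID_MFN ((mfn_t){ ~0ULL })

static bool mfn_eq(mfn_t a, mfn_t b) { return a.m == b.m; }

/*
 * For arrays whose slots only ever hold a valid MFN or INVALID_MFN,
 * a direct comparison against INVALID_MFN replaces mfn_valid().
 */
static bool slot_in_use(mfn_t smfn)
{
    return !mfn_eq(smfn, INVALID_MFN);
}
```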

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
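A sketch of what such hidden, default-on options look like in Kconfig (illustrative fragment; exact file placement and help text are assumptions). With no prompt string, the options are invisible to the user, and `def_bool y` under `depends on X86` keeps them enabled on x86 builds:

```
config AMD_IOMMU
	def_bool y
	depends on X86

config INTEL_IOMMU
	def_bool y
	depends on X86
```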

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    This patch introduces and sets up a stack in order to enter the C
    environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 01:44:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 01:44:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478235.741294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHEYN-00011I-K1; Mon, 16 Jan 2023 01:44:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478235.741294; Mon, 16 Jan 2023 01:44:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHEYN-00011A-Fh; Mon, 16 Jan 2023 01:44:51 +0000
Received: by outflank-mailman (input) for mailman id 478235;
 Mon, 16 Jan 2023 01:44:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHEYM-00010q-H7; Mon, 16 Jan 2023 01:44:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHEYM-0005eV-Fb; Mon, 16 Jan 2023 01:44:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHEYM-0007Mz-A8; Mon, 16 Jan 2023 01:44:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHEYM-00083v-9o; Mon, 16 Jan 2023 01:44:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/h51pP4/vfZ1EqT9Swwtg72valK8CVUJRPAJNpEAacs=; b=HrcYiUqBUaVredfdMzavDu7ADu
	JztOx/Pxo+l2/qChez/w06hJEaZ/viJO6oejInm4jlRb2bkZJIL5thl7ased/2rxY72p/MbTE0hw6
	vhKZYg+KQ4JZl1RjuTZj3jvphwPG1q1gQHJeyhQB0YAsKOYqNXUvsARX7L45kRDUwQbs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175906-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175906: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 01:44:50 +0000

flight 175906 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175906/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746
 build-armhf                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    3 days
Failing since        175748  2023-01-12 20:01:56 Z    3 days   13 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we only check that some part of .text.header is covered
    by the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB), so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all, i.e. on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
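The described fallback behaviour can be modelled as below (an illustrative sketch only; the names, structure, and simplification to a single returned value are assumptions, not Xen's actual vmx.c handler):

```c
#include <assert.h>
#include <stdint.h>

#define DBG_CTL_LBR (1ULL << 0)

struct lbr_info { unsigned int count; };

/* NULL models a CPU (e.g. SPR and later) without model-specific LBR. */
static const struct lbr_info *model_specific_lbr = 0;

/*
 * Without model-specific LBR, MSR_DBG_CTL.LBR is implemented by
 * discarding writes; reads then always see the bit as 0, and the
 * domain_crash() fallback is never reached.
 */
static uint64_t wrmsr_dbg_ctl(uint64_t val)
{
    if ( !model_specific_lbr )
        val &= ~DBG_CTL_LBR;

    return val; /* value that would be stored for later reads */
}
```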

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRSMR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removes the final instances of obfuscation via the
    DO() macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
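The before/after shape of the change can be sketched as follows (field names follow the public header, but treat the exact layout here as illustrative): where "unsigned int" and uint32_t coincide, the two layouts are identical, which is why the ABI change is only technical.

```c
#include <assert.h>
#include <stdint.h>

/* Before: the ABI depends on the toolchain's "unsigned int". */
struct xen_feature_info_old {
    unsigned int submap_idx;
    unsigned int submap;
};

/* After: explicitly fixed-width, so native and compat layouts are
 * identical by construction and no compat translation is needed. */
struct xen_feature_info_new {
    uint32_t submap_idx;
    uint32_t submap;
};
```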
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 01:58:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 01:58:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478242.741305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHElp-0002bw-PX; Mon, 16 Jan 2023 01:58:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478242.741305; Mon, 16 Jan 2023 01:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHElp-0002bp-Ld; Mon, 16 Jan 2023 01:58:45 +0000
Received: by outflank-mailman (input) for mailman id 478242;
 Mon, 16 Jan 2023 01:58:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wes/=5N=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pHElo-0002bj-TP
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 01:58:44 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 4d1e8b59-9541-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 02:58:42 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C24ADAD7;
 Sun, 15 Jan 2023 17:59:23 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 5063D3F71A;
 Sun, 15 Jan 2023 17:58:39 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d1e8b59-9541-11ed-91b6-6bf2151ebd3b
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 0/3] P2M improvements for Arm
Date: Mon, 16 Jan 2023 09:58:17 +0800
Message-Id: <20230116015820.1269387-1-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is some clean-up/improvement work that can be done in the
Arm P2M code, triggered by [1] and [2]. These issues were found during
the 4.17 code freeze period, so they were not fixed at that time.
Therefore, do the follow-ups here.

Patch#1 addresses one comment in [1]. It was sent earlier and reviewed
once. Pick the updated version, i.e. "[PATCH v2] xen/arm: Reduce
redundant clear root pages when teardown p2m", into this series.

Patch#2 and #3 address the comment in [2], following the discussion
of two possible options.

[1] https://lore.kernel.org/xen-devel/a947e0b4-8f76-cea6-893f-abf30ff95e0d@xen.org/
[2] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/

Henry Wang (3):
  xen/arm: Reduce redundant clear root pages when teardown p2m
  xen/arm: Defer GICv2 CPU interface mapping until the first access
  xen/arm: Clean-up in p2m_init() and p2m_final_teardown()

 xen/arch/arm/domain.c           | 12 ++++++++
 xen/arch/arm/include/asm/p2m.h  |  5 +--
 xen/arch/arm/include/asm/vgic.h |  2 ++
 xen/arch/arm/p2m.c              | 54 +++++++++------------------------
 xen/arch/arm/traps.c            | 19 ++++++++++--
 xen/arch/arm/vgic-v2.c          | 25 ++++-----------
 6 files changed, 52 insertions(+), 65 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 01:58:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 01:58:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478244.741316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHEm3-0002v4-0o; Mon, 16 Jan 2023 01:58:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478244.741316; Mon, 16 Jan 2023 01:58:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHEm2-0002ux-Tu; Mon, 16 Jan 2023 01:58:58 +0000
Received: by outflank-mailman (input) for mailman id 478244;
 Mon, 16 Jan 2023 01:58:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wes/=5N=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pHEm1-0002to-IH
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 01:58:57 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 53bf5d3b-9541-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 02:58:53 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 88F71AD7;
 Sun, 15 Jan 2023 17:59:34 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 215233F71A;
 Sun, 15 Jan 2023 17:58:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53bf5d3b-9541-11ed-b8d0-410ff93cb8f0
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/3] xen/arm: Reduce redundant clear root pages when teardown p2m
Date: Mon, 16 Jan 2023 09:58:18 +0800
Message-Id: <20230116015820.1269387-2-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230116015820.1269387-1-Henry.Wang@arm.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, the p2m for a domain is torn down from two paths:
(1) The normal path when a domain is destroyed.
(2) The arch_domain_destroy() in the failure path of domain creation.

When tearing down the p2m from (1), clearing and cleaning the root
only needs to be done once, rather than on every call of p2m_teardown().
If the p2m teardown comes from (2), clearing and cleaning the root
is unnecessary because the domain was never scheduled.

Therefore, this patch introduces a helper `p2m_clear_root_pages()` to
do the clearing and cleaning of the root, and moves this logic out of
p2m_teardown(). With this movement, the `page_list_empty(&p2m->pages)`
check can be dropped.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
Was: [PATCH v2] xen/arm: Reduce redundant clear root pages when
teardown p2m. Picked into this series with these changes from v1:
1. Introduce a new PROGRESS for p2m_clear_root_pages() to avoid
   calling it multiple times when p2m_teardown() is preempted.
2. Move p2m_force_tlb_flush_sync() to p2m_clear_root_pages().
---
 xen/arch/arm/domain.c          | 12 ++++++++++++
 xen/arch/arm/include/asm/p2m.h |  1 +
 xen/arch/arm/p2m.c             | 34 ++++++++++++++--------------------
 3 files changed, 27 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 99577adb6c..961dab9166 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -959,6 +959,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m_root,
     PROG_p2m,
     PROG_p2m_pool,
     PROG_done,
@@ -1021,6 +1022,17 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_root):
+        /*
+         * We are about to free the intermediate page-tables, so clear the
+         * root to prevent any walk to use them.
+         * The domain will not be scheduled anymore, so in theory we should
+         * not need to flush the TLBs. Do it for safety purpose.
+         * Note that all the devices have already been de-assigned. So we don't
+         * need to flush the IOMMU TLB here.
+         */
+        p2m_clear_root_pages(&d->arch.p2m);
+
     PROGRESS(p2m):
         ret = p2m_teardown(d, true);
         if ( ret )
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 91df922e1c..bf5183e53a 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -281,6 +281,7 @@ int p2m_set_entry(struct p2m_domain *p2m,
 
 bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn);
 
+void p2m_clear_root_pages(struct p2m_domain *p2m);
 void p2m_invalidate_root(struct p2m_domain *p2m);
 
 /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 948f199d84..7de7d822e9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1314,6 +1314,20 @@ static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn)
     p2m->need_flush = true;
 }
 
+void p2m_clear_root_pages(struct p2m_domain *p2m)
+{
+    unsigned int i;
+
+    p2m_write_lock(p2m);
+
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    p2m_force_tlb_flush_sync(p2m);
+
+    p2m_write_unlock(p2m);
+}
+
 /*
  * Invalidate all entries in the root page-tables. This is
  * useful to get fault on entry and do an action.
@@ -1698,30 +1712,10 @@ int p2m_teardown(struct domain *d, bool allow_preemption)
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
     struct page_info *pg;
-    unsigned int i;
     int rc = 0;
 
-    if ( page_list_empty(&p2m->pages) )
-        return 0;
-
     p2m_write_lock(p2m);
 
-    /*
-     * We are about to free the intermediate page-tables, so clear the
-     * root to prevent any walk to use them.
-     */
-    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
-        clear_and_clean_page(p2m->root + i);
-
-    /*
-     * The domain will not be scheduled anymore, so in theory we should
-     * not need to flush the TLBs. Do it for safety purpose.
-     *
-     * Note that all the devices have already been de-assigned. So we don't
-     * need to flush the IOMMU TLB here.
-     */
-    p2m_force_tlb_flush_sync(p2m);
-
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
         p2m_free_page(p2m->domain, pg);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 01:59:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 01:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478246.741327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHEm6-0003FH-8P; Mon, 16 Jan 2023 01:59:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478246.741327; Mon, 16 Jan 2023 01:59:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHEm6-0003F5-4c; Mon, 16 Jan 2023 01:59:02 +0000
Received: by outflank-mailman (input) for mailman id 478246;
 Mon, 16 Jan 2023 01:59:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wes/=5N=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pHEm4-0002to-OA
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 01:59:00 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 56ef3974-9541-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 02:58:58 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 105BCAD7;
 Sun, 15 Jan 2023 17:59:40 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9C36C3F71A;
 Sun, 15 Jan 2023 17:58:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56ef3974-9541-11ed-b8d0-410ff93cb8f0
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the first access
Date: Mon, 16 Jan 2023 09:58:19 +0800
Message-Id: <20230116015820.1269387-3-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230116015820.1269387-1-Henry.Wang@arm.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, the mapping of the GICv2 CPU interface is created in
arch_domain_create(). This causes some trouble in populating and
freeing the domain's P2M pages pool. For example, 16 P2M pages are
required by default in p2m_init() to cope with the P2M mapping of
the 8KB GICv2 CPU interface area, and these 16 P2M pages complicate
P2M destruction in the failure path of arch_domain_create().

As per discussion in [1], similarly to the MMIO access for ACPI, this
patch defers the GICv2 CPU interface mapping until the first MMIO
access. This is achieved by moving the GICv2 CPU interface mapping
code from vgic_v2_domain_init() to the stage-2 data abort trap handling
code. The original CPU interface size and virtual CPU interface base
address are now saved in `struct vgic_dist` instead of in local
variables of vgic_v2_domain_init().

Note that the GICv2 changes introduced by this patch are not applied to
the "New vGIC" implementation, as the "New vGIC" is not used. Also,
since the hardware domain (Dom0) has an unlimited-size P2M pool,
gicv2_map_hwdom_extra_mappings() is not touched by this patch.

[1] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/include/asm/vgic.h |  2 ++
 xen/arch/arm/traps.c            | 19 ++++++++++++++++---
 xen/arch/arm/vgic-v2.c          | 25 ++++++-------------------
 3 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/include/asm/vgic.h b/xen/arch/arm/include/asm/vgic.h
index 3d44868039..1d37c291e1 100644
--- a/xen/arch/arm/include/asm/vgic.h
+++ b/xen/arch/arm/include/asm/vgic.h
@@ -153,6 +153,8 @@ struct vgic_dist {
     /* Base address for guest GIC */
     paddr_t dbase; /* Distributor base address */
     paddr_t cbase; /* CPU interface base address */
+    paddr_t csize; /* CPU interface size */
+    paddr_t vbase; /* virtual CPU interface base address */
 #ifdef CONFIG_GICV3
     /* GIC V3 addressing */
     /* List of contiguous occupied by the redistributors */
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 061c92acbd..d98f166050 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1787,9 +1787,12 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
 }
 
 /*
- * When using ACPI, most of the MMIO regions will be mapped on-demand
- * in stage-2 page tables for the hardware domain because Xen is not
- * able to know from the EFI memory map the MMIO regions.
+ * Try to map the MMIO regions for some special cases:
+ * 1. When using ACPI, most of the MMIO regions will be mapped on-demand
+ *    in stage-2 page tables for the hardware domain because Xen is not
+ *    able to know from the EFI memory map the MMIO regions.
+ * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
+ *    on the first access of the MMIO region.
  */
 static bool try_map_mmio(gfn_t gfn)
 {
@@ -1798,6 +1801,16 @@ static bool try_map_mmio(gfn_t gfn)
     /* For the hardware domain, all MMIOs are mapped with GFN == MFN */
     mfn_t mfn = _mfn(gfn_x(gfn));
 
+    /*
+     * Map the GICv2 virtual cpu interface in the gic cpu interface
+     * region of the guest on the first access of the MMIO region.
+     */
+    if ( d->arch.vgic.version == GIC_V2 &&
+         gfn_x(gfn) == gfn_x(gaddr_to_gfn(d->arch.vgic.cbase)) )
+        return !map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
+                                 d->arch.vgic.csize / PAGE_SIZE,
+                                 maddr_to_mfn(d->arch.vgic.vbase));
+
     /*
      * Device-Tree should already have everything mapped when building
      * the hardware domain.
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 0026cb4360..21e14a5a6f 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -644,10 +644,6 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
 
 static int vgic_v2_domain_init(struct domain *d)
 {
-    int ret;
-    paddr_t csize;
-    paddr_t vbase;
-
     /*
      * The hardware domain and direct-mapped domain both get the hardware
      * address.
@@ -667,8 +663,8 @@ static int vgic_v2_domain_init(struct domain *d)
          * aligned to PAGE_SIZE.
          */
         d->arch.vgic.cbase = vgic_v2_hw.cbase;
-        csize = vgic_v2_hw.csize;
-        vbase = vgic_v2_hw.vbase;
+        d->arch.vgic.csize = vgic_v2_hw.csize;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase;
     }
     else if ( is_domain_direct_mapped(d) )
     {
@@ -683,8 +679,8 @@ static int vgic_v2_domain_init(struct domain *d)
          */
         d->arch.vgic.dbase = vgic_v2_hw.dbase;
         d->arch.vgic.cbase = vgic_v2_hw.cbase;
-        csize = GUEST_GICC_SIZE;
-        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+        d->arch.vgic.csize = GUEST_GICC_SIZE;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
     else
     {
@@ -697,19 +693,10 @@ static int vgic_v2_domain_init(struct domain *d)
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
         d->arch.vgic.cbase = GUEST_GICC_BASE;
-        csize = GUEST_GICC_SIZE;
-        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+        d->arch.vgic.csize = GUEST_GICC_SIZE;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
 
-    /*
-     * Map the gic virtual cpu interface in the gic cpu interface
-     * region of the guest.
-     */
-    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
-                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
-    if ( ret )
-        return ret;
-
     register_mmio_handler(d, &vgic_v2_distr_mmio_handler, d->arch.vgic.dbase,
                           PAGE_SIZE, NULL);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 01:59:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 01:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478247.741338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHEm9-0003ae-Hp; Mon, 16 Jan 2023 01:59:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478247.741338; Mon, 16 Jan 2023 01:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHEm9-0003aT-EP; Mon, 16 Jan 2023 01:59:05 +0000
Received: by outflank-mailman (input) for mailman id 478247;
 Mon, 16 Jan 2023 01:59:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wes/=5N=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pHEm8-0002bj-22
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 01:59:04 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 597b134c-9541-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 02:59:02 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 85C51AD7;
 Sun, 15 Jan 2023 17:59:44 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1D86A3F71A;
 Sun, 15 Jan 2023 17:58:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 597b134c-9541-11ed-91b6-6bf2151ebd3b
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Wei Chen <wei.chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and p2m_final_teardown()
Date: Mon, 16 Jan 2023 09:58:20 +0800
Message-Id: <20230116015820.1269387-4-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230116015820.1269387-1-Henry.Wang@arm.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With the change in the previous patch, the initial 16 pages in the P2M
pool are no longer necessary. Drop them to simplify the code.

Also, the call to p2m_teardown() from arch_domain_destroy() is no longer
necessary since the P2M allocation was moved out of
arch_domain_create(). Drop the code and the above in-code comment
mentioning it.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
I am not entirely sure if I should also drop the "TODO" on top of
p2m_set_entry(): although we are sure that no P2M pages are populated
at the domain_create() stage now, we are not sure if anyone will add
more in the future... Any comments?
---
 xen/arch/arm/include/asm/p2m.h |  4 ----
 xen/arch/arm/p2m.c             | 20 +-------------------
 2 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index bf5183e53a..cf06d3cc21 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -200,10 +200,6 @@ int p2m_init(struct domain *d);
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
- *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
- *  free the P2M when failures happen in the domain creation with P2M pages
- *  already in use. In this case p2m_teardown() is called non-preemptively and
- *  p2m_teardown() will always return 0.
  */
 int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7de7d822e9..d41a316d18 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1744,13 +1744,9 @@ void p2m_final_teardown(struct domain *d)
     /*
      * No need to call relinquish_p2m_mapping() here because
      * p2m_final_teardown() is called either after domain_relinquish_resources()
-     * where relinquish_p2m_mapping() has been called, or from failure path of
-     * domain_create()/arch_domain_create() where mappings that require
-     * p2m_put_l3_page() should never be created. For the latter case, also see
-     * comment on top of the p2m_set_entry() for more info.
+     * where relinquish_p2m_mapping() has been called.
      */
 
-    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
 
     while ( p2m_teardown_allocation(d) == -ERESTART )
@@ -1821,20 +1817,6 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
-    /*
-     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
-     * when the domain is created. Considering the worst case for page
-     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
-     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
-     * the allocated 16 pages here would not be lost, hence populate these
-     * pages unconditionally.
-     */
-    spin_lock(&d->arch.paging.lock);
-    rc = p2m_set_allocation(d, 16, NULL);
-    spin_unlock(&d->arch.paging.lock);
-    if ( rc )
-        return rc;
-
     return 0;
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 02:43:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 02:43:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478269.741349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHFSS-0001UF-0l; Mon, 16 Jan 2023 02:42:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478269.741349; Mon, 16 Jan 2023 02:42:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHFSR-0001U8-U3; Mon, 16 Jan 2023 02:42:47 +0000
Received: by outflank-mailman (input) for mailman id 478269;
 Mon, 16 Jan 2023 02:42:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFSQ-0001Ty-UY; Mon, 16 Jan 2023 02:42:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFSQ-0007Xz-RR; Mon, 16 Jan 2023 02:42:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFSQ-0000AL-Kh; Mon, 16 Jan 2023 02:42:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFSQ-00084e-K7; Mon, 16 Jan 2023 02:42:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f/6wYmgdQe8qE8gxICsbudH1le2/bG7cTjQRoKD/Cjw=; b=CPeqjhql4CnjxAonphWReod/xx
	DjGLsk5Ey3ERtAm3z2afzt5xzoxQx/4THqC1JmX/l2fU0XfhAcWr/YV2W5fk89ieYPr936XxT1r8C
	gzArjWBT6vkaWiIjI2dbDrJzMxf3cp0GR1U1y6tHOpz3722tVIrLQznv0KMwKqxy+q0Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175910-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175910: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 02:42:46 +0000

flight 175910 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175910/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746
 build-armhf                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    3 days
Failing since        175748  2023-01-12 20:01:56 Z    3 days   14 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    1 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header is
    part of the identity mapping. However, this doesn't take into account
    the literal pool, which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But the section is below a page size (i.e. 4KB), so take a
    shortcut and check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding
    lines. This doesn't seem warranted, so delete the extra space.
    
    Signed-off: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 02:58:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 02:58:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478276.741358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHFhB-00034R-AG; Mon, 16 Jan 2023 02:58:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478276.741358; Mon, 16 Jan 2023 02:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHFhB-00034K-7J; Mon, 16 Jan 2023 02:58:01 +0000
Received: by outflank-mailman (input) for mailman id 478276;
 Mon, 16 Jan 2023 02:58:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFh9-00034A-Vc; Mon, 16 Jan 2023 02:57:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFh9-0007wG-QV; Mon, 16 Jan 2023 02:57:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFh9-0000VK-Gk; Mon, 16 Jan 2023 02:57:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFh9-00041O-GF; Mon, 16 Jan 2023 02:57:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ffGtAWn2/rA1G1KOX6Kt9/gcP23Er62Q/jvG5geLSkw=; b=25GaT3STcHpowgtnlWwxtLG+2L
	ORvx4XA2LFLMZjQkMvGabwwVNPxATSQgvf3LZXQ7O5JgRrutwLM/COZqwvj+srQqXxyvrnxtiOn2r
	Zl8m6XC/OS++vLRI7FSBCgC06qovho2gjTzH7uqEhJRX1lgCKfKOMZ+mi19/ybQBoFgc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175908-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175908: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=886fb67020e32ce6a2cf7049c6f017acf1f0d69a
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 02:57:59 +0000

flight 175908 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175908/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743
 build-arm64                   6 xen-build                fail REGR. vs. 175743
 build-armhf                   6 xen-build                fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    3 days
Failing since        175750  2023-01-13 06:38:52 Z    2 days    7 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    1 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 03:14:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 03:14:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478284.741369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHFxC-0005zq-Se; Mon, 16 Jan 2023 03:14:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478284.741369; Mon, 16 Jan 2023 03:14:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHFxC-0005zj-Pd; Mon, 16 Jan 2023 03:14:34 +0000
Received: by outflank-mailman (input) for mailman id 478284;
 Mon, 16 Jan 2023 03:14:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFxB-0005zZ-6t; Mon, 16 Jan 2023 03:14:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFxB-0000Fz-31; Mon, 16 Jan 2023 03:14:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFxA-0000tY-OQ; Mon, 16 Jan 2023 03:14:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHFxA-0000nM-Nw; Mon, 16 Jan 2023 03:14:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fo6GYvnjMbsdMDJ6t/LOIku90Srn+C0aEver3wAXSiY=; b=nLGmqJyyv4VgyvKpts6xGRKjae
	zEhNCAOY9gsOLGia5Gn88dIH80/qGXIjNlUVOvQ0SPXphJhhsdHojo+PeS6/RLMRXeT6J5OuC76vp
	Lp2CU1+ioYa3nwm3ErtB8KyDWls0te5FAmjcMHQaWUZFfjJ4Du7Zk5Wc6ZNst47IqSyo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175907-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175907: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-arm64:xen-build:fail:regression
    xen-unstable:build-arm64-xsm:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:build-armhf:xen-build:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 03:14:32 +0000

flight 175907 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175907/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-arm64                   6 xen-build                fail REGR. vs. 175734
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734
 build-armhf                   6 xen-build                fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    4 days
Failing since        175739  2023-01-12 09:38:44 Z    3 days    9 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    2 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config followed by an
    incremental rebuild are covered by the respective .*.cmd no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 03:41:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 03:41:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478294.741385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHGNA-00010F-7W; Mon, 16 Jan 2023 03:41:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478294.741385; Mon, 16 Jan 2023 03:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHGNA-000108-4b; Mon, 16 Jan 2023 03:41:24 +0000
Received: by outflank-mailman (input) for mailman id 478294;
 Mon, 16 Jan 2023 03:41:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHGN9-0000zy-Fx; Mon, 16 Jan 2023 03:41:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHGN9-0000rX-EF; Mon, 16 Jan 2023 03:41:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHGN9-0001Sp-0X; Mon, 16 Jan 2023 03:41:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHGN9-0001Tn-04; Mon, 16 Jan 2023 03:41:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zcI5Cspk/kBKfE9F1b4LzbS/0sgYljb17YFrtQI076g=; b=GiYNg8hjbrP6lm/n0f+tPO8Bno
	0GoX0Dl81FtOy6qkUKD0rmrHFva4jPi/d1S2yEW9y4I19XOPQJ1icXyn3L8VI/N/E5nH+ZEJwA+ZI
	1JUqwD8oBGtemWqolHY+ziF80N/gfverncXGgKfozjS3n4zIaBr59SuPaAJo9NIELYHU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175912-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175912: regressions - trouble: blocked/broken/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-amd64:host-build-prep:fail:regression
    xen-unstable-smoke:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 03:41:23 +0000

flight 175912 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175912/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-amd64                   5 host-build-prep          fail REGR. vs. 175746
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175746
 build-armhf                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    3 days
Failing since        175748  2023-01-12 20:01:56 Z    3 days   15 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    1 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB), so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 04:12:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 04:12:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478301.741395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHGr6-0004T3-LL; Mon, 16 Jan 2023 04:12:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478301.741395; Mon, 16 Jan 2023 04:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHGr6-0004Sw-IS; Mon, 16 Jan 2023 04:12:20 +0000
Received: by outflank-mailman (input) for mailman id 478301;
 Mon, 16 Jan 2023 04:12:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHGr5-0004Sm-Sk; Mon, 16 Jan 2023 04:12:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHGr5-0001bN-PA; Mon, 16 Jan 2023 04:12:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHGr5-00027n-HR; Mon, 16 Jan 2023 04:12:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHGr5-0002m5-Gv; Mon, 16 Jan 2023 04:12:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9Q68DqLR6fWmoLAblJlwDxZ9SofHzQ5776cIWMHe7dc=; b=qj9YrV2IxDDR2DKgDo0GcUaOiV
	UT/rZCYYhJ8umwK2Q6O31yWptN2/lSuih/+jMLZ1IdZuQSzBqMinAtlcK8J9GFwoScwamp/gT8gLe
	/UuIdoGHW2i5iVWZwt/w6wmcIYSFHj23i88LVD950wBtmItDAeMF6YHri+WPYLoYZo/M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175911-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175911: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64-xsm:host-build-prep:fail:regression
    qemu-mainline:build-arm64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64:host-build-prep:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=886fb67020e32ce6a2cf7049c6f017acf1f0d69a
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 04:12:19 +0000

flight 175911 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175911/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops               <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175743
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-arm64                   5 host-build-prep          fail REGR. vs. 175743
 build-amd64                   6 xen-build                fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743
 build-armhf                   6 xen-build                fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    3 days
Failing since        175750  2023-01-13 06:38:52 Z    2 days    8 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    1 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  broken  
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64-pvops broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken

Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 04:28:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 04:28:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478309.741405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHH6J-00063m-4Z; Mon, 16 Jan 2023 04:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478309.741405; Mon, 16 Jan 2023 04:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHH6J-00063f-1K; Mon, 16 Jan 2023 04:28:03 +0000
Received: by outflank-mailman (input) for mailman id 478309;
 Mon, 16 Jan 2023 04:28:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KGc4=5N=csail.mit.edu=srivatsa@srs-se1.protection.inumbo.net>)
 id 1pHH6H-00063Z-Gw
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 04:28:01 +0000
Received: from outgoing2021.csail.mit.edu (outgoing2021.csail.mit.edu
 [128.30.2.78]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 264d1602-9556-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 05:27:57 +0100 (CET)
Received: from c-24-17-218-140.hsd1.wa.comcast.net ([24.17.218.140]
 helo=srivatsab3MD6R.vmware.com)
 by outgoing2021.csail.mit.edu with esmtpsa (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.95)
 (envelope-from <srivatsa@csail.mit.edu>) id 1pHH69-00EU45-Mo;
 Sun, 15 Jan 2023 23:27:53 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 264d1602-9556-11ed-b8d0-410ff93cb8f0
To: Juergen Gross <jgross@suse.com>, linux-kernel@vger.kernel.org,
 x86@kernel.org, virtualization@lists.linux-foundation.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Alexey Makhalov <amakhalov@vmware.com>,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org
References: <20230112152132.4399-1-jgross@suse.com>
From: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Subject: Re: [PATCH] x86/paravirt: merge activate_mm and dup_mmap callbacks
Message-ID: <3fcb5078-852e-0886-c084-7fb0cfa5b757@csail.mit.edu>
Date: Sun, 15 Jan 2023 20:27:50 -0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.12.0
MIME-Version: 1.0
In-Reply-To: <20230112152132.4399-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit


Hi Juergen,

On 1/12/23 7:21 AM, Juergen Gross wrote:
> The two paravirt callbacks .mmu.activate_mm and .mmu.dup_mmap are
> sharing the same implementations in all cases: for Xen PV guests they
> are pinning the PGD of the new mm_struct, and for all other cases
> they are a NOP.
> 

I was expecting that the duplicated functions xen_activate_mm() and
xen_dup_mmap() would be merged into a common one, and that both
.mmu.activate_mm and .mmu.dup_mmap callbacks would be mapped to that
common implementation for Xen PV.

> So merge them to a common callback .mmu.enter_mmap (in contrast to the
> corresponding already existing .mmu.exit_mmap).
> 

Instead, this patch seems to be merging the callbacks themselves...

I see that's not an issue right now since there is no other actual
user for these callbacks. But are we sure that merging the callbacks
just because the current user (Xen PV) has the same implementation for
both is a good idea? The callbacks are invoked at distinct points from
fork/exec, so wouldn't it be valuable to retain that distinction in
semantics in the callbacks as well?

However, if you believe that two separate callbacks are not really
required here (because there is no significant difference in what they
mean, rather than because their callback implementations happen to be
the same right now), could you please expand on this and call it out
in the commit message?

Thank you!

Regards,
Srivatsa
VMware Photon OS

> As the first parameter of the old callbacks isn't used, drop it from
> the replacement one.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/include/asm/mmu_context.h    |  4 ++--
>  arch/x86/include/asm/paravirt.h       | 14 +++-----------
>  arch/x86/include/asm/paravirt_types.h |  7 ++-----
>  arch/x86/kernel/paravirt.c            |  3 +--
>  arch/x86/xen/mmu_pv.c                 | 12 ++----------
>  5 files changed, 10 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index b8d40ddeab00..6a14b6c2165c 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -134,7 +134,7 @@ extern void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  
>  #define activate_mm(prev, next)			\
>  do {						\
> -	paravirt_activate_mm((prev), (next));	\
> +	paravirt_enter_mmap(next);		\
>  	switch_mm((prev), (next), NULL);	\
>  } while (0);
>  
> @@ -167,7 +167,7 @@ static inline void arch_dup_pkeys(struct mm_struct *oldmm,
>  static inline int arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
>  {
>  	arch_dup_pkeys(oldmm, mm);
> -	paravirt_arch_dup_mmap(oldmm, mm);
> +	paravirt_enter_mmap(mm);
>  	return ldt_dup_context(oldmm, mm);
>  }
>  
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index 73e9522db7c1..07bbdceaf35a 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -332,16 +332,9 @@ static inline void tss_update_io_bitmap(void)
>  }
>  #endif
>  
> -static inline void paravirt_activate_mm(struct mm_struct *prev,
> -					struct mm_struct *next)
> +static inline void paravirt_enter_mmap(struct mm_struct *next)
>  {
> -	PVOP_VCALL2(mmu.activate_mm, prev, next);
> -}
> -
> -static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
> -					  struct mm_struct *mm)
> -{
> -	PVOP_VCALL2(mmu.dup_mmap, oldmm, mm);
> +	PVOP_VCALL1(mmu.enter_mmap, next);
>  }
>  
>  static inline int paravirt_pgd_alloc(struct mm_struct *mm)
> @@ -787,8 +780,7 @@ extern void default_banner(void);
>  
>  #ifndef __ASSEMBLY__
>  #ifndef CONFIG_PARAVIRT_XXL
> -static inline void paravirt_arch_dup_mmap(struct mm_struct *oldmm,
> -					  struct mm_struct *mm)
> +static inline void paravirt_enter_mmap(struct mm_struct *mm)
>  {
>  }
>  #endif
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 8c1da419260f..71bf64b963df 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -164,11 +164,8 @@ struct pv_mmu_ops {
>  	unsigned long (*read_cr3)(void);
>  	void (*write_cr3)(unsigned long);
>  
> -	/* Hooks for intercepting the creation/use of an mm_struct. */
> -	void (*activate_mm)(struct mm_struct *prev,
> -			    struct mm_struct *next);
> -	void (*dup_mmap)(struct mm_struct *oldmm,
> -			 struct mm_struct *mm);
> +	/* Hook for intercepting the creation/use of an mm_struct. */
> +	void (*enter_mmap)(struct mm_struct *mm);
>  
>  	/* Hooks for allocating and freeing a pagetable top-level */
>  	int  (*pgd_alloc)(struct mm_struct *mm);
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index 327757afb027..ff1109b9c6cd 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -352,8 +352,7 @@ struct paravirt_patch_template pv_ops = {
>  	.mmu.make_pte		= PTE_IDENT,
>  	.mmu.make_pgd		= PTE_IDENT,
>  
> -	.mmu.dup_mmap		= paravirt_nop,
> -	.mmu.activate_mm	= paravirt_nop,
> +	.mmu.enter_mmap		= paravirt_nop,
>  
>  	.mmu.lazy_mode = {
>  		.enter		= paravirt_nop,
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index ee29fb558f2e..b3b8d289b9ab 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -885,14 +885,7 @@ void xen_mm_unpin_all(void)
>  	spin_unlock(&pgd_lock);
>  }
>  
> -static void xen_activate_mm(struct mm_struct *prev, struct mm_struct *next)
> -{
> -	spin_lock(&next->page_table_lock);
> -	xen_pgd_pin(next);
> -	spin_unlock(&next->page_table_lock);
> -}
> -
> -static void xen_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
> +static void xen_enter_mmap(struct mm_struct *mm)
>  {
>  	spin_lock(&mm->page_table_lock);
>  	xen_pgd_pin(mm);
> @@ -2153,8 +2146,7 @@ static const typeof(pv_ops) xen_mmu_ops __initconst = {
>  		.make_p4d = PV_CALLEE_SAVE(xen_make_p4d),
>  #endif
>  
> -		.activate_mm = xen_activate_mm,
> -		.dup_mmap = xen_dup_mmap,
> +		.enter_mmap = xen_enter_mmap,
>  		.exit_mmap = xen_exit_mmap,
>  
>  		.lazy_mode = {
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 06:03:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 06:03:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478317.741414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHIa5-00080Q-1G; Mon, 16 Jan 2023 06:02:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478317.741414; Mon, 16 Jan 2023 06:02:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHIa4-00080J-Uy; Mon, 16 Jan 2023 06:02:52 +0000
Received: by outflank-mailman (input) for mailman id 478317;
 Mon, 16 Jan 2023 06:02:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KGc4=5N=csail.mit.edu=srivatsa@srs-se1.protection.inumbo.net>)
 id 1pHIa3-00080D-Dy
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 06:02:51 +0000
Received: from outgoing2021.csail.mit.edu (outgoing2021.csail.mit.edu
 [128.30.2.78]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65d3b344-9563-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 07:02:47 +0100 (CET)
Received: from [128.177.82.146] (helo=srivatsa-dev.eng.vmware.com)
 by outgoing2021.csail.mit.edu with esmtpa (Exim 4.95)
 (envelope-from <srivatsa@csail.mit.edu>) id 1pHIZd-00EV5m-KZ;
 Mon, 16 Jan 2023 01:02:25 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65d3b344-9563-11ed-b8d0-410ff93cb8f0
From: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
To: linux-kernel@vger.kernel.org
Cc: amakhalov@vmware.com,
	ganb@vmware.com,
	ankitja@vmware.com,
	bordoloih@vmware.com,
	keerthanak@vmware.com,
	blamoreaux@vmware.com,
	namit@vmware.com,
	srivatsa@csail.mit.edu,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Wyes Karny <wyes.karny@amd.com>,
	Lewis Carroll <lewis.carroll@amd.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Juergen Gross <jgross@suse.com>,
	x86@kernel.org,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle state
Date: Sun, 15 Jan 2023 22:01:34 -0800
Message-Id: <20230116060134.80259-1-srivatsa@csail.mit.edu>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>

Under hypervisors that support mwait passthrough, a vCPU in mwait
CPU-idle state remains in guest context (instead of yielding to the
hypervisor via VMEXIT), which helps speed up wakeups from idle.

However, this runs into problems with CPU hotplug, because the Linux
CPU offline path prefers to put the vCPU-to-be-offlined in mwait state
whenever mwait is available. As a result, since a vCPU in mwait
remains in guest context and does not yield to the hypervisor, an
offline vCPU *appears* to be 100% busy as viewed from the host, which
prevents the hypervisor from running other vCPUs or workloads on the
corresponding pCPU. [ Note that such a vCPU is not actually busy
spinning, though; it remains in mwait idle state in the guest. ]

Fix this by preventing the use of mwait idle state in the vCPU offline
play_dead() path for any hypervisor, even if mwait support is
available.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Wyes Karny <wyes.karny@amd.com>
Cc: Lewis Carroll <lewis.carroll@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Alexey Makhalov <amakhalov@vmware.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: x86@kernel.org
Cc: VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
---

v1: https://lore.kernel.org/lkml/165843627080.142207.12667479241667142176.stgit@csail.mit.edu/

 arch/x86/kernel/smpboot.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 55cad72715d9..125a5d4bfded 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1763,6 +1763,15 @@ static inline void mwait_play_dead(void)
 		return;
 	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
 		return;
+
+	/*
+	 * Do not use mwait in CPU offline play_dead if running under
+	 * any hypervisor, to make sure that the offline vCPU actually
+	 * yields to the hypervisor (which may not happen otherwise if
+	 * the hypervisor supports mwait passthrough).
+	 */
+	if (this_cpu_has(X86_FEATURE_HYPERVISOR))
+		return;
 	if (__this_cpu_read(cpu_info.cpuid_level) < CPUID_MWAIT_LEAF)
 		return;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 06:43:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 06:43:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478323.741425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJDP-0003pI-4G; Mon, 16 Jan 2023 06:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478323.741425; Mon, 16 Jan 2023 06:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJDP-0003pB-02; Mon, 16 Jan 2023 06:43:31 +0000
Received: by outflank-mailman (input) for mailman id 478323;
 Mon, 16 Jan 2023 06:43:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OEfv=5N=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHJDN-0003p5-P8
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 06:43:29 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 14cfaa01-9569-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 07:43:27 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id DA01C34E8C;
 Mon, 16 Jan 2023 06:43:26 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7C10B139C2;
 Mon, 16 Jan 2023 06:43:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id /f+PHA7yxGOZXgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 16 Jan 2023 06:43:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14cfaa01-9569-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673851406; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uqQqkkW8vDqXYBwySLpMuxVJV9a63KYmU1amJWZzGc4=;
	b=P2wXriVIfUN3At+wL+o1ULRQl7N/2JdPYEOOoKn4IkbYr7aJqKzEAr2DR7fxmzyIWpjuZZ
	YQUJbqqMFE8so695aDnsbRO0ClAdncmMWZK3pZyfq2ngtcbcuXFdRg+rbAwJJJ31qlnVf6
	hv0iVP0TVetu/KwwIg71uWDVuXR8TCU=
Message-ID: <27d08d32-1a17-0959-203f-39e769f555d1@suse.com>
Date: Mon, 16 Jan 2023 07:43:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/paravirt: merge activate_mm and dup_mmap callbacks
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>,
 linux-kernel@vger.kernel.org, x86@kernel.org,
 virtualization@lists.linux-foundation.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Alexey Makhalov <amakhalov@vmware.com>,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org
References: <20230112152132.4399-1-jgross@suse.com>
 <3fcb5078-852e-0886-c084-7fb0cfa5b757@csail.mit.edu>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <3fcb5078-852e-0886-c084-7fb0cfa5b757@csail.mit.edu>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------f2QYQmcp7oyhJ0GflGkKSPBI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------f2QYQmcp7oyhJ0GflGkKSPBI
Content-Type: multipart/mixed; boundary="------------PYYImwLJ6duLYMmgfXqX04GL";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>,
 linux-kernel@vger.kernel.org, x86@kernel.org,
 virtualization@lists.linux-foundation.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Alexey Makhalov <amakhalov@vmware.com>,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org
Message-ID: <27d08d32-1a17-0959-203f-39e769f555d1@suse.com>
Subject: Re: [PATCH] x86/paravirt: merge activate_mm and dup_mmap callbacks
References: <20230112152132.4399-1-jgross@suse.com>
 <3fcb5078-852e-0886-c084-7fb0cfa5b757@csail.mit.edu>
In-Reply-To: <3fcb5078-852e-0886-c084-7fb0cfa5b757@csail.mit.edu>

--------------PYYImwLJ6duLYMmgfXqX04GL
Content-Type: multipart/mixed; boundary="------------0z6dtOJSlYZ8eWZSil6wUXCH"

--------------0z6dtOJSlYZ8eWZSil6wUXCH
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16.01.23 05:27, Srivatsa S. Bhat wrote:
> 
> Hi Juergen,
> 
> On 1/12/23 7:21 AM, Juergen Gross wrote:
>> The two paravirt callbacks .mmu.activate_mm and .mmu.dup_mmap are
>> sharing the same implementations in all cases: for Xen PV guests they
>> are pinning the PGD of the new mm_struct, and for all other cases
>> they are a NOP.
>>
> 
> I was expecting that the duplicated functions xen_activate_mm() and
> xen_dup_mmap() would be merged into a common one, and that both
> .mmu.activate_mm and .mmu.dup_mmap callbacks would be mapped to that
> common implementation for Xen PV.
> 
>> So merge them to a common callback .mmu.enter_mmap (in contrast to the
>> corresponding already existing .mmu.exit_mmap).
>>
> 
> Instead, this patch seems to be merging the callbacks themselves...
> 
> I see that's not an issue right now since there is no other actual
> user for these callbacks. But are we sure that merging the callbacks
> just because the current user (Xen PV) has the same implementation for
> both is a good idea? The callbacks are invoked at distinct points from
> fork/exec, so wouldn't it be valuable to retain that distinction in
> semantics in the callbacks as well?
> 
> However, if you believe that two separate callbacks are not really
> required here (because there is no significant difference in what they
> mean, rather than because their callback implementations happen to be
> the same right now), then could you please expand on this and call it
> out in the commit message, please?

Would you be fine with:

   In the end both callbacks are meant to register an address space with the
   underlying hypervisor, so there needs to be only a single callback for that
   purpose.


Juergen
--------------0z6dtOJSlYZ8eWZSil6wUXCH
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------0z6dtOJSlYZ8eWZSil6wUXCH--

--------------PYYImwLJ6duLYMmgfXqX04GL--

--------------f2QYQmcp7oyhJ0GflGkKSPBI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPE8g0FAwAAAAAACgkQsN6d1ii/Ey8t
Ngf/d6OpWFNSFigpwYK7Kc+TkmGV5RqwiPEJMARDOsmaX6zjIzYvRHgoC6x2XUwznZlE8lTWf1gV
EfFTLU3ZqmW3RwxZJp8Be1GKpDJwHlVtIiuK5PUV44QPQbiCWlaqHZtMZ/UV8n5ToaFL8ocXG/sl
vWEgWlgZps0DvwculC02JvVYmhV4+oiaO48Ko796E5XpXskYzd72e3IP9TuLo+gH6uDPKRaU3fmc
1QBVa4Da7mcD/JIQ8++Lt5L/DtaApmTFlgk6jqUUjo0xKJumtDzyuoTsV8o/kGXP4PFEgLfKJZy4
KM8/GTRX4GVomox/Ate31wf6jZCRkibNzLebhhUnIQ==
=EPIp
-----END PGP SIGNATURE-----

--------------f2QYQmcp7oyhJ0GflGkKSPBI--


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 06:45:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 06:45:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478328.741434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJFL-0004Np-Ed; Mon, 16 Jan 2023 06:45:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478328.741434; Mon, 16 Jan 2023 06:45:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJFL-0004Ni-C3; Mon, 16 Jan 2023 06:45:31 +0000
Received: by outflank-mailman (input) for mailman id 478328;
 Mon, 16 Jan 2023 06:45:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OEfv=5N=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHJFK-0004Na-6O
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 06:45:30 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5cd9af19-9569-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 07:45:28 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B927534EA5;
 Mon, 16 Jan 2023 06:45:27 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4D664138FA;
 Mon, 16 Jan 2023 06:45:27 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id jL9uEYfyxGNlXwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 16 Jan 2023 06:45:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cd9af19-9569-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673851527; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=G86aOTGRKtGT/dL2BP/zBD5nsf0SVezrWANnZeOyr08=;
	b=BOcpKiRnsFMazn2SHl1SFaYx3eOq7z8cZSJwi8fpak+xv6YFs4yJ93xt/1Wnic9duvs/ZF
	K2bc85GDUspAUwMA/b65JWhNmCNWqvf5iwIMwaUfM3JVaT++Vb9+uBANx4OnPC+7wqcOw4
	9D+1hCAsz1ekWiyn/uu8zvbSbX9Tyh4=
Message-ID: <e5cc2f96-82bc-a0dc-21fa-2f605bc867d1@suse.com>
Date: Mon, 16 Jan 2023 07:45:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen
Content-Language: en-US
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, linux-pm@vger.kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Len Brown <len.brown@intel.com>,
 Pavel Machek <pavel@ucw.cz>, Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <20230113140610.7132-1-jgross@suse.com>
 <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------DiezvYaLwsyIRkmUHZiBoqpD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------DiezvYaLwsyIRkmUHZiBoqpD
Content-Type: multipart/mixed; boundary="------------lCn3u2Hrj53tNUYLono29aVo";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, linux-pm@vger.kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Len Brown <len.brown@intel.com>,
 Pavel Machek <pavel@ucw.cz>, Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Message-ID: <e5cc2f96-82bc-a0dc-21fa-2f605bc867d1@suse.com>
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen
References: <20230113140610.7132-1-jgross@suse.com>
 <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
In-Reply-To: <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>

--------------lCn3u2Hrj53tNUYLono29aVo
Content-Type: multipart/mixed; boundary="------------ypmOYZbFydzaBEXMU6GCKR0z"

--------------ypmOYZbFydzaBEXMU6GCKR0z
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 13.01.23 20:40, Rafael J. Wysocki wrote:
> On Fri, Jan 13, 2023 at 3:06 PM Juergen Gross <jgross@suse.com> wrote:
>>
>> Commit f1e525009493 ("x86/boot: Skip realmode init code when running as
>> Xen PV guest") missed one code path accessing real_mode_header, leading
>> to dereferencing NULL when suspending the system under Xen:
>>
>>      [  348.284004] PM: suspend entry (deep)
>>      [  348.289532] Filesystems sync: 0.005 seconds
>>      [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
>>      [  348.292457] OOM killer disabled.
>>      [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
>>      [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
>>      [  348.749228] PM: suspend devices took 0.352 seconds
>>      [  348.769713] ACPI: EC: interrupt blocked
>>      [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
>>      [  348.816080] #PF: supervisor read access in kernel mode
>>      [  348.816081] #PF: error_code(0x0000) - not-present page
>>      [  348.816083] PGD 0 P4D 0
>>      [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
>>      [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
>>      [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
>>      [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20
>>
>> Fix that by adding an indirection for acpi_get_wakeup_address() which
>> Xen PV dom0 can use to return a dummy non-zero wakeup address (this
>> address won't ever be used, as the real suspend handling is done by the
>> hypervisor).
> 
> How exactly does this help?

I believed the first sentence of the commit message would make this
clear enough.

I can expand the commit message to go more into detail if you think
this is really needed.


Juergen
--------------ypmOYZbFydzaBEXMU6GCKR0z--

--------------lCn3u2Hrj53tNUYLono29aVo--

--------------DiezvYaLwsyIRkmUHZiBoqpD--


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 07:05:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478334.741445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJY6-0006sj-8M; Mon, 16 Jan 2023 07:04:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478334.741445; Mon, 16 Jan 2023 07:04:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJY6-0006sc-4B; Mon, 16 Jan 2023 07:04:54 +0000
Received: by outflank-mailman (input) for mailman id 478334;
 Mon, 16 Jan 2023 07:04:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W85+=5N=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pHJY5-0006sD-6A
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 07:04:53 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 11a7d09a-956c-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 08:04:50 +0100 (CET)
Received: by mail-ej1-x62c.google.com with SMTP id kt14so6927543ejc.3
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 23:04:50 -0800 (PST)
Received: from uni.router.wind (adsl-67.109.242.224.tellas.gr.
 [109.242.224.67]) by smtp.googlemail.com with ESMTPSA id
 v15-20020a056402184f00b0046c5baa1f58sm10990824edy.97.2023.01.15.23.04.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 23:04:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11a7d09a-956c-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=mp1FQXXyYfq3zENnQECHsCJiRxZfGT2QDjD6yHefefY=;
        b=oAVq/hxLg1akUx2ep0FHl8grGRt3pxaXLdhk8rOqWwiMLu/d3aMHwreojG4aLnZr9L
         8cfVVB6zYgdUq5JuvNMl1KRvz6tbcaB13ZZNsHoTfvn2yKDwkTEcoW2MQ7Hlg29HWblh
         6x0rpcEdG0lOjJOwFKA2fU2T/Pb4vGjFpwV2A1uUZJu7wah407sLtkt56V9SLUzSB6gl
         qtoDDC04YlZZPD0SUrc4XaIreDIYPHpgIg9s2aGaKzvsljEa4mpAFvUy/tcqmOgmtuQF
         EyQIVFESzUsy1z9j1nzSa8Fr9SSyH4oVNaEQ/Mnph6+gtsDOUDF6OUlc71lRc+NntTUX
         qDug==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=mp1FQXXyYfq3zENnQECHsCJiRxZfGT2QDjD6yHefefY=;
        b=vKZ2JGwNm0FuY8QTGnB8Cd/oBx/BQZaGmfvbeZKuzl4y5gKtR8ZWiZM60qnLhg5SDa
         N+yOZGHtdUyDXE2leMeYEKc1NBu4T9KkuUqgrG60hnGrpOC+J/DgX2GQYbtPVb+l4kLY
         vdNfMyie53Tj2DYLNzT3PImjxsFwYwAW9dkcR/+y7b/BJYFxmMQ1xMUxuduWipYHDLbV
         jaRL1G0uV915YKF+OY52jlzqyysOSdH3xjlIW/4J+q/NOTK1blFod2hHqRXsGZYoJP0Q
         JTaSAYLI5HM62t/UoEwkVNwV6CoUoOqk8FEWBqcsDL5ovUlXXjwhDAcvX6B2J8jJzCdq
         5J9Q==
X-Gm-Message-State: AFqh2kqhDnfSZWSZSaNCnSnhvm+fYI8b8e+7bh9wU7+ENjBxkz6MY36N
	mToztp9nD6egZlkCBwkC9r7a7Xqji5o=
X-Google-Smtp-Source: AMrXdXuZY3+bs1/S5WfooYPQKwksVHMtp5PYwH6MbPrMdyoAYnyDoq/kPWQ3bpF8MYzoU4X2mW3+eg==
X-Received: by 2002:a17:907:8e93:b0:7ae:bfec:74c7 with SMTP id tx19-20020a1709078e9300b007aebfec74c7mr82569866ejc.72.1673852689586;
        Sun, 15 Jan 2023 23:04:49 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Kevin Tian <kevin.tian@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [PATCH v3 0/8] Make x86 IOMMU driver support configurable
Date: Mon, 16 Jan 2023 09:04:23 +0200
Message-Id: <20230116070431.905594-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series aims to make the iommu driver support for x86 configurable.
Currently, irrespective of the target platform, both AMD and Intel iommu
drivers are built. This is the case because the existing Kconfig
infrastructure does not provide any facilities for finer-grained configuration.

The series adds two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, that can be
used to generate a tailored iommu configuration for a given platform.
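For illustration, a hypothetical sketch of what two such options could look
like in xen/drivers/passthrough/Kconfig (only the option names AMD_IOMMU and
INTEL_IOMMU come from this cover letter; the prompts, dependencies and
defaults below are assumptions, not the actual definitions from patch 8):

```kconfig
config AMD_IOMMU
	bool "AMD-Vi IOMMU support"
	depends on X86
	default y

config INTEL_IOMMU
	bool "Intel VT-d IOMMU support"
	depends on X86
	default y
```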

This version of the series is rebased on top of the current staging and
addresses the comments made on version 2.
Patch "[v2] x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options"
is not included in this series because it has already been merged, and patch
"[v2] x86/iommu: iommu_igfx, iommu_qinval and iommu_snoop are VT-d specific"
has been split up into two separate patches.

Xenia Ragiadakou (8):
  x86/iommu: amd_iommu_perdev_intremap is AMD-Vi specific
  x86/iommu: iommu_igfx and iommu_qinval are Intel VT-d specific
  x86/iommu: snoop control is allowed only by Intel VT-d
  x86/acpi: separate AMD-Vi and VT-d specific functions
  x86/iommu: make code addressing CVE-2011-1898 no VT-d specific
  x86/iommu: call pi_update_irte through an hvm_function callback
  x86/dpci: move hvm_dpci_isairq_eoi() to generic HVM code
  x86/iommu: make AMD-Vi and Intel VT-d support configurable

 xen/arch/x86/hvm/vmx/vmx.c               | 41 +++++++++++++++
 xen/arch/x86/include/asm/acpi.h          |  6 ++-
 xen/arch/x86/include/asm/hvm/hvm.h       | 10 ++++
 xen/arch/x86/include/asm/iommu.h         |  3 --
 xen/drivers/passthrough/Kconfig          | 22 +++++++-
 xen/drivers/passthrough/amd/iommu_init.c |  2 +
 xen/drivers/passthrough/iommu.c          | 15 +++++-
 xen/drivers/passthrough/vtd/intremap.c   | 36 -------------
 xen/drivers/passthrough/vtd/iommu.c      |  3 --
 xen/drivers/passthrough/vtd/x86/Makefile |  1 -
 xen/drivers/passthrough/vtd/x86/hvm.c    | 64 ------------------------
 xen/drivers/passthrough/x86/hvm.c        | 50 ++++++++++++++++--
 xen/drivers/passthrough/x86/iommu.c      |  5 ++
 xen/include/xen/acpi.h                   |  7 +++
 xen/include/xen/iommu.h                  |  8 ++-
 15 files changed, 156 insertions(+), 117 deletions(-)
 delete mode 100644 xen/drivers/passthrough/vtd/x86/hvm.c

-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 07:05:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478335.741455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJY9-00077n-F4; Mon, 16 Jan 2023 07:04:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478335.741455; Mon, 16 Jan 2023 07:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJY9-00077g-Bg; Mon, 16 Jan 2023 07:04:57 +0000
Received: by outflank-mailman (input) for mailman id 478335;
 Mon, 16 Jan 2023 07:04:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W85+=5N=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pHJY8-0006sD-22
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 07:04:56 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 13e88359-956c-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 08:04:54 +0100 (CET)
Received: by mail-ej1-x62b.google.com with SMTP id bk15so8323180ejb.9
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 23:04:54 -0800 (PST)
Received: from uni.router.wind (adsl-67.109.242.224.tellas.gr.
 [109.242.224.67]) by smtp.googlemail.com with ESMTPSA id
 v15-20020a056402184f00b0046c5baa1f58sm10990824edy.97.2023.01.15.23.04.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 23:04:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13e88359-956c-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=53CNedcl2LpaLrX+og6xiNzpDo61T+Xa5G/+nq0djig=;
        b=MNH32xpmk7O4JRawfjj/7+WeXgb4/eJew9BrD/cd8foYWnF0gDYh8WQ8C3uRq5AYtk
         Qfav2j+zCsIOEpVnwniTyBOWWr74Z7+9OtWaUIeq2RjbV7hX9nD5ICoQ75YPwkLu3pJi
         xJMrTMjPQqBVU/6rloVRHTMKx7nCFUaOl+8h7GJF1dRxLHZ1RHdLZ/HVBcw1AS6+CDKN
         63cT+jWdAg6jwsdTUL2Yu8RVUOUWauvqPWf/3u9JA6VXHf2Bluryq49fGSuQcYCLwAuM
         kBdlSya9+z6pR81KUkAWbI3oQjVVv/D+7t/PiNyY4UsB1YsXH24zJbKNtsJmqO4Ffxs4
         UGog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=53CNedcl2LpaLrX+og6xiNzpDo61T+Xa5G/+nq0djig=;
        b=CGyeaM5168WHWwKH1TiKizHBZNfUKN0RQhChrj4A+riRwW94QLx76B/F+iuREbR4b5
         rZ+Dd3xcF7Om9NoE036Eo/1M0MachzeTx5q7JAzWlB45Fk9XYJ3iQmo2ey0uIIbK1tyS
         nTES96qETAXFPTvZArgbRsYoe6Bq/9xShM41ZSKgYQUKzxM3vy4nF1vvZI1f2IxX7emo
         7wV3hSVPstuqNSHC2JJh9TKIRgcWt5g27D85xqbN3F2CjgODo9Wb+S5Yelb8MWkIE1re
         w39WpXmiisrg3Wp4pjBDm4rGxV7i0XLYVofO/sgucStPR6US50GEM9J9VFtVlZqTgbvV
         6XKQ==
X-Gm-Message-State: AFqh2kpw5YsWbDRNMFbCB1iArIvHfGKXlf0/A6a4VoUKHOSRHRlCLxIs
	CLuYwoMaKx1bUPmNT6oJ1qwOyzvzM7I=
X-Google-Smtp-Source: AMrXdXvYHQUxlrIBRDsIC5Fy1aNpHLbbszebpSlbeaCjwNkQKFFzRNI23dtm/XQgoBdr9Tuia50AsA==
X-Received: by 2002:a17:907:1316:b0:863:e08e:2ac3 with SMTP id vj22-20020a170907131600b00863e08e2ac3mr10398478ejb.63.1673852693562;
        Sun, 15 Jan 2023 23:04:53 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 1/8] x86/iommu: amd_iommu_perdev_intremap is AMD-Vi specific
Date: Mon, 16 Jan 2023 09:04:24 +0200
Message-Id: <20230116070431.905594-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230116070431.905594-1-burzalodowa@gmail.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move its definition to the AMD-Vi driver and use CONFIG_AMD_IOMMU
to guard its usage in common code.

Take the opportunity to replace bool_t with bool and 1 with true.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v3:
  - the second arg of no_config_param() should have been the parameter name,
    i.e. "iommu", and not the boolean suboption "amd-iommu-perdev-intremap"

 xen/drivers/passthrough/amd/iommu_init.c | 2 ++
 xen/drivers/passthrough/iommu.c          | 5 ++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index 1f14aaf49e..9773ccfcb4 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -36,6 +36,8 @@ static struct radix_tree_root ivrs_maps;
 LIST_HEAD_READ_MOSTLY(amd_iommu_head);
 bool_t iommuv2_enabled;
 
+bool __ro_after_init amd_iommu_perdev_intremap = true;
+
 static bool iommu_has_ht_flag(struct amd_iommu *iommu, u8 mask)
 {
     return iommu->ht_flags & mask;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 5e2a720d29..9d95fb27d0 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -58,7 +58,6 @@ bool __read_mostly iommu_hap_pt_share = true;
 #endif
 
 bool_t __read_mostly iommu_debug;
-bool_t __read_mostly amd_iommu_perdev_intremap = 1;
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
@@ -116,7 +115,11 @@ static int __init cf_check parse_iommu_param(const char *s)
                 iommu_verbose = 1;
         }
         else if ( (val = parse_boolean("amd-iommu-perdev-intremap", s, ss)) >= 0 )
+#ifdef CONFIG_AMD_IOMMU
             amd_iommu_perdev_intremap = val;
+#else
+            no_config_param("AMD_IOMMU", "iommu", s, ss);
+#endif
         else if ( (val = parse_boolean("dom0-passthrough", s, ss)) >= 0 )
             iommu_hwdom_passthrough = val;
         else if ( (val = parse_boolean("dom0-strict", s, ss)) >= 0 )
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 07:05:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478338.741484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJYF-0007wz-91; Mon, 16 Jan 2023 07:05:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478338.741484; Mon, 16 Jan 2023 07:05:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJYF-0007wq-5v; Mon, 16 Jan 2023 07:05:03 +0000
Received: by outflank-mailman (input) for mailman id 478338;
 Mon, 16 Jan 2023 07:05:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W85+=5N=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pHJYD-00077f-Vk
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 07:05:02 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 17fe312e-956c-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 08:05:01 +0100 (CET)
Received: by mail-ed1-x52c.google.com with SMTP id v30so39457275edb.9
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 23:05:01 -0800 (PST)
Received: from uni.router.wind (adsl-67.109.242.224.tellas.gr.
 [109.242.224.67]) by smtp.googlemail.com with ESMTPSA id
 v15-20020a056402184f00b0046c5baa1f58sm10990824edy.97.2023.01.15.23.04.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 23:05:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17fe312e-956c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PpFKQyBz/d1EHnJYrbWB9uWY4J6FRm55N2ci72k9mak=;
        b=fstejPwJ4VDhCtYQDIAR7deIFG0CNd1qnz8sEUjV7iOuZNi9W+tIlxax5FWBDgQchi
         vl4Kncwjg+M40qUwzCErbWQcWY05Blp5f5b6wwuLPKzstAUfIEihspP86vQz0n05GEWC
         HSwxPkZwfOzTewVGX6qlB8adj6V3IRPesaTqtqM65B2efVWY1nu59l8Tbq3fVaG6pWhr
         G36ReBaRw25wXJ5/KGNQS9zmjFo8peHj6YBLHWdpY3zFHJfnX+h4WXoZd80jiyiWmE17
         zreqbFNUT5Tq//YPa8x8uDHpN2EN3ui7tUdoL3VwAikkoLJ98U04zJXmTY0vRN+1rLkU
         NGcQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=PpFKQyBz/d1EHnJYrbWB9uWY4J6FRm55N2ci72k9mak=;
        b=oSJfZJ6vT0FHf3wzf+9MJM2J8MzncoXXPiv5N6FBlzwF15QJ1c11IHiUjgzdyUcb3/
         ppCyysmfSo+9L6lCXwAIJHHpz8M074A6KTloOifsomayxTFWqm7FXbe7HBpzHjYI449S
         yucjDBONT3jlotKVsxqDRt0w7dxv2Rztv0/DTqr8WbEtS4SfQVKtjdqyU/BTlxJW8+bm
         685z8Jz2u/WyZh6TKhxSUrbmwaa3p3zAyVg4NdUmZkxfBvCDVrrQ1rsWIpjSAJcW3YSe
         iqfLwLSTHFJN/x8vi6BuCaTqqCcYYrM+hUIatfNWQsNdrOcx5y99cNP5RaMsXythjGOQ
         GMuw==
X-Gm-Message-State: AFqh2kpyktgq5fepoC71nP0yhbLndfjLS3sherXvZ8Erkdw1OnKWt87H
	dl/wgs1Z6kV9pBRcUHZlRr8utDb1JyU=
X-Google-Smtp-Source: AMrXdXuf/XxEfoHGwGw1ENaSPe9IGLDSq5yqMvSGhWEq+9x088+3yFDlog7T3dGMjfvZh4IoOt02Yw==
X-Received: by 2002:aa7:cb94:0:b0:496:6a20:6b61 with SMTP id r20-20020aa7cb94000000b004966a206b61mr29625347edt.22.1673852700440;
        Sun, 15 Jan 2023 23:05:00 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 4/8] x86/acpi: separate AMD-Vi and VT-d specific functions
Date: Mon, 16 Jan 2023 09:04:27 +0200
Message-Id: <20230116070431.905594-5-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230116070431.905594-1-burzalodowa@gmail.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The functions acpi_dmar_init() and acpi_dmar_zap/reinstate() are
VT-d specific, while the function acpi_ivrs_init() is AMD-Vi specific.
To eliminate dead code, they need to be guarded under CONFIG_INTEL_IOMMU
and CONFIG_AMD_IOMMU, respectively.

Instead of adding #ifdef guards around the function calls, implement them
as empty static inline functions.
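As a rough standalone illustration of the stub approach (this is a sketch,
not the actual Xen headers; the commented-out macro stands in for the
Kconfig-generated CONFIG_INTEL_IOMMU symbol):

```c
#include <errno.h>

/* Stand-in for the Kconfig-generated symbol; leave it undefined to
   mimic a build with INTEL_IOMMU disabled. */
/* #define CONFIG_INTEL_IOMMU */

#ifdef CONFIG_INTEL_IOMMU
/* Real declarations; the implementations live in the VT-d driver. */
int acpi_dmar_init(void);
void acpi_dmar_zap(void);
#else
/* Empty static inline stubs: call sites stay free of #ifdef, and the
   compiler simply folds the calls away in disabled builds. */
static inline int acpi_dmar_init(void) { return -ENODEV; }
static inline void acpi_dmar_zap(void) {}
#endif
```

With the option disabled, an unconditional call site such as
`rc = acpi_dmar_init();` still compiles and merely observes -ENODEV.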

Take the opportunity to move the declaration of acpi_dmar_init from the
x86 arch-specific header to the common header, since Intel VT-d has also
been used on IA-64 platforms.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v3:
  - move the declarations of Intel VT-d functions to the common header,
    because Intel VT-d has also been used on IA-64 platforms, and update
    the commit log accordingly

 xen/arch/x86/include/asm/acpi.h | 6 +++++-
 xen/include/xen/acpi.h          | 7 +++++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/acpi.h b/xen/arch/x86/include/asm/acpi.h
index c453450a74..6ce79ce465 100644
--- a/xen/arch/x86/include/asm/acpi.h
+++ b/xen/arch/x86/include/asm/acpi.h
@@ -140,8 +140,12 @@ extern u32 pmtmr_ioport;
 extern unsigned int pmtmr_width;
 
 void acpi_iommu_init(void);
-int acpi_dmar_init(void);
+
+#ifdef CONFIG_AMD_IOMMU
 int acpi_ivrs_init(void);
+#else
+static inline int acpi_ivrs_init(void) { return -ENODEV; }
+#endif
 
 void acpi_mmcfg_init(void);
 
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index 1b9c75e68f..352f27f6a7 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -206,8 +206,15 @@ static inline int acpi_get_pxm(acpi_handle handle)
 
 void acpi_reboot(void);
 
+#ifdef CONFIG_INTEL_IOMMU
+int acpi_dmar_init(void);
 void acpi_dmar_zap(void);
 void acpi_dmar_reinstate(void);
+#else
+static inline int acpi_dmar_init(void) { return -ENODEV; }
+static inline void acpi_dmar_zap(void) {}
+static inline void acpi_dmar_reinstate(void) {}
+#endif
 
 #endif /* __ASSEMBLY__ */
 
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 07:05:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478336.741464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJYB-0007Nn-Nk; Mon, 16 Jan 2023 07:04:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478336.741464; Mon, 16 Jan 2023 07:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJYB-0007Ng-Ka; Mon, 16 Jan 2023 07:04:59 +0000
Received: by outflank-mailman (input) for mailman id 478336;
 Mon, 16 Jan 2023 07:04:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W85+=5N=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pHJY9-00077f-Su
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 07:04:58 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 154c79fc-956c-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 08:04:56 +0100 (CET)
Received: by mail-ed1-x535.google.com with SMTP id y19so5875906edc.2
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 23:04:56 -0800 (PST)
Received: from uni.router.wind (adsl-67.109.242.224.tellas.gr.
 [109.242.224.67]) by smtp.googlemail.com with ESMTPSA id
 v15-20020a056402184f00b0046c5baa1f58sm10990824edy.97.2023.01.15.23.04.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 23:04:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 154c79fc-956c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=U1neDOzq+0HPMCn8Gxw25YMbFizPFUqx82cwaRPbvKM=;
        b=YOCIbQpk3MkERrpCOGsmLsudRonoBRC3lSPg7RQfcsd9xhfXCk/KBTc+DRSht/tUU6
         4pBhqpOIL2L610GIkFzYARa64U0yDQzYqZLoih7RlffrZqM8q5vgK13RnpWMstTwdn2S
         Bl8uYGk3qQw4JwXGKcKrbTL0iBGVJGbje/wuZHVJWH2bKGdBWRwBpRtjrH1K+dI+YQ0i
         fc6ipaQ4lkdu/Jq6RYDwcv5yAbcamm1PHHWgUBPIcTN1ea5jWTsIXHssi3BIov31xugU
         ZsTvaPTvQQPtunNGSCXV6pfEPDmsbgBEZD1fDiY/sAyrIxnN5UUcMYp6Of6+qY/+rCP1
         a4eg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=U1neDOzq+0HPMCn8Gxw25YMbFizPFUqx82cwaRPbvKM=;
        b=pJcmf0Kamzb+tSTGZm+tlZeP2P1XwzJepKmZVdN6K53Kle9vc6WjGZFy2qpv3c256d
         HeYElcArT5zU30p66XVsjhEgIlyPw9ntw7Bekdf4oSeZjHtaUo/84Gc8T1Zu9SPuYamL
         VyjwXQj5HD3DBkXpK6I0kPyryt+S9g7cuzvT1rtbDdrTUrOvN0ZJQ+oc8L+7p7giLM40
         RFaWnQVPSPZmaR6GoIvs9KbgHyRCAHhbuehl8mdOUoLBH1Hc8ghUPSGDj4b4G3w0oEYf
         +Z1OwQwoiTLiIYns/p8lIl+GbGaF8W4oCvWKBeb7zzH+XGEIGkA6QPw8+bOrZ5hYtkk1
         d0ww==
X-Gm-Message-State: AFqh2ko3ipfIEnCSWv3u4BYu7HtLC/7hUswzJ+b2ZYqULA2z1k7ooyzz
	qWK/X43s2x0RTR3hHcP8PxiZyfBhFWo=
X-Google-Smtp-Source: AMrXdXs4zO27qoHXCunCPvFvXsJ+dOF/tfY+KglxUAtIxkhOZZxJK9V4h89mHXIwXNl/jGdEx7nhpQ==
X-Received: by 2002:a50:ed91:0:b0:48e:c073:9453 with SMTP id h17-20020a50ed91000000b0048ec0739453mr10559134edr.15.1673852695898;
        Sun, 15 Jan 2023 23:04:55 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 2/8] x86/iommu: iommu_igfx and iommu_qinval are Intel VT-d specific
Date: Mon, 16 Jan 2023 09:04:25 +0200
Message-Id: <20230116070431.905594-3-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230116070431.905594-1-burzalodowa@gmail.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use CONFIG_INTEL_IOMMU to guard the usage of iommu_igfx and iommu_qinval
in common code.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v3:
  - handle iommu_snoop case in a different patch and update commit msg
  - use no_config_param() to print a warning when the user sets an
    INTEL_IOMMU-specific string in the iommu boot parameter and INTEL_IOMMU
    is disabled

 xen/drivers/passthrough/iommu.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9d95fb27d0..b4dfa95dfd 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -82,11 +82,19 @@ static int __init cf_check parse_iommu_param(const char *s)
         else if ( ss == s + 23 && !strncmp(s, "quarantine=scratch-page", 23) )
             iommu_quarantine = IOMMU_quarantine_scratch_page;
 #endif
-#ifdef CONFIG_X86
         else if ( (val = parse_boolean("igfx", s, ss)) >= 0 )
+#ifdef CONFIG_INTEL_IOMMU
             iommu_igfx = val;
+#else
+            no_config_param("INTEL_IOMMU", "iommu", s, ss);
+#endif
         else if ( (val = parse_boolean("qinval", s, ss)) >= 0 )
+#ifdef CONFIG_INTEL_IOMMU
             iommu_qinval = val;
+#else
+            no_config_param("INTEL_IOMMU", "iommu", s, ss);
+#endif
+#ifdef CONFIG_X86
         else if ( (val = parse_boolean("superpages", s, ss)) >= 0 )
             iommu_superpages = val;
 #endif
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:04 2023
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 3/8] x86/iommu: snoop control is allowed only by Intel VT-d
Date: Mon, 16 Jan 2023 09:04:26 +0200
Message-Id: <20230116070431.905594-4-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230116070431.905594-1-burzalodowa@gmail.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The AMD-Vi driver forces coherent accesses by hardwiring the FC bit to 1.
Therefore, given that iommu_snoop is consulted only when the iommu is
enabled, when Xen is configured with only the AMD iommu driver enabled,
iommu_snoop can be reduced to a #define evaluating to true.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v3:
  - new patch
    This patch depends on Jan's patch "x86/shadow: sanitize iommu_snoop usage"
    to ensure that iommu_snoop is used only when the iommu is enabled

 xen/include/xen/iommu.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 4f22fc1bed..626731941b 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -74,7 +74,12 @@ extern enum __packed iommu_intremap {
    iommu_intremap_restricted,
    iommu_intremap_full,
 } iommu_intremap;
-extern bool iommu_igfx, iommu_qinval, iommu_snoop;
+extern bool iommu_igfx, iommu_qinval;
+#ifdef CONFIG_INTEL_IOMMU
+extern bool iommu_snoop;
+#else
+# define iommu_snoop true
+#endif /* CONFIG_INTEL_IOMMU */
 #else
 # define iommu_intremap false
 # define iommu_snoop false
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:06 2023
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 5/8] x86/iommu: make code addressing CVE-2011-1898 not VT-d specific
Date: Mon, 16 Jan 2023 09:04:28 +0200
Message-Id: <20230116070431.905594-6-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230116070431.905594-1-burzalodowa@gmail.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variable untrusted_msi indicates whether the system is vulnerable to
CVE-2011-1898 due to the absence of interrupt remapping support.
AMD iommus with interrupt remapping disabled are exposed as well.
Therefore, move the definition of the variable to the common x86 iommu code.

Also, since the current implementation assumes that only PV guests are prone
to this attack, take the opportunity to define untrusted_msi only when PV is
enabled.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v3:
  - change untrusted_msi from being VT-d specific to PV specific and
    update commit log accordingly
  - remove unnecessary #ifdef guard from its declaration

 xen/drivers/passthrough/vtd/iommu.c | 3 ---
 xen/drivers/passthrough/x86/iommu.c | 5 +++++
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 62e143125d..dae2426cb9 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -54,9 +54,6 @@
                                  ? dom_iommu(d)->arch.vtd.pgd_maddr \
                                  : (pdev)->arch.vtd.pgd_maddr)
 
-/* Possible unfiltered LAPIC/MSI messages from untrusted sources? */
-bool __read_mostly untrusted_msi;
-
 bool __read_mostly iommu_igfx = true;
 bool __read_mostly iommu_qinval = true;
 #ifndef iommu_snoop
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f671b0f2bb..c5021ea023 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -36,6 +36,11 @@ bool __initdata iommu_superpages = true;
 
 enum iommu_intremap __read_mostly iommu_intremap = iommu_intremap_full;
 
+#ifdef CONFIG_PV
+/* Possible unfiltered LAPIC/MSI messages from untrusted sources? */
+bool __read_mostly untrusted_msi;
+#endif
+
 #ifndef iommu_intpost
 /*
  * In the current implementation of VT-d posted interrupts, in some extreme
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:10 2023
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v3 6/8] x86/iommu: call pi_update_irte through an hvm_function callback
Date: Mon, 16 Jan 2023 09:04:29 +0200
Message-Id: <20230116070431.905594-7-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230116070431.905594-1-burzalodowa@gmail.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Posted interrupt support in Xen is currently implemented only for Intel
platforms. Instead of calling pi_update_irte() directly from the common
hvm code, add a pi_update_irte callback to the hvm_function_table. Then,
create a wrapper function hvm_pi_update_irte() to be used by the common
hvm code.

In the pi_update_irte callback prototype, pass the vcpu as the first
parameter instead of the platform-specific posted-interrupt descriptor,
and remove the const qualifier from the gvec parameter, since it is not
needed and does not compile with the alternative code patching in use.

Since the posted interrupt descriptor is Intel VT-x specific while
msi_msg_write_remap_rte is iommu specific, open code pi_update_irte() inside
vmx_pi_update_irte() but replace msi_msg_write_remap_rte() with generic
iommu_update_ire_from_msi(). That way vmx_pi_update_irte() is not bound to
Intel VT-d anymore.

Remove the now unused pi_update_irte() implementation.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v3:
  - open code pi_update_irte() in vmx_pi_update_irte() but instead of using
    the VT-d specific function msi_msg_write_remap_rte() use the generic
    iommu_update_ire_from_msi()
  - delete pi_update_irte() from vtd/intremap.c
  - move the initialization of vmx pi_update_irte stub to start_vmx() and
    perform it only if iommu_intpost is enabled.
  - move pi_update_irte right after handle_eoi

 xen/arch/x86/hvm/vmx/vmx.c             | 41 ++++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/hvm.h     | 10 +++++++
 xen/arch/x86/include/asm/iommu.h       |  3 --
 xen/drivers/passthrough/vtd/intremap.c | 36 ----------------------
 xen/drivers/passthrough/x86/hvm.c      |  5 ++--
 5 files changed, 53 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 15a07933ee..a5355dbac2 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -396,6 +396,43 @@ void vmx_pi_hooks_deassign(struct domain *d)
     domain_unpause(d);
 }
 
+/*
+ * This function is used to update the IRTE for posted-interrupt
+ * when guest changes MSI/MSI-X information.
+ */
+static int cf_check vmx_pi_update_irte(const struct vcpu *v,
+                                       const struct pirq *pirq, uint8_t gvec)
+{
+    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
+    struct irq_desc *desc;
+    struct msi_desc *msi_desc;
+    int rc;
+
+    desc = pirq_spin_lock_irq_desc(pirq, NULL);
+    if ( !desc )
+        return -EINVAL;
+
+    msi_desc = desc->msi_desc;
+    if ( !msi_desc )
+    {
+        rc = -ENODEV;
+        goto unlock_out;
+    }
+    msi_desc->pi_desc = pi_desc;
+    msi_desc->gvec = gvec;
+
+    spin_unlock_irq(&desc->lock);
+
+    ASSERT(pcidevs_locked());
+
+    return iommu_update_ire_from_msi(msi_desc, &msi_desc->msg);
+
+ unlock_out:
+    spin_unlock_irq(&desc->lock);
+
+    return rc;
+}
+
 static const struct lbr_info {
     u32 base, count;
 } p4_lbr[] = {
@@ -2969,8 +3006,12 @@ const struct hvm_function_table * __init start_vmx(void)
     {
         alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
         if ( iommu_intpost )
+        {
             alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
 
+            vmx_function_table.pi_update_irte = vmx_pi_update_irte;
+        }
+
         vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
         vmx_function_table.test_pir            = vmx_test_pir;
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 93254651f2..b1990a047c 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -28,6 +28,8 @@
 #include <asm/x86_emulate.h>
 #include <asm/hvm/asid.h>
 
+struct pirq; /* needed by pi_update_irte */
+
 #ifdef CONFIG_HVM_FEP
 /* Permit use of the Forced Emulation Prefix in HVM guests */
 extern bool_t opt_hvm_fep;
@@ -213,6 +215,8 @@ struct hvm_function_table {
     void (*sync_pir_to_irr)(struct vcpu *v);
     bool (*test_pir)(const struct vcpu *v, uint8_t vector);
     void (*handle_eoi)(uint8_t vector, int isr);
+    int (*pi_update_irte)(const struct vcpu *v, const struct pirq *pirq,
+                          uint8_t gvec);
 
     /*Walk nested p2m  */
     int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa,
@@ -774,6 +778,12 @@ static inline void hvm_set_nonreg_state(struct vcpu *v,
         alternative_vcall(hvm_funcs.set_nonreg_state, v, nrs);
 }
 
+static inline int hvm_pi_update_irte(const struct vcpu *v,
+                                     const struct pirq *pirq, uint8_t gvec)
+{
+    return alternative_call(hvm_funcs.pi_update_irte, v, pirq, gvec);
+}
+
 #else  /* CONFIG_HVM */
 
 #define hvm_enabled false
diff --git a/xen/arch/x86/include/asm/iommu.h b/xen/arch/x86/include/asm/iommu.h
index fc0afe35bf..4794e72cf1 100644
--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -129,9 +129,6 @@ void iommu_identity_map_teardown(struct domain *d);
 
 extern bool untrusted_msi;
 
-int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-                   const uint8_t gvec);
-
 extern bool iommu_non_coherent, iommu_superpages;
 
 static inline void iommu_sync_cache(const void *addr, unsigned int size)
diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
index 1512e4866b..b39bc83282 100644
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -866,39 +866,3 @@ void cf_check intel_iommu_disable_eim(void)
     for_each_drhd_unit ( drhd )
         disable_qinval(drhd->iommu);
 }
-
-/*
- * This function is used to update the IRTE for posted-interrupt
- * when guest changes MSI/MSI-X information.
- */
-int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-    const uint8_t gvec)
-{
-    struct irq_desc *desc;
-    struct msi_desc *msi_desc;
-    int rc;
-
-    desc = pirq_spin_lock_irq_desc(pirq, NULL);
-    if ( !desc )
-        return -EINVAL;
-
-    msi_desc = desc->msi_desc;
-    if ( !msi_desc )
-    {
-        rc = -ENODEV;
-        goto unlock_out;
-    }
-    msi_desc->pi_desc = pi_desc;
-    msi_desc->gvec = gvec;
-
-    spin_unlock_irq(&desc->lock);
-
-    ASSERT(pcidevs_locked());
-
-    return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
-
- unlock_out:
-    spin_unlock_irq(&desc->lock);
-
-    return rc;
-}
diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index a16e0e5344..e720461a14 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -381,8 +381,7 @@ int pt_irq_create_bind(
 
         /* Use interrupt posting if it is supported. */
         if ( iommu_intpost )
-            pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
-                           info, pirq_dpci->gmsi.gvec);
+            hvm_pi_update_irte(vcpu, info, pirq_dpci->gmsi.gvec);
 
         if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
         {
@@ -672,7 +671,7 @@ int pt_irq_destroy_bind(
             what = "bogus";
     }
     else if ( pirq_dpci && pirq_dpci->gmsi.posted )
-        pi_update_irte(NULL, pirq, 0);
+        hvm_pi_update_irte(NULL, pirq, 0);
 
     if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
          list_empty(&pirq_dpci->digl_list) )
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:11 2023
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 7/8] x86/dpci: move hvm_dpci_isairq_eoi() to generic HVM code
Date: Mon, 16 Jan 2023 09:04:30 +0200
Message-Id: <20230116070431.905594-8-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230116070431.905594-1-burzalodowa@gmail.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function hvm_dpci_isairq_eoi() has no dependencies on VT-d driver code
and can be moved from xen/drivers/passthrough/vtd/x86/hvm.c to
xen/drivers/passthrough/x86/hvm.c, along with the corresponding copyrights.

Remove the now empty xen/drivers/passthrough/vtd/x86/hvm.c.

Since the function is used only in this file, declare it static.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---

Changes in v3:
  - in _hvm_dpci_isairq_eoi(), make hvm_irq pointer to const
  - in hvm_dpci_isairq_eoi(), make dpci pointer to const, add a blank line
    after ASSERT(isairq < NR_ISAIRQS), add a blank line before write_unlock()
  - fix typo in commit log, s/funcion/function/
  - add Jan's Reviewed-by tag

 xen/drivers/passthrough/vtd/x86/Makefile |  1 -
 xen/drivers/passthrough/vtd/x86/hvm.c    | 64 ------------------------
 xen/drivers/passthrough/x86/hvm.c        | 45 +++++++++++++++++
 xen/include/xen/iommu.h                  |  1 -
 4 files changed, 45 insertions(+), 66 deletions(-)
 delete mode 100644 xen/drivers/passthrough/vtd/x86/hvm.c

diff --git a/xen/drivers/passthrough/vtd/x86/Makefile b/xen/drivers/passthrough/vtd/x86/Makefile
index 4ef00a4c5b..fe20a0b019 100644
--- a/xen/drivers/passthrough/vtd/x86/Makefile
+++ b/xen/drivers/passthrough/vtd/x86/Makefile
@@ -1,3 +1,2 @@
 obj-y += ats.o
-obj-$(CONFIG_HVM) += hvm.o
 obj-y += vtd.o
diff --git a/xen/drivers/passthrough/vtd/x86/hvm.c b/xen/drivers/passthrough/vtd/x86/hvm.c
deleted file mode 100644
index bc776cf7da..0000000000
--- a/xen/drivers/passthrough/vtd/x86/hvm.c
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Copyright (c) 2008, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; If not, see <http://www.gnu.org/licenses/>.
- *
- * Copyright (C) Allen Kay <allen.m.kay@intel.com>
- * Copyright (C) Weidong Han <weidong.han@intel.com>
- */
-
-#include <xen/iommu.h>
-#include <xen/irq.h>
-#include <xen/sched.h>
-
-static int cf_check _hvm_dpci_isairq_eoi(
-    struct domain *d, struct hvm_pirq_dpci *pirq_dpci, void *arg)
-{
-    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
-    unsigned int isairq = (long)arg;
-    const struct dev_intx_gsi_link *digl;
-
-    list_for_each_entry ( digl, &pirq_dpci->digl_list, list )
-    {
-        unsigned int link = hvm_pci_intx_link(digl->device, digl->intx);
-
-        if ( hvm_irq->pci_link.route[link] == isairq )
-        {
-            hvm_pci_intx_deassert(d, digl->device, digl->intx);
-            if ( --pirq_dpci->pending == 0 )
-                pirq_guest_eoi(dpci_pirq(pirq_dpci));
-        }
-    }
-
-    return 0;
-}
-
-void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq)
-{
-    struct hvm_irq_dpci *dpci = NULL;
-
-    ASSERT(isairq < NR_ISAIRQS);
-    if ( !is_iommu_enabled(d) )
-        return;
-
-    write_lock(&d->event_lock);
-
-    dpci = domain_get_irq_dpci(d);
-
-    if ( dpci && test_bit(isairq, dpci->isairq_map) )
-    {
-        /* Multiple mirq may be mapped to one isa irq */
-        pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
-    }
-    write_unlock(&d->event_lock);
-}
diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index e720461a14..6bbd04bf3d 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -14,6 +14,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  *
  * Copyright (C) Allen Kay <allen.m.kay@intel.com>
+ * Copyright (C) Weidong Han <weidong.han@intel.com>
  * Copyright (C) Xiaohui Xin <xiaohui.xin@intel.com>
  */
 
@@ -924,6 +925,50 @@ static void hvm_gsi_eoi(struct domain *d, unsigned int gsi)
     hvm_pirq_eoi(pirq);
 }
 
+static int cf_check _hvm_dpci_isairq_eoi(
+    struct domain *d, struct hvm_pirq_dpci *pirq_dpci, void *arg)
+{
+    const struct hvm_irq *hvm_irq = hvm_domain_irq(d);
+    unsigned int isairq = (long)arg;
+    const struct dev_intx_gsi_link *digl;
+
+    list_for_each_entry ( digl, &pirq_dpci->digl_list, list )
+    {
+        unsigned int link = hvm_pci_intx_link(digl->device, digl->intx);
+
+        if ( hvm_irq->pci_link.route[link] == isairq )
+        {
+            hvm_pci_intx_deassert(d, digl->device, digl->intx);
+            if ( --pirq_dpci->pending == 0 )
+                pirq_guest_eoi(dpci_pirq(pirq_dpci));
+        }
+    }
+
+    return 0;
+}
+
+static void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq)
+{
+    const struct hvm_irq_dpci *dpci = NULL;
+
+    ASSERT(isairq < NR_ISAIRQS);
+
+    if ( !is_iommu_enabled(d) )
+        return;
+
+    write_lock(&d->event_lock);
+
+    dpci = domain_get_irq_dpci(d);
+
+    if ( dpci && test_bit(isairq, dpci->isairq_map) )
+    {
+        /* Multiple mirq may be mapped to one isa irq */
+        pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
+    }
+
+    write_unlock(&d->event_lock);
+}
+
 void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi)
 {
     const struct hvm_irq_dpci *hvm_irq_dpci;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 626731941b..405db59971 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -201,7 +201,6 @@ int hvm_do_IRQ_dpci(struct domain *, struct pirq *);
 int pt_irq_create_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
 int pt_irq_destroy_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
 
-void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq);
 struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *);
 void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
 
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:05:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 07:05:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478344.741525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJYP-00015Z-Me; Mon, 16 Jan 2023 07:05:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478344.741525; Mon, 16 Jan 2023 07:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJYP-000155-HQ; Mon, 16 Jan 2023 07:05:13 +0000
Received: by outflank-mailman (input) for mailman id 478344;
 Mon, 16 Jan 2023 07:05:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W85+=5N=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pHJYN-00077f-Nx
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 07:05:11 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1dcc2f81-956c-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 08:05:10 +0100 (CET)
Received: by mail-ed1-x535.google.com with SMTP id w14so22136567edi.5
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 23:05:10 -0800 (PST)
Received: from uni.router.wind (adsl-67.109.242.224.tellas.gr.
 [109.242.224.67]) by smtp.googlemail.com with ESMTPSA id
 v15-20020a056402184f00b0046c5baa1f58sm10990824edy.97.2023.01.15.23.05.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 15 Jan 2023 23:05:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dcc2f81-956c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iFtAIFWagelWwYtF1JcxxpIotxhdCnsB0YtD2VA9RRo=;
        b=RT//jZDLvmix66SORRWgKClrn5/Rfa9SfWfdd01NqS2+KeNxcbUjgD3ynhoeDOC9FN
         8cH7b0b0SImNbTWC/H4xsCDsBPMVp8kXunVuILoXAhGyRBUlYaGXpvsSOUeWOWGJWgLh
         v5VkLHYur8nQMtheKKXu5/v9FkBWpwDUshk6X12qLiGKgU+JICuO+8dSoYH/GySQQ5RL
         HsbRWRphVQcSKP7y3m6RbWVXV5sRa0/v54n6BVf/+i0UrZRHT5jYnfmO0sfZxnR1teaY
         w+BL6m1tHsR2Wnl6Jh+nRkR5OI2OpVJ6DRGi456WAbapgq/Vo+gcREbkB+bv3mCZIX6T
         JMwQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=iFtAIFWagelWwYtF1JcxxpIotxhdCnsB0YtD2VA9RRo=;
        b=E6ngn+JYNwQLOVfKCgPLIjqieR8j3XFFtQam0Zc4mwe6RP2zjUmHku6XEiiZq4oTFi
         XGX7V9xhONDYSaKn1Y5tivg13ZHj09V5DQRPCVFoJzzWX+epp/N/Ek1+XQKqRFM9LTMS
         9EICp8nDSWqTbhVgiljXfZJMplmg9cMPmRWZQ9h8jkg046v9CFJV6X5mkFlkKq12d3sj
         ECJmNBICsEcYQsJV0NmUSyA8cUw6Tv4cu2yDW69fiFgyK9vfR90rKavtlU+gUh+rrMXl
         Iwm7JD8hy8F6hoByhFIYtQK4EGWelDsAkR7AHejQCFDnZSKbzn5Ov2Te1z/eu3gR9IR0
         8dPQ==
X-Gm-Message-State: AFqh2kogGiOtgvq8pym3I49bUEwfvUbBzSr+HDP61VE36wVEZ9d1IG9f
	EEzA20RYSb6Ab+P7toO1AtukoEGktW4=
X-Google-Smtp-Source: AMrXdXt1cCRLvVR7hwmgT7ueBnmamvx7LubZMuPwhQ4tixWhuyn7PeJawKU0VJYORkt4+PIMEiOKkA==
X-Received: by 2002:aa7:c946:0:b0:496:4d2f:1b4b with SMTP id h6-20020aa7c946000000b004964d2f1b4bmr9099920edt.7.1673852710095;
        Sun, 15 Jan 2023 23:05:10 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 8/8] x86/iommu: make AMD-Vi and Intel VT-d support configurable
Date: Mon, 16 Jan 2023 09:04:31 +0200
Message-Id: <20230116070431.905594-9-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230116070431.905594-1-burzalodowa@gmail.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Provide the user with configuration control over the IOMMU support by making
the AMD_IOMMU and INTEL_IOMMU options user-selectable and able to be turned off.

However, there are cases where IOMMU support is required, for instance on a
system with more than 254 CPUs. In order to prevent users from unknowingly
disabling it and ending up with a broken hypervisor, make the support
user-selectable only if EXPERT is enabled.

To preserve the current default configuration of an x86 system, both options
depend on X86 and default to Y.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---

Changes in v3:
  - add Jan's Acked-by tag

 xen/drivers/passthrough/Kconfig | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 5c65567744..864fcf3b0c 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -38,10 +38,28 @@ config IPMMU_VMSA
 endif
 
 config AMD_IOMMU
-	def_bool y if X86
+	bool "AMD IOMMU" if EXPERT
+	depends on X86
+	default y
+	help
+	  Enables I/O virtualization on platforms that implement the
+	  AMD I/O Virtualization Technology (IOMMU).
+
+	  If your system includes an IOMMU implementing AMD-Vi, say Y.
+	  This is required if your system has more than 254 CPUs.
+	  If in doubt, say Y.
 
 config INTEL_IOMMU
-	def_bool y if X86
+	bool "Intel VT-d" if EXPERT
+	depends on X86
+	default y
+	help
+	  Enables I/O virtualization on platforms that implement the
+	  Intel Virtualization Technology for Directed I/O (Intel VT-d).
+
+	  If your system includes an IOMMU implementing Intel VT-d, say Y.
+	  This is required if your system has more than 254 CPUs.
+	  If in doubt, say Y.
 
 config IOMMU_FORCE_PT_SHARE
 	bool
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:21:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 07:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478380.741534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJny-0005KU-34; Mon, 16 Jan 2023 07:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478380.741534; Mon, 16 Jan 2023 07:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHJny-0005KN-0Z; Mon, 16 Jan 2023 07:21:18 +0000
Received: by outflank-mailman (input) for mailman id 478380;
 Mon, 16 Jan 2023 07:21:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W85+=5N=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pHJnv-0005KH-Vf
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 07:21:16 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5c48ae01-956e-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 08:21:14 +0100 (CET)
Received: by mail-ej1-x634.google.com with SMTP id bk15so8396785ejb.9
 for <xen-devel@lists.xenproject.org>; Sun, 15 Jan 2023 23:21:14 -0800 (PST)
Received: from [192.168.1.93] (adsl-115.37.6.0.tellas.gr. [37.6.0.115])
 by smtp.gmail.com with ESMTPSA id
 g2-20020a170906538200b0085c3f08081esm6688249ejo.13.2023.01.15.23.21.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 15 Jan 2023 23:21:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c48ae01-956e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:references:cc:to:from
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=8JtSkaQcflK1ahTg7wrc6bjO4kE/9rKv9Ni62yEvH/s=;
        b=D4ANXSIWN1we4EyrbxP6TS1dKkdqt7xSOuxTxGbYUdBN4UmzHDoEmSVgiq86cmW/I1
         38Y0Zz9wfRY8pNV+93qFGGoyIG9zAEUEAlmjxoDnDWpvolBOmPT8lupQsLQBA3n0jPpM
         NHJsaSmwBYEpGSaVKY2AJN4nuPMKnbrp7Lt54VIWcHwKHzMBgeo3tmDR3g+Ou6+g2Mzk
         bbeO0+2g1ogUNvo80T/IM3CrdJnzgUzQIlRSes0NLvBm9bOWiZ0+N8pAzRrgpAeXnqIL
         D9uuhD5cBB8lSdpMeD8NMVa+0HfVDprE6Wl3FgrnR6TQLxmOYv88qIvV69lth18OcqKT
         /CiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:references:cc:to:from
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=8JtSkaQcflK1ahTg7wrc6bjO4kE/9rKv9Ni62yEvH/s=;
        b=Rjq/0K7q3hNGZxmFUciag7FTWh2NkUjcS8bCBaJgHmlBvpqPJLiJSnTL5GtH/swSqq
         +UJL7DfpKDHeX2W+Ciq4QBnayVEvDoqrQoXvhjvHL9H95aWoPBMI2WALhwr8A2sl/mVL
         LOB5tM+0EkHiwflmNdmLcuR+2fi8zmOGBp8ijn3xkfK+Viu2Gjkr+kzwB/nlLiDeC0j6
         vY7E8GoEMsVzpLctoHSSaIqoHpEUYenPJ4N87pJId1XEpFEse/qRMHnKTJ6XGAs67foS
         7pTlHXNuID1XTn00INq66gp/zQJlu5k/l4Q3SOISEiJxbJ5JiMEAQAJ5xxPAHdho1AF5
         agnQ==
X-Gm-Message-State: AFqh2koyd+rzak5l+gGZA6cQRhwYDL5bhY/Ncis8f/O5vL4OQORtuZHd
	8jWMrHoKSx2prHxhwOV4hWZ/dXJsWIk=
X-Google-Smtp-Source: AMrXdXv5qu3GRSkdYu6nL25slAEnZKoB4CTJIsdIjI3L02KPY4u05mp84FrUfX19YuL3tOUUz6olaA==
X-Received: by 2002:a17:907:cb89:b0:870:8beb:9735 with SMTP id un9-20020a170907cb8900b008708beb9735mr1357971ejc.54.1673853673953;
        Sun, 15 Jan 2023 23:21:13 -0800 (PST)
Message-ID: <f620f8ee-e852-de62-53ef-5cd243e4cc09@gmail.com>
Date: Mon, 16 Jan 2023 09:21:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v3 5/8] x86/iommu: make code addressing CVE-2011-1898 not
 VT-d specific
Content-Language: en-US
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian <kevin.tian@intel.com>, Jan Beulich <jbeulich@suse.com>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
 <20230116070431.905594-6-burzalodowa@gmail.com>
In-Reply-To: <20230116070431.905594-6-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/16/23 09:04, Xenia Ragiadakou wrote:
> The variable untrusted_msi indicates whether the system is vulnerable to
> CVE-2011-1898 due to the absence of interrupt remapping support.
> AMD IOMMUs with interrupt remapping disabled are also exposed.
> Therefore, move the definition of the variable to the common x86 iommu code.
> 
> Also, since the current implementation assumes that only PV guests are prone
> to this attack, take the opportunity to define untrusted_msi only when PV is
> enabled.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> ---
> 
> Changes in v3:
>    - change untrusted_msi from being VT-d specific to PV specific and
>      update commit log accordingly
>    - remove unnecessary #ifdef guard from its declaration
> 
>   xen/drivers/passthrough/vtd/iommu.c | 3 ---
>   xen/drivers/passthrough/x86/iommu.c | 5 +++++
>   2 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 62e143125d..dae2426cb9 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -54,9 +54,6 @@
>                                    ? dom_iommu(d)->arch.vtd.pgd_maddr \
>                                    : (pdev)->arch.vtd.pgd_maddr)
>   
> -/* Possible unfiltered LAPIC/MSI messages from untrusted sources? */
> -bool __read_mostly untrusted_msi;
> -
>   bool __read_mostly iommu_igfx = true;
>   bool __read_mostly iommu_qinval = true;
>   #ifndef iommu_snoop
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index f671b0f2bb..c5021ea023 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -36,6 +36,11 @@ bool __initdata iommu_superpages = true;
>   
>   enum iommu_intremap __read_mostly iommu_intremap = iommu_intremap_full;
>   
> +#ifdef CONFIG_PV
> +/* Possible unfiltered LAPIC/MSI messages from untrusted sources? */
> +bool __read_mostly untrusted_msi;
> +#endif
> +
>   #ifndef iommu_intpost
>   /*
>    * In the current implementation of VT-d posted interrupts, in some extreme

Here, somehow I missed the part:
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index dae2426cb9..e97b1fe8cd 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2767,6 +2767,7 @@ static int cf_check reassign_device_ownership(
          if ( !has_arch_pdevs(target) )
              vmx_pi_hooks_assign(target);

+#ifdef CONFIG_PV
          /*
           * Devices assigned to untrusted domains (here assumed to be any domU)
           * can attempt to send arbitrary LAPIC/MSI messages. We are unprotected
@@ -2775,6 +2776,7 @@ static int cf_check reassign_device_ownership(
          if ( !iommu_intremap && !is_hardware_domain(target) &&
               !is_system_domain(target) )
              untrusted_msi = true;
+#endif

          ret = domain_context_mapping(target, devfn, pdev);
I will fix it in the next version.

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:49:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 07:49:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478386.741544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKEu-0007vO-Ay; Mon, 16 Jan 2023 07:49:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478386.741544; Mon, 16 Jan 2023 07:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKEu-0007vH-8K; Mon, 16 Jan 2023 07:49:08 +0000
Received: by outflank-mailman (input) for mailman id 478386;
 Mon, 16 Jan 2023 07:49:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OEfv=5N=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHKEt-0007vB-DP
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 07:49:07 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3ffbc088-9572-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 08:49:05 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 9D36E2214D;
 Mon, 16 Jan 2023 07:49:04 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4E5B3139C2;
 Mon, 16 Jan 2023 07:49:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id b3W6EXABxWPleQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 16 Jan 2023 07:49:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ffbc088-9572-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673855344; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=kI/9n/LxQnfTez/eAqP/SHUYnXorN1nObJCwWRHa0cI=;
	b=UwqZlIz0oE8CKcGIpoIosgmA+QOss5ViBnHe8aj7ZqSY5FqW0u4WIx3nQXNSyMxwlh9/SE
	lTawv06K0fmHq0YU8vLNKBoo7YLrnEDgvWrgle78CeJLrKKfhguHWQLbvQltZ70Ur9bCmy
	Sav9IiyylmGEcoisYLMA9d8PE31DQbU=
Message-ID: <d0c6363f-d5c9-a53b-5275-e5134b54f09a@suse.com>
Date: Mon, 16 Jan 2023 08:49:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 0/2] x86/xen: don't return from xen_pv_play_dead()
Content-Language: en-US
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 Josh Poimboeuf <jpoimboe@kernel.org>, Peter Zijlstra <peterz@infradead.org>
References: <20221125063248.30256-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20221125063248.30256-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0nAghWXiYBvAaH6RQMQFfUSx"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0nAghWXiYBvAaH6RQMQFfUSx
Content-Type: multipart/mixed; boundary="------------LWRXFRU1UdBEuI0eEO6d0jPz";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 Josh Poimboeuf <jpoimboe@kernel.org>, Peter Zijlstra <peterz@infradead.org>
Message-ID: <d0c6363f-d5c9-a53b-5275-e5134b54f09a@suse.com>
Subject: Re: [PATCH 0/2] x86/xen: don't return from xen_pv_play_dead()
References: <20221125063248.30256-1-jgross@suse.com>
In-Reply-To: <20221125063248.30256-1-jgross@suse.com>

--------------LWRXFRU1UdBEuI0eEO6d0jPz
Content-Type: multipart/mixed; boundary="------------yvHPhWTTfK48d68wOsnr01OZ"

--------------yvHPhWTTfK48d68wOsnr01OZ
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjUuMTEuMjIgMDc6MzIsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+IEFsbCBwbGF5X2Rl
YWQoKSBmdW5jdGlvbnMgYnV0IHhlbl9wdl9wbGF5X2RlYWQoKSBkb24ndCByZXR1cm4gdG8g
dGhlDQo+IGNhbGxlci4NCj4gDQo+IEFkYXB0IHhlbl9wdl9wbGF5X2RlYWQoKSB0byBiZWhh
dmUgbGlrZSB0aGUgb3RoZXIgcGxheV9kZWFkKCkgdmFyaWFudHMuDQo+IA0KPiBKdWVyZ2Vu
IEdyb3NzICgyKToNCj4gICAgeDg2L3hlbjogZG9uJ3QgbGV0IHhlbl9wdl9wbGF5X2RlYWQo
KSByZXR1cm4NCj4gICAgeDg2L3hlbjogbWFyayB4ZW5fcHZfcGxheV9kZWFkKCkgYXMgX19u
b3JldHVybg0KPiANCj4gICBhcmNoL3g4Ni94ZW4vc21wLmggICAgICB8ICAyICsrDQo+ICAg
YXJjaC94ODYveGVuL3NtcF9wdi5jICAgfCAxNyArKysrLS0tLS0tLS0tLS0tLQ0KPiAgIGFy
Y2gveDg2L3hlbi94ZW4taGVhZC5TIHwgIDcgKysrKysrKw0KPiAgIHRvb2xzL29ianRvb2wv
Y2hlY2suYyAgIHwgIDEgKw0KPiAgIDQgZmlsZXMgY2hhbmdlZCwgMTQgaW5zZXJ0aW9ucygr
KSwgMTMgZGVsZXRpb25zKC0pDQo+IA0KDQpQaW5nPw0KDQoNCkp1ZXJnZW4NCg==
--------------yvHPhWTTfK48d68wOsnr01OZ
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------yvHPhWTTfK48d68wOsnr01OZ--

--------------LWRXFRU1UdBEuI0eEO6d0jPz--

--------------0nAghWXiYBvAaH6RQMQFfUSx
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPFAW8FAwAAAAAACgkQsN6d1ii/Ey9S
yQf8DQlfdCE+6vBQ7htRNbdOs0J9iISKLSUh8pakjmjrbRR8z0IA6FDkLcdS+yIbX/wj+jjMJIdq
0PZsQTswsxJvfWcdd6/gHtWR89kNTc6qNBG2GCRO1PQPeOjOS72woauloN9IIYL6oPJUySyf2P4T
0WA+pQhjNqdqBaDRY33QZnTJQ836tO/JPyxG80DEqaLAsWnrR2kUtRR3I67L/K0LPECtTzM5vW1Q
e1pV0s5cLVHZ5/GJagN8WMmBGAChm5hIfgoVLMzrze00WnaVs12aCEpAkPhTEzw3QcPzdq/lNn6t
Fz0mUwqiKoCy/C/m+PvySYkF54OPw5Ux71IcdYgMMQ==
=d9PH
-----END PGP SIGNATURE-----

--------------0nAghWXiYBvAaH6RQMQFfUSx--


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 07:50:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 07:50:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478391.741555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKFo-0000Uh-Kh; Mon, 16 Jan 2023 07:50:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478391.741555; Mon, 16 Jan 2023 07:50:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKFo-0000U3-HK; Mon, 16 Jan 2023 07:50:04 +0000
Received: by outflank-mailman (input) for mailman id 478391;
 Mon, 16 Jan 2023 07:50:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHKFn-0000KW-R5; Mon, 16 Jan 2023 07:50:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHKFn-00076q-NN; Mon, 16 Jan 2023 07:50:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHKFn-0006uR-Bq; Mon, 16 Jan 2023 07:50:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHKFn-00032l-BQ; Mon, 16 Jan 2023 07:50:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=utVg0sULFOA3SNtUADr5MUkFgTTnSzd9DAS2BmAlGgI=; b=FYUVsGaoqAPgLWV4RJu5VeAHW3
	SSg6Uglk1dnePxyIaYg+ZczxOH/h3twe+H1ERhcZXEI93sbRSMMqnKw9n3sU5thsk2G6GT1OiNoY5
	HSB1WPL2SJdDliqL9yZwHtyPzgk2phCVyF8tUl+YECZnEUfGk5cuzgkhjvDaxzhPS9YI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175920-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175920: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=47ab397011b6d1ce4d5805117dc87d9e35f378db
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 07:50:03 +0000

flight 175920 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175920/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 47ab397011b6d1ce4d5805117dc87d9e35f378db
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    3 days
Failing since        175860  2023-01-15 07:11:07 Z    1 days   26 attempts
Testing same since   175920  2023-01-16 07:10:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 47ab397011b6d1ce4d5805117dc87d9e35f378db
Author: Abner Chang <abner.chang@amd.com>
Date:   Wed Jan 11 11:10:08 2023 +0800

    MdeModulePkg/XhciPei: Unlinked XhciPei memory block
    
    Unlink the XhciPei memory block when it has been freed.
    
    Signed-off-by: Jiangang He <jiangang.he@amd.com>
    Cc: Hao A Wu <hao.a.wu@intel.com>
    Cc: Ray Ni <ray.ni@intel.com>
    Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
    Cc: Abner Chang <abner.chang@amd.com>
    Cc: Kuei-Hung Lin <Kuei-Hung.Lin@amd.com>
    Reviewed-by: Hao A Wu <hao.a.wu@intel.com>

commit be8d6ef3856fac2e64e23847a8f05d37822b1f14
Author: Abner Chang <abner.chang@amd.com>
Date:   Wed Jan 11 11:10:07 2023 +0800

    MdeModulePkg/Usb: Read a large number of blocks
    
    Changes to allow reading blocks that are greater than 65535 sectors.
    
    Signed-off-by: Jiangang He <jiangang.he@amd.com>
    Cc: Hao A Wu <hao.a.wu@intel.com>
    Cc: Ray Ni <ray.ni@intel.com>
    Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
    Cc: Abner Chang <abner.chang@amd.com>
    Cc: Kuei-Hung Lin <Kuei-Hung.Lin@amd.com>
    Reviewed-by: Hao A Wu <hao.a.wu@intel.com>
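
    The chunking idea behind this change can be sketched as follows. This is
    an illustrative mock only, not the actual MdeModulePkg code: the function
    name count_read_chunks and the loop structure are assumptions, but the
    65535-sector ceiling per command is the limit named in the commit.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Sketch: a USB mass-storage READ(10) command carries a 16-bit
     * transfer length, so a read larger than 65535 sectors must be
     * split into multiple commands.  Returns how many commands a
     * read of 'total_sectors' would take. */
    static int
    count_read_chunks(uint32_t total_sectors)
    {
        const uint32_t max_per_cmd = 65535;
        uint32_t lba = 0;
        int chunks = 0;

        while (total_sectors > 0) {
            uint32_t n = total_sectors > max_per_cmd ? max_per_cmd
                                                     : total_sectors;
            /* a real driver would issue READ(10) for 'n' sectors at 'lba' */
            lba += n;
            total_sectors -= n;
            chunks++;
        }
        (void)lba;  /* only advanced for illustration in this sketch */
        return chunks;
    }

    int main(void)
    {
        assert(count_read_chunks(65535) == 1);
        assert(count_read_chunks(65536) == 2);
        return 0;
    }
    ```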

commit 8147fe090fb566f9a1ed8fde24098bbe425026be
Author: Abner Chang <abner.chang@amd.com>
Date:   Wed Jan 11 11:10:06 2023 +0800

    MdeModulePkg/Xhci: Initial XHCI DCI slot's Context value
    
    Initialize XHCI DCI slot's context entries value.
    
    Signed-off-by: Jiangang He <jiangang.he@amd.com>
    Cc: Hao A Wu <hao.a.wu@intel.com>
    Cc: Ray Ni <ray.ni@intel.com>
    Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
    Cc: Abner Chang <abner.chang@amd.com>
    Cc: Kuei-Hung Lin <Kuei-Hung.Lin@amd.com>
    Reviewed-by: Hao A Wu <hao.a.wu@intel.com>

commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installed the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript,
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that the return value of
    installing the QemuAcpiTableNotifyProtocol should be checked.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after the ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle mQemuAcpiHandle is not needed for anything beyond the
    scope of InstallQemuFwCfgTables(), so a local variable suffices
    for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle mChAcpiHandle is not needed for anything beyond the
    scope of InstallCloudHvTablesTdx(). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
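
    The mechanism this commit relies on can be mocked in a few lines. This is
    a sketch only, not edk2's boot-services implementation: the GUID type,
    the fixed-size db array, and the install_protocol/locate_protocol helper
    names are all assumptions. It shows the point the commit makes: a
    protocol entry is keyed by GUID, so it can exist and be located even
    though its interface pointer is NULL, making a payload struct like
    QEMU_ACPI_TABLE_NOTIFY_PROTOCOL unnecessary.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct { unsigned char b[16]; } GUID;

    typedef struct {
        const GUID *guid;
        void       *interface;   /* may legitimately be NULL */
        int         installed;
    } PROTOCOL_ENTRY;

    static PROTOCOL_ENTRY db[8];   /* toy protocol database */

    static const GUID notify_guid = { { 0x42 } };

    static int install_protocol(const GUID *guid, void *interface)
    {
        for (size_t i = 0; i < 8; i++) {
            if (!db[i].installed) {
                db[i] = (PROTOCOL_ENTRY){ guid, interface, 1 };
                return 0;
            }
        }
        return -1;   /* database full */
    }

    static int locate_protocol(const GUID *guid)
    {
        for (size_t i = 0; i < 8; i++)
            if (db[i].installed &&
                memcmp(db[i].guid->b, guid->b, sizeof(guid->b)) == 0)
                return 1;   /* present, even if interface == NULL */
        return 0;
    }

    int main(void)
    {
        assert(locate_protocol(&notify_guid) == 0);
        /* NULL interface: the entry still acts as a notification flag. */
        assert(install_protocol(&notify_guid, NULL) == 0);
        assert(locate_protocol(&notify_guid) == 1);
        return 0;
    }
    ```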

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:11:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478401.741564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKZw-0003tf-Lh; Mon, 16 Jan 2023 08:10:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478401.741564; Mon, 16 Jan 2023 08:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKZw-0003tY-JB; Mon, 16 Jan 2023 08:10:52 +0000
Received: by outflank-mailman (input) for mailman id 478401;
 Mon, 16 Jan 2023 08:10:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHKZu-0003tS-Uc
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:10:51 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2045.outbound.protection.outlook.com [40.107.8.45])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 48faaf59-9575-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 09:10:49 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7740.eurprd04.prod.outlook.com (2603:10a6:10:1ee::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 08:10:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 08:10:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48faaf59-9575-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XgaQ/59kQf2IXRpU+7IIGheMhmz/iUOMNwE+mH375JsTDbGqZbW4f/N/c5CjSZGxVwPktPX2obrgv1iuAbxSlfOAV5LyTfS3U61PFPL9FcJBjmPFjOg0ROwzhbTmwc9G93BLPTcdhxxgt+IcfM5kUSzwPcgrN/f6cyKMRL9PzrbLLpsP8fSiMCQPGHEKndks9KrooSWOVgPt7bXCW5THbu6FJpsDiEPiKI/1V6F3J+wSYHwhMx81m3n694pJOnLQArTxbAmirfv60w7sAAa/KTSd+FNZ5eRvZCilRabLzZNZOGWg4eRv57h5p9k7Fk0wstD3bZRHI6f6EdNOD2euHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5cJBl3Wdi/i7QE6zNHOjm/kREqiE4r19as+0ZMjILFs=;
 b=TrNAUf0Wk3cTbnco8mkrxiVBhsNApPKS409g73AYeRWyxztyTKTf4NlywDISgGncpC8ewmzY61ae2XazyN+WsPh4+sZ9w82S22IaRclI3xbA0yNXJ85FSJ2+goFfnfCFBg0OCpK6NYN/3yu6YossPJmrsAuI+G+wa1a0yShVuDbdK1J/uVWjxMY5QBw3NhWtZyckuas580IwwEQVe0ryrFdDVGqploxpnlsfVc3fq9nCgQ1Mq/hs/F+yDULO1q0wDBzt9a/PYrzxAS5AUDt8YGZII9dUwnGMmdXoXmVFysyT7xe5bbRpXXKRhY6FDef+tmvu3E7EkElZ1Pu7cAwRRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5cJBl3Wdi/i7QE6zNHOjm/kREqiE4r19as+0ZMjILFs=;
 b=BV8m/waeFKJU4Smmi8c56/YJDTVJdpTJQo519PGtfVFh5AGM8+VYfaoVSN2iSwqO1UDZtqnYXSGIyRgTOtC5gdnNessC6I/ocPUtETzzxHVixZIY+/EO6+Swgvzzi7WtNvYAaJcNA8nhNPzJLCMWsnZUZ96Tj77CN2Ybih3qblP+2uzKNw8mSTrqK30LktMlmucDks6x0WVhg/SiEDIqNGTLSYqS5KqI4iDNg16gm1dNQGA7OrEsNaLfqTeKwnENMxhw+o9aYRzhYbbQJv4++COF+J9EiZGvP24VPrRFtPm/gkDZXkffhw7jXrYgVTi0aokYrdb0aspC/AQQ/y7H8g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <51b4011c-e76b-d134-6ce8-cf72e2fd2040@suse.com>
Date: Mon, 16 Jan 2023 09:10:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [linux-linus test] 175751: regressions - FAIL
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: osstest service owner <osstest-admin@xenproject.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <osstest-175751-mainreport@xen.org>
 <09573105-b441-9099-eda8-cc1e407da67d@suse.com>
Cc: xen-devel@lists.xenproject.org
In-Reply-To: <09573105-b441-9099-eda8-cc1e407da67d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0102.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7740:EE_
X-MS-Office365-Filtering-Correlation-Id: 95542b48-727b-431a-eed2-08daf7992bfb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 95542b48-727b-431a-eed2-08daf7992bfb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 08:10:47.0865
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: W9fh0VA2+sr+OmlQOoLgwo3JMH9nVoZrUFXy5DtNC07WFWnLk2TZGt5a0FziV99lyMjU/hSoQNaIJjsWSAjiRA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7740

On 13.01.2023 12:16, Jan Beulich wrote:
> On 13.01.2023 12:06, osstest service owner wrote:
>> flight 175751 linux-linus real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/175751/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  build-amd64                   6 xen-build                fail REGR. vs. 173462
>>  build-amd64-xsm               6 xen-build                fail REGR. vs. 173462
>>  build-i386-xsm                6 xen-build                fail REGR. vs. 173462
>>  build-i386                    6 xen-build                fail REGR. vs. 173462
>>  build-arm64                   6 xen-build                fail REGR. vs. 173462
>>  build-armhf                   6 xen-build                fail REGR. vs. 173462
>>  build-arm64-pvops             6 kernel-build             fail REGR. vs. 173462
> 
> Looking at just one of the logs I find
> 
> /usr/bin/wget -c -O newlib-1.16.0.tar.gz http://xenbits.xen.org/xen-extfiles/newlib-1.16.0.tar.gz
> --2023-01-13 07:41:15--  http://xenbits.xen.org/xen-extfiles/newlib-1.16.0.tar.gz
> Resolving cache (cache)... 172.16.148.6
> Connecting to cache (cache)|172.16.148.6|:3128... failed: Connection refused.
> make[1]: *** [Makefile:90: newlib-1.16.0.tar.gz] Error 4
> make[1]: Leaving directory '/home/osstest/build.175751.build-amd64/xen/stubdom'
> make: *** [Makefile:73: build-stubdom] Error 2
> 
> Let's hope this was merely a networking issue and will work again next
> time round.

Sadly this looks to be permanent - all flights over the weekend have run
into this failure afaics. Anthony, the other day Andrew suggested you might
be able to help with such issues in Roger's absence?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:14:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:14:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478407.741574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKdF-0004Ye-9C; Mon, 16 Jan 2023 08:14:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478407.741574; Mon, 16 Jan 2023 08:14:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKdF-0004YX-6a; Mon, 16 Jan 2023 08:14:17 +0000
Received: by outflank-mailman (input) for mailman id 478407;
 Mon, 16 Jan 2023 08:14:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wx/b=5N=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHKdD-0004YR-0w
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:14:15 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2085.outbound.protection.outlook.com [40.107.94.85])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c10c80e6-9575-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 09:14:11 +0100 (CET)
Received: from MW4P223CA0008.NAMP223.PROD.OUTLOOK.COM (2603:10b6:303:80::13)
 by LV2PR12MB5870.namprd12.prod.outlook.com (2603:10b6:408:175::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 08:14:07 +0000
Received: from CO1NAM11FT060.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:80:cafe::37) by MW4P223CA0008.outlook.office365.com
 (2603:10b6:303:80::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Mon, 16 Jan 2023 08:14:06 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT060.mail.protection.outlook.com (10.13.175.132) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Mon, 16 Jan 2023 08:14:06 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 02:14:05 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 00:14:04 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 16 Jan 2023 02:14:03 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c10c80e6-9575-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WWFwYen1VsWpp8Ae/LV62v3uha+72NylR4lGAOCTbtmUtxa2vq4oa2bLtmE0pVEfiVvNcJcxdE2DdXK/mwJUxA6GvZuugYiLwfl/PaG/qVktssnDHuW74qGVcGlCsZ0Ye3fMjV6VEsdREIyrqX77qy3oM8WpdaKUncCYxwh+Ucsgdw0IlJwkzJYhrUzhuSWNE6veW+j9Qt1l7V0fo1LLIWUGTELVBwPymHVltAuTaT6VarQQga16ocgL720J/aTyoYJsR9TDuKGFUF3VuZ4cXE/ZXXmntLcUb8OCptQBGZRPNhbVfmFUQjiP+fALzzAll8vdE/74SkIJUAg+YWcAtQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7WS26hMjynQ4ex2a9mEUYqQR+rU3WgPHwzZZ3Rdo+0w=;
 b=df2oNJ8KoZgJNLhXhRYbu6rP5mCtbPSachgPfoR4EUK7gJt14UTnwpn5zJ3RcnM+apd9fNN/wNqVuM4E1HooyCkiqMHXq39lXa6O2XdDUNLFctZYyrUsCz4pWQyTeFDWBkK0mv6EXxwlePdFwmjOLx4fquQ1/XmCZmVABe6i/HBF2TS9kG1AATRTvvnI8OV3XiSwAkarLJW5ssGc8ZT1qGoeqyUayCcQty/SQ2/z6guPA18tVfzIT28Ueuict4KzlVtnYhWiM2SFHVNW1elUcfWMqjvpbMqZGS4OLYr1Qv9m7wa/FMjXfhjofKub1xDLyfKJdKGOXmb2p+zwVQbafA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7WS26hMjynQ4ex2a9mEUYqQR+rU3WgPHwzZZ3Rdo+0w=;
 b=II7kLOt+7g6v9vbrRiEFwe3JxaSh0o9Vq57ctRNfEDVABDfg3PVj/Ha7xdhyq5wG2lcD+hF76wfr99LMBPXd1pQj5gkpibnbt+ZnEo4HpOvftT4nSKHGV6pMrw2s2onjsyG6CcOq1mwf0vpYhaSOBjJJa9OLB/3djj481MsPYoM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <36a8cb2d-0bea-cdc3-5311-c743f60763d5@amd.com>
Date: Mon, 16 Jan 2023 09:14:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to
 load Xen
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-10-julien@xen.org>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230113101136.479-10-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT060:EE_|LV2PR12MB5870:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ecb7f7e-e1e8-490e-ca75-08daf799a2f0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	t4QAPDxdg+xankTgnbzbKaLZVRBvk+4wOLFyHcnYVo77aXUwp6rOSSYDSf9JSFbmE5sQ/qUNgpBf/Vo5Nbef1MufNnIUz6F6LXa2ploJAQiaCv8s0bjY5W4p53WAqmiT/8PVDv0ERTYwo43Pcs6OJC7XSLkaBDt8SPoJEGqhDZqQy/uB6jIZSwb4X2e2wEfTSXEOBcSRhgxNhxwtyZCWpkB/1wPOVvKo8Z8WjikV6nJK8F1DZ2d59QJknyxtaeiVHxD0uXfzFrKObRlvqJyqhy60CExoSYLmKxteTTtNFoJA7gnZ82g7vSPHffB21+jb507YTMU2XhAay4AbVCTbgzYJOfEIF4AnrloVDPTdNIIGy7Olt0bviX7WuUzt0mecwJEdYvHM+5ToOZcGIFsn56FVsaiL/dlaj6+wio2/jMgGh7W8tSc/9sgRm4Q+XmEp1FwX/CuX1BqZrwIH9+i5ByOOsv7HA/ATqxVXbwN6MkqEQmzHbNcGS2GnLXANbXTjUui4iRbG/eTIBcdTedvZX1DkmrW+tuuW7Ir3IZMSBtwfICzrxassIt4dFN/z3dYlkdo7dHnMhsXJ3Ul6BN9XnLeLD4teiwFXf3oSu2I2DPpNzfiDThPE8+EIs9R5nQxtj6TFRS0BIUaM5JZgOkQjzvOc7cmIoN3Um5LqluzEQJYZhOaTXUr/AJZkuJix8SNCj+W+lBYavHAAOKGeBkWY527+1+LDFOId+/dKrqn+glvFodBNikrTFzRt4mQ9ETMqPoyv/KW9zAdA3qfY1bbJ5w==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(39860400002)(136003)(396003)(346002)(451199015)(36840700001)(40470700004)(46966006)(31686004)(53546011)(70586007)(70206006)(41300700001)(2616005)(426003)(47076005)(26005)(186003)(8676002)(4326008)(82310400005)(36756003)(31696002)(86362001)(40480700001)(83380400001)(336012)(36860700001)(5660300002)(82740400003)(8936002)(110136005)(478600001)(54906003)(316002)(16576012)(356005)(2906002)(44832011)(81166007)(40460700003)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 08:14:06.0686
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ecb7f7e-e1e8-490e-ca75-08daf799a2f0
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT060.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV2PR12MB5870

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, bootloaders can load Xen anywhere in memory except the
> region 2MB - 4MB. While I am not aware of any issue, we have no way
> to tell the bootloader to avoid that region.
> 
> In addition to that, in the future, Xen may grow over 2MB if we
> enable features such as UBSAN or GCOV. To avoid widening the restriction
> on the load address, it would be better to get rid of it.
> 
> When the identity mapping is clashing with the Xen runtime mapping,
> we need an extra indirection to be able to replace the identity
> mapping with the Xen runtime mapping.
> 
> Reserve a new memory region that will be used to temporarily map Xen.
> For convenience, the new area is re-using the same first slot as the
> domheap which is used for per-cpu temporary mapping after a CPU has
> booted.
> 
> Furthermore, directly map boot_second (which covers Xen and more)
> to the temporary area. This avoids allocating an extra page-table
> for the second level and will be helpful for follow-up patches (we will
> want to use the fixmap whilst in the temporary mapping).
> 
> Lastly, some part of the code now needs to know whether the temporary
> mapping was created. So reserve r12 to store this information.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>     Changes in v4:
>         - Remove spurious newline
> 
>     Changes in v3:
>         - Remove the ASSERT() in init_domheap_mappings() because it was
>           bogus (secondary CPU root tables are initialized to the CPU0
>           root table so the entry will be valid). Also, it is not
>           related to this patch as the CPU0 root table are rebuilt
>           during boot. The ASSERT() will be re-introduced later.
> 
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/arm32/head.S         | 139 ++++++++++++++++++++++++++----
>  xen/arch/arm/include/asm/config.h |  14 +++
>  xen/arch/arm/mm.c                 |  14 +++
>  3 files changed, 152 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 67b910808b74..3800efb44169 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -35,6 +35,9 @@
>  #define XEN_FIRST_SLOT      first_table_offset(XEN_VIRT_START)
>  #define XEN_SECOND_SLOT     second_table_offset(XEN_VIRT_START)
> 
> +/* Offset between the early boot xen mapping and the runtime xen mapping */
> +#define XEN_TEMPORARY_OFFSET      (TEMPORARY_XEN_VIRT_START - XEN_VIRT_START)
> +
>  #if defined(CONFIG_EARLY_PRINTK) && defined(CONFIG_EARLY_PRINTK_INC)
>  #include CONFIG_EARLY_PRINTK_INC
>  #endif
> @@ -94,7 +97,7 @@
>   *   r9  - paddr(start)
>   *   r10 - phys offset
>   *   r11 - UART address
> - *   r12 -
> + *   r12 - Temporary mapping created
>   *   r13 - SP
>   *   r14 - LR
>   *   r15 - PC
> @@ -445,6 +448,9 @@ ENDPROC(cpu_init)
>   *   r9 : paddr(start)
>   *   r10: phys offset
>   *
> + * Output:
> + *   r12: Was a temporary mapping created?
> + *
>   * Clobbers r0 - r4, r6
>   *
>   * Register usage within this function:
> @@ -484,7 +490,11 @@ create_page_tables:
>          /*
>           * Setup the 1:1 mapping so we can turn the MMU on. Note that
>           * only the first page of Xen will be part of the 1:1 mapping.
> +         *
> +         * In all the cases, we will link boot_third_id. So create the
> +         * mapping in advance.
>           */
> +        create_mapping_entry boot_third_id, r9, r9
> 
>          /*
>           * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
> @@ -501,8 +511,7 @@ create_page_tables:
>          /*
>           * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
>           * 1:1 mapping will use its own set of page-tables from the
> -         * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to handle
> -         * it.
> +         * third level.
>           */
>          get_table_slot r1, r9, 2     /* r1 := second slot */
>          cmp   r1, #XEN_SECOND_SLOT
> @@ -513,13 +522,33 @@ create_page_tables:
>  link_from_second_id:
>          create_table_entry boot_second_id, boot_third_id, r9, 2
>  link_from_third_id:
> -        create_mapping_entry boot_third_id, r9, r9
> +        /* Good news, we are not clashing with Xen virtual mapping */
> +        mov   r12, #0                /* r12 := temporary mapping not created */
>          mov   pc, lr
> 
>  virtphys_clash:
> -        /* Identity map clashes with boot_third, which we cannot handle yet */
> -        PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
> -        b     fail
> +        /*
> +         * The identity map clashes with boot_third. Link boot_first_id and
> +         * map Xen to a temporary mapping. See switch_to_runtime_mapping
> +         * for more details.
> +         */
> +        PRINT("- Virt and Phys addresses clash  -\r\n")
> +        PRINT("- Create temporary mapping -\r\n")
> +
> +        /*
> +         * This will override the link to boot_second in XEN_FIRST_SLOT.
> +         * The page-tables are not live yet. So no need to use
> +         * break-before-make.
> +         */
> +        create_table_entry boot_pgtable, boot_second_id, r9, 1
> +        create_table_entry boot_second_id, boot_third_id, r9, 2
> +
> +        /* Map boot_second (cover Xen mappings) to the temporary 1st slot */
> +        mov_w r0, TEMPORARY_XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_second, r0, 1
> +
> +        mov   r12, #1                /* r12 := temporary mapping created */
> +        mov   pc, lr
>  ENDPROC(create_page_tables)
> 
>  /*
> @@ -528,9 +557,10 @@ ENDPROC(create_page_tables)
>   *
>   * Inputs:
>   *   r9 : paddr(start)
> + *  r12 : Was the temporary mapping created?
>   *   lr : Virtual address to return to
>   *
> - * Clobbers r0 - r3
> + * Clobbers r0 - r5
>   */
>  enable_mmu:
>          PRINT("- Turning on paging -\r\n")
> @@ -558,21 +588,79 @@ enable_mmu:
>           * The MMU is turned on and we are in the 1:1 mapping. Switch
>           * to the runtime mapping.
>           */
> -        mov_w r0, 1f
> -        mov   pc, r0
> +        mov   r5, lr                /* Save LR before overwriting it */
> +        mov_w lr, 1f                /* Virtual address in the runtime mapping */
> +        b     switch_to_runtime_mapping
>  1:
> +        mov   lr, r5                /* Restore LR */
>          /*
> -         * The 1:1 map may clash with other parts of the Xen virtual memory
> -         * layout. As it is not used anymore, remove it completely to
> -         * avoid having to worry about replacing existing mapping
> -         * afterwards.
> +         * At this point, either the 1:1 map or the temporary mapping
> +         * will be present. The former may clash with other parts of the
> +         * Xen virtual memory layout. As both of them are not used
> +         * anymore, remove them completely to avoid having to worry
> +         * about replacing existing mapping afterwards.
>           *
>           * On return this will jump to the virtual address requested by
>           * the caller.
>           */
> -        b     remove_identity_mapping
> +        teq   r12, #0
> +        beq   remove_identity_mapping
> +        b     remove_temporary_mapping
>  ENDPROC(enable_mmu)
> 
> +/*
> + * Switch to the runtime mapping. The logic depends on whether the
> + * runtime virtual region is clashing with the physical address
> + *
> + *  - If it is not clashing, we can directly jump to the address in
> + *    the runtime mapping.
> + *  - If it is clashing, create_page_tables() would have mapped Xen to
> + *    a temporary virtual address. We need to switch to the temporary
> + *    mapping so we can remove the identity mapping and map Xen at the
> + *    correct position.
> + *
> + * Inputs
> + *    r9: paddr(start)
> + *   r12: Was a temporary mapping created?
> + *    lr: Address in the runtime mapping to jump to
> + *
> + * Clobbers r0 - r4
> + */
> +switch_to_runtime_mapping:
> +        /*
> +         * Jump to the runtime mapping if the virt and phys are not
> +         * clashing
> +         */
> +        teq   r12, #0
> +        beq   ready_to_switch
> +
> +        /* We are still in the 1:1 mapping. Jump to the temporary Virtual address. */
> +        mov_w r0, 1f
> +        add   r0, r0, #XEN_TEMPORARY_OFFSET /* r0 := address in temporary mapping */
> +        mov   pc, r0
> +
> +1:
> +        /* Remove boot_second_id */
> +        mov   r2, #0
> +        mov   r3, #0
> +        adr_l r0, boot_pgtable
> +        get_table_slot r1, r9, 1            /* r1 := first slot */
> +        lsl   r1, r1, #3                    /* r1 := first slot offset */
> +        strd  r2, r3, [r0, r1]
> +
> +        flush_xen_tlb_local r0
> +
> +        /* Map boot_second into boot_pgtable */
> +        mov_w r0, XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_second, r0, 1
> +
> +        /* Ensure any page table updates are visible before continuing */
> +        dsb   nsh
> +
> +ready_to_switch:
> +        mov   pc, lr
> +ENDPROC(switch_to_runtime_mapping)
> +
>  /*
>   * Remove the 1:1 map from the page-tables. It is not easy to keep track
>   * where the 1:1 map was mapped, so we will look for the top-level entry
> @@ -618,6 +706,27 @@ identity_mapping_removed:
>          mov   pc, lr
>  ENDPROC(remove_identity_mapping)
> 
> +/*
> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
> + *
> + * Clobbers r0 - r1
> + */
> +remove_temporary_mapping:
> +        /* r2:r3 := invalid page-table entry */
> +        mov   r2, #0
> +        mov   r3, #0
> +
> +        adr_l r0, boot_pgtable
> +        mov_w r1, TEMPORARY_XEN_VIRT_START
> +        get_table_slot r1, r1, 1     /* r1 := first slot */
Can't we just use TEMPORARY_AREA_FIRST_SLOT?

> +        lsl   r1, r1, #3             /* r1 := first slot offset */
> +        strd  r2, r3, [r0, r1]
> +
> +        flush_xen_tlb_local r0
> +
> +        mov  pc, lr
> +ENDPROC(remove_temporary_mapping)
> +
>  /*
>   * Map the UART in the fixmap (when earlyprintk is used) and hook the
>   * fixmap table in the page tables.
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 87851e677701..6c1b762e976d 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -148,6 +148,20 @@
>  /* Number of domheap pagetable pages required at the second level (2MB mappings) */
>  #define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
> 
> +/*
> + * The temporary area is overlapping with the domheap area. This may
> + * be used to create an alias of the first slot containing Xen mappings
> + * when turning on/off the MMU.
> + */
> +#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
> +
> +/* Calculate the address in the temporary area */
> +#define TEMPORARY_AREA_ADDR(addr)                           \
> +     (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                    \
> +      (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
XEN_PT_LEVEL_{MASK/SHIFT} should be used when we do not know the level upfront.
Otherwise, there is no need for open-coding and you should use FIRST_MASK and FIRST_SHIFT.

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:15:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:15:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478412.741585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKeW-000571-Kx; Mon, 16 Jan 2023 08:15:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478412.741585; Mon, 16 Jan 2023 08:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKeW-00056u-Fx; Mon, 16 Jan 2023 08:15:36 +0000
Received: by outflank-mailman (input) for mailman id 478412;
 Mon, 16 Jan 2023 08:15:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHKeV-00056k-7o; Mon, 16 Jan 2023 08:15:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHKeU-000877-Rf; Mon, 16 Jan 2023 08:15:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHKeU-0007SW-DB; Mon, 16 Jan 2023 08:15:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHKeU-0001P8-Cj; Mon, 16 Jan 2023 08:15:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xVJJ+s770KjU3VJJNMtVgB7DncaYSY6me9fV7NVdaDw=; b=ZEyRNdQrKMQNXOuUYqPASlCYN/
	g23T9InDi8e6gv6Hp6GzWdk/oFoRKKnTk2IOBoarOgKVWyrY76jqQ9PkAff8o63Ckh9IlUQtYAuPI
	noIYP5DS4szxwZw0wFnugSdWcTi2CMLHYzOWz8C2vfNSaYVacoMJIazMU17bimGEg60c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175916-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175916: regressions - trouble: blocked/broken
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-build-prep:fail:regression
    xen-unstable-smoke:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 08:15:34 +0000

flight 175916 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175916/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   5 host-build-prep          fail REGR. vs. 175746
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    3 days
Failing since        175748  2023-01-12 20:01:56 Z    3 days   16 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    2 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool, which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But the section is below a page size (i.e. 4KB), so take a
    shortcut and check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:21:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:21:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478420.741595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKju-0006d8-C5; Mon, 16 Jan 2023 08:21:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478420.741595; Mon, 16 Jan 2023 08:21:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKju-0006d1-89; Mon, 16 Jan 2023 08:21:10 +0000
Received: by outflank-mailman (input) for mailman id 478420;
 Mon, 16 Jan 2023 08:21:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wx/b=5N=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHKjt-0006cv-8u
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:21:09 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2044.outbound.protection.outlook.com [40.107.92.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b9600019-9576-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 09:21:07 +0100 (CET)
Received: from DM6PR13CA0037.namprd13.prod.outlook.com (2603:10b6:5:134::14)
 by PH0PR12MB7960.namprd12.prod.outlook.com (2603:10b6:510:287::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Mon, 16 Jan
 2023 08:21:03 +0000
Received: from DS1PEPF0000E638.namprd02.prod.outlook.com
 (2603:10b6:5:134:cafe::23) by DM6PR13CA0037.outlook.office365.com
 (2603:10b6:5:134::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Mon, 16 Jan 2023 08:21:03 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DS1PEPF0000E638.mail.protection.outlook.com (10.167.17.70) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Mon, 16 Jan 2023 08:21:02 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 02:21:01 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 02:20:53 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 16 Jan 2023 02:20:51 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9600019-9576-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Qw+i8N+aEtFABgk7RjWCKRmbL/z0y2JvlcGhmbPUE4Sik2Y6f7OH25XdUZWu4f6aAvxR+EdARvdGWWBCZNNa/f0uNqTfVCBxSHh6xB1Cw8dAkImm4EWeC28tuCupsjDAg13bwvQzn1kgK/urCgHSxNFILSevL2JEPoZbHwhAvDe/q90bF5TC0C7Zrp9EkO5MPrgkTmO9RMxEpuz/3Yvz44r32Yk0GkFpUq7qujK88q/MPi6sEk6KyeCRocIoRbxCT868QzloM2c9sxaYkNA6chJGjtO9Ip+CMfngcYKbhX+xeQmndu9lRp7tM51iv6jtIjXHrtKHXE6BEtbPU7SI7Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NAGxMev/Uqdz8y1zI806Zr9i1eSD9520EXPwF22pQzs=;
 b=QhZzyyEjihyr7LYhncOOWahuklqJOVXcG8Lj8QZvFpQx9ahuG3sjkoJZfN6XeuYB97LAuAUuLHfjQ0FjvMSdKTSI6z/FFU48S2Q1tz2H5rccRnU1TVlB0tHto9Po1j8CS2KYiUVI8wp82SRn/nZ6hiOkZHwBXsM+DGwNuQNkxUOLXIusUSuMMlZrV/3PWuSqOTR1qupZzyZOP4eocaJiGCWcn4tywuHwGKHLe2MwfASOYDNFh6kFqkGW0Nk7OS2r022Uf9QAPCz4OJP8qroK+R6bFBLKtsRTpTgzG/DWFXWeBxe87JVj6LhUvEd3JtyYUFR9Kf78fG9fDvrffvAS7w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NAGxMev/Uqdz8y1zI806Zr9i1eSD9520EXPwF22pQzs=;
 b=z7OEqUijyfr1gQ0hWOedUoeR2OpcISTrTYbkRhaQH7QN8guMPOzKisVTmpLYM/PNFDZ49mOHF48EXgvkl83BqF3q969Uf3HBo+b/1MaGsDjFGL8jRC3M7n9rHcvaN028YUitqipmpfONd7vl0WbXIDoi7NphzFeNuvu1M5HDGyk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <0271e540-d3b0-fb9b-0f66-015abb45231c@amd.com>
Date: Mon, 16 Jan 2023 09:20:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary
 mapping
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-11-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230113101136.479-11-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E638:EE_|PH0PR12MB7960:EE_
X-MS-Office365-Filtering-Correlation-Id: ff925a23-bba1-41c8-4ca5-08daf79a9b45
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Kc5klwM/4nM+8N0tSgFnLMZva1dR4ROKm6odIX1N9gVvLcXRxadXEN3X4LJMAGETBOxn8tPivdI9r+vBYUL6eGpj77WepM0qBFs6opqDVMLPtbz683xF1RliixwIbp5PXSKXN5oAI0ulCmW3z0kX5Vvj3Ui+lbuJrRCYPF7o77g5JExoNutN7uZdrecaXXZ+zGTEjllvCZEnGFgOA5l+WihMFptzVY3a3qidR2AwIVOY5yVqatl+S8vjoJW5Od9N2F6nbNU83JysqEUznpq/XHh/EoM81qwEnRIf2bktDHrctAQZeVT1qVtE+VhB8Wr0YKpZcsn6QWsriu0ZG2rcsl7to2Ty/Hr0PSWHGqw69Mwm+FUqt5vNYetqlfLxLyW4bTdGUcJuhfSjdPZlOtknrckyoHXs0qnRCpspceSDrzMrNhgmn8JduijKEWVsyHebTBdBlm8/5ABvgN91hn1gKWXcbKAgOwxRgNSQVc8PAfnVfsmbm5BElG4C7ug9crN+aaVVtVdr8zGR/PApIBb7zDKNz74tggD7IwyE6IzkrlTZf6PvtBN+xHodvQJOyciT+7strvvD2R9liFEeXmO+2wQJis+/B+R4/StO7sj6x/ofuQKsgiswyVxSG1xVWj79pqvM2AWM+ETa4FdK/SVsQrxJI1myEy9pdGSTjxLPgKe/HXerjgMSgWKpV6UWJnl4g990WmuN0IzL5OyuLnZ5hL8mdzzm3pWP59stRvuFBoHyDbge+Ls1qNRZpEOc5p1XWFIAz25U2PCY73Ew2FuzjA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(346002)(376002)(39860400002)(136003)(451199015)(36840700001)(40470700004)(46966006)(31686004)(44832011)(5660300002)(8936002)(54906003)(41300700001)(70586007)(70206006)(4326008)(110136005)(8676002)(316002)(2906002)(82740400003)(356005)(81166007)(478600001)(53546011)(82310400005)(26005)(16576012)(36756003)(186003)(47076005)(36860700001)(336012)(426003)(31696002)(40460700003)(40480700001)(86362001)(2616005)(83380400001)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 08:21:02.7599
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ff925a23-bba1-41c8-4ca5-08daf79a9b45
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E638.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB7960

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, the temporary mapping is only used when the virtual
> runtime region of Xen is clashing with the physical region.
> 
> In follow-up patches, we will rework how secondary CPU bring-up works
> and it will be convenient to use the fixmap area for accessing
> the root page-table (it is per-cpu).
> 
> Rework the code to use the temporary mapping when the Xen physical address
> does not overlap with the temporary mapping.
> 
> This also has the advantage of simplifying the logic to identity-map
> Xen.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
> 
> Even though this patch rewrites part of the previous patch, I decided
> to keep them separate to help the review.
> 
> The "follow-up patches" are still in draft at the moment. I still haven't
> found a way to split them nicely without requiring too much more work
> on the coloring side.
> 
> I have provided some medium-term goal in the cover letter.
> 
>     Changes in v3:
>         - Resolve conflicts after switching from "ldr rX, <label>" to
>           "mov_w rX, <label>" in a previous patch
> 
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/arm32/head.S | 82 +++++++--------------------------------
>  1 file changed, 15 insertions(+), 67 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 3800efb44169..ce858e9fc4da 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -459,7 +459,6 @@ ENDPROC(cpu_init)
>  create_page_tables:
>          /* Prepare the page-tables for mapping Xen */
>          mov_w r0, XEN_VIRT_START
> -        create_table_entry boot_pgtable, boot_second, r0, 1
>          create_table_entry boot_second, boot_third, r0, 2
> 
>          /* Setup boot_third: */
> @@ -479,67 +478,37 @@ create_page_tables:
>          cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
>          blo   1b
> 
> -        /*
> -         * If Xen is loaded at exactly XEN_VIRT_START then we don't
> -         * need an additional 1:1 mapping, the virtual mapping will
> -         * suffice.
> -         */
> -        cmp   r9, #XEN_VIRT_START
> -        moveq pc, lr
> -
>          /*
>           * Setup the 1:1 mapping so we can turn the MMU on. Note that
>           * only the first page of Xen will be part of the 1:1 mapping.
> -         *
> -         * In all the cases, we will link boot_third_id. So create the
> -         * mapping in advance.
>           */
> +        create_table_entry boot_pgtable, boot_second_id, r9, 1
> +        create_table_entry boot_second_id, boot_third_id, r9, 2
>          create_mapping_entry boot_third_id, r9, r9
> 
>          /*
> -         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
> -         * then the 1:1 mapping will use its own set of page-tables from
> -         * the second level.
> +         * Find the first slot used. If the slot is not the same
> +         * as XEN_TMP_FIRST_SLOT, then we will want to switch
Do you mean TEMPORARY_AREA_FIRST_SLOT?

> +         * to the temporary mapping before jumping to the runtime
> +         * virtual mapping.
>           */
>          get_table_slot r1, r9, 1     /* r1 := first slot */
> -        cmp   r1, #XEN_FIRST_SLOT
> -        beq   1f
> -        create_table_entry boot_pgtable, boot_second_id, r9, 1
> -        b     link_from_second_id
> -
> -1:
> -        /*
> -         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
> -         * 1:1 mapping will use its own set of page-tables from the
> -         * third level.
> -         */
> -        get_table_slot r1, r9, 2     /* r1 := second slot */
> -        cmp   r1, #XEN_SECOND_SLOT
> -        beq   virtphys_clash
> -        create_table_entry boot_second, boot_third_id, r9, 2
> -        b     link_from_third_id
> +        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
> +        bne   use_temporary_mapping
> 
> -link_from_second_id:
> -        create_table_entry boot_second_id, boot_third_id, r9, 2
> -link_from_third_id:
> -        /* Good news, we are not clashing with Xen virtual mapping */
> +        mov_w r0, XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_second, r0, 1
>          mov   r12, #0                /* r12 := temporary mapping not created */
>          mov   pc, lr
> 
> -virtphys_clash:
> +use_temporary_mapping:
>          /*
> -         * The identity map clashes with boot_third. Link boot_first_id and
> -         * map Xen to a temporary mapping. See switch_to_runtime_mapping
> -         * for more details.
> +         * The identity mapping is not using the first slot
> +         * TEMPORARY_AREA_FIRST_SLOT. Create a temporary mapping.
> +         * See switch_to_runtime_mapping for more details.
>           */
> -        PRINT("- Virt and Phys addresses clash  -\r\n")
>          PRINT("- Create temporary mapping -\r\n")
> 
> -        /*
> -         * This will override the link to boot_second in XEN_FIRST_SLOT.
> -         * The page-tables are not live yet. So no need to use
> -         * break-before-make.
> -         */
>          create_table_entry boot_pgtable, boot_second_id, r9, 1
>          create_table_entry boot_second_id, boot_third_id, r9, 2
Do we need to duplicate this if we just did the same in create_page_tables before branching to
use_temporary_mapping?
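For reference, the slot check in question can be modelled roughly in Python. This is only an illustrative sketch: the level shifts and entry count are assumptions based on the LPAE format, not values taken from Xen's headers, and `needs_temporary_mapping` mirrors the `cmp r1, #TEMPORARY_AREA_FIRST_SLOT` / `bne use_temporary_mapping` pair in the hunk above.

```python
LEVEL_SHIFTS = {1: 30, 2: 21, 3: 12}   # assumed LPAE shifts: 1 GiB / 2 MiB / 4 KiB
XEN_PT_LPAE_ENTRIES = 512              # 512 * 8-byte entries per table page

def get_table_slot(addr: int, level: int) -> int:
    """Index of the page-table entry covering addr at the given level."""
    return (addr >> LEVEL_SHIFTS[level]) % XEN_PT_LPAE_ENTRIES

def needs_temporary_mapping(phys_start: int, temp_first_slot: int) -> bool:
    """True when the identity map's first slot differs from the
    temporary area's slot, i.e. the bne use_temporary_mapping case."""
    return get_table_slot(phys_start, 1) != temp_first_slot
```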

~Michal



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:36:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:36:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478425.741605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKyX-0008CY-Kc; Mon, 16 Jan 2023 08:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478425.741605; Mon, 16 Jan 2023 08:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHKyX-0008CR-Hz; Mon, 16 Jan 2023 08:36:17 +0000
Received: by outflank-mailman (input) for mailman id 478425;
 Mon, 16 Jan 2023 08:36:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHKyW-0008CL-Si
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:36:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHKyW-0000A4-Jb; Mon, 16 Jan 2023 08:36:16 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.9.85])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHKyW-0002B0-DG; Mon, 16 Jan 2023 08:36:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=yGVQUYQ8k4N8zOcI+y5VfNe6HVyMZcuTfH2wfXekJY0=; b=ybcjhYapwtIaXhh0NIZItqIoGv
	uDTp5CYbwtBr+VJQoEGI4hcQDSF1A+9q30rHuzziQ/EszLXGjLNX33T5ZyjYQJSYvcRN/1oubxhqM
	KYDpd+NvauVArxOqHF01rHouQk+6DvzlqkaAAmXcln21N1NNgznbCwvEdvsXMbjb2Fa8=;
Message-ID: <853c086f-b556-e204-6f4d-eb844d710af9@xen.org>
Date: Mon, 16 Jan 2023 08:36:12 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 02/14] xen/arm64: flushtlb: Implement the TLBI repeat
 workaround for TLB flush by VA
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Michal Orzel <michal.orzel@amd.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-3-julien@xen.org>
 <C8CAEAFE-50FB-4DBB-8EDC-8AB87920EB06@arm.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <C8CAEAFE-50FB-4DBB-8EDC-8AB87920EB06@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 13/01/2023 17:56, Luca Fancellu wrote:
> 
> 
>> On 13 Jan 2023, at 10:11, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Looking at the Neoverse N1 errata document, it is not clear to me
>> why the TLBI repeat workaround is not applied for TLB flush by VA.
>>
>> The TBL flush by VA helpers are used in flush_xen_tlb_range_va_local()

s/TBL/TLB/

>> and flush_xen_tlb_range_va(). So if the range size if a fixed size smaller
> 
> NIT: is a fixed size

Thanks. I will fix it on commit unless there is another round needed.
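As an aside, the effect of the repeat workaround on flush-by-VA can be modelled as below. This is a sketch, not Xen's actual implementation (the names are illustrative): on affected cores each invalidate-by-VA is simply issued twice with a barrier in between, which is why the cost grows with the range size.

```python
PAGE_SIZE = 4096

def flush_tlb_range_va(start, size, repeat_workaround):
    """Return the sequence of TLB maintenance ops issued for a VA range."""
    ops = []
    for va in range(start, start + size, PAGE_SIZE):
        ops.append(("tlbi_va", va))
        if repeat_workaround:
            # Erratum workaround: repeat the invalidation after a DSB.
            ops.append(("dsb", None))
            ops.append(("tlbi_va", va))
    ops.append(("dsb", None))  # final barrier after the loop
    return ops
```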

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:38:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:38:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478430.741614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHL0g-0000MD-0H; Mon, 16 Jan 2023 08:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478430.741614; Mon, 16 Jan 2023 08:38:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHL0f-0000M6-Tz; Mon, 16 Jan 2023 08:38:29 +0000
Received: by outflank-mailman (input) for mailman id 478430;
 Mon, 16 Jan 2023 08:38:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OEfv=5N=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHL0e-0000Lw-Ks
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:38:28 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2596e474-9579-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 09:38:27 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E713367565;
 Mon, 16 Jan 2023 08:38:26 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4258C139C2;
 Mon, 16 Jan 2023 08:38:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id jXTKDgINxWMOFAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 16 Jan 2023 08:38:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2596e474-9579-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673858306; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rCTz8nPUFHfGm5/NFTvj8mMlaS89IAg2JXiWnIchUYI=;
	b=b21kYzV7bIOMaYuzQfHskSyOZTKxY6IQXB61/wpa3u9Tg/5EoyBmM4UyU8gOAIxSLQITel
	P3PSkg9pK712N5HheccAJJvufNIUBtCw7YXRx/cBFx94phXx5y74Mg0gExDjOhA8Rxb7pk
	DQQQT5GtBi1fA4cuBQn0lgxqm5UyOGs=
Message-ID: <1375c16c-1a81-f845-f9e6-d698148c4ffc@suse.com>
Date: Mon, 16 Jan 2023 09:38:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
 state
Content-Language: en-US
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>, linux-kernel@vger.kernel.org
Cc: amakhalov@vmware.com, ganb@vmware.com, ankitja@vmware.com,
 bordoloih@vmware.com, keerthanak@vmware.com, blamoreaux@vmware.com,
 namit@vmware.com, Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
 "Paul E. McKenney" <paulmck@kernel.org>, Wyes Karny <wyes.karny@amd.com>,
 Lewis Caroll <lewis.carroll@amd.com>, Tom Lendacky
 <thomas.lendacky@amd.com>, x86@kernel.org,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20230116060134.80259-1-srivatsa@csail.mit.edu>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230116060134.80259-1-srivatsa@csail.mit.edu>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------R20aP0MBAjebS9kz3CSWyaDC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------R20aP0MBAjebS9kz3CSWyaDC
Content-Type: multipart/mixed; boundary="------------SKpCTbiCrL4RZ847sUVVOW7b";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>, linux-kernel@vger.kernel.org
Cc: amakhalov@vmware.com, ganb@vmware.com, ankitja@vmware.com,
 bordoloih@vmware.com, keerthanak@vmware.com, blamoreaux@vmware.com,
 namit@vmware.com, Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
 "Paul E. McKenney" <paulmck@kernel.org>, Wyes Karny <wyes.karny@amd.com>,
 Lewis Caroll <lewis.carroll@amd.com>, Tom Lendacky
 <thomas.lendacky@amd.com>, x86@kernel.org,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
 xen-devel@lists.xenproject.org
Message-ID: <1375c16c-1a81-f845-f9e6-d698148c4ffc@suse.com>
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
 state
References: <20230116060134.80259-1-srivatsa@csail.mit.edu>
In-Reply-To: <20230116060134.80259-1-srivatsa@csail.mit.edu>

--------------SKpCTbiCrL4RZ847sUVVOW7b
Content-Type: multipart/mixed; boundary="------------dh2EUrC8xJ648HWyt0Ov3f0j"

--------------dh2EUrC8xJ648HWyt0Ov3f0j
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTYuMDEuMjMgMDc6MDEsIFNyaXZhdHNhIFMuIEJoYXQgd3JvdGU6DQo+IEZyb206ICJT
cml2YXRzYSBTLiBCaGF0IChWTXdhcmUpIiA8c3JpdmF0c2FAY3NhaWwubWl0LmVkdT4NCj4g
DQo+IFVuZGVyIGh5cGVydmlzb3JzIHRoYXQgc3VwcG9ydCBtd2FpdCBwYXNzdGhyb3VnaCwg
YSB2Q1BVIGluIG13YWl0DQo+IENQVS1pZGxlIHN0YXRlIHJlbWFpbnMgaW4gZ3Vlc3QgY29u
dGV4dCAoaW5zdGVhZCBvZiB5aWVsZGluZyB0byB0aGUNCj4gaHlwZXJ2aXNvciB2aWEgVk1F
WElUKSwgd2hpY2ggaGVscHMgc3BlZWQgdXAgd2FrZXVwcyBmcm9tIGlkbGUuDQo+IA0KPiBI
b3dldmVyLCB0aGlzIHJ1bnMgaW50byBwcm9ibGVtcyB3aXRoIENQVSBob3RwbHVnLCBiZWNh
dXNlIHRoZSBMaW51eA0KPiBDUFUgb2ZmbGluZSBwYXRoIHByZWZlcnMgdG8gcHV0IHRoZSB2
Q1BVLXRvLWJlLW9mZmxpbmVkIGluIG13YWl0DQo+IHN0YXRlLCB3aGVuZXZlciBtd2FpdCBp
cyBhdmFpbGFibGUuIEFzIGEgcmVzdWx0LCBzaW5jZSBhIHZDUFUgaW4gbXdhaXQNCj4gcmVt
YWlucyBpbiBndWVzdCBjb250ZXh0IGFuZCBkb2VzIG5vdCB5aWVsZCB0byB0aGUgaHlwZXJ2
aXNvciwgYW4NCj4gb2ZmbGluZSB2Q1BVICphcHBlYXJzKiB0byBiZSAxMDAlIGJ1c3kgYXMg
dmlld2VkIGZyb20gdGhlIGhvc3QsIHdoaWNoDQo+IHByZXZlbnRzIHRoZSBoeXBlcnZpc29y
IGZyb20gcnVubmluZyBvdGhlciB2Q1BVcyBvciB3b3JrbG9hZHMgb24gdGhlDQo+IGNvcnJl
c3BvbmRpbmcgcENQVS4gWyBOb3RlIHRoYXQgc3VjaCBhIHZDUFUgaXMgbm90IGFjdHVhbGx5
IGJ1c3kNCj4gc3Bpbm5pbmcgdGhvdWdoOyBpdCByZW1haW5zIGluIG13YWl0IGlkbGUgc3Rh
dGUgaW4gdGhlIGd1ZXN0IF0uDQo+IA0KPiBGaXggdGhpcyBieSBwcmV2ZW50aW5nIHRoZSB1
c2Ugb2YgbXdhaXQgaWRsZSBzdGF0ZSBpbiB0aGUgdkNQVSBvZmZsaW5lDQo+IHBsYXlfZGVh
ZCgpIHBhdGggZm9yIGFueSBoeXBlcnZpc29yLCBldmVuIGlmIG13YWl0IHN1cHBvcnQgaXMN
Cj4gYXZhaWxhYmxlLg0KPiANCj4gU3VnZ2VzdGVkLWJ5OiBQZXRlciBaaWpsc3RyYSAoSW50
ZWwpIDxwZXRlcnpAaW5mcmFkZWFkLm9yZz4NCj4gU2lnbmVkLW9mZi1ieTogU3JpdmF0c2Eg
Uy4gQmhhdCAoVk13YXJlKSA8c3JpdmF0c2FAY3NhaWwubWl0LmVkdT4NCj4gQ2M6IFRob21h
cyBHbGVpeG5lciA8dGdseEBsaW51dHJvbml4LmRlPg0KPiBDYzogUGV0ZXIgWmlqbHN0cmEg
PHBldGVyekBpbmZyYWRlYWQub3JnPg0KPiBDYzogSW5nbyBNb2xuYXIgPG1pbmdvQHJlZGhh
dC5jb20+DQo+IENjOiBCb3Jpc2xhdiBQZXRrb3YgPGJwQGFsaWVuOC5kZT4NCj4gQ2M6IERh
dmUgSGFuc2VuIDxkYXZlLmhhbnNlbkBsaW51eC5pbnRlbC5jb20+DQo+IENjOiAiSC4gUGV0
ZXIgQW52aW4iIDxocGFAenl0b3IuY29tPg0KPiBDYzogIlJhZmFlbCBKLiBXeXNvY2tpIiA8
cmFmYWVsLmoud3lzb2NraUBpbnRlbC5jb20+DQo+IENjOiAiUGF1bCBFLiBNY0tlbm5leSIg
PHBhdWxtY2tAa2VybmVsLm9yZz4NCj4gQ2M6IFd5ZXMgS2FybnkgPHd5ZXMua2FybnlAYW1k
LmNvbT4NCj4gQ2M6IExld2lzIENhcm9sbCA8bGV3aXMuY2Fycm9sbEBhbWQuY29tPg0KPiBD
YzogVG9tIExlbmRhY2t5IDx0aG9tYXMubGVuZGFja3lAYW1kLmNvbT4NCj4gQ2M6IEFsZXhl
eSBNYWtoYWxvdiA8YW1ha2hhbG92QHZtd2FyZS5jb20+DQo+IENjOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+DQo+IENjOiB4ODZAa2VybmVsLm9yZw0KPiBDYzogVk13YXJl
IFBWLURyaXZlcnMgUmV2aWV3ZXJzIDxwdi1kcml2ZXJzQHZtd2FyZS5jb20+DQo+IENjOiB2
aXJ0dWFsaXphdGlvbkBsaXN0cy5saW51eC1mb3VuZGF0aW9uLm9yZw0KPiBDYzoga3ZtQHZn
ZXIua2VybmVsLm9yZw0KPiBDYzogeGVuLWRldmVsQGxpc3RzLnhlbnByb2plY3Qub3JnDQoN
ClJldmlld2VkLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQoNCg0KSnVl
cmdlbg0KDQo=
--------------dh2EUrC8xJ648HWyt0Ov3f0j
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------dh2EUrC8xJ648HWyt0Ov3f0j--

--------------SKpCTbiCrL4RZ847sUVVOW7b--

--------------R20aP0MBAjebS9kz3CSWyaDC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPFDQEFAwAAAAAACgkQsN6d1ii/Ey8W
xwf/X/GKsP0H+Iit0bG9QeFgb0ohp08/y4PiVKqFcswlEHIG/fwFJtvHYFxlnQ0zVWBjO/ksowgv
DKedL5giBFHXzjCu9xmCReFb/YlX2fqRQ9Nu/hfwO5Ckze6QcVgTtbzWxLhrb32MbydSpk/XXSBG
94UJZBOFutx9O/ObihZfY1wO55GZwh7b655J3uIzv3y1RPVxg9joxGPRh2uubt4GLsiD5uSH3N6K
qA0DSoPIgM/2cq3iNTb+TTx+YnFfV79+uP2nRbMkQ9XT2WqdC3WNSe/Wepv6RgcP+3jdaIxW8Xdo
oFevc1mDroKpfdbB9i5I+DvLXnTCjERGqV1JZ0PmYQ==
=L2a/
-----END PGP SIGNATURE-----

--------------R20aP0MBAjebS9kz3CSWyaDC--


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:43:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:43:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478435.741625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHL5V-0001nK-Ii; Mon, 16 Jan 2023 08:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478435.741625; Mon, 16 Jan 2023 08:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHL5V-0001nD-G2; Mon, 16 Jan 2023 08:43:29 +0000
Received: by outflank-mailman (input) for mailman id 478435;
 Mon, 16 Jan 2023 08:43:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHL5T-0001n7-J9
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:43:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHL5T-0000HJ-6T; Mon, 16 Jan 2023 08:43:27 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.9.85])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHL5T-0002PR-0Q; Mon, 16 Jan 2023 08:43:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=SyqcjB7poyYQQc1LwMdKNmAV23Qfyh7R9AJhpPATybU=; b=uRLq+IV80fGZzEZqBmxuH4PMR2
	3dWPEF9LD6Ux/2xl2mscDTzyvF+Au3xZthMUL02+Nu1mnCdyvPc/IxLVsS6lWLKIQof32OYpc7oye
	etjBcBrBBHIBYRn0MNM+QgPF64SH+VhfzCP7sNDDHrPQIkd9ymwc5w9SEI0Fwu1WN/jU=;
Message-ID: <a366d06c-d078-b19c-793d-5bfc1943b05a@xen.org>
Date: Mon, 16 Jan 2023 08:43:25 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to
 load Xen
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-10-julien@xen.org>
 <859E7F71-489B-4DD3-A4B2-9AD0DE19837D@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <859E7F71-489B-4DD3-A4B2-9AD0DE19837D@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 13/01/2023 14:58, Luca Fancellu wrote:
>> +/*
>> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
>> + *
>> + * Clobbers r0 - r1
> 
> NIT: r0 - r3?

Yes. I have updated the version in my tree.

> 
>> + */
>> +remove_temporary_mapping:
>> +        /* r2:r3 := invalid page-table entry */
>> +        mov   r2, #0
>> +        mov   r3, #0
>> +
>> +        adr_l r0, boot_pgtable
>> +        mov_w r1, TEMPORARY_XEN_VIRT_START
>> +        get_table_slot r1, r1, 1     /* r1 := first slot */
>> +        lsl   r1, r1, #3             /* r1 := first slot offset */
>> +        strd  r2, r3, [r0, r1]
>> +
>> +        flush_xen_tlb_local r0
>> +
>> +        mov  pc, lr
>> +ENDPROC(remove_temporary_mapping)
>> +
> 
> The rest looks good to me. I've also built for arm64/32 and tested this patch on FVP in aarch32 mode,
> booting Dom0 and creating/running/destroying some guests.
> 
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>

Thanks!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:45:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:45:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478441.741635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHL7Q-0002Pj-2k; Mon, 16 Jan 2023 08:45:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478441.741635; Mon, 16 Jan 2023 08:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHL7P-0002Pc-V0; Mon, 16 Jan 2023 08:45:27 +0000
Received: by outflank-mailman (input) for mailman id 478441;
 Mon, 16 Jan 2023 08:45:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHL7P-0002PQ-DI; Mon, 16 Jan 2023 08:45:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHL7P-0000LB-Ar; Mon, 16 Jan 2023 08:45:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHL7P-00086j-3B; Mon, 16 Jan 2023 08:45:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHL7P-0000tc-2h; Mon, 16 Jan 2023 08:45:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vzwr6pNQCIzShQuNo4jEykUsY/K/qdYt57ZQFREJXMo=; b=kXTblrl7kn+2aQAZqIndcaoouo
	pbTt02D/KlxngvOA7C/HGe1ppRYI6Cp7mffyxzLZEq1nAXSebwTrHC2086vyFJJ0Uk3eu3PD1Bdn4
	RQ3/l9VUO4YY94I2373VsXWpwea4aKxNfFZfakwOc2pAFVk5a8ypt7EienfSM+72mmig=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175918-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175918: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-amd64:<job status>:broken:regression
    qemu-mainline:build-amd64-pvops:<job status>:broken:regression
    qemu-mainline:build-amd64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-armhf-pvops:<job status>:broken:regression
    qemu-mainline:build-armhf:host-install(4):broken:regression
    qemu-mainline:build-amd64:host-build-prep:fail:regression
    qemu-mainline:build-amd64-xsm:host-build-prep:fail:regression
    qemu-mainline:build-amd64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-armhf-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64-xsm:host-build-prep:fail:regression
    qemu-mainline:build-arm64-pvops:host-build-prep:fail:regression
    qemu-mainline:build-arm64:host-build-prep:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=886fb67020e32ce6a2cf7049c6f017acf1f0d69a
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 08:45:27 +0000

flight 175918 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175918/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175743
 build-amd64                   5 host-build-prep          fail REGR. vs. 175743
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175743
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175743
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175743
 build-arm64                   5 host-build-prep          fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    3 days
Failing since        175750  2023-01-13 06:38:52 Z    3 days    9 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    2 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               fail    
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:46:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478448.741645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHL8b-0002yE-DP; Mon, 16 Jan 2023 08:46:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478448.741645; Mon, 16 Jan 2023 08:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHL8b-0002y7-A8; Mon, 16 Jan 2023 08:46:41 +0000
Received: by outflank-mailman (input) for mailman id 478448;
 Mon, 16 Jan 2023 08:46:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wx/b=5N=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHL8a-0002xt-4F
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:46:40 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2077.outbound.protection.outlook.com [40.107.244.77])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4946e9b0-957a-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 09:46:38 +0100 (CET)
Received: from MW4PR03CA0091.namprd03.prod.outlook.com (2603:10b6:303:b7::6)
 by CH2PR12MB4892.namprd12.prod.outlook.com (2603:10b6:610:65::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Mon, 16 Jan
 2023 08:46:34 +0000
Received: from CO1NAM11FT052.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b7:cafe::ed) by MW4PR03CA0091.outlook.office365.com
 (2603:10b6:303:b7::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Mon, 16 Jan 2023 08:46:34 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT052.mail.protection.outlook.com (10.13.174.225) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Mon, 16 Jan 2023 08:46:33 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 02:46:32 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 02:46:32 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 16 Jan 2023 02:46:31 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4946e9b0-957a-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PI55dvDU1HOuYlHjTbdE1at3o/lk10QalWRDMKvdMt6GhCHgLY8LKQw/A0L0wxdGqY5pMeTzUU57pnRW4us6XEUK9GEQjNuumqD6FutHbyS1sLiCXmTnSZI/UvPkD6yzsvnu3SzJxHNSAyaXGiFlVO51/FxfxI7jCgZtwZyUQBBOCyhiE4yzoei1fm1A6RfnS/PTNz1NNZAOqHcmEIc5lx+1jKskZ2A0du15ojkIiZu4Tz2XIaeJB8+EaSRmBYcNCNOiAlEDV4vGkTXuj3ZfUuhn3jsTt2QNUl+/jg3UWW1hQjytynU9fdI2Ca8T6FPfh4HJG2vNccjGQRyKdSfWRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kPJ/4zyj4FOlfb65H/9fAMd9pTZhI2hVsCXL/vYHx+c=;
 b=LYykhUtUXV1viUEyKnxGLqtze1mBqKTHf58WLzE0GmSjifFKGbIusgt2W+bTjlXKxUocHBz1fQdnY0l1irrjK4oRfD5WSQUxjuiOYsMMBPh0DpGK3lW3swMjwKQ8oYPrWs7G7s2AXlcY1nfFX2yV5jCnPVIdukQLNktuz3mZF+T8bMPcrGD0A+QwrFvZRFR1CimRJ1yRsr+U1lrphngGLsNK4E4eYzHbo3/ePMfMrzEtbZWvVsxqGufBMWjTBAWpkVUWaeqbYLkUnfz/8lVHhD5NDzUVCADSADFKALl94jfXxtmSA5sMDaE/gvdTgDoXy9JB1vZ7U50kQUsBObp2qQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kPJ/4zyj4FOlfb65H/9fAMd9pTZhI2hVsCXL/vYHx+c=;
 b=CYhyxPGO6sBPih/3JlVOXy1DqzVp0o12rSyK4vBjPjhSFIscggdo1jYlv9eJNpXXoatebsS4dqqo80XBHnN+WDT6qViAyu8tsR7ZNH4LLB8Y8d2zksVbamk6WnXbLvCqUmU3kQsHQQqz/xscUiY/A7/68aaTOpzGnwjEpQw+uVQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <72b2be45-d7bc-a94f-1d49-b9fc0b2fd081@amd.com>
Date: Mon, 16 Jan 2023 09:46:30 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 11/14] xen/arm64: Rework the memory layout
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-12-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230113101136.479-12-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT052:EE_|CH2PR12MB4892:EE_
X-MS-Office365-Filtering-Correlation-Id: 997b1a42-1470-4db6-863e-08daf79e2bd5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BzeqDArINkk3C3HJPOK5SIs5mwPOgD0jPwRjjHi0YLbrVhj26BjBrAtOFJnEAdDu1RmJLjt7g7ouKcdKdAOV0TkGfXEmmgXCcFR1l54jXT9aqGkcvmvE+JKKXVmurW+3NI4HUP59VpQFyYfCE3lmAXHdkPBJ2fRj8r+3dLMqoCNmI3uDpgmyLOSYawItqlnDD0uW5kmpizqpQQT0tiKEWSYQ+lwbO0FU+tzPdA4oo8ihyq12qR4KAchlM9AhsHYzO34c/aHhPGn4Zmn78BuUgfztuoPxKh0Gchy2l0Wl0tKUwRVPCz7J7gfjfGJ8xvEANUgbw/rizTaYBP66ufVO5P8Pglo7zbNfSZrrPU0mcbKfcITG2pWPaSzPJvcEdMx2w9IIieQYjCSJqfot1AG2QV1e4/T96pRrxWF7r1DHHrImSwk+wqiAu4HCbT5dwgDoL6vag2tvvxLYlGAh1zbt0FnbBHIrYA9dkjYC9IjrxP/mzikgC7qy2D+hJM5gYfFr5HNPImNkbeaavpWhHuRPxOvUDD3T+T8H8BdQz9+w9zXfNF+bRKggUzp4vcJhuAFiXeFpYyI/y7Cwbh8WquDDySD+CRrQmfcH1O/jyp/w3fosGhSFe03K0vz2E3ME0DXnYG29e5bPeHL+/gkWnjcTQM1D9/ITSQB2+3KPp0KMRR/ma8D48eP/iPrAw/tCCxODZsJOXJyyRnMHLn/lA+J0ZfwbHmVK1EFP1/DmNWEvJ53YHbiVKeotrQ8IW+d551q4/99xMDFqsv3G9OY4+/reIw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(376002)(346002)(136003)(39860400002)(451199015)(46966006)(40470700004)(36840700001)(186003)(478600001)(44832011)(5660300002)(8676002)(36756003)(26005)(4326008)(70586007)(70206006)(40460700003)(54906003)(316002)(110136005)(16576012)(336012)(40480700001)(83380400001)(8936002)(82310400005)(2616005)(47076005)(426003)(82740400003)(31696002)(36860700001)(2906002)(53546011)(41300700001)(86362001)(81166007)(31686004)(356005)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 08:46:33.7091
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 997b1a42-1470-4db6-863e-08daf79e2bd5
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT052.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4892

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Xen is currently not fully compliant with the Arm Arm because it will
> switch the TTBR with the MMU on.
> 
> In order to be compliant, we need to disable the MMU before
> switching the TTBR. The implication is the page-tables should
> contain an identity mapping of the code switching the TTBR.
> 
> In most of the case we expect Xen to be loaded in low memory. I am aware
> of one platform (i.e AMD Seattle) where the memory start above 512GB.
> To give us some slack, consider that Xen may be loaded in the first 2TB
> of the physical address space.
> 
> The memory layout is reshuffled to keep the first two slots of the zeroeth
Should be "four slots" instead of "two".

> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
> tweak of the boot code because XEN_VIRT_START cannot be used as an
> immediate.
> 
> This reshuffle will make trivial to create a 1:1 mapping when Xen is
> loaded below 2TB.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ----
>     Changes in v4:
>         - Correct the documentation
>         - The start address is 2TB, so slot0 is 4 not 2.
> 
>     Changes in v2:
>         - Reword the commit message
>         - Load Xen at 2TB + 2MB
>         - Update the documentation to reflect the new layout
> ---
>  xen/arch/arm/arm64/head.S         |  3 ++-
>  xen/arch/arm/include/asm/config.h | 35 ++++++++++++++++++++-----------
>  xen/arch/arm/mm.c                 | 11 +++++-----
>  3 files changed, 31 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 4a3f87117c83..663f5813b12e 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -607,7 +607,8 @@ create_page_tables:
>           * need an additional 1:1 mapping, the virtual mapping will
>           * suffice.
>           */
> -        cmp   x19, #XEN_VIRT_START
> +        ldr   x0, =XEN_VIRT_START
> +        cmp   x19, x0
>          bne   1f
>          ret
>  1:
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 6c1b762e976d..c5d407a7495f 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -72,15 +72,12 @@
>  #include <xen/page-size.h>
> 
>  /*
> - * Common ARM32 and ARM64 layout:
> + * ARM32 layout:
>   *   0  -   2M   Unmapped
>   *   2M -   4M   Xen text, data, bss
>   *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>   *   6M -  10M   Early boot mapping of FDT
> - *   10M - 12M   Livepatch vmap (if compiled in)
> - *
> - * ARM32 layout:
> - *   0  -  12M   <COMMON>
> + *  10M -  12M   Livepatch vmap (if compiled in)
>   *
>   *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
>   * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
> @@ -90,14 +87,22 @@
>   *   2G -   4G   Domheap: on-demand-mapped
>   *
>   * ARM64 layout:
> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
> - *   0  -  12M   <COMMON>
> + * 0x0000000000000000 - 0x00001fffffffffff (2TB, L0 slots [0..3])
End address should be 0x1FFFFFFFFFF (one less f).

> + *  Reserved to identity map Xen
> + *
> + * 0x0000020000000000 - 0x000028fffffffff (512GB, L0 slot [4]
End address should be 0x27FFFFFFFFF.

> + *  (Relative offsets)
> + *   0  -   2M   Unmapped
> + *   2M -   4M   Xen text, data, bss
> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
> + *   6M -  10M   Early boot mapping of FDT
> + *  10M -  12M   Livepatch vmap (if compiled in)
>   *
>   *   1G -   2G   VMAP: ioremap and early_ioremap
>   *
>   *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
>   *
> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
> + * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [5..255])
Start address should be 0x28000000000.

Not related to this patch:
I took a look at config.h and spotted two things:
1) The DIRECTMAP_SIZE calculation is incorrect. It is defined as (SLOT0_ENTRY_SIZE * (265-256)),
but it should actually be (SLOT0_ENTRY_SIZE * (266-256)), i.e. 10 slots and not 9. Due to this
bug, we actually support 4.5TB of direct map and not 5TB.

2) frametable information
struct page_info is no longer 24B but 56B for arm64 and 32B for arm32. It looks like SUPPORT.md
took this into account when stating that we support 12GB for arm32 and 2TB for arm64. However,
this is also wrong as it does not take physical address compression into account. With PDX, which
is enabled by default, we could fit tens of TB in a 32GB frametable. I think we want to get rid of
comments like "Frametable: 24 bytes per page for 16GB of RAM" in favor of just "Frametable".
This is because the struct page_info size may change again and it is rather difficult to
calculate the max RAM size supported with PDX enabled.

If you want, I can push the patches for these issues.

~Michal



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:53:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:53:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478454.741655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLFQ-0004Xi-75; Mon, 16 Jan 2023 08:53:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478454.741655; Mon, 16 Jan 2023 08:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLFQ-0004Xb-47; Mon, 16 Jan 2023 08:53:44 +0000
Received: by outflank-mailman (input) for mailman id 478454;
 Mon, 16 Jan 2023 08:53:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wx/b=5N=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHLFP-0004XV-GS
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:53:43 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2065.outbound.protection.outlook.com [40.107.243.65])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 456ba475-957b-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 09:53:40 +0100 (CET)
Received: from MW2PR2101CA0019.namprd21.prod.outlook.com (2603:10b6:302:1::32)
 by BL1PR12MB5946.namprd12.prod.outlook.com (2603:10b6:208:399::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Mon, 16 Jan
 2023 08:53:34 +0000
Received: from CO1NAM11FT057.eop-nam11.prod.protection.outlook.com
 (2603:10b6:302:1:cafe::a7) by MW2PR2101CA0019.outlook.office365.com
 (2603:10b6:302:1::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.1 via Frontend
 Transport; Mon, 16 Jan 2023 08:53:34 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT057.mail.protection.outlook.com (10.13.174.205) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Mon, 16 Jan 2023 08:53:33 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 02:53:32 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 16 Jan 2023 02:53:31 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 456ba475-957b-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hQ1uXlteOY+g5sBTrr/VpfweSvdd9llM0z9f134tKNHRBqZQUigSf+GAU1pAAsnLfUVQ7NKiwvvQMCGu+1ubygwAW1LNLwOS/ItXQ8DThUVCZtHCBejcBsOD6rEvPdLaPoJZmc0ZybuLKCewP74r6URGwDsPKbwiRbu/i/tA11c1wO8Y7aInYZ3B9FV/nohIkcPEeLRE23bNSlkjCe9uSk4OTVO6neLWVHSaibw8d1CLZqrpmwwqqAqCF+UaJ7bDaJ1l+1I0NX8zm3wXmdJg4S+KVBUUC4Pj5gV9wT1hPetuRB3QvYJEcc7MiRFlkGHAs2sqwj+gga4M3/1BtHxqvQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ykRl3i+I1TI6cIYC21wWjoIBy9H0caPspQZJafx8fK8=;
 b=LEJn1cwsnfA/fVca4jrBNzbEIAcimDyunOtpwkUfnkitEigATNXN/zxJXXuYfMDm5kdkpwtWlHSmPOHTflqQyq+S0NbN0cq6fIOMuGbYLi/hERR7bM0OxJjOybF/wUThBSWDFnNYMpT4MMdulQbkH2EVvtWDxdDhw9PLiPDRm/q5TVxRFAleiV0JcKE8dDJ6Kz/U6GLQlP95HzmQ6jn2cKl9MrTpKu5uy2xF7vEtDwC4zrlTQckkt43B6dg381SIeJu6bMswV5n0uQgmpMOVuDkWI9vhvle03q3GOZXZrTwXYv6tOJCsRnfXpdsZV0HjtqUJ6LXRdS8BTX8AJyi2qw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ykRl3i+I1TI6cIYC21wWjoIBy9H0caPspQZJafx8fK8=;
 b=l4CsCL0qKQ3kUkaZ3ydFtZaHtpqBUb25ECwUvXCJqpNgiTprevF7uNoZdBdIArC77rHXs2kpUjM9p/rnoOt5msly2HaH6NbQQ6iuNEM2VBI5m6h0QTM0zDi16AHj1ilp60RqwawL2c1eTAPPQx4n2FhAmbI7VDAmgXA3sp5sdGE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <ddbcb326-b158-daa4-e9d2-42c420983497@amd.com>
Date: Mon, 16 Jan 2023 09:53:30 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-13-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230113101136.479-13-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT057:EE_|BL1PR12MB5946:EE_
X-MS-Office365-Filtering-Correlation-Id: 4b527745-5d8e-428a-af81-08daf79f2619
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ausT/qimGUp1o6A5uB1aNYqCV/T0ur4mlq3Jbfxi3etjR005FLiFY8BvrZ03wZT266Q35Id8vQC3bOCZDFk1xkspoiY8BsXBFtWLGERnZtg45+1UhaKamujB+nRQqirNIVOmY0m/2fAZAi9JkyM/FeI9E/w4GUA4Mi5SWK7JpKSOv7eiNbEL/XwgON/SdZ/ED+nkbiI7O0mPKM6GiyiJq/UAfmrnQtzKyIevsXz+FAe/P1ZxvXNbxWLpLo+iaDjOPXgebaUbhXu/D/bvBqvyRJnxh5DQFVWdx1Wjemw4Ij5L0R/6CC+WyH1hrYMBWOpCX7sW0iIYGY3kkW7O4MZvGtjsh/UZwPb4xK8X0mjb8+OukhnQOgnxDXOxZg8OXwOKP69c0gknzXa6ea03kbAiebwCniV0Rqsg0DAEQnnQ91d0dHfDeYxNpWKAm3cArEqHh3I2kVfDoy5SwmX555OMqW17al1h0pL4iy1veGVCgHsHMk419yEbcXcYPOgbf25Gs9EPVPBISHNHw20tTeZAI5MtTR9wEcunyKK95axVg5MMPxu9RGWue+AsO9dn/iuJ/9/xW/0yn5h2LwoUeT/UHeSX4/sMfQ/9NbBITiin4PoFi3BIHde7D/5MPwglqD+mpyCn/a7mYF97iOVK4sWmvM709F+1xuPgObk4ZstQX3iHX7yz2UUCjNsaSTsiBZcgMsvZ6zknd9FrFhrU0Fdy9iUfqy5LOMUGEJF9oHv6wtjhvE5miUdy7BtwdE0MfpSHdNAw0ngzHCh3L3NO13Cocw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(346002)(396003)(376002)(39860400002)(451199015)(46966006)(36840700001)(40470700004)(31696002)(86362001)(82310400005)(36756003)(186003)(4326008)(8676002)(53546011)(26005)(70206006)(426003)(70586007)(2616005)(47076005)(41300700001)(316002)(40480700001)(16576012)(54906003)(110136005)(478600001)(40460700003)(44832011)(81166007)(2906002)(356005)(82740400003)(83380400001)(36860700001)(5660300002)(8936002)(336012)(31686004)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 08:53:33.6018
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b527745-5d8e-428a-af81-08daf79f2619
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT057.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5946

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
> 
> On some platform, the identity mapping may have to start at 0. If we always
> keep the identity region mapped, NULL pointer dereference would lead to
> access to valid mapping.
> 
> It would be possible to relocate Xen to avoid clashing with address 0.
> However the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
> 
> Two new external helpers are introduced:
>     - arch_setup_page_tables() will setup the page-tables so it is
>       easy to create the mapping afterwards.
>     - update_identity_mapping() will create/remove the identity mapping
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
>     Changes in v4:
>         - Fix typo in a comment
>         - Clarify which page-tables are updated
> 
>     Changes in v2:
>         - Remove the arm32 part
>         - Use a different logic for the boot page tables and runtime
>           one because Xen may be running in a different place.
> ---
>  xen/arch/arm/arm64/Makefile         |   1 +
>  xen/arch/arm/arm64/mm.c             | 130 ++++++++++++++++++++++++++++
>  xen/arch/arm/include/asm/arm32/mm.h |   4 +
>  xen/arch/arm/include/asm/arm64/mm.h |  13 +++
>  xen/arch/arm/include/asm/setup.h    |  11 +++
>  xen/arch/arm/mm.c                   |   6 +-
>  6 files changed, 163 insertions(+), 2 deletions(-)
>  create mode 100644 xen/arch/arm/arm64/mm.c
> 
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 6d507da0d44d..28481393e98f 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -10,6 +10,7 @@ obj-y += entry.o
>  obj-y += head.o
>  obj-y += insn.o
>  obj-$(CONFIG_LIVEPATCH) += livepatch.o
> +obj-y += mm.o
>  obj-y += smc.o
>  obj-y += smpboot.o
>  obj-y += traps.o
> diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
> new file mode 100644
> index 000000000000..798ae93ad73c
> --- /dev/null
> +++ b/xen/arch/arm/arm64/mm.c
> @@ -0,0 +1,130 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <xen/init.h>
> +#include <xen/mm.h>
> +
> +#include <asm/setup.h>
> +
> +/* Override macros from asm/page.h to make them work with mfn_t */
> +#undef virt_to_mfn
> +#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> +
> +static DEFINE_PAGE_TABLE(xen_first_id);
> +static DEFINE_PAGE_TABLE(xen_second_id);
> +static DEFINE_PAGE_TABLE(xen_third_id);
> +
> +/*
> + * The identity mapping may start at physical address 0. So we don't want
> + * to keep it mapped longer than necessary.
> + *
> + * When this is called, we are still using the boot_pgtable.
> + *
> + * We need to prepare the identity mapping for both the boot page tables
> + * and runtime page tables.
> + *
> + * The logic to create the entry is slightly different because Xen may
> + * be running at a different location at runtime.
> + */
> +static void __init prepare_boot_identity_mapping(void)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    lpae_t pte;
> +    DECLARE_OFFSETS(id_offsets, id_addr);
> +
> +    /*
> +     * We will be re-using the boot ID tables. They may not have been
> +     * zeroed but they should be unlinked. So it is fine to use
> +     * clear_page().
> +     */
> +    clear_page(boot_first_id);
> +    clear_page(boot_second_id);
> +    clear_page(boot_third_id);
> +
> +    if ( id_offsets[0] != 0 )
> +        panic("Cannot handled ID mapping above 512GB\n");
I might be lost, but didn't we say earlier that Xen can be loaded anywhere in the first 2TB?
If so, how does this check square with that?

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:55:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:55:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478459.741665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLHS-00056k-J8; Mon, 16 Jan 2023 08:55:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478459.741665; Mon, 16 Jan 2023 08:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLHS-00056d-GG; Mon, 16 Jan 2023 08:55:50 +0000
Received: by outflank-mailman (input) for mailman id 478459;
 Mon, 16 Jan 2023 08:55:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHLHR-00056X-5V
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:55:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHLHQ-0000fa-Py; Mon, 16 Jan 2023 08:55:48 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.9.85])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHLHQ-0002sD-Jj; Mon, 16 Jan 2023 08:55:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=tnjZKwQ+Z9i0XREuukCBT0bJLNsB8i4nTwlMiBb8FJ0=; b=NReLxl/1+ewURp8ZYnSL7dBFQ0
	qB6LeBtDbaULbBKFWyEbAc5U59K3xziJX3/tH/qpNsnl6QsvOKHv1fLMfyN9eRKzJmuwNC+JYVvMX
	Dn1VrAbbU6CC6OyUbLjrx382DlDPj57g7F0f1dIo7giGrw8+449Vz5Adq6dzIJBQ4vO0=;
Message-ID: <cb8a09fd-6174-f747-1fc1-1ab472eecdfd@xen.org>
Date: Mon, 16 Jan 2023 08:55:46 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to
 load Xen
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-10-julien@xen.org>
 <36a8cb2d-0bea-cdc3-5311-c743f60763d5@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <36a8cb2d-0bea-cdc3-5311-c743f60763d5@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 16/01/2023 08:14, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 13/01/2023 11:11, Julien Grall wrote:
>> +/*
>> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
>> + *
>> + * Clobbers r0 - r1
>> + */
>> +remove_temporary_mapping:
>> +        /* r2:r3 := invalid page-table entry */
>> +        mov   r2, #0
>> +        mov   r3, #0
>> +
>> +        adr_l r0, boot_pgtable
>> +        mov_w r1, TEMPORARY_XEN_VIRT_START
>> +        get_table_slot r1, r1, 1     /* r1 := first slot */
> Can't we just use TEMPORARY_AREA_FIRST_SLOT?

IMHO, it would make the code a bit more difficult to read because the 
connection between TEMPORARY_XEN_VIRT_START and 
TEMPORARY_AREA_FIRST_SLOT is not totally obvious.

So I would prefer to keep this as it is.

> 
>> +        lsl   r1, r1, #3             /* r1 := first slot offset */
>> +        strd  r2, r3, [r0, r1]
>> +
>> +        flush_xen_tlb_local r0
>> +
>> +        mov  pc, lr
>> +ENDPROC(remove_temporary_mapping)
>> +
>>   /*
>>    * Map the UART in the fixmap (when earlyprintk is used) and hook the
>>    * fixmap table in the page tables.
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 87851e677701..6c1b762e976d 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -148,6 +148,20 @@
>>   /* Number of domheap pagetable pages required at the second level (2MB mappings) */
>>   #define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
>>
>> +/*
>> + * The temporary area is overlapping with the domheap area. This may
>> + * be used to create an alias of the first slot containing Xen mappings
>> + * when turning on/off the MMU.
>> + */
>> +#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
>> +
>> +/* Calculate the address in the temporary area */
>> +#define TEMPORARY_AREA_ADDR(addr)                           \
>> +     (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                    \
>> +      (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
> XEN_PT_LEVEL_{MASK/SHIFT} should be used when we do not know the level upfront.
> Otherwise, no need for opencoding and you should use FIRST_MASK and FIRST_SHIFT.

We discussed in the past to phase out the use of FIRST_MASK, FIRST_SHIFT 
because the name is too generic. So for new code, we should use 
XEN_PT_LEVEL_{MASK/SHIFT}.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 08:57:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 08:57:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478464.741675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLIe-0005fr-Tm; Mon, 16 Jan 2023 08:57:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478464.741675; Mon, 16 Jan 2023 08:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLIe-0005fk-QR; Mon, 16 Jan 2023 08:57:04 +0000
Received: by outflank-mailman (input) for mailman id 478464;
 Mon, 16 Jan 2023 08:57:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHLId-0005fZ-F1
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 08:57:03 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2050.outbound.protection.outlook.com [40.107.15.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bdbf1a14-957b-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 09:57:02 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8881.eurprd04.prod.outlook.com (2603:10a6:20b:42c::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.22; Mon, 16 Jan
 2023 08:56:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 08:56:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdbf1a14-957b-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L4ug+IVv4wiIPwD8nABUdEvsfRqyW5Wq8P9DNA2otOlw3xa5RJshRA/DEq1yR/9jUSJPDEQJn9oAdK4Xn3kauNxP4RQ3UfHEgNbZgXK3ZDdF61b+OcBuDBKXlgPzUebehNnvPy0L8tN/+3Ec7ZpjnA4qRlxF7PSvpnvtqUrX9+C8lp2cDs7ITqcjPCJErOtarKBq7qgD1kj0M5LpySuX6pZ/X7YYdW7ExKPDVkYDxehvCYV6dLDlX4UORPs/Moy1L0qWJ0JRwub8gD46Nez81H5Wl3bE2HdreG3OrN4QvW0Ty2qH/aG1pDg0k+SqCi83fd7RDP1uulxYiez6m/Y0dg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9f1bNX4yYHaymSPKHN3ykqRaflHQLdQFgGma2M45HEA=;
 b=SoqCJ3gBl7DuviBmdOHs3DkGpsD0TI+03M6vSPpC9PVZW8X2Bq9Di8MgvYzRcnpE/6oOSggaFN9Gr9ZjyDQIQNyOPaIAmAbESTrBrnppZY/ER9ZsXsvgOLQwcPiNPgL9iBurGQ5heRMyaZX4w1ntH+bW6yyqQxi2B+bjHHvXCiIKlUXVJfC0y5fN2nk0X87Y7HfF3QIp245w3i8ki6TGG7NUiZjRmv/i5CiQO2+AFcU7+9vwi0YuGc4VYTIDPU9XrRFTPwfwP6caa7RISuoQif3Jbpd1ij21cCcry2R++byPKs4MO4Ms5w/UYWnTUfvwg1YReQQKNI2lI+NuH5PMOA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9f1bNX4yYHaymSPKHN3ykqRaflHQLdQFgGma2M45HEA=;
 b=jFYZ0ZTIJBZDtBN7WviUpRXi38OEvSawsUJWChat9aEhgFb9B2tT3dku2R/iutTuteMipvJmdf4WeXdeCZfhwzAFytznXlFaIw8/cZxgKBKC5SiHMQsJyZWPicLvOLl4FoLv2VYfU+C6eyVXcenI6BO3durWa28HGKjpnYeAkFWyIFZH6QAcBbic8BTms6L3xHY8vTO17n/djAiCODfALFdO0hco0MjNwQDR//usgyW7/YoJ6ogNOFZr57IADFXcz7Mpu8JGl5BzgtkxJtzxocI/ZWGwoQsMlFgbZA9LvY6h2bWL6xd763wMrlUmuGC1xeZrVBUnXFT2Tgn0M+c5Xg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9e918d63-3f32-6d43-9836-a9b75b98c295@suse.com>
Date: Mon, 16 Jan 2023 09:56:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, Xenia Ragiadakou <burzalodowa@gmail.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <f24da4a8-4df8-0ec3-32f9-41f134b87d67@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f24da4a8-4df8-0ec3-32f9-41f134b87d67@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0088.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8881:EE_
X-MS-Office365-Filtering-Correlation-Id: 6c010605-a358-45e1-ffd2-08daf79fa057
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6c010605-a358-45e1-ffd2-08daf79fa057
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 08:56:58.9891
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ce6jYqfEK2P/ckkMAksB83VufH/nnofDCtMpcTSexIDssG9H5fgCYqkQYAeG7U/e1D8j7LKeUczgNSvXc756ag==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8881

On 13.01.2023 12:55, Andrew Cooper wrote:
> On 13/01/2023 8:47 am, Jan Beulich wrote:
>> In _sh_propagate() I'm further puzzled: The iomem_access_permitted()
>> check certainly suggests very bad things could happen if it returned
>> false (i.e. in the implicit "else" case). The assumption looks to be that no
>> bad "target_mfn" can make it there. But overall things might end up
>> looking more sane (and being cheaper) when simply using "mmio_mfn"
>> instead.
> 
> That entire block looks suspect.  For one, I can't see why the ASSERT()
> is correct; we have literally just (conditionally) added CACHE_ATTR to
> pass_thru_flags and pulled everything across from gflags into sflags.

That is for !shadow_mode_refcounts() domains, i.e. PV, whereas the
outermost conditional here limits things to HVM. Using different
predicates of course obfuscates this some, but bringing those two
closer together (perhaps even merging them) didn't look reasonable
to do right here.

> It can also halve its number of external calls by rearranging the if/else
> chain and making better use of the type variable.

I did actually spend quite a bit of time trying to figure out a valid
way of re-arranging the order, but for every transformation I found a
reason why it wouldn't be valid. So I'm curious what valid
simplification(s) you see.

>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
> 
> Just out of context here is a comment which says VT-d but really means
> IOMMU.  It probably wants adjusting in the context of this change.

I was pondering that when making the patch, but wasn't sure about making
an adjustment not directly related to it right here. Now that you ask
for it, I've done so. I've also removed the "and device assigned" part.

>> @@ -571,7 +571,7 @@ _sh_propagate(struct vcpu *v,
>>                              gfn_to_paddr(target_gfn),
>>                              mfn_to_maddr(target_mfn),
>>                              X86_MT_UC);
>> -                else if ( iommu_snoop )
>> +                else if ( is_iommu_enabled(d) && iommu_snoop )
>>                      sflags |= pat_type_2_pte_flags(X86_MT_WB);
> 
> Hmm...  This is still one reasonably expensive nop; the PTE Flags for WB
> are 0.

Right, but besides this being unrelated to the patch (there's a
following "else", so the condition cannot be purged altogether), I
would wonder whether we really want to bake in more PAT layout <-> PTE
dependencies.

>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>      if ( !acpi_disabled )
>>      {
>>          ret = acpi_dmar_init();
>> +
>> +#ifndef iommu_snoop
>> +        /* A command line override for snoop control affects VT-d only. */
>> +        if ( ret )
>> +            iommu_snoop = true;
>> +#endif
> 
> I really don't think this is a good idea.  If nothing else, you're
> reinforcing the notion that this logic is somehow acceptable.
> 
> If instead the comment read something like:
> 
> /* This logic is pretty bogus, but necessary for now.  iommu_snoop as a
> control is only wired up for VT-d (which may be conditionally compiled
> out), and while AMD can control coherency, Xen forces coherent accesses
> unilaterally so iommu_snoop needs to report true on all AMD systems for
> logic elsewhere in Xen to behave correctly. */

I've extended the comment to this:

        /*
         * As long as there's no per-domain snoop control, and as long as on
         * AMD we uniformly force coherent accesses, a possible command line
         * override should affect VT-d only.
         */

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:07:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 09:07:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478470.741685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLSS-0007HA-05; Mon, 16 Jan 2023 09:07:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478470.741685; Mon, 16 Jan 2023 09:07:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLSR-0007H3-Tg; Mon, 16 Jan 2023 09:07:11 +0000
Received: by outflank-mailman (input) for mailman id 478470;
 Mon, 16 Jan 2023 09:07:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E1ZZ=5N=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pHLSR-0007Gx-3E
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 09:07:11 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2085.outbound.protection.outlook.com [40.107.14.85])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2737d73f-957d-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 10:07:08 +0100 (CET)
Received: from DB6PR07CA0058.eurprd07.prod.outlook.com (2603:10a6:6:2a::20) by
 AS2PR08MB10034.eurprd08.prod.outlook.com (2603:10a6:20b:64b::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Mon, 16 Jan
 2023 09:07:06 +0000
Received: from DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2a:cafe::84) by DB6PR07CA0058.outlook.office365.com
 (2603:10a6:6:2a::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.6 via Frontend
 Transport; Mon, 16 Jan 2023 09:07:06 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT041.mail.protection.outlook.com (100.127.142.233) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Mon, 16 Jan 2023 09:07:05 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Mon, 16 Jan 2023 09:07:05 +0000
Received: from a35adcd54538.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C1C9024E-6031-4696-AC70-00F477C47BC0.1; 
 Mon, 16 Jan 2023 09:06:55 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a35adcd54538.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 16 Jan 2023 09:06:55 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAVPR08MB9038.eurprd08.prod.outlook.com (2603:10a6:102:32d::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.18; Mon, 16 Jan
 2023 09:06:52 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.023; Mon, 16 Jan 2023
 09:06:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2737d73f-957d-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zog7tXTO/VoC0WmG7Bj7yCLW/uq66futVr1cwAL01Lg=;
 b=7N/TLZk227hAe+Wvwky1KqpUVwk434hjbZanbMFZ0RlkfVWkpU9KZCskCiXM2LLSkc9KFSS6ksme9E8DdUW6zQtmV04Fao7zeDb8cv64e6M6SmC4aWu/PG5+0f7V4L6Gdry4KZOGShrJHYf0zJc70M6wnHdqiumj75AfuU/MEjY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b6573d5a856029f0
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OvKKRcvQRWGLFKh48XsNb4WC1+OQp9KwpTzk/uYl56Ef5lcmH+JW6Nm1D9f4mKLzTDZaut5tsoRYmOEfcFjH7RlZafKxiQvw4QOfOtO1G3obso6M9Oiw+aMSGH7x4Hr9WlyOlFlolGHTr8kAS072QJfwwu7+AN2yKBT21Icet4tEACOzwPmB6jWIGYxotl1islS2ImLS73/rBOX/mEkTjDV68Coi292g6bUC94P0B3QTMTBDEOS/E0uOelzL+ieStr3GF/8yAm4nH5vBphfzkYMa3NJBDYqbJgBEsNT0WXGqZrpYT04t8uxy4JWveYjNZhARm++MVXrzGkszHc/tLw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Zog7tXTO/VoC0WmG7Bj7yCLW/uq66futVr1cwAL01Lg=;
 b=Fy7UqKQuVbCJilY2kDgHeKKB7x8UYK8hi6+UQHN++QOIQS7RADEqDKU4hBCb6x4QfyJsIWHI0GxLdakd3vsdSZeW74FwMGvEOdXXG688KnHUEaLi1av9BEbfUAWYn2ofyPcmfh5GeEcASftGE7IA4wzp8Xqx4CVS3LLFE2wm8nx1ukLDXQsWEyjD1Y7iLEakO/olNNPYeJF2XAqUHcMcg8mLWXqkOyZ15dRi4w1h2vIWePiupLFg4V4uezbB98nD37modU63jnnO0OVWZ+Ge4WVh49WRQiv/vkgCWZqu4XqMkT7+S+buG5IFx9453shDfMAcviBNcJ8Y3gju5HaO5Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zog7tXTO/VoC0WmG7Bj7yCLW/uq66futVr1cwAL01Lg=;
 b=7N/TLZk227hAe+Wvwky1KqpUVwk434hjbZanbMFZ0RlkfVWkpU9KZCskCiXM2LLSkc9KFSS6ksme9E8DdUW6zQtmV04Fao7zeDb8cv64e6M6SmC4aWu/PG5+0f7V4L6Gdry4KZOGShrJHYf0zJc70M6wnHdqiumj75AfuU/MEjY=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the
 runtime page-tables
Thread-Topic: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the
 runtime page-tables
Thread-Index: AQHZJzoPSgWN5vNZ70iPu83PQrLhuK6cnmWAgAQm2IA=
Date: Mon, 16 Jan 2023 09:06:51 +0000
Message-ID: <69C4635C-1C1D-4F00-813B-83DF9E6D825D@arm.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-15-julien@xen.org>
 <8A0AD684-FB21-46B3-A0C9-DE0BF67030D0@arm.com>
In-Reply-To: <8A0AD684-FB21-46B3-A0C9-DE0BF67030D0@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAVPR08MB9038:EE_|DBAEUR03FT041:EE_|AS2PR08MB10034:EE_
X-MS-Office365-Filtering-Correlation-Id: ee22c2b7-b26d-4a8b-bb14-08daf7a10a31
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <EB9A791146A81E42B5C19672046F9CFB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9038
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a2f7d13a-9729-4ead-fd23-08daf7a101d8
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 09:07:05.9016
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ee22c2b7-b26d-4a8b-bb14-08daf7a10a31
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB10034

Hi Julien,

> 
> I’ve left the boards to test all night, so on Monday I will be 100% sure this series
> is not introducing any issue.

The series passed the overnight tests on the Neoverse board, Raspberry Pi 4, and Juno board.

Cheers,
Luca


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:13:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 09:13:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478476.741695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLYC-0000HS-La; Mon, 16 Jan 2023 09:13:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478476.741695; Mon, 16 Jan 2023 09:13:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLYC-0000HK-IP; Mon, 16 Jan 2023 09:13:08 +0000
Received: by outflank-mailman (input) for mailman id 478476;
 Mon, 16 Jan 2023 09:13:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLMM=5N=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pHLYB-0000H6-2l
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 09:13:08 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f93987ef-957d-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 10:13:03 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pHLXi-005Xef-1V; Mon, 16 Jan 2023 09:12:38 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 7754A30030F;
 Mon, 16 Jan 2023 10:12:44 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id C608420A006E1; Mon, 16 Jan 2023 10:12:44 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f93987ef-957d-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=Kwtryj1HCBFMowhvUsB8ib2dwQrySNMnhO0NS2CkGbQ=; b=fL34W63fUC/XvD2Z3ptNvw8G8a
	FTOVlIkTnPr/ESjmavB79YyPUbKJbBW9CY2+qWn8xZCQkSzZZrehxK37u2dDt4spOuPEstoggX/+o
	rbrBLVnCuZQio8BsPRDwYPLHNonYLHwwrDmeFDabsXzr5foisEmnDcbybjJeWMHPPR7Z9DeBOlKwt
	S1FDmqLc35Kppc7hQWzuT/E2K7rjYlSDiCSiVC7/h4TUkCslR/zdAl7jvV88gStKgxs6Fn8i3+/48
	soi/xtuHAscLZbFSPhPNUACSXsGgy/slwncXr3F2CiQRIPWmNX3YBGAIL5DAMUhULfAHz6LQJ5366
	BhK/mVfw==;
Date: Mon, 16 Jan 2023 10:12:44 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Josh Poimboeuf <jpoimboe@kernel.org>
Subject: Re: [PATCH 0/2] x86/xen: don't return from xen_pv_play_dead()
Message-ID: <Y8UVDAgXh/y+B66k@hirez.programming.kicks-ass.net>
References: <20221125063248.30256-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20221125063248.30256-1-jgross@suse.com>

On Fri, Nov 25, 2022 at 07:32:46AM +0100, Juergen Gross wrote:
> All play_dead() functions but xen_pv_play_dead() don't return to the
> caller.
> 
> Adapt xen_pv_play_dead() to behave like the other play_dead() variants.
> 
> Juergen Gross (2):
>   x86/xen: don't let xen_pv_play_dead() return
>   x86/xen: mark xen_pv_play_dead() as __noreturn
> 
>  arch/x86/xen/smp.h      |  2 ++
>  arch/x86/xen/smp_pv.c   | 17 ++++-------------
>  arch/x86/xen/xen-head.S |  7 +++++++
>  tools/objtool/check.c   |  1 +
>  4 files changed, 14 insertions(+), 13 deletions(-)

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:21:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 09:21:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478483.741705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLfq-0001l2-EN; Mon, 16 Jan 2023 09:21:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478483.741705; Mon, 16 Jan 2023 09:21:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLfq-0001kv-AT; Mon, 16 Jan 2023 09:21:02 +0000
Received: by outflank-mailman (input) for mailman id 478483;
 Mon, 16 Jan 2023 09:21:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Os5i=5N=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pHLfo-0001kp-IP
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 09:21:00 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2070.outbound.protection.outlook.com [40.107.14.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15f8da1f-957f-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 10:20:58 +0100 (CET)
Received: from AM5PR0201CA0015.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::25) by DU2PR08MB10186.eurprd08.prod.outlook.com
 (2603:10a6:10:46c::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Mon, 16 Jan
 2023 09:20:45 +0000
Received: from AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::48) by AM5PR0201CA0015.outlook.office365.com
 (2603:10a6:203:3d::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Mon, 16 Jan 2023 09:20:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT005.mail.protection.outlook.com (100.127.140.218) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Mon, 16 Jan 2023 09:20:45 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Mon, 16 Jan 2023 09:20:44 +0000
Received: from d3c8cdb384b6.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9751E32E-072B-4172-A80A-8874C680B298.1; 
 Mon, 16 Jan 2023 09:20:33 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d3c8cdb384b6.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 16 Jan 2023 09:20:33 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by GVXPR08MB7822.eurprd08.prod.outlook.com (2603:10a6:150:3::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Mon, 16 Jan
 2023 09:20:31 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Mon, 16 Jan 2023
 09:20:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15f8da1f-957f-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HhM+jXBFcFHhJjrttRCB+P3RodE670Fip/KL5D8rZyo=;
 b=EzISAtW/nRMflJZJvhrOZHyCqZhQCYfnMo68YQl5uk2faQQbgqgfJfp8dMy8Ox5dbdlgtIuZnmd7B3BvkLOyHRKkhwRr7pFYx1Y/kUaulIDo27UipFlLxs6cuJMa9XBF+CUbfNLYGet1V46ZPGvJ5O5I2ksA7Tk3XYFwRaW/6TI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: c4b931767f79c115
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RlrDW2/mCJ4i6KnXm5KYweBro/e6pZGNetvGoJSUkXqxoBOk5v9jF4DrLOdbcFTUiOm6jgkC4XljHtIU6J5G6uwmYQCh4IHReZjsCn7AHrQ+6+x2TzfVJ/ffwYtXBhIMmD4hWKN2JPJBt19LO/IvCBxll2YzpNM6j8p9bCCY5nxy+O0/gYqeJwr1RVOTYImu+Aemz0Y/BfclRc7CUx9ED6SbIO9N7W8Dao7FOTwRnGEbeCWuE7DliA/rv/AsDcAMynO15HIckCz1gnivZg/vqnxRvaVbl1d4g23E9quLLebwNIPamvTpaQYcSj6QMWImKngWye3znLewWV4n7PnYvQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HhM+jXBFcFHhJjrttRCB+P3RodE670Fip/KL5D8rZyo=;
 b=TA7grINL4s7QWCAAEqQ/tB8fp0MXXSKGnHZjvkUJ0VTfibMmI6tNDUGNy1+53ngOErmx1M1FQevwwgD1IbbBO5FgAwUHifrEOBTqNYvzLFWEqrxPOcgRBLmhCVeACH15iSdOCt35zDDUrLYUK2cvaWx1kpxvH/dUHaRPo7ubxa/8vGTZNG76UeF8IIcIQZyUsQK9+kZKRrMNG6q70ZZdL5IXjXX7efMONEhWOkrugvLshn3mWML/NQFxa9qGFC7fXQjzrqPc6U7mL5Z36AMCwsvDJzyJD8QGhCgzaEzDXyzk3uYa55rS3wSBxsOATEgGA/BttU2MXkY7oo0/RLrqcA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HhM+jXBFcFHhJjrttRCB+P3RodE670Fip/KL5D8rZyo=;
 b=EzISAtW/nRMflJZJvhrOZHyCqZhQCYfnMo68YQl5uk2faQQbgqgfJfp8dMy8Ox5dbdlgtIuZnmd7B3BvkLOyHRKkhwRr7pFYx1Y/kUaulIDo27UipFlLxs6cuJMa9XBF+CUbfNLYGet1V46ZPGvJ5O5I2ksA7Tk3XYFwRaW/6TI=
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
Message-ID: <cb2ba9fe-e29f-c44e-9139-701f894060a8@arm.com>
Date: Mon, 16 Jan 2023 17:20:14 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
To: Jan Beulich <jbeulich@suse.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-3-wei.chen@arm.com>
 <9e32ffa1-1499-f9cd-7ca8-f9493b1269cb@suse.com>
 <PAXPR08MB7420E482CACC741B1BA976569EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <5a802657-a6e8-9cc3-fefb-09a7e68d1e5e@suse.com>
Content-Language: en-US
From: Wei Chen <Wei.Chen@arm.com>
In-Reply-To: <5a802657-a6e8-9cc3-fefb-09a7e68d1e5e@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: TY2PR06CA0012.apcprd06.prod.outlook.com
 (2603:1096:404:42::24) To PAXPR08MB7420.eurprd08.prod.outlook.com
 (2603:10a6:102:2b9::9)
MIME-Version: 1.0
X-MS-TrafficTypeDiagnostic:
	PAXPR08MB7420:EE_|GVXPR08MB7822:EE_|AM7EUR03FT005:EE_|DU2PR08MB10186:EE_
X-MS-Office365-Filtering-Correlation-Id: d843a37f-daa7-4b48-42cf-08daf7a2f2ae
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 NRb10WlaMXOSoz1Qg/Y2cwiVJmg/uplwgkNXY9V062pRqIDCCP3F2Grt/vo091YyLWsdWiqrLHDCKg0tXiynApITkGMGKIWemgRylfiGuF3Nb8bghK6J4NPm9OLIqt8Wg3/DBoZL8AE7h1n/CjkKNPfPQUz35HHHe22WllUdH72f0bH3DCQjVD3XaYTX67E/wZJM8mQJLvT0GpUXluFCCcfRrtFTOTZRdxlEOEln3RLkBNJ+IbkdXVA+UFLMJzfJmOD6eLYhkGXVh3DbW/ECExs3Bcg9xvUtbIJVRHfQd1zewEhcNuR2F2oOhHrBCe90+0jfWgsudFbCUG3Na+J5vvTUg/ZlpIcolpHDF4WmSpTmPPSWD4vlhlY7Accb0HUielRuY+rLeNUA2l5Gt4eHRjMhC5JWcf5vNkiPl6uFp2DAxYVYrWRPbJ5wev65npqDqUFfL904cT9C+AB6Yv027FOuz8lpBa/DMRR30GP7T1cR6d4LDB27cO3eP6/Elsq+nKj0j4LujHt0jLGtcBSrJbyL6QTXBltREwlPgrmTxpdC7do8CaqQlujkX1yUoKLgO6bsd/zBcQ2h4hKdNOqa4UhIqkSC13tNNF90HvhF7lb0FrqyYz9jDsCGC5DvmI6ZSmE0UOSqjLgyfIplEYVqSVGMR0BGjXy8KthB3ekG5icPM6cfnyLg5mXIgOfOPF44HSRBWJqlt5QrJvQ9Ugrhu+9aelkS59/Llx36CmRrU6I=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(396003)(376002)(346002)(366004)(136003)(451199015)(8936002)(31686004)(26005)(41300700001)(66556008)(66946007)(6916009)(8676002)(66476007)(4326008)(15650500001)(6506007)(2906002)(53546011)(316002)(2616005)(6666004)(36756003)(5660300002)(54906003)(86362001)(478600001)(6486002)(186003)(83380400001)(6512007)(31696002)(38100700002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR08MB7822
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bcd24e08-df85-4ff7-d085-08daf7a2e943
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sY/6bLdZB9gf0Y9MWmiInkh29/HJhn234jwlK17GSNIqTKDMgOMtEHhST+uYSyjc3UZ1ELbS/hCKh4iBySHC6q80jHgGDGLsaA6DBclkpLTSCUYYX6/O29IRzXn5uIsvmkyjggqdyabayFI2Jm9RgI/ENbXT+GQb9oogPwfLeFzoSOHh2/cn9N8xLiCIYEnCDufwznPdmMd9QWyMY3rYAF4IeAdUNmdFN5HxnMGK2fYuUJJYKtKGhLrttF++w1vWA/mwsBDJZhqXe8T1Oj1cMnYCbGZVNijkDkVRGtybcf/T5w42BdAzqbuht4tNtSeDauJV70ZMiwKt39ZFinogQVu2OjPNhedeFrYWb9GLWGjr3rP9ZuyzkOYw7+tHJMdR28cik7nY1G/Wfhg8k1RHa3em6Q9mV3H5yziLft38dL2IEY3gof+UCuddxAE4lNIT3XBA/kI54MOZykHYEJdiyJ+UU5auz6m23o5MngQ4YoD4o44E0qXUwIOIq85OzFCVvetOccD2YMXk1c+657PUOzK7UjCrFgTuGef6+mtiOn2UOzKB7OtYjDGTbrrnaf9dxTU9+n4TmNDcAnbdgYQwnX6+JXqKT+0mZ6LyubuD81xDz49+1hHQw1NNyxg3+ZU6Z7pKPhj/rPbqjqbjLXLF0+ScP3M9eC87XqriNCczY2RtbBvWtPXt81N9w6rMpT590pYGJ0upvF82r10LUINI6BGPFYfDJDQ5S8yIWm41/fM=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(39860400002)(346002)(136003)(376002)(451199015)(46966006)(36840700001)(40470700004)(31686004)(83380400001)(82310400005)(70206006)(2616005)(41300700001)(70586007)(47076005)(186003)(8676002)(4326008)(6512007)(26005)(36756003)(31696002)(86362001)(53546011)(5660300002)(36860700001)(82740400003)(8936002)(6862004)(81166007)(54906003)(478600001)(6506007)(336012)(40480700001)(316002)(2906002)(15650500001)(6486002)(6666004)(40460700003)(356005)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 09:20:45.3834
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d843a37f-daa7-4b48-42cf-08daf7a2f2ae
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB10186

Hi Jan,

On 2023/1/12 16:08, Jan Beulich wrote:
> On 12.01.2023 07:22, Wei Chen wrote:
>>> -----Original Message-----
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: 2023年1月11日 0:38
>>>
>>> On 10.01.2023 09:49, Wei Chen wrote:
>>>> --- a/xen/arch/arm/include/asm/numa.h
>>>> +++ b/xen/arch/arm/include/asm/numa.h
>>>> @@ -22,6 +22,12 @@ typedef u8 nodeid_t;
>>>>    */
>>>>   #define NR_NODE_MEMBLKS NR_MEM_BANKS
>>>>
>>>> +enum dt_numa_status {
>>>> +    DT_NUMA_INIT,
>>>
>>> I don't see any use of this. I also think the name isn't good, as INIT
>>> can be taken for "initializer" as well as "initialized". Suggesting an
>>> alternative would require knowing what the future plans with this are;
>>> right now ...
>>>
>>
>> static enum dt_numa_status __read_mostly device_tree_numa;
> 
> There's no DT_NUMA_INIT here. You _imply_ it having a value of zero.
> 

How about assigning device_tree_numa explicitly, like:
... __read_mostly device_tree_numa = DT_NUMA_UNINIT;

>> We use DT_NUMA_INIT to indicate that device_tree_numa still holds its
>> default value (the system's initial value, before any initialization).
>> Maybe renaming it to DT_NUMA_UNINIT would be better?
> 
> Perhaps, yes.
> 
>>>> --- a/xen/arch/x86/include/asm/numa.h
>>>> +++ b/xen/arch/x86/include/asm/numa.h
>>>> @@ -12,7 +12,6 @@ extern unsigned int numa_node_to_arch_nid(nodeid_t n);
>>>>
>>>>   #define ZONE_ALIGN (1UL << (MAX_ORDER+PAGE_SHIFT))
>>>>
>>>> -extern bool numa_disabled(void);
>>>>   extern nodeid_t setup_node(unsigned int pxm);
>>>>   extern void srat_detect_node(int cpu);
>>>>
>>>> --- a/xen/include/xen/numa.h
>>>> +++ b/xen/include/xen/numa.h
>>>> @@ -55,6 +55,7 @@ extern void numa_init_array(void);
>>>>   extern void numa_set_node(unsigned int cpu, nodeid_t node);
>>>>   extern void numa_initmem_init(unsigned long start_pfn, unsigned long
>>> end_pfn);
>>>>   extern void numa_fw_bad(void);
>>>> +extern bool numa_disabled(void);
>>>>
>>>>   extern int arch_numa_setup(const char *opt);
>>>>   extern bool arch_numa_unavailable(void);
>>>
>>> How is this movement of a declaration related to the subject of the patch?
>>>
>>
>> Can I add a note to the commit log saying something like "As we have
>> implemented numa_disabled for Arm, move numa_disabled to the common header"?
> 
> See your own patch 3, where you have a similar statement (albeit you mean
> "declaration" there, not "definition"). However, right now numa_disabled()
> is a #define on Arm, so the declaration becoming common isn't really
> warranted. In fact it'll get in the way of converting function-like macros
> to inline functions for Misra.
> 

Yes, I think you're right. It would also look a little strange: when we
disable NUMA, two headers would carry a numa_disabled declaration. I will
revert this change, and I will also convert the macro to a static inline
function.

Cheers,
Wei Chen

> Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:24:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 09:24:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478490.741715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLj5-0002Pr-2R; Mon, 16 Jan 2023 09:24:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478490.741715; Mon, 16 Jan 2023 09:24:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLj4-0002Pk-Vp; Mon, 16 Jan 2023 09:24:22 +0000
Received: by outflank-mailman (input) for mailman id 478490;
 Mon, 16 Jan 2023 09:24:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wx/b=5N=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHLj3-0002Pc-0i
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 09:24:21 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2057.outbound.protection.outlook.com [40.107.94.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8ba92745-957f-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 10:24:17 +0100 (CET)
Received: from CY5P221CA0067.NAMP221.PROD.OUTLOOK.COM (2603:10b6:930:4::42) by
 CH3PR12MB7644.namprd12.prod.outlook.com (2603:10b6:610:14f::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Mon, 16 Jan
 2023 09:24:12 +0000
Received: from CY4PEPF0000C985.namprd02.prod.outlook.com
 (2603:10b6:930:4:cafe::7b) by CY5P221CA0067.outlook.office365.com
 (2603:10b6:930:4::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Mon, 16 Jan 2023 09:24:12 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CY4PEPF0000C985.mail.protection.outlook.com (10.167.241.201) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Mon, 16 Jan 2023 09:24:11 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 03:24:11 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 03:23:58 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 16 Jan 2023 03:23:57 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ba92745-957f-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VCPTO1LoTWCGdxhrWuuKPO7xv7IMlSJcXrfmqXQuM1hj2D6T70DPlEQCJhFY4TDMiP52fTLdYbH1N14nHrx7smtV58FO6QlxKSGtOWxX0zwkP5p+riI05yqxdtZqrpvX5JVWTVB4eK97rQ+7bV5N56kWtUklL4CFM+iv3yAH4CH6iGrL38WALncMv/3XtCUp09h7SehZmqHv9+LDLcWxgkTbC2V7gnRdweg2tOp7SK7Cgh7zE4kbhBxBG6n3p/Gfc0V49NvKiw1R8MWb2+f057cLuuxsWb3qTGTW2w6Ww1eytaGIskNTh/N7UK9BzHJlS+xDau4I/epItKzTko5UbQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M98l9Cp54GVTYjj+Azy5Y384U23wcTC4npMxyvD3UEg=;
 b=GPX2jKrQ7NRI9qWlXIWMXjYoDGqJ/0hylH0wsXh0IgOkPYwbdhoQHzCHTJouk5Q4czqePLPPgzswmmaSNL+eQZCHZFFUIMAYe6ZhG0T9MQ8tWIkcO3sgSIj5OTaBeCp8Ia7N5lpYz6otfJ+7BYJIrpGc+g60dqMoEtvHs2FmB5qA4VkNRcrfY18vr8lYRXKuqk7y/iWkcsXblWuitPlLeMbOxbgFJlTWcuVKO/ip5TYO3TYNI3/kLhdRnn6DjLrFRAG6FjBE0KY4TgP4Hhou4oATacm3nvRCZ9keVqNjM4vFzXNCGj3ymDM7m+McbF/DUqeG87d7uXfY4TQfalii5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M98l9Cp54GVTYjj+Azy5Y384U23wcTC4npMxyvD3UEg=;
 b=wHmRi1+wBn6Ad6AYbG6PuAURb7DzPzMJLeV8AVjbEdntT/p7v2ftVpM2nyQmtVjHXOJl8CBpJLWC8xUmz0DQ2UY5rNHzJsSHJGMBwJbj3o9I0eG29ruWQuUUvy5uJPLAJXJfpFGDznegTiUkkvS0D3njNU66tvjFrA4woE6usv8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <19559eed-525e-206f-c8ac-f9902f610714@amd.com>
Date: Mon, 16 Jan 2023 10:23:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-14-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230113101136.479-14-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C985:EE_|CH3PR12MB7644:EE_
X-MS-Office365-Filtering-Correlation-Id: 235c979d-3e62-4453-5fc5-08daf7a36dba
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QMYL4QRWKk/QSD4iVcb9691I0//5Mev4RklknGxwWTmK7+KR8Azk0JXEUPbtp6SN6hoBEZnrCEZUB2ARPOa9UPF9fxBsEx3wW/4DpI410dG5Tccktngyy2juvqBs/nglQ6HJLc9hXI/OsDLR4nYtJmBf6Q3b9pXv7unQpH9HYuUwwP2LswXNSyE4xsgQITbww/IB2zAJYOLhdf/YljdOM/js0uv/hmaUUGCcpxcjyEVOCf/G/R1AxJv/Sttll5B+Icin3pX4GGPzItLRdKUlpjbfVdbHijFQ3/b6g1o6Xn4ui+cqs+wfET15jQYFhyURi1ZLb4LyPzn9mNSaLDTJZTC9uZVUYouYj1yjapi4V9sl68CiM1wIrFfMenN7Y+kAT3XNf3N0BUXuHpGrtOrX11xLyVHhs7YdYEc0v4uPtezo5yZ/NAnlfbqNa2DMQrJ1wuZ2pVzDjWDeSin1KCtNarGloXqNyONWo4S83aoSdfLwrP9P6SRpy3mQcU7lWg9OJ/YhWiAyE2w3SFSRxcTGz6JoCJVuKd9cKq0hn/gOKx8ex38DN6+kFeMRxFZ9g3iJfWq+Iwt8arN6DTEVv+noc96asROYAFweTAiNuObja76M/C9Lscs5ABmpWJ8BStzjQrj4E8uduzaXsRUjjt+ixpgreWJOPW3xDlBlAzZH6sZhyw9GvMa5wEEveIHhrdeoLeUw7xMVnlbq3nRtDfyhJLHN/zwuSXnh1g2uGXDu5pJrfyrp0ZTv+v321ustmvoo+0zv7NlO7lLIt+m1w3feMQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(346002)(39860400002)(376002)(451199015)(36840700001)(40470700004)(46966006)(53546011)(66899015)(336012)(16576012)(186003)(26005)(316002)(478600001)(70586007)(41300700001)(426003)(70206006)(8676002)(47076005)(83380400001)(8936002)(5660300002)(4326008)(2906002)(44832011)(36860700001)(40480700001)(356005)(81166007)(2616005)(82740400003)(31696002)(86362001)(54906003)(110136005)(36756003)(40460700003)(31686004)(82310400005)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 09:24:11.7912
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 235c979d-3e62-4453-5fc5-08daf7a36dba
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C985.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB7644

Hi Julien,

On 13/01/2023 11:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
> still on.
> 
> Switching TTBR is like replacing existing mappings with new ones. So
> we need to follow the break-before-make sequence.
> 
> In this case, it means the MMU needs to be switched off while the
> TTBR is updated. In order to disable the MMU, we need to first
> jump to an identity mapping.
> 
> Rename switch_ttbr() to switch_ttbr_id() and create a helper on
> top to temporarily map the identity mapping and call switch_ttbr_id()
> via the identity address.
> 
> switch_ttbr_id() is now reworked to temporarily turn off the MMU
> before updating the TTBR.
> 
> We also need to make sure the helper switch_ttbr() is part of the
> identity mapping. So move _end_boot past it.
> 
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
>     Changes in v4:
>         - Don't modify setup_pagetables() as we don't handle arm32.
>         - Move the clearing of the boot page tables in an earlier patch
>         - Fix the numbering
> 
>     Changes in v2:
>         - Remove the arm32 changes. This will be addressed differently
>         - Re-instate the instruction cache flush. This is not strictly
>           necessary but is kept for safety.
>         - Use "dsb ish"  rather than "dsb sy".
> 
> 
>     TODO:
>         * Handle the case where the runtime Xen is loaded at a different
>           position for cache coloring. This will be dealt with separately.
> ---
>  xen/arch/arm/arm64/head.S     | 50 +++++++++++++++++++++++------------
>  xen/arch/arm/arm64/mm.c       | 30 +++++++++++++++++++++
>  xen/arch/arm/include/asm/mm.h |  2 ++
>  xen/arch/arm/mm.c             |  2 --
>  4 files changed, 65 insertions(+), 19 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 663f5813b12e..5efd442b24af 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -816,30 +816,46 @@ ENDPROC(fail)
>   * Switch TTBR
>   *
>   * x0    ttbr
> - *
> - * TODO: This code does not comply with break-before-make.
>   */
> -ENTRY(switch_ttbr)
> -        dsb   sy                     /* Ensure the flushes happen before
> -                                      * continuing */
> -        isb                          /* Ensure synchronization with previous
> -                                      * changes to text */
> -        tlbi   alle2                 /* Flush hypervisor TLB */
> -        ic     iallu                 /* Flush I-cache */
> -        dsb    sy                    /* Ensure completion of TLB flush */
> +ENTRY(switch_ttbr_id)
> +        /* 1) Ensure any previous read/write have completed */
> +        dsb    ish
> +        isb
> +
> +        /* 2) Turn off MMU */
> +        mrs    x1, SCTLR_EL2
> +        bic    x1, x1, #SCTLR_Axx_ELx_M
> +        msr    SCTLR_EL2, x1
> +        isb
> +
> +        /*
> +         * 3) Flush the TLBs.
> +         * See asm/arm64/flushtlb.h for the explanation of the sequence.
> +         */
> +        dsb   nshst
> +        tlbi  alle2
> +        dsb   nsh
> +        isb
> +
> +        /* 4) Update the TTBR */
> +        msr   TTBR0_EL2, x0
>          isb
> 
> -        msr    TTBR0_EL2, x0
> +        /*
> +         * 5) Flush I-cache
> +         * This should not be necessary but it is kept for safety.
> +         */
> +        ic     iallu
> +        isb
> 
> -        isb                          /* Ensure synchronization with previous
> -                                      * changes to text */
> -        tlbi   alle2                 /* Flush hypervisor TLB */
> -        ic     iallu                 /* Flush I-cache */
> -        dsb    sy                    /* Ensure completion of TLB flush */
> +        /* 6) Turn on the MMU */
> +        mrs   x1, SCTLR_EL2
> +        orr   x1, x1, #SCTLR_Axx_ELx_M  /* Enable MMU */
> +        msr   SCTLR_EL2, x1
>          isb
> 
>          ret
> -ENDPROC(switch_ttbr)
> +ENDPROC(switch_ttbr_id)
> 
>  #ifdef CONFIG_EARLY_PRINTK
>  /*
> diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
> index 798ae93ad73c..2ede4e75ae33 100644
> --- a/xen/arch/arm/arm64/mm.c
> +++ b/xen/arch/arm/arm64/mm.c
> @@ -120,6 +120,36 @@ void update_identity_mapping(bool enable)
>      BUG_ON(rc);
>  }
> 
> +extern void switch_ttbr_id(uint64_t ttbr);
> +
> +typedef void (switch_ttbr_fn)(uint64_t ttbr);
> +
> +void __init switch_ttbr(uint64_t ttbr)
> +{
> +    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
Shouldn't id_addr be of type paddr_t?

> +    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;
> +    lpae_t pte;
> +
> +    /* Enable the identity mapping in the boot page tables */
> +    update_identity_mapping(true);
Could you please add an empty line here?

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:29:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 09:29:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478495.741725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLoJ-000342-MT; Mon, 16 Jan 2023 09:29:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478495.741725; Mon, 16 Jan 2023 09:29:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHLoJ-00033v-Js; Mon, 16 Jan 2023 09:29:47 +0000
Received: by outflank-mailman (input) for mailman id 478495;
 Mon, 16 Jan 2023 09:29:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHLoI-00033p-IS
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 09:29:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHLoI-0001OJ-6c; Mon, 16 Jan 2023 09:29:46 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.9.85])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHLoH-0004D6-Vb; Mon, 16 Jan 2023 09:29:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=c41tjpbnGSyx0wKc64pMWfHAkpC8dRrAAtw6XGn+YjY=; b=iU0YNo9pO7sxIXpym6dfeCvk6C
	OeHKIMQAhMye7AHCasvJ4Nt4LE2g80tOipi47LkiXAVOZnVHfVRQhTd7qwXoAt2/Sc5c53IZuvf8F
	W8nErHhi6JsBR2vtqo5vyKNp4E2Ph1DJna4bfRBquCt0QrvVyCou8RzHuaDaWK2jTCUk=;
Message-ID: <54fdf78a-bd46-eae3-f00f-a21738561874@xen.org>
Date: Mon, 16 Jan 2023 09:29:43 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 11/14] xen/arm64: Rework the memory layout
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-12-julien@xen.org>
 <72b2be45-d7bc-a94f-1d49-b9fc0b2fd081@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <72b2be45-d7bc-a94f-1d49-b9fc0b2fd081@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 16/01/2023 08:46, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 13/01/2023 11:11, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Xen is currently not fully compliant with the Arm Arm because it will
>> switch the TTBR with the MMU on.
>>
>> In order to be compliant, we need to disable the MMU before
>> switching the TTBR. The implication is the page-tables should
>> contain an identity mapping of the code switching the TTBR.
>>
>> In most cases we expect Xen to be loaded in low memory. I am aware
>> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
>> To give us some slack, consider that Xen may be loaded in the first 2TB
>> of the physical address space.
>>
>> The memory layout is reshuffled to keep the first two slots of the zeroeth
> Should be "four slots" instead of "two".
> 
>> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
>> tweak of the boot code because XEN_VIRT_START cannot be used as an
>> immediate.
>>
>> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
>> loaded below 2TB.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ----
>>      Changes in v4:
>>          - Correct the documentation
>>          - The start address is 2TB, so slot0 is 4 not 2.
>>
>>      Changes in v2:
>>          - Reword the commit message
>>          - Load Xen at 2TB + 2MB
>>          - Update the documentation to reflect the new layout
>> ---
>>   xen/arch/arm/arm64/head.S         |  3 ++-
>>   xen/arch/arm/include/asm/config.h | 35 ++++++++++++++++++++-----------
>>   xen/arch/arm/mm.c                 | 11 +++++-----
>>   3 files changed, 31 insertions(+), 18 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index 4a3f87117c83..663f5813b12e 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -607,7 +607,8 @@ create_page_tables:
>>            * need an additional 1:1 mapping, the virtual mapping will
>>            * suffice.
>>            */
>> -        cmp   x19, #XEN_VIRT_START
>> +        ldr   x0, =XEN_VIRT_START
>> +        cmp   x19, x0
>>           bne   1f
>>           ret
>>   1:
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 6c1b762e976d..c5d407a7495f 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -72,15 +72,12 @@
>>   #include <xen/page-size.h>
>>
>>   /*
>> - * Common ARM32 and ARM64 layout:
>> + * ARM32 layout:
>>    *   0  -   2M   Unmapped
>>    *   2M -   4M   Xen text, data, bss
>>    *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>    *   6M -  10M   Early boot mapping of FDT
>> - *   10M - 12M   Livepatch vmap (if compiled in)
>> - *
>> - * ARM32 layout:
>> - *   0  -  12M   <COMMON>
>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>    *
>>    *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
>>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>> @@ -90,14 +87,22 @@
>>    *   2G -   4G   Domheap: on-demand-mapped
>>    *
>>    * ARM64 layout:
>> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
>> - *   0  -  12M   <COMMON>
>> + * 0x0000000000000000 - 0x00001fffffffffff (2TB, L0 slots [0..3])
> End address should be 0x1FFFFFFFFFF (one less f).
> 
>> + *  Reserved to identity map Xen
>> + *
>> + * 0x0000020000000000 - 0x000028fffffffff (512GB, L0 slot [4])
> End address should be 0x27FFFFFFFFF.
> 
>> + *  (Relative offsets)
>> + *   0  -   2M   Unmapped
>> + *   2M -   4M   Xen text, data, bss
>> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>> + *   6M -  10M   Early boot mapping of FDT
>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>    *
>>    *   1G -   2G   VMAP: ioremap and early_ioremap
>>    *
>>    *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
>>    *
>> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>> + * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [5..255])
> Start address should be 0x28000000000.

I have updated all the addresses.

> 
> Not related to this patch:
> I took a look at config.h and spotted two things:
> 1) DIRECTMAP_SIZE calculation is incorrect. It is defined as (SLOT0_ENTRY_SIZE * (265-256))
> but it actually should be (SLOT0_ENTRY_SIZE * (266-256)) i.e. 10 slots and not 9. Due to this
> bug we actually support 4.5TB of direct-map and not 5TB.
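The slot arithmetic behind this point can be sanity-checked with a small sketch (assuming, as on arm64 with 4K pages, that one zeroeth-level slot spans 512GiB; the helper name is illustrative):

```c
#include <stdint.h>

/* One L0 slot covers 512GiB on arm64 (4K pages). */
#define SLOT0_ENTRY_SIZE (512ULL << 30)

/* Directmap spans slots [256, end_slot): 265 gives 9 slots (4.5TiB),
 * 266 gives the intended 10 slots (5TiB). */
static uint64_t directmap_size(unsigned int end_slot)
{
    return SLOT0_ENTRY_SIZE * (end_slot - 256);
}
```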


> 
> 2) frametable information
> struct page_info is no longer 24B but 56B for arm64 and 32B for arm32.

The values were always wrong. I have an action in my todo list to look 
at it, but never got the time.

There are two problems with the current values:
   1) The size of the frametable is not big enough, as you pointed out below.
   2) The struct page_info could cross a cache line. We should decide 
whether we want to increase the size or attempt to reduce it.

> It looks like SUPPORT.md
> took this into account when stating that we support 12GB for arm32 and 2TB for arm64. However,
> this is also wrong as it does not take into account physical address compression. With PDX that
> is enabled by default we could fit tens of TB in 32GB frametable.
I don't understand your argument. Yes, the PDX can compress, but it will
only compress out non-RAM pages. So while I agree that this could cover
tens of TB of physical address space, we will always be able to support a
fixed amount of RAM.
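To illustrate the distinction: PDX compression removes physical-address bits that are zero for every RAM page, shrinking the frametable index space without increasing the amount of RAM covered. A simplified single-hole sketch (the real pfn_to_pdx in Xen works from a computed mask; the hole position and width here are made-up assumptions):

```c
#include <stdint.h>

/* Assume bits [HOLE_SHIFT, HOLE_SHIFT + HOLE_BITS) of the PFN are zero
 * for all RAM banks; squeeze them out so the frametable only needs
 * entries for the compressed indexes. Constants are hypothetical. */
#define HOLE_SHIFT 18
#define HOLE_BITS  4

static uint64_t pfn_to_pdx_sketch(uint64_t pfn)
{
    uint64_t low  = pfn & ((1ULL << HOLE_SHIFT) - 1);
    uint64_t high = pfn >> (HOLE_SHIFT + HOLE_BITS);

    return (high << HOLE_SHIFT) | low;
}
```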

> I think we want to get rid of
> comments like "Frametable: 24 bytes per page for 16GB of RAM" in favor of just "Frametable".

I would rather update the comments because we need a way to explain how 
we came up with the size.

> This is because the struct page_info size may change again
We could have a BUILD_BUG_ON() confirming the size of the page_info.
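A sketch of that suggestion: BUILD_BUG_ON is a real Xen macro, but the struct and the 64-byte bound below are illustrative stand-ins, not the actual page_info layout:

```c
#include <stdint.h>

/* Minimal stand-in for Xen's BUILD_BUG_ON using C11 _Static_assert. */
#define BUILD_BUG_ON(cond) _Static_assert(!(cond), #cond)

/* Hypothetical layout; the real struct page_info lives in asm/mm.h. */
struct example_page_info {
    unsigned long count_info;
    unsigned long type_info;
    uint64_t inuse[4];
};

/* Fail the build if the struct ever grows past a 64-byte cache line. */
BUILD_BUG_ON(sizeof(struct example_page_info) > 64);
```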

> and it is rather difficult to
> calculate the max RAM size supported with PDX enabled.
See above about the max RAM size.

> 
> If you want, I can push the patches for these issues.

Happy to review them.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:32:30 2023
Message-ID: <18b23f77-6d51-0851-5a58-b19d790b09cc@xen.org>
Date: Mon, 16 Jan 2023 09:32:25 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 13/14] xen/arm64: mm: Rework switch_ttbr()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-14-julien@xen.org>
 <19559eed-525e-206f-c8ac-f9902f610714@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <19559eed-525e-206f-c8ac-f9902f610714@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 16/01/2023 09:23, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 13/01/2023 11:11, Julien Grall wrote:
>> diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
>> index 798ae93ad73c..2ede4e75ae33 100644
>> --- a/xen/arch/arm/arm64/mm.c
>> +++ b/xen/arch/arm/arm64/mm.c
>> @@ -120,6 +120,36 @@ void update_identity_mapping(bool enable)
>>       BUG_ON(rc);
>>   }
>>
>> +extern void switch_ttbr_id(uint64_t ttbr);
>> +
>> +typedef void (switch_ttbr_fn)(uint64_t ttbr);
>> +
>> +void __init switch_ttbr(uint64_t ttbr)
>> +{
>> +    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
> Shouldn't id_addr be of type paddr_t?

No because...

> 
>> +    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;

... here it will be used as a virtual address.

>> +    lpae_t pte;
>> +
>> +    /* Enable the identity mapping in the boot page tables */
>> +    update_identity_mapping(true);
> Could you please add an empty line here?

Sure.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:32:59 2023
Message-ID: <91c6614a-1168-8017-ffae-75984eff2dd9@amd.com>
Date: Mon, 16 Jan 2023 10:32:48 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 09/14] xen/arm32: head: Remove restriction where to
 load Xen
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-10-julien@xen.org>
 <36a8cb2d-0bea-cdc3-5311-c743f60763d5@amd.com>
 <cb8a09fd-6174-f747-1fc1-1ab472eecdfd@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <cb8a09fd-6174-f747-1fc1-1ab472eecdfd@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit



On 16/01/2023 09:55, Julien Grall wrote:
> 
> 
> On 16/01/2023 08:14, Michal Orzel wrote:
>> Hi Julien,
> 
> Hi Michal,
> 
>> On 13/01/2023 11:11, Julien Grall wrote:
>>> +/*
>>> + * Remove the temporary mapping of Xen starting at TEMPORARY_XEN_VIRT_START.
>>> + *
>>> + * Clobbers r0 - r1
>>> + */
>>> +remove_temporary_mapping:
>>> +        /* r2:r3 := invalid page-table entry */
>>> +        mov   r2, #0
>>> +        mov   r3, #0
>>> +
>>> +        adr_l r0, boot_pgtable
>>> +        mov_w r1, TEMPORARY_XEN_VIRT_START
>>> +        get_table_slot r1, r1, 1     /* r1 := first slot */
>> Can't we just use TEMPORARY_AREA_FIRST_SLOT?
> 
> IMHO, it would make the code a bit more difficult to read because the
> connection between TEMPORARY_XEN_VIRT_START and
> TEMPORARY_AREA_FIRST_SLOT is not totally obvious.
> 
> So I would rather prefer if this stays like that.
> 
>>
>>> +        lsl   r1, r1, #3             /* r1 := first slot offset */
>>> +        strd  r2, r3, [r0, r1]
>>> +
>>> +        flush_xen_tlb_local r0
>>> +
>>> +        mov  pc, lr
>>> +ENDPROC(remove_temporary_mapping)
>>> +
>>>   /*
>>>    * Map the UART in the fixmap (when earlyprintk is used) and hook the
>>>    * fixmap table in the page tables.
>>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>>> index 87851e677701..6c1b762e976d 100644
>>> --- a/xen/arch/arm/include/asm/config.h
>>> +++ b/xen/arch/arm/include/asm/config.h
>>> @@ -148,6 +148,20 @@
>>>   /* Number of domheap pagetable pages required at the second level (2MB mappings) */
>>>   #define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
>>>
>>> +/*
>>> + * The temporary area is overlapping with the domheap area. This may
>>> + * be used to create an alias of the first slot containing Xen mappings
>>> + * when turning on/off the MMU.
>>> + */
>>> +#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
>>> +
>>> +/* Calculate the address in the temporary area */
>>> +#define TEMPORARY_AREA_ADDR(addr)                           \
>>> +     (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                    \
>>> +      (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
>> XEN_PT_LEVEL_{MASK/SHIFT} should be used when we do not know the level upfront.
>> Otherwise, no need for opencoding and you should use FIRST_MASK and FIRST_SHIFT.
> 
> We discussed in the past to phase out the use of FIRST_MASK, FIRST_SHIFT
> because the name is too generic. So for new code, we should use
> XEN_PT_LEVEL_{MASK/SHIFT}.
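A worked sketch of the macro's arithmetic, under the assumption (arm32 LPAE) that a first-level entry covers 1GiB (shift 30) and that DOMHEAP_VIRT_START is 2GiB, i.e. a temporary-area first slot of 2; these constants are illustrative, not read from config.h:

```c
#include <stdint.h>

#define LEVEL1_SHIFT          30   /* assumed XEN_PT_LEVEL_SHIFT(1) */
#define TEMP_AREA_FIRST_SLOT  2ULL /* assumed first_table_offset(2GiB) */

/* Keep the offset within a 1GiB slot, then rebase it into the
 * temporary-area slot, producing an alias of the original mapping. */
static uint64_t temporary_area_addr(uint64_t addr)
{
    uint64_t offset = addr & ((1ULL << LEVEL1_SHIFT) - 1);

    return offset | (TEMP_AREA_FIRST_SLOT << LEVEL1_SHIFT);
}
```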
In that case:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:35:26 2023
Message-ID: <80dc13f9-f193-2def-cce6-93fd519e824f@gmail.com>
Date: Mon, 16 Jan 2023 11:35:14 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v3 6/8] x86/iommu: call pi_update_irte through an
 hvm_function callback
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Paul Durrant <paul@xen.org>
References: <20230116070431.905594-1-burzalodowa@gmail.com>
 <20230116070431.905594-7-burzalodowa@gmail.com>
Content-Language: en-US
In-Reply-To: <20230116070431.905594-7-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/16/23 09:04, Xenia Ragiadakou wrote:
> Posted interrupt support in Xen is currently implemented only for the
> Intel platforms. Instead of calling pi_update_irte() directly from the
> common hvm code, add a pi_update_irte callback to the hvm_function_table.
> Then, create a wrapper function hvm_pi_update_irte() to be used by the
> common hvm code.
> 
> In the pi_update_irte callback prototype, pass the vcpu as first parameter
> instead of the posted-interrupt descriptor that is platform specific, and
> remove the const qualifier from the parameter gvec since it is not needed
> and because it does not compile with the alternative code patching in use.
> 
> Since the posted interrupt descriptor is Intel VT-x specific while
> msi_msg_write_remap_rte is iommu specific, open code pi_update_irte() inside
> vmx_pi_update_irte() but replace msi_msg_write_remap_rte() with generic
> iommu_update_ire_from_msi(). That way vmx_pi_update_irte() is not bound to
> Intel VT-d anymore.
> 
> Remove the now unused pi_update_irte() implementation.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> ---
> 
> Changes in v3:
>    - open code pi_update_irte() in vmx_pi_update_irte() but instead of using
>      the VT-d specific function msi_msg_write_remap_rte() use the generic
>      iommu_update_ire_from_msi()
>    - delete pi_update_irte() from vtd/intremap.c
>    - move the initialization of vmx pi_update_irte stub to start_vmx() and
>      perform it only if iommu_intpost is enabled.
>    - move pi_update_irte right after handle_eoi
> 
>   xen/arch/x86/hvm/vmx/vmx.c             | 41 ++++++++++++++++++++++++++
>   xen/arch/x86/include/asm/hvm/hvm.h     | 10 +++++++
>   xen/arch/x86/include/asm/iommu.h       |  3 --
>   xen/drivers/passthrough/vtd/intremap.c | 36 ----------------------
>   xen/drivers/passthrough/x86/hvm.c      |  5 ++--
>   5 files changed, 53 insertions(+), 42 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 15a07933ee..a5355dbac2 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -396,6 +396,43 @@ void vmx_pi_hooks_deassign(struct domain *d)
>       domain_unpause(d);
>   }
>   
> +/*
> + * This function is used to update the IRTE for posted-interrupt
> + * when guest changes MSI/MSI-X information.
> + */
> +static int cf_check vmx_pi_update_irte(const struct vcpu *v,
> +                                       const struct pirq *pirq, uint8_t gvec)
> +{
> +    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
> +    struct irq_desc *desc;
> +    struct msi_desc *msi_desc;
> +    int rc;
> +
> +    desc = pirq_spin_lock_irq_desc(pirq, NULL);
> +    if ( !desc )
> +        return -EINVAL;
> +
> +    msi_desc = desc->msi_desc;
> +    if ( !msi_desc )
> +    {
> +        rc = -ENODEV;
> +        goto unlock_out;
> +    }
> +    msi_desc->pi_desc = pi_desc;
> +    msi_desc->gvec = gvec;
> +
> +    spin_unlock_irq(&desc->lock);
> +
> +    ASSERT(pcidevs_locked());
> +
> +    return iommu_update_ire_from_msi(msi_desc, &msi_desc->msg);
> +
> + unlock_out:
> +    spin_unlock_irq(&desc->lock);
> +
> +    return rc;
> +}
> +
>   static const struct lbr_info {
>       u32 base, count;
>   } p4_lbr[] = {
> @@ -2969,8 +3006,12 @@ const struct hvm_function_table * __init start_vmx(void)
>       {
>           alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
>           if ( iommu_intpost )
> +        {
>               alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
>   
> +            vmx_function_table.pi_update_irte = vmx_pi_update_irte;
> +        }
> +
>           vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
>           vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
>           vmx_function_table.test_pir            = vmx_test_pir;
> diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
> index 93254651f2..b1990a047c 100644
> --- a/xen/arch/x86/include/asm/hvm/hvm.h
> +++ b/xen/arch/x86/include/asm/hvm/hvm.h
> @@ -28,6 +28,8 @@
>   #include <asm/x86_emulate.h>
>   #include <asm/hvm/asid.h>
>   
> +struct pirq; /* needed by pi_update_irte */
> +
>   #ifdef CONFIG_HVM_FEP
>   /* Permit use of the Forced Emulation Prefix in HVM guests */
>   extern bool_t opt_hvm_fep;
> @@ -213,6 +215,8 @@ struct hvm_function_table {
>       void (*sync_pir_to_irr)(struct vcpu *v);
>       bool (*test_pir)(const struct vcpu *v, uint8_t vector);
>       void (*handle_eoi)(uint8_t vector, int isr);
> +    int (*pi_update_irte)(const struct vcpu *v, const struct pirq *pirq,
> +                          uint8_t gvec);
>   
>       /*Walk nested p2m  */
>       int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa,
> @@ -774,6 +778,12 @@ static inline void hvm_set_nonreg_state(struct vcpu *v,
>           alternative_vcall(hvm_funcs.set_nonreg_state, v, nrs);
>   }
>   
> +static inline int hvm_pi_update_irte(const struct vcpu *v,
> +                                     const struct pirq *pirq, uint8_t gvec)
> +{
> +    return alternative_call(hvm_funcs.pi_update_irte, v, pirq, gvec);
> +}
> +
>   #else  /* CONFIG_HVM */
>   
>   #define hvm_enabled false
> diff --git a/xen/arch/x86/include/asm/iommu.h b/xen/arch/x86/include/asm/iommu.h
> index fc0afe35bf..4794e72cf1 100644
> --- a/xen/arch/x86/include/asm/iommu.h
> +++ b/xen/arch/x86/include/asm/iommu.h
> @@ -129,9 +129,6 @@ void iommu_identity_map_teardown(struct domain *d);
>   
>   extern bool untrusted_msi;
>   
> -int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
> -                   const uint8_t gvec);
> -
>   extern bool iommu_non_coherent, iommu_superpages;
>   
>   static inline void iommu_sync_cache(const void *addr, unsigned int size)

Here, I forgot to remove the #include <asm/hvm/vmx/vmcs.h>. I will fix 
it in the next version.
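Stepping back, the series replaces a direct cross-component call with the usual ops-table shape: an optional hook in hvm_function_table plus a thin wrapper for common code. A plain-C sketch of that pattern (names and the stub are illustrative; the real wrapper uses alternative_call(), and the hook is only installed when iommu_intpost is enabled):

```c
#include <stdint.h>

/* Per-platform function table with an optional posted-interrupt hook. */
struct hvm_ops {
    int (*pi_update_irte)(int vcpu_id, int pirq, uint8_t gvec);
};

static struct hvm_ops hvm_funcs;

/* Stub standing in for vmx_pi_update_irte; it just echoes the vector. */
static int stub_pi_update_irte(int vcpu_id, int pirq, uint8_t gvec)
{
    (void)vcpu_id;
    (void)pirq;
    return gvec;
}

/* Wrapper used by common code; a plain indirect call stands in for
 * alternative_call(), with a no-op fallback when no hook is set. */
static int hvm_pi_update_irte(int vcpu_id, int pirq, uint8_t gvec)
{
    return hvm_funcs.pi_update_irte
           ? hvm_funcs.pi_update_irte(vcpu_id, pirq, gvec)
           : 0;
}
```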

> diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
> index 1512e4866b..b39bc83282 100644
> --- a/xen/drivers/passthrough/vtd/intremap.c
> +++ b/xen/drivers/passthrough/vtd/intremap.c
> @@ -866,39 +866,3 @@ void cf_check intel_iommu_disable_eim(void)
>       for_each_drhd_unit ( drhd )
>           disable_qinval(drhd->iommu);
>   }
> -
> -/*
> - * This function is used to update the IRTE for posted-interrupt
> - * when guest changes MSI/MSI-X information.
> - */
> -int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
> -    const uint8_t gvec)
> -{
> -    struct irq_desc *desc;
> -    struct msi_desc *msi_desc;
> -    int rc;
> -
> -    desc = pirq_spin_lock_irq_desc(pirq, NULL);
> -    if ( !desc )
> -        return -EINVAL;
> -
> -    msi_desc = desc->msi_desc;
> -    if ( !msi_desc )
> -    {
> -        rc = -ENODEV;
> -        goto unlock_out;
> -    }
> -    msi_desc->pi_desc = pi_desc;
> -    msi_desc->gvec = gvec;
> -
> -    spin_unlock_irq(&desc->lock);
> -
> -    ASSERT(pcidevs_locked());
> -
> -    return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
> -
> - unlock_out:
> -    spin_unlock_irq(&desc->lock);
> -
> -    return rc;
> -}
> diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
> index a16e0e5344..e720461a14 100644
> --- a/xen/drivers/passthrough/x86/hvm.c
> +++ b/xen/drivers/passthrough/x86/hvm.c
> @@ -381,8 +381,7 @@ int pt_irq_create_bind(
>   
>           /* Use interrupt posting if it is supported. */
>           if ( iommu_intpost )
> -            pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
> -                           info, pirq_dpci->gmsi.gvec);
> +            hvm_pi_update_irte(vcpu, info, pirq_dpci->gmsi.gvec);
>   
>           if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
>           {
> @@ -672,7 +671,7 @@ int pt_irq_destroy_bind(
>               what = "bogus";
>       }
>       else if ( pirq_dpci && pirq_dpci->gmsi.posted )
> -        pi_update_irte(NULL, pirq, 0);
> +        hvm_pi_update_irte(NULL, pirq, 0);
>   
>       if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
>            list_empty(&pirq_dpci->digl_list) )
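As background for readers skimming the thread, the hvm_funcs change in this patch follows the usual optional-hook pattern: common code calls a thin wrapper that dispatches through a per-platform function table, so pi_update_irte can move out of generic IOMMU code. A minimal stand-alone sketch of that pattern (types and names are simplified placeholders, not Xen's real declarations):

```c
#include <stddef.h>

/* Hypothetical per-platform hook table, mirroring the hvm_funcs style
 * used in the patch above (the real Xen types are more involved). */
struct hvm_ops {
    int (*pi_update_irte)(int vcpu_id, unsigned int pirq, unsigned char gvec);
};

/* A platform-specific implementation; a real one would rewrite the
 * interrupt remapping table entry. Here it just echoes the vector. */
static int vmx_pi_update_irte(int vcpu_id, unsigned int pirq, unsigned char gvec)
{
    (void)vcpu_id;
    (void)pirq;
    return (int)gvec;
}

static struct hvm_ops hvm_funcs = { .pi_update_irte = vmx_pi_update_irte };

/* Common code calls this wrapper without knowing which platform backs
 * it; a missing hook is reported instead of dereferenced. */
static int hvm_pi_update_irte(int vcpu_id, unsigned int pirq, unsigned char gvec)
{
    if (hvm_funcs.pi_update_irte == NULL)
        return -1; /* hook not implemented on this platform */
    return hvm_funcs.pi_update_irte(vcpu_id, pirq, gvec);
}
```

The sketch only illustrates the dispatch shape; Xen's wrapper additionally uses alternative_call() to patch the indirect call at boot.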

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:41:12 2023
Message-ID: <1ed674f8-4e69-de96-aca0-c25e589cc998@arm.com>
Date: Mon, 16 Jan 2023 17:40:25 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v2 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-4-wei.chen@arm.com>
 <9fd67aa2-0bd5-16a2-1e19-139504c2090f@suse.com>
 <PAXPR08MB7420A4E3DA252F9F37450EDA9EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <ee06b9a7-bfc7-e6f1-f2f6-f73a1fb42d6d@suse.com>
 <71e806f4-8bd2-dd47-67b9-958bb9061c7b@xen.org>
From: Wei Chen <Wei.Chen@arm.com>
In-Reply-To: <71e806f4-8bd2-dd47-67b9-958bb9061c7b@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Julien,

On 2023/1/12 17:47, Julien Grall wrote:
> Hi,
> 
> On 12/01/2023 08:11, Jan Beulich wrote:
>> On 12.01.2023 07:31, Wei Chen wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 2023年1月11日 0:47
>>>>
>>>> On 10.01.2023 09:49, Wei Chen wrote:
>>>>> --- a/xen/arch/arm/include/asm/numa.h
>>>>> +++ b/xen/arch/arm/include/asm/numa.h
>>>>> @@ -28,6 +28,20 @@ enum dt_numa_status {
>>>>>       DT_NUMA_OFF,
>>>>>   };
>>>>>
>>>>> +/*
>>>>> + * In the ACPI spec, 0-9 are reserved values for node distance,
>>>>> + * 10 indicates local node distance, and 20 indicates remote node
>>>>> + * distance. Node distance maps set in the device tree follow
>>>>> + * ACPI's definitions.
> 
> Looking at the ACPI spec, I agree that the local node distance is 
> defined as 10. But I couldn't find any mention of the value 20.
> 
> How is NUMA_REMOTE_DISTANCE meant to be used? Is this a default 
> value? If so, maybe we should add "DEFAULT" in the name.
> 

I think you're right; maybe we can rename NUMA_REMOTE_DISTANCE to
NUMA_DEFAULT_DISTANCE. Different pairs of nodes may have different
remote distance values, so I cannot define a single value for all of
them.

As for why 20 is used as the default value, I followed ACPI's logic:
when ACPI doesn't provide a SLIT table, 20 is used as the default
distance between all nodes.
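To make the fallback behaviour being discussed concrete, here is a small sketch using the patch's constants: a node is 10 from itself, 20 from every other node when no distance matrix is supplied, and values 0-9 are rejected as reserved. The helper names are hypothetical, not the actual Xen code:

```c
/* Constants from the patch under discussion. */
#define NUMA_DISTANCE_UDF_MIN   0
#define NUMA_DISTANCE_UDF_MAX   9
#define NUMA_LOCAL_DISTANCE    10
#define NUMA_DEFAULT_DISTANCE  20  /* ACPI fallback when no SLIT is given */

/* Fallback distance when firmware provides no matrix: a node is 10 from
 * itself and 20 from every other node. */
static unsigned char default_node_distance(unsigned int from, unsigned int to)
{
    return from == to ? NUMA_LOCAL_DISTANCE : NUMA_DEFAULT_DISTANCE;
}

/* ACPI reserves distance values 0-9, so a parser should reject them. */
static int distance_is_valid(unsigned int d)
{
    return d > NUMA_DISTANCE_UDF_MAX;
}
```

This is the behaviour a DT parser would fall back to when a numa-node-matrix style property is absent, under the assumption (discussed below) that DT and ACPI share the same distance semantics.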

>>>>> + */
>>>>> +#define NUMA_DISTANCE_UDF_MIN   0
>>>>> +#define NUMA_DISTANCE_UDF_MAX   9
>>>>> +#define NUMA_LOCAL_DISTANCE     10
>>>>> +#define NUMA_REMOTE_DISTANCE    20
>>>>
>>>> In the absence of a caller of numa_set_distance() it is entirely 
>>>> unclear
>>>> whether this tying to ACPI used values is actually appropriate.
>>>>
>>>
>>> From the kernel's NUMA device tree binding, it seems DT NUMA reuses
>>> ACPI's values for distances [1].
>>
>> I can't find any mention of ACPI in that doc, so the example values used
>> there matching ACPI's may also be coincidental. In no event can a Linux
>> kernel doc serve as a DT specification. 
> 
> The directory Documentation/devicetree is the de-facto place where all 
> the bindings are described. This is used by most (not to say all) users.
> 
> I vaguely remember there was a plan in the past to move the bindings out 
> of the kernel. But it looks like this has not been done. Yet, they tend 
> to be reviewed independently from the kernel.
> 
> So, as Wei pointed out, if we don't follow them then we will not be able 
> to re-use Device-Trees already shipped.
> 
>> If values are to match ACPI's, I
>> expect a DT spec to actually say so.
> I don't think it is necessary to say that. Bindings are not meant to 
> change, and a developer can rely on the local distance value not 
> changing with the current bindings.
> 
> So IMHO, it is OK to assume that the local distance value is the same 
> between ACPI and DT. That would definitely simplify the parsing and code.
> 

Thanks for clarifying this.

Cheers,
Wei Chen

> Cheers,
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 09:56:46 2023
Message-ID: <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>,  linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>, Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Date: Mon, 16 Jan 2023 09:56:18 +0000
In-Reply-To: <20221124230314.337844751@linutronix.de>
References: <20221124225331.464480443@linutronix.de>
	 <20221124230314.337844751@linutronix.de>
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0



On Fri, 2022-11-25 at 00:24 +0100, Thomas Gleixner wrote:
> Provide two sorts of interfaces to handle the different use cases:
>
>   - msi_domain_free_irqs_range():
>
>         Handles a caller defined precise range
>
>   - msi_domain_free_irqs_all():
>
>         Frees all interrupts associated to a domain
>
> The latter is useful for device teardown and to handle the legacy MSI support
> which does not have any range information available.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---

...

> +static void msi_domain_free_locked(struct device *dev, struct msi_ctrl *ctrl)
>  {
> +	struct msi_domain_info *info;
> +	struct msi_domain_ops *ops;
> +	struct irq_domain *domain;
> +
> +	if (!msi_ctrl_valid(dev, ctrl))
> +		return;
> +
> +	domain = msi_get_device_domain(dev, ctrl->domid);
> +	if (!domain)
> +		return;
> +
> +	info = domain->host_data;
> +	ops = info->ops;
> +
> +	if (ops->domain_free_irqs)
> +		ops->domain_free_irqs(domain, dev);

Do you want a call to msi_free_dev_descs(dev) here? In the case where
the core code calls ops->domain_alloc_irqs() it *has* allocated the
descriptors first... but it's expecting the irqdomain to free them?

However, it's not quite as simple as adding msi_free_dev_descs()...

> +	else
> +		__msi_domain_free_irqs(dev, domain, ctrl);
> +

The igb driver seems to set up a single MSI-X in its setup, then tear
that down, then try again with more. Thus exercising the teardown path.

In 6.2-rc3 it fails under Xen (emulation in qemu) thus:

[    1.491207] igb: Intel(R) Gigabit Ethernet Network Driver
[    1.494003] igb: Copyright (c) 2007-2014 Intel Corporation.
[    1.664907] ACPI: \_SB_.LNKA: Enabled at IRQ 10
[    1.670837] ------------[ cut here ]------------
[    1.672644] WARNING: CPU: 1 PID: 1 at drivers/xen/events/events_base.c:793 xen_free_irq+0x156/0x170
[    1.676202] Modules linked in:
[    1.677638] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 6.2.0-rc3+ #1059
[    1.680134] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.1-0-g3208b098f51a-prebuilt.qemu.org 04/01/2014
[    1.684484] RIP: 0010:xen_free_irq+0x156/0x170
[    1.686240] Code: 5c 41 5d 41 5e 41 5f e9 08 03 95 ff e8 a3 5b 95 ff 48 85 c0 74 14 48 8b 58 30 e9 df fe ff ff 31 f6 89 ef e8 6c 59 95 ff eb 94 <0f> 0b 5b 5d 41 5c 41 5d 41 5e 41 5f c3 cc cc cc cc 0f 0b eb 86 0f
[    1.692888] RSP: 0000:ffffc90000013ac8 EFLAGS: 00010246
[    1.694705] RAX: 0000000000000000 RBX: 0000000000000026 RCX: 0000000000000000
[    1.697113] RDX: 0000000000000028 RSI: ffff888001400490 RDI: 0000000000000026
[    1.699498] RBP: 0000000000000026 R08: ffff8880014005e8 R09: ffffffff89c00240
[    1.701917] R10: 0000000000000000 R11: 0000000000000000 R12: 00000000fffffffe
[    1.704520] R13: ffffffff89de6880 R14: 0000000000000000 R15: 0000000000000005
[    1.707202] FS:  0000000000000000(0000) GS:ffff88803ed00000(0000) knlGS:0000000000000000
[    1.709974] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    1.711867] CR2: 0000000000000000 CR3: 000000003c812001 CR4: 0000000000170ee0
[    1.714260] Call Trace:
[    1.715145]  <TASK>
[    1.715897]  xen_destroy_irq+0x64/0x120
[    1.717181]  ? msi_find_desc+0x41/0xb0
[    1.718552]  xen_teardown_msi_irqs+0x3d/0x70
[    1.720064]  msi_domain_free_locked.part.0+0x58/0x1c0
[    1.721791]  msi_domain_free_irqs_all_locked+0x6a/0x90
[    1.723551]  __pci_enable_msix_range+0x353/0x590
[    1.725159]  igb_set_interrupt_capability+0x90/0x1c0
[    1.726879]  igb_init_interrupt_scheme+0x2d/0x230
[    1.728494]  ? rcu_read_lock_sched_held+0x3f/0x80
[    1.730361]  igb_sw_init+0x1b3/0x260
[    1.731797]  igb_probe+0x3b6/0xf00
[    1.733146]  ? _raw_spin_unlock_irqrestore+0x40/0x60
[    1.734834]  local_pci_probe+0x41/0x80
[    1.736164]  pci_call_probe+0x54/0x160
[    1.737441]  pci_device_probe+0x7c/0x100
[    1.738828]  ? driver_sysfs_add+0x71/0xd0
[    1.740229]  really_probe+0xde/0x380
[    1.741434]  ? pm_runtime_barrier+0x50/0x90
[    1.742873]  __driver_probe_device+0x78/0x170
[    1.744314]  driver_probe_device+0x1f/0x90
[    1.745689]  __driver_attach+0xd2/0x1c0
[    1.747035]  ? __pfx___driver_attach+0x10/0x10
[    1.748518]  bus_for_each_dev+0x79/0xc0
[    1.749859]  bus_add_driver+0x1b1/0x200
[    1.751182]  driver_register+0x89/0xe0
[    1.752472]  ? __pfx_igb_init_module+0x10/0x10
[    1.754054]  do_one_initcall+0x5b/0x320
[    1.755573]  ? rcu_read_lock_sched_held+0x3f/0x80
[    1.757375]  kernel_init_freeable+0x1a6/0x1ec
[    1.759005]  ? __pfx_kernel_init+0x10/0x10
[    1.760375]  kernel_init+0x16/0x130
[    1.761554]  ret_from_fork+0x2c/0x50
[    1.762797]  </TASK>
[    1.763590] irq event stamp: 1798623
[    1.764869] hardirqs last  enabled at (1798633): [<ffffffff8814aa8e>] __up_console_sem+0x5e/0x70
[    1.767762] hardirqs last disabled at (1798642): [<ffffffff8814aa73>] __up_console_sem+0x43/0x70
[    1.770715] softirqs last  enabled at (1798570): [<ffffffff88d91f76>] __do_softirq+0x356/0x4da
[    1.773576] softirqs last disabled at (1798565): [<ffffffff880bb83d>] __irq_exit_rcu+0xdd/0x150
[    1.776492] ---[ end trace 0000000000000000 ]---
[    1.839462] igb 0000:00:04.0: added PHC on eth0
[    1.843531] igb 0000:00:04.0: Intel(R) Gigabit Ethernet Network Connection
[    1.843541] igb 0000:00:04.0: eth0: (PCIe:5.0Gb/s:Width x4) 00:1e:67:cb:7a:93
[    1.843620] igb 0000:00:04.0: eth0: PBA No: 006000-000
[    1.849237] igb 0000:00:04.0: Using legacy interrupts. 1 rx queue(s), 1 tx queue(s)


If I add the missing call to msi_free_msi_descs() then it does work,
but complains differently:

[    1.563055] igb: Intel(R) Gigabit Ethernet Network Driver
[    1.566442] igb: Copyright (c) 2007-2014 Intel Corporation.
[    1.737236] ACPI: \_SB_.LNKA: Enabled at IRQ 10
[    1.742162] ------------[ cut here ]------------
[    1.744393] WARNING: CPU: 0 PID: 1 at kernel/irq/msi.c:196 msi_domain_free_descs+0xe1/0x110
[    1.748248] Modules linked in:
[    1.749289] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.2.0-rc3+ #1057
[    1.751466] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.1-0-g3208b098f51a-prebuilt.qemu.org 04/01/2014
[    1.755187] RIP: 0010:msi_domain_free_descs+0xe1/0x110
[    1.756875] Code: 00 48 89 e6 4c 89 e7 e8 ed f4 ba 00 48 89 c3 48 85 c0 0f 84 71 ff ff ff 48 8b 34 24 4c 89 e7 e8 a5 01 bb 00 8b 03 85 c0 74 be <0f> 0b eb cb 48 8b 87 70 03 00 00 be ff ff ff ff 48 8d 78 78 e8 26
[    1.763060] RSP: 0000:ffffc90000013b78 EFLAGS: 00010202
[    1.764804] RAX: 0000000000000026 RBX: ffff888001ac5f00 RCX: 0000000000000000
[    1.767155] RDX: 0000000000000001 RSI: ffffffffa649808a RDI: 00000000ffffffff
[    1.769462] RBP: ffffc90000013ba8 R08: 0000000000000001 R09: 0000000000000000
[    1.771934] R10: 000000006ac46bb1 R11: 00000000aee0433d R12: ffff888001a238c8
[    1.774695] R13: ffffffffa6de6880 R14: ffff888001a51218 R15: ffff888001a50000
[    1.777081] FS:  0000000000000000(0000) GS:ffff88803ec00000(0000) knlGS:0000000000000000
[    1.779754] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    1.781681] CR2: ffff888010801000 CR3: 000000000e812001 CR4: 0000000000170ef0
[    1.784093] Call Trace:
[    1.784880]  <TASK>
[    1.785640]  msi_domain_free_msi_descs_range+0x34/0x60
[    1.787370]  msi_domain_free_locked.part.0+0x58/0x1c0
[    1.789034]  msi_domain_free_irqs_all_locked+0x6a/0x90
[    1.790815]  pci_free_msi_irqs+0xe/0x30
[    1.792157]  pci_disable_msix+0x48/0x60
[    1.793413]  igb_reset_interrupt_capability+0xa4/0xb0
[    1.795077]  igb_sw_init+0x13f/0x260
[    1.796281]  igb_probe+0x3b6/0xf00
[    1.797421]  ? _raw_spin_unlock_irqrestore+0x40/0x60
[    1.799050]  local_pci_probe+0x41/0x80
[    1.800282]  pci_call_probe+0x54/0x160
[    1.801543]  pci_device_probe+0x7c/0x100
[    1.803393]  ? driver_sysfs_add+0x71/0xd0
[    1.804761]  really_probe+0xde/0x380
[    1.806021]  ? pm_runtime_barrier+0x50/0x90
[    1.807395]  __driver_probe_device+0x78/0x170
[    1.808877]  driver_probe_device+0x1f/0x90
[    1.810257]  __driver_attach+0xd2/0x1c0
[    1.811529]  ? __pfx___driver_attach+0x10/0x10
[    1.813251]  bus_for_each_dev+0x79/0xc0
[    1.814534]  bus_add_driver+0x1b1/0x200
[    1.816058]  driver_register+0x89/0xe0
[    1.817291]  ? __pfx_igb_init_module+0x10/0x10
[    1.818821]  do_one_initcall+0x5b/0x320
[    1.820133]  ? rcu_read_lock_sched_held+0x3f/0x80
[    1.821697]  kernel_init_freeable+0x1a6/0x1ec
[    1.823150]  ? __pfx_kernel_init+0x10/0x10
[    1.824484]  kernel_init+0x16/0x130
[    1.825990]  ret_from_fork+0x2c/0x50
[    1.827757]  </TASK>
[    1.828865] irq event stamp: 1797845
[    1.830573] hardirqs last  enabled at (1797855): [<ffffffffa514aa8e>] __up_console_sem+0x5e/0x70
[    1.834809] hardirqs last disabled at (1797864): [<ffffffffa514aa73>] __up_console_sem+0x43/0x70
[    1.838915] softirqs last  enabled at (1797742): [<ffffffffa5d91f76>] __do_softirq+0x356/0x4da
[    1.842442] softirqs last disabled at (1797737): [<ffffffffa50bb83d>] __irq_exit_rcu+0xdd/0x150
[    1.846094] ---[ end trace 0000000000000000 ]---


If I zero msidesc->irq in the loop in xen_teardown_msi_irqs(), *then*
it both works and stops complaining. But I'm mostly just tampering
empirically now...
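The empirical fix has a simple rationale: a descriptor whose irq field is still non-zero looks associated, so freeing it trips the warning in msi_domain_free_descs(), and zeroing the field marks it free. That invariant can be modelled in a few lines (a toy sketch with hypothetical structures, not the kernel's real ones):

```c
/* Hypothetical stand-in for the kernel's struct msi_desc. */
struct msi_desc {
    int irq;        /* first Linux IRQ number; non-zero means "associated" */
    int nvec_used;  /* number of vectors behind this descriptor */
};

/* Mimics xen_teardown_msi_irqs(): destroying the Xen IRQs alone leaves
 * desc->irq set, so the descriptor still looks associated afterwards. */
static void teardown(struct msi_desc *desc, int clear_irq)
{
    for (int i = 0; i < desc->nvec_used; i++)
        ; /* xen_destroy_irq(desc->irq + i) would go here */
    if (clear_irq)
        desc->irq = 0;  /* mark the descriptor as no longer in use */
}

/* Mimics the check that fires the WARN in msi_domain_free_descs(). */
static int desc_free_would_warn(const struct msi_desc *desc)
{
    return desc->irq != 0;
}
```

Whether clearing the field belongs in the Xen teardown path or in the generic free path is exactly the question the diffs below are probing.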

I can provide a qemu tree which will let you test this easily with just
`qemu-system-x86_64 -kernel ...` but you have to promise not to look at
the way I've fixed some qemu deadlocks just by commenting out the lock
on MSI delivery/translation :)

You'd also have to provide your own igb device as qemu doesn't emulate
those; I'm testing it in passthrough. Or hack the e1000e driver to do a
setup/teardown/setup... or perhaps just unload and reload its module?

I'll provide a SoB just in case it's actually the right way to fix it…

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index 790550479831..293e23b7229a 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -390,8 +390,10 @@ static void xen_teardown_msi_irqs(struct pci_dev *dev)
 	int i;
 
 	msi_for_each_desc(msidesc, &dev->dev, MSI_DESC_ASSOCIATED) {
-		for (i = 0; i < msidesc->nvec_used; i++)
+		for (i = 0; i < msidesc->nvec_used; i++) {
 			xen_destroy_irq(msidesc->irq + i);
+			msidesc->irq = 0;
+		}
 	}
 }
 

diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
index 955267bbc2be..812e1ec1a633 100644
--- a/kernel/irq/msi.c
+++ b/kernel/irq/msi.c
@@ -1533,9 +1533,10 @@ static void msi_domain_free_locked(struct device *dev, struct msi_ctrl *ctrl)
 	info = domain->host_data;
 	ops = info->ops;
 
-	if (ops->domain_free_irqs)
+	if (ops->domain_free_irqs) {
 		ops->domain_free_irqs(domain, dev);
-	else
+		msi_free_msi_descs(dev);
+	} else
 		__msi_domain_free_irqs(dev, domain, ctrl);
 
 	if (ops->msi_post_free)





From xen-devel-bounces@lists.xenproject.org Mon Jan 16 10:10:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 10:10:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478528.741784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHMS1-0002k7-Af; Mon, 16 Jan 2023 10:10:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478528.741784; Mon, 16 Jan 2023 10:10:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHMS1-0002k0-6i; Mon, 16 Jan 2023 10:10:49 +0000
Received: by outflank-mailman (input) for mailman id 478528;
 Mon, 16 Jan 2023 10:10:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N7lT=5N=casper.srs.infradead.org=BATV+fb0b8ce1ba8490165fd5+7085+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pHMRz-0002jp-Tb
 for xen-devel@lists.xen.org; Mon, 16 Jan 2023 10:10:47 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a9dc072-9586-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 11:10:45 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHMS6-008dPx-1N; Mon, 16 Jan 2023 10:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a9dc072-9586-11ed-b8d0-410ff93cb8f0
Message-ID: <ad1dfa58888f82d7cf6fd1b86d2df3821511cf33.camel@infradead.org>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>,  linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>, Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Date: Mon, 16 Jan 2023 10:10:39 +0000
In-Reply-To: <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
References: <20221124225331.464480443@linutronix.de>
	 <20221124230314.337844751@linutronix.de>
	 <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-5x7AImqvOVRFcwicSHaI"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-5x7AImqvOVRFcwicSHaI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2023-01-16 at 09:56 +0000, David Woodhouse wrote:
> 
>  	msi_for_each_desc(msidesc, &dev->dev, MSI_DESC_ASSOCIATED) {
> -		for (i = 0; i < msidesc->nvec_used; i++)
> +		for (i = 0; i < msidesc->nvec_used; i++) {
>  			xen_destroy_irq(msidesc->irq + i);
> +			msidesc->irq = 0;
> +		}
>  	}
>  }
>  

Der, setting it to zero wants to be in the msi_for_each_desc() loop and
*not* in the 'for i' loop of course.



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 10:59:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 10:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478562.741813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHND5-00088E-Dy; Mon, 16 Jan 2023 10:59:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478562.741813; Mon, 16 Jan 2023 10:59:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHND5-000887-8P; Mon, 16 Jan 2023 10:59:27 +0000
Received: by outflank-mailman (input) for mailman id 478562;
 Mon, 16 Jan 2023 10:59:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wx/b=5N=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHND3-00087z-O1
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 10:59:25 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2087.outbound.protection.outlook.com [40.107.93.87])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d58a056a-958c-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 11:59:24 +0100 (CET)
Received: from BL1PR13CA0112.namprd13.prod.outlook.com (2603:10b6:208:2b9::27)
 by MW4PR12MB7030.namprd12.prod.outlook.com (2603:10b6:303:20a::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.22; Mon, 16 Jan
 2023 10:59:20 +0000
Received: from BL02EPF000108E8.namprd05.prod.outlook.com
 (2603:10b6:208:2b9:cafe::12) by BL1PR13CA0112.outlook.office365.com
 (2603:10b6:208:2b9::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Mon, 16 Jan 2023 10:59:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF000108E8.mail.protection.outlook.com (10.167.241.201) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Mon, 16 Jan 2023 10:59:19 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 04:59:15 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 04:59:14 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 16 Jan 2023 04:59:12 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d58a056a-958c-11ed-91b6-6bf2151ebd3b
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <8e4c76f5-2afc-8be0-1c32-9059e7730f77@amd.com>
Date: Mon, 16 Jan 2023 11:59:07 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 11/14] xen/arm64: Rework the memory layout
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-12-julien@xen.org>
 <72b2be45-d7bc-a94f-1d49-b9fc0b2fd081@amd.com>
 <54fdf78a-bd46-eae3-f00f-a21738561874@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <54fdf78a-bd46-eae3-f00f-a21738561874@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000108E8:EE_|MW4PR12MB7030:EE_
X-MS-Office365-Filtering-Correlation-Id: 3cc6becd-79ce-4dd7-2495-08daf7b0b7f1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 10:59:19.8373
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3cc6becd-79ce-4dd7-2495-08daf7b0b7f1
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000108E8.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7030

Hi Julien,

On 16/01/2023 10:29, Julien Grall wrote:
> 
> 
> On 16/01/2023 08:46, Michal Orzel wrote:
>> Hi Julien,
> 
> Hi Michal,
> 
>> On 13/01/2023 11:11, Julien Grall wrote:
>>>
>>>
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Xen is currently not fully compliant with the Arm Arm because it will
>>> switch the TTBR with the MMU on.
>>>
>>> In order to be compliant, we need to disable the MMU before
>>> switching the TTBR. The implication is the page-tables should
>>> contain an identity mapping of the code switching the TTBR.
>>>
>>> In most cases we expect Xen to be loaded in low memory. I am aware
>>> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
>>> To give us some slack, consider that Xen may be loaded in the first 2TB
>>> of the physical address space.
>>>
>>> The memory layout is reshuffled to keep the first two slots of the zeroeth
>> Should be "four slots" instead of "two".
>>
>>> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
>>> tweak of the boot code because XEN_VIRT_START cannot be used as an
>>> immediate.
>>>
>>> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
>>> loaded below 2TB.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> ----
>>>      Changes in v4:
>>>          - Correct the documentation
>>>          - The start address is 2TB, so slot0 is 4 not 2.
>>>
>>>      Changes in v2:
>>>          - Reword the commit message
>>>          - Load Xen at 2TB + 2MB
>>>          - Update the documentation to reflect the new layout
>>> ---
>>>   xen/arch/arm/arm64/head.S         |  3 ++-
>>>   xen/arch/arm/include/asm/config.h | 35 ++++++++++++++++++++-----------
>>>   xen/arch/arm/mm.c                 | 11 +++++-----
>>>   3 files changed, 31 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>> index 4a3f87117c83..663f5813b12e 100644
>>> --- a/xen/arch/arm/arm64/head.S
>>> +++ b/xen/arch/arm/arm64/head.S
>>> @@ -607,7 +607,8 @@ create_page_tables:
>>>            * need an additional 1:1 mapping, the virtual mapping will
>>>            * suffice.
>>>            */
>>> -        cmp   x19, #XEN_VIRT_START
>>> +        ldr   x0, =XEN_VIRT_START
>>> +        cmp   x19, x0
>>>           bne   1f
>>>           ret
>>>   1:
>>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>>> index 6c1b762e976d..c5d407a7495f 100644
>>> --- a/xen/arch/arm/include/asm/config.h
>>> +++ b/xen/arch/arm/include/asm/config.h
>>> @@ -72,15 +72,12 @@
>>>   #include <xen/page-size.h>
>>>
>>>   /*
>>> - * Common ARM32 and ARM64 layout:
>>> + * ARM32 layout:
>>>    *   0  -   2M   Unmapped
>>>    *   2M -   4M   Xen text, data, bss
>>>    *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>>    *   6M -  10M   Early boot mapping of FDT
>>> - *   10M - 12M   Livepatch vmap (if compiled in)
>>> - *
>>> - * ARM32 layout:
>>> - *   0  -  12M   <COMMON>
>>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>>    *
>>>    *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
>>>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>>> @@ -90,14 +87,22 @@
>>>    *   2G -   4G   Domheap: on-demand-mapped
>>>    *
>>>    * ARM64 layout:
>>> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
>>> - *   0  -  12M   <COMMON>
>>> + * 0x0000000000000000 - 0x00001fffffffffff (2TB, L0 slots [0..3])
>> End address should be 0x1FFFFFFFFFF (one less f).
>>
>>> + *  Reserved to identity map Xen
>>> + *
>>> + * 0x0000020000000000 - 0x000028fffffffff (512GB, L0 slot [4]
>> End address should be 0x27FFFFFFFFF.
>>
>>> + *  (Relative offsets)
>>> + *   0  -   2M   Unmapped
>>> + *   2M -   4M   Xen text, data, bss
>>> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>> + *   6M -  10M   Early boot mapping of FDT
>>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>>    *
>>>    *   1G -   2G   VMAP: ioremap and early_ioremap
>>>    *
>>>    *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
>>>    *
>>> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>>> + * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [5..255])
>> Start address should be 0x28000000000.
> 
> I have updated all the addresses.
Thanks, in that case you can add my:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

> 
>>
>> Not related to this patch:
>> I took a look at config.h and spotted two things:
>> 1) DIRECTMAP_SIZE calculation is incorrect. It is defined as (SLOT0_ENTRY_SIZE * (265-256))
>> but it actually should be (SLOT0_ENTRY_SIZE * (266-256)) i.e. 10 slots and not 9. Due to this
>> bug we actually support 4.5TB of direct-map and not 5TB.
> 
> 
>>
>> 2) frametable information
>> struct page_info is no longer 24B but 56B for arm64 and 32B for arm32.
> 
> The values were always wrong. I have an action in my todo list to look
> at it, but never got the time.
> 
> There are two problems with the current values:
>    1) The size of the frametable is not big enough as you pointed out below.
>    2) The struct page_info could cross a cache line. We should decide
> whether we want to increase the size or attempt to reduce it.
> 
>> It looks like SUPPORT.md
>> took this into account when stating that we support 12GB for arm32 and 2TB for arm64. However,
>> this is also wrong as it does not take into account physical address compression. With PDX that
>> is enabled by default we could fit tens of TB in 32GB frametable.
> I don't understand your argument. Yes, the PDX can compress, but it will
> compress non-RAM pages. So while I agree that this could cover tens of
> TB of physical address space, we will always be able to support a fixed
> amount of RAM.
Right.

> 
>> I think we want to get rid of
>> comments like "Frametable: 24 bytes per page for 16GB of RAM" in favor of just "Frametable".
> 
> I would rather update the comments because we need a way to explain how
> we came up with the size.
> 
>> This is because the struct page_info size may change again.
> We could have a BUILD_BUG_ON() confirming the size of the page_info.
So, apart from fixing DIRECTMAP_SIZE, I would like to send a patch correcting
the frametable information in config.h. In that patch I'd take the opportunity
to add the following to setup_frametable_mappings:
- a BUILD_BUG_ON() to check the size of struct page_info. For that, I could
  add a new macro, e.g. CONFIG_PAGE_INFO_SIZE, in config.h, set to 56 for
  arm64 and 32 for arm32, to avoid ifdefery in the function itself;
- an "if ( frametable_size >= FRAMETABLE_SIZE )" check that calls
  panic("RAM is too big to fit in the frametable area"), as we do not have
  any such check at the moment.

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 11:14:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 11:14:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478571.741823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHNS0-00027U-MA; Mon, 16 Jan 2023 11:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478571.741823; Mon, 16 Jan 2023 11:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHNS0-00027N-J5; Mon, 16 Jan 2023 11:14:52 +0000
Received: by outflank-mailman (input) for mailman id 478571;
 Mon, 16 Jan 2023 11:14:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHNRz-00027F-CU
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 11:14:51 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2053.outbound.protection.outlook.com [40.107.7.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fde9c002-958e-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 12:14:50 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6900.eurprd04.prod.outlook.com (2603:10a6:208:17d::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Mon, 16 Jan
 2023 11:14:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 11:14:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fde9c002-958e-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7c5f0dea-4aed-d8f2-363f-758ecce0104f@suse.com>
Date: Mon, 16 Jan 2023 12:14:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-3-wei.chen@arm.com>
 <9e32ffa1-1499-f9cd-7ca8-f9493b1269cb@suse.com>
 <PAXPR08MB7420E482CACC741B1BA976569EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <5a802657-a6e8-9cc3-fefb-09a7e68d1e5e@suse.com>
 <cb2ba9fe-e29f-c44e-9139-701f894060a8@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cb2ba9fe-e29f-c44e-9139-701f894060a8@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0096.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6900:EE_
X-MS-Office365-Filtering-Correlation-Id: b7b54ee8-f809-4db7-8d55-08daf7b2e0c2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b7b54ee8-f809-4db7-8d55-08daf7b2e0c2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 11:14:48.2963
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: e5xCsm/Cmq7xJpIKwGEZEslNMMbn1Op0ebnXaWQ4emFPBQCT1Ll4uVF7RZq38W1ND+NdsWYBwFZC/xWwqBvCaQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6900

On 16.01.2023 10:20, Wei Chen wrote:
> Hi Jan,
> 
> On 2023/1/12 16:08, Jan Beulich wrote:
>> On 12.01.2023 07:22, Wei Chen wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 2023年1月11日 0:38
>>>>
>>>> On 10.01.2023 09:49, Wei Chen wrote:
>>>>> --- a/xen/arch/arm/include/asm/numa.h
>>>>> +++ b/xen/arch/arm/include/asm/numa.h
>>>>> @@ -22,6 +22,12 @@ typedef u8 nodeid_t;
>>>>>    */
>>>>>   #define NR_NODE_MEMBLKS NR_MEM_BANKS
>>>>>
>>>>> +enum dt_numa_status {
>>>>> +    DT_NUMA_INIT,
>>>>
>>>> I don't see any use of this. I also think the name isn't good, as INIT
>>>> can be taken for "initializer" as well as "initialized". Suggesting an
>>>> alternative would require knowing what the future plans with this are;
>>>> right now ...
>>>>
>>>
>>> static enum dt_numa_status __read_mostly device_tree_numa;
>>
>> There's no DT_NUMA_INIT here. You _imply_ it having a value of zero.
>>
> 
> How about I assign device_tree_numa explicitly like:
> ... __read_mostly device_tree_numa = DT_NUMA_UNINIT;

Well, yes, this is what I was asking for when mentioning the lack of use
of the enumerator. Irrespective of that I remain unhappy with the name,
though.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 12:01:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 12:01:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478581.741834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOBD-0007ek-CF; Mon, 16 Jan 2023 12:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478581.741834; Mon, 16 Jan 2023 12:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOBD-0007ed-98; Mon, 16 Jan 2023 12:01:35 +0000
Received: by outflank-mailman (input) for mailman id 478581;
 Mon, 16 Jan 2023 12:01:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Os5i=5N=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pHOBB-0007dg-SR
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 12:01:34 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2063.outbound.protection.outlook.com [40.107.21.63])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 84144711-9595-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 13:01:32 +0100 (CET)
Received: from AS9PR06CA0297.eurprd06.prod.outlook.com (2603:10a6:20b:45a::35)
 by PR3PR08MB5579.eurprd08.prod.outlook.com (2603:10a6:102:82::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 12:01:29 +0000
Received: from AM7EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45a:cafe::b3) by AS9PR06CA0297.outlook.office365.com
 (2603:10a6:20b:45a::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Mon, 16 Jan 2023 12:01:29 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT044.mail.protection.outlook.com (100.127.140.169) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Mon, 16 Jan 2023 12:01:29 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Mon, 16 Jan 2023 12:01:28 +0000
Received: from db079eda02db.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 33F254A1-4B14-4B5F-B1A6-694CD7A891A8.1; 
 Mon, 16 Jan 2023 12:01:23 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id db079eda02db.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 16 Jan 2023 12:01:23 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AS4PR08MB7502.eurprd08.prod.outlook.com (2603:10a6:20b:4e6::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 12:01:20 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Mon, 16 Jan 2023
 12:01:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84144711-9595-11ed-91b6-6bf2151ebd3b
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
From: Wei Chen <Wei.Chen@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
Thread-Topic: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
Thread-Index:
 AQHZJNEXNkX9GZxGhkCgYl8odWGCka6X2i6AgAJ2fxCAAB/FgIAGXWYAgAAf/4CAAAyvoA==
Date: Mon, 16 Jan 2023 12:01:20 +0000
Message-ID:
 <PAXPR08MB7420AE4FCCF31C35D3FA3EB49EC19@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-3-wei.chen@arm.com>
 <9e32ffa1-1499-f9cd-7ca8-f9493b1269cb@suse.com>
 <PAXPR08MB7420E482CACC741B1BA976569EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <5a802657-a6e8-9cc3-fefb-09a7e68d1e5e@suse.com>
 <cb2ba9fe-e29f-c44e-9139-701f894060a8@arm.com>
 <7c5f0dea-4aed-d8f2-363f-758ecce0104f@suse.com>
In-Reply-To: <7c5f0dea-4aed-d8f2-363f-758ecce0104f@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 1FDC3684535F7D4DA5EFEE77E9A75181.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	PAXPR08MB7420:EE_|AS4PR08MB7502:EE_|AM7EUR03FT044:EE_|PR3PR08MB5579:EE_
X-MS-Office365-Filtering-Correlation-Id: b5797e36-028b-46da-04d0-08daf7b966d3
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7502
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b0cba299-a129-4561-03d3-08daf7b961a4
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 12:01:29.1681
 (UTC)

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 16 January 2023 19:15
> To: Wei Chen <Wei.Chen@arm.com>
> Cc: nd <nd@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Wei
> Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v2 02/17] xen/arm: implement helpers to get and update
> NUMA status
> 
> On 16.01.2023 10:20, Wei Chen wrote:
> > Hi Jan,
> >
> > On 2023/1/12 16:08, Jan Beulich wrote:
> >> On 12.01.2023 07:22, Wei Chen wrote:
> >>>> -----Original Message-----
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: 11 January 2023 0:38
> >>>>
> >>>> On 10.01.2023 09:49, Wei Chen wrote:
> >>>>> --- a/xen/arch/arm/include/asm/numa.h
> >>>>> +++ b/xen/arch/arm/include/asm/numa.h
> >>>>> @@ -22,6 +22,12 @@ typedef u8 nodeid_t;
> >>>>>    */
> >>>>>   #define NR_NODE_MEMBLKS NR_MEM_BANKS
> >>>>>
> >>>>> +enum dt_numa_status {
> >>>>> +    DT_NUMA_INIT,
> >>>>
> >>>> I don't see any use of this. I also think the name isn't good, as INIT
> >>>> can be taken for "initializer" as well as "initialized". Suggesting an
> >>>> alternative would require knowing what the future plans with this are;
> >>>> right now ...
> >>>>
> >>>
> >>> static enum dt_numa_status __read_mostly device_tree_numa;
> >>
> >> There's no DT_NUMA_INIT here. You _imply_ it having a value of zero.
> >>
> >
> > How about I assign device_tree_numa explicitly like:
> > ... __read_mostly device_tree_numa = DT_NUMA_UNINIT;
> 
> Well, yes, this is what I was asking for when mentioning the lack of use
> of the enumerator. Irrespective of that I remain unhappy with the name,
> though.
> 

How about DT_NUMA_DEF or do you have some suggestions for the name?

Cheers,
Wei Chen

> Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 12:02:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 12:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478588.741845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOBr-0008Dx-M5; Mon, 16 Jan 2023 12:02:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478588.741845; Mon, 16 Jan 2023 12:02:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOBr-0008Dq-Iz; Mon, 16 Jan 2023 12:02:15 +0000
Received: by outflank-mailman (input) for mailman id 478588;
 Mon, 16 Jan 2023 12:02:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NbzH=5N=citrix.com=prvs=37389537a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHOBq-0008Da-R0
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 12:02:14 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9aed3b76-9595-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 13:02:12 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9aed3b76-9595-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673870532;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=95Vm59aOWK3yQFoHi4NAE+fawDfJISzzNS+lSYwNb3g=;
  b=Vy+WJZIUvZecYysnudguxq6DYOvBsWY23eWnCcHuUm7jydu6ZKEYdxr8
   WZ54J0K2iTQnKrufec5u973KnnLllFTxIO1WubqooyYIkOIxbuJo2CxSZ
   zlOCD8PhTaxkymdGX8jenf/DGRt78M+XtDnpbzp+7NQhmK95HJa+nsOtR
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/hvm: Drop pat_entry_2_pte_flags
Date: Mon, 16 Jan 2023 12:02:01 +0000
Message-ID: <20230116120201.2829-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Converting from PAT to PTE is trivial, and the bitwise logic is shorter to
encode than a table counting from 0 to 7 in non-adjacent bits.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

Noticed while reviewing other shadow patches.
---
 xen/arch/x86/hvm/mtrr.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 093103f6c768..344edc2d6a96 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -29,13 +29,6 @@
 /* Get page attribute fields (PAn) from PAT MSR. */
 #define pat_cr_2_paf(pat_cr,n)  ((((uint64_t)pat_cr) >> ((n)<<3)) & 0xff)
 
-/* PAT entry to PTE flags (PAT, PCD, PWT bits). */
-static const uint8_t pat_entry_2_pte_flags[8] = {
-    0,           _PAGE_PWT,
-    _PAGE_PCD,   _PAGE_PCD | _PAGE_PWT,
-    _PAGE_PAT,   _PAGE_PAT | _PAGE_PWT,
-    _PAGE_PAT | _PAGE_PCD, _PAGE_PAT | _PAGE_PCD | _PAGE_PWT };
-
 /* Effective mm type lookup table, according to MTRR and PAT. */
 static const uint8_t mm_type_tbl[MTRR_NUM_TYPES][X86_NUM_MT] = {
 #define RS MEMORY_NUM_TYPES
@@ -117,7 +110,7 @@ uint8_t pat_type_2_pte_flags(uint8_t pat_type)
     if ( unlikely(pat_entry == INVALID_MEM_TYPE) )
         pat_entry = pat_entry_tbl[X86_MT_UC];
 
-    return pat_entry_2_pte_flags[pat_entry];
+    return cacheattr_to_pte_flags(pat_entry);
 }
 
 int hvm_vcpu_cacheattr_init(struct vcpu *v)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 12:21:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 12:21:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478596.741856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOUe-0002Jl-85; Mon, 16 Jan 2023 12:21:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478596.741856; Mon, 16 Jan 2023 12:21:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOUe-0002Je-4y; Mon, 16 Jan 2023 12:21:40 +0000
Received: by outflank-mailman (input) for mailman id 478596;
 Mon, 16 Jan 2023 12:21:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NbzH=5N=citrix.com=prvs=37389537a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHOUd-0002JY-0x
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 12:21:39 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51924566-9598-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 13:21:37 +0100 (CET)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jan 2023 07:21:34 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5035.namprd03.prod.outlook.com (2603:10b6:5:1e5::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 12:21:32 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.019; Mon, 16 Jan 2023
 12:21:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51924566-9598-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673871697;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=nEPNbfV9fzYc4e65rLHeg4BPShBWBRDdYMMKbn3od6Q=;
  b=eV2OWKlELrEk3hs7kSqcDvq2eXgKSizaBbl+SOzfoN81/08SsKUSwlTa
   uCjR27duhOPqtXTiGOdX32c94hzpoY6jYm0uZlV077R6frJrzXIifOCJ3
   g6EBB7oYJcwdDPHz+RtA0mYbY57kZsf3ZiRPRzQ8w+Yzobm/Q0143ZRph
   4=;
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nEPNbfV9fzYc4e65rLHeg4BPShBWBRDdYMMKbn3od6Q=;
 b=kX2OMvAG/n73zCc7GnjC+kluBSxivWCnx0r5Dliy1iD+LOINoa0ayoBrcYCKnxS4ztDr0Hs3YjSNx+ew9qdB2/P5w56bgkh/KzzU2p3G2/rFUL3VbKHoAaJZFLkbggUJlOh/PScYljZ8czW9INfaXw7sIpyckQ5CdTSCTlvYIKs=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 4/8] x86: Initial support for WRMSRNS
Thread-Topic: [PATCH v2 4/8] x86: Initial support for WRMSRNS
Thread-Index: AQHZJRefYU5gP8dknEybN3m0hZOPA66awNaAgAAQ7QCAABRFgIAGGeqA
Date: Mon, 16 Jan 2023 12:21:31 +0000
Message-ID: <9805e688-9205-096c-b8c5-ff4b8f162221@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-5-andrew.cooper3@citrix.com>
 <97d16968-57fa-0114-1a93-4d0d253b8172@suse.com>
 <2e568a8d-02d6-5761-8b55-c37a8de1be0e@citrix.com>
 <86519ac5-1f37-f3a9-f586-9c41f0ef66cc@suse.com>
In-Reply-To: <86519ac5-1f37-f3a9-f586-9c41f0ef66cc@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <E7EDF0A91C02494384D59CB519C5D7DF@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 12/01/2023 3:11 pm, Jan Beulich wrote:
> On 12.01.2023 14:58, Andrew Cooper wrote:
>> On 12/01/2023 12:58 pm, Jan Beulich wrote:
>>> Do you have any indications towards a CS prefix being the least risky
>>> one to use here (or in general)?
>> Yes.
>>
>> Remember it's the prefix recommended for, and used by,
>> -mbranches-within-32B-boundaries to work around the Skylake jmp errata.
>>
>> And based on this justification, its also the prefix we use for padding
>> on various jmp/call's for retpoline inlining purposes.
> While I'm okay with the reply, I'd like to point out that in those cases
> address or operand size prefix simply could not have been used, for the
> insns in question having explicit operands which would be affected. Which
> is unlike the case here.

A CS prefix *is* the option which the architects deemed was the safest,
and subsequent workarounds are using CS because it had previously passed
muster.

Various toolchain codegen options will result in `cs wrmsr` being emitted.

There are a subset of instructions for which other prefixes work for
padding purposes, but by not following the recommendation and choosing
CS, you're betting against the people who make new meanings for
instruction encodings.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 12:24:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 12:24:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478602.741867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOX4-0002xN-P1; Mon, 16 Jan 2023 12:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478602.741867; Mon, 16 Jan 2023 12:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOX4-0002xG-LV; Mon, 16 Jan 2023 12:24:10 +0000
Received: by outflank-mailman (input) for mailman id 478602;
 Mon, 16 Jan 2023 12:24:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHOX3-0002x6-5W; Mon, 16 Jan 2023 12:24:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHOX3-0005We-0U; Mon, 16 Jan 2023 12:24:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHOX2-0004QL-Kc; Mon, 16 Jan 2023 12:24:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHOX2-0002Jb-KD; Mon, 16 Jan 2023 12:24:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NbjYQIc325vqKDbY3xuwKCaUIa0jShpf8pv1+ki9bj4=; b=YdfP3Y/wzi97J0+HvlRfxh3vRo
	NUrlzDxxiI9+9hTB6w1tDIVKEdj+GGMEL+EqZtolwbvMC9fGT/bMxTYsh9STYLjHXy3nYEAR8s8eu
	6t+BI4khUJdmy9TH7/qOsXgG/U9alqS5pNfUMCRHoHL6/zJmTLZRVn2q26lOukLLjQQs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175913-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175913: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-prev:<job status>:broken:regression
    xen-unstable:build-amd64-pvops:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xtf:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf-pvops:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-amd64-xtf:host-build-prep:fail:regression
    xen-unstable:build-armhf-pvops:host-build-prep:fail:regression
    xen-unstable:build-amd64-xsm:host-build-prep:fail:regression
    xen-unstable:build-amd64-prev:host-build-prep:fail:regression
    xen-unstable:build-amd64-pvops:host-build-prep:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-amd64:host-build-prep:fail:regression
    xen-unstable:build-arm64-pvops:host-build-prep:fail:regression
    xen-unstable:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable:build-arm64:host-build-prep:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 12:24:08 +0000

flight 175913 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175913/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-prev                <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-amd64-xtf                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175734
 build-amd64-xtf               5 host-build-prep          fail REGR. vs. 175734
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-amd64-prev              5 host-build-prep          fail REGR. vs. 175734
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-amd64                   5 host-build-prep          fail REGR. vs. 175734
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-arm64                   5 host-build-prep          fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    4 days
Failing since        175739  2023-01-12 09:38:44 Z    4 days   10 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    3 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              broken  
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             broken  
 build-i386-prev                                              fail    
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-prev broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-amd64-xtf broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to go to C environment
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 12:52:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 12:52:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478609.741878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOyo-0006NU-5y; Mon, 16 Jan 2023 12:52:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478609.741878; Mon, 16 Jan 2023 12:52:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHOyo-0006NN-2y; Mon, 16 Jan 2023 12:52:50 +0000
Received: by outflank-mailman (input) for mailman id 478609;
 Mon, 16 Jan 2023 12:52:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHOym-0006ND-H0; Mon, 16 Jan 2023 12:52:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHOym-00069e-ED; Mon, 16 Jan 2023 12:52:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHOyl-00051X-W8; Mon, 16 Jan 2023 12:52:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHOyl-0001zK-Ve; Mon, 16 Jan 2023 12:52:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175917-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175917: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-amd64:<job status>:broken:regression
    libvirt:build-amd64-pvops:<job status>:broken:regression
    libvirt:build-amd64-xsm:<job status>:broken:regression
    libvirt:build-arm64:<job status>:broken:regression
    libvirt:build-arm64-pvops:<job status>:broken:regression
    libvirt:build-arm64-xsm:<job status>:broken:regression
    libvirt:build-armhf:<job status>:broken:regression
    libvirt:build-armhf-pvops:<job status>:broken:regression
    libvirt:build-armhf:host-install(4):broken:regression
    libvirt:build-armhf-pvops:host-build-prep:fail:regression
    libvirt:build-amd64:host-build-prep:fail:regression
    libvirt:build-amd64-xsm:host-build-prep:fail:regression
    libvirt:build-arm64:host-build-prep:fail:regression
    libvirt:build-amd64-pvops:host-build-prep:fail:regression
    libvirt:build-arm64-pvops:host-build-prep:fail:regression
    libvirt:build-arm64-xsm:host-build-prep:fail:regression
    libvirt:build-i386-xsm:xen-build:fail:regression
    libvirt:build-i386:xen-build:fail:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=46aee2a9255adf842ab44a9292acb46189a726f7
X-Osstest-Versions-That:
    libvirt=12a3bee3899cdba8b637a7286f24ade1214b6420
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 12:52:47 +0000

flight 175917 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175917/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-pvops               <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf-pvops               <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175736
 build-armhf-pvops             5 host-build-prep          fail REGR. vs. 175736
 build-amd64                   5 host-build-prep          fail REGR. vs. 175736
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175736
 build-arm64                   5 host-build-prep          fail REGR. vs. 175736
 build-amd64-pvops             5 host-build-prep          fail REGR. vs. 175736
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175736
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175736
 build-i386-xsm                6 xen-build                fail REGR. vs. 175736
 build-i386                    6 xen-build                fail REGR. vs. 175736

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              46aee2a9255adf842ab44a9292acb46189a726f7
baseline version:
 libvirt              12a3bee3899cdba8b637a7286f24ade1214b6420

Last test of basis   175736  2023-01-12 04:18:57 Z    4 days
Testing same since   175917  2023-01-16 04:18:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anton Fadeev <anton.fadeev@red-soft.ru>
  antonios-f <anton.fadeev@red-soft.ru>
  Erik Skultety <eskultet@redhat.com>
  Han Han <hhan@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Yuri Chornoivan <yurchor@ukr.net>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               fail    
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            broken  
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-pvops broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-job build-armhf-pvops broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 46aee2a9255adf842ab44a9292acb46189a726f7
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Fri Jan 13 10:24:38 2023 +0100

    NEWS: Document virDomainFDAssociate and NULL dereference in virXMLPropStringRequired
    
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 6ce7cebea32148372ebd567ba25b380d8ab49781
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jan 12 23:42:18 2023 -0500

    tests: remove unused qemu .args file
    
    net-user-passt.args was generated early during testing of the passt
    qemu commandline, when qemuxml2argvtest was using
    DO_TEST("net-user-passt"). This was later changed to
    DO_TEST_CAPS_LATEST(), so the file net-user-passt.x86_64-latest.args
    is used instead, but the original (now unused) test file was
    accidentally added to the original patch. This patch removes it.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

commit a2042a45165938f2d747edd17f12ca03eea51791
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jan 12 23:42:17 2023 -0500

    qemu: remove commented-out option in passt qemu commandline setup
    
    This commented-out option was pointed out by jtomko during review, but
    I missed taking it out when addressing his comments.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

commit 3592b81c4c717f01f34362e0b578989f9f93676f
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jan 12 23:42:16 2023 -0500

    conf: remove <backend upstream='xxx'/> attribute
    
    This attribute was added to support setting the --interface option for
    passt, but in a post-push/pre-9.0-release review, danpb pointed out
    that it would be better to use the existing <source dev='xxx'/>
    attribute to set --interface rather than creating a new attribute (in
    the wrong place). So we remove backend/upstream, and change the passt
    commandline creation to grab the name for --interface from source/dev.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

commit 8ff8fe3f8a7bb67a150c7f4801c2df5ef743aa8f
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 5 09:51:07 2023 +0100

    qemuBuildThreadContextProps: Generate ThreadContext less frequently
    
    Currently, the ThreadContext object is generated whenever we see
    .host-nodes attribute for a memory-backend-* object. The idea was
    that when the backend is pinned to a specific set of host NUMA
    nodes, then the allocation could be happening on CPUs from those
    nodes too. But this may not be always possible.
    
    Users might configure their guests in such a way that vCPUs and
    corresponding guest NUMA nodes are on different host NUMA nodes
    than the emulator thread. In this case, ThreadContext won't work,
    because ThreadContext objects live in the context of the emulator
    thread (vCPU threads are moved around by us later, once the
    emulator thread has finished its setup and spawned vCPU threads -
    see qemuProcessSetupVcpus()). Therefore, memory allocation is done
    by the emulator thread, which is pinned to a subset of host NUMA
    nodes but tries to create a ThreadContext object with a disjoint
    subset of host NUMA nodes, which fails.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2154750
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>
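
    [Editorial note: an illustrative sketch of the failure mode described
    above, not libvirt code. Representing host NUMA node sets as plain
    bitmasks (a simplification), a ThreadContext can only allocate from
    nodes the emulator thread may run on, so a disjoint .host-nodes set
    cannot work.]

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Host NUMA node sets as bitmasks (bit n = node n).  Creating a
     * ThreadContext succeeds only if the backend's requested host-nodes
     * intersect the nodes the emulator thread is pinned to.
     */
    static bool thread_context_usable(uint64_t emulator_nodes,
                                      uint64_t backend_host_nodes)
    {
        return (emulator_nodes & backend_host_nodes) != 0;
    }

    int main(void)
    {
        /* Emulator pinned to nodes {0,1}; backend wants node 1: works. */
        assert(thread_context_usable(0x3, 0x2));

        /* Emulator pinned to nodes {0,1}; backend wants node 3: the sets
         * are disjoint, so ThreadContext creation would fail. */
        assert(!thread_context_usable(0x3, 0x8));
        return 0;
    }
    ```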

commit ed6b8a30b90807d5a4d6bc0a5d0ec99fd5f040ff
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Tue Jan 3 10:29:01 2023 +0100

    security_selinux: Set and restore /dev/sgx_* labels
    
    For the SGX type of memory, QEMU needs to open and talk to the
    /dev/sgx_vepc and /dev/sgx_provision files, but we neither set nor
    restore SELinux labels on these files when starting a guest.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit a50e6f649b49ee89e25a4afba4ad8d537014e33f
Author: Ján Tomko <jtomko@redhat.com>
Date:   Wed Jan 11 16:04:23 2023 +0100

    NEWS: document external swtpm backend addition
    
    Signed-off-by: Ján Tomko <jtomko@redhat.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>

commit 08e3bf0b6f8b9612bb2fc02e7d30fc75d6b10daf
Author: 김인수 <simmon@nplob.com>
Date:   Thu Jan 12 09:59:36 2023 +0100

    Translated using Weblate (Korean)
    
    Currently translated at 99.5% (10362 of 10405 strings)
    
    Translation: libvirt/libvirt
    Translate-URL: https://translate.fedoraproject.org/projects/libvirt/libvirt/ko/
    
    Co-authored-by: 김인수 <simmon@nplob.com>
    Signed-off-by: 김인수 <simmon@nplob.com>

commit d07a7793dab57d49b3dec166a62f8e82137e17a9
Author: Yuri Chornoivan <yurchor@ukr.net>
Date:   Thu Jan 12 09:59:36 2023 +0100

    Translated using Weblate (Ukrainian)
    
    Currently translated at 100.0% (10405 of 10405 strings)
    
    Translation: libvirt/libvirt
    Translate-URL: https://translate.fedoraproject.org/projects/libvirt/libvirt/uk/
    
    Co-authored-by: Yuri Chornoivan <yurchor@ukr.net>
    Signed-off-by: Yuri Chornoivan <yurchor@ukr.net>

commit 9233f0fa8c8e031197c647f7bc980dee45283641
Author: antonios-f <anton.fadeev@red-soft.ru>
Date:   Thu Nov 17 09:53:23 2022 +0000

    src/util/vircgroupv2.c: interpret neg quota as "max"
    
    Because the kernel doesn't allow passing negative values to cpu.max
    as quota, negative values need to be converted to the "max" token.
    
    Signed-off-by: Anton Fadeev <anton.fadeev@red-soft.ru>
    Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
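
    [Editorial note: an illustrative sketch of the conversion the commit
    describes, not the actual libvirt code; format_cpu_max() is a
    hypothetical helper. The cgroup v2 cpu.max file takes "<quota>
    <period>", with the literal token "max" meaning unlimited, so a
    negative quota is written as "max".]

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Format a cpu.max value.  cgroup v2 rejects negative quotas, so any
     * negative value is emitted as the "max" (unlimited) token instead.
     */
    static void format_cpu_max(long long quota, unsigned long long period,
                               char *buf, size_t buflen)
    {
        if (quota < 0)
            snprintf(buf, buflen, "max %llu", period);
        else
            snprintf(buf, buflen, "%lld %llu", quota, period);
    }

    int main(void)
    {
        char buf[64];

        format_cpu_max(-1, 100000, buf, sizeof(buf));
        assert(strcmp(buf, "max 100000") == 0);

        format_cpu_max(50000, 100000, buf, sizeof(buf));
        assert(strcmp(buf, "50000 100000") == 0);
        return 0;
    }
    ```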

commit f41d1a2e751549c8817326740c5761a0775a1fb6
Author: Han Han <hhan@redhat.com>
Date:   Thu Jan 12 12:10:21 2023 +0800

    docs: drvqemu: Fix a typo
    
    Fixes: a677ea928a65fceb1463e14044b23d7fba6acd3d
    Signed-off-by: Han Han <hhan@redhat.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>

commit ad00beffa6f3ca8b7c09e454a70a8281fa656524
Author: Erik Skultety <eskultet@redhat.com>
Date:   Thu Jan 12 07:57:58 2023 +0100

    ci: integration: Set an expiration on logs artifacts
    
    The default expiry time is 30 days. Since the RPM artifacts coming from
    the previous pipeline stages are set to expire in 1 day, we can set the
    failed integration job log artifacts to the same value. The reasoning
    here is that if an integration job legitimately failed (i.e. not with
    an infrastructure failure), then unless the issue is fixed in the
    meantime it will fail again with the next day's scheduled pipeline,
    meaning that even if the older log artifacts are removed, they'll be
    immediately replaced with fresh ones.
    
    Signed-off-by: Erik Skultety <eskultet@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 13:00:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 13:00:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478616.741889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHP6C-0007tS-0J; Mon, 16 Jan 2023 13:00:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478616.741889; Mon, 16 Jan 2023 13:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHP6B-0007tL-Tj; Mon, 16 Jan 2023 13:00:27 +0000
Received: by outflank-mailman (input) for mailman id 478616;
 Mon, 16 Jan 2023 13:00:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NbzH=5N=citrix.com=prvs=37389537a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHP6A-0007tF-H9
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 13:00:26 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bcc8cf9a-959d-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 14:00:24 +0100 (CET)
Received: from mail-dm6nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jan 2023 08:00:21 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6676.namprd03.prod.outlook.com (2603:10b6:a03:389::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Mon, 16 Jan
 2023 13:00:19 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.019; Mon, 16 Jan 2023
 13:00:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Thread-Topic: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Thread-Index: AQHZJRefeExwWeppzE65VXS63DgRAq6axG6AgAA9twCABgijAA==
Date: Mon, 16 Jan 2023 13:00:18 +0000
Message-ID: <adf6f951-a0e5-c167-9739-d8b0a2b4af38@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-6-andrew.cooper3@citrix.com>
 <af2b74b2-8f37-223c-b830-c2bb3bc6d467@suse.com>
 <3ac6a4d6-44db-d248-4440-6e71aa14ad93@citrix.com>
In-Reply-To: <3ac6a4d6-44db-d248-4440-6e71aa14ad93@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-ID: <092BBC0778ECA84E8E51BFAE988927E3@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 46341846-5157-4ad6-aecc-08daf7c19eb7
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Jan 2023 13:00:18.9464
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6676

On 12/01/2023 4:51 pm, Andrew Cooper wrote:
> On 12/01/2023 1:10 pm, Jan Beulich wrote:
>> On 10.01.2023 18:18, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/setup.c
>>> +++ b/xen/arch/x86/setup.c
>>> @@ -54,6 +54,7 @@
>>>  #include <asm/spec_ctrl.h>
>>>  #include <asm/guest.h>
>>>  #include <asm/microcode.h>
>>> +#include <asm/prot-key.h>
>>>  #include <asm/pv/domain.h>
>>>  
>>>  /* opt_nosmp: If true, secondary processors are ignored. */
>>> @@ -1804,6 +1805,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>>      if ( opt_invpcid && cpu_has_invpcid )
>>>          use_invpcid = true;
>>>  
>>> +    if ( cpu_has_pks )
>>> +        wrpkrs_and_cache(0); /* Must be before setting CR4.PKS */
>> Same question here as for PKRU wrt the BSP during S3 resume.
> I had reasoned not, but it turns out that I'm wrong.
>
> It's important to reset the cache back to 0 here.  (Handling PKRU is
> different - I'll follow up on the other email..)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index d23335391c67..de9317e8c573 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -299,6 +299,13 @@ static int enter_state(u32 state)
 
     update_mcu_opt_ctrl();
 
+    /*
+     * Should be before restoring CR4, but that is earlier in asm.  We rely on
+     * MSR_PKRS actually being 0 out of S3 resume.
+     */
+    if ( cpu_has_pks )
+        wrpkrs_and_cache(0);
+
     /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
     percpu_traps_init();
 

I've folded this hunk, to sort out the S3 resume path.

As it's the final hunk before the entire series can be committed, I
shan't bother sending a v3 just for this.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 13:02:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 13:02:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478622.741900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHP7w-0008VT-Ef; Mon, 16 Jan 2023 13:02:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478622.741900; Mon, 16 Jan 2023 13:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHP7w-0008VM-Bv; Mon, 16 Jan 2023 13:02:16 +0000
Received: by outflank-mailman (input) for mailman id 478622;
 Mon, 16 Jan 2023 13:02:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHP7u-0008VG-3Y
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 13:02:14 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2083.outbound.protection.outlook.com [40.107.21.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe1f786d-959d-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 14:02:12 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7106.eurprd04.prod.outlook.com (2603:10a6:208:191::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Mon, 16 Jan
 2023 13:02:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 13:02:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe1f786d-959d-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CwqPCCa33SVGKelYnh8fHUWgyKdzXEvAeZ+1kBtBofycvGQ6gcNj6PNy/FSYUfUoJ8MHV8ajjpZNDUljbqhMDWQ+JovSBi6oQ1WuxlGfClAMBRQKmJc0DT5HtZFl7MbEQrZTPRk7L2EO7bS0WbOm57ngjrxOyMP2BsYL3KgfRIhh/5HuXxpuzReObnOD+E8no+oOdDghNEbNpYdkLGwiaeCfLZ/F/+ZR6l2vpWYVWFYKc9Zxfzs5tXP9YZXDxbwMwTZDb5PHFpNVFQ2+YFG9dBpA3TOw0yPoKPL/FrLWmYYEaRQjADq+/v7mRQlvbgVix+MkSipn0GsdpiLILv6GCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HPcv3kBseubSGIczE+gyQLRvkyIRxf4QdIzScOTg1hc=;
 b=K3/ROOFoMIE40kXmd0YFQidM2LkEhelIzsUYwyeqfdtZTUUzshBaRo0YasL5Enx2EYH+RvJI6QBVsX59WY5O9NjgVlnuk8WRFOsWhu9jhbxU14yeWq23VLVscDq8d1y3gmHJsb1P/xDJcs9CD5P51rqdPlKE5KMAxoEWqK9lAipUSpfyMIRaEo4JZJhNMtNpq19oidDlotj0VD3Su/lmnM1jZS+hCTP/zyYgRqCDssCFqnd3DZ/TmivW+YAquWcgiZGDj/LO5bibJjxa67+j3KABsXPhaY2307YW32SXcDmGgTpeUHNYNgM4v5jFgQ3aexd7Ag5XqIhpnrs1gEN/9g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HPcv3kBseubSGIczE+gyQLRvkyIRxf4QdIzScOTg1hc=;
 b=jU+Po0CZCUKq4SUXazJxvt3aUDf2PiiXFbRhMYtuI4gSJ+GEtu1Za0bwE61GCTFg3dnqVohOTnxjTEUHSH+CvHcgRam4KKFnwTUFdh7gHff8pdZDZVKyHfJOxnTkIF+eprAzMLqw0HHuX5aeqnIZvLfiFlaGrCiE7sSWrthEUMW/Y5dRf+BqyMnKbNqA3k9D3tSodNK59KGASKwNCLtFnJUSmflG4TAcyZyeNeDOY4J8ll3JnOZLlUHrV16Bylac60ZJI8ICKMUrVcyLEITQK9r4vGb/8EGx4WyLu+SlvGClK5geSlA78UbpSEP9l4EK8Wnu8jEjv8vCiIZBVxVCzw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8ffc0f3c-eee6-ce1c-70c5-a21931026bf5@suse.com>
Date: Mon, 16 Jan 2023 14:02:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 02/17] xen/arm: implement helpers to get and update
 NUMA status
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230110084930.1095203-1-wei.chen@arm.com>
 <20230110084930.1095203-3-wei.chen@arm.com>
 <9e32ffa1-1499-f9cd-7ca8-f9493b1269cb@suse.com>
 <PAXPR08MB7420E482CACC741B1BA976569EFD9@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <5a802657-a6e8-9cc3-fefb-09a7e68d1e5e@suse.com>
 <cb2ba9fe-e29f-c44e-9139-701f894060a8@arm.com>
 <7c5f0dea-4aed-d8f2-363f-758ecce0104f@suse.com>
 <PAXPR08MB7420AE4FCCF31C35D3FA3EB49EC19@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <PAXPR08MB7420AE4FCCF31C35D3FA3EB49EC19@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0148.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7106:EE_
X-MS-Office365-Filtering-Correlation-Id: 47444e97-94cc-43fc-07ed-08daf7c1e116
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 47444e97-94cc-43fc-07ed-08daf7c1e116
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 13:02:10.8881
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7106

On 16.01.2023 13:01, Wei Chen wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 2023年1月16日 19:15
>>
>> On 16.01.2023 10:20, Wei Chen wrote:
>>> On 2023/1/12 16:08, Jan Beulich wrote:
>>>> On 12.01.2023 07:22, Wei Chen wrote:
>>>>>> -----Original Message-----
>>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>>> Sent: 2023年1月11日 0:38
>>>>>>
>>>>>> On 10.01.2023 09:49, Wei Chen wrote:
>>>>>>> --- a/xen/arch/arm/include/asm/numa.h
>>>>>>> +++ b/xen/arch/arm/include/asm/numa.h
>>>>>>> @@ -22,6 +22,12 @@ typedef u8 nodeid_t;
>>>>>>>    */
>>>>>>>   #define NR_NODE_MEMBLKS NR_MEM_BANKS
>>>>>>>
>>>>>>> +enum dt_numa_status {
>>>>>>> +    DT_NUMA_INIT,
>>>>>>
>>>>>> I don't see any use of this. I also think the name isn't good, as
>> INIT
>>>>>> can be taken for "initializer" as well as "initialized". Suggesting
>> an
>>>>>> alternative would require knowing what the future plans with this are;
>>>>>> right now ...
>>>>>>
>>>>>
>>>>> static enum dt_numa_status __read_mostly device_tree_numa;
>>>>
>>>> There's no DT_NUMA_INIT here. You _imply_ it having a value of zero.
>>>>
>>>
>>> How about I assign device_tree_numa explicitly like:
>>> ... __read_mostly device_tree_numa = DT_NUMA_UNINIT;
>>
>> Well, yes, this is what I was asking for when mentioning the lack of use
>> of the enumerator. Irrespective of that I remain unhappy with the name,
>> though.
>>
> 
> How about DT_NUMA_DEF or do you have some suggestions for the name?

Yeah, "DEFAULT" is probably the least bad one.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 13:57:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 13:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478628.741911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHPyt-0005Kq-IA; Mon, 16 Jan 2023 13:56:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478628.741911; Mon, 16 Jan 2023 13:56:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHPyt-0005Kj-E1; Mon, 16 Jan 2023 13:56:59 +0000
Received: by outflank-mailman (input) for mailman id 478628;
 Mon, 16 Jan 2023 13:56:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHPys-0005KK-MJ
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 13:56:58 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2064.outbound.protection.outlook.com [40.107.105.64])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a32658fc-95a5-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 14:56:56 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8295.eurprd04.prod.outlook.com (2603:10a6:20b:3b0::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 13:56:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 13:56:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a32658fc-95a5-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <59a9637c-e7fe-a99b-43eb-1aff7ffb4b76@suse.com>
Date: Mon, 16 Jan 2023 14:56:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/hvm: Drop pat_entry_2_pte_flags
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230116120201.2829-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230116120201.2829-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0098.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8295:EE_
X-MS-Office365-Filtering-Correlation-Id: 97d8dbad-0e68-4cd4-f132-08daf7c9859a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 97d8dbad-0e68-4cd4-f132-08daf7c9859a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 13:56:53.5544
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8295

On 16.01.2023 13:02, Andrew Cooper wrote:
> Converting from PAT to PTE is trivial, and shorter to encode with bitwise
> logic than the space taken by a table counting from 0 to 7 in non-adjacent
> bits.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:03:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:03:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478633.741921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQ4r-0006p9-56; Mon, 16 Jan 2023 14:03:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478633.741921; Mon, 16 Jan 2023 14:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQ4r-0006p2-2T; Mon, 16 Jan 2023 14:03:09 +0000
Received: by outflank-mailman (input) for mailman id 478633;
 Mon, 16 Jan 2023 14:03:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J8FY=5N=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1pHQ4p-0006ow-Gy
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:03:07 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e561b34-95a6-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 15:03:04 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by sin.source.kernel.org (Postfix) with ESMTPS id 83E69CE1161;
 Mon, 16 Jan 2023 14:03:02 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F1EB4C433F1;
 Mon, 16 Jan 2023 14:02:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e561b34-95a6-11ed-b8d0-410ff93cb8f0
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>,
	Oleksii Moisieiev <oleksii_moisieiev@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 6.1 21/53] xen/pvcalls: free active map buffer on pvcalls_front_free_map
Date: Mon, 16 Jan 2023 09:01:21 -0500
Message-Id: <20230116140154.114951-21-sashal@kernel.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20230116140154.114951-1-sashal@kernel.org>
References: <20230116140154.114951-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>

[ Upstream commit f57034cedeb6e00256313a2a6ee67f974d709b0b ]

The data buffer for an active map is allocated in alloc_active_ring and
freed in the free_active_ring function, which is used only for error
cleanup. pvcalls_front_release calls pvcalls_front_free_map, which
ends foreign access for this buffer but doesn't free the allocated
pages. Call free_active_ring to free all allocated resources.

Signed-off-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/6a762ee32dd655cbb09a4aa0e2307e8919761311.1671531297.git.oleksii_moisieiev@epam.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/xen/pvcalls-front.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
index 1826e8e67125..9b569278788a 100644
--- a/drivers/xen/pvcalls-front.c
+++ b/drivers/xen/pvcalls-front.c
@@ -225,6 +225,8 @@ static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void free_active_ring(struct sock_mapping *map);
+
 static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
 				   struct sock_mapping *map)
 {
@@ -240,7 +242,7 @@ static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
 	for (i = 0; i < (1 << PVCALLS_RING_ORDER); i++)
 		gnttab_end_foreign_access(map->active.ring->ref[i], NULL);
 	gnttab_end_foreign_access(map->active.ref, NULL);
-	free_page((unsigned long)map->active.ring);
+	free_active_ring(map);
 
 	kfree(map);
 }
-- 
2.35.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:17:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:17:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478638.741933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQIc-0008NM-Be; Mon, 16 Jan 2023 14:17:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478638.741933; Mon, 16 Jan 2023 14:17:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQIc-0008NF-8t; Mon, 16 Jan 2023 14:17:22 +0000
Received: by outflank-mailman (input) for mailman id 478638;
 Mon, 16 Jan 2023 14:17:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHQIa-0008N8-UI
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:17:21 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2048.outbound.protection.outlook.com [40.107.104.48])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7ba37fa6-95a8-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 15:17:18 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7595.eurprd04.prod.outlook.com (2603:10a6:10:20d::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 14:17:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 14:17:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ba37fa6-95a8-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IJGbRZ8TxVM19RYkvUXuIzsPKcgO4ykDWuK/bCU033FZOA0MMyIWu4UQKQ3cRe8CinjKYtTZsgf6L5/KKx+u9ltNRyytmkphYsnW6wzSp+NWVJ2qvXnbWSiMOziiyN78cUHZ6Oigt3yzPB+mSNewlo+mOIl7ly4DHo3rtcvXCprGi0auZDROgUwqcgScNU2SjJya82xwNeVD2X8XIJ8EZ5v9RefOfExImPSA5s5BVNOr6sWvzYHGPduv3sWgLVU1962SliT0Ntol601iMi3FzfRKFCOGt3XzMOsKb0mzjMUpUMaxZzBdR1NW37yXnon+K+AZ6/7OKepO1rz242d3aw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DZdV/7KhVK8Mkszdu3fyT+VI9hxYQgROkZDAHCL1GmM=;
 b=DGiMLR5+d1/fEFutNHMbX7lx3xbiCIIz8Sw6b10+IZ+GhcVKed5FyCcwmPOGZfZlShwIXKe52EXjIIGdGDFqP0Sq/xrklI0GUO4hfQM8i3rKe3o7NeC0GEkYEwRvn99gUZZJkZTdNhL6+5N+fgr9emz6Q/l+MM3hdZlu7xlfjgQB5y4emfHMG3X189hTyMG0PTlUdJ1cNCxX/bM1Ob+vFhTOLIef9l1qVxYK2KTe98ocJuAEQgj2gvR7SveH518dlrJKqnau8o2BOtqNp6AoQqHZzTDNyBUTMLCy2jxMa6eqNjUH0LQ4I4EXBLfIAaw4wd864UOE8uNH8lRpCUz1ug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DZdV/7KhVK8Mkszdu3fyT+VI9hxYQgROkZDAHCL1GmM=;
 b=lz8PxpaRsQvvPzrlcRdYjZpLSFneI97JC2nINsrUwSynOklO9+rvCoxOmXCSCM5+ZST1hrZ46BIjc1yxavXmvBv0tpW+3/+Mgyg09GUarxk1jmh4Yzit5JcHjk/qdbfPPlxtMCpVeSY+zMh4/BZopNJ6hCE4p2HEu63zdbJVTiO4Gg+lvy/4AssnWTyQWjw09r6K4pZeNvF+cvJWd8rpNTiu30d88xtN0nEf+oeMnMH6Q/0tg1W5IfuTJmcclD1t3ebZYUwtKurSDFjzhODSmd9GXObkX9e6JOmA0WGy4gO00QiHs19Fp14n02kK9MJMkgT1gPYQ3YEHMe5Ur3WIiA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <309925fd-1e7b-4541-693a-0296bd22e242@suse.com>
Date: Mon, 16 Jan 2023 15:17:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-6-andrew.cooper3@citrix.com>
 <af2b74b2-8f37-223c-b830-c2bb3bc6d467@suse.com>
 <3ac6a4d6-44db-d248-4440-6e71aa14ad93@citrix.com>
 <adf6f951-a0e5-c167-9739-d8b0a2b4af38@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <adf6f951-a0e5-c167-9739-d8b0a2b4af38@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0059.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7595:EE_
X-MS-Office365-Filtering-Correlation-Id: fbe08447-9c54-4da3-700c-08daf7cc5ea7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Hpv8NGCAsN3i75kKejferwIqW+LE6ubZZ02b3U6TusVX2RDejXLRjfa63v6QHPK5q/CVPVl3slGSoVAcM/Pk+OHOdCTOPqyytpphs9Vjn090Pp5ihp22X9OLolAawJ+KeSukAsJL7phFpLt98YMFNSUZjW/+myKTJnCXR1ro6fIDz4BF7gVP8Ni0VQk0jUAFPWA9WGewGOcYUb3F/bkFx2ue2sEnsphwXOh8bmw7mdwWAnh1Xdn+PeoZxY5BXo/Sd3YU3t3hICIAPLyOIzNUvKE1DXrzfbobwBjsiT4xFfr8jqeidRhlowopJLN/T3QwF8KQBZDfgkIf34ChZY4qgW9MYtinYhybTNIssSt2Wde4XfwKFqCQYsttx9M71+/3RhK+iQpZ2jAPjZRIcNkdLg6Pe0JAdsWsO5PYH9QOT4GWRlyy/kp+kiMqkAMNnSPkcqbTOMcFugf6UHMwcbWboWFcs6E0wzolBnED/zJOgKF5gIwx8Bts04n4ZgKEMsA186+8ZBUllmHV0uQywkgDGUnuSxwP9fFcsoW/Res8TblJjEmH98yBt2qCgQPKBWeUxWwZw2jZVIGpxrhk+0XjdkYxKy1LPcINRF1/fQNttR6ZyFReRUY0Y1gp+abUFZGXEw6E2IZuZMSuU/1jWuNfQWdRp8Pd97fYCpva39hixmKaZlwPeZRXsZ+bc3aQHd3qWBlss/jzwryWzvMpgAPhJJmSJiSPfqZgYYfF/XK5lzM=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(346002)(39860400002)(376002)(136003)(396003)(366004)(451199015)(31686004)(66946007)(66556008)(41300700001)(2616005)(66476007)(26005)(186003)(6512007)(53546011)(8676002)(4326008)(6916009)(86362001)(31696002)(36756003)(83380400001)(5660300002)(8936002)(54906003)(478600001)(6506007)(316002)(38100700002)(2906002)(6486002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MnlVRmRPeFk1aVNRb3NHVUxMKzB1SVpIQmZwRzRJU3ZQSWkxazZha2EvRFBv?=
 =?utf-8?B?em1tWVByUU80dDlvNWVzcHU4eWpua2l4TjN3M24xa1B4Z1EzeGR2Z1cwdHBW?=
 =?utf-8?B?Tzc2Si9pVVVCNHlPZDFDRlN3V2hqYW9nN0xDam9WeXlodm9GNUZrOG90eVlq?=
 =?utf-8?B?SjVqZThjSUE2YXZyYVdnc0t5VStXYVFsV2h5YndaS3VHbWtHbVN1Y0FNOTE0?=
 =?utf-8?B?K1VYUWRuTGJlaVVkaUczM051R2J6RmU0NzBZSHAzay9JZ0VwUVYxOXplalAw?=
 =?utf-8?B?RXFEeE9EQWVKendVdGRLL1pGQU43d01yQ2VFTDBnODV3b1dVc1NxQktRdEta?=
 =?utf-8?B?VHJGMEhBaVM3cFhyYmJrNVRrMVRPNjVyUGFBcFh0Z0N4bmVodFp1bThCNmlQ?=
 =?utf-8?B?UEtNTHlvSVJaTDFkeXZYckRNRW9MdkNlbWFSR21JMUFlWVB4T281ZmN1NVZ4?=
 =?utf-8?B?ZzlsTVEwUklJaDl0TCtGV2lPYmZiMFFIWXgxVGRMc3c3ME9OUmJPUVRqNkJB?=
 =?utf-8?B?OVQ0Z2hJMGdzVzF3VFVNcGRpQSt6TWZJVG45QVp6bkF2bktBWXZEWkVnVVFG?=
 =?utf-8?B?VmRocEIrNk9ETlVybkpKaXNpSEQ3OFdhQ08zN25rMCtPWXB4blVXUWZQR0Zw?=
 =?utf-8?B?eVovOWxXZzhtK3dsSjVwRkJ0eWZ3NWwxK09MQ3plQ0dQMGs0cUtRNytMY0dn?=
 =?utf-8?B?NG02bWdDUWJXNmI3eGlLcmtPNE50OTRMQjJiNlFsSVo1UU1qMG9oNk5vcTQ1?=
 =?utf-8?B?UHRZTTgzQmdZeWtyWVBPc3U2bkVvQmJkcjNhdUMrdjZDWExYVDVWejRlOWV3?=
 =?utf-8?B?UkpHcE5tNjJ0V3puWmVEZUlnbmFXQmFhWGlaM0tFTUdYWDFyRHYremwzVE9P?=
 =?utf-8?B?aUpSNElxOC8vNHc1SmtOQXYxT1lpMndGT1pIdFpSeXZ5bVBCVnphc2x5Zy91?=
 =?utf-8?B?K2g3eUlZbTQwWUROMUxvVGp6ZlAvaE8zYjBRQ1ZUbmZvT3RjWDlXbnRZS0FW?=
 =?utf-8?B?bTFUZUthSTlCeEJCM0U2aDhrMTB5cjFURGg0ejFuUCtveE1WMUc1ZWFpOHBz?=
 =?utf-8?B?dk5PVlp4NGpyb0FxVXJWQy9TbFNEdkQ1TEZtQXNqUWxTTHJaQ0FuTXhoVFlH?=
 =?utf-8?B?TkdrM2tVV0JmajRNOVJ0U0NkQjQwUFo1NHpEb2hyejFiZ0Z2dE5wbHF5dVV2?=
 =?utf-8?B?OS8zMGRib3Y0cUZRc2Y4cVJScHVZakRSQ3FIalRsRzBMOU9JSDhSbFNTbGVn?=
 =?utf-8?B?OFg1SU9LTm1MYS9sUUNYaXIrTjFaam81V0gwNXp4cWNKWlFjOEw3ZEY3SmNB?=
 =?utf-8?B?OUZEcnVNZjYxeThlWS9LcnNqY1ZmTGg2T2dkQkpkRm5TMi9kREtKTFN5czhp?=
 =?utf-8?B?WFpTbmtCSjRRY0RzWVhrdUFBbHY1d0plQUZCWS9qRnZkMVBid2ZRNHE0dXMr?=
 =?utf-8?B?c01NdjdvSXBEU1BTeWxtQXdaZE9FVzFmNzBFV1VQaTJnT0xaMkR3NDZhT0lB?=
 =?utf-8?B?cHdWV1BMNzN6TWRpYkFSYkZkUE9SNno5WDBneXJ4YWlhRmw0M1ppcWZQUTBR?=
 =?utf-8?B?dWJCVWQyZ2tscFhmQ1hSV1JGa3E5Z1JKbnhrUjUzeUpESnNTcnJGSWZZSXVU?=
 =?utf-8?B?T3ZCNGFWLzZRQXdHSmR0K21pdDBvU2F6bld4a2hoZHlWbUxEQmtoK09ueGV4?=
 =?utf-8?B?R0ZxY1NQQ05LaGoxODZMUzlkYVhZVENRWDVKZ0R0bFAvNkZFK3VhZ0JWYzJv?=
 =?utf-8?B?OHd4K3ZqV2diODRqNHM1bERyMnhJcHNJQ2lLYzY5Yy9FYWU1bHhZQm16cnVk?=
 =?utf-8?B?MzdYQi9pWmh5WUJPTTNKWTdKM3VmRk1ITy81SUZXWitnVkZsbFQ3YXlobTk5?=
 =?utf-8?B?OUFQMlE2MzU1Nm8vY2EybndSQVBZbmRYZDRnM1dUbFlOLzU4OEthcXpHUUY0?=
 =?utf-8?B?VXd2Q0hKNysyU2VUMW5OY21TVkxTRTdoUTYwTFhIcGthb0d0bkg3aGZTczJI?=
 =?utf-8?B?Q0d5UDVuTmxsUk1UU1V4UlRyL1lTMmJlbzVuZUFTbFN5elQvRjh1VnArcGxq?=
 =?utf-8?B?QUJXbTlmbmNqZzFnN2J5TnZOKy9uK2JxQWI4OW9iUnBGRjV0dFRCK3ZqY0xr?=
 =?utf-8?Q?4qfrqkxUllPuFqyxCYjQjUW11?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fbe08447-9c54-4da3-700c-08daf7cc5ea7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 14:17:16.2579
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: l0yUeocQuML/DITxH6dPL4qUOqAQJGW3PsuVhLeEWY3pJiz05B3/4Y92R0E6KoeZd1xZNoB3C51/SUyze6pniw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7595

On 16.01.2023 14:00, Andrew Cooper wrote:
> On 12/01/2023 4:51 pm, Andrew Cooper wrote:
>> On 12/01/2023 1:10 pm, Jan Beulich wrote:
>>> On 10.01.2023 18:18, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/setup.c
>>>> +++ b/xen/arch/x86/setup.c
>>>> @@ -54,6 +54,7 @@
>>>>  #include <asm/spec_ctrl.h>
>>>>  #include <asm/guest.h>
>>>>  #include <asm/microcode.h>
>>>> +#include <asm/prot-key.h>
>>>>  #include <asm/pv/domain.h>
>>>>  
>>>>  /* opt_nosmp: If true, secondary processors are ignored. */
>>>> @@ -1804,6 +1805,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>>>      if ( opt_invpcid && cpu_has_invpcid )
>>>>          use_invpcid = true;
>>>>  
>>>> +    if ( cpu_has_pks )
>>>> +        wrpkrs_and_cache(0); /* Must be before setting CR4.PKS */
>>> Same question here as for PKRU wrt the BSP during S3 resume.
>> I had reasoned not, but it turns out that I'm wrong.
>>
>> It's important to reset the cache back to 0 here.  (Handling PKRU is
>> different - I'll follow up on the other email..)
> 
> diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
> index d23335391c67..de9317e8c573 100644
> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -299,6 +299,13 @@ static int enter_state(u32 state)
>  
>      update_mcu_opt_ctrl();
>  
> +    /*
> +     * Should be before restoring CR4, but that is earlier in asm.  We
> +     * rely on MSR_PKRS actually being 0 out of S3 resume.
> +     */
> +    if ( cpu_has_pks )
> +        wrpkrs_and_cache(0);
> +
>      /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
>      percpu_traps_init();
>  
> 
> I've folded this hunk, to sort out the S3 resume path.

The comment is a little misleading imo - it reads as if nothing needs
doing. Could you add "..., but our cache needs clearing" to clarify why
the write is needed, despite our relying on zero being in the register
(which I find problematic, considering that the doc doesn't even spell
out the reset state)?

> As it's the final hunk before the entire series can be committed, I
> shan't bother sending a v3 just for this.

If you're seeing reasons not to be concerned about the unspecified reset
state, then feel free to add my A-b (but not R-b) here.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:38:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478646.741965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQcY-0002uH-MO; Mon, 16 Jan 2023 14:37:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478646.741965; Mon, 16 Jan 2023 14:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQcY-0002uA-IQ; Mon, 16 Jan 2023 14:37:58 +0000
Received: by outflank-mailman (input) for mailman id 478646;
 Mon, 16 Jan 2023 14:37:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLMM=5N=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pHQcW-0002Pl-HV
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:37:57 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5ccb2147-95ab-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 15:37:55 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pHQc8-005csz-1j; Mon, 16 Jan 2023 14:37:33 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 56FD1300C9D;
 Mon, 16 Jan 2023 15:37:39 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id C252D20EF0A28; Mon, 16 Jan 2023 15:37:38 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ccb2147-95ab-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=w9+PAdJk73UH2Fl3uPAAs4wgk5kyO76VlH+ufUk/PX4=; b=OB3RdkQh/z8SFHj5199h+5XxCG
	thtkjKdk+ckbxf7Iellx2ZelkT1vO3AvWAgAjOiq8fzpCJwR6AC2SkmBx8R0Lr5PfqM/r3BVOSesf
	iNTTF8J0Dh6dYSuq81Aos7ogKJWM8VOUSyQK3UNQsqCLKlLOI0+qQzX7Hru0M/PxVUOf8M3fiPwcL
	XUNEPBzCO1X90Eazd5t/tUXX6nnvwXa2NznHU7elGvs2Vf9jjgratRjuPAlLu2YOjbhnfusCUjccb
	9glOoHYAYwuO41FGkPxDyw2EqP509qne4J5wYOpjcjdfHCeOJ6BFrHWxncvKtAlQKHBLhyYXy8uSD
	vFlY2FHg==;
Message-ID: <20230116143645.948125465@infradead.org>
User-Agent: quilt/0.66
Date: Mon, 16 Jan 2023 15:25:40 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 =?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
 "H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 7/7] PM / hibernate: Add minimal noinstr annotations
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

When resuming, there must not be any code between swsusp_arch_suspend()
and restore_processor_state(), since the CPU state is ill-defined at
that point in time.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/power/hibernate.c |   30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -280,6 +280,32 @@ __weak int arch_resume_nosmt(void)
 	return 0;
 }
 
+static noinstr int suspend_and_restore(void)
+{
+	int error;
+
+	/*
+	 * Strictly speaking swsusp_arch_suspend() should be noinstr too but it
+	 * is typically written in asm, as such, assume it is good and shut up
+	 * the validator.
+	 */
+	instrumentation_begin();
+	error = swsusp_arch_suspend();
+	instrumentation_end();
+
+	/*
+	 * Architecture resume code 'returns' from the swsusp_arch_suspend()
+	 * call and resumes execution here with some very dodgy machine state.
+	 *
+	 * Compiler instrumentation between these two calls (or in
+	 * restore_processor_state() for that matter) will make life *very*
+	 * interesting indeed.
+	 */
+	restore_processor_state();
+
+	return error;
+}
+
 /**
  * create_image - Create a hibernation image.
  * @platform_mode: Whether or not to use the platform driver.
@@ -323,9 +349,7 @@ static int create_image(int platform_mod
 	in_suspend = 1;
 	save_processor_state();
 	trace_suspend_resume(TPS("machine_suspend"), PM_EVENT_HIBERNATE, true);
-	error = swsusp_arch_suspend();
-	/* Restore control flow magically appears here */
-	restore_processor_state();
+	error = suspend_and_restore();
 	trace_suspend_resume(TPS("machine_suspend"), PM_EVENT_HIBERNATE, false);
 	if (error)
 		pr_err("Error %d creating image\n", error);




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:38:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478649.741988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQca-0003JE-L0; Mon, 16 Jan 2023 14:38:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478649.741988; Mon, 16 Jan 2023 14:38:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQca-0003EQ-BN; Mon, 16 Jan 2023 14:38:00 +0000
Received: by outflank-mailman (input) for mailman id 478649;
 Mon, 16 Jan 2023 14:37:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLMM=5N=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pHQcY-0002Pk-T6
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:37:58 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5cfbc166-95ab-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 15:37:55 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHQcU-008oZF-RG; Mon, 16 Jan 2023 14:37:55 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 56162300C6F;
 Mon, 16 Jan 2023 15:37:39 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id B9FA620EF0A23; Mon, 16 Jan 2023 15:37:38 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cfbc166-95ab-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=Mc6JE/lSqhoI08ZCOd1MO1vmLEzmsp/ETRSpv4hRS+8=; b=T1nrQTJttu/3Jnm9EJrYVDiMxC
	s34Y8QKh6Os9ywtC1Z1w2DiWPN6Xf9K5GkRtu6DtxWWsA8ej0VDBcu9iVcitzgXSTWCKMPsZsT/yv
	UyAjSANcObXIQvCe1J1cb2ZFm0PgpHHKh22ywjkdRZY49ET7hYiwVX9R/AkWzL156Ux4u1mcKbpW3
	wubPrTdQVa+IM+KQkoM2pcN0SQ8YrHNTvKlSQpXlxOMZ0Z7Y2ts9qxviLI1ZgAHkTz7oQ5Z95SZu9
	jTowvNgMdlt4jqLsw2QD7LpDyIe1S1Qwn8XSQdU41tuNFe9PXsjS6ESqii8W89Nd11lkxG5d+ERsF
	kQkbM/9A==;
Message-ID: <20230116143645.829076358@infradead.org>
User-Agent: quilt/0.66
Date: Mon, 16 Jan 2023 15:25:38 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 =?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
 "H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 5/7] x86/callthunk: No callthunk for restore_processor_state()
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

From: Joan Bruguera <joanbrugueram@gmail.com>

When resuming from suspend we don't have coherent CPU state; trying to
do callthunks here isn't going to work. Specifically, GS isn't set yet.

Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230109040531.7888-1-joanbrugueram@gmail.com
---
 arch/x86/kernel/callthunks.c |    5 +++++
 arch/x86/power/cpu.c         |    3 +++
 2 files changed, 8 insertions(+)

--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -7,6 +7,7 @@
 #include <linux/memory.h>
 #include <linux/moduleloader.h>
 #include <linux/static_call.h>
+#include <linux/suspend.h>
 
 #include <asm/alternative.h>
 #include <asm/asm-offsets.h>
@@ -151,6 +152,10 @@ static bool skip_addr(void *dest)
 	    dest < (void*)hypercall_page + PAGE_SIZE)
 		return true;
 #endif
+#ifdef CONFIG_PM_SLEEP
+	if (dest == restore_processor_state)
+		return true;
+#endif
 	return false;
 }
 




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:38:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478645.741949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQcX-0002Sf-DT; Mon, 16 Jan 2023 14:37:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478645.741949; Mon, 16 Jan 2023 14:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQcX-0002Rt-8S; Mon, 16 Jan 2023 14:37:57 +0000
Received: by outflank-mailman (input) for mailman id 478645;
 Mon, 16 Jan 2023 14:37:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLMM=5N=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pHQcV-0002Pl-2i
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:37:55 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5bd6f224-95ab-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 15:37:53 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHQcU-008oZG-So; Mon, 16 Jan 2023 14:37:55 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 56119300C50;
 Mon, 16 Jan 2023 15:37:39 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id BEB6220D304B0; Mon, 16 Jan 2023 15:37:38 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bd6f224-95ab-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=eW1eJS20PBPNIc/x7hkOEbhV5Z70C96GEP7VlPQr3GE=; b=bkEUaHzN3DU2Ozeq31w4gh/n2o
	R6F441bq+sbQaHOybPyF32cPvwYN0zd7oUr9R4K9RCdt8hDgWzWxaj2YBgmp4DkR9+D8F5Gs4dYOt
	9lUNiL9h6dmbWadpBSYOt2SznC7hv64myxTtyscypeuWaItFMH/MD2H9N+RHOQ0ha2uXGYbfnTpp+
	NghLUZyPGSsWGWKu+nqt7jQ7P4s9CMg2/V9GnkXxMdKljoX9C9dsDu4q8LkBDu52HDGHBkiTsolMw
	vvwHa2GRba1mlCVFQpftSw0QHpUi6oRrEyKnr2j/V09JyB9VCvK9zjxBViUdKEbhTQhWC7ktrQ+hh
	hJ6cgCPA==;
Message-ID: <20230116143645.888786209@infradead.org>
User-Agent: quilt/0.66
Date: Mon, 16 Jan 2023 15:25:39 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 =?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
 "H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 6/7] x86/power: Sprinkle some noinstr
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Ensure no compiler instrumentation sneaks in while restoring the CPU
state. Specifically, we can't handle CALL/RET until GS is restored.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/power/cpu.c |   13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -192,7 +192,7 @@ static void fix_processor_context(void)
  * The asm code that gets us here will have restored a usable GDT, although
  * it will be pointing to the wrong alias.
  */
-static void notrace __restore_processor_state(struct saved_context *ctxt)
+static __always_inline void __restore_processor_state(struct saved_context *ctxt)
 {
 	struct cpuinfo_x86 *c;
 
@@ -235,6 +235,13 @@ static void notrace __restore_processor_
 	loadsegment(fs, __KERNEL_PERCPU);
 #endif
 
+	/*
+	 * Definitely wrong, but at this point we should have at least enough
+	 * to do CALL/RET (consider SKL callthunks) and this avoids having
+	 * to deal with the noinstr explosion for now :/
+	 */
+	instrumentation_begin();
+
 	/* Restore the TSS, RO GDT, LDT, and usermode-relevant MSRs. */
 	fix_processor_context();
 
@@ -276,10 +283,12 @@ static void notrace __restore_processor_
 	 * because some of the MSRs are "emulated" in microcode.
 	 */
 	msr_restore_context(ctxt);
+
+	instrumentation_end();
 }
 
 /* Needed by apm.c */
-void notrace restore_processor_state(void)
+void noinstr restore_processor_state(void)
 {
 	__restore_processor_state(&saved_context);
 }




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:38:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478644.741944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQcX-0002Q3-4I; Mon, 16 Jan 2023 14:37:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478644.741944; Mon, 16 Jan 2023 14:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQcX-0002Pw-1P; Mon, 16 Jan 2023 14:37:57 +0000
Received: by outflank-mailman (input) for mailman id 478644;
 Mon, 16 Jan 2023 14:37:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLMM=5N=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pHQcU-0002Pk-77
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:37:54 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 59b18eec-95ab-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 15:37:51 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHQcT-008oZ7-Sz; Mon, 16 Jan 2023 14:37:54 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E3A5C3007DA;
 Mon, 16 Jan 2023 15:37:38 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0)
 id AF96620306BCC; Mon, 16 Jan 2023 15:37:38 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59b18eec-95ab-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References:
	Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:
	Content-ID:Content-Description:In-Reply-To;
	bh=/Xw5T+5vUqgsMfvHO/2XJeMLoksLNzw5fViFdsAKWFI=; b=Eq3y0scguUgdkMj79Zzk6GCrvF
	hQAhEQP/E7cAerOW9szVqcnkFmKfzWUaiZxup/I+SMwXzVRopqfn4zPHPbmkQufcFo19M0uELMYv/
	kYPxOMfLbqjIwv9QIGVc5oiVosXskQwBXNQfwBy/jDHeBBuxrkRaeFIhaA+Tm2XLDqfwvxvnRyH0O
	rvOYW2PlqyMs+3qT9hF1XykM1+ON8wTIDA4x8JJJGyRjveBRsW6uZuMNrQ9P/htuomhOyIDx2ZWUH
	ITNdFmVF/slE93TC/wZdRxX36dsgHyfS5wX7wOPPamJ7fIEFgb5fD7j0P6Cn79iKWZ/XYGoYdt6+A
	rE5ri56A==;
Message-ID: <20230116143645.649204101@infradead.org>
User-Agent: quilt/0.66
Date: Mon, 16 Jan 2023 15:25:35 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 =?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
 "H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Per the comment, it is important to call sev_verify_cbit() before the
first RET instruction. This means we can delay the call until more of
the CPU state is set up; specifically, until GS is 'sane' such that
per-cpu variables work.

Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/kernel/head_64.S |   26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -185,19 +185,6 @@ SYM_CODE_START(secondary_startup_64)
 	addq	phys_base(%rip), %rax
 
 	/*
-	 * For SEV guests: Verify that the C-bit is correct. A malicious
-	 * hypervisor could lie about the C-bit position to perform a ROP
-	 * attack on the guest by writing to the unencrypted stack and wait for
-	 * the next RET instruction.
-	 * %rsi carries pointer to realmode data and is callee-clobbered. Save
-	 * and restore it.
-	 */
-	pushq	%rsi
-	movq	%rax, %rdi
-	call	sev_verify_cbit
-	popq	%rsi
-
-	/*
 	 * Switch to new page-table
 	 *
 	 * For the boot CPU this switches to early_top_pgt which still has the
@@ -265,6 +252,19 @@ SYM_CODE_START(secondary_startup_64)
 	 */
 	movq initial_stack(%rip), %rsp
 
+	/*
+	 * For SEV guests: Verify that the C-bit is correct. A malicious
+	 * hypervisor could lie about the C-bit position to perform a ROP
+	 * attack on the guest by writing to the unencrypted stack and wait for
+	 * the next RET instruction.
+	 * %rsi carries pointer to realmode data and is callee-clobbered. Save
+	 * and restore it.
+	 */
+	pushq	%rsi
+	movq	%rax, %rdi
+	call	sev_verify_cbit
+	popq	%rsi
+
 	/* Setup and Load IDT */
 	pushq	%rsi
 	call	early_setup_idt




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:38:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:38:01 +0000
Message-ID: <20230116143645.768035056@infradead.org>
User-Agent: quilt/0.66
Date: Mon, 16 Jan 2023 15:25:37 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 =?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
 "H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 4/7] x86/power: Inline write_cr[04]()
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Since we can't do CALL/RET until GS is restored and CR[04] pinning is
of dubious value in this code path, simply write the stored values.

Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/power/cpu.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -208,11 +208,11 @@ static void notrace __restore_processor_
 #else
 /* CONFIG X86_64 */
 	native_wrmsrl(MSR_EFER, ctxt->efer);
-	native_write_cr4(ctxt->cr4);
+	asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
 #endif
 	native_write_cr3(ctxt->cr3);
 	native_write_cr2(ctxt->cr2);
-	native_write_cr0(ctxt->cr0);
+	asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
 
 	/* Restore the IDT. */
 	native_load_idt(&ctxt->idt);




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:38:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:38:01 +0000
Message-ID: <20230116142533.905102512@infradead.org>
User-Agent: quilt/0.66
Date: Mon, 16 Jan 2023 15:25:33 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 =?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
 "H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 0/7] x86: retbleed=stuff fixes

Hi all,

Patches to address the various callthunk fails reported by Joan.

The first two patches are new (and I've temporarily dropped the
restore_processor_state sealing).

It is my understanding that AP bringup will always use the 16bit trampoline
path; if this is not the case, please holler.




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:38:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:38:02 +0000
Message-ID: <20230116143645.589522290@infradead.org>
User-Agent: quilt/0.66
Date: Mon, 16 Jan 2023 15:25:34 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 =?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
 "H. Peter Anvin" <hpa@zytor.com>,
 jroedel@suse.de
Subject: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64()
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

The boot trampolines from trampoline_64.S have code flow like:

  16bit BIOS			SEV-ES				64bit EFI

  trampoline_start()		sev_es_trampoline_start()	trampoline_start_64()
    verify_cpu()			  |				|
  switch_to_protected:    <---------------'				v
       |							pa_trampoline_compat()
       v								|
  startup_32()		<-----------------------------------------------'
       |
       v
  startup_64()
       |
       v
  tr_start() := head_64.S:secondary_startup_64()

Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
touch the APs), there is already a verify_cpu() invocation.

Removing the verify_cpu() invocation from secondary_startup_64()
renders the whole secondary_startup_64_no_verify() thing moot, so
remove that too.

Cc: jroedel@suse.de
Cc: hpa@zytor.com
Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/realmode.h |    1 -
 arch/x86/kernel/head_64.S       |   16 ----------------
 arch/x86/realmode/init.c        |    6 ------
 3 files changed, 23 deletions(-)

--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -73,7 +73,6 @@ extern unsigned char startup_32_smp[];
 extern unsigned char boot_gdt[];
 #else
 extern unsigned char secondary_startup_64[];
-extern unsigned char secondary_startup_64_no_verify[];
 #endif
 
 static inline size_t real_mode_size_needed(void)
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -143,22 +143,6 @@ SYM_CODE_START(secondary_startup_64)
 	 * after the boot processor executes this code.
 	 */
 
-	/* Sanitize CPU configuration */
-	call verify_cpu
-
-	/*
-	 * The secondary_startup_64_no_verify entry point is only used by
-	 * SEV-ES guests. In those guests the call to verify_cpu() would cause
-	 * #VC exceptions which can not be handled at this stage of secondary
-	 * CPU bringup.
-	 *
-	 * All non SEV-ES systems, especially Intel systems, need to execute
-	 * verify_cpu() above to make sure NX is enabled.
-	 */
-SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
-	UNWIND_HINT_EMPTY
-	ANNOTATE_NOENDBR
-
 	/*
 	 * Retrieve the modifier (SME encryption mask if SME is active) to be
 	 * added to the initial pgdir entry that will be programmed into CR3.
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -74,12 +74,6 @@ static void __init sme_sev_setup_real_mo
 		th->flags |= TH_FLAGS_SME_ACTIVE;
 
 	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
-		/*
-		 * Skip the call to verify_cpu() in secondary_startup_64 as it
-		 * will cause #VC exceptions when the AP can't handle them yet.
-		 */
-		th->start = (u64) secondary_startup_64_no_verify;
-
 		if (sev_es_setup_ap_jump_table(real_mode_header))
 			panic("Failed to get/update SEV-ES AP Jump Table");
 	}




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:38:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:38:04 +0000
Message-ID: <20230116143645.708895882@infradead.org>
User-Agent: quilt/0.66
Date: Mon, 16 Jan 2023 15:25:36 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org,
 Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org,
 peterz@infradead.org,
 Juergen Gross <jgross@suse.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Kees Cook <keescook@chromium.org>,
 mark.rutland@arm.com,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 =?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
 "H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 3/7] x86/power: De-paravirt restore_processor_state()
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Since Xen PV doesn't use restore_processor_state(), and we're going to
have to avoid CALL/RET until at least GS is restored, de-paravirt the
easy bits.

Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/power/cpu.c |   24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -197,25 +197,25 @@ static void notrace __restore_processor_
 	struct cpuinfo_x86 *c;
 
 	if (ctxt->misc_enable_saved)
-		wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
+		native_wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
 	/*
 	 * control registers
 	 */
 	/* cr4 was introduced in the Pentium CPU */
 #ifdef CONFIG_X86_32
 	if (ctxt->cr4)
-		__write_cr4(ctxt->cr4);
+		native_write_cr4(ctxt->cr4);
 #else
 /* CONFIG X86_64 */
-	wrmsrl(MSR_EFER, ctxt->efer);
-	__write_cr4(ctxt->cr4);
+	native_wrmsrl(MSR_EFER, ctxt->efer);
+	native_write_cr4(ctxt->cr4);
 #endif
-	write_cr3(ctxt->cr3);
-	write_cr2(ctxt->cr2);
-	write_cr0(ctxt->cr0);
+	native_write_cr3(ctxt->cr3);
+	native_write_cr2(ctxt->cr2);
+	native_write_cr0(ctxt->cr0);
 
 	/* Restore the IDT. */
-	load_idt(&ctxt->idt);
+	native_load_idt(&ctxt->idt);
 
 	/*
 	 * Just in case the asm code got us here with the SS, DS, or ES
@@ -230,7 +230,7 @@ static void notrace __restore_processor_
 	 * handlers or in complicated helpers like load_gs_index().
 	 */
 #ifdef CONFIG_X86_64
-	wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
+	native_wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
 #else
 	loadsegment(fs, __KERNEL_PERCPU);
 #endif
@@ -246,15 +246,15 @@ static void notrace __restore_processor_
 	loadsegment(ds, ctxt->es);
 	loadsegment(es, ctxt->es);
 	loadsegment(fs, ctxt->fs);
-	load_gs_index(ctxt->gs);
+	native_load_gs_index(ctxt->gs);
 
 	/*
 	 * Restore FSBASE and GSBASE after restoring the selectors, since
 	 * restoring the selectors clobbers the bases.  Keep in mind
 	 * that MSR_KERNEL_GS_BASE is horribly misnamed.
 	 */
-	wrmsrl(MSR_FS_BASE, ctxt->fs_base);
-	wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
+	native_wrmsrl(MSR_FS_BASE, ctxt->fs_base);
+	native_wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
 #else
 	loadsegment(gs, ctxt->gs);
 #endif




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:39:44 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v4 0/4] The patch series introduces the following:
Date: Mon, 16 Jan 2023 16:39:28 +0200
Message-Id: <cover.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

---
Changes in V4:
    - Patches "xen/riscv: introduce dummy asm/init.h" and "xen/riscv: introduce
      stack stuff" were removed from the patch series as they were merged separately
      into staging.
    - Remove "depends on RISCV*" from Kconfig.debug as Kconfig.debug is located
      in arch specific folder.
    - fix code style.
    - Add "ifdef __riscv_cmodel_medany" to early_printk.c.
---
Changes in V3:
    - Most of "[PATCH v2 7/8] xen/riscv: print hello message from C env"
      was merged with "[PATCH v2 3/6] xen/riscv: introduce stack stuff".
    - "[PATCH v2 7/8] xen/riscv: print hello message from C env" was
      merged with "[PATCH v2 6/8] xen/riscv: introduce early_printk basic
      stuff".
    - "[PATCH v2 5/8] xen/include: include <asm/types.h> in
      <xen/early_printk.h>" was removed as it has been already merged to
      mainline staging.
    - code style fixes.
---
Changes in V2:
    - Update patch commit messages according to the mailing
      list comments
    - Remove unneeded types in <asm/types.h>
    - Introduce definition of STACK_SIZE
    - order the files alphabetically in Makefile
    - Add license to early_printk.c
    - Add RISCV_32 dependency to config EARLY_PRINTK in Kconfig.debug
    - Move dockerfile changes to a separate config and send them as a
      separate patch to the mailing list.
    - Update test.yaml to wire up smoke test
---


Bobby Eshleman (1):
  xen/riscv: introduce sbi call to putchar to console

Oleksii Kurochko (3):
  xen/riscv: introduce asm/types.h header file
  xen/riscv: introduce early_printk basic stuff
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 ++++++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 ++++++++++
 xen/arch/riscv/Kconfig.debug              |  6 +++
 xen/arch/riscv/Makefile                   |  2 +
 xen/arch/riscv/early_printk.c             | 45 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
 xen/arch/riscv/include/asm/sbi.h          | 34 +++++++++++++++++
 xen/arch/riscv/include/asm/types.h        | 43 ++++++++++++++++++++++
 xen/arch/riscv/sbi.c                      | 45 +++++++++++++++++++++++
 xen/arch/riscv/setup.c                    |  6 ++-
 10 files changed, 232 insertions(+), 1 deletion(-)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/include/asm/types.h
 create mode 100644 xen/arch/riscv/sbi.c

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:39:46 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 2/4] xen/riscv: introduce sbi call to putchar to console
Date: Mon, 16 Jan 2023 16:39:30 +0200
Message-Id: <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Bobby Eshleman <bobby.eshleman@gmail.com>

The SBI implementation for Xen was originally introduced by
Bobby Eshleman <bobby.eshleman@gmail.com>, but for simplicity
everything was removed except the SBI call for putting a
character to the console.

The patch introduces the sbi_console_putchar() SBI call, which is
necessary to implement the initial early_printk.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
    - Nothing changed
---
Changes in V3:
    - update copyright's year
    - rename definition of __CPU_SBI_H__ to __ASM_RISCV_SBI_H__
    - fix indentations
    - change an author of the commit
---
Changes in V2:
    - add an explanatory comment about sbi_console_putchar() function.
    - order the files alphabetically in Makefile
---
 xen/arch/riscv/Makefile          |  1 +
 xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
 xen/arch/riscv/sbi.c             | 45 ++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/sbi.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 5a67a3f493..fd916e1004 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
+obj-y += sbi.o
 obj-y += setup.o
 
 $(TARGET): $(TARGET)-syms
diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
new file mode 100644
index 0000000000..0e6820a4ed
--- /dev/null
+++ b/xen/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: (GPL-2.0-or-later) */
+/*
+ * Copyright (c) 2021-2023 Vates SAS.
+ *
+ * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Taken/modified from Xvisor project with the following copyright:
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ */
+
+#ifndef __ASM_RISCV_SBI_H__
+#define __ASM_RISCV_SBI_H__
+
+#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
+
+struct sbiret {
+    long error;
+    long value;
+};
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
+                        unsigned long arg0, unsigned long arg1,
+                        unsigned long arg2, unsigned long arg3,
+                        unsigned long arg4, unsigned long arg5);
+
+/**
+ * Writes given character to the console device.
+ *
+ * @param ch The data to be written to the console.
+ */
+void sbi_console_putchar(int ch);
+
+#endif /* __ASM_RISCV_SBI_H__ */
diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
new file mode 100644
index 0000000000..dc0eb44bc6
--- /dev/null
+++ b/xen/arch/riscv/sbi.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Taken and modified from the xvisor project with the copyright Copyright (c)
+ * 2019 Western Digital Corporation or its affiliates and author Anup Patel
+ * (anup.patel@wdc.com).
+ *
+ * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2021-2023 Vates SAS.
+ */
+
+#include <xen/errno.h>
+#include <asm/sbi.h>
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
+                        unsigned long arg0, unsigned long arg1,
+                        unsigned long arg2, unsigned long arg3,
+                        unsigned long arg4, unsigned long arg5)
+{
+    struct sbiret ret;
+
+    register unsigned long a0 asm ("a0") = arg0;
+    register unsigned long a1 asm ("a1") = arg1;
+    register unsigned long a2 asm ("a2") = arg2;
+    register unsigned long a3 asm ("a3") = arg3;
+    register unsigned long a4 asm ("a4") = arg4;
+    register unsigned long a5 asm ("a5") = arg5;
+    register unsigned long a6 asm ("a6") = fid;
+    register unsigned long a7 asm ("a7") = ext;
+
+    asm volatile ("ecall"
+              : "+r" (a0), "+r" (a1)
+              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
+              : "memory");
+    ret.error = a0;
+    ret.value = a1;
+
+    return ret;
+}
+
+void sbi_console_putchar(int ch)
+{
+    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:39:46 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v4 1/4] xen/riscv: introduce asm/types.h header file
Date: Mon, 16 Jan 2023 16:39:29 +0200
Message-Id: <2ce57f95f8445a4880e0992668a48ffe7c2f9732.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
    - Clean up types in <asm/types.h>, retaining only the necessary ones.
      The following types were removed as they are already defined in
      <xen/types.h>: {__|}{u|s}{8|16|32|64}
---
Changes in V3:
    - Nothing changed
---
Changes in V2:
    - Remove unneeded now types from <asm/types.h>
---
 xen/arch/riscv/include/asm/types.h | 43 ++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/types.h

diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
new file mode 100644
index 0000000000..9e55bcf776
--- /dev/null
+++ b/xen/arch/riscv/include/asm/types.h
@@ -0,0 +1,43 @@
+#ifndef __RISCV_TYPES_H__
+#define __RISCV_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+#if defined(CONFIG_RISCV_32)
+typedef unsigned long long u64;
+typedef unsigned int u32;
+typedef u32 vaddr_t;
+#define PRIvaddr PRIx32
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0ULL)
+#define PRIpaddr "016llx"
+typedef u32 register_t;
+#define PRIregister "x"
+#elif defined (CONFIG_RISCV_64)
+typedef unsigned long u64;
+typedef u64 vaddr_t;
+#define PRIvaddr PRIx64
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "016lx"
+typedef u64 register_t;
+#define PRIregister "lx"
+#endif
+
+#if defined(__SIZE_TYPE__)
+typedef __SIZE_TYPE__ size_t;
+#else
+typedef unsigned long size_t;
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_TYPES_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:39:48 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v4 3/4] xen/riscv: introduce early_printk basic stuff
Date: Mon, 16 Jan 2023 16:39:31 +0200
Message-Id: <915bd184c6648a1a3bf0ac6a79b5274972bb33dd.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Because printk() relies on a serial driver (such as the ns16550
driver), and drivers require working virtual memory (ioremap()),
there is no print functionality early in the Xen boot process.

The patch introduces the basic early_printk functionality, which
is enough to print "hello from C environment".

Originally early_printk.{c,h} was introduced by Bobby Eshleman
(https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
but some of the functionality was changed: the early_printk()
function differs from the original because common code is not
built yet, so there is no vscnprintf().

This commit adds an early printk implementation built on the
putchar SBI call.

As sbi_console_putchar() is already planned for deprecation, it is
used only temporarily here and will be removed or reworked once a
real UART driver is ready.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
    - Remove "depends on RISCV*" from Kconfig.debug as it is located in the
      arch-specific folder, so the RISCV configs are enabled by default.
    - Add "ifdef __riscv_cmodel_medany" to make sure that PC-relative addressing
      is used, as the early_*() functions can be called from head.S with the MMU
      off and before relocation (if relocation is needed at all for RISC-V).
    - fix code style
---
Changes in V3:
    - reorder headers in alphabetical order
    - merge changes related to start_xen() function from "[PATCH v2 7/8]
      xen/riscv: print hello message from C env" to this patch
    - remove unneeded parentheses in definition of STACK_SIZE
---
Changes in V2:
    - introduce STACK_SIZE define.
    - use consistent padding between instruction mnemonic and operand(s)
---
---
 xen/arch/riscv/Kconfig.debug              |  6 +++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 45 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
 xen/arch/riscv/setup.c                    |  6 ++-
 5 files changed, 69 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..e139e44873 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,6 @@
+config EARLY_PRINTK
+    bool "Enable early printk"
+    default DEBUG
+    help
+
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index fd916e1004..1a4f1a6015 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..6bc29a1942
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/early_printk.h>
+#include <asm/sbi.h>
+
+/*
+ * early_*() can be called from head.S with MMU-off.
+ *
+ * The following requirements should be honoured for early_*() to
+ * work correctly:
+ *    It should use PC-relative addressing for accessing symbols.
+ *    To achieve that GCC cmodel=medany should be used.
+ */
+#ifndef __riscv_cmodel_medany
+#error "early_*() can be called from head.S before relocation, so it should not use absolute addressing."
+#endif
+
+/*
+ * TODO:
+ *   sbi_console_putchar is already planned for deprecation
+ *   so it should be reworked to use UART directly.
+ */
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if ( *s == '\n' )
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while ( *str )
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {}
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 13e24e2fe1..9c9412152a 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,13 +1,17 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
 void __init noreturn start_xen(void)
 {
-    for ( ;; )
+    early_printk("Hello from C env\n");
+
+    for ( ; ; )
         asm volatile ("wfi");
 
     unreachable();
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:39:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478691.742085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQeK-000873-BN; Mon, 16 Jan 2023 14:39:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478691.742085; Mon, 16 Jan 2023 14:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQeK-00085M-5G; Mon, 16 Jan 2023 14:39:48 +0000
Received: by outflank-mailman (input) for mailman id 478691;
 Mon, 16 Jan 2023 14:39:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fQBk=5N=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pHQeI-0007HD-Py
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:39:46 +0000
Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com
 [2a00:1450:4864:20::435])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e641c21-95ab-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 15:39:44 +0100 (CET)
Received: by mail-wr1-x435.google.com with SMTP id r2so27680660wrv.7
 for <xen-devel@lists.xenproject.org>; Mon, 16 Jan 2023 06:39:44 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 m13-20020adfe94d000000b002714b3d2348sm26543406wrn.25.2023.01.16.06.39.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 16 Jan 2023 06:39:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e641c21-95ab-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ctVbfTUMTOLN4KM620LAPIxTZt2tk9IeJCHpD6YNn4Y=;
        b=au2GRLGAWzB4NTiOMuF6peJns9v9FtsJ7/vNiy2ohSvXtl/YXaQaHxIte82snMoQeT
         q/0DzAUSwsggzsN5BxQIADDMQrbkICIWQnoziFZDZBIPPH7ZHDyXzRThtDnZarSLIfZ6
         3w4xkvX1nQAmi47lGm5re5B72ZP+dz9KwYRaMP/2HfHwHdlx+GcsP3+R2b+tCCQ6zutW
         EfP3e/qS8YXhQJ19wItDmI8U/WAzyHvBkH5cKzls4pKQWVgB1mdxAf7fN2waKLo1z+Le
         ySIoywsxmov7eLs/Wkp1HwlGn0J5SppWvNmIFS37K///E11mTfWAaxryHPXmvhxAr0bu
         7ctg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ctVbfTUMTOLN4KM620LAPIxTZt2tk9IeJCHpD6YNn4Y=;
        b=gtLiaK2FKlVsMmaP+bRPPiIB64SyyHUk1QfBMn9QfFd+YcGxaF2fPLYPb0pZzWA7Mz
         Qi3X93IateaM1wHvA2PmVxE3eQmwZIZ7Y9gDF9JVVOCQz7CsgHDiTKHQw1SNYGGJWAxC
         xLdFSbyIBmDkZ2Es8T/N7ZNhBXGM7Ji1zpFq800S2fuGuLLLB92DveZQkRwb1Bt2ZwnK
         FWpyKhjwzQJDYEPEslT/cfks//vk0QkNSiCh+1zzpDO1GGAyC5n4NAEGyyfGPWCGLrjr
         hFcem1GVh1ySkMCpVtcKQn1HYOlZXLDCv56z/4LeMmKkei3WRAe8Wg/gED4JHvmDh8ER
         yJNg==
X-Gm-Message-State: AFqh2koNO1GlsmzpaAQIrcL8XgVdXEpN8I+ObEqJbNRAPnE3QKsCxdb+
	gktvLGDKQ3WFCCrXVM16+3pxtlUVbraolA==
X-Google-Smtp-Source: AMrXdXv30T+VZFurwDb3EsXFT8D5iQtuvT08anQ4bGLUGXwDowZg4hrvyQx4T+1SlvSSXi6sWKCO+A==
X-Received: by 2002:a5d:51c2:0:b0:2bd:e007:c54f with SMTP id n2-20020a5d51c2000000b002bde007c54fmr9822705wrv.57.1673879984230;
        Mon, 16 Jan 2023 06:39:44 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v4 4/4] automation: add RISC-V smoke test
Date: Mon, 16 Jan 2023 16:39:32 +0200
Message-Id: <216c21039a5552a329178b4376ff53ba16cf6104.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the message 'Hello from C env' is present in the
log file, to be sure that the stack is set up and the C part of early
printk is working.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in V4:
    - Nothing changed
---
Changes in V3:
    - Nothing changed
    - All mentioned comments by Stefano in Xen mailing list will be
      fixed as a separate patch out of this patch series.
---
 automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index afd80adfe1..64f47a0ab9 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -54,6 +54,19 @@
   tags:
     - x86_64
 
+.qemu-riscv64:
+  extends: .test-jobs-common
+  variables:
+    CONTAINER: archlinux:riscv64
+    LOGFILE: qemu-smoke-riscv64.log
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - x86_64
+
 .yocto-test:
   extends: .test-jobs-common
   script:
@@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
   needs:
     - debian-unstable-clang-debug
 
+qemu-smoke-riscv64-gcc:
+  extends: .qemu-riscv64
+  script:
+    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - riscv64-cross-gcc
+
 # Yocto test jobs
 yocto-qemuarm64:
   extends: .yocto-test-arm64
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:44:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:44:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478728.742104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQj2-0003E4-5J; Mon, 16 Jan 2023 14:44:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478728.742104; Mon, 16 Jan 2023 14:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQj2-0003Dx-1l; Mon, 16 Jan 2023 14:44:40 +0000
Received: by outflank-mailman (input) for mailman id 478728;
 Mon, 16 Jan 2023 14:44:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fQBk=5N=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pHQj1-0003Dr-5k
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:44:39 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d183bf8-95ac-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 15:44:38 +0100 (CET)
Received: by mail-wr1-x431.google.com with SMTP id b5so6528490wrn.0
 for <xen-devel@lists.xenproject.org>; Mon, 16 Jan 2023 06:44:38 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 k9-20020a5d6d49000000b002bc8130cca7sm18509428wri.23.2023.01.16.06.44.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 16 Jan 2023 06:44:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d183bf8-95ac-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=GVGwd9fnauWsTU3LaxuXn11iTl5K2eA0fAqZiPLtppo=;
        b=FfiQ4uxhBPeMazyezOxRThT4KB8yW1y++mgxp0ggJ7RAHTjaPo+JB1P4ZhzMnUN1oG
         Q1NTNtYXOTZBt1lY8ErPzecOZHHBIP9C12sVjO03vivkW7bDdIXO5jsap8SGpeoQlJWx
         65/jalvPTSm85B4dJNthI9nbZAU0DBhpxU8+vn/kMFP42yJ+2cOcjQXXlcflS8BkhTTD
         2exQgi8ZrMmXbD0b4AANQFKdBDgM1oUjXb08uu/eOsUDpK66SvTlqevUeZ0C5MVUDbgy
         BNI+lgI/EZr5xzAXElaTnns353KBggWFLCUrvGeg/VzB+EvjhVn8tygqEkRScWVdlRhH
         2m+g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=GVGwd9fnauWsTU3LaxuXn11iTl5K2eA0fAqZiPLtppo=;
        b=lQT/g/ilzuCSKPj5FNP82nECTD6Lzl0S61P81EgRIPn22LIqOslOwEyshP/4amzyZK
         lSLf3OhXGvrB/AL6kavKQzDv6gU1AXjqqVXhqbKlil9nhqhbQfrVqoQxc/7EIkpj2UHy
         HYS0kCydqB0JNAptOy8u+LKzBuNhsGfPlWDlQDY/xOOqsHXtQftqrsP3qLHzQyVR6B+K
         U1B5VPXI39FMZr2ikuBoE4SjKSCPdzydadRs9e1EWNyyiAMjJZyqPqrx+3TqOh+kpN1O
         ALrkg1o6V0IHDHHdMeTwlecvtI8pnV1xZFn6Jv2YhMahc+e14q3U2OFXS6kZahmi/35/
         NHgw==
X-Gm-Message-State: AFqh2kqgHorYJ0EouHLZkDuAN0rUFkDkA/0gd5EoY0RLihqSdU0TuM0v
	QYe3sw527n/ObLD18AGu2dOf/T2XMYLyoAzl
X-Google-Smtp-Source: AMrXdXuLxDnTHwA/TkpOTCItdQ22aymV2PP0J80YcNyTXo8L3plkyrEG3UA5Yp3R59mN1PlQsN/6zw==
X-Received: by 2002:a5d:640d:0:b0:2bd:e5d5:78ca with SMTP id z13-20020a5d640d000000b002bde5d578camr9083603wru.26.1673880277332;
        Mon, 16 Jan 2023 06:44:37 -0800 (PST)
Message-ID: <6ce931ffdd8c210be9f2c4a682fa1a4ecd773417.camel@gmail.com>
Subject: Re: [PATCH v4 0/4] The patch series introduces the following:
From: Oleksii <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Doug
 Goldstein <cardoe@cardoe.com>
Date: Mon, 16 Jan 2023 16:44:36 +0200
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

Hi all,

Something went wrong with the cover letter. I do not know if I have to
make a new patch series, but the cover letter should be the following:


Subject: [PATCH v4 0/4] Basic early_printk and smoke test
implementation

The patch series introduces the following:
- the minimal set of headers and changes inside them.
- SBI (RISC-V Supervisor Binary Interface) things necessary for basic
  early_printk implementation.
- things needed to set up the stack.
- early_printk() function to print only strings.
- RISC-V smoke test which checks that the "Hello from C env" message is
  present in smoke.serial

---
Changes in V4:
    - Patches "xen/riscv: introduce dummy asm/init.h" and "xen/riscv:
introduce
      stack stuff" were removed from the patch series as they were
merged separately
      into staging.
    - Remove "depends on RISCV*" from Kconfig.debug as Kconfig.debug is
located
      in arch specific folder.
    - fix code style.
    - Add "ifdef __riscv_cmodel_medany" to early_printk.c.
---
Changes in V3:
    - Most of "[PATCH v2 7/8] xen/riscv: print hello message from C
env"
      was merged with [PATCH v2 3/6] xen/riscv: introduce stack stuff.
    - "[PATCH v2 7/8] xen/riscv: print hello message from C env" was
      merged with "[PATCH v2 6/8] xen/riscv: introduce early_printk
basic
      stuff".
    - "[PATCH v2 5/8] xen/include: include <asm/types.h> in
      <xen/early_printk.h>" was removed as it has been already merged
to
      mainline staging.
    - code style fixes.
---
Changes in V2:
    - update the patches' commit messages according to the mailing
      list comments
    - Remove unneeded types in <asm/types.h>
    - Introduce definition of STACK_SIZE
    - order the files alphabetically in Makefile
    - Add license to early_printk.c
    - Add RISCV_32 dependency to config EARLY_PRINTK in Kconfig.debug
    - Move dockerfile changes to a separate config and send them as a
      separate patch to the mailing list.
    - Update test.yaml to wire up smoke test
---


Bobby Eshleman (1):
  xen/riscv: introduce sbi call to putchar to console

Oleksii Kurochko (3):
  xen/riscv: introduce asm/types.h header file
  xen/riscv: introduce early_printk basic stuff
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 ++++++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 ++++++++++
 xen/arch/riscv/Kconfig.debug              |  6 +++
 xen/arch/riscv/Makefile                   |  2 +
 xen/arch/riscv/early_printk.c             | 45 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
 xen/arch/riscv/include/asm/sbi.h          | 34 +++++++++++++++++
 xen/arch/riscv/include/asm/types.h        | 43 ++++++++++++++++++++++
 xen/arch/riscv/sbi.c                      | 45 +++++++++++++++++++++++
 xen/arch/riscv/setup.c                    |  6 ++-
 10 files changed, 232 insertions(+), 1 deletion(-)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/include/asm/types.h
 create mode 100644 xen/arch/riscv/sbi.c

-- 
2.39.0

On Mon, 2023-01-16 at 16:39 +0200, Oleksii Kurochko wrote:
> ---
> Changes in V4:
>     - Patches "xen/riscv: introduce dummy asm/init.h" and "xen/riscv:
>       introduce stack stuff" were removed from the patch series as
>       they were merged separately into staging.
>     - Remove "depends on RISCV*" from Kconfig.debug as Kconfig.debug
>       is located in arch specific folder.
>     - fix code style.
>     - Add "ifdef __riscv_cmodel_medany" to early_printk.c.
> ---
> Changes in V3:
>     - Most of "[PATCH v2 7/8] xen/riscv: print hello message from C
>       env" was merged with [PATCH v2 3/6] xen/riscv: introduce stack
>       stuff.
>     - "[PATCH v2 7/8] xen/riscv: print hello message from C env" was
>       merged with "[PATCH v2 6/8] xen/riscv: introduce early_printk
>       basic stuff".
>     - "[PATCH v2 5/8] xen/include: include <asm/types.h> in
>       <xen/early_printk.h>" was removed as it has been already merged
>       to mainline staging.
>     - code style fixes.
> ---
> Changes in V2:
>     - update commit patches commit messages according to the mailing
>       list comments
>     - Remove unneeded types in <asm/types.h>
>     - Introduce definition of STACK_SIZE
>     - order the files alphabetically in Makefile
>     - Add license to early_printk.c
>     - Add RISCV_32 dependecy to config EARLY_PRINTK in Kconfig.debug
>     - Move dockerfile changes to separate config and sent them as
>       separate patch to mailing list.
>     - Update test.yaml to wire up smoke test
> ---
> 
> 
> Bobby Eshleman (1):
>   xen/riscv: introduce sbi call to putchar to console
> 
> Oleksii Kurochko (3):
>   xen/riscv: introduce asm/types.h header file
>   xen/riscv: introduce early_printk basic stuff
>   automation: add RISC-V smoke test
> 
>  automation/gitlab-ci/test.yaml            | 20 ++++++++++
>  automation/scripts/qemu-smoke-riscv64.sh  | 20 ++++++++++
>  xen/arch/riscv/Kconfig.debug              |  6 +++
>  xen/arch/riscv/Makefile                   |  2 +
>  xen/arch/riscv/early_printk.c             | 45 +++++++++++++++++++++++
>  xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
>  xen/arch/riscv/include/asm/sbi.h          | 34 +++++++++++++++++
>  xen/arch/riscv/include/asm/types.h        | 43 ++++++++++++++++++++++
>  xen/arch/riscv/sbi.c                      | 45 +++++++++++++++++++++++
>  xen/arch/riscv/setup.c                    |  6 ++-
>  10 files changed, 232 insertions(+), 1 deletion(-)
>  create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
>  create mode 100644 xen/arch/riscv/early_printk.c
>  create mode 100644 xen/arch/riscv/include/asm/early_printk.h
>  create mode 100644 xen/arch/riscv/include/asm/sbi.h
>  create mode 100644 xen/arch/riscv/include/asm/types.h
>  create mode 100644 xen/arch/riscv/sbi.c
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:45:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:45:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478741.742114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQjZ-0003k5-DY; Mon, 16 Jan 2023 14:45:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478741.742114; Mon, 16 Jan 2023 14:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQjZ-0003jr-AX; Mon, 16 Jan 2023 14:45:13 +0000
Received: by outflank-mailman (input) for mailman id 478741;
 Mon, 16 Jan 2023 14:45:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wx/b=5N=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHQfj-0006p7-8c
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:41:15 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2085.outbound.protection.outlook.com [40.107.94.85])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d20b8972-95ab-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 15:41:12 +0100 (CET)
Received: from BN6PR17CA0057.namprd17.prod.outlook.com (2603:10b6:405:75::46)
 by BL3PR12MB6402.namprd12.prod.outlook.com (2603:10b6:208:3b2::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 14:41:09 +0000
Received: from BN8NAM11FT092.eop-nam11.prod.protection.outlook.com
 (2603:10b6:405:75:cafe::dd) by BN6PR17CA0057.outlook.office365.com
 (2603:10b6:405:75::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.20 via Frontend
 Transport; Mon, 16 Jan 2023 14:41:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT092.mail.protection.outlook.com (10.13.176.180) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Mon, 16 Jan 2023 14:41:09 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 08:41:08 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 08:41:07 -0600
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Mon, 16 Jan 2023 08:41:07 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d20b8972-95ab-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L53WMuAg/YNQ1/alwxGdi6ZP12q2W0fPqQFJoSManfCAw3mc7J8D0bFnbOJNsuCfMw2vAswkH/kejETGHgazCHmkZyVeuhCUwbqeg8G9HBrpKT/+nXhE2PCUlSq3TOgr0WvrMdCpqvoQhpfcDKNHo6oOZLHFGRJF/WOizesjzDbqb0ZcGLBiBbef5XS4+UeR5Kgeg7raXJlsEttvS8Hh6/TZ4wBczljuuq7ivGCHO/HEbB3SWg7tlShIdIhyNzR6b6nOfs/55BDefJ/IDNpT97JHvf+I4I2SMbRvF3rfRkvqBQQooQA62TSF1AY6UumBm5Yrv04+B+sbIuUQdTxHtw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0JHRey8jQegVPl/scg0qC1Q1cBu4AxiKMGjEWCu71uc=;
 b=fk21255Pq3+nt8CKWDh+naXhovWsn/7W2a1n+7tEY0QI0FoQM80y8c5ScK5/iKBBzcBsfwc/h+0RGF9a/Hy24uTM43J2kaReBSxCVxCERmYZ2XhH5uDhn3ZtMbQuoJlXS9yay4Imf/AUIe3jNdGiodAY+cElCGXy+jgSY52TtgqdsrdT4vYGV7/G/8EDQZEh6LS4pJYFocy+IRH1m4ZM6/I4SKb4ZAiEwH2c8luZs5gyoW6R9HAAaEaQjIvyzfC+1P6Xp2E8FCwpPHsZgcnsnoGQOVI2b5SMnnVT/SXJAPYzhKohYgCjIeE4lQDVLplCbF9/CS9dOZnSZZYa8PJxow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0JHRey8jQegVPl/scg0qC1Q1cBu4AxiKMGjEWCu71uc=;
 b=FSFassEGCaxEreBnzXEGo9PdyRwx8aeBEGnzBPNrp5D2T9Vj0RqupvywN0g1P//X4ik1zEkz+iW2G79JHeJ++7pTi0Djnvh8u6/UkTByDO1U0NBTklYyGGYRIi7ANVgYhKM1jXT07UZ/qvdlerKj1qEPEIvTRyVFNgRJc0cMcNc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: Harden setup_frametable_mappings
Date: Mon, 16 Jan 2023 15:41:05 +0100
Message-ID: <20230116144106.12544-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT092:EE_|BL3PR12MB6402:EE_
X-MS-Office365-Filtering-Correlation-Id: 952691d1-0517-421d-80d2-08daf7cfb4f2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	4WJd9MhNBHkYqLa9OK5OaYoSytrdIDf18prXwvLBNDWwSVrIqSiVJXMfUXWQw7IH1yKQfJPsXzGkWlhGKh4GEhzTLfWkccux47GSiMozU2/CadOkLBc42ZyvtDODXz2RNZKXURlqBq7p8U5vFXgEuAvd99SNu7N8l0wq0gR2D7DsnvtyoBsmgRZ9m2U9JyU20/PEjrVLKv473a+4rsIy+4qbgIHaQL36wandES15yK9XdquPEnnRSrWn7dORdKMGATPcEbhLsxf04xiIfX0fnymZrkwR3ewEyUfKjOKnGbDQ/LFJMd7k8tDcxlcZuLyQ5wjGwaz2EdPeLhb2pgBotxUZ8DHJp5y/VGz91nQipQIkoyURuhucRD65naZnL0Y0Wz15pr9AFK2EbPLT0F+ajep9gCmmqvbypxZ3Gbl6EJRsaMTsxAwMnnfhPaUvrXUY95RpXef0Gm3RIDjyzA4ukFAQEwW9aD7DCB+OFtkn2cftzAiUKsjXMxNi1WWrs6q3o1DzJYENifGh++qkyVCsBCn3f+nFrjbFsaN/AAOaG7GzDUhLM33H3vX0BtiqlTscCAUB4tLeSCF474yYlsUrmT+MNbCbK1ltEYyFjrdJ9VKP7QCqpXMXWdbinyJf1UBCcQ8p3GgXXRqDJoQlcs14eZJamcTRDPucc9Qw2A0BgOZTxF2Oe4UIkNW1UaDwQbjn4sM2bn0OvyEmZ7D4LeDd8o5lsfNQj5idBJI/BVA86Do=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(39860400002)(376002)(136003)(396003)(451199015)(40470700004)(46966006)(36840700001)(82310400005)(70206006)(426003)(70586007)(41300700001)(2616005)(47076005)(26005)(186003)(8676002)(4326008)(6916009)(86362001)(36756003)(83380400001)(36860700001)(5660300002)(82740400003)(336012)(8936002)(54906003)(478600001)(316002)(40480700001)(1076003)(81166007)(2906002)(356005)(40460700003)(44832011)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 14:41:09.1179
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 952691d1-0517-421d-80d2-08daf7cfb4f2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT092.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL3PR12MB6402

The amount of supported physical memory depends on the frametable size
and the number of struct page_info entries that can fit into it. Define
a macro PAGE_INFO_SIZE storing the current size of struct page_info
(i.e. 56B for arm64 and 32B for arm32) and add a sanity check in
setup_frametable_mappings to get notified whenever the size of the
structure changes. Also panic if the calculated frametable_size
exceeds the limit defined by the FRAMETABLE_SIZE macro.

Update the comments regarding the frametable in asm/config.h and take
the opportunity to remove unused macro FRAMETABLE_VIRT_END on arm32.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/include/asm/config.h |  5 ++---
 xen/arch/arm/include/asm/mm.h     | 11 +++++++++++
 xen/arch/arm/mm.c                 |  5 +++++
 3 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 16213c8b677f..d8f99776986f 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -82,7 +82,7 @@
  * ARM32 layout:
  *   0  -  12M   <COMMON>
  *
- *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
+ *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
  * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
  *                    space
  *
@@ -95,7 +95,7 @@
  *
  *   1G -   2G   VMAP: ioremap and early_ioremap
  *
- *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
+ *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
  *
  * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
  *  Unused
@@ -127,7 +127,6 @@
 #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
 #define FRAMETABLE_SIZE        MB(128-32)
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
-#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
 #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
 #define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 68adcac9fa8d..23dec574eb31 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -26,6 +26,17 @@
  */
 #define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
 
+/*
+ * The size of struct page_info impacts the number of entries that can fit
+ * into the frametable area and thus it affects the amount of physical memory
+ * we claim to support. Define PAGE_INFO_SIZE to be used for sanity checking.
+ */
+#ifdef CONFIG_ARM_64
+#define PAGE_INFO_SIZE 56
+#else
+#define PAGE_INFO_SIZE 32
+#endif
+
 struct page_info
 {
     /* Each frame can be threaded onto a doubly-linked list. */
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0fc6f2992dd1..a8c28fa5b768 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -676,6 +676,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
     int rc;
 
+    BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
+
+    if ( frametable_size > FRAMETABLE_SIZE )
+        panic("RAM size is too big to fit in a frametable area\n");
+
     frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
     /* Round up to 2M or 32M boundary, as appropriate. */
     frametable_size = ROUNDUP(frametable_size, mapping_size);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:45:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:45:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478742.742120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQjZ-0003mL-P3; Mon, 16 Jan 2023 14:45:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478742.742120; Mon, 16 Jan 2023 14:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQjZ-0003lH-HQ; Mon, 16 Jan 2023 14:45:13 +0000
Received: by outflank-mailman (input) for mailman id 478742;
 Mon, 16 Jan 2023 14:45:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wx/b=5N=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHQfi-0006p7-8L
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:41:14 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2081.outbound.protection.outlook.com [40.107.223.81])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d2a4ad39-95ab-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 15:41:13 +0100 (CET)
Received: from BN9PR03CA0965.namprd03.prod.outlook.com (2603:10b6:408:109::10)
 by CH2PR12MB4939.namprd12.prod.outlook.com (2603:10b6:610:61::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Mon, 16 Jan
 2023 14:41:10 +0000
Received: from BN8NAM11FT004.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:109:cafe::89) by BN9PR03CA0965.outlook.office365.com
 (2603:10b6:408:109::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Mon, 16 Jan 2023 14:41:10 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT004.mail.protection.outlook.com (10.13.176.164) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Mon, 16 Jan 2023 14:41:10 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 16 Jan
 2023 08:41:10 -0600
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Mon, 16 Jan 2023 08:41:08 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2a4ad39-95ab-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jj5dcNfCoHrwB3NpdKP16/RXaKZSbZsTwzSIaUO8pDTJAy5fULhNry6ZxlUdWHIZwOiUp5jtOBnChgWV5S2IELv6/dtJvOCdsoYF+aISer6jpE4d+iHWuKZl7gv3ZO2KUj737fasrTJ9zN0jRW3cgtNWQ1PPrYo+bvI8lh8wXC1w64acNVY83jvxzo5wv+Sdl4upnLnCp5cbIIYAApaofq0sqQIgSZutrRlPQlF1hcf/WN75aEnUBkaMGREfLffBEI6DNx9aelXyBhbPSKWwpd+ZLECoNfJoUpKlFFxDZAarP+FvVs/J8psmhU80F/8Yw0Vh01QC8WByBstuLPlE7A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=reRyL0p8bwFUK45PxPrMWtUJPOIAApiXXP6NU2zUKZk=;
 b=NDa36JCpVNps3kC7a2lPu/TtikniYWpZIirJGV5g4dM87pfodZoh5KahMzwbhshBgJQFWmQHcwvskcnSRe2a8qeeMh+ppmdXpz0DBUtrRZTMwkuX4NlFOtoJv6CfiRprxYz2zJv5X9kZ1sHjqz21TVz+X8n/KgF60ie9kGNj2pQcWNUI1R5BYEdUQYhjk3fONNuwHTXASBY2UMq1yHR76mPoBkV7a3T0Mz4OIME4sDnKSCgsw65ajuEWbjtp1BEdH7ztYFYIziHoiDxMSo1wPEa5E0dOtE7gAsTBM0BU2sgqIFGqz3U+EsuFjsEZLg/B5mg/VoExI4M/9joXgTV8aw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=reRyL0p8bwFUK45PxPrMWtUJPOIAApiXXP6NU2zUKZk=;
 b=r/FXYotoEZMM8Hk3dT+a2pbaYWTE1a6YatLkqNS+W/P4rp/sGSamWyEP+0e3AERXPOUCRVP0QEl0ai5nkq7+vEtzt/k9EAYwdDloAOyxFMBAQOYO9I6CmqObwrP0u/b4gsp/4CFSE2kJqtXjWpMkqgHwgtVkCEPZEgjplHANQWA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Date: Mon, 16 Jan 2023 15:41:06 +0100
Message-ID: <20230116144106.12544-2-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230116144106.12544-1-michal.orzel@amd.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT004:EE_|CH2PR12MB4939:EE_
X-MS-Office365-Filtering-Correlation-Id: 8bdd33b8-dbf7-4705-d35f-08daf7cfb5b8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 14:41:10.4801
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8bdd33b8-dbf7-4705-d35f-08daf7cfb5b8
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT004.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4939

The direct mapped area occupies L0 slots from 256 to 265 (i.e. 10 slots),
resulting in 5TB (512GB * 10) of virtual address space. However, due to
incorrect slot subtraction (we take only 9 slots into account), we set
DIRECTMAP_SIZE to 4.5TB instead. Fix it.

Fixes: 5263507b1b4a ("xen: arm: Use a direct mapping of RAM on arm64")
Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/include/asm/config.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 0fefed1b8aa9..16213c8b677f 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -157,7 +157,7 @@
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
 
 #define DIRECTMAP_VIRT_START   SLOT0(256)
-#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
+#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))
 #define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
 
 #define XENHEAP_VIRT_START     directmap_virt_start
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:47:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478772.742137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQlO-000513-6D; Mon, 16 Jan 2023 14:47:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478772.742137; Mon, 16 Jan 2023 14:47:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQlO-00050w-3O; Mon, 16 Jan 2023 14:47:06 +0000
Received: by outflank-mailman (input) for mailman id 478772;
 Mon, 16 Jan 2023 14:47:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHQlM-00050j-Td
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:47:04 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2057.outbound.protection.outlook.com [40.107.8.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a31cef53-95ac-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 15:47:02 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8542.eurprd04.prod.outlook.com (2603:10a6:102:215::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 14:47:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 14:47:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a31cef53-95ac-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ByhlN1XZJjaXb9Ii3Bbd0GoCDb+UdYJWQcsUrB2m0Y1gpA/13WrnHDvQfWuGjD44TaJvAhhrX9mMxeUWR18cTIpgdjRPuy4HUWykxO0aP4BosTb62knpeEWL87BNE3CbjBMKpq5gQZNh93DPfFexHaLyKeWbvlzHBCp6cXSbkeCuuwCnAseQlCuIj7zEmif3YU20jpGpnttk3TxjixV4Lc6CyawPQEljxsKNNbNNfT7lPRfh2qHm0Y76YRPpw998SVn5VjfPQ4SuWZ3yzJsyu+CNJpEyPvhOEHNPEUm/xi+6HqzQnn9um+MeMc004HIT0pj7hItstVJBqvuqINE//w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TMrOgWomrtHAkFNbnbEUN0U7wFQHsLA97H+lySkknj0=;
 b=HHhZ1r8HFmV3SAt5prRb+wv/CQ7DrQYZpKKDkmeIDxKyg/hY9FvG9UkRY3XdqZLOlDW5TtNM6obSNCHCcjIxCBvvNrld20HSkq9nz4/IrcBA2P+povnwFQ/7+8uIiaKsanX7XX0Ae44t24aLq0PDbKT8xALMZzqn1KzxPTwDbbwVtwDfdebVRCvjGUxZF31toP5KfFZpe05hKzjgwwELlbMSGIl3MIKfGeH9kOCkUTciHsGNNzyRg3k+eCPtMp0+xdzZkZ8CWbMdaBP82emhCJJGJCiZXxyCsBgxB2zdTPIucyBxIelPZKAES8POerqay0HG763o5R5DzxIqLDg4tQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TMrOgWomrtHAkFNbnbEUN0U7wFQHsLA97H+lySkknj0=;
 b=E/P/+PVkcxSOwYv9tCCZ824Z/YL+g4U7F23e9cPKZEtR80pnhORvIqjIbhmUO1vpDX8IAf+xLRyVL6JDs9AwuqqdyxMTPe1to3qEj1fbNR3v9VWtVWi/DubxTtihYATXczSUHQOD3QGDQOZ/F8sWFn0Wns4XUFk5JCMee8uc3Ve3ktdBBD0dBtlfqtnFpKqykjqmQWIeagbUgwtmM5nIIO+AepWlbVl7+l9s9SFOLRgu2XO/5b9x06mDRBFE3gzJJhsiAhnt2q2mUWsPhrynMj5OLtupbi5oe/9zAaZR2/6vbUxOsnet0ySpohr/XGqOyFG2qtNgfe72NESPHeXeXg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a728fa61-eb33-f348-ca72-caec45154889@suse.com>
Date: Mon, 16 Jan 2023 15:46:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] x86/ucode/AMD: apply the patch early on every logical
 thread
Content-Language: en-US
To: Sergey Dyasli <sergey.dyasli@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230111142329.4379-1-sergey.dyasli@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230111142329.4379-1-sergey.dyasli@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0130.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8542:EE_
X-MS-Office365-Filtering-Correlation-Id: 8ed70e8d-4fb6-4ac3-623f-08daf7d08661
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ed70e8d-4fb6-4ac3-623f-08daf7d08661
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 14:47:00.7227
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3Lr7Q7FPJBmYWCi/lOULh1YdaiKZEKAsFw2bnlScJJsZHAMOwThdjbyTLVKWiKn2oPYjGl4FnX3rLVEdD4FQJQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8542

On 11.01.2023 15:23, Sergey Dyasli wrote:
> --- a/xen/arch/x86/cpu/microcode/amd.c
> +++ b/xen/arch/x86/cpu/microcode/amd.c
> @@ -176,8 +176,13 @@ static enum microcode_match_result compare_revisions(
>      if ( new_rev > old_rev )
>          return NEW_UCODE;
>  
> -    if ( opt_ucode_allow_same && new_rev == old_rev )
> -        return NEW_UCODE;
> +    if ( new_rev == old_rev )
> +    {
> +        if ( opt_ucode_allow_same )
> +            return NEW_UCODE;
> +        else
> +            return SAME_UCODE;
> +    }

I find this misleading: "same" should not depend on the command line
option. In fact the command line option should affect only the cases
where ucode is actually to be loaded; it should not affect cases where
the check is done merely to know whether the cache needs updating.

With that e.g. microcode_update_helper() should then also be adjusted:
It shouldn't say merely "newer" when "allow-same" is in effect.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:55:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:55:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478778.742147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQtb-0006VM-1N; Mon, 16 Jan 2023 14:55:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478778.742147; Mon, 16 Jan 2023 14:55:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQta-0006VF-Uv; Mon, 16 Jan 2023 14:55:34 +0000
Received: by outflank-mailman (input) for mailman id 478778;
 Mon, 16 Jan 2023 14:55:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o6RV=5N=redhat.com=imammedo@srs-se1.protection.inumbo.net>)
 id 1pHQtZ-0006V5-NA
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:55:33 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d2665525-95ad-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 15:55:32 +0100 (CET)
Received: from mail-ed1-f69.google.com (mail-ed1-f69.google.com
 [209.85.208.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-245-mKaQAv_TNUGhncv7dQlH_w-1; Mon, 16 Jan 2023 09:55:29 -0500
Received: by mail-ed1-f69.google.com with SMTP id
 t17-20020a056402525100b0049e0d2dd9b1so2410535edd.11
 for <xen-devel@lists.xenproject.org>; Mon, 16 Jan 2023 06:55:29 -0800 (PST)
Received: from imammedo.users.ipa.redhat.com (nat-pool-brq-t.redhat.com.
 [213.175.37.10]) by smtp.gmail.com with ESMTPSA id
 a3-20020aa7cf03000000b0049019b48373sm11543707edy.85.2023.01.16.06.55.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 16 Jan 2023 06:55:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2665525-95ad-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673880930;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g48LKw/EQzyimcflA0uDUwRnS8dI8OL8jQyceWm5H5M=;
	b=PughXJ7o8kHmymIdqGuvEVsMHz7v1jxsYtvLoBuvYizxxas0HaC5Exr2ubBjlyBlk6OM0f
	sJX8PSPTYIMofJ72fOBWZwRKA8y874Sii1J47x0B4Q4alYW1hjFbIeJTjMQmd7LUcmKymW
	Sd3wMNEjAWGyeMR8mmV3SAMKsmFhm4g=
X-MC-Unique: mKaQAv_TNUGhncv7dQlH_w-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=g48LKw/EQzyimcflA0uDUwRnS8dI8OL8jQyceWm5H5M=;
        b=PXZKKWeg1AwNP+SebqVS4UCFm1ow/n+zNAFzNfutmGpVyXTB0l4k8tNExPK1LSPNEA
         S7iH5BGz95tZj4LeDHTmXblqUaLvIkAbW2TEYPNm/uZtu3+N8tekKMstUY614kM4xpjV
         eU1o2+VxGi8DCvShrQqoewHI0+KB8W1m6KQhxQ4pH37gPRbsof/waltftJNrBkZGlghC
         1pYKHHZ8oGErVLkc5EG+w1hN7dNWTlT0Uf91iABXAxI8LUOL+9lyIjyQ4FjfOphWSpUc
         AbBjjrXssmNtUyK1ZoS+sIIFlo/sNMs74wMLskyfdQl9mFjPd0KOguJdmiHOZ8wlW6mH
         nKZg==
X-Gm-Message-State: AFqh2krPMbvpUH+bPddfu5DEhy3krIteKKKrMRYLiRIVgwZE/tP43T0s
	hzFEIBQxBEbzXiti0F8YNPdTmZfI2uWiBvD1p6SGGclb5g6lRIwEZ3ZWc2gMJsjIbv0s+4yGP/0
	SMDtyCkRD2TGlsH4+AjbxqJL50kQ=
X-Received: by 2002:a17:906:8478:b0:84d:373b:39da with SMTP id hx24-20020a170906847800b0084d373b39damr29113664ejc.40.1673880928387;
        Mon, 16 Jan 2023 06:55:28 -0800 (PST)
X-Google-Smtp-Source: AMrXdXsse3LgRug+Pp9ZoWtZxmVwLzZ7+mUuv4JYFK2PzosFJrEZ0QAORQ3jcyrTsWEYEH4Absw9GA==
X-Received: by 2002:a17:906:8478:b0:84d:373b:39da with SMTP id hx24-20020a170906847800b0084d373b39damr29113641ejc.40.1673880928211;
        Mon, 16 Jan 2023 06:55:28 -0800 (PST)
Date: Mon, 16 Jan 2023 15:55:26 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Cc: linux-kernel@vger.kernel.org, amakhalov@vmware.com, ganb@vmware.com,
 ankitja@vmware.com, bordoloih@vmware.com, keerthanak@vmware.com,
 blamoreaux@vmware.com, namit@vmware.com, Peter Zijlstra
 <peterz@infradead.org>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar
 <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>, "Rafael J.
 Wysocki" <rafael.j.wysocki@intel.com>, "Paul E. McKenney"
 <paulmck@kernel.org>, Wyes Karny <wyes.karny@amd.com>, Lewis Caroll
 <lewis.carroll@amd.com>, Tom Lendacky <thomas.lendacky@amd.com>, Juergen
 Gross <jgross@suse.com>, x86@kernel.org, VMware PV-Drivers Reviewers
 <pv-drivers@vmware.com>, virtualization@lists.linux-foundation.org,
 kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
 state
Message-ID: <20230116155526.05d37ff9@imammedo.users.ipa.redhat.com>
In-Reply-To: <20230116060134.80259-1-srivatsa@csail.mit.edu>
References: <20230116060134.80259-1-srivatsa@csail.mit.edu>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.36; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Sun, 15 Jan 2023 22:01:34 -0800
"Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:

> From: "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>
> 
> Under hypervisors that support mwait passthrough, a vCPU in mwait
> CPU-idle state remains in guest context (instead of yielding to the
> hypervisor via VMEXIT), which helps speed up wakeups from idle.
> 
> However, this runs into problems with CPU hotplug, because the Linux
> CPU offline path prefers to put the vCPU-to-be-offlined in mwait
> state, whenever mwait is available. As a result, since a vCPU in mwait
> remains in guest context and does not yield to the hypervisor, an
> offline vCPU *appears* to be 100% busy as viewed from the host, which
> prevents the hypervisor from running other vCPUs or workloads on the
> corresponding pCPU. [ Note that such a vCPU is not actually busy
> spinning though; it remains in mwait idle state in the guest ].
>
> Fix this by preventing the use of mwait idle state in the vCPU offline
> play_dead() path for any hypervisor, even if mwait support is
> available.

If mwait is enabled, the guest is very likely to have cpuidle
enabled and using the same mwait as well. So exiting early from
mwait_play_dead() might just punt the workflow down:
  native_play_dead()
        ...
        mwait_play_dead();
        if (cpuidle_play_dead())   <- possible mwait here
                hlt_play_dead();

and it will end up in mwait again; only if that fails will it
go the HLT route and maybe transition to the VMM.

Instead of a workaround on the guest side, shouldn't the hypervisor
force a VMEXIT on the vCPU when it is actually being hot-unplugged?
(ex: QEMU kicks the vCPU out of guest context when removing it,
among other things)

> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> Cc: "Paul E. McKenney" <paulmck@kernel.org>
> Cc: Wyes Karny <wyes.karny@amd.com>
> Cc: Lewis Caroll <lewis.carroll@amd.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Alexey Makhalov <amakhalov@vmware.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: x86@kernel.org
> Cc: VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
> Cc: virtualization@lists.linux-foundation.org
> Cc: kvm@vger.kernel.org
> Cc: xen-devel@lists.xenproject.org
> ---
> 
> v1: https://lore.kernel.org/lkml/165843627080.142207.12667479241667142176.stgit@csail.mit.edu/
> 
>  arch/x86/kernel/smpboot.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 55cad72715d9..125a5d4bfded 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1763,6 +1763,15 @@ static inline void mwait_play_dead(void)
>  		return;
>  	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
>  		return;
> +
> +	/*
> +	 * Do not use mwait in CPU offline play_dead if running under
> +	 * any hypervisor, to make sure that the offline vCPU actually
> +	 * yields to the hypervisor (which may not happen otherwise if
> +	 * the hypervisor supports mwait passthrough).
> +	 */
> +	if (this_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return;
>  	if (__this_cpu_read(cpu_info.cpuid_level) < CPUID_MWAIT_LEAF)
>  		return;
>  



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:57:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:57:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478783.742158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQvt-00077H-E5; Mon, 16 Jan 2023 14:57:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478783.742158; Mon, 16 Jan 2023 14:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQvt-00077A-BT; Mon, 16 Jan 2023 14:57:57 +0000
Received: by outflank-mailman (input) for mailman id 478783;
 Mon, 16 Jan 2023 14:57:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NbzH=5N=citrix.com=prvs=37389537a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHQvr-000770-Hn
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:57:55 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 268c6591-95ae-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 15:57:53 +0100 (CET)
Received: from mail-dm6nam11lp2176.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jan 2023 09:57:46 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA0PR03MB5450.namprd03.prod.outlook.com (2603:10b6:806:be::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 14:57:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.019; Mon, 16 Jan 2023
 14:57:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 268c6591-95ae-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Thread-Topic: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Thread-Index: AQHZJRefeExwWeppzE65VXS63DgRAq6axG6AgAA9twCABgijAIAAFX4AgAALUQA=
Date: Mon, 16 Jan 2023 14:57:44 +0000
Message-ID: <a1ffc132-5343-c070-7bda-b3198a1ccc95@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-6-andrew.cooper3@citrix.com>
 <af2b74b2-8f37-223c-b830-c2bb3bc6d467@suse.com>
 <3ac6a4d6-44db-d248-4440-6e71aa14ad93@citrix.com>
 <adf6f951-a0e5-c167-9739-d8b0a2b4af38@citrix.com>
 <309925fd-1e7b-4541-693a-0296bd22e242@suse.com>
In-Reply-To: <309925fd-1e7b-4541-693a-0296bd22e242@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <AD5E60FC067FDC41A18C5915CD123B46@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 16/01/2023 2:17 pm, Jan Beulich wrote:
> On 16.01.2023 14:00, Andrew Cooper wrote:
>> On 12/01/2023 4:51 pm, Andrew Cooper wrote:
>>> On 12/01/2023 1:10 pm, Jan Beulich wrote:
>>>> On 10.01.2023 18:18, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/setup.c
>>>>> +++ b/xen/arch/x86/setup.c
>>>>> @@ -54,6 +54,7 @@
>>>>>  #include <asm/spec_ctrl.h>
>>>>>  #include <asm/guest.h>
>>>>>  #include <asm/microcode.h>
>>>>> +#include <asm/prot-key.h>
>>>>>  #include <asm/pv/domain.h>
>>>>>
>>>>>  /* opt_nosmp: If true, secondary processors are ignored. */
>>>>> @@ -1804,6 +1805,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>>>>      if ( opt_invpcid && cpu_has_invpcid )
>>>>>          use_invpcid = true;
>>>>>
>>>>> +    if ( cpu_has_pks )
>>>>> +        wrpkrs_and_cache(0); /* Must be before setting CR4.PKS */
>>>> Same question here as for PKRU wrt the BSP during S3 resume.
>>> I had reasoned not, but it turns out that I'm wrong.
>>>
>>> It's important to reset the cache back to 0 here.  (Handling PKRU is
>>> different - I'll follow up on the other email..)
>> diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
>> index d23335391c67..de9317e8c573 100644
>> --- a/xen/arch/x86/acpi/power.c
>> +++ b/xen/arch/x86/acpi/power.c
>> @@ -299,6 +299,13 @@ static int enter_state(u32 state)
>>
>>      update_mcu_opt_ctrl();
>>
>> +    /*
>> +     * Should be before restoring CR4, but that is earlier in asm.  We
>> rely on
>> +     * MSR_PKRS actually being 0 out of S3 resume.
>> +     */
>> +    if ( cpu_has_pks )
>> +        wrpkrs_and_cache(0);
>> +
>>      /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
>>      percpu_traps_init();
>>
>>
>> I've folded this hunk, to sort out the S3 resume path.
> The comment is a little misleading imo - it looks to justify that nothing
> needs doing. Could you add "..., but our cache needs clearing" to clarify
> why, despite our relying on zero being in the register (which I find
> problematic, considering that the doc doesn't even spell out reset state),
> the write is needed?

Xen doesn't actually set CR4.PKS at all (yet).

I'm just trying to do a reasonable job of leaving Xen in a position
where it doesn't explode the instant we want to start using PKS ourselves.

S3 resume is out of a full core poweroff.  Registers (which aren't
modified by firmware) will have their architectural reset values, and
for MSR_PKRS, that's 0.

I will add a comment about resetting the cache, because that is the
point of doing this operation.

If we do find firmware which really does mess around with MSR_PKRS (and
isn't restoring the pre-S3 value), then this clause needs moving down
into asm before we restore %cr4 fully, but TBH, I hope I've had time to
try and unify the boot and S3 resume paths a bit before then.

>> As its the final hunk before the entire series can be committed, I
>> shan't bother sending a v3 just for this.
> If you're seeing reasons not to be concerned of the unspecified reset
> state, then feel free to add my A-b (but not R-b) here.

There are several reasons not to be concerned.  I guess I'll take your
ack then.  Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 14:59:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 14:59:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478789.742169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQxo-0007lj-Uk; Mon, 16 Jan 2023 14:59:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478789.742169; Mon, 16 Jan 2023 14:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHQxo-0007lc-Rt; Mon, 16 Jan 2023 14:59:56 +0000
Received: by outflank-mailman (input) for mailman id 478789;
 Mon, 16 Jan 2023 14:59:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHQxn-0007lS-2b
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 14:59:55 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2081.outbound.protection.outlook.com [40.107.13.81])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6ee330fa-95ae-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 15:59:54 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9065.eurprd04.prod.outlook.com (2603:10a6:10:2f0::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 14:59:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 14:59:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ee330fa-95ae-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e00512a6-5d32-6dbf-4269-429532f8a852@suse.com>
Date: Mon, 16 Jan 2023 15:59:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 1/4] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <2ce57f95f8445a4880e0992668a48ffe7c2f9732.1673877778.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2ce57f95f8445a4880e0992668a48ffe7c2f9732.1673877778.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0069.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 16.01.2023 15:39, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V4:
>     - Clean up types in <asm/types.h> and remain only necessary.
>       The following types was removed as they are defined in <xen/types.h>:
>       {__|}{u|s}{8|16|32|64}

For one, you still typedef u32 and u64. And imo correctly so, until we
get around to moving the definitions of the basic types into xen/types.h.
Plus I can't see how things build for you: xen/types.h expects __{u,s}<N>
to be defined in order to then derive {u,}int<N>_t from them.

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/types.h
> @@ -0,0 +1,43 @@
> +#ifndef __RISCV_TYPES_H__
> +#define __RISCV_TYPES_H__
> +
> +#ifndef __ASSEMBLY__
> +
> +#if defined(CONFIG_RISCV_32)
> +typedef unsigned long long u64;
> +typedef unsigned int u32;
> +typedef u32 vaddr_t;
> +#define PRIvaddr PRIx32
> +typedef u64 paddr_t;
> +#define INVALID_PADDR (~0ULL)
> +#define PRIpaddr "016llx"
> +typedef u32 register_t;
> +#define PRIregister "x"
> +#elif defined (CONFIG_RISCV_64)
> +typedef unsigned long u64;
> +typedef u64 vaddr_t;
> +#define PRIvaddr PRIx64
> +typedef u64 paddr_t;
> +#define INVALID_PADDR (~0UL)
> +#define PRIpaddr "016lx"
> +typedef u64 register_t;
> +#define PRIregister "lx"
> +#endif

Any chance you could insert blank lines after #if, around #elif, and
before #endif?

> +#if defined(__SIZE_TYPE__)
> +typedef __SIZE_TYPE__ size_t;
> +#else
> +typedef unsigned long size_t;
> +#endif

I'd appreciate it if this part were dropped by re-basing on top of my
"include/types: move stddef.h-kind types to common header" [1], to
avoid that (re-based) patch then needing to drop this from here
again. I would have committed it already, if osstest weren't
completely broken right now.

Jan

[1] https://lists.xen.org/archives/html/xen-devel/2023-01/msg00720.html
(since you would not be able to find a patch of the quoted title,
as in the submission I mistakenly referenced stdlib.h)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 15:04:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 15:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478794.742181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHR2Q-0000ne-Gp; Mon, 16 Jan 2023 15:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478794.742181; Mon, 16 Jan 2023 15:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHR2Q-0000nX-D0; Mon, 16 Jan 2023 15:04:42 +0000
Received: by outflank-mailman (input) for mailman id 478794;
 Mon, 16 Jan 2023 15:04:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHR2O-0000nR-L4
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 15:04:40 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2049.outbound.protection.outlook.com [40.107.8.49])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1858bd9e-95af-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 16:04:38 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8676.eurprd04.prod.outlook.com (2603:10a6:20b:42b::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Mon, 16 Jan
 2023 15:04:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 15:04:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1858bd9e-95af-11ed-b8d0-410ff93cb8f0

Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ac8d8c62-e1f8-a98e-3ead-e1f3f8a55c2d@suse.com>
Date: Mon, 16 Jan 2023 16:04:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-6-andrew.cooper3@citrix.com>
 <af2b74b2-8f37-223c-b830-c2bb3bc6d467@suse.com>
 <3ac6a4d6-44db-d248-4440-6e71aa14ad93@citrix.com>
 <adf6f951-a0e5-c167-9739-d8b0a2b4af38@citrix.com>
 <309925fd-1e7b-4541-693a-0296bd22e242@suse.com>
 <a1ffc132-5343-c070-7bda-b3198a1ccc95@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a1ffc132-5343-c070-7bda-b3198a1ccc95@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0119.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8676:EE_
X-MS-Office365-Filtering-Correlation-Id: 1fcff5fe-d8c8-44df-3915-08daf7d2fb81
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1fcff5fe-d8c8-44df-3915-08daf7d2fb81
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 15:04:36.2495
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8676

On 16.01.2023 15:57, Andrew Cooper wrote:
> On 16/01/2023 2:17 pm, Jan Beulich wrote:
>> On 16.01.2023 14:00, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/acpi/power.c
>>> +++ b/xen/arch/x86/acpi/power.c
>>> @@ -299,6 +299,13 @@ static int enter_state(u32 state)
>>>  
>>>      update_mcu_opt_ctrl();
>>>  
>>> +    /*
>>> +     * Should be before restoring CR4, but that is earlier in asm.  We rely
>>> +     * on MSR_PKRS actually being 0 out of S3 resume.
>>> +     */
>>> +    if ( cpu_has_pks )
>>> +        wrpkrs_and_cache(0);
>>> +
>>>      /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
>>>      percpu_traps_init();
>>>  
>>>
>>> I've folded this hunk, to sort out the S3 resume path.
>> The comment is a little misleading imo - it looks to justify that nothing
>> needs doing. Could you add "..., but our cache needs clearing" to clarify
>> why, despite our relying on zero being in the register (which I find
>> problematic, considering that the doc doesn't even spell out reset state),
>> the write is needed?
> 
> Xen doesn't actually set CR4.PKS at all (yet).
> 
> I'm just trying to do a reasonable job of leaving Xen in a position
> where it doesn't explode the instant we want to start using PKS ourselves.
> 
> S3 resume is out of a full core poweroff.  Registers (which aren't
> modified by firmware) will have their architectural reset values, and
> for MSR_PKRS, that's 0.

And where have you found that to be spelled out? It is this lack of
specification (afaics) which is concerning me.

Jan
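The write-and-cache pattern being discussed can be sketched as below. The names wrpkrs()/wrpkrs_and_cache() come from the quoted patch, but wrmsr() here is a plain stand-in (not the real Xen primitive), the per-CPU plumbing is omitted, and the zero reset value out of S3 is exactly the assumption Jan is questioning.

```c
#include <assert.h>
#include <stdint.h>

static uint32_t msr_pkrs_hw;      /* stand-in for the hardware register */
static uint32_t pkrs_cache;       /* per-CPU shadow in the real code */
static unsigned int wrmsr_count;  /* instrumentation for this example */

static void wrmsr(uint32_t val)
{
    msr_pkrs_hw = val;
    wrmsr_count++;
}

/* Fast path: skip the hardware write when the shadow already matches. */
static void wrpkrs(uint32_t val)
{
    if ( val != pkrs_cache )
    {
        pkrs_cache = val;
        wrmsr(val);
    }
}

/* Unconditional write: used out of S3 resume, where the register is
 * assumed to hold its reset value (0) but the shadow may be stale. */
static void wrpkrs_and_cache(uint32_t val)
{
    pkrs_cache = val;
    wrmsr(val);
}
```

The unconditional variant is what the folded hunk needs: after resume the cache, not the register, is what must be brought back in sync.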


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 15:21:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 15:21:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478799.742192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHRIq-00039e-Uh; Mon, 16 Jan 2023 15:21:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478799.742192; Mon, 16 Jan 2023 15:21:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHRIq-00039X-Rs; Mon, 16 Jan 2023 15:21:40 +0000
Received: by outflank-mailman (input) for mailman id 478799;
 Mon, 16 Jan 2023 15:21:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHRIq-00039R-82
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 15:21:40 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2056.outbound.protection.outlook.com [40.107.105.56])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77c0ee68-95b1-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 16:21:37 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8091.eurprd04.prod.outlook.com (2603:10a6:10:245::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 15:21:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 15:21:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77c0ee68-95b1-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <677cbd89-cbd5-87ef-8e3f-466242fc6012@suse.com>
Date: Mon, 16 Jan 2023 16:21:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] tools/xen-ucode: print information about currently loaded
 ucode
Content-Language: en-US
To: Sergey Dyasli <sergey.dyasli@citrix.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230113115630.22264-1-sergey.dyasli@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113115630.22264-1-sergey.dyasli@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0094.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8091:EE_
X-MS-Office365-Filtering-Correlation-Id: 7589c9f6-4171-4cb7-29d6-08daf7d54e3f
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7589c9f6-4171-4cb7-29d6-08daf7d54e3f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 15:21:14.0922
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8091

On 13.01.2023 12:56, Sergey Dyasli wrote:
> Currently it's impossible to get CPU's microcode revision after late
> loading without looking into Xen logs which is not always convenient.
> Add an option to xen-ucode tool to print the currently loaded ucode
> version and also print it during usage info.
> 
> Add a new platform op in order to get the required data from Xen.
> Print CPU signature and processor flags as well.
> 
> Example output:
>     Intel:
>     Current CPU signature is: 06-55-04 (raw 0x50654)
>     Current CPU microcode revision is: 0x2006e05
>     Current CPU processor flags are: 0x1
> 
>     AMD:
>     Current CPU signature is: fam19h (raw 0xa00f11)

So quite a bit less precise information than on Intel in the non-raw
part. Is there a reason for this?

> --- a/tools/libs/ctrl/xc_misc.c
> +++ b/tools/libs/ctrl/xc_misc.c
> @@ -226,6 +226,11 @@ int xc_microcode_update(xc_interface *xch, const void *buf, size_t len)
>      return ret;
>  }
>  
> +int xc_platform_op(xc_interface *xch, struct xen_platform_op *op)
> +{
> +    return do_platform_op(xch, op);
> +}

Wouldn't it make sense to simply rename do_platform_op()?

> --- a/tools/misc/xen-ucode.c
> +++ b/tools/misc/xen-ucode.c
> @@ -12,6 +12,67 @@
>  #include <fcntl.h>
>  #include <xenctrl.h>
>  
> +static const char *intel_id = "GenuineIntel";
> +static const char *amd_id   = "AuthenticAMD";

Do these need to be (non-const) pointers, rather than const char[]?
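The reviewer's suggestion might look like the following sketch; is_intel() is a hypothetical helper added just to make the fragment checkable.

```c
#include <assert.h>
#include <string.h>

/* As const char[] the strings are plain arrays: no separate (writable)
 * pointer object per id, and sizeof gives the string length + NUL. */
static const char intel_id[] = "GenuineIntel";
static const char amd_id[]   = "AuthenticAMD";

static int is_intel(const char *vendor)
{
    return strcmp(vendor, intel_id) == 0;
}
```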

> +void show_curr_cpu(FILE *f)
> +{
> +    int ret;
> +    xc_interface *xch;
> +    struct xen_platform_op op_cpu = {0}, op_ucode = {0};

Instead of the dummy initializers, can't you make ...

> +    struct xenpf_pcpu_version *cpu_ver = &op_cpu.u.pcpu_version;
> +    struct xenpf_ucode_version *ucode_ver = &op_ucode.u.ucode_version;
> +    bool intel = false, amd = false;
> +
> +    xch = xc_interface_open(0, 0, 0);
> +    if ( xch == NULL )
> +        return;
> +
> +    op_cpu.cmd = XENPF_get_cpu_version;
> +    op_cpu.interface_version = XENPF_INTERFACE_VERSION;
> +    op_cpu.u.pcpu_version.xen_cpuid = 0;

... this and ...

> +    ret = xc_platform_op(xch, &op_cpu);
> +    if ( ret )
> +        return;
> +
> +    op_ucode.cmd = XENPF_get_ucode_version;
> +    op_ucode.interface_version = XENPF_INTERFACE_VERSION;
> +    op_ucode.u.pcpu_version.xen_cpuid = 0;

... this the initializers?
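What Jan is suggesting can be sketched with designated initializers. The struct and the two macro values below are cut-down placeholders, not the real xen_platform_op definition from the public headers.

```c
#include <assert.h>
#include <stdint.h>

#define STUB_XENPF_get_cpu_version   48          /* placeholder value */
#define STUB_XENPF_INTERFACE_VERSION 0x03000001u /* placeholder value */

struct stub_platform_op {
    uint32_t cmd;
    uint32_t interface_version;
    union {
        struct { uint32_t xen_cpuid; } pcpu_version;
    } u;
};

static struct stub_platform_op make_cpu_version_op(void)
{
    /* Designated initializers replace the "= {0}" dummy plus the later
     * assignments; all unnamed members are implicitly zeroed. */
    struct stub_platform_op op = {
        .cmd = STUB_XENPF_get_cpu_version,
        .interface_version = STUB_XENPF_INTERFACE_VERSION,
        .u.pcpu_version.xen_cpuid = 0,
    };

    return op;
}
```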

> @@ -20,11 +81,18 @@ int main(int argc, char *argv[])
>      struct stat st;
>      xc_interface *xch;
>  
> +    if ( argc >= 2 && !strcmp(argv[1], "show-cpu-info") )
> +    {
> +        show_curr_cpu(stdout);
> +        return 0;
> +    }
> +
>      if ( argc < 2 )
>      {
>          fprintf(stderr,
>                  "xen-ucode: Xen microcode updating tool\n"
>                  "Usage: %s <microcode blob>\n", argv[0]);
> +        show_curr_cpu(stderr);
>          exit(2);
>      }

Personally I'd find it more logical if this remained first and you
inserted your new fragment right afterwards. That way you also don't
need to check argc twice.
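The suggested ordering, with argc tested only once, might look like this sketch. The control flow is factored into a hypothetical parse_args() helper returning an action code so the fragment is self-contained; show_curr_cpu() and the actual update path are assumed from the patch.

```c
#include <assert.h>
#include <string.h>

enum action { ACT_SHOW_CPU, ACT_UPDATE, ACT_USAGE };

static enum action parse_args(int argc, char *argv[])
{
    /* Usage check stays first, as the reviewer suggests. */
    if ( argc < 2 )
        return ACT_USAGE;        /* print usage + CPU info, exit(2) */

    /* New fragment goes right afterwards: argc is already known >= 2. */
    if ( !strcmp(argv[1], "show-cpu-info") )
        return ACT_SHOW_CPU;     /* show_curr_cpu(stdout), return 0 */

    return ACT_UPDATE;           /* proceed to load the microcode blob */
}
```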

> --- a/xen/arch/x86/platform_hypercall.c
> +++ b/xen/arch/x86/platform_hypercall.c
> @@ -640,6 +640,38 @@ ret_t do_platform_op(
>      }
>      break;
>  
> +    case XENPF_get_ucode_version:
> +    {
> +        struct xenpf_ucode_version *ver = &op->u.ucode_version;
> +
> +        if ( !get_cpu_maps() )
> +        {
> +            ret = -EBUSY;
> +            break;
> +        }
> +
> +        if ( (ver->xen_cpuid >= nr_cpu_ids) || !cpu_online(ver->xen_cpuid) )
> +        {
> +            ver->cpu_signature = 0;
> +            ver->pf = 0;
> +            ver->ucode_revision = 0;

Better return -ENOENT in this case?

> +        }
> +        else
> +        {
> +            const struct cpu_signature *sig = &per_cpu(cpu_sig, ver->xen_cpuid);
> +
> +            ver->cpu_signature = sig->sig;
> +            ver->pf = sig->pf;
> +            ver->ucode_revision = sig->rev;

Here you read what is actually present, which ...

> --- a/xen/include/public/platform.h
> +++ b/xen/include/public/platform.h
> @@ -610,6 +610,19 @@ DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
>  typedef struct dom0_vga_console_info xenpf_dom0_console_t;
>  DEFINE_XEN_GUEST_HANDLE(xenpf_dom0_console_t);
>  
> +#define XENPF_get_ucode_version 65
> +struct xenpf_ucode_version {
> +    uint32_t xen_cpuid;       /* IN:  CPU number to get the revision from.  */
> +                              /*      Return data should be equal among all */
> +                              /*      the CPUs.                             */

... doesn't necessarily match the promise here. Perhaps weaken the
"should", or clarify what the conditions are for this to be the case?
Also, your addition to xen-ucode builds on this, which can easily
end up misleading when it's not really the case.

> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -157,6 +157,7 @@
>  ?	xenpf_pcpuinfo			platform.h
>  ?	xenpf_pcpu_version		platform.h
>  ?	xenpf_resource_entry		platform.h
> +?	xenpf_ucode_version		platform.h

You also want to invoke the resulting macro, so that the intended checking
actually occurs.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 15:33:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 15:33:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478805.742203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHRUd-0004jO-3N; Mon, 16 Jan 2023 15:33:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478805.742203; Mon, 16 Jan 2023 15:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHRUd-0004jH-0M; Mon, 16 Jan 2023 15:33:51 +0000
Received: by outflank-mailman (input) for mailman id 478805;
 Mon, 16 Jan 2023 15:33:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o6RV=5N=redhat.com=imammedo@srs-se1.protection.inumbo.net>)
 id 1pHRUb-0004jB-Eg
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 15:33:49 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2adebed3-95b3-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 16:33:48 +0100 (CET)
Received: from mail-ej1-f72.google.com (mail-ej1-f72.google.com
 [209.85.218.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-28-M1ajViqMNuaHWJycP0l74g-1; Mon, 16 Jan 2023 10:33:45 -0500
Received: by mail-ej1-f72.google.com with SMTP id
 oz11-20020a1709077d8b00b007c0dd8018b6so19961185ejc.17
 for <xen-devel@lists.xenproject.org>; Mon, 16 Jan 2023 07:33:45 -0800 (PST)
Received: from imammedo.users.ipa.redhat.com (nat-pool-brq-t.redhat.com.
 [213.175.37.10]) by smtp.gmail.com with ESMTPSA id
 p14-20020a17090653ce00b0085ea718a81bsm6570877ejo.198.2023.01.16.07.33.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 16 Jan 2023 07:33:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2adebed3-95b3-11ed-91b6-6bf2151ebd3b
X-MC-Unique: M1ajViqMNuaHWJycP0l74g-1
X-Received: by 2002:a17:907:a585:b0:871:3919:cbea with SMTP id vs5-20020a170907a58500b008713919cbeamr2708614ejc.54.1673883224354;
        Mon, 16 Jan 2023 07:33:44 -0800 (PST)
X-Google-Smtp-Source: AMrXdXsP1+tXYFUEmlpJRu6mf2xwlR+HRv7sYl+XDTQgxmYtii99ewpWT8Pm9lEk+AysfSPRbqKRIQ==
X-Received: by 2002:a17:907:a585:b0:871:3919:cbea with SMTP id vs5-20020a170907a58500b008713919cbeamr2708594ejc.54.1673883224151;
        Mon, 16 Jan 2023 07:33:44 -0800 (PST)
Date: Mon, 16 Jan 2023 16:33:42 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org, Stefano Stabellini
 <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, Paul
 Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, Richard
 Henderson <richard.henderson@linaro.org>, Eduardo Habkost
 <eduardo@habkost.net>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, Philippe Mathieu-Daudé
 <philmd@linaro.org>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
In-Reply-To: <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
	<a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
	<20230110030331-mutt-send-email-mst@kernel.org>
	<a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
	<D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
	<9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
	<7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
	<20230112180314-mutt-send-email-mst@kernel.org>
	<128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
	<20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
	<88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.36; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Fri, 13 Jan 2023 16:31:26 -0500
Chuck Zmudzinski <brchuckz@aol.com> wrote:

> On 1/13/23 4:33 AM, Igor Mammedov wrote:
> > On Thu, 12 Jan 2023 23:14:26 -0500
> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> >
> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:
> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:
> >> >> I think the change Michael suggests is very minimalistic: Move the if
> >> >> condition around xen_igd_reserve_slot() into the function itself and
> >> >> always call it there unconditionally -- basically turning three lines
> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> >> >> Michael further suggests to rename it to something more general. All
> >> >> in all no big changes required.
> >> >
> >> > yes, exactly.
> >> >
> >>
> >> OK, got it. I can do that along with the other suggestions.
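The refactoring Bernhard describes above can be sketched minimally: fold the caller-side condition into xen_igd_reserve_slot() itself so every call site becomes one unconditional line. The flag and reservation map below are illustrative stand-ins, not QEMU's actual implementation.

```c
#include <assert.h>
#include <stdbool.h>

#define XEN_IGD_SLOT 2          /* slot the IGD must occupy (illustrative) */

static bool igd_gfx_passthru;   /* stand-in for the real config flag */
static bool slot_reserved[32];  /* stand-in for a per-bus reservation map */

/* Stand-in for the predicate the discussion refers to. */
static bool xen_igd_gfx_pt_enabled(void)
{
    return igd_gfx_passthru;
}

/*
 * After the refactoring: callers invoke this unconditionally and the
 * function decides internally whether anything needs doing.
 */
static void xen_igd_reserve_slot(void)
{
    if (!xen_igd_gfx_pt_enabled())
        return;                 /* nothing to do without IGD passthrough */

    slot_reserved[XEN_IGD_SLOT] = true;
}
```

The three caller-side lines (check, call, fallthrough) collapse into a single `xen_igd_reserve_slot();`, which is the whole point of the suggestion.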
> >
> > have you considered instead of reservation, putting a slot check in device model
> > and if it's intel igd being passed through, fail at realize time if it can't take
> > required slot (with an error directing the user to fix the command line)?
>
> Yes, but the core pci code currently already fails at realize time
> with a useful error message if the user tries to use slot 2 for the
> igd, because of the xen platform device which has slot 2. The user
> can fix this without patching qemu, but having the user fix it on
> the command line is not the best way to solve the problem, primarily
> because the user would need to hotplug the xen platform device via a
> command line option instead of having the xen platform device added by
> pc_xen_hvm_init functions almost immediately after creating the pci
> bus, and that delay in adding the xen platform device degrades
> startup performance of the guest.
>
> > That could be less complicated than dealing with slot reservations at the cost of
> > being less convenient.
>
> And also a cost of reduced startup performance

Could you clarify how it affects performance (and by how much)?
(As far as I can see, setup done at board_init time takes roughly the
same time as with '-device foo' CLI options, modulo the time needed to
parse the options, which should be negligible; both ways complete
before the guest runs.)

> However, the performance hit can be prevented by assigning slot 3
> instead of slot 2 to the xen platform device when igd passthrough is
> configured on the command line instead of doing slot reservation.
> Even then, there would still be less convenience and, for libxl
> users, no easy way to configure the command line so that the igd can
> still have slot 2, short of a hacky and error-prone patch to libxl.
libvirt manages to get it right on the management side without quirks
on the QEMU side.

> I did post a patch on xen-devel to fix this using libxl, but so far
> it has not been reviewed. I mentioned in that patch that patching
> qemu so that qemu reserves slot 2 for the igd is less prone to
> coding errors and easier to maintain than the patch that would be
> required to implement the fix in libxl.

The patch is not trivial, and it adds a maintenance burden on QEMU.
Though I don't object to it as long as it's constrained to Xen-only
code and doesn't spill into generic PCI.
All I wanted was to point out that there is another approach to the
problem (i.e. force the user to provide a correct configuration
instead of adding quirks wherever possible).
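Igor's alternative — a realize-time check instead of a reservation — could look roughly like the following. The types and names here are hypothetical stand-ins, not QEMU's real PCI/realize API; an actual implementation would report the failure through QEMU's `Error **errp` convention on the real `PCIDevice`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define IGD_REQUIRED_SLOT 2     /* slot the Intel IGD must occupy */

struct pci_dev {                /* hypothetical stand-in for PCIDevice */
    int slot;
    bool is_intel_igd;
};

/*
 * Called at realize time: fail with a message directing the user to
 * fix the command line when the IGD did not get the slot it needs.
 */
static bool igd_check_slot_at_realize(const struct pci_dev *dev,
                                      char *err, size_t errlen)
{
    if (dev->is_intel_igd && dev->slot != IGD_REQUIRED_SLOT) {
        snprintf(err, errlen,
                 "Intel IGD passthrough requires PCI slot %d, got %d; "
                 "adjust the device's addr= property on the command line",
                 IGD_REQUIRED_SLOT, dev->slot);
        return false;           /* realize fails */
    }
    return true;
}
```

The trade-off discussed in the thread is visible here: the check is simpler than a reservation mechanism, but it pushes the burden of choosing the correct slot onto the user (or onto libxl).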



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 15:36:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 15:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478810.742214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHRWv-0005Ih-FY; Mon, 16 Jan 2023 15:36:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478810.742214; Mon, 16 Jan 2023 15:36:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHRWv-0005Ia-Cl; Mon, 16 Jan 2023 15:36:13 +0000
Received: by outflank-mailman (input) for mailman id 478810;
 Mon, 16 Jan 2023 15:36:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHRWt-0005IQ-PB
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 15:36:11 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2068.outbound.protection.outlook.com [40.107.22.68])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7fc05366-95b3-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 16:36:09 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8275.eurprd04.prod.outlook.com (2603:10a6:20b:3ec::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Mon, 16 Jan
 2023 15:36:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 15:36:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fc05366-95b3-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y1AX1cTqKNkhwHZL7oHFI5HSmsQBgCfchiZRTsnR+1npqbuVUKBSVLv4FvYlp+W7W2w6CHixtmQk78vhAG2Zg5/BubjT1u1sk1pLjjn171MTRmCdyHgmlYMpVg2rBysHiIVgR7rYb2MD421kmWiFpXn8D0nSRj+2y3Mkbe86Je5QjpbhRRCgMB8J1S/IZwrka1UfxEi6Pn1O485TjfqpfVgRVrZXsqsy2W3E5sJSRVWHwXyWFm6ILNTHs8E4fCZDJvZHMRVxfESndFyJtzl5g22wKcIc/kcR0smwLOBoS6c7ehnNUJZDnzX6eMcCaqGykMHilys5QikcgIa+I2EHiw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XxdfyaGjzGOJkQF8K0bSOQVeSIu+/NEkdq2y1PSw86o=;
 b=HMEukcyZG3U1YyDdOBqEX1CuvD3vDUOkHrgDIT0jjoytjPrBm5gjgZ5b2MkhnT6NY6/vHKXVqhg+1XYkkpuL+za7AJa3Wu7TvxlVbCf84HyXKVdG3sSILJvk8RXCKfPaj16yug8uQm1NE/uUvaCNlc/rgoCv8/8CiCGdzgRe+hbTXawraPeDhzDmbOh9TYDq0sBl7ajLzEG8gq76wQac3Qzn48L6FsHzXYdi+uxtFwFtNeviMGqQOjwZ7dujNvjFyjAPhBKhiCADXzW0yIToRRkCJM31WNUyTADqNc7kf87ecNzr94PmDZdkZRQ3ubpFGsSkdogMi0015PWIxB+qCg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XxdfyaGjzGOJkQF8K0bSOQVeSIu+/NEkdq2y1PSw86o=;
 b=3XTEMdXFbn7BSZO+aMoeoRr5OeE9pqnAX8MGIHvQcZM6GfhV6mn8I4RX23bNpyDOQpR92dwJb6++AJuMf72oeUDpVBL30GPuvo6bqMYKjjYMSuhy2m72v8PXCIDmX8p1s1MKvoXZjsV6wADi+3ETgfqyxuwruKptYoScQZ0sH6X2yFJcNN8HgRM7+nXcbcx3jhXSqRBB+1Ncerp1HnP+4ED9wWEzFqCtSbn73oro4+vGdWL6bTT5YdTbEVuEYWBXAMuFDd8VtHtUQFxcN2FwD30Pkg7XyvUYSQdnMChNou3yuGTN4jR+/wuuLTqkhbs0ka1qWdBpfbHgtiDe/Ual9Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3920b910-35a2-f806-dfa5-eaf44475f83a@suse.com>
Date: Mon, 16 Jan 2023 16:36:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/5] xen/version: Drop bogus return values for
 XENVER_platform_parameters
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113230835.29356-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0104.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8275:EE_
X-MS-Office365-Filtering-Correlation-Id: e19de267-516c-4487-8b07-08daf7d762eb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e19de267-516c-4487-8b07-08daf7d762eb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 15:36:07.7854
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yhV2m7UhAcRdrUdXoEzXGXI8EG298nPQCmghagOyXQ3taL+pj5rxA3nLiv8l15n+y+3eHZJv4XE1joYPtFRJPA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8275

On 14.01.2023 00:08, Andrew Cooper wrote:
> A split in virtual address space is only applicable for x86 PV guests.
> Furthermore, the information returned for x86 64bit PV guests is wrong.
> 
> Explain the problem in version.h, stating the other information that PV guests
> need to know.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> The only reason this does not get an XSA is because Xen does not have any form
> of KASLR.

I continue to question this. I think I did say before that even with KASLR,
Xen would need to constrain itself to the 16 L4 slots that the ABI reserves
for its purpose. And nothing is being returned about the inner structure of
that virtual address range.
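For reference, the arithmetic behind those 16 L4 slots: with 4-level paging, each L4 (PML4) entry maps 512 GiB of virtual address space, so the ABI-reserved range is a fixed 8 TiB window no matter where KASLR might shuffle Xen inside it. A small illustration (not Xen source):

```c
#include <assert.h>
#include <stdint.h>

#define GiB (1ULL << 30)
#define L4_SLOT_SPAN (512 * GiB)  /* VA covered by one PML4 entry */
#define XEN_L4_SLOTS 16ULL        /* L4 slots the PV ABI reserves for Xen */

/* Total virtual address span of the ABI-reserved window. */
static uint64_t xen_reserved_va_span(void)
{
    return XEN_L4_SLOTS * L4_SLOT_SPAN;
}
```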

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 15:43:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 15:43:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478815.742225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHReF-0006mU-8N; Mon, 16 Jan 2023 15:43:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478815.742225; Mon, 16 Jan 2023 15:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHReF-0006mN-57; Mon, 16 Jan 2023 15:43:47 +0000
Received: by outflank-mailman (input) for mailman id 478815;
 Mon, 16 Jan 2023 15:43:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NbzH=5N=citrix.com=prvs=37389537a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHReD-0006mF-6U
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 15:43:45 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8c4dab4d-95b4-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 16:43:41 +0100 (CET)
Received: from mail-dm6nam11lp2175.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jan 2023 10:43:38 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6347.namprd03.prod.outlook.com (2603:10b6:303:11e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Mon, 16 Jan
 2023 15:43:36 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.019; Mon, 16 Jan 2023
 15:43:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c4dab4d-95b4-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673883822;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=egGgxn++pMs3jICn9GuUZ62v5QcxMryKjEcVXGx3rj4=;
  b=QuLer6i7NVpn2vdcBWW83PT1myg+gaRgb9wu2yo25lkfOOqlhz8Q5Yyc
   54PHMYcBHLnDC/OGVD7VSj8FPom6pVTjNnosoWrpdU19lOteedxAiGshT
   hdlttr+SYVH+kIeY/x6f40SihsvRN4JTaT8dM5GS1/XcJX0VVvm+sFCzg
   I=;
X-IronPort-RemoteIP: 104.47.57.175
X-IronPort-MID: 92823819
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,221,1669093200"; 
   d="scan'208";a="92823819"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SO8QHTatFmCs04tYDd/QHJVHBMLIrOwP24UD2N/E+3qAjMD7kzI9AqEuH2SGpPfL5jKA4oXHkVm7xw2PJuJntkG7IhwDbjgB5o3l8nOuKVz7aKFDjF5oUkspzyqMK/lmsXq1nOWU6E2DeK8Ku85zp6K2kCBpE2j3zuxs+i43kEGicUKPaCemMQaxfHbKfzjXS7zVm7fjmNGvs9fLKmsyXRnV5xNkE1QZgGtcADZNeNcrP9xxWK2oANi1Fc8A5wfmG63fb5VdjdIBGSvEX3u2yxNJGLqf3yVf/bnd1aP4WxciFgWIYWuKxD2vvXzYzV/He8wCU7DrkOeTRmADzg2s0g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=egGgxn++pMs3jICn9GuUZ62v5QcxMryKjEcVXGx3rj4=;
 b=mY+q5wqhDydCCBKeBPC8hcb5BzZXgFOgAgLSYILoxr0nufpmwA5p2U+o3/Rp1yIkuRjmZ2/ypIfFGjS/Q1v+CxmalD9T197hMJai/NVZ01+3gm+vVUPIv1wqhLOmWZy3GXjret/E3IssYeo+1KVgt8xhMXPoxmCyaBCVARDsFzDPyr7XL5vp/PJ3qTbw7klD+eGKx6M1pQ4Ch8PeAAeJ0Jq0CEswmDAEWliL+HMjqvj4KVxnbTFFNQgBGb2+qMqIAUhWNn69l50dG+J43q5iq69N0fxsjMxfqMFTuumR6tx3fryenYMTu8/KkRj12HOudzzrvVq94u2MZONsZCXNQA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=egGgxn++pMs3jICn9GuUZ62v5QcxMryKjEcVXGx3rj4=;
 b=oHcSjGIEFzFoMD5oqUw710Kqr8HWZUqkhq+1eouT5urpdr7PxGPrfgNyOBulSyfgpw1SK0UG90zqwzDqg4Y8mjGs2gRXwydfrL+YHmc7OE0QYjt9VezC2G12+9R3fjO716WyiR8vRB1bsq3+eyQdTHzMJbWyLv5P+jxpddbFAcY=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Thread-Topic: [PATCH v2 5/8] x86/hvm: Context switch MSR_PKRS
Thread-Index:
 AQHZJRefeExwWeppzE65VXS63DgRAq6axG6AgAA9twCABgijAIAAFX4AgAALUQCAAAHpAIAACugA
Date: Mon, 16 Jan 2023 15:43:36 +0000
Message-ID: <ada339ac-f2f1-fd6f-cca6-12a42c49da36@citrix.com>
References: <20230110171845.20542-1-andrew.cooper3@citrix.com>
 <20230110171845.20542-6-andrew.cooper3@citrix.com>
 <af2b74b2-8f37-223c-b830-c2bb3bc6d467@suse.com>
 <3ac6a4d6-44db-d248-4440-6e71aa14ad93@citrix.com>
 <adf6f951-a0e5-c167-9739-d8b0a2b4af38@citrix.com>
 <309925fd-1e7b-4541-693a-0296bd22e242@suse.com>
 <a1ffc132-5343-c070-7bda-b3198a1ccc95@citrix.com>
 <ac8d8c62-e1f8-a98e-3ead-e1f3f8a55c2d@suse.com>
In-Reply-To: <ac8d8c62-e1f8-a98e-3ead-e1f3f8a55c2d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MW4PR03MB6347:EE_
x-ms-office365-filtering-correlation-id: f64ef5b4-63dd-4b26-12e5-08daf7d86ead
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <4D069026EBB81A4DAC1C7A9E2A1633B1@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f64ef5b4-63dd-4b26-12e5-08daf7d86ead
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Jan 2023 15:43:36.8023
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: gG68dFJkCpgn1nbCyXYeNoGQTEiUeFcl6brHW7fMmkF2fp76V1qVS+HdzDsX8Dt+7SkWTaXFUitNJ9p6/ngfZDVnV2+8ixluVhfS7w+l6bM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6347

On 16/01/2023 3:04 pm, Jan Beulich wrote:
> On 16.01.2023 15:57, Andrew Cooper wrote:
>> On 16/01/2023 2:17 pm, Jan Beulich wrote:
>>> On 16.01.2023 14:00, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/acpi/power.c
>>>> +++ b/xen/arch/x86/acpi/power.c
>>>> @@ -299,6 +299,13 @@ static int enter_state(u32 state)
>>>>
>>>>      update_mcu_opt_ctrl();
>>>>
>>>> +    /*
>>>> +     * Should be before restoring CR4, but that is earlier in asm.
>>>> +     * We rely on MSR_PKRS actually being 0 out of S3 resume.
>>>> +     */
>>>> +    if ( cpu_has_pks )
>>>> +        wrpkrs_and_cache(0);
>>>> +
>>>>      /* (re)initialise SYSCALL/SYSENTER state, amongst other things. */
>>>>      percpu_traps_init();
>>>>
>>>>
>>>> I've folded this hunk, to sort out the S3 resume path.
>>> The comment is a little misleading imo - it looks to justify that nothing
>>> needs doing. Could you add "..., but our cache needs clearing" to clarify
>>> why, despite our relying on zero being in the register (which I find
>>> problematic, considering that the doc doesn't even spell out reset state),
>>> the write is needed?
>> Xen doesn't actually set CR4.PKS at all (yet).
>>
>> I'm just trying to do a reasonable job of leaving Xen in a position
>> where it doesn't explode the instant we want to start using PKS ourselves.
>>
>> S3 resume is out of a full core poweroff.  Registers (which aren't
>> modified by firmware) will have their architectural reset values, and
>> for MSR_PKRS, that's 0.
> And where have you found that to be spelled out? It is this lack of
> specification (afaics) which is concerning me.

I have a request for an update to table 10-1 already pending.  MSR_PKRS
isn't plausibly different from PKRU.

(And even if it is different, this still won't matter while Xen doesn't
use CR4.PKS itself.)

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 15:53:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 15:53:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478821.742236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHRnL-0008Jr-7I; Mon, 16 Jan 2023 15:53:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478821.742236; Mon, 16 Jan 2023 15:53:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHRnL-0008Jk-41; Mon, 16 Jan 2023 15:53:11 +0000
Received: by outflank-mailman (input) for mailman id 478821;
 Mon, 16 Jan 2023 15:53:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHRnK-0008Jc-J7
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 15:53:10 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2054.outbound.protection.outlook.com [40.107.7.54])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id deff86c7-95b5-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 16:53:08 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7354.eurprd04.prod.outlook.com (2603:10a6:102:8e::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Mon, 16 Jan
 2023 15:53:06 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 15:53:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: deff86c7-95b5-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9ff94f87-e3fa-c397-ebf0-b4849cba757d@suse.com>
Date: Mon, 16 Jan 2023 16:53:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 2/5] xen/version: Calculate xen_capabilities_info once
 at boot
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113230835.29356-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0020.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7354:EE_
X-MS-Office365-Filtering-Correlation-Id: a1da5ae1-f340-4d3b-10e9-08daf7d9c1d3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a1da5ae1-f340-4d3b-10e9-08daf7d9c1d3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 15:53:06.0332
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7354

On 14.01.2023 00:08, Andrew Cooper wrote:
> The arch_get_xen_caps() infrastructure is horribly inefficient, for something
> that is constant after features have been resolved on boot.
> 
> Every instance used snprintf() to format constants into a string (which gets
> shorter when %d gets resolved!), which gets double buffered on the stack.
> 
> Switch to using string literals with the "3.0" inserted - these numbers
> haven't changed in 18 years (The Xen 3.0 release was Dec 5th 2005).
> 
> Use initcalls to format the data into xen_cap_info, which is deliberately not
> of type xen_capabilities_info_t because a 1k array is a silly overhead for
> storing a maximum of 77 chars (the x86 version) and isn't liable to need any
> more space in the foreseeable future.

So I was wondering whether, once we have arrived at the new ABI (and hence
the 3.0 one is properly legacy), we shouldn't declare Xen 5.0 and then also
mark the new ABI's availability here by a string including "5.0", where at
present we expose (only) "3.0".

> If Xen had strncpy(), then the hunk in do_xen_version() could read:
> 
>   if ( deny )
>      memset(info, 0, sizeof(info));
>   else
>      strncpy(info, xen_cap_info, sizeof(info));
> 
> to avoid double processing the start of the buffer, but given the ABI (must
> write 1k chars into the guest), I cannot see any way of taking info off the
> stack without some kind of strncpy_to_guest() API.

How about using clear_guest() for the 1k range, then copy_to_guest() for
merely the string? Plus - are we even required to clear the buffer past
the nul terminator?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:06:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:06:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478826.742247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHS0T-0001yp-BN; Mon, 16 Jan 2023 16:06:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478826.742247; Mon, 16 Jan 2023 16:06:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHS0T-0001yi-8Z; Mon, 16 Jan 2023 16:06:45 +0000
Received: by outflank-mailman (input) for mailman id 478826;
 Mon, 16 Jan 2023 16:06:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHS0S-0001ya-05
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 16:06:44 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2085.outbound.protection.outlook.com [40.107.247.85])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c29f80ba-95b7-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 17:06:40 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9466.eurprd04.prod.outlook.com (2603:10a6:10:35a::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Mon, 16 Jan
 2023 16:06:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 16:06:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c29f80ba-95b7-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5ebe5337-f84e-12a1-e8a0-92832100946d@suse.com>
Date: Mon, 16 Jan 2023 17:06:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_*
 subops
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Jason Andryuk <jandryuk@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113230835.29356-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0016.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9466:EE_
X-MS-Office365-Filtering-Correlation-Id: 8702ab57-e1e8-4568-b167-08daf7dba4e3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8702ab57-e1e8-4568-b167-08daf7dba4e3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 16:06:36.4192
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9466

On 14.01.2023 00:08, Andrew Cooper wrote:
> @@ -470,6 +471,59 @@ static int __init cf_check param_init(void)
>  __initcall(param_init);
>  #endif
>  
> +static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> +{
> +    struct xen_varbuf user_str;
> +    const char *str = NULL;

This takes away from the compiler any chance of reporting "str" as
uninitialized (particularly by future changes; the way it's here is
obvious enough of course). The NULL also doesn't buy you anything, as
...

> +    size_t sz;
> +
> +    switch ( cmd )
> +    {
> +    case XENVER_extraversion2:
> +        str = xen_extra_version();
> +        break;
> +
> +    case XENVER_changeset2:
> +        str = xen_changeset();
> +        break;
> +
> +    case XENVER_commandline2:
> +        str = saved_cmdline;
> +        break;
> +
> +    case XENVER_capabilities2:
> +        str = xen_cap_info;
> +        break;
> +
> +    default:
> +        ASSERT_UNREACHABLE();
> +        return -ENODATA;
> +    }
> +
> +    sz = strlen(str);

... we will still crash here in case the variable doesn't get any other
value assigned.

> +    if ( sz > KB(64) ) /* Arbitrary limit.  Avoid long-running operations. */
> +        return -E2BIG;
> +
> +    if ( guest_handle_is_null(arg) ) /* Length request */
> +        return sz;
> +
> +    if ( copy_from_guest(&user_str, arg, 1) )
> +        return -EFAULT;
> +
> +    if ( user_str.len == 0 )
> +        return -EINVAL;
> +
> +    if ( sz > user_str.len )
> +        return -ENOBUFS;

The earlier of these last two checks means that one can't successfully
call this function when the size query has returned 0.

> --- a/xen/include/public/version.h
> +++ b/xen/include/public/version.h
> @@ -19,12 +19,20 @@
>  /* arg == NULL; returns major:minor (16:16). */
>  #define XENVER_version      0
>  
> -/* arg == xen_extraversion_t. */
> +/*
> + * arg == xen_extraversion_t.
> + *
> + * This API/ABI is broken.  Use XENVER_extraversion2 instead.

Personally I don't like these "broken" notes that you're adding. These
interfaces simply are the way they are, with certain limitations. We also
won't be able to remove the old variants (except in the new ABI), so telling
people to avoid them gains us next to nothing.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:10:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:10:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478831.742257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHS4G-0003Nh-QW; Mon, 16 Jan 2023 16:10:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478831.742257; Mon, 16 Jan 2023 16:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHS4G-0003Na-Ny; Mon, 16 Jan 2023 16:10:40 +0000
Received: by outflank-mailman (input) for mailman id 478831;
 Mon, 16 Jan 2023 16:10:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KGc4=5N=csail.mit.edu=srivatsa@srs-se1.protection.inumbo.net>)
 id 1pHS4F-0003NU-D8
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 16:10:39 +0000
Received: from outgoing2021.csail.mit.edu (outgoing2021.csail.mit.edu
 [128.30.2.78]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4fba73af-95b8-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 17:10:37 +0100 (CET)
Received: from c-24-17-218-140.hsd1.wa.comcast.net ([24.17.218.140]
 helo=srivatsab3MD6R.vmware.com)
 by outgoing2021.csail.mit.edu with esmtpsa (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.95)
 (envelope-from <srivatsa@csail.mit.edu>) id 1pHS49-00Ec49-Pd;
 Mon, 16 Jan 2023 11:10:33 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fba73af-95b8-11ed-91b6-6bf2151ebd3b
Subject: Re: [PATCH] x86/paravirt: merge activate_mm and dup_mmap callbacks
To: Juergen Gross <jgross@suse.com>, linux-kernel@vger.kernel.org,
 x86@kernel.org, virtualization@lists.linux-foundation.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Alexey Makhalov <amakhalov@vmware.com>,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org
References: <20230112152132.4399-1-jgross@suse.com>
 <3fcb5078-852e-0886-c084-7fb0cfa5b757@csail.mit.edu>
 <27d08d32-1a17-0959-203f-39e769f555d1@suse.com>
From: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Message-ID: <6a8e25eb-758d-8ad6-23c1-5fea7dab3b09@csail.mit.edu>
Date: Mon, 16 Jan 2023 08:10:30 -0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.12.0
MIME-Version: 1.0
In-Reply-To: <27d08d32-1a17-0959-203f-39e769f555d1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 1/15/23 10:43 PM, Juergen Gross wrote:
> On 16.01.23 05:27, Srivatsa S. Bhat wrote:
>>
>> Hi Juergen,
>>
>> On 1/12/23 7:21 AM, Juergen Gross wrote:
>>> The two paravirt callbacks .mmu.activate_mm and .mmu.dup_mmap are
>>> sharing the same implementations in all cases: for Xen PV guests they
>>> are pinning the PGD of the new mm_struct, and for all other cases
>>> they are a NOP.
>>>
>>
>> I was expecting that the duplicated functions xen_activate_mm() and
>> xen_dup_mmap() would be merged into a common one, and that both
>> .mmu.activate_mm and .mmu.dup_mmap callbacks would be mapped to that
>> common implementation for Xen PV.
>>
>>> So merge them to a common callback .mmu.enter_mmap (in contrast to the
>>> corresponding already existing .mmu.exit_mmap).
>>>
>>
>> Instead, this patch seems to be merging the callbacks themselves...
>>
>> I see that's not an issue right now since there is no other actual
>> user for these callbacks. But are we sure that merging the callbacks
>> just because the current user (Xen PV) has the same implementation for
>> both is a good idea? The callbacks are invoked at distinct points from
>> fork/exec, so wouldn't it be valuable to retain that distinction in
>> semantics in the callbacks as well?
>>
>> However, if you believe that two separate callbacks are not really
>> required here (because there is no significant difference in what they
>> mean, rather than because their callback implementations happen to be
>> the same right now), then could you please expand on this and call it
>> out in the commit message?
> 
> Would you be fine with:
> 
>   In the end both callbacks are meant to register an address space with the
>   underlying hypervisor, so there needs to be only a single callback for that
>   purpose.
> 

Sure, that looks good. Thank you!
 
Regards,
Srivatsa
VMware Photon OS
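
For illustration, the shape of the merge under discussion can be modeled as
below (a simplified, self-contained sketch; the struct and function names
mirror the kernel's paravirt ops but this is not the actual Linux code):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the paravirt MMU ops discussed above: the former
 * .activate_mm and .dup_mmap hooks collapse into a single .enter_mmap,
 * in contrast to the already existing .exit_mmap. */
struct mm_struct { int pgd_pinned; };

struct pv_mmu_ops {
    void (*enter_mmap)(struct mm_struct *mm); /* called from fork and exec */
    void (*exit_mmap)(struct mm_struct *mm);
};

/* Default (bare metal / non-PV case): a NOP. */
static void paravirt_nop_mm(struct mm_struct *mm) { (void)mm; }

/* Xen PV: register the address space by pinning its PGD. */
static void xen_enter_mmap(struct mm_struct *mm) { mm->pgd_pinned = 1; }
static void xen_exit_mmap(struct mm_struct *mm)  { mm->pgd_pinned = 0; }

static struct pv_mmu_ops mmu_ops = { paravirt_nop_mm, paravirt_nop_mm };

void use_xen_pv(void)
{
    mmu_ops.enter_mmap = xen_enter_mmap;
    mmu_ops.exit_mmap  = xen_exit_mmap;
}

/* Both former call sites now invoke the same single hook. */
void activate_mm(struct mm_struct *mm) { mmu_ops.enter_mmap(mm); }
void dup_mmap(struct mm_struct *mm)    { mmu_ops.enter_mmap(mm); }
```

Since both call sites end up registering an address space with the
hypervisor, a single hook suffices, which is the point Juergen makes above.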


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:14:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478837.742268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHS7p-00040l-94; Mon, 16 Jan 2023 16:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478837.742268; Mon, 16 Jan 2023 16:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHS7p-00040e-6F; Mon, 16 Jan 2023 16:14:21 +0000
Received: by outflank-mailman (input) for mailman id 478837;
 Mon, 16 Jan 2023 16:14:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHS7n-00040Y-PK
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 16:14:19 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2055.outbound.protection.outlook.com [40.107.7.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d366c7a7-95b8-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 17:14:17 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7425.eurprd04.prod.outlook.com (2603:10a6:20b:1d6::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Mon, 16 Jan
 2023 16:14:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 16:14:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d366c7a7-95b8-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f92334b1-7819-d638-fabd-91baca711615@suse.com>
Date: Mon, 16 Jan 2023 17:14:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/5] xen/version: Fold build_id handling into
 xenver_varbuf_op()
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113230835.29356-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0092.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7425:EE_
X-MS-Office365-Filtering-Correlation-Id: 39840d4b-d021-4fbe-2d5c-08daf7dcb64a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 39840d4b-d021-4fbe-2d5c-08daf7dcb64a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 16:14:15.1089
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7425

On 14.01.2023 00:08, Andrew Cooper wrote:
> struct xen_build_id and struct xen_varbuf are identical from an ABI point of
> view, so XENVER_build_id can reuse xenver_varbuf_op() rather than having its
> own almost identical copy of the logic.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit with a style-related question plus a remark:

> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -476,9 +476,22 @@ static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>      struct xen_varbuf user_str;
>      const char *str = NULL;
>      size_t sz;
> +    int rc;

Why is this declared here, yet ...

>      switch ( cmd )
>      {
> +    case XENVER_build_id:
> +    {
> +        unsigned int local_sz;

... this declared here? Both could live in switch()'s scope, allowing
for re-use in other case blocks, but making clear that the values are
unavailable outside of the switch().

> +        rc = xen_build_id((const void **)&str, &local_sz);
> +        if ( rc )
> +            return rc;
> +
> +        sz = local_sz;
> +        goto have_len;

Personally I dislike "goto" in general, and I thought the common
ground was to permit its use in error handling (only).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:25:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:25:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478843.742280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSIf-0005aP-D6; Mon, 16 Jan 2023 16:25:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478843.742280; Mon, 16 Jan 2023 16:25:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSIf-0005aI-AK; Mon, 16 Jan 2023 16:25:33 +0000
Received: by outflank-mailman (input) for mailman id 478843;
 Mon, 16 Jan 2023 16:25:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHSId-0005aA-6j
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 16:25:31 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2049.outbound.protection.outlook.com [40.107.15.49])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6326ad9a-95ba-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 17:25:28 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7620.eurprd04.prod.outlook.com (2603:10a6:20b:2d9::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Mon, 16 Jan
 2023 16:25:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 16:25:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6326ad9a-95ba-11ed-b8d0-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <230bc200-e163-52ee-6689-76bf82b526a4@suse.com>
Date: Mon, 16 Jan 2023 17:25:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 5/5] xen/version: Misc style fixes
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-6-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230113230835.29356-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0016.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::26) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB7620:EE_
X-MS-Office365-Filtering-Correlation-Id: 688f353b-ac87-4e4c-8a93-08daf7de4638
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 688f353b-ac87-4e4c-8a93-08daf7de4638
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 16:25:26.1288
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB7620

On 14.01.2023 00:08, Andrew Cooper wrote:
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:26:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:26:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478848.742291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSJW-00066e-NF; Mon, 16 Jan 2023 16:26:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478848.742291; Mon, 16 Jan 2023 16:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSJW-00066X-K5; Mon, 16 Jan 2023 16:26:26 +0000
Received: by outflank-mailman (input) for mailman id 478848;
 Mon, 16 Jan 2023 16:26:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHSJV-00066J-P0; Mon, 16 Jan 2023 16:26:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHSJV-0003Vu-Mp; Mon, 16 Jan 2023 16:26:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHSJV-0001Ew-7I; Mon, 16 Jan 2023 16:26:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHSJV-0001IU-6p; Mon, 16 Jan 2023 16:26:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175921-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175921: regressions - trouble: blocked/broken/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 16:26:25 +0000

flight 175921 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175921/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    4 days
Failing since        175748  2023-01-12 20:01:56 Z    3 days   17 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    2 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool, which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB), so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
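
    [Editor's note: the read-as-zero/discard-writes behaviour described
    above can be sketched as follows, with illustrative names rather
    than Xen's actual vmx.c structures:]

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* Sketch only: when no model-specific LBR information exists,
     * MSR_DBG_CTL.LBR reads as zero and writes are discarded, instead
     * of hitting domain_crash(). */
    struct lbr_state {
        const void *model_specific_lbr;  /* NULL on SPR and later */
        uint64_t dbgctl_lbr;
    };

    static uint64_t read_dbgctl_lbr(const struct lbr_state *s)
    {
        if ( !s->model_specific_lbr )
            return 0;                    /* always read as zero */
        return s->dbgctl_lbr;
    }

    static void write_dbgctl_lbr(struct lbr_state *s, uint64_t val)
    {
        if ( !s->model_specific_lbr )
            return;                      /* discard the write */
        s->dbgctl_lbr = val;
    }
    ```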

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removes the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)
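
[Editor's note: the fixed-size form of the last commit above can be
sketched as below. Field names follow xen/include/public/version.h;
treat this as an illustration, not a verbatim copy of the header.]

```c
#include <stdint.h>

/* With both fields explicitly 32-bit, 32-bit and 64-bit guests share
 * one layout, so no compat translation is needed for the subop. */
typedef struct xen_feature_info {
    uint32_t submap_idx;  /* IN: which 32-bit submap to return */
    uint32_t submap;      /* OUT: 32 feature flags */
} xen_feature_info_t;
```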


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:27:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:27:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478854.742301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSK6-0006eF-0h; Mon, 16 Jan 2023 16:27:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478854.742301; Mon, 16 Jan 2023 16:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSK5-0006e8-Tt; Mon, 16 Jan 2023 16:27:01 +0000
Received: by outflank-mailman (input) for mailman id 478854;
 Mon, 16 Jan 2023 16:27:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHSK5-0006dr-3k
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 16:27:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHSJk-0003W7-Cw; Mon, 16 Jan 2023 16:26:40 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.9.85])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHSJk-0004M5-4d; Mon, 16 Jan 2023 16:26:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=q3K9ndZ8QJO7+PVyn77l3AM2xxpKYRfzapTlesiVvrg=; b=KtrBgE4tVLcmHI9PJUYt4kfh7L
	KUVWltyn4+AYUUbVVEiOoJRVWYtE2TTiaCV7ubtGdqSBdWWr8s9Ia+Yk7lz0ozlAvaFtM1kAWh1V9
	W1yG9R1aWNMdH0RotDJdxvp1FdL+YspqzreCJcciIh8WqZfwpunE3wXCVFQjy7njxaIc=;
Message-ID: <b2dd000f-ca63-58f8-94eb-ce200e8ba30b@xen.org>
Date: Mon, 16 Jan 2023 16:26:37 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_*
 subops
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Jason Andryuk <jandryuk@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-4-andrew.cooper3@citrix.com>
 <5ebe5337-f84e-12a1-e8a0-92832100946d@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <5ebe5337-f84e-12a1-e8a0-92832100946d@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 16/01/2023 16:06, Jan Beulich wrote:
>> --- a/xen/include/public/version.h
>> +++ b/xen/include/public/version.h
>> @@ -19,12 +19,20 @@
>>   /* arg == NULL; returns major:minor (16:16). */
>>   #define XENVER_version      0
>>   
>> -/* arg == xen_extraversion_t. */
>> +/*
>> + * arg == xen_extraversion_t.
>> + *
>> + * This API/ABI is broken.  Use XENVER_extraversion2 instead.
> 
> Personally I don't like these "broken" that you're adding. These interfaces
> simply are the way they are, with certain limitations. We also won't be
> able to remove the old variants (except in the new ABI), so telling people
> to avoid them provides us about nothing.

+1. This is in line with the comment that I made in v1.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:35:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:35:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478860.742313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSSA-0008D3-Qc; Mon, 16 Jan 2023 16:35:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478860.742313; Mon, 16 Jan 2023 16:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSSA-0008Cw-O0; Mon, 16 Jan 2023 16:35:22 +0000
Received: by outflank-mailman (input) for mailman id 478860;
 Mon, 16 Jan 2023 16:35:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PHax=5N=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pHSS9-0008Cp-Ih
 for xen-devel@lists.xen.org; Mon, 16 Jan 2023 16:35:21 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c2dbec27-95bb-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 17:35:19 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2dbec27-95bb-11ed-91b6-6bf2151ebd3b
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1673886917;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=y5aBMbxFAFIdStpWd8PjTw92BwcyzNQZ6KDghf5lWME=;
	b=WcOzXIZhvSGHI4kvKmeXfaaJdxjdle5MVf1vpbxI/AmUPipq+r7qwkITTv//VM9mg2PMc7
	LJyptKIS3yd4uD1y6XgEyOBdnZINhLk4pwbRiAJcb7tXuvov9dJEQxirB+Wi2tCA8f1eIa
	h2cO72zuZ79rBMwZaKUf4/SNsPtLgaWlTBOzrBSUW7ykALM+MBfG/QKRYEG1fzOv6vMTSa
	yJY53J3efC7hTx+lqxx5IvHLGh81f78i2M2JK9nKssO6CgT8pgnSPs0ravGlqLQ+uozKD7
	9S28aL/s/ATwBDcokK0dTbo+ezQyz5JBswbnBs3R4ZIqdf2rLtC9muKIMPPjcA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1673886917;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=y5aBMbxFAFIdStpWd8PjTw92BwcyzNQZ6KDghf5lWME=;
	b=9t2m2hJfCfUhEd3ewF1n6zPM2zYK0sf7ZNxJnZ3SeXx+vQAZEnM9dxnpLeYBFOWDXJwYSF
	lbgkBHygutn1QsAw==
To: David Woodhouse <dwmw2@infradead.org>, LKML
 <linux-kernel@vger.kernel.org>, Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>, linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>,
 Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
In-Reply-To: <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
References: <20221124225331.464480443@linutronix.de>
 <20221124230314.337844751@linutronix.de>
 <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
Date: Mon, 16 Jan 2023 17:35:16 +0100
Message-ID: <875yd6o2t7.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

David!

On Mon, Jan 16 2023 at 09:56, David Woodhouse wrote:
> On Fri, 2022-11-25 at 00:24 +0100, Thomas Gleixner wrote:
>> +
>> +       if (ops->domain_free_irqs)
>> +               ops->domain_free_irqs(domain, dev);
>
> Do you want a call to msi_free_dev_descs(dev) here? In the case where
> the core code calls ops->domain_alloc_irqs() it *has* allocated the
> descriptors first... but it's expecting the irqdomain to free them?

No. Let me stare at it.

> However, it's not quite as simple as adding msi_free_dev_descs()...

Correct.

> diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
> index 955267bbc2be..812e1ec1a633 100644
> --- a/kernel/irq/msi.c
> +++ b/kernel/irq/msi.c
> @@ -1533,9 +1533,10 @@ static void msi_domain_free_locked(struct device *dev, struct msi_ctrl *ctrl)
>  	info = domain->host_data;
>  	ops = info->ops;
> 
> -	if (ops->domain_free_irqs)
> +	if (ops->domain_free_irqs) {
>  		ops->domain_free_irqs(domain, dev);
> -	else
> +		msi_free_msi_descs(dev);

This is just wrong. I need to taxi my grandson. Will have a look
afterwards.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:36:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:36:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478865.742324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSTe-0000Ki-4I; Mon, 16 Jan 2023 16:36:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478865.742324; Mon, 16 Jan 2023 16:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSTe-0000Kb-1N; Mon, 16 Jan 2023 16:36:54 +0000
Received: by outflank-mailman (input) for mailman id 478865;
 Mon, 16 Jan 2023 16:36:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHSTc-0000KV-M4
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 16:36:52 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2047.outbound.protection.outlook.com [40.107.22.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f9f00981-95bb-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 17:36:50 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7361.eurprd04.prod.outlook.com (2603:10a6:20b:1d2::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Mon, 16 Jan
 2023 16:36:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 16:36:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9f00981-95bb-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aOfY0HTQ8rUg1f96n40RvSVfla/PQ1xU5QLlCrXbwF+BSBGtsoTRtcJ7QO6PZVzR13oBHON5z/dMRPytEhl0z3pB+XuAXDnM0u5YMrmyWXP3bh0pwoxDMFLgdgnYq8YPidN7cfzSInUA5oM1bEfS1k1W19K4EmUfdQpxbT4iGmQBWPIsBjNQlrum/uWnNceuQsbjmvJ08s2Je/Tl7dLSB42x2Rre240frawZEC69LJGrPwXmR1FMwvxa6gXYo+tgCkAaea3JwNK9j743/IRZ+dKkGMkQotpCBuZNTdG0DdvGp5HtryzxK+gFp5l+hL861O0VAZAWrmTasbDkng53Jg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xdH49lKk6w+lR4PnD6C6TfMo64NpFmgTwni8Ne0AeLc=;
 b=AdubrvxhjxdBVFP3gsmSL+nLZAaNzVU4/z7h+nITs2f1rEV/+hV0dztwrTkuVSX637CZxUpW/OfpK3axYODrDow9xExAz9v46Ysix19BInxcwwzzyab14ilQp0U6QFyJkymdhHH8DZ98/4ZQFu9iZNKwhYoruEoc0wzn0BhBWWOZp6W/iKLQYutwyXGy1EREq+KhI3IUgVI2ZuiCX/7WzBclp2WYgUdgSslzGmQQcWhigd4YWCb8g4brLu0o9Gjabsn1dwCNVeOjOnLP+QWtct5i63lQgurVfwYOr+FAguQiPDGPt98j+M25AnXgOC1Y+Z+SM7Unp70xS7WsX7L2kg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xdH49lKk6w+lR4PnD6C6TfMo64NpFmgTwni8Ne0AeLc=;
 b=AM+aA1my0/OFt/u5Qor9adMEPy4hXTDYq362fHpFj+0MQ7S5PGj5frr3xDm+OnsPkkieKeadxltIwH/5XbhKqZSG5OmwUjoCEPKUbF60RqKgCQSDLePZtFfZiM3GBzf3agotIrQ9TpojpGUP/NKsPH+rQlXLIghGKn1A7+/vX3a/R/PZt3UyGY5jDmaYO1+w8fMqHfepcviBWbCgtjrrdW94B/+w/8xSxcWd8yMeQ8xxXYiZd4Ejf+ZRmkJKfXdTxpsdcOIiS0zunro+eyBWoLZEm6gHBnFq7ug8jOcBeL/VgdVZD/6UaS8bATMNRCJ1Vku1b2ZCIpPDJeDV2N1OeQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f8018fbd-bb6e-ca2a-3c97-5d6bbbb7b0e0@suse.com>
Date: Mon, 16 Jan 2023 17:36:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 1/8] x86/iommu: amd_iommu_perdev_intremap is AMD-Vi
 specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230116070431.905594-1-burzalodowa@gmail.com>
 <20230116070431.905594-2-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230116070431.905594-2-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0119.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7361:EE_
X-MS-Office365-Filtering-Correlation-Id: 6ccfec04-912e-48ce-04fd-08daf7dfdcd9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ccfec04-912e-48ce-04fd-08daf7dfdcd9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 16:36:48.3355
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qQnw2wlY9hJVq1051c+uLdzQ9bL+ZBpOJAUuAP/UrDEOKO2rUfme2RZ3B3udDFhmHpYlzUa9WZkBipmWRwq5ew==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7361

On 16.01.2023 08:04, Xenia Ragiadakou wrote:
> Move its definition to the AMD-Vi driver and use CONFIG_AMD_IOMMU
> to guard its usage in common code.
> 
> Take the opportunity to replace bool_t with bool and 1 with true.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:38:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 16:38:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478871.742335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSUw-0000y1-Jz; Mon, 16 Jan 2023 16:38:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478871.742335; Mon, 16 Jan 2023 16:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSUw-0000xu-Gm; Mon, 16 Jan 2023 16:38:14 +0000
Received: by outflank-mailman (input) for mailman id 478871;
 Mon, 16 Jan 2023 16:38:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SJiS=5N=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHSUu-0000xj-Ex
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 16:38:12 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2049.outbound.protection.outlook.com [40.107.22.49])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 29716d01-95bc-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 17:38:10 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7361.eurprd04.prod.outlook.com (2603:10a6:20b:1d2::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Mon, 16 Jan
 2023 16:38:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Mon, 16 Jan 2023
 16:38:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29716d01-95bc-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XRG5BTRntx99+0PjID1N62loV03PdQoiW7aBYYByfnZLk1NwyWqfGVSoC3CT2EtCUW3VniPlhVhPbul2n5r7StH8GKsvbk3Q3cbcegpH8Q6b5Oqr76PCPda6Kd54EDFWEaDG/AGd9N/eamB0T/11rvwzks+Fj0E2HP8fKhY1HiZgczyF16hiGYoFncyCyKWd/dwjhKtHHwirifX2zVaj5+6OHeADEYNaD6TFMjMBfoPHCkJoiEqrHWdvuqEtOjcDtgPRN9G+icGFDHDpSSqoFiZJsU7A4vGjFlOPvDYYVCua5ZG8TnqkDZ6N7BbnpX/Yzdnx0m7/dZYBLscrY5kFdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kgEbC9EL1hsYlCUMz/E4Jjnqg9IYKBLHy2GwFHXaIpg=;
 b=X2m1pYATA7o4bSRVZabxu1JXi4FyOQ8zeovxzuzyPyHUwJEIeoLsUYJxdtdQ8dxTh7j5tKNPMk6i82llkI9cTTQqxawwjzSlfSJbGsfWmD1nH7YvVBokZMiuw3rZ8osQBAZw5SeBNri557llicP/3y7kTkca4iYpOz/Nu2c+el0Oz++dzERV4NXguk92C1F5KpmBzG53tskNh6rfI9SLQlD3SM8oN69oFQT2evNOTMq5kPnYK1B9B/UuoY7pQjcahAS4QN7wk3HITUrmKUNrpIgVITji+2qPW2L0C86wOO/bdNu+76VpkQ4OtY4cBz6uYvIVAKZWxcJjEPWr1E/faA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kgEbC9EL1hsYlCUMz/E4Jjnqg9IYKBLHy2GwFHXaIpg=;
 b=aHefZXQiwk8hlUp4bjG31nd+1uJv+ZAu4YfoiUFNdPcFrLCWEICnJWdvF4KLO2+3Rj9Je6p0AjIFFkiI7COoWUVp4vieyDH8R3wH9hR+C4wLEroMlbKI7cNnUB9/SmTMZQ+Tcz/nWU30QWnWYfQz7PeKlj825+OM4y85dc4dKur8kIuiHUoaEGQIU8/rxx8kCf/X3ytgfQ3QbbElayZpOkC6qFt9/9KxOueLI0NeFkvkcsCk1olWCq9T4C3j+1VtxiwN9oCE9wIunLQNRu/GgcLQKqvnv0BHiaFkc+KAvmrscJTJdqyNEUaMBb8IvrCCanxa1mdO00Tth5s81VdVSg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <69b4225c-1e48-5949-20ec-41106a8ccc0f@suse.com>
Date: Mon, 16 Jan 2023 17:38:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 2/8] x86/iommu: iommu_igfx and iommu_qinval are Intel
 VT-d specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230116070431.905594-1-burzalodowa@gmail.com>
 <20230116070431.905594-3-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230116070431.905594-3-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0133.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7361:EE_
X-MS-Office365-Filtering-Correlation-Id: 507e8b5a-5da6-4b69-5592-08daf7e00cd8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
 =?utf-8?B?V0hxWWRTOHVOeHc5OW8xaEloc0VRbU9Bc0R0NGg3djFDYWVHVHpZak5XMXJ3?=
 =?utf-8?B?VDcvWmdmaGtpRzJHZ2kwK2MvcWNhYklEd0I1czFtNmtQdldjM1hSZXRwZzBq?=
 =?utf-8?B?UWlyYmVFVkdlVEMxOThqZnRtMHNLQzlVMlcxZ0RGU0NyWTlPby9EaWJUUnFt?=
 =?utf-8?B?VUJhSVZiR0N1dm44RkhPYXZSczUwd2lVRVdzS1F1MHNrSHcvb2hkU2JNMHJV?=
 =?utf-8?B?WHZCZE1BQ2hEbjFrWnZpNUZzOHBwckFUdVFIZlpUWjR5YlZldzNpbjhuRU01?=
 =?utf-8?B?QjBhQS83dG9rTHJibjhIVkw3SVQ1by9Rd1lRUTlOaHVJWkFOMXgyMFZhMWxZ?=
 =?utf-8?Q?5T29ItD777OirmeoyvjC4K8UF?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 507e8b5a-5da6-4b69-5592-08daf7e00cd8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 16:38:08.8460
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4oPVSzBhxO+GDNBvyTrsDdkCfBFtD3+MkEnkeAn1aDvEQ8xqYN9wkYDutMH+IYVZsajjeArW7NmtuYJJT2+hGQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7361

On 16.01.2023 08:04, Xenia Ragiadakou wrote:
> Use CONFIG_INTEL_IOMMU to guard the usage of iommu_igfx and iommu_qinval
> in common code.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
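[Archive editor's note: the guarding pattern the patch describes can be sketched as below. All names and the parsing logic are illustrative stand-ins, not the actual Xen sources; only the technique (referencing VT-d-only knobs in common code solely when CONFIG_INTEL_IOMMU is set) follows the commit message.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define CONFIG_INTEL_IOMMU 1   /* pretend Kconfig enabled the VT-d driver */

/* VT-d-specific tunables; in Xen these live in the Intel driver. */
static bool iommu_igfx = true;
static bool iommu_qinval = true;

/*
 * Common command-line parser: the VT-d options are only recognized (and
 * their symbols only referenced) when the driver can actually exist, so
 * no dead code or dangling references remain in an AMD-only build.
 */
static int parse_iommu_param(const char *s)
{
#ifdef CONFIG_INTEL_IOMMU
    if (!strcmp(s, "no-igfx")) { iommu_igfx = false; return 0; }
    if (!strcmp(s, "no-qinval")) { iommu_qinval = false; return 0; }
#endif
    return -1; /* unknown (or compiled-out) option */
}
```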




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:39:51 2023
Message-ID: <2f293899-fe37-eea2-2471-b1a212d35838@suse.com>
Date: Mon, 16 Jan 2023 17:39:45 +0100
Subject: Re: [PATCH v3 3/8] x86/iommu: snoop control is allowed only by Intel
 VT-d
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230116070431.905594-1-burzalodowa@gmail.com>
 <20230116070431.905594-4-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230116070431.905594-4-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 16.01.2023 08:04, Xenia Ragiadakou wrote:
> The AMD-Vi driver forces coherent accesses by hardwiring the FC bit to 1.
> Therefore, given that iommu_snoop is used only when the iommu is enabled,
> when Xen is configured with only the AMD iommu enabled, iommu_snoop can be
> reduced to a #define to true.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
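[Archive editor's note: the "reduce iommu_snoop to a #define" idea can be sketched as below. This is an illustrative reconstruction, not the Xen sources; the bit position and function name are hypothetical. The point is that with only AMD-Vi built (which hardwires FC=1, i.e. always-coherent accesses), the flag degenerates to a constant and the compiler folds away the runtime check.]

```c
#include <assert.h>
#include <stdbool.h>

#define CONFIG_AMD_IOMMU 1
/* CONFIG_INTEL_IOMMU deliberately not defined: AMD-only build */

#ifdef CONFIG_INTEL_IOMMU
static bool iommu_snoop = true;   /* runtime-controlled on VT-d */
#else
#define iommu_snoop true          /* AMD-Vi hardwires FC=1: always coherent */
#endif

static unsigned int make_pte_flags(void)
{
    unsigned int flags = 0;

    if ( iommu_snoop )      /* constant-folded when only AMD is configured */
        flags |= 1u << 11;  /* hypothetical snoop-control bit position */

    return flags;
}
```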




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:43:55 2023
Message-ID: <b7586398-8625-2cc6-c44a-821001293974@suse.com>
Date: Mon, 16 Jan 2023 17:43:44 +0100
Subject: Re: [PATCH v3 4/8] x86/acpi: separate AMD-Vi and VT-d specific
 functions
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230116070431.905594-1-burzalodowa@gmail.com>
 <20230116070431.905594-5-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230116070431.905594-5-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 16.01.2023 08:04, Xenia Ragiadakou wrote:
> The functions acpi_dmar_init() and acpi_dmar_zap/reinstate() are
> VT-d specific while the function acpi_ivrs_init() is AMD-Vi specific.
> To eliminate dead code, they need to be guarded under CONFIG_INTEL_IOMMU
> and CONFIG_AMD_IOMMU, respectively.
> 
> Instead of adding #ifdef guards around the function calls, implement them
> as empty static inline functions.
> 
> Take the opportunity to move the declaration of acpi_dmar_init from the
> x86 arch-specific header to the common header, since Intel VT-d has been
> also used on IA-64 platforms.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
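[Archive editor's note: the "empty static inline instead of #ifdef at call sites" pattern from the commit message can be sketched as below. Function and caller names follow the patch's description but the bodies are illustrative stand-ins, not the actual Xen implementation.]

```c
#include <assert.h>
#include <errno.h>

#define CONFIG_INTEL_IOMMU 1   /* pretend the VT-d driver is configured in */

#ifdef CONFIG_INTEL_IOMMU
/* Real declaration lives in the common header; this body stands in for
 * the VT-d implementation for the sake of the sketch. */
int acpi_dmar_init(void) { return 0; }
#else
/* Compiled-out case: an empty static inline stub, so callers need no
 * #ifdef guards and the dead implementation is never built. */
static inline int acpi_dmar_init(void) { return -ENODEV; }
#endif

/* The caller is identical in both configurations. */
static int iommu_setup(void)
{
    return acpi_dmar_init();
}
```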




From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:49:30 2023
Message-ID: <37367eb9-7700-27f8-1871-b84224f0c63c@suse.com>
Date: Mon, 16 Jan 2023 17:49:19 +0100
Subject: Re: [PATCH v3 5/8] x86/iommu: make code addressing CVE-2011-1898 no
 VT-d specific
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230116070431.905594-1-burzalodowa@gmail.com>
 <20230116070431.905594-6-burzalodowa@gmail.com>
 <f620f8ee-e852-de62-53ef-5cd243e4cc09@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f620f8ee-e852-de62-53ef-5cd243e4cc09@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 16.01.2023 08:21, Xenia Ragiadakou wrote:
> On 1/16/23 09:04, Xenia Ragiadakou wrote:
>> The variable untrusted_msi indicates whether the system is vulnerable to
>> CVE-2011-1898 due to the absence of interrupt remapping support.
>> AMD IOMMUs with interrupt remapping disabled are also exposed.

It would probably help if you mentioned here explicitly that, while affected,
we don't handle that yet (the code setting the flag would either need to
move out of VT-d specific space, or be cloned accordingly).

>> Therefore move the definition of the variable to the common x86 iommu code.
>>
>> Also, since the current implementation assumes that only PV guests are prone
>> to this attack, take the opportunity to define untrusted_msi only when PV is
>> enabled.
>>
>> No functional change intended.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>[...]
> 
> Here, somehow I missed the part:
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index dae2426cb9..e97b1fe8cd 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -2767,6 +2767,7 @@ static int cf_check reassign_device_ownership(
>           if ( !has_arch_pdevs(target) )
>               vmx_pi_hooks_assign(target);
> 
> +#ifdef CONFIG_PV
>           /*
>            * Devices assigned to untrusted domains (here assumed to be any domU)
>            * can attempt to send arbitrary LAPIC/MSI messages. We are unprotected
> @@ -2775,6 +2776,7 @@ static int cf_check reassign_device_ownership(
>           if ( !iommu_intremap && !is_hardware_domain(target) &&
>                !is_system_domain(target) )
>               untrusted_msi = true;
> +#endif
> 
>           ret = domain_context_mapping(target, devfn, pdev);
> 
> I will fix it in the next version.
> 

With that added (and preferably the description clarified)
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 16:51:01 2023
From: Marc Ungeschikts <marc.ungeschikts@vates.fr>
Subject: 
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>, Olivier Lambert <olivier.lambert@vates.fr>, Andrew Cooper <andrew.cooper3@citrix.com>, Henry Wang <Henry.Wang@arm.com>
Message-Id: <20230116165052.88E2312A926@mail2.vates.fr>
Date: Mon, 16 Jan 2023 16:50:55 +0000
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="_av-8B6A9R1qxNbZy6fL15VuNA"

--_av-8B6A9R1qxNbZy6fL15VuNA
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hi all,

As discussed during the last Community Call, here is a Doodle to choose the
appropriate time for the Xen backlog review meeting:

https://doodle.com/meeting/participate/id/e16WwpPa

Agenda:
- Review Epic Backlog and Status
- Feedback on the process (Project, Release & Bug Tracking)

Marc Ungeschikts (Vates)



Marc Ungeschikts | Vates Project Manager
Mobile: 0613302401
XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com


--_av-8B6A9R1qxNbZy6fL15VuNA--
I5MvAuqiN7+8N9/f1Uta94C5xzbCZQCWE/MSUqoLzAFZVrAccWRqNoomOTIj
1vIBT6uDNjTTptqYmtkxOnHR5ntqnVHtAPe4yAcA9aprX3O231N6ktJ6E0Ab
CLyWmc8gYFApBTADlsHWAswgZsAwYC3YMmAsODKwYVSHtXs4srvYmK22GT1g
a817p3cdvOein9871hntDnAfs4x95DeWB+XBl0S18MW1feMbibEUhD4AgAXA
DGYGCVhhOXkNNn7YBMDuISCOH5FpIjKH2JgDpt78cWOy+vWoEd1y/o/ujDp3
oAPcBcn0z79O/uKlJZ6aXG12bX0bwuYvAeiv7DgYmJkagWXAmMEWABxQs6CN
wcqMFKjt/xsHYjY2AbH73xgY2zC18CFTb36Cm9G3olrz0Dk/vqvZuTsd4M4q
tT3f61G54tNA9FoeP/wqs3NLCcagfmgS9f3jIM4CFSBmB073T6phGQDbDHjb
tW1sPlhQon3NUSBGZGEb4RbbjK63jehbY7f85L7LEHa0cAe4Tr4P0OUjt7xA
5XK/CqWv4kp1UfTQPUCjhqjWRGXHKLhpkNW2DsDynMkBVQCdALXNdKDY/s2Y
DWwtyLADa6yBI3kexQA2sI3oAW5G/9mcqn7irLt+uqsD2ycwcOt7vqOY9BqV
L/wx+cGLoPQAIkPhQ/eCx8cAZtT2HUHzyLSAVAZMNC/Hmje2d5E1F6zYuQCx
PE+ctVbTIQYtotgeNmDDgDUpkCMLhKZpm9HOqFr/a2v4c9GRmfDcrQ89YUNr
+gkJ2t3f7qEg9xKVy3+afP/ZINUFIjIj+2EO7AGshak00Dg04bReLC3gjac9
y1NuUwecqoW246ntfOCMCiH3lI5WK5qIFmtPX03Aucr3HvytpUsP//3IKHeA
+0QA7Y5vbaRC/t0IgveS5y0FiKAItlJFtO0hoFEDW0ZjbApmun70CZgyiJsN
gTHwSMwKyiA9A0aOj6UW0B6FYzpqUfSUUptI0ZUKmH7TpHrgk80p0zEVHqdS
ve/LSnV1PY3y+Q/C858KpRVIAUoDDITbt8Ls3ApSCrYRorL9oFu6E13q7AVi
yuCWW7VwxmEjMR1iBy0xGVqiD3GEIXXYkHHYkvdixy1yNm9sPnAzGjfN8PP1
w5U/O3fLA0c6wD3JwibSbK0GW5+NLbCNjA2bDQ6bhqPQchRZNpEtrjlrQctk
9YEv+6pQeBHlCx8nzyuz0gTScMBVsJUaGrffArIWpBTqByfRHJvODBCJkkyX
f7KZEUwiDXL9MTiRcdjaw2aZBEUK3NjObQ2XJeCNDDhiUGQcsCMLNE1omtE3
o0b4e8pXu9bdcfcTwnTwTrULMmFjEVuzni0vh4lK1kTEJiKOIsCaBlszzdZM
sAkPTz949xSHYQXM1Z4nXTbrclm99/NFlcu9SeVzH4BW3dmVGaI8w507wI0m
oBWstQgnKslSziCQ876S5Z8TwLZaDUycrPMxnlODFanZEL+usrpDOa1MBCiV
2gwxyBWBtQLYgKHSL2b4CniF76mSZf59APd2NO4Jlqg6vQTAZTCmLwYsoojY
RMaayMJEPhujOQVzncNwkk1z3IbRITbRyOJnvHAyNQ8+X9CFwrWUy70bnt8L
JaYBacTPbb2J6i03A40aSGuYWhO1fUdafKv5BooZIMqALLZtmVvBl9W0nIn5
xq+bWTSvbYvtGuPeiyMNJmM6GAbl9O3N6fp163545487wD1B0jh4gMjXl7C1
Z7GJFKLIMts9BGxja2esiRgmUjYMCxw2+zkKl9owLLOJijARWRMZNlEdUTRm
w+jhXF80mh/w30aF4l+S9rqgtQNt/CD3f3PvftTvuhPEBqQ1wqkqmmMzqe16
1GAd7Zy1BA1sm/0bZ9CQPncWRzZpkUkNcxa8DDbGxXqtgDR+L5S/TQpgHXgM
ojubk9W3nXnrnT/rmAonxK5tehzaIluj2Rpma7eTUncWVp8Vth06PrX5tgNs
7QMwkeJmY4BNuMZG0UpEUcma6EzYaD1pfR4F/luIkBPPqQ1qTqWaI0fAjRBQ
nNqnQkkEEZjIEWbiMxBnIJzGcuNvOEr7QlQy24y1EM+KTMRBUaqBKY6JMUgp
MDsnEZ4CIoBgwR5l7Q2AGTay5HcFTwl68h/ceeVT3r7m5h9v7wD3OItt1tlG
IWCMYhM12PK+nk2XhLMd23PB0xJdBuAAgAOj//VZn61ZbqNwdXFQvTS3uPBG
Isq1hKJa9KN7bmZmwGEIaAcAHXiIFDkPPrED2j5Oqf3bEpeNTYckOpaJAVNG
UyukSQvKfI2iVr3ODsCkyH0LE6Al2MYK7AGAkRMy2DAsA7rgP5eZ37v1souu
XX/7T490gHscpbj6rGjq53dU2BogipS19hF5x4MvfUMIYFf17n9eqoq5FytP
dR29zvNRhhKDYKMISkLa5Cn4/UUXVbAupcs0Wyx3lvPNYnsxZ4E9WzzXARNW
JgQh0bgOsCxuoYA30bAWBAX24vnrZoM1FirnQxf81+RQOnQj8M7nOnQ/rkSd
UhGFWvWgqVUiU6/6tl7pfcSx2js/PkC++jPy6KzUPIhVIKcevizDYED39jiz
MjJgE4EjA5UPECzpBnn66Ngrs3ATRCPDgpH+zVnGWBwBludHWcYUTx52c4FE
9SqAldPApMhpYqXccy1/a3ccKeVWC3m4cxEo5yld8N941vOe9pbHo8Y9pYDL
YWO/rVWnTK0amGqlPPrNzwcL/ez4v/+5B/BvkadelLC42jJaRyk+ZgTLlwF+
DhxGCXhhDXQxQH55H4LFJaiCB/KUA1qW6WWt8HIFfAkrjJNH1gim9vgZ0tBZ
elnueCKn6V14jEQLKwE2pWBVAGnKAFdWCN+DCrw+rxhct+s5l118/8aN1AHu
cZJFV1wd2Ub9XlurwNSqK2zYGFjoZ/0l3c9Xvn6HI7s4NDHzLIt5JjEAhtff
i/yTNsGGkQNvaMCRAUcRQIygv4TCikXIr+hHfkU/gmX9CAZ64C8qwesrQHcF
8IoBdM6HCjyQryWRHpNonOfPcTQgq30zThqJicCKkjgxqRSryGpgUqlJodwB
DrwO1GzZTTRfQ+W8c3Uh+N2uM5eUOjbu8XTSarVdNmruYWPWsok27f3sRw6u
fMN14Xyfmfzvv1iuct4fgdADy26pzWhVZLQftdu7xCiesxGILGp33gmERixG
9zkLgDwNnfOAonJLM1HCMcjGGBJeQRLGlZBW5GxPjgw4NOBGCNuMYKoNl/iw
6bUloeBsAE4ULospQYpdEoKNHJwlO5D7Lq3Er2RoBK82kfnOtisv+cK6m3/C
HeAeBym//E3R3s999CdsomUcmY0cRTsBbJnr+MOffZf2unJvIOJLnBtunbPC
FiS2KLNuATC1k2ICH8WLNsFbsgi1n26GnZ4Em2YmicCwrKEYYM/ZlaQAaOXA
SgJaUuJXCdJiGyFW8vH5LCdEdI4MTK0JU2nAzFQRzdQltSuJB2KXvYudNY41
NLtMGqwYxNYBWJGjRSoCPC+2PAK/K/eeqBF+F8BoB7jHS+vWa4c5MvewNU+x
xlyx7UPvPrLune+btZDQX1TaCMIbAeQSMncc0FcshYtWNJQAxrLEbV1CF2CQ
VsiduQrByuUIRw8hGhmFnZ4G12tAWAfCJtiGQDNyqWGtQUwg0oCWeC9ZsCzj
1OIJphExzsRsiQHyPZCn4ZXywJIesImBXIep1GBrTZhq08WaORMyY7lyFcfW
4siDcpPVMpSvwYEnsQhenesr/h6AP3g8APeUNdh3fPQ9RRs1n8/GnAFjdrAx
393w7g9VW0yEb/yxTznv3bpU+DOVCwDfB3nuAc8HPA+kPEDLQ2lAe+CWtK9y
z4lApNyQKPc/hyG4EYKbTQhXQlKwkctagUHW2cQEYX+5WDQoioCoCYQNcLPu
PgvKmDApo6w98ZFlidlmCFMPHZCnazCTVXBkMsWYklXLknUiC10K4PcUHYss
ioBmBNsItzanar+0+pu3/qwD3OMoD3/gXQNszP+BMUU25ic2Mnec875/TOqu
xr/yByu9ntLdqpgfoMAD+YGA1nP/ax+kNaB9uJSvgFd5gFIOwEqlXpBSmWVe
ZZIPqjUR0ZIZQyvN0XJS/ZDSFg240QDqVXB1GlyZBlcmgXpVQmwSTrOcOHDc
UuLj+Ak2isDNCNFkBeGRGdh6U0p9suAFYCzIU8gN9gChTZxNbhhj6s0PTNy7
/z2bfn5/2AHucZSH/vTajWzM89gazZG5ka19YNPf/SsDwOTX//B9urv4R5Tz
iQK/ReOS5wlgZ9G6ygMrBVKO2ug0MBBTHSlejikT8CeAWbUl0lqdojTjlbFr
M5oxKWM3wjlo1MEzU+DpCfDMBNCog8M6OGwCRiaAkfhwzMmNARoahNM1B+Bq
A7YRyXnTmrf8qkXuPKERu9nA1sKfR9Xmq1Z99aYtHRv3eCYlKpWtUKqPjbkC
xj7LGtMEsPXwP//mMqXptWBLR3NdnZYisgBZMImjJsTw2CYErLMZrQWRcl67
smCbAShJViqpfKC20FqWdBOHstLXKE4oAGAoB17tJgrl84CfA/UtdsCqVYDq
DOzMJLgyBcxMgK2rTI/TvkkGzdPwe4vwSjmYSgPRVBXRZBW21kiiDCyZwGxZ
kcrx+RSZK3a87BnbzvzPH5y2GbVTvnTnH370M/7NS590iKOwyCZayZFZ/mtn
rR9dvGnJy1TOfwV52leeLPOkkqWf4r/lQXFgNKMhCRLgByVJqzRcRq00B8qE
uxJiubx+lPZFi2kRg7m9Lo3bExGeD8oXobp6QaV+UM8iUL4AGGcvE7ddV3zN
noZXyMEr5qACDxxGYGuhSzmoQCe8oCTRYblP+frLH7pnW9gB7nGUj9+x2fzG
xecesM1oKYxZpjXO6FrV/Xqd99c7G1biq0olYak4ch+HqwAl2ah2AGa8/hha
TKm2nM26mqWSkaid7hh/dyuYqf08NJsmJ5DWYD8AFbug+5aAuvqE+hhJ6CtT
jZEkLBR0zoPXlQdpBXgaKtCZ0qPkWlc2p6r/+eH7dx7oAPc4yz/+5L7oN87f
uIejqKzz6ik67w353UFR5XxQFrixxo0B2wJeJGGjJKqfZNFUGxYpA+A58EuU
wSQfrXUFnDQLIz3W+URHfwFTWyElKVCQg+pdBOruA5QHshZswgT0JMmU2KzQ
+QC6qwjl+RLL5qzWJQDVD/1s6/90gHsC5BObH2q+/bz1I7m+4GkEfpYJDfnd
OajAl4ySSp0rSvP7FCcDkvwpzfL/LG5rsiRTK7E8k5WjozTzLDbwUeDN2MDy
eUosZEoATekMSTW254FK3VClXqhc0dn1Yf1oCgQBKshBFQsg7WK5HEWp4if0
vW5L/dP/1JiMOsA9AfIXb7lC5RcVXmItPzmsNhHVQ/jdeSjfazUXlErDWImN
mzUP6JEHVtiRZx3uCG053xatfrSKbbOFidqOEB1MbYVqbXZ3ch6tnT1c6gEK
3UDYAKIo07OBQYEPFQRuRfI8kOc57oS1IEU6WNXzgw/fs/207IzjnXYXnNNd
5HuXeqUcZkam0RyvYGr7QfSsL8Pv1s4OJAkjkcRBYR14LYNhj4Zqu8mb7dih
tDtPrAI5Trva1AyJqxYSnZkloGcKexTJNcQvKYl6xH4jAVaBlAWzAhQjtYA5
5T+wc7gckTwPpT1QsRt2/BB4bASoVR2NR2vAc+X35DOU2M5mega2Wiv6/T1P
N8w/auzewsrzTX7FWu4A9zgJG7uYPD7Xy2t0repFZf8UGocmMcWM7vXL4fd7
SdM5F+aKg/tGaCyUhMISZRnnYpXYiUoSCETp/6QkR6tS+1ecvZTdZVu1aYsT
lzEbIARxJba1UCIpS9Zjm8SQKWPJZvluBBbwimmwuAyUemCOHALGD4GUA2pM
u4yvxOvuhvG8HFTXUOWhzbfrXK5mrZ2sHdg1rTx/mrQ3HSxaajrA/QXJyJ++
kmy1cb7KeQEsw8/76DqjHzN7J9A4NAUbWXSftRK5xb2upFy0LZEFWwLYIOvX
J0kCBZCSrBepjEWhWioSQKJR4wSFOhrAiPmwMZOrxXrgpM6MbOoTAgAsuetw
vLS2OHEavyXWLcVChEjAy04L5wvQS5eDu3qByjQQ1Y6uygBDI09EXtkcPniW
GlxWBxETUZ1J1UipSnPy8GGl/f1Q6pBX7LId4D6WZMRMhWDNxV5vIUmRenkf
3WuWYGr7ITSPTGPyvl3oPXcNckt6QUYIKCZmTikwmTT5QNwKYEmUOWXHDszK
1X7F3j8RA5bSpEQM7qTVAWcAzWn4jdEaKFZZ8Aq0bEpddCnoWfxFHWvgNJjr
XvdSzjoDVCwBQQnUqMPOjIG4hpTX7swOCuu56P6fPOQtecFegAYYWAqgZIEu
ApYyaKNSQcU0ajtIeztJ6xqRsh3gPkKhyBDXG5tsvQnKec4UsAwv56F34zJM
bT2IxuFpTPxsG3rOX4vC4CLJiMXsKSN+i3GVtko7jaycxkvWZPk74RwoSstq
SIl2zfQHI5VoXmTrx5iOtpvjyshY8yYJihjAbrJRsrirWYMdaVxCZ173Ms9Z
EjAeSPvgyjioNikmkgb5AFvbT4e2e91nXbQFwJbK9vsVrOmDCcsglC3QA3Av
cXARSJ1FpLYx824AE0TEHeAuUKofvRHF33nuRlttQHcX0nZFiqE9B97pXR6q
e49g4qdbYM45E12ry0k4LE6Zoi3k77DHwlXgFFxKA2SkYLLdUROerEo5sQzl
eoslwITUoVHatSaxh9MHIcMFV7H1qqSDjqSiMwBuddg4S6dw3GMwCB7IKHFG
A6BrMeDlgMo4UJ8Rh832kfUWxectrT3XAjgC4Eh9ZPeDMKafQWUAZwC82ILP
I/JXEaktzLydiE5aKE2d6mCtj+wmU68pZu454/ChjWxoma00HLWPWUpiXIBd
eRo9Z5ZRWj0A2wgxde82TG7ZA9NoCgHFpMebTG9aeR3WZB629WEMEEUJQSb5
XKY0B2zSjjNx07qWjuUp26u9lCcFc1pjxkpJuY6W+jJKqJiucFI75yumZyr5
W7KJ8IQdJ6Ew5EugnqWgYj/geyBf++Tp/u/NEhPMl8+wuaUrDoP5frb2BxyF
P+dmI+Sw2cvWXAjgHGb2Ohq3TQ7f8i3f6+3LcxQus1FztQqC/nD8cDcA31Zq
sPUmKPBBSgBnlHOstEbP2mWA1qjsHMX0vdthGyF6zl4Dr5AXe9ZzlQJJVEol
/RJIcRKidT27MrRHZIokk9dj8nhWEzvSOif9QFKOr1DGEo3M1JYuTmzgWJGn
kQ1SGnEfBafHZdlvSWcwCNqNhfFSsyK2bQFQrH2nx4BGtGj1W1/o4Z9vmJW3
ECwpM4CZcGLsHhuFoQI/Ccw+PP9cUmoMrq9FB7h7PvN3XbqrZ9A2aitto7Ay
mpnMgygkoolo/15CFDEzw0xWoUt5p1mMY3WRNQ4QWqPnzGXQ+RymH96HmQd2
IqrU0XveOgS9Pc7WZRcPTcrPFYOUdpW6BJAWABpOQ2TZzBuzA6tSmWVfpd1p
SDmQsZgolh0IkwqJ1AZmUm3ZOk4dPpuaGe7w2AZWAkorZkWbzeuyFK02MWXj
wl0AaSiV6+49R2kA8xJu/L4l3Dw8+qC19gyydoli9qG9VTZsjio/sE9Y4G75
y+uKKpc/N6rW1kB7i0l7PkAVADtJe7t1d+/YzA1fOIejkEEa5vAU7KJuKK0d
aA07+qJQGYkIpWWLofM5TN63A7U9I4iqdfRdsBGFgcWAjp0rKdmKPW4BYtJu
KY7f8izghQLX6+BmE5TLg/J5cdxUxplL7WsYAbXKkH2Yjo7/zpZhE83rSnWk
T69SaTwXypUlJXRKlZCGkigC0kY87u8CYFA0hYVlUIPFg7Y+sntGWX/AMoM8
308Zd08g4N71yueRLuY93du1KZqaebIu2S4QMYFnCPgZaW+LLhQrpjIdBj2L
edfbn+trYwmKwI0Q0dgk/HzgMk1khP1lQWQcMJRCYVEP9JPPxpHNWxEensDh
2+5B30Vno7RyEPDS+C10ppkHuYJKluLHuEmH075WogiM6MgRmIlxhwat4JWX
Q/f1Sz1bHCZTqQdGkoY2nPYlUxkvjV34LO3B0BZPSIIMEus1ovFZzAatUnOA
/JaohQO1zoBYIO55HuuFZf4rW+/N22ZjKVtLyu3vNkpK8xMKuHe8aEhxZFay
MU+31foqMAwIh0F0P4Cfr/nNP6ke9aHIGBibcGjDg5PQfV1Aj4YiI0ATEDMl
jeGCUh5LLjkbE/fvQn3/IRz50WZE561H9/rV0PmcgNJhkiV+y1IUxsSiGeNq
WwJHDYSjo7DTk5msmkK0fx9UoQgKgrRwkjNxWbKpJo4N3JYoBGUa6FFbq9NM
nzIid26F1N5uy7ARuwhJkiaG16px43N52ir/2DCY3HxbzkbhJYrQBWay4P1k
ov358hlPHODePnSZ5mrjEpP3LwNRD1ueAeFuAPec+8FPjc8Zx52oTHOOmIzk
90Mg3D+OIJ+DVQpkLEAuO8baOltWu2Xa830sOm8tprqLmHloN6bueRjhZBW9
m9Yh6OlKa9BYpYWMkgJ2HRstQBqm0UQ4sh92ZsaxupQSLe+0oJ2cgF4y4Mrg
W2K/3NqNJsmypbxdjsNlSrWnupJgbxrG4zQC0RYqAynAatczV6d8iTRtnIJc
53I1v6s0r406fsf3ejiKLmRqrLVsFTGPKvDdhdVnVZ8wUYVbNp1P0Uz9WSrn
XUqEAMxjIHybiHZf8Kl/nzcuSFqFkF4DiBzXwEzMIDo0Bb/cLzHPuDRHHDUY
uUOA0ho9Zy6HXypgYvPDqO3ch2hyGr0XnIVieUmijuLtnYhbkw62XkNz3z7Y
atWV0iglpB6VEHpsZQZ60aKjQJlUWyT9wVRiw7JQL51il++nTOOR2Gywadw5
3RSFUs5DvKsPeTKRbBpliM0FnfkbDPJ0FRqzAvfQ/35Vkx9ssGFzkwL3Wdfg
abtS6q7i+vOnnlgJCLYXcZMutYhyxDwBa79JRHsu+Y/vHnPJMX1do/rIhE28
64jBHCHcNwZVykP36gSsTrM4jetuvnFaUGkUBxfBu/xJGN/8MMIjUzh822aE
561H95krHbdXmoiwZUdUIYat1dDYsxdcr2dirKKNlYDRWlfNa0wCPBaqolva
M8mNtgyaOy4u1HTZvcQkQIZUzi29+49uGKk0iL2M2REbHpyGzrImSRhNHL73
wURh7PviPyjy/EAFuaUcRReBaEDmzIxi3AOih7ufdPlJL/k5ocAdPnNjF4f2
AlguANrA8j0M7HnK8B0LspOoUpuC5SqMKba0V66FaO4YQbB+JXR3UZwXkwEs
UuaxmAFBdwGLLz4bkw/vQW33CCbufgDhVBW9Z62B310UXDGYCaYyjXDvPtha
VSqD40SAaOM4ARD3YoiipFVTmiLmNns21pay1EtLUbKZqg1QUrnAnNbJtdQx
cFsimAiSz209JGMmIILQHS0oFxze8Kf/YHZ/6oMead3PJhokpddzFC2XfOA4
gD0A7ut/+gsmTpUo1InVuIZ7mE0vrCIwmhzZQ0+/e/OCjXuaqVkGdhBjietF
ALDnNv2wk1U0tx9AsHY5dHdJbpLcPM7cPEq3dPLyAfrOWYOgtwuT9+9A5aHt
CCen0XvOWhTKSwBiRGMTiA4cADcbzg6OTYOEtG7FPnbFmSzZtaTal9itDnEo
DUga1aUZCE4IO6wy5TxCoYwrLDiJCLTt5scZaJKX2LwsHc2zh1A8gyOANU+b
ah07Pvae820YrlTAEhD1WgqhgEMgPMxEexpTM2OrXvvrpxRX94QClw1HZGHY
9ZLNwdCiR3YCZgAPwfClSHxsm9h4dmIGzW37E/DGFbGsKWmRy4pb9iRTWqFr
1VJ43UVM/HwrmiNjODI1g9KG1SiUcrCHD8vSLyaBirNaUspOSl6XaIcx7hED
VykJswmIFSXhsKOcNQGw40BQ6nMlnF+VDAOhrchSCA+kc+67rDOLkkN00sAM
1kaIqlWEU9Nh9cDoJjZRFwiaQREYuwG6j5U+gCiqD77iLaccpfHEa1zGEWbe
SeABZtLQ6sKb1569B0Q7rtz2wLFnNBNg7T2OdB0bacq11xQNbMen0XxoN4K1
K6D6ul2TEKSUAdKZ1rmWJdevke/rweJLzsXkfdtR33sQ48N3odKVQ/eKRfAK
OZehk33QXBIh5g64PgxxNIAj2Vw6abEYmwQqWfIpm2WLaZCU6ZZDShJznKmf
UxmHk9JO57GpwORqy5SW3mgaMBGsNa59aqOOaHoG0cw0bK3mtHdopqNqbYKN
GQeww4IeVkqNn3HN756SYD1pwH3W/q3R95etu5UtdRPzuYBdykq9jBTfdvP6
s++/cuuD0/N93uYDpnrzTmoaZw9qm4Z5JIwFTbDTNTTu3wlvxQD0kj5nOgSU
tP+MebbQygFJAyALbS26l/SARw+hVm+iPlWDma6juGoRCv1daYdyss4WTXqP
2dSMEIJO0kshBmQmRew0asbGNZlSdZJMWmJFxHTJjA17VK2RVPAqH6bRhKnX
YWo12FoFplqFqVRgmw0Qu4mqcwWofA4Atk/v3Ptl02ju3vjuD51WRZMnpQXT
95et6wFwKRRdTory0NQkrQ5Aq/tJ0z3PfOC+OQG8+wUXroGxPwZhAFq5BsaZ
Dt0JoUU7WqIq5qC6S9CLeqB6SlCFAijnWjOBXeWrrTZgp2Zgq1Vwo+k27as2
MbN/EmG1AeVr5Ad60LW8HzoXuMWBFjYAACAASURBVEgFpWwtQuqs6cUDCDZu
yPR3oJYGzNmWpMjasi2vZ96LqZMJ+SYO+QE2jGCqNZhKFabWAIcM2wydgxiG
ieZXngevuwdeVze8gtSoKQrZmo/lNz3393AayknrHfb9Zes0QMtBeCYpWgOP
fNKKyVMNeGoHaXU/ebSbfN0gX0Xkeeby4dt599UXLoYxnwXzCxzVT2VayTuw
ZvdNgFauylUKB0kraYCXlrFztl2SjhvguQbJ0yNTaIxX3X4RpQA9Zw4g6CpA
aS39aSk5DykFPbAUwYb1rYBUWQBniDbZluM0W6k8JfYxRxGiag1RtYpopoJo
esZ1kYwpFX4Oygsc1VFp6FIXgr5+B9hiwX2XS9O6lcHYKRs235rf9OyvdID7
aEG8Yv0qUnQ+NK0irfrJUyV4isjXNeWrA+TrfeTrEQq8qXxem77m9LXUDN/l
Uv8OmKwFQPHeCEQOpKJ9Y1AmGlll3lMZjSjAJa0S6mF1oobqwWmYegTyFEor
+lEc6IHO+SkoBZB66VLkNm5odb5UpkGJoqTJM0tTvJSfa2EjA9towDQaMLWG
06bVKrjZTDQ3KQXyPei8W/J1Vw+83n54XT3wu7rhlbqgfD9pPcpxv2B2HGTX
TMTuCyvTl5cuuHpPB7iPNc57xsZuaCqTokHy1HJ4qky+WqwCrSnQlgK/pgJv
qqjt2XkTvlUp5EiRUGMpaTCXhJPYiK8Ta2OVaGPKAjUBcgbEif1KIAWEDYPq
oRnUj1QArRD0FVEq9yHfV0z6OTAAtXgJ/HVrXWdEk5LWbRRJO/1I3pPWn6GB
DSPYZhMcNh1BPjMRSBFUEEB3d8Pr6oIuFaG7itCFAnQuD10swF+0FCqXd90k
kwaASJoAMrfv0G4BE30rWPf0F+M0lVOKjzu0e8s0gOmb1529FZbzsJyD5SJH
djmDliuOBi3z4kbOa2hShxRhpdIeyHcNL5QfPzTI90Gedg9p0USedtmpRLPS
0VNYgArpnRtHHXylUFgH1KbrmNz8AJpHKogqDVSKubRNLgOq+zD03jHJvkma
ldM0MmSXSHdeSlYAUgoUBPAX9cLr6YLuKsHv6oJXKkDl8/I7PChPqhxiTV7s
AgW5tEslZ6riJWyX7V/ivo/YRvgiTmM5JSsgrtz2IAOoyWPiBxddcICY7wYz
gaGbTP19pjlAxv4ye55m4wG+LLWc8lFVzJpSGgh8IPBBfgDlyw45nvTLFQ2X
dHSMCxez8VN57i8l5JcuwfjmB1DfvR/N8UrGPiZopUHVmpgqGqw1yNdQMglU
PgddLMArFqCLBac5i3nnNOVybS2ksi2lkFybc7oAeD5UroCE6xXveJlllVG6
fatLO1uwwQFrzXc6wD3O8oyfbs5uEGYAjO55yaWf5cPVl4LRFwOPfQ34ngOp
74F9B1Yb+KDAAwUBOPBhfR8UOACTp8WRa3XYjuqpgLR5niagf8MqTHOI6t5R
mHrkqt89heKKJShu2gAVBK4tlO9DBb5bCYKg1d4las2mxRuryH4kJCQZNrFp
Y1OWmKegC6WY/wG0tFHN0BaTymVytq17+4sqCKY7wD0Zxvmqvlt4qnYP18Nn
xuVg1Ixca/lGmEQZWMjcnN2hUWvA11JsmLFvdVqU2AoupFqZKFl2CznAW9mP
6lgFjYkabNOgcWQahcggN9CVmANJijjuRB6H0qAc9VGlapLjY1m1lANxhpRD
WkEVSm67AGtTk4dtS+untFNPTNIgEHicib9hGrXTupW+Pl0v/EN3bjfvuGDN
jD1SfSURWnrcU0t3xex+uEiNUes2HoGxbiMS49rNI4qAMBInKkpTuFLpy9ak
LfEBaE3Ideeh8x6iaghTa6IxOY2w3oDfU3ImQmLfZri0nCmOjCmU8ZoS794e
X2uyH7B7UeWL0F09LaZAAk7O7AecCarFf7O1/83G/Gth3TNrHeCeJHn7isFd
0Ux4lTnSWJX2MUhvcKJogKSbdwKGJGXMKcvMWnBoYBoGptqEqYsGR6YSlzOf
SzrDEPy8j6A37wiESiGcmELtwBgoF0DnA+d8MbcCMbPrTgIym6F5Z/cEjr8r
l4fX15+QZ+bKrbc4nXG9mbVTUa3x4fyZV9yB01y803rWDXTVTGQ+HI03nhzt
reeooKALChQoaF+BAg0VKChfupbDAFaWabaJA2aMgY0sTCOCqUWw9QgxQZs0
Qffk4C8qwe/JZ4g2Cqw4Q75R8JRCz7I+2CVLUBufRmP0CCZ+ch9qK5aia80K
FJb0O7ILK+ETuMoJZDgJ2XKflO/gbCFVKMLr6XPfHfdjiHuXcXsTXm7Rxszg
+sTUgfv++pPTNwLBc4Hm6XzvT/uNibe87PKBaNvYv/C+6kuTpVM8fPIJ5JEL
l+WU2w9BkhEMcrFUw8nOOGzRatNm9tBVnkawrAtBXzHT+Ty2mymJCZPvIzhn
A6irhOrBI5h+eA9sowlVLKCwfAA968+AVypKdYRqidei5bVMJk0pqHwe3pIB
UC7naJIUGwFtfX+pNfOWtEY1trnzhps/t+dX37+bBnr/F8AdVx3abk9bpXW6
A/djD+2t/s75a+pcbV6Fui0l4GW4ComIXQ6/bmBqBqZqYGoRTCWCrRlw0+0p
xnHD5vYVl1MurK00oUsBlG6rxM1uJmIMqKcHXm83gp4u5JcuggkNzEwVzcMT
qI2MAbLZCGXolmCb9iKLTZt4c5QggL94idulJ7aTk7BfVg1lgy+ZPdgIgDGf
ffBPPvYJnmluBGEdgJHrq+NjHeCeRHn3bz9vW2PHofU83rgw6cGZ2RAa3L64
tFYSUNZxi6P1LQ2eKXHa2Vj4pVwG3PHewKlzRYEP1dvlSDe+j+LSRdClAjgy
CCdnUN93EM3pGSjPh8r5LusnoE3sYDm3yufhDQxA5fMttiu1zJg0htvSjze2
+a3dwlH0tt0f++oWNCIPwBkAzrym1L/7+ur41Ol4zwmPE3n4FU87I9py6Aa7
p7qplSKY+aUxOVujdfsnTUmrT7e5dDYEhpYlmHyNwooeeKWglTij0nAZFYsI
LjgHqpDLxIYVTDNC/fAEZrbvQzRThcoFyJWXoLR6OfJL+jMpZve/7umGv3QA
FOTld7SxxmKnsn0Xobjw0iF3mg1fx6H5bG71M6KbBtaWALwcwFkAtgH4j6sO
bZ/sAPdk2rtXX/qc6KGxb/CRZgnZJnLZkFGchdJJ+vMo8EK4Dy3HxzFUBfiL
Cigs6ZIYcab+LGObBudthF68SEg86W5AIIIJI8zsOYjKrv1gY6ByAfLlJeje
sBp+VxHwPHh9vfCXLgX5Hlo2giDV+nd7x5tMlx0iGLb286bZvDa/5llJwuGm
gbWLAPwqgH4AwwBuvurQ9tMqrqsfT8C97qK1+8HW2Knm0xGxbp2eWV5C2y46
oKT0uyX+Ge+G094aiQFd8DO75WTf42QjaW9xb2a5To9TWiG/qAe5gX5YY2Gq
dTTHxlHbdxBMQLBsKfIryiBf4WhDti3cNVcozPVZ2myb0bX5Nc8ayR5xfXW8
dk2p/wiAcwGsArD7+ur4eAe4J0k+ev/u6LqL1z1oTbQaU81zwS196Y/aLK/1
ZWpxspKAvXQXz8b02TB0zoP2VGrnirOWJD9qDVBvN1QuSE8qAI7juDoXoDDQ
B18akphGA83KNMJaDdZa6MCHzgetAVmeJU47K355Dxvz9tzqoZ/O9vY1pf4p
AAUAawEMXFPqf+D66vhpo3UfV6ZCYu++9LLV4cNjX7L7ak+N98BLQlvZJTW2
eQkZjkImjKTJTW2NdGMRiYL5fXnk+gqOxK4yfcJUthqiD/65awHPS7ZsJZWp
cohplL4Pb3ARjCJMPrQDtV17HaGnvw/FNavQs2Et/O5uHL0vG6V2b8tmajTO
lt8WVutfL539gjlDXjcNrC0DeDWAATEXvtsB7skG7yuv2Bj+fOTf7Wj93KxS
TR2uzHNFGcpfZlhiuzUGL6UtRVWgUBjscjRDIbS3OmsK5GvotavgLR9ICeoZ
GiM8Dd3bhdwZg1DFHECupq16cByTP38IzUOHwcZCFQvo2rgePRvWwSvmQdpD
a1tzuVb39xQYf2Ii+4n82uccM8lw08Da5wO4AsAEgM9ddWj7wY6pcBLlnetX
HuFA/QhhdBnXTRktMfl5NtDLOnNx909u3x2SwJahfZWYC9QWz4VUHXC1Dirm
HTNMlnnSCrqrAH/5YgTLF4kD5sJqpBSCniK6Vq+Av6gfzAwzNY3qjl2o7NoD
YwzI86DjDQkJ2VDcNJj/xkT8kfy65y4oM3ZNqb8GYBOAbgAz15T6d11fHe8A
92TJR7bsxR+/4YrR5tjM3WDzZK5Gy1oC8nR0PDcFL9DCfYj9nbb9OthaeHk/
w1nIOGixNCOgUgOKeahCDrqvBL/ch2BpP3RXrpX3kLGFSRGC3m4Uly9FbnAA
qlBAc+wwarv2oLp3P5pTFfn+HKTb4gyAP42a9iOF9c9fcDr3mlL/DIDzAPTJ
VN12fXW82QHuSZS/veUBvPuFF+83im9GM7qYq9EKJC0PqSVJQUeBN8OyijNo
3KZ1DaA95TJpQGtbUM7MkjACGjUEq5YgWD0IHVdN2CzZBhnyDSeZOVIEv6uI
wuASlM5cBVXIIzx0BI3Rg6jvO4DKnv2I6o2KtfjjmYMTn7rjkl+qf+4RjNH1
1XFcU+o/B8ASAD6AB6+vjp/yXF3CE0S2XX3JoBmb+ajZPfNS27D5JGLQFv6C
JCCo3ZlL+hu4piIuSUHw8hqF/rw4ZM7WJU+KMD0FlfNAOSH55H34mzYgWLcK
qpRPS3B0WiKUlOTEpUUtSQkNgBDVm5jeuReVnXvRGB07MLNz7Lbp23Y9wJVw
Oxib4Xp9VUCoXXVwOx/DxtUA3iphMQbw6asObd/eAe4pJHtfdXlfuG/qOnOg
cq0day6xyoGXhRye7BopiYW4gXhL+liliQhogDyFfF8OftFH3OeBPAUEGsrP
RBmIQNrtA6xXLIW/cTW88mKokpSOJ7vmqKQaI67MyBJ6EgArbcJ68zu7v3Hz
p/b98TeZBoI1cBvsEYBJALsB7AVwEG4LqMmrDm1vtIG2COBiAFcByImp8Omr
Dm3f0QHuKSb7fv25+fDBg8/kidp77c7KpWziPRkgtEJhESaZMAdQViTWQxxW
EAWpCbrLR7Ck4BIGnk7Da0l1sTDHVCZ0FnjwBpdAryrDX1WG6iqkIE22hFIt
FcdJtQbpcfaCD8HPfyq35oUHbiqv82F4EYAyXCp3A4CSTLWq2L9VABV5biSG
uxjAoIAWcDvofOmqQ9vHOsA9FTXvm5+jzP6JMlea77K7Jt/Ik1E/DChJ+WbS
xUnzOdVmF6us2aDgDxagu4LEXEhitNn4bty4JC4N0uSqkQs5eKtXwD9zOVRP
t9sGy0tAmikvUg1o/x7ki+8C0W25DS9vSRjctHQtgeEDyANYDeAc+b+AlKWR
vffx/bcC6Buh6KdXjW7jDnBPYal84To19oUfPdfunXwnjzeeyhNRLyxad30k
auUraMwKXspp+INFqECnnIWYeKPRymeINXEyMTK1ZINL4K9eDjXQD10qgAp5
kO8beP6DCILPcan/44UNL1kwo+umgbU+gGUAlosp0ScaNo5mNwGMArjnqkPb
93YSEKeRHPi/r+xr3rXzRXam+Toeqz6PRxp+mmnLgFcBRzHPVFopoUoa/qI8
KNAtO0Qm5kJG06avq8z5UzYb5XPQ5QF4a1c9EKw740tU6vpa/uI33v+L+L03
DazNifZlEDWuOrjttCOUd4ArMva3b6bqj7cuhu892R6uvs3uPPICHqkFrj3M
HODNpoYVQB5BFTX8/pyzd7OtneL2ULEJojMApqwNTAxPR7Sof6saGPgEafVt
b9PZu3pfdF2zc5c6wJ1TRt/7RgoPVQLS+qxo3+E3m4dHrsZEY4Aj20PVyGtJ
YChKCTnCayCPoHIKXl8OKqdaTQTp8sjZFLMmKZVX0+QHk7S49261fPCz3hkr
v2PGJ2uLf+NvTOeudID7iOXI1/6yv/K9u59p9h6+gierm9A0a2HsGXy4UsRk
oxW8RIBPIA8gX8HrDaAK4lwRUq2qFVDMMbTeC0/vpEJ+K/UWf6RXL7958Tv+
fktn1DvA/YXKwY++Y3Hjvp0reaK6DL5eg2a4kWcaa2CilVxtLEY+txTVRhGa
FGmX+lWeangF7zAF3gQU7UHg7abA3wZjtqCQO6CXLdnvv/iZ+3oveUtHs3aA
e/xl/N//Sjfu2RZEe8Z8rtV8O1HRKOQ1pisq0azWAgD7pcBQITDkqwg9pVAt
H2gO/MEnw84odqQjHelIRzrSkY50pCMd6UhHOtKRjnSkIx3pSEc60pGOdKQj
HelIRzrSkY50pCMd6UhHOtKRjnSkIx3pSEc60pGOdKQjHelIRzry2OW4FksO
D5bVPN9hh0ZH+GScc3iwTMf47bzQaxseLB+PHsM82zXIdavHeu6h0RG7wN8W
f58H1xyvC67P2BSIDsqGwo/qPj5W8Y4jaPMAPgy3l1a7GACfA3DDIzpneVkJ
zB+D60Q4283+RwA3L+BUlwG4fB7w7gLwtQX8xhyATx+HcawA+CiAn7W9fg2A
5+GxNeSeGh4sv3dodGTXMe7dagBXA3gJgIvgGuel21YyjwO4C8B/DQ+WbwRw
YGh0pHnaA1ekBLery2ySf6TABfOzAfwKXOfsdpkG8EcL1Nh/LgCYS3YND5Zv
Gxod2X+M02n5fb/ocZyUidMO3Ivl+x4LcMcA/INMztnGZy3c5n1vgWuWN5cU
4BrpvQTAdgCfGB4sf2ZodOTQiQCuOo7nbgL4+jzvnz88WF7/CM/5ynlu2o1w
/V2PJRfB7XkwnywG8MIFXtPxaBiX3U16Ia//Is6N4cHyebJqvfsYoG2XtQDe
B+CTw4Pllac1cMWOuh/AfXMcshTA0x+B6bEcwCXzXPNXZbIcS64WYB5rpXjO
8GC58ERxdoYHy4MA/hLAcx6l7+MDeCmAjw4PlntOZ40LAPsA/HCO93oAXC52
4kLkcnEQZpOHAfzsWE6H3JxnIO3APZ/TejFcY+STMYbH02k+yjEV8+n5AP7P
PL/HOWXO/p7PdHougF+Rc56eNu7Q6EhleLD8QwCvFaC2y5MBrITbxXs+wHkA
njaHowcAt8kkOZZslO9ciKwDcMnwYHnz0OjIXH29DID/Fm3DswBkBYAL5/js
NgAPzOOcPdKN8u6Ea9B8LNBOiA3dri3fNs/nDosTeq9M+mcCeAWA4izHdsHt
zv4fCzTdTknnDKJx980B3PMBrB8eLG8/Rkhl2TxmwgyAW4dGR6aPAf4AwBDc
tkgLXY1eCuArcrPnsuPfNs/nXwng7+ewM28A8Bdz/CaeBVzHko8B+J8F2uQT
s9j0F88zOT8K4P1DoyOhjOXXANQAvHkOn+MyUUinL3CHRke2DQ+Wfwa3qUb7
TQoAPBtu6/nGPKc5U5yqucyR2xZwKT0CpNmkMYf58Gy5ARNz/DaeSzPKUjk1
j4NUHRod+UVuEjI5NDryaLczXTZHpCYG+v/EoJXfPT48WP4C3G49g3Pgas3w
YPmuhcaMT0WNGztOL5cQWLu8BMD75wLu8GDZFyeue45BvW9odGQhLeYvBHDB
HO99GsCvzXLz8gBeI0vk41n0McyLFbO8fheA/zfHSqrEKT9uiYkTBdzvybJx
5izvnSWAGp7js3kAL5tnqf76Aq/hrXO8flCW8wvFAWyX1w0Plj94LFPkNJfR
Y4D6ncOD5fuHRkcezGjdaTGjToqcKOBOi7H+zjlm9GvmAe66eeyvIwC+tYBQ
z5kAnjXH2/8DYD+Ab8wB3BUSQvvqaRDSOmY0Yg5f4hBcEmHDHPfnMgBfGx4s
fxzAF+V+mpOR6j1R4bDsYH0ZQDTHIVcPD5a753jvl+aZYN+Yx47MyivnWNIs
gG/LOX4AYLZtwwPMnf07leQ88fbne1wyR2y6KYCcT+tuEgfwHgDvhQtlLjtO
XI1TA7giWwH8eI73Fosj1K5BcuLZz+U0fOlYs354sNwF4AUCwHZ5CMDPxYHY
KSGl2cboQskqncryPlm15nt8Hi5N265YIgD/BuDuBeBlJYD/C+C7cr53DQ+W
L5WQ5eMSuDNwMc/ZpCjgapfL4TZHnk3uxtxx0KxcKnb0bMvoT2WJBIARmViz
xWxXArjycWDLRvO8t0MAudANqAtifr0PwJfg0r1nPe6AK8yh2+cIH2kAF8yS
534hZg9yQ2zbY8VuPYlILJ9jIt0yNDpSzZgzt4i9N5uD+MzhwfKix6t3JqvO
TeJv3A639y8vEENr4ZhrNw4Pll9+IswHdYLH54F5QkvrsuEqAcllmD2+OA6X
dGgc4/uWwsUaZ9O2YwC+3/ba7QD2zHGuZ8BR/U5noWOBd2h05CdwfIXflWjQ
yALPrWV1/EcALzne4FUneFaPALgDwGw7ziwR5yG2RS+QmTyb3ANH4DmWh71e
wN8uDMdt2NJ2fdNys2ZbUpcBuPJE23KPUCn88BiPuwDUF3CfKkOjI/8kjvE1
4ozdfgxTI5ZBONromsdDOCwr/wsX7B+Y5b0r4XidYxICm22JDwH8eAFcWQ9z
Jz2MRDlmky8DuG6OsXk9gH9e4A080fJBGdtj2biHH4GimQDwv8OD5WHRpBsB
/LIAumceDX6BaN2/F8fvcQHcH8pyPBtwLxOwRrI0zzYwk1hYTr57njDWYQDf
meO9ewFsnkNTPxku9XzrKQjcIwuYzI/FP9k/PFg+IPfvQ3DZzhdi7lTxCwF8
8nhN8hMO3KHRkVBIGhfOYqoU4IL9/425ubq7sTBuwksAlOd47z/FOcMc2vgL
cwBXwVUGnIrAfdQyXF7mg7k8h6JgAKNDoyNNcWBDAA8MD5bfKtGEZ83xuXPx
2Co1TinnLJavzGNrvVpMhtnI3hbAfw2NjtTmvRFnrCYAb5xnKfvqHGGvOLrw
XczNCLvqRLH8T6B0i4n01VkeX5ptEkuJzn/Po1GXHE98nSzg7p3Fo4/lfMye
GgZchucLxzx7o3GpzPjZgLsZwEPHSFyMYO4U9GIAL35cwZa5LhGTp8zyuAyO
Tz3Xij2XcpjE44BkM5uD9RUAL5ojrHLmXMpUTIVjyQsxN+n8pgU4KFMSXXjZ
LDemCOD5w4Pl64+l+U+wLJLypmOJBXA4S1MUrTkszudsyu2Vw4Pl7wO4YWh0
pCJRmzXi/M5lDjw416p22gJ3aHTEDg+W7xAQnvEIPvplzB5Ky4bB5ivPmQJw
W5x0mOf6zPBg+W65vvbYLcmqcB5mTxGfLLlOPP5jyQSAP0Rr1UkIx1V4FWZP
jS+FY9B9e3iwvANpFcSl82jcb2FhNYCnlcaNl+ObAbxhgcfvAXDHAojJ52Fu
3u02uDTvQuRe0RqzJR3WAnjq8GD57nnKek60XLjA48bQxm0eGh3h4cHybXCk
pbkiMUvhYrp2ASbmVgDfPl6hsJNp42JodGQSjpFVX+BHbsYxSkGElPPMORw7
A2Dz0OjItgVe39Q810dijvSdjhbtbLbn0OjIEQAfAPCTY9imx8LMtGjn+4/n
j1AneRDvWKDNWodL8U4c47heuErV2aSGBXB32+QGzB02u/IRmjmnvAyNjvwU
wNvh4uThozjFGFyM95+O90p0soF7HxbG8NorID+WXArgSfMM6o2P8Po2z6M5
igB+6VGWYc/X/2w+ITz20vV5zzE0OnIXXCebd8CliBcSGYgE7G8B8Fcnwmn1
TvIMN8OD5X+CY4z58xz6EIjuPYaZQBJJuH6Owb5DzJNH6kR+SGxjmsP8mG9J
3gbHc7WzvPeTR7lCFfHYAvvTcJUj8/3uEbkvXxVF8Hy4FPxaAIuQlrnvgOMw
fBuOPzJxoqoiCCdZFtA5EVhg98RjnIsfZXfIR33OX/T1LHCs8IsYy3m+lzKT
71GPa0c60pGOdKQjHelIRzrSkY50pCMd6UhHOtKRjnTk8S3UGYKOPBaRquc+
KHV46MD+E5aI8DIXkIOr4rwIjrd6B4CDx5MsIe2RngzH2pqY57gyXIn0UwA8
9bHmwuW3FuF6ytpf0G/Jwe0dMX66ZJKGB8sb4Squ/+bR7JYjGbWXAfhtWPt2
OBroY72mM+B2T/rAfFtaeXJwNxy5+GVwvIECHBHkT4cHy9+V5xpaN2BML4Aw
ZsJnvrBXNPjk0OgIDy9bTrC2AKI6mLsBqKHRkfamcmcA+ASAa4YHy3cB8KFU
E9b2A2gMjY7MDJeXebJN1DMBvAdAODxYjs/bBaAuBZgFue4JSGv7odGRppBg
egFEQ6Mj0zLYr5XBed7wYHmfHG/h0pjxcxvv2yU9ehWImkMjB1i0TC+AytDo
SEx7fBVcO6JLhwfLR+AI2SZzjpyMTwNAfP25+Jjh8jIN5l4AtfaJKb8hB6Xq
sLZXXp4UHm3LuGcA1QsgglIVWFuQ87K81wdH8u6D69n2D5nP9QAJ7XQ2YHUL
biaFyxEXru6T/dFivm6XTGIjr3fL39FweRmBuQDH+ssDyEHriaH9+1i+/2oA
H2/DFgBMxb+R5I3Xy418J4i+LwP4DqQbtJ0LR7LwATwVjqjxKTi+agGOef9s
Od+tAK4HUQjm98NtLHIFHGv+63CN6iL53nPh9vN6k1zYs+U7LhEAfgRuP4TP
wxGZbwXw+wB+A66n68UA/lpu0lvlmB/IwO6EK+a7RlYRgiNKPwxHvXsaXB+C
98BVqhblRj4omvPw0OjIl6Ujy9Uyyf4VrlvLtXCdd8bk2rbJOZ8j3/ERuLKk
zUOjI/8hv/UaOJ7w9aIk7pLf+8+iLH4HjsQyKWP07cyNWya/eULGEnDNqON9
5DwAnwXwXzLOvwxHjKnLeGwA8Fcyad4k3zsNx3x7hUy6MRmrIRmrHwC4Pgvg
4cHyi+V4wJFqPgXXTuD1cO32r5HvXwbXjOVOGY83y98/gev4GME1DblLvq8f
wI/gejecAeCbcM0Ot8Ix1a6S7/whgE8OjY5UlcyyV8P1Zht5LAAABqlJREFU
0bpxaORANDQ6clgGOA9XLLcObu+rQG7QDBxZuCxf8DoZyE+K1n4NmEtwNLcN
cDtM3i6TYy7a4SoAvwdHFv9rGYA/guPR/hBuQ7mvyOu/Jtf8kMzkv8pcWwjg
twSsa+B6YX1NburvC8h/CNfG6Xq4yoohAU6fAP4ypK37SSbuFQKUj8P1zP0b
uL6674fr3nKbgOEzMjmubvutl8r3lOA4w++S756Qm9kHtyfEjwH89fBgOdsT
uEdAcbaM5VbRklfLZx+AI4EPyHe8C65m7tNwZf7Xyji9EK7E5wa4nT0vFpBp
uPqxV8tnPgHXmvV1cSul4cHyYri2TLfJuL1IxqQsz7sEYL8s3/0ZuU8fkM98
Uu7F6wRXr5fnXxWF9lbBS0LegavUfp0A+pMyad40PFimmEvah6PLsadkxg7I
zbsHjiB8C4C/E4A8Ha7i9Udw3NotctFPF018BK6c/HY4el8Vs7dljwFyH4Bv
Do2O/FDAtlau4QHRSrfAUQlnBKx/I8vSMnl+swx8XJ7TkKXrQgHJtXC7NW6T
a7ldNIoF8CMQXSugtjiaGhmfZwOADw6Njtws2vpv4cqQtsu1/ihjr7efI/7b
yCT7SwHl+QKGg0ib+V02y/34N1l1viTn+qRcx+dl6V8Pt13TdwF8BkTfk1Vi
XFayF8n5PyOrzafkmj1RQHfClSxtg+MuXyaAhBx3CK6DpoLr7HgnWisqrIzp
fwoYbxXz4IsyWW6QyReIQvq3odGRG+Dq3T4DVysYV5X0yiT8X7me+2VSPx9I
ieQPAtjQtufYWjlJvMFeDek+DVUBblE04JNktr5G1H7cZypE2lExks/PF8mY
RFpg15zlhnPmuCPiOMa/oS72Tw1puc0eAH8mN+YPAfwBWjfbsJn/DwyNHDDy
nNBaNBjvHRzvZ+uKLbVuCmCnM9eXPWeWY5xv+z3xTkOBPK6W8XuVKIltbZO6
nvmNkTwqmbEyojkDub7m0MgBztwryL2qZZp7VDP3tARXr/dquBZL8X0MM8B9
L1x711+RlWbTLBNzKvP7K3I/4r9n5BqVHFsRezqSz/lIucZK/r5Iruc18trt
EIeJZeadB+DPhwfLTx0eLL9ILvLHSLsrXgLg9cOD5U1idyyVyMOtcsJvy2NV
22Adr5BbfN4xGZy3SPPl18qyDLG/3gjXxv/34DbcO0eAVoIreJytQ/ceAE8b
HizHG55cLQP9M1mZfn14sHw2jHmr2LPL5abk4NqRFgV4Vw4Pls8ZHiw/DbN3
xoGM734B0OfEJDobR3dJXMg4hnLPXgjXu+si+f0DAvRb5fUXDw+Wnyz3sU9A
f5N8xw2Z+9jMALsgy/k4gN8W5XEFHhmpPfsbCgDeMDxYfsrwYPnZYgrdn1n5
p2QCB2Lz3ij271hW4/5UbJ+nidr+sCz7fyQgJFkWLhFj+w2yzG2VZWqLLA1f
kxt4Y0bbRpnZWEFr55N42TdyfCWjVUN5j2UAq5llaSZznu1iR70IwL/LcnOP
DPiorCZfkCXyftEYPxAQflhmdDXW0hIe+7RMhv+QSMEd8v6E2M9PlvNdB+Bf
BHxxR8S/kwjI+5FutP0O+b5qRvMY+b4pAL8JV8N2qzgtX0brVrLZcUL7OTLv
N+Umf09Mp8/IdT8k4/lV0VgfFafwiEywUCbNw/Kbvy7A+Z84HDo0OjIjy/Z1
goUV8tsa8t2xhs0Wl9ba/q7LayzPd8q1/Ivcxw9ncBOKP7EbrmXW12X8/2to
dIRbt8ZctlzB2mUApmVAIcb5r4nH/OsxqOSHZMMk/fIDk5DXcHmZD6IoDkwP
l5d5AOzQyAELAMPLVxCs9UjriI0hENTQgQMu4rBsmQJDATAgIjCroZEDLpQC
aBCZbMB7uLwsB+ZeaH0I1moALEt/HC8uktaHrnQhFyTnAQwICgzEx8tnAgCL
oNRhABbMyJzPB9AHosmhkQPNzDXE57RDIwfscHlZAOZ+AEegFIMZ0NrAGA9K
RRL+yQby+0E0PTRyoKWyWM7rAYiGRg64UCNz+reMI7SKhvbt45tXriIOw0UA
6iCqtnx2+QqCMYtBVBHQKQBmaORA/Lk+AdbkbPFoCU8uhivTaQyvWKlgjIJS
BtZqEHjogIzTsuWejJvc0+VOOzuM3Sq4+qFo3/Gh0REeXrGSsuMjwYMuAB6U
mojv+TGXHwHuW8Xof8sCKm070pFjYWqlOPS/OjQ68t3HlDmbR1iWLfP/x0DD
k0lGwYgCX6AjJE8o7eAQU+oy0vJkklEwokpcRuiIwR9yp9wBBiwyY5u0u3MA
AAAASUVORK5CYII=

--_av-24Wpq_1hZGm5mU6av9vXXQ--

--_av-8B6A9R1qxNbZy6fL15VuNA--



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 17:02:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 17:02:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478898.742389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSsa-0006cr-Pv; Mon, 16 Jan 2023 17:02:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478898.742389; Mon, 16 Jan 2023 17:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHSsa-0006ck-ND; Mon, 16 Jan 2023 17:02:40 +0000
Received: by outflank-mailman (input) for mailman id 478898;
 Mon, 16 Jan 2023 17:02:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHSsZ-0006ca-GC; Mon, 16 Jan 2023 17:02:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHSsZ-0004Kt-8D; Mon, 16 Jan 2023 17:02:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHSsY-000213-SO; Mon, 16 Jan 2023 17:02:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHSsY-0001K8-Rv; Mon, 16 Jan 2023 17:02:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rdl7tGn0BLLurYrAg56oRVQdo3/2vijPLZ8SyGK5KvQ=; b=ShBY11RW9lbSS4Bjmzyw1NY7gy
	ggtBjg08TApk1kmjXQk74/AJC3AVZJxrfEkPKztTcGlhh3e1TMG0meDW1vWFFKB0lN2YXpTU1vIoL
	CvHzeFIUqlBdaxya8FdLpTOeuKYvITDDjmsX/Ip9KYVGWNMzt9wAxJtCUXP7CKF8yMQU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175919-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175919: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-arm64:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-arm64-xsm:xen-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=5dc4c995db9eb45f6373a956eb1f69460e69e6d4
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 17:02:38 +0000

flight 175919 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175919/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 build-amd64                   6 xen-build                fail REGR. vs. 173462
 build-amd64-xsm               6 xen-build                fail REGR. vs. 173462
 build-i386-xsm                6 xen-build                fail REGR. vs. 173462
 build-arm64                   6 xen-build                fail REGR. vs. 173462
 build-i386                    6 xen-build                fail REGR. vs. 173462
 build-arm64-xsm               6 xen-build                fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd11-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd12-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a

version targeted for testing:
 linux                5dc4c995db9eb45f6373a956eb1f69460e69e6d4
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  100 days
Failing since        173470  2022-10-08 06:21:34 Z  100 days  207 attempts
Testing same since   175919  2023-01-16 07:09:44 Z    0 days    1 attempts

------------------------------------------------------------
3368 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-freebsd11-amd64                             blocked 
 test-amd64-amd64-freebsd12-amd64                             blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-amd64-xl-vhd                                      blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 514969 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 17:21:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 17:21:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478906.742411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTAr-0000qd-K5; Mon, 16 Jan 2023 17:21:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478906.742411; Mon, 16 Jan 2023 17:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTAr-0000qW-HC; Mon, 16 Jan 2023 17:21:33 +0000
Received: by outflank-mailman (input) for mailman id 478906;
 Mon, 16 Jan 2023 17:21:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cGjy=5N=citrix.com=prvs=37390198c=Per.Bilse@srs-se1.protection.inumbo.net>)
 id 1pHTAp-0000bM-RT
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 17:21:32 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3671308b-95c2-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 18:21:30 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3671308b-95c2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673889690;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=1dIIAJGKQFvYG4Z5OuWe803RqRSo1zkhts0L1OnLzO4=;
  b=Q8SaAoEnA6mYrku5zMearzkQfQViC+NMtuuOB4ZUt0wVutT8vcDo/ouw
   LcvPxGzKCP7JjcLC/3DFK9XPMsh6baCxKL5z1ufsCuspbURaN1DHFu4Kc
   dKSuXQRsldRIi/IhqxgtosJm4KFBdQJ/1lMGlwiv2m4xQrhpzRg0WaGab
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95317181
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Ge2N/a6kM7g4IlWUOb0jNgxRtD/HchMFZxGqfqrLsTDasY5as4F+v
 mMcCmiBaPjYZTGgKNh3aonk90kO6p/dyd5lTldkqn8zHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraBYnoqLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+45wehBtC5gZlPakS4geE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m/
 OQoNAstMxa5qee53q/rUsJ2v+8oFZy+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrkHyaXtyqVaOqII84nTJzRw327/oWDbQUo3XHpwKxxbBz
 o7A12SkKFI5P92N9TmUzW6Ql8vOsgXDZatHQdVU8dY12QbOlwT/EiY+Sl+TsfS/zEmkVLp3O
 0ESvyYjs6U23EiqVcXmGQ21pmaeuRwRUMYWFPc1gCmPwKfJ5weSBkAfUyVMLtchsacLqScCj
 wHT2YmzXHo27ePTECjGnluJkd+sESENHXM5RXICdyUA7Mf+8JkYlCvkRe82RcZZkebJ9SHML
 yGi9XZh3OhM05JQjs1X7nic3Wvy+8Ghohodo1yOAzn7tl4RiJuNPdTA1LTN0RpXwG91pHGlt
 WNMpcWR5ftm4XqlxH3UG7Vl8F1ECp+43Nzgbb1HRcNJG8yFoSLLQGyq3BlwJV1yLuEPciLzb
 UnYtGt5vcEMZyvyMvEsMtjqVazGKJQM8vy8BpjpgidmOMAtJGdrAgkzDaJv44wduBd1yvxuU
 XtqWc2tEWwbGcxaIMmeHo8gPUsQ7nlmnwv7HMmrpylLJJLCPBa9U6keClKSY4gRteXcyOkj2
 4oFZpTiJtQ2eLGWXxQ7BqZIfQFXdCFnVc2twyGVH8baSjdb9KgaI6e56dscl0ZNw8y5Ss+gE
 qmBZ3Jl
IronPort-HdrOrdr: A9a23:+kxhMK5uBIp/bZ8tOgPXwMzXdLJyesId70hD6qkRc3Bom6mj/P
 xG88516faZslgssRMb+exoSZPgfZq0z/cci+Qs1NyZLWrbUQWTXeVfxLqn7zr8GzDvss5xvJ
 0QFJSW0eeAb2SSW/yKhTWFLw==
X-IronPort-AV: E=Sophos;i="5.97,221,1669093200"; 
   d="scan'208";a="95317181"
From: Per Bilse <per.bilse@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Per Bilse <per.bilse@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH] Create a Kconfig option to set preferred reboot method
Date: Mon, 16 Jan 2023 17:21:02 +0000
Message-ID: <20230116172102.43469-1-per.bilse@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This patch provides the option to compile in a preferred reboot method,
as an alternative to specifying it on the Xen command line.  It uses the
same internals as the command line 'reboot' parameter, and will be
overridden by a choice on the command line.

I have referred to this as 'reboot method' rather than 'reboot type' as
used in the code.  A 'type' suggests something to happen after the
reboot, akin to a UNIX run level, whereas 'method' clearly identifies
how the reboot will be achieved.  I thought it best for this to be
clear in an outward-facing utility.

Signed-off-by: Per Bilse <per.bilse@citrix.com>
---
 xen/arch/x86/Kconfig    | 95 +++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/shutdown.c | 11 +++++
 2 files changed, 106 insertions(+)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 6a7825f4ba..d35b14aa17 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -306,6 +306,101 @@ config MEM_SHARING
 	bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED
 	depends on HVM
 
+config REBOOT_SYSTEM_DEFAULT
+	default y
+	bool "Xen-defined reboot method"
+	help
+		Xen will choose the most appropriate reboot method,
+		which will be EFI, ACPI, or by way of the keyboard
+		controller, depending on system features.  Disabling
+		this will allow you to choose exactly how the system
+		will be rebooted.
+
+choice
+	bool "Choose reboot method"
+	depends on !REBOOT_SYSTEM_DEFAULT
+	default REBOOT_METHOD_ACPI
+	help
+		This is a compiled-in alternative to specifying the
+		reboot method on the Xen command line.  Specifying a
+		method on the command line will override this choice.
+
+		warm    Don't set the cold reboot flag
+		cold    Set the cold reboot flag
+		none    Suppress automatic reboot after panics or crashes
+		triple  Force a triple fault (init)
+		kbd     Use the keyboard controller, cold reset
+		acpi    Use the RESET_REG in the FADT
+		pci     Use the so-called "PCI reset register", CF9
+		power   Like 'pci' but for a full power-cycle reset
+		efi     Use the EFI reboot (if running under EFI)
+		xen     Use Xen SCHEDOP hypercall (if running under Xen as a guest)
+
+	config REBOOT_METHOD_WARM
+		bool "warm"
+		help
+			Don't set the cold reboot flag.
+
+	config REBOOT_METHOD_COLD
+		bool "cold"
+		help
+			Set the cold reboot flag.
+
+	config REBOOT_METHOD_NONE
+		bool "none"
+		help
+			Suppress automatic reboot after panics or crashes.
+
+	config REBOOT_METHOD_TRIPLE
+		bool "triple"
+		help
+			Force a triple fault (init).
+
+	config REBOOT_METHOD_KBD
+		bool "kbd"
+		help
+			Use the keyboard controller, cold reset.
+
+	config REBOOT_METHOD_ACPI
+		bool "acpi"
+		help
+			Use the RESET_REG in the FADT.
+
+	config REBOOT_METHOD_PCI
+		bool "pci"
+		help
+			Use the so-called "PCI reset register", CF9.
+
+	config REBOOT_METHOD_POWER
+		bool "power"
+		help
+			Like 'pci' but for a full power-cycle reset.
+
+	config REBOOT_METHOD_EFI
+		bool "efi"
+		help
+			Use the EFI reboot (if running under EFI).
+
+	config REBOOT_METHOD_XEN
+		bool "xen"
+		help
+			Use Xen SCHEDOP hypercall (if running under Xen as a guest).
+
+endchoice
+
+config REBOOT_METHOD
+	string
+	default "w" if REBOOT_METHOD_WARM
+	default "c" if REBOOT_METHOD_COLD
+	default "n" if REBOOT_METHOD_NONE
+	default "t" if REBOOT_METHOD_TRIPLE
+	default "k" if REBOOT_METHOD_KBD
+	default "a" if REBOOT_METHOD_ACPI
+	default "p" if REBOOT_METHOD_PCI
+	default "P" if REBOOT_METHOD_POWER
+	default "e" if REBOOT_METHOD_EFI
+	default "x" if REBOOT_METHOD_XEN
+
 endmenu
 
 source "common/Kconfig"
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 7619544d14..f44a188e2a 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -28,6 +28,7 @@
 #include <asm/apic.h>
 #include <asm/guest.h>
 
+/* NOTE: these constants are duplicated in arch/x86/Kconfig; keep in sync */
 enum reboot_type {
         BOOT_INVALID,
         BOOT_TRIPLE = 't',
@@ -143,6 +144,8 @@ void machine_halt(void)
     __machine_halt(NULL);
 }
 
+#ifdef CONFIG_REBOOT_SYSTEM_DEFAULT
+
 static void default_reboot_type(void)
 {
     if ( reboot_type != BOOT_INVALID )
@@ -533,6 +536,8 @@ static const struct dmi_system_id __initconstrel reboot_dmi_table[] = {
     { }
 };
 
+#endif /* CONFIG_REBOOT_SYSTEM_DEFAULT */
+
 static int __init cf_check reboot_init(void)
 {
     /*
@@ -542,8 +547,12 @@ static int __init cf_check reboot_init(void)
     if ( reboot_type != BOOT_INVALID )
         return 0;
 
+#ifdef CONFIG_REBOOT_SYSTEM_DEFAULT
     default_reboot_type();
     dmi_check_system(reboot_dmi_table);
+#else
+    set_reboot_type(CONFIG_REBOOT_METHOD);
+#endif
     return 0;
 }
 __initcall(reboot_init);
@@ -595,8 +604,10 @@ void machine_restart(unsigned int delay_millisecs)
         tboot_shutdown(TB_SHUTDOWN_REBOOT);
     }
 
+#ifdef CONFIG_REBOOT_SYSTEM_DEFAULT
     /* Just in case reboot_init() didn't run yet. */
     default_reboot_type();
+#endif
     orig_reboot_type = reboot_type;
 
     /* Rebooting needs to touch the page at absolute address 0. */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 17:21:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 17:21:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478905.742401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTAo-0000bZ-Cn; Mon, 16 Jan 2023 17:21:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478905.742401; Mon, 16 Jan 2023 17:21:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTAo-0000bS-9m; Mon, 16 Jan 2023 17:21:30 +0000
Received: by outflank-mailman (input) for mailman id 478905;
 Mon, 16 Jan 2023 17:21:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W85+=5N=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pHTAn-0000bM-9W
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 17:21:29 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 360ffad7-95c2-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 18:21:28 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id bk15so12142991ejb.9
 for <xen-devel@lists.xenproject.org>; Mon, 16 Jan 2023 09:21:28 -0800 (PST)
Received: from [192.168.1.93] (adsl-115.37.6.0.tellas.gr. [37.6.0.115])
 by smtp.gmail.com with ESMTPSA id
 18-20020a170906311200b0084b89c66eb5sm12016582ejx.4.2023.01.16.09.21.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 16 Jan 2023 09:21:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 360ffad7-95c2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=VK2WSwwO3wem8lG8VumfhfaOUBXDe0sbdIex9VQG+7I=;
        b=kzqXBzvJIHOAX+eehaTHL7GpInXwkBd1FLV4/EncafC4YmdwTWZ4pNOt3R6ieK1B54
         Wn7iqcdV2AnJKdno2ThBiK8ugIgENq+RZtEvt3rHEanUa5FXgjX7LNh+YdhJATuwDLDu
         hGF2N3IpCsvHhytZarl8mMogUCdS8IYcgxFbDVdMVXrFF1E8vzOeD60DTT/DnJm0o1Uo
         XvpzLgHvAjcFnq+97j+TTukzRRHEynxv5/tJtYqIwuGYSjQwLAUdmkJ/8CPrviTx7DAD
         BRlI2Y92AJ/kiC+izoSgTCDIfBquwm+gzPupa/y9w4pgOG5/gmNLMfQ6gBCXs6VWcYR+
         SXzg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=VK2WSwwO3wem8lG8VumfhfaOUBXDe0sbdIex9VQG+7I=;
        b=T1Zph99xWmKeetGrkJd6XTcHqhcnCLET+VNnC2tCGC6hdbEoqbBmjku5krjKKrVX81
         VGbohrg+HqXRo0rXKJ5DOBY9H97diSuus2EMovF9niUzQ7NKgsEHcKV8PH3pk1J8k1xP
         b86geDDBjyH7JRr1Q3bdCNDbulpfxOg0ejFEcshNZN3wHLI8O6Pj74zv/tJ091vI/7sB
         XEQgoD+8pxI7rDuVKlsIQIXjHx6ynQ676dZe1UFbduZ8BLX0nJDDM5O07va5M9dk7/cU
         IHiyy2dY+H6oJAI1Va4GwQ0S+yRktQRzaz+mMlDdI/2wmpULhHcZsGmy1I7nFtPYmlGT
         oQvw==
X-Gm-Message-State: AFqh2kpTjKxqG+OEfSclH88Z+q4lAwUvgB8JQcZ+Ez86v4i0r+71wG5I
	ZaknKGQgEeN5a86c3sG7psg=
X-Google-Smtp-Source: AMrXdXvM3rTJg5z2BYuhOmRearVq7EK4JblcBceMIJufsOn9c9Ku7yjkYk/Q7ZY4toIgnsCkTNjpjw==
X-Received: by 2002:a17:906:a0c3:b0:872:5c57:4a32 with SMTP id bh3-20020a170906a0c300b008725c574a32mr86452ejb.51.1673889687861;
        Mon, 16 Jan 2023 09:21:27 -0800 (PST)
Message-ID: <fac90343-840c-8110-6a00-c438bff9f53b@gmail.com>
Date: Mon, 16 Jan 2023 19:21:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v3 5/8] x86/iommu: make code addressing CVE-2011-1898 not
 VT-d specific
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230116070431.905594-1-burzalodowa@gmail.com>
 <20230116070431.905594-6-burzalodowa@gmail.com>
 <f620f8ee-e852-de62-53ef-5cd243e4cc09@gmail.com>
 <37367eb9-7700-27f8-1871-b84224f0c63c@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <37367eb9-7700-27f8-1871-b84224f0c63c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/16/23 18:49, Jan Beulich wrote:
> On 16.01.2023 08:21, Xenia Ragiadakou wrote:
>> On 1/16/23 09:04, Xenia Ragiadakou wrote:
>>> The variable untrusted_msi indicates whether the system is vulnerable to
>>> CVE-2011-1898 due to the absence of interrupt remapping support.
>>> AMD IOMMUs with interrupt remapping disabled are also exposed.
> 
> It would probably help if you mentioned here explicitly that, while affected,
> we don't handle that yet (the code setting the flag would either need to
> move out of VT-d specific space, or be cloned accordingly).

Sure. I will update the comment.

> 
>>> Therefore move the definition of the variable to the common x86 iommu code.
>>>
>>> Also, since the current implementation assumes that only PV guests are prone
>>> to this attack, take the opportunity to define untrusted_msi only when PV is
>>> enabled.
>>>
>>> No functional change intended.
>>>
>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>>> [...]
>>
>> Here, somehow I missed the part:
>> diff --git a/xen/drivers/passthrough/vtd/iommu.c
>> b/xen/drivers/passthrough/vtd/iommu.c
>> index dae2426cb9..e97b1fe8cd 100644
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> @@ -2767,6 +2767,7 @@ static int cf_check reassign_device_ownership(
>>            if ( !has_arch_pdevs(target) )
>>                vmx_pi_hooks_assign(target);
>>
>> +#ifdef CONFIG_PV
>>            /*
>>             * Devices assigned to untrusted domains (here assumed to be
>> any domU)
>>             * can attempt to send arbitrary LAPIC/MSI messages. We are
>> unprotected
>> @@ -2775,6 +2776,7 @@ static int cf_check reassign_device_ownership(
>>            if ( !iommu_intremap && !is_hardware_domain(target) &&
>>                 !is_system_domain(target) )
>>                untrusted_msi = true;
>> +#endif
>>
>>            ret = domain_context_mapping(target, devfn, pdev);
>>
>> I will fix it in the next version.
>>
> 
> With that added (and preferably the description clarified)
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Jan

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 18:01:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 18:01:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478916.742423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTnF-0005Y6-SK; Mon, 16 Jan 2023 18:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478916.742423; Mon, 16 Jan 2023 18:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTnF-0005Xz-PI; Mon, 16 Jan 2023 18:01:13 +0000
Received: by outflank-mailman (input) for mailman id 478916;
 Mon, 16 Jan 2023 18:01:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qEJY=5N=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pHTnD-0005Xt-W4
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 18:01:12 +0000
Received: from sonic316-54.consmr.mail.gq1.yahoo.com
 (sonic316-54.consmr.mail.gq1.yahoo.com [98.137.69.30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb70bc0e-95c7-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 19:01:01 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic316.consmr.mail.gq1.yahoo.com with HTTP; Mon, 16 Jan 2023 18:00:58 +0000
Received: by hermes--production-ne1-5648bd7666-6sqtt (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID b5a3654a4a17fc5e6bbb86087b8d1b15; 
 Mon, 16 Jan 2023 18:00:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb70bc0e-95c7-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netscape.net; s=a2048; t=1673892058; bh=quuGlmQoUYU2Da08/Oab2pCw2jCrmaGkCeJHfrrqCMw=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=l1v2tfn0ZVlt1DBgdi2Vm59qXVqQ6mYolWog+DFgej5vyjkf76866PCIEDZmJStXxxHXLd0s0t0Qv90uPzmbeM3YK5a8AJahjoEjb5Eo/l/jOCNPmex0Wdrn6VnoKrR6jPW2/WV98YoD99a/ceI9RwLygGFZJyX8CSitv9gNA048QWMoo7WkutfAKdUYtAHJGOu161H0FYTCWtpb4+WKWwk9DGZPofMORoTrvMI03+WQoUQOmUdJRX1qNNJOsoYuTfEEcytOruqute+ci8AGvnCLnYGzHwwyfzo7cJlpyPtXI2g7BHe4dJBO8XuH57Vi1BB9gMCwGFE5yACkKNraug==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673892058; bh=5RFQ6OFnIsbnY1yBI5QaVWdT1fCtbI9NM9A1AlrANEZ=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=MOeTQ0hMge0V3yb4nNAOf//CljYrDYVVc+DcjTHQLrEDa8QJDJc+y+Htkiy2L6GxPMAC1fNpJrumhRDRnMXI/9C2bWM9QyZCg2aa3K5x0O6G9YOxNRwVnw08bi6ycr4UrmcZK9vxx+FNuQSyLyFvppNId8MsnftNhDMpty1glpKjKEZs6HSv+NMeeVMuvYJPTCmvQrvrMLdCdUVJ6ppVI1yO4ayltI0ou6pw5Z/SIeV/L2l061nyqKNNMX0sQ2xWH4u40oqPykPh6pnxQrSEQLxeW1Pci4IWvcd5VzNTZCNe4BZwOrz16Yn/TijJJiadeTbNyN0vrmRpGpP1zcxBxQ==
X-YMail-OSG: JQ6OTrQVM1lFPBUbQmhSvV7AorbG7EtFTXBG_1t9E9LK51w5GweQxlPvfHFYzmb
 gvIl5L4Z7HnAfbIoaA6is0YnuTZqa.uYHNCBRAP7r0DjV4IDfMZCpt5oWPBQuC3m5x5cIlobx05X
 GB2HI6YeQftvGwh3uFs_eIZUuS1bCDHsY3JTR9ta_y0l4mSNMHh_nsuCbzA_kIAOQ1bOunwuM1EF
 17nt3YqrwZ6.5mB9BTB0RgDJ3qImJ8XRJytF4jTeAeBQquy5nz9AFaYz._fgEugPzoY5aFwvaaTH
 0IDROHFys3PhwbE7iKKTUcHuPKCkhL_e1QoQ_X4BjvTsXzRW60np4gK0Age.znk4QnoCu5O7glEp
 vcfHws7yJAWGW2TH.zvLYacPKV.guaNS06KtqhZlIOPHp.ENRe83Dh1l85khrohGzWtRITRK3s1n
 .oN0z3BdrTjlho7Hcd027rAPuxqxdPYwSxPXXACnU2Z2cOwQkdztZHNtdtYQ6JIBes00_sMBeKiL
 LR7MERbQ_yMTaKhxxGGgMi.GyBL9WJSa4a79mJp8S2cNxAYp_jaeuQwXyskyRDr02LJodvsr_Mws
 6UK4lO._hcnQZC_gOgvlB6pQhCNaSZjvOkbJrVMUZ1UrsPX1zBnP1qtMp.0kN5J.GddYyKni1igv
 d1V1OM480V9doRIIZXcw8crRx0fxExtqypwaIn5PoVdxdxRlicByGO4Bg_c1pqrIR7YMZ8BmkJOl
 SdanhLBMyPbH7I5YEOK9Hxlr.fPP15jCswerangJP2bMyhj8Ui5fJcyBmaxnYDwjjEoUX_I0fzct
 F5OZU6dLP45jFpj0.OotQic3FbF1vowim6564sHgzW9wP4pYCRYlqzRlgG5vS5o1azeyv9gvL2TF
 sv1vfRbmY_lO_IMTrG_UKetezef6Tjqu2jKcgmk.XS3P2CGYnxYfusgjQvegZV3pbLcnuXU_qghD
 hoZSUv5hCAE_4vWz14PE3SsL_pbux8I1Y.AhMprr7VJj9a3PL_kKGG3aXIkCxohRK.OhShxOYwdi
 K9LaFEq3WTDVlHBDcQPSzedk0bYzwxEGDfiKTRpAZqkEjRVkMiWvzO8ZXtDt_mAuAem6bWcCTJXn
 groWXlf25Xjdjm5lbWFGg8o34tSu54DW0TxREYHUEM4bfHTEEuLCRPXvTQWnlre6GKCRA0xU1ezK
 prnl2BTrO.1NGg.mgUkUHVuotyJoOwS4JGugkqRFEnbpaw5vRincnJK06rZiy7rMAZxQVad9Osld
 PJTEYd7Yuaqppa_gXpbhclqD7wtVhEsfrb79T9SzBHwhXbBumm.Ep_u.qXNh.WpqQEaMgrH403yp
 FL6V8.EFSs3pOGYRCOJpIHblASavl1mfQYwomsakuxPjJ8spZs521ywrMkrW8ljhimCly..dr.KW
 28Doic4D.UFjqIeLzmV_Xwt3eMcun2qRzOQGjcUlP9yeGV1tgGZ.SCnQUZQ5eHbKfB3bZT6CaJ8K
 9_vsKAYsQBoT71TzwCQq_JQx2Es.C.r_huXv6mwEERH2MCOzA2ZTkUtkA6G1GO31HLjIzu2uUZ8j
 aWqNAh57SgGD.4dth.WuwUU8xb_5FkUKR3cKVx4fjm6nHzGjw28R_oe.1xPnHC_KQfSyNM9R9zOh
 Is.9VbC5pgzLZ.bt8nQyuEoP.AKs4LLL7s6KxXSTlaarDrprwOtc2fnQxD_8IBtnG.Mzg2.6grWP
 SjVxIZK5lJYtn1rqwMtzp.yO.x3LW6rM04IlCyX7bC5QpRmsgsjT7KJJuzZtVy_40nLBUdNJPHIR
 6pXHJjHLdH1pV5A3kpYUnra2NRupmj8HXc82gfknNz0eX219Fo2mFOmOJ7WHLknUJuwBxIAP_YnI
 pxLk8MHh6X_kBLtJOpgv3mAoAYmYqptQLgr3SSU5hCvz7kibTtqZmQtxMN1aQ0rcjfSbIMhcZUPM
 0W_au3oedj8ti5SwfJSKmMI9XiNqC_rW63sNXE3h9P0gmZ4uzmNtmaezfyn6Mim2PaIKIXbnCHKO
 0Z1zsiyeY3Y5LftTiSipMHS7Qhp_ZRJ1Io.ZwgIE.Wlef0M9ayQUdfnPH0j.fGLFUHlhGiIrp.SK
 vloBWYJVTbF_Scw_Z6pd__EVpH5ZZmx_nwDp4eO7.z9qc8upKovImWsjBMI0ZnbCiASrMfW_xGjr
 N1mGoA4ASVm6RjRbTYtclMsWTCoqcaCazHvf7LbTJt7Hl8qTbQJCi5YnKke98hWLbc3NHn5XTBLv
 ckUy8gFhwMWRT23ZvNnRF_4B.
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
Date: Mon, 16 Jan 2023 13:00:53 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: Igor Mammedov <imammedo@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>
Cc: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, Anthony Perard <anthony.perard@citrix.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
 <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
 <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
 <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@netscape.net>
In-Reply-To: <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 8900

On 1/16/23 10:33, Igor Mammedov wrote:
> On Fri, 13 Jan 2023 16:31:26 -0500
> Chuck Zmudzinski <brchuckz@aol.com> wrote:
> 
>> On 1/13/23 4:33 AM, Igor Mammedov wrote:
>> > On Thu, 12 Jan 2023 23:14:26 -0500
>> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
>> >   
>> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:  
>> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:    
>> >> >> I think the change Michael suggests is very minimalistic: Move the if
>> >> >> condition around xen_igd_reserve_slot() into the function itself and
>> >> >> always call it there unconditionally -- basically turning three lines
>> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
>> >> >> Michael further suggests to rename it to something more general. All
>> >> >> in all no big changes required.    
>> >> > 
>> >> > yes, exactly.
>> >> >     
>> >> 
>> >> OK, got it. I can do that along with the other suggestions.  
>> > 
>> > have you considered instead of reservation, putting a slot check in device model
>> > and if it's intel igd being passed through, fail at realize time  if it can't take
>> > required slot (with a error directing user to fix command line)?  
>> 
>> Yes, but the core pci code currently already fails at realize time
>> with a useful error message if the user tries to use slot 2 for the
>> igd, because of the xen platform device which has slot 2. The user
>> can fix this without patching qemu, but having the user fix it on
>> the command line is not the best way to solve the problem, primarily
>> because the user would need to hotplug the xen platform device via a
>> command line option instead of having the xen platform device added by
>> pc_xen_hvm_init functions almost immediately after creating the pci
>> bus, and that delay in adding the xen platform device degrades
>> startup performance of the guest.
>> 
>> > That could be less complicated than dealing with slot reservations at the cost of
>> > being less convenient.  
>> 
>> And also a cost of reduced startup performance
> 
> Could you clarify how it affects performance (and how much).
> (as I see, setup done at board_init time is roughly the same
> as with '-device foo' CLI options, modulo time needed to parse
> options which should be negligible. and both ways are done before
> guest runs)

I preface my answer by saying there is a v9, but you don't
need to look at that. I will answer all your questions here.

I am going by what I observe on the main HDMI display with the
different approaches. With the approach of not patching Qemu
to fix this, which requires adding the Xen platform device a
little later, the length of time it takes to fully load the
guest is increased. I also noticed with Linux guests that use
the grub bootloader that the grub vga driver cannot display the
grub boot menu at the native resolution of the display, which
in the tested case is 1920x1080, when the Xen platform device
is added via a command line option instead of by the
pc_xen_hvm_init_pci function in pc_piix.c. But with this patch
to Qemu, the grub menu is displayed at the full 1920x1080
native resolution of the display. Once the guest fully loads,
there is no noticeable difference in performance. It is mainly
a degradation in startup performance, not performance once
the guest OS is fully loaded.

> 
>> However, the performance hit can be prevented by assigning slot
>> 3 instead of slot 2 for the xen platform device if igd passthrough
>> is configured on the command line instead of doing slot reservation,
>> but there would still be less convenience and, for libxl users, an
>> inability to easily configure the command line so that the igd can
>> still have slot 2 without a hacky and error-prone patch to libxl to
>> deal with this problem.
> libvirt manages to get it right on management side without quirks on
> QEMU side.

I think the reason libvirt/kvm gets it right is simply because the
code implementing the libvirt/kvm approach got more attention and testing
than the code that implements the libxl/Xen approach. This patch
really represents what should have been done when support for the
igd-passthru=on option for xenfv machines was added seven years ago,
but the code was apparently added without much testing and is stale now
and needs this fix, which is entirely implemented either in files maintained
by Xen maintainers or, in the case of the small patch to pc_piix.c,
entirely within a section guarded by #ifdef CONFIG_XEN. Not much
maintenance burden for hw/i386 maintainers.

> 
>> I did post a patch on xen-devel to fix this using libxl, but so far
>> it has not yet been reviewed and I mentioned in that patch that the
>> approach of patching qemu so qemu reserves slot 2 for the igd is less
>> prone to coding errors and is easier to maintain than the patch that
>> would be required to implement the fix in libxl.
> 
> the patch is not trivial, and adds maintenance on QEMU.

For all practical purposes, the only additional maintenance would
be handled by Xen maintainers, and the Xen maintainer of the Xen
files being patched gave a Reviewed-by to an earlier iteration of
this patch. So I think the decision about the maintenance cost of
this patch should be made by the Xen maintainers. In fact, if I
were a Xen maintainer, I think this patch to Qemu would be much
easier for the Xen maintainers to maintain than the proposed patch
to libxl to fix this. So ultimately, I think it makes sense for
the Xen maintainers to decide on the maintenance cost. They have
not weighed in since the Reviewed-by that Anthony gave to an
earlier iteration of this patch, and they have not responded to my
patch to libxl either; I don't blame them, because that would be
more difficult for them to maintain than this patch to some of the
Xen-specific code within Qemu.

For reference, the patch for libxl that fixes this is here:

https://lore.kernel.org/qemu-devel/20230110073201.mdUvSjy1vKtxPriqMQuWAxIjQzf1eAqIlZgal1u3GBI@z/

> Though I don't object to it as long as it's constrained to xen only
> code

It already is constrained to Xen only code - the small patch to
pc_piix.c is entirely guarded by #ifdef CONFIG_XEN.

> and doesn't spill into generic PCI.

In comments on an earlier iteration of this patch, Michael indicated
he would not object to a patch to core pci if it added some useful
functionality.

Michael, do I misunderstand you?

I have already proposed a patch that does that, which, if accepted,
would address the objection that unconditionally reserving the slot
during initialization is not desirable. He pointed out that a patch
to core pci could fix that, and I have proposed such a patch,
independent of this patch, here:

https://lore.kernel.org/qemu-devel/ad5f5cf8bc4bd4a720724ed41e47565a7f27adf5.1673829387.git.brchuckz@aol.com/

> All I wanted was just to point out that there are other approaches to
> the problem (i.e. to force the user to provide a correct configuration
> instead of adding quirks whenever possible).
> 

I disagree that the default configuration should configure the hardware
in a way that does not conform to the requirements of the device and thereby
force users to add non-default settings to configure the machine correctly.
That is simply not being friendly to Xen users of Qemu, and that,
unfortunately, is what Qemu code currently does and has done for the
past seven years as regards the configuration by Qemu of igd passthru on
Xen. IMO, it is unreasonable to not fix this, and since the fix can be
implemented in entirely Xen-specific code, I hope and expect that
eventually the Xen maintainers will fix this. I hope they are just waiting
until I implement the fixes that you and Michael have requested which
are mostly reasonable and admittedly, not completed yet.

Perhaps this approach is what you call a "quirk" because of the limitations
of how slot_reserved_mask works. That can be fixed by patching core pci.
That, IMO, is the best and most maintainable way to fix this.

So my plan is to wait and see how my proposed patch to core pci is received.
If it gets accepted, I will do a v10 of this patch that uses the
improved management capability added by the patch to core pci. That
patch addresses the concern that this patch will interfere with the
libvirt/kvm approach of manually assigning the slots, by causing the
slot_reserved_mask to take effect only when the device being added is
configured for auto assignment of the slot address. When libvirt adds
a pci device to a xenfv machine configured for igd-passthru, my
proposed v10, with the patch to core pci as a prerequisite, will not
introduce any change to how Qemu configures the machine in response
to a libvirt configuration that manually assigns the slot addresses.

I do accept that v8/v9 of the patch is stalled, and I am working to address
all the concerns being raised here. Thanks for your comments!

Kind regards,

Chuck


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 18:11:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 18:11:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478923.742434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTwr-00071T-QN; Mon, 16 Jan 2023 18:11:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478923.742434; Mon, 16 Jan 2023 18:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTwr-00071M-NY; Mon, 16 Jan 2023 18:11:09 +0000
Received: by outflank-mailman (input) for mailman id 478923;
 Mon, 16 Jan 2023 18:11:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6LTC=5N=citrix.com=prvs=37359307f=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pHTwq-00071G-MB
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 18:11:08 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 20e17075-95c9-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 19:11:01 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20e17075-95c9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673892661;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=PYmDV5e5rqbDlmbOmu7LxHMAgMbJvk0lbqajoZZKocw=;
  b=Wz14F45DH5WUqS6LsGFhZev2XVKSzfDcxt1ORazZ7NN7ikrgmk6IiyLu
   YwV/ckXQ3jJb0J5W42pGLxRfpsmD1c6IsnZpY117pQ5ttX2YKKr+viyk9
   03FtkXxQc+7DZSoEXYHuMFXHix9P+Bo2/IoQDWkJfNfimPlIgxvV5b5QI
   I=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92321921
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:cb9dUqqF2X9oG//G7+OQrghKytleBmI7ZRIvgKrLsJaIsI4StFCzt
 garIBmFPfmOM2qjedpzPdm09E4F65HQxtVnSQJlrXgwQSNHp5uZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpAFc+E0/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WxwUmAWP6gR5weHzidNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAC4qZzeZtqWw+7i2Zblj3+cFJuSwMLpK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVRrk6VoqwmpXDe1gVr3JDmMcbPe8zMTsJQ9qqdj
 jOcpD6gU0tDXDCZ4QS321fxr8PooQfyc9hPGaX/9v0pknTGkwT/DzVJDADm8JFVkHWWS99Zb
 kAZ5Ccqhawz71CwCMnwWQWip3yJtQJaXMBfe8U49QWMx6z88wufQG8eQVZpd9gOpMIwAzsw2
 TehhMj1DDZitLmUT3O19bqOqz62fy8PIgcqZyUJUA8E6NnLu5wog1TESdMLLUKupoSrQ3eqm
 WnM9XVgwexJ1qbnyplX43jZpDuLvKmOSDU/6yqHUTuGyAlUP4KcMtnABUfg0RpQEGqIZgDf4
 yNZxJbCt7lm4YKlz3LUHrhUdF29z7PcaWCH3wYyd3U03271k0NPa7y8992XyK1BFs8fMQHkb
 0bI0e+6zM8CZSD6BUObjm/YNijL8UQDPY6/PhwsRoASCqWdjSfelM2UWWae3nr2jG8nmrwlN
 JGQfK6EVChFUvQ/k2TtFr1Gj9fHIxzSIkuKFfjGI+mPi+LCNBZ5t59YWLdxUgzJxPzd+1iEm
 zquH8CL1w9eQIXDjtr/qOYuwaQxBSFjX/je8pUHHtNv1yI6QAnN/deNm+J+E2Gk9owJ/tr1E
 oaVAR8JlQOm2S2acm1nqBlLMdvSYHq2llpjVQREALpi8yJLjVqHhEvHS6YKQA==
IronPort-HdrOrdr: A9a23:JjkOo6+vT/ptoeMmHEBuk+AuI+orL9Y04lQ7vn2ZKSY5TiVXra
 CTdZUgpHnJYVMqMk3I9uruBEDtex3hHNtOkOss1NSZLW7bUQmTXeJfBOLZqlWNJ8S9zJ856U
 4JScND4bbLfDxHZKjBgTVRE7wbsaa6GKLDv5ah85+6JzsaGp2J7G1Ce3am+lUdfng+OXKgfq
 Dsm/auoVCbCAwqR/X+PFYpdc7ZqebGkZr3CCR2eyLOuGG1/EiVAKeRKWnj4isj
X-IronPort-AV: E=Sophos;i="5.97,221,1669093200"; 
   d="scan'208";a="92321921"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>
Subject: [XEN PATCH v3 0/1] xen: rework compat headers generation
Date: Mon, 16 Jan 2023 18:10:47 +0000
Message-ID: <20230116181048.30704-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.build-system-xen-include-rework-v3

v3:
- Rewrite script into python instead of perl.
  (last patch of the series)

v2:
- new patch [1/4] to fix an issue with a command line that can be way too long
- other small changes, and reordered patches

Hi,

This patch series is about two improvements. The first is to use $(if_changed,)
in "include/Makefile" to make the generation of the compat headers less verbose
and to have the command line be part of the decision to rebuild the headers.
The second is to replace one slow script with a much faster one, and save time
when generating the headers.

Thanks.

Anthony PERARD (1):
  build: replace get-fields.sh by a python script

 xen/include/Makefile            |   6 +-
 xen/tools/compat-xlat-header.py | 468 ++++++++++++++++++++++++++++
 xen/tools/get-fields.sh         | 528 --------------------------------
 3 files changed, 470 insertions(+), 532 deletions(-)
 create mode 100644 xen/tools/compat-xlat-header.py
 delete mode 100644 xen/tools/get-fields.sh

-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 18:11:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 18:11:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478924.742445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTx1-0007JD-23; Mon, 16 Jan 2023 18:11:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478924.742445; Mon, 16 Jan 2023 18:11:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTx0-0007J4-VK; Mon, 16 Jan 2023 18:11:18 +0000
Received: by outflank-mailman (input) for mailman id 478924;
 Mon, 16 Jan 2023 18:11:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6LTC=5N=citrix.com=prvs=37359307f=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pHTx0-0007IZ-1N
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 18:11:18 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28588fb7-95c9-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 19:11:13 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28588fb7-95c9-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673892673;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=FQsPn2VY1ezossDmCwDjrvIBl/7Wrz4t9IMbkRf9yVo=;
  b=iAdw/5TiN3AyzuUnYwmPhrZIpYW0V2DHNskSP3AtXYa5BtE5SqTlFZB+
   NFdCBB6VxPKwAAvFsA57Cr9u2w/jwpPY2LVyvfAqcsY9WI9A4qjScrbmC
   cn/ACsJy13kpkCyozqO/fLBWWuPMrYD+22cKf6FNWTFjtntOyJvNEujac
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93283753
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:SRwXnaP6K9DDRC7vrR2Gl8FynXyQoLVcMsEvi/4bfWQNrUpz02ZRz
 TMbXjiFPfqLYDH2eI90Po6x/ElT7MLdztRkSQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQA+KmU4YoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9Suv3rRC9H5qyo42tB5wJmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0tRHE0oW5
 +ZbFABTUhKkrem2kZaRdsA506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUo3RH5UOwRvDz
 o7A10rXBhRDO/uV9TCq/GOPvtLuhwDDQI1HQdVU8dY12QbOlwT/EiY+V0a/oPS/ol6zXZRYM
 UN80igkoLU29UerZsLgRBD+q3mB1jYDX/JAHut87xuCooLP+BqQDGUASj9HafQludUwSDhs0
 UWG9/v5CDoqvLCLRHa18raPsSj0KSUTNXUFZyIPUU0C+daLnW0opkuRFJA5Svfz14CrX2iqm
 FhmsRTSmZ0hvdwgj7ehvmz33Q6ugbLCTl8RzDn+CzfNAhxCWGK1W2C5wQGFsq0dc9jFFQDpU
 GsswJbHsr1XZX2ZvGnUGbhWQun0jxqQGGeE6WODCaXN4NhEF5SLWYlLqA9zK05yWirvUW+4O
 RSD0e+9CXI6AZdLUUOUS9jrYyjS5fK8fekJr9iNBja0XrB/dRWc4AZlblOK0mbmnSAEyP9gY
 sfDLpj3XCxBV8yLKQZaoM9EgdcWKt0WnzuPFfgXMTz6uVZhWJJlYehcawbfBgzIxKiFvB/U4
 75i2ziikn1ivBnFSnCPq+Y7dAlaRUXX8Liq86S7gMbfeFs5cIzgYteNqY4cl3tNxPQEy76Ro
 iHgASe1CjPX3BX6FOlDUVg7AJuHYHq1hS9T0fAEVbpw50UeXA==
IronPort-HdrOrdr: A9a23:GF4IiaqFGMs76E7GoWEGFe0aV5oTeYIsimQD101hICG8cqSj+f
 xG+85rsyMc6QxhIE3I9urhBEDtex/hHNtOkOws1NSZLW7bUQmTXeJfBOLZqlWKcUDDH6xmpM
 NdmsBFeaTN5DNB7PoSjjPWLz9Z+qjkzJyV
X-IronPort-AV: E=Sophos;i="5.97,221,1669093200"; 
   d="scan'208";a="93283753"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python script
Date: Mon, 16 Jan 2023 18:10:48 +0000
Message-ID: <20230116181048.30704-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230116181048.30704-1-anthony.perard@citrix.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The get-fields.sh script, which generates all the include/compat/.xlat/*.h
headers, is quite slow. It takes, for example, nearly 3 seconds to
generate platform.h on a recent machine, or 2.3 seconds for memory.h.

Rewriting the mix of shell/sed/python into a single python script makes
the generation of those files a lot faster.

No functional change, the headers generated are identical.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    To test the header generation, I've submitted a branch to gitlab ci,
    where all the headers were generated, and for each one both the shell
    script and the python script were run and the results of both
    compared.
    
    v3:
        convert to python script instead of perl
        - this should allow more developers to be able to review/edit it.
        - it avoids adding a dependency on perl for the hypervisor build.
    
        It can be twice as slow as the perl version, but overall, when doing
        a build with make, there isn't much difference between the perl and
        python scripts.
        We might be able to speed the python script up by precompiling the
        many regexes, but it's probably not worth it. (python already has a
        cache of compiled regexes, but I think it's small, maybe 10 or so)
    
    v2:
    - Add .pl extension to the perl script
    - remove "-w" from the shebang as it is a duplicate of "use warnings;"
    - Add a note in the commit message that the "headers generated are identical".
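The regex-caching point in the notes above can be illustrated with a
small self-contained sketch (the pattern and token list are illustrative,
not taken from the patch): both variants match the same tokens, the
precompiled one just skips the per-call lookup in re's internal cache of
compiled patterns.

```python
import re

# A token-classifying pattern similar in spirit to the ones in the new
# script; precompiling it once avoids re-resolving the pattern on every
# call via re's internal compiled-pattern cache.
IDENT = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*$')

def idents_module_level(tokens):
    # re.match() compiles (or fetches from re's cache) on each call.
    return [t for t in tokens if re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', t)]

def idents_precompiled(tokens):
    # The precompiled pattern object is reused directly.
    return [t for t in tokens if IDENT.match(t)]

tokens = ['struct', '{', 'xen_ulong_t', 'extent_order', '}', ';', '42']
assert idents_module_level(tokens) == idents_precompiled(tokens) == \
    ['struct', 'xen_ulong_t', 'extent_order']
```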

 xen/include/Makefile            |   6 +-
 xen/tools/compat-xlat-header.py | 468 ++++++++++++++++++++++++++++
 xen/tools/get-fields.sh         | 528 --------------------------------
 3 files changed, 470 insertions(+), 532 deletions(-)
 create mode 100644 xen/tools/compat-xlat-header.py
 delete mode 100644 xen/tools/get-fields.sh

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 65be310eca..b950423efe 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -60,9 +60,7 @@ cmd_compat_c = \
 
 quiet_cmd_xlat_headers = GEN     $@
 cmd_xlat_headers = \
-    while read what name; do \
-        $(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
-    done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new; \
+    $(PYTHON) $(srctree)/tools/compat-xlat-header.py $< $(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst > $@.new; \
     mv -f $@.new $@
 
 targets += $(headers-y)
@@ -80,7 +78,7 @@ $(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srctree)/tools/compat-
 	$(call if_changed,compat_c)
 
 targets += $(patsubst compat/%, compat/.xlat/%, $(headers-y))
-$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh FORCE
+$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/compat-xlat-header.py FORCE
 	$(call if_changed,xlat_headers)
 
 quiet_cmd_xlat_lst = GEN     $@
diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
new file mode 100644
index 0000000000..c1b361ac56
--- /dev/null
+++ b/xen/tools/compat-xlat-header.py
@@ -0,0 +1,468 @@
+#!/usr/bin/env python
+
+from __future__ import print_function
+import re
+import sys
+
+typedefs = []
+
+def removeprefix(s, prefix):
+    if s.startswith(prefix):
+        return s[len(prefix):]
+    return s
+
+def removesuffix(s, suffix):
+    if s.endswith(suffix):
+        return s[:-len(suffix)]
+    return s
+
+def get_fields(looking_for, header_tokens):
+    level = 1
+    aggr = 0
+    fields = []
+    name = ''
+
+    for token in header_tokens:
+        if token in ['struct', 'union']:
+            if level == 1:
+                aggr = 1
+                fields = []
+                name = ''
+        elif token == '{':
+            level += 1
+        elif token == '}':
+            level -= 1
+            if level == 1 and name == looking_for:
+                fields.append(token)
+                return fields
+        elif re.match(r'^[a-zA-Z_]', token):
+            if not (aggr == 0 or name != ''):
+                name = token
+
+        if aggr != 0:
+            fields.append(token)
+
+    return []
+
+def get_typedefs(tokens):
+    level = 1
+    state = 0
+    typedefs = []
+    for token in tokens:
+        if token == 'typedef':
+            if level == 1:
+                state = 1
+        elif re.match(r'^COMPAT_HANDLE\((.*)\)$', token):
+            if not (level != 1 or state != 1):
+                state = 2
+        elif token in ['[', '{']:
+            level += 1
+        elif token in [']', '}']:
+            level -= 1
+        elif token == ';':
+            if level == 1:
+                state = 0
+        elif re.match(r'^[a-zA-Z_]', token):
+            if not (level != 1 or state != 2):
+                typedefs.append(token)
+    return typedefs
+
+def build_enums(name, tokens):
+    level = 1
+    kind = ''
+    named = ''
+    fields = []
+    members = []
+    id = ''
+
+    for token in tokens:
+        if token in ['struct', 'union']:
+            if not level != 2:
+                fields = ['']
+            kind = "%s;%s" % (token, kind)
+        elif token == '{':
+            level += 1
+        elif token == '}':
+            level -= 1
+            if level == 1:
+                subkind = kind
+                (subkind, _, _) = subkind.partition(';')
+                if subkind == 'union':
+                    print("\nenum XLAT_%s {" % (name,))
+                    for m in members:
+                        print("    XLAT_%s_%s," % (name, m))
+                    print("};")
+                return
+            elif level == 2:
+                named = '?'
+        elif re.match(r'^[a-zA-Z_]', token):
+            id = token
+            k = kind
+            (_, _, k) = k.partition(';')
+            if named != '' and k != '':
+                if len(fields) > 0 and fields[0] == '':
+                    fields.pop(0)
+                build_enums("%s_%s" % (name, token), fields)
+                named = '!'
+        elif token == ',':
+            if level == 2:
+                members.append(id)
+        elif token == ';':
+            if level == 2:
+                members.append(id)
+            if named != '':
+                (_, _, kind) = kind.partition(';')
+            named = ''
+        if len(fields) != 0:
+            fields.append(token)
+
+def handle_field(prefix, name, id, type, fields):
+    if len(fields) == 0:
+        print(" \\")
+        if type == '':
+            print("%s(_d_)->%s = (_s_)->%s;" % (prefix, id, id), end='')
+        else:
+            k = id.replace('.', '_')
+            print("%sXLAT_%s_HNDL_%s(_d_, _s_);" % (prefix, name, k), end='')
+    elif not re.search(r'[{}]', ' '.join(fields)):
+        tag = ' '.join(fields)
+        tag = re.sub(r'\s*(struct|union)\s+(compat_)?(\w+)\s.*', '\\3', tag)
+        print(" \\")
+        print("%sXLAT_%s(&(_d_)->%s, &(_s_)->%s);" % (prefix, tag, id, id), end='')
+    else:
+        func_id = id
+        func_tokens = fields
+        kind = ''
+        array = ""
+        level = 1
+        arrlvl = 1
+        array_type = ''
+        id = ''
+        type = ''
+        fields = []
+        for token in func_tokens:
+            if token in ['struct', 'union']:
+                if level == 2:
+                    fields = ['']
+                if level == 1:
+                    kind = token
+                    if kind == 'union':
+                        tmp = func_id.replace('.', '_')
+                        print(" \\")
+                        print("%sswitch (%s) {" % (prefix, tmp), end='')
+            elif token == '{':
+                level += 1
+                id = ''
+            elif token == '}':
+                level -= 1
+                id = ''
+                if level == 1 and kind == 'union':
+                    print(" \\")
+                    print("%s}" % (prefix,), end='')
+            elif token == '[':
+                if level != 2 or arrlvl != 1:
+                    pass
+                elif array == '':
+                    array = ' '
+                else:
+                    array = "%s;" % (array,)
+                arrlvl += 1
+            elif token == ']':
+                arrlvl -= 1
+            elif re.match(r'^COMPAT_HANDLE\((.*)\)$', token):
+                if level == 2 and id == '':
+                    m = re.match(r'^COMPAT_HANDLE\((.*)\)$', token)
+                    type = m.groups()[0]
+                    type = removeprefix(type, 'compat_')
+            elif token == "compat_domain_handle_t":
+                if level == 2 and id == '':
+                    array_type = token
+            elif re.match(r'^[a-zA-Z_]', token):
+                if id == '' and type == '' and array_type == '':
+                    for id in typedefs:
+                        if id == token:
+                            type = id
+                    if type == '':
+                        id = token
+                    else:
+                        id = ''
+                else:
+                    id = token
+            elif token in [',', ';']:
+                if level == 2 and not re.match(r'^_pad\d*$', id):
+                    if kind == 'union':
+                        tmp = "%s.%s" % (func_id, id)
+                        tmp = tmp.replace('.', '_')
+                        print(" \\")
+                        print("%scase XLAT_%s_%s:" % (prefix, name, tmp), end='')
+                        if len(fields) > 0 and fields[0] == '':
+                            fields.pop(0)
+                        handle_field("%s    " % (prefix,), name, "%s.%s" % (func_id, id), type, fields)
+                    elif array == '' and array_type == '':
+                        if len(fields) > 0 and fields[0] == '':
+                            fields.pop(0)
+                        handle_field(prefix, name, "%s.%s" % (func_id, id), type, fields)
+                    elif array == '':
+                        copy_array("    ", "%s.%s" % (func_id, id))
+                    else:
+                        (_, _, array) = array.partition(';')
+                        if len(fields) > 0 and fields[0] == '':
+                            fields.pop(0)
+                        handle_array(prefix, name, "%s.%s" % (func_id, id), array, type, fields)
+                    if token == ';':
+                        fields = []
+                        id = ''
+                        type = ''
+                    array = ''
+                    if kind == 'union':
+                        print(" \\")
+                        print("%s    break;" % (prefix,), end='')
+            else:
+                if array != '':
+                    array = "%s %s" % (array, token)
+            if len(fields) > 0:
+                fields.append(token)
+
+def copy_array(prefix, id):
+    print(" \\")
+    print("%sif ((_d_)->%s != (_s_)->%s) \\" % (prefix, id, id))
+    print("%s    memcpy((_d_)->%s, (_s_)->%s, sizeof((_d_)->%s));" % (prefix, id, id, id), end='')
+
+def handle_array(prefix, name, id, array, type, fields):
+    i = re.sub(r'[^;]', '', array)
+    i = "i%s" % (len(i),)
+
+    print(" \\")
+    print("%s{ \\" % (prefix,))
+    print("%s    unsigned int %s; \\" % (prefix, i))
+    (head, _, tail) = array.partition(';')
+    head = head.strip()
+    print("%s    for (%s = 0; %s < %s; ++%s) {" % (prefix, i, i, head, i), end='')
+    if ';' not in array:
+        handle_field("%s        " % (prefix,), name, "%s[%s]" % (id, i), type, fields)
+    else:
+        handle_array("%s        " % (prefix,) , name, "%s[%s]" % (id, i), tail, type, fields)
+    print(" \\")
+    print("%s    } \\" % (prefix,))
+    print("%s}" % (prefix,), end='')
+
+def build_body(name, tokens):
+    level = 1
+    id = ''
+    array = ''
+    arrlvl = 1
+    array_type = ''
+    type = ''
+    fields = []
+
+    print("\n#define XLAT_%s(_d_, _s_) do {" % (name,), end='')
+
+    for token in tokens:
+        if token in ['struct', 'union']:
+            if level == 2:
+                fields = ['']
+        elif token == '{':
+            level += 1
+            id = ''
+        elif token == '}':
+            level -= 1
+            id = ''
+        elif token == '[':
+            if level != 2 or arrlvl != 1:
+                pass
+            elif array == '':
+                array = ' '
+            else:
+                array = "%s;" % (array,)
+            arrlvl += 1
+        elif token == ']':
+            arrlvl -= 1
+        elif re.match(r'^COMPAT_HANDLE\((.*)\)$', token):
+            if level == 2 and id == '':
+                m = re.match(r'^COMPAT_HANDLE\((.*)\)$', token)
+                type = m.groups()[0]
+                type = removeprefix(type, 'compat_')
+        elif token == "compat_domain_handle_t":
+            if level == 2 and id == '':
+                array_type = token
+        elif re.match(r'^[a-zA-Z_]', token):
+            if array != '':
+                array = "%s %s" % (array, token)
+            elif id == '' and type == '' and array_type == '':
+                for id in typedefs:
+                    if id == token:
+                        type = id
+                if type == '':
+                    id = token
+                else:
+                    id = ''
+            else:
+                id = token
+        elif token in [',', ';']:
+            if level == 2 and not re.match(r'^_pad\d*$', id):
+                if array == '' and array_type == '':
+                    if len(fields) > 0 and fields[0] == '':
+                        fields.pop(0)
+                    handle_field("    ", name, id, type, fields)
+                elif array == '':
+                    copy_array("    ", id)
+                else:
+                    (head, sep, tmp) = array.partition(';')
+                    if sep == '':
+                        tmp = head
+                    if len(fields) > 0 and fields[0] == '':
+                        fields.pop(0)
+                    handle_array("    ", name, id, tmp, type, fields)
+                if token == ';':
+                    fields = []
+                    id = ''
+                    type = ''
+                array = ''
+        else:
+            if array != '':
+                array = "%s %s" % (array, token)
+        if len(fields) > 0:
+            fields.append(token)
+    print(" \\\n} while (0)")
+
+def check_field(kind, name, field, extrafields):
+    if not re.search(r'[{}]', ' '.join(extrafields)):
+        print("; \\")
+        if len(extrafields) != 0:
+            for token in extrafields:
+                if token in ['struct', 'union']:
+                    pass
+                elif re.match(r'^[a-zA-Z_]', token):
+                    print("    CHECK_%s" % (removeprefix(token, 'xen_'),), end='')
+                    break
+                else:
+                    raise Exception("Malformed compound declaration: '%s'" % (token,))
+        elif not '.' in field:
+            print("    CHECK_FIELD_(%s, %s, %s)" % (kind, name, field), end='')
+        else:
+            n = field.count('.')
+            field = field.replace('.', ', ')
+            print("    CHECK_SUBFIELD_%s_(%s, %s, %s)" % (n, kind, name, field), end='')
+    else:
+        level = 1
+        fields = []
+        id = ''
+
+        for token in extrafields:
+            if token in ['struct', 'union']:
+                if level == 2:
+                    fields = ['']
+            elif token == '{':
+                level += 1
+                id = ''
+            elif token == '}':
+                level -= 1
+                id = ''
+            elif re.match(r'^compat_.*_t$', token):
+                if level == 2:
+                    fields = ['']
+                    token = removesuffix(token, '_t')
+                    token = removeprefix(token, 'compat_')
+            elif re.match(r'^evtchn_.*_compat_t$', token):
+                if level == 2 and token != "evtchn_port_compat_t":
+                    fields = ['']
+                    token = removesuffix(token, '_compat_t')
+            elif re.match(r'^[a-zA-Z_]', token):
+                id = token
+            elif token in [',', ';']:
+                if level == 2 and not re.match(r'^_pad\d*$', id):
+                    if len(fields) > 0 and fields[0] == '':
+                        fields.pop(0)
+                    check_field(kind, name, "%s.%s" % (field, id), fields)
+                    if token == ";":
+                        fields = []
+                        id = ''
+            if len(fields) > 0:
+                fields.append(token)
+
+def build_check(name, tokens):
+    level = 1
+    fields = []
+    kind = ''
+    id = ''
+    arrlvl = 1
+
+    print("")
+    print("#define CHECK_%s \\" % (name,))
+
+    for token in tokens:
+        if token in ['struct', 'union']:
+            if level == 1:
+                kind = token
+                print("    CHECK_SIZE_(%s, %s)" % (kind, name), end='')
+            elif level == 2:
+                fields = ['']
+        elif token == '{':
+            level += 1
+            id = ''
+        elif token == '}':
+            level -= 1
+            id = ''
+        elif token == '[':
+            arrlvl += 1
+        elif token == ']':
+            arrlvl -= 1
+        elif re.match(r'^compat_.*_t$', token):
+            if level == 2 and token != "compat_argo_port_t":
+                fields = ['']
+                token = removesuffix(token, '_t')
+                token = removeprefix(token, 'compat_')
+        elif re.match(r'^[a-zA-Z_]', token):
+            if not (level != 2 or arrlvl != 1):
+                id = token
+        elif token in [',', ';']:
+            if level == 2 and not re.match(r'^_pad\d*$', id):
+                if len(fields) > 0 and fields[0] == '':
+                    fields.pop(0)
+                check_field(kind, name, id, fields)
+                if token == ";":
+                    fields = []
+                    id = ''
+
+        if len(fields) > 0:
+            fields.append(token)
+    print("")
+
+
+def main():
+    header_tokens = []
+
+    with open(sys.argv[1]) as header:
+        for line in header:
+            if re.match(r'^\s*(#|$)', line):
+                continue
+            line = re.sub(r'([\]\[,;:{}])', ' \\1 ', line)
+            line = line.strip()
+            header_tokens += re.split(r'\s+', line)
+
+    global typedefs
+    typedefs = get_typedefs(header_tokens)
+
+    with open(sys.argv[2]) as compat_list:
+        for line in compat_list:
+            words = re.split(r'\s+', line, maxsplit=1)
+            what = words[0]
+            name = words[1]
+
+            name = removeprefix(name, 'xen')
+            name = name.strip()
+
+            fields = get_fields("compat_%s" % (name,), header_tokens)
+            if len(fields) == 0:
+                raise Exception("Fields of 'compat_%s' not found in '%s'" % (name, sys.argv[1]))
+
+            if what == "!":
+                build_enums(name, fields)
+                build_body(name, fields)
+            elif what == "?":
+                build_check(name, fields)
+            else:
+                raise Exception("Invalid translation indicator: '%s'" % (what,))
+
+if __name__ == '__main__':
+    main()
diff --git a/xen/tools/get-fields.sh b/xen/tools/get-fields.sh
deleted file mode 100644
index 002db2093f..0000000000
--- a/xen/tools/get-fields.sh
+++ /dev/null
@@ -1,528 +0,0 @@
-test -n "$1" -a -n "$2" -a -n "$3"
-set -ef
-
-SED=sed
-if test -x /usr/xpg4/bin/sed; then
-	SED=/usr/xpg4/bin/sed
-fi
-if test -z ${PYTHON}; then
-	PYTHON=`/usr/bin/env python`
-fi
-if test -z ${PYTHON}; then
-	echo "Python not found"
-	exit 1
-fi
-
-get_fields ()
-{
-	local level=1 aggr=0 name= fields=
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 1 || aggr=1 fields= name=
-			;;
-		"{")
-			level=$(expr $level + 1)
-			;;
-		"}")
-			level=$(expr $level - 1)
-			if [ $level = 1 -a $name = $1 ]
-			then
-				echo "$fields }"
-				return 0
-			fi
-			;;
-		[a-zA-Z_]*)
-			test $aggr = 0 -o -n "$name" || name="$token"
-			;;
-		esac
-		test $aggr = 0 || fields="$fields $token"
-	done
-}
-
-get_typedefs ()
-{
-	local level=1 state=
-	for token in $1
-	do
-		case "$token" in
-		typedef)
-			test $level != 1 || state=1
-			;;
-		COMPAT_HANDLE\(*\))
-			test $level != 1 -o "$state" != 1 || state=2
-			;;
-		[\{\[])
-			level=$(expr $level + 1)
-			;;
-		[\}\]])
-			level=$(expr $level - 1)
-			;;
-		";")
-			test $level != 1 || state=
-			;;
-		[a-zA-Z_]*)
-			test $level != 1 -o "$state" != 2 || echo "$token"
-			;;
-		esac
-	done
-}
-
-build_enums ()
-{
-	local level=1 kind= fields= members= named= id= token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 2 || fields=" "
-			kind="$token;$kind"
-			;;
-		"{")
-			level=$(expr $level + 1)
-			;;
-		"}")
-			level=$(expr $level - 1)
-			if [ $level = 1 ]
-			then
-				if [ "${kind%%;*}" = union ]
-				then
-					echo
-					echo "enum XLAT_$1 {"
-					for m in $members
-					do
-						echo "    XLAT_${1}_$m,"
-					done
-					echo "};"
-				fi
-				return 0
-			elif [ $level = 2 ]
-			then
-				named='?'
-			fi
-			;;
-		[a-zA-Z]*)
-			id=$token
-			if [ -n "$named" -a -n "${kind#*;}" ]
-			then
-				build_enums ${1}_$token "$fields"
-				named='!'
-			fi
-			;;
-		",")
-			test $level != 2 || members="$members $id"
-			;;
-		";")
-			test $level != 2 || members="$members $id"
-			test -z "$named" || kind=${kind#*;}
-			named=
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-}
-
-handle_field ()
-{
-	if [ -z "$5" ]
-	then
-		echo " \\"
-		if [ -z "$4" ]
-		then
-			printf %s "$1(_d_)->$3 = (_s_)->$3;"
-		else
-			printf %s "$1XLAT_${2}_HNDL_$(echo $3 | $SED 's,\.,_,g')(_d_, _s_);"
-		fi
-	elif [ -z "$(echo "$5" | $SED 's,[^{}],,g')" ]
-	then
-		local tag=$(echo "$5" | ${PYTHON} -c '
-import re,sys
-for line in sys.stdin.readlines():
-    sys.stdout.write(re.subn(r"\s*(struct|union)\s+(compat_)?(\w+)\s.*", r"\3", line)[0].rstrip() + "\n")
-')
-		echo " \\"
-		printf %s "${1}XLAT_$tag(&(_d_)->$3, &(_s_)->$3);"
-	else
-		local level=1 kind= fields= id= array= arrlvl=1 array_type= type= token
-		for token in $5
-		do
-			case "$token" in
-			struct|union)
-				test $level != 2 || fields=" "
-				if [ $level = 1 ]
-				then
-					kind=$token
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "${1}switch ($(echo $3 | $SED 's,\.,_,g')) {"
-					fi
-				fi
-				;;
-			"{")
-				level=$(expr $level + 1) id=
-				;;
-			"}")
-				level=$(expr $level - 1) id=
-				if [ $level = 1 -a $kind = union ]
-				then
-					echo " \\"
-					printf %s "$1}"
-				fi
-				;;
-			"[")
-				if [ $level != 2 -o $arrlvl != 1 ]
-				then
-					:
-				elif [ -z "$array" ]
-				then
-					array=" "
-				else
-					array="$array;"
-				fi
-				arrlvl=$(expr $arrlvl + 1)
-				;;
-			"]")
-				arrlvl=$(expr $arrlvl - 1)
-				;;
-			COMPAT_HANDLE\(*\))
-				if [ $level = 2 -a -z "$id" ]
-				then
-					type=${token#COMPAT_HANDLE?}
-					type=${type%?}
-					type=${type#compat_}
-				fi
-				;;
-			compat_domain_handle_t)
-				if [ $level = 2 -a -z "$id" ]
-				then
-					array_type=$token
-				fi
-				;;
-			[a-zA-Z]*)
-				if [ -z "$id" -a -z "$type" -a -z "$array_type" ]
-				then
-					for id in $typedefs
-					do
-						test $id != "$token" || type=$id
-					done
-					if [ -z "$type" ]
-					then
-						id=$token
-					else
-						id=
-					fi
-				else
-					id=$token
-				fi
-				;;
-			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-				then
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "${1}case XLAT_${2}_$(echo $3.$id | $SED 's,\.,_,g'):"
-						handle_field "$1    " $2 $3.$id "$type" "$fields"
-					elif [ -z "$array" -a -z "$array_type" ]
-					then
-						handle_field "$1" $2 $3.$id "$type" "$fields"
-					elif [ -z "$array" ]
-					then
-						copy_array "    " $3.$id
-					else
-						handle_array "$1" $2 $3.$id "${array#*;}" "$type" "$fields"
-					fi
-					test "$token" != ";" || fields= id= type=
-					array=
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "$1    break;"
-					fi
-				fi
-				;;
-			*)
-				if [ -n "$array" ]
-				then
-					array="$array $token"
-				fi
-				;;
-			esac
-			test -z "$fields" || fields="$fields $token"
-		done
-	fi
-}
-
-copy_array ()
-{
-	echo " \\"
-	echo "${1}if ((_d_)->$2 != (_s_)->$2) \\"
-	printf %s "$1    memcpy((_d_)->$2, (_s_)->$2, sizeof((_d_)->$2));"
-}
-
-handle_array ()
-{
-	local i="i$(echo $4 | $SED 's,[^;], ,g' | wc -w | $SED 's,[[:space:]]*,,g')"
-	echo " \\"
-	echo "$1{ \\"
-	echo "$1    unsigned int $i; \\"
-	printf %s "$1    for ($i = 0; $i < "${4%%;*}"; ++$i) {"
-	if [ "$4" = "${4#*;}" ]
-	then
-		handle_field "$1        " $2 $3[$i] "$5" "$6"
-	else
-		handle_array "$1        " $2 $3[$i] "${4#*;}" "$5" "$6"
-	fi
-	echo " \\"
-	echo "$1    } \\"
-	printf %s "$1}"
-}
-
-build_body ()
-{
-	echo
-	printf %s "#define XLAT_$1(_d_, _s_) do {"
-	local level=1 fields= id= array= arrlvl=1 array_type= type= token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 2 || fields=" "
-			;;
-		"{")
-			level=$(expr $level + 1) id=
-			;;
-		"}")
-			level=$(expr $level - 1) id=
-			;;
-		"[")
-			if [ $level != 2 -o $arrlvl != 1 ]
-			then
-				:
-			elif [ -z "$array" ]
-			then
-				array=" "
-			else
-				array="$array;"
-			fi
-			arrlvl=$(expr $arrlvl + 1)
-			;;
-		"]")
-			arrlvl=$(expr $arrlvl - 1)
-			;;
-		COMPAT_HANDLE\(*\))
-			if [ $level = 2 -a -z "$id" ]
-			then
-				type=${token#COMPAT_HANDLE?}
-				type=${type%?}
-				type=${type#compat_}
-			fi
-			;;
-		compat_domain_handle_t)
-			if [ $level = 2 -a -z "$id" ]
-			then
-				array_type=$token
-			fi
-			;;
-		[a-zA-Z_]*)
-			if [ -n "$array" ]
-			then
-				array="$array $token"
-			elif [ -z "$id" -a -z "$type" -a -z "$array_type" ]
-			then
-				for id in $typedefs
-				do
-					test $id != "$token" || type=$id
-				done
-				if [ -z "$type" ]
-				then
-					id=$token
-				else
-					id=
-				fi
-			else
-				id=$token
-			fi
-			;;
-		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-			then
-				if [ -z "$array" -a -z "$array_type" ]
-				then
-					handle_field "    " $1 $id "$type" "$fields"
-				elif [ -z "$array" ]
-				then
-					copy_array "    " $id
-				else
-					handle_array "    " $1 $id "${array#*;}" "$type" "$fields"
-				fi
-				test "$token" != ";" || fields= id= type=
-				array=
-			fi
-			;;
-		*)
-			if [ -n "$array" ]
-			then
-				array="$array $token"
-			fi
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-	echo " \\"
-	echo "} while (0)"
-}
-
-check_field ()
-{
-	if [ -z "$(echo "$4" | $SED 's,[^{}],,g')" ]
-	then
-		echo "; \\"
-		local n=$(echo $3 | $SED 's,[^.], ,g' | wc -w | $SED 's,[[:space:]]*,,g')
-		if [ -n "$4" ]
-		then
-			for n in $4
-			do
-				case $n in
-				struct|union)
-					;;
-				[a-zA-Z_]*)
-					printf %s "    CHECK_${n#xen_}"
-					break
-					;;
-				*)
-					echo "Malformed compound declaration: '$n'" >&2
-					exit 1
-					;;
-				esac
-			done
-		elif [ $n = 0 ]
-		then
-			printf %s "    CHECK_FIELD_($1, $2, $3)"
-		else
-			printf %s "    CHECK_SUBFIELD_${n}_($1, $2, $(echo $3 | $SED 's!\.!, !g'))"
-		fi
-	else
-		local level=1 fields= id= token
-		for token in $4
-		do
-			case "$token" in
-			struct|union)
-				test $level != 2 || fields=" "
-				;;
-			"{")
-				level=$(expr $level + 1) id=
-				;;
-			"}")
-				level=$(expr $level - 1) id=
-				;;
-			compat_*_t)
-				if [ $level = 2 ]
-				then
-					fields=" "
-					token="${token%_t}"
-					token="${token#compat_}"
-				fi
-				;;
-			evtchn_*_compat_t)
-				if [ $level = 2 -a $token != evtchn_port_compat_t ]
-				then
-					fields=" "
-					token="${token%_compat_t}"
-				fi
-				;;
-			[a-zA-Z]*)
-				id=$token
-				;;
-			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-				then
-					check_field $1 $2 $3.$id "$fields"
-					test "$token" != ";" || fields= id=
-				fi
-				;;
-			esac
-			test -z "$fields" || fields="$fields $token"
-		done
-	fi
-}
-
-build_check ()
-{
-	echo
-	echo "#define CHECK_$1 \\"
-	local level=1 fields= kind= id= arrlvl=1 token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			if [ $level = 1 ]
-			then
-				kind=$token
-				printf %s "    CHECK_SIZE_($kind, $1)"
-			elif [ $level = 2 ]
-			then
-				fields=" "
-			fi
-			;;
-		"{")
-			level=$(expr $level + 1) id=
-			;;
-		"}")
-			level=$(expr $level - 1) id=
-			;;
-		"[")
-			arrlvl=$(expr $arrlvl + 1)
-			;;
-		"]")
-			arrlvl=$(expr $arrlvl - 1)
-			;;
-		compat_*_t)
-			if [ $level = 2 -a $token != compat_argo_port_t ]
-			then
-				fields=" "
-				token="${token%_t}"
-				token="${token#compat_}"
-			fi
-			;;
-		[a-zA-Z_]*)
-			test $level != 2 -o $arrlvl != 1 || id=$token
-			;;
-		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-			then
-				check_field $kind $1 $id "$fields"
-				test "$token" != ";" || fields= id=
-			fi
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-	echo ""
-}
-
-list="$($SED -e 's,^[[:space:]]#.*,,' -e 's!\([]\[,;:{}]\)! \1 !g' $3)"
-fields="$(get_fields $(echo $2 | $SED 's,^compat_xen,compat_,') "$list")"
-if [ -z "$fields" ]
-then
-	echo "Fields of '$2' not found in '$3'" >&2
-	exit 1
-fi
-name=${2#compat_}
-name=${name#xen}
-case "$1" in
-"!")
-	typedefs="$(get_typedefs "$list")"
-	build_enums $name "$fields"
-	build_body $name "$fields"
-	;;
-"?")
-	build_check $name "$fields"
-	;;
-*)
-	echo "Invalid translation indicator: '$1'" >&2
-	exit 1
-	;;
-esac
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 18:11:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 18:11:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478927.742456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTxK-0007uK-Hn; Mon, 16 Jan 2023 18:11:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478927.742456; Mon, 16 Jan 2023 18:11:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHTxK-0007uB-Di; Mon, 16 Jan 2023 18:11:38 +0000
Received: by outflank-mailman (input) for mailman id 478927;
 Mon, 16 Jan 2023 18:11:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PHax=5N=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pHTxI-0007pU-OR
 for xen-devel@lists.xen.org; Mon, 16 Jan 2023 18:11:36 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 36557bf5-95c9-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 19:11:35 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36557bf5-95c9-11ed-91b6-6bf2151ebd3b
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1673892694;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=AanC++QDp2/X9tKXMo/J9VwlKZ5d4eXbHEb+Q9Hnt7o=;
	b=y1ss6IxNbaBwoK/nSkiLVMK4xhBf4eEI4RJ2YAzKPab3tb+jXZIoYMhsoO1NxUetPzCoON
	jZturUXmt9NPLQChCJa8JGbKCOEVkmoE06aITmvClYb/XxbCvmNX/5/GTelHascVot5VUa
	e6eHDsSGQlWscVdYEtzQmJwBSr0VVDGXDIDN3MmFK9shs26olpIog+nrDAfiwQMKfbOM+g
	VzIx24GxtcwRguX46n7TBbG3r0k666yT+H/aOaYlG0AJbAuEtW/kMNaTCZe+g8L0XBNjOl
	UR+pytkgpLdn5NbDl5lAm6MaZhTW6Y5+gVWQ5jTbDh8UMpjdyNic6bN7tJhEvw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1673892694;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=AanC++QDp2/X9tKXMo/J9VwlKZ5d4eXbHEb+Q9Hnt7o=;
	b=HIExVyuSu/+qg+7fM4/RkuaqgVgj503pGwc+sZraedRyuaXA+hHMtu78thc97w5zv8R+Bf
	XUhFu5GGulLb5lBw==
To: David Woodhouse <dwmw2@infradead.org>, LKML
 <linux-kernel@vger.kernel.org>, Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>, linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>,
 Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
In-Reply-To: <875yd6o2t7.ffs@tglx>
References: <20221124225331.464480443@linutronix.de>
 <20221124230314.337844751@linutronix.de>
 <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
 <875yd6o2t7.ffs@tglx>
Date: Mon, 16 Jan 2023 19:11:32 +0100
Message-ID: <871qnunycr.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

David!

On Mon, Jan 16 2023 at 17:35, Thomas Gleixner wrote:
> On Mon, Jan 16 2023 at 09:56, David Woodhouse wrote:
>
> This is just wrong. I need to taxi my grandson. Will have a look
> afterwards.

There are three "tglx forgot to fix up XEN" problems:

 - b2bdda205c0c ("PCI/MSI: Let the MSI core free descriptors")

   This requires the MSI_FLAG_FREE_MSI_DESCS flag to be set in the XEN
   MSI domain info


 - 2f2940d16823 ("genirq/msi: Remove filter from msi_free_descs_free_range()")

   This requires the 'desc->irq = 0' disassociation on teardown.


 - ffd84485e6be ("PCI/MSI: Let the irq code handle sysfs groups")

   Lacks a flag in the XEN MSI domain info as well.

Combo patch below.

Thanks,

        tglx
---
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -392,6 +392,7 @@ static void xen_teardown_msi_irqs(struct
 	msi_for_each_desc(msidesc, &dev->dev, MSI_DESC_ASSOCIATED) {
 		for (i = 0; i < msidesc->nvec_used; i++)
 			xen_destroy_irq(msidesc->irq + i);
+		msidesc->irq = 0;
 	}
 }
 
@@ -434,6 +435,7 @@ static struct msi_domain_ops xen_pci_msi
 
 static struct msi_domain_info xen_pci_msi_domain_info = {
 	.ops			= &xen_pci_msi_domain_ops,
+	.flags			= MSI_FLAG_FREE_MSI_DESCS | MSI_FLAG_DEV_SYSFS,
 };
 
 /*


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 18:59:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 18:59:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478939.742467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHUhD-0004AC-37; Mon, 16 Jan 2023 18:59:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478939.742467; Mon, 16 Jan 2023 18:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHUhC-0004A5-VT; Mon, 16 Jan 2023 18:59:02 +0000
Received: by outflank-mailman (input) for mailman id 478939;
 Mon, 16 Jan 2023 18:59:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N7lT=5N=casper.srs.infradead.org=BATV+fb0b8ce1ba8490165fd5+7085+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pHUhA-00049g-7M
 for xen-devel@lists.xen.org; Mon, 16 Jan 2023 18:59:01 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d380d0a8-95cf-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 19:58:56 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHUh8-008zGd-3p; Mon, 16 Jan 2023 18:58:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d380d0a8-95cf-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=CwohRRL2P+XOvrTglyABkWSPVyftjlS9f+rUbAZongo=; b=Ezs8rPKYWA3Z29PcXkI8tFONOC
	GCIcWaoI3gJ5zwQRPi/4prbBZIwPCgP7n4JiAmNvTDnwpNfebq2fCcZ/aOCup9ap03axe/EKvh4WA
	thNtA7wjiYTdk6PmA+N5bgQZLoZtho6H6gtL/FXFDMWMRyn7NN8H4eXUGzFLRcQDw62WGmWRvOy7D
	AB/lfRGkM7tGPRugmY6izVTCjqAWpCCkG6br55qLrw25jZwVdKxxR7dbsHS5Sr71fyk2tLaA+kssX
	N6O0+QMCmTlQCsprn924YrcCEjpm3V9foSRe5M/C8EC4NXFUnCysBSWvheefCIJQQmKHnqDzhv2jn
	2tWRwwZA==;
Message-ID: <e12002af82e9554e42e876d7b9e813b90e673330.camel@infradead.org>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>,  linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>, Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Date: Mon, 16 Jan 2023 18:58:43 +0000
In-Reply-To: <871qnunycr.ffs@tglx>
References: <20221124225331.464480443@linutronix.de>
	 <20221124230314.337844751@linutronix.de>
	 <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
	 <875yd6o2t7.ffs@tglx> <871qnunycr.ffs@tglx>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-0s56QWH1O2XcXgnacqlm"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-0s56QWH1O2XcXgnacqlm
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2023-01-16 at 19:11 +0100, Thomas Gleixner wrote:
> +	.flags			= MSI_FLAG_FREE_MSI_DESCS | MSI_FLAG_DEV_SYSFS,

That doesn't apply on top of
https://lore.kernel.org/all/4bffa69a949bfdc92c4a18e5a1c3cbb3b94a0d32.camel@infradead.org/
and doesn't include the | MSI_FLAG_PCI_MSIX either.

With that remedied,

Tested-by: David Woodhouse <dwmw@amazon.co.uk>

Albeit only under qemu with
https://git.infradead.org/users/dwmw2/qemu.git/shortlog/refs/heads/xenfv
and not under real Xen.





From xen-devel-bounces@lists.xenproject.org Mon Jan 16 19:22:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 19:22:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478945.742478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHV3b-0007OU-Tr; Mon, 16 Jan 2023 19:22:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478945.742478; Mon, 16 Jan 2023 19:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHV3b-0007ON-Q4; Mon, 16 Jan 2023 19:22:11 +0000
Received: by outflank-mailman (input) for mailman id 478945;
 Mon, 16 Jan 2023 19:22:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PHax=5N=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pHV3a-0007OH-VN
 for xen-devel@lists.xen.org; Mon, 16 Jan 2023 19:22:10 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 11e517b9-95d3-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 20:22:09 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11e517b9-95d3-11ed-91b6-6bf2151ebd3b
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1673896928;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/xRPy2lFgYzK4rQRIIVFPLcqwpR0bB+GwT9J43FZdpM=;
	b=QEUWF6j5u+1NhckWuR+mgr9v4hj2n0zEuXyboXvSn34zuMH1NDWqp4x6Xg37XzsjC0c5Rh
	WNoqv2l0pCMKKpQoBJu2TKmmvAIXzgWRljRu/CIoDjyrZgmK2O/SS8C2jkWdZmD8z+MS8y
	ThiN1BHGg9HCjqTI2lt1qvuZtYXLjoey+DmgRW1YNIRYN4NHKQwUlOjl5UQB7epv1l2got
	a1K2kj/oN+trWA842lnDJBZpXS0PtXW6DlqtMM84oj7pCcQ2IUTzqVGq694zdIopstvcDf
	o2c3pySEeDJ5TNbmmyS1E/TLtub1svpU7wex62TMvnlDDi1VL5ktKdYL1zSOIA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1673896928;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/xRPy2lFgYzK4rQRIIVFPLcqwpR0bB+GwT9J43FZdpM=;
	b=E3CY56ng5AjvLJnjTeTH/SIcAG7tQAejOTkhid6gy/IkjDhcsvcKeMNKlUgyuzP3Bg4btY
	oLWtXOUok+0V71BA==
To: David Woodhouse <dwmw2@infradead.org>, LKML
 <linux-kernel@vger.kernel.org>, Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>, linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>,
 Alex Williamson <alex.williamson@redhat.com>, Kevin Tian
 <kevin.tian@intel.com>, Dan Williams <dan.j.williams@intel.com>, Logan
 Gunthorpe <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon
 Mason <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
In-Reply-To: <e12002af82e9554e42e876d7b9e813b90e673330.camel@infradead.org>
References: <20221124225331.464480443@linutronix.de>
 <20221124230314.337844751@linutronix.de>
 <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
 <875yd6o2t7.ffs@tglx> <871qnunycr.ffs@tglx>
 <e12002af82e9554e42e876d7b9e813b90e673330.camel@infradead.org>
Date: Mon, 16 Jan 2023 20:22:07 +0100
Message-ID: <87h6wqmgio.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

David!

On Mon, Jan 16 2023 at 18:58, David Woodhouse wrote:

> On Mon, 2023-01-16 at 19:11 +0100, Thomas Gleixner wrote:
>> +	.flags			= MSI_FLAG_FREE_MSI_DESCS | MSI_FLAG_DEV_SYSFS,
> 
> That doesn't apply on top of
> https://lore.kernel.org/all/4bffa69a949bfdc92c4a18e5a1c3cbb3b94a0d32.camel@infradead.org/
> and doesn't include the | MSI_FLAG_PCI_MSIX either.

Indeed. I saw that patch after my reply. :)

> With that remedied,
>
> Tested-by: David Woodhouse <dwmw@amazon.co.uk>
>
> Albeit only under qemu with
> https://git.infradead.org/users/dwmw2/qemu.git/shortlog/refs/heads/xenfv
> and not under real Xen.

Five levels of emulation. What could possibly go wrong?


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 19:28:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 19:28:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478950.742488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHV9l-000840-JM; Mon, 16 Jan 2023 19:28:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478950.742488; Mon, 16 Jan 2023 19:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHV9l-00083t-GQ; Mon, 16 Jan 2023 19:28:33 +0000
Received: by outflank-mailman (input) for mailman id 478950;
 Mon, 16 Jan 2023 19:28:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N7lT=5N=casper.srs.infradead.org=BATV+fb0b8ce1ba8490165fd5+7085+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pHV9k-00083n-BE
 for xen-devel@lists.xen.org; Mon, 16 Jan 2023 19:28:32 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f4e431d4-95d3-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 20:28:30 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHV9m-0090TW-Kw; Mon, 16 Jan 2023 19:28:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4e431d4-95d3-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=EgTHterDi3x+mF8AjWRpHqVxInp32HGfQrc6ZyHtCWE=; b=KDYhkaQQIvsBIozKGRYJqL3Z3/
	jB6mnQYdFl2Bz7pHcOfT8EG8Ke+jRItj2c3i0kx3DaLGHTklLRvVofL/3io1Feop0jP1sPtKGYmgI
	QVO1315vwZrYEmVj77cIWjdHz6vVYjV82r9YaN9JaIUv5EO62TVqdVwHTrLCOERWojk+je/Jfylrt
	2ZTndZP2i1/bTiZpRzcrrnfg5xnfqWXuN3M/qZkxjaLviE31ylNZgpcberHsyzaf8+n4KBJHPo8bJ
	j+SYyntNB7P8I4A1fFBkHzHnLZ9vxB6Xf4ye6YJw2QVlaghErn5FRspJ0pBssLZ3nDAcUQfZkSzmd
	UYR9nFDA==;
Message-ID: <57f3f757aaceccf866172faaf48ab62916f24d8b.camel@infradead.org>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>,  linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>, Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Date: Mon, 16 Jan 2023 19:28:19 +0000
In-Reply-To: <87h6wqmgio.ffs@tglx>
References: <20221124225331.464480443@linutronix.de>
	 <20221124230314.337844751@linutronix.de>
	 <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
	 <875yd6o2t7.ffs@tglx> <871qnunycr.ffs@tglx>
	 <e12002af82e9554e42e876d7b9e813b90e673330.camel@infradead.org>
	 <87h6wqmgio.ffs@tglx>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-7A8Lyz80Bd/lOOWfs7j3"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-7A8Lyz80Bd/lOOWfs7j3
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2023-01-16 at 20:22 +0100, Thomas Gleixner wrote:
> > Tested-by: David Woodhouse <dwmw@amazon.co.uk>
> > 
> > Albeit only under qemu with
> > https://git.infradead.org/users/dwmw2/qemu.git/shortlog/refs/heads/xenfv
> > and not under real Xen.
> 
> Five levels of emulation. What could possibly go wrong?

It's the opposite — this is what happened when I threw my toys out of
the pram and said, "You're NOT doing that with nested virtualization!".

One level of emulation. We host guests that think they're running on
Xen, directly in QEMU/KVM by handling the hypercalls and event
channels, grant tables, etc.

We virtualised Xen itself :)

Now you have no more excuses for breaking Xen guest mode!

m6HgjPJ51ITSZbRajeoKigKrLmL4jfJQq39PjKVGf/SM02c5ggG59jOhZe7MHVNqSLO/ACUpC62g
9lUGNMbpOo2BMJaygkPkSHgq61b07fwnddPyj/MhK6Iz0oFOyHcif2spcw0BEAWlDoR7pgctUd39
1K8d+09FbVrRx1h0y1xMaQk2f4dbDzm/VdwpVYvYwQAAAAAAAA==


--=-7A8Lyz80Bd/lOOWfs7j3--


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 19:49:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 19:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478959.742500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHVTg-0002Ag-Em; Mon, 16 Jan 2023 19:49:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478959.742500; Mon, 16 Jan 2023 19:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHVTg-0002AZ-BL; Mon, 16 Jan 2023 19:49:08 +0000
Received: by outflank-mailman (input) for mailman id 478959;
 Mon, 16 Jan 2023 19:49:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PHax=5N=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pHVTf-0002AT-47
 for xen-devel@lists.xen.org; Mon, 16 Jan 2023 19:49:07 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d4cb3d2d-95d6-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 20:49:04 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4cb3d2d-95d6-11ed-b8d0-410ff93cb8f0
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1673898543;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bsQIYS8Reb5+JjRsIyy9SKu5zYksjmBvIj9vz87cqIA=;
	b=heEkt7NaYvIze8aJGQVSQyHNSN98z+Wfxq09MbTTdqrC+XKiOY0rDosjUd0z1XsSIluw/P
	Cr2Q5VCIR8uXtuiK6mYkB9j8V15SmnDnNd8MoO3oU3y7irPN42IRU27Jh4H/8wglTGEH25
	9Om5jylGiPoSEa68SE/xeQLD4UTAezrquPJe+NbYT4vogs55O4A6M/3mLcrXBt9/Z0vmdW
	/zROGTl2f7ioJ0ivQfFu0FFh7UFomXQK/1HkCSm7uVxq64vpQlA1SwezuakhBjKTLuBmk7
	6vupSlP1XINVTJeEQoKgecPdHcR9dTYuuWNP5NIzbbxbWCCMdpieSqhH0Bk9kQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1673898543;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bsQIYS8Reb5+JjRsIyy9SKu5zYksjmBvIj9vz87cqIA=;
	b=bfhTZB4zqL0K6JB8VZXUJnVxNs2r97QdIbh0E82HuZsZVNPSxYCPU420T4qGOhJ+vpVXFQ
	NsX/lhRHJD/orPDQ==
To: David Woodhouse <dwmw2@infradead.org>, LKML
 <linux-kernel@vger.kernel.org>, Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>, linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>,
 Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
In-Reply-To: <57f3f757aaceccf866172faaf48ab62916f24d8b.camel@infradead.org>
References: <20221124225331.464480443@linutronix.de>
 <20221124230314.337844751@linutronix.de>
 <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
 <875yd6o2t7.ffs@tglx> <871qnunycr.ffs@tglx>
 <e12002af82e9554e42e876d7b9e813b90e673330.camel@infradead.org>
 <87h6wqmgio.ffs@tglx>
 <57f3f757aaceccf866172faaf48ab62916f24d8b.camel@infradead.org>
Date: Mon, 16 Jan 2023 20:49:02 +0100
Message-ID: <87edrumf9t.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

David!

On Mon, Jan 16 2023 at 19:28, David Woodhouse wrote:
> On Mon, 2023-01-16 at 20:22 +0100, Thomas Gleixner wrote:
>> > Tested-by: David Woodhouse <dwmw@amazon.co.uk>
>> >
>> > Albeit only under qemu with
>> > https://git.infradead.org/users/dwmw2/qemu.git/shortlog/refs/heads/xenfv
>> > and not under real Xen.
>>
>> Five levels of emulation. What could possibly go wrong?
>
> It's the opposite — this is what happened when I threw my toys out of
> the pram and said, "You're NOT doing that with nested virtualization!".
>
> One level of emulation. We host guests that think they're running on
> Xen, directly in QEMU/KVM by handling the hypercalls and event
> channels, grant tables, etc.
>
> We virtualised Xen itself :)

Groan. Can we please agree on *one* hypervisor instead of growing
emulators for all other hypervisors in each of them :)

> Now you have no more excuses for breaking Xen guest mode!

No cookies, you spoilsport! :)

        tglx


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 20:17:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 20:17:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478967.742511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHVuo-0005dM-Jg; Mon, 16 Jan 2023 20:17:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478967.742511; Mon, 16 Jan 2023 20:17:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHVuo-0005dF-GN; Mon, 16 Jan 2023 20:17:10 +0000
Received: by outflank-mailman (input) for mailman id 478967;
 Mon, 16 Jan 2023 20:17:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NbzH=5N=citrix.com=prvs=37389537a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHVum-0005d9-5K
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 20:17:08 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bd98b9bb-95da-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 21:17:06 +0100 (CET)
Received: from mail-dm6nam04lp2043.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jan 2023 15:16:59 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6859.namprd03.prod.outlook.com (2603:10b6:8:42::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 20:16:55 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.019; Mon, 16 Jan 2023
 20:16:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd98b9bb-95da-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673900225;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=Kp0u26wWA76HiMuy69towX92MOBbT3n7/atAsLsYbK8=;
  b=FRb/BveT3n/j5PDCLaT4aXHteO8D7JvxrjqmrxMnTyezxMvVsUUsZtRX
   /6fkGeI3e2REO6V+Q1hCauDFyci9kfLISUU1c4EWlwT4gXCUP7K+vu3Ym
   RfTCnGIBGa+GJtl9iVql7cPb6WDws0OVT8D6KEYFj0hP1FDots4/Msc5h
   Q=;
X-IronPort-RemoteIP: 104.47.73.43
X-IronPort-MID: 95333692
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,221,1669093200"; 
   d="scan'208";a="95333692"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DOcSvN7KLFK7rSvvOamPHCL/WBxjCjgZwNvvfcITmAPiAFMg/UmuSnBcbuBfJnpEpLWHs9J/9RndErmYPm4qWIugj2u/6UfoQqEBiF3T4hN+hZ0bJ8jXquJiHYh+mlgeIDP/+AHpj+O6Cf/C0V/YTQ0FTwWKMOSVP0EwvFw2/I6pPvWRy0ryYRIzsKz0/b/7W7vBDXcvqvWUaMNs1f2VtxY/8juB3Y1lW3m7unbH/YwxAOZpPCPkFPqr50PUFsK0uX8SVj9R4jXgNsYAhrcBwdC+WZ7aGfYHsNtjSeHHrJ+T1S1LqkvG7umNTri1QWlKyKuVHRRri8+OUeOLcXcvFA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Kp0u26wWA76HiMuy69towX92MOBbT3n7/atAsLsYbK8=;
 b=anmQcNmpP2bmRkztrqvqqDZmaY7hplDBtIH2CeG+JQtVGThY+un2PfBymcdrDLxCwTRd+UxIJ/lCGMYPZA2Rxfg0SfoiUIB/HPQK3aKgPj+FmpX5dXqYcYAEEHalAkQEYokm8qUjiqeFChd6TAILGFygmPeqWuG0ExPtSNu20AMztlqFX9d0GbylLgTkLfHBguaft7tg6RGH4DQiWX88WO/NRGvDCRqJoCqVu7hd5h5/DZHyPmUNzPQPJIF19BPEfZ8Hmdjo+GV27YzrJAxgZzgoIsTNN6tNAXN3+2KrFgEdEK9Ev4nxpWU4+5mxug9kygQQka8L3J/w/QV8/R9qig==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Kp0u26wWA76HiMuy69towX92MOBbT3n7/atAsLsYbK8=;
 b=xYhHv/35TpdD4U3WA4lQ6W2RrBhGNytY5Ca1KNe3h+yKd5IXoopZO8WlD4Cxpb3GOcj0nO8OzvFnqYyFMG9nOerc3cYGcuWAzR4eYnCD0GaTOEMQRH3Acks5Pfbp2JwCNnu3c13eDBATWEzFlWGcZgdtr+0IulDA9G9j1y8KVRI=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, Xenia Ragiadakou <burzalodowa@gmail.com>, George
 Dunlap <George.Dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Thread-Topic: [PATCH 1/2] x86/shadow: sanitize iommu_snoop usage
Thread-Index: AQHZJyu6jkZvmiKMqEGYr+zksqnvb66cPWcAgASFPYCAAL33AA==
Date: Mon, 16 Jan 2023 20:16:54 +0000
Message-ID: <d7a81b30-3f49-7799-3dd0-c884def2e390@citrix.com>
References: <01756703-efc8-e342-295c-a40a286ad5f1@suse.com>
 <cf0ed06f-4d49-0f73-cfd9-eb49e951048c@suse.com>
 <f24da4a8-4df8-0ec3-32f9-41f134b87d67@citrix.com>
 <9e918d63-3f32-6d43-9836-a9b75b98c295@suse.com>
In-Reply-To: <9e918d63-3f32-6d43-9836-a9b75b98c295@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM4PR03MB6859:EE_
x-ms-office365-filtering-correlation-id: c2829ce8-a3af-49bb-9b81-08daf7fe9cb8
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <796C1A2045EE33469FA6C99F4FCD8897@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c2829ce8-a3af-49bb-9b81-08daf7fe9cb8
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Jan 2023 20:16:54.9132
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: /ct/+keiNIz2xOvbuVfkx15lBZWT5qIkTr26stT4eFdDXhfkHYVKId3DtmZtTejdF4sPJUO6oPTZVfuemmtyYS11+//LatxA4DFPNGbjUes=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6859

On 16/01/2023 8:56 am, Jan Beulich wrote:
> On 13.01.2023 12:55, Andrew Cooper wrote:
>> On 13/01/2023 8:47 am, Jan Beulich wrote:
>>> In _sh_propagate() I'm further puzzled: The iomem_access_permitted()
>>> certainly suggests very bad things could happen if it returned false
>>> (i.e. in the implicit "else" case). The assumption looks to be that no
>>> bad "target_mfn" can make it there. But overall things might end up
>>> looking more sane (and being cheaper) when simply using "mmio_mfn"
>>> instead.
>> That entire block looks suspect.  For one, I can't see why the ASSERT()
>> is correct; we have literally just (conditionally) added CACHE_ATTR to
>> pass_thru_flags and pulled everything across from gflags into sflags.
> That is for !shadow_mode_refcounts() domains, i.e. PV, whereas the
> outermost conditional here limits things to HVM. Using different
> predicates of course obfuscates this some, but bringing those two
> closer together (perhaps even merging them) didn't look reasonable
> to do right here.

Ah, that bit.  Also further obfuscated by partial nested !'s.

I doubt Shadow has seen anything beyond token testing in combination
with PCI Passthrough.  It certainly saw no testing under XenServer.

>> It can also halve its number of external calls by rearranging the if/else
>> chain and making better use of the type variable.
> I did actually spend quite a bit of time to see whether I could figure
> a valid way of re-arranging the order, but in the end for every
> transformation I found a reason why it wouldn't be valid. So I'm
> curious what valid simplification(s) you see.

Well, the first two calls to pat_type_2_pte_flags() can be merged
quite easily, but I was also thinking in terms of a future where
coherency handling was working in a more sane way.

>>> @@ -571,7 +571,7 @@ _sh_propagate(struct vcpu *v,
>>>                              gfn_to_paddr(target_gfn),
>>>                              mfn_to_maddr(target_mfn),
>>>                              X86_MT_UC);
>>> -                else if ( iommu_snoop )
>>> +                else if ( is_iommu_enabled(d) && iommu_snoop )
>>>                      sflags |= pat_type_2_pte_flags(X86_MT_WB);
>> Hmm...  This is still one reasonably expensive nop; the PTE flags for WB
>> are 0.
> Right, but besides being unrelated to the patch (there's a following
> "else", so the condition cannot be purged altogether) I would wonder
> if we really want to bake in more PAT layout <-> PTE dependencies.

I'm not advocating for more assumptions about PAT <-> PTE layout, but it
would be nice if the NOPs were actually NOPs.

I submitted a patch which makes pat_type_2_pte_flags() marginally less
expensive, but there's still massive savings to be made here.  Because
Xen's PAT is a compile-time constant, this inverse can be too.

>
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -56,6 +56,13 @@ void __init acpi_iommu_init(void)
>>>      if ( !acpi_disabled )
>>>      {
>>>          ret = acpi_dmar_init();
>>> +
>>> +#ifndef iommu_snoop
>>> +        /* A command line override for snoop control affects VT-d only. */
>>> +        if ( ret )
>>> +            iommu_snoop = true;
>>> +#endif
>> I really don't think this is a good idea.  If nothing else, you're
>> reinforcing the notion that this logic is somehow acceptable.
>>
>> If instead the comment read something like:
>>
>> /* This logic is pretty bogus, but necessary for now.  iommu_snoop as a
>> control is only wired up for VT-d (which may be conditionally compiled
>> out), and while AMD can control coherency, Xen forces coherent accesses
>> unilaterally so iommu_snoop needs to report true on all AMD systems for
>> logic elsewhere in Xen to behave correctly. */
> I've extended the comment to this:
>
>         /*
>          * As long as there's no per-domain snoop control, and as long as on
>          * AMD we uniformly force coherent accesses, a possible command line
>          * override should affect VT-d only.
>          */

Better.  I suppose my displeasure of this can live on list...

~Andrew
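
[Editorial note: the point about pat_type_2_pte_flags() in the message above, that a compile-time-constant PAT makes the inverse mapping a compile-time constant too, can be sketched as a plain lookup table. This is an illustrative, hedged sketch, not Xen's actual implementation: the X86_MT_* values are the architectural memory-type encodings, but the table contents assume the conventional PAT programming in which WB occupies entry 0, which is exactly what makes the X86_MT_WB case a genuine no-op.]

```c
/*
 * Illustrative sketch (not Xen's actual code): if the PAT MSR layout is
 * a compile-time constant, the inverse mapping from memory type to PTE
 * cacheability bits can be a constant table rather than runtime logic.
 * Assumes the conventional PAT programming with WB in entry 0, so
 * looking up X86_MT_WB yields 0 and "sflags |= ..." is a true no-op.
 */
#include <stdint.h>

#define _PAGE_PWT (1u << 3)   /* Page Write-Through */
#define _PAGE_PCD (1u << 4)   /* Page Cache Disable */
#define _PAGE_PAT (1u << 7)   /* PAT bit (4k leaf pages) */

/* Architectural memory-type encodings. */
#define X86_MT_UC  0x00
#define X86_MT_WC  0x01
#define X86_MT_WT  0x04
#define X86_MT_WP  0x05
#define X86_MT_WB  0x06
#define X86_MT_UCM 0x07
#define X86_NUM_MT 0x08

/* Compile-time inverse of the (fixed) PAT programming. */
static const uint32_t mt_to_pte_flags[X86_NUM_MT] = {
    [X86_MT_WB]  = 0,                      /* PAT entry 0 */
    [X86_MT_WT]  = _PAGE_PWT,              /* PAT entry 1 */
    [X86_MT_UCM] = _PAGE_PCD,              /* PAT entry 2 */
    [X86_MT_UC]  = _PAGE_PCD | _PAGE_PWT,  /* PAT entry 3 */
    [X86_MT_WC]  = _PAGE_PAT,              /* PAT entry 4 */
    [X86_MT_WP]  = _PAGE_PAT | _PAGE_PWT,  /* PAT entry 5 */
};

static inline uint32_t pat_type_2_pte_flags(unsigned int mt)
{
    return mt_to_pte_flags[mt]; /* single indexed load, no branches */
}
```

With this shape, the compiler can fold pat_type_2_pte_flags(X86_MT_WB) to the constant 0 at the call site, addressing the "expensive nop" complaint without baking further PAT <-> PTE assumptions into callers.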


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 20:33:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 20:33:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478972.742522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHWAn-00084L-W2; Mon, 16 Jan 2023 20:33:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478972.742522; Mon, 16 Jan 2023 20:33:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHWAn-00084E-T1; Mon, 16 Jan 2023 20:33:41 +0000
Received: by outflank-mailman (input) for mailman id 478972;
 Mon, 16 Jan 2023 20:33:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHWAm-000844-Ng; Mon, 16 Jan 2023 20:33:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHWAm-0001FX-Kn; Mon, 16 Jan 2023 20:33:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHWAm-0006aF-BO; Mon, 16 Jan 2023 20:33:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHWAm-0008TN-Ar; Mon, 16 Jan 2023 20:33:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9xMYiKPFrPeB5Sjo139AHShTvAJ5UA0mBdciQk1W7Hk=; b=S4BYxTI+r1i9XAZdMASVJzD3Aw
	/wYvl1QjsixLPQSbQ+6uY8kPzHSPjkIv7D3Jpztjj6YPHPgNQoGbj39l3eKcYvlzoeuTi5N8fWLwr
	PdN9LDCUWdXMcHLllaokuFr+waQhgZ2KXkA/j7CL44hKCrzj22wcXcSKMGmPmV/C+Vks=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175922-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175922: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-armhf:host-install(4):broken:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=886fb67020e32ce6a2cf7049c6f017acf1f0d69a
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 20:33:40 +0000

flight 175922 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175922/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175743
 build-amd64                   6 xen-build                fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743
 build-arm64                   6 xen-build                fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886fb67020e32ce6a2cf7049c6f017acf1f0d69a
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    4 days
Failing since        175750  2023-01-13 06:38:52 Z    3 days   10 attempts
Testing same since   175835  2023-01-14 07:07:10 Z    2 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Marcel Holtmann <marcel@holtmann.org>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 1426 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 20:47:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 20:47:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.478984.742533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHWNv-0001D2-Ai; Mon, 16 Jan 2023 20:47:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 478984.742533; Mon, 16 Jan 2023 20:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHWNv-0001Cv-7r; Mon, 16 Jan 2023 20:47:15 +0000
Received: by outflank-mailman (input) for mailman id 478984;
 Mon, 16 Jan 2023 20:47:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h0W3=5N=oracle.com=boris.ostrovsky@srs-se1.protection.inumbo.net>)
 id 1pHWNu-0001Cp-Gk
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 20:47:14 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2b92117-95de-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 21:47:11 +0100 (CET)
Received: from pps.filterd (m0333520.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 30GJhnMW003869; Mon, 16 Jan 2023 20:46:45 GMT
Received: from iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta02.appoci.oracle.com [147.154.18.20])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3n3medbd47-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 16 Jan 2023 20:46:45 +0000
Received: from pps.filterd
 (iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5)
 with ESMTP id 30GJBwht004887; Mon, 16 Jan 2023 20:46:44 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2105.outbound.protection.outlook.com [104.47.70.105])
 by iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3n4qyxtt23-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 16 Jan 2023 20:46:44 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by CH2PR10MB4247.namprd10.prod.outlook.com (2603:10b6:610:7a::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Mon, 16 Jan
 2023 20:46:43 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::909f:fa34:2dac:11c5]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::909f:fa34:2dac:11c5%7]) with mapi id 15.20.6002.013; Mon, 16 Jan 2023
 20:46:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2b92117-95de-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=message-id : date :
 subject : to : cc : references : from : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2022-7-12;
 bh=pKfc940SQRf+NNw/ySjxJUeflDB4HQRpjsNKoUevMVQ=;
 b=EosGz5v3ptjwBQsIhfKy5zEzOS+ugfr3qWGCCcTygClY0Qwbx0se9DEB2cWMxftk99f0
 pkUZB8qiv1e/FVSC3XlCm9brsCvP4sfC5BlBS12MVdxbNRqM7hjqFIBEo2GTLz+4ALtD
 64BSHl/SXJZiAaOICNQFCg6rlJ1jIGRNjFd8Xj9/1ZCUAMGv9rlEkkoIeCyxXsrz5/pA
 RJCAKkx6YEqcoTNLjyTQqDYtB/4idwP/XMgr4IwqkLsgaKjYo1qctlrRj4CpxF11zmdI
 93zUU9LHE6aWpxsQqUXb70vUNmxwVIvwmg8+HmqYlJVuIXy0Tg0uGCBIw6AxorUtCbSQ wQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U4icHxhBrG4gkB4RMHwVC88GZ+kN3TG5lCfOaT1YjgePIgO83Dlm75d9MxPQW2XtJeRATT6KM4UegmOmzb77VxNrQDJRH+f27M5ZgQEA00VLrauT1wuocK5euiCaz+Zy7GZJCJ2dUAUa4/zrwnsmdNMTJliqOya4/wTXlf2per0xnDn8/oafhIw9pqAXky527lz8iHXHL3Z1tZ+5bxKWLESoKRF46RTeNqocjU/oYjcDh+GzhjZwswWy/AnhUsoJ3ToZJZQQeXbw52pEiiD9R4RH0YMWTdTUgeftiANgeCGLoFP0/ATvCmGsG3Lw6zRoMxcZHSBDPoijHGlqInnE7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pKfc940SQRf+NNw/ySjxJUeflDB4HQRpjsNKoUevMVQ=;
 b=mn0awvLMUUol7Usvc/+eCKMAH5FbtLK8Tz+buy8AJgiQkPEEkocLRf4czpVdW36BhqdrnkvVgYKTNpI6HEIjYm46jqTnwo+pQPFqW9nNUIzb4HvLi44Tqqe30ZQYSlIMMSeGIQU+sOvhe8F7Tm4pbLw+LJSC7+lD/s0CJqOPnbE4GLgZ7Wb747hraDI2vBDz7ldmUYmplYoVwGfKZBe38vMMynpBpbJMDssfXDQEX+QF5vLgeiTAtPpeNbunsC3gVD1RSkQ8pkg6DywZ0W8Hf9U6ejG2cFm/K7I+17O9cwiewwGCBzt3W+9LMrkFNdYrdOAT1Msc9ROq64EbFQcsYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pKfc940SQRf+NNw/ySjxJUeflDB4HQRpjsNKoUevMVQ=;
 b=suPLTflhoE+M/0L/5V6IykEqqJ4P7TqS84RjRrMziivQIYocudf1xiBHd1o9dLofusrMUWXolohPsOvy6yZa1O/23QDDcJjHS6AFh6atwZZy55TY5s8hYtoQfWdV4xiXhiZp+yh/8/4xISajnYO2Pb6NgwEwP7EuqJNhpB8YG3Y=
Message-ID: <4f7189ca-afd6-25cf-38a0-80b53526c2fe@oracle.com>
Date: Mon, 16 Jan 2023 15:46:39 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 0/2] x86/xen: don't return from xen_pv_play_dead()
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, linux-kernel@vger.kernel.org,
        x86@kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
        Borislav Petkov <bp@alien8.de>,
        Dave Hansen <dave.hansen@linux.intel.com>,
        "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
        Josh Poimboeuf <jpoimboe@kernel.org>,
        Peter Zijlstra <peterz@infradead.org>
References: <20221125063248.30256-1-jgross@suse.com>
 <d0c6363f-d5c9-a53b-5275-e5134b54f09a@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
In-Reply-To: <d0c6363f-d5c9-a53b-5275-e5134b54f09a@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: SN6PR16CA0043.namprd16.prod.outlook.com
 (2603:10b6:805:ca::20) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BLAPR10MB5009:EE_|CH2PR10MB4247:EE_
X-MS-Office365-Filtering-Correlation-Id: 281a061b-9b48-4396-eb91-08daf802c670
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: 
	=?utf-8?B?WVpXMW5aaDZDL1RqbEg5anRWcEhkZ3ZXSUhaZm9CT0Z3T0Z3dDFCUEw2VGZy?=
 =?utf-8?B?L1BQRm1VUFBwZUVFM1Y4SHY2MGpjT0o1K3VoZHZENmZ5QllqMEhmSW9FSS9S?=
 =?utf-8?B?WURQT2N3dWkvUGYzTHBYNTN4ODlaV2FFT002RG5tMmRoamg5NXhZSkdCZUxk?=
 =?utf-8?B?a2phUDFoK25NZWxYd1dMbkoxZXpTL0tFNTlZVStkbGV5OUV5YTRGcG8vREp6?=
 =?utf-8?B?cXJYMEkybEZWQTRkTmZqd1hoaWVnZHdpd1pOYjIwbnVuTVB0SE9RaWh1c3Fs?=
 =?utf-8?B?TTF2QU1QK2F0NWo2bktGT1hUL2J0Tm94amJWcUVzYmhQQ0U0ZFNMTkFiNkhw?=
 =?utf-8?B?bG52Z25ONTVibmRudEtXMVM3ZTJWQ0V1MmNJd2RlQ3pvUUR4VDVjaWxwSFI4?=
 =?utf-8?B?NXV4RkZodnhURmtxZzlwUG5CZG5KT2IzNmlkSEFKQ1d6ZzBKV3RwQVFWNzF3?=
 =?utf-8?B?VktlTEFNVnlYd0RNR2tma1BvdFdKbE5zWVBTU3V5aUtUWlBTVzBQUGQ5Z014?=
 =?utf-8?B?ZVJzZ1RNOGFya1EwWXRYeFRjM0ZuWlZaa01wclRjdWNMMktrWFE5Z3M5a3Rx?=
 =?utf-8?B?UHEzeGRUdEZlVEtnZmx0V0hNTDNlUEhENjVBV2JGZm1ybXY1bmJkTXBLNmJU?=
 =?utf-8?B?K09DWjI3bE9lYUxTTEorUHZXOHZFaEE3VGg4NGVOZGxBdGVPZkFHc0hlUFpR?=
 =?utf-8?B?QktDYXlZd3R4bDFEVWFjZzNrOW5Ya1BQd2wxMVhaWTRGYlpSQnFmOGlzWm5r?=
 =?utf-8?B?cXdHcllDbzhmak1UNldYbVVVRit3STl1cWk3bUltaUZpS3R4RnA1M05KcUNV?=
 =?utf-8?B?K1h2aHhXTFZoSmNXVWR2MmpxeThieVcvTk1uYXRhMVFVV3ppampIRGhObSt6?=
 =?utf-8?B?djNQNzF6ZE5sRWlXSUtNUDdXaDcvUmZEdVFaaUZacVJzTE1ublJUK2hab1M0?=
 =?utf-8?B?QzZ0T3FqV0cyNXNvQ3RqTlRCMmVlU2crMHNUVERNN2NDbTltdndTd0l1ZHJM?=
 =?utf-8?B?UnFKcmdKVng1Q2ZCQUMydERDWi9pTXdyVzcyV3FoeHVXTUc0eUhadERLcUor?=
 =?utf-8?B?YmliTGxycDd6UjhEUUZWNmhaRFJiUyt0ZGpKeUdFUFZnVEJDcWJIN2g3a0FM?=
 =?utf-8?B?eHYrYkpqcHMyckRHWEdKbHpsd2hzZkNMMlVibHBGODVKNVZYTHY1OVZueWFy?=
 =?utf-8?B?RFE0YzdLZ1U5VUdHbExzdDVsb01wTGZzZWpjT25zb2pFM2pvUSt5T21qNWhF?=
 =?utf-8?B?MlJFVlEzaUR5dTVNSWZVRFJzNmNzc3NCU2RPMDNDZ1IyWkJ4c3VHcUNtSjZn?=
 =?utf-8?B?M1VlVU1vSEZmdGJCWG9NdDNHUEtsaWxYNTZLMVk5NjZHZmJjRmVoOGZiOUJt?=
 =?utf-8?B?eEpiRjJLSmxPQjN2bWI5bGFJb25vbmFyRDJrZjdyUmN0R0hYaDFhNjE5cWRy?=
 =?utf-8?Q?Iw5vAlsE?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 281a061b-9b48-4396-eb91-08daf802c670
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2023 20:46:43.0956
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Jh+zWXRjnN8XnU/dXvrD9oucnVt5mZMsvYWam4qejdEHw43dAM3qD8InEsdD++Lg008EaVyVlVBk2yI6uRPSaaep89kTSo7VKebVFL8SR7E=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR10MB4247
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.219,Aquarius:18.0.923,Hydra:6.0.562,FMLib:17.11.122.1
 definitions=2023-01-16_16,2023-01-13_02,2022-06-22_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 malwarescore=0
 phishscore=0 suspectscore=0 spamscore=0 bulkscore=0 mlxlogscore=999
 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2212070000 definitions=main-2301160152
X-Proofpoint-ORIG-GUID: JeEdyMrcvD2DTVv8_5PvbLdEmKMc4q9w
X-Proofpoint-GUID: JeEdyMrcvD2DTVv8_5PvbLdEmKMc4q9w


On 1/16/23 2:49 AM, Juergen Gross wrote:
> On 25.11.22 07:32, Juergen Gross wrote:
>> All play_dead() functions but xen_pv_play_dead() don't return to the
>> caller.
>>
>> Adapt xen_pv_play_dead() to behave like the other play_dead() variants.
>>
>> Juergen Gross (2):
>>    x86/xen: don't let xen_pv_play_dead() return
>>    x86/xen: mark xen_pv_play_dead() as __noreturn
>>
>>   arch/x86/xen/smp.h      |  2 ++
>>   arch/x86/xen/smp_pv.c   | 17 ++++-------------
>>   arch/x86/xen/xen-head.S |  7 +++++++
>>   tools/objtool/check.c   |  1 +
>>   4 files changed, 14 insertions(+), 13 deletions(-)
>>
>
> Ping?
>
>

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Mon Jan 16 20:53:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 20:53:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 2/5] xen/version: Calculate xen_capabilities_info once
 at boot
Thread-Topic: [PATCH v2 2/5] xen/version: Calculate xen_capabilities_info once
 at boot
Thread-Index: AQHZJ6P9a3SW6Dw/uUGcS4ObfuDLQK6hNfQAgABTzIA=
Date: Mon, 16 Jan 2023 20:52:59 +0000
Message-ID: <164e5248-948e-9467-5b34-1510d32f8d82@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-3-andrew.cooper3@citrix.com>
 <9ff94f87-e3fa-c397-ebf0-b4849cba757d@suse.com>
In-Reply-To: <9ff94f87-e3fa-c397-ebf0-b4849cba757d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 16/01/2023 3:53 pm, Jan Beulich wrote:
> On 14.01.2023 00:08, Andrew Cooper wrote:
>> The arch_get_xen_caps() infrastructure is horribly inefficient, for something
>> that is constant after features have been resolved on boot.
>>
>> Every instance used snprintf() to format constants into a string (which gets
>> shorter when %d gets resolved!), which gets double buffered on the stack.
>>
>> Switch to using string literals with the "3.0" inserted - these numbers
>> haven't changed in 18 years (The Xen 3.0 release was Dec 5th 2005).
>>
>> Use initcalls to format the data into xen_cap_info, which is deliberately not
>> of type xen_capabilities_info_t because a 1k array is a silly overhead for
>> storing a maximum of 77 chars (the x86 version) and isn't liable to need any
>> more space in the forseeable future.
> So I was wondering if once we arrived at the new ABI (and hence the 3.0 one
> is properly legacy) we shouldn't declare Xen 5.0 and then also mark the new
> ABI's availability here by a string including "5.0" where at present we
> expose (only) "3.0".

"the new ABI" is still two things.

The one part is changes to the in-guest ABI which does make it GPA based
(for HVM), but this does need to be broadly backwards compatible.  This
ABI string lives in the PV guest elfnotes (and is ultimately the thing
that distinguishes PV32pae vs PV64), but nowhere interesting for HVM
guests as far as I can see (furthermore, the 3 variations of hvm-3.0-
are bogus).

xen-3.0-x86_64la57 would probably be the least invasive way to extend PV
support to 5-level paging.

The other part is a stable tools API/ABI.  This can have any kind of
interface we choose, and frankly there are better interfaces than this
stringly typed one.


"xen-3.0" is even hardcoded in libelf.  I can't foresee a good reason to
bump 3 -> 5 and break all current PV guests.

>> If Xen had strncpy(), then the hunk in do_xen_version() could read:
>>
>>   if ( deny )
>>      memset(info, 0, sizeof(info));
>>   else
>>      strncpy(info, xen_cap_info, sizeof(info));
>>
>> to avoid double processing the start of the buffer, but given the ABI (must
>> write 1k chars into the guest), I cannot see any way of taking info off the
>> stack without some kind of strncpy_to_guest() API.
> How about using clear_guest() for the 1k range, then copy_to_guest() for
> merely the string? Plus - are we even required to clear the buffer past
> the nul terminator?

Well, we have previously always copied 1k bytes.  But this has always
been a NUL terminated API of a string persuasion, so I find it hard to
believe that any caller cares beyond the NUL.

Because of safe_strcpy(), xen_cap_info is guaranteed to be NUL
terminated, so if we don't care about padding the buffer with extra
zeroes, we don't even need the clear_guest().

Also, similar reasoning would apply to XENVER_cmdline which is typically
rather less than 1k in length (at least it's not on the stack, but it is
still an excessive copy).

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 21:10:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 21:10:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175924-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175924: regressions - trouble: blocked/broken/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Jan 2023 21:10:10 +0000

flight 175924 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175924/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    4 days
Failing since        175748  2023-01-12 20:01:56 Z    4 days   18 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    2 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we only check that part of .text.header is covered by
    the identity mapping. However, this doesn't take into account the
    literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But the section is below a page size (i.e. 4KB), so take a
    shortcut and check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compare to the lines. This doesn't
    seem warrant, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
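
    A minimal sketch of the read/write behaviour this commit describes, with
    the MSR state simulated in plain C. Names and the bit layout here are
    illustrative; the real logic lives in the VMX MSR intercept handlers.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in: NULL on CPUs (SPR and later) that lack
     * model-specific LBR information. */
    struct lbr_info { const char *name; };

    static const struct lbr_info *model_specific_lbr; /* NULL in this sketch */
    static uint64_t dbg_ctl;                          /* virtual MSR_DEBUGCTL */

    #define DBG_CTL_LBR (1ULL << 0)

    /* Write side: per the Arch LBR spec, CPUs without model-specific LBR
     * implement the LBR bit by discarding writes, so the hypervisor does
     * the same instead of crashing the domain. */
    static void wrmsr_dbg_ctl(uint64_t val)
    {
        if ( !model_specific_lbr )
            val &= ~DBG_CTL_LBR;
        dbg_ctl = val;
    }

    /* Read side: the discarded bit always reads back as 0. */
    static uint64_t rdmsr_dbg_ctl(void)
    {
        return dbg_ctl;
    }

    int main(void)
    {
        wrmsr_dbg_ctl(DBG_CTL_LBR | (1ULL << 1));
        printf("DEBUGCTL reads back as %#llx\n",
               (unsigned long long)rdmsr_dbg_ctl());
        assert((rdmsr_dbg_ctl() & DBG_CTL_LBR) == 0);
        return 0;
    }
    ```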

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it
    is in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removes the final instances of obfuscation via the
    DO() macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 21:48:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 21:48:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479003.742566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHXKc-0008R7-9c; Mon, 16 Jan 2023 21:47:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479003.742566; Mon, 16 Jan 2023 21:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHXKc-0008R0-6i; Mon, 16 Jan 2023 21:47:54 +0000
Received: by outflank-mailman (input) for mailman id 479003;
 Mon, 16 Jan 2023 21:47:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NbzH=5N=citrix.com=prvs=37389537a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHXKa-0008Qu-Ko
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 21:47:52 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6aead30e-95e7-11ed-91b6-6bf2151ebd3b;
 Mon, 16 Jan 2023 22:47:50 +0100 (CET)
Received: from mail-dm6nam11lp2174.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jan 2023 16:47:46 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN8PR03MB4995.namprd03.prod.outlook.com (2603:10b6:408:d8::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 21:47:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.019; Mon, 16 Jan 2023
 21:47:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6aead30e-95e7-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Daniel Smith
	<dpsmith@apertussolutions.com>, Jason Andryuk <jandryuk@gmail.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_*
 subops
Thread-Topic: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_*
 subops
Thread-Index: AQHZJ6P9s5HCoeqlF020fKh7orvbwa6hOboAgABfUgA=
Date: Mon, 16 Jan 2023 21:47:44 +0000
Message-ID: <2d3dc0ef-4920-2bc8-6ffd-6b954fc8c68a@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-4-andrew.cooper3@citrix.com>
 <5ebe5337-f84e-12a1-e8a0-92832100946d@suse.com>
In-Reply-To: <5ebe5337-f84e-12a1-e8a0-92832100946d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <51CDAE48E337E747AC585B5C3596845D@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 081a235e-0111-4ae1-615d-08daf80b4cd8
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Jan 2023 21:47:44.3611
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 3olt3H39/jp+BMxJq8Az0UnwZy1tw2yJvQGEAm/Njs3b4Rj7IAJCUQO7BEwyBfSaeAOPVYTks67tANyJtVqrTZipyFf6j6BFoIi3EjMSVJM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB4995

On 16/01/2023 4:06 pm, Jan Beulich wrote:
> On 14.01.2023 00:08, Andrew Cooper wrote:
>> @@ -470,6 +471,59 @@ static int __init cf_check param_init(void)
>>  __initcall(param_init);
>>  #endif
>>
>> +static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>> +{
>> +    struct xen_varbuf user_str;
>> +    const char *str = NULL;
> This takes away from the compiler any chance of reporting "str" as
> uninitialized

Yes...

It is also the classic false positive pattern in GCC 4.x which is still
supported despite attempts to retire it.

>> +    if ( sz > KB(64) ) /* Arbitrary limit.  Avoid long-running operations. */
>> +        return -E2BIG;
>> +
>> +    if ( guest_handle_is_null(arg) ) /* Length request */
>> +        return sz;
>> +
>> +    if ( copy_from_guest(&user_str, arg, 1) )
>> +        return -EFAULT;
>> +
>> +    if ( user_str.len == 0 )
>> +        return -EINVAL;
>> +
>> +    if ( sz > user_str.len )
>> +        return -ENOBUFS;
> The earlier of these last two checks makes it that one can't successfully
> call this function when the size query has returned 0.

This is actually a check that the build_id path already has.  I did
consider it somewhat dubious to special case 0 here, but it needs to
stay for the following patch to have no functional change.

>> --- a/xen/include/public/version.h
>> +++ b/xen/include/public/version.h
>> @@ -19,12 +19,20 @@
>>  /* arg == NULL; returns major:minor (16:16). */
>>  #define XENVER_version      0
>>
>> -/* arg == xen_extraversion_t. */
>> +/*
>> + * arg == xen_extraversion_t.
>> + *
>> + * This API/ABI is broken.  Use XENVER_extraversion2 instead.
> Personally I don't like these "broken" that you're adding. These interfaces
> simply are the way they are, with certain limitations. We also won't be
> able to remove the old variants (except in the new ABI), so telling people
> to avoid them provides us about nothing.

Incorrect.

First, the breakage here isn't only truncation; it's char-signedness
with data that's not guaranteed to be ASCII text.  Yet another
demonstration of why C is an inappropriate way of defining an ABI.

Secondly, it is unreasonable for ABI errors and correction information
such as this not to be documented *somewhere*.  It should live with the
API technical reference, which happens to be exactly (and only) here.

These comments won't fix existing implementations.  What they will do is
cause anyone new implementing guests, or anyone who re-syncs the
headers, to notice and hopefully take mitigating action.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 16 22:14:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Jan 2023 22:14:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479009.742577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHXkg-0003LW-Da; Mon, 16 Jan 2023 22:14:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479009.742577; Mon, 16 Jan 2023 22:14:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHXkg-0003LP-Aj; Mon, 16 Jan 2023 22:14:50 +0000
Received: by outflank-mailman (input) for mailman id 479009;
 Mon, 16 Jan 2023 22:14:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NbzH=5N=citrix.com=prvs=37389537a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHXke-0003LH-0u
 for xen-devel@lists.xenproject.org; Mon, 16 Jan 2023 22:14:48 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2cc71389-95eb-11ed-b8d0-410ff93cb8f0;
 Mon, 16 Jan 2023 23:14:43 +0100 (CET)
Received: from mail-bn8nam12lp2168.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 Jan 2023 17:14:40 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB6019.namprd03.prod.outlook.com (2603:10b6:610:be::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Mon, 16 Jan
 2023 22:14:38 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.019; Mon, 16 Jan 2023
 22:14:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cc71389-95eb-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 4/5] xen/version: Fold build_id handling into
 xenver_varbuf_op()
Thread-Topic: [PATCH v2 4/5] xen/version: Fold build_id handling into
 xenver_varbuf_op()
Thread-Index: AQHZJ6P7YcGbTGh3kUuVUrqx0/v8ha6hO92AgABksoA=
Date: Mon, 16 Jan 2023 22:14:37 +0000
Message-ID: <4639aa6c-cef8-3434-1607-fcb4b563a991@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-5-andrew.cooper3@citrix.com>
 <f92334b1-7819-d638-fabd-91baca711615@suse.com>
In-Reply-To: <f92334b1-7819-d638-fabd-91baca711615@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?VnZzVWt1Z1d5cXQ4bkNSK3p2TmluZFB0TStIOEJieUd2Und1dlJibXZhTS8z?=
 =?utf-8?B?QzBMUlBjQ0JWZHdDS3hwenZFWXcxanpFckJ5R1JMSWxLOTVHcE5jaHEvNnk0?=
 =?utf-8?B?Q0pHWnYrQm9JOXlSMkJITkpqOVppZzFLVXJTeWJCTTVULzRrTE9Pc3MwRkFB?=
 =?utf-8?B?TzB0dnVDT1Q1aEtOZ1R3a3ZsdzNBQjJ6ZkM1MjdKSFJTSXZFWStVdUhyMXdl?=
 =?utf-8?B?VlYySTg3b29pTFh4RmRuQldwTFNRbWk2ZWszZ055RDNCbFlQbHBIemgzd1M5?=
 =?utf-8?B?RnNOQWdaQy8wUTVDWUFYbUZ4MmpWME9LQmJWMkNTWnRDMWVQbWIvTytUL2lJ?=
 =?utf-8?B?dWxHYitqZmRuTlFyR01iTlhtNnV5ZmhuNHo4eE45UXpPeC9pOTZVQWxtUFZF?=
 =?utf-8?B?TTJaVk4zdklSRmZXenNuOVBEazQyYkJlU1JlcFZEWnJHUmVJdndjSUJ0UXZ2?=
 =?utf-8?B?cXdLZkcyVk5objBtdDVIVlVlT2IxMFZ3WWxCSVh3dVBhdC85ZXBwR1NuQXA1?=
 =?utf-8?B?MGpRdExiK09HNWorem9UNGUxZU5IalFCYjAyY29uQ0R3ZWpsQnQ1dUhaazh5?=
 =?utf-8?B?SWZ0ZXg1SDRERW1uVlNramhCZzNWMWp1bG00d1JRTjF1azhaNGkrY1NiNm9x?=
 =?utf-8?B?N2h3WDl6SURJKy9kN0NJcHJCSjRweEt6K3JabkYzZ1dScnBtVjR1WndwM0hG?=
 =?utf-8?B?cHg2YWFYVVAvQWYrRGovWGNpbVlqOUR5dlU3WGhYQnpydFcwcXlYMmVZWmdQ?=
 =?utf-8?B?U3pFOGVSWnBoNG5BQXpSdWtPVEt2RnhoMFBLRTNBM3FxODFpMjJlbzlMbjhl?=
 =?utf-8?B?RjRrVVFkRDE3SUlqaTdLK05YbDdFczJtaWRHUTREZFhDQVpHZ3o4bjhEY1h5?=
 =?utf-8?B?empRVkovLyswb0g3VHhDenYrNUs0a2p0dmJwSG13WDVnUjR6QjBYWFFFdDhB?=
 =?utf-8?B?d2ZyVlFrMGZSRmwzNWJiM3lacW9za0ZHSGJGRytzUGRTT1lDSlQveTcrbEJa?=
 =?utf-8?B?RGUwT09uQTEzZG5XcDgycGNWalBXVmIxeEhaY1B2Vkw0N1UxN3FVOFB3NjNp?=
 =?utf-8?B?dE0zRHplL0Y2WFVJVjc1Y3o0ODNQVnk1OHB1b1VjSTZjcDZBK3krSHF3VE9P?=
 =?utf-8?B?L29kcjZ6KzFhdjdUOUErQndMejNONGw4aGlFK0E3RitQdzhPRUJCbGFGMWNk?=
 =?utf-8?B?ekZwVzhPZG5iQlJTVzRRZVROOWlhd0R1TmtHK1NacWVZYUtnMHpNWlJJVE8z?=
 =?utf-8?B?SkdKbm10c2NKSFk5WkZWckNsM2dmZGNvZ0lSNjBKdC9nRlBaZzlBa0xNVEhR?=
 =?utf-8?B?QVpFeGhCYitZVjliaGpsMUpPK1JOd2NyNHpQZnV4Zm1JMnQyb2VEWlNIS25P?=
 =?utf-8?B?YjBmMW1UcUVjM0FER0JnNktIN2NBOGRJMFlsQzVQaDBWcVdnUHZkcnYvZGRl?=
 =?utf-8?B?WGpQOFpLZ0NVcWdDSVdyZVI4V2tqWGt5dHdMVExqWnVTNTI0U2pHeVhGR3hQ?=
 =?utf-8?B?aEZHcHEvSXhlend1eHFLczgyYXhyakRub1NWVjZhS0dhOU9QQ2V2MlBhdkR0?=
 =?utf-8?B?dWliVW9XeW1VdFQ4Yy9YUURVM1pBbVo0TkJ1aUJyRWl1NkEyRmlKb29OK0RX?=
 =?utf-8?B?Y203dXZlY2Nreit0Q2VBaE5EaHFSZis2OGhTeXJJUzIwMUFHWVRVRG1sSzBr?=
 =?utf-8?B?cVd4enZ5U1JkZmVhU0ZKUDhiVWwyUlorRUw3WllPNHF2aUtvM25rT3hDbmF1?=
 =?utf-8?B?MkZSTWVqNWpLaHhFMVAvRkxoMTlEN0s1UERmNnptNVZUMUpVSDNWemdUd3NG?=
 =?utf-8?B?ei9SYWdHWjRxVU9TTm5wdkJLdUtrdmhDN24rQzgzMEFDcUdYakJEWmoveEJl?=
 =?utf-8?B?VHNhY0hUb05TMW5mOEloNllaT0xqZklud3I0NjZWQ0FIaEQ2WUE4MWE5QytC?=
 =?utf-8?B?N2NCa2ZCd3NCN1BNVnFNVEN5aVBjME81OHdkRGFiWHFHNTNvR2p5c1VJaGRk?=
 =?utf-8?B?bGxaZ0duemZTTk5JOFd4SkJlWnN4STlya3lQMjBBRjhMNzFodldaK0ExNnVM?=
 =?utf-8?B?SlcrR1VHRFhxS2haMENHUDJWUXc1N3AxVmhxTFRDR0h5TUF4TnIrK3dkVmdk?=
 =?utf-8?Q?v0RtqP/11SCwyGogDoinC0X58?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <2663C71880A7C24381D9959F3EDB6319@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 425dbd16-96a5-4e49-07e8-08daf80f0e7f
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Jan 2023 22:14:37.7125
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 6jHOriYa7pyO1uMQO/bZAkxfKE54OTUkWxOv2dyykPqa02T8LCNmyLJvAQ1HhGH8IC5iBrxnwowpRpmi6KkvbNQZMOj6Tdir/eTX38Dewwk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6019

On 16/01/2023 4:14 pm, Jan Beulich wrote:
> On 14.01.2023 00:08, Andrew Cooper wrote:
>> struct xen_build_id and struct xen_varbuf are identical from an ABI point of
>> view, so XENVER_build_id can reuse xenver_varbuf_op() rather than having its
>> own almost identical copy of the logic.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>> --- a/xen/common/kernel.c
>> +++ b/xen/common/kernel.c
>> @@ -476,9 +476,22 @@ static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>      struct xen_varbuf user_str;
>>      const char *str = NULL;
>>      size_t sz;
>> +    int rc;
> Why is this declared here, yet ...
>
>>      switch ( cmd )
>>      {
>> +    case XENVER_build_id:
>> +    {
>> +        unsigned int local_sz;
> ... this declared here?

Because rc is more likely to be used outside in the future, and...

>  Both could live in switch()'s scope,

... this would have to be reverted tree-wide to use
trivial-initialisation hardening, which we absolutely should be doing by
default already.

I was sorely tempted to correct xen_build_id() to use size_t, but I
don't have time to correct everything which is wrong here.  That can
wait until later clean-up.

Alternatively, this is a pattern we have in quite a few places,
returning a {ptr, sz} pair.  All architectures we compile for (and even
x86 32bit given a suitable code-gen flag) are capable of returning at
least 2 GPRs worth of data (ARM can do 4), so switching to some kind of

struct pair {
    void *ptr;
    size_t sz;
};

return value would improve the code generation (and performance for that
matter) across the board by avoiding unnecessary spills of
pointers/sizes/secondary error information to the stack.

The wins for hvm get/set_segment_register() are modest but absolutely
worthwhile (and I notice I still haven't got those patches published.
/sigh).

>> +        rc = xen_build_id((const void **)&str, &local_sz);
>> +        if ( rc )
>> +            return rc;
>> +
>> +        sz = local_sz;
>> +        goto have_len;
> Personally I certainly dislike "goto" in general, and I thought the
> common grounds were to permit its use in error handling (only).

That's not written in CODING_STYLE, nor has it (to my knowledge) ever
been an expressed view on xen-devel.

I don't use goto's gratuitously, and this one isn't.  Just try and write
this patch without a goto and then judge which version is cleaner/easier
to follow.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 00:40:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 00:40:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479016.742588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHa1H-0001Tm-9i; Tue, 17 Jan 2023 00:40:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479016.742588; Tue, 17 Jan 2023 00:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHa1H-0001Tf-5D; Tue, 17 Jan 2023 00:40:07 +0000
Received: by outflank-mailman (input) for mailman id 479016;
 Tue, 17 Jan 2023 00:40:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHa1F-0001Ov-TF; Tue, 17 Jan 2023 00:40:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHa1F-0007Ws-PF; Tue, 17 Jan 2023 00:40:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHa1F-0003Vt-EW; Tue, 17 Jan 2023 00:40:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHa1F-0004p7-E4; Tue, 17 Jan 2023 00:40:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0Ll51UqqboHSV459Xyj5MeKQJRGZJUz8I0yaFlCEit0=; b=wE+kKmeO/CHjegxY32O4PNd1QX
	nridZHm2RediySEo/r8lEmWJWxO+Y/N0elcA7KAAOTMl8nqopXBhX4rIKoOixTsMKH9Q8HpTlUp+T
	pLnLlScJmsrHHMuTWLCNo+qOtUssgkafwrWIDDQIkqBY9DEj827g+POzTOo42foJm4MA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175923-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175923: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-arm64:xen-build:fail:regression
    xen-unstable:build-arm64-xsm:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 00:40:05 +0000

flight 175923 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175923/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-arm64                   6 xen-build                fail REGR. vs. 175734
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    4 days
Failing since        175739  2023-01-12 09:38:44 Z    4 days   11 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    3 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). A change to .config followed by an
    incremental (re-)build is covered by the respective .*.cmd no longer
    matching the command to be used, resulting in the necessary
    re-creation of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
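
    The last paragraph relies on kbuild-style command tracking: the
    command used to generate a file is saved in a dot-file and compared
    on the next build, so the rule re-runs whenever the command changes.
    A minimal sketch of that idea (rule and variable names are
    illustrative, not Xen's actual build code; in practice the command
    embeds CONFIG_*-derived flags, which is how a .config change
    invalidates it):

    ```make
    # Regenerate $@ whenever the generation command itself changes:
    # the command used last time is recorded in .$@.cmd and compared.
    cmd_compat = $(CC) -E -P $< -o $@        # illustrative command

    %.h: %.h.in FORCE
	@if [ "$(cmd_compat)" != "$$(cat .$@.cmd 2>/dev/null)" ]; then \
	    $(cmd_compat) && printf '%s' "$(cmd_compat)" > .$@.cmd; \
	fi

    FORCE:
    ```

    When the comparison fails, the header - possibly just an empty
    stub - is re-created, matching the behaviour the message describes.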

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function, drop a non-conforming and (by now) not
    very useful comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
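
    The zero-check pattern the message refers to can be illustrated with
    a standalone sketch (array size, names, and types are invented for
    illustration; Xen's real code works on struct vcpu and drops shadow
    references):

    ```c
    #include <assert.h>

    #define SHADOW_SLOTS 4  /* illustrative; the real count varies by mode */

    static unsigned long shadow_table[SHADOW_SLOTS];

    /* Walk the whole array and skip unused (zero) slots, so one common
     * function copes with any number of valid entries per paging mode. */
    static int detach_old_tables(void)
    {
        int detached = 0;

        for (int i = 0; i < SHADOW_SLOTS; ++i) {
            if (!shadow_table[i])
                continue;           /* slot not in use in this mode */
            /* Xen would drop a shadow reference here. */
            shadow_table[i] = 0;
            detached++;
        }
        return detached;
    }

    int main(void)
    {
        shadow_table[0] = 0x1000;   /* e.g. a mode that uses one slot */
        assert(detach_old_tables() == 1);
        assert(detach_old_tables() == 0);   /* nothing left to detach */
        return 0;
    }
    ```

    Because empty slots are simply skipped, nothing about the function
    depends on the paging mode, which is why a single common instance
    suffices and the function-pointer indirection can go away.
    
    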

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there, also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
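
    The effect described - hashing only the bytes a frame number can
    actually occupy - can be sketched as follows (the mixing step and
    names are illustrative, not Xen's exact hash; with PADDR_BITS = 52
    and a 12-bit page shift, frame numbers fit in 40 bits, i.e. 5 bytes
    instead of 8, matching the iteration count in the message):

    ```c
    #include <assert.h>
    #include <stdint.h>

    #define PADDR_BITS  52   /* illustrative x86 value */
    #define PAGE_SHIFT  12

    /* Frame numbers are bounded by PADDR_BITS - PAGE_SHIFT bits, so only
     * ceil(40 / 8) = 5 bytes can ever be non-zero; the top 3 bytes of a
     * 64-bit value would only ever feed zeros into the hash. */
    #define HASH_BYTES  ((PADDR_BITS - PAGE_SHIFT + 7) / 8)

    static unsigned int hash_frame(uint64_t n)
    {
        unsigned int hash = 0;

        for (unsigned int i = 0; i < HASH_BYTES; ++i, n >>= 8)
            hash = hash * 33 + (uint8_t)n;   /* illustrative mixing step */
        return hash;
    }

    int main(void)
    {
        /* Bits at or above bit 40 never reach the hash. */
        assert(hash_frame(0x12345) == hash_frame(0x12345 | (1ULL << 50)));
        assert(hash_frame(1) != hash_frame(2));
        assert(HASH_BYTES == 5);   /* 8 -> 5 loop iterations */
        return 0;
    }
    ```

    Since the skipped bytes are always zero for any in-range input, the
    shorter loop produces hashes of the same quality.
    
    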

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
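
    The substitution amounts to replacing a frame-table lookup with a
    single comparison, because the arrays listed only ever hold a real
    MFN or the INVALID_MFN sentinel. A self-contained sketch (the
    mfn_valid() body here is an opaque stand-in for the real, more
    expensive check; INVALID_MFN follows Xen's all-ones convention):

    ```c
    #include <assert.h>

    #define INVALID_MFN (~0UL)   /* all-ones sentinel */

    /* The real mfn_valid() consults the frame table / memory map;
     * modelled here as an illustrative bound check. */
    static int mfn_valid(unsigned long mfn)
    {
        return mfn < 0x100000UL;
    }

    /* For slots that only ever hold a valid MFN or INVALID_MFN, a cheap
     * equality test gives the same answer as mfn_valid(). */
    static int slot_in_use(unsigned long smfn)
    {
        return smfn != INVALID_MFN;
    }

    int main(void)
    {
        assert(slot_in_use(0x1234) == mfn_valid(0x1234));
        assert(slot_in_use(INVALID_MFN) == 0);
        return 0;
    }
    ```

    The equivalence holds only because of the invariant stated in the
    message; for MFNs of arbitrary provenance, mfn_valid() would still
    be required.
    
    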

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling either of them would cause Xen to fail
    to compile, the options are not visible to the user and are enabled by
    default on X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
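
    Hidden, default-on options of this kind correspond to promptless
    Kconfig symbols, along these lines (an illustrative fragment; the
    exact dependencies and help text in Xen's Kconfig may differ):

    ```
    # No prompt string after "bool", so the options are invisible to the
    # user and cannot be turned off via menuconfig.
    config AMD_IOMMU
    	bool
    	depends on X86
    	default y

    config INTEL_IOMMU
    	bool
    	depends on X86
    	default y
    ```

    Once the code builds with either option disabled, adding a prompt
    string is all that is needed to expose the choice to users.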

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack so that execution can enter
    the C environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 01:21:31 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175925-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175925: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-amd64:<job status>:broken:regression
    linux-linus:build-amd64-xsm:<job status>:broken:regression
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:build-amd64:host-build-prep:fail:regression
    linux-linus:build-amd64-xsm:host-build-prep:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-arm64:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-arm64-xsm:xen-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    linux=5dc4c995db9eb45f6373a956eb1f69460e69e6d4
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 01:21:24 +0000

flight 175925 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175925/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 build-amd64                   5 host-build-prep          fail REGR. vs. 173462
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 173462
 build-i386-xsm                6 xen-build                fail REGR. vs. 173462
 build-arm64                   6 xen-build                fail REGR. vs. 173462
 build-i386                    6 xen-build                fail REGR. vs. 173462
 build-arm64-xsm               6 xen-build                fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd11-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd12-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a

version targeted for testing:
 linux                5dc4c995db9eb45f6373a956eb1f69460e69e6d4
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  101 days
Failing since        173470  2022-10-08 06:21:34 Z  100 days  208 attempts
Testing same since   175919  2023-01-16 07:09:44 Z    0 days    2 attempts

------------------------------------------------------------
3368 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  broken  
 build-arm64                                                  fail    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-freebsd11-amd64                             blocked 
 test-amd64-amd64-freebsd12-amd64                             blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-amd64-xl-vhd                                      blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

(No revision log; it would be 514969 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 02:30:33 2023
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH] xen/arm: Harden setup_frametable_mappings
Thread-Topic: [PATCH] xen/arm: Harden setup_frametable_mappings
Thread-Index: AQHZKbkxN5rEygeNfEqj+XfEFX++Ha6h0GPg
Date: Tue, 17 Jan 2023 02:29:58 +0000
Message-ID:
 <AS8PR08MB79913948C3C92CEBD6657DC992C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
In-Reply-To: <20230116144106.12544-1-michal.orzel@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US

Hi Michal,

> -----Original Message-----
> Subject: [PATCH] xen/arm: Harden setup_frametable_mappings
>
> The amount of supported physical memory depends on the frametable size
> and the number of struct page_info entries that can fit into it. Define
> a macro PAGE_INFO_SIZE to store the current size of the struct page_info
> (i.e. 56B for arm64 and 32B for arm32) and add a sanity check in
> setup_frametable_mappings to be notified whenever the size of the
> structure changes. Also call a panic if the calculated frametable_size
> exceeds the limit defined by FRAMETABLE_SIZE macro.
>
> Update the comments regarding the frametable in asm/config.h and take
> the opportunity to remove unused macro FRAMETABLE_VIRT_END on arm32.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
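For archive readers, the hardening described above reduces to roughly the
following sketch. The macro names follow the commit message, but the values
and the helper are illustrative assumptions, not the actual patch:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the Xen definitions named above; the real
 * values live in asm/config.h, and the real check sits in
 * setup_frametable_mappings(). */
#define PAGE_INFO_SIZE   56ULL          /* sizeof(struct page_info) on arm64 */
#define FRAMETABLE_SIZE  (32ULL << 30)  /* fixed VA window; assumed value */

/* Returns false when a frametable holding nr_pdxs struct page_info
 * entries would overflow the fixed FRAMETABLE_SIZE virtual window,
 * i.e. the condition on which the patch calls panic(). */
bool frametable_fits(uint64_t nr_pdxs)
{
    return nr_pdxs * PAGE_INFO_SIZE <= FRAMETABLE_SIZE;
}
```

The sanity check on PAGE_INFO_SIZE itself would be a build-time assertion, so
any growth of struct page_info forces the limit to be re-evaluated.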

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I've also used XTP to test this patch on FVP in both arm32 and arm64 execution
modes, and the patch works as expected, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 03:11:00 2023
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Thread-Topic: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Thread-Index: AQHZKbkuhF2CydVKlkSWMcq+jFNCca6h0FvQ
Date: Tue, 17 Jan 2023 03:10:28 +0000
Message-ID:
 <AS8PR08MB7991378797D89AF18F735C7C92C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <20230116144106.12544-2-michal.orzel@amd.com>
In-Reply-To: <20230116144106.12544-2-michal.orzel@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US

Hi Michal,

> -----Original Message-----
> Subject: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
>
> The direct mapped area occupies L0 slots from 256 to 265 (i.e. 10 slots),
> resulting in 5TB (512GB * 10) of virtual address space. However, due to
> incorrect slot subtraction (we take 9 slots into account) we set
> DIRECTMAP_SIZE to 4.5TB instead. Fix it.
>
> Fixes: 5263507b1b4a ("xen: arm: Use a direct mapping of RAM on arm64")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
>  xen/arch/arm/include/asm/config.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/include/asm/config.h
> b/xen/arch/arm/include/asm/config.h
> index 0fefed1b8aa9..16213c8b677f 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -157,7 +157,7 @@
>  #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>
>  #define DIRECTMAP_VIRT_START   SLOT0(256)
> -#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
> +#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))

From the commit message "L0 slots from 256 to 265 (i.e. 10 slots)", I think
the actual range is [256, 265], so using "(265 - 256 + 1)" here would probably
be a bit better. To me, 266 reads like a magic number, since 266 is not in the
range. But this is my personal taste, and I am open to discussion if you or
the maintainers have other opinions.

Maybe we could also put a comment on top of the macro to explain this
calculation.
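For what it is worth, the slot arithmetic can be checked in isolation. The
macro names below mirror the patch, but the values are this sketch's
assumptions, not Xen code:

```c
#include <stdint.h>

/* One L0 slot covers 512 GiB of VA; the direct map uses slots 256..265
 * inclusive, i.e. 10 slots. */
#define SLOT0_ENTRY_SIZE     (512ULL << 30)
#define DIRECTMAP_SIZE_OLD   (SLOT0_ENTRY_SIZE * (265 - 256))      /* 9 slots */
#define DIRECTMAP_SIZE_FIXED (SLOT0_ENTRY_SIZE * (265 - 256 + 1))  /* 10 slots */
```

With 512 GiB per slot, the original expression yields 4.5 TiB and the fixed
one 5 TiB, matching the commit message.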

I did test this patch on FVP using XTP in both arm32 and arm64 execution
modes, and the patch works as expected, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 04:42:21 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175926-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175926: regressions - trouble: blocked/broken/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-build-prep:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 04:42:06 +0000

flight 175926 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175926/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   5 host-build-prep          fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    4 days
Failing since        175748  2023-01-12 20:01:56 Z    4 days   19 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    2 days   17 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  broken  
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing a memory size in hex without the 0x prefix can be misleading, so
    add it. Also take the opportunity to adhere to the 80-character line
    length limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header is
    part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But the section is below a page size (i.e. 4KB), so take a
    shortcut and check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it
    is in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 07:04:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 07:04:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479055.742642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHg1D-0005RW-7p; Tue, 17 Jan 2023 07:04:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479055.742642; Tue, 17 Jan 2023 07:04:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHg1D-0005RP-4s; Tue, 17 Jan 2023 07:04:27 +0000
Received: by outflank-mailman (input) for mailman id 479055;
 Tue, 17 Jan 2023 07:04:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHg1B-0005RJ-Kh
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 07:04:25 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2b3d87d3-9635-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 08:04:22 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D856A37FB9;
 Tue, 17 Jan 2023 07:04:21 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AD3341390C;
 Tue, 17 Jan 2023 07:04:21 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id DZXUKHVIxmOmKgAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 07:04:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b3d87d3-9635-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673939061; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MdkjSkzm05RohS38Rjt/aDDs5GVqQVYmitRleVxiinY=;
	b=XeUZOtGH+yWGLqh9SNMWgJi/8VmlchyK7y0ykgYEnSNgOcyOjBjNQ8pF6d22mxQbYDAFhJ
	oD9H7axJMLGV5QkvuNloq1OGTr++xtgq1Irm55KTE/snqEnxOXHJNkx0zzwpKJJFEl1U+r
	aEfdMDQuNZqRUL14MaI3b71vJQYMGnY=
Message-ID: <d800f7f9-ab9a-93d8-289c-2154d2d55939@suse.com>
Date: Tue, 17 Jan 2023 08:04:21 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3] drivers/xen/hypervisor: Expose Xen SIF flags to
 userspace
Content-Language: en-US
To: Per Bilse <per.bilse@citrix.com>, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 "moderated list:XEN HYPERVISOR INTERFACE" <xen-devel@lists.xenproject.org>
References: <20230103130213.2129753-1-per.bilse@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230103130213.2129753-1-per.bilse@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------VBISEoQwMG6EX6bDl5I0EGSt"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------VBISEoQwMG6EX6bDl5I0EGSt
Content-Type: multipart/mixed; boundary="------------9j8AUhJFMzy1LYMUehCXOwxp";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Per Bilse <per.bilse@citrix.com>, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 "moderated list:XEN HYPERVISOR INTERFACE" <xen-devel@lists.xenproject.org>
Message-ID: <d800f7f9-ab9a-93d8-289c-2154d2d55939@suse.com>
Subject: Re: [PATCH v3] drivers/xen/hypervisor: Expose Xen SIF flags to
 userspace
References: <20230103130213.2129753-1-per.bilse@citrix.com>
In-Reply-To: <20230103130213.2129753-1-per.bilse@citrix.com>

--------------9j8AUhJFMzy1LYMUehCXOwxp
Content-Type: multipart/mixed; boundary="------------BYVuKDEJh9JsAxr30ovnRW3y"

--------------BYVuKDEJh9JsAxr30ovnRW3y
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDMuMDEuMjMgMTQ6MDIsIFBlciBCaWxzZSB3cm90ZToNCj4gL3Byb2MveGVuIGlzIGEg
bGVnYWN5IHBzZXVkbyBmaWxlc3lzdGVtIHdoaWNoIHByZWRhdGVzIFhlbiBzdXBwb3J0DQo+
IGdldHRpbmcgbWVyZ2VkIGludG8gTGludXguICBJdCBoYXMgbGFyZ2VseSBiZWVuIHJlcGxh
Y2VkIHdpdGggbW9yZQ0KPiBub3JtYWwgbG9jYXRpb25zIGZvciBkYXRhICgvc3lzL2h5cGVy
dmlzb3IvIGZvciBpbmZvLCAvZGV2L3hlbi8gZm9yDQo+IHVzZXIgZGV2aWNlcykuICBXZSB3
YW50IHRvIGNvbXBpbGUgeGVuZnMgc3VwcG9ydCBvdXQgb2YgdGhlIGRvbTAga2VybmVsLg0K
PiANCj4gVGhlcmUgaXMgb25lIGl0ZW0gd2hpY2ggb25seSBleGlzdHMgaW4gL3Byb2MveGVu
LCBuYW1lbHkNCj4gL3Byb2MveGVuL2NhcGFiaWxpdGllcyB3aXRoICJjb250cm9sX2QiIGJl
aW5nIHRoZSBzaWduYWwgb2YgInlvdSdyZSBpbg0KPiB0aGUgY29udHJvbCBkb21haW4iLiAg
VGhpcyB1bHRpbWF0ZWx5IGNvbWVzIGZyb20gdGhlIFNJRiBmbGFncyBwcm92aWRlZA0KPiBh
dCBWTSBzdGFydC4NCj4gDQo+IFRoaXMgcGF0Y2ggZXhwb3NlcyBhbGwgU0lGIGZsYWdzIGlu
IC9zeXMvaHlwZXJ2aXNvci9zdGFydF9mbGFncy8gYXMNCj4gYm9vbGVhbiBmaWxlcywgb25l
IGZvciBlYWNoIGJpdCwgcmV0dXJuaW5nICcxJyBpZiBzZXQsICcwJyBvdGhlcndpc2UuDQo+
IFR3byBrbm93biBmbGFncywgJ3ByaXZpbGVnZWQnIGFuZCAnaW5pdGRvbWFpbicsIGFyZSBl
eHBsaWNpdGx5IG5hbWVkLA0KPiBhbmQgYWxsIHJlbWFpbmluZyBmbGFncyBjYW4gYmUgYWNj
ZXNzZWQgdmlhIGdlbmVyaWNhbGx5IG5hbWVkIGZpbGVzLA0KPiBhcyBzdWdnZXN0ZWQgYnkg
QW5kcmV3IENvb3Blci4NCj4gDQo+IFNpZ25lZC1vZmYtYnk6IFBlciBCaWxzZSA8cGVyLmJp
bHNlQGNpdHJpeC5jb20+DQoNClJldmlld2VkLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NA
c3VzZS5jb20+DQoNCg0KSnVlcmdlbg0K
--------------BYVuKDEJh9JsAxr30ovnRW3y
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------BYVuKDEJh9JsAxr30ovnRW3y--

--------------9j8AUhJFMzy1LYMUehCXOwxp--

--------------VBISEoQwMG6EX6bDl5I0EGSt
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPGSHUFAwAAAAAACgkQsN6d1ii/Ey+A
GQf+NGqEepBADdoTrIer/vgh3nTiZqCxC01/ltV/V5KGNNC/EmeDnfBSJIKiG88lgVn/ON5dgKCn
q3hqEZ1T80iPutavOMYQH4pBcGebcaMo9doNSqnC1dVvBpEzJ3gkhsxIQDHRvG8HUHB6u9SFRuyW
ybU/xhcZDKaZCpviv0DkSMJKr9LqOBt5Rgx8MWeNKPqNq8AzcQQLFUuyeOEqDo/BwugRv56CKtH5
iiJzKdxZjR6xKb2Ikxqq238hF89orXX4D1ydMnyaI35Y8w2EKJZK6ErTPDfF6o3LysLJcB9/KhzD
PG8T3MJk3MrgMK54huWkZPDuh1DDvLd3AgPitlL1/Q==
=qrOc
-----END PGP SIGNATURE-----

--------------VBISEoQwMG6EX6bDl5I0EGSt--


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 08:19:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 08:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479064.742654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhBM-0004jp-2f; Tue, 17 Jan 2023 08:19:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479064.742654; Tue, 17 Jan 2023 08:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhBL-0004ji-Vn; Tue, 17 Jan 2023 08:18:59 +0000
Received: by outflank-mailman (input) for mailman id 479064;
 Tue, 17 Jan 2023 08:18:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pwid=5O=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHhBK-0004jc-Ep
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 08:18:58 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on2061.outbound.protection.outlook.com [40.107.212.61])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 94d996ff-963f-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 09:18:55 +0100 (CET)
Received: from BL0PR0102CA0043.prod.exchangelabs.com (2603:10b6:208:25::20) by
 BY5PR12MB4196.namprd12.prod.outlook.com (2603:10b6:a03:205::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 08:18:52 +0000
Received: from BL02EPF0000EE3D.namprd05.prod.outlook.com
 (2603:10b6:208:25:cafe::2a) by BL0PR0102CA0043.outlook.office365.com
 (2603:10b6:208:25::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 08:18:51 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BL02EPF0000EE3D.mail.protection.outlook.com (10.167.241.134) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Tue, 17 Jan 2023 08:18:51 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 02:18:51 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 02:18:50 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 17 Jan 2023 02:18:49 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94d996ff-963f-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XUl7B/smV1Gcp5rCMEDSCnh9tJ2YnOzLiFtQrES4aeL6YPSkKePbn/3DrfruF5liK3yWB3uPLyI/4r961QfpnVmyLDsOh6qPdUg4gcd97hE3GbCGlspJ//IBOmBLbO9klurCq3fiHA/bPQf+jOtfLh1Mw1nvfiPwmG9ObcDBgJo6NM4rM6QKPnUWWen9J4ph8ojYA3/Cj+JGN5XXwZ+aDP5oJhYQPh4NY8DdYAQ9ZPSC7aQKyyvIKOIlps6O7m28Ak6m0EkiSo+Q5mvjEga5Vm/dykpNL25ZgcwDg1KKvg3tz6p/4mcCFjxY5uWyydt9RMxgly3JhnAluGGfrLcceg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dgpEP/WjrUCJHbhWdQHNO5PYTzlfO9OU5BbSA0hxE24=;
 b=TSFUTlVLqW52goq5PoQFe2rLswVkZEUsixZ+Z3bvk5HLoU930qLRiFnacISB/vOWndXtSzpgBgFoZIee6zlz8JpWOQfHZX5tcbwF7W9v4rUs7vSh0QJlsaaBDmsfGMrnDxV3lULP/WForC6Xd38f2QirmhEh4nRCxjDXHRs1D9QpL/Supb4UKOIibWXYMdBspedl+WONNBVhbPhTvqZzS+egSMbCJJ4Wg/WOA6UdYBJ1+n++78QckLpJyejtuQ7cJF3hR5a3FW9tndsHYRPmX5CZQmm7OD1/e1v7+VURGYmybvbufreq4SOnS4YesxzElPitcGLA+heQAVWdUE+HPg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dgpEP/WjrUCJHbhWdQHNO5PYTzlfO9OU5BbSA0hxE24=;
 b=UzxUYBQ8GRZmVjl4SkQ/8u5SOws5mxUX7ewmj/SiKsVmBfyIGlao66WIcK5OE9CSwWngrjZLmPe3apPp2bi4uBY5fUcUOmn68p0LJqZiOnUfr2fIzH0LqeFxZzuuvVhJeA1+FmEi+F4XQY31vH4+QKGAmoHAK77VL3GN9tiD0Go=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <f1db1e82-d6f7-5424-2925-d1c6d35fee11@amd.com>
Date: Tue, 17 Jan 2023 09:18:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
To: Henry Wang <Henry.Wang@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, "Volodymyr
 Babchuk" <Volodymyr_Babchuk@epam.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <20230116144106.12544-2-michal.orzel@amd.com>
 <AS8PR08MB7991378797D89AF18F735C7C92C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <AS8PR08MB7991378797D89AF18F735C7C92C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF0000EE3D:EE_|BY5PR12MB4196:EE_
X-MS-Office365-Filtering-Correlation-Id: bf9e3f22-2909-43f5-1c85-08daf8637773
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NBX6FsRqLl7HGGgVTqaPnZuEv4nX4OImUQtrkwZSxsWjd0XrRhouU+bydG50g80rOvqMuvHjLBX8vcFuYoQHDFAmBXimJYlJqaNGLXOwd7GwjtCNb0spGwlD1kMOHlpJzWXr/nsHi+tbYeg2zqhRJRr07cFFGbKUkpER0lZQGkbsOE0E4F+MKO5dFN4cQhjxPicDWfOOpFkLVSOXx8jjHOZCCD9UxnLFA1pMDQmFdRxEBtA9SQmHpAsXKGEMX/9CGSuByyRjHAjqUX97lopOZ9SDX+bdaLY1i4Dg9btWGh2ph/m50jOpsw/qzD6MywznHSDnOFAz6p/+pPMzUaugJMReTOFFmf7G/S/yILkXoJrMiFFlGit5WrrDJgId9cTpbISNQ4S66Ak9L242uQhcQIOB7m7aK48j76qjLO6NVYJQ2jdja58/3fKhG19b1NELwshigxiVfUs9l/CqStcwcJUfoFKU0obnn3ffEhhrUMq+EZphT9cW2hXnaug4ZmGuYe5KKDkycA2d/eBiMqump9Av7IdEilN9hlfxW94WeYbxBD5X5Ct8HnJNaph2MKvmI/GajQdo+XlAy5KvtEYo6Bndf53u8760Ay2cIt6iU9J6KjLWlvpH85vNIR/vK2AxXzQ9OEtC82fJaU7R3mw2juVeI/DJf2AVXK/j0KHb6GRHeoPYU4d4kkvpIQVpq/AfkpLgm/92qpNc7ygZL2YvxSbKVMT0h1ncdfqT3sgBQInB5U9vVOOsTpA+KUiGYhxenoIfkDyDme3W7hquDj8yRw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(376002)(346002)(396003)(136003)(451199015)(40470700004)(46966006)(36840700001)(2616005)(478600001)(26005)(186003)(53546011)(16576012)(70586007)(70206006)(316002)(31686004)(4326008)(8676002)(336012)(110136005)(54906003)(426003)(47076005)(82310400005)(81166007)(83380400001)(8936002)(5660300002)(41300700001)(31696002)(40480700001)(44832011)(36860700001)(86362001)(82740400003)(36756003)(2906002)(356005)(40460700003)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 08:18:51.5361
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bf9e3f22-2909-43f5-1c85-08daf8637773
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF0000EE3D.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4196

Hi Henry,

On 17/01/2023 04:10, Henry Wang wrote:
> 
> 
> Hi Michal,
> 
>> -----Original Message-----
>> Subject: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
>>
>> The direct mapped area occupies L0 slots from 256 to 265 (i.e. 10 slots),
>> resulting in 5TB (512GB * 10) of virtual address space. However, due to
>> incorrect slot subtraction (we take 9 slots into account) we set
>> DIRECTMAP_SIZE to 4.5TB instead. Fix it.
>>
>> Fixes: 5263507b1b4a ("xen: arm: Use a direct mapping of RAM on arm64")
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>>  xen/arch/arm/include/asm/config.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/include/asm/config.h
>> b/xen/arch/arm/include/asm/config.h
>> index 0fefed1b8aa9..16213c8b677f 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -157,7 +157,7 @@
>>  #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>>
>>  #define DIRECTMAP_VIRT_START   SLOT0(256)
>> -#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
>> +#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))
> 
> From the commit message "L0 slots from 256 to 265 (i.e. 10 slots)", I think
> the actual range is [256, 265], so probably using "(265 - 256 + 1)" here is a
> bit better? It seems to me that the number 266 looks like a magic number
> because 266 is not in the range. But this is just my personal taste, and I
> am open to discussion if you or maintainers have other opinions.
I think this is a matter of taste. I prefer it the way it is because it at least matches
how x86 defines DIRECTMAP_SIZE, and it also follows the usual way of calculating a
region's size: subtracting the start address of that region from the start address of
the next region (e.g. the VMAP_VIRT_SIZE calculation on arm32).

> 
> Maybe we can also put a comment on top of the macro to explain this
> calculation.
> 
> I did test this patch on FVP using XTP in both arm32 and arm64 execution
> mode, and this patch is good, so:
> 
> Tested-by: Henry Wang <Henry.Wang@arm.com>
Thanks.

> 
> Kind regards,
> Henry

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 08:22:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 08:22:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479070.742665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhEx-00067H-Hi; Tue, 17 Jan 2023 08:22:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479070.742665; Tue, 17 Jan 2023 08:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhEx-00067A-Ex; Tue, 17 Jan 2023 08:22:43 +0000
Received: by outflank-mailman (input) for mailman id 479070;
 Tue, 17 Jan 2023 08:22:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mnn9=5O=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pHhEv-000674-Pa
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 08:22:41 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2044.outbound.protection.outlook.com [40.107.14.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1a93e32d-9640-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 09:22:38 +0100 (CET)
Received: from AM6PR08CA0035.eurprd08.prod.outlook.com (2603:10a6:20b:c0::23)
 by VE1PR08MB5757.eurprd08.prod.outlook.com (2603:10a6:800:1a4::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 08:22:36 +0000
Received: from AM7EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:c0:cafe::9a) by AM6PR08CA0035.outlook.office365.com
 (2603:10a6:20b:c0::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 08:22:36 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT027.mail.protection.outlook.com (100.127.140.124) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 08:22:36 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Tue, 17 Jan 2023 08:22:35 +0000
Received: from 8e562291a90e.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 24E76A8C-1525-432E-946E-953E528C3343.1; 
 Tue, 17 Jan 2023 08:22:25 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8e562291a90e.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 17 Jan 2023 08:22:25 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB7392.eurprd08.prod.outlook.com (2603:10a6:10:353::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 08:22:22 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Tue, 17 Jan 2023
 08:22:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a93e32d-9640-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GZNfsjqiMrKTyOZAMsDTnFWGOL7CzY82zZphDBIVogw=;
 b=vifCtKYAHC5L/ztjU+M2LgKfDlySkh5zlTrxtUXFCHGVZZ+68av4PDsNNOhoM7yNAZdtSDgT4GCAgnfDj+9or5VmLnF4S7wVYg74SCJQ+57gb/9krKE2aY6PNtaysbdBX7mLV8FDh1ShhSg97JQdlum/pNL0xq6Mky/zA5yIPME=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Th1uo7D0OFDEoki4odNQnGHRfruO5fDa1VQN3/BSh/CzO1Nv8rbTiDidqoX3+JS3nQIdRllxEuBIPWFYnM0453rPR+fWGSzlFAageGXcXdbjKC7Ew27mCE5UGcqcMmgJXxWJWsoIll7dLpchNrMAwx28YWJRRfcBv0e7p0iJnTlmPaywZZdBXNQhiXRfxwojPQlO9xQxr01tnyqQpeo2WSyNEhGO3x9Uj4A9cloFgtfv0LHH0igS42nehlXn/PtyzZhz+sPm5lqepTm8wdNDBpWpa1Kh26TDeC4IBUGIvF+AHSDGSPhBSrVszQl/ouvGoOvYOQOl7H+C2oKeqjU7gQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GZNfsjqiMrKTyOZAMsDTnFWGOL7CzY82zZphDBIVogw=;
 b=QgBRVFPXw0LuiUbSRnIsuB4g3DcSIa8rgiwLxEIlyoOkz9CJvoa0M8SPwHYoiJgudKuhAZFsCJWWFmIFs7TA6GgtcVSX3loEV8c6XyFe2icz0lULeQKrAEMlWYLks+uYbfa4ijWupr+9El9opvoeJ/l+1f55ZCeaTaHMzgKjP1cXsvSBIgbiT+aVGjXnFgC0o+1IwTO64R/LCXIWEvYQTldqFJC65c2JJPsp3ru2pKq0WkEp679VpYSYF4UKCsKZQ4npirq2iIOZV+DAPkNkcpHce4y3n780WXzyvT78QUrYnQK3aV4KJ9hY7qoW258kxgYPf1J5dVcLV+Sb7nHRqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Thread-Topic: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Thread-Index: AQHZKbkuhF2CydVKlkSWMcq+jFNCca6h0FvQgAB02YCAAABNQA==
Date: Tue, 17 Jan 2023 08:22:22 +0000
Message-ID:
 <AS8PR08MB79918757235E971524BD11A592C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <20230116144106.12544-2-michal.orzel@amd.com>
 <AS8PR08MB7991378797D89AF18F735C7C92C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <f1db1e82-d6f7-5424-2925-d1c6d35fee11@amd.com>
In-Reply-To: <f1db1e82-d6f7-5424-2925-d1c6d35fee11@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: F4E6C348F80CA6458E987B8D3135E74B.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB7392:EE_|AM7EUR03FT027:EE_|VE1PR08MB5757:EE_
X-MS-Office365-Filtering-Correlation-Id: cb922907-8a70-4e60-797f-08daf863fd48
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 XfoL6SeiPq1R+b28VJZoOdK3g1eAbTonGU6N+rn5Ky2buk7o8Vz9JHZM7JPLC/IwLMB9RiMEu2XtuRBeOrWJAE3KxhG92bxYtxfJBGpbgEcvrEXfzgzmhERrV/7/WhveetHRXdbpZggRC8xHNN6PtWsq66I5+FjuL7ZvcTLwf9wIyUTxbxvy82lfGWPa+BCskpEzkPxi/7EL04SVrCQAjjD/FKuwOOxKY9+ZXjnDaqNZ/kGhtoD2ZYdGChhgf7ZBWOCUdLImyg5NmcEbhpadehaRv0i3apGTlmCiyiZZzwRHddp+u/Wrt7pfnXKEU+nqrSK/YxqhangG04OuGUnxeXUoUks7ASZij2c82ITOVmVwDdx66TM/PwmWL+9kp/YmCzJ7CGGSnJj9otKJ77Iwe8rGHvnGTpK2dF78+VIlbxq+X0zF+CgT0N9utK09d6zFadoheAlEgMLdigI21m7c0rVmmfGUvghT2+IgGDkv9zRlOo9gYYW6WHDGqX1OBPGuRLSNb7QtaaTIrR34zFiUXM8sqFuaC0W8SasUMAkrV9AuoWLXreD52SrZu0BqIbE0GRRur/eJnhyVVZiRygPdWTdUDagxa5aLwZXG63uFqZ59C5LhrfvTzH8tf5BSevJLgyYf0UPbKizQboZ31Fwgax5hN0NqDZYBHLo9N3lisJN2RK+rjc6Dhpvm6JG03x6hfyfN/w57A7stTEFiRm3XKA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(136003)(396003)(39860400002)(366004)(451199015)(55016003)(33656002)(4326008)(8676002)(41300700001)(76116006)(66556008)(5660300002)(64756008)(66946007)(66476007)(8936002)(52536014)(83380400001)(9686003)(478600001)(186003)(26005)(6506007)(66446008)(110136005)(54906003)(316002)(7696005)(71200400001)(38070700005)(86362001)(2906002)(38100700002)(122000001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7392
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9891350b-8fbe-406f-7e54-08daf863f549
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MI8xduAdXr20URh4RCsnzILIXH3crR4wfwudw20Kj2cxRtAhmeoalAhflbH4uexOKmY6Iaeht476flN478j8iYV6Kf0bJivrso1mV8zSRf5hMKcMtC7vlTe67NQtImz9Srdfyn+IDcwtPmqSRoua6iE4Xg2S9A0PrJkagSKico6WpAwuldPvJ7+UakIleG+pkyxMHR0GusJ89iopN7XnkYfotfbTQUoTqsXe6qTwDa+1BPQVn6jRK6gGqiYhs/eB+seCwbeePSwAMKVOAam2J1U5ni+C6tbyXc2v0K2k3cCuXsMk8+RjQgV/SbdHYdxevtBGX8fRy0zymEUurDu2CIcqE7FwD4hS/PrOkgxVlYxl1D7V/dDDf9KmsIy0E9dSwLcSXj1Cg26TuioJlghusZI6bS8sMB+rJwHeAN6edAbsGemhPfXURtDJGr/eLD7IdnLUFNCHZw6Iqos3HmeE9HfxFBD6JJRL3iv62/+em7ztfRfXl9ja4ZN8IdcmHbsV6pmZYPcpRJCBLvlbUYY5Ck6Yl+2nIGPormghc2fBQeFfRXhBQe06j0W/CnS7aRvtD74MWpSRNeOMD5bqagrxOTt5RLD6QZxBGgeiAilhhGMVFRKD1BVbTWUXlxuHm2QeMfO7AnT5O5f3UgDc/kcWMd76y1du7h54oc2HHWGjytJxq7QKtVCSiqgMROe84mXL/WL6pGm5YM8fsgnvL97FWg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(346002)(376002)(451199015)(46966006)(36840700001)(40470700004)(36860700001)(2906002)(82740400003)(356005)(86362001)(81166007)(40460700003)(47076005)(33656002)(40480700001)(55016003)(82310400005)(110136005)(54906003)(7696005)(316002)(83380400001)(70586007)(5660300002)(70206006)(6506007)(8676002)(41300700001)(4326008)(8936002)(52536014)(107886003)(478600001)(336012)(26005)(186003)(9686003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 08:22:36.0557
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cb922907-8a70-4e60-797f-08daf863fd48
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5757

Hi Michal,

> -----Original Message-----
> From: Michal Orzel <michal.orzel@amd.com>
> Subject: Re: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
> 
> Hi Henry,
> 
> >> -#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
> >> +#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))
> >
> > From the commit message "L0 slots from 256 to 265 (i.e. 10 slots)", I think
> > the actual range is [256, 265] so probably using "(265 - 256 + 1)" here is a
> > bit better? It seems to me that the number 266 looks like a magic number
> > because 266 is not in the range. But this is my personal taste though and I
> > am open to discussion if you or maintainers have other opinions.
>
> I think this is a matter of taste.

Yes indeed, so I wouldn't argue for your explanation...

> I prefer it the way it is because at least it matches
> how x86 defines the DIRECTMAP_SIZE and it also matches the usual way of
> calculating the size
> which is subtracting the start address of that region from the start address of
> the next region
> (e.g. VMAP_VIRT_SIZE calculation on arm32).

...here and you can have my:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

> 
> >
> > Maybe we can also put a comment on top of the macro to explain this
> > calculation.
> >
> > I did test this patch on FVP using XTP in both arm32 and arm64 execution
> > mode, and this patch is good, so:
> >
> > Tested-by: Henry Wang <Henry.Wang@arm.com>
> Thanks.

Pleasure.

Kind regards,
Henry

> 
> >
> > Kind regards,
> > Henry
> 
> ~Michal


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 08:23:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 08:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479072.742676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhFH-0006Yk-RW; Tue, 17 Jan 2023 08:23:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479072.742676; Tue, 17 Jan 2023 08:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhFH-0006Yd-NU; Tue, 17 Jan 2023 08:23:03 +0000
Received: by outflank-mailman (input) for mailman id 479072;
 Tue, 17 Jan 2023 08:23:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j12i=5O=casper.srs.infradead.org=BATV+d80603fb936c028ea1fe+7086+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pHhFF-0006YK-RK
 for xen-devel@lists.xen.org; Tue, 17 Jan 2023 08:23:02 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2515e91f-9640-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 09:22:57 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHhFB-009VSs-VX; Tue, 17 Jan 2023 08:22:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2515e91f-9640-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=k/gEwJEtlONiOt6Iv32NajGAWrBwKo5aBJgrTct0fuk=; b=n+w1MpjtVd6hHW6wlWTgsd6eZc
	Zm5PNdON/Epvn/l3JQ7rtncvoubqONb1SxnQQxVciq8TRgK85ObfMqm7g11gDwcDy6wgyhwOeNtnW
	StWD7cvTub2TQaRQy9nUYWNQjYtALD2wqTdmwu6iQrgCGxeIdFdZST+el19cgPiLhlHgbKq03jSh5
	bOcdq2eTLaZuLFzFkkiv4+lJCmCGGb2n7dVkYnwgdSzjXUck+OlJ/y3XAmQHN/nYAEbeOvFkG87Ns
	nnE6nrNfKhIXaj5qilnWYFKeU5Y/vYX6Jy4S0Gmz0w8RNF470cneyDCdeZFdh7AOUlCPAIRC+YWvR
	+VlXZ3MQ==;
Message-ID: <07a2bae2aa194c6b1c1037d9c6c286e4f828d7b0.camel@infradead.org>
Subject: Re: [patch V3 16/22] genirq/msi: Provide new domain id based
 interfaces for freeing interrupts
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, LKML
 <linux-kernel@vger.kernel.org>,  Juergen Gross <jgross@suse.com>, xen-devel
 <xen-devel@lists.xen.org>
Cc: x86@kernel.org, Joerg Roedel <joro@8bytes.org>, Will Deacon
 <will@kernel.org>,  linux-pci@vger.kernel.org, Bjorn Helgaas
 <bhelgaas@google.com>, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>, Marc
 Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jason Gunthorpe <jgg@mellanox.com>, Dave Jiang <dave.jiang@intel.com>, Alex
 Williamson <alex.williamson@redhat.com>, Kevin Tian <kevin.tian@intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Logan Gunthorpe
 <logang@deltatee.com>, Ashok Raj <ashok.raj@intel.com>, Jon Mason
 <jdmason@kudzu.us>, Allen Hubbe <allenbh@gmail.com>
Date: Tue, 17 Jan 2023 08:22:42 +0000
In-Reply-To: <87edrumf9t.ffs@tglx>
References: <20221124225331.464480443@linutronix.de>
	 <20221124230314.337844751@linutronix.de>
	 <1901d84f8f999ac6b2f067360f098828cb8c17cf.camel@infradead.org>
	 <875yd6o2t7.ffs@tglx> <871qnunycr.ffs@tglx>
	 <e12002af82e9554e42e876d7b9e813b90e673330.camel@infradead.org>
	 <87h6wqmgio.ffs@tglx>
	 <57f3f757aaceccf866172faaf48ab62916f24d8b.camel@infradead.org>
	 <87edrumf9t.ffs@tglx>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-DT46ZoMEN7s5cxIEO1Eg"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-DT46ZoMEN7s5cxIEO1Eg
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2023-01-16 at 20:49 +0100, Thomas Gleixner wrote:
> David!
> 
> On Mon, Jan 16 2023 at 19:28, David Woodhouse wrote:
> > On Mon, 2023-01-16 at 20:22 +0100, Thomas Gleixner wrote:
> > > > Tested-by: David Woodhouse <dwmw@amazon.co.uk>
> > > > 
> > > > Albeit only under qemu with
> > > > https://git.infradead.org/users/dwmw2/qemu.git/shortlog/refs/heads/xenfv
> > > > and not under real Xen.
> > > 
> > > Five levels of emulation. What could possibly go wrong?
> > 
> > It's the opposite — this is what happened when I threw my toys out of
> > the pram and said, "You're NOT doing that with nested virtualization!".
> > 
> > One level of emulation. We host guests that think they're running on
> > Xen, directly in QEMU/KVM by handling the hypercalls and event
> > channels, grant tables, etc.
> > 
> > We virtualised Xen itself :)
> 
> Groan. Can we please agree on *one* hypervisor instead of growing
> emulators for all other hypervisors in each of them :)

Hey, we did work across KVM, Xen and even Hyper-V to make sure the
Extended Destination ID in MSI supports 32Ki vCPUs the *same* way on
each guest. Be thankful for small mercies!

And the code to support Xen guests natively in KVM is *fairly* minimal;
we allow userspace to catch hypercalls, and do a little bit of the fast
path of IRQ delivery because we really don't want to be bouncing out to
the userspace VMM for IPIs etc.

As for qemu, emulating environments that you may not have access to in
real hardware is its raison d'être, isn't it?

And agreeing on one hypervisor — that's what we're doing. But the
*administration* is the far more important part. We're allowing people
to standardise on KVM, and to focus on the administration and security
of only Linux and KVM.

But there are still huge numbers of virtual machine images out there
which are configured to run on Xen. Their root disk is /dev/xvda, the
network device they have configured is vif0.

In some ways it's theoretically just as easy as telling all those folks
"well, you just need to install an NVMe driver and a new network card
driver". Except it isn't really, because that often ends up being
"rebuild it on a newer kernel and/or OS". And if the intern who set
this system up left three years ago and the company now depends on it
as critical infrastructure without really knowing it yet...

It isn't practical to tell people, "screw you, you can't run that any
more".

So we host them under Linux and they mostly look like native KVM guests
to the kernel, you stop breaking Xen guest mode, and everybody wins.

> > Now you have no more excuses for breaking Xen guest mode!
> 
> No cookies, you spoilsport! :)

:)



--=-DT46ZoMEN7s5cxIEO1Eg--


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 08:30:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 08:30:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479084.742687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhM8-0008DX-Mw; Tue, 17 Jan 2023 08:30:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479084.742687; Tue, 17 Jan 2023 08:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhM8-0008DQ-Ji; Tue, 17 Jan 2023 08:30:08 +0000
Received: by outflank-mailman (input) for mailman id 479084;
 Tue, 17 Jan 2023 08:30:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mnn9=5O=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pHhM7-0008DK-4Q
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 08:30:07 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2077.outbound.protection.outlook.com [40.107.7.77])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2542fa19-9641-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 09:30:06 +0100 (CET)
Received: from DU2PR04CA0152.eurprd04.prod.outlook.com (2603:10a6:10:2b0::7)
 by GV2PR08MB9256.eurprd08.prod.outlook.com (2603:10a6:150:e3::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Tue, 17 Jan
 2023 08:30:02 +0000
Received: from DBAEUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b0:cafe::5b) by DU2PR04CA0152.outlook.office365.com
 (2603:10a6:10:2b0::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 08:30:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT025.mail.protection.outlook.com (100.127.142.226) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 08:30:01 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Tue, 17 Jan 2023 08:30:01 +0000
Received: from 294ef77e53ec.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BC3C8195-D1D4-48DB-9426-881E47D6E3A0.1; 
 Tue, 17 Jan 2023 08:29:50 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 294ef77e53ec.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 17 Jan 2023 08:29:50 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB5PR08MB10190.eurprd08.prod.outlook.com (2603:10a6:10:4a8::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 08:29:49 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.5986.018; Tue, 17 Jan 2023
 08:29:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2542fa19-9641-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=obxlW2dDYeA7ayaxyUNGp6UxsW1h2hrE6LY/khFLDFY=;
 b=C4zTASyVoGhfjo+gYnF2I923KtnJcL9Lvi8DUTlU+fEUDdx++pDLzdCMOaHgHNtmbB4itFyGtpQ6nIk72/YP71m9lhZ/+//STT4iWaMwDfPRsBL9tNNlTbC3U1WRxScQIXunF35GodpobTevVEyMufJWgbcvJeSmHyPpFz5fcRs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ofkaBzaRw9OEP+a21wmn61fPleztMMS6lygblzmaaRsegfTKT/4Q4TZNOTDJ16xXrI9mMMO+NVV56Qi/4x5ML3gB4Q8cnMxZxQAvcz0D2gMVZIssiZ9xGdYiH1jlf+YZZkLGDZ81sChRIbcv+apt2jtSDoS9ALw5crWeJejGIqAH/W68cQmIYEqI7JBZ0o/y7a42De3ynrB/CDNxwkrYKhOsTAIOuazIzpnA4GHDD8kPuffP3Jp/K61xZp19cbFiaKGtHNCwpPf5HNRA3Fnda6Ar2d53Iik1ZcEhYh++KLQwfQymirT/ipJlCYe7nKCS7gnS2YYflUsJDZrs9ZDqhA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=obxlW2dDYeA7ayaxyUNGp6UxsW1h2hrE6LY/khFLDFY=;
 b=iJjBxNg7JNyKCTG79BqiA/cO4OSlDzSZJzKcozipqOSElR1FsIqnjkhq8EHQAse/Pg0PmdMYQqvi7/k7LzRqXuwrJKA5VCbyLOiLzYTWkrBZ9p/NqucLb/ZyqeB54j7v2zrvg9gd6pF5JtaEFsZclwleoKM+18GJ0IMjpazaqUl05pD3Kt/S2olmJLyKzObXTJYTmU4VyikGpfsSol2ad5q1KqOCfyfuJOox2NQx0wAYE55RwKQNAOyLgfQlEYbZpaVi+6SJ+tX7qqu655pXHGGzmtc9lgfDBmOyLnv1Tt2nr2vrNjOmxMhny81BiLawnifFA7evLAJq0mn+5XQdyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=obxlW2dDYeA7ayaxyUNGp6UxsW1h2hrE6LY/khFLDFY=;
 b=C4zTASyVoGhfjo+gYnF2I923KtnJcL9Lvi8DUTlU+fEUDdx++pDLzdCMOaHgHNtmbB4itFyGtpQ6nIk72/YP71m9lhZ/+//STT4iWaMwDfPRsBL9tNNlTbC3U1WRxScQIXunF35GodpobTevVEyMufJWgbcvJeSmHyPpFz5fcRs=
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Thread-Topic: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Thread-Index: AQHZKbkuhF2CydVKlkSWMcq+jFNCca6h0FvQgAB02YCAAABNQIAAAoXw
Date: Tue, 17 Jan 2023 08:29:48 +0000
Message-ID:
 <AS8PR08MB799156B911FFE773F86D577D92C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <20230116144106.12544-2-michal.orzel@amd.com>
 <AS8PR08MB7991378797D89AF18F735C7C92C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <f1db1e82-d6f7-5424-2925-d1c6d35fee11@amd.com>
 <AS8PR08MB79918757235E971524BD11A592C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To:
 <AS8PR08MB79918757235E971524BD11A592C69@AS8PR08MB7991.eurprd08.prod.outlook.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9C6D63F2E4C8F949B79BEEE3E8675A2A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB5PR08MB10190:EE_|DBAEUR03FT025:EE_|GV2PR08MB9256:EE_
X-MS-Office365-Filtering-Correlation-Id: ca8ba113-6a65-48f9-9136-08daf86506d6
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 VantikBmsCOtsa15kJHlFJ2nmQkeo1mdkGRlCOO2Ol1vYlH6Sw+FM3ZicO8JKCDODFvwxDNA5sG2jLsc+TbhtS4y3JN5ScC17tYuxiDLhDgahKTb5ZkHK9LJFlvEuK1/pg+5mBzFGLr2ANen8asG9WTfQZomPioAhcge0Toaz4aA6iRdLZTszb/e7CI2+ElLuWF7hZBIPEh9cm5Je6rrPdv5nYou6ZFUtJoQB4x2+N6lpzQuwt4sGicHi3i9c5tjR2+g27OjeNy/dSjViAXZ4kE9D7ArpG03sbLvruHwCkeOjyeU9uvwRmvXrfcssL/Mh50EhoughYLKbc9iEDmQ8LIihBIEExdDXUyKueMFqKZTaMSBp3eq5tN1pvG7eXqYePTtNYAahSvBnU164peFLikMvvQ6C/2Iw1HIb7Ju6pNQd3yuhDwcO+JazibZm9KMx7dlgs32VwAPMkEKY4KXLXpQkpGuamP4tGg6PnyqR1ti3jpLECHF/JFEF099BGnoEqMVdq2581vbhLVJxVsOaGr0zSq/Bvo8GC7rR6uwM1ss1HL4iFZbVYaGtxAEOTd2yvg/IV3fOMLEIOQWE6/r2SPZQIplkRdQoHlG/ySpXsRX1gBLaxoVMDQUMorDLOmagVYDnkG6qI6MHCe1tHUAdjoF681Tj9r0ShT2vI8tilOVutsOqlvFNbuLy+3y83TVWe+NIWOKL8pK9pdKNbfwZA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(366004)(136003)(396003)(346002)(39860400002)(376002)(451199015)(55016003)(122000001)(52536014)(33656002)(478600001)(4744005)(71200400001)(6506007)(86362001)(7696005)(2906002)(83380400001)(186003)(38070700005)(110136005)(54906003)(9686003)(5660300002)(8936002)(76116006)(41300700001)(316002)(38100700002)(66476007)(8676002)(66446008)(64756008)(26005)(4326008)(2940100002)(66556008)(66946007);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB5PR08MB10190
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ce350fce-438d-40e7-c87e-08daf864ff49
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IgdEKUlzNWvxtaYNNVoYcPc5qAl/ypyORRXnlj71x9cPkByXLIIoSmZy3W2EXgbETaLKzS7pOzJVEWl+m/hQELkogC8nvL84B/abIrgJWoVk2U1TbbVkZqk7QPocj9xAUCqJGKii+OtGllvsFQO4ZPcoGkIHil2sLHU42peIuyoxiPuqa/3ZJYBJlgJk6YJJAtFMilg3rkDczYb0dvYKbpDOwHyFVTZka8UEfK+CDZuYqDF6NS8MqrJkd6C5OGMS1GCHoRjyYDnA1/Jdq4y2dEUEGFLWug162oAfVaJIU61pr/zisxzNGqK8FOiFwnlaesRTqHpTc0GpFJbupBAq2GOERFRTBR0EY/kcSqo0WsPQ3w+YPufWcVc8o/4sLRlc4qOESh1Gia7sU/G+sFyMT6JDAKN5uLN65YphR3CLydCCudd2CmlAUIQ8fTniMorUJYijgg7/TyMYvzPhtFyKcyJjW/QWxM+7QBX5kRU1TBR88RvGkhMIn+o3QRUuxP+V9C2eYdCc+/KKSX59655/BIfdgobFutpT0jHihwQS4JeLfe9PgRjJhNEB9O/CWbO8hc5Kg50T+dVGaow0xtbrW8tt6eTIAWKEXZuegO09pb9B8T6NKYeNG9UNaYn6ygdDK48X87xFhJcCHCXrZ+f+hMk0uNWApQFGhnuUoe/Hiq+nwragrZU0ATSKX6hFN0sm2nfXrGUB0RDQ1NmkBbmUFw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(396003)(39860400002)(136003)(451199015)(46966006)(36840700001)(40470700004)(316002)(336012)(41300700001)(70586007)(4326008)(70206006)(36860700001)(47076005)(8676002)(4744005)(5660300002)(8936002)(82310400005)(83380400001)(52536014)(2906002)(82740400003)(356005)(86362001)(186003)(478600001)(26005)(33656002)(107886003)(6506007)(81166007)(110136005)(54906003)(55016003)(2940100002)(40480700001)(7696005)(9686003)(40460700003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 08:30:01.6300
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ca8ba113-6a65-48f9-9136-08daf86506d6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB9256

SGkgTWljaGFsLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IFN1YmplY3Q6IFJF
OiBbUEFUQ0hdIHhlbi9hcm02NDogRml4IGluY29ycmVjdCBESVJFQ1RNQVBfU0laRSBjYWxjdWxh
dGlvbg0KPiA+ID4+IC0jZGVmaW5lIERJUkVDVE1BUF9TSVpFICAgICAgICAgKFNMT1QwX0VOVFJZ
X1NJWkUgKiAoMjY1LTI1NikpDQo+ID4gPj4gKyNkZWZpbmUgRElSRUNUTUFQX1NJWkUgICAgICAg
ICAoU0xPVDBfRU5UUllfU0laRSAqICgyNjYgLSAyNTYpKQ0KPiA+ID4NCj4gPiA+IEZyb20gdGhl
IGNvbW1pdCBtZXNzYWdlICJMMCBzbG90cyBmcm9tIDI1NiB0byAyNjUgKGkuZS4gMTAgc2xvdHMp
IiwgSSB0aGluaw0KPiA+ID4gdGhlIGFjdHVhbCByYW5nZSBpcyBbMjU2LCAyNjVdIHNvIHByb2Jh
Ymx5IHVzaW5nICIoMjY1IC0gMjU2ICsgMSkiIGhlcmUgaXMgYQ0KPiA+ID4gYml0IGJldHRlcj8g
SXQgc2VlbXMgdG8gbWUgdGhhdCB0aGUgbnVtYmVyIDI2NiBsb29rcyBsaWtlIGEgbWFnaWMgbnVt
YmVyDQo+ID4gPiBiZWNhdXNlIDI2NiBpcyBub3QgaW4gdGhlIHJhbmdlLiBCdXQgdGhpcyBpcyBt
eSBwZXJzb25hbCB0YXN0ZSB0aG91Z2ggYW5kIEkNCj4gPiA+IGFtIG9wZW4gdG8gZGlzY3Vzc2lv
biBpZiB5b3Ugb3IgbWFpbnRhaW5lcnMgaGF2ZSBvdGhlciBvcGluaW9ucy4NCj4gPg0KPiA+IEkg
dGhpbmsgdGhpcyBpcyBhIG1hdHRlciBvZiB0YXN0ZS4NCj4gDQo+IFllcyBpbmRlZWQsIHNvIEkg
d291bGRuJ3QgYXJndWUgZm9yIHlvdXIgZXhwbGFuYXRpb24uLi4NCg0KU29ycnkgSSBtZWFuIGFy
Z3VlIGFnYWluc3QgaGVyZS4uLiA6KQ0KDQpLaW5kIHJlZ2FyZHMsDQpIZW5yeQ0K


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 08:46:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 08:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479090.742698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhbv-0001MT-2D; Tue, 17 Jan 2023 08:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479090.742698; Tue, 17 Jan 2023 08:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhbu-0001MM-Uz; Tue, 17 Jan 2023 08:46:26 +0000
Received: by outflank-mailman (input) for mailman id 479090;
 Tue, 17 Jan 2023 08:46:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IWGa=5O=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHhbt-0001MG-NW
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 08:46:25 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2086.outbound.protection.outlook.com [40.107.6.86])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b9ec1ff-9643-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 09:46:23 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8856.eurprd04.prod.outlook.com (2603:10a6:10:2e3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 08:46:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Tue, 17 Jan 2023
 08:46:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b9ec1ff-9643-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ig4x/Oh0qEZCGOC67eZcnZv36W7IfcMogkgRTiV3KvZAACW1XuFrClKE6AOpmGqIzT88nVKZxWJCS129mDeq7q0p48JBBPNwu643Pv/Z4593adjdooJp/Tv5W5fBkfsgPpMdmFMI5RwYhacIYql1oqsfGvaI3wL6p2A7PZXZfWMa2ZlqvjG082mCwqeveSnmhvP6EjWc0upv34amZrVhrpBer6vzMYwmT2tseoTzZtXS/9aGqXNi5Ng0gV+k0Qq3r7qyze5SMovr40CRy3FdtNditUoQQ5Wc9WqOtKSFBxbYd3+MOiVaLy0m02E3AaPH35WwdKPPvPpzIJuAgILI0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rn5GBTSM+U69lN2djSQA9cKttj49h9QItBQKYFoZY5E=;
 b=BS+ndgcNtoznEyBxPPUtJMafDsdwaKSVwqz+y4HEMWrnb6vhPc6CTiuGmyLKSUFk2xwxk7ELs5B0IYSih8O0J//hc+roY905XFXi0SQsc1VIvk4mZNvnzfkK8KoTmcQ8XCp85jgZtSvrpL4vFeNsvAVuE96WP6FbNrr54BvgtIElmw/+Vh0NKTmGBDJx637phxFEOT1cceI+fx+MvI/Vh1bO6QHsmQl7rQ3OVqRsaw8BpT5wDhS+RQJhZSZRvW9zMVyEbfE5fF8ZnK9RMhoSQmu5zmlCqMK/Kh1FiL2gqLnTCGzJnpWzO/KoNR6AOwSkGdSxp517d+FZ2Jby13766g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rn5GBTSM+U69lN2djSQA9cKttj49h9QItBQKYFoZY5E=;
 b=kH6piNBcmjz9ArSszpImdZfjEy+Cvzq3Txd64IgJcX++cBr0H1QicJC9tSIkl5ZK6sor6xtVJX5zLeuZVsxDc5GRuCNyT6FHq7eM2I9s2RNlrl40QYyX08NbC0hsWa8d9AhXjVoLy6et0A5q4QcplQ/YvMsKzt+Ouww3hAPobcGFPxyXqEVgjoxeJRVf06EFKEEmHyQLztHqSiIBFHwNHgakPak5nXKsn23xExdBISerXHZABXdlDWdOijnGRGAGfOjL3GzvGgfSWpSPYwo5BbcJcbVukLQwRomnZe2vMl/qMKWp3FgwZ4U4olygYt2QksU7TOyaLjzeNvIl6KXWJQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9b867256-7ec5-27f9-9089-cec551ef79ff@suse.com>
Date: Tue, 17 Jan 2023 09:46:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 2/5] xen/version: Calculate xen_capabilities_info once
 at boot
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-3-andrew.cooper3@citrix.com>
 <9ff94f87-e3fa-c397-ebf0-b4849cba757d@suse.com>
 <164e5248-948e-9467-5b34-1510d32f8d82@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <164e5248-948e-9467-5b34-1510d32f8d82@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0096.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8856:EE_
X-MS-Office365-Filtering-Correlation-Id: 63566c19-38ac-413a-c39a-08daf8674e8e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uzaYrS4slpYdP5JJcwQojeWJi0bzQHXqYv7AFs4jRn/hfkteEg4mdJyGv09OKkw8zpxZy9Yv9kwvsY/uBvGTDVAO8AZsCzfp9zEfQBNMyljYTrXbCUlftPSrI47gkrwU7DR1Ms6GHRXNsaOHBYBHsAMbo73FWTpi7Ca3qvn3BDQSS8PHpOvm49gIGvzCpMxdHm8yCObIcNkrfAzk996TjCzwggwfT7SBhQTxtoUYTd4Vx7pbXmhCfobFBFBd6Zidk2AdjDRpLUDSkMjI0ay38MtkNnUo459EibPE+aDGZgvVrY+s6lh//7CU/C2HdRrhIhQ9OL6H16AVf5w6Rtbxl2OHThbxj+tw7BevttFYoa9OiTERmzL8HXicIHeXujt10NOdInoFiXJ9zCQQhGUrsrR5R6rJRg4zMFRKC7AJ7Exw4ASegvKBUyxGFW3hnfiTCzSRSTUJtRsxJbwzwTZHzVlP0FIJle6+nZgdoZ/c0RTT1S6JfYKjA9lU+Ao48IxiJSNRZv06C0Zw5le7VqOOuPj6tqk3hQLYDaKjtLgWl2Havhom8kkw3gWwDsjH6aq/WKVKOhZgox+m0PihKEFL25BkE8L89cPhKC4kvW4PvyVSkG5P5COFJDrTKhTeXaWWRatQVej8d91nazRzP/MK7Z7WzQxVXgpIcFX9IVcRCoplwwEVK95tcIFuGVTcz/KruYnrA2P0G4MMAf+WIoH0R2unhIaCSK5whqw3FnT87PU=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(376002)(366004)(346002)(136003)(396003)(39860400002)(451199015)(6486002)(478600001)(41300700001)(6512007)(38100700002)(186003)(86362001)(31696002)(316002)(54906003)(2616005)(66476007)(66556008)(66946007)(26005)(4326008)(53546011)(5660300002)(36756003)(6506007)(2906002)(31686004)(6916009)(8936002)(8676002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VXMwc3Z0V2JjeFl0TFhqTU8wREtEQXgvS2FNeHUrbFIzN3hybm41UFBUdm02?=
 =?utf-8?B?VEFab2J5cFg0OTVNeE1VaWlwS2FRNVhRbC9TeE1NaEJGV0Jja0hSMWdubVYr?=
 =?utf-8?B?V1A0VlhidmpzelgvLzA3OHgydUdGbGxrZjEwMDBsYllaK056NUlnNUlhcEhl?=
 =?utf-8?B?RW1NMDBzN0ZjK1ZOVFE0eUNmM2xwMTdOOXhFR25XdE45SUgyY2dUMDF5WGpm?=
 =?utf-8?B?cGlDdEJJK0lvZ3l5WVEzUG1qMHF4cEorWE9zUjhyV3UySVU5Z0hEaWJka3RD?=
 =?utf-8?B?NC9CUEdMWUUxWFhyV0grUVRBNmpGYTdaYnlQbzJ5VEprVU9BMWE5em1tUzQ0?=
 =?utf-8?B?RWUxNGxmR2JWZXFvcDQzTldyOXY2MTlpOUJjWWkzZ2p6dC9iT2RjeTI1dEFk?=
 =?utf-8?B?WXpYTUZxWkQvQmk4cFpiMXk5Qjl0Tk5BS1hlRmNSNTgvb2tuQ2RmZWtrRzhn?=
 =?utf-8?B?Ri9keGphaU1zWDhCTExEVnlxNk9qeHNzRllycEhQUFV5bG5EejdsMU4vV3p0?=
 =?utf-8?B?d21lTmZtQnRrNTRBOWh6VFJqWFVRZ09QbkoxQSt2L2U3akJlVFphQkY5amR6?=
 =?utf-8?B?Z2x3ZnpjVkhDTXNFOWlWNlBtVXptaWYvY202dmduZkxueUk2M0lWRlJmVDhW?=
 =?utf-8?B?Rk5wNkIwNUZmNDVVNEpwYTJJZHBPbnhKeTk4M3VaRll1cUVHTmZWeHNZb1JY?=
 =?utf-8?B?T2VleFhXOXZhRzI3UXZmRUtscllwYUxvdDdKTGNBQzBabDNGakhKUVJ0MUd5?=
 =?utf-8?B?Yy96MXVCQ05FR1dYZnc1d3lhMDFHZlNYY2NFTHgrcllldGVvM2Vha0owOU1j?=
 =?utf-8?B?a3NZWEZTWlBHTkFvcG8zanJYRTArSjA2cEtzdndDNDhOc2xvNGpDM0xzNlZi?=
 =?utf-8?B?VldvcG5qeXYwd0dsNlNIaVYzOGhxS0ZqQkdRWGxhYlloMGZoc0kzWGdiemUr?=
 =?utf-8?B?VjhIWG13Q2JwcENuUm55VC9ZbW5NOFhpeUc1MWIvckNucE5uWVM2ZlY0T0oy?=
 =?utf-8?B?anJpNWtiTUlwYWlzSm80SWVQSWhRbW9hZ29FWk92bWFqUFJuZlNvNkdvWlpX?=
 =?utf-8?B?NHpKUlNaRUp4cmhuS1U0RHpibWJIc3hVZ3VBZ1kxTWtGVnBtKy9nRTd0cjdL?=
 =?utf-8?B?Q2oxVnM3UWhET3pUS1ZGVnBrdnRQSXUrYSs4ME1FYldYbitzRCtZSXp3cnIr?=
 =?utf-8?B?SDgzYnBPeUVOR0tjdmt0ZlNxZmVsbE5OdFBjakVQWm50M2p4Y2grRGtiYlpR?=
 =?utf-8?B?aGp3Tkg4R28rMXdPSVlFYU9MYWNGOVhZbGl3THpzZGZLZ0dlQ2grNVVaZ2lj?=
 =?utf-8?B?S2ZkTjRHN0VINFJ5ZFZmQjZSVzIzamtaeDBRQ0daeXRTVkZubnZQZWoydFpU?=
 =?utf-8?B?K1Q5dEFXdlFYL3Q1TjRxaTg1TDY3K1lDOG83TzdXS2psNENFMWZTVTBpOWMy?=
 =?utf-8?B?WVNCOU8xYjVXMUFRcE5hQWljU0QwY0tSVmFGVk02UmFoQVhEOTFWQ2lrcWNi?=
 =?utf-8?B?bFE5Y2NmV3A2MHhiWEVEQjh4OGN3a3Yra05FLzh5YjJHeHMwaktmSjZrbExE?=
 =?utf-8?B?TENCc2dFYnd4YThmcEtHeFRmTUNmeUFzYVVnN3NhYzdBTWRiYThuRFlvUTFL?=
 =?utf-8?B?WUQwYkp1RzNtMHUxRHV6Q21xMTlCMFhtYTE0K1oxeForcFpnaDBGbFNRWEI3?=
 =?utf-8?B?Y2JqQ3VGL3AzcStmQkJLR1IvY2pVT1BUQzdRdlg3S0hOaDQ1WlBBMm9wcEll?=
 =?utf-8?B?SmNOWWZXUG5kY05RVTViOTZaVWVyU2FTdzZGWjNUODdKUUlPMndvK0ZmV2dw?=
 =?utf-8?B?YUdxT2UxL3IyQWorMTIwN3lNU3VSUXMrSStNelBsbUxaNGVkZDBGUDVocGFD?=
 =?utf-8?B?dU54d25ESVdsdURJVHA1NUNZMEJCSFIvUXYwRFZsc1c1SzJwb21HaXYvWms4?=
 =?utf-8?B?eXphaDJSR3BUSGhiK2JucWZpVFQzM01LMCsrTjdnKzhEUHJCOHBPcWhRdlVO?=
 =?utf-8?B?cUdqK1pxaFgwcjJuQ2U5WGdQYzNxUDg0WTA0OGJiVFdZc3BHb3J3L2NzQVlG?=
 =?utf-8?B?VldkWDc3Mm1BQkpyQlI4bHo2cWdYOFB2VHFXajRkWWl4MHdzYlREM3JHcjU5?=
 =?utf-8?Q?it8W/zjFB30oKSwIZ0XdU6PXh?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 63566c19-38ac-413a-c39a-08daf8674e8e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 08:46:21.4332
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: u+O9DrRMh4D7ujJQmYOxCgAdfHK8nqH9ZSOO2B0r0ZWr0ymSgz61bjq5+NaYCuu/vbeKlqVzuep6R9ThFV8vMg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8856

On 16.01.2023 21:52, Andrew Cooper wrote:
> On 16/01/2023 3:53 pm, Jan Beulich wrote:
>> On 14.01.2023 00:08, Andrew Cooper wrote:
>>> The arch_get_xen_caps() infrastructure is horribly inefficient, for something
>>> that is constant after features have been resolved on boot.
>>>
>>> Every instance used snprintf() to format constants into a string (which gets
>>> shorter when %d gets resolved!), which gets double buffered on the stack.
>>>
>>> Switch to using string literals with the "3.0" inserted - these numbers
>>> haven't changed in 18 years (The Xen 3.0 release was Dec 5th 2005).
>>>
>>> Use initcalls to format the data into xen_cap_info, which is deliberately not
>>> of type xen_capabilities_info_t because a 1k array is a silly overhead for
>>> storing a maximum of 77 chars (the x86 version) and isn't liable to need any
>>> more space in the foreseeable future.
>> So I was wondering if once we arrived at the new ABI (and hence the 3.0 one
>> is properly legacy) we shouldn't declare Xen 5.0 and then also mark the new
>> ABI's availability here by a string including "5.0" where at present we
>> expose (only) "3.0".
> 
> "the new ABI" is still two things.
> 
> The one part is changes to the in-guest ABI which does make it GPA based
> (for HVM), but this does need to be broadly backwards compatible.  This
> ABI string lives in the PV guest elfnotes (and is ultimately the thing
> that distinguishes PV32pae vs PV64), but nowhere interesting for HVM
> guests as far as I can see (furthermore, the 3 variations of hvm-3.0-
> are bogus).
> 
> xen-3.0-x86_64la57 would probably be the least invasive way to extend PV
> support to 5-level paging.
> 
> The other part is a stable tools API/ABI.  This can have any kind of
> interface we choose, and frankly there are better interfaces than this
> stringly typed one.
> 
> 
> "xen-3.0" is even hardcoded in libelf.  I can't foresee a good reason to
> bump 3 -> 5 and break all current PV guests.

I didn't by any means mean to suggest bumping. I was envisioning us adding
new 5.0 entries, with the presence of the 3.0 ones indicating the backwards
compatible ABI. An option might then later be to make the compatible
ABI a compile-time (Kconfig) conditional.

>>> If Xen had strncpy(), then the hunk in do_xen_version() could read:
>>>
>>>   if ( deny )
>>>      memset(info, 0, sizeof(info));
>>>   else
>>>      strncpy(info, xen_cap_info, sizeof(info));
>>>
>>> to avoid double processing the start of the buffer, but given the ABI (must
>>> write 1k chars into the guest), I cannot see any way of taking info off the
>>> stack without some kind of strncpy_to_guest() API.
>> How about using clear_guest() for the 1k range, then copy_to_guest() for
>> merely the string? Plus - are we even required to clear the buffer past
>> the nul terminator?
> 
> Well, we have previously always copied 1k bytes.  But this has always
> been a NUL terminated API of a string persuasion, so I find it hard to
> believe that any caller cares beyond the NUL.
> 
> Because of safe_strcpy(), xen_cap_info is guaranteed to be NUL
> terminated, so if we don't care about padding the buffer with extra
> zeroes, we don't even need the clear_guest().

Right, that's what I was hinting at as a possible option to do right here,
or to do subsequently.
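[Editor's note: the scheme being discussed (copying only the NUL-terminated
string instead of the full 1k buffer) can be sketched in plain userspace C.
This is an illustration only, not Xen code: memset() stands in for
clear_guest(), memcpy() for copy_to_guest(), and the names XEN_CAPS_LEN,
copy_caps_padded() and copy_caps_string() are hypothetical.]

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch: XEN_CAPS_LEN mirrors the 1k
 * xen_capabilities_info_t buffer discussed above. */
#define XEN_CAPS_LEN 1024

/* Historical behaviour: always write the full 1k, zero-padded. */
static void copy_caps_padded(char *guest_buf, const char *caps)
{
    memset(guest_buf, 0, XEN_CAPS_LEN);        /* stands in for clear_guest() */
    memcpy(guest_buf, caps, strlen(caps) + 1); /* stands in for copy_to_guest() */
}

/* Discussed alternative: copy only the NUL-terminated string; a caller
 * treating the field as a C string observes no difference, only the
 * padding past the NUL is left untouched. */
static void copy_caps_string(char *guest_buf, const char *caps)
{
    memcpy(guest_buf, caps, strlen(caps) + 1); /* stands in for copy_to_guest() */
}
```

As the thread notes, the two variants agree for any caller that stops at the
NUL terminator; they differ only in whether the tail of the 1k buffer is
zeroed.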

> Also, similar reasoning would apply to XENVER_cmdline which is typically
> rather less than 1k in length (at least it's not on the stack, but it is
> still an excessive copy).

Indeed, but I understood your primary concern was the on-stack copy, so my
suggestion first focused on that.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 08:50:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 08:50:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479096.742708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhfZ-0002lN-He; Tue, 17 Jan 2023 08:50:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479096.742708; Tue, 17 Jan 2023 08:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhfZ-0002lG-EX; Tue, 17 Jan 2023 08:50:13 +0000
Received: by outflank-mailman (input) for mailman id 479096;
 Tue, 17 Jan 2023 08:50:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHhfY-0002l6-JN; Tue, 17 Jan 2023 08:50:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHhfY-0002Kx-Ek; Tue, 17 Jan 2023 08:50:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHhfY-0005l2-2L; Tue, 17 Jan 2023 08:50:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHhfY-0002qv-1r; Tue, 17 Jan 2023 08:50:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T6plDq8oMnhxLvCFyIV4qEpIqBhWhMDHSc6nDVgzWcM=; b=qDu64/DL4mlyQ9DU629r4/3+yt
	cJvgHAFOXX8OMKvGZW/DdSCzsGE4QIu/VmXR0ZyzLfkmckfh3pyXuU11BWsbYwPntLiJAiRZFE9XK
	20SSyoWNQl89ZK0g9QCkCwQfy2BCOizR7TjiZQvo1pvKds0oPJdbX7DgDjxRxSEiGrm0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175927-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175927: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-amd64-xsm:host-build-prep:fail:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-amd64:host-build-prep:fail:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-arm64:xen-build:fail:regression
    xen-unstable:build-arm64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 08:50:12 +0000

flight 175927 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175927/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175734
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-amd64                   5 host-build-prep          fail REGR. vs. 175734
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-arm64                   6 xen-build                fail REGR. vs. 175734
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    5 days
Failing since        175739  2023-01-12 09:38:44 Z    4 days   12 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    4 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  broken  
 build-arm64                                                  fail    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
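
    A hidden, default-enabled option of the kind described takes roughly this
    shape in Kconfig (a hypothetical sketch, not the actual hunk): with no
    prompt string the option never appears in configuration menus, and
    def_bool y plus the X86 dependency enables it unconditionally there.

```kconfig
# Hypothetical fragment; the real change lives in Xen's Kconfig files.
config AMD_IOMMU
	def_bool y
	depends on X86

config INTEL_IOMMU
	def_bool y
	depends on X86
```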

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 08:55:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 08:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479105.742720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhkW-0003S7-AN; Tue, 17 Jan 2023 08:55:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479105.742720; Tue, 17 Jan 2023 08:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhkW-0003S0-7N; Tue, 17 Jan 2023 08:55:20 +0000
Received: by outflank-mailman (input) for mailman id 479105;
 Tue, 17 Jan 2023 08:55:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IWGa=5O=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHhkV-0003Ru-1B
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 08:55:19 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2045.outbound.protection.outlook.com [40.107.21.45])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa2cc600-9644-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 09:55:17 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8452.eurprd04.prod.outlook.com (2603:10a6:20b:348::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 08:55:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Tue, 17 Jan 2023
 08:55:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa2cc600-9644-11ed-91b6-6bf2151ebd3b
Message-ID: <ff7770d4-841f-356a-78b4-294bdb770fbe@suse.com>
Date: Tue, 17 Jan 2023 09:55:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_*
 subops
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Jason Andryuk <jandryuk@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-4-andrew.cooper3@citrix.com>
 <5ebe5337-f84e-12a1-e8a0-92832100946d@suse.com>
 <2d3dc0ef-4920-2bc8-6ffd-6b954fc8c68a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2d3dc0ef-4920-2bc8-6ffd-6b954fc8c68a@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0078.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 16.01.2023 22:47, Andrew Cooper wrote:
> On 16/01/2023 4:06 pm, Jan Beulich wrote:
>> On 14.01.2023 00:08, Andrew Cooper wrote:
>>> @@ -470,6 +471,59 @@ static int __init cf_check param_init(void)
>>>  __initcall(param_init);
>>>  #endif
>>>  
>>> +static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>> +{
>>> +    struct xen_varbuf user_str;
>>> +    const char *str = NULL;
>> This takes away from the compiler any chance of reporting "str" as
>> uninitialized
> 
> Yes...
> 
> It is also the classic false positive pattern in GCC 4.x which is still
> supported despite attempts to retire it.

Hmm, I didn't think this was the classic pattern; I thought that pattern
involves two variables, where the value of one indicates whether the
other was (also) initialized. But yes, if the initializer proves to be
needed for 4.x, then keeping it would be unavoidable for the time being
(though we should then remind ourselves to drop it once we've raised the
baseline, unless we add a gcc version conditional around the initializer
right away).

>>> +    if ( sz > KB(64) ) /* Arbitrary limit.  Avoid long-running operations. */
>>> +        return -E2BIG;
>>> +
>>> +    if ( guest_handle_is_null(arg) ) /* Length request */
>>> +        return sz;
>>> +
>>> +    if ( copy_from_guest(&user_str, arg, 1) )
>>> +        return -EFAULT;
>>> +
>>> +    if ( user_str.len == 0 )
>>> +        return -EINVAL;
>>> +
>>> +    if ( sz > user_str.len )
>>> +        return -ENOBUFS;
>> The earlier of these last two checks makes it that one can't successfully
>> call this function when the size query has returned 0.
> 
> This is actually a check that the build_id path already has.  I did
> consider it somewhat dubious to special case 0 here, but it needs to
> stay for the following patch to have no functional change.

Would such a minor functional change actually be a problem there?

>>> --- a/xen/include/public/version.h
>>> +++ b/xen/include/public/version.h
>>> @@ -19,12 +19,20 @@
>>>  /* arg == NULL; returns major:minor (16:16). */
>>>  #define XENVER_version      0
>>>  
>>> -/* arg == xen_extraversion_t. */
>>> +/*
>>> + * arg == xen_extraversion_t.
>>> + *
>>> + * This API/ABI is broken.  Use XENVER_extraversion2 instead.
>> Personally I don't like these "broken" that you're adding. These interfaces
>> simply are the way they are, with certain limitations. We also won't be
>> able to remove the old variants (except in the new ABI), so telling people
>> to avoid them provides us about nothing.
> 
> Incorrect.
> 
> First, the breakage here isn't only truncation; it's char-signedness
> with data that's not guaranteed to be ASCII text.  Yet another
> demonstration of why C is an inappropriate way of defining an ABI.
> 
> Secondly, it is unreasonable for ABI errors and correction information
> such as this not to be documented *somewhere*.  It should live with the
> API technical reference, which happens to be exactly (and only) here.

I agree with spelling out shortcomings and restrictions. The only thing
I do not agree with is the use of the word "broken" (or anything to the
same effect). Instead of using that word, how about you actually spell
out what the limitations are, so that people have grounds to pick
between these and the new interfaces you're adding (and are nudged
towards the latter)?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:05:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:05:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479111.742730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhtu-0004ws-6t; Tue, 17 Jan 2023 09:05:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479111.742730; Tue, 17 Jan 2023 09:05:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhtu-0004wl-47; Tue, 17 Jan 2023 09:05:02 +0000
Received: by outflank-mailman (input) for mailman id 479111;
 Tue, 17 Jan 2023 09:05:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IWGa=5O=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHhtt-0004wd-0c
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:05:01 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2063.outbound.protection.outlook.com [40.107.6.63])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 04671866-9646-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 10:04:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6900.eurprd04.prod.outlook.com (2603:10a6:208:17d::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Tue, 17 Jan
 2023 09:04:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Tue, 17 Jan 2023
 09:04:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04671866-9646-11ed-b8d0-410ff93cb8f0
Message-ID: <790836eb-c5e7-a768-c759-2cc7554168a9@suse.com>
Date: Tue, 17 Jan 2023 10:04:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/5] xen/version: Fold build_id handling into
 xenver_varbuf_op()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-5-andrew.cooper3@citrix.com>
 <f92334b1-7819-d638-fabd-91baca711615@suse.com>
 <4639aa6c-cef8-3434-1607-fcb4b563a991@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <4639aa6c-cef8-3434-1607-fcb4b563a991@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0140.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 16.01.2023 23:14, Andrew Cooper wrote:
> On 16/01/2023 4:14 pm, Jan Beulich wrote:
>> On 14.01.2023 00:08, Andrew Cooper wrote:
>>> --- a/xen/common/kernel.c
>>> +++ b/xen/common/kernel.c
>>> @@ -476,9 +476,22 @@ static long xenver_varbuf_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>      struct xen_varbuf user_str;
>>>      const char *str = NULL;
>>>      size_t sz;
>>> +    int rc;
>> Why is this declared here, yet ...
>>
>>>      switch ( cmd )
>>>      {
>>> +    case XENVER_build_id:
>>> +    {
>>> +        unsigned int local_sz;
>> ... this declared here?
> 
> Because rc is more likely to be used outside in the future, and...
> 
>>  Both could live in switch()'s scope,
> 
> ... this would have to be reverted tree-wide to use
> trivial-initialisation hardening, which we absolutely should be doing by
> default already.

Interesting; thanks for giving some background.

> I was sorely tempted to correct xen_build_id() to use size_t, but I
> don't have time to correct everything which is wrong here.  That can
> wait until later clean-up.
> 
> Alternatively, this is a pattern we have in quite a few places,
> returning a {ptr, sz} pair.  All architectures we compile for (and even
> x86 32bit given a suitable code-gen flag) are capable of returning at
> least 2 GPRs worth of data (ARM can do 4), so switching to some kind of
> 
> struct pair {
>     void *ptr;
>     size_t sz;
> };
> 
> return value would improve the code generation (and performance for that
> matter) across the board by avoiding unnecessary spills of
> pointers/sizes/secondary error information to the stack.

Sounds like a possible plan (ideally we'd check with RISC-V and PPC as
well in this regard, as these look to be the two upcoming new ports).

> The wins for hvm get/set_segment_register() are modest but absolutely
> worthwhile (and I notice I still haven't got those patches published.
> /sigh).

Did I ever post my 128-bit retval extension for altcall?

>>> +        rc = xen_build_id((const void **)&str, &local_sz);
>>> +        if ( rc )
>>> +            return rc;
>>> +
>>> +        sz = local_sz;
>>> +        goto have_len;
>> Personally I certainly dislike "goto" in general, and I thought the
>> common grounds were to permit its use in error handling (only).
> 
> That's not written in CODING_STYLE, nor has it (to my knowledge) ever
> been an expressed view on xen-devel.

It has been, but that was years ago.

> I don't use goto's gratuitously, and this one isn't.  Just try and write
> this patch without a goto and then judge which version is cleaner/easier
> to follow.

I wouldn't have given my R-b if I didn't realize that.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:07:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479117.742742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhvz-0005Uj-JY; Tue, 17 Jan 2023 09:07:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479117.742742; Tue, 17 Jan 2023 09:07:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHhvz-0005Uc-GY; Tue, 17 Jan 2023 09:07:11 +0000
Received: by outflank-mailman (input) for mailman id 479117;
 Tue, 17 Jan 2023 09:07:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHhvx-0005U9-L2; Tue, 17 Jan 2023 09:07:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHhvx-0002hN-CO; Tue, 17 Jan 2023 09:07:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHhvx-00066i-2I; Tue, 17 Jan 2023 09:07:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHhvx-0004dp-1Z; Tue, 17 Jan 2023 09:07:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zWpdGtgJ/Z0rg6jCDNIciXY77MvlcW4AubHf5iM6Bls=; b=Ley24nnFHjBSm5oKMV3Zy70AqL
	fpedlLKzICWvS6L/2TfvpLwgkYfdV0j+M4QlBdUN7i2iZ5klRz6gmJdrbM5pf+2ABQKdntaE1fi3u
	s5gF55K2IBFwbIVwcuRPxZEu98b3HcuU8ZPudtLLDPM0xoEshF7/ZBy1Trh3FvIIhaKI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175928-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175928: regressions - trouble: blocked/broken
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-build-prep:fail:regression
    xen-unstable-smoke:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 09:07:09 +0000

flight 175928 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175928/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   5 host-build-prep          fail REGR. vs. 175746
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    4 days
Failing since        175748  2023-01-12 20:01:56 Z    4 days   20 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    3 days   18 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing a memory size in hex without the 0x prefix can be misleading,
    so add it. Also, take the opportunity to adhere to the 80 character
    line length limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
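
    The ambiguity can be illustrated with a small standalone helper
    (hypothetical code, not the actual construct_domU implementation):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Format a memory size in hex with an unambiguous 0x prefix.
     * Plain "%lx" would print e.g. "40000000", which a reader could
     * mistake for a decimal number; "%#lx" adds the 0x prefix. */
    static void format_size(char *buf, size_t n, unsigned long size)
    {
        snprintf(buf, n, "%#lx", size);
    }
    ```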

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into
    account the literal pool which will be located at the end of the
    section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB), so take a shortcut
    and check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the surrounding
    lines. This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
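
    The fallback behaviour described above can be sketched in isolation
    (names are hypothetical stand-ins, not the actual vmx.c handlers):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for the per-CPU model-specific LBR table. */
    struct lbr_info { uint64_t base; };

    /* NULL models an SPR-like CPU without model-specific LBR. */
    static const struct lbr_info *model_specific_lbr;

    static uint64_t guest_lbr_msr;

    /* Write handler: discard the write (no #GP) when the CPU has no
     * model-specific LBR information. */
    static bool wrmsr_lbr(uint64_t val)
    {
        if (!model_specific_lbr)
            return true;
        guest_lbr_msr = val;
        return true;
    }

    /* Read handler: always return 0 without model-specific LBR. */
    static uint64_t rdmsr_lbr(void)
    {
        return model_specific_lbr ? guest_lbr_msr : 0;
    }
    ```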

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removes the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" differs from uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
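
    The fixed-size guarantee can be checked with a static assertion; the
    struct below is an illustrative stand-in, the real layout lives in the
    public version.h header:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Illustrative fixed-size variant of xen_feature_info. */
    struct xen_feature_info {
        uint32_t submap_idx; /* IN: which 32-bit submap to return */
        uint32_t submap;     /* OUT: the 32-bit submap */
    };

    /* With explicit uint32_t fields the struct has the same size and
     * layout in native and compat builds, so no translation is needed. */
    _Static_assert(sizeof(struct xen_feature_info) == 8,
                   "ABI size must not vary between build environments");
    ```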
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:11:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:11:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479127.742764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0E-0007IR-Gn; Tue, 17 Jan 2023 09:11:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479127.742764; Tue, 17 Jan 2023 09:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0E-0007IK-Dt; Tue, 17 Jan 2023 09:11:34 +0000
Received: by outflank-mailman (input) for mailman id 479127;
 Tue, 17 Jan 2023 09:11:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi0D-000725-CL
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:11:33 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef57441b-9646-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:11:32 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7822438216;
 Tue, 17 Jan 2023 09:11:32 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 48F211390C;
 Tue, 17 Jan 2023 09:11:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id QoNqEERmxmPebwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:11:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef57441b-9646-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946692; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zdMk1qkjmk7nTB2tDqSEvlsn0bUTpkOAiGi5WVrM4wM=;
	b=EvWYlOE4+ht6qUFPWnuAeTt227VV/Idv4qfIr3XFiGOaPnCQXkq9/HYHucdO+6u2aiihAf
	IcNRLuqjsVV818MOilkGf7X1iIiKzN0CNx9f2TjalW/c6txg61iHoKXabFrMTrZCsHm1G/
	f7XGesvJs1k2N7UYrFfcH7WHF3jLrmM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 01/17] tools/xenstore: let talloc_free() preserve errno
Date: Tue, 17 Jan 2023 10:11:08 +0100
Message-Id: <20230117091124.22170-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today talloc_free() is not guaranteed to preserve errno, especially
when a custom destructor is used.

So preserve errno in talloc_free().

This allows removing some errno saving outside of talloc.c.
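
The save/restore pattern the patch applies can be sketched outside of
talloc.c (simplified and hypothetical, not the actual talloc code):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

typedef int (*destructor_t)(void *);

/* Demo destructor that clobbers errno, as custom destructors may do. */
static int demo_destructor(void *p)
{
    (void)p;
    errno = ENOMEM;
    return 0;
}

/* Simplified free wrapper: save errno on entry and restore it on
 * every exit path, so callers never see it clobbered. */
static int free_preserving_errno(void *ptr, destructor_t destructor)
{
    int saved_errno = errno;

    if (ptr == NULL) {
        errno = saved_errno;
        return -1;
    }
    if (destructor && destructor(ptr) == -1) {
        errno = saved_errno;
        return -1;
    }
    free(ptr);
    errno = saved_errno;
    return 0;
}
```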

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- drop wrapper (Julien Grall)
---
 tools/xenstore/talloc.c         | 21 +++++++++++++--------
 tools/xenstore/xenstored_core.c |  2 --
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/talloc.c b/tools/xenstore/talloc.c
index d7edcf3a93..23c3a23b19 100644
--- a/tools/xenstore/talloc.c
+++ b/tools/xenstore/talloc.c
@@ -541,38 +541,39 @@ static void talloc_free_children(void *ptr)
 */
 int talloc_free(void *ptr)
 {
+	int saved_errno = errno;
 	struct talloc_chunk *tc;
 
 	if (ptr == NULL) {
-		return -1;
+		goto err;
 	}
 
 	tc = talloc_chunk_from_ptr(ptr);
 
 	if (tc->null_refs) {
 		tc->null_refs--;
-		return -1;
+		goto err;
 	}
 
 	if (tc->refs) {
 		talloc_reference_destructor(tc->refs);
-		return -1;
+		goto err;
 	}
 
 	if (tc->flags & TALLOC_FLAG_LOOP) {
 		/* we have a free loop - stop looping */
-		return 0;
+		goto success;
 	}
 
 	if (tc->destructor) {
 		talloc_destructor_t d = tc->destructor;
 		if (d == (talloc_destructor_t)-1) {
-			return -1;
+			goto err;
 		}
 		tc->destructor = (talloc_destructor_t)-1;
 		if (d(ptr) == -1) {
 			tc->destructor = d;
-			return -1;
+			goto err;
 		}
 		tc->destructor = NULL;
 	}
@@ -594,10 +595,14 @@ int talloc_free(void *ptr)
 	tc->flags |= TALLOC_FLAG_FREE;
 
 	free(tc);
+ success:
+	errno = saved_errno;
 	return 0;
-}
-
 
+ err:
+	errno = saved_errno;
+	return -1;
+}
 
 /*
   A talloc version of realloc. The context argument is only used if
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 78a3edaa4e..1650821922 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -771,9 +771,7 @@ struct node *read_node(struct connection *conn, const void *ctx,
 	return node;
 
  error:
-	err = errno;
 	talloc_free(node);
-	errno = err;
 	return NULL;
 }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:11:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:11:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479126.742753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0A-00072I-9l; Tue, 17 Jan 2023 09:11:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479126.742753; Tue, 17 Jan 2023 09:11:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0A-00072B-71; Tue, 17 Jan 2023 09:11:30 +0000
Received: by outflank-mailman (input) for mailman id 479126;
 Tue, 17 Jan 2023 09:11:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi08-000725-BC
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:11:28 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ec2196dd-9646-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:11:27 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D94C338217;
 Tue, 17 Jan 2023 09:11:26 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9443E1390C;
 Tue, 17 Jan 2023 09:11:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id /2/HIj5mxmPJbwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:11:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec2196dd-9646-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946686; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=UxfCHPYnY8JNBlbyj3Vhxrb9ea3aVDnNMLeEgN1l8tQ=;
	b=iddew9XXzsl5iqO9bfkgy3Up85ClX7/1Rd0B6hRWg2LwyXwiAlzLwPUM/z3NSF0uPC1++r
	v/E8hWc+4T3kT71AEsRVXZsJCjk+Fmnxkyo0kxYf5HLuLXG/jd6tv74PCvKkG3WnhXw8g7
	K3UmNdvs9VvW/dOzbwRId5/QuumNpA4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 00/17] tools/xenstore: do some cleanup and fixes
Date: Tue, 17 Jan 2023 10:11:07 +0100
Message-Id: <20230117091124.22170-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a first run of post-XSA patches which piled up during the
development phase of all the recent Xenstore related XSA patches.

At least the first 5 patches are completely independent of each
other. After those the dependencies start to be more complex.

This is a mixture of small fixes, enhancements and cleanups.

Changes in V3:
- patches 2, 3, and 5 of V2 have been applied already
- new patch 12
- addressed comments

Changes in V2:
- patches 1+2 of V1 have been applied already
- addressed comments
- new patch 19

Juergen Gross (17):
  tools/xenstore: let talloc_free() preserve errno
  tools/xenstore: remove all watches when a domain has stopped
  tools/xenstore: add hashlist for finding struct domain by domid
  tools/xenstore: introduce dummy nodes for special watch paths
  tools/xenstore: replace watch->relative_path with a prefix length
  tools/xenstore: move changed domain handling
  tools/xenstore: change per-domain node accounting interface
  tools/xenstore: don't allow creating too many nodes in a transaction
  tools/xenstore: replace literal domid 0 with dom0_domid
  tools/xenstore: make domain_is_unprivileged() an inline function
  tools/xenstore: let chk_domain_generation() return a bool
  tools/xenstore: don't let hashtable_remove() return the removed value
  tools/xenstore: switch hashtable to use the talloc framework
  tools/xenstore: make log macro globally available
  tools/xenstore: introduce trace classes
  tools/xenstore: let check_store() check the accounting data
  tools/xenstore: make output of "xenstore-control help" more pretty

 docs/misc/xenstore.txt                 |  10 +-
 tools/xenstore/hashtable.c             | 104 ++---
 tools/xenstore/hashtable.h             |   7 +-
 tools/xenstore/talloc.c                |  21 +-
 tools/xenstore/xenstored_control.c     |  36 +-
 tools/xenstore/xenstored_core.c        | 266 +++++++----
 tools/xenstore/xenstored_core.h        |  31 ++
 tools/xenstore/xenstored_domain.c      | 620 +++++++++++++------------
 tools/xenstore/xenstored_domain.h      |  21 +-
 tools/xenstore/xenstored_transaction.c |  76 +--
 tools/xenstore/xenstored_transaction.h |   7 +-
 tools/xenstore/xenstored_watch.c       |  36 +-
 12 files changed, 652 insertions(+), 583 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:11:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:11:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479128.742775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0L-0007ek-Ro; Tue, 17 Jan 2023 09:11:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479128.742775; Tue, 17 Jan 2023 09:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0L-0007eJ-NP; Tue, 17 Jan 2023 09:11:41 +0000
Received: by outflank-mailman (input) for mailman id 479128;
 Tue, 17 Jan 2023 09:11:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi0K-0007bs-8P
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:11:40 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2ad1d0b-9646-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 10:11:38 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 1399E38216;
 Tue, 17 Jan 2023 09:11:38 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D4A641390C;
 Tue, 17 Jan 2023 09:11:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 0wGCMklmxmPrbwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:11:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2ad1d0b-9646-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946698; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pX5UiTuGTkIWUb6YtyjDTQ7IukPWx9sxAlYTbqTbJs0=;
	b=kx+Wz/gVoOvbmptZE+o5NRjQpL0o8XEFpo0rhNTFM4XenVeTBueyxVmYeNO452JN4anBW6
	PSaAVxieTZOABJCqPQ/TgEWzIvOYHswC4tzs3hMxHixMzjxaHy0lchG5RrEUfFsRsGiYnH
	JYtEgsnEow0RrybXY/9P1+o0+Ro53p0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 02/17] tools/xenstore: remove all watches when a domain has stopped
Date: Tue, 17 Jan 2023 10:11:09 +0100
Message-Id: <20230117091124.22170-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a domain has been released by Xen tools, remove all its
registered watches. This avoids sending watch events to the dead domain
when all the nodes related to it are being removed by the Xen tools.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- move call to do_release() (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index aa86892fed..e669c89e94 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -740,6 +740,9 @@ int do_release(const void *ctx, struct connection *conn,
 	if (IS_ERR(domain))
 		return -PTR_ERR(domain);
 
+	/* Avoid triggering watch events when the domain's nodes are deleted. */
+	conn_delete_all_watches(domain->conn);
+
 	talloc_free(domain->conn);
 
 	send_ack(conn, XS_RELEASE);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:11:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:11:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479130.742786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0R-00084t-3q; Tue, 17 Jan 2023 09:11:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479130.742786; Tue, 17 Jan 2023 09:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0Q-00084h-WC; Tue, 17 Jan 2023 09:11:47 +0000
Received: by outflank-mailman (input) for mailman id 479130;
 Tue, 17 Jan 2023 09:11:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi0Q-0007bs-1f
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:11:46 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f6034998-9646-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 10:11:43 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A5A5F683D9;
 Tue, 17 Jan 2023 09:11:43 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 73C0C1390C;
 Tue, 17 Jan 2023 09:11:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id b4ztGk9mxmP2bwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:11:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6034998-9646-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946703; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cvGa+mf9SlLa4egdJdNXtGqzGNV+vTmzoXzqW3HDEaE=;
	b=ZJqVTQF9/M2yeoK0fgFxi0Kx511cw1FZAJ5KlRDSqxvP+lWccFDpcP/DI8vMf5EKy6sYov
	vZWxly1zsgUIn53xmTABqvCJ7msKctBTmJqh4kyUuz9lCmD/LamMnspDB23DGmzkG3eBBh
	cT7RP33CBDKZbMYhxhWr0Y/lI6Tl+48=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 03/17] tools/xenstore: add hashlist for finding struct domain by domid
Date: Tue, 17 Jan 2023 10:11:10 +0100
Message-Id: <20230117091124.22170-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today finding a struct domain by its domain id requires scanning the
list of domains until the correct domid is found.

Add a hashlist to speed this up. This allows removing the linking of
struct domain in a list. Note that the list of changed
domains per transaction is kept as a list, as there are no known use
cases with more than 4 domains being touched in a single transaction
(this would be a device handled by a driver domain and being assigned
to a HVM domain with device model in a stubdom, plus the control
domain).

Some simple performance tests comparing scanning and the hashlist have
shown that the hashlist will win as soon as more than 6 entries need
to be scanned.
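
The lookup scheme can be sketched with a minimal fixed-bucket chained
hash table (illustrative only; the real code reuses xenstored's generic
hashtable):

```c
#include <assert.h>
#include <stddef.h>

#define NBUCKETS 64

struct domain {
    unsigned int domid;
    struct domain *next; /* hash-chain link */
};

static struct domain *buckets[NBUCKETS];

static unsigned int hash_domid(unsigned int domid)
{
    return domid % NBUCKETS;
}

/* O(1) insert at the head of the bucket's chain. */
static void domain_insert(struct domain *d)
{
    unsigned int b = hash_domid(d->domid);

    d->next = buckets[b];
    buckets[b] = d;
}

/* O(1) average lookup, instead of scanning one global domain list. */
static struct domain *domain_find(unsigned int domid)
{
    struct domain *d;

    for (d = buckets[hash_domid(domid)]; d; d = d->next)
        if (d->domid == domid)
            return d;
    return NULL;
}
```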

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- add comment, fix return value of check_domain() (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 102 ++++++++++++++++++------------
 1 file changed, 60 insertions(+), 42 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index e669c89e94..3ad1028edb 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -48,8 +48,6 @@ static struct node_perms dom_introduce_perms;
 
 struct domain
 {
-	struct list_head list;
-
 	/* The id of this domain */
 	unsigned int domid;
 
@@ -96,7 +94,7 @@ struct domain
 	bool wrl_delay_logged;
 };
 
-static LIST_HEAD(domains);
+static struct hashtable *domhash;
 
 static bool check_indexes(XENSTORE_RING_IDX cons, XENSTORE_RING_IDX prod)
 {
@@ -309,7 +307,7 @@ static int destroy_domain(void *_domain)
 
 	domain_tree_remove(domain);
 
-	list_del(&domain->list);
+	hashtable_remove(domhash, &domain->domid);
 
 	if (!domain->introduced)
 		return 0;
@@ -341,43 +339,50 @@ static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
 	       dominfo->domid == domid;
 }
 
-void check_domains(void)
+static int check_domain(const void *k, void *v, void *arg)
 {
 	xc_dominfo_t dominfo;
-	struct domain *domain;
 	struct connection *conn;
-	int notify = 0;
 	bool dom_valid;
+	struct domain *domain = v;
+	bool *notify = arg;
 
- again:
-	list_for_each_entry(domain, &domains, list) {
-		dom_valid = get_domain_info(domain->domid, &dominfo);
-		if (!domain->introduced) {
-			if (!dom_valid) {
-				talloc_free(domain);
-				goto again;
-			}
-			continue;
-		}
-		if (dom_valid) {
-			if ((dominfo.crashed || dominfo.shutdown)
-			    && !domain->shutdown) {
-				domain->shutdown = true;
-				notify = 1;
-			}
-			if (!dominfo.dying)
-				continue;
-		}
-		if (domain->conn) {
-			/* domain is a talloc child of domain->conn. */
-			conn = domain->conn;
-			domain->conn = NULL;
-			talloc_unlink(talloc_autofree_context(), conn);
-			notify = 0; /* destroy_domain() fires the watch */
-			goto again;
+	dom_valid = get_domain_info(domain->domid, &dominfo);
+	if (!domain->introduced) {
+		if (!dom_valid)
+			talloc_free(domain);
+		return 0;
+	}
+	if (dom_valid) {
+		if ((dominfo.crashed || dominfo.shutdown)
+		    && !domain->shutdown) {
+			domain->shutdown = true;
+			*notify = true;
 		}
+		if (!dominfo.dying)
+			return 0;
+	}
+	if (domain->conn) {
+		/* domain is a talloc child of domain->conn. */
+		conn = domain->conn;
+		domain->conn = NULL;
+		talloc_unlink(talloc_autofree_context(), conn);
+		*notify = false; /* destroy_domain() fires the watch */
+
+		/* Above unlink might result in 2 domains being freed! */
+		return 1;
 	}
 
+	return 0;
+}
+
+void check_domains(void)
+{
+	bool notify = false;
+
+	while (hashtable_iterate(domhash, check_domain, &notify))
+		;
+
 	if (notify)
 		fire_watches(NULL, NULL, "@releaseDomain", NULL, true, NULL);
 }
@@ -415,13 +420,7 @@ static char *talloc_domain_path(const void *context, unsigned int domid)
 
 static struct domain *find_domain_struct(unsigned int domid)
 {
-	struct domain *i;
-
-	list_for_each_entry(i, &domains, list) {
-		if (i->domid == domid)
-			return i;
-	}
-	return NULL;
+	return hashtable_search(domhash, &domid);
 }
 
 int domain_get_quota(const void *ctx, struct connection *conn,
@@ -470,9 +469,13 @@ static struct domain *alloc_domain(const void *context, unsigned int domid)
 	domain->generation = generation;
 	domain->introduced = false;
 
-	talloc_set_destructor(domain, destroy_domain);
+	if (!hashtable_insert(domhash, &domain->domid, domain)) {
+		talloc_free(domain);
+		errno = ENOMEM;
+		return NULL;
+	}
 
-	list_add(&domain->list, &domains);
+	talloc_set_destructor(domain, destroy_domain);
 
 	return domain;
 }
@@ -906,10 +909,25 @@ void dom0_init(void)
 	xenevtchn_notify(xce_handle, dom0->port);
 }
 
+static unsigned int domhash_fn(void *k)
+{
+	return *(unsigned int *)k;
+}
+
+static int domeq_fn(void *key1, void *key2)
+{
+	return *(unsigned int *)key1 == *(unsigned int *)key2;
+}
+
 void domain_init(int evtfd)
 {
 	int rc;
 
+	/* Start with a random rather low domain count for the hashtable. */
+	domhash = create_hashtable(8, domhash_fn, domeq_fn, 0);
+	if (!domhash)
+		barf_perror("Failed to allocate domain hashtable");
+
 	xc_handle = talloc(talloc_autofree_context(), xc_interface*);
 	if (!xc_handle)
 		barf_perror("Failed to allocate domain handle");
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:11:54 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 04/17] tools/xenstore: introduce dummy nodes for special watch paths
Date: Tue, 17 Jan 2023 10:11:11 +0100
Message-Id: <20230117091124.22170-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of special-casing the permission handling and watch event
firing for the special watch paths "@introduceDomain" and
"@releaseDomain", use static dummy nodes added to the database when
starting Xenstore.

The node accounting needs to reflect that change by adding the special
nodes in the domain_entry_fix() call in setup_structure().

Note that this requires reworking the calls of fire_watches() for the
special events in order to avoid leaking memory.

Move the check for a valid node name from get_node() to
get_node_canonicalized(), as this allows get_node() to be used for the
special nodes, too.

In order to avoid read and write accesses to the special nodes, use a
special variant for obtaining the current node data for the permission
handling.

This allows quite some code to be simplified. In the future, this
change will make sub-nodes of the special nodes possible, allowing
more fine-grained permission control of special events for specific
domains.
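The dispatch added by get_spec_node() hinges on one property: special
node paths start with '@' and must bypass path canonicalization. A
minimal sketch of that classification follows (the helper names
`is_special_path` and `is_known_special_node` are hypothetical, chosen
for the example; only the two special nodes named in this patch are
checked):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Special watch paths start with '@' and skip canonicalization. */
static bool is_special_path(const char *name)
{
	return name[0] == '@';
}

/* The two special nodes created in setup_structure() by this patch. */
static bool is_known_special_node(const char *name)
{
	return !strcmp(name, "@releaseDomain") ||
	       !strcmp(name, "@introduceDomain");
}
```

In the patch itself this check selects between get_node() (special
nodes, absolute by construction) and get_node_canonicalized() (regular
paths, possibly relative to the connection's implicit path).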

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- add get_spec_node()
- expand commit message (Julien Grall)
V3:
- modify get_acc_domid() comment (Julien Grall)
- log error in fire_special_watches() (Julien Grall)
---
 tools/xenstore/xenstored_control.c |   3 -
 tools/xenstore/xenstored_core.c    |  94 ++++++++++-------
 tools/xenstore/xenstored_domain.c  | 164 ++++-------------------------
 tools/xenstore/xenstored_domain.h  |   6 --
 tools/xenstore/xenstored_watch.c   |  17 +--
 5 files changed, 83 insertions(+), 201 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index d1aaa00bf4..41e6992591 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -676,9 +676,6 @@ static const char *lu_dump_state(const void *ctx, struct connection *conn)
 	if (ret)
 		goto out;
 	ret = dump_state_connections(fp);
-	if (ret)
-		goto out;
-	ret = dump_state_special_nodes(fp);
 	if (ret)
 		goto out;
 	ret = dump_state_nodes(fp, ctx);
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 1650821922..fb4379e67c 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -611,12 +611,13 @@ static void get_acc_data(TDB_DATA *key, struct node_account_data *acc)
  * Per-transaction nodes need to be accounted for the transaction owner.
  * Those nodes are stored in the data base with the transaction generation
  * count prepended (e.g. 123/local/domain/...). So testing for the node's
- * key not to start with "/" is sufficient.
+ * key not to start with "/" or "@" is sufficient.
  */
 static unsigned int get_acc_domid(struct connection *conn, TDB_DATA *key,
 				  unsigned int domid)
 {
-	return (!conn || key->dptr[0] == '/') ? domid : conn->id;
+	return (!conn || key->dptr[0] == '/' || key->dptr[0] == '@')
+	       ? domid : conn->id;
 }
 
 int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
@@ -958,10 +959,6 @@ static struct node *get_node(struct connection *conn,
 {
 	struct node *node;
 
-	if (!name || !is_valid_nodename(name)) {
-		errno = EINVAL;
-		return NULL;
-	}
 	node = read_node(conn, ctx, name);
 	/* If we don't have permission, we don't have node. */
 	if (node) {
@@ -1250,9 +1247,23 @@ static struct node *get_node_canonicalized(struct connection *conn,
 	*canonical_name = canonicalize(conn, ctx, name);
 	if (!*canonical_name)
 		return NULL;
+	if (!is_valid_nodename(*canonical_name)) {
+		errno = EINVAL;
+		return NULL;
+	}
 	return get_node(conn, ctx, *canonical_name, perm);
 }
 
+static struct node *get_spec_node(struct connection *conn, const void *ctx,
+				  const char *name, char **canonical_name,
+				  unsigned int perm)
+{
+	if (name[0] == '@')
+		return get_node(conn, ctx, name, perm);
+
+	return get_node_canonicalized(conn, ctx, name, canonical_name, perm);
+}
+
 static int send_directory(const void *ctx, struct connection *conn,
 			  struct buffered_data *in)
 {
@@ -1737,8 +1748,7 @@ static int do_get_perms(const void *ctx, struct connection *conn,
 	char *strings;
 	unsigned int len;
 
-	node = get_node_canonicalized(conn, ctx, onearg(in), NULL,
-				      XS_PERM_READ);
+	node = get_spec_node(conn, ctx, onearg(in), NULL, XS_PERM_READ);
 	if (!node)
 		return errno;
 
@@ -1780,17 +1790,9 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 	if (perms.p[0].perms & XS_PERM_IGNORE)
 		return ENOENT;
 
-	/* First arg is node name. */
-	if (strstarts(in->buffer, "@")) {
-		if (set_perms_special(conn, in->buffer, &perms))
-			return errno;
-		send_ack(conn, XS_SET_PERMS);
-		return 0;
-	}
-
 	/* We must own node to do this (tools can do this too). */
-	node = get_node_canonicalized(conn, ctx, in->buffer, &name,
-				      XS_PERM_WRITE | XS_PERM_OWNER);
+	node = get_spec_node(conn, ctx, in->buffer, &name,
+			     XS_PERM_WRITE | XS_PERM_OWNER);
 	if (!node)
 		return errno;
 
@@ -2388,7 +2390,9 @@ void setup_structure(bool live_update)
 		manual_node("/", "tool");
 		manual_node("/tool", "xenstored");
 		manual_node("/tool/xenstored", NULL);
-		domain_entry_fix(dom0_domid, 3, true);
+		manual_node("@releaseDomain", NULL);
+		manual_node("@introduceDomain", NULL);
+		domain_entry_fix(dom0_domid, 5, true);
 	}
 
 	check_store();
@@ -3170,6 +3174,23 @@ static int dump_state_node(const void *ctx, struct connection *conn,
 	return WALK_TREE_OK;
 }
 
+static int dump_state_special_node(FILE *fp, const void *ctx,
+				   struct dump_node_data *data,
+				   const char *name)
+{
+	struct node *node;
+	int ret;
+
+	node = read_node(NULL, ctx, name);
+	if (!node)
+		return dump_state_node_err(data, "Dump node read node error");
+
+	ret = dump_state_node(ctx, NULL, node, data);
+	talloc_free(node);
+
+	return ret;
+}
+
 const char *dump_state_nodes(FILE *fp, const void *ctx)
 {
 	struct dump_node_data data = {
@@ -3181,6 +3202,11 @@ const char *dump_state_nodes(FILE *fp, const void *ctx)
 	if (walk_node_tree(ctx, NULL, "/", &walkfuncs, &data))
 		return data.err;
 
+	if (dump_state_special_node(fp, ctx, &data, "@releaseDomain"))
+		return data.err;
+	if (dump_state_special_node(fp, ctx, &data, "@introduceDomain"))
+		return data.err;
+
 	return NULL;
 }
 
@@ -3354,25 +3380,21 @@ void read_state_node(const void *ctx, const void *state)
 		node->perms.p[i].id = sn->perms[i].domid;
 	}
 
-	if (strstarts(name, "@")) {
-		set_perms_special(&conn, name, &node->perms);
-		talloc_free(node);
-		return;
-	}
-
-	parentname = get_parent(node, name);
-	if (!parentname)
-		barf("allocation error restoring node");
-	parent = read_node(NULL, node, parentname);
-	if (!parent)
-		barf("read parent error restoring node");
+	if (!strstarts(name, "@")) {
+		parentname = get_parent(node, name);
+		if (!parentname)
+			barf("allocation error restoring node");
+		parent = read_node(NULL, node, parentname);
+		if (!parent)
+			barf("read parent error restoring node");
 
-	if (add_child(node, parent, name))
-		barf("allocation error restoring node");
+		if (add_child(node, parent, name))
+			barf("allocation error restoring node");
 
-	set_tdb_key(parentname, &key);
-	if (write_node_raw(NULL, &key, parent, true))
-		barf("write parent error restoring node");
+		set_tdb_key(parentname, &key);
+		if (write_node_raw(NULL, &key, parent, true))
+			barf("write parent error restoring node");
+	}
 
 	set_tdb_key(name, &key);
 	if (write_node_raw(NULL, &key, node, true))
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 3ad1028edb..99f0afcb1f 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,9 +43,6 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
-static struct node_perms dom_release_perms;
-static struct node_perms dom_introduce_perms;
-
 struct domain
 {
 	/* The id of this domain */
@@ -225,27 +222,6 @@ static void unmap_interface(void *interface)
 	xengnttab_unmap(*xgt_handle, interface, 1);
 }
 
-static void remove_domid_from_perm(struct node_perms *perms,
-				   struct domain *domain)
-{
-	unsigned int cur, new;
-
-	if (perms->p[0].id == domain->domid)
-		perms->p[0].id = priv_domid;
-
-	for (cur = new = 1; cur < perms->num; cur++) {
-		if (perms->p[cur].id == domain->domid)
-			continue;
-
-		if (new != cur)
-			perms->p[new] = perms->p[cur];
-
-		new++;
-	}
-
-	perms->num = new;
-}
-
 static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 				  struct node *node, void *arg)
 {
@@ -297,8 +273,26 @@ static void domain_tree_remove(struct domain *domain)
 			       "error when looking for orphaned nodes\n");
 	}
 
-	remove_domid_from_perm(&dom_release_perms, domain);
-	remove_domid_from_perm(&dom_introduce_perms, domain);
+	walk_node_tree(domain, NULL, "@releaseDomain", &walkfuncs, domain);
+	walk_node_tree(domain, NULL, "@introduceDomain", &walkfuncs, domain);
+}
+
+static void fire_special_watches(const char *name)
+{
+	void *ctx = talloc_new(NULL);
+	struct node *node;
+
+	if (!ctx)
+		return;
+
+	node = read_node(NULL, ctx, name);
+
+	if (node)
+		fire_watches(NULL, ctx, name, node, true, NULL);
+	else
+		syslog(LOG_ERR, "special node %s not found\n", name);
+
+	talloc_free(ctx);
 }
 
 static int destroy_domain(void *_domain)
@@ -326,7 +320,7 @@ static int destroy_domain(void *_domain)
 			unmap_interface(domain->interface);
 	}
 
-	fire_watches(NULL, domain, "@releaseDomain", NULL, true, NULL);
+	fire_special_watches("@releaseDomain");
 
 	wrl_domain_destroy(domain);
 
@@ -384,7 +378,7 @@ void check_domains(void)
 		;
 
 	if (notify)
-		fire_watches(NULL, NULL, "@releaseDomain", NULL, true, NULL);
+		fire_special_watches("@releaseDomain");
 }
 
 /* We scan all domains rather than use the information given here. */
@@ -633,8 +627,7 @@ static struct domain *introduce_domain(const void *ctx,
 		}
 
 		if (!is_master_domain && !restore)
-			fire_watches(NULL, ctx, "@introduceDomain", NULL,
-				     true, NULL);
+			fire_special_watches("@introduceDomain");
 	} else {
 		/* Use XS_INTRODUCE for recreating the xenbus event-channel. */
 		if (domain->port)
@@ -840,59 +833,6 @@ const char *get_implicit_path(const struct connection *conn)
 	return conn->domain->path;
 }
 
-static int set_dom_perms_default(struct node_perms *perms)
-{
-	perms->num = 1;
-	perms->p = talloc_array(NULL, struct xs_permissions, perms->num);
-	if (!perms->p)
-		return -1;
-	perms->p->id = 0;
-	perms->p->perms = XS_PERM_NONE;
-
-	return 0;
-}
-
-static struct node_perms *get_perms_special(const char *name)
-{
-	if (!strcmp(name, "@releaseDomain"))
-		return &dom_release_perms;
-	if (!strcmp(name, "@introduceDomain"))
-		return &dom_introduce_perms;
-	return NULL;
-}
-
-int set_perms_special(struct connection *conn, const char *name,
-		      struct node_perms *perms)
-{
-	struct node_perms *p;
-
-	p = get_perms_special(name);
-	if (!p)
-		return EINVAL;
-
-	if ((perm_for_conn(conn, p) & (XS_PERM_WRITE | XS_PERM_OWNER)) !=
-	    (XS_PERM_WRITE | XS_PERM_OWNER))
-		return EACCES;
-
-	p->num = perms->num;
-	talloc_free(p->p);
-	p->p = perms->p;
-	talloc_steal(NULL, perms->p);
-
-	return 0;
-}
-
-bool check_perms_special(const char *name, struct connection *conn)
-{
-	struct node_perms *p;
-
-	p = get_perms_special(name);
-	if (!p)
-		return false;
-
-	return perm_for_conn(conn, p) & XS_PERM_READ;
-}
-
 void dom0_init(void)
 {
 	evtchn_port_t port;
@@ -962,10 +902,6 @@ void domain_init(int evtfd)
 	if (xce_handle == NULL)
 		barf_perror("Failed to open evtchn device");
 
-	if (set_dom_perms_default(&dom_release_perms) ||
-	    set_dom_perms_default(&dom_introduce_perms))
-		barf_perror("Failed to set special permissions");
-
 	if ((rc = xenevtchn_bind_virq(xce_handle, VIRQ_DOM_EXC)) == -1)
 		barf_perror("Failed to bind to domain exception virq port");
 	virq_port = rc;
@@ -1535,60 +1471,6 @@ const char *dump_state_connections(FILE *fp)
 	return ret;
 }
 
-static const char *dump_state_special_node(FILE *fp, const char *name,
-					   const struct node_perms *perms)
-{
-	struct xs_state_record_header head;
-	struct xs_state_node sn;
-	unsigned int pathlen;
-	const char *ret;
-
-	pathlen = strlen(name) + 1;
-
-	head.type = XS_STATE_TYPE_NODE;
-	head.length = sizeof(sn);
-
-	sn.conn_id = 0;
-	sn.ta_id = 0;
-	sn.ta_access = 0;
-	sn.perm_n = perms->num;
-	sn.path_len = pathlen;
-	sn.data_len = 0;
-	head.length += perms->num * sizeof(*sn.perms);
-	head.length += pathlen;
-	head.length = ROUNDUP(head.length, 3);
-	if (fwrite(&head, sizeof(head), 1, fp) != 1)
-		return "Dump special node error";
-	if (fwrite(&sn, sizeof(sn), 1, fp) != 1)
-		return "Dump special node error";
-
-	ret = dump_state_node_perms(fp, perms->p, perms->num);
-	if (ret)
-		return ret;
-
-	if (fwrite(name, pathlen, 1, fp) != 1)
-		return "Dump special node path error";
-
-	ret = dump_state_align(fp);
-
-	return ret;
-}
-
-const char *dump_state_special_nodes(FILE *fp)
-{
-	const char *ret;
-
-	ret = dump_state_special_node(fp, "@releaseDomain",
-				      &dom_release_perms);
-	if (ret)
-		return ret;
-
-	ret = dump_state_special_node(fp, "@introduceDomain",
-				      &dom_introduce_perms);
-
-	return ret;
-}
-
 void read_state_connection(const void *ctx, const void *state)
 {
 	const struct xs_state_connection *sc = state;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index b38c82991d..630641d620 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -99,11 +99,6 @@ void domain_outstanding_domid_dec(unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 
-/* Special node permission handling. */
-int set_perms_special(struct connection *conn, const char *name,
-		      struct node_perms *perms);
-bool check_perms_special(const char *name, struct connection *conn);
-
 /* Write rate limiting */
 
 #define WRL_FACTOR   1000 /* for fixed-point arithmetic */
@@ -132,7 +127,6 @@ void wrl_apply_debit_direct(struct connection *conn);
 void wrl_apply_debit_trans_commit(struct connection *conn);
 
 const char *dump_state_connections(FILE *fp);
-const char *dump_state_special_nodes(FILE *fp);
 
 void read_state_connection(const void *ctx, const void *state);
 
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 316c08b7f7..75748ac109 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -46,13 +46,6 @@ struct watch
 	char *node;
 };
 
-static bool check_special_event(const char *name)
-{
-	assert(name);
-
-	return strstarts(name, "@");
-}
-
 /* Is child a subnode of parent, or equal? */
 static bool is_child(const char *child, const char *parent)
 {
@@ -153,14 +146,8 @@ void fire_watches(struct connection *conn, const void *ctx, const char *name,
 
 	/* Create an event for each watch. */
 	list_for_each_entry(i, &connections, list) {
-		/* introduce/release domain watches */
-		if (check_special_event(name)) {
-			if (!check_perms_special(name, i))
-				continue;
-		} else {
-			if (!watch_permitted(i, ctx, name, node, perms))
-				continue;
-		}
+		if (!watch_permitted(i, ctx, name, node, perms))
+			continue;
 
 		list_for_each_entry(watch, &i->watches, list) {
 			if (exact) {
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:11:58 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 05/17] tools/xenstore: replace watch->relative_path with a prefix length
Date: Tue, 17 Jan 2023 10:11:12 +0100
Message-Id: <20230117091124.22170-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of storing in struct watch a pointer to the path which is
prepended to relative paths, just store the length of the prepended
path.

It should be noted that the now removed special case of the
relative path being "" in get_watch_path() can't happen at all.
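The replacement is purely arithmetic: for a relative watch the stored
node name is "<implicit path>/<relative part>", so skipping the
implicit path plus the '/' separator recovers the path the client
registered. A sketch under that assumption (the helper
`watch_prefix_len` is hypothetical; it mirrors the expression added to
add_watch(), and `watch_path` mirrors the new get_watch_path()):

```c
#include <assert.h>
#include <string.h>

/* Mirrors add_watch(): prefix length covers the implicit path + '/'. */
static unsigned int watch_prefix_len(const char *implicit_path,
				     int relative)
{
	return relative ? (unsigned int)strlen(implicit_path) + 1 : 0;
}

/* Mirrors get_watch_path(): just an offset into the full node name. */
static const char *watch_path(const char *name, unsigned int prefix_len)
{
	return name + prefix_len;
}
```

For an absolute watch the prefix length is 0 and the full name is
returned unchanged, so no special case is needed.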

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- don't open code get_watch_path() (Julien Grall)
V3:
- drop needless modification of dump_state_watches() (Julien Grall)
---
 tools/xenstore/xenstored_watch.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 75748ac109..8ad0229df6 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -39,8 +39,8 @@ struct watch
 	/* Current outstanding events applying to this watch. */
 	struct list_head events;
 
-	/* Is this relative to connnection's implicit path? */
-	const char *relative_path;
+	/* Offset into path for skipping prefix (used for relative paths). */
+	unsigned int prefix_len;
 
 	char *token;
 	char *node;
@@ -66,15 +66,7 @@ static bool is_child(const char *child, const char *parent)
 
 static const char *get_watch_path(const struct watch *watch, const char *name)
 {
-	const char *path = name;
-
-	if (watch->relative_path) {
-		path += strlen(watch->relative_path);
-		if (*path == '/') /* Could be "" */
-			path++;
-	}
-
-	return path;
+	return name + watch->prefix_len;
 }
 
 /*
@@ -211,10 +203,7 @@ static struct watch *add_watch(struct connection *conn, char *path, char *token,
 			      no_quota_check))
 		goto nomem;
 
-	if (relative)
-		watch->relative_path = get_implicit_path(conn);
-	else
-		watch->relative_path = NULL;
+	watch->prefix_len = relative ? strlen(get_implicit_path(conn)) + 1 : 0;
 
 	INIT_LIST_HEAD(&watch->events);
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:12:03 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 06/17] tools/xenstore: move changed domain handling
Date: Tue, 17 Jan 2023 10:11:13 +0100
Message-Id: <20230117091124.22170-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move all code related to struct changed_domain from
xenstored_transaction.c to xenstored_domain.c.

This will be needed later in order to simplify the accounting data
updates in case of errors during a request.

Split the code to provide a more generic base framework.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- remove unrelated change (Julien Grall)
V3:
- modify acc_add_dom_nbentry() interface (Julien Grall)
---
 tools/xenstore/xenstored_domain.c      | 78 ++++++++++++++++++++++++++
 tools/xenstore/xenstored_domain.h      |  3 +
 tools/xenstore/xenstored_transaction.c | 64 ++-------------------
 3 files changed, 85 insertions(+), 60 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 99f0afcb1f..3a2ab5d87e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -91,6 +91,18 @@ struct domain
 	bool wrl_delay_logged;
 };
 
+struct changed_domain
+{
+	/* List of all changed domains. */
+	struct list_head list;
+
+	/* Identifier of the changed domain. */
+	unsigned int domid;
+
+	/* Amount by which this domain's nbentry field has changed. */
+	int nbentry;
+};
+
 static struct hashtable *domhash;
 
 static bool check_indexes(XENSTORE_RING_IDX cons, XENSTORE_RING_IDX prod)
@@ -543,6 +555,72 @@ static struct domain *find_domain_by_domid(unsigned int domid)
 	return (d && d->introduced) ? d : NULL;
 }
 
+int acc_fix_domains(struct list_head *head, bool update)
+{
+	struct changed_domain *cd;
+	int cnt;
+
+	list_for_each_entry(cd, head, list) {
+		cnt = domain_entry_fix(cd->domid, cd->nbentry, update);
+		if (!update) {
+			if (cnt >= quota_nb_entry_per_domain)
+				return ENOSPC;
+			if (cnt < 0)
+				return ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+static struct changed_domain *acc_find_changed_domain(struct list_head *head,
+						      unsigned int domid)
+{
+	struct changed_domain *cd;
+
+	list_for_each_entry(cd, head, list) {
+		if (cd->domid == domid)
+			return cd;
+	}
+
+	return NULL;
+}
+
+static struct changed_domain *acc_get_changed_domain(const void *ctx,
+						     struct list_head *head,
+						     unsigned int domid)
+{
+	struct changed_domain *cd;
+
+	cd = acc_find_changed_domain(head, domid);
+	if (cd)
+		return cd;
+
+	cd = talloc_zero(ctx, struct changed_domain);
+	if (!cd)
+		return NULL;
+
+	cd->domid = domid;
+	list_add_tail(&cd->list, head);
+
+	return cd;
+}
+
+int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
+			unsigned int domid)
+{
+	struct changed_domain *cd;
+
+	cd = acc_get_changed_domain(ctx, head, domid);
+	if (!cd)
+		return 0;
+
+	errno = 0;
+	cd->nbentry += val;
+
+	return cd->nbentry;
+}
+
 static void domain_conn_reset(struct domain *domain)
 {
 	struct connection *conn = domain->conn;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 630641d620..9e20d2b17d 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -98,6 +98,9 @@ void domain_outstanding_dec(struct connection *conn);
 void domain_outstanding_domid_dec(unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
+int acc_fix_domains(struct list_head *head, bool update);
+int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
+			unsigned int domid);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index ac854197ca..c009c67989 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -137,18 +137,6 @@ struct accessed_node
 	bool watch_exact;
 };
 
-struct changed_domain
-{
-	/* List of all changed domains in the context of this transaction. */
-	struct list_head list;
-
-	/* Identifier of the changed domain. */
-	unsigned int domid;
-
-	/* Amount by which this domain's nbentry field has changed. */
-	int nbentry;
-};
-
 struct transaction
 {
 	/* List of all transactions active on this connection. */
@@ -514,24 +502,6 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-static int transaction_fix_domains(struct transaction *trans, bool update)
-{
-	struct changed_domain *d;
-	int cnt;
-
-	list_for_each_entry(d, &trans->changed_domains, list) {
-		cnt = domain_entry_fix(d->domid, d->nbentry, update);
-		if (!update) {
-			if (cnt >= quota_nb_entry_per_domain)
-				return ENOSPC;
-			if (cnt < 0)
-				return ENOMEM;
-		}
-	}
-
-	return 0;
-}
-
 int do_transaction_end(const void *ctx, struct connection *conn,
 		       struct buffered_data *in)
 {
@@ -558,7 +528,7 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 	if (streq(arg, "T")) {
 		if (trans->fail)
 			return ENOMEM;
-		ret = transaction_fix_domains(trans, false);
+		ret = acc_fix_domains(&trans->changed_domains, false);
 		if (ret)
 			return ret;
 		ret = finalize_transaction(conn, trans, &is_corrupt);
@@ -568,7 +538,7 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 		wrl_apply_debit_trans_commit(conn);
 
 		/* fix domain entry for each changed domain */
-		transaction_fix_domains(trans, true);
+		acc_fix_domains(&trans->changed_domains, true);
 
 		if (is_corrupt)
 			corrupt(conn, "transaction inconsistency");
@@ -580,44 +550,18 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 void transaction_entry_inc(struct transaction *trans, unsigned int domid)
 {
-	struct changed_domain *d;
-
-	list_for_each_entry(d, &trans->changed_domains, list)
-		if (d->domid == domid) {
-			d->nbentry++;
-			return;
-		}
-
-	d = talloc(trans, struct changed_domain);
-	if (!d) {
+	if (!acc_add_dom_nbentry(trans, &trans->changed_domains, 1, domid)) {
 		/* Let the transaction fail. */
 		trans->fail = true;
-		return;
 	}
-	d->domid = domid;
-	d->nbentry = 1;
-	list_add_tail(&d->list, &trans->changed_domains);
 }
 
 void transaction_entry_dec(struct transaction *trans, unsigned int domid)
 {
-	struct changed_domain *d;
-
-	list_for_each_entry(d, &trans->changed_domains, list)
-		if (d->domid == domid) {
-			d->nbentry--;
-			return;
-		}
-
-	d = talloc(trans, struct changed_domain);
-	if (!d) {
+	if (!acc_add_dom_nbentry(trans, &trans->changed_domains, -1, domid)) {
 		/* Let the transaction fail. */
 		trans->fail = true;
-		return;
 	}
-	d->domid = domid;
-	d->nbentry = -1;
-	list_add_tail(&d->list, &trans->changed_domains);
 }
 
 void fail_transaction(struct transaction *trans)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:12:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:12:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479148.742830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0o-0001dq-CJ; Tue, 17 Jan 2023 09:12:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479148.742830; Tue, 17 Jan 2023 09:12:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi0o-0001dd-9E; Tue, 17 Jan 2023 09:12:10 +0000
Received: by outflank-mailman (input) for mailman id 479148;
 Tue, 17 Jan 2023 09:12:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi0m-0007bs-Rn
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:08 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0361ee14-9647-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 10:12:06 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1A2C6683D9;
 Tue, 17 Jan 2023 09:12:06 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DDD041390C;
 Tue, 17 Jan 2023 09:12:05 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id wxzXNGVmxmM1cAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0361ee14-9647-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946726; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mfRRwxo4vvhdUuVMMBFhh02P5cIV8rdGRXpO+cxP6K8=;
	b=ohgAaZeOSqPTPI1fLeNEHrMNpW+1xbO5vEcwt+6PNN6Qq42ZuN9+e5rp3kFdgqWbLSbEgW
	/kbrHxKhgPB4FMdSglTReJM6BEkAFISZ0YrlUj87y9QoMgOM3OXAZlUJNC/TNb7UEb0kMI
	2OnxGeU95KSZtfhvR6NygCls3ksHYIQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 07/17] tools/xenstore: change per-domain node accounting interface
Date: Tue, 17 Jan 2023 10:11:14 +0100
Message-Id: <20230117091124.22170-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rework the interface and the internals of the per-domain node
accounting:

- rename the functions to domain_nbentry_*() in order to better match
  the related counter name

- switch the interface from a node pointer to a domid, as all nodes
  have the owner filled in

- use a common internal function for adding a value to the counter

For the transaction case, add a helper function to get the list head
of the per-transaction changed domains, making it possible to
eliminate the transaction_entry_*() functions.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- add get_node_owner() (Julien Grall)
- rename domain_nbentry_add() parameter (Julien Grall)
- avoid negative entry count (Julien Grall)
---
 tools/xenstore/xenstored_core.c        |  33 ++++---
 tools/xenstore/xenstored_domain.c      | 126 ++++++++++++-------------
 tools/xenstore/xenstored_domain.h      |  10 +-
 tools/xenstore/xenstored_transaction.c |  15 +--
 tools/xenstore/xenstored_transaction.h |   7 +-
 5 files changed, 86 insertions(+), 105 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index fb4379e67c..561fb121d3 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -700,6 +700,11 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 	return 0;
 }
 
+static unsigned int get_node_owner(const struct node *node)
+{
+	return node->perms.p[0].id;
+}
+
 /*
  * If it fails, returns NULL and sets errno.
  * Temporary memory allocations will be done with ctx.
@@ -752,13 +757,13 @@ struct node *read_node(struct connection *conn, const void *ctx,
 
 	/* Permissions are struct xs_permissions. */
 	node->perms.p = hdr->perms;
-	node->acc.domid = node->perms.p[0].id;
+	node->acc.domid = get_node_owner(node);
 	node->acc.memory = data.dsize;
 	if (domain_adjust_node_perms(node))
 		goto error;
 
 	/* If owner is gone reset currently accounted memory size. */
-	if (node->acc.domid != node->perms.p[0].id)
+	if (node->acc.domid != get_node_owner(node))
 		node->acc.memory = 0;
 
 	/* Data is binary blob (usually ascii, no nul). */
@@ -1459,7 +1464,7 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
 static int destroy_node(struct connection *conn, struct node *node)
 {
 	destroy_node_rm(conn, node);
-	domain_entry_dec(conn, node);
+	domain_nbentry_dec(conn, get_node_owner(node));
 
 	/*
 	 * It is not possible to easily revert the changes in a transaction.
@@ -1498,7 +1503,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 	for (i = node; i; i = i->parent) {
 		/* i->parent is set for each new node, so check quota. */
 		if (i->parent &&
-		    domain_entry(conn) >= quota_nb_entry_per_domain) {
+		    domain_nbentry(conn) >= quota_nb_entry_per_domain) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -1509,7 +1514,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 
 		/* Account for new node */
 		if (i->parent) {
-			if (domain_entry_inc(conn, i)) {
+			if (domain_nbentry_inc(conn, get_node_owner(i))) {
 				destroy_node_rm(conn, i);
 				return NULL;
 			}
@@ -1662,7 +1667,7 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 	watch_exact = strcmp(root, node->name);
 	fire_watches(conn, ctx, node->name, node, watch_exact, NULL);
 
-	domain_entry_dec(conn, node);
+	domain_nbentry_dec(conn, get_node_owner(node));
 
 	return WALK_TREE_RM_CHILDENTRY;
 }
@@ -1798,29 +1803,29 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 
 	/* Unprivileged domains may not change the owner. */
 	if (domain_is_unprivileged(conn) &&
-	    perms.p[0].id != node->perms.p[0].id)
+	    perms.p[0].id != get_node_owner(node))
 		return EPERM;
 
 	old_perms = node->perms;
-	domain_entry_dec(conn, node);
+	domain_nbentry_dec(conn, get_node_owner(node));
 	node->perms = perms;
-	if (domain_entry_inc(conn, node)) {
+	if (domain_nbentry_inc(conn, get_node_owner(node))) {
 		node->perms = old_perms;
 		/*
 		 * This should never fail because we had a reference on the
 		 * domain before and Xenstored is single-threaded.
 		 */
-		domain_entry_inc(conn, node);
+		domain_nbentry_inc(conn, get_node_owner(node));
 		return ENOMEM;
 	}
 
 	if (write_node(conn, node, false)) {
 		int saved_errno = errno;
 
-		domain_entry_dec(conn, node);
+		domain_nbentry_dec(conn, get_node_owner(node));
 		node->perms = old_perms;
 		/* No failure possible as above. */
-		domain_entry_inc(conn, node);
+		domain_nbentry_inc(conn, get_node_owner(node));
 
 		errno = saved_errno;
 		return errno;
@@ -2392,7 +2397,7 @@ void setup_structure(bool live_update)
 		manual_node("/tool/xenstored", NULL);
 		manual_node("@releaseDomain", NULL);
 		manual_node("@introduceDomain", NULL);
-		domain_entry_fix(dom0_domid, 5, true);
+		domain_nbentry_fix(dom0_domid, 5, true);
 	}
 
 	check_store();
@@ -3400,7 +3405,7 @@ void read_state_node(const void *ctx, const void *state)
 	if (write_node_raw(NULL, &key, node, true))
 		barf("write node error restoring node");
 
-	if (domain_entry_inc(&conn, node))
+	if (domain_nbentry_inc(&conn, get_node_owner(node)))
 		barf("node accounting error restoring node");
 
 	talloc_free(node);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 3a2ab5d87e..edfe5809be 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -249,7 +249,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		domain->nbentry--;
 		node->perms.p[0].id = priv_domid;
 		node->acc.memory = 0;
-		domain_entry_inc(NULL, node);
+		domain_nbentry_inc(NULL, priv_domid);
 		if (write_node_raw(NULL, &key, node, true)) {
 			/* That's unfortunate. We only can try to continue. */
 			syslog(LOG_ERR,
@@ -561,7 +561,7 @@ int acc_fix_domains(struct list_head *head, bool update)
 	int cnt;
 
 	list_for_each_entry(cd, head, list) {
-		cnt = domain_entry_fix(cd->domid, cd->nbentry, update);
+		cnt = domain_nbentry_fix(cd->domid, cd->nbentry, update);
 		if (!update) {
 			if (cnt >= quota_nb_entry_per_domain)
 				return ENOSPC;
@@ -606,8 +606,8 @@ static struct changed_domain *acc_get_changed_domain(const void *ctx,
 	return cd;
 }
 
-int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
-			unsigned int domid)
+static int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
+			       unsigned int domid)
 {
 	struct changed_domain *cd;
 
@@ -991,30 +991,6 @@ void domain_deinit(void)
 		xenevtchn_unbind(xce_handle, virq_port);
 }
 
-int domain_entry_inc(struct connection *conn, struct node *node)
-{
-	struct domain *d;
-	unsigned int domid;
-
-	if (!node->perms.p)
-		return 0;
-
-	domid = node->perms.p[0].id;
-
-	if (conn && conn->transaction) {
-		transaction_entry_inc(conn->transaction, domid);
-	} else {
-		d = (conn && domid == conn->id && conn->domain) ? conn->domain
-		    : find_or_alloc_existing_domain(domid);
-		if (d)
-			d->nbentry++;
-		else
-			return ENOMEM;
-	}
-
-	return 0;
-}
-
 /*
  * Check whether a domain was created before or after a specific generation
  * count (used for testing whether a node permission is older than a domain).
@@ -1082,62 +1058,76 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
-void domain_entry_dec(struct connection *conn, struct node *node)
+static int domain_nbentry_add(struct connection *conn, unsigned int domid,
+			      int add, bool no_dom_alloc)
 {
 	struct domain *d;
-	unsigned int domid;
-
-	if (!node->perms.p)
-		return;
+	struct list_head *head;
+	int ret;
 
-	domid = node->perms.p ? node->perms.p[0].id : conn->id;
+	if (conn && domid == conn->id && conn->domain)
+		d = conn->domain;
+	else if (no_dom_alloc) {
+		d = find_domain_struct(domid);
+		if (!d) {
+			errno = ENOENT;
+			corrupt(conn, "Missing domain %u\n", domid);
+			return -1;
+		}
+	} else {
+		d = find_or_alloc_existing_domain(domid);
+		if (!d) {
+			errno = ENOMEM;
+			return -1;
+		}
+	}
 
 	if (conn && conn->transaction) {
-		transaction_entry_dec(conn->transaction, domid);
-	} else {
-		d = (conn && domid == conn->id && conn->domain) ? conn->domain
-		    : find_domain_struct(domid);
-		if (d) {
-			d->nbentry--;
-		} else {
-			errno = ENOENT;
-			corrupt(conn,
-				"Node \"%s\" owned by non-existing domain %u\n",
-				node->name, domid);
+		head = transaction_get_changed_domains(conn->transaction);
+		ret = acc_add_dom_nbentry(conn->transaction, head, add, domid);
+		if (errno) {
+			fail_transaction(conn->transaction);
+			return -1;
 		}
+		/*
+		 * In a transaction when a node is being added/removed AND the
+		 * same node has been added/removed outside the transaction in
+		 * parallel, the resulting number of nodes will be wrong. This
+		 * is no problem, as the transaction will fail due to the
+		 * resulting conflict.
+		 * In the node remove case the resulting number can be even
+		 * negative, which should be avoided.
+		 */
+		return max(d->nbentry + ret, 0);
 	}
+
+	d->nbentry += add;
+
+	return d->nbentry;
 }
 
-int domain_entry_fix(unsigned int domid, int num, bool update)
+int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
-	struct domain *d;
-	int cnt;
+	return (domain_nbentry_add(conn, domid, 1, false) < 0) ? errno : 0;
+}
 
-	if (update) {
-		d = find_domain_struct(domid);
-		assert(d);
-	} else {
-		/*
-		 * We are called first with update == false in order to catch
-		 * any error. So do a possible allocation and check for error
-		 * only in this case, as in the case of update == true nothing
-		 * can go wrong anymore as the allocation already happened.
-		 */
-		d = find_or_alloc_existing_domain(domid);
-		if (!d)
-			return -1;
-	}
+int domain_nbentry_dec(struct connection *conn, unsigned int domid)
+{
+	return (domain_nbentry_add(conn, domid, -1, true) < 0) ? errno : 0;
+}
 
-	cnt = d->nbentry + num;
-	assert(cnt >= 0);
+int domain_nbentry_fix(unsigned int domid, int num, bool update)
+{
+	int ret;
 
-	if (update)
-		d->nbentry = cnt;
+	ret = domain_nbentry_add(NULL, domid, update ? num : 0, update);
+	if (ret < 0 || update)
+		return ret;
 
-	return domid_is_unprivileged(domid) ? cnt : 0;
+	return domid_is_unprivileged(domid) ? ret + num : 0;
 }
 
-int domain_entry(struct connection *conn)
+int domain_nbentry(struct connection *conn)
 {
 	return (domain_is_unprivileged(conn))
 		? conn->domain->nbentry
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 9e20d2b17d..1e402f2609 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -66,10 +66,10 @@ int domain_adjust_node_perms(struct node *node);
 int domain_alloc_permrefs(struct node_perms *perms);
 
 /* Quota manipulation */
-int domain_entry_inc(struct connection *conn, struct node *);
-void domain_entry_dec(struct connection *conn, struct node *);
-int domain_entry_fix(unsigned int domid, int num, bool update);
-int domain_entry(struct connection *conn);
+int domain_nbentry_inc(struct connection *conn, unsigned int domid);
+int domain_nbentry_dec(struct connection *conn, unsigned int domid);
+int domain_nbentry_fix(unsigned int domid, int num, bool update);
+int domain_nbentry(struct connection *conn);
 int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
 
 /*
@@ -99,8 +99,6 @@ void domain_outstanding_domid_dec(unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 int acc_fix_domains(struct list_head *head, bool update);
-int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
-			unsigned int domid);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index c009c67989..82e5e66c18 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -548,20 +548,9 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-void transaction_entry_inc(struct transaction *trans, unsigned int domid)
+struct list_head *transaction_get_changed_domains(struct transaction *trans)
 {
-	if (!acc_add_dom_nbentry(trans, &trans->changed_domains, 1, domid)) {
-		/* Let the transaction fail. */
-		trans->fail = true;
-	}
-}
-
-void transaction_entry_dec(struct transaction *trans, unsigned int domid)
-{
-	if (!acc_add_dom_nbentry(trans, &trans->changed_domains, -1, domid)) {
-		/* Let the transaction fail. */
-		trans->fail = true;
-	}
+	return &trans->changed_domains;
 }
 
 void fail_transaction(struct transaction *trans)
diff --git a/tools/xenstore/xenstored_transaction.h b/tools/xenstore/xenstored_transaction.h
index 3417303f94..b6f8cb7d0a 100644
--- a/tools/xenstore/xenstored_transaction.h
+++ b/tools/xenstore/xenstored_transaction.h
@@ -36,10 +36,6 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 struct transaction *transaction_lookup(struct connection *conn, uint32_t id);
 
-/* inc/dec entry number local to trans while changing a node */
-void transaction_entry_inc(struct transaction *trans, unsigned int domid);
-void transaction_entry_dec(struct transaction *trans, unsigned int domid);
-
 /* This node was accessed. */
 int __must_check access_node(struct connection *conn, struct node *node,
                              enum node_access_type type, TDB_DATA *key);
@@ -54,6 +50,9 @@ void transaction_prepend(struct connection *conn, const char *name,
 /* Mark the transaction as failed. This will prevent it to be committed. */
 void fail_transaction(struct transaction *trans);
 
+/* Get the list head of the changed domains. */
+struct list_head *transaction_get_changed_domains(struct transaction *trans);
+
 void conn_delete_all_transactions(struct connection *conn);
 int check_transactions(struct hashtable *hash);
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:12:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:12:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479157.742841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi17-0002iZ-RB; Tue, 17 Jan 2023 09:12:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479157.742841; Tue, 17 Jan 2023 09:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi17-0002iS-NB; Tue, 17 Jan 2023 09:12:29 +0000
Received: by outflank-mailman (input) for mailman id 479157;
 Tue, 17 Jan 2023 09:12:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi17-000725-CK
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:29 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10b7f7da-9647-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:12:28 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 79FA7683D9;
 Tue, 17 Jan 2023 09:12:28 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 493381390C;
 Tue, 17 Jan 2023 09:12:28 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id B+WyEHxmxmNvcAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10b7f7da-9647-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946748; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qxN+8k08vTt0nMXkeJr8ru9QvmOMBTZNPN5l16ahebE=;
	b=P/WKj3ov4+KtXochZB1lePMqmyq2VdvkXDHVrF7Ukb5Z8pdCVPkL5S78vRf479Q1V3/jPG
	nl2xTTR9/ookdhP4zN+OVj7VEYqdOlZWlAK2Dy6nz2vj+dHTdCQ85pP+cbWwd1whyOTDJE
	mYfSGxwCj6LxETmyoKoyBLaG7m9FiS0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 11/17] tools/xenstore: let chk_domain_generation() return a bool
Date: Tue, 17 Jan 2023 10:11:18 +0100
Message-Id: <20230117091124.22170-12-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of returning 0 or 1, let chk_domain_generation() return a
boolean value.

Simplify the only caller by removing the ret variable.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_domain.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 55a93eccdb..1723c9804a 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -985,20 +985,20 @@ void domain_deinit(void)
  * count (used for testing whether a node permission is older than a domain).
  *
  * Return values:
- *  0: domain has higher generation count (it is younger than a node with the
- *     given count), or domain isn't existing any longer
- *  1: domain is older than the node
+ *  false: domain has higher generation count (it is younger than a node with
+ *     the given count), or domain isn't existing any longer
+ *  true: domain is older than the node
  */
-static int chk_domain_generation(unsigned int domid, uint64_t gen)
+static bool chk_domain_generation(unsigned int domid, uint64_t gen)
 {
 	struct domain *d;
 
 	if (!xc_handle && domid == dom0_domid)
-		return 1;
+		return true;
 
 	d = find_domain_struct(domid);
 
-	return (d && d->generation <= gen) ? 1 : 0;
+	return d && d->generation <= gen;
 }
 
 /*
@@ -1033,14 +1033,12 @@ int domain_alloc_permrefs(struct node_perms *perms)
 int domain_adjust_node_perms(struct node *node)
 {
 	unsigned int i;
-	int ret;
 
 	for (i = 1; i < node->perms.num; i++) {
 		if (node->perms.p[i].perms & XS_PERM_IGNORE)
 			continue;
-		ret = chk_domain_generation(node->perms.p[i].id,
-					    node->generation);
-		if (!ret)
+		if (!chk_domain_generation(node->perms.p[i].id,
+					   node->generation))
 			node->perms.p[i].perms |= XS_PERM_IGNORE;
 	}
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:12:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:12:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479162.742852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi1P-0003K0-4l; Tue, 17 Jan 2023 09:12:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479162.742852; Tue, 17 Jan 2023 09:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi1P-0003Jr-0V; Tue, 17 Jan 2023 09:12:47 +0000
Received: by outflank-mailman (input) for mailman id 479162;
 Tue, 17 Jan 2023 09:12:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi1O-000725-62
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:46 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1abc3174-9647-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:12:45 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 492A6683D9;
 Tue, 17 Jan 2023 09:12:45 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 172B31390C;
 Tue, 17 Jan 2023 09:12:45 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id cThQBI1mxmOVcAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1abc3174-9647-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946765; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Qhi92MnwPX7iSr8FlA4GN57x5a7k0yJyoHyO3b0J4k8=;
	b=ssmAxIkTO8DrjjidJyOflboV3TnbOo/57/H5yXzuG+jkF/NF0wzbLP/UCglJnQg+USYdB4
	C7IYOp23Yi1Ri5Uskc6AzPWlxYqkmEneCYRmqe2JGtaDXJZaTEv0NtnWxTa+6YGGFAGBfE
	UklUgj8W8afevKKjnpjvN3NNkG6e9F8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 14/17] tools/xenstore: make log macro globally available
Date: Tue, 17 Jan 2023 10:11:21 +0100
Message-Id: <20230117091124.22170-15-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the definition of the log() macro to xenstored_core.h in order
to make it usable from other source files, too.

While at it, preserve errno from being modified by the macro.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c | 14 --------------
 tools/xenstore/xenstored_core.h | 15 +++++++++++++++
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index f27209dec8..d55ce632d8 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -88,20 +88,6 @@ TDB_CONTEXT *tdb_ctx = NULL;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
 
-#define log(...)							\
-	do {								\
-		char *s = talloc_asprintf(NULL, __VA_ARGS__);		\
-		if (s) {						\
-			trace("%s\n", s);				\
-			syslog(LOG_ERR, "%s\n",  s);			\
-			talloc_free(s);					\
-		} else {						\
-			trace("talloc failure during logging\n");	\
-			syslog(LOG_ERR, "talloc failure during logging\n"); \
-		}							\
-	} while (0)
-
-
 int quota_nb_entry_per_domain = 1000;
 int quota_nb_watch_per_domain = 128;
 int quota_max_entry_size = 2048; /* 2K */
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3c4e27d0dd..3b96ecd018 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -267,6 +267,21 @@ void trace(const char *fmt, ...) __attribute__ ((format (printf, 1, 2)));
 void reopen_log(void);
 void close_log(void);
 
+#define log(...)							\
+	do {								\
+		int _saved_errno = errno;				\
+		char *s = talloc_asprintf(NULL, __VA_ARGS__);		\
+		if (s) {						\
+			trace("%s\n", s);				\
+			syslog(LOG_ERR, "%s\n",	s);			\
+			talloc_free(s);					\
+		} else {						\
+			trace("talloc failure during logging\n");	\
+			syslog(LOG_ERR, "talloc failure during logging\n"); \
+		}							\
+		errno = _saved_errno;					\
+	} while (0)
+
 extern int orig_argc;
 extern char **orig_argv;
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:15:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:15:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479172.742862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3l-0004Di-Fl; Tue, 17 Jan 2023 09:15:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479172.742862; Tue, 17 Jan 2023 09:15:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3l-0004Db-DA; Tue, 17 Jan 2023 09:15:13 +0000
Received: by outflank-mailman (input) for mailman id 479172;
 Tue, 17 Jan 2023 09:15:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi1Z-000725-Um
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:58 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 218fd624-9647-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:12:56 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B3CF2683D9;
 Tue, 17 Jan 2023 09:12:56 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8707B1390C;
 Tue, 17 Jan 2023 09:12:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 0JqeH5hmxmOrcAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 218fd624-9647-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946776; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JzTrJ5tQk513x3Ydn7diRgOfwUfJxvyb7EQt/DCJ5Fk=;
	b=XPPC+AyCftO64LZrJ2ieTvFPYdpTJTuwQjXtINIH0maH3Bp4b0qQ2Q3npVgYvByq1wzMH7
	oRvo+mj85uCSH2Ed2jBaKmlF0heRQZMWox2/crDq7dZeR1qeDZhqH2dE5U9eX8GcSBMCMW
	a/JNsrwQZnc4y5ngJdBd0B7fhdDQu04=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 16/17] tools/xenstore: let check_store() check the accounting data
Date: Tue, 17 Jan 2023 10:11:23 +0100
Message-Id: <20230117091124.22170-17-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today check_store() only verifies the correctness of the node tree.

Add verification of the accounting data (number of nodes) and correct
the data if it is wrong.

Do the initial check_store() call only after Xenstore entries of a
live update have been read.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   | 62 ++++++++++++++++------
 tools/xenstore/xenstored_domain.c | 86 +++++++++++++++++++++++++++++++
 tools/xenstore/xenstored_domain.h |  4 ++
 3 files changed, 137 insertions(+), 15 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 3099077a86..e201f14053 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2389,8 +2389,6 @@ void setup_structure(bool live_update)
 		manual_node("@introduceDomain", NULL);
 		domain_nbentry_fix(dom0_domid, 5, true);
 	}
-
-	check_store();
 }
 
 static unsigned int hash_from_key_fn(void *k)
@@ -2433,20 +2431,28 @@ int remember_string(struct hashtable *hash, const char *str)
  * As we go, we record each node in the given reachable hashtable.  These
  * entries will be used later in clean_store.
  */
+
+struct check_store_data {
+	struct hashtable *reachable;
+	struct hashtable *domains;
+};
+
 static int check_store_step(const void *ctx, struct connection *conn,
 			    struct node *node, void *arg)
 {
-	struct hashtable *reachable = arg;
+	struct check_store_data *data = arg;
 
-	if (hashtable_search(reachable, (void *)node->name)) {
+	if (hashtable_search(data->reachable, (void *)node->name)) {
 		log("check_store: '%s' is duplicated!", node->name);
 		return recovery ? WALK_TREE_RM_CHILDENTRY
 				: WALK_TREE_SKIP_CHILDREN;
 	}
 
-	if (!remember_string(reachable, node->name))
+	if (!remember_string(data->reachable, node->name))
 		return WALK_TREE_ERROR_STOP;
 
+	domain_check_acc_add(node, data->domains);
+
 	return WALK_TREE_OK;
 }
 
@@ -2496,37 +2502,61 @@ static int clean_store_(TDB_CONTEXT *tdb, TDB_DATA key, TDB_DATA val,
  * Given the list of reachable nodes, iterate over the whole store, and
  * remove any that were not reached.
  */
-static void clean_store(struct hashtable *reachable)
+static void clean_store(struct check_store_data *data)
 {
-	tdb_traverse(tdb_ctx, &clean_store_, reachable);
+	tdb_traverse(tdb_ctx, &clean_store_, data->reachable);
+	domain_check_acc(data->domains);
 }
 
+int check_store_path(const char *name, struct check_store_data *data)
+{
+	struct node *node;
+
+	node = read_node(NULL, NULL, name);
+	if (!node) {
+		log("check_store: error %d reading special node '%s'", errno,
+		    name);
+		return errno;
+	}
+
+	return check_store_step(NULL, NULL, node, data);
+}
 
 void check_store(void)
 {
-	struct hashtable *reachable;
 	struct walk_funcs walkfuncs = {
 		.enter = check_store_step,
 		.enoent = check_store_enoent,
 	};
+	struct check_store_data data;
 
 	/* Don't free values (they are all void *1) */
-	reachable = create_hashtable(NULL, 16, hash_from_key_fn, keys_equal_fn,
-				     HASHTABLE_FREE_KEY);
-	if (!reachable) {
+	data.reachable = create_hashtable(NULL, 16, hash_from_key_fn,
+					  keys_equal_fn, HASHTABLE_FREE_KEY);
+	if (!data.reachable) {
 		log("check_store: ENOMEM");
 		return;
 	}
 
+	data.domains = domain_check_acc_init();
+	if (!data.domains) {
+		log("check_store: ENOMEM");
+		goto out_hash;
+	}
+
 	log("Checking store ...");
-	if (walk_node_tree(NULL, NULL, "/", &walkfuncs, reachable)) {
+	if (walk_node_tree(NULL, NULL, "/", &walkfuncs, &data)) {
 		if (errno == ENOMEM)
 			log("check_store: ENOMEM");
-	} else if (!check_transactions(reachable))
-		clean_store(reachable);
+	} else if (!check_store_path("@introduceDomain", &data) &&
+		   !check_store_path("@releaseDomain", &data) &&
+		   !check_transactions(data.reachable))
+		clean_store(&data);
 	log("Checking store complete.");
 
-	hashtable_destroy(reachable);
+	hashtable_destroy(data.domains);
+ out_hash:
+	hashtable_destroy(data.reachable);
 }
 
 
@@ -2925,6 +2955,8 @@ int main(int argc, char *argv[])
 		lu_read_state();
 #endif
 
+	check_store();
+
 	/* Get ready to listen to the tools. */
 	initialize_fds(&sock_pollfd_idx, &timeout);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index cb1f09c297..a3c4fb7f93 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1573,6 +1573,92 @@ void read_state_connection(const void *ctx, const void *state)
 	read_state_buffered_data(ctx, conn, sc);
 }
 
+struct domain_acc {
+	unsigned int domid;
+	int nodes;
+};
+
+static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
+{
+	struct hashtable *domains = arg;
+	struct domain *d = v;
+	struct domain_acc *dom;
+
+	dom = talloc_zero(NULL, struct domain_acc);
+	if (!dom)
+		return -1;
+
+	dom->domid = d->domid;
+	/*
+	 * Set the initial value to the negative one of the current domain.
+	 * If everything is correct incrementing the value for each node will
+	 * result in dom->nodes being 0 at the end.
+	 */
+	dom->nodes = -d->nbentry;
+
+	if (!hashtable_insert(domains, &dom->domid, dom)) {
+		talloc_free(dom);
+		return -1;
+	}
+
+	return 0;
+}
+
+struct hashtable *domain_check_acc_init(void)
+{
+	struct hashtable *domains;
+
+	domains = create_hashtable(NULL, 8, domhash_fn, domeq_fn,
+				   HASHTABLE_FREE_VALUE);
+	if (!domains)
+		return NULL;
+
+	if (hashtable_iterate(domhash, domain_check_acc_init_sub, domains)) {
+		hashtable_destroy(domains);
+		return NULL;
+	}
+
+	return domains;
+}
+
+void domain_check_acc_add(const struct node *node, struct hashtable *domains)
+{
+	struct domain_acc *dom;
+	unsigned int domid;
+
+	domid = node->perms.p[0].id;
+	dom = hashtable_search(domains, &domid);
+	if (!dom)
+		log("Node %s owned by unknown domain %u", node->name, domid);
+	else
+		dom->nodes++;
+}
+
+static int domain_check_acc_sub(const void *k, void *v, void *arg)
+{
+	struct domain_acc *dom = v;
+	struct domain *d;
+
+	if (!dom->nodes)
+		return 0;
+
+	log("Correct accounting data for domain %u: nodes are %d off",
+	    dom->domid, dom->nodes);
+
+	d = find_domain_struct(dom->domid);
+	if (!d)
+		return 0;
+
+	d->nbentry += dom->nodes;
+
+	return 0;
+}
+
+void domain_check_acc(struct hashtable *domains)
+{
+	hashtable_iterate(domains, domain_check_acc_sub, NULL);
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 22996e2576..dc4660861e 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -129,4 +129,8 @@ const char *dump_state_connections(FILE *fp);
 
 void read_state_connection(const void *ctx, const void *state);
 
+struct hashtable *domain_check_acc_init(void);
+void domain_check_acc_add(const struct node *node, struct hashtable *domains);
+void domain_check_acc(struct hashtable *domains);
+
 #endif /* _XENSTORED_DOMAIN_H */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:15:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:15:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479175.742868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3l-0004G5-QO; Tue, 17 Jan 2023 09:15:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479175.742868; Tue, 17 Jan 2023 09:15:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3l-0004FM-KR; Tue, 17 Jan 2023 09:15:13 +0000
Received: by outflank-mailman (input) for mailman id 479175;
 Tue, 17 Jan 2023 09:15:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi0z-0007bs-AY
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:21 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a0e5259-9647-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 10:12:17 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4A47B3821E;
 Tue, 17 Jan 2023 09:12:17 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1CFA91390C;
 Tue, 17 Jan 2023 09:12:17 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id P+7HBXFmxmNRcAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a0e5259-9647-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946737; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HhYKz+Oi7NXGdn9eIooP5S2YyA3D/bV21hS8CpK2wqk=;
	b=ALO1TvRobgMJ1AorCZaEgQY0wUZ9fDndL3T46npH/zyLE7xIWsfMfSAI7C1J9oSg1upGnO
	YsOAXkz+ouc5mFTkW2ljb14Pz04Hou6Cgikuyn2qUmS3rwB+HUo9u+GdH6PUvtXMEmVZzw
	JYHY0y7DYI/BPxaF6k8Mra5td28tIjI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 09/17] tools/xenstore: replace literal domid 0 with dom0_domid
Date: Tue, 17 Jan 2023 10:11:16 +0100
Message-Id: <20230117091124.22170-10-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are some places left where dom0 is associated with domid 0.

Use dom0_domid instead.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   | 5 +++--
 tools/xenstore/xenstored_domain.c | 8 ++++----
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 561fb121d3..3336e65c97 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2316,9 +2316,10 @@ static void accept_connection(int sock)
 		return;
 
 	conn = new_connection(&socket_funcs);
-	if (conn)
+	if (conn) {
 		conn->fd = fd;
-	else
+		conn->id = dom0_domid;
+	} else
 		close(fd);
 }
 #endif
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 07d91eb50c..b7777b2afd 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -326,7 +326,7 @@ static int destroy_domain(void *_domain)
 	if (domain->interface) {
 		/* Domain 0 was mapped by dom0_init, so it must be unmapped
 		   using munmap() and not the grant unmap call. */
-		if (domain->domid == 0)
+		if (domain->domid == dom0_domid)
 			unmap_xenbus(domain->interface);
 		else
 			unmap_interface(domain->interface);
@@ -410,7 +410,7 @@ void handle_event(void)
 
 static bool domid_is_unprivileged(unsigned int domid)
 {
-	return domid != 0 && domid != priv_domid;
+	return domid != dom0_domid && domid != priv_domid;
 }
 
 bool domain_is_unprivileged(struct connection *conn)
@@ -798,7 +798,7 @@ static struct domain *onearg_domain(struct connection *conn,
 		return ERR_PTR(-EINVAL);
 
 	domid = atoi(domid_str);
-	if (!domid)
+	if (domid == dom0_domid)
 		return ERR_PTR(-EINVAL);
 
 	return find_connected_domain(domid);
@@ -1004,7 +1004,7 @@ static int chk_domain_generation(unsigned int domid, uint64_t gen)
 {
 	struct domain *d;
 
-	if (!xc_handle && domid == 0)
+	if (!xc_handle && domid == dom0_domid)
 		return 1;
 
 	d = find_domain_struct(domid);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:15:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:15:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479178.742875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3m-0004Nn-4x; Tue, 17 Jan 2023 09:15:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479178.742875; Tue, 17 Jan 2023 09:15:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3l-0004N3-Uq; Tue, 17 Jan 2023 09:15:13 +0000
Received: by outflank-mailman (input) for mailman id 479178;
 Tue, 17 Jan 2023 09:15:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi12-0007bs-UW
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:25 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0d64c23f-9647-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 10:12:23 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id DC45F38222;
 Tue, 17 Jan 2023 09:12:22 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AEEB21390C;
 Tue, 17 Jan 2023 09:12:22 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id U1Z4KXZmxmNmcAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d64c23f-9647-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946742; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yXucUqowhnBeO9Jn8o0sdHo0YPTusLdpuVGmf+Cjy/o=;
	b=vPjfHsTvKEqoIJzRAJxJriG6q+gyHoY+FpIIl7mKaBHRGmcBmtQSiuW1j4FIahaVDp2Dpr
	PlKDky6dYGZyu9dgd4cOi7rOpeCCtD+RCLVAchXHIjwPoTKf4qhw9R5DHf46LoCJBGoPYj
	N6ivMdA3/11lyVBcoJWwulNI9RfLWks=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 10/17] tools/xenstore: make domain_is_unprivileged() an inline function
Date: Tue, 17 Jan 2023 10:11:17 +0100
Message-Id: <20230117091124.22170-11-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

clang 14 complains about a potential NULL dereference for constructs like:

  domain_is_unprivileged(conn) ? conn->in : NULL

as it can't know that domain_is_unprivileged(conn) will return false
if conn is NULL.

Fix that by making domain_is_unprivileged() an inline function (and,
related to that, domid_is_unprivileged(), too).

To avoid having to make struct domain public, use conn->id instead
of conn->domain->domid for the test.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.h   | 10 ++++++++++
 tools/xenstore/xenstored_domain.c | 11 -----------
 tools/xenstore/xenstored_domain.h |  2 --
 3 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 37006d508d..3c4e27d0dd 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -297,6 +297,16 @@ void unmap_xenbus(void *interface);
 
 static inline int xenbus_master_domid(void) { return dom0_domid; }
 
+static inline bool domid_is_unprivileged(unsigned int domid)
+{
+	return domid != dom0_domid && domid != priv_domid;
+}
+
+static inline bool domain_is_unprivileged(const struct connection *conn)
+{
+	return conn && domid_is_unprivileged(conn->id);
+}
+
 /* Return the event channel used by xenbus. */
 evtchn_port_t xenbus_evtchn(void);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index b7777b2afd..55a93eccdb 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -408,17 +408,6 @@ void handle_event(void)
 		barf_perror("Failed to write to event fd");
 }
 
-static bool domid_is_unprivileged(unsigned int domid)
-{
-	return domid != dom0_domid && domid != priv_domid;
-}
-
-bool domain_is_unprivileged(struct connection *conn)
-{
-	return conn && conn->domain &&
-	       domid_is_unprivileged(conn->domain->domid);
-}
-
 static char *talloc_domain_path(const void *context, unsigned int domid)
 {
 	return talloc_asprintf(context, "/local/domain/%u", domid);
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 1e402f2609..22996e2576 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -59,8 +59,6 @@ void ignore_connection(struct connection *conn, unsigned int err);
 /* Returns the implicit path of a connection (only domains have this) */
 const char *get_implicit_path(const struct connection *conn);
 
-bool domain_is_unprivileged(struct connection *conn);
-
 /* Remove node permissions for no longer existing domains. */
 int domain_adjust_node_perms(struct node *node);
 int domain_alloc_permrefs(struct node_perms *perms);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:15:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:15:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479186.742896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3o-000518-Lo; Tue, 17 Jan 2023 09:15:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479186.742896; Tue, 17 Jan 2023 09:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3o-00050i-HE; Tue, 17 Jan 2023 09:15:16 +0000
Received: by outflank-mailman (input) for mailman id 479186;
 Tue, 17 Jan 2023 09:15:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi1f-000725-Kx
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:13:03 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 24ded8bb-9647-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:13:02 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 4BC90683DD;
 Tue, 17 Jan 2023 09:13:02 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 214A41390C;
 Tue, 17 Jan 2023 09:13:02 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id oZ3bBp5mxmO0cAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:13:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24ded8bb-9647-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946782; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hbeYM9pX9rme/MidcA7zKLWfF87qgsmjrN7bSznT5S0=;
	b=kXgUN8Ck+aE1WJGaEnno2laahxi4dCDB/JMqBUyD08sCYdIhcJsWaxnegCAVEbp7CTTRYS
	IURXKnaDnuwt6qALRzIeNJpq6z2NexoDgITvx2wrtOfrdw+lut8EFrDfE+fQmJMKFT6i92
	nBzhs9fT9mkSR2AAoazrVHoiqnoXuxw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 17/17] tools/xenstore: make output of "xenstore-control help" more pretty
Date: Tue, 17 Jan 2023 10:11:24 +0100
Message-Id: <20230117091124.22170-18-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Separating the command from its options with a tab in the output of
"xenstore-control help" results in a poorly aligned list.

Use a fixed-width field for the command name instead.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 tools/xenstore/xenstored_control.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 000b2bb8c7..cbd62556c3 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -996,7 +996,7 @@ static int do_control_help(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 	for (cmd = 0; cmd < ARRAY_SIZE(cmds); cmd++) {
-		resp = talloc_asprintf_append(resp, "%s\t%s\n",
+		resp = talloc_asprintf_append(resp, "%-15s %s\n",
 					      cmds[cmd].cmd, cmds[cmd].pars);
 		if (!resp)
 			return ENOMEM;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:15:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:15:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479188.742907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3q-0005Ho-0Z; Tue, 17 Jan 2023 09:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479188.742907; Tue, 17 Jan 2023 09:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3p-0005HF-TH; Tue, 17 Jan 2023 09:15:17 +0000
Received: by outflank-mailman (input) for mailman id 479188;
 Tue, 17 Jan 2023 09:15:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi1K-0007bs-2o
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1765f81e-9647-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 10:12:39 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A9227683DE;
 Tue, 17 Jan 2023 09:12:39 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7A5D11390C;
 Tue, 17 Jan 2023 09:12:39 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id xnOAHIdmxmOCcAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1765f81e-9647-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946759; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=77xbwD9TcvGEPUJAkwJ1xcSALMKCwiZJKLxyV5lcyls=;
	b=Dis9zcsVRAnAKml091hEsYnEB8Uv6Zf9oF2QOkUE1nQ5GFR6iVrOA7busVh1OYBWCqhRKN
	Hin4CPV6UDFX4L/ULWqjcKxJFLcVxd+ru8N0uNKleetxwoT8pzaLDvPm/SD6uSiaykxype
	FxTXFdSoGoAOEy8ET/8MF46z1u6Da6U=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 13/17] tools/xenstore: switch hashtable to use the talloc framework
Date: Tue, 17 Jan 2023 10:11:20 +0100
Message-Id: <20230117091124.22170-14-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using malloc() and friends, let the hashtable implementation
use the talloc framework.

This is more consistent with the rest of xenstored, and it allows tracking
memory usage via "xenstore-control memreport".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- make key and value children of element (if flagged)
---
 tools/xenstore/hashtable.c        | 98 +++++++++++--------------------
 tools/xenstore/hashtable.h        |  3 +-
 tools/xenstore/xenstored_core.c   |  5 +-
 tools/xenstore/xenstored_domain.c |  2 +-
 4 files changed, 38 insertions(+), 70 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index 6738719e47..6c2a0efeea 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -6,6 +6,8 @@
 #include <string.h>
 #include <math.h>
 #include <stdint.h>
+#include <stdarg.h>
+#include "talloc.h"
 
 struct entry
 {
@@ -50,7 +52,7 @@ indexFor(unsigned int tablelength, unsigned int hashvalue) {
 
 /*****************************************************************************/
 struct hashtable *
-create_hashtable(unsigned int minsize,
+create_hashtable(const void *ctx, unsigned int minsize,
                  unsigned int (*hashf) (void*),
                  int (*eqf) (void*,void*),
                  unsigned int flags)
@@ -66,10 +68,10 @@ create_hashtable(unsigned int minsize,
         if (primes[pindex] > minsize) { size = primes[pindex]; break; }
     }
 
-    h = (struct hashtable *)calloc(1, sizeof(struct hashtable));
+    h = talloc_zero(ctx, struct hashtable);
     if (NULL == h)
         goto err0;
-    h->table = (struct entry **)calloc(size, sizeof(struct entry *));
+    h->table = talloc_zero_array(h, struct entry *, size);
     if (NULL == h->table)
         goto err1;
 
@@ -83,7 +85,7 @@ create_hashtable(unsigned int minsize,
     return h;
 
 err1:
-   free(h);
+   talloc_free(h);
 err0:
    return NULL;
 }
@@ -115,47 +117,32 @@ hashtable_expand(struct hashtable *h)
     if (h->primeindex == (prime_table_length - 1)) return 0;
     newsize = primes[++(h->primeindex)];
 
-    newtable = (struct entry **)calloc(newsize, sizeof(struct entry*));
-    if (NULL != newtable)
+    newtable = talloc_realloc(h, h->table, struct entry *, newsize);
+    if (!newtable)
     {
-        /* This algorithm is not 'stable'. ie. it reverses the list
-         * when it transfers entries between the tables */
-        for (i = 0; i < h->tablelength; i++) {
-            while (NULL != (e = h->table[i])) {
-                h->table[i] = e->next;
-                index = indexFor(newsize,e->h);
+        h->primeindex--;
+        return 0;
+    }
+
+    h->table = newtable;
+    memset(newtable + h->tablelength, 0,
+           (newsize - h->tablelength) * sizeof(*newtable));
+    for (i = 0; i < h->tablelength; i++) {
+        for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
+            index = indexFor(newsize, e->h);
+            if (index == i)
+            {
+                pE = &(e->next);
+            }
+            else
+            {
+                *pE = e->next;
                 e->next = newtable[index];
                 newtable[index] = e;
             }
         }
-        free(h->table);
-        h->table = newtable;
-    }
-    /* Plan B: realloc instead */
-    else 
-    {
-        newtable = (struct entry **)
-                   realloc(h->table, newsize * sizeof(struct entry *));
-        if (NULL == newtable) { (h->primeindex)--; return 0; }
-        h->table = newtable;
-        memset(newtable + h->tablelength, 0,
-               (newsize - h->tablelength) * sizeof(*newtable));
-        for (i = 0; i < h->tablelength; i++) {
-            for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
-                index = indexFor(newsize,e->h);
-                if (index == i)
-                {
-                    pE = &(e->next);
-                }
-                else
-                {
-                    *pE = e->next;
-                    e->next = newtable[index];
-                    newtable[index] = e;
-                }
-            }
-        }
     }
+
     h->tablelength = newsize;
     h->loadlimit   = (unsigned int)
         (((uint64_t)newsize * max_load_factor) / 100);
@@ -184,12 +171,16 @@ hashtable_insert(struct hashtable *h, void *k, void *v)
          * element may be ok. Next time we insert, we'll try expanding again.*/
         hashtable_expand(h);
     }
-    e = (struct entry *)calloc(1, sizeof(struct entry));
+    e = talloc_zero(h, struct entry);
     if (NULL == e) { --(h->entrycount); return 0; } /*oom*/
     e->h = hash(h,k);
     index = indexFor(h->tablelength,e->h);
     e->k = k;
+    if (h->flags & HASHTABLE_FREE_KEY)
+        talloc_steal(e, k);
     e->v = v;
+    if (h->flags & HASHTABLE_FREE_VALUE)
+        talloc_steal(e, v);
     e->next = h->table[index];
     h->table[index] = e;
     return -1;
@@ -235,11 +226,7 @@ hashtable_remove(struct hashtable *h, void *k)
         {
             *pE = e->next;
             h->entrycount--;
-            if (h->flags & HASHTABLE_FREE_KEY)
-                free(e->k);
-            if (h->flags & HASHTABLE_FREE_VALUE)
-                free(e->v);
-            free(e);
+            talloc_free(e);
             return 1;
         }
         pE = &(e->next);
@@ -279,26 +266,7 @@ hashtable_iterate(struct hashtable *h,
 void
 hashtable_destroy(struct hashtable *h)
 {
-    unsigned int i;
-    struct entry *e, *f;
-    struct entry **table = h->table;
-
-    for (i = 0; i < h->tablelength; i++)
-    {
-        e = table[i];
-        while (NULL != e)
-        {
-            f = e;
-            e = e->next;
-            if (h->flags & HASHTABLE_FREE_KEY)
-                free(f->k);
-            if (h->flags & HASHTABLE_FREE_VALUE)
-                free(f->v);
-            free(f);
-        }
-    }
-    free(h->table);
-    free(h);
+    talloc_free(h);
 }
 
 /*
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index d6feb1b038..5d39c4e3a0 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -9,6 +9,7 @@ struct hashtable;
  * create_hashtable
    
  * @name                    create_hashtable
+ * @param   ctx             talloc context to use for allocations
  * @param   minsize         minimum initial size of hashtable
  * @param   hashfunction    function for hashing keys
  * @param   key_eq_fn       function for determining key equality
@@ -22,7 +23,7 @@ struct hashtable;
 #define HASHTABLE_FREE_KEY   (1U << 1)
 
 struct hashtable *
-create_hashtable(unsigned int minsize,
+create_hashtable(const void *ctx, unsigned int minsize,
                  unsigned int (*hashfunction) (void*),
                  int (*key_eq_fn) (void*,void*),
                  unsigned int flags
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 3336e65c97..f27209dec8 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2424,11 +2424,10 @@ static int keys_equal_fn(void *key1, void *key2)
 
 int remember_string(struct hashtable *hash, const char *str)
 {
-	char *k = malloc(strlen(str) + 1);
+	char *k = talloc_strdup(NULL, str);
 
 	if (!k)
 		return 0;
-	strcpy(k, str);
 	return hashtable_insert(hash, k, (void *)1);
 }
 
@@ -2523,7 +2522,7 @@ void check_store(void)
 	};
 
 	/* Don't free values (they are all void *1) */
-	reachable = create_hashtable(16, hash_from_key_fn, keys_equal_fn,
+	reachable = create_hashtable(NULL, 16, hash_from_key_fn, keys_equal_fn,
 				     HASHTABLE_FREE_KEY);
 	if (!reachable) {
 		log("check_store: ENOMEM");
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 1723c9804a..3f20f03eb0 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -931,7 +931,7 @@ void domain_init(int evtfd)
 	int rc;
 
 	/* Start with a random rather low domain count for the hashtable. */
-	domhash = create_hashtable(8, domhash_fn, domeq_fn, 0);
+	domhash = create_hashtable(NULL, 8, domhash_fn, domeq_fn, 0);
 	if (!domhash)
 		barf_perror("Failed to allocate domain hashtable");
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:15:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:15:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479190.742910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3q-0005Mp-Dq; Tue, 17 Jan 2023 09:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479190.742910; Tue, 17 Jan 2023 09:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3q-0005MT-A1; Tue, 17 Jan 2023 09:15:18 +0000
Received: by outflank-mailman (input) for mailman id 479190;
 Tue, 17 Jan 2023 09:15:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi1E-0007bs-5S
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:36 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 140e5eab-9647-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 10:12:34 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 1402138222;
 Tue, 17 Jan 2023 09:12:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DC1FB1390C;
 Tue, 17 Jan 2023 09:12:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id cIJtNIFmxmN3cAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 140e5eab-9647-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946754; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bQqweqSwYnhQCXgkJ+GuoRH/j2JjRdq9d6DeU3NphTE=;
	b=pcb31h6yJWKzfEqmpCfESS8Z7el38BnEmqY9QrafiMwrSxiiVuPp1JFDx1K1yQO7bjbxqZ
	6adNigUvH2LXZiE+t8buwGaWDmfyFJtfD1dWIJzNc4Kyg62/fqMVoCc2Q+Pc1MakRNCyii
	AzFh3Ec4pz3TQLPLCpgomVEJXW8HVoQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 12/17] tools/xenstore: don't let hashtable_remove() return the removed value
Date: Tue, 17 Jan 2023 10:11:19 +0100
Message-Id: <20230117091124.22170-13-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The value returned by hashtable_remove() is not used anywhere in
Xenstore, and returning it conflicts with hashtables created with the
HASHTABLE_FREE_VALUE flag, as the value is freed during removal.

So just drop returning the value.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 tools/xenstore/hashtable.c | 10 +++++-----
 tools/xenstore/hashtable.h |  4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index 299549c51e..6738719e47 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -214,7 +214,7 @@ hashtable_search(struct hashtable *h, void *k)
 }
 
 /*****************************************************************************/
-void * /* returns value associated with key */
+int
 hashtable_remove(struct hashtable *h, void *k)
 {
     /* TODO: consider compacting the table when the load factor drops enough,
@@ -222,7 +222,6 @@ hashtable_remove(struct hashtable *h, void *k)
 
     struct entry *e;
     struct entry **pE;
-    void *v;
     unsigned int hashvalue, index;
 
     hashvalue = hash(h,k);
@@ -236,16 +235,17 @@ hashtable_remove(struct hashtable *h, void *k)
         {
             *pE = e->next;
             h->entrycount--;
-            v = e->v;
             if (h->flags & HASHTABLE_FREE_KEY)
                 free(e->k);
+            if (h->flags & HASHTABLE_FREE_VALUE)
+                free(e->v);
             free(e);
-            return v;
+            return 1;
         }
         pE = &(e->next);
         e = e->next;
     }
-    return NULL;
+    return 0;
 }
 
 /*****************************************************************************/
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index 6d65431f96..d6feb1b038 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -68,10 +68,10 @@ hashtable_search(struct hashtable *h, void *k);
  * @name        hashtable_remove
  * @param   h   the hashtable to remove the item from
  * @param   k   the key to search for  - does not claim ownership
- * @return      the value associated with the key, or NULL if none found
+ * @return      0 if element not found
  */
 
-void * /* returns value */
+int
 hashtable_remove(struct hashtable *h, void *k);
 
 /*****************************************************************************
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:15:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:15:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479192.742916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3q-0005R8-U4; Tue, 17 Jan 2023 09:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479192.742916; Tue, 17 Jan 2023 09:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHi3q-0005Oa-KL; Tue, 17 Jan 2023 09:15:18 +0000
Received: by outflank-mailman (input) for mailman id 479192;
 Tue, 17 Jan 2023 09:15:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHi1U-000725-90
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:12:52 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1e229a60-9647-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:12:51 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0064D38225;
 Tue, 17 Jan 2023 09:12:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B16DD1390C;
 Tue, 17 Jan 2023 09:12:50 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id HasWKpJmxmOdcAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:12:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e229a60-9647-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673946771; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=j/ffVP5VoVmPH0AcpkwhcaZe/NQTk5+HqyFL6Qt1ye4=;
	b=ugWkaeOZ4CtL7TkmcpuJcUEJlNRIveZz5zuZuz5W94yVmHYzlsrc+xN5F86k3Nn6mhbHOG
	P72vwH4m9w3ihBjdKu3qUZoC2dhReR2pGPpbAGb7LeesLUbS56qhzU4amEJDlC7FzjA1bZ
	+RPLJV1IT9CfAN8SIAjVcWL1DtzKvlg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 15/17] tools/xenstore: introduce trace classes
Date: Tue, 17 Jan 2023 10:11:22 +0100
Message-Id: <20230117091124.22170-16-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make the xenstored internal tracing configurable by adding classes
which can be switched on and off independently of each other.

Define the following classes:

- obj: Creation and deletion of interesting "objects" (watch,
  transaction, connection)
- io: incoming requests and outgoing responses
- wrl: write limiting

By default, "obj" and "io" are switched on.

Entries written via trace() will always be printed (if tracing is on
at all).

Add the capability to control the trace settings via the "log"
command and via the new "--trace-control" command line option.

Add a missing trace_create() call for creating a transaction.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- keep "log" and "logfile" command names (Julien Grall)
---
 docs/misc/xenstore.txt                 | 10 +++--
 tools/xenstore/xenstored_control.c     | 31 +++++++++++++--
 tools/xenstore/xenstored_core.c        | 55 ++++++++++++++++++++++++--
 tools/xenstore/xenstored_core.h        |  6 +++
 tools/xenstore/xenstored_domain.c      | 27 +++++++------
 tools/xenstore/xenstored_transaction.c |  1 +
 6 files changed, 106 insertions(+), 24 deletions(-)

diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
index 44428ae3a7..8887e7df88 100644
--- a/docs/misc/xenstore.txt
+++ b/docs/misc/xenstore.txt
@@ -409,10 +409,12 @@ CONTROL			<command>|[<parameters>|]
 		error string in case of failure. -s can return "BUSY" in case
 		of an active transaction, a retry of -s can be done in that
 		case.
-	log|on
-		turn xenstore logging on
-	log|off
-		turn xenstore logging off
+	log|[on|off|+<switch>|-<switch>]
+		without parameters: show possible log switches
+		on: turn xenstore logging on
+		off: turn xenstore logging off
+		+<switch>: activates log entries for <switch>,
+		-<switch>: deactivates log entries for <switch>
 	logfile|<file-name>
 		log to specified file
 	memreport|[<file-name>]
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 41e6992591..000b2bb8c7 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -182,6 +182,28 @@ static int do_control_check(const void *ctx, struct connection *conn,
 static int do_control_log(const void *ctx, struct connection *conn,
 			  char **vec, int num)
 {
+	int ret;
+
+	if (num == 0) {
+		char *resp = talloc_asprintf(ctx, "Log switch settings:\n");
+		unsigned int idx;
+		bool on;
+
+		if (!resp)
+			return ENOMEM;
+		for (idx = 0; trace_switches[idx]; idx++) {
+			on = trace_flags & (1u << idx);
+			resp = talloc_asprintf_append(resp, "%-8s: %s\n",
+						      trace_switches[idx],
+						      on ? "on" : "off");
+			if (!resp)
+				return ENOMEM;
+		}
+
+		send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
+		return 0;
+	}
+
 	if (num != 1)
 		return EINVAL;
 
@@ -189,8 +211,11 @@ static int do_control_log(const void *ctx, struct connection *conn,
 		reopen_log();
 	else if (!strcmp(vec[0], "off"))
 		close_log();
-	else
-		return EINVAL;
+	else {
+		ret = set_trace_switch(vec[0]);
+		if (ret)
+			return ret;
+	}
 
 	send_ack(conn, XS_CONTROL);
 	return 0;
@@ -923,7 +948,7 @@ static int do_control_help(const void *, struct connection *, char **, int);
 
 static struct cmd_s cmds[] = {
 	{ "check", do_control_check, "" },
-	{ "log", do_control_log, "on|off" },
+	{ "log", do_control_log, "[on|off|+<switch>|-<switch>]" },
 
 #ifndef NO_LIVE_UPDATE
 	/*
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index d55ce632d8..3099077a86 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -85,6 +85,7 @@ static int reopen_log_pipe[2];
 static int reopen_log_pipe0_pollfd_idx = -1;
 char *tracefile = NULL;
 TDB_CONTEXT *tdb_ctx = NULL;
+unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
 
@@ -139,13 +140,13 @@ static void trace_io(const struct connection *conn,
 	time_t now;
 	struct tm *tm;
 
-	if (tracefd < 0)
+	if (tracefd < 0 || !(trace_flags & TRACE_IO))
 		return;
 
 	now = time(NULL);
 	tm = localtime(&now);
 
-	trace("%s %p %04d%02d%02d %02d:%02d:%02d %s (",
+	trace("io: %s %p %04d%02d%02d %02d:%02d:%02d %s (",
 	      out ? "OUT" : "IN", conn,
 	      tm->tm_year + 1900, tm->tm_mon + 1,
 	      tm->tm_mday, tm->tm_hour, tm->tm_min, tm->tm_sec,
@@ -158,12 +159,14 @@ static void trace_io(const struct connection *conn,
 
 void trace_create(const void *data, const char *type)
 {
-	trace("CREATE %s %p\n", type, data);
+	if (trace_flags & TRACE_OBJ)
+		trace("obj: CREATE %s %p\n", type, data);
 }
 
 void trace_destroy(const void *data, const char *type)
 {
-	trace("DESTROY %s %p\n", type, data);
+	if (trace_flags & TRACE_OBJ)
+		trace("obj: DESTROY %s %p\n", type, data);
 }
 
 /**
@@ -2604,6 +2607,8 @@ static void usage(void)
 "  -N, --no-fork           to request that the daemon does not fork,\n"
 "  -P, --output-pid        to request that the pid of the daemon is output,\n"
 "  -T, --trace-file <file> giving the file for logging, and\n"
+"      --trace-control=+<switch> activate a specific <switch>\n"
+"      --trace-control=-<switch> deactivate a specific <switch>\n"
 "  -E, --entry-nb <nb>     limit the number of entries per domain,\n"
 "  -S, --entry-size <size> limit the size of entry per domain, and\n"
 "  -W, --watch-nb <nb>     limit the number of watches per domain,\n"
@@ -2647,6 +2652,7 @@ static struct option options[] = {
 	{ "output-pid", 0, NULL, 'P' },
 	{ "entry-size", 1, NULL, 'S' },
 	{ "trace-file", 1, NULL, 'T' },
+	{ "trace-control", 1, NULL, 1 },
 	{ "transaction", 1, NULL, 't' },
 	{ "perm-nb", 1, NULL, 'A' },
 	{ "path-max", 1, NULL, 'M' },
@@ -2721,6 +2727,43 @@ static void set_quota(const char *arg, bool soft)
 		barf("unknown quota \"%s\"\n", arg);
 }
 
+/* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
+const char *trace_switches[] = {
+	"obj", "io", "wrl",
+	NULL
+};
+
+int set_trace_switch(const char *arg)
+{
+	bool remove = (arg[0] == '-');
+	unsigned int idx;
+
+	switch (arg[0]) {
+	case '-':
+		remove = true;
+		break;
+	case '+':
+		remove = false;
+		break;
+	default:
+		return EINVAL;
+	}
+
+	arg++;
+
+	for (idx = 0; trace_switches[idx]; idx++) {
+		if (!strcmp(arg, trace_switches[idx])) {
+			if (remove)
+				trace_flags &= ~(1u << idx);
+			else
+				trace_flags |= 1u << idx;
+			return 0;
+		}
+	}
+
+	return EINVAL;
+}
+
 int main(int argc, char *argv[])
 {
 	int opt;
@@ -2769,6 +2812,10 @@ int main(int argc, char *argv[])
 		case 'T':
 			tracefile = optarg;
 			break;
+		case 1:
+			if (set_trace_switch(optarg))
+				barf("Illegal trace switch \"%s\"\n", optarg);
+			break;
 		case 'I':
 			if (optarg && !strcmp(optarg, "off"))
 				tdb_flags = 0;
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3b96ecd018..c85b15515c 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -287,6 +287,12 @@ extern char **orig_argv;
 
 extern char *tracefile;
 extern int tracefd;
+extern unsigned int trace_flags;
+#define TRACE_OBJ	0x00000001
+#define TRACE_IO	0x00000002
+#define TRACE_WRL	0x00000004
+extern const char *trace_switches[];
+int set_trace_switch(const char *arg);
 
 extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 3f20f03eb0..cb1f09c297 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1260,6 +1260,12 @@ static long wrl_ndomains;
 static wrl_creditt wrl_reserve; /* [-wrl_config_newdoms_dburst, +_gburst ] */
 static time_t wrl_log_last_warning; /* 0: no previous warning */
 
+#define trace_wrl(...)				\
+do {						\
+	if (trace_flags & TRACE_WRL)		\
+		trace("wrl: " __VA_ARGS__);	\
+} while (0)
+
 void wrl_gettime_now(struct wrl_timestampt *now_wt)
 {
 	struct timespec now_ts;
@@ -1365,12 +1371,9 @@ void wrl_credit_update(struct domain *domain, struct wrl_timestampt now)
 
 	domain->wrl_timestamp = now;
 
-	trace("wrl: dom %4d %6ld  msec  %9ld credit   %9ld reserve"
-	      "  %9ld discard\n",
-	      domain->domid,
-	      msec,
-	      (long)domain->wrl_credit, (long)wrl_reserve,
-	      (long)surplus);
+	trace_wrl("dom %4d %6ld msec %9ld credit %9ld reserve %9ld discard\n",
+		  domain->domid, msec, (long)domain->wrl_credit,
+		  (long)wrl_reserve, (long)surplus);
 }
 
 void wrl_check_timeout(struct domain *domain,
@@ -1402,10 +1405,9 @@ void wrl_check_timeout(struct domain *domain,
 	if (*ptimeout==-1 || wakeup < *ptimeout)
 		*ptimeout = wakeup;
 
-	trace("wrl: domain %u credit=%ld (reserve=%ld) SLEEPING for %d\n",
-	      domain->domid,
-	      (long)domain->wrl_credit, (long)wrl_reserve,
-	      wakeup);
+	trace_wrl("domain %u credit=%ld (reserve=%ld) SLEEPING for %d\n",
+		  domain->domid, (long)domain->wrl_credit, (long)wrl_reserve,
+		  wakeup);
 }
 
 #define WRL_LOG(now, ...) \
@@ -1423,9 +1425,8 @@ void wrl_apply_debit_actual(struct domain *domain)
 	wrl_credit_update(domain, now);
 
 	domain->wrl_credit -= wrl_config_writecost;
-	trace("wrl: domain %u credit=%ld (reserve=%ld)\n",
-	      domain->domid,
-	      (long)domain->wrl_credit, (long)wrl_reserve);
+	trace_wrl("domain %u credit=%ld (reserve=%ld)\n", domain->domid,
+		  (long)domain->wrl_credit, (long)wrl_reserve);
 
 	if (domain->wrl_credit < 0) {
 		if (!domain->wrl_delay_logged) {
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 82e5e66c18..1aa9d3cb3d 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -475,6 +475,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (!trans)
 		return ENOMEM;
 
+	trace_create(trans, "transaction");
 	INIT_LIST_HEAD(&trans->accessed);
 	INIT_LIST_HEAD(&trans->changed_domains);
 	trans->conn = conn;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:15:20 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 08/17] tools/xenstore: don't allow creating too many nodes in a transaction
Date: Tue, 17 Jan 2023 10:11:15 +0100
Message-Id: <20230117091124.22170-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
References: <20230117091124.22170-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The accounting for the number of nodes of a domain in an active
transaction is not working correctly, as it allows creating an arbitrary
number of nodes. The transaction will eventually fail due to exceeding
the per-domain node quota, but before the transaction is closed an
unprivileged guest could cause Xenstore to use a lot of memory.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_domain.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index edfe5809be..07d91eb50c 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1129,9 +1129,8 @@ int domain_nbentry_fix(unsigned int domid, int num, bool update)
 
 int domain_nbentry(struct connection *conn)
 {
-	return (domain_is_unprivileged(conn))
-		? conn->domain->nbentry
-		: 0;
+	return domain_is_unprivileged(conn)
+	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:26:20 2023
Sender: Ingo Molnar <mingo.kernel.org@gmail.com>
Date: Tue, 17 Jan 2023 10:25:58 +0100
From: Ingo Molnar <mingo@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?iso-8859-1?Q?J=F6rg_R=F6del?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>, jroedel@suse.de
Subject: Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from
 secondary_startup_64()
Message-ID: <Y8ZppgQ3RyzcR8eJ@gmail.com>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.589522290@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230116143645.589522290@infradead.org>


* Peter Zijlstra <peterz@infradead.org> wrote:

> The boot trampolines from trampoline_64.S have code flow like:
> 
>   16bit BIOS			SEV-ES				64bit EFI
> 
>   trampoline_start()		sev_es_trampoline_start()	trampoline_start_64()
>     verify_cpu()			  |				|
>   switch_to_protected:    <---------------'				v
>        |							pa_trampoline_compat()
>        v								|
>   startup_32()		<-----------------------------------------------'
>        |
>        v
>   startup_64()
>        |
>        v
>   tr_start() := head_64.S:secondary_startup_64()

oh ... this nice flow chart should move into a prominent C comment I think, 
it's far too good to be forgotten in a Git commit changelog.

> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> touch the APs), there is already a verify_cpu() invocation.
> 
> Removing the verify_cpu() invocation from secondary_startup_64()
> renders the whole secondary_startup_64_no_verify() thing moot, so
> remove that too.
> 
> Cc: jroedel@suse.de
> Cc: hpa@zytor.com
> Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
> Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Ingo Molnar <mingo@kernel.org>

Thanks,

	Ingo


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:29:13 2023
Message-ID: <2a679d99-4ed4-4fe8-8aee-faee57b5007b@xen.org>
Date: Tue, 17 Jan 2023 09:29:08 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm: Harden setup_frametable_mappings
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230116144106.12544-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 16/01/2023 14:41, Michal Orzel wrote:
> The amount of supported physical memory depends on the frametable size
> and the number of struct page_info entries that can fit into it. Define
> a macro PAGE_INFO_SIZE to store the current size of the struct page_info
> (i.e. 56B for arm64 and 32B for arm32) and add a sanity check in
> setup_frametable_mappings to be notified whenever the size of the
> structure changes. Also panic if the calculated frametable_size
> exceeds the limit defined by the FRAMETABLE_SIZE macro.
> 
> Update the comments regarding the frametable in asm/config.h and take
> the opportunity to remove unused macro FRAMETABLE_VIRT_END on arm32.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
>   xen/arch/arm/include/asm/config.h |  5 ++---
>   xen/arch/arm/include/asm/mm.h     | 11 +++++++++++
>   xen/arch/arm/mm.c                 |  5 +++++
>   3 files changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 16213c8b677f..d8f99776986f 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -82,7 +82,7 @@
>    * ARM32 layout:
>    *   0  -  12M   <COMMON>
>    *
> - *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
> + *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>    *                    space
>    *
> @@ -95,7 +95,7 @@
>    *
>    *   1G -   2G   VMAP: ioremap and early_ioremap
>    *
> - *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
> + *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
>    *
>    * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>    *  Unused
> @@ -127,7 +127,6 @@
>   #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
>   #define FRAMETABLE_SIZE        MB(128-32)
>   #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
> -#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)

This is somewhat unrelated to the goal of the patch. We do clean-ups in 
the same patch, but they tend to be in the same or an already modified 
hunk (which is not the case here).

So I would prefer if this were split. That would make this patch a 
potential candidate for backport.

>   
>   #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
>   #define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 68adcac9fa8d..23dec574eb31 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -26,6 +26,17 @@
>    */
>   #define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
>   
> +/*
> + * The size of struct page_info impacts the number of entries that can fit
> + * into the frametable area and thus it affects the amount of physical memory
> + * we claim to support. Define PAGE_INFO_SIZE to be used for sanity checking.
> +*/
> +#ifdef CONFIG_ARM_64
> +#define PAGE_INFO_SIZE 56
> +#else
> +#define PAGE_INFO_SIZE 32
> +#endif
> +
>   struct page_info
>   {
>       /* Each frame can be threaded onto a doubly-linked list. */
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0fc6f2992dd1..a8c28fa5b768 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -676,6 +676,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>       const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
>       int rc;
>   
> +    BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
> +
> +    if ( frametable_size > FRAMETABLE_SIZE )
> +        panic("RAM size is too big to fit in a frametable area\n");

This is not correct. Depending on the PDX compression, the frametable 
may end up covering non-RAM as well. So I would write:

"The frametable cannot cover the physical region 0x%PRIpaddr - 
0x%PRIpaddr".

> +
>       frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
>       /* Round up to 2M or 32M boundary, as appropriate. */
>       frametable_size = ROUNDUP(frametable_size, mapping_size);

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:30:01 2023
Message-ID: <87107d8945c9f1513c305d115f24f488b87e088b.camel@gmail.com>
Subject: Re: [PATCH v4 1/4] xen/riscv: introduce asm/types.h header file
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Tue, 17 Jan 2023 11:29:56 +0200
In-Reply-To: <e00512a6-5d32-6dbf-4269-429532f8a852@suse.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
	 <2ce57f95f8445a4880e0992668a48ffe7c2f9732.1673877778.git.oleksii.kurochko@gmail.com>
	 <e00512a6-5d32-6dbf-4269-429532f8a852@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-16 at 15:59 +0100, Jan Beulich wrote:
> On 16.01.2023 15:39, Oleksii Kurochko wrote:
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes in V4:
> >     - Clean up types in <asm/types.h> and keep only the necessary
> >       ones. The following types were removed as they are defined in
> > <xen/types.h>:
> >       {__|}{u|s}{8|16|32|64}
> 
> For one you still typedef u32 and u64. And imo correctly so, until we
> get around to moving the definition of basic types into xen/types.h. Plus
> I can't see how things build for you: xen/types.h expects __{u,s}<N>
It builds because nothing uses <xen/types.h> yet, which is why I missed
them, but you are right: __{u,s}<N> should be brought back.
It looks like {__,}{u,s}{8,16,32} are the same for all architectures
available in Xen, so could I move them to <xen/types.h> instead of
keeping them in <asm/types.h>?
> to be defined in order to then derive {u,}int<N>_t from them.
> 
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/types.h
> > @@ -0,0 +1,43 @@
> > +#ifndef __RISCV_TYPES_H__
> > +#define __RISCV_TYPES_H__
> > +
> > +#ifndef __ASSEMBLY__
> > +
> > +#if defined(CONFIG_RISCV_32)
> > +typedef unsigned long long u64;
> > +typedef unsigned int u32;
> > +typedef u32 vaddr_t;
> > +#define PRIvaddr PRIx32
> > +typedef u64 paddr_t;
> > +#define INVALID_PADDR (~0ULL)
> > +#define PRIpaddr "016llx"
> > +typedef u32 register_t;
> > +#define PRIregister "x"
> > +#elif defined (CONFIG_RISCV_64)
> > +typedef unsigned long u64;
> > +typedef u64 vaddr_t;
> > +#define PRIvaddr PRIx64
> > +typedef u64 paddr_t;
> > +#define INVALID_PADDR (~0UL)
> > +#define PRIpaddr "016lx"
> > +typedef u64 register_t;
> > +#define PRIregister "lx"
> > +#endif
> 
> Any chance you could insert blank lines after #if, around #elif, and
> before #endif?
> 
Sure, I will fix that.
> > +#if defined(__SIZE_TYPE__)
> > +typedef __SIZE_TYPE__ size_t;
> > +#else
> > +typedef unsigned long size_t;
> > +#endif
> 
> I'd appreciate if this part was dropped by re-basing on top of my
> "include/types: move stddef.h-kind types to common header" [1], to
> avoid that (re-based) patch then needing to drop this from here
> again. I would have committed this already, if osstest wasn't
> completely broken right now.
> 
I'll take it into account for the next version of the patch series.
> Jan
> 
> [1]
> https://lists.xen.org/archives/html/xen-devel/2023-01/msg00720.html
> (since you would not be able to find a patch of the quoted title,
> as in the submission I mistakenly referenced stdlib.h)
Thanks for the link.

~Oleksii



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:31:12 2023
Sender: Ingo Molnar <mingo.kernel.org@gmail.com>
Date: Tue, 17 Jan 2023 10:31:05 +0100
From: Ingo Molnar <mingo@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?iso-8859-1?Q?J=F6rg_R=F6del?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 6/7] x86/power: Sprinkle some noinstr
Message-ID: <Y8Zq2WaYmxnOjfk8@gmail.com>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.888786209@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230116143645.888786209@infradead.org>


* Peter Zijlstra <peterz@infradead.org> wrote:

> +	/*
> +	 * Definitely wrong, but at this point we should have at least enough
> +	 * to do CALL/RET (consider SKL callthunks) and this avoids having
> +	 * to deal with the noinstr explosion for now :/
> +	 */
> +	instrumentation_begin();

BTW., readability side note: instrumentation_begin()/end() are the 
misnomers of the century - they don't signal the start/end of instrumented 
code areas like the name falsely & naively suggests, but the exact 
opposite: start/end of *non-*instrumented code areas.

As such they should probably be something like:

	noinstr_begin();
	...
	noinstr_end();

... to reuse the nomenclature of the 'noinstr' attribute?

Thanks,

	Ingo


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:35:40 2023
Message-ID: <72fd8c47-d654-91d0-993c-97f2d0542cff@xen.org>
Date: Tue, 17 Jan 2023 09:35:33 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <20230116144106.12544-2-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230116144106.12544-2-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

It is not clear to me why this was sent in-reply-to the other patch. If 
they are meant to form a series, then they should have a cover letter and 
each patch should be numbered.

If they are truly separate, then please don't thread them.

On 16/01/2023 14:41, Michal Orzel wrote:
> The direct mapped area occupies L0 slots from 256 to 265 (i.e. 10 slots),

I would write "265 included" or similar so it shows why this is a problem.

> resulting in 5TB (512GB * 10) of virtual address space. However, due to
> incorrect slot subtraction (we take 9 slots into account) we set
> DIRECTMAP_SIZE to 4.5TB instead. Fix it.

I would clarify that we only support up to 2TB. So this is a latent 
issue. This would make clear that...

> 
> Fixes: 5263507b1b4a ("xen: arm: Use a direct mapping of RAM on arm64")

... while this is fixing a bug, it is not going to be a candidate for 
backport.

> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
>   xen/arch/arm/include/asm/config.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 0fefed1b8aa9..16213c8b677f 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -157,7 +157,7 @@
>   #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>   
>   #define DIRECTMAP_VIRT_START   SLOT0(256)
> -#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
> +#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))
>   #define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
>   
>   #define XENHEAP_VIRT_START     directmap_virt_start

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:37:20 2023
Message-ID: <aceeec22-2183-2a60-7a68-58f43d8da493@suse.com>
Date: Tue, 17 Jan 2023 10:37:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 00/17] tools/xenstore: do some cleanup and fixes
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20230117091124.22170-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230117091124.22170-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.01.2023 10:11, Juergen Gross wrote:
> This is a first run of post-XSA patches which piled up during the
> development phase of all the recent Xenstore related XSA patches.
> 
> At least the first 5 patches are completely independent from each
> other. After those the dependencies are starting to be more complex.

The same was said in v2, yet three(?) of the early patches were
committed already. Hence, with a look towards committing, I wonder
to what extent the "5" above is still accurate.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:49:23 2023
Message-ID: <001fe638-1bd8-5624-499e-8f1690cb33c0@amd.com>
Date: Tue, 17 Jan 2023 10:49:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm: Harden setup_frametable_mappings
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <2a679d99-4ed4-4fe8-8aee-faee57b5007b@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <2a679d99-4ed4-4fe8-8aee-faee57b5007b@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Julien,

On 17/01/2023 10:29, Julien Grall wrote:
> 
> 
> Hi Michal,
> 
> On 16/01/2023 14:41, Michal Orzel wrote:
>> The amount of supported physical memory depends on the frametable size
>> and the number of struct page_info entries that can fit into it. Define
>> a macro PAGE_INFO_SIZE to store the current size of the struct page_info
>> (i.e. 56B for arm64 and 32B for arm32) and add a sanity check in
>> setup_frametable_mappings to be notified whenever the size of the
>> structure changes. Also call panic() if the calculated frametable_size
>> exceeds the limit defined by the FRAMETABLE_SIZE macro.
>>
>> Update the comments regarding the frametable in asm/config.h and take
>> the opportunity to remove unused macro FRAMETABLE_VIRT_END on arm32.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>>   xen/arch/arm/include/asm/config.h |  5 ++---
>>   xen/arch/arm/include/asm/mm.h     | 11 +++++++++++
>>   xen/arch/arm/mm.c                 |  5 +++++
>>   3 files changed, 18 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 16213c8b677f..d8f99776986f 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -82,7 +82,7 @@
>>    * ARM32 layout:
>>    *   0  -  12M   <COMMON>
>>    *
>> - *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
>> + *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>>    *                    space
>>    *
>> @@ -95,7 +95,7 @@
>>    *
>>    *   1G -   2G   VMAP: ioremap and early_ioremap
>>    *
>> - *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
>> + *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
>>    *
>>    * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>>    *  Unused
>> @@ -127,7 +127,6 @@
>>   #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
>>   #define FRAMETABLE_SIZE        MB(128-32)
>>   #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>> -#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
> 
> This is somewhat unrelated to the goal of the patch. We do clean-ups in
> the same patch, but they tend to be in the same or an already modified
> hunk (which is not the case here).
> 
> So I would prefer if this is split. This would make this patch a
> potential candidate for backport.
Just for clarity. Do you mean to separate all the config.h changes or only
the FRAMETABLE_VIRT_END removal? I guess the former.

> 
>>
>>   #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
>>   #define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
>> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
>> index 68adcac9fa8d..23dec574eb31 100644
>> --- a/xen/arch/arm/include/asm/mm.h
>> +++ b/xen/arch/arm/include/asm/mm.h
>> @@ -26,6 +26,17 @@
>>    */
>>   #define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
>>
>> +/*
>> + * The size of struct page_info impacts the number of entries that can fit
>> + * into the frametable area and thus it affects the amount of physical memory
>> + * we claim to support. Define PAGE_INFO_SIZE to be used for sanity checking.
>> +*/
>> +#ifdef CONFIG_ARM_64
>> +#define PAGE_INFO_SIZE 56
>> +#else
>> +#define PAGE_INFO_SIZE 32
>> +#endif
>> +
>>   struct page_info
>>   {
>>       /* Each frame can be threaded onto a doubly-linked list. */
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 0fc6f2992dd1..a8c28fa5b768 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -676,6 +676,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>>       const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
>>       int rc;
>>
>> +    BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
>> +
>> +    if ( frametable_size > FRAMETABLE_SIZE )
>> +        panic("RAM size is too big to fit in a frametable area\n");
> 
> This is not correct. Depending on the PDX compression, the frametable
> may end up to cover non-RAM. So I would write:
> 
> "The frametable cannot cover the physical region 0x%PRIpaddr -
> 0x%PRIpaddr".
Yes, you're right.

> 
>> +
>>       frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
>>       /* Round up to 2M or 32M boundary, as appropriate. */
>>       frametable_size = ROUNDUP(frametable_size, mapping_size);
> 
> Cheers,
> 
> --
> Julien Grall

~Michal



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:50:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:50:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479281.743028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHiby-0008Ge-Bi; Tue, 17 Jan 2023 09:50:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479281.743028; Tue, 17 Jan 2023 09:50:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHiby-0008GX-7r; Tue, 17 Jan 2023 09:50:34 +0000
Received: by outflank-mailman (input) for mailman id 479281;
 Tue, 17 Jan 2023 09:50:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHibx-0008Fj-LT
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:50:33 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6238fded-964c-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:50:32 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 84D9568445;
 Tue, 17 Jan 2023 09:50:32 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4B6A013357;
 Tue, 17 Jan 2023 09:50:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id EPInEWhvxmN4BwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 09:50:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6238fded-964c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673949032; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zqrp1waze+l6MnoF1JRnlebPy+aayXNcunsZEaOJgeA=;
	b=L2YOTC6amVLyNdJ6lAEod11A46DbykGyQ6w6TO6c9IGf6QG89eUP3euBtHO4rL0eGx5fLJ
	uNR1VNyBObyYSGIAw/RnCIjVUlaqNDZV8t10zBXvaW5AvfoxI2KBQSd4T/ECJArHMQ4POt
	6IIBQq1wTKDLzKhMLAForwD1iv7qjsA=
Message-ID: <f37c0d1b-ecb7-ef8b-cfa2-4bfad35d4452@suse.com>
Date: Tue, 17 Jan 2023 10:50:31 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20230117091124.22170-1-jgross@suse.com>
 <aceeec22-2183-2a60-7a68-58f43d8da493@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v3 00/17] tools/xenstore: do some cleanup and fixes
In-Reply-To: <aceeec22-2183-2a60-7a68-58f43d8da493@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------Fomdzzbz6bKegKT1Z7H1MP4j"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------Fomdzzbz6bKegKT1Z7H1MP4j
Content-Type: multipart/mixed; boundary="------------Zp29MsLUiejaAPf36SeEAiiy";
 protected-headers="v1"

--------------Zp29MsLUiejaAPf36SeEAiiy
Content-Type: multipart/mixed; boundary="------------ml1ZGpVaOztQd3fcIeOgP7j0"

--------------ml1ZGpVaOztQd3fcIeOgP7j0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.01.23 10:37, Jan Beulich wrote:
> On 17.01.2023 10:11, Juergen Gross wrote:
>> This is a first run of post-XSA patches which piled up during the
>> development phase of all the recent Xenstore related XSA patches.
>>
>> At least the first 5 patches are completely independent from each
>> other. After those the dependencies are starting to be more complex.
> 
> The same was said in v2, yet three(?) of the early patches were
> committed already. Hence with a look towards committing I wonder in
> how far the 5 above is accurate.

I think it is still true, maybe even patch 6 could be applied, but that
one would require at least some context editing (changes of patch 4 are
visible in the patch file).


Juergen
--------------ml1ZGpVaOztQd3fcIeOgP7j0--

--------------Zp29MsLUiejaAPf36SeEAiiy--

--------------Fomdzzbz6bKegKT1Z7H1MP4j--


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 09:51:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 09:51:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479284.743038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHicO-0000Mp-Mf; Tue, 17 Jan 2023 09:51:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479284.743038; Tue, 17 Jan 2023 09:51:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHicO-0000Me-Jb; Tue, 17 Jan 2023 09:51:00 +0000
Received: by outflank-mailman (input) for mailman id 479284;
 Tue, 17 Jan 2023 09:50:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pwid=5O=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHicN-0008Fj-5u
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 09:50:59 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2042.outbound.protection.outlook.com [40.107.243.42])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 707956f5-964c-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 10:50:57 +0100 (CET)
Received: from BN0PR04CA0043.namprd04.prod.outlook.com (2603:10b6:408:e8::18)
 by CH3PR12MB8583.namprd12.prod.outlook.com (2603:10b6:610:15f::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 09:50:55 +0000
Received: from BN8NAM11FT041.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e8:cafe::b6) by BN0PR04CA0043.outlook.office365.com
 (2603:10b6:408:e8::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 09:50:54 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT041.mail.protection.outlook.com (10.13.177.18) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 09:50:54 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 03:50:54 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 17 Jan 2023 03:50:53 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 707956f5-964c-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h9wkKzoKSt9RjFkmTx9EneVW25YHZ5K/J3+amcAa1ps=;
 b=Q0cG0TuZgcS6+fhA0GJZrhp/fgHQGQZ+KmOV0XXmbIwY0dp9aa1MrfxO0Lwi03yW1WhezCEFMXpSmXFWXsK8jPkcPO6IHeN5t3mybwy/D2NvPbIn1+Mh0lWjq04qQzogg7l2TS/m3JVLCP2OqpS7Qka0NvJAhZ2neWPjzbW237w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <96964034-53b0-c391-6be1-fa5fff6842e1@amd.com>
Date: Tue, 17 Jan 2023 10:50:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <20230116144106.12544-2-michal.orzel@amd.com>
 <72fd8c47-d654-91d0-993c-97f2d0542cff@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <72fd8c47-d654-91d0-993c-97f2d0542cff@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT041:EE_|CH3PR12MB8583:EE_
X-MS-Office365-Filtering-Correlation-Id: be7b2785-2f7d-4d53-f8d5-08daf870535f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 09:50:54.4701
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: be7b2785-2f7d-4d53-f8d5-08daf870535f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT041.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8583

Hi Julien,

On 17/01/2023 10:35, Julien Grall wrote:
> 
> 
> Hi Michal,
> 
> It is not clear to me why this was sent In-reply-to the other patch. If
> they are meant to form a series, then this should have a cover letter +
> each patch should be numbered.
> 
> If they are truly separate, then please don't thread them.
They were meant to be separate. I will form a series for v2 to make the committing easier.

> 
> On 16/01/2023 14:41, Michal Orzel wrote:
>> The direct mapped area occupies L0 slots from 256 to 265 (i.e. 10 slots),
> 
> I would write "265 included" or similar so it shows why this is a problem.
Ok.

> 
>> resulting in 5TB (512GB * 10) of virtual address space. However, due to
>> incorrect slot subtraction (we take 9 slots into account) we set
>> DIRECTMAP_SIZE to 4.5TB instead. Fix it.
> 
> I would clarify that we only support up to 2TB. So this is a latent
> issue. This would make clear that...
Ok.

> 
>>
>> Fixes: 5263507b1b4a ("xen: arm: Use a direct mapping of RAM on arm64")
> 
> ... while this is fixing a bug, it is not going to be a candidate for
> backport.
> 
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>>   xen/arch/arm/include/asm/config.h | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 0fefed1b8aa9..16213c8b677f 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -157,7 +157,7 @@
>>   #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>>
>>   #define DIRECTMAP_VIRT_START   SLOT0(256)
>> -#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
>> +#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))
>>   #define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
>>
>>   #define XENHEAP_VIRT_START     directmap_virt_start
> 
> Cheers,
> 
> --
> Julien Grall

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 10:04:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 10:04:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479294.743050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHipc-00028D-Tm; Tue, 17 Jan 2023 10:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479294.743050; Tue, 17 Jan 2023 10:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHipc-000286-Pr; Tue, 17 Jan 2023 10:04:40 +0000
Received: by outflank-mailman (input) for mailman id 479294;
 Tue, 17 Jan 2023 10:04:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IWGa=5O=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHipb-000280-9H
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 10:04:39 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2084.outbound.protection.outlook.com [40.107.21.84])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 59a23760-964e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 11:04:37 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7880.eurprd04.prod.outlook.com (2603:10a6:20b:2a5::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 10:04:35 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Tue, 17 Jan 2023
 10:04:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59a23760-964e-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DOYL/pwjgxYdBPQJ9GXZrtMBPjHT5XydBbCvvp9E1TA=;
 b=fZvhyJpRgo64CEUeHfR/rF/WoPpYQ/mFlKTwiS9dsppjjsC/PmTQurVgXn/5oGiJLASiQ6X9V5ijrw/eycVeAq8gp9ymlc2dA4Oa5i12MsjppShpLlIms39t2f/NdYYKb6FFFIT5AuC2jYORS24WCv7hu9SxenOEq1X8kNSUWTAkZA4GU7/fAKxbANgxnsCFicTMtmerVGrt3iutE/vstrFJlhz+5xVOjiCUpYDa/Q0DNX+mrTD15eM2dD+veK1C0WGYHWzuKLOZOIDosCew76mD8cyD8oFuAtZpZ39yT0uREvy3k4NKtWqA+YZSpolF1Xt51D/4no+71d66ctNx+Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d871f9e2-5f00-1f0d-3297-0084d4a4af27@suse.com>
Date: Tue, 17 Jan 2023 11:04:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 1/4] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <2ce57f95f8445a4880e0992668a48ffe7c2f9732.1673877778.git.oleksii.kurochko@gmail.com>
 <e00512a6-5d32-6dbf-4269-429532f8a852@suse.com>
 <87107d8945c9f1513c305d115f24f488b87e088b.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <87107d8945c9f1513c305d115f24f488b87e088b.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0125.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7880:EE_
X-MS-Office365-Filtering-Correlation-Id: a8d76f08-ae31-42aa-1032-08daf8723c6b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a8d76f08-ae31-42aa-1032-08daf8723c6b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 10:04:35.1674
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5OidcHR8UMu/+KULvk1DXTMNhVEU0ou4OPvlRQuiKfOqnDKm/CkRSRLjOV1+oZu2ZINZEUVYQncuPGgDmiHdPg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7880

On 17.01.2023 10:29, Oleksii wrote:
> On Mon, 2023-01-16 at 15:59 +0100, Jan Beulich wrote:
>> On 16.01.2023 15:39, Oleksii Kurochko wrote:
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>> Changes in V4:
>>>     - Clean up types in <asm/types.h> and retain only the necessary
>>> ones. The following types were removed as they are defined in
>>> <xen/types.h>:
>>>       {__|}{u|s}{8|16|32|64}
>>
>> For one you still typedef u32 and u64. And imo correctly so, until we
>> get around to moving the definitions of basic types into xen/types.h.
>> Plus I can't see how things build for you: xen/types.h expects __{u,s}<N>
> It builds because nothing uses <xen/types.h> right now, which is why I
> missed them, but you are right that __{u,s}<N> should be brought back.
> It looks like {__,}{u,s}{8,16,32} are the same for all architectures
> available in Xen, so can I move them to <xen/types.h> instead of
> keeping them in <asm/types.h>?

This next step isn't quite as obvious, i.e. has room for being
contentious. In particular deriving fixed width types from C basic
types is setting us up for future problems (especially in the
context of RISC-V think of RV128). Therefore, if we touch and
generalize this, I'd like to sanitize things at the same time.

I'd then prefer to typedef {u,}int<N>_t by using either the "mode"
attribute (requiring us to settle on a prereq of there always being
8 bits per char) or the compiler supplied __{U,}INT<N>_TYPE__
(taking gcc 4.7 as a prereq; didn't check clang yet). Both would
allow {u,}int64_t to also be put in the common header. Yet if e.g.
a prereq assumption faced opposition, some other approach would
need to be found. Plus using either of the named approaches has
issues with the printf() format specifiers, for which I have yet to
figure out a solution (or maybe someone else knows a good way to
deal with that; using compiler-provided headers isn't an option
afaict, as gcc provides stdint.h but not inttypes.h, but maybe
glibc's simplistic approach is good enough - they target
far more architectures than we do and get away with it).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 10:10:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 10:10:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479299.743061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHivH-0003YX-J0; Tue, 17 Jan 2023 10:10:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479299.743061; Tue, 17 Jan 2023 10:10:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHivH-0003YQ-FI; Tue, 17 Jan 2023 10:10:31 +0000
Received: by outflank-mailman (input) for mailman id 479299;
 Tue, 17 Jan 2023 10:10:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IWGa=5O=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHivG-0003YK-4M
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 10:10:30 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2070.outbound.protection.outlook.com [40.107.249.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2af97631-964f-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 11:10:29 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7880.eurprd04.prod.outlook.com (2603:10a6:20b:2a5::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 10:10:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Tue, 17 Jan 2023
 10:10:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2af97631-964f-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XjZI9SseASnBqWI4aSsmRxpcpwRE/zpOsuEEm5DRHq/EOjRTd7OCE4BDg5l/5Pc+uucUUPVwvLIZqG0SAuhaG74M3IkKJg8f5+FAW8z7edUBfDDTypf8VUjWtnShzkThGl6HZaiItRhdIWeGJzlEdlqIUjM90T8yk+JQ+8tVvMRTppXxi7Gzxxt6Cg2KsE1HjcoLksuDSYxTkaLStSAEEP/VGP9Smee0/7dEgl+bWAHgynECJbKIiGVw9cSHorG+Ir5+5XdxwwxP/jpA0qQ8w4Sw9B7pUuouAc4k1D8vWDDR42vZG5nFlkfVogblO427ZapBLw3/W6JwGn1DqeFkDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8uhDYCizP86w4i/B2PdFA4z/3S9n/2WFsq/Es3tf4EU=;
 b=IrIvZVvtETp9h9wGtNmIIAJl9ADZHCBfvpXt992lbuzutswyaanosOdhNs8VjDMEs5BVVgVaGp5SlhCfi2jzB83zfaAgfTQjI8A+VEduj08G6CUUkp/SUkl+t8e/k09LjXmmNYKr3N7Pubvldi5uueHBTHxnjyjnteBsyIgDu2N3BRH+SgDsIu1/DygzFmiplwHI+zkYv3QY8AqQJNu2ZUSHl0g+AicT/mfk5gng4aFSkJos+EaShr2m9KLr9cQngms/4oKN+Id7cJkD8lzRZeQkczDCVop4sijsFXeJff+ZndVWgvu7z8m0VCWq4ADB3z9aRFD9c19GbFIXgwmZvw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8uhDYCizP86w4i/B2PdFA4z/3S9n/2WFsq/Es3tf4EU=;
 b=EHxZk73NJcJHdlZJ6636HsWxIIHBepBgM4wnFiiwKmFxsteJo0cHj12XUMoa6tI4yFEztl6bI0+3/S0Qxy2jmnfU6MFf4GfNGviHmvLXBZzci6avbysrit/ZsWUO95NIHcyKpgPqCCkfz9z1yoyTIWF6jztJICNEWEZCHEiwcUK2D6mkSg60vxLV8vSYKUGOKYrxuugG4ckEWN1rWxKhLUfZN/rGGgD6OZw8d+ZQbWj7kjPl1hS4gZIgiUdJuYycUOXXTdWbPC3gg5mpcPBrTEHxckYLVXqAS5BT4gcv85/Fkje+Bz+5nAqiCH3W9wUA43Zr08ULHIfkCcbpZ6XZHA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0d7ed55d-11b2-3a40-9b4e-2b52fc5a5cac@suse.com>
Date: Tue, 17 Jan 2023 11:10:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/9] x86/shadow: replace sh_reset_l3_up_pointers()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <d91b5315-a5bb-a6ee-c9bb-58974c733a4e@suse.com>
In-Reply-To: <d91b5315-a5bb-a6ee-c9bb-58974c733a4e@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0074.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7880:EE_
X-MS-Office365-Filtering-Correlation-Id: 1a68bc7e-f6d3-4d00-98af-08daf8730e36
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1a68bc7e-f6d3-4d00-98af-08daf8730e36
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 10:10:27.1607
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CF46I5qBOV6tT40AkNR9yCL6071vOUnBGdgHZJBwjYxLHtA19lt7SnpdDnDmiqJDZBrzqrSL2imzAopxJ9ymQw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7880

On 11.01.2023 14:52, Jan Beulich wrote:
> Rather than doing a separate hash walk (and then even using the vCPU
> variant, which is to go away), do the up-pointer-clearing right in
> sh_unpin(), as an alternative to the (now further limited) enlisting on
> a "free floating" list fragment. This utilizes the fact that such list
> fragments are traversed only for multi-page shadows (in shadow_free()).

I've added mention of sh_next_page() here as well. Not sure how I
managed to miss that, but this doesn't change the reasoning.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 10:35:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 10:35:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479307.743072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHjJM-0006BK-Lw; Tue, 17 Jan 2023 10:35:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479307.743072; Tue, 17 Jan 2023 10:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHjJM-0006BC-JL; Tue, 17 Jan 2023 10:35:24 +0000
Received: by outflank-mailman (input) for mailman id 479307;
 Tue, 17 Jan 2023 10:35:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CTo4=5O=redhat.com=imammedo@srs-se1.protection.inumbo.net>)
 id 1pHjJK-0006Ao-Ol
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 10:35:23 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a31d5359-9652-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 11:35:19 +0100 (CET)
Received: from mail-ej1-f72.google.com (mail-ej1-f72.google.com
 [209.85.218.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-443-WPoUgjlNOhquP8LWC2ED8g-1; Tue, 17 Jan 2023 05:35:17 -0500
Received: by mail-ej1-f72.google.com with SMTP id
 oz11-20020a1709077d8b00b007c0dd8018b6so21486435ejc.17
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 02:35:16 -0800 (PST)
Received: from imammedo.users.ipa.redhat.com (nat-pool-brq-t.redhat.com.
 [213.175.37.10]) by smtp.gmail.com with ESMTPSA id
 l17-20020a1709063d3100b008727576e4ecsm832656ejf.117.2023.01.17.02.35.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 17 Jan 2023 02:35:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a31d5359-9652-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673951718;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=udi9DLzP25IY3XXIlKaLN3gdYOdTRra71KHKKd1UMlU=;
	b=EGN1XJGwvmeQNS8tZsXZ+/qCrHsS78NCykubWA1XEuxBi1R0qQu3Zs3RAC3qla5jedgT+3
	hmmAIEjeHlY+OlTK7mdhs2I8UFkagq3dxokGg+nCoN9QriOfElJeR/VLSQjwZHcIM0T1Li
	xSgsINrwJDT2uR0RBM9E5DC9BIomroE=
X-MC-Unique: WPoUgjlNOhquP8LWC2ED8g-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=udi9DLzP25IY3XXIlKaLN3gdYOdTRra71KHKKd1UMlU=;
        b=SatvTrKDZ6Yb1B2nIbkCvUXG2oAy/cHWQ4l34OvyzIUkKMCuFK4rYebFltZnnSuFt4
         o6iL7UpAKaGAmISBFwxugvUxpN9VdGj7pNNc95gmJ92ANEUwZGCteaMJufY1P0RFJu7n
         wWXyMdEnIxpJgWP4CkdrtVYnoblOgbCPNGiLjKw9jZjBKBFScPGJhsLzQUx9I3T2skXe
         UT4IfyGnPsD/hNqFGBqwpqYz1e0KZDv56YA8e3hF9gvp5yjsg3DaEUCpcg8Fm/E7me3i
         P23wDdOKtTm0NwY1S6Kkwk592KafcIXdB+ekDofEAvFOGKPv84sVI8o/enFmIxda6ERs
         eWYA==
X-Gm-Message-State: AFqh2kruUQ9PlJhtvwDWs20CRBMLxWTZrJ3B9KvXigZCPmqlQfDVNCTz
	7ct76j+rp+8MiEkbvAJ8Rnue4ZrAb49N7pLkUIf/j2vt81n96xrUZQ0wetKRp/pZnn6QXYqfHVN
	ZuYBCdlg30dezYs2Y1+s/dA0e0lk=
X-Received: by 2002:a17:906:bc58:b0:872:2cc4:6886 with SMTP id s24-20020a170906bc5800b008722cc46886mr3447809ejv.30.1673951715964;
        Tue, 17 Jan 2023 02:35:15 -0800 (PST)
X-Google-Smtp-Source: AMrXdXtBbS6fyQA6axbQvi8yH1zGng9Ffc7Pe8EGqx5jeMqhrON4vczRjhquhrtG0NT3Cn2LeO5SWQ==
X-Received: by 2002:a17:906:bc58:b0:872:2cc4:6886 with SMTP id s24-20020a170906bc5800b008722cc46886mr3447796ejv.30.1673951715728;
        Tue, 17 Jan 2023 02:35:15 -0800 (PST)
Date: Tue, 17 Jan 2023 11:35:13 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Chuck Zmudzinski <brchuckz@netscape.net>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org, Stefano Stabellini
 <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Paolo Bonzini
 <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org, Philippe
 =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230117113513.4e692539@imammedo.users.ipa.redhat.com>
In-Reply-To: <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
	<a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
	<20230110030331-mutt-send-email-mst@kernel.org>
	<a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
	<D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
	<9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
	<7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
	<20230112180314-mutt-send-email-mst@kernel.org>
	<128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
	<20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
	<88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
	<20230116163342.467039a0@imammedo.users.ipa.redhat.com>
	<fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.36; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Mon, 16 Jan 2023 13:00:53 -0500
Chuck Zmudzinski <brchuckz@netscape.net> wrote:

> On 1/16/23 10:33, Igor Mammedov wrote:
> > On Fri, 13 Jan 2023 16:31:26 -0500
> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> >  =20
> >> On 1/13/23 4:33=E2=80=AFAM, Igor Mammedov wrote: =20
> >> > On Thu, 12 Jan 2023 23:14:26 -0500
> >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> >> >    =20
> >> >> On 1/12/23 6:03=E2=80=AFPM, Michael S. Tsirkin wrote:   =20
> >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:=
     =20
> >> >> >> I think the change Michael suggests is very minimalistic: Move t=
he if
> >> >> >> condition around xen_igd_reserve_slot() into the function itself=
 and
> >> >> >> always call it there unconditionally -- basically turning three =
lines
> >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specif=
ic,
> >> >> >> Michael further suggests to rename it to something more general.=
 All
> >> >> >> in all no big changes required.     =20
> >> >> >=20
> >> >> > yes, exactly.
> >> >> >      =20
> >> >>=20
> >> >> OK, got it. I can do that along with the other suggestions.   =20
> >> >=20
> >> > have you considered instead of reservation, putting a slot check in =
device model
> >> > and if it's intel igd being passed through, fail at realize time  if=
 it can't take
> >> > required slot (with a error directing user to fix command line)?   =
=20
> >>=20
> >> Yes, but the core pci code currently already fails at realize time
> >> with a useful error message if the user tries to use slot 2 for the
> >> igd, because of the xen platform device which has slot 2. The user
> >> can fix this without patching qemu, but having the user fix it on
> >> the command line is not the best way to solve the problem, primarily
> >> because the user would need to hotplug the xen platform device via a
> >> command line option instead of having the xen platform device added by
> >> pc_xen_hvm_init functions almost immediately after creating the pci
> >> bus, and that delay in adding the xen platform device degrades
> >> startup performance of the guest.
> >>  =20
> >> > That could be less complicated than dealing with slot reservations a=
t the cost of
> >> > being less convenient.   =20
> >>=20
> >> And also a cost of reduced startup performance =20
> >=20
> > Could you clarify how it affects performance (and how much).
> > (as I see, setup done at board_init time is roughly the same
> > as with '-device foo' CLI options, modulo time needed to parse
> > options which should be negligible. and both ways are done before
> > guest runs) =20
>
> I preface my answer by saying there is a v9, but you don't
> need to look at that. I will answer all your questions here.
>
> I am going by what I observe on the main HDMI display with the
> different approaches. With the approach of not patching Qemu
> to fix this, which requires adding the Xen platform device a
> little later, the length of time it takes to fully load the
> guest is increased. I also noticed that, with Linux guests that
> use the grub bootloader, the grub vga driver cannot display the
> grub boot menu at the native resolution of the display (in the
> tested case 1920x1080) when the Xen platform device is added via
> a command line option instead of by the pc_xen_hvm_init_pci
> function in pc_piix.c, but with this patch to Qemu the grub menu
> is displayed at the full 1920x1080 native resolution. Once the
> guest fully loads, there is no noticeable difference in
> performance. It is mainly a degradation in startup performance,
> not performance once the guest OS is fully loaded.
The above hints at the presence of bug[s] in the igd-passthru
implementation, and this patch effectively hides the problem instead
of trying to figure out what's wrong and fixing it.



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 10:46:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 10:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479313.743082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHjU9-0007hw-MG; Tue, 17 Jan 2023 10:46:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479313.743082; Tue, 17 Jan 2023 10:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHjU9-0007hp-JZ; Tue, 17 Jan 2023 10:46:33 +0000
Received: by outflank-mailman (input) for mailman id 479313;
 Tue, 17 Jan 2023 10:46:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHjU8-0007hj-Nk
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 10:46:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHjU8-0005CS-Co; Tue, 17 Jan 2023 10:46:32 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHjU8-0007zF-73; Tue, 17 Jan 2023 10:46:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=6KEtVoU3lW6w66inOomDSldMryZOzLz213eeQ86N4+U=; b=iYdpw3PtsCoxYfGiq6zVLspOw0
	qAZMtYlm5aN4Xa3PLQFuCG/0HunVx3APslJSrTGg3kgnq5DFcMdTe4v4qwlmebXmvwh6ssI43LHq+
	tIoKgGFzD3qKN8F6So6lqvJm2S/FAVLUkYQ+OC3ikno3We6QaB3TkPRlWiDE23ZsKDPs=;
Message-ID: <98cc04e5-b7e0-b19a-7d4e-3054ad662466@xen.org>
Date: Tue, 17 Jan 2023 10:46:30 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <20230116144106.12544-2-michal.orzel@amd.com>
 <72fd8c47-d654-91d0-993c-97f2d0542cff@xen.org>
 <96964034-53b0-c391-6be1-fa5fff6842e1@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <96964034-53b0-c391-6be1-fa5fff6842e1@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 17/01/2023 09:50, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> 
> On 17/01/2023 10:35, Julien Grall wrote:
>>
>>
>> Hi Michal,
>>
>> It is not clear to me why this was sent In-reply-to the other patch. If
>> they are meant to form a series, then this should have a cover letter +
>> each patch should be numbered.
>>
>> If they are truly separate, then please don't thread them.
> They were meant to be separate. I will form a series for v2 to make the committing easier.

These are only two patches, so I would be OK if you sent them 
separately as well. Pick whichever you prefer.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 10:47:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 10:47:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479319.743094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHjVW-0008FQ-0l; Tue, 17 Jan 2023 10:47:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479319.743094; Tue, 17 Jan 2023 10:47:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHjVV-0008FJ-TZ; Tue, 17 Jan 2023 10:47:57 +0000
Received: by outflank-mailman (input) for mailman id 479319;
 Tue, 17 Jan 2023 10:47:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHjVU-0008FA-H0
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 10:47:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHjVU-0005Dl-8d; Tue, 17 Jan 2023 10:47:56 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHjVU-000829-1h; Tue, 17 Jan 2023 10:47:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=mT57WQjRpNN4uGBMl2CRl8pj7ZmgBOmNppM/nj3e4ZA=; b=d/EeKMetxegDnokVN5Zjk7Dkxo
	dg16ELCQa8u5WLFipWMrS9yXnpNniCtvaEhBmz/Me1Cc9OD22Xeka0TCio3u6dgaokRLxXG9Rcamf
	AjOjX4t1BMuqEpqxoaD7j2VoGSVgZuxMNTqJdIZDkbI86TtYSdDL9F3yONHG+lz5zqtM=;
Message-ID: <6adeccb8-1cd9-37f3-25d1-0c8ef275c374@xen.org>
Date: Tue, 17 Jan 2023 10:47:54 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/arm: Harden setup_frametable_mappings
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116144106.12544-1-michal.orzel@amd.com>
 <2a679d99-4ed4-4fe8-8aee-faee57b5007b@xen.org>
 <001fe638-1bd8-5624-499e-8f1690cb33c0@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <001fe638-1bd8-5624-499e-8f1690cb33c0@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 17/01/2023 09:49, Michal Orzel wrote:
> Hi Julien,
> 
> On 17/01/2023 10:29, Julien Grall wrote:
>>
>>
>> Hi Michal,
>>
>> On 16/01/2023 14:41, Michal Orzel wrote:
>>> The amount of supported physical memory depends on the frametable size
>>> and the number of struct page_info entries that can fit into it. Define
>>> a macro PAGE_INFO_SIZE to store the current size of the struct page_info
>>> (i.e. 56B for arm64 and 32B for arm32) and add a sanity check in
>>> setup_frametable_mappings to be notified whenever the size of the
>>> structure changes. Also panic if the calculated frametable_size
>>> exceeds the limit defined by the FRAMETABLE_SIZE macro.
>>>
>>> Update the comments regarding the frametable in asm/config.h and take
>>> the opportunity to remove unused macro FRAMETABLE_VIRT_END on arm32.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>> ---
>>>    xen/arch/arm/include/asm/config.h |  5 ++---
>>>    xen/arch/arm/include/asm/mm.h     | 11 +++++++++++
>>>    xen/arch/arm/mm.c                 |  5 +++++
>>>    3 files changed, 18 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>>> index 16213c8b677f..d8f99776986f 100644
>>> --- a/xen/arch/arm/include/asm/config.h
>>> +++ b/xen/arch/arm/include/asm/config.h
>>> @@ -82,7 +82,7 @@
>>>     * ARM32 layout:
>>>     *   0  -  12M   <COMMON>
>>>     *
>>> - *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
>>> + *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>>>     * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>>>     *                    space
>>>     *
>>> @@ -95,7 +95,7 @@
>>>     *
>>>     *   1G -   2G   VMAP: ioremap and early_ioremap
>>>     *
>>> - *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
>>> + *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
>>>     *
>>>     * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>>>     *  Unused
>>> @@ -127,7 +127,6 @@
>>>    #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
>>>    #define FRAMETABLE_SIZE        MB(128-32)
>>>    #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>>> -#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
>>
>> This is somewhat unrelated to the goal of the patch. We do clean-ups
>> in the same patch, but they tend to be in the same or an already
>> modified hunk (which is not the case here).
>>
>> So I would prefer if this is split. This would make this patch a
>> potential candidate for backport.
> Just for clarity. Do you mean to separate all the config.h changes or only
> the FRAMETABLE_VIRT_END removal? I guess the former.

The latter. The comment update makes sense here because this is what 
actually triggered this patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 11:04:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 11:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479325.743105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHjlR-0002Em-DD; Tue, 17 Jan 2023 11:04:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479325.743105; Tue, 17 Jan 2023 11:04:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHjlR-0002Ef-9d; Tue, 17 Jan 2023 11:04:25 +0000
Received: by outflank-mailman (input) for mailman id 479325;
 Tue, 17 Jan 2023 11:04:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CTo4=5O=redhat.com=imammedo@srs-se1.protection.inumbo.net>)
 id 1pHjlP-0002EZ-QU
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 11:04:23 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b1eb5448-9656-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 12:04:22 +0100 (CET)
Received: from mail-ej1-f72.google.com (mail-ej1-f72.google.com
 [209.85.218.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-659-i3LsrDrjN92pgFpsAdF7aw-1; Tue, 17 Jan 2023 06:04:19 -0500
Received: by mail-ej1-f72.google.com with SMTP id
 hp2-20020a1709073e0200b0084d47e3fe82so17493830ejc.8
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 03:04:19 -0800 (PST)
Received: from imammedo.users.ipa.redhat.com (nat-pool-brq-t.redhat.com.
 [213.175.37.10]) by smtp.gmail.com with ESMTPSA id
 kz11-20020a17090777cb00b007aece68483csm13080334ejc.193.2023.01.17.03.04.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 17 Jan 2023 03:04:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1eb5448-9656-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1673953461;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eyVjNXxDkmKeLwTrHfr3getUGen09SQggMhsus/FQIc=;
	b=eKB3UkXrIH4+GO/oTPAkLe3M+hQjPE/i9cuqvBRYsWBZ358ajThSfNUfdAD8ODG2PydlTL
	cnAciQD8HU3TJt3BSNcX+5WouFxBUNqg8Db2BuVjyL/bbdukxwtJoyanUvI7pzW958c7KZ
	IFTYpGgd8WODN5AV66tDE4PRCZkQMu8=
X-MC-Unique: i3LsrDrjN92pgFpsAdF7aw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=eyVjNXxDkmKeLwTrHfr3getUGen09SQggMhsus/FQIc=;
        b=F76yklHpjQd4Wuk56acfYEMyL4ts9ozm185J/O2Jjfvf2TQjiV/UVoBn2oHkV8bkFg
         y9Kwlp/APGU7e6e7IYY7hqsnexR8Nbmn2eBV4qfYq+5ZC8IQgeZELca6vOtHivz8vaYg
         6etU4N1v+MQoR15N+0TFjNHlmJDlD+SHhV9rU19aKIRCbNMYmNpDu3c4Gv4KmeJGeHcP
         ts+E1QPF8y+zJ0wipxV5xztLiFYg1cQloUMnTINWNsvCUHozPP3H91GhA5alozAxxYMP
         tQiHDXQynfXg8KK5S/J48JuRNJs8wQcjJgqtZX90c44SMF4juUI7MOi5Eus1WV5n+OXP
         ttGQ==
X-Gm-Message-State: AFqh2kpXiUJMtbhIygQSISjYEbDh0m6me5cO67lOmYiU5D7ZxSSoNSU9
	WExkUQPGIuYKYisVAGfqkO96mL52Go9ZH7peXp/46iCh93b3eUlAVEOPUr9GVkDRsR+F3WTL7GZ
	ABf2HkxEax0rMMaaz0VenmtlkIfw=
X-Received: by 2002:a17:907:ca85:b0:7c1:1e5a:ed10 with SMTP id ul5-20020a170907ca8500b007c11e5aed10mr2469136ejc.8.1673953458709;
        Tue, 17 Jan 2023 03:04:18 -0800 (PST)
X-Google-Smtp-Source: AMrXdXuMOCZZraUw4zh7iQC3j12bFprGagl0VcCEBpc0ktoOk5ixu3QwIXdb8aSqNQkqCcaYDyEk9A==
X-Received: by 2002:a17:907:ca85:b0:7c1:1e5a:ed10 with SMTP id ul5-20020a170907ca8500b007c11e5aed10mr2469119ejc.8.1673953458471;
        Tue, 17 Jan 2023 03:04:18 -0800 (PST)
Date: Tue, 17 Jan 2023 12:04:16 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Chuck Zmudzinski <brchuckz@netscape.net>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org, Stefano Stabellini
 <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Paolo Bonzini
 <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org, Philippe
 =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, Thomas Huth <thuth@redhat.com>, Eric Auger
 <eric.auger@redhat.com>, Alex Williamson <alex.williamson@redhat.com>,
 Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230117120416.0aa041d6@imammedo.users.ipa.redhat.com>
In-Reply-To: <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
	<a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
	<20230110030331-mutt-send-email-mst@kernel.org>
	<a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
	<D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
	<9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
	<7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
	<20230112180314-mutt-send-email-mst@kernel.org>
	<128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
	<20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
	<88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
	<20230116163342.467039a0@imammedo.users.ipa.redhat.com>
	<fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.36; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Mon, 16 Jan 2023 13:00:53 -0500
Chuck Zmudzinski <brchuckz@netscape.net> wrote:

> On 1/16/23 10:33, Igor Mammedov wrote:
> > On Fri, 13 Jan 2023 16:31:26 -0500
> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> >
> >> On 1/13/23 4:33 AM, Igor Mammedov wrote:
> >> > On Thu, 12 Jan 2023 23:14:26 -0500
> >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> >> >
> >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:
> >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:
> >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> >> >> >> always call it there unconditionally -- basically turning three lines
> >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> >> >> >> Michael further suggests to rename it to something more general. All
> >> >> >> in all no big changes required.
> >> >> >
> >> >> > yes, exactly.
> >> >> >
> >> >>
> >> >> OK, got it. I can do that along with the other suggestions.
> >> >
> >> > have you considered instead of reservation, putting a slot check in the device model
> >> > and if it's intel igd being passed through, fail at realize time if it can't take
> >> > the required slot (with an error directing the user to fix the command line)?
> >>
> >> Yes, but the core pci code currently already fails at realize time
> >> with a useful error message if the user tries to use slot 2 for the
> >> igd, because of the xen platform device which has slot 2. The user
> >> can fix this without patching qemu, but having the user fix it on
> >> the command line is not the best way to solve the problem, primarily
> >> because the user would need to hotplug the xen platform device via a
> >> command line option instead of having the xen platform device added by
> >> the pc_xen_hvm_init functions almost immediately after creating the pci
> >> bus, and that delay in adding the xen platform device degrades
> >> startup performance of the guest.
> >>
> >> > That could be less complicated than dealing with slot reservations at the cost of
> >> > being less convenient.
> >>
> >> And also a cost of reduced startup performance
> >
> > Could you clarify how it affects performance (and how much).
> > (as I see, setup done at board_init time is roughly the same
> > as with '-device foo' CLI options, modulo time needed to parse
> > options which should be negligible. and both ways are done before
> > guest runs)
>
> I preface my answer by saying there is a v9, but you don't
> need to look at that. I will answer all your questions here.
>
> I am going by what I observe on the main HDMI display with the
> different approaches. With the approach of not patching Qemu
> to fix this, which requires adding the Xen platform device a
> little later, the length of time it takes to fully load the
> guest is increased. I also noticed with Linux guests that use
> the grub bootloader, the grub vga driver cannot display the
> grub boot menu at the native resolution of the display, which
> in the tested case is 1920x1080, when the Xen platform device
> is added via a command line option instead of by the
> pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> to Qemu, the grub menu is displayed at the full, 1920x1080
> native resolution of the display. Once the guest fully loads,
> there is no noticeable difference in performance. It is mainly
> a degradation in startup performance, not performance once
> the guest OS is fully loaded.

Looking at igd-assign.txt, it recommends adding the IGD using the '-device'
CLI option, and actually dropping at least the graphics defaults explicitly.
So the IGD is expected to work fine even when it is constructed with
'-device'.

Could you provide the full CLI that Xen currently starts QEMU with, and
then the CLI you used (with an explicit -device for the IGD) that leads
to reduced performance?

CCing vfio folks who might have an idea what could be wrong based
on vfio experience.



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 11:30:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 11:30:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479331.743116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkAQ-0005YF-EM; Tue, 17 Jan 2023 11:30:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479331.743116; Tue, 17 Jan 2023 11:30:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkAQ-0005Y7-BK; Tue, 17 Jan 2023 11:30:14 +0000
Received: by outflank-mailman (input) for mailman id 479331;
 Tue, 17 Jan 2023 11:30:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+MSG=5O=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pHkAO-0005Xl-Gi
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 11:30:13 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d5e3275-965a-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 12:30:11 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pHk9l-005upc-23; Tue, 17 Jan 2023 11:29:35 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 064423005C9;
 Tue, 17 Jan 2023 12:29:40 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 808C2201C94A4; Tue, 17 Jan 2023 12:29:40 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d5e3275-965a-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=4dd1nPHSajeYfo138KPDTbRosWVLWxAJMDJHQavnKFE=; b=a6aKGxvXjkHxURoMeXnOaMqY7T
	JQ6hF/xzb92HYIEgt3cwLktllTNNk0uf/8i5bbNnySPTeuxsX6qpz/UwxLZp7ssM/Me7Rd7UePSoi
	1h5O0FpASKQtBAiJJliEUMPu7ESmw8EhriLJ28cKLTingDGX+rwJCIA+W1U32XJXDma/YhLvEProo
	KslYBSI+iHBVUUouKqpwqUsONdZeKX+M5hvTww6sI/hoPu3pmX/2/tdviLIXhf97GwM88MtqTuUVm
	8ovafHUDnVczSNpbj+BUM22p212dGl6S75mHVEVAsZij5E4QNhuosMnbeSbzHLOX56J0pzF2OZC7A
	whTdMhcw==;
Date: Tue, 17 Jan 2023 12:29:40 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Ingo Molnar <mingo@kernel.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?iso-8859-1?Q?J=F6rg_R=F6del?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 6/7] x86/power: Sprinkle some noinstr
Message-ID: <Y8aGpHgSOczqeEHf@hirez.programming.kicks-ass.net>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.888786209@infradead.org>
 <Y8Zq2WaYmxnOjfk8@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Y8Zq2WaYmxnOjfk8@gmail.com>

On Tue, Jan 17, 2023 at 10:31:05AM +0100, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > +	/*
> > +	 * Definitely wrong, but at this point we should have at least enough
> > +	 * to do CALL/RET (consider SKL callthunks) and this avoids having
> > +	 * to deal with the noinstr explosion for now :/
> > +	 */
> > +	instrumentation_begin();
> 
> BTW., readability side note: instrumentation_begin()/end() are the 
> misnomers of the century - they don't signal the start/end of instrumented 
> code areas like the name falsely & naively suggests, but the exact 
> opposite: start/end of *non-*instrumented code areas.

Nope, they do as they say on the tin.

noinstr void foo(void)
{
}

declares the whole function as non-instrumented.

Within such functions, we demarcate regions where instrumentation is
allowed by:

noinstr void foo(void)
{
	instrumentation_begin();
	/* code that calls non-noinstr functions goes here */
	instrumentation_end();
}

(note the double negative in the comment)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 11:43:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 11:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479342.743138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkNU-0007L5-DW; Tue, 17 Jan 2023 11:43:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479342.743138; Tue, 17 Jan 2023 11:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkNU-0007Jc-6g; Tue, 17 Jan 2023 11:43:44 +0000
Received: by outflank-mailman (input) for mailman id 479342;
 Tue, 17 Jan 2023 11:43:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pwid=5O=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHkNT-0007AO-68
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 11:43:43 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on2041.outbound.protection.outlook.com [40.107.100.41])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 304a4461-965c-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 12:43:42 +0100 (CET)
Received: from DS7PR05CA0059.namprd05.prod.outlook.com (2603:10b6:8:2f::32) by
 BY5PR12MB4082.namprd12.prod.outlook.com (2603:10b6:a03:212::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 11:43:39 +0000
Received: from DS1PEPF0000E62F.namprd02.prod.outlook.com
 (2603:10b6:8:2f:cafe::6) by DS7PR05CA0059.outlook.office365.com
 (2603:10b6:8:2f::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Tue, 17 Jan 2023 11:43:39 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E62F.mail.protection.outlook.com (10.167.17.133) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Tue, 17 Jan 2023 11:43:39 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 05:43:38 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 03:43:38 -0800
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 05:43:37 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 304a4461-965c-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V8P4fWeSrJtDfaKx2FdXjDKzBtdx5AnwvjjdyL+EtL7BqgTFycXK1Xp0Xmmjasp1MnHs4LxLzFlYNnuOY5ESHoNDp4+2U6qHJ/s5SFNyVmWkRt8F+volDgkLq+rLnLtUfIwzXqLTszNzCruYEvGQkUP9mn2psq7Be2xx3bq54EoiJy2+sj03whdcXdQz1YaVNyf1hH60srpkpkFGaBeM5VJIn/rk4RdvTF8z8cj7fm5lYkwtSbJi+oPfO0g+VpRq+pALcHaqbq5kDNUm9sXhW9zTcZNv3y1ZGxSaCRriaknePyJs7VDpF7USwCO0Fu7qFFsfHu9CHMjgFNgAibJcVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=k1fBAnuXoU+4MFAl8nEfyIsVLvgTGQN8PGjh9Nb+57k=;
 b=nZnSP2IbBdz6Lfk8VS/wBYhB8M2NtKr0y0UmWZqAGO2Xa0xbjI/Py9Piwab8d/zDBmFhxH3ahKu6vDjBle4kAFArTerJPB2RfXgMUNH2W+BcPutZDvKf1X0ktpWUuOu80EtAmngrPBO7nrEj0zEmClyLQ3inzhw1HhEr/RagGMGkm+tAXa4ryUZB+f2TSsnrnVFIq+fk8VCGvE4BGyBk6/KdbxZbgSasNBoGMaiAroXYXvz8HJLcEacMRu8wEl8N4/jRPpBt7vZU3CNXqqneK5fJjP9mwnV7n7Kin+oCdTw5QSVpdkPTf6XLplJyQ2mdUBl4j0m96Y5y38MLmK0aEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k1fBAnuXoU+4MFAl8nEfyIsVLvgTGQN8PGjh9Nb+57k=;
 b=NhydvpAILfrSbSNiglqAU7nW/MVK6aSDKxUcek0wHhP/inZpIEkPZPeIVvEOYDvEsSsuLEt6AfFAQo0ZI8YbF0UNPkGJowzXEysYpxIHSfB0qL3usDbNfzrf05pOFVNcjq2noO6IYSZF0dLrQ5DG3Aw3k64e5+JD5DyjdjCE+Mc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 2/3] xen/arm32: Remove unused macro FRAMETABLE_VIRT_END
Date: Tue, 17 Jan 2023 12:43:31 +0100
Message-ID: <20230117114332.25863-3-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230117114332.25863-1-michal.orzel@amd.com>
References: <20230117114332.25863-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E62F:EE_|BY5PR12MB4082:EE_
X-MS-Office365-Filtering-Correlation-Id: 4fd8a623-83ad-4af9-fa16-08daf8801367
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1Hx8lleiCoJ+gN328j5QyD2wwLBMtc31KLL5vmkXa9cuvED1SN+u55bxPLDo1h4fW/47W1M7uykY9R/3Py+82ZXKs2c/aX/rD1T2M9TiMuVfnVa+SUdWThwj36kfmjOZrfgVO8Zge4UjHtX029T9Lj9wX45SyoXWqK6oXex5stMTDe5r2XgysPi6mN40d7KZdg6rVoKbvjWICachAEqj2KjraxipbuwYXHBQBgDQoU41l5mvYDjv7KWHTipqB8JeHFi7y0/+ScUKltZ5qj7g1CWexD3zICmZQxsL2YklWP2P7vyktZ4QVABFRynsLyx/b4jm5BsL04TQw/BRzn8qU0kaYbUtRE2Gav7QcS2DFHW+IMdTnkDi1zqgJMF3YTiOHvu2EEiknAvx7h/FAr3agsAz0JqzmQS75NXiYUn4prw+SvzsZGIJgMGB6KZd9KzNiNjtYiNmYiGc12GQ4yULQEpqs2uCZpbtOq74Jjlp3cj3B/b6HHcb04iMwdr6NOV0FYcWA7sw7E+v3dZ7LO9ZTdMCruJI4KZqA0/WJKMJLjrqpqMhvJu0LRVYJnuIl9sYekCBxOyh4biqijU6wr28Hc62Vep3fhW0axk8dCRPG5GPiyDRe1fLMZENn+PEwEjBKF7m/SuikBQW/bIfHa5PHXlQGfMWNR6DUBmgNwge9Ww390U78IWQ9sFjo0IzUOWNC4fXTWQ/F+GK7zadzzauv0svRJ2EmcpMyIGWfmi3n98=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39860400002)(346002)(376002)(451199015)(40470700004)(36840700001)(46966006)(36756003)(356005)(86362001)(6916009)(8936002)(70586007)(44832011)(8676002)(70206006)(2906002)(4326008)(4744005)(81166007)(82740400003)(36860700001)(83380400001)(5660300002)(40460700003)(316002)(6666004)(54906003)(41300700001)(40480700001)(82310400005)(478600001)(426003)(2616005)(336012)(1076003)(47076005)(26005)(186003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 11:43:39.0648
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4fd8a623-83ad-4af9-fa16-08daf8801367
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E62F.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4082

This macro is unused and the corresponding one for arm64 has already
been removed as part of commit 6dc9a1fe982f ("xen/arm: Remove most
of the *_VIRT_END defines").

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Changes in v2:
 - move a change to a separate patch
---
 xen/arch/arm/include/asm/config.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 16213c8b677f..6661a41583c6 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -127,7 +127,6 @@
 #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
 #define FRAMETABLE_SIZE        MB(128-32)
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
-#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
 #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
 #define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 11:43:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 11:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479343.743160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkNZ-0007xY-Ke; Tue, 17 Jan 2023 11:43:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479343.743160; Tue, 17 Jan 2023 11:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkNZ-0007xL-Go; Tue, 17 Jan 2023 11:43:49 +0000
Received: by outflank-mailman (input) for mailman id 479343;
 Tue, 17 Jan 2023 11:43:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pwid=5O=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHkNY-0007u4-2y
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 11:43:48 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2083.outbound.protection.outlook.com [40.107.223.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3185a38d-965c-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 12:43:44 +0100 (CET)
Received: from BN9PR03CA0923.namprd03.prod.outlook.com (2603:10b6:408:107::28)
 by PH7PR12MB7870.namprd12.prod.outlook.com (2603:10b6:510:27b::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 11:43:40 +0000
Received: from BN8NAM11FT056.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:107:cafe::75) by BN9PR03CA0923.outlook.office365.com
 (2603:10b6:408:107::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 11:43:40 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT056.mail.protection.outlook.com (10.13.177.26) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 11:43:40 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 05:43:39 -0600
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 05:43:38 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3185a38d-965c-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bIlGlNDXojU95gIuu15c38zz1HhtMPUgbJcqGGBtk5UytZzaPE3cVTQiy5EdxaNKs1hRNxVM5ZomvethdPZpanIfHpd6Zc2tHw6WkBS37Uf/WtL7WrFy2Qbhgce153hDzUuq4xgJFRvlIMXkEBD4Wf8uKILsr8cN5Tx9TLdF3sAHrk4zww1arTBQVdynIzkC8irX+goTriB3sHPWgDD4MJ0keiXsCRhd/VdumLe0PQF7+DV0iQY8YuTAw8KjSvdzJ685VqeP/P62733LumVrDXOqdKamciF+ZiagGy5Y7QBKtWSJm1DTYj/zUCi87tp9pZeoUiOf51ViETFgRvkNmw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pBk4GWZPUaRaWdlny3PTQIfOw4ZMvGrW1IesvtQGku0=;
 b=jdCxzIOMSUKePSXGLSEdWoJAsjOkwgAoy2nopAdZuWRcWAQh53xPLmVfQdge2VLE3ESf9qBL8YWCsPFvqNMgB+6t8WX6nqUSy4S+2l3dpPzldwF9s92Pdm7krwypPV0ajIL399+tcuFj6tGi8dBW1xFwtBBcPDXMyVSrw0S42mzyGnlTODy61nCRZP1epVIJy7fNhnxZbgHNVPmF6ihDy8ylm7X4+CnRQMJLgbfmQf6WP/RuD3ksxBKbAiMNZO+KhqGet6GmwaMrI3I3ZmYEDyn7/LIaH6FusOtvpuqGH+xxLCALORtj2OuNhV/b/ASkXAXCYBRJQ6TRoz0FI58fZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pBk4GWZPUaRaWdlny3PTQIfOw4ZMvGrW1IesvtQGku0=;
 b=OHJbCktYU6FqtrNO4vxFqitlZ+NVJLT4FPhKTVvcqFVvryV4tcnklIPGJUKzL0iUBQDj4c7wYQhu27nB2M2jl288U7XwVE9NCLcmMNDdMbbMB+pw75kzqeASoYNxAN0VSvMxDdzSXCLB0LpI1+ceM0WWujB6qVGXl5A3w5Ak1Qw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 3/3] xen/arm: Harden setup_frametable_mappings
Date: Tue, 17 Jan 2023 12:43:32 +0100
Message-ID: <20230117114332.25863-4-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230117114332.25863-1-michal.orzel@amd.com>
References: <20230117114332.25863-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT056:EE_|PH7PR12MB7870:EE_
X-MS-Office365-Filtering-Correlation-Id: cab52fef-2382-48fd-479b-08daf8801426
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TLi6s1V0vFWJAuwCzu9SdER67/2xMVCxxY9ShPIu5IqcBHUkVhn+r8E3YP8Gk3lN3gF+VjXgo5XQ4j1RpXUNH7gLfeSLFsUOv5rVYhlAbsfUC8t0ZLeEYig8upKkZ07AJSuHniOyp/NLgvh+jiDXTChDxcgFmI2DwZAd267MHOGtlVnGB8gvHyE0LPd/CcL9M7CXLwreD6igNaosqz8MAI0BiqXo28nyLQLx96T025jSDLJL0ddYI9valbb8I5jcNyPf1iqwHNQPclTtzUeijTJl1a2SylEkke4KYxJcqqU9M8PfBWvQbD/tbPHD+iOGYZfd0izLZ7iLI7HEq+e2/gyoI8bofEtIRPkDVDZYXLfZb/mb+CF9qP/r8Uuitl1mobeRDsPMB+ccy7owHGHqVdYjy5n39pEq+wyAV2ieO1+Bk38TUHYSKghuGrv/b9sWepP6at1/cpynKwHwbDzNNQAX+SPcphA4lZCn01eBvvIzhHNNeWQF8wNVXJIbYKippda2FxP16YAtw742cjpOw0Y6B0/BrGpayXSR17XPtLjqEoZgEd6XE6OTaCtd3TY8z8ridYzk6woolC6kWYAVd4ZCGBhYhd56BJaVJM0bVRktNvRP41jkYVPmZcTIysSW/4uM5L2CTVMbZ+oSsMUNZE3AHqBbT7AjhG5yWQlfNbQt9rN+jDfFdvEGTAXFirZxFzxY32DuUhYEZbrLwzhwy/JH5XaaxvKXLVbV6LA8OOU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(39860400002)(136003)(396003)(451199015)(36840700001)(40470700004)(46966006)(2616005)(83380400001)(54906003)(4326008)(316002)(8936002)(336012)(6916009)(40480700001)(426003)(70586007)(47076005)(70206006)(8676002)(5660300002)(44832011)(2906002)(1076003)(36860700001)(41300700001)(81166007)(356005)(82310400005)(86362001)(478600001)(82740400003)(6666004)(186003)(36756003)(40460700003)(26005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 11:43:40.3433
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cab52fef-2382-48fd-479b-08daf8801426
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT056.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB7870

The amount of supported physical memory depends on the frametable size
and the number of struct page_info entries that can fit into it. Define
a macro PAGE_INFO_SIZE storing the current size of struct page_info
(i.e. 56B for arm64 and 32B for arm32) and add a sanity check in
setup_frametable_mappings to get notified whenever the size of the
structure changes. Also panic if the calculated frametable_size
exceeds the limit defined by the FRAMETABLE_SIZE macro.

Update the comments regarding the frametable in asm/config.h.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Changes in v2:
 - refactor panic message
 - move removal of FRAMETABLE_VIRT_END to a separate patch
---
 xen/arch/arm/include/asm/config.h |  4 ++--
 xen/arch/arm/include/asm/mm.h     | 11 +++++++++++
 xen/arch/arm/mm.c                 |  6 ++++++
 3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 6661a41583c6..d8f99776986f 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -82,7 +82,7 @@
  * ARM32 layout:
  *   0  -  12M   <COMMON>
  *
- *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
+ *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
  * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
  *                    space
  *
@@ -95,7 +95,7 @@
  *
  *   1G -   2G   VMAP: ioremap and early_ioremap
  *
- *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
+ *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
  *
  * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
  *  Unused
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 68adcac9fa8d..23dec574eb31 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -26,6 +26,17 @@
  */
 #define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
 
+/*
+ * The size of struct page_info impacts the number of entries that can fit
+ * into the frametable area and thus it affects the amount of physical memory
+ * we claim to support. Define PAGE_INFO_SIZE to be used for sanity checking.
+ */
+#ifdef CONFIG_ARM_64
+#define PAGE_INFO_SIZE 56
+#else
+#define PAGE_INFO_SIZE 32
+#endif
+
 struct page_info
 {
     /* Each frame can be threaded onto a doubly-linked list. */
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0fc6f2992dd1..1a94b52cce7e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -676,6 +676,12 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
     int rc;
 
+    BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
+
+    if ( frametable_size > FRAMETABLE_SIZE )
+        panic("The frametable cannot cover the physical region %#"PRIpaddr" - %#"PRIpaddr"\n",
+              ps, pe);
+
     frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
     /* Round up to 2M or 32M boundary, as appropriate. */
     frametable_size = ROUNDUP(frametable_size, mapping_size);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 11:43:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 11:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479341.743131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkNU-0007EC-1X; Tue, 17 Jan 2023 11:43:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479341.743131; Tue, 17 Jan 2023 11:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkNT-0007Cq-TI; Tue, 17 Jan 2023 11:43:43 +0000
Received: by outflank-mailman (input) for mailman id 479341;
 Tue, 17 Jan 2023 11:43:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pwid=5O=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHkNS-0007AO-FG
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 11:43:42 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2040.outbound.protection.outlook.com [40.107.223.40])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2fd2f1d3-965c-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 12:43:41 +0100 (CET)
Received: from DS7PR05CA0033.namprd05.prod.outlook.com (2603:10b6:8:2f::20) by
 BL1PR12MB5192.namprd12.prod.outlook.com (2603:10b6:208:311::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 11:43:38 +0000
Received: from DS1PEPF0000E62F.namprd02.prod.outlook.com
 (2603:10b6:8:2f:cafe::8d) by DS7PR05CA0033.outlook.office365.com
 (2603:10b6:8:2f::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Tue, 17 Jan 2023 11:43:38 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E62F.mail.protection.outlook.com (10.167.17.133) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Tue, 17 Jan 2023 11:43:37 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 05:43:37 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 03:43:36 -0800
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 05:43:35 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fd2f1d3-965c-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ca/QEn6HmGlsQ4o6PNPLIi5ftn6tFmu++O/WupoGI/hjZzIN7XBj5ljRRg4W5G0/C7v5iInvdNEC5mKGX5TK0mm/DIZuqyukiTJ4DqR/IXvLbt3yF9pKXHIjARV60H2iTNnu/FC8YUTq079VvmpyzlRd77vRcMQFzKuqIXf71Zo6XibyelLkDi57T3JryZt/AuvCAy+RjVuokWWQNlp5BiUBDyq++HHpTFmz0Gr6FZlyMvbllfc8wBFimFHyO4b/ypBZCkCJu55pVVyoXXaBu4ACK7vUMH2wyhmYPtmSkoRbkE/qu5ekBSxiKbWiVx7UyEnz652TOSKLQ0Thfnn6rg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iKLHhjoW+RSSaKho/VCljUTUCbTI/+fA3sXMWlAbEYE=;
 b=HKGNhNjgrOq8ca2QOvPEL8QzXvwrf5FdIOxCHKxOJtbEsWYqjHrya5M0uNT2AdrM6HziH4YuQ4Tf0dN4WK5DgglkL3q9QpF7xwsKJrXT35gDNFZsLJOouS/0Gq2SR/7SibK6QlDImCyqkcxPsGMbhj6odmV56Um8swaHmgYP73aSduLwjdorLd3+aUmI4+9L9aVQ/iu7gcMA/QnxDk5Nk018oziIxAJrvT/Kke8LmuXIKYFAOcWVt/qqe6+sJ08N1Gt+zfVxwTsjfh1UdT9ZT30HmKYSWdNyZH+BEN4cT6GeAEpsz/6AjlcQKWgsL+N0RWq0CEn+TMvS4lUHGd0YVw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iKLHhjoW+RSSaKho/VCljUTUCbTI/+fA3sXMWlAbEYE=;
 b=f/FmQN8uR7QExZmYlJ1TItf4e1e0PuArIVsnLckrFYnN+zox5o7j3klIkxnthlzViuvnrxfbsQWoH9cy6yKZQueTLJFpBBQIuG/OIMVzC+0n81P7nq1I2Z+7T8aBZ9o6YmSSDMsCYHOTu2HTj8vIIV0jtihIToyG6u12MHVFaUY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 1/3] xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
Date: Tue, 17 Jan 2023 12:43:30 +0100
Message-ID: <20230117114332.25863-2-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230117114332.25863-1-michal.orzel@amd.com>
References: <20230117114332.25863-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E62F:EE_|BL1PR12MB5192:EE_
X-MS-Office365-Filtering-Correlation-Id: 5244ce5b-9571-4ca8-8ef8-08daf8801279
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	m3Iz2NSFNJ3PHEbNIJSOTh56dwH1Obxbun54SWCBrS1/+CC9TL19Ki4mDXfX5SO3jy85IGCDsKQuwWgO9eGxyMJGBxk87MBRE+sm7YgxKAUk34sfmyQFx9Yta+tSMouMLFzI8CLIoBqFfI3+Tuv84yYEwnbNYp7wfpeZTpfAgfs/X334/IQbLtPX/8TWoC0YJozhr/y5zSi4u0JcqXoAV8uUNdRsEczY3ZBHBQOKQRHwBfqvxRHhkItLhJYmGPyBq67rQImC66WtLenSl/GB6jYs9nGsOoZ1DXv9K6DaE8nEGMrgJzOVBc85b7C/coRAstOwk/oFjYESzllo8oQAsdn/g4j9i1MTdhW/ak3LB/NtrBTB4jWvwvHJnzPZSlIH7xOpPwP1Kr0q0em7f37B7tqVdpxwqhTTN5KfdTWbjW4t+WcntDb5M4CtiBLig2dWF90GakF4uoLQiOqWxoSXT4os6yPjhwbrpUOB5tOqp7detGHKU0oCIrYNZBxco1KhQ2cKMQ85p2FLomgwDdsmRb1pcyxboMjwfvq5SDUB04bkm+45F4FTBt1JMSRMXA3H9AGL12dwcHEo2Efk6d13YGPMFt9IZzeBbfwUNgcrM5EY76PSTG4vQG6ygWhYtZAGW3gOKCpDvKeg3D0H11W29iaTyzGmiX+7WwZNe09k9wwNij0u1+95P7dCBtxROalc895fPBgpCcP4VSaDEGqfiUAS+s9Ha8PSrtXtalKNUrc=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(136003)(376002)(396003)(39860400002)(451199015)(40470700004)(46966006)(36840700001)(82740400003)(83380400001)(81166007)(36860700001)(86362001)(356005)(8936002)(4326008)(2906002)(70586007)(5660300002)(8676002)(44832011)(6916009)(70206006)(41300700001)(82310400005)(40480700001)(2616005)(186003)(26005)(47076005)(336012)(426003)(1076003)(6666004)(316002)(54906003)(478600001)(40460700003)(36756003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 11:43:37.5023
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5244ce5b-9571-4ca8-8ef8-08daf8801279
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E62F.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5192

The direct mapped area occupies L0 slots from 256 to 265 inclusive
(i.e. 10 slots), resulting in 5TB (512GB * 10) of virtual address space.
However, due to an incorrect slot subtraction (taking only 9 slots into
account), DIRECTMAP_SIZE is set to 4.5TB instead. Fix it.

Note that we only support up to 2TB of physical memory so this is
a latent issue.

Fixes: 5263507b1b4a ("xen: arm: Use a direct mapping of RAM on arm64")
Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Changes in v2:
 - update commit msg making it clear that this is a latent issue 
---
 xen/arch/arm/include/asm/config.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 0fefed1b8aa9..16213c8b677f 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -157,7 +157,7 @@
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
 
 #define DIRECTMAP_VIRT_START   SLOT0(256)
-#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
+#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))
 #define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
 
 #define XENHEAP_VIRT_START     directmap_virt_start
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 11:43:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 11:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479340.743126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkNT-0007Al-PK; Tue, 17 Jan 2023 11:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479340.743126; Tue, 17 Jan 2023 11:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkNT-0007Ae-M5; Tue, 17 Jan 2023 11:43:43 +0000
Received: by outflank-mailman (input) for mailman id 479340;
 Tue, 17 Jan 2023 11:43:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pwid=5O=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pHkNR-0007AO-OE
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 11:43:42 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2085.outbound.protection.outlook.com [40.107.243.85])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2ec3cec6-965c-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 12:43:39 +0100 (CET)
Received: from BN9PR03CA0313.namprd03.prod.outlook.com (2603:10b6:408:112::18)
 by IA1PR12MB6604.namprd12.prod.outlook.com (2603:10b6:208:3a0::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 11:43:36 +0000
Received: from BN8NAM11FT046.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:112:cafe::8e) by BN9PR03CA0313.outlook.office365.com
 (2603:10b6:408:112::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 11:43:36 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT046.mail.protection.outlook.com (10.13.177.127) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 11:43:35 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 05:43:35 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 03:43:35 -0800
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 05:43:34 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ec3cec6-965c-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VFNQG5vjTqgO+K6gzvk4yxzZej2wPLDh4t3HoiN8BmkBCNjQsVeGDZJfYPZ08A1YB6FcPAkPV0FamN151YHzCORuUU8Jswu1IzPg2q0XCXxdt4QKtaTTwYKataKZhrkSezJeFvNS4yKBlTLQMWfD0SKf4cqnmEKHBSZR7Jb1fxhS/9BIj7vs3jVCcwXWma9kRDO16Mv/SBIUsXcSOc37WWM2mtovVkzhL2t/kotw+qTl4YOUI5QQlvsA/rRYEXSQB5vcgkVlggd4ptv5Cr+L7AePnMKyhDO7uCtAYhRf3Vqb2Q2+0ebuoWdaHFX7oh9LxQ0/sqNebtrZJBKfN1P5iQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Jf6nKNHLvOErBL72flx6M70+gIIF/nfl192wzeKo9EE=;
 b=IGlhrBR1nCtJ6c7gk1IGU1vUFD58g/5E2OIPX2Dl5Z9VbRZJz/ckmxZRFrjdi/RWXu8BKsJldxrmXpC/I57fbJJxaoGSx/XzC7YRc8zj7fbGgJ0x/w2fl52mYPsYenp1rjWm8ISDLvYeBaAhSLekf85VCFkKGEEfnCKURn6FDsjjDxwvUrWDvqCmG2f3Y+phBHrhvzzUshOSD5ToRyCOybES/o6bR2PHQNwBjYzodXbZSCaYSfkxLJ77trQAckyT5XsRmhUEkAfnEkfvndF2kpcnTQo9vZBD7EWZa7T1LMzuB1eqIj1sdPwoayU4ahrhsSPAFUdsxh2A9ulRTul77w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Jf6nKNHLvOErBL72flx6M70+gIIF/nfl192wzeKo9EE=;
 b=aFO07387uofMwM688gR3X0nP6vCgLkI6CEtdduUhXDhl7V2pxeoulchKz52FOHzg0R2utCeN0OKgCRdmXpZBTnKUdoJIjOsYpBQ2u14xPZai6ZilWaeRJSv7gttkAb7sBd+wFMWkzeaVb4yKkCqHuKxjepglbkohIpY59F7nS04=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 0/3] xen/arm: Frametable hardening and config.h cleanup
Date: Tue, 17 Jan 2023 12:43:29 +0100
Message-ID: <20230117114332.25863-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT046:EE_|IA1PR12MB6604:EE_
X-MS-Office365-Filtering-Correlation-Id: 7ad1f196-4852-45f6-00e6-08daf880117e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+mbqyj2KqGwgGPNONFO2Xwvd7Y4NNFf+WOND1FJPLPqr8uQ5Gt8KxRFSmZuaVFVY7D99qri2VpkNxyyrTMJdNRIkOUip4g4TyKxp/mgvlaBr+uNmYBglzioPPyBiVNfLqDQyLJUwh3FksjGG4+v1gvYdunycfAY/Tlb2fltRqKl2Xe63I2YZPXmQ4yHi4okIHv5K+VdjB4FyD17dbhK/aKewAOVwI28GcP+c4n3l/5T0qpEJ/Q7Sh4KZZM3pB+2f3PUnt900oOm3T+xe0vhJRHgEisBMzWcgfHJGyFQ0a61FPjRgZ6G/wYIvOo5L8FBKCb+A7z3Lg8TBxyfjTm1/FKaavkkWXPrB3RXiBttNITj44TQiCaCLL8Ap2YPZYwcQFegy5632/b40jaQrEBUNAjbdQarHV+QOPMPFIDqJb+mDLfj2/lg26IkfSrH82O22ByJSMfquWx/bRfIpMlk51laSXtjM04k7EgV5AV5u5bkT/eFQ254QU7v70gg1GJ/6NRDMvJjnZhat9iwOYgy+gCM8XmwLMdg5E6bgPFk4EtneaOfgCdXe0v54/BuUFgmzFNbKCcdsEupWOJYAwrDs3sbI3TTkosY4SvosIQL4pJmq++0ltA8PLIz1mCdo+Fpa3M/jlXg9UNb4O2fF8qOcL0wOTST2qluU/B3K9rGAPni8e1nCBJZdplxRGdt1ec1uq3keKhfckc6KCyTpUgxtfDpEEdAQTb8uWVJNz8/4Gsk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(136003)(39860400002)(396003)(346002)(451199015)(36840700001)(40470700004)(46966006)(2616005)(41300700001)(70586007)(186003)(70206006)(47076005)(426003)(8676002)(6916009)(4326008)(26005)(82310400005)(36756003)(86362001)(336012)(36860700001)(82740400003)(5660300002)(1076003)(8936002)(54906003)(478600001)(4744005)(40480700001)(316002)(83380400001)(356005)(6666004)(2906002)(81166007)(44832011)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 11:43:35.8538
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7ad1f196-4852-45f6-00e6-08daf880117e
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT046.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6604

The first patch fixes a bug due to incorrect DIRECTMAP_SIZE calculation.

The second patch removes unused macro FRAMETABLE_VIRT_END.

The third patch hardens setup_frametable_mappings by adding a sanity check
on the size of struct page_info and panicking if the calculated size of
the frametable exceeds the limit.

Sent together for ease of merging.

Michal Orzel (3):
  xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
  xen/arm32: Remove unused macro FRAMETABLE_VIRT_END
  xen/arm: Harden setup_frametable_mappings

 xen/arch/arm/include/asm/config.h |  7 +++----
 xen/arch/arm/include/asm/mm.h     | 11 +++++++++++
 xen/arch/arm/mm.c                 |  6 ++++++
 3 files changed, 20 insertions(+), 4 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 11:54:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 11:54:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479365.743171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkYF-0001xP-RE; Tue, 17 Jan 2023 11:54:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479365.743171; Tue, 17 Jan 2023 11:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkYF-0001xI-N5; Tue, 17 Jan 2023 11:54:51 +0000
Received: by outflank-mailman (input) for mailman id 479365;
 Tue, 17 Jan 2023 11:54:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Is34=5O=gmail.com=mingo.kernel.org@srs-se1.protection.inumbo.net>)
 id 1pHkYE-0001xC-2t
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 11:54:50 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be7c6cef-965d-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 12:54:49 +0100 (CET)
Received: by mail-ej1-x62a.google.com with SMTP id tz11so10370306ejc.0
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 03:54:49 -0800 (PST)
Received: from gmail.com (1F2EF20B.nat.pool.telekom.hu. [31.46.242.11])
 by smtp.gmail.com with ESMTPSA id
 ed6-20020a056402294600b00499e5659988sm7369870edb.68.2023.01.17.03.54.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 17 Jan 2023 03:54:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: be7c6cef-965d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:sender:from:to:cc:subject:date:message-id
         :reply-to;
        bh=mOTlX+2xkK1p+GnUE+ayQ0fDfjp/RAyyCQOOAvZYkGM=;
        b=q4QnDoUEQYCba0P9MfjTlEKkl7lCSelI40pAA4jAdOmMC1oAM3FllECRDClUfPVFbP
         PnW5gLo4kMoOdLLDtLwPYLR6Aao8uDm+YFN867efw9lzEzRJg6MshN/+ScEheHYnByNv
         jUsVlnfuU91TqZtqHCMVJt3m8awpCzNoqH//WBqgrkluuqGEjoYpa9IlclTbK6SOqhFM
         i+dkG7TqXcFUu/DGmTGUkp85E0UsFOPCrz6GvqdaY00GIUxHz8PmT2izI5NJeXRwqrf9
         AyLILS4Z6lmVMhAOIbMROAtG1MoLbx2g1tS+6E0ieu8FgLy6VRbawgpCNpc/ZPD3UygR
         k3hA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:sender:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=mOTlX+2xkK1p+GnUE+ayQ0fDfjp/RAyyCQOOAvZYkGM=;
        b=5AG6LcPD8f63EsWDnZEnHiCBMiAWh6+Dv9nXalQgKqWlCwVK5d1twdkebQOLB2ioSN
         0DSSnu6FghG2qMSAi2NWzNI2Erv1zPHBGUV2EAXpm4BjIa2i4OMVWJSfcshs8M18TJFD
         VpRCPgpc9bI/1Dkr+xvcaoGjmbi1uPKF3AnM4pOuZsgxEBNc/+z9IUrDaihg2uXLqHIS
         Jqvn+WBiw861mCpVf0mag1OKZquhMXSMKvSBGx7z9vSLO+sl4qW/yP+vVqESarFvFAXE
         Azka3zyaxxF54a3AIOhO1bFub+SrFRRtNhOmd7BlJkf9uaC1HKTvYEW+Od8Idl55ZZiG
         UoZQ==
X-Gm-Message-State: AFqh2koSdl/o+iVQH2fpvrxWY7iEepBvzURXCSL/Ma8xIxflI4InbJOc
	VcFeZbfM1i+sxi2CF8BycsM=
X-Google-Smtp-Source: AMrXdXte64K/uel2SaxiSjwgohbfF0SMbp29SRu/54hYnBRnxBmiku+YJvwFUpzRlNSfAzolYRJIGA==
X-Received: by 2002:a17:906:80cd:b0:86d:b50f:6b00 with SMTP id a13-20020a17090680cd00b0086db50f6b00mr2326176ejx.43.1673956488708;
        Tue, 17 Jan 2023 03:54:48 -0800 (PST)
Sender: Ingo Molnar <mingo.kernel.org@gmail.com>
Date: Tue, 17 Jan 2023 12:54:46 +0100
From: Ingo Molnar <mingo@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?iso-8859-1?Q?J=F6rg_R=F6del?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 6/7] x86/power: Sprinkle some noinstr
Message-ID: <Y8aMhihmrahvFnrU@gmail.com>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.888786209@infradead.org>
 <Y8Zq2WaYmxnOjfk8@gmail.com>
 <Y8aGpHgSOczqeEHf@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Y8aGpHgSOczqeEHf@hirez.programming.kicks-ass.net>


* Peter Zijlstra <peterz@infradead.org> wrote:

> Nope, they do as they say on the tin.
> 
> noinstr void foo(void)
> {
> }
> 
> declares the whole function as non-instrumented.
> 
> Within such functions, we demark regions where instrumentation is
> allowed by:
> 
> noinstr void foo(void)
> {
> 	instrumentation_begin();
> 	/* code that calls non-noinstr functions goes here */
> 	instrumentation_end();

Indeed, you are right - I should have gotten more of my morning tea ... :-/

Thanks,

	Ingo


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 12:12:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 12:12:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479380.743182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkpS-0004Xv-KU; Tue, 17 Jan 2023 12:12:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479380.743182; Tue, 17 Jan 2023 12:12:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkpS-0004Xo-HQ; Tue, 17 Jan 2023 12:12:38 +0000
Received: by outflank-mailman (input) for mailman id 479380;
 Tue, 17 Jan 2023 12:12:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHkpR-0004Xi-Fd
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 12:12:37 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3a8d6398-9660-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 13:12:36 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BEC223436D;
 Tue, 17 Jan 2023 12:12:35 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 56E8313357;
 Tue, 17 Jan 2023 12:12:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id IpxtE7OQxmOhWQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 12:12:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a8d6398-9660-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673957555; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=CSKjGcVtErec61EkiJFfPqVdWcTFuQbgLIfRoeFriNc=;
	b=WAT3MhqdgCQwS75O1H8jwO3DiJDdKXS9Fi57/llwf2/DLHXzwJGLv0V6KOyg0XeCHtVo/n
	ciI4Ds29AYXbmQ7yRLpI7OIqP+FXC6AmCnZhPqqEBXdMqhHtUc5M7qtoN5K/ihtFmTpPtK
	Ig+7m80Mp8QSRlSFN/mUJ3eUnwzpXyM=
Message-ID: <eda8d9f2-3013-1b68-0df8-64d7f13ee35e@suse.com>
Date: Tue, 17 Jan 2023 13:12:34 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH linux-next v3] x86/xen/time: prefer tsc as clocksource
 when it is invariant
Content-Language: en-US
To: Krister Johansen <kjlx@templeofstupid.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Jan Beulich <jbeulich@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Marcelo Tosatti <mtosatti@redhat.com>, Anthony Liguori
 <aliguori@amazon.com>, David Reaver <me@davidreaver.com>,
 Brendan Gregg <brendan@intel.com>
References: <20221216162118.GB2633@templeofstupid.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20221216162118.GB2633@templeofstupid.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------38t5y0B0Em8nbY3EMVit4Vgh"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------38t5y0B0Em8nbY3EMVit4Vgh
Content-Type: multipart/mixed; boundary="------------0yf02OYAWQlmBNc7Jtlc9pcv";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Krister Johansen <kjlx@templeofstupid.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Jan Beulich <jbeulich@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Marcelo Tosatti <mtosatti@redhat.com>, Anthony Liguori
 <aliguori@amazon.com>, David Reaver <me@davidreaver.com>,
 Brendan Gregg <brendan@intel.com>
Message-ID: <eda8d9f2-3013-1b68-0df8-64d7f13ee35e@suse.com>
Subject: Re: [PATCH linux-next v3] x86/xen/time: prefer tsc as clocksource
 when it is invariant
References: <20221216162118.GB2633@templeofstupid.com>
In-Reply-To: <20221216162118.GB2633@templeofstupid.com>

--------------0yf02OYAWQlmBNc7Jtlc9pcv
Content-Type: multipart/mixed; boundary="------------jUB04oQjKhCxGGAPi0Pav4Ia"

--------------jUB04oQjKhCxGGAPi0Pav4Ia
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16.12.22 17:21, Krister Johansen wrote:
> Kvm elects to use tsc instead of kvm-clock when it can detect that the
> TSC is invariant.
> 
> (As of commit 7539b174aef4 ("x86: kvmguest: use TSC clocksource if
> invariant TSC is exposed")).
> 
> Notable cloud vendors[1] and performance engineers[2] recommend that Xen
> users preferentially select tsc over xen-clocksource due the performance
> penalty incurred by the latter.  These articles are persuasive and
> tailored to specific use cases.  In order to understand the tradeoffs
> around this choice more fully, this author had to reference the
> documented[3] complexities around the Xen configuration, as well as the
> kernel's clocksource selection algorithm.  Many users may not attempt
> this to correctly configure the right clock source in their guest.
> 
> The approach taken in the kvm-clock module spares users this confusion,
> where possible.
> 
> Both the Intel SDM[4] and the Xen tsc documentation explain that marking
> a tsc as invariant means that it should be considered stable by the OS
> and is elibile to be used as a wall clock source.
> 
> In order to obtain better out-of-the-box performance, and reduce the
> need for user tuning, follow kvm's approach and decrease the xen clock
> rating so that tsc is preferable, if it is invariant, stable, and the
> tsc will never be emulated.
> 
> [1] https://aws.amazon.com/premiumsupport/knowledge-center/manage-ec2-linux-clock-source/
> [2] https://www.brendangregg.com/blog/2021-09-26/the-speed-of-time.html
> [3] https://xenbits.xen.org/docs/unstable/man/xen-tscmode.7.html
> [4] Intel 64 and IA-32 Architectures Sofware Developer's Manual Volume
>      3b: System Programming Guide, Part 2, Section 17.17.1, Invariant TSC
> 
> Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
> Code-reviewed-by: David Reaver <me@davidreaver.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------jUB04oQjKhCxGGAPi0Pav4Ia
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------jUB04oQjKhCxGGAPi0Pav4Ia--

--------------0yf02OYAWQlmBNc7Jtlc9pcv--

--------------38t5y0B0Em8nbY3EMVit4Vgh
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPGkLIFAwAAAAAACgkQsN6d1ii/Ey8c
Ggf/TH7i0ArtUrjLpE2goKhE/s4MGVdA5iBgeHjPwKiBk+WvpFQU3opjL9xHZQgd9E/C2IbnmWJT
8zrmDW/s1RxcFMIHoEhcVmWUznSXH5b2pS7SYSIf3TvfT2fnmyNB+cRJwGYqdAMvf/zWMa6hp5RN
8s/2GmPDLEAIQOlWoYUNnvyS/2k7e4JBaOFNQdtM0m03ncfmK4qy0DH1/nYLQIU+PpgYGpjnc6dU
x53LMdwghXU9MpiRvSf+/zHgcnmiD41LQkVWH3rpdMohZ5QFUJMtrRqdDsZ4r+lt3ZaL/V4kdSFG
5YyK3SIODcEOMvYV8YCCr3f4JU483g3XGJkVBiHS6Q==
=N90w
-----END PGP SIGNATURE-----

--------------38t5y0B0Em8nbY3EMVit4Vgh--


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 12:23:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 12:23:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479386.743192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkzY-00062M-IM; Tue, 17 Jan 2023 12:23:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479386.743192; Tue, 17 Jan 2023 12:23:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHkzY-00062F-Fd; Tue, 17 Jan 2023 12:23:04 +0000
Received: by outflank-mailman (input) for mailman id 479386;
 Tue, 17 Jan 2023 12:23:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+MSG=5O=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pHkzW-000627-Ua
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 12:23:03 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae38c871-9661-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 13:23:00 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pHkzP-009fbF-JC; Tue, 17 Jan 2023 12:22:55 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 361363005C9;
 Tue, 17 Jan 2023 13:22:41 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 15B9420B1647C; Tue, 17 Jan 2023 13:22:41 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae38c871-9661-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=KPcxEANcwfX3iqDYXrP0qaBKg/IIzvrkL1jXD2O4/YU=; b=T37/fdyH3s0FoXrIwst2EWKWW2
	gWZV1W+E91gwq1HK6a/49VgIdqo+idZ6XnC5EolsKpR9J6SEA4x5mV228YM4nl/LqVkHXAv87cJHg
	AJE7gQpSpVbHwcysQlQQvz5d2KUQpnTwfNRun33w361Zbw4/eNEQqe1AbkaGLU75oBwowWHcS0HL7
	nP1LPydHLx7ZFZlAjm46I9fu01OOn1589OFzsMSMprpw34deb7I+GBxJa0tA8ywqvRqcbv45sYaHp
	huG6eQ3fAi/BuUQkxeMmOF/vQ/aRRDhp9PWAihhrUxezDkp53guytzHb8K3MN5IuPYzM0htYLo7Ra
	DUtJsC+g==;
Date: Tue, 17 Jan 2023 13:22:41 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Cc: Juergen Gross <jgross@suse.com>, linux-kernel@vger.kernel.org,
	x86@kernel.org, virtualization@lists.linux-foundation.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/paravirt: merge activate_mm and dup_mmap callbacks
Message-ID: <Y8aTEfpw0Vm6g0hC@hirez.programming.kicks-ass.net>
References: <20230112152132.4399-1-jgross@suse.com>
 <3fcb5078-852e-0886-c084-7fb0cfa5b757@csail.mit.edu>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <3fcb5078-852e-0886-c084-7fb0cfa5b757@csail.mit.edu>

On Sun, Jan 15, 2023 at 08:27:50PM -0800, Srivatsa S. Bhat wrote:

> I see that's not an issue right now since there is no other actual
> user for these callbacks. But are we sure that merging the callbacks
> just because the current user (Xen PV) has the same implementation for
> both is a good idea?

IIRC the pv_ops are part of the PARAVIRT_ME_HARDER (also spelled as
_XXL) suite of ops and they are only to be used by Xen PV; no new users
of these must happen.

The moment we can drop Xen PV (hopes and dreams etc..) all these things
go in the bin right along with it.




From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:04:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:04:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479394.743204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHldO-0001vs-QL; Tue, 17 Jan 2023 13:04:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479394.743204; Tue, 17 Jan 2023 13:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHldO-0001vl-MZ; Tue, 17 Jan 2023 13:04:14 +0000
Received: by outflank-mailman (input) for mailman id 479394;
 Tue, 17 Jan 2023 13:04:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHldN-0001vL-4p; Tue, 17 Jan 2023 13:04:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHldN-0008E3-0m; Tue, 17 Jan 2023 13:04:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHldM-0002pw-Jn; Tue, 17 Jan 2023 13:04:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHldM-0007ue-JH; Tue, 17 Jan 2023 13:04:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xaqA79ds3pn0Ww4dCdlPDOYhhBE/GUlOjilqqNeL+uY=; b=DnMwxtrnwImIMZDhjmPBvTVdtq
	FkBwZApG/Abx3qFIfYWKj00fneVdDAnufBHNBqIIVpZILkcaLus7g0/fHUplSRVn+/gOZ8rqpUCHU
	KGgywqBFIJQ7yZst16+7mkHWVahDv7TTTQ0PDQ9N1kIG+tObPUjwM/+5eKZfBufa1hoM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175929-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175929: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-amd64:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-amd64-xsm:host-build-prep:fail:regression
    xen-unstable:build-i386-xsm:host-build-prep:fail:regression
    xen-unstable:build-amd64:host-build-prep:fail:regression
    xen-unstable:build-arm64-pvops:host-build-prep:fail:regression
    xen-unstable:build-arm64:host-build-prep:fail:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-arm64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:host-build-prep:fail:regression
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 13:04:12 +0000

flight 175929 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175929/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-armhf                     <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175734
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-i386-xsm                5 host-build-prep          fail REGR. vs. 175734
 build-amd64                   5 host-build-prep          fail REGR. vs. 175734
 build-arm64-pvops             5 host-build-prep          fail REGR. vs. 175734
 build-arm64                   5 host-build-prep          fail REGR. vs. 175734
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    5 host-build-prep          fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    5 days
Failing since        175739  2023-01-12 09:38:44 Z    5 days   13 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    4 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              pass    
 build-amd64                                                  broken  
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-xsm broken
broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-armhf broken
broken-job build-i386 broken
broken-job build-i386-xsm broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:52:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479402.743215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmNb-00076D-H3; Tue, 17 Jan 2023 13:51:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479402.743215; Tue, 17 Jan 2023 13:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmNb-000766-EB; Tue, 17 Jan 2023 13:51:59 +0000
Received: by outflank-mailman (input) for mailman id 479402;
 Tue, 17 Jan 2023 13:51:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmNa-000760-62
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:51:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmNZ-0000yj-Oh; Tue, 17 Jan 2023 13:51:57 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmNZ-0007LG-IX; Tue, 17 Jan 2023 13:51:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=oUnLgfiWq3z2+0zDMvwMCyiXqT8wN1dnNLinglaaqo0=; b=qCiNaL0LqYWMSp2InMC4iDpsOe
	I9/UBGj2H327cJOND9ggtF6yqTu7KcWJhtDOROFWOtgm5jwIpsQqXlb0VHf8Ok7jZfzCzMNcJlGQM
	Mtu+UbJA5l2tmfNEuaU9DcJPkixp/DbfJM25F7Pf4K9+FbdLNM2Wf88psK/CBZlf0z/0=;
Message-ID: <c31a5507-7b72-2f2b-63f6-0f5c89d7b666@xen.org>
Date: Tue, 17 Jan 2023 13:51:55 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/3] xen/arm64: Fix incorrect DIRECTMAP_SIZE
 calculation
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230117114332.25863-1-michal.orzel@amd.com>
 <20230117114332.25863-2-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117114332.25863-2-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 17/01/2023 11:43, Michal Orzel wrote:
> The direct mapped area occupies L0 slots from 256 to 265 included
> (i.e. 10 slots), resulting in 5TB (512GB * 10) of virtual address space.
> However, due to incorrect slot subtraction (we take 9 slots into account)
> we set DIRECTMAP_SIZE to 4.5TB instead. Fix it.
> 
> Note that we only support up to 2TB of physical memory so this is
> a latent issue.
> 
> Fixes: 5263507b1b4a ("xen: arm: Use a direct mapping of RAM on arm64")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:52:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:52:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479405.743226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmOA-0007Y0-Or; Tue, 17 Jan 2023 13:52:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479405.743226; Tue, 17 Jan 2023 13:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmOA-0007Xt-Lp; Tue, 17 Jan 2023 13:52:34 +0000
Received: by outflank-mailman (input) for mailman id 479405;
 Tue, 17 Jan 2023 13:52:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmO9-0007Xf-I4
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:52:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmO8-0000zn-Bh; Tue, 17 Jan 2023 13:52:32 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmO8-0007M3-6L; Tue, 17 Jan 2023 13:52:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=oBPWC3rQ8RWhxS2ur4J+eEU8B47Ku9t/BASpp8v21OU=; b=aEYJOvQDgDaIgUswLtydtowaOY
	Qmv/JAaxzSVbZXAtiwW8kwM5UyNBew+K2hhKH0Qu9YEsJtbkMFNwVTxjLscTfKguOKwDWT1pLeEG9
	DGYZuquq1ckQrKdds6WI41b6DuRf/FWKg1DtjslQFrVGuQDceC4+JR+oD6LFjIFner6Y=;
Message-ID: <37cfa6f0-9463-cf82-bac5-266e1f3c0c59@xen.org>
Date: Tue, 17 Jan 2023 13:52:30 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 2/3] xen/arm32: Remove unused macro FRAMETABLE_VIRT_END
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230117114332.25863-1-michal.orzel@amd.com>
 <20230117114332.25863-3-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117114332.25863-3-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 17/01/2023 11:43, Michal Orzel wrote:
> This macro is unused and the corresponding one for arm64 has already
> been removed as part of the commit 6dc9a1fe982f ("xen/arm: Remove most
> of the *_VIRT_END defines").
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:53:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:53:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479413.743237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmOt-00088L-1y; Tue, 17 Jan 2023 13:53:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479413.743237; Tue, 17 Jan 2023 13:53:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmOs-00088E-VH; Tue, 17 Jan 2023 13:53:18 +0000
Received: by outflank-mailman (input) for mailman id 479413;
 Tue, 17 Jan 2023 13:53:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmOr-000882-B7
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:53:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmOr-00010h-4e; Tue, 17 Jan 2023 13:53:17 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmOq-0007Oq-VO; Tue, 17 Jan 2023 13:53:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=PZK9h8f4B3/YhGgkEAQZnACh6sTHYilAU9mdpOD0ZJU=; b=OI4HR414XwshK/ulj1p5okxLpO
	4zo5NyC09MiWNSDEHgedqzKlN7DgDx74KjU1qg2yQC7ytQSKl/MXZr+4eCCyQcqn9kM74vTncOY1W
	MMjI4fTv6dA+Wjq/49BJbF+lXFe3wNrSU3cv3g4lq/n4ENzWmyEL9HNiO/JuJqRlJbhU=;
Message-ID: <0edc072e-40df-7c01-822f-67cb8b78a457@xen.org>
Date: Tue, 17 Jan 2023 13:53:15 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 3/3] xen/arm: Harden setup_frametable_mappings
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230117114332.25863-1-michal.orzel@amd.com>
 <20230117114332.25863-4-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117114332.25863-4-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 17/01/2023 11:43, Michal Orzel wrote:
> The amount of supported physical memory depends on the frametable size
> and the number of struct page_info entries that can fit into it. Define
> a macro PAGE_INFO_SIZE to store the current size of the struct page_info
> (i.e. 56B for arm64 and 32B for arm32) and add a sanity check in
> setup_frametable_mappings to be notified whenever the size of the
> structure changes. Also panic if the calculated frametable_size
> exceeds the limit defined by the FRAMETABLE_SIZE macro.
> 
> Update the comments regarding the frametable in asm/config.h.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:54:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:54:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479419.743248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPj-0000Gm-BN; Tue, 17 Jan 2023 13:54:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479419.743248; Tue, 17 Jan 2023 13:54:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPj-0000Gf-8U; Tue, 17 Jan 2023 13:54:11 +0000
Received: by outflank-mailman (input) for mailman id 479419;
 Tue, 17 Jan 2023 13:54:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHmPh-0000F6-HX
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:54:09 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65531a99-966e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 14:54:02 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65531a99-966e-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673963646;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=Abp/N1+vD7yY6WtrVffgBelI/6bXRwma3RvGpevqz/w=;
  b=a3gG9h+bTykBA/hyOlpefWDDylQNZN/fuQ9WmP2NYrm+meLieLm2qHiV
   KKtW8PCZYY8rqP0Gf1BQOmOJ43YwbnnhFNXGXK9JyTHkC0vZGzOk03YGd
   ldZZTKYSiAnP1uMkG69cWn2kKizhVRjK5Wkx7x8kr5m7ZzohZLjIrviis
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91898367
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:8B7uzKl3cj3f72PL9bHe12vo5gyKJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIdXWnVO66NZGOgKo0kaou+80gDvpSDm9JgHgs9qS4yQyMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icf3grHmeIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7auaVA8w5ARkPqgS5QCGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 eYDKhRdSDCtvLKnnLKaUtNpqtslCda+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglHWdTFCpU3Tjq0w+2XJlyR60aT3McqTcduPLSlQthfB9
 jOWpDqmav0cHNaBigK+zFuDvfLCvwbGBoJDROGCseE/1TV/wURMUUZLBDNXu8KRlUqWS99Zb
 UsO9UIGvaU0sUCmUNT5dxm5u2Kf+A4RXcJKFO834x3LzbDbiy6VD3YJZiRMY9snsIkxXzNC/
 l2GhdTyHhR0raaYD3ma89+pQSiaYHZPazVYPGldEFVDuoO4yG0usv7RZsx4EorlqP3bImHhn
 zCrtBI7q6oVqNFegs1X4mv7byKQSonhF1BqvVSNBDr6vmuVd6b+OdX2tAGzAeJoad/AEwLf5
 CVsd922trhmMH2bqMCarAzh9pmN7u3NDjDTiEUH83IJp2X0oC7LkWy9DVhDyKZV3iUsI2WBj
 Lf741852XOqFCLCgVVLS4ywEd826qPrCM7oUPvZBvIXPMcqJF7bpHE/PB/Lt4wIrKTLufhvU
 ap3jO72VSpKYUiZ5GfeqxghPU8DmXllmDK7qWHTxBW7y7uODEN5up9cWGZimtsRtfveyC2Mq
 oY3Cid/40kHOAEISnWNoNF7wJFjBSRTOK0aXOQMJ77TeFQ4Rjt/YxITqJt4E7FYc21uvr+g1
 hmAtoVwlAGXaaHvQelSVk1eVQ==
IronPort-HdrOrdr: A9a23:WLSF3KuaLYtDUuyFyHZYY2XD7skDsNV00zEX/kB9WHVpm6yj+v
 xG/c5rsSMc7Qx6ZJhOo7+90cW7L080lqQFhLX5X43SPzUO0VHARO1fBO3ZogEIcxeUygc379
 YDT0ERMr3N5VgRt7eG3CCIV+wO7fPC2pqO7N2uqEuET2tRGt1dB9ESMHflLqV0LjM2e6bQDP
 Cnl6x6T6LLQwVsUiy8bEN1JtQq97Xw5erbiQdtPW9d1DWz
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="91898367"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Anthony
 PERARD" <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: [PATCH 0/6] tools: Switch to non-truncating XENVER_* ops
Date: Tue, 17 Jan 2023 13:53:30 +0000
Message-ID: <20230117135336.11662-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

This is the tools side of the Xen series posted previously.

With this, a Xen built with long strings can be retrieved:

  # xl info
  ...
  xen_version            : 4.18-unstable+REALLY LONG EXTRAVERSION
  xen_changeset          : Tue Jan 3 19:27:17 2023 git:52d2da6c0544+REALLY SUPER DUPER EXTRA MEGA LONG CHANGESET
  ...


Andrew Cooper (6):
  tools/libxc: Move xc_version() out of xc_private.c into its own file
  tools: Introduce a non-truncating xc_xenver_extraversion()
  tools: Introduce a non-truncating xc_xenver_capabilities()
  tools: Introduce a non-truncating xc_xenver_changeset()
  tools: Introduce a non-truncating xc_xenver_cmdline()
  tools: Introduce a xc_xenver_buildid() wrapper

 tools/include/xenctrl.h             |  10 ++
 tools/libs/ctrl/Makefile.common     |   1 +
 tools/libs/ctrl/xc_private.c        |  66 ------------
 tools/libs/ctrl/xc_private.h        |   7 --
 tools/libs/ctrl/xc_version.c        | 206 ++++++++++++++++++++++++++++++++++++
 tools/libs/light/libxl.c            |  61 +----------
 tools/ocaml/libs/xc/xenctrl_stubs.c |  45 +++++---
 7 files changed, 250 insertions(+), 146 deletions(-)
 create mode 100644 tools/libs/ctrl/xc_version.c

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:54:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:54:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479420.743259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPl-0000Xb-JS; Tue, 17 Jan 2023 13:54:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479420.743259; Tue, 17 Jan 2023 13:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPl-0000XU-FL; Tue, 17 Jan 2023 13:54:13 +0000
Received: by outflank-mailman (input) for mailman id 479420;
 Tue, 17 Jan 2023 13:54:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHmPj-0000F6-VZ
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:54:12 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 679b7d47-966e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 14:54:05 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 679b7d47-966e-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673963649;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=ZqFqnz0+1fOZwU/dhH9PGLPysacHFErPc/30UTcR1D4=;
  b=Azz1uy/c5+2+HCC+YXXFszlKyYozhzQEvsldUOoX/koePwtTGhV9eXfa
   LjYbJO92V9RDMtSNrIakSuhFERanjJKtRIqoAV2tO+ZRdfqQOGu7JItI1
   rCvrROv8/7rtRtTaJ5CPCoQH2MKCv6PobAIxkUTcy4KlKw8+5mlFMleKg
   Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91898366
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:BCp8cKoZ2eYew1u3xms3v0bEsT1eBmJgZRIvgKrLsJaIsI4StFCzt
 garIBmBOv+PZmXzL48gbdzn9UpS7Z7dxtU2TgBr/y0wE38S8JuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpAFc+E0/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WxwUmAWP6gR5weHziZNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXACoWbDTTocKY++KqFulNnv4mANbtGZxK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVxrl6PqLVxyG/U1AFri5DmMcbPe8zMTsJQ9qqdj
 jOfrzWpWU9EXDCZ4Qii22uHmODqph72H5MqO4GTqNxonkLGkwT/DzVJDADm8JFVkHWWS99Zb
 kAZ5Ccqhawz71CwCMnwWQWip3yJtQJaXMBfe8Ul7Cmdx6yS5ByWbkAGQSRGc8cOr9ItSHoh0
 Vrhoj/yLWUx6vvPEyvbr+rK62roYkD5MFPuewceVgkhs//Djrpjn07Pb85ZFYKqiPjqTGSYL
 y+xkMQuu1kCpZdViP7qpwqf3GLESovhFVBsuFiONo6xxkYgPdP+OdT1gbTOxawYRLt1WGVtq
 5TtdyK2yOkVRa+AmyWWKAnmNOH4vq3VWNEwbLMGInXAy9hO0yT5FWyoyGsiTHqFy+5dEdMTX
 GfduBlK+LhYN2awYKl8buqZUpp1lvixSYy1B6mFNbKih6SdkyferElTibO4hTixwCDAb4liU
 XtkTSpcJSlDUvk2pNZHb+wczaUq1kgDKZD7HPjGI+Cc+ePGPha9EO5VWGZim8hltMtoVi2Jq
 YcAXyZLoj0DONDDjt7/qt9DfQpUcyZhW/gbaaV/L4a+H+avI0l5Y9e5/F/rU9A+90iJvo8kJ
 k2AZ3I=
IronPort-HdrOrdr: A9a23:t+4G/awqb7XrLUEP6eIdKrPw6r1zdoMgy1knxilNoHxuH/Bw9v
 re+MjzsCWftN9/Yh4dcLy7VpVoIkmskKKdg7NhXotKNTOO0AeVxedZjLcKqweKJ8SUzJ8+6U
 4PSchD4N2bNykGse/KpDOWPvxl6uOhmZrY4ts3zR1WPH1Xg3cL1XYHNu6ZeHcGOjWvHfACZf
 yhDlIsnUvbRZwQBP7Lf0XsD4D41qX2fIuNW298OyIa
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="91898366"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Anthony
 PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: [PATCH 1/6] tools/libxc: Move xc_version() out of xc_private.c into its own file
Date: Tue, 17 Jan 2023 13:53:31 +0000
Message-ID: <20230117135336.11662-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

kexec-tools uses xc_version(), meaning that it is not a private API.  As we're
going to extend the functionality substantially, move it to its own file.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 tools/libs/ctrl/Makefile.common |  1 +
 tools/libs/ctrl/xc_private.c    | 66 --------------------------------
 tools/libs/ctrl/xc_private.h    |  7 ----
 tools/libs/ctrl/xc_version.c    | 83 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 84 insertions(+), 73 deletions(-)
 create mode 100644 tools/libs/ctrl/xc_version.c

diff --git a/tools/libs/ctrl/Makefile.common b/tools/libs/ctrl/Makefile.common
index 0a09c28fd369..4e3680c123f6 100644
--- a/tools/libs/ctrl/Makefile.common
+++ b/tools/libs/ctrl/Makefile.common
@@ -16,6 +16,7 @@ OBJS-y       += xc_pm.o
 OBJS-y       += xc_cpu_hotplug.o
 OBJS-y       += xc_vm_event.o
 OBJS-y       += xc_vmtrace.o
+OBJS-y       += xc_version.o
 OBJS-y       += xc_monitor.o
 OBJS-y       += xc_mem_paging.o
 OBJS-y       += xc_mem_access.o
diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index 2f99a7d2cfb5..cb22da604fe8 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -489,72 +489,6 @@ int xc_sysctl(xc_interface *xch, struct xen_sysctl *sysctl)
     return do_sysctl(xch, sysctl);
 }
 
-int xc_version(xc_interface *xch, int cmd, void *arg)
-{
-    DECLARE_HYPERCALL_BOUNCE(arg, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT); /* Size unknown until cmd decoded */
-    size_t sz;
-    int rc;
-
-    switch ( cmd )
-    {
-    case XENVER_version:
-        sz = 0;
-        break;
-    case XENVER_extraversion:
-        sz = sizeof(xen_extraversion_t);
-        break;
-    case XENVER_compile_info:
-        sz = sizeof(xen_compile_info_t);
-        break;
-    case XENVER_capabilities:
-        sz = sizeof(xen_capabilities_info_t);
-        break;
-    case XENVER_changeset:
-        sz = sizeof(xen_changeset_info_t);
-        break;
-    case XENVER_platform_parameters:
-        sz = sizeof(xen_platform_parameters_t);
-        break;
-    case XENVER_get_features:
-        sz = sizeof(xen_feature_info_t);
-        break;
-    case XENVER_pagesize:
-        sz = 0;
-        break;
-    case XENVER_guest_handle:
-        sz = sizeof(xen_domain_handle_t);
-        break;
-    case XENVER_commandline:
-        sz = sizeof(xen_commandline_t);
-        break;
-    case XENVER_build_id:
-        {
-            xen_build_id_t *build_id = (xen_build_id_t *)arg;
-            sz = sizeof(*build_id) + build_id->len;
-            HYPERCALL_BOUNCE_SET_DIR(arg, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
-            break;
-        }
-    default:
-        ERROR("xc_version: unknown command %d\n", cmd);
-        return -EINVAL;
-    }
-
-    HYPERCALL_BOUNCE_SET_SIZE(arg, sz);
-
-    if ( (sz != 0) && xc_hypercall_bounce_pre(xch, arg) )
-    {
-        PERROR("Could not bounce buffer for version hypercall");
-        return -ENOMEM;
-    }
-
-    rc = do_xen_version(xch, cmd, HYPERCALL_BUFFER(arg));
-
-    if ( sz != 0 )
-        xc_hypercall_bounce_post(xch, arg);
-
-    return rc;
-}
-
 unsigned long xc_make_page_below_4G(
     xc_interface *xch, uint32_t domid, unsigned long mfn)
 {
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index ed960c6f30e6..6404077903f0 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -215,13 +215,6 @@ void xc__hypercall_buffer_cache_release(xc_interface *xch);
  * Hypercall interfaces.
  */
 
-static inline int do_xen_version(xc_interface *xch, int cmd, xc_hypercall_buffer_t *dest)
-{
-    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dest);
-    return xencall2(xch->xcall, __HYPERVISOR_xen_version,
-                    cmd, HYPERCALL_BUFFER_AS_ARG(dest));
-}
-
 static inline int do_physdev_op(xc_interface *xch, int cmd, void *op, size_t len)
 {
     int ret = -1;
diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
new file mode 100644
index 000000000000..60e107dcee0b
--- /dev/null
+++ b/tools/libs/ctrl/xc_version.c
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
+/*
+ * xc_version.c
+ *
+ * Wrappers around XENVER_* hypercalls
+ */
+
+#include "xc_private.h"
+#include <assert.h>
+
+static int do_xen_version(xc_interface *xch, int cmd,
+                          xc_hypercall_buffer_t *dest)
+{
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dest);
+    return xencall2(xch->xcall, __HYPERVISOR_xen_version,
+                    cmd, HYPERCALL_BUFFER_AS_ARG(dest));
+}
+
+int xc_version(xc_interface *xch, int cmd, void *arg)
+{
+    DECLARE_HYPERCALL_BOUNCE(arg, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT); /* Size unknown until cmd decoded */
+    size_t sz;
+    int rc;
+
+    switch ( cmd )
+    {
+    case XENVER_version:
+        sz = 0;
+        break;
+    case XENVER_extraversion:
+        sz = sizeof(xen_extraversion_t);
+        break;
+    case XENVER_compile_info:
+        sz = sizeof(xen_compile_info_t);
+        break;
+    case XENVER_capabilities:
+        sz = sizeof(xen_capabilities_info_t);
+        break;
+    case XENVER_changeset:
+        sz = sizeof(xen_changeset_info_t);
+        break;
+    case XENVER_platform_parameters:
+        sz = sizeof(xen_platform_parameters_t);
+        break;
+    case XENVER_get_features:
+        sz = sizeof(xen_feature_info_t);
+        break;
+    case XENVER_pagesize:
+        sz = 0;
+        break;
+    case XENVER_guest_handle:
+        sz = sizeof(xen_domain_handle_t);
+        break;
+    case XENVER_commandline:
+        sz = sizeof(xen_commandline_t);
+        break;
+    case XENVER_build_id:
+        {
+            xen_build_id_t *build_id = (xen_build_id_t *)arg;
+            sz = sizeof(*build_id) + build_id->len;
+            HYPERCALL_BOUNCE_SET_DIR(arg, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+            break;
+        }
+    default:
+        ERROR("xc_version: unknown command %d\n", cmd);
+        return -EINVAL;
+    }
+
+    HYPERCALL_BOUNCE_SET_SIZE(arg, sz);
+
+    if ( (sz != 0) && xc_hypercall_bounce_pre(xch, arg) )
+    {
+        PERROR("Could not bounce buffer for version hypercall");
+        return -ENOMEM;
+    }
+
+    rc = do_xen_version(xch, cmd, HYPERCALL_BUFFER(arg));
+
+    if ( sz != 0 )
+        xc_hypercall_bounce_post(xch, arg);
+
+    return rc;
+}
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:54:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:54:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479421.743264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPl-0000Zt-V6; Tue, 17 Jan 2023 13:54:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479421.743264; Tue, 17 Jan 2023 13:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPl-0000ZW-OT; Tue, 17 Jan 2023 13:54:13 +0000
Received: by outflank-mailman (input) for mailman id 479421;
 Tue, 17 Jan 2023 13:54:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHmPl-0000XC-75
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:54:13 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6a75dd9c-966e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 14:54:11 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a75dd9c-966e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673963650;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=/f+gRZe5yeZkzcEod1cob8AlGi1kzQQ0lSYKT6D72pM=;
  b=R0A8vR10SsyED/t7YXfpQhXQQpGOAYwLwWcpvNMwrO6sfnveNegci+Xj
   p0W6eYflkwj9FHuL/qTeN96RalJELyyS+UtkH03ts/Wzk6GVixAUa3dIm
   xqI+3XZA/KyMGsCNAuQFhDAWUrX0hvGmYwEipIu5N+VDc8kxvjcXH4yp3
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91898370
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:0hHgiKum7DucC1PlgZHt/Lky++fnVF1eMUV32f8akzHdYApBsoF/q
 tZmKTrVMvyMYjOgf9sjOtvj80xQuceDyd81QVBkrSAyHi4X+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg0HVU/IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj51v0gnRkPaoQ5AaHyCFPZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwMisPSkClp8yNkKPnSsh0vMEPM8/OI9ZK0p1g5Wmx4fcORJnCR+PB5MNC3Sd2jcdLdRrcT
 5NHM3w1Nk2GOkARfA5NU/rSn8/x7pX7WxRepEiYuuwc5G/LwRYq+LPsLMDUapqBQsA9ckOw9
 zuWrjSiXUly2Nq30jek2y+Qic30sj7laYc/TKei6eNBnwjGroAUIEJPDgbqyRWjsWahX/pPJ
 kpS/TAhxYAi+UruQtTjUhmQpH+fogVaS9dWC/c96gyG1uzT+QnxLmQNUDNpctEts84yAzsw2
 TehndzzAid0mKaIUn/b/bCRxQ5eIgBMczVEP3VdC1JYvZ+6+tpbYg/zoshLCrW3qo3TOR/Lk
 yHWrAkmvbA/ksguyPDulbzYuA5AtqQlXyZsuFqMDzj/tlwpDGK2T9f2sAaGtJ6sOK7cFwDc5
 yZcxqBy+chUVfmweDqxrPLh9V1Dz9KMK3XijFFmBPHNHBz9qif4Lei8DNyTTXqF0/romhezO
 ic/QSsLuPdu0IKCNMebmb6ZBcUw1rTHHt/4TP3SZdcmSsEvK1TXrX02NR/JjjuFfK0QfUYXY
 MfzTCpRJSxCVfQPIMSeGo/xLoPHNghhnDiOFPgXPjys0KaEZW79dFv2GALmUwzN14vd+F+92
 48GZ6O3J+B3DLWWjt//rdRCcjjn7BETWfjLliCgXrXSclo8Rj9/UaG5LHFIU9UNopm5X9zgp
 hmVMnK0AnKk2hUr9S3ihqhfVY7S
IronPort-HdrOrdr: A9a23:YGR2yqxS4OclT4vDZ3JaKrPwKL1zdoMgy1knxilNoEpuA6ilfq
 GV/MjzuiWetN98YhsdcLO7WZVoI0myyXcv2/h1AV7KZmCPhILPFuxfBODZrQEIdReTygbzv5
 0QFJSXpLfLfDtHZWeR2njbL+od
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="91898370"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Anthony
 PERARD" <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: [PATCH 4/6] tools: Introduce a non-truncating xc_xenver_changeset()
Date: Tue, 17 Jan 2023 13:53:34 +0000
Message-ID: <20230117135336.11662-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Update libxl and the ocaml stubs to match.  No API/ABI change in either.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Juergen Gross <jgross@suse.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Torok <edvin.torok@citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/include/xenctrl.h             |  1 +
 tools/libs/ctrl/xc_version.c        |  5 +++++
 tools/libs/light/libxl.c            |  5 +----
 tools/ocaml/libs/xc/xenctrl_stubs.c | 19 ++++++++-----------
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 279dc17d67d4..48dbf3eab75f 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1610,6 +1610,7 @@ int xc_version(xc_interface *xch, int cmd, void *arg);
  */
 char *xc_xenver_extraversion(xc_interface *xch);
 char *xc_xenver_capabilities(xc_interface *xch);
+char *xc_xenver_changeset(xc_interface *xch);
 
 int xc_flask_op(xc_interface *xch, xen_flask_op_t *op);
 
diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
index 512302a393ea..9f2cae03dba8 100644
--- a/tools/libs/ctrl/xc_version.c
+++ b/tools/libs/ctrl/xc_version.c
@@ -161,3 +161,8 @@ char *xc_xenver_capabilities(xc_interface *xch)
 {
     return varbuf_simple_string(xch, XENVER_capabilities2);
 }
+
+char *xc_xenver_changeset(xc_interface *xch)
+{
+    return varbuf_simple_string(xch, XENVER_changeset2);
+}
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index 139e838d1407..80e763aba944 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -582,7 +582,6 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
     GC_INIT(ctx);
     union {
         xen_compile_info_t xen_cc;
-        xen_changeset_info_t xen_chgset;
         xen_platform_parameters_t p_parms;
         xen_commandline_t xen_commandline;
         xen_build_id_t build_id;
@@ -607,9 +606,7 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
     info->compile_date = libxl__strdup(NOGC, u.xen_cc.compile_date);
 
     info->capabilities = xc_xenver_capabilities(ctx->xch);
-
-    xc_version(ctx->xch, XENVER_changeset, &u.xen_chgset);
-    info->changeset = libxl__strdup(NOGC, u.xen_chgset);
+    info->changeset = xc_xenver_changeset(ctx->xch);
 
     xc_version(ctx->xch, XENVER_platform_parameters, &u.p_parms);
     info->virt_start = u.p_parms.virt_start;
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 368f4727f0a0..291e92db7300 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -983,27 +983,24 @@ CAMLprim value stub_xc_version_compile_info(value xch)
 }
 
 
-static value xc_version_single_string(value xch, int code, void *info)
+CAMLprim value stub_xc_version_changeset(value xch)
 {
 	CAMLparam1(xch);
-	int retval;
+	CAMLlocal1(result);
+	char *changeset;
 
 	caml_enter_blocking_section();
-	retval = xc_version(_H(xch), code, info);
+	changeset = xc_xenver_changeset(_H(xch));
 	caml_leave_blocking_section();
 
-	if (retval)
+	if (!changeset)
 		failwith_xc(_H(xch));
 
-	CAMLreturn(caml_copy_string((char *)info));
-}
+	result = caml_copy_string(changeset);
 
+	free(changeset);
 
-CAMLprim value stub_xc_version_changeset(value xch)
-{
-	xen_changeset_info_t ci;
-
-	return xc_version_single_string(xch, XENVER_changeset, &ci);
+	CAMLreturn(result);
 }
 
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:54:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:54:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479422.743281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPn-00012g-AO; Tue, 17 Jan 2023 13:54:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479422.743281; Tue, 17 Jan 2023 13:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPn-00012Q-5s; Tue, 17 Jan 2023 13:54:15 +0000
Received: by outflank-mailman (input) for mailman id 479422;
 Tue, 17 Jan 2023 13:54:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHmPm-0000XC-53
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:54:14 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6c6af839-966e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 14:54:13 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c6af839-966e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673963652;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=UeCGU2douQiNkU5ETGdTRi7QCglzQpQvZz38J5a3SzA=;
  b=WyBH/eqlxAibQyesBemv9euhIM+u2eaOWQke3FQWib9UqTpEEKcOiJ+2
   nGX2KMj82GPVxFtI/Bs2kgwuMtFt2yrqx8woPl/Uu+TDLJjFqZK32yq3q
   +4Q7Gr8XsJPaI0bbPhhLQGM05iY+ifrf+U/H884IQNcdj+Uv0/CBEAk2s
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91898369
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:G9Nj6aLVmo4EyW5yFE+RqZUlxSXFcZb7ZxGr2PjKsXjdYENS1TMDn
 2dJWT2EO/aLNmGheI8iPoi+/EsDscfSyoQ3T1BlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHv+kUrWs1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPcwP9TlK6q4mhA5wVhPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5tGmNqp
 do4Bgtdf0yDpvqMxZ6kTMxz05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWteGknHTgNRZfr0qYv/Ef6GnP1g1hlrPqNbI5f/TbHJUEzh3G9
 woq+UzbKDgBJuat2AOM63n9it7kjyrqVrs7QejQGvlC3wTImz175ActfUu2p7y1h1CzX/pbK
 lcI4Ww+oK4q7kupQ9LhGRqirxasoRo0S9dWVeog52mlyKXO5B2CLnMZVTMHY9sj3PLaXhRzi
 AXPxYmwQ2Uy7vvMEyn1GqqoQS2aIzMXCT8kRQE/HRpZ4/j7moQfkRTqUYM2eEKqteEZCQ0c0
 hjT8ndl1u9J1ZFbv0mo1QuZ2mzx//AlWiZwv1yKBTz9s2uVcab/P+SVBU7nAeGsxWpzZn2Ip
 zA6lseX94ji5rndxXXWEI3h8FxEjstp0QEwYnY1RfHNDxz3pxaekXl4uVmS3ntBPMceYiPOa
 0TOow5X75I7FCL0MvQnMt7pW5VznPOI+THZuhf8N4omX3SMXFXfoHEGibC4gggBb3TAYYlgY
 MzGIK5A/F4RCLh9zSreegvu+eZD+8zK/kuKHcqT503+gdKjiIu9Fe9t3K2mMrpos8tpYWz9r
 75iCid9404OAL2kPHeJq9B7wJJjBSFTOK0aYvd/LoarSjeK0kl6Wpc9HZtJl1RZoplo
IronPort-HdrOrdr: A9a23:Agn956ATYoxs0SPlHemo55DYdb4zR+YMi2TDgXoBLSC9E/b5qy
 nApp8mPHPP4gr5O0tApTnjAsa9qCjnhPtICOAqVN+ftW/d1VdAR7sN0WKN+VHd84KVzJ876U
 /NGZIOa+EZrDJB/KTH3DU=
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="91898369"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Anthony
 PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: [PATCH 5/6] tools: Introduce a non-truncating xc_xenver_cmdline()
Date: Tue, 17 Jan 2023 13:53:35 +0000
Message-ID: <20230117135336.11662-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Update libxl to match.  No API/ABI change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h      | 1 +
 tools/libs/ctrl/xc_version.c | 5 +++++
 tools/libs/light/libxl.c     | 4 +---
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 48dbf3eab75f..fd80a509197d 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1611,6 +1611,7 @@ int xc_version(xc_interface *xch, int cmd, void *arg);
 char *xc_xenver_extraversion(xc_interface *xch);
 char *xc_xenver_capabilities(xc_interface *xch);
 char *xc_xenver_changeset(xc_interface *xch);
+char *xc_xenver_commandline(xc_interface *xch);
 
 int xc_flask_op(xc_interface *xch, xen_flask_op_t *op);
 
diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
index 9f2cae03dba8..02f6e9551b57 100644
--- a/tools/libs/ctrl/xc_version.c
+++ b/tools/libs/ctrl/xc_version.c
@@ -166,3 +166,8 @@ char *xc_xenver_changeset(xc_interface *xch)
 {
     return varbuf_simple_string(xch, XENVER_changeset2);
 }
+
+char *xc_xenver_commandline(xc_interface *xch)
+{
+    return varbuf_simple_string(xch, XENVER_commandline2);
+}
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index 80e763aba944..3f906a47148b 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -583,7 +583,6 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
     union {
         xen_compile_info_t xen_cc;
         xen_platform_parameters_t p_parms;
-        xen_commandline_t xen_commandline;
         xen_build_id_t build_id;
     } u;
     long xen_version;
@@ -613,8 +612,7 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
 
     info->pagesize = xc_version(ctx->xch, XENVER_pagesize, NULL);
 
-    xc_version(ctx->xch, XENVER_commandline, &u.xen_commandline);
-    info->commandline = libxl__strdup(NOGC, u.xen_commandline);
+    info->commandline = xc_xenver_commandline(ctx->xch);
 
     u.build_id.len = sizeof(u) - sizeof(u.build_id);
     r = libxl__xc_version_wrap(gc, info, &u.build_id);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:54:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:54:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479423.743286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPn-00016z-NG; Tue, 17 Jan 2023 13:54:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479423.743286; Tue, 17 Jan 2023 13:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPn-00016A-FC; Tue, 17 Jan 2023 13:54:15 +0000
Received: by outflank-mailman (input) for mailman id 479423;
 Tue, 17 Jan 2023 13:54:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHmPm-0000F6-BF
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:54:14 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 690f5070-966e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 14:54:07 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 690f5070-966e-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673963651;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=Y1TsN24+gK5qG+xfcJUIrL6CpEgkX3sOFO8L70/aQgA=;
  b=hNTHdCOFdmvtUWckZytsRnkkPZjbPUc+qHVYX+2H53MevMwrKSK/8Gx5
   sm1YKNNa5XafUce9DiY2pDJGiSUu0+2/KiXC7s4wZlFO8Lunh1BAkW37O
   U4No19jjl1HzOE+KhdSOeV5x8GL5+6Inwu1rsdNFkFShIHlJYfox+hteW
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91898368
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:wHfkaqBUJcICiBVW/yLjw5YqxClBgxIJ4kV8jS/XYbTApDNw3jVRm
 GNMX2qAOqyCYmX1eIx0Ot/g9x9TvcCByNY3QQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtMpvlDs15K6p4GpB4QRkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw4P11JTFo1
 dIjeCFRbjGbveea37SeVbw57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTILs4kP2lmT/UdDpApUjOjaE2/3LS3Ep6172F3N/9K4XaFJUOwBbwS
 mTu1Ef5DjYrbvil+QHeyU2Ppc+UswjFYddHfFG/3qEz2wDCroAJMzUJUXOrrP//jVSxM/pPJ
 kpR9icwoKwa8E2wUsK7TxC+uGSDvBMXR5xXCeJSwA2E1Kf8+QuSAWkACDlbZ7QOtsAsQicx/
 kSUhN6vDjtq2IB5UlrEqO3S92nrf3FIcylbP3RsoRY5D8fLupoxqkLpbvhYQL/pjvztIzTc3
 Davs31r71kMtvLnx5lX7Hie3W3398KTFlFljunEdjn7t10kPeZJc6TtsAGGtqgYce51W3Hb5
 BA5d96iAPfi5H1nvAiEW60zEb6g/J5p2xWM0Ac0T/HNG9lAkkNPnLy8Axkkfi+Fyu5eJVfUj
 Lb74Gu9HqN7MnqwdrNQaImsEcksxqWIPY27CauEP4YWMskoJVTvEMRSiam4hjCFraTRuftnZ
 cfznTiEUB729piLPBLpHrxAgNfHNwg1xH/JRICT8vhU+eP2WZJhcp9caAHmRrlgvMu5TPD9r
 4432z2il08OD4UTo0D/reYuELz9BSNqVcCs9ZIJLLDrz8gPMDhJNsI9CIgJI+RN95m5XM+Up
 xlRhmcwJILDuED6
IronPort-HdrOrdr: A9a23:nXztU63mx44FWTcRMYJqsAqjBHQkLtp133Aq2lEZdPU0SKGlfq
 GV7ZAmPHrP4gr5N0tOpTntAse9qBDnhPtICOsqTNSftWDd0QPFEGgF1+rfKlXbcBEWndQtt5
 uIHZIfNDXxZ2IK8PrS0U2DPPsLhPO818mT9IDjJ3UGd3AXV0m3hT0JdTpyESdNNXd77YJSLu
 v72iLezQDQA0j+aK6AdwA4t7iqnayyqHr+CyR2fCIa1A==
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="91898368"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Anthony
 PERARD" <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: [PATCH 2/6] tools: Introduce a non-truncating xc_xenver_extraversion()
Date: Tue, 17 Jan 2023 13:53:32 +0000
Message-ID: <20230117135336.11662-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

... which uses XENVER_extraversion2.

In order to do this sensibly, use manual hypercall buffer handling.  Not only
does this avoid an extra bounce buffer (we need to strip the xen_varbuf_t
header anyway), it's also shorter and easier to follow.

Update libxl and the ocaml stubs to match.  No API/ABI change in either.

With this change made, `xl info` can now correctly access a >15 char
extraversion:

  # xl info xen_version
  4.18-unstable+REALLY LONG EXTRAVERSION
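To illustrate the header-stripping step described above, here is a standalone
sketch (not part of the patch): `struct varbuf` is a hypothetical stand-in for
`xen_varbuf_t`, a length-prefixed, non-NUL-terminated buffer as returned by
the XENVER_*2 subops.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for xen_varbuf_t: length-prefixed payload,
 * not NUL terminated. */
struct varbuf {
    uint32_t len;
    char buf[];
};

/* Copy the payload out, dropping the header and appending a NUL
 * terminator, as varbuf_simple_string() does in the patch.  The
 * caller must free() the result. */
static char *varbuf_to_string(const struct varbuf *vb)
{
    char *res = malloc(vb->len + 1);

    if ( res )
    {
        memcpy(res, vb->buf, vb->len);
        res[vb->len] = '\0';
    }

    return res;
}
```

Because the result is a plain heap string rather than a fixed-size array,
nothing here truncates at an arbitrary length.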

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Juergen Gross <jgross@suse.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Torok <edvin.torok@citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>

Note: There is a marginal risk of a memory leak in the ocaml bindings, but
it turns out we have the same bug elsewhere and fixing it is going to be
rather complicated.
---
 tools/include/xenctrl.h             |  6 +++
 tools/libs/ctrl/xc_version.c        | 75 +++++++++++++++++++++++++++++++++++++
 tools/libs/light/libxl.c            |  4 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c |  9 +++--
 4 files changed, 87 insertions(+), 7 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 23037874d3d5..1e88d49371a4 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1604,6 +1604,12 @@ long xc_memory_op(xc_interface *xch, unsigned int cmd, void *arg, size_t len);
 
 int xc_version(xc_interface *xch, int cmd, void *arg);
 
+/*
+ * Wrappers around XENVER_* subops.  Callers must pass the returned pointer to
+ * free().
+ */
+char *xc_xenver_extraversion(xc_interface *xch);
+
 int xc_flask_op(xc_interface *xch, xen_flask_op_t *op);
 
 /*
diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
index 60e107dcee0b..2c14474feec5 100644
--- a/tools/libs/ctrl/xc_version.c
+++ b/tools/libs/ctrl/xc_version.c
@@ -81,3 +81,78 @@ int xc_version(xc_interface *xch, int cmd, void *arg)
 
     return rc;
 }
+
+/*
+ * Raw hypercall wrapper, letting us pass NULL and things which aren't of
+ * xc_hypercall_buffer_t *.
+ */
+static int do_xen_version_raw(xc_interface *xch, int cmd, void *hbuf)
+{
+    return xencall2(xch->xcall, __HYPERVISOR_xen_version,
+                    cmd, (unsigned long)hbuf);
+}
+
+/*
+ * Issues a xen_varbuf_t subop, using manual hypercall buffer handling to
+ * avoid unnecessary buffering.
+ *
+ * On failure, returns NULL.  errno probably useful.
+ * On success, returns a pointer which must be freed with xencall_free_buffer().
+ */
+static xen_varbuf_t *varbuf_op(xc_interface *xch, unsigned int subop)
+{
+    xen_varbuf_t *hbuf = NULL;
+    ssize_t sz;
+
+    sz = do_xen_version_raw(xch, subop, NULL);
+    if ( sz < 0 )
+        return NULL;
+
+    hbuf = xencall_alloc_buffer(xch->xcall, sizeof(*hbuf) + sz);
+    if ( !hbuf )
+        return NULL;
+
+    hbuf->len = sz;
+
+    sz = do_xen_version_raw(xch, subop, hbuf);
+    if ( sz < 0 )
+    {
+        xencall_free_buffer(xch->xcall, hbuf);
+        return NULL;
+    }
+
+    hbuf->len = sz;
+    return hbuf;
+}
+
+/*
+ * Wrap varbuf_op() to obtain a simple string.  Copy out of the hypercall
+ * buffer, stripping the xen_varbuf_t header and appending a NUL terminator.
+ *
+ * On failure, returns NULL, errno probably useful.
+ * On success, returns a pointer which must be free()'d.
+ */
+static char *varbuf_simple_string(xc_interface *xch, unsigned int subop)
+{
+    xen_varbuf_t *hbuf = varbuf_op(xch, subop);
+    char *res;
+
+    if ( !hbuf )
+        return NULL;
+
+    res = malloc(hbuf->len + 1);
+    if ( res )
+    {
+        memcpy(res, hbuf->buf, hbuf->len);
+        res[hbuf->len] = '\0';
+    }
+
+    xencall_free_buffer(xch->xcall, hbuf);
+
+    return res;
+}
+
+char *xc_xenver_extraversion(xc_interface *xch)
+{
+    return varbuf_simple_string(xch, XENVER_extraversion2);
+}
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index a0bf7d186f69..3e16e568839c 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -581,7 +581,6 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
 {
     GC_INIT(ctx);
     union {
-        xen_extraversion_t xen_extra;
         xen_compile_info_t xen_cc;
         xen_changeset_info_t xen_chgset;
         xen_capabilities_info_t xen_caps;
@@ -600,8 +599,7 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
     info->xen_version_major = xen_version >> 16;
     info->xen_version_minor = xen_version & 0xFF;
 
-    xc_version(ctx->xch, XENVER_extraversion, &u.xen_extra);
-    info->xen_version_extra = libxl__strdup(NOGC, u.xen_extra);
+    info->xen_version_extra = xc_xenver_extraversion(ctx->xch);
 
     xc_version(ctx->xch, XENVER_compile_info, &u.xen_cc);
     info->compiler = libxl__strdup(NOGC, u.xen_cc.compiler);
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 2fba9c5e94d6..f3ce12dd8683 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -929,9 +929,8 @@ CAMLprim value stub_xc_version_version(value xch)
 {
 	CAMLparam1(xch);
 	CAMLlocal1(result);
-	xen_extraversion_t extra;
+	char *extra;
 	long packed;
-	int retval;
 
 	caml_enter_blocking_section();
 	packed = xc_version(_H(xch), XENVER_version, NULL);
@@ -941,10 +940,10 @@ CAMLprim value stub_xc_version_version(value xch)
 		failwith_xc(_H(xch));
 
 	caml_enter_blocking_section();
-	retval = xc_version(_H(xch), XENVER_extraversion, &extra);
+	extra = xc_xenver_extraversion(_H(xch));
 	caml_leave_blocking_section();
 
-	if (retval)
+	if (!extra)
 		failwith_xc(_H(xch));
 
 	result = caml_alloc_tuple(3);
@@ -953,6 +952,8 @@ CAMLprim value stub_xc_version_version(value xch)
 	Store_field(result, 1, Val_int(packed & 0xffff));
 	Store_field(result, 2, caml_copy_string(extra));
 
+	free(extra);
+
 	CAMLreturn(result);
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:54:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479425.743302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPp-0001a0-1W; Tue, 17 Jan 2023 13:54:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479425.743302; Tue, 17 Jan 2023 13:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPo-0001Xp-TU; Tue, 17 Jan 2023 13:54:16 +0000
Received: by outflank-mailman (input) for mailman id 479425;
 Tue, 17 Jan 2023 13:54:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHmPo-0000XC-0g
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:54:16 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6be8b0f7-966e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 14:54:13 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6be8b0f7-966e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673963653;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=mE/iBXFwyWDQLAcCnVIU+5UD6GpcaO2CE3H3XDadFrA=;
  b=CIyhQRxXWhoJHJ1OBMCv2KtAvqV6vX2X9RjeOpet/5ot9cUAtMQQ/WHT
   xmUnI0hIqQUwxnyA5rQ6Yus7tMncpb4I/DiFbTPZM0yjxUq/1z+qrUdiO
   b8NogMY6LV+YrndYtzCxeLgA7KY1EQXA4Pfc6Mh60msM6femd7hvqWCyi
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91898371
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:veBvuakiVdH2uTlxDxxP/yLo5gyKJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xJNWzjVPP2NYWqhe4h/a4+1oRtQvcXSzNdhSQdl+Xo3EyMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icf3grHmeIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7auaVA8w5ARkPqgS5QCGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 eYDKhRdSDCtvLKnnLKaUtNpqtslCda+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglHWdTFCpU3Tjq0w+2XJlyR60aT3McqTcduPLSlQthfB9
 jOWpDugav0cHPW/zQrdzW/9uqjooyr8XtwdBqa6r9c/1TV/wURMUUZLBDNXu8KRlUqWS99Zb
 UsO9UIGvaU0sUCmUNT5dxm5u2Kf+A4RXcJKFO834x3LzbDbiy6VD3YJZiRMY9snsIkxXzNC/
 l2GhdTyHhR0raaYD3ma89+pQSiaYHZPazVYPGldEFVDuoO4yG0usv7RZsx4EorlqP3bImHhn
 zCrtBI7q6oVqNFegs1X4mv7byKQSonhF1BqvVSNBDr6vmuVd6b+OdX2tAGzAeJoad/AEwLf5
 CVsd922trhmMH2bqMCarAzh9pmN7u3NDjDTiEUH83IJp2X0oC7LkWy9DVhDyKZV3iUsI2WBj
 Lf741852XOqFCLCgVVLS4ywEd826qPrCM7oUPvZBvIXPMcqJF7bpHE/PB/Lt4wIrKTLufhvU
 ap3jO72VSpKYUiZ5GfeqxghPU8DmXllmDK7qWHTxBW7y7uODEN5up9cWGZimtsRtfveyC2Mq
 oY3Cid/40kHOAEISnWNoNF7wJFjBSRTOK0aXOQMJ77TeFQ4Rjt/YxITqJt4E7FYc21uvr+g1
 hmAtoVwkwWXaaHvQelSVk1eVQ==
IronPort-HdrOrdr: A9a23:5gipqqMnNqAtUMBcTgejsMiBIKoaSvp037BK7S1MoH1uA6ilfq
 WV9sjzuiWatN98Yh8dcLO7Scy9qBHnhP1ICOAqVN/PYOCBggqVxelZhrcKqAeQeREWmNQ86U
 9hGZIOdeHYPBxBouvRpCODNL8bsb66GKLDv5aj85+6JzsaFJ2J7G1Ce3im+lUdfnghOXKgfq
 DsnPauoVCbCA0qhpTSPAh8YwDbzee7767bXQ==
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="91898371"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Anthony
 PERARD" <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: [PATCH 3/6] tools: Introduce a non-truncating xc_xenver_capabilities()
Date: Tue, 17 Jan 2023 13:53:33 +0000
Message-ID: <20230117135336.11662-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Update libxl and the ocaml stubs to match.  No API/ABI change in either.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Juergen Gross <jgross@suse.com>
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Torok <edvin.torok@citrix.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/include/xenctrl.h             |  1 +
 tools/libs/ctrl/xc_version.c        |  5 +++++
 tools/libs/light/libxl.c            |  4 +---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 17 +++++++++++++++--
 4 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 1e88d49371a4..279dc17d67d4 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1609,6 +1609,7 @@ int xc_version(xc_interface *xch, int cmd, void *arg);
  * free().
  */
 char *xc_xenver_extraversion(xc_interface *xch);
+char *xc_xenver_capabilities(xc_interface *xch);
 
 int xc_flask_op(xc_interface *xch, xen_flask_op_t *op);
 
diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
index 2c14474feec5..512302a393ea 100644
--- a/tools/libs/ctrl/xc_version.c
+++ b/tools/libs/ctrl/xc_version.c
@@ -156,3 +156,8 @@ char *xc_xenver_extraversion(xc_interface *xch)
 {
     return varbuf_simple_string(xch, XENVER_extraversion2);
 }
+
+char *xc_xenver_capabilities(xc_interface *xch)
+{
+    return varbuf_simple_string(xch, XENVER_capabilities2);
+}
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index 3e16e568839c..139e838d1407 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -583,7 +583,6 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
     union {
         xen_compile_info_t xen_cc;
         xen_changeset_info_t xen_chgset;
-        xen_capabilities_info_t xen_caps;
         xen_platform_parameters_t p_parms;
         xen_commandline_t xen_commandline;
         xen_build_id_t build_id;
@@ -607,8 +606,7 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
     info->compile_domain = libxl__strdup(NOGC, u.xen_cc.compile_domain);
     info->compile_date = libxl__strdup(NOGC, u.xen_cc.compile_date);
 
-    xc_version(ctx->xch, XENVER_capabilities, &u.xen_caps);
-    info->capabilities = libxl__strdup(NOGC, u.xen_caps);
+    info->capabilities = xc_xenver_capabilities(ctx->xch);
 
     xc_version(ctx->xch, XENVER_changeset, &u.xen_chgset);
     info->changeset = libxl__strdup(NOGC, u.xen_chgset);
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index f3ce12dd8683..368f4727f0a0 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -1009,9 +1009,22 @@ CAMLprim value stub_xc_version_changeset(value xch)
 
 CAMLprim value stub_xc_version_capabilities(value xch)
 {
-	xen_capabilities_info_t ci;
+	CAMLparam1(xch);
+	CAMLlocal1(result);
+	char *capabilities;
+
+	caml_enter_blocking_section();
+	capabilities = xc_xenver_capabilities(_H(xch));
+	caml_leave_blocking_section();
 
-	return xc_version_single_string(xch, XENVER_capabilities, &ci);
+	if (!capabilities)
+		failwith_xc(_H(xch));
+
+	result = caml_copy_string(capabilities);
+
+	free(capabilities);
+
+	CAMLreturn(result);
 }
 
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:54:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:54:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479426.743313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPq-0001sa-CQ; Tue, 17 Jan 2023 13:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479426.743313; Tue, 17 Jan 2023 13:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmPq-0001rO-7m; Tue, 17 Jan 2023 13:54:18 +0000
Received: by outflank-mailman (input) for mailman id 479426;
 Tue, 17 Jan 2023 13:54:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHmPp-0000XC-0l
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:54:17 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6d1a7f45-966e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 14:54:14 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d1a7f45-966e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673963654;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=ZeZpvUYtzSln2z+je77HpdVO31fBudqozIWZgnYM4bM=;
  b=fyZo0hd8H4zT9vNvABqWnUP+qlDrMJBpinIK6YhrTbQPpb2EkRpUK2LU
   /uZW3sbC40/NXHm8HJz3koONVHv/5bWnjX2uJgcply1gQ1yC2WAy6Faa9
   5G/eGdftyAeooAyNQwCjPmvMPWFcLoycTAkNGM3n+9Opr1u2/ANZflMI2
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 91898372
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="91898372"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Anthony
 PERARD" <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: [PATCH 6/6] tools: Introduce a xc_xenver_buildid() wrapper
Date: Tue, 17 Jan 2023 13:53:36 +0000
Message-ID: <20230117135336.11662-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

... which converts binary content to hex automatically.

Update libxl to match.  No API/ABI change.

This removes a latent bug in cases where the buildid is longer than 4092
bytes.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h      |  1 +
 tools/libs/ctrl/xc_version.c | 33 +++++++++++++++++++++++++++++++++
 tools/libs/light/libxl.c     | 44 +-------------------------------------------
 3 files changed, 35 insertions(+), 43 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index fd80a509197d..48296930b892 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1612,6 +1612,7 @@ char *xc_xenver_extraversion(xc_interface *xch);
 char *xc_xenver_capabilities(xc_interface *xch);
 char *xc_xenver_changeset(xc_interface *xch);
 char *xc_xenver_commandline(xc_interface *xch);
+char *xc_xenver_buildid(xc_interface *xch);
 
 int xc_flask_op(xc_interface *xch, xen_flask_op_t *op);
 
diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
index 02f6e9551b57..54d1b9296696 100644
--- a/tools/libs/ctrl/xc_version.c
+++ b/tools/libs/ctrl/xc_version.c
@@ -171,3 +171,36 @@ char *xc_xenver_commandline(xc_interface *xch)
 {
     return varbuf_simple_string(xch, XENVER_commandline2);
 }
+
+static void str2hex(char *dst, const unsigned char *src, size_t n)
+{
+    static const unsigned char hex[] = "0123456789abcdef";
+
+    for ( ; n; n-- )
+    {
+        unsigned char c = *src++;
+
+        *dst++ = hex[c >> 4];
+        *dst++ = hex[c & 0xf];
+    }
+}
+
+char *xc_xenver_buildid(xc_interface *xch)
+{
+    xen_varbuf_t *hbuf = varbuf_op(xch, XENVER_build_id);
+    char *res;
+
+    if ( !hbuf )
+        return NULL;
+
+    res = malloc((hbuf->len * 2) + 1);
+    if ( res )
+    {
+        str2hex(res, hbuf->buf, hbuf->len);
+        res[hbuf->len * 2] = '\0';
+    }
+
+    xencall_free_buffer(xch->xcall, hbuf);
+
+    return res;
+}
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index 3f906a47148b..5512f444aca2 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -545,38 +545,6 @@ libxl_numainfo *libxl_get_numainfo(libxl_ctx *ctx, int *nr)
     return ret;
 }
 
-static int libxl__xc_version_wrap(libxl__gc *gc, libxl_version_info *info,
-                                  xen_build_id_t *build)
-{
-    int r;
-
-    r = xc_version(CTX->xch, XENVER_build_id, build);
-    switch (r) {
-    case -EPERM:
-    case -ENODATA:
-    case 0:
-        info->build_id = libxl__strdup(NOGC, "");
-        break;
-
-    case -ENOBUFS:
-        break;
-
-    default:
-        if (r > 0) {
-            unsigned int i;
-
-            info->build_id = libxl__zalloc(NOGC, (r * 2) + 1);
-
-            for (i = 0; i < r ; i++)
-                snprintf(&info->build_id[i * 2], 3, "%02hhx", build->buf[i]);
-
-            r = 0;
-        }
-        break;
-    }
-    return r;
-}
-
 const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
 {
     GC_INIT(ctx);
@@ -586,7 +554,6 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
         xen_build_id_t build_id;
     } u;
     long xen_version;
-    int r;
     libxl_version_info *info = &ctx->version_info;
 
     if (info->xen_version_extra != NULL)
@@ -613,17 +580,8 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
     info->pagesize = xc_version(ctx->xch, XENVER_pagesize, NULL);
 
     info->commandline = xc_xenver_commandline(ctx->xch);
+    info->build_id = xc_xenver_buildid(ctx->xch);
 
-    u.build_id.len = sizeof(u) - sizeof(u.build_id);
-    r = libxl__xc_version_wrap(gc, info, &u.build_id);
-    if (r == -ENOBUFS) {
-            xen_build_id_t *build_id;
-
-            build_id = libxl__zalloc(gc, info->pagesize);
-            build_id->len = info->pagesize - sizeof(*build_id);
-            r = libxl__xc_version_wrap(gc, info, build_id);
-            if (r) LOGEV(ERROR, r, "getting build_id");
-    }
  out:
     GC_FREE;
     return info;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 13:56:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 13:56:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479461.743325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmSG-0004Cr-1K; Tue, 17 Jan 2023 13:56:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479461.743325; Tue, 17 Jan 2023 13:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmSF-0004Ck-Up; Tue, 17 Jan 2023 13:56:47 +0000
Received: by outflank-mailman (input) for mailman id 479461;
 Tue, 17 Jan 2023 13:56:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmSE-0004CK-0U
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 13:56:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmSD-00016w-Ib; Tue, 17 Jan 2023 13:56:45 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmSD-0007bv-DU; Tue, 17 Jan 2023 13:56:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=K2Q2SWmwEwQtkuC/lM/GUj/xb6gxQ/qAW28lwp5CeuQ=; b=NZL8Ix6TZCe0AQApJc4gikKtTR
	ApWcaP3rM1+jjfQmvg5kmMNKIFJywiFoRk7e+P8nFDsrCItFdAClga0L/rLm8c6849zq45gj+3SIh
	/BF7lVVhjV7KSQo1lofQfNXq7jwoZJ4v8RZYBNpMvpl/pRP+nMnzd6UlsATBdd2ejcM0=;
Message-ID: <ca989bac-b710-c8d3-0bc1-67e22dc6ba41@xen.org>
Date: Tue, 17 Jan 2023 13:56:43 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 01/17] tools/xenstore: let talloc_free() preserve errno
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117091124.22170-2-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> Today talloc_free() is not guaranteed to preserve errno, especially
> when a custom destructor is used.
> 
> So preserve errno in talloc_free().
> 
> This allows removing some errno saving outside of talloc.c.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> V2:
> - drop wrapper (Julien Grall)
> ---
>   tools/xenstore/talloc.c         | 21 +++++++++++++--------
>   tools/xenstore/xenstored_core.c |  2 --
>   2 files changed, 13 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/xenstore/talloc.c b/tools/xenstore/talloc.c
> index d7edcf3a93..23c3a23b19 100644
> --- a/tools/xenstore/talloc.c
> +++ b/tools/xenstore/talloc.c
> @@ -541,38 +541,39 @@ static void talloc_free_children(void *ptr)
>   */
>   int talloc_free(void *ptr)
>   {
> +	int saved_errno = errno;
>   	struct talloc_chunk *tc;
>   
>   	if (ptr == NULL) {
> -		return -1;
> +		goto err;
>   	}
>   
>   	tc = talloc_chunk_from_ptr(ptr);
>   
>   	if (tc->null_refs) {
>   		tc->null_refs--;
> -		return -1;
> +		goto err;
>   	}
>   
>   	if (tc->refs) {
>   		talloc_reference_destructor(tc->refs);
> -		return -1;
> +		goto err;
>   	}
>   
>   	if (tc->flags & TALLOC_FLAG_LOOP) {
>   		/* we have a free loop - stop looping */
> -		return 0;
> +		goto success;
>   	}
>   
>   	if (tc->destructor) {
>   		talloc_destructor_t d = tc->destructor;
>   		if (d == (talloc_destructor_t)-1) {
> -			return -1;
> +			goto err;
>   		}
>   		tc->destructor = (talloc_destructor_t)-1;
>   		if (d(ptr) == -1) {
>   			tc->destructor = d;
> -			return -1;
> +			goto err;
>   		}
>   		tc->destructor = NULL;
>   	}
> @@ -594,10 +595,14 @@ int talloc_free(void *ptr)
>   	tc->flags |= TALLOC_FLAG_FREE;
>   
>   	free(tc);
> + success:
> +	errno = saved_errno;
>   	return 0;
> -}
> -
>   
> + err:
> +	errno = saved_errno;
> +	return -1;
> +}
>   
>   /*
>     A talloc version of realloc. The context argument is only used if
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 78a3edaa4e..1650821922 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -771,9 +771,7 @@ struct node *read_node(struct connection *conn, const void *ctx,
>   	return node;
>   
>    error:
> -	err = errno;
>   	talloc_free(node);
> -	errno = err;
>   	return NULL;
>   }
>   

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:02:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:02:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479467.743335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmXW-0005x4-KM; Tue, 17 Jan 2023 14:02:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479467.743335; Tue, 17 Jan 2023 14:02:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmXW-0005wx-Hf; Tue, 17 Jan 2023 14:02:14 +0000
Received: by outflank-mailman (input) for mailman id 479467;
 Tue, 17 Jan 2023 14:02:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmXV-0005wb-N9
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:02:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmXV-0001Ju-0w; Tue, 17 Jan 2023 14:02:13 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmXU-0007w8-Rl; Tue, 17 Jan 2023 14:02:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=/X8jrnSsy3ohe3W7ubaG0ogIm+jS9rlJkZ3O3/ly00k=; b=FJiJgtMXsM3kIuu6Z4Q7Yu6xFs
	kO+uRBPTBWrFnkGjTFYay+5n8kgpiX+qUw3EXcexpitdjMsTV6AJ3leiAmi/vmYjecBsQBNTVGDSk
	6ca031F821aaANBVBbXPgrLt+5KinUjge+qFFXRYu5rH+jQt27COIKdhpwCALznuLO0s=;
Message-ID: <a944a5ea-0e9f-add6-cbe7-8b06054c637a@xen.org>
Date: Tue, 17 Jan 2023 14:02:10 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 04/17] tools/xenstore: introduce dummy nodes for
 special watch paths
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-5-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117091124.22170-5-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> +static void fire_special_watches(const char *name)
> +{
> +	void *ctx = talloc_new(NULL);
> +	struct node *node;
> +
> +	if (!ctx)
> +		return;
> +
> +	node = read_node(NULL, ctx, name);
> +
> +	if (node)
> +		fire_watches(NULL, ctx, name, node, true, NULL);
> +	else
> +		syslog(LOG_ERR, "special node %s not found\n", name);

NIT: How about using log() so it is also printed in the trace log? This 
would be handy to avoid having to check multiple log files.

With or without:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:04:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:04:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479473.743346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmZP-0006WA-Vi; Tue, 17 Jan 2023 14:04:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479473.743346; Tue, 17 Jan 2023 14:04:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmZP-0006W3-Sv; Tue, 17 Jan 2023 14:04:11 +0000
Received: by outflank-mailman (input) for mailman id 479473;
 Tue, 17 Jan 2023 14:04:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmZO-0006Vv-Qs
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:04:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmZN-0001Lq-UH; Tue, 17 Jan 2023 14:04:09 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmZN-000801-Oe; Tue, 17 Jan 2023 14:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ANxvU9NV5KMISZkbBg33IK/WHIx63yDIb1w+eGLlrlg=; b=OHaeIU/Bv/LMTvqLiPavobhIpy
	fVkA+E1NcZ0mYurJlA81iIoP1/8zDwlA7GbABOyvhah4KEaAunobrPiqj3Fsm0vRG9h7j02K+d+4N
	vcD4DuzgG4G6polTxqOIp13soUPqknxIGsWpBijCDI+vW+0m0lattETIqRh0ILKDMZ8c=;
Message-ID: <f7eaac35-4b80-a727-56d9-67d4c5f39db0@xen.org>
Date: Tue, 17 Jan 2023 14:04:08 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 05/17] tools/xenstore: replace watch->relative_path
 with a prefix length
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-6-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117091124.22170-6-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> Instead of storing a pointer to the path which is prepended to
> relative paths in struct watch, just use the length of the prepended
> path.
> 
> It should be noted that the now removed special case of the
> relative path being "" in get_watch_path() can't happen at all.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:07:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:07:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479482.743358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmc9-000795-EW; Tue, 17 Jan 2023 14:07:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479482.743358; Tue, 17 Jan 2023 14:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmc9-00078y-B4; Tue, 17 Jan 2023 14:07:01 +0000
Received: by outflank-mailman (input) for mailman id 479482;
 Tue, 17 Jan 2023 14:07:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmc8-00078q-7N
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:07:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmc7-0001PZ-J0; Tue, 17 Jan 2023 14:06:59 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmc7-00088J-Dn; Tue, 17 Jan 2023 14:06:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=upwRCCs4UDo+eKOrVEkrMNJS1UU5v5QJnUv2lVns/F8=; b=BCivyuiVkkwRjrzwNC0OtLd7h7
	zSv8K6aLG0UaTMTNZxVmpzlc43JgaRqb2sfKUiziih445LibzmLv+3fUvyXy/7SWAnkDrZgpfmkFd
	s6849M3/lFrv3fkyMwfFVyvzrydrncFgmJRfcMYWOQvNeCyF21IK2fEd4BHn4Q/CKmfU=;
Message-ID: <8fb79fde-cc0e-9f9a-e22d-923ef7d6431e@xen.org>
Date: Tue, 17 Jan 2023 14:06:57 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 06/17] tools/xenstore: move changed domain handling
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-7-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117091124.22170-7-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> Move all code related to struct changed_domain from
> xenstored_transaction.c to xenstored_domain.c.
> 
> This will be needed later in order to simplify the accounting data
> updates in cases of errors during a request.
> 
> Split the code to have a more generic base framework.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:08:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:08:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479488.743369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmda-0007iL-Nr; Tue, 17 Jan 2023 14:08:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479488.743369; Tue, 17 Jan 2023 14:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmda-0007iE-LA; Tue, 17 Jan 2023 14:08:30 +0000
Received: by outflank-mailman (input) for mailman id 479488;
 Tue, 17 Jan 2023 14:08:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmdY-0007i1-W8
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:08:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmdY-0001RI-3f; Tue, 17 Jan 2023 14:08:28 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmdX-0008E1-U7; Tue, 17 Jan 2023 14:08:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=EEw53UadeBRHOFbnoUCYTcnaRLTGU27o928rNAeyUYs=; b=fRECNWYhDjhF9OQJ9VXxQb4q7A
	ObYC4T1NFwosLWX5rFSr5Cuorw8EGEny+ymQTcGeqoGhJjG+UKr9VFmYLnQUSxOzOrIp+p6hiKp1Z
	+x0DDkrTKDjQR4k/FmWri+BuAtvAF3L8GSVUNg4VHl7Y5U9E6JJ3753brjG9wI2hhKzs=;
Message-ID: <8aa74a44-2189-ca6b-3668-722e74d233fd@xen.org>
Date: Tue, 17 Jan 2023 14:08:26 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 08/17] tools/xenstore: don't allow creating too many
 nodes in a transaction
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-9-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117091124.22170-9-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> The accounting for the number of nodes of a domain in an active
> transaction is not working correctly, as it allows creating an
> arbitrary number of nodes. The transaction will eventually fail due
> to exceeding the nodes quota, but before the transaction is closed
> an unprivileged guest could cause Xenstore to use a lot of memory.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Does the rest of the series depend on this patch? I am asking because
I still need to go through your second series before forming an
opinion on this patch.

Yet, I would like to reduce the number of inflight patches :).

Cheers,

> ---
>   tools/xenstore/xenstored_domain.c | 5 ++---
>   1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index edfe5809be..07d91eb50c 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -1129,9 +1129,8 @@ int domain_nbentry_fix(unsigned int domid, int num, bool update)
>   
>   int domain_nbentry(struct connection *conn)
>   {
> -	return (domain_is_unprivileged(conn))
> -		? conn->domain->nbentry
> -		: 0;
> +	return domain_is_unprivileged(conn)
> +	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
>   }
>   
>   static bool domain_chk_quota(struct domain *domain, int mem)

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:09:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:09:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479492.743380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmeK-0008En-2I; Tue, 17 Jan 2023 14:09:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479492.743380; Tue, 17 Jan 2023 14:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmeJ-0008Eg-Vd; Tue, 17 Jan 2023 14:09:15 +0000
Received: by outflank-mailman (input) for mailman id 479492;
 Tue, 17 Jan 2023 14:09:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHmeI-0008EV-JF; Tue, 17 Jan 2023 14:09:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHmeI-0001Sf-Hc; Tue, 17 Jan 2023 14:09:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHmeI-0004FA-5N; Tue, 17 Jan 2023 14:09:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHmeI-0003l3-4x; Tue, 17 Jan 2023 14:09:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5jNeHQ3inScyJOTwoLiRy6pnhMo7qopENyKU+8LfBh0=; b=t1kJHSDKfbOjT1TGBFHqeUDAyX
	t3HBIdOLAUNOwxFfioK4crbqdaviNLph+qQqAOJh095IaObAXiMQRu23xpC3qvEvfjVB2YzPTdRyy
	tFb/IXuMtaXsymrPx4cAa31LYuNxoFq23wjqIC3JjuyK7pF/8NGl/qEz+B8si7dQi+zU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175930-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175930: regressions - trouble: blocked/broken
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:<job status>:broken:regression
    xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:host-build-prep:fail:regression
    xen-unstable-smoke:build-arm64-xsm:host-build-prep:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 14:09:14 +0000

flight 175930 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175930/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   5 host-build-prep          fail REGR. vs. 175746
 build-arm64-xsm               5 host-build-prep          fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    4 days
Failing since        175748  2023-01-12 20:01:56 Z    4 days   21 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    3 days   19 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all, i.e. on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:09:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:09:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479495.743391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmeh-0000EQ-CE; Tue, 17 Jan 2023 14:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479495.743391; Tue, 17 Jan 2023 14:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmeh-0000EJ-8j; Tue, 17 Jan 2023 14:09:39 +0000
Received: by outflank-mailman (input) for mailman id 479495;
 Tue, 17 Jan 2023 14:09:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mGQh=5O=gmail.com=rjwysocki@srs-se1.protection.inumbo.net>)
 id 1pHmef-0000Cf-Vl
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:09:37 +0000
Received: from mail-ej1-f54.google.com (mail-ej1-f54.google.com
 [209.85.218.54]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8fc6f7dd-9670-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 15:09:31 +0100 (CET)
Received: by mail-ej1-f54.google.com with SMTP id bk15so18120604ejb.9
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 06:09:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fc6f7dd-9670-11ed-b8d0-410ff93cb8f0
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=SxrCGjSKfBkjK5viyn3/63bLv/HFmaC1YTqocY1zE1E=;
        b=W/vgLvOb4RCSoEf3rNl+vo8bzNK3dox4hdvY0OV54GrXocUDFV44d6BEBf3axDUIoG
         psVT5Ss//MdAys5HEyZZWSSLPZJxdg+5Uu+jPZVdM4PwOOUCtGuS01ZTLG9sYfOuwb/s
         sKt7I8F0+NQ8DSGjC+m6du+yTkHYd0bVVgR11eUEVXpPaUPdOvorcfonJrkRO6J0UHLE
         Vhb2lO+kd19g79ZSkPM1C1ezJvfrbhcWJRWMwEVWf053Sj5wJs+FcovZNrd4i9fHuxQ5
         TJGwXM0qO4bEN4NOPJTJNB29Hgal4TnvZIPfIkF+A5ACRVB34w+L8EoXLVIFWwv1azc5
         iR0Q==
X-Gm-Message-State: AFqh2krSPPYlkdH1aZxn3GimcTyzM9bCMZr6jwPbG5aWsupHRoeygxD5
	HVwrkFDdevGB7C8TbCu2Wd70UI+W7woKsU5a4Sc=
X-Google-Smtp-Source: AMrXdXu3u90Y/GKUFfDt7Oeht0ZXER9iAI4urcFhf748H5avCW3qieQtQPSGgH0KHpzyPJom0xOGNHHT1Jp4RjFOURk=
X-Received: by 2002:a17:906:eb1b:b0:86e:abe4:5acf with SMTP id
 mb27-20020a170906eb1b00b0086eabe45acfmr297536ejb.615.1673964575446; Tue, 17
 Jan 2023 06:09:35 -0800 (PST)
MIME-Version: 1.0
References: <20230113140610.7132-1-jgross@suse.com> <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
 <e5cc2f96-82bc-a0dc-21fa-2f605bc867d1@suse.com>
In-Reply-To: <e5cc2f96-82bc-a0dc-21fa-2f605bc867d1@suse.com>
From: "Rafael J. Wysocki" <rafael@kernel.org>
Date: Tue, 17 Jan 2023 15:09:24 +0100
Message-ID: <CAJZ5v0ggSbR7+RiXuJo4aq+PYWS-eb9R2iSr0DFfVhcaJ1bfUQ@mail.gmail.com>
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen
To: Juergen Gross <jgross@suse.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>, linux-kernel@vger.kernel.org, x86@kernel.org, 
	linux-pm@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>, 
	Len Brown <len.brown@intel.com>, Pavel Machek <pavel@ucw.cz>, 
	Stefano Stabellini <sstabellini@kernel.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, xen-devel@lists.xenproject.org, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Jan 16, 2023 at 7:45 AM Juergen Gross <jgross@suse.com> wrote:
>
> On 13.01.23 20:40, Rafael J. Wysocki wrote:
> > On Fri, Jan 13, 2023 at 3:06 PM Juergen Gross <jgross@suse.com> wrote:
> >>
> >> Commit f1e525009493 ("x86/boot: Skip realmode init code when running as
> >> Xen PV guest") missed one code path accessing real_mode_header, leading
> >> to dereferencing NULL when suspending the system under Xen:
> >>
> >>      [  348.284004] PM: suspend entry (deep)
> >>      [  348.289532] Filesystems sync: 0.005 seconds
> >>      [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
> >>      [  348.292457] OOM killer disabled.
> >>      [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
> >>      [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
> >>      [  348.749228] PM: suspend devices took 0.352 seconds
> >>      [  348.769713] ACPI: EC: interrupt blocked
> >>      [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
> >>      [  348.816080] #PF: supervisor read access in kernel mode
> >>      [  348.816081] #PF: error_code(0x0000) - not-present page
> >>      [  348.816083] PGD 0 P4D 0
> >>      [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
> >>      [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
> >>      [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
> >>      [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20
> >>
> >> Fix that by adding an indirection for acpi_get_wakeup_address() which
> >> Xen PV dom0 can use to return a dummy non-zero wakeup address (this
> >> address won't ever be used, as the real suspend handling is done by the
> >> hypervisor).
> >
> > How exactly does this help?
>
> I believed the first sentence of the commit message would make this
> clear enough.

That was clear, but the fix part wasn't really.

> I can expand the commit message to go more into detail if you think
> this is really needed.

IMO calling acpi_set_waking_vector() with a known-invalid wakeup
vector address in dom0 is plain confusing.

I'm not sure what to do about it yet, but IMV something needs to be done.


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:18:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:18:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479509.743402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmmi-0001ww-7y; Tue, 17 Jan 2023 14:17:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479509.743402; Tue, 17 Jan 2023 14:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmmi-0001wp-3y; Tue, 17 Jan 2023 14:17:56 +0000
Received: by outflank-mailman (input) for mailman id 479509;
 Tue, 17 Jan 2023 14:17:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f95U=5O=zf.com=youssef.elmesdadi@srs-se1.protection.inumbo.net>)
 id 1pHmmh-0001wj-J4
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:17:55 +0000
Received: from de-smtp-delivery-114.mimecast.com
 (de-smtp-delivery-114.mimecast.com [194.104.109.114])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bb457df5-9671-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 15:17:53 +0100 (CET)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2108.outbound.protection.outlook.com [104.47.18.108]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-45-2c0gRrfxNTGPpfvLnMQOQg-1; Tue, 17 Jan 2023 15:17:49 +0100
Received: from AM5PR0802MB2578.eurprd08.prod.outlook.com
 (2603:10a6:203:9e::22) by AS8PR08MB8369.eurprd08.prod.outlook.com
 (2603:10a6:20b:56c::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Tue, 17 Jan
 2023 14:17:45 +0000
Received: from AM5PR0802MB2578.eurprd08.prod.outlook.com
 ([fe80::f7f1:3b45:d707:62b7]) by AM5PR0802MB2578.eurprd08.prod.outlook.com
 ([fe80::f7f1:3b45:d707:62b7%2]) with mapi id 15.20.6002.012; Tue, 17 Jan 2023
 14:17:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb457df5-9671-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zf.com; s=mczfcom20220728;
	t=1673965073;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type;
	bh=WI+xmRfVT/ikgkE1BrdEaoyfHX7oprp9wJar+Qx5TGM=;
	b=sVhGoXFBe7z+FGIYcMr6yTRMhBuP9HRlF30Ju+j87ZBEhVC7V/RMPvrztUMAvZTgBTOp81
	ASlUcv85z1j96emkKoJqyuWRbZEj2r+KPVfWssBfiiGlV1/5WrlAnqJDE5dybbWB/bZ7G0
	hwio5mD/8SjND5FuMSdLPLoHTcdfxGQ=
X-MC-Unique: 2c0gRrfxNTGPpfvLnMQOQg-1
From: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>
To: "psujkov@gmail.com" <psujkov@gmail.com>, "dario.faggioli@citrix.com"
	<dario.faggioli@citrix.com>, "ben.sanda@dornerworks.com"
	<ben.sanda@dornerworks.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Xentrace and Xenalyze on ARM S32G3
Thread-Topic: Xentrace and Xenalyze on ARM S32G3
Thread-Index: AdkqfUPTEGGUes6ITFy0mgmDyuKtLw==
Date: Tue, 17 Jan 2023 14:17:44 +0000
Message-ID: <AM5PR0802MB2578F1D80D0F9E7A22C2D2079DC69@AM5PR0802MB2578.eurprd08.prod.outlook.com>
Accept-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
msip_labels: MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_Enabled=true;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_SetDate=2023-01-17T14:17:43Z;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_Method=Privileged;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_Name=Internal sub2 (no
 marking);
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_SiteId=eb70b763-b6d7-4486-8555-8831709a784e;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_ActionId=72c8d641-f9cd-4380-aa13-147d61a2fa7c;
 MSIP_Label_7294a1c8-9899-41e7-8f6e-8b1b3c79592a_ContentBits=0
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: AM5PR0802MB2578:EE_|AS8PR08MB8369:EE_
x-ms-office365-filtering-correlation-id: 621c35cd-60d2-4eff-8131-08daf8959a3f
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
MIME-Version: 1.0
X-OriginatorOrg: zf.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM5PR0802MB2578.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 621c35cd-60d2-4eff-8131-08daf8959a3f
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 14:17:44.7858
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: eb70b763-b6d7-4486-8555-8831709a784e
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Ev/SbTFgUGrDJt3qXFDK0p84rt+PBlvUVd9YELTyQr9yG2FJptRlnKw3hGlN1lx5oCCYdlbM1txjK7oUWvAvt40PYSH+/VcvMQdYxQ+3bxc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8369
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: zf.com
Content-Language: de-DE
Content-Type: multipart/alternative;
	boundary="_000_AM5PR0802MB2578F1D80D0F9E7A22C2D2079DC69AM5PR0802MB2578_"

--_000_AM5PR0802MB2578F1D80D0F9E7A22C2D2079DC69AM5PR0802MB2578_
Content-Type: text/plain; charset=WINDOWS-1252
Content-Transfer-Encoding: quoted-printable

Hello everyone,

My name is Youssef. I am an electrical engineering student, and I am
currently working on my thesis using the Xen hypervisor on an S32G3
microprocessor (Arm architecture). After building my Yocto image using
the Linux BSP for the microprocessor, I noticed that Xentrace and
Xenalyze are not working because they are not compatible with the Arm
architecture. I found on Xen.markmail.org that you have already done
this; I tried to understand the changes that Ben and Paul made, but I
could not follow them. I would appreciate it if you could help me by
sending me the repo.

If you have any more information that could help me enable Xenalyze and
Xentrace, I would appreciate it.

The Xen version I am using is Xen 1.14 (it was downloaded automatically
by adding DISTRO_FEATURES_append += "xen" to the local.conf file).

Thank you very much in advance.

Cheers,
Youssef



--_000_AM5PR0802MB2578F1D80D0F9E7A22C2D2079DC69AM5PR0802MB2578_--



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:24:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:24:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479515.743413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmst-0003Q6-SK; Tue, 17 Jan 2023 14:24:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479515.743413; Tue, 17 Jan 2023 14:24:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmst-0003Pz-Ou; Tue, 17 Jan 2023 14:24:19 +0000
Received: by outflank-mailman (input) for mailman id 479515;
 Tue, 17 Jan 2023 14:24:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHmss-0003Pt-OW
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:24:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmsp-0001uG-7s; Tue, 17 Jan 2023 14:24:15 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHmsp-0000TQ-1k; Tue, 17 Jan 2023 14:24:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=E7YlRMGSPi1g4nZDvIxx8Y2IGAqrgBcbKzTtEPy6F+U=; b=PHHTrCvGLxgw65XGdXHwv3nQau
	Uu77G7Hbc0MkYcrAgeCNRJYQ+/AcqMGgM0hkHHXbRySpr7og72dcyqf580P9qWV6aetKh0duGi7uK
	9otv2iANDP2BdZFpnIasp3lRbbeUZJ1LxZPd0vm9j8gInjj2Icxz5i49cgSkWt2vufJU=;
Message-ID: <8caae9ce-671b-2a5a-4263-beb08c767861@xen.org>
Date: Tue, 17 Jan 2023 14:24:12 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: Xentrace and Xenalyze on ARM S32G3
Content-Language: en-US
To: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>,
 "psujkov@gmail.com" <psujkov@gmail.com>,
 "dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
 "ben.sanda@dornerworks.com" <ben.sanda@dornerworks.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <AM5PR0802MB2578F1D80D0F9E7A22C2D2079DC69@AM5PR0802MB2578.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM5PR0802MB2578F1D80D0F9E7A22C2D2079DC69@AM5PR0802MB2578.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 17/01/2023 14:17, El Mesdadi Youssef ESK UILD7 wrote:
> Hello everyone,

Hi,

> My name is Youssef, I am an electrical engineering student and right now I'm working on my thesis using the Xen hypervisor on an S32G3 microprocessor (ARM architecture). After building my Yocto image using the Linux BSP of the microprocessor, I noticed that Xentrace and Xenalyze are not working because they are not compatible with the ARM architecture. I have found on Xen.markmail.org that you have already done this; I tried to understand the changes that Ben and Paul made, but I could not understand them. I hope you can help me with that by sending me the repo.

You already asked this question in a separate thread. If you don't get 
an answer there, then it would be better to "ping" on the other thread.

This will avoid duplication on the ML. I will answer on the other thread 
soon.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:28:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479521.743424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmxF-00042u-Dc; Tue, 17 Jan 2023 14:28:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479521.743424; Tue, 17 Jan 2023 14:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHmxF-00042n-9v; Tue, 17 Jan 2023 14:28:49 +0000
Received: by outflank-mailman (input) for mailman id 479521;
 Tue, 17 Jan 2023 14:28:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHmxD-00042d-Q3; Tue, 17 Jan 2023 14:28:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHmxD-0001yy-OO; Tue, 17 Jan 2023 14:28:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHmxD-0004dw-F3; Tue, 17 Jan 2023 14:28:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHmxD-0003u2-EZ; Tue, 17 Jan 2023 14:28:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=II999gdkGiiiMog5ugUKHXxe7L7w7iu5RFG2D8HAr7U=; b=wI9UHYuXsefBV+CK1kES7BVceI
	IGDAkD5F3vN2KwYspWIXk4FvN5c997gCkLRz4GXQNU11zaD3RAlBpChGmlUX3Fjszyr5NmwGgB2+u
	8VYX5I98It/YMiV/KcZczwueB3P+KlWPOYV7q2gAnJ7yv66FKx7tAxr689kgxhl5UtiM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175934-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175934: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=d05739a3ff88457ae3ce90db3e91e9d2a11949c8
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 14:28:47 +0000

flight 175934 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175934/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 d05739a3ff88457ae3ce90db3e91e9d2a11949c8
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    4 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   27 attempts
Testing same since   175934  2023-01-17 13:40:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d05739a3ff88457ae3ce90db3e91e9d2a11949c8
Author: Konstantin Aladyshev <aladyshev22@gmail.com>
Date:   Wed Dec 14 00:22:22 2022 +0800

    Fix cyclic dependency error on OptionROM build
    
    EDKII build system supports OptionROM generation if particular PCI_*
    defines are present in the module INF file:
    ```
    [Defines]
      ...
      PCI_VENDOR_ID                  = <...>
      PCI_DEVICE_ID                  = <...>
      PCI_CLASS_CODE                 = <...>
      PCI_REVISION                   = <...>
    ```
    However, after commit d372ab585a2cdc5348af5f701c56c631235fe698
    ("BaseTools/Conf: Fix Dynamic-Library-File template") this is no longer
    possible.
    The build system fails with the error:
    ```
    Cyclic dependency detected while generating rule for
    "<...>/DEBUG/<...>.efi" file
    ```
    Remove "$(DEBUG_DIR)(+)$(MODULE_NAME).efi" from the 'dll' output files
    to fix the cyclic dependency.
    
    Signed-off-by: Konstantin Aladyshev <aladyshev22@gmail.com>
    Reviewed-by: Bob Feng <bob.c.feng@intel.com>

commit 987cc09c7cf38d628063062483e2341fba679b0e
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Mon Jan 16 10:46:39 2023 +0100

    ArmVirt: don't use unaligned CopyMem () on NOR flash
    
    Commit 789a72328553 reclassified the NOR flash region as EFI_MEMORY_WC
    in the OS visible EFI memory map, and dropped the explicit aligned
    CopyMem() implementation, in the assumption that EFI_MEMORY_WC will be
    honored by the OS, and that the region will be mapped in a way that
    tolerates misaligned accesses. However, Linux today uses device
    attributes for all EFI MMIO regions, in spite of the memory type
    attributes, and so using misaligned accesses is never safe.
    
    So instead, switch to the generic CopyMem() implementation entirely,
    just like we already did for VariableRuntimeDxe.
    
    Fixes: 789a72328553 ("OvmfPkg/VirtNorFlashDxe: use EFI_MEMORY_WC and drop AlignedCopyMem()")
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Ard Biesheuvel <ardb@kernel.org>

commit 47ab397011b6d1ce4d5805117dc87d9e35f378db
Author: Abner Chang <abner.chang@amd.com>
Date:   Wed Jan 11 11:10:08 2023 +0800

    MdeModulePkg/XhciPei: Unlinked XhciPei memory block
    
    Unlink the XhciPei memory block when it has been freed.
    
    Signed-off-by: Jiangang He <jiangang.he@amd.com>
    Cc: Hao A Wu <hao.a.wu@intel.com>
    Cc: Ray Ni <ray.ni@intel.com>
    Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
    Cc: Abner Chang <abner.chang@amd.com>
    Cc: Kuei-Hung Lin <Kuei-Hung.Lin@amd.com>
    Reviewed-by: Hao A Wu <hao.a.wu@intel.com>

commit be8d6ef3856fac2e64e23847a8f05d37822b1f14
Author: Abner Chang <abner.chang@amd.com>
Date:   Wed Jan 11 11:10:07 2023 +0800

    MdeModulePkg/Usb: Read a large number of blocks
    
    Changes to allow reading blocks that are greater than 65535 sectors.
    
    Signed-off-by: Jiangang He <jiangang.he@amd.com>
    Cc: Hao A Wu <hao.a.wu@intel.com>
    Cc: Ray Ni <ray.ni@intel.com>
    Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
    Cc: Abner Chang <abner.chang@amd.com>
    Cc: Kuei-Hung Lin <Kuei-Hung.Lin@amd.com>
    Reviewed-by: Hao A Wu <hao.a.wu@intel.com>

commit 8147fe090fb566f9a1ed8fde24098bbe425026be
Author: Abner Chang <abner.chang@amd.com>
Date:   Wed Jan 11 11:10:06 2023 +0800

    MdeModulePkg/Xhci: Initial XHCI DCI slot's Context value
    
    Initialize XHCI DCI slot's context entries value.
    
    Signed-off-by: Jiangang He <jiangang.he@amd.com>
    Cc: Hao A Wu <hao.a.wu@intel.com>
    Cc: Ray Ni <ray.ni@intel.com>
    Cc: Garrett Kirkendall <garrett.kirkendall@amd.com>
    Cc: Abner Chang <abner.chang@amd.com>
    Cc: Kuei-Hung Lin <Kuei-Hung.Lin@amd.com>
    Reviewed-by: Hao A Wu <hao.a.wu@intel.com>

commit 7cd55f300915af8759bdf1687af7e3a7f4d4f13c
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:35 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Return error if installing NotifyProtocol failed
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Installation of gQemuAcpiTableNotifyProtocol may fail. The error code
    should be returned so that the caller can handle it.
    
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-7-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 66f18fde49c7fe65818db0801cdaf63015e875e5
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:34 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Refactor QemuAcpiTableNotifyProtocol
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 installs the QemuAcpiTableNotifyProtocol at the
    wrong position. It should be called before TransferS3ContextToBootScript
    because TransferS3ContextToBootScript is the last operation in
    InstallQemuFwCfgTables(). Another error is that we should check the
    returned value after installing the QemuAcpiTableNotifyProtocol.
    
    This patch refactors the installation and error handling of
    QemuAcpiTableNotifyProtocol in InstallQemuFwCfgTables ().
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-6-min.m.xu@intel.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>

commit 2ef0ff39e53d2d2af3859b783882eea6f0beda64
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:33 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Add log to show the installed tables
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    Commit 9fdc70af6ba8 wrongly removed the log from InstallQemuFwCfgTables
    after ACPI tables are successfully installed. This patch adds the log
    back after all operations succeed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-5-min.m.xu@intel.com>

commit 165f1e49361a9a5f5936f2d582641096d0d7a2a2
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:32 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in QemuFwCfgAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mQemuAcpiHandle is not needed for anything, beyond the
    scope of the InstallQemuFwCfgTables(). So a local variable will
    suffice for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-4-min.m.xu@intel.com>

commit f81273f7fbb3defbef43313ada8397bbc202a1d0
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:31 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Use local variable in CloudHvAcpi.c
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The handle of mChAcpiHandle is not needed for anything, beyond the
    scope of the InstallCloudHvTablesTdx (). A local variable (ChAcpiHandle)
    suffices for storing the handle.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-3-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit 43b3ca6b7f626c6dcdc1a347ad8a42d8cf9ea575
Author: Min M Xu <min.m.xu@intel.com>
Date:   Wed Jan 11 09:22:30 2023 +0800

    OvmfPkg/AcpiPlatformDxe: Remove QEMU_ACPI_TABLE_NOTIFY_PROTOCOL
    
    BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4237
    
    The QEMU_ACPI_TABLE_NOTIFY_PROTOCOL structure is superfluous because NULL
    protocol interfaces have been used in edk2 repeatedly. A protocol instance
    can exist in the protocol database with a NULL associated interface.
    Therefore the QEMU_ACPI_TABLE_NOTIFY_PROTOCOL type, the
    "QemuAcpiTableNotify.h" header, and the "mAcpiNotifyProtocol" global
    variable can be removed.
    
    Cc: Laszlo Ersek <lersek@redhat.com>
    Cc: Erdem Aktas <erdemaktas@google.com>
    Cc: James Bottomley <jejb@linux.ibm.com>
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Gerd Hoffmann <kraxel@redhat.com>
    Cc: Tom Lendacky <thomas.lendacky@amd.com>
    Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
    Reported-by: Laszlo Ersek <lersek@redhat.com>
    Reviewed-by: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Min Xu <min.m.xu@intel.com>
    Message-Id: <20230111012235.189-2-min.m.xu@intel.com>
    Reviewed-by: Sebastien Boeuf <sebastien.boeuf@intel.com>

commit ba08910df1071bf5ade987529d9becb38d14a14a
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Thu Jan 12 23:41:02 2023 +0800

    OvmfPkg: fix OvmfTpmSecurityStub.dsc.inc include
    
    TPM support is independent of secure boot support.  Move the TPM
    include snippet out of the secure boot !if block.
    
    Fixes: b47575801e19 ("OvmfPkg: move tcg configuration to dsc and fdf include files")
    Bugzilla: https://bugzilla.tianocore.org//show_bug.cgi?id=4290
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:37:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:37:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479531.743435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHn5l-0005ZQ-Cg; Tue, 17 Jan 2023 14:37:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479531.743435; Tue, 17 Jan 2023 14:37:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHn5l-0005ZJ-9I; Tue, 17 Jan 2023 14:37:37 +0000
Received: by outflank-mailman (input) for mailman id 479531;
 Tue, 17 Jan 2023 14:37:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHn5k-0005ZD-7K
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:37:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHn5k-00028w-0c; Tue, 17 Jan 2023 14:37:36 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.7.198]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHn5j-0000y3-Qo; Tue, 17 Jan 2023 14:37:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=EhJsozbj9A6D5UjyFPjtZNmE3z5a+bX2LhJ7SEKtA3M=; b=FmQccu+8W35yIMmJnCHKfGTf6K
	MqXWTuOrBzp+6VynnplLu1ihDbbweLFnzPBB1QhY8AjF1wAzpSLpw11jo2aqFtZFgyjWlewyFvn5Y
	Bm714YvwX25CI64esQLt7wjqgfiE8y/YCusLUk/j8pbAHX3Vs1GB/zHmOGBVqljbuvR4=;
Message-ID: <619a00f0-0f9f-5f5f-13a7-ea86f9c24eec@xen.org>
Date: Tue, 17 Jan 2023 14:37:34 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: AW: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
Content-Language: en-US
To: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <AM5PR0802MB25781717167B5BFC980BF2A49DFF9@AM5PR0802MB2578.eurprd08.prod.outlook.com>
 <3e7059c2-0d23-03f2-9a93-f88de09171f4@xen.org>
 <AM5PR0802MB2578A1389424064D6884588E9DC29@AM5PR0802MB2578.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM5PR0802MB2578A1389424064D6884588E9DC29@AM5PR0802MB2578.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 13/01/2023 12:56, El Mesdadi Youssef ESK UILD7 wrote:
> Hello Julien,

Hi,

>>>> xentrace should work on upstream Xen. What version did you try?
> 
> While building my image using NXP's Linux BSP, the version that was downloaded was Xen 4.14.

Do you know where the sources are downloaded from?

> 
> 
>>>> Can you also clarify the error you are seen?
> 
> The error I receive when typing xentrace is: "Command not found".


"Command not found" means the program hasn't been installed.

> I assume in this Xen version, xentrace is not compatible with the ARM architecture.
Support for xentrace on Arm was added around Xen 4.12, so it should 
work with Xen 4.14 (even though I don't recommend using older 
releases).

I would suggest checking how you are building the tools and which 
packages will be installed.
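A minimal sketch of that check, runnable in the dom0 shell of the built image; the package name mentioned in the comment is an assumption and varies between layer versions:

```shell
#!/bin/sh
# Report whether the xentrace binary made it into the image at all.
check_xentrace() {
    if command -v xentrace >/dev/null 2>&1; then
        echo "installed"
    else
        # Not in PATH: the tracing tools were likely never packaged into
        # the image (in meta-virtualization they live in the xen-tools
        # package set -- the exact package name is an assumption).
        echo "not installed"
    fi
}
echo "xentrace: $(check_xentrace)"
```

If the binary is simply missing, the fix is on the Yocto packaging side rather than in Xen itself.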

> 
> My question is: is there any newer version of Xen that supports xentrace on ARM? If yes, how could I install it? Xen 4.14 was installed automatically by adding this (DISTRO_FEATURES_append += "xen") to the local.conf file while creating my image.

I am not familiar with NXP's Linux BSP, as it is not a project 
maintained by the Xen Project.

If you need support for it, then I would suggest discussing this with 
NXP directly.
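For reference, when the Xen recipes come from the upstream meta-virtualization layer rather than a BSP fork, the Xen version can usually be pinned from local.conf. A sketch under that assumption; the version strings are placeholders and must match a recipe actually present in your layers:

```
# local.conf (sketch; assumes meta-virtualization is in bblayers.conf)
DISTRO_FEATURES_append = " xen"
# Prefer a newer Xen than the layer default (versions are placeholders):
PREFERRED_VERSION_xen = "4.17%"
PREFERRED_VERSION_xen-tools = "4.17%"
```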

> 
> Or is there any xentrace source compatible with ARM on GitHub that I could download and compile myself?
Even if the hypervisor is Xen, you seem to be using code provided by an 
external entity. I can't advise on the next steps without knowing the 
modifications that NXP made to the hypervisor.

> 
>>>> Yes, if you assign (or provide a para-virtualized driver for) the GPIO/LED/CAN interface to the guest.
> 
> Is there any tutorial that could help me create those drivers? And how complicated are they to create?

I am not aware of any tutorial. Regarding the complexity, it all depends 
on what exactly you want to do.

> Or can they be assigned simply with PCI passthrough?

PCI passthrough is not yet supported on Arm. That said, I was not 
expecting the GPIO/LED/CAN interfaces to be PCI devices.

If they are platform devices (i.e. non-PCI devices), then you could 
potentially assign them directly to *one* guest (this would not work if 
you need to share them).

I wrote "potentially" because if the device is DMA-capable, then the 
device must be behind an IOMMU.
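For platform-device assignment, the usual upstream flow is to mark the node with the xen,passthrough property in the host device tree and then describe the device in the guest's xl configuration. A hedged sketch; the node path, MMIO range, and IRQ below are hypothetical placeholders, not real S32G3 values:

```
# guest.cfg (sketch -- device node, MMIO range and IRQ are hypothetical)
device_tree = "passthrough.dtb"     # partial device tree exposed to the guest
dtdev = [ "/soc/can@401b4000" ]     # host DT path of the assigned device
iomem = [ "0x401b4,1" ]             # MMIO region in 4KiB pages (base>>12,count)
irqs  = [ 112 ]                     # interrupt to route to the guest
```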

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:44:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:44:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479538.743446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHnC4-00071c-1l; Tue, 17 Jan 2023 14:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479538.743446; Tue, 17 Jan 2023 14:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHnC3-00071V-Ux; Tue, 17 Jan 2023 14:44:07 +0000
Received: by outflank-mailman (input) for mailman id 479538;
 Tue, 17 Jan 2023 14:44:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=p+P9=5O=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pHnC2-00071P-5q
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:44:07 +0000
Received: from sonic309-20.consmr.mail.gq1.yahoo.com
 (sonic309-20.consmr.mail.gq1.yahoo.com [98.137.65.146])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f2c546c-9675-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 15:43:58 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic309.consmr.mail.gq1.yahoo.com with HTTP; Tue, 17 Jan 2023 14:44:01 +0000
Received: by hermes--production-ne1-5648bd7666-6sqtt (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 1d9b16749e1e59c9a71991ed05e6bc3c; 
 Tue, 17 Jan 2023 14:43:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f2c546c-9675-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673966641; bh=xGBV8fjshiR7XRyMDkAYvY70ELJ31pbSDB+YMPRpZu4=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=SQxuMaJRY96JgGtBx+QFRKz5xwa+0WpmWbfPbt6tkELS2aP9jiE+Jwoz0e2XJ/lyQDKmEC8N52hjhGkUwEKdI+v7hpv43RBhsuGd3+lrRe1MnacNtbO9mIAI6EqdNMKRU0AkIipuAM3TWd4krjtLB5qclLzYipNrGbGMPVsUIZPXDUk8TmktPFZI12MmpHZD+Mt4bIqceBu9ovExJSKTTgEBEbzhFVezLAs2GYoEdcIxAqAXACuU4AiUzGbYQEgYliRtA37n02wNFvaXd9l4eODIlMPB0Ftc02ZNZIh8YxnjU3gDncDLQiwDDH01b1XYufyi9Z4Mx+9vP21X35oOqQ==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673966641; bh=LX2yVf0u5xwlznpUNcW5V/isVG4g9tUVRmMm+0QFY6Q=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=jRyqCombVSDgqmTMV+6/MPiPGTdVbs1R5CFp0Un2PlMovVehfWlRL/RByh2hkQQCPQ/lbzgS2BbC4+fmKH6TPbJBOhf6nIYY2myb2zoPWSwWdb6KE+KavaoAQxsAaZ6AWP+y2y0Wl4COyPEPceOqAxCQb51C5NWniK/4OZ8V1GeF+Ms2slZc2Yz09yxdgH5sgEoMA+YmVccOPpWEIYVsqnSbk1i8Xw9nDRd5ENuh4la5cWMv6TxxodtK1iVdVUanCpNdzP8/19U/kqUTyux1+xFXpgTodxamtGksgmgjGl+8bt55pXZrLomRsfQN+x15wE1JjudrWnTSTzGUUoZuIw==
X-YMail-OSG: PqPIgf4VM1lDM9MKddB8B6mogUcazQI3DXLf.srBfccWT8ITgndli_BvJtH.L5x
 23vwZQfdgwQ4psBuv4UXsbGFZIUo3kW4aMwK9E4MzswYnGLEXjuHvtRrM2GUQTV8D830oc1lOVsL
 5Intg.XmPuk8z2YplCJkieLl4QibfWaEShXI7hgWZESD899XlZIVuUUVcW291NZjS_TWdVB3atvV
 QYHArad_yoxybH1roiLCLdMLBAIj2wPmgZeyhiMDwVgfDib9I5gBwBSLG1RuFNX7gU3T4bl0i9Xh
 XKuEQ1qWPIv540H5UastdfX4NVkQoxNqTWiwn8iLpajY9ILiSQat3YUhd6M1mLqWyEmiSN_qTxtm
 kRXc32z.9L0oYQ7Ls6AEXqmjm96ziI2WUoIdY2SWUSwpeQW6VCZEVAwWAc1up5h87NSkx2lpOhWF
 reAC6rGr5s_04at8axpnbAPELtnj6G2J1ssKw.nugIhvqVbG2ZQrlhd4.QfmH4ZgUQJOjN0tnFFD
 CkmJF0APonnVCbOIj8FGU5BL07G5AxzOvr.Lf71kEZL2h40qjJnDpskWYFf0ktvLpNe4kmZAvLJO
 3RAbawfmJyCTWqiYP1._8JYb37YIyw6e0HUN6iSk67DCxklcQpopFU_6nWCvIxBb6re0P2q1tO8d
 4b9VMSPg8w66VZ63oaJEKXhHvUaMNnspJvbVsfgfSU5xV.OSGHMFJyMfo_mdfRBi0qZWW6JRiafk
 7CvhmEwwLy7dP0tYV5iRh4AfOlJ.AUdWezTIMyEguFcWF9NuqFtyuZgTslt9twbANHiUB_eV.ZmX
 DnVVCmYGeH8lPFFbJOfUwsCwQRIkEBEshn_qBUVIV_Pu3xCE5BEDxgQqGg1m9SdatA.xzF2r4ihC
 fvsb1TnU9OLCwNJVnHVpo5JkqI7EraOkiCldMv3P2T2OYChjKQ0M2gCNlYL63fmE6MjRTW5B.9E5
 C_5i7R7VkQRO5GOL0OnS29e6z34ZBD7SRu6OybeBv92nTPWv.ssLPi8DYTpkx3A1Dms5pg5PbO4y
 TcubZUrtIPp265pOIbYkPaS3m1mcqGC_Cma32cmsxYIj69JTbg.WQITpgk.qIgNeiP6aZdMtnSP9
 y7w8_c3L1jnoDFVF1DvqrB4Q1T0CcnvVWNyWNT3Y1uPtH7c8HKI1WlXoU2NpoXXORXtDGYEDGlEj
 ieXujLnRAPakppE.Lc3w8dINty.i3CN.3vCf0nInnfy4d65dWrCbi4p2Pb6AIU75Iifd0f2YFCZR
 e4uTSUcyZaQfm0S3MM2dphQh1cEZYu.JX7A.JJ0cA67hPLRGnm.d.FXMvfbuEm_Ti7whrS5p6lyq
 kpQx.HY7ariN9ydc.gRnCh4OxxepfYrA1ioC9LnmkcPlg3NrXw4nB_3qdywWBEvPurX_VSK2XeKt
 a_byXJp9OP1_zo_PsgEXh_0VWKTewbgkil7KyKtWTP0pPonC55fWEU5iMvvUQdyKhIvNVR6z37SQ
 JhFw5H2Am_6G597N.2FbffoQvdWzBLYOf3zvC7l8CeyFpesp2NSbcj9cF6W08BtTYDXG.n2DCRGd
 kPdM2Nhk4NFaEZWm_aVl9IaagknHTf03X2wDXF6j5BxKSCLwQLTKbevv6v7yxpZXlK0k6kmDCiE4
 ff6DEEqtoE0Ah_KGMsge4NNT6PrKH75BPgGdAKfVM3IMTt.l4pnpltDbcMD_1dOAgLYwjY7YLXXH
 3hmhPYDZNAhb0Wl3VsiDT4CtenT.vB8YKDS_42nDRr2G.7wCr5V5S9eDzzcMv2JmVyswN_BRNkgB
 OBIqh1ZsMMn6.U0CRtKqfTXioWHupzXTt8uMLfZUBnO9Ja2f3abCKTCCZO09xajbIsNFgb6LGVub
 2z4D._HgrjexYq3cw2ikpvPFTaBrL4plMttyE59VgVtuTwtAny_rPXrG3gL9mhUBnwi2asCnB4wJ
 NccfTYydVEuHxcI5TuRZ81zy0C0s5SsfpXpFDCDRodxPdq2mbo0SC3RoY7QuLjIeOsE0WD_VZ9EB
 UpoJIiKkIKr5OYf6BRCRZ6.UskIISMjvCiSODz_XDV7kuQzPaXWgRKwp3234NO.gJxF1IKXgrrql
 aLNzOicnHU55hgXveGFARIC7j9yCRbXbsBLy7oryrVsJNHIVk8S8jyvxJqWqpYjUIVxSsxT6KIWH
 e1oEF5srqHFdtB3cj050C1aonR7s34wb5CgrtsZXr2kJ3H.0fbwC1Jn3HrB3Ut_vOE2p0MkKl38r
 Df.RhSoi1GpZvQScQakbutXfW4qTH
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <a3aed167-74f7-aa4c-1bc6-84f116feb702@aol.com>
Date: Tue, 17 Jan 2023 09:43:52 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: Igor Mammedov <imammedo@redhat.com>,
 Chuck Zmudzinski <brchuckz@netscape.net>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, Anthony Perard <anthony.perard@citrix.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
 <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
 <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
 <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
 <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
 <20230117113513.4e692539@imammedo.users.ipa.redhat.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230117113513.4e692539@imammedo.users.ipa.redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4028

On 1/17/2023 5:35 AM, Igor Mammedov wrote:
> On Mon, 16 Jan 2023 13:00:53 -0500
> Chuck Zmudzinski <brchuckz@netscape.net> wrote:
>
> > On 1/16/23 10:33, Igor Mammedov wrote:
> > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > >   
> > >> On 1/13/23 4:33 AM, Igor Mammedov wrote:  
> > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > >> >     
> > >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:    
> > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:      
> > >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> > >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> > >> >> >> always call it there unconditionally -- basically turning three lines
> > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> > >> >> >> Michael further suggests to rename it to something more general. All
> > >> >> >> in all no big changes required.      
> > >> >> > 
> > >> >> > yes, exactly.
> > >> >> >       
> > >> >> 
> > >> >> OK, got it. I can do that along with the other suggestions.    
> > >> > 
> > >> > have you considered instead of reservation, putting a slot check in device model
> > >> > and if it's intel igd being passed through, fail at realize time  if it can't take
> > >> > required slot (with an error directing user to fix command line)?    
> > >> 
> > >> Yes, but the core pci code currently already fails at realize time
> > >> with a useful error message if the user tries to use slot 2 for the
> > >> igd, because of the xen platform device which has slot 2. The user
> > >> can fix this without patching qemu, but having the user fix it on
> > >> the command line is not the best way to solve the problem, primarily
> > >> because the user would need to hotplug the xen platform device via a
> > >> command line option instead of having the xen platform device added by
> > >> pc_xen_hvm_init functions almost immediately after creating the pci
> > >> bus, and that delay in adding the xen platform device degrades
> > >> startup performance of the guest.
> > >>   
> > >> > That could be less complicated than dealing with slot reservations at the cost of
> > >> > being less convenient.    
> > >> 
> > >> And also a cost of reduced startup performance  
> > > 
> > > Could you clarify how it affects performance (and how much).
> > > (as I see, setup done at board_init time is roughly the same
> > > as with '-device foo' CLI options, modulo time needed to parse
> > > options which should be negligible. and both ways are done before
> > > guest runs)  
> > 
> > I preface my answer by saying there is a v9, but you don't
> > need to look at that. I will answer all your questions here.
> > 
> > I am going by what I observe on the main HDMI display with the
> > different approaches. With the approach of not patching Qemu
> > to fix this, which requires adding the Xen platform device a
> > little later, the length of time it takes to fully load the
> > guest is increased. I also noticed with Linux guests that use
> > the grub bootloader, the grub vga driver cannot display the
> > grub boot menu at the native resolution of the display, which
> > in the tested case is 1920x1080, when the Xen platform device
> > is added via a command line option instead of by the
> > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > to Qemu, the grub menu is displayed at the full, 1920x1080
> > native resolution of the display. Once the guest fully loads,
> > there is no noticeable difference in performance. It is mainly
> > a degradation in startup performance, not performance once
> > the guest OS is fully loaded.
> above hints on presence of bug[s] in igd-passthru implementation,
> and this patch effectively hides problem instead of trying to figure
> out what's wrong and fixing it.
>

Why did you delete the rest of my answers to your other observations and
not respond to them?


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 14:50:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 14:50:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479545.743457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHnIJ-0008SM-OT; Tue, 17 Jan 2023 14:50:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479545.743457; Tue, 17 Jan 2023 14:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHnIJ-0008SF-KG; Tue, 17 Jan 2023 14:50:35 +0000
Received: by outflank-mailman (input) for mailman id 479545;
 Tue, 17 Jan 2023 14:50:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=p+P9=5O=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pHnIH-0008S9-Fr
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 14:50:33 +0000
Received: from sonic314-19.consmr.mail.gq1.yahoo.com
 (sonic314-19.consmr.mail.gq1.yahoo.com [98.137.69.82])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 462d71de-9676-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 15:50:26 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Tue, 17 Jan 2023 14:50:28 +0000
Received: by hermes--production-ne1-5648bd7666-2l5qw (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 85422408e56590c9cc34f8e4190b8259; 
 Tue, 17 Jan 2023 14:50:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 462d71de-9676-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1673967028; bh=zbbQi8zwre/KEVh6Oq0xkk76DAYGxuRn4Lq8OgN4AUU=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=S11dkoYSkTTwAjYYB6vAkxeRy90Ulcap8q4/fqnkZWd7gWBIUHBHh1mtbVBW9ATFphg25rUWhyIACNS5vc4n8tNP3hvyopGsEPDbI5v6zuHkW8Rz7g4fjPXEyxRQS/NUTZB5nbsTMnrDW+yijejc4752Me/aPuLKQLvNUXfRZUHg3dRIXwjHE/gBdJrcmfdUmJ6wufdxZqRNYclDH0LXRXjBPt2oEVLfPPmq3KcPVO/wl9cj6XittwTmdCAHKFCRlFXLWWIJ5ikxDrAuuU6tmyyTRb/ZBq8AgCk2Uyhdrwfg0VusDEAVbIT6slh/mTPiQPVhOb3U/bKsFCbb8X+jxg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1673967028; bh=27WRokyUQs+tw9svfWRP67Tcu91NE30GKn+kWklBwsj=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=ILKd+oNiO5xObPwycnPwx4PxkXq1WkLtpcmLSm64jd5OGt7gJJka5se5fGDArDHXdw6/XzNuLwsrXPWKCUtF5CXNjKMRNEnf/VdvowpPKiewWYl/Wq74S4P7MxFzAEz2JyLP5fK3hEbdKo/8JoX4DIDz2BJgCVX34tGYYnaboc34sap2ZUzKjcE9YsvA5lVVmElFLTK+IX4wgTr964U3TWziZQlz5UigJAd9LAdKbml5g4lq6CILTiOxyUxm6qJuiIG83aEZfXhFl5eQY/7V2/628UEXEzoTMJf/xAM8rcLKL+MybuyLvIho36QIoaFdtdJcZG6HCtv0hxQOh++Y1Q==
X-YMail-OSG: 8RXPhAUVM1lKMuKNZgnxuEZyfja_vPAjQyEBtkRqdvcce_eWZDOX4xoZq6aEkXi
 B1zsCAAZZRLlAb0AdyZYo1Stlq.07.Y0vD6oqZfPTazGd245cUAPC8rdwhHmiYSirlm3u5LRZ_ME
 B2U1sE1bRtIqvmpDEtGSSGlAYaqHY5_gJg.n2agNvU1AKj9ViYiJGIvSSp.ryk5CbHn6JZIl.ln5
 d5tzxAPB6KmdK64nENtArK0cAWRS5vp_M4shsDuGBJCII3gRleHOJK.5vQufyjgR08LUoT1Rm8B5
 0oowqDPuZmLGVU4S311o4wRQ6XDMStTA2WM76l3.RduuL2aYuqv6wFas.r88oZglwndPQNlo1Eo0
 9SCjxEctObPBEqxMT8n3rfIvNbI2iSKKwAFTa19r7pktFRdT6PZzlQjMLDXCFzFTMWIf8YfWxAz8
 JCj11s3figHwZ7QlSegdxanSk6ens7QwDqeXhVXWV6WU2.ILLmFuJhd61yMK4_2.mEQtuKWbrAcC
 bSspGp6M_BWaLH4nP5I4E1QooFvl6QgVLVjwE4_waq.eRu7GXA9dBC0MeNDa85PFrb_lSiNDd9gd
 ghqYUhPlRfSi12Lv7VJYE9JEp1rIiWosq0bszPnVfnR3ga9ApO2rqpyEAGoMK.iSYedqr9tOdLEE
 4GtLKOyu5dGwc8Tq5XDwJ9KzrIKTLo6c0a8zl0OoFXmGTj6sl3h4mHSfBMM7kj01wZiFW.3osc_F
 FwBHaYNJq5EQ4dIHcubOy5PvhFWaFJsbDK5JmyB3m846SEiah1gq6hu1t3Pg02h1Wn4JubTtXvE.
 mlh6PDT2TNVaa4HLx6Grt368QN6f4nbtU.GTW9tLlkFfSU3cUFhjB8YDNWYth7nHaWbN9TlQkwT0
 4pY4Qm8orvW6nYv0f5Qv7Wq9jhN1XxotJABQQYCMfXxyEA9DWqKVyFjuaFIcLQeg_jDRTnUF2ZS8
 kGkTXtqK6BDYieaxzRe.vqj96Z9OYA.6WzUX19oeC5a28pBUb8UlunrNomw.jGHedEYS8is7FlSA
 94Z.6SClDAySeiUkV_lX7k3dt3CXg4NiFryely0LRJogwTidc8d05Ab28nUzS2Of7Q9MlGJTBRHd
 8oKO57uDzFBDkg8N.k_j9xgq6KDlDrSd3sxyDhpjFOzGVjeNQhj8SOgq_JI8jD9AkHJkbHrQY5n7
 aQcDNR.gtD8w1R03iv7lYM5Dz4t9Y_NEEyYoHwxiQeh.u5G0poSJw7TBbTPGSRsZZaG8fz.jbXUN
 VFnmCvzFK3xPWy8p98DDjU34A0fxGsSS4BRgOBW8QBnCpR_MGhY.stuanbSEk8mYD08MOGY.DcTj
 MmHPdFISruierd9U7tBlYLlT2hjVAZCo.Tu1MlCjFC2MkBssMKGPa4YdDjlyhuJ2iPIn4PRlzOEE
 o8sddjyX4lP9hMSPNW1KgA_5qVVm_FVWhkEEFaCYd0PgjPVhJwTSMSrpmWg0_a50jXZOqHw8DV.A
 FcEvRLh1RvqVjP1WPq3qfVc37gCuCqJ7yfJbcoZeTAOPQFk9k5tdh5K5K_tm7Qmxx0L.CQQfNcKT
 lQSwzFyY0abDYkua0N8Pukmi0kknZfvq7BYfzUk7etIAxrnfc2zdb81TnU_V6LZvTH7eOZeUe2PE
 L1KrT6LcX7047paHCNidKsPxZl7cIlyz7Yvnl5CUZ.QXzWuWSQUbEIqmDQQ.Hco6dwVfitb0rCRV
 A5GKvbGvd0B.PSEqhKtMRhs_PGXF3gQ6dlzlVDavE2LCwzH5_UpCj3iZ.cJb1mn543rKTpXBaTmu
 G8EzDnI6SNDfjtGebnWDdE6Q.UsoczrDUSSmsqz6wTD05G2i32E3rGBYM6IFsFP5Jjjzrco7Ek3c
 1LS5_PCVxBC0M_z5hcsDuyaArQUw6RSfX6eD143w_Sn5mzxIKlSPgR.x0Yw4xZ_waN3_il956gMH
 zgN_1ipVFh8wrPil6Np5UzaI_CHWLh3wBfYpNla9eT91xh88oERjD.D3kEVGDlCayVzNvOnmYu2S
 BQ41QobJyn2bhDUE479lbwIZjCkw5XfV1pwj1wMxzxJ5QaVs_M.x9cSDkf.opBhGMPOtkZHTsyEi
 srHW4ZencRlkjGP5.yS_gu4DBaXvdAa0RK_6ydZAh4G0rDae8jhBlJFz8vRi7repc8kpVzECjuDM
 k9nwA83I400wjFnI0U_a6n2jsClSpeT1UdMp0LM.voP0koSsW3f.Za0zj9Tk76ye__osZTTSbW_f
 rIMqF1p0rFSF913J5
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <1c7d1fd7-d29b-1ae1-a7f4-0fea811b56c5@aol.com>
Date: Tue, 17 Jan 2023 09:50:23 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: Igor Mammedov <imammedo@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, Anthony Perard <anthony.perard@citrix.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
 <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
 <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
 <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
 <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
 <20230117113513.4e692539@imammedo.users.ipa.redhat.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230117113513.4e692539@imammedo.users.ipa.redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4114

On 1/17/2023 5:35 AM, Igor Mammedov wrote:
> On Mon, 16 Jan 2023 13:00:53 -0500
> Chuck Zmudzinski <brchuckz@netscape.net> wrote:
>
> > On 1/16/23 10:33, Igor Mammedov wrote:
> > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > >   
> > >> On 1/13/23 4:33 AM, Igor Mammedov wrote:  
> > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > >> >     
> > >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:    
> > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:      
> > >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> > >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> > >> >> >> always call it there unconditionally -- basically turning three lines
> > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> > >> >> >> Michael further suggests to rename it to something more general. All
> > >> >> >> in all no big changes required.      
> > >> >> > 
> > >> >> > yes, exactly.
> > >> >> >       
> > >> >> 
> > >> >> OK, got it. I can do that along with the other suggestions.    
> > >> > 
> > >> > have you considered instead of reservation, putting a slot check in device model
> > >> > and if it's intel igd being passed through, fail at realize time  if it can't take
> > >> > required slot (with an error directing user to fix command line)?    
> > >> 
> > >> Yes, but the core pci code currently already fails at realize time
> > >> with a useful error message if the user tries to use slot 2 for the
> > >> igd, because of the xen platform device which has slot 2. The user
> > >> can fix this without patching qemu, but having the user fix it on
> > >> the command line is not the best way to solve the problem, primarily
> > >> because the user would need to hotplug the xen platform device via a
> > >> command line option instead of having the xen platform device added by
> > >> pc_xen_hvm_init functions almost immediately after creating the pci
> > >> bus, and that delay in adding the xen platform device degrades
> > >> startup performance of the guest.
> > >>   
> > >> > That could be less complicated than dealing with slot reservations at the cost of
> > >> > being less convenient.    
> > >> 
> > >> And also a cost of reduced startup performance  
> > > 
> > > Could you clarify how it affects performance (and how much).
> > > (as I see, setup done at board_init time is roughly the same
> > > as with '-device foo' CLI options, modulo time needed to parse
> > > options which should be negligible. and both ways are done before
> > > guest runs)  
> > 
> > I preface my answer by saying there is a v9, but you don't
> > need to look at that. I will answer all your questions here.
> > 
> > I am going by what I observe on the main HDMI display with the
> > different approaches. With the approach of not patching Qemu
> > to fix this, which requires adding the Xen platform device a
> > little later, the length of time it takes to fully load the
> > guest is increased. I also noticed with Linux guests that use
> > the grub bootloader, the grub vga driver cannot display the
> > grub boot menu at the native resolution of the display, which
> > in the tested case is 1920x1080, when the Xen platform device
> > is added via a command line option instead of by the
> > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > to Qemu, the grub menu is displayed at the full, 1920x1080
> > native resolution of the display. Once the guest fully loads,
> > there is no noticeable difference in performance. It is mainly
> > a degradation in startup performance, not performance once
> > the guest OS is fully loaded.
> above hints on presence of bug[s] in igd-passthru implementation,
> and this patch effectively hides problem instead of trying to figure
> out what's wrong and fixing it.
>

I agree that with the current Qemu there is a bug in the igd-passthru
implementation. My proposed patch fixes it. So why not support the
proposed patch with a Reviewed-by tag?


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 15:02:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 15:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479551.743469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHnTq-0001a5-Re; Tue, 17 Jan 2023 15:02:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479551.743469; Tue, 17 Jan 2023 15:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHnTq-0001Zy-MR; Tue, 17 Jan 2023 15:02:30 +0000
Received: by outflank-mailman (input) for mailman id 479551;
 Tue, 17 Jan 2023 15:02:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHnTq-0001Zo-9M; Tue, 17 Jan 2023 15:02:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHnTq-0002jl-70; Tue, 17 Jan 2023 15:02:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHnTp-0005Nd-RD; Tue, 17 Jan 2023 15:02:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHnTp-0004KF-Qp; Tue, 17 Jan 2023 15:02:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5fWJbVZpgkCxplOgxVEoh+GVRgDuqtrTu4X2OpHPEO4=; b=ZH3gF35h5/Q2hKfw8WpDRVpfNH
	OSa2u500DalK4R98ZjU2YjaTTdnIOf00ptDfOXmfglu1mZpWjw2egD84Tva8WUArYwfM7R2FW/T01
	cxpiKWLl0CjPdEOyZHfITLQHHHnJ4GAhbCVLFAOgibL0t8Vmmi1x0qaSSXYLv1T454X4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175935-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175935: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=a107ad0f623669c72997443dc0431eeb732f81a0
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 15:02:29 +0000

flight 175935 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175935/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 a107ad0f623669c72997443dc0431eeb732f81a0
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    4 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   28 attempts
Testing same since   175935  2023-01-17 14:42:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 323 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 15:32:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 15:32:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479560.743479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHnwY-0004xs-8i; Tue, 17 Jan 2023 15:32:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479560.743479; Tue, 17 Jan 2023 15:32:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHnwY-0004xl-5y; Tue, 17 Jan 2023 15:32:10 +0000
Received: by outflank-mailman (input) for mailman id 479560;
 Tue, 17 Jan 2023 15:32:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHnwW-0004xf-U1
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 15:32:09 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1634401c-967c-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 16:32:01 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 88F086896D;
 Tue, 17 Jan 2023 15:32:05 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1E6F51390C;
 Tue, 17 Jan 2023 15:32:05 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id K4UeBnW/xmPaTgAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 15:32:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1634401c-967c-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673969525; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=a9LZzcQ1wBQ7DCn68UxwLj36sYNRsJCxc5KjuK09bEw=;
	b=oToo5eMTgEofYlyer6Y0PW/GH7bZ2ChC82iOYxSIYa3cyfGoZRJ7E+TxMJOiM3pEZLuYIF
	I03hwgNkgU2dWl/lIzWzDmUpl8pO3g2BjlOq/iDIVNrVzv0EQGunrOo4+K32I1R8b9nya2
	bqjdII0+fqtswdis+odrOqZfaT/E8t4=
Message-ID: <05d451d0-ab2b-53a5-d666-529b0880b259@suse.com>
Date: Tue, 17 Jan 2023 16:32:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, linux-pm@vger.kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Len Brown <len.brown@intel.com>,
 Pavel Machek <pavel@ucw.cz>, Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <20230113140610.7132-1-jgross@suse.com>
 <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
 <e5cc2f96-82bc-a0dc-21fa-2f605bc867d1@suse.com>
 <CAJZ5v0ggSbR7+RiXuJo4aq+PYWS-eb9R2iSr0DFfVhcaJ1bfUQ@mail.gmail.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen
In-Reply-To: <CAJZ5v0ggSbR7+RiXuJo4aq+PYWS-eb9R2iSr0DFfVhcaJ1bfUQ@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------pQ03Tt00E1mOWOXtVZrZRoHz"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------pQ03Tt00E1mOWOXtVZrZRoHz
Content-Type: multipart/mixed; boundary="------------hO6119jhcG5jS1kjYpjos0iV";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, linux-pm@vger.kernel.org,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, Len Brown <len.brown@intel.com>,
 Pavel Machek <pavel@ucw.cz>, Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Message-ID: <05d451d0-ab2b-53a5-d666-529b0880b259@suse.com>
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen
References: <20230113140610.7132-1-jgross@suse.com>
 <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
 <e5cc2f96-82bc-a0dc-21fa-2f605bc867d1@suse.com>
 <CAJZ5v0ggSbR7+RiXuJo4aq+PYWS-eb9R2iSr0DFfVhcaJ1bfUQ@mail.gmail.com>
In-Reply-To: <CAJZ5v0ggSbR7+RiXuJo4aq+PYWS-eb9R2iSr0DFfVhcaJ1bfUQ@mail.gmail.com>

--------------hO6119jhcG5jS1kjYpjos0iV
Content-Type: multipart/mixed; boundary="------------ts1PE2ihPsdh0koBqvijVVKl"

--------------ts1PE2ihPsdh0koBqvijVVKl
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.01.23 15:09, Rafael J. Wysocki wrote:
> On Mon, Jan 16, 2023 at 7:45 AM Juergen Gross <jgross@suse.com> wrote:
>>
>> On 13.01.23 20:40, Rafael J. Wysocki wrote:
>>> On Fri, Jan 13, 2023 at 3:06 PM Juergen Gross <jgross@suse.com> wrote:
>>>>
>>>> Commit f1e525009493 ("x86/boot: Skip realmode init code when running as
>>>> Xen PV guest") missed one code path accessing real_mode_header, leading
>>>> to dereferencing NULL when suspending the system under Xen:
>>>>
>>>>       [  348.284004] PM: suspend entry (deep)
>>>>       [  348.289532] Filesystems sync: 0.005 seconds
>>>>       [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
>>>>       [  348.292457] OOM killer disabled.
>>>>       [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
>>>>       [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
>>>>       [  348.749228] PM: suspend devices took 0.352 seconds
>>>>       [  348.769713] ACPI: EC: interrupt blocked
>>>>       [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
>>>>       [  348.816080] #PF: supervisor read access in kernel mode
>>>>       [  348.816081] #PF: error_code(0x0000) - not-present page
>>>>       [  348.816083] PGD 0 P4D 0
>>>>       [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
>>>>       [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
>>>>       [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
>>>>       [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20
>>>>
>>>> Fix that by adding an indirection for acpi_get_wakeup_address() which
>>>> Xen PV dom0 can use to return a dummy non-zero wakeup address (this
>>>> address won't ever be used, as the real suspend handling is done by the
>>>> hypervisor).
>>>
>>> How exactly does this help?
>>
>> I believed the first sentence of the commit message would make this
>> clear enough.
> 
> That was clear, but the fix part wasn't really.
> 
>> I can expand the commit message to go more into detail if you think
>> this is really needed.
> 
> IMO calling acpi_set_waking_vector() with a known-invalid wakeup
> vector address in dom0 is plain confusing.
> 
> I'm not sure what to do about it yet, but IMV something needs to be done.

Another possibility would be to modify acpi_sleep_prepare(), e.g. like the
attached patch (compile tested only).


Juergen
--------------ts1PE2ihPsdh0koBqvijVVKl
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-acpi-fix-suspend-with-Xen-PV.patch"
Content-Disposition: attachment;
 filename="0001-acpi-fix-suspend-with-Xen-PV.patch"
Content-Transfer-Encoding: 8bit

From 5a2de7250c4e8782ed89c7898061815058fa2710 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Tue, 17 Jan 2023 16:20:04 +0100
Subject: [PATCH] acpi: fix suspend with Xen PV
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Commit f1e525009493 ("x86/boot: Skip realmode init code when running as
Xen PV guest") missed one code path accessing real_mode_header, leading
to dereferencing NULL when suspending the system under Xen:

    [  348.284004] PM: suspend entry (deep)
    [  348.289532] Filesystems sync: 0.005 seconds
    [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
    [  348.292457] OOM killer disabled.
    [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
    [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
    [  348.749228] PM: suspend devices took 0.352 seconds
    [  348.769713] ACPI: EC: interrupt blocked
    [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
    [  348.816080] #PF: supervisor read access in kernel mode
    [  348.816081] #PF: error_code(0x0000) - not-present page
    [  348.816083] PGD 0 P4D 0
    [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
    [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
    [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
    [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20

Fix that by adding an optional acpi callback allowing to skip setting
the wakeup address, as in the Xen PV case this will be handled by the
hypervisor anyway.

Fixes: f1e525009493 ("x86/boot: Skip realmode init code when running as Xen PV guest")
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/acpi.h | 8 ++++++++
 drivers/acpi/sleep.c        | 6 +++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
index 65064d9f7fa6..8eb74cf386db 100644
--- a/arch/x86/include/asm/acpi.h
+++ b/arch/x86/include/asm/acpi.h
@@ -14,6 +14,7 @@
 #include <asm/mmu.h>
 #include <asm/mpspec.h>
 #include <asm/x86_init.h>
+#include <asm/cpufeature.h>
 
 #ifdef CONFIG_ACPI_APEI
 # include <asm/pgtable_types.h>
@@ -63,6 +64,13 @@ extern int (*acpi_suspend_lowlevel)(void);
 /* Physical address to resume after wakeup */
 unsigned long acpi_get_wakeup_address(void);
 
+static inline bool acpi_skip_set_wakeup_address(void)
+{
+	return cpu_feature_enabled(X86_FEATURE_XENPV);
+}
+
+#define acpi_skip_set_wakeup_address acpi_skip_set_wakeup_address
+
 /*
  * Check if the CPU can handle C2 and deeper
  */
diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
index 0b557c0d405e..4ca667251272 100644
--- a/drivers/acpi/sleep.c
+++ b/drivers/acpi/sleep.c
@@ -60,13 +60,17 @@ static struct notifier_block tts_notifier = {
 	.priority	= 0,
 };
 
+#ifndef acpi_skip_set_wakeup_address
+#define acpi_skip_set_wakeup_address() false
+#endif
+
 static int acpi_sleep_prepare(u32 acpi_state)
 {
 #ifdef CONFIG_ACPI_SLEEP
 	unsigned long acpi_wakeup_address;
 
 	/* do we have a wakeup address for S2 and S3? */
-	if (acpi_state == ACPI_STATE_S3) {
+	if (acpi_state == ACPI_STATE_S3 && !acpi_skip_set_wakeup_address()) {
 		acpi_wakeup_address = acpi_get_wakeup_address();
 		if (!acpi_wakeup_address)
 			return -EFAULT;
-- 
2.35.3
--------------ts1PE2ihPsdh0koBqvijVVKl
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------ts1PE2ihPsdh0koBqvijVVKl--

--------------hO6119jhcG5jS1kjYpjos0iV--

--------------pQ03Tt00E1mOWOXtVZrZRoHz
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPGv3QFAwAAAAAACgkQsN6d1ii/Ey/t
pQf9HeeMJXbkh42aotcK+nPu2x0SigRqr7VfWbrHxxTP6gHTMjFt1xW8k9aqIIqiXxeVdaye+exK
wyODG2Wrnqgr1EWBD+TP+rqrDz7mBb1M0RaqD6cw2MWoaLWCqBhLgXX+SmWzfqboUYYjzgjZhgh2
MscSViGCeIMPVudT329DXzXnjG/F3bEnhnlKpUWpyP4gSnJUkMfqJSSFIUoUiLcu5Y0iO9smzmnx
1m7UvCWA4WZvBdoIa4orQOMbRsMRLZO6gC7fiuw4jRuEZvTMrSZsWIdtWDxZjPksF97iZptWvsd9
McjuvE9enebbW+Kq+2sK/d6hrLfv1etpRu7JzHRBFQ==
=pclZ
-----END PGP SIGNATURE-----

--------------pQ03Tt00E1mOWOXtVZrZRoHz--


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 15:36:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 15:36:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479566.743490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHo0r-0005a9-Pm; Tue, 17 Jan 2023 15:36:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479566.743490; Tue, 17 Jan 2023 15:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHo0r-0005a1-Mu; Tue, 17 Jan 2023 15:36:37 +0000
Received: by outflank-mailman (input) for mailman id 479566;
 Tue, 17 Jan 2023 15:36:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mGQh=5O=gmail.com=rjwysocki@srs-se1.protection.inumbo.net>)
 id 1pHo0q-0005Zu-Sl
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 15:36:36 +0000
Received: from mail-ed1-f48.google.com (mail-ed1-f48.google.com
 [209.85.208.48]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b964d988-967c-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 16:36:35 +0100 (CET)
Received: by mail-ed1-f48.google.com with SMTP id x36so10940064ede.13
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 07:36:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b964d988-967c-11ed-91b6-6bf2151ebd3b
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=ecga+jTngnfjKJUzMM5MjyIXSr3deNDqaVoOTtaxwxk=;
        b=u+3ERuhATqIUb1crNp+zsIxxdVJcYEdTy6crqWW+F7cdobjdONb7w/QmyhFzS+jIHu
         GyI78geSzIWw9QYjsZQS1jW89Utv/LC+SgdYGqmMuarsWfajjT5waVHP53AbKkJj37TZ
         SRjgs5fKBf2x+o8PfeVCC9L5GEh3BhvoxErsJlwevgWXO1ftM53gBMpdcEA0Ex//2t/5
         khkMbLx83lJyIylnrhNiFMlleaZWmAfhN66faGx9iwq+fqJdXvOXEm+CqO7XZpSk9QEw
         XByMzTccM0qadwu/Jb0j4jQl1dg2q/xWuXixLbxNIgy/2KFp23z0lgnWomO8jDOU+yhy
         BXQg==
X-Gm-Message-State: AFqh2kp8D11fzRdMOdFyfWrcaU5quLddsiUG7gwQEHXUhxMek/dVb9jT
	ZDhAEz3fo4GCMVpwA9SCWbu+deZsyr/Tj6CUNPI=
X-Google-Smtp-Source: AMrXdXtT3fvqJUtzb/EzIMdGLBKoB2U84v13s39TajTdzxAVfMVxwEwmsQw4RK6LFCcOvz382MMMQ4nzvw6a1Q/mk4U=
X-Received: by 2002:a05:6402:94a:b0:47f:7465:6e76 with SMTP id
 h10-20020a056402094a00b0047f74656e76mr357637edz.181.1673969794659; Tue, 17
 Jan 2023 07:36:34 -0800 (PST)
MIME-Version: 1.0
References: <20230113140610.7132-1-jgross@suse.com> <CAJZ5v0gP_NUeQimn21tJuUjpMAOW_wFrRe4jstN13So_4_T4QQ@mail.gmail.com>
 <e5cc2f96-82bc-a0dc-21fa-2f605bc867d1@suse.com> <CAJZ5v0ggSbR7+RiXuJo4aq+PYWS-eb9R2iSr0DFfVhcaJ1bfUQ@mail.gmail.com>
 <05d451d0-ab2b-53a5-d666-529b0880b259@suse.com>
In-Reply-To: <05d451d0-ab2b-53a5-d666-529b0880b259@suse.com>
From: "Rafael J. Wysocki" <rafael@kernel.org>
Date: Tue, 17 Jan 2023 16:36:23 +0100
Message-ID: <CAJZ5v0idVBgi4GEBwBeGqoaDiYJBuHffy8rXsERML6Dw2pYsWA@mail.gmail.com>
Subject: Re: [PATCH] x86/acpi: fix suspend with Xen
To: Juergen Gross <jgross@suse.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>, linux-kernel@vger.kernel.org, x86@kernel.org, 
	linux-pm@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>, 
	Len Brown <len.brown@intel.com>, Pavel Machek <pavel@ucw.cz>, 
	Stefano Stabellini <sstabellini@kernel.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, xen-devel@lists.xenproject.org, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 17, 2023 at 4:32 PM Juergen Gross <jgross@suse.com> wrote:
>
> On 17.01.23 15:09, Rafael J. Wysocki wrote:
> > On Mon, Jan 16, 2023 at 7:45 AM Juergen Gross <jgross@suse.com> wrote:
> >>
> >> On 13.01.23 20:40, Rafael J. Wysocki wrote:
> >>> On Fri, Jan 13, 2023 at 3:06 PM Juergen Gross <jgross@suse.com> wrote:
> >>>>
> >>>> Commit f1e525009493 ("x86/boot: Skip realmode init code when running as
> >>>> Xen PV guest") missed one code path accessing real_mode_header, leading
> >>>> to dereferencing NULL when suspending the system under Xen:
> >>>>
> >>>>       [  348.284004] PM: suspend entry (deep)
> >>>>       [  348.289532] Filesystems sync: 0.005 seconds
> >>>>       [  348.291545] Freezing user space processes ... (elapsed 0.000 seconds) done.
> >>>>       [  348.292457] OOM killer disabled.
> >>>>       [  348.292462] Freezing remaining freezable tasks ... (elapsed 0.104 seconds) done.
> >>>>       [  348.396612] printk: Suspending console(s) (use no_console_suspend to debug)
> >>>>       [  348.749228] PM: suspend devices took 0.352 seconds
> >>>>       [  348.769713] ACPI: EC: interrupt blocked
> >>>>       [  348.816077] BUG: kernel NULL pointer dereference, address: 000000000000001c
> >>>>       [  348.816080] #PF: supervisor read access in kernel mode
> >>>>       [  348.816081] #PF: error_code(0x0000) - not-present page
> >>>>       [  348.816083] PGD 0 P4D 0
> >>>>       [  348.816086] Oops: 0000 [#1] PREEMPT SMP NOPTI
> >>>>       [  348.816089] CPU: 0 PID: 6764 Comm: systemd-sleep Not tainted 6.1.3-1.fc32.qubes.x86_64 #1
> >>>>       [  348.816092] Hardware name: Star Labs StarBook/StarBook, BIOS 8.01 07/03/2022
> >>>>       [  348.816093] RIP: e030:acpi_get_wakeup_address+0xc/0x20
> >>>>
> >>>> Fix that by adding an indirection for acpi_get_wakeup_address() which
> >>>> Xen PV dom0 can use to return a dummy non-zero wakeup address (this
> >>>> address won't ever be used, as the real suspend handling is done by the
> >>>> hypervisor).
> >>>
> >>> How exactly does this help?
> >>
> >> I believed the first sentence of the commit message would make this
> >> clear enough.
> >
> > That was clear, but the fix part wasn't really.
> >
> >> I can expand the commit message to go more into detail if you think
> >> this is really needed.
> >
> > IMO calling acpi_set_waking_vector() with a known-invalid wakeup
> > vector address in dom0 is plain confusing.
> >
> > I'm not sure what to do about it yet, but IMV something needs to be done.
>
> Another possibility would be to modify acpi_sleep_prepare(), e.g. like the
> attached patch (compile tested only).

I prefer this to the previous version.  It is much more straightforward IMV.


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 15:50:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 15:50:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479573.743501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoE3-0007tZ-0i; Tue, 17 Jan 2023 15:50:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479573.743501; Tue, 17 Jan 2023 15:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoE2-0007tS-U0; Tue, 17 Jan 2023 15:50:14 +0000
Received: by outflank-mailman (input) for mailman id 479573;
 Tue, 17 Jan 2023 15:50:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHoE1-0007tM-Oi
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 15:50:13 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a08a9428-967e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 16:50:12 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DE96E20063;
 Tue, 17 Jan 2023 15:50:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B98C11390C;
 Tue, 17 Jan 2023 15:50:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 86XnK7PDxmMvWQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 15:50:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a08a9428-967e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673970611; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ylkLwEQ9wpikNwd/2MHRgca3vP7wm2XPPZEd4mvvd2I=;
	b=oZjBqEoR0o1huc2iD7qOoL95ms2uXjhIEpOA+pRp7RU8aD7mQacKA865VrAS+Lwo1k6Sh9
	wUMxFedgjXbBoKjeU8rU0F1EqP/Y97iy6DaXC8eTcLxCgNpYGSfYZkiAh7qX5oc2o8Gnwl
	GdjieTXBlSzyQNnlJeHWck2cqy7DDsU=
Message-ID: <371fadc8-64d8-230e-f57b-1d081456b260@suse.com>
Date: Tue, 17 Jan 2023 16:50:11 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 04/17] tools/xenstore: introduce dummy nodes for
 special watch paths
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-5-jgross@suse.com>
 <a944a5ea-0e9f-add6-cbe7-8b06054c637a@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <a944a5ea-0e9f-add6-cbe7-8b06054c637a@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------3h00L3ae711oQvX3mcdw033D"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------3h00L3ae711oQvX3mcdw033D
Content-Type: multipart/mixed; boundary="------------WJZmKry62k8Amme0t0pkjTEc";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <371fadc8-64d8-230e-f57b-1d081456b260@suse.com>
Subject: Re: [PATCH v3 04/17] tools/xenstore: introduce dummy nodes for
 special watch paths
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-5-jgross@suse.com>
 <a944a5ea-0e9f-add6-cbe7-8b06054c637a@xen.org>
In-Reply-To: <a944a5ea-0e9f-add6-cbe7-8b06054c637a@xen.org>

--------------WJZmKry62k8Amme0t0pkjTEc
Content-Type: multipart/mixed; boundary="------------aGbzozR4sYm9HMgVI0vUqdjI"

--------------aGbzozR4sYm9HMgVI0vUqdjI
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.01.23 15:02, Julien Grall wrote:
> Hi Juergen,
> 
> On 17/01/2023 09:11, Juergen Gross wrote:
>> +static void fire_special_watches(const char *name)
>> +{
>> +    void *ctx = talloc_new(NULL);
>> +    struct node *node;
>> +
>> +    if (!ctx)
>> +        return;
>> +
>> +    node = read_node(NULL, ctx, name);
>> +
>> +    if (node)
>> +        fire_watches(NULL, ctx, name, node, true, NULL);
>> +    else
>> +        syslog(LOG_ERR, "special node %s not found\n", name);
> 
> NIT: How about using log() so it is also printed in the trace log? This would be 
> handy to avoid having to check multiple log files.

This would require to move patch 14 in front of this one.

I think this is possible without problems.

> 
> With or without:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks,


Juergen

--------------aGbzozR4sYm9HMgVI0vUqdjI--

--------------WJZmKry62k8Amme0t0pkjTEc--

--------------3h00L3ae711oQvX3mcdw033D
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPGw7MFAwAAAAAACgkQsN6d1ii/Ey/u
uQf+PzL5UxChEcJSrD3ZO1M6GEbPtiUJtqHcAUiBoQ8A/8OP506+4ID0b8exkJP5ksywc2LptlsF
8UEEq3aduU/oyFyMlPAua2rEIfEJGe0YdsOwbq66C/aO4dlp91e+nN76SDwPIgmu6NttR6qbiyEW
p9RBn/flWuezcnbe+WTlehoW6Eb2noYnil0IrVohaVmLzba8CU4QPyqMGCwLHkrNxO2GY9tNeVnc
GQZk+yU+j+sjxDc2R5lk5f7WQPrQDiuyFo6n1Tx/FizeqPBTgX1h5AEiaao7NM32G0mf/XQ3MT7Y
iJgyDfigljOhOSJjKV8EQlGRDVgQkYqeJPFYC+x83Q==
=mJDj
-----END PGP SIGNATURE-----

--------------3h00L3ae711oQvX3mcdw033D--


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 15:51:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 15:51:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479579.743512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoFG-0008Ub-Dw; Tue, 17 Jan 2023 15:51:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479579.743512; Tue, 17 Jan 2023 15:51:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoFG-0008UT-B5; Tue, 17 Jan 2023 15:51:30 +0000
Received: by outflank-mailman (input) for mailman id 479579;
 Tue, 17 Jan 2023 15:51:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uRIs=5O=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pHoFE-0008UI-Jh
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 15:51:28 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cd6e4081-967e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 16:51:27 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 5CA5238894;
 Tue, 17 Jan 2023 15:51:27 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 36D3113357;
 Tue, 17 Jan 2023 15:51:27 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id n+ivC//DxmPoWQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 17 Jan 2023 15:51:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd6e4081-967e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1673970687; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NcNHI/6jQuK9NSK4vkgXK9cEJCCCc5EiLrrsH2rtyDE=;
	b=daY0e6sqdjc1Lz/jZxwhiL+cLtCg6Owt6vwD6m74D+Gi7bbP+gNF3sBE87Vmw+DqOn/5DB
	7BO+/EYoIvT7otChAQhCM9cPqu57Zv43Ga1BN9XWfcS0sG8Evwj3si5QpAn67nW5QzrA18
	+xB+knEPKzWh29QsAb6zXkrjmpBc1xY=
Message-ID: <e0e74307-a9c6-97f2-6fd2-f982b10556b7@suse.com>
Date: Tue, 17 Jan 2023 16:51:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 08/17] tools/xenstore: don't allow creating too many
 nodes in a transaction
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-9-jgross@suse.com>
 <8aa74a44-2189-ca6b-3668-722e74d233fd@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <8aa74a44-2189-ca6b-3668-722e74d233fd@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------WfxIilPo0M7BlwbCNv1FuJhc"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------WfxIilPo0M7BlwbCNv1FuJhc
Content-Type: multipart/mixed; boundary="------------PAOkmDDj440ty3Ctp1afpyLq";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <e0e74307-a9c6-97f2-6fd2-f982b10556b7@suse.com>
Subject: Re: [PATCH v3 08/17] tools/xenstore: don't allow creating too many
 nodes in a transaction
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-9-jgross@suse.com>
 <8aa74a44-2189-ca6b-3668-722e74d233fd@xen.org>
In-Reply-To: <8aa74a44-2189-ca6b-3668-722e74d233fd@xen.org>

--------------PAOkmDDj440ty3Ctp1afpyLq
Content-Type: multipart/mixed; boundary="------------9UNnTaStq2uYDCGJcDCByauh"

--------------9UNnTaStq2uYDCGJcDCByauh
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.01.23 15:08, Julien Grall wrote:
> Hi Juergen,
> 
> On 17/01/2023 09:11, Juergen Gross wrote:
>> The accounting for the number of nodes of a domain in an active
>> transaction is not working correctly, as it allows to create arbitrary
>> number of nodes. The transaction will finally fail due to exceeding
>> the number of nodes quota, but before closing the transaction an
>> unprivileged guest could cause Xenstore to use a lot of memory.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Is the rest of the series depend on this patch? I am asking this because I still
> need to go through your second series before forging an opinion on this patch.

I think the rest should apply without this one. There shouldn't be any
functional dependency.

> Yet, I would like to reduce the number of inflight patches :).

+1


Juergen
--------------9UNnTaStq2uYDCGJcDCByauh
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9UNnTaStq2uYDCGJcDCByauh--

--------------PAOkmDDj440ty3Ctp1afpyLq--

--------------WfxIilPo0M7BlwbCNv1FuJhc
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPGw/4FAwAAAAAACgkQsN6d1ii/Ey8L
lAf8D1ULiJpIx4BeOv/5WGLqEcel2McLvUxWs5jG0btBWHlMb1R51lve2FZ77Ku4GbGZveYCww40
2jUNCjX8sGVTcRJWipxONrl9R5x4vF0JcxeKg/gSSPQaqzui7eU4t5yFKiXCe6/89Gp8ETlNbpbh
yFT8ctmFOy0Geh5tBKqquWR1zwvIRj8OTUkgpsgOrDB+/uTYlIiHNJREfsKsTBrhm3hGSnnf0pYt
dMZozxkzy0uBb7Lvtrmhi/u6VaHBE5Rih5s4mULNk6Rzk9RJ+Ls8qfSfY7PzLkwDBPH8TMejR0SO
NhyZ6/laQeRJBEqnwBeDuC/A5JlLlT0L/MK/Hvza7g==
=bksH
-----END PGP SIGNATURE-----

--------------WfxIilPo0M7BlwbCNv1FuJhc--


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 15:55:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 15:55:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479584.743523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoJQ-0000iA-Uu; Tue, 17 Jan 2023 15:55:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479584.743523; Tue, 17 Jan 2023 15:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoJQ-0000i3-Rs; Tue, 17 Jan 2023 15:55:48 +0000
Received: by outflank-mailman (input) for mailman id 479584;
 Tue, 17 Jan 2023 15:55:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IWGa=5O=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pHoJQ-0000hu-51
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 15:55:48 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2046.outbound.protection.outlook.com [40.107.103.46])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 67b3935f-967f-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 16:55:46 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7756.eurprd04.prod.outlook.com (2603:10a6:10:1e3::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 15:55:43 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.012; Tue, 17 Jan 2023
 15:55:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67b3935f-967f-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WUYYWHyd+Usj0ljJ544wfkss5dPqlzewAwdnEwHk/DAnfMyYDj7NM39/J6qnG4Dh6+SJ6SWhEq2+UItlxsiH911zPjVaq8DJWIDVfu9gOh9ooeGVJFLLW+b6IySqowwAbrmEHYQQ8o7G+U23KBt2RAaMDC4R+IJxaezZEIEpugSmDfJsQOgNVMuTe947G8S5HVGA+U1NLmFR3QVv0wEJepkamWnyHq7HUvdsExNl4rjrPpXnmIF1eQAUw1Hf/rqG2K2Si8RePqtIzshhaodRMyJdNNNG9vazuEVmUYKardAQ1OBOsEs+Jk3Ic83CyuKG4q4F832KnxTEGEU85fYWiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qvZL0QLp5gwi4TE3RYXxeAqSEYBl+mDU5xcEH6xpkro=;
 b=Gm5ZZ9ZEXnyKUc6uqYDmDVVp6VyQNsg9+8Y17B6FbhHd0pVamtRqV3Kyqi85/o1pQO0KR048/i/EPZUuGI9zbOvUfV0PYdRu9SX/OjhVN2cj6VdKP59Ek1sgKKqsmEZ1VWAjh49OEqWvlkmXnrdcXKZVc+qO6yDSdcbkgMHyNywV7N4H9BWjObpAz841SWralfulkJPH3SIDEBZt+0cKeZNWmGzvhPMIM9eRsKLreFF/TSB/PQtS5kKg1H4LcH0AOWjGkuY4XvWwakwsj9uYoW1AftdlnxqSjsSh59Eclw1PETqf20Cv6RvF99DZ/N+LEPbQ0EjbcqpdXL6bJsvX/Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qvZL0QLp5gwi4TE3RYXxeAqSEYBl+mDU5xcEH6xpkro=;
 b=N7odw1zka5vUjZ8sBsTE5XDsmgu8bO0Gc8ZmlMYao4ZsbPqqbAYTEFn7xKG5rMyPogpZmobYLU23E7vo8Splq0ghEAlxjfmIIF/y6HTwg5N36UQ3nA4rlqNQ7M+zxvdIrWCDHUfyZG9FEKXWby8Hj+DIXbQMClzWVESlisBh3uSIfs299lGDDTtfXTHnUdKBoM0PjD0IL0DFd/14900AAm/paJunHhhk/fwMwebviU145cNYxdrUgDb1FUHX5tQAzx73Ph50rYoWQst3G60ZvRTSiDPgmY+2LK6xAJ74eNyHkEZ0Np05r32UmEu2vBi41FiMsuLrVYsp8iFerGNUwQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f7e7b6ea-5bc1-ba2b-5d21-eb431ecff53a@suse.com>
Date: Tue, 17 Jan 2023 16:55:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH] Create a Kconfig option to set preferred reboot
 method
Content-Language: en-US
To: Per Bilse <per.bilse@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230116172102.43469-1-per.bilse@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230116172102.43469-1-per.bilse@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0130.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7756:EE_
X-MS-Office365-Filtering-Correlation-Id: 6e7b894c-305e-4e95-b349-08daf8a34a16
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e7b894c-305e-4e95-b349-08daf8a34a16
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 15:55:43.4580
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: J0WkMW9+BQrojPJ5CQhnOFDnoMH+e/XmiREZpJ2x3wXtQtmBTI9MPgd2hZ4P1QUa7InMvSCVFmyaGUyDB42oAg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7756

On 16.01.2023 18:21, Per Bilse wrote:
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -306,6 +306,101 @@ config MEM_SHARING
>  	bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED
>  	depends on HVM
>  
> +config REBOOT_SYSTEM_DEFAULT
> +	default y
> +	bool "Xen-defined reboot method"

May I ask that you swap the two lines? (Personally I consider kconfig
too forgiving here - it doesn't make a lot of sense to set a default
when the type isn't known yet.)
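
I.e. (just the two lines swapped, nothing else changed):

```kconfig
config REBOOT_SYSTEM_DEFAULT
	bool "Xen-defined reboot method"
	default y
```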

> +	help
> +		Xen will choose the most appropriate reboot method,
> +		which will be EFI, ACPI, or by way of the keyboard
> +		controller, depending on system features.  Disabling
> +		this will allow you to choose exactly how the system
> +		will be rebooted.

Indentation: Help text is to be indented by a tab and two spaces. See
pre-existing examples.

> +choice
> +	bool "Choose reboot method"
> +	depends on !REBOOT_SYSTEM_DEFAULT
> +	default REBOOT_METHOD_ACPI
> +	help
> +		This is a compiled-in alternative to specifying the
> +		reboot method on the Xen command line.  Specifying a
> +		method on the command line will override this choice.
> +
> +		warm    Don't set the cold reboot flag
> +		cold    Set the cold reboot flag

These two are modifiers, not reboot methods on their own. They set a
field in the BDA to tell the BIOS how much initialization / checking
to do (in the legacy case). Therefore they shouldn't be selectable
right here. If you feel like it you could add another boolean to
control that default.

> +		none    Suppress automatic reboot after panics or crashes
> +		triple  Force a triple fault (init)
> +		kbd     Use the keyboard controller, cold reset
> +		acpi    Use the RESET_REG in the FADT
> +		pci     Use the so-called "PCI reset register", CF9
> +		power   Like 'pci' but for a full power-cycle reset
> +		efi     Use the EFI reboot (if running under EFI)
> +		xen     Use Xen SCHEDOP hypercall (if running under Xen as a guest)
> +
> +	config REBOOT_METHOD_WARM
> +		bool "warm"
> +		help
> +			Don't set the cold reboot flag.

I don't think help text is very useful in sub-items of a choice. Plus
you also explain each item above.

> +	config REBOOT_METHOD_COLD
> +		bool "cold"
> +		help
> +			Set the cold reboot flag.
> +
> +	config REBOOT_METHOD_NONE
> +		bool "none"
> +		help
> +			Suppress automatic reboot after panics or crashes.
> +
> +	config REBOOT_METHOD_TRIPLE
> +		bool "triple"
> +		help
> +			Force a triple fault (init).
> +
> +	config REBOOT_METHOD_KBD
> +		bool "kbd"
> +		help
> +			Use the keyboard controller, cold reset.
> +
> +	config REBOOT_METHOD_ACPI
> +		bool "acpi"
> +		help
> +			Use the RESET_REG in the FADT.
> +
> +	config REBOOT_METHOD_PCI
> +		bool "pci"
> +		help
> +			Use the so-called "PCI reset register", CF9.
> +
> +	config REBOOT_METHOD_POWER
> +		bool "power"
> +		help
> +			Like 'pci' but for a full power-cycle reset.
> +
> +	config REBOOT_METHOD_EFI
> +		bool "efi"
> +		help
> +			Use the EFI reboot (if running under EFI).
> +
> +	config REBOOT_METHOD_XEN
> +		bool "xen"
> +		help
> +			Use Xen SCHEDOP hypercall (if running under Xen as a guest).

This wants to depend on XEN_GUEST, doesn't it?

> +endchoice
> +
> +config REBOOT_METHOD
> +	string
> +	default "w" if REBOOT_METHOD_WARM
> +	default "c" if REBOOT_METHOD_COLD
> +	default "n" if REBOOT_METHOD_NONE
> +	default "t" if REBOOT_METHOD_TRIPLE
> +	default "k" if REBOOT_METHOD_KBD
> +	default "a" if REBOOT_METHOD_ACPI
> +	default "p" if REBOOT_METHOD_PCI
> +	default "P" if REBOOT_METHOD_POWER
> +	default "e" if REBOOT_METHOD_EFI
> +	default "x" if REBOOT_METHOD_XEN

I think it would be neater (and more forward compatible) if the strings
were actually spelled out here. We may, at any time, make set_reboot_type()
look at more than just the first character.
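
E.g. something like this (the spellings are illustrative; note that with
today's first-character parsing "Power" would need to keep its capital to
stay distinct from "pci"):

```kconfig
config REBOOT_METHOD
	string
	default "warm"   if REBOOT_METHOD_WARM
	default "cold"   if REBOOT_METHOD_COLD
	default "none"   if REBOOT_METHOD_NONE
	default "triple" if REBOOT_METHOD_TRIPLE
	default "kbd"    if REBOOT_METHOD_KBD
	default "acpi"   if REBOOT_METHOD_ACPI
	default "pci"    if REBOOT_METHOD_PCI
	# Capital kept so first-character parsing still tells this from "pci".
	default "Power"  if REBOOT_METHOD_POWER
	default "efi"    if REBOOT_METHOD_EFI
	default "xen"    if REBOOT_METHOD_XEN
```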

> @@ -143,6 +144,8 @@ void machine_halt(void)
>      __machine_halt(NULL);
>  }
>  
> +#ifdef CONFIG_REBOOT_SYSTEM_DEFAULT
> +
>  static void default_reboot_type(void)
>  {
>      if ( reboot_type != BOOT_INVALID )
> @@ -533,6 +536,8 @@ static const struct dmi_system_id __initconstrel reboot_dmi_table[] = {
>      { }
>  };
>  
> +#endif /* CONFIG_REBOOT_SYSTEM_DEFAULT */
> +
>  static int __init cf_check reboot_init(void)
>  {
>      /*
> @@ -542,8 +547,12 @@ static int __init cf_check reboot_init(void)
>      if ( reboot_type != BOOT_INVALID )
>          return 0;
>  
> +#ifdef CONFIG_REBOOT_SYSTEM_DEFAULT
>      default_reboot_type();
>      dmi_check_system(reboot_dmi_table);
> +#else
> +    set_reboot_type(CONFIG_REBOOT_METHOD);
> +#endif

I don't think you should eliminate the use of DMI - that's
machine-specific information which imo should still overrule a
build-time default. See also the comment just out of context - it
makes a difference whether the override came from the command line or
was set at build time.

> @@ -595,8 +604,10 @@ void machine_restart(unsigned int delay_millisecs)
>          tboot_shutdown(TB_SHUTDOWN_REBOOT);
>      }
>  
> +#ifdef CONFIG_REBOOT_SYSTEM_DEFAULT
>      /* Just in case reboot_init() didn't run yet. */
>      default_reboot_type();
> +#endif
>      orig_reboot_type = reboot_type;

As the comment says, this is done here to cover the case of a very early
crash. You want to apply the build-time default then as well. Hence I
think you want to invoke set_reboot_type(CONFIG_REBOOT_METHOD) from
inside default_reboot_type(), rather than in place of it (perhaps while
#ifdef-ing out its alternative code). That'll then also make sure the
command line setting overrides the built-in default - it doesn't look
as if that would work with the current arrangements.

Furthermore a reboot type of "e" will result in reboot_type getting set
to BOOT_INVALID when not running on top of EFI. I think you want to fall
back to default_reboot_type() in that case.

So, taking everything together, maybe

static void default_reboot_type(void)
{
#ifndef CONFIG_REBOOT_SYSTEM_DEFAULT
    if ( reboot_type == BOOT_INVALID )
        set_reboot_type(CONFIG_REBOOT_METHOD);
#endif

    if ( reboot_type != BOOT_INVALID )
        return;

    if ( xen_guest )
    ...

? Which of course would require (conditionally?) dropping __init from
set_reboot_type(). (And I wonder whether the #ifndef wouldn't better be
"#ifdef CONFIG_REBOOT_METHOD".)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 16:00:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 16:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479590.743534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoOA-0002ei-Gl; Tue, 17 Jan 2023 16:00:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479590.743534; Tue, 17 Jan 2023 16:00:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoOA-0002eb-Dz; Tue, 17 Jan 2023 16:00:42 +0000
Received: by outflank-mailman (input) for mailman id 479590;
 Tue, 17 Jan 2023 16:00:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHoO9-0002eP-0D; Tue, 17 Jan 2023 16:00:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHoO8-0004dC-U7; Tue, 17 Jan 2023 16:00:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHoO8-0006kg-JO; Tue, 17 Jan 2023 16:00:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHoO8-00065x-Iy; Tue, 17 Jan 2023 16:00:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Dagy+0dHBnkYD8HJYUErylgF5c+TLGgIK7CLLGTfuTc=; b=kd4IvAiF1TXXUCtMSvzzea56Xh
	kCKfeolou/DIaajTYrkGV8w6n3S6WZ2+naxYgg0xU7BO5wv9cMaR6eEFvDZab7OT/B3IlbZYF8rs1
	gngapbKz8YPopfbwNpzGDIQ0+zwCFrwTV3JEm9VP8CoOsqleqgl19wpo8B/mgbo4nEOA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175937-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175937: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=a107ad0f623669c72997443dc0431eeb732f81a0
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 16:00:40 +0000

flight 175937 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175937/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 a107ad0f623669c72997443dc0431eeb732f81a0
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    4 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   29 attempts
Testing same since   175935  2023-01-17 14:42:05 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 323 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 16:07:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 16:07:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479601.743545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoV1-0003M9-C4; Tue, 17 Jan 2023 16:07:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479601.743545; Tue, 17 Jan 2023 16:07:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoV1-0003M2-9I; Tue, 17 Jan 2023 16:07:47 +0000
Received: by outflank-mailman (input) for mailman id 479601;
 Tue, 17 Jan 2023 16:07:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mKDY=5O=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pHoUz-0003Lw-M4
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 16:07:45 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2078.outbound.protection.outlook.com [40.107.20.78])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12ec1e5e-9681-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 17:07:43 +0100 (CET)
Received: from DB6P191CA0011.EURP191.PROD.OUTLOOK.COM (2603:10a6:6:28::21) by
 DB9PR08MB9513.eurprd08.prod.outlook.com (2603:10a6:10:459::15) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.19; Tue, 17 Jan 2023 16:07:34 +0000
Received: from DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:28:cafe::b4) by DB6P191CA0011.outlook.office365.com
 (2603:10a6:6:28::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 16:07:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT057.mail.protection.outlook.com (100.127.142.182) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 16:07:33 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Tue, 17 Jan 2023 16:07:33 +0000
Received: from 3dc1911535da.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5D60DC58-C687-4A40-B0F3-051765C9143C.1; 
 Tue, 17 Jan 2023 16:07:27 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3dc1911535da.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 17 Jan 2023 16:07:27 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB8567.eurprd08.prod.outlook.com (2603:10a6:20b:566::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 16:07:25 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 16:07:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12ec1e5e-9681-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z69IK5eg1Yp044CEQuWKET+1p83MKuIf4gLfd4BkkmE=;
 b=g4aku8fUV8bak64QziFQmVqFb0CF7EFHo4P7gKv/zz3ymaRsZhIj3boTGvCBC2PHVi4yo0AwcRO/gi6oTlBhmIaxcKLYOCIGmgeHYBjg/hkrl0oMVTtDp4pkp/Zt0ZXybHewQrcvWlRCkFEFvtpoS5bK9o4N1WydwOAkCi/a8BE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1cdf1ece433958c3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F+A8/gVUjsRV7JpbHQce16/2jBGSyYNL0wfP+XQORnhsyU6f6kqJm8PckazRqxGbeRbXy7hHLsGQa0sWUutFd8bKExPZrfiA5c3EYvjjXIGGoW5/US/xpEL2QQ1TfY9YwhRz8XJZ8p0wmiGQETp6qcVEvlqOBm2RBAO2bnLbp+F+iU/Pi8oLAzA6VMFL23czdKFlnz7oUa0r6tgHnQey2wEf3fzqBYG+Zw+yhadH0LROK2xBDEwMtBDO6hmg2uDfx5kmRSOLJMXqcHwx1lf+G+s+H09kzVIPFNqO23S8STH8gNsXxhqFmUe54J7yiGQUSyPZdW7mZmBxc7lGtBt6uw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z69IK5eg1Yp044CEQuWKET+1p83MKuIf4gLfd4BkkmE=;
 b=bx+8VA8RRi6xPnrLgKu+BwkqIO4mEG+qL9CT5vDd88Ta6RSa3e8bldnTw7lTaXV/JP77nBn1qzvuKh7oksK6UKnrjyYuxvHLxZIX4gW2WqoIIaSn5kFKwEbBgI/hZM/zWaPf13UiWPb9H7CTJxlfWdUs6dTJtStCX4vyecxmY81/T/hyErVXX+7XEDYSHBOfkS8f3J/cCZzIxix1ThBDDXNLVO3XyiUlFHgOYsE5/co4If/+bbgcJUxorynQnbzWAoypmZWdLP6RuphOJsqDyZyykiz3LhJ5hLza6B0Xu/QyjMc+Vmr834o2TlreBy7SO/Ku8iugKYQZUiLLtjIQ6Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z69IK5eg1Yp044CEQuWKET+1p83MKuIf4gLfd4BkkmE=;
 b=g4aku8fUV8bak64QziFQmVqFb0CF7EFHo4P7gKv/zz3ymaRsZhIj3boTGvCBC2PHVi4yo0AwcRO/gi6oTlBhmIaxcKLYOCIGmgeHYBjg/hkrl0oMVTtDp4pkp/Zt0ZXybHewQrcvWlRCkFEFvtpoS5bK9o4N1WydwOAkCi/a8BE=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Topic: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Index: AQHZKdYEuIagcBo/xkCEsH1joQzNJq6ix9mA
Date: Tue, 17 Jan 2023 16:07:24 +0000
Message-ID: <399DE18D-39B4-4BC6-940F-09864D4BEEE5@arm.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
In-Reply-To: <20230116181048.30704-2-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB8567:EE_|DBAEUR03FT057:EE_|DB9PR08MB9513:EE_
X-MS-Office365-Filtering-Correlation-Id: 0c1fc631-4e04-483e-983a-08daf8a4f192
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 7YXPiyXblIceN/TsSHgadoHUv3TaV9YWpROvYnztyrRm1UftKT/ZNoW5dSfk6Nqy0+rq0Hzbl6Lyl5c4rHBFc+1EIbOonFabG0o6FLA5qHIY+1ATT6u0Qm03CxgNjgBTmBi2n3NKeCwByTgSr+9TXl8TlbOGs+qSzMcdtF89N/5YQRMicrMVM1baExTG9gwpOuTsmFBjckyuROANvS1gUQd7lQjd/cXR5TUtCtfJYcurAUQZPI1t/hfac2K2jYS7hNqCLMbrfaCRGlD3GzsWw/8ny0vpTgbwLrDctwGpqSOrW3br9jCwgfish1AYHZ9hT4Z4pTPtplyDVLWeCZOq1+Y/FbBWe3P3XtthqhuvWmhwtPsDQgqjsndoUpp68Fa4nQ9sm5mxsF82F1j4gtrYXvBGa33/XUvV/qV31sRFsFb3fC83taHo2UIJ39X1F4RDZSuTT0G/7ZOGWDVivW/jD3dGV0QQrtMP7bCnr0/kmD+x7cWdIvbotKRCPp/XewZYgMjbz38VP6mZGqcNsFL77xGgdOGzug+f68efiAaB7N1fkUMIS0SHljt+3UtGYQHRXRhh/+Y2Y4xp1Y5wPFopmBAK+D87dA3UUNJv52ffctGvhPMKLqcExnvDwzSQO4G34idqiCdd8r6/YrMQxjTL+xoZc/rcKsX12O0kQPXR4PPbmV89AocNJoQu1qWjgUBHhcUWE1Gf8WOUfsGwFQqp+0lAhYzhMdzI5ixpEpNf6QU=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(346002)(136003)(396003)(376002)(366004)(39860400002)(451199015)(6506007)(2906002)(86362001)(66476007)(66946007)(66446008)(66556008)(76116006)(91956017)(64756008)(4326008)(8676002)(6916009)(53546011)(316002)(41300700001)(6512007)(186003)(6486002)(26005)(478600001)(122000001)(54906003)(71200400001)(2616005)(38100700002)(36756003)(5660300002)(8936002)(33656002)(38070700005)(83380400001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <0A302A9D804D7644B4642217B507FA82@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8567
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c9c6af2d-9bde-4d14-c4cc-08daf8a4ebed
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ohnY6WHx1esPiOmCCwFtrN9reZModMJw7BjEneJRku3O7sMWXJ1iURaAY8FyYAs2idKDA/zNqRMFwy4vsgP/Ie+M0WzYZGuphvpZjlm6ARfzNW+nkPM0OmcxlAHgpz3570AAGJes41w7WtV07n9BS4eRlNlIGUO4rybXj/z/yDahX/bFn+ujRz/erOny+CwVTrwQ0uX/7sYbC9skT1/yG0mHT0FOoLYLcDJeBdeq9hE/ArlhHDSSNl+h5Wv0CM/xm0Qxtb8qhWP3vkw16Vjb8067S2lyqa9IiES5zpBb2lQRr4ymdCArD0vfZoC27kQbeFC2WShLSRAuMxSs2gDFslXN/WCQNxM7Y1Q8ktqmjY1+befd78jS83iYFgAEGF+37XDIMyxXmTQLxmp7Npsuiv0nUc9tU8NTlSjQvuPuq6XxGhY+5s9Ab+b4o/22ooHRNbGhrkUb1Hk3uPDo3jx/JykCjga0Mpv96EUJYTBkvbbo64QuyGm5MVaPdM7v8/OEux6z9BBEwxZ3laT+MmQuloXdM76G/3YPWjX94ndeTPP2kUJd8tZI5GtVT/nH7GLuw+jQhSBlkzB9Cv0sCPhCCymR151FsnX9cRep7KE/9dJmLSzrAHAK/pycsIs4P78+NN+xphW+G4aYEQx8RrBFA4nTKyt06F/rH4ML8c5n18CixGe58lsC1+wzBxhc1lZFWFKOC/upZRHNlnmySsMntw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(136003)(376002)(346002)(39860400002)(396003)(451199015)(40470700004)(46966006)(36840700001)(36756003)(41300700001)(2616005)(47076005)(70206006)(316002)(4326008)(8676002)(83380400001)(54906003)(86362001)(33656002)(82740400003)(81166007)(356005)(5660300002)(8936002)(336012)(70586007)(6862004)(40460700003)(40480700001)(2906002)(36860700001)(26005)(478600001)(6512007)(6506007)(186003)(53546011)(82310400005)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 16:07:33.7587
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0c1fc631-4e04-483e-983a-08daf8a4f192
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9513



> On 16 Jan 2023, at 18:10, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> The get-fields.sh script which generates all the include/compat/.xlat/*.h
> headers is quite slow. It takes for example nearly 3 seconds to
> generate platform.h on a recent machine, or 2.3 seconds for memory.h.
>
> Rewriting the mix of shell/sed/python into a single python script makes
> the generation of those files a lot faster.
>
> No functional change, the headers generated are identical.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>
> Notes:
>    To test the header generation, I've submitted a branch to gitlab ci,
>    where all the headers were generated, and for each one both the shell
>    script and the python script were run and the results of both
>    compared.
>
>    v3:
>        convert to python script instead of perl
>        - this should allow more developers to be able to review/edit it.
>        - it avoids adding a dependency on perl for the hypervisor build.
>
>        It can be twice as slow as the perl version, but overall, when doing
>        a build with make, there isn't much difference between the perl and
>        python script.
>        We might be able to speed the python script up by precompiling the
>        many regexes, but it's probably not worth it. (Python already has a
>        cache of compiled regexes, but I think it's small, maybe 10 or so.)
>
>    v2:
>    - Add .pl extension to the perl script
>    - remove "-w" from the shebang as it is a duplicate of "use warnings;"
>    - Add a note in the commit message that the "headers generated are identical".
>
> xen/include/Makefile            |   6 +-
> xen/tools/compat-xlat-header.py | 468 ++++++++++++++++++++++++++++
> xen/tools/get-fields.sh         | 528 --------------------------------
> 3 files changed, 470 insertions(+), 532 deletions(-)
> create mode 100644 xen/tools/compat-xlat-header.py
> delete mode 100644 xen/tools/get-fields.sh
>
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index 65be310eca..b950423efe 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -60,9 +60,7 @@ cmd_compat_c = \
>
> quiet_cmd_xlat_headers = GEN     $@
> cmd_xlat_headers = \
> -    while read what name; do \
> -        $(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
> -    done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new; \
> +    $(PYTHON) $(srctree)/tools/compat-xlat-header.py $< $(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst > $@.new; \
>     mv -f $@.new $@
>
> targets += $(headers-y)
> @@ -80,7 +78,7 @@ $(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srctree)/tools/compat-
> $(call if_changed,compat_c)
>
> targets += $(patsubst compat/%, compat/.xlat/%, $(headers-y))
> -$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh FORCE
> +$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/compat-xlat-header.py FORCE
> $(call if_changed,xlat_headers)
>
> quiet_cmd_xlat_lst = GEN     $@
> diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
> new file mode 100644
> index 0000000000..c1b361ac56
> --- /dev/null
> +++ b/xen/tools/compat-xlat-header.py
> @@ -0,0 +1,468 @@
> +#!/usr/bin/env python

Would it make sense to start with python3 since it is a new script?
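[Editor's aside: the quoted note about precompiling regexes can be made concrete with a small sketch. The two forms below are illustrative only, not taken from the patch; note also that in CPython 3 the `re` module's internal cache holds up to `re._MAXCACHE` (512) compiled patterns, rather larger than the "maybe 10 or so" guessed above, though precompiling still skips the per-call cache lookup.]

```python
import re

# Relying on re's internal cache: each call looks the pattern string up
# in the module-level cache of compiled patterns before matching.
def fields_cached(line):
    return re.findall(r"[A-Za-z_][A-Za-z0-9_]*", line)

# Precompiling once hoists that lookup out of the hot loop entirely.
IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def fields_precompiled(line):
    return IDENT.findall(line)

# Both produce identical results; the precompiled form is only
# marginally faster per call, which matches the note above that the
# optimisation is "probably not worth it" for this workload.
```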



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 16:22:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 16:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479606.743556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoj1-0005fA-KR; Tue, 17 Jan 2023 16:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479606.743556; Tue, 17 Jan 2023 16:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoj1-0005f3-HU; Tue, 17 Jan 2023 16:22:15 +0000
Received: by outflank-mailman (input) for mailman id 479606;
 Tue, 17 Jan 2023 16:22:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wyYX=5O=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pHoj0-0005ex-9V
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 16:22:14 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 18dce8ad-9683-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 17:22:12 +0100 (CET)
Received: by mail-ej1-x62b.google.com with SMTP id bk15so19065924ejb.9
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 08:22:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18dce8ad-9683-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=xNnk8xtXpsGuhxIpiLYHfCMmNvD0b3qoKe4U8pFLMZQ=;
        b=Hz0dKwa2C7DjWVvGztL30CEVZg1ivWP3Ovy6cTCjZ4XX4Qc1ZE+qC2GPA2cCvJEU6T
         T/kvsdjcSXiese3am/b38YzIvcd4lUP2UXvj99iPuxHdvu9VACeBEbUKmENHal/V4pR4
         xsP84JcZDUn5y2FQWFIE1vlVV8aVQYvK7ltMnDGcY/3HVfypyZrLvmpOqu7P9OOJ8A3W
         BrH9CHLWuVAz7VQd/kdkzgKty6SgKMwLdXfPHbrzFrqaux5j2mHPyl5W4GLxWtBT4PrY
         Oou4Hq4skxGqDE/ytnDCgOKILw1tUQbfjla5TvcUrxsSVmKTSTAh0alFf/bnFEHFS9B0
         sg9Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=xNnk8xtXpsGuhxIpiLYHfCMmNvD0b3qoKe4U8pFLMZQ=;
        b=07KZbpIwNVS9wD0vZZvzb640Bk1YSue6Kvr7bpyD0HRuiBP0D0D8LjEMtyDz7JHGCo
         dvlUsOscF50xIGTpA7yK8gZ7aiIi+bF9p/34mwNLGEP6PVFuWADSk0oCY340O9PA5f1R
         WMOOJUZ/wRTpNTPuKs+P46+80VomMxyUFt8CYXjkWzrAcKzSqlXXyMh7oC2dqlf4zSyi
         I6bg4VksExvoUtCTTLmiW12CNnUT0rMMxEOuTccnf8snLzHOZmnjQ4+aLuGWP1jULSws
         vd2VP+k/+5GLLV2I4d7oxOUnIDVpFYkcdSH9Qd8rKoXmh4m6hdUlby+YsjiDUxERcLcQ
         kTmA==
X-Gm-Message-State: AFqh2kpQzrqYCNKJUe4IU7K1I/ppnB5LPOUYAO/y4vaFlICst30OUNA2
	eP9ID/6EaM8aIz4kc/F6LxphLPKMYoJA6ZPCZAE=
X-Google-Smtp-Source: AMrXdXutIkia0sFmsP+r2hJMV2PwNijOQa6hRMQ4AD/xTTAXTTUSmYTsVY1UD1lSzLTDL8knoWR2YAgtHv5cI1vLn3U=
X-Received: by 2002:a17:906:f9c9:b0:84d:45cc:3247 with SMTP id
 lj9-20020a170906f9c900b0084d45cc3247mr288177ejb.481.1673972531760; Tue, 17
 Jan 2023 08:22:11 -0800 (PST)
MIME-Version: 1.0
References: <20230113230835.29356-1-andrew.cooper3@citrix.com> <20230113230835.29356-4-andrew.cooper3@citrix.com>
In-Reply-To: <20230113230835.29356-4-andrew.cooper3@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 17 Jan 2023 11:21:59 -0500
Message-ID: <CAKf6xpstz88SvrQNxmOrk2FL6x2mRegAGfrui7-6_C3Yg7EsjQ@mail.gmail.com>
Subject: Re: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_* subops
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Daniel Smith <dpsmith@apertussolutions.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Jan 13, 2023 at 6:08 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> Recently in XenServer, we have encountered problems caused by both
> XENVER_extraversion and XENVER_commandline having fixed bounds.
>
> More than just the invariant size, the APIs/ABIs are also broken by typedef-ing
> an array, and by using an unqualified 'char', which has implementation-specific
> signedness.
>
> Provide brand new ops, which are capable of expressing variable length
> strings, and mark the older ops as broken.
>
> This fixes all issues around XENVER_extraversion being longer than 15 chars.
> More work is required to remove other assumptions about XENVER_commandline
> being 1023 chars long.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> CC: Daniel Smith <dpsmith@apertussolutions.com>
> CC: Jason Andryuk <jandryuk@gmail.com>
>
> v2:
>  * Remove xen_capabilities_info_t from the stack now that arch_get_xen_caps()
>    has gone.
>  * Use an arbitrary limit check much lower than INT_MAX.
>  * Use "buf" rather than "string" terminology.
>  * Expand the API comment.
>
> Tested by forcing XENVER_extraversion to be 20 chars long, and confirming that
> an untruncated version can be obtained.
> ---
>  xen/common/kernel.c          | 62 +++++++++++++++++++++++++++++++++++++++++++
>  xen/include/public/version.h | 63 ++++++++++++++++++++++++++++++++++++++++++--
>  xen/include/xlat.lst         |  1 +
>  xen/xsm/flask/hooks.c        |  4 +++

The Flask change looks good, so that part is:
Reviewed-by: Jason Andryuk <jandryuk@gmail.com> # Flask

Looking at include/xsm/dummy.h, these new subops would fall under the
default case and require XSM_PRIV.  Is that the desired permission,
and guests would just have to handle EPERM?

Regards,
Jason
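[Editor's aside: the non-truncating design under review — a caller-supplied buffer, with the op reporting the full size needed when the buffer is too small — can be sketched in miniature. Python is used purely for illustration; the name `varbuf_copy` is hypothetical and not part of the real Xen ABI.]

```python
def varbuf_copy(value: str, buf: bytearray) -> int:
    """Copy value into the caller-supplied buf without ever truncating.

    Always returns the number of bytes the full value needs.  If buf is
    too small, nothing is copied and the caller retries with a buffer of
    at least the returned size -- unlike a fixed xen_extraversion_t-style
    array, which silently cuts the string off at its bound.
    """
    data = value.encode("utf-8")
    if len(buf) < len(data):
        return len(data)  # too small: report the required size only
    buf[: len(data)] = data
    return len(data)
```

A guest-side caller would loop: call once with an initial guess, grow the buffer to the returned size, and call again.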


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 16:38:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 16:38:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479613.743567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoxr-0007DZ-Tx; Tue, 17 Jan 2023 16:37:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479613.743567; Tue, 17 Jan 2023 16:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHoxr-0007DS-QX; Tue, 17 Jan 2023 16:37:35 +0000
Received: by outflank-mailman (input) for mailman id 479613;
 Tue, 17 Jan 2023 16:37:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHoxq-0007DI-IB; Tue, 17 Jan 2023 16:37:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHoxq-0005TU-Ep; Tue, 17 Jan 2023 16:37:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHoxq-0007Zo-7i; Tue, 17 Jan 2023 16:37:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHoxq-0004xn-7I; Tue, 17 Jan 2023 16:37:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7md/ASegCAGvXw82GiIhB2IRY39pJpLrdd99EyQz5Tc=; b=egJRQ3TvvNHXTVMzs8AQ1sFwuM
	bGdV5QP71eHbIctyPsaXIvqL0iZEpkejXA4z7tZ19Hw7J1qY1S583kps4tSnRQVX6fHB2LF8Ts6eA
	2qSemtj+1WffIffyhS+O6IbkUdD2dx0hWD/EMliUCjdbeJZj2h+hqovgddKLUoasRdas=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175939-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175939: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=a107ad0f623669c72997443dc0431eeb732f81a0
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 16:37:34 +0000

flight 175939 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175939/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 a107ad0f623669c72997443dc0431eeb732f81a0
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   30 attempts
Testing same since   175935  2023-01-17 14:42:05 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 323 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 16:56:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 16:56:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479621.743577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHpFa-00019r-DU; Tue, 17 Jan 2023 16:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479621.743577; Tue, 17 Jan 2023 16:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHpFa-00019k-Ad; Tue, 17 Jan 2023 16:55:54 +0000
Received: by outflank-mailman (input) for mailman id 479621;
 Tue, 17 Jan 2023 16:55:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sgDM=5O=citrix.com=prvs=374aff589=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pHpFY-00019b-RH
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 16:55:52 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb311129-9687-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 17:55:51 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb311129-9687-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673974550;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Gk+2bwqbVv+3ZSL50/GVf7VB0UR/zK7MJKki0suyuRQ=;
  b=TXNfRF5GrzP/y4Unbh5051Kf0aZYIJHOoqzphOp/t4tGUnfSBCWshnlE
   aosN70WjWZQ10ybc0pKu2tyfIcokTfclANDnBygS8FlvLbDH8obtOam+4
   Bp11zXj0FWBZ0TVDPRuHhDzlnqQN0ctzc66Zb6vlGSSJ2K4pSMoWTDcZb
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92998965
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:SNXmAKlC9EM5CaJpe35kY9no5gy5JkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIXCj3VO/nYMGakco1yPY208kgFsJTWyN9lTwRpqCBmQyMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icf3grHmeIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7auaVA8w5ARkPqgS5QCGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 cQgDitXRyCvvtiR27WlCe41vukcJ8a+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglH2dSFYr1SE47I6+WHJwCR60aT3McqTcduPLSlQthfC9
 zOWrjqkav0cHOKGmACEwl2Dvc6Mkyj8eroVNpuTytc/1TV/wURMUUZLBDNXu8KRrlO1UpRxI
 kof9y4qsIA77kntRd74NzWorXjBshMCVt54F+wh9BrL2qfS+xyeBGUPUnhGctNOnM08SCEu1
 1SJt8j0HjEpu7qQIVqf67OVoDWaKSUTa2gYakcsVhAZ6tPupIUyiBPnTdt5FqOxyNrvFlnY3
 DSivCU4wbIJgqY2O76TpA6dxWj2/96QE1Bzv1+MNo640u9nTLadQZfywGj31MxnN4GHDV7Yh
 FU7kMfLuYjiEqqxeDyxrPQlRe/2vKffamWD0TaDDLF6qW3zpifLkZR4pWgneRw3aptslSrBO
 he7hO9H2HNE0JJGh4dTapn5NcklxLOI+T/NBqGNNYomjnScmWa6EMBSia24hTqFfLAEy/1XB
 HtiWZ/E4YwmIapm1iGqYOwWzKUmwCszrUuKG8+gn0X7ierPPS/OIVvgDLdpRrlphJ5oXS2Pq
 4oPXyd040g3vBLCjtn/rtdIcAFiwYkTDpHqsc1HHtNv0SI/cFzN/8T5mOt7E6Q8xvQ9qws91
 i3lMqOu4Aal1CKvxMTjQiwLVY4Dqr4k9ChnZ3ZyZw33s5XhCK72hJoim1IMVeFP3IReITRcF
 JHpp+3o7ixzdwn6
IronPort-HdrOrdr: A9a23:foVIpqvS79aAEWNCwURQ2egA7skDhtV00zEX/kB9WHVpm6yj+v
 xG/c5rsiMc7Qx6ZJhOo7+90cW7L080sKQFgrX5Xo3SODUO2lHJEGgK1+KLrwEIWReOlNK1vZ
 0KT0EUMqyUMbEVt6fHCAnTKadd/DGEmprY+ts3GR1WPH9Xg6IL1XYJNu6CeHcGIjWvnfACZe
 ChDswsnUvYRV0nKv6VK1MiROb5q9jChPvdEGM7705O0nj3sduwgoSKaCSl4g==
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="92998965"
Date: Tue, 17 Jan 2023 16:55:39 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Message-ID: <Y8bTC3Ftr494imCr@perard.uk.xensource.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
 <399DE18D-39B4-4BC6-940F-09864D4BEEE5@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <399DE18D-39B4-4BC6-940F-09864D4BEEE5@arm.com>

On Tue, Jan 17, 2023 at 04:07:24PM +0000, Luca Fancellu wrote:
> > On 16 Jan 2023, at 18:10, Anthony PERARD <anthony.perard@citrix.com> wrote:
> > diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
> > new file mode 100644
> > index 0000000000..c1b361ac56
> > --- /dev/null
> > +++ b/xen/tools/compat-xlat-header.py
> > @@ -0,0 +1,468 @@
> > +#!/usr/bin/env python
> 
> Would it make sense to start with python3 since it is a new script?

That shebang isn't even used, as the script doesn't have the
execution bit set. So why do you say that the script isn't python3? Not
really asking, just being pedantic :-)

Even if it's a new script, it isn't a new project. We can't depend on
brand new functionality from our dependencies. We need to be able to
build the hypervisor with an old build toolchain / distribution.

Anyway, I did start by writing a python3 script in all its glory (or at
least some of the newer parts of the language that I know about), but I
had to rework it to be able to use it on older distributions. Our
CentOS 7 container in our GitLab CI seems to use Python 2.7.

So I had to stop using str.removeprefix() and introduced a function
doing the same thing instead (so that it works with Python older than 3.9).
Then I had to stop using f-strings and use %-formatting instead.
Then use "m.groups()[0]" instead of "m[1]" where "m" is a match result
from re.match(), and so on.
And use the classic "from __future__ ..." preamble.
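For the archive, the downgrades described above can be sketched roughly as
follows; this is only an illustrative snippet (the helper name and sample
strings are made up, not taken from the actual compat-xlat-header.py patch),
showing idioms that run on both Python 2.7 and Python 3:

```python
# Illustrative sketch of Python 2.7-compatible idioms; names and sample
# data here are hypothetical, not from the real patch.
from __future__ import print_function  # classic preamble for 2/3 compat

import re

def removeprefix(s, prefix):
    # str.removeprefix() only exists from Python 3.9; this helper does
    # the same thing on any version.
    if s.startswith(prefix):
        return s[len(prefix):]
    return s

line = "compat_xen_foo"
name = removeprefix(line, "compat_")

# %-formatting instead of an f-string (f"..." needs Python 3.6+).
msg = "translated %s" % (name,)

m = re.match(r"(\w+)_(\w+)", name)
if m:
    # Subscripting a match object, m[1], needs Python 3.6+;
    # m.groups()[0] (or m.group(1)) works everywhere.
    first = m.groups()[0]
    print(msg, first)
```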

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 16:59:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 16:59:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479628.743589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHpIk-0001oz-0G; Tue, 17 Jan 2023 16:59:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479628.743589; Tue, 17 Jan 2023 16:59:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHpIj-0001os-Ss; Tue, 17 Jan 2023 16:59:09 +0000
Received: by outflank-mailman (input) for mailman id 479628;
 Tue, 17 Jan 2023 16:59:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mKDY=5O=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pHpIj-0001om-5N
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 16:59:09 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2044.outbound.protection.outlook.com [40.107.22.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 410ef49c-9688-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 17:59:07 +0100 (CET)
Received: from FR3P281CA0142.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:95::16)
 by DB9PR08MB6443.eurprd08.prod.outlook.com (2603:10a6:10:261::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 16:58:46 +0000
Received: from VI1EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:95:cafe::9d) by FR3P281CA0142.outlook.office365.com
 (2603:10a6:d10:95::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Tue, 17 Jan 2023 16:58:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT051.mail.protection.outlook.com (100.127.144.138) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 16:58:45 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Tue, 17 Jan 2023 16:58:45 +0000
Received: from 8bf8efe9d6fe.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A659724B-20F9-4BFD-A41E-4DEA91FA5564.1; 
 Tue, 17 Jan 2023 16:58:38 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8bf8efe9d6fe.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 17 Jan 2023 16:58:38 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB7692.eurprd08.prod.outlook.com (2603:10a6:10:3a4::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 16:58:35 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 16:58:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 410ef49c-9688-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MYdEvaRF12y9D6I+kU42P8nMQ/30uTiKnkvwqrx5g7Y=;
 b=TzlZcqxVV5M7CRuMpiLOlG6LsA/xbuzjGFHZENV8qTWW5fFZkffrJC7njIe3z2YCzJCrEOFraQHGq6XHrhW/EuNP32qqqDX0mgq+aeBPN2zI8ZDsF7kR2OvdWUQxFIfO82GA4LgPgHZzdBkvEyPvWK3iyxTa9O7RNfEH3UfbHuU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: d0eb2f63df388a77
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XJxuRSNWokEJETZGnpQ5fWidAlb15KgO2eeIgqlHqBD3/hQ4oGozXPpBYA8C+8SNCa7zEv6V/YxNiBkYHc6e//e92Bu5ogoNPHvBDwW7h2C9NoEJxClovKHjnFMmqiM7hx7w+4g5rclM5lTxAV2l0fLvFk6QSPfyRfE3J8hZPusSkY6+U3moRWzvtE5wo02ToiXZVNteFFNm8kA2xiSJ1iD5nOnAOFORXOaJXJaQrkUJHMIN8cVe6f2KVcDHx9nkgfaFhwxEu5YqOQJ5782vXrt+VwLObRZu41q0ckBktphnGLyKblkSJSo1nmNwRgXF4wtg941j12Mh8VM/562+pg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MYdEvaRF12y9D6I+kU42P8nMQ/30uTiKnkvwqrx5g7Y=;
 b=MA0hpfWHtbptL5pCN/T4mw7H3OU3BDirHSSae5gvc32wIeEH8ywaiA4cGH3i8VIH3Wmc7iEHUrkc78BITQIhtd3ooM0NPEv0zMfHR7we3b3y6QB82uNgkxQ9q9KWklEutugOHXL3hLqViiRnb8OMQgQTAFrQWl7Ngs/G7XUheQSrGcfOjirH56rRuttt0qrQtrNBV+mekpqZI9yAYKWwAFaypSZIj3S8OMs5Vz55aEwSJCHc0J6gl1svDZr02jOx+bL9ZBrr6vw2Jj0X12J2Dbn0yBzydL/fvo7/SgaZ9QKwBQ+NOsqUissyDYEixilYjUalgPh2xwPg67sRHHwKdg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MYdEvaRF12y9D6I+kU42P8nMQ/30uTiKnkvwqrx5g7Y=;
 b=TzlZcqxVV5M7CRuMpiLOlG6LsA/xbuzjGFHZENV8qTWW5fFZkffrJC7njIe3z2YCzJCrEOFraQHGq6XHrhW/EuNP32qqqDX0mgq+aeBPN2zI8ZDsF7kR2OvdWUQxFIfO82GA4LgPgHZzdBkvEyPvWK3iyxTa9O7RNfEH3UfbHuU=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Topic: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Index: AQHZKdYEuIagcBo/xkCEsH1joQzNJq6ix9mAgAANiICAAADFAA==
Date: Tue, 17 Jan 2023 16:58:35 +0000
Message-ID: <E93CF214-6AE5-4292-8392-41D38C0D5426@arm.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
 <399DE18D-39B4-4BC6-940F-09864D4BEEE5@arm.com>
 <Y8bTC3Ftr494imCr@perard.uk.xensource.com>
In-Reply-To: <Y8bTC3Ftr494imCr@perard.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB7692:EE_|VI1EUR03FT051:EE_|DB9PR08MB6443:EE_
X-MS-Office365-Filtering-Correlation-Id: eeafb44d-f38c-41aa-8ae3-08daf8ac1856
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 PZ6fqUsHe5n2e3NFBWzqU9tQ5i0aqVEwMWSxffvGy91w0ohxYKeC5b/QXOZb1/EPv6EmJAX7pjcNBrSQpbj+uqQoRCbzLAhsHxj5UkJGh+AJrugOayNKwWREMzUACiLpe0XPORQfRzZOKSrmrgcx8sGBbnzRM8BsxzooepOubp+WNA2NaMqLw5ejfrnBLDB1TSnusmMWVR7sGw1bf2xGXUA/uwHNhrurNhjyCbfIGEBjYCOAOiQF9n3AsKhA416zkQReKXUH+NvXjlvtRM4cNUy4tWggwixer4w7grE3133XiQmamC9mBGVq8XKfOgP8BuQj8RHxtBtymbvk1cywKjRG4L+AJNhmieb9ccS5LCEDFe9Ky4fx/IrLuPzP+r47ZM+NSwS/gLmYMTNvAqreXtD9eIxvtHmYu8ZsfKt/FmV4YpFKHVB4oOUEOGntgwwcM+tPqp89RZijYtkwJUgCbToz2Wrf0gxU2GmHJ3fPtfqOuoDa7GQlB+hJ105wMsexIUobu4CiZHBvZbOOBbvADvMEbS4dzEhGHZl8MRU4j6SQ64eivebpmZsK7P+NjjNEY8QrpJK0qWRvfO5Ctr+qG2PO0w/tGyAbP0+zpJqrhH/uL1bWF40ZIDBHFsQvZxz9T1Z7ngmiz6D161ggJBCTchM4Y1wWL0IE8N1oCIhfWex8JQn2uB/ACYUQZxGNrUdbhw/Je4cZwf6Lbbou6g0TYLbxmYYn2eXkzTfkUwW17Sg=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(366004)(39860400002)(136003)(396003)(451199015)(33656002)(36756003)(8676002)(41300700001)(66446008)(66556008)(76116006)(5660300002)(478600001)(26005)(6512007)(186003)(8936002)(53546011)(6486002)(66476007)(91956017)(64756008)(6506007)(66946007)(54906003)(71200400001)(316002)(6916009)(4326008)(2616005)(38070700005)(2906002)(38100700002)(122000001)(86362001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <A748656BD7D9EF4BAF8A91C5762AFE89@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7692
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ab803b80-e792-4ad0-287c-08daf8ac1243
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	N16JnLE1kH2UmAdEcN+F3OsKBPCxLdI/rjbPln1kdQY/5WWdMvIDAiZGUW/vbkTM6wE6VzNtZzK1UEvd4uhx3VD+n6i2sNFcM8tml2EsYw3bDi9fvwYIq5y/4Bepb7YN8NfxcrbqpCXhZDn8jfKkAetDpnanHIgWmdrwpwpN8pAK+D5xStb67CMjDGtla6GWOjm/rq35+v3wHu44KXns2zAAnOPPLRYDBzNUSF7aVOXKx51yjh5pQOYYlclR9f3Wm1/rwk1o/wCO/kwMrvZ6JsXbtt5auykcnRw5Tm8lYp3EOtuMFDljpIW/+1Do9ZurFte5OhIQcz1FVRzJoX/2y7BxxlG26jnTLiFlAHvMg2XgDUt/7KlARY2vOUzC5i7MHVkQ1MS2z/+X5hoseQhKLvmZNhn9JxdxIMTUSeEVm2CQWvlfQNck2jWd83sRDVM+g/7tGe7cVYN4fuWcE2okvBxG3zKUxNh1JfRTEzXDaS8spO5FaZSKX6ghjsEHJvK+Uw8kQBmXiT1y0uQN/4lqahxtDcgisvqv+2vTisCrQk2IBdOsP36zVfDnQvtRx11DF7jg8dBmZHMaDtOJKE8Sz40L5tajpFClU78MUjxffBnBfdjbX1tfWLU45eJmeXWNg+y3DxwLfrhdkOlrCxMBx26rYj+L7y1+bDtxJ2xK4szo6hxgpUNA25enuenWPlaC+ICaUObCb8AuU7fLHQK9PQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(39860400002)(376002)(136003)(346002)(451199015)(40470700004)(36840700001)(46966006)(6862004)(41300700001)(2616005)(36756003)(8936002)(316002)(4326008)(336012)(54906003)(70586007)(8676002)(70206006)(47076005)(82310400005)(356005)(81166007)(82740400003)(86362001)(2906002)(5660300002)(40460700003)(40480700001)(33656002)(186003)(26005)(6512007)(478600001)(6486002)(36860700001)(53546011)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 16:58:45.1641
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: eeafb44d-f38c-41aa-8ae3-08daf8ac1856
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6443

DQoNCj4gT24gMTcgSmFuIDIwMjMsIGF0IDE2OjU1LCBBbnRob255IFBFUkFSRCA8YW50aG9ueS5w
ZXJhcmRAY2l0cml4LmNvbT4gd3JvdGU6DQo+IA0KPiBPbiBUdWUsIEphbiAxNywgMjAyMyBhdCAw
NDowNzoyNFBNICswMDAwLCBMdWNhIEZhbmNlbGx1IHdyb3RlOg0KPj4+IE9uIDE2IEphbiAyMDIz
LCBhdCAxODoxMCwgQW50aG9ueSBQRVJBUkQgPGFudGhvbnkucGVyYXJkQGNpdHJpeC5jb20+IHdy
b3RlOg0KPj4+IGRpZmYgLS1naXQgYS94ZW4vdG9vbHMvY29tcGF0LXhsYXQtaGVhZGVyLnB5IGIv
eGVuL3Rvb2xzL2NvbXBhdC14bGF0LWhlYWRlci5weQ0KPj4+IG5ldyBmaWxlIG1vZGUgMTAwNjQ0
DQo+Pj4gaW5kZXggMDAwMDAwMDAwMC4uYzFiMzYxYWM1Ng0KPj4+IC0tLSAvZGV2L251bGwNCj4+
PiArKysgYi94ZW4vdG9vbHMvY29tcGF0LXhsYXQtaGVhZGVyLnB5DQo+Pj4gQEAgLTAsMCArMSw0
NjggQEANCj4+PiArIyEvdXNyL2Jpbi9lbnYgcHl0aG9uDQo+PiANCj4+IFdvdWxkIGl0IG1ha2Ug
c2Vuc2UgdG8gc3RhcnQgd2l0aCBweXRob24zIHNpbmNlIGl0IGlzIGEgbmV3IHNjcmlwdD8NCj4g
DQo+IFRoYXQgc2hlYmFuZyBpc24ndCBldmVuIHVzZWQgYXMgdGhlIHNjcmlwdCBkb2Vzbid0IGV2
ZW4gaGF2ZSB0aGUNCj4gZXhlY3V0aW9uIGJpdCBzZXQuIFNvIHdoeSBkbyB5b3Ugc2F5IHRoYXQg
dGhlIHNjcmlwdCBpc24ndCBweXRob24zPyBOb3QNCj4gcmVhbGx5IGFza2luZywganVzdCBiZWVu
IHBlZGFudGljIDotKQ0KDQpZZXMgSSBkaWRu4oCZdCBwYXkgYXR0ZW50aW9uIHRvIHRoYXQNCg0K
PiANCj4gRXZlbiBpZiBpdCdzIGEgbmV3IHNjcmlwdCwgaXQgaXNuJ3QgYSBuZXcgcHJvamVjdC4g
V2UgY2FuJ3QgZGVwZW5kIG9uDQo+IGJyYW5kIG5ldyBmdW5jdGlvbmFsaXR5IGZyb20gb3VyIGRl
cGVuZGVuY2llcy4gV2UgbmVlZCB0byBiZSBhYmxlIHRvDQo+IGJ1aWxkIHRoZSBoeXBlcnZpc29y
IHdpdGggb2xkIGJ1aWxkIHRvb2xjaGFpbiAvIGRpc3RyaWJ1dGlvbi4NCj4gDQo+IEFueXdheSwg
SSBkaWQgc3RhcnQgYnkgd3JpdGluZyBhIHB5dGhvbjMgc2NyaXB0IGluIGFsbCBpdHMgZ2xvcnkg
KG9yIGF0DQo+IGxlYXN0IHNvbWUgb2YgdGhlIG5ldyBwYXJ0IG9mIHRoZSBsYW5ndWFnZSB0aGF0
IEkga25vdyBhYm91dCksIGJ1dCBJIGhhZA0KPiB0byByZXdvcmsgaXQgdG8gYmUgYWJsZSB0byB1
c2UgaXQgb24gb2xkZXIgZGlzdHJpYnV0aW9uLiBPdXIgY2VudG9zNw0KPiBjb250YWluZXIgaW4g
b3VyIEdpdExhYiBDSSBzZWVtcyB0byB1c2UgcHl0aG9uMi43Lg0KDQpUaGF0IG1ha2VzIHNlbnNl
LCB0aGFua3MgZm9yIHRoZSBleHBsYW5hdGlvbi4NCg0KPiANCj4gU28gSSBoYWQgdG8gc3RvcCB1
c2luZyBzdHIucmVtb3ZlcHJlZml4KCkgYW5kIEkgaW50cm9kdWNlIHNvbWUgZnVuY3Rpb24NCj4g
ZG9pbmcgdGhlIHNhbWUgdGhpbmcgaW5zdGVhZCAoc28gdGhhdCB3b3JrcyB3aXRoIG9sZGVyIHRo
YW4gcHl0aG9uIDMuOSkuDQo+IFRoZW4gSSBoYWQgdG8gc3RvcCB1c2luZyBmLXN0cmluZ3MgYW5k
IHVzZSAlLWZvcm1hdHRpbmcgaW5zdGVhZC4NCj4gVGhlbiB1c2UgIm0uZ3JvdXBzKClbMF0iIGlu
c3RlYWQgb2YgIm1bMV0iIHdoZXJlICJtIiBpcyBhIG1hdGNoIHJlc3VsdA0KPiBmcm9tIHJlLm1h
dGNoKCkgYW5kIG90aGVyLg0KPiBBbmQgdXNlIHRoZSBjbGFzc2luZyAiZnJvbSBfX2Z1dHVyZV9f
IC4uLiIgcHJlYW1ibGUuDQo+IA0KPiBDaGVlcnMsDQo+IA0KPiAtLSANCj4gQW50aG9ueSBQRVJB
UkQNCg0K


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:22:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:22:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479634.743600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHped-00052q-P5; Tue, 17 Jan 2023 17:21:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479634.743600; Tue, 17 Jan 2023 17:21:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHped-00052j-MF; Tue, 17 Jan 2023 17:21:47 +0000
Received: by outflank-mailman (input) for mailman id 479634;
 Tue, 17 Jan 2023 17:21:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHpec-00052d-AQ
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:21:46 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68c51991-968b-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 18:21:43 +0100 (CET)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 12:21:35 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM8PR03MB6248.namprd03.prod.outlook.com (2603:10b6:8:25::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:21:32 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 17:21:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68c51991-968b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673976103;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=hxV2T10Qrv9JpvFmxil+/4aisuRrhyEXIyRaKhunBxs=;
  b=awoW0JA1BaoTuIaVRzijwB+CM4J9QKpgWMHBBq+eVYuJ+eNsSWjBIp7B
   Gz17p7eZxs4Amf/S337wwpVjO8tioiP5+7LTA3KV7nblqg6vRdO56iTzt
   HbrMh4rOdX+TvifsDd52hEZHmlI7e4bF7MQqY555hSubUWGJe0sjV1MRh
   E=;
X-IronPort-RemoteIP: 104.47.66.45
X-IronPort-MID: 93066322
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:b2mXhK1TmGIwOXQD+/bD5Rlwkn2cJEfYwER7XKvMYLTBsI5bpzIGz
 2BNXTyPaf/eYDekfdB2b43n9R5VupKEnIBhQVY+pC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK5ULSfUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS93uDgNyo4GlD5gVnOqgR1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfXV1z5
 8A1JT00Nz+Ki763w72ra7lwv5F2RCXrFNt3VnBI6xj8VKxjZK+ZBqLA6JlfwSs6gd1IEbDGf
 c0FZDFzbRPGJRpSJlMQD5F4l+Ct7pX9W2QA9BTJ+uxpvS6PkWSd05C0WDbRUvWMSd9YgQCzo
 WXe8n6iKhobKMae2XyO9XfEaurnzHijBtxJTufQGvhC31uOw2kULkMtbnjnpcGLkxbiXcgPN
 BlBksYphe1onKCxdfHtUhv9rHOasxo0X9tLD/Z8+AyL0rDT4QuSGi4DVDEpQN4sudIyRDcq/
 kSUhN6vDjtq2JWXVHac+7G8vT60fy8PIgcqfjQYRAEI593ipoAbjR/VSNtnVqmvgbXdBjXY0
 z2M6i8kiN0uYdUj0qy6+RXCnGiqr52QFAotvF2LAySi8x9zY5Oja8qw81/H4P1cLYGfCF6co
 HwDnMvY5+cLZX2QqBGwrCw2NOnBz5643Pf03QEH80UJn9h1x0OeQA==
IronPort-HdrOrdr: A9a23:ltxUOq6MYCaK3sjNYAPXwPXXdLJyesId70hD6mlbQxY9SL3+qy
 nIppgmPH7P5wr5PUtKpTnuAse9qB/nlKKdg7NhXotKLTOHhILAFugLh+bfKlbbak/DH4BmpM
 NdWpk7JNrsDUVryebWiTPIderIGeP3lZyVuQ==
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="93066322"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B8qO9RT6GRO6FkFea/3ju57yMTwrNPjnYK2keajLxbIYj1MTM3N+r0lLAz9+WU6bizx7WhlUEt6hkYQFgMDewa3ANKuysgYcYygX5DcO6I4a81Rh+1GSrwaO/z7XmxApJFapWrdxq56xm+ycp7DUKTgslXRyx2xRXFpuwMmKgd0GulBnvAJrzfdAvc6SRomKPUURjK9Q2KjfJU9FHKJKKL12Eg705tgHBAcb8iVarErja3cmU9mKRliwjckPHYgd3vUc7kQouU+YasOLflP0pukV9EYscYnvNYp+mPcM8yXCtAfMu59oh3GAoFr1i2E/aDDWxmaY1PDewdYTlLf8dg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hxV2T10Qrv9JpvFmxil+/4aisuRrhyEXIyRaKhunBxs=;
 b=WebNrPU592njWIJwPKt9Ly19/26OjR0wPmKemYtN10eF31/ChPqO+VsXFnnl9749ddYLMeYnh7tX6094gF7Lo/1GPkbW+96ZK+by5JjxRnvbP4tZUjh3r5Ud6CSIUuudLE+Dt9iBvDDq8w5mJlp0fOBtMwdY8pB8K1xP0Gi/bemX+lZ2/BqgSDwH0Kfcsf+5Pr83AMLgb9x1PPC6cjmZ8ko4j5vUkvap5i0XgjYyaJoY5yElrfkqeMO6KmXy8qV/MR5CRg36uJ6M0Vljk2PHRc/r/RWmRHfugoaSCnS03IVD97slxZgngpqTJRQF2aXkdjVh/Qn+/SKTetum+e2FPw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hxV2T10Qrv9JpvFmxil+/4aisuRrhyEXIyRaKhunBxs=;
 b=c5iu96HfmtZS0DJbF9BV6s9yoqtFVg8p65INQWTVC23IQ+tXK70rFAcH62Qg6EzOHEFcq89AaVbHlrrZAODNEhcTa/PhgD5BA/6G5Tu1DxHAilxr1NUBD/ulyX4Ag8P7gBkr4KCm78GgsvTANjgzqU0JxGI01kNG3SbIX/TYHgc=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Topic: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Index: AQHZKdXmSfLi4OljAEm5WNlNB2w+ZK6i3JyA
Date: Tue, 17 Jan 2023 17:21:32 +0000
Message-ID: <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
In-Reply-To: <20230116181048.30704-2-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM8PR03MB6248:EE_
x-ms-office365-filtering-correlation-id: badf68cc-9811-40c4-0c72-08daf8af4764
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 fj8nsIiAqf2fhbP19Rt0Iyd5zirDQpBPIi1EY2OA5C2zpbuh71+WD147ZjgmjYqVZu73LyLr43bErCiGjaYdIN3taV8Vcpywovi3ek4xdFEWQO0KoTSh2JA8U+0KAJG6dWx+HOVfI3gL6PncBj25y8m2jp41cfDchDU1GdLxUanYe7X9Inhay1Ij1EpIqBeZOaILQnRl9TFzxgN/BpAMvYPLJBq6DOfEioPfzhM1TW6/RJ1vF7FY8QeT187BNndRRWXCIFFn3QkmpLFMNXuaBWV4kxArHwMbNzH9Om303CfueLG+qPurmYfEju8qLKZi7FGxMdmXGd90l8j4O76R8csz1VhdcvA0aAYNnqj9HhxEGh2sS3b/LHDlw7Y/glGAKmtM5w6YIz8bXGAUObEt6fZ4QF6cDwQ9PKR1BNDopVS4DPKIRfvhreGOZKqgPhcAptF7Eo8e3NWM+mMpLzbKscUAV4T+pj8YRgpca165/SYb4bvCfqK4UMz7MpqaE3wEuPyrqKv2HgLGn4mSr02qfswGjdAqbC9m7ealNkjgmLxeWBIjHaBQcv+KPWO7sA6gOfkg0+RMBDcPK65/L0g2EfO839aFEJo94/LUH1em45KmWHHNIuOpwzMay6BmQoDuVzVXqHdXYNwMfFyWa2WLQ6R/St4/4PT4rQ3CONpIJMvatqRxnzu/5lgmw8oEyJOy5nfq24viI5TdTyzNceOX8yEYQBMyNlTIa/g6tkE0XYd9/4DNv187x0b1IY2kpRgEMrggM+ArRAekIe31fC0SpA==
Content-Type: text/plain; charset="utf-8"
Content-ID: <1142FBA198D93F45B484A4A7AAA02B17@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: badf68cc-9811-40c4-0c72-08daf8af4764
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 17:21:32.6582
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 4MmJ/Ui9iYJD6OvQ7TQ/nYafbJWw57GcObu1AC1H//P8bhPWcVgjUXCfe7D/Nzo//Oncuq2wLYLMoq4/aMq9k7r6t0L8H+RJr6hlBxYYHmY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR03MB6248

On 16/01/2023 6:10 pm, Anthony PERARD wrote:
> The get-fields.sh which generate all the include/compat/.xlat/*.h
> headers is quite slow. It takes for example nearly 3 seconds to
> generate platform.h on a recent machine, or 2.3 seconds for memory.h.
>
> Rewriting the mix of shell/sed/python into a single python script make
> the generation of those file a lot faster.
>
> No functional change, the headers generated are identical.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

I'll happily trust the testing you've done, and that the outputs are
equivalent.  However, I have some general python suggestions.

> diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
> new file mode 100644
> index 0000000000..c1b361ac56
> --- /dev/null
> +++ b/xen/tools/compat-xlat-header.py
> @@ -0,0 +1,468 @@
> +#!/usr/bin/env python
> +
> +from __future__ import print_function
> +import re
> +import sys
> +
> +typedefs = []
> +
> +def removeprefix(s, prefix):
> +    if s.startswith(prefix):
> +        return s[len(prefix):]
> +    return s
> +
> +def removesuffix(s, suffix):
> +    if s.endswith(suffix):
> +        return s[:-len(suffix)]
> +    return s
> +
> +def get_fields(looking_for, header_tokens):
> +    level = 1
> +    aggr = 0
> +    fields = []
> +    name = ''
> +
> +    for token in header_tokens:
> +        if token in ['struct', 'union']:

('struct', 'union')

A tuple here rather than a list is marginally more efficient.

> +            if level == 1:
> +                aggr = 1
> +                fields = []
> +                name = ''
> +        elif token == '{':
> +            level += 1
> +        elif token == '}':
> +            level -= 1
> +            if level == 1 and name == looking_for:
> +                fields.append(token)
> +                return fields
> +        elif re.match(r'^[a-zA-Z_]', token):

Here and many other places, you've got re.{search,match} with repeated
regexes.  This can end up being quite expensive, and we already had one
build system slowdown caused by it.

Up near typedefs at the top, you want something like:

token_re = re.compile('[a-zA-Z_]')

to prepare the regex once at the start of day, and use 'elif
token_re.match(token)' here.

Another thing to note, the difference between re.search and re.match is
that match has an implicit '^' anchor.  You need to be aware of this
when compiling one regex to be used with both.


All of this said, where is 0-9 in the token regex?  Have we just got
extremely lucky with having no embedded digits in identifiers thus far?

> +            if not (aggr == 0 or name != ''):
> +                name = token
> +
> +        if aggr != 0:
> +            fields.append(token)
> +
> +    return []
> +
> +def get_typedefs(tokens):
> +    level = 1
> +    state = 0
> +    typedefs = []

This shadows the global typedefs list, but the result of calling this
function is simply assigned to it.  Looking at the code, the global list
is used in several places.

It would be better to "global typedefs" here, and make this function
void, unless you want to modify handle_field() and build_body() to take
it in as a regular parameter.

I'm pretty sure typedefs actually wants to be a dict rather than a list
(will have better "id in typedefs" performance lower down), but that
wants matching with code changes elsewhere, and probably wants doing
separately.

> +    for token in tokens:
> +        if token == 'typedef':
> +            if level == 1:
> +                state = 1
> +        elif re.match(r'^COMPAT_HANDLE\((.*)\)$', token):
> +            if not (level != 1 or state != 1):
> +                state = 2
> +        elif token in ['[', '{']:
> +            level += 1
> +        elif token in [']', '}']:
> +            level -= 1
> +        elif token == ';':
> +            if level == 1:
> +                state = 0
> +        elif re.match(r'^[a-zA-Z_]', token):
> +            if not (level != 1 or state != 2):
> +                typedefs.append(token)
> +    return typedefs
> +
> +def build_enums(name, tokens):
> +    level = 1
> +    kind = ''
> +    named = ''
> +    fields = []
> +    members = []
> +    id = ''
> +
> +    for token in tokens:
> +        if token in ['struct', 'union']:
> +            if not level != 2:
> +                fields = ['']
> +            kind = "%s;%s" % (token, kind)
> +        elif token == '{':
> +            level += 1
> +        elif token == '}':
> +            level -= 1
> +            if level == 1:
> +                subkind = kind
> +                (subkind, _, _) = subkind.partition(';')
> +                if subkind == 'union':
> +                    print("\nenum XLAT_%s {" % (name,))
> +                    for m in members:
> +                        print("    XLAT_%s_%s," % (name, m))
> +                    print("};")
> +                return
> +            elif level == 2:
> +                named = '?'
> +        elif re.match(r'^[a-zA-Z_]', token):
> +            id = token
> +            k = kind
> +            (_, _, k) = k.partition(';')
> +            if named != '' and k != '':
> +                if len(fields) > 0 and fields[0] == '':
> +                    fields.pop(0)
> +                build_enums("%s_%s" % (name, token), fields)
> +                named = '!'
> +        elif token == ',':
> +            if level == 2:
> +                members.append(id)
> +        elif token == ';':
> +            if level == 2:
> +                members.append(id)
> +            if named != '':
> +                (_, _, kind) = kind.partition(';')
> +            named = ''
> +        if len(fields) != 0:
> +            fields.append(token)
> +
> +def handle_field(prefix, name, id, type, fields):
> +    if len(fields) == 0:
> +        print(" \\")
> +        if type == '':
> +            print("%s(_d_)->%s = (_s_)->%s;" % (prefix, id, id), end='')
> +        else:
> +            k = id.replace('.', '_')
> +            print("%sXLAT_%s_HNDL_%s(_d_, _s_);" % (prefix, name, k), end='')
> +    elif not re.search(r'[{}]', ' '.join(fields)):
> +        tag = ' '.join(fields)
> +        tag = re.sub(r'\s*(struct|union)\s+(compat_)?(\w+)\s.*', '\\3', tag)
> +        print(" \\")
> +        print("%sXLAT_%s(&(_d_)->%s, &(_s_)->%s);" % (prefix, tag, id, id), end='')
> +    else:
> +        func_id = id
> +        func_tokens = fields
> +        kind = ''
> +        array = ""
> +        level = 1
> +        arrlvl = 1
> +        array_type = ''
> +        id = ''
> +        type = ''
> +        fields = []
> +        for token in func_tokens:
> +            if token in ['struct', 'union']:
> +                if level == 2:
> +                    fields = ['']
> +                if level == 1:
> +                    kind = token
> +                    if kind == 'union':
> +                        tmp = func_id.replace('.', '_')
> +                        print(" \\")
> +                        print("%sswitch (%s) {" % (prefix, tmp), end='')
> +            elif token == '{':
> +                level += 1
> +                id = ''
> +            elif token == '}':
> +                level -= 1
> +                id = ''
> +                if level == 1 and kind == 'union':
> +                    print(" \\")
> +                    print("%s}" % (prefix,), end='')
> +            elif token == '[':
> +                if level != 2 or arrlvl != 1:
> +                    pass
> +                elif array == '':
> +                    array = ' '
> +                else:
> +                    array = "%s;" % (array,)
> +                arrlvl += 1
> +            elif token == ']':
> +                arrlvl -= 1
> +            elif re.match(r'^COMPAT_HANDLE\((.*)\)$', token):
> +                if level == 2 and id == '':
> +                    m = re.match(r'^COMPAT_HANDLE\((.*)\)$', token)
> +                    type = m.groups()[0]
> +                    type = removeprefix(type, 'compat_')

So this pattern is what led to the introduction of the := operator in
Python 3.8, but we obviously can't use it yet.

At least pre-compiling the regexes will reduce the hit from the double
evaluation here.

~Andrew

P.S. I probably don't want to know why we have to special case evtchn
port, argo port and domain handle.  I think it says more about this
bodge of a parser than anything else...
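
[Editor's note: the regex advice in the review above can be sketched as
follows. This is an illustrative fragment, not part of the patch under
review; the token list and pattern are invented for the demonstration.]

```python
import re

# Compile once at module scope instead of re-specifying the pattern on
# every call.  The pattern mirrors the review's [a-zA-Z_] (note the
# review's question about the missing 0-9 range).
token_re = re.compile(r'[a-zA-Z_]')

tokens = ['struct', '{', 'u32', 'field', ';', '}']

def count_identifiers_repeated(tokens):
    # Re-specifies the pattern on every iteration; re's internal cache
    # helps, but each call still pays a cache lookup.
    return sum(1 for t in tokens if re.match(r'^[a-zA-Z_]', t))

def count_identifiers_precompiled(tokens):
    # token_re.match() is already anchored at the start of the string,
    # so the explicit '^' is unnecessary.
    return sum(1 for t in tokens if token_re.match(t))

assert count_identifiers_repeated(tokens) == count_identifiers_precompiled(tokens) == 3

# The match/search distinction the review warns about: match is
# anchored at position 0, search scans the whole string.
assert re.match(r'[a-z]+', '123abc') is None
assert re.search(r'[a-z]+', '123abc') is not None
```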
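
[Editor's note: the shadowing point raised for get_typedefs() can
likewise be shown with a stripped-down stand-in; the helper names and
sample tokens here are simplified, not taken from the patch.]

```python
typedefs = []  # module-level list, as in the patch

def get_typedefs_shadowed(tokens):
    # A local 'typedefs' hides the module-level one; the caller must
    # assign the return value back, which is the pattern the review
    # flags as fragile.
    typedefs = [t for t in tokens if t.isidentifier()]
    return typedefs

def get_typedefs_global(tokens):
    # The review's alternative: declare the name global, mutate it,
    # and make the function void.
    global typedefs
    typedefs = [t for t in tokens if t.isidentifier()]

get_typedefs_global(['typedef', '{', 'xen_ulong_t', ';'])
assert typedefs == ['typedef', 'xen_ulong_t']

# The dict suggestion: membership tests on a dict (or set) are O(1)
# on average, versus O(n) for a list.
typedef_set = set(typedefs)
assert 'xen_ulong_t' in typedef_set
```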


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:31:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:31:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479641.743611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHpnS-0006bG-Oe; Tue, 17 Jan 2023 17:30:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479641.743611; Tue, 17 Jan 2023 17:30:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHpnS-0006b9-K7; Tue, 17 Jan 2023 17:30:54 +0000
Received: by outflank-mailman (input) for mailman id 479641;
 Tue, 17 Jan 2023 17:30:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHpnR-0006a3-FS; Tue, 17 Jan 2023 17:30:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHpnR-0006kU-Cc; Tue, 17 Jan 2023 17:30:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHpnR-0000PG-0s; Tue, 17 Jan 2023 17:30:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHpnR-00015m-0W; Tue, 17 Jan 2023 17:30:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9KOHtitnz0PJbbJVq0xY9KMugYLDgbWqgTFL7HH0eYE=; b=zJjueLh0DsMxb2b1hQWU2/VwD3
	XDrYqCjU83gCVK9gZhoun4KjcYN7X+yr12Mqio+J00WVwn0TP9imJiaxDmdobyCxPAAx7IvkgSQ1J
	Fs//juptXhWWGi0VkNWgiPvoyy435B8QbhVsgnfHkhRKrurfzDhnQGNIOwp4GkRDmbOM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175940-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175940: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=015a001b03db14f791476f817b8b125b195b6d10
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 17:30:53 +0000

flight 175940 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175940/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 015a001b03db14f791476f817b8b125b195b6d10
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   31 attempts
Testing same since   175940  2023-01-17 16:40:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 424 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:40:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:40:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479649.743622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHpwF-0007II-J8; Tue, 17 Jan 2023 17:39:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479649.743622; Tue, 17 Jan 2023 17:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHpwF-0007IB-G7; Tue, 17 Jan 2023 17:39:59 +0000
Received: by outflank-mailman (input) for mailman id 479649;
 Tue, 17 Jan 2023 17:39:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHpwE-0007I5-HN
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:39:58 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3934d27-968d-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:39:55 +0100 (CET)
Received: from mail-mw2nam04lp2171.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 12:39:47 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5078.namprd03.prod.outlook.com (2603:10b6:a03:1e5::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:39:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 17:39:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3934d27-968d-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673977195;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=sfxrSXwUC/Cos2GlMWRMpJO2QKp7dFdBTczRZEQ43S8=;
  b=aHJfjlFZ4ceTCVYsCiQmGvDio1eXEGMFSnQoSJiMCG1FuX77+hV+Ws02
   noe3xlel+ucXFbprZ4Gpk7nhltV9Ihh8RQQSDWCroq7G970VaUqLDxCLl
   P3DDZpwk8qX/hs5g5XullfmvqUy0LqjzpCHJKkO6885ZxMHJK3WeXtWfn
   Q=;
X-IronPort-RemoteIP: 104.47.73.171
X-IronPort-MID: 92486242
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="92486242"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fbQXjBvVTlfEaWi2PmDyS/G1vlhRCm2lUz98z5QZBwpsR9sHxIqgkcd3nk7d2zxu3lxKM4V2dp6wnbvoZvsBYVY//tijReXzi3gChlsKNeMTzSPnV9O8lHGKUBCm2nar5kejKVc8djb/ooRB8iW4HzRZgWjxvEKNlDUOxRrsaoLd7XjDW32uh4EPIjCbLQPIv1WAChobSV3pT9cTnpNKMEh6ygDE6XTWRrYrsIS9wKjZ3pfJTA3KYKh8PSK43c8h6jZrDX0pyHrHxiTa2vslGn7QCiv5kTDMBbENiarZytco+lvd6so9fPCkueoXDMcBAzc2rQrSz8WZ31lJddWTFw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sfxrSXwUC/Cos2GlMWRMpJO2QKp7dFdBTczRZEQ43S8=;
 b=RV9LaoKh4AGlnDO+ELyc6+qQz4X+wlnzY4tJmzSJaSmFNX1St9zqqfmEkOesjdLs9P7DiAU0KhGp9VGMFYkHYLCVbD+wX9omKAmQLkHgdVO8hAMex3bKI0mPzXvqOBhFLrj9xeXSU+2ls6HE36U/OwsYehJBKxqJVb5NKRN/7e4VkNdlmmjv4anMqSA5xGnbIn6OUTfvnK6B2qhL/RGgJ9pSwQxefXU2NrY1RPhAAQ7VdUI29ovtOddIO8AKDxYg/mbFzJLmeL6qqhA+3ifuZu8Kp15lo0ZdObad9CHMTLMRNZk/6DxIH4mNOpIvU38+JiFzJDI1+YEHTgL8B/Om8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sfxrSXwUC/Cos2GlMWRMpJO2QKp7dFdBTczRZEQ43S8=;
 b=fL5Jng7H8CxUlo9NDNvrByZAwwD2ANev5SdYOjHlHoXLnMCyO6RuoJfA4/xYSeNo39/2glm86/UOi9fznWgQfMgyQKA4XBALDxlF8wAO064KbusCi5S4/KHNEqTz6xoJOe79aNq2EigJS24+fgHV0pBZMkvyp6R4y6ZvftA7tq4=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Daniel Smith
	<dpsmith@apertussolutions.com>
Subject: Re: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_*
 subops
Thread-Topic: [PATCH v2 3/5] xen/version: Introduce non-truncating XENVER_*
 subops
Thread-Index: AQHZJ6P9s5HCoeqlF020fKh7orvbwa6i0F2AgAAVuQA=
Date: Tue, 17 Jan 2023 17:39:44 +0000
Message-ID: <5e9862b0-2c14-db70-2292-4f31b1653159@citrix.com>
References: <20230113230835.29356-1-andrew.cooper3@citrix.com>
 <20230113230835.29356-4-andrew.cooper3@citrix.com>
 <CAKf6xpstz88SvrQNxmOrk2FL6x2mRegAGfrui7-6_C3Yg7EsjQ@mail.gmail.com>
In-Reply-To:
 <CAKf6xpstz88SvrQNxmOrk2FL6x2mRegAGfrui7-6_C3Yg7EsjQ@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BY5PR03MB5078:EE_
x-ms-office365-filtering-correlation-id: 0f335e08-829d-40ed-a180-08daf8b1d25d
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
 =?utf-8?B?eUUwdGtBTVIydmVETnkzeGN6VGlKRWFNK0l2UktQSHlOOU1iVVdob3dJUU5E?=
 =?utf-8?B?elY0V1RYNHZqejdPYk1vaFBhdkxqR2JwR2VYWHI0K0FBT2o5V3FEbm5NbkpN?=
 =?utf-8?Q?o9ivnTbO6wwc2FXwYU5VYLjVr?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <ED0B7FDA2FC19E4688399D3AA598C924@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	=?utf-8?B?U0VxMzltVlVMTHlPSW9zY2I1T1pvSlNqMndON0picXIvRXBHbk9ickJ5S25W?=
 =?utf-8?B?aXJGU1JQSnYzZ1hLK01IUmlkajNCMUZOdkdrbGZsVVZKZUNTdjE1M3JaRVJN?=
 =?utf-8?B?Y0VaWE00dmpIR0hoQ1M4aWJiek5OYURpbDVHaDBVZG4rcXhMYlNHcWFDSVN6?=
 =?utf-8?B?UXNnV00wek9hY0llSjZiS2pLSzZ5Tm5lL2NDN1Q0eUR5VWRTVkdDR0dhTkpQ?=
 =?utf-8?B?Q3V4OTI3eWo5RndzZkszeHJVRjJRVXZ5aFZiVEtBSDdJckNJa1MxaDVMaEg5?=
 =?utf-8?B?UnlwYTJiSzBMNzBkMUR0djRsZUs4Tm5KTVNEQmErMXFkVkptSHZQbUNIY0VF?=
 =?utf-8?B?R2FOMGk4TURkT1JwRlFyTXRTZmhzQWhTVkJzakpMWWRRcWNnb2RVaEVkZ3pP?=
 =?utf-8?B?dWwwcVdBNkhZZVJuVVZ5cU56SHRpczNaRjZDTnNiNE9LaUdiSTJ3MFJMaHlz?=
 =?utf-8?B?QXNsT2dhTXNQbkpOWGRCNUdrL2MxdnlVbWROMXYvcEc4cXZwWXI5S281MWlN?=
 =?utf-8?B?YzZOaDRkckpjT0M2a1A2WmwvS3JyVlQ3d1Y1OVBGcDlBTDg2T3JCeTZFRFlt?=
 =?utf-8?B?dzZxd1lJVkllclVrcVpYYTNueDR6NTA3cEdRaWVCS3N3WVNZeGlpODFPeS9U?=
 =?utf-8?B?RHhzSStDRHFHRmd6bENEMnJ5Wmk5NklvU08vR1NCQ2hNeXM1ZklMSjVmNFB5?=
 =?utf-8?B?WjJTU05UM0lqMXFXQ3BMV043dDR1aGtlYWZSR01GV09maHdZWGNYdDhia3hq?=
 =?utf-8?B?N1lEVW9rNTdnckFpazdhZkRFTFpGR1ZDSExhVmptbTNZNEhsSG0wVW1ER0cv?=
 =?utf-8?B?MDdMMGRqMW9oakpyRURSVHpya3BuRDc1YXdZbEpvVVFkNVRDM25IUGdIbHh4?=
 =?utf-8?B?QnJuUDRFWU1LNmRCMVNyOGh3ODZPc1RybXJvWmVTdVgrRFQ1Mlp0NGxJY1Bi?=
 =?utf-8?B?Tm5Beks3OGN4MUxVcUFjYk9jSFlZMkswcnBmT0hWdEdPWGtLS2hyNXl2RVF5?=
 =?utf-8?B?V2E0UVZwVEZ5a2xFa0JaWHJwZ2x1cWh4MkhVeG0xcUx6NkxoaTBrME4xNzJX?=
 =?utf-8?B?bTFOVUxITkEyd1NVN1ZlenFKVUpwckZOVmZaS0xJbER0N1NiVm1BR015RmFx?=
 =?utf-8?B?L2E0TUswMm45c1M0WXQ1SUFyWXE4M0txNjAvV1BwbEtKYUNNcE9ob3NNRUtP?=
 =?utf-8?B?eDRFbnpwR1lLK0lzT1JqZzJVUFNZSWsrVEVPclRWRUxQbHFZWklOVEJuYXA4?=
 =?utf-8?B?dkI3M25pWDlwWmg5NFZiM09Tc3ZNRTRjc1YyUG1iSmc3ajM2UT09?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0f335e08-829d-40ed-a180-08daf8b1d25d
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 17:39:44.8441
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: agKXojLT3KOkA4FDF3acblqqpMAKdwY/Ba3snjt0uhoIHVFxYn/vYb4i8rA8Q0aSOcrSrCXIiMZzhMeOyF7aCXu8/rW9OWUE6LyEBzLyJoQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5078

On 17/01/2023 4:21 pm, Jason Andryuk wrote:
> On Fri, Jan 13, 2023 at 6:08 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> Recently in XenServer, we have encountered problems caused by both
>> XENVER_extraversion and XENVER_commandline having fixed bounds.
>>
>> More than just the invariant size, the APIs/ABIs also broken by typedef-ing an
>> array, and using an unqualified 'char' which has implementation-specific
>> signed-ness
>>
>> Provide brand new ops, which are capable of expressing variable length
>> strings, and mark the older ops as broken.
>>
>> This fixes all issues around XENVER_extraversion being longer than 15 chars.
>> More work is required to remove other assumptions about XENVER_commandline
>> being 1023 chars long.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: George Dunlap <George.Dunlap@eu.citrix.com>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Wei Liu <wl@xen.org>
>> CC: Julien Grall <julien@xen.org>
>> CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>> CC: Daniel Smith <dpsmith@apertussolutions.com>
>> CC: Jason Andryuk <jandryuk@gmail.com>
>>
>> v2:
>>  * Remove xen_capabilities_info_t from the stack now that arch_get_xen_caps()
>>    has gone.
>>  * Use an arbitrary limit check much lower than INT_MAX.
>>  * Use "buf" rather than "string" terminology.
>>  * Expand the API comment.
>>
>> Tested by forcing XENVER_extraversion to be 20 chars long, and confirming that
>> an untruncated version can be obtained.
>> ---
>>  xen/common/kernel.c          | 62 +++++++++++++++++++++++++++++++++++++++++++
>>  xen/include/public/version.h | 63 ++++++++++++++++++++++++++++++++++++++++++--
>>  xen/include/xlat.lst         |  1 +
>>  xen/xsm/flask/hooks.c        |  4 +++
> The Flask change looks good, so that part is:
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com> # Flask

Thanks.

>
> Looking at include/xsm/dummy.h, these new subops would fall under the
> default case and require XSM_PRIV.  Is that the desired permission,
> and guests would just have to handle EPERM?

I noticed that during further testing.  I made a modification to dummy
in the same manner as flask.

Given that it's the same piece of information (just with a less broken
API), permission handling for the two ops should be the same.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:44:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:44:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479655.743633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq0T-0000H7-3n; Tue, 17 Jan 2023 17:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479655.743633; Tue, 17 Jan 2023 17:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq0T-0000H0-11; Tue, 17 Jan 2023 17:44:21 +0000
Received: by outflank-mailman (input) for mailman id 479655;
 Tue, 17 Jan 2023 17:44:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq0R-0000Gu-00
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:44:19 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on2079.outbound.protection.outlook.com [40.107.101.79])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8f60bf75-968e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 18:44:17 +0100 (CET)
Received: from BN8PR12CA0010.namprd12.prod.outlook.com (2603:10b6:408:60::23)
 by SA1PR12MB6896.namprd12.prod.outlook.com (2603:10b6:806:24f::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:44:12 +0000
Received: from BN8NAM11FT053.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:60:cafe::3f) by BN8PR12CA0010.outlook.office365.com
 (2603:10b6:408:60::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:44:12 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT053.mail.protection.outlook.com (10.13.177.209) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 17:44:12 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:44:12 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:44:11 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:44:11 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f60bf75-968e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OtOa4AJUcsFlfYj72GZcTikQyBUSwtC4Ls+3xNcz+VMwkVg1h0Ykxl+TfW85QZexGSZUGmhhImzpNehKtFjaVCk8mPlZDvcrk3k73ipKbYpg+h/XnyrxQNG9pmUsMlyPfdrTBvl72w/eCqQONMHrpEfZ6DdnrAgRdjEvZi6Kx3txQUlJ1hT/xXWI9BAmsHAkDpqr0KUrkZPH6IDoqCT2rDNula0QYvsT9pfISfhzAFTdXEVPGXi/Wjlsm0u4i0iMfiQIP+hgpx+TLq+aGXly6uUjbChgDN1mFbhLTBfcHREFJ4x2aGT48cIZdePUN4Fh3ldFI7SGRc7+aBLSgU/uRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Q2RD9ZRtSDHuGZSZbGgqtX8rHC0gssD0MojCsiUdImY=;
 b=dBO08lTk3DavtN19mab5QfiXuNUOv/rby2X//qAyaHF5lXByYJ7RgUOyUl6yx0eXOkM96k4Y8uzEakM/loLBz+Ecm22Y60Z8LsdkOx8f3xB5XIu3YqS0OnpMvHgBEecH53JBiPLojlU031Qmpa1ygzbZO6Oc237rsLnLPD8Mi19/0smfH1uMMFoc9FyM+JZfyBPyozH/SbDy1f7xPDMZaYYZEJ30tYOEyaloa9LGjA/uwODHrgP77vG6YKH7mCOL6f+GtxR1i6uOB2xlLOfCbFwD+A50KXfo0I6jNKpc0Cj2oRF9sn1kM47mkMMvftPXceSJxS/KoBH69bLtcR9clQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q2RD9ZRtSDHuGZSZbGgqtX8rHC0gssD0MojCsiUdImY=;
 b=bElTYGfm1iC/2rt7qyiGfK83TYpFpfOlc7sGLWCMyGfPaYlvz0GG/0TUZj9f1AxT/edxS/BAGnFyia9iOyLfm55VzJ0/E9BY8wREElzI0c2rNDcHUF26zQ1TuUVRzsb7lpIW7OzeMJjI8kpsLswYBkXNNXIDLDlED34k9lZ9apY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 00/11] Add support for 32 bit physical address
Date: Tue, 17 Jan 2023 17:43:47 +0000
Message-ID: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT053:EE_|SA1PR12MB6896:EE_
X-MS-Office365-Filtering-Correlation-Id: b05008b4-914d-498f-bc69-08daf8b271f0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	csNQt348hr56Rs58/2gBkQDEjimrLd/KMyAi0i4gMVauJU9O2P/TH+xFegnz+kcoI9Fe4IC5RgSr0dEfAp9lNSUrB209AFzWpOZWpMflhOOpSf9LInUEajYMJn9pklQlsCU1AB4BICvTWe0dGS+yehmyt0Z01xjnFDwMEUcbbAEArUswmmOSYs5UFHPK9Fg+PWFahTW5my3eemjt3q1T4v6ZS9yLO9yn6sQXU6wEj+21wkrckWjlT0A1y+mj852WGbxX8qhcxcsA9aFQ0DH0xgs0iXpHnkdPECBHGcq/qIO3kQKRQ+Zc10iStzLVZwGI089zYNTsR7lTV5y3OOqUyVLAeA/87iQHfJGavjsoyJjuI7ww1pTSfL1SDCUmJy173lphCFTXNUBKz+PZCeZ4nwmOmFL0lsw1oTZ+NpcD4i759QNGtL9iTOC3QmvSAqbkgrfjxrCKYekIZwUHBpUW0zy6zXFVIGsCMoKOVcEK3DpmYs4IBoLQQnEBNjy9y/uFA61pfbFkPiKwlqkdzmZmjs0vvAKurWxfpGzdpfaHNy+kzc2f9i88Qd1H3WmEAMeW1WO04cqpEefNJNSH6p+dsycpaSOIyVj1TvBfOG61H9QK5fYejWwY0Z4XVFu2eDdTx1KMxO7u1qry62lOOUY1ARAPZPIJdbikhIh9ZF/C+XdsuUfrYy8RhYPOwQwwE2hdJoqEt+UlNGnmSDyRA9gZz8GHNlNwpWP9O+ZJAaIqQ5ZDT9ZztSQkZ43J3r0CAzUhFh+CCai4+wadDOlouFF8vReLKaYyjcOAhBhFK/ipOpUmmW/r/m7lOiiUC2wLIJ0D
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(376002)(39860400002)(346002)(451199015)(36840700001)(40470700004)(46966006)(86362001)(82310400005)(36756003)(103116003)(186003)(26005)(6916009)(8676002)(4326008)(2616005)(70206006)(70586007)(41300700001)(426003)(47076005)(40480700001)(1076003)(336012)(54906003)(316002)(478600001)(966005)(40460700003)(2906002)(356005)(81166007)(83380400001)(36860700001)(5660300002)(8936002)(82740400003)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:44:12.5347
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b05008b4-914d-498f-bc69-08daf8b271f0
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT053.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB6896

Hi All,

Please have a look at https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01465.html
for the context.

The benefits of using 32 bit physical addresses are as follows:

1. It enables Xen on platforms (e.g. R52) which support 32 bit physical
addresses and have no support for the Large Physical Address Extension.
On 32 bit MPU systems which support flat mapping (e.g. R52), it helps to
translate a 32 bit VA into a 32 bit PA.

2. It also helps with code optimisation when the underlying platform does
not use the Large Physical Address Extension.


The following points are to be noted:
1. The device tree always uses u64 for addresses and sizes. The caller needs
to translate between u64 and u32 (when 32 bit physical addressing is used).
2. Currently, we have enabled this option only for Arm_32, as the MMU for
Arm_64 uses 48 bit physical addressing.
3. https://lists.xenproject.org/archives/html/xen-devel/2022-12/msg00117.html
has been added to this series.

Changes from v1:

1. Reordered the patches so that the first three fix issues in the existing
codebase. These can be applied independently of the remaining patches in this
series.

2. Dropped translate_dt_address_size(), which handled the address/size
translation between paddr_t and u64 (as parsed from the device tree), and
dropped the check for truncation (while converting u64 to paddr_t).
Instead, device_tree_get_reg() has been modified and the return value of
dt_read_number() is cast to obtain paddr_t. Wrappers for fdt_get_mem_rsv()
and dt_device_get_address() have also been introduced for the same purpose.
These can be found in patch 4/11 and patch 6/11.

3. Split "Other adaptations required to support 32bit paddr" into the
following individual patches, one for each adaptation:
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_ARM_PA_32"

4. Introduced "xen/arm: p2m: Enable support for 32bit IPA".

Ayan Kumar Halder (11):
  xen/ns16550: Remove unneeded truncation check in the DT init code
  xen/arm: Use the correct format specifier
  xen/arm: domain_build: Replace use of paddr_t in find_domU_holes()
  xen/arm: Typecast the DT values into paddr_t
  xen/arm: Use paddr_t instead of u64 for address/size
  xen/arm: Introduce a wrapper for dt_device_get_address() to handle
    paddr_t
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_ARM_PA_32"
  xen/arm: Introduce ARM_PA_32 to support 32 bit physical address
  xen/arm: Restrict zeroeth_table_offset for ARM_64
  xen/arm: p2m: Enable support for 32bit IPA

 xen/arch/arm/Kconfig                   |  9 ++++
 xen/arch/arm/bootfdt.c                 | 23 ++++++----
 xen/arch/arm/domain_build.c            | 32 +++++++-------
 xen/arch/arm/gic-v2.c                  | 17 ++++----
 xen/arch/arm/gic-v3.c                  | 11 ++---
 xen/arch/arm/guest_walk.c              |  2 +
 xen/arch/arm/include/asm/device_tree.h | 59 ++++++++++++++++++++++++++
 xen/arch/arm/include/asm/lpae.h        |  4 ++
 xen/arch/arm/include/asm/page-bits.h   |  2 +
 xen/arch/arm/include/asm/setup.h       |  2 +-
 xen/arch/arm/include/asm/types.h       |  7 +++
 xen/arch/arm/mm.c                      |  9 +---
 xen/arch/arm/p2m.c                     | 10 +++--
 xen/arch/arm/platforms/exynos5.c       | 33 +++++++-------
 xen/arch/arm/platforms/sunxi.c         |  3 +-
 xen/arch/arm/setup.c                   | 14 +++---
 xen/arch/arm/smpboot.c                 |  2 +-
 xen/drivers/char/exynos4210-uart.c     |  5 ++-
 xen/drivers/char/ns16550.c             | 16 +++----
 xen/drivers/char/omap-uart.c           |  5 ++-
 xen/drivers/char/pl011.c               |  7 +--
 xen/drivers/char/scif-uart.c           |  5 ++-
 xen/drivers/passthrough/arm/smmu.c     | 24 ++++++-----
 23 files changed, 199 insertions(+), 102 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/device_tree.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479661.743644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1G-0000rO-Id; Tue, 17 Jan 2023 17:45:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479661.743644; Tue, 17 Jan 2023 17:45:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1G-0000rF-Dy; Tue, 17 Jan 2023 17:45:10 +0000
Received: by outflank-mailman (input) for mailman id 479661;
 Tue, 17 Jan 2023 17:45:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1F-0000r0-KL
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:09 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on2075.outbound.protection.outlook.com [40.107.100.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ae7e8220-968e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 18:45:08 +0100 (CET)
Received: from DM6PR08CA0042.namprd08.prod.outlook.com (2603:10b6:5:1e0::16)
 by DM6PR12MB4548.namprd12.prod.outlook.com (2603:10b6:5:2a1::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 17:45:05 +0000
Received: from DM6NAM11FT110.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1e0:cafe::2e) by DM6PR08CA0042.outlook.office365.com
 (2603:10b6:5:1e0::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:05 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT110.mail.protection.outlook.com (10.13.173.205) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 17:45:04 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:04 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 09:45:03 -0800
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:02 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae7e8220-968e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XNV7lckrRBEYlR5aLFTr4/LImzdTeNz4EfdEtnQsqPkfApJdvY2ZJw798B2HUt4fR0NlHI3Lhax+EfPTRpnOZLRpyNaUxQ3FfaEOSXm/NpYugV45/hFXcl4OJrl3ex1omtko75/6MczffwyJpHGS29eYX/yHT33C6ECnHczTCJNiAfLKOcKutBeNL0X99LHZFYK8gDXCD9X+BC8sUge1UTIR9x5DgJvZXPPywu08PDg3+QSVtwMdKpE4Bl5OYFlksnVFWVXe9EqUlI2CesKPNS8sbdaQ/+l8Z7jLMWpmpcEu6TcWzMCprwJ+qOu52aVacGLas52uazfdXa2F5jFVFA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PB+0PNaaVLEQ5pWTn1e1aJdTj00MY9jH79kRwiJAuFg=;
 b=Dv6PvvEPGzxz+8RnbGax5C0Gxr8LFxzMh+x/uLqWy9Ue2VsZG4ftXaCkSW3Uz/li7flZhSPMXAGZnXxGI3wtKM/B4f0xaUNL8xOcsnSmJn/Uie9/GWeihIFt8shpnpe+PwW72AClWN4H34KTEuACePrgjk5mB1ewXzTKNDzUVkj/nAXJcRAfM0smM6HcTTSEcgg59P7YYh/yQDNtzCiVSK4US7kjVz/s3cmW8zVh7bvvCwL9Zjlk9SJ21p6kEulzs/Oy+tGAHcC3WcRYs0u6exFxQ2GzYsNoCf8QXdVrtNJWJlLY6toRO1Jr3UpSSJ4p0xDBEF97Qb6hMH7aKkvKfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PB+0PNaaVLEQ5pWTn1e1aJdTj00MY9jH79kRwiJAuFg=;
 b=DpmElJENMpCGMdyaVYQU52VW/Fu5IPSN0NNAgCbNZydsLQ6gceuFwtdTMIHiq8SPVybv7TMUdj3KYk1a4qlgM5vQP7SEA4H0i/KVx6v506OWNMd+HXekODzsweRsneQXKP/F7ltJnLDgl2l+HyFw2CwM2KmsF0wB7V2W2QBsMrU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 01/11] xen/ns16550: Remove unneeded truncation check in the DT init code
Date: Tue, 17 Jan 2023 17:43:48 +0000
Message-ID: <20230117174358.15344-2-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT110:EE_|DM6PR12MB4548:EE_
X-MS-Office365-Filtering-Correlation-Id: ef8f7b73-a161-45c4-ea0d-08daf8b2911b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	F7hWv/vho/RRk/vtgb5mfAOPpDbXN3M+/UnYIJ6AZgnhJx0cT1Bm4gIzHZRmqxPztiDUtbCf48PauaK0I6GpTb4TrJE8pdm8GovL0EPHCKLiwT9KzDRwjawIgmHinAwzSfRMOg5ZVO2ifIq0RVNJ9RQR27cRq8/yBig2Ujugd73b8VTmNmverjAiR4/0pbnflKy7ZnzXXSzQd+TOkrCPERbds5yRaonEj6SZS+wa6IpcMpVdxbzbeef4qibwk3b92rWxsBnAarUC37q/mq+YeRC+++HnWQYYjemJNP+LNDFgYRZkESluCB9fc4csA4WKBftts0sNYytF7TeUQ5uKjP8+EKODhGQl6f5f07SzOKyclBVMm0tpZfktStDE62rftHda68deQ7J8lriinLfP9NRnWU3I3x25ust/ungmUUj3sYjZHC1XsPiQv4HPVNXzkSKV2AteaDPcrVhmv9Yx1MDzzMEPAOWKGg8AORno6nzU13/LzS1fJnPPFh0WQ8lIRyK6zVDGwIYFmlkAxEAYO9lUfOPaC5Aa6bxBnJoXXkT0HE5h3QsP/dwVK5dwWi71letTmLjcLfTFEIm+I5RmC9A1z7SF3uwZ4irmZaHxx36b7lZLoDaFe0XPeWbT2AFPDaq6pfArfXVUemgXzCujql68bjCN5E1u/990dZgrYMt+k5UC2KRL0EabLflkhdNi7E+equbJBoj2D7IFIWWAhh5CeyQUVwZ7hwLbcm2s0f0=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(39860400002)(136003)(376002)(346002)(451199015)(46966006)(36840700001)(40470700004)(6666004)(6916009)(70206006)(336012)(478600001)(186003)(83380400001)(26005)(4326008)(8936002)(47076005)(41300700001)(5660300002)(2906002)(36860700001)(426003)(8676002)(36756003)(82740400003)(70586007)(82310400005)(356005)(40480700001)(103116003)(81166007)(316002)(1076003)(86362001)(2616005)(54906003)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:04.7958
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ef8f7b73-a161-45c4-ea0d-08daf8b2911b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT110.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4548

In an earlier commit (7c1de0038895), "ns16550.io_size" was u32 while the
local "io_size" was u64, so the ASSERT() was needed to check that the two
values were the same.
However, a later commit (c9f8e0aee507) changed "ns16550.io_size" to u64,
making the ASSERT() redundant.

Now that "io_size" and "uart->io_size" are both u64, no truncation can
occur, so the ASSERT() and the extra assignment can be removed.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---

Changes from v1:

1. Updated the commit message/title.
2. Added the Reviewed-by tag.

 xen/drivers/char/ns16550.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 01a05c9aa8..58d0ccd889 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -1747,7 +1747,6 @@ static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
     struct ns16550 *uart;
     int res;
     u32 reg_shift, reg_width;
-    u64 io_size;
 
     uart = &ns16550_com[0];
 
@@ -1758,14 +1757,10 @@ static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
     uart->parity    = UART_PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &uart->io_base, &io_size);
+    res = dt_device_get_address(dev, 0, &uart->io_base, &uart->io_size);
     if ( res )
         return res;
 
-    uart->io_size = io_size;
-
-    ASSERT(uart->io_size == io_size); /* Detect truncation */
-
     res = dt_property_read_u32(dev, "reg-shift", &reg_shift);
     if ( !res )
         uart->reg_shift = 0;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479662.743654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1J-00017h-OH; Tue, 17 Jan 2023 17:45:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479662.743654; Tue, 17 Jan 2023 17:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1J-00017Y-LM; Tue, 17 Jan 2023 17:45:13 +0000
Received: by outflank-mailman (input) for mailman id 479662;
 Tue, 17 Jan 2023 17:45:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1I-0000oY-VJ
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:13 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2046.outbound.protection.outlook.com [40.107.92.46])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id af689fb7-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:10 +0100 (CET)
Received: from DS7PR05CA0058.namprd05.prod.outlook.com (2603:10b6:8:2f::11) by
 SJ0PR12MB6711.namprd12.prod.outlook.com (2603:10b6:a03:44d::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Tue, 17 Jan
 2023 17:45:07 +0000
Received: from DM6NAM11FT099.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:2f:cafe::c7) by DS7PR05CA0058.outlook.office365.com
 (2603:10b6:8:2f::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:07 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT099.mail.protection.outlook.com (10.13.172.241) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 17:45:07 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:06 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 09:45:06 -0800
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:05 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af689fb7-968e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=msR6cRL4h1bFfKdgntKcrszuQh863yvkpmzGhZUrDe1RXynjyh67qZB3T58+8RrLh9nqNhAKh6aOdz0BEMz78Aq7lurdI/X+2kvPx9C9R5r1jwCiJwV+8qH4h4iMsZ2cGrdIbx1QH7zUopQdje5Rla4R7c3LxCd/20+YvJeP6oLjP5dP4FGBGhAKKnNEEQ8wN+fG0LB6J7bzF6xaiVvY0K1KiXjMDH+2iCkeHejCCLGX0yZSQA+FbhoQf/XAEiIMRFtv38eDGlbNmIApY0Uq/YJMvW0apLbwQpqtKfI4KjBcJA8T2HKC3X2JHF8GSDXY01vu7LzXTLXmpVUlZ4MQzg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dzvsf6MwjhjMY17ND2Wzg5NfLYRh8AamJF6PJtvANM0=;
 b=nyLgSXATDdFGK/oyc7tTMtfzPHixFAlMAZM6PjA2mV5i32ZujwX+gnyxXAzr/Ww5QNC5I9KBR7X4oFNZDkozoaS5kivrzykqu2RsoXO1EJCA/Z30noxP9qMJjxqXpZFfGD8zpzE0jenYaiYzM69a0U0JyHWEAUUpABXhxGfb2d6ZGcgVVFv5SVYTyhG7v+wcsUF/pnTOg+KLgY7F6yq4eoewF2kLce9PbzqNPC3Ldl9lwvAZPJWPXncepyv0bZPhdMLZsdEomDuEkoQrGOQ61MZ/tkg6qkSun810T8dbNraqQuQ5+bUNAde8OtlZD/v3zT58iGokmzK3KQojsBihMA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dzvsf6MwjhjMY17ND2Wzg5NfLYRh8AamJF6PJtvANM0=;
 b=2lCzsY9xnBEGw1WjAsBcM5l7fX/qh+egPN9vN9N2qahAlCmMDC257nrdL2YxrTaX8vKFTRLiGHEsxFylbwLSNXpgpMxJuyuzVt89Pz8lK+KLcBorRpFD528+knKa0BQy4pYWbpbJ5d6wijnO+r+JiDOFgsyzqCwZV1+4xtDYkeI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 02/11] xen/arm: Use the correct format specifier
Date: Tue, 17 Jan 2023 17:43:49 +0000
Message-ID: <20230117174358.15344-3-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT099:EE_|SJ0PR12MB6711:EE_
X-MS-Office365-Filtering-Correlation-Id: caf5fa31-0dad-4921-9d95-08daf8b29277
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	83tzEEaKGpH2lT3YXC7K1VuXRUPey+l2KjDutukA8VUAT03mS+dOpaZy/7DF7wU7VF5A9/ohujfc1kaXKZC+He0XpqF4uR3OtAUCMcUfG/FIe0nRCWYKmug+kdPZiJ/wRlOocQ71soMlZe9yxWpUKF+Eh1zgkvmPgpfL4oR0CwFYL/ESFNr4z70K2ctAs13baB8F+J6Gh0255glhhzbDKnx+BLNSuQasrQCpqbEVfVmYJwp9QeYbpm0FAWE4z2YDtcuGSsKSoBJm7Obc062ZAm5aUv+ypwGr1pNfn6ICBQXQ5q/9DRR2KR7LoQ++EQ/2nTzO5HyYgxS+n6XahINrCoCrV09dtEZChSBFAKLhSRbAjqCmMtNU0Txrr0hFh32Xqdpv5sl1Rg/0LbE1gmuD/j4wI9+AwkZ8H8uFXbYWiCTqmOXkR/nCYHxUcKGu1nJY4B5EmZZtYWraP6bvNAgR5nHVHlyTkecBrhPSjtzeTQlhseIpTPUz5dR+OgPHAs3aHaqPaDuXaAi7ikkHtRwQ6ce2cTEQQKQvUUTs5efYoAdn1Bm5leExgOxm+JwWl6EtaHMg2L6IQLAxRV4qmc1ALRCBD7jE5hPM2SkxXbWior9AFr9Qglp3P9MaNcxlYQXXNxaeaXwnIQDFXGw/GJkAStxD23iMNaCLi8oAqtf5kyv4Fsn/7GQWIOdBpgoO/4yWqRfxu8RtGxVZqTAULNoku8bTVx4ArYb4xaE9G7fOH+g=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(136003)(39860400002)(396003)(346002)(451199015)(40470700004)(36840700001)(46966006)(5660300002)(36756003)(40460700003)(41300700001)(36860700001)(8936002)(103116003)(26005)(186003)(82740400003)(478600001)(336012)(83380400001)(2616005)(1076003)(47076005)(82310400005)(6666004)(54906003)(426003)(316002)(356005)(4326008)(86362001)(8676002)(70206006)(6916009)(81166007)(70586007)(40480700001)(2906002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:07.0745
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: caf5fa31-0dad-4921-9d95-08daf8b29277
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT099.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR12MB6711

1. One should use 'PRIpaddr' to display 'paddr_t' variables.
2. One should use 'PRIx64' to display 'u64' variables in hex format. The
current use of 'PRIpaddr' for printing a PTE is buggy, as a PTE is not a
physical address.
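As a reminder of the pattern (PRIpaddr is Xen's own macro; PRIx64 comes from <inttypes.h>), a standalone sketch with a stand-in definition of paddr_t:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Stand-in: in this series paddr_t may become 32-bit, so its format
 * macro must track the type, as Xen's PRIpaddr does. */
typedef uint64_t paddr_t;
#define PRIpaddr PRIx64

/* Format a device-tree node name from a physical base address,
 * mirroring the snprintf() calls in domain_build.c. */
static int format_addr(char *buf, size_t len, paddr_t base)
{
    return snprintf(buf, len, "memory@%"PRIpaddr, base);
}
```

Using the format macro tied to the type keeps the printk()/snprintf() call sites correct regardless of whether paddr_t is 32-bit or 64-bit.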

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from v1:

1. Moved the patch earlier in the series.
2. Moved a part of the change from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr"
into this patch.

 xen/arch/arm/domain_build.c | 10 +++++-----
 xen/arch/arm/gic-v2.c       |  6 +++---
 xen/arch/arm/mm.c           |  2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 829cea8de8..33a5945a2d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1315,7 +1315,7 @@ static int __init make_memory_node(const struct domain *d,
     dt_dprintk("Create memory node\n");
 
     /* ePAPR 3.4 */
-    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
+    snprintf(buf, sizeof(buf), "memory@%"PRIpaddr, mem->bank[i].start);
     res = fdt_begin_node(fdt, buf);
     if ( res )
         return res;
@@ -1383,7 +1383,7 @@ static int __init make_shm_memory_node(const struct domain *d,
         __be32 *cells;
         unsigned int len = (addrcells + sizecells) * sizeof(__be32);
 
-        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start);
+        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIpaddr, mem->bank[i].start);
         res = fdt_begin_node(fdt, buf);
         if ( res )
             return res;
@@ -2719,7 +2719,7 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
     /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
     char buf[38];
 
-    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
+    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIpaddr,
              vgic_dist_base(&d->arch.vgic));
     res = fdt_begin_node(fdt, buf);
     if ( res )
@@ -2775,7 +2775,7 @@ static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
     char buf[38];
     unsigned int i, len = 0;
 
-    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
+    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIpaddr,
              vgic_dist_base(&d->arch.vgic));
 
     res = fdt_begin_node(fdt, buf);
@@ -2861,7 +2861,7 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
     /* Placeholder for sbsa-uart@ + a 64-bit number + \0 */
     char buf[27];
 
-    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIx64, d->arch.vpl011.base_addr);
+    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIpaddr, d->arch.vpl011.base_addr);
     res = fdt_begin_node(fdt, buf);
     if ( res )
         return res;
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 61802839cb..5d4d298b86 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1049,7 +1049,7 @@ static void __init gicv2_dt_init(void)
     if ( csize < SZ_8K )
     {
         printk(XENLOG_WARNING "GICv2: WARNING: "
-               "The GICC size is too small: %#"PRIx64" expected %#x\n",
+               "The GICC size is too small: %#"PRIpaddr" expected %#x\n",
                csize, SZ_8K);
         if ( platform_has_quirk(PLATFORM_QUIRK_GIC_64K_STRIDE) )
         {
@@ -1280,11 +1280,11 @@ static int __init gicv2_init(void)
         gicv2.map_cbase += aliased_offset;
 
         printk(XENLOG_WARNING
-               "GICv2: Adjusting CPU interface base to %#"PRIx64"\n",
+               "GICv2: Adjusting CPU interface base to %#"PRIpaddr"\n",
                cbase + aliased_offset);
     } else if ( csize == SZ_128K )
         printk(XENLOG_WARNING
-               "GICv2: GICC size=%#"PRIx64" but not aliased\n",
+               "GICv2: GICC size=%#"PRIpaddr" but not aliased\n",
                csize);
 
     gicv2.map_hbase = ioremap_nocache(hbase, PAGE_SIZE);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0fc6f2992d..fab54618ab 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -249,7 +249,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
 
         pte = mapping[offsets[level]];
 
-        printk("%s[0x%03x] = 0x%"PRIpaddr"\n",
+        printk("%s[0x%03x] = 0x%"PRIx64"\n",
                level_strs[level], offsets[level], pte.bits);
 
         if ( level == 3 || !pte.walk.valid || !pte.walk.table )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479663.743666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1M-0001Og-0d; Tue, 17 Jan 2023 17:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479663.743666; Tue, 17 Jan 2023 17:45:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1L-0001OV-TK; Tue, 17 Jan 2023 17:45:15 +0000
Received: by outflank-mailman (input) for mailman id 479663;
 Tue, 17 Jan 2023 17:45:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1K-0000oY-Ah
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:14 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2075.outbound.protection.outlook.com [40.107.93.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b090a849-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:12 +0100 (CET)
Received: from DM6PR07CA0119.namprd07.prod.outlook.com (2603:10b6:5:330::11)
 by PH7PR12MB6489.namprd12.prod.outlook.com (2603:10b6:510:1f7::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:45:09 +0000
Received: from DM6NAM11FT091.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:330:cafe::c1) by DM6PR07CA0119.outlook.office365.com
 (2603:10b6:5:330::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT091.mail.protection.outlook.com (10.13.173.108) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 17:45:08 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:08 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:08 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:07 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b090a849-968e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TeXLJf6vgehwmr/KZXHuGB86TKUVHfYF1pgrIowxLBMtv/Q6CVJnPxOWg4dmUV8qeV5JKHAw80r6uObgh5JhYuHWYFqME+oRRRmADacbCd9HaTTRyi0SUVyYt7H1com1pJ70KT0LQ6hNeovsyRsfz++Q26U/8eRXBmmjxtwZEn3TxTSCy+n0n+DBKC8UK2BoSvQj/A/kLbpaBzwaBDkOGRDRHEkQXqPMMF72hDcZb8jVVJQWt9Vyyr2yisDdBJkF+ilUGHU/F6s6zJA9F2LmZ5VWs7AVN8VtlXUxxfXkOrAm4paRCFXKnAXQi3wADOF++zEmvoRib9yHJJHAioVExA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pYKh2ZpT1JIDy592j7PXcRkkm4hFksMRdjO4SCJ7TpY=;
 b=LuM9BJB00qPGfqMD5o66N6VNQ/i4EXtjDEJUoT8xmITbSTI80b8QE0U2EWzSq9SLFDbi4c5lRL3JkFVxQYf0Z9f1k7QphS35W4r70zx+l35nrb/Ou1F8qzE9IxhuVcgie8kVqcaFIXEVPumCllh+J+k2BeAhH0fgK2bN4RVVwXcLhwpjqTAsr/pPkS1Ff7tSCjpCXGECzoB+JwCpU359hiom74hCV11Ygljwv0YbU94d5DWUQeIU1ED8HyYBy/6uKrd7UzDmVVNZ5dgcxYU8i2w7BdkPRM+/Lzi7N3Ni3gSsKXbwAesCMh7WBCEbiB+ZBQXQrKhYP7zD/ppOCbh4HA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pYKh2ZpT1JIDy592j7PXcRkkm4hFksMRdjO4SCJ7TpY=;
 b=IPGdLr2g8jW7hUYBMMSzi68G7E1sm1cRH4WIX6n5xirqovHGq4CNGLgIIfpCkOkIBn3my1SWe5gXceoAUhUAR1l/U1VTwYp+GmkecTSCbqWkKjYkCf+0UpxHMUeUuOtxnfJyECr34LTW8ZkYxlveoloaGfccilE4cNQz+wGRzB4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 03/11] xen/arm: domain_build: Replace use of paddr_t in find_domU_holes()
Date: Tue, 17 Jan 2023 17:43:50 +0000
Message-ID: <20230117174358.15344-4-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT091:EE_|PH7PR12MB6489:EE_
X-MS-Office365-Filtering-Correlation-Id: 4d411da9-0977-475b-90d2-08daf8b29389
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Y3uxipXyMKjzf9S1QGc5lmqAqydcX/EG9tXVsdWlVSj8n8Fss0YQitwAWcokpZayS+7eb+gxzbC8bGC9GRdGSBkqDUZLC5KZjqaoM/+2ixXw5AjXxRCs6ntbvj+gJYusSXjM4DRU/0KRJpbKeLz+C+W17WYpXWio5vgmnnioDATPP6IeGp99aJOeXph2HvoTEvu+or/NjngBt6LQRGGd7QE2IMfgzKn2eXhTvYSpTQr9ZNUVCGFk3ElZTwikMdkQdgmHv3fEDsoYPR7C6zeVMYGqNArzhSsyhbyI3B5/cXNhFYHvT9bWXWxpug0VVxzSBrRSGjUJEtyMq1KDs1U53n4BEeXjo403ciGReqBuWQQJ8S99Z16gi0I+dHtpuKfgCN5j2ukny3tPedOiSSHtCQDQMPW18amCArI85C5xmFgE0XveqPBYEFJstxQnaAnecg3LhWzc6HVs7DHXGE6sQBu9fj8f0UH/ZEdv1morPaLa5+FuL2iNiOVmYqdHsPAdM5SRcdW9ZpVQHdgNhCVtOR0y3nypGZvbTpoxO1MyormAx8KdsmZ3y3J+8WWTMIvMMjXGeVIQfp4y0KaRTGwuGJTpaTKHojiBWdqd7cyLcW2CEB4lafKebrzCEm7RW1qetVtibVcDuMbGyI4HL7WSOH7G8AwO3c88I/47vx9bQjDfnzC8yt3uTYx3JB9uC21y7atlmqFGwTMMB2E7Ot1a4UP1gPyCLtrnzFdDQs1HafI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(136003)(396003)(346002)(39860400002)(451199015)(36840700001)(46966006)(40470700004)(8936002)(83380400001)(36756003)(41300700001)(2616005)(70206006)(70586007)(6916009)(4326008)(47076005)(426003)(8676002)(336012)(54906003)(316002)(5660300002)(81166007)(356005)(82740400003)(82310400005)(86362001)(1076003)(2906002)(40460700003)(103116003)(36860700001)(40480700001)(478600001)(186003)(26005)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:08.8698
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d411da9-0977-475b-90d2-08daf8b29389
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT091.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6489

bankbase, banksize and bankend hold values of type 'unsigned long
long', which can be represented as 'uint64_t' instead of 'paddr_t'.
This keeps find_domU_holes() consistent with allocate_static_memory()
(where 'uint64_t' is used for rambase and ramsize).

In future, paddr_t may be defined as 'uint32_t' as well, to represent
32-bit physical addresses.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from v1:

1. Modified the title/commit message from "[XEN v1 6/9] xen/arm: Use 'u64' to represent 'unsigned long long'"
and moved the patch earlier in the series.

 xen/arch/arm/domain_build.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 33a5945a2d..f904f12408 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1726,9 +1726,9 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
                                   struct meminfo *ext_regions)
 {
     unsigned int i;
-    paddr_t bankend;
-    const paddr_t bankbase[] = GUEST_RAM_BANK_BASES;
-    const paddr_t banksize[] = GUEST_RAM_BANK_SIZES;
+    uint64_t bankend;
+    const uint64_t bankbase[] = GUEST_RAM_BANK_BASES;
+    const uint64_t banksize[] = GUEST_RAM_BANK_SIZES;
     int res = -ENOENT;
 
     for ( i = 0; i < GUEST_RAM_BANKS; i++ )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479665.743677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1P-0001ig-Bh; Tue, 17 Jan 2023 17:45:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479665.743677; Tue, 17 Jan 2023 17:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1P-0001iX-8n; Tue, 17 Jan 2023 17:45:19 +0000
Received: by outflank-mailman (input) for mailman id 479665;
 Tue, 17 Jan 2023 17:45:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1N-0000oY-VH
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:18 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2068.outbound.protection.outlook.com [40.107.93.68])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b26ea08b-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:15 +0100 (CET)
Received: from BN9P222CA0007.NAMP222.PROD.OUTLOOK.COM (2603:10b6:408:10c::12)
 by DS0PR12MB7748.namprd12.prod.outlook.com (2603:10b6:8:130::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Tue, 17 Jan
 2023 17:45:11 +0000
Received: from BN8NAM11FT040.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10c:cafe::37) by BN9P222CA0007.outlook.office365.com
 (2603:10b6:408:10c::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:11 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT040.mail.protection.outlook.com (10.13.177.166) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 17:45:10 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:10 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:10 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:09 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b26ea08b-968e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GXCny/wSrrhscW19374vRe2281HDS0fgBasSUpoii5rIXRDlcOQgQmINLmzxGNTcvZNogd+jsaihG22k37zLpn2JSQo6Jvby6CWPCJ1aJbzuc9Z+ht09Q4MhhgCPcGnzDj3tt6T1Dy1IoEEAhVGSuAB10dd3F4ZQHlPUa5r3mXLoXunZIX8kFwWa1v1F+I3xbplTGwnJw6EhE6mw9beZeXBuj+hC+OJZze8hpVxtYv/6VzFq/p6Dp6cofo1ZwmJnFPwPhDkfocis0THKPEyycUXELnL3uXDc/HKCb4WEzYkjkuXvsbRL5wbZv4qNjEgv1duv0D/c4KIhziajnYOXVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vkQVewwIqXR9Op0ttYXasbtHBsayU2os3eE5hDirBvk=;
 b=FQoEYkNw6yYTn7jYm4rVnCNRg7gWuQdZy85JnySDVeruMPmgg79kJu9+I90PXeGIkjxYlVZ8EF+jZAqt5dyLgiRbzYq/bCzvstZg4nASxM2uK1Zk+OGjv5r5unceKXbCMGAYPObL8cCLpGNtvtbrSVeEvuWgArP9jI6GKuYKbZ+QWN66dfYYmYXEbSsWobuptfQarGjb9yi4wgcfKCrigtC/pO2/VW3LCltCtNTaYNfavOEPw9wrIiYyKrPyOgKf0gZsHatEKuHhYwdqerJrX0xECTHmpwYQlpsUQvakUAA/atYasmckBmtP0JYIvZZfFHJLBzW+4TRWgx8K1s4cqQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vkQVewwIqXR9Op0ttYXasbtHBsayU2os3eE5hDirBvk=;
 b=N+749NBDsQndCdGQprdG18U2/zH7ULvaxiouy8Hv1GWcU30Y2v53MOMC3mQKY8mTZf2QjWiHN0ugmyV+ERXaa391oezfW2S7ETee80u1eu8Hx7XOAETOStJj4VEhzOZmOkD4sH0nGKwBSL2bbWWmgbu6yvde6HwZBxki16Zh3/c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 04/11] xen/arm: Typecast the DT values into paddr_t
Date: Tue, 17 Jan 2023 17:43:51 +0000
Message-ID: <20230117174358.15344-5-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT040:EE_|DS0PR12MB7748:EE_
X-MS-Office365-Filtering-Correlation-Id: 08810278-fe2d-4d21-a9ab-08daf8b294bb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MVFlhh3hGOr3WfjuYxrYsEIcglild1rl/1gZIfaWf6OXo3zO6ozVTvoKvpDtc5ZGg4wsRcZA2+nSSsiJlRh5bdz72y8Pwp5XO2XxVVRvajNDbopwmzTKCdOgDm3cvB2noL3WT6OFvYjnGUN/cdM45EbzTpFvQN4VcfzV6lgL4DHupaZfu3rS+O4t/oCdM5ZmYZ0iW2IctRZhg8c5X1FDv1I26MJvnzlzsg9rQ7ps3W9ecP+tVovRBFfGR2gEeMutvU0AWPqCZ0CbfkYlJ/1mNrgB0cf+tobqz0gXf2aRyTcLy4Fs5kx6jtnF2iPhrLHlHdjrGSXyYOqoUjs63BM/ZAS+jXiF4PSg21laIUEfaSA2U4Z6neRilkmKgwyVCG+SShOrxywqSHyku+RXtLhqzlVk6OsOMITRXDGhXYxExoXaYsIIq2I7V/vukoPmZ6ZJjg83nlC6Oca+WxrGs3k/q5ZX/GhPyAIxo3PCFOD7o9/ikroezzt3ML4qczmwq510Pm3LOYuEFjbvM7edJvgKtjcjdSHgRqHWmuDHcyeXd9FEOalxIt2KySw7WxN8yfGyQvfnR35/OmSvu9Gc63PMTOpcc3S9hdKB3+bx+Zz/LLAmaBnC4GitzQkfuh8E4xhhhYUBcK9pfxH7lyw7VMpmtfJa+0PCxdoePW3ii+v9BnE2Z4Z9Fv1izVmkrI/1+ZKxiFSpJeCWoApF6Mydf5NHHomgnuviXMYlfW+JWm2i6i0=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(346002)(136003)(396003)(376002)(451199015)(40470700004)(46966006)(36840700001)(36860700001)(47076005)(426003)(83380400001)(82740400003)(86362001)(356005)(40480700001)(2906002)(82310400005)(5660300002)(8936002)(40460700003)(316002)(8676002)(6916009)(478600001)(6666004)(26005)(1076003)(186003)(70206006)(2616005)(336012)(4326008)(70586007)(81166007)(54906003)(41300700001)(103116003)(36756003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:10.9227
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 08810278-fe2d-4d21-a9ab-08daf8b294bb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT040.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB7748

In the future, we wish to support 32-bit physical addresses.
However, one can only read u64 values from the DT. Thus, we need to
cast the values appropriately from u64 to paddr_t.

device_tree_get_reg() now returns paddr_t values. It is invoked by
various callers to obtain an address and size from the DT.
Similarly, dt_read_number() is invoked to read DT addresses and sizes;
its return value is cast to paddr_t.
fdt_get_mem_rsv() only accepts u64 pointers. So, we provide a wrapper
for it called fdt_get_mem_rsv_paddr(), which performs the necessary
casting before invoking fdt_get_mem_rsv() and when returning.
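As a rough standalone illustration of the cast-only wrapper pattern described above (not Xen code: paddr_t is forced to u32 here to mimic a 32-bit build, and get_mem_rsv_u64() is a hypothetical stand-in for fdt_get_mem_rsv()):

```c
#include <assert.h>
#include <stdint.h>

/* Mimic a 32-bit build: paddr_t is narrower than the u64 read from the DT. */
typedef uint32_t paddr_t;

/* Hypothetical stand-in for fdt_get_mem_rsv(): DT values are always u64. */
static int get_mem_rsv_u64(uint64_t *address, uint64_t *size)
{
    *address = 0x48000000ULL;
    *size    = 0x00200000ULL;
    return 0;
}

/*
 * The wrapper pattern: read u64 values, then narrow them to paddr_t.
 * As in the patch, no truncation check is done; the DT is trusted to
 * contain values that fit the platform's physical address width.
 */
static int get_mem_rsv_paddr(paddr_t *address, paddr_t *size)
{
    uint64_t dt_addr, dt_size;
    int ret = get_mem_rsv_u64(&dt_addr, &dt_size);

    *address = (paddr_t)dt_addr;
    *size = (paddr_t)dt_size;

    return ret;
}
```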

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from v1:

1. Dropped "[XEN v1 2/9] xen/arm: Define translate_dt_address_size() for the translation between u64 and paddr_t" and
"[XEN v1 4/9] xen/arm: Use translate_dt_address_size() to translate between device tree addr/size and paddr_t";
this approach achieves the same purpose instead.

2. There is no need to check for truncation while converting values from u64 to paddr_t.

 xen/arch/arm/bootfdt.c                 | 23 +++++++++------
 xen/arch/arm/domain_build.c            |  2 +-
 xen/arch/arm/include/asm/device_tree.h | 40 ++++++++++++++++++++++++++
 xen/arch/arm/include/asm/setup.h       |  2 +-
 xen/arch/arm/setup.c                   | 14 ++++-----
 xen/arch/arm/smpboot.c                 |  2 +-
 6 files changed, 65 insertions(+), 18 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/device_tree.h

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 0085c28d74..f536a3f3ab 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -11,9 +11,9 @@
 #include <xen/efi.h>
 #include <xen/device_tree.h>
 #include <xen/lib.h>
-#include <xen/libfdt/libfdt.h>
 #include <xen/sort.h>
 #include <xsm/xsm.h>
+#include <asm/device_tree.h>
 #include <asm/setup.h>
 
 static bool __init device_tree_node_matches(const void *fdt, int node,
@@ -53,10 +53,15 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
 }
 
 void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
-                                u32 size_cells, u64 *start, u64 *size)
+                                u32 size_cells, paddr_t *start, paddr_t *size)
 {
-    *start = dt_next_cell(address_cells, cell);
-    *size = dt_next_cell(size_cells, cell);
+    /*
+     * dt_next_cell() returns u64, whereas paddr_t may be u64 or u32. Thus, the
+     * value needs to be cast to paddr_t. Note that we do not check for truncation,
+     * as it is the user's responsibility to provide correct values in the DT.
+     */
+    *start = (paddr_t) dt_next_cell(address_cells, cell);
+    *size = (paddr_t) dt_next_cell(size_cells, cell);
 }
 
 static int __init device_tree_get_meminfo(const void *fdt, int node,
@@ -326,7 +331,7 @@ static int __init process_chosen_node(const void *fdt, int node,
         printk("linux,initrd-start property has invalid length %d\n", len);
         return -EINVAL;
     }
-    start = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
+    start = (paddr_t) dt_read_number((void *)&prop->data, dt_size_to_cells(len));
 
     prop = fdt_get_property(fdt, node, "linux,initrd-end", &len);
     if ( !prop )
@@ -339,7 +344,7 @@ static int __init process_chosen_node(const void *fdt, int node,
         printk("linux,initrd-end property has invalid length %d\n", len);
         return -EINVAL;
     }
-    end = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
+    end = (paddr_t) dt_read_number((void *)&prop->data, dt_size_to_cells(len));
 
     if ( start >= end )
     {
@@ -594,9 +599,11 @@ static void __init early_print_info(void)
     for ( i = 0; i < nr_rsvd; i++ )
     {
         paddr_t s, e;
-        if ( fdt_get_mem_rsv(device_tree_flattened, i, &s, &e) < 0 )
+
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &s, &e) < 0 )
             continue;
-        /* fdt_get_mem_rsv returns length */
+
+        /* fdt_get_mem_rsv_paddr returns length */
         e += s;
         printk(" RESVD[%u]: %"PRIpaddr" - %"PRIpaddr"\n", i, s, e);
     }
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f904f12408..72b9afbb4c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -949,7 +949,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         BUG_ON(!prop);
         cells = (const __be32 *)prop->value;
         device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_number(cells, size_cells);
+        psize = (paddr_t) dt_read_number(cells, size_cells);
         if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
         {
             printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
diff --git a/xen/arch/arm/include/asm/device_tree.h b/xen/arch/arm/include/asm/device_tree.h
new file mode 100644
index 0000000000..51e0f0ae20
--- /dev/null
+++ b/xen/arch/arm/include/asm/device_tree.h
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * xen/arch/arm/include/asm/device_tree.h
+ *
+ * Wrapper functions for device tree. This helps to convert dt values
+ * between u64 and paddr_t.
+ *
+ * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
+ */
+
+#ifndef __ARCH_ARM_DEVICE_TREE__
+#define __ARCH_ARM_DEVICE_TREE__
+
+#include <xen/libfdt/libfdt.h>
+
+static inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
+                                 paddr_t *address,
+                                 paddr_t *size)
+{
+    uint64_t dt_addr;
+    uint64_t dt_size;
+    int ret = 0;
+
+    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
+
+    *address = (paddr_t) dt_addr;
+    *size = (paddr_t) dt_size;
+
+    return ret;
+}
+
+#endif /* __ARCH_ARM_DEVICE_TREE__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2b..6105e5cae3 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -158,7 +158,7 @@ extern uint32_t hyp_traps_vector[];
 void init_traps(void);
 
 void device_tree_get_reg(const __be32 **cell, u32 address_cells,
-                         u32 size_cells, u64 *start, u64 *size);
+                         u32 size_cells, paddr_t *start, paddr_t *size);
 
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90..da13439e62 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -29,7 +29,6 @@
 #include <xen/virtual_region.h>
 #include <xen/vmap.h>
 #include <xen/trace.h>
-#include <xen/libfdt/libfdt.h>
 #include <xen/acpi.h>
 #include <xen/warning.h>
 #include <asm/alternative.h>
@@ -39,6 +38,7 @@
 #include <asm/gic.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
+#include <asm/device_tree.h>
 #include <asm/platform.h>
 #include <asm/procinfo.h>
 #include <asm/setup.h>
@@ -222,11 +222,11 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
     {
         paddr_t r_s, r_e;
 
-        if ( fdt_get_mem_rsv(device_tree_flattened, i, &r_s, &r_e ) < 0 )
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &r_s, &r_e ) < 0 )
             /* If we can't read it, pretend it doesn't exist... */
             continue;
 
-        r_e += r_s; /* fdt_get_mem_rsv returns length */
+        r_e += r_s; /* fdt_get_mem_rsv_paddr returns length */
 
         if ( s < r_e && r_s < e )
         {
@@ -502,13 +502,13 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
     {
         paddr_t mod_s, mod_e;
 
-        if ( fdt_get_mem_rsv(device_tree_flattened,
-                             i - mi->nr_mods,
-                             &mod_s, &mod_e ) < 0 )
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
+                                   i - mi->nr_mods,
+                                   &mod_s, &mod_e ) < 0 )
             /* If we can't read it, pretend it doesn't exist... */
             continue;
 
-        /* fdt_get_mem_rsv returns length */
+        /* fdt_get_mem_rsv_paddr returns length */
         mod_e += mod_s;
 
         if ( s < mod_e && mod_s < e )
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae22869..ee59b1d379 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -159,7 +159,7 @@ static void __init dt_smp_init_cpus(void)
             continue;
         }
 
-        addr = dt_read_number(prop, dt_n_addr_cells(cpu));
+        addr = (paddr_t) dt_read_number(prop, dt_n_addr_cells(cpu));
 
         hwid = addr;
         if ( hwid != addr )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479666.743688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1R-00020t-Jr; Tue, 17 Jan 2023 17:45:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479666.743688; Tue, 17 Jan 2023 17:45:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1R-00020h-GI; Tue, 17 Jan 2023 17:45:21 +0000
Received: by outflank-mailman (input) for mailman id 479666;
 Tue, 17 Jan 2023 17:45:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1P-0000oY-8T
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:19 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2045.outbound.protection.outlook.com [40.107.220.45])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b2eb7650-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:16 +0100 (CET)
Received: from BN9PR03CA0672.namprd03.prod.outlook.com (2603:10b6:408:10e::17)
 by PH0PR12MB8007.namprd12.prod.outlook.com (2603:10b6:510:28e::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:45:13 +0000
Received: from BN8NAM11FT056.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10e:cafe::d8) by BN9PR03CA0672.outlook.office365.com
 (2603:10b6:408:10e::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:12 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT056.mail.protection.outlook.com (10.13.177.26) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 17:45:12 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:12 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:12 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:11 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2eb7650-968e-11ed-b8d0-410ff93cb8f0
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size
Date: Tue, 17 Jan 2023 17:43:52 +0000
Message-ID: <20230117174358.15344-6-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT056:EE_|PH0PR12MB8007:EE_
X-MS-Office365-Filtering-Correlation-Id: f020d4bf-5c91-4f3f-1acc-08daf8b295d0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:12.7188
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f020d4bf-5c91-4f3f-1acc-08daf8b295d0
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT056.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB8007

One can now use 'paddr_t' to represent addresses and sizes.
Consequently, 'PRIpaddr' should be used as the format specifier for paddr_t.
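As a rough sketch of how a width-tracking format specifier works (a hypothetical mirror of the idea behind PRIpaddr, not Xen's actual definition):

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical mirror of the PRIpaddr idea: the macro tracks the width
 * of paddr_t, so one format string works for 32- and 64-bit builds.
 */
#ifdef USE_PADDR_32
typedef uint32_t paddr_t;
#define PRIpaddr "08" PRIx32
#else
typedef uint64_t paddr_t;
#define PRIpaddr "016" PRIx64
#endif

/* Format an address the way the patch prints it, e.g. "0x0000000048000000". */
static int format_paddr(char *buf, size_t len, paddr_t addr)
{
    return snprintf(buf, len, "0x%" PRIpaddr, addr);
}
```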

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from v1:

1. Rebased the patch.

 xen/arch/arm/domain_build.c        |  9 +++++----
 xen/arch/arm/gic-v3.c              |  2 +-
 xen/arch/arm/platforms/exynos5.c   | 26 +++++++++++++-------------
 xen/drivers/char/exynos4210-uart.c |  2 +-
 xen/drivers/char/ns16550.c         |  8 ++++----
 xen/drivers/char/omap-uart.c       |  2 +-
 xen/drivers/char/pl011.c           |  4 ++--
 xen/drivers/char/scif-uart.c       |  2 +-
 xen/drivers/passthrough/arm/smmu.c |  6 +++---
 9 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 72b9afbb4c..cf8ae37a14 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1666,7 +1666,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
     dt_for_each_device_node( dt_host, np )
     {
         unsigned int naddr;
-        u64 addr, size;
+        paddr_t addr, size;
 
         naddr = dt_number_of_address(np);
 
@@ -2445,7 +2445,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     unsigned int naddr;
     unsigned int i;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
     bool own_device = !dt_device_for_passthrough(dev);
     /*
      * We want to avoid mapping the MMIO in dom0 for the following cases:
@@ -2941,9 +2941,10 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
         if ( res )
         {
             printk(XENLOG_ERR "Unable to permit to dom%d access to"
-                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                   " 0x%"PRIpaddr" - 0x%"PRIpaddr"\n",
                    kinfo->d->domain_id,
-                   mstart & PAGE_MASK, PAGE_ALIGN(mstart + size) - 1);
+                   (paddr_t)(mstart & PAGE_MASK),
+                   (paddr_t)(PAGE_ALIGN(mstart + size) - 1));
             return res;
         }
 
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index bb59ea94cd..391dfa53d7 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1393,7 +1393,7 @@ static void __init gicv3_dt_init(void)
 
     for ( i = 0; i < gicv3.rdist_count; i++ )
     {
-        uint64_t rdist_base, rdist_size;
+        paddr_t rdist_base, rdist_size;
 
         res = dt_device_get_address(node, 1 + i, &rdist_base, &rdist_size);
         if ( res )
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index 6560507092..f79fad9957 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -42,8 +42,8 @@ static int exynos5_init_time(void)
     void __iomem *mct;
     int rc;
     struct dt_device_node *node;
-    u64 mct_base_addr;
-    u64 size;
+    paddr_t mct_base_addr;
+    paddr_t size;
 
     node = dt_find_compatible_node(NULL, NULL, "samsung,exynos4210-mct");
     if ( !node )
@@ -59,7 +59,7 @@ static int exynos5_init_time(void)
         return -ENXIO;
     }
 
-    dprintk(XENLOG_INFO, "mct_base_addr: %016llx size: %016llx\n",
+    dprintk(XENLOG_INFO, "mct_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
             mct_base_addr, size);
 
     mct = ioremap_nocache(mct_base_addr, size);
@@ -97,9 +97,9 @@ static int __init exynos5_smp_init(void)
     struct dt_device_node *node;
     void __iomem *sysram;
     char *compatible;
-    u64 sysram_addr;
-    u64 size;
-    u64 sysram_offset;
+    paddr_t sysram_addr;
+    paddr_t size;
+    paddr_t sysram_offset;
     int rc;
 
     node = dt_find_compatible_node(NULL, NULL, "samsung,secure-firmware");
@@ -131,7 +131,7 @@ static int __init exynos5_smp_init(void)
         dprintk(XENLOG_ERR, "Error in %s\n", compatible);
         return -ENXIO;
     }
-    dprintk(XENLOG_INFO, "sysram_addr: %016llx size: %016llx offset: %016llx\n",
+    dprintk(XENLOG_INFO, "sysram_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr" offset: 0x%"PRIpaddr"\n",
             sysram_addr, size, sysram_offset);
 
     sysram = ioremap_nocache(sysram_addr, size);
@@ -189,7 +189,7 @@ static int exynos5_cpu_power_up(void __iomem *power, int cpu)
     return 0;
 }
 
-static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
+static int exynos5_get_pmu_baseandsize(paddr_t *power_base_addr, paddr_t *size)
 {
     struct dt_device_node *node;
     int rc;
@@ -215,7 +215,7 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
         return -ENXIO;
     }
 
-    dprintk(XENLOG_DEBUG, "power_base_addr: %016llx size: %016llx\n",
+    dprintk(XENLOG_DEBUG, "power_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
             *power_base_addr, *size);
 
     return 0;
@@ -223,8 +223,8 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
 
 static int exynos5_cpu_up(int cpu)
 {
-    u64 power_base_addr;
-    u64 size;
+    paddr_t power_base_addr;
+    paddr_t size;
     void __iomem *power;
     int rc;
 
@@ -256,8 +256,8 @@ static int exynos5_cpu_up(int cpu)
 
 static void exynos5_reset(void)
 {
-    u64 power_base_addr;
-    u64 size;
+    paddr_t power_base_addr;
+    paddr_t size;
     void __iomem *pmu;
     int rc;
 
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 43aaf02e18..32cc8c78b5 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -303,7 +303,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     const char *config = data;
     struct exynos4210_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 58d0ccd889..8ef895a2bb 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -42,8 +42,8 @@
 
 static struct ns16550 {
     int baud, clock_hz, data_bits, parity, stop_bits, fifo_size, irq;
-    u64 io_base;   /* I/O port or memory-mapped I/O address. */
-    u64 io_size;
+    paddr_t io_base;   /* I/O port or memory-mapped I/O address. */
+    paddr_t io_size;
     int reg_shift; /* Bits to shift register offset by */
     int reg_width; /* Size of access to use, the registers
                     * themselves are still bytes */
@@ -1166,7 +1166,7 @@ static const struct ns16550_config __initconst uart_config[] =
 static int __init
 pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
 {
-    u64 orig_base = uart->io_base;
+    paddr_t orig_base = uart->io_base;
     unsigned int b, d, f, nextf, i;
 
     /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on bus 0. */
@@ -1259,7 +1259,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
                     else
                         size = len & PCI_BASE_ADDRESS_MEM_MASK;
 
-                    uart->io_base = ((u64)bar_64 << 32) |
-                                    (bar & PCI_BASE_ADDRESS_MEM_MASK);
+                    uart->io_base = (paddr_t)(((u64)bar_64 << 32) |
+                                              (bar & PCI_BASE_ADDRESS_MEM_MASK));
                 }
                 /* IO based */
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index d6a5d59aa2..3b53e1909a 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -324,7 +324,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     struct omap_uart *uart;
     u32 clkspec;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index be67242bc0..256ec11e3f 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -222,7 +222,7 @@ static struct uart_driver __read_mostly pl011_driver = {
     .vuart_info   = pl011_vuart,
 };
 
-static int __init pl011_uart_init(int irq, u64 addr, u64 size, bool sbsa)
+static int __init pl011_uart_init(int irq, paddr_t addr, paddr_t size, bool sbsa)
 {
     struct pl011 *uart;
 
@@ -258,7 +258,7 @@ static int __init pl011_dt_uart_init(struct dt_device_node *dev,
 {
     const char *config = data;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
     {
diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
index 2fccafe340..b425881d06 100644
--- a/xen/drivers/char/scif-uart.c
+++ b/xen/drivers/char/scif-uart.c
@@ -311,7 +311,7 @@ static int __init scif_uart_init(struct dt_device_node *dev,
     const char *config = data;
     struct scif_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b3..490d253d44 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -73,8 +73,8 @@
 /* Xen: Helpers to get device MMIO and IRQs */
 struct resource
 {
-	u64 addr;
-	u64 size;
+	paddr_t addr;
+	paddr_t size;
 	unsigned int type;
 };
 
@@ -169,7 +169,7 @@ static void __iomem *devm_ioremap_resource(struct device *dev,
 	ptr = ioremap_nocache(res->addr, res->size);
 	if (!ptr) {
 		dev_err(dev,
-			"ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
+			"ioremap failed (addr 0x%"PRIpaddr" size 0x%"PRIpaddr")\n",
 			res->addr, res->size);
 		return ERR_PTR(-ENOMEM);
 	}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479667.743699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1T-0002Il-2e; Tue, 17 Jan 2023 17:45:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479667.743699; Tue, 17 Jan 2023 17:45:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1S-0002IX-UJ; Tue, 17 Jan 2023 17:45:22 +0000
Received: by outflank-mailman (input) for mailman id 479667;
 Tue, 17 Jan 2023 17:45:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1R-0000oY-W5
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:22 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2040.outbound.protection.outlook.com [40.107.244.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b46846a6-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:18 +0100 (CET)
Received: from BL1PR13CA0277.namprd13.prod.outlook.com (2603:10b6:208:2bc::12)
 by SN7PR12MB7107.namprd12.prod.outlook.com (2603:10b6:806:2a2::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:45:15 +0000
Received: from BL02EPF000108EA.namprd05.prod.outlook.com
 (2603:10b6:208:2bc:cafe::fa) by BL1PR13CA0277.outlook.office365.com
 (2603:10b6:208:2bc::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:15 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF000108EA.mail.protection.outlook.com (10.167.241.203) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Tue, 17 Jan 2023 17:45:15 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:14 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 09:45:13 -0800
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:12 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b46846a6-968e-11ed-b8d0-410ff93cb8f0
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 06/11] xen/arm: Introduce a wrapper for dt_device_get_address() to handle paddr_t
Date: Tue, 17 Jan 2023 17:43:53 +0000
Message-ID: <20230117174358.15344-7-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000108EA:EE_|SN7PR12MB7107:EE_
X-MS-Office365-Filtering-Correlation-Id: 62c086d7-5878-4187-a977-08daf8b2975e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3ymqwr5OM+7RMJq9utx3jNzXuxw/yFww7g1GiF7wbr/JNxfMjcn/Jmrw7GnxhrvGHAoh0GCfG+upNToxQM18ZQMc6J//KylTvK4aGm4ZYlxKfBBlGkm3t9jEWfxMNr9fEA+61MShVUbUba8k+dZjU6M+Y4Ugn9gF0l5tc24ezMTcFPBQgasNqaK5spR/vj9/r0Yt51bkhpr9/rQSWVeDcXHzRXjRuNXOF0P36sWEzrWReTUD/NfgZ/i9u6XzOTuDTJXFig3ceKd3GD6F2Ymmpg3d+EIN1/kfrcDcXN3NPShBDg4+Wqgd6BccjTwRJvhixcBZBs73vbpNxUxqrpfYA944sOKC/9QFOgMccBMy6dx8gVaXxogKSBmO3vpM28epbVCHI7RLyqfEOHazYpnDHQ74Jjwc1llTeFMyUYoenc+XqoVv3+lj0RBzTJHOgS+3evoFgKu7ehtbWVGdH4BVX2XzKGu0fYa9SIMyeX7NkIjXoHmt4bm8ra2l5l93RmUmUMfOhol1c2roVPryTY1MA8yfNeUFpebDk3nh9/T0yYOZrApcRq1cGeauh7mKXcGUCTU8QW/gqQ2vdLMO7/ho0KE+l36rPrvQk+zt4GzaVAe4apKAlHGxQqsXAWOtSp+UACOvRmif5VHFz3rPE1ARHeLebEmUeFM3By+Rc32/M4qhkLBTHAdXnIV72a50btFTfms/SFngmGOgEUjpSKS/biIS/dLN4FjLbnKO99clJV4=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(396003)(136003)(346002)(39860400002)(451199015)(40470700004)(46966006)(36840700001)(36756003)(86362001)(356005)(8676002)(70206006)(2906002)(4326008)(70586007)(5660300002)(30864003)(6916009)(8936002)(82740400003)(36860700001)(81166007)(83380400001)(103116003)(478600001)(54906003)(316002)(6666004)(40460700003)(40480700001)(82310400005)(41300700001)(2616005)(1076003)(47076005)(26005)(186003)(336012)(426003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:15.3308
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 62c086d7-5878-4187-a977-08daf8b2975e
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000108EA.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7107

dt_device_get_address() accepts only u64 for the address and size, whereas
its various callers use the 'paddr_t' datatype for both. 'paddr_t' is
currently defined as u64, but u32 may be supported as well. Thus, we need
an appropriate wrapper to handle this type conversion.

The callers will now invoke dt_device_get_paddr(), which in turn invokes
dt_device_get_address() with a u64 address/size and then converts the u64
address/size to a paddr_t address/size.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from -

v1 - 1. New patch introduced.

 xen/arch/arm/domain_build.c            |  5 +++--
 xen/arch/arm/gic-v2.c                  | 11 ++++++-----
 xen/arch/arm/gic-v3.c                  |  9 +++++----
 xen/arch/arm/include/asm/device_tree.h | 19 +++++++++++++++++++
 xen/arch/arm/platforms/exynos5.c       |  7 ++++---
 xen/arch/arm/platforms/sunxi.c         |  3 ++-
 xen/drivers/char/exynos4210-uart.c     |  3 ++-
 xen/drivers/char/ns16550.c             |  3 ++-
 xen/drivers/char/omap-uart.c           |  3 ++-
 xen/drivers/char/pl011.c               |  3 ++-
 xen/drivers/char/scif-uart.c           |  3 ++-
 xen/drivers/passthrough/arm/smmu.c     |  3 ++-
 12 files changed, 51 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index cf8ae37a14..21199b624b 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -7,6 +7,7 @@
 #include <xen/domain_page.h>
 #include <xen/sched.h>
 #include <xen/sizes.h>
+#include <asm/device_tree.h>
 #include <asm/irq.h>
 #include <asm/regs.h>
 #include <xen/errno.h>
@@ -1672,7 +1673,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
 
         for ( i = 0; i < naddr; i++ )
         {
-            res = dt_device_get_address(np, i, &addr, &size);
+            res = dt_device_get_paddr(np, i, &addr, &size);
             if ( res )
             {
                 printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
@@ -2500,7 +2501,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     /* Give permission and map MMIOs */
     for ( i = 0; i < naddr; i++ )
     {
-        res = dt_device_get_address(dev, i, &addr, &size);
+        res = dt_device_get_paddr(dev, i, &addr, &size);
         if ( res )
         {
             printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 5d4d298b86..5230c4ebaf 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -24,6 +24,7 @@
 #include <xen/acpi.h>
 #include <acpi/actables.h>
 #include <asm/p2m.h>
+#include <asm/device_tree.h>
 #include <asm/domain.h>
 #include <asm/platform.h>
 #include <asm/device.h>
@@ -993,7 +994,7 @@ static void gicv2_extension_dt_init(const struct dt_device_node *node)
             continue;
 
         /* Get register frame resource from DT. */
-        if ( dt_device_get_address(v2m, 0, &addr, &size) )
+        if ( dt_device_get_paddr(v2m, 0, &addr, &size) )
             panic("GICv2: Cannot find a valid v2m frame address\n");
 
         /*
@@ -1018,19 +1019,19 @@ static void __init gicv2_dt_init(void)
     paddr_t vsize;
     const struct dt_device_node *node = gicv2_info.node;
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("GICv2: Cannot find a valid address for the distributor\n");
 
-    res = dt_device_get_address(node, 1, &cbase, &csize);
+    res = dt_device_get_paddr(node, 1, &cbase, &csize);
     if ( res )
         panic("GICv2: Cannot find a valid address for the CPU\n");
 
-    res = dt_device_get_address(node, 2, &hbase, NULL);
+    res = dt_device_get_paddr(node, 2, &hbase, NULL);
     if ( res )
         panic("GICv2: Cannot find a valid address for the hypervisor\n");
 
-    res = dt_device_get_address(node, 3, &vbase, &vsize);
+    res = dt_device_get_paddr(node, 3, &vbase, &vsize);
     if ( res )
         panic("GICv2: Cannot find a valid address for the virtual CPU\n");
 
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 391dfa53d7..58d2eb0690 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -29,6 +29,7 @@
 
 #include <asm/cpufeature.h>
 #include <asm/device.h>
+#include <asm/device_tree.h>
 #include <asm/gic.h>
 #include <asm/gic_v3_defs.h>
 #include <asm/gic_v3_its.h>
@@ -1377,7 +1378,7 @@ static void __init gicv3_dt_init(void)
     int res, i;
     const struct dt_device_node *node = gicv3_info.node;
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("GICv3: Cannot find a valid distributor address\n");
 
@@ -1395,7 +1396,7 @@ static void __init gicv3_dt_init(void)
     {
         paddr_t rdist_base, rdist_size;
 
-        res = dt_device_get_address(node, 1 + i, &rdist_base, &rdist_size);
+        res = dt_device_get_paddr(node, 1 + i, &rdist_base, &rdist_size);
         if ( res )
             panic("GICv3: No rdist base found for region %d\n", i);
 
@@ -1417,10 +1418,10 @@ static void __init gicv3_dt_init(void)
      * For GICv3 supporting GICv2, GICC and GICV base address will be
      * provided.
      */
-    res = dt_device_get_address(node, 1 + gicv3.rdist_count,
+    res = dt_device_get_paddr(node, 1 + gicv3.rdist_count,
                                 &cbase, &csize);
     if ( !res )
-        dt_device_get_address(node, 1 + gicv3.rdist_count + 2,
+        dt_device_get_paddr(node, 1 + gicv3.rdist_count + 2,
                               &vbase, &vsize);
 }
 
diff --git a/xen/arch/arm/include/asm/device_tree.h b/xen/arch/arm/include/asm/device_tree.h
index 51e0f0ae20..7f58f1f278 100644
--- a/xen/arch/arm/include/asm/device_tree.h
+++ b/xen/arch/arm/include/asm/device_tree.h
@@ -11,6 +11,7 @@
 #ifndef __ARCH_ARM_DEVICE_TREE__
 #define __ARCH_ARM_DEVICE_TREE__
 
+#include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
 
 inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
@@ -29,6 +30,24 @@ inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
     return ret;
 }
 
+inline int dt_device_get_paddr(const struct dt_device_node *dev,
+                               unsigned int index, paddr_t *addr,
+                               paddr_t *size)
+{
+    u64 dt_addr, dt_size;
+    int ret;
+
+    ret = dt_device_get_address(dev, index, &dt_addr, &dt_size);
+
+    if ( addr )
+        *addr = dt_addr;
+
+    if ( size )
+        *size = dt_size;
+
+    return ret;
+}
+
 #endif /* __ARCH_ARM_DEVICE_TREE__ */
 /*
  * Local variables:
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index f79fad9957..55b6ac1e7e 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -22,6 +22,7 @@
 #include <xen/mm.h>
 #include <xen/vmap.h>
 #include <xen/delay.h>
+#include <asm/device_tree.h>
 #include <asm/platforms/exynos5.h>
 #include <asm/platform.h>
 #include <asm/io.h>
@@ -52,7 +53,7 @@ static int exynos5_init_time(void)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, &mct_base_addr, &size);
+    rc = dt_device_get_paddr(node, 0, &mct_base_addr, &size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in \"samsung,exynos4210-mct\"\n");
@@ -125,7 +126,7 @@ static int __init exynos5_smp_init(void)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, &sysram_addr, &size);
+    rc = dt_device_get_paddr(node, 0, &sysram_addr, &size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in %s\n", compatible);
@@ -208,7 +209,7 @@ static int exynos5_get_pmu_baseandsize(paddr_t *power_base_addr, paddr_t *size)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, power_base_addr, size);
+    rc = dt_device_get_paddr(node, 0, power_base_addr, size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in \"samsung,exynos5XXX-pmu\"\n");
diff --git a/xen/arch/arm/platforms/sunxi.c b/xen/arch/arm/platforms/sunxi.c
index e8e4d88bef..ce47f97507 100644
--- a/xen/arch/arm/platforms/sunxi.c
+++ b/xen/arch/arm/platforms/sunxi.c
@@ -18,6 +18,7 @@
 
 #include <xen/mm.h>
 #include <xen/vmap.h>
+#include <asm/device_tree.h>
 #include <asm/platform.h>
 #include <asm/io.h>
 
@@ -50,7 +51,7 @@ static void __iomem *sunxi_map_watchdog(bool *new_wdt)
         return NULL;
     }
 
-    ret = dt_device_get_address(node, 0, &wdt_start, &wdt_len);
+    ret = dt_device_get_paddr(node, 0, &wdt_start, &wdt_len);
     if ( ret )
     {
         dprintk(XENLOG_ERR, "Cannot read watchdog register address\n");
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 32cc8c78b5..6d2008c44f 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -24,6 +24,7 @@
 #include <xen/irq.h>
 #include <xen/mm.h>
 #include <asm/device.h>
+#include <asm/device_tree.h>
 #include <asm/exynos4210-uart.h>
 #include <asm/io.h>
 
@@ -316,7 +317,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     uart->parity    = PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("exynos4210: Unable to retrieve the base"
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 8ef895a2bb..7226f3c2f7 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -35,6 +35,7 @@
 #include <asm/io.h>
 #ifdef CONFIG_HAS_DEVICE_TREE
 #include <asm/device.h>
+#include <asm/device_tree.h>
 #endif
 #ifdef CONFIG_X86
 #include <asm/fixmap.h>
@@ -1757,7 +1758,7 @@ static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
     uart->parity    = UART_PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &uart->io_base, &uart->io_size);
+    res = dt_device_get_paddr(dev, 0, &uart->io_base, &uart->io_size);
     if ( res )
         return res;
 
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index 3b53e1909a..06200bc9f1 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -15,6 +15,7 @@
 #include <xen/init.h>
 #include <xen/irq.h>
 #include <xen/device_tree.h>
+#include <asm/device_tree.h>
 #include <asm/device.h>
 #include <xen/errno.h>
 #include <xen/mm.h>
@@ -344,7 +345,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     uart->parity = UART_PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("omap-uart: Unable to retrieve the base"
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index 256ec11e3f..b4c1d9d592 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -26,6 +26,7 @@
 #include <asm/device.h>
 #include <xen/mm.h>
 #include <xen/vmap.h>
+#include <asm/device_tree.h>
 #include <asm/pl011-uart.h>
 #include <asm/io.h>
 
@@ -265,7 +266,7 @@ static int __init pl011_dt_uart_init(struct dt_device_node *dev,
         printk("WARNING: UART configuration is not supported\n");
     }
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("pl011: Unable to retrieve the base"
diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
index b425881d06..af14388f70 100644
--- a/xen/drivers/char/scif-uart.c
+++ b/xen/drivers/char/scif-uart.c
@@ -26,6 +26,7 @@
 #include <xen/mm.h>
 #include <xen/delay.h>
 #include <asm/device.h>
+#include <asm/device_tree.h>
 #include <asm/scif-uart.h>
 #include <asm/io.h>
 
@@ -318,7 +319,7 @@ static int __init scif_uart_init(struct dt_device_node *dev,
 
     uart = &scif_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("scif-uart: Unable to retrieve the base"
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 490d253d44..0c89cb644e 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -51,6 +51,7 @@
 #include <xen/sizes.h>
 #include <asm/atomic.h>
 #include <asm/device.h>
+#include <asm/device_tree.h>
 #include <asm/io.h>
 #include <asm/iommu_fwspec.h>
 #include <asm/platform.h>
@@ -101,7 +102,7 @@ static struct resource *platform_get_resource(struct platform_device *pdev,
 
 	switch (type) {
 	case IORESOURCE_MEM:
-		ret = dt_device_get_address(pdev, num, &res.addr, &res.size);
+		ret = dt_device_get_paddr(pdev, num, &res.addr, &res.size);
 
 		return ((ret) ? NULL : &res);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479668.743710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1U-0002cp-Hn; Tue, 17 Jan 2023 17:45:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479668.743710; Tue, 17 Jan 2023 17:45:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1U-0002bf-Ag; Tue, 17 Jan 2023 17:45:24 +0000
Received: by outflank-mailman (input) for mailman id 479668;
 Tue, 17 Jan 2023 17:45:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1S-0000oY-KZ
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:22 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2053.outbound.protection.outlook.com [40.107.223.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b53c60a4-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:20 +0100 (CET)
Received: from BN9PR03CA0414.namprd03.prod.outlook.com (2603:10b6:408:111::29)
 by CH3PR12MB8235.namprd12.prod.outlook.com (2603:10b6:610:120::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 17:45:16 +0000
Received: from BN8NAM11FT041.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:111:cafe::92) by BN9PR03CA0414.outlook.office365.com
 (2603:10b6:408:111::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:16 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT041.mail.protection.outlook.com (10.13.177.18) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Tue, 17 Jan 2023 17:45:15 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:15 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 09:45:15 -0800
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:14 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b53c60a4-968e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UBgHzgbQXGPwf6DKE9/WcIqYDTmZhVIwCckFXEyahEkYrMnYNGhSZsxznNLCvJyKMcQ/RVoCE6YzWTqzZIZrDw7aWBWKLCGEqU3IliW2Q9+2e+NJ1eMrNKf/JAOcc3BMdNvt5tfzojx415m86XIsbueHKFWtvRmn+73hrmpAUNP2jW74PTnzgJRDfhoXyi3KbuGIxqxMDkczcdxHN7iEjzfo+vZj9pvy3nIrmMoq60zcA81sBzkKY2nPDKC+oiK2tpUNFi9CZfzrXJUJ+R8NU/u54cbwpo2bk1811ibvD35pmwbCxURS7CCRtXu9EY7qbVmftnzXLHu1LRw+z/Zvlg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iIKU3XYytNQAfZ0EaTu36MGFEtvhVTZymsJIStWcfzE=;
 b=RuffE0LVPuUpBnHawSiZwLEdt4uHy3q2rxdj0lLFaHEfmefMNvOxQgqB2TYcOHerJPySngZp9kR4PQLxwx8spNxSZZ6UyoTJVgT/nUcT+MoNpH+rypmdg4UwllmTx/vRX/GLpyztoICt4ooUpMmYwfHb0XIMe6+LtiOnyyTcTZmTBIaC52cFcFjirRUFt/hEtbwqT0CBqcCsrdRnP09t2yQkp7x+AyKTXK/9hwVUqE6uB3rlxS7ceNfg+Tb9YVpixTfP662EebrA9OCPKKeLQxUT6eR41b/9IEK4FUYOWpNWRlDXN5f2us9wg04lTxgcQ+lMiVOQ3k2rQMecbje9MA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iIKU3XYytNQAfZ0EaTu36MGFEtvhVTZymsJIStWcfzE=;
 b=CxWquEZrUDu6JbgCrwU/MiHRLUr7OfUzCzquIyAH8/emcM1ckzPugQH8g7esr05ejpXHfQXnQGVecdrrT2X/r4fObjQ2VZKYL8Gusr4+kXoUAdlPFSiiN72LZZnA3/8aIEpDHA0kjGuC4RpsOFSh5nb6nlgiBP4//1aBNeh/c4Q=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 07/11] xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to SMMU_CBn_TTBR0
Date: Tue, 17 Jan 2023 17:43:54 +0000
Message-ID: <20230117174358.15344-8-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT041:EE_|CH3PR12MB8235:EE_
X-MS-Office365-Filtering-Correlation-Id: 628ddd1f-16b1-4dac-a72c-08daf8b297b6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	HGChrRYxvKrxib8KCUVvR8Ynjli+YfbldbI/mAGeq0DnVrcMloryHfOHV74Vealsg6UfypAuLJhz++xhcPunSGIZ7fFb5qspIaBg7DA0CqjAIpywXUoEuPB8LReZUp60eIjByYIvvKCDcWvsFCdzGy5WnZNX3xP5nO7wIt9P1/nQ5/TG2x7KyTWIsQR1J0k0ONVKUAorn/2QW2+HtnRCRn/WOfs/8rXq2g+GiiJRwb9cT2LeLTtWpxJjDtecWei20g3fNjymiZgHfYJJSU1VSjKGCfVdDEg41A4FKpcdq3kCXSkb5NFRfaFgumLQPK26II29kp28kHTCLP3CZjCSySKoQEr1w2aWltUqwjGcLrJHAJi9fpxHfhODfEbmyVe7bFCorQHToTsYptL0KE6uEcxFOARx+zYHF4ixQRzX6dqKtouR+Zn5b7Q95e2/fwQb0zK8oRzjXc8Xu3a9hPYOrJROrOlVlFnb+K6MFlypbwZ0jdmXn7YTz8ZMSUQYN42qfGzTqeCdJGgBb483bqNX+uaofqrD2y6e5FhaSjoD3ezp7CrbRfWWeiuXxGV6ndj4dXoJNDEpZnat0DPtXYCHtl111GTYVl9Qrv/dx8atosLiJaMxBaXDjfNRYaCM8XfaJ+FO7Yyn8Dg2PoLoz2YRI/rlpZ0lCFH2ecNZWsoGbeE35Mm6Xthp/DWBiOh/LL31TkHsG7FU0sEzcNmpxTy1O3uc+HZd8Rn07x41LIu547M=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(396003)(376002)(39860400002)(346002)(451199015)(40470700004)(46966006)(36840700001)(86362001)(8676002)(82310400005)(103116003)(36756003)(4326008)(26005)(6916009)(70586007)(47076005)(426003)(2616005)(41300700001)(70206006)(186003)(1076003)(6666004)(54906003)(478600001)(81166007)(40460700003)(356005)(2906002)(82740400003)(5660300002)(83380400001)(336012)(36860700001)(316002)(8936002)(40480700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:15.9074
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 628ddd1f-16b1-4dac-a72c-08daf8b297b6
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT041.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8235

As per ARM IHI 0062D.c ID070116 (SMMU 2.0 spec), section 17.3.9 (page
17-360), SMMU_CBn_TTBR0 is a 64-bit register. Thus, one can use
writeq_relaxed_non_atomic() to write to it, instead of invoking
writel_relaxed() twice for the lower and upper halves of the register.

This also helps us because p2maddr is 'paddr_t' (which may become u32 in
the future). One can assign p2maddr to a 64-bit variable and do the bit
manipulation on it to generate the value for SMMU_CBn_TTBR0.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - 1. Extracted the patch from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
Use writeq_relaxed_non_atomic() to write u64 register in a non-atomic
fashion.

 xen/drivers/passthrough/arm/smmu.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0c89cb644e..84b6803b4e 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -500,8 +500,7 @@ enum arm_smmu_s2cr_privcfg {
 #define ARM_SMMU_CB_SCTLR		0x0
 #define ARM_SMMU_CB_RESUME		0x8
 #define ARM_SMMU_CB_TTBCR2		0x10
-#define ARM_SMMU_CB_TTBR0_LO		0x20
-#define ARM_SMMU_CB_TTBR0_HI		0x24
+#define ARM_SMMU_CB_TTBR0		0x20
 #define ARM_SMMU_CB_TTBCR		0x30
 #define ARM_SMMU_CB_S1_MAIR0		0x38
 #define ARM_SMMU_CB_FSR			0x58
@@ -1084,6 +1083,7 @@ static void arm_smmu_flush_pgtable(struct arm_smmu_device *smmu, void *addr,
 static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 {
 	u32 reg;
+	u64 reg64;
 	bool stage1;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
@@ -1178,12 +1178,13 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 	dev_notice(smmu->dev, "d%u: p2maddr 0x%"PRIpaddr"\n",
 		   smmu_domain->cfg.domain->domain_id, p2maddr);
 
-	reg = (p2maddr & ((1ULL << 32) - 1));
-	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_LO);
-	reg = (p2maddr >> 32);
+	reg64 = p2maddr;
+
 	if (stage1)
-		reg |= ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT;
-	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_HI);
+		reg64 |= (((uint64_t) (ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT))
+		         << 32);
+
+	writeq_relaxed_non_atomic(reg64, cb_base + ARM_SMMU_CB_TTBR0);
 
 	/*
 	 * TTBCR
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479669.743716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1V-0002jT-55; Tue, 17 Jan 2023 17:45:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479669.743716; Tue, 17 Jan 2023 17:45:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1U-0002hn-Ti; Tue, 17 Jan 2023 17:45:24 +0000
Received: by outflank-mailman (input) for mailman id 479669;
 Tue, 17 Jan 2023 17:45:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1U-0000oY-76
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:24 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2083.outbound.protection.outlook.com [40.107.243.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b6594838-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:21 +0100 (CET)
Received: from BL1P223CA0013.NAMP223.PROD.OUTLOOK.COM (2603:10b6:208:2c4::18)
 by BN9PR12MB5226.namprd12.prod.outlook.com (2603:10b6:408:11f::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:45:19 +0000
Received: from BL02EPF000108E9.namprd05.prod.outlook.com
 (2603:10b6:208:2c4:cafe::71) by BL1P223CA0013.outlook.office365.com
 (2603:10b6:208:2c4::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:19 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF000108E9.mail.protection.outlook.com (10.167.241.202) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Tue, 17 Jan 2023 17:45:18 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:17 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 09:45:16 -0800
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:16 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6594838-968e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aKFcRBIoSL8hFvCF6230um6e8PhO6Yx8sUL5w75R809q0lwztfKZmhqJ4jJy6sXPxox5ncT9X2ONi6Kj7rUivZ5twsYJW7BRkBo4GZ0+HLA3yrL7SMmwIrBUAX0rGGviZMl0pDLCwRsAEKkzTmxm0kHFTtGHS5+NbiKZLPNPqFj0T76IYjH+mi4b5FFhiaAfxiMFEJJY2qxb48Sj7GUL4UFkynFdImZy8LgDIxXAG+gnTRo41sZ6kVS6QBj6miRy8ny9MXg3Ul0CHaV739Gd4ep51Gglkj5lKpN5vnsyy8CdLCoISGI8ChVVwTcX8uKzrNCC5NaJ1XlKbxjuyaxV0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mTuCjvySdNRIh7I98ybx4BVBMGxho8OVLY6od2Qthjw=;
 b=hFDq7byjPtDsdp+YU25UvxF9NcSn6X60FTXX1bSR9p40/rA4MZ1tHWoAF51KiaBgKlxwHIvIra0yTzyEunp+LGeelTfqsgWhCg51kb52Ka//99Sf1jaUccrBDB0jDPw9vTL+CE8MaPNJoSmZ1H9Gv42e1G+YpIFfMiH2vK6ji7/UOVNFUZIGNcM6D0GQ3Wd11+c/AX9IsfkE//oD4YhK2MrlIhgzhn3LqbrIhAMJtqkXjE0q0AH3jpS4UBvwsODj6nzwryOptY6QD/Bds+zZ0QULYhAGLDUz6W2eM/F7KYJRID7YIxs1/Ov4P6+2Zl2nZsosnuipUvpQ2XTkasAHUQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mTuCjvySdNRIh7I98ybx4BVBMGxho8OVLY6od2Qthjw=;
 b=Wu2tFbFbTZJhJ+MWeMPPAlKPVK/uY+pEecQY8t3xN90+o6eK07clUBZ1XPFNgqziNrSrSrjZjiU77iA5xbH8ZV0cfTbTt1kTxGq4DlO2M2wecU5HUuW5R4wyzlMYCyZ6eizSzdFlqAs9NycAVdAcXgycHVeNfll8zlDKVzSNSd8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 08/11] xen/arm: guest_walk: LPAE specific bits should be enclosed within "ifndef CONFIG_ARM_PA_32"
Date: Tue, 17 Jan 2023 17:43:55 +0000
Message-ID: <20230117174358.15344-9-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000108E9:EE_|BN9PR12MB5226:EE_
X-MS-Office365-Filtering-Correlation-Id: 954f94af-97fa-4e29-e7d2-08daf8b29971
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	oB3QoPbqPpIC48Nz0POVbhS0M4oR12foSQiiKPhb3YIIFsKpD/IOq78pwNtfFQ0Wrkco3INoLnljDTIRRTVJBsuGmeCxoAoIffZL0CIzSjpITF5gfo4/RVi5IXjv//CXUdr5uGvQjWKO/DR2tXzkMf9B0uP0SpTXCusNpywZzyH7gsBtdXECgT0sgwjK0Kn8sL5yEsXgsoG+e6D6HOsQEFsimzG83PqesssLDV1aHNmy1ihLuKWVPWLG9HE+0TB3BCCGjHhUSWQU7QWOENEjN+X4T4Zoe8Ys6O1FVcuob9u26QLBitMGWeGVE5ucH8ZZJtjDwpWy6BzDjoCNdP1beBJ8BnTvKQScE2DmRh+eM8gp+lT7rn4blElnbO33SzgGuhIgAcyr3UR/Jf6PHpkxAd86UOpTbKrPAzv1QcIuLOCe5G4TegGyrvhImW9i48sdG4+LQ7G9JSSCf5VmrvTC2Omxkv6/3zCxTHbtkZNO0o0P+7FwGSf5JeFAdAB/j/XqT/h5FrloDNrzEJ6Jj/JkCEM8rUa0qLFMmn6eXQkSRw88NTnDX6z30AVwOzzO9UNqRkOXefZ5+bxNANt3CGl+OPeexp4CNsSRxXUzBVcFNpLXOJHcdsxZ1V1aFfN1q6rnBgRAz2UbOFanBHuzdr1ZuJ0tlWYwV2xqMeDJ061/uZlM+kGrw/ZfWOR6SAsYx9fhZMLg+Ida7a6YdSpmRVD5SutVuKUv0V2YYMdGWRmbJMU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(376002)(136003)(346002)(396003)(451199015)(46966006)(36840700001)(40470700004)(86362001)(103116003)(41300700001)(2616005)(70586007)(70206006)(47076005)(186003)(4326008)(8676002)(426003)(6916009)(82310400005)(36756003)(26005)(36860700001)(5660300002)(83380400001)(336012)(82740400003)(8936002)(54906003)(478600001)(40480700001)(316002)(1076003)(2906002)(81166007)(6666004)(40460700003)(356005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:18.2498
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 954f94af-97fa-4e29-e7d2-08daf8b29971
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000108E9.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5226

A subsequent patch introduces "CONFIG_ARM_PA_32" to support 32-bit
physical addresses. Thus, the code specific to the Large Physical
Address Extension (i.e. LPAE) should be enclosed within
"#ifndef CONFIG_ARM_PA_32".

Refer to "short_desc_l1_supersec_t" in xen/arch/arm/include/asm/short-desc.h:

unsigned int extbase1:4;    /* Extended base address, PA[35:32] */
unsigned int extbase2:4;    /* Extended base address, PA[39:36] */

Thus, extbase1 and extbase2 are not valid when only 32-bit physical
addresses are supported.
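As an illustration, the assembly of a supersection IPA can be sketched as a
standalone, host-compilable helper (this mirrors the logic in guest_walk_sd(),
but is not the Xen code itself; the shift constants are assumed to match the
short-descriptor layout where base holds PA[31:24], extbase1 PA[35:32] and
extbase2 PA[39:36]):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed shifts for short-descriptor supersections:
 * base holds PA[31:24], extbase1 PA[35:32], extbase2 PA[39:36]. */
#define L1DESC_SUPERSECTION_SHIFT            24
#define L1DESC_SUPERSECTION_EXT_BASE1_SHIFT  32
#define L1DESC_SUPERSECTION_EXT_BASE2_SHIFT  36

/* Assemble the IPA of a 16MB supersection mapping.  Without
 * CONFIG_ARM_PA_32 the two 4-bit extended-base fields widen the
 * address to 40 bits; with it, only the low 32 bits are meaningful. */
static uint64_t supersec_ipa(uint32_t gva, uint8_t base,
                             uint8_t extbase1, uint8_t extbase2)
{
    uint64_t mask = (1ULL << L1DESC_SUPERSECTION_SHIFT) - 1;
    uint64_t ipa = gva & mask;

    ipa |= (uint64_t)base << L1DESC_SUPERSECTION_SHIFT;
#ifndef CONFIG_ARM_PA_32
    ipa |= (uint64_t)(extbase1 & 0xf) << L1DESC_SUPERSECTION_EXT_BASE1_SHIFT;
    ipa |= (uint64_t)(extbase2 & 0xf) << L1DESC_SUPERSECTION_EXT_BASE2_SHIFT;
#endif
    return ipa;
}
```

Compiled without CONFIG_ARM_PA_32, the extended-base fields contribute
bits [39:32] of the result; compiled with it, they are ignored entirely.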

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from -
v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".

 xen/arch/arm/guest_walk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index 43d3215304..0feb7b76ec 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -154,8 +154,10 @@ static bool guest_walk_sd(const struct vcpu *v,
             mask = (1ULL << L1DESC_SUPERSECTION_SHIFT) - 1;
             *ipa = gva & mask;
             *ipa |= (paddr_t)(pte.supersec.base) << L1DESC_SUPERSECTION_SHIFT;
+#ifndef CONFIG_ARM_PA_32
             *ipa |= (paddr_t)(pte.supersec.extbase1) << L1DESC_SUPERSECTION_EXT_BASE1_SHIFT;
             *ipa |= (paddr_t)(pte.supersec.extbase2) << L1DESC_SUPERSECTION_EXT_BASE2_SHIFT;
+#endif /* !CONFIG_ARM_PA_32 */
         }
 
         /* Set permissions so that the caller can check the flags by herself. */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479670.743729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1W-00039n-L6; Tue, 17 Jan 2023 17:45:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479670.743729; Tue, 17 Jan 2023 17:45:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1W-00037n-Ed; Tue, 17 Jan 2023 17:45:26 +0000
Received: by outflank-mailman (input) for mailman id 479670;
 Tue, 17 Jan 2023 17:45:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1U-0000oY-PC
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:24 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2084.outbound.protection.outlook.com [40.107.237.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b6b0624f-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:22 +0100 (CET)
Received: from BL1P223CA0016.NAMP223.PROD.OUTLOOK.COM (2603:10b6:208:2c4::21)
 by IA0PR12MB8374.namprd12.prod.outlook.com (2603:10b6:208:40e::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:45:19 +0000
Received: from BL02EPF000108E9.namprd05.prod.outlook.com
 (2603:10b6:208:2c4:cafe::11) by BL1P223CA0016.outlook.office365.com
 (2603:10b6:208:2c4::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:19 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF000108E9.mail.protection.outlook.com (10.167.241.202) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Tue, 17 Jan 2023 17:45:19 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:19 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:18 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:17 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6b0624f-968e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fyKg/SiuLnNtr++MQj90BxMCE6x/hgp7vK5l4EdSPCwZxEvQ+sU93aJq3SdSYZ+OK4u9lHdkUrgtQogzD48IwtHu/ELjpg04fM85kxQe22JGtdu+y/WqCmTMOimWNd7n/y/T82aPcET0Leu4uhnfcn6I5ifBeKoNWdbYHxtmqhMz67x+ZP9U3PPNnnLsAnYokiqiELC6PTPycYTKEp7R1Bo270rlG5MA2ChOhv6LMnjaN37z7XZfexjWQXGE76szgJTI+L0+t8j/FOP8etVOuIOsRlqLXUruggvCppWQACh+ORFmQGia8NGlFUm9Jb64VqRHuP0gki6hJkDlAMZTLg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ALYkcOUMY24lQgvkESpS4YUl/I55d3ApiAMvpoBf2ic=;
 b=X6iMR4or+kTbuc13hnjlxl0EzqeL9jZP5R8p6LXbDdNYvjvxS1w37jNheSBr5dMsMpzmFli1+xEw6etNE2V86XGBWGV/Oqt/cA/DMNrmXml9MjQX6QLwMVsxIHZsQyYKeToH+lfdkt/Bw7akIU8UNliLNCdMuHWPMZmdGs0rsxnBhPGQ51YOrtVcnlsh1pMPmakh8AHQX6JMC94P8NQkkho/MPoOoht3pVo9Gmbhu/HjqTGpeTMaojj8/T+5wjtuh2GEwPNtIVdJoSC629sFIeZWskAIRBHF7JFjNvSJ20XySFuucU3jxFdmguBhPyLDAyOw3fPoXEg+HGkf1Dsjbw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ALYkcOUMY24lQgvkESpS4YUl/I55d3ApiAMvpoBf2ic=;
 b=x9Nf9HJ3X6O8CwHCK6nMnjMJEVJ+WY86syFIbkpFC5NqXJFya1VE9xPO27DOOrmdr1OEDQff7BN7wCXLONZL1yIO6Za2I/NS5+7fFU2n0+y5bEqK1fOVLdvjnCMnSbcgSgfiODtI85LFvvDhMg6yMZ+fI3IknB5gAjfoEB92qWc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 09/11] xen/arm: Introduce ARM_PA_32 to support 32 bit physical address
Date: Tue, 17 Jan 2023 17:43:56 +0000
Message-ID: <20230117174358.15344-10-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000108E9:EE_|IA0PR12MB8374:EE_
X-MS-Office365-Filtering-Correlation-Id: 90c57a63-798f-47d6-f560-08daf8b29a00
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hHwQelgOJAmI+Q9nrZ5cvFFyiWSVIosX72GL1UAKyx+kgrvGRZcOGtMKUR2qipWDfaD8H0A+TR7erEpa2996HW+6h6fvLKBJSRMOm7llOePrkZeIqd2yelllWGIF7++/JnuCHm3Eh8aHuhfwz3pL+Nm00gsuZ12RAWdUl051PU8ysGbw1qrAMijb+1Sw5FxUKJlTNFY3XdVtDLoTD/iCYIay6HwCIn/3bm/e/hkMnV9vAwlwBovCeApr1Kbx6GlQh2C5ASHZLfXIsBl8QEhlJVR54XG+YJSmIShGvNnz9CJMu8rcKOyzc9iiruRsVrZx1Ke3nqsVDN1g8M34jWr8I/aKwyfQAVsWJkEdqv5Kuxt4F6klvAPjh10FnNSCZIdpiO1yLA6OZu7Z9leIE8vmATXnMvcb+ddHvIGGezgkcj4TiOwZgLI7EsIlkfLyOSaLEevcD2BCXwxN6kaBvZBZjVnH+aP2/JLvYrjBNjjBVAdLDNC/G7PBV/YwecF/EKF8M9ZfdGNzIRAeBwHW3a7HO91JrI+JqLgnnMSMgkwzJ8hlQDJcnl5EpQTVhmxybWBlHJgXmY9SIRrnXe2FI6imHQtlsYlyNj5KAslkyT+zuYbYDz5L5G8jCpO2/GuMJNNscK+6puo/J8EgfVchUQYyzeXiztZ29zRo00rqYhaOABY8E2fwwiPvSLH3Ig2jHmdMxTJ/JyC+qpEEYAOpNJQcZNVf2nIInG7ihdp3HV0b730=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(346002)(39860400002)(136003)(376002)(451199015)(46966006)(36840700001)(40470700004)(83380400001)(82310400005)(40460700003)(54906003)(36756003)(103116003)(478600001)(40480700001)(186003)(356005)(36860700001)(2616005)(81166007)(1076003)(47076005)(82740400003)(86362001)(336012)(426003)(41300700001)(2906002)(26005)(6666004)(70206006)(8936002)(70586007)(6916009)(8676002)(4326008)(316002)(5660300002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:19.7498
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 90c57a63-798f-47d6-f560-08daf8b29a00
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000108E9.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB8374

Introduce a new config option, ARM_PA_32, to support 32-bit physical
addresses. It is disabled by default.
ARM_PA_32 cannot be enabled on ARM_64, as the memory management unit
there works with 48-bit physical addresses.
On ARM_32, it can be used on systems where the Large Physical Address
Extension (LPAE) is not supported.
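The width selection made by this series can be sketched as a standalone,
host-compilable snippet (the names mirror Xen's, but the snippet is an
assumption-laden illustration, not the Xen headers themselves; compiled on a
host with neither CONFIG macro defined, it takes the 40-bit LPAE default):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the address-width selection: 48-bit on ARM_64,
 * 32-bit with ARM_PA_32, otherwise the LPAE 40-bit default. */
#if defined(CONFIG_ARM_64)
#define PADDR_BITS 48
#elif defined(CONFIG_ARM_PA_32)
#define PADDR_BITS 32
#else
#define PADDR_BITS 40
#endif

#if PADDR_BITS <= 32
typedef uint32_t paddr_t;       /* 32-bit physical addresses */
#define INVALID_PADDR UINT32_MAX
#else
typedef uint64_t paddr_t;       /* wider than 32-bit addresses */
#define INVALID_PADDR UINT64_MAX
#endif

/* Highest physical address representable with the configured width. */
static inline uint64_t paddr_max(void)
{
    return (UINT64_C(1) << PADDR_BITS) - 1;
}
```

Shrinking paddr_t to 32 bits is what makes the extended-base and LPAE-only
paths in the earlier patches unreachable, hence the #ifndef guards.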

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - 1. No changes.

 xen/arch/arm/Kconfig                 | 9 +++++++++
 xen/arch/arm/include/asm/page-bits.h | 2 ++
 xen/arch/arm/include/asm/types.h     | 7 +++++++
 3 files changed, 18 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..aeb0f7252e 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -39,6 +39,15 @@ config ACPI
 config ARM_EFI
 	bool
 
+config ARM_PA_32
+	bool "32 bit Physical Address"
+	depends on ARM_32
+	default n
+	---help---
+
+	  Support 32-bit physical addresses.
+	  If unsure, say N.
+
 config GICV3
 	bool "GICv3 driver"
 	depends on !NEW_VGIC
diff --git a/xen/arch/arm/include/asm/page-bits.h b/xen/arch/arm/include/asm/page-bits.h
index 5d6477e599..8f4dcebcfd 100644
--- a/xen/arch/arm/include/asm/page-bits.h
+++ b/xen/arch/arm/include/asm/page-bits.h
@@ -5,6 +5,8 @@
 
 #ifdef CONFIG_ARM_64
 #define PADDR_BITS              48
+#elif defined(CONFIG_ARM_PA_32)
+#define PADDR_BITS              32
 #else
 #define PADDR_BITS              40
 #endif
diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
index 083acbd151..f9595b9098 100644
--- a/xen/arch/arm/include/asm/types.h
+++ b/xen/arch/arm/include/asm/types.h
@@ -37,9 +37,16 @@ typedef signed long long s64;
 typedef unsigned long long u64;
 typedef u32 vaddr_t;
 #define PRIvaddr PRIx32
+#if defined(CONFIG_ARM_PA_32)
+typedef u32 paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PADDR_SHIFT BITS_PER_LONG
+#define PRIpaddr PRIx32
+#else
 typedef u64 paddr_t;
 #define INVALID_PADDR (~0ULL)
 #define PRIpaddr "016llx"
+#endif
 typedef u32 register_t;
 #define PRIregister "08x"
 #elif defined (CONFIG_ARM_64)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:45:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:45:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479672.743743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1Z-0003eP-1C; Tue, 17 Jan 2023 17:45:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479672.743743; Tue, 17 Jan 2023 17:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHq1Y-0003eA-SY; Tue, 17 Jan 2023 17:45:28 +0000
Received: by outflank-mailman (input) for mailman id 479672;
 Tue, 17 Jan 2023 17:45:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1W-0000r0-VQ
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:27 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2050.outbound.protection.outlook.com [40.107.237.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b9150966-968e-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 18:45:26 +0100 (CET)
Received: from BL1P223CA0022.NAMP223.PROD.OUTLOOK.COM (2603:10b6:208:2c4::27)
 by MN0PR12MB6003.namprd12.prod.outlook.com (2603:10b6:208:37f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:45:23 +0000
Received: from BL02EPF000108E9.namprd05.prod.outlook.com
 (2603:10b6:208:2c4:cafe::3f) by BL1P223CA0022.outlook.office365.com
 (2603:10b6:208:2c4::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:23 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF000108E9.mail.protection.outlook.com (10.167.241.202) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Tue, 17 Jan 2023 17:45:23 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:22 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:21 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9150966-968e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OBJjOCkPyaoxD6G7id62wRiaiWDS/1HY87hdxnzub2UNmcIXt+m0m44F9H9Yq696VilWXc889IefY6MpjEQm0bKoE/bvOz/RZdVwhwF5jHwfJYR/BgC3uLed3zkpr4ZxW/DlApTDfkg+cR0qdBzhPVJc8rssUR9ZcBgAbgMGPNerPRDwr3+CESrkvacorewa3/9CxK0pdIe3GK/RTRyPIxYp4G6voz6SIghcm/GIiJf/3wF9TRcKlAcSjpc2bBgkZtyfkX30QLa3YPfOXER6kCLXVDS05laULDKVCF5nrEjPRbSyCQotOSx2jZNoK36EYxOIjo5sLVgJ9loG8oZGVA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5nDuDeqGPvT23nv75ZOIBleV+rCr7pLBOWXoJ8VggXg=;
 b=GPfTSl6dycvsx8crBIZOVcl/YOjUtuql06TX+aOi3b+fI30oVuxqOApcqHoNYyRYB89dKoJV/RbK+pOsvKZDbDBnUqy52hu7EbHLZZiOd9zci+uWuKMALugB+iQZ6Z5aQYOgszaow5dX+n6TUselqYn4zH4Ek6hyXqsycDH97J3PpAlyZ9shDgV80Gr3z7KwqtqAZh0MS+xmUUzrV8YDR/rKs9OyZZU8zHen3wUasxdlAjihfcihaRcDuHSPcV/XYJ8kXlcdwWM3skBqC6MQoF/Ieo6s+opSVHjMV+3p8IMV9jjLgiS69eVzocI/ZyjjGIpUfyBUbjYYe0mLXSaRSw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5nDuDeqGPvT23nv75ZOIBleV+rCr7pLBOWXoJ8VggXg=;
 b=rCL9kKkbpiuErU1APbqn83efvQQs5QIdaSwioZK9XIJ17kOE3+I3EqemkgC4YCnSiCdp5woNHgVHGXxy2t3Wklk1PiKe2OYvNwvq6Wso4EQLEbpTVuumBDZ1Zbv8UzulNpZj7mH+28LKQwKoEFQqlE4BlxougO5A4RpwxzAyIVg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 11/11] xen/arm: p2m: Enable support for 32bit IPA
Date: Tue, 17 Jan 2023 17:43:58 +0000
Message-ID: <20230117174358.15344-12-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000108E9:EE_|MN0PR12MB6003:EE_
X-MS-Office365-Filtering-Correlation-Id: decc4f7b-83ff-4bc6-1c26-08daf8b29c2a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	z8gaDb+q0YxsODVGSp4Qzyz0rZZ0aV8Er2c4KtZ1LLzKshoDeE5Cmrcjdk+jMis9w726fSVcuMPlJ5u8PGraQnqlChyv4wjRAzLUU/i1LawdYRQGxoZG905gHGbsWo3eEWpYspewdrvSe8GoztTQkJOVxuvrOKUp22PvQDJAsOaaGStxfhldGNut2kozWWffK5VZ6lseQUPBmKNFU8hzawo9O64tMH42wKBMcB1w3ZsIA9OnMTnCZDhgwcBEXRL++QRcutO/JauR5Asao6Yq0Kf+bW5x64H1alZC3tSp1SgGYLcGbJSmV6qfy+ThuI7i7w8ZL/dOLkRclpQMwGrYGoSQsoKFraqjIn9FoE/A4jmY0Q17CVAVXjkmW143z1e8rrLH3N93VzIxt4Ds4Hf9MVt5cqF6LpmLnvivtDqxW6cVyGY2WveRRzczBn54lo99VwBqucvOzkWvwPpeOmjmWCveBtpDZXQ0jQpYwMG6FgZv60mY1TCtlqGEe2Nm5mY3yEUENjKkTiuEX6Vb1MZYhZGibBwiGDx/N+y+d+aVqH3e1aIeXR6pToDYQKXg8P0x/JC4FM6IFdXjgr4wFqvQg9dajFVXptOKihVx3hIRCQ6V3OHgbW6iax+hEW2OGatJzwz2+1puRDdXLlFz2L3gOcBhoIeOvaAp9CvphoKWB6mvQMVzi75vdW4L9sIzVA6DWIXjaErOL4WiS+8XT8n2YkXPqehjpJxLRsREfxyBgt4=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(136003)(396003)(346002)(39860400002)(451199015)(46966006)(36840700001)(40470700004)(41300700001)(8936002)(36756003)(83380400001)(2616005)(70206006)(70586007)(6916009)(4326008)(47076005)(426003)(8676002)(336012)(54906003)(316002)(5660300002)(81166007)(356005)(82740400003)(82310400005)(86362001)(1076003)(2906002)(40460700003)(103116003)(36860700001)(40480700001)(478600001)(186003)(26005)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:23.3593
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: decc4f7b-83ff-4bc6-1c26-08daf8b29c2a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000108E9.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB6003

VTCR.T0SZ should be set to 0 to support a 32-bit IPA, as the IPA size
is 2^(32 - T0SZ).
Refer to Arm DDI 0487I.a ID081822, G8-9824, G8.2.171, VTCR,
"Virtualization Translation Control Register" for the bit descriptions.
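The field encoding can be sketched as a standalone helper (an illustration,
assuming the AArch32 VTCR layout from the referenced manual section: T0SZ in
bits [3:0] with its sign bit S at bit [4], and IPA size = 2^(32 - T0SZ)):

```c
#include <assert.h>
#include <stdint.h>

/* Encode the AArch32 VTCR T0SZ/S bits for a given IPA width.
 * T0SZ = 32 - ipa_bits, which is negative for IPA widths above
 * 32 bits; bits [3:0] hold T0SZ and bit [4] holds the sign bit S,
 * so the encoding is the low five bits of the signed value. */
static uint32_t vtcr_t0sz(unsigned int ipa_bits)
{
    int t0sz = 32 - (int)ipa_bits;

    return (uint32_t)t0sz & 0x1f;
}
```

This reproduces the existing 40-bit case (T0SZ = -8, encoded as 0x18) and
yields 0 for a 32-bit IPA.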

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - New patch.

 xen/arch/arm/p2m.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 948f199d84..cfdea55e71 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2266,13 +2266,17 @@ void __init setup_virt_paging(void)
     register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
 
 #ifdef CONFIG_ARM_32
-    if ( p2m_ipa_bits < 40 )
+    if ( p2m_ipa_bits < PADDR_BITS )
         panic("P2M: Not able to support %u-bit IPA at the moment\n",
               p2m_ipa_bits);
 
-    printk("P2M: 40-bit IPA\n");
-    p2m_ipa_bits = 40;
+    printk("P2M: %u-bit IPA\n", PADDR_BITS);
+    p2m_ipa_bits = PADDR_BITS;
+#ifdef CONFIG_ARM_PA_32
+    val |= VTCR_T0SZ(0); /* 32 bit IPA */
+#else
     val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
+#endif
     val |= VTCR_SL0(0x1); /* P2M starts at first level */
 #else /* CONFIG_ARM_64 */
     static const struct {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 17:57:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 17:57:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479711.743754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHqD6-0007dp-4I; Tue, 17 Jan 2023 17:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479711.743754; Tue, 17 Jan 2023 17:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHqD6-0007di-1Z; Tue, 17 Jan 2023 17:57:24 +0000
Received: by outflank-mailman (input) for mailman id 479711;
 Tue, 17 Jan 2023 17:57:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D8eG=5O=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pHq1W-0000oY-Tu
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 17:45:27 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2057.outbound.protection.outlook.com [40.107.92.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b8235567-968e-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 18:45:24 +0100 (CET)
Received: from BL1PR13CA0293.namprd13.prod.outlook.com (2603:10b6:208:2bc::28)
 by BL1PR12MB5192.namprd12.prod.outlook.com (2603:10b6:208:311::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 17:45:21 +0000
Received: from BL02EPF000108EA.namprd05.prod.outlook.com
 (2603:10b6:208:2bc:cafe::6e) by BL1PR13CA0293.outlook.office365.com
 (2603:10b6:208:2bc::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Tue, 17 Jan 2023 17:45:21 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF000108EA.mail.protection.outlook.com (10.167.241.203) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Tue, 17 Jan 2023 17:45:21 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 17 Jan
 2023 11:45:20 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 17 Jan 2023 11:45:19 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8235567-968e-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gKX77pi5LhWWRRzUe8sPU9yYuuPG/bH5pkOAhg0ccaB0JRBzbH1dud0UiQ7M99W+wAjSnT9riXDwenhECcOai+QeLuZcimALY84XpMLhvEdVeKvncJPdsuwX1MeybWmLypFwpGDu6rXXbK9Ch1ALdFchv9dnM/f5eq3ZAi07kzgCKn4Dnfg6TkLw1L77Gsh4CTPbDefVVfUzUdprvhTAepLw06HyAgcgwWPd1OurJuWKVqkQBtOZMJOoeokSlSsg2IANRMjtoSM+WA2Je5Q4HtTIrF2KFHgENkUv6fjHSJEt+owzMT2vPWFEM0cXwHJeg6dIg5AC7EA1HBnrm+CJDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IUKkPNjs40zSymEmFxi8ADEbz2aWFS1vKYbBMcQpOKo=;
 b=efteD46leLWOXOsHEaPnwIF8mFD9+QdIkLMxKuZFOVc1jAvK4lytVYUuxLYoQ9TMRqvOo+qGKIGDXPYeZo3F9hO1eZ09Mkg9DwPi13/B5BC2NxjsYBTI+gTHhJYD8GkX9mLa3D9OeivWq/8Q5OSByGjEbhR7gBFg2gOvgHi47eXG3Md8zjT6dcpznKQ/iU6s1LIfz01lMxFhGBSQ1y4FV5s7afifAgdqsSUIqo/uCxEkVSjl0fKtlJ4yN8xWGgxLWlYO6vx6FckRRTP0e8ukIjZF197k91OS01rOdCg1g53WbJbocFfrrF503T2e3gTpinEjh1aoFEirpQyNN2wlfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IUKkPNjs40zSymEmFxi8ADEbz2aWFS1vKYbBMcQpOKo=;
 b=CaiMpMoM/FndhFVnbQSjERMc8hrBAiWkzp8Drs4KUv0EfrbWDSQEDRiouoXtIxLY5bZO4P0W4B6LiNsQN/xbU6ppP8ZO/8V5pTyO4c0y6rEOEo3/0uhotdJKz4uWADEwBPFoa58Ipq/6S5b8fCzdR/SooYlriUXqYXhupURsEPU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v2 10/11] xen/arm: Restrict zeroeth_table_offset for ARM_64
Date: Tue, 17 Jan 2023 17:43:57 +0000
Message-ID: <20230117174358.15344-11-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000108EA:EE_|BL1PR12MB5192:EE_
X-MS-Office365-Filtering-Correlation-Id: 912192a8-fd13-4b16-0a2b-08daf8b29b29
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jan 2023 17:45:21.1746
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 912192a8-fd13-4b16-0a2b-08daf8b29b29
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000108EA.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5192

zeroeth_table_offset is not used on ARM_32.
Also, when 32-bit physical addresses are used (i.e. ARM_PA_32=y), it
causes an overflow.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - Removed the duplicate declaration for DECLARE_OFFSETS.

 xen/arch/arm/include/asm/lpae.h | 4 ++++
 xen/arch/arm/mm.c               | 7 +------
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
index 3fdd5d0de2..2744e0eebf 100644
--- a/xen/arch/arm/include/asm/lpae.h
+++ b/xen/arch/arm/include/asm/lpae.h
@@ -259,7 +259,11 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
 #define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
 #define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
 #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
+#ifdef CONFIG_ARM_64
 #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
+#else
+#define zeroeth_table_offset(va)  0
+#endif
 
 /*
  * Macros to define page-tables:
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index fab54618ab..95784e0c59 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -207,12 +207,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
 {
     static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
     const mfn_t root_mfn = maddr_to_mfn(ttbr);
-    const unsigned int offsets[4] = {
-        zeroeth_table_offset(addr),
-        first_table_offset(addr),
-        second_table_offset(addr),
-        third_table_offset(addr)
-    };
+    DECLARE_OFFSETS(offsets, addr);
     lpae_t pte, *mapping;
     unsigned int level, root_table;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 17 18:00:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 18:00:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479731.743765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHqG2-0000lM-KX; Tue, 17 Jan 2023 18:00:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479731.743765; Tue, 17 Jan 2023 18:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHqG2-0000lF-Hs; Tue, 17 Jan 2023 18:00:26 +0000
Received: by outflank-mailman (input) for mailman id 479731;
 Tue, 17 Jan 2023 18:00:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHqG1-0000l5-Px; Tue, 17 Jan 2023 18:00:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHqG1-0007c9-Or; Tue, 17 Jan 2023 18:00:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHqG1-00018I-A2; Tue, 17 Jan 2023 18:00:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHqG1-0003BI-9Y; Tue, 17 Jan 2023 18:00:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=u2v/n+4AVhKWdt0O6Jg5oC+4qViZtnSCoA0W64ohTac=; b=37PE3WP3XCTe3E7aJHwIvfnmg0
	03jHsWMq3rzZF8CPblMN9LmsXgQ/E1CC83DZaiv1CwOKc5LHj/WR6MTcfYGe4UmGiI3eC9HtiP/gv
	SO2qoU1CWv6uHOU7u5IN74yHYAS7B1SWAOrl8s9FWVmzsU9juSKAu/HCFeqU044RTfVU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175941-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175941: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=015a001b03db14f791476f817b8b125b195b6d10
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 18:00:25 +0000

flight 175941 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175941/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 015a001b03db14f791476f817b8b125b195b6d10
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   32 attempts
Testing same since   175940  2023-01-17 16:40:41 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 424 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 18:30:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 18:30:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479741.743776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHqjD-00047J-TU; Tue, 17 Jan 2023 18:30:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479741.743776; Tue, 17 Jan 2023 18:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHqjD-00047C-QE; Tue, 17 Jan 2023 18:30:35 +0000
Received: by outflank-mailman (input) for mailman id 479741;
 Tue, 17 Jan 2023 18:30:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHqjC-000472-ME; Tue, 17 Jan 2023 18:30:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHqjC-0008L8-JU; Tue, 17 Jan 2023 18:30:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHqjC-0001ty-9F; Tue, 17 Jan 2023 18:30:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHqjC-0004kM-8k; Tue, 17 Jan 2023 18:30:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DrL+FdsVoIMUuxnVLp2yAESifsj9w8Aa/0G8ssa6X7c=; b=PGP/+AwTL0GiKFVm0EXOj+bfw2
	wACrzVnU8rki6Paul63saxDYN6SAIagZtW8BMxspZ0LBgvlVAkF/TzttZ5TxDg+hIpYZ1vKpuXdNt
	FKQPPBiXSBmisU/V8M/o9ObGapdeZOqUVcF3aFmmQgByJ/U4hpfBvvIW1mcIXRJjd5Ok=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175942-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175942: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=015a001b03db14f791476f817b8b125b195b6d10
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 18:30:34 +0000

flight 175942 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175942/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 015a001b03db14f791476f817b8b125b195b6d10
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   33 attempts
Testing same since   175940  2023-01-17 16:40:41 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 424 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 18:49:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 18:49:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479749.743787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHr0p-0005i2-EW; Tue, 17 Jan 2023 18:48:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479749.743787; Tue, 17 Jan 2023 18:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHr0p-0005hv-B3; Tue, 17 Jan 2023 18:48:47 +0000
Received: by outflank-mailman (input) for mailman id 479749;
 Tue, 17 Jan 2023 18:48:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHr0n-0005hp-Ox
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 18:48:46 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8fa728d8-9697-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 19:48:43 +0100 (CET)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 13:48:39 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5156.namprd03.prod.outlook.com (2603:10b6:a03:224::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 18:48:37 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 18:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fa728d8-9697-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673981323;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=i6nnM96Wcje001If7Kq8Ot3jIAsr9a3RJkD8G3Wj0uM=;
  b=EOcW3PQwwTCqlyvlyX53l+VIUOX8EwXRxnVz2zadSbIDlg7/3b2nsdh0
   TI8dWnfLk1hecNftqwIiUwXY8RaaD+f/XQn4rBw5FEOrUHlf582/20BXO
   dYXCZZTQo9qO8EtptBPJllcbuaQfN6TaP9Dj/XW60TxtZEPSSnTxeweGl
   c=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 93017569
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="93017569"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ssit10oVSFIGqXVojDuHcDpQjRWZUa8k0+V4nTo9iOn3R2ZjMaot83/ano4cgGblxbIocPewrjceupdfTxRzoaaKUmg7mr/7l5RFeuBikeqIoSofSD5Oyfo+u4TvsKxZ8LxKssBqJy3b0sHFSMVseR21q6F2EApjYO/Upr1csPjlYzrf9EZb9OWivdqujgQ39sErQuBFt+xS6zuuHG6uQyub434VyEYfAfqBav1aJ7ZdGteHCDIH458KHpLlJU+la4jZhnHhkW2BsOfOBQywR6n/SAXtIeolvdPQM5iVhpQvd7d9Edxht0/gXDy8w12bGkfGMkhbN3nePgSWiElY/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=i6nnM96Wcje001If7Kq8Ot3jIAsr9a3RJkD8G3Wj0uM=;
 b=BDPU9/byCc+ci43NiqEf3FJc4Y4kWRgFJteZZ2ufGJk9DTOLyhn9I4zFn9kOMprc0Dckrw2hBWkwJVlV3ztzTjElZ2AqqAU5yoR5YYutNascBjswsmNcRtYq+ymRdL2bjmt0lnihcQBstoMhXHqdtcouoUEY130IsE7FVEuG6/yeOPDCCzbWThFpoATEChR3nePG9KBwt/R+PyZDKnTqjwcY17F8eVi75XvfITk4AVDV4njpfCNrQcGukL6NljR6yZ0jmp0fvTI2uGCjwFjXKH8mlDoQRBWLo1R35yVE0TPEitSe+f0+2MB5BWMZn4mnVJvSxarMk8LBawUsS0yIOA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=i6nnM96Wcje001If7Kq8Ot3jIAsr9a3RJkD8G3Wj0uM=;
 b=YHbHHdlJYH3aNecxCDwK49MoDQe4RroL1dh47+yAu31yfswWF8JACsgXLqgg9GYQt2tHW2Eksv+SmkX5UAi1YwAB8DXwyau3mqG24HHzQqUfhBKzFQ46Cf24mTAt0uOHmoEX89tSdeb3WIPVjxwt0hOYoQqeQEvF4BVProFtCQw=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v2 6/9] x86/shadow: re-work 4-level SHADOW_FOREACH_L2E()
Thread-Topic: [PATCH v2 6/9] x86/shadow: re-work 4-level SHADOW_FOREACH_L2E()
Thread-Index: AQHZJcRNSLEE6oa1r0qKWqqRMiTKGK6i/RMA
Date: Tue, 17 Jan 2023 18:48:36 +0000
Message-ID: <ed382d91-d4fc-4778-d1e2-9f55b147e33a@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <27a7245c-f933-5b2b-5685-d9ba2dbd4a8c@suse.com>
In-Reply-To: <27a7245c-f933-5b2b-5685-d9ba2dbd4a8c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BY5PR03MB5156:EE_
x-ms-office365-filtering-correlation-id: 2a7e86ca-3d90-4566-5c43-08daf8bb712e
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <682B96306D01E34C80C28AE956DD38B1@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2a7e86ca-3d90-4566-5c43-08daf8bb712e
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 18:48:36.7626
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: pQ94i9LqRbMXr/BBohm12WTMmvYbt8W/UYPKXZYqx7kFgIQhK78NPVgfxIpsog5hH0giDj3+jSZFvmU2y4pLzIcqHeln2Y8rP0F/ri050hw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5156

On 11/01/2023 1:54 pm, Jan Beulich wrote:
> First of all move the almost loop-invariant condition out of the loop;
> transform it into an altered loop boundary. Since the new local variable
> wants to be "unsigned int" and named without violating name space rules,
> convert the loop induction variable accordingly.

I'm firmly -1 against using trailing underscores.

Mainly, I object to the attempt to justify doing so based on a rule we
explicitly choose to violate for code consistency and legibility reasons.

But in this case, you're taking a block of logic which was cleanly in
one style, and making it mixed, even amongst only the local variables.

> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -863,23 +863,20 @@ do {
>  /* 64-bit l2: touch all entries except for PAE compat guests. */
>  #define SHADOW_FOREACH_L2E(_sl2mfn, _sl2e, _gl2p, _done, _dom, _code)       \
>  do {                                                                        \
> -    int _i;                                                                 \
> -    int _xen = !shadow_mode_external(_dom);                                 \
> +    unsigned int i_, end_ = SHADOW_L2_PAGETABLE_ENTRIES;                    \
>      shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                         \
>      ASSERT_VALID_L2(mfn_to_page(_sl2mfn)->u.sh.type);                       \
> -    for ( _i = 0; _i < SHADOW_L2_PAGETABLE_ENTRIES; _i++ )                  \
> +    if ( !shadow_mode_external(_dom) &&                                     \
> +         is_pv_32bit_domain(_dom) &&                                        \

The second clause here implies the first.  Given that all we're trying
to say here is "are there Xen entries to skip", I think we'd be fine
dropping the external() check entirely.

> +         mfn_to_page(_sl2mfn)->u.sh.type != SH_type_l2_64_shadow )          \
> +        end_ = COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom);                    \
> +    for ( i_ = 0; i_ < end_; ++i_ )                                         \
>      {                                                                       \
> -        if ( (!(_xen))                                                      \
> -             || !is_pv_32bit_domain(_dom)                                   \
> -             || mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow     \
> -             || (_i < COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom)) )           \
> -        {                                                                   \
> -            (_sl2e) = _sp + _i;                                             \
> -            if ( shadow_l2e_get_flags(*(_sl2e)) & _PAGE_PRESENT )           \
> -                {_code}                                                     \
> -            if ( _done ) break;                                             \
> -            increment_ptr_to_guest_entry(_gl2p);                            \
> -        }                                                                   \
> +        (_sl2e) = _sp + i_;                                                 \
> +        if ( shadow_l2e_get_flags(*(_sl2e)) & _PAGE_PRESENT )               \
> +            { _code }                                                       \

This doesn't match either of our two styles.

if ( ... )
{ _code }

would be closer to Xen's normal style, but ...

> +        if ( _done ) break;                                                 \

... with this too, I think it would still be better to write it out
fully, so:

if ( ... )
{
    _code
}
if ( _done )
    break;

These macros are already big enough that trying to save 3 lines seems
futile.

~Andrew

> +        increment_ptr_to_guest_entry(_gl2p);                                \

P.S. I'm pretty sure I had encountered this before, but I'm re-reminded
of how much this function seems like a bad idea, even in the context of
this macroset taking arbitrary _done and _code blobs...


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 19:00:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 19:00:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479756.743798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHrBg-0007HA-JZ; Tue, 17 Jan 2023 19:00:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479756.743798; Tue, 17 Jan 2023 19:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHrBg-0007H3-Gm; Tue, 17 Jan 2023 19:00:00 +0000
Received: by outflank-mailman (input) for mailman id 479756;
 Tue, 17 Jan 2023 18:59:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHrBf-0007Gt-Ks; Tue, 17 Jan 2023 18:59:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHrBf-0000Xe-HX; Tue, 17 Jan 2023 18:59:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHrBf-0002kE-7S; Tue, 17 Jan 2023 18:59:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHrBf-0006Z1-76; Tue, 17 Jan 2023 18:59:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eEbfJlovUlK3i/thozeKpzVGP8h3hRK/+xddSGaSfO4=; b=Gxt+LzOsUdlVs5NOAKX5tQHn4V
	xSKCtPfohKZLIG0oCXgzMCcybCDG9aRH74XKdXDg1IGXeJ88FcCzl9JpBrtnxf6UXFrgOOVi9scu/
	CtpQeFGGKsK/5aKvIv4S8+fJlanmbddibtuVQxpOB1RnkRQc2XTBDz589pDbN/bRl4Hk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175936-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175936: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 18:59:59 +0000

flight 175936 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175936/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    5 days
Failing since        175748  2023-01-12 20:01:56 Z    4 days   22 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    3 days   20 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header requires to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 19:06:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 19:06:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479764.743808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHrHr-0000J3-9A; Tue, 17 Jan 2023 19:06:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479764.743808; Tue, 17 Jan 2023 19:06:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHrHr-0000Iw-6S; Tue, 17 Jan 2023 19:06:23 +0000
Received: by outflank-mailman (input) for mailman id 479764;
 Tue, 17 Jan 2023 19:06:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHrHp-0000Iq-Qi
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 19:06:21 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 03ad1cd2-969a-11ed-b8d0-410ff93cb8f0;
 Tue, 17 Jan 2023 20:06:16 +0100 (CET)
Received: from mail-dm3nam02lp2041.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 14:06:08 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6439.namprd03.prod.outlook.com (2603:10b6:a03:38d::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 19:06:06 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 19:06:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03ad1cd2-969a-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673982376;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=EVjnqygt9lDMeGJvS96PnGRQazzQEO6BqfdcIli78ZM=;
  b=aZSb3scqsYN573EegvKhsbqozGU6ECWlZ4J0dMI3Zkl6lhL/+j/Ogq/w
   +YopYLJcT3596288C7cYzNl4FlH/Gv0IezKXZ16SgF/j/MnpsmQhvXrKI
   c3Duu2sc8zSKkT3huKb/2MZOeyrhfwheNO6FrNsDrP4e4/BzLbNECYsU8
   M=;
X-IronPort-RemoteIP: 104.47.56.41
X-IronPort-MID: 92500914
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="92500914"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EVjnqygt9lDMeGJvS96PnGRQazzQEO6BqfdcIli78ZM=;
 b=IswvgmxzXse8BT3iCp1bO6TRUiTdIteas4ZdScgQ130UjA10HT7NPytGbZrGEw67mOBuB3YGRVOTxBgNVt9NXZ/P2VgM1Cg1nccOGD+Sj++3lxpiOD56i9w1ElSkI6uz/vqISCoyuUmHpXrf7XzmdfmCuGFLVtxJb9Xh2+D3TNU=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v2 5/9] x86/shadow: L2H shadow type is PV32-only
Thread-Topic: [PATCH v2 5/9] x86/shadow: L2H shadow type is PV32-only
Thread-Index: AQHZJcQ2cWKPiq6iKkSag94orDthPq6jAfaA
Date: Tue, 17 Jan 2023 19:06:06 +0000
Message-ID: <af3ad943-de62-66c0-2a9e-cecf2356581a@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <af8ca228-473a-6777-4a4c-a474a5faec1f@suse.com>
In-Reply-To: <af8ca228-473a-6777-4a4c-a474a5faec1f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6439:EE_
x-ms-office365-filtering-correlation-id: 1aef8a6c-9e82-472a-6d24-08daf8bde2c0
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <EE35608B46E97A4C987C7D8AF516FF62@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 11/01/2023 1:54 pm, Jan Beulich wrote:
> Like for the various HVM-only types, save a little bit of code by suitably
> "masking" this type out when !PV32.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 19:13:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 19:13:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Date: Tue, 17 Jan 2023 19:13:27 +0000
Message-ID: <733edba4-6913-97a4-f949-4f8899a3bba9@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
 <c5d201ac-89ca-6baa-d685-5bef2497183f@citrix.com>
 <a438f16b-7d45-d7e4-2191-4ed7b2077785@suse.com>
 <71e7ba34-f1ea-16fd-ec01-bb2aa454270c@citrix.com>
 <b49793f8-55be-0746-815d-ab7b627d3baa@suse.com>
In-Reply-To: <b49793f8-55be-0746-815d-ab7b627d3baa@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 12/01/2023 10:42 am, Jan Beulich wrote:
> On 12.01.2023 11:31, Andrew Cooper wrote:
>> On 12/01/2023 9:47 am, Jan Beulich wrote:
>>> On 12.01.2023 00:15, Andrew Cooper wrote:
>>>> On 11/01/2023 1:57 pm, Jan Beulich wrote:
>>>>> Make HVM=y release build behavior prone against array overrun, by
>>>>> (ab)using array_access_nospec(). This is in particular to guard against
>>>>> e.g. SH_type_unused making it here unintentionally.
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>> ---
>>>>> v2: New.
>>>>>
>>>>> --- a/xen/arch/x86/mm/shadow/private.h
>>>>> +++ b/xen/arch/x86/mm/shadow/private.h
>>>>> @@ -27,6 +27,7 @@
>>>>>  // been included...
>>>>>  #include <asm/page.h>
>>>>>  #include <xen/domain_page.h>
>>>>> +#include <xen/nospec.h>
>>>>>  #include <asm/x86_emulate.h>
>>>>>  #include <asm/hvm/support.h>
>>>>>  #include <asm/atomic.h>
>>>>> @@ -368,7 +369,7 @@ shadow_size(unsigned int shadow_type)
>>>>>  {
>>>>>  #ifdef CONFIG_HVM
>>>>>      ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
>>>>> -    return sh_type_to_size[shadow_type];
>>>>> +    return array_access_nospec(sh_type_to_size, shadow_type);
>>>> I don't think this is warranted.
>>>>
>>>> First, if the commit message were accurate, then it's a problem for all
>>>> arrays of size SH_type_unused, yet you've only adjusted a single instance.
>>> Because I think the risk is higher here than for other arrays. In
>>> other cases we have suitable build-time checks (HASH_CALLBACKS_CHECK()
>>> in particular) which would trip upon inappropriate use of one of the
>>> types which are aliased to SH_type_unused when !HVM.
>>>
>>>> Secondly, if it were reliably 16 then we could do the basically-free
>>>> "type &= 15;" modification.  (It appears my change to do this
>>>> automatically hasn't been taken yet.), but we'll end up with lfence
>>>> variation here.
>>> How could anything be "reliably 16"? Such enums can change at any time:
>>> They did when making HVM types conditional, and they will again when
>>> adding types needed for 5-level paging.
>>>
>>>> But the value isn't attacker controlled.  shadow_type always comes from
>>>> Xen's metadata about the guest, not the guest itself.  So I don't see
>>>> how this can conceivably be a speculative issue even in principle.
>>> I didn't say anything about there being a speculative issue here. It
>>> is for this very reason that I wrote "(ab)using".
>> Then it is entirely wrong to be using a nospec accessor.
>>
>> Speculation problems are subtle enough, without false uses of the safety
>> helpers.
>>
>> If you want to "harden" against runtime architectural errors, you want
>> to up the ASSERT() to a BUG(), which will execute faster than sticking
>> an hiding an lfence in here, and not hide what is obviously a major
>> malfunction in the shadow pagetable logic.
> I should have commented on this earlier already: What lfence are you
> talking about?

The one I thought was hiding under array_access_nospec(), but I forgot
we'd done the sbb trick.

> As to BUG() - the goal here specifically is to avoid a
> crash in release builds, by forcing the function to return zero (via
> having it use array slot zero for out of range indexes).

I'm very uneasy about having something this deep inside a component,
which ASSERT()s correctness doing something custom like this "just to
avoid crashing".

There is either something important which makes this more likely than
most to go wrong at runtime, or there is not.  And honestly, I can't see
why it is more risky at runtime.

If we really do need to clamp it, then we need a brand new helper with a
name that doesn't reference speculation at all.  It's fine for *_nospec
to reuse this helper, stating the safety of doing so, but at a code
level there need to be two appropriately named helpers for their two
logically-unrelated purposes.

~Andrew
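[Editorial note: the clamping behaviour debated in this thread, and the "sbb trick" that lets the accessor avoid an lfence, can be sketched in portable C. This is only an illustration of the idea, not Xen's actual implementation (the real array_index_mask_nospec() uses a cmp/sbb assembly sequence); the names index_mask(), clamp_index(), type_to_size[] and lookup_size() are invented for this sketch.]

```c
#include <stddef.h>

/*
 * Illustrative stand-in (NOT Xen's helper): produce an all-ones mask
 * when idx is in range and zero otherwise, using arithmetic rather
 * than a branch, mimicking what a cmp/sbb sequence computes.
 */
static size_t index_mask(size_t idx, size_t size)
{
    /* 0 - 1 wraps to all-ones; 0 - 0 stays zero. */
    return (size_t)0 - (size_t)(idx < size);
}

/* ANDing with the mask forces any out-of-range index down to slot 0. */
static size_t clamp_index(size_t idx, size_t size)
{
    return idx & index_mask(idx, size);
}

/* Toy stand-in for sh_type_to_size[]: slot 0 deliberately holds 0. */
static const unsigned char type_to_size[] = { 0, 1, 2, 4 };

static unsigned int lookup_size(size_t type)
{
    return type_to_size[clamp_index(type, sizeof(type_to_size))];
}
```

An in-range index passes through unchanged (the mask is all-ones), while an out-of-range index is ANDed down to slot zero, which is how the patch makes shadow_size() return the zero stored there instead of overrunning the array; because the mask is computed arithmetically there is no mispredictable branch, hence no lfence.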


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 19:28:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 19:28:51 +0000
From: Per Bilse <Per.Bilse@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH] Create a Kconfig option to set preferred reboot
 method
Date: Tue, 17 Jan 2023 19:28:19 +0000
Message-ID: <348dff00-5ae4-5dc2-64d4-d52409a22283@citrix.com>
References: <20230116172102.43469-1-per.bilse@citrix.com>
 <f7e7b6ea-5bc1-ba2b-5d21-eb431ecff53a@suse.com>
In-Reply-To: <f7e7b6ea-5bc1-ba2b-5d21-eb431ecff53a@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <3ABD85C1A18112489BFECE66480DA565@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN6PR03MB3378.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 30f7599f-baa9-4276-0ed7-08daf8c0fd7f
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 19:28:19.6249
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6873

On 17/01/2023 15:55, Jan Beulich wrote:
> On 16.01.2023 18:21, Per Bilse wrote:
>> +config REBOOT_SYSTEM_DEFAULT
>> +	default y
>> +	bool "Xen-defined reboot method"
> 
> May I ask that you swap the two lines? (Personally I consider kconfig
> too forgiving here - it doesn't make a lot of sense to set a default
> when the type isn't known yet.)

Certainly, I spotted it myself after sending, and was kicking myself
for not seeing it earlier.

>> +choice
>> +	bool "Choose reboot method"
>> +	depends on !REBOOT_SYSTEM_DEFAULT
>> +	default REBOOT_METHOD_ACPI
>> +	help
>> +		This is a compiled-in alternative to specifying the
>> +		reboot method on the Xen command line.  Specifying a
>> +		method on the command line will override this choice.
>> +
>> +		warm    Don't set the cold reboot flag
>> +		cold    Set the cold reboot flag
> 
> These two are modifiers, not reboot methods on their own. They set a
> field in the BDA to tell the BIOS how much initialization / checking
> to do (in the legacy case). Therefore they shouldn't be selectable
> right here. If you feel like it you could add another boolean to
> control that default.

My understanding is that it was desired to provide a compiled-in
equivalent of the command line 'reboot=' option (which includes
cold and warm), but this may be a misunderstanding.  I can separate
these two out.

>> +	config REBOOT_METHOD_WARM
>> +		bool "warm"
>> +		help
>> +			Don't set the cold reboot flag.
> 
> I don't think help text is very useful in sub-items of a choice. Plus
> you also explain each item above.

I thought it better to err on the side of caution.  The help texts will
appear at two different menu levels, but I can remove it.

>> +	config REBOOT_METHOD_XEN
>> +		bool "xen"
>> +		help
>> +			Use Xen SCHEDOP hypercall (if running under Xen as a guest).
> 
> This wants to depend on XEN_GUEST, doesn't it?

Yes, depending on context.  In providing a compiled-in equivalent
of the command-line parameter, it should arguably provide and accept
the same set of options, but I'll change it.

>> +	default "x" if REBOOT_METHOD_XEN
> 
> I think it would be neater (and more forward compatible) if the strings
> were actually spelled out here. We may, at any time, make set_reboot_type()
> look at more than just the first character.

OK.

>> +#ifdef CONFIG_REBOOT_SYSTEM_DEFAULT
>>       default_reboot_type();
>>       dmi_check_system(reboot_dmi_table);
>> +#else
>> +    set_reboot_type(CONFIG_REBOOT_METHOD);
>> +#endif
> 
> I don't think you should eliminate the use of DMI - that's machine
> specific information which should imo still overrule a build-time default.
> See also the comment just out of context - it's a difference whether the
> override came from the command line or was set at build time.

OK; again, it's a slightly different take on the purpose than I had, but
I can change it.  Also for the rest.

Best,

   -- Per
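
[Editor's note: for reference, the attribute reordering requested in the
review above (declaring the option's type before its default) would look
roughly like the following sketch; illustrative only, not the final patch.]

```kconfig
config REBOOT_SYSTEM_DEFAULT
	bool "Xen-defined reboot method"
	default y
```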


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 19:30:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 19:30:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479785.743842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHrf7-0004hW-Pk; Tue, 17 Jan 2023 19:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479785.743842; Tue, 17 Jan 2023 19:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHrf7-0004hP-Ml; Tue, 17 Jan 2023 19:30:25 +0000
Received: by outflank-mailman (input) for mailman id 479785;
 Tue, 17 Jan 2023 19:30:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHrf7-0004hD-8T; Tue, 17 Jan 2023 19:30:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHrf7-0001Fr-6Y; Tue, 17 Jan 2023 19:30:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHrf6-0003YM-Rt; Tue, 17 Jan 2023 19:30:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHrf6-00086H-RS; Tue, 17 Jan 2023 19:30:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VSjK9dIEbof7H088Xv1oB5lDAfx8uDIYzrtBfk98WzI=; b=oLODVCkd6QaJPtEHENbI1e4Szm
	RKKWDfaiodEPBGv26hEtfqF3tS9SzKobch7+OsFytO3Abs0a7PuvJM6l4QWPKLSCWEXGdTO43TquZ
	WmzjsF1v8F8nTFhzWNEJgxdUNMmc6MPGo8ef+zaUrEkmEthqxJ5V41Asxv8NiIUM79CA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175943-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175943: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=015a001b03db14f791476f817b8b125b195b6d10
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 19:30:24 +0000

flight 175943 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175943/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 015a001b03db14f791476f817b8b125b195b6d10
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   34 attempts
Testing same since   175940  2023-01-17 16:40:41 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 424 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 20:19:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 20:19:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479793.743852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHsQM-0000hy-F7; Tue, 17 Jan 2023 20:19:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479793.743852; Tue, 17 Jan 2023 20:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHsQM-0000hr-C5; Tue, 17 Jan 2023 20:19:14 +0000
Received: by outflank-mailman (input) for mailman id 479793;
 Tue, 17 Jan 2023 20:19:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHsQL-0000hl-Bx
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 20:19:13 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 320ff34c-96a4-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 21:19:10 +0100 (CET)
Received: from mail-dm3nam02lp2046.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 15:19:06 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5103.namprd03.prod.outlook.com (2603:10b6:208:1aa::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 20:19:04 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 20:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 320ff34c-96a4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673986750;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=jFAI3Ux6qPtissa7plSCh2AfaIBdW9fgTpRfbLWjBIc=;
  b=bYXHSp7jLDCjNd+06+2zmhIT7USlb8KuIcrzarNsD2f/QbXK92HhiPJ+
   Fe+NqrCYg5C9NS7QMm7HKS1tnxOozd1TwKHs+Sk1rd6fkq5fIi0ybqylb
   ViaVY2MuQ0UuRFzbvQVdPtL/Yfk/FnqS1ERJIVVQ1RYPCiSBq5U6bHEPo
   s=;
X-IronPort-RemoteIP: 104.47.56.46
X-IronPort-MID: 93031451
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="93031451"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XOTfZzPwLZG5YoWQ2I/SrXPQMVAS2SPALjIs1+cl2im67eOCC8t2+JhDbz6l66LBJGBpUiBBfiLsY/FW2kx985pgrGXHSDyaNftL7r7g8BnCubf6dx19/M29Tj0vwzFAjCBTx95VjL3oHl6PECB3E/BSmS8yLs+vDPLwiiU9R/xzrnxMCoXFjE3RsbN66ZKwUjIZLTrMup6+4VgMYRxxwMPg/oe3IidOr2MHwAbgZZwnl0XHXFxRj0Y/1xkQMCqCJCNYhHbjs/wYbzvN+dgd7Xk4PaZmqxVqIcUQNE7BYXYgQXL40C0nogtEbKC+4vfT8SsvOPfGKFRqks2wdkmfcQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jFAI3Ux6qPtissa7plSCh2AfaIBdW9fgTpRfbLWjBIc=;
 b=LN6Wm8Ykc9VcCoif1agvb1/vTQ+rQmkSj3kPIqvz4RYEnY8CQky8J8qEUzMMUO1OIerZMPqczvdsWkNqKjCT1Ql6pXsgTF6NassUvcpKhef5vFQSJe5Wins5KcwI+tGg6xEuZc2ck5ciS6e04uJkWlVKt10UbhutPpX44pABTTTgJ5RPYywwYbMr/bJQgDBhmpWlOaESPiGL524x6ANHbhEknGELgKXcMaqR2s6x12msJCDHhL48HFT8X4wcGvFS6Qi2XbgKOuOVndqsBaMlIOIDcJJFeKOQ3r0y1jVhQ73eQCAXiY1amkJf3bok1dAmg1ITIN+mz3cXHr3fi00UCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jFAI3Ux6qPtissa7plSCh2AfaIBdW9fgTpRfbLWjBIc=;
 b=aNywLqWnlZU/jXECy1XVcYzM1YTxvxfS4EbNRMyxTH+OKRkE6uC4i2fhvxwSjDmctOXufddUH3L2x9MCcVb/Cbvo5mYxFQF4CBBcxQJHFU39bp46v1bXOUxLU5gUImofIY/RY4KJnA6EZdOHZ43KmDdyX9vsRrXy9TJwOGX+68Q=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau
 Monne <roger.pau@citrix.com>
Subject: Re: [PATCH 02/10] x86: split populating of struct vcpu_time_info into
 a separate function
Thread-Topic: [PATCH 02/10] x86: split populating of struct vcpu_time_info
 into a separate function
Thread-Index: AQHY4438eIfEX/gnK0SRRrhvSEeRJK6jmsaA
Date: Tue, 17 Jan 2023 20:19:04 +0000
Message-ID: <18016404-c3e3-8b99-569d-b6f786635a2c@citrix.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <3ff84ed1-8143-15c2-2b4a-3ae8ef23677c@suse.com>
In-Reply-To: <3ff84ed1-8143-15c2-2b4a-3ae8ef23677c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MN2PR03MB5103:EE_
x-ms-office365-filtering-correlation-id: faddef7b-17be-4e5e-2856-08daf8c8143e
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <807745850B08CF49A546B7EED6ADC786@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: faddef7b-17be-4e5e-2856-08daf8c8143e
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 20:19:04.2799
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5103

On 19/10/2022 8:39 am, Jan Beulich wrote:
> This is to facilitate subsequent re-use of this code.
>
> While doing so add const in a number of places, extending to
> gtime_to_gtsc() and then for symmetry also its inverse function.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper@citrix.com>

> ---
> I was on the edge of also folding the various is_hvm_domain() into a
> function scope boolean, but then wasn't really certain that this
> wouldn't open up undue speculation opportunities.

I can't see anything interesting under here speculation wise.
Commentary inline.

>
> --- a/xen/arch/x86/include/asm/time.h
> +++ b/xen/arch/x86/include/asm/time.h
> @@ -52,8 +52,8 @@ uint64_t cf_check acpi_pm_tick_to_ns(uin
>  uint64_t tsc_ticks2ns(uint64_t ticks);
>  
>  uint64_t pv_soft_rdtsc(const struct vcpu *v, const struct cpu_user_regs *regs);
> -u64 gtime_to_gtsc(struct domain *d, u64 time);
> -u64 gtsc_to_gtime(struct domain *d, u64 tsc);
> +uint64_t gtime_to_gtsc(const struct domain *d, uint64_t time);
> +uint64_t gtsc_to_gtime(const struct domain *d, uint64_t tsc);
>  
>  int tsc_set_info(struct domain *d, uint32_t tsc_mode, uint64_t elapsed_nsec,
>                   uint32_t gtsc_khz, uint32_t incarnation);
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1373,18 +1373,14 @@ uint64_t tsc_ticks2ns(uint64_t ticks)
>      return scale_delta(ticks, &t->tsc_scale);
>  }
>  
> -static void __update_vcpu_system_time(struct vcpu *v, int force)
> +static void collect_time_info(const struct vcpu *v,
> +                              struct vcpu_time_info *u)
>  {
> -    const struct cpu_time *t;
> -    struct vcpu_time_info *u, _u = {};
> -    struct domain *d = v->domain;
> +    const struct cpu_time *t = &this_cpu(cpu_time);
> +    const struct domain *d = v->domain;
>      s_time_t tsc_stamp;
>  
> -    if ( v->vcpu_info == NULL )
> -        return;
> -
> -    t = &this_cpu(cpu_time);
> -    u = &vcpu_info(v, time);
> +    memset(u, 0, sizeof(*u));
>  
>      if ( d->arch.vtsc )
>      {
> @@ -1392,7 +1388,7 @@ static void __update_vcpu_system_time(st
>  
>          if ( is_hvm_domain(d) )
>          {
> -            struct pl_time *pl = v->domain->arch.hvm.pl_time;
> +            const struct pl_time *pl = d->arch.hvm.pl_time;

A PV guest could in principle use...

>  
>              stime += pl->stime_offset + v->arch.hvm.stime_offset;

... this pl->stime_offset as the second dereference of a whatever happens
to sit under d->arch.hvm.pl_time in the pv union.

In the current build of Xen I have to hand, that's
d->arch.pv.mapcache.{epoch,tlbflush_timestamp}, the combination of which
doesn't seem like it can be steered into being a legal pointer into Xen.

>              if ( stime >= 0 )
> @@ -1403,27 +1399,27 @@ static void __update_vcpu_system_time(st
>          else
>              tsc_stamp = gtime_to_gtsc(d, stime);
>  
> -        _u.tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
> -        _u.tsc_shift         = d->arch.vtsc_to_ns.shift;
> +        u->tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
> +        u->tsc_shift         = d->arch.vtsc_to_ns.shift;
>      }
>      else
>      {
>          if ( is_hvm_domain(d) && hvm_tsc_scaling_supported )

On the other hand, this isn't safe.  There's no protection of the &&
calculation, but...

>          {
>              tsc_stamp            = hvm_scale_tsc(d, t->stamp.local_tsc);

... this path is the only path subject to speculative type confusion,
and all it does is read d->arch.hvm.tsc_scaling_ratio, so is
appropriately protected in this instance.

Also, all an attacker could do is encode the scaling ratio alongside
t->stamp.local_tsc (unpredictable) in the calculation for the duration
of the speculative window, with no way I can see to dereference the result.


> -            _u.tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
> -            _u.tsc_shift         = d->arch.vtsc_to_ns.shift;
> +            u->tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
> +            u->tsc_shift         = d->arch.vtsc_to_ns.shift;
>          }
>          else
>          {
>              tsc_stamp            = t->stamp.local_tsc;
> -            _u.tsc_to_system_mul = t->tsc_scale.mul_frac;
> -            _u.tsc_shift         = t->tsc_scale.shift;
> +            u->tsc_to_system_mul = t->tsc_scale.mul_frac;
> +            u->tsc_shift         = t->tsc_scale.shift;
>          }
>      }
>  
> -    _u.tsc_timestamp = tsc_stamp;
> -    _u.system_time   = t->stamp.local_stime;
> +    u->tsc_timestamp = tsc_stamp;
> +    u->system_time   = t->stamp.local_stime;
>  
>      /*
>       * It's expected that domains cope with this bit changing on every
> @@ -1431,10 +1427,21 @@ static void __update_vcpu_system_time(st
>       * or if it further requires monotonicity checks with other vcpus.
>       */
>      if ( clocksource_is_tsc() )
> -        _u.flags |= XEN_PVCLOCK_
VFNDX1NUQUJMRV9CSVQ7DQo+ICsgICAgICAgIHUtPmZsYWdzIHw9IFhFTl9QVkNMT0NLX1RTQ19T
VEFCTEVfQklUOw0KPiAgDQo+ICAgICAgaWYgKCBpc19odm1fZG9tYWluKGQpICkNCj4gLSAgICAg
ICAgX3UudHNjX3RpbWVzdGFtcCArPSB2LT5hcmNoLmh2bS5jYWNoZV90c2Nfb2Zmc2V0Ow0KPiAr
ICAgICAgICB1LT50c2NfdGltZXN0YW1wICs9IHYtPmFyY2guaHZtLmNhY2hlX3RzY19vZmZzZXQ7
DQoNClRoaXMgcGF0aCBpcyBzdWJqZWN0IHRvIHR5cGUgY29uZnVzaW9uIG9uIHYtPmFyY2gue3B2
LGh2bX0sIHdpdGggYSBQVg0KZ3Vlc3QgYWJsZSB0byBlbmNvZGUgdGhlIHZhbHVlIG9mIHYtPmFy
Y2gucHYuY3RybHJlZ1s1XSBpbnRvIHRoZQ0KdGltZXN0YW1wLsKgIEJ1dCBhZ2Fpbiwgbm8gd2F5
IHRvIGRlcmVmZXJlbmNlIHRoZSByZXN1bHQuDQoNCg0KSSByZWFsbHkgZG9uJ3QgdGhpbmsgdGhl
cmUncyBlbm91Z2ggZmxleGliaWxpdHkgaGVyZSBmb3IgZXZlbiBhDQpwZXJmZWN0bHktdGltZWQg
YXR0YWNrZXIgdG8gYWJ1c2UuDQoNCn5BbmRyZXcNCg==


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 20:22:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 20:22:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479801.743864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHsTk-00027Q-1x; Tue, 17 Jan 2023 20:22:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479801.743864; Tue, 17 Jan 2023 20:22:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHsTj-00027J-Td; Tue, 17 Jan 2023 20:22:43 +0000
Received: by outflank-mailman (input) for mailman id 479801;
 Tue, 17 Jan 2023 20:22:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHsTi-000279-Qk; Tue, 17 Jan 2023 20:22:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHsTi-0002TV-P6; Tue, 17 Jan 2023 20:22:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHsTi-0004uJ-Ej; Tue, 17 Jan 2023 20:22:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHsTi-0006XP-EH; Tue, 17 Jan 2023 20:22:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3y3kNi/+qnbq7P4tTA03tmrDs1sN+zgOfGfKv1JWNgQ=; b=3Ade3u7iX9enXeA+gPl9WAiJMZ
	5KIVzh+f6ZqOpD7+cJe2DC+VVK7xWyyfl18Tz+HowOOtgQLCH9PDX+lAByE+8RRR4VgptHjxre1LQ
	L+mani5wCdUbrwoaFCk8xjX/KK0J9+VOu9DPNs6wAogmhbkhteEQo75ojgFPSuNqIfI0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175945-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175945: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ea382b3b21ef6664d5850dfbe08793f77d10c15d
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 20:22:42 +0000

flight 175945 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175945/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ea382b3b21ef6664d5850dfbe08793f77d10c15d
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   35 attempts
Testing same since   175945  2023-01-17 19:40:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>
  Oliver Steffen <osteffen@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 718 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 20:32:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 20:32:12 +0000
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau
 Monne <roger.pau@citrix.com>
Subject: Re: [PATCH RFC 05/10] x86: update GADDR based secondary time area
Date: Tue, 17 Jan 2023 20:31:49 +0000
Message-ID: <d0186b8a-652c-60a2-dc27-50c54d9221f4@citrix.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <7d814375-c190-ae0b-793b-a8563a23d318@suse.com>
In-Reply-To: <7d814375-c190-ae0b-793b-a8563a23d318@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"

On 19/10/2022 8:41 am, Jan Beulich wrote:
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1462,12 +1462,34 @@ static void __update_vcpu_system_time(st
>          v->arch.pv.pending_system_time = _u;
>  }
>  
> +static void write_time_guest_area(struct vcpu_time_info *map,
> +                                  const struct vcpu_time_info *src)
> +{
> +    /* 1. Update userspace version. */
> +    write_atomic(&map->version, src->version);

version_update_begin()

~Andrew

> +    smp_wmb();
> +
> +    /* 2. Update all other userspace fields. */
> +    *map = *src;
> +
> +    /* 3. Update userspace version again. */
> +    smp_wmb();
> +    write_atomic(&map->version, version_update_end(src->version));
> +}
> +
>  bool update_secondary_system_time(struct vcpu *v,
>                                    struct vcpu_time_info *u)
>  {
>      XEN_GUEST_HANDLE(vcpu_time_info_t) user_u = v->arch.time_info_guest;
> +    struct vcpu_time_info *map = v->arch.time_guest_area.map;
>      struct guest_memory_policy policy = { .nested_guest_mode = false };
>  
> +    if ( map )
> +    {
> +        write_time_guest_area(map, u);
> +        return true;
> +    }
> +
>      if ( guest_handle_is_null(user_u) )
>          return true;
>  
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 21:00:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 21:00:53 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175946-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175946: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ea382b3b21ef6664d5850dfbe08793f77d10c15d
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 21:00:40 +0000

flight 175946 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175946/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ea382b3b21ef6664d5850dfbe08793f77d10c15d
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   36 attempts
Testing same since   175945  2023-01-17 19:40:40 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>
  Oliver Steffen <osteffen@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 718 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 21:10:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 21:10:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479824.743896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHtDw-0008Q3-2y; Tue, 17 Jan 2023 21:10:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479824.743896; Tue, 17 Jan 2023 21:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHtDv-0008Pw-Vb; Tue, 17 Jan 2023 21:10:27 +0000
Received: by outflank-mailman (input) for mailman id 479824;
 Tue, 17 Jan 2023 21:10:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtDu-0008Pm-S7; Tue, 17 Jan 2023 21:10:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtDu-0003bR-Ol; Tue, 17 Jan 2023 21:10:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtDu-0006DC-AD; Tue, 17 Jan 2023 21:10:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtDu-0005ZC-9o; Tue, 17 Jan 2023 21:10:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zOc/z0WO7uv0WvLnoLNmaantf5aAzD+kl1SoudPPusY=; b=ck0UuE4gEPZ8oUSFiUKR54QFoo
	edhE8C7ZtWNoUOQHOFNK1IkDsBsV1dT2YXb737/ICvGQxqDRs088bpgB8HIsOUSqO6aykIwyNwzax
	j54wa0YcvPV7LTUr9pWwvgTh9SjOSj1b24v8GL5vmeAonReACMsX9JMKzNC5uNUKR2yo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175931-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175931: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-amd64-xsm:<job status>:broken:regression
    xen-unstable:build-i386:<job status>:broken:regression
    xen-unstable:build-i386-xsm:<job status>:broken:regression
    xen-unstable:build-amd64-xsm:host-build-prep:fail:regression
    xen-unstable:build-i386-xsm:host-build-prep:fail:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-arm64:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-i386:host-build-prep:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 21:10:26 +0000

flight 175931 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175931/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm                 <job status>                 broken
 build-i386                      <job status>                 broken
 build-i386-xsm                  <job status>                 broken
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175734
 build-i386-xsm                5 host-build-prep          fail REGR. vs. 175734
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-arm64                   6 xen-build                fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 build-i386                    5 host-build-prep          fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175734
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175734
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175734
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    5 days
Failing since        175739  2023-01-12 09:38:44 Z    5 days   14 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    4 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               broken  
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64-xsm broken
broken-job build-i386 broken
broken-job build-i386-xsm broken

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incremental
    (re-)builds are covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
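
The shape of the change can be sketched in plain C (names and array size
here are invented for illustration, not Xen's actual ones): one common
function walks the whole shadow_table[] array, and the zero check naturally
covers paging modes that populate fewer entries, so neither per-mode copies
nor function-pointer dispatch are needed.

```c
#include <assert.h>

/* Illustrative stand-ins; Xen's real structures differ. */
#define SHADOW_TABLE_ENTRIES 4

struct vcpu_sketch {
    unsigned long shadow_table[SHADOW_TABLE_ENTRIES]; /* MFNs; 0 == empty */
};

static unsigned int sh_detach_old_tables_sketch(struct vcpu_sketch *v)
{
    unsigned int i, detached = 0;

    for ( i = 0; i < SHADOW_TABLE_ENTRIES; i++ )
    {
        if ( !v->shadow_table[i] )  /* zero check copes with modes   */
            continue;               /* that use fewer valid entries  */
        v->shadow_table[i] = 0;     /* drop the reference */
        detached++;
    }

    return detached;
}
```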

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
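
The bounding argument can be sketched as follows. This uses a toy mixing
step and assumed constants (PADDR_BITS = 52, the PAGE_SHIFT adjustment,
and the hash itself are illustrative, not Xen's actual code): a GFN/MFN is
a frame number, so it fits in PADDR_BITS - PAGE_SHIFT = 40 bits, and only
5 bytes of the 8-byte value need folding into the hash.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values for illustration; a compile-time constant, unlike the
 * runtime paddr_bits. */
#define PADDR_BITS 52
#define PAGE_SHIFT 12
#define HASH_BYTES ((PADDR_BITS - PAGE_SHIFT + 7) / 8) /* 5, was sizeof(n) == 8 */

static unsigned int hash_sketch(uint64_t n)
{
    unsigned int hash = 0, i;

    /* Shift a byte out at a time, rather than converting the value to
     * an unsigned char array on the stack. */
    for ( i = 0; i < HASH_BYTES; i++ )
        hash = hash * 33 + ((n >> (i * 8)) & 0xff);

    return hash;
}
```

The upper bytes skipped by the shorter loop are always zero for any frame
number on such a system, so they cannot improve hash quality.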

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
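
As a hedged sketch of what "not visible to the user and enabled by default
if X86" looks like in Kconfig (an illustrative fragment, not the exact hunk
from the commit): an option defined without a prompt string never appears
in the configuration menus, so it stays pinned at its default.

```kconfig
# Illustrative sketch only; the real definitions and their location in
# the Xen tree may differ.
config AMD_IOMMU
	def_bool y if X86

config INTEL_IOMMU
	def_bool y if X86
```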

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 21:18:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 21:18:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479832.743908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHtLA-0000gs-WD; Tue, 17 Jan 2023 21:17:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479832.743908; Tue, 17 Jan 2023 21:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHtLA-0000gl-TP; Tue, 17 Jan 2023 21:17:56 +0000
Received: by outflank-mailman (input) for mailman id 479832;
 Tue, 17 Jan 2023 21:17:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHtL9-0000gf-Rf
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 21:17:55 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 66d42822-96ac-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 22:17:53 +0100 (CET)
Received: from mail-mw2nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 16:17:50 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5442.namprd03.prod.outlook.com (2603:10b6:208:291::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 21:17:48 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 21:17:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66d42822-96ac-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673990273;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=NJKwbt51lAIQ5yBjh6WMmX6f7pvIIpA15gy8NIXAXMQ=;
  b=YnLZYUVBobtUpP+4TQA3fFtbZPUB1kxu+Z9zbo0va1YhbzdMmGFQTD/b
   4zHD9HxlFEv52dZRXXXmYnOlR7emXrmSGR155vexUous6wyi6mjm+GpZR
   cQBytEoZS9y/wlNaGsgD3ef6RzYrc8qr6brSycpXVyUlDOZ96P08okTWr
   M=;
X-IronPort-RemoteIP: 104.47.55.105
X-IronPort-MID: 93100853
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="93100853"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wz62AltO+8ziIU9ONWiHHk99476slIL96X5He1iufQnV/90y4eRgS/zY6xBqBRFSLoyJJNnQ0iUeUanQmEERxgev66hwfr0LkDbcmtR4HrkM/O+Iwk/j5wJmRaT3nteWxvQ/THOlEqe59UytCrGbmY3IsCSAwveFm7ICK3s2/YPYQAh+jMuy9Erz9VZ+lUNJF4Q3c/3oix+N5x+VmuhYTRFyB/ysu1lyGc+rIica6HorkIXzYVhIF9W5mSd5c5V2ORfWbSis8mD5SINPQLOVjGriZpNrvWVISeOA+oWUdtS0L3wqPlReYn0bYVw2tLgPjRXsXHwjHtnKiNnEOVxQtQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NJKwbt51lAIQ5yBjh6WMmX6f7pvIIpA15gy8NIXAXMQ=;
 b=Ddrp3gmuRLnTlrqO5qeijHikWcQTaCXGhVvkPlcJEB7UhsSQnaHVzIkNu3n1UmTd2VrQruZbSSDNGwerSfS4PrAGCZGY7yMlBgeGnyYwhztBIl1LQvsWKHSoGnifqP4u4n+5/PNTyDL14WL4Jjb8oHKKoeLQkLYfxkwpu/W9nMSsK/XIMLI4o6BshxrBIfOeshDu0ooDd3trHfEuG0lh5K8ecvY9fZm3um7HxM0vMW7xdfyH/w3VHiJNbeSCNw9tZwbeYcOAESN2BJhJOhwTfmaDT6RIdDPEjdBdbp2zePplhR73gKdAoC4t9B0sWxl5KrrOwg5o5lL0rCZSZ0JiKA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NJKwbt51lAIQ5yBjh6WMmX6f7pvIIpA15gy8NIXAXMQ=;
 b=QzpLl+NYzPKDpTxadPXcAy8uKxwGCFBM75btAYXHeTrPKA63UO/9jEj5rwEiGcZjRufA1K65ARLFRwIAhoZYiB7MKdSJ43Wp6B++xgdwzDUgcS7cTNxlhH/ADccFPy2QjSnvkl4vR1L4meEiWkRqulD/d3s03LfiZ1/Ai4Fg5zo=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Roger Pau
 Monne <roger.pau@citrix.com>
Subject: Re: [PATCH RFC 03/10] domain: GADDR based shared guest area
 registration alternative - teardown
Thread-Topic: [PATCH RFC 03/10] domain: GADDR based shared guest area
 registration alternative - teardown
Thread-Index: AQHY444WbZhuHDQ41k2+L84eJy1YfK6jqy6A
Date: Tue, 17 Jan 2023 21:17:48 +0000
Message-ID: <8d002c7f-a532-e8e5-0a71-801af4712d47@citrix.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <214c9ec9-b948-1ca6-24d6-4e7f8852ac45@suse.com>
In-Reply-To: <214c9ec9-b948-1ca6-24d6-4e7f8852ac45@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BLAPR03MB5442:EE_
x-ms-office365-filtering-correlation-id: 6f62cd35-78b3-4b20-9a81-08daf8d048bd
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <3BE696DEB5DA93409095A4AC0137A143@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f62cd35-78b3-4b20-9a81-08daf8d048bd
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 21:17:48.3149
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 0FKC8pv8WtvSdeACF5MTerInfZ3ECLMiltONxX56E4hqcYfDSVeWlf1KWtL43IxAkc+wYdmUuTZa5rdcpbRrn8vO/4oIbNzt86G0iEBPFoQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5442

On 19/10/2022 8:40 am, Jan Beulich wrote:
> In preparation of the introduction of new vCPU operations allowing to
> register the respective areas (one of the two is x86-specific) by
> guest-physical address, add the necessary domain cleanup hooks.

What are the two areas you're discussing here?

I assume you mean VCPUOP_register_vcpu_time_memory_area to be the
x86-specific op, but ARM permits both VCPUOP_register_vcpu_info and
VCPUOP_register_runstate_memory_area.

So isn't it more 2+1 rather than 1+1?

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> RFC: Zapping the areas in pv_shim_shutdown() may not be strictly
>      necessary: Aiui unmap_vcpu_info() is called only because the vCPU
>      info area cannot be re-registered. Beyond that I guess the
>      assumption is that the areas would only be re-registered as they
>      were before. If that's not the case I wonder whether the guest
>      handles for both areas shouldn't also be zapped.

At this point in pv_shim_shutdown(), we have already come out of suspend
(successfully) and are preparing to poke the PV guest out of suspend too.

The PV guest needs to not have its subsequent VCPUOP_register_vcpu_info
call fail, but beyond that I can entirely believe that it was coded to
"Linux doesn't crash" rather than "what's everything we ought to reset
here" - recall that we weren't exactly flush with time when trying to
push Shim out of the door.

Whatever does get reregistered will be reregistered at the same
position. No guest at all is going to have details like that dynamic
across migrate.

As a tangential observation, I see the periodic timer gets rearmed.
This is still one of the more insane default properties of a PV guest;
Linux intentionally clobbers it on boot, but I can't see equivalent
logic to re-clobber after resume.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 21:42:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 21:42:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479838.743918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHtis-0004VO-RW; Tue, 17 Jan 2023 21:42:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479838.743918; Tue, 17 Jan 2023 21:42:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHtis-0004VH-Og; Tue, 17 Jan 2023 21:42:26 +0000
Received: by outflank-mailman (input) for mailman id 479838;
 Tue, 17 Jan 2023 21:42:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtir-0004V5-0i; Tue, 17 Jan 2023 21:42:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtiq-0004H9-Te; Tue, 17 Jan 2023 21:42:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtiq-00072q-DJ; Tue, 17 Jan 2023 21:42:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtiq-0001GM-Cr; Tue, 17 Jan 2023 21:42:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vHmQHb+9IMdcfUKlLgELUvJXJOzVz780yurnjpQ3phQ=; b=YEPoEztSSdUKl7JwmTraZJRdYN
	WwEkgysb6DqAdrsEXsK87ECSXXh5MIFFfjx8eGXktZZE3vK8pRuK7Apdjg8r02fAUowEQX6MqO8rF
	rvBgWBA6WhMHSf6JKqNxs7FaZwK5ZzjYWZaI5Qx3r8/wyAEsCRHuqJEUrtapLrdkjyZU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175944-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175944: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 21:42:24 +0000

flight 175944 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175944/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    5 days
Failing since        175748  2023-01-12 20:01:56 Z    5 days   23 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    3 days   21 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
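
The ambiguity being fixed can be shown with generic C (this is plain
printf-style formatting for illustration, not the actual construct_domU()
printk): a hex size printed without the prefix, such as "10000000", reads
like a decimal value.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Format a size in hex with an explicit 0x prefix so it cannot be
 * mistaken for decimal. */
static void format_size(char *buf, size_t len, uint64_t size)
{
    snprintf(buf, len, "0x%" PRIx64, size);
}
```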

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
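
The shortcut described amounts to a linker-script assertion along these
lines (an illustrative fragment with assumed symbol names, not the exact
hunk from xen.lds.S):

```ld
/* Sketch only: symbol names are assumptions.  Checking the whole section
 * against the page size also covers the literal pool at its end. */
ASSERT(_stext_header_end - _stext_header_start <= PAGE_SIZE,
       ".text.header does not fit in a single page")
```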

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. on SPR
    and later, model_specific_lbr will always be NULL, so we must make
    changes to avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removes the final instances of obfuscation via
    the DO() macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 21:59:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 21:59:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479846.743929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHtzF-0006I2-AZ; Tue, 17 Jan 2023 21:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479846.743929; Tue, 17 Jan 2023 21:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHtzF-0006Hv-7o; Tue, 17 Jan 2023 21:59:21 +0000
Received: by outflank-mailman (input) for mailman id 479846;
 Tue, 17 Jan 2023 21:59:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtzD-0006Hl-Uj; Tue, 17 Jan 2023 21:59:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtzD-0004gk-J2; Tue, 17 Jan 2023 21:59:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtzD-0007Uy-7O; Tue, 17 Jan 2023 21:59:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHtzD-00009f-70; Tue, 17 Jan 2023 21:59:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XEjIRjTK80fSJSlmHw9m+qtsJSNi7rg0wTjeE2ES+3M=; b=ypHO+B902CMNJH03yJrUe5HXFT
	zJp64VbIb9H/I2VJYzcvK4GIav8cXJykF7KVFnggHlO7K45Y/MoElpRjupwQGqF0l1M/EQWHoKB73
	N+/7wcdq5ujcJzNkb3/nlDfM021a+cffO9055/yK1KHgLEj6WibPc7wRnPDA/5nQuiXE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175947-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175947: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ea382b3b21ef6664d5850dfbe08793f77d10c15d
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 21:59:19 +0000

flight 175947 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175947/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ea382b3b21ef6664d5850dfbe08793f77d10c15d
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   37 attempts
Testing same since   175945  2023-01-17 19:40:40 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>
  Oliver Steffen <osteffen@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 718 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:03:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:03:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479855.743941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHu3a-0007nQ-0m; Tue, 17 Jan 2023 22:03:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479855.743941; Tue, 17 Jan 2023 22:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHu3Z-0007nJ-UE; Tue, 17 Jan 2023 22:03:49 +0000
Received: by outflank-mailman (input) for mailman id 479855;
 Tue, 17 Jan 2023 22:03:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHu3Y-0007nB-LN
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 22:03:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHu3W-0004nT-Am; Tue, 17 Jan 2023 22:03:46 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHu3W-0008V9-5z; Tue, 17 Jan 2023 22:03:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=Tr0Mvs1nz7nw/iWg1OqAOwcbB4DZU8RC0rcQMqMT5SM=; b=T1NrI8RGiAKZv8jLoFvw3ILhjb
	ZqyRptTZxm6tkbSdllq5VjYsAs8dizt134ydG3wXtD4DFd5MS6jf/c4L5YYy9fYUEQvAbaRDUcC1h
	nFODSSt+I8ee9m3ta3XDW7Ecle2rG7/U21ZOISi7GRGGlJIZBfgTXTyWRDqrjoeG8Vi0=;
Message-ID: <19a0c39c-31b3-ce9c-6f03-466b6109b88f@xen.org>
Date: Tue, 17 Jan 2023 22:03:44 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-13-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 12/17] tools/xenstore: don't let hashtable_remove()
 return the removed value
In-Reply-To: <20230117091124.22170-13-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> Letting hashtable_remove() return the value of the removed element is
> not used anywhere in Xenstore, and it conflicts with a hashtable
> created specifying the HASHTABLE_FREE_VALUE flag.
> 
> So just drop returning the value.

Any reason this can't be void? If there is, then I would consider
returning a bool, as the return can only take 2 values.

> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V3:
> - new patch
> ---
>   tools/xenstore/hashtable.c | 10 +++++-----
>   tools/xenstore/hashtable.h |  4 ++--
>   2 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
> index 299549c51e..6738719e47 100644
> --- a/tools/xenstore/hashtable.c
> +++ b/tools/xenstore/hashtable.c
> @@ -214,7 +214,7 @@ hashtable_search(struct hashtable *h, void *k)
>   }
>   
>   /*****************************************************************************/
> -void * /* returns value associated with key */
> +int
>   hashtable_remove(struct hashtable *h, void *k)
>   {
>       /* TODO: consider compacting the table when the load factor drops enough,
> @@ -222,7 +222,6 @@ hashtable_remove(struct hashtable *h, void *k)
>   
>       struct entry *e;
>       struct entry **pE;
> -    void *v;
>       unsigned int hashvalue, index;
>   
>       hashvalue = hash(h,k);
> @@ -236,16 +235,17 @@ hashtable_remove(struct hashtable *h, void *k)
>           {
>               *pE = e->next;
>               h->entrycount--;
> -            v = e->v;
>               if (h->flags & HASHTABLE_FREE_KEY)
>                   free(e->k);
> +            if (h->flags & HASHTABLE_FREE_VALUE)
> +                free(e->v);

I don't quite understand how this change is related to this patch.

>               free(e);
> -            return v;
> +            return 1;
>           }
>           pE = &(e->next);
>           e = e->next;
>       }
> -    return NULL;
> +    return 0;
>   }
>   
>   /*****************************************************************************/
> diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
> index 6d65431f96..d6feb1b038 100644
> --- a/tools/xenstore/hashtable.h
> +++ b/tools/xenstore/hashtable.h
> @@ -68,10 +68,10 @@ hashtable_search(struct hashtable *h, void *k);
>    * @name        hashtable_remove
>    * @param   h   the hashtable to remove the item from
>    * @param   k   the key to search for  - does not claim ownership
> - * @return      the value associated with the key, or NULL if none found
> + * @return      0 if element not found
>    */
>   
> -void * /* returns value */
> +int
>   hashtable_remove(struct hashtable *h, void *k);
>   
>   /*****************************************************************************

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:05:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:05:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479861.743952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHu4e-0008Kg-At; Tue, 17 Jan 2023 22:04:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479861.743952; Tue, 17 Jan 2023 22:04:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHu4e-0008KZ-7D; Tue, 17 Jan 2023 22:04:56 +0000
Received: by outflank-mailman (input) for mailman id 479861;
 Tue, 17 Jan 2023 22:04:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHu4b-0008KH-Pi
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 22:04:54 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f67a5a6c-96b2-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 23:04:51 +0100 (CET)
Received: from mail-bn1nam02lp2047.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 17:04:48 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB5971.namprd03.prod.outlook.com (2603:10b6:610:e2::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 22:04:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 22:04:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f67a5a6c-96b2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673993091;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=3p+t6ZVOvxjY1Cuwg5qSQGK2CeTqrNXupNUGp+51XLw=;
  b=BivX2ErP7KVJXxZVPSTi5EEbSRRRQ2TWsJRAZ842+GGgV65Jz6Aa6cwk
   GI0zx10oWD3iLACaabzsuOcAZBhoVg9RR6x64jmOGKXoAZ6kFyFMStV7e
   qiInBzw0mMz674p+AQJ/6JV+3zHplsUeZC4vPbtALuBXmk5D+KM5Av53B
   g=;
X-IronPort-RemoteIP: 104.47.51.47
X-IronPort-MID: 93106593
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="93106593"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mzUf52PjcxkZTwlSMGgHiPdE4O7IlO366Do0fSlcLUllXQv9N/veRFcTMYC7pNZoibpg0xYYhhy2v3tkpM6hTSEFxbfxTKYT6i9uYoILpJPbr8EzPNxIMY4iBmWr3OfUoJbwmLndEII71DFIwpGLt6uxoacsU9MQun1H3zVOAS0nJLisOGT3yXYoY9ryWMJrSJ5TpHOhU6z91t+JBFU0ygnR1Hx9d5e2cLQu5VbCWLY4pvW5cJXxi4htZP5sIVrBhlS6wA4tuTpb2LhTKsaxK8E+ktLgmvS2uNf1jsaD+btH/bZlfQMjWVxZtdHxFw/UTOf5tBOlUqL+Mn7t7lTuTA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3p+t6ZVOvxjY1Cuwg5qSQGK2CeTqrNXupNUGp+51XLw=;
 b=aZ1xAwbVXQQSPhbtypZe/KwsvjpBav8rC9RI9fNiyJYKskw0OsyYU4yMI92aHQaeqZ9E+jBehf+JuEP5I+o+STmFqUYLx6GkAyvIsP9Py4clpIbuiaxVTOs80WQK1OQtFpt4zM4OdmWgOtQPG03byvMYR/lFWF1Yomv4T2sw5fpAtFAWJbtXiUJZgoqFzs74jrqmgIao6/E+hZNPrwZLo70dIx5oQY462CAx3Z/2OXQGwTqNhATVlmvOS1Wh2b8qz62AWct9NasyvkG18OKsp3dwrh+N+C61bSg/HSghP+OF/Z9xafPMILrk1isjnmBznnx8onAQTdhgkT5m1teQRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3p+t6ZVOvxjY1Cuwg5qSQGK2CeTqrNXupNUGp+51XLw=;
 b=a9U2u1ZwTBS2/3nbJ4giGRc/oGp0o39GVCwV2ALYWY8GVuNA3Jzkg8wKLL8nQyRAPow03jDFdph2BsHOMEaMp5vCPnSNouosz/LXcNN3TvMZaJdrDMEjOII6+bOel5ZTazwPSpFdbHwafHwN9PVc+DL6ThSUHgTubI0zVQtn4yc=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Roger Pau
 Monne <roger.pau@citrix.com>
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Thread-Topic: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Thread-Index: AQHY4457jyaOcvAnhkOdTNwMDM8hXq6juEqA
Date: Tue, 17 Jan 2023 22:04:44 +0000
Message-ID: <ed4d8d85-2ba5-74c1-7c65-0ae65bf0ee06@citrix.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
In-Reply-To: <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CH0PR03MB5971:EE_
x-ms-office365-filtering-correlation-id: 43ab8f15-4129-4c62-4888-08daf8d6d74a
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <90E7C600987F2048BC7261D6EDD00925@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 43ab8f15-4129-4c62-4888-08daf8d6d74a
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 22:04:44.4381
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 0Xsf5+sNqkfVD7JyzJaKvcytp8HD955eaDIam9jN2MUK7Pm1A5fDX1IIifH383cojns0v+YXho5bW6ymQaUzhFeCUb+5hPW0za6HfX85oMw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB5971

On 19/10/2022 8:43 am, Jan Beulich wrote:
> The registration by virtual/linear address has downsides: At least on
> x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
> PV domains the areas are inaccessible (and hence cannot be updated by
> Xen) when in guest-user mode.

They're also inaccessible in HVM guests (x86 and ARM) when Meltdown
mitigations are in place.

And let's not get started on the multitude of layering violations that is
guest_memory_policy() for nested virt.  In fact, prohibiting any form of
map-by-va is a prerequisite to any rational attempt to make nested virt
work.

(In fact, that infrastructure needs purging before any other
architecture picks up stubs too.)

They're also inaccessible in general because no architecture has
hypervisor privilege in a normal user/supervisor split, and you don't
know whether the mapping is over a supervisor or user mapping, and
settings like SMAP/PAN can cause the pagewalk to fail even when the
mapping is in place.

There are a lot of good reasons why map-by-va should never have happened.

Even for PV guests, map-by-gfn (letting the guest manage whatever
virtual mappings it wants on its own) would have been closer to the
status quo for how real hardware works, and critically would have
avoided the restriction that the areas had to live at a globally fixed
position to be useful.

>
> In preparation of the introduction of new vCPU operations allowing to
> register the respective areas (one of the two is x86-specific) by
> guest-physical address, flesh out the map/unmap functions.
>
> Noteworthy differences from map_vcpu_info():
> - areas can be registered more than once (and de-registered),

When register-by-GFN is available, there is never a good reason to map
the same area twice.

The guest maps one MMIO-like region, and then constructs all the regular
virtual addresses mapping it (or not) that it wants.

This API is new, so we can enforce sane behaviour from the outset.  In
particular, it will help with ...

> - remote vCPU-s are paused rather than checked for being down (which in
>   principle can change right after the check),
> - the domain lock is taken for a much smaller region.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> RFC: By using global domain page mappings the demand on the underlying
>      VA range may increase significantly. I did consider to use per-
>      domain mappings instead, but they exist for x86 only. Of course we
>      could have arch_{,un}map_guest_area() aliasing global domain page
>      mapping functions on Arm and using per-domain mappings on x86. Yet
>      then again map_vcpu_info() doesn't do so either (albeit that's
>      likely to be converted subsequently to use map_vcpu_area() anyway).

... this by providing a bound on the amount of vmap() space that can be
consumed.

>
> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>      like map_vcpu_info() - solely relying on the type ref acquisition.
>      Checking for p2m_ram_rw alone would be wrong, as at least
>      p2m_ram_logdirty ought to also be okay to use here (and in similar
>      cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>      used here (like altp2m_vcpu_enable_ve() does) as well as in
>      map_vcpu_info(), yet then again the P2M type is stale by the time
>      it is being looked at anyway without the P2M lock held.

Again, another error caused by Xen not knowing the guest physical
address layout.  These mappings should be restricted to just RAM regions
and I think we want to enforce that right from the outset.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:15:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:15:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479867.743962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuEa-0001RA-7e; Tue, 17 Jan 2023 22:15:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479867.743962; Tue, 17 Jan 2023 22:15:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuEa-0001R3-56; Tue, 17 Jan 2023 22:15:12 +0000
Received: by outflank-mailman (input) for mailman id 479867;
 Tue, 17 Jan 2023 22:15:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHuEZ-0001Qx-6b
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 22:15:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHuEZ-00050I-1A; Tue, 17 Jan 2023 22:15:11 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHuEY-0000cU-SC; Tue, 17 Jan 2023 22:15:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=05ZsLxnhrqrBYV1uktE+uGxMIsTLyXZRe+iL0UodKt0=; b=r/gKfkfBr8ZlBD9XR9myvmIOp5
	0wbmDGDyCneDgHV0bv+TtpQfEwjc4OWUVYBql5Pfj8xPS0SLMBiR8ncZMbIXQ/2xf45uDv6UL+r5i
	eVJrCBLwSR7UqUAAtaHKYpykyihgmzQNIB5Z71cOvWyEda19ziNsV9bhtBeQ+ydzNG/M=;
Message-ID: <2ab20725-4bb9-66ac-a87f-01dca92f9453@xen.org>
Date: Tue, 17 Jan 2023 22:15:08 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-16-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 15/17] tools/xenstore: introduce trace classes
In-Reply-To: <20230117091124.22170-16-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> @@ -2604,6 +2607,8 @@ static void usage(void)
>   "  -N, --no-fork           to request that the daemon does not fork,\n"
>   "  -P, --output-pid        to request that the pid of the daemon is output,\n"
>   "  -T, --trace-file <file> giving the file for logging, and\n"
> +"      --trace-control=+<switch> activate a specific <switch>\n"
> +"      --trace-control=-<switch> deactivate a specific <switch>\n"
>   "  -E, --entry-nb <nb>     limit the number of entries per domain,\n"
>   "  -S, --entry-size <size> limit the size of entry per domain, and\n"
>   "  -W, --watch-nb <nb>     limit the number of watches per domain,\n"
> @@ -2647,6 +2652,7 @@ static struct option options[] = {
>   	{ "output-pid", 0, NULL, 'P' },
>   	{ "entry-size", 1, NULL, 'S' },
>   	{ "trace-file", 1, NULL, 'T' },
> +	{ "trace-control", 1, NULL, 1 },
>   	{ "transaction", 1, NULL, 't' },
>   	{ "perm-nb", 1, NULL, 'A' },
>   	{ "path-max", 1, NULL, 'M' },
> @@ -2721,6 +2727,43 @@ static void set_quota(const char *arg, bool soft)
>   		barf("unknown quota \"%s\"\n", arg);
>   }
>   
> +/* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
> +const char *trace_switches[] = {

AFAICT, this array is not meant to be modified. So you want a second const.

> +	"obj", "io", "wrl",
> +	NULL
> +};

[...]

> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 3b96ecd018..c85b15515c 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -287,6 +287,12 @@ extern char **orig_argv;
>   
>   extern char *tracefile;
>   extern int tracefd;
> +extern unsigned int trace_flags;
> +#define TRACE_OBJ	0x00000001
> +#define TRACE_IO	0x00000002
> +#define TRACE_WRL	0x00000004
I would add a comment on top to explain that these values must be kept in 
sync with trace_switches.
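One way to make such a comment enforceable at build time (a sketch using
C11 _Static_assert; xenstored itself might prefer an existing
BUILD_BUG_ON-style macro, and the declarations are repeated here only so
the fragment is self-contained):

```c
#include <stddef.h>

#define TRACE_OBJ	0x00000001
#define TRACE_IO	0x00000002
#define TRACE_WRL	0x00000004

/* Flag for trace_switches[i] is (1u << i); keep both lists in sync. */
static const char *const trace_switches[] = {
	"obj", "io", "wrl",
	NULL
};

/* Number of switch names, excluding the NULL terminator. */
#define NR_TRACE_SWITCHES \
	(sizeof(trace_switches) / sizeof(trace_switches[0]) - 1)

/* Compilation fails if the highest flag and the name list drift apart. */
_Static_assert(TRACE_WRL == 1u << (NR_TRACE_SWITCHES - 1),
	       "TRACE_* flags out of sync with trace_switches[]");
```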

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:21:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:21:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479873.743974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuKN-0002rX-TK; Tue, 17 Jan 2023 22:21:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479873.743974; Tue, 17 Jan 2023 22:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuKN-0002rQ-P3; Tue, 17 Jan 2023 22:21:11 +0000
Received: by outflank-mailman (input) for mailman id 479873;
 Tue, 17 Jan 2023 22:21:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHuKM-0002rK-Fw
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 22:21:10 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38c23303-96b5-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 23:21:02 +0100 (CET)
Received: from mail-bn8nam12lp2173.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 17:20:51 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO1PR03MB5826.namprd03.prod.outlook.com (2603:10b6:303:9f::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 22:20:49 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 22:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38c23303-96b5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673994062;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=t4EILZT52ZKSfqjd+zyNYWSgc1N/Ke0NH5zkAWZgzo0=;
  b=Z141X290tKs+uli37ZavHyqJAZHrKQjejkVrQdRT6X710zp/wdSAVP1R
   pP11UpXViOlyM0zxC2zPusgPgpqdqc9g0mpDTsotvh16E9jofi3mRzY4p
   ckJulyEvHDIKNOweFWYg7zaARQe482NZfUQmVTed506V27k1fRmvKo8JS
   E=;
X-IronPort-RemoteIP: 104.47.55.173
X-IronPort-MID: 92524489
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="92524489"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jCvanKGFTPEVIHcQw+DVEPSyhsi6rUVxricNj7iwTwVoUtqfShL1qERZodR3PbQOrjSrQhTgLvhIgz0oZ+PiF1Py+0Oa+qLcUujlG/e6npMSbtk4EQjfa7BPX/oRoi7WzmbARWEh8nY1LtljzZLeUJwEZzBl9H0vRUWmbzYsyj9GNuT0QXDENh0g0VWwHtAeAzqmy7qb4K03MI0Ibt5lVTUqIfmj7vdGgxKRmEyr6Hn5Z7wxnP5l4+VqfO2cYYzZoeQbxEQBrL7MHELxatt63veHciQA/Ju6J9MEvo4d+rg9Tqy+x8zJD/+ODtEJbY/OOHExdLJ5XNWiCQxxr8un/w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=t4EILZT52ZKSfqjd+zyNYWSgc1N/Ke0NH5zkAWZgzo0=;
 b=NIOLTeDKIejYBhGhnTCY4TlC+k/XlnztYtUL7QfMTt9kgpB197p8qArwQZnfkF6HFSHs+Tpk/rssn/GLweWUEPZ+2TwxylpWn1O39e3bzR22TVFgYjtKieUhP7vzwRShNuYiVp8j02g2Rexh1uvO0uNSyEY3S9Yff8vTlroFSPfj7v57lrUmPPbbK3ErjYAomiCa2uvD2FvttavnYVcRtR5vHTnITXwn16W5cXrISPk7LPDNteNOu7kLJjQFgqPS2eUrd70zFx/V7kh9B/Icrktl90NTwNnoyRekCnky+4Muk++JCdNwuqDPJAfYZF43xk96yGypFnlpEFG66IjYtQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t4EILZT52ZKSfqjd+zyNYWSgc1N/Ke0NH5zkAWZgzo0=;
 b=o7e8qPp4CctPtEfKzO+rGuGQpHnzA2w6JTIJuq91KB+oN1lcPn8U9cd/ZC02kgooLgfiJiZ9oA5VhSpqFth8Gaz3BB4pJ20SaMBj+kb9aiezoGM387Xx5p4r5CkYT6o6l68RJFzuO837TZy1icgurlr0pEHkCSYMH5Mw0T7WnFs=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Roger Pau Monne
	<roger.pau@citrix.com>
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Thread-Topic: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Thread-Index: AQHY4457jyaOcvAnhkOdTNwMDM8hXq5O0KiAgFTsIQA=
Date: Tue, 17 Jan 2023 22:20:48 +0000
Message-ID: <bd6befdf-65eb-6937-fb85-449a5fa16794@citrix.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
 <f1229a27-f92c-a0dc-928e-1d78b928fdd0@xen.org>
In-Reply-To: <f1229a27-f92c-a0dc-928e-1d78b928fdd0@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CO1PR03MB5826:EE_
x-ms-office365-filtering-correlation-id: 875819a8-a849-47d5-5ade-08daf8d91614
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <73DC3300E788E245912762F40C90815E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 875819a8-a849-47d5-5ade-08daf8d91614
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 22:20:48.7745
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR03MB5826

On 24/11/2022 9:29 pm, Julien Grall wrote:
> Hi Jan,
>
> I am still digesting this series and replying with some initial comments.
>
> On 19/10/2022 09:43, Jan Beulich wrote:
>> The registration by virtual/linear address has downsides: At least on
>> x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
>> PV domains the areas are inaccessible (and hence cannot be updated by
>> Xen) when in guest-user mode.
>>
>> In preparation of the introduction of new vCPU operations allowing to
>> register the respective areas (one of the two is x86-specific) by
>> guest-physical address, flesh out the map/unmap functions.
>>
>> Noteworthy differences from map_vcpu_info():
>> - areas can be registered more than once (and de-registered),
>> - remote vCPU-s are paused rather than checked for being down (which in
>>   principle can change right after the check),
>> - the domain lock is taken for a much smaller region.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> RFC: By using global domain page mappings the demand on the underlying
>>      VA range may increase significantly. I did consider to use per-
>>      domain mappings instead, but they exist for x86 only. Of course we
>>      could have arch_{,un}map_guest_area() aliasing global domain page
>>      mapping functions on Arm and using per-domain mappings on x86. Yet
>>      then again map_vcpu_info() doesn't do so either (albeit that's
>>      likely to be converted subsequently to use map_vcpu_area()
>> anyway).
>>
>> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>>      like map_vcpu_info() - solely relying on the type ref acquisition.
>>      Checking for p2m_ram_rw alone would be wrong, as at least
>>      p2m_ram_logdirty ought to also be okay to use here (and in similar
>>      cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>>      used here (like altp2m_vcpu_enable_ve() does) as well as in
>>      map_vcpu_info(), yet then again the P2M type is stale by the time
>>      it is being looked at anyway without the P2M lock held.
>>
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -1563,7 +1563,82 @@ int map_guest_area(struct vcpu *v, paddr
>>                     struct guest_area *area,
>>                     void (*populate)(void *dst, struct vcpu *v))
>>   {
>> -    return -EOPNOTSUPP;
>> +    struct domain *currd = v->domain;
>> +    void *map = NULL;
>> +    struct page_info *pg = NULL;
>> +    int rc = 0;
>> +
>> +    if ( gaddr )
>
> 0 is technically a valid (guest) physical address on Arm.

It is on x86 too, but that's not why 0 is generally considered an
invalid address.

See the multitude of XSAs, and near-XSAs, which have been caused by bad
logic in Xen resulting from trying to make a variable held in struct
vcpu/domain have a default value other than 0.

It's not impossible to write such code safely, and in this case I expect
it can be done by the NULLness (or not) of the mapping pointer, rather
than by stashing the gaddr, but history has proved repeatedly that this
is a very fertile source of security bugs.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:36:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:36:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479905.744002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuZI-0005eS-Ik; Tue, 17 Jan 2023 22:36:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479905.744002; Tue, 17 Jan 2023 22:36:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuZI-0005eL-Fv; Tue, 17 Jan 2023 22:36:36 +0000
Received: by outflank-mailman (input) for mailman id 479905;
 Tue, 17 Jan 2023 22:36:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHuZG-0005eD-VP
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 22:36:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHuZG-0005Xj-4f; Tue, 17 Jan 2023 22:36:34 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHuZG-0001Mk-03; Tue, 17 Jan 2023 22:36:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=XYe4maePp7zmVmQU5gus58pAOG/ID5cGJENFvQ4i/sw=; b=oeCB03jrtE+YWYnZ7GGTFbxaDj
	xgfHYkIiZ2Z0iIgm0a2NTVgiuIw6rSzgxUy6OFQENMyOU/oG15ul25YyBRgy8rTdOjBisXtcSdVbm
	Y8bWppjmuOa5N5PDAAJxN7vqSogUlvl4FkkhjeckJzWKc2SRIjueJot9amnqSOsX0NgI=;
Message-ID: <17595b1f-1523-9526-85da-99b9300f3218@xen.org>
Date: Tue, 17 Jan 2023 22:36:32 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-17-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 16/17] tools/xenstore: let check_store() check the
 accounting data
In-Reply-To: <20230117091124.22170-17-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> Today check_store() is only testing the correctness of the node tree.
> 
> Add verification of the accounting data (number of nodes)  and correct

Nit: one space too many before 'and'.

> the data if it is wrong.
> 
> Do the initial check_store() call only after Xenstore entries of a
> live update have been read.

Can you clarify whether this is needed for the rest of the patch, or 
simply a nice thing to have in general?

> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/xenstored_core.c   | 62 ++++++++++++++++------
>   tools/xenstore/xenstored_domain.c | 86 +++++++++++++++++++++++++++++++
>   tools/xenstore/xenstored_domain.h |  4 ++
>   3 files changed, 137 insertions(+), 15 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 3099077a86..e201f14053 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2389,8 +2389,6 @@ void setup_structure(bool live_update)
>   		manual_node("@introduceDomain", NULL);
>   		domain_nbentry_fix(dom0_domid, 5, true);
>   	}
> -
> -	check_store();
>   }
>   
>   static unsigned int hash_from_key_fn(void *k)
> @@ -2433,20 +2431,28 @@ int remember_string(struct hashtable *hash, const char *str)
>    * As we go, we record each node in the given reachable hashtable.  These
>    * entries will be used later in clean_store.
>    */
> +
> +struct check_store_data {
> +	struct hashtable *reachable;
> +	struct hashtable *domains;
> +};
> +
>   static int check_store_step(const void *ctx, struct connection *conn,
>   			    struct node *node, void *arg)
>   {
> -	struct hashtable *reachable = arg;
> +	struct check_store_data *data = arg;
>   
> -	if (hashtable_search(reachable, (void *)node->name)) {
> +	if (hashtable_search(data->reachable, (void *)node->name)) {

IIUC the cast is only necessary because hashtable_search() expects a 
non-const key. I can't think of a reason for the key to be non-const, 
so I will look into sending a follow-up patch.

> +
> +void domain_check_acc_add(const struct node *node, struct hashtable *domains)
> +{
> +	struct domain_acc *dom;
> +	unsigned int domid;
> +
> +	domid = node->perms.p[0].id;

This could be replaced with get_node_owner().
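As a sketch of that suggestion (structure layouts reduced to a minimum for illustration; this is not copied from xenstored), the helper simply names the convention that the first permission entry holds the owner:

```c
#include <assert.h>

/* Simplified layouts, assumed for illustration: by convention the
 * first entry of a node's permission array names the owning domain,
 * so a helper makes the idiom self-describing instead of open-coding
 * node->perms.p[0].id at each call site. */
struct xs_permission {
    unsigned int id;
};

struct node_perms {
    struct xs_permission *p;
};

struct node {
    struct node_perms perms;
};

static unsigned int get_node_owner(const struct node *node)
{
    return node->perms.p[0].id;
}
```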

> +	dom = hashtable_search(domains, &domid);
> +	if (!dom)
> +		log("Node %s owned by unknown domain %u", node->name, domid);
> +	else
> +		dom->nodes++;
> +}
> +
> +static int domain_check_acc_sub(const void *k, void *v, void *arg)

I find the suffix 'sub' misleading because we have a function with a 
very similar name (suffixed 'add'). Yet, AFAICT, this one is not meant 
to subtract.

So I would use a '_cb' suffix instead.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:37:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:37:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479907.744013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuZc-00065E-SD; Tue, 17 Jan 2023 22:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479907.744013; Tue, 17 Jan 2023 22:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuZc-000657-Oh; Tue, 17 Jan 2023 22:36:56 +0000
Received: by outflank-mailman (input) for mailman id 479907;
 Tue, 17 Jan 2023 22:36:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHuZb-00064s-Pu; Tue, 17 Jan 2023 22:36:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHuZb-0005Xy-P9; Tue, 17 Jan 2023 22:36:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHuZb-0008SR-D2; Tue, 17 Jan 2023 22:36:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHuZb-000898-CV; Tue, 17 Jan 2023 22:36:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7MM7shFrSpdg638sIB9FD56tEbQ/51qPQmY9UWoRe0A=; b=FAeA3vkgOWxkTtu0zK80V3WiIH
	rFo1ewy8BbWSdIkSMat/tDisjr5KtbIjT9R2X7FmV/1ynjk7+xL2McyP/doEphZus5Scl44yQq/1Y
	HWjUDNTboJ4An/NXpnUZwOvrgjtgpjpocKZfuM8+ZP/aMhiEQn9HO8luaTnLlMiZT65I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175950-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175950: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ea382b3b21ef6664d5850dfbe08793f77d10c15d
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Jan 2023 22:36:55 +0000

flight 175950 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175950/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 ea382b3b21ef6664d5850dfbe08793f77d10c15d
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    2 days   38 attempts
Testing same since   175945  2023-01-17 19:40:40 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>
  Oliver Steffen <osteffen@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 718 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:39:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479919.744025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHubu-0006sc-AV; Tue, 17 Jan 2023 22:39:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479919.744025; Tue, 17 Jan 2023 22:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHubu-0006sV-5i; Tue, 17 Jan 2023 22:39:18 +0000
Received: by outflank-mailman (input) for mailman id 479919;
 Tue, 17 Jan 2023 22:39:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHubt-0006sP-Oi
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 22:39:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHubt-0005aw-1c; Tue, 17 Jan 2023 22:39:17 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHubs-0001Ut-Sv; Tue, 17 Jan 2023 22:39:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=gZQ5HEqX71hU1/q6YSaasZdkyxy17ihJcYZ+dxCrIeQ=; b=RiKVvrspXq0ajBTdUMGpo16DhW
	RNVwCKn7qIqaN18uJOUzFU0M2iPMCE7OEwIj9Cmr4zcb7oMP6ES82EBxiW6O+Q5i4UbO/zginQP0z
	X2vQyPYas8IdObpvKcu9EOOyLGL+84GqQsoLjapSgQcpPU1xKfc/GUrKchn+YEOTovkU=;
Message-ID: <d980e8e7-8fd3-4eab-eec8-312202f2df18@xen.org>
Date: Tue, 17 Jan 2023 22:39:15 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 17/17] tools/xenstore: make output of "xenstore-control
 help" more pretty
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-18-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117091124.22170-18-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> Using a tab for separating the command from the options in the output
> of "xenstore-control help" results in a rather ugly list.
> 
> Use a fixed size for the command instead.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>
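
The formatting change described in the patch amounts to replacing a tab separator with a left-aligned fixed-width field. A minimal sketch (the field width and function name are illustrative, not the exact xenstore-control code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: padding the command name to a fixed width keeps
 * the option column aligned for every command, unlike a single tab,
 * which drifts once a command name crosses a tab stop. */
static int format_help_line(char *buf, size_t len,
                            const char *cmd, const char *opts)
{
    return snprintf(buf, len, "%-20s%s", cmd, opts);
}
```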

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:45:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:45:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479925.744035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuiA-0008J9-Uw; Tue, 17 Jan 2023 22:45:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479925.744035; Tue, 17 Jan 2023 22:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuiA-0008J2-Rg; Tue, 17 Jan 2023 22:45:46 +0000
Received: by outflank-mailman (input) for mailman id 479925;
 Tue, 17 Jan 2023 22:45:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHui9-0008It-Mo
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 22:45:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHui9-0005kD-0h; Tue, 17 Jan 2023 22:45:45 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHui8-0001lP-Ne; Tue, 17 Jan 2023 22:45:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=3NWZAyDIovJyMx74/VBGc2qVIk/lkIqeHL7mPsetEoU=; b=s0XlxGqoeDUoN2JEKRZuYmkoDF
	sSkH0eVrYoFjTTbK87rY9c1J5hO1BUI+E3bil+r7nHu0bMdL+WiC+ghXY79OqHEhZYk2EBqGt+a01
	HJI/d3EbN20sz1CpItQQ8yuWppZvX/kXPU2HspFsgaL7zQgTE4YGlD6nFlGWmWdRmbnI=;
Message-ID: <e59e2c5f-a0cc-ff2e-15ba-268fc132e5d4@xen.org>
Date: Tue, 17 Jan 2023 22:45:43 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
 <f1229a27-f92c-a0dc-928e-1d78b928fdd0@xen.org>
 <bd6befdf-65eb-6937-fb85-449a5fa16794@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <bd6befdf-65eb-6937-fb85-449a5fa16794@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 17/01/2023 22:20, Andrew Cooper wrote:
> On 24/11/2022 9:29 pm, Julien Grall wrote:
>> Hi Jan,
>>
>> I am still digesting this series and replying with some initial comments.
>>
>> On 19/10/2022 09:43, Jan Beulich wrote:
>>> The registration by virtual/linear address has downsides: At least on
>>> x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
>>> PV domains the areas are inaccessible (and hence cannot be updated by
>>> Xen) when in guest-user mode.
>>>
>>> In preparation of the introduction of new vCPU operations allowing to
>>> register the respective areas (one of the two is x86-specific) by
>>> guest-physical address, flesh out the map/unmap functions.
>>>
>>> Noteworthy differences from map_vcpu_info():
>>> - areas can be registered more than once (and de-registered),
>>> - remote vCPU-s are paused rather than checked for being down (which in
>>>     principle can change right after the check),
>>> - the domain lock is taken for a much smaller region.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> RFC: By using global domain page mappings the demand on the underlying
>>>        VA range may increase significantly. I did consider to use per-
>>>        domain mappings instead, but they exist for x86 only. Of course we
>>>        could have arch_{,un}map_guest_area() aliasing global domain page
>>>        mapping functions on Arm and using per-domain mappings on x86. Yet
>>>        then again map_vcpu_info() doesn't do so either (albeit that's
>>>        likely to be converted subsequently to use map_vcpu_area()
>>> anyway).
>>>
>>> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>>>        like map_vcpu_info() - solely relying on the type ref acquisition.
>>>        Checking for p2m_ram_rw alone would be wrong, as at least
>>>        p2m_ram_logdirty ought to also be okay to use here (and in similar
>>>        cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>>>        used here (like altp2m_vcpu_enable_ve() does) as well as in
>>>        map_vcpu_info(), yet then again the P2M type is stale by the time
>>>        it is being looked at anyway without the P2M lock held.
>>>
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -1563,7 +1563,82 @@ int map_guest_area(struct vcpu *v, paddr
>>>                       struct guest_area *area,
>>>                       void (*populate)(void *dst, struct vcpu *v))
>>>    {
>>> -    return -EOPNOTSUPP;
>>> +    struct domain *currd = v->domain;
>>> +    void *map = NULL;
>>> +    struct page_info *pg = NULL;
>>> +    int rc = 0;
>>> +
>>> +    if ( gaddr )
>>
>> 0 is technically a valid (guest) physical address on Arm.
> 
> It is on x86 too, but that's not why 0 is generally considered an
> invalid address.
> 
> See the multitude of XSAs, and near-XSAs which have been caused by bad
> logic in Xen caused by trying to make a variable held in struct
> vcpu/domain have a default value other than 0.
> 
> It's not impossible to write such code safely, and in this case I expect
> it can be done by the NULLness (or not) of the mapping pointer, rather
> than by stashing the gaddr, but history has proved repeatedly that this
> is a very fertile source of security bugs.

I understand the security concern. However... you are now imposing a 
constraint on the guest OS which may be more difficult to address there.

Furthermore, we are trying to make a sane ABI. It doesn't look sane to 
me to expose our current shortcomings to the guest OS. All the more so 
because, even if we decided to relax this in the future, it would not 
help an OS: it would still need to support older Xen...
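
For reference, the pointer-based tracking Andrew describes can be sketched as follows (types reduced to the minimum; these are not the actual Xen structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Reduced-to-the-minimum illustration: registration state is carried
 * by the NULL-ness of the mapping pointer, so guest physical address 0
 * remains a perfectly valid, registerable address and no sentinel
 * gaddr value is needed. */
struct guest_area {
    void *map;   /* NULL <=> no area currently registered */
};

static bool area_registered(const struct guest_area *area)
{
    return area->map != NULL;
}

static void unregister_area(struct guest_area *area)
{
    area->map = NULL;   /* state change without any magic address */
}
```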

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 22:52:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 22:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479931.744046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuo3-0001IX-KY; Tue, 17 Jan 2023 22:51:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479931.744046; Tue, 17 Jan 2023 22:51:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHuo3-0001IQ-Gn; Tue, 17 Jan 2023 22:51:51 +0000
Received: by outflank-mailman (input) for mailman id 479931;
 Tue, 17 Jan 2023 22:51:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2qLe=5O=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pHuo2-0001IH-Ay
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 22:51:50 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86bf9084-96b9-11ed-91b6-6bf2151ebd3b;
 Tue, 17 Jan 2023 23:51:49 +0100 (CET)
Received: by mail-ej1-x62d.google.com with SMTP id ud5so79061668ejc.4
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 14:51:49 -0800 (PST)
Received: from [127.0.0.1] (dynamic-077-011-043-201.77.11.pool.telefonica.de.
 [77.11.43.201]) by smtp.gmail.com with ESMTPSA id
 k15-20020a17090632cf00b0087120324712sm2738569ejk.23.2023.01.17.14.51.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 17 Jan 2023 14:51:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86bf9084-96b9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jbh03W6hVMjCODjzgpmoric/2rTMsA2rWH3ep2nias0=;
        b=GTBzrSwzC0C+TbpKBPASdjFgmPK9MbGvc5t3esQ65Uc/KQOCsm545QihSYOgepEsAx
         rqv7NBrNxsqiw9wLC8PjaY726+IF868Uqcoj1MJRZOA4PJI6MkrYnw81z2bffpWxJHzM
         Z15Wib++E6k1CJqXVVSu0WXiI57WxyfvsbcY+C7GpVDz5Mh1x48lhAUgcWXntXBAOKWz
         hrPRKlnzYBmW5etZlK2IJv/Pmx+QPNK1/OGBDtAd1FrJT698evGYc/mBAwJ0xxncm0CU
         QlSENUD08VRWCnaxpeuwqiIgQ4exc1RAQ5IPg7Y1L9iYNnBN96IiweLC3PDczt4WJBBA
         kQZg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jbh03W6hVMjCODjzgpmoric/2rTMsA2rWH3ep2nias0=;
        b=h8mI+7fVg8B/z5+GL1JquyzFDT62LbYWUkOW1XAOwXzjIVOr8yRlYAtfzpp30J4g1R
         OAJ9bem+RvlllHI3jRN/f4WnVfeBoq+l43wv8EtISGbaKaHZwwg4gNf393YzYivsDrDC
         GUa6V4161KEJ3WEpEq+ntAlk5EagbB33XgJXjAglrVwEGjuGIdLlt0NkM6ZpyY+t6VuC
         ++8Q7crODkXsb9lKOHgHAbNzTRafDTMuIxh0HQz3y1wp0F3l1gYru/sdMHuJmW2yUTTI
         aTsqjHR8Zm5ez6UGS/6KmiY9dfzakNeU3M90F6QPFdLN2GjPOSWsoF6wPyF6emKP/Mbo
         BXRQ==
X-Gm-Message-State: AFqh2kpwYh1ud4BikGPYS+Xw+fnpUSqPt+npSXEK/Egh6EDsAgaQrnMm
	cYwo0RV6WvBZusMCUIkyZzg=
X-Google-Smtp-Source: AMrXdXuViHlaRCcJQwvdFLv4FdIMUTrp63hp3JtXnf0wno9j+1Rox2wLPKLD9Iw0BNao/SOEWyvs+g==
X-Received: by 2002:a17:906:b24c:b0:869:236c:ac41 with SMTP id ce12-20020a170906b24c00b00869236cac41mr4837674ejb.24.1673995908835;
        Tue, 17 Jan 2023 14:51:48 -0800 (PST)
Date: Tue, 17 Jan 2023 22:51:45 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
CC: Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <eduardo@habkost.net>,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>,
 Chuck Zmudzinski <brchuckz@aol.com>
Subject: Re: [PATCH v2 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <20230104144437.27479-1-shentey@gmail.com>
References: <20230104144437.27479-1-shentey@gmail.com>
Message-ID: <D2349D00-B64B-4197-A62E-A74CB20112FB@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On 4 January 2023 14:44:31 UTC, Bernhard Beschow <shentey@gmail.com> wrote:
>This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
>it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
>machine agnostic to the precise southbridge being used. 2/ will become
>particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
>the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>
>v2:
>- xen_piix3_set_irq() is already generic. Just rename it. (Chuck)
>
>Testing done:
>None, because I don't know how to conduct this properly :(

Ping

Successfully tested by Chuck. Patches 2, 4 and 6 still need review.

I can rebase to master if that eases review -- please let me know. Currently
this series is based on my "Consolidate PIIX south bridges" series:

>Based-on: <20221221170003.2929-1-shentey@gmail.com>
>          "[PATCH v4 00/30] Consolidate PIIX south bridges"
>
>Bernhard Beschow (6):
>  include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
>  hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>  hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>  hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>  hw/isa/piix: Resolve redundant k->config_write assignments
>  hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>
> hw/i386/pc_piix.c             | 34 ++++++++++++++++--
> hw/i386/xen/xen-hvm.c         |  2 +-
> hw/isa/piix.c                 | 66 +----------------------------------
> include/hw/southbridge/piix.h |  1 -
> include/hw/xen/xen.h          |  2 +-
> stubs/xen-hw-stub.c           |  2 +-
> 6 files changed, 35 insertions(+), 72 deletions(-)
>
>-- 
>2.39.0


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:09:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:09:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479963.744073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHv4i-0003ui-AM; Tue, 17 Jan 2023 23:09:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479963.744073; Tue, 17 Jan 2023 23:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHv4i-0003ub-7f; Tue, 17 Jan 2023 23:09:04 +0000
Received: by outflank-mailman (input) for mailman id 479963;
 Tue, 17 Jan 2023 23:09:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHv4g-0003uV-Fe
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:09:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHv4g-0006Pq-0w; Tue, 17 Jan 2023 23:09:02 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHv4f-0002gB-Qu; Tue, 17 Jan 2023 23:09:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=TMdBwipMsR4sGxcGr3hd4B/7d5FHTYOFYAxQ4CRlq8M=; b=dFOeRE9I8CQNkR2rF7qtAM7K9j
	qF9XiUCY/D0YB1kcagLeBPJFEwlLT2NIE2FVaY6DA/sFBAsYVqGhQqx6EGD15T01dAcjHCz7Z3IrZ
	CVdCXPMvrXR7SPxksofq8oWfR/CZHht/w319f2bnFRafgJlJMks5alfXRk4pJN8gZdvY=;
Message-ID: <32811667-4f9c-12f1-7b8f-2b066bc3dee9@xen.org>
Date: Tue, 17 Jan 2023 23:09:00 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 02/40] xen/arm: make ARM_EFI selectable for Arm64
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-3-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-3-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> Currently, ARM_EFI is mandatorily selected by Arm64.
> Even if the user knows for sure that their images will not
> start in the EFI environment, they can't disable the EFI
> support for Arm64. This means there will be about 3K lines
> of unused code in their images.
> 
> So in this patch, we make ARM_EFI selectable for Arm64, and
> based on that, we can use CONFIG_ARM_EFI to gate the EFI
> specific code in head.S for those images that will not be
> booted in an EFI environment.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>

Your signed-off-by is missing.

> ---
> v1 -> v2:
> 1. New patch
> ---
>   xen/arch/arm/Kconfig      | 10 ++++++++--
>   xen/arch/arm/arm64/head.S | 15 +++++++++++++--
>   2 files changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c..ace7178c9a 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -7,7 +7,6 @@ config ARM_64
>   	def_bool y
>   	depends on !ARM_32
>   	select 64BIT
> -	select ARM_EFI
>   	select HAS_FAST_MULTIPLY
>   
>   config ARM
> @@ -37,7 +36,14 @@ config ACPI
>   	  an alternative to device tree on ARM64.
>   
>   config ARM_EFI
> -	bool
> +	bool "UEFI boot service support"
> +	depends on ARM_64
> +	default y
> +	help
> +	  This option provides support for boot services through
> +	  UEFI firmware. A UEFI stub is provided to allow Xen to
> +	  be booted as an EFI application. This is only useful for
> +	  Xen that may run on systems that have UEFI firmware.

I would drop the last sentence, as it is implied by the rest of the 
paragraph.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:17:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479970.744085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvCF-0005Py-8L; Tue, 17 Jan 2023 23:16:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479970.744085; Tue, 17 Jan 2023 23:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvCF-0005Pr-4o; Tue, 17 Jan 2023 23:16:51 +0000
Received: by outflank-mailman (input) for mailman id 479970;
 Tue, 17 Jan 2023 23:16:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHvCE-0005Pl-2Q
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:16:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHvCD-0006kv-J6; Tue, 17 Jan 2023 23:16:49 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHvCD-0002xt-DP; Tue, 17 Jan 2023 23:16:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=y4y2F/N7XxRFF1gg8TL5NCZE3HwrkgzPAbqv97unP2w=; b=sQey8TmkcwlMBjDh8HdI7kZfKE
	+9MRyNiObG/gWX8c7mAsd1i7pdOopSqqc3HnEw6VF3C8qT5KW7YAhgzrT3hrLkg+qVVC39TsOghkp
	V6ERGkQdJ/i0nIimG+9hHmWmeZMr5uhjcxiRS9D1s8LsHX4jGhMkpZ9E2zyRL38tI4Mo=;
Message-ID: <759544c9-7783-c61d-75bd-0a9dab364be2@xen.org>
Date: Tue, 17 Jan 2023 23:16:47 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 03/40] xen/arm: adjust Xen TLB helpers for Armv8-R64
 PMSA
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-4-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-4-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
>  From the Arm ARM Supplement for Armv8-R AArch64 (DDI 0600A) [1],
> section D1.6.2 "TLB maintenance instructions", we know that
> Armv8-R AArch64 permits an implementation to cache stage 1
> VMSAv8-64 and stage 2 PMSAv8-64 attributes as a common entry
> for the Secure EL1&0 translation regime. But Xen itself
> runs with stage 1 PMSAv8-64 on Armv8-R AArch64. The
> EL2 MPU updates for stage 1 PMSAv8-64 will not be cached in
> TLB entries, so we don't need any TLB invalidation for Xen
> itself in EL2.

So I understand the theory here. But I would expect that none of the 
common code will call any of those helpers. Therefore the #ifdef should 
be unnecessary.

Can you clarify if my understanding is correct?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:19:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:19:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479976.744096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvEZ-00060m-KK; Tue, 17 Jan 2023 23:19:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479976.744096; Tue, 17 Jan 2023 23:19:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvEZ-00060f-HN; Tue, 17 Jan 2023 23:19:15 +0000
Received: by outflank-mailman (input) for mailman id 479976;
 Tue, 17 Jan 2023 23:19:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Jq1=5O=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1pHvEY-00060Z-09
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:19:14 +0000
Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com
 [2607:f8b0:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 59e283b4-96bd-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 00:19:12 +0100 (CET)
Received: by mail-pl1-x62a.google.com with SMTP id b17so27610766pld.7
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 15:19:12 -0800 (PST)
Received: from localhost (ec2-13-57-97-131.us-west-1.compute.amazonaws.com.
 [13.57.97.131]) by smtp.gmail.com with ESMTPSA id
 17-20020a630211000000b0049f77341db3sm18049594pgc.42.2023.01.17.15.19.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 17 Jan 2023 15:19:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59e283b4-96bd-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=R7+tOmcqmrckRY4QYVvr7mpyl5qyA1jmpxRjK+JFMSE=;
        b=RfF50XXsNyG3J4oHe2FuELLjhnTz4qNlFBx5RtBf/mdMm4ZVDflbyFSA1iEqESX4aN
         yWZs/18wzDzJlszl5SfctuAePN7cXu0thfw33YHWPFQXpypWEz0qwRbMwdlBYb/VIhjE
         lLHofdy8H16CY2OxxThydElkZCgiFjn+XKXdj97Ph5ZbfO4PL18hXI9+kE2aVc92kKrc
         TwOfC3BR0ctNsXqthMTEaR6Dm93JJZyWJGU/vbsTyjbf9IErLrO1uBcuk6hB0MraKRz4
         Npfnu19q9CjpiMLLOIbr1OUqrhh2eFkG8iJ374D95qJI5pWWIJ7NC31FZl16jSwU7jto
         DZhg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=R7+tOmcqmrckRY4QYVvr7mpyl5qyA1jmpxRjK+JFMSE=;
        b=SNjB7Nn1oJ9AYUdb6Qs9DIJCCp4nivaFkd6el+jiQH0pXimEmr+fPiSuTxvQoTpMNU
         +dGu5gYO0YBCvlDgRSUm2Fbo77Xuvl6AgU+4/AH/o64yuL51h9OEc0wNg4qlX1vpKix2
         er2SdRZw0/yLBSpDSoKv0at07nLIHeI5ycB9usF9d7vceI2qXSdaRRlpAlcgyegGvhla
         D987wmUCLzPOzaCYfBy3FYx0II7ZMoKIycSiEr5348VRFj1WSZ7Zxtuaj9akuGJAbaTM
         ZBEYmn2i64egXNszSSrR2b/TXfLGHLv0dqFdH0j1wHphxeuxfOTh34Gn0upWJu6gZO+I
         2sWA==
X-Gm-Message-State: AFqh2kqmW7WZmfIsIoMLo5wK0OvMlTZboDB7G07cnNPS2rq7pYyIUDcv
	5q1oVnjz9GVcGt1wrmt7UrI=
X-Google-Smtp-Source: AMrXdXvIjeiNTjYbRqS4p4SffP+X7Wfpn6eztztcvcUEB/KQB4Vv2v3jy0yTtzMPf/Z/nm08aNsiQw==
X-Received: by 2002:a05:6a20:7528:b0:b0:3329:c395 with SMTP id r40-20020a056a20752800b000b03329c395mr25126574pzd.30.1673997551195;
        Tue, 17 Jan 2023 15:19:11 -0800 (PST)
Date: Tue, 17 Jan 2023 23:19:09 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v4 2/4] xen/riscv: introduce sbi call to putchar to
 console
Message-ID: <Y8cs7UVt1FH44k7I@bullseye>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>

On Mon, Jan 16, 2023 at 04:39:30PM +0200, Oleksii Kurochko wrote:
> From: Bobby Eshleman <bobby.eshleman@gmail.com>
> 
> The SBI implementation for Xen was originally introduced by
> Bobby Eshleman <bobby.eshleman@gmail.com>, but everything except
> the SBI call for putting a character to the console was removed
> for simplicity.
> 
> The patch introduces the sbi_console_putchar() SBI call, which is
> necessary to implement the initial early_printk.
> 
> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V4:
>     - Nothing changed
> ---
> Changes in V3:
>     - update copyright's year
>     - rename definition of __CPU_SBI_H__ to __ASM_RISCV_SBI_H__
>     - fix indentations
>     - change an author of the commit
> ---
> Changes in V2:
>     - add an explanatory comment about sbi_console_putchar() function.
>     - order the files alphabetically in Makefile
> ---
>  xen/arch/riscv/Makefile          |  1 +
>  xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>  xen/arch/riscv/sbi.c             | 45 ++++++++++++++++++++++++++++++++
>  3 files changed, 80 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/sbi.h
>  create mode 100644 xen/arch/riscv/sbi.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 5a67a3f493..fd916e1004 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,4 +1,5 @@
>  obj-$(CONFIG_RISCV_64) += riscv64/
> +obj-y += sbi.o
>  obj-y += setup.o
>  
>  $(TARGET): $(TARGET)-syms
> diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
> new file mode 100644
> index 0000000000..0e6820a4ed
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/sbi.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> +/*
> + * Copyright (c) 2021-2023 Vates SAS.
> + *
> + * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Taken/modified from Xvisor project with the following copyright:
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + */
> +
> +#ifndef __ASM_RISCV_SBI_H__
> +#define __ASM_RISCV_SBI_H__
> +
> +#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
> +
> +struct sbiret {
> +    long error;
> +    long value;
> +};
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
> +                        unsigned long arg0, unsigned long arg1,
> +                        unsigned long arg2, unsigned long arg3,
> +                        unsigned long arg4, unsigned long arg5);
> +
> +/**
> + * Writes given character to the console device.
> + *
> + * @param ch The data to be written to the console.
> + */
> +void sbi_console_putchar(int ch);
> +
> +#endif /* __ASM_RISCV_SBI_H__ */
> diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
> new file mode 100644
> index 0000000000..dc0eb44bc6
> --- /dev/null
> +++ b/xen/arch/riscv/sbi.c
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/*
> + * Taken and modified from the xvisor project with the copyright Copyright (c)
> + * 2019 Western Digital Corporation or its affiliates and author Anup Patel
> + * (anup.patel@wdc.com).
> + *
> + * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + * Copyright (c) 2021-2023 Vates SAS.
> + */
> +
> +#include <xen/errno.h>
> +#include <asm/sbi.h>
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
> +                        unsigned long arg0, unsigned long arg1,
> +                        unsigned long arg2, unsigned long arg3,
> +                        unsigned long arg4, unsigned long arg5)
> +{
> +    struct sbiret ret;
> +
> +    register unsigned long a0 asm ("a0") = arg0;
> +    register unsigned long a1 asm ("a1") = arg1;
> +    register unsigned long a2 asm ("a2") = arg2;
> +    register unsigned long a3 asm ("a3") = arg3;
> +    register unsigned long a4 asm ("a4") = arg4;
> +    register unsigned long a5 asm ("a5") = arg5;
> +    register unsigned long a6 asm ("a6") = fid;
> +    register unsigned long a7 asm ("a7") = ext;
> +
> +    asm volatile ("ecall"
> +              : "+r" (a0), "+r" (a1)
> +              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
> +              : "memory");
> +    ret.error = a0;
> +    ret.value = a1;
> +
> +    return ret;
> +}
> +
> +void sbi_console_putchar(int ch)
> +{
> +    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
> +}
> -- 
> 2.39.0
> 
> 

Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:22:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:22:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479982.744106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvI0-0007OZ-3O; Tue, 17 Jan 2023 23:22:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479982.744106; Tue, 17 Jan 2023 23:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvI0-0007OS-0P; Tue, 17 Jan 2023 23:22:48 +0000
Received: by outflank-mailman (input) for mailman id 479982;
 Tue, 17 Jan 2023 23:22:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Jq1=5O=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1pHvHy-0007OM-G3
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:22:46 +0000
Received: from mail-pg1-x52d.google.com (mail-pg1-x52d.google.com
 [2607:f8b0:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d8815da9-96bd-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 00:22:45 +0100 (CET)
Received: by mail-pg1-x52d.google.com with SMTP id b12so23212925pgj.6
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 15:22:45 -0800 (PST)
Received: from localhost (ec2-13-57-97-131.us-west-1.compute.amazonaws.com.
 [13.57.97.131]) by smtp.gmail.com with ESMTPSA id
 k18-20020a628412000000b0058103f45d9esm17455477pfd.82.2023.01.17.15.22.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 17 Jan 2023 15:22:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8815da9-96bd-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=0F1zjsrbkwMDQy5hmxNGfzYpEbvTLx9WXzW2it+3pjs=;
        b=Sr54KRLICgOdMM+rOhICQUcuwHzZUh7FNn8UWbOTj7OWUZzeQWQ1K4H95qgNqS8Q53
         YbCagtaxUF3b+xjqhO97vwXkbQzhQfy99eZMNwcqN7onZYaIJSW2Ct8uFj2tdkkYO+f9
         encFoy9WPhph185Z2CQxTbGQtFqDXFt3ia0XP+jhS/2XwyhOvL/VO8AECWck4nx/yy6V
         wgCEP0QiNsTWr9Qon/tINBl6Otkf2SgiKWXWbpCf44KirflE/dhm6ufSjANcVNROT7Lm
         paVI48lc7PF3ukay+jE4dpiYNa7rvbB6gjtH5sLhbvjFLBM89mSXsuzOnDli86FhbBbB
         oTuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0F1zjsrbkwMDQy5hmxNGfzYpEbvTLx9WXzW2it+3pjs=;
        b=SxBVovyTKGmypqiinqzYfnvzXdmFSaMAd6nGPYEiNjkaa7i3kFzX+QKSBdxvp0N+cn
         1kq746NrOhYGrd44nK4ohb0BSsHIe6qOFxy4TFrZuqNT32ricBBry8dXXnmp+A9Qu7Nz
         zqC6mOj1523obGvwOwVGz2Ts2kgD5yZBeX+S5rtG8bFhiQc7JGqHKlYKxdaXkDeQ9q4m
         hx9xSdeSbLuoNAbVsslIJlgom0D+9Hi62FOX1QRUr/9+di07yJhbwW7nBb+5auooGNK2
         7Ls6N7+xp4SwvdrKKBu8MFdELLsZSqOitkiX0nds94NPw7TB44pkAzaWbFWcUJHwltUl
         LHpw==
X-Gm-Message-State: AFqh2kruCT7jU6yFZiVKRIsplFDM/Y1J9W7bKrN193sLLDxI5S0dQW05
	pddGvClBPIxEQZX/PLga/M8=
X-Google-Smtp-Source: AMrXdXtAn0NZ2o859uxwF3urLwXZW8nShedS33qwc8oum+wUKOnH8a+smKZ8VKe4QrMdX2TEVYx6og==
X-Received: by 2002:a62:1851:0:b0:581:95a7:d2f4 with SMTP id 78-20020a621851000000b0058195a7d2f4mr25008998pfy.9.1673997763702;
        Tue, 17 Jan 2023 15:22:43 -0800 (PST)
Date: Tue, 17 Jan 2023 23:22:42 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: Re: [PATCH v4 3/4] xen/riscv: introduce early_printk basic stuff
Message-ID: <Y8ctwgHuXfmSXhqT@bullseye>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <915bd184c6648a1a3bf0ac6a79b5274972bb33dd.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <915bd184c6648a1a3bf0ac6a79b5274972bb33dd.1673877778.git.oleksii.kurochko@gmail.com>

On Mon, Jan 16, 2023 at 04:39:31PM +0200, Oleksii Kurochko wrote:
> Because printk() relies on a serial driver (like the ns16550 driver)
> and drivers require working virtual memory (ioremap()), there is no
> print functionality early in Xen boot.
> 
> The patch introduces the basic early_printk functionality,
> which is enough to print 'hello from C environment'.
> 
> Originally early_printk.{c,h} was introduced by Bobby Eshleman
> (https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
> but some functionality was changed:
> the early_printk() function differs from the original because common
> code isn't being built yet, so there is no vscnprintf().
> 
> This commit adds an early printk implementation built on the putchar
> SBI call.
> 
> As sbi_console_putchar() is already planned for deprecation,
> it is used only temporarily and will be removed or reworked once
> a real UART driver is ready.
> 
> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V4:
>     - Remove "depends on RISCV*" from Kconfig.debug as it is located in
>       an arch-specific folder, so the RISCV configs are enabled by default.
>     - Add "ifdef __riscv_cmodel_medany" to be sure that PC-relative addressing
>       is used, as the early_*() functions can be called from head.S with the
>       MMU off and before relocation (if that is needed at all for RISC-V)
>     - fix code style
> ---
> Changes in V3:
>     - reorder headers in alphabetical order
>     - merge changes related to start_xen() function from "[PATCH v2 7/8]
>       xen/riscv: print hello message from C env" to this patch
>     - remove unneeded parentheses in definition of STACK_SIZE
> ---
> Changes in V2:
>     - introduce STACK_SIZE define.
>     - use consistent padding between instruction mnemonic and operand(s)
> ---
> ---
>  xen/arch/riscv/Kconfig.debug              |  6 +++
>  xen/arch/riscv/Makefile                   |  1 +
>  xen/arch/riscv/early_printk.c             | 45 +++++++++++++++++++++++
>  xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
>  xen/arch/riscv/setup.c                    |  6 ++-
>  5 files changed, 69 insertions(+), 1 deletion(-)
>  create mode 100644 xen/arch/riscv/early_printk.c
>  create mode 100644 xen/arch/riscv/include/asm/early_printk.h
> 
> diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> index e69de29bb2..e139e44873 100644
> --- a/xen/arch/riscv/Kconfig.debug
> +++ b/xen/arch/riscv/Kconfig.debug
> @@ -0,0 +1,6 @@
> +config EARLY_PRINTK
> +    bool "Enable early printk"
> +    default DEBUG
> +    help
> +
> +      Enables early printk debug messages
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index fd916e1004..1a4f1a6015 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,3 +1,4 @@
> +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += sbi.o
>  obj-y += setup.o
> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
> new file mode 100644
> index 0000000000..6bc29a1942
> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> + */
> +#include <asm/early_printk.h>
> +#include <asm/sbi.h>
> +
> +/*
> + * early_*() can be called from head.S with MMU-off.
> + *
> + * The following requirements should be honoured for early_*() to
> + * work correctly:
> + *    It should use PC-relative addressing for accessing symbols.
> + *    To achieve that GCC cmodel=medany should be used.
> + */
> +#ifndef __riscv_cmodel_medany
> +#error "early_*() can be called from head.S before relocation, so it should not use absolute addressing."
> +#endif
> +
> +/*
> + * TODO:
> + *   sbi_console_putchar is already planned for deprecation
> + *   so it should be reworked to use UART directly.
> + */
> +void early_puts(const char *s, size_t nr)
> +{
> +    while ( nr-- > 0 )
> +    {
> +        if ( *s == '\n' )
> +            sbi_console_putchar('\r');
> +        sbi_console_putchar(*s);
> +        s++;
> +    }
> +}
> +
> +void early_printk(const char *str)
> +{
> +    while ( *str )
> +    {
> +        early_puts(str, 1);
> +        str++;
> +    }
> +}
> diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
> new file mode 100644
> index 0000000000..05106e160d
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/early_printk.h
> @@ -0,0 +1,12 @@
> +#ifndef __EARLY_PRINTK_H__
> +#define __EARLY_PRINTK_H__
> +
> +#include <xen/early_printk.h>
> +
> +#ifdef CONFIG_EARLY_PRINTK
> +void early_printk(const char *str);
> +#else
> +static inline void early_printk(const char *s) {};
> +#endif
> +
> +#endif /* __EARLY_PRINTK_H__ */
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index 13e24e2fe1..9c9412152a 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -1,13 +1,17 @@
>  #include <xen/compile.h>
>  #include <xen/init.h>
>  
> +#include <asm/early_printk.h>
> +
>  /* Xen stack for bringing up the first CPU. */
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
>      __aligned(STACK_SIZE);
>  
>  void __init noreturn start_xen(void)
>  {
> -    for ( ;; )
> +    early_printk("Hello from C env\n");
> +
> +    for ( ; ; )
>          asm volatile ("wfi");
>  
>      unreachable();
> -- 
> 2.39.0
> 

Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:24:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:24:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479987.744118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvJQ-0007yV-G5; Tue, 17 Jan 2023 23:24:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479987.744118; Tue, 17 Jan 2023 23:24:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvJQ-0007yO-C5; Tue, 17 Jan 2023 23:24:16 +0000
Received: by outflank-mailman (input) for mailman id 479987;
 Tue, 17 Jan 2023 23:24:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHvJO-0007yG-GT
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:24:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHvJO-0006tM-9l; Tue, 17 Jan 2023 23:24:14 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHvJO-0003Fk-4U; Tue, 17 Jan 2023 23:24:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=2HOq/hXikQ2cSCtlt1XSjWml10OgtcsOj410j6bzY8A=; b=p30m2Yvva3cIaPQqVIQEBbCaxS
	EdTqeKd0N58Qb4Iy5yDHqo/kf7pYhAFtAIOxnn14ciHBPemv2ZmUR8Cn8OgolIBpDPTlekEg7dcH5
	6NQZBkvCIkXrlucoMe85dkpK8b1rCDCP5jxS6UJsFHukuhCFt/gKP3aBAMWxIe+8CvUo=;
Message-ID: <e406484a-aad3-4953-afdb-3159597ec998@xen.org>
Date: Tue, 17 Jan 2023 23:24:12 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "Jiamei . Xie" <jiamei.xie@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-5-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 04/40] xen/arm: add an option to define Xen start
 address for Armv8-R
In-Reply-To: <20230113052914.3845596-5-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> On Armv8-A, Xen has a fixed virtual start address (link address
> too) for all Armv8-A platforms. On an MMU-based system, Xen can
> map its loaded address to this virtual start address. So, on
> Armv8-A platforms, the Xen start address does not need to be
> configurable. But on Armv8-R platforms, there is no MMU to map
> the loaded address to a fixed virtual address, and different
> platforms will have very different address space layouts. So Xen
> cannot use a fixed physical address on MPU-based systems and
> needs to have it configurable.
> 
> In this patch we introduce a Kconfig option for users to define
> the default Xen start address for Armv8-R. Users can enter the
> address at configuration time, or select a tailored platform
> config file from arch/arm/configs.
> 
> And as we have introduced Armv8-R platforms to Xen, the existing
> Arm64 platforms should not be listed in the Armv8-R platform
> list, so we add a !ARM_V8R dependency to these platforms.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Signed-off-by: Jiamei.Xie <jiamei.xie@arm.com>

Your signed-off-by is missing.

> ---
> v1 -> v2:
> 1. Remove the platform header fvp_baser.h.
> 2. Remove the default start address for fvp_baser64.
> 3. Remove the description of default address from commit log.
> 4. Change HAS_MPU to ARM_V8R for Xen start address dependency.
>     No matter Arm-v8r board has MPU or not, it always need to
>     specify the start address.

I don't quite understand the last sentence. Are you saying that it is 
possible to have an ARMv8-R system with neither an MPU nor a page-table?

> ---
>   xen/arch/arm/Kconfig           |  8 ++++++++
>   xen/arch/arm/platforms/Kconfig | 16 +++++++++++++---
>   2 files changed, 21 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index ace7178c9a..c6b6b612d1 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -145,6 +145,14 @@ config TEE
>   	  This option enables generic TEE mediators support. It allows guests
>   	  to access real TEE via one of TEE mediators implemented in XEN.
>   
> +config XEN_START_ADDRESS
> +	hex "Xen start address: keep default to use platform defined address"
> +	default 0
> +	depends on ARM_V8R

It is still pretty unclear to me what would be the difference between 
HAS_MPU and ARM_V8R.

> +	help
> +	  This option allows setting a custom address at which Xen will be
> +	  linked on MPU systems. This address must be aligned to a page size.
> +
>   source "arch/arm/tee/Kconfig"
>   
>   config STATIC_SHM
> diff --git a/xen/arch/arm/platforms/Kconfig b/xen/arch/arm/platforms/Kconfig
> index c93a6b2756..0904793a0b 100644
> --- a/xen/arch/arm/platforms/Kconfig
> +++ b/xen/arch/arm/platforms/Kconfig
> @@ -1,6 +1,7 @@
>   choice
>   	prompt "Platform Support"
>   	default ALL_PLAT
> +	default FVP_BASER if ARM_V8R
>   	---help---
>   	Choose which hardware platform to enable in Xen.
>   
> @@ -8,13 +9,14 @@ choice
>   
>   config ALL_PLAT
>   	bool "All Platforms"
> +	depends on !ARM_V8R
>   	---help---
>   	Enable support for all available hardware platforms. It doesn't
>   	automatically select any of the related drivers.
>   
>   config QEMU
>   	bool "QEMU aarch virt machine support"
> -	depends on ARM_64
> +	depends on ARM_64 && !ARM_V8R
>   	select GICV3
>   	select HAS_PL011
>   	---help---
> @@ -23,7 +25,7 @@ config QEMU
>   
>   config RCAR3
>   	bool "Renesas RCar3 support"
> -	depends on ARM_64
> +	depends on ARM_64 && !ARM_V8R
>   	select HAS_SCIF
>   	select IPMMU_VMSA
>   	---help---
> @@ -31,14 +33,22 @@ config RCAR3
>   
>   config MPSOC
>   	bool "Xilinx Ultrascale+ MPSoC support"
> -	depends on ARM_64
> +	depends on ARM_64 && !ARM_V8R
>   	select HAS_CADENCE_UART
>   	select ARM_SMMU
>   	---help---
>   	Enable all the required drivers for Xilinx Ultrascale+ MPSoC
>   
> +config FVP_BASER
> +	bool "Fixed Virtual Platform BaseR support"
> +	depends on ARM_V8R
> +	help
> +	  Enable platform specific configurations for Fixed Virtual
> +	  Platform BaseR

This seems unrelated to this patch.

> +
>   config NO_PLAT
>   	bool "No Platforms"
> +	depends on !ARM_V8R
>   	---help---
>   	Do not enable specific support for any platform.
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:32:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:32:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.479994.744129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvRV-000111-9N; Tue, 17 Jan 2023 23:32:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 479994.744129; Tue, 17 Jan 2023 23:32:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvRV-00010u-6O; Tue, 17 Jan 2023 23:32:37 +0000
Received: by outflank-mailman (input) for mailman id 479994;
 Tue, 17 Jan 2023 23:32:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHvRS-00010i-T3
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:32:35 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3535f902-96bf-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 00:32:31 +0100 (CET)
Received: from mail-mw2nam12lp2040.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 18:32:22 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5636.namprd03.prod.outlook.com (2603:10b6:208:297::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Tue, 17 Jan
 2023 23:32:20 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.5986.023; Tue, 17 Jan 2023
 23:32:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3535f902-96bf-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1673998351;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=G2i3daRfa0oeNdufIOpYLdInNUCQHAwX6/3DupdRhMo=;
  b=DrTeKk7dLYDL8PD8jody9Ds5PdzxYjmHRKrt7dxGd3ZNWNwLNlPKhATN
   3YvM1OIkj7Rvnvwxc9flh/yuu46Zzw58GfdbBey0Z6MAjPOc+e77Gmf1q
   VHfTknIR1kIHUYDeOoDdghU5yOYKSCrHd2C4VRWVVvrqgMFdR0NwzMy+J
   k=;
X-IronPort-RemoteIP: 104.47.66.40
X-IronPort-MID: 92529949
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,224,1669093200"; 
   d="scan'208";a="92529949"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P/7kbe3nw2mwrPD/UmZn753YOIH4khQrYkUjVpIweDEIAC0P/9s+txZbZTY7viqExsLICr2//kKOISpKC19sUU6Boa0vI8dGuKOQgK9E2IA3TQcONI2Jc6nXuSga9h43TKZ/vpW+s9PwLuw1HEFAUZ12wwb57vx61C9KO6yVZrLbKEPfsLB6ZZmX5ifwPAB+Qef89+IQzxz0pfBH2Fe4akmYBy/tYNiuKo5L9iqrr4dtxpKLNsmlXSmwyjf/q9FgLN3oc1NAfhiysTvv7QIS+KnasEb+ABNKJbIn4k8VAPdFCediqF6JtBk2ZuOFub+A28h7bDWp/RQDd3dvcb3LMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=G2i3daRfa0oeNdufIOpYLdInNUCQHAwX6/3DupdRhMo=;
 b=U8eF4pcnwlgcimHwKFUfY7hhf9qqJ/k9jYaA/C3Ub/B9/40u9reypbb9+S7Gy1eUrj1iOEUNJtrFUoz1Dsa1t2ck50+0MMBNabsT9QoGn6Yr3hXYTjTnGreNcXlYHBEnqJwxWqvepOhCzRTXVdqwnE33iA3MO/+S4C32xF90+nnVxFwAQqF0oJd/rpfeHWGxreRTEjmNdO8S/kB0Vdn+LYYe/YE7Tte6YNk601nW0bE4cgcm6KUhVdSjrLPcAARJyM5mz+uZUHLJjBfh8J2iIwhnDZSQgtz+6AaOTCs3z8wDo3s/T0mR/i9hviBEc5+P3ZDsnOkPpvWPQhW+0ovyfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G2i3daRfa0oeNdufIOpYLdInNUCQHAwX6/3DupdRhMo=;
 b=MBVigsYysJQzoyQYtdjCvVWd2KMc27YJnPv/35Mvkb6c8wtLNbCT9nbSwM3MhFKvdXW5WgjyK554l3UhZv90eIcJuM2r8wqFGWKNJO6hwncLjDoNIZEr+WR49TSx4lKOmGZGmr117ge0QqrHs1gXV1A3eq61oaMMGtBRqkMarQg=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>, Bob Eshleman
	<bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v4 2/4] xen/riscv: introduce sbi call to putchar to
 console
Thread-Topic: [PATCH v4 2/4] xen/riscv: introduce sbi call to putchar to
 console
Thread-Index: AQHZKbhhqgcYVfmmBkmkfqhgFJ6YQ66jRHCA
Date: Tue, 17 Jan 2023 23:32:20 +0000
Message-ID: <7918f456-14ff-77b2-3cdb-1e879e030b39@citrix.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BLAPR03MB5636:EE_
x-ms-office365-filtering-correlation-id: 09bf7616-1aa4-426a-fb43-08daf8e313ee
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <EFFD7868C655B34E817A535B4D50C1C6@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 09bf7616-1aa4-426a-fb43-08daf8e313ee
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 23:32:20.1883
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Bk2LRP8S6wP2L+ZR/OV/eud5I59JChasNwNN2mdJT/f/UsikXrqqhux4wlJfkHSS80PTuam/gBpeVgQnHBZb1sJWCNH/oLHlU/f5VwOruz4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5636

On 16/01/2023 2:39 pm, Oleksii Kurochko wrote:
> diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
> new file mode 100644
> index 0000000000..dc0eb44bc6
> --- /dev/null
> +++ b/xen/arch/riscv/sbi.c
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/*
> + * Taken and modified from the xvisor project with the copyright Copyright (c)
> + * 2019 Western Digital Corporation or its affiliates and author Anup Patel
> + * (anup.patel@wdc.com).
> + *
> + * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + * Copyright (c) 2021-2023 Vates SAS.
> + */
> +
> +#include <xen/errno.h>

Unused header.  All this file needs (in this form at least) is asm/sbi.h

> +#include <asm/sbi.h>
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
> +                        unsigned long arg0, unsigned long arg1,
> +                        unsigned long arg2, unsigned long arg3,
> +                        unsigned long arg4, unsigned long arg5)
> +{
> +    struct sbiret ret;
> +
> +    register unsigned long a0 asm ("a0") = arg0;
> +    register unsigned long a1 asm ("a1") = arg1;
> +    register unsigned long a2 asm ("a2") = arg2;
> +    register unsigned long a3 asm ("a3") = arg3;
> +    register unsigned long a4 asm ("a4") = arg4;
> +    register unsigned long a5 asm ("a5") = arg5;
> +    register unsigned long a6 asm ("a6") = fid;
> +    register unsigned long a7 asm ("a7") = ext;
> +
> +    asm volatile ("ecall"
> +              : "+r" (a0), "+r" (a1)
> +              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
> +              : "memory");

Indentation.  Each colon wants 4 more spaces in front of it.

Both can be fixed on commit.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:37:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:37:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480001.744140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvVn-0001hT-UY; Tue, 17 Jan 2023 23:37:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480001.744140; Tue, 17 Jan 2023 23:37:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvVn-0001hM-Rc; Tue, 17 Jan 2023 23:37:03 +0000
Received: by outflank-mailman (input) for mailman id 480001;
 Tue, 17 Jan 2023 23:37:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHvVm-0001hG-DZ
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:37:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHvVm-00077R-2k; Tue, 17 Jan 2023 23:37:02 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHvVl-0003gh-U0; Tue, 17 Jan 2023 23:37:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=IKfIAhQJC6HRnc793AOkK3DmG185u2zwyKSHZ5OwvUw=; b=mdy8n5csz9O7HOc9uS83OEn+Nb
	J04NUKZSwA57QNy7C79V5sIuXJAtVyPcZX0TPe7WCbjVX+7wQGGefXeeGrTCcbM3DVmQcMRGHQ96/
	1cflyo65ISDrhbBiNsFASW6A2pstCWLSmDMYtocDBrrze4N+PgR7Zh9BMP2OlaCqj3dU=;
Message-ID: <f78755d8-0b43-ebe4-4b2c-c88875347796@xen.org>
Date: Tue, 17 Jan 2023 23:37:00 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-6-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related code
 from head.S
In-Reply-To: <20230113052914.3845596-6-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> We want to reuse head.S for MPU systems, but there are some
> code implemented for MMU systems only. We will move such
> code to another MMU specific file. But before that, we will
> do some preparations in this patch to make them easier
> for reviewing:

Well, I agree that...

> 1. Fix the indentations of code comments.

... changing the indentation is better here. But...

> 2. Export some symbols that will be accessed out of file
>     scope.

... I have no idea which functions are going to be used in a separate 
file. So I think they should belong to the patch moving the code.

> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>

Your signed-off-by is missing.

> ---
> v1 -> v2:
> 1. New patch.
> ---
>   xen/arch/arm/arm64/head.S | 40 +++++++++++++++++++--------------------
>   1 file changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 93f9b0b9d5..b2214bc5e3 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -136,22 +136,22 @@
>           add \xb, \xb, x20
>   .endm
>   
> -        .section .text.header, "ax", %progbits
> -        /*.aarch64*/
> +.section .text.header, "ax", %progbits
> +/*.aarch64*/

This change is not mentioned.

>   
> -        /*
> -         * Kernel startup entry point.
> -         * ---------------------------
> -         *
> -         * The requirements are:
> -         *   MMU = off, D-cache = off, I-cache = on or off,
> -         *   x0 = physical address to the FDT blob.
> -         *
> -         * This must be the very first address in the loaded image.
> -         * It should be linked at XEN_VIRT_START, and loaded at any
> -         * 4K-aligned address.  All of text+data+bss must fit in 2MB,
> -         * or the initial pagetable code below will need adjustment.
> -         */
> +/*
> + * Kernel startup entry point.
> + * ---------------------------
> + *
> + * The requirements are:
> + *   MMU = off, D-cache = off, I-cache = on or off,
> + *   x0 = physical address to the FDT blob.
> + *
> + * This must be the very first address in the loaded image.
> + * It should be linked at XEN_VIRT_START, and loaded at any
> + * 4K-aligned address.  All of text+data+bss must fit in 2MB,
> + * or the initial pagetable code below will need adjustment.
> + */
>   
>   GLOBAL(start)
>           /*
> @@ -586,7 +586,7 @@ ENDPROC(cpu_init)
>    *
>    * Clobbers x0 - x4
>    */
> -create_page_tables:
> +ENTRY(create_page_tables)

I am not sure about keeping this name. Now we have create_page_tables() 
and arch_setup_page_tables().

I would consider naming it create_boot_page_tables().

>           /* Prepare the page-tables for mapping Xen */
>           ldr   x0, =XEN_VIRT_START
>           create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
> @@ -680,7 +680,7 @@ ENDPROC(create_page_tables)
>    *
>    * Clobbers x0 - x3
>    */
> -enable_mmu:
> +ENTRY(enable_mmu)
>           PRINT("- Turning on paging -\r\n")
>   
>           /*
> @@ -714,7 +714,7 @@ ENDPROC(enable_mmu)
>    *
>    * Clobbers x0 - x1
>    */
> -remove_identity_mapping:
> +ENTRY(remove_identity_mapping)

Patch #14 should be before this patch. So you don't have to export 
remove_identity_mapping temporarily.

This will also avoid (transient) naming confusion with my work (see [1]).

>           /*
>            * Find the zeroeth slot used. Remove the entry from zeroeth
>            * table if the slot is not XEN_ZEROETH_SLOT.
> @@ -775,7 +775,7 @@ ENDPROC(remove_identity_mapping)
>    *
>    * Clobbers x0 - x3
>    */
> -setup_fixmap:
> +ENTRY(setup_fixmap)
>   #ifdef CONFIG_EARLY_PRINTK
>           /* Add UART to the fixmap table */
>           ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
> @@ -871,7 +871,7 @@ ENDPROC(init_uart)
>    * x0: Nul-terminated string to print.
>    * x23: Early UART base address
>    * Clobbers x0-x1 */
> -puts:
> +ENTRY(puts)

This name is a bit too generic to be globally exported. It is also now 
quite confusing because we have "early_puts" and "puts".

I would consider naming it asm_puts(). It is still not great, but 
hopefully it gives a hint that it should be called from assembly code.

>           early_uart_ready x23, 1
>           ldrb  w1, [x0], #1           /* Load next char */
>           cbz   w1, 1f                 /* Exit on nul */

Cheers,

[1] https://lore.kernel.org/all/20230113101136.479-13-julien@xen.org/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:46:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:46:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480008.744151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvei-0003Br-S1; Tue, 17 Jan 2023 23:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480008.744151; Tue, 17 Jan 2023 23:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvei-0003Bk-Oy; Tue, 17 Jan 2023 23:46:16 +0000
Received: by outflank-mailman (input) for mailman id 480008;
 Tue, 17 Jan 2023 23:46:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHveh-0003Be-Be
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:46:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHveg-0007RG-S2; Tue, 17 Jan 2023 23:46:14 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHveg-00047M-NV; Tue, 17 Jan 2023 23:46:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=4/t/YTEFLF1NRZaHXm6APHNRymjxIA+orZM+fLdbCog=; b=FVlWJz2Xgz78LMG/5anDFqnOIj
	XekJLrOVhpqs5RnoEhxvaEvUhwTHcoWVDsU/uJyKdvHXPeunq3HEQpFEfEUjUbJs0nK6X1SuZ+/Lo
	+G23pq66OjX+BSlEfOsVLgzu3yCbBIbf8iRwBNDuRqpkRCUOGBS6pIdbRf9ebkCxqWfY=;
Message-ID: <4b817b65-f558-b4df-c7fd-242a04e59a59@xen.org>
Date: Tue, 17 Jan 2023 23:46:13 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-8-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity map
 sections
In-Reply-To: <20230113052914.3845596-8-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> Only the first 4KB of Xen image will be mapped as identity
> (PA == VA). At the moment, Xen guarantees this by having
> everything that needs to be used in the identity mapping
> in head.S before _end_boot and checking at link time if this
> fits in 4KB.
> 
> In previous patch, we have moved the MMU code outside of
> head.S. Although we have added .text.header to the new file
> to guarantee all identity map code still in the first 4KB.
> However, the order of these two files on this 4KB depends
> on the build tools. Currently, we use the build tools to
> process the order of objs in the Makefile to ensure that
> head.S must be at the top. But if you change to another build
> tools, it may not be the same result.

Right, so this is fixing a bug you introduced in the previous patch. We 
should really avoid introducing (latent) regressions in a series, so 
please re-order the patches.

> 
> In this patch we introduce .text.idmap to head_mmu.S, and
> add this section after .text.header. to ensure code of
> head_mmu.S after the code of header.S.
> 
> After this, we will still include some code that does not
> belong to identity map before _end_boot. Because we have
> moved _end_boot to head_mmu.S. 

I dislike this approach because you are expecting that only head_mmu.S 
will be part of .text.idmap. If it is not, everything could blow up again.

That said, if you look at staging, you will notice that now _end_boot is 
defined in the linker script to avoid any issue.

> That means all code in head.S
> will be included before _end_boot. In this patch, we also
> added .text flag in the place of original _end_boot in head.S.
> All the code after .text in head.S will not be included in
> identity map section.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
> v1 -> v2:
> 1. New patch.
> ---
>   xen/arch/arm/arm64/head.S     | 6 ++++++
>   xen/arch/arm/arm64/head_mmu.S | 2 +-
>   xen/arch/arm/xen.lds.S        | 1 +
>   3 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 5cfa47279b..782bd1f94c 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -466,6 +466,12 @@ fail:   PRINT("- Boot failed -\r\n")
>           b     1b
>   ENDPROC(fail)
>   
> +/*
> + * For the code that do not need in indentity map section,
> + * we put them back to normal .text section
> + */
> +.section .text, "ax", %progbits
> +

I would argue that puts wants to be part of the idmap.

>   #ifdef CONFIG_EARLY_PRINTK
>   /*
>    * Initialize the UART. Should only be called on the boot CPU.
> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> index e2c8f07140..6ff13c751c 100644
> --- a/xen/arch/arm/arm64/head_mmu.S
> +++ b/xen/arch/arm/arm64/head_mmu.S
> @@ -105,7 +105,7 @@
>           str   \tmp2, [\tmp3, \tmp1, lsl #3]
>   .endm
>   
> -.section .text.header, "ax", %progbits
> +.section .text.idmap, "ax", %progbits
>   /*.aarch64*/
>   
>   /*
> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index 92c2984052..bc45ea2c65 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -33,6 +33,7 @@ SECTIONS
>     .text : {
>           _stext = .;            /* Text section */
>          *(.text.header)
> +       *(.text.idmap)
>   
>          *(.text.cold)
>          *(.text.unlikely .text.*_unlikely .text.unlikely.*)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:49:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:49:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480014.744162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvhg-0003nL-9M; Tue, 17 Jan 2023 23:49:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480014.744162; Tue, 17 Jan 2023 23:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvhg-0003nE-6Q; Tue, 17 Jan 2023 23:49:20 +0000
Received: by outflank-mailman (input) for mailman id 480014;
 Tue, 17 Jan 2023 23:49:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pHvhf-0003n8-7d
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:49:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHvhf-0007Tn-0P; Tue, 17 Jan 2023 23:49:19 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pHvhe-0004C2-Rn; Tue, 17 Jan 2023 23:49:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <09e4c2ef-eddf-e798-573b-68744a061d68@xen.org>
Date: Tue, 17 Jan 2023 23:49:17 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 08/40] xen/arm: use PA == VA for
 EARLY_UART_VIRTUAL_ADDRESS on Armv-8R
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-9-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-9-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> There is no VMSA support on Armv8-R AArch64, so we can not map early
> UART to FIXMAP_CONSOLE. Instead, we use PA == VA to define
> EARLY_UART_VIRTUAL_ADDRESS on Armv8-R AArch64.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>

Your signed-off-by is missing.

> ---
> 1. New patch
> ---
>   xen/arch/arm/include/asm/early_printk.h | 12 ++++++++++++
>   1 file changed, 12 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
> index c5149b2976..44a230853f 100644
> --- a/xen/arch/arm/include/asm/early_printk.h
> +++ b/xen/arch/arm/include/asm/early_printk.h
> @@ -15,10 +15,22 @@
>   
>   #ifdef CONFIG_EARLY_PRINTK
>   
> +#ifdef CONFIG_ARM_V8R

Shouldn't this be CONFIG_HAS_MPU?

> +
> +/*
> + * For Armv-8r, there is not VMSA support in EL2, so we use VA == PA

s/not/no/

> + * for EARLY_UART_VIRTUAL_ADDRESS.
> + */
> +#define EARLY_UART_VIRTUAL_ADDRESS CONFIG_EARLY_UART_BASE_ADDRESS
> +
> +#else
> +
>   /* need to add the uart address offset in page to the fixmap address */
>   #define EARLY_UART_VIRTUAL_ADDRESS \
>       (FIXMAP_ADDR(FIXMAP_CONSOLE) + (CONFIG_EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
>   
> +#endif /* CONFIG_ARM_V8R */
> +
>   #endif /* !CONFIG_EARLY_PRINTK */
>   
>   #endif

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 17 23:57:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Jan 2023 23:57:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480020.744173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvpY-0005Fx-4W; Tue, 17 Jan 2023 23:57:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480020.744173; Tue, 17 Jan 2023 23:57:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHvpY-0005Fq-1c; Tue, 17 Jan 2023 23:57:28 +0000
Received: by outflank-mailman (input) for mailman id 480020;
 Tue, 17 Jan 2023 23:57:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kjsf=5O=citrix.com=prvs=374a1453b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pHvpW-0005Fj-HR
 for xen-devel@lists.xenproject.org; Tue, 17 Jan 2023 23:57:26 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id af5c1deb-96c2-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 00:57:24 +0100 (CET)
Received: from mail-dm6nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Jan 2023 18:57:16 -0500
Received: from BN7PR03MB3618.namprd03.prod.outlook.com (2603:10b6:406:c3::27)
 by BN9PR03MB6075.namprd03.prod.outlook.com (2603:10b6:408:118::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Tue, 17 Jan
 2023 23:57:14 +0000
Received: from BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::a8de:fe00:94a6:dd09]) by BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::a8de:fe00:94a6:dd09%7]) with mapi id 15.20.6002.013; Tue, 17 Jan 2023
 23:57:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af5c1deb-96c2-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Bobby
 Eshleman <bobby.eshleman@gmail.com>
Subject: Re: [PATCH v4 3/4] xen/riscv: introduce early_printk basic stuff
Thread-Topic: [PATCH v4 3/4] xen/riscv: introduce early_printk basic stuff
Thread-Index: AQHZKbhjN4vcDNwMek+QkGCbNS8YNK6jS2WA
Date: Tue, 17 Jan 2023 23:57:13 +0000
Message-ID: <d0cabe82-315e-408c-7364-33e2b5093ee6@citrix.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <915bd184c6648a1a3bf0ac6a79b5274972bb33dd.1673877778.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <915bd184c6648a1a3bf0ac6a79b5274972bb33dd.1673877778.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <11AAC1C925A2174BAB645FC82B543494@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN7PR03MB3618.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4c84a96d-2436-4e88-157c-08daf8e68e49
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jan 2023 23:57:13.9042
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 0Q1euHeT+FzkPxgKtOiBkiY1Otmu6qsdyKkEupdksh6JDd/geaSBt0SJH8ZfQYqQwzKPN9CY8es1i/UkLXN4GuPOUkpQ8rGNjTc8ay9ikbQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR03MB6075

On 16/01/2023 2:39 pm, Oleksii Kurochko wrote:
> diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> index e69de29bb2..e139e44873 100644
> --- a/xen/arch/riscv/Kconfig.debug
> +++ b/xen/arch/riscv/Kconfig.debug
> @@ -0,0 +1,6 @@
> +config EARLY_PRINTK
> +    bool "Enable early printk"
> +    default DEBUG
> +    help
> +
> +      Enables early printk debug messages

Kconfig indentation is a little hard to get used to.

It's one tab for the main block, and one tab + 2 spaces for the help text.

Also, drop the blank line after help.

> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index fd916e1004..1a4f1a6015 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,3 +1,4 @@
> +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += sbi.o
>  obj-y += setup.o
> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
> new file mode 100644
> index 0000000000..6bc29a1942
> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> + */
> +#include <asm/early_printk.h>
> +#include <asm/sbi.h>
> +
> +/*
> + * early_*() can be called from head.S with MMU-off.
> + *
> + * The following requiremets should be honoured for early_*() to
> + * work correctly:
> + *    It should use PC-relative addressing for accessing symbols.
> + *    To achieve that GCC cmodel=medany should be used.
> + */
> +#ifndef __riscv_cmodel_medany
> +#error "early_*() can be called from head.S before relocate so it should not use absolute addressing."
> +#endif

This is incorrect.

What *this* file is compiled with has no bearing on how head.S calls
us.  The RISC-V documentation explaining __riscv_cmodel_medany vs
__riscv_cmodel_medlow calls this point out specifically.  There's
nothing you can put here to check that head.S gets compiled with medany.

Right now, there's nothing in this file dependent on either mode, and
it's not liable to change in the short term.  Furthermore, Xen isn't
doing any relocation in the first place.

We will want to support XIP in due course, and that will be compiled
__riscv_cmodel_medlow, which is a fine and legitimate usecase.


The build system sets the model up consistently.  All you are doing by
putting this in is creating work that someone is going to have to delete
for legitimate reasons in the future.

> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index 13e24e2fe1..9c9412152a 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -1,13 +1,17 @@
>  #include <xen/compile.h>
>  #include <xen/init.h>
>  
> +#include <asm/early_printk.h>
> +
>  /* Xen stack for bringing up the first CPU. */
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
>      __aligned(STACK_SIZE);
>  
>  void __init noreturn start_xen(void)
>  {
> -    for ( ;; )
> +    early_printk("Hello from C env\n");
> +
> +    for ( ; ; )

Rebasing error?

~Andrew
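
As a concrete sketch of the indentation convention described in this
message (one tab for the option body, one tab plus two spaces for the
help text, and no blank line after "help"), the Kconfig entry would
read roughly:

```
config EARLY_PRINTK
	bool "Enable early printk"
	default DEBUG
	help
	  Enables early printk debug messages
```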


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 00:16:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 00:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480028.744184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHw7f-0008Kv-V9; Wed, 18 Jan 2023 00:16:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480028.744184; Wed, 18 Jan 2023 00:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHw7f-0008Ko-SE; Wed, 18 Jan 2023 00:16:11 +0000
Received: by outflank-mailman (input) for mailman id 480028;
 Wed, 18 Jan 2023 00:16:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tkj8=5P=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pHw7e-0008Ki-0J
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 00:16:10 +0000
Received: from sonic308-8.consmr.mail.gq1.yahoo.com
 (sonic308-8.consmr.mail.gq1.yahoo.com [98.137.68.32])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4c956998-96c5-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 01:16:07 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic308.consmr.mail.gq1.yahoo.com with HTTP; Wed, 18 Jan 2023 00:16:04 +0000
Received: by hermes--production-bf1-6cb45cc684-6j92l (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 5c5588657972730fa997ec088c335621; 
 Wed, 18 Jan 2023 00:15:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c956998-96c5-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <b6f7d6dd-3b9b-2cc7-32ab-8521802e1fed@aol.com>
Date: Tue, 17 Jan 2023 19:15:57 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: Igor Mammedov <imammedo@redhat.com>,
 Chuck Zmudzinski <brchuckz@netscape.net>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, Anthony Perard <anthony.perard@citrix.com>,
 Thomas Huth <thuth@redhat.com>, Eric Auger <eric.auger@redhat.com>,
 Alex Williamson <alex.williamson@redhat.com>, Peter Xu <peterx@redhat.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
 <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
 <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
 <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
 <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
 <20230117120416.0aa041d6@imammedo.users.ipa.redhat.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230117120416.0aa041d6@imammedo.users.ipa.redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 6962

On 1/17/2023 6:04 AM, Igor Mammedov wrote:
> On Mon, 16 Jan 2023 13:00:53 -0500
> Chuck Zmudzinski <brchuckz@netscape.net> wrote:
>
> > On 1/16/23 10:33, Igor Mammedov wrote:
> > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > >   
> > >> On 1/13/23 4:33 AM, Igor Mammedov wrote:  
> > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > >> >     
> > >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:    
> > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:      
> > >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> > >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> > >> >> >> always call it there unconditionally -- basically turning three lines
> > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> > >> >> >> Michael further suggests to rename it to something more general. All
> > >> >> >> in all no big changes required.      
> > >> >> > 
> > >> >> > yes, exactly.
> > >> >> >       
> > >> >> 
> > >> >> OK, got it. I can do that along with the other suggestions.    
> > >> > 
> > >> > have you considered instead of reservation, putting a slot check in device model
> > >> > and if it's intel igd being passed through, fail at realize time  if it can't take
> > >> > required slot (with a error directing user to fix command line)?    
> > >> 
> > >> Yes, but the core pci code currently already fails at realize time
> > >> with a useful error message if the user tries to use slot 2 for the
> > >> igd, because of the xen platform device which has slot 2. The user
> > >> can fix this without patching qemu, but having the user fix it on
> > >> the command line is not the best way to solve the problem, primarily
> > >> because the user would need to hotplug the xen platform device via a
> > >> command line option instead of having the xen platform device added by
> > >> pc_xen_hvm_init functions almost immediately after creating the pci
> > >> bus, and that delay in adding the xen platform device degrades
> > >> startup performance of the guest.
> > >>   
> > >> > That could be less complicated than dealing with slot reservations at the cost of
> > >> > being less convenient.    
> > >> 
> > >> And also a cost of reduced startup performance  
> > > 
> > > Could you clarify how it affects performance (and how much).
> > > (as I see, setup done at board_init time is roughly the same
> > > as with '-device foo' CLI options, modulo time needed to parse
> > > options which should be negligible. and both ways are done before
> > > guest runs)  
> > 
> > I preface my answer by saying there is a v9, but you don't
> > need to look at that. I will answer all your questions here.
> > 
> > I am going by what I observe on the main HDMI display with the
> > different approaches. With the approach of not patching Qemu
> > to fix this, which requires adding the Xen platform device a
> > little later, the length of time it takes to fully load the
> > guest is increased. I also noticed with Linux guests that use
the grub bootloader, the grub vga driver cannot display the
> > grub boot menu at the native resolution of the display, which
> > in the tested case is 1920x1080, when the Xen platform device
> > is added via a command line option instead of by the
pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
to Qemu, the grub menu is displayed at the full 1920x1080
> > native resolution of the display. Once the guest fully loads,
> > there is no noticeable difference in performance. It is mainly
> > a degradation in startup performance, not performance once
> > the guest OS is fully loaded.
>
> Looking at igd-assign.txt, it recommends to add IGD using '-device' CLI
> option, and actually drop at least graphics defaults explicitly.
> So it is expected to work fine even when IGD is constructed with
> '-device'.
>
> Could you provide full CLI current xen starts QEMU with and then
> a CLI you used (with explicit -device for IGD) that leads
> to reduced performance?
>
> CCing vfio folks who might have an idea what could be wrong based
> on vfio experience.

Actually, the igd is not added with an explicit -device option using Xen:

   1573 ?        Ssl    0:42 /usr/bin/qemu-system-i386 -xen-domid 1 -no-shutdown -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-1,server,nowait -mon chardev=libxl-cmd,mode=control -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-1,server,nowait -mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config -name windows -vnc none -display none -serial pty -boot order=c -smp 4,maxcpus=4 -net none -machine xenfv,max-ram-below-4g=3758096384,igd-passthru=on -m 6144 -drive file=/dev/loop0,if=ide,index=0,media=disk,format=raw,cache=writeback -drive file=/dev/disk/by-uuid/A44AA4984AA468AE,if=ide,index=1,media=disk,format=raw,cache=writeback

I think it is added by xl (libxl management tool) when the guest is created
using the qmp-libxl socket that appears on the command line, but I am not 100
percent sure. So, with libxl, the command line alone does not tell the whole
story. The xl.cfg file has a line like this to define the pci devices passed through,
and in qemu they are type XEN_PT devices, not VFIO devices:

pci = [ '00:1b.0','00:14.0','00:02.0@02' ]

This means three host pci devices are passed through: the ones on the
host at slots 1b.0, 14.0, and 02.0. Of course the device at 02 is the igd.
The @02 means libxl is requesting slot 2 in the guest for the igd; the
other 2 devices are just auto-assigned a slot by Qemu. Qemu cannot
assign the igd to slot 2 for xenfv machines without a patch that prevents
the Xen platform device from grabbing slot 2. That is what this patch
accomplishes. The workaround involves using the Qemu pc machine
instead of the Qemu xenfv machine, which avoids the code in Qemu
that adds the Xen platform device at slot 2; the Xen platform device
is then added via a command line option at slot 3 instead of slot 2.
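As an aside for readers following the slot discussion, the realize-time alternative Igor suggested upthread can be sketched in a few lines. This is a hypothetical illustration only — `igd_check_slot` is an invented name, not QEMU code — using the standard PCI devfn encoding (device/slot in bits 7:3, function in bits 2:0):

```c
#include <stdio.h>

/* Standard PCI devfn encoding: device (slot) in bits 7:3, function in bits 2:0. */
#define PCI_DEVFN(slot, func)  ((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_SLOT(devfn)        (((devfn) >> 3) & 0x1f)

/*
 * Hypothetical sketch of a realize-time check (invented name, not a QEMU
 * function): instead of reserving slot 2 up front, fail with a message
 * telling the user how to fix the xl.cfg pci= entry if the igd did not
 * land in slot 2. Returns 0 on success, -1 on the wrong slot.
 */
static int igd_check_slot(int devfn)
{
    if (PCI_SLOT(devfn) != 2) {
        fprintf(stderr,
                "igd passthrough requires guest slot 2, got slot %d; "
                "request it with '@02' in the xl.cfg pci= entry\n",
                PCI_SLOT(devfn));
        return -1;
    }
    return 0;
}
```

A check like this trades the convenience of automatic reservation for an explicit, actionable error at realize time, which is exactly the trade-off weighed above.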

The differences between vfio and the Xen pci passthrough device
might explain behavioral differences between Xen and kvm for igd-passthru.

Also, kvm does not use the Xen platform device, and it seems the
Xen guests behave better at startup when the Xen platform device
is added very early, during the initialization of the emulated devices
of the machine, which is based on the i440fx piix3 machine, instead
of being added using a command line option. Perhaps the performance
at startup could be improved by adding the igd via a command line
option using vfio instead of the canonical way that libxl does pci
passthrough on Xen, but I have no idea if vfio works on Xen the way it
works on kvm. I am willing to investigate and experiment with it, though.

So if any vfio people can shed some light on this, that would help.

Thanks,

Chuck


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 00:33:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 00:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480034.744194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHwOJ-0002Gs-E2; Wed, 18 Jan 2023 00:33:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480034.744194; Wed, 18 Jan 2023 00:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHwOJ-0002Gl-At; Wed, 18 Jan 2023 00:33:23 +0000
Received: by outflank-mailman (input) for mailman id 480034;
 Wed, 18 Jan 2023 00:33:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fuL9=5P=gmail.com=julien.grall@srs-se1.protection.inumbo.net>)
 id 1pHwOI-0002Gf-Ao
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 00:33:22 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b4c597d2-96c7-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 01:33:19 +0100 (CET)
Received: by mail-ej1-x630.google.com with SMTP id ss4so72188349ejb.11
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 16:33:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4c597d2-96c7-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=0O0fNqSjOCmHJ7YEyu+NfYWhs4mN0JgFNhe8XAV3gEQ=;
        b=ZyXIEH8s2WcKyX/TKVAbw3URKTjTCTlSjs5/5XKJiVHD6k+zjAIMxlXVcxFv8K6cWE
         yl4AEEQI5lgg12JN51gNzyH235Fo9GOMGtsctMClC+LbYiSKVaLonAEhOfGy6ecRIQQR
         BDhi915vn82g05YELOvD17eQQLPpk680qkeoZJnnmQaizqa0g+r9AhRhzmOY+n1SjFui
         u6+JdPGUHCEjIgGSJ5PHgmxQcjgcF4B/2MOJq3v7aPP4wSGvLUdc8F0DlD1+DR4eZKSX
         i1ulw6KnTt6hQYOnw/a77AqWHdGvcswd3Xh6M9llXEi/0+x+nSpY4oA7pNBIpCEjGDN5
         6laQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=0O0fNqSjOCmHJ7YEyu+NfYWhs4mN0JgFNhe8XAV3gEQ=;
        b=U7qsrS+mQH/n9Bc+F0frM54UWJ0m8yL9ODLnEy/gm0XQy7LDIPw+KqKe3ncWD27X1S
         9zAjNnxtH4nPGUTbAucrqF5no1/Sjw4kjyd6hfKOJJh8948FmF0okNTrnoVrmd2f4K/G
         odblHVQ86RZcyq6NXJIsCC7flrp3zHmZMFYec8ZVojkQ3oKvtL09VWQdLDMcWFbVnqTI
         cyV+fNNmBVnkpPlgOqBWT66jQSZPmANhhgFl9KmqOekjxG251Ybmc56j4zUdZuzWX89s
         L/35nsFxkrr0Qs9Xw2ogHsGuxkWGFquV9J1ep7NyZ46wQjwVaY5uiwU8xrrGc3WxmdO4
         OTBg==
X-Gm-Message-State: AFqh2kpH3HJ8NYyAqO+wVGMs2GCthXvXX6i8FlDKxC5DW5HhupXGIwtL
	FyxFrNgzLpHwya/x9gTQU6hEYXy5O14586yIqZU=
X-Google-Smtp-Source: AMrXdXtc59Xba6R6XpxF4vSslUNprZYwfNUw6kAL7Am0ntV8BN41cLM7g0CCCV91VFvu0+ec/6NCNVE5UmzXEXFufrs=
X-Received: by 2002:a17:906:5c8:b0:86b:914a:572 with SMTP id
 t8-20020a17090605c800b0086b914a0572mr446015ejt.39.1674001999039; Tue, 17 Jan
 2023 16:33:19 -0800 (PST)
MIME-Version: 1.0
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <915bd184c6648a1a3bf0ac6a79b5274972bb33dd.1673877778.git.oleksii.kurochko@gmail.com>
 <d0cabe82-315e-408c-7364-33e2b5093ee6@citrix.com>
In-Reply-To: <d0cabe82-315e-408c-7364-33e2b5093ee6@citrix.com>
From: Julien Grall <julien.grall@gmail.com>
Date: Wed, 18 Jan 2023 00:33:08 +0000
Message-ID: <CAF3u54C2ewEfBN+ZT6VPaVu4vsqS_+12gr3YJ_jsg1sGHDhZ1A@mail.gmail.com>
Subject: Re: [PATCH v4 3/4] xen/riscv: introduce early_printk basic stuff
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, 
	Bobby Eshleman <bobby.eshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, 
	Gianluca Guida <gianluca@rivosinc.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Oleksii Kurochko <oleksii.kurochko@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="00000000000081931c05f27ef684"

--00000000000081931c05f27ef684
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 17 Jan 2023 at 23:57, Andrew Cooper <Andrew.Cooper3@citrix.com>
wrote:

> On 16/01/2023 2:39 pm, Oleksii Kurochko wrote:
> > diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> > index e69de29bb2..e139e44873 100644
> > --- a/xen/arch/riscv/Kconfig.debug
> > +++ b/xen/arch/riscv/Kconfig.debug
> > @@ -0,0 +1,6 @@
> > +config EARLY_PRINTK
> > +    bool "Enable early printk"
> > +    default DEBUG
> > +    help
> > +
> > +      Enables early printk debug messages
>
> Kconfig indentation is a little hard to get used to.
>
> It's one tab for the main block, and one tab + 2 spaces for the help text.
>
> Also, drop the blank line after help.
>
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index fd916e1004..1a4f1a6015 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,3 +1,4 @@
> > +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> >  obj-$(CONFIG_RISCV_64) += riscv64/
> >  obj-y += sbi.o
> >  obj-y += setup.o
> > diff --git a/xen/arch/riscv/early_printk.c
> b/xen/arch/riscv/early_printk.c
> > new file mode 100644
> > index 0000000000..6bc29a1942
> > --- /dev/null
> > +++ b/xen/arch/riscv/early_printk.c
> > @@ -0,0 +1,45 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * RISC-V early printk using SBI
> > + *
> > + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> > + */
> > +#include <asm/early_printk.h>
> > +#include <asm/sbi.h>
> > +
> > +/*
> > + * early_*() can be called from head.S with MMU-off.
> > + *
> > + * The following requiremets should be honoured for early_*() to
> > + * work correctly:
> > + *    It should use PC-relative addressing for accessing symbols.
> > + *    To achieve that GCC cmodel=medany should be used.
> > + */
> > +#ifndef __riscv_cmodel_medany
> > +#error "early_*() can be called from head.S before relocate so it should not use absolute addressing."
> > +#endif
>
> This is incorrect.


Aside from the part about the relocation, I do not see what is wrong with it
(see below)

>
>
> What *this* file is compiled with has no bearing on how head.S calls
> us.  The RISC-V documentation explaining __riscv_cmodel_medany vs
> __riscv_cmodel_medlow calls this point out specifically.  There's
> nothing you can put here to check that head.S gets compiled with medany.


I believe you misunderstood the goal here. It is not to check how head.S
will call it but to ensure the function is safe to be called when the MMU
is off (e.g. VA != VA).


>
> Right now, there's nothing in this file dependent on either mode, and
> it's not liable to change in the short term.


The whole point is to make sure this doesn't change without us knowing.


> Furthermore, Xen isn't
> doing any relocation in the first place.
>
> We will want to support XIP in due course, and that will be compiled
> __riscv_cmodel_medlow, which is a fine and legitimate usecase.
>
>
> The build system sets the model up consistently.  All you are doing by
> putting this in is creating work that someone is going to have to delete
> for legitimate reasons in the future.



Are you saying that code compiled with medlow can also work with the MMU off
and no relocation needed?

If not, then the check is correct. It would avoid hours of debugging…

Cheers,


>
> > diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> > index 13e24e2fe1..9c9412152a 100644
> > --- a/xen/arch/riscv/setup.c
> > +++ b/xen/arch/riscv/setup.c
> > @@ -1,13 +1,17 @@
> >  #include <xen/compile.h>
> >  #include <xen/init.h>
> >
> > +#include <asm/early_printk.h>
> > +
> >  /* Xen stack for bringing up the first CPU. */
> >  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
> >      __aligned(STACK_SIZE);
> >
> >  void __init noreturn start_xen(void)
> >  {
> > -    for ( ;; )
> > +    early_printk("Hello from C env\n");
> > +
> > +    for ( ; ; )
>
> Rebasing error?
>
> ~Andrew
>
-- 
Julien Grall

--00000000000081931c05f27ef684--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 01:10:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 01:10:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480040.744205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHwxy-0004it-8u; Wed, 18 Jan 2023 01:10:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480040.744205; Wed, 18 Jan 2023 01:10:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHwxy-0004im-63; Wed, 18 Jan 2023 01:10:14 +0000
Received: by outflank-mailman (input) for mailman id 480040;
 Wed, 18 Jan 2023 01:10:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHwxw-0004ic-IQ; Wed, 18 Jan 2023 01:10:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHwxw-00085Q-Ba; Wed, 18 Jan 2023 01:10:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHwxv-0003av-SE; Wed, 18 Jan 2023 01:10:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHwxv-0003mO-Rh; Wed, 18 Jan 2023 01:10:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oLrSYxlATwjbKcSr0t00aKlA42QoRqB5KOu7SsGVnIQ=; b=EsMEhnFkpCeR5aLOz3GdtHFIOg
	sQpFSJiyT0pVcuCLCbhrJHlPU0oUhHH+tH9ANz48yP/WTyZm35nX0zpvMHNOlJv/maK19Dsve/G36
	BtoDgYu7rR8xvMG8TLq0KY/fOK1+pTFEnnri1+OzmBfCCV9Z13dgzSfKsmh88ZNJZHEo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175949-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175949: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:host-install(5):broken:heisenbug
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 01:10:11 +0000

flight 175949 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175949/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken
 build-amd64                   6 xen-build                fail REGR. vs. 175746

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           5 host-install(5)          broken pass in 175944

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl         15 migrate-support-check fail in 175944 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 175944 never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    5 days
Failing since        175748  2023-01-12 20:01:56 Z    5 days   24 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    3 days   22 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-step test-armhf-armhf-xl host-install(5)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool, which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB), so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all, i.e. on SPR and
    later model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
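
The fallback behaviour described in this commit can be sketched in portable C.
This is a hypothetical illustration, not Xen's actual code: the struct, function
names, and error paths are invented; only the semantics (reads return 0, writes
are discarded when no model-specific LBR information exists) come from the
commit message above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for Xen's per-model LBR description table entry. */
struct lbr_info {
    uint32_t base;
    uint32_t count;
};

/* NULL on CPUs without model-specific LBR (e.g. Sapphire Rapids). */
static const struct lbr_info *model_specific_lbr;

static bool read_debugctl_lbr(uint64_t *val)
{
    if ( !model_specific_lbr )
    {
        *val = 0;     /* The LBR bit always reads as zero... */
        return true;  /* ...and the access is handled, no domain_crash(). */
    }
    /* ... model-specific MSR handling elided ... */
    *val = 0;
    return true;
}

static bool write_debugctl_lbr(uint64_t val)
{
    (void)val;        /* Value discarded when no model-specific LBR exists. */
    if ( !model_specific_lbr )
        return true;  /* Write silently dropped, as the Arch LBR spec allows. */
    /* ... model-specific MSR handling elided ... */
    return true;
}
```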

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removes the final instances of obfuscation via the
    DO() macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 01:11:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 01:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480047.744217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHwyg-0005HW-Mq; Wed, 18 Jan 2023 01:10:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480047.744217; Wed, 18 Jan 2023 01:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHwyg-0005HP-Ip; Wed, 18 Jan 2023 01:10:58 +0000
Received: by outflank-mailman (input) for mailman id 480047;
 Wed, 18 Jan 2023 01:10:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0ll=5P=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pHwyf-0005FD-FS
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 01:10:57 +0000
Received: from mail-vs1-xe29.google.com (mail-vs1-xe29.google.com
 [2607:f8b0:4864:20::e29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5ccd6bb-96cc-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 02:10:56 +0100 (CET)
Received: by mail-vs1-xe29.google.com with SMTP id 3so34204577vsq.7
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 17:10:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5ccd6bb-96cc-11ed-91b6-6bf2151ebd3b
X-Received: by 2002:a67:ba0c:0:b0:3ce:f2da:96a with SMTP id
 l12-20020a67ba0c000000b003cef2da096amr601685vsn.64.1674004255535; Tue, 17 Jan
 2023 17:10:55 -0800 (PST)
MIME-Version: 1.0
References: <cover.1673877778.git.oleksii.kurochko@gmail.com> <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
In-Reply-To: <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 18 Jan 2023 11:10:29 +1000
Message-ID: <CAKmqyKMcGiO3wzF23Ps_AWZd=o6R9wdmvV3QG4-YAWnFd8BEiQ@mail.gmail.com>
Subject: Re: [PATCH v4 2/4] xen/riscv: introduce sbi call to putchar to console
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bobby Eshleman <bobby.eshleman@gmail.com>, Bob Eshleman <bobbyeshleman@gmail.com>, 
	Alistair Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 17, 2023 at 12:39 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> From: Bobby Eshleman <bobby.eshleman@gmail.com>
>
> The SBI implementation for Xen was originally introduced by
> Bobby Eshleman <bobby.eshleman@gmail.com>, but everything except
> the SBI call for putting a character to the console was removed
> for simplicity.
>
> The patch introduces the sbi_console_putchar() SBI call, which is
> necessary to implement the initial early_printk.
>
> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V4:
>     - Nothing changed
> ---
> Changes in V3:
>     - update copyright's year
>     - rename definition of __CPU_SBI_H__ to __ASM_RISCV_SBI_H__
>     - fix indentations
>     - change an author of the commit
> ---
> Changes in V2:
>     - add an explanatory comment about sbi_console_putchar() function.
>     - order the files alphabetically in Makefile
> ---
>  xen/arch/riscv/Makefile          |  1 +
>  xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
>  xen/arch/riscv/sbi.c             | 45 ++++++++++++++++++++++++++++++++
>  3 files changed, 80 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/sbi.h
>  create mode 100644 xen/arch/riscv/sbi.c
>
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 5a67a3f493..fd916e1004 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,4 +1,5 @@
>  obj-$(CONFIG_RISCV_64) += riscv64/
> +obj-y += sbi.o
>  obj-y += setup.o
>
>  $(TARGET): $(TARGET)-syms
> diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
> new file mode 100644
> index 0000000000..0e6820a4ed
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/sbi.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: (GPL-2.0-or-later) */
> +/*
> + * Copyright (c) 2021-2023 Vates SAS.
> + *
> + * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Taken/modified from Xvisor project with the following copyright:
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + */
> +
> +#ifndef __ASM_RISCV_SBI_H__
> +#define __ASM_RISCV_SBI_H__
> +
> +#define SBI_EXT_0_1_CONSOLE_PUTCHAR            0x1
> +
> +struct sbiret {
> +    long error;
> +    long value;
> +};
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
> +                        unsigned long arg0, unsigned long arg1,
> +                        unsigned long arg2, unsigned long arg3,
> +                        unsigned long arg4, unsigned long arg5);
> +
> +/**
> + * Writes given character to the console device.
> + *
> + * @param ch The data to be written to the console.
> + */
> +void sbi_console_putchar(int ch);
> +
> +#endif /* __ASM_RISCV_SBI_H__ */
> diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
> new file mode 100644
> index 0000000000..dc0eb44bc6
> --- /dev/null
> +++ b/xen/arch/riscv/sbi.c
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/*
> + * Taken and modified from the xvisor project with the copyright Copyright (c)
> + * 2019 Western Digital Corporation or its affiliates and author Anup Patel
> + * (anup.patel@wdc.com).
> + *
> + * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + * Copyright (c) 2021-2023 Vates SAS.
> + */
> +
> +#include <xen/errno.h>
> +#include <asm/sbi.h>
> +
> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
> +                        unsigned long arg0, unsigned long arg1,
> +                        unsigned long arg2, unsigned long arg3,
> +                        unsigned long arg4, unsigned long arg5)
> +{
> +    struct sbiret ret;
> +
> +    register unsigned long a0 asm ("a0") = arg0;
> +    register unsigned long a1 asm ("a1") = arg1;
> +    register unsigned long a2 asm ("a2") = arg2;
> +    register unsigned long a3 asm ("a3") = arg3;
> +    register unsigned long a4 asm ("a4") = arg4;
> +    register unsigned long a5 asm ("a5") = arg5;
> +    register unsigned long a6 asm ("a6") = fid;
> +    register unsigned long a7 asm ("a7") = ext;
> +
> +    asm volatile ("ecall"
> +              : "+r" (a0), "+r" (a1)
> +              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
> +              : "memory");
> +    ret.error = a0;
> +    ret.value = a1;
> +
> +    return ret;
> +}
> +
> +void sbi_console_putchar(int ch)
> +{
> +    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
> +}
> --
> 2.39.0
>
>
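
For context, the ecall wrapper in the patch above returns the SBI (error,
value) pair in a0/a1. Below is a portable sketch of how callers sit on top of
it; sbi_ecall() is stubbed out with putchar() (the real one needs RISC-V
firmware underneath), and sbi_puts() is an invented helper, not part of the
patch.

```c
#include <stdio.h>

#define SBI_EXT_0_1_CONSOLE_PUTCHAR 0x1

struct sbiret {
    long error;
    long value;
};

/* Stub standing in for the real ecall-based implementation in the patch. */
static struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
                               unsigned long arg0, unsigned long arg1,
                               unsigned long arg2, unsigned long arg3,
                               unsigned long arg4, unsigned long arg5)
{
    struct sbiret ret = { .error = 0, .value = 0 };

    (void)fid; (void)arg1; (void)arg2; (void)arg3; (void)arg4; (void)arg5;

    if ( ext == SBI_EXT_0_1_CONSOLE_PUTCHAR )
        putchar((int)arg0);           /* emulate the firmware console */
    else
        ret.error = -2;               /* SBI_ERR_NOT_SUPPORTED */

    return ret;
}

static void sbi_console_putchar(int ch)
{
    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
}

/* Invented helper: print a NUL-terminated string one character at a time. */
static void sbi_puts(const char *s)
{
    for ( ; *s; s++ )
        sbi_console_putchar(*s);
}
```

With the stub in place, sbi_puts("Hello from C env\n") goes through the same
call path early_printk would use on real hardware.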


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 01:15:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 01:15:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480055.744228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHx30-0005yj-7w; Wed, 18 Jan 2023 01:15:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480055.744228; Wed, 18 Jan 2023 01:15:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHx30-0005yc-4n; Wed, 18 Jan 2023 01:15:26 +0000
Received: by outflank-mailman (input) for mailman id 480055;
 Wed, 18 Jan 2023 01:15:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0ll=5P=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pHx2y-0005yW-8U
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 01:15:24 +0000
Received: from mail-vk1-xa34.google.com (mail-vk1-xa34.google.com
 [2607:f8b0:4864:20::a34])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9493945c-96cd-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 02:15:23 +0100 (CET)
Received: by mail-vk1-xa34.google.com with SMTP id q141so13093428vkb.13
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 17:15:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9493945c-96cd-11ed-91b6-6bf2151ebd3b
X-Received: by 2002:a1f:a7ca:0:b0:3d5:86ff:6638 with SMTP id
 q193-20020a1fa7ca000000b003d586ff6638mr733643vke.30.1674004521937; Tue, 17
 Jan 2023 17:15:21 -0800 (PST)
MIME-Version: 1.0
References: <cover.1673877778.git.oleksii.kurochko@gmail.com> <216c21039a5552a329178b4376ff53ba16cf6104.1673877778.git.oleksii.kurochko@gmail.com>
In-Reply-To: <216c21039a5552a329178b4376ff53ba16cf6104.1673877778.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 18 Jan 2023 11:14:55 +1000
Message-ID: <CAKmqyKNKsQdFEWsu7RZdv4mWvoK959Uk9ZH71RDE2EYJ76ei5w@mail.gmail.com>
Subject: Re: [PATCH v4 4/4] automation: add RISC-V smoke test
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Doug Goldstein <cardoe@cardoe.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 17, 2023 at 12:40 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Add a check that the message 'Hello from C env' is present in the
> log file, to be sure that the stack is set up and the C part of
> early printk is working.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V4:
>     - Nothing changed
> ---
> Changes in V3:
>     - Nothing changed
>     - All mentioned comments by Stefano in Xen mailing list will be
>       fixed as a separate patch out of this patch series.
> ---
>  automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
>  automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
>  2 files changed, 40 insertions(+)
>  create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
>
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index afd80adfe1..64f47a0ab9 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -54,6 +54,19 @@
>    tags:
>      - x86_64
>
> +.qemu-riscv64:
> +  extends: .test-jobs-common
> +  variables:
> +    CONTAINER: archlinux:riscv64
> +    LOGFILE: qemu-smoke-riscv64.log
> +  artifacts:
> +    paths:
> +      - smoke.serial
> +      - '*.log'
> +    when: always
> +  tags:
> +    - x86_64
> +
>  .yocto-test:
>    extends: .test-jobs-common
>    script:
> @@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
>    needs:
>      - debian-unstable-clang-debug
>
> +qemu-smoke-riscv64-gcc:
> +  extends: .qemu-riscv64
> +  script:
> +    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - riscv64-cross-gcc
> +
>  # Yocto test jobs
>  yocto-qemuarm64:
>    extends: .yocto-test-arm64
> diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
> new file mode 100755
> index 0000000000..e0f06360bc
> --- /dev/null
> +++ b/automation/scripts/qemu-smoke-riscv64.sh
> @@ -0,0 +1,20 @@
> +#!/bin/bash
> +
> +set -ex
> +
> +# Run the test
> +rm -f smoke.serial
> +set +e
> +
> +timeout -k 1 2 \
> +qemu-system-riscv64 \
> +    -M virt \
> +    -smp 1 \
> +    -nographic \
> +    -m 2g \
> +    -kernel binaries/xen \
> +    |& tee smoke.serial
> +
> +set -e
> +(grep -q "Hello from C env" smoke.serial) || exit 1
> +exit 0
> --
> 2.39.0
>
>
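
The pass/fail decision in the script above boils down to a grep over the
captured serial log. A standalone sketch of that check, using a fabricated log
file since actually booting QEMU is out of scope here:

```shell
# Fake a serial capture, then apply the same check the smoke script uses.
printf 'Xen booting...\nHello from C env\n' > smoke.serial

# grep -q exits 0 only if the marker line from the C entry point appeared.
if grep -q "Hello from C env" smoke.serial; then
    echo "smoke test passed"
else
    echo "smoke test failed" >&2
    exit 1
fi

rm -f smoke.serial
```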


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 01:44:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 01:44:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480061.744239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHxUd-0000q6-II; Wed, 18 Jan 2023 01:43:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480061.744239; Wed, 18 Jan 2023 01:43:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHxUd-0000pz-Cx; Wed, 18 Jan 2023 01:43:59 +0000
Received: by outflank-mailman (input) for mailman id 480061;
 Wed, 18 Jan 2023 01:43:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BRoI=5P=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pHxUc-0000pt-7S
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 01:43:58 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2083.outbound.protection.outlook.com [40.107.8.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 91bc0f18-96d1-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 02:43:56 +0100 (CET)
Received: from DB7PR05CA0064.eurprd05.prod.outlook.com (2603:10a6:10:2e::41)
 by DU0PR08MB9300.eurprd08.prod.outlook.com (2603:10a6:10:41f::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 01:43:51 +0000
Received: from DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2e:cafe::e9) by DB7PR05CA0064.outlook.office365.com
 (2603:10a6:10:2e::41) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Wed, 18 Jan 2023 01:43:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT019.mail.protection.outlook.com (100.127.142.129) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Wed, 18 Jan 2023 01:43:50 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Wed, 18 Jan 2023 01:43:50 +0000
Received: from 0db16cb1853e.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CE945F30-9B51-4F2D-A550-566266FE9C75.1; 
 Wed, 18 Jan 2023 01:43:45 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0db16cb1853e.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 01:43:45 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AM9PR08MB6020.eurprd08.prod.outlook.com (2603:10a6:20b:2d6::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Wed, 18 Jan
 2023 01:43:43 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 01:43:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91bc0f18-96d1-11ed-b8d0-410ff93cb8f0
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 08/40] xen/arm: use PA == VA for
 EARLY_UART_VIRTUAL_ADDRESS on Armv-8R
Thread-Topic: [PATCH v2 08/40] xen/arm: use PA == VA for
 EARLY_UART_VIRTUAL_ADDRESS on Armv-8R
Thread-Index: AQHZJxAhOdeISnEbykSTu6lhfGgLIq6jTn6AgAAbGHA=
Date: Wed, 18 Jan 2023 01:43:42 +0000
Message-ID:
 <PAXPR08MB74205278BB264245DC91EFCB9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-9-Penny.Zheng@arm.com>
 <09e4c2ef-eddf-e798-573b-68744a061d68@xen.org>
In-Reply-To: <09e4c2ef-eddf-e798-573b-68744a061d68@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 18 January 2023 7:49
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 08/40] xen/arm: use PA == VA for
> EARLY_UART_VIRTUAL_ADDRESS on Armv-8R
>
> Hi Penny,
>
> On 13/01/2023 05:28, Penny Zheng wrote:
> > From: Wei Chen <wei.chen@arm.com>
> >
> > There is no VMSA support on Armv8-R AArch64, so we can not map early
> > UART to FIXMAP_CONSOLE. Instead, we use PA == VA to define
> > EARLY_UART_VIRTUAL_ADDRESS on Armv8-R AArch64.
> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
>
> Your signed-off-by is missing.
>
> > ---
> > 1. New patch
> > ---
> >   xen/arch/arm/include/asm/early_printk.h | 12 ++++++++++++
> >   1 file changed, 12 insertions(+)
> >
> > diff --git a/xen/arch/arm/include/asm/early_printk.h
> b/xen/arch/arm/include/asm/early_printk.h
> > index c5149b2976..44a230853f 100644
> > --- a/xen/arch/arm/include/asm/early_printk.h
> > +++ b/xen/arch/arm/include/asm/early_printk.h
> > @@ -15,10 +15,22 @@
> >
> >   #ifdef CONFIG_EARLY_PRINTK
> >
> > +#ifdef CONFIG_ARM_V8R
>
> Shouldn't this be CONFIG_HAS_MPU?
>

We had considered that there may be an implementation of Armv8-R without
an MPU, so we used CONFIG_ARM_V8R here. But you're right, we do not
support the non-MPU scenario in this series, so using CONFIG_HAS_MPU here
would better indicate that this is a feature-based code section.
We will change it to CONFIG_HAS_MPU in the next version.

> > +
> > +/*
> > + * For Armv-8r, there is not VMSA support in EL2, so we use VA == PA
>
> s/not/no/
>

Ok.

Cheers,
Wei Chen

> > + * for EARLY_UART_VIRTUAL_ADDRESS.
> > + */
> > +#define EARLY_UART_VIRTUAL_ADDRESS CONFIG_EARLY_UART_BASE_ADDRESS
> > +
> > +#else
> > +
> >   /* need to add the uart address offset in page to the fixmap address
> */
> >   #define EARLY_UART_VIRTUAL_ADDRESS \
> >       (FIXMAP_ADDR(FIXMAP_CONSOLE) + (CONFIG_EARLY_UART_BASE_ADDRESS &
> ~PAGE_MASK))
> >
> > +#endif /* CONFIG_ARM_V8R */
> > +
> >   #endif /* !CONFIG_EARLY_PRINTK */
> >
> >   #endif
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 01:54:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 01:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480067.744250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHxeh-0002JT-Eq; Wed, 18 Jan 2023 01:54:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480067.744250; Wed, 18 Jan 2023 01:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHxeh-0002JM-CD; Wed, 18 Jan 2023 01:54:23 +0000
Received: by outflank-mailman (input) for mailman id 480067;
 Wed, 18 Jan 2023 01:54:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/ljc=5P=gmail.com=joanbrugueram@srs-se1.protection.inumbo.net>)
 id 1pHxeg-0002JG-M7
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 01:54:22 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 05fdbd83-96d3-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 02:54:20 +0100 (CET)
Received: by mail-wm1-x32f.google.com with SMTP id
 q10-20020a1cf30a000000b003db0edfdb74so427312wmq.1
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 17:54:20 -0800 (PST)
Received: from solport.. (80.red-83-42-43.dynamicip.rima-tde.net.
 [83.42.43.80]) by smtp.gmail.com with ESMTPSA id
 q12-20020adff50c000000b002be25db0b7bsm2043264wro.10.2023.01.17.17.54.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 17 Jan 2023 17:54:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05fdbd83-96d3-11ed-b8d0-410ff93cb8f0
X-Received: by 2002:a05:600c:1d22:b0:3da:f66c:795d with SMTP id l34-20020a05600c1d2200b003daf66c795dmr5261659wms.9.1674006859635;
        Tue, 17 Jan 2023 17:54:19 -0800 (PST)
From: Joan Bruguera <joanbrugueram@gmail.com>
To: x86@kernel.org,
	Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org,
	"Juergen Gross" <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	"Jan Beulich" <jbeulich@suse.com>,
	"Roger Pau Monne" <roger.pau@citrix.com>,
	"Kees Cook" <keescook@chromium.org>,
	mark.rutland@arm.com,
	"Andrew Cooper" <Andrew.Cooper3@citrix.com>,
	=?UTF-8?q?J=C3=B6rg=20R=C3=B6del?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 0/7] x86: retbleed=stuff fixes
Date: Wed, 18 Jan 2023 01:54:12 +0000
Message-Id: <20230118015412.273150-1-joanbrugueram@gmail.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230116142533.905102512@infradead.org>
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Thanks, I've been testing those patches on my real system (i5-7200U) for
the last day with no problems so far; waking from s2ram works as well.
I can also no longer see those `sarq $5, %gs:0x1337` with %gs=0 on QEMU.

Regards,
- Joan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 02:19:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 02:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480074.744261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHy27-0005CF-Fa; Wed, 18 Jan 2023 02:18:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480074.744261; Wed, 18 Jan 2023 02:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHy27-0005C8-CW; Wed, 18 Jan 2023 02:18:35 +0000
Received: by outflank-mailman (input) for mailman id 480074;
 Wed, 18 Jan 2023 02:18:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BRoI=5P=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pHy25-0005C2-O7
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 02:18:33 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060.outbound.protection.outlook.com [40.107.20.60])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65c193c2-96d6-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 03:18:29 +0100 (CET)
Received: from FR3P281CA0191.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a4::9) by
 AS8PR08MB6295.eurprd08.prod.outlook.com (2603:10a6:20b:295::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 02:18:27 +0000
Received: from VI1EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a4:cafe::c7) by FR3P281CA0191.outlook.office365.com
 (2603:10a6:d10:a4::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Wed, 18 Jan 2023 02:18:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT030.mail.protection.outlook.com (100.127.144.128) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Wed, 18 Jan 2023 02:18:26 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Wed, 18 Jan 2023 02:18:25 +0000
Received: from 1e2f6dd1ce74.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9E2153ED-1E4C-461B-97BC-A18B3A4F3E84.1; 
 Wed, 18 Jan 2023 02:18:16 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1e2f6dd1ce74.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 02:18:16 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by GV1PR08MB7732.eurprd08.prod.outlook.com (2603:10a6:150:53::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 02:18:12 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 02:18:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65c193c2-96d6-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lkiXdli/CeyeinAOvKeLAevU+PKvIz1V/X6MF4mKLF4=;
 b=aERw2DaNKS/XAX63NehtSXDusJmAFP+Zg4SrnY0KeTmETh8vpIWImaMA8rry1t1TVXIDcsoLv8JXrxifeT8lZD6ThpQBfXskHBQ8uupQdCAQikQypBlb6r2QAoNrhZ8kMYx/qXgaTIPEmwFjl80i//05AjokxHl/oLqpndBB+hU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity map
 sections
Thread-Topic: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity map
 sections
Thread-Index: AQHZJxAY2FhaLGF+HUqxg8o4//gxhq6jTaOAgAAkT4A=
Date: Wed, 18 Jan 2023 02:18:11 +0000
Message-ID:
 <PAXPR08MB742061C5E8ED73BD43FEF9599EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-8-Penny.Zheng@arm.com>
 <4b817b65-f558-b4df-c7fd-242a04e59a59@xen.org>
In-Reply-To: <4b817b65-f558-b4df-c7fd-242a04e59a59@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 18 January 2023 7:46
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity
> map sections
>
> Hi,
>
> On 13/01/2023 05:28, Penny Zheng wrote:
> > From: Wei Chen <wei.chen@arm.com>
> >
> > Only the first 4KB of Xen image will be mapped as identity
> > (PA == VA). At the moment, Xen guarantees this by having
> > everything that needs to be used in the identity mapping
> > in head.S before _end_boot and checking at link time if this
> > fits in 4KB.
> >
> > In previous patch, we have moved the MMU code outside of
> > head.S. Although we have added .text.header to the new file
> > to guarantee all identity map code still in the first 4KB.
> > However, the order of these two files on this 4KB depends
> > on the build tools. Currently, we use the build tools to
> > process the order of objs in the Makefile to ensure that
> > head.S must be at the top. But if you change to another build
> > tools, it may not be the same result.
>
> Right, so this is fixing a bug you introduced in the previous patch. We
> should really avoid introducing (latent) regression in a series. So
> please re-order the patches.
>

Ok.

> >
> > In this patch we introduce .text.idmap to head_mmu.S, and
> > add this section after .text.header. to ensure code of
> > head_mmu.S after the code of header.S.
> >
> > After this, we will still include some code that does not
> > belong to identity map before _end_boot. Because we have
> > moved _end_boot to head_mmu.S.
>
> I dislike this approach because you are expecting that only head_mmu.S
> will be part of .text.idmap. If it is not, everything could blow up again.
>

I agree.

> That said, if you look at staging, you will notice that now _end_boot is
> defined in the linker script to avoid any issue.
>

Sorry, I am not quite clear about this comment. The _end_boot of the
original staging branch is defined in head.S, and I am not quite sure how
this _end_boot solves the case where multiple files contain idmap code.

Cheers,
Wei Chen

> > That means all code in head.S
> > will be included before _end_boot. In this patch, we also
> > added .text flag in the place of original _end_boot in head.S.
> > All the code after .text in head.S will not be included in
> > identity map section.
> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > ---
> > v1 -> v2:
> > 1. New patch.
> > ---
> >   xen/arch/arm/arm64/head.S     | 6 ++++++
> >   xen/arch/arm/arm64/head_mmu.S | 2 +-
> >   xen/arch/arm/xen.lds.S        | 1 +
> >   3 files changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> > index 5cfa47279b..782bd1f94c 100644
> > --- a/xen/arch/arm/arm64/head.S
> > +++ b/xen/arch/arm/arm64/head.S
> > @@ -466,6 +466,12 @@ fail:   PRINT("- Boot failed -\r\n")
> >           b     1b
> >   ENDPROC(fail)
> >
> > +/*
> > + * For the code that do not need in indentity map section,
> > + * we put them back to normal .text section
> > + */
> > +.section .text, "ax", %progbits
> > +
>
> I would argue that puts wants to be part of the idmap.
>

I am OK to move puts to idmap. But in the original head.S, puts is
placed after _end_boot, and from xen.lds.S we can see the idmap area
is the "_end_boot - start" section. Is the reason for moving puts to
idmap that we're using it in the idmap?

Cheers,
Wei Chen

> >   #ifdef CONFIG_EARLY_PRINTK
> >   /*
> >    * Initialize the UART. Should only be called on the boot CPU.
> > diff --git a/xen/arch/arm/arm64/head_mmu.S
> b/xen/arch/arm/arm64/head_mmu.S
> > index e2c8f07140..6ff13c751c 100644
> > --- a/xen/arch/arm/arm64/head_mmu.S
> > +++ b/xen/arch/arm/arm64/head_mmu.S
> > @@ -105,7 +105,7 @@
> >           str   \tmp2, [\tmp3, \tmp1, lsl #3]
> >   .endm
> >
> > -.section .text.header, "ax", %progbits
> > +.section .text.idmap, "ax", %progbits
> >   /*.aarch64*/
> >
> >   /*
> > diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> > index 92c2984052..bc45ea2c65 100644
> > --- a/xen/arch/arm/xen.lds.S
> > +++ b/xen/arch/arm/xen.lds.S
> > @@ -33,6 +33,7 @@ SECTIONS
> >     .text : {
> >           _stext = .;            /* Text section */
> >          *(.text.header)
> > +       *(.text.idmap)
> >
> >          *(.text.cold)
> >          *(.text.unlikely .text.*_unlikely .text.unlikely.*)
>
> Cheers,
>
> --
> Julien Grall
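[Archive note] The ordering guarantee debated above comes from listing the input sections explicitly in the linker script rather than relying on object order in the Makefile; a minimal sketch based on the quoted xen.lds.S hunk (section names from the patch, surrounding layout simplified):

```
SECTIONS
{
    .text : {
        _stext = .;          /* Text section */
        *(.text.header)      /* head.S entry code: always placed first */
        *(.text.idmap)       /* identity-mapped code, e.g. head_mmu.S */

        *(.text.cold)
        *(.text.unlikely .text.*_unlikely .text.unlikely.*)
        /* ... rest of .text ... */
    }
}
```

Because the linker emits input sections in the order they appear in SECTIONS, any object contributing to .text.idmap lands right after .text.header regardless of link-line order, which addresses the "only head_mmu.S" concern.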


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 02:20:09 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 02/40] xen/arm: make ARM_EFI selectable for Arm64
Date: Wed, 18 Jan 2023 02:19:36 +0000
Message-ID:
 <PAXPR08MB7420BFF01464B721A34952DC9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-3-Penny.Zheng@arm.com>
 <32811667-4f9c-12f1-7b8f-2b066bc3dee9@xen.org>
In-Reply-To: <32811667-4f9c-12f1-7b8f-2b066bc3dee9@xen.org>

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 18 January 2023 7:09
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 02/40] xen/arm: make ARM_EFI selectable for Arm64
> 
> Hi Penny,
> 
> On 13/01/2023 05:28, Penny Zheng wrote:
> > From: Wei Chen <wei.chen@arm.com>
> >
> > Currently, ARM_EFI will mandatorily selected by Arm64.
> > Even if the user knows for sure that their images will not
> > start in the EFI environment, they can't disable the EFI
> > support for Arm64. This means there will be about 3K lines
> > unused code in their images.
> >
> > So in this patch, we make ARM_EFI selectable for Arm64, and
> > based on that, we can use CONFIG_ARM_EFI to gate the EFI
> > specific code in head.S for those images that will not be
> > booted in EFI environment.
> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> 
> Your signed-off-by is missing.
> 
> > ---
> > v1 -> v2:
> > 1. New patch
> > ---
> >   xen/arch/arm/Kconfig      | 10 ++++++++--
> >   xen/arch/arm/arm64/head.S | 15 +++++++++++++--
> >   2 files changed, 21 insertions(+), 4 deletions(-)
> >
> > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > index 239d3aed3c..ace7178c9a 100644
> > --- a/xen/arch/arm/Kconfig
> > +++ b/xen/arch/arm/Kconfig
> > @@ -7,7 +7,6 @@ config ARM_64
> >   	def_bool y
> >   	depends on !ARM_32
> >   	select 64BIT
> > -	select ARM_EFI
> >   	select HAS_FAST_MULTIPLY
> >
> >   config ARM
> > @@ -37,7 +36,14 @@ config ACPI
> >   	  an alternative to device tree on ARM64.
> >
> >   config ARM_EFI
> > -	bool
> > +	bool "UEFI boot service support"
> > +	depends on ARM_64
> > +	default y
> > +	help
> > +	  This option provides support for boot services through
> > +	  UEFI firmware. A UEFI stub is provided to allow Xen to
> > +	  be booted as an EFI application. This is only useful for
> > +	  Xen that may run on systems that have UEFI firmware.
> 
> I would drop the last sentence as this is implied with the rest of the
> paragraph.
> 

Ok.

Cheers,
Wei Chen

> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 02:30:51 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175932-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175932: regressions - trouble: blocked/broken/fail/pass
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 02:30:39 +0000

flight 175932 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175932/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-libvirt-raw  5 host-install(5)        broken REGR. vs. 175743
 build-amd64                   6 xen-build                fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743
 test-armhf-armhf-libvirt     12 debian-install           fail REGR. vs. 175743
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175735
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                2f8d6a88e44928e1afaab5fd37fafefc94bf395c
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    5 days
Failing since        175750  2023-01-13 06:38:52 Z    4 days   11 attempts
Testing same since   175932  2023-01-17 13:37:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Bernhard Beschow <shentey@gmail.com>
  Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Laurent Vivier <laurent@vivier.eu>
  Marcel Holtmann <marcel@holtmann.org>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt-raw broken
broken-step test-armhf-armhf-libvirt-raw host-install(5)

Not pushing.

(No revision log; it would be 2361 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 02:33:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 02:33:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480095.744293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHyGH-0000Dk-9t; Wed, 18 Jan 2023 02:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480095.744293; Wed, 18 Jan 2023 02:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHyGH-0000Dd-7G; Wed, 18 Jan 2023 02:33:13 +0000
Received: by outflank-mailman (input) for mailman id 480095;
 Wed, 18 Jan 2023 02:33:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BRoI=5P=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pHyGF-0000DX-Tr
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 02:33:11 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2086.outbound.protection.outlook.com [40.107.6.86])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7242eafc-96d8-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 03:33:09 +0100 (CET)
Received: from AS9PR07CA0048.eurprd07.prod.outlook.com (2603:10a6:20b:46b::15)
 by DB3PR08MB9060.eurprd08.prod.outlook.com (2603:10a6:10:430::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 02:33:05 +0000
Received: from AM7EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:46b:cafe::1b) by AS9PR07CA0048.outlook.office365.com
 (2603:10a6:20b:46b::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Wed, 18 Jan 2023 02:33:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT053.mail.protection.outlook.com (100.127.140.202) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Wed, 18 Jan 2023 02:33:04 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Wed, 18 Jan 2023 02:33:04 +0000
Received: from 3b798216bb1c.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0EF9643B-A6D2-43F3-BFF8-36CE9C4C663A.1; 
 Wed, 18 Jan 2023 02:32:59 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3b798216bb1c.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 02:32:59 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by DU0PR08MB9202.eurprd08.prod.outlook.com (2603:10a6:10:416::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.22; Wed, 18 Jan
 2023 02:32:52 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 02:32:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7242eafc-96d8-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dfpTWE0qauED++MI8wxFRNNT3Ne56BXzu+NnU94osI4=;
 b=ZoLbjxbAD0cXRjbIrVWyLjOKkdjXFQE79ya5Y9uQ9muwSGhntoZLcZ3W6I1KBJHpiPh8bR0k2whN57OXhQ9g5M/c9HXgNbPlduHZBGe3JLV6+Gn+lN69HmBQgIgQsQoqRL2JqLAPrKggXedzvvMF/x9CxG/39rzwUGR4yqA9UqI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QjpXHS8Thh10GUptMlO/zqdeupRof5nIh7dJq5pX+0mnGOdLiLDIWV/WhWY9WoFnfRdGdi7/t9dJcCltZEq7EBoO8FLlFrHVV94/ytdCB1OUxyi/J55rxiJhs0/R7W5P8US33fCTxA4JREGXxGPL8n7xhGQoRCc8vs/z2IvI/hRZzeUIsouvGKH/sJ/vBg17A5cKxao9j7ZQjituZoQHRe9Cpz5nR0ARh1n26+rKtiZjqkAhllsSyBN3SfRPVARDoIQFxwxRl+M0RIYwzNHHYjl8an9J21lOvTv2oeHiUecjtDlUqorVSf9ZbSMsmOwK6WIdJ0KDJB6weYiHDFXhaA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dfpTWE0qauED++MI8wxFRNNT3Ne56BXzu+NnU94osI4=;
 b=ax0C9XmBrW2EkIm145W8W6hozsXIEMIjilRS5bU82B16P21FZYSbr6gbXggWAbtJiq7uqJZ/Z00/DZcjOddzJwwEsZbliGuaR4oIazPaRAoWTDnNXt09PIjHBwUBysKTsY3B+o4DjbOv36mDXuS23KGbvc1aGGUjgZDWDkh1OHnmHuKDSX+2Rz/jiBGsMflk9iZm9iALnNPZvd5//KXg7k2fOITQWsAEPqhqAzt+aj6edBBQshGnDCk2mMGyKFUg5XcGl1VLEMCCg3gdDPLA9gIswU4T1TLeFstOMGDN7M/cw1IlmQieNQoVPPa0yTfzsTZDhYo127Kgg56GZiK4pQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dfpTWE0qauED++MI8wxFRNNT3Ne56BXzu+NnU94osI4=;
 b=ZoLbjxbAD0cXRjbIrVWyLjOKkdjXFQE79ya5Y9uQ9muwSGhntoZLcZ3W6I1KBJHpiPh8bR0k2whN57OXhQ9g5M/c9HXgNbPlduHZBGe3JLV6+Gn+lN69HmBQgIgQsQoqRL2JqLAPrKggXedzvvMF/x9CxG/39rzwUGR4yqA9UqI=
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 03/40] xen/arm: adjust Xen TLB helpers for Armv8-R64
 PMSA
Thread-Topic: [PATCH v2 03/40] xen/arm: adjust Xen TLB helpers for Armv8-R64
 PMSA
Thread-Index: AQHZJxAWg7D0kl7RH0mt7T+dlRngQa6jRWqAgAA2W9A=
Date: Wed, 18 Jan 2023 02:32:52 +0000
Message-ID:
 <PAXPR08MB742081B2438A4B30229CDDE19EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-4-Penny.Zheng@arm.com>
 <759544c9-7783-c61d-75bd-0a9dab364be2@xen.org>
In-Reply-To: <759544c9-7783-c61d-75bd-0a9dab364be2@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: D71A659A272AD046892AB173C7B8FDFA.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	PAXPR08MB7420:EE_|DU0PR08MB9202:EE_|AM7EUR03FT053:EE_|DB3PR08MB9060:EE_
X-MS-Office365-Filtering-Correlation-Id: afb3aed7-2313-426c-e228-08daf8fc53da
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 B97T+H6x2vvWeXiXF8qJxHAEC8Frbl4Psh/kAQatIfMm3cxYcYuE2LvS8R7WF52Rvz0IGszjI2x5FMYNa62SSEBmcv6+H6kqkw74oJxwi8sn9mfGk0N2y6pPGX0hCTPoJfSeXS4p67NYCbyHKBJwfWwCp+xWg7lTKwTsRK4CwXKSXV9usXbehlA2CGo7cb44t2qKYlu/GMbAgNxr1x2bCIz3FcBFVbCyHW2XkdD3pAjsW2Fl0MmTt9tebgk4cKj/3j+hPDM3I926woUKflwdwHNU/N2lgB05AwVzK49hafIS25dX38wUr4e2SECwpzFlQE2LUW7A437jotPcJACHqTmcoRc44LAppeMVxof89XOVsoxrr2CJiuyk8hsPNUedki4K9TY0AxMqF5IlBnQP0ad/nkm4xXigXEQ2JT7V2Pq/9pqHHXqX/5966xfFDMajjzG1QraHC6LKXpbn7RLcQFOYHDDILUMLreB+orNkmXChUEtCzuvby9StTPBvIDdZWVru+el6QkH/r7H48zv2p4E61fuOZCqS4ZvSyve0DXM+VKmxaq7GP376Gm1iSoQHaHJ4nTlo7jIKuyGBzcmzukH6qMmmsVUq3CPsRCRF+lWLcraQqNBY//CTUVjVLNmSO8kujbl0WZb9/MPJpyKTM6Gzs5ImusZQeAo8xgnR2ltxI9BA4yxO2pdyJfpxD+fR4Lmqmq5KqSEpeiQO6rpGTQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(136003)(346002)(396003)(376002)(366004)(451199015)(110136005)(316002)(54906003)(8936002)(52536014)(66946007)(5660300002)(4326008)(66446008)(76116006)(66556008)(33656002)(83380400001)(41300700001)(66476007)(38070700005)(38100700002)(122000001)(86362001)(53546011)(55016003)(71200400001)(26005)(6506007)(186003)(7696005)(478600001)(9686003)(8676002)(2906002)(64756008);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9202
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f062ab26-a084-4de1-fdae-08daf8fc4c95
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8aVuUaG06s7aOoWJRY6kKvuS80HVjWsb7O4jTWRkb5S/Vg4qV/N8fzxEi0P2GKpy0yk+yfzwf+7Dss6lhbdRNRT3LurWu4yneG1qxndAA+2xxepti89lFwj11wLj5bKAQ/nFK9YNnBTZT8V9zRg8McITLU0tEBQBC3q/L+fwm5k3IgsjvojcIZTMhrsECmcSVqOwdZ/QOGMiNdOj8QChfOXBNKngiCmjZTRQ0KhZjstt2tOiDEpdeY2wKwmAH7uM3kKO6ZkkvQoU0ptkegocM7SqnDfMTRKOFFDAlgrSOzUF5hqZwGK4OCEOzyAwNnLtn67x/aGUzyBm4wQD3lQuUWHP6tuxAROZ2woe0OvTQk6VCPaaHV+1r0mBvzejY1KXhCF9CVqU5iQVr/LDi/n1By/LqFWp5QEXRXGXRCboDpX5pBBwuT4+QIPnutGBciJBpw3GiCFIa+OxU/HmMaPROVClO9U2jaOkK7AswmBn2SwDR3pVaHCvcKgN7ZV9gajzF+qGwR1j0XiXbAPwx9PLxYPmjMaObEZUzdKLRRjeTmLLUzAOEnbJJki0aM5UJpisKHgp7vXUgBK7pYU/GuKz8aRXYEpUBZtd9WxQ68jJa3aiPRDOUZl3Jqjom+zWqcynx7XSmJ7xzYMk1b7s0JLdNGwootLS8WlUKG8I9a5q7L/3o8h6HO+3eMUmhMEs0EAfh4reTc7KQwS+DB3DtwjgOA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(39860400002)(136003)(396003)(451199015)(40470700004)(46966006)(36840700001)(40480700001)(33656002)(55016003)(40460700003)(47076005)(70206006)(6506007)(8676002)(41300700001)(4326008)(83380400001)(70586007)(53546011)(478600001)(186003)(8936002)(107886003)(336012)(9686003)(52536014)(26005)(7696005)(110136005)(54906003)(316002)(5660300002)(82310400005)(356005)(36860700001)(2906002)(81166007)(82740400003)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 02:33:04.8022
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: afb3aed7-2313-426c-e228-08daf8fc53da
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB3PR08MB9060

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 18 January 2023 7:17
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 03/40] xen/arm: adjust Xen TLB helpers for Armv8-
> R64 PMSA
> 
> Hi,
> 
> On 13/01/2023 05:28, Penny Zheng wrote:
> > From: Wei Chen <wei.chen@arm.com>
> >
> >  From Arm ARM Supplement of Armv8-R AArch64 (DDI 0600A) [1],
> > section D1.6.2 TLB maintenance instructions, we know that
> > Armv8-R AArch64 permits an implementation to cache stage 1
> > VMSAv8-64 and stage 2 PMSAv8-64 attributes as a common entry
> > for the Secure EL1&0 translation regime. But for Xen itself,
> > it's running with stage 1 PMSAv8-64 on Armv8-R AArch64. The
> > EL2 MPU updates for stage 1 PMSAv8-64 will not be cached in
> > TLB entries. So we don't need any TLB invalidation for Xen
> > itself in EL2.
> 
> So I understand the theory here. But I would expect that none of the
> common code will call any of those helpers. Therefore the #ifdef should
> be unnecessary.
> 
> Can you clarify if my understanding is correct?
> 

Yes, you're right, after we separate common code and MMU code, these
helpers will be called in MMU specific code only. We will drop this
patch in next version.

Cheers,
Wei Chen

> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 03:00:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 03:00:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480102.744305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHygy-0003dS-Ir; Wed, 18 Jan 2023 03:00:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480102.744305; Wed, 18 Jan 2023 03:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHygy-0003dL-Fs; Wed, 18 Jan 2023 03:00:48 +0000
Received: by outflank-mailman (input) for mailman id 480102;
 Wed, 18 Jan 2023 03:00:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BRoI=5P=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pHygx-0003dC-G7
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 03:00:47 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2075.outbound.protection.outlook.com [40.107.21.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4c2dcbf1-96dc-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 04:00:43 +0100 (CET)
Received: from AM7PR02CA0011.eurprd02.prod.outlook.com (2603:10a6:20b:100::21)
 by AM0PR08MB5330.eurprd08.prod.outlook.com (2603:10a6:208:17f::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 03:00:41 +0000
Received: from AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::61) by AM7PR02CA0011.outlook.office365.com
 (2603:10a6:20b:100::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Wed, 18 Jan 2023 03:00:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT037.mail.protection.outlook.com (100.127.140.225) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Wed, 18 Jan 2023 03:00:40 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Wed, 18 Jan 2023 03:00:40 +0000
Received: from 1bcd013e2a90.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 92EB5855-E45B-4677-885F-09541CA4543D.1; 
 Wed, 18 Jan 2023 03:00:34 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1bcd013e2a90.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 03:00:34 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by DB9PR08MB9515.eurprd08.prod.outlook.com (2603:10a6:10:453::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 03:00:20 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 03:00:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c2dcbf1-96dc-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nadhZKla++sk9BClc+JQe0CyknQhMdPc1lvqjqwQUHg=;
 b=mS1oWg/xsuVFyEUoidHLry19qAL4mNRdjgIgjTXWUu3OKqJZculdXxSHr7WzGZ12Cgc6kJr2EkeA2OShB3ef3e8pVTO4RvjwGP+ZtGA+ugTp5v4izFzG4cR59yeWend+7HIL1sFPVKK/MnSgJTWDEALsGQaFI42I6brfVi+uunY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Uv7nGRTjV3BdL3IvR9z5F3nNr270cFkvXqJYGSg5wLkQtzgQNA8R6FNgrI33XmalnEAfGMJdS+sAiiZch3WREeer5c9D5pLL3OSk4ecc0EVdo8i6Nz95hmkTKiWagMbtkFZCr4GHBvhCOkAcROnGvMJNPob1eQunvqbtxMGMmVEzrUbBxVmOGHpElmSX5kRKy4W33kfACN3rP8vq0FjCpN0+MgfSt6i7RXYqdfd4Iwr7M7eyyG4Ivc/d4t63iXh0U//6JGEElC7wmgkWJNeeKWHHKdzvUp1OKfPxLMiRRzNknVGjGLN+qoyYQ+RUSNr/JKYNxvPl0uyLuTsvhjgqyQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nadhZKla++sk9BClc+JQe0CyknQhMdPc1lvqjqwQUHg=;
 b=W1JkOb4QOpq6JhnX7UgXJPV9wB5vgm8S0S+ymU17G8WuVGvWL00lR6RNGL47Ob41Xl2GIX5YdYbTM+KQnR3Vu0zAGO4P3SpjORc1GUWTaKHxaCiJfLRrIdAd42rIOluToccxyaXOSgMczyMIBAS2Pp3NNGLGpoymgecGCeCekWk3e0DAfHsYBpFxvDf+UvjPeZdFpd2eFY2KY1Fqi9FRnTKlEBcRYuH1Zaw9RE/LgbDzdckit/fy7F+FBNUMpc2l7A4ce9zw92g2N8OhZUkEPDexefTcvJh8kWqcM98ayxO+bB/8uCFzQYva5dEDoTbehBI3LGOdyCTxFkS+tCLf4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nadhZKla++sk9BClc+JQe0CyknQhMdPc1lvqjqwQUHg=;
 b=mS1oWg/xsuVFyEUoidHLry19qAL4mNRdjgIgjTXWUu3OKqJZculdXxSHr7WzGZ12Cgc6kJr2EkeA2OShB3ef3e8pVTO4RvjwGP+ZtGA+ugTp5v4izFzG4cR59yeWend+7HIL1sFPVKK/MnSgJTWDEALsGQaFI42I6brfVi+uunY=
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jiamei Xie <Jiamei.Xie@arm.com>
Subject: RE: [PATCH v2 04/40] xen/arm: add an option to define Xen start
 address for Armv8-R
Thread-Topic: [PATCH v2 04/40] xen/arm: add an option to define Xen start
 address for Armv8-R
Thread-Index: AQHZJxAXLGszZRVrMECopJok59rQha6jR3wAgAA4E5A=
Date: Wed, 18 Jan 2023 03:00:19 +0000
Message-ID:
 <PAXPR08MB7420A5C7F93F23F14C77C9BA9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-5-Penny.Zheng@arm.com>
 <e406484a-aad3-4953-afdb-3159597ec998@xen.org>
In-Reply-To: <e406484a-aad3-4953-afdb-3159597ec998@xen.org>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 18 January 2023 7:24
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Jiamei Xie
> <Jiamei.Xie@arm.com>
> Subject: Re: [PATCH v2 04/40] xen/arm: add an option to define Xen start
> address for Armv8-R
>
> Hi Penny,
>
> On 13/01/2023 05:28, Penny Zheng wrote:
> > From: Wei Chen <wei.chen@arm.com>
> >
> > On Armv8-A, Xen has a fixed virtual start address (link address
> > too) for all Armv8-A platforms. In an MMU based system, Xen can
> > map its loaded address to this virtual start address. So, on
> > Armv8-A platforms, the Xen start address does not need to be
> > configurable. But on Armv8-R platforms, there is no MMU to map
> > loaded address to a fixed virtual address and different platforms
> > will have very different address space layout. So Xen cannot use
> > a fixed physical address on MPU based system and need to have it
> > configurable.
> >
> > In this patch we introduce one Kconfig option for users to define
> > the default Xen start address for Armv8-R. Users can enter the
> > address in config time, or select the tailored platform config
> > file from arch/arm/configs.
> >
> > And as we introduced Armv8-R platforms to Xen, that means the
> > existed Arm64 platforms should not be listed in Armv8-R platform
> > list, so we add !ARM_V8R dependency for these platforms.
> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > Signed-off-by: Jiamei.Xie <jiamei.xie@arm.com>
>
> Your signed-off-by is missing.
>
> > ---
> > v1 -> v2:
> > 1. Remove the platform header fvp_baser.h.
> > 2. Remove the default start address for fvp_baser64.
> > 3. Remove the description of default address from commit log.
> > 4. Change HAS_MPU to ARM_V8R for Xen start address dependency.
> >    No matter Arm-v8r board has MPU or not, it always need to
> >    specify the start address.
>
> I don't quite understand the last sentence. Are you saying that it is
> possible to have an ARMv8-R system with an MPU nor a page-table?
>

Yes, from the Cortex-R82 page [1], you can see the MPU is optional in EL1
and EL2:
"Two optional and programmable MPUs controlled from EL1 and EL2
respectively."

Although it is unlikely that vendors using the Armv8-R IP will do so, it
is indeed an option. There are also related bits in the ID register,
ID_AA64MMFR0_EL1 (MSA_frac), to indicate this.

> > ---
> >   xen/arch/arm/Kconfig           |  8 ++++++++
> >   xen/arch/arm/platforms/Kconfig | 16 +++++++++++++---
> >   2 files changed, 21 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > index ace7178c9a..c6b6b612d1 100644
> > --- a/xen/arch/arm/Kconfig
> > +++ b/xen/arch/arm/Kconfig
> > @@ -145,6 +145,14 @@ config TEE
> >   	  This option enables generic TEE mediators support. It allows guests
> >   	  to access real TEE via one of TEE mediators implemented in XEN.
> >
> > +config XEN_START_ADDRESS
> > +	hex "Xen start address: keep default to use platform defined address"
> > +	default 0
> > +	depends on ARM_V8R
>
> It is still pretty unclear to me what would be the difference between
> HAS_MPU and ARM_V8R.
>

If we don't want to support Armv8-R systems without an MPU, I think they
are the same. IMO, Armv8-R without an MPU is meaningless to Xen.

> > +	help
> > +	  This option allows to set the customized address at which Xen will be
> > +	  linked on MPU systems. This address must be aligned to a page size.
> > +
> >   source "arch/arm/tee/Kconfig"
> >
> >   config STATIC_SHM
> > diff --git a/xen/arch/arm/platforms/Kconfig b/xen/arch/arm/platforms/Kconfig
> > index c93a6b2756..0904793a0b 100644
> > --- a/xen/arch/arm/platforms/Kconfig
> > +++ b/xen/arch/arm/platforms/Kconfig
> > @@ -1,6 +1,7 @@
> >   choice
> >   	prompt "Platform Support"
> >   	default ALL_PLAT
> > +	default FVP_BASER if ARM_V8R
> >   	---help---
> >   	Choose which hardware platform to enable in Xen.
> >
> > @@ -8,13 +9,14 @@ choice
> >
> >   config ALL_PLAT
> >   	bool "All Platforms"
> > +	depends on !ARM_V8R
> >   	---help---
> >   	Enable support for all available hardware platforms. It doesn't
> >   	automatically select any of the related drivers.
> >
> >   config QEMU
> >   	bool "QEMU aarch virt machine support"
> > -	depends on ARM_64
> > +	depends on ARM_64 && !ARM_V8R
> >   	select GICV3
> >   	select HAS_PL011
> >   	---help---
> > @@ -23,7 +25,7 @@ config QEMU
> >
> >   config RCAR3
> >   	bool "Renesas RCar3 support"
> > -	depends on ARM_64
> > +	depends on ARM_64 && !ARM_V8R
> >   	select HAS_SCIF
> >   	select IPMMU_VMSA
> >   	---help---
> > @@ -31,14 +33,22 @@ config RCAR3
> >
> >   config MPSOC
> >   	bool "Xilinx Ultrascale+ MPSoC support"
> > -	depends on ARM_64
> > +	depends on ARM_64 && !ARM_V8R
> >   	select HAS_CADENCE_UART
> >   	select ARM_SMMU
> >   	---help---
> >   	Enable all the required drivers for Xilinx Ultrascale+ MPSoC
> >
> > +config FVP_BASER
> > +	bool "Fixed Virtual Platform BaseR support"
> > +	depends on ARM_V8R
> > +	help
> > +	  Enable platform specific configurations for Fixed Virtual
> > +	  Platform BaseR
>
> This seems unrelated to this patch.
>

Can we add some descriptions in the commit log for this change, or should
we move it to a new patch? We would have preferred to use separate patches
for this kind of change, but the number of patches would keep growing.
This problem has been bothering us when organizing the patches.

[1] https://developer.arm.com/Processors/Cortex-R82

Cheers,
Wei Chen

> > +
> >   config NO_PLAT
> >   	bool "No Platforms"
> > +	depends on !ARM_V8R
> >   	---help---
> >   	Do not enable specific support for any platform.
> >
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 03:09:57 2023
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related code
 from head.S
Thread-Topic: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related code
 from head.S
Thread-Index: AQHZJxAYj73pNk7diEiR8Njo5KbmJ66jSxAAgAA5lAA=
Date: Wed, 18 Jan 2023 03:09:33 +0000
Message-ID:
 <PAXPR08MB742006643CF50E239EBC12139EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-6-Penny.Zheng@arm.com>
 <f78755d8-0b43-ebe4-4b2c-c88875347796@xen.org>
In-Reply-To: <f78755d8-0b43-ebe4-4b2c-c88875347796@xen.org>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 18 January 2023 7:37
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related
> code from head.S
>
> Hi Penny,
>
> On 13/01/2023 05:28, Penny Zheng wrote:
> > From: Wei Chen <wei.chen@arm.com>
> >
> > We want to reuse head.S for MPU systems, but there are some
> > code implemented for MMU systems only. We will move such
> > code to another MMU specific file. But before that, we will
> > do some preparations in this patch to make them easier
> > for reviewing:
>
> Well, I agree that...
>
> > 1. Fix the indentations of code comments.
>
> ... changing the indentation is better here. But...
>
> > 2. Export some symbols that will be accessed out of file
> >    scope.
>
> ... I have no idea which functions are going to be used in a separate
> file. So I think they should belong to the patch moving the code.
>

OK, I will move these changes to the patches that move the code.

> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
>
> Your signed-off-by is missing.
>
> > ---
> > v1 -> v2:
> > 1. New patch.
> > ---
> >   xen/arch/arm/arm64/head.S | 40 ++++++++++++++++++++--------------------
> >   1 file changed, 20 insertions(+), 20 deletions(-)
> >
> > diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> > index 93f9b0b9d5..b2214bc5e3 100644
> > --- a/xen/arch/arm/arm64/head.S
> > +++ b/xen/arch/arm/arm64/head.S
> > @@ -136,22 +136,22 @@
> >           add \xb, \xb, x20
> >   .endm
> >
> > -        .section .text.header, "ax", %progbits
> > -        /*.aarch64*/
> > +.section .text.header, "ax", %progbits
> > +/*.aarch64*/
>
> This change is not mentioned.
>

I will add a description to the commit message.

> >
> > -        /*
> > -         * Kernel startup entry point.
> > -         * ---------------------------
> > -         *
> > -         * The requirements are:
> > -         *   MMU = off, D-cache = off, I-cache = on or off,
> > -         *   x0 = physical address to the FDT blob.
> > -         *
> > -         * This must be the very first address in the loaded image.
> > -         * It should be linked at XEN_VIRT_START, and loaded at any
> > -         * 4K-aligned address.  All of text+data+bss must fit in 2MB,
> > -         * or the initial pagetable code below will need adjustment.
> > -         */
> > +/*
> > + * Kernel startup entry point.
> > + * ---------------------------
> > + *
> > + * The requirements are:
> > + *   MMU = off, D-cache = off, I-cache = on or off,
> > + *   x0 = physical address to the FDT blob.
> > + *
> > + * This must be the very first address in the loaded image.
> > + * It should be linked at XEN_VIRT_START, and loaded at any
> > + * 4K-aligned address.  All of text+data+bss must fit in 2MB,
> > + * or the initial pagetable code below will need adjustment.
> > + */
> >
> >   GLOBAL(start)
> >           /*
> > @@ -586,7 +586,7 @@ ENDPROC(cpu_init)
> >    *
> >    * Clobbers x0 - x4
> >    */
> > -create_page_tables:
> > +ENTRY(create_page_tables)
>
> I am not sure about keeping this name. Now we have create_page_tables()
> and arch_setup_page_tables().
>
> I would conside to name it create_boot_page_tables().
>

Do you need me to rename it in this patch?

> >           /* Prepare the page-tables for mapping Xen */
> >           ldr   x0, =XEN_VIRT_START
> >           create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
> > @@ -680,7 +680,7 @@ ENDPROC(create_page_tables)
> >    *
> >    * Clobbers x0 - x3
> >    */
> > -enable_mmu:
> > +ENTRY(enable_mmu)
> >           PRINT("- Turning on paging -\r\n")
> >
> >           /*
> > @@ -714,7 +714,7 @@ ENDPROC(enable_mmu)
> >    *
> >    * Clobbers x0 - x1
> >    */
> > -remove_identity_mapping:
> > +ENTRY(remove_identity_mapping)
>
> Patch #14 should be before this patch. So you don't have to export
> remove_identity_mapping temporarily.
>
> This will also avoid (transient) naming confusing with my work (see [1]).
>

OK, we will do it.

> >           /*
> >            * Find the zeroeth slot used. Remove the entry from zeroeth
> >            * table if the slot is not XEN_ZEROETH_SLOT.
> > @@ -775,7 +775,7 @@ ENDPROC(remove_identity_mapping)
> >    *
> >    * Clobbers x0 - x3
> >    */
> > -setup_fixmap:
> > +ENTRY(setup_fixmap)
> >   #ifdef CONFIG_EARLY_PRINTK
> >           /* Add UART to the fixmap table */
> >           ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
> > @@ -871,7 +871,7 @@ ENDPROC(init_uart)
> >    * x0: Nul-terminated string to print.
> >    * x23: Early UART base address
> >    * Clobbers x0-x1 */
> > -puts:
> > +ENTRY(puts)
>
> This name is a bit too generic to be globally exported. It is also now
> quite confusing because we have "early_puts" and "puts".
>
> I would consider to name it asm_puts(). It is still not great but
> hopefully it would give a hint that should be call from assembly code.
>

Yes, I had the same concern. I will rename it in the next version.

Cheers,
Wei Chen

> >           early_uart_ready x23, 1
> >           ldrb  w1, [x0], #1           /* Load next char */
> >           cbz   w1, 1f                 /* Exit on nul */
>
> Cheers,
>
> [1] https://lore.kernel.org/all/20230113101136.479-13-julien@xen.org/
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 03:30:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 03:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480115.744327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHz9U-0007a7-93; Wed, 18 Jan 2023 03:30:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480115.744327; Wed, 18 Jan 2023 03:30:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pHz9U-0007a0-5z; Wed, 18 Jan 2023 03:30:16 +0000
Received: by outflank-mailman (input) for mailman id 480115;
 Wed, 18 Jan 2023 03:30:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHz9S-0007Zq-5t; Wed, 18 Jan 2023 03:30:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHz9R-0003Jg-UT; Wed, 18 Jan 2023 03:30:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pHz9R-0006me-Cs; Wed, 18 Jan 2023 03:30:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pHz9R-0000hs-CR; Wed, 18 Jan 2023 03:30:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CiF5AD2NetpQXMy4zCTqVyqLTniKeUPNkhXBm4UqERo=; b=bfRjzp9o2OdFGyKLDj5ITAfSAG
	FwGisp5sq46GHrTHjrcO3vk65KEBjmnm4nrDZZdtJPYHQh683yQTppa40dfY4ZpLYHxUvOeJTqTgj
	ujD3lulsAnfp1sCVUbeSj5onle2FuxYIX9Ey6Jb+hljp4aSG/Er7tz9Qx1aJbzOzQWeY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175933-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175933: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-vhd:host-install(5):broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:regression
    linux-linus:test-armhf-armhf-libvirt-raw:host-install(5):broken:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6e50979a9c87371fdb85d16058f9b5cb40751501
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 03:30:13 +0000

flight 175933 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175933/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 test-armhf-armhf-xl-vhd       5 host-install(5)        broken REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  5 host-install(5)       broken REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  5 host-install(5)        broken REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 build-amd64                   6 xen-build                fail REGR. vs. 173462
 build-amd64-xsm               6 xen-build                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 build-i386-xsm                6 xen-build                fail REGR. vs. 173462
 build-i386                    6 xen-build                fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd11-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd12-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                6e50979a9c87371fdb85d16058f9b5cb40751501
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  102 days
Failing since        173470  2022-10-08 06:21:34 Z  101 days  209 attempts
Testing same since   175933  2023-01-17 13:40:13 Z    0 days    1 attempts

------------------------------------------------------------
3371 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-freebsd11-amd64                             blocked 
 test-amd64-amd64-freebsd12-amd64                             blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 blocked 
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-amd64-xl-vhd                                      blocked 
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      broken  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-vhd broken
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)

Not pushing.

(No revision log; it would be 516088 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 04:27:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 04:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480123.744337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI02Z-0004iu-Hz; Wed, 18 Jan 2023 04:27:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480123.744337; Wed, 18 Jan 2023 04:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI02Z-0004in-FI; Wed, 18 Jan 2023 04:27:11 +0000
Received: by outflank-mailman (input) for mailman id 480123;
 Wed, 18 Jan 2023 04:27:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7ZPj=5P=redhat.com=alex.williamson@srs-se1.protection.inumbo.net>)
 id 1pI02X-0004ih-N1
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 04:27:09 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d81b4b3-96e8-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 05:27:07 +0100 (CET)
Received: from mail-il1-f200.google.com (mail-il1-f200.google.com
 [209.85.166.200]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-128-uKjfGRMLM-G1zbLfxF-LVw-1; Tue, 17 Jan 2023 23:27:05 -0500
Received: by mail-il1-f200.google.com with SMTP id
 z19-20020a921a53000000b0030b90211df1so24118714ill.2
 for <xen-devel@lists.xenproject.org>; Tue, 17 Jan 2023 20:27:04 -0800 (PST)
Received: from redhat.com ([38.15.36.239]) by smtp.gmail.com with ESMTPSA id
 u125-20020a022383000000b003a60488c1fcsm74998jau.118.2023.01.17.20.27.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 17 Jan 2023 20:27:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d81b4b3-96e8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1674016026;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GmsKXk8mxh6uH4Qz8i58lSd7uPYMl3VZxDVUoaXIxHQ=;
	b=Q0jV2sUuc41HE6AR6a3durYHlPYKenDwCYS/1okZ9bcPtNUgZaMognXe+qOI+K8lD/HSUv
	g6dOgOTZiI8fQczApgG310XMnhmDADkyx4GtT980kebktA/Q6pGgRePXAP/7wTu/MqypNt
	ge8AAIPc3w4G550T4H41V4HnIHTadCQ=
X-MC-Unique: uKjfGRMLM-G1zbLfxF-LVw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:organization:references
         :in-reply-to:message-id:subject:cc:to:from:date:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=WwIVV2GdMWA9O7WvFo6puxVSK0p8BVpY4+LbrHl3vXw=;
        b=D8I/dhhXzVcLVOLgfLPWnuGRDM2ThyR8GpYHGnB/DdfohK91scbs+Y0L5ilSkbsMCa
         GrDPGMt7SMvrG3+yeFtht4UUwCtf8NJxOg/Mda13q0LhcbNDy59tr6lGf2B8Q6+fnGTg
         JyLQnZeMSThFI5Byaoh8eDZAvzKn4UbYPLrJyEmlJM+MZwtAQEAddThg4zD7vIMF0UFY
         404/rswEqUwm/PjIH9Kr4GgiR/NPIHvDGwPCT4r0qSVk1/LTgTpnzhr7TtJJwXTVxP22
         Do/EGjIXaSayudqdInfx2HTmSKCSyc1dbP6tD3pyhz2Fy0gYPpHkCh4nu414fFn0bHKK
         1nvQ==
X-Gm-Message-State: AFqh2koHzkH1xyxdrUi+uv1Trz7dH92/HAWjr+sr9n5kphmtHBFLDIRH
	uLVr335R22EOCtUdUz8GFmg9p60lLqjG178VGrjvpDLFzuKQNJmIb96nwsuesLeAAus6I61pqDI
	63GoLkufml+dZuJvcKoZS5w6uov4=
X-Received: by 2002:a92:d5d2:0:b0:303:3163:cb0a with SMTP id d18-20020a92d5d2000000b003033163cb0amr5009485ilq.17.1674016024119;
        Tue, 17 Jan 2023 20:27:04 -0800 (PST)
X-Google-Smtp-Source: AMrXdXuN/9vLnwuT4teyQ30VRbVSmD5N3uTBIPLHpuDYCgF16ZXgBQKahZL0LE1OWApEgPX8NNEJcw==
X-Received: by 2002:a92:d5d2:0:b0:303:3163:cb0a with SMTP id d18-20020a92d5d2000000b003033163cb0amr5009468ilq.17.1674016023574;
        Tue, 17 Jan 2023 20:27:03 -0800 (PST)
Date: Tue, 17 Jan 2023 21:27:00 -0700
From: Alex Williamson <alex.williamson@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: Igor Mammedov <imammedo@redhat.com>, Chuck Zmudzinski
 <brchuckz@netscape.net>, "Michael S. Tsirkin" <mst@redhat.com>, Bernhard
 Beschow <shentey@gmail.com>, qemu-devel@nongnu.org, Stefano Stabellini
 <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Paolo Bonzini
 <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org, Philippe
 =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>, Thomas Huth <thuth@redhat.com>, Eric Auger
 <eric.auger@redhat.com>, Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230117212700.35b3af18.alex.williamson@redhat.com>
In-Reply-To: <b6f7d6dd-3b9b-2cc7-32ab-8521802e1fed@aol.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
	<a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
	<20230110030331-mutt-send-email-mst@kernel.org>
	<a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
	<D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
	<9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
	<7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
	<20230112180314-mutt-send-email-mst@kernel.org>
	<128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
	<20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
	<88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
	<20230116163342.467039a0@imammedo.users.ipa.redhat.com>
	<fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
	<20230117120416.0aa041d6@imammedo.users.ipa.redhat.com>
	<b6f7d6dd-3b9b-2cc7-32ab-8521802e1fed@aol.com>
Organization: Red Hat
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Tue, 17 Jan 2023 19:15:57 -0500
Chuck Zmudzinski <brchuckz@aol.com> wrote:

> On 1/17/2023 6:04 AM, Igor Mammedov wrote:
> > On Mon, 16 Jan 2023 13:00:53 -0500
> > Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> >
> > > On 1/16/23 10:33, Igor Mammedov wrote:
> > > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > >
> > > >> On 1/13/23 4:33 AM, Igor Mammedov wrote:
> > > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > >> >
> > > >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:
> > > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:
> > > >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> > > >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> > > >> >> >> always call it there unconditionally -- basically turning three lines
> > > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> > > >> >> >> Michael further suggests to rename it to something more general. All
> > > >> >> >> in all no big changes required.
> > > >> >> >
> > > >> >> > yes, exactly.
> > > >> >> >
> > > >> >>
> > > >> >> OK, got it. I can do that along with the other suggestions.
> > > >> >
> > > >> > have you considered instead of reservation, putting a slot check in device model
> > > >> > and if it's intel igd being passed through, fail at realize time if it can't take
> > > >> > required slot (with an error directing user to fix command line)?
> > > >>
> > > >> Yes, but the core pci code currently already fails at realize time
> > > >> with a useful error message if the user tries to use slot 2 for the
> > > >> igd, because of the xen platform device which has slot 2. The user
> > > >> can fix this without patching qemu, but having the user fix it on
> > > >> the command line is not the best way to solve the problem, primarily
> > > >> because the user would need to hotplug the xen platform device via a
> > > >> command line option instead of having the xen platform device added by
> > > >> pc_xen_hvm_init functions almost immediately after creating the pci
> > > >> bus, and that delay in adding the xen platform device degrades
> > > >> startup performance of the guest.
> > > >>
> > > >> > That could be less complicated than dealing with slot reservations at the cost of
> > > >> > being less convenient.
> > > >>
> > > >> And also a cost of reduced startup performance
> > > >
> > > > Could you clarify how it affects performance (and how much).
> > > > (as I see, setup done at board_init time is roughly the same
> > > > as with '-device foo' CLI options, modulo time needed to parse
> > > > options which should be negligible. and both ways are done before
> > > > guest runs)
> > >
> > > I preface my answer by saying there is a v9, but you don't
> > > need to look at that. I will answer all your questions here.
> > >
> > > I am going by what I observe on the main HDMI display with the
> > > different approaches. With the approach of not patching Qemu
> > > to fix this, which requires adding the Xen platform device a
> > > little later, the length of time it takes to fully load the
> > > guest is increased. I also noticed with Linux guests that use
> > > the grub bootloader, the grub vga driver cannot display the
> > > grub boot menu at the native resolution of the display, which
> > > in the tested case is 1920x1080, when the Xen platform device
> > > is added via a command line option instead of by the
> > > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > > to Qemu, the grub menu is displayed at the full, 1920x1080
> > > native resolution of the display. Once the guest fully loads,
> > > there is no noticeable difference in performance. It is mainly
> > > a degradation in startup performance, not performance once
> > > the guest OS is fully loaded.
> >
> > Looking at igd-assign.txt, it recommends to add IGD using '-device' CLI
> > option, and actually drop at least graphics defaults explicitly.
> > So it is expected to work fine even when IGD is constructed with
> > '-device'.
> >
> > Could you provide full CLI current xen starts QEMU with and then
> > a CLI you used (with explicit -device for IGD) that leads
> > to reduced performance?
> >
> > CCing vfio folks who might have an idea what could be wrong based
> > on vfio experience.
>
> Actually, the igd is not added with an explicit -device option using Xen:
>
>    1573 ?        Ssl    0:42 /usr/bin/qemu-system-i386 -xen-domid 1 -no-shutdown -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-1,server,nowait -mon chardev=libxl-cmd,mode=control -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-1,server,nowait -mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config -name windows -vnc none -display none -serial pty -boot order=c -smp 4,maxcpus=4 -net none -machine xenfv,max-ram-below-4g=3758096384,igd-passthru=on -m 6144 -drive file=/dev/loop0,if=ide,index=0,media=disk,format=raw,cache=writeback -drive file=/dev/disk/by-uuid/A44AA4984AA468AE,if=ide,index=1,media=disk,format=raw,cache=writeback
>
> I think it is added by xl (libxl management tool) when the guest is created
> using the qmp-libxl socket that appears on the command line, but I am not 100
> percent sure. So, with libxl, the command line alone does not tell the whole
> story. The xl.cfg file has a line like this to define the pci devices passed through,
> and in qemu they are type XEN_PT devices, not VFIO devices:
>
> pci = [ '00:1b.0','00:14.0','00:02.0@02' ]
>
> This means three host pci devices are passed through, the ones on the
> host at slot 1b.0, 14.0, and 02.0. Of course the device at 02 is the igd.
> The @02 means libxl is requesting slot 2 in the guest for the igd, the
> other 2 devices are just auto assigned a slot by Qemu. Qemu cannot
> assign the igd to slot 2 for xenfv machines without a patch that prevents
> the Xen platform device from grabbing slot 2. That is what this patch
> accomplishes. The workaround involves using the Qemu pc machine
> instead of the Qemu xenfv machine, in which case the code in Qemu
> that adds the Xen platform device at slot 2 is avoided, and in that case
> the Xen platform device is added via a command line option instead
> at slot 3 instead of slot 2.
>
> The differences between vfio and the Xen pci passthrough device
> might make a difference in Xen vs. kvm for igd-passthru.
>
> Also, kvm does not use the Xen platform device, and it seems the
> Xen guests behave better at startup when the Xen platform device
> is added very early, during the initialization of the emulated devices
> of the machine, which is based on the i440fx piix3 machine, instead
> of being added using a command line option. Perhaps the performance
> at startup could be improved by adding the igd via a command line
> option using vfio instead of the canonical way that libxl does pci
> passthrough on Xen, but I have no idea if vfio works on Xen the way it
> works on kvm. I am willing to investigate and experiment with it, though.
>
> So if any vfio people can shed some light on this, that would help.

ISTR some rumors of Xen thinking about vfio, but AFAIK there is no
combination of vfio with Xen, nor is there any sharing of device quirks
to assign IGD.  IGD assignment has various VM platform dependencies, of
which the bus address is just one.  Vfio support for IGD assignment
takes a much different approach: the user or management tool is
responsible for configuring the VM correctly for IGD assignment,
including assigning the device to the correct bus address and using the
right VM chipset, with the correct slot free for the LPC controller.  If
all the user configuration of the VM aligns correctly, we'll activate
"legacy mode" by inserting the opregion and instantiate the LPC bridge
with the correct fields to make the BIOS and drivers happy.  Otherwise,
maybe the user is trying to use the mythical UPT mode, but given
Intel's lack of follow-through, it probably just won't work.  Or maybe
it's a vGPU and we don't need the legacy features anyway.

Working with an expectation that QEMU needs to do the heavy lifting to
make it all work automatically, with no support from the management
tool for putting devices in the right place, I'm afraid there might not
be much to leverage from the vfio version.  Thanks,

Alex



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 06:10:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 06:10:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480132.744349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI1dq-0006mh-UH; Wed, 18 Jan 2023 06:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480132.744349; Wed, 18 Jan 2023 06:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI1dq-0006ma-RR; Wed, 18 Jan 2023 06:09:46 +0000
Received: by outflank-mailman (input) for mailman id 480132;
 Wed, 18 Jan 2023 06:09:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CyUV=5P=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pI1dq-0006mU-4w
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 06:09:46 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2040.outbound.protection.outlook.com [40.107.8.40])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b2f0a708-96f6-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 07:09:43 +0100 (CET)
Received: from DUZPR01CA0013.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:3c3::6) by PA4PR08MB7620.eurprd08.prod.outlook.com
 (2603:10a6:102:261::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 06:09:40 +0000
Received: from DBAEUR03FT024.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3c3:cafe::33) by DUZPR01CA0013.outlook.office365.com
 (2603:10a6:10:3c3::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Wed, 18 Jan 2023 06:09:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT024.mail.protection.outlook.com (100.127.142.163) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Wed, 18 Jan 2023 06:09:40 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Wed, 18 Jan 2023 06:09:40 +0000
Received: from a8a662c84299.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AC0C3C80-9503-47A8-A524-0A7F4B67E2EC.1; 
 Wed, 18 Jan 2023 06:09:29 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a8a662c84299.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 06:09:29 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by AM9PR08MB6276.eurprd08.prod.outlook.com (2603:10a6:20b:2d4::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Wed, 18 Jan
 2023 06:09:21 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%7]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 06:09:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2f0a708-96f6-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vdcxqi7XguLN86UZ9ocP990VjgUdyOog0lWV2zlPOrE=;
 b=Z6A7bOW/bLtjlvP3WG/Pxt6oIn1usi+GaDLBQ77jJLo4MXF/9CFmLW1LtwoLUOUmeIwwCrPqki/WkQVw7qGoT1wnAe9xXvn9vMfDTLfWl5VanM2Ow/TcY+5YJ8M1dahBu5H4tYGklubopR09arZrcuoA4p5zXKsjGG2mfL2ORiY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v1 06/13] xen/arm: assign shared memory to owner when host
 address not provided
Thread-Topic: [PATCH v1 06/13] xen/arm: assign shared memory to owner when
 host address not provided
Thread-Index: AQHY+J1pmmYEajWz5UynKX+c5rdEZK6UzwiAgAEQoXCAAGGAAIANzbrg
Date: Wed, 18 Jan 2023 06:09:21 +0000
Message-ID:
 <AM0PR08MB45301DB151AA42174CA8326BF7C79@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20221115025235.1378931-1-Penny.Zheng@arm.com>
 <20221115025235.1378931-7-Penny.Zheng@arm.com>
 <d7f12897-c6cc-0895-b70e-53c0b88bd0f9@xen.org>
 <AM0PR08MB453041150588948050F718D4F7FE9@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <6db41bd2-ab71-422a-4235-a9209e984915@xen.org>
In-Reply-To: <6db41bd2-ab71-422a-4235-a9209e984915@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A51278E297A39A4F8E5EB4019398EF65.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|AM9PR08MB6276:EE_|DBAEUR03FT024:EE_|PA4PR08MB7620:EE_
X-MS-Office365-Filtering-Correlation-Id: d44d024e-c65e-413d-36ca-08daf91a959f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6276
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT024.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bfd70c7a-2b6a-411e-8946-08daf91a8a54
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 06:09:40.0895
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d44d024e-c65e-413d-36ca-08daf91a959f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT024.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB7620

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Monday, January 9, 2023 6:58 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
> when host address not provided
>
>
>
> On 09/01/2023 07:49, Penny Zheng wrote:
> > Hi Julien
>
> Hi Penny,
>
> > Happy new year~~~~
>
> Happy new year too!
>
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Sunday, January 8, 2023 8:53 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> >> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> >> <sstabellini@kernel.org>; Bertrand Marquis
> >> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
> >> <Volodymyr_Babchuk@epam.com>
> >> Subject: Re: [PATCH v1 06/13] xen/arm: assign shared memory to owner
> >> when host address not provided
> >>
> >> Hi,
> >>
> >> On 15/11/2022 02:52, Penny Zheng wrote:
> >>> @@ -922,33 +927,82 @@ static mfn_t __init
> >> acquire_shared_memory_bank(struct domain *d,
> >>>        d->max_pages += nr_pfns;
> >>>
> >>>        smfn = maddr_to_mfn(pbase);
> >>> -    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
> >>> -    if ( res )
> >>> +    page = mfn_to_page(smfn);
> >>> +    /*
> >>> +     * If page is allocated from heap as static shared memory, then we just
> >>> +     * assign it to the owner domain
> >>> +     */
> >>> +    if ( page->count_info == (PGC_state_inuse | PGC_static) )
> >> I am a bit confused how this can help differentiating
> >> because PGC_state_inuse is 0. So effectively, you are checking that
> >> count_info is equal to PGC_static.
> >>
> >
> > When host address is provided, the host address range defined in
> > xen,static-mem will be stored as a "struct membank" with type of
> > "MEMBANK_STATIC_DOMAIN" in "bootinfo.reserved_mem"
> > Then it will be initialized as static memory through "init_staticmem_pages"
> > So here its page->count_info is PGC_state_free | PGC_static.
> > For pages allocated from heap, its page state is different, being
> > PGC_state_inuse.
> > So actually, we are checking page state to tell the difference.
>
> Ok. This is definitely not obvious from the code. But I think this is a very
> fragile assumption.
>
> Instead, it would be better if we allocate the memory in
> acquire_shared_memory_bank() when the host address is not provided.
>

Right now, acquire_shared_memory_bank is called only when domain is owner domain.
It is applicable when host address is provided, we could still do guest physmap when
borrower accessed before owner, as address is provided.
However when host address is not provided, we must allocate memory at first domain.
So allocating the memory shall be called outside acquire_shared_memory_bank

> >
> >> But as I wrote in a previous patch, I don't think you should convert
> >> {xen,dom}heap pages to a static pages.
> >>
> >
> > I agree that taking reference could also prevent giving these pages back to
> > heap.
> > But may I ask what is your concern on converting {xen,dom}heap pages to
> > a static pages?
>
> A few reasons:
>   1) I consider them as two distinct allocators. So far they have the same
> behavior, but in the future this may change.
>   2) If the page is freed you really don't want the domain to be able to re-use
> the page for a different purpose.
>
>
> I realize that 2) is already a problem today with static pages. So I think the
> best is to ensure that pages allocated for shared memory never reach
> any of the allocators.
>
>
> Cheers,
>
> --
> Julien Grall

Cheers,

--
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 06:18:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 06:18:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480138.744359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI1lS-0008ES-N4; Wed, 18 Jan 2023 06:17:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480138.744359; Wed, 18 Jan 2023 06:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI1lS-0008EL-KS; Wed, 18 Jan 2023 06:17:38 +0000
Received: by outflank-mailman (input) for mailman id 480138;
 Wed, 18 Jan 2023 06:17:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI1lR-0008EF-QG
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 06:17:37 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cc8af248-96f7-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 07:17:35 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BE5AF3EB16;
 Wed, 18 Jan 2023 06:17:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 96FC7138FE;
 Wed, 18 Jan 2023 06:17:34 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id gkJPI/6Ox2MCWQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 06:17:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc8af248-96f7-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674022654; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=tNetT30kYb9UcwFqdvmcjguPRiSiyULR5xJDUZjPoXU=;
	b=eLgSxHm1VUQ8qFPmNjbDCluJEydLuJ9OUVnFTvsxsYnWv8kBJpR4wEYiTOvao5pU5THwfp
	WpqvU9/lhFORMrOcoBVjJMl7mFyG1hKduecFt+uZd+QABQ/uDo9bNB63iqSUFfg/3ZyJUa
	rMWa4xGtXyzPwPqrFiUn95iIV46VC9E=
Message-ID: <fb160e56-8a6a-85fd-0140-ae25322479c7@suse.com>
Date: Wed, 18 Jan 2023 07:17:34 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 12/17] tools/xenstore: don't let hashtable_remove()
 return the removed value
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-13-jgross@suse.com>
 <19a0c39c-31b3-ce9c-6f03-466b6109b88f@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <19a0c39c-31b3-ce9c-6f03-466b6109b88f@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------qtF0F3RaJ2QFxDb3dMyQx9Y5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------qtF0F3RaJ2QFxDb3dMyQx9Y5
Content-Type: multipart/mixed; boundary="------------xLgdbenKbId05eKM6NYcvJ00";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <fb160e56-8a6a-85fd-0140-ae25322479c7@suse.com>
Subject: Re: [PATCH v3 12/17] tools/xenstore: don't let hashtable_remove()
 return the removed value
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-13-jgross@suse.com>
 <19a0c39c-31b3-ce9c-6f03-466b6109b88f@xen.org>
In-Reply-To: <19a0c39c-31b3-ce9c-6f03-466b6109b88f@xen.org>

--------------xLgdbenKbId05eKM6NYcvJ00
Content-Type: multipart/mixed; boundary="------------YkEu9guguiIbAOElqmBK4qbW"

--------------YkEu9guguiIbAOElqmBK4qbW
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.01.23 23:03, Julien Grall wrote:
> Hi Juergen,
>
> On 17/01/2023 09:11, Juergen Gross wrote:
>> Letting hashtable_remove() return the value of the removed element is
>> not used anywhere in Xenstore, and it conflicts with a hashtable
>> created specifying the HASHTABLE_FREE_VALUE flag.
>>
>> So just drop returning the value.
>
> Any reason this can't be void? If there are, then I would consider to return a
> bool as the return can only be 2 values.

I think you are right. Switching to void should be fine.

>
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V3:
>> - new patch
>> ---
>>   tools/xenstore/hashtable.c | 10 +++++-----
>>   tools/xenstore/hashtable.h |  4 ++--
>>   2 files changed, 7 insertions(+), 7 deletions(-)
>>
>> diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
>> index 299549c51e..6738719e47 100644
>> --- a/tools/xenstore/hashtable.c
>> +++ b/tools/xenstore/hashtable.c
>> @@ -214,7 +214,7 @@ hashtable_search(struct hashtable *h, void *k)
>>   }
>>   /*****************************************************************************/
>> -void * /* returns value associated with key */
>> +int
>>   hashtable_remove(struct hashtable *h, void *k)
>>   {
>>       /* TODO: consider compacting the table when the load factor drops enough,
>> @@ -222,7 +222,6 @@ hashtable_remove(struct hashtable *h, void *k)
>>       struct entry *e;
>>       struct entry **pE;
>> -    void *v;
>>       unsigned int hashvalue, index;
>>       hashvalue = hash(h,k);
>> @@ -236,16 +235,17 @@ hashtable_remove(struct hashtable *h, void *k)
>>           {
>>               *pE = e->next;
>>               h->entrycount--;
>> -            v = e->v;
>>               if (h->flags & HASHTABLE_FREE_KEY)
>>                   free(e->k);
>> +            if (h->flags & HASHTABLE_FREE_VALUE)
>> +                free(e->v);
>
> I don't quite understand how this change is related to this patch.

With not returning the value pointer any longer there would be no way
for the caller to free it, so it must be freed by hashtable_remove()
if the related flag was set.

I can add a sentence to the commit message.


Juergen
--------------YkEu9guguiIbAOElqmBK4qbW
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------YkEu9guguiIbAOElqmBK4qbW--

--------------xLgdbenKbId05eKM6NYcvJ00--

--------------qtF0F3RaJ2QFxDb3dMyQx9Y5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPHjv4FAwAAAAAACgkQsN6d1ii/Ey8p
vwf/SdTi1M/QjZQEsCAyOlDEoLZIpZ+7iSZj0sundmrql6zC/fi16adwGjJRPBM5n+MQhzh8dCTs
4WLpBvBujEVw3MjptlT9j7OOO78w6tqVpIFW/SbSXfzSiRdxzRJ3vzvmwS0WQ52ggQ5pUFf4Q/AE
tYZFr+kczWOw6U8/IfyxWuqtbcpcdxqyrDkZHzJN3+A7b6l+3WS2qnPo5wPwjk9X2vj4xGEjrLQN
+usV7ZIx5zr1XDGcHTTEVyVqGgJ7pHgag38BUIWsHVbUarjhrbAmi8SyRKuhLkeVLfjCwQiFD6vi
X0U/uO5Kvu7QWatmTY4ju8/VS7P+aJjo3x3txw9YXA==
=yF29
-----END PGP SIGNATURE-----

--------------qtF0F3RaJ2QFxDb3dMyQx9Y5--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 06:18:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 06:18:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480141.744370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI1mH-0000Ht-VX; Wed, 18 Jan 2023 06:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480141.744370; Wed, 18 Jan 2023 06:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI1mH-0000Hk-Sy; Wed, 18 Jan 2023 06:18:29 +0000
Received: by outflank-mailman (input) for mailman id 480141;
 Wed, 18 Jan 2023 06:18:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI1mH-0000HW-18
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 06:18:29 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eb5a80c6-96f7-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 07:18:27 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id AD7123EB18;
 Wed, 18 Jan 2023 06:18:26 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6B8CF138FE;
 Wed, 18 Jan 2023 06:18:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id y7vLGDKPx2OQWQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 06:18:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb5a80c6-96f7-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674022706; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=y2hBd3BYt/zkcZa1Ddp9kUwSg+T1Z+CDys6gyy5sF4c=;
	b=nEBY+NVc8OhboiOHjvQtcG52CfmYI+J4zoneZ7r7dVn1Jhm9fAKj/Zaq1YiOr+D4jCcqmD
	0UaSaA7G4bTeC0Hro1vY2CiL/Hasff2ifPytVlz4QMRpMnIZqCxnW/6ue5E7gII7bGSf/b
	5yaQVnOZgFeab0r30VOZMu3ZwDrtQ+Q=
Message-ID: <ededea85-19df-fc56-23d3-555871923ad7@suse.com>
Date: Wed, 18 Jan 2023 07:18:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 15/17] tools/xenstore: introduce trace classes
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-16-jgross@suse.com>
 <2ab20725-4bb9-66ac-a87f-01dca92f9453@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <2ab20725-4bb9-66ac-a87f-01dca92f9453@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------ijYTGASDyFQB404vDvbF0aKn"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------ijYTGASDyFQB404vDvbF0aKn
Content-Type: multipart/mixed; boundary="------------j5a8XqnpF0ZYJG7Gd3tVAs6k";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <ededea85-19df-fc56-23d3-555871923ad7@suse.com>
Subject: Re: [PATCH v3 15/17] tools/xenstore: introduce trace classes
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-16-jgross@suse.com>
 <2ab20725-4bb9-66ac-a87f-01dca92f9453@xen.org>
In-Reply-To: <2ab20725-4bb9-66ac-a87f-01dca92f9453@xen.org>

--------------j5a8XqnpF0ZYJG7Gd3tVAs6k
Content-Type: multipart/mixed; boundary="------------o3yt0udUaMngoGNulmJRTIKa"

--------------o3yt0udUaMngoGNulmJRTIKa
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.01.23 23:15, Julien Grall wrote:
> Hi Juergen,
> 
> On 17/01/2023 09:11, Juergen Gross wrote:
>> @@ -2604,6 +2607,8 @@ static void usage(void)
>>   "  -N, --no-fork          to request that the daemon does not fork,\n"
>>   "  -P, --output-pid       to request that the pid of the daemon is output,\n"
>>   "  -T, --trace-file <file> giving the file for logging, and\n"
>> +"      --trace-control=+<switch> activate a specific <switch>\n"
>> +"      --trace-control=-<switch> deactivate a specific <switch>\n"
>>   "  -E, --entry-nb <nb>     limit the number of entries per domain,\n"
>>   "  -S, --entry-size <size> limit the size of entry per domain, and\n"
>>   "  -W, --watch-nb <nb>     limit the number of watches per domain,\n"
>> @@ -2647,6 +2652,7 @@ static struct option options[] = {
>>       { "output-pid", 0, NULL, 'P' },
>>       { "entry-size", 1, NULL, 'S' },
>>       { "trace-file", 1, NULL, 'T' },
>> +    { "trace-control", 1, NULL, 1 },
>>       { "transaction", 1, NULL, 't' },
>>       { "perm-nb", 1, NULL, 'A' },
>>       { "path-max", 1, NULL, 'M' },
>> @@ -2721,6 +2727,43 @@ static void set_quota(const char *arg, bool soft)
>>           barf("unknown quota \"%s\"\n", arg);
>>   }
>> +/* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
>> +const char *trace_switches[] = {
> 
> AFAICT, this array is not meant to be modified. So you want a second const.

Yes, you are right.

> 
>> +    "obj", "io", "wrl",
>> +    NULL
>> +};
> 
> [...]
> 
>> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
>> index 3b96ecd018..c85b15515c 100644
>> --- a/tools/xenstore/xenstored_core.h
>> +++ b/tools/xenstore/xenstored_core.h
>> @@ -287,6 +287,12 @@ extern char **orig_argv;
>>   extern char *tracefile;
>>   extern int tracefd;
>> +extern unsigned int trace_flags;
>> +#define TRACE_OBJ    0x00000001
>> +#define TRACE_IO    0x00000002
>> +#define TRACE_WRL    0x00000004
> I would add a comment on top to explain that the value should be kept in sync 
> with trace_switches.

Okay.


Juergen
--------------o3yt0udUaMngoGNulmJRTIKa--

--------------j5a8XqnpF0ZYJG7Gd3tVAs6k--

--------------ijYTGASDyFQB404vDvbF0aKn
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPHjzIFAwAAAAAACgkQsN6d1ii/Ey9c
Hgf/Wc1hiLi/+tQSaKs3yJ6o/x5eKQjbSW4rJIQ4zZWBlicUC0Sq/XKKrzbkvZ+83uLvihho0Coh
7ysj/OBrwL9VIB5AZ8mhSmu43RPYF1Gus7tlcj0o5p/kVKgqM3TDEn74u6Vn2qzP9qhkb2bZxpmz
WwjdIdIcltfhR5tzZ6dLxc1UNWvSR3XMc8A+20wd/CxqQQwR78RuYdWtzqmA8omA2p9obQHJMHgr
9vVq6hjE+F3+HtO/DDvsFTK17dAmxzbOv2hw6LWhSuFSeeBqDVoqWDCr+G3zxWk+TG7sArIUGZKx
ao44NLL5ovrDPe6MIqoIxgrxZWqjmk3YezY0xl42BQ==
=Rhlv
-----END PGP SIGNATURE-----

--------------ijYTGASDyFQB404vDvbF0aKn--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 06:24:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 06:24:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480152.744382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI1rX-0001rT-Nf; Wed, 18 Jan 2023 06:23:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480152.744382; Wed, 18 Jan 2023 06:23:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI1rX-0001rM-KT; Wed, 18 Jan 2023 06:23:55 +0000
Received: by outflank-mailman (input) for mailman id 480152;
 Wed, 18 Jan 2023 06:23:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI1rV-0001qn-Ko
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 06:23:53 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id acf1f3bc-96f8-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 07:23:51 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 5A45F20DAA;
 Wed, 18 Jan 2023 06:23:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 23831138FE;
 Wed, 18 Jan 2023 06:23:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Fb9OB3eQx2OpWwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 06:23:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acf1f3bc-96f8-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674023031; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=9XOvtrcVkOmWGlRiKMRwAPmgBhb7IY9GEt8qZ1c/GQo=;
	b=vAOA6b0GobqNo+F0IJwjnktMskaJs1BWXBWWFX9GvvyiNX85Jjxt3h+Jlsz1YkZ+ghH7PD
	Io12bYqYqNbTdZ4GH8PxnAX8yehsNmdCinWRks6fkuCqrcpW1uHg5WuxBgEYJyBBM8mF6b
	CdzdWIPOQO129tNsBJK5I1JC9YWx6yQ=
Message-ID: <c541fcd7-a829-f757-c949-1b4a089ac6c3@suse.com>
Date: Wed, 18 Jan 2023 07:23:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-17-jgross@suse.com>
 <17595b1f-1523-9526-85da-99b9300f3218@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v3 16/17] tools/xenstore: let check_store() check the
 accounting data
In-Reply-To: <17595b1f-1523-9526-85da-99b9300f3218@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------ZdnA3t4tQLev6YkcScSpCVHu"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------ZdnA3t4tQLev6YkcScSpCVHu
Content-Type: multipart/mixed; boundary="------------sLtV0tiM8qoUHUo5rAMFMqaY";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <c541fcd7-a829-f757-c949-1b4a089ac6c3@suse.com>
Subject: Re: [PATCH v3 16/17] tools/xenstore: let check_store() check the
 accounting data
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-17-jgross@suse.com>
 <17595b1f-1523-9526-85da-99b9300f3218@xen.org>
In-Reply-To: <17595b1f-1523-9526-85da-99b9300f3218@xen.org>

--------------sLtV0tiM8qoUHUo5rAMFMqaY
Content-Type: multipart/mixed; boundary="------------Tjd0oT0yPHUqbzgRn50tnlO6"

--------------Tjd0oT0yPHUqbzgRn50tnlO6
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.01.23 23:36, Julien Grall wrote:
> Hi Juergen,
> 
> On 17/01/2023 09:11, Juergen Gross wrote:
>> Today check_store() is only testing the correctness of the node tree.
>>
>> Add verification of the accounting data (number of nodes)  and correct
> 
> NIT: one too many space before 'and'.
> 
>> the data if it is wrong.
>>
>> Do the initial check_store() call only after Xenstore entries of a
>> live update have been read.
> 
> Can you clarify whether this is needed for the rest of the patch, or simply a 
> nice thing to have in general?

I'll add: "This is wanted to make sure the accounting data is correct after
a live update."

> 
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   tools/xenstore/xenstored_core.c   | 62 ++++++++++++++++------
>>   tools/xenstore/xenstored_domain.c | 86 +++++++++++++++++++++++++++++
>>   tools/xenstore/xenstored_domain.h |  4 ++
>>   3 files changed, 137 insertions(+), 15 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index 3099077a86..e201f14053 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -2389,8 +2389,6 @@ void setup_structure(bool live_update)
>>           manual_node("@introduceDomain", NULL);
>>           domain_nbentry_fix(dom0_domid, 5, true);
>>       }
>> -
>> -    check_store();
>>   }
>>   static unsigned int hash_from_key_fn(void *k)
>> @@ -2433,20 +2431,28 @@ int remember_string(struct hashtable *hash, const char 
>> *str)
>>    * As we go, we record each node in the given reachable hashtable.  These
>>    * entries will be used later in clean_store.
>>    */
>> +
>> +struct check_store_data {
>> +    struct hashtable *reachable;
>> +    struct hashtable *domains;
>> +};
>> +
>>   static int check_store_step(const void *ctx, struct connection *conn,
>>                   struct node *node, void *arg)
>>   {
>> -    struct hashtable *reachable = arg;
>> +    struct check_store_data *data = arg;
>> -    if (hashtable_search(reachable, (void *)node->name)) {
>> +    if (hashtable_search(data->reachable, (void *)node->name)) {
> 
> IIUC the cast is only necessary because hashtable_search() expects a non-const 
> value. I can't think for a reason for the key to be non-const. So I will look to 
> send a follow-up patch.

It is possible, but nasty, due to talloc_free() not taking a const pointer.

> 
>> +
>> +void domain_check_acc_add(const struct node *node, struct hashtable *domains)
>> +{
>> +    struct domain_acc *dom;
>> +    unsigned int domid;
>> +
>> +    domid = node->perms.p[0].id;
> 
> This could be replaced with get_node_owner().

Indeed.

> 
>> +    dom = hashtable_search(domains, &domid);
>> +    if (!dom)
>> +        log("Node %s owned by unknown domain %u", node->name, domid);
>> +    else
>> +        dom->nodes++;
>> +}
>> +
>> +static int domain_check_acc_sub(const void *k, void *v, void *arg)
> 
> I find the suffix 'sub' misleading because we have a function with a very 
> similar name (instead suffixed 'sub'). Yet, AFAICT, it is not meant to substract.
> 
> So I would prefix with '_cb' instead.

Fine with me.


Juergen
--------------Tjd0oT0yPHUqbzgRn50tnlO6--

--------------sLtV0tiM8qoUHUo5rAMFMqaY--

--------------ZdnA3t4tQLev6YkcScSpCVHu
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPHkHYFAwAAAAAACgkQsN6d1ii/Ey+L
gwf/Q0KKbxefmzYmbxpBUZazWvEiddgz3S8jdBSl9re6R/48eGBlpdTYcV0ZhK5II6IfwSsjaT3C
dNcLh6xhiWqlERdMVkVUrDwOZqL4VC6o42R0g6Cb9+uO94HlD9JgRbT3pLkd9O9g/0mnzZTxJDfr
jVILugznQe6xpFa/+pR8LFQdNpNXfciEI4YvScugZXm0TLfsn2DjplLX1ZHuPQzOWE0Ymm7vkp0a
GC8z/BLBVBvUZq0Lesirq90aAPP1T7fr7J1cX0KLkTg+eaUPcJUhmcwxCmmOHsUaO2ohLPaTVeNe
+YtoFvEeQqtRRBYbTYyFGRYKEWmB7qrRkJI+E4BoGQ==
=BTqi
-----END PGP SIGNATURE-----

--------------ZdnA3t4tQLev6YkcScSpCVHu--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 06:34:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 06:34:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480158.744392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI219-0003L8-KK; Wed, 18 Jan 2023 06:33:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480158.744392; Wed, 18 Jan 2023 06:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI219-0003L1-HJ; Wed, 18 Jan 2023 06:33:51 +0000
Received: by outflank-mailman (input) for mailman id 480158;
 Wed, 18 Jan 2023 06:33:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI218-0003Kr-Te; Wed, 18 Jan 2023 06:33:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI218-00081C-Ea; Wed, 18 Jan 2023 06:33:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI217-0002Nv-Tb; Wed, 18 Jan 2023 06:33:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pI217-0001dU-T8; Wed, 18 Jan 2023 06:33:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tDRQd05pemzMB5RigPZIn5bR4gqHfbb2NLmA22TaKto=; b=BTGmikRLiiv0As8bWbbjSzbukD
	KILFSu8LOVxA2TCNgvy15GT8Fk7VG4nJJE1FIrxozuU0sOkTO3Z0wpn9QjwT+YKIwcz3DwfL8Q6Ed
	62rNft2WLBeONbRoCjsIYni4E84slx5REGx5X8A44Ds0rEL12tT8OPi/YVyY7fw68kaM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175951-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175951: regressions - trouble: blocked/broken/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 06:33:49 +0000

flight 175951 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175951/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175746
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    5 days
Failing since        175748  2023-01-12 20:01:56 Z    5 days   25 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    3 days   23 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header requires to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one space more compared to the other lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. on SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 07:16:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 07:16:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480166.744404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI2fu-0007gM-QI; Wed, 18 Jan 2023 07:15:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480166.744404; Wed, 18 Jan 2023 07:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI2fu-0007gF-Mv; Wed, 18 Jan 2023 07:15:58 +0000
Received: by outflank-mailman (input) for mailman id 480166;
 Wed, 18 Jan 2023 07:15:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI2fs-0007g9-HB
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 07:15:56 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2050.outbound.protection.outlook.com [40.107.22.50])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f1f451b0-96ff-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 08:15:54 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7520.eurprd04.prod.outlook.com (2603:10a6:102:e9::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 07:15:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 07:15:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1f451b0-96ff-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ChHkOpZJ8ufNlTczjrj/I3DJgS/rQ/N9W3DBR4DUPCrVvGkzeRc1yjOCsZYosx3tHt+gGGKIBLNrZw2pWIe/mmqnYxaiJOH1CNqWWBiZ6RSZSZG+Cq1SfghkdfjJps7M57g3JhpBMmduIVegoNQ9Q2MvyIy+if5gDsbF/VuaR3faaaTq0a+5DGYNbCeJrjR91abLeNnfAX3qbX1FLZxDuHMPr11DkFeXQkLNeoci87/M760lARIxhV+DrQOr7FbjFg61EWgRpDCuKHsL1fxleBjkr9U84jqNEcs/kX4AwsT1ZdlSeNPVFyszNexXKECaqCOMG4/0SZHB8Yzkf1JM8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cRp6zVQ7NYBJCGPeBh2wyoprETQ6CvAi0CYQGPihjyg=;
 b=QnNC1vKS0HKyOHu14bNNPHq6p5WLeY3TiE7axZ4IKGj5N6bpRKWWHJmCeMdPOWbQsg6Xnga0/vLi6vikBtyB5AQ2Uf7sLwX1WrW+2+JfV5LgLJe7CPj+WgBkKgB81O2LooHEviBKsta95K4YahcxtpJvY5lw0OiqjhMI7FF0HMGnZF7ZPn49N7iTKjBAZIxYtHKppEw/Q3G+rujMF4RglZ39gjFP7YUJd6N35DS5NxKrcVPzCGDyn4+uSvTP95DHSfChedq353XP3/zv//PnEqi92yQStKjTgoKR8geFVWg3ZKl246kGh3IFtuHn/ZeCAkqH2DQx71lkGNDUfO6A4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cRp6zVQ7NYBJCGPeBh2wyoprETQ6CvAi0CYQGPihjyg=;
 b=24yFOGjtht5bSTE2ILlWslxvVpPxn7wVApj6BhvmKOTp3H8LMFKJOjyHh8ZRgLqwEF+Rokl/tdmIXQB8yyBUrhsVgJGDFoKaWZ4YdyHMf+STkb5dHHZfgzPoMxkENWAwORPGluuokMAjwz05FhOhPznfMO2jv3GJSJ2KkpWzs5hxUTSnVyLRJ61YEBpbW4w8yXI48NJ/nsD/9e+y7WKOhk2gW6yfY9uoCe8EDHdVUpHHSG7ioraBhAEnrt0omGemQ3kh8AaJeorjcITgJGemRU7rN+5QvReCF/hj5MSg6Rn35eqicbEs+53wRnSCKkPo4/9CsZeQ0b4U/siwnR2ZXQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fd408cf4-bb25-c8b0-b979-340668d4c5cd@suse.com>
Date: Wed, 18 Jan 2023 08:15:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH] Create a Kconfig option to set preferred reboot
 method
Content-Language: en-US
To: Per Bilse <Per.Bilse@citrix.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230116172102.43469-1-per.bilse@citrix.com>
 <f7e7b6ea-5bc1-ba2b-5d21-eb431ecff53a@suse.com>
 <348dff00-5ae4-5dc2-64d4-d52409a22283@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <348dff00-5ae4-5dc2-64d4-d52409a22283@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0124.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7520:EE_
X-MS-Office365-Filtering-Correlation-Id: f502b59f-30c9-4e60-d8f8-08daf923d43c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+uW+gutxISe+//RcVShKNA6/YQ4t6hk+jbwVwye1MGE3eMyeBLe6HjXiCYPMGh+Zr8dxZ1PObHNdQ3phrUMGKi8q1AI0FWvW8/U4n3GyMGp+7tLW8vH2a0Fm8/e4jMilnnmRuzg61ZeBLEfmSR5C+yW6/8khtagDLlbks8tiUzqlkXHuiU5kQ4LnOambyHU07VBTTnkT8A/97nlf66eqsah8Up+Qzft5p2ez2V8u/QavTple7XFq+60yP/EAJbq3PDsOIuKRdv4agLNa3JO1Oi06ObH/xOM0zsVQKUvOOe8YTInA4+jEH/XJj+YfsLK5VFRBq1fQIPUekIX7wCwGFwq1n9zpSfRKs/1SlXAw/lVQ6pJEmye4dn/HQfyRB3kn4dCkdMSdMubr6bJg5EPacc6Arn1lFAvF+FdHPoOnHS2BPBJZfcbvP7Jml5ORzfwsKwlDSuyJ12eNLaACgB1GkPcQ56cm6eGVE2O4sx2z3ibw2PQzOJfw693imEwEWhcJJaLnAtysqpPMoc5t4T/tv0e7IpHBgX6ZPidigYHXN3fhGEU+h+2OVIU7taCwPyyiMxk4VSkf+4oVH+UjxfO7SRNwW+CVxyFrKVf/SfPT1SU7Ti0WE+Npy4Ngeb72oGpRRdcw4TgtQfJYtIRuvhDYgn9PiHAGPLtIohzxlWbg0KStZqhrz6gstSXBhp8qkuzDVie5/PPSKp6XtiDswd3c4GG3qgInBZqppYCEmHD0cKA=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(366004)(39860400002)(396003)(136003)(376002)(346002)(451199015)(31686004)(86362001)(53546011)(6512007)(6916009)(66556008)(66476007)(2616005)(26005)(41300700001)(186003)(66946007)(8676002)(4326008)(36756003)(31696002)(5660300002)(38100700002)(8936002)(2906002)(478600001)(54906003)(6486002)(316002)(4744005)(6506007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SGZBY2NIdXEwbitqWktJbkFaV0R0VDlQMlB6cVRyc0xMSWFqclVtWlJlcXI0?=
 =?utf-8?B?endKYndiUUx2ZkxuQndGT0NqeUthTmEyVHRmOWROZHdneHdiMldZMWFPOE1J?=
 =?utf-8?B?dktHQU9pTW1RVU9jVzA2Y3RzeUVkSWFqWWsvTndvTlI3TU1mcjdEYU8xa0ZC?=
 =?utf-8?B?ekhOb2NnMFBlYk05R1YrWElYQnVYVjdGWTY2MXBZaWVmTy9ZRTJLWFc0bTBm?=
 =?utf-8?B?VS9IZG84TkVLZjFUZzk2aDF4elFycG1GRkFEajQ1TU9sUE9VdkZEVnNndG5I?=
 =?utf-8?B?Y3BTQTBBcGlISmUxMGhvRjhPR0Q2Sm1ZZURnQm5hTWZSYWc4UDVaYWk0SjJP?=
 =?utf-8?B?bE5GT2ZxTkRWTjlMdkUyZW0zeHQxaFYvR2FiUzhkUUVmbm04ZUJrTUJiTlBN?=
 =?utf-8?B?QjlxTCtaOXRzQjZUekp5eDlpVE1DUHJCVFE2ZkwrazNSZnVOaGpXdXJ4VTdF?=
 =?utf-8?B?bUo5ckxNeTRhdmpWeVpRdUxtaHVkYzkwQ3JiTjc3eWVaK1Y1OHliRWdhOFVG?=
 =?utf-8?B?ZkFJbm1WRmNGNG90VDl6aSt2SnJHLzhoK25mQ0QydVNFR3NJUTdxUGZwYUZw?=
 =?utf-8?B?ZlNDcWdSdEQwV2U0ejV0NDFzRzc2bmNkYUJBNVE2d29NYnlWTXhiTjBaWVMz?=
 =?utf-8?B?Ly9pSmM1eWZpdzlQdXFpZnk4SVRFcjVtRDhPLy9oRDlPdzRBVnFjZnhQc0wz?=
 =?utf-8?B?cHo3TG5jSTdjc1M0dENOSSttMnBlcC9idklyamVCRDV5OXBSWEtob0dUQTRj?=
 =?utf-8?B?QzFaTDJraVRSNElFb1ViVEVFYWJudVRiYlQ1Qzl4a3ZHUTFKZ2xQSTZZOXNP?=
 =?utf-8?B?NVdhT0hYL2RHdDE4NHMxcC9GMzQ4R25QV2h4TzlxOEZHNFZPMlNsWm5wdWkw?=
 =?utf-8?B?NEpDTVRlTDhLWkRWbEw5ZEZEUXJDWU56QndFcGc0SWp6THFHb01ZNS9vcHI0?=
 =?utf-8?B?V052dWFYRXYvL2tkRmo5NjZBSTkwbkNnRVlXQVhMY1RPYlZCU0gyeDBXUTJh?=
 =?utf-8?B?NnJuR0NSYkhzUnlRSlgwOW83MFVDc3VBRXcwckx5cUFiZDYzSkxlaXVCcUps?=
 =?utf-8?B?aWFDU2ZQaGQ5MVNHK1d3UFJrMXdEYlcvODhmT1llbjFxek9yNmg2c1BNK3o4?=
 =?utf-8?B?YWVLYW92WVk0M2FIT3AwMUNtenBVb3JZckpPZm9nam45UldEL2MwU3hBZTR5?=
 =?utf-8?B?OG9seFFNdWdJQ1Q4ZklyOThQdGU1UlJFWE40bnExbWFYQ1R1WXgxTHNFNk4y?=
 =?utf-8?B?KzN2V1p0UG5UaHVWSHUrZklQQVh2SzVkZm1mMmIzVVRlK0h6OE9WWHJPVEtH?=
 =?utf-8?B?VUFtM0xLc3hNNEQ0OWJwVFdGUStkVUxkUk0xc0FoV0dyKzQyT2RIL1g5K2d0?=
 =?utf-8?B?WWNSVHN4ajBPd1BlR25pQW5oRGQ5VHFMTTVld2gvTVEyOGJWM0gvWnlZU0Ix?=
 =?utf-8?B?SmgvejV1c0Y2NDRKNk1uclFwK0FGYzA1em93WEt5NCs4MDlDcXlFRll2QkUx?=
 =?utf-8?B?VmJweEo3dGkxcDlQRThUMVB2SlRoYXZvTlpFU0xjanpYUDl2TE5rbVFqT0Uw?=
 =?utf-8?B?d0ZUU1RYZ0ZHNEJEbVhFV0VSZDhqU0ZyT2Z3dkwyV3VhNGp2Z05VdWRrdkZy?=
 =?utf-8?B?ME5Sa1E4bUNpc2FJckZKVUJQQVRDc0FMS0FMM1llRGdMVFErWVYzRVAyalhY?=
 =?utf-8?B?QWtJVURKcGR4Z2ZEdWROZUZIN1UvazZ5S1h5c1FkZ2JtdFJTVms1a3llSHZU?=
 =?utf-8?B?MVVKK2hCT0V2bDZvdVJtdGtmNFNPeTRHNGsvckcvZGJQSDlGRDVOTm5hSkQ5?=
 =?utf-8?B?QWlTZnVXbVdrTVhTRGN4cElkUm9TL3lBaWVKbUtVT25sb0U3QWV0MHBYandI?=
 =?utf-8?B?RWNjTytUa3Znby9OMkJ5aHF4ZnF0dGJVSDV0ZXBTd3ZyV1p2bU45OXhQQ0ZE?=
 =?utf-8?B?RVlCQTdySkpob2R5Y0RtbjlNRUpRZkNoYmV5bkhET3NmUnN6VHM5WDZNL1VX?=
 =?utf-8?B?ZDdObjFvakVTMDVxRzFKTTl5NS84M0RjN0dKS00zdXBNdzZhdVFOMXpwOExw?=
 =?utf-8?B?R0plTGtTREVQbzhPdmFPQUtBSkNyRTFTUkFIazdJVnU0YjlnSFNLMUQ5MWlO?=
 =?utf-8?Q?5RktyGycHBwV9oSAcwnMnf3ji?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f502b59f-30c9-4e60-d8f8-08daf923d43c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 07:15:50.8931
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: y46/7GHbjXSvueF7MkRFU9kY+KETg1cmoQ99PNLiUoQ/1dmuwXAXsqIZ6gPIJyPxyuKPHSZxmYK6GXntjdrhSQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7520

On 17.01.2023 20:28, Per Bilse wrote:
> On 17/01/2023 15:55, Jan Beulich wrote:
>> On 16.01.2023 18:21, Per Bilse wrote:
>>> +	config REBOOT_METHOD_XEN
>>> +		bool "xen"
>>> +		help
>>> +			Use Xen SCHEDOP hypercall (if running under Xen as a guest).
>>
>> This wants to depend on XEN_GUEST, doesn't it?
> 
> Yes, depending on context.  In providing a compiled-in equivalent
> of the command-line parameter, it should arguably provide and accept
> the same set of options, but I'll change it.

If consistency between the two cases is the goal, then why not adjust
command line handling (in a separate patch) to "not know" about "x"
when !XEN_GUEST?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 07:25:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 07:25:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480173.744415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI2p1-0000oA-Qp; Wed, 18 Jan 2023 07:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480173.744415; Wed, 18 Jan 2023 07:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI2p1-0000o3-Mg; Wed, 18 Jan 2023 07:25:23 +0000
Received: by outflank-mailman (input) for mailman id 480173;
 Wed, 18 Jan 2023 07:25:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI2p1-0000nx-6s
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 07:25:23 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2084.outbound.protection.outlook.com [40.107.20.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 43da2e72-9701-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 08:25:21 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7993.eurprd04.prod.outlook.com (2603:10a6:10:1eb::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 07:25:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 07:25:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43da2e72-9701-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QfPzzu/guOIakYquH+gtGzubRhN01vJiYO49gH0z73apqJD/Cg/eNBipP0ZWf5GFWLGqfE1pqjQ/mP6jaf8S3wLK127lKbPv6wr+/YPejCsAFxIzwYIxeDVceKDXuIVmMFSiVeyfIo3q6cUDqXWafz00nVgHu7PSY2N81Hg8q7bClTpywM9T2FhTpTj1Q0MQn7utZnue1go/BCpY1kB3JQRzxrNbBMxmSrjm4pZapBOvAPlYjHODmuaY/tp9ix9+QXl2j7bb2sF7ERvEEXV1cn/zhEIkGTuvC+geMDhgXpQvdaRIyGwGvVrqUyOxalwJ0huSu4yWvIGwx8UcO2WNWg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=d1BXN+EBezKFEDEV7kCekm2JH8EqTVC7vguBfgcsXcc=;
 b=fmTsQRrzQjBhzQ4JqQTEDUQdHbQfBt+isc3Bf8Qp5n/BviFXPOSX611TYr9ZaBowzPLREv1M5QdrPnkuVzIQTsG82AKjtTl8CLam9W7n/Rw7cLFA8AyEaFP7X2hF7qjBKoY+G1U7HMkpBt63FJs2hsuB90c9yeE+KQ/GW/3nSuHDG+O/IPOinsR2ZP8Ffcx+2iwGOirkIGcWa7/Y+OchdZoiSozDirVSFfMYOCO1vCggOXQQT72o8hI7a4t5rgCtZJJu8cB2kkhH/zmv7mI4PkF6IvKX/cIpjciwFgjbUSGHR/qd32Li/LxyVd9jvxw9lfMyLrDisuXYYDlADVVWRQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=d1BXN+EBezKFEDEV7kCekm2JH8EqTVC7vguBfgcsXcc=;
 b=06XrNAZd7fAkw62zE9l06Rpz7qhSgA1kOZhgqkDnh2Eih9Zp9MNluNuFZnXQuulRsaVW3NeNctkgpEeGYYxOfDHKOpzMoHEfX9v6rqoXg+O2D4uvzQ8YJpio08iUD10lQhUNsD9uDKCNF3Z9JCMNb4YDB8GnemSqMtjfCKoizmh9X/RakNmcdYbCpjIouM87tjq2Dm4V82j1q9okrcizL/R+1qZTgbJQ5eDKKHgi0Bfd31vvXhoOo7WhgBvMMEPvRFRpUBmjR6DUEgljDlqJfWAHE0XaUTx5MLDBv7iFVkHRrJN+ermRGf6tp680WapKoScTZSunjbi7jeBZoPtc3Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3852a9f8-c950-3c45-fc99-946a2ff4a5d1@suse.com>
Date: Wed, 18 Jan 2023 08:25:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 6/9] x86/shadow: re-work 4-level SHADOW_FOREACH_L2E()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <27a7245c-f933-5b2b-5685-d9ba2dbd4a8c@suse.com>
 <ed382d91-d4fc-4778-d1e2-9f55b147e33a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ed382d91-d4fc-4778-d1e2-9f55b147e33a@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0167.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7993:EE_
X-MS-Office365-Filtering-Correlation-Id: bb3852c7-d4f7-43e1-f60b-08daf92526aa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wmYlz7i4+GwjSNuwJjtuQFHMZL6qZKKBYC9vEn+Z+MPgSHMATUMB2MQbM1DFHD6x3Pv76+SrqnLGWEPvutuHPNHPcDK4A4JdzHO0fHbWwI+sD5LJReMcsFo7TkrpKOoyS39TwEVN8IPRE3iGh5LYvUr224LaUPLK13QIFBGD+SSt0mxrefRd2TtnfngZW6/2eJr000j66ZP3D8Nosmtfv9MUobgkkPLPOa29MssJw1yIF2UK0QfszpLDkx+1JTo7ii/Piz5JkgJ5IVMzubAPk71VTcJ5Sxzib9ooEXozrCqIOTCmMex3UgR7xPoc3/MOX5nPZ1pmflRvjBAmirO69ndqn/CusQQnogLjVfZGLpNJBBOyRkQ5IPAcbT34dpU+BjE5EGVFhMkD86gKlXjQ16a1Uz9agUbJjqZa4gkbO7z31pbSo3WgUb5NXluRI2SPAQexFPX4QU5cVniExsfdXaZrvHqd75hYhrlfJZY4N39zAR3xjwixlrC2Orsx3Ekc+EJRPn4WUcEK+S3iCye0vzk/xYFb6wBW7Z+1Dang6pMYPB1TEx1r0Mh7rpB787cGArCXK+mvHndol0oE2U8/9CAPCqF7ceKcqD8ukmQGuzny/NA2xZ9pi5Iumkp8Zeo9bBDbh1qUU8HHHJZcmHQp895330KFB5ovnEzyUjwDLqSnEGvGmt7x5uwuuW1lkBEFAfg/Y9NTuhX5aI5/+Kt5WPcDAueAKD5uAWvwz6AbD3c=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(376002)(136003)(39860400002)(396003)(346002)(366004)(451199015)(38100700002)(31696002)(86362001)(66476007)(5660300002)(8936002)(2906002)(66946007)(66556008)(316002)(41300700001)(8676002)(6916009)(4326008)(26005)(2616005)(6512007)(186003)(83380400001)(53546011)(54906003)(6506007)(6486002)(478600001)(36756003)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Vkt6cG95M3JQLzl6YkVNR3lXbGl1OGJCTCtORUV3dFFua2JIb0NDT0dOMkRI?=
 =?utf-8?B?bG5RVG9tNStNZlR0SEM2RkF0dVhsWHd5dDZxT1M2YWtMMDJEaHBZdUhpT3l0?=
 =?utf-8?B?OEhGNEJHWmFkVXZPVkUxNXR4aTFlRkRpNytadHJ2eWJDWDFEaHJxbXUvL1kx?=
 =?utf-8?B?Wm0zdCsrVDJ3RzR5aXRHQkV0dGJyNjRtTEdtdmRBaEJVVWFKNjVxL1R5Yng2?=
 =?utf-8?B?TERRdm81V1NqbG02SGhram5rOWZMdThNK3N5eVhMMG1VemZsa1BxS2lPclUx?=
 =?utf-8?B?b0xpMHQ2OTQzM3M2dURlTWphbjZnSjhDYmY0STNNNDNVMlFDNkpzSjNLb2ZV?=
 =?utf-8?B?OUJEbnA3Mi8vNHI3WG9HRGZHZUNTdzZQUjhYUHNwQ2g4QWtBUUZBODFBRHNz?=
 =?utf-8?B?eFlkQysvYnUyRzVXeWlHWGxBSGxGS1RncjNlN21JK0pwazRWNDZ1VGRPZnE1?=
 =?utf-8?B?bVdscTR3Vk1hbVJKeHhnVytIZ0ttckhiTmZwMUNZcVNKbDdnNkVnUFNXb3k0?=
 =?utf-8?B?bTUyT3RtWm5ULy9SalF2a1AwK0NzTld3RWZCWnJodXBiVVVmL1NBd3dpTXZH?=
 =?utf-8?B?Qjdod0o3TVovcEJJdVY3bGs2c2JYSUsvOVdIcXE5NktJSlFWWkNMSXhXelFh?=
 =?utf-8?B?cU9DYXBvdEtGZ09FOVptOG4xT3F2amhQMHBDa2p3dlZlczI5S2lOVk1STHB2?=
 =?utf-8?B?UlFqNVg0ZWlURmhBSW1DZEdhUXZNUFM0WWRqcHEwZTlKb0RKNXcwQ01xYkFE?=
 =?utf-8?B?MktPT0V2WXM5VHpETkY1aXdwUU1KVEpvOTQwY0lQZGlEU1FvQk1MWmtHTmhw?=
 =?utf-8?B?aHAwamd5cm1RTWxiV21rNTFTNHEwMHJBRmI5c2lxSlA4VnBsRnJCU05pcXVR?=
 =?utf-8?B?aFhyYll0d2tNMkdiWWdMekZWSm5ySHowdEI3OHBwQXlORnVzMWhEUFVYMEp6?=
 =?utf-8?B?VjRQcEVxWE4zY25IdzV6ek5uUnNhM2dLc2M1TWwxbDFtZTl4a1hBN2RQKzVD?=
 =?utf-8?B?V0J2Y3IrczNBdmh6S0VPelFtU3dtZ1dCWDRWVndXalJtUC90b2x6T0JDUmtV?=
 =?utf-8?B?cXhGTkNZRnh2NVhuMHMzK3Y0RE5DSUNwNmdQQ28vRlM5MmxKZzkrMnkwT29J?=
 =?utf-8?B?cWk2MndvK2dVTHFlb2FDUWVLeURldGpFNWkyMzI0enlkNWhta0IwY3VTZSt2?=
 =?utf-8?B?Qmlod1g5Rk9vck1UclVCcndIbVZkajdYb3N2UXNTSDltUmhVZUlXMkNMUGpm?=
 =?utf-8?B?QWdtREN0WmFhSlhKUGtHcTR5Tzg1WnllaGs3ci9scUdGZ0pDMTAxVUliTncx?=
 =?utf-8?B?aEphVTBhSEtSYjA3OTVTbHpuZVJ0Y01Ud282eEJHZlhybnpSdW1yMXBpaSsz?=
 =?utf-8?B?Sm9XV3VGRVZta1VrakVONDdrdWJWekVTa1VuYmQ3b2lRcGprZGk3czFnMmJX?=
 =?utf-8?B?bWpJanM1bUNaZExGU0JZamdsdU9kUERnNFh6czY0NS9IbGtZVTcyYi9HT0tR?=
 =?utf-8?B?WVVzT0dJcDI5RWhsTmFqRTJtNGxzTGVRbHNhSWhJWUFsOXY1Q3hqMVh5anll?=
 =?utf-8?B?S21mTDNoMVJqUmpRKzJrUWVod3J5ZVFkQlV6dXl5UW1lcVAyVWdTa3dvZmxW?=
 =?utf-8?B?TExwRlBhc1JpdVBiQ01OejZJTFdQKzhXdjYrMEVBRk4yUlNDRW1PVFhuZC9I?=
 =?utf-8?B?elNSbEdlUEJJRVJxNlp1VGZmTktycHNJWklsQ3I2cC9yeGRFcEtuRGR2NTQ1?=
 =?utf-8?B?VGIyRU9Mc0FzMzArQ21MaGZSZUpJTTFCMjZjc1hOSVFVVDFBM0w1OHBlbzNE?=
 =?utf-8?B?bit3cEkreHhZOWFGVHo1Rml4SGhpK3ZPNGJNaTJyZldyNUZ4OE9wNEpkditl?=
 =?utf-8?B?RTFuYVcvTVpmbTlreXBsWVJRUDZ5WE55TE4zMzVVMWpWNjA4Z0ZNRmVqNkFH?=
 =?utf-8?B?NEhEa3ZLcjgrbWxzRitpQUhKamdscjhvRFdPdHQ5TEs3WmpwZjdOd3dIQXZX?=
 =?utf-8?B?SC9jQkphN2dMN3lMQmZOTnpKMDlORFZub3VzVWtnM0pkNDQ4K0o0QTB2bitZ?=
 =?utf-8?B?d3h4Njc5QmVVdmxWZFJtOXdHQjVOMzRjU2xhaWY4Ui90bHRmQ2dqZmdRemRo?=
 =?utf-8?Q?HSlNizsymYDeAjO5XQ8O+xj5M?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bb3852c7-d4f7-43e1-f60b-08daf92526aa
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 07:25:18.5597
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qZJpq5Kk7MKpG41dI0vSddfR6JE2u0r+svmWgGATVjzucI2/pl2MvEYZnrMFJo6uzVt3sr8hRAlTkG7oUACWwg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7993

On 17.01.2023 19:48, Andrew Cooper wrote:
> On 11/01/2023 1:54 pm, Jan Beulich wrote:
>> First of all move the almost loop-invariant condition out of the loop;
>> transform it into an altered loop boundary. Since the new local variable
>> wants to be "unsigned int" and named without violating name space rules,
>> convert the loop induction variable accordingly.
> 
> I'm firmly -1 against using trailing underscores.

Well, I can undo that aspect, if only to get done with the change. I do
consider ...

> Mainly, I object to the attempt to justify doing so based on a rule we
> explicitly choose to violate for code consistency and legibility reasons.

... your view here at least questionable: I'm unaware of us doing so
explicitly, and I've pointed out numerous times that the C standard
specifies very clearly what underscore-prefixed names may be used for. 

> But in this case, you're taking a block of logic which was cleanly in
> one style, and making it mixed, even amongst only the local variables.

That's simply the result of wanting to limit how much of a change I
make to the macro, such that the actual changes are easier to spot.

>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
>> @@ -863,23 +863,20 @@ do {
>>  /* 64-bit l2: touch all entries except for PAE compat guests. */
>>  #define SHADOW_FOREACH_L2E(_sl2mfn, _sl2e, _gl2p, _done, _dom, _code)       \
>>  do {                                                                        \
>> -    int _i;                                                                 \
>> -    int _xen = !shadow_mode_external(_dom);                                 \
>> +    unsigned int i_, end_ = SHADOW_L2_PAGETABLE_ENTRIES;                    \
>>      shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                         \
>>      ASSERT_VALID_L2(mfn_to_page(_sl2mfn)->u.sh.type);                       \
>> -    for ( _i = 0; _i < SHADOW_L2_PAGETABLE_ENTRIES; _i++ )                  \
>> +    if ( !shadow_mode_external(_dom) &&                                     \
>> +         is_pv_32bit_domain(_dom) &&                                        \
> 
> The second clause here implies the first.  Given that all we're trying
> to say here is "are there Xen entries to skip", I think we'd be fine
> dropping the external() check entirely.

Will do. I may retain this in some form of comment.

>> +         mfn_to_page(_sl2mfn)->u.sh.type != SH_type_l2_64_shadow )          \
>> +        end_ = COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom);                    \
>> +    for ( i_ = 0; i_ < end_; ++i_ )                                         \
>>      {                                                                       \
>> -        if ( (!(_xen))                                                      \
>> -             || !is_pv_32bit_domain(_dom)                                   \
>> -             || mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_64_shadow     \
>> -             || (_i < COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom)) )           \
>> -        {                                                                   \
>> -            (_sl2e) = _sp + _i;                                             \
>> -            if ( shadow_l2e_get_flags(*(_sl2e)) & _PAGE_PRESENT )           \
>> -                {_code}                                                     \
>> -            if ( _done ) break;                                             \
>> -            increment_ptr_to_guest_entry(_gl2p);                            \
>> -        }                                                                   \
>> +        (_sl2e) = _sp + i_;                                                 \
>> +        if ( shadow_l2e_get_flags(*(_sl2e)) & _PAGE_PRESENT )               \
>> +            { _code }                                                       \
> 
> This doesn't match either of our two styles. 

Indeed, and I was unable to come up with good criteria for whether to
leave it (for consistency with the other macros) or change it. Since
you're ...

> if ( ... )
> { _code }
> 
> would be closer to Xen's normal style, but  ...
> 
>> +        if ( _done ) break;                                                 \
> 
> ... with this too, I think it would still be better to write it out
> fully, so:
> 
> if ( ... )
> {
>     _code
> }
> if ( _done )
>     break;
> 
> These macros are already big enough that trying to save 3 lines seems
> futile.

... explicitly asking for it, I'll make that change. Would you mind if I
then also added a semicolon after _code, to make things look more
sensible?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 07:32:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 07:32:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480179.744425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI2vE-0002GP-Dy; Wed, 18 Jan 2023 07:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480179.744425; Wed, 18 Jan 2023 07:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI2vE-0002GI-B9; Wed, 18 Jan 2023 07:31:48 +0000
Received: by outflank-mailman (input) for mailman id 480179;
 Wed, 18 Jan 2023 07:31:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI2vD-0002GC-1y
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 07:31:47 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28c09018-9702-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 08:31:44 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 80FFE209FA;
 Wed, 18 Jan 2023 07:31:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5D10F139D2;
 Wed, 18 Jan 2023 07:31:44 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id A577FGCgx2OhegAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 07:31:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28c09018-9702-11ed-b8d0-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674027104; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ZLfi7g2GBcYzwP9pKAdtS4hulT20e64IYvVfC/iPhts=;
	b=joaefor6cWtaqHSJJmwno/9gK+OaFS90DmFHXtiKItoylqHm/AO2xhENHOx/ATMe8kCVc1
	/fcPD/kxvTO2Rsd6KfzIlrgE8Wbff2hUaFrosbRHg/h6tSiF2l8Zww6yXtwG2VwG7Sgo6z
	R5caZgnTlS1AFzsi12hsUw1GAW+6BHE=
Message-ID: <94ab0683-e4b4-31cc-205b-2039860a4c70@suse.com>
Date: Wed, 18 Jan 2023 08:31:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 07/17] tools/xenstore: change per-domain node
 accounting interface
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-8-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230117091124.22170-8-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------UjMhXtf085jp4tlbK4Mv2Npr"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------UjMhXtf085jp4tlbK4Mv2Npr
Content-Type: multipart/mixed; boundary="------------EilRpdFovX1MaHaT78vpus7w";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <94ab0683-e4b4-31cc-205b-2039860a4c70@suse.com>
Subject: Re: [PATCH v3 07/17] tools/xenstore: change per-domain node
 accounting interface
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-8-jgross@suse.com>
In-Reply-To: <20230117091124.22170-8-jgross@suse.com>

--------------EilRpdFovX1MaHaT78vpus7w
Content-Type: multipart/mixed; boundary="------------MJ3IJudYGviZTPS8aIIpoGR9"

--------------MJ3IJudYGviZTPS8aIIpoGR9
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTcuMDEuMjMgMTA6MTEsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+IFJld29yayB0aGUg
aW50ZXJmYWNlIGFuZCB0aGUgaW50ZXJuYWxzIG9mIHRoZSBwZXItZG9tYWluIG5vZGUNCj4g
YWNjb3VudGluZzoNCj4gDQo+IC0gcmVuYW1lIHRoZSBmdW5jdGlvbnMgdG8gZG9tYWluX25i
ZW50cnlfKigpIGluIG9yZGVyIHRvIGJldHRlciBtYXRjaA0KPiAgICB0aGUgcmVsYXRlZCBj
b3VudGVyIG5hbWUNCj4gDQo+IC0gc3dpdGNoIGZyb20gbm9kZSBwb2ludGVyIHRvIGRvbWlk
IGFzIGludGVyZmFjZSwgYXMgYWxsIG5vZGVzIGhhdmUgdGhlDQo+ICAgIG93bmVyIGZpbGxl
ZCBpbg0KPiANCj4gLSB1c2UgYSBjb21tb24gaW50ZXJuYWwgZnVuY3Rpb24gZm9yIGFkZGlu
ZyBhIHZhbHVlIHRvIHRoZSBjb3VudGVyDQo+IA0KPiBGb3IgdGhlIHRyYW5zYWN0aW9uIGNh
c2UgYWRkIGEgaGVscGVyIGZ1bmN0aW9uIHRvIGdldCB0aGUgbGlzdCBoZWFkDQo+IG9mIHRo
ZSBwZXItdHJhbnNhY3Rpb24gY2hhbmdlZCBkb21haW5zLCBlbmFibGluZyB0byBlbGltaW5h
dGUgdGhlDQo+IHRyYW5zYWN0aW9uX2VudHJ5XyooKSBmdW5jdGlvbnMuDQo+IA0KPiBTaWdu
ZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQo+IC0tLQ0KPiBW
MzoNCj4gLSBhZGQgZ2V0X25vZGVfb3duZXIoKSAoSnVsaWVuIEdyYWxsKQ0KPiAtIHJlbmFt
ZSBkb21haW5fbmJlbnRyeV9hZGQoKSBwYXJhbWV0ZXIgKEp1bGllbiBHcmFsbCkNCj4gLSBh
dm9pZCBuZWdhdGl2ZSBlbnRyeSBjb3VudCAoSnVsaWVuIEdyYWxsKQ0KPiAtLS0NCj4gICB0
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jICAgICAgICB8ICAzMyArKysrLS0tDQo+
ICAgdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2RvbWFpbi5jICAgICAgfCAxMjYgKysrKysr
KysrKysrLS0tLS0tLS0tLS0tLQ0KPiAgIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21h
aW4uaCAgICAgIHwgIDEwICstDQo+ICAgdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5z
YWN0aW9uLmMgfCAgMTUgKy0tDQo+ICAgdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5z
YWN0aW9uLmggfCAgIDcgKy0NCj4gICA1IGZpbGVzIGNoYW5nZWQsIDg2IGluc2VydGlvbnMo
KyksIDEwNSBkZWxldGlvbnMoLSkNCj4gDQo+IGRpZmYgLS1naXQgYS90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYw0K
PiBpbmRleCBmYjQzNzllNjdjLi41NjFmYjEyMWQzIDEwMDY0NA0KPiAtLS0gYS90b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jDQo+ICsrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0
b3JlZF9jb3JlLmMNCj4gQEAgLTcwMCw2ICs3MDAsMTEgQEAgaW50IGRvX3RkYl9kZWxldGUo
c3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFREQl9EQVRBICprZXksDQo+ICAgCXJldHVybiAw
Ow0KPiAgIH0NCj4gICANCj4gK3N0YXRpYyB1bnNpZ25lZCBpbnQgZ2V0X25vZGVfb3duZXIo
Y29uc3Qgc3RydWN0IG5vZGUgKm5vZGUpDQo+ICt7DQo+ICsJcmV0dXJuIG5vZGUtPnBlcm1z
LnBbMF0uaWQ7DQo+ICt9DQoNCkkgaGF2ZSBtb3ZlZCB0aGlzIGhlbHBlciBhcyBpbmxpbmUg
ZnVuY3Rpb24gdG8geGVuc3RvcmVkX2NvcmUuaCBub3csDQphcyBpdCB3aWxsIGJlIG5lZWRl
ZCBpbiB4ZW5zdG9yZWRfZG9tYWluLmMsIHRvby4NCg0KDQpKdWVyZ2VuDQo=
--------------MJ3IJudYGviZTPS8aIIpoGR9
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------MJ3IJudYGviZTPS8aIIpoGR9--

--------------EilRpdFovX1MaHaT78vpus7w--

--------------UjMhXtf085jp4tlbK4Mv2Npr
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPHoF8FAwAAAAAACgkQsN6d1ii/Ey+A
iQgAkzA0zIneKhjrgRMR5+o18odVYlEUTRltA4KgWwKsL5lea57yDxCzZwXJdUHD0ev0956a0Jxw
z2TMgLMZrhbnUMegqlrfFqdtHrC4rolf0cI9i/DK7bsn9Bb1JIZDHj3gGOnUwhhrqmRgv32zeLfX
5K3G1qMdHnKCfkJgp9YVr2Hp2/JYcEABfpa+FgVwJ4VM4YsLaucik5uuCZb6roSo08duu5Zhn7Lb
G+2iT7354J7c49JdkQjJUjOqT0a6Z1yXKFbjulIpmtHs9/4IwK6Oi5zt3iSnkwVFRxGD0/SIjyBg
Ke1I8gDx/YCFIVAGaw+UHlwT2RLoNaMv97h+c13LfA==
=QGvO
-----END PGP SIGNATURE-----

--------------UjMhXtf085jp4tlbK4Mv2Npr--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 07:32:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 07:32:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480184.744437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI2vz-0002m6-NU; Wed, 18 Jan 2023 07:32:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480184.744437; Wed, 18 Jan 2023 07:32:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI2vz-0002lz-Jf; Wed, 18 Jan 2023 07:32:35 +0000
Received: by outflank-mailman (input) for mailman id 480184;
 Wed, 18 Jan 2023 07:32:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI2vy-0002hn-3w
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 07:32:34 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2071.outbound.protection.outlook.com [40.107.20.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 459a82ac-9702-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 08:32:33 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7725.eurprd04.prod.outlook.com (2603:10a6:102:f1::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Wed, 18 Jan
 2023 07:32:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 07:32:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 459a82ac-9702-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hzg6jfLzVpCh1YGUwBeUPuiujQEQFeGUcvJMSqpoz/9xrxWze5TzeHIwyh3SjkmyX6F34IAs4LBLlbCjKLZPGXt943rn+CZbBHRwfnSOrG2ynlJdSv2PFR/aqNU8kLopSJttPu8yOE3rRE96AOzv5T2nRqgq8Qa5wv5leY3A7PZhNfRmwvMEfe5Qi6+DSaaw2+Q8g7nBkL39QAagx5vbhE6/z3k573y9TsMig6QHC0XXkFx+9rqZoKs+h7PT8VeAQunBs8KQYItlNieEFicBe13a8+GVYsn+J2xStnxOTN90lql1532WzM9y29zpxIPhqyp8/KXHU2UAiGBp3JumEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qBeENcj6uVSUXec8n/4V0RMBHbX8hAq+T5n5jGWYUpA=;
 b=f6e0zOaUKYltDFIR+6kOQYhRjjqGcAEGHXn0kgjnCTxbw3tlNncRYgIqvpWAm5Mq74pVqHTayP9ye9+um7PIhpThsHAmu/OQJ2YT43aNTQF3+p0nPK3VZW7hXCcNSr21n86s5RiClnmziqRTFOJswbn5JGJJTArtB7njnOu14Imog36z5u26AOvfYcytSqwBmbtI7aT/LpqMBfzjbO9dhPBwjEfsRsEr1T4vULP4GxZGSaiHLdWvpQ99lKLZ6TxQZN2aqfJZrMrvcrrmInAo5K0cO10yDf2DBhrZSLCPia+JE6xZFoeXSHkugSf6GbkxuqfeEqAQxrIS/72X9n/r8Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qBeENcj6uVSUXec8n/4V0RMBHbX8hAq+T5n5jGWYUpA=;
 b=aRYJa6uH4x0RPZ7oCr9XVagdgxHSbEVrUHHfurMPjQ7e2zrxyvZVVjcx2BLcIYhEvMMUrZ34Hfcfuw5D8m0Awu83ti7aen4R6GxQM4H8FQTKFIzmAgyzUehKnZQPYynctcfUfcpXf8/o0R4DJeohK9E1PRABwclSl8OqfwHACypLNUPTUJ6OxMhcXa89KZqpgjf1uvU58Xnqb+UM04EbqxUXooq0c6HV/vP8y87Bx3Q9uCmKedFT2q4cB2wMnUj6yEQOK+CynFJtvRU4NtzgyJKNlsErQQTuwyZuU4dK5sB1mOt5T2RGYu2i/1yorSBd4g9sDoLKiR0nUx56FG5lnw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <99b628f8-f383-ec28-2d5e-f7d899f9dd85@suse.com>
Date: Wed, 18 Jan 2023 08:32:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 02/10] x86: split populating of struct vcpu_time_info into
 a separate function
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <3ff84ed1-8143-15c2-2b4a-3ae8ef23677c@suse.com>
 <18016404-c3e3-8b99-569d-b6f786635a2c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <18016404-c3e3-8b99-569d-b6f786635a2c@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FRYP281CA0003.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::13)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7725:EE_
X-MS-Office365-Filtering-Correlation-Id: 14a10794-a2fe-4f3a-908b-08daf9262841
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5o5UVvN1JKSfyG1FlE0VE5hBSiOoELvedMfpVM7ES0jhktAyIiao/VxKwd9p3C/DEO42xt4m+/5KAu3SfhhthYslFMCwzMWQD+8KbvlidAphYm75367qCHNuhLct95zbD1M/+RqRUi76DuS6obxMntOz+F64AVQRFASyWS3j2fPGq0AYf7qTtZtLGGQOxu5A0cm37QNdsFpOyH1BUK3y6mbPeyZmDpupuk7cwXMVPwh9PToStJ1LLVRsOKRM2iDTnWjLYB8b5Mmyu0GNPeT0z6qOjTdgO0vOWGSEse69ZcOZzl/XN13uirtp++RvdtyrfEWTiKC2monOIvq3wuvDOQg3C/MKaUwpYot8sLOkJ5vZsoc+Ov9V4Ru2i3FHeJarkeNJ90Ay4eqVDrNFsD4+XwQAWPGtQYAWDoNsPyMMFUkAf6oZZFxRg7GZmGrKRj6S3Ojb/uxUKp/tRvfCa3TpyWr/CRzhqvMApaa7uhr8Tgga95xWjyTVV1P6ynlo8Lsp3BxDv5uxfjlCZUXhUNWppfqiowkgur5qUvXa4misJlTcOYS+p6O2NpXk8cLsxgQFsGztqIt8DxT0iqBDebG0HO8HF+twIJo6ZqXe76a2XppVJ17oJFVQBF8m4Bxy1LuN6IYc9ft6xo5XGPStxIlgl1dZSEyConYtX34WcgENirzVwH9/uS6EHeDJ22fYAuhO1L3EGJ2lmT9EqLXkQoZJvz8NVrP+6RKs6hxFYW6stFE=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(346002)(136003)(376002)(396003)(366004)(451199015)(6512007)(186003)(31686004)(6486002)(26005)(6506007)(478600001)(316002)(66476007)(54906003)(2616005)(66556008)(4326008)(6916009)(8676002)(66946007)(86362001)(38100700002)(53546011)(5660300002)(83380400001)(41300700001)(8936002)(31696002)(36756003)(2906002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RS9EQm9XNi9hZFBiWWhmK25hYTRXLzFackNLeUFkZmxIb2F3b1Yvam5ocjBz?=
 =?utf-8?B?WmFqMjR3NWd6RjFFUVl6ZDMwaDlIZFk1bXp2aTdDUXZIVTdlek1VNitTdTVD?=
 =?utf-8?B?ekRYbG4ybXp5amhMSm8wWGF3a2VJM1hob3BvTWNEb2tyalkzQjNMQ0JnaVRH?=
 =?utf-8?B?Ym9wa0lkeGdxNHVoT2k5UmVtSVg4VUJtdVp4Ymc0b1BWMFFEVmRYNTJXNUU1?=
 =?utf-8?B?M3RWQ2tnYnAwNS9KNnRIaDl6bGFKZHhzMGdleTdPc2o5UGRpQnVERVRYUVBJ?=
 =?utf-8?B?alF6dGV2NVlDL3ljSUxUaWtTVEVwTlIvejhHVUxOUlFsdFBDcW9JemgyTzZ5?=
 =?utf-8?B?TUVHQlhWcnE5NVVsdVRpNnFxbHFyRTVCUnBNWUk3OVU0SXJmTEY2VmtCUmQz?=
 =?utf-8?B?ZnhMcUJhUmwyNGJwQmNhS2w2Z1hEVXRBZzBsdVhOY2ZYOFNoYkQ5bnN0cHI0?=
 =?utf-8?B?M0Z0emVMWjZqTlp5U0NDRDE5UkcvaDFxbW1rVnF3czM0eVNpL0tLZGlOVmN2?=
 =?utf-8?B?UXA5TE1CSXJtaGliUHVYRkhSMkdFQ3ppb3JvOERLRHJkcjZrZ2luTmgzTWxm?=
 =?utf-8?B?aS9LbE5mY3F4dU8zcWJudTlDWlk4Qko4ZFYrd1hRcFA1ZUFPMFkvNDJ4NFFS?=
 =?utf-8?B?QldOa1JobHh0a3dsRUNXV1ZYbnF6UmtlbHI3WmttMCtxZ0hGQTNoTm5pU0cy?=
 =?utf-8?B?c2psNlhKQ1JQdytXK2lobHlUMWtRdzJXMDdrdG5FSGNNS0d0eHczRHNKVVBU?=
 =?utf-8?B?R0NCSXFQSFJCZ0hmNmpVZHNvdWdITVhZMmZGblJLazBkdFQ4bkVQR1hQMm5V?=
 =?utf-8?B?YjhKSlpmeUdxZVZQaTYzQkNxbDFNQXM0eGF6Ky82T1ZHbW5qN2EyTnh1K1FS?=
 =?utf-8?B?TFl1R2l6RmVFRU0vU2w5OVJUanVyTHNhRTRaTnBXaFR6NlRPSjdXWjFNVDc2?=
 =?utf-8?B?Z2NsMVZVbHNodnl5dm53SXdUUE55ZHhrY09OdFBGRmt3M2RKNVI5VW1JQVhP?=
 =?utf-8?B?NmN0R2dPOGY5MTB1NDVTSzdzVFg1ME9NOVh2NmtrOW4zbm9PZmFCTUpOVTJN?=
 =?utf-8?B?UmJENCt5dDM3UTdLSlJJZElnYUkwemJRdkkwWjFHdWNUU3Njdy9tWWhPaWlW?=
 =?utf-8?B?a1kvSjRZU1JkYmprbllJcjRyNGJoZnFibVp3OHpBUnNma0ZSam1VRXlaMEc1?=
 =?utf-8?B?MGIycUhyZUZraDl4ZXVsc0VENW92Ykk0Mmo0MDFSbUFxYkVtVWtqMUtDOXY1?=
 =?utf-8?B?MVUxYytWMlhzeEpjc3hqaVgzK3F5RFIrODh5Q1BLTDZSbzNQcnRGVHNqV3A4?=
 =?utf-8?B?MkpvUzIzM3VXVGY3QXFpa3ZXejF3Y1VxejhkYUtidm9PVE5LTk1OeFM0aEtO?=
 =?utf-8?B?aVBSSVRTeUY4NzhUVE5qc0lvdVdHUVFiUkV0aFVSMTB2QnUvcE9KV1RoMGxV?=
 =?utf-8?B?UmpFcWJLaGdZRmU4V0d2aUs0d1BqNndId1VRUHJKd0dVNWwwRXNpMVBDalFo?=
 =?utf-8?B?TEJEcGhkbHdpS1BpMzlGUUJ4UGlaOHVKT1draWNxSDZnT2FDUmU5MXdsQUgy?=
 =?utf-8?B?UmVXaTE4N3pLekFZTUlZWHg2bkJuaWhHbFZhYmRHVnlPS3hFU0FxYm8wamEr?=
 =?utf-8?B?bENqUVBDMDlQNXdFNjNJeEpkcXlDcVQzczM0aUJ2cDdpbDgycjZsQUJRYjJC?=
 =?utf-8?B?T3JSVjZOS3YwL1NvZUlyLzRSZUxTUWZjL1hhVC9TR1lPd2FzallkcmdWYm5t?=
 =?utf-8?B?bzE0eU9XYjhzM3FGQk5RanlkblZqcHVZQ1l2SDdvb0xUUnlSanJWU1U2VGJ0?=
 =?utf-8?B?c1F4ankxd2tLek5UNURJeVNaTFYyeitxZlZKWmI0cDF3MS9uaVBqenFLSm8y?=
 =?utf-8?B?OEROQ2d6RndhcDVieDZkbGFIL0NDS1Q2bWJVNzJ6elZwMzZqd09ZYXdFeS9S?=
 =?utf-8?B?bGZCRk1jNEhaUVQ1YnJqUEVLOWY5SkNpNlhxOXlobVJzamNGL2p5ZFlCOUFp?=
 =?utf-8?B?dENmeGRJeVlxUHduaEVuUEJzOTJPaE5YU1ZIWjJldWNCUk9KNkJyem1lcGdr?=
 =?utf-8?B?YTEyZXVHZjdoRWFmRjVLQ3JPaVZySHRGajN2M3VUQTJPSnZEbTI5VVlTYzVi?=
 =?utf-8?Q?IZQrn7HQq/c4qGQBb3DsWT7rH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 14a10794-a2fe-4f3a-908b-08daf9262841
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 07:32:30.7211
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3nlMkiXzA+7c+ERJ75HWSlDD08S5/mPsIamlLJQpkVdnJWIt/5B3xemlRQ0k8kSX8aczYCwyY+VMsP5VukeTYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7725

On 17.01.2023 21:19, Andrew Cooper wrote:
> On 19/10/2022 8:39 am, Jan Beulich wrote:
>> This is to facilitate subsequent re-use of this code.
>>
>> While doing so add const in a number of places, extending to
>> gtime_to_gtsc() and then for symmetry also its inverse function.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper@citrix.com>

Thanks.

>> ---
>> I was on the edge of also folding the various is_hvm_domain() into a
>> function scope boolean, but then wasn't really certain that this
>> wouldn't open up undue speculation opportunities.
> 
> I can't see anything interesting under here speculation-wise.
> Commentary inline.

My interpretation of those comments is that the suggested conversion
would be okay-ish (as in not making things worse), but since you didn't
provide an explicit answer I thought I'd better ask for confirmation
before possibly making a patch to that effect.

Jan

>> --- a/xen/arch/x86/include/asm/time.h
>> +++ b/xen/arch/x86/include/asm/time.h
>> @@ -52,8 +52,8 @@ uint64_t cf_check acpi_pm_tick_to_ns(uin
>>  uint64_t tsc_ticks2ns(uint64_t ticks);
>>  
>>  uint64_t pv_soft_rdtsc(const struct vcpu *v, const struct cpu_user_regs *regs);
>> -u64 gtime_to_gtsc(struct domain *d, u64 time);
>> -u64 gtsc_to_gtime(struct domain *d, u64 tsc);
>> +uint64_t gtime_to_gtsc(const struct domain *d, uint64_t time);
>> +uint64_t gtsc_to_gtime(const struct domain *d, uint64_t tsc);
>>  
>>  int tsc_set_info(struct domain *d, uint32_t tsc_mode, uint64_t elapsed_nsec,
>>                   uint32_t gtsc_khz, uint32_t incarnation);
>> --- a/xen/arch/x86/time.c
>> +++ b/xen/arch/x86/time.c
>> @@ -1373,18 +1373,14 @@ uint64_t tsc_ticks2ns(uint64_t ticks)
>>      return scale_delta(ticks, &t->tsc_scale);
>>  }
>>  
>> -static void __update_vcpu_system_time(struct vcpu *v, int force)
>> +static void collect_time_info(const struct vcpu *v,
>> +                              struct vcpu_time_info *u)
>>  {
>> -    const struct cpu_time *t;
>> -    struct vcpu_time_info *u, _u = {};
>> -    struct domain *d = v->domain;
>> +    const struct cpu_time *t = &this_cpu(cpu_time);
>> +    const struct domain *d = v->domain;
>>      s_time_t tsc_stamp;
>>  
>> -    if ( v->vcpu_info == NULL )
>> -        return;
>> -
>> -    t = &this_cpu(cpu_time);
>> -    u = &vcpu_info(v, time);
>> +    memset(u, 0, sizeof(*u));
>>  
>>      if ( d->arch.vtsc )
>>      {
>> @@ -1392,7 +1388,7 @@ static void __update_vcpu_system_time(st
>>  
>>          if ( is_hvm_domain(d) )
>>          {
>> -            struct pl_time *pl = v->domain->arch.hvm.pl_time;
>> +            const struct pl_time *pl = d->arch.hvm.pl_time;
> 
> A PV guest could in principle use...
> 
>>  
>>              stime += pl->stime_offset + v->arch.hvm.stime_offset;
> 
> ... this pl->stime_offset as the second dereference of whatever happens
> to sit under d->arch.hvm.pl_time in the pv union.
> 
> In the current build of Xen I have to hand, that's
> d->arch.pv.mapcache.{epoch,tlbflush_timestamp}, the combination of which
> doesn't seem like it can be steered into being a legal pointer into Xen.
> 
>>              if ( stime >= 0 )
>> @@ -1403,27 +1399,27 @@ static void __update_vcpu_system_time(st
>>          else
>>              tsc_stamp = gtime_to_gtsc(d, stime);
>>  
>> -        _u.tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
>> -        _u.tsc_shift         = d->arch.vtsc_to_ns.shift;
>> +        u->tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
>> +        u->tsc_shift         = d->arch.vtsc_to_ns.shift;
>>      }
>>      else
>>      {
>>          if ( is_hvm_domain(d) && hvm_tsc_scaling_supported )
> 
> On the other hand, this isn't safe.  There's no protection of the &&
> calculation, but...
> 
>>          {
>>              tsc_stamp            = hvm_scale_tsc(d, t->stamp.local_tsc);
> 
> ... this path is the only path subject to speculative type confusion,
> and all it does is read d->arch.hvm.tsc_scaling_ratio, so is
> appropriately protected in this instance.
> 
> Also, all an attacker could do is encode the scaling ratio alongside
> t->stamp.local_tsc (unpredictable) in the calculation for the duration
> of the speculative window, with no way I can see to dereference the result.
> 
> 
>> -            _u.tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
>> -            _u.tsc_shift         = d->arch.vtsc_to_ns.shift;
>> +            u->tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
>> +            u->tsc_shift         = d->arch.vtsc_to_ns.shift;
>>          }
>>          else
>>          {
>>              tsc_stamp            = t->stamp.local_tsc;
>> -            _u.tsc_to_system_mul = t->tsc_scale.mul_frac;
>> -            _u.tsc_shift         = t->tsc_scale.shift;
>> +            u->tsc_to_system_mul = t->tsc_scale.mul_frac;
>> +            u->tsc_shift         = t->tsc_scale.shift;
>>          }
>>      }
>>  
>> -    _u.tsc_timestamp = tsc_stamp;
>> -    _u.system_time   = t->stamp.local_stime;
>> +    u->tsc_timestamp = tsc_stamp;
>> +    u->system_time   = t->stamp.local_stime;
>>  
>>      /*
>>       * It's expected that domains cope with this bit changing on every
>> @@ -1431,10 +1427,21 @@ static void __update_vcpu_system_time(st
>>       * or if it further requires monotonicity checks with other vcpus.
>>       */
>>      if ( clocksource_is_tsc() )
>> -        _u.flags |= XEN_PVCLOCK_TSC_STABLE_BIT;
>> +        u->flags |= XEN_PVCLOCK_TSC_STABLE_BIT;
>>  
>>      if ( is_hvm_domain(d) )
>> -        _u.tsc_timestamp += v->arch.hvm.cache_tsc_offset;
>> +        u->tsc_timestamp += v->arch.hvm.cache_tsc_offset;
> 
> This path is subject to type confusion on v->arch.{pv,hvm}, with a PV
> guest able to encode the value of v->arch.pv.ctrlreg[5] into the
> timestamp.  But again, no way to dereference the result.
> 
> 
> I really don't think there's enough flexibility here for even a
> perfectly-timed attacker to abuse.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 07:38:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 07:38:26 +0000
Message-ID: <e5e38496-3dcd-3a42-6c2a-43ccb988caf3@suse.com>
Date: Wed, 18 Jan 2023 08:38:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 2/4] xen/riscv: introduce sbi call to putchar to
 console
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
 <7918f456-14ff-77b2-3cdb-1e879e030b39@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <7918f456-14ff-77b2-3cdb-1e879e030b39@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 18.01.2023 00:32, Andrew Cooper wrote:
> On 16/01/2023 2:39 pm, Oleksii Kurochko wrote:
>> +struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
>> +                        unsigned long arg0, unsigned long arg1,
>> +                        unsigned long arg2, unsigned long arg3,
>> +                        unsigned long arg4, unsigned long arg5)
>> +{
>> +    struct sbiret ret;
>> +
>> +    register unsigned long a0 asm ("a0") = arg0;
>> +    register unsigned long a1 asm ("a1") = arg1;
>> +    register unsigned long a2 asm ("a2") = arg2;
>> +    register unsigned long a3 asm ("a3") = arg3;
>> +    register unsigned long a4 asm ("a4") = arg4;
>> +    register unsigned long a5 asm ("a5") = arg5;
>> +    register unsigned long a6 asm ("a6") = fid;
>> +    register unsigned long a7 asm ("a7") = ext;
>> +
>> +    asm volatile ("ecall"
>> +              : "+r" (a0), "+r" (a1)
>> +              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
>> +              : "memory");
> 
> Indentation.  Each colon wants 4 more spaces in front of it.

Plus, if we're already talking of style, blanks are missing immediately inside
the outermost parentheses, requiring yet one more space of indentation on the
subsequent lines.
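
Taken together, my reading of the two comments would yield roughly the
following (style sketch only, untested):

```c
    asm volatile ( "ecall"
                   : "+r" (a0), "+r" (a1)
                   : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6),
                     "r" (a7)
                   : "memory" );
```

i.e. blanks inside the outermost parentheses, and the colons indented to
line up one column past the opening one.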

Jan



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 07:53:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 07:53:40 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175938-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 175938: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Versions-This:
    qemuu=625eb5e96dc96aa7fddef59a08edae215527f19c
X-Osstest-Versions-That:
    qemuu=1cf02b05b27c48775a25699e61b93b814b9ae042
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 07:53:23 +0000

flight 175938 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175938/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-libvirt-qcow2    <job status>                 broken
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-vhd       5 host-install(5)        broken REGR. vs. 175283
 test-armhf-armhf-xl-multivcpu  5 host-install(5)       broken REGR. vs. 175283
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)      broken REGR. vs. 175283
 test-armhf-armhf-xl-credit2   5 host-install(5)        broken REGR. vs. 175283
 test-armhf-armhf-xl-arndale   5 host-install(5)        broken REGR. vs. 175283
 test-armhf-armhf-libvirt-raw  5 host-install(5)        broken REGR. vs. 175283
 test-armhf-armhf-xl-credit1   5 host-install(5)        broken REGR. vs. 175283
 test-armhf-armhf-libvirt      5 host-install(5)        broken REGR. vs. 175283
 test-armhf-armhf-xl-cubietruck  5 host-install(5)      broken REGR. vs. 175283
 test-armhf-armhf-xl           5 host-install(5)        broken REGR. vs. 175283
 build-amd64                   6 xen-build                fail REGR. vs. 175283
 build-i386-xsm                6 xen-build                fail REGR. vs. 175283
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175283
 build-i386                    6 xen-build                fail REGR. vs. 175283

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      5 host-install(5)        broken REGR. vs. 175283

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                625eb5e96dc96aa7fddef59a08edae215527f19c
baseline version:
 qemuu                1cf02b05b27c48775a25699e61b93b814b9ae042

Last test of basis   175283  2022-12-15 15:42:37 Z   33 days
Testing same since   175938  2023-01-17 15:37:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     broken  
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      broken  
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-arndale broken
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)
broken-step test-armhf-armhf-xl-credit2 host-install(5)
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)
broken-step test-armhf-armhf-xl-credit1 host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)
broken-step test-armhf-armhf-libvirt host-install(5)
broken-step test-armhf-armhf-xl-cubietruck host-install(5)
broken-step test-armhf-armhf-xl host-install(5)

Not pushing.

------------------------------------------------------------
commit 625eb5e96dc96aa7fddef59a08edae215527f19c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Fri Jan 6 15:21:10 2023 +0100

    configure: Expand test which disable -Wmissing-braces
    
    With "clang 6.0.0-1ubuntu2" on Ubuntu Bionic, the test would build
    fine, but clang still suggested braces around the zero initializer in
    a few places where there is a subobject. Expand the test to include a
    sub-struct; this doesn't build on clang 6.0.0-1ubuntu2 and gives:
        config-temp/qemu-conf.c:7:8: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        } x = {0};
               ^
               {}
    
    These are the errors reported by clang on QEMU's code (v7.2.0):
    hw/pci-bridge/cxl_downstream.c:101:51: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        dvsec = (uint8_t *)&(CXLDVSECPortExtensions){ 0 };
    
    hw/pci-bridge/cxl_root_port.c:62:51: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        dvsec = (uint8_t *)&(CXLDVSECPortExtensions){ 0 };
    
    tests/qtest/virtio-net-test.c:322:34: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        QOSGraphTestOptions opts = { 0 };
    
    Reported-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Message-Id: <20230106142110.672-1-anthony.perard@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 08:06:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 08:06:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480209.744470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI3Sd-00082P-9n; Wed, 18 Jan 2023 08:06:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480209.744470; Wed, 18 Jan 2023 08:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI3Sd-00082I-70; Wed, 18 Jan 2023 08:06:19 +0000
Received: by outflank-mailman (input) for mailman id 480209;
 Wed, 18 Jan 2023 08:06:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI3Sb-00082A-O7
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 08:06:17 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2066.outbound.protection.outlook.com [40.107.20.66])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fad9fbaf-9706-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 09:06:15 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9431.eurprd04.prod.outlook.com (2603:10a6:20b:4d9::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 08:06:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 08:06:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fad9fbaf-9706-11ed-b8d0-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b6d6812a-fa3f-49d1-3e2f-4971f411fb16@suse.com>
Date: Wed, 18 Jan 2023 09:06:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
 <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0185.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9431:EE_
X-MS-Office365-Filtering-Correlation-Id: 20341a90-c295-4218-c006-08daf92ade1c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 20341a90-c295-4218-c006-08daf92ade1c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 08:06:13.8121
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 571/8cfh2pxZpeMJBTv1AxNpmr44Uxt2oxu5aQoErRpmEiYteVazh5I0wUZmtE9IAXefMo7ELc3/f4ee+eXlnA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9431

On 17.01.2023 18:21, Andrew Cooper wrote:
> On 16/01/2023 6:10 pm, Anthony PERARD wrote:
>> +        elif re.match(r'^[a-zA-Z_]', token):
> 
>[...]
> 
> All of this said, where is 0-9 in the token regex?  Have we just got
> extremely lucky with having no embedded digits in identifiers thus far?

That's checking for just the first character, which can't be a digit?
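
To make the point concrete (a sketch, not part of the patch; the token examples are made up): `re.match` anchors only at the beginning of the string, so the quoted pattern constrains just a token's first character, and embedded digits pass through untouched.

```python
import re

# The pattern from the patch hunk: it only examines the FIRST character
# of the token, so identifiers may contain digits anywhere after it.
ident_start = re.compile(r'^[a-zA-Z_]')

assert ident_start.match("evtchn_port_t")  # starts with a letter
assert ident_start.match("x2apic")         # embedded digit still matches
assert not ident_start.match("9lives")     # only a LEADING digit fails
```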

> P.S. I probably don't want to know why we have to special case evtchn
> port, argo port and domain handle.  I think it says more about this
> bodge of a parser than anything else...

Iirc something broke without it, but it's been too long, and despite
spending a reasonable amount of time trying to reconstruct the reason I
couldn't come up with anything. I didn't want to go as far as putting
time into actually trying out what (if anything) breaks with those
removed. What I'm puzzled about is that the argo and evtchn port types
are handled in different places.

For the domain handle iirc the exception was attributed to it being
the only typedef of an array which is embedded in other structures.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 08:36:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 08:36:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480215.744481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI3vI-0002wx-Jd; Wed, 18 Jan 2023 08:35:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480215.744481; Wed, 18 Jan 2023 08:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI3vI-0002wq-Gu; Wed, 18 Jan 2023 08:35:56 +0000
Received: by outflank-mailman (input) for mailman id 480215;
 Wed, 18 Jan 2023 08:35:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o08w=5P=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pI3vH-0002wk-4D
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 08:35:55 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2070.outbound.protection.outlook.com [40.107.13.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1ec96477-970b-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 09:35:53 +0100 (CET)
Received: from FR0P281CA0125.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:97::20)
 by DB4PR08MB7935.eurprd08.prod.outlook.com (2603:10a6:10:379::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Wed, 18 Jan
 2023 08:35:50 +0000
Received: from VI1EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:97:cafe::7e) by FR0P281CA0125.outlook.office365.com
 (2603:10a6:d10:97::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Wed, 18 Jan 2023 08:35:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT063.mail.protection.outlook.com (100.127.144.155) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Wed, 18 Jan 2023 08:35:49 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Wed, 18 Jan 2023 08:35:49 +0000
Received: from 6c542e8d9372.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 72C7BBEA-539C-4295-9C23-FF04089ED104.1; 
 Wed, 18 Jan 2023 08:35:37 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6c542e8d9372.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 08:35:37 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS4PR08MB7757.eurprd08.prod.outlook.com (2603:10a6:20b:514::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Wed, 18 Jan
 2023 08:35:35 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 08:35:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ec96477-970b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tdkX32C572az6v5n2zyIeVkaRxE7Jyba3HiN2ee3VdM=;
 b=VFXDX0nSQ4AKn4Hr863XJmGUOcrTM9YMi94bvZjGUFYBpe/19IQOihncxE6zyq1Ge6b0mnVFWV3yXBkTtyvEMumgCWuVZzRFdGbDw/r3eD4PoUw5CZA1MbgF+HsT38f5J5KWhqyiYIQd5xWvYsjT01d0njSgLx06/Nm4F0RqhpE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7cccb7f00d29e920
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Michal Orzel <Michal.Orzel@amd.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>
Subject: Re: [PATCH 2/2] xen/cppcheck: add parameter to skip given MISRA rules
Thread-Topic: [PATCH 2/2] xen/cppcheck: add parameter to skip given MISRA
 rules
Thread-Index: AQHZIbuCEhHxPW648U6oapqp0s8gTq6j7CYA
Date: Wed, 18 Jan 2023 08:35:35 +0000
Message-ID: <4818F679-FE61-46D5-B5E1-4BCA37C92B3B@arm.com>
References: <20230106104108.14740-1-luca.fancellu@arm.com>
 <20230106104108.14740-3-luca.fancellu@arm.com>
In-Reply-To: <20230106104108.14740-3-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS4PR08MB7757:EE_|VI1EUR03FT063:EE_|DB4PR08MB7935:EE_
X-MS-Office365-Filtering-Correlation-Id: 82475635-8605-497c-2875-08daf92f00ac
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <54DD31D93BFB0F40A6586E9C22A9539F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7757
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	50ba31ba-122c-4ccf-bdb2-08daf92ef80f
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 08:35:49.5314
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 82475635-8605-497c-2875-08daf92f00ac
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB7935

> On 6 Jan 2023, at 10:41, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Add parameter to skip the passed MISRA rules during the cppcheck
> analysis, the rules are specified as a list of comma separated
> rules with the MISRA number notation (e.g. 1.1,1.3,...).
> 
> Modify convert_misra_doc.py script to take an extra parameter
> giving a list of MISRA rule to be skipped, comma separated.
> While there, fix some typos in the help and print functions.
> 
> Modify settings.py and cppcheck_analysis.py to have a new
> parameter (--cppcheck-skip-rules) used to specify a list of
> MISRA rule to be skipped during the cppcheck analysis.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---

Gentle ping on this one, I've done the modifications for the suggestions
received on the first patch, I'm going to respin the series so I would like
to see if I need to change something also on this one.


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 08:40:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 08:40:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480222.744492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI3zw-0004MN-6N; Wed, 18 Jan 2023 08:40:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480222.744492; Wed, 18 Jan 2023 08:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI3zw-0004MG-2X; Wed, 18 Jan 2023 08:40:44 +0000
Received: by outflank-mailman (input) for mailman id 480222;
 Wed, 18 Jan 2023 08:40:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI3zu-0004M9-HW
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 08:40:42 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2057.outbound.protection.outlook.com [40.107.20.57])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ca407a5f-970b-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 09:40:41 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8813.eurprd04.prod.outlook.com (2603:10a6:102:20c::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 08:40:39 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 08:40:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca407a5f-970b-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K8c0elCAkKbR73BwQLzPvVx0mji58Wi54YGZM2IpAz3FEuVBPEgD2DP54khKzQay3P8I82hPRhxOKbGtfqFExXiWf6jm3S9nF58BHq+V99NWtJFktuQyUfNyZ29EHJfWKAqCSEzwLgqnnW9dHB856YDyWgLXt7NKWGk35OH5RC1zl84Q8As7b4Go86uuDOq3EajPHr6/Ub0yjnTjN5Aip2SiF+bHmQigxuBRGZhO9fs6MCFBsI/i661GfnCx71txnMRP0n7uJ19clfjzNX5KUAh01kaIUIVZjbAA6fI3xQp34RsY1vqAYICouZbHfIF/VsDpBu9CwxiZI1VGJuQ0cg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eDQDOElpGTuBjaqUWEx0gv26w9qjgdkrpxT3ixcswXA=;
 b=E8Ie9nPvWQK5kB947vITDzXFgM5NPkdcyUvFLfpxQr4m8Dm2+FeBVeX12vLVHDwjLynNbCwwSjOtrmreOJIvqyvfe7YXn0I0ujXfiA5V+X7lTOD/ytloMCz6KTL1Im0boxVM/afMp7o23WQymzDh4qzq/Jdx2Zeaq7PgC4Rh5yD/ZChpWIe8HvFsKAlwj0G0fQRkdjRhPwL2Jd0pnJddJhr+pnpd1uUmiQZbV9mWUTVXprBelSvQCp9uK9tGPcXOX8GRWVEUNsRiTPM3WbOSBzbgV85yqVvBGzeEwIBeFOQrWIQyOZXwRuXPM00/hfRjMhYw1XjxUHo4yHjsUklDhg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eDQDOElpGTuBjaqUWEx0gv26w9qjgdkrpxT3ixcswXA=;
 b=RweZQq6pel5v0rtrG6w8Sd/r/DCbcKyIRwVcLi6YfdMseVLH/2AnD7OLSAAWYOau9zof9AClUCaQc7Xkp1ipRhEgOdo3SvOFxpU4uRyMWYhxx4qZniSN8Rv38ZbJO6gSHKDepe8sq00b2mR70m8AoUr+ZxRSPYNoA0htLzl0ydYb34ddOKVLctPuwTGztiQDPtukC83mgo5P7XIiu8PmFiKscwyCXJS/vdwBFjq5WtBWfamXbTbjt+YzRiOfQoDA8AH04Y3bHgTTVGi2qVu0bGceV9jE58M7OFBvhE5t/B5avD1UAxdioUxHEDNiFo96ZHhPyAQ1LMmxEXq6gRfpHQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <926307d3-a354-be87-3885-90681dc5ae24@suse.com>
Date: Wed, 18 Jan 2023 09:40:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for
 address/size
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>, julien@xen.org,
 Wei Xu <xuwei5@hisilicon.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 xen-devel@lists.xenproject.org
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-6-ayan.kumar.halder@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230117174358.15344-6-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0073.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8813:EE_
X-MS-Office365-Filtering-Correlation-Id: 7e6c5246-7be7-4f24-73f2-08daf92fad21
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e6c5246-7be7-4f24-73f2-08daf92fad21
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 08:40:39.1495
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7JiF7FK0YDU9ooyuBoUrycOnDB3i294rEXxJuYWOL4US6MqaQlCF6qyvvijbomdu7Be1eiic2C46w9/M23VUYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8813

On 17.01.2023 18:43, Ayan Kumar Halder wrote:
> One should now be able to use 'paddr_t' to represent address and size.
> Consequently, one should use 'PRIpaddr' as a format specifier for paddr_t.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from -
> 
> v1 - 1. Rebased the patch.
> 
>  xen/arch/arm/domain_build.c        |  9 +++++----
>  xen/arch/arm/gic-v3.c              |  2 +-
>  xen/arch/arm/platforms/exynos5.c   | 26 +++++++++++++-------------
>  xen/drivers/char/exynos4210-uart.c |  2 +-
>  xen/drivers/char/ns16550.c         |  8 ++++----

Please make sure you Cc all maintainers.

> @@ -1166,7 +1166,7 @@ static const struct ns16550_config __initconst uart_config[] =
>  static int __init
>  pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>  {
> -    u64 orig_base = uart->io_base;
> +    paddr_t orig_base = uart->io_base;
>      unsigned int b, d, f, nextf, i;
>  
>      /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on bus 0. */
> @@ -1259,7 +1259,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>                      else
>                          size = len & PCI_BASE_ADDRESS_MEM_MASK;
>  
> -                    uart->io_base = ((u64)bar_64 << 32) |
> +                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
>                                      (bar & PCI_BASE_ADDRESS_MEM_MASK);
>                  }

This looks wrong to me: You shouldn't blindly truncate to 32 bits. You need
to refuse acting on 64-bit BARs with the upper address bits non-zero.
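The check being asked for could be sketched like this (standalone C; the `bar_to_paddr()` helper and the `paddr_t` typedef here are illustrative, not the actual patch code, which would do this inline in `pci_uart_config()`):

```c
#include <stdint.h>

typedef uint32_t paddr_t; /* assumption: a CONFIG_ARM_PA_32-style 32-bit paddr_t */

/*
 * Hypothetical helper: combine the two halves of a 64-bit BAR and refuse
 * values that do not fit in paddr_t, instead of silently truncating.
 * Returns 0 on success, -1 if upper address bits would be lost.
 */
static int bar_to_paddr(uint32_t bar_hi, uint32_t bar_lo, paddr_t *out)
{
    uint64_t addr = ((uint64_t)bar_hi << 32) | bar_lo;

    if ( addr != (uint64_t)(paddr_t)addr )
        return -1; /* would truncate: don't act on this BAR */

    *out = (paddr_t)addr;
    return 0;
}
```

With a 64-bit paddr_t the round-trip cast is lossless and the check compiles away.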

Since you're already correcting logic even in code not used on Arm (which I
appreciate), there's actually also related command line handling which needs
adjustment: the use of simple_strtoul() to obtain ->io_base is bogus and needs
to become simple_strtoull() (perhaps in a separate prereq patch), and in the
32-bit-paddr case you'd again need to check for truncation (in the patch here).

While doing the review I've noticed this

    uart->io_size = spcr->serial_port.bit_width;

in ns16550_acpi_uart_init(). This was introduced in 17b516196c55 ("ns16550:
add ACPI support for ARM only"), so Wei, Julien: doesn't the right-hand value
need DIV_ROUND_UP(, 8) to convert from a bit count to a byte count?
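The suggested conversion would look like this (a self-contained sketch; DIV_ROUND_UP is reproduced here rather than taken from a Xen header, and the helper name is made up):

```c
/* Round-up integer division, as commonly defined in Xen/Linux headers. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/*
 * SPCR reports the access width of the UART registers in bits, while
 * uart->io_size is a byte count, so the value needs rounding up to
 * whole bytes, i.e. instead of
 *
 *     uart->io_size = spcr->serial_port.bit_width;
 *
 * one would write
 *
 *     uart->io_size = DIV_ROUND_UP(spcr->serial_port.bit_width, 8);
 */
static unsigned int spcr_width_to_bytes(unsigned int bit_width)
{
    return DIV_ROUND_UP(bit_width, 8);
}
```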

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 08:51:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 08:51:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480229.744502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI49q-0005u8-7b; Wed, 18 Jan 2023 08:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480229.744502; Wed, 18 Jan 2023 08:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI49q-0005u1-50; Wed, 18 Jan 2023 08:50:58 +0000
Received: by outflank-mailman (input) for mailman id 480229;
 Wed, 18 Jan 2023 08:50:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI49p-0005tv-6D
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 08:50:57 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2080.outbound.protection.outlook.com [40.107.22.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38aec754-970d-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 09:50:56 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7291.eurprd04.prod.outlook.com (2603:10a6:102:8c::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 08:50:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 08:50:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38aec754-970d-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ni5F42P38v1bF85DzMiPZOrUlWu/vqAbeO/q3SKWpNV7HQWHSwU/M4wdFq3WFGkk7Nr39e9OA2F7a8p3tN2D1I3Yze1uLZjwa93q2DOBZuixGFliDDxvxAUAtoYExJiomd7o3Gg5zqS3bl2WP1Pbf6yhOCygQSsz6y8s7aRa+/AfRue9QgvIm4n55CY+skLV3M3J8cPJYG+Ox8NSHXZdbI8aH1cjg7SlclqrLqZH5IvNe6E1m/hCwOg/eQLKbrOH09PSkgTINCRbUEhmBd+4pK3MQcW2KMhc1lREW40iWfysC3irRlsIZkir+eUFo8S0p4fmXvCvfaMEUj86gqy8sg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YMP6Ax4MV+910RQf3bz23z9fo9ecWxfLuUkM/UYhYZ0=;
 b=h5DKwP4GQumEmHWa2z7W6aim23xTxgcOQtIGW2YzYby+t07XM8U3vaqEjltWNOKQ3IHWjMbka7zt8v0l0FLPU3ezZNxjmoX8rwiYFGlnpgCbo3eiMyirngsee7J/SPzHq2BB6Wapd/dBad1PyyYo/Fznswoq+qyhfgQmeHxdGlHiJQcadvGjgde6OSbiRn7PNCX8KCxNKC6Y0wZ1eofGbaRGlzZIXAeSAnGSgnjwVnALYU6KSgmktJ1rMXTSD13He1vDKhYxj8GDkDxDSqJH8xP48sFMvopaZh5ls3XjSFUtiBOae1awxGLQwfceM2lKOORyKgQItdhULFVbcIb/Wg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YMP6Ax4MV+910RQf3bz23z9fo9ecWxfLuUkM/UYhYZ0=;
 b=l7Mo+GFLuphDlpcVn10zuBL5B356xLBF80uVnRF7pm9A5RAlIGj7+dnYOw3DFuZ+ummJ/2Wa4eZPHxaIzCEa6H2y7WalmSNjMXs4oQC1V08kKFrOnAez6CkaaX6RlFTMCKC4aJqXZ179pvX79mvHzbGtXHRDrs7G8CdOwBd8uoW4iF78myHk02OW9U42hShpFubc9VmrXCCyVNjCI7LQrlollC6mwk8YUgDnxwaO11A679oAX6CqtEnMIkXsBaI8fWNmqkaeB1PiWW/YMPPXbvxMluCiJcKYI2xxKsQsLb0DRJ4lwwiu5Jk813VsMmww2C8mZPpFwM6CbVsyDZ0sIw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <49228d15-3f0d-eb89-6107-40ae9f0b9b92@suse.com>
Date: Wed, 18 Jan 2023 09:50:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v2 09/11] xen/arm: Introduce ARM_PA_32 to support 32 bit
 physical address
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 xen-devel@lists.xenproject.org
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-10-ayan.kumar.halder@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230117174358.15344-10-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0149.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7291:EE_
X-MS-Office365-Filtering-Correlation-Id: 7bb11066-e356-4f4f-1712-08daf9311b95
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7bb11066-e356-4f4f-1712-08daf9311b95
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 08:50:54.0628
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5DSTAlxQsNe+HofZC3n9ML6rFx7jIC13THC3ey88KaUNnHgyFQ8rBQSTVWrRq9jVE8rOSvZLrnxqI/x5t+mH0Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7291

On 17.01.2023 18:43, Ayan Kumar Halder wrote:
> --- a/xen/arch/arm/include/asm/types.h
> +++ b/xen/arch/arm/include/asm/types.h
> @@ -37,9 +37,16 @@ typedef signed long long s64;
>  typedef unsigned long long u64;
>  typedef u32 vaddr_t;
>  #define PRIvaddr PRIx32
> +#if defined(CONFIG_ARM_PA_32)
> +typedef u32 paddr_t;
> +#define INVALID_PADDR (~0UL)
> +#define PADDR_SHIFT BITS_PER_LONG
> +#define PRIpaddr PRIx32
> +#else

With our plan to consolidate basic type definitions into xen/types.h,
the use of ARM_PA_32 is problematic: Preferably we'd have an
arch-independent Kconfig setting, much like Linux's PHYS_ADDR_T_64BIT
(I don't think we should re-use that name directly, as we have no
phys_addr_t type), which each arch selects (or not) as appropriate.
Architectures already selecting 64BIT perhaps wouldn't need to do so
explicitly; instead 64BIT could select the new setting (or the new
setting could default to Y when 64BIT=y).
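Such a common setting might look roughly like the sketch below (a
sketch only; PADDR_64BIT is an invented name for illustration, not an
existing Xen or Linux symbol):

```
# Hypothetical arch-independent setting, e.g. in xen/Kconfig.
config PADDR_64BIT
	bool

# An arch already guaranteeing 64-bit physical addresses could then
# avoid selecting it explicitly by having 64BIT pull it in:
config 64BIT
	bool
	select PADDR_64BIT
```

Arm32 with LPAE (40-bit addresses) would select PADDR_64BIT directly,
while a hypothetical ARM_PA_32 configuration would leave it unset.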

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 08:59:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 08:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480236.744514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4I0-0006aW-2g; Wed, 18 Jan 2023 08:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480236.744514; Wed, 18 Jan 2023 08:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4Hz-0006aP-Vy; Wed, 18 Jan 2023 08:59:23 +0000
Received: by outflank-mailman (input) for mailman id 480236;
 Wed, 18 Jan 2023 08:59:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI4Hy-0006aJ-Nw
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 08:59:22 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2045.outbound.protection.outlook.com [40.107.20.45])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65e9aee7-970e-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 09:59:21 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB8064.eurprd04.prod.outlook.com (2603:10a6:102:cf::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 08:59:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 08:59:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65e9aee7-970e-11ed-91b6-6bf2151ebd3b
Message-ID: <fd8dbdfd-8730-8b0d-5b9e-14a13e679bf2@suse.com>
Date: Wed, 18 Jan 2023 09:59:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH RFC 03/10] domain: GADDR based shared guest area
 registration alternative - teardown
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Juergen Gross <jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <214c9ec9-b948-1ca6-24d6-4e7f8852ac45@suse.com>
 <8d002c7f-a532-e8e5-0a71-801af4712d47@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <8d002c7f-a532-e8e5-0a71-801af4712d47@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0055.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 17.01.2023 22:17, Andrew Cooper wrote:
> On 19/10/2022 8:40 am, Jan Beulich wrote:
>> In preparation of the introduction of new vCPU operations allowing to
>> register the respective areas (one of the two is x86-specific) by
>> guest-physical address, add the necessary domain cleanup hooks.
> 
> What are the two areas you're discussing here?
> 
> I assume you mean VCPUOP_register_vcpu_time_memory_area to be the
> x86-specific op, but ARM permits both  VCPUOP_register_vcpu_info and
> VCPUOP_register_runstate_memory_area.
> 
> So isn't it more 2+1 rather than 1+1?

Not in my view: The vcpu_info registration is already physical-address
based, and no new vCPU operation is introduced there either.

>> ---
>> RFC: Zapping the areas in pv_shim_shutdown() may not be strictly
>>      necessary: Aiui unmap_vcpu_info() is called only because the vCPU
>>      info area cannot be re-registered. Beyond that I guess the
>>      assumption is that the areas would only be re-registered as they
>>      were before. If that's not the case I wonder whether the guest
>>      handles for both areas shouldn't also be zapped.
> 
> At this point in pv_shim_shutdown(), have already come out of suspend
> (successfully) and are preparing to poke the PV guest out of suspend too.
> 
> The PV guest needs to not have its subsequent VCPUOP_register_vcpu_info
> call fail, but beyond that I can entirely believe that it was coded to
> "Linux doesn't crash" rather than "what's everything we ought to reset
> here" - recall that we weren't exactly flush with time when trying to
> push Shim out of the door.
> 
> Whatever does get reregstered will be reregistered at the same
> position.  No guest at all is going to have details like that dynamic
> across migrate.

I read this as "keep code as is, drop RFC remark", but that's not
necessarily the only way to interpret your reply.

> As a tangential observation, i see the periodic timer gets rearmed. 
> This is still one of the more insane default properties of a PV guest;
> Linux intentionally clobbers it on boot, but I can equivalent logic to
> re-clobber after resume.

I guess you meant s/can/can't spot/, in which case let's Cc Linux
folks for awareness.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:18:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:18:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480244.744525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4aB-0000Yj-K4; Wed, 18 Jan 2023 09:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480244.744525; Wed, 18 Jan 2023 09:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4aB-0000Yc-Ga; Wed, 18 Jan 2023 09:18:11 +0000
Received: by outflank-mailman (input) for mailman id 480244;
 Wed, 18 Jan 2023 09:18:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pI4a9-0000YW-Nw
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:18:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4a9-0003t8-D5; Wed, 18 Jan 2023 09:18:09 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4a9-0003uo-3I; Wed, 18 Jan 2023 09:18:09 +0000
Message-ID: <4c35f1bc-b065-dd72-5cdd-5187e4474410@xen.org>
Date: Wed, 18 Jan 2023 09:18:06 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 09/11] xen/arm: Introduce ARM_PA_32 to support 32 bit
 physical address
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-10-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117174358.15344-10-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 17/01/2023 17:43, Ayan Kumar Halder wrote:
> We have introduced a new config option to support 32 bit physical address.
> By default, it is disabled.
> ARM_PA_32 cannot be enabled on ARM_64 as the memory management unit works
> on 48bit physical addresses.

I don't understand the "cannot" here. It is possible to have 64-bit HW
that supports only 32-bit physical addresses.

After your series, I also don't see any restriction in Xen that would
prevent enabling ARM_PA_32.

Whether we want to do it is a different discussion. I don't have any 
strong opinion. But the wording should be clarified.

> On ARM_32, it can be used on systems where large page address extension is
> not supported.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from -
> 
> v1 - 1. No changes.
> 
>   xen/arch/arm/Kconfig                 | 9 +++++++++
>   xen/arch/arm/include/asm/page-bits.h | 2 ++
>   xen/arch/arm/include/asm/types.h     | 7 +++++++
>   3 files changed, 18 insertions(+)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c..aeb0f7252e 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -39,6 +39,15 @@ config ACPI
>   config ARM_EFI
>   	bool
>   
> +config ARM_PA_32
> +	bool "32 bit Physical Address"
> +	depends on ARM_32
> +	default n
> +	---help---
> +
> +	  Support 32 bit physical addresses.

The description is a bit misleading. If you select N, then you can
still boot on HW supporting only 32-bit physical addresses.

It is also not clear from the description why a user would want to
select it.

From an external interface PoV, I think it would be better if we let
the user decide how many physical address bits they want Xen to support.

In the Kconfig, this would translate to a "choice". For Arm64, there
would only be one option (48 bits), whereas for Arm32 there would be
two (32 and 40).


From an internal interface PoV, this could still translate to
selecting ARM_PA_32 (or whichever name we decide on) to indicate the
type of paddr_t.
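For illustration, such a "choice" might be sketched as follows (all
option names here are invented, not existing Xen symbols):

```
choice
	prompt "Maximum physical address bits"
	default ARM_PA_BITS_48 if ARM_64
	default ARM_PA_BITS_40

config ARM_PA_BITS_48
	bool "48 bits"
	depends on ARM_64

config ARM_PA_BITS_40
	bool "40 bits (LPAE)"
	depends on ARM_32

config ARM_PA_BITS_32
	bool "32 bits"
	depends on ARM_32
	select ARM_PA_32

endchoice
```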

> +	  If unsure, say N
> +
>   config GICV3
>   	bool "GICv3 driver"
>   	depends on !NEW_VGIC
> diff --git a/xen/arch/arm/include/asm/page-bits.h b/xen/arch/arm/include/asm/page-bits.h
> index 5d6477e599..8f4dcebcfd 100644
> --- a/xen/arch/arm/include/asm/page-bits.h
> +++ b/xen/arch/arm/include/asm/page-bits.h
> @@ -5,6 +5,8 @@
>   
>   #ifdef CONFIG_ARM_64
>   #define PADDR_BITS              48
> +#elif CONFIG_ARM_PA_32
> +#define PADDR_BITS              32
>   #else
>   #define PADDR_BITS              40
>   #endif
> diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
> index 083acbd151..f9595b9098 100644
> --- a/xen/arch/arm/include/asm/types.h
> +++ b/xen/arch/arm/include/asm/types.h
> @@ -37,9 +37,16 @@ typedef signed long long s64;
>   typedef unsigned long long u64;
>   typedef u32 vaddr_t;
>   #define PRIvaddr PRIx32
> +#if defined(CONFIG_ARM_PA_32)
> +typedef u32 paddr_t;
> +#define INVALID_PADDR (~0UL)
> +#define PADDR_SHIFT BITS_PER_LONG
> +#define PRIpaddr PRIx32
> +#else
>   typedef u64 paddr_t;
>   #define INVALID_PADDR (~0ULL)
>   #define PRIpaddr "016llx"
> +#endif
>   typedef u32 register_t;
>   #define PRIregister "08x"
>   #elif defined (CONFIG_ARM_64)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:21:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480250.744535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4dV-0001wg-2m; Wed, 18 Jan 2023 09:21:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480250.744535; Wed, 18 Jan 2023 09:21:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4dU-0001wZ-W5; Wed, 18 Jan 2023 09:21:36 +0000
Received: by outflank-mailman (input) for mailman id 480250;
 Wed, 18 Jan 2023 09:21:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NCl5=5P=citrix.com=prvs=375d36a73=Per.Bilse@srs-se1.protection.inumbo.net>)
 id 1pI4dS-0001wO-NG
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:21:35 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7e5dc3a2-9711-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 10:21:32 +0100 (CET)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 04:21:29 -0500
Received: from CY4PR03MB3381.namprd03.prod.outlook.com (2603:10b6:910:51::39)
 by MW4PR03MB6490.namprd03.prod.outlook.com (2603:10b6:303:121::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 09:21:27 +0000
Received: from CY4PR03MB3381.namprd03.prod.outlook.com
 ([fe80::2984:da99:36a:dd33]) by CY4PR03MB3381.namprd03.prod.outlook.com
 ([fe80::2984:da99:36a:dd33%6]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 09:21:27 +0000
X-Inumbo-ID: 7e5dc3a2-9711-11ed-91b6-6bf2151ebd3b
From: Per Bilse <Per.Bilse@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH] Create a Kconfig option to set preferred reboot
 method
Thread-Topic: [XEN PATCH] Create a Kconfig option to set preferred reboot
 method
Thread-Index: AQHZKc71hMjfkRx58UW4fngDofFpla6ixK6AgAA7aACAAMWtgIAAIxkA
Date: Wed, 18 Jan 2023 09:21:27 +0000
Message-ID: <a808ca5a-d9e8-ac21-45ee-0a74a783cbf7@citrix.com>
References: <20230116172102.43469-1-per.bilse@citrix.com>
 <f7e7b6ea-5bc1-ba2b-5d21-eb431ecff53a@suse.com>
 <348dff00-5ae4-5dc2-64d4-d52409a22283@citrix.com>
 <fd408cf4-bb25-c8b0-b979-340668d4c5cd@suse.com>
In-Reply-To: <fd408cf4-bb25-c8b0-b979-340668d4c5cd@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <B53550E0FDD98D49B32B5CD2D4F5A369@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: CY4PR03MB3381.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2783026a-ebfb-433e-a1b0-08daf9356065
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 09:21:27.1733
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 08itWtZY3KMEFs4XkQYw5AkcT6onNAzFsI57gxtQLWlBfHX88tqyfwXEUuct9NpvHKThvRfEzNYkqXB/qn4AQw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6490

On 18/01/2023 07:15, Jan Beulich wrote:
> On 17.01.2023 20:28, Per Bilse wrote:
>> On 17/01/2023 15:55, Jan Beulich wrote:
>>> On 16.01.2023 18:21, Per Bilse wrote:
>>>> +	config REBOOT_METHOD_XEN
>>>> +		bool "xen"
>>>> +		help
>>>> +			Use Xen SCHEDOP hypercall (if running under Xen as a guest).
>>>
>>> This wants to depend on XEN_GUEST, doesn't it?
>>
>> Yes, depending on context.  In providing a compiled-in equivalent
>> of the command-line parameter, it should arguably provide and accept
>> the same set of options, but I'll change it.
> 
> If consistency between the two cases is the goal, then why not adjust
> command line handling (in a separate patch) to "not know" about "x"
> when !XEN_GUEST?

Because that would be a different ticket, and we have enough
tickets. :-)  But no worries, your suggestions make perfect
sense in a widened context, I just went with a minimalist
interpretation in order to keep changes minimal.

Best,

   -- Per
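[For reference, Jan's suggestion amounts to adding a dependency to the quoted REBOOT_METHOD_XEN hunk, along these lines. This is a sketch only; the symbol names are taken from the patch under discussion, not from a merged tree:]

```kconfig
config REBOOT_METHOD_XEN
	bool "xen"
	depends on XEN_GUEST
	help
		Use Xen SCHEDOP hypercall (if running under Xen as a guest).
```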


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:25:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:25:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480258.744547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4hA-0002cq-M5; Wed, 18 Jan 2023 09:25:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480258.744547; Wed, 18 Jan 2023 09:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4hA-0002cj-Iu; Wed, 18 Jan 2023 09:25:24 +0000
Received: by outflank-mailman (input) for mailman id 480258;
 Wed, 18 Jan 2023 09:25:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI4hA-0002cd-3j
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:25:24 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2048.outbound.protection.outlook.com [40.107.247.48])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07efa09a-9712-11ed-b8d0-410ff93cb8f0;
 Wed, 18 Jan 2023 10:25:21 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9471.eurprd04.prod.outlook.com (2603:10a6:102:2b2::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Wed, 18 Jan
 2023 09:25:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 09:25:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07efa09a-9712-11ed-b8d0-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NOczQuQRhOJIZxztP/7SsvB+38bAtNgKsAxF7GOEYE4Zx5w2rOApA2eMElQSldsQVcS4d46Lp+ubZ/MFvmVV5EM0IqoDmJgjUDvClKthMlSNTzJkHSv3Y9xyKoSePV5JgNIXJAb+QmT2DcV0L5ZEt1lMbiE45y5EdZVjp3MR/yS+K5JExJHQ3z6carrnl7DhkB96fF91aSQtIDkjyZumJOW3Hz/81tfKF1XS+CgOU5byVtN9N02/zxUIBArvUlgLI8Mt1eh/C7IC9VwqrESH9KnDpSXg+i//miZdAbQOHYuV0aQTGryL+5ZxBiQbT6G70R6Dqk/64I+KrUIExKaMvw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=q00CwzZYgjFUSgH0jbxOOZfZ1NsMwDeapivnXGKnI4M=;
 b=lWI/M48e3/yQQwamD3AhN3GZDRNSckP5/JSP9IcQT3GmJ0vrYpbjn46Of1hOqib+724h6zTop+7Ld0UKHCpTORZuUnW5mnOMcW74lDHcufxEmurCVbdK81JQxljNucMlyzKrzLXercXyyvJm+gYpPXMJnyj6stdWAx3MKUay3iRILsqFjnL1dJUgK0+fl0YV87Mk7ZGQ7Kngs3PgFNoolB7S1AnJrdpPoxEyDqvmdwAwsu8RRDWlDM32XCzSMjeQd8VkqhB+FdKJ6axqqUq9ZeQT13brzfKGsFP03iW0NByOj6Gr0vc7tLzK+5VT8Qxog86ioeKzzlYneCTMexNLuw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=q00CwzZYgjFUSgH0jbxOOZfZ1NsMwDeapivnXGKnI4M=;
 b=sp3at5B22I/KTUogobZEICxYDqA9ff8Lz8wk8cgj9eMwRXfnJSJOszndOa8+uCpwpyYnxjqeHLwKaUpQ+s7sna3pYulkvv/JbbOqZ5WYbHJS3T5Hd1NlUTJEhCI13g/OKuN4gm0QwbVGL2YvwQJU75xErNq3D8L9ymQ3A5WIewBnQABmRYkTss/Hm9o1xG1c5/zXfntrFG4YuKGQvVg8JLDF5teF9U+yyfwKbV73XqL6tGaYLoGDPGRmfiRcav4E0LP6VDMAb7jawBNkV8gAVRmbpSSHmfS1o3JA68YUeNQZ50mHnLJ/pnIJ4vwbMC2NqQZUrHzDQITFBY//OIP2zw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6ff1357d-8318-f105-f0e8-ffec40699841@suse.com>
Date: Wed, 18 Jan 2023 10:25:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH RFC 05/10] x86: update GADDR based secondary time area
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <7d814375-c190-ae0b-793b-a8563a23d318@suse.com>
 <d0186b8a-652c-60a2-dc27-50c54d9221f4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d0186b8a-652c-60a2-dc27-50c54d9221f4@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0018.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::28) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9471:EE_
X-MS-Office365-Filtering-Correlation-Id: 133d7c51-5473-44c9-4da7-08daf935eada
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 133d7c51-5473-44c9-4da7-08daf935eada
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 09:25:19.9166
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Z+WDfJJq6W8taWntVTkP77xJREqoX0/32A75jc/KFJHNBMQLOezyKI83pn7SZBWpvs8iO3wmDOmf2PASttA3XQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9471

On 17.01.2023 21:31, Andrew Cooper wrote:
> On 19/10/2022 8:41 am, Jan Beulich wrote:
>> --- a/xen/arch/x86/time.c
>> +++ b/xen/arch/x86/time.c
>> @@ -1462,12 +1462,34 @@ static void __update_vcpu_system_time(st
>>          v->arch.pv.pending_system_time = _u;
>>  }
>>  
>> +static void write_time_guest_area(struct vcpu_time_info *map,
>> +                                  const struct vcpu_time_info *src)
>> +{
>> +    /* 1. Update userspace version. */
>> +    write_atomic(&map->version, src->version);
> 
> version_update_begin()

Not really, no. src->version was already bumped, and the above is
the equivalent of

    /* 2. Update all other userspace fields. */
    __copy_to_guest(user_u, u, 1);

in pre-existing code (which also doesn't bump).

However, you point out a bug in patch 9: There I need to set the
version to ~0 between collect_time_info() and write_time_guest_area(),
to cover for the subsequent version_update_end(). (Using
version_update_begin() there wouldn't be correct, as
force_update_secondary_system_time() is used to first populate the
area, and we also shouldn't leave version at 2 once done, as that
might get in conflict with subsequent updates mirroring the version
from the "main" area.)

Jan
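[The version handling debated above follows the usual seqlock pattern: the producer makes the version odd before touching the payload and even again afterwards, and readers retry whenever they observe an odd or changed version. A generic sketch of that pattern follows; the names and struct layout are illustrative, not Xen's actual vcpu_time_info code, and the barriers present in real code are only noted in comments:]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for a guest-visible time area; odd version
 * means "update in flight", even means "consistent". */
struct time_area {
    uint32_t version;
    uint64_t tsc_timestamp;
    uint64_t system_time;
};

/* Producer: mark the area in-flux, write the payload, mark it
 * consistent again.  Real code inserts write barriers between steps. */
static void producer_update(struct time_area *a, uint64_t tsc, uint64_t st)
{
    a->version++;           /* begin: version becomes odd */
    a->tsc_timestamp = tsc;
    a->system_time = st;
    a->version++;           /* end: version becomes even */
}

/* Consumer: retry until an even, unchanged version brackets the reads. */
static void consumer_read(const struct time_area *a,
                          uint64_t *tsc, uint64_t *st)
{
    uint32_t v1, v2;

    do {
        v1 = a->version;
        *tsc = a->tsc_timestamp;
        *st = a->system_time;
        v2 = a->version;
    } while (v1 != v2 || (v1 & 1));
}
```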


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:27:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:27:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480264.744557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4is-0003An-0N; Wed, 18 Jan 2023 09:27:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480264.744557; Wed, 18 Jan 2023 09:27:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4ir-0003Ag-Tu; Wed, 18 Jan 2023 09:27:09 +0000
Received: by outflank-mailman (input) for mailman id 480264;
 Wed, 18 Jan 2023 09:27:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pI4iq-0003AZ-Os
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:27:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4ip-00043T-J7; Wed, 18 Jan 2023 09:27:07 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4ip-0004DA-CL; Wed, 18 Jan 2023 09:27:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ysZ+6AbGTIF80D+AAL/x+UMQurAtnggkUPKFAaJXd1U=; b=mWLuhS93HyHivuMyDvZRj02a9W
	Ru75Rpa+1Z+bIUXGyjWnkMMajYoHAXEbQ7AVTtNuiBB1/EdT57beSTzs0Zs0o5xfkPd5eYSy0zaPp
	WLk8rmv5hLlt7D+na/RPlMHCRMXN7PPUSdxUO/FnosYU9tpWFrMKRxuBAslnoIG8XLP8=;
Message-ID: <ba1812f1-0def-fdc5-d3d9-3b9ec0d1a805@xen.org>
Date: Wed, 18 Jan 2023 09:27:05 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 12/17] tools/xenstore: don't let hashtable_remove()
 return the removed value
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-13-jgross@suse.com>
 <19a0c39c-31b3-ce9c-6f03-466b6109b88f@xen.org>
 <fb160e56-8a6a-85fd-0140-ae25322479c7@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <fb160e56-8a6a-85fd-0140-ae25322479c7@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 18/01/2023 06:17, Juergen Gross wrote:
> On 17.01.23 23:03, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 17/01/2023 09:11, Juergen Gross wrote:
>>> The value returned by hashtable_remove() is not used anywhere in
>>> Xenstore, and returning it conflicts with a hashtable created
>>> specifying the HASHTABLE_FREE_VALUE flag.
>>>
>>> So just drop returning the value.
>>
>> Any reason this can't be void? If there are, then I would consider
>> returning a bool, as the return can only take 2 values.
> 
> I think you are right. Switching to void should be fine.
> 
>>
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V3:
>>> - new patch
>>> ---
>>>   tools/xenstore/hashtable.c | 10 +++++-----
>>>   tools/xenstore/hashtable.h |  4 ++--
>>>   2 files changed, 7 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
>>> index 299549c51e..6738719e47 100644
>>> --- a/tools/xenstore/hashtable.c
>>> +++ b/tools/xenstore/hashtable.c
>>> @@ -214,7 +214,7 @@ hashtable_search(struct hashtable *h, void *k)
>>>   }
>>>   
>>> /*****************************************************************************/
>>> -void * /* returns value associated with key */
>>> +int
>>>   hashtable_remove(struct hashtable *h, void *k)
>>>   {
>>>       /* TODO: consider compacting the table when the load factor 
>>> drops enough,
>>> @@ -222,7 +222,6 @@ hashtable_remove(struct hashtable *h, void *k)
>>>       struct entry *e;
>>>       struct entry **pE;
>>> -    void *v;
>>>       unsigned int hashvalue, index;
>>>       hashvalue = hash(h,k);
>>> @@ -236,16 +235,17 @@ hashtable_remove(struct hashtable *h, void *k)
>>>           {
>>>               *pE = e->next;
>>>               h->entrycount--;
>>> -            v = e->v;
>>>               if (h->flags & HASHTABLE_FREE_KEY)
>>>                   free(e->k);
>>> +            if (h->flags & HASHTABLE_FREE_VALUE)
>>> +                free(e->v);
>>
>> I don't quite understand how this change is related to this patch.
> 
> With the value pointer no longer being returned, there would be no way
> for the caller to free it, so it must be freed by hashtable_remove()
> if the related flag was set.

That makes sense now. Thanks for the explanation.

> 
> I can add a sentence to the commit message.

Yes please.

The rest of this patch looks good to me.

Cheers,

-- 
Julien Grall
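[For illustration, the semantics agreed on above (hashtable_remove() frees the value itself when HASHTABLE_FREE_VALUE is set, since the caller can no longer do so) can be sketched as follows. This is a simplified single-bucket model with pointer-equality keys, not the actual xenstore hashtable:]

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative flags and entry layout, simplified from the discussion. */
#define HASHTABLE_FREE_KEY   (1 << 0)
#define HASHTABLE_FREE_VALUE (1 << 1)

struct entry {
    void *k;
    void *v;
    struct entry *next;
};

struct hashtable {
    unsigned int flags;
    unsigned int entrycount;
    struct entry *head;   /* single bucket, for the sketch */
};

/* Since no value is returned, the table must free e->v itself when
 * HASHTABLE_FREE_VALUE is set.  Returns 1 if an entry was removed. */
static int hashtable_remove_sketch(struct hashtable *h, const void *k)
{
    struct entry *e, **pE;

    for (pE = &h->head; (e = *pE) != NULL; pE = &e->next) {
        if (e->k == k) {   /* real code compares hashed keys */
            *pE = e->next;
            h->entrycount--;
            if (h->flags & HASHTABLE_FREE_KEY)
                free(e->k);
            if (h->flags & HASHTABLE_FREE_VALUE)
                free(e->v);
            free(e);
            return 1;
        }
    }
    return 0;
}
```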


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:30:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480270.744569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4ly-0004b3-Ew; Wed, 18 Jan 2023 09:30:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480270.744569; Wed, 18 Jan 2023 09:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4ly-0004aw-By; Wed, 18 Jan 2023 09:30:22 +0000
Received: by outflank-mailman (input) for mailman id 480270;
 Wed, 18 Jan 2023 09:30:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pI4lx-0004ap-Hd
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:30:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4lw-00047y-W2; Wed, 18 Jan 2023 09:30:20 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4lw-0004HK-P9; Wed, 18 Jan 2023 09:30:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2oZO/KAxo0go/UoEoVWKI9dC1ZpQt7I3BZtSX56VdLg=; b=oWuvPHGnsqqffTsyBINMhAhS0K
	DCf6rEoObdJtTTU7zvBypBtCWMyMNPJfvBbxz/CKPndNyPE+WuDVBNni50+SbG4/V32IcV4uEoSWX
	WEPJjq0LvE5yTFxkBCpYnMHHu3NGbLjGqaQNgyfvtTSg7kcYRfMML0Es/h6IUzGSZerg=;
Message-ID: <a0f38463-18b3-ff70-12b8-aa55747d2701@xen.org>
Date: Wed, 18 Jan 2023 09:30:19 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 13/17] tools/xenstore: switch hashtable to use the
 talloc framework
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-14-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117091124.22170-14-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 17/01/2023 09:11, Juergen Gross wrote:
> Instead of using malloc() and friends, let the hashtable implementation
> use the talloc framework.
> 
> This is more consistent with the rest of xenstored, and it allows
> tracking memory usage via "xenstore-control memreport".
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:36:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:36:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480279.744580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4rT-0005Ej-3y; Wed, 18 Jan 2023 09:36:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480279.744580; Wed, 18 Jan 2023 09:36:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4rT-0005Ec-0q; Wed, 18 Jan 2023 09:36:03 +0000
Received: by outflank-mailman (input) for mailman id 480279;
 Wed, 18 Jan 2023 09:36:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pI4rR-0005EW-1v
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:36:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4rP-0004Di-FJ; Wed, 18 Jan 2023 09:35:59 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4rP-0004ad-9L; Wed, 18 Jan 2023 09:35:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=H1hgDy4AfKuWUzb0zgNMufYxhgUwr8h1m9ssmbzrHJc=; b=b9h9jK91lcUWZv/SPCs+Rq1hK0
	hftG/PgJcNhf+vBgsNVMWo8SBIYf3WQUKY1GWN4t7JLJVkDnAcenmvj1X8JtPqcuvvTXMyV86DTnR
	gPZ1EuWyv0K+2K4r3qAP69c/xPAKJA9PlwpJ7d9ryXD+O5LNBMXyh5ar2cXsR9mB5TYg=;
Message-ID: <449dae4d-c19c-87be-88ef-aa3ff79f1a23@xen.org>
Date: Wed, 18 Jan 2023 09:35:57 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 16/17] tools/xenstore: let check_store() check the
 accounting data
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-17-jgross@suse.com>
 <17595b1f-1523-9526-85da-99b9300f3218@xen.org>
 <c541fcd7-a829-f757-c949-1b4a089ac6c3@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <c541fcd7-a829-f757-c949-1b4a089ac6c3@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 18/01/2023 06:23, Juergen Gross wrote:
> On 17.01.23 23:36, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 17/01/2023 09:11, Juergen Gross wrote:
>>> Today check_store() is only testing the correctness of the node tree.
>>>
>>> Add verification of the accounting data (number of nodes)  and correct
>>
>> NIT: one too many space before 'and'.
>>
>>> the data if it is wrong.
>>>
>>> Do the initial check_store() call only after Xenstore entries of a
>>> live update have been read.
>>
>> Can you clarify whether this is needed for the rest of the patch, or 
>> simply a nice thing to have in general?
> 
> I'll add: "This is wanted to make sure the accounting data is correct after
> a live update."

Fine with me.

> 
>>
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   tools/xenstore/xenstored_core.c   | 62 ++++++++++++++++------
>>>   tools/xenstore/xenstored_domain.c | 86 +++++++++++++++++++++++++++++++
>>>   tools/xenstore/xenstored_domain.h |  4 ++
>>>   3 files changed, 137 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c 
>>> b/tools/xenstore/xenstored_core.c
>>> index 3099077a86..e201f14053 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -2389,8 +2389,6 @@ void setup_structure(bool live_update)
>>>           manual_node("@introduceDomain", NULL);
>>>           domain_nbentry_fix(dom0_domid, 5, true);
>>>       }
>>> -
>>> -    check_store();
>>>   }
>>>   static unsigned int hash_from_key_fn(void *k)
>>> @@ -2433,20 +2431,28 @@ int remember_string(struct hashtable *hash, 
>>> const char *str)
>>>    * As we go, we record each node in the given reachable hashtable.  
>>> These
>>>    * entries will be used later in clean_store.
>>>    */
>>> +
>>> +struct check_store_data {
>>> +    struct hashtable *reachable;
>>> +    struct hashtable *domains;
>>> +};
>>> +
>>>   static int check_store_step(const void *ctx, struct connection *conn,
>>>                   struct node *node, void *arg)
>>>   {
>>> -    struct hashtable *reachable = arg;
>>> +    struct check_store_data *data = arg;
>>> -    if (hashtable_search(reachable, (void *)node->name)) {
>>> +    if (hashtable_search(data->reachable, (void *)node->name)) {
>>
>> IIUC the cast is only necessary because hashtable_search() expects a 
>> non-const value. I can't think of a reason for the key to be 
>> non-const. So I will look to send a follow-up patch.
> 
> It is possible, but nasty, due to talloc_free() not taking a const pointer.

I am not sure I understand your reasoning. Looking at 
hashtable_search(), I don't see a call to talloc_free().

Anyway, this is not directly related to this patch. So I will have a 
look separately.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:38:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:38:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480285.744591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4t2-0005m8-GM; Wed, 18 Jan 2023 09:37:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480285.744591; Wed, 18 Jan 2023 09:37:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4t2-0005m1-CA; Wed, 18 Jan 2023 09:37:40 +0000
Received: by outflank-mailman (input) for mailman id 480285;
 Wed, 18 Jan 2023 09:37:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI4t1-0005lt-0s
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:37:39 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f587631-9713-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:36:52 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 29DBA2105F;
 Wed, 18 Jan 2023 09:37:20 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 05AA0139D2;
 Wed, 18 Jan 2023 09:37:19 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 9tERANC9x2OEPAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:37:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f587631-9713-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674034640; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=0o+AyJI14PwuFyUSa9s+WRcizaGnvbNkWcpl14cf+NA=;
	b=aU1uOSGQ60R016T35+Tzicd0iW+jsn7n8KJ05L9S2aOlSiVQUxrlIQPKwo9t5o01jgZCGc
	ZEDwuoBdopJOCDKRoFsDRyF2Uvos+WwovGB268FzitbJhkDVXDWeI22PvzdkhMTOGF7+Oq
	s2Lu9B9rNwqnaKPxbLl4VVJ6G9lQaEc=
Message-ID: <4aadcf29-04bc-964b-d2b2-42fd38dd5296@suse.com>
Date: Wed, 18 Jan 2023 10:37:19 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 16/17] tools/xenstore: let check_store() check the
 accounting data
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230117091124.22170-1-jgross@suse.com>
 <20230117091124.22170-17-jgross@suse.com>
 <17595b1f-1523-9526-85da-99b9300f3218@xen.org>
 <c541fcd7-a829-f757-c949-1b4a089ac6c3@suse.com>
 <449dae4d-c19c-87be-88ef-aa3ff79f1a23@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <449dae4d-c19c-87be-88ef-aa3ff79f1a23@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------wOukoQWG4drppYz06WcT9RP9"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------wOukoQWG4drppYz06WcT9RP9
Content-Type: multipart/mixed; boundary="------------ayT9o0cF8XL7SNk0S0qWs5aA";
 protected-headers="v1"

--------------ayT9o0cF8XL7SNk0S0qWs5aA
Content-Type: multipart/mixed; boundary="------------Fc1pInSWs2umJeLayAZLBA9k"

--------------Fc1pInSWs2umJeLayAZLBA9k
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 18.01.23 10:35, Julien Grall wrote:
> Hi Juergen,
> 
> On 18/01/2023 06:23, Juergen Gross wrote:
>> On 17.01.23 23:36, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 17/01/2023 09:11, Juergen Gross wrote:
>>>> Today check_store() is only testing the correctness of the node tree.
>>>>
>>>> Add verification of the accounting data (number of nodes)  and correct
>>>
>>> NIT: one too many space before 'and'.
>>>
>>>> the data if it is wrong.
>>>>
>>>> Do the initial check_store() call only after Xenstore entries of a
>>>> live update have been read.
>>>
>>> Can you clarify whether this is needed for the rest of the patch, or simply a
>>> nice thing to have in general?
>>
>> I'll add: "This is wanted to make sure the accounting data is correct after
>> a live update."
> 
> Fine with me.
> 
>>
>>>
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>   tools/xenstore/xenstored_core.c   | 62 ++++++++++++++++++-------
>>>>   tools/xenstore/xenstored_domain.c | 86 +++++++++++++++++++++++++++++++
>>>>   tools/xenstore/xenstored_domain.h |  4 ++
>>>>   3 files changed, 137 insertions(+), 15 deletions(-)
>>>>
>>>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>>>> index 3099077a86..e201f14053 100644
>>>> --- a/tools/xenstore/xenstored_core.c
>>>> +++ b/tools/xenstore/xenstored_core.c
>>>> @@ -2389,8 +2389,6 @@ void setup_structure(bool live_update)
>>>>           manual_node("@introduceDomain", NULL);
>>>>           domain_nbentry_fix(dom0_domid, 5, true);
>>>>       }
>>>> -
>>>> -    check_store();
>>>>   }
>>>>   static unsigned int hash_from_key_fn(void *k)
>>>> @@ -2433,20 +2431,28 @@ int remember_string(struct hashtable *hash, const
>>>> char *str)
>>>>    * As we go, we record each node in the given reachable hashtable. These
>>>>    * entries will be used later in clean_store.
>>>>    */
>>>> +
>>>> +struct check_store_data {
>>>> +    struct hashtable *reachable;
>>>> +    struct hashtable *domains;
>>>> +};
>>>> +
>>>>   static int check_store_step(const void *ctx, struct connection *conn,
>>>>                   struct node *node, void *arg)
>>>>   {
>>>> -    struct hashtable *reachable = arg;
>>>> +    struct check_store_data *data = arg;
>>>> -    if (hashtable_search(reachable, (void *)node->name)) {
>>>> +    if (hashtable_search(data->reachable, (void *)node->name)) {
>>>
>>> IIUC the cast is only necessary because hashtable_search() expects a
>>> non-const value. I can't think of a reason for the key to be non-const. So I
>>> will look to send a follow-up patch.
>>
>> It is possible, but nasty, due to talloc_free() not taking a const pointer.
> 
> I am not sure I understand your reasoning. Looking at hashtable_search(), I
> don't see a call to talloc_free().

Oh, I thought you were referring to the key in general.


Juergen
--------------Fc1pInSWs2umJeLayAZLBA9k--

--------------ayT9o0cF8XL7SNk0S0qWs5aA--

--------------wOukoQWG4drppYz06WcT9RP9--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:44:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480292.744601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4zI-0007LF-BA; Wed, 18 Jan 2023 09:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480292.744601; Wed, 18 Jan 2023 09:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI4zI-0007L8-8X; Wed, 18 Jan 2023 09:44:08 +0000
Received: by outflank-mailman (input) for mailman id 480292;
 Wed, 18 Jan 2023 09:44:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pI4zG-0007L2-SE
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:44:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4zG-0004Ln-IR; Wed, 18 Jan 2023 09:44:06 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI4zG-00051I-CW; Wed, 18 Jan 2023 09:44:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=e0MM1jc+ZHh5iFMgn4j2QZWTG0OyHWz3yp3OPZQPxQg=; b=1Oxe+75Zkb89JyKII79RwOoKb9
	lhSvCtrk/vf0XD59RF2jVSMSFnTXWqa3VAk1Rey0w4x5qbPicB68DPD1E8dapYMChfJBs6eUWT24k
	IlbIJqNG1O1K8JQ3aKM8nc+aBav0p9OWGiuzSAVNjwegzh2Fn9TmMJkCd7ku0A7K41y0=;
Message-ID: <7ffe5d34-614d-f2aa-cf87-c518917c970a@xen.org>
Date: Wed, 18 Jan 2023 09:44:04 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 04/40] xen/arm: add an option to define Xen start
 address for Armv8-R
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jiamei Xie <Jiamei.Xie@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-5-Penny.Zheng@arm.com>
 <e406484a-aad3-4953-afdb-3159597ec998@xen.org>
 <PAXPR08MB7420A5C7F93F23F14C77C9BA9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <PAXPR08MB7420A5C7F93F23F14C77C9BA9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 18/01/2023 03:00, Wei Chen wrote:
> Hi Julien,

Hi Wei,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 18 January 2023 7:24
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Jiamei Xie
>> <Jiamei.Xie@arm.com>
>> Subject: Re: [PATCH v2 04/40] xen/arm: add an option to define Xen start
>> address for Armv8-R
>>
>> Hi Penny,
>>
>> On 13/01/2023 05:28, Penny Zheng wrote:
>>> From: Wei Chen <wei.chen@arm.com>
>>>
>>> On Armv8-A, Xen has a fixed virtual start address (link address
>>> too) for all Armv8-A platforms. In an MMU based system, Xen can
>>> map its loaded address to this virtual start address. So, on
>>> Armv8-A platforms, the Xen start address does not need to be
>>> configurable. But on Armv8-R platforms, there is no MMU to map
>>> loaded address to a fixed virtual address and different platforms
>>> will have very different address space layout. So Xen cannot use
>>> a fixed physical address on MPU based system and need to have it
>>> configurable.
>>>
>>> In this patch we introduce one Kconfig option for users to define
>>> the default Xen start address for Armv8-R. Users can enter the
>>> address in config time, or select the tailored platform config
>>> file from arch/arm/configs.
>>>
>>> And as we introduced Armv8-R platforms to Xen, that means the
>>> existed Arm64 platforms should not be listed in Armv8-R platform
>>> list, so we add !ARM_V8R dependency for these platforms.
>>>
>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>> Signed-off-by: Jiamei.Xie <jiamei.xie@arm.com>
>>
>> Your signed-off-by is missing.
>>
>>> ---
>>> v1 -> v2:
>>> 1. Remove the platform header fvp_baser.h.
>>> 2. Remove the default start address for fvp_baser64.
>>> 3. Remove the description of default address from commit log.
>>> 4. Change HAS_MPU to ARM_V8R for Xen start address dependency.
>>>      No matter whether an Armv8-R board has an MPU or not, it
>>>      always needs to specify the start address.
>>
>> I don't quite understand the last sentence. Are you saying that it is
>> possible to have an ARMv8-R system with neither an MPU nor a page-table?
>>
> 
> Yes, from the Cortex-R82 page [1], you can see the MPU is optional in EL1
> and EL2:
> "Two optional and programmable MPUs controlled from EL1 and EL2 respectively."

Would this mean a vendor may provide their custom solution to protect
the memory?

> 
> Although it is unlikely that vendors using the Armv8-R IP will do so, it
> is indeed an option. In the ID register, there are also related bits in
> ID_AA64MMFR0_EL1 (MSA_frac) to indicate this.
> 
>>> ---
>>>    xen/arch/arm/Kconfig           |  8 ++++++++
>>>    xen/arch/arm/platforms/Kconfig | 16 +++++++++++++---
>>>    2 files changed, 21 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>>> index ace7178c9a..c6b6b612d1 100644
>>> --- a/xen/arch/arm/Kconfig
>>> +++ b/xen/arch/arm/Kconfig
>>> @@ -145,6 +145,14 @@ config TEE
>>>    	  This option enables generic TEE mediators support. It allows guests
>>>    	  to access real TEE via one of TEE mediators implemented in XEN.
>>>
>>> +config XEN_START_ADDRESS
>>> +	hex "Xen start address: keep default to use platform defined address"
>>> +	default 0
>>> +	depends on ARM_V8R
>>
>> It is still pretty unclear to me what would be the difference between
>> HAS_MPU and ARM_V8R.
>>
> 
> If we don't want to support non-MPU supported Armv8-R, I think they are the
> same. IMO, non-MPU supported Armv8-R is meaningless to Xen.

OOI, why do you think this is meaningless?

> 
>>> +	help
>>> +	  This option allows to set the customized address at which Xen will be
>>> +	  linked on MPU systems. This address must be aligned to a page size.
>>> +
>>>    source "arch/arm/tee/Kconfig"
>>>
>>>    config STATIC_SHM
>>> diff --git a/xen/arch/arm/platforms/Kconfig b/xen/arch/arm/platforms/Kconfig
>>> index c93a6b2756..0904793a0b 100644
>>> --- a/xen/arch/arm/platforms/Kconfig
>>> +++ b/xen/arch/arm/platforms/Kconfig
>>> @@ -1,6 +1,7 @@
>>>    choice
>>>    	prompt "Platform Support"
>>>    	default ALL_PLAT
>>> +	default FVP_BASER if ARM_V8R
>>>    	---help---
>>>    	Choose which hardware platform to enable in Xen.
>>>
>>> @@ -8,13 +9,14 @@ choice
>>>
>>>    config ALL_PLAT
>>>    	bool "All Platforms"
>>> +	depends on !ARM_V8R
>>>    	---help---
>>>    	Enable support for all available hardware platforms. It doesn't
>>>    	automatically select any of the related drivers.
>>>
>>>    config QEMU
>>>    	bool "QEMU aarch virt machine support"
>>> -	depends on ARM_64
>>> +	depends on ARM_64 && !ARM_V8R
>>>    	select GICV3
>>>    	select HAS_PL011
>>>    	---help---
>>> @@ -23,7 +25,7 @@ config QEMU
>>>
>>>    config RCAR3
>>>    	bool "Renesas RCar3 support"
>>> -	depends on ARM_64
>>> +	depends on ARM_64 && !ARM_V8R
>>>    	select HAS_SCIF
>>>    	select IPMMU_VMSA
>>>    	---help---
>>> @@ -31,14 +33,22 @@ config RCAR3
>>>
>>>    config MPSOC
>>>    	bool "Xilinx Ultrascale+ MPSoC support"
>>> -	depends on ARM_64
>>> +	depends on ARM_64 && !ARM_V8R
>>>    	select HAS_CADENCE_UART
>>>    	select ARM_SMMU
>>>    	---help---
>>>    	Enable all the required drivers for Xilinx Ultrascale+ MPSoC
>>>
>>> +config FVP_BASER
>>> +	bool "Fixed Virtual Platform BaseR support"
>>> +	depends on ARM_V8R
>>> +	help
>>> +	  Enable platform specific configurations for Fixed Virtual
>>> +	  Platform BaseR
>>
>> This seems unrelated to this patch.
>>
> 
> Can we add some description in the commit log for this change, or
> should we move it to a new patch?

New patch please or introduce it in the patch where you need it.

> We had preferred to use separate
> patches for this kind of changes, but we found the number of patches
> would become more and more. This problem has been bothering us for
> organizing patches.

I understand the concern of increasing the number of patches. However,
this also needs to be weighed against the review effort.

In this case, it is very difficult for me to understand why we need to 
introduce FVP_BASER.

In fact, on the previous version, we discussed not introducing any new
platform-specific config. So I am a bit surprised this is actually needed.
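For illustration only (a sketch, not a concrete proposal — whether ARM_V8R should simply imply an MPU is exactly the open question above): if it did, the start address could hang off a capability symbol and no platform entry would be needed:

```kconfig
# Sketch only: tie the start address to an MPU capability rather than
# to the Armv8-R platform symbol, so no per-platform config is needed.
config ARM_V8R
	bool
	select HAS_MPU

config XEN_START_ADDRESS
	hex "Xen start address"
	default 0
	depends on HAS_MPU
```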

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:46:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:46:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480299.744612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI51R-0007uc-Mt; Wed, 18 Jan 2023 09:46:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480299.744612; Wed, 18 Jan 2023 09:46:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI51R-0007uV-KJ; Wed, 18 Jan 2023 09:46:21 +0000
Received: by outflank-mailman (input) for mailman id 480299;
 Wed, 18 Jan 2023 09:46:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xbTf=5P=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pI51N-0007uI-QS
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:46:20 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb1ebf42-9714-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:44:42 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pI518-00AXu0-OJ; Wed, 18 Jan 2023 09:46:02 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E4BAA30012F;
 Wed, 18 Jan 2023 10:45:44 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id ACDDF201C94B7; Wed, 18 Jan 2023 10:45:44 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb1ebf42-9714-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=Vngt/LhqRClTSAwIT1d/lHxWy/3LcgFLtLzBhiC6+H4=; b=qN7sbrX2eZZR6fSOcS9SM2m8SL
	KggCGNxcd+u+9YxQownogqA0p2VZtU3Hko3Q0wGr2v56chdBuZQbt8ha8wwKycbmdGHoPwXPOcYWO
	npR4HyZ8Pg0qVt0FBNFD5xObAQSnYEMLBZwFFAcIp5+pvT+t6v0Ch6b71BT3vg+al7pt2SxISAwxm
	uOirgCK3BC7rZc0jC+dlSUlqxx3jQf+icqo7pPFEwKBnFFcIvrAtGbI1eSTV3BIX9HWNH1/ObOAXK
	8KY0Sz1JX97KdQvz3fvjCu3Vhs0q3pkB/tddNlhJehJbePTPSxMYlR0vdP/9hbvLbtZoxrnsCdgDx
	n6eQ3Bsw==;
Date: Wed, 18 Jan 2023 10:45:44 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>
Cc: linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Jörg Rödel <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>, jroedel@suse.de,
	kirill.shutemov@linux.intel.com, dave.hansen@intel.com,
	kai.huang@intel.com
Subject: Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from
 secondary_startup_64()
Message-ID: <Y8e/yKgVZgbqgvAG@hirez.programming.kicks-ass.net>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.589522290@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230116143645.589522290@infradead.org>

On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
> The boot trampolines from trampoline_64.S have code flow like:
> 
>   16bit BIOS			SEV-ES				64bit EFI
> 
>   trampoline_start()		sev_es_trampoline_start()	trampoline_start_64()
>     verify_cpu()			  |				|
>   switch_to_protected:    <---------------'				v
>        |							pa_trampoline_compat()
>        v								|
>   startup_32()		<-----------------------------------------------'
>        |
>        v
>   startup_64()
>        |
>        v
>   tr_start() := head_64.S:secondary_startup_64()
> 
> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> touch the APs), there is already a verify_cpu() invocation.

So supposedly TDX/ACPI-6.4 comes in on trampoline_start_64() for APs --
can any of the TDX capable folks tell me if we need verify_cpu() on
these?

Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
force enable SSE on AMD/K7. Surely none of that is needed for these
shiny new chips?

I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
point, but it seems really sad to need that on modern systems.


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:50:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480308.744635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55L-0001CA-EI; Wed, 18 Jan 2023 09:50:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480308.744635; Wed, 18 Jan 2023 09:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55L-0001C3-BI; Wed, 18 Jan 2023 09:50:23 +0000
Received: by outflank-mailman (input) for mailman id 480308;
 Wed, 18 Jan 2023 09:50:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI55J-0001BV-Sd
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:50:21 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 505f5f5d-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:48:51 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 43DD620D41;
 Wed, 18 Jan 2023 09:50:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id F3162139D2;
 Wed, 18 Jan 2023 09:50:18 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id OVEAOtrAx2MNQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:50:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 505f5f5d-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035419; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=zW/FbAtyVDZNOpFhbdoNM3G1fhjM/SXTsI3FkBc+OZw=;
	b=O5jeFAdGW+cPyxe0eE8bSUiJf+j7bN8mWBCcmEwjT339SZnV7eftqfLBaG9q8r10eQkY7R
	gUTdwGIJWGU+orWVbrRzvj3okh2GQUDbis1wIwXuZLMxKEm787bkZw5Ci7/MVTBSwYvxJ2
	Hra3no/eeT5ukXNWhm+ziXIap4P1BJQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v4 00/17] tools/xenstore: do some cleanup and fixes
Date: Wed, 18 Jan 2023 10:49:59 +0100
Message-Id: <20230118095016.13091-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a first batch of post-XSA patches which piled up during the
development phase of the recent Xenstore-related XSA patches.

It is a mixture of small fixes, enhancements, and cleanups.

Changes in V4:
- reordered the patches a little bit (patch 4 and patch 17 of V4 have
  been moved)
- addressed comments

Changes in V3:
- patches 2, 3, and 5 of V2 have been applied already
- new patch 12
- addressed comments

Changes in V2:
- patches 1+2 of V1 have been applied already
- addressed comments
- new patch 19

Juergen Gross (17):
  tools/xenstore: let talloc_free() preserve errno
  tools/xenstore: remove all watches when a domain has stopped
  tools/xenstore: add hashlist for finding struct domain by domid
  tools/xenstore: make log macro globally available
  tools/xenstore: introduce dummy nodes for special watch paths
  tools/xenstore: replace watch->relative_path with a prefix length
  tools/xenstore: move changed domain handling
  tools/xenstore: change per-domain node accounting interface
  tools/xenstore: replace literal domid 0 with dom0_domid
  tools/xenstore: make domain_is_unprivileged() an inline function
  tools/xenstore: let chk_domain_generation() return a bool
  tools/xenstore: don't let hashtable_remove() return the removed value
  tools/xenstore: switch hashtable to use the talloc framework
  tools/xenstore: introduce trace classes
  tools/xenstore: let check_store() check the accounting data
  tools/xenstore: make output of "xenstore-control help" more pretty
  tools/xenstore: don't allow creating too many nodes in a transaction

 docs/misc/xenstore.txt                 |  10 +-
 tools/xenstore/hashtable.c             | 103 ++--
 tools/xenstore/hashtable.h             |   6 +-
 tools/xenstore/talloc.c                |  21 +-
 tools/xenstore/xenstored_control.c     |  36 +-
 tools/xenstore/xenstored_core.c        | 261 +++++++----
 tools/xenstore/xenstored_core.h        |  39 ++
 tools/xenstore/xenstored_domain.c      | 620 +++++++++++++------------
 tools/xenstore/xenstored_domain.h      |  21 +-
 tools/xenstore/xenstored_transaction.c |  76 +--
 tools/xenstore/xenstored_transaction.h |   7 +-
 tools/xenstore/xenstored_watch.c       |  36 +-
 12 files changed, 653 insertions(+), 583 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:50:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480307.744623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55B-0000uX-5u; Wed, 18 Jan 2023 09:50:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480307.744623; Wed, 18 Jan 2023 09:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55B-0000uQ-3F; Wed, 18 Jan 2023 09:50:13 +0000
Received: by outflank-mailman (input) for mailman id 480307;
 Wed, 18 Jan 2023 09:50:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pI55A-0000uK-02
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:50:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI559-0004bj-KT; Wed, 18 Jan 2023 09:50:11 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI559-000571-Ef; Wed, 18 Jan 2023 09:50:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=zAiykU+TtfSsKJ9F9GCiG9v0SHaPn3kOU6nFLCYsF+c=; b=dJHHF7ZslBzm9WOFp8SwCCCmQQ
	KK0s8cfPWoB4IA0Y/Y2o9L9UySDDSifIy3MO19SZHNJGj+abzxyFmNGL/B533JScqN1NMhWWKHqS0
	Rr/yFC7ygaSgq2qJwOFGnAs2ctw+sn2NyWPCxPgcdhl9YOpaIcH/WtEP5sFiktGc5TRw=;
Message-ID: <4e6d4deb-d38b-9845-2f58-e94f28196bf6@xen.org>
Date: Wed, 18 Jan 2023 09:50:09 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related code
 from head.S
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-6-Penny.Zheng@arm.com>
 <f78755d8-0b43-ebe4-4b2c-c88875347796@xen.org>
 <PAXPR08MB742006643CF50E239EBC12139EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <PAXPR08MB742006643CF50E239EBC12139EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 18/01/2023 03:09, Wei Chen wrote:
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 18 January 2023 7:37
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related
>> code from head.S
>>
>> Hi Penny,
>>
>> On 13/01/2023 05:28, Penny Zheng wrote:
>>> From: Wei Chen <wei.chen@arm.com>
>>>
>>> We want to reuse head.S for MPU systems, but there is some
>>> code implemented for MMU systems only. We will move such
>>> code to another MMU-specific file. But before that, we will
>>> do some preparations in this patch to make it easier
>>> to review:
>>
>> Well, I agree that...
>>
>>> 1. Fix the indentations of code comments.
>>
>> ... changing the indentation is better here. But...
>>
>>> 2. Export some symbols that will be accessed out of file
>>>      scope.
>>
>> ... I have no idea which functions are going to be used in a separate
>> file. So I think they should belong to the patch moving the code.
>>
> 
> Ok, I will move these changes to the moving code patches.
> 
>>>
>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>
>> Your signed-off-by is missing.
>>
>>> ---
>>> v1 -> v2:
>>> 1. New patch.
>>> ---
>>>    xen/arch/arm/arm64/head.S | 40 +++++++++++++++++++--------------------
>>>    1 file changed, 20 insertions(+), 20 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>> index 93f9b0b9d5..b2214bc5e3 100644
>>> --- a/xen/arch/arm/arm64/head.S
>>> +++ b/xen/arch/arm/arm64/head.S
>>> @@ -136,22 +136,22 @@
>>>            add \xb, \xb, x20
>>>    .endm
>>>
>>> -        .section .text.header, "ax", %progbits
>>> -        /*.aarch64*/
>>> +.section .text.header, "ax", %progbits
>>> +/*.aarch64*/
>>
>> This change is not mentioned.
>>
> 
> I will add the description in commit message.
> 
>>>
>>> -        /*
>>> -         * Kernel startup entry point.
>>> -         * ---------------------------
>>> -         *
>>> -         * The requirements are:
>>> -         *   MMU = off, D-cache = off, I-cache = on or off,
>>> -         *   x0 = physical address to the FDT blob.
>>> -         *
>>> -         * This must be the very first address in the loaded image.
>>> -         * It should be linked at XEN_VIRT_START, and loaded at any
>>> -         * 4K-aligned address.  All of text+data+bss must fit in 2MB,
>>> -         * or the initial pagetable code below will need adjustment.
>>> -         */
>>> +/*
>>> + * Kernel startup entry point.
>>> + * ---------------------------
>>> + *
>>> + * The requirements are:
>>> + *   MMU = off, D-cache = off, I-cache = on or off,
>>> + *   x0 = physical address to the FDT blob.
>>> + *
>>> + * This must be the very first address in the loaded image.
>>> + * It should be linked at XEN_VIRT_START, and loaded at any
>>> + * 4K-aligned address.  All of text+data+bss must fit in 2MB,
>>> + * or the initial pagetable code below will need adjustment.
>>> + */
>>>
>>>    GLOBAL(start)
>>>            /*
>>> @@ -586,7 +586,7 @@ ENDPROC(cpu_init)
>>>     *
>>>     * Clobbers x0 - x4
>>>     */
>>> -create_page_tables:
>>> +ENTRY(create_page_tables)
>>
>> I am not sure about keeping this name. Now we have create_page_tables()
>> and arch_setup_page_tables().
>>
>> I would consider naming it create_boot_page_tables().
>>
> 
> Do you need me to rename it in this patch?

So looking at the rest of the series, I see you are already renaming the 
helper in patch #11. I think it would be better if the naming is done 
earlier.

That said, I am not convinced that create_page_tables() should actually 
be called externally.

In fact, you have something like:

    bl create_page_tables
    bl enable_mmu

Both will need an MMU/MPU-specific implementation. So it would be better 
if we provided a wrapper to limit the number of external functions.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:50:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:50:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480310.744646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55P-0001VQ-Ml; Wed, 18 Jan 2023 09:50:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480310.744646; Wed, 18 Jan 2023 09:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55P-0001VJ-Il; Wed, 18 Jan 2023 09:50:27 +0000
Received: by outflank-mailman (input) for mailman id 480310;
 Wed, 18 Jan 2023 09:50:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI55O-0001BV-UU
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:50:27 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5389e1da-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:48:57 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D1E1520C51;
 Wed, 18 Jan 2023 09:50:24 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id A11F7139D2;
 Wed, 18 Jan 2023 09:50:24 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 8B8CJuDAx2MrQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:50:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5389e1da-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035424; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CmJqgfwW3rgZ4aM9nfaFZhVOgg064OnswcLCYgVlRWM=;
	b=btpHFKt8tGY6pgUSEfVbx9hsSYlWovpVvBz+Fu/mzw0Tl1juUL3rCqfspkcoPh4lvSn9wa
	NklWmiLRUSO96+x0IGHwrKyYw7TS4ZPM+g69ihDVNlXqhVQ3rJyrcRCBHXG0LMccv+B+EA
	VGuushRO1XLowRrR1VyyPeTpRe3iqgc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 01/17] tools/xenstore: let talloc_free() preserve errno
Date: Wed, 18 Jan 2023 10:50:00 +0100
Message-Id: <20230118095016.13091-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today talloc_free() is not guaranteed to preserve errno, especially in
case a custom destructor is being used.

So preserve errno in talloc_free().

This makes it possible to remove some errno saving outside of talloc.c.
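
The pattern applied here can be shown as a minimal standalone sketch
(the function names below are hypothetical, not the xenstored ones):
save errno on entry and restore it on every exit path, so a destructor
that clobbers errno does not leak that into the caller.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Hypothetical destructor that clobbers errno as a side effect,
 * standing in for a custom talloc destructor. */
static void clobbering_free(void *p)
{
	free(p);
	errno = EINVAL;
}

/* The errno-preserving shape talloc_free() gains in this patch:
 * save errno on entry, restore it on every return path. */
static void free_keep_errno(void *p)
{
	int saved_errno = errno;

	clobbering_free(p);
	errno = saved_errno;
}
```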

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- drop wrapper (Julien Grall)
---
 tools/xenstore/talloc.c         | 21 +++++++++++++--------
 tools/xenstore/xenstored_core.c |  2 --
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/talloc.c b/tools/xenstore/talloc.c
index d7edcf3a93..23c3a23b19 100644
--- a/tools/xenstore/talloc.c
+++ b/tools/xenstore/talloc.c
@@ -541,38 +541,39 @@ static void talloc_free_children(void *ptr)
 */
 int talloc_free(void *ptr)
 {
+	int saved_errno = errno;
 	struct talloc_chunk *tc;
 
 	if (ptr == NULL) {
-		return -1;
+		goto err;
 	}
 
 	tc = talloc_chunk_from_ptr(ptr);
 
 	if (tc->null_refs) {
 		tc->null_refs--;
-		return -1;
+		goto err;
 	}
 
 	if (tc->refs) {
 		talloc_reference_destructor(tc->refs);
-		return -1;
+		goto err;
 	}
 
 	if (tc->flags & TALLOC_FLAG_LOOP) {
 		/* we have a free loop - stop looping */
-		return 0;
+		goto success;
 	}
 
 	if (tc->destructor) {
 		talloc_destructor_t d = tc->destructor;
 		if (d == (talloc_destructor_t)-1) {
-			return -1;
+			goto err;
 		}
 		tc->destructor = (talloc_destructor_t)-1;
 		if (d(ptr) == -1) {
 			tc->destructor = d;
-			return -1;
+			goto err;
 		}
 		tc->destructor = NULL;
 	}
@@ -594,10 +595,14 @@ int talloc_free(void *ptr)
 	tc->flags |= TALLOC_FLAG_FREE;
 
 	free(tc);
+ success:
+	errno = saved_errno;
 	return 0;
-}
-
 
+ err:
+	errno = saved_errno;
+	return -1;
+}
 
 /*
   A talloc version of realloc. The context argument is only used if
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 78a3edaa4e..1650821922 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -771,9 +771,7 @@ struct node *read_node(struct connection *conn, const void *ctx,
 	return node;
 
  error:
-	err = errno;
 	talloc_free(node);
-	errno = err;
 	return NULL;
 }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:50:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:50:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480312.744657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55V-0001wt-3J; Wed, 18 Jan 2023 09:50:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480312.744657; Wed, 18 Jan 2023 09:50:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55U-0001wk-Ub; Wed, 18 Jan 2023 09:50:32 +0000
Received: by outflank-mailman (input) for mailman id 480312;
 Wed, 18 Jan 2023 09:50:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI55T-0001v4-PZ
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:50:31 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8b4f7fbd-9715-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 10:50:30 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7B4463EA5D;
 Wed, 18 Jan 2023 09:50:30 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4A628139D2;
 Wed, 18 Jan 2023 09:50:30 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 9jHlEObAx2NSQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b4f7fbd-9715-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035430; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pX5UiTuGTkIWUb6YtyjDTQ7IukPWx9sxAlYTbqTbJs0=;
	b=nkpQqbBeRExaXkv9iWcK+4haSg4IaIenagyUessqnl3AGVnzBHHRcu0QF/+0g0ad58FLrQ
	eaSbYvhatgEDEmTEkt/YCzUC/8HwYMKpYK7BnX7gS7rNewY9E1qbHs5A2LwrVQgxS7UXnG
	wla69kWD5kP8jKcImZKPF8lRyk4JdL0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 02/17] tools/xenstore: remove all watches when a domain has stopped
Date: Wed, 18 Jan 2023 10:50:01 +0100
Message-Id: <20230118095016.13091-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a domain has been released by Xen tools, remove all its
registered watches. This avoids sending watch events to the dead domain
when all the nodes related to it are being removed by the Xen tools.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- move call to do_release() (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index aa86892fed..e669c89e94 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -740,6 +740,9 @@ int do_release(const void *ctx, struct connection *conn,
 	if (IS_ERR(domain))
 		return -PTR_ERR(domain);
 
+	/* Avoid triggering watch events when the domain's nodes are deleted. */
+	conn_delete_all_watches(domain->conn);
+
 	talloc_free(domain->conn);
 
 	send_ack(conn, XS_RELEASE);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:50:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:50:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480314.744668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55c-0002On-Bt; Wed, 18 Jan 2023 09:50:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480314.744668; Wed, 18 Jan 2023 09:50:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55c-0002Og-6b; Wed, 18 Jan 2023 09:50:40 +0000
Received: by outflank-mailman (input) for mailman id 480314;
 Wed, 18 Jan 2023 09:50:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI55a-0001BV-E2
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:50:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a5dc7f2-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:49:08 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 2713920D65;
 Wed, 18 Jan 2023 09:50:36 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E5C19139D2;
 Wed, 18 Jan 2023 09:50:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id rPp9NuvAx2NeQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:50:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a5dc7f2-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035436; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cvGa+mf9SlLa4egdJdNXtGqzGNV+vTmzoXzqW3HDEaE=;
	b=EDfX66hkICKiwNmqIKsDVjQkwrxfaiYLbx+v5F6AnYrN2LFl4VjQ6uOp4wbwxMaPMqzT+6
	xUOKRqQ+NYsIstSCa+QMJFAwlQYUOwUY/cXTHe4nsskUFBPY876EA+gKkKBjdUf4lTKbWZ
	bKHM49o9VWeqSPlcPilMJMH6KSai0bc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 03/17] tools/xenstore: add hashlist for finding struct domain by domid
Date: Wed, 18 Jan 2023 10:50:02 +0100
Message-Id: <20230118095016.13091-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today, finding a struct domain by its domain id requires scanning the
list of domains until the correct domid is found.

Add a hash list to speed this up. This allows removing the linking of
struct domain into a list. Note that the list of changed
domains per transaction is kept as a list, as there are no known use
cases with more than 4 domains being touched in a single transaction
(this would be a device handled by a driver domain and being assigned
to a HVM domain with device model in a stubdom, plus the control
domain).

Simple performance tests comparing scanning and the hash list have
shown that the hash list wins as soon as more than 6 entries need to
be scanned.
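
The lookup structure can be illustrated with a minimal standalone
sketch (hypothetical names, not the xenstored hashtable API): domids
are small integers, so an identity-style hash into a modest number of
buckets is adequate, and each lookup walks one short bucket chain
instead of the full domain list.

```c
#include <assert.h>
#include <stddef.h>

#define NBUCKETS 8u

struct dom {
	unsigned int domid;
	struct dom *next;	/* per-bucket chain */
};

static struct dom *buckets[NBUCKETS];

/* Identity hash: adequate for small integer keys such as domids. */
static unsigned int domid_hash(unsigned int domid)
{
	return domid % NBUCKETS;
}

static void dom_insert(struct dom *d)
{
	unsigned int b = domid_hash(d->domid);

	d->next = buckets[b];
	buckets[b] = d;
}

/* Lookup touches one bucket chain instead of the whole domain list. */
static struct dom *dom_find(unsigned int domid)
{
	struct dom *d;

	for (d = buckets[domid_hash(domid)]; d; d = d->next)
		if (d->domid == domid)
			return d;
	return NULL;
}
```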

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- add comment, fix return value of check_domain() (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 102 ++++++++++++++++++------------
 1 file changed, 60 insertions(+), 42 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index e669c89e94..3ad1028edb 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -48,8 +48,6 @@ static struct node_perms dom_introduce_perms;
 
 struct domain
 {
-	struct list_head list;
-
 	/* The id of this domain */
 	unsigned int domid;
 
@@ -96,7 +94,7 @@ struct domain
 	bool wrl_delay_logged;
 };
 
-static LIST_HEAD(domains);
+static struct hashtable *domhash;
 
 static bool check_indexes(XENSTORE_RING_IDX cons, XENSTORE_RING_IDX prod)
 {
@@ -309,7 +307,7 @@ static int destroy_domain(void *_domain)
 
 	domain_tree_remove(domain);
 
-	list_del(&domain->list);
+	hashtable_remove(domhash, &domain->domid);
 
 	if (!domain->introduced)
 		return 0;
@@ -341,43 +339,50 @@ static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
 	       dominfo->domid == domid;
 }
 
-void check_domains(void)
+static int check_domain(const void *k, void *v, void *arg)
 {
 	xc_dominfo_t dominfo;
-	struct domain *domain;
 	struct connection *conn;
-	int notify = 0;
 	bool dom_valid;
+	struct domain *domain = v;
+	bool *notify = arg;
 
- again:
-	list_for_each_entry(domain, &domains, list) {
-		dom_valid = get_domain_info(domain->domid, &dominfo);
-		if (!domain->introduced) {
-			if (!dom_valid) {
-				talloc_free(domain);
-				goto again;
-			}
-			continue;
-		}
-		if (dom_valid) {
-			if ((dominfo.crashed || dominfo.shutdown)
-			    && !domain->shutdown) {
-				domain->shutdown = true;
-				notify = 1;
-			}
-			if (!dominfo.dying)
-				continue;
-		}
-		if (domain->conn) {
-			/* domain is a talloc child of domain->conn. */
-			conn = domain->conn;
-			domain->conn = NULL;
-			talloc_unlink(talloc_autofree_context(), conn);
-			notify = 0; /* destroy_domain() fires the watch */
-			goto again;
+	dom_valid = get_domain_info(domain->domid, &dominfo);
+	if (!domain->introduced) {
+		if (!dom_valid)
+			talloc_free(domain);
+		return 0;
+	}
+	if (dom_valid) {
+		if ((dominfo.crashed || dominfo.shutdown)
+		    && !domain->shutdown) {
+			domain->shutdown = true;
+			*notify = true;
 		}
+		if (!dominfo.dying)
+			return 0;
+	}
+	if (domain->conn) {
+		/* domain is a talloc child of domain->conn. */
+		conn = domain->conn;
+		domain->conn = NULL;
+		talloc_unlink(talloc_autofree_context(), conn);
+		*notify = false; /* destroy_domain() fires the watch */
+
+		/* Above unlink might result in 2 domains being freed! */
+		return 1;
 	}
 
+	return 0;
+}
+
+void check_domains(void)
+{
+	bool notify = false;
+
+	while (hashtable_iterate(domhash, check_domain, &notify))
+		;
+
 	if (notify)
 		fire_watches(NULL, NULL, "@releaseDomain", NULL, true, NULL);
 }
@@ -415,13 +420,7 @@ static char *talloc_domain_path(const void *context, unsigned int domid)
 
 static struct domain *find_domain_struct(unsigned int domid)
 {
-	struct domain *i;
-
-	list_for_each_entry(i, &domains, list) {
-		if (i->domid == domid)
-			return i;
-	}
-	return NULL;
+	return hashtable_search(domhash, &domid);
 }
 
 int domain_get_quota(const void *ctx, struct connection *conn,
@@ -470,9 +469,13 @@ static struct domain *alloc_domain(const void *context, unsigned int domid)
 	domain->generation = generation;
 	domain->introduced = false;
 
-	talloc_set_destructor(domain, destroy_domain);
+	if (!hashtable_insert(domhash, &domain->domid, domain)) {
+		talloc_free(domain);
+		errno = ENOMEM;
+		return NULL;
+	}
 
-	list_add(&domain->list, &domains);
+	talloc_set_destructor(domain, destroy_domain);
 
 	return domain;
 }
@@ -906,10 +909,25 @@ void dom0_init(void)
 	xenevtchn_notify(xce_handle, dom0->port);
 }
 
+static unsigned int domhash_fn(void *k)
+{
+	return *(unsigned int *)k;
+}
+
+static int domeq_fn(void *key1, void *key2)
+{
+	return *(unsigned int *)key1 == *(unsigned int *)key2;
+}
+
 void domain_init(int evtfd)
 {
 	int rc;
 
+	/* Start with a random rather low domain count for the hashtable. */
+	domhash = create_hashtable(8, domhash_fn, domeq_fn, 0);
+	if (!domhash)
+		barf_perror("Failed to allocate domain hashtable");
+
 	xc_handle = talloc(talloc_autofree_context(), xc_interface*);
 	if (!xc_handle)
 		barf_perror("Failed to allocate domain handle");
-- 
2.35.3
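The hunk above replaces a linear list walk with a hashtable keyed directly on the domid; because domids are small, well-distributed integers, the hash function can simply return the key itself (cf. domhash_fn/domeq_fn in the patch). A minimal standalone sketch of that pattern, using a hypothetical toy chained table rather than the xenstored "hashtable" API:

```c
#include <stdlib.h>

/* Toy chained hashtable keyed by domid (hypothetical; the real code
 * uses create_hashtable()/hashtable_insert()/hashtable_search()). */
#define NBUCKETS 8

struct entry {
	unsigned int domid;
	void *value;
	struct entry *next;
};

struct domtable {
	struct entry *buckets[NBUCKETS];
};

/* Identity hash: domids are already small integers, so the key itself
 * serves as the hash value (cf. domhash_fn in the patch). */
static unsigned int domid_hash(unsigned int domid)
{
	return domid;
}

static int dom_insert(struct domtable *t, unsigned int domid, void *value)
{
	struct entry *e = malloc(sizeof(*e));
	unsigned int b;

	if (!e)
		return 0;	/* mirrors hashtable_insert()'s failure path */
	e->domid = domid;
	e->value = value;
	b = domid_hash(domid) % NBUCKETS;
	e->next = t->buckets[b];
	t->buckets[b] = e;
	return 1;
}

static void *dom_search(struct domtable *t, unsigned int domid)
{
	unsigned int b = domid_hash(domid) % NBUCKETS;
	struct entry *e;

	for (e = t->buckets[b]; e; e = e->next)
		if (e->domid == domid)	/* cf. domeq_fn in the patch */
			return e->value;
	return NULL;	/* same contract as the old list_for_each_entry() walk */
}
```

The lookup in find_domain_struct() thus drops from O(n) over all domains to an expected O(1) bucket probe.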



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:50:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:50:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480319.744678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55g-0002wh-IE; Wed, 18 Jan 2023 09:50:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480319.744678; Wed, 18 Jan 2023 09:50:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55g-0002wS-FA; Wed, 18 Jan 2023 09:50:44 +0000
Received: by outflank-mailman (input) for mailman id 480319;
 Wed, 18 Jan 2023 09:50:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI55e-0001v4-LV
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:50:42 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9205c981-9715-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 10:50:42 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BB1E1340AE;
 Wed, 18 Jan 2023 09:50:41 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 89FB3139D2;
 Wed, 18 Jan 2023 09:50:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id BPUoIPHAx2NqQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:50:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9205c981-9715-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035441; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IMY3vUVgw8qRRZJd+a0HyuTGexwRF66hbh+Hzn4a2aU=;
	b=NXR4GPBT/IDitpaZDDryrmiHoJRTV2X2AQ7BI6XajjqSMlB3CWatyJAYpMT8/n9jmVHKRf
	rNKf3SUKoQNUW1ZPEb1NPld5VGsyDukffxXwyf4mPif149q3AEcmE2+A6loJaAQWuwTHUF
	SDpbEsZ7PecSsO7TdczkSIJQdaPkCR0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 04/17] tools/xenstore: make log macro globally available
Date: Wed, 18 Jan 2023 10:50:03 +0100
Message-Id: <20230118095016.13091-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the definition of the log() macro to xenstored_core.h in order
to make it usable from other source files, too.

While at it, preserve the value of errno across the logging.
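The errno preservation works by saving errno on entry to the macro body and restoring it before the do/while exits, so a caller can still inspect errno after logging an error. A minimal sketch of the pattern, with plain fprintf standing in for the trace()/syslog() pair used by the real macro:

```c
#include <errno.h>
#include <stdio.h>

/* Log to stderr without clobbering errno: library calls made while
 * formatting or emitting the message (here fprintf) may overwrite
 * errno, so save it first and restore it afterwards. */
#define log(...)					\
	do {						\
		int _saved_errno = errno;		\
		fprintf(stderr, __VA_ARGS__);		\
		fprintf(stderr, "\n");			\
		errno = _saved_errno;			\
	} while (0)
```

This lets error paths do `log("write failed"); return errno;` without the logging silently changing the returned error code.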

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c | 14 --------------
 tools/xenstore/xenstored_core.h | 15 +++++++++++++++
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 1650821922..d30f35e642 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -88,20 +88,6 @@ TDB_CONTEXT *tdb_ctx = NULL;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
 
-#define log(...)							\
-	do {								\
-		char *s = talloc_asprintf(NULL, __VA_ARGS__);		\
-		if (s) {						\
-			trace("%s\n", s);				\
-			syslog(LOG_ERR, "%s\n",  s);			\
-			talloc_free(s);					\
-		} else {						\
-			trace("talloc failure during logging\n");	\
-			syslog(LOG_ERR, "talloc failure during logging\n"); \
-		}							\
-	} while (0)
-
-
 int quota_nb_entry_per_domain = 1000;
 int quota_nb_watch_per_domain = 128;
 int quota_max_entry_size = 2048; /* 2K */
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 37006d508d..89055cbb21 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -267,6 +267,21 @@ void trace(const char *fmt, ...) __attribute__ ((format (printf, 1, 2)));
 void reopen_log(void);
 void close_log(void);
 
+#define log(...)							\
+	do {								\
+		int _saved_errno = errno;				\
+		char *s = talloc_asprintf(NULL, __VA_ARGS__);		\
+		if (s) {						\
+			trace("%s\n", s);				\
+			syslog(LOG_ERR, "%s\n",	s);			\
+			talloc_free(s);					\
+		} else {						\
+			trace("talloc failure during logging\n");	\
+			syslog(LOG_ERR, "talloc failure during logging\n"); \
+		}							\
+		errno = _saved_errno;					\
+	} while (0)
+
 extern int orig_argc;
 extern char **orig_argv;
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:50:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:50:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480326.744690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55m-0003UL-U2; Wed, 18 Jan 2023 09:50:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480326.744690; Wed, 18 Jan 2023 09:50:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55m-0003Tz-Of; Wed, 18 Jan 2023 09:50:50 +0000
Received: by outflank-mailman (input) for mailman id 480326;
 Wed, 18 Jan 2023 09:50:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI55m-0001BV-3A
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:50:50 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 60fddad7-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:49:19 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 5A32E3EAAD;
 Wed, 18 Jan 2023 09:50:47 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 241CA139D2;
 Wed, 18 Jan 2023 09:50:47 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id D7qLB/fAx2N2QwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:50:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60fddad7-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035447; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=c/wxZ4rtigEwZqrxTapBQSzAOAw3lmv1/174o/yGTrE=;
	b=POE8V5ONtm0GkzjJLv8h2/kEUrgxUG0EBs1Jh9AgTDY2wWo6Gg7hYF4UWMcFq3fwRQnKXe
	XF16TAohQUkVm8KkZuvxzK3jvnO2mxHiy45Wiq3ATk/ofl03Tc1WQvp4j08EWhBuGl1fR7
	APeZHR/j8esuF303cTFJSrKYFDNbWAU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 05/17] tools/xenstore: introduce dummy nodes for special watch paths
Date: Wed, 18 Jan 2023 10:50:04 +0100
Message-Id: <20230118095016.13091-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of special-casing the permission handling and watch event
firing for the special watch paths "@introduceDomain" and
"@releaseDomain", use static dummy nodes added to the data base when
starting Xenstore.

The node accounting needs to reflect that change by adding the special
nodes in the domain_entry_fix() call in setup_structure().

Note that this requires reworking the calls of fire_watches() for the
special events in order to avoid leaking memory.

Move the check for a valid node name from get_node() to
get_node_canonicalized(), as this allows using get_node() for the
special nodes, too.

In order to avoid read and write accesses to the special nodes, use a
dedicated variant for obtaining the current node data for the
permission handling.

This allows quite some code to be simplified. In the future, sub-nodes
of the special nodes will be possible due to this change, allowing more
fine-grained permission control of special events for specific domains.
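The dispatch introduced here hinges on the leading character of the path: special watch paths start with '@' and are stored verbatim in the data base, so they must not be run through path canonicalization, while per-transaction keys are generation-prefixed (e.g. "123/local/..."). A hedged sketch of both predicates (hypothetical helper names; the real code calls get_node()/get_node_canonicalized() and get_acc_domid()):

```c
/* A path names a special node iff it starts with '@'; such paths skip
 * canonicalization (cf. get_spec_node() in the patch). */
static int is_special_path(const char *name)
{
	return name[0] == '@';
}

/* A data-base key belongs to the global store iff it starts with '/'
 * or '@'; anything else is a generation-prefixed per-transaction node
 * accounted to the transaction owner (cf. the get_acc_domid() change). */
static int is_global_key(const char *key)
{
	return key[0] == '/' || key[0] == '@';
}
```

Before this patch only '/' marked a global key, which is why get_acc_domid() had to grow the '@' case once the special nodes became real data-base entries.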

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- add get_spec_node()
- expand commit message (Julien Grall)
V3:
- modify get_acc_domid() comment (Julien Grall)
- log error in fire_special_watches() (Julien Grall)
V4:
- use log() (Julien Grall)
---
 tools/xenstore/xenstored_control.c |   3 -
 tools/xenstore/xenstored_core.c    |  94 ++++++++++-------
 tools/xenstore/xenstored_domain.c  | 164 ++++-------------------------
 tools/xenstore/xenstored_domain.h  |   6 --
 tools/xenstore/xenstored_watch.c   |  17 +--
 5 files changed, 83 insertions(+), 201 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index d1aaa00bf4..41e6992591 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -676,9 +676,6 @@ static const char *lu_dump_state(const void *ctx, struct connection *conn)
 	if (ret)
 		goto out;
 	ret = dump_state_connections(fp);
-	if (ret)
-		goto out;
-	ret = dump_state_special_nodes(fp);
 	if (ret)
 		goto out;
 	ret = dump_state_nodes(fp, ctx);
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index d30f35e642..c82fb6e3d5 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -597,12 +597,13 @@ static void get_acc_data(TDB_DATA *key, struct node_account_data *acc)
  * Per-transaction nodes need to be accounted for the transaction owner.
  * Those nodes are stored in the data base with the transaction generation
  * count prepended (e.g. 123/local/domain/...). So testing for the node's
- * key not to start with "/" is sufficient.
+ * key not to start with "/" or "@" is sufficient.
  */
 static unsigned int get_acc_domid(struct connection *conn, TDB_DATA *key,
 				  unsigned int domid)
 {
-	return (!conn || key->dptr[0] == '/') ? domid : conn->id;
+	return (!conn || key->dptr[0] == '/' || key->dptr[0] == '@')
+	       ? domid : conn->id;
 }
 
 int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
@@ -944,10 +945,6 @@ static struct node *get_node(struct connection *conn,
 {
 	struct node *node;
 
-	if (!name || !is_valid_nodename(name)) {
-		errno = EINVAL;
-		return NULL;
-	}
 	node = read_node(conn, ctx, name);
 	/* If we don't have permission, we don't have node. */
 	if (node) {
@@ -1236,9 +1233,23 @@ static struct node *get_node_canonicalized(struct connection *conn,
 	*canonical_name = canonicalize(conn, ctx, name);
 	if (!*canonical_name)
 		return NULL;
+	if (!is_valid_nodename(*canonical_name)) {
+		errno = EINVAL;
+		return NULL;
+	}
 	return get_node(conn, ctx, *canonical_name, perm);
 }
 
+static struct node *get_spec_node(struct connection *conn, const void *ctx,
+				  const char *name, char **canonical_name,
+				  unsigned int perm)
+{
+	if (name[0] == '@')
+		return get_node(conn, ctx, name, perm);
+
+	return get_node_canonicalized(conn, ctx, name, canonical_name, perm);
+}
+
 static int send_directory(const void *ctx, struct connection *conn,
 			  struct buffered_data *in)
 {
@@ -1723,8 +1734,7 @@ static int do_get_perms(const void *ctx, struct connection *conn,
 	char *strings;
 	unsigned int len;
 
-	node = get_node_canonicalized(conn, ctx, onearg(in), NULL,
-				      XS_PERM_READ);
+	node = get_spec_node(conn, ctx, onearg(in), NULL, XS_PERM_READ);
 	if (!node)
 		return errno;
 
@@ -1766,17 +1776,9 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 	if (perms.p[0].perms & XS_PERM_IGNORE)
 		return ENOENT;
 
-	/* First arg is node name. */
-	if (strstarts(in->buffer, "@")) {
-		if (set_perms_special(conn, in->buffer, &perms))
-			return errno;
-		send_ack(conn, XS_SET_PERMS);
-		return 0;
-	}
-
 	/* We must own node to do this (tools can do this too). */
-	node = get_node_canonicalized(conn, ctx, in->buffer, &name,
-				      XS_PERM_WRITE | XS_PERM_OWNER);
+	node = get_spec_node(conn, ctx, in->buffer, &name,
+			     XS_PERM_WRITE | XS_PERM_OWNER);
 	if (!node)
 		return errno;
 
@@ -2374,7 +2376,9 @@ void setup_structure(bool live_update)
 		manual_node("/", "tool");
 		manual_node("/tool", "xenstored");
 		manual_node("/tool/xenstored", NULL);
-		domain_entry_fix(dom0_domid, 3, true);
+		manual_node("@releaseDomain", NULL);
+		manual_node("@introduceDomain", NULL);
+		domain_entry_fix(dom0_domid, 5, true);
 	}
 
 	check_store();
@@ -3156,6 +3160,23 @@ static int dump_state_node(const void *ctx, struct connection *conn,
 	return WALK_TREE_OK;
 }
 
+static int dump_state_special_node(FILE *fp, const void *ctx,
+				   struct dump_node_data *data,
+				   const char *name)
+{
+	struct node *node;
+	int ret;
+
+	node = read_node(NULL, ctx, name);
+	if (!node)
+		return dump_state_node_err(data, "Dump node read node error");
+
+	ret = dump_state_node(ctx, NULL, node, data);
+	talloc_free(node);
+
+	return ret;
+}
+
 const char *dump_state_nodes(FILE *fp, const void *ctx)
 {
 	struct dump_node_data data = {
@@ -3167,6 +3188,11 @@ const char *dump_state_nodes(FILE *fp, const void *ctx)
 	if (walk_node_tree(ctx, NULL, "/", &walkfuncs, &data))
 		return data.err;
 
+	if (dump_state_special_node(fp, ctx, &data, "@releaseDomain"))
+		return data.err;
+	if (dump_state_special_node(fp, ctx, &data, "@introduceDomain"))
+		return data.err;
+
 	return NULL;
 }
 
@@ -3340,25 +3366,21 @@ void read_state_node(const void *ctx, const void *state)
 		node->perms.p[i].id = sn->perms[i].domid;
 	}
 
-	if (strstarts(name, "@")) {
-		set_perms_special(&conn, name, &node->perms);
-		talloc_free(node);
-		return;
-	}
-
-	parentname = get_parent(node, name);
-	if (!parentname)
-		barf("allocation error restoring node");
-	parent = read_node(NULL, node, parentname);
-	if (!parent)
-		barf("read parent error restoring node");
+	if (!strstarts(name, "@")) {
+		parentname = get_parent(node, name);
+		if (!parentname)
+			barf("allocation error restoring node");
+		parent = read_node(NULL, node, parentname);
+		if (!parent)
+			barf("read parent error restoring node");
 
-	if (add_child(node, parent, name))
-		barf("allocation error restoring node");
+		if (add_child(node, parent, name))
+			barf("allocation error restoring node");
 
-	set_tdb_key(parentname, &key);
-	if (write_node_raw(NULL, &key, parent, true))
-		barf("write parent error restoring node");
+		set_tdb_key(parentname, &key);
+		if (write_node_raw(NULL, &key, parent, true))
+			barf("write parent error restoring node");
+	}
 
 	set_tdb_key(name, &key);
 	if (write_node_raw(NULL, &key, node, true))
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 3ad1028edb..494694fd30 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,9 +43,6 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
-static struct node_perms dom_release_perms;
-static struct node_perms dom_introduce_perms;
-
 struct domain
 {
 	/* The id of this domain */
@@ -225,27 +222,6 @@ static void unmap_interface(void *interface)
 	xengnttab_unmap(*xgt_handle, interface, 1);
 }
 
-static void remove_domid_from_perm(struct node_perms *perms,
-				   struct domain *domain)
-{
-	unsigned int cur, new;
-
-	if (perms->p[0].id == domain->domid)
-		perms->p[0].id = priv_domid;
-
-	for (cur = new = 1; cur < perms->num; cur++) {
-		if (perms->p[cur].id == domain->domid)
-			continue;
-
-		if (new != cur)
-			perms->p[new] = perms->p[cur];
-
-		new++;
-	}
-
-	perms->num = new;
-}
-
 static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 				  struct node *node, void *arg)
 {
@@ -297,8 +273,26 @@ static void domain_tree_remove(struct domain *domain)
 			       "error when looking for orphaned nodes\n");
 	}
 
-	remove_domid_from_perm(&dom_release_perms, domain);
-	remove_domid_from_perm(&dom_introduce_perms, domain);
+	walk_node_tree(domain, NULL, "@releaseDomain", &walkfuncs, domain);
+	walk_node_tree(domain, NULL, "@introduceDomain", &walkfuncs, domain);
+}
+
+static void fire_special_watches(const char *name)
+{
+	void *ctx = talloc_new(NULL);
+	struct node *node;
+
+	if (!ctx)
+		return;
+
+	node = read_node(NULL, ctx, name);
+
+	if (node)
+		fire_watches(NULL, ctx, name, node, true, NULL);
+	else
+		log("special node %s not found\n", name);
+
+	talloc_free(ctx);
 }
 
 static int destroy_domain(void *_domain)
@@ -326,7 +320,7 @@ static int destroy_domain(void *_domain)
 			unmap_interface(domain->interface);
 	}
 
-	fire_watches(NULL, domain, "@releaseDomain", NULL, true, NULL);
+	fire_special_watches("@releaseDomain");
 
 	wrl_domain_destroy(domain);
 
@@ -384,7 +378,7 @@ void check_domains(void)
 		;
 
 	if (notify)
-		fire_watches(NULL, NULL, "@releaseDomain", NULL, true, NULL);
+		fire_special_watches("@releaseDomain");
 }
 
 /* We scan all domains rather than use the information given here. */
@@ -633,8 +627,7 @@ static struct domain *introduce_domain(const void *ctx,
 		}
 
 		if (!is_master_domain && !restore)
-			fire_watches(NULL, ctx, "@introduceDomain", NULL,
-				     true, NULL);
+			fire_special_watches("@introduceDomain");
 	} else {
 		/* Use XS_INTRODUCE for recreating the xenbus event-channel. */
 		if (domain->port)
@@ -840,59 +833,6 @@ const char *get_implicit_path(const struct connection *conn)
 	return conn->domain->path;
 }
 
-static int set_dom_perms_default(struct node_perms *perms)
-{
-	perms->num = 1;
-	perms->p = talloc_array(NULL, struct xs_permissions, perms->num);
-	if (!perms->p)
-		return -1;
-	perms->p->id = 0;
-	perms->p->perms = XS_PERM_NONE;
-
-	return 0;
-}
-
-static struct node_perms *get_perms_special(const char *name)
-{
-	if (!strcmp(name, "@releaseDomain"))
-		return &dom_release_perms;
-	if (!strcmp(name, "@introduceDomain"))
-		return &dom_introduce_perms;
-	return NULL;
-}
-
-int set_perms_special(struct connection *conn, const char *name,
-		      struct node_perms *perms)
-{
-	struct node_perms *p;
-
-	p = get_perms_special(name);
-	if (!p)
-		return EINVAL;
-
-	if ((perm_for_conn(conn, p) & (XS_PERM_WRITE | XS_PERM_OWNER)) !=
-	    (XS_PERM_WRITE | XS_PERM_OWNER))
-		return EACCES;
-
-	p->num = perms->num;
-	talloc_free(p->p);
-	p->p = perms->p;
-	talloc_steal(NULL, perms->p);
-
-	return 0;
-}
-
-bool check_perms_special(const char *name, struct connection *conn)
-{
-	struct node_perms *p;
-
-	p = get_perms_special(name);
-	if (!p)
-		return false;
-
-	return perm_for_conn(conn, p) & XS_PERM_READ;
-}
-
 void dom0_init(void)
 {
 	evtchn_port_t port;
@@ -962,10 +902,6 @@ void domain_init(int evtfd)
 	if (xce_handle == NULL)
 		barf_perror("Failed to open evtchn device");
 
-	if (set_dom_perms_default(&dom_release_perms) ||
-	    set_dom_perms_default(&dom_introduce_perms))
-		barf_perror("Failed to set special permissions");
-
 	if ((rc = xenevtchn_bind_virq(xce_handle, VIRQ_DOM_EXC)) == -1)
 		barf_perror("Failed to bind to domain exception virq port");
 	virq_port = rc;
@@ -1535,60 +1471,6 @@ const char *dump_state_connections(FILE *fp)
 	return ret;
 }
 
-static const char *dump_state_special_node(FILE *fp, const char *name,
-					   const struct node_perms *perms)
-{
-	struct xs_state_record_header head;
-	struct xs_state_node sn;
-	unsigned int pathlen;
-	const char *ret;
-
-	pathlen = strlen(name) + 1;
-
-	head.type = XS_STATE_TYPE_NODE;
-	head.length = sizeof(sn);
-
-	sn.conn_id = 0;
-	sn.ta_id = 0;
-	sn.ta_access = 0;
-	sn.perm_n = perms->num;
-	sn.path_len = pathlen;
-	sn.data_len = 0;
-	head.length += perms->num * sizeof(*sn.perms);
-	head.length += pathlen;
-	head.length = ROUNDUP(head.length, 3);
-	if (fwrite(&head, sizeof(head), 1, fp) != 1)
-		return "Dump special node error";
-	if (fwrite(&sn, sizeof(sn), 1, fp) != 1)
-		return "Dump special node error";
-
-	ret = dump_state_node_perms(fp, perms->p, perms->num);
-	if (ret)
-		return ret;
-
-	if (fwrite(name, pathlen, 1, fp) != 1)
-		return "Dump special node path error";
-
-	ret = dump_state_align(fp);
-
-	return ret;
-}
-
-const char *dump_state_special_nodes(FILE *fp)
-{
-	const char *ret;
-
-	ret = dump_state_special_node(fp, "@releaseDomain",
-				      &dom_release_perms);
-	if (ret)
-		return ret;
-
-	ret = dump_state_special_node(fp, "@introduceDomain",
-				      &dom_introduce_perms);
-
-	return ret;
-}
-
 void read_state_connection(const void *ctx, const void *state)
 {
 	const struct xs_state_connection *sc = state;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index b38c82991d..630641d620 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -99,11 +99,6 @@ void domain_outstanding_domid_dec(unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 
-/* Special node permission handling. */
-int set_perms_special(struct connection *conn, const char *name,
-		      struct node_perms *perms);
-bool check_perms_special(const char *name, struct connection *conn);
-
 /* Write rate limiting */
 
 #define WRL_FACTOR   1000 /* for fixed-point arithmetic */
@@ -132,7 +127,6 @@ void wrl_apply_debit_direct(struct connection *conn);
 void wrl_apply_debit_trans_commit(struct connection *conn);
 
 const char *dump_state_connections(FILE *fp);
-const char *dump_state_special_nodes(FILE *fp);
 
 void read_state_connection(const void *ctx, const void *state);
 
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 316c08b7f7..75748ac109 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -46,13 +46,6 @@ struct watch
 	char *node;
 };
 
-static bool check_special_event(const char *name)
-{
-	assert(name);
-
-	return strstarts(name, "@");
-}
-
 /* Is child a subnode of parent, or equal? */
 static bool is_child(const char *child, const char *parent)
 {
@@ -153,14 +146,8 @@ void fire_watches(struct connection *conn, const void *ctx, const char *name,
 
 	/* Create an event for each watch. */
 	list_for_each_entry(i, &connections, list) {
-		/* introduce/release domain watches */
-		if (check_special_event(name)) {
-			if (!check_perms_special(name, i))
-				continue;
-		} else {
-			if (!watch_permitted(i, ctx, name, node, perms))
-				continue;
-		}
+		if (!watch_permitted(i, ctx, name, node, perms))
+			continue;
 
 		list_for_each_entry(watch, &i->watches, list) {
 			if (exact) {
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:50:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:50:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480329.744701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55s-000427-8v; Wed, 18 Jan 2023 09:50:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480329.744701; Wed, 18 Jan 2023 09:50:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55s-000420-5t; Wed, 18 Jan 2023 09:50:56 +0000
Received: by outflank-mailman (input) for mailman id 480329;
 Wed, 18 Jan 2023 09:50:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI55r-0001BV-1f
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:50:55 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 644e0224-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:49:25 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id F00E7340AE;
 Wed, 18 Jan 2023 09:50:52 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BDE42139D2;
 Wed, 18 Jan 2023 09:50:52 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id KIbuLPzAx2OAQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:50:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 644e0224-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035452; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iWM249T7on3CZ5p3PxYw3ebFU4l9Os88mNOImsi4Puo=;
	b=WyO3MVgn1JrKHPJ3XkDF9V8ghEzFQ3kKZnpgxTFeG/9QpAB9rgmfvu29HDBW4sBAW4+meE
	SH2Ug84IHV1L84ugwHmXTTtYC3BSBvG5loQ4kRWxLkO8OyrHWmBJc6L18x5KY6m4pXfKDX
	+/94+mACf/n3nRAaNefrurXztVA3Hlk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 06/17] tools/xenstore: replace watch->relative_path with a prefix length
Date: Wed, 18 Jan 2023 10:50:05 +0100
Message-Id: <20230118095016.13091-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of storing a pointer to the path which is prepended to
relative paths in struct watch, just use the length of the prepended
path.

Note that the now-removed special case of the relative path being ""
in get_watch_path() can't happen at all.
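The scheme works because a watch registered on a relative path stores the full canonical path together with the length of the implicit prefix (including the separating '/'); recovering the name to report is then a single pointer offset instead of a string walk. A standalone sketch under hypothetical names (the real code keeps this in struct watch and get_watch_path()):

```c
#include <stdio.h>
#include <string.h>

/* Toy version of the prefix_len scheme from the patch. */
struct toy_watch {
	char node[64];			/* full (canonical) path */
	unsigned int prefix_len;	/* bytes to skip for relative watches */
};

static void toy_watch_init(struct toy_watch *w, const char *implicit,
			   const char *path, int relative)
{
	if (relative) {
		/* Registered path is implicit prefix + '/' + relative part. */
		snprintf(w->node, sizeof(w->node), "%s/%s", implicit, path);
		w->prefix_len = strlen(implicit) + 1;	/* prefix plus '/' */
	} else {
		snprintf(w->node, sizeof(w->node), "%s", path);
		w->prefix_len = 0;
	}
}

/* cf. get_watch_path(): just skip the stored prefix. */
static const char *toy_watch_path(const struct toy_watch *w, const char *name)
{
	return name + w->prefix_len;
}
```

Because prefix_len already accounts for the '/', the removed "" special case has no equivalent here: the offset always lands on the first character of the relative part.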

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- don't open code get_watch_path() (Julien Grall)
V3:
- drop needless modification of dump_state_watches() (Julien Grall)
---
 tools/xenstore/xenstored_watch.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 75748ac109..8ad0229df6 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -39,8 +39,8 @@ struct watch
 	/* Current outstanding events applying to this watch. */
 	struct list_head events;
 
-	/* Is this relative to connnection's implicit path? */
-	const char *relative_path;
+	/* Offset into path for skipping prefix (used for relative paths). */
+	unsigned int prefix_len;
 
 	char *token;
 	char *node;
@@ -66,15 +66,7 @@ static bool is_child(const char *child, const char *parent)
 
 static const char *get_watch_path(const struct watch *watch, const char *name)
 {
-	const char *path = name;
-
-	if (watch->relative_path) {
-		path += strlen(watch->relative_path);
-		if (*path == '/') /* Could be "" */
-			path++;
-	}
-
-	return path;
+	return name + watch->prefix_len;
 }
 
 /*
@@ -211,10 +203,7 @@ static struct watch *add_watch(struct connection *conn, char *path, char *token,
 			      no_quota_check))
 		goto nomem;
 
-	if (relative)
-		watch->relative_path = get_implicit_path(conn);
-	else
-		watch->relative_path = NULL;
+	watch->prefix_len = relative ? strlen(get_implicit_path(conn)) + 1 : 0;
 
 	INIT_LIST_HEAD(&watch->events);
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:51:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480333.744712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55y-0004cB-JN; Wed, 18 Jan 2023 09:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480333.744712; Wed, 18 Jan 2023 09:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI55y-0004by-EL; Wed, 18 Jan 2023 09:51:02 +0000
Received: by outflank-mailman (input) for mailman id 480333;
 Wed, 18 Jan 2023 09:51:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI55w-0001BV-Tg
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:01 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 67b0f3cd-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:49:31 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A1F7A20D73;
 Wed, 18 Jan 2023 09:50:58 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 71126139D2;
 Wed, 18 Jan 2023 09:50:58 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id soZMGgLBx2OTQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:50:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67b0f3cd-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035458; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BTN0uBcsLRcGJdi+TNIyWluvGktnXtGubvFcIzIQIWE=;
	b=koTmZVUoTjmvg53iS1FWuWn7DlrVcXkoRm/961MsZHTsaerQZEQVirva3EfsRtOTtWJOIQ
	BmlAgJEgm8cLthCoxJsib76qCNX5CPaXBXMR7AlFCJyasZU4hI9DU4gorKvTYaTcC/YJI/
	c76SG2rz4g18FBps4QR5ZAIvmwNx2QM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 07/17] tools/xenstore: move changed domain handling
Date: Wed, 18 Jan 2023 10:50:06 +0100
Message-Id: <20230118095016.13091-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move all code related to struct changed_domain from
xenstored_transaction.c to xenstored_domain.c.

This will be needed later in order to simplify the accounting data
updates in case of errors during a request.

Split the code to provide a more generic base framework.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- remove unrelated change (Julien Grall)
V3:
- modify acc_add_dom_nbentry() interface (Julien Grall)
---
 tools/xenstore/xenstored_domain.c      | 78 ++++++++++++++++++++++++++
 tools/xenstore/xenstored_domain.h      |  3 +
 tools/xenstore/xenstored_transaction.c | 64 ++-------------------
 3 files changed, 85 insertions(+), 60 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 494694fd30..1d765ceffa 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -91,6 +91,18 @@ struct domain
 	bool wrl_delay_logged;
 };
 
+struct changed_domain
+{
+	/* List of all changed domains. */
+	struct list_head list;
+
+	/* Identifier of the changed domain. */
+	unsigned int domid;
+
+	/* Amount by which this domain's nbentry field has changed. */
+	int nbentry;
+};
+
 static struct hashtable *domhash;
 
 static bool check_indexes(XENSTORE_RING_IDX cons, XENSTORE_RING_IDX prod)
@@ -543,6 +555,72 @@ static struct domain *find_domain_by_domid(unsigned int domid)
 	return (d && d->introduced) ? d : NULL;
 }
 
+int acc_fix_domains(struct list_head *head, bool update)
+{
+	struct changed_domain *cd;
+	int cnt;
+
+	list_for_each_entry(cd, head, list) {
+		cnt = domain_entry_fix(cd->domid, cd->nbentry, update);
+		if (!update) {
+			if (cnt >= quota_nb_entry_per_domain)
+				return ENOSPC;
+			if (cnt < 0)
+				return ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+static struct changed_domain *acc_find_changed_domain(struct list_head *head,
+						      unsigned int domid)
+{
+	struct changed_domain *cd;
+
+	list_for_each_entry(cd, head, list) {
+		if (cd->domid == domid)
+			return cd;
+	}
+
+	return NULL;
+}
+
+static struct changed_domain *acc_get_changed_domain(const void *ctx,
+						     struct list_head *head,
+						     unsigned int domid)
+{
+	struct changed_domain *cd;
+
+	cd = acc_find_changed_domain(head, domid);
+	if (cd)
+		return cd;
+
+	cd = talloc_zero(ctx, struct changed_domain);
+	if (!cd)
+		return NULL;
+
+	cd->domid = domid;
+	list_add_tail(&cd->list, head);
+
+	return cd;
+}
+
+int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
+			unsigned int domid)
+{
+	struct changed_domain *cd;
+
+	cd = acc_get_changed_domain(ctx, head, domid);
+	if (!cd)
+		return 0;
+
+	errno = 0;
+	cd->nbentry += val;
+
+	return cd->nbentry;
+}
+
 static void domain_conn_reset(struct domain *domain)
 {
 	struct connection *conn = domain->conn;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 630641d620..9e20d2b17d 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -98,6 +98,9 @@ void domain_outstanding_dec(struct connection *conn);
 void domain_outstanding_domid_dec(unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
+int acc_fix_domains(struct list_head *head, bool update);
+int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
+			unsigned int domid);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index ac854197ca..c009c67989 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -137,18 +137,6 @@ struct accessed_node
 	bool watch_exact;
 };
 
-struct changed_domain
-{
-	/* List of all changed domains in the context of this transaction. */
-	struct list_head list;
-
-	/* Identifier of the changed domain. */
-	unsigned int domid;
-
-	/* Amount by which this domain's nbentry field has changed. */
-	int nbentry;
-};
-
 struct transaction
 {
 	/* List of all transactions active on this connection. */
@@ -514,24 +502,6 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-static int transaction_fix_domains(struct transaction *trans, bool update)
-{
-	struct changed_domain *d;
-	int cnt;
-
-	list_for_each_entry(d, &trans->changed_domains, list) {
-		cnt = domain_entry_fix(d->domid, d->nbentry, update);
-		if (!update) {
-			if (cnt >= quota_nb_entry_per_domain)
-				return ENOSPC;
-			if (cnt < 0)
-				return ENOMEM;
-		}
-	}
-
-	return 0;
-}
-
 int do_transaction_end(const void *ctx, struct connection *conn,
 		       struct buffered_data *in)
 {
@@ -558,7 +528,7 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 	if (streq(arg, "T")) {
 		if (trans->fail)
 			return ENOMEM;
-		ret = transaction_fix_domains(trans, false);
+		ret = acc_fix_domains(&trans->changed_domains, false);
 		if (ret)
 			return ret;
 		ret = finalize_transaction(conn, trans, &is_corrupt);
@@ -568,7 +538,7 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 		wrl_apply_debit_trans_commit(conn);
 
 		/* fix domain entry for each changed domain */
-		transaction_fix_domains(trans, true);
+		acc_fix_domains(&trans->changed_domains, true);
 
 		if (is_corrupt)
 			corrupt(conn, "transaction inconsistency");
@@ -580,44 +550,18 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 void transaction_entry_inc(struct transaction *trans, unsigned int domid)
 {
-	struct changed_domain *d;
-
-	list_for_each_entry(d, &trans->changed_domains, list)
-		if (d->domid == domid) {
-			d->nbentry++;
-			return;
-		}
-
-	d = talloc(trans, struct changed_domain);
-	if (!d) {
+	if (!acc_add_dom_nbentry(trans, &trans->changed_domains, 1, domid)) {
 		/* Let the transaction fail. */
 		trans->fail = true;
-		return;
 	}
-	d->domid = domid;
-	d->nbentry = 1;
-	list_add_tail(&d->list, &trans->changed_domains);
 }
 
 void transaction_entry_dec(struct transaction *trans, unsigned int domid)
 {
-	struct changed_domain *d;
-
-	list_for_each_entry(d, &trans->changed_domains, list)
-		if (d->domid == domid) {
-			d->nbentry--;
-			return;
-		}
-
-	d = talloc(trans, struct changed_domain);
-	if (!d) {
+	if (!acc_add_dom_nbentry(trans, &trans->changed_domains, -1, domid)) {
 		/* Let the transaction fail. */
 		trans->fail = true;
-		return;
 	}
-	d->domid = domid;
-	d->nbentry = -1;
-	list_add_tail(&d->list, &trans->changed_domains);
 }
 
 void fail_transaction(struct transaction *trans)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480364.744759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A4-0006fe-Ft; Wed, 18 Jan 2023 09:55:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480364.744759; Wed, 18 Jan 2023 09:55:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A4-0006eA-0f; Wed, 18 Jan 2023 09:55:16 +0000
Received: by outflank-mailman (input) for mailman id 480364;
 Wed, 18 Jan 2023 09:55:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI56O-0001v4-24
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:28 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acc8bcbe-9715-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 10:51:26 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A2B9920D6A;
 Wed, 18 Jan 2023 09:51:26 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7587B139D2;
 Wed, 18 Jan 2023 09:51:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id pZtYGx7Bx2PaQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acc8bcbe-9715-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035486; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/4zRIhlp3wD8TB5xvSbPznnaw1mQSoEynNQAxtEv3zI=;
	b=lmzH1f6y07vvYLyy1pjGwj/56F7ZUrMw7O0DXdZCaLMLVUyBNRPk2nxwkCwKgdMRwehcDT
	e9kEyggT7/2A4o4OBi9MmOyt1zH9KrvX/socrcLJ9SRaGOaQb+vYSziABLB+9AfaCUcvQ7
	TiUGHH3OcKSsDei+AFF8IGUfBCg6g0g=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 12/17] tools/xenstore: don't let hashtable_remove() return the removed value
Date: Wed, 18 Jan 2023 10:50:11 +0100
Message-Id: <20230118095016.13091-13-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The value returned by hashtable_remove() is not used anywhere in
Xenstore, and returning it conflicts with hashtables created with the
HASHTABLE_FREE_VALUE flag.

So just drop the return value.

This of course requires freeing the value inside hashtable_remove() if
HASHTABLE_FREE_VALUE was specified, as it would otherwise be leaked.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
V4:
- make return type void (Julien Grall)
---
 tools/xenstore/hashtable.c | 9 ++++-----
 tools/xenstore/hashtable.h | 3 +--
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index 299549c51e..ddca1591a2 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -214,7 +214,7 @@ hashtable_search(struct hashtable *h, void *k)
 }
 
 /*****************************************************************************/
-void * /* returns value associated with key */
+void
 hashtable_remove(struct hashtable *h, void *k)
 {
     /* TODO: consider compacting the table when the load factor drops enough,
@@ -222,7 +222,6 @@ hashtable_remove(struct hashtable *h, void *k)
 
     struct entry *e;
     struct entry **pE;
-    void *v;
     unsigned int hashvalue, index;
 
     hashvalue = hash(h,k);
@@ -236,16 +235,16 @@ hashtable_remove(struct hashtable *h, void *k)
         {
             *pE = e->next;
             h->entrycount--;
-            v = e->v;
             if (h->flags & HASHTABLE_FREE_KEY)
                 free(e->k);
+            if (h->flags & HASHTABLE_FREE_VALUE)
+                free(e->v);
             free(e);
-            return v;
+            return;
         }
         pE = &(e->next);
         e = e->next;
     }
-    return NULL;
 }
 
 /*****************************************************************************/
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index 6d65431f96..780ad3c8f7 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -68,10 +68,9 @@ hashtable_search(struct hashtable *h, void *k);
  * @name        hashtable_remove
  * @param   h   the hashtable to remove the item from
  * @param   k   the key to search for  - does not claim ownership
- * @return      the value associated with the key, or NULL if none found
  */
 
-void * /* returns value */
+void
 hashtable_remove(struct hashtable *h, void *k);
 
 /*****************************************************************************
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480350.744722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A2-000689-4D; Wed, 18 Jan 2023 09:55:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480350.744722; Wed, 18 Jan 2023 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A2-000682-1D; Wed, 18 Jan 2023 09:55:14 +0000
Received: by outflank-mailman (input) for mailman id 480350;
 Wed, 18 Jan 2023 09:55:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI56l-0001BV-3d
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:51 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 85c9b0ae-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:50:21 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 287F020D67;
 Wed, 18 Jan 2023 09:51:49 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EA25C139D2;
 Wed, 18 Jan 2023 09:51:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id YrLJNzTBx2MRRAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85c9b0ae-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035509; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=e87nTBW+uIvj9ME8hZyAeCXoO5ik1/HHW8a5gLJKDEU=;
	b=anz5/SV+JZtyzerukg4Kx4fLPZZfBzKQx081ysmpX0VcN/gnJ141oSn2jKIl7dcRF3Ye0M
	4LeWQizoK/XVEiOwfq9ZcBp2uf/24Kn2M8PF0uY2+XIORVt4aeiOj5sG1K+ZXWBjXO5zWk
	LM8KjRHbZFwZG1ShITT/uZrDKOPduwI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 16/17] tools/xenstore: make output of "xenstore-control help" more pretty
Date: Wed, 18 Jan 2023 10:50:15 +0100
Message-Id: <20230118095016.13091-17-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Using a tab to separate the command from its options in the output of
"xenstore-control help" results in a rather ragged list.

Use a fixed-width field for the command instead.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- new patch
---
 tools/xenstore/xenstored_control.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 000b2bb8c7..cbd62556c3 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -996,7 +996,7 @@ static int do_control_help(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 	for (cmd = 0; cmd < ARRAY_SIZE(cmds); cmd++) {
-		resp = talloc_asprintf_append(resp, "%s\t%s\n",
+		resp = talloc_asprintf_append(resp, "%-15s %s\n",
 					      cmds[cmd].cmd, cmds[cmd].pars);
 		if (!resp)
 			return ENOMEM;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480358.744734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A3-0006KE-1u; Wed, 18 Jan 2023 09:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480358.744734; Wed, 18 Jan 2023 09:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A2-0006Iv-Rd; Wed, 18 Jan 2023 09:55:14 +0000
Received: by outflank-mailman (input) for mailman id 480358;
 Wed, 18 Jan 2023 09:55:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI56E-0001BV-8S
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:18 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 71b3e24a-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:49:47 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 726DE20D6A;
 Wed, 18 Jan 2023 09:51:15 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4413B139D2;
 Wed, 18 Jan 2023 09:51:15 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ol1MDxPBx2O9QwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71b3e24a-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035475; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2l4RL275hmrR3HRBC+1HNH4pI7STyKqFIjdegf4upCU=;
	b=C9tiOcR4l7QScl6Zgbbm6K9Qn84mvRidpF+HZt7+UhiQg61ccnkmL0XY5gLSuCbsUo34sy
	dgn5oGqBVaMgaODtHF41uMmxiYHtA0zG+0cAJjfIV73uUvf5Z+Y5+gRP3E+Cj8j1ua6Der
	0HykYByn0V50ZjromoUS09vK8g7H7+s=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 10/17] tools/xenstore: make domain_is_unprivileged() an inline function
Date: Wed, 18 Jan 2023 10:50:09 +0100
Message-Id: <20230118095016.13091-11-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

clang 14 is complaining about a NULL dereference for constructs like:

  domain_is_unprivileged(conn) ? conn->in : NULL

as it can't know that domain_is_unprivileged(conn) returns false if
conn is NULL.

Fix that by making domain_is_unprivileged() an inline function (and,
related to that, domid_is_unprivileged(), too).

To avoid having to make struct domain public, use conn->id instead
of conn->domain->domid for the test.
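The effect can be illustrated with a minimal stand-in: struct connection is reduced to its id field and dom0_domid/priv_domid are hard-coded here, whereas the real definitions live in the Xenstore headers:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins; in Xenstore these are real globals/structs. */
static const unsigned int dom0_domid = 0;
static const unsigned int priv_domid = 0;

struct connection { unsigned int id; };

static inline bool domid_is_unprivileged(unsigned int domid)
{
    return domid != dom0_domid && domid != priv_domid;
}

/*
 * With the body visible at every call site, the compiler can see the
 * NULL check, so a construct like
 *   domain_is_unprivileged(conn) ? conn->in : NULL
 * is no longer flagged as a possible NULL dereference of conn.
 */
static inline bool domain_is_unprivileged(const struct connection *conn)
{
    return conn && domid_is_unprivileged(conn->id);
}
```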

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.h   | 10 ++++++++++
 tools/xenstore/xenstored_domain.c | 11 -----------
 tools/xenstore/xenstored_domain.h |  2 --
 3 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 62d8ee96bd..6548200d8e 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -318,6 +318,16 @@ void unmap_xenbus(void *interface);
 
 static inline int xenbus_master_domid(void) { return dom0_domid; }
 
+static inline bool domid_is_unprivileged(unsigned int domid)
+{
+	return domid != dom0_domid && domid != priv_domid;
+}
+
+static inline bool domain_is_unprivileged(const struct connection *conn)
+{
+	return conn && domid_is_unprivileged(conn->id);
+}
+
 /* Return the event channel used by xenbus. */
 evtchn_port_t xenbus_evtchn(void);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index a703c0ef47..10880a32d9 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -408,17 +408,6 @@ void handle_event(void)
 		barf_perror("Failed to write to event fd");
 }
 
-static bool domid_is_unprivileged(unsigned int domid)
-{
-	return domid != dom0_domid && domid != priv_domid;
-}
-
-bool domain_is_unprivileged(struct connection *conn)
-{
-	return conn && conn->domain &&
-	       domid_is_unprivileged(conn->domain->domid);
-}
-
 static char *talloc_domain_path(const void *context, unsigned int domid)
 {
 	return talloc_asprintf(context, "/local/domain/%u", domid);
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 1e402f2609..22996e2576 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -59,8 +59,6 @@ void ignore_connection(struct connection *conn, unsigned int err);
 /* Returns the implicit path of a connection (only domains have this) */
 const char *get_implicit_path(const struct connection *conn);
 
-bool domain_is_unprivileged(struct connection *conn);
-
 /* Remove node permissions for no longer existing domains. */
 int domain_adjust_node_perms(struct node *node);
 int domain_alloc_permrefs(struct node_perms *perms);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480362.744741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A3-0006Py-D3; Wed, 18 Jan 2023 09:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480362.744741; Wed, 18 Jan 2023 09:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A3-0006Oz-5r; Wed, 18 Jan 2023 09:55:15 +0000
Received: by outflank-mailman (input) for mailman id 480362;
 Wed, 18 Jan 2023 09:55:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI56p-0001v4-L2
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:55 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bd8568b3-9715-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 10:51:54 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B727B3EAA9;
 Wed, 18 Jan 2023 09:51:54 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8A7C7139D2;
 Wed, 18 Jan 2023 09:51:54 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id BY56IDrBx2MZRAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd8568b3-9715-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035514; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oOWWalZWNgxUmz2EcuOumPLWbSmCnLaSwu6nlgaH5og=;
	b=MnBYQ9rCSeBrpPsHd0awpeK2iE9dO+VVk+UHKc1jbPAbC/tVx88sj6j55NHfvUJA3ae+GF
	3ONWFIJKVj2I9Hl23kCiQCkPfOnYzIOGNiNmXz5stRxUCLr641M+DfciV4Zyhi0WF7/ocZ
	f7DbrcOyCyM5IGAhEBHGjVLB/7QiHfE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 17/17] tools/xenstore: don't allow creating too many nodes in a transaction
Date: Wed, 18 Jan 2023 10:50:16 +0100
Message-Id: <20230118095016.13091-18-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The accounting for the number of nodes of a domain in an active
transaction is not working correctly, as it allows creating an
arbitrary number of nodes. The transaction will eventually fail due
to exceeding the nodes quota, but before the transaction is closed
an unprivileged guest could cause Xenstore to use a lot of memory.
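
The failure mode can be sketched standalone (hypothetical names and
an illustrative quota value, not the actual xenstored code): the
quota check has to consider the nodes already created inside the
open transaction, not only the committed count:

```c
#include <assert.h>

#define QUOTA_NB_ENTRY 1000	/* illustrative per-domain node quota */

/* Committed node count plus the delta accumulated in an open transaction. */
struct domain_acc {
	int nbentry;	/* committed nodes owned by the domain */
	int ta_delta;	/* nodes created (or deleted) in the transaction */
};

/* Effective count the quota check must use while a transaction is open. */
static int effective_nbentry(const struct domain_acc *acc)
{
	return acc->nbentry + acc->ta_delta;
}

/* Create a node inside the transaction: 0 on success, -1 if over quota. */
static int ta_create_node(struct domain_acc *acc)
{
	if (effective_nbentry(acc) >= QUOTA_NB_ENTRY)
		return -1;
	acc->ta_delta++;
	return 0;
}
```

Checking only acc->nbentry here would let a transaction grow without
bound until commit time, which is the memory issue described above.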

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_domain.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 9ef41ede03..7eb9cd077b 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1116,9 +1116,8 @@ int domain_nbentry_fix(unsigned int domid, int num, bool update)
 
 int domain_nbentry(struct connection *conn)
 {
-	return (domain_is_unprivileged(conn))
-		? conn->domain->nbentry
-		: 0;
+	return domain_is_unprivileged(conn)
+	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480356.744728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A2-00069m-IV; Wed, 18 Jan 2023 09:55:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480356.744728; Wed, 18 Jan 2023 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A2-000692-Ao; Wed, 18 Jan 2023 09:55:14 +0000
Received: by outflank-mailman (input) for mailman id 480356;
 Wed, 18 Jan 2023 09:55:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI56Z-0001v4-52
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:39 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b3887dca-9715-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 10:51:38 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id ED75538943;
 Wed, 18 Jan 2023 09:51:37 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AFED5139D2;
 Wed, 18 Jan 2023 09:51:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id LSK7KSnBx2P+QwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3887dca-9715-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035497; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qNOnFlh4hw9p/EOsF+rKAd70EPJwnRC4UmfljQvSTOs=;
	b=m4LynDaHn+WDf75AAVUcSLeNgrxiuV3rEsyqDX6bWY4IqUAihNyqcRiN59Wi50OsO1xplM
	VR2b/shSIMv1GSjKrtreKI2k6HLv0HJyhNMpu/gv9OCbLQ6xgR/KHngMqKjiGADnaW5CRC
	yST9VLDjkP2aC81iRFU/SlQNjuWDMgg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 14/17] tools/xenstore: introduce trace classes
Date: Wed, 18 Jan 2023 10:50:13 +0100
Message-Id: <20230118095016.13091-15-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make the xenstored internal tracing configurable by adding trace
classes which can be switched on and off independently of each other.

Define the following classes:

- obj: creation and deletion of interesting "objects" (watch,
  transaction, connection)
- io: incoming requests and outgoing responses
- wrl: write limiting

By default "obj" and "io" are switched on.

Entries written via trace() will always be printed (if tracing is on
at all).

Add the capability to control the trace settings via the "log"
command and via a new "--trace-control" command line option.

Add a missing trace_create() call for creating a transaction.
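
The name-to-bit mapping used for the switches can be sketched
standalone (simplified sketch: error reporting reduced to -1, no
header split):

```c
#include <assert.h>
#include <string.h>

/* Sorted by bit value: the flag for a switch is (1u << index). */
static const char *const switches[] = { "obj", "io", "wrl", NULL };
static unsigned int flags = 0x3;	/* "obj" and "io" on by default */

/* Parse "+<switch>" / "-<switch>": 0 on success, -1 on unknown input. */
static int set_switch(const char *arg)
{
	int remove;
	unsigned int idx;

	if (arg[0] == '-')
		remove = 1;
	else if (arg[0] == '+')
		remove = 0;
	else
		return -1;

	for (idx = 0; switches[idx]; idx++) {
		if (!strcmp(arg + 1, switches[idx])) {
			if (remove)
				flags &= ~(1u << idx);
			else
				flags |= 1u << idx;
			return 0;
		}
	}

	return -1;
}
```

So e.g. passing "+wrl" turns on the write limiting traces, matching
the "log|+wrl" form of the control command.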

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- keep "log" and "logfile" command names (Julien Grall)
V4:
- add another const for trace_switches[] (Julien Grall)
- add comment (Julien Grall)
---
 docs/misc/xenstore.txt                 | 10 +++--
 tools/xenstore/xenstored_control.c     | 31 +++++++++++++--
 tools/xenstore/xenstored_core.c        | 55 ++++++++++++++++++++++++--
 tools/xenstore/xenstored_core.h        |  8 ++++
 tools/xenstore/xenstored_domain.c      | 27 +++++++------
 tools/xenstore/xenstored_transaction.c |  1 +
 6 files changed, 108 insertions(+), 24 deletions(-)

diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
index 44428ae3a7..8887e7df88 100644
--- a/docs/misc/xenstore.txt
+++ b/docs/misc/xenstore.txt
@@ -409,10 +409,12 @@ CONTROL			<command>|[<parameters>|]
 		error string in case of failure. -s can return "BUSY" in case
 		of an active transaction, a retry of -s can be done in that
 		case.
-	log|on
-		turn xenstore logging on
-	log|off
-		turn xenstore logging off
+	log|[on|off|+<switch>|-<switch>]
+		without parameters: show possible log switches
+		on: turn xenstore logging on
+		off: turn xenstore logging off
+		+<switch>: activates log entries for <switch>,
+		-<switch>: deactivates log entries for <switch>
 	logfile|<file-name>
 		log to specified file
 	memreport|[<file-name>]
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 41e6992591..000b2bb8c7 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -182,6 +182,28 @@ static int do_control_check(const void *ctx, struct connection *conn,
 static int do_control_log(const void *ctx, struct connection *conn,
 			  char **vec, int num)
 {
+	int ret;
+
+	if (num == 0) {
+		char *resp = talloc_asprintf(ctx, "Log switch settings:\n");
+		unsigned int idx;
+		bool on;
+
+		if (!resp)
+			return ENOMEM;
+		for (idx = 0; trace_switches[idx]; idx++) {
+			on = trace_flags & (1u << idx);
+			resp = talloc_asprintf_append(resp, "%-8s: %s\n",
+						      trace_switches[idx],
+						      on ? "on" : "off");
+			if (!resp)
+				return ENOMEM;
+		}
+
+		send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
+		return 0;
+	}
+
 	if (num != 1)
 		return EINVAL;
 
@@ -189,8 +211,11 @@ static int do_control_log(const void *ctx, struct connection *conn,
 		reopen_log();
 	else if (!strcmp(vec[0], "off"))
 		close_log();
-	else
-		return EINVAL;
+	else {
+		ret = set_trace_switch(vec[0]);
+		if (ret)
+			return ret;
+	}
 
 	send_ack(conn, XS_CONTROL);
 	return 0;
@@ -923,7 +948,7 @@ static int do_control_help(const void *, struct connection *, char **, int);
 
 static struct cmd_s cmds[] = {
 	{ "check", do_control_check, "" },
-	{ "log", do_control_log, "on|off" },
+	{ "log", do_control_log, "[on|off|+<switch>|-<switch>]" },
 
 #ifndef NO_LIVE_UPDATE
 	/*
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 17e678256e..c2c2c342de 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -85,6 +85,7 @@ static int reopen_log_pipe[2];
 static int reopen_log_pipe0_pollfd_idx = -1;
 char *tracefile = NULL;
 TDB_CONTEXT *tdb_ctx = NULL;
+unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
 
@@ -139,13 +140,13 @@ static void trace_io(const struct connection *conn,
 	time_t now;
 	struct tm *tm;
 
-	if (tracefd < 0)
+	if (tracefd < 0 || !(trace_flags & TRACE_IO))
 		return;
 
 	now = time(NULL);
 	tm = localtime(&now);
 
-	trace("%s %p %04d%02d%02d %02d:%02d:%02d %s (",
+	trace("io: %s %p %04d%02d%02d %02d:%02d:%02d %s (",
 	      out ? "OUT" : "IN", conn,
 	      tm->tm_year + 1900, tm->tm_mon + 1,
 	      tm->tm_mday, tm->tm_hour, tm->tm_min, tm->tm_sec,
@@ -158,12 +159,14 @@ static void trace_io(const struct connection *conn,
 
 void trace_create(const void *data, const char *type)
 {
-	trace("CREATE %s %p\n", type, data);
+	if (trace_flags & TRACE_OBJ)
+		trace("obj: CREATE %s %p\n", type, data);
 }
 
 void trace_destroy(const void *data, const char *type)
 {
-	trace("DESTROY %s %p\n", type, data);
+	if (trace_flags & TRACE_OBJ)
+		trace("obj: DESTROY %s %p\n", type, data);
 }
 
 /**
@@ -2599,6 +2602,8 @@ static void usage(void)
 "  -N, --no-fork           to request that the daemon does not fork,\n"
 "  -P, --output-pid        to request that the pid of the daemon is output,\n"
 "  -T, --trace-file <file> giving the file for logging, and\n"
+"      --trace-control=+<switch> activate a specific <switch>\n"
+"      --trace-control=-<switch> deactivate a specific <switch>\n"
 "  -E, --entry-nb <nb>     limit the number of entries per domain,\n"
 "  -S, --entry-size <size> limit the size of entry per domain, and\n"
 "  -W, --watch-nb <nb>     limit the number of watches per domain,\n"
@@ -2642,6 +2647,7 @@ static struct option options[] = {
 	{ "output-pid", 0, NULL, 'P' },
 	{ "entry-size", 1, NULL, 'S' },
 	{ "trace-file", 1, NULL, 'T' },
+	{ "trace-control", 1, NULL, 1 },
 	{ "transaction", 1, NULL, 't' },
 	{ "perm-nb", 1, NULL, 'A' },
 	{ "path-max", 1, NULL, 'M' },
@@ -2716,6 +2722,43 @@ static void set_quota(const char *arg, bool soft)
 		barf("unknown quota \"%s\"\n", arg);
 }
 
+/* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
+const char *const trace_switches[] = {
+	"obj", "io", "wrl",
+	NULL
+};
+
+int set_trace_switch(const char *arg)
+{
+	bool remove = (arg[0] == '-');
+	unsigned int idx;
+
+	switch (arg[0]) {
+	case '-':
+		remove = true;
+		break;
+	case '+':
+		remove = false;
+		break;
+	default:
+		return EINVAL;
+	}
+
+	arg++;
+
+	for (idx = 0; trace_switches[idx]; idx++) {
+		if (!strcmp(arg, trace_switches[idx])) {
+			if (remove)
+				trace_flags &= ~(1u << idx);
+			else
+				trace_flags |= 1u << idx;
+			return 0;
+		}
+	}
+
+	return EINVAL;
+}
+
 int main(int argc, char *argv[])
 {
 	int opt;
@@ -2764,6 +2807,10 @@ int main(int argc, char *argv[])
 		case 'T':
 			tracefile = optarg;
 			break;
+		case 1:
+			if (set_trace_switch(optarg))
+				barf("Illegal trace switch \"%s\"\n", optarg);
+			break;
 		case 'I':
 			if (optarg && !strcmp(optarg, "off"))
 				tdb_flags = 0;
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 6548200d8e..c59b06551f 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -294,6 +294,14 @@ extern char **orig_argv;
 extern char *tracefile;
 extern int tracefd;
 
+/* Trace flag values must be kept in sync with trace_switches[] contents. */
+extern unsigned int trace_flags;
+#define TRACE_OBJ	0x00000001
+#define TRACE_IO	0x00000002
+#define TRACE_WRL	0x00000004
+extern const char *const trace_switches[];
+int set_trace_switch(const char *arg);
+
 extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 4f4c0a727c..e9d84d7c24 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1261,6 +1261,12 @@ static long wrl_ndomains;
 static wrl_creditt wrl_reserve; /* [-wrl_config_newdoms_dburst, +_gburst ] */
 static time_t wrl_log_last_warning; /* 0: no previous warning */
 
+#define trace_wrl(...)				\
+do {						\
+	if (trace_flags & TRACE_WRL)		\
+		trace("wrl: " __VA_ARGS__);	\
+} while (0)
+
 void wrl_gettime_now(struct wrl_timestampt *now_wt)
 {
 	struct timespec now_ts;
@@ -1366,12 +1372,9 @@ void wrl_credit_update(struct domain *domain, struct wrl_timestampt now)
 
 	domain->wrl_timestamp = now;
 
-	trace("wrl: dom %4d %6ld  msec  %9ld credit   %9ld reserve"
-	      "  %9ld discard\n",
-	      domain->domid,
-	      msec,
-	      (long)domain->wrl_credit, (long)wrl_reserve,
-	      (long)surplus);
+	trace_wrl("dom %4d %6ld msec %9ld credit %9ld reserve %9ld discard\n",
+		  domain->domid, msec, (long)domain->wrl_credit,
+		  (long)wrl_reserve, (long)surplus);
 }
 
 void wrl_check_timeout(struct domain *domain,
@@ -1403,10 +1406,9 @@ void wrl_check_timeout(struct domain *domain,
 	if (*ptimeout==-1 || wakeup < *ptimeout)
 		*ptimeout = wakeup;
 
-	trace("wrl: domain %u credit=%ld (reserve=%ld) SLEEPING for %d\n",
-	      domain->domid,
-	      (long)domain->wrl_credit, (long)wrl_reserve,
-	      wakeup);
+	trace_wrl("domain %u credit=%ld (reserve=%ld) SLEEPING for %d\n",
+		  domain->domid, (long)domain->wrl_credit, (long)wrl_reserve,
+		  wakeup);
 }
 
 #define WRL_LOG(now, ...) \
@@ -1424,9 +1426,8 @@ void wrl_apply_debit_actual(struct domain *domain)
 	wrl_credit_update(domain, now);
 
 	domain->wrl_credit -= wrl_config_writecost;
-	trace("wrl: domain %u credit=%ld (reserve=%ld)\n",
-	      domain->domid,
-	      (long)domain->wrl_credit, (long)wrl_reserve);
+	trace_wrl("domain %u credit=%ld (reserve=%ld)\n", domain->domid,
+		  (long)domain->wrl_credit, (long)wrl_reserve);
 
 	if (domain->wrl_credit < 0) {
 		if (!domain->wrl_delay_logged) {
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 82e5e66c18..1aa9d3cb3d 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -475,6 +475,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (!trans)
 		return ENOMEM;
 
+	trace_create(trans, "transaction");
 	INIT_LIST_HEAD(&trans->accessed);
 	INIT_LIST_HEAD(&trans->changed_domains);
 	trans->conn = conn;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480363.744749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A4-0006a0-2T; Wed, 18 Jan 2023 09:55:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480363.744749; Wed, 18 Jan 2023 09:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A3-0006Z4-K0; Wed, 18 Jan 2023 09:55:15 +0000
Received: by outflank-mailman (input) for mailman id 480363;
 Wed, 18 Jan 2023 09:55:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI567-0001BV-TY
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:11 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6e60f847-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:49:42 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D8D8A38943;
 Wed, 18 Jan 2023 09:51:09 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AA50B139D2;
 Wed, 18 Jan 2023 09:51:09 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id CGJnKA3Bx2O1QwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e60f847-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035469; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JygJ73dFwifI6ssavbJqcYd36Sd+BW/DYAxzWFIM7/E=;
	b=EWH3SMhILww2gpd7hcJIBv6pnJY5T6DNSm6IluXhNmNvXorOLLMoyhpVgslfea9FYh7kFl
	tkz8S6AnDa8gbXiqSJb8dvX4WFRx0d3YT8UmGkIxAzB0t5X+mccVv4wPDU86m3OWF1Mpkd
	amtqAPLq9mM3QUDZoWYlWoB2+LIEgxs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 09/17] tools/xenstore: replace literal domid 0 with dom0_domid
Date: Wed, 18 Jan 2023 10:50:08 +0100
Message-Id: <20230118095016.13091-10-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are some places left where dom0 is associated with domid 0.

Use dom0_domid instead.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   | 5 +++--
 tools/xenstore/xenstored_domain.c | 8 ++++----
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 4582ee39e1..9cfde76898 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2297,9 +2297,10 @@ static void accept_connection(int sock)
 		return;
 
 	conn = new_connection(&socket_funcs);
-	if (conn)
+	if (conn) {
 		conn->fd = fd;
-	else
+		conn->id = dom0_domid;
+	} else
 		close(fd);
 }
 #endif
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 703ddeec4e..a703c0ef47 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -326,7 +326,7 @@ static int destroy_domain(void *_domain)
 	if (domain->interface) {
 		/* Domain 0 was mapped by dom0_init, so it must be unmapped
 		   using munmap() and not the grant unmap call. */
-		if (domain->domid == 0)
+		if (domain->domid == dom0_domid)
 			unmap_xenbus(domain->interface);
 		else
 			unmap_interface(domain->interface);
@@ -410,7 +410,7 @@ void handle_event(void)
 
 static bool domid_is_unprivileged(unsigned int domid)
 {
-	return domid != 0 && domid != priv_domid;
+	return domid != dom0_domid && domid != priv_domid;
 }
 
 bool domain_is_unprivileged(struct connection *conn)
@@ -798,7 +798,7 @@ static struct domain *onearg_domain(struct connection *conn,
 		return ERR_PTR(-EINVAL);
 
 	domid = atoi(domid_str);
-	if (!domid)
+	if (domid == dom0_domid)
 		return ERR_PTR(-EINVAL);
 
 	return find_connected_domain(domid);
@@ -1004,7 +1004,7 @@ static int chk_domain_generation(unsigned int domid, uint64_t gen)
 {
 	struct domain *d;
 
-	if (!xc_handle && domid == 0)
+	if (!xc_handle && domid == dom0_domid)
 		return 1;
 
 	d = find_domain_struct(domid);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480369.744778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A5-00079l-Nk; Wed, 18 Jan 2023 09:55:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480369.744778; Wed, 18 Jan 2023 09:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A5-00075w-BQ; Wed, 18 Jan 2023 09:55:17 +0000
Received: by outflank-mailman (input) for mailman id 480369;
 Wed, 18 Jan 2023 09:55:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI562-0001BV-VN
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:07 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b087800-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:49:36 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 412E6340AE;
 Wed, 18 Jan 2023 09:51:04 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 15668139D2;
 Wed, 18 Jan 2023 09:51:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id aY17AwjBx2OrQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b087800-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035464; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UVU2p+c8txVUI24FAKxzeXD4L5boxXW/hUAgXWxAShI=;
	b=gsVyF3gQ1Dumq+OU6saj7iIiCskw7e94KAt7KuJDi9Hgf4dTCHSOr26C04/hVZrlwEGMxh
	fdbr3ECHco+17JKZGlOhmOijCHyJWnT3YxvSHd+rRHDtg40cpbf8iPlOPFBZMMpd/vIeYW
	9+Q07yVCRA7LW8gRgYkILBkVys2c+ik=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 08/17] tools/xenstore: change per-domain node accounting interface
Date: Wed, 18 Jan 2023 10:50:07 +0100
Message-Id: <20230118095016.13091-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rework the interface and the internals of the per-domain node
accounting:

- rename the functions to domain_nbentry_*() in order to better match
  the related counter name

- switch the interface from a node pointer to a domid, as all nodes
  have the owner filled in

- use a common internal function for adding a value to the counter

For the transaction case add a helper function to get the list head
of the per-transaction changed domains, allowing the
transaction_entry_*() functions to be eliminated.
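
The "common internal function" pattern can be sketched as follows
(hypothetical simplified types; the real helper additionally handles
quota checks and the per-transaction domain list):

```c
#include <assert.h>

struct dom_acct {
	int nbentry;	/* number of nodes owned by the domain */
};

/* Single internal helper adding a (possibly negative) value. */
static int nbentry_add(struct dom_acct *d, int add)
{
	d->nbentry += add;
	if (d->nbentry < 0)	/* avoid a negative entry count */
		d->nbentry = 0;
	return d->nbentry;
}

/* inc/dec become thin wrappers around the common helper. */
static int nbentry_inc(struct dom_acct *d) { return nbentry_add(d, 1); }
static int nbentry_dec(struct dom_acct *d) { return nbentry_add(d, -1); }
```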

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- add get_node_owner() (Julien Grall)
- rename domain_nbentry_add() parameter (Julien Grall)
- avoid negative entry count (Julien Grall)
V4:
- make get_node_owner() an inline function
---
 tools/xenstore/xenstored_core.c        |  28 +++---
 tools/xenstore/xenstored_core.h        |   6 ++
 tools/xenstore/xenstored_domain.c      | 126 ++++++++++++-------------
 tools/xenstore/xenstored_domain.h      |  10 +-
 tools/xenstore/xenstored_transaction.c |  15 +--
 tools/xenstore/xenstored_transaction.h |   7 +-
 6 files changed, 87 insertions(+), 105 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index c82fb6e3d5..4582ee39e1 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -738,13 +738,13 @@ struct node *read_node(struct connection *conn, const void *ctx,
 
 	/* Permissions are struct xs_permissions. */
 	node->perms.p = hdr->perms;
-	node->acc.domid = node->perms.p[0].id;
+	node->acc.domid = get_node_owner(node);
 	node->acc.memory = data.dsize;
 	if (domain_adjust_node_perms(node))
 		goto error;
 
 	/* If owner is gone reset currently accounted memory size. */
-	if (node->acc.domid != node->perms.p[0].id)
+	if (node->acc.domid != get_node_owner(node))
 		node->acc.memory = 0;
 
 	/* Data is binary blob (usually ascii, no nul). */
@@ -1445,7 +1445,7 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
 static int destroy_node(struct connection *conn, struct node *node)
 {
 	destroy_node_rm(conn, node);
-	domain_entry_dec(conn, node);
+	domain_nbentry_dec(conn, get_node_owner(node));
 
 	/*
 	 * It is not possible to easily revert the changes in a transaction.
@@ -1484,7 +1484,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 	for (i = node; i; i = i->parent) {
 		/* i->parent is set for each new node, so check quota. */
 		if (i->parent &&
-		    domain_entry(conn) >= quota_nb_entry_per_domain) {
+		    domain_nbentry(conn) >= quota_nb_entry_per_domain) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -1495,7 +1495,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 
 		/* Account for new node */
 		if (i->parent) {
-			if (domain_entry_inc(conn, i)) {
+			if (domain_nbentry_inc(conn, get_node_owner(i))) {
 				destroy_node_rm(conn, i);
 				return NULL;
 			}
@@ -1648,7 +1648,7 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 	watch_exact = strcmp(root, node->name);
 	fire_watches(conn, ctx, node->name, node, watch_exact, NULL);
 
-	domain_entry_dec(conn, node);
+	domain_nbentry_dec(conn, get_node_owner(node));
 
 	return WALK_TREE_RM_CHILDENTRY;
 }
@@ -1784,29 +1784,29 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 
 	/* Unprivileged domains may not change the owner. */
 	if (domain_is_unprivileged(conn) &&
-	    perms.p[0].id != node->perms.p[0].id)
+	    perms.p[0].id != get_node_owner(node))
 		return EPERM;
 
 	old_perms = node->perms;
-	domain_entry_dec(conn, node);
+	domain_nbentry_dec(conn, get_node_owner(node));
 	node->perms = perms;
-	if (domain_entry_inc(conn, node)) {
+	if (domain_nbentry_inc(conn, get_node_owner(node))) {
 		node->perms = old_perms;
 		/*
 		 * This should never fail because we had a reference on the
 		 * domain before and Xenstored is single-threaded.
 		 */
-		domain_entry_inc(conn, node);
+		domain_nbentry_inc(conn, get_node_owner(node));
 		return ENOMEM;
 	}
 
 	if (write_node(conn, node, false)) {
 		int saved_errno = errno;
 
-		domain_entry_dec(conn, node);
+		domain_nbentry_dec(conn, get_node_owner(node));
 		node->perms = old_perms;
 		/* No failure possible as above. */
-		domain_entry_inc(conn, node);
+		domain_nbentry_inc(conn, get_node_owner(node));
 
 		errno = saved_errno;
 		return errno;
@@ -2378,7 +2378,7 @@ void setup_structure(bool live_update)
 		manual_node("/tool/xenstored", NULL);
 		manual_node("@releaseDomain", NULL);
 		manual_node("@introduceDomain", NULL);
-		domain_entry_fix(dom0_domid, 5, true);
+		domain_nbentry_fix(dom0_domid, 5, true);
 	}
 
 	check_store();
@@ -3386,7 +3386,7 @@ void read_state_node(const void *ctx, const void *state)
 	if (write_node_raw(NULL, &key, node, true))
 		barf("write node error restoring node");
 
-	if (domain_entry_inc(&conn, node))
+	if (domain_nbentry_inc(&conn, get_node_owner(node)))
 		barf("node accounting error restoring node");
 
 	talloc_free(node);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 89055cbb21..62d8ee96bd 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -232,6 +232,12 @@ char *canonicalize(struct connection *conn, const void *ctx, const char *node);
 unsigned int perm_for_conn(struct connection *conn,
 			   const struct node_perms *perms);
 
+/* Get owner of a node. */
+static inline unsigned int get_node_owner(const struct node *node)
+{
+	return node->perms.p[0].id;
+}
+
 /* Write a node to the tdb data base. */
 int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		   bool no_quota_check);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 1d765ceffa..703ddeec4e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -249,7 +249,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		domain->nbentry--;
 		node->perms.p[0].id = priv_domid;
 		node->acc.memory = 0;
-		domain_entry_inc(NULL, node);
+		domain_nbentry_inc(NULL, priv_domid);
 		if (write_node_raw(NULL, &key, node, true)) {
 			/* That's unfortunate. We only can try to continue. */
 			syslog(LOG_ERR,
@@ -561,7 +561,7 @@ int acc_fix_domains(struct list_head *head, bool update)
 	int cnt;
 
 	list_for_each_entry(cd, head, list) {
-		cnt = domain_entry_fix(cd->domid, cd->nbentry, update);
+		cnt = domain_nbentry_fix(cd->domid, cd->nbentry, update);
 		if (!update) {
 			if (cnt >= quota_nb_entry_per_domain)
 				return ENOSPC;
@@ -606,8 +606,8 @@ static struct changed_domain *acc_get_changed_domain(const void *ctx,
 	return cd;
 }
 
-int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
-			unsigned int domid)
+static int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
+			       unsigned int domid)
 {
 	struct changed_domain *cd;
 
@@ -991,30 +991,6 @@ void domain_deinit(void)
 		xenevtchn_unbind(xce_handle, virq_port);
 }
 
-int domain_entry_inc(struct connection *conn, struct node *node)
-{
-	struct domain *d;
-	unsigned int domid;
-
-	if (!node->perms.p)
-		return 0;
-
-	domid = node->perms.p[0].id;
-
-	if (conn && conn->transaction) {
-		transaction_entry_inc(conn->transaction, domid);
-	} else {
-		d = (conn && domid == conn->id && conn->domain) ? conn->domain
-		    : find_or_alloc_existing_domain(domid);
-		if (d)
-			d->nbentry++;
-		else
-			return ENOMEM;
-	}
-
-	return 0;
-}
-
 /*
  * Check whether a domain was created before or after a specific generation
  * count (used for testing whether a node permission is older than a domain).
@@ -1082,62 +1058,76 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
-void domain_entry_dec(struct connection *conn, struct node *node)
+static int domain_nbentry_add(struct connection *conn, unsigned int domid,
+			      int add, bool no_dom_alloc)
 {
 	struct domain *d;
-	unsigned int domid;
-
-	if (!node->perms.p)
-		return;
+	struct list_head *head;
+	int ret;
 
-	domid = node->perms.p ? node->perms.p[0].id : conn->id;
+	if (conn && domid == conn->id && conn->domain)
+		d = conn->domain;
+	else if (no_dom_alloc) {
+		d = find_domain_struct(domid);
+		if (!d) {
+			errno = ENOENT;
+			corrupt(conn, "Missing domain %u\n", domid);
+			return -1;
+		}
+	} else {
+		d = find_or_alloc_existing_domain(domid);
+		if (!d) {
+			errno = ENOMEM;
+			return -1;
+		}
+	}
 
 	if (conn && conn->transaction) {
-		transaction_entry_dec(conn->transaction, domid);
-	} else {
-		d = (conn && domid == conn->id && conn->domain) ? conn->domain
-		    : find_domain_struct(domid);
-		if (d) {
-			d->nbentry--;
-		} else {
-			errno = ENOENT;
-			corrupt(conn,
-				"Node \"%s\" owned by non-existing domain %u\n",
-				node->name, domid);
+		head = transaction_get_changed_domains(conn->transaction);
+		ret = acc_add_dom_nbentry(conn->transaction, head, add, domid);
+		if (errno) {
+			fail_transaction(conn->transaction);
+			return -1;
 		}
+		/*
+		 * In a transaction when a node is being added/removed AND the
+		 * same node has been added/removed outside the transaction in
+		 * parallel, the resulting number of nodes will be wrong. This
+		 * is no problem, as the transaction will fail due to the
+		 * resulting conflict.
+		 * In the node remove case the resulting number can be even
+		 * negative, which should be avoided.
+		 */
+		return max(d->nbentry + ret, 0);
 	}
+
+	d->nbentry += add;
+
+	return d->nbentry;
 }
 
-int domain_entry_fix(unsigned int domid, int num, bool update)
+int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
-	struct domain *d;
-	int cnt;
+	return (domain_nbentry_add(conn, domid, 1, false) < 0) ? errno : 0;
+}
 
-	if (update) {
-		d = find_domain_struct(domid);
-		assert(d);
-	} else {
-		/*
-		 * We are called first with update == false in order to catch
-		 * any error. So do a possible allocation and check for error
-		 * only in this case, as in the case of update == true nothing
-		 * can go wrong anymore as the allocation already happened.
-		 */
-		d = find_or_alloc_existing_domain(domid);
-		if (!d)
-			return -1;
-	}
+int domain_nbentry_dec(struct connection *conn, unsigned int domid)
+{
+	return (domain_nbentry_add(conn, domid, -1, true) < 0) ? errno : 0;
+}
 
-	cnt = d->nbentry + num;
-	assert(cnt >= 0);
+int domain_nbentry_fix(unsigned int domid, int num, bool update)
+{
+	int ret;
 
-	if (update)
-		d->nbentry = cnt;
+	ret = domain_nbentry_add(NULL, domid, update ? num : 0, update);
+	if (ret < 0 || update)
+		return ret;
 
-	return domid_is_unprivileged(domid) ? cnt : 0;
+	return domid_is_unprivileged(domid) ? ret + num : 0;
 }
 
-int domain_entry(struct connection *conn)
+int domain_nbentry(struct connection *conn)
 {
 	return (domain_is_unprivileged(conn))
 		? conn->domain->nbentry
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 9e20d2b17d..1e402f2609 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -66,10 +66,10 @@ int domain_adjust_node_perms(struct node *node);
 int domain_alloc_permrefs(struct node_perms *perms);
 
 /* Quota manipulation */
-int domain_entry_inc(struct connection *conn, struct node *);
-void domain_entry_dec(struct connection *conn, struct node *);
-int domain_entry_fix(unsigned int domid, int num, bool update);
-int domain_entry(struct connection *conn);
+int domain_nbentry_inc(struct connection *conn, unsigned int domid);
+int domain_nbentry_dec(struct connection *conn, unsigned int domid);
+int domain_nbentry_fix(unsigned int domid, int num, bool update);
+int domain_nbentry(struct connection *conn);
 int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
 
 /*
@@ -99,8 +99,6 @@ void domain_outstanding_domid_dec(unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 int acc_fix_domains(struct list_head *head, bool update);
-int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
-			unsigned int domid);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index c009c67989..82e5e66c18 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -548,20 +548,9 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-void transaction_entry_inc(struct transaction *trans, unsigned int domid)
+struct list_head *transaction_get_changed_domains(struct transaction *trans)
 {
-	if (!acc_add_dom_nbentry(trans, &trans->changed_domains, 1, domid)) {
-		/* Let the transaction fail. */
-		trans->fail = true;
-	}
-}
-
-void transaction_entry_dec(struct transaction *trans, unsigned int domid)
-{
-	if (!acc_add_dom_nbentry(trans, &trans->changed_domains, -1, domid)) {
-		/* Let the transaction fail. */
-		trans->fail = true;
-	}
+	return &trans->changed_domains;
 }
 
 void fail_transaction(struct transaction *trans)
diff --git a/tools/xenstore/xenstored_transaction.h b/tools/xenstore/xenstored_transaction.h
index 3417303f94..b6f8cb7d0a 100644
--- a/tools/xenstore/xenstored_transaction.h
+++ b/tools/xenstore/xenstored_transaction.h
@@ -36,10 +36,6 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 struct transaction *transaction_lookup(struct connection *conn, uint32_t id);
 
-/* inc/dec entry number local to trans while changing a node */
-void transaction_entry_inc(struct transaction *trans, unsigned int domid);
-void transaction_entry_dec(struct transaction *trans, unsigned int domid);
-
 /* This node was accessed. */
 int __must_check access_node(struct connection *conn, struct node *node,
                              enum node_access_type type, TDB_DATA *key);
@@ -54,6 +50,9 @@ void transaction_prepend(struct connection *conn, const char *name,
 /* Mark the transaction as failed. This will prevent it to be committed. */
 void fail_transaction(struct transaction *trans);
 
+/* Get the list head of the changed domains. */
+struct list_head *transaction_get_changed_domains(struct transaction *trans);
+
 void conn_delete_all_transactions(struct connection *conn);
 int check_transactions(struct hashtable *hash);
 
-- 
2.35.3
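
[Archive note] The transaction path of the new domain_nbentry_add() above clamps the projected per-domain node count at zero, since a parallel removal outside the transaction could otherwise make it negative. A minimal standalone sketch of that clamping (simplified; the real helper also handles domain lookup, allocation, and the changed-domains list, and `nbentry_projected` is a hypothetical name):

```c
#include <assert.h>

/* Simplified stand-in for struct domain: only the entry counter. */
struct domain {
	int nbentry;
};

/*
 * Project the node count after applying a delta, clamped at zero, as
 * domain_nbentry_add() does for the transaction case where a parallel
 * node removal could otherwise yield a negative count (the conflicting
 * transaction will fail anyway, so the exact value does not matter).
 */
static int nbentry_projected(const struct domain *d, int delta)
{
	int n = d->nbentry + delta;

	return n < 0 ? 0 : n;
}
```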



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480367.744766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A5-0006tg-2l; Wed, 18 Jan 2023 09:55:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480367.744766; Wed, 18 Jan 2023 09:55:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A4-0006oW-JJ; Wed, 18 Jan 2023 09:55:16 +0000
Received: by outflank-mailman (input) for mailman id 480367;
 Wed, 18 Jan 2023 09:55:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI56J-0001BV-1g
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:23 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 750c1099-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:49:53 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 129CB38943;
 Wed, 18 Jan 2023 09:51:21 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D8982139D2;
 Wed, 18 Jan 2023 09:51:20 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id MpKXMxjBx2PGQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 750c1099-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035481; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2z17Fy8IKw0wJOQh7mD0U1wwu980af840nk2P0cdMiQ=;
	b=Ogm0rr4+ZBh+pjO1bVJmCsaFnBqSuWQs7+Uecg3aCdeRbDOj65q3gvs0rU1mpRhJTi8yUq
	sOdFOEONydKLBYW4UPh2WCSgGKlcaihgIMtRjh3kFnxN+RaleDumdbTW/d4YkziP0iTKa5
	IEb3YRWlgRHxCRhNJb/E+7bCPQlB/Hs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 11/17] tools/xenstore: let chk_domain_generation() return a bool
Date: Wed, 18 Jan 2023 10:50:10 +0100
Message-Id: <20230118095016.13091-12-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of returning 0 or 1, let chk_domain_generation() return a
boolean value.

Simplify the only caller by removing the ret variable.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_domain.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 10880a32d9..391395060e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -985,20 +985,20 @@ void domain_deinit(void)
  * count (used for testing whether a node permission is older than a domain).
  *
  * Return values:
- *  0: domain has higher generation count (it is younger than a node with the
- *     given count), or domain isn't existing any longer
- *  1: domain is older than the node
+ *  false: domain has higher generation count (it is younger than a node with
+ *     the given count), or domain isn't existing any longer
+ *  true: domain is older than the node
  */
-static int chk_domain_generation(unsigned int domid, uint64_t gen)
+static bool chk_domain_generation(unsigned int domid, uint64_t gen)
 {
 	struct domain *d;
 
 	if (!xc_handle && domid == dom0_domid)
-		return 1;
+		return true;
 
 	d = find_domain_struct(domid);
 
-	return (d && d->generation <= gen) ? 1 : 0;
+	return d && d->generation <= gen;
 }
 
 /*
@@ -1033,14 +1033,12 @@ int domain_alloc_permrefs(struct node_perms *perms)
 int domain_adjust_node_perms(struct node *node)
 {
 	unsigned int i;
-	int ret;
 
 	for (i = 1; i < node->perms.num; i++) {
 		if (node->perms.p[i].perms & XS_PERM_IGNORE)
 			continue;
-		ret = chk_domain_generation(node->perms.p[i].id,
-					    node->generation);
-		if (!ret)
+		if (!chk_domain_generation(node->perms.p[i].id,
+					   node->generation))
 			node->perms.p[i].perms |= XS_PERM_IGNORE;
 	}
 
-- 
2.35.3
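
[Archive note] The predicate being converted above can be modeled in isolation: a domain is "older than the node" if it still exists and its generation count does not exceed the node's. A minimal sketch under those assumptions (`domain_is_older` is a hypothetical name; the real chk_domain_generation() also special-cases dom0 when no xc_handle is open):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct domain: only the generation count. */
struct domain {
	uint64_t generation;
};

/*
 * Return true if the domain existed at or before the given node
 * generation, i.e. the node's permission entry may still be valid.
 * A NULL domain (no longer existing) is never "older".
 */
static bool domain_is_older(const struct domain *d, uint64_t node_gen)
{
	return d && d->generation <= node_gen;
}
```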



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480370.744798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A7-0007jn-EI; Wed, 18 Jan 2023 09:55:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480370.744798; Wed, 18 Jan 2023 09:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A6-0007fy-W8; Wed, 18 Jan 2023 09:55:19 +0000
Received: by outflank-mailman (input) for mailman id 480370;
 Wed, 18 Jan 2023 09:55:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI56f-0001BV-Nf
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:45 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 82748998-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:50:15 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8AB523EA5D;
 Wed, 18 Jan 2023 09:51:43 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5E712139D2;
 Wed, 18 Jan 2023 09:51:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id NqC3FS/Bx2MGRAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82748998-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035503; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vOAF1RrQQxMUan4XdO2o/kCgbt3Qc3Bo/f4cCx9Mp4Q=;
	b=eUJYjJ+Fa5Sx/aSQBZ/PxvCkxVZxCoA6jIIAnm2Bo016k16GKdty6YT/wStiFHeNvxv4eU
	YfqyRDxNtsKOP1so5eAF3kztNIqrV6YHFxmK2ERMlUO5vUoqrK2VmbWii9Mt6Wk58hXxuJ
	XExRGrOq0DI/bFuE/NNgiX0n4mGfbs4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 15/17] tools/xenstore: let check_store() check the accounting data
Date: Wed, 18 Jan 2023 10:50:14 +0100
Message-Id: <20230118095016.13091-16-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today check_store() only tests the correctness of the node tree.

Add verification of the accounting data (number of nodes) and correct
the data if it is wrong.

Do the initial check_store() call only after the Xenstore entries of a
live update have been read, in order to make sure the accounting data
is correct after a live update.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- use get_node_owner() (Julien Grall)
- rename callback (Julien Grall)
---
 tools/xenstore/xenstored_core.c   | 62 ++++++++++++++++------
 tools/xenstore/xenstored_domain.c | 86 +++++++++++++++++++++++++++++++
 tools/xenstore/xenstored_domain.h |  4 ++
 3 files changed, 137 insertions(+), 15 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index c2c2c342de..27dfbe9593 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2384,8 +2384,6 @@ void setup_structure(bool live_update)
 		manual_node("@introduceDomain", NULL);
 		domain_nbentry_fix(dom0_domid, 5, true);
 	}
-
-	check_store();
 }
 
 static unsigned int hash_from_key_fn(void *k)
@@ -2428,20 +2426,28 @@ int remember_string(struct hashtable *hash, const char *str)
  * As we go, we record each node in the given reachable hashtable.  These
  * entries will be used later in clean_store.
  */
+
+struct check_store_data {
+	struct hashtable *reachable;
+	struct hashtable *domains;
+};
+
 static int check_store_step(const void *ctx, struct connection *conn,
 			    struct node *node, void *arg)
 {
-	struct hashtable *reachable = arg;
+	struct check_store_data *data = arg;
 
-	if (hashtable_search(reachable, (void *)node->name)) {
+	if (hashtable_search(data->reachable, (void *)node->name)) {
 		log("check_store: '%s' is duplicated!", node->name);
 		return recovery ? WALK_TREE_RM_CHILDENTRY
 				: WALK_TREE_SKIP_CHILDREN;
 	}
 
-	if (!remember_string(reachable, node->name))
+	if (!remember_string(data->reachable, node->name))
 		return WALK_TREE_ERROR_STOP;
 
+	domain_check_acc_add(node, data->domains);
+
 	return WALK_TREE_OK;
 }
 
@@ -2491,37 +2497,61 @@ static int clean_store_(TDB_CONTEXT *tdb, TDB_DATA key, TDB_DATA val,
  * Given the list of reachable nodes, iterate over the whole store, and
  * remove any that were not reached.
  */
-static void clean_store(struct hashtable *reachable)
+static void clean_store(struct check_store_data *data)
 {
-	tdb_traverse(tdb_ctx, &clean_store_, reachable);
+	tdb_traverse(tdb_ctx, &clean_store_, data->reachable);
+	domain_check_acc(data->domains);
 }
 
+int check_store_path(const char *name, struct check_store_data *data)
+{
+	struct node *node;
+
+	node = read_node(NULL, NULL, name);
+	if (!node) {
+		log("check_store: error %d reading special node '%s'", errno,
+		    name);
+		return errno;
+	}
+
+	return check_store_step(NULL, NULL, node, data);
+}
 
 void check_store(void)
 {
-	struct hashtable *reachable;
 	struct walk_funcs walkfuncs = {
 		.enter = check_store_step,
 		.enoent = check_store_enoent,
 	};
+	struct check_store_data data;
 
 	/* Don't free values (they are all void *1) */
-	reachable = create_hashtable(NULL, 16, hash_from_key_fn, keys_equal_fn,
-				     HASHTABLE_FREE_KEY);
-	if (!reachable) {
+	data.reachable = create_hashtable(NULL, 16, hash_from_key_fn,
+					  keys_equal_fn, HASHTABLE_FREE_KEY);
+	if (!data.reachable) {
 		log("check_store: ENOMEM");
 		return;
 	}
 
+	data.domains = domain_check_acc_init();
+	if (!data.domains) {
+		log("check_store: ENOMEM");
+		goto out_hash;
+	}
+
 	log("Checking store ...");
-	if (walk_node_tree(NULL, NULL, "/", &walkfuncs, reachable)) {
+	if (walk_node_tree(NULL, NULL, "/", &walkfuncs, &data)) {
 		if (errno == ENOMEM)
 			log("check_store: ENOMEM");
-	} else if (!check_transactions(reachable))
-		clean_store(reachable);
+	} else if (!check_store_path("@introduceDomain", &data) &&
+		   !check_store_path("@releaseDomain", &data) &&
+		   !check_transactions(data.reachable))
+		clean_store(&data);
 	log("Checking store complete.");
 
-	hashtable_destroy(reachable);
+	hashtable_destroy(data.domains);
+ out_hash:
+	hashtable_destroy(data.reachable);
 }
 
 
@@ -2920,6 +2950,8 @@ int main(int argc, char *argv[])
 		lu_read_state();
 #endif
 
+	check_store();
+
 	/* Get ready to listen to the tools. */
 	initialize_fds(&sock_pollfd_idx, &timeout);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index e9d84d7c24..9ef41ede03 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1574,6 +1574,92 @@ void read_state_connection(const void *ctx, const void *state)
 	read_state_buffered_data(ctx, conn, sc);
 }
 
+struct domain_acc {
+	unsigned int domid;
+	int nodes;
+};
+
+static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
+{
+	struct hashtable *domains = arg;
+	struct domain *d = v;
+	struct domain_acc *dom;
+
+	dom = talloc_zero(NULL, struct domain_acc);
+	if (!dom)
+		return -1;
+
+	dom->domid = d->domid;
+	/*
+	 * Set the initial value to the negative one of the current domain.
+	 * If everything is correct incrementing the value for each node will
+	 * result in dom->nodes being 0 at the end.
+	 */
+	dom->nodes = -d->nbentry;
+
+	if (!hashtable_insert(domains, &dom->domid, dom)) {
+		talloc_free(dom);
+		return -1;
+	}
+
+	return 0;
+}
+
+struct hashtable *domain_check_acc_init(void)
+{
+	struct hashtable *domains;
+
+	domains = create_hashtable(NULL, 8, domhash_fn, domeq_fn,
+				   HASHTABLE_FREE_VALUE);
+	if (!domains)
+		return NULL;
+
+	if (hashtable_iterate(domhash, domain_check_acc_init_sub, domains)) {
+		hashtable_destroy(domains);
+		return NULL;
+	}
+
+	return domains;
+}
+
+void domain_check_acc_add(const struct node *node, struct hashtable *domains)
+{
+	struct domain_acc *dom;
+	unsigned int domid;
+
+	domid = get_node_owner(node);
+	dom = hashtable_search(domains, &domid);
+	if (!dom)
+		log("Node %s owned by unknown domain %u", node->name, domid);
+	else
+		dom->nodes++;
+}
+
+static int domain_check_acc_cb(const void *k, void *v, void *arg)
+{
+	struct domain_acc *dom = v;
+	struct domain *d;
+
+	if (!dom->nodes)
+		return 0;
+
+	log("Correct accounting data for domain %u: nodes are %d off",
+	    dom->domid, dom->nodes);
+
+	d = find_domain_struct(dom->domid);
+	if (!d)
+		return 0;
+
+	d->nbentry += dom->nodes;
+
+	return 0;
+}
+
+void domain_check_acc(struct hashtable *domains)
+{
+	hashtable_iterate(domains, domain_check_acc_cb, NULL);
+}
+
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 22996e2576..dc4660861e 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -129,4 +129,8 @@ const char *dump_state_connections(FILE *fp);
 
 void read_state_connection(const void *ctx, const void *state);
 
+struct hashtable *domain_check_acc_init(void);
+void domain_check_acc_add(const struct node *node, struct hashtable *domains);
+void domain_check_acc(struct hashtable *domains);
+
 #endif /* _XENSTORED_DOMAIN_H */
-- 
2.35.3
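
[Archive note] The accounting check above seeds each domain's counter with the negated recorded node count, increments it once per owned node during the tree walk, and treats any non-zero remainder as the correction to apply. A minimal sketch of that pattern, assuming plain arrays indexed by domid instead of the hashtables used by xenstored (all function names here are hypothetical):

```c
#include <assert.h>

/*
 * Seed each per-domain counter with the negated recorded node count,
 * mirroring domain_check_acc_init_sub() setting dom->nodes = -d->nbentry.
 */
static void acc_init(int acc[], const int nbentry[], int ndoms)
{
	for (int i = 0; i < ndoms; i++)
		acc[i] = -nbentry[i];
}

/*
 * Walk the node owners, incrementing the owner's counter per node,
 * as domain_check_acc_add() does for each reachable node.
 */
static void acc_count_nodes(int acc[], const int owners[], int nnodes)
{
	for (int i = 0; i < nnodes; i++)
		acc[owners[i]]++;
}

/*
 * After the walk a zero counter means the recorded count was correct;
 * a non-zero counter is the offset to add to the recorded count,
 * as domain_check_acc_cb() does with d->nbentry += dom->nodes.
 */
static int acc_correction(const int acc[], int domid)
{
	return acc[domid];
}
```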



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480371.744806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A8-0007sk-46; Wed, 18 Jan 2023 09:55:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480371.744806; Wed, 18 Jan 2023 09:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5A7-0007p7-HW; Wed, 18 Jan 2023 09:55:19 +0000
Received: by outflank-mailman (input) for mailman id 480371;
 Wed, 18 Jan 2023 09:55:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov7m=5P=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pI56U-0001BV-Hs
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:51:34 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7bb8bd32-9715-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:50:04 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 42FE020D6A;
 Wed, 18 Jan 2023 09:51:32 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 111C5139D2;
 Wed, 18 Jan 2023 09:51:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Um3AAiTBx2PtQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 18 Jan 2023 09:51:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bb8bd32-9715-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674035492; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9xr3XDR0lnAcLZVwe7SbDNgcb70DTfqFJY2CcMlXwhM=;
	b=nGd7aTH7CkPAeVkvRL8PVcp0qRzINcHi8y3X0BL1fF4/E455VF1rqpsgjTg4vFU/Tgn3mU
	TrraDprG29Pf39R9TsK4xqtyUxxcZ6Mh7oC+1nMqb76SuCoZIGYUOpp+PJt0How0OjAcW3
	zK9cu2Tb8lsMob+SF7rFx6aTtMXLJDE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 13/17] tools/xenstore: switch hashtable to use the talloc framework
Date: Wed, 18 Jan 2023 10:50:12 +0100
Message-Id: <20230118095016.13091-14-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
References: <20230118095016.13091-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using malloc() and friends, let the hashtable implementation
use the talloc framework.

This is more consistent with the rest of xenstored, and it allows
tracking memory usage via "xenstore-control memreport".

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V3:
- make key and value children of element (if flagged)
---
 tools/xenstore/hashtable.c        | 98 +++++++++++--------------------
 tools/xenstore/hashtable.h        |  3 +-
 tools/xenstore/xenstored_core.c   |  5 +-
 tools/xenstore/xenstored_domain.c |  2 +-
 4 files changed, 38 insertions(+), 70 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index ddca1591a2..30eb9f21d2 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -6,6 +6,8 @@
 #include <string.h>
 #include <math.h>
 #include <stdint.h>
+#include <stdarg.h>
+#include "talloc.h"
 
 struct entry
 {
@@ -50,7 +52,7 @@ indexFor(unsigned int tablelength, unsigned int hashvalue) {
 
 /*****************************************************************************/
 struct hashtable *
-create_hashtable(unsigned int minsize,
+create_hashtable(const void *ctx, unsigned int minsize,
                  unsigned int (*hashf) (void*),
                  int (*eqf) (void*,void*),
                  unsigned int flags)
@@ -66,10 +68,10 @@ create_hashtable(unsigned int minsize,
         if (primes[pindex] > minsize) { size = primes[pindex]; break; }
     }
 
-    h = (struct hashtable *)calloc(1, sizeof(struct hashtable));
+    h = talloc_zero(ctx, struct hashtable);
     if (NULL == h)
         goto err0;
-    h->table = (struct entry **)calloc(size, sizeof(struct entry *));
+    h->table = talloc_zero_array(h, struct entry *, size);
     if (NULL == h->table)
         goto err1;
 
@@ -83,7 +85,7 @@ create_hashtable(unsigned int minsize,
     return h;
 
 err1:
-   free(h);
+   talloc_free(h);
 err0:
    return NULL;
 }
@@ -115,47 +117,32 @@ hashtable_expand(struct hashtable *h)
     if (h->primeindex == (prime_table_length - 1)) return 0;
     newsize = primes[++(h->primeindex)];
 
-    newtable = (struct entry **)calloc(newsize, sizeof(struct entry*));
-    if (NULL != newtable)
+    newtable = talloc_realloc(h, h->table, struct entry *, newsize);
+    if (!newtable)
     {
-        /* This algorithm is not 'stable'. ie. it reverses the list
-         * when it transfers entries between the tables */
-        for (i = 0; i < h->tablelength; i++) {
-            while (NULL != (e = h->table[i])) {
-                h->table[i] = e->next;
-                index = indexFor(newsize,e->h);
+        h->primeindex--;
+        return 0;
+    }
+
+    h->table = newtable;
+    memset(newtable + h->tablelength, 0,
+           (newsize - h->tablelength) * sizeof(*newtable));
+    for (i = 0; i < h->tablelength; i++) {
+        for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
+            index = indexFor(newsize, e->h);
+            if (index == i)
+            {
+                pE = &(e->next);
+            }
+            else
+            {
+                *pE = e->next;
                 e->next = newtable[index];
                 newtable[index] = e;
             }
         }
-        free(h->table);
-        h->table = newtable;
-    }
-    /* Plan B: realloc instead */
-    else 
-    {
-        newtable = (struct entry **)
-                   realloc(h->table, newsize * sizeof(struct entry *));
-        if (NULL == newtable) { (h->primeindex)--; return 0; }
-        h->table = newtable;
-        memset(newtable + h->tablelength, 0,
-               (newsize - h->tablelength) * sizeof(*newtable));
-        for (i = 0; i < h->tablelength; i++) {
-            for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
-                index = indexFor(newsize,e->h);
-                if (index == i)
-                {
-                    pE = &(e->next);
-                }
-                else
-                {
-                    *pE = e->next;
-                    e->next = newtable[index];
-                    newtable[index] = e;
-                }
-            }
-        }
     }
+
     h->tablelength = newsize;
     h->loadlimit   = (unsigned int)
         (((uint64_t)newsize * max_load_factor) / 100);
@@ -184,12 +171,16 @@ hashtable_insert(struct hashtable *h, void *k, void *v)
          * element may be ok. Next time we insert, we'll try expanding again.*/
         hashtable_expand(h);
     }
-    e = (struct entry *)calloc(1, sizeof(struct entry));
+    e = talloc_zero(h, struct entry);
     if (NULL == e) { --(h->entrycount); return 0; } /*oom*/
     e->h = hash(h,k);
     index = indexFor(h->tablelength,e->h);
     e->k = k;
+    if (h->flags & HASHTABLE_FREE_KEY)
+        talloc_steal(e, k);
     e->v = v;
+    if (h->flags & HASHTABLE_FREE_VALUE)
+        talloc_steal(e, v);
     e->next = h->table[index];
     h->table[index] = e;
     return -1;
@@ -235,11 +226,7 @@ hashtable_remove(struct hashtable *h, void *k)
         {
             *pE = e->next;
             h->entrycount--;
-            if (h->flags & HASHTABLE_FREE_KEY)
-                free(e->k);
-            if (h->flags & HASHTABLE_FREE_VALUE)
-                free(e->v);
-            free(e);
+            talloc_free(e);
             return;
         }
         pE = &(e->next);
@@ -278,26 +265,7 @@ hashtable_iterate(struct hashtable *h,
 void
 hashtable_destroy(struct hashtable *h)
 {
-    unsigned int i;
-    struct entry *e, *f;
-    struct entry **table = h->table;
-
-    for (i = 0; i < h->tablelength; i++)
-    {
-        e = table[i];
-        while (NULL != e)
-        {
-            f = e;
-            e = e->next;
-            if (h->flags & HASHTABLE_FREE_KEY)
-                free(f->k);
-            if (h->flags & HASHTABLE_FREE_VALUE)
-                free(f->v);
-            free(f);
-        }
-    }
-    free(h->table);
-    free(h);
+    talloc_free(h);
 }
 
 /*
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index 780ad3c8f7..4e2823134e 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -9,6 +9,7 @@ struct hashtable;
  * create_hashtable
    
  * @name                    create_hashtable
+ * @param   ctx             talloc context to use for allocations
  * @param   minsize         minimum initial size of hashtable
  * @param   hashfunction    function for hashing keys
  * @param   key_eq_fn       function for determining key equality
@@ -22,7 +23,7 @@ struct hashtable;
 #define HASHTABLE_FREE_KEY   (1U << 1)
 
 struct hashtable *
-create_hashtable(unsigned int minsize,
+create_hashtable(const void *ctx, unsigned int minsize,
                  unsigned int (*hashfunction) (void*),
                  int (*key_eq_fn) (void*,void*),
                  unsigned int flags
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 9cfde76898..17e678256e 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2405,11 +2405,10 @@ static int keys_equal_fn(void *key1, void *key2)
 
 int remember_string(struct hashtable *hash, const char *str)
 {
-	char *k = malloc(strlen(str) + 1);
+	char *k = talloc_strdup(NULL, str);
 
 	if (!k)
 		return 0;
-	strcpy(k, str);
 	return hashtable_insert(hash, k, (void *)1);
 }
 
@@ -2504,7 +2503,7 @@ void check_store(void)
 	};
 
 	/* Don't free values (they are all void *1) */
-	reachable = create_hashtable(16, hash_from_key_fn, keys_equal_fn,
+	reachable = create_hashtable(NULL, 16, hash_from_key_fn, keys_equal_fn,
 				     HASHTABLE_FREE_KEY);
 	if (!reachable) {
 		log("check_store: ENOMEM");
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 391395060e..4f4c0a727c 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -931,7 +931,7 @@ void domain_init(int evtfd)
 	int rc;
 
 	/* Start with a random rather low domain count for the hashtable. */
-	domhash = create_hashtable(8, domhash_fn, domeq_fn, 0);
+	domhash = create_hashtable(NULL, 8, domhash_fn, domeq_fn, 0);
 	if (!domhash)
 		barf_perror("Failed to allocate domain hashtable");
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 09:55:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 09:55:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480373.744833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5AF-0000sj-J7; Wed, 18 Jan 2023 09:55:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480373.744833; Wed, 18 Jan 2023 09:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5AF-0000rw-CO; Wed, 18 Jan 2023 09:55:27 +0000
Received: by outflank-mailman (input) for mailman id 480373;
 Wed, 18 Jan 2023 09:55:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI5AE-0000iQ-C6
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:55:26 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2052.outbound.protection.outlook.com [40.107.20.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 05ab9339-9716-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 10:53:56 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7977.eurprd04.prod.outlook.com (2603:10a6:10:1ed::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Wed, 18 Jan
 2023 09:55:22 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 09:55:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05ab9339-9716-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <24a2f51b-e69d-7a44-5239-79f5f526ef01@suse.com>
Date: Wed, 18 Jan 2023 10:55:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
 <ed4d8d85-2ba5-74c1-7c65-0ae65bf0ee06@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ed4d8d85-2ba5-74c1-7c65-0ae65bf0ee06@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FRYP281CA0010.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::20)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0

On 17.01.2023 23:04, Andrew Cooper wrote:
> On 19/10/2022 8:43 am, Jan Beulich wrote:
>> The registration by virtual/linear address has downsides: At least on
>> x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
>> PV domains the areas are inaccessible (and hence cannot be updated by
>> Xen) when in guest-user mode.
> 
> They're also inaccessible in HVM guests (x86 and ARM) when Meltdown
> mitigations are in place.

I've added this explicitly, but ...

> And let's not get started on the multitude of layering violations that is
> guest_memory_policy() for nested virt.  In fact, prohibiting any form of
> map-by-va is a prerequisite to any rational attempt to make nested virt work.
> 
> (In fact, that infrastructure needs purging before any other
> architecture picks up stubs too.)
> 
> They're also inaccessible in general because no architecture has
> hypervisor privilege in a normal user/supervisor split, and you don't
> know whether the mapping is over supervisor or user mapping, and
> settings like SMAP/PAN can cause the pagewalk to fail even when the
> mapping is in place.

... I'm now merely saying that there are yet more reasons, rather than
trying to enumerate them all.

>> In preparation of the introduction of new vCPU operations allowing to
>> register the respective areas (one of the two is x86-specific) by
>> guest-physical address, flesh out the map/unmap functions.
>>
>> Noteworthy differences from map_vcpu_info():
>> - areas can be registered more than once (and de-registered),
> 
> When register by GFN is available, there is never a good reason to
> register the same area twice.

Why not? Why shouldn't different entities be permitted to register their
areas, one after the other? This at the very least requires a way to
de-register.

> The guest maps one MMIO-like region, and then constructs all the regular
> virtual addresses mapping it (or not) that it wants.
> 
> This API is new, so we can enforce sane behaviour from the outset.  In
> particular, it will help with ...
> 
>> - remote vCPU-s are paused rather than checked for being down (which in
>>   principle can change right after the check),
>> - the domain lock is taken for a much smaller region.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> RFC: By using global domain page mappings the demand on the underlying
>>      VA range may increase significantly. I did consider to use per-
>>      domain mappings instead, but they exist for x86 only. Of course we
>>      could have arch_{,un}map_guest_area() aliasing global domain page
>>      mapping functions on Arm and using per-domain mappings on x86. Yet
>>      then again map_vcpu_info() doesn't do so either (albeit that's
>>      likely to be converted subsequently to use map_vcpu_area() anyway).
> 
> ... this by providing a bound on the amount of vmap() space that can be consumed.

I'm afraid I don't understand. When re-registering a different area, the
earlier one will be unmapped. The consumption of vmap space cannot grow
(or else we'd have a resource leak and hence an XSA).

>> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>>      like map_vcpu_info() - solely relying on the type ref acquisition.
>>      Checking for p2m_ram_rw alone would be wrong, as at least
>>      p2m_ram_logdirty ought to also be okay to use here (and in similar
>>      cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>>      used here (like altp2m_vcpu_enable_ve() does) as well as in
>>      map_vcpu_info(), yet then again the P2M type is stale by the time
>>      it is being looked at anyway without the P2M lock held.
> 
> Again, another error caused by Xen not knowing the guest physical
> address layout.  These mappings should be restricted to just RAM regions
> and I think we want to enforce that right from the outset.

Meaning what exactly in terms of action for me to take? As said, checking
the P2M type is pointless. So without you being more explicit, all I can
take your reply for is merely a comment, with no action on my part (not
even to remove this RFC remark).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 10:00:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 10:00:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480415.744845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5Ee-00044q-8M; Wed, 18 Jan 2023 10:00:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480415.744845; Wed, 18 Jan 2023 10:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5Ee-00044j-3S; Wed, 18 Jan 2023 10:00:00 +0000
Received: by outflank-mailman (input) for mailman id 480415;
 Wed, 18 Jan 2023 09:59:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI5Ed-00044d-BE
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 09:59:59 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2074.outbound.protection.outlook.com [40.107.103.74])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dd7e021e-9716-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 10:59:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8543.eurprd04.prod.outlook.com (2603:10a6:102:216::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Wed, 18 Jan
 2023 09:59:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 09:59:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd7e021e-9716-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e87f4767-eb62-fc70-788c-d8afa45f6434@suse.com>
Date: Wed, 18 Jan 2023 10:59:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
 <f1229a27-f92c-a0dc-928e-1d78b928fdd0@xen.org>
 <bd6befdf-65eb-6937-fb85-449a5fa16794@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <bd6befdf-65eb-6937-fb85-449a5fa16794@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0008.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 17.01.2023 23:20, Andrew Cooper wrote:
> On 24/11/2022 9:29 pm, Julien Grall wrote:
>> On 19/10/2022 09:43, Jan Beulich wrote:
>>> The registration by virtual/linear address has downsides: At least on
>>> x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
>>> PV domains the areas are inaccessible (and hence cannot be updated by
>>> Xen) when in guest-user mode.
>>>
>>> In preparation of the introduction of new vCPU operations allowing to
>>> register the respective areas (one of the two is x86-specific) by
>>> guest-physical address, flesh out the map/unmap functions.
>>>
>>> Noteworthy differences from map_vcpu_info():
>>> - areas can be registered more than once (and de-registered),
>>> - remote vCPU-s are paused rather than checked for being down (which in
>>>    principle can change right after the check),
>>> - the domain lock is taken for a much smaller region.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> RFC: By using global domain page mappings the demand on the underlying
>>>       VA range may increase significantly. I did consider to use per-
>>>       domain mappings instead, but they exist for x86 only. Of course we
>>>       could have arch_{,un}map_guest_area() aliasing global domain page
>>>       mapping functions on Arm and using per-domain mappings on x86. Yet
>>>       then again map_vcpu_info() doesn't do so either (albeit that's
>>>       likely to be converted subsequently to use map_vcpu_area()
>>> anyway).
>>>
>>> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>>>       like map_vcpu_info() - solely relying on the type ref acquisition.
>>>       Checking for p2m_ram_rw alone would be wrong, as at least
>>>       p2m_ram_logdirty ought to also be okay to use here (and in similar
>>>       cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>>>       used here (like altp2m_vcpu_enable_ve() does) as well as in
>>>       map_vcpu_info(), yet then again the P2M type is stale by the time
>>>       it is being looked at anyway without the P2M lock held.
>>>
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -1563,7 +1563,82 @@ int map_guest_area(struct vcpu *v, paddr
>>>                      struct guest_area *area,
>>>                      void (*populate)(void *dst, struct vcpu *v))
>>>   {
>>> -    return -EOPNOTSUPP;
>>> +    struct domain *currd = v->domain;
>>> +    void *map = NULL;
>>> +    struct page_info *pg = NULL;
>>> +    int rc = 0;
>>> +
>>> +    if ( gaddr )
>>
>> 0 is technically a valid (guest) physical address on Arm.
> 
> It is on x86 too, but that's not why 0 is generally considered an
> invalid address.
> 
> See the multitude of XSAs, and near-XSAs which have been caused by bad
> logic in Xen caused by trying to make a variable held in struct
> vcpu/domain have a default value other than 0.
> 
> It's not impossible to write such code safely, and in this case I expect
> it can be done by the NULLness (or not) of the mapping pointer, rather
> than by stashing the gaddr, but history has proved repeatedly that this
> is a very fertile source of security bugs.

I'm checking a value passed in from the guest here. No checking of internal
state can replace that. The checks on internal state leverage zero-init:

 unmap:
    if ( pg )
    {
        unmap_domain_page_global(map);
        put_page_and_type(pg);
    }

It's also not clear to me whether, as Julien appears to have read it, you
mean to ask that I revert to using 0 as the "invalid" (i.e. request-for-
unmap) indicator.
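For illustration, the zero-init pattern being leveraged above can be modelled in plain C. This is a minimal userspace sketch, not Xen's actual code: register_area() and the malloc()/free() calls are stand-ins for map_guest_area(), map_domain_page_global() and the unmap/put_page_and_type() pair, and the 4 KiB size stands in for the real page handling.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t paddr_t;

struct guest_area {
    void *map;                      /* NULL <=> nothing registered */
};

/*
 * Userspace model of map_guest_area(): a guest-supplied address of 0 is
 * the request to de-register, and the internal "is an area registered?"
 * state is carried solely by the NULLness of the zero-initialised
 * mapping pointer - no stashed gaddr with a non-zero default value.
 */
static int register_area(struct guest_area *area, paddr_t gaddr)
{
    void *map = NULL;

    if ( gaddr )
    {
        map = malloc(4096);         /* stand-in for mapping the page */
        if ( !map )
            return -1;              /* -ENOMEM in the real code */
    }

    /* Registering anew is allowed; tear down any previous mapping. */
    free(area->map);                /* free(NULL) is a no-op */
    area->map = map;

    return 0;
}
```

The point of the model is that both the "registered more than once" and the "gaddr == 0 means unmap" behaviours fall out of the same NULL check, with all-zeroes as the safe default state.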

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 10:13:19 2023
Message-ID: <157b9c9f-b9fe-5c8a-bea2-aa45ea5d195d@amd.com>
Date: Wed, 18 Jan 2023 11:12:36 +0100
Subject: Re: [XEN v2 00/11] Add support for 32 bit physical address
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230117174358.15344-1-ayan.kumar.halder@amd.com>

Hi Ayan,

On 17/01/2023 18:43, Ayan Kumar Halder wrote:
> 
> 
> Hi All,
> 
> Please have a look at https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01465.html
> for the context.
> 
> The benefits of using 32 bit physical addresses are as follows :-
> 
> 1. It helps to use Xen on platforms (for eg R52) which supports 32 bit
> physical addresses and has no support for large page address extension.
Looking at your entire series, you keep using "large page address extension".
LPAE is an Armv7-A feature, and it is defined as "Large *Physical* Address
Extension", so it would be good to stick to the proper naming.
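As background for anyone unfamiliar with the acronym: LPAE widens the *physical* address on Armv7-A to 40 bits while virtual addresses stay 32-bit, and it is exactly what a core like the Cortex-R52 lacks. A hedged sketch of how the physical-address width then becomes a build-time type choice — CONFIG_PHYS_ADDR_32 and paddr_range_ok() are hypothetical names for illustration, not Xen's actual option or helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#ifdef CONFIG_PHYS_ADDR_32
typedef uint32_t paddr_t;   /* e.g. Cortex-R52: no LPAE, 32-bit PA */
#else
typedef uint64_t paddr_t;   /* LPAE / VMSAv8: wider physical addresses */
#endif

/* Does the byte range [base, base + size) fit without wrapping the
 * physical address space?  With the 32-bit paddr_t this rejects exactly
 * the regions a platform without LPAE cannot express. */
static bool paddr_range_ok(paddr_t base, paddr_t size)
{
    return size == 0 || size - 1 <= (paddr_t)-1 - base;
}
```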

~Michal


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 10:13:25 2023
Date: Wed, 18 Jan 2023 05:13:03 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Bernhard Beschow <shentey@gmail.com>
Cc: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	Hervé Poussineau <hpoussin@reactos.org>,
	Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>
Subject: Re: [PATCH v2 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
Message-ID: <20230118051230-mutt-send-email-mst@kernel.org>
References: <20230104144437.27479-1-shentey@gmail.com>
MIME-Version: 1.0
In-Reply-To: <20230104144437.27479-1-shentey@gmail.com>

On Wed, Jan 04, 2023 at 03:44:31PM +0100, Bernhard Beschow wrote:
> This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally removes
> it. The motivation is to 1/ decouple PIIX from Xen and 2/ to make Xen in the PC
> machine agnostic to the precise southbridge being used. 2/ will become
> particularly interesting once PIIX4 becomes usable in the PC machine, avoiding
> the "Frankenstein" use of PIIX4_ACPI in PIIX3.

Looks ok to me.
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

Feel free to merge through Xen tree.

> v2:
> - xen_piix3_set_irq() is already generic. Just rename it. (Chuck)
> 
> Testing done:
> None, because I don't know how to conduct this properly :(
> 
> Based-on: <20221221170003.2929-1-shentey@gmail.com>
>           "[PATCH v4 00/30] Consolidate PIIX south bridges"
> 
> Bernhard Beschow (6):
>   include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
>   hw/isa/piix: Reuse piix3_realize() in piix3_xen_realize()
>   hw/isa/piix: Wire up Xen PCI IRQ handling outside of PIIX3
>   hw/isa/piix: Avoid Xen-specific variant of piix_write_config()
>   hw/isa/piix: Resolve redundant k->config_write assignments
>   hw/isa/piix: Resolve redundant TYPE_PIIX3_XEN_DEVICE
> 
>  hw/i386/pc_piix.c             | 34 ++++++++++++++++--
>  hw/i386/xen/xen-hvm.c         |  2 +-
>  hw/isa/piix.c                 | 66 +----------------------------------
>  include/hw/southbridge/piix.h |  1 -
>  include/hw/xen/xen.h          |  2 +-
>  stubs/xen-hw-stub.c           |  2 +-
>  6 files changed, 35 insertions(+), 72 deletions(-)
> 
> -- 
> 2.39.0
> 
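The decoupling this series is after — the southbridge raising PCI INTx lines without knowing whether Xen, KVM or TCG is listening — can be sketched as a plain-C model. board_wire_intx(), southbridge_set_irq() and count_asserted_irqs() are hypothetical names for illustration, not QEMU's actual API:

```c
#include <assert.h>
#include <stddef.h>

typedef void (*intx_handler_fn)(void *opaque, int intx_pin, int level);

static intx_handler_fn intx_handler;  /* zero-initialised: nothing wired */
static void *intx_opaque;

/* Board-level wiring: any accelerator (Xen, KVM, TCG) can plug in,
 * so the southbridge no longer needs a Xen-specific device variant. */
static void board_wire_intx(intx_handler_fn fn, void *opaque)
{
    intx_handler = fn;
    intx_opaque = opaque;
}

/* The southbridge raises an INTx line without knowing who is listening. */
static void southbridge_set_irq(int intx_pin, int level)
{
    if (intx_handler)
        intx_handler(intx_opaque, intx_pin, level);
}

/* Example listener, standing in for something like xen_intx_set_irq(). */
static void count_asserted_irqs(void *opaque, int intx_pin, int level)
{
    (void)intx_pin;
    if (level)
        ++*(int *)opaque;
}
```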



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 10:23:01 2023
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jiamei Xie <Jiamei.Xie@arm.com>
Subject: RE: [PATCH v2 04/40] xen/arm: add an option to define Xen start
 address for Armv8-R
Date: Wed, 18 Jan 2023 10:22:29 +0000
Message-ID:
 <PAXPR08MB7420F43284FEC60BC88496709EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-5-Penny.Zheng@arm.com>
 <e406484a-aad3-4953-afdb-3159597ec998@xen.org>
 <PAXPR08MB7420A5C7F93F23F14C77C9BA9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <7ffe5d34-614d-f2aa-cf87-c518917c970a@xen.org>
In-Reply-To: <7ffe5d34-614d-f2aa-cf87-c518917c970a@xen.org>

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 18 January 2023 17:44
> To: Wei Chen <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>; xen-
> devel@lists.xenproject.org
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>;
> Jiamei Xie <Jiamei.Xie@arm.com>
> Subject: Re: [PATCH v2 04/40] xen/arm: add an option to define Xen start
> address for Armv8-R
> 
> 
> 
> On 18/01/2023 03:00, Wei Chen wrote:
> > Hi Julien,
> 
> Hi Wei,
> 
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: 18 January 2023 7:24
> >> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> >> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> >> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> >> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Jiamei Xie
> >> <Jiamei.Xie@arm.com>
> >> Subject: Re: [PATCH v2 04/40] xen/arm: add an option to define Xen
> start
> >> address for Armv8-R
> >>
> >> Hi Penny,
> >>
> >> On 13/01/2023 05:28, Penny Zheng wrote:
> >>> From: Wei Chen <wei.chen@arm.com>
> >>>
> >>> On Armv8-A, Xen has a fixed virtual start address (link address
> >>> too) for all Armv8-A platforms. In an MMU based system, Xen can
> >>> map its loaded address to this virtual start address. So, on
> >>> Armv8-A platforms, the Xen start address does not need to be
> >>> configurable. But on Armv8-R platforms, there is no MMU to map
> >>> loaded address to a fixed virtual address and different platforms
> >>> will have very different address space layout. So Xen cannot use
> >>> a fixed physical address on MPU based system and need to have it
> >>> configurable.
> >>>
> >>> In this patch we introduce one Kconfig option for users to define
> >>> the default Xen start address for Armv8-R. Users can enter the
> >>> address in config time, or select the tailored platform config
> >>> file from arch/arm/configs.
> >>>
> >>> And as we introduced Armv8-R platforms to Xen, that means the
> >>> existed Arm64 platforms should not be listed in Armv8-R platform
> >>> list, so we add !ARM_V8R dependency for these platforms.
> >>>
> >>> Signed-off-by: Wei Chen <wei.chen@arm.com>
> >>> Signed-off-by: Jiamei.Xie <jiamei.xie@arm.com>
> >>
> >> Your signed-off-by is missing.
> >>
> >>> ---
> >>> v1 -> v2:
> >>> 1. Remove the platform header fvp_baser.h.
> >>> 2. Remove the default start address for fvp_baser64.
> >>> 3. Remove the description of default address from commit log.
> >>> 4. Change HAS_MPU to ARM_V8R for Xen start address dependency.
> >>>      No matter Arm-v8r board has MPU or not, it always need to
> >>>      specify the start address.
> >>
> >> I don't quite understand the last sentence. Are you saying that it is
> >> possible to have an ARMv8-R system with an MPU nor a page-table?
> >>
> >
> > Yes, from the Cortex-R82 page [1], you can see the MPU is optional in
> EL1
> > and EL2:
> > "Two optional and programmable MPUs controlled from EL1 and EL2
> respectively."
> Would this mean a vendor may provide their custom solution to protect
> the memory?
> 

Ah, you gave me a new idea, yes in the "ARM DDI 0600A.c G1.3.7" MSA_frac
of ID_AA64MMFR0_EL1 says:
0b0000 PMSAv8-64 not supported in any translation regime.
0b0000 is not permitted value.

So maybe you're right, on Armv8-R64, we always have MPU in EL1&EL2, the
optional is for MPU customization.

> >
> > Although it is unlikely that vendors using the Armv8-R IP will do so, it
> > is indeed an option. In the ID register, there are also related bits in
> > ID_AA64MMFR0_EL1 (MSA_frac) to indicate this.
> >
> >>> ---
> >>>    xen/arch/arm/Kconfig           |  8 ++++++++
> >>>    xen/arch/arm/platforms/Kconfig | 16 +++++++++++++---
> >>>    2 files changed, 21 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> >>> index ace7178c9a..c6b6b612d1 100644
> >>> --- a/xen/arch/arm/Kconfig
> >>> +++ b/xen/arch/arm/Kconfig
> >>> @@ -145,6 +145,14 @@ config TEE
> >>>    	  This option enables generic TEE mediators support. It allows
> >> guests
> >>>    	  to access real TEE via one of TEE mediators implemented in
> XEN.
> >>>
> >>> +config XEN_START_ADDRESS
> >>> +	hex "Xen start address: keep default to use platform defined
> >> address"
> >>> +	default 0
> >>> +	depends on ARM_V8R
> >>
> >> It is still pretty unclear to me what would be the difference between
> >> HAS_MPU and ARM_V8R.
> >>
> >
> > If we don't want to support non-MPU supported Armv8-R, I think they are
> the
> > same. IMO, non-MPU supported Armv8-R is meaningless to Xen.
> OOI, why do you think this is meaningless?

If there is Armv8-R board without EL2 MPU, how can we protect Xen? Of course,
if users don't care about security, Xen still can support it.

> 
> >
> >>> +	help
> >>> +	  This option allows to set the customized address at which Xen will
> >> be
> >>> +	  linked on MPU systems. This address must be aligned to a page size.
> >>> +
> >>>    source "arch/arm/tee/Kconfig"
> >>>
> >>>    config STATIC_SHM
> >>> diff --git a/xen/arch/arm/platforms/Kconfig
> >> b/xen/arch/arm/platforms/Kconfig
> >>> index c93a6b2756..0904793a0b 100644
> >>> --- a/xen/arch/arm/platforms/Kconfig
> >>> +++ b/xen/arch/arm/platforms/Kconfig
> >>> @@ -1,6 +1,7 @@
> >>>    choice
> >>>    	prompt "Platform Support"
> >>>    	default ALL_PLAT
> >>> +	default FVP_BASER if ARM_V8R
> >>>    	---help---
> >>>    	Choose which hardware platform to enable in Xen.
> >>>
> >>> @@ -8,13 +9,14 @@ choice
> >>>
> >>>    config ALL_PLAT
> >>>    	bool "All Platforms"
> >>> +	depends on !ARM_V8R
> >>>    	---help---
> >>>    	Enable support for all available hardware platforms. It
> doesn't
> >>>    	automatically select any of the related drivers.
> >>>
> >>>    config QEMU
> >>>    	bool "QEMU aarch virt machine support"
> >>> -	depends on ARM_64
> >>> +	depends on ARM_64 && !ARM_V8R
> >>>    	select GICV3
> >>>    	select HAS_PL011
> >>>    	---help---
> >>> @@ -23,7 +25,7 @@ config QEMU
> >>>
> >>>    config RCAR3
> >>>    	bool "Renesas RCar3 support"
> >>> -	depends on ARM_64
> >>> +	depends on ARM_64 && !ARM_V8R
> >>>    	select HAS_SCIF
> >>>    	select IPMMU_VMSA
> >>>    	---help---
> >>> @@ -31,14 +33,22 @@ config RCAR3
> >>>
> >>>    config MPSOC
> >>>    	bool "Xilinx Ultrascale+ MPSoC support"
> >>> -	depends on ARM_64
> >>> +	depends on ARM_64 && !ARM_V8R
> >>>    	select HAS_CADENCE_UART
> >>>    	select ARM_SMMU
> >>>    	---help---
> >>>    	Enable all the required drivers for Xilinx Ultrascale+ MPSoC
> >>>
> >>> +config FVP_BASER
> >>> +	bool "Fixed Virtual Platform BaseR support"
> >>> +	depends on ARM_V8R
> >>> +	help
> >>> +	  Enable platform specific configurations for Fixed Virtual
> >>> +	  Platform BaseR
> >>
> >> This seems unrelated to this patch.
> >>
> >
> > Can we add some descriptions in commit log for this change, or we
> > Should move it to a new patch?
> 
> New patch please or introduce it in the patch where you need it.
> 
> We had preferred to use separate
> > patches for this kind of changes, but we found the number of patches
> > would become more and more. This problem has been bothering us for
> > organizing patches.
> 
> I understand the concern of increasing the number of patches. However,
> this also needs to weight against the review.
> 

Understand.

> In this case, it is very difficult for me to understand why we need to
> introduce FVP_BASER.
> 
> In fact, on the previous version, we discussed to not introduce any new
> platform specific config. So I am a bit surprised this is actually needed.
> 

No, this is no true, it's my mistake, I forgot to remove FVP_BASER from
this Kconfig. Actually, we do not need this one. We also don't need a
new patch for it.

Cheers,
Wei Chen

> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 10:24:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 10:24:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480458.744888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5cg-0000of-VJ; Wed, 18 Jan 2023 10:24:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480458.744888; Wed, 18 Jan 2023 10:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5cg-0000oY-SN; Wed, 18 Jan 2023 10:24:50 +0000
Received: by outflank-mailman (input) for mailman id 480458;
 Wed, 18 Jan 2023 10:24:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BRoI=5P=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pI5cg-0000oS-5i
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 10:24:50 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2040.outbound.protection.outlook.com [40.107.241.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20d4d0ba-971a-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 11:23:19 +0100 (CET)
Received: from DB6P195CA0019.EURP195.PROD.OUTLOOK.COM (2603:10a6:4:cb::29) by
 GVXPR08MB7896.eurprd08.prod.outlook.com (2603:10a6:150:16::8) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.23; Wed, 18 Jan 2023 10:24:41 +0000
Received: from DBAEUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:cb:cafe::4c) by DB6P195CA0019.outlook.office365.com
 (2603:10a6:4:cb::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.19 via Frontend
 Transport; Wed, 18 Jan 2023 10:24:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT043.mail.protection.outlook.com (100.127.143.24) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Wed, 18 Jan 2023 10:24:41 +0000
Received: ("Tessian outbound 0d7b2ab0f13d:v132");
 Wed, 18 Jan 2023 10:24:41 +0000
Received: from f647958dc23f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 236CF102-8894-4C4C-B843-9450C52D060C.1; 
 Wed, 18 Jan 2023 10:24:35 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f647958dc23f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 10:24:35 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AS2PR08MB8879.eurprd08.prod.outlook.com (2603:10a6:20b:5f6::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Wed, 18 Jan
 2023 10:24:33 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 10:24:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20d4d0ba-971a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IquuezylwvRjL4NG/xhRa2QIyePrJ3rh/pyM6D3CzYw=;
 b=6vhri22t48d3PjjyE3LKksGP4tJncXsTiw0jlBrXsf+BufAMXOFZci1OK4FRrvN6J+MUMkf4jfHo2pY1EDoKDO5XNM0yp1j93GsOLLYQ/YktEyHAgW9JVgaH3sauvtdS28vbFjP6BPLmUjhch8o3RJ7a6UhQUyXZCo9UIfTQkwo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related code
 from head.S
Thread-Topic: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related code
 from head.S
Thread-Index: AQHZJxAYj73pNk7diEiR8Njo5KbmJ66jSxAAgAA5lACAAHG8gIAACS7Q
Date: Wed, 18 Jan 2023 10:24:32 +0000
Message-ID:
 <PAXPR08MB7420F3F3A95488C1462A8AE59EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-6-Penny.Zheng@arm.com>
 <f78755d8-0b43-ebe4-4b2c-c88875347796@xen.org>
 <PAXPR08MB742006643CF50E239EBC12139EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <4e6d4deb-d38b-9845-2f58-e94f28196bf6@xen.org>
In-Reply-To: <4e6d4deb-d38b-9845-2f58-e94f28196bf6@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: C405321523030C429B13296BC2772AC6.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	PAXPR08MB7420:EE_|AS2PR08MB8879:EE_|DBAEUR03FT043:EE_|GVXPR08MB7896:EE_
X-MS-Office365-Filtering-Correlation-Id: c7fb667f-1995-48b2-4ac5-08daf93e35cc
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8879
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	32022a36-6d8b-49a0-83c4-08daf93e30c7
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 10:24:41.2216
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c7fb667f-1995-48b2-4ac5-08daf93e35cc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR08MB7896

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 18 January 2023 17:50
> To: Wei Chen <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>; xen-
> devel@lists.xenproject.org
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related
> code from head.S
> 
> 
> 
> On 18/01/2023 03:09, Wei Chen wrote:
> > Hi Julien,
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: 18 January 2023 7:37
> >> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> >> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> >> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> >> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> >> Subject: Re: [PATCH v2 05/40] xen/arm64: prepare for moving MMU related
> >> code from head.S
> >>
> >> Hi Penny,
> >>
> >> On 13/01/2023 05:28, Penny Zheng wrote:
> >>> From: Wei Chen <wei.chen@arm.com>
> >>>
> >>> We want to reuse head.S for MPU systems, but there are some
> >>> code implemented for MMU systems only. We will move such
> >>> code to another MMU specific file. But before that, we will
> >>> do some preparations in this patch to make them easier
> >>> for reviewing:
> >>
> >> Well, I agree that...
> >>
> >>> 1. Fix the indentations of code comments.
> >>
> >> ... changing the indentation is better here. But...
> >>
> >>> 2. Export some symbols that will be accessed out of file
> >>>      scope.
> >>
> >> ... I have no idea which functions are going to be used in a separate
> >> file. So I think they should belong to the patch moving the code.
> >>
> >
> > Ok, I will move these changes to the moving code patches.
> >
> >>>
> >>> Signed-off-by: Wei Chen <wei.chen@arm.com>
> >>
> >> Your signed-off-by is missing.
> >>
> >>> ---
> >>> v1 -> v2:
> >>> 1. New patch.
> >>> ---
> >>>    xen/arch/arm/arm64/head.S | 40 ++++++++++++++++++-----------------
> ---
> >>>    1 file changed, 20 insertions(+), 20 deletions(-)
> >>>
> >>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> >>> index 93f9b0b9d5..b2214bc5e3 100644
> >>> --- a/xen/arch/arm/arm64/head.S
> >>> +++ b/xen/arch/arm/arm64/head.S
> >>> @@ -136,22 +136,22 @@
> >>>            add \xb, \xb, x20
> >>>    .endm
> >>>
> >>> -        .section .text.header, "ax", %progbits
> >>> -        /*.aarch64*/
> >>> +.section .text.header, "ax", %progbits
> >>> +/*.aarch64*/
> >>
> >> This change is not mentioned.
> >>
> >
> > I will add the description in commit message.
> >
> >>>
> >>> -        /*
> >>> -         * Kernel startup entry point.
> >>> -         * ---------------------------
> >>> -         *
> >>> -         * The requirements are:
> >>> -         *   MMU = off, D-cache = off, I-cache = on or off,
> >>> -         *   x0 = physical address to the FDT blob.
> >>> -         *
> >>> -         * This must be the very first address in the loaded image.
> >>> -         * It should be linked at XEN_VIRT_START, and loaded at any
> >>> -         * 4K-aligned address.  All of text+data+bss must fit in 2MB,
> >>> -         * or the initial pagetable code below will need adjustment.
> >>> -         */
> >>> +/*
> >>> + * Kernel startup entry point.
> >>> + * ---------------------------
> >>> + *
> >>> + * The requirements are:
> >>> + *   MMU = off, D-cache = off, I-cache = on or off,
> >>> + *   x0 = physical address to the FDT blob.
> >>> + *
> >>> + * This must be the very first address in the loaded image.
> >>> + * It should be linked at XEN_VIRT_START, and loaded at any
> >>> + * 4K-aligned address.  All of text+data+bss must fit in 2MB,
> >>> + * or the initial pagetable code below will need adjustment
> >>> + */
> >>>
> >>>    GLOBAL(start)
> >>>            /*
> >>> @@ -586,7 +586,7 @@ ENDPROC(cpu_init)
> >>>     *
> >>>     * Clobbers x0 - x4
> >>>     */
> >>> -create_page_tables:
> >>> +ENTRY(create_page_tables)
> >>
> >> I am not sure about keeping this name. Now we have create_page_tables()
> >> and arch_setup_page_tables().
> >>
> >> I would conside to name it create_boot_page_tables().
> >>
> >
> > Do you need me to rename it in this patch?
> 
> So looking at the rest of the series, I see you are already renaming the
> helper in patch #11. I think it would be better if the naming is done
> earlier.
> 
> That said, I am not convinced that create_page_tables() should actually
> be called externally.
> 
> In fact, you have something like:
> 
>     bl create_page_tables
>     bl enable_mmu
> 
> Both will need a MMU/MPU specific implementation. So it would be better
> if we provide a wrapper to limit the number of external functions.
>

I agree with you, we will try to wrapper some functions instead of
export them.

Cheers,
Wei Chen
 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 10:43:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 10:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480471.744904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5uG-0003Og-Jd; Wed, 18 Jan 2023 10:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480471.744904; Wed, 18 Jan 2023 10:43:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI5uG-0003OZ-GM; Wed, 18 Jan 2023 10:43:00 +0000
Received: by outflank-mailman (input) for mailman id 480471;
 Wed, 18 Jan 2023 10:42:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI5uE-0003OP-QT; Wed, 18 Jan 2023 10:42:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI5uE-0005r7-G8; Wed, 18 Jan 2023 10:42:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI5uE-0007pO-4q; Wed, 18 Jan 2023 10:42:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pI5uE-0002c5-4L; Wed, 18 Jan 2023 10:42:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XnTmaWGgR1MmGM3Wk4EirhciGAczmQCrD5RKhjMSL5E=; b=BzRhVRMWxwl/LlQlEyVG3XXMnE
	ZssikH8FOU5lD9fZv+UEPLbZlGbYA+ZRWzuDptUKMMZ3JXiu/0/dvIUdOGB5j4KiRuF3jzaWA16MQ
	tUGYuAN0F0fquhTokZ3QARZbvJDmU1sYNI5G7qz/DzNX+UHxgXFoFYdaGK4AgQEJhZiQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175948-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175948: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-arm64-xsm:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 10:42:58 +0000

flight 175948 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175948/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    6 days
Failing since        175739  2023-01-12 09:38:44 Z    6 days   15 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    5 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
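The bound change described above can be sketched in C. This is an illustrative model only: the mixing step, names, and constants are invented for the sketch and are not Xen's actual shadow hash.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PADDR_BITS 52   /* architectural x86 maximum */
#define PAGE_SHIFT 12

/*
 * An MFN/GFN is bounded by PADDR_BITS - PAGE_SHIFT = 40 bits, so only
 * (40 + 7) / 8 = 5 of the 8 bytes of "n" can ever be non-zero.  Bounding
 * the loop by that compile-time constant instead of sizeof(n) drops the
 * iteration count from 8 to 5 without affecting hash quality.
 */
static unsigned int hash(uint64_t n, unsigned int nbuckets)
{
    unsigned int h = 0;
    size_t i;

    for ( i = 0; i < (PADDR_BITS - PAGE_SHIFT + 7) / 8; i++ )
    {
        h = (h + (unsigned int)(n & 0xff)) * 33; /* invented mixing step */
        n >>= 8;
    }

    return h % nbuckets;
}
```

Since the loop consumes only the low 40 bits, setting any higher (always-zero in practice) bit leaves the result unchanged, which is why including those bytes added nothing.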

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
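The shortcut described can be modelled with a minimal C sketch; the names are illustrative rather than Xen's actual types (in Xen, mfn_t is a typesafe wrapper and mfn_valid() is a more expensive general-purpose check).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t mfn_t;          /* simplified; Xen wraps this in a struct */
#define INVALID_MFN ((mfn_t)~0ULL)

/*
 * For arrays that are only ever written with valid MFNs or INVALID_MFN,
 * a plain comparison against the sentinel suffices; the more expensive
 * general-purpose mfn_valid() check can be avoided.
 */
static bool mfn_present(mfn_t mfn)
{
    return mfn != INVALID_MFN;
}
```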

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling either of them would cause Xen to fail to
    compile, the options are not visible to the user and are enabled by default
    when X86 is selected.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
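An illustrative sketch of such hidden, default-on options (no prompt string, so they are invisible to the user); the actual Kconfig text in the commit may differ:

```kconfig
config AMD_IOMMU
	bool
	depends on X86
	default y

config INTEL_IOMMU
	bool
	depends on X86
	default y
```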

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 10:56:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 10:56:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480480.744915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI66s-0004yp-W6; Wed, 18 Jan 2023 10:56:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480480.744915; Wed, 18 Jan 2023 10:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI66s-0004yi-Ru; Wed, 18 Jan 2023 10:56:02 +0000
Received: by outflank-mailman (input) for mailman id 480480;
 Wed, 18 Jan 2023 10:56:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pI66r-0004yc-ER
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 10:56:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI66q-0006Fp-Tt; Wed, 18 Jan 2023 10:56:00 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI66q-0008E8-Lo; Wed, 18 Jan 2023 10:56:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=3j5m5JxERhhVzNmVeMej3DrFFiOJFVdBz8bH3tUI1FQ=; b=AgxW6RPKVhtQaEFM0BukrG5zyp
	G1UDnifF/75TzI+fBXbW/XoFsC4pe3vAnvZITyVfuD2w6BQAOFlamOu0NELiqdDn2FU5Z9Rrsm0dn
	fD1wI9SQk2KRaCRn7VZiEN3fjrnKPPA/xCPM6MY+nLAwRGBwdO5XhdJkkKAASMaXLnjg=;
Message-ID: <a5560a16-60d8-dc75-994e-c8719721bf74@xen.org>
Date: Wed, 18 Jan 2023 10:55:58 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity map
 sections
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-8-Penny.Zheng@arm.com>
 <4b817b65-f558-b4df-c7fd-242a04e59a59@xen.org>
 <PAXPR08MB742061C5E8ED73BD43FEF9599EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <PAXPR08MB742061C5E8ED73BD43FEF9599EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 18/01/2023 02:18, Wei Chen wrote:
> Hi Julien,

Hi Wei,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 2023年1月18日 7:46
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity
>> map sections
>>
>> Hi,
>>
>> On 13/01/2023 05:28, Penny Zheng wrote:
>>> From: Wei Chen <wei.chen@arm.com>
>>>
>>> Only the first 4KB of Xen image will be mapped as identity
>>> (PA == VA). At the moment, Xen guarantees this by having
>>> everything that needs to be used in the identity mapping
>>> in head.S before _end_boot and checking at link time if this
>>> fits in 4KB.
>>>
>>> In the previous patch, we moved the MMU code out of
>>> head.S. Although we added .text.header to the new file
>>> to guarantee that all identity map code stays in the first
>>> 4KB, the order of these two files within this 4KB depends
>>> on the build tools. Currently, we rely on the order of objs
>>> in the Makefile to ensure that head.S is placed first. But
>>> with different build tools, the result may not be the same.
>>
>> Right, so this is fixing a bug you introduced in the previous patch. We
>> should really avoid introducing (latent) regression in a series. So
>> please re-order the patches.
>>
> 
> Ok.
> 
>>>
>>> In this patch we introduce .text.idmap in head_mmu.S, and
>>> place this section after .text.header to ensure the code of
>>> head_mmu.S comes after the code of head.S.
>>>
>>> After this, we will still include some code that does not
>>> belong to the identity map before _end_boot, because we have
>>> moved _end_boot to head_mmu.S.
>>
>> I dislike this approach because you are expecting that only head_mmu.S
>> will be part of .text.idmap. If it is not, everything could blow up again.
>>
> 
> I agree.
> 
>> That said, if you look at staging, you will notice that now _end_boot is
>> defined in the linker script to avoid any issue.
>>
> 
> Sorry, I am not quite clear about this comment. The _end_boot in the
> original staging branch is defined in head.S, and I am not quite sure how
> this _end_boot solves the case of multiple files containing idmap code.

If you look at the latest staging, there is a commit (229ebd517b9d) that
now defines _end_boot in the linker script.

The .text.idmap section can be added before the definition of _end_boot.
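An illustrative linker-script fragment (not the actual xen.lds.S) showing how .text.idmap could be placed ahead of the _end_boot definition, so the identity-mapped code precedes it regardless of object link order:

```ld
SECTIONS
{
    .text : {
        _stext = .;
        *(.text.header)   /* boot entry code from head.S */
        *(.text.idmap)    /* identity-mapped code, e.g. head_mmu.S */
        _end_boot = .;    /* everything above must fit in the idmap */
        *(.text)
    }
}
```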

> 
> Cheers,
> Wei Chen
> 
>>> That means all code in head.S
>>> will be included before _end_boot. In this patch, we also
>>> added a .text section directive at the place of the original
>>> _end_boot in head.S. All the code after .text in head.S will
>>> not be included in the identity map section.
>>>
>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>> ---
>>> v1 -> v2:
>>> 1. New patch.
>>> ---
>>>    xen/arch/arm/arm64/head.S     | 6 ++++++
>>>    xen/arch/arm/arm64/head_mmu.S | 2 +-
>>>    xen/arch/arm/xen.lds.S        | 1 +
>>>    3 files changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>> index 5cfa47279b..782bd1f94c 100644
>>> --- a/xen/arch/arm/arm64/head.S
>>> +++ b/xen/arch/arm/arm64/head.S
>>> @@ -466,6 +466,12 @@ fail:   PRINT("- Boot failed -\r\n")
>>>            b     1b
>>>    ENDPROC(fail)
>>>
>>> +/*
>>> + * For the code that does not need to be in the identity map section,
>>> + * we put it back in the normal .text section.
>>> + */
>>> +.section .text, "ax", %progbits
>>> +
>>
>> I would argue that puts wants to be part of the idmap.
>>
> 
> I am ok to move puts to idmap. But in the original head.S, puts is
> placed after _end_boot, and from xen.lds.S, we can see the idmap
> area is the section from start to _end_boot.

The original position of _end_boot is wrong. It didn't take into account
the literal pools (they are at the end of the unit), so those would end
up past _end_boot.

> The reason for moving puts
> to idmap is that we're using it in idmap?

I guess it depends on what idmap really means here. If you interpret it
only as "the MMU is on and VA == PA", then not yet (I was thinking of
introducing a few calls).

If you also include the MMU-off case, then yes.

Also, for cache coloring we will need a trampoline, so it would be
better to keep everything close together, as that makes it easier to
copy.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:00:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:00:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480486.744926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6Ak-0005oo-H9; Wed, 18 Jan 2023 11:00:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480486.744926; Wed, 18 Jan 2023 11:00:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6Ak-0005o0-Co; Wed, 18 Jan 2023 11:00:02 +0000
Received: by outflank-mailman (input) for mailman id 480486;
 Wed, 18 Jan 2023 11:00:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pI6Aj-0005i9-SU
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:00:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI6Aj-0006Jl-7G; Wed, 18 Jan 2023 11:00:01 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pI6Aj-0008In-1S; Wed, 18 Jan 2023 11:00:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=+jrDTHJzP6DIlIJuk91UB+u8pUB+1hkyR01Pw/bysxA=; b=2klbXRBHPymdBmD54Ghgd3WBCr
	Lh0OXZZ0qRS05OBF6C7N8Qhb8G5HwuzUklnZJKikR5HR6DhPw6cQvWfmdtjWcLWzdnY5L/Wb9AMHz
	QZHsEqW1wSev2/aKJXMOuceiJmFvptBdLAM3vEa1DqLqzdL86O+AJMWf/+dzzGiMPbEM=;
Message-ID: <b13c0904-3503-8894-8b14-64fcc717d50d@xen.org>
Date: Wed, 18 Jan 2023 10:59:59 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 04/40] xen/arm: add an option to define Xen start
 address for Armv8-R
Content-Language: en-US
To: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jiamei Xie <Jiamei.Xie@arm.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-5-Penny.Zheng@arm.com>
 <e406484a-aad3-4953-afdb-3159597ec998@xen.org>
 <PAXPR08MB7420A5C7F93F23F14C77C9BA9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <7ffe5d34-614d-f2aa-cf87-c518917c970a@xen.org>
 <PAXPR08MB7420F43284FEC60BC88496709EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <PAXPR08MB7420F43284FEC60BC88496709EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 18/01/2023 10:22, Wei Chen wrote:
>>> Although it is unlikely that vendors using the Armv8-R IP will do so, it
>>> is indeed an option. In the ID register, there are also related bits in
>>> ID_AA64MMFR0_EL1 (MSA_frac) to indicate this.
>>>
>>>>> ---
>>>>>     xen/arch/arm/Kconfig           |  8 ++++++++
>>>>>     xen/arch/arm/platforms/Kconfig | 16 +++++++++++++---
>>>>>     2 files changed, 21 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>>>>> index ace7178c9a..c6b6b612d1 100644
>>>>> --- a/xen/arch/arm/Kconfig
>>>>> +++ b/xen/arch/arm/Kconfig
>>>>> @@ -145,6 +145,14 @@ config TEE
>>>>>     	  This option enables generic TEE mediators support. It allows
>>>>>     	  guests to access real TEE via one of TEE mediators implemented
>>>>>     	  in XEN.
>>>>>
>>>>> +config XEN_START_ADDRESS
>>>>> +	hex "Xen start address: keep default to use platform defined address"
>>>>> +	default 0
>>>>> +	depends on ARM_V8R
>>>>
>>>> It is still pretty unclear to me what would be the difference between
>>>> HAS_MPU and ARM_V8R.
>>>>
>>>
>>> If we don't want to support non-MPU supported Armv8-R, I think they are
>>> the same. IMO, non-MPU supported Armv8-R is meaningless to Xen.
>> OOI, why do you think this is meaningless?
> 
> If there is an Armv8-R board without an EL2 MPU, how can we protect Xen?

So what you call EL2 MPU is an MPU that is following the Arm 
specification. In theory, you could have a proprietary mechanism for that.

So the question is whether a system not following the Arm specification 
is allowed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:08:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:08:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480492.744937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6IT-00074p-BI; Wed, 18 Jan 2023 11:08:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480492.744937; Wed, 18 Jan 2023 11:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6IT-00074i-7p; Wed, 18 Jan 2023 11:08:01 +0000
Received: by outflank-mailman (input) for mailman id 480492;
 Wed, 18 Jan 2023 11:08:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Njh=5P=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pI6IR-00074c-UG
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:07:59 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5d12d4a8-9720-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 12:07:57 +0100 (CET)
Received: by mail-wr1-x431.google.com with SMTP id b7so7549851wrt.3
 for <xen-devel@lists.xenproject.org>; Wed, 18 Jan 2023 03:07:57 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 v11-20020a5d678b000000b0029e1aa67fd2sm13252944wru.115.2023.01.18.03.07.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 18 Jan 2023 03:07:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d12d4a8-9720-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=9LuFo2/WKeMOeOKp72YzVpDSENQ724G+WuPmC4Yaz5w=;
        b=WXmNLhDQl/BmaHPahkpbHgotmJC3Vpp4Zzzyys5TAS4LnbiZElcfDFdst9Nli1SwQ9
         rRiGj6Hv7cDWelX8QKY2C8E7qBogXzaWaMcwCrY7WXajXcR+XyDhRlSc+6SW6+dm1O+K
         gVfn3U8Cu8iXzzkoEUbsXlX1fuTHiJ6K46/u23EAnppBJUeLxmGHw89xgIwKUwJtnYDo
         hWQwIC0M2JL8N6TSBRwD86nyAFLazzYBGD70DOeAcHuFF1nZanf66ceHESGwRFfhFh3k
         pRcbbyX3eB+pzLqXI7qhDpZ/XUEEdhrajVvSNtT8ef1+cWvWGpCwRXEhGdOhstSjSuU+
         7rUQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=9LuFo2/WKeMOeOKp72YzVpDSENQ724G+WuPmC4Yaz5w=;
        b=ZH22g1gBSA9K+DxdB8r/Gg/3TUvER+egG4KVbb6xHw8GjEW7KkSaRuFXSc4Xdi6JG2
         0vOURuljTlxTLE5RBgJMhgdTsaZgBAQlEjsqQ9C00Iiq4ZbTp9czGy+WR2As7BdcM/Yl
         eg85z9LlLw51LJP4XqApG0cl04YMEX/ABlh/sXYmxpPwHQr0DpPm4j1isBtvbKvouu7Z
         ComXDQoirhvQglq0ahEHJ4yz1UbNnip4c7mTmQ9qNipIDC9UDTA+8AtXdvK+a+OTIn4O
         /iBx3vtFQlI0ajVvdr+GOVYXgDv9tokmw1fp/pvTlyhUvaMqtd7KavTkCU1ioKp4x8AH
         WIhg==
X-Gm-Message-State: AFqh2krsWMew1+/Ma8nUR2V5NSooDfnChFST7bEU0JBc395dyQdEU/Et
	ojoWS/IgKrYT/Hb1RY6c7Is=
X-Google-Smtp-Source: AMrXdXtycVJxaz2q8PdarV8lvWoEm6Cbeb8+Bj1ajDwafFstXBnU/CWtY7j5VrxjTlSDHByjNiwIZg==
X-Received: by 2002:a5d:5451:0:b0:2ba:4ee8:d708 with SMTP id w17-20020a5d5451000000b002ba4ee8d708mr5824622wrv.32.1674040077034;
        Wed, 18 Jan 2023 03:07:57 -0800 (PST)
Message-ID: <e7d325b0005dd22d59ad1be442df9c9524275cd6.camel@gmail.com>
Subject: Re: [PATCH v4 3/4] xen/riscv: introduce early_printk basic stuff
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida
 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
Date: Wed, 18 Jan 2023 13:07:55 +0200
In-Reply-To: <d0cabe82-315e-408c-7364-33e2b5093ee6@citrix.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
	 <915bd184c6648a1a3bf0ac6a79b5274972bb33dd.1673877778.git.oleksii.kurochko@gmail.com>
	 <d0cabe82-315e-408c-7364-33e2b5093ee6@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Tue, 2023-01-17 at 23:57 +0000, Andrew Cooper wrote:
> On 16/01/2023 2:39 pm, Oleksii Kurochko wrote:
> > diff --git a/xen/arch/riscv/Kconfig.debug
> > b/xen/arch/riscv/Kconfig.debug
> > index e69de29bb2..e139e44873 100644
> > --- a/xen/arch/riscv/Kconfig.debug
> > +++ b/xen/arch/riscv/Kconfig.debug
> > @@ -0,0 +1,6 @@
> > +config EARLY_PRINTK
> > +    bool "Enable early printk"
> > +    default DEBUG
> > +    help
> > +
> > +      Enables early printk debug messages
>
> Kconfig indentation is a little hard to get used to.
>
> It's one tab for the main block, and one tab + 2 spaces for the help
> text.
>
> Also, drop the blank line after help.
>
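Applying that convention to the hunk above would give (one tab before the keywords, one tab plus two spaces for the help text, no blank line after help):

```kconfig
config EARLY_PRINTK
	bool "Enable early printk"
	default DEBUG
	help
	  Enables early printk debug messages
```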
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index fd916e1004..1a4f1a6015 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,3 +1,4 @@
> > +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> >  obj-$(CONFIG_RISCV_64) += riscv64/
> >  obj-y += sbi.o
> >  obj-y += setup.o
> > diff --git a/xen/arch/riscv/early_printk.c
> > b/xen/arch/riscv/early_printk.c
> > new file mode 100644
> > index 0000000000..6bc29a1942
> > --- /dev/null
> > +++ b/xen/arch/riscv/early_printk.c
> > @@ -0,0 +1,45 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * RISC-V early printk using SBI
> > + *
> > + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> > + */
> > +#include <asm/early_printk.h>
> > +#include <asm/sbi.h>
> > +
> > +/*
> > + * early_*() can be called from head.S with MMU-off.
> > + *
> > + * The following requiremets should be honoured for early_*() to
> > + * work correctly:
> > + *    It should use PC-relative addressing for accessing symbols.
> > + *    To achieve that GCC cmodel=medany should be used.
> > + */
> > +#ifndef __riscv_cmodel_medany
> > +#error "early_*() can be called from head.S before relocate so it
> > should not use absolute addressing."
> > +#endif
>
> This is incorrect.
>
> What *this* file is compiled with has no bearing on how head.S calls
> us.  The RISC-V documentation explaining __riscv_cmodel_medany vs
> __riscv_cmodel_medlow calls this point out specifically.  There's
> nothing you can put here to check that head.S gets compiled with
> medany.
>
> Right now, there's nothing in this file dependent on either mode, and
> it's not liable to change in the short term.  Furthermore, Xen isn't
> doing any relocation in the first place.
>
> We will want to support XIP in due course, and that will be compiled
> __riscv_cmodel_medlow, which is a fine and legitimate usecase.
>
>
> The build system sets the model up consistently.  All you are doing by
> putting this in is creating work that someone is going to have to delete
> for legitimate reasons in the future.
>
> > diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> > index 13e24e2fe1..9c9412152a 100644
> > --- a/xen/arch/riscv/setup.c
> > +++ b/xen/arch/riscv/setup.c
> > @@ -1,13 +1,17 @@
> >  #include <xen/compile.h>
> >  #include <xen/init.h>
> >  
> > +#include <asm/early_printk.h>
> > +
> >  /* Xen stack for bringing up the first CPU. */
> >  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
> >      __aligned(STACK_SIZE);
> >  
> >  void __init noreturn start_xen(void)
> >  {
> > -    for ( ;; )
> > +    early_printk("Hello from C env\n");
> > +
> > +    for ( ; ; )
>
> Rebasing error?
>
If you are referring to the added space between "; ;", then it is a
rebasing error. I will double-check it while working on the new version
of the patch series.
> ~Andrew
~Oleksii


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:16:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:16:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480498.744947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6QG-000065-5f; Wed, 18 Jan 2023 11:16:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480498.744947; Wed, 18 Jan 2023 11:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6QG-00005x-2r; Wed, 18 Jan 2023 11:16:04 +0000
Received: by outflank-mailman (input) for mailman id 480498;
 Wed, 18 Jan 2023 11:16:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f/We=5P=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pI6QE-00005r-AM
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:16:02 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on2042.outbound.protection.outlook.com [40.107.96.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b8a70fb-9721-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 12:15:59 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by BL0PR12MB4868.namprd12.prod.outlook.com (2603:10b6:208:1c4::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Wed, 18 Jan
 2023 11:15:56 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%4]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 11:15:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b8a70fb-9721-11ed-b8d1-410ff93cb8f0
Message-ID: <37719b71-8405-eefd-3bf5-95c7c8639e82@amd.com>
Date: Wed, 18 Jan 2023 11:15:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for
 address/size
To: Jan Beulich <jbeulich@suse.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, julien@xen.org,
 Wei Xu <xuwei5@hisilicon.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Rahul Singh <rahul.singh@arm.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-6-ayan.kumar.halder@amd.com>
 <926307d3-a354-be87-3885-90681dc5ae24@suse.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <926307d3-a354-be87-3885-90681dc5ae24@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0086.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2bd::19) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-OriginatorOrg: amd.com

Hi Jan,

On 18/01/2023 08:40, Jan Beulich wrote:
> On 17.01.2023 18:43, Ayan Kumar Halder wrote:
>> One should now be able to use 'paddr_t' to represent address and size.
>> Consequently, one should use 'PRIpaddr' as a format specifier for paddr_t.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>>
>> Changes from -
>>
>> v1 - 1. Rebased the patch.
>>
>>   xen/arch/arm/domain_build.c        |  9 +++++----
>>   xen/arch/arm/gic-v3.c              |  2 +-
>>   xen/arch/arm/platforms/exynos5.c   | 26 +++++++++++++-------------
>>   xen/drivers/char/exynos4210-uart.c |  2 +-
>>   xen/drivers/char/ns16550.c         |  8 ++++----
> Please make sure you Cc all maintainers.
Ack.
>
>> @@ -1166,7 +1166,7 @@ static const struct ns16550_config __initconst uart_config[] =
>>   static int __init
>>   pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>>   {
>> -    u64 orig_base = uart->io_base;
>> +    paddr_t orig_base = uart->io_base;
>>       unsigned int b, d, f, nextf, i;
>>   
>>       /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on bus 0. */
>> @@ -1259,7 +1259,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>>                       else
>>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
>>   
>> -                    uart->io_base = ((u64)bar_64 << 32) |
>> +                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
>>                                       (bar & PCI_BASE_ADDRESS_MEM_MASK);
>>                   }
> This looks wrong to me: You shouldn't blindly truncate to 32 bits. You need
> to refuse acting on 64-bit BARs with the upper address bits non-zero.

Yes, I was treating this like the others (where Xen does not check for
truncation when it gets the address/size from the device tree and
typecasts it to paddr_t).

However, in this case Xen is reading from PCI registers, so it needs
to check for truncation.

I think the following change should address this.

@@ -1180,6 +1180,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
                 unsigned int bar_idx = 0, port_idx = idx;
                 uint32_t bar, bar_64 = 0, len, len_64;
                 u64 size = 0;
+                uint64_t io_base = 0;
                 const struct ns16550_config_param *param = uart_param;

                 nextf = (f || (pci_conf_read16(PCI_SBDF(0, b, d, f),
@@ -1260,8 +1261,11 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
                     else
                         size = len & PCI_BASE_ADDRESS_MEM_MASK;

-                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
+                    io_base = ((u64)bar_64 << 32) |
                                     (bar & PCI_BASE_ADDRESS_MEM_MASK);
+
+                    uart->io_base = (paddr_t) io_base;
+                    ASSERT(uart->io_base == io_base); /* Detect truncation */
                 }
                 /* IO based */
                 else if ( !param->mmio && (bar & PCI_BASE_ADDRESS_SPACE_IO) )

>
> If you're already correcting logic even in code not used on Arm (which I
> appreciate), then there's actually also related command line handling which
> needs adjustment. The use of simple_strtoul() to obtain ->io_base is bogus -
> this then needs to be simple_strtoull() (perhaps in a separate prereq patch),
> and in the 32-bit-paddr case you'd again need to check for truncation (in the
> patch here).
Agreed this needs to be done in a separate prereq patch.
>
> While doing the review I've noticed this
>
>      uart->io_size = spcr->serial_port.bit_width;
>
> in ns16550_acpi_uart_init(). This was introduced in 17b516196c55 ("ns16550:
> add ACPI support for ARM only"), so Wei, Julien: Doesn't the right hand value
> need DIV_ROUND_UP(, 8) to convert from bit count to byte count?

Yes, I think it should be

uart->io_size = DIV_ROUND_UP(spcr->serial_port.bit_width, BITS_PER_BYTE);

However, Julien/Wei can confirm this.

- Ayan

>
> Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:23:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:23:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480507.744958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6XI-0001dj-FX; Wed, 18 Jan 2023 11:23:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480507.744958; Wed, 18 Jan 2023 11:23:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6XI-0001dc-CI; Wed, 18 Jan 2023 11:23:20 +0000
Received: by outflank-mailman (input) for mailman id 480507;
 Wed, 18 Jan 2023 11:23:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Njh=5P=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pI6XH-0001ck-QC
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:23:19 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 81759392-9722-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 12:23:17 +0100 (CET)
Received: by mail-wm1-x32d.google.com with SMTP id k16so3780892wms.2
 for <xen-devel@lists.xenproject.org>; Wed, 18 Jan 2023 03:23:17 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 o12-20020a05600c4fcc00b003daff80f16esm2216711wmq.27.2023.01.18.03.23.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 18 Jan 2023 03:23:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81759392-9722-11ed-b8d1-410ff93cb8f0
Message-ID: <79e2670cdc74454045e653bd62fb4815cb8a7eb3.camel@gmail.com>
Subject: Re: [PATCH v4 1/4] xen/riscv: introduce asm/types.h header file
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Wed, 18 Jan 2023 13:23:16 +0200
In-Reply-To: <d871f9e2-5f00-1f0d-3297-0084d4a4af27@suse.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
	 <2ce57f95f8445a4880e0992668a48ffe7c2f9732.1673877778.git.oleksii.kurochko@gmail.com>
	 <e00512a6-5d32-6dbf-4269-429532f8a852@suse.com>
	 <87107d8945c9f1513c305d115f24f488b87e088b.camel@gmail.com>
	 <d871f9e2-5f00-1f0d-3297-0084d4a4af27@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Tue, 2023-01-17 at 11:04 +0100, Jan Beulich wrote:
> On 17.01.2023 10:29, Oleksii wrote:
> > On Mon, 2023-01-16 at 15:59 +0100, Jan Beulich wrote:
> > > On 16.01.2023 15:39, Oleksii Kurochko wrote:
> > > > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > > > ---
> > > > Changes in V4:
> > > >     - Cleaned up the types in <asm/types.h>, retaining only the
> > > > necessary ones.
> > > >       The following types were removed, as they are defined in
> > > > <xen/types.h>:
> > > >       {__|}{u|s}{8|16|32|64}
> > >
> > > For one you still typedef u32 and u64. And imo correctly so, until
> > > we get around to moving the definition of basic types into
> > > xen/types.h. Plus I can't see how things build for you: xen/types.h
> > > expects __{u,s}<N>
> > It builds because nothing uses <xen/types.h> yet, which is why I
> > missed them, but you are right that __{u,s}<N> should be brought back.
> > It looks like {__,}{u,s}{8,16,32} are the same for all architectures
> > available in Xen, so could I move them to <xen/types.h> instead of
> > keeping them in <asm/types.h>?
>
> This next step isn't quite as obvious, i.e. has room for being
> contentious. In particular deriving fixed width types from C basic
> types is setting us up for future problems (especially in the
> context of RISC-V think of RV128). Therefore, if we touch and
> generalize this, I'd like to sanitize things at the same time.
>
> I'd then prefer to typedef {u,}int<N>_t by using either the "mode"
> attribute (requiring us to settle on a prereq of there always being
> 8 bits per char) or the compiler supplied __{U,}INT<N>_TYPE__
> (taking gcc 4.7 as a prereq; didn't check clang yet). Both would
> allow {u,}int64_t to also be put in the common header. Yet if e.g.
> a prereq assumption faced opposition, some other approach would
> need to be found. Plus using either of the named approaches has
> issues with the printf() format specifiers, for which I'm yet to
> figure out a solution (or maybe someone else knows a good way to
> deal with that; using compiler provided headers isn't an option
> afaict, as gcc provides stdint.h but not inttypes.h, but maybe
> glibc's simplistic approach is good enough - they're targeting
> far more architectures than we do and get away with that).
>
Thanks for the explanation.

Coming back to RISCV's <asm/types.h>, it looks like v2 of the patch
(https://lore.kernel.org/xen-devel/ca2674739cfa71cae0bf084a7b471ad4518026d3.1673278109.git.oleksii.kurochko@gmail.com/)
is the best option now: as far as I can see, some work is going on
around <xen/types.h>, and keeping a minimal set of types now will spare
us from removing unneeded types from RISCV's asm/types.h later.

Moreover, based on your patch [
https://lists.xen.org/archives/html/xen-devel/2023-01/msg00720.html ],
RISCV's <asm/types.h> can be empty for this patch series.

> Jan
~Oleksii


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:28:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:28:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480513.744970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6bu-0002FY-59; Wed, 18 Jan 2023 11:28:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480513.744970; Wed, 18 Jan 2023 11:28:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6bu-0002FR-1Z; Wed, 18 Jan 2023 11:28:06 +0000
Received: by outflank-mailman (input) for mailman id 480513;
 Wed, 18 Jan 2023 11:28:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BRoI=5P=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pI6bt-0002FL-8G
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:28:05 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2055.outbound.protection.outlook.com [40.107.13.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2adaf81f-9723-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 12:28:02 +0100 (CET)
Received: from AS8PR04CA0013.eurprd04.prod.outlook.com (2603:10a6:20b:310::18)
 by PAWPR08MB9493.eurprd08.prod.outlook.com (2603:10a6:102:2e9::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 11:27:59 +0000
Received: from AM7EUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:310:cafe::48) by AS8PR04CA0013.outlook.office365.com
 (2603:10a6:20b:310::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.20 via Frontend
 Transport; Wed, 18 Jan 2023 11:27:59 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT036.mail.protection.outlook.com (100.127.140.93) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.13 via Frontend Transport; Wed, 18 Jan 2023 11:27:59 +0000
Received: ("Tessian outbound baf1b7a96f25:v132");
 Wed, 18 Jan 2023 11:27:59 +0000
Received: from b74a11de5e96.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B9E75E56-92EC-423F-9279-C9C139F7B6C0.1; 
 Wed, 18 Jan 2023 11:27:53 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b74a11de5e96.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 11:27:53 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AS2PR08MB9413.eurprd08.prod.outlook.com (2603:10a6:20b:597::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Wed, 18 Jan
 2023 11:27:51 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 11:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2adaf81f-9723-11ed-b8d1-410ff93cb8f0
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jiamei Xie <Jiamei.Xie@arm.com>
Subject: RE: [PATCH v2 04/40] xen/arm: add an option to define Xen start
 address for Armv8-R
Thread-Topic: [PATCH v2 04/40] xen/arm: add an option to define Xen start
 address for Armv8-R
Thread-Index:
 AQHZJxAXLGszZRVrMECopJok59rQha6jR3wAgAA4E5CAAHUeAIAABBnwgAARHYCAAAZzkA==
Date: Wed, 18 Jan 2023 11:27:51 +0000
Message-ID:
 <PAXPR08MB74203FFE3B863DBA465A335A9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-5-Penny.Zheng@arm.com>
 <e406484a-aad3-4953-afdb-3159597ec998@xen.org>
 <PAXPR08MB7420A5C7F93F23F14C77C9BA9EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <7ffe5d34-614d-f2aa-cf87-c518917c970a@xen.org>
 <PAXPR08MB7420F43284FEC60BC88496709EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <b13c0904-3503-8894-8b14-64fcc717d50d@xen.org>
In-Reply-To: <b13c0904-3503-8894-8b14-64fcc717d50d@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: arm.com

SGkgSnVsaWVuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IFhlbi1k
ZXZlbCA8eGVuLWRldmVsLWJvdW5jZXNAbGlzdHMueGVucHJvamVjdC5vcmc+IE9uIEJlaGFsZiBP
Zg0KPiBKdWxpZW4gR3JhbGwNCj4gU2VudDogMjAyM+W5tDHmnIgxOOaXpSAxOTowMA0KPiBUbzog
V2VpIENoZW4gPFdlaS5DaGVuQGFybS5jb20+OyBQZW5ueSBaaGVuZyA8UGVubnkuWmhlbmdAYXJt
LmNvbT47IHhlbi0NCj4gZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmcNCj4gQ2M6IFN0ZWZhbm8g
U3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz47IEJlcnRyYW5kIE1hcnF1aXMNCj4g
PEJlcnRyYW5kLk1hcnF1aXNAYXJtLmNvbT47IFZvbG9keW15ciBCYWJjaHVrIDxWb2xvZHlteXJf
QmFiY2h1a0BlcGFtLmNvbT47DQo+IEppYW1laSBYaWUgPEppYW1laS5YaWVAYXJtLmNvbT4NCj4g
U3ViamVjdDogUmU6IFtQQVRDSCB2MiAwNC80MF0geGVuL2FybTogYWRkIGFuIG9wdGlvbiB0byBk
ZWZpbmUgWGVuIHN0YXJ0DQo+IGFkZHJlc3MgZm9yIEFybXY4LVINCj4gDQo+IEhpLA0KPiANCj4g
T24gMTgvMDEvMjAyMyAxMDoyMiwgV2VpIENoZW4gd3JvdGU6DQo+ID4+PiBBbHRob3VnaCBpdCBp
cyB1bmxpa2VseSB0aGF0IHZlbmRvcnMgdXNpbmcgdGhlIEFybXY4LVIgSVAgd2lsbCBkbyBzbywN
Cj4gaXQNCj4gPj4+IGlzIGluZGVlZCBhbiBvcHRpb24uIEluIHRoZSBJRCByZWdpc3RlciwgdGhl
cmUgYXJlIGFsc28gcmVsYXRlZCBiaXRzDQo+IGluDQo+ID4+PiBJRF9BQTY0TU1GUjBfRUwxIChN
U0FfZnJhYykgdG8gaW5kaWNhdGUgdGhpcy4NCj4gPj4+DQo+ID4+Pj4+IC0tLQ0KPiA+Pj4+PiAg
ICAgeGVuL2FyY2gvYXJtL0tjb25maWcgICAgICAgICAgIHwgIDggKysrKysrKysNCj4gPj4+Pj4g
ICAgIHhlbi9hcmNoL2FybS9wbGF0Zm9ybXMvS2NvbmZpZyB8IDE2ICsrKysrKysrKysrKystLS0N
Cj4gPj4+Pj4gICAgIDIgZmlsZXMgY2hhbmdlZCwgMjEgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlv
bnMoLSkNCj4gPj4+Pj4NCj4gPj4+Pj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS9LY29uZmln
IGIveGVuL2FyY2gvYXJtL0tjb25maWcNCj4gPj4+Pj4gaW5kZXggYWNlNzE3OGM5YS4uYzZiNmI2
MTJkMSAxMDA2NDQNCj4gPj4+Pj4gLS0tIGEveGVuL2FyY2gvYXJtL0tjb25maWcNCj4gPj4+Pj4g
KysrIGIveGVuL2FyY2gvYXJtL0tjb25maWcNCj4gPj4+Pj4gQEAgLTE0NSw2ICsxNDUsMTQgQEAg
Y29uZmlnIFRFRQ0KPiA+Pj4+PiAgICAgCSAgVGhpcyBvcHRpb24gZW5hYmxlcyBnZW5lcmljIFRF
RSBtZWRpYXRvcnMgc3VwcG9ydC4gSXQgYWxsb3dzDQo+ID4+Pj4gZ3Vlc3RzDQo+ID4+Pj4+ICAg
ICAJICB0byBhY2Nlc3MgcmVhbCBURUUgdmlhIG9uZSBvZiBURUUgbWVkaWF0b3JzIGltcGxlbWVu
dGVkIGluDQo+ID4+IFhFTi4NCj4gPj4+Pj4NCj4gPj4+Pj4gK2NvbmZpZyBYRU5fU1RBUlRfQURE
UkVTUw0KPiA+Pj4+PiArCWhleCAiWGVuIHN0YXJ0IGFkZHJlc3M6IGtlZXAgZGVmYXVsdCB0byB1
c2UgcGxhdGZvcm0gZGVmaW5lZA0KPiA+Pj4+IGFkZHJlc3MiDQo+ID4+Pj4+ICsJZGVmYXVsdCAw
DQo+ID4+Pj4+ICsJZGVwZW5kcyBvbiBBUk1fVjhSDQo+ID4+Pj4NCj4gPj4+PiBJdCBpcyBzdGls
bCBwcmV0dHkgdW5jbGVhciB0byBtZSB3aGF0IHdvdWxkIGJlIHRoZSBkaWZmZXJlbmNlIGJldHdl
ZW4NCj4gPj4+PiBIQVNfTVBVIGFuZCBBUk1fVjhSLg0KPiA+Pj4+DQo+ID4+Pg0KPiA+Pj4gSWYg
d2UgZG9uJ3Qgd2FudCB0byBzdXBwb3J0IG5vbi1NUFUgc3VwcG9ydGVkIEFybXY4LVIsIEkgdGhp
bmsgdGhleQ0KPiBhcmUNCj4gPj4gdGhlDQo+ID4+PiBzYW1lLiBJTU8sIG5vbi1NUFUgc3VwcG9y
dGVkIEFybXY4LVIgaXMgbWVhbmluZ2xlc3MgdG8gWGVuLg0KPiA+PiBPT0ksIHdoeSBkbyB5b3Ug
dGhpbmsgdGhpcyBpcyBtZWFuaW5nbGVzcz8NCj4gPg0KPiA+IElmIHRoZXJlIGlzIEFybXY4LVIg
Ym9hcmQgd2l0aG91dCBFTDIgTVBVLCBob3cgY2FuIHdlIHByb3RlY3QgWGVuPw0KPiANCj4gU28g
d2hhdCB5b3UgY2FsbCBFTDIgTVBVIGlzIGFuIE1QVSB0aGF0IGlzIGZvbGxvd2luZyB0aGUgQXJt
DQo+IHNwZWNpZmljYXRpb24uIEluIHRoZW9yeSwgeW91IGNvdWxkIGhhdmUgYSBwcm9wcmlldGFy
eSBtZWNoYW5pc20gZm9yIHRoYXQuDQo+IA0KPiBTbyB0aGUgcXVlc3Rpb24gaXMgd2hldGhlciBh
IHN5c3RlbSBub3QgZm9sbG93aW5nIHRoZSBBcm0gc3BlY2lmaWNhdGlvbg0KPiBpcyBhbGxvd2Vk
Lg0KPiANCg0KSSB0aGluayBubywgdGhlIFBNU0EgaXMgYW4gYXJjaGl0ZWN0dXJhbCBmZWF0dXJl
LCB0aGUgc3BlYyBjb250YWlucyBDUFUgYW5kIE1QVQ0KaW50ZXJmYWNlcy4gVmVuZG9ycyBjYW4g
aGF2ZSB0aGVpciBvd24gaGFyZHdhcmUgaW1wbGVtZW50YXRpb24sIGJ1dCBuZWVkIGZvbGxvdw0K
dGhlIEFybSBzcGVjLg0KDQpCdXQgSSBhZ3JlZSB0aGF0LCBoZXJlIHdlIGNvdWxkIGNoYW5nZSB0
byAiZGVwZW5kcyBvbiBIQVNfTVBVIiB3aGljaCB3aWxsIG1ha2UNCkl0IGVhc2llciB0byB1c2Vk
IGJ5IG90aGVyIEFybSBBcmNoaXRlY3R1cmUgb3Igb3RoZXIgYXJjaGl0ZWN0dXJlIGluIHRoZSBm
dXR1cmUuDQoNCkNoZWVycywNCldlaSBDaGVuDQoNCj4gQ2hlZXJzLA0KPiANCj4gLS0NCj4gSnVs
aWVuIEdyYWxsDQoNCg==


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:29:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:29:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480518.744981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6d4-0002nl-Fv; Wed, 18 Jan 2023 11:29:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480518.744981; Wed, 18 Jan 2023 11:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6d4-0002ne-Ci; Wed, 18 Jan 2023 11:29:18 +0000
Received: by outflank-mailman (input) for mailman id 480518;
 Wed, 18 Jan 2023 11:29:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Njh=5P=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pI6d3-0002nV-0R
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:29:17 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5718229a-9723-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 12:29:16 +0100 (CET)
Received: by mail-wm1-x32b.google.com with SMTP id
 f19-20020a1c6a13000000b003db0ef4dedcso1171857wmc.4
 for <xen-devel@lists.xenproject.org>; Wed, 18 Jan 2023 03:29:16 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 az22-20020adfe196000000b002bddaea7a0bsm15168213wrb.57.2023.01.18.03.29.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 18 Jan 2023 03:29:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5718229a-9723-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=qEV92wfwdsO0B1tLELn1ll5CtfMZ7h/ID1HNqTwBTNk=;
        b=Ry7CEBuLJsOBpq3zLak71M18QxVBjWnKe7nkY5XJPt6jUb35DlRGb0kv0Hnu18/frh
         dsmE7+ere3gEQlohlBtSgUMbwup34/mTqzail17i9d1O1+LzP8kdF6iDCP7a/LvITqES
         2oWCGPt7Io6c9g5+zgMg6ACIIv4Bl6sTT79veXYY32jAypZArmeW8VB6DNMNbwLglq/K
         5goZvXU+dJ3veWvB4aK64x1SFaZJyYb8aP7viC2QW2jwm/iMtG8b6kdrWIR1wTn0iecQ
         hnsbYwEnkcDbjqj9RTJZkLcLRpTpMk/OBHijwk0YpPWgpvDDlI/sCHE+uTiZFg8EBhP1
         x8pQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=qEV92wfwdsO0B1tLELn1ll5CtfMZ7h/ID1HNqTwBTNk=;
        b=famFdYIhxXP/6hhF1M68+j5LMGttPDHEO9n8ZFpmTTR0c1YtIZeOt/uUr1BvOX46+M
         n32dOdFnund7Ti/Tyj7D5+6CW/hmF0S/BdBmhGi9dbSdb/5xSeixuxWpOu2RuxImv5It
         wCK1wAeLEiFt1GXAKSDC2jEPR8Y6RosV9wmrX9FJ6xfHMnfKcRkqJTbMQVZohydy8bLx
         Z3rd0elkY0LlMaWcc3e6/LpMxAochBM4vZTthxnBmMUDiq/kkWVJAzPdxrVyktytDGcK
         +mLcFkcRvbn2EDvdnshJL2sPvmiF8VTK72EfdETWCFx9nTl9a4gy8MTFPEMVZp5/HN1i
         tKjQ==
X-Gm-Message-State: AFqh2kqvfVIKH4K6UZOUHp2WDdp8CAxGc/tWPVGF5yASfmdQLQWVpYY+
	/lLFGK/FcIFlOnnzIPvd6yU=
X-Google-Smtp-Source: AMrXdXsemPLV/JCeNTgzBZrHgiZpi5btTEVvLY9haEZVF3meE9elI/rJnif/2MJEW9iFff0yVWTV/w==
X-Received: by 2002:a05:600c:3c8a:b0:3da:2a78:d7a4 with SMTP id bg10-20020a05600c3c8a00b003da2a78d7a4mr6220743wmb.21.1674041355631;
        Wed, 18 Jan 2023 03:29:15 -0800 (PST)
Message-ID: <bdb3f2f5df270b081fa44eac9e1f6347e694ca0d.camel@gmail.com>
Subject: Re: [PATCH v4 2/4] xen/riscv: introduce sbi call to putchar to
 console
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <Andrew.Cooper3@citrix.com>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>,  Gianluca Guida <gianluca@rivosinc.com>, Bobby
 Eshleman <bobby.eshleman@gmail.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Date: Wed, 18 Jan 2023 13:29:14 +0200
In-Reply-To: <e5e38496-3dcd-3a42-6c2a-43ccb988caf3@suse.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
	 <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
	 <7918f456-14ff-77b2-3cdb-1e879e030b39@citrix.com>
	 <e5e38496-3dcd-3a42-6c2a-43ccb988caf3@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: base64
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

T24gV2VkLCAyMDIzLTAxLTE4IGF0IDA4OjM4ICswMTAwLCBKYW4gQmV1bGljaCB3cm90ZToKPiBP
biAxOC4wMS4yMDIzIDAwOjMyLCBBbmRyZXcgQ29vcGVyIHdyb3RlOgo+ID4gT24gMTYvMDEvMjAy
MyAyOjM5IHBtLCBPbGVrc2lpIEt1cm9jaGtvIHdyb3RlOgo+ID4gPiArc3RydWN0IHNiaXJldCBz
YmlfZWNhbGwodW5zaWduZWQgbG9uZyBleHQsIHVuc2lnbmVkIGxvbmcgZmlkLAo+ID4gPiArwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB1bnNpZ25lZCBsb25n
IGFyZzAsIHVuc2lnbmVkIGxvbmcgYXJnMSwKPiA+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgdW5zaWduZWQgbG9uZyBhcmcyLCB1bnNpZ25lZCBsb25n
IGFyZzMsCj4gPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIHVuc2lnbmVkIGxvbmcgYXJnNCwgdW5zaWduZWQgbG9uZyBhcmc1KQo+ID4gPiArewo+ID4g
PiArwqDCoMKgIHN0cnVjdCBzYmlyZXQgcmV0Owo+ID4gPiArCj4gPiA+ICvCoMKgwqAgcmVnaXN0
ZXIgdW5zaWduZWQgbG9uZyBhMCBhc20gKCJhMCIpID0gYXJnMDsKPiA+ID4gK8KgwqDCoCByZWdp
c3RlciB1bnNpZ25lZCBsb25nIGExIGFzbSAoImExIikgPSBhcmcxOwo+ID4gPiArwqDCoMKgIHJl
Z2lzdGVyIHVuc2lnbmVkIGxvbmcgYTIgYXNtICgiYTIiKSA9IGFyZzI7Cj4gPiA+ICvCoMKgwqAg
cmVnaXN0ZXIgdW5zaWduZWQgbG9uZyBhMyBhc20gKCJhMyIpID0gYXJnMzsKPiA+ID4gK8KgwqDC
oCByZWdpc3RlciB1bnNpZ25lZCBsb25nIGE0IGFzbSAoImE0IikgPSBhcmc0Owo+ID4gPiArwqDC
oMKgIHJlZ2lzdGVyIHVuc2lnbmVkIGxvbmcgYTUgYXNtICgiYTUiKSA9IGFyZzU7Cj4gPiA+ICvC
oMKgwqAgcmVnaXN0ZXIgdW5zaWduZWQgbG9uZyBhNiBhc20gKCJhNiIpID0gZmlkOwo+ID4gPiAr
wqDCoMKgIHJlZ2lzdGVyIHVuc2lnbmVkIGxvbmcgYTcgYXNtICgiYTciKSA9IGV4dDsKPiA+ID4g
Kwo+ID4gPiArwqDCoMKgIGFzbSB2b2xhdGlsZSAoImVjYWxsIgo+ID4gPiArwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgOiAiK3IiIChhMCksICIrciIgKGExKQo+ID4gPiArwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgOiAiciIgKGEyKSwgInIiIChhMyksICJyIiAoYTQpLCAiciIgKGE1KSwg
InIiCj4gPiA+IChhNiksICJyIiAoYTcpCj4gPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oCA6ICJtZW1vcnkiKTsKPiA+IAo+ID4gSW5kZW50YXRpb24uwqAgRWFjaCBjb2xvbiB3YW50cyA0
IG1vcmUgc3BhY2VzIGluIGZyb250IG9mIGl0Lgo+IAo+IFBsdXMsIGlmIHdlJ3JlIGFscmVhZHkg
dGFsa2luZyBvZiBzdHlsZSwgYmxhbmtzIGFyZSBtaXNzaW5nCj4gaW1tZWRpYXRlbHkgaW5zaWRl
Cj4gdGhlIG91dGVybW9zdCBwYXJlbnRoZXNlcywgcmVxdWlyaW5nIHlldCBvbmUgbW9yZSBzcGFj
ZSBvZgo+IGluZGVudGF0aW9uIG9uIHRoZQo+IHN1YnNlcXVlbnQgbGluZXMuCj4gClRoYW5rcyBB
bmRyZXcgYW5kIEphbiBmb3IgdGhlIGNvbW1lbnRzLiBJJ2xsIHRha2UgdGhlbSBpbnRvIGFjY291
bnQuCj4gSmFuCj4gCgo=



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:41:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:41:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480525.744992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6oD-00058Q-Fv; Wed, 18 Jan 2023 11:40:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480525.744992; Wed, 18 Jan 2023 11:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6oD-00058J-Ch; Wed, 18 Jan 2023 11:40:49 +0000
Received: by outflank-mailman (input) for mailman id 480525;
 Wed, 18 Jan 2023 11:40:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BRoI=5P=arm.com=Wei.Chen@srs-se1.protection.inumbo.net>)
 id 1pI6oC-00058A-65
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:40:48 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2076.outbound.protection.outlook.com [40.107.22.76])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f296fb4d-9724-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 12:40:46 +0100 (CET)
Received: from FR0P281CA0104.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a9::15)
 by PR3PR08MB5579.eurprd08.prod.outlook.com (2603:10a6:102:82::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 11:40:36 +0000
Received: from VI1EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:d10:a9:cafe::f6) by FR0P281CA0104.outlook.office365.com
 (2603:10a6:d10:a9::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6023.12 via Frontend
 Transport; Wed, 18 Jan 2023 11:40:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT060.mail.protection.outlook.com (100.127.144.243) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.24 via Frontend Transport; Wed, 18 Jan 2023 11:40:35 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Wed, 18 Jan 2023 11:40:34 +0000
Received: from eb0c4ce55697.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 950E2FF8-548A-4B55-AA88-F7C35CD25286.1; 
 Wed, 18 Jan 2023 11:40:28 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eb0c4ce55697.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Jan 2023 11:40:28 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com (2603:10a6:102:2b9::9)
 by AS2PR08MB8904.eurprd08.prod.outlook.com (2603:10a6:20b:5f8::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.19; Wed, 18 Jan
 2023 11:40:26 +0000
Received: from PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b]) by PAXPR08MB7420.eurprd08.prod.outlook.com
 ([fe80::72d6:6a74:b637:cc5b%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 11:40:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f296fb4d-9724-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ey8Su89gFdov4CFRiSw7lZFecMSHkeLpHbQMhmfj8ek=;
 b=GY4t80OB66Xadp1ZQAlDhCRYmMK9RoT5HoSrA6eelIYK3loYHOsjKtotb/GfgrG15spowR8PlPJcxkVznGAmy8w2K8yNyKT+KM1UPdv3WW2lRAeAprALLySeokatYlcZ5+9I22pXWFEdLuBR/BenMkcc5zgtQl5Cu/WNV2fS7sQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DFj3OQWu0riQuYO/CxZcvC1pw8HjF1u86FnB4xN4QdZBsnWIXFvQy9b9Qnbi58gHnZ12bgy4nV6+X0hhAxj5BYaFMwAawcsw/cFaHsVZ8W46hK+p2i/knmISmWCdCqDcZFqCZZ3VLOUQ4Nz28Fd9HdPaHzjR3Jk1aFRj3FddhDvv/nS6SnWOo3iSiSp+vTNKFgzupULuY/4CSzgZ+422liRocf8ViwXytf11JSbVc841Jw8Bku1C2tDlPSk3puPkygCqY2X6XRTRxsS/+HE5kqV9i1fOaOGYa7ifbtx2leROcWrFtKFsdXoinB9Yb0fM0qDHEWnBJejBDLC8cYiglw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ey8Su89gFdov4CFRiSw7lZFecMSHkeLpHbQMhmfj8ek=;
 b=Ee79sJFJMaixVbjFipOvca/B51FnQtjdKGuqnLFotai+HwYV29YtqA3e3ZEf2ya5pC8NRoJd4B02aHNCmUMgpmqxPHZQyimTD1SUKXSsOB/8zanhszO84EEP33YdhtpWskPmYeAQgDkAdirWyfoqVlRMwwu7ADAH2s76WtqLi9CtEQv+By8YhL4FYHHLe2339aCBfFvacAX+xLwPxcXBYg4pcYrjg5WaRGN09D8gSiPtEWfC14lGmWJV8r3H5FwSj54vBiIkHJvKpJzBhm3CaYAiZgK05Xfj5qiRlPDQameBjUjagOUgFCN7zA4goiA19a4Ln0ptcOGWvpZmaNOvJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ey8Su89gFdov4CFRiSw7lZFecMSHkeLpHbQMhmfj8ek=;
 b=GY4t80OB66Xadp1ZQAlDhCRYmMK9RoT5HoSrA6eelIYK3loYHOsjKtotb/GfgrG15spowR8PlPJcxkVznGAmy8w2K8yNyKT+KM1UPdv3WW2lRAeAprALLySeokatYlcZ5+9I22pXWFEdLuBR/BenMkcc5zgtQl5Cu/WNV2fS7sQ=
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity map
 sections
Thread-Topic: [PATCH v2 07/40] xen/arm64: add .text.idmap for Xen identity map
 sections
Thread-Index: AQHZJxAY2FhaLGF+HUqxg8o4//gxhq6jTaOAgAAkT4CAAJbRAIAAC0qw
Date: Wed, 18 Jan 2023 11:40:26 +0000
Message-ID:
 <PAXPR08MB7420B12707A7D82675BCFC419EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-8-Penny.Zheng@arm.com>
 <4b817b65-f558-b4df-c7fd-242a04e59a59@xen.org>
 <PAXPR08MB742061C5E8ED73BD43FEF9599EC79@PAXPR08MB7420.eurprd08.prod.outlook.com>
 <a5560a16-60d8-dc75-994e-c8719721bf74@xen.org>
In-Reply-To: <a5560a16-60d8-dc75-994e-c8719721bf74@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 69B0EDDB05189244B2A6B0CB0A92293F.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	PAXPR08MB7420:EE_|AS2PR08MB8904:EE_|VI1EUR03FT060:EE_|PR3PR08MB5579:EE_
X-MS-Office365-Filtering-Correlation-Id: 97d4e6d2-2329-4dbc-e168-08daf948d04a
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 5fJxhUxIZQ0JIGC0C6U6W2wBvZyhTFg+R13Fh/QqYd0EcQ95sSwAaxljZyQrm6YAzRW8lyLvaGKNwYwFKXclLgdFI448amZ0A1KFp0HyiXGuK4dn7IilGyQg4SUHeZqcJD2akVyUFaiDDMiBFbWNRHxVpMM8ydFU5FaLdmR8tugYV3M+xjucv9dodW5x0WwJknLTEprsM+MdkZOjrc6jXJuJcGUuMaA5Vxf80uZts3rbSdz7vtDEw1KB2Wy6nsPrx2jziGr5tkUgloAhAZIoHNZPMCFil6J1MepHs7WHRZD7NlStmt+1sPUBLBEmrh/vClCwaC1zfwGrP0Nrb5E82FhqBYC6w6vosmTUbE+FAATDznSEmw2RlJ9hPv6Yu0sIGmeROk7V1ir0n0HGC7avnEx++mBzIZ0gQYQcJTGkH9D8FcwJxBPWGgdAvIWQA6hkVsaX8UXM1t+hXX+vgXqRl/FdkM2eghcN7iY+a+m1hBydkcKHG/a6enqoxptEDV/lu7D9qY8ieaQOHjXj7ONuH8+S8N5S0nt+pc/WeXCc6sNs0z7xnWOZbfy0L7aRxcoz8m0m6y+IQhtHASiwt0dpZt9mGGDBOtF3IwzZUnqaF27VkFJV+CZGQJCSGj6xVRI2IQnQmv6HAw/tGlcwPTmE346RoH/8bQknL238tH6VFvVywORqmZIvBRY2fH88KB4H9BFqdQ1LgomDyjOjcOtJ0bwcpVMNqs9o9ovPoZFz5pw=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PAXPR08MB7420.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(346002)(376002)(39860400002)(396003)(136003)(366004)(451199015)(33656002)(55016003)(54906003)(83380400001)(66446008)(41300700001)(64756008)(5660300002)(4326008)(66946007)(8936002)(52536014)(478600001)(8676002)(186003)(9686003)(66476007)(26005)(66556008)(110136005)(316002)(7696005)(86362001)(71200400001)(76116006)(38100700002)(38070700005)(122000001)(6506007)(2906002)(142923001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8904
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	cd824507-9de7-4617-796c-08daf948cac9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GifZulgwS0srZlB6kcxgaJPkiZQFxKPB9f0r7yhl8UFK5wZ21bEWxcmBzl0RJGM+0y/jpKSJ1xr7E6BWp+PAlSAG1FhmvZggVFmSKVMh//s74cvKfiAdo6FMESWS8uwuGHMazc8LsKPUu/hCqnc6fzjftuJwoWkKTOQ8Xh9xd7qbHDdVyvT75t1EbKREMgE4E7b719fxdlqVPPBgIi8mfZPOvHzIPRwS+FzReH1sMOK0aS0V4eo2iH3Bwh9QbHSVbR1SQoRGVBVbTbM30Rxl3iGDspp3qFS3rPrv4dT44UhEiJKTdGbPuMUNaIGDCE5jh5Wj1p3z7M0uYdF+0UHasMk5XeqEVPgpXEXDZgO1ASPrVEzI1O+o+G10Iud6EUBTuH2PMm10UvOboEx6fS2/0uVaTLsUlMkFjMovm0QeJCo4DUzVoEZnDRJBho5DhOtHxKDQZ3gEhVMTROdOSTmcDk05huRa9PLCPzQ+h1S5l/VbIkFXqmgujpPgN78fJvwebb6fZaxWxxnu9QapNfIeUkLoHYUsU0UO7DAwILUnEXulKj0r8E7LSRCYAnszpAkXQ6i6UsVm6HEnbJarx7DQJrDwWR8kfuipq9kqdDKpqWi1iyBGk8+I/vaXJjSs/MD1Vnw2uft7ho83MTNGKY5IZZkYRS0cirUUAjz4Mr2PrVY+n3lbWvqY4o5b9aWDc/UgvQi5vD7vVgTL7m71yaqzDGrCQzVdxZcZS75BFkXB8Qg=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230022)(4636009)(396003)(376002)(136003)(346002)(39860400002)(451199015)(40470700004)(36840700001)(46966006)(52536014)(4326008)(41300700001)(107886003)(8676002)(70586007)(8936002)(70206006)(40460700003)(36860700001)(336012)(2906002)(82310400005)(33656002)(5660300002)(316002)(54906003)(110136005)(40480700001)(55016003)(7696005)(86362001)(9686003)(478600001)(186003)(81166007)(26005)(6506007)(356005)(82740400003)(83380400001)(47076005)(142923001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 11:40:35.2470
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 97d4e6d2-2329-4dbc-e168-08daf948d04a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5579

SGkgSnVsaWVuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+ID4+Pg0KPiA+Pj4g
SW4gdGhpcyBwYXRjaCB3ZSBpbnRyb2R1Y2UgLnRleHQuaWRtYXAgdG8gaGVhZF9tbXUuUywgYW5k
DQo+ID4+PiBhZGQgdGhpcyBzZWN0aW9uIGFmdGVyIC50ZXh0LmhlYWRlci4gdG8gZW5zdXJlIGNv
ZGUgb2YNCj4gPj4+IGhlYWRfbW11LlMgYWZ0ZXIgdGhlIGNvZGUgb2YgaGVhZGVyLlMuDQo+ID4+
Pg0KPiA+Pj4gQWZ0ZXIgdGhpcywgd2Ugd2lsbCBzdGlsbCBpbmNsdWRlIHNvbWUgY29kZSB0aGF0
IGRvZXMgbm90DQo+ID4+PiBiZWxvbmcgdG8gaWRlbnRpdHkgbWFwIGJlZm9yZSBfZW5kX2Jvb3Qu
IEJlY2F1c2Ugd2UgaGF2ZQ0KPiA+Pj4gbW92ZWQgX2VuZF9ib290IHRvIGhlYWRfbW11LlMuDQo+
ID4+DQo+ID4+IEkgZGlzbGlrZSB0aGlzIGFwcHJvYWNoIGJlY2F1c2UgeW91IGFyZSBleHBlY3Rp
bmcgdGhhdCBvbmx5IGhlYWRfbW11LlMNCj4gPj4gd2lsbCBiZSBwYXJ0IG9mIC50ZXh0LmlkbWFw
LiBJZiBpdCBpcyBub3QsIGV2ZXJ5dGhpbmcgY291bGQgYmxvdyB1cA0KPiBhZ2Fpbi4NCj4gPj4N
Cj4gPg0KPiA+IEkgYWdyZWUuDQo+ID4NCj4gPj4gVGhhdCBzYWlkLCBpZiB5b3UgbG9vayBhdCBz
dGFnaW5nLCB5b3Ugd2lsbCBub3RpY2UgdGhhdCBub3cgX2VuZF9ib290DQo+IGlzDQo+ID4+IGRl
ZmluZWQgaW4gdGhlIGxpbmtlciBzY3JpcHQgdG8gYXZvaWQgYW55IGlzc3VlLg0KPiA+Pg0KPiA+
DQo+ID4gU29ycnksIEkgYW0gbm90IHF1aXRlIGNsZWFyIGFib3V0IHRoaXMgY29tbWVudC4gVGhl
IF9lbmRfYm9vdCBvZg0KPiBvcmlnaW5hbA0KPiA+IHN0YWdpbmcgYnJhbmNoIGlzIGRlZmluZWQg
aW4gaGVhZC5TLiBBbmQgSSBhbSBub3QgcXVpdGUgc3VyZSBob3cgdGhpcw0KPiA+IF9lbmRfYm9v
dCBzb2x2ZSBtdWx0aXBsZSBmaWxlcyBjb250YWluIGlkbWFwIGNvZGUuDQo+IA0KPiBJZiB5b3Ug
bG9vayBhdCB0aGUgbGF0ZXN0IHN0YWdpbmcsIHRoZXJlIGlzIGEgY29tbWl0ICgyMjllYmQ1MTdi
OWQpIHRoYXQNCj4gbm93IGRlZmluZSBfZW5kX2Jvb3QgaW4gdGhlIGxpbmtlciBzY3JpcHQuDQo+
IA0KPiBUaGUgLnRleHQuaWRtYXAgc2VjdGlvbiBjYW4gYmUgYWRkZWQgYmVmb3JlIHRoZSBkZWZp
bml0aW9uIG9mIF9lbmRfYm9vdC4NCj4gDQoNCk9oLCBteSBicmFuY2ggd2FzIGEgbGl0dGxlIG9s
ZCwgSSBoYXZlIHNlZW4gdGhpcyBuZXcgZGVmaW5pdGlvbiBpbiB4ZW4ubGQuUw0KYWZ0ZXIgSSB1
cGRhdGUgdGhlIGJyYW5jaC4gSSB1bmRlcnN0YW5kIG5vdy4NCg0KPiA+DQo+ID4gQ2hlZXJzLA0K
PiA+IFdlaSBDaGVuDQo+ID4NCj4gPj4+IFRoYXQgbWVhbnMgYWxsIGNvZGUgaW4gaGVhZC5TDQo+
ID4+PiB3aWxsIGJlIGluY2x1ZGVkIGJlZm9yZSBfZW5kX2Jvb3QuIEluIHRoaXMgcGF0Y2gsIHdl
IGFsc28NCj4gPj4+IGFkZGVkIC50ZXh0IGZsYWcgaW4gdGhlIHBsYWNlIG9mIG9yaWdpbmFsIF9l
bmRfYm9vdCBpbiBoZWFkLlMuDQo+ID4+PiBBbGwgdGhlIGNvZGUgYWZ0ZXIgLnRleHQgaW4gaGVh
ZC5TIHdpbGwgbm90IGJlIGluY2x1ZGVkIGluDQo+ID4+PiBpZGVudGl0eSBtYXAgc2VjdGlvbi4N
Cj4gPj4+DQo+ID4+PiBTaWduZWQtb2ZmLWJ5OiBXZWkgQ2hlbiA8d2VpLmNoZW5AYXJtLmNvbT4N
Cj4gPj4+IC0tLQ0KPiA+Pj4gdjEgLT4gdjI6DQo+ID4+PiAxLiBOZXcgcGF0Y2guDQo+ID4+PiAt
LS0NCj4gPj4+ICAgIHhlbi9hcmNoL2FybS9hcm02NC9oZWFkLlMgICAgIHwgNiArKysrKysNCj4g
Pj4+ICAgIHhlbi9hcmNoL2FybS9hcm02NC9oZWFkX21tdS5TIHwgMiArLQ0KPiA+Pj4gICAgeGVu
L2FyY2gvYXJtL3hlbi5sZHMuUyAgICAgICAgfCAxICsNCj4gPj4+ICAgIDMgZmlsZXMgY2hhbmdl
ZCwgOCBpbnNlcnRpb25zKCspLCAxIGRlbGV0aW9uKC0pDQo+ID4+Pg0KPiA+Pj4gZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL2FybS9hcm02NC9oZWFkLlMgYi94ZW4vYXJjaC9hcm0vYXJtNjQvaGVhZC5T
DQo+ID4+PiBpbmRleCA1Y2ZhNDcyNzliLi43ODJiZDFmOTRjIDEwMDY0NA0KPiA+Pj4gLS0tIGEv
eGVuL2FyY2gvYXJtL2FybTY0L2hlYWQuUw0KPiA+Pj4gKysrIGIveGVuL2FyY2gvYXJtL2FybTY0
L2hlYWQuUw0KPiA+Pj4gQEAgLTQ2Niw2ICs0NjYsMTIgQEAgZmFpbDogICBQUklOVCgiLSBCb290
IGZhaWxlZCAtXHJcbiIpDQo+ID4+PiAgICAgICAgICAgIGIgICAgIDFiDQo+ID4+PiAgICBFTkRQ
Uk9DKGZhaWwpDQo+ID4+Pg0KPiA+Pj4gKy8qDQo+ID4+PiArICogRm9yIHRoZSBjb2RlIHRoYXQg
ZG8gbm90IG5lZWQgaW4gaW5kZW50aXR5IG1hcCBzZWN0aW9uLA0KPiA+Pj4gKyAqIHdlIHB1dCB0
aGVtIGJhY2sgdG8gbm9ybWFsIC50ZXh0IHNlY3Rpb24NCj4gPj4+ICsgKi8NCj4gPj4+ICsuc2Vj
dGlvbiAudGV4dCwgImF4IiwgJXByb2diaXRzDQo+ID4+PiArDQo+ID4+DQo+ID4+IEkgd291bGQg
YXJndWUgdGhhdCBwdXRzIHdhbnRzIHRvIGJlIHBhcnQgb2YgdGhlIGlkbWFwLg0KPiA+Pg0KPiA+
DQo+ID4gSSBhbSBvayB0byBtb3ZlIHB1dHMgdG8gaWRtYXAuIEJ1dCBmcm9tIHRoZSBvcmlnaW5h
bCBoZWFkLlMsIHB1dHMgaXMNCj4gPiBwbGFjZWQgYWZ0ZXIgX2VuZF9ib290LCBhbmQgZnJvbSB0
aGUgeGVuLmxkLlMsIHdlIGNhbiBzZWUgaWRtYXAgaXMNCj4gPiBhcmVhIGlzIHRoZSBzZWN0aW9u
IG9mICJfZW5kX2Jvb3QgLSBzdGFydCIuDQo+IA0KPiBUaGUgb3JpZ2luYWwgcG9zaXRpb24gb2Yg
X2VuZF9ib290IGlzIHdyb25nLiBJdCBkaWRuJ3QgdGFrZSBpbnRvIGFjY291bnQNCj4gdGhlIGxp
dGVyYWwgcG9vbCAodGhlcmUgYXJlIGF0IHRoZSBlbmQgb2YgdGhlIHVuaXQpLiBTbyB0aGV5IHdv
dWxkIGJlDQo+IHBhc3QgX2VuZF9ib290Lg0KPiANCg0KT2suDQoNCj4gPiBUaGUgcmVhc29uIG9m
IG1vdmluZyBwdXRzDQo+ID4gdG8gaWRtYXAgaXMgYmVjYXVzZSB3ZSdyZSB1c2luZyBpdCBpbiBp
ZG1hcD8NCj4gDQo+IEkgZ3Vlc3MgaXQgZGVwZW5kcyBvZiB3aGF0IGlkbWFwIHJlYWxseSBtZWFu
IGhlcmUuIElmIHlvdSBvbmx5IGludGVycHJldA0KPiBhcyB0aGUgTU1VIGlzIG9uIGFuZCBWQSA9
PSBQQS4gVGhlbiBub3QgeWV0IChJIHdhcyB0aGlua2luZyB0byBpbnRyb2R1Y2UNCj4gYSBmZXcg
Y2FsbHMpLg0KPiANCj4gSWYgeW91IGFsc28gaW5jbHVkZSB0aGUgTU1VIG9mZi4gVGhlbiB5ZXMu
DQo+IA0KPiBBbHNvLCBpbiB0aGUgY29udGV4dCBvZiBjYWNoZSBjb2xvcmluZywgd2Ugd2lsbCBu
ZWVkIHRvIGhhdmUgYQ0KPiB0cmFtcG9saW5lIGZvciBjYWNoZSBjb2xvcmluZy4gU28gaXQgd291
bGQgYmUgYmV0dGVyIHRvIGtlZXAgZXZlcnl0aGluZw0KPiBjbG9zZSB0b2dldGhlciBhcyBpdCBp
cyBlYXNpZXIgdG8gY29weS4NCj4gDQoNClVuZGVyc3RhbmQsIHRoYW5rcyENCg0KQ2hlZXJzLA0K
V2VpIENoZW4NCg0KPiBDaGVlcnMsDQo+IA0KPiAtLQ0KPiBKdWxpZW4gR3JhbGwNCg==


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:46:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:46:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480532.745003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6ti-0005p3-6V; Wed, 18 Jan 2023 11:46:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480532.745003; Wed, 18 Jan 2023 11:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI6ti-0005ow-2n; Wed, 18 Jan 2023 11:46:30 +0000
Received: by outflank-mailman (input) for mailman id 480532;
 Wed, 18 Jan 2023 11:46:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Si3C=5P=linux.intel.com=kirill.shutemov@srs-se1.protection.inumbo.net>)
 id 1pI6tg-0005oa-2W
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:46:28 +0000
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id baf9b2f8-9725-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 12:46:24 +0100 (CET)
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 Jan 2023 03:46:21 -0800
Received: from semenova-mobl1.ccr.corp.intel.com (HELO box.shutemov.name)
 ([10.252.53.249])
 by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 Jan 2023 03:46:17 -0800
Received: by box.shutemov.name (Postfix, from userid 1000)
 id 83581104CC6; Wed, 18 Jan 2023 14:46:14 +0300 (+03)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: baf9b2f8-9725-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1674042384; x=1705578384;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=JoF9vVe1tkCvj804j2tCzb7TiNPsCz/UXlCdDCW6W10=;
  b=ln9RTQj+fzHtrQc6D4Uf/2oU3WFWIIWYGRUb7YRhAb7GZoQmPezx7cvz
   lxjDRulCEPJSh5z4vk9Zoy0bRRWZoeStOpWuiHwoDmsvoSDUpCwvqUu2c
   domhmt8+M2qEhNgY36LrTAdAq9jFdb4lz1IAS5QT7Ji1drmqpRnZxecMm
   zT3S154EwAhf3W7jbFtnMRhhDmdcMYjApmxRKD6ZYJST8nwKZAVlXVYhQ
   BgCVZhFQ09WmoHj0pica0acjuc4YPgRSDOkvenXhpdhntm49gwMTV2r/B
   6QrwQYvLKy6GIR+e9dhJlzW0oEpzJ6jNBBwZWckWiCZeakZeYnNtIl/ND
   A==;
X-IronPort-AV: E=McAfee;i="6500,9779,10593"; a="352210621"
X-IronPort-AV: E=Sophos;i="5.97,226,1669104000"; 
   d="scan'208";a="352210621"
X-IronPort-AV: E=McAfee;i="6500,9779,10593"; a="690160335"
X-IronPort-AV: E=Sophos;i="5.97,226,1669104000"; 
   d="scan'208";a="690160335"
Date: Wed, 18 Jan 2023 14:46:14 +0300
From: kirill.shutemov@linux.intel.com
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?utf-8?B?SsO2cmcgUsO2ZGVs?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>, jroedel@suse.de,
	dave.hansen@intel.com, kai.huang@intel.com
Subject: Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from
 secondary_startup_64()
Message-ID: <20230118114614.x2shsi6s3noiopux@box.shutemov.name>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.589522290@infradead.org>
 <Y8e/yKgVZgbqgvAG@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Y8e/yKgVZgbqgvAG@hirez.programming.kicks-ass.net>

On Wed, Jan 18, 2023 at 10:45:44AM +0100, Peter Zijlstra wrote:
> On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
> > The boot trampolines from trampoline_64.S have code flow like:
> > 
> >   16bit BIOS			SEV-ES				64bit EFI
> > 
> >   trampoline_start()		sev_es_trampoline_start()	trampoline_start_64()
> >     verify_cpu()			  |				|
> >   switch_to_protected:    <---------------'				v
> >        |							pa_trampoline_compat()
> >        v								|
> >   startup_32()		<-----------------------------------------------'
> >        |
> >        v
> >   startup_64()
> >        |
> >        v
> >   tr_start() := head_64.S:secondary_startup_64()
> > 
> > Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> > touch the APs), there is already a verify_cpu() invocation.
> 
> So supposedly TDX/ACPI-6.4 comes in on trampoline_startup64() for APs --
> can any of the TDX capable folks tell me if we need verify_cpu() on
> these?
> 
> Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
> force enable SSE on AMD/K7. Surely none of that is needed for these
> shiny new chips?

TDX does not have XD_DISABLE set, and it does not allow writes to the
IA32_MISC_ENABLE MSR (a write triggers #VE), so we should be safe.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 11:57:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 11:57:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480538.745014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI74M-0007Jo-5Z; Wed, 18 Jan 2023 11:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480538.745014; Wed, 18 Jan 2023 11:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI74M-0007Jh-2i; Wed, 18 Jan 2023 11:57:30 +0000
Received: by outflank-mailman (input) for mailman id 480538;
 Wed, 18 Jan 2023 11:57:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f/We=5P=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pI74L-0007Jb-Hg
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 11:57:29 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on2066.outbound.protection.outlook.com [40.107.96.66])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 46631980-9727-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 12:57:27 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MN2PR12MB4357.namprd12.prod.outlook.com (2603:10b6:208:262::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 11:57:23 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%4]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 11:57:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46631980-9727-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MuG5v32okDEGtAO2b8J5k8mVgheW6TUeYRPk8lfN7lirahGf9WOk+0EvUlClxo3Z9lC/R2XyqAKQvho/ab0w6DP0pqOL8qW7r/yvRglvF+i3XhIUWXjj9/yURDZ7adOR1ts7nQmRh+1/fowwqCfQil9uczsJH1TQqBwf8MQ4tah4Ekkej+yUMyWCRhBlh9bBVkRhYtHuZHvjE//UiSPPirFtSZNfRQpC350+JqsENsYvkwDwqR2+c+led9Suet9IiWfM/GZEngRLVGBuzxOdOkXExYvZ5G3uGTMsbGAAJ4Er+CnfoeEUyVrSJbTyL3pqeZ4AD1Xuz9dJOmrOkBsW5A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=53o/4KDeUpLeu/3KOTvm152saLu89gsSRqp0bP1KkCE=;
 b=VIVW6T6MX41moYojSTWAV4GeJXHJh2q24TuAcLEr8VC0m9wuW2BsuHHVtWvgy24ntEtUKlnCawTHOBQxvhJhFnoX9h7O7JKpEToo9hoFE/KXLjxs45dZLQSb9dQgGaSJZA+iOEPFb+vY8X3nYqhuY0B78Z3mLaYUZoBYXKhWchH9sBAEngU6gzGdOUFyjBXeEKq40vDZa4pC42tJ7WQJErLY5FwGYl5XJi2+Ckf0Ke5ruqNrH4F7d8/imKFpIBMcPCFvlbe4BWDuL69Lm/CiUjeI7MiDJ8IdyFzuoK5mwEotOR62UKGxu/mRrX7LLuHnxzPpSBiA9HIy0i/xPjQ2Rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=53o/4KDeUpLeu/3KOTvm152saLu89gsSRqp0bP1KkCE=;
 b=hyLqn1Cr4cabdq9c1VPCAtejGKr6VJ3fzq3dgN57PALyOkjXn4Ax6GUmmvJB4Y8zNwuypNxqI7cGxLPejsOewjnsn/COikJJZdN7wGYPAS5xNstBqlnE015wmSvO3GNA8m3LORZlxRn71C37IJDdCe39blxkmRxYWW9x87hE9Es=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <376c32ed-2e9d-a81a-69a9-8ba6d54f603e@amd.com>
Date: Wed, 18 Jan 2023 11:57:17 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 09/11] xen/arm: Introduce ARM_PA_32 to support 32 bit
 physical address
To: Jan Beulich <jbeulich@suse.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 xen-devel@lists.xenproject.org
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-10-ayan.kumar.halder@amd.com>
 <49228d15-3f0d-eb89-6107-40ae9f0b9b92@suse.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <49228d15-3f0d-eb89-6107-40ae9f0b9b92@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0358.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::34) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|MN2PR12MB4357:EE_
X-MS-Office365-Filtering-Correlation-Id: e87386ec-21e3-4118-834d-08daf94b28df
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3Dq13BoteqNbgn568PK1smoJMXf4zCy/IWO4sFsLoIWW9KkafSNeFwLWc/qwF9Jyt/pNRRhvGumAkGWElYJgFG/CoB15YJCIjJp4ONuED83zTkData2CIeXyowp2FlSnZQKshF2fH6yU1PSipx+nJHc3My2Nj5P/FI9AmpeFKwhZh5EJ/9f/wvRArnOVspD/Ak+vxalnQ+CcvR4Qav84rHAzeVn+Z5GfQXq6u6+ta9G4RA9ORELwUDQ9YCmh5FylgSD3VbpQ3NPL8+WZTIA4Tb7Kka/sYBIC+buZamv7EgsUC8rwBJv6eNUqzfzXtJcDq9yjad2iscKBBb5SovPRnCij4dKxwV2wETQnG/yYQKYuBqKoik3Y1INbYgYMyYTuUPkzsBTlGzprsJ8XDicfKtH+y0YjAJeEtuVnUAaU3RlVANRU9beH4aiGdenbeuiAiAGUqDLb8+c1db535riKTO8z49Uy1hdH1vf0KX/FEzKCJ8dDP+sAWrUXhLMchibJ20UkRkF2CCLXr4A9FZ6dJcmKstsUBrDUugK+hdwrPACQZmoIRTjC43A/XUOwGTJl8sYvrxP6K2L9hNYOvbr/2/wI5qGM7upH12I8CXFve749wcI5JZ9yBR58AskEbfq0hWFn8Fx0mtH9NzbYrgbH44rDJuewz+yoQfc2yg/Rr3a0vTRo5k0Rsp73zq3vSBwKCqxl8rQ0Mw1mKlQV0utOMzek04kLl1nzBVJZvdg2b3w=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(366004)(376002)(346002)(136003)(396003)(39860400002)(451199015)(31686004)(66946007)(2906002)(66556008)(8936002)(5660300002)(66476007)(38100700002)(31696002)(6636002)(6666004)(110136005)(53546011)(316002)(6506007)(6486002)(36756003)(478600001)(8676002)(4326008)(41300700001)(186003)(6512007)(26005)(2616005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UFhPNjFieTViWTVPcW05c0ZCU3FCVkRkUVJHNlZTUEdJcGFqMFdmeG1CWE82?=
 =?utf-8?B?c0ZRSmZXZ2U3dW81L3huQ0NFUm84SHZybGxBTlZ2U1Ixck1hYkhYNjlGZUhr?=
 =?utf-8?B?WWxnNThnRXJWRFZGV25pL2JISHBCUzZubml4S3lLVzBySU00UktmYnFmaUFh?=
 =?utf-8?B?NTlqUlAzemlkSXU5WGQveDkvUEd6bTRISUozS1Q0K2RRSjU1UW82Skk0TUhh?=
 =?utf-8?B?TjVjdjdLaFlLQm1JWFlJT3NGZk5VVk9hSUd5T2ZFa3c4YnFZTHl5U0xIYkxq?=
 =?utf-8?B?ZlR4YU0vSGwzY2FHb2J5NUsxblFLQUlwbFJlbFRTTnVtMjNyMjYwWG9Sakk5?=
 =?utf-8?B?VXl3dXRuUjhjSXorYkxaSzdVczRDcVFvc2ZHVUZsK3p0bWpDa2VLWnY3alpr?=
 =?utf-8?B?L1BsaFo1V2VUUW5YKzJxbVNnQnNveERZZ1VWZUFzYWdNcG5EbllEeXBpSUw4?=
 =?utf-8?B?YmRZb2RhYS9WK3hWbWxOeUE1SGxIZW9zOGtuelBvWDFxRGRXS2FYOUFwclgw?=
 =?utf-8?B?WFhKc1h1bEVxcUIxMXhqS2c4UG9TczB2dlVYNkRDVEFiMkVqV0YzTXBaSU5E?=
 =?utf-8?B?NmF0UDV3OUF1VHUxaGJFT2NTRktUTXUvUWFPQ0Q0VE9rdEJ4K0d3WGFvTFFF?=
 =?utf-8?B?cmpka3RxWDBDbklDQ2M3N0FwVU1zQVY5akxNelZ3ZnRRWnJjdlEwNzRaUXFH?=
 =?utf-8?B?OHVHb0dpZktOYnlPdGQzMmY4dzdiY29CY0FJbkJYV1hmcVhLUXNnNjh1Y1VN?=
 =?utf-8?B?UG9kcjJtUS8vTlQreml2MVhySEFmWlRuSnkzTVpOZ2swY3JhZ2hOUVFUTU1x?=
 =?utf-8?B?SmoxZWNzVVBtc01icUpBaXo0UVJGbjdOMXd4N0hzcmF1VzZXOGxqTldVSGFX?=
 =?utf-8?B?MEVVZk1Vc2lPUlEwMjV5dG5ZOWIrc0t1MDI5VlRMVHBZZ2hyNGdvemF3bjQz?=
 =?utf-8?B?ZHI1bXN4NXpZTk9iQTJiNWNFK09kQmZVT0xWUGx0TFRwOW4wNzJ2ZEVMRFZK?=
 =?utf-8?B?bUFPZlBMTjJseW5sZDdyVzN4akcvcXIxZ0hoN3MybXNVWXJsbzlGOUplZ0xC?=
 =?utf-8?B?OVcySDNtUHIrdTRadGxoNHcwekJaYXAxKzUvZkxvU1hKM3doM3VWdlZDSXQ5?=
 =?utf-8?B?LzlWcUpxaWdPK05Jd2VDaXd1dVFnKzlnOEdyVjdiTlNQWm1mVUNDVjV6U1Ey?=
 =?utf-8?B?NHR4N3Fab1NtdHgzWkNESUhENWZqbUlmZno2ZGQ0VXNVTlE4UHVkSDRibzlN?=
 =?utf-8?B?Qzl0TVVYaTQ5Y0YwTitOMHoxTDJ1VUZmb01zSjJ3UGMyTUFXNmJza3k1Tk96?=
 =?utf-8?B?ajJhMmNtOTk4WXQ3SHdISVhRYWQ1dXJmZThEMjlza1RBYzZwbm5DSWFzbmdV?=
 =?utf-8?B?OUozUmtMcUVpVmQwVWtnNExkVFdmV1dnaUZWc2hrSmdES0J6RmFVczNxdjJP?=
 =?utf-8?B?QmN2VW9nUnpJcjNGTXI0U09Kc2Q5UWpab1czWTlQSmw1OGRCNU14MXlxVlJZ?=
 =?utf-8?B?OG5VZHFqRDdXZVJ0QUswMmo3NlhqdnZTUDNIdHg0UGpuQit1d2VmZkdCRXFn?=
 =?utf-8?B?Tm0ydDFhdFQzL0RmT05yMlRibDNjaUZSTzBqQmVNZXZNZFczVWdDaWpwKzB1?=
 =?utf-8?B?aGVRM0g2TkVvRGM0Y1ZxSEZDT0xtRVAyODFZMmN6VkRtZDZHSE1aRHRoQWhz?=
 =?utf-8?B?VVBLaHZ6eHFzZTRqVXVscWFHdWp2U0hUVThnMTVvTVpESkc5eFJQVUlwQk1U?=
 =?utf-8?B?Q0RwWTJOejBkRGJ1TjR2a2dZdDZvWE9uU0h0UXplWDk2UnltWVVBVURMZ3M5?=
 =?utf-8?B?bG9PMWVHakxqekxNcktCN3hBS09QbEQwTk5EN2FVNUdiV0hVeVhnRFg1N1Bj?=
 =?utf-8?B?M0FPVURJMGpFbnU4Y0RlNkxZdEE2Q3hVb1NMeDZGQzBKUHp1MlJTUzltWnpZ?=
 =?utf-8?B?MDVmcEdPVzl5QzNXYUl0cExBMlhVbGhvaWt6aTdHR0daaFA5NElTUlFwNjRy?=
 =?utf-8?B?ZGtONEJMeUVOMmhGaXcrblJFTFRIQUdXVHJ0c2xvYjlDcThDbTRldG5WSGc0?=
 =?utf-8?B?YUdqMXVPR29iUGRoMEZ2Wm9MTmw1SksxTFN2eDBIZGxKVUpKaUQxeCtDeTBY?=
 =?utf-8?Q?B3U2Ndmn5UvES098/wC6NaeOi?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e87386ec-21e3-4118-834d-08daf94b28df
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 11:57:23.2134
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tK4ONq4CUvmpPaLCWivogk2dFs0WPVa9KgyNbfZD5mzVdD8t+iXHGWbUaza9w/QDAwP3dOPnxbobYBwHiwGbXg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4357

Hi Jan,

On 18/01/2023 08:50, Jan Beulich wrote:
> On 17.01.2023 18:43, Ayan Kumar Halder wrote:
>> --- a/xen/arch/arm/include/asm/types.h
>> +++ b/xen/arch/arm/include/asm/types.h
>> @@ -37,9 +37,16 @@ typedef signed long long s64;
>>   typedef unsigned long long u64;
>>   typedef u32 vaddr_t;
>>   #define PRIvaddr PRIx32
>> +#if defined(CONFIG_ARM_PA_32)
>> +typedef u32 paddr_t;
>> +#define INVALID_PADDR (~0UL)
>> +#define PADDR_SHIFT BITS_PER_LONG
>> +#define PRIpaddr PRIx32
>> +#else
> With our plan to consolidate basic type definitions into xen/types.h
> the use of ARM_PA_32 is problematic: Preferably we'd have an arch-
> independent Kconfig setting, much like Linux'es PHYS_ADDR_T_64BIT
> (I don't think we should re-use the name directly, as we have no
> phys_addr_t type), which each arch selects (or not) accordingly.
> Perhaps architectures already selecting 64BIT wouldn't need to do
> this explicitly, and instead 64BIT could select the new setting
> (or the new setting could default to Y when 64BIT=y).

Looking at some of the places where 64BIT is currently used 
(e.g. xen/drivers/passthrough/arm/smmu.c), I understand that it defines 
the width of the **virtual** address.

Thus, it would not conflict with ARM_PA_32 (which is for the physical 
address).

Later on, if we wish to introduce something similar to Linux's 
PHYS_ADDR_T_64BIT (which is arch agnostic), then ARM_PA_32 can be 
selected by "!PHYS_ADDR_T_64BIT && CONFIG_ARM_32". (We may decide to 
drop the ARM_PA_32 config option, but we can still reuse the 
ARM_PA_32-specific changes.)

Also, we will then need to support a 32-bit physical address (i.e. 
!PHYS_ADDR_T_64BIT) with both a 32-bit virtual address (i.e. !64BIT) 
and a 64-bit virtual address (i.e. 64BIT).

Let's confine the current changes to the Arm architecture only (as I 
don't have knowledge of x86 or RISC-V). When similar changes are made 
for other architectures, we can think about introducing an 
architecture-agnostic option.

- Ayan

>
> Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 12:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 12:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480544.745024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI771-0000Ma-Vc; Wed, 18 Jan 2023 12:00:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480544.745024; Wed, 18 Jan 2023 12:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI771-0000MT-Sc; Wed, 18 Jan 2023 12:00:15 +0000
Received: by outflank-mailman (input) for mailman id 480544;
 Wed, 18 Jan 2023 12:00:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI770-0000La-8s; Wed, 18 Jan 2023 12:00:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI770-0007hh-6j; Wed, 18 Jan 2023 12:00:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI76z-00015z-Sh; Wed, 18 Jan 2023 12:00:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pI76z-0002Ie-SC; Wed, 18 Jan 2023 12:00:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hO4FlCxAXQQYEksccSE2dh3qI5mO6Y8Oq0kxzES5Z3Y=; b=BDzB5GY2n8iYfBv06iKHvb8VCz
	uM9sYqqqRu8vh+IL2PgEhSaHrC88TWvX6wGBKJWDXOMYh75VFgCErPap60wEGcjK7aauqKmcXqxu5
	+ib20P491x1F6DOwL2AbnEjDWHajZnvqfY52o/mEbfm2VhwNPUfVwFo7HMHZ5ZD8Z1ac=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175955-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175955: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-i386-xsm:xen-build:fail:regression
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=998ebe5ca0ae5c449e83ede533bee872f97d63af
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 12:00:13 +0000

flight 175955 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175955/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm                6 xen-build                fail REGR. vs. 175747
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 998ebe5ca0ae5c449e83ede533bee872f97d63af
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    3 days   39 attempts
Testing same since   175955  2023-01-18 10:40:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb+tianocore@kernel.org>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>
  Oliver Steffen <osteffen@redhat.com>
  Prakash K <prakashk@ami.com>
  Prakash.K <prakashk@ami.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 939 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 12:23:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 12:23:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480563.745035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI7Sx-0002xv-Qd; Wed, 18 Jan 2023 12:22:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480563.745035; Wed, 18 Jan 2023 12:22:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI7Sx-0002xo-Nr; Wed, 18 Jan 2023 12:22:55 +0000
Received: by outflank-mailman (input) for mailman id 480563;
 Wed, 18 Jan 2023 12:22:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vku6=5P=casper.srs.infradead.org=BATV+4fd80059aba2dd5ff62c+7087+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pI7Sr-0002xi-0F
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 12:22:54 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf190fb7-972a-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 13:22:44 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pI7Sg-0000J7-89; Wed, 18 Jan 2023 12:22:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf190fb7-972a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:Date:To:From:
	Subject:Message-ID:Sender:Reply-To:Cc:Content-Transfer-Encoding:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=3rAnqEYba9joD3OTLqcEznRx1ly/ndyucBZotMGAIhg=; b=lgjz92O/ly3JtWTO0Ja+rMs/vj
	vVo1Ob/RaipHZ1lWm9uJnX9jIQ/jlTnov4vyyxk1d1wjHMWmZO43ayMUclQWBZlVodDWTKbOL9CQk
	qacbvh3thdSDmLh7m7hHecgG7h+joxmXmN9KgwoSt+JR4QBSGWDpaz3RnPytC/yfcnwFI67ql+zU/
	KTz/XQFQdL9NBr6qX0GHFgWHjkHFkHWHTQ8BLj6nUJwDpkugKzkbKsy4Y4dNz6v5swtS6OAhailU8
	1qgyYI8OYEMwduj/veJUzMux38FeYADWVKAu2EgaVLn5XJgjbGZZADRRoP+yhjYRXu6sM07c8TfRf
	zfnRx4DA==;
Message-ID: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
Subject: [PATCH] xen: Allow platform PCI interrupt to be shared
From: David Woodhouse <dwmw2@infradead.org>
To: Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,  xen-devel <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>, Paul
 Durrant <paul@xen.org>
Date: Wed, 18 Jan 2023 12:22:38 +0000
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-UiGbrZ+sT6Xyhi8O8ZO/"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-UiGbrZ+sT6Xyhi8O8ZO/
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

From: David Woodhouse <dwmw@amazon.co.uk>

When we don't use the per-CPU vector callback, we ask Xen to deliver event
channel interrupts as INTx on the PCI platform device. As such, it can be
shared with INTx on other PCI devices.

Set IRQF_SHARED, and make it return IRQ_HANDLED or IRQ_NONE according to
whether the evtchn_upcall_pending flag was actually set. Now I can share
the interrupt:

 11:         82          0   IO-APIC  11-fasteoi   xen-platform-pci, ens4

Drop the IRQF_TRIGGER_RISING. It has no effect when the IRQ is shared,
and besides, the only effect it was having even beforehand was to trigger
a debug message in both I/OAPIC and legacy PIC cases:

[    0.915441] genirq: No set_type function for IRQ 11 (IO-APIC)
[    0.951939] genirq: No set_type function for IRQ 11 (XT-PIC)

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
What does xen_evtchn_do_upcall() exist for? Can we delete it? I don't
see it being called anywhere.

 drivers/xen/events/events_base.c | 9 ++++++---
 drivers/xen/platform-pci.c       | 5 ++---
 include/xen/events.h             | 2 +-
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index c443f04aaad7..c7715f8bd452 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1710,9 +1710,10 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	generic_handle_irq(irq);
 }
 
-static void __xen_evtchn_do_upcall(void)
+static int __xen_evtchn_do_upcall(void)
 {
 	struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+	int ret = vcpu_info->evtchn_upcall_pending ? IRQ_HANDLED : IRQ_NONE;
 	int cpu = smp_processor_id();
 	struct evtchn_loop_ctrl ctrl = { 0 };
 
@@ -1737,6 +1738,8 @@ static void __xen_evtchn_do_upcall(void)
 	 * above.
 	 */
 	__this_cpu_inc(irq_epoch);
+
+	return ret;
 }
 
 void xen_evtchn_do_upcall(struct pt_regs *regs)
@@ -1751,9 +1754,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 	set_irq_regs(old_regs);
 }
 
-void xen_hvm_evtchn_do_upcall(void)
+int xen_hvm_evtchn_do_upcall(void)
 {
-	__xen_evtchn_do_upcall();
+	return __xen_evtchn_do_upcall();
 }
 EXPORT_SYMBOL_GPL(xen_hvm_evtchn_do_upcall);
 
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index cd07e3fed0fa..fcc819131572 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -64,14 +64,13 @@ static uint64_t get_callback_via(struct pci_dev *pdev)
 
 static irqreturn_t do_hvm_evtchn_intr(int irq, void *dev_id)
 {
-	xen_hvm_evtchn_do_upcall();
-	return IRQ_HANDLED;
+	return xen_hvm_evtchn_do_upcall();
 }
 
 static int xen_allocate_irq(struct pci_dev *pdev)
 {
 	return request_irq(pdev->irq, do_hvm_evtchn_intr,
-			IRQF_NOBALANCING | IRQF_TRIGGER_RISING,
+			IRQF_NOBALANCING | IRQF_SHARED,
 			"xen-platform-pci", pdev);
 }
 
diff --git a/include/xen/events.h b/include/xen/events.h
index 344081e71584..44c2855c76d1 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -107,7 +107,7 @@ evtchn_port_t evtchn_from_irq(unsigned irq);
 
 int xen_set_callback_via(uint64_t via);
 void xen_evtchn_do_upcall(struct pt_regs *regs);
-void xen_hvm_evtchn_do_upcall(void);
+int xen_hvm_evtchn_do_upcall(void);
 
 /* Bind a pirq for a physical interrupt to an irq. */
 int xen_bind_pirq_gsi_to_irq(unsigned gsi,
-- 
2.39.0



--=-UiGbrZ+sT6Xyhi8O8ZO/
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Disposition: attachment; filename="smime.p7s"
Content-Transfer-Encoding: base64

[S/MIME signature data omitted]
ARYTZHdtdzJAaW5mcmFkZWFkLm9yZzCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALQ3
GpC2bomUqk+91wLYBzDMcCj5C9m6oZaHwvmIdXftOgTbCJXADo6G9T7BBAebw2JV38EINgKpy/ZH
h7htyAkWYVoFsFPrwHounto8xTsySSePMiPlmIdQ10BcVSXMUJ3Juu16GlWOnAMJY2oYfEzmE7uT
9YgcBqKCo65pTFmOnR/VVbjJk4K2xE34GC2nAdUQkPFuyaFisicc6HRMOYXPuF0DuwITEKnjxgNj
P+qDrh0db7PAjO1D4d5ftfrsf+kdRR4gKVGSk8Tz2WwvtLAroJM4nXjNPIBJNT4w/FWWc/5qPHJy
2U+eITZ5LLE5s45mX2oPFknWqxBobQZ8a9dsZ3dSPZBvE9ZrmtFLrVrN4eo1jsXgAp1+p7bkfqd3
BgBEmfsYWlBXO8rVXfvPgLs32VdVNZxb/CDWPqBsiYv0Hv3HPsz07j5b+/cVoWqyHDKzkaVbxfq/
7auNVRmPB3v5SWEsH8xi4Bez2V9UKxfYCnqsjp8RaC2/khxKt0A552Eaxnz/4ly/2C7wkwTQnBmd
lFYhAflWKQ03Ufiu8t3iBE3VJbc25oMrglj7TRZrmKq3CkbFnX0fyulB+kHimrt6PIWn7kgyl9ae
lIl6vtbhMA+l0nfrsORMa4kobqQ5C5rveVgmcIad67EDa+UqEKy/GltUwlSh6xy+TrK1tzDvAgMB
AAGjggHMMIIByDAfBgNVHSMEGDAWgBQJwPL8C9qU21/+K9+omULPyeCtADAdBgNVHQ4EFgQUzMeD
Mcimo0oz8o1R1Nver3ZVpSkwDgYDVR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHQYDVR0lBBYw
FAYIKwYBBQUHAwQGCCsGAQUFBwMCMEAGA1UdIAQ5MDcwNQYMKwYBBAGyMQECAQEBMCUwIwYIKwYB
BQUHAgEWF2h0dHBzOi8vc2VjdGlnby5jb20vQ1BTMFoGA1UdHwRTMFEwT6BNoEuGSWh0dHA6Ly9j
cmwuc2VjdGlnby5jb20vU2VjdGlnb1JTQUNsaWVudEF1dGhlbnRpY2F0aW9uYW5kU2VjdXJlRW1h
aWxDQS5jcmwwgYoGCCsGAQUFBwEBBH4wfDBVBggrBgEFBQcwAoZJaHR0cDovL2NydC5zZWN0aWdv
LmNvbS9TZWN0aWdvUlNBQ2xpZW50QXV0aGVudGljYXRpb25hbmRTZWN1cmVFbWFpbENBLmNydDAj
BggrBgEFBQcwAYYXaHR0cDovL29jc3Auc2VjdGlnby5jb20wHgYDVR0RBBcwFYETZHdtdzJAaW5m
cmFkZWFkLm9yZzANBgkqhkiG9w0BAQsFAAOCAQEAyW6MUir5dm495teKqAQjDJwuFCi35h4xgnQv
Q/fzPXmtR9t54rpmI2TfyvcKgOXpqa7BGXNFfh1JsqexVkIqZP9uWB2J+uVMD+XZEs/KYNNX2PvI
lSPrzIB4Z2wyIGQpaPLlYflrrVFKv9CjT2zdqvy2maK7HKOQRt3BiJbVG5lRiwbbygldcALEV9Ch
WFfgSXvrWDZspnU3Gjw/rMHrGnqlHtlyebp3pf3fSS9kzQ1FVtVIDrL6eqhTwJxe+pXSMMqFiN0w
hpBtXdyDjzBtQTaZJ7zTT/vlehc/tDuqZwGHm/YJy883Ll+GP3NvOkgaRGWEuYWJJ6hFCkXYjyR9
IzGCBMcwggTDAgEBMIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVz
dGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMT
NVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEA
xr4ZlmdAxAMdKFES+jupfjANBglghkgBZQMEAgEFAKCCAeswGAYJKoZIhvcNAQkDMQsGCSqGSIb3
DQEHATAcBgkqhkiG9w0BCQUxDxcNMjMwMTE4MTIyMjM4WjAvBgkqhkiG9w0BCQQxIgQgQmeLRlP5
F5ikTdE5wjiEDiJqFZDMFkJW2x6p3DKz4mQwgb0GCSsGAQQBgjcQBDGBrzCBrDCBljELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEYMBYG
A1UEChMPU2VjdGlnbyBMaW1pdGVkMT4wPAYDVQQDEzVTZWN0aWdvIFJTQSBDbGllbnQgQXV0aGVu
dGljYXRpb24gYW5kIFNlY3VyZSBFbWFpbCBDQQIRAMa+GZZnQMQDHShREvo7qX4wgb8GCyqGSIb3
DQEJEAILMYGvoIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVy
MRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNl
Y3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEAxr4Z
lmdAxAMdKFES+jupfjANBgkqhkiG9w0BAQEFAASCAgAWIrSWNXPcYESnuH6jTlOZFUYYAkXChR+J
Hw72Odsqbiljj1QfmBIaedhO9JG5T91U/zF5gDvR9PnKO5KhTDA9Xp9o8ypwspgCBRMhuy9ujIc3
Xd0rNBi+dhFZZhug7d1hZ/M3jSf/keJn7s/Ib92do778Q2PTKoGYz2Bcc+n7y54ye/PhtVGlBXm9
df3aIVrjYE2onr3WUy94EF2gX+wT86uo6Nf/SaAqDLKohnvXieNWE47Y8t+AiBLbcFRJJPMrXPrN
eKnYuP9F5jzKVqQ0Chw2UrF6+G8qWWNjks8RZwyu7rn/S81RA2h8fatn73VX/28dZnXxE7qF1PfE
ToUpqeA8IeRHTxyKzoP6wsj/wNbI5h0HeIs5WJTDe31p5yKYk8xPYpUpidTjHm8CBOaR9npSJ/VY
f2CU5Bzb59hsFd49NaTqRY1HJnncixw+O1SuXsfQ9IR4mOltweMQ5WdTRHqnWhE5sAZ+7wd4SKkO
X0xqbg5ojBC1FMFHAuTS8VQZ8gjdBhH8sPDfPEE0ZSzpaasjQSdOkPpUoCeKAwGi/aHINf5qE/7y
WVnsX7RcKDFG1j1Hwwv+vuEt5L6WcTPsYLO1X9z7L72jG+y/FLqD8x9coKJr/KNkPf46iGScG4wv
1dVNmgS0+Mv3edPV1nxtfTTNRgg1Y3EvDvPcyG53PwAAAAAAAA==


--=-UiGbrZ+sT6Xyhi8O8ZO/--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 12:48:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 12:48:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480571.745047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI7r8-0005Re-QU; Wed, 18 Jan 2023 12:47:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480571.745047; Wed, 18 Jan 2023 12:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI7r8-0005RX-NF; Wed, 18 Jan 2023 12:47:54 +0000
Received: by outflank-mailman (input) for mailman id 480571;
 Wed, 18 Jan 2023 12:47:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI7r7-0005RN-F2; Wed, 18 Jan 2023 12:47:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI7r7-0000WU-07; Wed, 18 Jan 2023 12:47:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pI7r6-00028W-Lx; Wed, 18 Jan 2023 12:47:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pI7r6-0006bJ-LW; Wed, 18 Jan 2023 12:47:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Yt0GNRs+l2ha8gtIFVpnjO5dD2JvdOpc9Z9Zi/FIPYs=; b=aQzy0QVr/nL+wzEaECnJyw/07U
	NHjCfl3BnFUlGobm9UvJPf0uG3NrcGd6gtih/DNn4XAU28A+pOIo0Wn7HmE4y3XhEOtb5TWDMyHIk
	bWwRpg7ZjF9Rwu3zgf46t3Dd7AbmtOv+/q2hHxhWAh5Ve5m0UGq3bbn8phNtXFEz1Jb8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175959-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175959: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-i386:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:build-i386-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=998ebe5ca0ae5c449e83ede533bee872f97d63af
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 12:47:52 +0000

flight 175959 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175959/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 175747
 build-amd64                   6 xen-build                fail REGR. vs. 175747
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175747

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a

version targeted for testing:
 ovmf                 998ebe5ca0ae5c449e83ede533bee872f97d63af
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    3 days   40 attempts
Testing same since   175955  2023-01-18 10:40:51 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb+tianocore@kernel.org>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>
  Oliver Steffen <osteffen@redhat.com>
  Prakash K <prakashk@ami.com>
  Prakash.K <prakashk@ami.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 939 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:03:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:03:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480579.745058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI862-0007q4-4F; Wed, 18 Jan 2023 13:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480579.745058; Wed, 18 Jan 2023 13:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI862-0007px-16; Wed, 18 Jan 2023 13:03:18 +0000
Received: by outflank-mailman (input) for mailman id 480579;
 Wed, 18 Jan 2023 13:03:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI861-0007pr-2T
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:03:17 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2085.outbound.protection.outlook.com [40.107.22.85])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77fc8483-9730-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 14:03:14 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8653.eurprd04.prod.outlook.com (2603:10a6:102:21c::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Wed, 18 Jan
 2023 13:03:12 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 13:03:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77fc8483-9730-11ed-b8d1-410ff93cb8f0
Message-ID: <57f691ee-666b-83eb-6136-d7d85aa92b89@suse.com>
Date: Wed, 18 Jan 2023 14:03:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 1/4] xen/riscv: introduce asm/types.h header file
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
 <2ce57f95f8445a4880e0992668a48ffe7c2f9732.1673877778.git.oleksii.kurochko@gmail.com>
 <e00512a6-5d32-6dbf-4269-429532f8a852@suse.com>
 <87107d8945c9f1513c305d115f24f488b87e088b.camel@gmail.com>
 <d871f9e2-5f00-1f0d-3297-0084d4a4af27@suse.com>
 <79e2670cdc74454045e653bd62fb4815cb8a7eb3.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <79e2670cdc74454045e653bd62fb4815cb8a7eb3.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0195.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 18.01.2023 12:23, Oleksii wrote:
> On Tue, 2023-01-17 at 11:04 +0100, Jan Beulich wrote:
>> On 17.01.2023 10:29, Oleksii wrote:
>>> On Mon, 2023-01-16 at 15:59 +0100, Jan Beulich wrote:
>>>> On 16.01.2023 15:39, Oleksii Kurochko wrote:
>>>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>>> ---
>>>>> Changes in V4:
>>>>>     - Clean up types in <asm/types.h> and retain only the
>>>>> necessary ones.
>>>>>       The following types were removed as they are defined in
>>>>> <xen/types.h>:
>>>>>       {__|}{u|s}{8|16|32|64}
>>>>
>>>> For one you still typedef u32 and u64. And imo correctly so, until
>>>> we get around to moving the definitions of the basic types into
>>>> xen/types.h. Plus I can't see how things build for you: xen/types.h
>>>> expects __{u,s}<N>
>>> It builds because nothing uses <xen/types.h> now, which is why I
>>> missed them, but you are right that __{u,s}<N> should be brought back.
>>> It looks like {__,}{u,s}{8,16,32} are the same for all architectures
>>> available in Xen, so can I move them to <xen/types.h> instead of
>>> keeping them in <asm/types.h>?
>>
>> This next step isn't quite as obvious, i.e. has room for being
>> contentious. In particular deriving fixed width types from C basic
>> types is setting us up for future problems (especially in the
>> context of RISC-V think of RV128). Therefore, if we touch and
>> generalize this, I'd like to sanitize things at the same time.
>>
>> I'd then prefer to typedef {u,}int<N>_t by using either the "mode"
>> attribute (requiring us to settle on a prereq of there always being
>> 8 bits per char) or the compiler supplied __{U,}INT<N>_TYPE__
>> (taking gcc 4.7 as a prereq; didn't check clang yet). Both would
>> allow {u,}int64_t to also be put in the common header. Yet if e.g.
>> a prereq assumption faced opposition, some other approach would
>> need to be found. Plus using either of the named approaches has
>> issues with the printf() format specifiers, for which I'm yet to
>> figure out a solution (or maybe someone else knows a good way to
>> deal with that; using compiler provided headers isn't an option
>> afaict, as gcc provides stdint.h but not inttypes.h, but maybe
>> glibc's simplistic approach is good enough - they're targeting
>> far more architectures than we do and get away with that).
>>
> Thanks for the explanation.
> 
> Going back to RISCV's <asm/types.h>, it looks like the v2 of the patch
> (https://lore.kernel.org/xen-devel/ca2674739cfa71cae0bf084a7b471ad4518026d3.1673278109.git.oleksii.kurochko@gmail.com/)
> is the best option now because, as I can see, some work is going on
> around <xen/types.h>, and keeping a minimal set of types now will
> spare us from removing unneeded types from the RISCV port's asm/types.h
> in the future.

Well, as said before, I'd prefer if that header wasn't populated piecemeal,
but ...

> What's more, based on your patch [
> https://lists.xen.org/archives/html/xen-devel/2023-01/msg00720.html ]
> RISCV's <asm/types.h> can be empty for this patch series.

... leaving it empty for now would of course be fine. Once the basic
fixed width types are needed, imo they ought to be populated all in one
go. But maybe by then we've managed to have that stuff in xen/types.h.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:15:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:15:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480586.745069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8HC-00010Q-7v; Wed, 18 Jan 2023 13:14:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480586.745069; Wed, 18 Jan 2023 13:14:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8HC-00010J-4V; Wed, 18 Jan 2023 13:14:50 +0000
Received: by outflank-mailman (input) for mailman id 480586;
 Wed, 18 Jan 2023 13:14:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI8HB-00010A-0W
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:14:49 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2058.outbound.protection.outlook.com [40.107.22.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 14fd77be-9732-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 14:14:47 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7636.eurprd04.prod.outlook.com (2603:10a6:20b:281::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 13:14:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 13:14:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14fd77be-9732-11ed-91b6-6bf2151ebd3b
Message-ID: <7e9db781-47a8-719a-d9b1-88de9c503732@suse.com>
Date: Wed, 18 Jan 2023 14:14:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for
 address/size
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Rahul Singh <rahul.singh@arm.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, julien@xen.org,
 Wei Xu <xuwei5@hisilicon.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-6-ayan.kumar.halder@amd.com>
 <926307d3-a354-be87-3885-90681dc5ae24@suse.com>
 <37719b71-8405-eefd-3bf5-95c7c8639e82@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <37719b71-8405-eefd-3bf5-95c7c8639e82@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 18.01.2023 12:15, Ayan Kumar Halder wrote:
> On 18/01/2023 08:40, Jan Beulich wrote:
>> On 17.01.2023 18:43, Ayan Kumar Halder wrote:
>>> @@ -1166,7 +1166,7 @@ static const struct ns16550_config __initconst uart_config[] =
>>>   static int __init
>>>   pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>>>   {
>>> -    u64 orig_base = uart->io_base;
>>> +    paddr_t orig_base = uart->io_base;
>>>       unsigned int b, d, f, nextf, i;
>>>   
>>>       /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on bus 0. */
>>> @@ -1259,7 +1259,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>>>                       else
>>>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
>>>   
>>> -                    uart->io_base = ((u64)bar_64 << 32) |
>>> +                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
>>>                                       (bar & PCI_BASE_ADDRESS_MEM_MASK);
>>>                   }
>> This looks wrong to me: You shouldn't blindly truncate to 32 bits. You need
>> to refuse acting on 64-bit BARs with the upper address bits non-zero.
> 
> Yes, I was treating this like the others (where Xen does not check for 
> truncation when getting the address/size from the device tree and 
> casting it to paddr_t).
> 
> However, in this case Xen is reading from PCI registers, so it needs 
> to check for truncation.
> 
> I think the following change should do the job.
> 
> @@ -1180,6 +1180,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>                   unsigned int bar_idx = 0, port_idx = idx;
>                   uint32_t bar, bar_64 = 0, len, len_64;
>                   u64 size = 0;
> +                uint64_t io_base = 0;
>                   const struct ns16550_config_param *param = uart_param;
> 
>                   nextf = (f || (pci_conf_read16(PCI_SBDF(0, b, d, f),
> @@ -1260,8 +1261,11 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>                       else
>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
> 
> -                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
> +                    io_base = ((u64)bar_64 << 32) |
>                                       (bar & PCI_BASE_ADDRESS_MEM_MASK);
> +
> +                    uart->io_base = (paddr_t) io_base;
> +                    ASSERT(uart->io_base == io_base); /* Detect truncation */
>                   }
>                   /* IO based */
>                   else if ( !param->mmio && (bar & PCI_BASE_ADDRESS_SPACE_IO) )

An assertion can only ever check assumptions about behavior elsewhere in Xen.
Anything external needs to be handled properly, including in non-debug builds.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:20:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:20:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480593.745080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8Ly-0001dr-Tj; Wed, 18 Jan 2023 13:19:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480593.745080; Wed, 18 Jan 2023 13:19:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8Ly-0001dk-P5; Wed, 18 Jan 2023 13:19:46 +0000
Received: by outflank-mailman (input) for mailman id 480593;
 Wed, 18 Jan 2023 13:19:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI8Lw-0001de-Um
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:19:44 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2066.outbound.protection.outlook.com [40.107.104.66])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c4cf1479-9732-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 14:19:42 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB8013.eurprd04.prod.outlook.com (2603:10a6:102:c4::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 13:19:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 13:19:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4cf1479-9732-11ed-b8d1-410ff93cb8f0
Message-ID: <ea32fa45-5de3-d229-8c22-913ef0513bfa@suse.com>
Date: Wed, 18 Jan 2023 14:19:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v2 09/11] xen/arm: Introduce ARM_PA_32 to support 32 bit
 physical address
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 xen-devel@lists.xenproject.org, Ayan Kumar Halder <ayan.kumar.halder@amd.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-10-ayan.kumar.halder@amd.com>
 <49228d15-3f0d-eb89-6107-40ae9f0b9b92@suse.com>
 <376c32ed-2e9d-a81a-69a9-8ba6d54f603e@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <376c32ed-2e9d-a81a-69a9-8ba6d54f603e@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 18.01.2023 12:57, Ayan Kumar Halder wrote:
> Hi Jan,
> 
> On 18/01/2023 08:50, Jan Beulich wrote:
>> On 17.01.2023 18:43, Ayan Kumar Halder wrote:
>>> --- a/xen/arch/arm/include/asm/types.h
>>> +++ b/xen/arch/arm/include/asm/types.h
>>> @@ -37,9 +37,16 @@ typedef signed long long s64;
>>>   typedef unsigned long long u64;
>>>   typedef u32 vaddr_t;
>>>   #define PRIvaddr PRIx32
>>> +#if defined(CONFIG_ARM_PA_32)
>>> +typedef u32 paddr_t;
>>> +#define INVALID_PADDR (~0UL)
>>> +#define PADDR_SHIFT BITS_PER_LONG
>>> +#define PRIpaddr PRIx32
>>> +#else
>> With our plan to consolidate basic type definitions into xen/types.h
>> the use of ARM_PA_32 is problematic: Preferably we'd have an arch-
>> independent Kconfig setting, much like Linux'es PHYS_ADDR_T_64BIT
>> (I don't think we should re-use the name directly, as we have no
>> phys_addr_t type), which each arch selects (or not) accordingly.
>> Perhaps architectures already selecting 64BIT wouldn't need to do
>> this explicitly, and instead 64BIT could select the new setting
>> (or the new setting could default to Y when 64BIT=y).
> 
> Looking at some of the instances where 64BIT is currently used 
> (xen/drivers/passthrough/arm/smmu.c), I understand that it is used to 
> define the width of **virtual** addresses.
> 
> Thus, it would not conflict with ARM_PA_32 (which is for physical addresses).
> 
> Later on, if we wish to introduce something similar to Linux's 
> PHYS_ADDR_T_64BIT (which is arch agnostic), then ARM_PA_32 can be 
> selected by "!PHYS_ADDR_T_64BIT && CONFIG_ARM_32". (We may decide to 
> drop the ARM_PA_32 config option, but we can still reuse the 
> ARM_PA_32-specific changes.)
> 
> Also, we will then need to support a 32-bit physical address (i.e. 
> !PHYS_ADDR_T_64BIT) with both a 32-bit virtual address (i.e. !64BIT) 
> and a 64-bit virtual address (i.e. 64BIT).
> 
> Let's confine the current changes to the Arm architecture only (as I 
> don't have knowledge of x86 or RISC-V). When similar changes are made 
> for other architectures, we can think about introducing an 
> architecture-agnostic option.

I disagree, not least because of the present goal of reworking xen/types.h
vs asm/types.h. With an arch-independent Kconfig symbol, paddr_t could
also be manufactured uniformly in xen/types.h.
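A hedged sketch of what such a consolidation might look like (the symbol name PHYS_ADDR_T_64BIT mirrors Linux and is an assumption, not existing Xen code; the PRIpaddr definitions follow the quoted patch):

```c
/* In xen/types.h, keyed off a common, arch-independent Kconfig symbol,
 * e.g. one which 64-bit architectures select (or which defaults to Y
 * when 64BIT=y):
 *
 *   config PHYS_ADDR_T_64BIT
 *       def_bool 64BIT
 */
#ifdef CONFIG_PHYS_ADDR_T_64BIT
typedef uint64_t paddr_t;
#define PRIpaddr PRIx64
#else
typedef uint32_t paddr_t;
#define PRIpaddr PRIx32
#endif
```

Each architecture then only decides whether to select the symbol, rather than duplicating the paddr_t plumbing in its own asm/types.h.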

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:34:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:34:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480627.745119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8aB-0005F0-RY; Wed, 18 Jan 2023 13:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480627.745119; Wed, 18 Jan 2023 13:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8aB-0005Et-OY; Wed, 18 Jan 2023 13:34:27 +0000
Received: by outflank-mailman (input) for mailman id 480627;
 Wed, 18 Jan 2023 13:34:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TKO3=5P=citrix.com=prvs=3750e6e72=christian.lindig@srs-se1.protection.inumbo.net>)
 id 1pI8aA-0004x8-94
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:34:26 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d0a2afba-9734-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 14:34:23 +0100 (CET)
Received: from mail-bn8nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 08:34:16 -0500
Received: from DM6PR03MB4172.namprd03.prod.outlook.com (2603:10b6:5:5c::23) by
 BN8PR03MB5090.namprd03.prod.outlook.com (2603:10b6:408:db::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.23; Wed, 18 Jan 2023 13:34:13 +0000
Received: from DM6PR03MB4172.namprd03.prod.outlook.com
 ([fe80::a77b:c822:a19b:ef23]) by DM6PR03MB4172.namprd03.prod.outlook.com
 ([fe80::a77b:c822:a19b:ef23%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 13:34:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0a2afba-9734-11ed-b8d1-410ff93cb8f0
From: Christian Lindig <christian.lindig@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Anthony
 Perard <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, David
 Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 4/6] tools: Introduce a non-truncating
 xc_xenver_changeset()
Thread-Topic: [PATCH 4/6] tools: Introduce a non-truncating
 xc_xenver_changeset()
Thread-Index: AQHZKnskQpdn03JGTUK9g3aWThI8PK6kKyKA
Date: Wed, 18 Jan 2023 13:34:13 +0000
Message-ID: <6ECF792E-962B-43C8-A2AE-E62D3F4EE801@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-5-andrew.cooper3@citrix.com>
In-Reply-To: <20230117135336.11662-5-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.120.41.1.1)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <63A079A4084BED428D5BCB9F356D4589@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB4172.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b3f3a0cc-f1d0-4736-ee14-08daf958b007
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 13:34:13.1000
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5090



> On 17 Jan 2023, at 13:53, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> Update libxl and the ocaml stubs to match.  No API/ABI change in either.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Christian Lindig <christian.lindig@citrix.com>


> ---
> CC: Wei Liu <wl@xen.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Juergen Gross <jgross@suse.com>
> CC: Christian Lindig <christian.lindig@citrix.com>
> CC: David Scott <dave@recoil.org>
> CC: Edwin Torok <edvin.torok@citrix.com>
> CC: Rob Hoes <Rob.Hoes@citrix.com>
> ---
> tools/include/xenctrl.h             |  1 +
> tools/libs/ctrl/xc_version.c        |  5 +++++
> tools/libs/light/libxl.c            |  5 +----
> tools/ocaml/libs/xc/xenctrl_stubs.c | 19 ++++++++-----------
> 4 files changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 279dc17d67d4..48dbf3eab75f 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -1610,6 +1610,7 @@ int xc_version(xc_interface *xch, int cmd, void *arg);
>  */
> char *xc_xenver_extraversion(xc_interface *xch);
> char *xc_xenver_capabilities(xc_interface *xch);
> +char *xc_xenver_changeset(xc_interface *xch);
> 
> int xc_flask_op(xc_interface *xch, xen_flask_op_t *op);
> 
> diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
> index 512302a393ea..9f2cae03dba8 100644
> --- a/tools/libs/ctrl/xc_version.c
> +++ b/tools/libs/ctrl/xc_version.c
> @@ -161,3 +161,8 @@ char *xc_xenver_capabilities(xc_interface *xch)
> {
>     return varbuf_simple_string(xch, XENVER_capabilities2);
> }
> +
> +char *xc_xenver_changeset(xc_interface *xch)
> +{
> +    return varbuf_simple_string(xch, XENVER_changeset2);
> +}
> diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
> index 139e838d1407..80e763aba944 100644
> --- a/tools/libs/light/libxl.c
> +++ b/tools/libs/light/libxl.c
> @@ -582,7 +582,6 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
>     GC_INIT(ctx);
>     union {
>         xen_compile_info_t xen_cc;
> -        xen_changeset_info_t xen_chgset;
>         xen_platform_parameters_t p_parms;
>         xen_commandline_t xen_commandline;
>         xen_build_id_t build_id;
> @@ -607,9 +606,7 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
>     info->compile_date = libxl__strdup(NOGC, u.xen_cc.compile_date);
> 
>     info->capabilities = xc_xenver_capabilities(ctx->xch);
> -
> -    xc_version(ctx->xch, XENVER_changeset, &u.xen_chgset);
> -    info->changeset = libxl__strdup(NOGC, u.xen_chgset);
> +    info->changeset = xc_xenver_changeset(ctx->xch);
> 
>     xc_version(ctx->xch, XENVER_platform_parameters, &u.p_parms);
>     info->virt_start = u.p_parms.virt_start;
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index 368f4727f0a0..291e92db7300 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -983,27 +983,24 @@ CAMLprim value stub_xc_version_compile_info(value xch)
> }
> 
> 
> -static value xc_version_single_string(value xch, int code, void *info)
> +CAMLprim value stub_xc_version_changeset(value xch)
> {
> 	CAMLparam1(xch);
> -	int retval;
> +	CAMLlocal1(result);
> +	char *changeset;
> 
> 	caml_enter_blocking_section();
> -	retval = xc_version(_H(xch), code, info);
> +	changeset = xc_xenver_changeset(_H(xch));
> 	caml_leave_blocking_section();
> 
> -	if (retval)
> +	if (!changeset)
> 		failwith_xc(_H(xch));
> 
> -	CAMLreturn(caml_copy_string((char *)info));
> -}
> +	result = caml_copy_string(changeset);
> 
> +	free(changeset);
> 
> -CAMLprim value stub_xc_version_changeset(value xch)
> -{
> -	xen_changeset_info_t ci;
> -
> -	return xc_version_single_string(xch, XENVER_changeset, &ci);
> +	CAMLreturn(result);
> }
> 
> 
> -- 
> 2.11.0
> 



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:34:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:34:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480626.745108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8a5-0004xP-FW; Wed, 18 Jan 2023 13:34:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480626.745108; Wed, 18 Jan 2023 13:34:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8a5-0004xI-CH; Wed, 18 Jan 2023 13:34:21 +0000
Received: by outflank-mailman (input) for mailman id 480626;
 Wed, 18 Jan 2023 13:34:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TKO3=5P=citrix.com=prvs=3750e6e72=christian.lindig@srs-se1.protection.inumbo.net>)
 id 1pI8a4-0004x8-Np
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:34:20 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd3b0fa3-9734-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 14:34:17 +0100 (CET)
Received: from mail-co1nam11lp2176.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 08:34:14 -0500
Received: from DM6PR03MB4172.namprd03.prod.outlook.com (2603:10b6:5:5c::23) by
 BN8PR03MB5090.namprd03.prod.outlook.com (2603:10b6:408:db::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.23; Wed, 18 Jan 2023 13:34:12 +0000
Received: from DM6PR03MB4172.namprd03.prod.outlook.com
 ([fe80::a77b:c822:a19b:ef23]) by DM6PR03MB4172.namprd03.prod.outlook.com
 ([fe80::a77b:c822:a19b:ef23%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 13:34:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd3b0fa3-9734-11ed-b8d1-410ff93cb8f0
From: Christian Lindig <christian.lindig@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Anthony
 Perard <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, David
 Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 0/6] tools: Switch to non-truncating XENVER_* ops
Thread-Topic: [PATCH 0/6] tools: Switch to non-truncating XENVER_* ops
Thread-Index: AQHZKnsh9kQgU3QTvku0GVlWBss2ha6kKlQA
Date: Wed, 18 Jan 2023 13:34:12 +0000
Message-ID: <F0BD0226-6C75-4D78-B099-17DB31C3B4F8@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.120.41.1.1)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DM6PR03MB4172:EE_|BN8PR03MB5090:EE_
x-ms-office365-filtering-correlation-id: 5273e8a8-f29c-4e6f-cd04-08daf958afa3
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <5BEDE9BF7956F642918848538264ACEB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB4172.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5273e8a8-f29c-4e6f-cd04-08daf958afa3
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 13:34:12.5376
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5090

I will look at individual patches, too.

> On 17 Jan 2023, at 13:53, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> This is the tools side of the Xen series posted previously.
> 
> With this, a Xen built with long strings can be retrieved:
> 
>  # xl info
>  ...
>  xen_version            : 4.18-unstable+REALLY LONG EXTRAVERSION
>  xen_changeset          : Tue Jan 3 19:27:17 2023 git:52d2da6c0544+REALLY SUPER DUPER EXTRA MEGA LONG CHANGESET
>  ...
> 
> 
> Andrew Cooper (6):
>  tools/libxc: Move xc_version() out of xc_private.c into its own file
>  tools: Introduce a non-truncating xc_xenver_extraversion()
>  tools: Introduce a non-truncating xc_xenver_capabilities()
>  tools: Introduce a non-truncating xc_xenver_changeset()
>  tools: Introduce a non-truncating xc_xenver_cmdline()
>  tools: Introduce a xc_xenver_buildid() wrapper
> 
> tools/include/xenctrl.h             |  10 ++
> tools/libs/ctrl/Makefile.common     |   1 +
> tools/libs/ctrl/xc_private.c        |  66 ------------
> tools/libs/ctrl/xc_private.h        |   7 --
> tools/libs/ctrl/xc_version.c        | 206 ++++++++++++++++++++++++++++++++++++++
> tools/libs/light/libxl.c            |  61 +----------
> tools/ocaml/libs/xc/xenctrl_stubs.c |  45 +++++---
> 7 files changed, 250 insertions(+), 146 deletions(-)
> create mode 100644 tools/libs/ctrl/xc_version.c
> 
> -- 
> 2.11.0
> 

Acked-by: Christian Lindig <christian.lindig@citrix.com>



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:34:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480628.745130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8aE-0005WD-3j; Wed, 18 Jan 2023 13:34:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480628.745130; Wed, 18 Jan 2023 13:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8aE-0005Vu-07; Wed, 18 Jan 2023 13:34:30 +0000
Received: by outflank-mailman (input) for mailman id 480628;
 Wed, 18 Jan 2023 13:34:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TKO3=5P=citrix.com=prvs=3750e6e72=christian.lindig@srs-se1.protection.inumbo.net>)
 id 1pI8aC-0004x8-LT
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:34:28 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d37c940f-9734-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 14:34:26 +0100 (CET)
Received: from mail-bn8nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 08:34:18 -0500
Received: from DM6PR03MB4172.namprd03.prod.outlook.com (2603:10b6:5:5c::23) by
 BN8PR03MB5090.namprd03.prod.outlook.com (2603:10b6:408:db::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.23; Wed, 18 Jan 2023 13:34:13 +0000
Received: from DM6PR03MB4172.namprd03.prod.outlook.com
 ([fe80::a77b:c822:a19b:ef23]) by DM6PR03MB4172.namprd03.prod.outlook.com
 ([fe80::a77b:c822:a19b:ef23%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 13:34:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d37c940f-9734-11ed-b8d1-410ff93cb8f0
From: Christian Lindig <christian.lindig@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Anthony
 Perard <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, David
 Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 3/6] tools: Introduce a non-truncating
 xc_xenver_capabilities()
Thread-Topic: [PATCH 3/6] tools: Introduce a non-truncating
 xc_xenver_capabilities()
Thread-Index: AQHZKnsh/SnHJk9rC02Qsh/SBqHxSa6kLY6A
Date: Wed, 18 Jan 2023 13:34:13 +0000
Message-ID: <C037D1D3-7718-4F77-BE10-B4EBEEC36F69@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-4-andrew.cooper3@citrix.com>
In-Reply-To: <20230117135336.11662-4-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.120.41.1.1)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DM6PR03MB4172:EE_|BN8PR03MB5090:EE_
x-ms-office365-filtering-correlation-id: 87ea2ea3-1332-4ca7-738a-08daf958b04f
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <77708C641A91E541B06233AD81F75CFE@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB4172.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 87ea2ea3-1332-4ca7-738a-08daf958b04f
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 13:34:13.6000
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5090



> On 17 Jan 2023, at 13:53, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> Update libxl and the ocaml stubs to match.  No API/ABI change in either.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Wei Liu <wl@xen.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Juergen Gross <jgross@suse.com>
> CC: Christian Lindig <christian.lindig@citrix.com>
> CC: David Scott <dave@recoil.org>
> CC: Edwin Torok <edvin.torok@citrix.com>
> CC: Rob Hoes <Rob.Hoes@citrix.com>

> ---
> tools/include/xenctrl.h             |  1 +
> tools/libs/ctrl/xc_version.c        |  5 +++++
> tools/libs/light/libxl.c            |  4 +---
> tools/ocaml/libs/xc/xenctrl_stubs.c | 17 +++++++++++++++--
> 4 files changed, 22 insertions(+), 5 deletions(-)

Acked-by: Christian Lindig <christian.lindig@citrix.com>

>
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 1e88d49371a4..279dc17d67d4 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -1609,6 +1609,7 @@ int xc_version(xc_interface *xch, int cmd, void *arg);
>  * free().
>  */
> char *xc_xenver_extraversion(xc_interface *xch);
> +char *xc_xenver_capabilities(xc_interface *xch);
>
> int xc_flask_op(xc_interface *xch, xen_flask_op_t *op);
>
> diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
> index 2c14474feec5..512302a393ea 100644
> --- a/tools/libs/ctrl/xc_version.c
> +++ b/tools/libs/ctrl/xc_version.c
> @@ -156,3 +156,8 @@ char *xc_xenver_extraversion(xc_interface *xch)
> {
>     return varbuf_simple_string(xch, XENVER_extraversion2);
> }
> +
> +char *xc_xenver_capabilities(xc_interface *xch)
> +{
> +    return varbuf_simple_string(xch, XENVER_capabilities2);
> +}
> diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
> index 3e16e568839c..139e838d1407 100644
> --- a/tools/libs/light/libxl.c
> +++ b/tools/libs/light/libxl.c
> @@ -583,7 +583,6 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
>     union {
>         xen_compile_info_t xen_cc;
>         xen_changeset_info_t xen_chgset;
> -        xen_capabilities_info_t xen_caps;
>         xen_platform_parameters_t p_parms;
>         xen_commandline_t xen_commandline;
>         xen_build_id_t build_id;
> @@ -607,8 +606,7 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
>     info->compile_domain = libxl__strdup(NOGC, u.xen_cc.compile_domain);
>     info->compile_date = libxl__strdup(NOGC, u.xen_cc.compile_date);
>
> -    xc_version(ctx->xch, XENVER_capabilities, &u.xen_caps);
> -    info->capabilities = libxl__strdup(NOGC, u.xen_caps);
> +    info->capabilities = xc_xenver_capabilities(ctx->xch);
>
>     xc_version(ctx->xch, XENVER_changeset, &u.xen_chgset);
>     info->changeset = libxl__strdup(NOGC, u.xen_chgset);
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index f3ce12dd8683..368f4727f0a0 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -1009,9 +1009,22 @@ CAMLprim value stub_xc_version_changeset(value xch)
>
> CAMLprim value stub_xc_version_capabilities(value xch)
> {
> -	xen_capabilities_info_t ci;
> +	CAMLparam1(xch);
> +	CAMLlocal1(result);
> +	char *capabilities;
> +
> +	caml_enter_blocking_section();
> +	capabilities = xc_xenver_capabilities(_H(xch));
> +	caml_leave_blocking_section();
>
> -	return xc_version_single_string(xch, XENVER_capabilities, &ci);
> +	if (!capabilities)
> +		failwith_xc(_H(xch));
> +
> +	result = caml_copy_string(capabilities);
> +
> +	free(capabilities);
> +
> +	CAMLreturn(result);
> }
>
>
> --
> 2.11.0
>



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:34:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480629.745141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8aL-0005we-GG; Wed, 18 Jan 2023 13:34:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480629.745141; Wed, 18 Jan 2023 13:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8aL-0005wR-Cv; Wed, 18 Jan 2023 13:34:37 +0000
Received: by outflank-mailman (input) for mailman id 480629;
 Wed, 18 Jan 2023 13:34:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cVew=5P=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1pI8aK-0004x8-1d
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:34:36 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7f7c27f-9734-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 14:34:33 +0100 (CET)
Received: by mail-ej1-x630.google.com with SMTP id u19so83208956ejm.8
 for <xen-devel@lists.xenproject.org>; Wed, 18 Jan 2023 05:34:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7f7c27f-9734-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=k06LEAB5MHAytV5/Uwm4oB7cNWmj3eUN84cJ1XfmwJE=;
        b=lFleeUdKA/QelYXB7+zLFhrnhN7JOCUG+M+3oote2dcYkxVS1ojl68yLEdriWj0Amk
         5skJwNr9YU3L4It4chf5fDjIztZXgu0l0TplAalpnVb6CAtu+xjL/WLsJyggmLWvX/G4
         varTjXv7BGJ/caRA0VbaOjZrH1Kc3OJJ2Wq2k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=k06LEAB5MHAytV5/Uwm4oB7cNWmj3eUN84cJ1XfmwJE=;
        b=Ul4uX7TKb9p58al9TVprHk7f0QjZZNNEHI0cYntTKQaqaCQnqYtuQtcVHazjbTeXMO
         DtzjdvouwE/vG3x49KCa56dSJjC4tmWdKhQbnbkYN5gb4xFPlsKb9HM+dOyofq6osEDM
         DN1HmhAyJGkRqHgUpu2E9T54IzjsJaO9L7DQwZtclurYBDKYLqcIfxGvTvuXOX3y6A5P
         kP7tWw1B0nfTeZ7Zz20aF16Rh+odyRwkhnMmKzuXjKvQXafYYDPQyzf+QJsmALPfRSKX
         clsR/TwAnVhxTna4N8y8GLZ2qyxNc9TLadqaNbJ8QYt0odMLaBWf5wN0Bv9LxUwmEtL9
         TWVw==
X-Google-Smtp-Source: AMrXdXsDlyYmgZzSHVK5gtp513jR06IghCa0lZLxTGFnhIvJQJCtB3kLLdeHnWMvwZTSd6WHqx3ETT3Sj6mRIF4aAAQ=
X-Received: by 2002:a17:906:dd5:b0:82b:61ad:c9ac with SMTP id
 p21-20020a1709060dd500b0082b61adc9acmr1008956eji.606.1674048873264; Wed, 18
 Jan 2023 05:34:33 -0800 (PST)
MIME-Version: 1.0
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-6-ayan.kumar.halder@amd.com> <926307d3-a354-be87-3885-90681dc5ae24@suse.com>
 <37719b71-8405-eefd-3bf5-95c7c8639e82@amd.com> <7e9db781-47a8-719a-d9b1-88de9c503732@suse.com>
In-Reply-To: <7e9db781-47a8-719a-d9b1-88de9c503732@suse.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Wed, 18 Jan 2023 13:34:22 +0000
Message-ID: <CA+zSX=a4hfFKGJfTM5BDenRo=T3kvEbkGhRs=7oh4GgOxDk0+Q@mail.gmail.com>
Subject: Re: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size
To: Jan Beulich <jbeulich@suse.com>
Cc: Ayan Kumar Halder <ayankuma@amd.com>, sstabellini@kernel.org, stefano.stabellini@amd.com, 
	Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com, 
	xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Rahul Singh <rahul.singh@arm.com>, 
	Ayan Kumar Halder <ayan.kumar.halder@amd.com>, julien@xen.org, Wei Xu <xuwei5@hisilicon.com>
Content-Type: multipart/alternative; boundary="0000000000006d9bc905f289e058"

--0000000000006d9bc905f289e058
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 18, 2023 at 1:15 PM Jan Beulich <jbeulich@suse.com> wrote:

> On 18.01.2023 12:15, Ayan Kumar Halder wrote:
> > On 18/01/2023 08:40, Jan Beulich wrote:
> >> On 17.01.2023 18:43, Ayan Kumar Halder wrote:
> >>> @@ -1166,7 +1166,7 @@ static const struct ns16550_config __initconst uart_config[] =
> >>>   static int __init
> >>>   pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
> >>>   {
> >>> -    u64 orig_base = uart->io_base;
> >>> +    paddr_t orig_base = uart->io_base;
> >>>       unsigned int b, d, f, nextf, i;
> >>>
> >>>       /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on bus 0. */
> >>> @@ -1259,7 +1259,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
> >>>                       else
> >>>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
> >>>
> >>> -                    uart->io_base = ((u64)bar_64 << 32) |
> >>> +                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
> >>>                                       (bar & PCI_BASE_ADDRESS_MEM_MASK);
> >>>                   }
> >> This looks wrong to me: You shouldn't blindly truncate to 32 bits. You need
> >> to refuse acting on 64-bit BARs with the upper address bits non-zero.
> >
> > Yes, I was treating this like the others (where Xen does not check for
> > truncation while getting the address/size from the device tree and
> > typecasting it to paddr_t).
> >
> > However, in this case Xen is reading from PCI registers, so it needs
> > to check for truncation.
> >
> > I think the following change should do the job.
> >
> > @@ -1180,6 +1180,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
> >                   unsigned int bar_idx = 0, port_idx = idx;
> >                   uint32_t bar, bar_64 = 0, len, len_64;
> >                   u64 size = 0;
> > +                uint64_t io_base = 0;
> >                   const struct ns16550_config_param *param = uart_param;
> >
> >                   nextf = (f || (pci_conf_read16(PCI_SBDF(0, b, d, f),
> > @@ -1260,8 +1261,11 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
> >                       else
> >                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
> >
> > -                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
> > +                    io_base = ((u64)bar_64 << 32) |
> >                                       (bar & PCI_BASE_ADDRESS_MEM_MASK);
> > +
> > +                    uart->io_base = (paddr_t) io_base;
> > +                    ASSERT(uart->io_base == io_base); /* Detect truncation */
> >                   }
> >                   /* IO based */
> >                   else if ( !param->mmio && (bar & PCI_BASE_ADDRESS_SPACE_IO) )
>
> An assertion can only ever check assumptions about behavior elsewhere in Xen.
> Anything external needs handling properly, including in non-debug builds.
>

Except in this case, it's detecting the result of the compiler cast just
above it, isn't it?  In which case it seems like it should be a
BUILD_BUG_ON() of some sort.

 -George

--0000000000006d9bc905f289e058--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:35:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:35:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480649.745152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8ax-0007Bs-Q6; Wed, 18 Jan 2023 13:35:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480649.745152; Wed, 18 Jan 2023 13:35:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8ax-0007BV-Mw; Wed, 18 Jan 2023 13:35:15 +0000
Received: by outflank-mailman (input) for mailman id 480649;
 Wed, 18 Jan 2023 13:35:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TKO3=5P=citrix.com=prvs=3750e6e72=christian.lindig@srs-se1.protection.inumbo.net>)
 id 1pI8aw-0004x8-RS
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:35:15 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ee2a207b-9734-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 14:35:12 +0100 (CET)
Received: from mail-bn8nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 08:35:09 -0500
Received: from DM6PR03MB4172.namprd03.prod.outlook.com (2603:10b6:5:5c::23) by
 BN8PR03MB5090.namprd03.prod.outlook.com (2603:10b6:408:db::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.5986.23; Wed, 18 Jan 2023 13:35:07 +0000
Received: from DM6PR03MB4172.namprd03.prod.outlook.com
 ([fe80::a77b:c822:a19b:ef23]) by DM6PR03MB4172.namprd03.prod.outlook.com
 ([fe80::a77b:c822:a19b:ef23%3]) with mapi id 15.20.5986.023; Wed, 18 Jan 2023
 13:35:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee2a207b-9734-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674048912;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=HsT86MbsP11ALcUY7CsF2JFD+dHzsaEKtwJwQc9uT0A=;
  b=Fd0xNUQOwpakPov1mtlqCo86LifONXGp8EPKZBzWPBHX1Lm4Pu5Xof24
   GFflaaqf5LS1PX+8Lm3EYhg5rSXzttL8lYmBMu18Z1V9KGXh7TPSvJz5p
   dDglvX80iGUwlaL7JdiLAA6OaUyVtbdWOAZjFeNhSXxLI7NojhhQrlu2z
   Y=;
X-IronPort-RemoteIP: 104.47.55.169
X-IronPort-MID: 92622832
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,226,1669093200"; 
   d="scan'208";a="92622832"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YtjfvgzrwkI2QywwcbmLcAwY13p7RVbWxX5megmxlKJWZRaOrWpIGC2wWRFfz+zEH3D44/tONk7bkAwvv0/QiSJee49FDCAaEa+nxglhhvK3E8rOSKXKeMoAyv0wLAR8x9Qm61YopYRaQFFx9tJQNix5DxAdldBHfY7+IHvx5sGGOUc6lyhbtiWXTpl6MK1XuRx+egYFpXft0ccjLQVUur+3wSWB6PgytxTNl3Jm4E2xLHFFgu4CVQrEOY0Xl5b2cQPaO1UCr5XuxIx2ag2wycVZbMWxOB0oyTZAyoCwjcnPKY5aTGbaI++F27y1TEmpfZ9BcukvbItXmdounUwTfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LD1tWbftmkRjFPtPgfou7M08XKpMsZ7prFIdojMsm8c=;
 b=DjQkfGqXT7sNum+APTp7PYqr6ZEoTcf994ZhQXLyzlqPokf4+LViFq05abxYO7TGSSN1lrkHKGTDuM1HhlbM/PrhfJWaLWVUmdnZRqdZXfPKsjKV0sl8dCUyvpRDwWvYYxMmtWqU+8efyLiXANiKz4WOoBP561QuC4M6y78t46pEra5ZirBnTRAV1gxmJjc3Yjaf6Qken6jA1k7ozd3/3R7bkCkd1R6gfle1ovsdZSQHStqAICZPoGIGw/3Pc0V3gX0aj+M5ron46diCVD3LS/Vs6beJ9y0ynZsvvF4651AmxG/kkkebiAJKEP8k9kJgFS65Su1h1I7WJCPC3ba89w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LD1tWbftmkRjFPtPgfou7M08XKpMsZ7prFIdojMsm8c=;
 b=Oa50r+MVkeMV9DakL/2xY9GG3UtZFYfs5Ot/UuAF9eHzyMZpjXjiOasLJjAQwA+J66homoQ8XQomPjciPFkCcktRDbkrEI+tx2NZCGTiw2PmGLqstxT5VtnmYqYNK2rsYqJ7+5mBFuZ21N8wih+QqZ3dNQCqZId9eCazycqDoAs=
From: Christian Lindig <christian.lindig@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Anthony
 Perard <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, David
 Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 2/6] tools: Introduce a non-truncating
 xc_xenver_extraversion()
Thread-Topic: [PATCH 2/6] tools: Introduce a non-truncating
 xc_xenver_extraversion()
Thread-Index: AQHZKnsgoIAIvLZD2kO3NYPKMb1VtK6kLLyA
Date: Wed, 18 Jan 2023 13:35:07 +0000
Message-ID: <C4766A45-3923-4A4B-9FAA-547F66E7402E@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-3-andrew.cooper3@citrix.com>
In-Reply-To: <20230117135336.11662-3-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3696.120.41.1.1)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DM6PR03MB4172:EE_|BN8PR03MB5090:EE_
x-ms-office365-filtering-correlation-id: 95894d0a-73d1-4d13-969b-08daf958d08a
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A42B9DC4866CF544951C9C79DCCE2E63@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB4172.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 95894d0a-73d1-4d13-969b-08daf958d08a
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 13:35:07.7059
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5090



> On 17 Jan 2023, at 13:53, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> ... which uses XENVER_extraversion2.
> 
> In order to do so sensibly, use manual hypercall buffer handling.  Not only does
> this avoid an extra bounce buffer (we need to strip the xen_varbuf_t header
> anyway), it's also shorter and easier to follow.
> 
> Update libxl and the ocaml stubs to match.  No API/ABI change in either.
> 
> With this change made, `xl info` can now correctly access a >15 char
> extraversion:
> 
>  # xl info xen_version
>  4.18-unstable+REALLY LONG EXTRAVERSION
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Wei Liu <wl@xen.org>
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Juergen Gross <jgross@suse.com>
> CC: Christian Lindig <christian.lindig@citrix.com>
> CC: David Scott <dave@recoil.org>
> CC: Edwin Torok <edvin.torok@citrix.com>
> CC: Rob Hoes <Rob.Hoes@citrix.com>

Acked-by: Christian Lindig <christian.lindig@citrix.com>

> 
> Note: There is a marginal risk of a memory leak in the ocaml bindings, but
> it turns out we have the same bug elsewhere and fixing that is going to be
> rather complicated.
> ---
> tools/include/xenctrl.h             |  6 +++
> tools/libs/ctrl/xc_version.c        | 75 +++++++++++++++++++++++++++++++++++++++++++
> tools/libs/light/libxl.c            |  4 +-
> tools/ocaml/libs/xc/xenctrl_stubs.c |  9 +++--
> 4 files changed, 87 insertions(+), 7 deletions(-)
> 
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 23037874d3d5..1e88d49371a4 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -1604,6 +1604,12 @@ long xc_memory_op(xc_interface *xch, unsigned int cmd, void *arg, size_t len);
> 
> int xc_version(xc_interface *xch, int cmd, void *arg);
> 
> +/*
> + * Wrappers around XENVER_* subops.  Callers must pass the returned pointer to
> + * free().
> + */
> +char *xc_xenver_extraversion(xc_interface *xch);
> +
> int xc_flask_op(xc_interface *xch, xen_flask_op_t *op);
> 
> /*
> diff --git a/tools/libs/ctrl/xc_version.c b/tools/libs/ctrl/xc_version.c
> index 60e107dcee0b..2c14474feec5 100644
> --- a/tools/libs/ctrl/xc_version.c
> +++ b/tools/libs/ctrl/xc_version.c
> @@ -81,3 +81,78 @@ int xc_version(xc_interface *xch, int cmd, void *arg)
> 
>     return rc;
> }
> +
> +/*
> + * Raw hypercall wrapper, letting us pass NULL and things which aren't of
> + * xc_hypercall_buffer_t *.
> + */
> +static int do_xen_version_raw(xc_interface *xch, int cmd, void *hbuf)
> +{
> +    return xencall2(xch->xcall, __HYPERVISOR_xen_version,
> +                    cmd, (unsigned long)hbuf);
> +}
> +
> +/*
> + * Issues a xen_varbuf_t subop, using manual hypercall buffer handling to
> + * avoid unnecessary buffering.
> + *
> + * On failure, returns NULL.  errno probably useful.
> + * On success, returns a pointer which must be freed with xencall_free_buffer().
> + */
> +static xen_varbuf_t *varbuf_op(xc_interface *xch, unsigned int subop)
> +{
> +    xen_varbuf_t *hbuf = NULL;
> +    ssize_t sz;
> +
> +    sz = do_xen_version_raw(xch, subop, NULL);
> +    if ( sz < 0 )
> +        return NULL;
> +
> +    hbuf = xencall_alloc_buffer(xch->xcall, sizeof(*hbuf) + sz);
> +    if ( !hbuf )
> +        return NULL;
> +
> +    hbuf->len = sz;
> +
> +    sz = do_xen_version_raw(xch, subop, hbuf);
> +    if ( sz < 0 )
> +    {
> +        xencall_free_buffer(xch->xcall, hbuf);
> +        return NULL;
> +    }
> +
> +    hbuf->len = sz;
> +    return hbuf;
> +}
> +
> +/*
> + * Wrap varbuf_op() to obtain a simple string.  Copy out of the hypercall
> + * buffer, stripping the xen_varbuf_t header and appending a NUL terminator.
> + *
> + * On failure, returns NULL, errno probably useful.
> + * On success, returns a pointer which must be free()'d.
> + */
> +static char *varbuf_simple_string(xc_interface *xch, unsigned int subop)
> +{
> +    xen_varbuf_t *hbuf = varbuf_op(xch, subop);
> +    char *res;
> +
> +    if ( !hbuf )
> +        return NULL;
> +
> +    res = malloc(hbuf->len + 1);
> +    if ( res )
> +    {
> +        memcpy(res, hbuf->buf, hbuf->len);
> +        res[hbuf->len] = '\0';
> +    }
> +
> +    xencall_free_buffer(xch->xcall, hbuf);
> +
> +    return res;
> +}
> +
> +char *xc_xenver_extraversion(xc_interface *xch)
> +{
> +    return varbuf_simple_string(xch, XENVER_extraversion2);
> +}
> diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
> index a0bf7d186f69..3e16e568839c 100644
> --- a/tools/libs/light/libxl.c
> +++ b/tools/libs/light/libxl.c
> @@ -581,7 +581,6 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
> {
>     GC_INIT(ctx);
>     union {
> -        xen_extraversion_t xen_extra;
>         xen_compile_info_t xen_cc;
>         xen_changeset_info_t xen_chgset;
>         xen_capabilities_info_t xen_caps;
> @@ -600,8 +599,7 @@ const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx)
>     info->xen_version_major = xen_version >> 16;
>     info->xen_version_minor = xen_version & 0xFF;
> 
> -    xc_version(ctx->xch, XENVER_extraversion, &u.xen_extra);
> -    info->xen_version_extra = libxl__strdup(NOGC, u.xen_extra);
> +    info->xen_version_extra = xc_xenver_extraversion(ctx->xch);
> 
>     xc_version(ctx->xch, XENVER_compile_info, &u.xen_cc);
>     info->compiler = libxl__strdup(NOGC, u.xen_cc.compiler);
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index 2fba9c5e94d6..f3ce12dd8683 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -929,9 +929,8 @@ CAMLprim value stub_xc_version_version(value xch)
> {
> 	CAMLparam1(xch);
> 	CAMLlocal1(result);
> -	xen_extraversion_t extra;
> +	char *extra;
> 	long packed;
> -	int retval;
> 
> 	caml_enter_blocking_section();
> 	packed = xc_version(_H(xch), XENVER_version, NULL);
> @@ -941,10 +940,10 @@ CAMLprim value stub_xc_version_version(value xch)
> 		failwith_xc(_H(xch));
> 
> 	caml_enter_blocking_section();
> -	retval = xc_version(_H(xch), XENVER_extraversion, &extra);
> +	extra = xc_xenver_extraversion(_H(xch));
> 	caml_leave_blocking_section();
> 
> -	if (retval)
> +	if (!extra)
> 		failwith_xc(_H(xch));
> 
> 	result = caml_alloc_tuple(3);
> @@ -953,6 +952,8 @@ CAMLprim value stub_xc_version_version(value xch)
> 	Store_field(result, 1, Val_int(packed & 0xffff));
> 	Store_field(result, 2, caml_copy_string(extra));
> 
> +	free(extra);
> +
> 	CAMLreturn(result);
> }
> 
> -- 
> 2.11.0
> 



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:54:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:54:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480663.745163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8sj-0001Sm-Fz; Wed, 18 Jan 2023 13:53:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480663.745163; Wed, 18 Jan 2023 13:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8sj-0001Sf-CX; Wed, 18 Jan 2023 13:53:37 +0000
Received: by outflank-mailman (input) for mailman id 480663;
 Wed, 18 Jan 2023 13:53:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMWs=5P=citrix.com=prvs=37540d4c2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pI8sh-0001SZ-UV
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:53:36 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7e751e6d-9737-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 14:53:33 +0100 (CET)
Received: from mail-dm6nam04lp2040.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 08:53:30 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5054.namprd03.prod.outlook.com (2603:10b6:208:1ac::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 13:53:28 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 13:53:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e751e6d-9737-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: David Woodhouse <dwmw2@infradead.org>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel
	<xen-devel@lists.xenproject.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Paul
 Durrant <paul@xen.org>
Subject: Re: [PATCH] xen: Allow platform PCI interrupt to be shared
Thread-Topic: [PATCH] xen: Allow platform PCI interrupt to be shared
Thread-Index: AQHZKze3xTuOiK3Or0K7Zn5gBYHKJq6kMguA
Date: Wed, 18 Jan 2023 13:53:28 +0000
Message-ID: <e0b75988-bee6-e0c7-0dda-86e1e973ba74@citrix.com>
References: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
In-Reply-To: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <03534A67D9872A42A40B78C4A2F9B784@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 703c0cc5-00de-490e-b2bc-08daf95b6080
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 13:53:28.2265
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5054

On 18/01/2023 12:22 pm, David Woodhouse wrote:
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
> What does xen_evtchn_do_upcall() exist for? Can we delete it? I don't
> see it being called anywhere.

Seems the caller was dropped by
cb09ea2924cbf1a42da59bd30a59cc1836240bcb, but the CONFIG_PVHVM looks
bogus because the precondition to setting it up was being in a Xen HVM
guest, and the guest is taking evtchns by vector either way.

PV guests use the entrypoint called exc_xen_hypervisor_callback which
really ought to gain a PV in its name somewhere.  Also the comments look
distinctly suspect.

Some tidying in this area would be valuable.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 13:59:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 13:59:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480669.745174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8xg-00027R-1g; Wed, 18 Jan 2023 13:58:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480669.745174; Wed, 18 Jan 2023 13:58:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI8xf-00027K-Up; Wed, 18 Jan 2023 13:58:43 +0000
Received: by outflank-mailman (input) for mailman id 480669;
 Wed, 18 Jan 2023 13:58:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI8xe-00027D-In
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 13:58:42 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2057.outbound.protection.outlook.com [40.107.249.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 361eecdd-9738-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 14:58:40 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7705.eurprd04.prod.outlook.com (2603:10a6:10:209::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Wed, 18 Jan
 2023 13:58:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 13:58:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 361eecdd-9738-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <60628f62-c9fd-c29b-5c08-f3f746201f01@suse.com>
Date: Wed, 18 Jan 2023 14:58:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for
 address/size
Content-Language: en-US
To: George Dunlap <george.dunlap@cloud.com>
Cc: Ayan Kumar Halder <ayankuma@amd.com>, sstabellini@kernel.org,
 stefano.stabellini@amd.com, Volodymyr_Babchuk@epam.com,
 bertrand.marquis@arm.com, xen-devel@lists.xenproject.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Rahul Singh <rahul.singh@arm.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, julien@xen.org,
 Wei Xu <xuwei5@hisilicon.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-6-ayan.kumar.halder@amd.com>
 <926307d3-a354-be87-3885-90681dc5ae24@suse.com>
 <37719b71-8405-eefd-3bf5-95c7c8639e82@amd.com>
 <7e9db781-47a8-719a-d9b1-88de9c503732@suse.com>
 <CA+zSX=a4hfFKGJfTM5BDenRo=T3kvEbkGhRs=7oh4GgOxDk0+Q@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CA+zSX=a4hfFKGJfTM5BDenRo=T3kvEbkGhRs=7oh4GgOxDk0+Q@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0190.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
 =?utf-8?B?VkU1VG9HdWRUT2JyWFJncWkxUlpPa3lNWFY3K3d6Ky8zenFFVHN6dURLL1dz?=
 =?utf-8?B?a0xiZlROOHVhQisybUF1cVBnOTNKYlJRUjhwREl2V21iaVBFNnZDancwQ3Qx?=
 =?utf-8?B?TVZscXRFWmRWVFRSNmtRejNpcnVyRVBSRWVNaUhrM25rNkNmNmMyZnJFMUdk?=
 =?utf-8?B?RCtFU0l1ZFN6QklwTktUdjliY1JDWFlKVFl6TFRVbG16WHFmMnpWNlltMStB?=
 =?utf-8?B?THR6YVo4WUlXN0N0TU5vb2VpaFJWQ2hBRSs4d0RZeXVzcXNncUM2RnZ6eUVq?=
 =?utf-8?Q?8RMDAul2XFZjBQLRYHFHPKp4R?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2c83d02a-4cdf-4fae-1624-08daf95c18c9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 13:58:37.6047
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7705

On 18.01.2023 14:34, George Dunlap wrote:
> On Wed, Jan 18, 2023 at 1:15 PM Jan Beulich <jbeulich@suse.com> wrote:
> 
>> On 18.01.2023 12:15, Ayan Kumar Halder wrote:
>>> On 18/01/2023 08:40, Jan Beulich wrote:
>>>> On 17.01.2023 18:43, Ayan Kumar Halder wrote:
>>>>> @@ -1166,7 +1166,7 @@ static const struct ns16550_config __initconst uart_config[] =
>>>>>   static int __init
>>>>>   pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>>>>>   {
>>>>> -    u64 orig_base = uart->io_base;
>>>>> +    paddr_t orig_base = uart->io_base;
>>>>>       unsigned int b, d, f, nextf, i;
>>>>>
>>>>>       /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on bus 0. */
>>>>> @@ -1259,7 +1259,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>>>>>                       else
>>>>>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
>>>>>
>>>>> -                    uart->io_base = ((u64)bar_64 << 32) |
>>>>> +                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
>>>>>                                       (bar & PCI_BASE_ADDRESS_MEM_MASK);
>>>>>                   }
>>>> This looks wrong to me: You shouldn't blindly truncate to 32 bits. You need
>>>> to refuse acting on 64-bit BARs with the upper address bits non-zero.
>>>
>>> Yes, I was treating this like the others (where Xen does not check for
>>> truncation while getting the address/size from the device tree and
>>> casting it to paddr_t).
>>>
>>> However, in this case Xen is reading from PCI registers, so it needs
>>> to check for truncation.
>>>
>>> I think the following change should do the job.
>>>
>>> @@ -1180,6 +1180,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>>>                   unsigned int bar_idx = 0, port_idx = idx;
>>>                   uint32_t bar, bar_64 = 0, len, len_64;
>>>                   u64 size = 0;
>>> +                uint64_t io_base = 0;
>>>                   const struct ns16550_config_param *param = uart_param;
>>>
>>>                   nextf = (f || (pci_conf_read16(PCI_SBDF(0, b, d, f),
>>> @@ -1260,8 +1261,11 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>>>                       else
>>>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
>>>
>>> -                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
>>> +                    io_base = ((u64)bar_64 << 32) |
>>>                                       (bar & PCI_BASE_ADDRESS_MEM_MASK);
>>> +
>>> +                    uart->io_base = (paddr_t) io_base;
>>> +                    ASSERT(uart->io_base == io_base); /* Detect truncation */
>>>                   }
>>>                   /* IO based */
>>>                   else if ( !param->mmio && (bar & PCI_BASE_ADDRESS_SPACE_IO) )
>>
>> An assertion can only ever check assumptions about behavior elsewhere in Xen.
>> Anything external needs handling properly, including in non-debug builds.
>>
> 
> Except in this case, it's detecting the result of the compiler cast just
> above it, isn't it?

Not really, no - it checks whether there was truncation, but the
absence (or presence) thereof is still a property of the underlying
system, not one in Xen.

>  In which case it seems like it should be a BUILD_BUG_ON() of some sort.

Such a check would then be to make sure paddr_t == uint64_t, which is
precisely an equivalence Ayan wants to do away with.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 14:07:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 14:07:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480675.745184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI95Z-0003ga-Qf; Wed, 18 Jan 2023 14:06:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480675.745184; Wed, 18 Jan 2023 14:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI95Z-0003gT-Ni; Wed, 18 Jan 2023 14:06:53 +0000
Received: by outflank-mailman (input) for mailman id 480675;
 Wed, 18 Jan 2023 14:06:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vku6=5P=casper.srs.infradead.org=BATV+4fd80059aba2dd5ff62c+7087+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pI95W-0003gN-TW
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 14:06:51 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 59269b62-9739-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 15:06:49 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pI95N-0002k7-8c; Wed, 18 Jan 2023 14:06:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59269b62-9739-11ed-91b6-6bf2151ebd3b
Message-ID: <82e6a68f3cb8c0e440f7dc848fa3444725b3f893.camel@infradead.org>
Subject: Re: [PATCH] xen: Allow platform PCI interrupt to be shared
From: David Woodhouse <dwmw2@infradead.org>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Juergen Gross
 <jgross@suse.com>,  Stefano Stabellini <sstabellini@kernel.org>, xen-devel
 <xen-devel@lists.xenproject.org>, "linux-kernel@vger.kernel.org"
 <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Paul
 Durrant <paul@xen.org>
Date: Wed, 18 Jan 2023 14:06:40 +0000
In-Reply-To: <e0b75988-bee6-e0c7-0dda-86e1e973ba74@citrix.com>
References: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
	 <e0b75988-bee6-e0c7-0dda-86e1e973ba74@citrix.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-IKVdCFFG2doG7Kjh6h4U"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-IKVdCFFG2doG7Kjh6h4U
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2023-01-18 at 13:53 +0000, Andrew Cooper wrote:
> On 18/01/2023 12:22 pm, David Woodhouse wrote:
> > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > ---
> > What does xen_evtchn_do_upcall() exist for? Can we delete it? I don't
> > see it being called anywhere.
>
> Seems the caller was dropped by
> cb09ea2924cbf1a42da59bd30a59cc1836240bcb, but the CONFIG_PVHVM looks
> bogus because the precondition to setting it up was being in a Xen HVM
> guest, and the guest is taking evtchns by vector either way.
>
> PV guests use the entrypoint called exc_xen_hypervisor_callback which
> really ought to gain a PV in its name somewhere.  Also the comments look
> distinctly suspect.

Yeah. I couldn't *see* any asm or macro magic which would reference
xen_evtchn_do_upcall, and removing it from my build (with CONFIG_XEN_PV
enabled) also didn't break anything.

> Some tidying in this area would be valuable.

Indeed. I just need Paul or myself to throw in a basic XenStore
implementation so we can provide a PV disk, and I should be able to do
quickfire testing of PV guests too with 'qemu -kernel' and a PV shim.

PVHVM would be an entertaining thing to support too; I suppose that's
mostly a case of basing it on the microvm qemu platform, or perhaps an
even *more* minimal x86-based platform?



--=-IKVdCFFG2doG7Kjh6h4U--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 14:14:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 14:14:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480683.745195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9CK-0005DG-Np; Wed, 18 Jan 2023 14:13:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480683.745195; Wed, 18 Jan 2023 14:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9CK-0005D9-LB; Wed, 18 Jan 2023 14:13:52 +0000
Received: by outflank-mailman (input) for mailman id 480683;
 Wed, 18 Jan 2023 14:13:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QcrT=5P=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pI9CJ-0005D3-2M
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 14:13:51 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2040.outbound.protection.outlook.com [40.107.20.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5379573b-973a-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 15:13:48 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8560.eurprd04.prod.outlook.com (2603:10a6:102:217::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Wed, 18 Jan
 2023 14:13:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 14:13:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5379573b-973a-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <58790f2f-ba46-df0c-2620-370e3994faea@suse.com>
Date: Wed, 18 Jan 2023 15:13:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
 <c5d201ac-89ca-6baa-d685-5bef2497183f@citrix.com>
 <a438f16b-7d45-d7e4-2191-4ed7b2077785@suse.com>
 <71e7ba34-f1ea-16fd-ec01-bb2aa454270c@citrix.com>
 <b49793f8-55be-0746-815d-ab7b627d3baa@suse.com>
 <733edba4-6913-97a4-f949-4f8899a3bba9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <733edba4-6913-97a4-f949-4f8899a3bba9@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0147.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 10f99de7-400e-4fd4-fcc9-08daf95e3614
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 14:13:46.4526
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8560

On 17.01.2023 20:13, Andrew Cooper wrote:
> On 12/01/2023 10:42 am, Jan Beulich wrote:
>> On 12.01.2023 11:31, Andrew Cooper wrote:
>>> On 12/01/2023 9:47 am, Jan Beulich wrote:
>>>> On 12.01.2023 00:15, Andrew Cooper wrote:
>>>>> On 11/01/2023 1:57 pm, Jan Beulich wrote:
>>>>>> Make HVM=y release build behavior robust against array overrun, by
>>>>>> (ab)using array_access_nospec(). This is in particular to guard against
>>>>>> e.g. SH_type_unused making it here unintentionally.
>>>>>>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>> ---
>>>>>> v2: New.
>>>>>>
>>>>>> --- a/xen/arch/x86/mm/shadow/private.h
>>>>>> +++ b/xen/arch/x86/mm/shadow/private.h
>>>>>> @@ -27,6 +27,7 @@
>>>>>>  // been included...
>>>>>>  #include <asm/page.h>
>>>>>>  #include <xen/domain_page.h>
>>>>>> +#include <xen/nospec.h>
>>>>>>  #include <asm/x86_emulate.h>
>>>>>>  #include <asm/hvm/support.h>
>>>>>>  #include <asm/atomic.h>
>>>>>> @@ -368,7 +369,7 @@ shadow_size(unsigned int shadow_type)
>>>>>>  {
>>>>>>  #ifdef CONFIG_HVM
>>>>>>      ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
>>>>>> -    return sh_type_to_size[shadow_type];
>>>>>> +    return array_access_nospec(sh_type_to_size, shadow_type);
>>>>> I don't think this is warranted.
>>>>>
>>>>> First, if the commit message were accurate, then it's a problem for all
>>>>> arrays of size SH_type_unused, yet you've only adjusted a single instance.
>>>> Because I think the risk is higher here than for other arrays. In
>>>> other cases we have suitable build-time checks (HASH_CALLBACKS_CHECK()
>>>> in particular) which would trip upon inappropriate use of one of the
>>>> types which are aliased to SH_type_unused when !HVM.
>>>>
>>>>> Secondly, if it were reliably 16 then we could do the basically-free
>>>>> "type &= 15;" modification.  (It appears my change to do this
>>>>> automatically hasn't been taken yet.)  As it stands, we'll end up
>>>>> with the lfence variant here.
>>>> How could anything be "reliably 16"? Such enums can change at any time:
>>>> They did when making HVM types conditional, and they will again when
>>>> adding types needed for 5-level paging.
>>>>
>>>>> But the value isn't attacker controlled.  shadow_type always comes from
>>>>> Xen's metadata about the guest, not the guest itself.  So I don't see
>>>>> how this can conceivably be a speculative issue even in principle.
>>>> I didn't say anything about there being a speculative issue here. It
>>>> is for this very reason that I wrote "(ab)using".
>>> Then it is entirely wrong to be using a nospec accessor.
>>>
>>> Speculation problems are subtle enough, without false uses of the safety
>>> helpers.
>>>
>>> If you want to "harden" against runtime architectural errors, you want
>>> to up the ASSERT() to a BUG(), which will execute faster than sticking
>>> and hiding an lfence in here, and will not hide what is obviously a
>>> major malfunction in the shadow pagetable logic.
>> I should have commented on this earlier already: What lfence are you
>> talking about?
> 
> The one I thought was hiding under array_access_nospec(), but I forgot
> we'd done the sbb trick.
> 
>> As to BUG() - the goal here specifically is to avoid a
>> crash in release builds, by forcing the function to return zero (via
>> having it use array slot zero for out of range indexes).
> 
> I'm very uneasy about having something this deep inside a component,
> which ASSERT()s correctness doing something custom like this "just to
> avoid crashing".
> 
> There is either something important which makes this more likely than
> most to go wrong at runtime, or there is not.  And honestly, I can't see
> why it is more risky at runtime.

Well, okay. I did explain why I think there is an increased risk here.

> If we really do need to clamp it, then we need a brand new helper with a
> name that doesn't reference speculation at all.  It's fine for *_nospec
> to reuse this helper, stating the safety of doing so, but at a code
> level there need to be two appropriately named helpers for their two
> logically-unrelated purposes.

I don't think such a helper can sensibly be made for more general
(not speculation related) use. Here I'm specifically relying on array
slot 0 being picked as the fallback slot _and_ on that slot actually
being suitable. This may not be the case for other arrays.

Anyway, taking things together, I will simply consider the patch
rejected, despite it being a seemingly easy hardening step.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 14:23:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 14:23:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480689.745206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9L0-0006hC-JD; Wed, 18 Jan 2023 14:22:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480689.745206; Wed, 18 Jan 2023 14:22:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9L0-0006h5-GT; Wed, 18 Jan 2023 14:22:50 +0000
Received: by outflank-mailman (input) for mailman id 480689;
 Wed, 18 Jan 2023 14:22:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMWs=5P=citrix.com=prvs=37540d4c2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pI9Kz-0006gz-5y
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 14:22:49 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 93c3210c-973b-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 15:22:47 +0100 (CET)
Received: from mail-bn7nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 09:22:43 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH2PR03MB5336.namprd03.prod.outlook.com (2603:10b6:610:94::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 14:22:42 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 14:22:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93c3210c-973b-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: David Woodhouse <dwmw2@infradead.org>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel
	<xen-devel@lists.xenproject.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Paul
 Durrant <paul@xen.org>
Subject: Re: [PATCH] xen: Allow platform PCI interrupt to be shared
Thread-Topic: [PATCH] xen: Allow platform PCI interrupt to be shared
Thread-Index: AQHZKze3xTuOiK3Or0K7Zn5gBYHKJq6kMguAgAADsQCAAAR6gA==
Date: Wed, 18 Jan 2023 14:22:41 +0000
Message-ID: <973f9ebd-173e-6753-7799-a660994e38de@citrix.com>
References: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
 <e0b75988-bee6-e0c7-0dda-86e1e973ba74@citrix.com>
 <82e6a68f3cb8c0e440f7dc848fa3444725b3f893.camel@infradead.org>
In-Reply-To: <82e6a68f3cb8c0e440f7dc848fa3444725b3f893.camel@infradead.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Type: text/plain; charset="utf-8"
Content-ID: <5369FA15838AED4C865011484F15B5A1@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 18/01/2023 2:06 pm, David Woodhouse wrote:
> On Wed, 2023-01-18 at 13:53 +0000, Andrew Cooper wrote:
>> On 18/01/2023 12:22 pm, David Woodhouse wrote:
>>> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
>>> ---
>>> What does xen_evtchn_do_upcall() exist for? Can we delete it? I don't
>>> see it being called anywhere.
>> Seems the caller was dropped by
>> cb09ea2924cbf1a42da59bd30a59cc1836240bcb, but the CONFIG_PVHVM looks
>> bogus because the precondition to setting it up was being in a Xen HVM
>> guest, and the guest is taking evtchns by vector either way.
>>
>> PV guests use the entrypoint called exc_xen_hypervisor_callback which
>> really ought to gain a PV in its name somewhere.  Also the comments look
>> distinctly suspect.
> Yeah. I couldn't *see* any asm or macro magic which would reference
> xen_evtchn_do_upcall, and removing it from my build (with CONFIG_XEN_PV
> enabled) also didn't break anything.
>
>> Some tidying in this area would be valuable.
> Indeed. I just need Paul or myself to throw in a basic XenStore
> implementation so we can provide a PV disk, and I should be able to do
> quickfire testing of PV guests too with 'qemu -kernel' and a PV shim.
>
> PVHVM would be an entertaining thing to support too; I suppose that's
> mostly a case of basing it on the microvm qemu platform, or perhaps
> even *more* minimal x86-based platform?

There is no actual thing called PVHVM.  That diagram has caused far more
damage than good...

There's HVM (and by this, I mean the hypervisor's interpretation meaning
VT-x or SVM), and a spectrum of things the guest kernel can do if it
desires.

I'm pretty sure Linux knows all of them.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 14:27:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 14:27:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480696.745218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9P5-0007OQ-8q; Wed, 18 Jan 2023 14:27:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480696.745218; Wed, 18 Jan 2023 14:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9P5-0007OJ-5d; Wed, 18 Jan 2023 14:27:03 +0000
Received: by outflank-mailman (input) for mailman id 480696;
 Wed, 18 Jan 2023 14:27:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vku6=5P=casper.srs.infradead.org=BATV+4fd80059aba2dd5ff62c+7087+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pI9P1-0007O9-97
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 14:27:01 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28d183f8-973c-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 15:26:56 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pI9Os-0003f6-AH; Wed, 18 Jan 2023 14:26:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28d183f8-973c-11ed-b8d1-410ff93cb8f0
Message-ID: <1ce5f5501214f1073f4eb86b2fdf3f54b2400057.camel@infradead.org>
Subject: Re: [PATCH] xen: Allow platform PCI interrupt to be shared
From: David Woodhouse <dwmw2@infradead.org>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Juergen Gross
 <jgross@suse.com>,  Stefano Stabellini <sstabellini@kernel.org>, xen-devel
 <xen-devel@lists.xenproject.org>, "linux-kernel@vger.kernel.org"
 <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Paul
 Durrant <paul@xen.org>
Date: Wed, 18 Jan 2023 14:26:49 +0000
In-Reply-To: <973f9ebd-173e-6753-7799-a660994e38de@citrix.com>
References: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
	 <e0b75988-bee6-e0c7-0dda-86e1e973ba74@citrix.com>
	 <82e6a68f3cb8c0e440f7dc848fa3444725b3f893.camel@infradead.org>
	 <973f9ebd-173e-6753-7799-a660994e38de@citrix.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-hNVI4o99yM337Xtbo0NU"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-hNVI4o99yM337Xtbo0NU
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2023-01-18 at 14:22 +0000, Andrew Cooper wrote:
> On 18/01/2023 2:06 pm, David Woodhouse wrote:
> > On Wed, 2023-01-18 at 13:53 +0000, Andrew Cooper wrote:
> > > On 18/01/2023 12:22 pm, David Woodhouse wrote:
> > > > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > > > ---
> > > > What does xen_evtchn_do_upcall() exist for? Can we delete it? I don't
> > > > see it being called anywhere.
> > > Seems the caller was dropped by
> > > cb09ea2924cbf1a42da59bd30a59cc1836240bcb, but the CONFIG_PVHVM looks
> > > bogus because the precondition to setting it up was being in a Xen HVM
> > > guest, and the guest is taking evtchns by vector either way.
> > >
> > > PV guests use the entrypoint called exc_xen_hypervisor_callback which
> > > really ought to gain a PV in its name somewhere.  Also the comments look
> > > distinctly suspect.
> > Yeah. I couldn't *see* any asm or macro magic which would reference
> > xen_evtchn_do_upcall, and removing it from my build (with CONFIG_XEN_PV
> > enabled) also didn't break anything.
> >
> > > Some tidying in this area would be valuable.
> > Indeed. I just need Paul or myself to throw in a basic XenStore
> > implementation so we can provide a PV disk, and I should be able to do
> > quickfire testing of PV guests too with 'qemu -kernel' and a PV shim.
> >
> > PVHVM would be an entertaining thing to support too; I suppose that's
> > mostly a case of basing it on the microvm qemu platform, or perhaps
> > even *more* minimal x86-based platform?
> 
> There is no actual thing called PVHVM.  That diagram has caused far more
> damage than good...

Perhaps so. Even CONFIG_XEN_PVHVM in the kernel is a nonsense, because
it's just automatically set based on (XEN && X86_LOCAL_APIC). And
CONFIG_XEN depends on X86_LOCAL_APIC anyway.

Which is why it never mattered that the vector callback handling was
under #ifdef CONFIG_XEN_PVHVM rather than just CONFIG_XEN.
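[Archive editor's note: the Kconfig relationship being described looks
roughly like the fragment below, paraphrased from the kernel's x86 Xen
Kconfig; the exact wording varies by kernel version, so treat this as an
illustrative sketch rather than a verbatim quote.]

```kconfig
# XEN_PVHVM is not user-selectable; it is derived automatically,
# and CONFIG_XEN itself already requires X86_LOCAL_APIC.
config XEN_PVHVM
	def_bool y
	depends on XEN && X86_LOCAL_APIC
```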

> There's HVM (and by this, I mean the hypervisor's interpretation meaning
> VT-x or SVM), and a spectrum of things the guest kernel can do if it
> desires.
>
> I'm pretty sure Linux knows all of them.

But don't we want to refrain from providing the legacy PC platform devices?


dGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMT
NVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEA
xr4ZlmdAxAMdKFES+jupfjANBglghkgBZQMEAgEFAKCCAeswGAYJKoZIhvcNAQkDMQsGCSqGSIb3
DQEHATAcBgkqhkiG9w0BCQUxDxcNMjMwMTE4MTQyNjQ5WjAvBgkqhkiG9w0BCQQxIgQgBr2PquVp
0+/fmcGxeq06tt4u5dCNE3zPm1uDx9DYzaowgb0GCSsGAQQBgjcQBDGBrzCBrDCBljELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEYMBYG
A1UEChMPU2VjdGlnbyBMaW1pdGVkMT4wPAYDVQQDEzVTZWN0aWdvIFJTQSBDbGllbnQgQXV0aGVu
dGljYXRpb24gYW5kIFNlY3VyZSBFbWFpbCBDQQIRAMa+GZZnQMQDHShREvo7qX4wgb8GCyqGSIb3
DQEJEAILMYGvoIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVy
MRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNl
Y3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEAxr4Z
lmdAxAMdKFES+jupfjANBgkqhkiG9w0BAQEFAASCAgAdNMhb0LAisxvFf6+201+0NbJYfEL5V3gY
u160AFWIUA1570SNluCbWtsJ/dc49pVFDFvb/wsCvuBGCdZbdOrstdy2wVK1lR6sZdrjAMfEpCzb
uEPb3/6+vDHwe0hfToo9VwQH8vFZBEA2Ak8L3tcbMD2nqM7N0WSsHRaE1NiLJmwS5b4Xk+kNYp7Q
ow7dIShRfIgryKPrX4HHR7CF6sT5DmzaBYi5J/1tZs1bvTVhR1bRMKFkNdrZeX6+PRf1wowLNqzB
033z3wxByDWuog1+fFMqZ4fljt2womBf3qFt4I/qmnHZ8+FcibJ7fPyumQCN87z4fi9zf8U3C9hq
0CArAAlwCo6vPU+I7oaSDtv2csLt+GC+qfpBRiYw4Layma+dpEqT5Qyqy98n+lzLif1OokKRMv4r
JejTmx29m+V7KkCPE6pPN7ldOttACJCt5RjPp8I3CnwEC1BN+vmy+8227ruQh8599fQVhOHeC2Vp
NB98tIIozKVXJ42EvRECez9qeBfNMvyQvCl3Imgn5O9Fg4e069y9y4soo2SfuUw221eaKi1X7J4z
cm/9RWsTEvt3pENPWGFIXYoOxFg/U/gzl4JxXpznUS7IffVH0qVa3v6y/Dgzhx7isNYzc8J77hUq
V8P5Qa15qVrWVlsb1G0Hd/u9zOynxvpDruMPrsjU/AAAAAAAAA==


--=-hNVI4o99yM337Xtbo0NU--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 14:40:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 14:40:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480703.745228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9bW-0000ZH-De; Wed, 18 Jan 2023 14:39:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480703.745228; Wed, 18 Jan 2023 14:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9bW-0000ZA-Al; Wed, 18 Jan 2023 14:39:54 +0000
Received: by outflank-mailman (input) for mailman id 480703;
 Wed, 18 Jan 2023 14:39:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMWs=5P=citrix.com=prvs=37540d4c2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pI9bV-0000Z4-Mz
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 14:39:53 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f50d9a64-973d-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 15:39:49 +0100 (CET)
Received: from mail-bn8nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 09:39:38 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB7009.namprd03.prod.outlook.com (2603:10b6:303:1a4::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 14:39:34 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 14:39:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f50d9a64-973d-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674052789;
  h=from:to:subject:date:message-id:references:in-reply-to:
   content-id:content-transfer-encoding:mime-version;
  bh=61deY8fdjKe37EO7vaHkGmNrWpN4ki2JTcLxXAhuSh0=;
  b=fOwloCzCIv3lLYUxkyFSE0W/3SP6wpvXzJyAnuBFxJQiyYl5VWP2h3GC
   OzJnENy9ZfcxiWDnZ3mO5zeMnveLqfqroorHAWfxUOPPZuplhFxO5dkxf
   1IiovK7qkzZfPVdd0DXeEL9ZCEZNQmNqANDYJ3ug83iC/B6We1+bs2jk7
   g=;
X-IronPort-RemoteIP: 104.47.55.170
X-IronPort-MID: 95631096
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,226,1669093200"; 
   d="scan'208";a="95631096"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cYtObBzAErfhzWLRoU/BRdEM2HGB2Fby/gdhEfNYAvzhIc7NfOPMzi9sXjboG8RjE1MmtJif5iXLNNhk7l+IG9I0BzrXAsU2zO8a5cLQya37nnUrmrHiX47ovmiUp6c+8nBuoxD6FNwEL0792S1TJGyAxXdQ9yB3f0kw4wfgmqL/STv9G49Ax4PYGiXT49VC5VAHgl0nFirepJTnZqb6WyCfTRFiejnL1jC+7YmqphAtXZOOYJLquFOGdOfA/wmQRe28dWtBvrpKgOJDPrQsBRitG5CpkPsI1hM3lgSeGOIbRYgebnd0luYFBLS3paJqvmz9XkybLtdfNErFV63oGA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=61deY8fdjKe37EO7vaHkGmNrWpN4ki2JTcLxXAhuSh0=;
 b=Xogm+ldjUb2QajbICSw5eu6/rScHnjazDs+1yK8pFl1xMo1QjT11Y2mMR2PgoDdOKxORmj4L/o3aWdBRv1ugsaMS/iFd0wqqahe3kWl8aDCtfS/8ueAkwp8pjHAjADPr/JS6el1vMRCziI1Okvg5zJ0ngN9YxTnD/hxGcAf6BBLuTokQXLn1+x3QbHDAdRvvzofAD1SXxGL4SZOQ1skWJsiCmIjP0NrMxBISYxY5+LQ37sRovdDTlt3H7yrelGUu7pnbL6YQow7SaWYbxlvAJQ4LzccU9zq/h+Hpc18W6oIzQrHFXhP6+BUppx52fPdNcmpq5SQrZkv395iJtsG1Hw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=61deY8fdjKe37EO7vaHkGmNrWpN4ki2JTcLxXAhuSh0=;
 b=R+A5chSNAgwIige+VZVtP5GhG1G95gebmVuYtTXx0us8ylDvbgmJSNvTmXbQpJE9L6xsABbzAv5yJEwY+WodYwN8HLh6GsFzpEb4LAp6pRK6GvJNLjD3JL5E8uqHxdJ5fRAgyUon8jGHFLnHQPJZcr40k8Pz5RMMV8YtqvI8mf0=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: David Woodhouse <dwmw2@infradead.org>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel
	<xen-devel@lists.xenproject.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Paul
 Durrant <paul@xen.org>
Subject: Re: [PATCH] xen: Allow platform PCI interrupt to be shared
Thread-Topic: [PATCH] xen: Allow platform PCI interrupt to be shared
Thread-Index: AQHZKze3xTuOiK3Or0K7Zn5gBYHKJq6kMguAgAADsQCAAAR6gIAAASeAgAADj4A=
Date: Wed, 18 Jan 2023 14:39:34 +0000
Message-ID: <d45d172b-0d9e-a45c-36da-3ed42c909081@citrix.com>
References: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
 <e0b75988-bee6-e0c7-0dda-86e1e973ba74@citrix.com>
 <82e6a68f3cb8c0e440f7dc848fa3444725b3f893.camel@infradead.org>
 <973f9ebd-173e-6753-7799-a660994e38de@citrix.com>
 <1ce5f5501214f1073f4eb86b2fdf3f54b2400057.camel@infradead.org>
In-Reply-To: <1ce5f5501214f1073f4eb86b2fdf3f54b2400057.camel@infradead.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MW4PR03MB7009:EE_
x-ms-office365-filtering-correlation-id: 40c97fb7-4f73-4891-1ff0-08daf961d11f
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <61457CD15239A6488F6F48B7395ECA29@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 40c97fb7-4f73-4891-1ff0-08daf961d11f
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 14:39:34.1812
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: TLmRLSVwf9ilcHo83jkWQeWMgw8vHZN5yRXqwTekivdeBR/csi/Y/ksk8OlrGWk+pzUzKhxjd87ziF/FTaJJ8Wc/usZ19ZToDIZCBUSAbAM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB7009

On 18/01/2023 2:26 pm, David Woodhouse wrote:
> On Wed, 2023-01-18 at 14:22 +0000, Andrew Cooper wrote:
>> On 18/01/2023 2:06 pm, David Woodhouse wrote:
>>> On Wed, 2023-01-18 at 13:53 +0000, Andrew Cooper wrote:
>>>> On 18/01/2023 12:22 pm, David Woodhouse wrote:
>>>>> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
>>>>> ---
>>>>> What does xen_evtchn_do_upcall() exist for? Can we delete it? I don't
>>>>> see it being called anywhere.
>>>> Seems the caller was dropped by
>>>> cb09ea2924cbf1a42da59bd30a59cc1836240bcb, but the CONFIG_PVHVM looks
>>>> bogus because the precondition to setting it up was being in a Xen HVM
>>>> guest, and the guest is taking evtchns by vector either way.
>>>>
>>>> PV guests use the entrypoint called exc_xen_hypervisor_callback which
>>>> really ought to gain a PV in its name somewhere.  Also the comments look
>>>> distinctly suspect.
>>> Yeah. I couldn't *see* any asm or macro magic which would reference
>>> xen_evtchn_do_upcall, and removing it from my build (with CONFIG_XEN_PV
>>> enabled) also didn't break anything.
>>>
>>>> Some tidying in this area would be valuable.
>>> Indeed. I just need Paul or myself to throw in a basic XenStore
>>> implementation so we can provide a PV disk, and I should be able to do
>>> quickfire testing of PV guests too with 'qemu -kernel' and a PV shim.
>>>
>>> PVHVM would be an entertaining thing to support too; I suppose that's
>>> mostly a case of basing it on the microvm qemu platform, or perhaps
>>> even *more* minimal x86-based platform?
>> There is no actual thing called PVHVM.  That diagram has caused far more
>> damage than good...
> Perhaps so. Even CONFIG_XEN_PVHVM in the kernel is a nonsense, because
> it's just automatically set based on (XEN && X86_LOCAL_APIC). And
> CONFIG_XEN depends on X86_LOCAL_APIC anyway.
>
> Which is why isn't never mattered that the vector callback handling was
> under #ifdef CONFIG_XEN_PVHVM not just CONFIG_XEN.
>
>> There's HVM (and by this, I mean the hypervisor's interpretation meaning
>> VT-x or SVM), and a spectrum of things the guest kernel can do if it
>> desires.
>>
>> I'm pretty sure Linux knows all of them.
> But don't we want to refrain from providing the legacy PC platform devices?

That also exists and works fine (and is one slice on the spectrum).  KVM
even borrowed our PVH boot API because we'd already done the hard work
in Linux.

~Andrew
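
For readers following the thread, the CONFIG_XEN_PVHVM behaviour being discussed (an option switched on automatically from XEN && X86_LOCAL_APIC, with no prompt of its own) corresponds to a Kconfig fragment along these lines. This is an illustrative sketch based on the description in the thread, not verbatim kernel source:

```kconfig
# Sketch of the arch/x86/xen/Kconfig relationship described in the thread
# (illustrative, not the exact upstream source).
config XEN
	bool "Xen guest support"
	depends on PARAVIRT && X86_LOCAL_APIC

# def_bool with no prompt: enabled automatically whenever the dependencies
# hold, and since XEN already depends on X86_LOCAL_APIC this is effectively
# always on when XEN is -- which is why #ifdef CONFIG_XEN_PVHVM never
# gated anything in practice.
config XEN_PVHVM
	def_bool y
	depends on XEN && X86_LOCAL_APIC
```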


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 14:43:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 14:43:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480710.745240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9et-00022D-0P; Wed, 18 Jan 2023 14:43:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480710.745240; Wed, 18 Jan 2023 14:43:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9es-000226-St; Wed, 18 Jan 2023 14:43:22 +0000
Received: by outflank-mailman (input) for mailman id 480710;
 Wed, 18 Jan 2023 14:43:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vku6=5P=casper.srs.infradead.org=BATV+4fd80059aba2dd5ff62c+7087+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pI9er-000220-Ph
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 14:43:21 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 73c7cd74-973e-11ed-91b6-6bf2151ebd3b;
 Wed, 18 Jan 2023 15:43:20 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pI9ej-0004Hz-Qg; Wed, 18 Jan 2023 14:43:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73c7cd74-973e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:To:From:Subject:Message-ID:Sender:Reply-To:Cc:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=+DTCbeygRHq6JhLmzQ8O4xkqOoR23wvUBiw9udkCtZU=; b=thDeeiB04kV+8CcMnNK/tFz0Dy
	6rpng/M2ZR4zocH9mNjnbAzzsYQY7eBSggUY52lX9vEfQnOALOcfCWdZ1Am/eRqb7FbVG9UiqeeZh
	b3At/DgAaE6e5dL6fIn9aKFl0EMdKFASUZM5pJML3WrNX+8/sPhFJqYcpDyt8Hiv84cl5wMliBTJ8
	tCrAaGp2L7q0I95r8eHGfKqelQuMa5uO1+ZWLNjuny8HGr63SZUshNJBrLb3Ckbwi9UubMTufBFxd
	KNCFZYa1vI4ekUxwg52OEIBJRCKYGsVBFxuJAwYaWJQg9UMNa44190/70AsZIrfdf/mSSCzRBKIYj
	9fzuitqg==;
Message-ID: <bfe244a08572b1d57dcb5c538bcb70e9f45761f6.camel@infradead.org>
Subject: Re: [PATCH] xen: Allow platform PCI interrupt to be shared
From: David Woodhouse <dwmw2@infradead.org>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Juergen Gross
 <jgross@suse.com>,  Stefano Stabellini <sstabellini@kernel.org>, xen-devel
 <xen-devel@lists.xenproject.org>, "linux-kernel@vger.kernel.org"
 <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Paul
 Durrant <paul@xen.org>
Date: Wed, 18 Jan 2023 14:43:13 +0000
In-Reply-To: <d45d172b-0d9e-a45c-36da-3ed42c909081@citrix.com>
References: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
	 <e0b75988-bee6-e0c7-0dda-86e1e973ba74@citrix.com>
	 <82e6a68f3cb8c0e440f7dc848fa3444725b3f893.camel@infradead.org>
	 <973f9ebd-173e-6753-7799-a660994e38de@citrix.com>
	 <1ce5f5501214f1073f4eb86b2fdf3f54b2400057.camel@infradead.org>
	 <d45d172b-0d9e-a45c-36da-3ed42c909081@citrix.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-i05wny+PFngKkJ5JDkw+"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-i05wny+PFngKkJ5JDkw+
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2023-01-18 at 14:39 +0000, Andrew Cooper wrote:
> On 18/01/2023 2:26 pm, David Woodhouse wrote:
> > > There is no actual thing called PVHVM.  That diagram has caused far more
> > > damage than good...
> > >
> > > There's HVM (and by this, I mean the hypervisor's interpretation meaning
> > > VT-x or SVM), and a spectrum of things the guest kernel can do if it
> > > desires.
> > >
> > > I'm pretty sure Linux knows all of them.
> >
> > But don't we want to refrain from providing the legacy PC platform devices?
>
> That also exists and works fine (and is one slice on the spectrum).  KVM
> even borrowed our PVH boot API because we'd already done the hard work
> in Linux.

Ah, but it doesn't exist in qemu (on KVM) yet ;)

--=-i05wny+PFngKkJ5JDkw+
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Disposition: attachment; filename="smime.p7s"
Content-Transfer-Encoding: base64

vLf7bfousUXd1knCFOqJy2P+QD9kxHMrAIAwgb0GCSsGAQQBgjcQBDGBrzCBrDCBljELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEYMBYG
A1UEChMPU2VjdGlnbyBMaW1pdGVkMT4wPAYDVQQDEzVTZWN0aWdvIFJTQSBDbGllbnQgQXV0aGVu
dGljYXRpb24gYW5kIFNlY3VyZSBFbWFpbCBDQQIRAMa+GZZnQMQDHShREvo7qX4wgb8GCyqGSIb3
DQEJEAILMYGvoIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVy
MRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNl
Y3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEAxr4Z
lmdAxAMdKFES+jupfjANBgkqhkiG9w0BAQEFAASCAgCgX/+F7t8jazPBlX8O7UAXeLF3JsHkixQI
X/zmX/WW4PvISAoJgx00NzhPIe8NdfViEqcE5MZ5upEj2CNvirGK2ZWUWaSz+cTsxToa7Ixp2O/E
YY9nqXPc3SX6W3YAbyKULX9ArLzpZd3mHjPYsch+YKPrceZwx8XkUklNYaPzqElKkIFAbRDBb3re
4FTQsNN4dIn35q6ekKKQVKGViK03lJlHjCWupIn9W9V0Bstp6+aPA6O9GegaiDZY7HmufW5E30ti
UieELRN+kdvk1qMeFQg5on0ZWz+bVUrOR0q7hbCqAroEfJx+KxYjNkGIZCTa3J/0k7nGwJYCQT60
7NBZPy8pIYiSvtB2VL48f4n/vmDprHrdQ/RWo2zyfnWW2TI/neF95avU29sGvut2jK23+hWnkDUT
JTm2HhrZNoPAU4Ok6mBy7xdl1KI4TsHlhffDrejfzPMy044EBIg5l8c84w1q8otiUnLbiLFHgUxU
KSLp72wFTwzy5CD+4F2PsW5/CGslDdWEa14vLZyuZU9DNr7OJURmvFu62Z8SEG2QPVlvHiYDGWXR
NG60sgeORcueZOST+EuINCawvXm1rI677DtSd0cRgsrN5Bcd2Jvg0+Wsru3ojF8IY9cWDD34h9m3
gDyk8/GiTQyXv+SPjq4oE5ucVzivjzIhrxzkaDfqGwAAAAAAAA==


--=-i05wny+PFngKkJ5JDkw+--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 14:46:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 14:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480717.745250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9hd-0002dv-CK; Wed, 18 Jan 2023 14:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480717.745250; Wed, 18 Jan 2023 14:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pI9hd-0002do-9f; Wed, 18 Jan 2023 14:46:13 +0000
Received: by outflank-mailman (input) for mailman id 480717;
 Wed, 18 Jan 2023 14:46:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cVew=5P=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1pI9hc-0002di-D2
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 14:46:12 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d8bd9827-973e-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 15:46:10 +0100 (CET)
Received: by mail-ej1-x62b.google.com with SMTP id kt14so24892289ejc.3
 for <xen-devel@lists.xenproject.org>; Wed, 18 Jan 2023 06:46:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8bd9827-973e-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=kv6Z+netYSSwHRMmwgIwtDxfzjTdUYqAafZvFP9orlw=;
        b=RPwIjXElJ3iwfHDjOm3wx5CniIhTiTZuBrU0X9nRhx8k2uDgnnsQ0d+RjhZdt2JVcP
         NNrE3cJS3K5U3aKnEGTsGuH3dJtFECYminungJYeeL8nl9KbdJcuTf+FuYqgBkZ3sfnA
         tfwbu5bfcJ/XfpVRBAW8q0zT+yGYwvV1l2bOU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=kv6Z+netYSSwHRMmwgIwtDxfzjTdUYqAafZvFP9orlw=;
        b=1u9tH/B6QQ5XGISiwc4mudPXZQS/yvRVjIW8+T8MZVQfmSKBcjXKj8a4yP5bin4jgL
         MC15MxndDCJYZG+VEKZ9jh2jcgF1lK+Z5oYoVWfhR7MzGbpJZEyIQvelehIhkNP+LRs2
         PaI1Y8VIm5S/5WG/rEzTwFMCybmNXbyFQ+V99ixKBGTI86MFb09XFL7PK+OaoM328cFU
         rRq9HMB52u/Hl1Yq7FC8j3QoaJ5zzTqtfmCGPZiZ09pft7Bl4gGkUmkwJVsTQyGCG9Av
         byoHmQMORvvxFxxINQx/moEqF3RR9uJtlUTtrD7qMibcHpXi9WgLaORYQaSsEwWMBaEX
         rABw==
X-Gm-Message-State: AFqh2koXVUaA+gVaqoDwJU9RFgJ4Oxq1JjZA1zaPtLDFYEN558VbL9tS
	OFXcrJ6NIwTnTs5Mo5Gzc/EDMk0Cu+J/YaViqhT13g==
X-Google-Smtp-Source: AMrXdXvLv8IldrZX4Gqk7vJIHOPHggMcbgORV6JS5ow8bBTjUr8oR1oKUPP5kFO8ANDP9Wmxvgoa6/EkQbxImzucJYs=
X-Received: by 2002:a17:907:9959:b0:83e:6a85:33d9 with SMTP id
 kl25-20020a170907995900b0083e6a8533d9mr970630ejc.704.1674053167551; Wed, 18
 Jan 2023 06:46:07 -0800 (PST)
MIME-Version: 1.0
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-6-ayan.kumar.halder@amd.com> <926307d3-a354-be87-3885-90681dc5ae24@suse.com>
 <37719b71-8405-eefd-3bf5-95c7c8639e82@amd.com> <7e9db781-47a8-719a-d9b1-88de9c503732@suse.com>
 <CA+zSX=a4hfFKGJfTM5BDenRo=T3kvEbkGhRs=7oh4GgOxDk0+Q@mail.gmail.com> <60628f62-c9fd-c29b-5c08-f3f746201f01@suse.com>
In-Reply-To: <60628f62-c9fd-c29b-5c08-f3f746201f01@suse.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Wed, 18 Jan 2023 14:45:56 +0000
Message-ID: <CA+zSX=aAMEhqXYDF+LdyqDvnXxbUBrNZmjKcadQXpNn_vP_qfA@mail.gmail.com>
Subject: Re: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size
To: Jan Beulich <jbeulich@suse.com>
Cc: Ayan Kumar Halder <ayankuma@amd.com>, sstabellini@kernel.org, stefano.stabellini@amd.com, 
	Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com, 
	xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Rahul Singh <rahul.singh@arm.com>, 
	Ayan Kumar Halder <ayan.kumar.halder@amd.com>, julien@xen.org, Wei Xu <xuwei5@hisilicon.com>
Content-Type: multipart/alternative; boundary="0000000000006341c005f28ae05e"

--0000000000006341c005f28ae05e
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 18, 2023 at 1:58 PM Jan Beulich <jbeulich@suse.com> wrote:

> On 18.01.2023 14:34, George Dunlap wrote:
> > On Wed, Jan 18, 2023 at 1:15 PM Jan Beulich <jbeulich@suse.com> wrote:
> >
> >> On 18.01.2023 12:15, Ayan Kumar Halder wrote:
> >>> On 18/01/2023 08:40, Jan Beulich wrote:
> >>>> On 17.01.2023 18:43, Ayan Kumar Halder wrote:
> >>>>> @@ -1166,7 +1166,7 @@ static const struct ns16550_config __initconst
> >> uart_config[] =
> >>>>>   static int __init
> >>>>>   pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int
> >> idx)
> >>>>>   {
> >>>>> -    u64 orig_base = uart->io_base;
> >>>>> +    paddr_t orig_base = uart->io_base;
> >>>>>       unsigned int b, d, f, nextf, i;
> >>>>>
> >>>>>       /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on
> >> bus 0. */
> >>>>> @@ -1259,7 +1259,7 @@ pci_uart_config(struct ns16550 *uart, bool_t
> >> skip_amt, unsigned int idx)
> >>>>>                       else
> >>>>>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
> >>>>>
> >>>>> -                    uart->io_base = ((u64)bar_64 << 32) |
> >>>>> +                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
> >>>>>                                       (bar &
> >> PCI_BASE_ADDRESS_MEM_MASK);
> >>>>>                   }
> >>>> This looks wrong to me: You shouldn't blindly truncate to 32 bits. You
> >> need
> >>>> to refuse acting on 64-bit BARs with the upper address bits non-zero.
> >>>
> >>> Yes, I was treating this like others (where Xen does not detect for
> >>> truncation while getting the address/size from device-tree and
> >>> typecasting it to paddr_t).
> >>>
> >>> However in this case, as Xen is reading from PCI registers, so it needs
> >>> to check for truncation.
> >>>
> >>> I think the following change should do good.
> >>>
> >>> @@ -1180,6 +1180,7 @@ pci_uart_config(struct ns16550 *uart, bool_t
> >>> skip_amt, unsigned int idx)
> >>>                   unsigned int bar_idx = 0, port_idx = idx;
> >>>                   uint32_t bar, bar_64 = 0, len, len_64;
> >>>                   u64 size = 0;
> >>> +                uint64_t io_base = 0;
> >>>                   const struct ns16550_config_param *param =
> uart_param;
> >>>
> >>>                   nextf = (f || (pci_conf_read16(PCI_SBDF(0, b, d, f),
> >>> @@ -1260,8 +1261,11 @@ pci_uart_config(struct ns16550 *uart, bool_t
> >>> skip_amt, unsigned int idx)
> >>>                       else
> >>>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
> >>>
> >>> -                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
> >>> +                    io_base = ((u64)bar_64 << 32) |
> >>>                                       (bar &
> PCI_BASE_ADDRESS_MEM_MASK);
> >>> +
> >>> +                    uart->io_base = (paddr_t) io_base;
> >>> +                    ASSERT(uart->io_base == io_base); /* Detect
> >>> truncation */
> >>>                   }
> >>>                   /* IO based */
> >>>                   else if ( !param->mmio && (bar &
> >>> PCI_BASE_ADDRESS_SPACE_IO) )
> >>
> >> An assertion can only ever check assumption on behavior elsewhere in
> Xen.
> >> Anything external needs handling properly, including in non-debug
> builds.
> >>
> >
> > Except in this case, it's detecting the result of the compiler cast just
> > above it, isn't it?
>
> Not really, no - it checks whether there was truncation, but the
> absence (or presence) thereof is still a property of the underlying
> system, not one in Xen.
>

Ah, gotcha.  Ayan, it might be helpful to take a look at the 'Handling
unexpected conditions' section of our CODING_STYLE [1] for a discussion of
when (and when not) to use ASSERT().

 -George

[1] https://github.com/xen-project/xen/blob/master/CODING_STYLE


--0000000000006341c005f28ae05e--


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 15:17:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 15:17:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480723.745262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIABF-000678-Ll; Wed, 18 Jan 2023 15:16:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480723.745262; Wed, 18 Jan 2023 15:16:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIABF-000671-J4; Wed, 18 Jan 2023 15:16:49 +0000
Received: by outflank-mailman (input) for mailman id 480723;
 Wed, 18 Jan 2023 15:16:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIABE-00066r-CO; Wed, 18 Jan 2023 15:16:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIABE-00040Z-2v; Wed, 18 Jan 2023 15:16:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIABD-00064g-KV; Wed, 18 Jan 2023 15:16:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIABD-0000Mb-K4; Wed, 18 Jan 2023 15:16:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7yKv0y2ab+Sg05H+oOpJkFSQxem68oVzjtgrP3OqrBw=; b=6vkscTzA9L9hq+wPxcn+rMCWer
	HdwL+YaQBkakiw8nyfqIx6STjQVrTlrK4eO1nV09PAIG3a9vZlDMXx6KozJSxGxssbQOiS4XtSmTk
	JvCW5xi56sUCQ3xrCbE3qwYNo5mxkYXNprJrhaKsh9SyQsrQNqhgQ3dRMIvrvQjzmXzQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175960-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175960: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=998ebe5ca0ae5c449e83ede533bee872f97d63af
X-Osstest-Versions-That:
    ovmf=9d70d8f20d0feee1d232cbf86fc87147ce92c2cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 15:16:47 +0000

flight 175960 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175960/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 998ebe5ca0ae5c449e83ede533bee872f97d63af
baseline version:
 ovmf                 9d70d8f20d0feee1d232cbf86fc87147ce92c2cb

Last test of basis   175747  2023-01-12 16:10:44 Z    5 days
Failing since        175860  2023-01-15 07:11:07 Z    3 days   41 attempts
Testing same since   175955  2023-01-18 10:40:51 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Ard Biesheuvel <ardb+tianocore@kernel.org>
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiangang He <jiangang.he@amd.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Konstantin Aladyshev <aladyshev22@gmail.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>
  Oliver Steffen <osteffen@redhat.com>
  Prakash K <prakashk@ami.com>
  Prakash.K <prakashk@ami.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9d70d8f20d..998ebe5ca0  998ebe5ca0ae5c449e83ede533bee872f97d63af -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 16:04:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 16:04:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480732.745273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIAvY-0003QX-Ac; Wed, 18 Jan 2023 16:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480732.745273; Wed, 18 Jan 2023 16:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIAvY-0003QQ-7j; Wed, 18 Jan 2023 16:04:40 +0000
Received: by outflank-mailman (input) for mailman id 480732;
 Wed, 18 Jan 2023 16:04:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIAvX-0003QG-3s; Wed, 18 Jan 2023 16:04:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIAvW-0005bd-Kd; Wed, 18 Jan 2023 16:04:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIAvW-0007K8-BI; Wed, 18 Jan 2023 16:04:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIAvW-0007eU-Aq; Wed, 18 Jan 2023 16:04:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VcDITxnP+De4Xh9RV2g/0qobHZ2YK3GrP+ct7u/C3ek=; b=Ok1wSCQO3Jvn81NJ6Px2Hxbg0G
	FN9lNhG9idJzyCleFWvoj4hkv7TCfwRkgQUoX1cTJ+5gUlfKMMhIoOyMaL4c4vjFYIE/VoOD/qedS
	VlchCAr6QyT3YA6rwHuADjbzwFYPa1ntA1imseMeNJdtI2QWOlBOH6JE/9Cqe4mroESM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175953-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175953: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 16:04:38 +0000

flight 175953 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175953/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175746
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175746

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    6 days
Failing since        175748  2023-01-12 20:01:56 Z    5 days   26 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    4 days   24 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header needs to be part of the identity
    mapping. But it is below a page size (i.e. 4KB), so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that, _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space than the surrounding lines. This
    doesn't seem warranted, so delete the extra space.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 18:22:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 18:22:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480743.745295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pID4K-0000Xn-BQ; Wed, 18 Jan 2023 18:21:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480743.745295; Wed, 18 Jan 2023 18:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pID4K-0000Xg-8E; Wed, 18 Jan 2023 18:21:52 +0000
Received: by outflank-mailman (input) for mailman id 480743;
 Wed, 18 Jan 2023 18:21:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pID4I-0000XF-UH
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 18:21:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pID4A-0000UM-He; Wed, 18 Jan 2023 18:21:42 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.8.239]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pID4A-0006WH-8b; Wed, 18 Jan 2023 18:21:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=rb1G0dY56QTMw1NtktB7Jvgg7616U4TCKBbdqFearnY=; b=jLfP0vfBHKSE97w5W64H7erz2I
	0bXj15PqYG7astrioKl5rMGrZGon6Nb5kaMxgiNBRE9J3Vica4hbyalYHojTFI7MU/eIocu9N/QEJ
	XIk1KdqWjU2BQuCclzv2+8/Z7x+h6tXKzdxsCLOeg4+vsRuSiO3EfgLza8d3I8ubw6K8=;
Message-ID: <64359f65-4a99-f9ef-b35d-49b44670c7e6@xen.org>
Date: Wed, 18 Jan 2023 18:21:39 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 00/10] Rework PCI locking
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com>
 <9a98a83a-32e5-67fe-431d-7bc5f070674e@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <9a98a83a-32e5-67fe-431d-7bc5f070674e@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 06/09/2022 11:32, Jan Beulich wrote:
> On 31.08.2022 16:10, Volodymyr Babchuk wrote:
>> Hello,
>>
>> This is yet another take at a PCI locking rework. This approach
>> was suggested by Jan Beulich, who proposed to use a reference
>> counter to control the lifetime of pci_dev objects.
>>
>> When I started adding reference counting it quickly became clear
>> that this approach can provide more granular locking instead of
>> the huge pcidevs_lock() which is used right now. I studied how this
>> lock is used and what it protects, and found the following:
>>
>> 0. Comment in pci.h states the following:
>>
>>   153 /*
>>   154  * The pcidevs_lock protect alldevs_list, and the assignment for the
>>   155  * devices, it also sync the access to the msi capability that is not
>>   156  * interrupt handling related (the mask bit register).
>>   157  */
>>
>> But in reality it does much more. Here is what I found:
>>
>> 1. Lifetime of pci_dev struct
>>
>> 2. Access to pseg->alldevs_list
>>
>> 3. Access to domain->pdev_list
>>
>> 4. Access to iommu->ats_list
>>
>> 5. Access to MSI capability
>>
>> 6. Some obscure stuff in IOMMU drivers: there are places that
>> are guarded by pcidevs_lock() but it seems that nothing
>> PCI-related happens there.
> 
> Right - the lock being held was (ab)used in IOMMU code in a number of
> places. This likely needs to change in the course of this re-work;
> patch titles don't suggest this is currently part of the series.
> 
>> 7. Something that I probably overlooked
> 
> And this is the main risk here. The huge scope of the original lock
> means that many things are serialized now but won't be anymore once
> the lock is gone.
> 
> But yes - thanks for the work. To be honest I don't expect to be able
> to look at this series in detail until after the Xen Summit. And even
> then it may take a while ...
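
[Editorial aside: the reference-counting idea Volodymyr describes above
could be sketched roughly as follows. This is illustrative Python only,
not Xen's actual C implementation; the names PciDev, pdev_get and
pdev_put are hypothetical.]

```python
# Illustrative sketch only: pci_dev lifetime controlled by a per-object
# reference count rather than the global pcidevs_lock. All names here
# are hypothetical, not taken from the Xen source tree.
class PciDev:
    def __init__(self):
        self.refcount = 1      # the creator holds the initial reference
        self.freed = False

def pdev_get(pdev):
    """Take an extra reference, keeping the object alive."""
    pdev.refcount += 1
    return pdev

def pdev_put(pdev):
    """Drop a reference; the last put releases the object."""
    pdev.refcount -= 1
    if pdev.refcount == 0:
        pdev.freed = True      # stand-in for actually freeing the memory
```

The point of the scheme is that only lookups into shared lists need a
lock; once a caller holds a reference, the object cannot disappear
under it.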

I was wondering if this is still in your list to review?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 18:22:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 18:22:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480742.745283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pID45-0000Ev-2L; Wed, 18 Jan 2023 18:21:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480742.745283; Wed, 18 Jan 2023 18:21:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pID44-0000Eo-Vo; Wed, 18 Jan 2023 18:21:36 +0000
Received: by outflank-mailman (input) for mailman id 480742;
 Wed, 18 Jan 2023 18:21:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Muv5=5P=citrix.com=prvs=3754a6524=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pID43-0000Eh-Ss
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 18:21:36 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ede39c7d-975c-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 19:21:32 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ede39c7d-975c-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674066092;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=fZ9JGhXsLumSPUgpibDpHb8KbkAcIOwYinBrtT/AIbs=;
  b=cxT2EectMNMdeaO58ir6T74NyjSLqQPM7WQObhggQq+LKzFscmue7xqd
   LNUxya+6q5EsgT/LJZ2FqUS8WS4iPPpcoWSKtd9FY53DjC7bXmdejkon7
   vwX1zghWaVSbEI/bg12iMRTHCCDmUOcZTO/n49av8sg3xCru55v1iRz41
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93193274
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,226,1669093200"; 
   d="scan'208";a="93193274"
Date: Wed, 18 Jan 2023 18:21:25 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "George
 Dunlap" <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Message-ID: <Y8g4pSOHvrkqmbTU@perard.uk.xensource.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
 <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>

On Tue, Jan 17, 2023 at 05:21:32PM +0000, Andrew Cooper wrote:
> On 16/01/2023 6:10 pm, Anthony PERARD wrote:
> > +def get_typedefs(tokens):
> > +    level = 1
> > +    state = 0
> > +    typedefs = []
> 
> I'm pretty sure typedefs actually wants to be a dict rather than a list
> (will have better "id in typedefs" performance lower down), but that
> wants matching with code changes elsewhere, and probably wants doing
> separately.

I'm not sure that's going to make a difference, to have "id in ()" instead
of "id in []". I just found out that `typedefs` is always empty...

I don't know what get_typedefs() is supposed to do, or at least whether it
works as expected, because it always returns "" or an empty list (even
the shell version).

So, it would actually be a bit faster to not call get_typedefs(), but I
don't know if that's safe.
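
[Editorial aside: Andrew's point about "id in typedefs" performance can
be illustrated with a hypothetical micro-benchmark, not part of the
patch under review: membership tests on a set or dict are an average
O(1) hash lookup, while on a list they are a linear scan.]

```python
# Hypothetical micro-benchmark (not from the patch): compare membership
# tests on a list vs. a set of the same 10,000 identifiers.
import timeit

names = [f"t{i}" for i in range(10_000)]
as_list = list(names)
as_set = set(names)

# Look up the worst-case element for the list (its last entry).
t_list = timeit.timeit(lambda: "t9999" in as_list, number=200)
t_set = timeit.timeit(lambda: "t9999" in as_set, number=200)

# The set lookup is expected to be faster by orders of magnitude.
print(t_set < t_list)
```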

> > +    for token in tokens:
> > +        if token == 'typedef':
> > +            if level == 1:
> > +                state = 1
> > +        elif re.match(r'^COMPAT_HANDLE\((.*)\)$', token):
> > +            if not (level != 1 or state != 1):
> > +                state = 2
> > +        elif token in ['[', '{']:
> > +            level += 1
> > +        elif token in [']', '}']:
> > +            level -= 1
> > +        elif token == ';':
> > +            if level == 1:
> > +                state = 0
> > +        elif re.match(r'^[a-zA-Z_]', token):
> > +            if not (level != 1 or state != 2):
> > +                typedefs.append(token)
> > +    return typedefs

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 19:01:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 19:01:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480763.745310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIDfn-00059D-9f; Wed, 18 Jan 2023 19:00:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480763.745310; Wed, 18 Jan 2023 19:00:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIDfn-000596-5I; Wed, 18 Jan 2023 19:00:35 +0000
Received: by outflank-mailman (input) for mailman id 480763;
 Wed, 18 Jan 2023 19:00:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIDfl-00058u-AE; Wed, 18 Jan 2023 19:00:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIDfl-0001SH-74; Wed, 18 Jan 2023 19:00:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIDfk-0003gm-OK; Wed, 18 Jan 2023 19:00:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIDfk-000337-Ns; Wed, 18 Jan 2023 19:00:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WZ9iC+jZbfjnkEHydjTlTDrtEViXQL8used8Mxyooq0=; b=Fke1IGWVMtfYWJaU/lfKpxn9fT
	IkJzNHqwxj+AG14RIvIII4MXtla8SPAMNssbhb133KZzX0vpsEhMIIpYBwYGkuG6lDIKSOkN5wNx/
	c1f9vJArBSqpP9i7BfovSx9IvtwEOvAJkMYEeBzSDhJyBZOxR17dVha9nfw6DwA+R0Is=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175952-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175952: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7c9236d6d61f30583d5d860097d88dbf0fe487bf
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 19:00:32 +0000

flight 175952 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175952/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 175743
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386                    6 xen-build                fail REGR. vs. 175743
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175743
 build-i386-xsm                6 xen-build                fail REGR. vs. 175743
 build-arm64                   6 xen-build                fail REGR. vs. 175743

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175735
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175735
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175743
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7c9236d6d61f30583d5d860097d88dbf0fe487bf
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    6 days
Failing since        175750  2023-01-13 06:38:52 Z    5 days   12 attempts
Testing same since   175952  2023-01-18 02:32:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Bernhard Beschow <shentey@gmail.com>
  Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Laurent Vivier <laurent@vivier.eu>
  Marcel Holtmann <marcel@holtmann.org>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2498 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 19:37:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 19:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480773.745320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIEEv-0000Bf-48; Wed, 18 Jan 2023 19:36:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480773.745320; Wed, 18 Jan 2023 19:36:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIEEv-0000BY-1b; Wed, 18 Jan 2023 19:36:53 +0000
Received: by outflank-mailman (input) for mailman id 480773;
 Wed, 18 Jan 2023 19:36:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMWs=5P=citrix.com=prvs=37540d4c2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIEEu-0000BQ-G8
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 19:36:52 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72374afc-9767-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 20:36:48 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72374afc-9767-11ed-b8d1-410ff93cb8f0
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93202811
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH] x86/vmx: Partially revert "x86/vmx: implement Notify VM Exit"
Date: Wed, 18 Jan 2023 19:36:37 +0000
Message-ID: <20230118193637.8907-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The original patch tried to do two things: implement VMNotify, and
re-optimise VT-x to not intercept #DB/#AC by default.

The second part is buggy in multiple ways.  Both GDBSX and Introspection need
to conditionally intercept #DB, which was not accounted for.  Also, #DB
interception has nothing at all to do with cpu_has_monitor_trap_flag.

Revert the second half, leaving #DB/#AC intercepted unilaterally, but with
VMNotify active by default when available.

Fixes: 573279cde1c4 ("x86/vmx: implement Notify VM Exit")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Kevin Tian <kevin.tian@intel.com>

 #DB/#AC are not fastpaths in the slightest.  This perf optimisation can be
reworked at some point later with rather more care and testing.

It's *really* not as urgent as getting VMNotify active by default.
---
 xen/arch/x86/hvm/vmx/vmcs.c | 11 ++---------
 xen/arch/x86/hvm/vmx/vmx.c  | 16 ++--------------
 2 files changed, 4 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 8992f4e0aeb2..7d8bfeb53982 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1296,17 +1296,10 @@ static int construct_vmcs(struct vcpu *v)
     v->arch.hvm.vmx.exception_bitmap = HVM_TRAP_MASK
               | (paging_mode_hap(d) ? 0 : (1U << TRAP_page_fault))
               | (v->arch.fully_eager_fpu ? 0 : (1U << TRAP_no_device));
+
     if ( cpu_has_vmx_notify_vm_exiting )
-    {
         __vmwrite(NOTIFY_WINDOW, vm_notify_window);
-        /*
-         * Disable #AC and #DB interception: by using VM Notify Xen is
-         * guaranteed to get a VM exit even if the guest manages to lock the
-         * CPU.
-         */
-        v->arch.hvm.vmx.exception_bitmap &= ~((1U << TRAP_debug) |
-                                              (1U << TRAP_alignment_check));
-    }
+
     vmx_update_exception_bitmap(v);
 
     v->arch.hvm.guest_cr[0] = X86_CR0_PE | X86_CR0_ET;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 15a07933ee5d..2e2ab0ac0e26 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1564,19 +1564,10 @@ static void cf_check vmx_update_host_cr3(struct vcpu *v)
 
 void vmx_update_debug_state(struct vcpu *v)
 {
-    unsigned int mask = 1u << TRAP_int3;
-
-    if ( !cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
-        /*
-         * Only allow toggling TRAP_debug if notify VM exit is enabled, as
-         * unconditionally setting TRAP_debug is part of the XSA-156 fix.
-         */
-        mask |= 1u << TRAP_debug;
-
     if ( v->arch.hvm.debug_state_latch )
-        v->arch.hvm.vmx.exception_bitmap |= mask;
+        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
     else
-        v->arch.hvm.vmx.exception_bitmap &= ~mask;
+        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
 
     vmx_vmcs_enter(v);
     vmx_update_exception_bitmap(v);
@@ -4192,9 +4183,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         switch ( vector )
         {
         case TRAP_debug:
-            if ( cpu_has_monitor_trap_flag && cpu_has_vmx_notify_vm_exiting )
-                goto exit_and_crash;
-
             /*
              * Updates DR6 where debugger can peek (See 3B 23.2.1,
              * Table 23-1, "Exit Qualification for Debug Exceptions").
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 18 19:38:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 19:38:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480779.745332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIEGf-0000mC-G7; Wed, 18 Jan 2023 19:38:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480779.745332; Wed, 18 Jan 2023 19:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIEGf-0000m5-CW; Wed, 18 Jan 2023 19:38:41 +0000
Received: by outflank-mailman (input) for mailman id 480779;
 Wed, 18 Jan 2023 19:38:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMWs=5P=citrix.com=prvs=37540d4c2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIEGe-0000ls-OB
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 19:38:40 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b374b1ca-9767-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 20:38:38 +0100 (CET)
Received: from mail-dm6nam04lp2046.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 14:38:35 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6487.namprd03.prod.outlook.com (2603:10b6:a03:38d::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Wed, 18 Jan
 2023 19:38:33 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 19:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b374b1ca-9767-11ed-b8d1-410ff93cb8f0
X-IronPort-RemoteIP: 104.47.73.46
X-IronPort-MID: 93202948
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Anthony Perard
	<anthony.perard@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Topic: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Index: AQHZKdXmSfLi4OljAEm5WNlNB2w+ZK6i3JyAgAD3LQCAAMFwAA==
Date: Wed, 18 Jan 2023 19:38:33 +0000
Message-ID: <c00810b3-9a3c-273b-4f45-4d3f48316d09@citrix.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
 <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>
 <b6d6812a-fa3f-49d1-3e2f-4971f411fb16@suse.com>
In-Reply-To: <b6d6812a-fa3f-49d1-3e2f-4971f411fb16@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6487:EE_
x-ms-office365-filtering-correlation-id: 6c5695d9-8ca7-454b-8c68-08daf98b95a6
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <680C7D5C39DFD545BE6F3C6A5422C2FC@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6c5695d9-8ca7-454b-8c68-08daf98b95a6
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 19:38:33.2524
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6487

On 18/01/2023 8:06 am, Jan Beulich wrote:
> On 17.01.2023 18:21, Andrew Cooper wrote:
>> On 16/01/2023 6:10 pm, Anthony PERARD wrote:
>>> +        elif re.match(r'^[a-zA-Z_]', token):
>> [...]
>>
>> All of this said, where is 0-9 in the token regex?  Have we just got
>> extremely lucky with having no embedded digits in identifiers thus far?
> That's checking for just the first character, which can't be a digit?

So it is...

But nothing good can possibly come of having a token here which matches
on the first char but mismatches later.

>
>> P.S. I probably don't want to know why we have to special case evtchn
>> port, argo port and domain handle.  I think it says more about the this
>> bodge of a parser than anything else...
> Iirc something broke without it, but it's been too long and spending a
> reasonable amount of time trying to re-construct I couldn't come up
> with anything. I didn't want to go as far as put time into actually
> trying out what (if anything) breaks with those removed. What I'm
> puzzled about is that argo and evtchn port types are handled in
> different places.
>
> For the domain handle iirc the exception was attributed to it being
> the only typedef of an array which is embedded in other structures.

I refer back to "bodge of a parser".

~Andrew
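
The anchored-match point above can be illustrated with a short Python sketch (illustrative only, not part of the patch under review): `re.match()` succeeds as soon as the *start* of the string matches, so a one-character class accepts tokens that are not valid identifiers, whereas `re.fullmatch()` with a complete identifier pattern (including 0-9 after the first character) rejects them.

```python
import re

token = "foo-bar"  # hypothetical token; matches on the first char but mismatches later

# The patch's check: only the first character is constrained.
first_char_ok = re.match(r'^[a-zA-Z_]', token) is not None

# A whole-token check: first char [a-zA-Z_], then [a-zA-Z0-9_]*.
whole_token_ok = re.fullmatch(r'[a-zA-Z_][a-zA-Z0-9_]*', token) is not None

print(first_char_ok)   # True  - 'f' satisfies the anchored class
print(whole_token_ok)  # False - '-' is not a valid identifier character
```

Note that digits are fine after the first character: `re.fullmatch(r'[a-zA-Z_][a-zA-Z0-9_]*', "foo_bar9")` succeeds, which is why embedded digits in identifiers never tripped the first-character check.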


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 19:49:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 19:49:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480786.745343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIERA-0002Ip-GL; Wed, 18 Jan 2023 19:49:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480786.745343; Wed, 18 Jan 2023 19:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIERA-0002Ii-D7; Wed, 18 Jan 2023 19:49:32 +0000
Received: by outflank-mailman (input) for mailman id 480786;
 Wed, 18 Jan 2023 19:49:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMWs=5P=citrix.com=prvs=37540d4c2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIER8-0002Ic-KI
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 19:49:30 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 36843a51-9769-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 20:49:27 +0100 (CET)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Jan 2023 14:49:15 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS0PR03MB7228.namprd03.prod.outlook.com (2603:10b6:8:126::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Wed, 18 Jan
 2023 19:49:13 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Wed, 18 Jan 2023
 19:49:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36843a51-9769-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674071367;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=E5OmisCj2EvPhsWk+in9WxtUdVf1ElyNnmycT5M8x70=;
  b=IC10d0SBfR/kOZqgV2rlfKrSJGRYI/92Jgld4TSDACRa7rK7uWiHoWmx
   RxJeWcNb9Tarl8UKZalVf3hvoszeLPb0mY6hgm06y3Z0pYwkpnlH3aYba
   Jc/GLp8ZpTOiJ0TGivnh0c2Ltb8lKtcfFZDw3GdbbkcOFMF7N5kX9Fvk3
   M=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 93268916
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,226,1669093200"; 
   d="scan'208";a="93268916"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gwDqwWTZNy71NTZtiyg/0ml0oHzm1wR1DAGwwsA0REwz9B+LvrwK5p9IkRB3HiXHQ604NS8nCPk1pcocvK0q87wnzzuJ8pnhueIg5gX7GXjAbOm3YSdXpgFHfK+a3Sp2GwrzWrKGymNc8wh2XRp99mVe0qZxEbAxZCaOqFpR8ab4He/0Ln9R4X2RIrdXvl/hk20NrB53J0bbA2ox91cpBNAs4JJJUdf7tlENAL/5yQ+hDQiPPa/5hf4eUlYfxMc82/3MErFFwwU3BzLcBTIZ+oCBDWSOVVYo5Ca9FL01VT4Y6keKa6v0cr4SCHvFUhDxV244sJolfN524g6pBANizg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=E5OmisCj2EvPhsWk+in9WxtUdVf1ElyNnmycT5M8x70=;
 b=A38OFVhucDwWiRjO6HfynxeNKb9B61QRq4J8fRuUvzW+G+Hump/lrJPi/QcrsflOFKdJwVtVCjuIRulbcgNKl1NdFOZF3j7/QnQO3IIKSNs7MfU3DrwOe1vn4yma4Jjw02OlzvsfijMJN7j0QMY9DNEoWhhLB1Psge6FBxI7JqROrnDT9z/8fjfMt+Gz9Z+ZoSA68qGQPc0R3pvZUNPfhe7pb/iElzWMeHDOlN1ZeL4ed2tV8dW2jrNTF2iw50QYJ3eiRGK1sD5/VvSDRphFVZ4c7vl+QArL/b+iFsOzgpulMTmYxuhSN4WxR5fPR9VJYA6wDZry/xKVl89HhLawtQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E5OmisCj2EvPhsWk+in9WxtUdVf1ElyNnmycT5M8x70=;
 b=Ev3pi8DI8MXGHZYnuELPxkwnfKjBi29u1jcJtKsaagIhtaX9QjarQaZhTTyu+vH/KiH63dt0xctMNQ3/H60AwGPBKoiUzA583wATn5+K19RF+8//Ns6GDfRnMMosaQBEjMqBCA5zCoxkhSnSaHr8KvW0yfcKh3uIvbWho6QN6jo=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, George
 Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Topic: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Index: AQHZKdXmSfLi4OljAEm5WNlNB2w+ZK6i3JyAgAGjEYCAABiGAA==
Date: Wed, 18 Jan 2023 19:49:12 +0000
Message-ID: <fc1e27f2-a36d-ed5e-6bc6-959c5940ed7e@citrix.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
 <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>
 <Y8g4pSOHvrkqmbTU@perard.uk.xensource.com>
In-Reply-To: <Y8g4pSOHvrkqmbTU@perard.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DS0PR03MB7228:EE_
x-ms-office365-filtering-correlation-id: 637c0b77-121a-4b1f-0a1b-08daf98d12f4
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <AEECEA18AB2E4344B9BEF7A32B6C64B8@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 637c0b77-121a-4b1f-0a1b-08daf98d12f4
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 19:49:12.9409
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR03MB7228

On 18/01/2023 6:21 pm, Anthony PERARD wrote:
> On Tue, Jan 17, 2023 at 05:21:32PM +0000, Andrew Cooper wrote:
>> On 16/01/2023 6:10 pm, Anthony PERARD wrote:
>>> +def get_typedefs(tokens):
>>> +    level = 1
>>> +    state = 0
>>> +    typedefs = []
>> I'm pretty sure typedefs actually wants to be a dict rather than a list
>> (will have better "id in typedefs" performance lower down), but that
>> wants matching with code changes elsewhere, and probably wants doing
>> separately.
> I'm not sure that going to make a difference to have "id in ()" instead
> of "id in []".

It will in the middle of a tight loop.  Less pointer chasing in memory.
But it is very marginal.

>  I just found out that `typedefs` is always empty...
>
> I don't know what get_typedefs() is supposed to do, or at least if it
> works as expected, because it always returns "" or an empty list. (even
> the shell version)
>
> So, it would actually be a bit faster to not call get_typedefs(), but I
> don't know if that's safe.

If it is dead logic even at the shell level, drop it.  Perhaps a prereq
patch, because removing the complexity first will make this patch
simpler to follow.

The conversion is atypical python, and perf will improve (which is the
point of this patch anyway).

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 20:44:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 20:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480793.745354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIFHS-0008VO-JJ; Wed, 18 Jan 2023 20:43:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480793.745354; Wed, 18 Jan 2023 20:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIFHS-0008VH-Gh; Wed, 18 Jan 2023 20:43:34 +0000
Received: by outflank-mailman (input) for mailman id 480793;
 Wed, 18 Jan 2023 20:43:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIFHR-0008Um-9x; Wed, 18 Jan 2023 20:43:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIFHR-0003qq-7g; Wed, 18 Jan 2023 20:43:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIFHQ-00076F-Mq; Wed, 18 Jan 2023 20:43:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIFHQ-000765-MP; Wed, 18 Jan 2023 20:43:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8Z5iuiInq6o12B5vBAuUeKkrF/EZwMcE0aN2/5DIJwE=; b=UoJIGBfhP0zYRXNOuErRRO46qr
	jZLP8liwYQslWx5YHSyMmqHD0Kjc4qMmk/s1Mh7eFBU4B3T3XL+YQmfAs/VKnoWenFVzAAkDnEfyz
	LEw8U2C/KBkXT7s5xOHMIt8bKFmQfAEhYW+MNdapQa5X/6EAXRJZNRuEJ3LHq+UjSaps=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175961-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175961: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 20:43:32 +0000

flight 175961 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175961/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256

Last test of basis   175746  2023-01-12 16:03:41 Z    6 days
Failing since        175748  2023-01-12 20:01:56 Z    6 days   27 attempts
Testing same since   175833  2023-01-14 07:00:25 Z    4 days   25 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6bec713f87..f588e7b7cb  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 22:13:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 22:13:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480801.745365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIGgA-0000Yx-3t; Wed, 18 Jan 2023 22:13:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480801.745365; Wed, 18 Jan 2023 22:13:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIGgA-0000Yq-1A; Wed, 18 Jan 2023 22:13:10 +0000
Received: by outflank-mailman (input) for mailman id 480801;
 Wed, 18 Jan 2023 22:13:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIGg8-0000Yg-MQ; Wed, 18 Jan 2023 22:13:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIGg8-0005uz-JP; Wed, 18 Jan 2023 22:13:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIGg8-0002nd-2H; Wed, 18 Jan 2023 22:13:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIGg8-0004zx-1s; Wed, 18 Jan 2023 22:13:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g9fyRa7lpxIMafq8m6nlFPAZ+Yps9XeoQYPf2r87Vy4=; b=MpGm3KqozmdyWWJ3e5D8Ie1izp
	vRKhYw6AArI/oag1ArO+VXUZymqpb83USSihfCsFaZgt4YuhgcqRBpS6NoWy/6lBY/Rb8GoYbNic7
	7yNrsa0EaOfhTmYumoTSmAkedBAZUV143ROFgAzVUZ18whPx+pDvHCUA8AsccUuqqdKY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175954-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 175954: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-upstream-unstable:build-amd64:<job status>:broken:regression
    qemu-upstream-unstable:build-amd64-xsm:<job status>:broken:regression
    qemu-upstream-unstable:build-i386-xsm:xen-build:fail:regression
    qemu-upstream-unstable:build-i386:xen-build:fail:regression
    qemu-upstream-unstable:build-amd64:host-build-prep:fail:regression
    qemu-upstream-unstable:build-amd64-xsm:host-build-prep:fail:regression
    qemu-upstream-unstable:build-arm64-xsm:xen-build:fail:regression
    qemu-upstream-unstable:build-arm64:xen-build:fail:regression
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=625eb5e96dc96aa7fddef59a08edae215527f19c
X-Osstest-Versions-That:
    qemuu=1cf02b05b27c48775a25699e61b93b814b9ae042
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Jan 2023 22:13:08 +0000

flight 175954 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175954/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-amd64-xsm                 <job status>                 broken
 build-i386-xsm                6 xen-build                fail REGR. vs. 175283
 build-i386                    6 xen-build                fail REGR. vs. 175283
 build-amd64                   5 host-build-prep          fail REGR. vs. 175283
 build-amd64-xsm               5 host-build-prep          fail REGR. vs. 175283
 build-arm64-xsm               6 xen-build                fail REGR. vs. 175283
 build-arm64                   6 xen-build                fail REGR. vs. 175283
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 175283

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175283
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175283
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175283
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                625eb5e96dc96aa7fddef59a08edae215527f19c
baseline version:
 qemuu                1cf02b05b27c48775a25699e61b93b814b9ae042

Last test of basis   175283  2022-12-15 15:42:37 Z   34 days
Testing same since   175938  2023-01-17 15:37:13 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-amd64-xsm                                              broken  
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  broken  
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-amd64-xsm broken

Not pushing.

------------------------------------------------------------
commit 625eb5e96dc96aa7fddef59a08edae215527f19c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Fri Jan 6 15:21:10 2023 +0100

    configure: Expand test which disable -Wmissing-braces
    
    With "clang 6.0.0-1ubuntu2" on Ubuntu Bionic, the existing test builds
    fine, but clang still suggests braces around the zero initializer in a
    few places where there is a subobject. Expand the test to include a sub
    struct, which doesn't build on clang 6.0.0-1ubuntu2 and gives:
        config-temp/qemu-conf.c:7:8: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        } x = {0};
               ^
               {}
    
    These are the errors reported by clang on QEMU's code (v7.2.0):
    hw/pci-bridge/cxl_downstream.c:101:51: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        dvsec = (uint8_t *)&(CXLDVSECPortExtensions){ 0 };
    
    hw/pci-bridge/cxl_root_port.c:62:51: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        dvsec = (uint8_t *)&(CXLDVSECPortExtensions){ 0 };
    
    tests/qtest/virtio-net-test.c:322:34: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        QOSGraphTestOptions opts = { 0 };
    
    Reported-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Message-Id: <20230106142110.672-1-anthony.perard@citrix.com>
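
    [Editor's note: a minimal sketch of the kind of probe the commit
    describes — a struct with a sub-struct member zero-initialized with
    {0}. The struct names here are illustrative, not taken from QEMU's
    configure script. On clang 6, compiling with -Wmissing-braces -Werror
    rejects the {0} form because the subobject lacks its own braces, while
    the fully-braced form is accepted; both zero-initialize everything.]

    ```c
    #include <assert.h>

    /* Hypothetical stand-in for the configure probe's struct shape:
     * an outer struct containing a sub-struct member. */
    struct sub { int a; int b; };
    struct outer { struct sub s; int c; };

    /* {0} zero-initializes all members, but old clang under
     * -Wmissing-braces suggests braces around the 's' subobject. */
    static int sum_after_zero_init(void)
    {
        struct outer x = {0};
        return x.s.a + x.s.b + x.c;
    }

    /* Fully-braced form that does not trigger the warning. */
    static int sum_after_braced_init(void)
    {
        struct outer y = { { 0, 0 }, 0 };
        return y.s.a + y.s.b + y.c;
    }

    int main(void)
    {
        /* Both initialization styles yield all-zero members. */
        assert(sum_after_zero_init() == 0);
        assert(sum_after_braced_init() == 0);
        return 0;
    }
    ```

    [Both forms are semantically identical in C; the warning is purely
    stylistic, which is why the configure test disables -Wmissing-braces
    when the probe fails to build.]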


From xen-devel-bounces@lists.xenproject.org Wed Jan 18 22:39:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Jan 2023 22:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480810.745376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIH5g-00036f-Aq; Wed, 18 Jan 2023 22:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480810.745376; Wed, 18 Jan 2023 22:39:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIH5g-00036Y-81; Wed, 18 Jan 2023 22:39:32 +0000
Received: by outflank-mailman (input) for mailman id 480810;
 Wed, 18 Jan 2023 22:39:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tkj8=5P=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pIH5d-00036S-TF
 for xen-devel@lists.xenproject.org; Wed, 18 Jan 2023 22:39:30 +0000
Received: from sonic301-22.consmr.mail.gq1.yahoo.com
 (sonic301-22.consmr.mail.gq1.yahoo.com [98.137.64.148])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f59a6f94-9780-11ed-b8d1-410ff93cb8f0;
 Wed, 18 Jan 2023 23:39:26 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic301.consmr.mail.gq1.yahoo.com with HTTP; Wed, 18 Jan 2023 22:39:24 +0000
Received: by hermes--production-bf1-6bb65c4965-5wbr2 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 6f6869fc8bbe8460354ce60f8734b6e9; 
 Wed, 18 Jan 2023 22:39:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f59a6f94-9780-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1674081564; bh=Ac1YPZBizrGTTybEqLxq26/alRlIfe24j06/CF5/aKU=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=cpmAVgbiuEThrA4MQnaozx2Tni65Rjca33n/8nSNH+bOt6b+xJLzCnZCyXRlmk6vMSOwE7zaLB1eoJoKriNfrxeJG//SO0eayGDCP2d0GEmSMvTaJnjwlgxlcG3qsi0Fyb86LTQi0Ol6z4mojA2V2h+0vvmBZUzPXWFLnW6mPgjhdiFcXZMV6ObgU6Ck9ahPjbhZFKrANMMPxTj/6y2rODn3CsumfOVrqJHXtT5fi6Cis24r5evRyTWDfa8felVA0TdTA2PeI7pxFurdRsWg09EuP1KQIbgtQ93/eAGKUW3MxTsfyBOgW9Oks4avXytV5IrrJLMjJ/EVs6Lx1aQxcQ==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674081564; bh=xGQeo6E6/tRZl/tXTzl0WN2OawlczZofZAVMa9Tr1zy=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=kjpfhom98ymX1ygCB1kQvhwl/FDoGQEgKC+nm2UiqfYX3Fuk1UK6Wrj94b6Gwuoy44Z76LH+VYNyMprDjdB4eFNtw1sOwtQgbDW/nt7cNp7nb6UyolbHQX/liryWm3dcrSe+m2SB26hcUWweoFrQrOFvel7gM22OhCKl00XO8SsQQVtgTRdg87bhwGl9DKgUb8z+0MCvikUibdVoXvhb7/4rzCg/rLr1hk0QvEvO63nZnHIE9WYl1Zk5xeTv0WgD96mfCz27gjWgSEnW21HI9fRLVz5+HitMoXDQAqAmPqjNqbbYtqK43Z23EPuy6CakD7p8dtiplmDzJ/LVX5614Q==
X-YMail-OSG: 1EvRyH4VM1nHgY6Bvhh1rko7YB1Wv6vqpwDb1nRblHxL8yxhN0mOkO3IsrtxxfC
 Chn0GOP2H2L1Uw1bxAWQR3JSscHPKYX9.iMX8dgsiX8C1gW8.n3AU5ZllEgT0Oj8r63kiOKviQuF
 tSfMRGibKY8HR8crImu8eKBZqc9KKpmXXA0xJwuswmubN1GLldYD2f2LcQfLja7sQ0OchXhZxOEB
 YbnqGk.oBx7nOEKA1sdVXmqXUokI60tkj0psQJo6IgsOdQLJvFtGCuKmvuqcbmhDfvf13D2mhUCN
 8X382vM.nkoByxHmv9g.xSYeIpf3_GjTxWcDoot1HJoV6_RsivBiCeOw37xi726EDvvIsOx9iuUa
 u6vWuqkLY1t2fvDuXzUiuhvwXlqAzL_T45C23gIdSdiU_04gN8PfiJ1eS2hJ0RzeCGl7AKRiY8JS
 XO8OtygiPDPSLjH216J0CLdkT1jzDetBYxqWT37ELserA_t1p0lQ6vis4CKGhxAI6Vge.cWFnQQ0
 n.bAh3j.RysHxWvt3x1RmjX71SRKM3p.Y8MdtjP8sQ4ZUEFIcOZFqD_Tg6.CNUnZfmX9AoMfQUj.
 MvFFtUqXsLx3qjQCzcVNFSc2bmYvQRWYSNxye6NoAEdzfWoEbUFnsT8tzypNm9URim8ZvPfU7HSm
 k_erO.Mq2noxI6IcO0Bvg3rCPR5VXi9.3QKvOSJ_fcn4Di7iU5jjQ5RcRDBqkIEUuDibPCndW1uN
 PBjW8DufYsOix2doW5Do_VfLDkDoXkhBR4EopGjTY7czCR1rN0Td5BpCMINcQs1lOW5dSCuKzQ94
 O5Vb0XteipQvYZ4H4aUJgYqOgFPV4gKvzX34oSBZzBHdU0RPZw98h_fO5An.AnqE3MBtAt5n6GhJ
 5g1xvwTgoj1lGX9r8PKfN9wq6BNJq8JBzTx_9ZQPB0AT0pcwcglJ0o6akBlrbxzUlg9_emBT7Xup
 Y7jJDh2XawDx1Bd1_jmc1LX.D0n9A_VEs2LSnPuUpz_uDolzt20RGs6xsrwrYXEnMNd6wipjwn48
 xigUCeuQibaNTkvfalysjkCkAtzwAQOE5OxmvANfEwDM5we6LuAL5u1XyLoULtE2f.kApiy5Pv_U
 NcvMxbONCck1BUlMz7mgdKtPLoPiDlvrE9VSBZeK.FdbG.WXbsANSlywyblE6zgFLLhyXlkU7aZ2
 JteptTHwsBUvbL3lyxXADul78UqrDZwSdJsP30UzleZqcUOoGF.Oiv0C1MO2HtJre3lVggLcs7Ag
 .zRDVP3qbK4W5SuhaGDwuPd7kuMOzzVcPhwq3KV2OS0v4YVrrdMmIv38oVS6hT9SnKbR0KpSxxSo
 I6EQdulFP.FAEUoSw_o5roF9RwEVScB2yu3xfHDsS0zPa7yyoa_gIvAoOCGUt3mSWBlEqy.OA2MR
 D4Lw6vtnNTs0i9PHOnCgBAjZG3fTMbqQhy0KgtUX8EocPGLb0vwNQJQIUmALFjIhqJp4iBwf5q13
 XXwLZ6mWCPgwK3bG8eiuFyj2T20_zR7veEuIZVNRVSVNAf.JwU12lAatWnhKuonAYEqwBb8Qp_Le
 q00gEbmQ46414xGc9oGJ0krrETieyCESEaURpwsz.OLVBwbzy1fhtXdcjDi9QkwNMF6jvOJAvwJg
 nKWBLvTWYhh.xDcXfngoWlOBkMeQxw.bo68hitfIVQhzObaNDF80I5I8_MaYcix6CSkAlL2cCCPB
 tieMQBA6BstXeB83UoQ5aNFmwQrAxRAQPIBintYsYcat.DhJBDFgc4SIMZ2qkiv8DNJtbt2hE3XN
 yR6fFxvfakVg83OMm6EOibNz6iDwiLAWt050p8vPY40aEe0u6jpBUYmQWbe6dqdqbNBBJ.9c0GOm
 UITxmHOeWoCq0aocGRtIYK4xpVyGqBXWIsS4xwpRCztw96gEQ3QdQ.36S7N7yHrDL870nd7XhNJi
 5vBrrSdA0CbaJ3wejxJ2YvL7c_xAAzQ7jxrjUE7PvDLb3PohvcJPd8DWx5j9oUuTNPy0PJckxA0A
 b5mc_yUHKo2sQioJk97IrQ2cs.vEhGxNxAOENJp0uh.ncd12k0knLQbMRAYyDkIKnJNSVnn3BzTL
 lP8buwwHBmneogPbsO8MVon3vLSRqOvLHjTYB.o05dx0bkJ4qWX1_IHpaZPNdVCcPoWdy51UzwUB
 FiGt5tcA4rOQStlwcjmeWAN731MIyGe2mApuxpvy_TqJqWWmPffxSQy4KaPxqIdFAzi_UbusXgob
 BxYoz6_wskPNIgYpGNj2nOn0B1Kw-
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <13fd40d2-4887-fb25-45dd-5f8ee3cd08e6@aol.com>
Date: Wed, 18 Jan 2023 17:39:17 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: Alex Williamson <alex.williamson@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>,
 Chuck Zmudzinski <brchuckz@netscape.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Bernhard Beschow <shentey@gmail.com>,
 qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, Anthony Perard <anthony.perard@citrix.com>,
 Thomas Huth <thuth@redhat.com>, Eric Auger <eric.auger@redhat.com>,
 Peter Xu <peterx@redhat.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
 <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
 <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
 <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
 <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
 <20230117120416.0aa041d6@imammedo.users.ipa.redhat.com>
 <b6f7d6dd-3b9b-2cc7-32ab-8521802e1fed@aol.com>
 <20230117212700.35b3af18.alex.williamson@redhat.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230117212700.35b3af18.alex.williamson@redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 9005

On 1/17/2023 11:27 PM, Alex Williamson wrote:
> On Tue, 17 Jan 2023 19:15:57 -0500
> Chuck Zmudzinski <brchuckz@aol.com> wrote:
>
> > On 1/17/2023 6:04 AM, Igor Mammedov wrote:
> > > On Mon, 16 Jan 2023 13:00:53 -0500
> > > Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> > >  
> > > > On 1/16/23 10:33, Igor Mammedov wrote:  
> > > > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > > >     
> > > > >> On 1/13/23 4:33 AM, Igor Mammedov wrote:    
> > > > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > > > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > > >> >       
> > > > >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:      
> > > > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:        
> > > > >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> > > > >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> > > > >> >> >> always call it there unconditionally -- basically turning three lines
> > > > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> > > > >> >> >> Michael further suggests to rename it to something more general. All
> > > > >> >> >> in all no big changes required.        
> > > > >> >> > 
> > > > >> >> > yes, exactly.
> > > > >> >> >         
> > > > >> >> 
> > > > >> >> OK, got it. I can do that along with the other suggestions.      
> > > > >> > 
> > > > >> > have you considered instead of reservation, putting a slot check in device model
> > > > >> > and if it's intel igd being passed through, fail at realize time  if it can't take
> > > > >> > required slot (with a error directing user to fix command line)?      
> > > > >> 
> > > > >> Yes, but the core pci code already fails at realize time
> > > > >> with a useful error message if the user tries to use slot 2 for the
> > > > >> igd, because the xen platform device occupies slot 2. The user
> > > > >> can fix this without patching qemu, but fixing it on the
> > > > >> command line is not the best way to solve the problem: the user
> > > > >> would need to hotplug the xen platform device via a
> > > > >> command line option instead of having it added by the
> > > > >> pc_xen_hvm_init functions almost immediately after the pci
> > > > >> bus is created, and that delay in adding the xen platform
> > > > >> device degrades startup performance of the guest.
> > > > >>     
> > > > >> > That could be less complicated than dealing with slot reservations at the cost of
> > > > >> > being less convenient.      
> > > > >> 
> > > > >> And also a cost of reduced startup performance    
> > > > > 
> > > > > Could you clarify how it affects performance (and how much).
> > > > > (as I see, setup done at board_init time is roughly the same
> > > > > as with '-device foo' CLI options, modulo time needed to parse
> > > > > options which should be negligible. and both ways are done before
> > > > > guest runs)    
> > > > 
> > > > I preface my answer by saying there is a v9, but you don't
> > > > need to look at that. I will answer all your questions here.
> > > > 
> > > > I am going by what I observe on the main HDMI display with the
> > > > different approaches. With the approach of not patching Qemu
> > > > to fix this, which requires adding the Xen platform device a
> > > > little later, the length of time it takes to fully load the
> > > > guest is increased. I also noticed with Linux guests that use
> > > > the grub bootloader, the grub vga driver cannot display the
> > > > grub boot menu at the native resolution of the display, which
> > > > in the tested case is 1920x1080, when the Xen platform device
> > > > is added via a command line option instead of by the
> > > > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > > > to Qemu, the grub menu is displayed at the full, 1920x1080
> > > > native resolution of the display. Once the guest fully loads,
> > > > there is no noticeable difference in performance. It is mainly
> > > > a degradation in startup performance, not performance once
> > > > the guest OS is fully loaded.  
> > >
> > > Looking at igd-assign.txt, it recommends adding the IGD using the
> > > '-device' CLI option, and actually dropping at least the graphics
> > > defaults explicitly. So it is expected to work fine even when the
> > > IGD is constructed with '-device'.
> > >
> > > Could you provide the full CLI that Xen currently starts QEMU
> > > with, and then the CLI you used (with an explicit -device for the
> > > IGD) that leads to reduced performance?
> > >
> > > CCing vfio folks who might have an idea what could be wrong based
> > > on vfio experience.  
> > 
> > Actually, the igd is not added with an explicit -device option using Xen:
> > 
> >    1573 ?        Ssl    0:42 /usr/bin/qemu-system-i386 -xen-domid 1 -no-shutdown -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-1,server,nowait -mon chardev=libxl-cmd,mode=control -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-1,server,nowait -mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config -name windows -vnc none -display none -serial pty -boot order=c -smp 4,maxcpus=4 -net none -machine xenfv,max-ram-below-4g=3758096384,igd-passthru=on -m 6144 -drive file=/dev/loop0,if=ide,index=0,media=disk,format=raw,cache=writeback -drive file=/dev/disk/by-uuid/A44AA4984AA468AE,if=ide,index=1,media=disk,format=raw,cache=writeback
> > 
> > I think it is added by xl (libxl management tool) when the guest is created
> > using the qmp-libxl socket that appears on the command line, but I am not 100
> > percent sure. So, with libxl, the command line alone does not tell the whole
> > story. The xl.cfg file has a line like this to define the pci devices passed through,
> > and in qemu they are type XEN_PT devices, not VFIO devices:
> > 
> > pci = [ '00:1b.0','00:14.0','00:02.0@02' ]
> > 
> > This means three host pci devices are passed through, the ones on the
> > host at slot 1b.0, 14.0, and 02.0. Of course the device at 02 is the igd.
> > The @02 means libxl is requesting slot 2 in the guest for the igd; the
> > other 2 devices are just auto-assigned a slot by Qemu. Qemu cannot
> > assign the igd to slot 2 for xenfv machines without a patch that prevents
> > the Xen platform device from grabbing slot 2, which is what this patch
> > accomplishes. The workaround involves using the Qemu pc machine
> > instead of the Qemu xenfv machine, which avoids the code in Qemu
> > that adds the Xen platform device at slot 2; in that case the Xen
> > platform device is added via a command line option, at slot 3
> > instead of slot 2.
> > 
> > The differences between vfio and the Xen pci passthrough device
> > might make a difference in Xen vs. kvm for igd-passthru.
> > 
> > Also, kvm does not use the Xen platform device, and it seems the
> > Xen guests behave better at startup when the Xen platform device
> > is added very early, during the initialization of the emulated devices
> > of the machine, which is based on the i440fx piix3 machine, instead
> > of being added using a command line option. Perhaps the performance
> > at startup could be improved by adding the igd via a command line
> > option using vfio instead of the canonical way that libxl does pci
> > passthrough on Xen, but I have no idea if vfio works on Xen the way it
> > works on kvm. I am willing to investigate and experiment with it, though.
> > 
> > So if any vfio people can shed some light on this, that would help.
>
> ISTR some rumors of Xen thinking about vfio, but AFAIK there is no
> combination of vfio with Xen, nor is there any sharing of device quirks
> to assign IGD.  IGD assignment has various VM platform dependencies, of
> which the bus address is just one.  Vfio support for IGD assignment
> takes a much different approach: the user or management tool is
> responsible for configuring the VM correctly for IGD assignment,
> including assigning the device to the correct bus address and using the
> right VM chipset, with the correct slot free for the LPC controller.  If
> all the user configuration of the VM aligns correctly, we'll activate
> "legacy mode" by inserting the opregion and instantiate the LPC bridge
> with the correct fields to make the BIOS and drivers happy.  Otherwise,
> maybe the user is trying to use the mythical UPT mode, but given
> Intel's lack of follow-through, it probably just won't work.  Or maybe
> it's a vGPU and we don't need the legacy features anyway.
>
> Working with an expectation that QEMU needs to do the heavy lifting to
> make it all work automatically, with no support from the management
> tool for putting devices in the right place, I'm afraid there might not
> be much to leverage from the vfio version.  Thanks,
>
> Alex

Thanks for answering my question. I thought vfio's implementation was
distinct from, and probably incompatible with, Xen's implementation.

Chuck


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 02:53:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 02:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480817.745387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIL2o-0001V4-7H; Thu, 19 Jan 2023 02:52:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480817.745387; Thu, 19 Jan 2023 02:52:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIL2o-0001Uw-20; Thu, 19 Jan 2023 02:52:50 +0000
Received: by outflank-mailman (input) for mailman id 480817;
 Thu, 19 Jan 2023 02:52:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIL2m-0001Um-8H; Thu, 19 Jan 2023 02:52:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIL2m-0003Cc-3j; Thu, 19 Jan 2023 02:52:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIL2l-0007Ia-C3; Thu, 19 Jan 2023 02:52:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIL2l-0007dj-AO; Thu, 19 Jan 2023 02:52:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z3bX0xz0Tfl1uPcGbLjYFKgERniXi1o6RyMYcTCxXas=; b=w6bXA7rpEXyuppvz/uXEnXevAP
	9CPI8RjZcgFOA01E8eFFvZXYHsXZxm57PRIVrFXHi+RtH41uD6W+RVpnody6C92pZ+gB8T9IlD2ak
	3a1E+0LaQbDzjrAIt9c3mfTFbIpJAEOLtqt97m8HFkhEoNXA+TTz7TPT1WcrReeaFv2U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175956-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175956: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-i386-xsm:xen-build:fail:regression
    xen-unstable:build-amd64-prev:xen-build:fail:regression
    xen-unstable:build-i386-prev:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:build-arm64-xsm:xen-build:fail:regression
    xen-unstable:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6bec713f871f21c6254a5783c1e39867ea828256
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 02:52:47 +0000

flight 175956 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175956/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken  in 175948
 build-armhf                4 host-install(4) broken in 175948 REGR. vs. 175734
 build-i386-xsm                6 xen-build                fail REGR. vs. 175734
 build-amd64-prev              6 xen-build                fail REGR. vs. 175734
 build-i386-prev               6 xen-build                fail REGR. vs. 175734
 build-amd64                   6 xen-build                fail REGR. vs. 175734
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175734
 build-i386                    6 xen-build                fail REGR. vs. 175734
 build-arm64-xsm               6 xen-build      fail in 175948 REGR. vs. 175734

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd      17 guest-start/debian.repeat  fail pass in 175948

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu  1 build-check(1)          blocked in 175948 n/a
 test-armhf-armhf-libvirt      1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)         blocked in 175948 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-examine      1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)         blocked in 175948 n/a
 build-armhf-libvirt           1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)           blocked in 175948 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)           blocked in 175948 n/a
 test-armhf-armhf-xl           1 build-check(1)           blocked in 175948 n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175734
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175734
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175734
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6bec713f871f21c6254a5783c1e39867ea828256
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    7 days
Failing since        175739  2023-01-12 09:38:44 Z    6 days   16 attempts
Testing same since   175749  2023-01-13 06:38:52 Z    5 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling any of them would cause Xen to not compile,
    the options are not visible to the user and are enabled by default if X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
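
[Annotation, not part of the original report.] Based on the behavior described above (options hidden from the user, enabled by default on X86), the new entries would look roughly like the sketch below; the exact file location and wording are assumptions, not taken from the patch:

```kconfig
# Sketch only: no prompt string makes the options invisible to the user,
# and def_bool y + depends on X86 enables them by default on x86 builds.
config AMD_IOMMU
	def_bool y
	depends on X86

config INTEL_IOMMU
	def_bool y
	depends on X86
```

Later work can then add prompt strings to let these be disabled once the code compiles without them.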

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to go to C environment
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 03:11:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 03:11:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480827.745398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pILK9-0003yy-T3; Thu, 19 Jan 2023 03:10:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480827.745398; Thu, 19 Jan 2023 03:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pILK9-0003yr-PM; Thu, 19 Jan 2023 03:10:45 +0000
Received: by outflank-mailman (input) for mailman id 480827;
 Thu, 19 Jan 2023 03:10:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pILK9-0003yh-CC; Thu, 19 Jan 2023 03:10:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pILK9-0003X4-94; Thu, 19 Jan 2023 03:10:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pILK8-0007i3-RG; Thu, 19 Jan 2023 03:10:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pILK8-0005mj-Qm; Thu, 19 Jan 2023 03:10:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hYDTLN87P40ro2OJsILherqw9BfrIzPKH1q0hNKF6Ao=; b=FIpq/lBh5NzCeJBCl4kwCFmfV/
	4vLjcLM/aA28IKdCtMUGh5BLqlPtSa+ME0LzKTqeyTMOzKv8UXO7aVVadT0FwPLeicTOOfiZJpB83
	aOA6aRa7LaqJgynMqnLM+eCD4dQy5n6HVeZMkSJs66PUqBnYfkUyVpUmP3eiLURK8SUA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175957-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175957: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c1649ec55708ae42091a2f1bca1ab49ecd722d55
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 03:10:44 +0000

flight 175957 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175957/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 build-amd64                   6 xen-build                fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 build-amd64-xsm               6 xen-build                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 build-i386-xsm                6 xen-build                fail REGR. vs. 173462
 build-i386                    6 xen-build                fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd11-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-freebsd12-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                c1649ec55708ae42091a2f1bca1ab49ecd722d55
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  103 days
Failing since        173470  2022-10-08 06:21:34 Z  102 days  210 attempts
Testing same since   175957  2023-01-18 11:11:21 Z    0 days    1 attempts

------------------------------------------------------------
3371 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-freebsd11-amd64                             blocked 
 test-amd64-amd64-freebsd12-amd64                             blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 blocked 
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-amd64-xl-vhd                                      blocked 
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 516278 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 04:19:40 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175958-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175958: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:build-i386:xen-build:fail:regression
    linux-5.4:build-i386-xsm:xen-build:fail:regression
    linux-5.4:build-amd64-xsm:xen-build:fail:regression
    linux-5.4:build-amd64:xen-build:fail:regression
    linux-5.4:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-5.4:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-5.4:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine-bios:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-examine-uefi:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1349fe3a332ad3d1ece60806225ca7955aba9f56
X-Osstest-Versions-That:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 04:19:20 +0000

flight 175958 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175958/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 175560
 build-i386-xsm                6 xen-build                fail REGR. vs. 175560
 build-amd64-xsm               6 xen-build                fail REGR. vs. 175560
 build-amd64                   6 xen-build                fail REGR. vs. 175560

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2 13 guest-start                 fail like 175538
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175557
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 175560
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat    fail  like 175560
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 175560
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175560
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                1349fe3a332ad3d1ece60806225ca7955aba9f56
baseline version:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52

Last test of basis   175560  2023-01-03 17:11:35 Z   15 days
Testing same since   175958  2023-01-18 11:13:39 Z    0 days    1 attempts

------------------------------------------------------------
469 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                blocked 
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                blocked 
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19195 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 04:41:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 04:41:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480844.745421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIMjl-0005JI-SV; Thu, 19 Jan 2023 04:41:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480844.745421; Thu, 19 Jan 2023 04:41:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIMjl-0005JB-Oj; Thu, 19 Jan 2023 04:41:17 +0000
Received: by outflank-mailman (input) for mailman id 480844;
 Thu, 19 Jan 2023 04:41:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIMjk-0005J1-Pm; Thu, 19 Jan 2023 04:41:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIMjk-00060t-NI; Thu, 19 Jan 2023 04:41:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIMjk-0002f6-GX; Thu, 19 Jan 2023 04:41:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIMjk-0005Xw-Fs; Thu, 19 Jan 2023 04:41:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5+IVaCGAs1szDOkO3Zy/SEwHGi2s5Jx0jytJCMeVOu8=; b=pKmFfmfBf4jlpbQjoH9EtBpgoB
	vgbypwvsRQmeST6NpdjfHcnedTMP7pEWYKVQ+f5OooUVTuXc2h2s63JYD11HKr/wAJVunIxLAKe0N
	62QxBisndCzgk5AjSXLvQgfDSTgYVxJESi2GLlPY5rmwaVZuv3c/8KkyboqSMD2upbuQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175964-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175964: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=426efcc37492da4ebd86799c2d4f5dfeac806848
X-Osstest-Versions-That:
    ovmf=998ebe5ca0ae5c449e83ede533bee872f97d63af
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 04:41:16 +0000

flight 175964 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175964/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 426efcc37492da4ebd86799c2d4f5dfeac806848
baseline version:
 ovmf                 998ebe5ca0ae5c449e83ede533bee872f97d63af

Last test of basis   175960  2023-01-18 13:10:41 Z    0 days
Testing same since   175964  2023-01-19 02:42:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Nickle Wang <nicklew@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   998ebe5ca0..426efcc374  426efcc37492da4ebd86799c2d4f5dfeac806848 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 05:53:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 05:53:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480857.745443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pINqu-0004IX-6o; Thu, 19 Jan 2023 05:52:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480857.745443; Thu, 19 Jan 2023 05:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pINqu-0004IQ-3K; Thu, 19 Jan 2023 05:52:44 +0000
Received: by outflank-mailman (input) for mailman id 480857;
 Thu, 19 Jan 2023 05:52:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dkQ4=5Q=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1pINqs-0004II-Q8
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 05:52:43 +0000
Received: from mga03.intel.com (mga03.intel.com [134.134.136.65])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 794c4139-97bd-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 06:52:37 +0100 (CET)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 Jan 2023 21:52:34 -0800
Received: from fmsmsx603.amr.corp.intel.com ([10.18.126.83])
 by fmsmga002.fm.intel.com with ESMTP; 18 Jan 2023 21:52:34 -0800
Received: from fmsmsx611.amr.corp.intel.com (10.18.126.91) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.16; Wed, 18 Jan 2023 21:52:34 -0800
Received: from fmsmsx611.amr.corp.intel.com (10.18.126.91) by
 fmsmsx611.amr.corp.intel.com (10.18.126.91) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.16; Wed, 18 Jan 2023 21:52:34 -0800
Received: from fmsedg602.ED.cps.intel.com (10.1.192.136) by
 fmsmsx611.amr.corp.intel.com (10.18.126.91) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.16 via Frontend Transport; Wed, 18 Jan 2023 21:52:34 -0800
Received: from NAM04-MW2-obe.outbound.protection.outlook.com (104.47.73.168)
 by edgegateway.intel.com (192.55.55.71) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.16; Wed, 18 Jan 2023 21:52:33 -0800
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by IA1PR11MB6076.namprd11.prod.outlook.com (2603:10b6:208:3d4::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Thu, 19 Jan
 2023 05:52:31 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::6a8d:b95:e1b5:d79d]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::6a8d:b95:e1b5:d79d%9]) with mapi id 15.20.6002.025; Thu, 19 Jan 2023
 05:52:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 794c4139-97bd-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1674107558; x=1705643558;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ZwDno251y8g8lhGGleJtDy9ezZ9G7WUk5vdbPaatc0I=;
  b=ndLLiyYQJ5kcqBv8hdeXBHi8Zhlb8aD3Ry8UkungfEfOvefqgQbXMZtE
   Qxcl0WbjuT0HKZJWZf9vJlAGOfbbk6ebA75fVdapopwW52wnA7YXLD54+
   N9f/wU3e0TA0HkBQcHPetB8WythyZcNIOCX5JqMtSB0OV9eT465TsjXLY
   j1djsym2d6R7987yJIdYWJPx56aQxsIEgEPImuJBLsUAAdLN/P90qYokH
   1SahxKyCl8eyQrinYlvbc79UVZdOY0YZLUV4s3wyHuCn7Y6ukrkuNVWv9
   d4vgLyAhe4Iyha+k9FYT8mJ6XDrzHEBSe+S37d9wVvi20xHbxZOn+PF6k
   g==;
X-IronPort-AV: E=McAfee;i="6500,9779,10594"; a="327277195"
X-IronPort-AV: E=Sophos;i="5.97,228,1669104000"; 
   d="scan'208";a="327277195"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6500,9779,10594"; a="768065843"
X-IronPort-AV: E=Sophos;i="5.97,228,1669104000"; 
   d="scan'208";a="768065843"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZNStlZxjzWnCRzSxgW5CqlVkpznHrO8o4V1n/kTpdMs9v8T0XlFbMivEWQZYqzuHcZesdFHPkA6IitGsbTIHbkj8JKRCHKpvvUbgns1oJhXxkYi/CKgJw6O/Xel8Aw3Ed4apmAxIL3ZsZAxvBKO4nPK6e1Ca44hS6FYZ6XXInAjK/lIeNwuExQwF4ec+dvjoTUBqXzkAmDBEXmQbe2RJcPJHIIc7zOTeZ+NWiMa2W3o8PR/lzEgjWh89hLsS2LIFlt/XfflnVq7VEO6qv2nqL01auxpvopwGUds4WzEqAkQuFrzeoh89Tce/1TLdlGARD4IBWcPrX1VOysFvWuci1g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZwDno251y8g8lhGGleJtDy9ezZ9G7WUk5vdbPaatc0I=;
 b=EC06eQBciWN0SaNgxUnE9QEOAZ+4ntGzpIJn5L8lpngZtxf1X9hTdxqjT7+vfBRRlj16NHW9UbszpYtMVwkTbsOKU8kCIUkp1Zrck7dTTdu0pdRgapbbz2BWDLgmmpP+Cc/6ELV/JVQmwYj3jv4w/BcFLYqOhwmQ9kDIiQeevUeoezq2U4bridL1P3kT8MZYs7AmU4n/vb7kkGxRo6Xwh4sJjD9WyHd5vzT/onnDy3xb1JijC5Q7D+3FJ/uWC8sXlUwm9RPmWuUn4pupsyMcX3k4lSOwWoawxsuNoy8OUtXe8WmwGsd8HsrjKPCY7Jco4zdOHasOCdSzION2xvU7XA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Beulich, Jan"
	<JBeulich@suse.com>, =?utf-8?B?UGF1IE1vbm7DqSwgUm9nZXI=?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: RE: [PATCH] x86/vmx: Partially revert "x86/vmx: implement Notify VM
 Exit"
Thread-Topic: [PATCH] x86/vmx: Partially revert "x86/vmx: implement Notify VM
 Exit"
Thread-Index: AQHZK3RJ/0ZZPj3GFkGObjjpGm11Tq6lPW7Q
Date: Thu, 19 Jan 2023 05:52:31 +0000
Message-ID: <BN9PR11MB5276655FFF1ED3E6EE513CB58CC49@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20230118193637.8907-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230118193637.8907-1-andrew.cooper3@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BN9PR11MB5276:EE_|IA1PR11MB6076:EE_
x-ms-office365-filtering-correlation-id: 96936b7f-3732-46f8-96e5-08daf9e15aad
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 96936b7f-3732-46f8-96e5-08daf9e15aad
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jan 2023 05:52:31.0473
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: PGnSsLI4fHGA1j+nafhfGGni5+cu6yfCzPq/PiKaMT0zWJf971l5wQekzEgHVVj6N/OFGDwIxhjNcF81xp9X/w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR11MB6076
X-OriginatorOrg: intel.com

> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: Thursday, January 19, 2023 3:37 AM
> 
> The original patch tried to do two things - implement VMNotify, and
> re-optimise VT-x to not intercept #DB/#AC by default.
> 
> The second part is buggy in multiple ways.  Both GDBSX and Introspection
> need
> to conditionally intercept #DB, which was not accounted for.  Also, #DB
> interception has nothing at all to do with cpu_has_monitor_trap_flag.
> 
> Revert the second half, leaving #DB/#AC intercepted unilaterally, but with
> VMNotify active by default when available.
> 
> Fixes: 573279cde1c4 ("x86/vmx: implement Notify VM Exit")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 06:21:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 06:21:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480865.745459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIOI9-0007gz-Ck; Thu, 19 Jan 2023 06:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480865.745459; Thu, 19 Jan 2023 06:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIOI9-0007gs-9k; Thu, 19 Jan 2023 06:20:53 +0000
Received: by outflank-mailman (input) for mailman id 480865;
 Thu, 19 Jan 2023 06:20:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/Z0M=5Q=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIOI7-0007gm-PM
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 06:20:51 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66ec324b-97c1-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 07:20:43 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 87BC437314;
 Thu, 19 Jan 2023 06:20:48 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4E1E3139ED;
 Thu, 19 Jan 2023 06:20:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 57GlEUDhyGNxHAAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 19 Jan 2023 06:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66ec324b-97c1-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674109248; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7OU4iBHSBL7uO1d+9sDGk7WFkyzvKIudXuLywSlHhHM=;
	b=dLC9xLlxS3Z9cUDj1muNBg04KG+8PaAHKN6EZH17nX0LGo1D7C0Bsac0Rh8h0Fs+y9hSCi
	Xz9fk/v0PdQHRnrDyvf4vph1pUAETqXc1dV8c0b9HSETETfDywXthrLdhuf11+CYon4UUm
	zGld7AgtUDkbiTI3fwtBcgogej5y5Eo=
Message-ID: <3c0fb20e-b6bb-83f6-3460-53de14e18663@suse.com>
Date: Thu, 19 Jan 2023 07:20:47 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 0/6] tools: Switch to non-truncating XENVER_* ops
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>,
 Rob Hoes <Rob.Hoes@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------FoltJnnfUi5LMvGHM4vvbMeE"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------FoltJnnfUi5LMvGHM4vvbMeE
Content-Type: multipart/mixed; boundary="------------ULD5jsbBglIzZytbDu23PUfX";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>,
 Rob Hoes <Rob.Hoes@citrix.com>
Message-ID: <3c0fb20e-b6bb-83f6-3460-53de14e18663@suse.com>
Subject: Re: [PATCH 0/6] tools: Switch to non-truncating XENVER_* ops
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>

--------------ULD5jsbBglIzZytbDu23PUfX
Content-Type: multipart/mixed; boundary="------------SV0JbAsQMieIH6j57tU0i8FJ"

--------------SV0JbAsQMieIH6j57tU0i8FJ
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.01.23 14:53, Andrew Cooper wrote:
> This is the tools side of the Xen series posted previously.
> 
> With this, a Xen built with long strings can be retrieved:
> 
>    # xl info
>    ...
>    xen_version            : 4.18-unstable+REALLY LONG EXTRAVERSION
>    xen_changeset          : Tue Jan 3 19:27:17 2023 git:52d2da6c0544+REALLY SUPER DUPER EXTRA MEGA LONG CHANGESET
>    ...
> 
> 
> Andrew Cooper (6):
>    tools/libxc: Move xc_version() out of xc_private.c into its own file
>    tools: Introduce a non-truncating xc_xenver_extraversion()
>    tools: Introduce a non-truncating xc_xenver_capabilities()
>    tools: Introduce a non-truncating xc_xenver_changeset()
>    tools: Introduce a non-truncating xc_xenver_cmdline()
>    tools: Introduce a xc_xenver_buildid() wrapper
> 
>   tools/include/xenctrl.h             |  10 ++
>   tools/libs/ctrl/Makefile.common     |   1 +
>   tools/libs/ctrl/xc_private.c        |  66 ------------
>   tools/libs/ctrl/xc_private.h        |   7 --
>   tools/libs/ctrl/xc_version.c        | 206 ++++++++++++++++++++++++++++++++++++++++++++++
>   tools/libs/light/libxl.c            |  61 +----------
>   tools/ocaml/libs/xc/xenctrl_stubs.c |  45 +++++---
>   7 files changed, 250 insertions(+), 146 deletions(-)
>   create mode 100644 tools/libs/ctrl/xc_version.c
> 

Hmm, I'm not completely opposed to this, but do we really need all that
additional code?

Apart from the build-id all the information is easily available via hypfs.
And the build-id can be easily added to hypfs.


Juergen

--------------SV0JbAsQMieIH6j57tU0i8FJ
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------SV0JbAsQMieIH6j57tU0i8FJ--

--------------ULD5jsbBglIzZytbDu23PUfX--

--------------FoltJnnfUi5LMvGHM4vvbMeE
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPI4T8FAwAAAAAACgkQsN6d1ii/Ey9m
UAf/dUGjTFneYdpmNPpCMXHoBI8dE0RHc3+PchzZ+0sNI81Ipq6N82D+KmkkXcDFsb4tBBjWvMgW
yJAYPvId043gcOTbRZctEgj1aCJx2RLvMOYWJCsNNSrrVy4ShFluPF5MkiNsLWFRLCLZbsQs/gjc
pfinUCnG7cNvgS8HL/IMoCuh5k+0isayfNOGNIYJzxTCpAAS5J6KQOp7Lg6I22sHnHbxM4cyz0xB
Ou7LZ9+/Fg4mqTqVR9zbMdhMnoIekhAnLLY8eKfMXuqvZ16oKQGNihwJi1+9/4C9MihItlpOsPwd
ZgOzFwnDoHY1/IuHmRxZm0TcUQjvq19lauZFhU0SAQ==
=aBK3
-----END PGP SIGNATURE-----

--------------FoltJnnfUi5LMvGHM4vvbMeE--


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 07:04:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 07:04:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480873.745475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIOyO-0003h3-MF; Thu, 19 Jan 2023 07:04:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480873.745475; Thu, 19 Jan 2023 07:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIOyO-0003gw-JR; Thu, 19 Jan 2023 07:04:32 +0000
Received: by outflank-mailman (input) for mailman id 480873;
 Thu, 19 Jan 2023 07:04:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7WA1=5Q=bu.edu=alxndr@srs-se1.protection.inumbo.net>)
 id 1pIOyM-0003go-KU
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 07:04:31 +0000
Received: from esa6.hc2706-39.iphmx.com (esa6.hc2706-39.iphmx.com
 [216.71.137.79]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7d979c81-97c7-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 08:04:19 +0100 (CET)
Received: from mail-pg1-f200.google.com ([209.85.215.200])
 by ob1.hc2706-39.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Jan 2023 02:04:17 -0500
Received: by mail-pg1-f200.google.com with SMTP id
 f132-20020a636a8a000000b00473d0b600ebso574650pgc.14
 for <xen-devel@lists.xenproject.org>; Wed, 18 Jan 2023 23:04:17 -0800 (PST)
Received: from mozz.bu.edu (mozz.bu.edu. [128.197.127.33])
 by smtp.gmail.com with ESMTPSA id
 t1-20020ac86a01000000b003a7e4129f83sm18379551qtr.85.2023.01.18.23.04.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 18 Jan 2023 23:04:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d979c81-97c7-11ed-b8d1-410ff93cb8f0
X-IronPort-RemoteIP: 209.85.215.200
X-IronPort-MID: 256114136
X-IronPort-Reputation: None
X-IronPort-Listener: OutgoingMail
X-IronPort-SenderGroup: RELAY_GSUITE
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Mgg4m6odkRjJLOh02gCcBQO34TteBmJYZBIvgKrLsJaIsI4StFCzt
 garIBmGOquCZjejfI9zaoSyo0NV65LUndQ3TwRsqHpgF3tG85acVYWSI3mrAy7DdceroGCLT
 ik9hnssCOhuExcwcz/0auCJQUFUjP3OHfykTbaeYUidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UwHUMja4mtC5QRnPKET5TcyqlFOZH4hDfDpR5fHatQMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+AsBOsDAbzsAB+v9T2M4nVKtio27hc+ada
 Tl6ncfYpQ8BZsUgkQmGOvVSO3gW0aZuodcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw8b1ZGmJK8
 tUjdQsUTRei1/L1546Zc7w57igjBJGD0II3v3hhyXTBAq9jT8qbG+PF4thX2Dp2jcdLdRrcT
 5BBOHw/MVKaOkAJYA9PYH49tL7Aan3XejlIrl6PjaAqpWXf0WSd1ZC3bYSMI4DTHps9ckCwm
 Xzj/HWjOSojP92WxxaG1G2ihf3tgnauMG4VPPjinhJwu3WDy2pWBBAIWF+TpfiillX4S99ZM
 1YT+Cclse417kPDczXmdxixoXrBphFFHtQKS7V85waKxa7ZpQ2eAwDoUwJ8VTDvj+duLRRC6
 7NDt4qwbdCzmNV5kU6gy4o=
IronPort-HdrOrdr: A9a23:MaDVNanKGvJNSRdkX/YQcqo/wmXpDfL63DAbv31ZSRFFG/FwWf
 re+MjzsiWE9Ar5PUtLpTnuAtjnfZqxz+8W3WBVB8bYYOCEghrUEGgd1/qa/9SIIUSXnZ8/6U
 4jSdkFNDSZNzhHZK3BkW6F+rgbsby62ZHtr8vli1lWcSFWR5dJ0zpZYzzrbXGehzMrOXP6Lv
 ehDwZ8yQZIAU5nFvhTz0NrPtT+mw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=bu.edu; s=s1gsbu;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=E4pJmeqUZRL40o660Z8ZlGFqTDULPb9loeeYL7ivzYk=;
        b=ERPQXJu68/Xrio72aqd90/8aXjLFP56cuoy3sx/p6kVl0WhCKpltFDZLolkrLIvf4o
         9uyW2urXEyB/MOucUaDHO1jrPXoLkLTArwXrWX675ObuVv0etuwI6oxl98ZLJXin3sJZ
         ggFShoU7k+bI+quxKiJKeP8f84ga/HumJjZigToxq9qPuYWYqS/Zq3NEmwtagmBrLD+R
         MYzKeGLfKtCmRQbWAaaA85JKep04kwMCJgTg/Mr9Y01Ip267ETiZtbrkJJnprHQVa3o2
         3TXWuLwxRB48vuwCLPFlIAzEr0Ft22Yodga0PVvXex5tseSc+wEoDv0bjaU1LIRz00LD
         ivNw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=E4pJmeqUZRL40o660Z8ZlGFqTDULPb9loeeYL7ivzYk=;
        b=RWf5HZEHaHkN7qUbzttaDIVTm6u2SambcL5/tj6ldCU7hGG+UFm2s4FTx9/qPdYVhq
         6MFzpgp9PMMb04cSYOydJtq/6xSzksAxRjuqvkPDxgyFJ7I4gWaBSme6DXPjPCnnyvB/
         8/rNqALLfvLN7RWr0wpNqJnZducGFjMFO92gDnrAJWvXCHu7tTlJqHXIkSPMINoQ9HnX
         6CwenIsYQAjeD5bKfm4PtEy70KnykwZWQH3E1CN+wA9lvAiv5LFHMVU0E3I3duoVcURd
         qAPe9qzb15gtzh6KlcgwkdvyHf81u9psCRY/uhpayofbeZRrRzFAKkjIS9JCIiW2ZKQ8
         toYA==
X-Gm-Message-State: AFqh2kqzPQEN3mr5NEirLa3+PbaF2lB200Jt1riM8Jj9Rg+M8NYxir70
	2NbejQ4Xs0x5ye+Ys23u2JAMptX+XSF8/zsBrxEryc/xy6kf8+DxYuJH508/H/CUAwOmu6PwwFd
	HDpbizl8s8xQWxLMramfSBKTOHc3MVhoPzdOWK0y2yQ==
X-Received: by 2002:ac8:45cd:0:b0:3b6:3267:efa1 with SMTP id e13-20020ac845cd000000b003b63267efa1mr13310901qto.50.1674111845918;
        Wed, 18 Jan 2023 23:04:05 -0800 (PST)
X-Google-Smtp-Source: AMrXdXtEiE9va0pYp6bJKPXscClk6pML4cboxFe+vd1v6BP6m1qJWNzzSQvjcX8dEBMmZQ3eqy2mUg==
X-Received: by 2002:ac8:45cd:0:b0:3b6:3267:efa1 with SMTP id e13-20020ac845cd000000b003b63267efa1mr13310855qto.50.1674111845530;
        Wed, 18 Jan 2023 23:04:05 -0800 (PST)
From: Alexander Bulekov <alxndr@bu.edu>
To: qemu-devel@nongnu.org
Cc: Alexander Bulekov <alxndr@bu.edu>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Mauro Matteo Cascella <mcascell@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Bandan Das <bsd@redhat.com>,
	"Edgar E . Iglesias" <edgar.iglesias@gmail.com>,
	Darren Kenny <darren.kenny@oracle.com>,
	Bin Meng <bin.meng@windriver.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Jon Maloy <jmaloy@redhat.com>,
	Siqi Chen <coc.cyqh@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Amit Shah <amit@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Keith Busch <kbusch@kernel.org>,
	Klaus Jensen <its@irrelevant.dk>,
	Fam Zheng <fam@euphon.net>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs),
	qemu-block@nongnu.org (open list:virtio-blk),
	qemu-arm@nongnu.org (open list:i.MX31 (kzm)),
	qemu-ppc@nongnu.org (open list:Old World (g3beige))
Subject: [PATCH v4 3/3] hw: replace most qemu_bh_new calls with qemu_bh_new_guarded
Date: Thu, 19 Jan 2023 02:03:08 -0500
Message-Id: <20230119070308.321653-4-alxndr@bu.edu>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230119070308.321653-1-alxndr@bu.edu>
References: <20230119070308.321653-1-alxndr@bu.edu>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-CES-GSUITE_AUTH: bf3aNvsZpxl8

This protects devices from bh->mmio reentrancy issues.

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 hw/9pfs/xen-9p-backend.c        | 4 +++-
 hw/block/dataplane/virtio-blk.c | 3 ++-
 hw/block/dataplane/xen-block.c  | 5 +++--
 hw/block/virtio-blk.c           | 5 +++--
 hw/char/virtio-serial-bus.c     | 3 ++-
 hw/display/qxl.c                | 9 ++++++---
 hw/display/virtio-gpu.c         | 6 ++++--
 hw/ide/ahci.c                   | 3 ++-
 hw/ide/core.c                   | 3 ++-
 hw/misc/imx_rngc.c              | 6 ++++--
 hw/misc/macio/mac_dbdma.c       | 2 +-
 hw/net/virtio-net.c             | 3 ++-
 hw/nvme/ctrl.c                  | 6 ++++--
 hw/scsi/mptsas.c                | 3 ++-
 hw/scsi/scsi-bus.c              | 3 ++-
 hw/scsi/vmw_pvscsi.c            | 3 ++-
 hw/usb/dev-uas.c                | 3 ++-
 hw/usb/hcd-dwc2.c               | 3 ++-
 hw/usb/hcd-ehci.c               | 3 ++-
 hw/usb/hcd-uhci.c               | 2 +-
 hw/usb/host-libusb.c            | 6 ++++--
 hw/usb/redirect.c               | 6 ++++--
 hw/usb/xen-usb.c                | 3 ++-
 hw/virtio/virtio-balloon.c      | 5 +++--
 hw/virtio/virtio-crypto.c       | 3 ++-
 25 files changed, 66 insertions(+), 35 deletions(-)

diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
index 65c4979c3c..f077c1b255 100644
--- a/hw/9pfs/xen-9p-backend.c
+++ b/hw/9pfs/xen-9p-backend.c
@@ -441,7 +441,9 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
         xen_9pdev->rings[i].ring.out = xen_9pdev->rings[i].data +
                                        XEN_FLEX_RING_SIZE(ring_order);
 
-        xen_9pdev->rings[i].bh = qemu_bh_new(xen_9pfs_bh, &xen_9pdev->rings[i]);
+        xen_9pdev->rings[i].bh = qemu_bh_new_guarded(xen_9pfs_bh,
+                                                     &xen_9pdev->rings[i],
+                                                     &DEVICE(xen_9pdev)->mem_reentrancy_guard);
         xen_9pdev->rings[i].out_cons = 0;
         xen_9pdev->rings[i].out_size = 0;
         xen_9pdev->rings[i].inprogress = false;
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 26f965cabc..191a8c90aa 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -127,7 +127,8 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     } else {
         s->ctx = qemu_get_aio_context();
     }
-    s->bh = aio_bh_new(s->ctx, notify_guest_bh, s);
+    s->bh = aio_bh_new_guarded(s->ctx, notify_guest_bh, s,
+                               &DEVICE(s)->mem_reentrancy_guard);
     s->batch_notify_vqs = bitmap_new(conf->num_queues);
 
     *dataplane = s;
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 2785b9e849..e31806b317 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -632,8 +632,9 @@ XenBlockDataPlane *xen_block_dataplane_create(XenDevice *xendev,
     } else {
         dataplane->ctx = qemu_get_aio_context();
     }
-    dataplane->bh = aio_bh_new(dataplane->ctx, xen_block_dataplane_bh,
-                               dataplane);
+    dataplane->bh = aio_bh_new_guarded(dataplane->ctx, xen_block_dataplane_bh,
+                                       dataplane,
+                                       &DEVICE(xendev)->mem_reentrancy_guard);
 
     return dataplane;
 }
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f717550fdc..e9f516e633 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -866,8 +866,9 @@ static void virtio_blk_dma_restart_cb(void *opaque, bool running,
      * requests will be processed while starting the data plane.
      */
     if (!s->bh && !virtio_bus_ioeventfd_enabled(bus)) {
-        s->bh = aio_bh_new(blk_get_aio_context(s->conf.conf.blk),
-                           virtio_blk_dma_restart_bh, s);
+        s->bh = aio_bh_new_guarded(blk_get_aio_context(s->conf.conf.blk),
+                                   virtio_blk_dma_restart_bh, s,
+                                   &DEVICE(s)->mem_reentrancy_guard);
         blk_inc_in_flight(s->conf.conf.blk);
         qemu_bh_schedule(s->bh);
     }
diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
index 7d4601cb5d..dd619f0731 100644
--- a/hw/char/virtio-serial-bus.c
+++ b/hw/char/virtio-serial-bus.c
@@ -985,7 +985,8 @@ static void virtser_port_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    port->bh = qemu_bh_new(flush_queued_data_bh, port);
+    port->bh = qemu_bh_new_guarded(flush_queued_data_bh, port,
+                                   &dev->mem_reentrancy_guard);
     port->elem = NULL;
 }
 
diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index 6772849dec..67efa3c3ef 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -2223,11 +2223,14 @@ static void qxl_realize_common(PCIQXLDevice *qxl, Error **errp)
 
     qemu_add_vm_change_state_handler(qxl_vm_change_state_handler, qxl);
 
-    qxl->update_irq = qemu_bh_new(qxl_update_irq_bh, qxl);
+    qxl->update_irq = qemu_bh_new_guarded(qxl_update_irq_bh, qxl,
+                                          &DEVICE(qxl)->mem_reentrancy_guard);
     qxl_reset_state(qxl);
 
-    qxl->update_area_bh = qemu_bh_new(qxl_render_update_area_bh, qxl);
-    qxl->ssd.cursor_bh = qemu_bh_new(qemu_spice_cursor_refresh_bh, &qxl->ssd);
+    qxl->update_area_bh = qemu_bh_new_guarded(qxl_render_update_area_bh, qxl,
+                                              &DEVICE(qxl)->mem_reentrancy_guard);
+    qxl->ssd.cursor_bh = qemu_bh_new_guarded(qemu_spice_cursor_refresh_bh, &qxl->ssd,
+                                             &DEVICE(qxl)->mem_reentrancy_guard);
 }
 
 static void qxl_realize_primary(PCIDevice *dev, Error **errp)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 5e15c79b94..66ac9b6cc5 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1339,8 +1339,10 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
 
     g->ctrl_vq = virtio_get_queue(vdev, 0);
     g->cursor_vq = virtio_get_queue(vdev, 1);
-    g->ctrl_bh = qemu_bh_new(virtio_gpu_ctrl_bh, g);
-    g->cursor_bh = qemu_bh_new(virtio_gpu_cursor_bh, g);
+    g->ctrl_bh = qemu_bh_new_guarded(virtio_gpu_ctrl_bh, g,
+                                     &qdev->mem_reentrancy_guard);
+    g->cursor_bh = qemu_bh_new_guarded(virtio_gpu_cursor_bh, g,
+                                       &qdev->mem_reentrancy_guard);
     QTAILQ_INIT(&g->reslist);
     QTAILQ_INIT(&g->cmdq);
     QTAILQ_INIT(&g->fenceq);
diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c
index 7ce001cacd..37091150cb 100644
--- a/hw/ide/ahci.c
+++ b/hw/ide/ahci.c
@@ -1508,7 +1508,8 @@ static void ahci_cmd_done(const IDEDMA *dma)
     ahci_write_fis_d2h(ad);
 
     if (ad->port_regs.cmd_issue && !ad->check_bh) {
-        ad->check_bh = qemu_bh_new(ahci_check_cmd_bh, ad);
+        ad->check_bh = qemu_bh_new_guarded(ahci_check_cmd_bh, ad,
+                                           &DEVICE(ad)->mem_reentrancy_guard);
         qemu_bh_schedule(ad->check_bh);
     }
 }
diff --git a/hw/ide/core.c b/hw/ide/core.c
index 5d1039378f..8c8d1a8ec2 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -519,7 +519,8 @@ BlockAIOCB *ide_issue_trim(
 
     iocb = blk_aio_get(&trim_aiocb_info, s->blk, cb, cb_opaque);
     iocb->s = s;
-    iocb->bh = qemu_bh_new(ide_trim_bh_cb, iocb);
+    iocb->bh = qemu_bh_new_guarded(ide_trim_bh_cb, iocb,
+                                   &DEVICE(s)->mem_reentrancy_guard);
     iocb->ret = 0;
     iocb->qiov = qiov;
     iocb->i = -1;
diff --git a/hw/misc/imx_rngc.c b/hw/misc/imx_rngc.c
index 632c03779c..082c6980ad 100644
--- a/hw/misc/imx_rngc.c
+++ b/hw/misc/imx_rngc.c
@@ -228,8 +228,10 @@ static void imx_rngc_realize(DeviceState *dev, Error **errp)
     sysbus_init_mmio(sbd, &s->iomem);
 
     sysbus_init_irq(sbd, &s->irq);
-    s->self_test_bh = qemu_bh_new(imx_rngc_self_test, s);
-    s->seed_bh = qemu_bh_new(imx_rngc_seed, s);
+    s->self_test_bh = qemu_bh_new_guarded(imx_rngc_self_test, s,
+                                          &dev->mem_reentrancy_guard);
+    s->seed_bh = qemu_bh_new_guarded(imx_rngc_seed, s,
+                                     &dev->mem_reentrancy_guard);
 }
 
 static void imx_rngc_reset(DeviceState *dev)
diff --git a/hw/misc/macio/mac_dbdma.c b/hw/misc/macio/mac_dbdma.c
index efcc02609f..cc7e02203d 100644
--- a/hw/misc/macio/mac_dbdma.c
+++ b/hw/misc/macio/mac_dbdma.c
@@ -914,7 +914,7 @@ static void mac_dbdma_realize(DeviceState *dev, Error **errp)
 {
     DBDMAState *s = MAC_DBDMA(dev);
 
-    s->bh = qemu_bh_new(DBDMA_run_bh, s);
+    s->bh = qemu_bh_new_guarded(DBDMA_run_bh, s, &dev->mem_reentrancy_guard);
 }
 
 static void mac_dbdma_class_init(ObjectClass *oc, void *data)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 3ae909041a..a170c724de 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2885,7 +2885,8 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
         n->vqs[index].tx_vq =
             virtio_add_queue(vdev, n->net_conf.tx_queue_size,
                              virtio_net_handle_tx_bh);
-        n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
+        n->vqs[index].tx_bh = qemu_bh_new_guarded(virtio_net_tx_bh, &n->vqs[index],
+                                                  &DEVICE(vdev)->mem_reentrancy_guard);
     }
 
     n->vqs[index].tx_waiting = 0;
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f25cc2c235..dcb250e772 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -4318,7 +4318,8 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
         QTAILQ_INSERT_TAIL(&(sq->req_list), &sq->io_req[i], entry);
     }
 
-    sq->bh = qemu_bh_new(nvme_process_sq, sq);
+    sq->bh = qemu_bh_new_guarded(nvme_process_sq, sq,
+                                 &DEVICE(sq->ctrl)->mem_reentrancy_guard);
 
     if (n->dbbuf_enabled) {
         sq->db_addr = n->dbbuf_dbs + (sqid << 3);
@@ -4708,7 +4709,8 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
         }
     }
     n->cq[cqid] = cq;
-    cq->bh = qemu_bh_new(nvme_post_cqes, cq);
+    cq->bh = qemu_bh_new_guarded(nvme_post_cqes, cq,
+                                 &DEVICE(cq->ctrl)->mem_reentrancy_guard);
 }
 
 static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
diff --git a/hw/scsi/mptsas.c b/hw/scsi/mptsas.c
index c485da792c..3de288b454 100644
--- a/hw/scsi/mptsas.c
+++ b/hw/scsi/mptsas.c
@@ -1322,7 +1322,8 @@ static void mptsas_scsi_realize(PCIDevice *dev, Error **errp)
     }
     s->max_devices = MPTSAS_NUM_PORTS;
 
-    s->request_bh = qemu_bh_new(mptsas_fetch_requests, s);
+    s->request_bh = qemu_bh_new_guarded(mptsas_fetch_requests, s,
+                                        &DEVICE(dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), &dev->qdev, &mptsas_scsi_info);
 }
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index ceceafb2cd..e5c9f7a53d 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -193,7 +193,8 @@ static void scsi_dma_restart_cb(void *opaque, bool running, RunState state)
         AioContext *ctx = blk_get_aio_context(s->conf.blk);
         /* The reference is dropped in scsi_dma_restart_bh.*/
         object_ref(OBJECT(s));
-        s->bh = aio_bh_new(ctx, scsi_dma_restart_bh, s);
+        s->bh = aio_bh_new_guarded(ctx, scsi_dma_restart_bh, s,
+                                   &DEVICE(s)->mem_reentrancy_guard);
         qemu_bh_schedule(s->bh);
     }
 }
diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
index fa76696855..4de34536e9 100644
--- a/hw/scsi/vmw_pvscsi.c
+++ b/hw/scsi/vmw_pvscsi.c
@@ -1184,7 +1184,8 @@ pvscsi_realizefn(PCIDevice *pci_dev, Error **errp)
         pcie_endpoint_cap_init(pci_dev, PVSCSI_EXP_EP_OFFSET);
     }
 
-    s->completion_worker = qemu_bh_new(pvscsi_process_completion_queue, s);
+    s->completion_worker = qemu_bh_new_guarded(pvscsi_process_completion_queue, s,
+                                               &DEVICE(pci_dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), DEVICE(pci_dev), &pvscsi_scsi_info);
     /* override default SCSI bus hotplug-handler, with pvscsi's one */
diff --git a/hw/usb/dev-uas.c b/hw/usb/dev-uas.c
index 88f99c05d5..f013ded91e 100644
--- a/hw/usb/dev-uas.c
+++ b/hw/usb/dev-uas.c
@@ -937,7 +937,8 @@ static void usb_uas_realize(USBDevice *dev, Error **errp)
 
     QTAILQ_INIT(&uas->results);
     QTAILQ_INIT(&uas->requests);
-    uas->status_bh = qemu_bh_new(usb_uas_send_status_bh, uas);
+    uas->status_bh = qemu_bh_new_guarded(usb_uas_send_status_bh, uas,
+                                         &d->mem_reentrancy_guard);
 
     dev->flags |= (1 << USB_DEV_FLAG_IS_SCSI_STORAGE);
     scsi_bus_init(&uas->bus, sizeof(uas->bus), DEVICE(dev), &usb_uas_scsi_info);
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index 8755e9cbb0..a0c4e782b2 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -1364,7 +1364,8 @@ static void dwc2_realize(DeviceState *dev, Error **errp)
     s->fi = USB_FRMINTVL - 1;
     s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
-    s->async_bh = qemu_bh_new(dwc2_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(dwc2_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
 
     sysbus_init_irq(sbd, &s->irq);
 }
diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index d4da8dcb8d..c930c60921 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -2533,7 +2533,8 @@ void usb_ehci_realize(EHCIState *s, DeviceState *dev, Error **errp)
     }
 
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, ehci_work_timer, s);
-    s->async_bh = qemu_bh_new(ehci_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(ehci_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
     s->device = dev;
 
     s->vmstate = qemu_add_vm_change_state_handler(usb_ehci_vm_state_change, s);
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 30ae0104bb..bdc891f57a 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -1193,7 +1193,7 @@ void usb_uhci_common_realize(PCIDevice *dev, Error **errp)
                               USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL);
         }
     }
-    s->bh = qemu_bh_new(uhci_bh, s);
+    s->bh = qemu_bh_new_guarded(uhci_bh, s, &DEVICE(dev)->mem_reentrancy_guard);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, uhci_frame_timer, s);
     s->num_ports_vmstate = NB_PORTS;
     QTAILQ_INIT(&s->queues);
diff --git a/hw/usb/host-libusb.c b/hw/usb/host-libusb.c
index 176868d345..f500db85ab 100644
--- a/hw/usb/host-libusb.c
+++ b/hw/usb/host-libusb.c
@@ -1141,7 +1141,8 @@ static void usb_host_nodev_bh(void *opaque)
 static void usb_host_nodev(USBHostDevice *s)
 {
     if (!s->bh_nodev) {
-        s->bh_nodev = qemu_bh_new(usb_host_nodev_bh, s);
+        s->bh_nodev = qemu_bh_new_guarded(usb_host_nodev_bh, s,
+                                          &DEVICE(s)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(s->bh_nodev);
 }
@@ -1739,7 +1740,8 @@ static int usb_host_post_load(void *opaque, int version_id)
     USBHostDevice *dev = opaque;
 
     if (!dev->bh_postld) {
-        dev->bh_postld = qemu_bh_new(usb_host_post_load_bh, dev);
+        dev->bh_postld = qemu_bh_new_guarded(usb_host_post_load_bh, dev,
+                                             &DEVICE(dev)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(dev->bh_postld);
     dev->bh_postld_pending = true;
diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index fd7df599bc..39fbaaab16 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -1441,8 +1441,10 @@ static void usbredir_realize(USBDevice *udev, Error **errp)
         }
     }
 
-    dev->chardev_close_bh = qemu_bh_new(usbredir_chardev_close_bh, dev);
-    dev->device_reject_bh = qemu_bh_new(usbredir_device_reject_bh, dev);
+    dev->chardev_close_bh = qemu_bh_new_guarded(usbredir_chardev_close_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
+    dev->device_reject_bh = qemu_bh_new_guarded(usbredir_device_reject_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
     dev->attach_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL, usbredir_do_attach, dev);
 
     packet_id_queue_init(&dev->cancelled, dev, "cancelled");
diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index 0f7369e7ed..dec91294ad 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -1021,7 +1021,8 @@ static void usbback_alloc(struct XenLegacyDevice *xendev)
 
     QTAILQ_INIT(&usbif->req_free_q);
     QSIMPLEQ_INIT(&usbif->hotplug_q);
-    usbif->bh = qemu_bh_new(usbback_bh, usbif);
+    usbif->bh = qemu_bh_new_guarded(usbback_bh, usbif,
+                                    &DEVICE(xendev)->mem_reentrancy_guard);
 }
 
 static int usbback_free(struct XenLegacyDevice *xendev)
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index 746f07c4d2..309cebacc6 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -908,8 +908,9 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
         precopy_add_notifier(&s->free_page_hint_notify);
 
         object_ref(OBJECT(s->iothread));
-        s->free_page_bh = aio_bh_new(iothread_get_aio_context(s->iothread),
-                                     virtio_ballloon_get_free_page_hints, s);
+        s->free_page_bh = aio_bh_new_guarded(iothread_get_aio_context(s->iothread),
+                                             virtio_ballloon_get_free_page_hints, s,
+                                             &DEVICE(s)->mem_reentrancy_guard);
     }
 
     if (virtio_has_feature(s->host_features, VIRTIO_BALLOON_F_REPORTING)) {
diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
index 516425e26a..4c95f1096e 100644
--- a/hw/virtio/virtio-crypto.c
+++ b/hw/virtio/virtio-crypto.c
@@ -1050,7 +1050,8 @@ static void virtio_crypto_device_realize(DeviceState *dev, Error **errp)
         vcrypto->vqs[i].dataq =
                  virtio_add_queue(vdev, 1024, virtio_crypto_handle_dataq_bh);
         vcrypto->vqs[i].dataq_bh =
-                 qemu_bh_new(virtio_crypto_dataq_bh, &vcrypto->vqs[i]);
+                 qemu_bh_new_guarded(virtio_crypto_dataq_bh, &vcrypto->vqs[i],
+                                     &dev->mem_reentrancy_guard);
         vcrypto->vqs[i].vcrypto = vcrypto;
     }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 09:18:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 09:18:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480885.745491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIR3p-00006e-Nd; Thu, 19 Jan 2023 09:18:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480885.745491; Thu, 19 Jan 2023 09:18:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIR3p-00006X-Jy; Thu, 19 Jan 2023 09:18:17 +0000
Received: by outflank-mailman (input) for mailman id 480885;
 Thu, 19 Jan 2023 09:18:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIR3o-00006N-RG; Thu, 19 Jan 2023 09:18:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIR3o-000531-OP; Thu, 19 Jan 2023 09:18:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIR3o-0004jV-Ax; Thu, 19 Jan 2023 09:18:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIR3o-0002KV-AT; Thu, 19 Jan 2023 09:18:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=W/zgzhAApNKTgiQdlxq3DWxhuxaBzhn+Vz9XwJbrtkg=; b=u7Dqzw+DGEhdwfhPNTswgLNsoY
	Qr21HakJO+aAHgtAWZE7HnyCcOXFx8+DUjaTNIsHMFSXXIBQanOcNi5M0+1nj3LdPzJOhYIIWvcdy
	jhCg/h6gpK4xWYhytaIcT4Bi2D9be1Ht925iqbRSH8UbYOhnX/oEZiZDiI40WQZe1HqU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175962-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175962: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7c9236d6d61f30583d5d860097d88dbf0fe487bf
X-Osstest-Versions-That:
    qemuu=aa96ab7c9df59c615ca82b49c9062819e0a1c287
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 09:18:16 +0000

flight 175962 qemu-mainline real [real]
flight 175975 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175962/
http://logs.test-lab.xenproject.org/osstest/logs/175975/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 175975-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175735
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175735
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175743
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175743
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175743
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175743
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175743
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175743
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7c9236d6d61f30583d5d860097d88dbf0fe487bf
baseline version:
 qemuu                aa96ab7c9df59c615ca82b49c9062819e0a1c287

Last test of basis   175743  2023-01-12 13:41:12 Z    6 days
Failing since        175750  2023-01-13 06:38:52 Z    6 days   13 attempts
Testing same since   175952  2023-01-18 02:32:46 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Bernhard Beschow <shentey@gmail.com>
  Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Felipe Balbi <balbi@kernel.org>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Joe Richey <joerichey@google.com>
  Klaus Jensen <k.jensen@samsung.com>
  Laurent Vivier <laurent@vivier.eu>
  Marcel Holtmann <marcel@holtmann.org>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   aa96ab7c9d..7c9236d6d6  7c9236d6d61f30583d5d860097d88dbf0fe487bf -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 09:28:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 09:28:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480893.745501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRDI-0001dO-LG; Thu, 19 Jan 2023 09:28:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480893.745501; Thu, 19 Jan 2023 09:28:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRDI-0001dH-IV; Thu, 19 Jan 2023 09:28:04 +0000
Received: by outflank-mailman (input) for mailman id 480893;
 Thu, 19 Jan 2023 09:28:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txcP=5Q=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pIRDH-0001d9-FP
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 09:28:03 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on2066.outbound.protection.outlook.com [40.107.95.66])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 904c3e3e-97db-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 10:28:00 +0100 (CET)
Received: from DM6PR03CA0063.namprd03.prod.outlook.com (2603:10b6:5:100::40)
 by CH2PR12MB4119.namprd12.prod.outlook.com (2603:10b6:610:aa::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Thu, 19 Jan
 2023 09:27:57 +0000
Received: from DM6NAM11FT058.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:100:cafe::f8) by DM6PR03CA0063.outlook.office365.com
 (2603:10b6:5:100::40) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.25 via Frontend
 Transport; Thu, 19 Jan 2023 09:27:57 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT058.mail.protection.outlook.com (10.13.172.216) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Thu, 19 Jan 2023 09:27:56 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 19 Jan
 2023 03:27:56 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 19 Jan
 2023 03:27:55 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 19 Jan 2023 03:27:54 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 904c3e3e-97db-11ed-b8d1-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cRiEJW+Q33zIMSjk/w1ZwflVlBlqkiDI6uJKdOQlZ+M=;
 b=UcXBOGBGKtTCC64xuWhxepXS3TGBKkycb8CZ1BAjZRbIKvAmKCMSwbyTv2Cllz3jE19zS62fN/yulnKZX8Xrk+lru5e8liFjBedVTUFiOGuOhE65J+PQrvo+wTUXgU6d/5E1YHxn+W4sKpxwsUIm4V1rJeTgwgQIBoR1Qzm7JAI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <bb7ca965-c9f4-445f-dfe9-d96d0b3d8707@amd.com>
Date: Thu, 19 Jan 2023 10:27:53 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v5] xen/arm: Probe the load/entry point address of an uImage
 correctly
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>
References: <20230113122423.22902-1-ayan.kumar.halder@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230113122423.22902-1-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT058:EE_|CH2PR12MB4119:EE_
X-MS-Office365-Filtering-Correlation-Id: 32c36781-51bd-41bf-04e1-08daf9ff72fc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 09:27:56.6660
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 32c36781-51bd-41bf-04e1-08daf9ff72fc
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT058.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4119

Hello all,

On 13/01/2023 13:24, Ayan Kumar Halder wrote:
> Currently, kernel_uimage_probe() does not read the load/entry point address
> set in the uImage header. Thus, info->zimage.start is 0 (the default value). This
> causes kernel_zimage_place() to treat the binary (contained within the uImage)
> as a position independent executable, and so it loads it at an incorrect
> address.
> 
> The correct approach would be to read "uimage.load" and set
> info->zimage.start. This will ensure that the binary is loaded at the
> correct address. Also, read "uimage.ep" and set info->entry (i.e. the kernel
> entry address).
> 
> If the user provides a load address (i.e. "uimage.load") of 0x0, then the image
> is treated as a position independent executable. Xen can load such an image at
> any address it considers appropriate. A position independent executable
> cannot have a fixed entry point address.
> 
> This behavior is applicable for both arm32 and arm64 platforms.
> 
> Previously, on both arm32 and arm64 platforms, Xen ignored the load and entry
> point addresses set in the uImage header. With this commit, Xen will use them.
> This makes Xen's behaviour for uImage headers consistent with U-Boot.
> 
> Users who want to use Xen with statically partitioned domains can provide a
> non-zero load address and entry address for the dom0/domU kernel. The load
> and entry address provided must be within the memory region allocated by
> Xen.
> 
> A deviation from U-Boot behaviour is that we treat a load address of 0x0
> as denoting that the image supports position independent execution. This
> makes the behaviour consistent across uImage and zImage.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

When looking at this patch, I spotted one instance of incorrect Xen behavior related to supporting uImage kernels.
If we want to document that we support such kernels, we should take a look at it.

Xen supports gzip-compressed kernels and tries to decompress them before calling kernel_XXX_probe.
For zImage and Image, the respective headers are built into the kernel itself, and the whole image is then compressed.
This results in the gzip header appearing right at the top of the image, so the workflow works smoothly.

However, the uImage header is always added as the last step of image preparation, using the mkimage utility,
and always appears right at the top of the image. That is why the uImage header has a field, "ih_comp", used to indicate the compression type.
A gzip-compressed uImage will therefore have the uImage header before the gzip header. Because Xen tries to decompress the image
before probing its header, this will not work for uImage.
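To illustrate the point about header ordering: the 64-byte legacy uImage header (big-endian, as laid out in U-Boot's image.h) can be probed before any decompression is attempted, and its "ih_comp" field tells the loader whether the payload that follows is compressed. A minimal parsing sketch in Python (function and dictionary key names are my own, not Xen's):

```python
import struct

UIMAGE_MAGIC = 0x27051956          # ih_magic for the legacy uImage format
IH_COMP_NONE, IH_COMP_GZIP = 0, 1  # ih_comp values: uncompressed / gzip

def parse_uimage_header(data):
    """Parse the 64-byte legacy uImage header that mkimage prepends."""
    if len(data) < 64:
        raise ValueError("image too short for a uImage header")
    # 7 x u32, 4 x u8 (ih_os, ih_arch, ih_type, ih_comp), 32-byte name
    (magic, hcrc, time, size, load, ep, dcrc,
     os_, arch, img_type, comp, name) = struct.unpack(">7I4B32s", data[:64])
    if magic != UIMAGE_MAGIC:
        raise ValueError("not a uImage")
    return {"load": load, "ep": ep, "comp": comp,
            "name": name.rstrip(b"\0").decode(errors="replace")}
```

A loader following this flow would check "comp" first and only then decompress the payload after the header, rather than expecting a gzip header at the very top of the image.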

~Michal




From xen-devel-bounces@lists.xenproject.org Thu Jan 19 09:47:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 09:47:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480902.745517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRW2-00046m-Af; Thu, 19 Jan 2023 09:47:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480902.745517; Thu, 19 Jan 2023 09:47:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRW2-00046f-7k; Thu, 19 Jan 2023 09:47:26 +0000
Received: by outflank-mailman (input) for mailman id 480902;
 Thu, 19 Jan 2023 09:47:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DP+J=5Q=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIRW0-00046X-Sl
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 09:47:25 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2050.outbound.protection.outlook.com [40.107.7.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 45c21b3a-97de-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 10:47:23 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9375.eurprd04.prod.outlook.com (2603:10a6:102:2b3::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Thu, 19 Jan
 2023 09:47:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 09:47:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45c21b3a-97de-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nJS5eeu5pEswiZbuP1beeikY73iOwPP25USDKZU5++DxPrxEGtkKj2Nw+zYeQsFHziyHBJdwQhwmuZLOsyBUokwihr/Q6LiPdkUWUhPcL3ZXTmkMY0lmxQ/IVrXAgLa38v2Jr1roJ4sz82faysV0j59Gr+sAw2enUsDisF8LsT7d5UZvZ0ovsfdILKbX0s7YtcUZHO3AuE2EjJPUFf0oP9DoGMaBxFzJer9x4tScfhZo2UuYsqluV9ZeGa6UzbOKr25+Tztzw2sJ0R7F1V+nYRCnTuf3Ofml0fGqsfIUftmnshkz7y02Bc7yuF/Kh9ejlOn79cWGyQvL6gS2FiQQcw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=StbcwKNzIXnWXnkbS0ZasGDA2Bx/FiHBhcOFCg53lfY=;
 b=jAYUCFs6+OYY2M9jMfRo8ftCzZZO0N6rLSl3IUldMYTcW5K99JnAZAAti6pXCiirV8bfSL+PcI9x1z4rDiGDjND7jkmg7YPkpTohFpzHPMCzewJNjofp24cHM3RPL3xP5H1MHfwSASNh12jT7z1DnOTsMsDWYVE3MHs8htkryqOcwMHGQxW5NFIfMfASL64nog9aA7EtFlh83hoefuL8bACF06EuUpakJ9ugB/wAAurhir13GeGt9sHMNxOyDu6k+nZMfTFLT2oGQKR7ZkjBoAefF5O1Zg61COrzuY+2FWf5CntTUDaVvz11rxJGsDcergdMyfB8HHb8lM9B9M64KA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=StbcwKNzIXnWXnkbS0ZasGDA2Bx/FiHBhcOFCg53lfY=;
 b=a+iw+6kxUaCRbdveZzSqOznExR/4seFaQXfxRkSTVBkaz2E233jbp73esu2H7ctAxC6r63po50V4jQ21QwVokp2CSUIDRJW7/G3oiJ4SaFld6QL+I6JPFHJ1C1bD5MBzjTdMwfHwfBSHMJXda10deH/T03WZRh42FSOy1a6nKODkm3ntJJ1OQeeB68f97csbq1+GWi97QkbVGAbVrpoZgFRNAe8PDvvdhXV9y64xilrt9VK7TliClpE26Iw9CC1scQ6QkNHHBERSRGXONF7W4uwxK/YKYMk4fHWaFVH5NLEsf8sxOkt0w+WxM36oe+In5Ca5sZ24eqYIquQUGu3vsg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bcd0ed06-8f14-b085-b5a4-911d3b859ef9@suse.com>
Date: Thu, 19 Jan 2023 10:47:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 00/10] Rework PCI locking
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com>
 <9a98a83a-32e5-67fe-431d-7bc5f070674e@suse.com>
 <64359f65-4a99-f9ef-b35d-49b44670c7e6@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <64359f65-4a99-f9ef-b35d-49b44670c7e6@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0134.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9375:EE_
X-MS-Office365-Filtering-Correlation-Id: f3e9475b-3850-43f1-07da-08dafa022902
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aESjZySuPmzHnU/8IjMk+yijx/QvPdce9ev1NDUE9FljRzYG23xNzjbxOjSZMrniIajE6k3vO4qSrdxMURe7PWZrKAFu2Wj+wk1OXO366kybi4GwuhqW74fe7PFDUYXQ3V88h0RpS+E8TwhfN4A6L5WRzl2xbH7lndCQIi08l3pi3sFT9EH8Mw/pi1mIVN9OzoM+3pjsjMF1co8dL+yL+vz8HlUxyNLGf4KxoHPTbAr9tFhgIu+5zDJmyg85as9JrbuMFafFxInsKkEzOxOpHoI64zkrRFkSuTYhLhqnaf6fF9GWdkKfXglt1bO3J1Oo//6tn5LIdt4aJkdPBvi/d/6KlHPlgKbJGKXACOV4GexQiKRLETpeo+hApSqt2xAkyVqd67Hi6DcrbaekOWD38KBT+Ocg+8UQANDJQY7FEfX8BdGyHLWQ+Noz6c4S2MRXXaRuZ9jTHHH52zafL7X02aBl8XstNkm8Fq+B9pUkVKWibB1x66D1pvWhEIzpy9+98syJBLmQp3fI6cWh9uWuCnUiviZ4gqklP+39oBP4RNuo3zrxcuvq1StrPs1ivOWcPR1wBnDWVBYWaOeeN/TCf4a5trk3dcLNGJQPgTITf0tXcGSucfqTdjtagAygbDoxeH4i40sCilvGAxbBbgXyG5ITfa//hYtZC5tDJ2ZiDksa60NNpVZNbamgHXsjFawT8aP3iy/db+J0C1ugS5LG/R/c1x37cvbj+k/oV1qkV9g=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(396003)(39860400002)(346002)(366004)(136003)(376002)(451199015)(316002)(38100700002)(31696002)(86362001)(36756003)(8936002)(5660300002)(7416002)(66946007)(6486002)(66556008)(4326008)(83380400001)(66476007)(186003)(41300700001)(31686004)(26005)(2906002)(2616005)(478600001)(8676002)(6916009)(53546011)(6506007)(6512007)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Y2I3M211Tkt3L1JUbFo0bXJUQytIdmFmSmVFQzNMZWd5YlhJek8wNHdyeUpr?=
 =?utf-8?B?SzYybjF5L29yelVibURVNGdTY2czQ1V1U2VrNmFzTDJmeUJUYitiYVZjcTV3?=
 =?utf-8?B?Z29EU0ZISW5id08wckFWVWRaL2VMZWZjRzBxU25ySEZUNVBSekUyZUU2bGlC?=
 =?utf-8?B?ZFRJVms4RS8yZ1h0eUlkY2F0K2F3N3M0QWQ4K1g2dnZKS2VieXpsVkppblhX?=
 =?utf-8?B?VElaQTc5L29ZZWM5MEd0NHkyYWdhR3ozUXNKWDZqRGJTVlVYU1lTK2RsQ3VI?=
 =?utf-8?B?UzR2TjlpcTB0d3VwcDhVL2htT1VaU0tIeUVVWms5SXZLRU1uaW1GdUJQMFhN?=
 =?utf-8?B?MWNuUkhqcTUvMG0xV0hWNTVib0FPcjFmeDlkSU9tYkt4VXo4MkNNWk9mVG81?=
 =?utf-8?B?cFZJVmMwR3RiOTZPSThMM2pid0x1aHZZUDl1RFJCYlhkY3ZpWitGdVBjM0Nw?=
 =?utf-8?B?NTVucmozdFZKUmlwaGVtZHFXakd6RzRFRWxpNUFKekh6anJvQ0pNOEFCR1l0?=
 =?utf-8?B?UnlGYlg0dmgvZDFZdjY4RUhVeEppR1ljZHpTNU5FNENDVjFPMnFzNnl5OU50?=
 =?utf-8?B?Z3VVWjIvUlYrWGtDNnZYUTZuSGxsSkk4WGM2dCtYM2tJRHBaeGsxaHpVaDUx?=
 =?utf-8?B?UDhlMUpQaFZHOFg0RXFUK2ZxWkRDTWZFSmQ3YXZWMnI1OHpVd3pQTWpkY3JG?=
 =?utf-8?B?eCtjOTRvMGNoWFhGYzlrMEdLWVFJa0ZIZC9JNVFyaGlNYlU0eVQwRjBaOERT?=
 =?utf-8?B?WE11NVFtZFZ2eHhzQjZsSnRLcGlDRklrR0JHcU5uVklLVWhkdERCcWh1eG9V?=
 =?utf-8?B?TDVsN3dxaHZ1Y2VId0JUalM0ZUw1ajlXaU5QUnNzYS9FTlBPNFZMMU9UTmU1?=
 =?utf-8?B?b2ZST1N6SDhsMGY2N1dlSU5lNWREU3dCQTVIcVNscm5NRXFLRWoxSjk3dkxp?=
 =?utf-8?B?czVEZkgxQXBuRkFtaUZ0cmdMRGpNeStQV2ErOERwRCs1UFBFOUlWS2JYRk9M?=
 =?utf-8?B?Mm1tWnpUbGtSbjNXZGZpZ0VpYlIrd1JqQ3ZzTHhicndFNnBqaThnZUZoa0lz?=
 =?utf-8?B?U0FFWFZoT2hNSzRoSEorNzVINlRFK21sdkE3TGJUVzNNNUdUS0V0R2dWekNC?=
 =?utf-8?B?Mmh0RG9zWWJOM0JST0s3Skd6M1l4aGR0QzBUTjB3RXpvY0ZZdzJEYlBueGJ3?=
 =?utf-8?B?ZWsyN2Y2TDJZWjdRMGxXZXp4QmF1NmZIMjR6SUNMSEYvdUp2cFF2MXhoWFl3?=
 =?utf-8?B?emhKcDBaUWUyTk1aWGRMa0pDWE1zakZaYTRHckRHR2JOeE8vNjNLUktaNVRI?=
 =?utf-8?B?SWl3T0xjSXQ3b2JYQlh1c0NlbDFmalJMemRIK0p2eWRPcWY4UW9SMjVVVHhY?=
 =?utf-8?B?RGlzNmZMK2xqWDg4d1lrMFpoUTdBaUtzUkZyQmEwb0hrOGp5OGd4RkN6T2ZI?=
 =?utf-8?B?RUQzYkhmZXhrTUIxT3BzVEtaNndaTXdlMEs5MEFYTmZpWU5ORy9Ncnp4R25B?=
 =?utf-8?B?d2pOL0U1Z3dFUFQwZkVDYW8xMjRzVUl4aWs0NUFkSVBYblNhTjVQRS9na3Uy?=
 =?utf-8?B?eGQxbzl5NTBJZS9nODYydmhuRFlpeUdxQUFYVHRxM1BaMC9leHNxczdDU1g3?=
 =?utf-8?B?K2w4eXFiQXBMd3hFRjBmejUxRkVYdklGRC9sOWxaejZ1cko0Tkl6QWtXUjZQ?=
 =?utf-8?B?SnMzcWFSVTFkY1Qwai8vdW9Tc2RsUWZ4L2ZSZDIrdDM1ZUU0UmpJcURXYyty?=
 =?utf-8?B?T2ZBTmRyclBWS29UZklDUDRua0lwZGFEa0k0U0U2QktNQTNEelF6ak1uTlJj?=
 =?utf-8?B?UjRqZ0YyVWJEZmxBSWNmdFFEMVdudzRMZ3RDUXJFQUxDUk9NL1ZPM0ZkVW4z?=
 =?utf-8?B?c21JSWc0emtMajEwRzNOYnJMNEozbzVtaTFCL1lHY0tlbUY0MHlsOTgzWm16?=
 =?utf-8?B?SGVWb3ZTbnVGR29wcE9ReFR3RnVDTmhnRWJoQUJQQnZoUDY1S0cxekRZOXJG?=
 =?utf-8?B?LzloaUNRUWpaeXErZEZreW9iWUVkUmhDdWNaMllyYzE1c2Y3OHZqMVQ2WEpQ?=
 =?utf-8?B?TjZRY0ZTVWI3aGJiTUhKbS9sMTVmZ3hudWNKbG4wanR3a1ZmaUM0VHBwVDky?=
 =?utf-8?Q?vATleSFWBO9u/ojXOuXZGTK1n?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f3e9475b-3850-43f1-07da-08dafa022902
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 09:47:21.4192
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9b7HGzDFKPbXgw14rFpytN41FaMv2wsKiEjiAUohkn3Xdi1LGHoP6qNv4PLaXHnmaBJ9zcfI5Cjyj/YIw6fgGA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9375

On 18.01.2023 19:21, Julien Grall wrote:
> On 06/09/2022 11:32, Jan Beulich wrote:
>> On 31.08.2022 16:10, Volodymyr Babchuk wrote:
>>> Hello,
>>>
>>> This is yet another take at a PCI locking rework. This approach
>>> was suggested by Jan Beulich, who proposed to use a reference
>>> counter to control the lifetime of pci_dev objects.
>>>
>>> When I started adding reference counting it quickly became clear
>>> that this approach can provide more granular locking instead of
>>> the huge pcidevs_lock() which is used right now. I studied how this
>>> lock is used and what it protects, and found the following:
>>>
>>> 0. Comment in pci.h states the following:
>>>
>>>   153 /*
>>>   154  * The pcidevs_lock protect alldevs_list, and the assignment for the
>>>   155  * devices, it also sync the access to the msi capability that is not
>>>   156  * interrupt handling related (the mask bit register).
>>>   157  */
>>>
>>> But in reality it does much more. Here is what I found:
>>>
>>> 1. Lifetime of pci_dev struct
>>>
>>> 2. Access to pseg->alldevs_list
>>>
>>> 3. Access to domain->pdev_list
>>>
>>> 4. Access to iommu->ats_list
>>>
>>> 5. Access to MSI capability
>>>
>>> 6. Some obscure stuff in IOMMU drivers: there are places that
>>> are guarded by pcidevs_lock() but it seems that nothing
>>> PCI-related happens there.
>>
>> Right - the lock being held was (ab)used in IOMMU code in a number of
>> places. This likely needs to change in the course of this re-work;
>> patch titles don't suggest this is currently part of the series.
>>
>>> 7. Something that I probably overlooked
>>
>> And this is the main risk here. The huge scope of the original lock
>> means that many things are serialized now but won't be anymore once
>> the lock is gone.
>>
>> But yes - thanks for the work. To be honest I don't expect to be able
>> to look at this series in detail until after the Xen Summit. And even
>> then it may take a while ...
> 
> I was wondering if this is still in your list to review?

Yes, it certainly is. But as before no predictions when I might get to it.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 10:14:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 10:14:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480909.745526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwD-0007TL-II; Thu, 19 Jan 2023 10:14:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480909.745526; Thu, 19 Jan 2023 10:14:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwD-0007TE-FT; Thu, 19 Jan 2023 10:14:29 +0000
Received: by outflank-mailman (input) for mailman id 480909;
 Thu, 19 Jan 2023 10:13:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7itS=5Q=amazon.com=prvs=3763fc082=mstrasun@srs-se1.protection.inumbo.net>)
 id 1pIRvP-0007Ri-3P
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 10:13:39 +0000
Received: from smtp-fw-80006.amazon.com (smtp-fw-80006.amazon.com
 [99.78.197.217]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ee3750bf-97e1-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 11:13:36 +0100 (CET)
Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO
 email-inbound-relay-pdx-2c-m6i4x-8c5b1df3.us-west-2.amazon.com)
 ([10.25.36.210]) by smtp-border-fw-80006.pdx80.corp.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Jan 2023 10:13:30 +0000
Received: from EX13D40EUA003.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-pdx-2c-m6i4x-8c5b1df3.us-west-2.amazon.com (Postfix)
 with ESMTPS id 849C240E16
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 10:13:30 +0000 (UTC)
Received: from EX19D039EUA003.ant.amazon.com (10.252.50.203) by
 EX13D40EUA003.ant.amazon.com (10.43.165.253) with Microsoft SMTP Server (TLS)
 id 15.0.1497.45; Thu, 19 Jan 2023 10:13:29 +0000
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX19D039EUA003.ant.amazon.com (10.252.50.203) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA) id 15.2.1118.7;
 Thu, 19 Jan 2023 10:13:29 +0000
Received: from dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com
 (172.19.92.214) by mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP
 Server id 15.0.1497.45 via Frontend Transport; Thu, 19 Jan 2023 10:13:28
 +0000
Received: by dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com (Postfix,
 from userid 17415739)
 id 8947921426; Thu, 19 Jan 2023 10:13:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee3750bf-97e1-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1674123216; x=1705659216;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=K5VL5ZsFxWze3LkbqRpgB3nh8/LKnJaOy26zW4qwXA4=;
  b=t94a0XrZoTC749ME7ap9GIYx+x/Rskgf+30YYi2EosDJXi7OGEJG4+cQ
   LKBanhaqcwqROY3zGgxLhKQUGGKRxNxieU0aA2XzBA2G3XH4BTyQo2EWS
   C1fnFhI0xrzZMImuchB/kZvdkBobPd6SCYZREJtWYgh3PNUzeNG0GD1cr
   0=;
X-IronPort-AV: E=Sophos;i="5.97,228,1669075200"; 
   d="scan'208";a="172704163"
From: Mihails Strasuns <mstrasun@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: <mstrasun@amazon.com>
Subject: [PATCH v1 0/4] Collection of livepatch-build-tools tweaks
Date: Thu, 19 Jan 2023 10:12:59 +0000
Message-ID: <cover.1673623880.git.mstrasun@amazon.com>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Precedence: Bulk
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

A small series of unrelated patches currently used by the AWS fork of
livepatch-build-tools but not present upstream. The symbol alias patch is only
relevant when stacking multiple patches and thus may be of no use to Xen in
general, but it is still submitted for extra visibility.

Michael Kurth (1):
  common.h: Flush stdout before writing to stderr

Raphael Ning (2):
  livepatch-build: Allow a patch to introduce new subdirs
  livepatch-gcc: Ignore buildid.o

Stanislav Uschakow / Mihails Strasuns (1):
  create-diff-object: Add new symbols for relocation aliases

 common.h             |   1 +
 create-diff-object.c | 132 +++++++++++++++++++++++++++++++++++++++++++
 livepatch-build      |   1 +
 livepatch-gcc        |   1 +
 4 files changed, 135 insertions(+)

-- 
2.38.1




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Thu Jan 19 10:14:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 10:14:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480913.745537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwP-0007lT-RG; Thu, 19 Jan 2023 10:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480913.745537; Thu, 19 Jan 2023 10:14:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwP-0007lK-NZ; Thu, 19 Jan 2023 10:14:41 +0000
Received: by outflank-mailman (input) for mailman id 480913;
 Thu, 19 Jan 2023 10:14:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7itS=5Q=amazon.com=prvs=3763fc082=mstrasun@srs-se1.protection.inumbo.net>)
 id 1pIRwO-0007kn-DI
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 10:14:40 +0000
Received: from smtp-fw-2101.amazon.com (smtp-fw-2101.amazon.com [72.21.196.25])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 140a99f7-97e2-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 11:14:38 +0100 (CET)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-iad-1a-m6i4x-96feee09.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-2101.iad2.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Jan 2023 10:14:34 +0000
Received: from EX13D50EUA002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-iad-1a-m6i4x-96feee09.us-east-1.amazon.com (Postfix)
 with ESMTPS id 3D6C842F0E
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 10:14:34 +0000 (UTC)
Received: from EX19D003EUA004.ant.amazon.com (10.252.50.128) by
 EX13D50EUA002.ant.amazon.com (10.43.165.201) with Microsoft SMTP Server (TLS)
 id 15.0.1497.45; Thu, 19 Jan 2023 10:14:33 +0000
Received: from EX13MTAUEE002.ant.amazon.com (10.43.62.24) by
 EX19D003EUA004.ant.amazon.com (10.252.50.128) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id
 15.2.1118.7; Thu, 19 Jan 2023 10:14:33 +0000
Received: from dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com
 (172.19.92.214) by mail-relay.amazon.com (10.43.62.224) with Microsoft SMTP
 Server id 15.0.1497.45 via Frontend Transport; Thu, 19 Jan 2023 10:14:32
 +0000
Received: by dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com (Postfix,
 from userid 17415739)
 id 19BEB21426; Thu, 19 Jan 2023 10:14:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 140a99f7-97e2-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1674123278; x=1705659278;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=2xU/uBcWEWbfzk2TqLk64mpRRkZpjZZO7FkjawvYQC8=;
  b=XwJ0oRAz/IEfDDBcyv6oPOgWj5WE8hqfb1DuEJZdQM2/DDx+iUhyJvVH
   6G3VC0pnt37F1kgwwhmQ1MbihMrhGpI8aEDhooo8sbzOpGE3FuKDE7Fl7
   bWHJXDm53j4046GY3cLLilt9HmAdlkWatYBtFz4L1QHhshjX2UUSr0UAO
   k=;
X-IronPort-AV: E=Sophos;i="5.97,228,1669075200"; 
   d="scan'208";a="284332618"
From: Mihails Strasuns <mstrasun@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: <mstrasun@amazon.com>, Raphael Ning <raphning@amazon.com>, Bjoern Doebel
	<doebel@amazon.de>, Martin Pohlack <mpohlack@amazon.de>
Subject: [PATCH v1 2/4] livepatch-build: Allow a patch to introduce new subdirs
Date: Thu, 19 Jan 2023 10:13:04 +0000
Message-ID: <472bfbf92aba6d3409b2a101798f04a50c97f6e9.1673623880.git.mstrasun@amazon.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673623880.git.mstrasun@amazon.com>
References: <cover.1673623880.git.mstrasun@amazon.com>
MIME-Version: 1.0
Precedence: Bulk
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

From: Raphael Ning <raphning@amazon.com>

Fix a bug in create_patch() where cp, strip, etc. will fail if the new
object file introduced by the patch is located in a new subdirectory:

 DEBUG: cp: cannot create regular file `output/xen/common/lu/lu.o': No such file or directory
 DEBUG: strip: 'output/xen/common/lu/lu.o': No such file

In this example, xen/common/lu/ does not exist in the original
(unpatched) Xen source tree. It needs to be created in output/ as well.

Signed-off-by: Raphael Ning <raphning@amazon.com>
Reviewed-by: Bjoern Doebel <doebel@amazon.de>
Reviewed-by: Martin Pohlack <mpohlack@amazon.de>
---
 livepatch-build | 1 +
 1 file changed, 1 insertion(+)

diff --git a/livepatch-build b/livepatch-build
index f7d6471..444daa9 100755
--- a/livepatch-build
+++ b/livepatch-build
@@ -232,6 +232,7 @@ function create_patch()
 
     NEW_FILES=$(comm -23 <(cd patched/xen && find . -type f -name '*.o' | sort) <(cd original/xen && find . -type f -name '*.o' | sort))
     for i in $NEW_FILES; do
+        mkdir -p "output/$(dirname "$i")"
         cp "patched/$i" "output/$i"
         [[ $STRIP -eq 1 ]] && strip --strip-unneeded "output/$i"
         CHANGED=1
-- 
2.38.1









From xen-devel-bounces@lists.xenproject.org Thu Jan 19 10:14:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 10:14:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480914.745547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwX-00089y-2p; Thu, 19 Jan 2023 10:14:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480914.745547; Thu, 19 Jan 2023 10:14:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwW-00089r-VI; Thu, 19 Jan 2023 10:14:48 +0000
Received: by outflank-mailman (input) for mailman id 480914;
 Thu, 19 Jan 2023 10:14:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7itS=5Q=amazon.com=prvs=3763fc082=mstrasun@srs-se1.protection.inumbo.net>)
 id 1pIRwV-0007Sb-Ka
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 10:14:47 +0000
Received: from smtp-fw-33001.amazon.com (smtp-fw-33001.amazon.com
 [207.171.190.10]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 18968994-97e2-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 11:14:46 +0100 (CET)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-iad-1d-m6i4x-d7759ebe.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-33001.sea14.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Jan 2023 10:14:39 +0000
Received: from EX13D51EUB004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-iad-1d-m6i4x-d7759ebe.us-east-1.amazon.com (Postfix)
 with ESMTPS id 7BB8B43948
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 10:14:37 +0000 (UTC)
Received: from EX19D029EUB004.ant.amazon.com (10.252.61.53) by
 EX13D51EUB004.ant.amazon.com (10.43.166.217) with Microsoft SMTP Server (TLS)
 id 15.0.1497.45; Thu, 19 Jan 2023 10:14:37 +0000
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX19D029EUB004.ant.amazon.com (10.252.61.53) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id
 15.2.1118.7; Thu, 19 Jan 2023 10:14:36 +0000
Received: from dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com
 (172.19.92.214) by mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP
 Server id 15.0.1497.45 via Frontend Transport; Thu, 19 Jan 2023 10:14:35
 +0000
Received: by dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com (Postfix,
 from userid 17415739)
 id D06FE21426; Thu, 19 Jan 2023 10:14:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18968994-97e2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1674123287; x=1705659287;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=7xjJSEeZVrQKBcQS6rbLeHu3TkZB46tvdM9kg8Ho5tM=;
  b=aNgiJynxj0bg3dDNDMgoWCjqWlJ0A/jbpo9W64C5Nk6+ybwbUuUIzkpl
   G2Wa6lQiLtBGqJqRMuHwUzIwwRIaNdQdA8eD0d4HPhOTFT5saVtPfQWFe
   N6KRDO2F7+5FS+iUMPVMA7d4/QTUVSrwYt8SDd1BsZYnFrsbZwsdcdnKu
   8=;
X-IronPort-AV: E=Sophos;i="5.97,228,1669075200"; 
   d="scan'208";a="256252891"
From: Mihails Strasuns <mstrasun@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: <mstrasun@amazon.com>, Raphael Ning <raphning@amazon.com>, Bjoern Doebel
	<doebel@amazon.de>, Martin Pohlack <mpohlack@amazon.de>
Subject: [PATCH v1 3/4] livepatch-gcc: Ignore buildid.o
Date: Thu, 19 Jan 2023 10:13:05 +0000
Message-ID: <477a6e0f0b0088c54f3f048bf2b4593eb4df18af.1673623880.git.mstrasun@amazon.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673623880.git.mstrasun@amazon.com>
References: <cover.1673623880.git.mstrasun@amazon.com>
MIME-Version: 1.0
Precedence: Bulk
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

From: Raphael Ning <raphning@amazon.com>

Not all .o files generated by the Xen build need to be passed to
create-diff-object for analysis. The latest example is:

 Run create-diff-object on xen/arch/x86/efi/buildid.o
 Open base
 /usr/libexec/livepatch-build-tools/create-diff-object: ERROR: buildid.o: kpatch_create_section_list: 77: elf_getshdrnum

This file is special, as it does not contain any sections. It is
generated by objcopy from a magic string of bytes (see Xen commit
eee5909e9d1e "x86/EFI: use less crude a way of generating the build ID"),
which probably will never change. Therefore, livepatch-gcc should not
copy it to the output directory.

Signed-off-by: Raphael Ning <raphning@amazon.com>
Reviewed-by: Bjoern Doebel <doebel@amazon.de>
Reviewed-by: Martin Pohlack <mpohlack@amazon.de>

CR: https://code.amazon.com/reviews/CR-63147772
---
 livepatch-gcc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/livepatch-gcc b/livepatch-gcc
index 91333d5..e72cc8d 100755
--- a/livepatch-gcc
+++ b/livepatch-gcc
@@ -65,6 +65,7 @@ elif [[ "$TOOLCHAINCMD" =~ $OBJCOPY_RE ]] ; then
         version.o|\
         debug.o|\
         efi/check.o|\
+        buildid.o|\
         .*.o)
             ;;
         *.o)
-- 
2.38.1




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Thu Jan 19 10:15:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 10:15:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480915.745557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwY-0008R9-9b; Thu, 19 Jan 2023 10:14:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480915.745557; Thu, 19 Jan 2023 10:14:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwY-0008Qs-6q; Thu, 19 Jan 2023 10:14:50 +0000
Received: by outflank-mailman (input) for mailman id 480915;
 Thu, 19 Jan 2023 10:14:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7itS=5Q=amazon.com=prvs=3763fc082=mstrasun@srs-se1.protection.inumbo.net>)
 id 1pIRwW-0007Sb-VV
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 10:14:49 +0000
Received: from smtp-fw-33001.amazon.com (smtp-fw-33001.amazon.com
 [207.171.190.10]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19f1060f-97e2-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 11:14:47 +0100 (CET)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-pdx-2a-m6i4x-8a14c045.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-33001.sea14.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Jan 2023 10:14:44 +0000
Received: from EX13D48EUA002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-pdx-2a-m6i4x-8a14c045.us-west-2.amazon.com (Postfix)
 with ESMTPS id 3F2A383120
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 10:14:42 +0000 (UTC)
Received: from EX19D042EUA004.ant.amazon.com (10.252.50.16) by
 EX13D48EUA002.ant.amazon.com (10.43.165.27) with Microsoft SMTP Server (TLS)
 id 15.0.1497.45; Thu, 19 Jan 2023 10:14:40 +0000
Received: from EX13MTAUWB001.ant.amazon.com (10.43.161.207) by
 EX19D042EUA004.ant.amazon.com (10.252.50.16) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA) id 15.2.1118.7;
 Thu, 19 Jan 2023 10:14:40 +0000
Received: from dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com
 (172.19.92.214) by mail-relay.amazon.com (10.43.161.249) with Microsoft SMTP
 Server id 15.0.1497.45 via Frontend Transport; Thu, 19 Jan 2023 10:14:38
 +0000
Received: by dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com (Postfix,
 from userid 17415739)
 id CB9DE21426; Thu, 19 Jan 2023 10:14:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19f1060f-97e2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1674123288; x=1705659288;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=gXy2lSCqzAzPVhs9Pixrebyr+33aF5wrQ+8+7LICdlw=;
  b=RxuYz8n7hbLoIRnhaMyplwEpnpFgya/QnJazk6905YwuO7DvgY7yhpZG
   8/czkV+UicRU/2zHvrwZ6SC8CoO0BlJ2WwNIjSi1FGRhLubPUKnMdvDcb
   HjQQWqpT2CVHlQ+pw3dPIrnbuGnFeZKn/IOQ5uQbLmflomVU+td5/p099
   E=;
X-IronPort-AV: E=Sophos;i="5.97,228,1669075200"; 
   d="scan'208";a="256252906"
From: Mihails Strasuns <mstrasun@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: <mstrasun@amazon.com>, Stanislav Uschakow <suschako@amazon.com>, "Bjoern
 Doebel" <doebel@amazon.de>, Martin Pohlack <mpohlack@amazon.de>
Subject: [PATCH v1 4/4] create-diff-object: Add new symbols for relocation aliases
Date: Thu, 19 Jan 2023 10:13:06 +0000
Message-ID: <eb9dc975083baadf9490195cf984eb13b556e244.1673623880.git.mstrasun@amazon.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673623880.git.mstrasun@amazon.com>
References: <cover.1673623880.git.mstrasun@amazon.com>
MIME-Version: 1.0
Precedence: Bulk
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

From: Stanislav Uschakow <suschako@amazon.com>

This fixes an issue when stacking multiple hotpatches that modify the same
function while these patches also contain an internal call to that
function.

Two hotpatches touching the same subset of functions can result in a
situation where the second hotpatch is loaded but the code is never
executed. This happens when the symbol resolution of the first hotpatch
calculates the target address for the relocation entries based on the
hotpatch instead of the original function. The resulting address for the
relocation-target will become the patched function instead of the
original one (A' -> B' instead of A' -> B).  In this case the trampoline
in the original function is not used. Patching this function again (B'')
re-sets the trampoline to the newly patched method. Since callq points
to the first patched function it never reaches the second version.

    2nd patch |          B''
    1st patch |  A'  ->  B'
    original  |  A   ->  B

This fix solves the problem by resolving the symbols during patch
generation using xen-syms. It iterates over all relocation entries and
creates a duplicate of every referenced symbol, with the scope set to
SHN_ABS and the address set to the one resolved from xen-syms. Each new
symbol is prefixed with a unique string to generate a new name.

This forces all calls to be directed to the original function which
contains the trampoline.

Signed-off-by: Stanislav Uschakow <suschako@amazon.com>
Signed-off-by: Mihails Strasuns <mstrasun@amazon.com>
Reviewed-by: Bjoern Doebel <doebel@amazon.de>
Reviewed-by: Martin Pohlack <mpohlack@amazon.de>
---
 create-diff-object.c | 132 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 132 insertions(+)

diff --git a/create-diff-object.c b/create-diff-object.c
index a516670..808b288 100644
--- a/create-diff-object.c
+++ b/create-diff-object.c
@@ -1878,6 +1878,25 @@ static char *mangle_local_symbol(char *filename, char *symname)
 	return s;
 }
 
+/*
+ * For a given mangled symbol name, allocate a copy in which the filename and
+ * the unmangled symbol name are separated by a null terminator.
+ */
+static bool unmangle_local_symbol(const char *mangled, char **filename,
+								  char **symname)
+{
+	if (filename == NULL || symname == NULL)
+		ERROR("Malformed output arguments passed to 'unmangle_local_symbol'");
+	char *sep = strchr(mangled, '#');
+	if (!sep)
+		return false;
+	int i = sep - mangled;
+	*filename = strdup(mangled);
+	(*filename)[i] = '\0';
+	*symname = *filename + i + 1;
+	return true;
+}
+
 /*
  * Rename local symbols to the filename#symbol format used by Xen's "special"
  * symbol table.
@@ -2290,6 +2309,116 @@ static error_t parse_opt (int key, char *arg, struct argp_state *state)
 	return 0;
 }
 
+/*
+ * For a given elf output target add a new entry to the symbol table
+ * which duplicates the provided source symbol.
+ */
+static struct symbol *insert_symbol_copy(struct kpatch_elf *elf,
+										 struct symbol *src, int index,
+										 char *new_name)
+{
+	struct symbol *new_sym;
+	ALLOC_LINK(new_sym, &elf->symbols);
+	new_sym->index = index;
+	new_sym->twin = NULL;
+	new_sym->sec = NULL;
+	new_sym->bind = src->bind;
+	new_sym->type = src->type;
+	new_sym->status = NEW;
+	new_sym->strip = 0;
+	new_sym->name = new_name;
+	/* The copy here will keep referring to the original symbol
+	   name which is fine - it will be updated later during the final elf
+	   generation. */
+	new_sym->sym = src->sym;
+	return new_sym;
+}
+
+/*
+ * In certain cases when 2 hotpatches are loaded patching the same subset of
+ * functions calling each other, the topmost hotpatch can become unreachable.
+ * This is caused by the loader when resolving symbols. If a symbol is both
+ * to be relocated and loaded (foo calls bar and both are patched to foo' and
+ * bar'), the callq target in foo' is set to bar' instead of the original bar.
+ * Patching bar again (bar'') cannot be reached from foo' because foo' does not
+ * use the trampoline in bar.
+ *
+ * This post-processing function fixes the problem by updating all
+ * non-livepatch relocation entries (i.e. originally created by a compiler) to
+ * be based on a special SHN_ABS with a fixed value resolved from xen-syms.
+ * These new symbols are not used by livepatch loader itself and only exist as
+ * a named constant for the relevant function address in the host process.
+ *
+ * All new symbols are prefixed with a scalpel-specific prefix string. As they
+ * are not used outside of a specific livepatch linking/relocation process,
+ * the prefix can be the same across multiple livepatches.
+ */
+void add_absolute_symbol_aliases(struct kpatch_elf *elf,
+								 struct lookup_table *lookup)
+{
+	const char *prefix = "__livepatch_unique_";
+
+	int sym_count = 0;
+	struct symbol *sym;
+	list_for_each_entry (sym, &elf->symbols, list)
+		sym_count++;
+
+	struct section *sec;
+	list_for_each_entry (sec, &elf->sections, list) {
+		if (!is_rela_section(sec))
+			continue;
+		/* Ignore any relocation sections that are not related to
+		   text sections: */
+		if (strncmp(sec->name, ".rela.text.", 11) != 0)
+			continue;
+
+		struct rela *rel;
+		list_for_each_entry (rel, &sec->relas, list) {
+			if ((rel->sym->type != STT_FUNC) &&
+				(rel->sym->type != STT_OBJECT) &&
+				(rel->sym->type != STT_NOTYPE))
+				continue;
+			if (strlen(rel->sym->name) == 0)
+				ERROR("Unnamed symbol used in relocation");
+
+			struct lookup_result res = {0};
+			bool found = false;
+			/* Static/local symbols are mangled to include a filename prefix,
+			   thus need special treatment for xen-syms lookup: */
+			if (rel->sym->bind == STB_LOCAL) {
+				char *filename, *symname;
+				if (!unmangle_local_symbol(rel->sym->name, &filename, &symname))
+					continue;
+				found = !lookup_local_symbol(lookup, symname, filename, &res);
+			} else
+				found = !lookup_global_symbol(lookup, rel->sym->name, &res);
+
+			if (found) {
+				char *buf = malloc(strlen(prefix) + strlen(rel->sym->name) + 1);
+				if (!buf)
+					ERROR("Failed to allocate buffer\n");
+				strcpy(buf, prefix);
+				strcpy(buf + strlen(prefix), rel->sym->name);
+
+				struct symbol *new_sym =
+					insert_symbol_copy(elf, rel->sym, sym_count++, buf);
+				/* Override key fields after copying. The GLOBAL/LOCAL binding
+				   most likely doesn't matter here, but force GLOBAL in case
+				   the linker tries to do something smart: */
+				new_sym->bind = STB_GLOBAL;
+				new_sym->sym.st_value = res.value;
+				new_sym->sym.st_shndx = SHN_ABS;
+				new_sym->sym.st_info = GELF_ST_INFO(new_sym->bind, new_sym->type);
+
+				/* TODO: translate X86_64_PC32 rel type to X86_64_64 and
+				   validate that the relocation is still correctly computed. */
+				log_debug("Added '%s' alias for '%s'\n", buf, rel->sym->name);
+				rel->sym = new_sym;
+			}
+		}
+	}
+}
+
 static struct argp argp = { options, parse_opt, args_doc, 0 };
 
 int main(int argc, char *argv[])
@@ -2434,6 +2563,9 @@ int main(int argc, char *argv[])
 	log_debug("Rename local symbols\n");
 	livepatch_rename_local_symbols(kelf_out, hint);
 
+	log_debug("Add absolute symbol aliases for internal calls\n");
+	add_absolute_symbol_aliases(kelf_out, lookup);
+
 	/*
 	 *  At this point, the set of output sections and symbols is
 	 *  finalized.  Reorder the symbols into linker-compliant
-- 
2.38.1









From xen-devel-bounces@lists.xenproject.org Thu Jan 19 10:15:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 10:15:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480911.745567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwf-0000Tp-N0; Thu, 19 Jan 2023 10:14:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480911.745567; Thu, 19 Jan 2023 10:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIRwf-0000TX-Jn; Thu, 19 Jan 2023 10:14:57 +0000
Received: by outflank-mailman (input) for mailman id 480911;
 Thu, 19 Jan 2023 10:14:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7itS=5Q=amazon.com=prvs=3763fc082=mstrasun@srs-se1.protection.inumbo.net>)
 id 1pIRw6-0007Sb-Sz
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 10:14:22 +0000
Received: from smtp-fw-80007.amazon.com (smtp-fw-80007.amazon.com
 [99.78.197.218]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0985ffd9-97e2-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 11:14:21 +0100 (CET)
Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO
 email-inbound-relay-iad-1e-m6i4x-7dc0ecf1.us-east-1.amazon.com)
 ([10.25.36.214]) by smtp-border-fw-80007.pdx80.corp.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Jan 2023 10:14:17 +0000
Received: from EX13D45EUA003.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-iad-1e-m6i4x-7dc0ecf1.us-east-1.amazon.com (Postfix)
 with ESMTPS id AFD1B83352
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 10:14:15 +0000 (UTC)
Received: from EX19D039EUA004.ant.amazon.com (10.252.50.95) by
 EX13D45EUA003.ant.amazon.com (10.43.165.71) with Microsoft SMTP Server (TLS)
 id 15.0.1497.45; Thu, 19 Jan 2023 10:14:13 +0000
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX19D039EUA004.ant.amazon.com (10.252.50.95) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA) id 15.2.1118.7;
 Thu, 19 Jan 2023 10:14:13 +0000
Received: from dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com
 (172.19.92.214) by mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP
 Server id 15.0.1497.45 via Frontend Transport; Thu, 19 Jan 2023 10:14:12
 +0000
Received: by dev-dsk-mstrasun-1c-15417e94.eu-west-1.amazon.com (Postfix,
 from userid 17415739)
 id 8126F21426; Thu, 19 Jan 2023 10:14:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0985ffd9-97e2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1674123262; x=1705659262;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=4hf1x7Xe4bXRLb6h+Ksi8ZE8+Xk4HBnSsvCu7j+CUQ4=;
  b=nUKTLNs6VsJLVg0yLiH7j7Q9MHx6dzSiqZmmGoJxY8FxPds9AhAFBV4l
   qRSCRlKVGoO8rDABdOJf74K5UZ+prgZBd1rZ3uymNR3ZsRG+MWIwhDVTU
   TC7NmyI6qGtoLWWE+HVn6VS8L+I/3l+bLLOCr7QnWrJkPg9B+l1wjY+Xf
   s=;
X-IronPort-AV: E=Sophos;i="5.97,228,1669075200"; 
   d="scan'208";a="172738134"
From: Mihails Strasuns <mstrasun@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: <mstrasun@amazon.com>, Michael Kurth <mku@amazon.com>
Subject: [PATCH v1 1/4] common.h: Flush stdout before writing to stderr
Date: Thu, 19 Jan 2023 10:13:02 +0000
Message-ID: <2e89973f61bbd6e6ebb423ec6fe7a025d5404235.1673623880.git.mstrasun@amazon.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <cover.1673623880.git.mstrasun@amazon.com>
References: <cover.1673623880.git.mstrasun@amazon.com>
MIME-Version: 1.0
Precedence: Bulk
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Flush existing debug messages before writing an error to stderr. stderr
is usually unbuffered while stdout is usually buffered, which results in
odd-looking output when an error occurs and both stderr and stdout are
printed to the same console or file: the error message appears in the
middle of previously emitted debug messages.

Signed-off-by: Michael Kurth <mku@amazon.com>
---
 common.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/common.h b/common.h
index 02c9b7b..9a9da79 100644
--- a/common.h
+++ b/common.h
@@ -10,6 +10,7 @@ extern char *childobj;
 
 #define DIFF_FATAL(format, ...) \
 ({ \
+	fflush(stdout); \
 	fprintf(stderr, "ERROR: %s: " format "\n", childobj, ##__VA_ARGS__); \
 	error(2, 0, "unreconcilable difference"); \
 })
-- 
2.38.1









From xen-devel-bounces@lists.xenproject.org Thu Jan 19 10:19:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 10:19:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480934.745577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIS0X-0001uz-7j; Thu, 19 Jan 2023 10:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480934.745577; Thu, 19 Jan 2023 10:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIS0X-0001us-4U; Thu, 19 Jan 2023 10:18:57 +0000
Received: by outflank-mailman (input) for mailman id 480934;
 Thu, 19 Jan 2023 10:18:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiH+=5Q=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pIS0W-0001uk-6Y
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 10:18:56 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on2041.outbound.protection.outlook.com [40.107.236.41])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ac2e3508-97e2-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 11:18:53 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by CY5PR12MB6035.namprd12.prod.outlook.com (2603:10b6:930:2d::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Thu, 19 Jan
 2023 10:18:50 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%4]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 10:18:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac2e3508-97e2-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oCmGmtsuJty04jbFU2513mPnEp25O0RGsaSRUsVcQb6KLFG/ciSAtg9dw+ZVreuhWZhcwKwMMoEqGoc0KUnBgwZKlQHWejoA+K+cIrfj2Lt55gHEdYOv3Zxw9AhtQnkz+rZwacdISDCtACPd5X+vwQJfuxtLh+KN9aKEX937iE6BnIbQHo/wDfPRRnZFYDeBsue2r58VA842xnix8NzbUexBiTHDmSrWM57whA/6lQbQRxtiP8YPwLvMgzfyEpEVqJE52uDDWIF7Oj7VXXcT8idqDtyjAuXhii6LcFqE/dLWCFfDi7bSNeomFo7Tsoi+xCki3FmHR5VhlcYLE9Sf0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bmrHRIayYgdIj+FdqSBY7nSZGN9k60MkhCWIvHM3E9M=;
 b=bWw3ylmIvvzlIoWL0tljuhD6JhPV1aQ02KPdK9nPKChgASwgkb3v4NEkt5Q2CBNQDYD7goNmi0TNRRQmKdzDVTBunej+bEKSjgrAQp+tFqO/P6PkuNrqYb/PWXz3HOkcERkqjOdqj/9FCI1u/IUzafJWPYt4U2JFhRtL7s6sjCwWOBC8+/2mMrfWRjxolYW5nurqFvgmLHcOT+uMig9XSv6MHNtPyZtg45u6Uh+raoNG8EmfEbXHf/zOIkcjjDd9XtWOHurq56sufZU7lcHAHO86CyHpptX2i3MoPY4tC30zmRrgD9ZXJgNyhyh3Ua1NYF7W3zHMbhjTueNOsQTrGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bmrHRIayYgdIj+FdqSBY7nSZGN9k60MkhCWIvHM3E9M=;
 b=gHZF3bqi2H2AaBWJALsbWkTtpbZnsFnBfRVATItiUaEU28TBdSARTyJZ+wr19cRQrMBs1MMQtwzGSCU6aiYyZ2RmyaqeXGC1+63PdLXf+HmahWflpk0FsbiDhesfwqBvZX7SWrXv+39a8Rc4UHGMQqEP4++G8W5eh3TbRnh5p+8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <54355320-2c3c-665f-32e2-90329586d98a@amd.com>
Date: Thu, 19 Jan 2023 10:18:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
To: xen-devel@lists.xenproject.org
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
Cc: Wei Chen <wei.chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>
In-Reply-To: <20230113052914.3845596-12-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO3P123CA0021.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:388::9) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|CY5PR12MB6035:EE_
X-MS-Office365-Filtering-Correlation-Id: 69a4883a-b39f-4516-d37f-08dafa068eba
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ERDMSUjgPW1oHITiZz4ws+recrhWKK0H0EDm/PX98byYOImpKyiSFPwpWRMjMb5ON3tg8H4Q5Uv5V45CWoLdKqbbpXTapGn04+pkzr3QVVCED5jf3kgbiTFFOrEkkUrHmGOoepuW33UwJqH6/rvYtiXeTfGfiyGxMIqgaE6aBZ1Cw0NV0jAwJTrZ90FpTVASJ5HWwr8c/thLlcy+73wykB1GQHAFv/jGZGZK6K7aVPfAiVVNIGsvAvgU/XS5OCErOII5tVuuqFGR8k9Wfvh34l8/CoB+P7eYbSFUzO3/qtx4rIV9rK7Yw8oi0NPMS+YIktSGA77y+yOJR3R8YlrNNwOmwWzW9TWXMHAwbd6VOzFfEPTdQ6E7iTBco6Oor015P5ag4rkjjSXO7Fs6tSKMAn9j+GFC2jWPAtCQuJ/hYsQUUiX5h8J5UneFp+SThlHT5QJj5zPVfIpnkWYDRzVq23MvOd7qWfERWbs5AzmKuaOMpvFEfs9af8/8mBQCpidk6Lne9mQijRzjD7/gX023ZndXeel2dzpg6wOkf5a/i+rA9egZaNE6fmkZMyncwYSy8rl+/omYFrcd63LdAh1euh9+WWel+pwunTXQkJ/SDCgQKVm7WTXy5CsJzJeGW/8ZWRnn8rbS9EbukHiz97azGGCwajRQuRRRoICw9OWdT0SM8OHWsIIMZ1KQDorEmYZJg5BUVSNydmmvx43ridfp9AA3s6Sg8FURTEMugrB5dY/zgxCRl1IhtaHUdMcuX5paojiA20jkLtCTQ96FCwMyegQpsS/xjacMRzV1TWL3wwEy/9+IkuGwYZQHactT1j/M
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(366004)(376002)(396003)(346002)(39860400002)(136003)(451199015)(31686004)(36756003)(8936002)(8676002)(66556008)(66946007)(478600001)(4326008)(31696002)(2906002)(30864003)(6916009)(5660300002)(83380400001)(38100700002)(66476007)(6666004)(316002)(966005)(6486002)(54906003)(41300700001)(2616005)(53546011)(6506007)(6512007)(26005)(186003)(2004002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?czgvNXpaZmgwbE5sQXE3dEJENE0rQjVMazZGVitSazJhL1NrZzd6bjJDT3Np?=
 =?utf-8?B?ZFlNeW1HaFBnTnlBaWNDT3ZEVzBaY0JvVWpkMUVvZHZFQzlTUk9DV3BwM0NJ?=
 =?utf-8?B?ejJiS2xxV1F4MlhzcVBld1JURFdVMU8vTGRPeFkrTTNKUEtTYTk1ckhlUmRY?=
 =?utf-8?B?b25JYWc3NzAxQkFLbUJCRTVwTlpSL2dHaHNFT0hySzhTb3d6R2lCSUNzOFJ5?=
 =?utf-8?B?UkRTVkJxR0ZLcUZKNktmZW5XSVZxQmJJMmJrcElqWnZJZXRBRE1wUFhKVFFs?=
 =?utf-8?B?RXdBV0xkVTBsaW1TUTdDcUI2b0c3VHV2OGFiQkFwRDMrQUNvSklLSWZPbXhM?=
 =?utf-8?B?emVmd3hsMytxNmttWTRmSzNmVU02TnZXWHNhWlJrYTduTDg4REV0MnBNcElH?=
 =?utf-8?B?TkJXNGVBbjRIK1NkTUdxckNjbEJ1VFVURzFrSjhPZVUrUnlXNzdhcXBVSmxi?=
 =?utf-8?B?OWNLVWhGdi9qa1M0T29wMktVN01VeElGMk10ZUpKV204UDc0MjZLakNnSTJn?=
 =?utf-8?B?Zlg2WVRiN005UFMyNFFvbmdBVTVFZk5DRktKQVBCb2xxV0NKVDRrTmdVaW5L?=
 =?utf-8?B?bVFidkIyNHBmWUVyQUJqRmpxNTYyaVJZdjJJNy9aTktqdnVSeWZIcC9FczRm?=
 =?utf-8?B?RDNubjNxa0JkVWYxMDFKNDRVMmFadGpwQW5KUG1aSEVIREcvaFpKVjZVemhX?=
 =?utf-8?B?MzdpditJaWI0R05PUzdyQWxtZGx2akl6T2tJYjV1bFRwUlBjMzA1Qk14TmJa?=
 =?utf-8?B?cDdXRWl4d2VienAyekRqdlBJVkhBYnB3ZHhmSjRFbFJ4V2pHMVMvck5jQkpY?=
 =?utf-8?B?WjJQWmNSVkpxRjFYZVZFWHFlYkVCdDQvRWxDT3R6R2UzejQzanZaL3h5d2xS?=
 =?utf-8?B?Q0ZHT1hlNVlzTngwbmd4dnVDbjJMVEN6ZUVra25rWk9LYy9ZdXFhTHNZUGti?=
 =?utf-8?B?ajgrRmx4SlI0WlptVnE3TU1FUjRFM3hoaDFYa0xmVC8wVUVJaUZ5TWRzWEZt?=
 =?utf-8?B?QzFIMEJCTVVtRFNjNjl1dDNPYTZGMlh2MGYwZ2UwWFR1YUZ2TDRvVThWSThP?=
 =?utf-8?B?Y3BWclRMNEMralAvUDdaZ2w4dUlQelZRMGQrUDVHTmY3WER3cHVVbm4rTzhy?=
 =?utf-8?B?emtKWjVqYjFGMk1mVjVQZjJsSlcxcHRDd1J3YkZXNTRDY1RUZU16ZlBSUU5S?=
 =?utf-8?B?WHZabzErVGt2dmtZUTBOellnNWVyeTBaeXVxelQxcExRMnZHSnN0Q1llM24v?=
 =?utf-8?B?SVhyMmNOQ0loNXJMY0cwWmRrckVtV2lvM0VucmJPVUhQM3VJK1pjQnBjbW4r?=
 =?utf-8?B?b1dIL3VDbGV6VGdFNDRaZFFUeFFaa3puODRxQlV0VGtGSFFDZTBKTC8zMlFv?=
 =?utf-8?B?ODIyZS9CZXFBSEowaVU5Q0hUc3VHc2Q3MkZ5bW9QRUJ4ODBjTFN3ZmZUSWox?=
 =?utf-8?B?RzFyUyttaWNscHYyTkZ1a2htMWZhdGhxY2pDQk9CZmJMemg3aVZvYTZJcGZj?=
 =?utf-8?B?eFg1eHZDMG9pelRQSWc2V2UxVUFqaWRuR1k2TnY1V2Fzazlqd2xDdkZHR3FB?=
 =?utf-8?B?eU40dGczejA4YldHNEdEdk4vSW5QQjFWL2FNekpma3BydWZ3TE4yYlJ4K1Yr?=
 =?utf-8?B?enVOdGN1ZEFZRFZPR1I5T2QrMXhPSEJBbnNLZWpDTXg1S0lrVXgwdW4yd25Y?=
 =?utf-8?B?dkZCa21YbzkyNll5c3lIV2wwaWw4endvbUVVcG1SejJHTWIrMWVxMHJXNVFH?=
 =?utf-8?B?N1R0WTlHWWgyZVFPQzF1cW5CaXdjVFI5NEFKVHU4b1I4S0xCN09DTWtNV1dQ?=
 =?utf-8?B?WnRIUEhzanU1STZmbHZRWGNOQjVjSUw4R0t3aWpKRjJwQjBseHlrRktNb0t5?=
 =?utf-8?B?NzgrMXdwSFlzd2xGb3lrN1NQdjI3N0JUVGVQb0tVVmhJZHZRZDNEM25lRklu?=
 =?utf-8?B?YWtsamdHM2hhRTJ3NENZdVRYZkVKSWtJQm9qMmxZL1ZQMTIwbml5cGFidlJM?=
 =?utf-8?B?VjVIdThpSC9nd1RxRjRxM251YUFCdUI1Q0o5WE1ERU52WDBGemdsQWdRcEF1?=
 =?utf-8?B?c2Z4ZTlyVUloQVJIUzl3VFNpUTBtNFp1NE5ULzRhYWFleUZTeTlyRUN1K2ov?=
 =?utf-8?Q?tpDDF5cqIWKG1kdwMTLjoQcUy?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 69a4883a-b39f-4516-d37f-08dafa068eba
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 10:18:50.1296
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2xgXsUXkLLRNnElqW9X/OzcB/Q9nh/N04RhtRem5iYZaha1oskX2Ye137nnWbTf2K8MKg6ZIhz5cVszWgUtSsQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY5PR12MB6035


On 13/01/2023 05:28, Penny Zheng wrote:
>
> From: Penny Zheng <penny.zheng@arm.com>
>
> The start-of-day Xen MPU memory region layout shall be as follows:
>
> xen_mpumap[0] : Xen text
> xen_mpumap[1] : Xen read-only data
> xen_mpumap[2] : Xen read-only after init data
> xen_mpumap[3] : Xen read-write data
> xen_mpumap[4] : Xen BSS
> ......
> xen_mpumap[max_xen_mpumap - 2]: Xen init data
> xen_mpumap[max_xen_mpumap - 1]: Xen init text
>
> max_xen_mpumap refers to the number of regions supported by the EL2 MPU.
> The layout shall be compliant with what we describe in xen.lds.S, or the
> code needs adjustment.
>
> As MMU and MPU systems use different functions to create the boot
> MMU/MPU memory management data, instead of introducing extra #ifdefs
> into the main code flow, we introduce a neutral name,
> prepare_early_mappings, for both, which also replaces create_page_tables for MMU.
>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>   xen/arch/arm/arm64/Makefile              |   2 +
>   xen/arch/arm/arm64/head.S                |  17 +-
>   xen/arch/arm/arm64/head_mmu.S            |   4 +-
>   xen/arch/arm/arm64/head_mpu.S            | 323 +++++++++++++++++++++++
>   xen/arch/arm/include/asm/arm64/mpu.h     |  63 +++++
>   xen/arch/arm/include/asm/arm64/sysregs.h |  49 ++++
>   xen/arch/arm/mm_mpu.c                    |  48 ++++
>   xen/arch/arm/xen.lds.S                   |   4 +
>   8 files changed, 502 insertions(+), 8 deletions(-)
>   create mode 100644 xen/arch/arm/arm64/head_mpu.S
>   create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h
>   create mode 100644 xen/arch/arm/mm_mpu.c
>
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 22da2f54b5..438c9737ad 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -10,6 +10,8 @@ obj-y += entry.o
>   obj-y += head.o
>   ifneq ($(CONFIG_HAS_MPU),y)
>   obj-y += head_mmu.o
> +else
> +obj-y += head_mpu.o
>   endif
>   obj-y += insn.o
>   obj-$(CONFIG_LIVEPATCH) += livepatch.o
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 782bd1f94c..145e3d53dc 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -68,9 +68,9 @@
>    *  x24 -
>    *  x25 -
>    *  x26 - skip_zero_bss (boot cpu only)
> - *  x27 -
> - *  x28 -
> - *  x29 -
> + *  x27 - region selector (mpu only)
> + *  x28 - prbar (mpu only)
> + *  x29 - prlar (mpu only)
>    *  x30 - lr
>    */
>
> @@ -82,7 +82,7 @@
>    * ---------------------------
>    *
>    * The requirements are:
> - *   MMU = off, D-cache = off, I-cache = on or off,
> + *   MMU/MPU = off, D-cache = off, I-cache = on or off,
>    *   x0 = physical address to the FDT blob.
>    *
>    * This must be the very first address in the loaded image.
> @@ -252,7 +252,12 @@ real_start_efi:
>
>           bl    check_cpu_mode
>           bl    cpu_init
> -        bl    create_page_tables
> +
> +        /*
> +         * Create boot memory management data, pagetable for MMU systems
> +         * and memory regions for MPU systems.
> +         */
> +        bl    prepare_early_mappings
>           bl    enable_mmu
>
>           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
> @@ -310,7 +315,7 @@ GLOBAL(init_secondary)
>   #endif
>           bl    check_cpu_mode
>           bl    cpu_init
> -        bl    create_page_tables
> +        bl    prepare_early_mappings
>           bl    enable_mmu
>
>           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> index 6ff13c751c..2346f755df 100644
> --- a/xen/arch/arm/arm64/head_mmu.S
> +++ b/xen/arch/arm/arm64/head_mmu.S
> @@ -123,7 +123,7 @@
>    *
>    * Clobbers x0 - x4
>    */
> -ENTRY(create_page_tables)
> +ENTRY(prepare_early_mappings)
>           /* Prepare the page-tables for mapping Xen */
>           ldr   x0, =XEN_VIRT_START
>           create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
> @@ -208,7 +208,7 @@ virtphys_clash:
>           /* Identity map clashes with boot_third, which we cannot handle yet */
>           PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
>           b     fail
> -ENDPROC(create_page_tables)
> +ENDPROC(prepare_early_mappings)

NIT: Could this renaming be done in a separate patch of its own (before
this patch), so that this patch is only about the new functionality
introduced?

>
>   /*
>    * Turn on the Data Cache and the MMU. The function will return on the 1:1
> diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
> new file mode 100644
> index 0000000000..0b97ce4646
> --- /dev/null
> +++ b/xen/arch/arm/arm64/head_mpu.S
> @@ -0,0 +1,323 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Start-of-day code for an Armv8-R AArch64 MPU system.
> + */
> +
> +#include <asm/arm64/mpu.h>
> +#include <asm/early_printk.h>
> +#include <asm/page.h>
> +
> +/*
> + * One entry in the Xen MPU memory region mapping table (xen_mpumap) is a
> + * pr_t structure, which is 16 bytes in size, so the entry offset is of
> + * order 4.
> + */
NIT: It would be good to quote the section of the Arm ARM in which
these definitions can be found.
> +#define MPU_ENTRY_SHIFT         0x4
> +
> +#define REGION_SEL_MASK         0xf
> +
> +#define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
> +#define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
> +#define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
> +
> +#define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
> +
> +/*
> + * Macro to round up the section address to be PAGE_SIZE aligned.
> + * Each section (e.g. .text, .data, etc.) in xen.lds.S is page-aligned,
> + * which is usually guarded with ". = ALIGN(PAGE_SIZE)" at the head
> + * or the end.
> + */
> +.macro roundup_section, xb
> +        add   \xb, \xb, #(PAGE_SIZE-1)
> +        and   \xb, \xb, #PAGE_MASK
> +.endm
> +
> +/*
> + * Macro to create a new MPU memory region entry, which is a pr_t
> + * structure, in \prmap.
> + *
> + * Inputs:
> + * prmap:   mpu memory region map table symbol
> + * sel:     region selector
> + * prbar:   value to preserve for PRBAR_EL2
> + * prlar:   value to preserve for PRLAR_EL2
> + *
> + * Clobbers \tmp1, \tmp2
> + *
> + */
> +.macro create_mpu_entry prmap, sel, prbar, prlar, tmp1, tmp2
> +    mov   \tmp2, \sel
> +    lsl   \tmp2, \tmp2, #MPU_ENTRY_SHIFT
> +    adr_l \tmp1, \prmap
> +    /* Write the first 8 bytes(prbar_t) of pr_t */
> +    str   \prbar, [\tmp1, \tmp2]
> +
> +    add   \tmp2, \tmp2, #8
> +    /* Write the last 8 bytes(prlar_t) of pr_t */
> +    str   \prlar, [\tmp1, \tmp2]
> +.endm
> +
> +/*
> + * Macro to store the maximum number of regions supported by the EL2 MPU
> + * in max_xen_mpumap, which is identified by MPUIR_EL2.
> + *
> + * Outputs:
> + * nr_regions: preserve the maximum number of regions supported by the EL2 MPU
> + *
> + * Clobbers \tmp1
> + *
> + */
> +.macro read_max_el2_regions, nr_regions, tmp1
> +    load_paddr \tmp1, max_xen_mpumap
> +    mrs   \nr_regions, MPUIR_EL2
> +    isb
> +    str   \nr_regions, [\tmp1]
> +.endm
> +
> +/*
> + * Macro to prepare and set an MPU memory region
> + *
> + * Inputs:
> + * base:        base address symbol (should be page-aligned)
> + * limit:       limit address symbol
> + * sel:         region selector
> + * prbar:       store computed PRBAR_EL2 value
> + * prlar:       store computed PRLAR_EL2 value
> + * attr_prbar:  PRBAR_EL2-related memory attributes. If not specified it will be REGION_DATA_PRBAR
> + * attr_prlar:  PRLAR_EL2-related memory attributes. If not specified it will be REGION_NORMAL_PRLAR
> + *
> + * Clobber \tmp1
> + *
> + */
> +.macro prepare_xen_region, base, limit, sel, prbar, prlar, tmp1, attr_prbar=REGION_DATA_PRBAR, attr_prlar=REGION_NORMAL_PRLAR
> +    /* Prepare value for PRBAR_EL2 reg and preserve it in \prbar.*/
> +    load_paddr \prbar, \base
> +    and   \prbar, \prbar, #MPU_REGION_MASK
> +    mov   \tmp1, #\attr_prbar
> +    orr   \prbar, \prbar, \tmp1
> +
> +    /* Prepare value for PRLAR_EL2 reg and preserve it in \prlar.*/
> +    load_paddr \prlar, \limit
> +    /* Round up limit address to be PAGE_SIZE aligned */
> +    roundup_section \prlar
> +    /* Limit address should be inclusive */
> +    sub   \prlar, \prlar, #1
> +    and   \prlar, \prlar, #MPU_REGION_MASK
> +    mov   \tmp1, #\attr_prlar
> +    orr   \prlar, \prlar, \tmp1
> +
> +    mov   x27, \sel
> +    mov   x28, \prbar
> +    mov   x29, \prlar

Any reason for using x27, x28, x29 to pass function parameters?

https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst
states that x0..x7 should be used (Table 2, General-purpose registers and
AAPCS64 usage).

> +    /*
> +     * x27, x28, x29 are special registers designed as
> +     * inputs for function write_pr
> +     */
> +    bl    write_pr
> +.endm
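As a sanity check of the arithmetic above, the macro's behaviour can be modelled in plain Python (a hypothetical sketch assuming 4KB pages and the attribute values defined earlier in this file; not part of the patch):

```python
# Hypothetical model of prepare_xen_region/roundup_section: mask the base
# down to MPU granularity, round the limit up to a page boundary, make it
# inclusive, then OR in the attribute bits.
PAGE_SIZE = 4096                      # assumed 4KB pages
PAGE_MASK = ~(PAGE_SIZE - 1)
MPU_REGION_SHIFT = 6
MPU_REGION_MASK = ~((1 << MPU_REGION_SHIFT) - 1)

REGION_TEXT_PRBAR = 0x38              # SH=11 AP=10 XN=00
REGION_DATA_PRBAR = 0x32              # SH=11 AP=00 XN=10
REGION_NORMAL_PRLAR = 0x0F            # NS=0 ATTR=111 EN=1

def prepare_xen_region(base, limit, attr_prbar=REGION_DATA_PRBAR,
                       attr_prlar=REGION_NORMAL_PRLAR):
    prbar = (base & MPU_REGION_MASK) | attr_prbar
    limit = (limit + PAGE_SIZE - 1) & PAGE_MASK   # roundup_section
    prlar = ((limit - 1) & MPU_REGION_MASK) | attr_prlar
    return prbar, prlar

# Example: a text region covering [0x200000, 0x210000)
prbar, prlar = prepare_xen_region(0x200000, 0x210000,
                                  attr_prbar=REGION_TEXT_PRBAR)
```

The addresses above are made up purely for illustration; the real values come from the linker symbols.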
> +
> +.section .text.idmap, "ax", %progbits
> +
> +/*
> + * ENTRY to configure an EL2 MPU memory region.
> + * ARMv8-R AArch64 supports at most 255 MPU protection regions.
> + * See section G1.3.18 of the reference manual for ARMv8-R AArch64:
> + * PRBAR<n>_EL2 and PRLAR<n>_EL2 provide access to the EL2 MPU region
> + * determined by the value of 'n' and PRSELR_EL2.REGION as
> + * PRSELR_EL2.REGION<7:4>:n (n = 0, 1, 2, ..., 15).
> + * For example to access regions from 16 to 31 (0b10000 to 0b11111):
> + * - Set PRSELR_EL2 to 0b1xxxx
> + * - Region 16 configuration is accessible through PRBAR0_EL2 and PRLAR0_EL2
> + * - Region 17 configuration is accessible through PRBAR1_EL2 and PRLAR1_EL2
> + * - Region 18 configuration is accessible through PRBAR2_EL2 and PRLAR2_EL2
> + * - ...
> + * - Region 31 configuration is accessible through PRBAR15_EL2 and PRLAR15_EL2
> + *
> + * Inputs:
> + * x27: region selector
> + * x28: preserve value for PRBAR_EL2
> + * x29: preserve value for PRLAR_EL2
> + *
> + */
> +ENTRY(write_pr)
> +    msr   PRSELR_EL2, x27
> +    dsb   sy
> +    and   x27, x27, #REGION_SEL_MASK
> +    cmp   x27, #0
> +    bne   1f
> +    msr   PRBAR0_EL2, x28
> +    msr   PRLAR0_EL2, x29
> +    b     out
> +1:
> +    cmp   x27, #1
> +    bne   2f
> +    msr   PRBAR1_EL2, x28
> +    msr   PRLAR1_EL2, x29
> +    b     out
> +2:
> +    cmp   x27, #2
> +    bne   3f
> +    msr   PRBAR2_EL2, x28
> +    msr   PRLAR2_EL2, x29
> +    b     out
> +3:
> +    cmp   x27, #3
> +    bne   4f
> +    msr   PRBAR3_EL2, x28
> +    msr   PRLAR3_EL2, x29
> +    b     out
> +4:
> +    cmp   x27, #4
> +    bne   5f
> +    msr   PRBAR4_EL2, x28
> +    msr   PRLAR4_EL2, x29
> +    b     out
> +5:
> +    cmp   x27, #5
> +    bne   6f
> +    msr   PRBAR5_EL2, x28
> +    msr   PRLAR5_EL2, x29
> +    b     out
> +6:
> +    cmp   x27, #6
> +    bne   7f
> +    msr   PRBAR6_EL2, x28
> +    msr   PRLAR6_EL2, x29
> +    b     out
> +7:
> +    cmp   x27, #7
> +    bne   8f
> +    msr   PRBAR7_EL2, x28
> +    msr   PRLAR7_EL2, x29
> +    b     out
> +8:
> +    cmp   x27, #8
> +    bne   9f
> +    msr   PRBAR8_EL2, x28
> +    msr   PRLAR8_EL2, x29
> +    b     out
> +9:
> +    cmp   x27, #9
> +    bne   10f
> +    msr   PRBAR9_EL2, x28
> +    msr   PRLAR9_EL2, x29
> +    b     out
> +10:
> +    cmp   x27, #10
> +    bne   11f
> +    msr   PRBAR10_EL2, x28
> +    msr   PRLAR10_EL2, x29
> +    b     out
> +11:
> +    cmp   x27, #11
> +    bne   12f
> +    msr   PRBAR11_EL2, x28
> +    msr   PRLAR11_EL2, x29
> +    b     out
> +12:
> +    cmp   x27, #12
> +    bne   13f
> +    msr   PRBAR12_EL2, x28
> +    msr   PRLAR12_EL2, x29
> +    b     out
> +13:
> +    cmp   x27, #13
> +    bne   14f
> +    msr   PRBAR13_EL2, x28
> +    msr   PRLAR13_EL2, x29
> +    b     out
> +14:
> +    cmp   x27, #14
> +    bne   15f
> +    msr   PRBAR14_EL2, x28
> +    msr   PRLAR14_EL2, x29
> +    b     out
> +15:
> +    msr   PRBAR15_EL2, x28
> +    msr   PRLAR15_EL2, x29
> +out:
> +    isb
> +    ret
> +ENDPROC(write_pr)
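The selector arithmetic described in the comment above can be sketched as follows (a hypothetical helper, for illustration only):

```python
def pr_register_for(region):
    """Return which PRBAR<n>_EL2/PRLAR<n>_EL2 aliases serve a given
    region once PRSELR_EL2 selects the group: n = region & 0xF, with
    PRSELR_EL2.REGION<7:4> picking the group of 16."""
    assert 0 <= region <= 254        # MPUIR_EL2.Region is 8 bits wide
    n = region & 0xF                 # matches REGION_SEL_MASK in write_pr
    return "PRBAR%d_EL2" % n, "PRLAR%d_EL2" % n

# Per the comment's example, region 17 is reached through
# PRBAR1_EL2/PRLAR1_EL2, and region 31 through PRBAR15_EL2/PRLAR15_EL2.
```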
> +
> +/*
> + * Static start-of-day Xen EL2 MPU memory region layout.
> + *
> + *     xen_mpumap[0] : Xen text
> + *     xen_mpumap[1] : Xen read-only data
> + *     xen_mpumap[2] : Xen read-only after init data
> + *     xen_mpumap[3] : Xen read-write data
> + *     xen_mpumap[4] : Xen BSS
> + *     ......
> + *     xen_mpumap[max_xen_mpumap - 2]: Xen init data
> + *     xen_mpumap[max_xen_mpumap - 1]: Xen init text
> + *
> + * Clobbers x0 - x6
> + *
> + * It shall be compliant with what is described in xen.lds.S, or the
> + * code below needs adjusting.
> + * It shall also follow the rule of putting fixed MPU memory regions at
> + * the front and the others at the rear, which here mainly refers to
> + * boot-only regions, like the Xen init text region.
> + */
> +ENTRY(prepare_early_mappings)
> +    /* Stash LR, as write_pr will be called later like a nested function */
> +    mov   x6, lr
> +
> +    /* x0: region sel */
> +    mov   x0, xzr
> +    /* Xen text section. */
> +    prepare_xen_region _stext, _etext, x0, x1, x2, x3, attr_prbar=REGION_TEXT_PRBAR
> +    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
> +
> +    add   x0, x0, #1
> +    /* Xen read-only data section. */
> +    prepare_xen_region _srodata, _erodata, x0, x1, x2, x3, attr_prbar=REGION_RO_PRBAR
> +    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
> +
> +    add   x0, x0, #1
> +    /* Xen read-only after init data section. */
> +    prepare_xen_region __ro_after_init_start, __ro_after_init_end, x0, x1, x2, x3
> +    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
> +
> +    add   x0, x0, #1
> +    /* Xen read-write data section. */
> +    prepare_xen_region __data_begin, __init_begin, x0, x1, x2, x3
> +    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
> +
> +    read_max_el2_regions x5, x3 /* x5: max_mpumap */
> +    sub   x5, x5, #1
> +    /* Xen init text section. */
> +    prepare_xen_region _sinittext, _einittext, x5, x1, x2, x3, attr_prbar=REGION_TEXT_PRBAR
> +    create_mpu_entry xen_mpumap, x5, x1, x2, x3, x4
> +
> +    sub   x5, x5, #1
> +    /* Xen init data section. */
> +    prepare_xen_region __init_data_begin, __init_end, x5, x1, x2, x3
> +    create_mpu_entry xen_mpumap, x5, x1, x2, x3, x4
> +
> +    add   x0, x0, #1
> +    /* Xen BSS section. */
> +    prepare_xen_region __bss_start, __bss_end, x0, x1, x2, x3
> +    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4
> +
> +    /* Update next_fixed_region_idx and next_transient_region_idx */
> +    load_paddr x3, next_fixed_region_idx
> +    add   x0, x0, #1
> +    str   x0, [x3]
> +    load_paddr x4, next_transient_region_idx
> +    sub   x5, x5, #1
> +    str   x5, [x4]
> +
> +    mov   lr, x6
> +    ret
> +ENDPROC(prepare_early_mappings)
> +
> +GLOBAL(_end_boot)
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
> new file mode 100644
> index 0000000000..c945dd53db
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/arm64/mpu.h
> @@ -0,0 +1,63 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * mpu.h: Arm Memory Protection Region definitions.
> + */
> +
> +#ifndef __ARM64_MPU_H__
> +#define __ARM64_MPU_H__
> +
> +#define MPU_REGION_SHIFT  6
> +#define MPU_REGION_ALIGN  (_AC(1, UL) << MPU_REGION_SHIFT)
> +#define MPU_REGION_MASK   (~(MPU_REGION_ALIGN - 1))
> +
> +/*
> + * MPUIR_EL2.Region identifies the number of regions supported by the EL2 MPU.
> + * It is an 8-bit field, so 255 MPU memory regions at most.
> + */
> +#define ARM_MAX_MPU_MEMORY_REGIONS 255
> +
> +#ifndef __ASSEMBLY__
> +
> +/* Protection Region Base Address Register */
> +typedef union {
> +    struct __packed {
> +        unsigned long xn:2;       /* Execute-Never */
> +        unsigned long ap:2;       /* Access Permission */
> +        unsigned long sh:2;       /* Shareability */
> +        unsigned long base:42;    /* Base Address */
> +        unsigned long pad:16;
> +    } reg;
> +    uint64_t bits;
> +} prbar_t;
> +
> +/* Protection Region Limit Address Register */
> +typedef union {
> +    struct __packed {
> +        unsigned long en:1;     /* Region enable */
> +        unsigned long ai:3;     /* Memory Attribute Index */
> +        unsigned long ns:1;     /* Not-Secure */
> +        unsigned long res:1;    /* Reserved 0 by hardware */
> +        unsigned long limit:42; /* Limit Address */
> +        unsigned long pad:16;
> +    } reg;
> +    uint64_t bits;
> +} prlar_t;
> +
> +/* MPU Protection Region */
> +typedef struct {
> +    prbar_t prbar;
> +    prlar_t prlar;
> +} pr_t;
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* __ARM64_MPU_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
> index 4638999514..aca9bca5b1 100644
> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
> @@ -458,6 +458,55 @@
>   #define ZCR_ELx_LEN_SIZE             9
>   #define ZCR_ELx_LEN_MASK             0x1ff
>
> +/* System registers for Armv8-R AArch64 */
> +#ifdef CONFIG_HAS_MPU
> +
> +/* EL2 MPU Protection Region Base Address Register encode */
> +#define PRBAR_EL2   S3_4_C6_C8_0
> +#define PRBAR0_EL2  S3_4_C6_C8_0
> +#define PRBAR1_EL2  S3_4_C6_C8_4
> +#define PRBAR2_EL2  S3_4_C6_C9_0
> +#define PRBAR3_EL2  S3_4_C6_C9_4
> +#define PRBAR4_EL2  S3_4_C6_C10_0
> +#define PRBAR5_EL2  S3_4_C6_C10_4
> +#define PRBAR6_EL2  S3_4_C6_C11_0
> +#define PRBAR7_EL2  S3_4_C6_C11_4
> +#define PRBAR8_EL2  S3_4_C6_C12_0
> +#define PRBAR9_EL2  S3_4_C6_C12_4
> +#define PRBAR10_EL2 S3_4_C6_C13_0
> +#define PRBAR11_EL2 S3_4_C6_C13_4
> +#define PRBAR12_EL2 S3_4_C6_C14_0
> +#define PRBAR13_EL2 S3_4_C6_C14_4
> +#define PRBAR14_EL2 S3_4_C6_C15_0
> +#define PRBAR15_EL2 S3_4_C6_C15_4
> +
> +/* EL2 MPU Protection Region Limit Address Register encode */
> +#define PRLAR_EL2   S3_4_C6_C8_1
> +#define PRLAR0_EL2  S3_4_C6_C8_1
> +#define PRLAR1_EL2  S3_4_C6_C8_5
> +#define PRLAR2_EL2  S3_4_C6_C9_1
> +#define PRLAR3_EL2  S3_4_C6_C9_5
> +#define PRLAR4_EL2  S3_4_C6_C10_1
> +#define PRLAR5_EL2  S3_4_C6_C10_5
> +#define PRLAR6_EL2  S3_4_C6_C11_1
> +#define PRLAR7_EL2  S3_4_C6_C11_5
> +#define PRLAR8_EL2  S3_4_C6_C12_1
> +#define PRLAR9_EL2  S3_4_C6_C12_5
> +#define PRLAR10_EL2 S3_4_C6_C13_1
> +#define PRLAR11_EL2 S3_4_C6_C13_5
> +#define PRLAR12_EL2 S3_4_C6_C14_1
> +#define PRLAR13_EL2 S3_4_C6_C14_5
> +#define PRLAR14_EL2 S3_4_C6_C15_1
> +#define PRLAR15_EL2 S3_4_C6_C15_5
> +
> +/* MPU Protection Region Selection Register encode */
> +#define PRSELR_EL2 S3_4_C6_C2_1
> +
> +/* MPU Type registers encode */
> +#define MPUIR_EL2 S3_4_C0_C0_4
> +
> +#endif
> +
>   /* Access to system registers */
>
>   #define WRITE_SYSREG64(v, name) do {                    \
> diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
> new file mode 100644
> index 0000000000..43e9a1be4d
> --- /dev/null
> +++ b/xen/arch/arm/mm_mpu.c
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * xen/arch/arm/mm_mpu.c
> + *
> + * MPU-based memory management code for Armv8-R AArch64.
> + *
> + * Copyright (C) 2022 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/init.h>
> +#include <xen/page-size.h>
> +#include <asm/arm64/mpu.h>
> +
> +/* Xen MPU memory region mapping table. */
> +pr_t __aligned(PAGE_SIZE) __section(".data.page_aligned")
> +     xen_mpumap[ARM_MAX_MPU_MEMORY_REGIONS];
> +
> +/* Index into MPU memory region map for fixed regions, ascending from zero. */
> +uint64_t __ro_after_init next_fixed_region_idx;
> +/*
> + * Index into MPU memory region map for transient regions, like boot-only
> + * region, which descends from max_xen_mpumap.
> + */
> +uint64_t __ro_after_init next_transient_region_idx;
> +
> +/* Maximum number of supported MPU memory regions by the EL2 MPU. */
> +uint64_t __ro_after_init max_xen_mpumap;
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index bc45ea2c65..79965a3c17 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -91,6 +91,8 @@ SECTIONS
>         __ro_after_init_end = .;
>     } : text
>
> +  . = ALIGN(PAGE_SIZE);
> +  __data_begin = .;
>     .data.read_mostly : {
>          /* Exception table */
>          __start___ex_table = .;
> @@ -157,7 +159,9 @@ SECTIONS
>          *(.altinstr_replacement)
>     } :text
>     . = ALIGN(PAGE_SIZE);
> +
>     .init.data : {
> +       __init_data_begin = .;            /* Init data */
>          *(.init.rodata)
>          *(.init.rodata.*)
>
> --
> 2.25.1
>
NIT: Would you consider splitting this patch, something like this:

1. Renaming of the MMU function

2. Define sysregs, prlar_t, prbar_t and other hardware-specific
macros.

3. Define write_pr

4. The rest of the changes (ie prepare_early_mappings(), xen.lds.S, etc)

- Ayan



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 10:51:00 2023
Message-ID: <6665ebac-6489-2d6a-0b9d-30342c1661d9@suse.com>
Date: Thu, 19 Jan 2023 11:50:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
 <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>
 <Y8g4pSOHvrkqmbTU@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Y8g4pSOHvrkqmbTU@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.01.2023 19:21, Anthony PERARD wrote:
> On Tue, Jan 17, 2023 at 05:21:32PM +0000, Andrew Cooper wrote:
>> On 16/01/2023 6:10 pm, Anthony PERARD wrote:
>>> +def get_typedefs(tokens):
>>> +    level = 1
>>> +    state = 0
>>> +    typedefs = []
>>
>> I'm pretty sure typedefs actually wants to be a dict rather than a list
>> (will have better "id in typedefs" performance lower down), but that
>> wants matching with code changes elsewhere, and probably wants doing
>> separately.
> 
> I'm not sure that's going to make a difference to have "id in ()" instead
> of "id in []". I just found out that `typedefs` is always empty...
> 
> I don't know what get_typedefs() is supposed to do, or at least if it
> works as expected, because it always returns "" or an empty list. (even
> the shell version)
> 
> So, it would actually be a bit faster to not call get_typedefs(), but I
> don't know if that's safe.

There's exactly one instance that this would take care of:

typedef XEN_GUEST_HANDLE(char) tmem_cli_va_t;

But tmem.h isn't being processed anymore, and hence right now the list
would always be empty. Are we going to be able to guarantee that going
forward?

Jan
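
The dict-vs-list point raised above comes down to membership-test cost. A
minimal sketch (the data and names here are hypothetical, not from the patch):

```python
# "id in typedefs" on a list scans element by element (O(n) per lookup),
# while a set or dict hashes the key (O(1) on average).
typedefs_list = [f"type_{i}" for i in range(10_000)]
typedefs_set = set(typedefs_list)  # same members, hashed for fast lookup

hit_list = "type_9999" in typedefs_list  # walks up to 10,000 entries
hit_set = "type_9999" in typedefs_set    # single hash probe
```

Whether the difference is measurable depends on how many typedefs ever get
collected; with the list always empty, as observed above, neither form costs
anything in practice.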


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 11:27:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 11:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480960.745607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIT4y-0001DK-Ns; Thu, 19 Jan 2023 11:27:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480960.745607; Thu, 19 Jan 2023 11:27:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIT4y-0001DD-Kn; Thu, 19 Jan 2023 11:27:36 +0000
Received: by outflank-mailman (input) for mailman id 480960;
 Thu, 19 Jan 2023 11:27:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YmS1=5Q=citrix.com=prvs=376161a5e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIT4x-0001D7-DS
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 11:27:35 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 431ec1ae-97ec-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 12:27:33 +0100 (CET)
Received: from mail-mw2nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Jan 2023 06:27:20 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB6084.namprd03.prod.outlook.com (2603:10b6:610:bb::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Thu, 19 Jan
 2023 11:27:18 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 11:27:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 431ec1ae-97ec-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Anthony Perard
	<anthony.perard@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, George
 Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Topic: [XEN PATCH v3 1/1] build: replace get-fields.sh by a python
 script
Thread-Index: AQHZKdXmSfLi4OljAEm5WNlNB2w+ZK6i3JyAgAGjEYCAARRgAIAACj+A
Date: Thu, 19 Jan 2023 11:27:17 +0000
Message-ID: <30ab5861-c134-42fd-0c3b-035dda2f8be2@citrix.com>
References: <20230116181048.30704-1-anthony.perard@citrix.com>
 <20230116181048.30704-2-anthony.perard@citrix.com>
 <1ab3bc93-326a-172d-4f0f-f6c2ddc84105@citrix.com>
 <Y8g4pSOHvrkqmbTU@perard.uk.xensource.com>
 <6665ebac-6489-2d6a-0b9d-30342c1661d9@suse.com>
In-Reply-To: <6665ebac-6489-2d6a-0b9d-30342c1661d9@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <9CC73485B12DEF47AF0991FF51C5FC5C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d0dbbf1e-33f6-4072-767a-08dafa101f3d
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jan 2023 11:27:17.6420
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: D7R7Yb5hdZ3aFUdYjKZMlGQJ7c9SXZLGVirTj+JyXlXAlqbHt763kK74DLX8J2oHkf28zfzc5ArguQKQQLwbk4cm+Vd4La/MPgOEKnXrMgk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6084

On 19/01/2023 10:50 am, Jan Beulich wrote:
> On 18.01.2023 19:21, Anthony PERARD wrote:
>> On Tue, Jan 17, 2023 at 05:21:32PM +0000, Andrew Cooper wrote:
>>> On 16/01/2023 6:10 pm, Anthony PERARD wrote:
>>>> +def get_typedefs(tokens):
>>>> +    level = 1
>>>> +    state = 0
>>>> +    typedefs = []
>>> I'm pretty sure typedefs actually wants to be a dict rather than a list
>>> (will have better "id in typedefs" performance lower down), but that
>>> wants matching with code changes elsewhere, and probably wants doing
>>> separately.
>> I'm not sure that going to make a difference to have "id in ()" instead
>> of "id in []". I just found out that `typedefs` is always empty...
>>
>> I don't know what get_typedefs() is supposed to do, or at least if it
>> works as expected, because it always returns "" or an empty list. (even
>> the shell version)
>>
>> So, it would actually be a bit faster to not call get_typedefs(), but I
>> don't know if that's safe.
> There's exactly one instance that this would take care of:
>
> typedef XEN_GUEST_HANDLE(char) tmem_cli_va_t;
>
> But tmem.h isn't being processed anymore, and hence right now the list
> would always be empty. Are we going to be able to guarantee that going
> forward?

IMO that's a code pattern we wouldn't want to repeat moving forward.

There's already too much magic in a guest handle, without hiding it
behind a typedef too.

~Andrew
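
For illustration, here is a guess at what a get_typedefs() helper of the shape
debated in this thread might be collecting: the alias introduced by a
`typedef XEN_GUEST_HANDLE(x) name;` token sequence. This is a hypothetical
sketch, not the implementation from the patch:

```python
def get_typedefs(tokens):
    """Collect alias names from 'typedef XEN_GUEST_HANDLE ( x ) name ;'
    token sequences. Hypothetical sketch of the helper's apparent intent."""
    typedefs = []
    for i, tok in enumerate(tokens):
        if tok != "typedef":
            continue
        # Expected shape after 'typedef': XEN_GUEST_HANDLE ( x ) name ;
        tail = tokens[i + 1:i + 7]
        if (len(tail) == 6 and tail[0] == "XEN_GUEST_HANDLE"
                and tail[1] == "(" and tail[3] == ")" and tail[5] == ";"):
            typedefs.append(tail[4])
    return typedefs
```

On the only instance Jan mentions, this would yield ["tmem_cli_va_t"]; on any
header without such a typedef it returns an empty list, matching the observed
behaviour.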


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 12:13:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 12:13:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480986.745624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pITmp-0006np-LB; Thu, 19 Jan 2023 12:12:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480986.745624; Thu, 19 Jan 2023 12:12:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pITmp-0006ni-IV; Thu, 19 Jan 2023 12:12:55 +0000
Received: by outflank-mailman (input) for mailman id 480986;
 Thu, 19 Jan 2023 12:12:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pITmo-0006nY-47; Thu, 19 Jan 2023 12:12:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pITmo-00011m-0g; Thu, 19 Jan 2023 12:12:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pITmn-00062e-KW; Thu, 19 Jan 2023 12:12:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pITmn-0003q1-K9; Thu, 19 Jan 2023 12:12:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175971-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175971: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=18df11da8c14e48b6e4a90fb0b5befb1c243070a
X-Osstest-Versions-That:
    ovmf=426efcc37492da4ebd86799c2d4f5dfeac806848
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 12:12:53 +0000

flight 175971 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175971/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 18df11da8c14e48b6e4a90fb0b5befb1c243070a
baseline version:
 ovmf                 426efcc37492da4ebd86799c2d4f5dfeac806848

Last test of basis   175964  2023-01-19 02:42:12 Z    0 days
Testing same since   175971  2023-01-19 05:11:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   426efcc374..18df11da8c  18df11da8c14e48b6e4a90fb0b5befb1c243070a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 12:45:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 12:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480994.745635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIQ-0001sf-8L; Thu, 19 Jan 2023 12:45:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480994.745635; Thu, 19 Jan 2023 12:45:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIQ-0001sY-5Y; Thu, 19 Jan 2023 12:45:34 +0000
Received: by outflank-mailman (input) for mailman id 480994;
 Thu, 19 Jan 2023 12:45:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIUIO-0001sI-ME
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 12:45:32 +0000
Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com
 [2a00:1450:4864:20::42a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2749bde2-97f7-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 13:45:29 +0100 (CET)
Received: by mail-wr1-x42a.google.com with SMTP id n7so1749527wrx.5
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 04:45:29 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o15-20020a5d684f000000b002bddac15b3dsm17909808wrw.33.2023.01.19.04.45.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 04:45:28 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2749bde2-97f7-11ed-b8d1-410ff93cb8f0
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v4 0/4] Basic early_printk and smoke test implementation
Date: Thu, 19 Jan 2023 14:45:13 +0200
Message-Id: <cover.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- the minimal set of headers and the changes needed inside them.
- the SBI (RISC-V Supervisor Binary Interface) bits necessary for a basic
  early_printk implementation.
- the pieces needed to set up the stack.
- an early_printk() function that can print only plain strings.
- a RISC-V smoke test which checks whether the "Hello from C env" message
  is present in serial.tmp.
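
The smoke test's pass condition described above can be sketched as follows;
the file name serial.tmp and the banner string are taken from the text, while
the function name is hypothetical (the real check lives in
automation/scripts/qemu-smoke-riscv64.sh):

```python
from pathlib import Path

def banner_present(log_path, banner="Hello from C env"):
    """True when the expected early_printk banner shows up in the serial log."""
    return banner in Path(log_path).read_text(errors="replace")
```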

---
Changes in V4:
    - Patches "xen/riscv: introduce dummy asm/init.h" and "xen/riscv: introduce
      stack stuff" were removed from the patch series as they were merged
      separately into staging.
    - Remove "depends on RISCV*" from Kconfig.debug, as Kconfig.debug is located
      in the arch-specific folder.
    - Fix code style.
    - Add "ifdef __riscv_cmodel_medany" to early_printk.c.
---
Changes in V3:
    - Most of "[PATCH v2 7/8] xen/riscv: print hello message from C env"
      was merged with "[PATCH v2 3/6] xen/riscv: introduce stack stuff".
    - "[PATCH v2 7/8] xen/riscv: print hello message from C env" was
      merged with "[PATCH v2 6/8] xen/riscv: introduce early_printk basic
      stuff".
    - "[PATCH v2 5/8] xen/include: include <asm/types.h> in
      <xen/early_printk.h>" was removed as it has already been merged into
      mainline staging.
    - Code style fixes.
---
Changes in V2:
    - Update the patches' commit messages according to the mailing
      list comments.
    - Remove unneeded types in <asm/types.h>.
    - Introduce the definition of STACK_SIZE.
    - Order the files alphabetically in the Makefile.
    - Add a license to early_printk.c.
    - Add a RISCV_32 dependency to config EARLY_PRINTK in Kconfig.debug.
    - Move the dockerfile changes to a separate config and send them as a
      separate patch to the mailing list.
    - Update test.yaml to wire up the smoke test.
---


Bobby Eshleman (1):
  xen/riscv: introduce sbi call to putchar to console

Oleksii Kurochko (3):
  xen/riscv: introduce asm/types.h header file
  xen/riscv: introduce early_printk basic stuff
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 ++++++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 ++++++++++
 xen/arch/riscv/Kconfig.debug              |  6 +++
 xen/arch/riscv/Makefile                   |  2 +
 xen/arch/riscv/early_printk.c             | 45 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
 xen/arch/riscv/include/asm/sbi.h          | 34 +++++++++++++++++
 xen/arch/riscv/include/asm/types.h        | 43 ++++++++++++++++++++++
 xen/arch/riscv/sbi.c                      | 45 +++++++++++++++++++++++
 xen/arch/riscv/setup.c                    |  6 ++-
 10 files changed, 232 insertions(+), 1 deletion(-)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/include/asm/types.h
 create mode 100644 xen/arch/riscv/sbi.c

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 12:45:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 12:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480996.745654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIR-0002MS-NT; Thu, 19 Jan 2023 12:45:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480996.745654; Thu, 19 Jan 2023 12:45:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIR-0002ML-Ks; Thu, 19 Jan 2023 12:45:35 +0000
Received: by outflank-mailman (input) for mailman id 480996;
 Thu, 19 Jan 2023 12:45:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIUIP-0001sI-Ng
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 12:45:33 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 287cd8a2-97f7-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 13:45:31 +0100 (CET)
Received: by mail-wr1-x433.google.com with SMTP id z5so1741401wrt.6
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 04:45:31 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o15-20020a5d684f000000b002bddac15b3dsm17909808wrw.33.2023.01.19.04.45.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 04:45:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 287cd8a2-97f7-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nGpYf6LxU4uEJy+Cdd6qu2fGfvSKVdNlp4dIm9FxUs4=;
        b=CcMy0Bd/lI3/x6dBQKKAqPqyYMRSqSLIU2ixX8cJzr3/DFAjzwyge+jcBWXGS+uoGs
         EMB7JU7ZIYHAIv/f4ZzVgNRbJUyKEQKDHixOI8S9+oyjniT/JyRA19gBBoPkNqXRItG5
         ZVuq15zCDRcRzd/zqOxRC6PrmqZP10HGZGYuuD5Ce7JbIcPoSwavZyTzGwIC/IJWsC1n
         YnQBMYsjE7LfsEqBuGdvBwT0ZVWo77X574bUUMYLTyC+F2UOM9NqUCtu4ti5CgOOM6Ns
         I58WfMs46YVtRUpVwb6Qjxr5E91O9YreIeClaU0wPwQYDJcCNncPg4dkcTnyPHD3PPA/
         fEog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=nGpYf6LxU4uEJy+Cdd6qu2fGfvSKVdNlp4dIm9FxUs4=;
        b=b/YYAqNOScefcIsVbk7oi9uuY93lMsxteLFUUjWngigNKUzemLX8692v/RuK5VlKJr
         rhjdgemTzZgRi+IDkgvjOSgn+oo/Dpjko+iZJKeHjwy5P6teGfkzxZFjcOqulRb/QkF6
         v6Aw7wH93/W0z3Xs3eSJf5pFL6MF2iKkgBPyheyUd1JDqAuDpHjO6Qm53WLHhtMECTIM
         HdRRURdhHZaTM5IHjsLfwN1SKAgCJcD/0uIkmi4vGdQkF0+4f4g+MDv8WM/tFyNfkra6
         L1edjUGx2SrF61VstrCJJcvxFzZlT1Y2mBvWuxn/YojTLYOq1fL9MKjUGUaaL2Nwi++6
         I4bA==
X-Gm-Message-State: AFqh2kqqwPF1zSPSeIvPkY7AR9KxDIcyIoPBzk54BlH2qxO+JEt+pG9P
	UFSSJMSM+YDtGqBtJikjPUkgTPMdSeythWzw
X-Google-Smtp-Source: AMrXdXtMoq0AJxL0dY2oR4OeqHohQWghpgCyWas8kjrtR8PHE+1upjGQlD06LEdYqZogb72iq8TL7Q==
X-Received: by 2002:adf:dbc6:0:b0:2be:12a8:9f75 with SMTP id e6-20020adfdbc6000000b002be12a89f75mr8014740wrj.55.1674132330977;
        Thu, 19 Jan 2023 04:45:30 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 2/4] xen/riscv: introduce sbi call to putchar to console
Date: Thu, 19 Jan 2023 14:45:15 +0200
Message-Id: <06ad9f6c8cbc87284ef4ecd4b85d9c7df33bd2c1.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Bobby Eshleman <bobby.eshleman@gmail.com>

The SBI implementation for Xen was originally introduced by
Bobby Eshleman <bobby.eshleman@gmail.com>, but for simplicity
everything was removed except the SBI call for putting a
character to the console.

The patch introduces the sbi_console_putchar() SBI call, which is
necessary to implement the initial early_printk.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
    - Nothing changed
---
Changes in V3:
    - update copyright's year
    - rename definition of __CPU_SBI_H__ to __ASM_RISCV_SBI_H__
    - fix indentations
    - change an author of the commit
---
Changes in V2:
    - add an explanatory comment about the sbi_console_putchar() function.
    - order the files alphabetically in Makefile
---
 xen/arch/riscv/Makefile          |  1 +
 xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
 xen/arch/riscv/sbi.c             | 45 ++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/sbi.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 5a67a3f493..fd916e1004 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
+obj-y += sbi.o
 obj-y += setup.o
 
 $(TARGET): $(TARGET)-syms
diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
new file mode 100644
index 0000000000..0e6820a4ed
--- /dev/null
+++ b/xen/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: (GPL-2.0-or-later) */
+/*
+ * Copyright (c) 2021-2023 Vates SAS.
+ *
+ * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Taken/modified from Xvisor project with the following copyright:
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ */
+
+#ifndef __ASM_RISCV_SBI_H__
+#define __ASM_RISCV_SBI_H__
+
+#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
+
+struct sbiret {
+    long error;
+    long value;
+};
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
+                        unsigned long arg0, unsigned long arg1,
+                        unsigned long arg2, unsigned long arg3,
+                        unsigned long arg4, unsigned long arg5);
+
+/**
+ * Writes given character to the console device.
+ *
+ * @param ch The data to be written to the console.
+ */
+void sbi_console_putchar(int ch);
+
+#endif /* __ASM_RISCV_SBI_H__ */
diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
new file mode 100644
index 0000000000..dc0eb44bc6
--- /dev/null
+++ b/xen/arch/riscv/sbi.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Taken and modified from the xvisor project with the copyright Copyright (c)
+ * 2019 Western Digital Corporation or its affiliates and author Anup Patel
+ * (anup.patel@wdc.com).
+ *
+ * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2021-2023 Vates SAS.
+ */
+
+#include <xen/errno.h>
+#include <asm/sbi.h>
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
+                        unsigned long arg0, unsigned long arg1,
+                        unsigned long arg2, unsigned long arg3,
+                        unsigned long arg4, unsigned long arg5)
+{
+    struct sbiret ret;
+
+    register unsigned long a0 asm ("a0") = arg0;
+    register unsigned long a1 asm ("a1") = arg1;
+    register unsigned long a2 asm ("a2") = arg2;
+    register unsigned long a3 asm ("a3") = arg3;
+    register unsigned long a4 asm ("a4") = arg4;
+    register unsigned long a5 asm ("a5") = arg5;
+    register unsigned long a6 asm ("a6") = fid;
+    register unsigned long a7 asm ("a7") = ext;
+
+    asm volatile ("ecall"
+              : "+r" (a0), "+r" (a1)
+              : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
+              : "memory");
+    ret.error = a0;
+    ret.value = a1;
+
+    return ret;
+}
+
+void sbi_console_putchar(int ch)
+{
+    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 12:45:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 12:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480995.745641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIQ-0001w7-Kd; Thu, 19 Jan 2023 12:45:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480995.745641; Thu, 19 Jan 2023 12:45:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIQ-0001vd-DN; Thu, 19 Jan 2023 12:45:34 +0000
Received: by outflank-mailman (input) for mailman id 480995;
 Thu, 19 Jan 2023 12:45:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIUIP-0001sI-9n
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 12:45:33 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 27bfb1ab-97f7-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 13:45:29 +0100 (CET)
Received: by mail-wr1-x432.google.com with SMTP id b7so1753884wrt.3
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 04:45:30 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o15-20020a5d684f000000b002bddac15b3dsm17909808wrw.33.2023.01.19.04.45.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 04:45:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27bfb1ab-97f7-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=acnYUtdnwYpIYk/Tnrxms/hajM4oUKDgrFUUijudygw=;
        b=fiKGHKGKwpDrTvmCiM74i7136tjFAyfH92GasnFvrNwTJFHTQ+iayvuHUegZ2Xx8/4
         OX8qH1Oom3DqFA9AtTMm8xlRIIJiA7JwqnZ5qewfCd1/LIVy4ZO63RjW4e1fIetEyR0+
         0ESh9VWdxZBvsiAYz8Vzev5NAKvI1hB+Naexqmf++owO1XnB8SqPakvwEsSMnfJ8+dqx
         nm6E3PYf65jA/MvSosGSMscI54W5PgVPvY4cHvzXtF1FDMbrZ+4kmOl8FiJG71MPk35C
         Dmjw7XIN7uPNVEKh12w7ZNEepP5YFR3HcOTxEQyFbKvA8qhqXpWZdO6OzNw/7Dfp+cJ7
         ZmBw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=acnYUtdnwYpIYk/Tnrxms/hajM4oUKDgrFUUijudygw=;
        b=1Na3xqk2W2NHj/PbnSF9lL/J5/0x8N8QFhl6aNmlx/4f3CPsY3/xcbd2il7ysZSQ1Z
         eA3S7ZLDCnHDnp1sTvjHXiOtnoQ7ORA9iTX5zxQT90zyv6Y3i23gisZTeZq16uMBarDp
         3AbbeFA9HUQNEFejLhdBaM6F/g5eJFWGU/bp4bga+14EPQxrYiJw4ZLWQIGiX09HK0+t
         /27OwrdmiovdVaqOXy8tGvjcF3wIZheMySHbknIhH6XQ9vzGHt40EXDEVyGSIfVNLn08
         F3DNoMM0CIti/zaG6ElKBRv/vd26RJBBKdcOccsSyF7R6McBXpkC9BJ++cfShU2aDGRi
         Asjw==
X-Gm-Message-State: AFqh2kofoVDSOSr73oL6CjSzOw/xdxW8DsTYimUMZ/TixzQkHpffL5Cu
	8WCSV72uRrZ7SVbt1M0LskQXCRyZRUfZqJfi
X-Google-Smtp-Source: AMrXdXuLd2cVE1yxR480O9jxyrvcMnhHBJPHs7EDwDAYwFWZTSRzOK3BLzvo1J6H30JNMHJrJhSL7Q==
X-Received: by 2002:adf:fc4c:0:b0:2bd:dbbb:e7e2 with SMTP id e12-20020adffc4c000000b002bddbbbe7e2mr9152216wrs.60.1674132329722;
        Thu, 19 Jan 2023 04:45:29 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v4 1/4] xen/riscv: introduce asm/types.h header file
Date: Thu, 19 Jan 2023 14:45:14 +0200
Message-Id: <2ce57f95f8445a4880e0992668a48ffe7c2f9732.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
    - Clean up types in <asm/types.h> and retain only the necessary ones.
      The following types were removed as they are defined in <xen/types.h>:
      {__|}{u|s}{8|16|32|64}
---
Changes in V3:
    - Nothing changed
---
Changes in V2:
    - Remove now-unneeded types from <asm/types.h>
---
 xen/arch/riscv/include/asm/types.h | 43 ++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/types.h

diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
new file mode 100644
index 0000000000..9e55bcf776
--- /dev/null
+++ b/xen/arch/riscv/include/asm/types.h
@@ -0,0 +1,43 @@
+#ifndef __RISCV_TYPES_H__
+#define __RISCV_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+#if defined(CONFIG_RISCV_32)
+typedef unsigned long long u64;
+typedef unsigned int u32;
+typedef u32 vaddr_t;
+#define PRIvaddr PRIx32
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0ULL)
+#define PRIpaddr "016llx"
+typedef u32 register_t;
+#define PRIregister "x"
+#elif defined (CONFIG_RISCV_64)
+typedef unsigned long u64;
+typedef u64 vaddr_t;
+#define PRIvaddr PRIx64
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "016lx"
+typedef u64 register_t;
+#define PRIregister "lx"
+#endif
+
+#if defined(__SIZE_TYPE__)
+typedef __SIZE_TYPE__ size_t;
+#else
+typedef unsigned long size_t;
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_TYPES_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 12:45:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 12:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480997.745659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIS-0002Q0-2H; Thu, 19 Jan 2023 12:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480997.745659; Thu, 19 Jan 2023 12:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIR-0002Pn-UH; Thu, 19 Jan 2023 12:45:35 +0000
Received: by outflank-mailman (input) for mailman id 480997;
 Thu, 19 Jan 2023 12:45:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIUIR-0001sI-02
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 12:45:35 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 293745f0-97f7-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 13:45:32 +0100 (CET)
Received: by mail-wr1-x433.google.com with SMTP id h16so1715875wrz.12
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 04:45:33 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o15-20020a5d684f000000b002bddac15b3dsm17909808wrw.33.2023.01.19.04.45.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 04:45:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 293745f0-97f7-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ht+yXJXCQ416rH9qG+XVpEx3bPoAaETeI1O2ZSFKNnE=;
        b=aku3f1za6ol9pQR5r9gDhprz9Kjx0Xh/Em1sk4axVDKHkdludLfPL2E0ilSWRO8e5I
         wdyzaBHSXTLnz9dFeMgHL/ljkTPDrvJBMO3B5MSiwJbxuM6cztccvPSzjBAP3ox/+IR1
         9A/S+/ErFucd1mVAyPT8y/N6H4GwCezc0/WjPoEzr6xVPG16avhO8Via4l89hoF+kNuG
         7QjrdJxt/Fl3ENEZ9HeVJBqREgp2NjhHk9oSzax2pHtY+wJqM+Hcha+VonQ4+wuUQCOq
         IUiz3Y7nK1dVscDwABCEw875T7KFmgJjEaVGNAyXyLWU0ZIP3OfZv33gZYjgIds6KhVg
         GZ3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ht+yXJXCQ416rH9qG+XVpEx3bPoAaETeI1O2ZSFKNnE=;
        b=rMPA8lgf7G7fwCljniiLIWdcnA6+Kt6rbX46wa6FllEFjHvmlnKQMFWMVqTLP5oTh8
         SXWOxgR7KKt/Io/2DHRrc0twjlmcAe+gJd8KBALL6ZmspoaNwnrNEZtIqIBnIpwzQHZm
         NgJ89QxW+vqYFMDBBCeODgtJpoIs+qQ+/e96nvKHrZk1POpKSJPaOFESsEIgJMAsnziF
         JYgWLF7WsVxODXrUImSruucM1bZJmG8dBffNPpH+MtNdX2mQ5aD0VQzDruVvxFpBHLXY
         CSCo1JlEePr7IKeEf94ObMAlcEwPe4ybbVG0TvAmtbkGDoHC6GAZKqlfqwrqGhqhNni7
         9boA==
X-Gm-Message-State: AFqh2ko8ZLl1XYqf2WSJQvQL+O/FniflkIh+swwb7iDKxst4viuqljjk
	fegC7MWpBB9mgPHUQnA8fAqgqjx7TXZEtQ==
X-Google-Smtp-Source: AMrXdXumEDvfsGhKIUpCgzd+Gs9h6UaiWfIZqdyaqaJ7tl+kfx/T6QNNAhQq6tObRNUwXRKuPglBBg==
X-Received: by 2002:a5d:43cd:0:b0:26b:8177:a5e6 with SMTP id v13-20020a5d43cd000000b0026b8177a5e6mr8920405wrr.51.1674132332232;
        Thu, 19 Jan 2023 04:45:32 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v4 3/4] xen/riscv: introduce early_printk basic stuff
Date: Thu, 19 Jan 2023 14:45:16 +0200
Message-Id: <915bd184c6648a1a3bf0ac6a79b5274972bb33dd.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Because printk() relies on a serial driver (such as the ns16550 driver),
and drivers require working virtual memory (ioremap()), there is no
print functionality early in Xen boot.

The patch introduces the basic parts of the early_printk functionality,
which are enough to print 'hello from C environment'.

Originally early_printk.{c,h} were introduced by Bobby Eshleman
(https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
but some functionality was changed: the early_printk() function differs
from the original because common code isn't being built yet, so there is
no vscnprintf().

This commit adds an early printk implementation built on the putchar SBI call.

As sbi_console_putchar() is already planned for deprecation,
it is used only temporarily and will be removed or reworked once
a real UART driver is ready.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
    - Remove "depends on RISCV*" from Kconfig.debug as it is located in an
      arch-specific folder, so RISCV configs are enabled by default.
    - Add "ifdef __riscv_cmodel_medany" to be sure that PC-relative addressing
      is used, as early_*() functions can be called from head.S with the MMU off
      and before relocation (if relocation is needed at all for RISC-V)
    - fix code style
---
Changes in V3:
    - reorder headers in alphabetical order
    - merge changes related to start_xen() function from "[PATCH v2 7/8]
      xen/riscv: print hello message from C env" to this patch
    - remove unneeded parentheses in definition of STACK_SIZE
---
Changes in V2:
    - introduce STACK_SIZE define.
    - use consistent padding between instruction mnemonic and operand(s)
---
---
 xen/arch/riscv/Kconfig.debug              |  6 +++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 45 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
 xen/arch/riscv/setup.c                    |  6 ++-
 5 files changed, 69 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..e139e44873 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,6 @@
+config EARLY_PRINTK
+    bool "Enable early printk"
+    default DEBUG
+    help
+
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index fd916e1004..1a4f1a6015 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..6bc29a1942
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/early_printk.h>
+#include <asm/sbi.h>
+
+/*
+ * early_*() can be called from head.S with MMU-off.
+ *
+ * The following requirements should be honoured for early_*() to
+ * work correctly:
+ *    It should use PC-relative addressing for accessing symbols.
+ *    To achieve that GCC cmodel=medany should be used.
+ */
+#ifndef __riscv_cmodel_medany
+#error "early_*() can be called from head.S before relocation so it should not use absolute addressing."
+#endif
+
+/*
+ * TODO:
+ *   sbi_console_putchar is already planned for deprecation
+ *   so it should be reworked to use UART directly.
+ */
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if ( *s == '\n' )
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while ( *str )
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {};
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 13e24e2fe1..9c9412152a 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,13 +1,17 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
 void __init noreturn start_xen(void)
 {
-    for ( ;; )
+    early_printk("Hello from C env\n");
+
+    for ( ; ; )
         asm volatile ("wfi");
 
     unreachable();
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 12:45:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 12:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.480998.745675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIT-0002rN-Fy; Thu, 19 Jan 2023 12:45:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 480998.745675; Thu, 19 Jan 2023 12:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUIT-0002rC-DE; Thu, 19 Jan 2023 12:45:37 +0000
Received: by outflank-mailman (input) for mailman id 480998;
 Thu, 19 Jan 2023 12:45:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIUIR-0001sI-OD
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 12:45:35 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 29c293aa-97f7-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 13:45:33 +0100 (CET)
Received: by mail-wr1-x429.google.com with SMTP id r9so1748894wrw.4
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 04:45:33 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o15-20020a5d684f000000b002bddac15b3dsm17909808wrw.33.2023.01.19.04.45.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 04:45:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29c293aa-97f7-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ctVbfTUMTOLN4KM620LAPIxTZt2tk9IeJCHpD6YNn4Y=;
        b=pLK7VCRtqg/YseT3XaGEaSd7V37XOED6G3j6l+wW7vWyiDLkyvioV/xfqmk5uAp6cA
         QtpQRqgP7KfjXJs9nChZYXMlwfppCWGyXZ12XH0qrIErdnO9VnKstu4Jjn+rr+bJfNjk
         4lcLt2Q0neWVD3Rac8PZYA2DgcTHdBXgfqzGzYWF2qASo1MKAWMpho+fC0/fGlKFIabp
         JZnSn+I8EtPAE54s/iVOi3l/zb7psWvT1AlQ/YgS95RUEebn5SB/QmVavINva0rlU/+N
         AJYCIAwK/skoSjE/coi1RMJJOFXRgZ/g0TC59vnwNhvqJZoU04CJtPbiu9Z/mndzZNZZ
         y+yw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ctVbfTUMTOLN4KM620LAPIxTZt2tk9IeJCHpD6YNn4Y=;
        b=wAUsKE1pIf1ncucjuavcySZf7z87cZMillEx383MkyKM0j5erKaY5hWb8S7Tao/PLa
         7HDEaMRIxPIpWiKyhpehejYRpjdiq3xY6qhikqMpGlEaZ+uwcCtifxv3pwU/PaTFJsMi
         1tt2ujhLKuNeKMMo0ADuylGccy4qtKhEdzL+DY26rftCa82dOUKzA+vdsXluXxuxkvEJ
         TTGhy5m1ShBNiGw1knf3rf0oVCWLLCVUo/dP/yW33RuAU2n5TCdZ5F86BHCb9Sxs6ySo
         BLVYy6CA6stLRdSDp+bBYNFa6GNBHD0sQk6Cj6xt3sAngQj9g09+yips9mTPDEx2wiVv
         WB/w==
X-Gm-Message-State: AFqh2krKV7gy3H4kvTpwohpNgj+e6SKrAaIToPGjNXYykROKrsRtbaIf
	a+JBg//bVIzVMzYY0wZEkNl0hjIVL/Bopg==
X-Google-Smtp-Source: AMrXdXvzUUqNE5ShYhe+N5yA48TiQ/D85X6AVjWaq6Iz6juKhqEOQZ9p/pdhj44RgBtZo9rDG8/eHA==
X-Received: by 2002:a05:6000:3c2:b0:2bd:d45c:3929 with SMTP id b2-20020a05600003c200b002bdd45c3929mr10774555wrg.54.1674132333238;
        Thu, 19 Jan 2023 04:45:33 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v4 4/4] automation: add RISC-V smoke test
Date: Thu, 19 Jan 2023 14:45:17 +0200
Message-Id: <216c21039a5552a329178b4376ff53ba16cf6104.1673877778.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the message 'Hello from C env' is present in the
log file, to be sure that the stack is set up and the C part of
early printk is working.
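
The pass/fail condition of the smoke job reduces to a single grep over the
captured serial log. The standalone snippet below (hypothetical, using a
temporary file as a fake smoke.serial) shows that logic on its own:

```shell
#!/bin/bash
# Recreate the script's check against a fake captured boot log.
log=$(mktemp)
printf 'OpenSBI v1.1\nHello from C env\n' > "$log"   # fake smoke.serial
if grep -q "Hello from C env" "$log"; then
    echo PASS
else
    echo FAIL
fi
rm -f "$log"
```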

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in V4:
    - Nothing changed
---
Changes in V3:
    - Nothing changed
    - All of the comments raised by Stefano on the Xen mailing list
      will be addressed in a separate patch outside this patch series.
---
 automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index afd80adfe1..64f47a0ab9 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -54,6 +54,19 @@
   tags:
     - x86_64
 
+.qemu-riscv64:
+  extends: .test-jobs-common
+  variables:
+    CONTAINER: archlinux:riscv64
+    LOGFILE: qemu-smoke-riscv64.log
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - x86_64
+
 .yocto-test:
   extends: .test-jobs-common
   script:
@@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
   needs:
     - debian-unstable-clang-debug
 
+qemu-smoke-riscv64-gcc:
+  extends: .qemu-riscv64
+  script:
+    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - riscv64-cross-gcc
+
 # Yocto test jobs
 yocto-qemuarm64:
   extends: .yocto-test-arm64
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:09:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:09:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481024.745687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUfn-0006jJ-Dk; Thu, 19 Jan 2023 13:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481024.745687; Thu, 19 Jan 2023 13:09:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUfn-0006jC-B3; Thu, 19 Jan 2023 13:09:43 +0000
Received: by outflank-mailman (input) for mailman id 481024;
 Thu, 19 Jan 2023 13:09:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iv86=5Q=gmail.com=tushar.goel.dav@srs-se1.protection.inumbo.net>)
 id 1pIUfm-0006j6-8a
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:09:42 +0000
Received: from mail-yb1-xb2a.google.com (mail-yb1-xb2a.google.com
 [2607:f8b0:4864:20::b2a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 877d4c7d-97fa-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 14:09:39 +0100 (CET)
Received: by mail-yb1-xb2a.google.com with SMTP id 66so2402063yba.4
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 05:09:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 877d4c7d-97fa-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=lNsUyEPWjP8sROmTom8UUlPpz85zG7Z3FuXAFMvgRjg=;
        b=VKnmmxPSEUbLhPGh35iPRdcjWl6CWOtDr1QaxxFRJIJ82wzjlIohlA1ovfYGccNVpc
         DuvhY/4hCHfbV7qM0w37Rl0ywp5gaIO0yM1njmYWeP/l2Cj4cJDRdVoyMsrC43SH2CGv
         BEatzmcI3ZxLi2QzIk88D2oZJbFFaAZ822RlG+tuZ/tQw12XGWBqBZDVypZ02p1QisTb
         YOFXpPlgRAwFyfpNN2FdP7S0yyf2E9A/Cthcto+ooOUlpFKcfTq7cc6JblWEBUhPQPbn
         FH9JuZjs2kL2ENoz7OTMoWy5SKrlYmKnyPuSWh2TSTHt3MaAAJtq0+DMnTPWTvNOz00U
         AX/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=lNsUyEPWjP8sROmTom8UUlPpz85zG7Z3FuXAFMvgRjg=;
        b=EpO60S4GI0WTwf0ROmhSAL4qNjKKps8wMtZc1loNiFSYW9lfdjqF2F1cnuIEBWDSsf
         jvsWhVW6/LjKSrL/fuKcaSG3jqnz4PMn8Dd9Bm8DaordrmG4K9ltbVwpe9Wze6OmTEqY
         f1TEGC/HC1bgXxQcwf1f2E/p19MluUTIZrR6ixEZek+iPvovPB4ZAWhK3zjHORfJG65w
         0GE6lxUk+sauKKcknYMZ6ns1vd9etZBao6wv/3U9MhLYA3JuriNIVEycHdF7J7vmLnRa
         3OXTijmSm1bWbHvZezKbRwB3zND0tRQqVxwWM2tV7F7W7VscTN+7quTR5bl1tAiwej+i
         XUaQ==
X-Gm-Message-State: AFqh2kqNZ8mkkPM79dy9YH1HThieVndbICcOv4tNlr2iy5nYGgBVNRgi
	2hJhsGsZlMdsFXthkPzmKRKr7D8fjp7kh1YiTDI=
X-Google-Smtp-Source: AMrXdXs9mzeA6v9Ts5rzhVohQJhozjRQz9nvgeKqqfEwyTm80VGtAtSkEEotYkO+bJUKOhK72Q6qFnGgi9swZohYXPg=
X-Received: by 2002:a25:ed01:0:b0:7ec:b507:1255 with SMTP id
 k1-20020a25ed01000000b007ecb5071255mr1548339ybh.111.1674133778908; Thu, 19
 Jan 2023 05:09:38 -0800 (PST)
MIME-Version: 1.0
References: <CAFD1rPdT5Tod+qdit50EWBN6WyRuK2ybb2G2HmOAayAV7uyBuA@mail.gmail.com>
 <7ddac120-29c5-d4fa-2bc7-9da6b1cf2dd9@citrix.com>
In-Reply-To: <7ddac120-29c5-d4fa-2bc7-9da6b1cf2dd9@citrix.com>
From: Tushar Goel <tushar.goel.dav@gmail.com>
Date: Thu, 19 Jan 2023 18:39:28 +0530
Message-ID: <CAFD1rPfv1jCNkcPP1KBLDr1e+_aa7+aCphVTjZG-xAnbkcnNGQ@mail.gmail.com>
Subject: Re: Usage of Xen Security Data in VulnerableCode
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Xen Security <security@xen.org>, 
	Philippe Ombredanne <pombredanne@nexb.com>, jmhoran@nexb.com
Content-Type: text/plain; charset="UTF-8"

Hi Andrew,

> Maybe we want to make it CC-BY-4 to require people to reference back to
> the canonical upstream ?
Thanks for your response. Could we have a more definitive statement on
the license from your end, and could you also please provide your
acknowledgement of the usage of the Xen security data in
VulnerableCode?

Regards,

On Tue, Jan 10, 2023 at 7:15 PM Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>
> On 10/01/2023 1:33 pm, Tushar Goel wrote:
> > Hey,
> >
> > We would like to integrate the xen security data[1][2] data
> > in vulnerablecode[3] which is a FOSS db of FOSS vulnerability data.
> > We were not able to know under which license this security data comes.
> > We would be grateful to have your acknowledgement over
> > usage of the xen security data in vulnerablecode and
> > have some kind of licensing declaration from your side.
> >
> > [1] - https://xenbits.xen.org/xsa/xsa.json
> > [2] - https://github.com/nexB/vulnerablecode/pull/1044
> > [3] - https://github.com/nexB/vulnerablecode
>
> Hmm, good question...
>
> In practice, it is public domain, not least because we publish it to
> Mitre and various public mailing lists, but I'm not aware of having
> explicitly tried to choose a license.
>
> Maybe we want to make it CC-BY-4 to require people to reference back to
> the canonical upstream ?
>
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:18:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:18:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481031.745701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUoI-0008EY-D3; Thu, 19 Jan 2023 13:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481031.745701; Thu, 19 Jan 2023 13:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUoI-0008ER-A6; Thu, 19 Jan 2023 13:18:30 +0000
Received: by outflank-mailman (input) for mailman id 481031;
 Thu, 19 Jan 2023 13:18:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DP+J=5Q=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIUoH-0008EL-0c
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:18:29 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2050.outbound.protection.outlook.com [40.107.6.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c27a9ca6-97fb-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 14:18:27 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8545.eurprd04.prod.outlook.com (2603:10a6:20b:420::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 19 Jan
 2023 13:18:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 13:18:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c27a9ca6-97fb-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ptaa1x1Kv15waDyHrM8W9vO0GX0kWNhbCmy8tzmBIhCsbGzTHF9g5A7/5SZ5HAxnC1QNlokuqwv1phS/TrrA2S9GIrpBMXZpdKSF1dN6Yb+WOICtvxvxdsbUXRoESkn0Dtb0SS1B86n22fzUElBXUNTWW7kzprYvLEQrQw8Qfp7OeItEGUdG8gR9S2LCUMsKiXbAZnaB0mKDFqqJuXWqBiQEBQ0UgMoZy6yZ8//XIJHZWKLF3uykApNtYU+X2XiH+BCh7uniz5AyYnzsE/0H/PLxyNc8Pgx+g+HeEyIBUogGRBCDrVe9zGbprY/9M0S0ztcr/FNaxNG3ayHbx5zFJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OwPjSQKQLmEYaeICtcv0PQs0zL8GSJ74KYCWt8CkU5M=;
 b=iyXo64JqDTqwTRqIiXYEJenEsJzF6uDWn4eQSkj//PWWTDBb1ZW1lnibX+tC8ZL2OEbFHadW36ccP6zz3n+jM/meEnJW8LSyajaLN3cdhJemPnqLrshYqff0TGMwLaXs9IwKUNN/pk/c+pRqDouognoxRLV6ZWux3mYdpDI6hmYM7APNyhd9suceQJAz93bUOW71w4sQhEpZ8E9b0mq/GdDMxRabscc9aXEhaZQY5D6MjjdVJVDF375jAYbYd4mWydEz08Sg4cCFsfwo8mwIHpZUG5yoKXbt+/o80FFaly+IgczgBD+D9EpS+9gQxFvL1sXNm/V6dUhrbxaZkZrrIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OwPjSQKQLmEYaeICtcv0PQs0zL8GSJ74KYCWt8CkU5M=;
 b=QbKgpBrCAAATs6s20jtDSbpE88Ae5a0YwNaJqkNUk/ria/50ReQcNfqYxGpLAM4cIqtBVyAF5It5TpLhQsqP5QtJPpVuXXYeyMgKzCI9obliUGSHtlk4Mm2QShsSt4vSo/SaKYbL6U8ao8oATu/ciIJELku+dsSIbkz2/fbMnH5vbZJCjVV7mWlOEvfYEAxZZNJhUc46VG0sf/IBiQnwPRl8Gr+oqw01aAFgX0ZCugx6IJDM1K6ARgY3DYWuHNE90G11+nrNwu8tKMsi/7gLGwKDEk5rIAX6lF8oLE5v7mPmkZN0M1W+RsFxMWVLEeP5yLtZiR4KeZzoX7cy6b81gg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9e79449a-fd12-f497-695b-79a50cc913c7@suse.com>
Date: Thu, 19 Jan 2023 14:18:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] x86/shadow: sh_page_fault() adjustments
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0170.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8545:EE_
X-MS-Office365-Filtering-Correlation-Id: 79fe9a80-dce9-4768-315b-08dafa1fa559
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Q1uDbt0I6znILtSuk6joGOVYqU0GvKZ1V+zRwkZOHKBxrTwx1vOMivqlr6lCyYggXo3WtqYpXPaCEcJvlFf5rDJ2SwqMFKDPhiAZhmMiKJlL4BxuG+3DXdv4octnfzraUCXSZly9fIpcYg/CldCoFuOfWEn/LfJ3JvbHggpdv0dGSPxqL2SO1qVpVQq/P2c/dhebKtNPVZNU2U9XgKNzqRGEq2IEsIpMxYTSidmTJdxlWMx5DLFh61mBzqTEU/coyqBCUFmBBkN0AH0s4fWhcYaKAPdhfeZiBOqGEUDgbVDSOULN62UrhKmYRFFVhUkZt/V/GAgbx5+ZJPiXGDSGayvOcami7efkMp4fGs5iETmFbEpdt8eJZS5l93HJUoxWRUOPMIDW3pFjwDHDEqIhp0ZWkMrAR+pzVayXUwq/tjao//VxsVmab+S53r/AcTQxp7UMGGhid1pEsONUI4ZsYmi2vZ61FDFHn3ooSyU4EzgHASoJXinmDRz6qWISicAh+zIQGHxEI6To2VETtr1TqybK3pEEhA+D+tZJ2/5FSrxl/pQNXD3CgzyouLgWwM6F3+o5IZOEvJm+xw0rQxHqimkpqr0AHlddvDGlFiHbgmmdoUHddZXBTpaaJMmfkId8v1sR8BSx2aKolEGriHxDQPsxmfcTLEyqqmMsa4wZtwriXw9Cd8XjyFzFHFHPf9aeHIozmpeqss0/ZCOoefq3XFO9Yd7XLWgEt2NybtWRlMc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(346002)(136003)(376002)(396003)(366004)(451199015)(36756003)(6506007)(26005)(186003)(6512007)(38100700002)(86362001)(2616005)(31696002)(558084003)(478600001)(6486002)(66556008)(8936002)(5660300002)(31686004)(2906002)(4326008)(8676002)(6916009)(66476007)(316002)(66946007)(54906003)(41300700001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?akpLY0wyakVqRGlOYzFNQ2xaK3V3ZlRlUC9Oa1VCV0EvUzQ3ZFE4endBSS9Q?=
 =?utf-8?B?SE03SVNFbk84S1RVbTk4WDQ2OWZHWlY5TDhPN1FVaHU2ZUdGcVpTYXVQWjlS?=
 =?utf-8?B?UEN0bmZIUFJSSFcwQnNYeXpQUGlQWmlYQnJ0Sk90ZUpkMmZ2bU5DL3dsaStR?=
 =?utf-8?B?ZGFZRVdLZnRwQVArc2RvZkN3Q1lYUlZqWDRKZ0VoUmprWm9LK1pMSlY5N0lJ?=
 =?utf-8?B?T0hyUkErUnRxci81eGRKMDFpWGlZQkRSQU9xNlZSa21aMnNGZzVkUGQ2YU9u?=
 =?utf-8?B?cmhvNGdoNlNWeVVSb0xqWGh3cnJZSlZsNmxkSzVYeXpxUFNuVEJxWk85d25z?=
 =?utf-8?B?SG4zYUtNNEhzNUdUeS9zTXBtVVFrVjN4bEdFS1RNelZvWU9lK1pjck5WKzBO?=
 =?utf-8?B?K0V6QzBKNGxHOGc3MUhWUTVJTTdMNjUvSlMvQUVnUTh2TGRQbGJKR3d2blVt?=
 =?utf-8?B?SzJ6TndKSTFnNzZqdDl5WWJ3SmQ4VitpaUFyR3l4SWt6Z0k5WE1VS1FlRHBG?=
 =?utf-8?B?WHdlNVlXeWtIS2dBS1c3ZmwybUs1bmEwSm5veUZQRnROb1c3TjVQWWZyZWJr?=
 =?utf-8?B?Sno2dEFyNVpBU3lBTUZTakFBMldJUG1ibTBjamxnOWlKWjV1SU1na3hYV0k0?=
 =?utf-8?B?TzZZN3ZpOVZBNkVYSGFVS1h4RzBndU9MYjdyOXQzUUx2YjNKUHRnT0EwRFhm?=
 =?utf-8?B?Si9yQkUyNG9XNXlIeHdaMVpNcFR3ZGpBUC8wbjEvZ1pNRWJvaHBGbkpqa0lv?=
 =?utf-8?B?Z1NxbUpabFU5bzhkbnZLcUN5eG1pTTRRamlCZVpTZVhuUVZRL0xKaTZWTmlO?=
 =?utf-8?B?SGRhWjZsdzhEWittZ3V1ZXRqN0ttTDZrd21GTmpuZ01kd21mWXorZ2FDQkZp?=
 =?utf-8?B?bG9Ed3RLSG1zTTNnT3h6bVhYUjFzQUFBQ2VobkRic3JRRGxvUDRtd2dPZUVB?=
 =?utf-8?B?eERqM0RYOG51WWxKVTc2T3NkV2xqTlBibzIzNG5USU9IbUY1L0owNjBmMG9T?=
 =?utf-8?B?aVdXSzJaZVFhbXc1RGxHWVlZdkdFcTIrY3IrNUcyTFpFYVVjd29zekM4YnFC?=
 =?utf-8?B?bldUSkRDVTQ4MjREdFRJZ211RUdrRjN6Uk15QW5JR2RlQk5aMkJwQXpGdUZZ?=
 =?utf-8?B?UlZkOStzQ05mTzhQL213R2tYM3loaVg5clNKK1BzL1VsdkhJaWV5QzBNRmNw?=
 =?utf-8?B?aUVwTEREQVc4UDVQRTFPTlQvVDF1YlkyUGJKYlllS1FueDJ5OWczeFFhN1NZ?=
 =?utf-8?B?ZUtTaUQvMm1LVDNMSGFaWFRzWEtkNVBZMWRydU1qWXhlbUR5MW1odURJNlJ3?=
 =?utf-8?B?Q3VGSEhVMlQzMWI1bWxFQXA0TUt2eExPVm0rcXdJc0liN2llVXRFdGJuYVg4?=
 =?utf-8?B?SEk4aEl5aEtUekpsOUtLQ2VCTTNuUDRHdlNKU242aEJzMyt6eFFIUDJpU3Nj?=
 =?utf-8?B?RDV1Uit5OGFJV3ZveDdaeUMvOTBJeHk3OS95aDJNTitiQTFXaENKQnpWRzJC?=
 =?utf-8?B?VmVEeTZKUVZXWVBEWXB3d2lRRTNjVzNSeStJOWhHUU11bWZ0S0xmOERjV2FM?=
 =?utf-8?B?VXRKUkg0OHFpS0NYWE5saVNqcW9xWm10NlRRc2ZZVUt1RlBidXN6RFY5cTRO?=
 =?utf-8?B?NjlHRVI2N3JVanJTajB6VGsvY2xYY3g3OWMvamh4OVpZZjBTZG1iaER3MFgz?=
 =?utf-8?B?bzFuWCtKNnUwbnVHQmFHdkpFcjhDSlFTNW9GaFllY3JBN1VVVno3NWU5Ylcr?=
 =?utf-8?B?VXNER2wrMGpDMldTUlJVWHdERDFlditkcmdPS3I4S0RCVm0rVjhMQXMwUm82?=
 =?utf-8?B?S2d4bVpFcUNrUWNMNlMzZXFlNDJyT25TYlVKWjlFa1VxKzE4djBEMjduRUVC?=
 =?utf-8?B?NmpwZ08vSklPNEI4aTJYVVRaUHQyUWl1LzVOTjdkWTZNNEYwQmx1U0lpRStK?=
 =?utf-8?B?Tmh5MmZkdnk1bTdqdE1kYy9MY1hNUng3eThWQ3RsMkd5TEtocE1TUTF6RGxr?=
 =?utf-8?B?NmFtdmhrL0VTRWZnYmNCMGFrTG1IL2MveUJ3cE9SZEprb013d0xMZEpKYzlj?=
 =?utf-8?B?U1BhT3FiT3J6b1VzRGk5Q0NEL3NoR2xnQUlXeXRMWHMrYVFaYW01Y0tDemZG?=
 =?utf-8?Q?COHD/NuMuAN7v6p+IoB1+P09i?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 79fe9a80-dce9-4768-315b-08dafa1fa559
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 13:18:25.4643
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mG3ZOkAa3rISTGH+NXQIIKO83AnC0HU9w74jJExUzOpaBiG4jppoS2oVDGP7IPvJZDq+s085DgH83sAlrTsX1A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8545

1: fix PAE check for top-level table unshadowing
2: mark more of sh_page_fault() HVM-only

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:19:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:19:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481034.745711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUoq-0000Iv-ND; Thu, 19 Jan 2023 13:19:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481034.745711; Thu, 19 Jan 2023 13:19:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUoq-0000Io-JP; Thu, 19 Jan 2023 13:19:04 +0000
Received: by outflank-mailman (input) for mailman id 481034;
 Thu, 19 Jan 2023 13:19:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qhaF=5Q=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1pIUol-0008EL-6B
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:19:03 +0000
Received: from mail.skyhub.de (mail.skyhub.de [2a01:4f8:190:11c2::b:1457])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d20eb649-97fb-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 14:18:56 +0100 (CET)
Received: from zn.tnic (p5de8e9fe.dip0.t-ipconnect.de [93.232.233.254])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id E2B871EC0691;
 Thu, 19 Jan 2023 14:18:52 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d20eb649-97fb-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1674134333;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=KQnyLQfK7qJC86oYK37cAQ7hek4siDxOAmRxTyQrFa0=;
	b=PZtimvQkqlW/cJilj6IlzV376vPd6z0Ph1rqtUb3ByHCk/BeQkmicHqrBF+DY+kYjExKTd
	WSM2lsHs//3WaJrIcjFy2fnI+Sc3ZkLWm9cCgQBuq1ytKkWulN3q2Fnry/oCuZunWxvxgd
	HNIOXbeTbVZCwhjTV29fcAxtjneo6X0=
Date: Thu, 19 Jan 2023 14:18:47 +0100
From: Borislav Petkov <bp@alien8.de>
To: Peter Zijlstra <peterz@infradead.org>,
	=?utf-8?B?SsO2cmcgUsO2ZGVs?= <joro@8bytes.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit
Message-ID: <Y8lDN73cNOmNuciV@zn.tnic>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.649204101@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230116143645.649204101@infradead.org>

On Mon, Jan 16, 2023 at 03:25:35PM +0100, Peter Zijlstra wrote:
> Per the comment it is important to call sev_verify_cbit() before the
> first RET instruction, this means we can delay calling this until more

Make that "... this means that this can be delayed until... "

And I believe this is not about the first RET insn but about the *next* RET
which will pop poisoned crap from the unencrypted stack and do shits with it.

Also, there's this over sev_verify_cbit():

 * sev_verify_cbit() is called before switching to a new long-mode page-table
 * at boot.

so you can't move it under the

	movq    %rax, %cr3

Looking at this more, there's a sme_enable() call on the BSP which is already in
C.

So, can we do that C-bit verification once on the BSP, *in C* which would be a
lot easier, and be done with it?

Once it is verified there, the bit is the same on all APs so all good.

Right?

joro?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:19:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:19:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481036.745721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUp4-0000bo-VR; Thu, 19 Jan 2023 13:19:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481036.745721; Thu, 19 Jan 2023 13:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUp4-0000bh-RI; Thu, 19 Jan 2023 13:19:18 +0000
Received: by outflank-mailman (input) for mailman id 481036;
 Thu, 19 Jan 2023 13:19:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DP+J=5Q=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIUp3-0000DY-Hp
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:19:17 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2061.outbound.protection.outlook.com [40.107.6.61])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df24041b-97fb-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 14:19:15 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6841.eurprd04.prod.outlook.com (2603:10a6:10:116::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Thu, 19 Jan
 2023 13:19:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 13:19:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df24041b-97fb-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lzjzR4+FWN5lJ6i+PeyCgmB6LvWoUsBmXtVYOv/mjCn7EEhkrRH1rvIi4l44KhPpMDzmquqwXUvk2v5zk4Y8VmQ53JEMf/KwZ97t0vujbLjaZP+iRk0vkRY8WEkPO+iBttEVLZWN3DD2eYc2pTYLXI5wP02ju/HnQqBH/RADinCMjmaTN6n/lusFpUtuGsn9YSOUyGhnc18farGZcyizKhmdqIFMcm3JQeU9zdpNKmn+YkbAXAqnThcwUV3f+ddNK1KNpUFaLcGDfIEy8jtU5gvVC9SIi7BhFUWp6v+uYfP7wejhu9WTn0CAP8bhweDWU4Pn5KCCY18CALvLbyTxJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AitU+pHKk+enzQgMWj+BP+8JFiOp/gFdjRqxb0m96BM=;
 b=JtqbaSJSW2UxZcmeVZ8ngXPT5p9xIWEkDNhV6j8LXfOp7FHUTdsdVPYu8ZLzyfnUwU5S6KOQUnT1YnLlv3+/VaQZ4vfQbYijf5A8NQzEYWMaUzPGktUYchUatjlXWFDepOlGwwqAOGz+HILVPGqZsTRk1dQCo9qhI7t5hJLTRaDMzkIWCMfftC3ku/50GospYtjLS/Q3kkmMmGSdp18nxsN05ebhNqeUNVz87K1jGDqqhzod3hxCwSEJsTaHGOLsVp15G/NiX2CrAiXk/aX1ui6XshDQLCiVyBlHpNkbazc7ze+RQSm9YwttkyHJtbvZL9hRsViH6VCAahPDkakERA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AitU+pHKk+enzQgMWj+BP+8JFiOp/gFdjRqxb0m96BM=;
 b=5eYp0SKJ6di5e+aWjigiA5qpCG71BXx90ZY+VT8iK1F157pr97BflbhMspwWRyueb1LT3gI25WTQVblgLqtfaK4Ajj0VfcIH5D/9rg1XiEVfNy5MJboj9n3aNlTV47N4q2rexOTFhskHQkp64gy4aWN8Lbr0r9sfVxJGVByTwUk14XFOwP4fCAgpX8VPwOR1Z+KdgN7dzThNcdtWrp8sTF+oCjR3N9rdQILlDmmOQ8BnYKzB0knwvS9LR+PiuibJSGfhE9pmqJjBqlpfHnrBVJBJW6wTo3sd2ZXvJJrQte+SeRv4tUR+2yGxuVkByiz6dfGgtidfPPMg0p6F502u/A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bf03f851-2fb4-3de1-7d72-b0ac15b2d488@suse.com>
Date: Thu, 19 Jan 2023 14:19:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 1/2] x86/shadow: fix PAE check for top-level table unshadowing
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <9e79449a-fd12-f497-695b-79a50cc913c7@suse.com>
In-Reply-To: <9e79449a-fd12-f497-695b-79a50cc913c7@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0176.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6841:EE_
X-MS-Office365-Filtering-Correlation-Id: 7ea97159-7d68-4154-c55e-08dafa1fc269
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5pmQA3/qMFTPFyRaPd4OSmrCeoOtGrIyK/ZacvjvNpAzYQnTEmUYKcNjuOBe34X2ZAm1VqAHPisVtIn/pdpRHgl2A7iRLqkLvj9odgAo563B0NTgfnM0XXs5slyVwflcZnJxAma0JJhEaSwADBZlVCZdU1wb4l4dRENmDqYReYFATGzDdewcejyx6w+7LxmBqpPgtRkpFdEk+PTqcW7Di5S4EmclrlMfvbEOjuVl6+siLlQoJsECiRGJeQQ+xKZuohA9XL9L9XhrV9UjOtJBwzDcRgLc4zgB9x1Kriu/5CCJXVqnZDndNDj6U2JEa6qst5oBELJCig3i2lP7s2G5tGg4ene3z4QwbvGwk2faEHvf3nJV0qlezw5DFBTisbdNfMjLlQV8ISk2Jl1/WdBob6agRblXV7xgPyZhNdWaUr2PWP//cXgZXW3IFVTlhdXvpQW/Qz2crXy8fJ1E4y2r/TaURzgOryQBfN7aBveLh88bEmD3y63csQpoQeUOkKsr9k9rYfyUXmkTO/1XzB6KxmoNKvkrKhBQGEZuHcZSgiCQ98UgX/JkPyNm3P/+mnBr7xgJOv9S3THScL9LRxL/p5w/tA2ppD9r70vJxBgGFmC4uDRP9bBJhMUUz0WyT0v4l31bVLNnJtFJ2Jfu4aneKOkgVqgYnPsYld0S2fnnx3K0fofMmv92QGzCTFVBLhoOHgr2RsKGIe2m8Zuop0mauY48d9THH6dt4CVlTpOrgBk=
X-Forefront-Antispam-Report:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7ea97159-7d68-4154-c55e-08dafa1fc269
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 13:19:14.0863
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lg6vgIVMN2NfhYSbDYvZVwVZbcUIEj3HGgQfXtZJGbF1kRsddLAvvKxcplcgGidHnH+gXNZiU8BzOeIDg4uXKw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6841

Clearly, within the for_each_vcpu() the vCPU of this loop is meant, not
the (loop invariant) one the fault occurred on.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Quite likely this mistake would have been avoided if the function scope
variable had been named "curr", leaving "v" available for purposes like
the one here. Doing the rename now would, however, be quite a bit of
code churn.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2672,10 +2672,10 @@ static int cf_check sh_page_fault(
 #if GUEST_PAGING_LEVELS == 3
             unsigned int i;
 
-            for_each_shadow_table(v, i)
+            for_each_shadow_table(tmp, i)
             {
                 mfn_t smfn = pagetable_get_mfn(
-                                 v->arch.paging.shadow.shadow_table[i]);
+                                 tmp->arch.paging.shadow.shadow_table[i]);
 
                 if ( mfn_x(smfn) )
                 {



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:19:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:19:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481040.745730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUpP-0001Au-8Y; Thu, 19 Jan 2023 13:19:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481040.745730; Thu, 19 Jan 2023 13:19:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUpP-0001An-5T; Thu, 19 Jan 2023 13:19:39 +0000
Received: by outflank-mailman (input) for mailman id 481040;
 Thu, 19 Jan 2023 13:19:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DP+J=5Q=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIUpN-0008EL-IG
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:19:37 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2071.outbound.protection.outlook.com [40.107.15.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb2db859-97fb-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 14:19:36 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6841.eurprd04.prod.outlook.com (2603:10a6:10:116::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Thu, 19 Jan
 2023 13:19:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 13:19:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb2db859-97fb-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a4d6ac67-15c6-a7c7-d27c-b98544395a52@suse.com>
Date: Thu, 19 Jan 2023 14:19:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH 2/2] x86/shadow: mark more of sh_page_fault() HVM-only
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <9e79449a-fd12-f497-695b-79a50cc913c7@suse.com>
In-Reply-To: <9e79449a-fd12-f497-695b-79a50cc913c7@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0067.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6841:EE_
X-MS-Office365-Filtering-Correlation-Id: 6d93aba7-f515-42e5-68df-08dafa1fce68
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6d93aba7-f515-42e5-68df-08dafa1fce68
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 13:19:34.3038
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CMcUj3IUZBzjvvnqJWJwzm0FLG6koya/ov+eJOh+oioSyc6SSZdS2XNFBNsh/G+XiHmBNCozKMJn76Our6YlGg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6841

Neither p2m_mmio_dm nor the types p2m_is_readonly() checks for are
applicable to PV; specifically get_gfn() won't ever return such a type
for PV domains. Adjacent to those checks is yet another is_hvm_...()
one. With that block made conditional, another conditional block near
the end of the function can be widened.

Furthermore the shadow_mode_refcounts() check immediately ahead of the
aforementioned newly inserted #ifdef renders redundant two subsequent
is_hvm_domain() checks (the latter of which was already redundant with
the former).

Also, guest_mode() checks are pointless when we already know we're
dealing with an HVM domain.

Finally, style-adjust a comment which would otherwise be fully visible
as patch context anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I'm not convinced of the usefulness of the ASSERT() immediately after
the "mmio" label. Additionally I think the code there would better move
to the single place where we presently have "goto mmio", bringing things
more in line with the other handle_mmio_with_translation() invocation
site.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2158,8 +2158,8 @@ static int cf_check sh_page_fault(
     gfn_t gfn = _gfn(0);
     mfn_t gmfn, sl1mfn = _mfn(0);
     shadow_l1e_t sl1e, *ptr_sl1e;
-    paddr_t gpa;
 #ifdef CONFIG_HVM
+    paddr_t gpa;
     struct sh_emulate_ctxt emul_ctxt;
     const struct x86_emulate_ops *emul_ops;
     int r;
@@ -2583,6 +2583,7 @@ static int cf_check sh_page_fault(
         goto emulate;
     }
 
+#ifdef CONFIG_HVM
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm )
     {
@@ -2614,13 +2615,14 @@ static int cf_check sh_page_fault(
         perfc_incr(shadow_fault_emulate_wp);
         goto emulate;
     }
+#endif
 
     perfc_incr(shadow_fault_fixed);
     d->arch.paging.log_dirty.fault_count++;
     sh_reset_early_unshadow(v);
 
     trace_shadow_fixup(gw.l1e, va);
- done:
+ done: __maybe_unused;
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("fixed\n");
     shadow_audit_tables(v);
@@ -2629,9 +2631,10 @@ static int cf_check sh_page_fault(
     return EXCRET_fault_fixed;
 
  emulate:
-    if ( !shadow_mode_refcounts(d) || !guest_mode(regs) )
+    if ( !shadow_mode_refcounts(d) )
         goto not_a_shadow_fault;
 
+#ifdef CONFIG_HVM
     /*
      * We do not emulate user writes. Instead we use them as a hint that the
      * page is no longer a page table. This behaviour differs from native, but
@@ -2653,17 +2656,11 @@ static int cf_check sh_page_fault(
      * caught by user-mode page-table check above.
      */
  emulate_readonly:
-    if ( !is_hvm_domain(d) )
-    {
-        ASSERT_UNREACHABLE();
-        goto not_a_shadow_fault;
-    }
-
-#ifdef CONFIG_HVM
-    /* Unshadow if we are writing to a toplevel pagetable that is
-     * flagged as a dying process, and that is not currently used. */
-    if ( sh_mfn_is_a_page_table(gmfn) && is_hvm_domain(d) &&
-         mfn_to_page(gmfn)->pagetable_dying )
+    /*
+     * Unshadow if we are writing to a toplevel pagetable that is
+     * flagged as a dying process, and that is not currently used.
+     */
+    if ( sh_mfn_is_a_page_table(gmfn) && mfn_to_page(gmfn)->pagetable_dying )
     {
         int used = 0;
         struct vcpu *tmp;
@@ -2867,13 +2864,9 @@ static int cf_check sh_page_fault(
  emulate_done:
     SHADOW_PRINTK("emulated\n");
     return EXCRET_fault_fixed;
-#endif /* CONFIG_HVM */
 
  mmio:
-    if ( !guest_mode(regs) )
-        goto not_a_shadow_fault;
-#ifdef CONFIG_HVM
-    ASSERT(is_hvm_vcpu(v));
+    ASSERT(is_hvm_domain(d));
     perfc_incr(shadow_fault_mmio);
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
@@ -2884,9 +2877,7 @@ static int cf_check sh_page_fault(
     trace_shadow_gen(TRC_SHADOW_MMIO, va);
     return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
             ? EXCRET_fault_fixed : 0);
-#else
-    BUG();
-#endif
+#endif /* CONFIG_HVM */
 
  not_a_shadow_fault:
     sh_audit_gw(v, &gw);



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:24:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:24:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481055.745740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUtZ-0002xB-PL; Thu, 19 Jan 2023 13:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481055.745740; Thu, 19 Jan 2023 13:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIUtZ-0002x4-Ml; Thu, 19 Jan 2023 13:23:57 +0000
Received: by outflank-mailman (input) for mailman id 481055;
 Thu, 19 Jan 2023 13:23:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DP+J=5Q=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIUtY-0002wy-77
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:23:56 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2057.outbound.protection.outlook.com [40.107.20.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84f30e3e-97fc-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 14:23:54 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8545.eurprd04.prod.outlook.com (2603:10a6:20b:420::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Thu, 19 Jan
 2023 13:23:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 13:23:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84f30e3e-97fc-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b8577d0f-b5d8-4e1b-8006-31fd91f83220@suse.com>
Date: Thu, 19 Jan 2023 14:23:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 0/4] Basic early_printk and smoke test implementation
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, Doug Goldstein <cardoe@cardoe.com>,
 xen-devel@lists.xenproject.org
References: <cover.1673877778.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cover.1673877778.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0209.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8545:EE_
X-MS-Office365-Filtering-Correlation-Id: 198e93f4-0e19-4891-721e-08dafa206826
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 198e93f4-0e19-4891-721e-08dafa206826
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 13:23:52.1781
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SH4LyEihRro0JZCAeKY9ujbk/xfGIfdGPOCK0PxaKwIMEQPLIma56G9vs50ZTZyg9Qi8XLsifSg9500rles6bw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8545

On 19.01.2023 13:45, Oleksii Kurochko wrote:
> The patch series introduces the following:
> - the minimal set of headers and changes inside them.
> - SBI (RISC-V Supervisor Binary Interface) things necessary for basic
>   early_printk implementation.
> - things needed to set up the stack.
> - early_printk() function to print only strings.
> - RISC-V smoke test which checks if  "Hello from C env" message is
>   present in serial.tmp
> 
> ---
> Changes in V4:
>     - Patches "xen/riscv: introduce dummy asm/init.h" and "xen/riscv: introduce
>       stack stuff" were removed from the patch series as they were merged separately
>       into staging.
>     - Remove "depends on RISCV*" from Kconfig.debug as Kconfig.debug is located
>       in arch specific folder.
>     - fix code style.
>     - Add "ifdef __riscv_cmodel_medany" to early_printk.c.  

Did you really mean to send v4 another time?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:44:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:44:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481068.745754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVCu-0005Rp-Jf; Thu, 19 Jan 2023 13:43:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481068.745754; Thu, 19 Jan 2023 13:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVCu-0005Ri-GC; Thu, 19 Jan 2023 13:43:56 +0000
Received: by outflank-mailman (input) for mailman id 481068;
 Thu, 19 Jan 2023 13:43:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIVCt-0005Rc-KI
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:43:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVCs-0003As-Sm; Thu, 19 Jan 2023 13:43:54 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.13.107]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVCs-0004p8-Lm; Thu, 19 Jan 2023 13:43:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <dfa17a7d-e360-1469-80d6-c9ee9981f64e@xen.org>
Date: Thu, 19 Jan 2023 13:43:52 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 08/17] tools/xenstore: change per-domain node
 accounting interface
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230118095016.13091-1-jgross@suse.com>
 <20230118095016.13091-9-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230118095016.13091-9-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 18/01/2023 09:50, Juergen Gross wrote:
> Rework the interface and the internals of the per-domain node
> accounting:
> 
> - rename the functions to domain_nbentry_*() in order to better match
>    the related counter name
> 
> - switch the interface from a node pointer to a domid, as all nodes
>    have the owner filled in
> 
> - use a common internal function for adding a value to the counter
> 
> For the transaction case, add a helper function to get the list head
> of the per-transaction changed domains, enabling the
> transaction_entry_*() functions to be eliminated.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
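
The reworked accounting described above can be modelled as a minimal sketch (only the domain_nbentry_*() naming comes from the patch; the flat counter array and the exact signatures are assumptions for illustration, not xenstored's actual data structures):

```c
#include <assert.h>

/* Toy model: one node counter per domid, with a single common helper
 * applying a signed delta (the real code keeps this in per-domain
 * state inside xenstored; this flat array is illustrative only). */
#define MAX_DOMS 16
static int nbentry[MAX_DOMS];

/* Common internal function for adding a value to the counter. */
static void domain_nbentry_add(unsigned int domid, int add)
{
    nbentry[domid] += add;
}

void domain_nbentry_inc(unsigned int domid) { domain_nbentry_add(domid, 1); }
void domain_nbentry_dec(unsigned int domid) { domain_nbentry_add(domid, -1); }
int  domain_nbentry(unsigned int domid)     { return nbentry[domid]; }
```

Funnelling both directions through one helper keeps any clamping or quota logic in a single place.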


Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:45:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:45:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481073.745764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVDh-0005x7-SD; Thu, 19 Jan 2023 13:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481073.745764; Thu, 19 Jan 2023 13:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVDh-0005x0-Pf; Thu, 19 Jan 2023 13:44:45 +0000
Received: by outflank-mailman (input) for mailman id 481073;
 Thu, 19 Jan 2023 13:44:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIVDg-0005wc-VA
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:44:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVDg-0003C5-2B; Thu, 19 Jan 2023 13:44:44 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.13.107]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVDf-0004yb-TA; Thu, 19 Jan 2023 13:44:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=5pqVR02lTa+ra0WNhgR4h4WJ9IuegtAWsnEXz3rsEM0=; b=Kg7N7W46c66O8QKcY5B+Aru4oR
	0w8xbKCnh+731I9rE1d2A0hGRb3RGmiWq/sRagiLM23Tr9LD0a3HJZTIHBWLYj3KsjU5nluKpWroc
	srv+M3AKklbNsqaW5JUZGqYcMcESCQhGaupoAcsnDCPq3vP1t/JQM+G1BAXstQ+NxOdI=;
Message-ID: <dcfe62e0-ee9c-57ca-205d-2beca5bb678f@xen.org>
Date: Thu, 19 Jan 2023 13:44:42 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 12/17] tools/xenstore: don't let hashtable_remove()
 return the removed value
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230118095016.13091-1-jgross@suse.com>
 <20230118095016.13091-13-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230118095016.13091-13-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 18/01/2023 09:50, Juergen Gross wrote:
> The value returned by hashtable_remove() is not used anywhere in
> Xenstore, and returning it conflicts with a hashtable created with
> the HASHTABLE_FREE_VALUE flag.
> 
> So just drop returning the value.
> 
> This of course requires freeing the value if HASHTABLE_FREE_VALUE was
> specified, as it would otherwise be leaked.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
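
As a rough sketch of the behavioural change (a toy single-slot table; everything except the hashtable_remove() name and the HASHTABLE_FREE_VALUE flag is an assumption), remove() now returns void and frees the stored value itself when the flag is set:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define HASHTABLE_FREE_VALUE 1u

/* Toy single-slot "hashtable"; the real structure is different. */
struct hashtable {
    unsigned int flags;
    const char *key;
    void *value;
};

/* After the change: no return value. The value is freed here when
 * HASHTABLE_FREE_VALUE was specified, so nothing is leaked even
 * though the caller never sees the pointer. */
void hashtable_remove(struct hashtable *ht, const char *key)
{
    if (ht->key && !strcmp(ht->key, key)) {
        if (ht->flags & HASHTABLE_FREE_VALUE)
            free(ht->value);
        ht->key = NULL;
        ht->value = NULL;
    }
}
```

Returning a freed pointer would hand the caller a dangling reference, which is exactly the conflict the patch description points out.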

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:46:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481080.745774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVEe-0006a3-8G; Thu, 19 Jan 2023 13:45:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481080.745774; Thu, 19 Jan 2023 13:45:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVEe-0006Zu-5b; Thu, 19 Jan 2023 13:45:44 +0000
Received: by outflank-mailman (input) for mailman id 481080;
 Thu, 19 Jan 2023 13:45:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIVEc-0006Zn-SS
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:45:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVEc-0003FC-PW; Thu, 19 Jan 2023 13:45:42 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.13.107]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVEc-0004zy-Js; Thu, 19 Jan 2023 13:45:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=YUYZvw4RROg+8CsoP3Sio+dwD0Ri+23YpiqwwpCUAu0=; b=2I+3J4Q1I2AFKv8Q84+nG633MC
	bTs8dBzynUcRBH1RtDbs3hmnv76olYWTYu1ZChJiXPfJsU2BzlEAvaQitqyORQk7C0q6H+HUhesHg
	Zz1rlaQBOOT6eYBuZ3BzdA0MHzJGkChLCSbpjWITe+IFxHhNuRArO53+h+3A1BQFj2gM=;
Message-ID: <84a03cf7-1c3d-4e61-4865-e90d7a9c7862@xen.org>
Date: Thu, 19 Jan 2023 13:45:40 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 14/17] tools/xenstore: introduce trace classes
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230118095016.13091-1-jgross@suse.com>
 <20230118095016.13091-15-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230118095016.13091-15-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 18/01/2023 09:50, Juergen Gross wrote:
> Make the xenstored internal trace configurable by adding classes
> which can be switched on and off independently of each other.
> 
> Define the following classes:
> 
> - obj: Creation and deletion of interesting "objects" (watch,
>    transaction, connection)
> - io: incoming requests and outgoing responses
> - wrl: write limiting
> 
> By default, "obj" and "io" are switched on.
> 
> Entries written via trace() will always be printed (if tracing is on
> at all).
> 
> Add the capability to control the trace settings via the "log"
> command and via a new "--log-control" command line option.
> 
> Add a missing trace_create() call for creating a transaction.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
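
The class scheme above might be modelled as a bitmask (the class names and defaults come from the patch; the bitmask representation and the trace_enabled() helper are assumptions):

```c
#include <assert.h>

/* Trace classes, independently switchable. */
#define TRACE_OBJ (1u << 0)  /* creation/deletion of watches, transactions, connections */
#define TRACE_IO  (1u << 1)  /* incoming requests and outgoing responses */
#define TRACE_WRL (1u << 2)  /* write limiting */

/* "obj" and "io" are on by default. */
static unsigned int trace_flags = TRACE_OBJ | TRACE_IO;

/* Entries written via plain trace() bypass this check and are always
 * printed when tracing is enabled at all. */
int trace_enabled(unsigned int class)
{
    return (trace_flags & class) != 0;
}
```

A bitmask makes "log" command and --log-control style toggling a matter of setting or clearing single bits at runtime.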

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:46:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481087.745784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVFa-00078B-Ir; Thu, 19 Jan 2023 13:46:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481087.745784; Thu, 19 Jan 2023 13:46:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVFa-000784-EX; Thu, 19 Jan 2023 13:46:42 +0000
Received: by outflank-mailman (input) for mailman id 481087;
 Thu, 19 Jan 2023 13:46:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIVFY-00077o-DC
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 13:46:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVFX-0003QN-SG; Thu, 19 Jan 2023 13:46:39 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.13.107]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVFX-00050z-Mr; Thu, 19 Jan 2023 13:46:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=djdDb4AAb62iIEjZ1ymbb4hKXz8DEMYviTvN3i3qlhM=; b=xryrQJFTuuL+5tBIDe4m7gZsMV
	XS0J5jqOzJR/8372xkMFOGgBqaJj41ZfaLTQYI0AsovHpmPmGupfzCEeE9syWLK/SjTgFaRFsgtVS
	g+L3uTTrxG3AnbwiP7NG22T4LvMJ3/k3mq+ndiAiP0rti91SngyfJSZYzKnKmf8b3/eo=;
Message-ID: <85ac4580-f2e5-eb96-6d21-7ed92057710a@xen.org>
Date: Thu, 19 Jan 2023 13:46:38 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 15/17] tools/xenstore: let check_store() check the
 accounting data
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230118095016.13091-1-jgross@suse.com>
 <20230118095016.13091-16-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230118095016.13091-16-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 18/01/2023 09:50, Juergen Gross wrote:
> Today check_store() only tests the correctness of the node tree.
> 
> Add verification of the accounting data (number of nodes) and correct
> the data if it is wrong.
> 
> Do the initial check_store() call only after the Xenstore entries of a
> live update have been read, to make sure the accounting data is
> correct after a live update.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
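
The verify-and-correct step could look roughly like this (a sketch under assumed data structures; only the idea of recounting nodes per owner and fixing a wrong counter comes from the patch):

```c
#include <assert.h>

#define MAX_DOMS 8

/* Assumed state: stored per-domain node counts, with the node tree
 * flattened to a list of owner domids for this illustration. */
static int nbentry[MAX_DOMS];

/* Recount nodes per owner and correct nbentry[] where it disagrees;
 * returns the number of counters that had to be fixed. */
int check_store_accounting(const unsigned int *owners, int nr_nodes)
{
    int counted[MAX_DOMS] = { 0 };
    int i, fixed = 0;

    for (i = 0; i < nr_nodes; i++)
        counted[owners[i]]++;

    for (i = 0; i < MAX_DOMS; i++) {
        if (nbentry[i] != counted[i]) {
            nbentry[i] = counted[i];
            fixed++;
        }
    }
    return fixed;
}
```

Running this only after live-update state has been restored matters: recounting against a half-loaded tree would "correct" the counters to wrong values.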

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 13:52:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 13:52:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481094.745794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVKs-0000Dx-66; Thu, 19 Jan 2023 13:52:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481094.745794; Thu, 19 Jan 2023 13:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVKs-0000Dq-2Z; Thu, 19 Jan 2023 13:52:10 +0000
Received: by outflank-mailman (input) for mailman id 481094;
 Thu, 19 Jan 2023 13:52:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIVKr-0000Dg-5g; Thu, 19 Jan 2023 13:52:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIVKq-0003XO-TE; Thu, 19 Jan 2023 13:52:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIVKq-0002EF-FN; Thu, 19 Jan 2023 13:52:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIVKq-0004Zm-Ev; Thu, 19 Jan 2023 13:52:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bXdbaKVj8fVloEGLpHOYKwXyT3At6/CW/+W3g9CXVWA=; b=7EI+9vP+p1EZTnlnF07Ttwcaim
	yHPZvLetsQeclKw1mE5HZ4YVTGY5Xu9KVIbDOsH/z9FPapEgNJtKDQ12J8/BY7AUNTpFrb2xb6Tnz
	F9nrZCnJSkoR10jEjW+FyW//b8DuvKd+F8yF3fcXocvp02xcXdaiDQE60eeKBLIitczY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175963-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 175963: regressions - FAIL
X-Osstest-Failures:
    qemu-upstream-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=625eb5e96dc96aa7fddef59a08edae215527f19c
X-Osstest-Versions-That:
    qemuu=1cf02b05b27c48775a25699e61b93b814b9ae042
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 13:52:08 +0000

flight 175963 qemu-upstream-unstable real [real]
flight 175980 qemu-upstream-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175963/
http://logs.test-lab.xenproject.org/osstest/logs/175980/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 175283

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175283
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175283
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175283
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175283
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175283
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175283
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175283
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175283
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                625eb5e96dc96aa7fddef59a08edae215527f19c
baseline version:
 qemuu                1cf02b05b27c48775a25699e61b93b814b9ae042

Last test of basis   175283  2022-12-15 15:42:37 Z   34 days
Testing same since   175938  2023-01-17 15:37:13 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 625eb5e96dc96aa7fddef59a08edae215527f19c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Fri Jan 6 15:21:10 2023 +0100

    configure: Expand test which disables -Wmissing-braces
    
    With "clang 6.0.0-1ubuntu2" on Ubuntu Bionic, the test builds
    fine, but clang still suggests braces around the zero initializer
    in a few places where there is a subobject. Expand the test to
    include a sub-struct, which doesn't build on clang 6.0.0-1ubuntu2
    and gives:
        config-temp/qemu-conf.c:7:8: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        } x = {0};
               ^
               {}
    
    These are the errors reported by clang on QEMU's code (v7.2.0):
    hw/pci-bridge/cxl_downstream.c:101:51: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        dvsec = (uint8_t *)&(CXLDVSECPortExtensions){ 0 };
    
    hw/pci-bridge/cxl_root_port.c:62:51: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        dvsec = (uint8_t *)&(CXLDVSECPortExtensions){ 0 };
    
    tests/qtest/virtio-net-test.c:322:34: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
        QOSGraphTestOptions opts = { 0 };
    
    Reported-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Message-Id: <20230106142110.672-1-anthony.perard@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:07:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:07:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481130.745827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZp-000316-39; Thu, 19 Jan 2023 14:07:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481130.745827; Thu, 19 Jan 2023 14:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZo-00030Y-VG; Thu, 19 Jan 2023 14:07:36 +0000
Received: by outflank-mailman (input) for mailman id 481130;
 Thu, 19 Jan 2023 14:07:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIVZn-0002xN-Vk
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:07:35 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9f1c5990-9802-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 15:07:34 +0100 (CET)
Received: by mail-wm1-x32c.google.com with SMTP id g10so1638897wmo.1
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 06:07:34 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 m27-20020a05600c3b1b00b003db012d49b7sm7710178wms.2.2023.01.19.06.07.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 06:07:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f1c5990-9802-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jw5z92GdwkOpMq67zyGgBHYZBK1yWuFbI322wDA/UW4=;
        b=hVoM7hPlhAeqWlT93JqTd1mxy1MsnUUU2u7L66yRzLAxRgy3fbB0xK+Ag+jaWpnEDQ
         j6ywgy3AsgCY15Olqg4eeG3ajTdinCyRgbXyzl4ZPnlqt1qaAF4l8OpzF9jcSuISSoTR
         IreTmWgq5OF9q2BJm9ukVHDDobwpxr2asdhajUwIqOZlwwq4m5bOGUnlwoGbUbPA1D8h
         HX4yybvyxMcOg8jOGpaRFFxCtqGczcFADQNCgGVOVajY2bhlmpsZ/FcZMSaNPatd8N5I
         XFrm/DijyCg3WfBwhZR5Lp5fLybIFNWjIqK3dbuwwMgz3e/Ydgyzkks3VhvpkuAGnEaQ
         MQGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jw5z92GdwkOpMq67zyGgBHYZBK1yWuFbI322wDA/UW4=;
        b=uFnbbl64H+rY3aBw7/DIauJk/rkR3UKaUBSeHDvlaaFRHz8ZsM43dKoCmje7v2bs/v
         EmNLLtB6hmc4F4yjUFPisKuE5TJMS77PzBC2pwLNXqS6orIO0+fLfbydGmtzC1Fo8TZx
         Q7NGkmHALquyvQ8AFrCdMiZvB4n91DMFfMWUfOo8LATud6NLkxIWAKuGG9QS8qqE6Nfh
         Q9icGju/2CwE0ObrL8/qvrWSlcKu6J5TENLPRucPRzK7zED1jGHop7lLGG/qctxLIVh1
         jCdxav/vny/305bBeUqSMBUkMqAEhfLWaFM10pl+OaKVsy3TkUQ6vffRo+3k/ClaPKNr
         Mq8Q==
X-Gm-Message-State: AFqh2kpDsA00YhHJDxDRECanh0O3TJQj1iZIlS4M0qBQk1K+TVS8G3Os
	rFLLG4L8l23jt5AMCZivxoSo0VuhkMz0gQ==
X-Google-Smtp-Source: AMrXdXso+VlLck+nuGDxhZ6lIwyeNuE6ySJ/rTv+sX03C0sCknM9MJ3l9n8n5yvH8oVtgULUxe12vA==
X-Received: by 2002:a05:600c:3d06:b0:3da:f945:2354 with SMTP id bh6-20020a05600c3d0600b003daf9452354mr10404349wmb.41.1674137253886;
        Thu, 19 Jan 2023 06:07:33 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 1/5] xen/include: Change <asm/types.h> to <xen/types.h>
Date: Thu, 19 Jan 2023 16:07:18 +0200
Message-Id: <916d01663e76a3a0acad93f6c234834deaa2dd72.1674131459.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674131459.git.oleksii.kurochko@gmail.com>
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the patch "include/types: move stddef.h-kind types to common
header" [1], size_t was moved from <asm/types.h> to <xen/types.h>,
so early_printk should be updated accordingly.

[1] https://lore.kernel.org/xen-devel/5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com/

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/include/xen/early_printk.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/xen/early_printk.h b/xen/include/xen/early_printk.h
index abb34687da..5d72293793 100644
--- a/xen/include/xen/early_printk.h
+++ b/xen/include/xen/early_printk.h
@@ -4,7 +4,7 @@
 #ifndef __XEN_EARLY_PRINTK_H__
 #define __XEN_EARLY_PRINTK_H__
 
-#include <asm/types.h>
+#include <xen/types.h>
 
 #ifdef CONFIG_EARLY_PRINTK
 void early_puts(const char *s, size_t nr);
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:07:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481131.745841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZq-0003Sf-Dd; Thu, 19 Jan 2023 14:07:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481131.745841; Thu, 19 Jan 2023 14:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZq-0003SU-9F; Thu, 19 Jan 2023 14:07:38 +0000
Received: by outflank-mailman (input) for mailman id 481131;
 Thu, 19 Jan 2023 14:07:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIVZo-0002xN-Vm
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:07:37 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9fa18b36-9802-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 15:07:35 +0100 (CET)
Received: by mail-wm1-x32b.google.com with SMTP id k16so1634377wms.2
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 06:07:35 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 m27-20020a05600c3b1b00b003db012d49b7sm7710178wms.2.2023.01.19.06.07.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 06:07:34 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fa18b36-9802-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=o8pOiu1yPNSj0fET51Dhdg4Hqst86W5gYJV+0AgZfn8=;
        b=ii88x7Zl7GGceAEXg9iarLk3km8FnYT72UZRcKT2fhJpYkTI/1yTzFl4YkJcicHD65
         KWK+1UfNupnjdr/cs5hc4vWblf5XrbdVWV6uLn27qz5hC7oJSjzs9dipdiCXtQFPIxxQ
         AaH8jH1u6Mdi/wqVY8BDDBrLr5y80aTzr0OspQjc1reoua6HeM6dNS9+MIFD0vkVtT4x
         oJcWWnOAMmr0FE82y2IzoUA4iYLCjH8JAzmM+e14A085e2kO9Z15tn/47qtUbxhqYBYP
         mW9qMII/B24ilbyiE4AZfDOS5TmgbLjgzZkb6DWP8vg5gz9o19b4nyPUIa5iuXMo0GxU
         Gujw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=o8pOiu1yPNSj0fET51Dhdg4Hqst86W5gYJV+0AgZfn8=;
        b=T3bSgFhVQeIEOVoArsjVuDA8aAoIr0vfmbVotFDW10qbGnXxAdBKHSwU7aWzWpEMUZ
         c0PBdPYQHyhlcv+7Yd918O2KaFtvqslmwMaPT+z/+uXtycpFmPmOFBiTg3gX+4ut4JPI
         +e26IWS60PBz0jJPW6FWn/N/lNwQ3p7sPfUsZ111fhJNjjhouCnTp6YItgZHeqmfZ5V2
         zL0oY62RYLxkJJmTC6++8jZPmaIckMENZPXNDXuCuiZj68P4MmPtrP+RpWpX4XyEYl68
         iy+HA+WBoSe70lEtgPS6ONR3mVtD3a1Co21J0LIY93B2JvMpJF6QIVewzyAvTSd973Si
         GCAw==
X-Gm-Message-State: AFqh2koZfHa4owGicX8+mMRKbJqbDJPDk3AtXLZDdgu6tQWqDxEhoD5f
	XiOqYLjeiUSV2mOCgKgTbedLDm8eKO2V9A==
X-Google-Smtp-Source: AMrXdXu2jRTRlqhR6G4xk/Yl7264yLf/lnMngbJTPksZ+MSKGQzblfXCAWUVVp4mip3kSMfYY9K4qg==
X-Received: by 2002:a05:600c:6014:b0:3db:127e:403 with SMTP id az20-20020a05600c601400b003db127e0403mr5688209wmb.37.1674137254807;
        Thu, 19 Jan 2023 06:07:34 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v5 2/5] xen/riscv: introduce asm/types.h header file
Date: Thu, 19 Jan 2023 16:07:19 +0200
Message-Id: <851a3fa74defe5174335646e2a79096bd8d432f8.1674131459.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674131459.git.oleksii.kurochko@gmail.com>
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V5:
    - Remove size_t from asm/types.h after rebasing on top of the patch
      "include/types: move stddef.h-kind types to common header" [1].
    - All other types were brought back as they are used in <xen/types.h>
      and in xen/common.
---
Changes in V4:
    - Clean up types in <asm/types.h> and retain only the necessary ones.
      The following types were removed as they are defined in <xen/types.h>:
      {__|}{u|s}{8|16|32|64}
---
Changes in V3:
    - Nothing changed
---
Changes in V2:
    - Remove now-unneeded types from <asm/types.h>
---
 xen/arch/riscv/include/asm/types.h | 70 ++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/types.h

diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
new file mode 100644
index 0000000000..64976f118d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/types.h
@@ -0,0 +1,70 @@
+#ifndef __RISCV_TYPES_H__
+#define __RISCV_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8;
+
+typedef __signed__ short __s16;
+typedef unsigned short __u16;
+
+typedef __signed__ int __s32;
+typedef unsigned int __u32;
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+#if defined(CONFIG_RISCV_32)
+typedef __signed__ long long __s64;
+typedef unsigned long long __u64;
+#elif defined (CONFIG_RISCV_64)
+typedef __signed__ long __s64;
+typedef unsigned long __u64;
+#endif
+#endif
+
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+#if defined(CONFIG_RISCV_32)
+
+typedef signed long long s64;
+typedef unsigned long long u64;
+typedef u32 vaddr_t;
+#define PRIvaddr PRIx32
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0ULL)
+#define PRIpaddr "016llx"
+typedef u32 register_t;
+#define PRIregister "x"
+
+#elif defined (CONFIG_RISCV_64)
+
+typedef signed long s64;
+typedef unsigned long u64;
+typedef u64 vaddr_t;
+#define PRIvaddr PRIx64
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "016lx"
+typedef u64 register_t;
+#define PRIregister "lx"
+
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_TYPES_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:07:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481129.745821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZo-0002xf-QF; Thu, 19 Jan 2023 14:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481129.745821; Thu, 19 Jan 2023 14:07:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZo-0002xY-N2; Thu, 19 Jan 2023 14:07:36 +0000
Received: by outflank-mailman (input) for mailman id 481129;
 Thu, 19 Jan 2023 14:07:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIVZn-0002xN-7c
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:07:35 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9ea73882-9802-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 15:07:33 +0100 (CET)
Received: by mail-wm1-x32d.google.com with SMTP id
 q10-20020a1cf30a000000b003db0edfdb74so2861033wmq.1
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 06:07:33 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 m27-20020a05600c3b1b00b003db012d49b7sm7710178wms.2.2023.01.19.06.07.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 06:07:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ea73882-9802-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=xpr2uf8npxwXpcyJBFnTtrCJsIs2f2Gz+5f3ThRTvss=;
        b=fGEadkPpLrWopmV8jAeLgieIRkOkzcOXNzVDAeFWDFU3tzfEF4oMGuISb6Qqp+3O+Y
         9GSMbsgdpCLs+7C2/saNzNF9ZGhdZTYIaZTmKi2O8NfPm/WA1O+43ynX4s+eUKvP36RC
         vondfMTSTFgVyo72kZSZLbdgEIxUt7kMg7CYijFvxlHy5GKSbNIO9+J7f+k/Q9dZBkZT
         7zLFrwpVGETpoIPCTyxxhVdc0wE4QrDTIRgCZ8ipJQ4YBDem9bcY5BRfhYoeUwi6GkWx
         3Qx2n13Q0l0g90lNhjXYS9NBJOSx5RUAiHU7IAeBDWEyMzMeGvIfjfvzlddDWfGMt2Uq
         A1zw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=xpr2uf8npxwXpcyJBFnTtrCJsIs2f2Gz+5f3ThRTvss=;
        b=NJj5f+pSUBjLWiUFp/XokfxdCatG/dYMl9bZyU0C8fJ6ek8PvEQZZhUaMMp6WkQASu
         hutQag1A2p9GmguWCwQbzhK9xRyIptTQmrgTR/pOE6sLvCixGNLxgY9qYDCbQv4E6kpM
         EQUzHWYigAQFqKWQXa6LL4E7C0Sc1n/QPiasfVsD/zHh585yAcfZWGB1DFUx6WwsdFg4
         iQq3TRVgKOCWCtkYQQySaFnSIRsnQzowgIscUNQhuArPtBtHHEsMrNpkVy2R9q4rkhFH
         sIJBhtJfqyKiNdFdqEhvMXnQCsuhcEvmJklSxLxCNT8Mbyg0cGraRAPJ353p+zbp51mB
         eojg==
X-Gm-Message-State: AFqh2kpDuQj3YAmyBiE2ZEK39f/t00kORRKch7bUXXHVtaQNEQN2sA+X
	z2Q7UmFT+6MMCET4vqJf8vDajh2RsRqKnbVk
X-Google-Smtp-Source: AMrXdXtRJwAkOtx/qVNfyL9WXQMOg8KfxHeU+1sAFlqO4pUmTiEZ9Kt8H7bHUzuSaqixcgVJnO49ng==
X-Received: by 2002:a7b:c4ca:0:b0:3da:17f5:57b9 with SMTP id g10-20020a7bc4ca000000b003da17f557b9mr10744380wmk.5.1674137253027;
        Thu, 19 Jan 2023 06:07:33 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v5 0/5]  Basic early_printk and smoke test implementation
Date: Thu, 19 Jan 2023 16:07:17 +0200
Message-Id: <cover.1674131459.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- the minimal set of headers and changes inside them.
- SBI (RISC-V Supervisor Binary Interface) things necessary for basic
  early_printk implementation.
- things needed to set up the stack.
- early_printk() function to print only strings.
- RISC-V smoke test which checks that the "Hello from C env" message is
  present in serial.tmp

The patch series is rebased on top of the patch "include/types: move
stddef.h-kind types to common header" [1].

[1] https://lore.kernel.org/xen-devel/5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com/

---
Changes in V5:
  - Code style fixes
  - Remove size_t from asm/types.h after rebasing on top of the patch
    "include/types: move stddef.h-kind types to common header". [1]

    All other types were brought back as they are used in <xen/types.h>
  - Update <xen/early_printk.h> after rebasing on top of [1], as size_t was
    moved from <asm/types.h> to <xen/types.h>
  - Remove unneeded <xen/errno.h> from sbi.c
  - Change the #error message for the case where __riscv_cmodel_medany
    isn't defined
---
Changes in V4:
    - Patches "xen/riscv: introduce dummy asm/init.h" and "xen/riscv: introduce
      stack stuff" were removed from the patch series as they were merged separately
      into staging.
    - Remove "depends on RISCV*" from Kconfig.debug as Kconfig.debug is located
      in arch specific folder.
    - fix code style.
    - Add "ifdef __riscv_cmodel_medany" to early_printk.c.
---
Changes in V3:
    - Most of "[PATCH v2 7/8] xen/riscv: print hello message from C env"
      was merged with [PATCH v2 3/6] xen/riscv: introduce stack stuff.
    - "[PATCH v2 7/8] xen/riscv: print hello message from C env" was
      merged with "[PATCH v2 6/8] xen/riscv: introduce early_printk basic
      stuff".
    - "[PATCH v2 5/8] xen/include: include <asm/types.h> in
      <xen/early_printk.h>" was removed as it has been already merged to
      mainline staging.
    - code style fixes.
---
Changes in V2:
    - update patches' commit messages according to the mailing
      list comments
    - Remove unneeded types in <asm/types.h>
    - Introduce definition of STACK_SIZE
    - order the files alphabetically in Makefile
    - Add license to early_printk.c
    - Add RISCV_32 dependency to config EARLY_PRINTK in Kconfig.debug
    - Move dockerfile changes to a separate config and send them as a
      separate patch to the mailing list.
    - Update test.yaml to wire up smoke test
---

Bobby Eshleman (1):
  xen/riscv: introduce sbi call to putchar to console

Oleksii Kurochko (4):
  xen/include: Change <asm/types.h> to <xen/types.h>
  xen/riscv: introduce asm/types.h header file
  xen/riscv: introduce early_printk basic stuff
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 +++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 +++++++
 xen/arch/riscv/Kconfig.debug              |  5 ++
 xen/arch/riscv/Makefile                   |  2 +
 xen/arch/riscv/early_printk.c             | 45 +++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++
 xen/arch/riscv/include/asm/sbi.h          | 34 +++++++++++
 xen/arch/riscv/include/asm/types.h        | 70 +++++++++++++++++++++++
 xen/arch/riscv/sbi.c                      | 44 ++++++++++++++
 xen/arch/riscv/setup.c                    |  4 ++
 xen/include/xen/early_printk.h            |  2 +-
 11 files changed, 257 insertions(+), 1 deletion(-)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/include/asm/types.h
 create mode 100644 xen/arch/riscv/sbi.c

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:07:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481134.745871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZt-0004Ce-DG; Thu, 19 Jan 2023 14:07:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481134.745871; Thu, 19 Jan 2023 14:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZt-0004BO-44; Thu, 19 Jan 2023 14:07:41 +0000
Received: by outflank-mailman (input) for mailman id 481134;
 Thu, 19 Jan 2023 14:07:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIVZs-0002xN-01
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:07:40 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a16bad09-9802-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 15:07:38 +0100 (CET)
Received: by mail-wm1-x32c.google.com with SMTP id
 fl11-20020a05600c0b8b00b003daf72fc844so3680726wmb.0
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 06:07:38 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 m27-20020a05600c3b1b00b003db012d49b7sm7710178wms.2.2023.01.19.06.07.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 06:07:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a16bad09-9802-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=YgsH/CzG/dU21eVLxWuw+pTYHwBhlSsBbVXpHvXre8w=;
        b=GG8lkNuDJ7IxD/6TF5mKdV3iyZ6bOAtHF4yjOOg5H44I6mKGQ6itZNWInZe4nILewy
         pLQIUAV0nznuHRLIupof4W9e81l50QnzJlWOUbGtSqlGX1F86TS0IMrfhB849i6VjdYI
         Bs/Prj8aBnYpsVIrZuCoX5h0ex7xbiQC9cUf4eYCTjkiLTitfjjkfpMOuP5+B6B9NFGu
         HG06zMMMH1246nYKAW7PFSoSUkuqnaPpBaaC2Qb/0/aYmMU/VHF578sYaJqZ28ybEgK1
         5vWMwsDnlH9wfh7hctSq73eq/H+tvvFEdK4raFfkTkHITC8xdSsII5LrWmXQZUvYrYcU
         J5AQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=YgsH/CzG/dU21eVLxWuw+pTYHwBhlSsBbVXpHvXre8w=;
        b=LUJ2zM+B9/ws+lH+82cDpsfV8vV7wuSzBCEs4IuXvwgE6s0MAq4KmHDsht82omsLiT
         FafgGPuLPNM367gGu+4X5B0KEy620UoaUzb7eIDcvxlLxfGj0RQFPKGm1W07k6tkKEMN
         XFh42Ql0qTeFYTNGf6NPf6EgEsn1nIDTazYzZ7Bh906XadDGPuxBi3K2M/XGo+MIPfb7
         xgGcd1tENZHW6h8t0Z7o31uAIfhhCRNpkD0c/2C9nIIz/bXNO9RDn9mTY6akWUevqr/Z
         aKj/mCDQmEnR61edjAi7aL9x+wq7hnk+5rDVZxBy8TjrSbnUSPRU7ckWfMpU2+2Ta35Z
         z5Ug==
X-Gm-Message-State: AFqh2ko9gvSv2ZLuaiGndav9WwW+RiGEvEgEN9FRectKXXRisrghTJ5r
	yslt046vTZQ7/U8A2GqKeWM8NLAA26rh7Q==
X-Google-Smtp-Source: AMrXdXu4wpk3TxRlzYdGxqTT7WSP1h04pgt7CCuwHlR6P4mC5Mye9uRcbBcbSLiVBz7lKXsxgvXLPw==
X-Received: by 2002:a7b:c4d0:0:b0:3d1:f6b3:2ce3 with SMTP id g16-20020a7bc4d0000000b003d1f6b32ce3mr10842359wmk.35.1674137257840;
        Thu, 19 Jan 2023 06:07:37 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v5 5/5] automation: add RISC-V smoke test
Date: Thu, 19 Jan 2023 16:07:22 +0200
Message-Id: <30727cdc63dfaad22f0781d35205646da130e4fa.1674131459.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674131459.git.oleksii.kurochko@gmail.com>
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the message 'Hello from C env' is present in the
log file, to be sure that the stack is set up and the C part of
early printk is working.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V5:
  - Nothing changed
---
Changes in V4:
  - Nothing changed
---
Changes in V3:
  - Nothing changed
  - All comments raised by Stefano on the Xen mailing list will be
    addressed in a separate patch outside of this patch series.
---
 automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index afd80adfe1..64f47a0ab9 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -54,6 +54,19 @@
   tags:
     - x86_64
 
+.qemu-riscv64:
+  extends: .test-jobs-common
+  variables:
+    CONTAINER: archlinux:riscv64
+    LOGFILE: qemu-smoke-riscv64.log
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - x86_64
+
 .yocto-test:
   extends: .test-jobs-common
   script:
@@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
   needs:
     - debian-unstable-clang-debug
 
+qemu-smoke-riscv64-gcc:
+  extends: .qemu-riscv64
+  script:
+    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - riscv64-cross-gcc
+
 # Yocto test jobs
 yocto-qemuarm64:
   extends: .yocto-test-arm64
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:07:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481132.745850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZr-0003iH-J1; Thu, 19 Jan 2023 14:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481132.745850; Thu, 19 Jan 2023 14:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZr-0003i6-G9; Thu, 19 Jan 2023 14:07:39 +0000
Received: by outflank-mailman (input) for mailman id 481132;
 Thu, 19 Jan 2023 14:07:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIVZp-0002xN-W3
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:07:38 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a046e46b-9802-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 15:07:36 +0100 (CET)
Received: by mail-wm1-x32a.google.com with SMTP id
 e19-20020a05600c439300b003db1cac0c1fso2162869wmn.5
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 06:07:36 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 m27-20020a05600c3b1b00b003db012d49b7sm7710178wms.2.2023.01.19.06.07.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 06:07:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a046e46b-9802-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Tsdz6F+oqCP32pi0Y4PzeZgzGGVRy5nzjQ8KxHS3U8s=;
        b=QoDowLQ9oNmt9UToXQ5EhBk73Rk5/X07w3PCpvOjHjLvXLUq+MQz/5ono9jm+/m4IX
         HdSIsNHWHv9WltmeWYfN9QuDNW7liN1dJw47o8bATpGcH48T8KZPrLMI4zby2HFCHWhe
         c4L35wSnkIYFfdpKC7LENNHSIlp1yNjinw3DtioM9ZUA5t/x6HwHPrSnCWyXT4jDj05o
         nFXIJlZkpmrvlEGWaIdu8JWI7kgbAuRCD1xMZdVdbLivgtp3nrgXClU3xlOCfMFm9mk4
         imntnLj92ONKr2mGtjSumGXJWIgEv89p7/+qzmsJZ1y79fQcixPosrYUoFE6De7A4W1Z
         X3/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Tsdz6F+oqCP32pi0Y4PzeZgzGGVRy5nzjQ8KxHS3U8s=;
        b=irRJJaRPsqVMKRHQHWZOwwGylcG1oerpvyxZYQiT9xSwM0ET72OZV/nyqIPB1LQtRF
         wSh3kSkF+DJ5+61WL4/zmQjurTFQGCFrD6JmULf+2zx2X7OA5VWodN0ze2T1XhMhEJxs
         9au/61bKyCR9GFphCBOnxTjNesZ6VcMQuUS7kqUCyjosiOAdxrXcm+XZqBZK9uOb+R4t
         kgTSqHSoEWyIG1/LZXbehOnjIM/D9qd97dh/eNgbtMLEceAa83rTXW62DgZ5KKlqRcyk
         WIgdf4GnkvKFnieARnjGJr7fWOyxP564I1Ui1Zop3R0iU6wEwngmBjr+iie8lkdb4Aoi
         DIjQ==
X-Gm-Message-State: AFqh2kr4h125Wld5o+X9SHPw53YJpYyA7og9pGsO4TAA9wsaz5osTtPF
	kyJzWtZIDoFJ3V++jRaeft+WoOcWAFVRIg==
X-Google-Smtp-Source: AMrXdXstIsFAEQtSpFTIpJ/n6HPwHMdgmnbHPoJSD1pVXXFWpAqU8gy1MiaUh3RjqUtDH/zLZDcoaA==
X-Received: by 2002:a05:600c:5390:b0:3d9:a145:4d1a with SMTP id hg16-20020a05600c539000b003d9a1454d1amr6509329wmb.34.1674137255851;
        Thu, 19 Jan 2023 06:07:35 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v5 3/5] xen/riscv: introduce sbi call to putchar to console
Date: Thu, 19 Jan 2023 16:07:20 +0200
Message-Id: <7f92b9f801058835237f05f16e2b03b93a9cfdf8.1674131459.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674131459.git.oleksii.kurochko@gmail.com>
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Bobby Eshleman <bobby.eshleman@gmail.com>

The SBI implementation for Xen was originally introduced by
Bobby Eshleman <bobby.eshleman@gmail.com>, but everything except the
SBI call for putting a character to the console was removed for
simplicity.

The patch introduces the sbi_console_putchar() SBI call, which is
necessary to implement the initial early_printk.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V5:
    - Nothing critical was changed
    - Code style fixes
    - Remove unneeded <xen/errno.h> from sbi.c
---
Changes in V4:
    - Nothing changed
---
Changes in V3:
    - update copyright's year
    - rename definition of __CPU_SBI_H__ to __ASM_RISCV_SBI_H__
    - fix indentations
    - change an author of the commit
---
Changes in V2:
    - add an explanatory comment about sbi_console_putchar() function.
    - order the files alphabetically in Makefile
---
 xen/arch/riscv/Makefile          |  1 +
 xen/arch/riscv/include/asm/sbi.h | 34 ++++++++++++++++++++++++
 xen/arch/riscv/sbi.c             | 44 ++++++++++++++++++++++++++++++++
 3 files changed, 79 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/sbi.h
 create mode 100644 xen/arch/riscv/sbi.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 5a67a3f493..fd916e1004 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_RISCV_64) += riscv64/
+obj-y += sbi.o
 obj-y += setup.o
 
 $(TARGET): $(TARGET)-syms
diff --git a/xen/arch/riscv/include/asm/sbi.h b/xen/arch/riscv/include/asm/sbi.h
new file mode 100644
index 0000000000..0e6820a4ed
--- /dev/null
+++ b/xen/arch/riscv/include/asm/sbi.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: (GPL-2.0-or-later) */
+/*
+ * Copyright (c) 2021-2023 Vates SAS.
+ *
+ * Taken from xvisor, modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Taken/modified from Xvisor project with the following copyright:
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ */
+
+#ifndef __ASM_RISCV_SBI_H__
+#define __ASM_RISCV_SBI_H__
+
+#define SBI_EXT_0_1_CONSOLE_PUTCHAR		0x1
+
+struct sbiret {
+    long error;
+    long value;
+};
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
+                        unsigned long arg0, unsigned long arg1,
+                        unsigned long arg2, unsigned long arg3,
+                        unsigned long arg4, unsigned long arg5);
+
+/**
+ * Writes given character to the console device.
+ *
+ * @param ch The data to be written to the console.
+ */
+void sbi_console_putchar(int ch);
+
+#endif /* __ASM_RISCV_SBI_H__ */
diff --git a/xen/arch/riscv/sbi.c b/xen/arch/riscv/sbi.c
new file mode 100644
index 0000000000..0ae166c861
--- /dev/null
+++ b/xen/arch/riscv/sbi.c
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Taken and modified from the xvisor project with the copyright Copyright (c)
+ * 2019 Western Digital Corporation or its affiliates and author Anup Patel
+ * (anup.patel@wdc.com).
+ *
+ * Modified by Bobby Eshleman (bobby.eshleman@gmail.com).
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2021-2023 Vates SAS.
+ */
+
+#include <asm/sbi.h>
+
+struct sbiret sbi_ecall(unsigned long ext, unsigned long fid,
+                        unsigned long arg0, unsigned long arg1,
+                        unsigned long arg2, unsigned long arg3,
+                        unsigned long arg4, unsigned long arg5)
+{
+    struct sbiret ret;
+
+    register unsigned long a0 asm ("a0") = arg0;
+    register unsigned long a1 asm ("a1") = arg1;
+    register unsigned long a2 asm ("a2") = arg2;
+    register unsigned long a3 asm ("a3") = arg3;
+    register unsigned long a4 asm ("a4") = arg4;
+    register unsigned long a5 asm ("a5") = arg5;
+    register unsigned long a6 asm ("a6") = fid;
+    register unsigned long a7 asm ("a7") = ext;
+
+    asm volatile (  "ecall"
+                    : "+r" (a0), "+r" (a1)
+                    : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
+                    : "memory");
+    ret.error = a0;
+    ret.value = a1;
+
+    return ret;
+}
+
+void sbi_console_putchar(int ch)
+{
+    sbi_ecall(SBI_EXT_0_1_CONSOLE_PUTCHAR, 0, ch, 0, 0, 0, 0, 0);
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:07:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481133.745856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZr-0003lE-Vb; Thu, 19 Jan 2023 14:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481133.745856; Thu, 19 Jan 2023 14:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVZr-0003ka-OI; Thu, 19 Jan 2023 14:07:39 +0000
Received: by outflank-mailman (input) for mailman id 481133;
 Thu, 19 Jan 2023 14:07:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qz+V=5Q=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIVZq-0002xN-W7
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:07:39 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a102b8a4-9802-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 15:07:37 +0100 (CET)
Received: by mail-wm1-x32c.google.com with SMTP id g10so1639048wmo.1
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 06:07:37 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 m27-20020a05600c3b1b00b003db012d49b7sm7710178wms.2.2023.01.19.06.07.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 19 Jan 2023 06:07:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a102b8a4-9802-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2bpQpEYSVqIRvMUSgTqd/R7B3iBoaJSFfEYHAexvgXQ=;
        b=FTuaF5IYor28Xsozkz6YQQyBsE1Slr/BSxLxhMi+Jy9pwosQMQb4IA7DE0NfZGX6Uz
         9HrrB9odaeYINTaZkASuGzmomMD07QL4ksm1d13asd43XZ8zUhp6LcwN+cBUAg0LovN4
         toHzKI7L5YxWVtcPTpA/Dk3BH10ycx9B5ToIfZobr3fISppysZFeLIJOTcKjb3Nj9Fmc
         akA8TqFNV6IkM338y2BMqXQhBJASYFdHxe9PonSCeD60dLEZbL3lTrF2eQ5XQq6ywSdw
         tP/bSDQoW8MlNTByl+kpWgW890CLLm8k/5G+k8jv7tCaW9Kv99ytP+gIdIlY01+Ls2AW
         Ch6A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2bpQpEYSVqIRvMUSgTqd/R7B3iBoaJSFfEYHAexvgXQ=;
        b=ujjEIAOQNEyUjY479RRs9YxDUe+6R5TcbAZAgBi+/Fnn52DIqVSzv/8/9dnh5GXbob
         hpXmxAIZrPdWK0g6FZjwVt5LEcJt1EKzd7AcmeMziTzDWIiPG+q01KtEYAJ6yxmhj7H+
         yIN5u1MlCGyggIKoBqb16NPqP+Ss9KluxY8/G9yBfdBF1xT37a1BrKx+nE9GbfhVAVOK
         YWRTn+rUdm8A7860DUOfmKbxklmJGPHzjlZ1S0hN0AaqMnapmpeR8ac5IFIvayFc4xEO
         wLh6MD5MAbu9xE2ceuoMLs12UVO/K29+cP6gpAPUlHvEKuY+uKBQVXjR7DHO33Ivll6k
         nmbQ==
X-Gm-Message-State: AFqh2kqaqneSfSJUGT/eHjc/ud4YINxUtE5ddkV0H55lOF3qEztzeRix
	yHeuI5jwiS57rraNmO80TqUF/qyJrefcRA==
X-Google-Smtp-Source: AMrXdXt8YeiWkTkoAfXK64LbdpGPwTEH/3KqNB2IDsh2Zn/Gr2P+/TU5azWqoiDeF991/LQPqlmqTg==
X-Received: by 2002:a05:600c:1c1b:b0:3d9:ebf9:7004 with SMTP id j27-20020a05600c1c1b00b003d9ebf97004mr10028924wms.29.1674137257016;
        Thu, 19 Jan 2023 06:07:37 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v5 4/5] xen/riscv: introduce early_printk basic stuff
Date: Thu, 19 Jan 2023 16:07:21 +0200
Message-Id: <8d7ac0dc51a6331d3efa7fcda433616670b46700.1674131459.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674131459.git.oleksii.kurochko@gmail.com>
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Because printk() relies on a serial driver (such as the ns16550
driver), and drivers require working virtual memory (ioremap()),
there is no print functionality early in Xen boot.

The patch introduces the basic parts of the early_printk
functionality, which are enough to print 'Hello from C env'.

Originally early_printk.{c,h} was introduced by Bobby Eshleman
(https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
but some functionality was changed:
the early_printk() function was changed compared to the original, as
common/ isn't being built yet, so vscnprintf() is not available.

This commit adds early printk implementation built on the putc SBI call.

As sbi_console_putchar() is already planned for deprecation, it is
used only temporarily and will be removed or reworked once a real
UART driver is ready.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
---
Changes in V5:
    - Code style fixes
    - Change the #error message for the case where
      __riscv_cmodel_medany isn't defined
---
Changes in V4:
    - Remove "depends on RISCV*" from Kconfig.debug as it is located in
      an arch-specific folder, so RISCV configs are enabled by default.
    - Add "ifdef __riscv_cmodel_medany" to be sure that PC-relative addressing
      is used as early_*() functions can be called from head.S with MMU-off and
      before relocation (if it would be needed at all for RISC-V)
    - fix code style
---
Changes in V3:
    - reorder headers in alphabetical order
    - merge changes related to start_xen() function from "[PATCH v2 7/8]
      xen/riscv: print hello message from C env" to this patch
    - remove unneeded parentheses in definition of STACK_SIZE
---
Changes in V2:
    - introduce STACK_SIZE define.
    - use consistent padding between instruction mnemonic and operand(s)
---
 xen/arch/riscv/Kconfig.debug              |  5 +++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 45 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 ++++++
 xen/arch/riscv/setup.c                    |  4 ++
 5 files changed, 67 insertions(+)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..608c9ff832 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,5 @@
+config EARLY_PRINTK
+    bool "Enable early printk"
+    default DEBUG
+    help
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index fd916e1004..1a4f1a6015 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..6f590e712b
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/early_printk.h>
+#include <asm/sbi.h>
+
+/*
+ * early_*() can be called from head.S with MMU-off.
+ *
+ * The following requirements should be honoured for early_*() to
+ * work correctly:
+ *    It should use PC-relative addressing for accessing symbols.
+ *    To achieve that GCC cmodel=medany should be used.
+ */
+#ifndef __riscv_cmodel_medany
+#error "early_*() can be called from head.S with MMU-off"
+#endif
+
+/*
+ * TODO:
+ *   sbi_console_putchar is already planned for deprecation
+ *   so it should be reworked to use UART directly.
+ */
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if ( *s == '\n' )
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while ( *str )
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {}
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 13e24e2fe1..d09ffe1454 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,12 +1,16 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
 void __init noreturn start_xen(void)
 {
+    early_printk("Hello from C env\n");
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:20:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481164.745883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVmA-0008GG-PE; Thu, 19 Jan 2023 14:20:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481164.745883; Thu, 19 Jan 2023 14:20:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVmA-0008G9-Mm; Thu, 19 Jan 2023 14:20:22 +0000
Received: by outflank-mailman (input) for mailman id 481164;
 Thu, 19 Jan 2023 14:20:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIVmA-0008G3-Bb
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:20:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVmA-0004Ju-2f; Thu, 19 Jan 2023 14:20:22 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.13.107]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIVm9-0006jp-SJ; Thu, 19 Jan 2023 14:20:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=amZswHTQjsYeHewrevFLzhZrNwSS5O/8T/WGMeSpCI0=; b=2oJX94asqNNDyDZeCWivruVNr7
	+Dad3hiYB+qLnn8BM2WvIoWC38ECoBsOLotUiGkSZwO+zhP7kcHJWxOq/DrDvbu5WmWVP81EsY6EH
	J7TDtYY4gkFfYfCtWaSc3HDQI+4uHqI5/JezEUeg3QBgH7euLSlDL5PMeI3SI0zTW9AE=;
Message-ID: <f42681d3-c08f-24bd-8f10-aa53930081b0@xen.org>
Date: Thu, 19 Jan 2023 14:20:20 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 10/40] xen/arm: split MMU and MPU config files from
 config.h
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-11-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-11-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> Xen defines some global configuration macros for Arm in
> config.h. We still want to use it for Armv8-R systems, but
> there are some address related macros that are defined for
> MMU systems. These macros will not be used by MPU systems,
> Adding ifdefery with CONFIG_HAS_MPU to gate these macros
> will result in a messy and hard-to-read/maintain code.
> 
> So we keep some common definitions still in config.h, but
> move virtual address related definitions to a new file -
> config_mmu.h. And use a new file config_mpu.h to store
> definitions for MPU systems. To avoid spreading #ifdef
> everywhere, we keep the same definition names for MPU
> systems, like XEN_VIRT_START and HYPERVISOR_VIRT_START,
> but the definition contents are MPU specific.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
> v1 -> v2:
> 1. Remove duplicated FIXMAP definitions from config_mmu.h
> ---
>   xen/arch/arm/include/asm/config.h     | 103 +++--------------------
>   xen/arch/arm/include/asm/config_mmu.h | 112 ++++++++++++++++++++++++++
>   xen/arch/arm/include/asm/config_mpu.h |  25 ++++++

I think this patch wants to be split in two, so that we keep code
movement separate from the introduction of the new feature (e.g. MPU).

Furthermore, I think it would be better to name the new header layout_* 
(or similar).

Lastly, you are going to introduce several files with _mmu or _mpu 
suffixes. I would rather prefer it if we created a directory instead.


>   3 files changed, 147 insertions(+), 93 deletions(-)
>   create mode 100644 xen/arch/arm/include/asm/config_mmu.h
>   create mode 100644 xen/arch/arm/include/asm/config_mpu.h
> 
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 25a625ff08..86d8142959 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -48,6 +48,12 @@
>   
>   #define INVALID_VCPU_ID MAX_VIRT_CPUS
>   
> +/* Used for calculating PDX */

I am not entirely sure to understand the purpose of this comment.

> +#ifdef CONFIG_ARM_64
> +#define FRAMETABLE_SIZE        GB(32)
> +#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
> +#endif
> +

Why do you only keep the 64-bit version in config.h?

However... the frametable size is limited by the space we reserve in the 
virtual address space. This would not be the case for the MPU.

So having the limit in common seems a bit odd. In fact, I think we 
should look at getting rid of the limit for the MPU.

[...]

> diff --git a/xen/arch/arm/include/asm/config_mmu.h b/xen/arch/arm/include/asm/config_mmu.h
> new file mode 100644
> index 0000000000..c12ff25cf4
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/config_mmu.h
> @@ -0,0 +1,112 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/******************************************************************************
> + * config_mmu.h
> + *
> + * A Linux-style configuration list, only can be included by config.h

Why do you need to restrict where this is included? And if you really 
need it, then you should check it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:22:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:22:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481170.745894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVoA-0000Nu-4X; Thu, 19 Jan 2023 14:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481170.745894; Thu, 19 Jan 2023 14:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVoA-0000Nn-1Q; Thu, 19 Jan 2023 14:22:26 +0000
Received: by outflank-mailman (input) for mailman id 481170;
 Thu, 19 Jan 2023 14:22:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uOG/=5Q=zf.com=youssef.elmesdadi@srs-se1.protection.inumbo.net>)
 id 1pIVo9-0000Nh-Cy
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:22:25 +0000
Received: from de-smtp-delivery-114.mimecast.com
 (de-smtp-delivery-114.mimecast.com [194.104.109.114])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b078e00a-9804-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 15:22:23 +0100 (CET)
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2054.outbound.protection.outlook.com [104.47.14.54]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 de-mta-47-ZPPMhxetNn-J4aa-lOw4Hw-1; Thu, 19 Jan 2023 15:22:18 +0100
Received: from AM5PR0802MB2578.eurprd08.prod.outlook.com
 (2603:10a6:203:9e::22) by PA4PR08MB6078.eurprd08.prod.outlook.com
 (2603:10a6:102:e0::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Thu, 19 Jan
 2023 14:22:17 +0000
Received: from AM5PR0802MB2578.eurprd08.prod.outlook.com
 ([fe80::f7f1:3b45:d707:62b7]) by AM5PR0802MB2578.eurprd08.prod.outlook.com
 ([fe80::f7f1:3b45:d707:62b7%2]) with mapi id 15.20.6002.025; Thu, 19 Jan 2023
 14:22:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b078e00a-9804-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zf.com; s=mczfcom20220728;
	t=1674138142;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=k8g8fHayFHYCmZiqeudSU8mkWjRVhTPW/eEm5DSLUHk=;
	b=jlqjzg7mzNOuWSsw+JKWhv1+hlK8VBW3gXLFBp3sObq52l88VJnZQrMG/+8tT6Y0318xBj
	AgW81fc0G744heQAO+wZHk/Q+/HMhr25QOqTuVVXk6j6J/1Y3NTGFJQkWhUpgHX5CTX1O2
	kg6JHCTe+tixFxah9kkMV6k6kum/USI=
X-MC-Unique: ZPPMhxetNn-J4aa-lOw4Hw-1
From: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Subject: AW: AW: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
Thread-Topic: AW: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
Thread-Index: Adkk/qg7kRGuW6KDReaFxxX6SSJ/1gA5Tq8AAFmZfbAAzbzrAABjz8vw
Date: Thu, 19 Jan 2023 14:22:17 +0000
Message-ID: <AM5PR0802MB25785FDEB43A8137D644BC7E9DC49@AM5PR0802MB2578.eurprd08.prod.outlook.com>
References: <AM5PR0802MB25781717167B5BFC980BF2A49DFF9@AM5PR0802MB2578.eurprd08.prod.outlook.com>
 <3e7059c2-0d23-03f2-9a93-f88de09171f4@xen.org>
 <AM5PR0802MB2578A1389424064D6884588E9DC29@AM5PR0802MB2578.eurprd08.prod.outlook.com>
 <619a00f0-0f9f-5f5f-13a7-ea86f9c24eec@xen.org>
In-Reply-To: <619a00f0-0f9f-5f5f-13a7-ea86f9c24eec@xen.org>
Accept-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: AM5PR0802MB2578:EE_|PA4PR08MB6078:EE_
x-ms-office365-filtering-correlation-id: 230b89af-b89c-4287-2bb9-08dafa28916a
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0
MIME-Version: 1.0
X-OriginatorOrg: zf.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM5PR0802MB2578.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 230b89af-b89c-4287-2bb9-08dafa28916a
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jan 2023 14:22:17.1167
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: eb70b763-b6d7-4486-8555-8831709a784e
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6078
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: zf.com
Content-Language: de-DE
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

> The support for xentrace on Arm has been added around Xen 4.12. So it should work for Xen 4.14 (even though I don't recommend using an older release).

Hello Julien,
First of all, thank you for the help. I contacted NXP support to get more information about how to get the newest version of Xen while building my image. My question is: which Xen version do you recommend?

Cheers
Youssef El Mesdadi

-----Original Message-----
From: Julien Grall <julien@xen.org>
Sent: Tuesday, 17 January 2023 15:38
To: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>; xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: AW: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)

On 13/01/2023 12:56, El Mesdadi Youssef ESK UILD7 wrote:
> Hello Julien,

Hi,

>>>> xentrace should work on upstream Xen. Which version did you try?
> 
> While building my image using the BSP-linux of NXP, the version that was downloaded is Xen 4.14.

Do you know where the sources are downloaded from?

> 
> 
>>>> Can you also clarify the error you are seeing?
> 
> The error I receive while typing xentrace is: Command not found.

"Command not found" means the program hasn't been installed.

> I assume in this Xen version, Xentrace is not compatible with ARM 
> architecture
The support for xentrace on Arm has been added around Xen 4.12. So it should work for Xen 4.14 (even though I don't recommend using an older release).

I would suggest checking how you are building the tools and which package will be installed.

> 
> My question is, is there any new version of Xen that supports Xentrace on ARM? If yes, how could I install it? Because Xen 4.14 was installed automatically by adding this ( DISTRO_FEATURES_append += "xen" ) to the local.conf file while creating my image.

I am not familiar with the BSP-linux of NXP as this is not a project maintained by the Xen Project.

If you need support for it, then I would suggest discussing with NXP directly.

> 
> Or is there any source on Xentrace that is compatible with ARM on GitHub, that I could download and compile myself?
Even if the hypervisor is Xen, you seem to use code provided by an external entity. I can't advise on the next steps without knowing the modifications that NXP made in the hypervisor.

> 
>>>> Yes, if you assign (or provide a para-virtualized driver for) the GPIO/LED/CAN interface to the guest.
> 
> Is there any tutorial that could help me create those drivers? And how complicated is that, to create them?

I am not aware of any tutorial. Regarding the complexity, it all depends on what exactly you want to do.

> Or can they be assigned just with PCI passthrough?

PCI passthrough is not yet supported on Arm. That said, I was not expecting the GPIO/LED/CAN interface to be PCI devices.

If they are platform devices (i.e. non-PCI devices), then you could potentially directly assign them to *one* guest (this would not work if you need to share them).

I wrote "potentially" because if the device is DMA-capable, then the device must be behind an IOMMU.

Cheers,

--
Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:30:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:30:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481177.745904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVvX-0001PT-U5; Thu, 19 Jan 2023 14:30:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481177.745904; Thu, 19 Jan 2023 14:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIVvX-0001Oi-QT; Thu, 19 Jan 2023 14:30:03 +0000
Received: by outflank-mailman (input) for mailman id 481177;
 Thu, 19 Jan 2023 14:30:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DP+J=5Q=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIVvW-00019t-FZ
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 14:30:02 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2078.outbound.protection.outlook.com [40.107.6.78])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1290168-9805-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 15:30:00 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6776.eurprd04.prod.outlook.com (2603:10a6:20b:103::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.25; Thu, 19 Jan
 2023 14:29:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 14:29:58 +0000
X-Inumbo-ID: c1290168-9805-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GRGfTNhiVEuPLLSFqQ1xmQv4QDJt4gk7FnVt2hVTSD8vgykRERtI/3eGNXkYc+VOfQcUo/utVRaTZMMoTncRY3ZSFAc+SYePgXHPxImTv3SOSxL3Ny2QcBvtsEH6DfYt3OTjFipT+YF7eWL7h4x1JGlmqtK92PAxE2zUPbHRn6KOPN+3PDqwENjl5PbwNUBPdu1CkGUNmbpCSSdKahmyA8+BSS2BiTP1SB0+M9KL/L8z9SCcN/OZdCTl8WVZ/6EU3/6Yxa7IL4IP13ngO+sinmGUnPdraSMtR7Z9DxeFTVc4yvtKSx4x0Invw18n/XEXKR1b/XjrcZxEpIYyjI5npg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YFXTkixBq47cNfRB5KHH5dE8+Kohb9ygD1pQb9+/axQ=;
 b=HccJzFOLgwPnKK/WcsxQxdIppk0g5KXa+vtdGPXQw/0Qftgl0iQnlQEBWInt2SffKWYV7cr9A7/5YGxEcVxNQEqnnFCjOeY70L4jwoQegLErKQuSJVrn25L+1XjhkqfmbKGgitJ41LB4dZVLV+jnRe+t2dmIFXQobT3FKpdhjp1f8j7I3Zm9GNrc+HNZskz4DdH2PKW0dr9HKFpUuqrV9AsSA2iBm1PTehv9dk4wLmR1I/nCb0lBgVOtCftRu7XGaiAwb7Wy7qnd1pR92ASzMmhZInRAxEy1YuOqa7lNwZr6fQrvwL5awNXpaF0pE3o61iv7JM+PDe7jVFQ5b4uE3A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YFXTkixBq47cNfRB5KHH5dE8+Kohb9ygD1pQb9+/axQ=;
 b=h/8kYbiNPTgkuIdXmtwq3C9K39ItbToGvf8v9PZmkQskuTaO4YYFNPuaEloDweLFAp8448yk3MamRZXeOZJfblWGhBLYkn8DKPleH5fhh/BcWb+Dx3rA7aoiK52py3fEveF5twfbYcd/W8x+MRGwOcVF5SJdbMPcaFiK+o280vzeMyBYB43flaWempsrNHY9LjXqT1gTgM131m1J0pRGJXCOFZk9ixsLLW9plUbzQHivTPasrGxCpZ1IuaQkILGkJMguM0k8drGR/AVbTjQfeKhpJUNGLY/OPWunP58es05wWIEkFDfJy9C0kR9EV/RqgyfqNvQ2bnzdorOY+n1q8Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <22992b47-bac6-d522-a8a6-c55c3c15e7a5@suse.com>
Date: Thu, 19 Jan 2023 15:29:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v5 1/5] xen/include: Change <asm/types.h> to <xen/types.h>
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
 <916d01663e76a3a0acad93f6c234834deaa2dd72.1674131459.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <916d01663e76a3a0acad93f6c234834deaa2dd72.1674131459.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0183.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6776:EE_
X-MS-Office365-Filtering-Correlation-Id: 70187d71-7c9e-4d24-de7c-08dafa29a437
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 70187d71-7c9e-4d24-de7c-08dafa29a437
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 14:29:58.3949
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6776

On 19.01.2023 15:07, Oleksii Kurochko wrote:
> In the patch "include/types: move stddef.h-kind types to common
> header" [1] size_t was moved from <asm/types.h> to <xen/types.h>
> so early_printk should be updated correspondingly.

Hmm, this reads a little like that patch would introduce a build
issue (on Arm), but according to my testing it doesn't. If it did,
the change here would need to be part of that (not yet committed)
change. On the assumption that it really doesn't, ...

> [1] https://lore.kernel.org/xen-devel/5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com/
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

Jan
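[For readers following the thread: the acked change amounts to an include swap, since size_t now comes from <xen/types.h> rather than <asm/types.h>. A minimal sketch of the idea, outside the Xen tree, so <stddef.h> stands in for <xen/types.h> and the helper name is made up, not from the patch:]

```c
/* Hedged sketch: early_printk code needs size_t, which after the
 * "include/types" change is provided by <xen/types.h>. Outside the
 * hypervisor tree we model that with <stddef.h>, which likewise
 * provides size_t. early_strnlen() is an illustrative stand-in
 * consumer, not a function from the series. */
#include <assert.h>
#include <stddef.h>  /* stand-in for <xen/types.h> */

static size_t early_strnlen(const char *s, size_t max)
{
    size_t n = 0;

    /* Count characters up to the terminator or the given bound. */
    while (n < max && s[n] != '\0')
        n++;
    return n;
}
```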



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 14:48:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 14:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481183.745914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWCg-0003c5-HW; Thu, 19 Jan 2023 14:47:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481183.745914; Thu, 19 Jan 2023 14:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWCg-0003by-F2; Thu, 19 Jan 2023 14:47:46 +0000
Received: by outflank-mailman (input) for mailman id 481183;
 Thu, 19 Jan 2023 14:47:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIWCf-0003bV-B9; Thu, 19 Jan 2023 14:47:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIWCf-00051D-4d; Thu, 19 Jan 2023 14:47:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIWCe-0005IU-8Z; Thu, 19 Jan 2023 14:47:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIWCe-0005kY-88; Thu, 19 Jan 2023 14:47:44 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yTXd0x/3pCNgyrB6Dk5Cu8WKs7raObg6u/PRco+hAyY=; b=hbvigvASyCGOmnDkGpb9bLRI50
	kBpAj9kPmtxrsra7vgHJuqlomJdtdl5nkxQWJ3uejDXF7IWr6Zl55P/Q1vITeDnVps+430EJxvlHi
	yDJImZafbswO2iDdPTNFtaN/xCtgDko+OHyZzg3zSRdUBOKejb4v6gfO0pMbWVwjMb6s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175981-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 175981: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=51411435d559c55eaf38c02baf5d76da78bb658d
X-Osstest-Versions-That:
    ovmf=18df11da8c14e48b6e4a90fb0b5befb1c243070a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 14:47:44 +0000

flight 175981 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175981/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 51411435d559c55eaf38c02baf5d76da78bb658d
baseline version:
 ovmf                 18df11da8c14e48b6e4a90fb0b5befb1c243070a

Last test of basis   175971  2023-01-19 05:11:58 Z    0 days
Testing same since   175981  2023-01-19 12:43:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Min M Xu <min.m.xu@intel.com>
  Min Xu <min.m.xu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   18df11da8c..51411435d5  51411435d559c55eaf38c02baf5d76da78bb658d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 15:04:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 15:04:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481192.745924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWSm-00069O-UX; Thu, 19 Jan 2023 15:04:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481192.745924; Thu, 19 Jan 2023 15:04:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWSm-00069H-RO; Thu, 19 Jan 2023 15:04:24 +0000
Received: by outflank-mailman (input) for mailman id 481192;
 Thu, 19 Jan 2023 15:04:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIWSl-00069B-LU
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 15:04:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIWSl-0005LR-7m; Thu, 19 Jan 2023 15:04:23 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.13.107]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIWSk-0000H1-Tq; Thu, 19 Jan 2023 15:04:23 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=wxJgAXMrNU9PVYRNg3azos3+wn70c9ur0IFI73KFTE0=; b=QMks0LNKoGaIcDdV4scoUcyZl6
	pY4EJliWGWF7qvSAU9TpxVnhUhIBHEygYCH1sMew9djHdjjYMORvV3Pg5IwurKh0Tcd0Tl0w/TB/X
	GjJcQiMKM4cPdODY2WafZYpWMl0EB/Yohd/99Aq9nthyaGGXn/sP/0r6Y6wF0BK4Hd3k=;
Message-ID: <c30b4458-b5f6-f996-0c3c-220b18bfb356@xen.org>
Date: Thu, 19 Jan 2023 15:04:21 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-12-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> From: Penny Zheng <penny.zheng@arm.com>
> 
> The start-of-day Xen MPU memory region layout shall be like as follows:
> 
> xen_mpumap[0] : Xen text
> xen_mpumap[1] : Xen read-only data
> xen_mpumap[2] : Xen read-only after init data
> xen_mpumap[3] : Xen read-write data
> xen_mpumap[4] : Xen BSS
> ......
> xen_mpumap[max_xen_mpumap - 2]: Xen init data
> xen_mpumap[max_xen_mpumap - 1]: Xen init text

Can you explain why the init region should be at the end of the MPU?

> 
> max_xen_mpumap refers to the number of regions supported by the EL2 MPU.
> The layout shall be compliant with what we describe in xen.lds.S, or the
> codes need adjustment.
> 
> As MMU system and MPU system have different functions to create
> the boot MMU/MPU memory management data, instead of introducing
> extra #ifdef in main code flow, we introduce a neutral name
> prepare_early_mappings for both, and also to replace create_page_tables for MMU.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>   xen/arch/arm/arm64/Makefile              |   2 +
>   xen/arch/arm/arm64/head.S                |  17 +-
>   xen/arch/arm/arm64/head_mmu.S            |   4 +-
>   xen/arch/arm/arm64/head_mpu.S            | 323 +++++++++++++++++++++++
>   xen/arch/arm/include/asm/arm64/mpu.h     |  63 +++++
>   xen/arch/arm/include/asm/arm64/sysregs.h |  49 ++++
>   xen/arch/arm/mm_mpu.c                    |  48 ++++
>   xen/arch/arm/xen.lds.S                   |   4 +
>   8 files changed, 502 insertions(+), 8 deletions(-)
>   create mode 100644 xen/arch/arm/arm64/head_mpu.S
>   create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h
>   create mode 100644 xen/arch/arm/mm_mpu.c
> 
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 22da2f54b5..438c9737ad 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -10,6 +10,8 @@ obj-y += entry.o
>   obj-y += head.o
>   ifneq ($(CONFIG_HAS_MPU),y)
>   obj-y += head_mmu.o
> +else
> +obj-y += head_mpu.o
>   endif
>   obj-y += insn.o
>   obj-$(CONFIG_LIVEPATCH) += livepatch.o
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 782bd1f94c..145e3d53dc 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -68,9 +68,9 @@
>    *  x24 -
>    *  x25 -
>    *  x26 - skip_zero_bss (boot cpu only)
> - *  x27 -
> - *  x28 -
> - *  x29 -
> + *  x27 - region selector (mpu only)
> + *  x28 - prbar (mpu only)
> + *  x29 - prlar (mpu only)
>    *  x30 - lr
>    */
>   
> @@ -82,7 +82,7 @@
>    * ---------------------------
>    *
>    * The requirements are:
> - *   MMU = off, D-cache = off, I-cache = on or off,
> + *   MMU/MPU = off, D-cache = off, I-cache = on or off,
>    *   x0 = physical address to the FDT blob.
>    *
>    * This must be the very first address in the loaded image.
> @@ -252,7 +252,12 @@ real_start_efi:
>   
>           bl    check_cpu_mode
>           bl    cpu_init
> -        bl    create_page_tables
> +
> +        /*
> +         * Create boot memory management data, pagetable for MMU systems
> +         * and memory regions for MPU systems.
> +         */
> +        bl    prepare_early_mappings
>           bl    enable_mmu
>   
>           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
> @@ -310,7 +315,7 @@ GLOBAL(init_secondary)
>   #endif
>           bl    check_cpu_mode
>           bl    cpu_init
> -        bl    create_page_tables
> +        bl    prepare_early_mappings
>           bl    enable_mmu
>   
>           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> index 6ff13c751c..2346f755df 100644
> --- a/xen/arch/arm/arm64/head_mmu.S
> +++ b/xen/arch/arm/arm64/head_mmu.S
> @@ -123,7 +123,7 @@
>    *
>    * Clobbers x0 - x4
>    */
> -ENTRY(create_page_tables)
> +ENTRY(prepare_early_mappings)
>           /* Prepare the page-tables for mapping Xen */
>           ldr   x0, =XEN_VIRT_START
>           create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
> @@ -208,7 +208,7 @@ virtphys_clash:
>           /* Identity map clashes with boot_third, which we cannot handle yet */
>           PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
>           b     fail
> -ENDPROC(create_page_tables)
> +ENDPROC(prepare_early_mappings)
>   
>   /*
>    * Turn on the Data Cache and the MMU. The function will return on the 1:1
> diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
> new file mode 100644
> index 0000000000..0b97ce4646
> --- /dev/null
> +++ b/xen/arch/arm/arm64/head_mpu.S
> @@ -0,0 +1,323 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Start-of-day code for an Armv8-R AArch64 MPU system.
> + */
> +
> +#include <asm/arm64/mpu.h>
> +#include <asm/early_printk.h>
> +#include <asm/page.h>
> +
> +/*
> + * One entry in Xen MPU memory region mapping table(xen_mpumap) is a structure
> + * of pr_t, which is 16-bytes size, so the entry offset is the order of 4.
> + */
> +#define MPU_ENTRY_SHIFT         0x4
> +
> +#define REGION_SEL_MASK         0xf
> +
> +#define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
> +#define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
> +#define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
> +
> +#define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
> +
> +/*
> + * Macro to round up the section address to be PAGE_SIZE aligned
> + * Each section(e.g. .text, .data, etc) in xen.lds.S is page-aligned,
> + * which is usually guarded with ". = ALIGN(PAGE_SIZE)" in the head,
> + * or in the end
> + */
> +.macro roundup_section, xb
> +        add   \xb, \xb, #(PAGE_SIZE-1)
> +        and   \xb, \xb, #PAGE_MASK
> +.endm
> +
> +/*
> + * Macro to create a new MPU memory region entry, which is a structure
> + * of pr_t,  in \prmap.
> + *
> + * Inputs:
> + * prmap:   mpu memory region map table symbol
> + * sel:     region selector
> + * prbar:   preserve value for PRBAR_EL2
> + * prlar    preserve value for PRLAR_EL2
> + *
> + * Clobbers \tmp1, \tmp2
> + *
> + */
> +.macro create_mpu_entry prmap, sel, prbar, prlar, tmp1, tmp2
> +    mov   \tmp2, \sel
> +    lsl   \tmp2, \tmp2, #MPU_ENTRY_SHIFT
> +    adr_l \tmp1, \prmap
> +    /* Write the first 8 bytes(prbar_t) of pr_t */
> +    str   \prbar, [\tmp1, \tmp2]
> +
> +    add   \tmp2, \tmp2, #8
> +    /* Write the last 8 bytes(prlar_t) of pr_t */
> +    str   \prlar, [\tmp1, \tmp2]

Any particular reason to not use 'stp'?

Also, AFAICT, this runs with the data cache disabled. But at least on 
Armv8-A, the cache is never really off, so don't we need some cache 
maintenance here?

FAOD, I know the existing MMU code has the same issue. But I would 
rather prefer that any new code introduced be compliant with the Arm Arm.
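To spell out the layout being stored here (a sketch only, using Python's struct module to model the sizes the quoted comment claims): pr_t is two adjacent 8-byte register images, so a single 16-byte store such as 'stp' would indeed cover one whole entry, and the 16-byte entry size is also where MPU_ENTRY_SHIFT == 4 comes from:

```python
import struct

# pr_t as described in the quoted patch: prbar_t followed by prlar_t,
# each an 8-byte register image, packed back to back.
PR_T = struct.Struct('<QQ')

ENTRY_SIZE = PR_T.size                          # 16 bytes per entry
MPU_ENTRY_SHIFT = ENTRY_SIZE.bit_length() - 1   # log2(16) == 4

def entry_offset(sel):
    """Byte offset of entry 'sel' in xen_mpumap: sel << MPU_ENTRY_SHIFT."""
    return sel << MPU_ENTRY_SHIFT

assert ENTRY_SIZE == 16
assert MPU_ENTRY_SHIFT == 4
assert entry_offset(3) == 48
```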

> +.endm
> +
> +/*
> + * Macro to store the maximum number of regions supported by the EL2 MPU
> + * in max_xen_mpumap, which is identified by MPUIR_EL2.
> + *
> + * Outputs:
> + * nr_regions: preserve the maximum number of regions supported by the EL2 MPU
> + *
> + * Clobbers \tmp1
> + *
> + */

Are you going to have multiple users? If not, then I would prefer that 
this be folded into the only caller.

> +.macro read_max_el2_regions, nr_regions, tmp1
> +    load_paddr \tmp1, max_xen_mpumap

I would rather prefer that we restrict the use of globals while the 
MMU/MPU is off (see why above).

> +    mrs   \nr_regions, MPUIR_EL2
> +    isb

What's that isb for?

> +    str   \nr_regions, [\tmp1]
> +.endm
> +
> +/*
> + * Macro to prepare and set a MPU memory region
> + *
> + * Inputs:
> + * base:        base address symbol (should be page-aligned)
> + * limit:       limit address symbol
> + * sel:         region selector
> + * prbar:       store computed PRBAR_EL2 value
> + * prlar:       store computed PRLAR_EL2 value
> + * attr_prbar:  PRBAR_EL2-related memory attributes. If not specified it will be REGION_DATA_PRBAR
> + * attr_prlar:  PRLAR_EL2-related memory attributes. If not specified it will be REGION_NORMAL_PRLAR
> + *
> + * Clobber \tmp1

This macro will also clobber x27, x28, x29.

> + *
> + */
> +.macro prepare_xen_region, base, limit, sel, prbar, prlar, tmp1, attr_prbar=REGION_DATA_PRBAR, attr_prlar=REGION_NORMAL_PRLAR
> +    /* Prepare value for PRBAR_EL2 reg and preserve it in \prbar.*/
> +    load_paddr \prbar, \base
> +    and   \prbar, \prbar, #MPU_REGION_MASK
> +    mov   \tmp1, #\attr_prbar
> +    orr   \prbar, \prbar, \tmp1
> +
> +    /* Prepare value for PRLAR_EL2 reg and preserve it in \prlar.*/
> +    load_paddr \prlar, \limit
> +    /* Round up limit address to be PAGE_SIZE aligned */
> +    roundup_section \prlar
> +    /* Limit address should be inclusive */
> +    sub   \prlar, \prlar, #1
> +    and   \prlar, \prlar, #MPU_REGION_MASK
> +    mov   \tmp1, #\attr_prlar
> +    orr   \prlar, \prlar, \tmp1
> +
> +    mov   x27, \sel
> +    mov   x28, \prbar
> +    mov   x29, \prlar
> +    /*
> +     * x27, x28, x29 are special registers designed as
> +     * inputs for function write_pr
> +     */
> +    bl    write_pr
> +.endm
> +
> +.section .text.idmap, "ax", %progbits
> +
> +/*
> + * ENTRY to configure a EL2 MPU memory region
> + * ARMv8-R AArch64 at most supports 255 MPU protection regions.
> + * See section G1.3.18 of the reference manual for ARMv8-R AArch64,
> + * PRBAR<n>_EL2 and PRLAR<n>_EL2 provides access to the EL2 MPU region
> + * determined by the value of 'n' and PRSELR_EL2.REGION as
> + * PRSELR_EL2.REGION<7:4>:n.(n = 0, 1, 2, ... , 15)
> + * For example to access regions from 16 to 31 (0b10000 to 0b11111):
> + * - Set PRSELR_EL2 to 0b1xxxx
> + * - Region 16 configuration is accessible through PRBAR0_EL2 and PRLAR0_EL2
> + * - Region 17 configuration is accessible through PRBAR1_EL2 and PRLAR1_EL2
> + * - Region 18 configuration is accessible through PRBAR2_EL2 and PRLAR2_EL2
> + * - ...
> + * - Region 31 configuration is accessible through PRBAR15_EL2 and PRLAR15_EL2
> + *
> + * Inputs:
> + * x27: region selector
> + * x28: preserve value for PRBAR_EL2
> + * x29: preserve value for PRLAR_EL2
> + *
> + */
> +ENTRY(write_pr)

AFAICT, this function would not be necessary if the index for the init 
sections were hardcoded.

So I would like to understand why the index cannot be hardcoded.
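For reference, the selection arithmetic the quoted comment describes can be sketched like this (illustrative only, not how the hardware is driven): a region number splits into a PRSELR_EL2 group (bits 7:4) and a banked register index (bits 3:0), which is what write_pr's dispatch on 'x27 & REGION_SEL_MASK' implements:

```python
REGION_SEL_MASK = 0xF

def banked_index(region):
    """Split an MPU region number into (PRSELR_EL2 group, PRBARn/PRLARn index)."""
    assert 0 <= region <= 255   # Armv8-R AArch64 supports at most 255 regions
    return region >> 4, region & REGION_SEL_MASK

# Region 16 is programmed through PRBAR0_EL2/PRLAR0_EL2 with PRSELR group 1,
# region 17 through PRBAR1_EL2/PRLAR1_EL2, matching the quoted example.
assert banked_index(16) == (1, 0)
assert banked_index(17) == (1, 1)
assert banked_index(31) == (1, 15)
```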

> +    msr   PRSELR_EL2, x27
> +    dsb   sy

What is this 'dsb' for? Also why 'sy'?

> +    and   x27, x27, #REGION_SEL_MASK
> +    cmp   x27, #0
> +    bne   1f
> +    msr   PRBAR0_EL2, x28
> +    msr   PRLAR0_EL2, x29
> +    b     out
> +1:
> +    cmp   x27, #1
> +    bne   2f
> +    msr   PRBAR1_EL2, x28
> +    msr   PRLAR1_EL2, x29
> +    b     out
> +2:
> +    cmp   x27, #2
> +    bne   3f
> +    msr   PRBAR2_EL2, x28
> +    msr   PRLAR2_EL2, x29
> +    b     out
> +3:
> +    cmp   x27, #3
> +    bne   4f
> +    msr   PRBAR3_EL2, x28
> +    msr   PRLAR3_EL2, x29
> +    b     out
> +4:
> +    cmp   x27, #4
> +    bne   5f
> +    msr   PRBAR4_EL2, x28
> +    msr   PRLAR4_EL2, x29
> +    b     out
> +5:
> +    cmp   x27, #5
> +    bne   6f
> +    msr   PRBAR5_EL2, x28
> +    msr   PRLAR5_EL2, x29
> +    b     out
> +6:
> +    cmp   x27, #6
> +    bne   7f
> +    msr   PRBAR6_EL2, x28
> +    msr   PRLAR6_EL2, x29
> +    b     out
> +7:
> +    cmp   x27, #7
> +    bne   8f
> +    msr   PRBAR7_EL2, x28
> +    msr   PRLAR7_EL2, x29
> +    b     out
> +8:
> +    cmp   x27, #8
> +    bne   9f
> +    msr   PRBAR8_EL2, x28
> +    msr   PRLAR8_EL2, x29
> +    b     out
> +9:
> +    cmp   x27, #9
> +    bne   10f
> +    msr   PRBAR9_EL2, x28
> +    msr   PRLAR9_EL2, x29
> +    b     out
> +10:
> +    cmp   x27, #10
> +    bne   11f
> +    msr   PRBAR10_EL2, x28
> +    msr   PRLAR10_EL2, x29
> +    b     out
> +11:
> +    cmp   x27, #11
> +    bne   12f
> +    msr   PRBAR11_EL2, x28
> +    msr   PRLAR11_EL2, x29
> +    b     out
> +12:
> +    cmp   x27, #12
> +    bne   13f
> +    msr   PRBAR12_EL2, x28
> +    msr   PRLAR12_EL2, x29
> +    b     out
> +13:
> +    cmp   x27, #13
> +    bne   14f
> +    msr   PRBAR13_EL2, x28
> +    msr   PRLAR13_EL2, x29
> +    b     out
> +14:
> +    cmp   x27, #14
> +    bne   15f
> +    msr   PRBAR14_EL2, x28
> +    msr   PRLAR14_EL2, x29
> +    b     out
> +15:
> +    msr   PRBAR15_EL2, x28
> +    msr   PRLAR15_EL2, x29
> +out:
> +    isb

What is this 'isb' for?

> +    ret
> +ENDPROC(write_pr)
> +
> +/*
> + * Static start-of-day Xen EL2 MPU memory region layout.
> + *
> + *     xen_mpumap[0] : Xen text
> + *     xen_mpumap[1] : Xen read-only data
> + *     xen_mpumap[2] : Xen read-only after init data
> + *     xen_mpumap[3] : Xen read-write data
> + *     xen_mpumap[4] : Xen BSS
> + *     ......
> + *     xen_mpumap[max_xen_mpumap - 2]: Xen init data
> + *     xen_mpumap[max_xen_mpumap - 1]: Xen init text
> + *
> + * Clobbers x0 - x6
> + *
> + * It shall be compliant with what describes in xen.lds.S, or the below
> + * codes need adjustment.
> + * It shall also follow the rules of putting fixed MPU memory region in
> + * the front, and the others in the rear, which, here, mainly refers to
> + * boot-only region, like Xen init text region.
> + */
> +ENTRY(prepare_early_mappings)
> +    /* stack LR as write_pr will be called later like nested function */
> +    mov   x6, lr
> +
> +    /* x0: region sel */
> +    mov   x0, xzr
> +    /* Xen text section. */
> +    prepare_xen_region _stext, _etext, x0, x1, x2, x3, attr_prbar=REGION_TEXT_PRBAR
> +    create_mpu_entry xen_mpumap, x0, x1, x2, x3, x4

You always seem to call prepare_xen_region and create_mpu_entry. Can 
they be combined?

Also, will the first parameter of create_mpu_entry always be xen_mpumap? 
If so, I would drop it from the parameter list.


[...]

> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index bc45ea2c65..79965a3c17 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -91,6 +91,8 @@ SECTIONS
>         __ro_after_init_end = .;
>     } : text
>   
> +  . = ALIGN(PAGE_SIZE);

Why do you need this ALIGN?

> +  __data_begin = .;
>     .data.read_mostly : {
>          /* Exception table */
>          __start___ex_table = .;
> @@ -157,7 +159,9 @@ SECTIONS
>          *(.altinstr_replacement)

I know you are not using alternative instructions yet. But you should 
make sure they are included. So rather than introducing 
__init_data_begin, I think you want to use "_einitext" for the start of 
the "Init data" section.

>     } :text
>     . = ALIGN(PAGE_SIZE);
> +

Spurious?

>     .init.data : {
> +       __init_data_begin = .;            /* Init data */
>          *(.init.rodata)
>          *(.init.rodata.*)
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 15:23:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 15:23:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481199.745944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWky-0000Ru-NF; Thu, 19 Jan 2023 15:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481199.745944; Thu, 19 Jan 2023 15:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWky-0000Rl-KH; Thu, 19 Jan 2023 15:23:12 +0000
Received: by outflank-mailman (input) for mailman id 481199;
 Thu, 19 Jan 2023 15:23:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RBPQ=5Q=citrix.com=prvs=3763d7854=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIWkw-0000C9-Sy
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 15:23:11 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2c064f9c-980d-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 16:23:07 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c064f9c-980d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674141787;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=xXI7LtIQ3wajoUWP2aIUocEFf+htiw98CX7jFZ6KiZA=;
  b=UpaT0T8kySoFiJ1vreRF9I17ZxxBz0PWiEewfv9573IFHuWFfqij04kv
   w93BFlRaqPEjJ76tF7uea1ro3qatis3jKTIe17ES42/kpiOYwVuJ7mgDQ
   vs51gEMOZpSXc23lUSAYPKrQk9eS7YAmWZCknnFf/59HX/HFcA+BvKOp6
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93331236
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:sq18tKvEN64ZLxk1owZpTHqn8OfnVHheMUV32f8akzHdYApBsoF/q
 tZmKWmOaKyOZzf1etoiatm/pk4C7MLXmIJmSFBsqn08Ri4U+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg0HVU/IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj51v0gnRkPaoQ5AaHxiFPZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwDxI3ZRqPiMaNnomeZvIvmu0jFZSzM9ZK0p1g5Wmx4fcORJnCR+PB5MNC3Sd2jcdLdRrcT
 5NHM3w1Nk2GOkARfA5NU/rSn8/x7pX7WzRetFKSo7tx+2XJxRZ9+LPsLMDUapqBQsA9ckOw9
 zmdpD2jWU9y2Nq3xgO090+AmNP0nCbWXdJKC72nzdtljwjGroAUIEJPDgbqyRWjsWa8RtZeJ
 ko86ico668o+ySDTNPwQhm5q36spQMHVpxbFOhSwB6J4rrZ5UCeHGdsZiVadNUsucsyRDor/
 lyEhdXkAXpoqrL9YWKQ8PKYoC2/PQARLHQefmkUQA0d+d7hrYovyBXVQb5e/LWd14OvX2uqm
 nbT8XZ43u9I5SIW60ml1XfluTmmqpftdVAOwynMHX6M7jokPIHwMuRE9mPnAeZ8wJexFwfe5
 ylewZDBvIjiHrnWynXTHbxl8KWBoq/cbWaC2QMH84wJrWzFxpK1QWxHDNiSzm9NO91MRzLma
 VS7Veh5tM4KZyvCgUOajuuM5yUWIUvIT46Nugj8NIYmX3SIXFbvENtSTUCRxXvxt0MnjLsyP
 5yWGe71UylGUfo5kmHnFrdNuVPO+szY7TmLLXwc5033uYdymVbPEetVWLdwRr5RAFy4TPX9r
 I8EapriJ+R3W+zieCjHmbP/3nhTRUXX8ave8pQNHsbae1oOJY3UI6OJqV/XU9A/zvs9eyah1
 i3VZ3K0P3Ki1COZd17SMC0LhXGGdc8XkE/X9BcEZT6As0XPq671hEvDX/PbpYUaydE=
IronPort-HdrOrdr: A9a23:z7RpLqGNrcaE89ecpLqE7MeALOsnbusQ8zAXPidKJSC9E/b2qy
 nKpp8mPHDP5gr5NEtApTnjAtjifZqsz/5ICOAqVN/JMTUO01HYTr2Kg7GSpwHIKmnT8fNcyL
 clU4UWMqyWMbGit7ee3DWF
X-IronPort-AV: E=Sophos;i="5.97,229,1669093200"; 
   d="scan'208";a="93331236"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v4 2/3] build: replace get-fields.sh by a python script
Date: Thu, 19 Jan 2023 15:22:55 +0000
Message-ID: <20230119152256.15832-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230119152256.15832-1-anthony.perard@citrix.com>
References: <20230119152256.15832-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The get-fields.sh script, which generates all the include/compat/.xlat/*.h
headers, is quite slow. It takes for example nearly 3 seconds to
generate platform.h on a recent machine, or 2.3 seconds for memory.h.

Rewriting the mix of shell/sed/python into a single python script makes
the generation of those files a lot faster.

No functional change: the headers generated are identical.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    To test the header generation, I've submitted a branch to the GitLab CI,
    where all the headers were generated; for each one, both the shell
    script and the python script were run and the results of the two
    compared.
    
    v4:
    - Use "token in ()" instead of "token in []"
    - Pre-compile regex
    - removed get_typedefs() as it's been removed from get-fields.sh
    
    Overall, this seems to save 0.07166s, down to 1.6673s when generating
    all xlat headers sequentially. (before removing get_typedefs)
    
    v3:
        converted to a python script instead of perl
        - this should allow more developers to be able to review/edit it.
        - it avoids adding a dependency on perl for the hypervisor build.
    
        It can be twice as slow as the perl version but, overall, when doing
        a build with make, there isn't much difference between the perl and
        python scripts.
        We might be able to speed the python script up by precompiling the
        many regexes, but it's probably not worth it. (python already has a
        cache of compiled regexes, but I think it's small, maybe 10 or so)
    
    v2:
    - Add .pl extension to the perl script
    - remove "-w" from the shebang as it duplicates "use warnings;"
    - Add a note in the commit message that the "headers generated are identical".
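    As a side note on the precompilation point above, a minimal sketch
    (illustrative only; names are made up) of the two styles being compared.
    Both have the same matching semantics; precompiling merely skips the
    lookup in the re module's internal pattern cache on every call:

```python
import re

TOKENS = ['struct', 'compat_t', '_pad1', 'name'] * 1000

# Style 1: module-level call; re consults its internal compiled-pattern
# cache on each invocation.
def count_inline(tokens):
    return sum(1 for t in tokens if re.match(r'^_pad\d*$', t))

# Style 2: precompiled pattern, as done in compat-xlat-header.py.
re_pad = re.compile(r'^_pad\d*$')
def count_precompiled(tokens):
    return sum(1 for t in tokens if re_pad.match(t))

assert count_inline(TOKENS) == count_precompiled(TOKENS) == 1000
```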

 xen/include/Makefile            |   6 +-
 xen/tools/compat-xlat-header.py | 432 +++++++++++++++++++++++++++++
 xen/tools/get-fields.sh         | 473 --------------------------------
 3 files changed, 434 insertions(+), 477 deletions(-)
 create mode 100644 xen/tools/compat-xlat-header.py
 delete mode 100644 xen/tools/get-fields.sh

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 65be310eca..b950423efe 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -60,9 +60,7 @@ cmd_compat_c = \
 
 quiet_cmd_xlat_headers = GEN     $@
 cmd_xlat_headers = \
-    while read what name; do \
-        $(SHELL) $(srctree)/tools/get-fields.sh "$$what" compat_$$name $< || exit $$?; \
-    done <$(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst >$@.new; \
+    $(PYTHON) $(srctree)/tools/compat-xlat-header.py $< $(patsubst $(obj)/compat/%,$(obj)/compat/.xlat/%,$(basename $<)).lst > $@.new; \
     mv -f $@.new $@
 
 targets += $(headers-y)
@@ -80,7 +78,7 @@ $(obj)/compat/%.c: $(src)/public/%.h $(srcdir)/xlat.lst $(srctree)/tools/compat-
 	$(call if_changed,compat_c)
 
 targets += $(patsubst compat/%, compat/.xlat/%, $(headers-y))
-$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/get-fields.sh FORCE
+$(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/compat-xlat-header.py FORCE
 	$(call if_changed,xlat_headers)
 
 quiet_cmd_xlat_lst = GEN     $@
diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
new file mode 100644
index 0000000000..ae5c9f11c9
--- /dev/null
+++ b/xen/tools/compat-xlat-header.py
@@ -0,0 +1,432 @@
+#!/usr/bin/env python
+
+from __future__ import print_function
+import re
+import sys
+
+re_identifier = re.compile(r'^[a-zA-Z_]')
+re_compat_handle = re.compile(r'^COMPAT_HANDLE\((.*)\)$')
+re_pad = re.compile(r'^_pad\d*$')
+re_compat = re.compile(r'^compat_.*_t$')
+re_brackets = re.compile(r'[{}]')
+
+def removeprefix(s, prefix):
+    if s.startswith(prefix):
+        return s[len(prefix):]
+    return s
+
+def removesuffix(s, suffix):
+    if s.endswith(suffix):
+        return s[:-len(suffix)]
+    return s
+
+def get_fields(looking_for, header_tokens):
+    level = 1
+    aggr = 0
+    fields = []
+    name = ''
+
+    for token in header_tokens:
+        if token in ('struct', 'union'):
+            if level == 1:
+                aggr = 1
+                fields = []
+                name = ''
+        elif token == '{':
+            level += 1
+        elif token == '}':
+            level -= 1
+            if level == 1 and name == looking_for:
+                fields.append(token)
+                return fields
+        elif re_identifier.match(token):
+            if not (aggr == 0 or name != ''):
+                name = token
+
+        if aggr != 0:
+            fields.append(token)
+
+    return []
+
+def build_enums(name, tokens):
+    level = 1
+    kind = ''
+    named = ''
+    fields = []
+    members = []
+    id = ''
+
+    for token in tokens:
+        if token in ('struct', 'union'):
+            if not level != 2:
+                fields = ['']
+            kind = "%s;%s" % (token, kind)
+        elif token == '{':
+            level += 1
+        elif token == '}':
+            level -= 1
+            if level == 1:
+                subkind = kind
+                (subkind, _, _) = subkind.partition(';')
+                if subkind == 'union':
+                    print("\nenum XLAT_%s {" % (name,))
+                    for m in members:
+                        print("    XLAT_%s_%s," % (name, m))
+                    print("};")
+                return
+            elif level == 2:
+                named = '?'
+        elif re_identifier.match(token):
+            id = token
+            k = kind
+            (_, _, k) = k.partition(';')
+            if named != '' and k != '':
+                if len(fields) > 0 and fields[0] == '':
+                    fields.pop(0)
+                build_enums("%s_%s" % (name, token), fields)
+                named = '!'
+        elif token == ',':
+            if level == 2:
+                members.append(id)
+        elif token == ';':
+            if level == 2:
+                members.append(id)
+            if named != '':
+                (_, _, kind) = kind.partition(';')
+            named = ''
+        if len(fields) != 0:
+            fields.append(token)
+
+def handle_field(prefix, name, id, type, fields):
+    if len(fields) == 0:
+        print(" \\")
+        if type == '':
+            print("%s(_d_)->%s = (_s_)->%s;" % (prefix, id, id), end='')
+        else:
+            k = id.replace('.', '_')
+            print("%sXLAT_%s_HNDL_%s(_d_, _s_);" % (prefix, name, k), end='')
+    elif not re_brackets.search(' '.join(fields)):
+        tag = ' '.join(fields)
+        tag = re.sub(r'\s*(struct|union)\s+(compat_)?(\w+)\s.*', '\\3', tag)
+        print(" \\")
+        print("%sXLAT_%s(&(_d_)->%s, &(_s_)->%s);" % (prefix, tag, id, id), end='')
+    else:
+        func_id = id
+        func_tokens = fields
+        kind = ''
+        array = ""
+        level = 1
+        arrlvl = 1
+        array_type = ''
+        id = ''
+        type = ''
+        fields = []
+        for token in func_tokens:
+            if token in ('struct', 'union'):
+                if level == 2:
+                    fields = ['']
+                if level == 1:
+                    kind = token
+                    if kind == 'union':
+                        tmp = func_id.replace('.', '_')
+                        print(" \\")
+                        print("%sswitch (%s) {" % (prefix, tmp), end='')
+            elif token == '{':
+                level += 1
+                id = ''
+            elif token == '}':
+                level -= 1
+                id = ''
+                if level == 1 and kind == 'union':
+                    print(" \\")
+                    print("%s}" % (prefix,), end='')
+            elif token == '[':
+                if level != 2 or arrlvl != 1:
+                    pass
+                elif array == '':
+                    array = ' '
+                else:
+                    array = "%s;" % (array,)
+                arrlvl += 1
+            elif token == ']':
+                arrlvl -= 1
+            elif re_compat_handle.match(token):
+                if level == 2 and id == '':
+                    m = re_compat_handle.match(token)
+                    type = m.groups()[0]
+                    type = removeprefix(type, 'compat_')
+            elif token == "compat_domain_handle_t":
+                if level == 2 and id == '':
+                    array_type = token
+            elif re_identifier.match(token):
+                id = token
+            elif token in (',', ';'):
+                if level == 2 and not re_pad.match(id):
+                    if kind == 'union':
+                        tmp = "%s.%s" % (func_id, id)
+                        tmp = tmp.replace('.', '_')
+                        print(" \\")
+                        print("%scase XLAT_%s_%s:" % (prefix, name, tmp), end='')
+                        if len(fields) > 0 and fields[0] == '':
+                            fields.pop(0)
+                        handle_field("%s    " % (prefix,), name, "%s.%s" % (func_id, id), type, fields)
+                    elif array == '' and array_type == '':
+                        if len(fields) > 0 and fields[0] == '':
+                            fields.pop(0)
+                        handle_field(prefix, name, "%s.%s" % (func_id, id), type, fields)
+                    elif array == '':
+                        copy_array("    ", "%s.%s" % (func_id, id))
+                    else:
+                        (_, _, array) = array.partition(';')
+                        if len(fields) > 0 and fields[0] == '':
+                            fields.pop(0)
+                        handle_array(prefix, name, "%s.%s" % (func_id, id), array, type, fields)
+                    if token == ';':
+                        fields = []
+                        id = ''
+                        type = ''
+                    array = ''
+                    if kind == 'union':
+                        print(" \\")
+                        print("%s    break;" % (prefix,), end='')
+            else:
+                if array != '':
+                    array = "%s %s" % (array, token)
+            if len(fields) > 0:
+                fields.append(token)
+
+def copy_array(prefix, id):
+    print(" \\")
+    print("%sif ((_d_)->%s != (_s_)->%s) \\" % (prefix, id, id))
+    print("%s    memcpy((_d_)->%s, (_s_)->%s, sizeof((_d_)->%s));" % (prefix, id, id, id), end='')
+
+def handle_array(prefix, name, id, array, type, fields):
+    i = re.sub(r'[^;]', '', array)
+    i = "i%s" % (len(i),)
+
+    print(" \\")
+    print("%s{ \\" % (prefix,))
+    print("%s    unsigned int %s; \\" % (prefix, i))
+    (head, _, tail) = array.partition(';')
+    head = head.strip()
+    print("%s    for (%s = 0; %s < %s; ++%s) {" % (prefix, i, i, head, i), end='')
+    if ';' not in array:
+        handle_field("%s        " % (prefix,), name, "%s[%s]" % (id, i), type, fields)
+    else:
+        handle_array("%s        " % (prefix,), name, "%s[%s]" % (id, i), tail, type, fields)
+    print(" \\")
+    print("%s    } \\" % (prefix,))
+    print("%s}" % (prefix,), end='')
+
+def build_body(name, tokens):
+    level = 1
+    id = ''
+    array = ''
+    arrlvl = 1
+    array_type = ''
+    type = ''
+    fields = []
+
+    print("\n#define XLAT_%s(_d_, _s_) do {" % (name,), end='')
+
+    for token in tokens:
+        if token in ('struct', 'union'):
+            if level == 2:
+                fields = ['']
+        elif token == '{':
+            level += 1
+            id = ''
+        elif token == '}':
+            level -= 1
+            id = ''
+        elif token == '[':
+            if level != 2 or arrlvl != 1:
+                pass
+            elif array == '':
+                array = ' '
+            else:
+                array = "%s;" % (array,)
+            arrlvl += 1
+        elif token == ']':
+            arrlvl -= 1
+        elif re_compat_handle.match(token):
+            if level == 2 and id == '':
+                m = re_compat_handle.match(token)
+                type = m.groups()[0]
+                type = removeprefix(type, 'compat_')
+        elif token == "compat_domain_handle_t":
+            if level == 2 and id == '':
+                array_type = token
+        elif re_identifier.match(token):
+            if array != '':
+                array = "%s %s" % (array, token)
+            else:
+                id = token
+        elif token in (',', ';'):
+            if level == 2 and not re_pad.match(id):
+                if array == '' and array_type == '':
+                    if len(fields) > 0 and fields[0] == '':
+                        fields.pop(0)
+                    handle_field("    ", name, id, type, fields)
+                elif array == '':
+                    copy_array("    ", id)
+                else:
+                    (head, sep, tmp) = array.partition(';')
+                    if sep == '':
+                        tmp = head
+                    if len(fields) > 0 and fields[0] == '':
+                        fields.pop(0)
+                    handle_array("    ", name, id, tmp, type, fields)
+                if token == ';':
+                    fields = []
+                    id = ''
+                    type = ''
+                array = ''
+        else:
+            if array != '':
+                array = "%s %s" % (array, token)
+        if len(fields) > 0:
+            fields.append(token)
+    print(" \\\n} while (0)")
+
+def check_field(kind, name, field, extrafields):
+    if not re_brackets.search(' '.join(extrafields)):
+        print("; \\")
+        if len(extrafields) != 0:
+            for token in extrafields:
+                if token in ('struct', 'union'):
+                    pass
+                elif re_identifier.match(token):
+                    print("    CHECK_%s" % (removeprefix(token, 'xen_'),), end='')
+                    break
+                else:
+                    raise Exception("Malformed compound declaration: '%s'" % (token,))
+        elif '.' not in field:
+            print("    CHECK_FIELD_(%s, %s, %s)" % (kind, name, field), end='')
+        else:
+            n = field.count('.')
+            field = field.replace('.', ', ')
+            print("    CHECK_SUBFIELD_%s_(%s, %s, %s)" % (n, kind, name, field), end='')
+    else:
+        level = 1
+        fields = []
+        id = ''
+
+        for token in extrafields:
+            if token in ('struct', 'union'):
+                if level == 2:
+                    fields = ['']
+            elif token == '{':
+                level += 1
+                id = ''
+            elif token == '}':
+                level -= 1
+                id = ''
+            elif re_compat.match(token):
+                if level == 2:
+                    fields = ['']
+                    token = removesuffix(token, '_t')
+                    token = removeprefix(token, 'compat_')
+            elif re.match(r'^evtchn_.*_compat_t$', token):
+                if level == 2 and token != "evtchn_port_compat_t":
+                    fields = ['']
+                    token = removesuffix(token, '_compat_t')
+            elif re_identifier.match(token):
+                id = token
+            elif token in (',', ';'):
+                if level == 2 and not re_pad.match(id):
+                    if len(fields) > 0 and fields[0] == '':
+                        fields.pop(0)
+                    check_field(kind, name, "%s.%s" % (field, id), fields)
+                    if token == ";":
+                        fields = []
+                        id = ''
+            if len(fields) > 0:
+                fields.append(token)
+
+def build_check(name, tokens):
+    level = 1
+    fields = []
+    kind = ''
+    id = ''
+    arrlvl = 1
+
+    print("")
+    print("#define CHECK_%s \\" % (name,))
+
+    for token in tokens:
+        if token in ('struct', 'union'):
+            if level == 1:
+                kind = token
+                print("    CHECK_SIZE_(%s, %s)" % (kind, name), end='')
+            elif level == 2:
+                fields = ['']
+        elif token == '{':
+            level += 1
+            id = ''
+        elif token == '}':
+            level -= 1
+            id = ''
+        elif token == '[':
+            arrlvl += 1
+        elif token == ']':
+            arrlvl -= 1
+        elif re_compat.match(token):
+            if level == 2 and token != "compat_argo_port_t":
+                fields = ['']
+                token = removesuffix(token, '_t')
+                token = removeprefix(token, 'compat_')
+        elif re_identifier.match(token):
+            if level == 2 and arrlvl == 1:
+                id = token
+        elif token in (',', ';'):
+            if level == 2 and not re_pad.match(id):
+                if len(fields) > 0 and fields[0] == '':
+                    fields.pop(0)
+                check_field(kind, name, id, fields)
+                if token == ";":
+                    fields = []
+                    id = ''
+
+        if len(fields) > 0:
+            fields.append(token)
+    print("")
+
+
+def main():
+    header_tokens = []
+    re_tokenizer = re.compile(r'\s+')
+    re_skip_line = re.compile(r'^\s*(#|$)')
+    re_spacer = re.compile(r'([\]\[,;:{}])')
+
+    with open(sys.argv[1]) as header:
+        for line in header:
+            if re_skip_line.match(line):
+                continue
+            line = re_spacer.sub(' \\1 ', line)
+            line = line.strip()
+            header_tokens += re_tokenizer.split(line)
+
+    with open(sys.argv[2]) as compat_list:
+        for line in compat_list:
+            words = re_tokenizer.split(line, maxsplit=1)
+            what = words[0]
+            name = words[1]
+
+            name = removeprefix(name, 'xen')
+            name = name.strip()
+
+            fields = get_fields("compat_%s" % (name,), header_tokens)
+            if len(fields) == 0:
+                raise Exception("Fields of 'compat_%s' not found in '%s'" % (name, sys.argv[1]))
+
+            if what == "!":
+                build_enums(name, fields)
+                build_body(name, fields)
+            elif what == "?":
+                build_check(name, fields)
+            else:
+                raise Exception("Invalid translation indicator: '%s'" % (what,))
+
+if __name__ == '__main__':
+    main()
diff --git a/xen/tools/get-fields.sh b/xen/tools/get-fields.sh
deleted file mode 100644
index ad4a7aacc6..0000000000
--- a/xen/tools/get-fields.sh
+++ /dev/null
@@ -1,473 +0,0 @@
-test -n "$1" -a -n "$2" -a -n "$3"
-set -ef
-
-SED=sed
-if test -x /usr/xpg4/bin/sed; then
-	SED=/usr/xpg4/bin/sed
-fi
-if test -z ${PYTHON}; then
-	PYTHON=`/usr/bin/env python`
-fi
-if test -z ${PYTHON}; then
-	echo "Python not found"
-	exit 1
-fi
-
-get_fields ()
-{
-	local level=1 aggr=0 name= fields=
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 1 || aggr=1 fields= name=
-			;;
-		"{")
-			level=$(expr $level + 1)
-			;;
-		"}")
-			level=$(expr $level - 1)
-			if [ $level = 1 -a $name = $1 ]
-			then
-				echo "$fields }"
-				return 0
-			fi
-			;;
-		[a-zA-Z_]*)
-			test $aggr = 0 -o -n "$name" || name="$token"
-			;;
-		esac
-		test $aggr = 0 || fields="$fields $token"
-	done
-}
-
-build_enums ()
-{
-	local level=1 kind= fields= members= named= id= token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 2 || fields=" "
-			kind="$token;$kind"
-			;;
-		"{")
-			level=$(expr $level + 1)
-			;;
-		"}")
-			level=$(expr $level - 1)
-			if [ $level = 1 ]
-			then
-				if [ "${kind%%;*}" = union ]
-				then
-					echo
-					echo "enum XLAT_$1 {"
-					for m in $members
-					do
-						echo "    XLAT_${1}_$m,"
-					done
-					echo "};"
-				fi
-				return 0
-			elif [ $level = 2 ]
-			then
-				named='?'
-			fi
-			;;
-		[a-zA-Z]*)
-			id=$token
-			if [ -n "$named" -a -n "${kind#*;}" ]
-			then
-				build_enums ${1}_$token "$fields"
-				named='!'
-			fi
-			;;
-		",")
-			test $level != 2 || members="$members $id"
-			;;
-		";")
-			test $level != 2 || members="$members $id"
-			test -z "$named" || kind=${kind#*;}
-			named=
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-}
-
-handle_field ()
-{
-	if [ -z "$5" ]
-	then
-		echo " \\"
-		if [ -z "$4" ]
-		then
-			printf %s "$1(_d_)->$3 = (_s_)->$3;"
-		else
-			printf %s "$1XLAT_${2}_HNDL_$(echo $3 | $SED 's,\.,_,g')(_d_, _s_);"
-		fi
-	elif [ -z "$(echo "$5" | $SED 's,[^{}],,g')" ]
-	then
-		local tag=$(echo "$5" | ${PYTHON} -c '
-import re,sys
-for line in sys.stdin.readlines():
-    sys.stdout.write(re.subn(r"\s*(struct|union)\s+(compat_)?(\w+)\s.*", r"\3", line)[0].rstrip() + "\n")
-')
-		echo " \\"
-		printf %s "${1}XLAT_$tag(&(_d_)->$3, &(_s_)->$3);"
-	else
-		local level=1 kind= fields= id= array= arrlvl=1 array_type= type= token
-		for token in $5
-		do
-			case "$token" in
-			struct|union)
-				test $level != 2 || fields=" "
-				if [ $level = 1 ]
-				then
-					kind=$token
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "${1}switch ($(echo $3 | $SED 's,\.,_,g')) {"
-					fi
-				fi
-				;;
-			"{")
-				level=$(expr $level + 1) id=
-				;;
-			"}")
-				level=$(expr $level - 1) id=
-				if [ $level = 1 -a $kind = union ]
-				then
-					echo " \\"
-					printf %s "$1}"
-				fi
-				;;
-			"[")
-				if [ $level != 2 -o $arrlvl != 1 ]
-				then
-					:
-				elif [ -z "$array" ]
-				then
-					array=" "
-				else
-					array="$array;"
-				fi
-				arrlvl=$(expr $arrlvl + 1)
-				;;
-			"]")
-				arrlvl=$(expr $arrlvl - 1)
-				;;
-			COMPAT_HANDLE\(*\))
-				if [ $level = 2 -a -z "$id" ]
-				then
-					type=${token#COMPAT_HANDLE?}
-					type=${type%?}
-					type=${type#compat_}
-				fi
-				;;
-			compat_domain_handle_t)
-				if [ $level = 2 -a -z "$id" ]
-				then
-					array_type=$token
-				fi
-				;;
-			[a-zA-Z]*)
-				id=$token
-				;;
-			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-				then
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "${1}case XLAT_${2}_$(echo $3.$id | $SED 's,\.,_,g'):"
-						handle_field "$1    " $2 $3.$id "$type" "$fields"
-					elif [ -z "$array" -a -z "$array_type" ]
-					then
-						handle_field "$1" $2 $3.$id "$type" "$fields"
-					elif [ -z "$array" ]
-					then
-						copy_array "    " $3.$id
-					else
-						handle_array "$1" $2 $3.$id "${array#*;}" "$type" "$fields"
-					fi
-					test "$token" != ";" || fields= id= type=
-					array=
-					if [ $kind = union ]
-					then
-						echo " \\"
-						printf %s "$1    break;"
-					fi
-				fi
-				;;
-			*)
-				if [ -n "$array" ]
-				then
-					array="$array $token"
-				fi
-				;;
-			esac
-			test -z "$fields" || fields="$fields $token"
-		done
-	fi
-}
-
-copy_array ()
-{
-	echo " \\"
-	echo "${1}if ((_d_)->$2 != (_s_)->$2) \\"
-	printf %s "$1    memcpy((_d_)->$2, (_s_)->$2, sizeof((_d_)->$2));"
-}
-
-handle_array ()
-{
-	local i="i$(echo $4 | $SED 's,[^;], ,g' | wc -w | $SED 's,[[:space:]]*,,g')"
-	echo " \\"
-	echo "$1{ \\"
-	echo "$1    unsigned int $i; \\"
-	printf %s "$1    for ($i = 0; $i < "${4%%;*}"; ++$i) {"
-	if [ "$4" = "${4#*;}" ]
-	then
-		handle_field "$1        " $2 $3[$i] "$5" "$6"
-	else
-		handle_array "$1        " $2 $3[$i] "${4#*;}" "$5" "$6"
-	fi
-	echo " \\"
-	echo "$1    } \\"
-	printf %s "$1}"
-}
-
-build_body ()
-{
-	echo
-	printf %s "#define XLAT_$1(_d_, _s_) do {"
-	local level=1 fields= id= array= arrlvl=1 array_type= type= token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			test $level != 2 || fields=" "
-			;;
-		"{")
-			level=$(expr $level + 1) id=
-			;;
-		"}")
-			level=$(expr $level - 1) id=
-			;;
-		"[")
-			if [ $level != 2 -o $arrlvl != 1 ]
-			then
-				:
-			elif [ -z "$array" ]
-			then
-				array=" "
-			else
-				array="$array;"
-			fi
-			arrlvl=$(expr $arrlvl + 1)
-			;;
-		"]")
-			arrlvl=$(expr $arrlvl - 1)
-			;;
-		COMPAT_HANDLE\(*\))
-			if [ $level = 2 -a -z "$id" ]
-			then
-				type=${token#COMPAT_HANDLE?}
-				type=${type%?}
-				type=${type#compat_}
-			fi
-			;;
-		compat_domain_handle_t)
-			if [ $level = 2 -a -z "$id" ]
-			then
-				array_type=$token
-			fi
-			;;
-		[a-zA-Z_]*)
-			if [ -n "$array" ]
-			then
-				array="$array $token"
-			else
-				id=$token
-			fi
-			;;
-		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-			then
-				if [ -z "$array" -a -z "$array_type" ]
-				then
-					handle_field "    " $1 $id "$type" "$fields"
-				elif [ -z "$array" ]
-				then
-					copy_array "    " $id
-				else
-					handle_array "    " $1 $id "${array#*;}" "$type" "$fields"
-				fi
-				test "$token" != ";" || fields= id= type=
-				array=
-			fi
-			;;
-		*)
-			if [ -n "$array" ]
-			then
-				array="$array $token"
-			fi
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-	echo " \\"
-	echo "} while (0)"
-}
-
-check_field ()
-{
-	if [ -z "$(echo "$4" | $SED 's,[^{}],,g')" ]
-	then
-		echo "; \\"
-		local n=$(echo $3 | $SED 's,[^.], ,g' | wc -w | $SED 's,[[:space:]]*,,g')
-		if [ -n "$4" ]
-		then
-			for n in $4
-			do
-				case $n in
-				struct|union)
-					;;
-				[a-zA-Z_]*)
-					printf %s "    CHECK_${n#xen_}"
-					break
-					;;
-				*)
-					echo "Malformed compound declaration: '$n'" >&2
-					exit 1
-					;;
-				esac
-			done
-		elif [ $n = 0 ]
-		then
-			printf %s "    CHECK_FIELD_($1, $2, $3)"
-		else
-			printf %s "    CHECK_SUBFIELD_${n}_($1, $2, $(echo $3 | $SED 's!\.!, !g'))"
-		fi
-	else
-		local level=1 fields= id= token
-		for token in $4
-		do
-			case "$token" in
-			struct|union)
-				test $level != 2 || fields=" "
-				;;
-			"{")
-				level=$(expr $level + 1) id=
-				;;
-			"}")
-				level=$(expr $level - 1) id=
-				;;
-			compat_*_t)
-				if [ $level = 2 ]
-				then
-					fields=" "
-					token="${token%_t}"
-					token="${token#compat_}"
-				fi
-				;;
-			evtchn_*_compat_t)
-				if [ $level = 2 -a $token != evtchn_port_compat_t ]
-				then
-					fields=" "
-					token="${token%_compat_t}"
-				fi
-				;;
-			[a-zA-Z]*)
-				id=$token
-				;;
-			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-				then
-					check_field $1 $2 $3.$id "$fields"
-					test "$token" != ";" || fields= id=
-				fi
-				;;
-			esac
-			test -z "$fields" || fields="$fields $token"
-		done
-	fi
-}
-
-build_check ()
-{
-	echo
-	echo "#define CHECK_$1 \\"
-	local level=1 fields= kind= id= arrlvl=1 token
-	for token in $2
-	do
-		case "$token" in
-		struct|union)
-			if [ $level = 1 ]
-			then
-				kind=$token
-				printf %s "    CHECK_SIZE_($kind, $1)"
-			elif [ $level = 2 ]
-			then
-				fields=" "
-			fi
-			;;
-		"{")
-			level=$(expr $level + 1) id=
-			;;
-		"}")
-			level=$(expr $level - 1) id=
-			;;
-		"[")
-			arrlvl=$(expr $arrlvl + 1)
-			;;
-		"]")
-			arrlvl=$(expr $arrlvl - 1)
-			;;
-		compat_*_t)
-			if [ $level = 2 -a $token != compat_argo_port_t ]
-			then
-				fields=" "
-				token="${token%_t}"
-				token="${token#compat_}"
-			fi
-			;;
-		[a-zA-Z_]*)
-			test $level != 2 -o $arrlvl != 1 || id=$token
-			;;
-		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
-			then
-				check_field $kind $1 $id "$fields"
-				test "$token" != ";" || fields= id=
-			fi
-			;;
-		esac
-		test -z "$fields" || fields="$fields $token"
-	done
-	echo ""
-}
-
-list="$($SED -e 's,^[[:space:]]#.*,,' -e 's!\([]\[,;:{}]\)! \1 !g' $3)"
-fields="$(get_fields $(echo $2 | $SED 's,^compat_xen,compat_,') "$list")"
-if [ -z "$fields" ]
-then
-	echo "Fields of '$2' not found in '$3'" >&2
-	exit 1
-fi
-name=${2#compat_}
-name=${name#xen}
-case "$1" in
-"!")
-	build_enums $name "$fields"
-	build_body $name "$fields"
-	;;
-"?")
-	build_check $name "$fields"
-	;;
-*)
-	echo "Invalid translation indicator: '$1'" >&2
-	exit 1
-	;;
-esac
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 15:23:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 15:23:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481200.745948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWky-0000Tw-Un; Thu, 19 Jan 2023 15:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481200.745948; Thu, 19 Jan 2023 15:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWky-0000TZ-R0; Thu, 19 Jan 2023 15:23:12 +0000
Received: by outflank-mailman (input) for mailman id 481200;
 Thu, 19 Jan 2023 15:23:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RBPQ=5Q=citrix.com=prvs=3763d7854=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIWkx-0000C1-Gf
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 15:23:11 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a4d8cb2-980d-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 16:23:04 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a4d8cb2-980d-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674141788;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=xn/IxrW5V76kot9CBVjdkzIoSE64b8psdGUefGXif2Q=;
  b=Zl6tYApfV4dYZIzl/yAxvUrcDAQepnslwt8PlTsh03YqQt3F42hA2l+n
   cNblds2662MS2yxYDCaVuSLQBqYpML4W5YsF6vDUC9mr+hNJWURPLlLXR
   Rf8zhEHuPq/pcWIh//RC2IJBINjt0lNgFjkp3zCtHUCl3cXI0Quh4mdSv
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93399569
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:MDORKaJV4m0sXmLGFE+R/5UlxSXFcZb7ZxGr2PjKsXjdYENShmQDy
 WoZDW/TPPaNMGTwed9+OoviphlU65GDztM1SVRlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHv+kUrWs1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPcwP9TlK6q4mhA5wVvPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c59K010y
 tgSNwsnLRWE17vszrulbcVV05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TbHp4EzxvG9
 woq+Uz5CzEHaNKV0QHdrHicuOz/uiXEctINQejQGvlC3wTImz175ActfUu2p7y1h1CzX/pbK
 lcI4Ww+oK4q7kupQ9LhGRqirxasvBQRRt5RGO0S8xyWx+zf5APxLmoZSj9MbvQ2uclwQiYlv
 neShM/gDzFrtLyTSFqe+62SoDf0PjIaRUcdYQcUQA1D5MPsyLzflTqWEIwlSvTsyISoR3epm
 WviQDUCa6s7h+Qn7Zqf90/8qXGpociQFAA8+CL7Zzfwhu9mX7JJd7BE+HCCs6kbfdzDFgbR1
 JQXs5PAtb5TVPlhgATIGbxQR+/xup5pJRWG2TZS848dGyNBEpJJVaRZ+3lAKUhgKa7okhe5M
 RaI6Wu9CHK+VUZGjJObgKrrUazGNYC6SbzYugn8N7KimKRZeg6d5z1JbkWNxW3rm0VEufhhZ
 svDL5jyVidLWfQPIN+KqwE1i+dDKscWnDO7eHwG507/jer2iIC9F9/pz2dinshmtfjZ8W05A
 v5UNteQygU3bQENSnC/zGLnFnhTdSJTLcmv+6RqmhurflIO9JcJV6WAntvMuuVNw8xoqws/1
 izsBBEGkwKl2BUq62yiMxheVV8mZr4nxVpTAMDmFQ/AN6QLCWp30JoiSg==
IronPort-HdrOrdr: A9a23:7JwMX6FyMZLCHDtppLqE5seALOsnbusQ8zAXPiFKJSC9F/byqy
 nAppsmPHPP5gr5OktBpTnwAsi9qBrnnPYejLX5Vo3SPzUO1lHYSb1K3M/PxCDhBj271sM179
 YFT0GmMqyTMWRH
X-IronPort-AV: E=Sophos;i="5.97,229,1669093200"; 
   d="scan'208";a="93399569"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v4 3/3] build: compat-xlat-header.py: optimisation to search for just '{' instead of [{}]
Date: Thu, 19 Jan 2023 15:22:56 +0000
Message-ID: <20230119152256.15832-4-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230119152256.15832-1-anthony.perard@citrix.com>
References: <20230119152256.15832-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

`fields` and `extrafields` always contain all the parts of a sub-struct, so
when there is a '}', there is always a '{' before it. Also, both are
lists, so a plain membership test for '{' is enough.
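The equivalence this relies on can be sketched as follows (the function
names and sample token lists below are illustrative, not taken from the
script):

```python
import re

re_brackets = re.compile(r'[{}]')

def has_subaggregate_regex(fields):
    # Old test: join the tokens back into a string and regex-search
    # for either brace.
    return bool(re_brackets.search(' '.join(fields)))

def has_subaggregate_membership(fields):
    # New test: braces only ever appear as standalone tokens, and a
    # '}' cannot occur without a preceding '{', so checking for '{'
    # alone suffices -- and a list membership test is cheaper than a
    # join plus a regex scan.
    return '{' in fields

flat = ['uint32_t', 'port', ';']
nested = ['struct', '{', 'uint32_t', 'port', ';', '}', 'u', ';']
for fields in (flat, nested):
    assert has_subaggregate_regex(fields) == has_subaggregate_membership(fields)
```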

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/tools/compat-xlat-header.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
index ae5c9f11c9..d0a864b68e 100644
--- a/xen/tools/compat-xlat-header.py
+++ b/xen/tools/compat-xlat-header.py
@@ -105,7 +105,7 @@ def handle_field(prefix, name, id, type, fields):
         else:
             k = id.replace('.', '_')
             print("%sXLAT_%s_HNDL_%s(_d_, _s_);" % (prefix, name, k), end='')
-    elif not re_brackets.search(' '.join(fields)):
+    elif '{' not in fields:
         tag = ' '.join(fields)
         tag = re.sub(r'\s*(struct|union)\s+(compat_)?(\w+)\s.*', '\\3', tag)
         print(" \\")
@@ -290,7 +290,7 @@ def build_body(name, tokens):
     print(" \\\n} while (0)")
 
 def check_field(kind, name, field, extrafields):
-    if not re_brackets.search(' '.join(extrafields)):
+    if '{' not in extrafields:
         print("; \\")
         if len(extrafields) != 0:
             for token in extrafields:
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 15:23:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 15:23:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481198.745934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWkw-0000CH-Ey; Thu, 19 Jan 2023 15:23:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481198.745934; Thu, 19 Jan 2023 15:23:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWkw-0000CA-C0; Thu, 19 Jan 2023 15:23:10 +0000
Received: by outflank-mailman (input) for mailman id 481198;
 Thu, 19 Jan 2023 15:23:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RBPQ=5Q=citrix.com=prvs=3763d7854=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIWkv-0000C1-9W
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 15:23:09 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28e0564d-980d-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 16:23:02 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28e0564d-980d-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674141786;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=GSkTDl1mMIYyDF4Fu8QPO73Ju/CNKuXMuVUvJ5T5Aoo=;
  b=cem6om21JFj1vD9uLPxSjwhY7Z9V2uRhAC1GJpy+X47zotaduYe9KaBJ
   tJjiTbgiKOKHMQEbqpzFEdcsq8TDPyE1nK/WXxriGSN/jwCiZlmCxYAmD
   1q3Wnsb66NFqWwm27EEnHsQJErVHy0AiJdjx1pU0XRN07H3DIt8yv7Aa3
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92268585
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:gt1jHq6tRSA5agqOGQ/BlAxRtBbHchMFZxGqfqrLsTDasY5as4F+v
 mAdX2uPO6qMZ2P1c911Poi19EkOuJTWndY3TANs/39nHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraBYnoqLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+45wehBtC5gZlPakS7QeE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5mr
 8MRFCtSQBa4mOeInKyLTM893JQsM5y+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrnD5bz1frkPTvact6nLf5AdwzKLsIJzefdniqcB9zxzC+
 DKbrzmR7hcyJY28yCiuzHeVoKzQlnOnBK4RKLmZ6as/6LGU7jNKU0BHPbehmtGph0j7V99BJ
 kg8/is1sbN05EGtVsP6XRCzvDiDpBF0c8FLD+Qw5QWJy6zVywWUHG4JSnhGctNOnM0rQT0n0
 HeZktWvAiZg2JWXQ3+A8rafrRupJDMYa2QFYEcsUg8t89Tl5oYpgXryos1LSfDvyIevQHepn
 m7M9XJl71kOsSIV//+E9Gzc3ByqnYfMcFIr1gPxDzj14RwsMeZJeLeUBUjnAedoddjGFQjb5
 iBby6By/8hVU8jTyXXlrPElWejwuq3baGC0bUtHRcFJyti7x5K0kWm8ChlaLVwhDMsLcCSBj
 KT76VIIv8870JdHgMZKj2ON5ycCl/KI+SzNDKy8Uza3SsEZmPW71C9vf1WM+GvmjVIhl6oyU
 b/CL5nwVShEV/82nWrmLwv47VPN7npmrY80bcmrpylLLJLEPCLFIVv7GAXmgh8FAFOs/1yOr
 oc32zqiwBRDSuzuChQ7AqZKRW3m2UMTXMisw+QOL77rH+aTMD15YxMn6e97KtMNcmU8vrugw
 0xRrWcFmQSh2yafc1jih7IKQOqHYKuTZEkTZUQEVWtEEVB6CWpzxM/zr6cKQIQ=
IronPort-HdrOrdr: A9a23:EGQu4q9a+KFuxk89SHZuk+DWI+orL9Y04lQ7vn2ZKCY4TiX8ra
 uTdZsguiMc5Ax+ZJhDo7C90di7IE80nKQdieN9AV7IZniEhILHFvAG0aLShxHmBi3i5qp8+M
 5bAsxD4QTLfDpHsfo=
X-IronPort-AV: E=Sophos;i="5.97,229,1669093200"; 
   d="scan'208";a="92268585"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [XEN PATCH v4 0/3] xen: rework compat headers generation
Date: Thu, 19 Jan 2023 15:22:53 +0000
Message-ID: <20230119152256.15832-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.build-system-xen-include-rework-v4

v4:
- new patch removing get_typedefs() from the existing script
  (and thus also removing it from the new script)
- Added some optimisations, mainly pre-compiling regexes;
  it's slightly faster.

v3:
- Rewrite script into python instead of perl.
  (last patch of the series)

v2:
- new patch [1/4] to fix an issue with command lines that can be way too long
- other small changes, and reorder patches

Hi,

This patch series contains two improvements. The first is to use $(if_changed, )
in "include/Makefile" to make the generation of the compat headers less verbose
and to make the command line part of the decision to rebuild the headers.
The second is to replace one slow script with a much faster one, saving time
when generating the headers.
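As a minimal illustration of the regex pre-compilation mentioned in the v4
notes (the token data and function names below are made up for the example):

```python
import re

tokens = ['struct', '{', 'uint32_t', 'port', ';', '}'] * 1000

def scan_inline():
    # Passes the pattern string on every call; re.match() then has to
    # look the compiled pattern up in its internal cache each time.
    return sum(1 for tok in tokens if re.match(r'^[a-zA-Z_]', tok))

re_identifier = re.compile(r'^[a-zA-Z_]')

def scan_precompiled():
    # Compiles once up front and reuses the pattern object, skipping
    # the per-call cache lookup.
    return sum(1 for tok in tokens if re_identifier.match(tok))

assert scan_inline() == scan_precompiled() == 3000
```

The win per call is tiny, but it adds up over the many tokens produced from a
public header.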

Thanks.

Anthony PERARD (3):
  build: include/compat, remove typedefs handling
  build: replace get-fields.sh by a python script
  build: compat-xlat-header.py: optimisation to search for just '{'
    instead of [{}]

 xen/include/Makefile            |   6 +-
 xen/tools/compat-xlat-header.py | 432 ++++++++++++++++++++++++++
 xen/tools/get-fields.sh         | 528 --------------------------------
 3 files changed, 434 insertions(+), 532 deletions(-)
 create mode 100644 xen/tools/compat-xlat-header.py
 delete mode 100644 xen/tools/get-fields.sh

-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 15:23:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 15:23:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481201.745964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWl3-00010C-Cr; Thu, 19 Jan 2023 15:23:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481201.745964; Thu, 19 Jan 2023 15:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIWl3-000103-9C; Thu, 19 Jan 2023 15:23:17 +0000
Received: by outflank-mailman (input) for mailman id 481201;
 Thu, 19 Jan 2023 15:23:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RBPQ=5Q=citrix.com=prvs=3763d7854=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIWl1-0000C1-Ay
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 15:23:15 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2ccca58a-980d-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 16:23:08 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ccca58a-980d-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674141793;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Y3RL6Avu2AAie8vc75L0y154G/XcplbXjHREi2GuIus=;
  b=LazbwCf5k7+9FVP68DrfAsGVbPUd6GbQO1gw2nKThZy3UJlq+zpE+l99
   XIvRsXJ66EgxBtenFTMZTVPBuiYkVJgHFsBOSHha6vpprAqsK463JH+7l
   3yjHoDNSi5i5V3R2k6SnUb/VdRhPE2NxT9nL/GQ1o7T+rIbVqK6g56EY2
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92813036
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:V73LRq/3EcU1bpayx9PaDrUDqX6TJUtcMsCJ2f8bNWPcYEJGY0x3y
 WIXXjjVP/vfNmbzed5/bIi+9xsG6p+Dm9ZlTFdr+3g8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKucYHsZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kIw1BjOkGlA5AdmPKka5AW2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklF7
 thGEyAJViuEuPu9kJ23R8Z0jZwseZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAj3/jczpeuRSNqLA++WT7xw1tyrn9dtHSf7RmQO0ExBbB/
 TqdoQwVBDlDboy6lBag7kmzvenqximjXKEALpCBo6sCbFq7mTVIVUx+uUGAifukjk+zXfpPJ
 kpS/TAhxYAw/kG2Stj2XzWjvWWJ+BUbXrJ4DOkS+AyLjK3O7G6xHXMYRzRMbNgnss4eRjEw0
 FKN2dTzClRHoLCTDH6Q6LqQhTezIjQOa38PYzceSgkI6MWlp5s85i8jVf46TvTz1IesX2itn
 XbT9nNWa6gvYdAj3L6fo2vXhwqXoafQRV4a6gj4Rmn94VYsDGK6XLBE+WQ3/N4ZctnCHwPb5
 CdU8ySNxLtQVM/QzURhVM1IRej0vKjdbVUwlHY1R/EcGyKRF2lPlGy6yBV3Pw9XP8kNYlcFi
 2eD6FoKtPe/0JZHBJKbgr5d6Oxwl8AM7fy/CpjpgiNmO/CdjjOv8iB0flK31GvwikUqmqxXE
 c7FLpr0UyhEUvU2nGreqwIhPVkDnHhWKYT7HMCT8vha+eDGOC79pUktbjNikdzVHIvb+V6Io
 r6zxuOByglFUf2WX8Uk2dd7ELz+FlBiXcqeg5UOJoa+zv9ORDlJ5wn5nelwJOSIXs19yo/1w
 51KchIJlgSh3iWddG1nqBlLMdvSYHq2llpjVQREALpi8yN9CWpzxM/zr6cKQIQ=
IronPort-HdrOrdr: A9a23:BeG4Yq9AXioClUknJ0Ruk+DWI+orL9Y04lQ7vn2ZKCY4TiX8ra
 uTdZsguiMc5Ax+ZJhDo7C90di7IE80nKQdieN9AV7IZniEhILHFvAG0aLShxHmBi3i5qp8+M
 5bAsxD4QTLfDpHsfo=
X-IronPort-AV: E=Sophos;i="5.97,229,1669093200"; 
   d="scan'208";a="92813036"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v4 1/3] build: include/compat, remove typedefs handling
Date: Thu, 19 Jan 2023 15:22:54 +0000
Message-ID: <20230119152256.15832-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230119152256.15832-1-anthony.perard@citrix.com>
References: <20230119152256.15832-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Partial revert of c93bd0e6ea2a ("tmem: fix 32-on-64 support").
Since c492e19fdd05 ("xen: remove tmem from hypervisor"), this code
isn't used anymore.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v4:
    - new patch

 xen/tools/get-fields.sh | 57 +----------------------------------------
 1 file changed, 1 insertion(+), 56 deletions(-)

diff --git a/xen/tools/get-fields.sh b/xen/tools/get-fields.sh
index 002db2093f..ad4a7aacc6 100644
--- a/xen/tools/get-fields.sh
+++ b/xen/tools/get-fields.sh
@@ -41,34 +41,6 @@ get_fields ()
 	done
 }
 
-get_typedefs ()
-{
-	local level=1 state=
-	for token in $1
-	do
-		case "$token" in
-		typedef)
-			test $level != 1 || state=1
-			;;
-		COMPAT_HANDLE\(*\))
-			test $level != 1 -o "$state" != 1 || state=2
-			;;
-		[\{\[])
-			level=$(expr $level + 1)
-			;;
-		[\}\]])
-			level=$(expr $level - 1)
-			;;
-		";")
-			test $level != 1 || state=
-			;;
-		[a-zA-Z_]*)
-			test $level != 1 -o "$state" != 2 || echo "$token"
-			;;
-		esac
-	done
-}
-
 build_enums ()
 {
 	local level=1 kind= fields= members= named= id= token
@@ -201,21 +173,7 @@ for line in sys.stdin.readlines():
 				fi
 				;;
 			[a-zA-Z]*)
-				if [ -z "$id" -a -z "$type" -a -z "$array_type" ]
-				then
-					for id in $typedefs
-					do
-						test $id != "$token" || type=$id
-					done
-					if [ -z "$type" ]
-					then
-						id=$token
-					else
-						id=
-					fi
-				else
-					id=$token
-				fi
+				id=$token
 				;;
 			[\,\;])
 				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
@@ -330,18 +288,6 @@ build_body ()
 			if [ -n "$array" ]
 			then
 				array="$array $token"
-			elif [ -z "$id" -a -z "$type" -a -z "$array_type" ]
-			then
-				for id in $typedefs
-				do
-					test $id != "$token" || type=$id
-				done
-				if [ -z "$type" ]
-				then
-					id=$token
-				else
-					id=
-				fi
 			else
 				id=$token
 			fi
@@ -514,7 +460,6 @@ name=${2#compat_}
 name=${name#xen}
 case "$1" in
 "!")
-	typedefs="$(get_typedefs "$list")"
 	build_enums $name "$fields"
 	build_body $name "$fields"
 	;;
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 15:43:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 15:43:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481221.745974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIX4M-0004S4-2M; Thu, 19 Jan 2023 15:43:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481221.745974; Thu, 19 Jan 2023 15:43:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIX4L-0004Rx-Uv; Thu, 19 Jan 2023 15:43:13 +0000
Received: by outflank-mailman (input) for mailman id 481221;
 Thu, 19 Jan 2023 15:43:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DP+J=5Q=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIX4L-0004Rr-Hi
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 15:43:13 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2074.outbound.protection.outlook.com [40.107.105.74])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f78e0c31-980f-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 16:43:06 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9697.eurprd04.prod.outlook.com (2603:10a6:20b:480::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.12; Thu, 19 Jan
 2023 15:43:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 15:43:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f78e0c31-980f-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Xls2QcxJtViABrhpSwSIzTWxQe4833aOgEZ+q6Mq68AQRyPdh9PdeJfrDafIcbQOJSgc3UZvuETEP0odxJ6yuU4/7UcEcfUFjh7l7Aynh0isgN85eXLQL6R0am+fQqbxCEdUfKyRDRCKm4in3J/p3q1IJ4WW6j01x+2fLQDih3brpw0+iucA/aHoxEfyQRKu5lDOS1arL7n4IOXDEx3+5L9aIGRwFULV7egFw7iIhKIrIkQ3MtNGuNF8sypFD5rGwrv7yhhE189imjMz8gqsf63JLUtUd8NfEbcj7//cVR/D6DSS8i72gzPv+ZB33YE8weI87gTuvYlihAX9CkLwBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cE8PJXTR8azlFb4HPZRlTqqi/GSEat+IqwbKdSyrn/4=;
 b=hFlCuutq1kMGCK0U8+PICBCkd+exUwuHvd2aj31lmY/ioKO+gHsxearxCEPGqWz1ZxxI/m3VdKmqY4IgytIbAybuqD1T0NNIVgMJgU5RUofR+uLAiQH2+NrTaqrlhIeGfmnKw8mFzLEgA0gK7Pxt7D6u1hUuXqBS/vmKAaddhq7icdrndObze6r4tO0EZc2ro17M3v9qMbzOK1/0PPU4ais3Akp8juoUsvvk06gnADk7VpxaIJ9jJIj40h9EmhtkvlnamdplseAZw1rkPhQ/EDpH48YoBID3gkSDwbfZyzzlqFQ9baLpWEpmSY7hpwkVYa38jFUTArnW1s5cTHkt7Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cE8PJXTR8azlFb4HPZRlTqqi/GSEat+IqwbKdSyrn/4=;
 b=K6Pze2ZLFTatW3jHtWYhU8AzmqO+EmTqW+QJ9LNJPyAerGrFglLGGeYmppBvmloOab/Wp0m8ntKow+9+WLSH1jevLhTkAcs42U4YbRi+JDPU/BxAOyIx3fxDpd+wrgZaMEQmxmO4Qhn9h6TwGmAn1zzcfVkjcNO1M6qoPHAg51vd2DJGm23thhkhnUr9FdlAGmxdWdSfhQ4jywpoJxX5kVbS4nKoVAQ4M4d0G0wKm3GnyWAl1MqLKJsSFspV3o7tYbhRkZ4jaO13qVOanvWa7cArXu0lFRYkzeF6oB2K9TbdKKjzMaaTz8jHjakHl+gKtA2orjEZQVu0TugXpHSPWA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a2d262f8-77eb-44bb-d3c9-677ed73df22a@suse.com>
Date: Thu, 19 Jan 2023 16:43:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v5 1/5] xen/include: Change <asm/types.h> to <xen/types.h>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
 <916d01663e76a3a0acad93f6c234834deaa2dd72.1674131459.git.oleksii.kurochko@gmail.com>
 <22992b47-bac6-d522-a8a6-c55c3c15e7a5@suse.com>
In-Reply-To: <22992b47-bac6-d522-a8a6-c55c3c15e7a5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0037.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9697:EE_
X-MS-Office365-Filtering-Correlation-Id: 807506aa-6c3f-42e1-65a0-08dafa33dd35
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	otyfuBk1BxMyAdrN/bdYmtqpjJ0XaS9XsFz7zkPAl2aHoU6dbNrC21KaYlrsi9emReRbs3nx7EPmL/Y++SuIzU8v9o6ITZhKqP8n71P1xKd34bqljF+L+PPjwt7a/57LvY5w3o16D+XTdepICE6IR2JEqhsdL3YVzKfZJmBPkO4lAtyqDjAvuSXZkWTAsPsdKxE7L11YkeU/SyVXUoB+L0TgAaUe97JuKkko7se/GjbZ7Rtyt7Y3yCobVpNynHFGLeVFGwHpn3KNiJkZ7VlHU2nEWi6JBeMS28z5KBnTezQba3C31phQoj82qsMkBVYCY1EQzZqOcBM58TtEvHBFgXK06bESAyr6Nj8HtZPrRC8IoxywPlFjMBChSuZ+bTDK2zlz4GYVcz37A/fckTetfLcZNyN/mikSV+BKwvMRcsYdejyTUdjS+5aQkSO1WjiQWbfO+QmacTqkKXzXn0wY5KJOQfSfjE8MjHCEPPR1gGXCm6YpoOPXv6fMvNOek9NqUB3DQIXaMiIFYSnVo52uSSYdmt5Xwyr51QgmNLqSQDryWPCcTvLnPKeUnhU82mEZcOrqF7jaDJLIwYBIaCLz6X1ebxP1nLWxo2AoaepbXFNtZO75uuNEWM3qQOwFa+Icqu4A2d+mkzTy5vdrjK5qENMsZMYF3FDkA9SjO9HxsFKBjJf7OkCetaIR8aBhQz+NFLvdNaKyuodVw345Tt3lYt3hsN8UGVOLAHb0EpF+QryADVp0UpKFkbL0a8XFaj0gom/D0HKNN9t71Eub+MRfBA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(136003)(346002)(39860400002)(366004)(376002)(396003)(451199015)(6486002)(478600001)(966005)(36756003)(2906002)(38100700002)(6512007)(2616005)(83380400001)(6506007)(53546011)(31696002)(4744005)(26005)(186003)(66946007)(5660300002)(4326008)(66476007)(86362001)(8676002)(41300700001)(31686004)(66556008)(316002)(8936002)(6916009)(54906003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?T2hJei83d2VBVFpXWTM5ai82YWUzK0tmT0JEeGtMajYvVEZxeEZMMFlBdUtJ?=
 =?utf-8?B?cWpFbzQ4a3lucFpzZDFBa21MbldzTmlLSmtjNnpOc0JPR2lNQ0N0cWg1aEg3?=
 =?utf-8?B?ZTdvYWxiM3ZRV2ZoOVppOW5Ndm1ZdG9KcDJOV0E4Wm5XR2tBem1wZXFjMFJ1?=
 =?utf-8?B?MU5TMVk5V0NEcHRVdStHemFQRUEraEZqbnFuQ2ZuNjRuZ3JSMXVnV0I4c1lE?=
 =?utf-8?B?TDhUMnVTMURZNUtxdVByUy9sWDdveUdUS0MvTmo3RlVacWQzSE1tczBRSDVC?=
 =?utf-8?B?YmJjdFhpVjVzQTlLaFpuUzBSYzA2TlBKbW9uZndmZXpzcWJlSEFpcytrT1dG?=
 =?utf-8?B?eE4vQ0JBUmxLVGI2UGFXMzFmam1NNW9BTy9aZjZqRDdkVGFPVlI0cnFlSkFX?=
 =?utf-8?B?aUZvek9BSjJNYmpQakNLVjhWc3FsM3BlNm5NT0xIQWNhSmtQdUVya2hZaUtX?=
 =?utf-8?B?STQzM2xrNHROZEMxU283ME50WlRBUjFEd21yazA0bkx4ZzhENEpaNDBiKzhD?=
 =?utf-8?B?YVZTT0JEK3pxdXc0SC9IOXROZFVCaCtvTFY2Qld1bWRmd3hKV3d4MDhIZnNX?=
 =?utf-8?B?Y0x0amhBV0dldnBaQUE5aXluYkppQlY5YnQvWTY1RnEzSjRtVysySG5lN2tL?=
 =?utf-8?B?aDZLdXloa3JmQVpQSk9Lb3JOTHBaU3pRcG9kSGNaWmlCR21rOW9uVUhKbnlh?=
 =?utf-8?B?aDVoa3BaL2NvWCtCalVxTVRYT2c0NnFSZWRzenZJQXB3dnRVSG84b3YyREJF?=
 =?utf-8?B?SUJvamcvWWx6SlEvaU5XOXl3clZ5Sml0Y1BHVVJiZ043MFRTNUEzRVpHNURi?=
 =?utf-8?B?SkJFekc1bmZFSUN6MTkvb0hCeTd6VTBtc0RnUld1bFYrQkFjRDk5SWsraUpx?=
 =?utf-8?B?WW1MN2k4RDJSelVVelJRT3BqOGJzOTlWNnZxQzlYRGxCRUZ1ckU4cEFYOEJB?=
 =?utf-8?B?bkkvTlQ1TWg2eWJxQlJqN09zZVpZYjlrOHhWUy95UENRVDhjR2NtSUZVSGkz?=
 =?utf-8?B?aDZxbm5veWcxenJzQ01ycFNRalBvQ1dpdXpWOVNjWUVTa2x3blB2aWVVcG1k?=
 =?utf-8?B?d0Z0N3A2WDVvUkxYNXYwSFY4a2xvR3IyM00wd1psR3pHd21FMmV3QUxkYXNy?=
 =?utf-8?B?RkgvMFNMeWkwaUFjdmtYNGswb0M3OXVWRHdsV21qSEF6QWFTK3dIVzdXSlFB?=
 =?utf-8?B?dm82ZmNSOEppK1Y0djlFWkJJbGdpeVUzbFZYSnRxeGt6Vi8rR2d5QzFwRlZO?=
 =?utf-8?B?bTVYbFNIcE9kNlk4R1hVRkRnVkZxaU14NStQb21QcTZ6VHBuVXR4NmJ0TXVq?=
 =?utf-8?B?SUltS0YyR2plY012NDVoRlBEYitPVjk3NVh4cjJlWG80R2FzK2tiYitDNTdM?=
 =?utf-8?B?M1llemxHWW5uMWVQZ3BZYjh5VVM1UUxpZWs0ZzZXSkVMMitJQUJOaVpKTi9y?=
 =?utf-8?B?cXVsRnpBTzY4RXBNbE9BbHVUREczTWNuVGdLUjdocnFwelhjbHA3N2RpUDhE?=
 =?utf-8?B?N2N0b3JMWVVrUEI3UXgveTBkTG55T1JCVGVvM3NEWnhtRUVYY21kbVlHQTFu?=
 =?utf-8?B?ZlFZcUF2N1RNL2p2eWNwOEl3NlF5angrYUxRVTRhVFVmSXh1SUUvZTNWeXJn?=
 =?utf-8?B?R2xMUVc1anN4OG4xcDZMWGMrMUpLaUVCUHAwS3ZjQzlyZHZZd2JDRFpHbDlD?=
 =?utf-8?B?VzRmOTJoaTByc2xIdXZwbTArQitnbEJvWENkampDcnNGbGZpbnB3b2ZwV1lO?=
 =?utf-8?B?Y0QwMWVBTWpTSTRQWEpnV0J5cUh1WDNqWk5JM0Q3UDFGM3NQRVJNUkxPVWJs?=
 =?utf-8?B?dDRYMGRXMTlLaGxoVGZWNjBEWlI3RGNDblRiV0lXMVN5a1ZGKzdFVm56V0RZ?=
 =?utf-8?B?bzgxQVhEb0RtWDhhUURJYnpSM2YwVmpZYnlRS1E0ak03RThKL3BJRHpSMklG?=
 =?utf-8?B?SDdFSVVGVGczWXR1NXFYaW5xakEvNytRb3JCQ1BEYlhjYjJWVGR4QXhacGVu?=
 =?utf-8?B?WWxCc1RFSDNGcExONS9vMjg0Z0RnREJwSy9Zd0xVREUvdDlXQ0M4ZE90M3BO?=
 =?utf-8?B?WEtvSzQ4Q3FHVEhsL1lQd09malNWeEFxZVVneFhxUGU2T1UxTVFoeHpWc0dN?=
 =?utf-8?Q?HOkuzYlFBTbw+shbMcbgK2bvA?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 807506aa-6c3f-42e1-65a0-08dafa33dd35
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jan 2023 15:43:09.3042
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wiSDNmAwGYnPC3BnTmenyxEJbkeKgFsrqIhDWnOQGOSjAFYOeeNtOn7MOlHYOba4CPlI0pIqL5587N2VcD/Yhw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9697

On 19.01.2023 15:29, Jan Beulich wrote:
> On 19.01.2023 15:07, Oleksii Kurochko wrote:
>> In the patch "include/types: move stddef.h-kind types to common
>> header" [1] size_t was moved from <asm/types.h> to <xen/types.h>
>> so early_printk should be updated correspondingly.
> 
> Hmm, this reads a little like that patch would introduce a build
> issue (on Arm), but according to my testing it doesn't. If it did,
> the change here would need to be part of that (not yet committed)
> change. On the assumption that it really doesn't, ...
> 
>> [1] https://lore.kernel.org/xen-devel/5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com/
>>
>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Actually I notice we have more explicit uses of asm/types.h, and
hence the title of this change isn't really correct (with this
title I would expect all uses to go away underneath xen/include/xen).
I'll try to remember to adjust the title when committing.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jan 19 16:44:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 16:44:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481226.745984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIY13-0002kK-Gs; Thu, 19 Jan 2023 16:43:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481226.745984; Thu, 19 Jan 2023 16:43:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIY13-0002kD-De; Thu, 19 Jan 2023 16:43:53 +0000
Received: by outflank-mailman (input) for mailman id 481226;
 Thu, 19 Jan 2023 16:43:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIY12-0002k7-6W
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 16:43:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIY12-0008IE-3I; Thu, 19 Jan 2023 16:43:52 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.13.107]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIY11-0004Vy-Rq; Thu, 19 Jan 2023 16:43:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=MfPHl/jE/xRForke+Bx4yNyevW0T/HweuFe5Jh/E5C0=; b=GUybiXXxhu4ut89wZWJyA8jZVj
	ObAKgOEPs4h93xHO0KdTJzDv2qzwDDx7eq7J8X6Cl05V0/RMAXrRmkwYiCSUJiN9AO3Fu/YSJ04i+
	1kiDe63RjlBIWBsxcgkVI0mJF/pVslVdhMLZT6DZQJqQnjA2rD40dJ7L5TKAOgqkcXAU=;
Message-ID: <e6e33408-7e27-97ac-f32d-201f1e9c4cc2@xen.org>
Date: Thu, 19 Jan 2023 16:43:49 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: AW: AW: Xenalyze on ARM ( NXP S32G3 with Cortex-A53)
To: El Mesdadi Youssef ESK UILD7 <youssef.elmesdadi@zf.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
References: <AM5PR0802MB25781717167B5BFC980BF2A49DFF9@AM5PR0802MB2578.eurprd08.prod.outlook.com>
 <3e7059c2-0d23-03f2-9a93-f88de09171f4@xen.org>
 <AM5PR0802MB2578A1389424064D6884588E9DC29@AM5PR0802MB2578.eurprd08.prod.outlook.com>
 <619a00f0-0f9f-5f5f-13a7-ea86f9c24eec@xen.org>
 <AM5PR0802MB25785FDEB43A8137D644BC7E9DC49@AM5PR0802MB2578.eurprd08.prod.outlook.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM5PR0802MB25785FDEB43A8137D644BC7E9DC49@AM5PR0802MB2578.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 19/01/2023 14:22, El Mesdadi Youssef ESK UILD7 wrote:
>> The support for xentrace on Arm has been added around Xen 4.12. So it should work for Xen 4.14 (even though I don't recommend using older release).
> 
> Hello Julien,
> First of all, thank you for the help. I contacted NXP support to get more information about how to get the newest version of Xen while building my Image. My question is, which Xen version do you recommend?

I would always recommend the latest stable version available. In this 
case, this is Xen 4.17.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 17:05:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 17:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481232.745994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIYLM-0005Ey-7s; Thu, 19 Jan 2023 17:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481232.745994; Thu, 19 Jan 2023 17:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIYLM-0005Er-3C; Thu, 19 Jan 2023 17:04:52 +0000
Received: by outflank-mailman (input) for mailman id 481232;
 Thu, 19 Jan 2023 17:04:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIYLL-0005Eh-9Q; Thu, 19 Jan 2023 17:04:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIYLL-0000Q9-7W; Thu, 19 Jan 2023 17:04:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIYLK-0003Re-RQ; Thu, 19 Jan 2023 17:04:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIYLK-0000Po-Qz; Thu, 19 Jan 2023 17:04:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QMRsdodQiLRl6DiIuBscj7HMS8FzNJRKHq4XG81FoMY=; b=tX2RT9h/Fu0ng+7xacQ6uHXtis
	CeYsg/1aFuWwDbmriKnwSZ4+n89m+FkROMCDlkkCjYDGyFKcArKTMpgKrIudJ5g0AbzEyB2xPkZpw
	/1wFnM5eCRaxmY3HUTT2w/VdjTnwcC0/ihC415l2kfggz55y+gj0FvV4/L6yxm9KTCl4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175966-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175966: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7287904c8771b77b9504f53623bb477065c19a58
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 17:04:50 +0000

flight 175966 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175966/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-amd64  8 xen-boot    fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-xl-vhd     21 guest-start/debian.repeat fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7287904c8771b77b9504f53623bb477065c19a58
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  103 days
Failing since        173470  2022-10-08 06:21:34 Z  103 days  211 attempts
Testing same since   175966  2023-01-19 03:13:54 Z    0 days    1 attempts

------------------------------------------------------------
3375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 516750 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 18:20:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 18:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481254.746014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIZVm-0004Kn-PS; Thu, 19 Jan 2023 18:19:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481254.746014; Thu, 19 Jan 2023 18:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIZVm-0004Kg-MZ; Thu, 19 Jan 2023 18:19:42 +0000
Received: by outflank-mailman (input) for mailman id 481254;
 Thu, 19 Jan 2023 18:19:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YmS1=5Q=citrix.com=prvs=376161a5e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIZVl-0004Ka-Fr
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 18:19:41 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d4a74c3a-9825-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 19:19:38 +0100 (CET)
Received: from mail-bn1nam02lp2044.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.44])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Jan 2023 13:19:30 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5229.namprd03.prod.outlook.com (2603:10b6:208:1e9::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.25; Thu, 19 Jan
 2023 18:19:26 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Thu, 19 Jan 2023
 18:19:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4a74c3a-9825-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674152378;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=fQiIbHhHYgHKroBtPkv1gPMXRwxgEH8iH06KA3iq9zg=;
  b=PrlPPBcINunNBVGUMOSUo8OneLYo7Kik4mRT8gkurNLrot2zWe8KRg0P
   werkj6xlVuV1r+gt6vhYmS31ByW2Vhlu0yih33ymAbUvCnttv/8047Bgn
   3d1GQ7Mcn7ZUsMldhN8sijXXEBLV0T18A20mY7yjGwV/HQGiE7PP0RB/j
   U=;
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WEYbimqIXA2zceufJCtM0u5YMvFUCCHgyYbGdjv+LfTdRwv0Fo7v+XI7pTgL8LGvrFvGu5SbcB/qYy6hBtUjF6/uksWhVXBIBJLdknCVSRK+K3GTpMLqoDGIXpq+ocYdjsHoPn/LNmxlj1GSdDWb+O8a5Kqib2URbGYlVfxO1Wq2N42lleETGxB21ggdZhzfW8M6TGCI76lWKyrwNi8qufjrUJdiA3A5NiEiZxrMHgSWS/gdrU8zbAUH4kpUEM9lqws9FsOfC8kgrx9ZY/A8zjoXvRps898kylkAuqSXWD7LT1srqw9r6+eHYncjCPou0Qgj3SckDf2VYmX6zyRQLQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fQiIbHhHYgHKroBtPkv1gPMXRwxgEH8iH06KA3iq9zg=;
 b=jCk7+LwoufbJaj2bpehfX5X9ZdYvmX7lDuOYOOFnrGwfasZFyvk+hti7x4SI1JA/DrgoYS459vFGdf/9vN1A05L51sggPigMX7kt/2QK/wCEs8BEwOfg+401kTL3i0mkmyVWOcFPubfGgGEQYD4/acYL+BxLBhD31iZq9/gvKX77ybjau4TMTPocMNsfyIDYMCgtLGePI2lpw0EjsxadTQ+GSjdB8uZhAoOcLTtbKssf2gQ2BOBCBmzXfcIPuAda8oFA4C2OsQDaHr1kKKVRyAsFQv4iO4JtuymNqeGqY/X1th4sT0xqudc+rZ2bmIoBhfrBomd67+POee4maZDFsA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fQiIbHhHYgHKroBtPkv1gPMXRwxgEH8iH06KA3iq9zg=;
 b=P6g7QxOUCfn2DamuHxijiCTz2NUDkr/D6Msc5oak8HbgWcG9nD5L9WB1ItWvLFCUQAyyQK8Cqt2TXwogfYE1iTJx7OZX1m9I53Paqq5ywkhe14VpHk+BmRVHF9uimVG2fOTTPucir+IsQDbOd90hWnwGUvkLpaC2e+M2FISvZmw=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 1/2] x86/shadow: fix PAE check for top-level table
 unshadowing
Thread-Topic: [PATCH 1/2] x86/shadow: fix PAE check for top-level table
 unshadowing
Thread-Index: AQHZLAik+GLprcTCdkai++VQyfbToq6mDQ2A
Date: Thu, 19 Jan 2023 18:19:26 +0000
Message-ID: <266eb5cf-b2ad-051b-0474-96bf6e2ca7e0@citrix.com>
References: <9e79449a-fd12-f497-695b-79a50cc913c7@suse.com>
 <bf03f851-2fb4-3de1-7d72-b0ac15b2d488@suse.com>
In-Reply-To: <bf03f851-2fb4-3de1-7d72-b0ac15b2d488@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MN2PR03MB5229:EE_
x-ms-office365-filtering-correlation-id: 91efdf0d-3893-4419-8a2a-08dafa49b2a8
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <BB4DCB557B7FA342A232FEE6D8072C00@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 91efdf0d-3893-4419-8a2a-08dafa49b2a8
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jan 2023 18:19:26.2824
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 51CsrXzCFRyolBHOh1UDuaa6INRxKRr4yNkbomfrCttKJfbjFLV8BzWaGla9FXbNb8GtcLZpe8e6lFg98s/Tk0L8DQylDd0qLGsM6Fsvdo8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5229

On 19/01/2023 1:19 pm, Jan Beulich wrote:
> Clearly within the for_each_vcpu() the vCPU of this loop is meant, not
> the (loop invariant) one the fault occurred on.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Wow, that's been broken for the entire lifetime of the pagetable_dying op,
3d5e6a3ff38 from 2010, but it still deserves a fixes tag.

Preferably with that, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> Quite likely this mistake would have been avoided if the function scope
> variable was named "curr", leaving "v" available for purposes like the
> one here. Doing the rename now would, however, be quite a bit of code
> churn.

Perhaps, but that pattern was far less prevalent back then, and the real
cause of this bug is sh_page_fault() being far too big and sprawling.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 18:40:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 18:40:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481259.746024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIZpO-0006lH-Ct; Thu, 19 Jan 2023 18:39:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481259.746024; Thu, 19 Jan 2023 18:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIZpO-0006lA-A6; Thu, 19 Jan 2023 18:39:58 +0000
Received: by outflank-mailman (input) for mailman id 481259;
 Thu, 19 Jan 2023 18:39:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIZpN-0006l0-0v; Thu, 19 Jan 2023 18:39:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIZpM-0002fO-TV; Thu, 19 Jan 2023 18:39:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIZpM-0007GS-Eb; Thu, 19 Jan 2023 18:39:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIZpM-0007Q3-C5; Thu, 19 Jan 2023 18:39:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SojtoeSVhon44Uwdmzd8tmZzYBjXDqtjnMEITqJAOIg=; b=O6rTYt+aL7Q3le2OCWR8/in6sO
	jkZoWtTXc9xzb7DlGOhnGOQrh+C2oT15OwAvQEntDdLjglyQcZxiwMcMiAkjRRuEiQaUM5PGTNykn
	A+4ghpmwbgUM3PkVmj3QpKvNgLbt+fFNDg1HznGhs+D+s5zIUCZsaQux2IYBrlGprqnA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175967-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 175967: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=16bfbc8cd2b4a039d3e846dceca807a9cc15849b
X-Osstest-Versions-That:
    libvirt=12a3bee3899cdba8b637a7286f24ade1214b6420
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 18:39:56 +0000

flight 175967 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175967/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175736
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175736
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175736
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              16bfbc8cd2b4a039d3e846dceca807a9cc15849b
baseline version:
 libvirt              12a3bee3899cdba8b637a7286f24ade1214b6420

Last test of basis   175736  2023-01-12 04:18:57 Z    7 days
Failing since        175917  2023-01-16 04:18:43 Z    3 days    2 attempts
Testing same since   175967  2023-01-19 04:22:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Anton Fadeev <anton.fadeev@red-soft.ru>
  antonios-f <anton.fadeev@red-soft.ru>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Han Han <hhan@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Yuri Chornoivan <yurchor@ukr.net>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   12a3bee389..16bfbc8cd2  16bfbc8cd2b4a039d3e846dceca807a9cc15849b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 19:02:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 19:02:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481266.746035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaAp-0001bg-8q; Thu, 19 Jan 2023 19:02:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481266.746035; Thu, 19 Jan 2023 19:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaAp-0001bZ-45; Thu, 19 Jan 2023 19:02:07 +0000
Received: by outflank-mailman (input) for mailman id 481266;
 Thu, 19 Jan 2023 19:02:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIaAn-0001bP-Ov; Thu, 19 Jan 2023 19:02:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIaAn-0003KF-Ky; Thu, 19 Jan 2023 19:02:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIaAn-0008JM-8M; Thu, 19 Jan 2023 19:02:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIaAn-0004y2-7i; Thu, 19 Jan 2023 19:02:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DRyGvhD3UnIJdhC9j2lH8+gomMG3LgVRnGZvZFDaHiM=; b=sM7fiDTqptiIGfgTIct0/PkJUw
	WcEmI0sFJYKspoQ2n5ycIxqrwDdGi/eXLUilRhrPV/1S5Kw4pfWAfYwRFmql+DlNEC5tMxnYUYmQb
	54d8pZwuXJO/Lb4fMp2oAWvxJofN5s0DAXrwLkg6wOiIyLghX3PiMtHvywF5+27VuVa4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175965-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175965: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-vhd:xen-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 19:02:05 +0000

flight 175965 xen-unstable real [real]
flight 175984 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175965/
http://logs.test-lab.xenproject.org/osstest/logs/175984/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd        7 xen-install              fail REGR. vs. 175734

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail pass in 175984-retest
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install   fail pass in 175984-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175734
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175734
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175734
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175734
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175734
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175734
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175734
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175734
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175734
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175734
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175734
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175734
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    7 days
Failing since        175739  2023-01-12 09:38:44 Z    7 days   17 attempts
Testing same since   175965  2023-01-19 02:53:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Tue Jan 3 11:25:19 2023 +0100

    xen/arm: Add 0x prefix when printing memory size in construct_domU
    
    Printing memory size in hex without 0x prefix can be misleading, so
    add it. Also, take the opportunity to adhere to 80 chars line length
    limit by moving the printk arguments to the next line.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 229ebd517b9df0e2d2f9e3ea50b57ca716334826
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:07:42 2023 +0000

    xen/arm: linker: The identitymap check should cover the whole .text.header
    
    At the moment, we are only checking that only some part of .text.header
    is part of the identity mapping. However, this doesn't take into account
    the literal pool which will be located at the end of the section.
    
    While we could try to avoid using a literal pool, in the near future we
    will also want to use an identity mapping for switch_ttbr().
    
    Not everything in .text.header requires to be part of the identity
    mapping. But it is below a page size (i.e. 4KB) so take a shortcut and
    check that .text.header is smaller than a page size.
    
    With that _end_boot can be removed as it is now unused. Take the
    opportunity to avoid assuming that a page size is always 4KB in the
    error message and comment.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 22a9981ba2443bd569bad6b772fb6e7e64f0d714
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Jan 12 22:06:42 2023 +0000

    xen/arm: linker: Indent correctly _stext
    
    _stext is indented by one more space compared to the surrounding lines.
    This doesn't seem warranted, so delete the extra space.
    
    Signed-off: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3edca52ce736297d7fcf293860cd94ef62638052
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 10:58:31 2023 +0000

    x86/vmx: Support for CPUs without model-specific LBR
    
    Ice Lake (server at least) has both architectural LBR and model-specific LBR.
    Sapphire Rapids does not have model-specific LBR at all.  I.e. On SPR and
    later, model_specific_lbr will always be NULL, so we must make changes to
    avoid reliably hitting the domain_crash().
    
    The Arch LBR spec states that CPUs without model-specific LBR implement
    MSR_DBG_CTL.LBR by discarding writes and always returning 0.
    
    Do this for any CPU for which we lack model-specific LBR information.
    
    Adjust the now-stale comment, now that the Arch LBR spec has created a way to
    signal "no model specific LBR" to guests.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e94af0d58f86c3a914b9cbbf4d9ed3d43b974771
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 9 11:42:22 2023 +0000

    x86/vmx: Calculate model-specific LBRs once at start of day
    
    There is no point repeating this calculation at runtime, especially as it is
    in the fallback path of the WRMSR/RDMSR handlers.
    
    Move the infrastructure higher in vmx.c to avoid forward declarations,
    renaming last_branch_msr_get() to get_model_specific_lbr() to highlight that
    these are model-specific only.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit e6ee01ad24b6a1c3b922579964deebb119a90a48
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 3 15:08:56 2023 +0000

    xen/version: Drop compat/kernel.c
    
    kernel.c is mostly in an #ifndef COMPAT guard, because compat/kernel.c
    re-includes kernel.c to recompile xen_version() in a compat form.
    
    However, the xen_version hypercall is almost guest-ABI-agnostic; only
    XENVER_platform_parameters has a compat split.  Handle this locally, and do
    away with the re-include entirely.  Also drop the CHECK_TYPE()'s between types
    that are simply char-arrays in their native and compat form.
    
    In particular, this removed the final instances of obfuscation via the DO()
    macro.
    
    No functional change.  Also saves 2k of .text in the x86 build.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 73f0696dc1d31a987563184ce1d01cbf5d12d6ab
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Dec 20 15:51:07 2022 +0000

    public/version: Change xen_feature_info to have a fixed size
    
    This is technically an ABI change, but Xen doesn't operate in any environment
    where "unsigned int" is different to uint32_t, so switch to the explicit form.
    This avoids the need to derive (identical) compat logic for handling the
    subop.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 6bec713f871f21c6254a5783c1e39867ea828256
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 16:17:54 2023 +0100

    include/compat: produce stubs for headers not otherwise generated
    
    Public headers can include other public headers. Such interdependencies
    are retained in their compat counterparts. Since some compat headers are
    generated only in certain configurations, the referenced headers still
    need to exist. The lack thereof was observed with hvm/hvm_op.h needing
    trace.h, where generation of the latter depends on TRACEBUFFER=y. Make
    empty stubs in such cases (as generating the extra headers is relatively
    slow and hence better to avoid). Changes to .config and incrementally
    (re-)building is covered by the respective .*.cmd then no longer
    matching the command to be used, resulting in the necessary re-creation
    of the (possibly stub) header.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 661489874e87c0f6e21ac298b039aab9379f6ee0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:14:50 2023 +0100

    x86/shadow: call sh_detach_old_tables() directly
    
    There's nothing really mode specific in this function anymore (the
    varying number of valid entries in v->arch.paging.shadow.shadow_table[]
    is dealt with fine by the zero check, and we have other similar cases of
    iterating through the full array in common.c), and hence there's neither
    a need to have multiple instances of it, nor does it need calling
    through a function pointer.
    
    While moving the function drop a non-conforming and not very useful
    (anymore) comment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d2123363788cc60b1d33123d93d27a49e5a37a43
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:12:35 2023 +0100

    x86/shadow: reduce effort of hash calculation
    
    The "n" input is a GFN/MFN value and hence bounded by the physical
    address bits in use on a system. The hash quality won't improve by also
    including the upper always-zero bits in the calculation. To keep things
    as compile-time-constant as they were before, use PADDR_BITS (not
    paddr_bits) for loop bounding. This reduces loop iterations from 8 to 5.
    
    While there also drop the unnecessary conversion to an array of unsigned
    char, moving the value off the stack altogether (at least with
    optimization enabled).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2497cb428250de36df088b36a47f89a10c115b94
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jan 12 11:11:47 2023 +0100

    x86/shadow: drop a few uses of mfn_valid()
    
    v->arch.paging.shadow.shadow_table[], v->arch.paging.shadow.oos[],
    v->arch.paging.shadow.oos_{snapshot[],fixup[].smfn[]} as well as the
    hash table are all only ever written with valid MFNs or INVALID_MFN.
    Avoid the somewhat expensive mfn_valid() when checking MFNs coming from
    these arrays.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit c47e5d94d0bd0c283bf7f1b31e42314065d82be4
Author: Xenia Ragiadakou <burzalodowa@gmail.com>
Date:   Thu Jan 12 11:09:16 2023 +0100

    x86/iommu: introduce AMD-Vi and Intel VT-d Kconfig options
    
    Introduce two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, to allow code
    specific to each IOMMU technology to be separated and, when not required,
    stripped. AMD_IOMMU will be used to enable IOMMU support for platforms that
    implement the AMD I/O Virtualization Technology. INTEL_IOMMU will be used to
    enable IOMMU support for platforms that implement the Intel Virtualization
    Technology for Directed I/O.
    
    Since, at this point, disabling either of them would cause Xen to fail to compile,
    the options are not visible to the user and are enabled by default on X86.
    
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 83d9679db057d5736c7b5a56db06bb6bb66c3914
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Jan 10 17:17:56 2023 +0200

    xen/riscv: introduce stack stuff
    
    The patch introduces and sets up a stack in order to enter the C environment.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 19:04:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 19:04:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481275.746044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaCv-0002F8-Op; Thu, 19 Jan 2023 19:04:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481275.746044; Thu, 19 Jan 2023 19:04:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaCv-0002F1-LG; Thu, 19 Jan 2023 19:04:17 +0000
Received: by outflank-mailman (input) for mailman id 481275;
 Thu, 19 Jan 2023 19:04:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tmj6=5Q=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pIaCu-0002Ev-3p
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 19:04:16 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e529f3a-982c-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 20:04:11 +0100 (CET)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 2CAD05C0164;
 Thu, 19 Jan 2023 14:04:10 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Thu, 19 Jan 2023 14:04:10 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 19 Jan 2023 14:04:09 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e529f3a-982c-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1674155050; x=1674241450; bh=DWSmhiY53c1fCxHo8+QDF152rmPtSJZJfbo
	sjDMfwaA=; b=mIINDxvzEWW1TOvsfQmK1217TTk1+rcauO+ev2im7SruFDL9Z+x
	I55qi7LfuGXU580EJe6mvU4RT7u0J3pg94ZMtAuJIISKsDjBgGYHm/1qUR0H4mpD
	WM4Jwzgadwn5ZNCtd7cLSP4GryJwgDS5tcqiZ2WuF0VFHlEmP0H12XXwgw/bA2VM
	D+ORqlOnF1PXQlBxQpvla9356m5aUhMW3cuir7ViyU9szXwO4+XtoqBT5aEiz5Z7
	ql0MnXUoqLcmdaq7T/SdfozkryrUH03Gdwqm8AVPB3xW7HVRnQ1y36b9h4dUJvWi
	8I5N/bGK0y223R25ysBur+t/EufqWCdd5Kw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1674155050; x=1674241450; bh=DWSmhiY53c1fC
	xHo8+QDF152rmPtSJZJfbosjDMfwaA=; b=UoYj+qtsnWGUKKUdT4zCotFPuyjZs
	ve1NZ2BaZwMeJ+xH9VClymTkxYMPmU8X6ky3uNjs14U7UyKqCuY0gNGW2nslmRFU
	JUOc0SzGRsxNmQZaJICDHvnanyJeSIJyq4qJkHvlVH3rsKnzOV1ip8oWbCSaK/G8
	ZY9p2YKLdoisEOPKoIY3YBttNmwp4onjcG0Dc/YQgOE8g5iwuzNH4KQRhZ8Ndq46
	AQQxtVI952u5yMcNOzkWIpTc5U6zwE7GrpHabea+8R2Jb14xo/lgRPS8yRNckDYK
	xqsdEcZAaBe7PJvXkRN6eOLjM/CMAUiJKFHP4rsN0Aq6KfFvwaO0XFURQ==
X-ME-Sender: <xms:KZTJY5kxRBbHsi58SSGXG0thzHypBrE0ywzVyRgSh8rhyiH1g3VK-Q>
    <xme:KZTJY02wmm7OwC6JqVT4x2pEI4Ek4GZ_RtIfkFVT9Ul4nJWPrkyIHH2DV1qStLBgB
    vFQ4VZUBhJmX9k>
X-ME-Received: <xmr:KZTJY_qWnlZnCAO0qq42vjpQRJ6z44snfew9zof9wJiNHQBaEyRhIAAcj6RftpriZbTjj9YNVRJD>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddutddguddvudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejffejgffgueegudevvdejkefg
    hefghffhffejteekleeufeffteffhfdtudehteenucevlhhushhtvghrufhiiigvpedtne
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:KZTJY5lQhpx0TLu1FIedmKRxrwKBKNsirPTu2rgwc81AdXdowggaqg>
    <xmx:KZTJY32L6Yg4VlAjM-EBSUH6sB3f_BMV-kTrynK3dmsKf0fx3Dgh9Q>
    <xmx:KZTJY4tO5ZxrmnCj9Or9SivEJ5tQP29NyVj_9NIaj9B3vSUHZDyRhg>
    <xmx:KpTJYxovdbTDQlMID3b5Xe01VE9uVnvHDQo-ibZJhp2YeqNeyiGVzQ>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Ard Biesheuvel <ardb@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v3 0/5] efi: Support ESRT under Xen
Date: Thu, 19 Jan 2023 14:03:55 -0500
Message-Id: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20221003112625.972646-1-ardb@kernel.org>
References: <20221003112625.972646-1-ardb@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch series fixes handling of EFI tables when running under Xen.
These fixes allow the ESRT to be loaded when running paravirtualized in
dom0, making the use of EFI capsule updates possible.

Demi Marie Obenour (5):
  efi: memmap: Disregard bogus entries instead of returning them
  efi: xen: Implement memory descriptor lookup based on hypercall
  efi: Apply allowlist to EFI configuration tables when running under
    Xen
  efi: Actually enable the ESRT under Xen
  efi: Warn if trying to reserve memory under Xen

 drivers/firmware/efi/efi.c  | 22 ++++++++++++-
 drivers/firmware/efi/esrt.c | 15 +++------
 drivers/xen/efi.c           | 61 +++++++++++++++++++++++++++++++++++++
 include/linux/efi.h         |  3 ++
 4 files changed, 90 insertions(+), 11 deletions(-)

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 19:04:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 19:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481276.746054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaDB-0002bd-0t; Thu, 19 Jan 2023 19:04:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481276.746054; Thu, 19 Jan 2023 19:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaDA-0002bV-TE; Thu, 19 Jan 2023 19:04:32 +0000
Received: by outflank-mailman (input) for mailman id 481276;
 Thu, 19 Jan 2023 19:04:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tmj6=5Q=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pIaD9-0002YR-Qf
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 19:04:31 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1943d3c7-982c-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 20:04:29 +0100 (CET)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id A24985C0127;
 Thu, 19 Jan 2023 14:04:28 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute5.internal (MEProxy); Thu, 19 Jan 2023 14:04:28 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 19 Jan 2023 14:04:27 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1943d3c7-982c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm3; t=1674155068; x=1674241468; bh=iAve/Gqvno
	UrklJx76JsPi9tOXPhkOndFsIm/wBRVc0=; b=B9VLgkfAr/7249QjOT39QtxfRv
	APPspvfWvL6fuDe6zW3+pifMYFXwfEI+IGcd6gUisGfx3fFR6zwpKDMaFcGsDYjd
	E313x+xuAnRGczFIKTKMOChtctBXqmc7KzKo3eKA+re/WmlYCYEzxZMxfdJMugF7
	L6E5Oue7bH6z436+4IafefMY9ZYIdXrm0a0pjGxxf0FPEakxFQctWRIES4eWxWu2
	Yf1bT/5jTCGixgfl+wcrkXCnFrY+jh+29T2rd3PMNmGcXJAPEtDpj/vYpx7p38Kx
	Av4Co/J/ZoABgjw6cChO5T5eVpQcHljE6yFPj5cg52HtU5FY7DpLpMkBhwug==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=1674155068; x=
	1674241468; bh=iAve/GqvnoUrklJx76JsPi9tOXPhkOndFsIm/wBRVc0=; b=Z
	udXHlylSgOZXsETO57hjNC/qoXFauOzCK1dNKjA2XSqYPrBllsHZT6ntMGWz2V8u
	BelrZMZiHOMJQ0s3hJEJLbFbg+ykRhWquY4QvBAisG5ELnH/6GAlzN7+u2BVTbjR
	FbQLjKCRdbVKbpASVWSl/TuT2wrbynC0GVo/e2T6+3ochP6KfXKvdw834ANEN9Xq
	wCuAdFe0k7K1XJVZSm6u9yL9wmGhdEj6s7F4PIkR3tc3qMjyZFxzQckvE2JnmyCM
	P7CYrNEsfzLupOPL+ruGD75A8cP8an4BLDTbCfW7i70MXTQ03Q7sxs/OCKAcrkPE
	yMSHDZ9+TMNlllnRUou0g==
X-ME-Sender: <xms:PJTJY_zuzXYRodxycES-1yHCoQVY_E-n56Ft67-UXy6yk7-4piKsaA>
    <xme:PJTJY3RFq7qLgmAgHECoOEaGwjpJBfDw_SbU9rNm24aRQ6dZ69wOZYuKP_xdQZkaE
    rrL9IpbHdjQruQ>
X-ME-Received: <xmr:PJTJY5Uio_FtDb9GTvtKgmylTuWFjOJiaCwVVMgFuKmUM1usPVb3MqzeMdVuYzlbrwsLhRLqk_uT>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddutddguddvudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepffgv
    mhhiucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhih
    hnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepledukeelleejkeevkeefgefh
    ffegvdeigeelieegjefffeeiveeivdejgeevteeinecuvehluhhsthgvrhfuihiivgeptd
    enucfrrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhn
    ghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:PJTJY5gDx88SFiFrhr4xC9pF-OoySYrpDeD9-o3p2ucJHNZP1FjgrA>
    <xmx:PJTJYxDcWqEH881XZ1H_C2JAhcqVC1tNqsaXSkFyN3L019FCxApJ1A>
    <xmx:PJTJYyKvDJWKAQEoQYNWHJOx-6BqdbdWtwsDzUG8FiRHYpwbEaJ8bQ>
    <xmx:PJTJYz0LxTt6btYPWmix9P1j-8Yckpu9atABFe-bV4wfbMe21L12wA>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Ard Biesheuvel <ardb@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v3 4/5] efi: Actually enable the ESRT under Xen
Date: Thu, 19 Jan 2023 14:03:59 -0500
Message-Id: <26938d59bb398bea7e8f43d03a9c75189fa3b4cc.1669264419.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
References: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The ESRT can be parsed if EFI_PARAVIRT is enabled, even if EFI_MEMMAP is
not.  Also allow the ESRT to be in reclaimable memory, as that is where
future Xen versions will put it.

Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/firmware/efi/esrt.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/firmware/efi/esrt.c b/drivers/firmware/efi/esrt.c
index fb9fb70e1004132eff50c712c6fca05f7aeb1d57..87729c365be1a804bb84e0b1ab874042848327b4 100644
--- a/drivers/firmware/efi/esrt.c
+++ b/drivers/firmware/efi/esrt.c
@@ -247,7 +247,7 @@ void __init efi_esrt_init(void)
 	int rc;
 	phys_addr_t end;
 
-	if (!efi_enabled(EFI_MEMMAP))
+	if (!efi_enabled(EFI_MEMMAP) && !efi_enabled(EFI_PARAVIRT))
 		return;
 
 	pr_debug("esrt-init: loading.\n");
@@ -258,7 +258,9 @@ void __init efi_esrt_init(void)
 	if (rc < 0 ||
 	    (!(md.attribute & EFI_MEMORY_RUNTIME) &&
 	     md.type != EFI_BOOT_SERVICES_DATA &&
-	     md.type != EFI_RUNTIME_SERVICES_DATA)) {
+	     md.type != EFI_RUNTIME_SERVICES_DATA &&
+	     md.type != EFI_ACPI_RECLAIM_MEMORY &&
+	     md.type != EFI_ACPI_MEMORY_NVS)) {
 		pr_warn("ESRT header is not in the memory map.\n");
 		return;
 	}
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 19:04:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 19:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481277.746060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaDB-0002eG-AB; Thu, 19 Jan 2023 19:04:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481277.746060; Thu, 19 Jan 2023 19:04:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaDB-0002dP-3c; Thu, 19 Jan 2023 19:04:33 +0000
Received: by outflank-mailman (input) for mailman id 481277;
 Thu, 19 Jan 2023 19:04:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tmj6=5Q=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pIaDA-0002Ev-9A
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 19:04:32 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 19ea16b0-982c-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 20:04:30 +0100 (CET)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id AEF315C011B;
 Thu, 19 Jan 2023 14:04:29 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute5.internal (MEProxy); Thu, 19 Jan 2023 14:04:29 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 19 Jan 2023 14:04:28 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19ea16b0-982c-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm3; t=1674155069; x=1674241469; bh=xD2qxTwi/N
	WHKFJGn9LwYuXdkJ4R6DAu/R1nUg2IBG0=; b=ecpOATQXTQt1OzoFqAbAKuwXQ0
	P3/N3pE/0VHkO79vVJS3N2J/Rd4OQ2EYsI/8JxXuur6kDGkZ6YVocJ3ySEUWk9sC
	K3u1L8gMGZBo+8OxBCdShJo0RDA5nrICMOoSEOaa0IxOduZBwf0q3IapL+OfngmB
	qq68kfN1SIuWl2sPQqLNmvwmLM5M8tKqLAQw1dPgLuhAXER+We9ER2vT/0rFHt6v
	VPvkPbqobs1C0lFmmpD5kzpb7eKPmgN/VbMHV3WrKL6sYlN6EnWMOoKCcGUnJf3D
	xIgcJQBzGbgxqP2O6HsGOPlUUUUy8A4uRmxNbK/CJUhgFUalagYHzy6T171A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=1674155069; x=
	1674241469; bh=xD2qxTwi/NWHKFJGn9LwYuXdkJ4R6DAu/R1nUg2IBG0=; b=o
	68ZJd58KoLnxfMYqNc6GW6TyohkNX2AMV+/mm2cYVWNaKKPdY9GDHZ0q/dF/ON7Q
	zpIs0v6T4J5JkJ5uH1uv5oqEaQw+leGy9yKPN/xZH2zFKj3Aq/l/+rgq3oi6pZDE
	sFI3UGAEbys/g6nPMWkE85PsnD0EzDp0coYn/p4D98AyZjySp4PfbCtxDvdHLEQF
	47dfxn+ZiV9fqBKYXfpZz/bD310keEUiL0R8An8YPF8pw6MzxLwJQmXx5lQw4TGK
	PcLN/eD1pFZ6aO724/Kn+u1OjpY+U+8t+ZWykaElOSiHzxTywZuxdSjQzRwrwMvH
	HAeU5i7XO/EgLLO+/x6aA==
X-ME-Sender: <xms:PZTJYywVRv9e_jPD1JlUeYUEogg99S1ZlWfrcCHtckcBlCRT1Qapcw>
    <xme:PZTJY-SnYpx5gtA7aZaJVmkSuuvP9a2WIwuNhoIqKDkhSUKmytjl5hi9ctSh1TdHW
    1k8aYXGFGEmtOs>
X-ME-Received: <xmr:PZTJY0WsaK4q1u3vPbOZwE7_m_7s213-yq67z4tJYWD6dtAxXe3j1jNDM2Jngv0LbAOjybKeNoPy>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddutddguddvudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepffgv
    mhhiucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhih
    hnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepledukeelleejkeevkeefgefh
    ffegvdeigeelieegjefffeeiveeivdejgeevteeinecuvehluhhsthgvrhfuihiivgeptd
    enucfrrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhn
    ghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:PZTJY4jD1jeiU98W-4CXzhlO7tUA2SEQitukbTA72xPHi3FJXH-l6Q>
    <xmx:PZTJY0A1wSfRmLo0RYo23Hb7239yMW_5XQNIVfSBrcsjr2ADN5C9HA>
    <xmx:PZTJY5LIAZuI10qYrPS-w7Cmn8v-_uqg30qoUOAumWQFPHMsjWsuGQ>
    <xmx:PZTJY20dUyD8JdS0XJ9HZ7AvUxJF4-HZfkwMQqw848OERA03DOX8Bg>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Ard Biesheuvel <ardb@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v3 5/5] efi: Warn if trying to reserve memory under Xen
Date: Thu, 19 Jan 2023 14:04:00 -0500
Message-Id: <e51d5abde5c5dfd122cb96f71d0dd8acc0cd358d.1669264419.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
References: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Doing so cannot work and should never happen.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/firmware/efi/efi.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index b49fcde06ca0ff5347047666f38b9309bd9cfe26..902f323499d8acc4f2b846a78993eb201448acad 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -519,6 +519,10 @@ void __init __weak efi_arch_mem_reserve(phys_addr_t addr, u64 size) {}
  */
 void __init efi_mem_reserve(phys_addr_t addr, u64 size)
 {
+	/* efi_mem_reserve() does not work under Xen */
+	if (WARN_ON_ONCE(efi_enabled(EFI_PARAVIRT)))
+		return;
+
 	if (!memblock_is_region_reserved(addr, size))
 		memblock_reserve(addr, size);
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 19:04:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 19:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481278.746064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaDB-0002kh-KP; Thu, 19 Jan 2023 19:04:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481278.746064; Thu, 19 Jan 2023 19:04:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaDB-0002hr-Cn; Thu, 19 Jan 2023 19:04:33 +0000
Received: by outflank-mailman (input) for mailman id 481278;
 Thu, 19 Jan 2023 19:04:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tmj6=5Q=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pIaDA-0002YR-Jo
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 19:04:32 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 18b41409-982c-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 20:04:28 +0100 (CET)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id B0BC15C0113;
 Thu, 19 Jan 2023 14:04:27 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute1.internal (MEProxy); Thu, 19 Jan 2023 14:04:27 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 19 Jan 2023 14:04:26 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18b41409-982c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm3; t=1674155067; x=1674241467; bh=SFELFhRBDG
	wxzbzozrOmTJ4SoxpryGpr1CzrXd0rbww=; b=dyikPYdTJzsJDOgvNb4fSLgmtt
	qnJsfvjF/c45Rr4UTsnxxyzyK69GemoA40vz4QqytzeVPkG9oDYl2du1BGRjmFpc
	XPEJiivlyIq4zuvnu+kLj+45UtBx6LDaDnk+o7jRjzsmZ2YPp7Eowyxyf1qdLoHC
	5hEQH164NUO+Yl0ANLWO52jmpC1ohoUtSTH+BSn05+CIhS793scPMOcbzRjS3OS6
	izj3OSXzEP0uuFdUQzi5fo4c567VcoRvDIb83wjS8Vu9S6drGXgfnhqEECBzEygm
	GuUPiWn9fB1spgReRKh+X2fTI+9+oEXdzOPREiKxWYQsADbXepd36tdo2T1Q==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=1674155067; x=
	1674241467; bh=SFELFhRBDGwxzbzozrOmTJ4SoxpryGpr1CzrXd0rbww=; b=a
	l6RegSqim7BFs3FrE6J+49XSBvpNwX107t4zIBAZYjFzk9u+s67kzoA4Bauuo7Ge
	IIEK16MmU7GO29hW92WRmVMKMhxJxXR/GN5dUo/lOkKUsqT9fQ0OoUOsOiJPSGpc
	bJLKg3oogEWdQutNk7u7FBaave353Dxka+9g/rxf23+un0kyLOTFk4qqh61iBh1A
	hanHl0u1JpY+Pkx6rdE4JW/qPuKX2ws2LfEaquCn95HO0K0W1papQ7uUXMRrkHGP
	Lyc8TmPo9RqiZumXOldUgbDvwsfqjzcIlKhNHVO4RGTk1+ns/Z7npumPRZCbuXPz
	8ZdUMHlYEJhttiDx7mFjw==
X-ME-Sender: <xms:O5TJY1jZ9piu1WNItg0aow2pXfm6utsa6VIHtnhx1YTXflwiqnzVWg>
    <xme:O5TJY6C2dd1SA5EpGory50TmMuhcX7xqz5EzJiMI3bfRMoOZIphkx50HTSojsM09S
    Lh1IwCbZi9vvW0>
X-ME-Received: <xmr:O5TJY1HaEiBDNV1qHQKgCBA8wGc2yUlwS9iTLarOCcvgbj3KMjHbyvMoiSxc04uezUr6pKMUgFWZ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddutddguddvudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepffgv
    mhhiucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhih
    hnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepledukeelleejkeevkeefgefh
    ffegvdeigeelieegjefffeeiveeivdejgeevteeinecuvehluhhsthgvrhfuihiivgeptd
    enucfrrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhn
    ghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:O5TJY6Qmu13I_ZwhaTSyhuoejvaSujw2-RuKgJZPR0qbsDuOLCvbsg>
    <xmx:O5TJYyyQ2iCppONNYFvhvSQA0Dwg9UM9GxhE2yfnydGW9HKakK44sw>
    <xmx:O5TJYw6HDewiFdSEZxHeB2Vo-52Yy8kD4RoT6GVBR-pFYdkp42tnOg>
    <xmx:O5TJY5lErevuhruCBDzETh-eOtL7NcHj1DKQseLRP2z-2NWCnKS7rw>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Ard Biesheuvel <ardb@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v3 3/5] efi: Apply allowlist to EFI configuration tables when running under Xen
Date: Thu, 19 Jan 2023 14:03:58 -0500
Message-Id: <ae86554a2846cd3732316405f10bce5ea9bf7f3d.1669264419.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
References: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

As it turns out, Xen does not guarantee that EFI boot services data
regions in memory are preserved, which means that EFI configuration
tables pointing into such memory regions may be corrupted before the
dom0 OS has had a chance to inspect them.

This is causing problems for Qubes OS when it attempts to perform system
firmware updates, which requires that the contents of the EFI System
Resource Table are valid when the fwupd userspace program runs.

However, other configuration tables such as the memory attributes table
or the runtime properties table are equally affected, and so we need a
comprehensive workaround that works for any table type.

So when running under Xen, check the EFI memory descriptor covering the
start of the table, and disregard the table if it does not reside in
memory that is preserved by Xen.

Co-developed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/firmware/efi/efi.c |  7 +++++++
 drivers/xen/efi.c          | 25 +++++++++++++++++++++++++
 include/linux/efi.h        |  2 ++
 3 files changed, 34 insertions(+)

diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index bcb848e44e7b1350b10b7c0479c0b38d980fe37d..b49fcde06ca0ff5347047666f38b9309bd9cfe26 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -564,6 +564,13 @@ static __init int match_config_table(const efi_guid_t *guid,
 
 	for (i = 0; efi_guidcmp(table_types[i].guid, NULL_GUID); i++) {
 		if (!efi_guidcmp(*guid, table_types[i].guid)) {
+			if (IS_ENABLED(CONFIG_XEN_EFI) &&
+			    !xen_efi_config_table_is_usable(guid, table)) {
+				if (table_types[i].name[0])
+					pr_cont("(%s=0x%lx) may have been clobbered by Xen ",
+					        table_types[i].name, table);
+				return 1;
+			}
 			*(table_types[i].ptr) = table;
 			if (table_types[i].name[0])
 				pr_cont("%s=0x%lx ",
diff --git a/drivers/xen/efi.c b/drivers/xen/efi.c
index 3c792353b7308f9c2bf0a888eda9f827aa9177f8..fb321cd6415a40e8c4d0ad940611adcabe20ab97 100644
--- a/drivers/xen/efi.c
+++ b/drivers/xen/efi.c
@@ -328,3 +328,28 @@ int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
 
 	return 0;
 }
+
+bool __init xen_efi_config_table_is_usable(const efi_guid_t *guid,
+                                           unsigned long table)
+{
+	efi_memory_desc_t md;
+	int rc;
+
+	if (!efi_enabled(EFI_PARAVIRT))
+		return true;
+
+	rc = efi_mem_desc_lookup(table, &md);
+	if (rc)
+		return false;
+
+	switch (md.type) {
+	case EFI_RUNTIME_SERVICES_CODE:
+	case EFI_RUNTIME_SERVICES_DATA:
+	case EFI_ACPI_RECLAIM_MEMORY:
+	case EFI_ACPI_MEMORY_NVS:
+	case EFI_RESERVED_TYPE:
+		return true;
+	default:
+		return false;
+	}
+}
diff --git a/include/linux/efi.h b/include/linux/efi.h
index b407a302b730a6cc7481afa0f582360e59faf1e0..b210b50c4bdedaafcce6f63d44f57ff8329d1cfd 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -1322,4 +1322,6 @@ struct linux_efi_coco_secret_area {
 /* Header of a populated EFI secret area */
 #define EFI_SECRET_TABLE_HEADER_GUID	EFI_GUID(0x1e74f542, 0x71dd, 0x4d66,  0x96, 0x3e, 0xef, 0x42, 0x87, 0xff, 0x17, 0x3b)
 
+bool xen_efi_config_table_is_usable(const efi_guid_t *guid, unsigned long table);
+
 #endif /* _LINUX_EFI_H */
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 19:04:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 19:04:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481280.746084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaDD-0003PE-S0; Thu, 19 Jan 2023 19:04:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481280.746084; Thu, 19 Jan 2023 19:04:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIaDD-0003Oz-O9; Thu, 19 Jan 2023 19:04:35 +0000
Received: by outflank-mailman (input) for mailman id 481280;
 Thu, 19 Jan 2023 19:04:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tmj6=5Q=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pIaDB-0002YR-Jt
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 19:04:33 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 181c29a1-982c-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 20:04:27 +0100 (CET)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id B288E5C00D8;
 Thu, 19 Jan 2023 14:04:26 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Thu, 19 Jan 2023 14:04:26 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 19 Jan 2023 14:04:25 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 181c29a1-982c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm3; t=1674155066; x=1674241466; bh=JNxIP18FJG
	MNUYBP65b84d6ync2Tzl5LK9xIO36ews4=; b=V9s7TGR9vSl2qHXadvBQ5jANqT
	YsH5gcQlEJ4qUFFblcTOwTDCkjYBGMTnhzWrvyP4wUxzF9P6ReX071u7Zb48fNln
	9F+erC7wwWxgelTnZ45UxgcZXeIT+nRxk0JHttjDPwkpt3l5oqb7nbPi3WYNE/1F
	yFd76iqa2KtykXVk3IhbB08edyNj7SA1wavzMY3Kwckjvzo+dCdCsMYrcRETWzWE
	cSw+wqGejNlAcusrsPK1S1xXsN3HhKiXHa7JGxPmHDyruZJibrhV/bXcPk1vUcRe
	IJcBMsAFnb/w50ABar0rBJv0QFRh4M012F+pACaYSzQi9qNQClxGy7Y0pYFQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=1674155066; x=
	1674241466; bh=JNxIP18FJGMNUYBP65b84d6ync2Tzl5LK9xIO36ews4=; b=J
	yFsovdtn/rPqbJClnCW1NgOy8ebOXt8SCkihjpMPPf9x7G9ZOQk+5xgZw8Ebed32
	nzn2Izk8PB9HN0hf4zwIiqi3L8W2cRDiYfye1ed8QadLqMbq4eIXkFg+GUJu2DCH
	Sy9IDdfOM3FgggnNEIOfoe9xdvEEGhcV2ATpZD4y/L/sWF7PYfOeiVohtYaCXR+H
	Kjm/7xuK3iAe1xGkjKdkqx2O3FkokQw//hA7Q2breO9MP4A8rzxTbopzdY/7GGB2
	B/WRxIk2PnP2Q6kaDkwhGeeyEfXf0cZAxobx/IfCGbQeZcd4ws8JRirIv8xHkMXi
	NWWMcKT0h8WiQfokCsQvQ==
X-ME-Sender: <xms:OpTJYz5DUUpMrz1Dur0MLyDT7uTJawS8WOL6m-vcG-RiIiYAsN-QLg>
    <xme:OpTJY45xIbDMttCiDuEBGEK2fjnPzOaInVP4TWcbUqxjyJO3-hCa-BzQ40ItovfrP
    zJUXnLoqKScpqs>
X-ME-Received: <xmr:OpTJY6dFi9QhhR-44F7nQFulDsjlpbg6vVy7IQNKk9oB4kGJL26iqdEdfLrPPTbwdNOOHIwpG7BX>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddutddguddvudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepffgv
    mhhiucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhih
    hnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepledukeelleejkeevkeefgefh
    ffegvdeigeelieegjefffeeiveeivdejgeevteeinecuvehluhhsthgvrhfuihiivgeptd
    enucfrrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhn
    ghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:OpTJY0IeuOz5X_yZJp5w-lcaawIxfTKO7JmMMJmyeR55YjylT27NFQ>
    <xmx:OpTJY3Lyr5JflduQ8P-HoNIqNk7HEfTG7s9A6i3HPN5wLO3OKCcnGQ>
    <xmx:OpTJY9xCu2ZAZkumpUAKi3ghISqHIYYf3GLWM6DegJlVrTloAbCyzA>
    <xmx:OpTJY__zreQyAQfy1-bsR7NGxJqlrTH4k20OEKYvOGQNYBWgRLtcSA>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Ard Biesheuvel <ardb@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v3 2/5] efi: xen: Implement memory descriptor lookup based on hypercall
Date: Thu, 19 Jan 2023 14:03:57 -0500
Message-Id: <cca4ad8a7d034e207bf1cb1f0571c6491eae9faf.1669264419.git.demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
References: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Xen on x86 boots dom0 in EFI mode but without providing a memory map.
This means that some consistency checks we would like to perform on
configuration tables or other data structures in memory are not
currently possible.  Xen does, however, expose EFI memory descriptor
info via a Xen hypercall, so let's wire that up instead.  It turns out
that the returned information is not identical to what Linux's
efi_mem_desc_lookup would return: the address returned is the address
passed to the hypercall, and the size returned is the number of bytes
remaining in the configuration table.  However, none of the callers of
efi_mem_desc_lookup() currently care about this.  In the future, Xen may
gain a hypercall that returns the actual start address, which can be
used instead.

Co-developed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/firmware/efi/efi.c |  5 ++++-
 drivers/xen/efi.c          | 36 ++++++++++++++++++++++++++++++++++++
 include/linux/efi.h        |  1 +
 3 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index 780caea594e0ffce30abb69bddcccf3bacf25382..bcb848e44e7b1350b10b7c0479c0b38d980fe37d 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -456,7 +456,7 @@ void __init efi_find_mirror(void)
  * and if so, populate the supplied memory descriptor with the appropriate
  * data.
  */
-int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
+int __efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
 {
 	efi_memory_desc_t *md;
 
@@ -490,6 +490,9 @@ int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
 	return -ENOENT;
 }
 
+extern int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
+	__weak __alias(__efi_mem_desc_lookup);
+
 /*
  * Calculate the highest address of an efi memory descriptor.
  */
diff --git a/drivers/xen/efi.c b/drivers/xen/efi.c
index d1ff2186ebb48a7c0981ecb6d4afcbbb25ffcea0..3c792353b7308f9c2bf0a888eda9f827aa9177f8 100644
--- a/drivers/xen/efi.c
+++ b/drivers/xen/efi.c
@@ -26,6 +26,7 @@
 
 #include <xen/interface/xen.h>
 #include <xen/interface/platform.h>
+#include <xen/page.h>
 #include <xen/xen.h>
 #include <xen/xen-ops.h>
 
@@ -292,3 +293,38 @@ void __init xen_efi_runtime_setup(void)
 	efi.get_next_high_mono_count	= xen_efi_get_next_high_mono_count;
 	efi.reset_system		= xen_efi_reset_system;
 }
+
+int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
+{
+	static_assert(XEN_PAGE_SHIFT == EFI_PAGE_SHIFT,
+	              "Mismatch between EFI_PAGE_SHIFT and XEN_PAGE_SHIFT");
+	struct xen_platform_op op;
+	union xenpf_efi_info *info = &op.u.firmware_info.u.efi_info;
+	int rc;
+
+	if (!efi_enabled(EFI_PARAVIRT) || efi_enabled(EFI_MEMMAP))
+		return __efi_mem_desc_lookup(phys_addr, out_md);
+	phys_addr &= ~(u64)(EFI_PAGE_SIZE - 1);
+	op = (struct xen_platform_op) {
+		.cmd = XENPF_firmware_info,
+		.u.firmware_info = {
+			.type = XEN_FW_EFI_INFO,
+			.index = XEN_FW_EFI_MEM_INFO,
+			.u.efi_info.mem.addr = phys_addr,
+			.u.efi_info.mem.size = U64_MAX - phys_addr,
+		},
+	};
+
+	rc = HYPERVISOR_platform_op(&op);
+	if (rc) {
+		pr_warn("Failed to lookup header 0x%llx in Xen memory map: error %d\n",
+		        phys_addr, rc);
+	}
+
+	out_md->phys_addr	= info->mem.addr;
+	out_md->num_pages	= info->mem.size >> EFI_PAGE_SHIFT;
+	out_md->type    	= info->mem.type;
+	out_md->attribute	= info->mem.attr;
+
+	return 0;
+}
diff --git a/include/linux/efi.h b/include/linux/efi.h
index f87b2f5db9f83db6f7488648fe99a8f8fc4fdf04..b407a302b730a6cc7481afa0f582360e59faf1e0 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -724,6 +724,7 @@ extern u64 efi_mem_attribute (unsigned long phys_addr, unsigned long size);
 extern int __init efi_uart_console_only (void);
 extern u64 efi_mem_desc_end(efi_memory_desc_t *md);
 extern int efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md);
+extern int __efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md);
 extern void efi_mem_reserve(phys_addr_t addr, u64 size);
 extern int efi_mem_reserve_persistent(phys_addr_t addr, u64 size);
 extern void efi_initialize_iomem_resources(struct resource *code_resource,
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 19:36:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 19:36:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481312.746099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIahZ-00081H-Fl; Thu, 19 Jan 2023 19:35:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481312.746099; Thu, 19 Jan 2023 19:35:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIahZ-00081A-DB; Thu, 19 Jan 2023 19:35:57 +0000
Received: by outflank-mailman (input) for mailman id 481312;
 Thu, 19 Jan 2023 19:35:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vhZF=5Q=zytor.com=hpa@srs-se1.protection.inumbo.net>)
 id 1pIahY-000812-83
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 19:35:56 +0000
Received: from mail.zytor.com (unknown [2607:7c80:54:3::138])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7af6aad2-9830-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 20:35:52 +0100 (CET)
Received: from [127.0.0.1] ([73.223.250.219]) (authenticated bits=0)
 by mail.zytor.com (8.17.1/8.17.1) with ESMTPSA id 30JJZ95D909956
 (version=TLSv1.3 cipher=TLS_AES_128_GCM_SHA256 bits=128 verify=NO);
 Thu, 19 Jan 2023 11:35:09 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7af6aad2-9830-11ed-b8d1-410ff93cb8f0
DKIM-Filter: OpenDKIM Filter v2.11.0 mail.zytor.com 30JJZ95D909956
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zytor.com;
	s=2023010601; t=1674156911;
	bh=KFNC6n5tjG56AZ6QGCCnA813vxZPvPdVzmseUKYQm/I=;
	h=Date:From:To:CC:Subject:In-Reply-To:References:From;
	b=TfwjDirpObGhbZuqhQo/OjXlYctFSVNIqq7knSM29fRDfZ2tQke+RwtNnj4zfQlUu
	 Bq7qw2iSUSh4dJSDm7i/EaoS584mAtH8JI0HxllkzWt3z0T8MZU4Yg0NjdsZBw+YZK
	 ZfM4pgmJ0Ook31LrNi0G4ZIduvwtcQZqBqp0unnHdEG7AuhbJIo+R1m0AcjiUZ0cRr
	 NKhVSWrU3s9CUZaUwpuO6d8p7hTU5Lu7doD4erJ735/nqwxXHrIi/r4/iUAE4IvqIK
	 0aAr/sWn86woN3PLoEzLyYRrgoztLvaP7svvxfNwZ4ItddpzPXF3KsA96J5mrIBTMy
	 XnFHIg1J9H7Nw==
Date: Thu, 19 Jan 2023 11:35:06 -0800
From: "H. Peter Anvin" <hpa@zytor.com>
To: Peter Zijlstra <peterz@infradead.org>, x86@kernel.org,
        Joan Bruguera <joanbrugueram@gmail.com>
CC: linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
        "Rafael J. Wysocki" <rafael@kernel.org>,
        xen-devel <xen-devel@lists.xenproject.org>,
        Jan Beulich <jbeulich@suse.com>,
        Roger Pau Monne <roger.pau@citrix.com>,
        Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
        Andrew Cooper <Andrew.Cooper3@citrix.com>,
        =?ISO-8859-1?Q?J=F6rg_R=F6del?= <joro@8bytes.org>, jroedel@suse.de,
        kirill.shutemov@linux.intel.com, dave.hansen@intel.com,
        kai.huang@intel.com
Subject: =?US-ASCII?Q?Re=3A_=5BPATCH_v2_1/7=5D_x86/boot=3A_Remove_ve?= =?US-ASCII?Q?rify=5Fcpu=28=29_from_secondary=5Fstartup=5F64=28=29?=
User-Agent: K-9 Mail for Android
In-Reply-To: <Y8e/yKgVZgbqgvAG@hirez.programming.kicks-ass.net>
References: <20230116142533.905102512@infradead.org> <20230116143645.589522290@infradead.org> <Y8e/yKgVZgbqgvAG@hirez.programming.kicks-ass.net>
Message-ID: <5718C98C-C07A-4BD1-9182-7F3A8BDBC605@zytor.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable

On January 18, 2023 1:45:44 AM PST, Peter Zijlstra <peterz@infradead.org> wrote:
>On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
>> The boot trampolines from trampoline_64.S have code flow like:
>> 
>>   16bit BIOS			SEV-ES				64bit EFI
>> 
>>   trampoline_start()		sev_es_trampoline_start()	trampoline_start_64()
>>     verify_cpu()			  |				|
>>   switch_to_protected:    <---------------'				v
>>        |							pa_trampoline_compat()
>>        v								|
>>   startup_32()		<-----------------------------------------------'
>>        |
>>        v
>>   startup_64()
>>        |
>>        v
>>   tr_start() := head_64.S:secondary_startup_64()
>> 
>> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
>> touch the APs), there is already a verify_cpu() invocation.
>
>So supposedly TDX/ACPI-6.4 comes in on trampoline_startup64() for APs --
>can any of the TDX capable folks tell me if we need verify_cpu() on
>these?
>
>Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
>force enable SSE on AMD/K7. Surely none of that is needed for these
>shiny new chips?
>
>I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
>point, but it seems really sad to need that on modern systems.

Sad, perhaps, but really better for orthogonality – fewer special cases.


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 20:12:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 20:12:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481318.746112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIbGu-0003uE-9X; Thu, 19 Jan 2023 20:12:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481318.746112; Thu, 19 Jan 2023 20:12:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIbGu-0003u7-6o; Thu, 19 Jan 2023 20:12:28 +0000
Received: by outflank-mailman (input) for mailman id 481318;
 Thu, 19 Jan 2023 20:12:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIbGt-0003tx-4a; Thu, 19 Jan 2023 20:12:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIbGs-0005BF-Vp; Thu, 19 Jan 2023 20:12:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIbGs-0003yW-J6; Thu, 19 Jan 2023 20:12:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIbGs-0000Cq-Ie; Thu, 19 Jan 2023 20:12:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=K0oxg6HAWekwICXckDqP2UB0yBn3p6CN02JTQsRpCU0=; b=alXjpj4F8Q540JP+woQzvhtHRj
	56hBli5wuvVoDuEL8ONeFxc6Y2hV7ge2gHfAwnTNEvDmh3/d8MtA1RaZ0lK9qzBd40X5V0W/Fjn+J
	FIbdfCnvSKat3ftGY57Wbyqfyt9eYv2r9jMa+qfzaV9Zn51UGx9qbIRYFH7FSSpPgDkA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175968-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 175968: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1349fe3a332ad3d1ece60806225ca7955aba9f56
X-Osstest-Versions-That:
    linux=851c2b5fb7936d54e1147f76f88e2675f9f82b52
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 20:12:26 +0000

flight 175968 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175968/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 175531
 test-armhf-armhf-libvirt-qcow2 13 guest-start                 fail like 175538
 test-armhf-armhf-xl-rtds     14 guest-start                  fail  like 175540
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 175553
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175557
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175560
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 175560
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175560
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175560
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175560
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat    fail  like 175560
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175560
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175560
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175560
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175560
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175560
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175560
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                1349fe3a332ad3d1ece60806225ca7955aba9f56
baseline version:
 linux                851c2b5fb7936d54e1147f76f88e2675f9f82b52

Last test of basis   175560  2023-01-03 17:11:35 Z   16 days
Testing same since   175958  2023-01-18 11:13:39 Z    1 days    2 attempts

------------------------------------------------------------
469 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   851c2b5fb793..1349fe3a332a  1349fe3a332ad3d1ece60806225ca7955aba9f56 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 20:51:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 20:51:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481327.746128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIbsV-00089A-AL; Thu, 19 Jan 2023 20:51:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481327.746128; Thu, 19 Jan 2023 20:51:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIbsV-000893-7P; Thu, 19 Jan 2023 20:51:19 +0000
Received: by outflank-mailman (input) for mailman id 481327;
 Thu, 19 Jan 2023 20:51:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KNz0=5Q=flex--seanjc.bounces.google.com=3Qa3JYwYKCeYaMIVRKOWWOTM.KWUfMV-LMdMTTQaba.fMVXZWRMKb.WZO@srs-se1.protection.inumbo.net>)
 id 1pIbsT-00088x-Oo
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 20:51:17 +0000
Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com
 [2607:f8b0:4864:20::44a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0374f8be-983b-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 21:51:15 +0100 (CET)
Received: by mail-pf1-x44a.google.com with SMTP id
 u3-20020a056a00124300b0056d4ab0c7cbso1417493pfi.7
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 12:51:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0374f8be-983b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=I2tCCje/5pOyO5VnTjmGeAPKEpXwqTjK1CxJlREHoLc=;
        b=ZUxDCKMVQjNq0Ad3LmKcfXxLYPQlie6QymBRU9CgWvPnQmU9YNW3vRcaYzXlDf5zGK
         iBkYKQDM6BTanCHAesuDCeGhnFpa2ZowBqPYTml16CUd2+O15v5wsT6cveJwoqmF8CW8
         h0HfBZNFXtMwqWwyYMJSUdqn1BfLD8pZSZTcHMI8By/yzItKQB6xJrffJGRrtRZ4+L9v
         ZT05TsuLk0jaJP1yfvVCxxIz35gDfSZzhEDPnlsLTg5plULU5US3BsDMYdswsCXEJ6GO
         XvaQhJWSwaX4TsjwRTdHjGGn8lP3+TNR5q/VjVjJm5S8Wh806CvafZKa6FDaBcZEFwpp
         Encg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=I2tCCje/5pOyO5VnTjmGeAPKEpXwqTjK1CxJlREHoLc=;
        b=fmzN7DOKjrpGJ9pI75Ts/vB2SsOd8PT0YWSKTcmlQSAnGBOcrLpIIP1d7Th50A1BNU
         mW5VoUesC0FBnT7/EWjMpgn+v5H2KmOPu25Pqby08hmBT4qiGVsQPI5DgzXh/xm7+x4P
         e+ZYbYBrfY+oJ2nB/KTxqKW1LMuA7098TtKXOGQv6iJlmZZ5UgcnJ/IOefZI7Zyl5IIA
         bR+t7GqAsroia+8cTmVSVWnUYnclK7lrFhMF8B26K0w1zYXILmlT76cfVIuCsSjKXYf2
         nzNVNueke19VmCoOZ3hUCCbQai9iWcKS5ezivkpSnVwIduGMOV+Tde2UFaVz5T3mC2Um
         QQlg==
X-Gm-Message-State: AFqh2koKyeEgmRYE4Zy+7AGYcwTXt88FXGfN+WcrfkgGbhydkk7zPW+v
	tdmOIIxTJTuoiFjDBsOLzdFzbdr8ldU=
X-Google-Smtp-Source: AMrXdXuMJ72KATZjKKGdUxfwU1BnmbdJhYoDO22X+Eb8X1jmyzmFoSUv42Aa7I3d+0m8g/hjK7t9hHqafUo=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a17:903:12cf:b0:194:b631:69ac with SMTP id
 io15-20020a17090312cf00b00194b63169acmr838250plb.39.1674161473999; Thu, 19
 Jan 2023 12:51:13 -0800 (PST)
Date: Thu, 19 Jan 2023 20:48:54 +0000
In-Reply-To: <20230106103600.528-1-pdurrant@amazon.com>
Mime-Version: 1.0
References: <20230106103600.528-1-pdurrant@amazon.com>
X-Mailer: git-send-email 2.39.0.246.g2a6d74b583-goog
Message-ID: <167409095498.2375862.15711695818147385057.b4-ty@google.com>
Subject: Re: [PATCH v7 0/2] KVM: x86/xen: update Xen CPUID Leaf 4
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson <seanjc@google.com>, x86@kernel.org, kvm@vger.kernel.org, 
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
	Paul Durrant <pdurrant@amazon.com>
Cc: Borislav Petkov <bp@alien8.de>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Dave Hansen <dave.hansen@linux.intel.com>, David Woodhouse <dwmw2@infradead.org>, 
	"H. Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>, Juergen Gross <jgross@suse.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>
Content-Type: text/plain; charset="utf-8"

On Fri, 06 Jan 2023 10:35:58 +0000, Paul Durrant wrote:
> Patch #2 was the original patch. It has been expanded to a series in v6.
> 
> Paul Durrant (2):
>   KVM: x86/cpuid: generalize kvm_update_kvm_cpuid_base() and also
>     capture limit
>   KVM: x86/xen: update Xen CPUID Leaf 4 (tsc info) sub-leaves, if
>     present
> 
> [...]

Applied to kvm-x86 misc, thanks!

[1/2] KVM: x86/cpuid: generalize kvm_update_kvm_cpuid_base() and also capture limit
      https://github.com/kvm-x86/linux/commit/e3ac476711ca
[2/2] KVM: x86/xen: update Xen CPUID Leaf 4 (tsc info) sub-leaves, if present
      https://github.com/kvm-x86/linux/commit/509d19565173

--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 21:12:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 21:12:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481334.746139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIcCF-0002CY-4a; Thu, 19 Jan 2023 21:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481334.746139; Thu, 19 Jan 2023 21:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIcCF-0002CR-1n; Thu, 19 Jan 2023 21:11:43 +0000
Received: by outflank-mailman (input) for mailman id 481334;
 Thu, 19 Jan 2023 21:11:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0cEW=5Q=epam.com=prvs=53830b2ab6=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1pIcCD-0002CL-Aq
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 21:11:41 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dc25aff0-983d-11ed-b8d1-410ff93cb8f0;
 Thu, 19 Jan 2023 22:11:38 +0100 (CET)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 30JGOmUN025449; Thu, 19 Jan 2023 21:11:26 GMT
Received: from eur01-he1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2058.outbound.protection.outlook.com [104.47.0.58])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3n74cd1vhb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 19 Jan 2023 21:11:26 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by AS8PR03MB6840.eurprd03.prod.outlook.com (2603:10a6:20b:298::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.25; Thu, 19 Jan
 2023 21:11:21 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::8b3:273:e51b:1023]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::8b3:273:e51b:1023%7]) with mapi id 15.20.5986.023; Thu, 19 Jan 2023
 21:11:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc25aff0-983d-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DPwiALO0aLJqlCIP8N8/9hakoTWz6+zxEqXUSM6erj6IRmS85XoF++u/0ceeogY3tCp8zrimDNBpNL2N9IzGiX1XmiPxXu1bNTYgDImahmEQ0D1vb9gGYQsnvjulV0u8Rnu9LkPMO2EFjQ4HIzNoMRosjY3xeruIfkzfWK4aGB6NRkKG28Il0WQWKVKt/egxVapi2aa88mb2U0pVg5STSHxCucAX00Eg6eSj4PaLE329BUTFgdBDi0c/79evTMCufUHPXYP3eP8sYmgJDVsDP6GE5+SRwyVyVBJWFulpJYK+Y8tcwa+UyfHvkFBVCt0M3DjQUfLvLazQcAAdFsZbzg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DbBOmXvNw+1mMey8yI7cqMPgGwSu1jVp5Matm0hB6G8=;
 b=h9R0FmybgXbi3PKWmXMGqcQt43eGIneli4HU98cB7M2pxIU9DJoza/WZ8eQbqXIgbHyqXYYk1fN6WfZ20gGZY/jxayr4B99I7BWZ0RktzX+/8mAjOoGpFgmOVa9150Tek7Rfu3TNuD3cG8u2eD7BI1vHrHSSrSSHXQcdQLfAX+Ojy0FAms1Bq9ScMPJC9LkVK5qZn/N2XurM2Es0VNctFvb2YBZDG7twLLe90rGWEysWynG82fsAQHqlviyyUbFZF0wOrBTrTPd7gL52NWvsVaiLv1VFkSaDjD4OY6ZEM5mSrY97IMrOB+cf4czR5IlFNMT2QLzx9SIwJ6IuLtFKNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DbBOmXvNw+1mMey8yI7cqMPgGwSu1jVp5Matm0hB6G8=;
 b=U+xn9cDcV/H6CMh+xM1668lUTXtAy3r1SwiAlN8BECmiuwihJblNdEjDV8RQUH/2rwiYtLJqMyFw8Chk7hkBkiIWB8hIxIS7KuVDR0ExR6B/KCoTj3k/SGQ1gew3u/nG6ZLfKmNGUSUdWmdJzG7y7gZiCy+B0VU2VrQITncqbZqFfl7QXCq7CodYCKr9OwEmNfgdjX3Tz25X98lmNijn9JtHiIqicZHp9ogKRv7LjbUufBOeyf4R1rmHDu5Giox0OpKzYfA6bAgT75fxB52j4jttjswK9EK3rKm85ANqQJaBZsTgoHvuVIyN310Mz1Iyz82/Dd1taBO78HiL/cr33w==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
        Juergen Gross
	<jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
        Oleksii
 Moisieiev <Oleksii_Moisieiev@epam.com>
Subject: [PATCH] xen/pvcalls-back: fix permanently masked event channel
Thread-Topic: [PATCH] xen/pvcalls-back: fix permanently masked event channel
Thread-Index: AQHZLEqQuRGgM2P0bEiSEaENUdLo/Q==
Date: Thu, 19 Jan 2023 21:11:15 +0000
Message-ID: <20230119211037.1234931-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.38.1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: VI1PR03MB3710:EE_|AS8PR03MB6840:EE_
x-ms-office365-filtering-correlation-id: b08107ef-535e-4bf7-0817-08dafa61b36c
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 h3iGwIPyAsL6v8b4J8MaNg4OGRAEfnMh9DgOy2fZEu/eHqxmViygsV42/qQt6GTzzVA29K+MelotLTYfCjv+ZT5jqS7BkL3qbE4m+tZyDq9lqDuzw5aQh5aGBR3Jq2ZzQC62GH3Zy0dhAqMiKH9ci5UJEGvSZtifEpts7MPElH6hYtvoHcwV4cPA2/7Xxd2s1Tv3AYDa8eu0fXdX1zir3XcXiS9IA6wbaTG8HXSmztMB36XewnW6EDAn7dicKdonpyF2afqD8kZnEsbIvqruPC/YlmFGGgiTy9wfUzeP3VdDdoQUgk+PReYaJLGs9BjDYIeLfH3YWRv+NrRqlZswZgnyK9IkEUtW7vZbwpBVwggtK5lp08TckxGkWKEwTkAxxFDN4CaYgirg5KHsJK49Atuvap4+QNe1XoD3FhzeQuhXXlKlyVh1R+QHWA+3LH7dV5kC1jv4TSNO4E1xb6j3c4MpLDk3g48+F+alp6TbtSrLUbTIi4iY1Gobfx8rwonDJumnfYsMfPE3Mp89mKJOMCaAUCbojielZPtk2d3UrN/kwJQIm11fHjjUxUH58kk/TLe1KlnuAmXiGniaOBZPBnS8jAq1Eo6Fw3kn+ciojK1wAX4sc0+qE1tba4XRwQSl3gU3UfLR+7k8Ts51S9RlH5KvInassdz4NNq3ALnmAC5hQoVlS8Dqcse3z0tKz6bRxTF3zcjB+PDyaUXQ382Arw==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR03MB3710.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(346002)(396003)(366004)(376002)(136003)(451199015)(107886003)(86362001)(6506007)(55236004)(36756003)(6486002)(478600001)(71200400001)(8936002)(122000001)(316002)(5660300002)(66476007)(4326008)(66446008)(66556008)(91956017)(38100700002)(76116006)(8676002)(64756008)(6512007)(41300700001)(66946007)(26005)(186003)(38070700005)(2616005)(83380400001)(54906003)(110136005)(2906002)(1076003)(66899015);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: 
 =?iso-8859-1?Q?VQ7n0jlfBEMBPaKLUzK0wK5x7aDqgHSegP/Oc0YEb/hEGSEiFysf6hDZTR?=
 =?iso-8859-1?Q?nNO/8aYVey81i38+85fGNf6FC004w3HNiMvXQ8lgaemsk4kIy4a1x5Uahq?=
 =?iso-8859-1?Q?SN/g+hUhO/AcupuvSnyMLtGUnDHwyglVhSUMt20v+3LhogWyk3ykNcJR4u?=
 =?iso-8859-1?Q?mZhW07pJDHb7rBco68ZexPuBPMtuU25RcdpX02fT9N4x4iFH2QjrPRjAUk?=
 =?iso-8859-1?Q?boOn3u62B4ZlFGNHAlMtQdXxJ26e3DuGjw38sl1ScYrBbNNJ3xCm1EMzCR?=
 =?iso-8859-1?Q?LQD3uSBh238QnUcrWOVg115wm9vUKtbp1dD3MOpzgyvjrLU7WgHFyVQvso?=
 =?iso-8859-1?Q?ec8SQDHsjVeN826OmyTuexoGbLYcocp/E1YRGrGVt6+o9w7/7ecQzLFGlL?=
 =?iso-8859-1?Q?IhTcC2o0FNGTLYaeMSv3+1Hc71ps3fTwuJD5Ay/JtzVYoEmbF7z5YzqTKQ?=
 =?iso-8859-1?Q?ldRlbbvKctYczDQRG7tqaTqJPyiVC3crrfn+8OvcIYKfK82eGQ2cEP+v/9?=
 =?iso-8859-1?Q?qBIKtZ0g2U07uI6o6xr/Ckyq+SBZpOtXyk9SU3Jz0W6BTzNeWbFSDe1GzM?=
 =?iso-8859-1?Q?e8jC4g0xLZ48tAF4xGxvVBmzvnJzDoEnj6kkk0Y7SDBOAYHG5RlVQhXvvL?=
 =?iso-8859-1?Q?fRhKMtbLNMl9d8AIsAyeZkMldIbW4q2Fjbq83NCS7pS+2V4s1vMkCDQTaV?=
 =?iso-8859-1?Q?krjNA0T1CqB/I2cNX74n1+lEo7Uv7uEbCsNkZ57L4beQ4wiEKx9ntYyjqL?=
 =?iso-8859-1?Q?cTUDH2mS6NBNJggNrvKTJOavwOnqDaORS8QtzbOCWdX8tUzVyD17aKpE9e?=
 =?iso-8859-1?Q?n8K1TKv8IbwkDJ+SM5GILqv+6J2BXZoDtitlBwq0xhiFlRcs4cCqwKuHMh?=
 =?iso-8859-1?Q?F1ScAGdqfpTC0Su74CifP92OZW0llSnAFUUrOUoKnNO9ZZmba0cKekqvSA?=
 =?iso-8859-1?Q?+5Sp7+KDACSqnQcUU99zWFlDPBng0IlCzEgZEzQqTu0914FwgVWmqAbyVt?=
 =?iso-8859-1?Q?5F9X6Xls/HLW5hy4tID8r2p6kDuKSDLN+7xYaYjPuzT9EwfjiwqtB5qpA7?=
 =?iso-8859-1?Q?GQIVPcLx2B3yNXZeUBXhmjhr0LmeW6XbGttwfVuZxYeAYmfknWqCEvaIZO?=
 =?iso-8859-1?Q?NYaTdOXXfvl8Hscntrxwk51vVvP2pvKcotjPVVIWy2ZwkbZ5SPN7CvBCaI?=
 =?iso-8859-1?Q?196l7MIIW+FBooSmMzU62YomiakkIq5klETcayRvhV1V8t6+s/yCczcFZH?=
 =?iso-8859-1?Q?EaX3pwKnYUFSCNTSwN3W3eQIYx6qUPWqKgOPZJ8FwbjSovCHjv8Nz5ou6l?=
 =?iso-8859-1?Q?3LaC2yPwaBY4KgSseSdxDrhg6E61uZRxWJQhycMArokVIJO900+/tUL7+b?=
 =?iso-8859-1?Q?2Q50/dWYDIpRMpCiHV546F8L2gMLzUVW/op5fKuI1zA0uqEt8wJzmmLqJF?=
 =?iso-8859-1?Q?+4EniBrEIDYcsJ/DpT7UkEkesmjF8GqlL7tOTcilraCLQMOm2SQMJyGYfK?=
 =?iso-8859-1?Q?dMM9Jx7znD+kXx5L+plnJwvEAsG38dl0x5l23OSmX7Czb10EalBN8k7AFS?=
 =?iso-8859-1?Q?QedmCrTLm8Q3o65FTXaO/xXxdF4lpi/dF8C+Y8cLjtJLaRlA53x+9y3iXZ?=
 =?iso-8859-1?Q?SYe6kIxT0k+6Gwa8pK2OzZsMGKlEUweMuF4Oc4W7tbsGl5fde4RhlzPA?=
 =?iso-8859-1?Q?=3D=3D?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB3710.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b08107ef-535e-4bf7-0817-08dafa61b36c
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jan 2023 21:11:15.5007
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: NhCyDbiqHrNSxhseMNkqf4+1vs2rBVnWdBq84rVqsOOAYFJl7k5jyc8fKqxfNkl3fyYSMRtnPm5z1c2rdS40j6+ouB0m8pTkWdCkO6fazvU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR03MB6840
X-Proofpoint-ORIG-GUID: ouClumBOzvn6CkxX3bejVPFm-vDk31IT
X-Proofpoint-GUID: ouClumBOzvn6CkxX3bejVPFm-vDk31IT
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.219,Aquarius:18.0.930,Hydra:6.0.562,FMLib:17.11.122.1
 definitions=2023-01-19_14,2023-01-19_01,2022-06-22_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 spamscore=0
 clxscore=1011 mlxlogscore=999 impostorscore=0 lowpriorityscore=0
 priorityscore=1501 bulkscore=0 adultscore=0 malwarescore=0 phishscore=0
 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2212070000 definitions=main-2301190177

There is a sequence of events that can lead to a permanently masked
event channel, because xen_irq_lateeoi() is never called. This happens
when a backend receives a spurious write event from a frontend. In this
case pvcalls_conn_back_write() returns early and does not clear the
map->write counter. As map->write > 0, pvcalls_back_ioworker() returns
without calling xen_irq_lateeoi(). This leaves the event channel in a
masked state; the backend does not receive any new events from the
frontend and all communication stops.

Move atomic_set(&map->write, 0) to the very beginning of
pvcalls_conn_back_write() to fix this issue.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Reported-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>
---
 drivers/xen/pvcalls-back.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index a7d293fa8d14..60f5cd70d770 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -173,6 +173,8 @@ static bool pvcalls_conn_back_write(struct sock_mapping *map)
 	RING_IDX cons, prod, size, array_size;
 	int ret;
 
+	atomic_set(&map->write, 0);
+
 	cons = intf->out_cons;
 	prod = intf->out_prod;
 	/* read the indexes before dealing with the data */
@@ -197,7 +199,6 @@ static bool pvcalls_conn_back_write(struct sock_mapping *map)
 		iov_iter_kvec(&msg.msg_iter, READ, vec, 2, size);
 	}
 
-	atomic_set(&map->write, 0);
 	ret = inet_sendmsg(map->sock, &msg, size);
 	if (ret == -EAGAIN) {
 		atomic_inc(&map->write);
-- 
2.38.1


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 21:12:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 21:12:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481338.746148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIcD6-0002hh-Ec; Thu, 19 Jan 2023 21:12:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481338.746148; Thu, 19 Jan 2023 21:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIcD6-0002ha-Bb; Thu, 19 Jan 2023 21:12:36 +0000
Received: by outflank-mailman (input) for mailman id 481338;
 Thu, 19 Jan 2023 21:12:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4g95=5Q=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pIcD4-0002Ux-7s
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 21:12:34 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fd594469-983d-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 22:12:33 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd594469-983d-11ed-91b6-6bf2151ebd3b
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1674162752;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rRUXLP0p0WQ+Qd4xVEuRKMl25ggID9YoYcEY3DCIiik=;
	b=mdQ7CD0wRJvfaZDOr0f82Dc4967iwC75I1+s5lDCu1fHHwtrp+IBba8UPSoQ/va3j4Qar2
	ATGu2nZLLKa0vJ8CheBHLvt0I89xHSG06ZUusQKDPY104va7qqUjwYfiuIUukk+LaqO7Gu
	N+aHnCbtSnW+e8Smx0L+A2DiKy1AOSyLQ5QGRIN4vgJHVE1sRkt9jT/0k9LahiDV7T+AiK
	jKmIL8dmtVDNgeoMh+UAXcZ37mv49o4hNCz9KdPCGAyDUP9ZfpqnymsWZ0f3dG4LxhFiyw
	RTndkgdz2BKS/FhRmtxFB9hP9YbenuT2IOAVALIMh4inh4bZRqxisS45FivGfA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1674162752;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rRUXLP0p0WQ+Qd4xVEuRKMl25ggID9YoYcEY3DCIiik=;
	b=0BJ5w/p0x778LsQ1MznxzGrIdB3OE1QGABAUfN3vpOmJUgdDINUgQsr9tLqZGxSzWuC+ct
	ivl26is4VR39BBBw==
To: Igor Mammedov <imammedo@redhat.com>, "Srivatsa S. Bhat"
 <srivatsa@csail.mit.edu>
Cc: linux-kernel@vger.kernel.org, amakhalov@vmware.com, ganb@vmware.com,
 ankitja@vmware.com, bordoloih@vmware.com, keerthanak@vmware.com,
 blamoreaux@vmware.com, namit@vmware.com, Peter Zijlstra
 <peterz@infradead.org>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov
 <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter
 Anvin" <hpa@zytor.com>, "Rafael J.
 Wysocki" <rafael.j.wysocki@intel.com>, "Paul E. McKenney"
 <paulmck@kernel.org>, Wyes Karny <wyes.karny@amd.com>, Lewis Caroll
 <lewis.carroll@amd.com>, Tom Lendacky <thomas.lendacky@amd.com>, Juergen
 Gross <jgross@suse.com>, x86@kernel.org, VMware PV-Drivers Reviewers
 <pv-drivers@vmware.com>, virtualization@lists.linux-foundation.org,
 kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
 state
In-Reply-To: <20230116155526.05d37ff9@imammedo.users.ipa.redhat.com>
References: <20230116060134.80259-1-srivatsa@csail.mit.edu>
 <20230116155526.05d37ff9@imammedo.users.ipa.redhat.com>
Date: Thu, 19 Jan 2023 22:12:31 +0100
Message-ID: <87bkmui5z4.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Mon, Jan 16 2023 at 15:55, Igor Mammedov wrote:
> "Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:
>> Fix this by preventing the use of mwait idle state in the vCPU offline
>> play_dead() path for any hypervisor, even if mwait support is
>> available.
>
> if mwait is enabled, the guest very likely has cpuidle enabled and is
> using the same mwait as well. So exiting early from
> mwait_play_dead() might just punt the workflow down:
>   native_play_dead()
>         ...
>         mwait_play_dead();
>         if (cpuidle_play_dead())   <- possible mwait here                                              
>                 hlt_play_dead(); 
>
> and it will end up in mwait again, and only if that fails
> will it go the HLT route and maybe transition to the VMM.

Good point.

> Instead of a workaround on the guest side,
> shouldn't the hypervisor force a VMEXIT on the vCPU being unplugged when
> it's actually hot-unplugging the vCPU? (ex: QEMU kicks the vCPU out of
> guest context when it is removing the vCPU, among other things)

For a pure guest side CPU unplug operation:

    guest$ echo 0 >/sys/devices/system/cpu/cpu$N/online

the hypervisor is not involved at all. The vCPU is not removed in that
case.

So to ensure that this ends up in HLT something like the below is
required.

Note, the removal of the comment after mwait_play_dead() is intentional
because the comment is completely bogus. Not having MWAIT is not a
failure. But that wants to be a separate patch.

Thanks,

        tglx
---        
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 55cad72715d9..3f1f20f71ec5 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1833,7 +1833,10 @@ void native_play_dead(void)
 	play_dead_common();
 	tboot_shutdown(TB_SHUTDOWN_WFS);
 
-	mwait_play_dead();	/* Only returns on failure */
+	if (this_cpu_has(X86_FEATURE_HYPERVISOR))
+		hlt_play_dead();
+
+	mwait_play_dead();
 	if (cpuidle_play_dead())
 		hlt_play_dead();
 }


  


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 22:54:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 22:54:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481349.746167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIdne-0004D9-Sk; Thu, 19 Jan 2023 22:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481349.746167; Thu, 19 Jan 2023 22:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIdne-0004D2-Pt; Thu, 19 Jan 2023 22:54:26 +0000
Received: by outflank-mailman (input) for mailman id 481349;
 Thu, 19 Jan 2023 22:54:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIdnd-0004Cw-B3
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 22:54:25 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3679293e-984c-11ed-91b6-6bf2151ebd3b;
 Thu, 19 Jan 2023 23:54:23 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3838061D5F;
 Thu, 19 Jan 2023 22:54:21 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AF3DDC433F1;
 Thu, 19 Jan 2023 22:54:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3679293e-984c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674168860;
	bh=dC/jVX4XfevFCiywKrgTd2muNUX6G+Rw36Whx4b84Kk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=s8XN97enMvkt05il3Xv1nHCiogvwlDN4i3GJdSvE8aVoNWu9290AWZ0kKxf1gHflH
	 u3KW5Y76fMkPhqPVQYMB9q8xZ9XqhWcXulKgMYntkI4GLMHIsXVmetMcWYU/kES2xE
	 caatkMhjCnDDZaOd1TGP4V6e3nXMkh4TCadhSvgf4GaMYuKqUBig1PF7iF47ilsTgn
	 LWjrhxemhxX7uFbjy8sp1G0vt94un9dwvF2tcsT3BMb7nmLxauHVAN3gpFbrJNuJST
	 r70JNKygJIgr/iD2I9HUUkcrwrflrla+I+1RCGuXAjjHLrYaAPB5hUlEleQGlA1Z6Q
	 zy/5KCM+PTHQw==
Date: Thu, 19 Jan 2023 14:54:14 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
In-Reply-To: <20230117174358.15344-3-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-3-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
> use of 'PRIpaddr' for printing a PTE is buggy, as a PTE is not a physical
> address.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

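The rule in the commit message can be shown with a minimal standalone sketch. The `paddr_t` typedef and the `PRIpaddr` macro below are illustrative stand-ins for Xen's definitions, not the real ones (a 32-bit build would use uint32_t/PRIx32 instead):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for Xen's types: with 64-bit physical addresses, paddr_t is
 * uint64_t and PRIpaddr expands to PRIx64. */
typedef uint64_t paddr_t;
#define PRIpaddr PRIx64

/* Physical addresses are printed with PRIpaddr, so the format string
 * tracks whatever width paddr_t has on a given build. */
static int fmt_memory_node(char *buf, size_t n, paddr_t base)
{
    return snprintf(buf, n, "memory@%"PRIpaddr, base);
}

/* Raw PTE bits are a u64 regardless of the width of paddr_t, so they
 * take PRIx64, never PRIpaddr. */
static int fmt_pte(char *buf, size_t n, uint64_t pte_bits)
{
    return snprintf(buf, n, "pte = 0x%"PRIx64, pte_bits);
}
```

With this split, switching paddr_t to 32-bit only changes the PRIpaddr expansion; the PTE printing stays correct unchanged.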

> ---
> 
> Changes from -
> 
> v1 - 1. Moved the patch earlier.
> 2. Moved a part of change from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr"
> into this patch.
> 
>  xen/arch/arm/domain_build.c | 10 +++++-----
>  xen/arch/arm/gic-v2.c       |  6 +++---
>  xen/arch/arm/mm.c           |  2 +-
>  3 files changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 829cea8de8..33a5945a2d 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1315,7 +1315,7 @@ static int __init make_memory_node(const struct domain *d,
>      dt_dprintk("Create memory node\n");
>  
>      /* ePAPR 3.4 */
> -    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
> +    snprintf(buf, sizeof(buf), "memory@%"PRIpaddr, mem->bank[i].start);
>      res = fdt_begin_node(fdt, buf);
>      if ( res )
>          return res;
> @@ -1383,7 +1383,7 @@ static int __init make_shm_memory_node(const struct domain *d,
>          __be32 *cells;
>          unsigned int len = (addrcells + sizecells) * sizeof(__be32);
>  
> -        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start);
> +        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIpaddr, mem->bank[i].start);
>          res = fdt_begin_node(fdt, buf);
>          if ( res )
>              return res;
> @@ -2719,7 +2719,7 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
>      /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
>      char buf[38];
>  
> -    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> +    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIpaddr,
>               vgic_dist_base(&d->arch.vgic));
>      res = fdt_begin_node(fdt, buf);
>      if ( res )
> @@ -2775,7 +2775,7 @@ static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
>      char buf[38];
>      unsigned int i, len = 0;
>  
> -    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> +    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIpaddr,
>               vgic_dist_base(&d->arch.vgic));
>  
>      res = fdt_begin_node(fdt, buf);
> @@ -2861,7 +2861,7 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
>      /* Placeholder for sbsa-uart@ + a 64-bit number + \0 */
>      char buf[27];
>  
> -    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIx64, d->arch.vpl011.base_addr);
> +    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIpaddr, d->arch.vpl011.base_addr);
>      res = fdt_begin_node(fdt, buf);
>      if ( res )
>          return res;
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 61802839cb..5d4d298b86 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -1049,7 +1049,7 @@ static void __init gicv2_dt_init(void)
>      if ( csize < SZ_8K )
>      {
>          printk(XENLOG_WARNING "GICv2: WARNING: "
> -               "The GICC size is too small: %#"PRIx64" expected %#x\n",
> +               "The GICC size is too small: %#"PRIpaddr" expected %#x\n",
>                 csize, SZ_8K);
>          if ( platform_has_quirk(PLATFORM_QUIRK_GIC_64K_STRIDE) )
>          {
> @@ -1280,11 +1280,11 @@ static int __init gicv2_init(void)
>          gicv2.map_cbase += aliased_offset;
>  
>          printk(XENLOG_WARNING
> -               "GICv2: Adjusting CPU interface base to %#"PRIx64"\n",
> +               "GICv2: Adjusting CPU interface base to %#"PRIpaddr"\n",
>                 cbase + aliased_offset);
>      } else if ( csize == SZ_128K )
>          printk(XENLOG_WARNING
> -               "GICv2: GICC size=%#"PRIx64" but not aliased\n",
> +               "GICv2: GICC size=%#"PRIpaddr" but not aliased\n",
>                 csize);
>  
>      gicv2.map_hbase = ioremap_nocache(hbase, PAGE_SIZE);
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0fc6f2992d..fab54618ab 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -249,7 +249,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
>  
>          pte = mapping[offsets[level]];
>  
> -        printk("%s[0x%03x] = 0x%"PRIpaddr"\n",
> +        printk("%s[0x%03x] = 0x%"PRIx64"\n",
>                 level_strs[level], offsets[level], pte.bits);
>  
>          if ( level == 3 || !pte.walk.valid || !pte.walk.table )
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:02:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:02:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481354.746178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIdvg-0005fo-Lv; Thu, 19 Jan 2023 23:02:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481354.746178; Thu, 19 Jan 2023 23:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIdvg-0005fh-J0; Thu, 19 Jan 2023 23:02:44 +0000
Received: by outflank-mailman (input) for mailman id 481354;
 Thu, 19 Jan 2023 23:02:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIdvf-0005fb-AI
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 23:02:43 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 605450ae-984d-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 00:02:42 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0B85961D80;
 Thu, 19 Jan 2023 23:02:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8253DC433EF;
 Thu, 19 Jan 2023 23:02:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 605450ae-984d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674169360;
	bh=xM+fVZyXdWtZjElkj5Vr0f5uVWipaXvA1W5Ch7MZ8wE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=YU/t43PS3jcNJ00OGIVESZ/sfTLj/U9EEHwAScfwgizAGnjweed2erLFu/ZeL9Aa7
	 0YsXeoK2sxEgIrM0EcT2oePWCCkCz85c1C620WnaIJU7qTngyPKRz2KSVxBYS5Of8W
	 +7vvVUWDPFIZt63DCnOkHa5EL4zOm0su1W8StU9Bh0P7p4cAWPzCfcAn8fZi5iE80n
	 x3LbFLn1vlv0qbwH/cve9Lvx/HQYkP2fIOuVmanlYNlHGVbZjuVj+WTybhjjEPQscQ
	 Ze8522pm1JJ654M/4eI7ME1XpEc1c6MTjKaKcPhCHlT1sCDMJfLRtnGPpxbJnTq/Ju
	 EQUNwIBuUQS2g==
Date: Thu, 19 Jan 2023 15:02:38 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 03/11] xen/arm: domain_build: Replace use of paddr_t in
 find_domU_holes()
In-Reply-To: <20230117174358.15344-4-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191459230.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-4-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> bankbase, banksize and bankend are used to hold values of type 'unsigned
> long long'. These can be represented as 'uint64_t' instead of 'paddr_t'.
> This will ensure consistency with allocate_static_memory() (where we use
> 'uint64_t' for rambase and ramsize).
> 
> In the future, paddr_t can be used for 'uint32_t' as well to represent
> 32-bit physical addresses.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

I saw that Julien commented about using unsigned long long. To be
honest, given that we typically use explicitly-sized integers (which is
a good thing) and unsigned long long is always uint64_t on all
architectures, I can see the benefits of using uint64_t here.

At the same time, I can see that the reason for changing the type here is
that we are dealing with ULL values, so it would make sense to use
unsigned long long.

I cannot see any big problems/advantages either way, so I am OK with
this patch. (Julien, if you prefer unsigned long long I am fine with
that also.)

Acked-by: Stefano Stabellini <sstabellini@kernel.org>



> ---
> 
> Changes from -
> 
> v1 - Modified the title/commit message from "[XEN v1 6/9] xen/arm: Use 'u64' to represent 'unsigned long long"
> and moved it earlier.
> 
>  xen/arch/arm/domain_build.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 33a5945a2d..f904f12408 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1726,9 +1726,9 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
>                                    struct meminfo *ext_regions)
>  {
>      unsigned int i;
> -    paddr_t bankend;
> -    const paddr_t bankbase[] = GUEST_RAM_BANK_BASES;
> -    const paddr_t banksize[] = GUEST_RAM_BANK_SIZES;
> +    uint64_t bankend;
> +    const uint64_t bankbase[] = GUEST_RAM_BANK_BASES;
> +    const uint64_t banksize[] = GUEST_RAM_BANK_SIZES;
>      int res = -ENOENT;
>  
>      for ( i = 0; i < GUEST_RAM_BANKS; i++ )
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:20:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481361.746188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeCb-00086J-8Y; Thu, 19 Jan 2023 23:20:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481361.746188; Thu, 19 Jan 2023 23:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeCb-00086C-5k; Thu, 19 Jan 2023 23:20:13 +0000
Received: by outflank-mailman (input) for mailman id 481361;
 Thu, 19 Jan 2023 23:20:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIeCa-000866-H0
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 23:20:12 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d148b3fa-984f-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 00:20:10 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6F12C61D9D;
 Thu, 19 Jan 2023 23:20:09 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D31AAC433EF;
 Thu, 19 Jan 2023 23:20:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d148b3fa-984f-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674170408;
	bh=acDBaGiiW35JavUUnI5ml5IinAklQDhI7+IJo/aefno=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=TwomfGzypqH37oOcUiJtmeoA/x92opvFgJR8BeBqAV+pdzUTXhT/F2PcsBrBqTaWE
	 tsSr2RVP/YIr4C65eFzRgMQs7YzyY3A+pRHjHpPDGU7Em+Nq1VTXwb6/32YDp9q8Pp
	 FoAzB/GZguI+BpJE6O89VZ654w4lsLpeh33hhhTmp3umYgw0XFhjVC2fQJ6Hxsk0BH
	 9FCbpkDBuPDZ5zyz93gBjQLpo3xDPubfHtR2QoWRHUVfuEYl1Dx1LjEM9D7uDQVI4E
	 NN52510SSZ1tvSgDDD17A3LBXqD6cLIAje6y+dzcPOadz59dXcOKh3Rmwp05O54Cf6
	 +OsUFUl5H2JAg==
Date: Thu, 19 Jan 2023 15:20:06 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 04/11] xen/arm: Typecast the DT values into paddr_t
In-Reply-To: <20230117174358.15344-5-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191514240.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-5-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> In the future, we wish to support 32-bit physical addresses.
> However, one can only read u64 values from the DT. Thus, we need to
> typecast the values appropriately from u64 to paddr_t.
> 
> device_tree_get_reg() should now be able to return paddr_t. This is
> invoked by various callers to get DT address and size.
> Similarly, dt_read_number() is invoked as well to get DT address and
> size. The return value is typecast to paddr_t.
> fdt_get_mem_rsv() can only accept u64 values. So, we provide a wrapper
> for this called fdt_get_mem_rsv_paddr() which will do the necessary
> typecasting before invoking fdt_get_mem_rsv() and while returning.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

I know we discussed this before and you implemented exactly what we
suggested, but now looking at this patch I think we should do the
following:

- also add a wrapper for dt_read_number, something like
  dt_read_number_paddr that returns paddr_t
- add a check that the top 32 bits are zero in all the wrappers
  (dt_read_number_paddr, device_tree_get_reg, fdt_get_mem_rsv_paddr)
  when paddr_t != uint64_t. In case the top 32 bits are non-zero, I think
  we should print an error

Julien, I remember you were concerned about BUG_ON/panic/ASSERT and I
agree with you there (especially considering Vikram's device tree
overlay series). So here I am only suggesting to check for truncation and
printk a message, not panic. Would you be OK with that?

Last comment, maybe we could add fdt_get_mem_rsv_paddr to setup.h
instead of introducing xen/arch/arm/include/asm/device_tree.h, given
that we already have device tree definitions there
(device_tree_get_reg). I am OK either way.


> ---
> 
> Changes from
> 
> v1 - 1. Dropped "[XEN v1 2/9] xen/arm: Define translate_dt_address_size() for the translation between u64 and paddr_t" and
> "[XEN v1 4/9] xen/arm: Use translate_dt_address_size() to translate between device tree addr/size and paddr_t", instead
> this approach achieves the same purpose.
> 
> 2. No need to check for truncation while converting values from u64 to paddr_t.
> 
>  xen/arch/arm/bootfdt.c                 | 23 +++++++++------
>  xen/arch/arm/domain_build.c            |  2 +-
>  xen/arch/arm/include/asm/device_tree.h | 40 ++++++++++++++++++++++++++
>  xen/arch/arm/include/asm/setup.h       |  2 +-
>  xen/arch/arm/setup.c                   | 14 ++++-----
>  xen/arch/arm/smpboot.c                 |  2 +-
>  6 files changed, 65 insertions(+), 18 deletions(-)
>  create mode 100644 xen/arch/arm/include/asm/device_tree.h
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 0085c28d74..f536a3f3ab 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -11,9 +11,9 @@
>  #include <xen/efi.h>
>  #include <xen/device_tree.h>
>  #include <xen/lib.h>
> -#include <xen/libfdt/libfdt.h>
>  #include <xen/sort.h>
>  #include <xsm/xsm.h>
> +#include <asm/device_tree.h>
>  #include <asm/setup.h>
>  
>  static bool __init device_tree_node_matches(const void *fdt, int node,
> @@ -53,10 +53,15 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
>  }
>  
>  void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
> -                                u32 size_cells, u64 *start, u64 *size)
> +                                u32 size_cells, paddr_t *start, paddr_t *size)
>  {
> -    *start = dt_next_cell(address_cells, cell);
> -    *size = dt_next_cell(size_cells, cell);
> +    /*
> +     * dt_next_cell will return u64 whereas paddr_t may be u64 or u32. Thus, one
> +     * needs to cast the u64 value to paddr_t. Note that we do not check for truncation
> +     * as it is the user's responsibility to provide the correct values in the DT.
> +     */
> +    *start = (paddr_t) dt_next_cell(address_cells, cell);
> +    *size = (paddr_t) dt_next_cell(size_cells, cell);
>  }
>  
>  static int __init device_tree_get_meminfo(const void *fdt, int node,
> @@ -326,7 +331,7 @@ static int __init process_chosen_node(const void *fdt, int node,
>          printk("linux,initrd-start property has invalid length %d\n", len);
>          return -EINVAL;
>      }
> -    start = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
> +    start = (paddr_t) dt_read_number((void *)&prop->data, dt_size_to_cells(len));
>  
>      prop = fdt_get_property(fdt, node, "linux,initrd-end", &len);
>      if ( !prop )
> @@ -339,7 +344,7 @@ static int __init process_chosen_node(const void *fdt, int node,
>          printk("linux,initrd-end property has invalid length %d\n", len);
>          return -EINVAL;
>      }
> -    end = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
> +    end = (paddr_t) dt_read_number((void *)&prop->data, dt_size_to_cells(len));
>  
>      if ( start >= end )
>      {
> @@ -594,9 +599,11 @@ static void __init early_print_info(void)
>      for ( i = 0; i < nr_rsvd; i++ )
>      {
>          paddr_t s, e;
> -        if ( fdt_get_mem_rsv(device_tree_flattened, i, &s, &e) < 0 )
> +
> +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &s, &e) < 0 )
>              continue;
> -        /* fdt_get_mem_rsv returns length */
> +
> +        /* fdt_get_mem_rsv_paddr returns length */
>          e += s;
>          printk(" RESVD[%u]: %"PRIpaddr" - %"PRIpaddr"\n", i, s, e);
>      }
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f904f12408..72b9afbb4c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -949,7 +949,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>          BUG_ON(!prop);
>          cells = (const __be32 *)prop->value;
>          device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
> -        psize = dt_read_number(cells, size_cells);
> +        psize = (paddr_t) dt_read_number(cells, size_cells);
>          if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
>          {
>              printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
> diff --git a/xen/arch/arm/include/asm/device_tree.h b/xen/arch/arm/include/asm/device_tree.h
> new file mode 100644
> index 0000000000..51e0f0ae20
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/device_tree.h
> @@ -0,0 +1,40 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * xen/arch/arm/include/asm/device_tree.h
> + * 
> + * Wrapper functions for device tree. This helps to convert dt values
> + * between u64 and paddr_t.
> + *
> + * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
> + */
> +
> +#ifndef __ARCH_ARM_DEVICE_TREE__
> +#define __ARCH_ARM_DEVICE_TREE__
> +
> +#include <xen/libfdt/libfdt.h>
> +
> +inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
> +                                 paddr_t *address,
> +                                 paddr_t *size)
> +{
> +    uint64_t dt_addr;
> +    uint64_t dt_size;
> +    int ret = 0;
> +
> +    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
> +
> +    *address = (paddr_t) dt_addr;
> +    *size = (paddr_t) dt_size;
> +
> +    return ret;
> +}
> +
> +#endif /* __ARCH_ARM_DEVICE_TREE__ */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index a926f30a2b..6105e5cae3 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -158,7 +158,7 @@ extern uint32_t hyp_traps_vector[];
>  void init_traps(void);
>  
>  void device_tree_get_reg(const __be32 **cell, u32 address_cells,
> -                         u32 size_cells, u64 *start, u64 *size);
> +                         u32 size_cells, paddr_t *start, paddr_t *size);
>  
>  u32 device_tree_get_u32(const void *fdt, int node,
>                          const char *prop_name, u32 dflt);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90..da13439e62 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -29,7 +29,6 @@
>  #include <xen/virtual_region.h>
>  #include <xen/vmap.h>
>  #include <xen/trace.h>
> -#include <xen/libfdt/libfdt.h>
>  #include <xen/acpi.h>
>  #include <xen/warning.h>
>  #include <asm/alternative.h>
> @@ -39,6 +38,7 @@
>  #include <asm/gic.h>
>  #include <asm/cpuerrata.h>
>  #include <asm/cpufeature.h>
> +#include <asm/device_tree.h>
>  #include <asm/platform.h>
>  #include <asm/procinfo.h>
>  #include <asm/setup.h>
> @@ -222,11 +222,11 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>      {
>          paddr_t r_s, r_e;
>  
> -        if ( fdt_get_mem_rsv(device_tree_flattened, i, &r_s, &r_e ) < 0 )
> +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &r_s, &r_e ) < 0 )
>              /* If we can't read it, pretend it doesn't exist... */
>              continue;
>  
> -        r_e += r_s; /* fdt_get_mem_rsv returns length */
> +        r_e += r_s; /* fdt_get_mem_rsv_paddr returns length */
>  
>          if ( s < r_e && r_s < e )
>          {
> @@ -502,13 +502,13 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
>      {
>          paddr_t mod_s, mod_e;
>  
> -        if ( fdt_get_mem_rsv(device_tree_flattened,
> -                             i - mi->nr_mods,
> -                             &mod_s, &mod_e ) < 0 )
> +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
> +                                   i - mi->nr_mods,
> +                                   &mod_s, &mod_e ) < 0 )
>              /* If we can't read it, pretend it doesn't exist... */
>              continue;
>  
> -        /* fdt_get_mem_rsv returns length */
> +        /* fdt_get_mem_rsv_paddr returns length */
>          mod_e += mod_s;
>  
>          if ( s < mod_e && mod_s < e )
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 412ae22869..ee59b1d379 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -159,7 +159,7 @@ static void __init dt_smp_init_cpus(void)
>              continue;
>          }
>  
> -        addr = dt_read_number(prop, dt_n_addr_cells(cpu));
> +        addr = (paddr_t) dt_read_number(prop, dt_n_addr_cells(cpu));
>  
>          hwid = addr;
>          if ( hwid != addr )
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:24:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:24:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481366.746197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeH3-0000GH-QL; Thu, 19 Jan 2023 23:24:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481366.746197; Thu, 19 Jan 2023 23:24:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeH3-0000G5-Nj; Thu, 19 Jan 2023 23:24:49 +0000
Received: by outflank-mailman (input) for mailman id 481366;
 Thu, 19 Jan 2023 23:24:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIeH2-0000Fz-Nf
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 23:24:48 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75a5649a-9850-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 00:24:47 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by sin.source.kernel.org (Postfix) with ESMTPS id 97733CE25DE;
 Thu, 19 Jan 2023 23:24:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E1198C433F0;
 Thu, 19 Jan 2023 23:24:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75a5649a-9850-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674170679;
	bh=T71dO3QYmXldGt4FF+359WqjwCtGNMJPQNmVZVNQCsc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=TElbiYhhEoScv2Y+nb8FGvDMUoGxEFTvMuH/To6I+Y0Yg5d77D8dqSmU1NeOi9ld1
	 SARF+9gx+dnJ/uO+ZSDfVXLZ8msR7Mofww2eIulpe/2osFUPIVvYlROHuDZ8ArbMVl
	 F3UvfuUA+0E/sBY5zCSDVDxZutjydB7oeDq2K2BU+VnWbUQaJTBy/pRpQbHQVvwXvb
	 MCxzWDOlOtCFlLjynQ9HIVcPEd7LjYT14fTt7BQw3goUREYTz+npAeQ7hqfsLP4PUL
	 SVrIV+Eo9tOprRb98dJ6bHuFzTcXNjjLbfFxRnLfiqoVjtxvonU28WxG5L8uVUZoUd
	 /IaWzz7/kiLdQ==
Date: Thu, 19 Jan 2023 15:24:37 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for
 address/size
In-Reply-To: <20230117174358.15344-6-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191522170.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-6-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> One should now be able to use 'paddr_t' to represent address and size.
> Consequently, one should use 'PRIpaddr' as a format specifier for paddr_t.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from -
> 
> v1 - 1. Rebased the patch.
> 
>  xen/arch/arm/domain_build.c        |  9 +++++----
>  xen/arch/arm/gic-v3.c              |  2 +-
>  xen/arch/arm/platforms/exynos5.c   | 26 +++++++++++++-------------
>  xen/drivers/char/exynos4210-uart.c |  2 +-
>  xen/drivers/char/ns16550.c         |  8 ++++----
>  xen/drivers/char/omap-uart.c       |  2 +-
>  xen/drivers/char/pl011.c           |  4 ++--
>  xen/drivers/char/scif-uart.c       |  2 +-
>  xen/drivers/passthrough/arm/smmu.c |  6 +++---
>  9 files changed, 31 insertions(+), 30 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 72b9afbb4c..cf8ae37a14 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1666,7 +1666,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
>      dt_for_each_device_node( dt_host, np )
>      {
>          unsigned int naddr;
> -        u64 addr, size;
> +        paddr_t addr, size;
>  
>          naddr = dt_number_of_address(np);
>  
> @@ -2445,7 +2445,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
>      unsigned int naddr;
>      unsigned int i;
>      int res;
> -    u64 addr, size;
> +    paddr_t addr, size;
>      bool own_device = !dt_device_for_passthrough(dev);
>      /*
>       * We want to avoid mapping the MMIO in dom0 for the following cases:
> @@ -2941,9 +2941,10 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
>          if ( res )
>          {
>              printk(XENLOG_ERR "Unable to permit to dom%d access to"
> -                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
> +                   " 0x%"PRIpaddr" - 0x%"PRIpaddr"\n",
>                     kinfo->d->domain_id,
> -                   mstart & PAGE_MASK, PAGE_ALIGN(mstart + size) - 1);
> +                   (paddr_t) (mstart & PAGE_MASK),
> +                   (paddr_t) (PAGE_ALIGN(mstart + size) - 1));

Why do you need the casts here? mstart is already defined as paddr_t


>              return res;
>          }
>  
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index bb59ea94cd..391dfa53d7 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -1393,7 +1393,7 @@ static void __init gicv3_dt_init(void)
>  
>      for ( i = 0; i < gicv3.rdist_count; i++ )
>      {
> -        uint64_t rdist_base, rdist_size;
> +        paddr_t rdist_base, rdist_size;
>  
>          res = dt_device_get_address(node, 1 + i, &rdist_base, &rdist_size);
>          if ( res )
> diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
> index 6560507092..f79fad9957 100644
> --- a/xen/arch/arm/platforms/exynos5.c
> +++ b/xen/arch/arm/platforms/exynos5.c
> @@ -42,8 +42,8 @@ static int exynos5_init_time(void)
>      void __iomem *mct;
>      int rc;
>      struct dt_device_node *node;
> -    u64 mct_base_addr;
> -    u64 size;
> +    paddr_t mct_base_addr;
> +    paddr_t size;
>  
>      node = dt_find_compatible_node(NULL, NULL, "samsung,exynos4210-mct");
>      if ( !node )
> @@ -59,7 +59,7 @@ static int exynos5_init_time(void)
>          return -ENXIO;
>      }
>  
> -    dprintk(XENLOG_INFO, "mct_base_addr: %016llx size: %016llx\n",
> +    dprintk(XENLOG_INFO, "mct_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
>              mct_base_addr, size);
>  
>      mct = ioremap_nocache(mct_base_addr, size);
> @@ -97,9 +97,9 @@ static int __init exynos5_smp_init(void)
>      struct dt_device_node *node;
>      void __iomem *sysram;
>      char *compatible;
> -    u64 sysram_addr;
> -    u64 size;
> -    u64 sysram_offset;
> +    paddr_t sysram_addr;
> +    paddr_t size;
> +    paddr_t sysram_offset;
>      int rc;
>  
>      node = dt_find_compatible_node(NULL, NULL, "samsung,secure-firmware");
> @@ -131,7 +131,7 @@ static int __init exynos5_smp_init(void)
>          dprintk(XENLOG_ERR, "Error in %s\n", compatible);
>          return -ENXIO;
>      }
> -    dprintk(XENLOG_INFO, "sysram_addr: %016llx size: %016llx offset: %016llx\n",
> +    dprintk(XENLOG_INFO, "sysram_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr" offset: 0x%"PRIpaddr"\n",
>              sysram_addr, size, sysram_offset);
>  
>      sysram = ioremap_nocache(sysram_addr, size);
> @@ -189,7 +189,7 @@ static int exynos5_cpu_power_up(void __iomem *power, int cpu)
>      return 0;
>  }
>  
> -static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
> +static int exynos5_get_pmu_baseandsize(paddr_t *power_base_addr, paddr_t *size)
>  {
>      struct dt_device_node *node;
>      int rc;
> @@ -215,7 +215,7 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
>          return -ENXIO;
>      }
>  
> -    dprintk(XENLOG_DEBUG, "power_base_addr: %016llx size: %016llx\n",
> +    dprintk(XENLOG_DEBUG, "power_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
>              *power_base_addr, *size);
>  
>      return 0;
> @@ -223,8 +223,8 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
>  
>  static int exynos5_cpu_up(int cpu)
>  {
> -    u64 power_base_addr;
> -    u64 size;
> +    paddr_t power_base_addr;
> +    paddr_t size;
>      void __iomem *power;
>      int rc;
>  
> @@ -256,8 +256,8 @@ static int exynos5_cpu_up(int cpu)
>  
>  static void exynos5_reset(void)
>  {
> -    u64 power_base_addr;
> -    u64 size;
> +    paddr_t power_base_addr;
> +    paddr_t size;
>      void __iomem *pmu;
>      int rc;
>  
> diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
> index 43aaf02e18..32cc8c78b5 100644
> --- a/xen/drivers/char/exynos4210-uart.c
> +++ b/xen/drivers/char/exynos4210-uart.c
> @@ -303,7 +303,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
>      const char *config = data;
>      struct exynos4210_uart *uart;
>      int res;
> -    u64 addr, size;
> +    paddr_t addr, size;
>  
>      if ( strcmp(config, "") )
>          printk("WARNING: UART configuration is not supported\n");
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 58d0ccd889..8ef895a2bb 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -42,8 +42,8 @@
>  
>  static struct ns16550 {
>      int baud, clock_hz, data_bits, parity, stop_bits, fifo_size, irq;
> -    u64 io_base;   /* I/O port or memory-mapped I/O address. */
> -    u64 io_size;
> +    paddr_t io_base;   /* I/O port or memory-mapped I/O address. */
> +    paddr_t io_size;
>      int reg_shift; /* Bits to shift register offset by */
>      int reg_width; /* Size of access to use, the registers
>                      * themselves are still bytes */
> @@ -1166,7 +1166,7 @@ static const struct ns16550_config __initconst uart_config[] =
>  static int __init
>  pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>  {
> -    u64 orig_base = uart->io_base;
> +    paddr_t orig_base = uart->io_base;
>      unsigned int b, d, f, nextf, i;
>  
>      /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on bus 0. */
> @@ -1259,7 +1259,7 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>                      else
>                          size = len & PCI_BASE_ADDRESS_MEM_MASK;
>  
> -                    uart->io_base = ((u64)bar_64 << 32) |
> +                    uart->io_base = (paddr_t) ((u64)bar_64 << 32) |
>                                      (bar & PCI_BASE_ADDRESS_MEM_MASK);
>                  }
>                  /* IO based */
> diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
> index d6a5d59aa2..3b53e1909a 100644
> --- a/xen/drivers/char/omap-uart.c
> +++ b/xen/drivers/char/omap-uart.c
> @@ -324,7 +324,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
>      struct omap_uart *uart;
>      u32 clkspec;
>      int res;
> -    u64 addr, size;
> +    paddr_t addr, size;
>  
>      if ( strcmp(config, "") )
>          printk("WARNING: UART configuration is not supported\n");
> diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
> index be67242bc0..256ec11e3f 100644
> --- a/xen/drivers/char/pl011.c
> +++ b/xen/drivers/char/pl011.c
> @@ -222,7 +222,7 @@ static struct uart_driver __read_mostly pl011_driver = {
>      .vuart_info   = pl011_vuart,
>  };
>  
> -static int __init pl011_uart_init(int irq, u64 addr, u64 size, bool sbsa)
> +static int __init pl011_uart_init(int irq, paddr_t addr, paddr_t size, bool sbsa)
>  {
>      struct pl011 *uart;
>  
> @@ -258,7 +258,7 @@ static int __init pl011_dt_uart_init(struct dt_device_node *dev,
>  {
>      const char *config = data;
>      int res;
> -    u64 addr, size;
> +    paddr_t addr, size;
>  
>      if ( strcmp(config, "") )
>      {
> diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
> index 2fccafe340..b425881d06 100644
> --- a/xen/drivers/char/scif-uart.c
> +++ b/xen/drivers/char/scif-uart.c
> @@ -311,7 +311,7 @@ static int __init scif_uart_init(struct dt_device_node *dev,
>      const char *config = data;
>      struct scif_uart *uart;
>      int res;
> -    u64 addr, size;
> +    paddr_t addr, size;
>  
>      if ( strcmp(config, "") )
>          printk("WARNING: UART configuration is not supported\n");
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 0a514821b3..490d253d44 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -73,8 +73,8 @@
>  /* Xen: Helpers to get device MMIO and IRQs */
>  struct resource
>  {
> -	u64 addr;
> -	u64 size;
> +	paddr_t addr;
> +	paddr_t size;
>  	unsigned int type;
>  };
>  
> @@ -169,7 +169,7 @@ static void __iomem *devm_ioremap_resource(struct device *dev,
>  	ptr = ioremap_nocache(res->addr, res->size);
>  	if (!ptr) {
>  		dev_err(dev,
> -			"ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
> +			"ioremap failed (addr 0x%"PRIpaddr" size 0x%"PRIpaddr")\n",
>  			res->addr, res->size);
>  		return ERR_PTR(-ENOMEM);
>  	}
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:34:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:34:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481372.746208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeQA-0001kQ-Mv; Thu, 19 Jan 2023 23:34:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481372.746208; Thu, 19 Jan 2023 23:34:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeQA-0001kJ-Jj; Thu, 19 Jan 2023 23:34:14 +0000
Received: by outflank-mailman (input) for mailman id 481372;
 Thu, 19 Jan 2023 23:34:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIeQ8-0001kD-KL
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 23:34:12 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6068809-9851-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 00:34:10 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9F48860CF5;
 Thu, 19 Jan 2023 23:34:09 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1E45AC433EF;
 Thu, 19 Jan 2023 23:34:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6068809-9851-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674171249;
	bh=sUce5xIommk2zFQgvsmiC/KNXZsVwnMX7lidC34HUJg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=l7wnAVSuZWCQRBBhmI2Iwj3UL3oEWeNdSPeRa3JNN5hJThR+4Kkqe5kbskpse3P61
	 5BYq+RFkPkpvM1DpVv66l4xfGJQkW+3skl4CQGQCVrUqmCvwwfnV/bjo+xDfH10Hau
	 PtrZQz9Bjxjc93l0yYQPG4AOi5iSfPoWSUMQdN3SLKUj/JUgGmZ5xvHbs9qANzOjPj
	 AJZY0Hc7wcGQl0PBs+fJW7uzBKDSZ/82NNp6m0tpX3CPN14a2yazwNnqkAhKSabEGr
	 lDMEFBQDWqC+aEeladwCFrmLbbNIhU16+bnTkPkIR+aNaWVha7QhsPreCFg3CNjYA/
	 YpXiNdmh1xomA==
Date: Thu, 19 Jan 2023 15:34:06 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, julien@xen.org, 
    Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
Subject: Re: [XEN v2 04/11] xen/arm: Typecast the DT values into paddr_t
In-Reply-To: <alpine.DEB.2.22.394.2301191514240.731018@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2301191532370.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-5-ayan.kumar.halder@amd.com> <alpine.DEB.2.22.394.2301191514240.731018@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Jan 2023, Stefano Stabellini wrote:
> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> > In the future, we wish to support 32-bit physical addresses.
> > However, one can only read u64 values from the DT. Thus, we need to
> > typecast the values appropriately from u64 to paddr_t.
> > 
> > device_tree_get_reg() should now be able to return paddr_t. This is
> > invoked by various callers to get DT address and size.
> > Similarly, dt_read_number() is invoked as well to get DT address and
> > size. The return value is typecasted to paddr_t.
> > fdt_get_mem_rsv() can only accept u64 values. So, we provide a wrapper
> > for this called fdt_get_mem_rsv_paddr() which will do the necessary
> > typecasting before invoking fdt_get_mem_rsv() and while returning.
> > 
> > Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> 
> I know we discussed this before and you implemented exactly what we
> suggested, but now looking at this patch I think we should do the
> following:
> 
> - also add a wrapper for dt_read_number, something like
>   dt_read_number_paddr that returns paddr_t
> - add a check for the top 32 bits being zero in all the wrappers
>   (dt_read_number_paddr, device_tree_get_reg, fdt_get_mem_rsv_paddr)
>   when paddr_t != uint64_t. In case the top 32 bits are non-zero I think
>   we should print an error
> 
> Julien, I remember you were concerned about BUG_ON/panic/ASSERT and I
> agree with you there (especially considering Vikram's device tree
> overlay series). So here I am only suggesting to check truncation and
> printk a message, not panic. Would you be OK with that?
> 
> Last comment, maybe we could add fdt_get_mem_rsv_paddr to setup.h
> instead of introducing xen/arch/arm/include/asm/device_tree.h, given
> that we already have device tree definitions there
> (device_tree_get_reg). I am OK either way.
 
Actually I noticed you also add dt_device_get_paddr to
xen/arch/arm/include/asm/device_tree.h. So it sounds like it is a good
idea to introduce xen/arch/arm/include/asm/device_tree.h, and we could
also move the declarations of device_tree_get_reg and
device_tree_get_u32 there.


 
> > ---
> > 
> > Changes from
> > 
> > v1 - 1. Dropped "[XEN v1 2/9] xen/arm: Define translate_dt_address_size() for the translation between u64 and paddr_t" and
> > "[XEN v1 4/9] xen/arm: Use translate_dt_address_size() to translate between device tree addr/size and paddr_t", instead
> > this approach achieves the same purpose.
> > 
> > 2. No need to check for truncation while converting values from u64 to paddr_t.
> > 
> >  xen/arch/arm/bootfdt.c                 | 23 +++++++++------
> >  xen/arch/arm/domain_build.c            |  2 +-
> >  xen/arch/arm/include/asm/device_tree.h | 40 ++++++++++++++++++++++++++
> >  xen/arch/arm/include/asm/setup.h       |  2 +-
> >  xen/arch/arm/setup.c                   | 14 ++++-----
> >  xen/arch/arm/smpboot.c                 |  2 +-
> >  6 files changed, 65 insertions(+), 18 deletions(-)
> >  create mode 100644 xen/arch/arm/include/asm/device_tree.h
> > 
> > diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> > index 0085c28d74..f536a3f3ab 100644
> > --- a/xen/arch/arm/bootfdt.c
> > +++ b/xen/arch/arm/bootfdt.c
> > @@ -11,9 +11,9 @@
> >  #include <xen/efi.h>
> >  #include <xen/device_tree.h>
> >  #include <xen/lib.h>
> > -#include <xen/libfdt/libfdt.h>
> >  #include <xen/sort.h>
> >  #include <xsm/xsm.h>
> > +#include <asm/device_tree.h>
> >  #include <asm/setup.h>
> >  
> >  static bool __init device_tree_node_matches(const void *fdt, int node,
> > @@ -53,10 +53,15 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
> >  }
> >  
> >  void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
> > -                                u32 size_cells, u64 *start, u64 *size)
> > +                                u32 size_cells, paddr_t *start, paddr_t *size)
> >  {
> > -    *start = dt_next_cell(address_cells, cell);
> > -    *size = dt_next_cell(size_cells, cell);
> > +    /*
> > +     * dt_next_cell will return u64 whereas paddr_t may be u64 or u32. Thus, one
> > +     * needs to cast the u64 value to paddr_t. Note that we do not check for
> > +     * truncation as it is the user's responsibility to provide the correct values in the DT.
> > +     */
> > +    *start = (paddr_t) dt_next_cell(address_cells, cell);
> > +    *size = (paddr_t) dt_next_cell(size_cells, cell);
> >  }
> >  
> >  static int __init device_tree_get_meminfo(const void *fdt, int node,
> > @@ -326,7 +331,7 @@ static int __init process_chosen_node(const void *fdt, int node,
> >          printk("linux,initrd-start property has invalid length %d\n", len);
> >          return -EINVAL;
> >      }
> > -    start = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
> > +    start = (paddr_t) dt_read_number((void *)&prop->data, dt_size_to_cells(len));
> >  
> >      prop = fdt_get_property(fdt, node, "linux,initrd-end", &len);
> >      if ( !prop )
> > @@ -339,7 +344,7 @@ static int __init process_chosen_node(const void *fdt, int node,
> >          printk("linux,initrd-end property has invalid length %d\n", len);
> >          return -EINVAL;
> >      }
> > -    end = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
> > +    end = (paddr_t) dt_read_number((void *)&prop->data, dt_size_to_cells(len));
> >  
> >      if ( start >= end )
> >      {
> > @@ -594,9 +599,11 @@ static void __init early_print_info(void)
> >      for ( i = 0; i < nr_rsvd; i++ )
> >      {
> >          paddr_t s, e;
> > -        if ( fdt_get_mem_rsv(device_tree_flattened, i, &s, &e) < 0 )
> > +
> > +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &s, &e) < 0 )
> >              continue;
> > -        /* fdt_get_mem_rsv returns length */
> > +
> > +        /* fdt_get_mem_rsv_paddr returns length */
> >          e += s;
> >          printk(" RESVD[%u]: %"PRIpaddr" - %"PRIpaddr"\n", i, s, e);
> >      }
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index f904f12408..72b9afbb4c 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -949,7 +949,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
> >          BUG_ON(!prop);
> >          cells = (const __be32 *)prop->value;
> >          device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
> > -        psize = dt_read_number(cells, size_cells);
> > +        psize = (paddr_t) dt_read_number(cells, size_cells);
> >          if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
> >          {
> >              printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
> > diff --git a/xen/arch/arm/include/asm/device_tree.h b/xen/arch/arm/include/asm/device_tree.h
> > new file mode 100644
> > index 0000000000..51e0f0ae20
> > --- /dev/null
> > +++ b/xen/arch/arm/include/asm/device_tree.h
> > @@ -0,0 +1,40 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * xen/arch/arm/include/asm/device_tree.h
> > + * 
> > + * Wrapper functions for device tree. This helps to convert dt values
> > + * between u64 and paddr_t.
> > + *
> > + * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
> > + */
> > +
> > +#ifndef __ARCH_ARM_DEVICE_TREE__
> > +#define __ARCH_ARM_DEVICE_TREE__
> > +
> > +#include <xen/libfdt/libfdt.h>
> > +
> > +static inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
> > +                                        paddr_t *address,
> > +                                        paddr_t *size)
> > +{
> > +    uint64_t dt_addr;
> > +    uint64_t dt_size;
> > +    int ret = 0;
> > +
> > +    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
> > +
> > +    *address = (paddr_t) dt_addr;
> > +    *size = (paddr_t) dt_size;
> > +
> > +    return ret;
> > +}
> > +
> > +#endif /* __ARCH_ARM_DEVICE_TREE__ */
> > +/*
> > + * Local variables:
> > + * mode: C
> > + * c-file-style: "BSD"
> > + * c-basic-offset: 4
> > + * indent-tabs-mode: nil
> > + * End:
> > + */
> > diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> > index a926f30a2b..6105e5cae3 100644
> > --- a/xen/arch/arm/include/asm/setup.h
> > +++ b/xen/arch/arm/include/asm/setup.h
> > @@ -158,7 +158,7 @@ extern uint32_t hyp_traps_vector[];
> >  void init_traps(void);
> >  
> >  void device_tree_get_reg(const __be32 **cell, u32 address_cells,
> > -                         u32 size_cells, u64 *start, u64 *size);
> > +                         u32 size_cells, paddr_t *start, paddr_t *size);
> >  
> >  u32 device_tree_get_u32(const void *fdt, int node,
> >                          const char *prop_name, u32 dflt);
> > diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> > index 1f26f67b90..da13439e62 100644
> > --- a/xen/arch/arm/setup.c
> > +++ b/xen/arch/arm/setup.c
> > @@ -29,7 +29,6 @@
> >  #include <xen/virtual_region.h>
> >  #include <xen/vmap.h>
> >  #include <xen/trace.h>
> > -#include <xen/libfdt/libfdt.h>
> >  #include <xen/acpi.h>
> >  #include <xen/warning.h>
> >  #include <asm/alternative.h>
> > @@ -39,6 +38,7 @@
> >  #include <asm/gic.h>
> >  #include <asm/cpuerrata.h>
> >  #include <asm/cpufeature.h>
> > +#include <asm/device_tree.h>
> >  #include <asm/platform.h>
> >  #include <asm/procinfo.h>
> >  #include <asm/setup.h>
> > @@ -222,11 +222,11 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
> >      {
> >          paddr_t r_s, r_e;
> >  
> > -        if ( fdt_get_mem_rsv(device_tree_flattened, i, &r_s, &r_e ) < 0 )
> > +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &r_s, &r_e ) < 0 )
> >              /* If we can't read it, pretend it doesn't exist... */
> >              continue;
> >  
> > -        r_e += r_s; /* fdt_get_mem_rsv returns length */
> > +        r_e += r_s; /* fdt_get_mem_rsv_paddr returns length */
> >  
> >          if ( s < r_e && r_s < e )
> >          {
> > @@ -502,13 +502,13 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
> >      {
> >          paddr_t mod_s, mod_e;
> >  
> > -        if ( fdt_get_mem_rsv(device_tree_flattened,
> > -                             i - mi->nr_mods,
> > -                             &mod_s, &mod_e ) < 0 )
> > +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
> > +                                   i - mi->nr_mods,
> > +                                   &mod_s, &mod_e ) < 0 )
> >              /* If we can't read it, pretend it doesn't exist... */
> >              continue;
> >  
> > -        /* fdt_get_mem_rsv returns length */
> > +        /* fdt_get_mem_rsv_paddr returns length */
> >          mod_e += mod_s;
> >  
> >          if ( s < mod_e && mod_s < e )
> > diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> > index 412ae22869..ee59b1d379 100644
> > --- a/xen/arch/arm/smpboot.c
> > +++ b/xen/arch/arm/smpboot.c
> > @@ -159,7 +159,7 @@ static void __init dt_smp_init_cpus(void)
> >              continue;
> >          }
> >  
> > -        addr = dt_read_number(prop, dt_n_addr_cells(cpu));
> > +        addr = (paddr_t) dt_read_number(prop, dt_n_addr_cells(cpu));
> >  
> >          hwid = addr;
> >          if ( hwid != addr )
> > -- 
> > 2.17.1
> > 
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:36:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:36:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481379.746217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeRp-0002MZ-4D; Thu, 19 Jan 2023 23:35:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481379.746217; Thu, 19 Jan 2023 23:35:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeRp-0002MS-1f; Thu, 19 Jan 2023 23:35:57 +0000
Received: by outflank-mailman (input) for mailman id 481379;
 Thu, 19 Jan 2023 23:35:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIeRn-0002MG-IG
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 23:35:55 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 03553717-9852-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 00:35:53 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 80C4B61D85;
 Thu, 19 Jan 2023 23:35:52 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 93EDAC433D2;
 Thu, 19 Jan 2023 23:35:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03553717-9852-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674171351;
	bh=TsNVolKRuy8Hi9OxldYWDYLnc45k78L+TGGkkq5iH/E=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=V7H3foDGph1LpvXPPd5SNbEq75qgrpgbEOQEecsa2SRmS7Nf58gPU3KvdiXPx0Pw5
	 5uJdvUleYM/n8xqsgDcgjad+fA6/GJKrzhRXHi3+O0ZFeAO9KyTW06F+5fKzVb8lAK
	 VqvINCt2o6s9JXf2HnJ1cZe3tjmHb9BNuRjgYtdzMQu/4pDbLzxeF0KEK80E78ffjy
	 3xe1WZ5cmjKXVEdp6pWFG40xN/ulvkqLKe0NmIvJzufaZ+UPO2P2VrBBtKBgq6R+dr
	 zhAo2yGpSCRbr0kpB3CyomliKdE8w3bEXieBsq6Pz3mPiLWDfR4DtgbI6CYQJq1idD
	 ERwLcs0m+oN3w==
Date: Thu, 19 Jan 2023 15:35:48 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 06/11] xen/arm: Introduce a wrapper for dt_device_get_address()
 to handle paddr_t
In-Reply-To: <20230117174358.15344-7-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191531160.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-7-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> dt_device_get_address() can accept only u64 for address and size. The
> various callers use the 'paddr_t' datatype for address and size.
> 'paddr_t' is currently defined as u64, but we may support u32 as well.
> Thus, we need an appropriate wrapper which can handle this type
> conversion.
> 
> The callers will now invoke dt_device_get_paddr(). This in turn invokes
> dt_device_get_address() with u64 address/size. And then it typecasts
> the u64 address/size to paddr_t address/size.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from -
> 
> v1 - 1. New patch introduced.
> 
>  xen/arch/arm/domain_build.c            |  5 +++--
>  xen/arch/arm/gic-v2.c                  | 11 ++++++-----
>  xen/arch/arm/gic-v3.c                  |  9 +++++----
>  xen/arch/arm/include/asm/device_tree.h | 19 +++++++++++++++++++
>  xen/arch/arm/platforms/exynos5.c       |  7 ++++---
>  xen/arch/arm/platforms/sunxi.c         |  3 ++-
>  xen/drivers/char/exynos4210-uart.c     |  3 ++-
>  xen/drivers/char/ns16550.c             |  3 ++-
>  xen/drivers/char/omap-uart.c           |  3 ++-
>  xen/drivers/char/pl011.c               |  3 ++-
>  xen/drivers/char/scif-uart.c           |  3 ++-
>  xen/drivers/passthrough/arm/smmu.c     |  3 ++-
>  12 files changed, 51 insertions(+), 21 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index cf8ae37a14..21199b624b 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -7,6 +7,7 @@
>  #include <xen/domain_page.h>
>  #include <xen/sched.h>
>  #include <xen/sizes.h>
> +#include <asm/device_tree.h>
>  #include <asm/irq.h>
>  #include <asm/regs.h>
>  #include <xen/errno.h>
> @@ -1672,7 +1673,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
>  
>          for ( i = 0; i < naddr; i++ )
>          {
> -            res = dt_device_get_address(np, i, &addr, &size);
> +            res = dt_device_get_paddr(np, i, &addr, &size);
>              if ( res )
>              {
>                  printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> @@ -2500,7 +2501,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
>      /* Give permission and map MMIOs */
>      for ( i = 0; i < naddr; i++ )
>      {
> -        res = dt_device_get_address(dev, i, &addr, &size);
> +        res = dt_device_get_paddr(dev, i, &addr, &size);
>          if ( res )
>          {
>              printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 5d4d298b86..5230c4ebaf 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -24,6 +24,7 @@
>  #include <xen/acpi.h>
>  #include <acpi/actables.h>
>  #include <asm/p2m.h>
> +#include <asm/device_tree.h>
>  #include <asm/domain.h>
>  #include <asm/platform.h>
>  #include <asm/device.h>
> @@ -993,7 +994,7 @@ static void gicv2_extension_dt_init(const struct dt_device_node *node)
>              continue;
>  
>          /* Get register frame resource from DT. */
> -        if ( dt_device_get_address(v2m, 0, &addr, &size) )
> +        if ( dt_device_get_paddr(v2m, 0, &addr, &size) )
>              panic("GICv2: Cannot find a valid v2m frame address\n");
>  
>          /*
> @@ -1018,19 +1019,19 @@ static void __init gicv2_dt_init(void)
>      paddr_t vsize;
>      const struct dt_device_node *node = gicv2_info.node;
>  
> -    res = dt_device_get_address(node, 0, &dbase, NULL);
> +    res = dt_device_get_paddr(node, 0, &dbase, NULL);
>      if ( res )
>          panic("GICv2: Cannot find a valid address for the distributor\n");
>  
> -    res = dt_device_get_address(node, 1, &cbase, &csize);
> +    res = dt_device_get_paddr(node, 1, &cbase, &csize);
>      if ( res )
>          panic("GICv2: Cannot find a valid address for the CPU\n");
>  
> -    res = dt_device_get_address(node, 2, &hbase, NULL);
> +    res = dt_device_get_paddr(node, 2, &hbase, NULL);
>      if ( res )
>          panic("GICv2: Cannot find a valid address for the hypervisor\n");
>  
> -    res = dt_device_get_address(node, 3, &vbase, &vsize);
> +    res = dt_device_get_paddr(node, 3, &vbase, &vsize);
>      if ( res )
>          panic("GICv2: Cannot find a valid address for the virtual CPU\n");
>  
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index 391dfa53d7..58d2eb0690 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -29,6 +29,7 @@
>  
>  #include <asm/cpufeature.h>
>  #include <asm/device.h>
> +#include <asm/device_tree.h>
>  #include <asm/gic.h>
>  #include <asm/gic_v3_defs.h>
>  #include <asm/gic_v3_its.h>
> @@ -1377,7 +1378,7 @@ static void __init gicv3_dt_init(void)
>      int res, i;
>      const struct dt_device_node *node = gicv3_info.node;
>  
> -    res = dt_device_get_address(node, 0, &dbase, NULL);
> +    res = dt_device_get_paddr(node, 0, &dbase, NULL);
>      if ( res )
>          panic("GICv3: Cannot find a valid distributor address\n");
>  
> @@ -1395,7 +1396,7 @@ static void __init gicv3_dt_init(void)
>      {
>          paddr_t rdist_base, rdist_size;
>  
> -        res = dt_device_get_address(node, 1 + i, &rdist_base, &rdist_size);
> +        res = dt_device_get_paddr(node, 1 + i, &rdist_base, &rdist_size);
>          if ( res )
>              panic("GICv3: No rdist base found for region %d\n", i);
>  
> @@ -1417,10 +1418,10 @@ static void __init gicv3_dt_init(void)
>       * For GICv3 supporting GICv2, GICC and GICV base address will be
>       * provided.
>       */
> -    res = dt_device_get_address(node, 1 + gicv3.rdist_count,
> +    res = dt_device_get_paddr(node, 1 + gicv3.rdist_count,
>                                  &cbase, &csize);
>      if ( !res )
> -        dt_device_get_address(node, 1 + gicv3.rdist_count + 2,
> +        dt_device_get_paddr(node, 1 + gicv3.rdist_count + 2,
>                                &vbase, &vsize);
>  }
>  
> diff --git a/xen/arch/arm/include/asm/device_tree.h b/xen/arch/arm/include/asm/device_tree.h
> index 51e0f0ae20..7f58f1f278 100644
> --- a/xen/arch/arm/include/asm/device_tree.h
> +++ b/xen/arch/arm/include/asm/device_tree.h
> @@ -11,6 +11,7 @@
>  #ifndef __ARCH_ARM_DEVICE_TREE__
>  #define __ARCH_ARM_DEVICE_TREE__
>  
> +#include <xen/device_tree.h>
>  #include <xen/libfdt/libfdt.h>
>  
>  inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
> @@ -29,6 +30,24 @@ inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
>      return ret;
>  }
>  
> +inline int dt_device_get_paddr(const struct dt_device_node *dev,
> +                               unsigned int index, paddr_t *addr,
> +                               paddr_t *size)
> +{
> +    u64 dt_addr, dt_size;
> +    int ret;
> +
> +    ret = dt_device_get_address(dev, index, &dt_addr, &dt_size);
> +
> +    if ( addr )
> +        *addr = dt_addr;
> +
> +    if ( size )
> +        *size = dt_size;

I think we should check for truncation (top 32-bit != 0) and return an
error in that case.
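[A minimal, self-contained C sketch of the check being suggested here, outside the Xen tree. The helper name `checked_narrow` and the error value are illustrative only, not the actual wrapper:]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: models a build where paddr_t is 32-bit. The
 * suggestion above is that the wrapper reject 64-bit DT values whose
 * top 32 bits are non-zero, since they cannot be represented. */
typedef uint32_t paddr_t;

static int checked_narrow(uint64_t val, paddr_t *out)
{
    if (val >> 32)          /* truncation would occur */
        return -34;         /* an -ERANGE-style error */
    *out = (paddr_t)val;
    return 0;
}
```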


> +    return ret;
> +}
> +
>  #endif /* __ARCH_ARM_DEVICE_TREE__ */
>  /*
>   * Local variables:
> diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
> index f79fad9957..55b6ac1e7e 100644
> --- a/xen/arch/arm/platforms/exynos5.c
> +++ b/xen/arch/arm/platforms/exynos5.c
> @@ -22,6 +22,7 @@
>  #include <xen/mm.h>
>  #include <xen/vmap.h>
>  #include <xen/delay.h>
> +#include <asm/device_tree.h>
>  #include <asm/platforms/exynos5.h>
>  #include <asm/platform.h>
>  #include <asm/io.h>
> @@ -52,7 +53,7 @@ static int exynos5_init_time(void)
>          return -ENXIO;
>      }
>  
> -    rc = dt_device_get_address(node, 0, &mct_base_addr, &size);
> +    rc = dt_device_get_paddr(node, 0, &mct_base_addr, &size);
>      if ( rc )
>      {
>          dprintk(XENLOG_ERR, "Error in \"samsung,exynos4210-mct\"\n");
> @@ -125,7 +126,7 @@ static int __init exynos5_smp_init(void)
>          return -ENXIO;
>      }
>  
> -    rc = dt_device_get_address(node, 0, &sysram_addr, &size);
> +    rc = dt_device_get_paddr(node, 0, &sysram_addr, &size);
>      if ( rc )
>      {
>          dprintk(XENLOG_ERR, "Error in %s\n", compatible);
> @@ -208,7 +209,7 @@ static int exynos5_get_pmu_baseandsize(paddr_t *power_base_addr, paddr_t *size)
>          return -ENXIO;
>      }
>  
> -    rc = dt_device_get_address(node, 0, power_base_addr, size);
> +    rc = dt_device_get_paddr(node, 0, power_base_addr, size);
>      if ( rc )
>      {
>          dprintk(XENLOG_ERR, "Error in \"samsung,exynos5XXX-pmu\"\n");
> diff --git a/xen/arch/arm/platforms/sunxi.c b/xen/arch/arm/platforms/sunxi.c
> index e8e4d88bef..ce47f97507 100644
> --- a/xen/arch/arm/platforms/sunxi.c
> +++ b/xen/arch/arm/platforms/sunxi.c
> @@ -18,6 +18,7 @@
>  
>  #include <xen/mm.h>
>  #include <xen/vmap.h>
> +#include <asm/device_tree.h>
>  #include <asm/platform.h>
>  #include <asm/io.h>
>  
> @@ -50,7 +51,7 @@ static void __iomem *sunxi_map_watchdog(bool *new_wdt)
>          return NULL;
>      }
>  
> -    ret = dt_device_get_address(node, 0, &wdt_start, &wdt_len);
> +    ret = dt_device_get_paddr(node, 0, &wdt_start, &wdt_len);
>      if ( ret )
>      {
>          dprintk(XENLOG_ERR, "Cannot read watchdog register address\n");
> diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
> index 32cc8c78b5..6d2008c44f 100644
> --- a/xen/drivers/char/exynos4210-uart.c
> +++ b/xen/drivers/char/exynos4210-uart.c
> @@ -24,6 +24,7 @@
>  #include <xen/irq.h>
>  #include <xen/mm.h>
>  #include <asm/device.h>
> +#include <asm/device_tree.h>
>  #include <asm/exynos4210-uart.h>
>  #include <asm/io.h>
>  
> @@ -316,7 +317,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
>      uart->parity    = PARITY_NONE;
>      uart->stop_bits = 1;
>  
> -    res = dt_device_get_address(dev, 0, &addr, &size);
> +    res = dt_device_get_paddr(dev, 0, &addr, &size);
>      if ( res )
>      {
>          printk("exynos4210: Unable to retrieve the base"
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 8ef895a2bb..7226f3c2f7 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -35,6 +35,7 @@
>  #include <asm/io.h>
>  #ifdef CONFIG_HAS_DEVICE_TREE
>  #include <asm/device.h>
> +#include <asm/device_tree.h>
>  #endif
>  #ifdef CONFIG_X86
>  #include <asm/fixmap.h>
> @@ -1757,7 +1758,7 @@ static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
>      uart->parity    = UART_PARITY_NONE;
>      uart->stop_bits = 1;
>  
> -    res = dt_device_get_address(dev, 0, &uart->io_base, &uart->io_size);
> +    res = dt_device_get_paddr(dev, 0, &uart->io_base, &uart->io_size);
>      if ( res )
>          return res;
>  
> diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
> index 3b53e1909a..06200bc9f1 100644
> --- a/xen/drivers/char/omap-uart.c
> +++ b/xen/drivers/char/omap-uart.c
> @@ -15,6 +15,7 @@
>  #include <xen/init.h>
>  #include <xen/irq.h>
>  #include <xen/device_tree.h>
> +#include <asm/device_tree.h>
>  #include <asm/device.h>
>  #include <xen/errno.h>
>  #include <xen/mm.h>
> @@ -344,7 +345,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
>      uart->parity = UART_PARITY_NONE;
>      uart->stop_bits = 1;
>  
> -    res = dt_device_get_address(dev, 0, &addr, &size);
> +    res = dt_device_get_paddr(dev, 0, &addr, &size);
>      if ( res )
>      {
>          printk("omap-uart: Unable to retrieve the base"
> diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
> index 256ec11e3f..b4c1d9d592 100644
> --- a/xen/drivers/char/pl011.c
> +++ b/xen/drivers/char/pl011.c
> @@ -26,6 +26,7 @@
>  #include <asm/device.h>
>  #include <xen/mm.h>
>  #include <xen/vmap.h>
> +#include <asm/device_tree.h>
>  #include <asm/pl011-uart.h>
>  #include <asm/io.h>
>  
> @@ -265,7 +266,7 @@ static int __init pl011_dt_uart_init(struct dt_device_node *dev,
>          printk("WARNING: UART configuration is not supported\n");
>      }
>  
> -    res = dt_device_get_address(dev, 0, &addr, &size);
> +    res = dt_device_get_paddr(dev, 0, &addr, &size);
>      if ( res )
>      {
>          printk("pl011: Unable to retrieve the base"
> diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
> index b425881d06..af14388f70 100644
> --- a/xen/drivers/char/scif-uart.c
> +++ b/xen/drivers/char/scif-uart.c
> @@ -26,6 +26,7 @@
>  #include <xen/mm.h>
>  #include <xen/delay.h>
>  #include <asm/device.h>
> +#include <asm/device_tree.h>
>  #include <asm/scif-uart.h>
>  #include <asm/io.h>
>  
> @@ -318,7 +319,7 @@ static int __init scif_uart_init(struct dt_device_node *dev,
>  
>      uart = &scif_com;
>  
> -    res = dt_device_get_address(dev, 0, &addr, &size);
> +    res = dt_device_get_paddr(dev, 0, &addr, &size);
>      if ( res )
>      {
>          printk("scif-uart: Unable to retrieve the base"
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 490d253d44..0c89cb644e 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -51,6 +51,7 @@
>  #include <xen/sizes.h>
>  #include <asm/atomic.h>
>  #include <asm/device.h>
> +#include <asm/device_tree.h>
>  #include <asm/io.h>
>  #include <asm/iommu_fwspec.h>
>  #include <asm/platform.h>
> @@ -101,7 +102,7 @@ static struct resource *platform_get_resource(struct platform_device *pdev,
>  
>  	switch (type) {
>  	case IORESOURCE_MEM:
> -		ret = dt_device_get_address(pdev, num, &res.addr, &res.size);
> +		ret = dt_device_get_paddr(pdev, num, &res.addr, &res.size);
>  
>  		return ((ret) ? NULL : &res);
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:40:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481386.746228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeW6-0003n9-Kw; Thu, 19 Jan 2023 23:40:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481386.746228; Thu, 19 Jan 2023 23:40:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeW6-0003n2-IF; Thu, 19 Jan 2023 23:40:22 +0000
Received: by outflank-mailman (input) for mailman id 481386;
 Thu, 19 Jan 2023 23:40:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIeW5-0003mv-Fl
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 23:40:21 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a28b52e1-9852-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 00:40:20 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id A15EEB82789;
 Thu, 19 Jan 2023 23:40:19 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 61929C433EF;
 Thu, 19 Jan 2023 23:40:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a28b52e1-9852-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674171618;
	bh=d7PWW7qbqJDMQ/ZbDLYBIv/kTvdW978lQBEVNZlTz7Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sO3MhAWAUYvwE+Y2KsXWZX4g+3CNexE+yNB/8OfklVKa4h+A0GWsrGLCSP1so1WFi
	 ws8KbESdIugI8pQvdGfi7JALsDDtd5HLk3F8DRPF/QR1Xuv9tsPyQQGE0s2QBolbKH
	 AFA8R2QtD7wulKXkqBecPK4y5NzR6l0cV1bWim+Fgv+/YcQjAtaWTitnWkT2P0JhLT
	 NjgiP+y3s8mqJuWUCdEqAZD7tv6DFgdQ/2U3Cqlm5qvQWlarJUm6CCskjoKIe2d6+f
	 a+1Cc29sHbLgx8uNLJ2DJhMiV660L49Wi12b02qGYcIhU708fyba59POXb2W+cA28d
	 dKi3vJe9eJi4A==
Date: Thu, 19 Jan 2023 15:40:15 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 07/11] xen/arm: smmu: Use writeq_relaxed_non_atomic()
 for writing to SMMU_CBn_TTBR0
In-Reply-To: <20230117174358.15344-8-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191540030.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-8-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> Per ARM IHI 0062D.c ID070116 (SMMU 2.0 spec), section 17.3.9 (page
> 17-360), SMMU_CBn_TTBR0 is a 64-bit register. Thus, one can use
> writeq_relaxed_non_atomic() to write to it instead of invoking
> writel_relaxed() twice, once for each half of the register.
> 
> This also helps us as p2maddr is 'paddr_t' (which may be u32 in the
> future). Thus, one can assign p2maddr to a 64-bit variable and do the
> bit manipulations on it to generate the value for SMMU_CBn_TTBR0.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes from -
> 
> v1 - 1. Extracted the patch from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
> Use writeq_relaxed_non_atomic() to write u64 register in a non-atomic
> fashion.
> 
>  xen/drivers/passthrough/arm/smmu.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 0c89cb644e..84b6803b4e 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -500,8 +500,7 @@ enum arm_smmu_s2cr_privcfg {
>  #define ARM_SMMU_CB_SCTLR		0x0
>  #define ARM_SMMU_CB_RESUME		0x8
>  #define ARM_SMMU_CB_TTBCR2		0x10
> -#define ARM_SMMU_CB_TTBR0_LO		0x20
> -#define ARM_SMMU_CB_TTBR0_HI		0x24
> +#define ARM_SMMU_CB_TTBR0		0x20
>  #define ARM_SMMU_CB_TTBCR		0x30
>  #define ARM_SMMU_CB_S1_MAIR0		0x38
>  #define ARM_SMMU_CB_FSR			0x58
> @@ -1084,6 +1083,7 @@ static void arm_smmu_flush_pgtable(struct arm_smmu_device *smmu, void *addr,
>  static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>  {
>  	u32 reg;
> +	u64 reg64;
>  	bool stage1;
>  	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
>  	struct arm_smmu_device *smmu = smmu_domain->smmu;
> @@ -1178,12 +1178,13 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>  	dev_notice(smmu->dev, "d%u: p2maddr 0x%"PRIpaddr"\n",
>  		   smmu_domain->cfg.domain->domain_id, p2maddr);
>  
> -	reg = (p2maddr & ((1ULL << 32) - 1));
> -	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_LO);
> -	reg = (p2maddr >> 32);
> +	reg64 = p2maddr;
> +
>  	if (stage1)
> -		reg |= ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT;
> -	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_HI);
> +		reg64 |= (((uint64_t) (ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT))
> +		         << 32);
> +
> +	writeq_relaxed_non_atomic(reg64, cb_base + ARM_SMMU_CB_TTBR0);
>  
>  	/*
>  	 * TTBCR
> -- 
> 2.17.1
> 
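[For context, on a 32-bit CPU a non-atomic 64-bit MMIO write such as writeq_relaxed_non_atomic() can be composed of two 32-bit writes, low half first. A hedged sketch, with plain stores standing in for the MMIO accessors; this is not Xen's actual implementation:]

```c
#include <stdint.h>

static inline void write32(volatile void *addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;
}

/* Write the low word at offset 0 and the high word at offset 4,
 * mirroring the old TTBR0_LO/TTBR0_HI pair through one helper. */
static inline void writeq_non_atomic_sketch(uint64_t val,
                                            volatile void *addr)
{
    write32(addr, (uint32_t)val);
    write32((volatile uint8_t *)addr + 4, (uint32_t)(val >> 32));
}
```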


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:43:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481391.746238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeZ9-0004OA-2q; Thu, 19 Jan 2023 23:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481391.746238; Thu, 19 Jan 2023 23:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeZ9-0004O3-06; Thu, 19 Jan 2023 23:43:31 +0000
Received: by outflank-mailman (input) for mailman id 481391;
 Thu, 19 Jan 2023 23:43:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIeZ8-0004Nx-4T
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 23:43:30 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12a1f598-9853-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 00:43:28 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id CF55FB82789;
 Thu, 19 Jan 2023 23:43:27 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8D709C433F0;
 Thu, 19 Jan 2023 23:43:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12a1f598-9853-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674171806;
	bh=9VWpDFe3f8eAs3RtU1u9vTUoSekoLFXFThbQ4KuH3JY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=YYzxIqR8O4a1jLodlS7cuARNcIoAtzVy27mmdWHnOXCQ2DW0eRFMu5ioMcGXIB2BF
	 ueiKzkGSeXBgLIdMkFz03vUbADxmS66zmm7G3L52hYrl1pEvosIbv3FonHHCb6C5/6
	 oypXuaZRqH1Hiu70QjJ/RhMTvCrBl3UIs2Tug5jCmlAmnWfrsAiO/O/ldXq/wGCHqw
	 HgLiibUbXmqT9aBIkBLpFIVgPD1yKSik/DMXw/VFu0lKw80aWfhkA1/YP2TPFxxS6X
	 O5LmBktYnpycM/ZBipAT3IUavso0rToq3bU2Ak/+aeLiVoTp2xndM267Wce19c1/7x
	 38yrvymudSwkQ==
Date: Thu, 19 Jan 2023 15:43:24 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 08/11] xen/arm: guest_walk: LPAE specific bits should
 be enclosed within "ifndef CONFIG_ARM_PA_32"
In-Reply-To: <20230117174358.15344-9-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191541460.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-9-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> In the subsequent patch, we introduce "CONFIG_ARM_PA_32" to support
> 32-bit physical addresses. Thus, the code specific to the
> "Large Physical Address Extension" (ie LPAE) should be enclosed within
> "ifndef CONFIG_ARM_PA_32".
> 
> Refer to xen/arch/arm/include/asm/short-desc.h, "short_desc_l1_supersec_t":
> unsigned int extbase1:4;    /* Extended base address, PA[35:32] */
> unsigned int extbase2:4;    /* Extended base address, PA[39:36] */
> 
> Thus, extbase1 and extbase2 are not valid when 32 bit physical addresses
> are supported.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

This patch should be merged with patch 9: we shouldn't start to use a
new Kconfig symbol before it is defined.


> ---
> 
> Changes from -
> v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
> 
>  xen/arch/arm/guest_walk.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
> index 43d3215304..0feb7b76ec 100644
> --- a/xen/arch/arm/guest_walk.c
> +++ b/xen/arch/arm/guest_walk.c
> @@ -154,8 +154,10 @@ static bool guest_walk_sd(const struct vcpu *v,
>              mask = (1ULL << L1DESC_SUPERSECTION_SHIFT) - 1;
>              *ipa = gva & mask;
>              *ipa |= (paddr_t)(pte.supersec.base) << L1DESC_SUPERSECTION_SHIFT;
> +#ifndef CONFIG_ARM_PA_32
>              *ipa |= (paddr_t)(pte.supersec.extbase1) << L1DESC_SUPERSECTION_EXT_BASE1_SHIFT;
>              *ipa |= (paddr_t)(pte.supersec.extbase2) << L1DESC_SUPERSECTION_EXT_BASE2_SHIFT;
> +#endif /* CONFIG_ARM_PA_32 */
>          }
>  
>          /* Set permissions so that the caller can check the flags by herself. */
> -- 
> 2.17.1
> 
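[To make the guarded lines concrete, a self-contained sketch of how a short-descriptor supersection IPA is composed. Shift values follow the ARM short-descriptor format (supersection at 16MB granularity, extbase1 = PA[35:32], extbase2 = PA[39:36]); with a 32-bit paddr_t the extbase contributions cannot be represented, which is why they get compiled out:]

```c
#include <stdint.h>

#define L1DESC_SUPERSECTION_SHIFT           24
#define L1DESC_SUPERSECTION_EXT_BASE1_SHIFT 32
#define L1DESC_SUPERSECTION_EXT_BASE2_SHIFT 36

/* Compose the intermediate physical address of a 16MB supersection:
 * low bits come from the guest VA, PA[31:24] from the descriptor
 * base, and PA[39:32] from the two extended-base fields. */
static uint64_t supersec_ipa(uint32_t gva, uint32_t base,
                             uint32_t extbase1, uint32_t extbase2)
{
    uint64_t mask = (1ULL << L1DESC_SUPERSECTION_SHIFT) - 1;
    uint64_t ipa = gva & mask;

    ipa |= (uint64_t)base << L1DESC_SUPERSECTION_SHIFT;
    ipa |= (uint64_t)extbase1 << L1DESC_SUPERSECTION_EXT_BASE1_SHIFT;
    ipa |= (uint64_t)extbase2 << L1DESC_SUPERSECTION_EXT_BASE2_SHIFT;
    return ipa;
}
```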


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:48:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:48:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481398.746248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIee5-00050Y-NA; Thu, 19 Jan 2023 23:48:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481398.746248; Thu, 19 Jan 2023 23:48:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIee5-00050R-J9; Thu, 19 Jan 2023 23:48:37 +0000
Received: by outflank-mailman (input) for mailman id 481398;
 Thu, 19 Jan 2023 23:48:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PYO0=5Q=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIee4-00050L-TM
 for xen-devel@lists.xenproject.org; Thu, 19 Jan 2023 23:48:36 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c8bb61b4-9853-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 00:48:34 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2CF6861DAE;
 Thu, 19 Jan 2023 23:48:33 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4AEE2C433EF;
 Thu, 19 Jan 2023 23:48:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8bb61b4-9853-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674172112;
	bh=Y6cgie6a/AHyF2qixHdESjUV1kOnUWCgvxOZLc5QUlU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CcV1oZtAh9dKtmRLx71dszZTl1+s3DCPmWHKkDF+I4207mIfIrlc8yg0mfDucTPqf
	 kZobwDaWYIRVUbHw9d64BqoB9PluGrNxZBt9BA8tzanDxFbP2ERI9XJHdRoBkErtZ4
	 OGJ1c/1EBTvPo04oRf2ZcJeueYLU4NZxDItPNV56uDgztvU0WTX39DZni4xQ4F3XPJ
	 rVGenO29kdk7Hf/R/WtiNbn0V2CkIXh/ROKzVXWx6qz1TiYKNqboLpC6yMiVoAp2Zd
	 8lxv71Uq63+GzBSWBjP7dNjz6WHCoNDleFjrw/KAKPoscn1f9oY7AFMZZpRyvrH+ZH
	 MoVmhsMYay3ig==
Date: Thu, 19 Jan 2023 15:48:29 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Ayan Kumar Halder <ayankuma@amd.com>, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com, xen-devel@lists.xenproject.org, 
    Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Subject: Re: [XEN v2 09/11] xen/arm: Introduce ARM_PA_32 to support 32 bit
 physical address
In-Reply-To: <ea32fa45-5de3-d229-8c22-913ef0513bfa@suse.com>
Message-ID: <alpine.DEB.2.22.394.2301191544480.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-10-ayan.kumar.halder@amd.com> <49228d15-3f0d-eb89-6107-40ae9f0b9b92@suse.com> <376c32ed-2e9d-a81a-69a9-8ba6d54f603e@amd.com>
 <ea32fa45-5de3-d229-8c22-913ef0513bfa@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 18 Jan 2023, Jan Beulich wrote:
> On 18.01.2023 12:57, Ayan Kumar Halder wrote:
> > Hi Jan,
> > 
> > On 18/01/2023 08:50, Jan Beulich wrote:
> >> On 17.01.2023 18:43, Ayan Kumar Halder wrote:
> >>> --- a/xen/arch/arm/include/asm/types.h
> >>> +++ b/xen/arch/arm/include/asm/types.h
> >>> @@ -37,9 +37,16 @@ typedef signed long long s64;
> >>>   typedef unsigned long long u64;
> >>>   typedef u32 vaddr_t;
> >>>   #define PRIvaddr PRIx32
> >>> +#if defined(CONFIG_ARM_PA_32)
> >>> +typedef u32 paddr_t;
> >>> +#define INVALID_PADDR (~0UL)
> >>> +#define PADDR_SHIFT BITS_PER_LONG
> >>> +#define PRIpaddr PRIx32
> >>> +#else
> >> With our plan to consolidate basic type definitions into xen/types.h
> >> the use of ARM_PA_32 is problematic: Preferably we'd have an arch-
> >> independent Kconfig setting, much like Linux'es PHYS_ADDR_T_64BIT
> >> (I don't think we should re-use the name directly, as we have no
> >> phys_addr_t type), which each arch selects (or not) accordingly.
> >> Perhaps architectures already selecting 64BIT wouldn't need to do
> >> this explicitly, and instead 64BIT could select the new setting
> >> (or the new setting could default to Y when 64BIT=y).
> > 
> > Looking at some of the instances where 64BIT is currently used 
> > (xen/drivers/passthrough/arm/smmu.c), I understand that it is used to 
> > define the width of **virtual** address.
> > 
> > Thus, it would not conflict with ARM_PA_32 (which is for physical address).
> > 
> > Later on if we wish to introduce something similar to Linux's 
> > PHYS_ADDR_T_64BIT (which is arch agnostic), then ARM_PA_32 can be 
> > selected by "!PHYS_ADDR_T_64BIT" && "CONFIG_ARM_32". (We may decide to 
> > drop ARM_PA_32 config option, but we can still reuse ARM_PA_32 specific 
> > changes).
> > 
> > Also, then we will need to support 32 bit physical address (ie 
> > !PHYS_ADDR_T_64BIT) with 32 bit virtual address (ie !64BIT) and 64 bit 
> > virtual address (ie 64BIT).
> > 
> > Let's confine the current changes to Arm architecture only (as I don't 
> > have knowledge about x86 or RISCV). When similar changes will be done 
> > for other architectures, then we can think about introducing an 
> > architecture agnostic option.
> 
> I disagree, not the least with the present goal of reworking xen/types.h
> vs asm/types.h. By having an arch-independent Kconfig symbol, paddr_t
> could also be manufactured uniformly in xen/types.h.

Jan is only asking to introduce the new Kconfig symbol as an
arch-independent symbol. In other words, rename ARM_PA_32 to PADDR_32
(or something like that) and move the symbol to xen/arch/Kconfig.

I think that's reasonable. And PADDR_32 could be forced to always be
unselected on x86: I don't think Jan is asking you to rework the whole
of xen/arch/x86 and xen/commons to build on x86 with paddr_t set to
uint32_t.
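[A sketch of how an arch-independent symbol could drive a uniform paddr_t definition in xen/types.h. The symbol name PADDR_32 is taken from the suggestion above; a preprocessor macro stands in for the Kconfig option here:]

```c
#include <inttypes.h>
#include <stdint.h>

/* When the (hypothetical) PADDR_32 option is selected, paddr_t and
 * its companion macros narrow to 32 bits; otherwise they stay 64-bit,
 * as on all architectures selecting 64BIT today. */
#ifdef PADDR_32
typedef uint32_t paddr_t;
#define PRIpaddr       PRIx32
#define INVALID_PADDR  ((paddr_t)~0U)
#else
typedef uint64_t paddr_t;
#define PRIpaddr       PRIx64
#define INVALID_PADDR  ((paddr_t)~0ULL)
#endif
```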


From xen-devel-bounces@lists.xenproject.org Thu Jan 19 23:50:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Jan 2023 23:50:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481405.746258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeg7-0006P7-52; Thu, 19 Jan 2023 23:50:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481405.746258; Thu, 19 Jan 2023 23:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeg7-0006P0-2K; Thu, 19 Jan 2023 23:50:43 +0000
Received: by outflank-mailman (input) for mailman id 481405;
 Thu, 19 Jan 2023 23:50:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIeg5-0006Oo-HL; Thu, 19 Jan 2023 23:50:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIeg5-0002MJ-ET; Thu, 19 Jan 2023 23:50:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIeg4-0007L3-WF; Thu, 19 Jan 2023 23:50:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIeg4-0001SD-Vg; Thu, 19 Jan 2023 23:50:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4EMJEaYIAxfNd2YdmIBlGVoiYTgqMbmAT9PuIQOdKt4=; b=aTQ+PmRiAP3qw7dyzrQAHg2VLy
	I0vC43cXI6mPVr/uHrE3AMAjjlUcqTYjhXlEnrEF4uRM7iZF7ktPP9wl0ZStXVKEVUK03K5DrxkYL
	aAxuhP46VNrrZgVMWf63XBhIAzNW9cZ4aQMQazZFP7AcwE3FT7He4CEu118HcniKXdm8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175977-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175977: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ec8aeb6048018680c06fb9205c01ca6bda08846
X-Osstest-Versions-That:
    qemuu=7c9236d6d61f30583d5d860097d88dbf0fe487bf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Jan 2023 23:50:40 +0000

flight 175977 qemu-mainline real [real]
flight 175990 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175977/
http://logs.test-lab.xenproject.org/osstest/logs/175990/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd       22 guest-start.2       fail pass in 175990-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175962
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175962
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175962
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175962
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175962
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175962
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175962
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175962
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7ec8aeb6048018680c06fb9205c01ca6bda08846
baseline version:
 qemuu                7c9236d6d61f30583d5d860097d88dbf0fe487bf

Last test of basis   175962  2023-01-18 19:08:30 Z    1 days
Testing same since   175977  2023-01-19 09:20:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   7c9236d6d6..7ec8aeb604  7ec8aeb6048018680c06fb9205c01ca6bda08846 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 00:05:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 00:05:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481412.746268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeuP-00008F-64; Fri, 20 Jan 2023 00:05:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481412.746268; Fri, 20 Jan 2023 00:05:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIeuP-000088-2O; Fri, 20 Jan 2023 00:05:29 +0000
Received: by outflank-mailman (input) for mailman id 481412;
 Fri, 20 Jan 2023 00:05:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqBo=5R=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIeuN-000082-H1
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 00:05:27 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 24006c3e-9856-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 01:05:26 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 6BE2DB81A58;
 Fri, 20 Jan 2023 00:05:25 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 27167C433D2;
 Fri, 20 Jan 2023 00:05:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24006c3e-9856-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674173124;
	bh=qNK/RCj+GcV6rZUusEkgSFCwekcXk4u2+BLADfGWquI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=aeENQG5Z8KIDv61czG0Fc0ble7WWMCJA55Hmq0grZie3Rc3Os8hAYr4rwG2VyViLR
	 Ps29ep6s1h+Np0PrnAZDEkGurlf3c6jpsuc932vSTbrVrE5iRUCy4Ewco6umOopcY3
	 9zj7rxQ5GhQr+ddtC2dpLhxvkI9RbF0eABzVb1mfh6TFYA+aS5bswom1pPHxnymlB0
	 1PcEf71rrm4EArxYBiQ5rt40fYSl5kR+DECMYq4Q3OpMIyE3lXKOhvZwydfysGfimo
	 q3otku+NtA3/PY4HdR/3o3JaAu5fud0cylOxR/FK1eQZe1PW7IQMxHuqqff/dN0+Sh
	 bRcoeufp12K3Q==
Date: Thu, 19 Jan 2023 16:05:21 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 11/11] xen/arm: p2m: Enable support for 32bit IPA
In-Reply-To: <20230117174358.15344-12-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191605150.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-12-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> VTCR.T0SZ should be set as 0x20 to support 32bit IPA.
> Refer to ARM DDI 0487I.a ID081822, G8-9824, G8.2.171, VTCR,
> "Virtualization Translation Control Register" for the bit descriptions.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes from -
> 
> v1 - New patch.
> 
>  xen/arch/arm/p2m.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 948f199d84..cfdea55e71 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2266,13 +2266,17 @@ void __init setup_virt_paging(void)
>      register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>  
>  #ifdef CONFIG_ARM_32
> -    if ( p2m_ipa_bits < 40 )
> +    if ( p2m_ipa_bits < PADDR_BITS )
>          panic("P2M: Not able to support %u-bit IPA at the moment\n",
>                p2m_ipa_bits);
>  
> -    printk("P2M: 40-bit IPA\n");
> -    p2m_ipa_bits = 40;
> +    printk("P2M: %u-bit IPA\n",PADDR_BITS);
> +    p2m_ipa_bits = PADDR_BITS;
> +#ifdef CONFIG_ARM_PA_32
> +    val |= VTCR_T0SZ(0x20); /* 32 bit IPA */
> +#else
>      val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
> +#endif
>      val |= VTCR_SL0(0x1); /* P2M starts at first level */
>  #else /* CONFIG_ARM_64 */
>      static const struct {
> -- 
> 2.17.1
> 
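
As a cross-check of the T0SZ values in the patch above: in the AArch32
VTCR layout, T0SZ is a signed 4-bit field (with its sign replicated in
the S bit, bit 4) and the stage-2 input size is 32 - T0SZ. A small
standalone sketch of that decode (these helpers are hypothetical, not
the Xen macros):

```c
/* Decode the AArch32 VTCR {S, T0SZ[3:0]} encoding (bits [4:0]) as a
 * signed value; the stage-2 input (IPA) size is then 32 - T0SZ.
 * 0x18 decodes to T0SZ = -8, i.e. the existing 40-bit IPA; a 32-bit
 * IPA corresponds to T0SZ = 0 (the in-field bits of 0x20 are zero). */
#include <stdint.h>

static int decode_t0sz(uint32_t enc)
{
    /* sign-extend the low 5 bits {S, T0SZ[3:0]} */
    return (int32_t)(enc << 27) >> 27;
}

static unsigned int ipa_bits(uint32_t enc)
{
    return 32 - decode_t0sz(enc);
}
```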


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 00:19:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 00:19:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481417.746277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIf7w-0001gc-C0; Fri, 20 Jan 2023 00:19:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481417.746277; Fri, 20 Jan 2023 00:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIf7w-0001gV-9V; Fri, 20 Jan 2023 00:19:28 +0000
Received: by outflank-mailman (input) for mailman id 481417;
 Fri, 20 Jan 2023 00:19:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqBo=5R=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIf7v-0001gP-Fj
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 00:19:27 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1904926a-9858-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 01:19:26 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 0519EB82624;
 Fri, 20 Jan 2023 00:19:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B027EC433EF;
 Fri, 20 Jan 2023 00:19:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1904926a-9858-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674173964;
	bh=yh7TVqTJPJmJ/ye4MlX6W8P1qoVxWP3RAfJ279PctU8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=TC1Et5PPjpmJKoHcsnVdlhN1SFI/gdFNBJNqgMyVHCWTsPFn3zAWlXxrGAvMyzmj6
	 PyWzGh6ETB0qeF7dCBsxjhFtERy3iWV5VUawiR1dOSYK7z7jG+5jKSeegtEomUqUQG
	 mDpypcxTBiDEaRVjEPSLNGokBh88U468J2amJxYm9ibaQg69mgck0inVrXcj1qBlPu
	 CqXKLWmDZ+hKOD+QNZYLImLWQwLKU9tkdJPVJnja+vh4rXGIKQspjY3bU7OrbkzjiL
	 wDK4XGL88SGPRDJ+U+QEJ221GzdMoFeS/htlFotRwO5nb549pAHj8yo7LKNVvhI5a0
	 bulvlTpPILg6Q==
Date: Thu, 19 Jan 2023 16:19:22 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v2 10/11] xen/arm: Restrict zeroeth_table_offset for
 ARM_64
In-Reply-To: <20230117174358.15344-11-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301191553570.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-11-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> zeroeth_table_offset is not used on ARM_32.
> Also, when 32-bit physical addresses are used (i.e. ARM_PA_32=y), this
> causes an overflow.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from -
> 
> v1 - Removed the duplicate declaration for DECLARE_OFFSETS.
> 
>  xen/arch/arm/include/asm/lpae.h | 4 ++++
>  xen/arch/arm/mm.c               | 7 +------
>  2 files changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
> index 3fdd5d0de2..2744e0eebf 100644
> --- a/xen/arch/arm/include/asm/lpae.h
> +++ b/xen/arch/arm/include/asm/lpae.h
> @@ -259,7 +259,11 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
>  #define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
>  #define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
>  #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
> +#ifdef CONFIG_ARM_64

Julien was asking for a selectable Kconfig option that would allow us to
have a 32-bit paddr_t even on ARM_64. If we do that, and we are on
aarch64 with VTCR_T0SZ set to 0x20 (hence a 32-bit IPA), are we going
to have a 3-level or a 4-level p2m pagetable?

In any case I think this should be:
#ifndef CONFIG_PADDR_32

And if it doesn't work today on aarch64 due to pagetable levels or other
reasons, then I would make CONFIG_PADDR_32 not (yet) selectable on
ARM_64 (until it is fixed).


>  #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
> +#else
> +#define zeroeth_table_offset(va)  0

Rather than 0, it might be better to use 32, hence zeroing the input
address.


> +#endif
>  
>  /*
>   * Macros to define page-tables:
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index fab54618ab..95784e0c59 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -207,12 +207,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
>  {
>      static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
>      const mfn_t root_mfn = maddr_to_mfn(ttbr);
> -    const unsigned int offsets[4] = {
> -        zeroeth_table_offset(addr),
> -        first_table_offset(addr),
> -        second_table_offset(addr),
> -        third_table_offset(addr)
> -    };
> +    DECLARE_OFFSETS(offsets, addr);
>      lpae_t pte, *mapping;
>      unsigned int level, root_table;
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 00:36:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 00:36:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481422.746288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIfOh-00044q-Qg; Fri, 20 Jan 2023 00:36:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481422.746288; Fri, 20 Jan 2023 00:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIfOh-00044j-Nl; Fri, 20 Jan 2023 00:36:47 +0000
Received: by outflank-mailman (input) for mailman id 481422;
 Fri, 20 Jan 2023 00:36:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eVq4=5R=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pIfOg-00044d-KK
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 00:36:46 +0000
Received: from mail-vs1-xe35.google.com (mail-vs1-xe35.google.com
 [2607:f8b0:4864:20::e35])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 835818d8-985a-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 01:36:44 +0100 (CET)
Received: by mail-vs1-xe35.google.com with SMTP id i188so4075237vsi.8
 for <xen-devel@lists.xenproject.org>; Thu, 19 Jan 2023 16:36:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 835818d8-985a-11ed-b8d1-410ff93cb8f0
MIME-Version: 1.0
References: <cover.1674131459.git.oleksii.kurochko@gmail.com> <851a3fa74defe5174335646e2a79096bd8d432f8.1674131459.git.oleksii.kurochko@gmail.com>
In-Reply-To: <851a3fa74defe5174335646e2a79096bd8d432f8.1674131459.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Fri, 20 Jan 2023 10:36:17 +1000
Message-ID: <CAKmqyKN6Prtqg_TSvxtGT-Vd53wxDycWpMEsV+J5HLePskjevQ@mail.gmail.com>
Subject: Re: [PATCH v5 2/5] xen/riscv: introduce asm/types.h header file
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Jan 20, 2023 at 12:07 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V5:
>     - Remove size_t from asm/types.h after rebase on top of the patch
>       "include/types: move stddef.h-kind types to common header" [1].
>     - All other types were brought back as they are used in <xen/types.h>
>       and in xen/common.
> ---
> Changes in V4:
>     - Clean up types in <asm/types.h> and retain only the necessary ones.
>       The following types were removed as they are defined in <xen/types.h>:
>       {__|}{u|s}{8|16|32|64}
> ---
> Changes in V3:
>     - Nothing changed
> ---
> Changes in V2:
>     - Remove unneeded now types from <asm/types.h>
> ---
>  xen/arch/riscv/include/asm/types.h | 70 ++++++++++++++++++++++++++++++
>  1 file changed, 70 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/types.h
>
> diff --git a/xen/arch/riscv/include/asm/types.h b/xen/arch/riscv/include/asm/types.h
> new file mode 100644
> index 0000000000..64976f118d
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/types.h
> @@ -0,0 +1,70 @@
> +#ifndef __RISCV_TYPES_H__
> +#define __RISCV_TYPES_H__
> +
> +#ifndef __ASSEMBLY__
> +
> +typedef __signed__ char __s8;
> +typedef unsigned char __u8;
> +
> +typedef __signed__ short __s16;
> +typedef unsigned short __u16;
> +
> +typedef __signed__ int __s32;
> +typedef unsigned int __u32;
> +
> +#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
> +#if defined(CONFIG_RISCV_32)
> +typedef __signed__ long long __s64;
> +typedef unsigned long long __u64;
> +#elif defined (CONFIG_RISCV_64)
> +typedef __signed__ long __s64;
> +typedef unsigned long __u64;
> +#endif
> +#endif
> +
> +typedef signed char s8;
> +typedef unsigned char u8;
> +
> +typedef signed short s16;
> +typedef unsigned short u16;
> +
> +typedef signed int s32;
> +typedef unsigned int u32;
> +
> +#if defined(CONFIG_RISCV_32)
> +
> +typedef signed long long s64;
> +typedef unsigned long long u64;
> +typedef u32 vaddr_t;
> +#define PRIvaddr PRIx32
> +typedef u64 paddr_t;
> +#define INVALID_PADDR (~0ULL)
> +#define PRIpaddr "016llx"
> +typedef u32 register_t;
> +#define PRIregister "x"
> +
> +#elif defined (CONFIG_RISCV_64)
> +
> +typedef signed long s64;
> +typedef unsigned long u64;
> +typedef u64 vaddr_t;
> +#define PRIvaddr PRIx64
> +typedef u64 paddr_t;
> +#define INVALID_PADDR (~0UL)
> +#define PRIpaddr "016lx"
> +typedef u64 register_t;
> +#define PRIregister "lx"
> +
> +#endif
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* __RISCV_TYPES_H__ */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 00:48:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 00:48:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481429.746298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIfaI-0005g1-22; Fri, 20 Jan 2023 00:48:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481429.746298; Fri, 20 Jan 2023 00:48:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIfaH-0005fu-Tw; Fri, 20 Jan 2023 00:48:45 +0000
Received: by outflank-mailman (input) for mailman id 481429;
 Fri, 20 Jan 2023 00:48:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIfaG-0005fo-6Q
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 00:48:44 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2e600661-985c-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 01:48:41 +0100 (CET)
Received: from mail-dm6nam11lp2175.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Jan 2023 19:48:38 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM8PR03MB6229.namprd03.prod.outlook.com (2603:10b6:8:24::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.24; Fri, 20 Jan 2023 00:48:36 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 00:48:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e600661-985c-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Bobby
 Eshleman <bobby.eshleman@gmail.com>
Subject: Re: [PATCH v5 4/5] xen/riscv: introduce early_printk basic stuff
Thread-Topic: [PATCH v5 4/5] xen/riscv: introduce early_printk basic stuff
Thread-Index: AQHZLA9l20cepYD69E6g31XDw2HHwK6mebiA
Date: Fri, 20 Jan 2023 00:48:35 +0000
Message-ID: <aefd4cb9-9a60-4ef1-ff45-dedfb6c37203@citrix.com>
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
 <8d7ac0dc51a6331d3efa7fcda433616670b46700.1674131459.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <8d7ac0dc51a6331d3efa7fcda433616670b46700.1674131459.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <81C76C455AA8C647BD237D01E2B390F7@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b9f9e263-09f5-4e53-48c8-08dafa800fcb
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 00:48:35.3616
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: CzEPXAGYdNLjE27zOCTHPMKlRihRG36Y5jhC9hHNdvge1kJX7uAH4YX1293RH7w0RopZn/KLp2kmsiKaXch6rQuXWWcpFGMbJDf3Et1kXuM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR03MB6229

On 19/01/2023 2:07 pm, Oleksii Kurochko wrote:
> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
> new file mode 100644
> index 0000000000..6f590e712b
> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> + */
> +#include <asm/early_printk.h>
> +#include <asm/sbi.h>
> +
> +/*
> + * early_*() can be called from head.S with MMU-off.
> + *
> + * The following requiremets should be honoured for early_*() to
> + * work correctly:
> + *    It should use PC-relative addressing for accessing symbols.
> + *    To achieve that GCC cmodel=medany should be used.
> + */
> +#ifndef __riscv_cmodel_medany
> +#error "early_*() can be called from head.S with MMU-off"
> +#endif

This comment is false, and the check is bogus.

It needs deleting.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 04:26:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 04:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481435.746308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIiyi-0000DR-J5; Fri, 20 Jan 2023 04:26:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481435.746308; Fri, 20 Jan 2023 04:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIiyi-0000DJ-EC; Fri, 20 Jan 2023 04:26:12 +0000
Received: by outflank-mailman (input) for mailman id 481435;
 Fri, 20 Jan 2023 04:26:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIiyh-0000D9-8X; Fri, 20 Jan 2023 04:26:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIiyh-00088M-5n; Fri, 20 Jan 2023 04:26:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIiyg-0002ut-Om; Fri, 20 Jan 2023 04:26:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIiyg-00036F-M4; Fri, 20 Jan 2023 04:26:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175982-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 175982: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-upstream-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=625eb5e96dc96aa7fddef59a08edae215527f19c
X-Osstest-Versions-That:
    qemuu=1cf02b05b27c48775a25699e61b93b814b9ae042
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 04:26:10 +0000

flight 175982 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175982/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 175963 pass in 175982
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 175963

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175283
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175283
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175283
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175283
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175283
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175283
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175283
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175283
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                625eb5e96dc96aa7fddef59a08edae215527f19c
baseline version:
 qemuu                1cf02b05b27c48775a25699e61b93b814b9ae042

Last test of basis   175283  2022-12-15 15:42:37 Z   35 days
Testing same since   175938  2023-01-17 15:37:13 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   1cf02b05b2..625eb5e96d  625eb5e96dc96aa7fddef59a08edae215527f19c -> master


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 06:07:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 06:07:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481449.746322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIkY3-000287-0h; Fri, 20 Jan 2023 06:06:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481449.746322; Fri, 20 Jan 2023 06:06:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIkY2-000280-Td; Fri, 20 Jan 2023 06:06:46 +0000
Received: by outflank-mailman (input) for mailman id 481449;
 Fri, 20 Jan 2023 06:06:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIkY1-00027q-J6; Fri, 20 Jan 2023 06:06:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIkY1-0002gg-Fc; Fri, 20 Jan 2023 06:06:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIkY0-0005d8-WE; Fri, 20 Jan 2023 06:06:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIkY0-0001En-Vi; Fri, 20 Jan 2023 06:06:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JnG/pFTGSAT4aF3BHnoZt6l/9NgVMU590aMRtNcu+pQ=; b=2TXJTlX/uc4R438UUpIWxpSQfK
	LKf9UXVqYx4IgnMKSECbdq8wwTU/ErWJmJ845wE3//qxemxEjPNEOmR8H0dC5z2F0in1OHavZs3Be
	jy+XZJibztQCcJVQm0Li0ZKPfmLIetcCvtnY8oUBDYiQBR16l0XkCM7W4EJQOOplvrzU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175985-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175985: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7287904c8771b77b9504f53623bb477065c19a58
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 06:06:44 +0000

flight 175985 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175985/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-amd64 8 xen-boot fail in 175966 pass in 175985
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail in 175966 pass in 175985
 test-amd64-amd64-freebsd12-amd64 19 guest-localmigrate/x10 fail pass in 175966

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7287904c8771b77b9504f53623bb477065c19a58
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  104 days
Failing since        173470  2022-10-08 06:21:34 Z  103 days  212 attempts
Testing same since   175966  2023-01-19 03:13:54 Z    1 days    2 attempts

------------------------------------------------------------
3375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 516750 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 07:20:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 07:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481458.746332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIlh3-0001yB-Ce; Fri, 20 Jan 2023 07:20:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481458.746332; Fri, 20 Jan 2023 07:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIlh3-0001y3-9T; Fri, 20 Jan 2023 07:20:09 +0000
Received: by outflank-mailman (input) for mailman id 481458;
 Fri, 20 Jan 2023 07:20:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIlh2-0001xu-3P
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 07:20:08 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dccb0f1a-9892-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 08:20:05 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2E44622657;
 Fri, 20 Jan 2023 07:20:05 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E4B011390C;
 Fri, 20 Jan 2023 07:20:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id XcdlNqRAymNRagAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 07:20:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dccb0f1a-9892-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674199205; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=2q4fzQCu5e7fvGqH5s1QSEHRhfxesa3qtzT1vLYVUsA=;
	b=rhIM/52jz4TuyKcZXDGnBQxGmmiOBO5JdXPN2N8wkkDTOKepOkj+Wd/qPvB5AtAyAmZ7hF
	pbrGP23sPXqNjxtoC197ylRm2QuIBl8LHVaJxm3Lp5+FHCGsJnec4YbnpHReiWUFZQnhNj
	bR8fglv2ZIELDUvW+Hmn0UpD9Fw7l9M=
Message-ID: <1834b425-2d77-a0c3-e2a9-5c1fc9de750d@suse.com>
Date: Fri, 20 Jan 2023 08:20:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen: Allow platform PCI interrupt to be shared
Content-Language: en-US
To: David Woodhouse <dwmw2@infradead.org>
References: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>, linux-kernel@vger.kernel.org,
 Paul Durrant <paul@xen.org>, Thomas Gleixner <tglx@linutronix.de>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------guK1Nuw1nAow0CzighlzBg96"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------guK1Nuw1nAow0CzighlzBg96
Content-Type: multipart/mixed; boundary="------------LqczeNPWt0NQ0pIdP3EIij9f";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: David Woodhouse <dwmw2@infradead.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>, linux-kernel@vger.kernel.org,
 Paul Durrant <paul@xen.org>, Thomas Gleixner <tglx@linutronix.de>
Message-ID: <1834b425-2d77-a0c3-e2a9-5c1fc9de750d@suse.com>
Subject: Re: [PATCH] xen: Allow platform PCI interrupt to be shared
References: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>
In-Reply-To: <f9a29a68d05668a3636dd09acd94d970269eaec6.camel@infradead.org>

--------------LqczeNPWt0NQ0pIdP3EIij9f
Content-Type: multipart/mixed; boundary="------------ozj20MEsJlZdsBkhWZq6i8yX"

--------------ozj20MEsJlZdsBkhWZq6i8yX
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 18.01.23 13:22, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> When we don't use the per-CPU vector callback, we ask Xen to deliver event
> channel interrupts as INTx on the PCI platform device. As such, it can be
> shared with INTx on other PCI devices.
> 
> Set IRQF_SHARED, and make it return IRQ_HANDLED or IRQ_NONE according to
> whether the evtchn_upcall_pending flag was actually set. Now I can share
> the interrupt:
> 
>   11:         82          0   IO-APIC  11-fasteoi   xen-platform-pci, ens4
> 
> Drop the IRQF_TRIGGER_RISING. It has no effect when the IRQ is shared,
> and besides, the only effect it was having even beforehand was to trigger
> a debug message in both I/OAPIC and legacy PIC cases:
> 
> [    0.915441] genirq: No set_type function for IRQ 11 (IO-APIC)
> [    0.951939] genirq: No set_type function for IRQ 11 (XT-PIC)
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

Reviewed-by: Juergen Gross <jgross@suse.com>

> ---
> What does xen_evtchn_do_upcall() exist for? Can we delete it? I don't
> see it being called anywhere.

It can be deleted.


Juergen
--------------ozj20MEsJlZdsBkhWZq6i8yX
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------ozj20MEsJlZdsBkhWZq6i8yX--

--------------LqczeNPWt0NQ0pIdP3EIij9f--

--------------guK1Nuw1nAow0CzighlzBg96
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPKQKQFAwAAAAAACgkQsN6d1ii/Ey/k
Jgf/ShQUkF68ZJ7LVMFtJAEAY4UI7xClGLkU85Zwd3vNnZdvIZuY3GpxVa6ZqkSEubzPzY6+C+S2
cJuimmOvsamewAtDCjnh+jtlkFHUi+c5xTtP3Qc0ZZDzBE+KL90G6uBQqVRLptELX4hK+QgEDsnZ
jIoowp3i9vdEEjhAq1VQ95h8LjA+HTYdCZcvTZPdm8jVSVtbbtvrxSl0SX/B7vqz+iIIu5SfYywf
t/ONVN3UXxzUtHiWKmoaAVcMvFHPR1W1u6V1iLO0oJNSWk1ypY0zsmh+JZ/FH9kmJ9dKr+LS3TB0
6MSUlUn6aQQTx9344uynMgQzHOE8kvxaSTBNDDZCng==
=gDUX
-----END PGP SIGNATURE-----

--------------guK1Nuw1nAow0CzighlzBg96--


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 07:24:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 07:24:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481463.746342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIll1-0002Z0-Uo; Fri, 20 Jan 2023 07:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481463.746342; Fri, 20 Jan 2023 07:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIll1-0002Yt-Qe; Fri, 20 Jan 2023 07:24:15 +0000
Received: by outflank-mailman (input) for mailman id 481463;
 Fri, 20 Jan 2023 07:24:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIll1-0002Yn-0m
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 07:24:15 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6fb1966a-9893-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 08:24:13 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id DB34D21F2C;
 Fri, 20 Jan 2023 07:24:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AB1781390C;
 Fri, 20 Jan 2023 07:24:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 0ZxVKJtBymPvawAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 07:24:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fb1966a-9893-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674199451; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xVCoiGuOSiNwE408h17hFokKY1abJGIIRxIQ6PgC4r8=;
	b=KpzSZkpSn1sDPgojQBN+ZEIF3zLp9epIWmQdOf3PjLW5sQCMb3Pm6MDxNY5LDQdoKLxSaf
	yehxJS1LPx5t3lFET3tkifRLHT0cp/hM7/O+VXzLwYWumHYDmfzsC4W/g0QoKSHei7AGtf
	SgKbMpqT7zox8KjnoBRbtlwQM3CLw2E=
Message-ID: <c7f89ae6-4e12-8430-b807-628e8ffda6f3@suse.com>
Date: Fri, 20 Jan 2023 08:24:11 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/pvcalls-back: fix permanently masked event channel
Content-Language: en-US
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
References: <20230119211037.1234931-1-volodymyr_babchuk@epam.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230119211037.1234931-1-volodymyr_babchuk@epam.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------6v9Ljt46l0xlOcGWn9YQNeSF"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------6v9Ljt46l0xlOcGWn9YQNeSF
Content-Type: multipart/mixed; boundary="------------uGQ0hPMt3AfYxpWRp7vu56vC";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
Message-ID: <c7f89ae6-4e12-8430-b807-628e8ffda6f3@suse.com>
Subject: Re: [PATCH] xen/pvcalls-back: fix permanently masked event channel
References: <20230119211037.1234931-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20230119211037.1234931-1-volodymyr_babchuk@epam.com>

--------------uGQ0hPMt3AfYxpWRp7vu56vC
Content-Type: multipart/mixed; boundary="------------5HrkPxkWODe3kjOtrpwUOODX"

--------------5HrkPxkWODe3kjOtrpwUOODX
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 19.01.23 22:11, Volodymyr Babchuk wrote:
> There is a sequence of events that can lead to a permanently masked
> event channel, because xen_irq_lateeoi() is never called. This happens
> when a backend receives a spurious write event from a frontend. In this
> case pvcalls_conn_back_write() returns early and does not clear the
> map->write counter. As map->write > 0, pvcalls_back_ioworker() returns
> without calling xen_irq_lateeoi(). This leaves the event channel in a
> masked state; the backend does not receive any new events from the
> frontend and the whole communication stops.
> 
> Move atomic_set(&map->write, 0) to the very beginning of
> pvcalls_conn_back_write() to fix this issue.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> Reported-by: Oleksii Moisieiev <oleksii_moisieiev@epam.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------5HrkPxkWODe3kjOtrpwUOODX--

--------------uGQ0hPMt3AfYxpWRp7vu56vC--

--------------6v9Ljt46l0xlOcGWn9YQNeSF
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPKQZsFAwAAAAAACgkQsN6d1ii/Ey9a
jwf+NSCsTPXNEIqSmfln+gntX+LU7sW3klmuzqzFw35Ipugm6F5gU3YitzgAMQBoHG3QNzpGxubw
2tr4KyVQEKrsdkiWO5OWPKTLJlRYxW1fXUL1dvNYZx4FoLaXmL1ROc2i8v5TuxJAx2GbfWTY5ymW
+COJoTJt57N+DzsPipupn4iCPbMG2tLI95j9d0DbYkLW0IS2f7miRrUsIJl35eNSSauQhl9NMzm6
Hg8sn1b0wl10OxlVHi1mVgsEEeq+wYZ0pK5nyA28UXSZfIh2R4SwU4FYZVygqbr4bCZG1NwEPIKX
e+9BZVxx3OLnVC7o1IhGVtRNKXX4Dfr8bJDGSOi+5w==
=qImL
-----END PGP SIGNATURE-----

--------------6v9Ljt46l0xlOcGWn9YQNeSF--


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 07:49:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 07:49:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481470.746351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIm9Z-00055w-2b; Fri, 20 Jan 2023 07:49:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481470.746351; Fri, 20 Jan 2023 07:49:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIm9Z-00055p-00; Fri, 20 Jan 2023 07:49:37 +0000
Received: by outflank-mailman (input) for mailman id 481470;
 Fri, 20 Jan 2023 07:49:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1qDs=5R=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIm9Y-00055j-B9
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 07:49:36 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2069.outbound.protection.outlook.com [40.107.20.69])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fae00071-9896-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 08:49:34 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8539.eurprd04.prod.outlook.com (2603:10a6:20b:436::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Fri, 20 Jan
 2023 07:49:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.027; Fri, 20 Jan 2023
 07:49:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fae00071-9896-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=glAdSV3zasUryrrhg/wO0j2bk1ND626SnqRgPN0gDIVE20cM5Jd8N922AscTo6O5MUAXHSdp6HU3oktAGK9nzqEEBrBs2N0pDzAVe4M7k+PjJqQmaC0b7EyVCKRHS6hXL+CypJ1K8TIYnKPHVM0Ae/dEcuQpU/M7gx8ON399GzIK4kgNmO3MYwZufTHwhtn4QsyaUJJlyzOHffJ876cGHOVmnZM1k72vhcOmhGjeGt9BEcnzRKYsdcuqTahKcpPxMJ54Q4Rwed97ltwi/GRBeX7maaGs5UYdxxWXUuzhnn1Gf2Nb6sMvoQSbbuCCmkxj5dhev2tGSKZSgXS8LC+FEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6Io31DtRKRvzeaJ7vM0rAHgdD9I+lvUU4R06xLWV21w=;
 b=lYB5VGVedwlvv0ZUqH9xzUlfpnpoLSlG6vSzMoDzNpWlBhhOqWzRAsllLjiUIhOVBqJxzXkzNqFvDB7z0yaen7C6bAtYDp5b0HlAxcozw1DPT9NWtzd58vWhzJQ7BiSES26OQgQcHDx/wDi8Uc2c7YWLMdUd0P+SlIT0KeIRy5ScdTwfV7EJBWdlGW87JBs+MOTnwJoRvZpfGAM/m8HzMQ7ku8LAdCgkmA9dl+mlWM95FP40qf29NSdrkkgM+OWLOoNtWj/nPy+2vS4FyysHs4D+RvftXjV0Lu+GWQSCXMyKhtSTCXqbYhJAISRNUcKxqfmwSdTlzN4fzYt8VuletA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6Io31DtRKRvzeaJ7vM0rAHgdD9I+lvUU4R06xLWV21w=;
 b=wiAXZBBEZT0JIc2nlSYG4nZ0UMvAUBtcZLpJwf2rFu7X7naH3fz6MjOGXUR8P63/qoHnS9lyKCfd9W2+drb4iy4hYEUiqbST9kCSpE1uzmE2+W5Up2l/cK1uUkpqMV6/qfY/a1UJ4oq60WzfQ6t/sNjim8eaxLP5HsFctPN4KtxFAK9UHcotFNPqfcFSjvOBvG84oAdgVfuzINc7AELRbCgkjHKjQbFy0ec36m8PgNZFknsrS1rkwCYlOmynEzVP4D4xOBNMZw/3J/mXfjZavi1Vq/7SbcKYupWKpDjCmeFDYuVb0ngR6LZTgGhZtEnOweLWqvQM+tN3BrfMtMUONw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <839c8f02-a3d7-fa1b-f7de-6a0521ba6b76@suse.com>
Date: Fri, 20 Jan 2023 08:49:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] x86/shadow: fix PAE check for top-level table
 unshadowing
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9e79449a-fd12-f497-695b-79a50cc913c7@suse.com>
 <bf03f851-2fb4-3de1-7d72-b0ac15b2d488@suse.com>
 <266eb5cf-b2ad-051b-0474-96bf6e2ca7e0@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <266eb5cf-b2ad-051b-0474-96bf6e2ca7e0@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0147.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8539:EE_
X-MS-Office365-Filtering-Correlation-Id: 2f30b6b8-18fa-47de-c3e1-08dafabadcd4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	50j+Tq3FLwbrM0fJaf6gkes0ruw1jpfkUo3WSwFBcyMBsVhhRDAsHWdCKOCUc0Ju2bMf30NBG1rljuMRPzC6y28CKUD9Xy7gydV2j4LgYG9Td43VVdSZ1nbbnL6lMz4GkiNKT1jPjM+WZsMsPiFMgaWnngFZpDa7zrHMdrYTAdwYEPs6FbFXKZjRNZ3lDvD6pQAZrVA+utVKPk2qwBKLAj1jd/PRkpWunaEi/hdKOoB0p5wzM24efJn3JA3Ey+Vm+K1EpRKeJ7JVyFtLN4Qr0znkRS3VXVsc1m/UB8M1WAcEYMP8QNmhonBhJIVHWbx3Wn2p7Fgb08fepNrBFdni7GRY/l3KUOOY2GkYgF5rLklYjrGXbyzO2KyEda+lsrk+qf697PGpwmZM9UlfY3pX83x7pNS3tVQiDp2frgFS/AR9HKAioBvv8AyTsGrpifl2jGs6JCIDOLgDgazM8jdhTgtlF+ozNm1u5OT3KC++2F266c77tq0pnmj6gC+OCKpPamy2p6F6fOJ1olADjlQj0EmPV5RBXPetWCS1YGOn2gBxYUHeAuk+UKBemdvJuT7Xy15b8lUgYiDPNDU91uNmoIhadLZrBbvjQGVeKWMp1f4VrpbADFUD1+WiVG3BL+uNQ6u2zWkB217mOZV6qe74qglrKTh9F4HEAMu7OVGTWGILC2nGHYMYOXRuj4NlvqBZyYQu73/EFf0kEzeWBJA9DAVACtZxXjHNcCNtCm7uoLU=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(136003)(366004)(396003)(376002)(39860400002)(346002)(451199015)(31686004)(36756003)(31696002)(66556008)(186003)(66476007)(4326008)(53546011)(6916009)(86362001)(6512007)(8676002)(41300700001)(2616005)(66946007)(6506007)(54906003)(316002)(2906002)(6666004)(6486002)(478600001)(4744005)(5660300002)(8936002)(38100700002)(26005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WFQ4TjFUNVhmQ3dkZnJaWE5LSXR0aEcxb0xQV2lSTmx2TnB6RzBmM1V6N2JR?=
 =?utf-8?B?dUdVWll6TVk5dE5YUUpDbTFiY1loV1NKSW5rbnJjbCtQTTZBZmlIWFNNZnIx?=
 =?utf-8?B?QTNmK3JsRXQ3eG1LeXVIWjlMdWZRckZQZk15ODBad2RaSmZWR1E4Ry8yUFJW?=
 =?utf-8?B?enJCc2pVT3hvNGM1akk4MXZ4Qm9Ob2h6WHNnUy9abHBLUTI4emZzQXNoSXo2?=
 =?utf-8?B?dWRmRW8rbzZ4cGdUUTVRZHpQdko1UFdCc2IzTVlRak44clU4Y2JISTQwWFNO?=
 =?utf-8?B?NjBXSDg0aS9UME52ZVJZeGhiVkluZjBxZWo3OUxmZmN2WXBCUzNZUGlsN05M?=
 =?utf-8?B?RjJvNjdIdVFBTmhXdVhOTHRYWXVlYi92Znp3anJsQWNNUTZPaGxMODFNOEdl?=
 =?utf-8?B?SFRvcW5OM1RqSEFJOFJJOGptK0lEMHVUZzNsOG9tVVc3STNMZkRkZDFJVEVo?=
 =?utf-8?B?am5XcjhMcGxoSGFQcHB0U2dWQzFxSGhwOTRoVGxMWFN0d0NGa21rblB2Vk5n?=
 =?utf-8?B?TFZHS2RkMGZSTnNqaEh3RVRvNHN5UUdydS9WU2FJR1FVZ0pUMTljak53QW9n?=
 =?utf-8?B?WkRNdklhaWhVTUJHSVFmSkp1aGsrd254bFIvTTBJL05OV2NoWjJwQzYyd1BK?=
 =?utf-8?B?M3c1QkdicjlFSFBqUndrcG1QQU5ac3M0bXdPendiRnN3enJDWVk2Tmo1UWQ4?=
 =?utf-8?B?N2FxR0lib0oxRUJiS1YydmpnWm1odCtuSmoySkNIRkZteGcyZ0dOaC9jZWZT?=
 =?utf-8?B?SE8rTDV1K29SNTlQTkRwT09QMjRKTVNRMTZiYkkrMHNyMEd6MkVibUZieGxy?=
 =?utf-8?B?ZG5vdFNyajFFQStHZ3ZjN2k3d1cvTy83eDFkU0J3NWVPZWljTXdZeEZkZW0y?=
 =?utf-8?B?VkNCek1qZWVMeStuTGdPWmE5KytZSXdraUM3UXowV3FoaXVqTjhIY2FYaWND?=
 =?utf-8?B?cURRV0JsNmhVLzFuTHBUdXprMjZTZlBsSDNxVEJBdk9UcVpLZDcxOHd2Y2lB?=
 =?utf-8?B?VkdHTnBBN29Md25HQk52aGk0V3FhZzZaWjVxWlJmcWJ1Z3FQRDZmdWtjNVUr?=
 =?utf-8?B?WDBYQWt5eHJZSmovNUk4dzlUTFRxeHdwVnRUNnp6a0thSFJRb3FqcGFOdmxv?=
 =?utf-8?B?c3lJRWFYUldHem95VDhScE1rUGVMR3ZGbzlpZGoxVStWYXZwYmpGU05wRmt4?=
 =?utf-8?B?WllmWEFkdEpwRDRsNWRlUk8yVW9kM1BlV0taK2hpUGFlVjM5c1hsOVN2M3g0?=
 =?utf-8?B?dWRqMHVwdUpIVExKNWR2ZFVzMlgwNFlORmUvVnRUYjV5TSthRHlzbjVIaEVj?=
 =?utf-8?B?SnRhdCtuVklRV2xWaHUydkwwcWx1WkRBaGRySDF2ek4xZHZ4Y2FWRmk5TVpY?=
 =?utf-8?B?Vlh5TWZzT2NzMVdwdG5lVjlxQU85d1hDdyszd1ZvN3dLUkJXOW1Vdi9aZ0J6?=
 =?utf-8?B?WTdjWGhpSGJxUG5IUVZrQlZKNUV4L2VZRFYwemN1OU03Qk81UHByZmNiN2dt?=
 =?utf-8?B?dHh5VXlwVmwyM1IrYW9tNnJndysrUC9yZllPM2pHa0dmSEJuN0tOMExRSTcw?=
 =?utf-8?B?emNJcCtUaTVVUnpVcnRDMmtlZ0J0QzhPNGMrT3RPYlZQREVMV1VLNmF4Qk5W?=
 =?utf-8?B?OEk0V0ZNT3VZMnJwRTlRV21ONE95b205eFJLMzlTL284UUR1ellDNUJpMVp1?=
 =?utf-8?B?TFFkOEgyTHFvWUV5SmtIMURjb0czS1UzSngvbWhGL3VtaXRqbERmcGYvVTlR?=
 =?utf-8?B?RG9GbFVCUUN0dHN0enhCYkJNZjVKd3lJRjA0QUJNU2xZY1V5ZFRaYmpKbDBZ?=
 =?utf-8?B?R1l5U3NacjVxcnY3cDlUcktkZUpYREllN3RlaXNSVVNNYlZtajgxMWtFSnk2?=
 =?utf-8?B?eWZlREVPTHlJaUNlcWU0eEI5KzFydWZUWHFiZldadG5GUGNaQ3hXalFsSzQ5?=
 =?utf-8?B?OXY3Njd5MjNQcG9xZ3ZtMFJ1NHVxd3N4ZldRaFQwUFFlTmtYUkdjR3AxODB4?=
 =?utf-8?B?OGZubFkxV3F5QzVKdGxZbUlXYThCR3FMZXA1QXRjaXNWQjhOa1dTZzFVVmtr?=
 =?utf-8?B?bE1CNncvcjFySFJPaUJUdnp4dEFBaTk1ZlpuUkxCMUxjbnIyMTcxRUkxUGNp?=
 =?utf-8?Q?V/8NVXp/sw3d1uue+B2yRisM9?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2f30b6b8-18fa-47de-c3e1-08dafabadcd4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 07:49:30.7138
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oZWn50aYPrlAStgceIpIhaw5pwDoceeG+qXta51NV12jeEWZo8/oTkcHcMnQ3LxvhmuKrlrDYoGdruPs/1qBYg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8539

On 19.01.2023 19:19, Andrew Cooper wrote:
> On 19/01/2023 1:19 pm, Jan Beulich wrote:
>> Clearly within the for_each_vcpu() the vCPU of this loop is meant, not
>> the (loop invariant) one the fault occurred on.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Wow that's been broken for the entire lifetime of the pagetable_dying op
> 3d5e6a3ff38 from 2010, but it still deserves a fixes tag.

Oh, yes, of course. But then it'll really need two, as ef3b0d8d2c39
("x86/shadow: shadow_table[] needs only one entry for PV-only configs")
was also flawed, and I guess I really should have spotted the issue
back then already.

> Preferably with that, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 08:36:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 08:36:11 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175987-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175987: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=fd42170b152b19ff12121f3b63674e882c087849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 08:35:39 +0000

flight 175987 xen-unstable real [real]
flight 175993 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175987/
http://logs.test-lab.xenproject.org/osstest/logs/175993/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install   fail pass in 175993-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175734
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175734
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175734
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175734
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175734
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175734
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175734
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175734
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175734
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175734
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175734
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175734
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  fd42170b152b19ff12121f3b63674e882c087849

Last test of basis   175734  2023-01-12 01:53:30 Z    8 days
Failing since        175739  2023-01-12 09:38:44 Z    7 days   18 attempts
Testing same since   175965  2023-01-19 02:53:34 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fd42170b15..f588e7b7cb  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363 -> master


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 08:40:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 08:40:38 +0000
Message-ID: <1eaa4cce-2ef2-ca38-56d2-5d551c9c1ae9@suse.com>
Date: Fri, 20 Jan 2023 09:40:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] tools/symbols: drop asm/types.h inclusion
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e5a97d3-5c9b-465c-f86a-08dafac1fa9f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 08:40:26.8016
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EO5rW2BgVoFZjxhADJdERUmJ2JsSSYyINFm4k6+NQiJ06LuZfPfhLV0wU5uQwDh8gBKH5ZCRB+k7JtI1ua9ddw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8851

While this has been there forever, it's not clear to me what it was
(thought to be) needed for. In fact, all three instances of the header
already exclude their entire bodies when __ASSEMBLY__ is defined.
Hence, with no other assembly files including this header, we can at the
same time get rid of those conditionals.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/arm/include/asm/types.h
+++ b/xen/arch/arm/include/asm/types.h
@@ -1,9 +1,6 @@
 #ifndef __ARM_TYPES_H__
 #define __ARM_TYPES_H__
 
-#ifndef __ASSEMBLY__
-
-
 typedef __signed__ char __s8;
 typedef unsigned char __u8;
 
@@ -54,8 +51,6 @@ typedef u64 register_t;
 #define PRIregister "016lx"
 #endif
 
-#endif /* __ASSEMBLY__ */
-
 #endif /* __ARM_TYPES_H__ */
 /*
  * Local variables:
--- a/xen/arch/riscv/include/asm/types.h
+++ b/xen/arch/riscv/include/asm/types.h
@@ -1,8 +1,6 @@
 #ifndef __RISCV_TYPES_H__
 #define __RISCV_TYPES_H__
 
-#ifndef __ASSEMBLY__
-
 typedef __signed__ char __s8;
 typedef unsigned char __u8;
 
@@ -57,8 +55,6 @@ typedef u64 register_t;
 
 #endif
 
-#endif /* __ASSEMBLY__ */
-
 #endif /* __RISCV_TYPES_H__ */
 /*
  * Local variables:
--- a/xen/arch/x86/include/asm/types.h
+++ b/xen/arch/x86/include/asm/types.h
@@ -1,8 +1,6 @@
 #ifndef __X86_TYPES_H__
 #define __X86_TYPES_H__
 
-#ifndef __ASSEMBLY__
-
 typedef __signed__ char __s8;
 typedef unsigned char __u8;
 
@@ -32,6 +30,4 @@ typedef unsigned long paddr_t;
 #define INVALID_PADDR (~0UL)
 #define PRIpaddr "016lx"
 
-#endif /* __ASSEMBLY__ */
-
 #endif /* __X86_TYPES_H__ */
--- a/xen/tools/symbols.c
+++ b/xen/tools/symbols.c
@@ -302,7 +302,6 @@ static void write_src(void)
 		return;
 	}
 	printf("#include <xen/config.h>\n");
-	printf("#include <asm/types.h>\n");
 	printf("#if BITS_PER_LONG == 64 && !defined(SYMBOLS_ORIGIN)\n");
 	printf("#define PTR .quad\n");
 	printf("#define ALGN .align 8\n");


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 08:44:30 2023
Message-ID: <8d41b8ca-decb-e175-c77a-1c8f91fd2046@suse.com>
Date: Fri, 20 Jan 2023 09:44:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Xenia Ragiadakou <burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86/shadow: make iommu_snoop usage consistent with HAP's
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

First of all, the variable is meaningful only when an IOMMU is in use
for a guest. Qualify the check accordingly, as is done elsewhere.
Furthermore, the controlling command line option is supposed to take
effect on VT-d only. Since command line parsing happens before we know
whether we're going to use VT-d, force the variable back to true when
instead running with AMD IOMMU(s).

Since it may otherwise end up misleading, also remove the clearing of
the flag from iommu_setup() and from vtd_setup()'s error path. The
variable simply is meaningless with IOMMU(s) disabled, so there's no
point in touching it there.

Finally also correct a relevant nearby comment.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I first considered adding the extra check to the outermost enclosing
if(), but I guess that would break the (questionable) case of assigning
MMIO ranges directly by address. The way it's done now also fits the
existing checks better, in particular the ones in p2m-ept.c.

Note that the #ifndef is put there in anticipation of iommu_snoop
becoming a #define when !IOMMU_INTEL (see
https://lists.xen.org/archives/html/xen-devel/2023-01/msg00103.html
and replies).

In _sh_propagate() I'm further puzzled: the iomem_access_permitted()
check certainly suggests very bad things could happen if it returned
false (i.e. in the implicit "else" case). The assumption looks to be
that no bad "target_mfn" can make it there. But overall things might
end up looking saner (and being cheaper) when simply using "mmio_mfn"
instead.
---
v2: Change title. Extend comment in acpi_iommu_init(). Purge clearing
    of the variable from iommu_setup() and vtd_setup()'s error path.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -556,8 +556,8 @@ _sh_propagate(struct vcpu *v,
 
         ASSERT(!(sflags & PAGE_CACHE_ATTRS));
 
-        /* compute the PAT index for shadow page entry when VT-d is enabled
-         * and device assigned.
+        /*
+         * Compute the PAT index for shadow page entry when IOMMU is enabled.
          * 1) direct MMIO: compute the PAT index with gMTRR=UC and gPAT.
          * 2) if enables snoop control, compute the PAT index as WB.
          * 3) if disables snoop control, compute the PAT index with
@@ -577,7 +577,7 @@ _sh_propagate(struct vcpu *v,
                             gfn_to_paddr(target_gfn),
                             mfn_to_maddr(target_mfn),
                             X86_MT_UC);
-                else if ( iommu_snoop )
+                else if ( is_iommu_enabled(d) && iommu_snoop )
                     sflags |= pat_type_2_pte_flags(X86_MT_WB);
                 else
                     sflags |= get_pat_flags(v,
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -587,9 +587,6 @@ int __init iommu_setup(void)
     printk("I/O virtualisation %sabled\n", iommu_enabled ? "en" : "dis");
     if ( !iommu_enabled )
     {
-#ifndef iommu_snoop
-        iommu_snoop = false;
-#endif
         iommu_hwdom_passthrough = false;
         iommu_hwdom_strict = false;
     }
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2746,9 +2746,6 @@ static int __init cf_check vtd_setup(voi
 
  error:
     iommu_enabled = 0;
-#ifndef iommu_snoop
-    iommu_snoop = false;
-#endif
     iommu_hwdom_passthrough = false;
     iommu_qinval = 0;
     iommu_intremap = iommu_intremap_off;
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -56,6 +56,17 @@ void __init acpi_iommu_init(void)
     if ( !acpi_disabled )
     {
         ret = acpi_dmar_init();
+
+#ifndef iommu_snoop
+        /*
+         * As long as there's no per-domain snoop control, and as long as on
+         * AMD we uniformly force coherent accesses, a possible command line
+         * override should affect VT-d only.
+         */
+        if ( ret )
+            iommu_snoop = true;
+#endif
+
         if ( ret == -ENODEV )
             ret = acpi_ivrs_init();
     }


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:17:49 2023
Message-ID: <0ec33871-96fa-bd9f-eb1b-eb37d3d7c982@xen.org>
Date: Fri, 20 Jan 2023 09:17:29 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v5 4/5] xen/riscv: introduce early_printk basic stuff
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
 <8d7ac0dc51a6331d3efa7fcda433616670b46700.1674131459.git.oleksii.kurochko@gmail.com>
 <aefd4cb9-9a60-4ef1-ff45-dedfb6c37203@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <aefd4cb9-9a60-4ef1-ff45-dedfb6c37203@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 20/01/2023 00:48, Andrew Cooper wrote:
> On 19/01/2023 2:07 pm, Oleksii Kurochko wrote:
>> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
>> new file mode 100644
>> index 0000000000..6f590e712b
>> --- /dev/null
>> +++ b/xen/arch/riscv/early_printk.c
>> @@ -0,0 +1,45 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * RISC-V early printk using SBI
>> + *
>> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
>> + */
>> +#include <asm/early_printk.h>
>> +#include <asm/sbi.h>
>> +
>> +/*
>> + * early_*() can be called from head.S with MMU-off.
>> + *
>> + * The following requirements should be honoured for early_*() to
>> + * work correctly:
>> + *    It should use PC-relative addressing for accessing symbols.
>> + *    To achieve that GCC cmodel=medany should be used.
>> + */
>> +#ifndef __riscv_cmodel_medany
>> +#error "early_*() can be called from head.S with MMU-off"
>> +#endif
> 
> This comment is false, and the check is bogus.

You already said that in the previous version and ... I replied back 
explaining why I think this is correct (see [1]).

>  > It needs deleting.

That might be the second step. The first step is to settle on the 
approach.

Cheers,

[1] 
https://lore.kernel.org/xen-devel/CAF3u54C2ewEfBN+ZT6VPaVu4vsqS_+12gr3YJ_jsg1sGHDhZ1A@mail.gmail.com/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:25:34 2023
Message-ID: <b5cb6edf-d97a-5a83-09a7-7ef5d154dcb1@xen.org>
Date: Fri, 20 Jan 2023 09:25:19 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 00/17] tools/xenstore: do some cleanup and fixes
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230118095016.13091-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230118095016.13091-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 18/01/2023 09:49, Juergen Gross wrote:
> This is a first run of post-XSA patches which piled up during the
> development phase of all the recent Xenstore related XSA patches.
> 
> This is a mixture of small fixes, enhancements and cleanups.
> 
> Changes in V4:
> - reordered the patches a little bit (patch 4 and patch 17 of V4 have
>    been moved)
> - addressed comments
> 
> Changes in V3:
> - patches 2, 3, and 5 of V2 have been applied already
> - new patch 12
> - addressed comments
> 
> Changes in V2:
> - patches 1+2 of V1 have been applied already
> - addressed comments
> - new patch 19
> 
> Juergen Gross (17):
>    tools/xenstore: let talloc_free() preserve errno
>    tools/xenstore: remove all watches when a domain has stopped
>    tools/xenstore: add hashlist for finding struct domain by domid
>    tools/xenstore: make log macro globally available
>    tools/xenstore: introduce dummy nodes for special watch paths
>    tools/xenstore: replace watch->relative_path with a prefix length
>    tools/xenstore: move changed domain handling
>    tools/xenstore: change per-domain node accounting interface
>    tools/xenstore: replace literal domid 0 with dom0_domid
>    tools/xenstore: make domain_is_unprivileged() an inline function
>    tools/xenstore: let chk_domain_generation() return a bool
>    tools/xenstore: don't let hashtable_remove() return the removed value
>    tools/xenstore: switch hashtable to use the talloc framework
>    tools/xenstore: introduce trace classes
>    tools/xenstore: let check_store() check the accounting data
>    tools/xenstore: make output of "xenstore-control help" more pretty

I have committed up to this patch. The last one...

>    tools/xenstore: don't allow creating too many nodes in a transaction

... needs some review which I will do with part 2 of the xenstored series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:33:12 2023
Message-ID: <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
Date: Fri, 20 Jan 2023 09:32:53 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-3-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 19/01/2023 22:54, Stefano Stabellini wrote:
> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>> use of 'PRIpaddr' for printing PTEs is buggy, as a PTE is not a
>> physical address.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


I have committed the patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:34:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 09:34:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481528.746476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pInmX-0003W1-5L; Fri, 20 Jan 2023 09:33:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481528.746476; Fri, 20 Jan 2023 09:33:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pInmX-0003Vu-2P; Fri, 20 Jan 2023 09:33:57 +0000
Received: by outflank-mailman (input) for mailman id 481528;
 Fri, 20 Jan 2023 09:33:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4CKk=5R=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pInmV-0003Sv-Jv
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 09:33:55 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8e85754b-98a5-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 10:33:54 +0100 (CET)
Received: by mail-ed1-x530.google.com with SMTP id g11so529517eda.12
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 01:33:54 -0800 (PST)
Received: from [192.168.1.93] (adsl-67.109.242.138.tellas.gr. [109.242.138.67])
 by smtp.gmail.com with ESMTPSA id
 r14-20020aa7cb8e000000b00499b6b50419sm13929914edt.11.2023.01.20.01.33.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 20 Jan 2023 01:33:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e85754b-98a5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=2psdB3xKJ/oHlms8LHhw/3lbx6kKANne860IpKr65zE=;
        b=hXrAY0vSk9lOfCuEnZ8TYl+y4k1KZC3QoXsCWmfl0H0N423CwRS1HRAvH8tJIfZP3I
         EbLwIQtjfDH6K5d9iQo6BIwQS+5ITn/EjP62jKUxy6rgZ+UQkrTvYrCF7oujKGHteLOH
         jF9auGmnOl5hM+stSLWsKPnx3E2prK7Sdnq3lLCgYdzl+JYkBtCH57bIqUVpJKi3nNJU
         AWwqufHsqONSU5h76BNc8SnQM4xAB/XRmfkNozpHuMhBzIssir+9+HTNmAAuPq+0Fgw6
         lEecXv3tG1v4JuNa1TDrwNNpOycHe2JHvc6PMqkQ09C1py8OI+uAfCdEEZ0D1XJEdfh6
         vq2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=2psdB3xKJ/oHlms8LHhw/3lbx6kKANne860IpKr65zE=;
        b=41JYDibDCUCY7Jl3lHPJRtcCN7RU7yIlJNTXv2JfyFRQm2xo6SpmQZelwcir3jevOv
         RGsqNHEdS6uLgkqlVLq/ImPACfcEfCfTB7/Dj7IWXohtwtAbo+lnmQiYx4nnt+R1Kwro
         777FhLu5XdUijOpwmVK+A0zoRFQSmzZhy7Pkdi9HJE8lyGGDKFu8IT0mEzBhhpxDjbjA
         gwSy9jdMY7ConnA0PFoK5jadUn8YvsUbhKqau+qDJf/294WCpHUKJcaqqVb2GfeQqhDO
         MbwTuqMHybpec8h6puP5UAoLQEhRcGo37RVsOarOD6Rvv0bgMne2QJWede1gH1pxllWc
         q3jQ==
X-Gm-Message-State: AFqh2ko7jrsnUFzMxQsDspbfeWyKnrrvOo53QXUVTn22qHwxyIoDPuZ8
	3J9uuHzgYeh4Mkfw9mVCDPsIC0gpH1c=
X-Google-Smtp-Source: AMrXdXvhykUgeT9zoHb2RV+NyffkBM4EtE/aGI3KxoZ8lm3bsyhjpaSNGOX9KboMRbFO4ww3SLilDg==
X-Received: by 2002:a05:6402:cba:b0:49d:25f3:6b4e with SMTP id cn26-20020a0564020cba00b0049d25f36b4emr13933753edb.28.1674207234249;
        Fri, 20 Jan 2023 01:33:54 -0800 (PST)
Message-ID: <4173a547-e4ca-c7f5-58f0-34419b4c1484@gmail.com>
Date: Fri, 20 Jan 2023 11:33:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH v2] x86/shadow: make iommu_snoop usage consistent with
 HAP's
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
References: <8d41b8ca-decb-e175-c77a-1c8f91fd2046@suse.com>
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <8d41b8ca-decb-e175-c77a-1c8f91fd2046@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/20/23 10:44, Jan Beulich wrote:
> First of all, the variable is meaningful only when an IOMMU is in use for
> a guest. Qualify the check accordingly, as done elsewhere. Furthermore,
> the controlling command line option is supposed to take effect on VT-d
> only. Since command line parsing happens before we know whether we're
> going to use VT-d, force the variable back to being set when running
> with AMD IOMMU(s) instead.
> 
> Since it may end up misleading, also remove the clearing of the flag in
> iommu_setup() and vtd_setup()'s error path. The variable simply is
> meaningless with IOMMU(s) disabled, so there's no point touching it
> there.
> 
> Finally also correct a relevant nearby comment.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Xenia Ragiadakou <burzalodowa@gmail.com>

> ---
> I was first considering to add the extra check to the outermost
> enclosing if(), but I guess that would break the (questionable) case of
> assigning MMIO ranges directly by address. The way it's done now also
> better fits the existing checks, in particular the ones in p2m-ept.c.
> 
> Note that the #ifndef is put there in anticipation of iommu_snoop
> becoming a #define when !IOMMU_INTEL (see
> https://lists.xen.org/archives/html/xen-devel/2023-01/msg00103.html
> and replies).
> 
> In _sh_propagate() I'm further puzzled: the iomem_access_permitted()
> check certainly suggests very bad things could happen if it returned
> false (i.e. in the implicit "else" case). The assumption looks to be
> that no bad "target_mfn" can make it there. But overall things might
> end up looking more sane (and being cheaper) when simply using
> "mmio_mfn" instead.
> ---
> v2: Change title. Extend comment in acpi_iommu_init(). Purge clearing
>      of the variable from iommu_setup() and vtd_setup()'s error path.
> 
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -556,8 +556,8 @@ _sh_propagate(struct vcpu *v,
>   
>           ASSERT(!(sflags & PAGE_CACHE_ATTRS));
>   
> -        /* compute the PAT index for shadow page entry when VT-d is enabled
> -         * and device assigned.
> +        /*
> +         * Compute the PAT index for shadow page entry when IOMMU is enabled.
>            * 1) direct MMIO: compute the PAT index with gMTRR=UC and gPAT.
>            * 2) if enables snoop control, compute the PAT index as WB.
>            * 3) if disables snoop control, compute the PAT index with
> @@ -577,7 +577,7 @@ _sh_propagate(struct vcpu *v,
>                               gfn_to_paddr(target_gfn),
>                               mfn_to_maddr(target_mfn),
>                               X86_MT_UC);
> -                else if ( iommu_snoop )
> +                else if ( is_iommu_enabled(d) && iommu_snoop )
>                       sflags |= pat_type_2_pte_flags(X86_MT_WB);
>                   else
>                       sflags |= get_pat_flags(v,
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -587,9 +587,6 @@ int __init iommu_setup(void)
>       printk("I/O virtualisation %sabled\n", iommu_enabled ? "en" : "dis");
>       if ( !iommu_enabled )
>       {
> -#ifndef iommu_snoop
> -        iommu_snoop = false;
> -#endif
>           iommu_hwdom_passthrough = false;
>           iommu_hwdom_strict = false;
>       }
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -2746,9 +2746,6 @@ static int __init cf_check vtd_setup(voi
>   
>    error:
>       iommu_enabled = 0;
> -#ifndef iommu_snoop
> -    iommu_snoop = false;
> -#endif
>       iommu_hwdom_passthrough = false;
>       iommu_qinval = 0;
>       iommu_intremap = iommu_intremap_off;
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -56,6 +56,17 @@ void __init acpi_iommu_init(void)
>       if ( !acpi_disabled )
>       {
>           ret = acpi_dmar_init();
> +
> +#ifndef iommu_snoop
> +        /*
> +         * As long as there's no per-domain snoop control, and as long as on
> +         * AMD we uniformly force coherent accesses, a possible command line
> +         * override should affect VT-d only.
> +         */
> +        if ( ret )
> +            iommu_snoop = true;
> +#endif
> +
>           if ( ret == -ENODEV )
>               ret = acpi_ivrs_init();
>       }

-- 
Xenia


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:36:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 09:36:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481534.746487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pInog-00047l-Iv; Fri, 20 Jan 2023 09:36:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481534.746487; Fri, 20 Jan 2023 09:36:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pInog-00047e-Eb; Fri, 20 Jan 2023 09:36:10 +0000
Received: by outflank-mailman (input) for mailman id 481534;
 Fri, 20 Jan 2023 09:36:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pInof-000464-1T
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 09:36:09 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ddb963c8-98a5-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 10:36:07 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 42DC75F79A
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 09:36:07 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 30DFB1390C
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 09:36:07 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id hOKPCodgymMJNwAAMHmgww
 (envelope-from <jgross@suse.com>)
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 09:36:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddb963c8-98a5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674207367; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=h1+h9LaKYjKKkfhzp15y9beCI3q5/6EfhcdDhCVc0xw=;
	b=QodfrZcdoB/KHVXAcg8RF2h3E701Rb1WTo6Osw1ox3XYs3yROVAkfgbJ/qgSt2svIXMRFl
	TtMlPHVUJIV6T2cBrr9nqtLsuIx5xtFigQddQcrQvuW5yWGl6RTW8k1cvizFJ3JS0O5UHd
	f0LzpsjJR5GViX/JFKqlyMUJowAuyR0=
Message-ID: <425b75cb-b19e-3282-c574-4165215030f5@suse.com>
Date: Fri, 20 Jan 2023 10:36:06 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 00/17] tools/xenstore: do some cleanup and fixes
Content-Language: en-US
To: xen-devel@lists.xenproject.org
References: <20230118095016.13091-1-jgross@suse.com>
 <b5cb6edf-d97a-5a83-09a7-7ef5d154dcb1@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <b5cb6edf-d97a-5a83-09a7-7ef5d154dcb1@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------u04B9IWPjBARkhPwTcUD00oW"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------u04B9IWPjBARkhPwTcUD00oW
Content-Type: multipart/mixed; boundary="------------hvTsCjM0fbQLKaRxQS9cdHJt";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Message-ID: <425b75cb-b19e-3282-c574-4165215030f5@suse.com>
Subject: Re: [PATCH v4 00/17] tools/xenstore: do some cleanup and fixes
References: <20230118095016.13091-1-jgross@suse.com>
 <b5cb6edf-d97a-5a83-09a7-7ef5d154dcb1@xen.org>
In-Reply-To: <b5cb6edf-d97a-5a83-09a7-7ef5d154dcb1@xen.org>

--------------hvTsCjM0fbQLKaRxQS9cdHJt
Content-Type: multipart/mixed; boundary="------------9Jvatl388eZ3g5lFZCncCwTO"

--------------9Jvatl388eZ3g5lFZCncCwTO
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20.01.23 10:25, Julien Grall wrote:
> Hi Juergen,
> 
> On 18/01/2023 09:49, Juergen Gross wrote:
>> This is a first run of post-XSA patches which piled up during the
>> development phase of all the recent Xenstore related XSA patches.
>>
>> This is a mixture of small fixes, enhancements and cleanups.
>>
>> Changes in V4:
>> - reordered the patches a little bit (patch 4 and patch 17 of V4 have
>>    been moved)
>> - addressed comments
>>
>> Changes in V3:
>> - patches 2, 3, and 5 of V2 have been applied already
>> - new patch 12
>> - addressed comments
>>
>> Changes in V2:
>> - patches 1+2 of V1 have been applied already
>> - addressed comments
>> - new patch 19
>>
>> Juergen Gross (17):
>>    tools/xenstore: let talloc_free() preserve errno
>>    tools/xenstore: remove all watches when a domain has stopped
>>    tools/xenstore: add hashlist for finding struct domain by domid
>>    tools/xenstore: make log macro globally available
>>    tools/xenstore: introduce dummy nodes for special watch paths
>>    tools/xenstore: replace watch->relative_path with a prefix length
>>    tools/xenstore: move changed domain handling
>>    tools/xenstore: change per-domain node accounting interface
>>    tools/xenstore: replace literal domid 0 with dom0_domid
>>    tools/xenstore: make domain_is_unprivileged() an inline function
>>    tools/xenstore: let chk_domain_generation() return a bool
>>    tools/xenstore: don't let hashtable_remove() return the removed value
>>    tools/xenstore: switch hashtable to use the talloc framework
>>    tools/xenstore: introduce trace classes
>>    tools/xenstore: let check_store() check the accounting data
>>    tools/xenstore: make output of "xenstore-control help" more pretty
> 
> I have committed up to this patch. The last one...
> 
>>    tools/xenstore: don't allow creating too many nodes in a transaction
> 
> ... needs some review which I will do with part 2 of the xenstored series.

I'll do a resend of the part 2 rebased to the committed stuff, including
this leftover patch.


Juergen
--------------9Jvatl388eZ3g5lFZCncCwTO
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9Jvatl388eZ3g5lFZCncCwTO--

--------------hvTsCjM0fbQLKaRxQS9cdHJt--

--------------u04B9IWPjBARkhPwTcUD00oW
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPKYIYFAwAAAAAACgkQsN6d1ii/Ey+c
Vgf8CAFEvE/bVvYWC37NBmCN9MwdIDInHaHHjmKraR8IcEVOo/a4k4ivhG4jpy9TmnrWuVZfaetG
smagnoVJ6ikDb2m9U+xz1uPcuM9B7g0n3Azflt93iaWtGa8Ouu20RN/RqbCmTwcVuwCDXDZZRM4z
mwqXzw/25wBIJwdoOggEkuHAEoXfYpCe7OTDB+1RmK+CnZgrzKHKHk2po7hz0nz3gfqyqI8b7eUI
Naiwf1Ogm9V1t4D6NwWWrubmhTBmmINXEiu97Nio+GoqJvhYWLhKdWM+7Gx3iQWopRlCN7xx40pA
yEACX5/GB1Cnkt5wz2l4juCnvMK8fIn2qclD8yysOA==
=8hax
-----END PGP SIGNATURE-----

--------------u04B9IWPjBARkhPwTcUD00oW--


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:44:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 09:44:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481539.746497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pInwI-0005bC-B3; Fri, 20 Jan 2023 09:44:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481539.746497; Fri, 20 Jan 2023 09:44:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pInwI-0005b5-7h; Fri, 20 Jan 2023 09:44:02 +0000
Received: by outflank-mailman (input) for mailman id 481539;
 Fri, 20 Jan 2023 09:44:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nUaQ=5R=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pInwG-0005az-Fl
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 09:44:00 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2089.outbound.protection.outlook.com [40.107.93.89])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f563583d-98a6-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 10:43:57 +0100 (CET)
Received: from BN9PR03CA0311.namprd03.prod.outlook.com (2603:10b6:408:112::16)
 by DS7PR12MB6069.namprd12.prod.outlook.com (2603:10b6:8:9f::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 09:43:54 +0000
Received: from BN8NAM11FT093.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:112:cafe::64) by BN9PR03CA0311.outlook.office365.com
 (2603:10b6:408:112::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.26 via Frontend
 Transport; Fri, 20 Jan 2023 09:43:54 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT093.mail.protection.outlook.com (10.13.177.22) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Fri, 20 Jan 2023 09:43:54 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 03:43:53 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 01:43:53 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 20 Jan 2023 03:43:52 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f563583d-98a6-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kh4gYoYfOJKpPxNMYIppQhuKQKCY0hsYGqTHor0C8uiYU5xYEjetX/tIqzIerXUqC/EKDP4/kTAlKsrEf4h/0ld1SI/kZLx6rEqXu7lSdm5NqTKDplUYLbKAUZ14K7bCVEg+6u0AyeFz1NGEm/1sngJ0MKv7uktiLwnk5SsGir/LjBkrWznsIBqBC/IlVve+0amnTY15HYcaLBjhQ/TBqaBNSNvF/lwdiAm7BDWVcsUoW49SPdUunwMuyYuERe3oYs5Dw8e+36084Thuds3EvUR+C4zX/krdKlEtvE9Nekb5SqMdX1lGwlrTKP/zU2r4wYt2TydScscIXGKFul2pJw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IWMcQnwg73M7b3DbzEeTvxUa631JmsT6z26jDjgYwC4=;
 b=dVVi1CNFnzLlVZcmFjQUTwTjFbJ5HZcgbu3ZvHFEOaCoQyPlY2H3GhRqCyOEyAnpbGhJoT5YzJ3lsnbERadwM9MmVR4f4jXWWagTX4lDguDwo6aGpP0E91LUBTEvSia2sm5j1GDPOP2xA+Od820ZveCugsqZpuot4VHAmjANchJr8B2kyFtPgo+xJqmJmTJJV1VgIXnHxdFBgBw0SB0o5qaZbsMKlVhzb5ah9UYkd8j/fn07Rm3mDUGnmJnqf6cOwJE6AwX5OX4YZmcGN7RRwBksX0aSNcR3RZWPACBXTQwdYBsLaUDJqzeJQBoegTUFgccIy+AHTbNqhdne82qzKA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IWMcQnwg73M7b3DbzEeTvxUa631JmsT6z26jDjgYwC4=;
 b=xGS7rbGCZIPo/DRv+6htLOUjVz6r1Uyh7yvaVdSyHhl7PZplW+fXUSO3gtKt8VoDwRf/fhT8gmhU41HCRQDlzx+DhirolsusVBRryNJKsuQSZw5mXEhkbz2/GWYz0LyMrG5hA5sGdpK90HRa/OTr2cggng3rlSfO57R55WusfeQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <36821aa0-4e88-57f7-3f8b-35ba0529fabf@amd.com>
Date: Fri, 20 Jan 2023 10:43:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/3] xen/arm: Reduce redundant clear root pages when
 teardown p2m
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen
	<wei.chen@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-2-Henry.Wang@arm.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230116015820.1269387-2-Henry.Wang@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT093:EE_|DS7PR12MB6069:EE_
X-MS-Office365-Filtering-Correlation-Id: 4ff2dc9e-4aac-4304-4fbd-08dafacad829
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7CbRYqm6wF32V0QyMajvvJJkGTcO6sEmI1rJ26PUcHM4CLc6680gk+9UkoV/ku4DS0xBhMG3rwVSJdy06PbGFKjEVnISHoD3Hd9kPMDZLXFxOBSAXjyXS0NOLXofyidXWrH/7qVce95QevVWzkGZfW1/U0S795cVXdUW6LmJTamUWojg+d36TH7VEfkKhbQ9M+XAIPhGSEYujsT649KL0bknM2g3O+HaVpGeVhqIxDNPRrqeAXY15LaXMdWjtcIvOpW7PmGGW+VRLrWw0mdA+t+RF27AJQck0gPwode1Ffdw3nB2zFKsWxpgA2Mt3FLbt5NsykvsyuWC6pOepBlWwUc7VxBUQSutOQ1bptQ4+0c+S61SZ3Jst7g1q4lmxqt3O+Ei7xwJxg655r4n+bJKiX98+OB0lw0Ox29HAafyVMmaYACcmJJ3Yv6phPAdNzXbbGnzplrmAU6j39HC1TQZFdLmSh3Lt5J+YE8fIYxBc6owFIaG1OyOp+zYm0gqsFKI9PM/ikch7aFP88SqFtYYQcPz+rKtbxJNQjZkq3GkONnKaeOmvZgg2XEbmow8ctRKc5qY6eztPkmWYl0qKIwHwuArqC1Q1j9kNk+If4oxhr5JZ24lFu7C/hHMXMF7OaL8N+z3JhBlCNWNU9z/JOt83ni+szvVyMa3M6KyBXC1+CwwImnKS8x03IbtQT9648qHBeTSAxFZ0s8WAAmppY0RkYewGS5u83NHb+eWSfwcXHN7rBr4GmesPRLSnwdjw+3tengtvJKhLNGUghIAAxiHcA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(136003)(39860400002)(376002)(396003)(451199015)(46966006)(40470700004)(36840700001)(31686004)(82310400005)(36756003)(36860700001)(83380400001)(478600001)(31696002)(44832011)(2906002)(316002)(16576012)(54906003)(40460700003)(82740400003)(41300700001)(110136005)(47076005)(40480700001)(336012)(186003)(26005)(2616005)(53546011)(426003)(4326008)(81166007)(8936002)(86362001)(5660300002)(356005)(8676002)(70206006)(70586007)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 09:43:54.2801
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4ff2dc9e-4aac-4304-4fbd-08dafacad829
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT093.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB6069

Hi Henry,

On 16/01/2023 02:58, Henry Wang wrote:
> 
> 
> Currently, the p2m for a domain is torn down via two paths:
> (1) The normal path when a domain is destroyed.
> (2) arch_domain_destroy() in the failure path of domain creation.
> 
> When tearing down the p2m from (1), clearing and cleaning the root
> only needs to be done once, rather than on every call of p2m_teardown().
> When the teardown comes from (2), clearing and cleaning the root
> is unnecessary because the domain was never scheduled.
> 
> Therefore, this patch introduces a helper `p2m_clear_root_pages()` to
> clear and clean the root, and moves this logic outside of
> p2m_teardown(). With this movement, the `page_list_empty(&p2m->pages)`
> check can be dropped.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> ---
> Was: [PATCH v2] xen/arm: Reduce redundant clear root pages when
> teardown p2m. Picked into this series with the following changes from v1:
> 1. Introduce a new PROGRESS for p2m_clear_root_pages() to avoid
>    multiple calls when p2m_teardown() is preempted.
> 2. Move p2m_force_tlb_flush_sync() to p2m_clear_root_pages().
> ---
>  xen/arch/arm/domain.c          | 12 ++++++++++++
>  xen/arch/arm/include/asm/p2m.h |  1 +
>  xen/arch/arm/p2m.c             | 34 ++++++++++++++--------------------
>  3 files changed, 27 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 99577adb6c..961dab9166 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -959,6 +959,7 @@ enum {
>      PROG_xen,
>      PROG_page,
>      PROG_mapping,
> +    PROG_p2m_root,
>      PROG_p2m,
>      PROG_p2m_pool,
>      PROG_done,
> @@ -1021,6 +1022,17 @@ int domain_relinquish_resources(struct domain *d)
>          if ( ret )
>              return ret;
> 
> +    PROGRESS(p2m_root):
> +        /*
> +         * We are about to free the intermediate page-tables, so clear the
> +         * root to prevent any walk to use them.
The comment from here...
> +         * The domain will not be scheduled anymore, so in theory we should
> +         * not need to flush the TLBs. Do it for safety purpose.
> +         * Note that all the devices have already been de-assigned. So we don't
> +         * need to flush the IOMMU TLB here.
> +         */
to here does not make much sense in this place and should be moved to
p2m_clear_root_pages(), where a reader can see the call to
p2m_force_tlb_flush_sync().

Apart from that:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:48:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 09:48:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481546.746507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIo0n-0006HF-1z; Fri, 20 Jan 2023 09:48:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481546.746507; Fri, 20 Jan 2023 09:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIo0m-0006H8-VM; Fri, 20 Jan 2023 09:48:40 +0000
Received: by outflank-mailman (input) for mailman id 481546;
 Fri, 20 Jan 2023 09:48:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIo0m-0006H0-9d
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 09:48:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIo0l-0000t4-Sg; Fri, 20 Jan 2023 09:48:39 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[10.95.149.154]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIo0l-0005NL-MZ; Fri, 20 Jan 2023 09:48:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Ak93oFN0Mep3PDFkZ4xhG6CurNAM13G69c8JV5YQLRM=; b=se7ubq4esFFwmQOv8IqJUPg/hc
	MW9XqU7RdqOAeKnOMPdfv5ZnhMT1reIXWcY99RCtFoQ15a4eGNuGhd72HBXu/uq69TRKe5OnfXUWS
	cO1soD2n8LexssC2w6kWHEPQcSzxw1Odlpg6fNJ2JtF0e60nY8QZpHUwwKPhP3apFWqY=;
Message-ID: <bd0f1acb-3448-1f59-dcee-7e94972f442a@xen.org>
Date: Fri, 20 Jan 2023 09:48:37 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 03/11] xen/arm: domain_build: Replace use of paddr_t in
 find_domU_holes()
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-4-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191459230.731018@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301191459230.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 19/01/2023 23:02, Stefano Stabellini wrote:
> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>> bankbase, banksize and bankend are used to hold values of type 'unsigned
>> long long'. These can be represented as 'uint64_t' instead of 'paddr_t'.
>> This will ensure consistency with allocate_static_memory() (where we use
>> 'uint64_t' for rambase and ramsize).
>>
>> In the future, paddr_t can also be defined as 'uint32_t' to represent
>> 32-bit physical addresses.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> 
> I saw that Julien commented about using unsigned long long. To be
> honest, given that we typically use explicitly-sized integers (which is
> a good thing) 

From the CODING_STYLE:

"Fixed width types should only be used when a fixed width quantity is
meant (which for example may be a value read from or to be written to a
register)."

This is also how we used it in the Arm port so far.

> and unsigned long long is always uint64_t on all
> architectures, I can see the benefits of using uint64_t here.

From my understanding of the C spec, the only requirement is that
"unsigned long long" can represent 2^64 - 1. So it may technically be
wider than 64 bits.

> 
> At the same time, I can see that the reason for changing the type here
> is that we are dealing with ULL values, so it would make sense to use
> unsigned long long.
>
> I cannot see any big problems/advantages either way, so I am OK with
> this patch. (Julien, if you prefer unsigned long long I am fine with
> that also.)

I don't mind too much here.

> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> 
> 
>> ---
>>
>> Changes from -
>>
>> v1 - Modified the title/commit message from "[XEN v1 6/9] xen/arm: Use 'u64' to represent 'unsigned long long"
>> and moved it earlier.
>>
>>   xen/arch/arm/domain_build.c | 6 +++---
>>   1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 33a5945a2d..f904f12408 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -1726,9 +1726,9 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
>>                                     struct meminfo *ext_regions)
>>   {
>>       unsigned int i;
>> -    paddr_t bankend;
>> -    const paddr_t bankbase[] = GUEST_RAM_BANK_BASES;
>> -    const paddr_t banksize[] = GUEST_RAM_BANK_SIZES;
>> +    uint64_t bankend;
>> +    const uint64_t bankbase[] = GUEST_RAM_BANK_BASES;
>> +    const uint64_t banksize[] = GUEST_RAM_BANK_SIZES;
>>       int res = -ENOENT;
>>   
>>       for ( i = 0; i < GUEST_RAM_BANKS; i++ )
>> -- 
>> 2.17.1
>>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:52:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 09:52:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481551.746516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIo4n-0007ep-Hz; Fri, 20 Jan 2023 09:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481551.746516; Fri, 20 Jan 2023 09:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIo4n-0007ei-FI; Fri, 20 Jan 2023 09:52:49 +0000
Received: by outflank-mailman (input) for mailman id 481551;
 Fri, 20 Jan 2023 09:52:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIo4m-0007ec-Qw
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 09:52:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIo4m-0000xt-HW; Fri, 20 Jan 2023 09:52:48 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[10.95.149.154]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIo4m-0005RI-9z; Fri, 20 Jan 2023 09:52:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	References:Cc:To:From:Subject:MIME-Version:Date:Message-ID;
	bh=/Oaj7xA8wsaFRUomI5eT7IQZAZ7QITj+EhLlBZL0KuA=; b=cgzscaRdM37gV9OWdwJRuYebfq
	fLSw7NjMOZjjIO3v9Vfw3zu+OM4xUSq36wNOx70Z+tsOW6kMr9eyUEWltdFeFlaDwL8EDdGB4yOvV
	hLaYxY/RbA1YBe2j2TQtvID8m0bpljgPBWSdnztfnedp6AmnpbzzyykVLSiakPOgrw6U=;
Message-ID: <4cffb178-1b15-23b4-b69b-1bf01f96ee56@xen.org>
Date: Fri, 20 Jan 2023 09:52:46 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 03/11] xen/arm: domain_build: Replace use of paddr_t in
 find_domU_holes()
Content-Language: en-US
From: Julien Grall <julien@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-4-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191459230.731018@ubuntu-linux-20-04-desktop>
 <bd0f1acb-3448-1f59-dcee-7e94972f442a@xen.org>
In-Reply-To: <bd0f1acb-3448-1f59-dcee-7e94972f442a@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 20/01/2023 09:48, Julien Grall wrote:
> Hi,
> 
> On 19/01/2023 23:02, Stefano Stabellini wrote:
>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>> bankbase, banksize and bankend are used to hold values of type 'unsigned
>>> long long'. These can be represented as 'uint64_t' instead of 'paddr_t'.
>>> This will ensure consistency with allocate_static_memory() (where we use
>>> 'uint64_t' for rambase and ramsize).
>>>
>>> In the future, paddr_t can also be defined as 'uint32_t' to represent
>>> 32-bit physical addresses.
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>
>> I saw that Julien commented about using unsigned long long. To be
>> honest, given that we typically use explicitly-sized integers (which is
>> a good thing) 
> 
>  From the CODING_STYLE:
> 
> "Fixed width types should only be used when a fixed width quantity is
> meant (which for example may be a value read from or to be written to a
> register)."
> 
> This is also how we used it in the Arm port so far.
> 
>> and unsigned long long is always uint64_t on all
>> architectures, I can see the benefits of using uint64_t here.
> 
> From my understanding of the C spec, the only requirement is that
> "unsigned long long" can represent 2^64 - 1. So it may technically be
> wider than 64 bits.
> 
>>
>> At the same time, I can see that the reason for changing the type here
>> is that we are dealing with ULL values, so it would make sense to use
>> unsigned long long.
>>
>> I cannot see any big problems/advantages either way, so I am OK with
>> this patch. (Julien, if you prefer unsigned long long I am fine with
>> that also.)
> 
> I don't mind too much here.
> 
>>
>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 09:54:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 09:54:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481556.746527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIo6c-0008FK-U8; Fri, 20 Jan 2023 09:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481556.746527; Fri, 20 Jan 2023 09:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIo6c-0008FD-QB; Fri, 20 Jan 2023 09:54:42 +0000
Received: by outflank-mailman (input) for mailman id 481556;
 Fri, 20 Jan 2023 09:54:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nUaQ=5R=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pIo6b-0008F7-Pt
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 09:54:41 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2073.outbound.protection.outlook.com [40.107.94.73])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 736ae32c-98a8-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 10:54:39 +0100 (CET)
Received: from BN0PR02CA0016.namprd02.prod.outlook.com (2603:10b6:408:e4::21)
 by SA1PR12MB7271.namprd12.prod.outlook.com (2603:10b6:806:2b8::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 09:54:35 +0000
Received: from BN8NAM11FT041.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e4:cafe::6e) by BN0PR02CA0016.outlook.office365.com
 (2603:10b6:408:e4::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.26 via Frontend
 Transport; Fri, 20 Jan 2023 09:54:35 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT041.mail.protection.outlook.com (10.13.177.18) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Fri, 20 Jan 2023 09:54:34 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 03:54:34 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 03:54:33 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 20 Jan 2023 03:54:32 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 736ae32c-98a8-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=irrxW1YHEU5fG3nIfVGMlxRh6OLRCnkZ1hzikdltA/XuLX+MqYOt4uWX/dJrcySdfw7qwkoLwL0DQI8c1bQ45T0ULymhDMwMnlgZ/UNdwC91qeyyG0zLX2iUorZTU1x01naG92ZDjjzSNS8WQ17LJrjAfGYyb/hZWqFYiUIXeIiZyE0z7OnvDAn7ENXdQJuU4S6lLgdGE+6dBhzUN10RYQReBcRCYQBQY1b82ZpsxcIGUrHWdbOtMjmDnysbYEaraPNz5lpLNKnMii5s/xieB1iiTgUy20KCWdyxj1elO4ZCpiUMKzYwq5cUciDqf8ULWgvfQyDRlgGXv/KMSxgewQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2K/CRRq2IYUdYpxVIoqaguFzA04n9HeLJrwTyngZo/8=;
 b=Iperh/Yw8Fw3oBYlPb2/dZ2Z84vRIB0AXRu89w/GD9X/VL/VkBBc2WQbiCce/JEVOpu+DZVwNCGfmP3IyTzsmXheEhH7+nQosUAQGftnvGQ+/Sei0zgsobfA6D8kOHWZ0TJdn30t7+pILT29LRmotnuuPXDua9jAsE0atzCGXQkLnzngpH1CzMi/YJEATwvhA/AJFO/D6FxqgXAXJ5pdxU3cCrgym4sIlTuxCZT5JxViIP/yxtxIqt0svVdUwOyj0IqhXOtyQdKlnoItZ+YGyy1nG7jF2TBpTV+Pmz2Ijk4BdoxV/3f23ayW5G7G7Y+lLCHn0KsZTrGKrhguPm5NIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2K/CRRq2IYUdYpxVIoqaguFzA04n9HeLJrwTyngZo/8=;
 b=AOzdxdkMBn1/IgSXwLWnsnmrFipe8ZYc5pdO2Stmzeou4KVN5+DONimg/vYW9RweSIMZcryPzzlmuRff6Eg5hX0rsjUyWhKq5YcB+vmOkVhiwsmrz+XycCzN6yk/RILOAczI6FnCoGRt+YxG8yyQ9NVy+Hff0xsVgpRIgaUWK/8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <b2822e36-0972-5c4b-90d9-aee6533824b2@amd.com>
Date: Fri, 20 Jan 2023 10:54:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen
	<wei.chen@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-3-Henry.Wang@arm.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230116015820.1269387-3-Henry.Wang@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT041:EE_|SA1PR12MB7271:EE_
X-MS-Office365-Filtering-Correlation-Id: 6a9f2b1e-d647-4db3-2e9b-08dafacc55ec
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Lg2lFnry0HhFDuvHSA6tvTwmlF4K9n/Fk7GxDFmZj2lOuAuj0fPKrYE7L8pCooKxCeAsyZyM/NExYX9tsDtgZA8KLbTKArsRHjYAWiBCI7sRoN3qTwYgGDpdgamzy/W/zllKMHd3sdkzur4YUZobhkdA+WEBH/+cmKA9Tc+WCwnyPMXlYUmTBdaVHd+Z2RJ1OtuZwcCk0yp0oWsrwGK6mkdu8iFUGBcj+qwg4VVWNokBl5M2Jx1tjSlvOzb3jUetAfhisYSwRrElA+9zVL00SOK0Zb3DkenUygVcvdR/V4tQLlFXyBnA19K+LFXWENRZxwd63ERIXzjri+zKWiHEm8RSagFjZdcEvmNJydILoUmEKH4fr6JBKfpNeeMrWfBKk+9L0Y8Oay5lYnZfUibVHYMArMwTOL/BsXrJUvlmahYxpF5bAGqyxFRLYCZjeRnT6FvXxmYwBERAsMYehq30PGWSbm0JN89+KpoibvB5Cj5YxBNAqxetA7kuAEI59cDBboq+gMxcIYCGtifCwUBDXGEWscez1ujpyTt4xnSPQ5rQwNElBC7/VoQMyibZKTzWG2KdwwEH0x7SpHyanZDppqsNclA5YlBIkXdlHDYMKZJqv9eB/wuUF0ERKmBgwUfwI7UbvZ4xDILztfml8Wq2yBSqSBwx9Rp2fIGo44dnpjT+bq4gS4n/dL85VJrCJpVCK5Hw1Jez3JQ3fhpaaJEQWtCGDy6ThIwKS8ujgEAN5T0PsZZ227H45hKgdO+HlNX+J8i0M7m/3v23Ri0D+Gfkfw/0PEz2a8kTF7JqDx0oPEZ2v/vEW/2mc+6lIoT38mmxbCqOgF6D5d8WTw1RxWQD9l2a2Fpj5VB+469SXt2HwHI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(136003)(396003)(376002)(346002)(451199015)(40470700004)(36840700001)(46966006)(44832011)(70586007)(70206006)(5660300002)(8936002)(8676002)(4326008)(16576012)(316002)(82310400005)(478600001)(966005)(54906003)(110136005)(36860700001)(26005)(53546011)(2906002)(186003)(2616005)(36756003)(31696002)(40460700003)(86362001)(336012)(81166007)(83380400001)(40480700001)(356005)(82740400003)(31686004)(41300700001)(426003)(47076005)(36900700001)(43740500002)(414714003)(473944003);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 09:54:34.7691
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a9f2b1e-d647-4db3-2e9b-08dafacc55ec
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT041.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7271

Hi Henry,

On 16/01/2023 02:58, Henry Wang wrote:
> 
> 
> Currently, the mapping of the GICv2 CPU interface is created in
> arch_domain_create(). This causes some trouble in populating and
> freeing the domain's P2M pages pool. For example, 16 P2M pages are
> required by default in p2m_init() to cope with the P2M mapping of the
> 8KB GICv2 CPU interface area, and these 16 P2M pages complicate the
> P2M destruction in the failure path of arch_domain_create().
> 
> As per the discussion in [1], similarly to the MMIO access for ACPI, this
> patch defers the GICv2 CPU interface mapping until the first MMIO
> access. This is achieved by moving the GICv2 CPU interface mapping
> code from vgic_v2_domain_init() to the stage-2 data abort trap handling
> code. The original CPU interface size and virtual CPU interface base
> address are now saved in `struct vgic_dist` instead of in local
> variables of vgic_v2_domain_init().
> 
> Note that the GICv2 changes introduced by this patch are not applied to
> the "New vGIC" implementation, as the "New vGIC" is not used. Also, since
The fact that the "New vGIC" is not used very often and its development is
still in progress does not mean that we can implement changes resulting in
a build failure when CONFIG_NEW_VGIC is enabled, which is the case with
your patch. So this needs to be fixed.

> the hardware domain (Dom0) has an unlimited-size P2M pool,
> gicv2_map_hwdom_extra_mappings() is not touched by this patch either.
> 
> [1] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> ---
>  xen/arch/arm/include/asm/vgic.h |  2 ++
>  xen/arch/arm/traps.c            | 19 ++++++++++++++++---
>  xen/arch/arm/vgic-v2.c          | 25 ++++++-------------------
>  3 files changed, 24 insertions(+), 22 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/vgic.h b/xen/arch/arm/include/asm/vgic.h
> index 3d44868039..1d37c291e1 100644
> --- a/xen/arch/arm/include/asm/vgic.h
> +++ b/xen/arch/arm/include/asm/vgic.h
> @@ -153,6 +153,8 @@ struct vgic_dist {
>      /* Base address for guest GIC */
>      paddr_t dbase; /* Distributor base address */
>      paddr_t cbase; /* CPU interface base address */
> +    paddr_t csize; /* CPU interface size */
> +    paddr_t vbase; /* virtual CPU interface base address */
Could you swap them so that base address variables are grouped?

>  #ifdef CONFIG_GICV3
>      /* GIC V3 addressing */
>      /* List of contiguous occupied by the redistributors */
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 061c92acbd..d98f166050 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1787,9 +1787,12 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
>  }
> 
>  /*
> - * When using ACPI, most of the MMIO regions will be mapped on-demand
> - * in stage-2 page tables for the hardware domain because Xen is not
> - * able to know from the EFI memory map the MMIO regions.
> + * Try to map the MMIO regions for some special cases:
> + * 1. When using ACPI, most of the MMIO regions will be mapped on-demand
> + *    in stage-2 page tables for the hardware domain because Xen is not
> + *    able to know from the EFI memory map the MMIO regions.
> + * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
> + *    on the first access of the MMIO region.
>   */
>  static bool try_map_mmio(gfn_t gfn)
>  {
> @@ -1798,6 +1801,16 @@ static bool try_map_mmio(gfn_t gfn)
>      /* For the hardware domain, all MMIOs are mapped with GFN == MFN */
>      mfn_t mfn = _mfn(gfn_x(gfn));
> 
> +    /*
> +     * Map the GICv2 virtual cpu interface in the gic cpu interface
NIT: in all the other places you use CPU (capital letters)

> +     * region of the guest on the first access of the MMIO region.
> +     */
> +    if ( d->arch.vgic.version == GIC_V2 &&
> +         gfn_x(gfn) == gfn_x(gaddr_to_gfn(d->arch.vgic.cbase)) )
There is a macro gfn_eq() that you can use to avoid open-coding it.

> +        return !map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
At this point you can use just gfn instead of gaddr_to_gfn(d->arch.vgic.cbase)

> +                                 d->arch.vgic.csize / PAGE_SIZE,
> +                                 maddr_to_mfn(d->arch.vgic.vbase));
> +
>      /*
>       * Device-Tree should already have everything mapped when building
>       * the hardware domain.
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index 0026cb4360..21e14a5a6f 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -644,10 +644,6 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
> 
>  static int vgic_v2_domain_init(struct domain *d)
>  {
> -    int ret;
> -    paddr_t csize;
> -    paddr_t vbase;
> -
>      /*
>       * The hardware domain and direct-mapped domain both get the hardware
>       * address.
> @@ -667,8 +663,8 @@ static int vgic_v2_domain_init(struct domain *d)
>           * aligned to PAGE_SIZE.
>           */
>          d->arch.vgic.cbase = vgic_v2_hw.cbase;
> -        csize = vgic_v2_hw.csize;
> -        vbase = vgic_v2_hw.vbase;
> +        d->arch.vgic.csize = vgic_v2_hw.csize;
> +        d->arch.vgic.vbase = vgic_v2_hw.vbase;
>      }
>      else if ( is_domain_direct_mapped(d) )
>      {
> @@ -683,8 +679,8 @@ static int vgic_v2_domain_init(struct domain *d)
>           */
>          d->arch.vgic.dbase = vgic_v2_hw.dbase;
>          d->arch.vgic.cbase = vgic_v2_hw.cbase;
> -        csize = GUEST_GICC_SIZE;
> -        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> +        d->arch.vgic.csize = GUEST_GICC_SIZE;
> +        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
>      }
>      else
>      {
> @@ -697,19 +693,10 @@ static int vgic_v2_domain_init(struct domain *d)
>           */
>          BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
>          d->arch.vgic.cbase = GUEST_GICC_BASE;
> -        csize = GUEST_GICC_SIZE;
> -        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> +        d->arch.vgic.csize = GUEST_GICC_SIZE;
> +        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
>      }
> 
> -    /*
> -     * Map the gic virtual cpu interface in the gic cpu interface
> -     * region of the guest.
> -     */
> -    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
> -                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
> -    if ( ret )
> -        return ret;
> -
Maybe adding some comment like "Mapping of the virtual CPU interface is deferred until first access"
would be helpful.

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:00:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:00:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481562.746547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCR-0001by-OB; Fri, 20 Jan 2023 10:00:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481562.746547; Fri, 20 Jan 2023 10:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCR-0001bn-LQ; Fri, 20 Jan 2023 10:00:43 +0000
Received: by outflank-mailman (input) for mailman id 481562;
 Fri, 20 Jan 2023 10:00:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoCQ-0001b3-1y
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:00:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4b46e61a-98a9-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 11:00:40 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B7B6B5F7E2;
 Fri, 20 Jan 2023 10:00:39 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8CE7B1390C;
 Fri, 20 Jan 2023 10:00:39 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id eIQaIUdmymMBRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:00:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b46e61a-98a9-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208839; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oOWWalZWNgxUmz2EcuOumPLWbSmCnLaSwu6nlgaH5og=;
	b=PeYGAbCgwbKo9iCYeVsE3aXEFcjXCLgH2mXlxP4Wu4v/wKhw8CbqB5KduBbBRPdMLasqG3
	YjERjITMTkbj1Y8T+8DNISFoPRrduV6NMLREK6QosEUlQFQeKln0jXuxSdV6BOLj3SYmtP
	4ANE/DFwVz4RN9O0xMoHBlGvk9/zGYE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 01/13] tools/xenstore: don't allow creating too many nodes in a transaction
Date: Fri, 20 Jan 2023 11:00:16 +0100
Message-Id: <20230120100028.11142-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The accounting for the number of nodes of a domain in an active
transaction is not working correctly, as it allows the creation of an
arbitrary number of nodes. The transaction will eventually fail due to
exceeding the nodes quota, but before the transaction is closed an
unprivileged guest could cause Xenstore to use a lot of memory.
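
A minimal model of the intended behaviour (the names and the quota value
are illustrative, not the Xenstore API): node creations inside a
transaction are charged against the per-domain quota immediately, so the
request fails before unbounded memory can be allocated.

```c
/* Sketch only: charge transactional node creations against the quota. */
#include <assert.h>
#include <errno.h>

#define QUOTA_NB_ENTRY 1000   /* assumed per-domain node quota */

struct tx_acc {
	int base_nodes;   /* nodes owned outside the transaction */
	int delta_nodes;  /* nodes created inside the transaction */
};

/* Returns 0 on success, ENOSPC if creation would exceed the quota. */
static int tx_create_node(struct tx_acc *tx)
{
	if (tx->base_nodes + tx->delta_nodes + 1 > QUOTA_NB_ENTRY)
		return ENOSPC;
	tx->delta_nodes++;
	return 0;
}
```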

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_domain.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 9ef41ede03..7eb9cd077b 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1116,9 +1116,8 @@ int domain_nbentry_fix(unsigned int domid, int num, bool update)
 
 int domain_nbentry(struct connection *conn)
 {
-	return (domain_is_unprivileged(conn))
-		? conn->domain->nbentry
-		: 0;
+	return domain_is_unprivileged(conn)
+	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:00:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:00:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481561.746537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCK-0001Kw-G2; Fri, 20 Jan 2023 10:00:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481561.746537; Fri, 20 Jan 2023 10:00:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCK-0001Kp-DC; Fri, 20 Jan 2023 10:00:36 +0000
Received: by outflank-mailman (input) for mailman id 481561;
 Fri, 20 Jan 2023 10:00:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoCJ-0001Kj-H3
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:00:35 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 47fb4d70-98a9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:00:34 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 22F5122A04;
 Fri, 20 Jan 2023 10:00:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D0ACA1390C;
 Fri, 20 Jan 2023 10:00:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id NthvMUFmymPvRAAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:00:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47fb4d70-98a9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208834; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=BP2Ghnh4lJv6QN3DkwdB0Xdmy7oy5X0QBXuC3leED8Q=;
	b=f4Rq3o9pbCpc+uXh3oFQ6x+6OdFH4+ok8jJoISu7YorWjME1Whd4mY7dTlBVWsAWbRdfx4
	ER3NXWydAw1G0bRLNaJi2GTJrfdPM/+LBax9p/DoMmZlBjo1EPhKveiVwE5/Nx1wsGTOqg
	8dQrpRn/av6SYjXNk8A1mqaxq1WyOjM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 00/13] tools/xenstore: rework internal accounting
Date: Fri, 20 Jan 2023 11:00:15 +0100
Message-Id: <20230120100028.11142-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series reworks the Xenstore internal accounting to use a uniform
generic framework. It also adds some useful diagnostic information,
such as an accounting trace and the maximum per-domain and global quota
values seen.
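
The core idea of the uniform framework can be sketched as follows (the
names here are mine, not the exact API of the series): all per-domain
counters live in one array indexed by an enum, so the add/subtract and
high-water-mark logic is written once rather than per counter.

```c
/* Sketch of enum-indexed per-domain accounting with max tracking. */
#include <assert.h>

enum acc_item { ACC_NODES, ACC_WATCHES, ACC_N };

struct dom_acc {
	unsigned int cur[ACC_N];
	unsigned int max[ACC_N];  /* high-water mark for diagnostics */
};

static void acc_add(struct dom_acc *d, enum acc_item what, int val)
{
	assert(what < ACC_N);
	d->cur[what] += val;
	if (d->cur[what] > d->max[what])
		d->max[what] = d->cur[what];
}
```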

Changes in V2:
- added patch 1 (leftover from previous series)
- rebase

Juergen Gross (13):
  tools/xenstore: don't allow creating too many nodes in a transaction
  tools/xenstore: manage per-transaction domain accounting data in an
    array
  tools/xenstore: introduce accounting data array for per-domain values
  tools/xenstore: add framework to commit accounting data on success
    only
  tools/xenstore: use accounting buffering for node accounting
  tools/xenstore: add current connection to domain_memory_add()
    parameters
  tools/xenstore: use accounting data array for per-domain values
  tools/xenstore: add accounting trace support
  tools/xenstore: add TDB access trace support
  tools/xenstore: switch transaction accounting to generic accounting
  tools/xenstore: remember global and per domain max accounting values
  tools/xenstore: use generic accounting for remaining quotas
  tools/xenstore: switch quota management to be table based

 docs/misc/xenstore.txt                 |   5 +-
 tools/xenstore/xenstored_control.c     |  65 ++--
 tools/xenstore/xenstored_core.c        | 164 +++++-----
 tools/xenstore/xenstored_core.h        |  23 +-
 tools/xenstore/xenstored_domain.c      | 435 ++++++++++++++++++-------
 tools/xenstore/xenstored_domain.h      |  55 +++-
 tools/xenstore/xenstored_transaction.c |  22 +-
 tools/xenstore/xenstored_watch.c       |  15 +-
 8 files changed, 506 insertions(+), 278 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:00:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481563.746557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCV-0001u5-5M; Fri, 20 Jan 2023 10:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481563.746557; Fri, 20 Jan 2023 10:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCV-0001tp-1p; Fri, 20 Jan 2023 10:00:47 +0000
Received: by outflank-mailman (input) for mailman id 481563;
 Fri, 20 Jan 2023 10:00:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoCU-0001Kj-7l
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:00:46 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e963540-98a9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:00:45 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4E07222A05;
 Fri, 20 Jan 2023 10:00:45 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 206721390C;
 Fri, 20 Jan 2023 10:00:45 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Il6JBk1mymMJRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:00:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e963540-98a9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208845; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hxLPEC5sKQgZtz5rlLvxKLZpRKYHDJAhBxz2WqBzjA4=;
	b=Vg3oEfgUnsDQokH4I9VcSnxhPu8rc5LFCJFO/PShVwybyUbjWauy5obgnLvACzVWhqp6QK
	M8KWTfdejnuPsrI8ElPPiI9Du/DRuq8tOMnlOMUQTl5AtICEJ+ELbInXOUu2OiUx5YZ1U0
	gBLn+9aA3jCC+gkGr5bEC7+TOs604nk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 02/13] tools/xenstore: manage per-transaction domain accounting data in an array
Date: Fri, 20 Jan 2023 11:00:17 +0100
Message-Id: <20230120100028.11142-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to prepare for keeping accounting data in an array instead of
in independent fields, switch the struct changed_domain accounting data
to that scheme, for now using an array with only one element.

To allow extending this scheme later, add the needed indexing enum to
xenstored_domain.h.
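
The V2 change requested by Julien Grall can be illustrated like this
(a standalone model, not the patch's exact code): the accounting item is
an enum, and an assert() guarantees it never indexes past the array.

```c
/* Sketch: enum-typed item with an assert() bound check on the array. */
#include <assert.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

enum accitem { ACC_NODES, ACC_TR_N };

struct changed_dom { int acc[ACC_TR_N]; };

static int acc_add_changed(struct changed_dom *cd, enum accitem what,
			   int val)
{
	assert(what < ARRAY_SIZE(cd->acc));  /* catch out-of-range items */
	cd->acc[what] += val;
	return cd->acc[what];
}
```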

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- make "what" parameter of acc_add_changed_dom() an enum type, and
  assert() that it won't exceed the accounting array (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 19 +++++++++++--------
 tools/xenstore/xenstored_domain.h |  5 +++++
 2 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 7eb9cd077b..44e72937fa 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -99,8 +99,8 @@ struct changed_domain
 	/* Identifier of the changed domain. */
 	unsigned int domid;
 
-	/* Amount by which this domain's nbentry field has changed. */
-	int nbentry;
+	/* Accounting data. */
+	int acc[ACC_TR_N];
 };
 
 static struct hashtable *domhash;
@@ -550,7 +550,7 @@ int acc_fix_domains(struct list_head *head, bool update)
 	int cnt;
 
 	list_for_each_entry(cd, head, list) {
-		cnt = domain_nbentry_fix(cd->domid, cd->nbentry, update);
+		cnt = domain_nbentry_fix(cd->domid, cd->acc[ACC_NODES], update);
 		if (!update) {
 			if (cnt >= quota_nb_entry_per_domain)
 				return ENOSPC;
@@ -595,19 +595,21 @@ static struct changed_domain *acc_get_changed_domain(const void *ctx,
 	return cd;
 }
 
-static int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
-			       unsigned int domid)
+static int acc_add_changed_dom(const void *ctx, struct list_head *head,
+			       enum accitem what, int val, unsigned int domid)
 {
 	struct changed_domain *cd;
 
+	assert(what < ARRAY_SIZE(cd->acc));
+
 	cd = acc_get_changed_domain(ctx, head, domid);
 	if (!cd)
 		return 0;
 
 	errno = 0;
-	cd->nbentry += val;
+	cd->acc[what] += val;
 
-	return cd->nbentry;
+	return cd->acc[what];
 }
 
 static void domain_conn_reset(struct domain *domain)
@@ -1071,7 +1073,8 @@ static int domain_nbentry_add(struct connection *conn, unsigned int domid,
 
 	if (conn && conn->transaction) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_dom_nbentry(conn->transaction, head, add, domid);
+		ret = acc_add_changed_dom(conn->transaction, head, ACC_NODES,
+					  add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index dc4660861e..6a2b76a85b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -19,6 +19,11 @@
 #ifndef _XENSTORED_DOMAIN_H
 #define _XENSTORED_DOMAIN_H
 
+enum accitem {
+	ACC_NODES,
+	ACC_TR_N	/* Number of elements per transaction and domain. */
+};
+
 void handle_event(void);
 
 void check_domains(void);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:00:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481566.746567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCb-0002Ha-Ea; Fri, 20 Jan 2023 10:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481566.746567; Fri, 20 Jan 2023 10:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCb-0002HT-BS; Fri, 20 Jan 2023 10:00:53 +0000
Received: by outflank-mailman (input) for mailman id 481566;
 Fri, 20 Jan 2023 10:00:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoCa-0001Kj-0W
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:00:52 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51ea9e8a-98a9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:00:51 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id DCA3822A07;
 Fri, 20 Jan 2023 10:00:50 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B0A001390C;
 Fri, 20 Jan 2023 10:00:50 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id gMLEKVJmymMVRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:00:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51ea9e8a-98a9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208850; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZMwbDEWp3B1In6BAPetozQ+yEIHHD7B6U6eO2oWzgco=;
	b=t/69B4C37lx7vYqCtZjiIVddJ+i6IyO6VnGZ09FEH3DNc2ioxTQHhiVz2lJanswQXtnHXF
	LSKldIy3KmjWNE1tto0M3Ktf0KLKg49qXP1t7nx2jVqWzuV9s499fnTs+bIvKct4XnTiwj
	GRNNDDUKtmzYEH6Y1Y2kCz49ScVcLvU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 03/13] tools/xenstore: introduce accounting data array for per-domain values
Date: Fri, 20 Jan 2023 11:00:18 +0100
Message-Id: <20230120100028.11142-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce an accounting data array for per-domain accounting data and
use it initially for the number of nodes owned by a domain.

Make the accounting data type unsigned int, as no value is allowed to
be negative at any time.
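
The clamped update introduced here (domain_acc_add_chk() in the diff
below) can be modelled in isolation like this: an unsigned counter is
never driven below zero or above INT_MAX, even when a racing
transaction produces an inconsistent delta. This is a simplified
sketch, not the Xenstore function itself.

```c
/* Sketch: clamp an unsigned accounting value to [0, INT_MAX]. */
#include <assert.h>
#include <limits.h>

static unsigned int acc_add_chk(unsigned int cur, int add)
{
	/* Underflow or overflow: clamp instead of wrapping. */
	if ((add < 0 && (unsigned int)-add > cur) ||
	    cur + add > INT_MAX)
		return (add < 0) ? 0 : INT_MAX;

	return cur + add;
}
```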

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_domain.c | 71 ++++++++++++++++++-------------
 tools/xenstore/xenstored_domain.h |  5 ++-
 2 files changed, 45 insertions(+), 31 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 44e72937fa..f459c5aabb 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -69,8 +69,8 @@ struct domain
 	/* Has domain been officially introduced? */
 	bool introduced;
 
-	/* number of entry from this domain in the store */
-	int nbentry;
+	/* Accounting data for this domain. */
+	unsigned int acc[ACC_N];
 
 	/* Amount of memory allocated for this domain. */
 	int memory;
@@ -246,7 +246,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 
 	if (keep_orphans) {
 		set_tdb_key(node->name, &key);
-		domain->nbentry--;
+		domain_nbentry_dec(NULL, domain->domid);
 		node->perms.p[0].id = priv_domid;
 		node->acc.memory = 0;
 		domain_nbentry_inc(NULL, priv_domid);
@@ -270,7 +270,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		ret = WALK_TREE_SKIP_CHILDREN;
 	}
 
-	return domain->nbentry > 0 ? ret : WALK_TREE_SUCCESS_STOP;
+	return domain->acc[ACC_NODES] ? ret : WALK_TREE_SUCCESS_STOP;
 }
 
 static void domain_tree_remove(struct domain *domain)
@@ -278,7 +278,7 @@ static void domain_tree_remove(struct domain *domain)
 	int ret;
 	struct walk_funcs walkfuncs = { .enter = domain_tree_remove_sub };
 
-	if (domain->nbentry > 0) {
+	if (domain->acc[ACC_NODES]) {
 		ret = walk_node_tree(domain, NULL, "/", &walkfuncs, domain);
 		if (ret == WALK_TREE_ERROR_STOP)
 			syslog(LOG_ERR,
@@ -437,7 +437,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	resp = talloc_asprintf_append(resp, "%-16s: %8d\n", #t, e); \
 	if (!resp) return ENOMEM
 
-	ent(nodes, d->nbentry);
+	ent(nodes, d->acc[ACC_NODES]);
 	ent(watches, d->nbwatch);
 	ent(transactions, ta);
 	ent(outstanding, d->nboutstanding);
@@ -1047,8 +1047,28 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
-static int domain_nbentry_add(struct connection *conn, unsigned int domid,
-			      int add, bool no_dom_alloc)
+static int domain_acc_add_chk(struct domain *d, enum accitem what, int add,
+			      unsigned int domid)
+{
+	assert(what < ARRAY_SIZE(d->acc));
+
+	if ((add < 0 && -add > d->acc[what]) ||
+	    (d->acc[what] + add) > INT_MAX) {
+		/*
+		 * In a transaction when a node is being added/removed AND the
+		 * same node has been added/removed outside the transaction in
+		 * parallel, the resulting value will be wrong. This is no
+		 * problem, as the transaction will fail due to the resulting
+		 * conflict.
+		 */
+		return (add < 0) ? 0 : INT_MAX;
+	}
+
+	return d->acc[what] + add;
+}
+
+static int domain_acc_add(struct connection *conn, unsigned int domid,
+			  enum accitem what, int add, bool no_dom_alloc)
 {
 	struct domain *d;
 	struct list_head *head;
@@ -1071,56 +1091,49 @@ static int domain_nbentry_add(struct connection *conn, unsigned int domid,
 		}
 	}
 
-	if (conn && conn->transaction) {
+	if (conn && conn->transaction && what < ACC_TR_N) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_changed_dom(conn->transaction, head, ACC_NODES,
+		ret = acc_add_changed_dom(conn->transaction, head, what,
 					  add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
 		}
-		/*
-		 * In a transaction when a node is being added/removed AND the
-		 * same node has been added/removed outside the transaction in
-		 * parallel, the resulting number of nodes will be wrong. This
-		 * is no problem, as the transaction will fail due to the
-		 * resulting conflict.
-		 * In the node remove case the resulting number can be even
-		 * negative, which should be avoided.
-		 */
-		return max(d->nbentry + ret, 0);
+		return domain_acc_add_chk(d, what, ret, domid);
 	}
 
-	d->nbentry += add;
+	d->acc[what] = domain_acc_add_chk(d, what, add, domid);
 
-	return d->nbentry;
+	return d->acc[what];
 }
 
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
-	return (domain_nbentry_add(conn, domid, 1, false) < 0) ? errno : 0;
+	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
+	       ? errno : 0;
 }
 
 int domain_nbentry_dec(struct connection *conn, unsigned int domid)
 {
-	return (domain_nbentry_add(conn, domid, -1, true) < 0) ? errno : 0;
+	return (domain_acc_add(conn, domid, ACC_NODES, -1, true) < 0)
+	       ? errno : 0;
 }
 
 int domain_nbentry_fix(unsigned int domid, int num, bool update)
 {
 	int ret;
 
-	ret = domain_nbentry_add(NULL, domid, update ? num : 0, update);
+	ret = domain_acc_add(NULL, domid, ACC_NODES, update ? num : 0, update);
 	if (ret < 0 || update)
 		return ret;
 
 	return domid_is_unprivileged(domid) ? ret + num : 0;
 }
 
-int domain_nbentry(struct connection *conn)
+unsigned int domain_nbentry(struct connection *conn)
 {
 	return domain_is_unprivileged(conn)
-	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
+	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
@@ -1597,7 +1610,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 * If everything is correct incrementing the value for each node will
 	 * result in dom->nodes being 0 at the end.
 	 */
-	dom->nodes = -d->nbentry;
+	dom->nodes = -d->acc[ACC_NODES];
 
 	if (!hashtable_insert(domains, &dom->domid, dom)) {
 		talloc_free(dom);
@@ -1652,7 +1665,7 @@ static int domain_check_acc_cb(const void *k, void *v, void *arg)
 	if (!d)
 		return 0;
 
-	d->nbentry += dom->nodes;
+	d->acc[ACC_NODES] += dom->nodes;
 
 	return 0;
 }
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 6a2b76a85b..8259c114b0 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -21,7 +21,8 @@
 
 enum accitem {
 	ACC_NODES,
-	ACC_TR_N	/* Number of elements per transaction and domain. */
+	ACC_TR_N,        /* Number of elements per transaction and domain. */
+	ACC_N = ACC_TR_N /* Number of elements per domain. */
 };
 
 void handle_event(void);
@@ -72,7 +73,7 @@ int domain_alloc_permrefs(struct node_perms *perms);
 int domain_nbentry_inc(struct connection *conn, unsigned int domid);
 int domain_nbentry_dec(struct connection *conn, unsigned int domid);
 int domain_nbentry_fix(unsigned int domid, int num, bool update);
-int domain_nbentry(struct connection *conn);
+unsigned int domain_nbentry(struct connection *conn);
 int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
 
 /*
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:00:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481568.746577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCg-0002mX-NN; Fri, 20 Jan 2023 10:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481568.746577; Fri, 20 Jan 2023 10:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCg-0002mG-JP; Fri, 20 Jan 2023 10:00:58 +0000
Received: by outflank-mailman (input) for mailman id 481568;
 Fri, 20 Jan 2023 10:00:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoCf-0001Kj-Hg
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:00:57 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5542034f-98a9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:00:56 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7DBC922A09;
 Fri, 20 Jan 2023 10:00:56 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4C7D51390C;
 Fri, 20 Jan 2023 10:00:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id PnZbEVhmymMlRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:00:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5542034f-98a9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208856; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6SkvS5Fu9OM2Kf47HsJ0nDTi/Ok9cmmcQuqSRDJBw2o=;
	b=bzFQhE1WUgL5EtLFJzBWllaQZzU2RpG1CMzah626tQD2uvhq2HfebGyttS6xsC3n+fk0Gl
	s/IxJMLAHAXV024MHkTIUrYAzpvOsDmbtuokT98CoP6fgdm58T3g1By6N3Mpdr2/v7oAG3
	9+jhc/UHRYAJy43kt8YgUkQPtxJNse8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 04/13] tools/xenstore: add framework to commit accounting data on success only
Date: Fri, 20 Jan 2023 11:00:19 +0100
Message-Id: <20230120100028.11142-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of modifying accounting data and undoing those modifications
in case of an error during further processing, add a framework for
collecting the needed changes and committing them only when the whole
operation has succeeded.

This scheme can reuse large parts of the per-transaction accounting.
The changed_domain handling can be reused, but the accounting data
array must be able to have a different size for the two use cases.
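
The commit-on-success idea can be sketched as follows (a toy model with
invented names and a fixed-size pool, not the patch's list handling):
changes are queued per connection and either committed to the real
counters on success or dropped on error, so no undo logic is needed.

```c
/* Sketch: buffer accounting changes, then commit or drop them. */
#include <assert.h>
#include <stddef.h>

struct acc_change { int delta; struct acc_change *next; };

struct conn_acc {
	unsigned int nodes;          /* committed counter */
	struct acc_change *pending;  /* not yet committed changes */
};

static struct acc_change pool[8];    /* toy allocator for the sketch */
static unsigned int pool_used;

static void acc_queue(struct conn_acc *c, int delta)
{
	struct acc_change *ch = &pool[pool_used++];

	ch->delta = delta;
	ch->next = c->pending;
	c->pending = ch;
}

static void acc_commit(struct conn_acc *c)
{
	struct acc_change *ch;

	for (ch = c->pending; ch; ch = ch->next)
		c->nodes += ch->delta;
	c->pending = NULL;
}

static void acc_drop(struct conn_acc *c)
{
	c->pending = NULL;  /* discard uncommitted changes */
}
```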

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   |  5 +++
 tools/xenstore/xenstored_core.h   |  3 ++
 tools/xenstore/xenstored_domain.c | 64 +++++++++++++++++++++++++++----
 tools/xenstore/xenstored_domain.h |  5 ++-
 4 files changed, 68 insertions(+), 9 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 27dfbe9593..2d10cacf35 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1023,6 +1023,9 @@ static void send_error(struct connection *conn, int error)
 			break;
 		}
 	}
+
+	acc_drop(conn);
+
 	send_reply(conn, XS_ERROR, xsd_errors[i].errstring,
 			  strlen(xsd_errors[i].errstring) + 1);
 }
@@ -1060,6 +1063,7 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 	}
 
 	conn->in = NULL;
+	acc_commit(conn);
 
 	/* Update relevant header fields and fill in the message body. */
 	bdata->hdr.msg.type = type;
@@ -2195,6 +2199,7 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->is_stalled = false;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
+	INIT_LIST_HEAD(&new->acc_list);
 	INIT_LIST_HEAD(&new->ref_list);
 	INIT_LIST_HEAD(&new->watches);
 	INIT_LIST_HEAD(&new->transaction_list);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index c59b06551f..1f811f38cb 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -139,6 +139,9 @@ struct connection
 	struct list_head out_list;
 	uint64_t timeout_msec;
 
+	/* Not yet committed accounting data (valid if in != NULL). */
+	struct list_head acc_list;
+
 	/* Referenced requests no longer pending. */
 	struct list_head ref_list;
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f459c5aabb..33105dba8f 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -100,7 +100,7 @@ struct changed_domain
 	unsigned int domid;
 
 	/* Accounting data. */
-	int acc[ACC_TR_N];
+	int acc[];
 };
 
 static struct hashtable *domhash;
@@ -577,6 +577,7 @@ static struct changed_domain *acc_find_changed_domain(struct list_head *head,
 
 static struct changed_domain *acc_get_changed_domain(const void *ctx,
 						     struct list_head *head,
+						     enum accitem acc_sz,
 						     unsigned int domid)
 {
 	struct changed_domain *cd;
@@ -585,7 +586,7 @@ static struct changed_domain *acc_get_changed_domain(const void *ctx,
 	if (cd)
 		return cd;
 
-	cd = talloc_zero(ctx, struct changed_domain);
+	cd = talloc_zero_size(ctx, sizeof(*cd) + acc_sz * sizeof(cd->acc[0]));
 	if (!cd)
 		return NULL;
 
@@ -596,13 +597,13 @@ static struct changed_domain *acc_get_changed_domain(const void *ctx,
 }
 
 static int acc_add_changed_dom(const void *ctx, struct list_head *head,
-			       enum accitem what, int val, unsigned int domid)
+			       enum accitem acc_sz, enum accitem what,
+			       int val, unsigned int domid)
 {
 	struct changed_domain *cd;
 
-	assert(what < ARRAY_SIZE(cd->acc));
-
-	cd = acc_get_changed_domain(ctx, head, domid);
+	assert(what < acc_sz);
+	cd = acc_get_changed_domain(ctx, head, acc_sz, domid);
 	if (!cd)
 		return 0;
 
@@ -1071,6 +1072,7 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 			  enum accitem what, int add, bool no_dom_alloc)
 {
 	struct domain *d;
+	struct changed_domain *cd;
 	struct list_head *head;
 	int ret;
 
@@ -1091,10 +1093,26 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 		}
 	}
 
+	/* Temporary accounting data until final commit? */
+	if (conn && conn->in && what < ACC_REQ_N) {
+		/* Consider transaction local data. */
+		ret = 0;
+		if (conn->transaction && what < ACC_TR_N) {
+			head = transaction_get_changed_domains(
+				conn->transaction);
+			cd = acc_find_changed_domain(head, domid);
+			if (cd)
+				ret = cd->acc[what];
+		}
+		ret += acc_add_changed_dom(conn->in, &conn->acc_list, ACC_REQ_N,
+					  what, add, domid);
+		return errno ? -1 : domain_acc_add_chk(d, what, ret, domid);
+	}
+
 	if (conn && conn->transaction && what < ACC_TR_N) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_changed_dom(conn->transaction, head, what,
-					  add, domid);
+		ret = acc_add_changed_dom(conn->transaction, head, ACC_TR_N,
+					  what, add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
@@ -1107,6 +1125,36 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 	return d->acc[what];
 }
 
+void acc_drop(struct connection *conn)
+{
+	struct changed_domain *cd;
+
+	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
+		list_del(&cd->list);
+		talloc_free(cd);
+	}
+}
+
+void acc_commit(struct connection *conn)
+{
+	struct changed_domain *cd;
+	struct buffered_data *in = conn->in;
+	enum accitem what;
+
+	conn->in = NULL; /* Avoid recursion. */
+	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
+		list_del(&cd->list);
+		for (what = 0; what < ACC_REQ_N; what++)
+			if (cd->acc[what])
+				domain_acc_add(conn, cd->domid, what,
+					       cd->acc[what], true);
+
+		talloc_free(cd);
+	}
+
+	conn->in = in;
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 8259c114b0..cd85bd0cad 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -20,7 +20,8 @@
 #define _XENSTORED_DOMAIN_H
 
 enum accitem {
-	ACC_NODES,
+	ACC_REQ_N,       /* Number of elements per request and domain. */
+	ACC_NODES = ACC_REQ_N,
 	ACC_TR_N,        /* Number of elements per transaction and domain. */
 	ACC_N = ACC_TR_N /* Number of elements per domain. */
 };
@@ -103,6 +104,8 @@ void domain_outstanding_domid_dec(unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 int acc_fix_domains(struct list_head *head, bool update);
+void acc_drop(struct connection *conn);
+void acc_commit(struct connection *conn);
 
 /* Write rate limiting */
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:01:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:01:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481572.746587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCn-0003J1-UA; Fri, 20 Jan 2023 10:01:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481572.746587; Fri, 20 Jan 2023 10:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCn-0003Iu-QW; Fri, 20 Jan 2023 10:01:05 +0000
Received: by outflank-mailman (input) for mailman id 481572;
 Fri, 20 Jan 2023 10:01:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoCm-0001b3-3d
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:04 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5893aced-98a9-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 11:01:02 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0FDAB5F7E3;
 Fri, 20 Jan 2023 10:01:02 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DADED1390C;
 Fri, 20 Jan 2023 10:01:01 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 9H9DNF1mymMuRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5893aced-98a9-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208862; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=grOzEcqHpEHR284xFwYs35DiTP4FIbGzOZunWUj9dwg=;
	b=c1TMdmiLxgiHgA9/+KIGmpHJdZbSkkF/SqNFIAQ82c+UROadqGhvG0MvMwAr63LFzUUpp5
	pm4ipDjM6EduHwIhCb6U0sXDzvhxVFjsVscMZd3nLO8lq9mW9huaSzLBBckJ3IjYNtFiJK
	j/BWFeB+hldf8pM3DhjqFwy+KTQS7gM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 05/13] tools/xenstore: use accounting buffering for node accounting
Date: Fri, 20 Jan 2023 11:00:20 +0100
Message-Id: <20230120100028.11142-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add node accounting to the accounting data buffering framework in
order to avoid having to undo the accounting in case of failure.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   | 21 ++-------------------
 tools/xenstore/xenstored_domain.h |  4 ++--
 2 files changed, 4 insertions(+), 21 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 2d10cacf35..250ecd1199 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1452,7 +1452,6 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
 static int destroy_node(struct connection *conn, struct node *node)
 {
 	destroy_node_rm(conn, node);
-	domain_nbentry_dec(conn, get_node_owner(node));
 
 	/*
 	 * It is not possible to easily revert the changes in a transaction.
@@ -1797,27 +1796,11 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 	old_perms = node->perms;
 	domain_nbentry_dec(conn, get_node_owner(node));
 	node->perms = perms;
-	if (domain_nbentry_inc(conn, get_node_owner(node))) {
-		node->perms = old_perms;
-		/*
-		 * This should never fail because we had a reference on the
-		 * domain before and Xenstored is single-threaded.
-		 */
-		domain_nbentry_inc(conn, get_node_owner(node));
+	if (domain_nbentry_inc(conn, get_node_owner(node)))
 		return ENOMEM;
-	}
 
-	if (write_node(conn, node, false)) {
-		int saved_errno = errno;
-
-		domain_nbentry_dec(conn, get_node_owner(node));
-		node->perms = old_perms;
-		/* No failure possible as above. */
-		domain_nbentry_inc(conn, get_node_owner(node));
-
-		errno = saved_errno;
+	if (write_node(conn, node, false))
 		return errno;
-	}
 
 	fire_watches(conn, ctx, name, node, false, &old_perms);
 	send_ack(conn, XS_SET_PERMS);
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index cd85bd0cad..5df6ae72f8 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -20,9 +20,9 @@
 #define _XENSTORED_DOMAIN_H
 
 enum accitem {
+	ACC_NODES,
 	ACC_REQ_N,       /* Number of elements per request and domain. */
-	ACC_NODES = ACC_REQ_N,
-	ACC_TR_N,        /* Number of elements per transaction and domain. */
+	ACC_TR_N = ACC_REQ_N, /* Number of elements per transaction and domain. */
 	ACC_N = ACC_TR_N /* Number of elements per domain. */
 };
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:01:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:01:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481577.746597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCu-0003rg-E0; Fri, 20 Jan 2023 10:01:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481577.746597; Fri, 20 Jan 2023 10:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoCu-0003rZ-AR; Fri, 20 Jan 2023 10:01:12 +0000
Received: by outflank-mailman (input) for mailman id 481577;
 Fri, 20 Jan 2023 10:01:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoCt-0001Kj-Hn
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:11 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5be466a6-98a9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:01:07 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 9BC6522A04;
 Fri, 20 Jan 2023 10:01:07 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6FF211390C;
 Fri, 20 Jan 2023 10:01:07 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id oi66GWNmymM5RQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5be466a6-98a9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208867; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zwrTl2/5uOzPJK/703a+MTcvbAcwTPMWIDw3qiSNqIA=;
	b=gi7x5zNiJvQrmI8N/9pjlDwBGDSJK1dqCTvnBxW5qnWJfj1YenK2gbxQWIiqLvSrRaybOf
	j7qmtmAXtwvopmHA3vdpy02nMAR83BQ5WTaKjRMNeME37ywN6grQ8N4Ub9xLEpy6PY7Ul/
	hzB4bzMIJlvYkYeeAJSZhBj0X4hq6Uk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 06/13] tools/xenstore: add current connection to domain_memory_add() parameters
Date: Fri, 20 Jan 2023 11:00:21 +0100
Message-Id: <20230120100028.11142-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to enable switching memory accounting to the generic
array-based accounting, add the current connection to the parameters
of domain_memory_add().

This requires adding the connection to some other functions, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   | 28 ++++++++++++++++------------
 tools/xenstore/xenstored_domain.c |  3 ++-
 tools/xenstore/xenstored_domain.h | 14 +++++++++-----
 tools/xenstore/xenstored_watch.c  | 11 ++++++-----
 4 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 250ecd1199..1958df9147 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -246,7 +246,8 @@ static void free_buffered_data(struct buffered_data *out,
 		}
 	}
 
-	domain_memory_add_nochk(conn->id, -out->hdr.msg.len - sizeof(out->hdr));
+	domain_memory_add_nochk(conn, conn->id,
+				-out->hdr.msg.len - sizeof(out->hdr));
 
 	if (out->hdr.msg.type == XS_WATCH_EVENT) {
 		req = out->pend.req;
@@ -631,24 +632,25 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 	 * nodes to new owners.
 	 */
 	if (old_acc.memory)
-		domain_memory_add_nochk(old_domid,
+		domain_memory_add_nochk(conn, old_domid,
 					-old_acc.memory - key->dsize);
-	ret = domain_memory_add(new_domid, data->dsize + key->dsize,
-				no_quota_check);
+	ret = domain_memory_add(conn, new_domid,
+				data->dsize + key->dsize, no_quota_check);
 	if (ret) {
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
-			domain_memory_add_nochk(old_domid,
+			domain_memory_add_nochk(conn, old_domid,
 						old_acc.memory + key->dsize);
 		return ret;
 	}
 
 	/* TDB should set errno, but doesn't even set ecode AFAICT. */
 	if (tdb_store(tdb_ctx, *key, *data, TDB_REPLACE) != 0) {
-		domain_memory_add_nochk(new_domid, -data->dsize - key->dsize);
+		domain_memory_add_nochk(conn, new_domid,
+					-data->dsize - key->dsize);
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
-			domain_memory_add_nochk(old_domid,
+			domain_memory_add_nochk(conn, old_domid,
 						old_acc.memory + key->dsize);
 		errno = EIO;
 		return errno;
@@ -683,7 +685,7 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, key, acc->domid);
-		domain_memory_add_nochk(domid, -acc->memory - key->dsize);
+		domain_memory_add_nochk(conn, domid, -acc->memory - key->dsize);
 	}
 
 	return 0;
@@ -1052,11 +1054,13 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 	if (len <= DEFAULT_BUFFER_SIZE) {
 		bdata->buffer = bdata->default_buffer;
 		/* Don't check quota, path might be used for returning error. */
-		domain_memory_add_nochk(conn->id, len + sizeof(bdata->hdr));
+		domain_memory_add_nochk(conn, conn->id,
+					len + sizeof(bdata->hdr));
 	} else {
 		bdata->buffer = talloc_array(bdata, char, len);
 		if (!bdata->buffer ||
-		    domain_memory_add_chk(conn->id, len + sizeof(bdata->hdr))) {
+		    domain_memory_add_chk(conn, conn->id,
+					  len + sizeof(bdata->hdr))) {
 			send_error(conn, ENOMEM);
 			return;
 		}
@@ -1122,7 +1126,7 @@ void send_event(struct buffered_data *req, struct connection *conn,
 		}
 	}
 
-	if (domain_memory_add_chk(conn->id, len + sizeof(bdata->hdr))) {
+	if (domain_memory_add_chk(conn, conn->id, len + sizeof(bdata->hdr))) {
 		talloc_free(bdata);
 		return;
 	}
@@ -3307,7 +3311,7 @@ static void add_buffered_data(struct buffered_data *bdata,
 	 * be smaller. So ignore it. The limit will be applied for any resource
 	 * after the state has been fully restored.
 	 */
-	domain_memory_add_nochk(conn->id, len + sizeof(bdata->hdr));
+	domain_memory_add_nochk(conn, conn->id, len + sizeof(bdata->hdr));
 }
 
 void read_state_buffered_data(const void *ctx, struct connection *conn,
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 33105dba8f..9e7252372d 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1233,7 +1233,8 @@ static bool domain_chk_quota(struct domain *domain, int mem)
 	return false;
 }
 
-int domain_memory_add(unsigned int domid, int mem, bool no_quota_check)
+int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
+		      bool no_quota_check)
 {
 	struct domain *domain;
 
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 5df6ae72f8..11569dbd1b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -75,25 +75,29 @@ int domain_nbentry_inc(struct connection *conn, unsigned int domid);
 int domain_nbentry_dec(struct connection *conn, unsigned int domid);
 int domain_nbentry_fix(unsigned int domid, int num, bool update);
 unsigned int domain_nbentry(struct connection *conn);
-int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
+int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
+		      bool no_quota_check);
 
 /*
  * domain_memory_add_chk(): to be used when memory quota should be checked.
  * Not to be used when specifying a negative mem value, as lowering the used
  * memory should always be allowed.
  */
-static inline int domain_memory_add_chk(unsigned int domid, int mem)
+static inline int domain_memory_add_chk(struct connection *conn,
+					unsigned int domid, int mem)
 {
-	return domain_memory_add(domid, mem, false);
+	return domain_memory_add(conn, domid, mem, false);
 }
+
 /*
  * domain_memory_add_nochk(): to be used when memory quota should not be
  * checked, e.g. when lowering memory usage, or in an error case for undoing
  * a previous memory adjustment.
  */
-static inline void domain_memory_add_nochk(unsigned int domid, int mem)
+static inline void domain_memory_add_nochk(struct connection *conn,
+					   unsigned int domid, int mem)
 {
-	domain_memory_add(domid, mem, true);
+	domain_memory_add(conn, domid, mem, true);
 }
 void domain_watch_inc(struct connection *conn);
 void domain_watch_dec(struct connection *conn);
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 8ad0229df6..e30cd89be3 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -199,7 +199,7 @@ static struct watch *add_watch(struct connection *conn, char *path, char *token,
 	watch->token = talloc_strdup(watch, token);
 	if (!watch->node || !watch->token)
 		goto nomem;
-	if (domain_memory_add(conn->id, strlen(path) + strlen(token),
+	if (domain_memory_add(conn, conn->id, strlen(path) + strlen(token),
 			      no_quota_check))
 		goto nomem;
 
@@ -274,8 +274,9 @@ int do_unwatch(const void *ctx, struct connection *conn,
 	list_for_each_entry(watch, &conn->watches, list) {
 		if (streq(watch->node, node) && streq(watch->token, vec[1])) {
 			list_del(&watch->list);
-			domain_memory_add_nochk(conn->id, -strlen(watch->node) -
-							  strlen(watch->token));
+			domain_memory_add_nochk(conn, conn->id,
+						-strlen(watch->node) -
+						strlen(watch->token));
 			talloc_free(watch);
 			domain_watch_dec(conn);
 			send_ack(conn, XS_UNWATCH);
@@ -291,8 +292,8 @@ void conn_delete_all_watches(struct connection *conn)
 
 	while ((watch = list_top(&conn->watches, struct watch, list))) {
 		list_del(&watch->list);
-		domain_memory_add_nochk(conn->id, -strlen(watch->node) -
-						  strlen(watch->token));
+		domain_memory_add_nochk(conn, conn->id, -strlen(watch->node) -
+							strlen(watch->token));
 		talloc_free(watch);
 		domain_watch_dec(conn);
 	}
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:01:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:01:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481582.746607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoD0-0004MN-M8; Fri, 20 Jan 2023 10:01:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481582.746607; Fri, 20 Jan 2023 10:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoD0-0004MA-Im; Fri, 20 Jan 2023 10:01:18 +0000
Received: by outflank-mailman (input) for mailman id 481582;
 Fri, 20 Jan 2023 10:01:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoCz-0001b3-AF
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:17 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f3ad155-98a9-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 11:01:13 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 359925F7E2;
 Fri, 20 Jan 2023 10:01:13 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 09B4F1390C;
 Fri, 20 Jan 2023 10:01:13 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id XjEFAWlmymNJRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f3ad155-98a9-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208873; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2PkWYzC97VtDoa3mJZhvzo8tEMW0dfGEnmkLiIynZS0=;
	b=NP3LHFENih5OTNc9uS8WcpZR6kQsYaPKv/i+SyTwMyRT5ZX8E1FXHtGmI95DdEjSw9iocb
	cHyE5n8vLR7u4KigDVmE3o4DO3A2dgUVGONpl56iE9DbjKPYsLNTgU8uOlFx535KefEpQy
	6ejMnynx46sebXcOzLSRtxtnhsSpmx0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 07/13] tools/xenstore: use accounting data array for per-domain values
Date: Fri, 20 Jan 2023 11:00:22 +0100
Message-Id: <20230120100028.11142-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the accounting of per-domain usage of Xenstore memory, watches,
and outstanding requests to the array-based mechanism.
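The core idea, visible in the diff below, is that a single enum both names the accounting items and partitions them by lifetime via the *_N markers. The following sketch mirrors that layout; the exact positions of ACC_MEM, ACC_WATCH, and ACC_OUTST are an assumption here (the final enum is not shown in this excerpt), only their use as array indices is taken from the patch.

```c
/*
 * Illustrative enum layout: items below ACC_REQ_N are buffered per
 * request, items below ACC_TR_N per transaction, and all items up to
 * ACC_N live in the per-domain array. Positions of ACC_MEM, ACC_WATCH
 * and ACC_OUTST are assumed for illustration.
 */
enum accitem {
	ACC_NODES,
	ACC_REQ_N,              /* number of per-request items */
	ACC_TR_N = ACC_REQ_N,   /* number of per-transaction items */
	ACC_MEM = ACC_TR_N,     /* assumed position of new items */
	ACC_WATCH,
	ACC_OUTST,
	ACC_N                   /* number of per-domain items */
};

/* Hypothetical accessor: one array replaces the old ad-hoc fields
 * (memory, nbwatch, nboutstanding) that this patch removes. */
static int domain_acc_get(const int *acc, enum accitem what)
{
	return what < ACC_N ? acc[what] : -1;
}
```

Because every counter is now an index into one array, generic helpers such as domain_acc_add() can handle quota checks and buffering uniformly instead of each counter carrying its own update and undo logic.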

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   |   8 +--
 tools/xenstore/xenstored_domain.c | 111 +++++++++++-------------------
 tools/xenstore/xenstored_domain.h |  10 +--
 3 files changed, 52 insertions(+), 77 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 1958df9147..6ef60179fa 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -255,7 +255,7 @@ static void free_buffered_data(struct buffered_data *out,
 			req->pend.ref.event_cnt--;
 			if (!req->pend.ref.event_cnt && !req->on_out_list) {
 				if (req->on_ref_list) {
-					domain_outstanding_domid_dec(
+					domain_outstanding_dec(conn,
 						req->pend.ref.domid);
 					list_del(&req->list);
 				}
@@ -271,7 +271,7 @@ static void free_buffered_data(struct buffered_data *out,
 		out->on_ref_list = true;
 		return;
 	} else
-		domain_outstanding_dec(conn);
+		domain_outstanding_dec(conn, conn->id);
 
 	talloc_free(out);
 }
@@ -1077,7 +1077,7 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 	/* Queue for later transmission. */
 	list_add_tail(&bdata->list, &conn->out_list);
 	bdata->on_out_list = true;
-	domain_outstanding_inc(conn);
+	domain_outstanding_inc(conn, conn->id);
 }
 
 /*
@@ -3305,7 +3305,7 @@ static void add_buffered_data(struct buffered_data *bdata,
 	 * request have been delivered.
 	 */
 	if (bdata->hdr.msg.type != XS_WATCH_EVENT)
-		domain_outstanding_inc(conn);
+		domain_outstanding_inc(conn, conn->id);
 	/*
 	 * We are restoring the state after Live-Update and the new quota may
 	 * be smaller. So ignore it. The limit will be applied for any resource
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 9e7252372d..b1e29edb7e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -72,19 +72,12 @@ struct domain
 	/* Accounting data for this domain. */
 	unsigned int acc[ACC_N];
 
-	/* Amount of memory allocated for this domain. */
-	int memory;
+	/* Memory quota data for this domain. */
 	bool soft_quota_reported;
 	bool hard_quota_reported;
 	time_t mem_last_msg;
 #define MEM_WARN_MINTIME_SEC 10
 
-	/* number of watch for this domain */
-	int nbwatch;
-
-	/* Number of outstanding requests. */
-	int nboutstanding;
-
 	/* write rate limit */
 	wrl_creditt wrl_credit; /* [ -wrl_config_writecost, +_dburst ] */
 	struct wrl_timestampt wrl_timestamp;
@@ -200,14 +193,15 @@ static bool domain_can_write(struct connection *conn)
 
 static bool domain_can_read(struct connection *conn)
 {
-	struct xenstore_domain_interface *intf = conn->domain->interface;
+	struct domain *domain = conn->domain;
+	struct xenstore_domain_interface *intf = domain->interface;
 
 	if (domain_is_unprivileged(conn)) {
-		if (conn->domain->wrl_credit < 0)
+		if (domain->wrl_credit < 0)
 			return false;
-		if (conn->domain->nboutstanding >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
 			return false;
-		if (conn->domain->memory >= quota_memory_per_domain_hard &&
+		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
 		    quota_memory_per_domain_hard)
 			return false;
 	}
@@ -438,10 +432,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	if (!resp) return ENOMEM
 
 	ent(nodes, d->acc[ACC_NODES]);
-	ent(watches, d->nbwatch);
+	ent(watches, d->acc[ACC_WATCH]);
 	ent(transactions, ta);
-	ent(outstanding, d->nboutstanding);
-	ent(memory, d->memory);
+	ent(outstanding, d->acc[ACC_OUTST]);
+	ent(memory, d->acc[ACC_MEM]);
 
 #undef ent
 
@@ -1184,14 +1178,16 @@ unsigned int domain_nbentry(struct connection *conn)
 	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
 }
 
-static bool domain_chk_quota(struct domain *domain, int mem)
+static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 {
 	time_t now;
+	struct domain *domain;
 
-	if (!domain || !domid_is_unprivileged(domain->domid) ||
-	    (domain->conn && domain->conn->is_ignored))
+	if (!conn || !domid_is_unprivileged(conn->id) ||
+	    conn->is_ignored)
 		return false;
 
+	domain = conn->domain;
 	now = time(NULL);
 
 	if (mem >= quota_memory_per_domain_hard &&
@@ -1236,80 +1232,57 @@ static bool domain_chk_quota(struct domain *domain, int mem)
 int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
 		      bool no_quota_check)
 {
-	struct domain *domain;
+	int ret;
 
-	domain = find_domain_struct(domid);
-	if (domain) {
-		/*
-		 * domain_chk_quota() will print warning and also store whether
-		 * the soft/hard quota has been hit. So check no_quota_check
-		 * *after*.
-		 */
-		if (domain_chk_quota(domain, domain->memory + mem) &&
-		    !no_quota_check)
-			return ENOMEM;
-		domain->memory += mem;
-	} else {
-		/*
-		 * The domain the memory is to be accounted for should always
-		 * exist, as accounting is done either for a domain related to
-		 * the current connection, or for the domain owning a node
-		 * (which is always existing, as the owner of the node is
-		 * tested to exist and deleted or replaced by domid 0 if not).
-		 * So not finding the related domain MUST be an error in the
-		 * data base.
-		 */
-		errno = ENOENT;
-		corrupt(NULL, "Accounting called for non-existing domain %u\n",
-			domid);
-		return ENOENT;
-	}
+	ret = domain_acc_add(conn, domid, ACC_MEM, 0, true);
+	if (ret < 0)
+		return -ret;
+
+	/*
+	 * domain_chk_quota() will print a warning and also store whether the
+	 * soft/hard quota has been hit. So check no_quota_check *after*.
+	 */
+	if (domain_chk_quota(conn, ret + mem) && !no_quota_check)
+		return ENOMEM;
+
+	/*
+	 * The domain the memory is to be accounted for should always exist,
+	 * as accounting is done either for a domain related to the current
+	 * connection, or for the domain owning a node (which always exists,
+	 * as the owner of the node is tested to exist and is deleted or
+	 * replaced by domid 0 if not).
+	 * So not finding the related domain MUST be an error in the database.
+	 */
+	domain_acc_add(conn, domid, ACC_MEM, mem, true);
 
 	return 0;
 }
 
 void domain_watch_inc(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nbwatch++;
+	domain_acc_add(conn, conn->id, ACC_WATCH, 1, true);
 }
 
 void domain_watch_dec(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	if (conn->domain->nbwatch)
-		conn->domain->nbwatch--;
+	domain_acc_add(conn, conn->id, ACC_WATCH, -1, true);
 }
 
 int domain_watch(struct connection *conn)
 {
 	return (domain_is_unprivileged(conn))
-		? conn->domain->nbwatch
+		? domain_acc_add(conn, conn->id, ACC_WATCH, 0, true)
 		: 0;
 }
 
-void domain_outstanding_inc(struct connection *conn)
+void domain_outstanding_inc(struct connection *conn, unsigned int domid)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nboutstanding++;
+	domain_acc_add(conn, domid, ACC_OUTST, 1, true);
 }
 
-void domain_outstanding_dec(struct connection *conn)
+void domain_outstanding_dec(struct connection *conn, unsigned int domid)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nboutstanding--;
-}
-
-void domain_outstanding_domid_dec(unsigned int domid)
-{
-	struct domain *d = find_domain_by_domid(domid);
-
-	if (d)
-		d->nboutstanding--;
+	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
 }
 
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 11569dbd1b..37e14d1c8c 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -23,7 +23,10 @@ enum accitem {
 	ACC_NODES,
 	ACC_REQ_N,       /* Number of elements per request and domain. */
 	ACC_TR_N = ACC_REQ_N, /* Number of elements per transaction and domain. */
-	ACC_N = ACC_TR_N /* Number of elements per domain. */
+	ACC_WATCH = ACC_TR_N,
+	ACC_OUTST,
+	ACC_MEM,
+	ACC_N            /* Number of elements per domain. */
 };
 
 void handle_event(void);
@@ -102,9 +105,8 @@ static inline void domain_memory_add_nochk(struct connection *conn,
 void domain_watch_inc(struct connection *conn);
 void domain_watch_dec(struct connection *conn);
 int domain_watch(struct connection *conn);
-void domain_outstanding_inc(struct connection *conn);
-void domain_outstanding_dec(struct connection *conn);
-void domain_outstanding_domid_dec(unsigned int domid);
+void domain_outstanding_inc(struct connection *conn, unsigned int domid);
+void domain_outstanding_dec(struct connection *conn, unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 int acc_fix_domains(struct list_head *head, bool update);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:05:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:05:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481597.746617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoGn-0005v6-7y; Fri, 20 Jan 2023 10:05:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481597.746617; Fri, 20 Jan 2023 10:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoGn-0005uz-43; Fri, 20 Jan 2023 10:05:13 +0000
Received: by outflank-mailman (input) for mailman id 481597;
 Fri, 20 Jan 2023 10:05:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoDO-0001Kj-E1
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6ff823f4-98a9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:01:41 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 4F5135BD75;
 Fri, 20 Jan 2023 10:01:41 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 216FD1390C;
 Fri, 20 Jan 2023 10:01:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id H4vBBoVmymOkRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ff823f4-98a9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208901; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CDs7HKY34pzRyFTNGa+bJbWsTpq9KfRWJ41ZH2QVM3A=;
	b=liP9jaLr4NkbFGEC9OyHCJYJbS6YTs5CLFgL7R4i3Byt95/JD7T1lFBr4xgSUH+mu/8zBr
	gXX4ZmilnIWcpiQnYERolXc/iSskOj5oBnii6n1rHkHQnJk3YeuOFQ2Uc4Ro5ZsemBhQfD
	NY3m7TywINLbDSXAIvnVe5yNct2yMqQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 12/13] tools/xenstore: use generic accounting for remaining quotas
Date: Fri, 20 Jan 2023 11:00:27 +0100
Message-Id: <20230120100028.11142-13-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The maxrequests, node size, number of node permissions, and path length
quotas are a little bit special, as they are either active in
transactions only (maxrequests), or they are per-item values instead of
counts. Nevertheless, knowing the maximum of those quota related values
per domain would be beneficial, so add them to the generic accounting.

The per-domain value will never show a current number other than zero,
but the maximum seen can be gathered the same way as the number of
nodes during a transaction.

To be able to use the const qualifier in a new function, switch
domain_is_unprivileged() to take a const pointer, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 14 ++++-----
 tools/xenstore/xenstored_core.h        |  2 +-
 tools/xenstore/xenstored_domain.c      | 39 ++++++++++++++++++++------
 tools/xenstore/xenstored_domain.h      |  6 ++++
 tools/xenstore/xenstored_transaction.c |  4 +--
 tools/xenstore/xenstored_watch.c       |  2 +-
 6 files changed, 48 insertions(+), 19 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 5b85b69eb2..c34de5ca3a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -799,8 +799,8 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		+ node->perms.num * sizeof(node->perms.p[0])
 		+ node->datalen + node->childlen;
 
-	if (!no_quota_check && domain_is_unprivileged(conn) &&
-	    data.dsize >= quota_max_entry_size) {
+	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
+	    && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
 	}
@@ -1168,7 +1168,7 @@ static bool valid_chars(const char *node)
 		       "0123456789-/_@") == strlen(node));
 }
 
-bool is_valid_nodename(const char *node)
+bool is_valid_nodename(const struct connection *conn, const char *node)
 {
 	int local_off = 0;
 	unsigned int domid;
@@ -1188,7 +1188,8 @@ bool is_valid_nodename(const char *node)
 	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
 		local_off = 0;
 
-	if (strlen(node) > local_off + quota_max_path_len)
+	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
+			   quota_max_path_len))
 		return false;
 
 	return valid_chars(node);
@@ -1250,7 +1251,7 @@ static struct node *get_node_canonicalized(struct connection *conn,
 	*canonical_name = canonicalize(conn, ctx, name);
 	if (!*canonical_name)
 		return NULL;
-	if (!is_valid_nodename(*canonical_name)) {
+	if (!is_valid_nodename(conn, *canonical_name)) {
 		errno = EINVAL;
 		return NULL;
 	}
@@ -1775,8 +1776,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EINVAL;
 
 	perms.num--;
-	if (domain_is_unprivileged(conn) &&
-	    perms.num > quota_nb_perms_per_node)
+	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
 		return ENOSPC;
 
 	permstr = in->buffer + strlen(in->buffer) + 1;
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 6465105b4d..0140c25880 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -258,7 +258,7 @@ void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
 
 /* Is this a valid node name? */
-bool is_valid_nodename(const char *node);
+bool is_valid_nodename(const struct connection *conn, const char *node);
 
 /* Get name of parent node. */
 char *get_parent(const void *ctx, const char *node);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 91695b6313..f431076505 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -431,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u\n", #t, \
+	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u)\n", #t, \
 				      d->acc[e].val, d->acc[e].max); \
 	if (!resp) return ENOMEM
 
@@ -440,6 +440,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	ent(transactions, ACC_TRANS);
 	ent(outstanding, ACC_OUTST);
 	ent(memory, ACC_MEM);
+	ent(transaction-nodes, ACC_TRANSNODES);
 
 #undef ent
 
@@ -457,7 +458,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
+	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
 				      acc_global_max[e]);         \
 	if (!resp) return ENOMEM
 
@@ -466,6 +467,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
 	ent(transactions, ACC_TRANS);
 	ent(outstanding, ACC_OUTST);
 	ent(memory, ACC_MEM);
+	ent(transaction-nodes, ACC_TRANSNODES);
 
 #undef ent
 
@@ -1080,13 +1082,23 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
+static void domain_acc_chk_max(struct domain *d, enum accitem what,
+			       unsigned int val, unsigned int domid)
+{
+	assert(what < ARRAY_SIZE(d->acc));
+	assert(what < ARRAY_SIZE(acc_global_max));
+
+	if (val > d->acc[what].max)
+		d->acc[what].max = val;
+	if (val > acc_global_max[what] && domid_is_unprivileged(domid))
+		acc_global_max[what] = val;
+}
+
 static int domain_acc_add_chk(struct domain *d, enum accitem what, int add,
 			      unsigned int domid)
 {
 	unsigned int val;
 
-	assert(what < ARRAY_SIZE(d->acc));
-
 	if ((add < 0 && -add > d->acc[what].val) ||
 	    (d->acc[what].val + add) > INT_MAX) {
 		/*
@@ -1100,10 +1112,7 @@ static int domain_acc_add_chk(struct domain *d, enum accitem what, int add,
 	}
 
 	val = d->acc[what].val + add;
-	if (val > d->acc[what].max)
-		d->acc[what].max = val;
-	if (val > acc_global_max[what] && domid_is_unprivileged(domid))
-		acc_global_max[what] = val;
+	domain_acc_chk_max(d, what, val, domid);
 
 	return val;
 }
@@ -1219,6 +1228,20 @@ void domain_reset_global_acc(void)
 	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
 }
 
+bool domain_max_chk(const struct connection *conn, enum accitem what,
+		    unsigned int val, unsigned int quota)
+{
+	if (!conn || !conn->domain)
+		return false;
+
+	if (domain_is_unprivileged(conn) && val > quota)
+		return true;
+
+	domain_acc_chk_max(conn->domain, what, val, conn->id);
+
+	return false;
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 162e7dc0d0..ff341dd8bf 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -27,6 +27,10 @@ enum accitem {
 	ACC_OUTST,
 	ACC_MEM,
 	ACC_TRANS,
+	ACC_TRANSNODES,
+	ACC_NPERM,
+	ACC_PATHLEN,
+	ACC_NODESZ,
 	ACC_N            /* Number of elements per domain. */
 };
 
@@ -118,6 +122,8 @@ void acc_drop(struct connection *conn);
 void acc_commit(struct connection *conn);
 int domain_max_global_acc(const void *ctx, struct connection *conn);
 void domain_reset_global_acc(void);
+bool domain_max_chk(const struct connection *conn, enum accitem what,
+		    unsigned int val, unsigned int quota);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index ce6a12b576..7967770ca2 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -244,8 +244,8 @@ int access_node(struct connection *conn, struct node *node,
 
 	i = find_accessed_node(trans, node->name);
 	if (!i) {
-		if (trans->nodes >= quota_trans_nodes &&
-		    domain_is_unprivileged(conn)) {
+		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
+				   quota_trans_nodes)) {
 			ret = ENOSPC;
 			goto err;
 		}
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index e30cd89be3..61b1e3421e 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -176,7 +176,7 @@ static int check_watch_path(struct connection *conn, const void *ctx,
 		*path = canonicalize(conn, ctx, *path);
 		if (!*path)
 			return errno;
-		if (!is_valid_nodename(*path))
+		if (!is_valid_nodename(conn, *path))
 			goto inval;
 	}
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:05:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:05:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481598.746627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoGq-0006BM-HN; Fri, 20 Jan 2023 10:05:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481598.746627; Fri, 20 Jan 2023 10:05:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoGq-0006BD-EO; Fri, 20 Jan 2023 10:05:16 +0000
Received: by outflank-mailman (input) for mailman id 481598;
 Fri, 20 Jan 2023 10:05:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoDV-0001Kj-Dc
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:49 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 734a4554-98a9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:01:47 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id DA4FE22A09;
 Fri, 20 Jan 2023 10:01:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AD1571390C;
 Fri, 20 Jan 2023 10:01:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id /cDhKIpmymOqRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 734a4554-98a9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208906; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JJSfuNILKcRkQDqI4XTWSKAohkvwxo/7nLNUOQBwvSI=;
	b=cJCwuxjlNNqoOJ2gBa6O0851P8a5V531LqvLKnZ5P8pjVt9mKlCe1ww4Sxihl4cf5p8Jy2
	jwI2AdBbvfnPrTyiZwTNGk3WSGBy4TQ7yQrVY0FsNx5LS1zR2+xmbdw4fJSDXK4ORhugCb
	5isdLZD95h+szi9cykVkdVu2o3XMroU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 13/13] tools/xenstore: switch quota management to be table based
Date: Fri, 20 Jan 2023 11:00:28 +0100
Message-Id: <20230120100028.11142-14-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of having individual quota variables, switch to a table-based
approach like the one used for the generic accounting. Include all the
related data in the same table and add accessor functions.

This enables using the command line --quota parameter for setting all
possible quota values, while keeping the previous parameters for
compatibility.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
One further remark: it would be rather easy to add soft quotas for all
the other quotas (similar to the memory one). These could be used as
an early warning of the need to raise a global quota.
---
 tools/xenstore/xenstored_control.c     |  43 ++------
 tools/xenstore/xenstored_core.c        |  85 ++++++++--------
 tools/xenstore/xenstored_core.h        |  10 --
 tools/xenstore/xenstored_domain.c      | 132 +++++++++++++++++--------
 tools/xenstore/xenstored_domain.h      |  12 ++-
 tools/xenstore/xenstored_transaction.c |   5 +-
 tools/xenstore/xenstored_watch.c       |   2 +-
 7 files changed, 155 insertions(+), 134 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index a2ba64a15c..75f51a80db 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -221,35 +221,6 @@ static int do_control_log(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-struct quota {
-	const char *name;
-	int *quota;
-	const char *descr;
-};
-
-static const struct quota hard_quotas[] = {
-	{ "nodes", &quota_nb_entry_per_domain, "Nodes per domain" },
-	{ "watches", &quota_nb_watch_per_domain, "Watches per domain" },
-	{ "transactions", &quota_max_transaction, "Transactions per domain" },
-	{ "outstanding", &quota_req_outstanding,
-		"Outstanding requests per domain" },
-	{ "transaction-nodes", &quota_trans_nodes,
-		"Max. number of accessed nodes per transaction" },
-	{ "memory", &quota_memory_per_domain_hard,
-		"Total Xenstore memory per domain (error level)" },
-	{ "node-size", &quota_max_entry_size, "Max. size of a node" },
-	{ "path-max", &quota_max_path_len, "Max. length of a node path" },
-	{ "permissions", &quota_nb_perms_per_node,
-		"Max. number of permissions per node" },
-	{ NULL, NULL, NULL }
-};
-
-static const struct quota soft_quotas[] = {
-	{ "memory", &quota_memory_per_domain_soft,
-		"Total Xenstore memory per domain (warning level)" },
-	{ NULL, NULL, NULL }
-};
-
 static int quota_show_current(const void *ctx, struct connection *conn,
 			      const struct quota *quotas)
 {
@@ -260,9 +231,11 @@ static int quota_show_current(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
-	for (i = 0; quotas[i].quota; i++) {
+	for (i = 0; i < ACC_N; i++) {
+		if (!quotas[i].name)
+			continue;
 		resp = talloc_asprintf_append(resp, "%-17s: %8d %s\n",
-					      quotas[i].name, *quotas[i].quota,
+					      quotas[i].name, quotas[i].val,
 					      quotas[i].descr);
 		if (!resp)
 			return ENOMEM;
@@ -274,7 +247,7 @@ static int quota_show_current(const void *ctx, struct connection *conn,
 }
 
 static int quota_set(const void *ctx, struct connection *conn,
-		     char **vec, int num, const struct quota *quotas)
+		     char **vec, int num, struct quota *quotas)
 {
 	unsigned int i;
 	int val;
@@ -286,9 +259,9 @@ static int quota_set(const void *ctx, struct connection *conn,
 	if (val < 1)
 		return EINVAL;
 
-	for (i = 0; quotas[i].quota; i++) {
-		if (!strcmp(vec[0], quotas[i].name)) {
-			*quotas[i].quota = val;
+	for (i = 0; i < ACC_N; i++) {
+		if (quotas[i].name && !strcmp(vec[0], quotas[i].name)) {
+			quotas[i].val = val;
 			send_ack(conn, XS_CONTROL);
 			return 0;
 		}
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index c34de5ca3a..6d71cb5a2c 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -89,17 +89,6 @@ unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
 
-int quota_nb_entry_per_domain = 1000;
-int quota_nb_watch_per_domain = 128;
-int quota_max_entry_size = 2048; /* 2K */
-int quota_max_transaction = 10;
-int quota_nb_perms_per_node = 5;
-int quota_trans_nodes = 1024;
-int quota_max_path_len = XENSTORE_REL_PATH_MAX;
-int quota_req_outstanding = 20;
-int quota_memory_per_domain_soft = 2 * 1024 * 1024; /* 2 MB */
-int quota_memory_per_domain_hard = 2 * 1024 * 1024 + 512 * 1024; /* 2.5 MB */
-
 unsigned int timeout_watch_event_msec = 20000;
 
 void trace(const char *fmt, ...)
@@ -799,7 +788,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		+ node->perms.num * sizeof(node->perms.p[0])
 		+ node->datalen + node->childlen;
 
-	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
+	if (domain_max_chk(conn, ACC_NODESZ, data.dsize)
 	    && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
@@ -1188,8 +1177,7 @@ bool is_valid_nodename(const struct connection *conn, const char *node)
 	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
 		local_off = 0;
 
-	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
-			   quota_max_path_len))
+	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off))
 		return false;
 
 	return valid_chars(node);
@@ -1501,7 +1489,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 	for (i = node; i; i = i->parent) {
 		/* i->parent is set for each new node, so check quota. */
 		if (i->parent &&
-		    domain_nbentry(conn) >= quota_nb_entry_per_domain) {
+		    domain_nbentry(conn) >= hard_quotas[ACC_NODES].val) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -1776,7 +1764,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EINVAL;
 
 	perms.num--;
-	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
+	if (domain_max_chk(conn, ACC_NPERM, perms.num))
 		return ENOSPC;
 
 	permstr = in->buffer + strlen(in->buffer) + 1;
@@ -2644,7 +2632,16 @@ static void usage(void)
 "                          memory: total used memory per domain for nodes,\n"
 "                                  transactions, watches and requests, above\n"
 "                                  which Xenstore will stop talking to domain\n"
+"                          nodes: number of nodes owned by a domain\n"
+"                          node-permissions: number of access permissions per\n"
+"                                            node\n"
+"                          node-size: total size of a node (permissions +\n"
+"                                     children names + content)\n"
 "                          outstanding: number of outstanding requests\n"
+"                          path-length: length of a node path\n"
+"                          transactions: number of concurrent transactions\n"
+"                                        per domain\n"
+"                          watches: number of watches per domain\n"
 "  -q, --quota-soft <what>=<nb> set a soft quota <what> to the value <nb>,\n"
 "                          causing a warning to be issued via syslog() if the\n"
 "                          limit is violated, allowed quotas are:\n"
@@ -2695,12 +2692,12 @@ int dom0_domid = 0;
 int dom0_event = 0;
 int priv_domid = 0;
 
-static int get_optval_int(const char *arg)
+static unsigned int get_optval_int(const char *arg)
 {
 	char *end;
-	long val;
+	unsigned long val;
 
-	val = strtol(arg, &end, 10);
+	val = strtoul(arg, &end, 10);
 	if (!*arg || *end || val < 0 || val > INT_MAX)
 		barf("invalid parameter value \"%s\"\n", arg);
 
@@ -2709,15 +2706,19 @@ static int get_optval_int(const char *arg)
 
 static bool what_matches(const char *arg, const char *what)
 {
-	unsigned int what_len = strlen(what);
+	unsigned int what_len;
+
+	if (!what)
+		return false;
 
+	what_len = strlen(what);
 	return !strncmp(arg, what, what_len) && arg[what_len] == '=';
 }
 
 static void set_timeout(const char *arg)
 {
 	const char *eq = strchr(arg, '=');
-	int val;
+	unsigned int val;
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<seconds>\n");
@@ -2731,22 +2732,22 @@ static void set_timeout(const char *arg)
 static void set_quota(const char *arg, bool soft)
 {
 	const char *eq = strchr(arg, '=');
-	int val;
+	struct quota *q = soft ? soft_quotas : hard_quotas;
+	unsigned int val;
+	unsigned int i;
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<nb>\n");
 	val = get_optval_int(eq + 1);
-	if (what_matches(arg, "outstanding") && !soft)
-		quota_req_outstanding = val;
-	else if (what_matches(arg, "transaction-nodes") && !soft)
-		quota_trans_nodes = val;
-	else if (what_matches(arg, "memory")) {
-		if (soft)
-			quota_memory_per_domain_soft = val;
-		else
-			quota_memory_per_domain_hard = val;
-	} else
-		barf("unknown quota \"%s\"\n", arg);
+
+	for (i = 0; i < ACC_N; i++) {
+		if (what_matches(arg, q[i].name)) {
+			q[i].val = val;
+			return;
+		}
+	}
+
+	barf("unknown quota \"%s\"\n", arg);
 }
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
@@ -2808,7 +2809,7 @@ int main(int argc, char *argv[])
 			no_domain_init = true;
 			break;
 		case 'E':
-			quota_nb_entry_per_domain = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NODES].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'F':
 			pidfile = optarg;
@@ -2826,10 +2827,10 @@ int main(int argc, char *argv[])
 			recovery = false;
 			break;
 		case 'S':
-			quota_max_entry_size = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NODESZ].val = strtoul(optarg, NULL, 10);
 			break;
 		case 't':
-			quota_max_transaction = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_TRANS].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'T':
 			tracefile = optarg;
@@ -2849,15 +2850,17 @@ int main(int argc, char *argv[])
 			verbose = true;
 			break;
 		case 'W':
-			quota_nb_watch_per_domain = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_WATCH].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'A':
-			quota_nb_perms_per_node = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NPERM].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'M':
-			quota_max_path_len = strtol(optarg, NULL, 10);
-			quota_max_path_len = min(XENSTORE_REL_PATH_MAX,
-						 quota_max_path_len);
+			hard_quotas[ACC_PATHLEN].val =
+				strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_PATHLEN].val =
+				 min((unsigned int)XENSTORE_REL_PATH_MAX,
+				     hard_quotas[ACC_PATHLEN].val);
 			break;
 		case 'Q':
 			set_quota(optarg, false);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 0140c25880..693622ec68 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -315,16 +315,6 @@ extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
 extern int priv_domid;
-extern int quota_nb_watch_per_domain;
-extern int quota_max_transaction;
-extern int quota_max_entry_size;
-extern int quota_nb_perms_per_node;
-extern int quota_max_path_len;
-extern int quota_nb_entry_per_domain;
-extern int quota_req_outstanding;
-extern int quota_trans_nodes;
-extern int quota_memory_per_domain_soft;
-extern int quota_memory_per_domain_hard;
 extern bool keep_orphans;
 
 extern unsigned int timeout_watch_event_msec;
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f431076505..3906047e6b 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,7 +43,61 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
-static unsigned int acc_global_max[ACC_N];
+struct quota hard_quotas[ACC_N] = {
+	[ACC_NODES] = {
+		.name = "nodes",
+		.descr = "Nodes per domain",
+		.val = 1000,
+	},
+	[ACC_WATCH] = {
+		.name = "watches",
+		.descr = "Watches per domain",
+		.val = 128,
+	},
+	[ACC_OUTST] = {
+		.name = "outstanding",
+		.descr = "Outstanding requests per domain",
+		.val = 20,
+	},
+	[ACC_MEM] = {
+		.name = "memory",
+		.descr = "Total Xenstore memory per domain (error level)",
+		.val = 2 * 1024 * 1024 + 512 * 1024,	/* 2.5 MB */
+	},
+	[ACC_TRANS] = {
+		.name = "transactions",
+		.descr = "Active transactions per domain",
+		.val = 10,
+	},
+	[ACC_TRANSNODES] = {
+		.name = "transaction-nodes",
+		.descr = "Max. number of accessed nodes per transaction",
+		.val = 1024,
+	},
+	[ACC_NPERM] = {
+		.name = "node-permissions",
+		.descr = "Max. number of permissions per node",
+		.val = 5,
+	},
+	[ACC_PATHLEN] = {
+		.name = "path-max",
+		.descr = "Max. length of a node path",
+		.val = XENSTORE_REL_PATH_MAX,
+	},
+	[ACC_NODESZ] = {
+		.name = "node-size",
+		.descr = "Max. size of a node",
+		.val = 2048,
+	},
+};
+
+struct quota soft_quotas[ACC_N] = {
+	[ACC_MEM] = {
+		.name = "memory",
+		.descr = "Total Xenstore memory per domain (warning level)",
+		.val = 2 * 1024 * 1024,			/* 2.0 MB */
+	},
+};
 
 struct domain
 {
@@ -204,10 +258,10 @@ static bool domain_can_read(struct connection *conn)
 	if (domain_is_unprivileged(conn)) {
 		if (domain->wrl_credit < 0)
 			return false;
-		if (domain->acc[ACC_OUTST].val >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST].val >= hard_quotas[ACC_OUTST].val)
 			return false;
-		if (domain->acc[ACC_MEM].val >= quota_memory_per_domain_hard &&
-		    quota_memory_per_domain_hard)
+		if (domain->acc[ACC_MEM].val >= hard_quotas[ACC_MEM].val &&
+		    hard_quotas[ACC_MEM].val)
 			return false;
 	}
 
@@ -422,6 +476,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 {
 	struct domain *d = find_domain_struct(domid);
 	char *resp;
+	unsigned int i;
 
 	if (!d)
 		return ENOENT;
@@ -430,19 +485,15 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
-#define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \
-				      d->acc[e].val, d->acc[e].max); \
-	if (!resp) return ENOMEM
-
-	ent(nodes, ACC_NODES);
-	ent(watches, ACC_WATCH);
-	ent(transactions, ACC_TRANS);
-	ent(outstanding, ACC_OUTST);
-	ent(memory, ACC_MEM);
-	ent(transaction-nodes, ACC_TRANSNODES);
-
-#undef ent
+	for (i = 0; i < ACC_N; i++) {
+		if (!hard_quotas[i].name)
+			continue;
+		resp = talloc_asprintf_append(resp, "%-17s: %8u (max %8u)\n",
+					      hard_quotas[i].name,
+					      d->acc[i].val, d->acc[i].max);
+		if (!resp)
+			return ENOMEM;
+	}
 
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 
@@ -452,24 +503,21 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 int domain_max_global_acc(const void *ctx, struct connection *conn)
 {
 	char *resp;
+	unsigned int i;
 
 	resp = talloc_asprintf(ctx, "Max. seen accounting values:\n");
 	if (!resp)
 		return ENOMEM;
 
-#define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
-				      acc_global_max[e]);         \
-	if (!resp) return ENOMEM
-
-	ent(nodes, ACC_NODES);
-	ent(watches, ACC_WATCH);
-	ent(transactions, ACC_TRANS);
-	ent(outstanding, ACC_OUTST);
-	ent(memory, ACC_MEM);
-	ent(transaction-nodes, ACC_TRANSNODES);
-
-#undef ent
+	for (i = 0; i < ACC_N; i++) {
+		if (!hard_quotas[i].name)
+			continue;
+		resp = talloc_asprintf_append(resp, "%-17s: %8u\n",
+					      hard_quotas[i].name,
+					      hard_quotas[i].max);
+		if (!resp)
+			return ENOMEM;
+	}
 
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 
@@ -584,7 +632,7 @@ int acc_fix_domains(struct list_head *head, bool update)
 	list_for_each_entry(cd, head, list) {
 		cnt = domain_nbentry_fix(cd->domid, cd->acc[ACC_NODES], update);
 		if (!update) {
-			if (cnt >= quota_nb_entry_per_domain)
+			if (cnt >= hard_quotas[ACC_NODES].val)
 				return ENOSPC;
 			if (cnt < 0)
 				return ENOMEM;
@@ -1086,12 +1134,12 @@ static void domain_acc_chk_max(struct domain *d, enum accitem what,
 			       unsigned int val, unsigned int domid)
 {
 	assert(what < ARRAY_SIZE(d->acc));
-	assert(what < ARRAY_SIZE(acc_global_max));
+	assert(what < ARRAY_SIZE(hard_quotas));
 
 	if (val > d->acc[what].max)
 		d->acc[what].max = val;
-	if (val > acc_global_max[what] && domid_is_unprivileged(domid))
-		acc_global_max[what] = val;
+	if (val > hard_quotas[what].max && domid_is_unprivileged(domid))
+		hard_quotas[what].max = val;
 }
 
 static int domain_acc_add_chk(struct domain *d, enum accitem what, int add,
@@ -1222,19 +1270,19 @@ void domain_reset_global_acc(void)
 	unsigned int i;
 
 	for (i = 0; i < ACC_N; i++)
-		acc_global_max[i] = 0;
+		hard_quotas[i].max = 0;
 
 	/* Set current max values seen. */
 	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
 }
 
 bool domain_max_chk(const struct connection *conn, enum accitem what,
-		    unsigned int val, unsigned int quota)
+		    unsigned int val)
 {
 	if (!conn || !conn->domain)
 		return false;
 
-	if (domain_is_unprivileged(conn) && val > quota)
+	if (domain_is_unprivileged(conn) && val > hard_quotas[what].val)
 		return true;
 
 	domain_acc_chk_max(conn->domain, what, val, conn->id);
@@ -1283,8 +1331,7 @@ static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 	domain = conn->domain;
 	now = time(NULL);
 
-	if (mem >= quota_memory_per_domain_hard &&
-	    quota_memory_per_domain_hard) {
+	if (mem >= hard_quotas[ACC_MEM].val && hard_quotas[ACC_MEM].val) {
 		if (domain->hard_quota_reported)
 			return true;
 		syslog(LOG_ERR, "Domain %u exceeds hard memory quota, Xenstore interface to domain stalled\n",
@@ -1301,15 +1348,14 @@ static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 			syslog(LOG_INFO, "Domain %u below hard memory quota again\n",
 			       domain->domid);
 		}
-		if (mem >= quota_memory_per_domain_soft &&
-		    quota_memory_per_domain_soft &&
-		    !domain->soft_quota_reported) {
+		if (mem >= soft_quotas[ACC_MEM].val &&
+		    soft_quotas[ACC_MEM].val && !domain->soft_quota_reported) {
 			domain->mem_last_msg = now;
 			domain->soft_quota_reported = true;
 			syslog(LOG_WARNING, "Domain %u exceeds soft memory quota\n",
 			       domain->domid);
 		}
-		if (mem < quota_memory_per_domain_soft &&
+		if (mem < soft_quotas[ACC_MEM].val &&
 		    domain->soft_quota_reported) {
 			domain->mem_last_msg = now;
 			domain->soft_quota_reported = false;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index ff341dd8bf..3989d4a038 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -34,6 +34,16 @@ enum accitem {
 	ACC_N            /* Number of elements per domain. */
 };
 
+struct quota {
+	const char *name;
+	const char *descr;
+	unsigned int val;
+	unsigned int max;
+};
+
+extern struct quota hard_quotas[ACC_N];
+extern struct quota soft_quotas[ACC_N];
+
 void handle_event(void);
 
 void check_domains(void);
@@ -123,7 +133,7 @@ void acc_commit(struct connection *conn);
 int domain_max_global_acc(const void *ctx, struct connection *conn);
 void domain_reset_global_acc(void);
 bool domain_max_chk(const struct connection *conn, unsigned int what,
-		    unsigned int val, unsigned int quota);
+		    unsigned int val);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 7967770ca2..13fabe030d 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -244,8 +244,7 @@ int access_node(struct connection *conn, struct node *node,
 
 	i = find_accessed_node(trans, node->name);
 	if (!i) {
-		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
-				   quota_trans_nodes)) {
+		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1)) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -471,7 +470,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (conn->transaction)
 		return EBUSY;
 
-	if (domain_transaction_get(conn) > quota_max_transaction)
+	if (domain_transaction_get(conn) > hard_quotas[ACC_TRANS].val)
 		return ENOSPC;
 
 	/* Attach transaction to ctx for autofree until it's complete */
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 61b1e3421e..e8eb35de02 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -239,7 +239,7 @@ int do_watch(const void *ctx, struct connection *conn, struct buffered_data *in)
 			return EEXIST;
 	}
 
-	if (domain_watch(conn) > quota_nb_watch_per_domain)
+	if (domain_watch(conn) > hard_quotas[ACC_WATCH].val)
 		return E2BIG;
 
 	watch = add_watch(conn, vec[0], vec[1], relative, false);
-- 
2.35.3
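[Editorial note] The hunks above replace xenstored's open-coded quota if/else chain with a lookup over a table of named entries. As a rough standalone illustration of that pattern — simplified, with made-up entry names and values, not the actual xenstored sources:

```c
#include <assert.h>
#include <string.h>

/* Sketch of a table-driven quota store: each quota is a named entry,
 * and setting one is a name match over the array instead of an
 * if/else chain.  Names and defaults here are illustrative only. */
struct quota {
	const char *name;	/* NULL for unused slots */
	unsigned int val;
};

static struct quota hard_quotas[] = {
	{ "nodes", 1000 },
	{ "watches", 128 },
	{ "node-size", 2048 },
};

/* Return 0 on success, -1 if the quota name is unknown. */
static int set_quota_val(struct quota *q, unsigned int n,
			 const char *arg, unsigned int val)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		if (q[i].name && !strcmp(arg, q[i].name)) {
			q[i].val = val;
			return 0;
		}
	}

	return -1;
}
```

The payoff of the table form is visible in the diff: adding a new quota only needs a new array entry, and the parsing and printing loops pick it up automatically.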



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:05:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:05:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481601.746637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoGs-0006Sb-PW; Fri, 20 Jan 2023 10:05:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481601.746637; Fri, 20 Jan 2023 10:05:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoGs-0006SQ-L5; Fri, 20 Jan 2023 10:05:18 +0000
Received: by outflank-mailman (input) for mailman id 481601;
 Fri, 20 Jan 2023 10:05:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoDE-0001b3-Bs
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:32 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 694b41c5-98a9-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 11:01:30 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 100C05BD75;
 Fri, 20 Jan 2023 10:01:30 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DB9821390C;
 Fri, 20 Jan 2023 10:01:29 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id yE5oNHlmymN/RQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 694b41c5-98a9-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208890; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7EWkkJhQCjKSg7MUGlJAvzZ4VfhKHOcbcgmv3aDpL24=;
	b=XB0P0p7jCjOaqjvgDeSGfjSYknn0SlPX5sXw/vClGkBgoZ6sYCjchxMr5YBQy0JT9NV3Ud
	lHU79laTjsL4ruBowvFRLEyZBZDPRzmeaSwXHrnyt5AGNh6qKZCHdMjYFWWZlO3fuOOxih
	7PcUkijUgdfxXsI+EYniWxVZ4QyPFhU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 10/13] tools/xenstore: switch transaction accounting to generic accounting
Date: Fri, 20 Jan 2023 11:00:25 +0100
Message-Id: <20230120100028.11142-11-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As transaction accounting is active for unprivileged domains only, it
can easily be added to the generic per-domain accounting.
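[Editorial note] In outline, the generic per-domain accounting referred to here is one counter array indexed by accounting item, plus a single clamped add/subtract helper — a simplified sketch (names follow the patch, but the helper is reduced to its core; this is not the real xenstored code):

```c
#include <assert.h>
#include <limits.h>

/* One counter array per domain, indexed by accounting item, so
 * transactions need no dedicated field of their own. */
enum accitem { ACC_NODES, ACC_WATCH, ACC_TRANS, ACC_N };

struct domain_acc {
	unsigned int acc[ACC_N];
};

/* Add 'add' (possibly negative) to one item, clamping so the counter
 * can neither underflow below 0 nor grow past INT_MAX — mirroring the
 * idea behind domain_acc_add_chk() in the patch. */
static unsigned int acc_add(struct domain_acc *d, enum accitem what, int add)
{
	if ((add < 0 && (unsigned int)-add > d->acc[what]) ||
	    d->acc[what] + add > INT_MAX)
		d->acc[what] = (add < 0) ? 0 : INT_MAX;
	else
		d->acc[what] += add;

	return d->acc[what];
}
```

With ACC_TRANS as just another index, the domain_transaction_inc/dec/get wrappers in the patch become one-liners around the same helper.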

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        |  3 +--
 tools/xenstore/xenstored_core.h        |  1 -
 tools/xenstore/xenstored_domain.c      | 21 ++++++++++++++++++---
 tools/xenstore/xenstored_domain.h      |  4 ++++
 tools/xenstore/xenstored_transaction.c | 12 +++++-------
 5 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 49e196e7ae..5b85b69eb2 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2083,7 +2083,7 @@ static void consider_message(struct connection *conn)
 	 * stalled. This will ignore new requests until Live-Update happened
 	 * or it was aborted.
 	 */
-	if (lu_is_pending() && conn->transaction_started == 0 &&
+	if (lu_is_pending() && conn->ta_start_time == 0 &&
 	    conn->in->hdr.msg.type == XS_TRANSACTION_START) {
 		trace("Delaying transaction start for connection %p req_id %u\n",
 		      conn, conn->in->hdr.msg.req_id);
@@ -2190,7 +2190,6 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->funcs = funcs;
 	new->is_ignored = false;
 	new->is_stalled = false;
-	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
 	INIT_LIST_HEAD(&new->acc_list);
 	INIT_LIST_HEAD(&new->ref_list);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 419a144396..6465105b4d 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -151,7 +151,6 @@ struct connection
 	/* List of in-progress transactions. */
 	struct list_head transaction_list;
 	uint32_t next_transaction_id;
-	unsigned int transaction_started;
 	time_t ta_start_time;
 
 	/* List of delayed requests. */
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index d461fd8cc8..4d48e7a9f4 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -417,12 +417,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 {
 	struct domain *d = find_domain_struct(domid);
 	char *resp;
-	int ta;
 
 	if (!d)
 		return ENOENT;
 
-	ta = d->conn ? d->conn->transaction_started : 0;
 	resp = talloc_asprintf(ctx, "Domain %u:\n", domid);
 	if (!resp)
 		return ENOMEM;
@@ -433,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 
 	ent(nodes, d->acc[ACC_NODES]);
 	ent(watches, d->acc[ACC_WATCH]);
-	ent(transactions, ta);
+	ent(transactions, d->acc[ACC_TRANS]);
 	ent(outstanding, d->acc[ACC_OUTST]);
 	ent(memory, d->acc[ACC_MEM]);
 
@@ -1295,6 +1293,23 @@ void domain_outstanding_dec(struct connection *conn, unsigned int domid)
 	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
 }
 
+void domain_transaction_inc(struct connection *conn)
+{
+	domain_acc_add(conn, conn->id, ACC_TRANS, 1, true);
+}
+
+void domain_transaction_dec(struct connection *conn)
+{
+	domain_acc_add(conn, conn->id, ACC_TRANS, -1, true);
+}
+
+unsigned int domain_transaction_get(struct connection *conn)
+{
+	return (domain_is_unprivileged(conn))
+		? domain_acc_add(conn, conn->id, ACC_TRANS, 0, true)
+		: 0;
+}
+
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
 static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
 static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 37e14d1c8c..e0562b0b4d 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -26,6 +26,7 @@ enum accitem {
 	ACC_WATCH = ACC_TR_N,
 	ACC_OUTST,
 	ACC_MEM,
+	ACC_TRANS,
 	ACC_N            /* Number of elements per domain. */
 };
 
@@ -107,6 +108,9 @@ void domain_watch_dec(struct connection *conn);
 int domain_watch(struct connection *conn);
 void domain_outstanding_inc(struct connection *conn, unsigned int domid);
 void domain_outstanding_dec(struct connection *conn, unsigned int domid);
+void domain_transaction_inc(struct connection *conn);
+void domain_transaction_dec(struct connection *conn);
+unsigned int domain_transaction_get(struct connection *conn);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 int acc_fix_domains(struct list_head *head, bool update);
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 19a1175d1b..ce6a12b576 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -471,8 +471,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (conn->transaction)
 		return EBUSY;
 
-	if (domain_is_unprivileged(conn) &&
-	    conn->transaction_started > quota_max_transaction)
+	if (domain_transaction_get(conn) > quota_max_transaction)
 		return ENOSPC;
 
 	/* Attach transaction to ctx for autofree until it's complete */
@@ -497,9 +496,9 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	list_add_tail(&trans->list, &conn->transaction_list);
 	talloc_steal(conn, trans);
 	talloc_set_destructor(trans, destroy_transaction);
-	if (!conn->transaction_started)
+	if (!conn->ta_start_time)
 		conn->ta_start_time = time(NULL);
-	conn->transaction_started++;
+	domain_transaction_inc(conn);
 	wrl_ntransactions++;
 
 	snprintf(id_str, sizeof(id_str), "%u", trans->id);
@@ -524,8 +523,8 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 	conn->transaction = NULL;
 	list_del(&trans->list);
-	conn->transaction_started--;
-	if (!conn->transaction_started)
+	domain_transaction_dec(conn);
+	if (list_empty(&conn->transaction_list))
 		conn->ta_start_time = 0;
 
 	/* Attach transaction to ctx for auto-cleanup */
@@ -576,7 +575,6 @@ void conn_delete_all_transactions(struct connection *conn)
 
 	assert(conn->transaction == NULL);
 
-	conn->transaction_started = 0;
 	conn->ta_start_time = 0;
 }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:05:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:05:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481616.746647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoH7-0007Dn-8H; Fri, 20 Jan 2023 10:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481616.746647; Fri, 20 Jan 2023 10:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoH7-0007Db-4F; Fri, 20 Jan 2023 10:05:33 +0000
Received: by outflank-mailman (input) for mailman id 481616;
 Fri, 20 Jan 2023 10:05:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoDK-0001b3-6v
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6ca65c7c-98a9-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 11:01:36 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B46B55BD75;
 Fri, 20 Jan 2023 10:01:35 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 72FC81390C;
 Fri, 20 Jan 2023 10:01:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id d426Gn9mymOSRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ca65c7c-98a9-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674208895; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=61yczDqETAtkBqlIn3SmkHkKaXysXzSwDDElQn2wKLk=;
	b=Omml80AnQteYJnW/v34P2F860cz2WGygxX2pbKbXxSOZZUrLNWr56khkO9gHGWIQagZAHG
	q+60tkZhcSOat9gzBTj5J31bNQowx2rW5+VRjKT2mi091Oi/bvH5wRwevTJcZCPhQ7xdfp
	hu9lHuypbog6B0OUiZ+soKVf7wOpp4o=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 11/13] tools/xenstore: remember global and per domain max accounting values
Date: Fri, 20 Jan 2023 11:00:26 +0100
Message-Id: <20230120100028.11142-12-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Save the maximum values seen for the different accounting data, both per
domain and (for unprivileged domains) globally, and print those values
via the xenstore-control quota command. Add a sub-command for resetting
the global maximum values seen.

This should help when deciding how to set the related quotas.
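[Editorial note] The high-water-mark tracking boils down to the following pattern — a simplified sketch with hypothetical helpers; the real code additionally keys the global maximum on unprivileged domids and, on reset, re-seeds the maxima from all live domains:

```c
#include <assert.h>

/* Each update records the current value, the per-domain maximum and,
 * for unprivileged domains, a global maximum across domains. */
struct acc {
	unsigned int val;
	unsigned int max;
};

static unsigned int global_max;

static void acc_set(struct acc *a, unsigned int val, int unprivileged)
{
	a->val = val;
	if (val > a->max)
		a->max = val;
	if (unprivileged && val > global_max)
		global_max = val;
}

/* Reset restarts the maxima from the values currently in use. */
static void acc_reset_max(struct acc *doms, unsigned int n)
{
	unsigned int i;

	global_max = 0;
	for (i = 0; i < n; i++)
		doms[i].max = doms[i].val;
}
```

The "quota max [-r]" control command then just prints (and optionally resets) these recorded maxima.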

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/xenstore.txt             |   5 +-
 tools/xenstore/xenstored_control.c |  22 ++++++-
 tools/xenstore/xenstored_domain.c  | 100 +++++++++++++++++++++++------
 tools/xenstore/xenstored_domain.h  |   2 +
 4 files changed, 108 insertions(+), 21 deletions(-)

diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
index 8887e7df88..44a02f6724 100644
--- a/docs/misc/xenstore.txt
+++ b/docs/misc/xenstore.txt
@@ -423,7 +423,7 @@ CONTROL			<command>|[<parameters>|]
 	print|<string>
 		print <string> to syslog (xenstore runs as daemon) or
 		to console (xenstore runs as stubdom)
-	quota|[set <name> <val>|<domid>]
+	quota|[set <name> <val>|<domid>|max [-r]]
 		without parameters: print the current quota settings
 		with "set <name> <val>": set the quota <name> to new value
 		<val> (The admin should make sure all the domain usage is
@@ -432,6 +432,9 @@ CONTROL			<command>|[<parameters>|]
 		violating the new quota setting isn't increased further)
 		with "<domid>": print quota related accounting data for
 		the domain <domid>
+		with "max [-r]": show global per-domain maximum values of all
+		unprivileged domains, optionally reset the values by adding
+		"-r"
 	quota-soft|[set <name> <val>]
 		like the "quota" command, but for soft-quota.
 	help			<supported-commands>
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index cbd62556c3..a2ba64a15c 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -306,6 +306,22 @@ static int quota_get(const void *ctx, struct connection *conn,
 	return domain_get_quota(ctx, conn, atoi(vec[0]));
 }
 
+static int quota_max(const void *ctx, struct connection *conn,
+		     char **vec, int num)
+{
+	if (num > 1)
+		return EINVAL;
+
+	if (num == 1) {
+		if (!strcmp(vec[0], "-r"))
+			domain_reset_global_acc();
+		else
+			return EINVAL;
+	}
+
+	return domain_max_global_acc(ctx, conn);
+}
+
 static int do_control_quota(const void *ctx, struct connection *conn,
 			    char **vec, int num)
 {
@@ -315,6 +331,9 @@ static int do_control_quota(const void *ctx, struct connection *conn,
 	if (!strcmp(vec[0], "set"))
 		return quota_set(ctx, conn, vec + 1, num - 1, hard_quotas);
 
+	if (!strcmp(vec[0], "max"))
+		return quota_max(ctx, conn, vec + 1, num - 1);
+
 	return quota_get(ctx, conn, vec, num);
 }
 
@@ -978,7 +997,8 @@ static struct cmd_s cmds[] = {
 	{ "memreport", do_control_memreport, "[<file>]" },
 #endif
 	{ "print", do_control_print, "<string>" },
-	{ "quota", do_control_quota, "[set <name> <val>|<domid>]" },
+	{ "quota", do_control_quota,
+		"[set <name> <val>|<domid>|max [-r]]" },
 	{ "quota-soft", do_control_quota_s, "[set <name> <val>]" },
 	{ "help", do_control_help, "" },
 };
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 4d48e7a9f4..91695b6313 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,6 +43,8 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
+static unsigned int acc_global_max[ACC_N];
+
 struct domain
 {
 	/* The id of this domain */
@@ -70,7 +72,10 @@ struct domain
 	bool introduced;
 
 	/* Accounting data for this domain. */
-	unsigned int acc[ACC_N];
+	struct acc {
+		unsigned int val;
+		unsigned int max;
+	} acc[ACC_N];
 
 	/* Memory quota data for this domain. */
 	bool soft_quota_reported;
@@ -199,9 +204,9 @@ static bool domain_can_read(struct connection *conn)
 	if (domain_is_unprivileged(conn)) {
 		if (domain->wrl_credit < 0)
 			return false;
-		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST].val >= quota_req_outstanding)
 			return false;
-		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
+		if (domain->acc[ACC_MEM].val >= quota_memory_per_domain_hard &&
 		    quota_memory_per_domain_hard)
 			return false;
 	}
@@ -264,7 +269,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		ret = WALK_TREE_SKIP_CHILDREN;
 	}
 
-	return domain->acc[ACC_NODES] ? ret : WALK_TREE_SUCCESS_STOP;
+	return domain->acc[ACC_NODES].val ? ret : WALK_TREE_SUCCESS_STOP;
 }
 
 static void domain_tree_remove(struct domain *domain)
@@ -272,7 +277,7 @@ static void domain_tree_remove(struct domain *domain)
 	int ret;
 	struct walk_funcs walkfuncs = { .enter = domain_tree_remove_sub };
 
-	if (domain->acc[ACC_NODES]) {
+	if (domain->acc[ACC_NODES].val) {
 		ret = walk_node_tree(domain, NULL, "/", &walkfuncs, domain);
 		if (ret == WALK_TREE_ERROR_STOP)
 			syslog(LOG_ERR,
@@ -426,14 +431,41 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8d\n", #t, e); \
+	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u)\n", #t, \
+				      d->acc[e].val, d->acc[e].max); \
+	if (!resp) return ENOMEM
+
+	ent(nodes, ACC_NODES);
+	ent(watches, ACC_WATCH);
+	ent(transactions, ACC_TRANS);
+	ent(outstanding, ACC_OUTST);
+	ent(memory, ACC_MEM);
+
+#undef ent
+
+	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
+
+	return 0;
+}
+
+int domain_max_global_acc(const void *ctx, struct connection *conn)
+{
+	char *resp;
+
+	resp = talloc_asprintf(ctx, "Max. seen accounting values:\n");
+	if (!resp)
+		return ENOMEM;
+
+#define ent(t, e) \
+	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
+				      acc_global_max[e]);         \
 	if (!resp) return ENOMEM
 
-	ent(nodes, d->acc[ACC_NODES]);
-	ent(watches, d->acc[ACC_WATCH]);
-	ent(transactions, d->acc[ACC_TRANS]);
-	ent(outstanding, d->acc[ACC_OUTST]);
-	ent(memory, d->acc[ACC_MEM]);
+	ent(nodes, ACC_NODES);
+	ent(watches, ACC_WATCH);
+	ent(transactions, ACC_TRANS);
+	ent(outstanding, ACC_OUTST);
+	ent(memory, ACC_MEM);
 
 #undef ent
 
@@ -1051,10 +1083,12 @@ int domain_adjust_node_perms(struct node *node)
 static int domain_acc_add_chk(struct domain *d, enum accitem what, int add,
 			      unsigned int domid)
 {
+	unsigned int val;
+
 	assert(what < ARRAY_SIZE(d->acc));
 
-	if ((add < 0 && -add > d->acc[what]) ||
-	    (d->acc[what] + add) > INT_MAX) {
+	if ((add < 0 && -add > d->acc[what].val) ||
+	    (d->acc[what].val + add) > INT_MAX) {
 		/*
 		 * In a transaction when a node is being added/removed AND the
 		 * same node has been added/removed outside the transaction in
@@ -1065,7 +1099,13 @@ static int domain_acc_add_chk(struct domain *d, enum accitem what, int add,
 		return (add < 0) ? 0 : INT_MAX;
 	}
 
-	return d->acc[what] + add;
+	val = d->acc[what].val + add;
+	if (val > d->acc[what].max)
+		d->acc[what].max = val;
+	if (val > acc_global_max[what] && domid_is_unprivileged(domid))
+		acc_global_max[what] = val;
+
+	return val;
 }
 
 static int domain_acc_add(struct connection *conn, unsigned int domid,
@@ -1121,10 +1161,10 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 	}
 
 	trace_acc("global change domid %u: what=%u %u add %d\n", domid, what,
-		  d->acc[what], add);
-	d->acc[what] = domain_acc_add_chk(d, what, add, domid);
+		  d->acc[what].val, add);
+	d->acc[what].val = domain_acc_add_chk(d, what, add, domid);
 
-	return d->acc[what];
+	return d->acc[what].val;
 }
 
 void acc_drop(struct connection *conn)
@@ -1157,6 +1197,28 @@ void acc_commit(struct connection *conn)
 	conn->in = in;
 }
 
+static int domain_reset_global_acc_sub(const void *k, void *v, void *arg)
+{
+	struct domain *d = v;
+	unsigned int i;
+
+	for (i = 0; i < ACC_N; i++)
+		d->acc[i].max = d->acc[i].val;
+
+	return 0;
+}
+
+void domain_reset_global_acc(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < ACC_N; i++)
+		acc_global_max[i] = 0;
+
+	/* Set current max values seen. */
+	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
@@ -1657,7 +1719,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 * If everything is correct incrementing the value for each node will
 	 * result in dom->nodes being 0 at the end.
 	 */
-	dom->nodes = -d->acc[ACC_NODES];
+	dom->nodes = -d->acc[ACC_NODES].val;
 
 	if (!hashtable_insert(domains, &dom->domid, dom)) {
 		talloc_free(dom);
@@ -1712,7 +1774,7 @@ static int domain_check_acc_cb(const void *k, void *v, void *arg)
 	if (!d)
 		return 0;
 
-	d->acc[ACC_NODES] += dom->nodes;
+	d->acc[ACC_NODES].val += dom->nodes;
 
 	return 0;
 }
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index e0562b0b4d..162e7dc0d0 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -116,6 +116,8 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 int acc_fix_domains(struct list_head *head, bool update);
 void acc_drop(struct connection *conn);
 void acc_commit(struct connection *conn);
+int domain_max_global_acc(const void *ctx, struct connection *conn);
+void domain_reset_global_acc(void);
 
 /* Write rate limiting */
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:05:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:05:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481617.746654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoH7-0007I2-KV; Fri, 20 Jan 2023 10:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481617.746654; Fri, 20 Jan 2023 10:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoH7-0007GB-Cj; Fri, 20 Jan 2023 10:05:33 +0000
Received: by outflank-mailman (input) for mailman id 481617;
 Fri, 20 Jan 2023 10:05:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoD8-0001b3-FH
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:26 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65e5e489-98a9-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 11:01:24 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6514D22A07;
 Fri, 20 Jan 2023 10:01:24 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 381F41390C;
 Fri, 20 Jan 2023 10:01:24 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id UedeDHRmymNzRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65e5e489-98a9-11ed-b8d1-410ff93cb8f0
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 09/13] tools/xenstore: add TDB access trace support
Date: Fri, 20 Jan 2023 11:00:24 +0100
Message-Id: <20230120100028.11142-10-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new trace switch "tdb" and the related trace calls.

The "tdb" switch is off by default.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 8 +++++++-
 tools/xenstore/xenstored_core.h        | 6 ++++++
 tools/xenstore/xenstored_transaction.c | 7 ++++++-
 3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 558ef491b1..49e196e7ae 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -589,6 +589,8 @@ static void get_acc_data(TDB_DATA *key, struct node_account_data *acc)
 		if (old_data.dptr == NULL) {
 			acc->memory = 0;
 		} else {
+			trace_tdb("read %s size %zu\n", key->dptr,
+				  old_data.dsize + key->dsize);
 			hdr = (void *)old_data.dptr;
 			acc->memory = old_data.dsize;
 			acc->domid = hdr->perms[0].id;
@@ -655,6 +657,7 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 		errno = EIO;
 		return errno;
 	}
+	trace_tdb("store %s size %zu\n", key->dptr, data->dsize + key->dsize);
 
 	if (acc) {
 		/* Don't use new_domid, as it might be a transaction node. */
@@ -682,6 +685,7 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 		errno = EIO;
 		return errno;
 	}
+	trace_tdb("delete %s\n", key->dptr);
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, key, acc->domid);
@@ -731,6 +735,8 @@ struct node *read_node(struct connection *conn, const void *ctx,
 		goto error;
 	}
 
+	trace_tdb("read %s size %zu\n", key.dptr, data.dsize + key.dsize);
+
 	node->parent = NULL;
 	talloc_steal(node, data.dptr);
 
@@ -2746,7 +2752,7 @@ static void set_quota(const char *arg, bool soft)
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
 const char *const trace_switches[] = {
-	"obj", "io", "wrl", "acc",
+	"obj", "io", "wrl", "acc", "tdb",
 	NULL
 };
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3e0734a6c6..419a144396 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -303,8 +303,14 @@ extern unsigned int trace_flags;
 #define TRACE_IO	0x00000002
 #define TRACE_WRL	0x00000004
 #define TRACE_ACC	0x00000008
+#define TRACE_TDB	0x00000010
 extern const char *const trace_switches[];
 int set_trace_switch(const char *arg);
+#define trace_tdb(...)				\
+do {						\
+	if (trace_flags & TRACE_TDB)		\
+		trace("tdb: " __VA_ARGS__);	\
+} while (0)
 
 extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 1aa9d3cb3d..19a1175d1b 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -366,8 +366,11 @@ static int finalize_transaction(struct connection *conn,
 				if (tdb_error(tdb_ctx) != TDB_ERR_NOEXIST)
 					return EIO;
 				gen = NO_GENERATION;
-			} else
+			} else {
+				trace_tdb("read %s size %zu\n", key.dptr,
+					  key.dsize + data.dsize);
 				gen = hdr->generation;
+			}
 			talloc_free(data.dptr);
 			if (i->generation != gen)
 				return EAGAIN;
@@ -391,6 +394,8 @@ static int finalize_transaction(struct connection *conn,
 			set_tdb_key(i->trans_name, &ta_key);
 			data = tdb_fetch(tdb_ctx, ta_key);
 			if (data.dptr) {
+				trace_tdb("read %s size %zu\n", ta_key.dptr,
+					  ta_key.dsize + data.dsize);
 				hdr = (void *)data.dptr;
 				hdr->generation = ++generation;
 				*is_corrupt |= do_tdb_write(conn, &key, &data,
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:05:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481618.746667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoH8-0007kx-UE; Fri, 20 Jan 2023 10:05:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481618.746667; Fri, 20 Jan 2023 10:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoH8-0007k0-Oq; Fri, 20 Jan 2023 10:05:34 +0000
Received: by outflank-mailman (input) for mailman id 481618;
 Fri, 20 Jan 2023 10:05:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NYwS=5R=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pIoD1-0001Kj-MT
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:01:19 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 628fb316-98a9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:01:19 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id CBE995BD75;
 Fri, 20 Jan 2023 10:01:18 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9F2B41390C;
 Fri, 20 Jan 2023 10:01:18 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id O8yqJG5mymNdRQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 20 Jan 2023 10:01:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 628fb316-98a9-11ed-91b6-6bf2151ebd3b
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 08/13] tools/xenstore: add accounting trace support
Date: Fri, 20 Jan 2023 11:00:23 +0100
Message-Id: <20230120100028.11142-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230120100028.11142-1-jgross@suse.com>
References: <20230120100028.11142-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new trace switch "acc" and the related trace calls.

The "acc" switch is off by default.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   |  2 +-
 tools/xenstore/xenstored_core.h   |  1 +
 tools/xenstore/xenstored_domain.c | 10 ++++++++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 6ef60179fa..558ef491b1 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2746,7 +2746,7 @@ static void set_quota(const char *arg, bool soft)
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
 const char *const trace_switches[] = {
-	"obj", "io", "wrl",
+	"obj", "io", "wrl", "acc",
 	NULL
 };
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 1f811f38cb..3e0734a6c6 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -302,6 +302,7 @@ extern unsigned int trace_flags;
 #define TRACE_OBJ	0x00000001
 #define TRACE_IO	0x00000002
 #define TRACE_WRL	0x00000004
+#define TRACE_ACC	0x00000008
 extern const char *const trace_switches[];
 int set_trace_switch(const char *arg);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index b1e29edb7e..d461fd8cc8 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -538,6 +538,12 @@ static struct domain *find_domain_by_domid(unsigned int domid)
 	return (d && d->introduced) ? d : NULL;
 }
 
+#define trace_acc(...)				\
+do {						\
+	if (trace_flags & TRACE_ACC)		\
+		trace("acc: " __VA_ARGS__);	\
+} while (0)
+
 int acc_fix_domains(struct list_head *head, bool update)
 {
 	struct changed_domain *cd;
@@ -602,6 +608,8 @@ static int acc_add_changed_dom(const void *ctx, struct list_head *head,
 		return 0;
 
 	errno = 0;
+	trace_acc("local change domid %u: what=%u %d add %d\n", domid, what,
+		  cd->acc[what], val);
 	cd->acc[what] += val;
 
 	return cd->acc[what];
@@ -1114,6 +1122,8 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 		return domain_acc_add_chk(d, what, ret, domid);
 	}
 
+	trace_acc("global change domid %u: what=%u %u add %d\n", domid, what,
+		  d->acc[what], add);
 	d->acc[what] = domain_acc_add_chk(d, what, add, domid);
 
 	return d->acc[what];
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:17:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481644.746676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoS3-0001wi-25; Fri, 20 Jan 2023 10:16:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481644.746676; Fri, 20 Jan 2023 10:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoS2-0001wb-Vk; Fri, 20 Jan 2023 10:16:50 +0000
Received: by outflank-mailman (input) for mailman id 481644;
 Fri, 20 Jan 2023 10:16:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIoS1-0001wV-F1
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:16:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIoS0-0001gi-Uf; Fri, 20 Jan 2023 10:16:48 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[10.95.149.154]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIoS0-0006eF-Mw; Fri, 20 Jan 2023 10:16:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <1a227f76-005d-0307-5161-2e8432171eb7@xen.org>
Date: Fri, 20 Jan 2023 10:16:46 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 04/11] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-5-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191514240.731018@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2301191532370.731018@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301191532370.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

I am replying to multiple e-mails at the same time.

On 19/01/2023 23:34, Stefano Stabellini wrote:
> On Thu, 19 Jan 2023, Stefano Stabellini wrote:
>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>> In future, we wish to support 32 bit physical address.
>>> However, one can only read u64 values from the DT. Thus, we need to

A cell in the DT is a 32-bit value, so you could read a 32-bit value 
(#address-cells could be 1). It is just that our wrappers return 64-bit 
values because that is the case we use the most.

>>> typecast the values appropriately from u64 to paddr_t.

C is perfectly able to truncate a 64-bit value to a 32-bit one. So this 
is not a very good argument for why all of this is necessary.

>>>
>>> device_tree_get_reg() should now be able to return paddr_t. This is
>>> invoked by various callers to get DT address and size.
>>> Similarly, dt_read_number() is invoked as well to get DT address and
>>> size. The return value is typecasted to paddr_t.
>>> fdt_get_mem_rsv() can only accept u64 values. So, we provide a warpper

Typo: s/warpper/wrapper/

>>> for this called fdt_get_mem_rsv_paddr() which will do the necessary
>>> typecasting before invoking fdt_get_mem_rsv() and while returning.
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>
>> I know we discussed this before and you implemented exactly what we
>> suggested, but now looking at this patch I think we should do the
>> following:
>>
>> - also add a wrapper for dt_read_number, something like
>>    dt_read_number_paddr that returns paddr

"number" and "paddr" pretty much mean the same thing. I think it would 
be better to name it "dt_read_paddr".

>> - add a check for the top 32-bit being zero in all the wrappers
>>    (dt_read_number_paddr, device_tree_get_reg, fdt_get_mem_rsv_paddr)
>>    when paddr!=uint64_t. In case the top 32-bit are != zero I think we
>>    should print an error
>>
>> Julien, I remember you were concerned about BUG_ON/panic/ASSERT and I
>> agree with you there (especially considering Vikram's device tree
>> overlay series). So here I am only suggesting to check truncation and
>> printk a message, not panic. Would you be OK with that?

Aside from dt_read_number(), I would expect that most of the helpers can 
return an error. So if you want to check for truncation, then we should 
propagate the error.

>>
>> Last comment, maybe we could add fdt_get_mem_rsv_paddr to setup.h
>> instead of introducing xen/arch/arm/include/asm/device_tree.h, given
>> that we already have device tree definitions there
>> (device_tree_get_reg). I am OK either way.
>   
> Actually I noticed you also add dt_device_get_paddr to
> xen/arch/arm/include/asm/device_tree.h. So it sounds like it is a good
> idea to introduce xen/arch/arm/include/asm/device_tree.h, and we could
> also move the declarations of device_tree_get_reg, device_tree_get_u32
> there.

None of the helpers you mention sounds Arm-specific. So why should they 
be moved to arch-specific headers?

[...]

>>> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
>>> index 0085c28d74..f536a3f3ab 100644
>>> --- a/xen/arch/arm/bootfdt.c
>>> +++ b/xen/arch/arm/bootfdt.c
>>> @@ -11,9 +11,9 @@
>>>   #include <xen/efi.h>
>>>   #include <xen/device_tree.h>
>>>   #include <xen/lib.h>
>>> -#include <xen/libfdt/libfdt.h>
>>>   #include <xen/sort.h>
>>>   #include <xsm/xsm.h>
>>> +#include <asm/device_tree.h>
>>>   #include <asm/setup.h>
>>>   
>>>   static bool __init device_tree_node_matches(const void *fdt, int node,
>>> @@ -53,10 +53,15 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
>>>   }
>>>   
>>>   void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
>>> -                                u32 size_cells, u64 *start, u64 *size)
>>> +                                u32 size_cells, paddr_t *start, paddr_t *size)
>>>   {
>>> -    *start = dt_next_cell(address_cells, cell);
>>> -    *size = dt_next_cell(size_cells, cell);
>>> +    /*
>>> +     * dt_next_cell will return u64 whereas paddr_t may be u64 or u32. Thus, one
>>> +     * needs to cast paddr_t to u32. Note that we do not check for truncation as
>>> +     * it is user's responsibility to provide the correct values in the DT.
>>> +     */
>>> +    *start = (paddr_t) dt_next_cell(address_cells, cell);
>>> +    *size = (paddr_t) dt_next_cell(size_cells, cell);

There is no need for the explicit casts here.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:23:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 10:23:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481651.746687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoYB-0003Qi-S2; Fri, 20 Jan 2023 10:23:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481651.746687; Fri, 20 Jan 2023 10:23:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIoYB-0003Qb-P7; Fri, 20 Jan 2023 10:23:11 +0000
Received: by outflank-mailman (input) for mailman id 481651;
 Fri, 20 Jan 2023 10:23:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nUaQ=5R=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pIoYA-0003QV-Ql
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 10:23:10 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2062.outbound.protection.outlook.com [40.107.93.62])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6eb32298-98ac-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 11:23:08 +0100 (CET)
Received: from DM6PR07CA0120.namprd07.prod.outlook.com (2603:10b6:5:330::10)
 by CH2PR12MB4070.namprd12.prod.outlook.com (2603:10b6:610:ae::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 10:23:06 +0000
Received: from DM6NAM11FT096.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:330:cafe::b4) by DM6PR07CA0120.outlook.office365.com
 (2603:10b6:5:330::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27 via Frontend
 Transport; Fri, 20 Jan 2023 10:23:05 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT096.mail.protection.outlook.com (10.13.173.145) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Fri, 20 Jan 2023 10:23:05 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 04:23:05 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 04:23:05 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 20 Jan 2023 04:23:03 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6eb32298-98ac-11ed-91b6-6bf2151ebd3b
Message-ID: <d9861060-22ba-5fce-eef6-a7f2ef01526a@amd.com>
Date: Fri, 20 Jan 2023 11:23:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and
 p2m_final_teardown()
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Wei Chen <wei.chen@arm.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-4-Henry.Wang@arm.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230116015820.1269387-4-Henry.Wang@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Henry,

On 16/01/2023 02:58, Henry Wang wrote:
> 
> 
> With the change in the previous patch, the initial 16 pages in the P2M
> pool are no longer necessary. Drop them for code simplification.
> 
> Also, the call to p2m_teardown() from arch_domain_destroy() is no
> longer necessary since the P2M allocation was moved out of
> arch_domain_create(). Drop the code and the above in-code comment
> mentioning it.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> ---
> I am not entirely sure whether I should also drop the "TODO" on top of
> p2m_set_entry(). Although we are sure there are no p2m pages populated
> at the domain_create() stage now, we are not sure whether anyone will
> add more in the future... Any comments?
> ---
>  xen/arch/arm/include/asm/p2m.h |  4 ----
>  xen/arch/arm/p2m.c             | 20 +-------------------
>  2 files changed, 1 insertion(+), 23 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
> index bf5183e53a..cf06d3cc21 100644
> --- a/xen/arch/arm/include/asm/p2m.h
> +++ b/xen/arch/arm/include/asm/p2m.h
> @@ -200,10 +200,6 @@ int p2m_init(struct domain *d);
>   *  - p2m_final_teardown() will be called when domain struct is been
>   *    freed. This *cannot* be preempted and therefore one small
>   *    resources should be freed here.
> - *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
> - *  free the P2M when failures happen in the domain creation with P2M pages
> - *  already in use. In this case p2m_teardown() is called non-preemptively and
> - *  p2m_teardown() will always return 0.
>   */
>  int p2m_teardown(struct domain *d, bool allow_preemption);
>  void p2m_final_teardown(struct domain *d);
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 7de7d822e9..d41a316d18 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1744,13 +1744,9 @@ void p2m_final_teardown(struct domain *d)
>      /*
>       * No need to call relinquish_p2m_mapping() here because
>       * p2m_final_teardown() is called either after domain_relinquish_resources()
> -     * where relinquish_p2m_mapping() has been called, or from failure path of
> -     * domain_create()/arch_domain_create() where mappings that require
> -     * p2m_put_l3_page() should never be created. For the latter case, also see
> -     * comment on top of the p2m_set_entry() for more info.
> +     * where relinquish_p2m_mapping() has been called.
>       */
> 
> -    BUG_ON(p2m_teardown(d, false));
Because you remove this,
>      ASSERT(page_list_empty(&p2m->pages));
you no longer need this assert, right?

Apart from that:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:35:13 2023
Message-ID: <a4775838-1ef6-d227-5747-d525136d62c5@xen.org>
Date: Fri, 20 Jan 2023 10:34:56 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for
 address/size
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-6-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117174358.15344-6-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 17/01/2023 17:43, Ayan Kumar Halder wrote:
> One should now be able to use 'paddr_t' to represent address and size.
> Consequently, one should use 'PRIpaddr' as a format specifier for paddr_t.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from -
> 
> v1 - 1. Rebased the patch.
> 
>   xen/arch/arm/domain_build.c        |  9 +++++----
>   xen/arch/arm/gic-v3.c              |  2 +-
>   xen/arch/arm/platforms/exynos5.c   | 26 +++++++++++++-------------
>   xen/drivers/char/exynos4210-uart.c |  2 +-
>   xen/drivers/char/ns16550.c         |  8 ++++----
>   xen/drivers/char/omap-uart.c       |  2 +-
>   xen/drivers/char/pl011.c           |  4 ++--
>   xen/drivers/char/scif-uart.c       |  2 +-
>   xen/drivers/passthrough/arm/smmu.c |  6 +++---
>   9 files changed, 31 insertions(+), 30 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 72b9afbb4c..cf8ae37a14 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1666,7 +1666,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
>       dt_for_each_device_node( dt_host, np )
>       {
>           unsigned int naddr;
> -        u64 addr, size;
> +        paddr_t addr, size;

Without the next patch, this change is incorrect because
dt_device_get_address() expects a 64-bit value rather than paddr_t.

So the type change wants to be moved to the next patch. The same goes
for any variable modified in this patch that is used by
dt_device_get_address().

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:39:48 2023
Message-ID: <8b4ded2d-ddce-c53a-4c43-4c82900169b6@xen.org>
Date: Fri, 20 Jan 2023 10:39:29 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 08/11] xen/arm: guest_walk: LPAE specific bits should be
 enclosed within "ifndef CONFIG_ARM_PA_32"
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-9-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191541460.731018@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301191541460.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 19/01/2023 23:43, Stefano Stabellini wrote:
> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>> In the subsequent patch, we introduce "CONFIG_ARM_PA_32" to support
>> 32 bit physical addresses. Thus, the code specific to
>> "Large Page Address Extension" (ie LPAE) should be enclosed within
>> "ifndef CONFIG_ARM_PA_32".
>>
>> Refer xen/arch/arm/include/asm/short-desc.h, "short_desc_l1_supersec_t"
>> unsigned int extbase1:4;    /* Extended base address, PA[35:32] */
>> unsigned int extbase2:4;    /* Extended base address, PA[39:36] */
>>
>> Thus, extbase1 and extbase2 are not valid when 32 bit physical addresses
>> are supported.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> 
> This patch should be merged with patch 9: we shouldn't start to use a
> new kconfig symbol before it is defined.

I asked for the adaptation to be in a separate patch to avoid having a
melting-pot patch.

So if you want the new Kconfig symbol to be defined first, then we
should reshuffle the series.

BTW, the reshuffle will likely be necessary anyway if you want to check
for truncation in an earlier patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 10:53:21 2023
Message-ID: <6743ca84-e54e-23be-575f-899a8770d523@xen.org>
Date: Fri, 20 Jan 2023 10:53:06 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 10/11] xen/arm: Restrict zeroeth_table_offset for ARM_64
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-11-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191553570.731018@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301191553570.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

Title: to me, "restrict" means that the macro cannot be used at all if
!ARM_64. But that is not the case here.

On 20/01/2023 00:19, Stefano Stabellini wrote:
> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>> zeroeth_table_offset is not accessed by ARM_32.

I don't quite understand this sentence. The helper is used by 32-bit
Arm; its output may simply not be used, though.

I would suggest saying that there is no zeroeth level on 32-bit Arm.
But...

>> Also, when 32 bit physical addresses are used (ie ARM_PA_32=y), this
>> causes an overflow.

... this is the most important part.

>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>> Changes from -
>>
>> v1 - Removed the duplicate declaration for DECLARE_OFFSETS.
>>
>>   xen/arch/arm/include/asm/lpae.h | 4 ++++
>>   xen/arch/arm/mm.c               | 7 +------
>>   2 files changed, 5 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
>> index 3fdd5d0de2..2744e0eebf 100644
>> --- a/xen/arch/arm/include/asm/lpae.h
>> +++ b/xen/arch/arm/include/asm/lpae.h
>> @@ -259,7 +259,11 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
>>   #define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
>>   #define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
>>   #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
>> +#ifdef CONFIG_ARM_64
> 
> Julien was asking for a selectable Kconfig option that would allow us to
> have 32-bit paddr_t even on ARM_64. If we do that, assuming we are on
> aarch64, and we set VTCR_T0SZ to 0x20, hence we get 32-bit IPA, are we
> going to have a 3-level or a 4-level p2m pagetable?

It will start at level 1. So 3-level page-table.

> 
> In any case I think this should be:
> #ifndef CONFIG_PADDR_32

+1

> 
> And if it doesn't work today on aarch64 due to pagetable levels or other
> reasons, than I would make CONFIG_PADDR_32 not (yet) selectable on
> ARM_64 (until it is fixed).

+1

>>   #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
>> +#else
>> +#define zeroeth_table_offset(va)  0
> 
> Rather than 0 it might be better to have 32, hence zeroing the input
> address

I don't understand why you suggest 32. The macro is meant to return the
index in the 0th table, so returning 0 is correct here.

> 
> 
>> +#endif
>>   
>>   /*
>>    * Macros to define page-tables:
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index fab54618ab..95784e0c59 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -207,12 +207,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
>>   {
>>       static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
>>       const mfn_t root_mfn = maddr_to_mfn(ttbr);
>> -    const unsigned int offsets[4] = {
>> -        zeroeth_table_offset(addr),
>> -        first_table_offset(addr),
>> -        second_table_offset(addr),
>> -        third_table_offset(addr)
>> -    };
>> +    DECLARE_OFFSETS(offsets, addr);

This wants to be explained in the commit message.

>>       lpae_t pte, *mapping;
>>       unsigned int level, root_table;
>>   
>> -- 
>> 2.17.1
>>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 11:06:29 2023
Message-ID: <ae501ac3-af2d-c4cb-3ea9-04dd946cdc51@xen.org>
Date: Fri, 20 Jan 2023 11:06:15 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 11/11] xen/arm: p2m: Enable support for 32bit IPA
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-12-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117174358.15344-12-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 17/01/2023 17:43, Ayan Kumar Halder wrote:
> VTCR.T0SZ should be set as 0x20 to support 32bit IPA.
> Refer ARM DDI 0487I.a ID081822, G8-9824, G8.2.171, VTCR,
> "Virtualization Translation Control Register" for the bit descriptions.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from -
> 
> v1 - New patch.
> 
>   xen/arch/arm/p2m.c | 10 +++++++---
>   1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 948f199d84..cfdea55e71 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2266,13 +2266,17 @@ void __init setup_virt_paging(void)
>       register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>   
>   #ifdef CONFIG_ARM_32
> -    if ( p2m_ipa_bits < 40 )
> +    if ( p2m_ipa_bits < PADDR_BITS )
>           panic("P2M: Not able to support %u-bit IPA at the moment\n",
>                 p2m_ipa_bits);
>   
> -    printk("P2M: 40-bit IPA\n");
> -    p2m_ipa_bits = 40;
> +    printk("P2M: %u-bit IPA\n",PADDR_BITS);
> +    p2m_ipa_bits = PADDR_BITS;
> +#ifdef CONFIG_ARM_PA_32
> +    val |= VTCR_T0SZ(0x20); /* 32 bit IPA */
> +#else
>       val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
> +#endif

I am wondering whether this is the right time to switch to an array
like the arm64 code does. This would allow using a 32-bit IPA even when
Xen supports 64-bit physical addresses.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 11:09:55 2023
Message-ID: <e607a9da-0c2a-05c1-47a3-b4e4f11c874d@amd.com>
Date: Fri, 20 Jan 2023 11:09:36 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
To: Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-3-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
 <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 02d52225-51f7-40e5-4a33-08dafad6d48f
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 11:09:42.4327
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY5PR12MB6274


On 20/01/2023 09:32, Julien Grall wrote:
> Hi,
Hi Julien,
>
> On 19/01/2023 22:54, Stefano Stabellini wrote:
>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
>>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>>> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
>>> address.
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
>
> I have committed the patch.

Thanks for the reviews and commit. :)

Did you miss "[XEN v2 01/11] xen/ns16550: Remove unneeded truncation 
check in the DT init code"?

- Ayan

>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 11:11:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 11:11:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481683.746747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIpJ7-00030L-Q9; Fri, 20 Jan 2023 11:11:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481683.746747; Fri, 20 Jan 2023 11:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIpJ7-00030E-NB; Fri, 20 Jan 2023 11:11:41 +0000
Received: by outflank-mailman (input) for mailman id 481683;
 Fri, 20 Jan 2023 11:11:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIpJ5-000306-SH
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 11:11:40 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34621562-98b3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 12:11:38 +0100 (CET)
Received: from mail-mw2nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 06:11:34 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB6051.namprd03.prod.outlook.com (2603:10b6:610:be::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 11:11:31 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 11:11:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34621562-98b3-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674213098;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=sDNUAFSOM9nSmY2tfFtONpjQs+xlaGIexAFJht7PkCs=;
  b=fuDn9FnPhuH8r380lpfJOrzJqvzdoxjZNgQTONNfj4N4PT+IfkiYfQ2q
   VBMpBcxr2/0gLFiF0otBVzjvum5TieYHWucSi9i3rbOt6eXYa4RxGGA1g
   ZKpGL4YKKqqUXjZerHptePnflJqV/kMPGyO+UYutJI2ceyEB/lvenpw/T
   E=;
X-IronPort-RemoteIP: 104.47.55.105
X-IronPort-MID: 93537808
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sDNUAFSOM9nSmY2tfFtONpjQs+xlaGIexAFJht7PkCs=;
 b=NzoawmN4Nmgd4KxN8OT4YTZcNFPBjrKvf5zNhUU5GcRyhYME839W8Y9wY1MIDqIhCo+Q8BMz4kab6FIGr40fJbiGIgBzYnV/knLpzkJ01rfs99rv6lxlpmufK1pGibBlOloJbrSKIMQsoStL7wDcemDiKFibfKfOQ7mYRt5GST0=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Bertrand
 Marquis <bertrand.marquis@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>, Connor Davis
	<connojdavis@gmail.com>
Subject: Re: [PATCH] tools/symbols: drop asm/types.h inclusion
Thread-Topic: [PATCH] tools/symbols: drop asm/types.h inclusion
Thread-Index: AQHZLKrc/EwdR9Lkfk2WTSRRJdysna6nJo4A
Date: Fri, 20 Jan 2023 11:11:30 +0000
Message-ID: <9f38be7f-0781-94bf-2444-546897755702@citrix.com>
References: <1eaa4cce-2ef2-ca38-56d2-5d551c9c1ae9@suse.com>
In-Reply-To: <1eaa4cce-2ef2-ca38-56d2-5d551c9c1ae9@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CH0PR03MB6051:EE_
x-ms-office365-filtering-correlation-id: dc0ba417-bd89-40f5-65eb-08dafad71545
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <6E5800DA94E4004D9D48C4D0703890D7@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dc0ba417-bd89-40f5-65eb-08dafad71545
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 11:11:30.7972
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6051

On 20/01/2023 8:40 am, Jan Beulich wrote:
> While this has been there forever, it's not clear to me what it was
> (thought to be) needed for. In fact, all three instances of the header
> already exclude their entire bodies when __ASSEMBLY__ was defined.
> Hence, with no other assembly files including this header, we can at the
> same time get rid of those conditionals.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 11:25:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 11:25:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481688.746757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIpW1-0004X6-VM; Fri, 20 Jan 2023 11:25:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481688.746757; Fri, 20 Jan 2023 11:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIpW1-0004Wz-Sb; Fri, 20 Jan 2023 11:25:01 +0000
Received: by outflank-mailman (input) for mailman id 481688;
 Fri, 20 Jan 2023 11:25:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIpW1-0004Wr-90
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 11:25:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIpVz-0003C7-SV; Fri, 20 Jan 2023 11:24:59 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[10.95.149.154]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIpVz-0001X9-Lx; Fri, 20 Jan 2023 11:24:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=6l/WOw10+Jebe/QO8ckmDtwl4I2AkuhPJV8n7Bn6s8w=; b=cTUDY/Fj/JwOkHJfL8sQDqB7fx
	AahTDvW6vgFeMFODzjBJzgOa9aVN5KouW5kQritKsbthJCwmxeVXmRb7uAqlt6lk2T9c3mA8vA9qw
	829fJ0DQmCXwGuG5zZiwIz3tDc8xHg+kHbBkHedNaCXIFji0o432PIDyCs0fmtpRFfe0=;
Message-ID: <d519b6c5-5972-ff31-c3ee-39649babde7c@xen.org>
Date: Fri, 20 Jan 2023 11:24:57 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] tools/symbols: drop asm/types.h inclusion
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <1eaa4cce-2ef2-ca38-56d2-5d551c9c1ae9@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <1eaa4cce-2ef2-ca38-56d2-5d551c9c1ae9@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 20/01/2023 08:40, Jan Beulich wrote:
> While this has been there forever, it's not clear to me what it was
> (thought to be) needed for.

asm/types.h used to be included directly by x86 assembly files. That 
inclusion was dropped by commit 3f76e83c4cf6 ("x86/entry: drop unused 
header inclusions").

>  In fact, all three instances of the header
> already exclude their entire bodies when __ASSEMBLY__ was defined.
> Hence, with no other assembly files including this header, we can at the
> same time get rid of those conditionals.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 11:33:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 11:33:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481693.746767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIpeD-0005zq-Ok; Fri, 20 Jan 2023 11:33:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481693.746767; Fri, 20 Jan 2023 11:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIpeD-0005zj-Lz; Fri, 20 Jan 2023 11:33:29 +0000
Received: by outflank-mailman (input) for mailman id 481693;
 Fri, 20 Jan 2023 11:33:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UrnU=5R=casper.srs.infradead.org=BATV+010e331da30354bf639d+7089+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pIpe9-0005zd-9l
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 11:33:28 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ed5fe8e-98b6-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 12:33:23 +0100 (CET)
Received: from [2001:8b0:10b:5::bb3] (helo=u3832b3a9db3152.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pIpe3-001ttS-Vy; Fri, 20 Jan 2023 11:33:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ed5fe8e-98b6-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:Date:Cc:To:
	From:Subject:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=FPmMJJucX/DY1S9Sm3Up0z7xbAoQFdvYkviGDi2uMMs=; b=naiS7fk8gxacUZ3x4HlaciIzLf
	XKqAH7AVLuriO6KopMrb+IwHuBvdaA5BMj3oqXseffvUXZofXK3ZI892h/OA77G3KPbVKcxStmlpm
	3YmPaEQQHJfao3h5dWXMtZpamyr/yOqF31QTFDqDXq4pp0Zwj9wH2kUlMzJ8WDVf7MzopLfkEWRQ7
	EgKWSyVfjdisarVKmMlYdpxgWR4pLKwsO3mmaweeHoZ95GY6PHXIwgtkulGCxqzotLfwpIvLXjqg1
	VoSfCBZouWaL3nGBlkYa1WOYRcppmCOWd5/6wixlvDF51RraYNAfiERzzHhMwGlmdbzRX+PB0M2WR
	SRLCvh+Q==;
Message-ID: <feef99dd2e1a5dce004d22baf07d716d6ea1344c.camel@infradead.org>
Subject: [SeaBIOS PATCH] xen: require Xen info structure at 0x1000 to detect
 Xen
From: David Woodhouse <dwmw2@infradead.org>
To: seabios <seabios@seabios.org>, xen-devel
 <xen-devel@lists.xenproject.org>,  qemu-devel <qemu-devel@nongnu.org>
Cc: paul <paul@xen.org>
Date: Fri, 20 Jan 2023 11:33:19 +0000
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-j8Wo6X+P905qFelz4Cgk"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-j8Wo6X+P905qFelz4Cgk
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

From: David Woodhouse <dwmw@amazon.co.uk>

When running under Xen, hvmloader places a table at 0x1000 with the e820
information and BIOS tables. If this isn't present, SeaBIOS will=20
currently panic.

We now have support for running Xen guests natively in QEMU/KVM, which
boots SeaBIOS directly instead of via hvmloader, and does not provide
the same structure.

As it happens, this doesn't matter on first boot. because although we
set PlatformRunningOn to PF_QEMU|PF_XEN, reading it back again still
gives zero. Presumably because in true Xen, this is all already RAM. But
in QEMU with a faithfully-emulated PAM config in the host bridge, it's
still in ROM mode at this point so we don't see what we've just written.

On reboot, however, the region *is* set to RAM mode and we do see the
updated value of PlatformRunningOn, do manage to remember that we've
detected Xen in CPUID, and hit the panic.

It's not trivial to detect QEMU vs. real Xen at the time xen_preinit()
runs, because it's so early. We can't even make a XENVER_extraversion
hypercall to look for hints, because we haven't set up the hypercall
page (and don't have an allocator to give us a page in which to do so).

So just make Xen detection contingent on the info structure being
present. If it wasn't, we were going to panic anyway. That leaves us
taking the standard QEMU init path for Xen guests in native QEMU,
which is just fine.

Untested on actual Xen but ObviouslyCorrect=E2=84=A2.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 src/fw/xen.c | 45 ++++++++++++++++++++++++++++++++-------------
 1 file changed, 32 insertions(+), 13 deletions(-)

diff --git a/src/fw/xen.c b/src/fw/xen.c
index a215b9e..00e4b0c 100644
--- a/src/fw/xen.c
+++ b/src/fw/xen.c
@@ -40,16 +40,25 @@ struct xen_seabios_info {
     u32 e820_nr;
 } PACKED;
=20
-static void validate_info(struct xen_seabios_info *t)
+static struct xen_seabios_info *validate_info(void)
 {
-    if ( memcmp(t->signature, "XenHVMSeaBIOS", 14) )
-        panic("Bad Xen info signature\n");
+    struct xen_seabios_info *t =3D (void *)INFO_PHYSICAL_ADDRESS;
=20
-    if ( t->length < sizeof(struct xen_seabios_info) )
-        panic("Bad Xen info length\n");
+    if ( memcmp(t->signature, "XenHVMSeaBIOS", 14) ) {
+        dprintf(1, "Bad Xen info signature\n");
+        return NULL;
+    }
+
+    if ( t->length < sizeof(struct xen_seabios_info) ) {
+        dprintf(1, "Bad Xen info length\n");
+        return NULL;
+    }
=20
-    if (checksum(t, t->length) !=3D 0)
-        panic("Bad Xen info checksum\n");
+    if (checksum(t, t->length) !=3D 0) {
+        dprintf(1, "Bad Xen info checksum\n");
+        return NULL;
+    }
+    return t;
 }
=20
 void xen_preinit(void)
@@ -86,7 +95,10 @@ void xen_preinit(void)
         dprintf(1, "No Xen hypervisor found.\n");
         return;
     }
-    PlatformRunningOn =3D PF_QEMU|PF_XEN;
+    if (validate_info())
+        PlatformRunningOn =3D PF_QEMU|PF_XEN;
+    else
+        dprintf(1, "Not enabling Xen support due to lack of Xen info\n");
 }
=20
 static int hypercall_xen_version( int cmd, void *arg)
@@ -122,10 +134,14 @@ void xen_hypercall_setup(void)
=20
 void xen_biostable_setup(void)
 {
-    struct xen_seabios_info *info =3D (void *)INFO_PHYSICAL_ADDRESS;
-    void **tables =3D (void*)info->tables;
+    struct xen_seabios_info *info =3D validate_info();
+    void **tables;
     int i;
=20
+    if (!info)
+        panic("Xen info corrupted\n");
+
+    tables =3D (void*)info->tables;
     dprintf(1, "xen: copy BIOS tables...\n");
     for (i=3D0; i<info->tables_nr; i++)
         copy_table(tables[i]);
@@ -136,12 +152,15 @@ void xen_biostable_setup(void)
 void xen_ramsize_preinit(void)
 {
     int i;
-    struct xen_seabios_info *info =3D (void *)INFO_PHYSICAL_ADDRESS;
-    struct e820entry *e820 =3D (struct e820entry *)info->e820;
-    validate_info(info);
+    struct xen_seabios_info *info =3D validate_info();
+    struct e820entry *e820;
+
+    if (!info)
+        panic("Xen info corrupted\n");
 
     dprintf(1, "xen: copy e820...\n");
 
+    e820 = (struct e820entry *)info->e820;
     for (i = 0; i < info->e820_nr; i++) {
         struct e820entry *e =3D &e820[i];
         e820_add(e->start, e->size, e->type);
-- 
2.34.1



--=-j8Wo6X+P905qFelz4Cgk
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Disposition: attachment; filename="smime.p7s"
Content-Transfer-Encoding: base64



--=-j8Wo6X+P905qFelz4Cgk--


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 11:46:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 11:46:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481701.746777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIpqb-0007Yt-0a; Fri, 20 Jan 2023 11:46:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481701.746777; Fri, 20 Jan 2023 11:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIpqa-0007Ym-U2; Fri, 20 Jan 2023 11:46:16 +0000
Received: by outflank-mailman (input) for mailman id 481701;
 Fri, 20 Jan 2023 11:46:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIpqZ-0007Ya-6T
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 11:46:15 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 07552922-98b8-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 12:46:10 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07552922-98b8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674215169;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=D+VzVYIqRMT1OcuJ3nEGTwJU8x7YRjAtlbQsRrdWr3c=;
  b=QqBXVnF5rHxKzZFafi4/fWX3ZFnAJq1B8tWZvdx0lWOSe4q4bRAP9dOz
   2i6i9Kbg7ksAXlO/upAV6g0WyeGO01Lyy4IAc6+84LkPzjEb0G/3q5sJu
   OSdAg1JCaYypzz+i72OIExnJEM6mEpVw7j9daWdSvF3cndUQLKjq6wKRp
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95949543
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,232,1669093200"; 
   d="scan'208";a="95949543"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Date: Fri, 20 Jan 2023 11:45:56 +0000
Message-ID: <20230120114556.14003-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is a global variable (actually 3, one per GUEST_PAGING_LEVEL), operated
on using atomics only (with no regard to what else shares the same cacheline),
which emits a diagnostic (in debug builds only) without changing any program
behaviour.

Because the set of read-only p2m types includes logdirty, this diagnostic can
be tripped by entirely legitimate guest behaviour.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/shadow/multi.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 8b3e678fa0fa..3b06cfaf9a5a 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2597,14 +2597,7 @@ static int cf_check sh_page_fault(
 
     /* Ignore attempts to write to read-only memory. */
     if ( p2m_is_readonly(p2mt) && (ft == ft_demand_write) )
-    {
-        static unsigned long lastpage;
-        if ( xchg(&lastpage, va & PAGE_MASK) != (va & PAGE_MASK) )
-            gdprintk(XENLOG_DEBUG, "guest attempted write to read-only memory"
-                     " page. va page=%#lx, mfn=%#lx\n",
-                     va & PAGE_MASK, mfn_x(gmfn));
         goto emulate_readonly; /* skip over the instruction */
-    }
 
     /* In HVM guests, we force CR0.WP always to be set, so that the
      * pagetables are always write-protected.  If the guest thinks
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 11:58:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 11:58:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481706.746787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIq1t-0000cl-26; Fri, 20 Jan 2023 11:57:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481706.746787; Fri, 20 Jan 2023 11:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIq1s-0000ce-VQ; Fri, 20 Jan 2023 11:57:56 +0000
Received: by outflank-mailman (input) for mailman id 481706;
 Fri, 20 Jan 2023 11:57:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIq1r-0000cY-Eu
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 11:57:55 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ac0f2b26-98b9-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 12:57:54 +0100 (CET)
Received: by mail-wr1-x42c.google.com with SMTP id e3so4615410wru.13
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 03:57:54 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 d14-20020adffbce000000b002bddd75a83fsm18650463wrs.8.2023.01.20.03.57.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 03:57:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac0f2b26-98b9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=GbexX4kQSh9SEmZpnsGkseyMty0RjNFBm4wrT1LyRr0=;
        b=GNbvpCFJ9wnZ+2Wfz/jFRIik/GF9jy8gYmfKfDaL5/pxDLMomMAcUzVno+9tJAi1ZP
         wwAlEb5pPszrSd+gJlgp4H/3QY7dEu9EZDTCZoxjxS1/Eh5LphxnW3yP4vkKwDm9No9T
         /vL019bt70ZEyzPp+snmZg1MZRc9Z7hXSH5G01xYR3cvOWAd12k1pmojIn4WmQZivVQi
         q7y0xnpfGzEW6yYkjjlfuu9NtFGwqmNzgPzXDAdr2pdO5+DgXrcx3iS+XUyREmu61iao
         G3dpARwuDLfiSi4scfyQly5M+beDkNvpPr+qsstNG7yOh+52FP48rMNObwMT5TQKhYpf
         6hog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=GbexX4kQSh9SEmZpnsGkseyMty0RjNFBm4wrT1LyRr0=;
        b=Yo1e7XS0lwDxE+MP0nZ+sTnE2DvnJjkOzdj6o3HL/c9X+jjQGvn3GPvhcJ5te3Aedn
         vx2rYufgAOLygvNQivi3j9+gY/E720VgAWkedEwbcEmjXJxr7xmGvs5mKbHnXie1i3hg
         lxdadZWQycUs2VJM3u7LAZHymH7BRoJMIFcEUhS0bhC5PIitb3N/mcP8BQBwkof9UeIi
         uKYBuL6i+0hDk7BdyT8olfpdlbLYLnSN7LOVwwMlLO4HDb6AzguWA6mVWCey+1HfNc4n
         2ZDE/aWAzMKk7f0w7OlcLMCgNuoDjoyoRptVF4eg4S7a/U+APVKTf914k8eLG5OVxg8X
         2cFg==
X-Gm-Message-State: AFqh2kqz9+YlKSFPx32LlvKlBuA9oj/KVaePNBk6Ju055qtg57Fbd2TQ
	XvdldFnNCwDrQ/bCPyGNR9A=
X-Google-Smtp-Source: AMrXdXtyrNAPJWxwqaUE4bxs2IUy2rKspaOURlFGEkhAwZU8Q6PZHHGDUJKcBOrhwDMnozTwwHt3UA==
X-Received: by 2002:a5d:514b:0:b0:2a7:68f9:c00b with SMTP id u11-20020a5d514b000000b002a768f9c00bmr8442351wrt.50.1674215873617;
        Fri, 20 Jan 2023 03:57:53 -0800 (PST)
Message-ID: <4dbddea68957f2081bd6fae8b0474164cd20ab6e.camel@gmail.com>
Subject: Re: [PATCH v5 1/5] xen/include: Change <asm/types.h> to
 <xen/types.h>
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, George Dunlap
 <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, 
 xen-devel@lists.xenproject.org
Date: Fri, 20 Jan 2023 13:57:51 +0200
In-Reply-To: <a2d262f8-77eb-44bb-d3c9-677ed73df22a@suse.com>
References: <cover.1674131459.git.oleksii.kurochko@gmail.com>
	 <916d01663e76a3a0acad93f6c234834deaa2dd72.1674131459.git.oleksii.kurochko@gmail.com>
	 <22992b47-bac6-d522-a8a6-c55c3c15e7a5@suse.com>
	 <a2d262f8-77eb-44bb-d3c9-677ed73df22a@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Thu, 2023-01-19 at 16:43 +0100, Jan Beulich wrote:
> On 19.01.2023 15:29, Jan Beulich wrote:
> > On 19.01.2023 15:07, Oleksii Kurochko wrote:
> > > In the patch "include/types: move stddef.h-kind types to common
> > > header" [1] size_t was moved from <asm/types.h> to <xen/types.h>
> > > so early_printk should be updated correspondingly.
> >
> > Hmm, this reads a little like that patch would introduce a build
> > issue (on Arm), but according to my testing it doesn't. If it did,
> > the change here would need to be part of that (not yet committed)
> > change. On the assumption that it really doesn't, ...
> >
> > > [1]
> > > https://lore.kernel.org/xen-devel/5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com/
> > >
> > > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> >
> > Acked-by: Jan Beulich <jbeulich@suse.com>
>
> Actually I notice we have more explicit uses of asm/types.h, and
> hence the title of this change isn't really correct (with this
It's really incorrect. I was in a hurry, my fault.
I meant to change <asm/types.h> to <xen/types.h> only in
<xen/early_printk.h>.

> title I would expect all uses to go away underneath xen/include/xen).
> I'll try to remember to adjust the title when committing.
>
Thanks for that.
> Jan
>
~ Oleksii



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 12:30:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 12:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481723.746797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIqX5-00052p-SI; Fri, 20 Jan 2023 12:30:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481723.746797; Fri, 20 Jan 2023 12:30:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIqX5-00052g-Oc; Fri, 20 Jan 2023 12:30:11 +0000
Received: by outflank-mailman (input) for mailman id 481723;
 Fri, 20 Jan 2023 12:30:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIqX4-00052V-BC
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 12:30:10 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2b24e7b4-98be-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 13:30:07 +0100 (CET)
Received: from mail-mw2nam04lp2176.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 07:30:04 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO3PR03MB6728.namprd03.prod.outlook.com (2603:10b6:303:170::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 12:30:01 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 12:30:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b24e7b4-98be-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674217807;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=RlNzqUkWLwuj4hAg9mEXT2f13j2mRSF9Xhc9eGvIHHM=;
  b=YCy17rie4DvBmxRbalvftpvJ1Q4YZuJ9NQglud8jCF9N8wziTvOzRtVF
   Ab/MsqXoZXKTjJsid26VrRTrTO37Mmpm3BFMa38yYtkEQMrCG7sMu6TmS
   CzQ25jUz6tFgdnZoTkfultaUzixOVuq+0fztqaf/v84Rfo1UhEPThJmhs
   Y=;
X-IronPort-RemoteIP: 104.47.73.176
X-IronPort-MID: 93924490
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,232,1669093200"; 
   d="scan'208";a="93924490"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=em+0yLlbBF2NM8a5U5AtaiZ5owkFk7S8S9IncGlwUTKmlBf0HQ9yj+t+W3BL7DwTS31bDqGSJTCI9PXychhvPlnhQoZX+iFK7F9fXaerzTgjpUBb9yKi63WKFZBxHjEhG00vNtUWDi4ujgw/52KpULje4YZ/7GEekezETTfhgA73Nzt5KhQ1nYXvFyLjgold28vbqPSs1krMfnDyk9xoAh0jc9NA6IhCHeYOpR4RSGwFFYDOjaJy3hMnX2rH1dxNlnP745OJ24LjfJ9x+KVTrIFksr1ENqTx8dPmVt/m3nQGTXMmReNnsqq4bhxSyS1CUJHIMaVVBuBF14qjOBqFkw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RlNzqUkWLwuj4hAg9mEXT2f13j2mRSF9Xhc9eGvIHHM=;
 b=JGAlV2jmJ35N28PoMj/3M/Z6PzLqMzUAyAuAOKfGlS5SVkKpNFyzlhFahHfeQ9DGcwk6KPiDyWC32U2WbMubqdjCSAPpa0aTUFz5F470t2CcbMznK0tKy6nP9fYL1IGZ0T4cxRW2k3BzEsbDboGOW9Lm0r85nH6RqyhRoxMG4S/23UIQkaVimB3xkMRsPqi+wqEGeURTCnFQM1K+perj+TGTZoxaeYRjRWTIMlNPkdNEyjVT2N8gvg1XDr2fk5EInSEChDLv5smvXkTdfQYL6iDklRAQBUdqt7BsO8lwmoSaK8+GmJ1mQ8SJmJ7JUOhbAb0lkhTwUDKv51HXCQvOng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RlNzqUkWLwuj4hAg9mEXT2f13j2mRSF9Xhc9eGvIHHM=;
 b=Vgx2DeP/wHVxHR/9N4zL1Vgiw4nlQT8wTnQQTO/YwM62PMzx4Wox72SMkchXKipYceq74oECR5JVX9gslJIyPzlQlObDpgSL0rfItcLKolhiPB8J0rO2kp8ZbFJjNKkuzUoL/zJg976zyEvsy5hCi2SUgzYyYW+MR71ZUic5XCg=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Juergen Gross <jgross@suse.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Anthony Perard <anthony.perard@citrix.com>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 0/6] tools: Switch to non-truncating XENVER_* ops
Thread-Topic: [PATCH 0/6] tools: Switch to non-truncating XENVER_* ops
Thread-Index: AQHZKnshvDT8KCQSwEaQz5St3JRq4a6lR1+AgAH5fgA=
Date: Fri, 20 Jan 2023 12:30:00 +0000
Message-ID: <72a3af11-ea53-b388-0e7c-d41d7e0ed263@citrix.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <3c0fb20e-b6bb-83f6-3460-53de14e18663@suse.com>
In-Reply-To: <3c0fb20e-b6bb-83f6-3460-53de14e18663@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CO3PR03MB6728:EE_
x-ms-office365-filtering-correlation-id: 910c89f4-db46-446f-ffae-08dafae20ca6
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <C2B66D68EC234F48A9E5DB84B5B3128C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 19/01/2023 6:20 am, Juergen Gross wrote:
> On 17.01.23 14:53, Andrew Cooper wrote:
>> This is the tools side of the Xen series posted previously.
>>
>> With this, a Xen built with long strings can be retrieved:
>>
>>    # xl info
>>    ...
>>    xen_version            : 4.18-unstable+REALLY LONG EXTRAVERSION
>>    xen_changeset          : Tue Jan 3 19:27:17 2023
>> git:52d2da6c0544+REALLY SUPER DUPER EXTRA MEGA LONG CHANGESET
>>    ...
>>
>>
>> Andrew Cooper (6):
>>    tools/libxc: Move xc_version() out of xc_private.c into its own file
>>    tools: Introduce a non-truncating xc_xenver_extraversion()
>>    tools: Introduce a non-truncating xc_xenver_capabilities()
>>    tools: Introduce a non-truncating xc_xenver_changeset()
>>    tools: Introduce a non-truncating xc_xenver_cmdline()
>>    tools: Introduce a xc_xenver_buildid() wrapper
>>
>>   tools/include/xenctrl.h             |  10 ++
>>   tools/libs/ctrl/Makefile.common     |   1 +
>>   tools/libs/ctrl/xc_private.c        |  66 ------------
>>   tools/libs/ctrl/xc_private.h        |   7 --
>>   tools/libs/ctrl/xc_version.c        | 206
>> ++++++++++++++++++++++++++++++++++++
>>   tools/libs/light/libxl.c            |  61 +----------
>>   tools/ocaml/libs/xc/xenctrl_stubs.c |  45 +++++---
>>   7 files changed, 250 insertions(+), 146 deletions(-)
>>   create mode 100644 tools/libs/ctrl/xc_version.c
>>
>
> Hmm, I'm not completely opposed to this, but do we really need all that
> additional code?
>
> Apart from the build-id all the information is easily available via hypfs.

capabilities at the very least isn't there.  Not that I'm particularly
complaining - it's not an interface we want to encourage.

> And the build-id can be easily added to hypfs.

Hypfs is optional, and you will find firm resistance to making it
mandatory for this.

Also, having looked at how hypfs_string_set_reference() works, it's not
correct with livepatching (nothing updates size).  I suspect this only
impacts the livepatching "unit" tests which nothing runs (hence why
livepatching is *still* broken on 4.15 and later).

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 12:31:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 12:31:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175995-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175995: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=90caa47aa33efb6408514ce09721624a2bdf6694
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 12:30:48 +0000

flight 175995 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175995/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  90caa47aa33efb6408514ce09721624a2bdf6694
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175961  2023-01-18 17:01:50 Z    1 days
Testing same since   175995  2023-01-20 09:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f588e7b7cb..90caa47aa3  90caa47aa33efb6408514ce09721624a2bdf6694 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 12:39:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 12:39:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 704109b3-98bf-11ed-b8d1-410ff93cb8f0
Message-ID: <1be7ea27-aec5-e2ae-48c1-fe0c1f099181@amd.com>
Date: Fri, 20 Jan 2023 13:39:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN][RFC PATCH v4 01/16] xen/arm/device: Remove __init from
 function type
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221207061537.7266-1-vikram.garhwal@amd.com>
 <20221207061537.7266-2-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221207061537.7266-2-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-OriginatorOrg: amd.com

Hi Vikram,

On 07/12/2022 07:15, Vikram Garhwal wrote:
> 
> 
> Change the function type of the following functions to allow access during runtime:
>     1. map_irq_to_domain()
>     2. handle_device_interrupt()
>     3. map_range_to_domain()
>     4. unflatten_dt_node()
>     5. unflatten_device_tree()
If you do not want to do this at first use, then this should be a separate patch.
> 
> Move map_irq_to_domain(), handle_device_interrupt() and map_range_to_domain() to
> device.c.
This should be a separate patch (without removing __init) to make the comparison easier.

> 
> unflatten_device_tree(): Add handling of memory allocation failure.
Apart from that you also renamed __unflatten_device_tree to unflatten_device_tree
and you did not mention it.

> 
> These changes are done to support the dynamic programming of nodes, where an
> overlay node will be added to the fdt and the unflattened node will be added
> to dt_host. Furthermore, IRQ and mmio mapping will be done for the added node.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/arch/arm/device.c            | 145 +++++++++++++++++++++++++++++++
>  xen/arch/arm/domain_build.c      | 142 ------------------------------
>  xen/arch/arm/include/asm/setup.h |   3 +
>  xen/common/device_tree.c         |  27 +++---
>  xen/include/xen/device_tree.h    |   5 ++
>  5 files changed, 170 insertions(+), 152 deletions(-)
> 
> diff --git a/xen/arch/arm/device.c b/xen/arch/arm/device.c
> index 70cd6c1a19..d299c04e62 100644
> --- a/xen/arch/arm/device.c
> +++ b/xen/arch/arm/device.c
> @@ -21,6 +21,9 @@
>  #include <xen/errno.h>
>  #include <xen/init.h>
>  #include <xen/lib.h>
> +#include <xen/iocap.h>
> +#include <asm/domain_build.h>
> +#include <asm/setup.h>
> 
>  extern const struct device_desc _sdevice[], _edevice[];
>  extern const struct acpi_device_desc _asdevice[], _aedevice[];
> @@ -84,6 +87,148 @@ enum device_class device_get_class(const struct dt_device_node *dev)
>      return DEVICE_UNKNOWN;
>  }
> 
> +int map_irq_to_domain(struct domain *d, unsigned int irq,
> +                      bool need_mapping, const char *devname)
> +{
> +    int res;
> +
> +    res = irq_permit_access(d, irq);
> +    if ( res )
> +    {
> +        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
> +               d->domain_id, irq);
> +        return res;
> +    }
> +
> +    if ( need_mapping )
> +    {
> +        /*
> +         * Checking the return of vgic_reserve_virq is not
> +         * necessary. It should not fail except when we try to map
> +         * the IRQ twice. This can legitimately happen if the IRQ is shared
> +         */
> +        vgic_reserve_virq(d, irq);
> +
> +        res = route_irq_to_guest(d, irq, irq, devname);
> +        if ( res < 0 )
> +        {
> +            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
> +                   irq, d->domain_id);
> +            return res;
> +        }
> +    }
> +
> +    dt_dprintk("  - IRQ: %u\n", irq);
> +    return 0;
> +}
> +
> +int map_range_to_domain(const struct dt_device_node *dev,
> +                        u64 addr, u64 len, void *data)
> +{
> +    struct map_range_data *mr_data = data;
> +    struct domain *d = mr_data->d;
> +    int res;
> +
> +    /*
> +     * reserved-memory regions are RAM carved out for a special purpose.
> +     * They are not MMIO and therefore a domain should not be able to
> +     * manage them via the IOMEM interface.
> +     */
> +    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
> +                     strlen("/reserved-memory/")) != 0 )
> +    {
> +        res = iomem_permit_access(d, paddr_to_pfn(addr),
> +                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
> +        if ( res )
> +        {
> +            printk(XENLOG_ERR "Unable to permit to dom%d access to"
> +                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
> +                   d->domain_id,
> +                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
> +            return res;
> +        }
> +    }
> +
> +    if ( !mr_data->skip_mapping )
> +    {
> +        res = map_regions_p2mt(d,
> +                               gaddr_to_gfn(addr),
> +                               PFN_UP(len),
> +                               maddr_to_mfn(addr),
> +                               mr_data->p2mt);
> +
> +        if ( res < 0 )
> +        {
> +            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
> +                   " - 0x%"PRIx64" in domain %d\n",
> +                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
> +                   d->domain_id);
> +            return res;
> +        }
> +    }
> +
> +    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
> +               addr, addr + len, mr_data->p2mt);
> +
> +    return 0;
> +}
> +
> +/*
> + * handle_device_interrupts retrieves the interrupts configuration from
> + * a device tree node and maps those interrupts to the target domain.
> + *
> + * Returns:
> + *   < 0 error
> + *   0   success
> + */
> +int handle_device_interrupts(struct domain *d,
> +                             struct dt_device_node *dev,
> +                             bool need_mapping)
> +{
> +    unsigned int i, nirq;
> +    int res;
> +    struct dt_raw_irq rirq;
> +
> +    nirq = dt_number_of_irq(dev);
> +
> +    /* Give permission and map IRQs */
> +    for ( i = 0; i < nirq; i++ )
> +    {
> +        res = dt_device_get_raw_irq(dev, i, &rirq);
> +        if ( res )
> +        {
> +            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
> +                   i, dt_node_full_name(dev));
> +            return res;
> +        }
> +
> +        /*
> +         * Don't map IRQ that have no physical meaning
> +         * ie: IRQ whose controller is not the GIC
> +         */
> +        if ( rirq.controller != dt_interrupt_controller )
> +        {
> +            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
> +                       i, dt_node_full_name(rirq.controller));
> +            continue;
> +        }
> +
> +        res = platform_get_irq(dev, i);
> +        if ( res < 0 )
> +        {
> +            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
> +                   i, dt_node_full_name(dev));
> +            return res;
> +        }
> +
> +        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
> +        if ( res )
> +            return res;
> +    }
> +
> +    return 0;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 4fb5c20b13..acde8e714e 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2229,41 +2229,6 @@ int __init make_chosen_node(const struct kernel_info *kinfo)
>      return res;
>  }
> 
> -int __init map_irq_to_domain(struct domain *d, unsigned int irq,
> -                             bool need_mapping, const char *devname)
> -{
> -    int res;
> -
> -    res = irq_permit_access(d, irq);
> -    if ( res )
> -    {
> -        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
> -               d->domain_id, irq);
> -        return res;
> -    }
> -
> -    if ( need_mapping )
> -    {
> -        /*
> -         * Checking the return of vgic_reserve_virq is not
> -         * necessary. It should not fail except when we try to map
> -         * the IRQ twice. This can legitimately happen if the IRQ is shared
> -         */
> -        vgic_reserve_virq(d, irq);
> -
> -        res = route_irq_to_guest(d, irq, irq, devname);
> -        if ( res < 0 )
> -        {
> -            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
> -                   irq, d->domain_id);
> -            return res;
> -        }
> -    }
> -
> -    dt_dprintk("  - IRQ: %u\n", irq);
> -    return 0;
> -}
If you move map_irq_to_domain from domain_build.c to device.c, then the prototype needs to also
be moved from domain_build.h to setup.h

> -
>  static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>                                         const struct dt_irq *dt_irq,
>                                         void *data)
> @@ -2295,57 +2260,6 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>      return 0;
>  }
> 
> -int __init map_range_to_domain(const struct dt_device_node *dev,
> -                               u64 addr, u64 len, void *data)
> -{
> -    struct map_range_data *mr_data = data;
> -    struct domain *d = mr_data->d;
> -    int res;
> -
> -    /*
> -     * reserved-memory regions are RAM carved out for a special purpose.
> -     * They are not MMIO and therefore a domain should not be able to
> -     * manage them via the IOMEM interface.
> -     */
> -    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
> -                     strlen("/reserved-memory/")) != 0 )
> -    {
> -        res = iomem_permit_access(d, paddr_to_pfn(addr),
> -                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
> -        if ( res )
> -        {
> -            printk(XENLOG_ERR "Unable to permit to dom%d access to"
> -                    " 0x%"PRIx64" - 0x%"PRIx64"\n",
> -                    d->domain_id,
> -                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
> -            return res;
> -        }
> -    }
> -
> -    if ( !mr_data->skip_mapping )
> -    {
> -        res = map_regions_p2mt(d,
> -                               gaddr_to_gfn(addr),
> -                               PFN_UP(len),
> -                               maddr_to_mfn(addr),
> -                               mr_data->p2mt);
> -
> -        if ( res < 0 )
> -        {
> -            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
> -                   " - 0x%"PRIx64" in domain %d\n",
> -                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
> -                   d->domain_id);
> -            return res;
> -        }
> -    }
> -
> -    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
> -               addr, addr + len, mr_data->p2mt);
> -
> -    return 0;
> -}
> -
>  /*
>   * For a node which describes a discoverable bus (such as a PCI bus)
>   * then we may need to perform additional mappings in order to make
> @@ -2373,62 +2287,6 @@ static int __init map_device_children(const struct dt_device_node *dev,
>      return 0;
>  }
> 
> -/*
> - * handle_device_interrupts retrieves the interrupts configuration from
> - * a device tree node and maps those interrupts to the target domain.
> - *
> - * Returns:
> - *   < 0 error
> - *   0   success
> - */
> -static int __init handle_device_interrupts(struct domain *d,
> -                                           struct dt_device_node *dev,
> -                                           bool need_mapping)
> -{
> -    unsigned int i, nirq;
> -    int res;
> -    struct dt_raw_irq rirq;
> -
> -    nirq = dt_number_of_irq(dev);
> -
> -    /* Give permission and map IRQs */
> -    for ( i = 0; i < nirq; i++ )
> -    {
> -        res = dt_device_get_raw_irq(dev, i, &rirq);
> -        if ( res )
> -        {
> -            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
> -                   i, dt_node_full_name(dev));
> -            return res;
> -        }
> -
> -        /*
> -         * Don't map IRQ that have no physical meaning
> -         * ie: IRQ whose controller is not the GIC
> -         */
> -        if ( rirq.controller != dt_interrupt_controller )
> -        {
> -            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
> -                      i, dt_node_full_name(rirq.controller));
> -            continue;
> -        }
> -
> -        res = platform_get_irq(dev, i);
> -        if ( res < 0 )
> -        {
> -            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
> -                   i, dt_node_full_name(dev));
> -            return res;
> -        }
> -
> -        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
> -        if ( res )
> -            return res;
> -    }
> -
> -    return 0;
> -}
> -
>  /*
>   * For a given device node:
>   *  - Give permission to the guest to manage IRQ and MMIO range
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index fdbf68aadc..ec050848aa 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -163,6 +163,9 @@ void device_tree_get_reg(const __be32 **cell, u32 address_cells,
>  u32 device_tree_get_u32(const void *fdt, int node,
>                          const char *prop_name, u32 dflt);
> 
> +int handle_device_interrupts(struct domain *d, struct dt_device_node *dev,
> +                             bool need_mapping);
> +
>  int map_range_to_domain(const struct dt_device_node *dev,
>                          u64 addr, u64 len, void *data);
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 6c9712ab7b..6518eff9b0 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -1811,12 +1811,12 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
>   * @allnextpp: pointer to ->allnext from last allocated device_node
>   * @fpsize: Size of the node path up at the current depth.
>   */
> -static unsigned long __init unflatten_dt_node(const void *fdt,
> -                                              unsigned long mem,
> -                                              unsigned long *p,
> -                                              struct dt_device_node *dad,
> -                                              struct dt_device_node ***allnextpp,
> -                                              unsigned long fpsize)
> +static unsigned long unflatten_dt_node(const void *fdt,
> +                                       unsigned long mem,
> +                                       unsigned long *p,
> +                                       struct dt_device_node *dad,
> +                                       struct dt_device_node ***allnextpp,
> +                                       unsigned long fpsize)
>  {
>      struct dt_device_node *np;
>      struct dt_property *pp, **prev_pp = NULL;
> @@ -2047,7 +2047,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
>  }
> 
>  /**
> - * __unflatten_device_tree - create tree of device_nodes from flat blob
> + * unflatten_device_tree - create tree of device_nodes from flat blob
>   *
>   * unflattens a device-tree, creating the
>   * tree of struct device_node. It also fills the "name" and "type"
> @@ -2056,8 +2056,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
>   * @fdt: The fdt to expand
>   * @mynodes: The device_node tree created by the call
>   */
> -static void __init __unflatten_device_tree(const void *fdt,
> -                                           struct dt_device_node **mynodes)
> +int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
>  {
>      unsigned long start, mem, size;
>      struct dt_device_node **allnextp = mynodes;
> @@ -2079,6 +2078,12 @@ static void __init __unflatten_device_tree(const void *fdt,
>      /* Allocate memory for the expanded device tree */
>      mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct dt_device_node));
> 
> +    if ( mem == 0 )
NIT: !mem would be preferred.

> +    {
> +        printk(XENLOG_ERR "Cannot allocate memory for unflatten device tree\n");
> +        return -ENOMEM;
What is the point of modifying the function to return a value if ...
> +    }
> +
>      ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
> 
>      dt_dprintk("  unflattening %lx...\n", mem);
> @@ -2095,6 +2100,8 @@ static void __init __unflatten_device_tree(const void *fdt,
>      *allnextp = NULL;
> 
>      dt_dprintk(" <- unflatten_device_tree()\n");
> +
> +    return 0;
>  }
> 
>  static void dt_alias_add(struct dt_alias_prop *ap,
> @@ -2179,7 +2186,7 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)
> 
>  void __init dt_unflatten_host_device_tree(void)
>  {
> -    __unflatten_device_tree(device_tree_flattened, &dt_host);
> +    unflatten_device_tree(device_tree_flattened, &dt_host);
... you do not check it anyway?
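To make the new return value useful, the call site should act on it. A stand-alone sketch of the pattern (stub functions, not the real Xen implementation; the names merely mirror the patch, and in Xen the boot-time caller would likely panic() rather than return):

```c
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/* Stub: stands in for unflatten_device_tree(); a NULL fdt plays the
 * role of the _xmalloc() failure path. Note the "!fdt" style check. */
static int unflatten_device_tree(const void *fdt, void **mynodes)
{
    if ( !fdt )
        return -ENOMEM;

    *mynodes = (void *)fdt; /* pretend the tree was built */
    return 0;
}

/* The caller checks the result instead of silently ignoring it. */
static int dt_unflatten_host_device_tree(const void *fdt, void **dt_host)
{
    int ret = unflatten_device_tree(fdt, dt_host);

    if ( ret )
    {
        fprintf(stderr, "unflatten_device_tree failed: %d\n", ret);
        return ret; /* in Xen: panic(), since boot cannot continue */
    }

    return 0;
}
```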

>      dt_alias_scan();
>  }
> 
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index a28937d12a..bde46d7120 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -181,6 +181,11 @@ int device_tree_for_each_node(const void *fdt, int node,
>   */
>  void dt_unflatten_host_device_tree(void);
> 
> +/**
> + * unflatten any device tree.
> + */
> +int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes);
> +
>  /**
>   * IRQ translation callback
>   * TODO: For the moment we assume that we only have ONE
> --
> 2.17.1
> 
> 

~Michal



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 12:41:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 12:41:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481741.746827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIqhq-0007hK-Fg; Fri, 20 Jan 2023 12:41:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481741.746827; Fri, 20 Jan 2023 12:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIqhq-0007hD-Ci; Fri, 20 Jan 2023 12:41:18 +0000
Received: by outflank-mailman (input) for mailman id 481741;
 Fri, 20 Jan 2023 12:41:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nUaQ=5R=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pIqho-0007h3-Pz
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 12:41:16 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2056.outbound.protection.outlook.com [40.107.94.56])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b92cd37a-98bf-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 13:41:14 +0100 (CET)
Received: from BN9PR03CA0726.namprd03.prod.outlook.com (2603:10b6:408:110::11)
 by SN7PR12MB7346.namprd12.prod.outlook.com (2603:10b6:806:299::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 20 Jan
 2023 12:41:11 +0000
Received: from BN8NAM11FT012.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:110:cafe::d7) by BN9PR03CA0726.outlook.office365.com
 (2603:10b6:408:110::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.26 via Frontend
 Transport; Fri, 20 Jan 2023 12:41:11 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT012.mail.protection.outlook.com (10.13.177.55) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Fri, 20 Jan 2023 12:41:10 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 06:41:09 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 04:41:09 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 20 Jan 2023 06:41:08 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b92cd37a-98bf-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bsjh7NlFYCdpSPT9rWvWt3pWpsub3Lpcm7XBFwleY6NvlBIbczBuBeeUBtPlmsHPhsilJ/UUkVwqx1OGIlkjDnOZEuTZXD2fA5jP2Ijt2RvogRd4DlavFUN1NSJckda952nwqa1sCBuL2Hf2mqhhDKwgtP/j+KbdqGK9iuv1y3OOojznsq3xBkPqGxZ0ud4d/hRnrfkTYRc1NtqgiJLoA8/KrHjQOYBcb4T7ON+vi54PgaIwX6GByjXJvlkBgHuDA6zmIGWQMZPwgHPbJIAWW+hM23KnWI7bfEGhhn+U7zcUL45JbtuLyGfcdYTtIt/2Kpgpt3yUx30xB+ZFntymTA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mtKChU3iccf56959COJY4jcxhHIUCtrIhSBaGsCJg+Q=;
 b=aYEWjJl5k4DHDw+Ui8LxeVkQPlyZzcPilFTxZ4pn6fK/Kd3XWukhd8vmahcXPafJYoRFaedR2h4kk7wko1LqWLCPzs5juRROsOdDtAz+Vr7bpQHBkw3SYtly6aMkzQaAlLVwmD8Md0EepSUWFAN/Ktlt36oTPEWuEhejyNsM7427JvPScGR2JSyAHecI+dzLiGYwhXrpeXf1pZqH/ixOe16G7+GXfkAwHDxaLnCni3xszvEQI0BKb9wSmmzpdLUleVHUt0MfQDhEUXtAQ3H4NAUa2AJ/XUvazqgjkuYOWEym/VWGpdSb1j/FCRvnSc2WLLnlgsLVUQA4pHZ0KjLG5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mtKChU3iccf56959COJY4jcxhHIUCtrIhSBaGsCJg+Q=;
 b=yvDdZHGhUfLn8r6jYl3Ab4WI6M5zMQZDq+RF/TXAeAUdhXN+OxroMFaJoRGViUVuzuMSQaXuqtd6bRpHPB4fk5KDZBfdeEU279SkvnBnupb94xmlWLZU1kraIJxMgqCvJHVdcJOre0K24SUi/xZLxVNEwAsdEIqwek9ljZSOTRc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <c24ecaa1-2b55-2ba9-25b1-ff18b15f2d1c@amd.com>
Date: Fri, 20 Jan 2023 13:41:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN][RFC PATCH v4 02/16] xen/arm: Add CONFIG_OVERLAY_DTB
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221207061537.7266-1-vikram.garhwal@amd.com>
 <20221207061537.7266-3-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221207061537.7266-3-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT012:EE_|SN7PR12MB7346:EE_
X-MS-Office365-Filtering-Correlation-Id: d8f80213-cb9d-4a35-d954-08dafae39be7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	L0yNUJUjB9knflPZ3mlMguFHTmUbSc31/xV7KtH/1c5FCzDAfqV+Dh2RvbzsWCqIFTRRGeXtyJX1JoQnWV9JjIR+j3vfg9ucMeYBd95bww6COErAHxO+8qtrOOner0BDiQWWMPfMuyJAT/vbib1kfERf6q9ABiCZHguhajkYKHwZKBG2PdlFiI1kkOvf7yhBBvCD0zwLL0OhQFAZU3yzlMdRYU1Hs4RHwk925aExYPi/HehblilralWVHQsc7Uqnv+a2B3DsqbDZND2dDAmSmQq9bFksbs2GkrH114rXDr4/Cq2JLg/5YQmGdjM5GY61iqftjOhJfk+SU5H7XYOAl7CuYH5LA/k+CB1+88oWUbq1zueaYKWJ3DZp/toflJKREcnKkldawceVNztJnV+q9DNdNmQaJEJu1UH6NLWRuHAZ1IbeOUP4QeBbWr+QAU1AI8mCZUbIje/HSK+ZjXIaN4E/VczER6odtFGzfxTnrBfgSAuILoIxQbSrvuXWjGhxndEEz9/LWX6K1y+r7Q1XQ84GRuS+QchISyk03nrHSBdi4lNLwgYeTnQVIiufGE3zSPH3LJq4XuHrV9e5uF4yrzbcTZzBjv+uV9CL3fpw5sCb+lF7ZndkYOZ5LnVz+uMYMr7N+ikQKtsUd2j5pUPn8x2CboTI7Lk2nSEQd+fkWAi8VbeY3H9d4+C+DeBBDoAiu72O9dA4sXZeqGZOKJzZEoKGl60zfJH30t5c7K2QmJacVzwZx6ZX7UdzhDIl+ohV4qFOPeBMEW0+y1WzxzoABw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(39860400002)(376002)(396003)(346002)(451199015)(40470700004)(36840700001)(46966006)(40480700001)(478600001)(54906003)(53546011)(31686004)(110136005)(6666004)(16576012)(8676002)(316002)(186003)(26005)(70206006)(70586007)(336012)(8936002)(36756003)(41300700001)(2616005)(4744005)(5660300002)(44832011)(86362001)(426003)(2906002)(36860700001)(47076005)(4326008)(40460700003)(82740400003)(81166007)(82310400005)(356005)(31696002)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 12:41:10.6060
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d8f80213-cb9d-4a35-d954-08dafae39be7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT012.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7346

Hi Vikram,

On 07/12/2022 07:15, Vikram Garhwal wrote:
> 
> 
> Introduce a config option where the user can enable support for adding/removing
> device tree nodes using a device tree binary overlay.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/arch/arm/Kconfig | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 1fe5faf847..ae2ebf1697 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -52,6 +52,11 @@ config HAS_ITS
>          bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
>          depends on GICV3 && !NEW_VGIC
> 
> +config OVERLAY_DTB
> +    bool "DTB overlay support (UNSUPPORTED)" if UNSUPPORTED
> +    help
> +    Dynamic addition/removal of Xen device tree nodes using a dtbo.
The help text should be indented by a tab and 2 spaces.
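That is, a corrected fragment would look like this (a literal tab before each option line, and a tab plus two spaces before each line of the help body, per the usual Kconfig convention):

```
config OVERLAY_DTB
	bool "DTB overlay support (UNSUPPORTED)" if UNSUPPORTED
	help
	  Dynamic addition/removal of Xen device tree nodes using a dtbo.
```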

> +
>  config HVM
>          def_bool y
> 
> --
> 2.17.1
> 
> 

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 12:45:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 12:45:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481746.746837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIqlO-0008Kq-TY; Fri, 20 Jan 2023 12:44:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481746.746837; Fri, 20 Jan 2023 12:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIqlO-0008Kj-Ql; Fri, 20 Jan 2023 12:44:58 +0000
Received: by outflank-mailman (input) for mailman id 481746;
 Fri, 20 Jan 2023 12:44:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nUaQ=5R=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pIqlO-0008KV-B3
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 12:44:58 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2076.outbound.protection.outlook.com [40.107.223.76])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3d843775-98c0-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 13:44:56 +0100 (CET)
Received: from BN9PR03CA0485.namprd03.prod.outlook.com (2603:10b6:408:130::10)
 by BN9PR12MB5243.namprd12.prod.outlook.com (2603:10b6:408:100::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 12:44:53 +0000
Received: from BN8NAM11FT008.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:130:cafe::78) by BN9PR03CA0485.outlook.office365.com
 (2603:10b6:408:130::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.26 via Frontend
 Transport; Fri, 20 Jan 2023 12:44:53 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT008.mail.protection.outlook.com (10.13.177.95) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Fri, 20 Jan 2023 12:44:52 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 06:44:51 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 04:44:51 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 20 Jan 2023 06:44:50 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d843775-98c0-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Txlr9bEQEB1ux7nyaLKk2gYtJdLj59e1QP1D9KHR4OFNR62+ZxmNdVW9iXHrkyhlh0a7CvwQS1diEHR8AgJFMkrXztNB7z7FaRmEimxdGRFco0h1T3OpiIed3hsNV0fxxgODXgUmyIPTETeeqB7YsU1YqFiEeB+hgEF+Q3DINK5dvAvz53+x31lguUXHXqpCTgDf6dNgBQp3sCQfh7lIV4ThLwU/xcr5GAaITuNTWrBqaQXiGbRCstWiJWAdxpmlXH19bCFLGmIm/ACsPVayS+7zL4l1MTdvfsGVdJAfsXndR2H4EBA0cw5JOa3HUs3Abjqbjd8JWOCvBb+1foMxbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nEiZoyF42vVDQA4kucWVfB3PkiLOnmnKijv6lSxTM3E=;
 b=f7R1afXeG/fAXORxYn4PYfXqCAWDcG89Hg+iIefgF/4bd939l+JmXRVHpuWmN8cj62Tw85a56vbMTnTIIK4gq9OoZR8b1PhR7wTbVsk2qmH6HvhRqjv1wr03w1zpNJx7noZhXZKrfgsdwEVALUCJPqE+8Yo8nlkc4FS4OQ44ot6J/pIMkcNRYyE27MrnCaWNL6Jggi0PEeOfCjPnu9RiVrAhrjdZlrWZtNZiQLrJD0YDSBho6moBvV0Tt9TwTs914lLwE/Qku5NNGYsF3YZ6daazQgzgq/imqQbbbaSdX39WOh4xMuNlAUrWwMOlEdspuURYZwunXWljS46+dW2ssg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nEiZoyF42vVDQA4kucWVfB3PkiLOnmnKijv6lSxTM3E=;
 b=CdeUNg4jViCKdtlN0uLlQ7x+9+O7PqjiH3bnk5GJxrmMC5RPy5vnaPorT6BJO8LoyV28Lr0k3RuAUa6PNvfoxnYJ/iswYAYqx86aA2UMsmyndcu9sSiozDbvpayhTQ+bqgjklB6NKFutiYA0lH/nvgOrPhlx6swQnKZd9OVju3E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <375a18ec-60b8-c2d5-ad88-a6cfa82f4a8c@amd.com>
Date: Fri, 20 Jan 2023 13:44:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN][RFC PATCH v4 04/16] libfdt: overlay: change
 overlay_get_target()
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <julien@xen.org>, David Gibson
	<david@gibson.dropbear.id.au>
References: <20221207061537.7266-1-vikram.garhwal@amd.com>
 <20221207061537.7266-5-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221207061537.7266-5-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT008:EE_|BN9PR12MB5243:EE_
X-MS-Office365-Filtering-Correlation-Id: 528ddc0f-2810-4f23-f355-08dafae4201d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hBPZ64NhgG+tTrm/KHDkjFTmMDBWe2T/bntRsKWfm1irLj9tjhE5AHynEXf8fMV45mAxev1cZq6q4//EHnf3ger9jNXUvaylsqgu1Qgyr64V8Lp8n3FoF77dywwWXd0316qUHJYjfg3TOSkxccFmKpS+YE8ZUAsZ+pNqxaEUB7HMIGpEBtSu62wPv2iQDeoHQkv73h1vl243S6iZ39kf7O/WloW4cKiiCx7JufnxPTFrletLoxjt/qSUuI2p+NvKj4tJidaNidyunBzyPXkUGFsXyliNPhf6/BFGqXwe31TwSXTyqWdKavSmAYyptQJLSJtCjVUiYHvV2oYDe4oLZHjnF3UcuiFQ2vG8J65k2AxrPgGNmTKotV5RQ0XLrl++Ij2uBPMy28DcuKojxOTdR/iwRNKjyNwRqHAP1g4vUqHjjXFU2zXSLREQc3VXY4Cw5wMiQCBw6k4Pp322kk5S6ygIPwz05V+YIpiesrkt5eRL2i/N82cnoL8l+GXDMOUNfBmIauGIiOTkxFvxQ62USHOz6JsXRHkJoIGnsX/LaSCwo9+X+RZs9TWE43tZUXdRYuZhRE8ED7Q4cl5mCIxdnpRTWPd1Sdh7nB8JQ1NALt6d3Nv6uTW+jHEB8SiG1FyiwfmgBGhncySvTU1jvyUcJ+ivXy5gPgNGrlnpkm/e2Ohyz7dRE3GGm6NkoJPOBYvxW+YX2IKuvoAuf3cpKUSH15bO6Dpt1ZIyoqLzvHnXojfauJJY++7KbZ280cTi/lfjK3ucmtO8wL0KL86pjAm/f0KCI6hW1ujFV41/N8a//dc=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(376002)(346002)(136003)(396003)(39860400002)(451199015)(46966006)(36840700001)(40470700004)(5660300002)(44832011)(8936002)(31686004)(4326008)(8676002)(82310400005)(70586007)(70206006)(31696002)(186003)(26005)(2906002)(110136005)(54906003)(966005)(316002)(36756003)(478600001)(16576012)(2616005)(53546011)(83380400001)(41300700001)(356005)(40480700001)(82740400003)(36860700001)(40460700003)(86362001)(81166007)(336012)(47076005)(426003)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 12:44:52.4199
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 528ddc0f-2810-4f23-f355-08dafae4201d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT008.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5243

Hi Vikram,

On 07/12/2022 07:15, Vikram Garhwal wrote:
> 
> 
> Rename overlay_get_target() to fdt_overlay_target_offset() and remove static
> function type.
> 
> This is done to get the target path for the overlay nodes which is very useful
> in many cases. For example, Xen hypervisor needs it when applying overlays
> because Xen needs to do further processing of the overlay nodes, e.g. mapping of
> resources(IRQs and IOMMUs) to other VMs, creation of SMMU pagetables, etc.
> 
> Origin: https://github.com/dgibson/dtc 45f3d1a095dd
You should move the Origin tag after all the tags coming from the original patch.

As per "sending-patches.pandoc":
"All tags **above** the `Origin:` tag are from the original patch (which
should all be kept), while tags **after** `Origin:` are related to the
normal Xen patch process as described here."
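In other words, the trailer block should be ordered like this (placeholders shown where the concrete tags come from the respective patches):

```
<tags inherited from the original dtc commit, e.g. David Gibson's Signed-off-by>
Origin: https://github.com/dgibson/dtc 45f3d1a095dd
<Xen-side tags, e.g. your own Signed-off-by>
```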

> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Message-Id: <1637204036-382159-2-git-send-email-fnu.vikram@xilinx.com>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

~Michal



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 12:51:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 12:51:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481751.746847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIqr0-0001Ku-IV; Fri, 20 Jan 2023 12:50:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481751.746847; Fri, 20 Jan 2023 12:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIqr0-0001Kn-EC; Fri, 20 Jan 2023 12:50:46 +0000
Received: by outflank-mailman (input) for mailman id 481751;
 Fri, 20 Jan 2023 12:50:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nUaQ=5R=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pIqqz-0001KZ-Pv
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 12:50:45 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2069.outbound.protection.outlook.com [40.107.237.69])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0d5028eb-98c1-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 13:50:44 +0100 (CET)
Received: from DS7PR03CA0011.namprd03.prod.outlook.com (2603:10b6:5:3b8::16)
 by DM4PR12MB6616.namprd12.prod.outlook.com (2603:10b6:8:8e::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 12:50:41 +0000
Received: from DS1PEPF0000E655.namprd02.prod.outlook.com
 (2603:10b6:5:3b8:cafe::6b) by DS7PR03CA0011.outlook.office365.com
 (2603:10b6:5:3b8::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27 via Frontend
 Transport; Fri, 20 Jan 2023 12:50:41 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DS1PEPF0000E655.mail.protection.outlook.com (10.167.18.11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.11 via Frontend Transport; Fri, 20 Jan 2023 12:50:40 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 06:50:40 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 06:50:40 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 20 Jan 2023 06:50:39 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d5028eb-98c1-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mt4akUxZaIeSPVsP/Tk4pYCOsdO8jtWTlIhF9eMCS3clJXDIS4nvfK5SZaVkT5bKbrZvoF46KP9yPQWb9L03Xq54erM7K/vP/xDZ4gE5Ab+x7I5OZeAxqOYCkoPwqyCuMczMZA4rGH7lNzQxvTLGhfhIpNHC300M0L8GBobCtasmbuO7gvclzLgmTww3H9/HmQMPjTyDCrHCOkDZaRAYrQh5G/mQWRhhN/Xf9SMQVDXGRbH2JA8n752PPXMe1gA7iQuPnFPBAsbmVG4UD2W6Y6tHIou1FWqqlD3bIkEzEW/+CeR/eI7D/045KjU/5wpOdafiB3qAFRVdr3EFDmRYlQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kRIjhlI7J3YlTvH//k+3wzT8sYdd/bnubg/ndIDpPq0=;
 b=lTNSq2pqBBTUwudrYO6lHxV0kSBDOb+NtpXXVMi/QNvH1eKBNBrCPVGCl0FO6Wrg+zvfIVTnoAIgkhP1GybDbzs3e7jfQPz6FjyzjsJmUFHVNDKtU+5yc3yIVZ5pM744STwWyHsbJ/Cy2XqSxNoOgOtrorANNfClS8hvwnLfpsMokMF0UlPF/5fXSVDEuaoln7QZI76pqN37xz7jdyVdBW3Z0YaFgUKFrjBvvCG/i+aUO8dpVfWdPHoxcx2kzJoquCpqb9GARDAYOLGwl0xEmzk78lRB+naXMkRqT9D60iDwMITuPZq0pMo934rWjNcT7AmsVhkJxxxDpv8DzVdW7Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kRIjhlI7J3YlTvH//k+3wzT8sYdd/bnubg/ndIDpPq0=;
 b=nmAJMmKhfl06NiYWNKqcf4wjg5F7yOma8RlwnLM0YMeFvu/a2O+SEbIYyXDfgSfQVG8/mJyUlhTEjVGYmEkZUETzWoNCugiYwjZJNXvfTwa5viG5VV476avx8pSHwDdJMJT3wz2rDsoWMiXUF9yTN4Bq67KaE/+XlQp9ZLsVLJ4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <e0f7b825-4b2f-e709-07c5-09b8ff192041@amd.com>
Date: Fri, 20 Jan 2023 13:50:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN][RFC PATCH v4 05/16] xen/device-tree: Add
 _dt_find_node_by_path() to find nodes in device tree
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <julien@xen.org>
References: <20221207061537.7266-1-vikram.garhwal@amd.com>
 <20221207061537.7266-6-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221207061537.7266-6-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E655:EE_|DM4PR12MB6616:EE_
X-MS-Office365-Filtering-Correlation-Id: 3430bcb5-e276-44f4-697b-08dafae4efe5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0AvtHYFlF+NB4bzlnFoyeZGVNUI8s5N9yf/ImebH4azJO6Qm7dn68Sn+7sKN1y9myOrpNBtOhcXtRDmR4zgyS6Yyks24H1O2rparNCQfNU/haWJhmROrexPMvZZAS3hmU1qGGwtOKoUJdb77lmd9uVOysG89OjaEpsOfY5Lhtcx4x77KlzVxg5wv9Ilkf5OePZLAQVLRCt5ajv74tDC/k59Yak4/kgLY4N7cD8HyHY4GEMMzEWi6H6YqM9M9h4USgMSn0sAZ3GO6jew8mrD+XUoS5hr2K0MllRLsdxZtu95T8rEEyYZDMlPRpBPXBhX1yK4FewbQHTBfZA2GNHBp0n6eg9/ADfJR9lrJsDuR1/c99LoRPN7FXhs42LQ2F3PbxrjXSrdXf2z2S+UeUmjJK8yDNJTb9pIeHDqF9xYzpF+qdz0E66SEMlwwdLQWOKuDa6n5c2c3ag19sW0k8msNJblJJK1a2DiQN5RdTbC4Ak8FtdGwD0EqZ0BRdgI/aIjn6cUdqWBS0ET/FeYHmAUlNzwTGsPJiIQPmTv8x5FNq51NQFpeKGDT7wlX6amSM9t4BA/keLQ2pMO1AsWKKDd42cjFzYPhuiYUyXeDHfqTtj70IpvzPa2jp0MuDrKcQCkaQYjEj72Zdwu1Ol/w9UnVg3/cmu6/kg9r4uclzvKjGJQyjR0UnqtnCpruke+GpujVJBi2tyxi4ZNKhop6xLusXPvFt9oeIEn+KKVTG5eGdVeRLUHO0NXfOXvRC/gpMzmdrq+oXoxAzCvBL2I7Y9MrCUIwTglz25Wy9XlyPPWqYLI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(396003)(346002)(376002)(39860400002)(451199015)(40470700004)(36840700001)(46966006)(2906002)(26005)(110136005)(186003)(54906003)(316002)(16576012)(2616005)(478600001)(36756003)(53546011)(31696002)(36860700001)(81166007)(40460700003)(86362001)(356005)(82740400003)(40480700001)(336012)(426003)(47076005)(83380400001)(41300700001)(4326008)(44832011)(5660300002)(31686004)(8936002)(70206006)(70586007)(82310400005)(8676002)(37363002)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 12:50:40.9842
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3430bcb5-e276-44f4-697b-08dafae4efe5
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E655.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6616

Hi Vikram,

On 07/12/2022 07:15, Vikram Garhwal wrote:
> 
> 
> Add _dt_find_by_path() to find a matching node with path for a dt_device_node.
Here and in the commit title you say _dt_find_by_path(), but you actually introduce device_tree_find_node_by_path().
Also, it would be beneficial to state why such a change is needed.

> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/device_tree.c      |  5 +++--
>  xen/include/xen/device_tree.h | 16 ++++++++++++++--
>  2 files changed, 17 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 6518eff9b0..acf26a411d 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -358,11 +358,12 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
>      return np;
>  }
> 
> -struct dt_device_node *dt_find_node_by_path(const char *path)
> +struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
> +                                                     const char *path)
>  {
>      struct dt_device_node *np;
> 
> -    dt_for_each_device_node(dt_host, np)
> +    dt_for_each_device_node(dt, np)
>          if ( np->full_name && (dt_node_cmp(np->full_name, path) == 0) )
>              break;
> 
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index bde46d7120..51e251b0b4 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -537,13 +537,25 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
>  struct dt_device_node *dt_find_node_by_alias(const char *alias);
> 
>  /**
> - * dt_find_node_by_path - Find a node matching a full DT path
> + * device_tree_find_node_by_path - Find a node matching a full DT path
This description and ...
> + * @dt_node: The device tree to search
>   * @path: The full path to match
>   *
>   * Returns a node pointer.
>   */
> -struct dt_device_node *dt_find_node_by_path(const char *path);
> +struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
> +                                                     const char *path);
> 
> +/**
> + * dt_find_node_by_path - Find a node matching a full DT path
... this one are identical. I think you should describe the difference.
The function names are also very similar and can easily be confused, but I won't oppose that.
I will leave the decision to the maintainers.


> + * @path: The full path to match
> + *
> + * Returns a node pointer.
> + */
> +static inline struct dt_device_node *dt_find_node_by_path(const char *path)
> +{
> +    return device_tree_find_node_by_path(dt_host, path);
> +}
> 
>  /**
>   * dt_find_node_by_gpath - Same as dt_find_node_by_path but retrieve the
> --
> 2.17.1
> 
> 
~Michal



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 13:11:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 13:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481757.746857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrAh-0003l3-6t; Fri, 20 Jan 2023 13:11:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481757.746857; Fri, 20 Jan 2023 13:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrAh-0003kw-3Y; Fri, 20 Jan 2023 13:11:07 +0000
Received: by outflank-mailman (input) for mailman id 481757;
 Fri, 20 Jan 2023 13:11:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1qDs=5R=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIrAg-0003kq-4M
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 13:11:06 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2058.outbound.protection.outlook.com [40.107.21.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e3e4b0cc-98c3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 14:11:03 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7393.eurprd04.prod.outlook.com (2603:10a6:20b:1d2::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 13:11:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.027; Fri, 20 Jan 2023
 13:10:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3e4b0cc-98c3-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d7n1z8eYi/+aWch+JavLwwMv03nPsjf57AOGdbdqOXLGTUtaTvevudja2JPOSlqpZYCQdP4gXPEHs+ialQ2bW4gIP8ocY+rafo4qlR/D7ORtim2Atu6CazHxuyCDpEgOwGLzvE3NnBw97teNrgqzQHuQfKZd9hsrabfbSPdMSXZyyaI5T/0c7lyuoQydBSKcxcfx5em1tLXEMKICgGq3OlatDpI82XILwHv8QuKktZm1vtEa6mkHfYqVReIEvO3FbSBoABM+Qs/uvkTKfhcwC7bgSozV91oPrks8hxL+RjiwYe1Bab9G1GHf5mFcjvUtWnnLTIFMQrSDmAsW8I/LaQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=twFvgIgQ0t0sucpxXPX0REiwICs0MBYEdRKZ5HyJ2cY=;
 b=c5GelhtO6tdM6RVbulT1XxVMDo1Y7GifM1jFHGFDFWV34IJFd97o5JCEm0uc+B/L+X1bPNQomqLbdCJIFHJ7IGr4s7aGxJ0nO4GVvHHwglPT4UxG8B9RSzkia3hUuAe4zeQn4JIwlIYRyXYidnONUsaMFGWLLdI6SbtoDzyXfzZ0gguQLajpT8csiY6MI668XbFC4dO3PtvQrOuHVjj5lLzLZscgkGw3uHUlTOCoKXTodjehFXI6e2v1hUMyGkEbxEco1GwO8ohauow5paWbOamrc36ed7I4PPAGpcUPXGNj4jECMk3aqAM7JvS2TCwSByFMRWEyKDbYgbPoohtRGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=twFvgIgQ0t0sucpxXPX0REiwICs0MBYEdRKZ5HyJ2cY=;
 b=iUvLDzXOTEpw52GnC8pOvBY0nnR2p+1NXsX38oGI99O6ei9RxtukXdJi4fipLUg5vZC0x4fDkRvpqU5Sk48qto6lFA62bND1fARuUokcu7YyEeBN4/ttAW0cDocKmt/dY9kGZxqOM5yj/sa9IGGqC3hd7Onu73XjMe5kndikAELyq9xeIw71KQ/9vk/sGJciwNkygqqpwgrBVzaSE8Hwfk1Zes0gdrFMLrImzVdGzLXtJQXzOsQtBd91hWeTal8+ibX/p/geAOCUuxuKAWDvqhC1EJOEJfeM8pg0oncFWonaOohjJgNzbWxcKBbQdnXYKqT6vePff4hJmuJEI8j5pQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f530ddfc-8f97-b913-e838-58cc352f6372@suse.com>
Date: Fri, 20 Jan 2023 14:10:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
 Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230120114556.14003-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230120114556.14003-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0054.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7393:EE_
X-MS-Office365-Filtering-Correlation-Id: 2b44ebab-3b62-46af-24fc-08dafae7c4eb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Wtqdzgmx795jWgJH6evNCF7zZ5n+W5dS1bAAHp1ckgw8PbKv1FUgGj3ShFLuaoYRbJQvArNfi5Uy6Sfg5Ve868zJXGZj2j5qdgMcA4QSDHG6U8rNCCWTMND4I2j6yT6/zl1CzBQlwwfownL4DsRYjmSh6yiMsLdZNPPFF7KaXQv1Ngbu7LycaSojDW8JJX6sGwiQ3q2IncAvKC5g75Yo+VSjW5zPHRaRnKa/3d6uuq7WMzygRkerHZD0ZhGzRRdMrZwP8hObH0p74WnhYeGBAu9lW9z8GE2ebNB7h5tY67snybN4AR6/Ed8xNXItwJg1eK3EixNpKoXq/k3ryLtWzpp+MCg0fZyQ7SaQDZRm4k0oyFbZAXZGgaVxvsWoBQa+ro7IhM7XS2ZzweRrv7AuzC23urqYZBH6u1WSU/pLWFiAqMPPXFInipt/r4B5g4Ugn4WpwJLrdDhqvAMLcWkWTZoizQ/ZXMTOevFMOARbqmiLy9d7kzF5mAZwaMxZW4WrMRRBfE7/jqlI8mx1L8yeDNzWZMWxN92LuEQkY/RAtis+W/JRjBsLpso/+YBakqmGtB3nSGYLOvOqDpXgNK/+Aw65wsr8IbUAjqLt0qX5smbHWRJvXXQM1SAwkysyTsS3JpiKLe/kdGdriBQEKLL5AJ8OUTtwqBiRkpXHAKUaUcko9S0UFj4c6lYxN8rMslB4wemplsytwhocaKFVzaMkVSmDOxVAkGswwNeCuWWuwDg=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(136003)(366004)(39860400002)(376002)(396003)(346002)(451199015)(6506007)(478600001)(54906003)(53546011)(6486002)(31686004)(8676002)(66946007)(6916009)(316002)(66476007)(6512007)(26005)(186003)(66556008)(4326008)(8936002)(5660300002)(2616005)(36756003)(41300700001)(4744005)(2906002)(86362001)(38100700002)(31696002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZVhWVGxPZXdTTDR5Q3A1ZTg1dmxYclQzZy92RStCMDJtSHNBa0lOVWFuZXhK?=
 =?utf-8?B?L3crWWtiNU1iaXZjWWNhS0xJdkJEdG9va2FPSFozRTMrczAxUlNMUmpsVlNa?=
 =?utf-8?B?azMxcUQ2Mk95enluYnl2cFFJUlBrMkRrRndPdkpidDkvWnIzUWx1SEw2a1ky?=
 =?utf-8?B?dWUyMEFsSzgwc1R6VWRnMTNiVStqSnhYR0pEMzhkVXQ1aVpBZ3dId1UxYS84?=
 =?utf-8?B?SlVNQWRMYjVHYkhwZ3Z6ZG1uWnNIZVZ1VnJ5WEhTWlJ3WHMyMVZaZCtQM01a?=
 =?utf-8?B?clpyQWZxbndKM1RRSFZZU3U2TDF5OG8zVlRWMEtCU2wxSzZBanV4am44Vmtr?=
 =?utf-8?B?MzkxOFQ0U0N6SzhSTU9mT1R1SkUwYkw2Q01QZUdob3VKQzRjU3B5ckV1TGli?=
 =?utf-8?B?YWh5TUNOS2VzQ2QxcGlWTVhKakx4NVRnT3BVQ2tiZmhHbjRTR3RoaHNQYWlj?=
 =?utf-8?B?ZTBaa0JlcmVTTGNIaEZmL1dYOHVXR0hQb1hqNXgxaUpPVE40VEVDekxQc3lQ?=
 =?utf-8?B?VVVDSjYvSHlqTkRKNTFoTUUrTlJ4SEg2YXJhVG9SOUgwU0Q4UUQrRFpqM1RU?=
 =?utf-8?B?N0R2QmFZWVVnUUcySlJXdFptY0VYRkc4MFJjNzhxaVN1OHp2bTNyTCt2Q2ZP?=
 =?utf-8?B?UHFoMkswdGM3SEVuSnU4dlA5OGJNUkZFdDBTWGgzK25QbzUrRm9rMjNUdWp3?=
 =?utf-8?B?djZ1b1dUZjVwRWdrK2ZnTWtqcS9KV1dqditTUXMzb2VnamVvU2xSK1RyaWJi?=
 =?utf-8?B?RFg5bk1qTnE4OGZHL2VOMlQvbVJ6bDFiSjV1RTZkYkZrejJhNVd5TjU2c0dD?=
 =?utf-8?B?Q092R1p3ZW1TVlB1QVV4bVA0S0pZTlh0RHMwdGxLd0VWdHJ4UWRsQ25pSklF?=
 =?utf-8?B?NUg5R3hDUS9USDBIb3R5cEJKMzVqcjJOSlJ5VXJwN3k4RUdaV2NBR1gzZ0E1?=
 =?utf-8?B?NGtnNHljeXptdTQzcDlhb0p1eCtEV29MS1pQTFVVc0lsQlFSMFY1MEJmallV?=
 =?utf-8?B?UDF1WTc2MUdwSUsvNnVUNGMvOWdrTUxYdFgxRXkrTWsrZGVvMFYrcFh4Y3hx?=
 =?utf-8?B?TUZObEN2Y3NPMy9YY3hSbHFMNVI0MEhwa1U5YkxQRS9RSjJzWHdXYmxjejRp?=
 =?utf-8?B?SEc5SStLRjQ5bEY0S2xwMmZBSktURUs0U0dKY1RneDkraHhtNWlaNGpJcUpO?=
 =?utf-8?B?Tnp0cE43UHpyLzJEQndaMXAvUzBsVHA3cXNhUEJ2Y2xrcVpFOURDb1FwTkhP?=
 =?utf-8?B?T0hXTy9NUjJmUTdaaXZCVURBMzJDK3F6OW00K0FpWVB3MWRSQkNHcnVBQ1pN?=
 =?utf-8?B?L0ZjZ01ZUndxd3J6Z2xJS2tZUnEwZVZqQnZ1cVlpNzdpaTBhNTNjaUlSS2w3?=
 =?utf-8?B?VXFIb3A1Y2hVdmhTaFdCMnRlaFpOR2d0Qk4xdDFTMkYrRU53VDdTU2tBUDZp?=
 =?utf-8?B?UUdSUk5MMWNsQWp1Z3lWY2huTzM0UDJHR3EwUXcrN3Rsa3ozUEtKd3BlaGtP?=
 =?utf-8?B?ZTBucC9wb0NvdlhnOEErMzlTaTllZUV2RWY3TmttVk5KZk1sVG1HZmNRbWVp?=
 =?utf-8?B?MXdLb0UrckFGdmEyUlJLZUFEQTF0b2ZzSWRINjJFcHhnQWJvWjJURFBRSXJr?=
 =?utf-8?B?M0Z3MVVTSzVSZ3NKMVlvMUd6MTY0aUcrb1pIcEFFNDAxN3daQkhadVpnR1RQ?=
 =?utf-8?B?d3VxTkdJaVNnZVpvazdReE5peCs0MVpQdUZyU1MxOHY5eWdSOEc2MFdaeEFq?=
 =?utf-8?B?T2pXWUoreE1xbVYyV0dZNGhIRU5wdzlhaDBZMHkwWENsTUZScEZ6RzMzSzNj?=
 =?utf-8?B?dWsvQWdNMFJGbEU3UVVMUlh2dENMTWJTWHlPNndpSGhPbXQvMFBtOWNGQXNT?=
 =?utf-8?B?bEhnMyt4em1BRkgvTmVraWU2cVZycmNQTDc4Q0RzWU93Z3JxMEhBQU5tMmI1?=
 =?utf-8?B?TXkrc3ZKY3JlUldzVi83S0N5eFozUCtSQ25kaTF0UTJOZStLWldGSmRMb1g0?=
 =?utf-8?B?c3k4YmhkVXFLNWVTK0hSODFnaUQ1WmswbjRyWkZkV0RTbFVEQXhoclRacEFy?=
 =?utf-8?B?b1hURC9LTUh4Vm1hMnltd21LRm9vak1yNHNQb04vZXdtRFJNTVo4TXdqTC9y?=
 =?utf-8?Q?MnjHiwJHfHbxz2MkVVDSMecDl?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b44ebab-3b62-46af-24fc-08dafae7c4eb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 13:10:57.7292
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 80cjgQaP62C0YmSwvcVEED1LvDs2+GipiT32+4rO+S25Z5FF3D63Vgo6JtqYc2iVYG3AUYiutwG1b4n98b3EHg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7393

On 20.01.2023 12:45, Andrew Cooper wrote:
> This is a global variable (actually 3, one per GUEST_PAGING_LEVEL), operated
> on using atomics only (with no regard to what else shares the same cacheline),
> which emits a diagnostic (in debug builds only) without changing any program
> behaviour.
> 
> Based on read-only p2m types including logdirty, this diagnostic can be
> tripped by entirely legitimate guest behaviour.

Can it? At the very least, shadow doesn't use p2m_ram_logdirty, but "cooks"
log-dirty handling in its own way.

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
with the last sentence above corrected (if need be: removed).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 13:19:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 13:19:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481764.746866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrIZ-0004Tn-4N; Fri, 20 Jan 2023 13:19:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481764.746866; Fri, 20 Jan 2023 13:19:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrIZ-0004Tg-1c; Fri, 20 Jan 2023 13:19:15 +0000
Received: by outflank-mailman (input) for mailman id 481764;
 Fri, 20 Jan 2023 13:19:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1qDs=5R=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIrIX-0004Ta-KY
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 13:19:13 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2041.outbound.protection.outlook.com [40.107.14.41])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06f4f664-98c5-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 14:19:11 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7866.eurprd04.prod.outlook.com (2603:10a6:10:1ef::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 13:19:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.027; Fri, 20 Jan 2023
 13:19:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06f4f664-98c5-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GBDrQSemKOQvkUTJrgHyPwg357HeTprP9W3rM20PNKJsp1RqhwGNvEKW9y0A8/eJQl+oxZD/fOGnj6Jn81LX6ZHtVJKlqG4cDf1Ti7TuOGLTHNqO4fDbSZsQ+uNimpHZNUYXUNgpbU2WSg2j3pAGhcMDEo45wJJHVfjYSQ6hgL4ATZLGyMZqSHirMWtAma74aUrWq4KXYKZH3h3BuVPSWiRsFaeAh9Oi85K6b7LeYSgy9QgqG3ftuCx6XoX1Ca2nQmPpDuTDWiFfWvae+07QE0iNRoOxv6RybMPzMdZWggbXukKWxBkYfUvbLz+F6jwyOr01Wem6uW5ng/ODMEBl2g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2VvP7C/TUCj9ivO2/VCWsZ8mJIl1GyIGDFOXUDq6QEA=;
 b=JLik6hkL04BMxpwy0tMHYsqIrQyrgzfCu+8l+n3qHOD2R4+eUv+Dsj6h9mpqUExeRnLGNyzFbGoPfaESiO3O54xIiYdPYL3wAcZNwIw+ze9VuU1+8N4hVv2XNI9UcF1l6QIXNFjqMeOxmuNE4yzj4EAMM5S0lIasPkQCx88/RxezdpGEIMG5gTG8qiBBAVOQX1Go0pFyOY8kmHQjnOsmpJFE/KA+zu4iNBT9VxZIDmjyWO7I4k9E9SDzQXGX/+Iq8V/VK+DtO5xqK8lSonaDvO8uLHJtxu9+qTOXa9u4B4CjZZC4y5IegBZZLkiT3O5+BVB9bwk8cwyw7qzZ/SSjKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2VvP7C/TUCj9ivO2/VCWsZ8mJIl1GyIGDFOXUDq6QEA=;
 b=UpepjUprJfxlgNjDa8fq6RHz0wADgnJ8QTSqdnpkEpVW/TjtRo2nr+HEjnpC5YjALE47avsXC3GSxmxyg94loSkUVo60LQDoOaDtUlLs+sV9pft5nGmlegasx9yC+JzkzGoVlh4rvdMv+fs4q8m8QKqOrIwnF3oZLzTLDlzGHFET621j9KF5pPKhGBNDcdvxapeRu+PJ5aOsPIh2L9dLgWSdtXNVpxRhd1rDVUtZdEYe9s0cgV63kkoG3AbG44/0b3a8MTENhrSvhFpOuV7+3GsIFMeCWx/lwmxVhulDz2kZ3tzCGNALA1NLkHwb0Ohv4s+caqgCAszSOe6N2BuaSA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <95d73822-5321-4c9b-04d1-8ee4f78ff35d@suse.com>
Date: Fri, 20 Jan 2023 14:19:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] tools/symbols: drop asm/types.h inclusion
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <1eaa4cce-2ef2-ca38-56d2-5d551c9c1ae9@suse.com>
 <d519b6c5-5972-ff31-c3ee-39649babde7c@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d519b6c5-5972-ff31-c3ee-39649babde7c@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0056.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7866:EE_
X-MS-Office365-Filtering-Correlation-Id: 6f88c991-f268-43e4-dfa7-08dafae8e8f1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	r+/3H7x0ClTnu2mLy8/fZd6CIleCY18ZvD6KL/ViB6EKXX7H7L6UYGFocfYZ1aNHnWvOQDOpoNcszIcMpjVqJiOZ/1dwi4Ay4+eJ4qeQLOkx2BzxhEWD3EjNrOZq9yNuQAYG+ILIyTd5QY4au0nHvXFuIFGK5tW4K6DP6LcmwxO0bxrjoUQBmZWUntafqZEMavzedrAhFvv2qN8P5OL+OebhjppIr7ElRmsM9FtoK6RGeGyJ3kLqXlegOcM4LCvWejiEobe+vXDJmdq25FCGWSiKlpoGHWo/s/SYWUbiGDqTH2Lmi9PdHNHtlB/dUu9Nmzc/xA8otJcNlgHoGENI48jfSiWcdO4tEu1Gg3pMmlhMk1BHH2i9ESY6B6iDviAz1o8wbmlRyQ49E6nG81xeD3cln6YdahLjcfDPz6OgtE6Dacfjf/zm0JkvgsSnR0OKbqnHoJGyeb1e+nrGe8DC1KeQWYU1i4b+9hCmEw8YePkHUCve2gMj5l26i6UPbrIvFSqQip8O7j6hTQoCbAi3EiDg7D8EnwVdgTdZQ3ACyGXaigJzPOMGmKkY2atbBa27g8b5m1AP2fQycfLVQ2V+HpHad+7M3qSiFBi2jmf++/4VedMnakd9d0IkksMWouY56hkgzelRiFdL/SDjXExzkHDz4IVDUSf5aQJu2fnGNVGdgN/axtgNDte4r1AiJM62M3e9sWEKMf6jbrRHvANARQrZX489QXD+qub5rYYQI84=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(396003)(376002)(136003)(346002)(39860400002)(366004)(451199015)(53546011)(6486002)(7416002)(316002)(478600001)(41300700001)(31686004)(6512007)(38100700002)(31696002)(54906003)(36756003)(4744005)(86362001)(8936002)(5660300002)(186003)(26005)(2616005)(2906002)(6506007)(66476007)(66556008)(6916009)(4326008)(8676002)(66946007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?V09PeDQyQWNkeGdpVmhpdzFLSmd6YmxPRjBKbThTN2lFQ0FCVUVGTVloR0ll?=
 =?utf-8?B?dWZVeDIwMEFJQ2RMN1p5OEI3YUV1UkU3SkR1Rzh3V1pxRGRhZGhGTEU2cDBX?=
 =?utf-8?B?QnVmVmFLY1NwY293SWtHaktISk9RYUNHR210N0lkVWFnc01Bdi9iUFdNRlhV?=
 =?utf-8?B?MXdxdk5GdlMxaUdRYUhWMzBUbWg3bm42OWxZZFZQMkdVRUE4bjVUSnZaekJZ?=
 =?utf-8?B?Rmo4ZnRDeGpiSndZdHhMejdVZklVLy9wVFlLS0dVaWhSbzd3VGlqVW8vUVZk?=
 =?utf-8?B?WHF0NDNpcTVpWUhFcTF5SzJrVmROb1k5NElwbUcybExFcG9sc2pGZ1NLVDAw?=
 =?utf-8?B?dmtrZHE2Z1pteGg5bWIyZXllWXBqZmk2WVJYLzhBVW14TkVDdU0xVmgyM1pY?=
 =?utf-8?B?ald4eHE3TlhQTlkzUzdxSW02QVljMGEyT0ZDQ0I4TWRLN2JSeFp5WndCdk52?=
 =?utf-8?B?aUJmUlpUWkw2VmRMQjRHaENCdi9mQ01mWnZyTUFoY1dtenZpQ1p5QjJsY3VK?=
 =?utf-8?B?b2VLQTdUd3RCV0VLeUxveTRiSkJZQnVuOElSTko4OUovZ3dxZ3lqL2tOZGc3?=
 =?utf-8?B?b25kc3huV2ZQbS95dzduMnRWWncyZmtuanh4REZENExZNXMraVE1N25zMVBy?=
 =?utf-8?B?OXQrMWhGUFkyT2Y1b1M0UnovZEVBZFdsTU9DNkQ2bnBMY1Bvc1hWV2VBSUVD?=
 =?utf-8?B?YnI3YXFhUXAvaXU5TEpXRDJjclA4VDV4ZERGV2Z4aWJUNXV2N1psT1pmcmpV?=
 =?utf-8?B?YVB0ckI1MjZnUEdHOS9OVy9kYWIxNjEvakorOXNIWkNzVnJ4MlZGSXFaK2Yv?=
 =?utf-8?B?aUZmUjFLVTFlZXBHNnBpWkI4NHR4VEdINkt3S3ZaQ25lUVQ1UE5qUjEvWmR4?=
 =?utf-8?B?aktrbVNiNU1Wd2EwdC9UTmd1TVgwbE1lb05WVmNtVC9KbDN6b0pSREJ3ZWd0?=
 =?utf-8?B?b3ZwOXJVUExWZ1B6Z0pPbEFoWDQydWpEQWVUcDV6S3dyLytjUzQ1dTR5ZnNP?=
 =?utf-8?B?U3h0QlE4dVc3RzlLUGRHelFCeGZscDl1bGREU0lHY1NHaW9oK0JZOXNTWVoz?=
 =?utf-8?B?TTVmd1ZHSzNCRWpGWGNzQVVMVkgzUm9rcDJTTmlpWDVIdHdpT0hLRG1VWDR6?=
 =?utf-8?B?SVFXanloTjNaK1FZR2FXcnBJNWtTUm1lWmxuVEZpNmJOWE5MM29ra25sV0tI?=
 =?utf-8?B?YmNoeGdXR1RxaXUzKzZIdlFWSDRkUlBpYTd1ek5zMHE0dHZoRk40dTluSUs4?=
 =?utf-8?B?Y2wzOFpxakJmNVYrRFAxeWNRZktZeE12eEIvNklyVWtHVzljaFRJK0xpb29P?=
 =?utf-8?B?TFUzWkRPTnJOWkpONEMwWkE2YjZCZS9NMjNlREsxK1hRdG9pdDVzYXprZ2cy?=
 =?utf-8?B?bFloSFVDTXBGdHhGQlgxd3RLUHRrNmpvQU9SelZoV09mV05UdUp3VmtyMWQ4?=
 =?utf-8?B?SWdhcXhaUXBHV2M0NEZhRC9oVi9RZEkxUCtVN2NCVkZxczJWbllWYjZxWEha?=
 =?utf-8?B?N2VJU0ZwdnBBVlhDdFdrdzBVK2UrZ3NoOUd5ZlhuWGM3aVFaZ3RER2dGR3V0?=
 =?utf-8?B?cmdjRWJUZ0NRQS9mVTFZZ2tVU0E2Y1g5ZnpRSmhuSlk1UDlyTU5oU1NKaDkw?=
 =?utf-8?B?VkdVYVpsb3ZyWjNxQWwreTBmQ2hkUzBHcjI1dkNIK2xUa3U1NDFUUDQ1VGJ4?=
 =?utf-8?B?ZVk2NWhkTGJENStZeHdsRHlJVzJGU2Izd2o2OUk2TWlpVi9SUEwyak5mNjR2?=
 =?utf-8?B?WmhpQ0tEelpQczNjZFU1dGxaRFR1TXdhd1U2ZTk4SmxzVDRjclNWWVdzZjdl?=
 =?utf-8?B?VXFoOUtUaHdhbEFDUE1jOXdkZlo1N045dFNkSHROR2kzeEUrd1VZNVZsLzdD?=
 =?utf-8?B?aDRCeFFpMnJTOUZ2M1RWMnpvQ091eTkxcyt0cW0xWFE2eGtLaXN5SDF1Y3Fp?=
 =?utf-8?B?RHBmdTQwNkwxZkxrL0JUc1dtUFFvLzF3UGN3SE1nZnJMYnpueHFTaDF2c01o?=
 =?utf-8?B?dkYxdUhrZFRzK2ZUaFpUQXpIM1NrTk9GUTluMjQwakM3dktMenE2and6UkQ3?=
 =?utf-8?B?c3VqMFRsd004UWNScXp2b1JvNVlIYThaNHYxOVVrdWFnTitQR211bllhL0M4?=
 =?utf-8?Q?pdFCC7V9efXRtTFT/enC5WoM2?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f88c991-f268-43e4-dfa7-08dafae8e8f1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 13:19:07.5899
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8GdrAIFxcP38wPEoFVPzI2uz4gfT7ia01yYmuTZswP0mVnU2oWOgsziaxCDxB9WYng8cvoiwIofCbDOY4fHAAg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7866

On 20.01.2023 12:24, Julien Grall wrote:
> On 20/01/2023 08:40, Jan Beulich wrote:
>> While this has been there forever, it's not clear to me what it was
>> (thought to be) needed for.
> 
> asm/types.h used to be directly included in x86 assembly files. This was
> dropped by commit 3f76e83c4cf6 "x86/entry: drop unused header inclusions".

Just to clarify: The statement in the description is about $subject,
not ...

>>  In fact, all three instances of the header
>> already exclude their entire bodies when __ASSEMBLY__ was defined.
>> Hence, with no other assembly files including this header, we can at the
>> same time get rid of those conditionals.

... this further aspect. I can certainly see why the guards may have
been there (without having gone looking for when the last such use may
have disappeared) beyond the bogus use by the tool.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 13:21:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 13:21:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481769.746877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrKT-0005ou-FI; Fri, 20 Jan 2023 13:21:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481769.746877; Fri, 20 Jan 2023 13:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrKT-0005on-Cd; Fri, 20 Jan 2023 13:21:13 +0000
Received: by outflank-mailman (input) for mailman id 481769;
 Fri, 20 Jan 2023 13:21:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBPl=5R=8bytes.org=joro@srs-se1.protection.inumbo.net>)
 id 1pIrKS-0005od-Ad
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 13:21:12 +0000
Received: from mail.8bytes.org (mail.8bytes.org [85.214.250.239])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4b99257b-98c5-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 14:21:06 +0100 (CET)
Received: from 8bytes.org
 (p200300c27714bc0086ad4f9d2505dd0d.dip0.t-ipconnect.de
 [IPv6:2003:c2:7714:bc00:86ad:4f9d:2505:dd0d])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (2048 bits) server-digest
 SHA256) (No client certificate requested)
 by mail.8bytes.org (Postfix) with ESMTPSA id 583BB262AD8;
 Fri, 20 Jan 2023 13:43:51 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b99257b-98c5-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=8bytes.org;
	s=default; t=1674218631;
	bh=sZNmqKbvZoiovCzlI6WBiaIn/1uSYIe/nS6jVFFARMs=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=IPQE8Zu4cQksOFQUyswavYY4LJUrlMt8lxJm+3ahDYd+mZFXwGtyKeIQiaTg9xx6t
	 XoR/R4zuZsPcWpd9otIOIqrWFKbf0GyoVlPAjf7IhQpqGh1zw08+91/R5sFqDlxF36
	 6X3Iigbgw0xZG1NFBkWTE/fIj3zY1tK+FSVOC7yBrZtGOWxNW+nsLLmSbs+C4H1kP3
	 mk9MA5YIesE4zA1Aabvkey/nokvwF4sQaoft0Cm4GSIXGJ1MS3iEJ2sPjAATJGGzkB
	 SZ5XVMlEyaoXBpIi4mwhOn4QXQHrOE8MkL0RbA3pDV9iCa5bFlLis+fL7yqLWrCJNZ
	 IVfun9JOL6CwQ==
Date: Fri, 20 Jan 2023 13:43:50 +0100
From: =?iso-8859-1?Q?J=F6rg_R=F6del?= <joro@8bytes.org>
To: Borislav Petkov <bp@alien8.de>
Cc: Peter Zijlstra <peterz@infradead.org>, x86@kernel.org,
	Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit
Message-ID: <Y8qMhhEOA6MuGMxm@8bytes.org>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.649204101@infradead.org>
 <Y8lDN73cNOmNuciV@zn.tnic>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Y8lDN73cNOmNuciV@zn.tnic>

On Thu, Jan 19, 2023 at 02:18:47PM +0100, Borislav Petkov wrote:
> So, can we do that C-bit verification once on the BSP, *in C* which would be a
> lot easier, and be done with it?
> 
> Once it is verified there, the bit is the same on all APs so all good.

Yes, I think this is safe to do. The page table the APs will use to boot
already has the correct C-bit set, and the position is verified on the
BSP. Further, the C-bit position is a hardware capability, and there is
no chance the APs will have it at a different position than the BSP.

Even if the HV is lying to the VM by faking CPUID on the APs, it wouldn't
matter, because the position is not read again on the APs.

Regards,

	Joerg


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 13:21:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 13:21:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481771.746887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrKg-00068Z-QS; Fri, 20 Jan 2023 13:21:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481771.746887; Fri, 20 Jan 2023 13:21:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrKg-00068S-LW; Fri, 20 Jan 2023 13:21:26 +0000
Received: by outflank-mailman (input) for mailman id 481771;
 Fri, 20 Jan 2023 13:21:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIrKf-00067y-TR; Fri, 20 Jan 2023 13:21:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIrKf-0005vX-Qq; Fri, 20 Jan 2023 13:21:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIrKf-0001Rg-Ar; Fri, 20 Jan 2023 13:21:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIrKf-00040Q-AN; Fri, 20 Jan 2023 13:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+QaRzpR0Qr3ZNKCpkyiEwJKZuRzbjaO2trJ4oO8rSxg=; b=NywzID0VfQWmznrL1D2dsAGTRj
	7C0EEvkDkqL9Zyk5muMWKTnWW5/TAYdwRAnJVLOCY3xOSZgorkoq7QwZ9uWmW5sjnbRw7kLSh7K09
	tKnJ1F/T74MzNMOlsn/FjVIbG5OvyIvm3yt2SFtFxZl3MtBjHRzMBEGOtP6bfdeqjzKE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175991-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175991: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=239b8b0699a222fd21da1c5fdeba0a2456085a47
X-Osstest-Versions-That:
    qemuu=7ec8aeb6048018680c06fb9205c01ca6bda08846
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 13:21:25 +0000

flight 175991 qemu-mainline real [real]
flight 175996 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/175991/
http://logs.test-lab.xenproject.org/osstest/logs/175996/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 175977

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175977
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175977
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175977
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175977
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175977
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175977
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175977
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175977
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                239b8b0699a222fd21da1c5fdeba0a2456085a47
baseline version:
 qemuu                7ec8aeb6048018680c06fb9205c01ca6bda08846

Last test of basis   175977  2023-01-19 09:20:40 Z    1 days
Testing same since   175991  2023-01-20 00:10:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Cédric Le Goater <clg@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hoa Nguyen <hoanguyen@ucdavis.edu>
  Laurent Vivier <laurent@vivier.eu>
  Li-Wen Hsu <lwhsu@lwhsu.org>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yuval Shaia <yuval.shaia.ml@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 900 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 13:37:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 13:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481784.746900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrZS-00081J-98; Fri, 20 Jan 2023 13:36:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481784.746900; Fri, 20 Jan 2023 13:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrZS-00081C-64; Fri, 20 Jan 2023 13:36:42 +0000
Received: by outflank-mailman (input) for mailman id 481784;
 Fri, 20 Jan 2023 13:36:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8tu/=5R=citrix.com=prvs=37768f290=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIrZR-000816-DH
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 13:36:41 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 76d5cf46-98c7-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 14:36:39 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76d5cf46-98c7-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674221799;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=U2vyEDsXtFKkNXikJRmtfrMERJCdL0+Q0DPDGdG9Ax4=;
  b=SOjgmM7/42qDXo0OE5nqzvv1kVZBoCJW3xgGDVY8GT9+T5C1dzVCs8ES
   OBhpgWEKJ9oSAoYU9k2hsWnu7MmhT4L7nxY6PKPvK78SuqCxujl31LN2C
   mepF+byOtgjgJH29sWmZUKxMRF7j88SucWazQFIDvnLa7LejB7VXGVE9R
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93554171
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:/enUAaNMs/M6mtTvrR3cl8FynXyQoLVcMsEvi/4bfWQNrUongTJRz
 GAaW2CAb/3eZGb2eN5wOtyw8BsD7JLVytJgTQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQA+KmU4YoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9Suv3rRC9H5qyo42tB5ARmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0sh3X1NFr
 tYGEXMuZzrTmuCp66/ja9A506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUo3RGJgJxxnBz
 o7A10bBOhsqFdDP8D2m4mu+pcHX3iikBbtHQdVU8dY12QbOlwT/EiY+RVa95PW0lEO6c9ZeM
 FAPvDojq7Ao806mRcW7WAe3yFaIpgUZWsZQO+Qi5RuR17HP5AKEGmkDSCUHY9sj3PLaXhRzi
 AXPxYmwQ2Uy7vvMEyn1GqqoQS2aFyhLH2RZTzE9DigMyYn+op4Yk0rud4M2eEKqteEZCQ0c0
 hjT8ndi3uVK1pVbv0mo1QuZ2mzx//AlWiZwv1yKBTz9s2uVcab/P+SVBU7nAeGsxWpzZn2Ip
 zA6lseX94ji5rndxXXWEI3h8FxEjstp0QEwYnY1RfHNDxz3pxaekXl4uVmS3ntBPMceYiPOa
 0TOow5X75I7FCL0MvMuPtnrUpp7k/mI+THZuhf8N4omX3SMXFXfoHEGibC4gQgBb3TAYYlgY
 MzGIK5A/F4RCLh9zSreegvu+eZD+8zK/kuKHcqT503+gdKjiIu9Fe9t3K2mMrpos8tpYWz9r
 75iCid9408OCbyuMnWNqOb+7zkidBAGOHw/kOQPHsbrH+asMDtJ5yP5qV/5R7FYog==
IronPort-HdrOrdr: A9a23:zBFy96wy4LkwDUMrDSXDKrPx0ugkLtp133Aq2lEZdPULSKGlfp
 GV9sjziyWetN9IYgBepTlEAtjyfZvdnaQFhrX5To3SIjUO2VHYZ72KiLGP/9SOIVyEygcw79
 YET0E6MqyNMbEYt7ex3ODbKadb/DDvysnBuQrH9RlQpENRGtxdBmxCe2Cm+zhNNXF77O0CZe
 OhD6R81l6dkEAsH4iG7iBvZZmTm/T70L72axsPBxoq8yiJly6l5YT7HR+RwwsEXykK5bs562
 DKnzXj4K+uqeu2x3bntlP73tB7idHlwttGCNetjtEPKjLwogy0ZIJnMofy3gwdkaWC+VwumN
 nJrwwBO91p63TNW2mprRzmy2DboVUTwk6n5U6ThHPipcDjfSk9GtpljZ9UdRHIgnBBgDgw6t
 MP44pX36AnRS/orWDY3ZzlRhtqnk27rT4LlvMStWVWVc8zeaJctosW+WJSCdMlEDjh4I4qPe
 FyBIWEjcwmNm+yXjT8hC1C0dasVnM8ElOvRVUDgNWc13x7jW101EwRwe0YhzMl+IgmQ5dJyu
 zYOuBDla1ITOURcaVhbd1xBfefOyjoe1bhIWiSKVPoGOUuPG/MkYf+5PEP6OSjaPUzvewPcM
 CqajxlnF93X3irJdyF3ZVN/ByIan66Ry7RxsZX4IU8kqHgRZLwWBfzFWwGoo+FmbEyE8fbU/
 G8NNZ9GPn4N1bjHo5PwknXR4RSE38DS8cY0+xLAW5mmvi7drECi9arKMo7ZYCdSArMY1mPRE
 friQKDf/mp7SiQKwvFaVbqKj6dJXAWO/pLYefnFqMoufkw37Z3w30oYY7Q3LDLFdRziN15QK
 I3GsKWrkqanxj0wY+a1RQqBvKqZnwlqYkJpBtx1Hk32gXPAOg+Uv2kCBJv9WrCPBN5UsXQCR
 VSo1Rs9cuMXtyt+Rw=
X-IronPort-AV: E=Sophos;i="5.97,232,1669093200"; 
   d="scan'208";a="93554171"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>
Subject: [XEN PATCH] build: fix building flask headers before descending in flask/ss/
Date: Fri, 20 Jan 2023 13:36:26 +0000
Message-ID: <20230120133626.55680-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Unfortunately, adding a prerequisite to "$(obj)/ss/built_in.o" doesn't
work because we have "$(obj)/%/built_in.o: $(obj)/% ;" in Rules.mk.
So make is allowed to try to build objects in "xsm/flask/ss/" before
generating the headers.

Adding a prerequisite on "$(obj)/ss" instead fixes the issue, as that
is the target used to run make in this subdirectory.

Unfortunately, that target is also used when running `make clean`, so
we need to ignore it in that case. $(MAKECMDGOALS) can't be used here
as it is empty, but we can guess which operation is being done by
looking at the list of loaded makefiles.

Fixes: 7a3bcd2babcc ("build: build everything from the root dir, use obj=$subdir")
Reported-by: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/xsm/flask/Makefile | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
index d25312f4fa..2d24346ee3 100644
--- a/xen/xsm/flask/Makefile
+++ b/xen/xsm/flask/Makefile
@@ -16,7 +16,11 @@ FLASK_H_FILES := flask.h class_to_string.h initial_sid_to_string.h
 AV_H_FILES := av_perm_to_string.h av_permissions.h
 ALL_H_FILES := $(addprefix include/,$(FLASK_H_FILES) $(AV_H_FILES))
 
-$(addprefix $(obj)/,$(obj-y)) $(obj)/ss/built_in.o: $(addprefix $(obj)/,$(ALL_H_FILES))
+# Add the prerequisite for descending into the ss/ folder only when not
+# running `make clean`.
+ifeq ($(filter %/Makefile.clean,$(MAKEFILE_LIST)),)
+$(addprefix $(obj)/,$(obj-y)) $(obj)/ss: $(addprefix $(obj)/,$(ALL_H_FILES))
+endif
 extra-y += $(ALL_H_FILES)
 
 mkflask := $(srcdir)/policy/mkflask.sh
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 13:43:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 13:43:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481789.746910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrfL-00010U-Ub; Fri, 20 Jan 2023 13:42:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481789.746910; Fri, 20 Jan 2023 13:42:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrfL-00010N-Qc; Fri, 20 Jan 2023 13:42:47 +0000
Received: by outflank-mailman (input) for mailman id 481789;
 Fri, 20 Jan 2023 13:42:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIrfK-00010H-4d
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 13:42:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIrf9-0006GN-8s; Fri, 20 Jan 2023 13:42:35 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[10.95.149.154]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIrf8-0007fj-VX; Fri, 20 Jan 2023 13:42:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=qJeBhOWE3Lj0DaWJ9fcnnSccFHTRm2OxoFXO8lOO3I4=; b=DK4/3DiDR8uTfCUnzX0gTWmlpw
	X53/Ig8sNjOUVVBX2yK6JtI/Qsj3EgaKIy8h13fi+3Bmy55tB/N+GN6y33KfSCpoP9VRFtAgNsGd5
	+c66lO+auSAkwFNMSoLga6pl9I8Ect0zkfutH/4gBHGazbqPq8QfbSThFbLI5W7AVql4=;
Message-ID: <b85ff051-ca1b-4b38-3a18-b781b3b85961@xen.org>
Date: Fri, 20 Jan 2023 13:42:32 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] tools/symbols: drop asm/types.h inclusion
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <1eaa4cce-2ef2-ca38-56d2-5d551c9c1ae9@suse.com>
 <d519b6c5-5972-ff31-c3ee-39649babde7c@xen.org>
 <95d73822-5321-4c9b-04d1-8ee4f78ff35d@suse.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <95d73822-5321-4c9b-04d1-8ee4f78ff35d@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 20/01/2023 13:19, Jan Beulich wrote:
> On 20.01.2023 12:24, Julien Grall wrote:
>> On 20/01/2023 08:40, Jan Beulich wrote:
>>> While this has been there forever, it's not clear to me what it was
>>> (thought to be) needed for.
>>
>> asm/types.h used to be directly included in x86 assembly files. This was
>> dropped by commit 3f76e83c4cf6 "x86/entry: drop unused header inclusions".
> 
> Just to clarify: The statement in the description is about $subject,
> not ...
> 
>>>   In fact, all three instances of the header
>>> already exclude their entire bodies when __ASSEMBLY__ is defined.
>>> Hence, with no other assembly files including this header, we can at the
>>> same time get rid of those conditionals.
> 
> ... this further aspect. I can certainly see why the guards may have
> been there (without having gone look for when the last such use may
> have disappeared) beyond the bogus use by the tool.

Ah! Thanks for the clarification. I indeed misinterpreted the first 
sentence.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 13:46:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 13:46:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481794.746919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrib-0001cl-Bq; Fri, 20 Jan 2023 13:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481794.746919; Fri, 20 Jan 2023 13:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrib-0001ce-8S; Fri, 20 Jan 2023 13:46:09 +0000
Received: by outflank-mailman (input) for mailman id 481794;
 Fri, 20 Jan 2023 13:46:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIriZ-0001cY-AI
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 13:46:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIriZ-0006Oq-9b
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 13:46:07 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[10.95.149.154]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIriZ-0007uq-2X; Fri, 20 Jan 2023 13:46:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=PkKClvn0TmK4UC9rmACrc8YkEzZI50VmbuTaob+NRsg=; b=r/naC4uVnoNY7B8Gz/LdDjQPjA
	IlqAkcMvduqdqfPrjoE6sYXb9H75UaEF0o25ZKkIey7FwZsce1ClrXsN6lF0OUO800x9agFPj+yNu
	yhl3OaqfZ+soBlTIFPp4j/oGdjTiP/+IAE3Y66bPjbRwHA2N4+jkLe28VPhyTwhGRZ/c=;
Message-ID: <8c23e40b-5892-afb0-15d0-70bad98f640d@xen.org>
Date: Fri, 20 Jan 2023 13:46:05 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 0/3] xen/arm: Frametable hardening and config.h cleanup
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230117114332.25863-1-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117114332.25863-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 17/01/2023 11:43, Michal Orzel wrote:
> The first patch fixes a bug due to incorrect DIRECTMAP_SIZE calculation.
> 
> The second patch removes unused macro FRAMETABLE_VIRT_END.
> 
> The third patch hardens setup_frametable_mappings by adding a sanity check
> for the size of struct page_info and panicking if the calculated size of
> the frametable exceeds the limit.
> 
> Sent together for ease of merging.
> 
> Michal Orzel (3):
>    xen/arm64: Fix incorrect DIRECTMAP_SIZE calculation
>    xen/arm32: Remove unused macro FRAMETABLE_VIRT_END
>    xen/arm: Harden setup_frametable_mappings

I have committed the series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 13:55:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 13:55:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481799.746930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrre-00037o-6o; Fri, 20 Jan 2023 13:55:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481799.746930; Fri, 20 Jan 2023 13:55:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIrre-00037h-44; Fri, 20 Jan 2023 13:55:30 +0000
Received: by outflank-mailman (input) for mailman id 481799;
 Fri, 20 Jan 2023 13:55:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r6fY=5R=csail.mit.edu=srivatsa@srs-se1.protection.inumbo.net>)
 id 1pIrrc-00037b-Au
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 13:55:28 +0000
Received: from outgoing2021.csail.mit.edu (outgoing2021.csail.mit.edu
 [128.30.2.78]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 167c467d-98ca-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 14:55:25 +0100 (CET)
Received: from [198.134.98.50] (helo=srivatsab3MD6R.vmware.com)
 by outgoing2021.csail.mit.edu with esmtpsa (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.95)
 (envelope-from <srivatsa@csail.mit.edu>) id 1pIrrV-00Fo3V-LQ;
 Fri, 20 Jan 2023 08:55:21 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 167c467d-98ca-11ed-91b6-6bf2151ebd3b
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
 state
To: Thomas Gleixner <tglx@linutronix.de>, Igor Mammedov <imammedo@redhat.com>
Cc: linux-kernel@vger.kernel.org, amakhalov@vmware.com, ganb@vmware.com,
 ankitja@vmware.com, bordoloih@vmware.com, keerthanak@vmware.com,
 blamoreaux@vmware.com, namit@vmware.com,
 Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>,
 "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
 "Paul E. McKenney" <paulmck@kernel.org>, Wyes Karny <wyes.karny@amd.com>,
 Lewis Caroll <lewis.carroll@amd.com>, Tom Lendacky
 <thomas.lendacky@amd.com>, Juergen Gross <jgross@suse.com>, x86@kernel.org,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20230116060134.80259-1-srivatsa@csail.mit.edu>
 <20230116155526.05d37ff9@imammedo.users.ipa.redhat.com> <87bkmui5z4.ffs@tglx>
From: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Message-ID: <ecb9a22e-fd6e-67f0-d916-ad16033fc13c@csail.mit.edu>
Date: Fri, 20 Jan 2023 05:55:11 -0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.12.0
MIME-Version: 1.0
In-Reply-To: <87bkmui5z4.ffs@tglx>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit


Hi Igor and Thomas,

Thank you for your review!

On 1/19/23 1:12 PM, Thomas Gleixner wrote:
> On Mon, Jan 16 2023 at 15:55, Igor Mammedov wrote:
>> "Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:
>>> Fix this by preventing the use of mwait idle state in the vCPU offline
>>> play_dead() path for any hypervisor, even if mwait support is
>>> available.
>>
>> if mwait is enabled, the guest very likely has cpuidle
>> enabled and using the same mwait as well. So exiting early from
>> mwait_play_dead() might just punt the workflow down to:
>>   native_play_dead()
>>         ...
>>         mwait_play_dead();
>>         if (cpuidle_play_dead())   <- possible mwait here
>>                 hlt_play_dead();
>>
>> and it will end up in mwait again, and only if that fails
>> will it go the HLT route and maybe transition to the VMM.
> 
> Good point.
> 
>> Instead of a workaround on the guest side,
>> shouldn't the hypervisor force a VMEXIT on the vCPU being unplugged when
>> it's actually hot-unplugging it? (e.g. QEMU kicks the vCPU out of guest
>> context when removing it, among other things)
> 
> For a pure guest side CPU unplug operation:
> 
>     guest$ echo 0 >/sys/devices/system/cpu/cpu$N/online
> 
> the hypervisor is not involved at all. The vCPU is not removed in that
> case.
> 

Agreed, and this is indeed the scenario I was targeting with this patch,
as opposed to vCPU removal from the host side. I'll add this clarification
to the commit message.

> So to ensure that this ends up in HLT something like the below is
> required.
> 
> Note, the removal of the comment after mwait_play_dead() is intentional
> because the comment is completely bogus. Not having MWAIT is not a
> failure. But that wants to be a separate patch.
> 

Sounds good, will do and post a new version.

Thank you!

Regards,
Srivatsa
VMware Photon OS


> Thanks,
> 
>         tglx
> ---        
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 55cad72715d9..3f1f20f71ec5 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1833,7 +1833,10 @@ void native_play_dead(void)
>  	play_dead_common();
>  	tboot_shutdown(TB_SHUTDOWN_WFS);
>  
> -	mwait_play_dead();	/* Only returns on failure */
> +	if (this_cpu_has(X86_FEATURE_HYPERVISOR))
> +		hlt_play_dead();
> +
> +	mwait_play_dead();
>  	if (cpuidle_play_dead())
>  		hlt_play_dead();
>  }
> 
> 
>   
> 


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 14:11:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 14:11:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481805.746939 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIs6e-0005at-Fw; Fri, 20 Jan 2023 14:11:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481805.746939; Fri, 20 Jan 2023 14:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIs6e-0005am-DI; Fri, 20 Jan 2023 14:11:00 +0000
Received: by outflank-mailman (input) for mailman id 481805;
 Fri, 20 Jan 2023 14:10:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIs6d-0005ae-GR
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 14:10:59 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 42061bdd-98cc-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 15:10:58 +0100 (CET)
Received: from mail-bn8nam12lp2172.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 09:10:42 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO3PR03MB6806.namprd03.prod.outlook.com (2603:10b6:303:166::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 20 Jan
 2023 14:10:38 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 14:10:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42061bdd-98cc-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674223858;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=YCF5JYCTY1lZQy8QZBZuNbS+nNL1neUa/bFFIffrLEU=;
  b=dqaQJy8N9wkel0mQboAJBfgBSyLfEi+ZzpUMfcE3DsyUeQeVp5qhbPlL
   U2QxkZqJuNNGrEtAfvuV9Lgd7pFTho4dG8MwXaXRge5b4eXr+HFA4wnOd
   e7svtaCTPFnfdGkys6rMmHzESBgsDb3mkij/glG4uOyh/ZT5vkn7ExH6z
   g=;
X-IronPort-RemoteIP: 104.47.55.172
X-IronPort-MID: 92425885
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:WHCF367HQET6zrrQxGIefwxRtAnGchMFZxGqfqrLsTDasY5as4F+v
 jAdXTyHb63fY2b2KtFwa9u0oUkGvZLSzINqQAc6r39kHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraBYnoqLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+45wehBtC5gZlPakR5AeE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m3
 6U0ChcEX0y6raGE+6KFdsBO2O4tFZy+VG8fkikIITDxK98DGMqGaYOaoNhS0XE3m9xEGuvYa
 4wBcz1zYR/cYhpJfFAKFJY5m+TujX76G9FagAvN+exrvC6OkUooj+KF3Nn9I7RmQe18mEqCq
 32A1GP+GhwAb/SUyCaf82LqjejK9c/+cNNISOflpq436LGV7lczGkUXf3S3mqOGtFKzGP5SF
 XIlojV7+MDe82TuFLERRSaQonSJoxodUNp4CPAh5UeGza+8yxmdLngJSHhGctNOnNM3QBQ62
 1nPmMnmbRR/vbvQRX+D+7O8qTKpJTNTPWIEfTUDTwYO/5/kuo5bs/7UZtNqEarwhNulHzj1m
 mqOtHJn2O9VitMX3aKm+1yBmyirupXCUg8y4EPQQ36h6QR6IoWiYuRE9GTm0BqJF67BJnHpg
 ZTOs5P2ADwmZX1VqBGwfQ==
IronPort-HdrOrdr: A9a23:k9uoOKzKUUByodcmNgIKKrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-IronPort-AV: E=Sophos;i="5.97,232,1669093200"; 
   d="scan'208";a="92425885"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Zn+wwwIlJjdZqKgVHsyezYFldOlqljPDRWHtUOH1+soVFanXeYY8I4Wa9aexSPexs+nulAh3FHg5SMj4HTZDl9px+Pm2tpiAA+DJ6v6AYniSsbLHn3MUvzrusHSsfVa2WX8AfTmNltxf03H28zr/eQPfaymtU2z0lvEOpgD8wJwk9zKSyzAfW0v7bO74hL+kHlG66AUS3zIWDAQm9D0gJlEtPG0nwYs68e7nDV8Mj+EZCjjt9h3w/HxdY0BqzlVL6hgFH64K0mFxS1ecDPseJZ7DSIZgPLVI0dR9bRxGiBlkZ3hXZf3VFBg6Z3Rt0FeSum4UUHWc671d8syGVNGuBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YCF5JYCTY1lZQy8QZBZuNbS+nNL1neUa/bFFIffrLEU=;
 b=Isx3XUcavrUgxSuMfkrdH0Lhe657Cgo6ydwVo/fhwFpWqs+ZAmlmI4TiEVK60RoLKiBRyXZwXz9XwbdqFEjTCBIGWJyjrgixm8GkvUd+IcPqterl60/j2p1r+F2MWD0d5fKERJysWCyqyPtDi5xP64jLR/0Z0mpthGcevky+GCcD66WSu6dzTQXAGlKWKxLYH4uJv1xhUNazp+XSmrxI7T0YShBS6idDw4iaMgVTnIjNwXFHQRIICSE5sNWVz2rEVz6HtcSUh0lrmERmMCWXGUdKqtpXwfrpjvQ0rXVbZ+SL/o0g8O0N1TW10fr4H6tL/Oe10tCO4WLY2l76k87I6g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YCF5JYCTY1lZQy8QZBZuNbS+nNL1neUa/bFFIffrLEU=;
 b=By4Ax5TBehZQBXN+049v7BOoC2KLeZfrekCSALh5kdQhbUKsGxb91BZ7LMlT7I3qeYpb2iYhVDe51snAW/MvbymRbmQWPI3ElG11AaxDVj9Vg2m+mbHLr+6EhjGwndQr8ct0BW3c0bnqGkQS/65ipAGjJMWW7s5eTDNMTh5+1Qs=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George
 Dunlap <George.Dunlap@citrix.com>, "Tim (Xen.org)" <tim@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Thread-Topic: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Thread-Index: AQHZLMTJYmmsHN9huU+OUU/Zg+9KvK6nR7kAgAAQrgA=
Date: Fri, 20 Jan 2023 14:10:38 +0000
Message-ID: <139c1d03-2cd8-a7c3-4f79-fbdd5e85c712@citrix.com>
References: <20230120114556.14003-1-andrew.cooper3@citrix.com>
 <f530ddfc-8f97-b913-e838-58cc352f6372@suse.com>
In-Reply-To: <f530ddfc-8f97-b913-e838-58cc352f6372@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CO3PR03MB6806:EE_
x-ms-office365-filtering-correlation-id: 68d0b339-93b4-4e46-6115-08dafaf01b7f
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 pTEaAAA8hxDTnL1fBODVVLtwle+YxwF7DS7px3gRzFvzj/bQmljaKErQ+E+9Ky2U4fRm5P1u3pb4cLRB+K2qXuA9qgqbQGqlQGEFvVSG7F/RFTfkVVGr9afT7GedzA6Xb6i7tsH/tHsDSObuILroTA4kZVa+V8Um7ztIoQ9cC9ATImIAAeg2Ubm0ija3v868pRtZKt8Q/wwQx4cID8F1Xrgzc0mOsejca1vjyKk3Av/KtzFsJGrsOLAbfntae6HXuHIf2vEX7haegmq0N18xb0/9Jns3lBIxbhDZe6FPkGLvDVmhU8cGFeqoSWzI6wM6mismKImsE3EsDVOFD6+DFltvtkPma54aRks3f9rPz8OmqSKOaNWPPz0Gy0mHtTBeZ50WQ8wTRwgjz0WLk8QhOyIjZUP05vvuP4HGBZBWrBRxT0rBw7SiXG/wkVvejxNo8V+oAtQ+LuLWrmvwI9uiCTkHQMpcTIh9J0fLw5CJOmIzIaeZdGGwK2aNFE7PDP9Z3a477GssWdqqvIs4q7iH29gfCNlkdarHvJnJtPECPleZsIlXW4vaQ3S7vtNGW6Q6q4a0whKv8ZrW2e5Kja/qXCCwtxOP4KNorSa+xWC1kHBXM0zTAX6zZvWk9wW8k19GlRh1hpK970MONoiyGzgNxEOprsBggiDzOgHFN5B8hnE0U73chV/2MXRUk+lNJyklL6JkLXKQb4QDSzMg344XGEi6K0Q+fAxsYB7iFtSrCS2wTGZbgCRd14BBKSzYb9gZ94NO+8icrD2XMcAF/XxTmw==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(346002)(39860400002)(366004)(136003)(396003)(376002)(451199015)(38100700002)(66476007)(83380400001)(82960400001)(76116006)(91956017)(66556008)(86362001)(64756008)(8936002)(122000001)(2906002)(31696002)(66446008)(6916009)(5660300002)(478600001)(66946007)(4326008)(41300700001)(6512007)(26005)(2616005)(6506007)(186003)(53546011)(54906003)(6486002)(316002)(38070700005)(8676002)(71200400001)(31686004)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?ME85cklrbWd6a1hhaEFQNHpFTmF3WlJLQjFuamkxZ2xnZ0oyNWlkd3ZMUFpZ?=
 =?utf-8?B?TjFxY1c1cUtIVEZid1Y4MDJNd3hmRFVOV1ZaNWxHd1JGWmdwQUI0OXBvLzNn?=
 =?utf-8?B?N2FxNnJWVzJnZDZwSm1rbTJhVlZHbkRBSU92VFlhdXIvNzVGRll4bmdTbXZC?=
 =?utf-8?B?STZ4c2IzRTRxenJOTHZlL3luUE5ZOW55aWNWNi9Eck1LRnczbFRUWUtYOFV5?=
 =?utf-8?B?Yk5wOWowRjVJK0VzTlRoS1duUHYyVnJsUGorTllkcjc0SWRJcmZSUmd5b3R0?=
 =?utf-8?B?V1hqc1BVY1pRWkdob2RtTEpJMTNDYW9LZkdoeUlTSU5uNW9VQU04amg1OEx4?=
 =?utf-8?B?cTY5RnU2ZVJHUU05YlBCTGt0am1MV0VESkZINWI2MStOcklHNnM3eVg3UEMr?=
 =?utf-8?B?dndrSy9ydThGa3IzRXRPZG5IL2lqbXBZRGZ1OTdxd1oyWU5QUUlRSG9JR2dY?=
 =?utf-8?B?Sk92OXJMNFdUVnpuakRNK041VlRUNnBudDhzOWJURW9pd1Vyams2dG40REJR?=
 =?utf-8?B?akZlWkphcXBIN0I4c0JpK2doc085ZGE3WmdOY21mYlVSNUpzMWtlYmZyb2sx?=
 =?utf-8?B?N01pTGdmRWwvd1p0Ymd0L0pRM3o3eWJZMThRcmd0R1VDbjZWRVVxcTNiR3hv?=
 =?utf-8?B?MVNiSjBvRkVZdmMzbVBBTS9TUVRGNk80MENKV1R2RDlBcVpoU2JWZm42SDNV?=
 =?utf-8?B?L2ZXWmlBZndmWnpIL1JERk9TRk5rNUI2Y3RHMUlHbDNYRk54SEJ2cmpHQWZH?=
 =?utf-8?B?d0xFYk82U3BNS2JjTHU3ZTdFaVFvVHJVVXE3K1hOQ2xGWkRhalg0d2JjbkZS?=
 =?utf-8?B?emF5Y25rbGhOVzJjUExlUStESGRhanMzczRDQnYxQnhXekVtbUsybnRlQXFH?=
 =?utf-8?B?aG41eFZTVS9KN2lGSzU0S3BaYmlmcVJCZnY2Z3d3QTlkWVh2UzZrT2Vod3BF?=
 =?utf-8?B?dCtsclRLNHkraVNZM1VvcVVMZnl3cjAvQ3lua1dBM1BYZ3FXY2lvZlBJQkxp?=
 =?utf-8?B?OU11Wkloa0JVcGhuRS91MXk0b2NDOTkyN1YwOVB3VUpoWFhMMVByelRDeENM?=
 =?utf-8?B?N0VtcWNyVTJwNFVXTmxsNEJjL25XK2dVdFVDSkZxTXdzWFdlZWlldjY0L2FU?=
 =?utf-8?B?NUlMTEZHZkhxN3JSSFVpbVZQeWNZVUhTeG5MR0dMOWIrdXNhbXBzL09uaHFr?=
 =?utf-8?B?WHpZT3NwQUFPR3NyK3ZPejIxcnNIc2VuNTNyaldoQnpsKzBTUEJqREo3TFY2?=
 =?utf-8?B?dXowRFhIN3dXOUJqcWh6eTh2bFlTM2hrRmtlUUxlVTFnL252aDNzL25JTmY4?=
 =?utf-8?B?elN6Ni8xcFJ3cVN0ZlhQL1UvSmQ4SGExSUVYVEgzSTh4UmRscWdHSkUrWWJm?=
 =?utf-8?B?dnAxRkJ3ekpseUhqbHJUQjE5enJkUXlZWk5LY0xJcHloNG5XekN5OENJUTVU?=
 =?utf-8?B?YWpiTU1vUVByMi91VTQwOHV2SjBMamRZdnV1NTYzTnNWNkJMbG5SWjc0WE5R?=
 =?utf-8?B?SHpJczZZNzJjbDlpYTBkSGtVN2ptQTViSHVkZ200ZTNYcm5NcHhYcGdxM2Ev?=
 =?utf-8?B?MHkwT0dySkZzQ0hYTjVGcHpjWEhIQTFkaUQ2ZURXcFBxdWJYQkVjQkpva2Qy?=
 =?utf-8?B?cElGKzNudXhiYTZwUXhqQWJIYzdHMFJKWkNNdTlkelg2Nzg1REgwaExOemVw?=
 =?utf-8?B?ZGFvWlFNbDZ3ajZiQnBSd3hHNTZTNkFFbmhyMk1KSXZJczRrUGhaejBpRnFw?=
 =?utf-8?B?VFEwL1F0MlFWWkNTd3B4aFVXSWNrYmZSTUUrczY0UVdMVEdObzFpdk9sakFl?=
 =?utf-8?B?TnpZVjNpNEZXVlVSWGpISEFmY3kyZ1lyZFFQc05TUHRISmJhR3VFWlZNbGMx?=
 =?utf-8?B?UWo3UUNkL3dmL3A1MmhzZjk0cFVYcGEvOGRnbDhkbTRKRjN3T09UbTBWbkhq?=
 =?utf-8?B?UXNidk10aVFOWERoNFdjR1h6cUdLb0lIcmVjZ3R4SnZ6cXlLSW5abDNETDNU?=
 =?utf-8?B?bXBsYlNycU1WeUdudDZTUFRNTEYxRndWcnA2VVdpc1FNQ0VpUEJQSERmRitC?=
 =?utf-8?B?Rjl4QUMzVkVjdFUwNnhUNmtTQytJNHBtZnp2RXJUVlFnREJOeW9XT2xmOUFD?=
 =?utf-8?Q?C6IH48rE7E9ldetpU8bUVaSmk?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <5EF3E3B2935564439EC8419E11477965@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	WHUM2Z0573CkkLGNhxPO6lCfgyPZzv8i3vqLM68nCUVjbvxEqlhvn47b32aFgKim+sz97/lyAwoDOdKNw4O7jFXU4D8Pi1Ki1K3DO3qNzdLV86rMx3kgPLfKr9kkL3WE2i1kLMw3qeL44Ud1cYYxl724WdDZdqnQoqg3uTc3kOZqGj2qHCw1UXvuTRiLjOyqUOryEPhTE70nYEebxz/5KAhlrYSSGjjFAsFuI8+bWDY7JsAvSzn0vpYMVB5L8y6RqRIxo2KEXrgwFdkvmFy7XzqymxPwESb2IUEjZLvc3vRd+L1fwrBT3VcTG0batV2LcIrivSk5DKJYFLMFl/Basrs2qTDQxYl0WdAFA+iQglXwx0zjIIErLQzDYQ62vRZO9zxhs6CzqhHXpo1ON0gALk5174td4/9iHxK58Genh3IqTN9EVCFCAWaqztNTprb9o4nXE2/KZ0LDXitwL6nttH8DgZCMzm3QkecrsZ85x24vLRbcfd1g0Ju3wuFVY0Mfuam97hut924TWgHqGPo/f/rMn+qTyHZ9AhffF/v9UN2rr9uHObGQ6z2yULzStaZbFvypTBmgKj8zZ9omm1+KQTT9vDnEFX+CX4+6do6xg1ML0ZbXa3hCMxcT7YfGiFs3vzNsj/YzVTwedMioLACvxmN7kH55B9RPdm3HZa5pRd6rtl25nQs1Vmb3KTSspcwrCqOG5qmfvHkCrMt38O1jJfMJx3zOUw9sgDHlx3IJU6AXYWD9Yq36hHbPQwppU4dPkvN5uZsZhEJ8a33TH6TBfSsnVjBxUIdTfvj41MLJOPHbyp9ct5R9d053bmJ06oJO
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 68d0b339-93b4-4e46-6115-08dafaf01b7f
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 14:10:38.6305
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: xHkhcrrfct347tc8pPokB501T6BWWIxbxojewTJ3rsFy5jFhYU3bUEdUFUOKVWYQQpxaTP212ZY1xocZ/pIz8pEnvBwDOzplUopqmwSrJwg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO3PR03MB6806

On 20/01/2023 1:10 pm, Jan Beulich wrote:
> On 20.01.2023 12:45, Andrew Cooper wrote:
>> This is a global variable (actually 3, one per GUEST_PAGING_LEVEL), operated
>> on using atomics only (with no regard to what else shares the same cacheline),
>> which emits a diagnostic (in debug builds only) without changing any program
>> behaviour.
>>
>> Based on read-only p2m types including logdirty, this diagnostic can be
>> tripped by entirely legitimate guest behaviour.
> Can it? At the very least shadow doesn't use p2m_ram_logdirty, but "cooks"
> log-dirty handling its own way.
>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> with the last sentence above corrected (if need be: removed).

I can remove it, but I feel as if there ought to be something there.

The other RO types are ram_ro, grant_map_ro and ram_shared.  shared has
hopefully been unshared before getting to this point, while the other
two have unclear semantics (as neither exist in real systems).

Writing to something which is actually a ROM either does silent discard,
or #MC.

Read-only grants really ought to #PF, but I bet this ABI change between
PV and HVM guests wasn't noticed because I'm not aware of any common
uses of read-only grants.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 14:14:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 14:14:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481812.746950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIs9Z-0006G2-3C; Fri, 20 Jan 2023 14:14:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481812.746950; Fri, 20 Jan 2023 14:14:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIs9Y-0006Fv-Vh; Fri, 20 Jan 2023 14:14:00 +0000
Received: by outflank-mailman (input) for mailman id 481812;
 Fri, 20 Jan 2023 14:13:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uODy=5R=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1pIs9X-0006Fn-J6
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 14:13:59 +0000
Received: from sender4-of-o50.zoho.com (sender4-of-o50.zoho.com
 [136.143.188.50]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac5948f4-98cc-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 15:13:56 +0100 (CET)
Received: from [10.10.1.128] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1674224029482241.5489273099247;
 Fri, 20 Jan 2023 06:13:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac5948f4-98cc-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; t=1674224033; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=XZh3rC7FHAof0N08XiWZ/CfdL/0rVKOP2tG0BHe39eY7DomE+aYBHzYrT8spYbEVhrBFCxX0umgL2Vd9pkDHvhCcGcgQqT9js3aPYwooXMGfm4Trx6VYwDkQ5mEPW3ODDOLUTD9mmOWHrDtFyfpAZAjq+sb8xofhUrBiK5a1Q18=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1674224033; h=Content-Type:Content-Transfer-Encoding:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=9ix9s84QpeY8dDxVbIX3sXORNpGtaeroYBcT4xnkZeg=; 
	b=mwSLHIP4cG9pExY3Oyok780YiqY9LZYBo757qhxPNd99LkBB3AkoBOsNoMwgxT7bnDOioQNperYjma5k2pvNSpzRCHLBIiAN29jqlol01XvKa3OYKW6O/cp6PfGlfhDcinyqK5pteKUw5TuDovJ62V9/owBJtqXz4iECqF2KsZI=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1674224033;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:To:To:References:From:From:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To:Cc;
	bh=9ix9s84QpeY8dDxVbIX3sXORNpGtaeroYBcT4xnkZeg=;
	b=ZObQBl7Wjez72p6x3Ngkho+w+PeSLBvGzF/km8oDVKaRXcTI/mZoqUKHY/a+p0xq
	H6J0sCr0t+v0t3yDgAqWXX6Q7P5InFNqTInYLFZNrAIvf0+c4ExFu54OoY7GnO08EAQ
	NkiQ22MR65ORMXOyIN0q7TgzN/Y0ZIFwjBQGYJe8=
Message-ID: <c3e4b2f6-b2b9-b496-a4a3-6292433f695d@apertussolutions.com>
Date: Fri, 20 Jan 2023 09:13:47 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [XEN PATCH] build: fix building flask headers before descending
 in flask/ss/
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20230120133626.55680-1-anthony.perard@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <20230120133626.55680-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 1/20/23 08:36, Anthony PERARD wrote:
> Unfortunatly, adding prerequisite to "$(obj)/ss/built_in.o" doesn't
> work because we have "$(obj)/%/built_in.o: $(obj)/% ;" in Rules.mk.
> So, make is allow to try to build objects in "xsm/flask/ss/" before
> generating the headers.
> 
> Adding a prerequisite on "$(obj)/ss" instead will fix the issue has
> that the target used to run make in this subdirectory.
> 
> Unfortunatly, that target is also used when running `make clean`, so
> we need to ignore it in this case. $(MAKECMDGOALS) can't be used in
> this case as it is empty, but we can guess which operation is done by
> looking at the list of loaded makefiles.
> 
> Fixes: 7a3bcd2babcc ("build: build everything from the root dir, use obj=$subdir")
> Reported-by: "Daniel P. Smith" <dpsmith@apertussolutions.com>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>   xen/xsm/flask/Makefile | 6 +++++-
>   1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
> index d25312f4fa..2d24346ee3 100644
> --- a/xen/xsm/flask/Makefile
> +++ b/xen/xsm/flask/Makefile
> @@ -16,7 +16,11 @@ FLASK_H_FILES := flask.h class_to_string.h initial_sid_to_string.h
>   AV_H_FILES := av_perm_to_string.h av_permissions.h
>   ALL_H_FILES := $(addprefix include/,$(FLASK_H_FILES) $(AV_H_FILES))
>   
> -$(addprefix $(obj)/,$(obj-y)) $(obj)/ss/built_in.o: $(addprefix $(obj)/,$(ALL_H_FILES))
> +# Adding prerequisite to descending into ss/ folder only when not running `make
> +# clean`.
> +ifeq ($(filter %/Makefile.clean,$(MAKEFILE_LIST)),)
> +$(addprefix $(obj)/,$(obj-y)) $(obj)/ss: $(addprefix $(obj)/,$(ALL_H_FILES))
> +endif
>   extra-y += $(ALL_H_FILES)
>   
>   mkflask := $(srcdir)/policy/mkflask.sh

It builds for me, but I do not have a large enough system to run a 
`-j16` build and confirm it at the scale where the problem occurred. 
Regardless, I will ack it.

Acked-by: Daniel P. Smith <dpsmith@apertussolutions.com>
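
The `$(MAKEFILE_LIST)` trick from the commit message can be demonstrated
outside the Xen tree. The sketch below is a hypothetical, self-contained
demo (the `Makefile.clean` file and the `CLEANING` variable are stand-ins
invented for illustration, not Xen's actual build files): once a
clean-specific makefile has been included, it shows up in
`$(MAKEFILE_LIST)`, even though `$(MAKECMDGOALS)` is empty for the
default goal.

```shell
# Hypothetical demo: detect a "clean" run via $(MAKEFILE_LIST).
# Makefile.clean stands in for the makefile the real build includes when
# cleaning. Requires GNU make >= 3.82 (.RECIPEPREFIX).
demo=$(mktemp -d)
cd "$demo"
touch Makefile.clean            # pretend this is the clean-mode makefile
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
# Simulate the clean path: the clean makefile gets included...
ifneq ($(CLEANING),)
include Makefile.clean
endif
all:
# ...so its presence in $(MAKEFILE_LIST) reveals the mode at parse time,
# even though $(MAKECMDGOALS) is empty when no goal is named.
ifeq ($(filter %Makefile.clean,$(MAKEFILE_LIST)),)
>@echo building
else
>@echo cleaning
endif
EOF
out_build=$(make --no-print-directory)
out_clean=$(make --no-print-directory CLEANING=1)
echo "$out_build / $out_clean"
```

On make older than 3.82, replace `.RECIPEPREFIX` with literal tab-indented
recipe lines; the `$(MAKEFILE_LIST)` behaviour is unchanged.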


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 14:17:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 14:17:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481817.746959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsCT-0006rc-Fo; Fri, 20 Jan 2023 14:17:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481817.746959; Fri, 20 Jan 2023 14:17:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsCT-0006rV-DB; Fri, 20 Jan 2023 14:17:01 +0000
Received: by outflank-mailman (input) for mailman id 481817;
 Fri, 20 Jan 2023 14:16:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1qDs=5R=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIsCR-0006r7-RA
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 14:16:59 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2077.outbound.protection.outlook.com [40.107.6.77])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 18dc7638-98cd-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 15:16:57 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9364.eurprd04.prod.outlook.com (2603:10a6:20b:4e9::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.25; Fri, 20 Jan
 2023 14:16:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.027; Fri, 20 Jan 2023
 14:16:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18dc7638-98cd-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A72YYcSZI9WIVOG9uVXtvsB6lmPuFOH/2yP/LXzM9xMyPU5NStNudA7ErKvp3l1GQEJlEVRBzQ5TJjhBi+/ko+5jA8wZXxBNvDfwd79b24YvebIaJavrNOu3MAEyqf6ShlZzGv5v6vy5aCH1sux/4yjZk9nD9PhTNiK4AaG6YIjUMSPTTRrXMFpRSsAxCEBOTyoWncFsvIqK5Ao3BxSalgPRzbNt4eWmUBXpYyRziHACdKCXD8YVtiqKVYgO6dzraURR7Ywy1sCg6jSM9pWpPZvI+5E7dNZ1cgXqB4i52Kc3UTdwCbqIF8xTz4QYBWEaqc4LQWpXEFVemnNfkMsOZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SAkNAJNbpqHnz7hWUsIgJtVYVxBDwYG/EWbdIQSgTGs=;
 b=i+gsLtb44OV6Pt+msawPBX10auBbUw3p/Sf64JWeMTpcUK4XoNhdfV0kCY8Sy+E2/+JoTL7jCXGHoYYmInLbG6yuU5aB6odNgkntm6PIB+nqNwnGGFR8og+NUFN6reVTzrte8uY2Fwt2I7cjqaAslAKkxDA8pOOACvzmXm2+hjObBODXuL7Zkkd1F1HsP+OedwLVev3Xd8U6ABCl5rxto+35S4F6ElOE5wect1PB9lU6IDZ5t+K1bDk1St4LaOxBbGpF4xKPYU3lqbCSEIgZvECMKn3BP8QKWSm8lTgKSCIX1gPVRqI0goVy8afpRuzTYLrbo2+xawzDE4w+yHcgGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SAkNAJNbpqHnz7hWUsIgJtVYVxBDwYG/EWbdIQSgTGs=;
 b=qZTKUyfGdJc8dQZpunTpcY1o96kPuLgTx/691DXEnhokzb+a1uUIVOAEr+vRJrIvou2sEK1CixLSsJmcCXD2A1w2viQEs6QnRaBDWTVhT5p6Da8LHrTpN5chs3Fc3LmrVDipd3xN2DCOLf54mkZUl8qXbFUad+xRq9pV8IB8YfioruRTtKMIinaSE5TIhRfiluC40bbCshqFkR/TXIGW3P0Y9Bw+MJg5hLjLXdv6D9CGMagQfq3TqMcC6sHqJ5mgwHUpZpmjoEbYRde9jjEo58kBpn9W3iJ3MzytGdz/5w6mjGmAagmBAfmBYnqIAEk3v0520tMKGxIMPtJD8JnAvQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b5c7af20-2694-6f09-2c06-81e4bd338b8d@suse.com>
Date: Fri, 20 Jan 2023 15:16:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH] build: fix building flask headers before descending
 in flask/ss/
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
References: <20230120133626.55680-1-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230120133626.55680-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0043.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9364:EE_
X-MS-Office365-Filtering-Correlation-Id: e70f0fa7-d7ae-4fba-243e-08dafaf0faaa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e70f0fa7-d7ae-4fba-243e-08dafaf0faaa
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 14:16:53.2302
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oBmizQ8yw0N7+4VO6qh9Wtq8s5xn0IhioCQTfZghD6NaEMHnZ05ICn5+xRvvjaaRKqZXfyhRRg8bb9Glu37ErA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9364

On 20.01.2023 14:36, Anthony PERARD wrote:
> Unfortunatly, adding prerequisite to "$(obj)/ss/built_in.o" doesn't
> work because we have "$(obj)/%/built_in.o: $(obj)/% ;" in Rules.mk.
> So, make is allow to try to build objects in "xsm/flask/ss/" before
> generating the headers.
> 
> Adding a prerequisite on "$(obj)/ss" instead will fix the issue has
> that the target used to run make in this subdirectory.

DYM "... the issue, as that's ..."?

> Unfortunatly, that target is also used when running `make clean`, so
> we need to ignore it in this case. $(MAKECMDGOALS) can't be used in

s/need/want/, I guess, as nothing would break, we'd just create files
only to delete them again right away?

> this case as it is empty, but we can guess which operation is done by
> looking at the list of loaded makefiles.
> 
> Fixes: 7a3bcd2babcc ("build: build everything from the root dir, use obj=$subdir")
> Reported-by: "Daniel P. Smith" <dpsmith@apertussolutions.com>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/xsm/flask/Makefile
> +++ b/xen/xsm/flask/Makefile
> @@ -16,7 +16,11 @@ FLASK_H_FILES := flask.h class_to_string.h initial_sid_to_string.h
>  AV_H_FILES := av_perm_to_string.h av_permissions.h
>  ALL_H_FILES := $(addprefix include/,$(FLASK_H_FILES) $(AV_H_FILES))
>  
> -$(addprefix $(obj)/,$(obj-y)) $(obj)/ss/built_in.o: $(addprefix $(obj)/,$(ALL_H_FILES))
> +# Adding prerequisite to descending into ss/ folder only when not running `make
> +# clean`.

That's then for all "clean" targets, isn't it? I.e. maybe better to refer to
`make *clean` (or `make %clean`, but I think the % could be misleading there)?

I'm happy to make adjustments while committing, as long (or as far) as you
agree with me doing so.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 14:20:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 14:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481822.746970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsGE-0008IC-18; Fri, 20 Jan 2023 14:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481822.746970; Fri, 20 Jan 2023 14:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsGD-0008I5-Uh; Fri, 20 Jan 2023 14:20:53 +0000
Received: by outflank-mailman (input) for mailman id 481822;
 Fri, 20 Jan 2023 14:20:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1qDs=5R=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pIsGC-0008Hw-Ae
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 14:20:52 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2073.outbound.protection.outlook.com [40.107.8.73])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a44c30a7-98cd-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 15:20:51 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7003.eurprd04.prod.outlook.com (2603:10a6:10:11d::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5986.23; Fri, 20 Jan
 2023 14:20:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.027; Fri, 20 Jan 2023
 14:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a44c30a7-98cd-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YEjlaD4uF90DKPJ1ARfbHMImcls3xImKYH+WwaGkTUntk5nOwUHJIW81MgUcSdOejycmIDUqGaOPG595/aJ73jJQFWI0rFucXraAYZJ6ad4nKqu4+cTub1BG5idfAI8XHpLiSScOBYtcXWvQEwWPZ9Nz8KsxFcQh08vwXP8e2lsyYmTkI97HIK07wC7U6wZsmkrpN1Y1tAn5dade9Riyc4ptJej7MuvziwOcChWeBanH82VVeopynZeJPLB1bnZIvOGdQ7ivZDSBz02fDpMsCC+TGRSGb3OA3Ui48MMe04aM7ScbVZs8HT0HA01X3oAQ/GyiyQusqmTxbPI4Q4bxNg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dwi/j5S99Iw4xD1l+OYmtBJ8QuievgUbd5Ro7RNbQt4=;
 b=czpTwzOeGrBHNSK4RyEGtGNckugBA+nY9gbZz+E+8OoemdG1OJmff99D/M3vTOcf7PE5ITdfKnUKEpOAklqRerwZ2YESMDdpvCdC1ijzXntyEijuTrnUArekQA2gzk5I/wkbG9NrAXce9gPXoT0ZdlnXmiIGzA6X8KGtiYn9i8leg46UrStLbFHTyI+46JRCe0M/rROoLxEUJSfqHszc9SbIEtn3J0tot9nms0qHut+sUPqbgokwszmeY0zlCHJVUGk0/nPp2ZuHBBKFCYLTLfkBBGnm55HZdadZIUxz7t2ctOeGsbdsZi2SZ77Yl5YO4U6+OXZu1lBUsgkOLbvwMw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dwi/j5S99Iw4xD1l+OYmtBJ8QuievgUbd5Ro7RNbQt4=;
 b=ym3MoAQE8kjf2KRsNZfnCbGXjnGCfM2oRA+1pzw1SvUMZ3uoZC4hPqdCoNBzuTLwgn5Rde1fCwpv9KoeLyG3ItzGCkLnGxIdX8dqhvw81v/roMAbBNYl0DYnkpYxdYhw+AlaKugOWKRU4GLIi6kVkRs58PW70XzoX/PT8zmPgEHGy7pQgdXwUxo5d0zhhJLAVwUsRAKKuADAlZuIEVZqhpg3EtE44lxv+WfZKFKIT7I65ENPQTxHdE5gaSIww36Iv40iJi37x55+Gef8Ku3FYZk3zRCAPm3PKAaenHVmMOqPU5DsYTYCbZpkb4RvA5cImo3i7m45uhO/ZkFE8TCbEg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1740f228-8821-97e1-6524-6a2ccff6d3cd@suse.com>
Date: Fri, 20 Jan 2023 15:20:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@citrix.com>, "Tim (Xen.org)" <tim@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230120114556.14003-1-andrew.cooper3@citrix.com>
 <f530ddfc-8f97-b913-e838-58cc352f6372@suse.com>
 <139c1d03-2cd8-a7c3-4f79-fbdd5e85c712@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <139c1d03-2cd8-a7c3-4f79-fbdd5e85c712@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0096.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB7003:EE_
X-MS-Office365-Filtering-Correlation-Id: 04a946a0-defd-4fc3-78cd-08dafaf186a4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 04a946a0-defd-4fc3-78cd-08dafaf186a4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 14:20:48.1678
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: F6gijH6ATfPwhv2bvysYEuOpwyPxO++fmfICopFcK9PD47CLFJrkJDC3BZaNIqGxKqOubQhbjoBY8N15oKfpMw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB7003

On 20.01.2023 15:10, Andrew Cooper wrote:
> On 20/01/2023 1:10 pm, Jan Beulich wrote:
>> On 20.01.2023 12:45, Andrew Cooper wrote:
>>> This is a global variable (actually 3, one per GUEST_PAGING_LEVEL), operated
>>> on using atomics only (with no regard to what else shares the same cacheline),
>>> which emits a diagnostic (in debug builds only) without changing any program
>>> behaviour.
>>>
>>> Based on read-only p2m types including logdirty, this diagnostic can be
>>> tripped by entirely legitimate guest behaviour.
>> Can it? At the very least shadow doesn't use p2m_ram_logdirty, but "cooks"
>> log-dirty handling its own way.
>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> Thanks.
> 
>> with the last sentence above corrected (if need be: removed).
> 
> I can remove it, but I feel as if there ought to be something there.
> 
> The other RO types are ram_ro, grant_map_ro and ram_shared.  shared has
> hopefully been unshared before getting to this point, while the other
> two have unclear semantics (as neither exist in real systems).

I'd be okay as long as the "including logdirty" part isn't there. If
we're unsure, perhaps then also instead of "can" either "might" or
"can possibly"?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 14:31:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 14:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481827.746980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsQC-0001Lo-1x; Fri, 20 Jan 2023 14:31:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481827.746980; Fri, 20 Jan 2023 14:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsQB-0001Lh-T6; Fri, 20 Jan 2023 14:31:11 +0000
Received: by outflank-mailman (input) for mailman id 481827;
 Fri, 20 Jan 2023 14:31:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8tu/=5R=citrix.com=prvs=37768f290=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIsQB-0001Lb-Hs
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 14:31:11 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0fb0df9e-98cf-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 15:31:02 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fb0df9e-98cf-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674225062;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=WP3JGrHHHdeaMOu99yqNMDK9+HdgyUdkPeT5thYFZIM=;
  b=TMNWBDfcNu4uiNq13iBNKX6bDuSvXaausQLSqteOb2jpeA47zJ1pcb9U
   jNXgBM0vzxT6s2fWoIwv1C6m9AubSvPqTjRY0ngxBq8MDY0H9E3FcQg1C
   msWNVaK8rSgWIa5Se3KkrutzrXBmXu6dN4YDDup5WdO23UIJUiipWQCQy
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93561588
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,232,1669093200"; 
   d="scan'208";a="93561588"
Date: Fri, 20 Jan 2023 14:30:56 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH] build: fix building flask headers before descending
 in flask/ss/
Message-ID: <Y8qloFEsK7e+Va6L@perard.uk.xensource.com>
References: <20230120133626.55680-1-anthony.perard@citrix.com>
 <b5c7af20-2694-6f09-2c06-81e4bd338b8d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <b5c7af20-2694-6f09-2c06-81e4bd338b8d@suse.com>

On Fri, Jan 20, 2023 at 03:16:51PM +0100, Jan Beulich wrote:
> On 20.01.2023 14:36, Anthony PERARD wrote:
> > Unfortunatly, adding prerequisite to "$(obj)/ss/built_in.o" doesn't
> > work because we have "$(obj)/%/built_in.o: $(obj)/% ;" in Rules.mk.
> > So, make is allow to try to build objects in "xsm/flask/ss/" before
> > generating the headers.
> > 
> > Adding a prerequisite on "$(obj)/ss" instead will fix the issue has
> > that the target used to run make in this subdirectory.
> 
> DYM "... the issue, as that's ..."?

Yes.

> > Unfortunatly, that target is also used when running `make clean`, so
> > we need to ignore it in this case. $(MAKECMDGOALS) can't be used in
> 
> s/need/want/, I guess, as nothing would break, we'd just create files
> only to delete them again right away?

Actually, I found out that the "FORCE" target doesn't exist in
Makefile.clean, which is why I had to avoid the rule on `make clean`.
But I don't think that needs fixing.

But you can s/need/want/.

> > this case as it is empty, but we can guess which operation is done by
> > looking at the list of loaded makefiles.
> > 
> > Fixes: 7a3bcd2babcc ("build: build everything from the root dir, use obj=$subdir")
> > Reported-by: "Daniel P. Smith" <dpsmith@apertussolutions.com>
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> > --- a/xen/xsm/flask/Makefile
> > +++ b/xen/xsm/flask/Makefile
> > @@ -16,7 +16,11 @@ FLASK_H_FILES := flask.h class_to_string.h initial_sid_to_string.h
> >  AV_H_FILES := av_perm_to_string.h av_permissions.h
> >  ALL_H_FILES := $(addprefix include/,$(FLASK_H_FILES) $(AV_H_FILES))
> >  
> > -$(addprefix $(obj)/,$(obj-y)) $(obj)/ss/built_in.o: $(addprefix $(obj)/,$(ALL_H_FILES))
> > +# Adding prerequisite to descending into ss/ folder only when not running `make
> > +# clean`.
> 
> That's then for all "clean" targets, isn't it? I.e. maybe better to refer to
> `make *clean` (or `make %clean`, but I think the % could be misleading there)?

Yes, all clean targets. Referring to `make *clean` sounds good.
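
As an aside, the `$(MAKEFILE_LIST)` trick from the commit message can be
sketched roughly as follows. This is a hypothetical fragment modelled on
the patch context, not the exact hunk; `$(obj)` and `$(ALL_H_FILES)` are
assumed to be as in xen/xsm/flask/Makefile:

```make
# $(MAKECMDGOALS) is empty in the sub-make, but Makefile.clean is only
# ever loaded for `make *clean`, so its presence in $(MAKEFILE_LIST)
# tells a build run apart from a clean run.
ifeq ($(filter %/Makefile.clean,$(MAKEFILE_LIST)),)
# Not cleaning: generate the flask headers before descending into ss/.
$(obj)/ss: $(addprefix $(obj)/,$(ALL_H_FILES))
endif
```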

> I'm happy to make adjustments while committing, as long (or as far) as you
> agree with me doing so.

Yes.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 14:40:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 14:40:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481834.746990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsYf-00027V-Ue; Fri, 20 Jan 2023 14:39:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481834.746990; Fri, 20 Jan 2023 14:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsYf-00027O-Ri; Fri, 20 Jan 2023 14:39:57 +0000
Received: by outflank-mailman (input) for mailman id 481834;
 Fri, 20 Jan 2023 14:39:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIsYe-00027I-6Y
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 14:39:56 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4be14ec6-98d0-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 15:39:53 +0100 (CET)
Received: from mail-mw2nam12lp2042.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 09:39:49 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5198.namprd03.prod.outlook.com (2603:10b6:208:19e::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 14:39:47 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 14:39:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4be14ec6-98d0-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674225593;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=krikSJwJPtSfS/aPAT0ciI+5ihM6utcslkC35ZInVsw=;
  b=cR4VQHaC+j3jHhZc9XRlwM2enAk2YFwG1RgDAj1Grj+HkPHDl2PmcJY0
   HuCrY0YfURaCmR2FPoxEUGJp+jWTzN0a/K5LSqpWqpZD+VCAlWbpATgby
   GO4ksNvt0RM31FqFaUc1vYC1elO3/Ffb0rk6eJeqtAuGsR97a87x9Lmoi
   g=;
X-IronPort-RemoteIP: 104.47.66.42
X-IronPort-MID: 93494311
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mqAFUmpi6UL8Ft3velkWCFr/19x23NhAgVJm10ECAkPNl0vhWW2CKs8ykAhCimwyKym4N2y3OtCMAJGg81YFpd8fR6I6dSqfePh7CemPa6j750vYNK59mXnSpuSctZrQ0c+0xDgBxzyzLFN5JvkojsHfs0Fk7ZwA21VkkqsEXRuFpPJ8C1DupIMom7+yiBR5/X4SWfuR3zWf3H4uCjyq1EvKzQWg8wy9lwSstr+RwuiajQnZS159ynr0TB8Uj5VBk5pie5cimt58lf9NjPcZpJl1CHf6k/zqBpD/1Xi4H6qcjarHmF9S5iUNK5SRMBR+/kPxKFK3Y8FZgS5z53pfRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=krikSJwJPtSfS/aPAT0ciI+5ihM6utcslkC35ZInVsw=;
 b=J+c13e0i3K8iFMCBGrzVwEqBZ71UyzSA8nJAO9RgCB4hBv8R4vTgCu+dIA0yG78Udvw6nyir9NiKbt0K5aMxErwNDbuwFpYSp3snmX6CznK5cgNsgpmQpnrEdNE+VDK9t+z1BCeg8rOuDOeumIvk0VYI48wmk1b5HfHDmNcYUmGK3abwACsAQnKjkmuKfv5eRRy6zLpCMqoDRCl76OY3nyUkZ1yU+wB7b7Uwt1HpQiLbz6N1DRCWpwPNRaTLaZ48B3k58c28d0jbDjoyQUWqSPVyk8VKkHF+OR6JPnqkNehGL41eZm6EaCmBIhCK+lGGb0cUMDK0zOcDn7MHwAYW6g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=krikSJwJPtSfS/aPAT0ciI+5ihM6utcslkC35ZInVsw=;
 b=cMjH9EgKSxUOgdSzUo1Ld9d9WQJG6HzST3zzCf1xUpjvqXXoloU0hKswyXIwLC1F4bs9CpPfvSM4hybvehTGcgxPeEMtAYxFtl3xlDZ6VF0Yt9ofJynhVZ/2N+PVz257lvRcJxeHwCx6qHPRQuctDaeL3JihBULDLXAILPhfA7Q=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George
 Dunlap <George.Dunlap@citrix.com>, "Tim (Xen.org)" <tim@xen.org>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Thread-Topic: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Thread-Index: AQHZLMTJYmmsHN9huU+OUU/Zg+9KvK6nR7kAgAAQrgCAAALWgIAABU+A
Date: Fri, 20 Jan 2023 14:39:47 +0000
Message-ID: <e8dc6e46-9669-063a-ddb2-b56bdaa97825@citrix.com>
References: <20230120114556.14003-1-andrew.cooper3@citrix.com>
 <f530ddfc-8f97-b913-e838-58cc352f6372@suse.com>
 <139c1d03-2cd8-a7c3-4f79-fbdd5e85c712@citrix.com>
 <1740f228-8821-97e1-6524-6a2ccff6d3cd@suse.com>
In-Reply-To: <1740f228-8821-97e1-6524-6a2ccff6d3cd@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MN2PR03MB5198:EE_
x-ms-office365-filtering-correlation-id: c38068b5-8e3c-40d7-029a-08dafaf42ded
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <27822AA8B6C7F14FA99F8723DCBAB407@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c38068b5-8e3c-40d7-029a-08dafaf42ded
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 14:39:47.5349
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: eqNSoowYTV5frzSFhle2idT9G7o5eFVNyzBGXh8qi4/4kbt5znfZPOK+DZHMLTbCr7BQ7t0i8Ry9ptqWmnBZa+6woS/NdOvCQmfyeT4RtIU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5198

On 20/01/2023 2:20 pm, Jan Beulich wrote:
> On 20.01.2023 15:10, Andrew Cooper wrote:
>> On 20/01/2023 1:10 pm, Jan Beulich wrote:
>>> On 20.01.2023 12:45, Andrew Cooper wrote:
>>>> This is a global variable (actually 3, one per GUEST_PAGING_LEVEL), operated
>>>> on using atomics only (with no regard to what else shares the same cacheline),
>>>> which emits a diagnostic (in debug builds only) without changing any program
>>>> behaviour.
>>>>
>>>> Based on read-only p2m types including logdirty, this diagnostic can be
>>>> tripped by entirely legitimate guest behaviour.
>>> Can it? At the very least shadow doesn't use p2m_ram_logdirty, but "cooks"
>>> log-dirty handling its own way.
>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>> Thanks.
>>
>>> with the last sentence above corrected (if need be: removed).
>> I can remove it, but I feel as if there ought to be something there.
>>
>> The other RO types are ram_ro, grant_map_ro and ram_shared.  shared has
>> hopefully been unshared before getting to this point, while the other
>> two have unclear semantics (as neither exist in real systems).
> I'd be okay as long as the "including logdirty" part isn't there. If
> we're unsure, perhaps then also instead of "can" either "might" or
> "can possibly"?

I'll just delete it.  It's not important enough for the time it's taking.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 14:40:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 14:40:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481837.747000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsZN-0003QL-8i; Fri, 20 Jan 2023 14:40:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481837.747000; Fri, 20 Jan 2023 14:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIsZN-0003QE-4r; Fri, 20 Jan 2023 14:40:41 +0000
Received: by outflank-mailman (input) for mailman id 481837;
 Fri, 20 Jan 2023 14:40:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nUaQ=5R=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pIsZL-0003Oo-Pq
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 14:40:39 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on2083.outbound.protection.outlook.com [40.107.102.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6667a0b0-98d0-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 15:40:36 +0100 (CET)
Received: from BN9PR03CA0408.namprd03.prod.outlook.com (2603:10b6:408:111::23)
 by CY8PR12MB8313.namprd12.prod.outlook.com (2603:10b6:930:7d::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 14:40:34 +0000
Received: from BN8NAM11FT092.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:111:cafe::7a) by BN9PR03CA0408.outlook.office365.com
 (2603:10b6:408:111::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27 via Frontend
 Transport; Fri, 20 Jan 2023 14:40:33 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT092.mail.protection.outlook.com (10.13.176.180) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Fri, 20 Jan 2023 14:40:33 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 08:40:32 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 06:40:32 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 20 Jan 2023 08:40:30 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6667a0b0-98d0-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YdmQSs4PigCpDhlEGurgvx0aPsunO8pYfJPbSlUmta+D2WxpHeHMvT7ELnrtQ04t2U8CE7hMR2d6gCdavYknpCjM8YpdHTARgEVpsoeOlwFdbEXwzURAIdY06MRTkf6Oh/jdrWpSb6fdCViZHRvjfqr2Oc8tMlBZU2hfL1xnPJOPjqNo0+Dfz2UXLY0C/p2w4WtgGWOcw+sUOO2HF2dXx0TwqKoSsuUNuga6LYGKUsqeUP/BB6H/23EZIEaeexunH0SaxfpawRbPc9kDhO1mF4pnr/pcITAchwFZkY7wP96k80CFSGeSGTJLoI+rTFv0RhA/IPeWwKmjYxx3MA2mCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nSkRRpoOwtmAPbRvNaRj9rkrGYQoBV4WMN4Ek7MbAV8=;
 b=XqUgdQ2zXZvxJ9PYfpe6gyUgWT8QpsGLJDZ9CYGcDhOIPOZ0idlBHzrWfXNFu39Cx1kSe1Q/Yvp9bZmwhx6lc649PMyy6a8fLolK8/SlVNoWW17KiuWKVMyqcebs6s2Scrqhrp0egmpr99ZNWsrIpRtv8/ulFn6OGw/fCW8kgsS7+s+v95TedYALPXm0StRRk61WZ4np6fz9HnrTsfi0IQxniqftV2o/s3y4f+sq9ZEg7NBM/D0KXpaLP+nJPYEksOioMVg20KDn3IU3uNid6ig2cw4Vjf5rFynF5rKSKIWbDwS6w7Uki1JWByIpn84TOIj/sGB4kcbRiC+ed4jD0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nSkRRpoOwtmAPbRvNaRj9rkrGYQoBV4WMN4Ek7MbAV8=;
 b=N78iP84cJEkXBBasgfjslKdB/LD54yhBugX6dHoaXmbNyJnVLRp9GpYddnXr1qPbYV026bT+7yGcw6T0ymBY+1kyuWyffVsPKG6rfW/r9uBsb8r8C3z/qjNA62lInAvTPSYzVai3OsBFdOKmk6tppX3f7vvRXjotdnK0SPDyzWc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <ba37ee02-c07c-2803-0867-149c779890b6@amd.com>
Date: Fri, 20 Jan 2023 15:40:30 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Ayan Kumar Halder <ayan.kumar.halder@amd.com>
CC: <xen-devel@lists.xenproject.org>, <stefano.stabellini@amd.com>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-3-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
 <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT092:EE_|CY8PR12MB8313:EE_
X-MS-Office365-Filtering-Correlation-Id: 6c3aca74-301d-4d15-32c3-08dafaf44939
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 14:40:33.3327
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6c3aca74-301d-4d15-32c3-08dafaf44939
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT092.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8313

Hello,

On 20/01/2023 10:32, Julien Grall wrote:
> 
> 
> Hi,
> 
> On 19/01/2023 22:54, Stefano Stabellini wrote:
>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
>>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>>> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
>>> address.
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> 
> I have committed the patch.
The CI test jobs (static-mem) failed on this patch:
https://gitlab.com/xen-project/xen/-/pipelines/752911309

I took a look at it, and the failure is because the test script tries
to find a node whose unit-address does not have leading zeroes.
However, after this patch we switched to PRIpaddr, which is defined
as 016lx/016llx, so we end up creating nodes like:

memory@0000000050000000

instead of:

memory@50000000

We could modify the script, but do we really want to create nodes
with leading zeroes? The DT spec does not mention it, although [1]
specifies that the Linux convention is not to have leading zeroes.

[1] https://elinux.org/Device_Tree_Linux

> 
> Cheers,
> 
> --
> Julien Grall
> 

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 14:41:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 14:41:55 +0000
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Thread-Topic: [PATCH] x86/shadow: Drop dubious lastpage diagnostic
Thread-Index: AQHZLMTJYmmsHN9huU+OUU/Zg+9KvK6nR7kAgAAQrgCAAALWgIAABU+AgAAAiIA=
Date: Fri, 20 Jan 2023 14:41:41 +0000
Message-ID: <bac9c940-627c-4ebd-ea11-823531e3f3ec@citrix.com>
References: <20230120114556.14003-1-andrew.cooper3@citrix.com>
 <f530ddfc-8f97-b913-e838-58cc352f6372@suse.com>
 <139c1d03-2cd8-a7c3-4f79-fbdd5e85c712@citrix.com>
 <1740f228-8821-97e1-6524-6a2ccff6d3cd@suse.com>
 <e8dc6e46-9669-063a-ddb2-b56bdaa97825@citrix.com>
In-Reply-To: <e8dc6e46-9669-063a-ddb2-b56bdaa97825@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-ID: <F59BBD0412078642A2190FDDC68E26B3@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 20/01/2023 2:39 pm, Andrew Cooper wrote:
> On 20/01/2023 2:20 pm, Jan Beulich wrote:
>> On 20.01.2023 15:10, Andrew Cooper wrote:
>>> On 20/01/2023 1:10 pm, Jan Beulich wrote:
>>>> On 20.01.2023 12:45, Andrew Cooper wrote:
>>>>> This is a global variable (actually 3, one per GUEST_PAGING_LEVEL), operated
>>>>> on using atomics only (with no regard to what else shares the same cacheline),
>>>>> which emits a diagnostic (in debug builds only) without changing any program
>>>>> behaviour.
>>>>>
>>>>> Based on read-only p2m types including logdirty, this diagnostic can be
>>>>> tripped by entirely legitimate guest behaviour.
>>>> Can it? At the very least shadow doesn't use p2m_ram_logdirty, but "cooks"
>>>> log-dirty handling its own way.
>>>>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>> Thanks.
>>>
>>>> with the last sentence above corrected (if need be: removed).
>>> I can remove it, but I feel as if there ought to be something there.
>>>
>>> The other RO types are ram_ro, grant_map_ro and ram_shared.  shared has
>>> hopefully been unshared before getting to this point, while the other
>>> two have unclear semantics (as neither exist in real systems).
>> I'd be okay as long as the "including logdirty" part isn't there. If
>> we're unsure, perhaps then also instead of "can" either "might" or
>> "can possibly"?
> I'll just delete it.  It's not important enough for the time it's taking.

Oh, I see what you mean.  Yeah, that will work.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 01/14] xen/riscv: add _zicsr to CFLAGS
Date: Fri, 20 Jan 2023 16:59:41 +0200
Message-Id: <3617dc882193166580ae7e5d122447e924cab524.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Accessing some registers requires CSR instructions, which are part of
the Zicsr extension.
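
For context (not part of the patch): the CSR access instructions come from
the Zicsr extension, so with toolchains that no longer imply it in -march,
instructions such as the following are rejected unless _zicsr is appended
(register choices here are illustrative):

```asm
# Accepted only when -march contains _zicsr, e.g. -march=rv64gc_zicsr.
csrr  t0, sstatus        # read the supervisor status CSR
csrw  satp, zero         # write the address-translation CSR
```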

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/arch.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
index 012dc677c3..95b41d9f3e 100644
--- a/xen/arch/riscv/arch.mk
+++ b/xen/arch/riscv/arch.mk
@@ -10,7 +10,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
 # into the upper half _or_ the lower half of the address space.
 # -mcmodel=medlow would force Xen into the lower half.
 
-CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany
 
 # TODO: Drop override when more of the build is working
 override ALL_OBJS-y = arch/$(TARGET_ARCH)/built_in.o
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 03/14] xen/riscv: add <asm/riscv_encoding.h> header
Date: Fri, 20 Jan 2023 16:59:43 +0200
Message-Id: <fe153cbeffd4ba4e158271ccd2449628f4973481.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/include/asm/riscv_encoding.h | 945 ++++++++++++++++++++
 1 file changed, 945 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/riscv_encoding.h

diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
new file mode 100644
index 0000000000..8a43d49f7a
--- /dev/null
+++ b/xen/arch/riscv/include/asm/riscv_encoding.h
@@ -0,0 +1,945 @@
+/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
+/*
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *   Anup Patel <anup.patel@wdc.com>
+ * 
+ * The source has been largely adapted from OpenSBI:
+ * include/sbi/riscv_encoding.h
+ * 
+ */
+
+#ifndef __RISCV_ENCODING_H__
+#define __RISCV_ENCODING_H__
+
+#define _UL(X) _AC(X, UL)
+#define _ULL(X) _AC(X, ULL)
+
+/* clang-format off */
+#define MSTATUS_SIE			_UL(0x00000002)
+#define MSTATUS_MIE			_UL(0x00000008)
+#define MSTATUS_SPIE_SHIFT		5
+#define MSTATUS_SPIE			(_UL(1) << MSTATUS_SPIE_SHIFT)
+#define MSTATUS_UBE			_UL(0x00000040)
+#define MSTATUS_MPIE			_UL(0x00000080)
+#define MSTATUS_SPP_SHIFT		8
+#define MSTATUS_SPP			(_UL(1) << MSTATUS_SPP_SHIFT)
+#define MSTATUS_MPP_SHIFT		11
+#define MSTATUS_MPP			(_UL(3) << MSTATUS_MPP_SHIFT)
+#define MSTATUS_FS			_UL(0x00006000)
+#define MSTATUS_FS_OFF			_UL(0x00000000)
+#define MSTATUS_FS_INITIAL		_UL(0x00002000)
+#define MSTATUS_FS_CLEAN		_UL(0x00004000)
+#define MSTATUS_FS_DIRTY		_UL(0x00006000)
+#define MSTATUS_XS			_UL(0x00018000)
+#define MSTATUS_XS_OFF			_UL(0x00000000)
+#define MSTATUS_XS_INITIAL		_UL(0x00008000)
+#define MSTATUS_XS_CLEAN		_UL(0x00010000)
+#define MSTATUS_XS_DIRTY		_UL(0x00018000)
+#define MSTATUS_VS			_UL(0x01800000)
+#define MSTATUS_VS_OFF			_UL(0x00000000)
+#define MSTATUS_VS_INITIAL		_UL(0x00800000)
+#define MSTATUS_VS_CLEAN		_UL(0x01000000)
+#define MSTATUS_VS_DIRTY		_UL(0x01800000)
+#define MSTATUS_MPRV			_UL(0x00020000)
+#define MSTATUS_SUM			_UL(0x00040000)
+#define MSTATUS_MXR			_UL(0x00080000)
+#define MSTATUS_TVM			_UL(0x00100000)
+#define MSTATUS_TW			_UL(0x00200000)
+#define MSTATUS_TSR			_UL(0x00400000)
+#define MSTATUS32_SD			_UL(0x80000000)
+#if __riscv_xlen == 64
+#define MSTATUS_UXL			_ULL(0x0000000300000000)
+#define MSTATUS_SXL			_ULL(0x0000000C00000000)
+#define MSTATUS_SBE			_ULL(0x0000001000000000)
+#define MSTATUS_MBE			_ULL(0x0000002000000000)
+#define MSTATUS_MPV			_ULL(0x0000008000000000)
+#else
+#define MSTATUSH_SBE			_UL(0x00000010)
+#define MSTATUSH_MBE			_UL(0x00000020)
+#define MSTATUSH_MPV			_UL(0x00000080)
+#endif
+#define MSTATUS32_SD			_UL(0x80000000)
+#define MSTATUS64_SD			_ULL(0x8000000000000000)
+
+#define SSTATUS_SIE			MSTATUS_SIE
+#define SSTATUS_SPIE_SHIFT		MSTATUS_SPIE_SHIFT
+#define SSTATUS_SPIE			MSTATUS_SPIE
+#define SSTATUS_SPP_SHIFT		MSTATUS_SPP_SHIFT
+#define SSTATUS_SPP			MSTATUS_SPP
+#define SSTATUS_FS			MSTATUS_FS
+#define SSTATUS_FS_OFF			MSTATUS_FS_OFF
+#define SSTATUS_FS_INITIAL		MSTATUS_FS_INITIAL
+#define SSTATUS_FS_CLEAN		MSTATUS_FS_CLEAN
+#define SSTATUS_FS_DIRTY		MSTATUS_FS_DIRTY
+#define SSTATUS_XS			MSTATUS_XS
+#define SSTATUS_XS_OFF			MSTATUS_XS_OFF
+#define SSTATUS_XS_INITIAL		MSTATUS_XS_INITIAL
+#define SSTATUS_XS_CLEAN		MSTATUS_XS_CLEAN
+#define SSTATUS_XS_DIRTY		MSTATUS_XS_DIRTY
+#define SSTATUS_VS			MSTATUS_VS
+#define SSTATUS_VS_OFF			MSTATUS_VS_OFF
+#define SSTATUS_VS_INITIAL		MSTATUS_VS_INITIAL
+#define SSTATUS_VS_CLEAN		MSTATUS_VS_CLEAN
+#define SSTATUS_VS_DIRTY		MSTATUS_VS_DIRTY
+#define SSTATUS_SUM			MSTATUS_SUM
+#define SSTATUS_MXR			MSTATUS_MXR
+#define SSTATUS32_SD			MSTATUS32_SD
+#define SSTATUS64_UXL			MSTATUS_UXL
+#define SSTATUS64_SD			MSTATUS64_SD
+
+#if __riscv_xlen == 64
+#define HSTATUS_VSXL			_UL(0x300000000)
+#define HSTATUS_VSXL_SHIFT		32
+#endif
+#define HSTATUS_VTSR			_UL(0x00400000)
+#define HSTATUS_VTW			_UL(0x00200000)
+#define HSTATUS_VTVM			_UL(0x00100000)
+#define HSTATUS_VGEIN			_UL(0x0003f000)
+#define HSTATUS_VGEIN_SHIFT		12
+#define HSTATUS_HU			_UL(0x00000200)
+#define HSTATUS_SPVP			_UL(0x00000100)
+#define HSTATUS_SPV			_UL(0x00000080)
+#define HSTATUS_GVA			_UL(0x00000040)
+#define HSTATUS_VSBE			_UL(0x00000020)
+
+#define IRQ_S_SOFT			1
+#define IRQ_VS_SOFT			2
+#define IRQ_M_SOFT			3
+#define IRQ_S_TIMER			5
+#define IRQ_VS_TIMER			6
+#define IRQ_M_TIMER			7
+#define IRQ_S_EXT			9
+#define IRQ_VS_EXT			10
+#define IRQ_M_EXT			11
+#define IRQ_S_GEXT			12
+#define IRQ_PMU_OVF			13
+
+#define MIP_SSIP			(_UL(1) << IRQ_S_SOFT)
+#define MIP_VSSIP			(_UL(1) << IRQ_VS_SOFT)
+#define MIP_MSIP			(_UL(1) << IRQ_M_SOFT)
+#define MIP_STIP			(_UL(1) << IRQ_S_TIMER)
+#define MIP_VSTIP			(_UL(1) << IRQ_VS_TIMER)
+#define MIP_MTIP			(_UL(1) << IRQ_M_TIMER)
+#define MIP_SEIP			(_UL(1) << IRQ_S_EXT)
+#define MIP_VSEIP			(_UL(1) << IRQ_VS_EXT)
+#define MIP_MEIP			(_UL(1) << IRQ_M_EXT)
+#define MIP_SGEIP			(_UL(1) << IRQ_S_GEXT)
+#define MIP_LCOFIP			(_UL(1) << IRQ_PMU_OVF)
+
+#define SIP_SSIP			MIP_SSIP
+#define SIP_STIP			MIP_STIP
+
+#define PRV_U				_UL(0)
+#define PRV_S				_UL(1)
+#define PRV_M				_UL(3)
+
+#define SATP32_MODE			_UL(0x80000000)
+#define SATP32_MODE_SHIFT		31
+#define SATP32_ASID			_UL(0x7FC00000)
+#define SATP32_ASID_SHIFT		22
+#define SATP32_PPN			_UL(0x003FFFFF)
+#define SATP64_MODE			_ULL(0xF000000000000000)
+#define SATP64_MODE_SHIFT		60
+#define SATP64_ASID			_ULL(0x0FFFF00000000000)
+#define SATP64_ASID_SHIFT		44
+#define SATP64_PPN			_ULL(0x00000FFFFFFFFFFF)
+
+#define SATP_MODE_OFF			_UL(0)
+#define SATP_MODE_SV32			_UL(1)
+#define SATP_MODE_SV39			_UL(8)
+#define SATP_MODE_SV48			_UL(9)
+#define SATP_MODE_SV57			_UL(10)
+#define SATP_MODE_SV64			_UL(11)
+
+#define HGATP_MODE_OFF			_UL(0)
+#define HGATP_MODE_SV32X4		_UL(1)
+#define HGATP_MODE_SV39X4		_UL(8)
+#define HGATP_MODE_SV48X4		_UL(9)
+
+#define HGATP32_MODE_SHIFT		31
+#define HGATP32_VMID_SHIFT		22
+#define HGATP32_VMID_MASK		_UL(0x1FC00000)
+#define HGATP32_PPN			_UL(0x003FFFFF)
+
+#define HGATP64_MODE_SHIFT		60
+#define HGATP64_VMID_SHIFT		44
+#define HGATP64_VMID_MASK		_ULL(0x03FFF00000000000)
+#define HGATP64_PPN			_ULL(0x00000FFFFFFFFFFF)
+
+#define PMP_R				_UL(0x01)
+#define PMP_W				_UL(0x02)
+#define PMP_X				_UL(0x04)
+#define PMP_A				_UL(0x18)
+#define PMP_A_TOR			_UL(0x08)
+#define PMP_A_NA4			_UL(0x10)
+#define PMP_A_NAPOT			_UL(0x18)
+#define PMP_L				_UL(0x80)
+
+#define PMP_SHIFT			2
+#define PMP_COUNT			64
+#if __riscv_xlen == 64
+#define PMP_ADDR_MASK			((_ULL(0x1) << 54) - 1)
+#else
+#define PMP_ADDR_MASK			_UL(0xFFFFFFFF)
+#endif
+
+#if __riscv_xlen == 64
+#define MSTATUS_SD			MSTATUS64_SD
+#define SSTATUS_SD			SSTATUS64_SD
+#define SATP_MODE			SATP64_MODE
+#define SATP_MODE_SHIFT			SATP64_MODE_SHIFT
+
+#define HGATP_PPN			HGATP64_PPN
+#define HGATP_VMID_SHIFT		HGATP64_VMID_SHIFT
+#define HGATP_VMID_MASK			HGATP64_VMID_MASK
+#define HGATP_MODE_SHIFT		HGATP64_MODE_SHIFT
+#else
+#define MSTATUS_SD			MSTATUS32_SD
+#define SSTATUS_SD			SSTATUS32_SD
+#define SATP_MODE			SATP32_MODE
+#define SATP_MODE_SHIFT			SATP32_MODE_SHIFT
+
+#define HGATP_PPN			HGATP32_PPN
+#define HGATP_VMID_SHIFT		HGATP32_VMID_SHIFT
+#define HGATP_VMID_MASK			HGATP32_VMID_MASK
+#define HGATP_MODE_SHIFT		HGATP32_MODE_SHIFT
+#endif
+
+#define TOPI_IID_SHIFT			16
+#define TOPI_IID_MASK			0xfff
+#define TOPI_IPRIO_MASK		0xff
+
+#if __riscv_xlen == 64
+#define MHPMEVENT_OF			(_UL(1) << 63)
+#define MHPMEVENT_MINH			(_UL(1) << 62)
+#define MHPMEVENT_SINH			(_UL(1) << 61)
+#define MHPMEVENT_UINH			(_UL(1) << 60)
+#define MHPMEVENT_VSINH			(_UL(1) << 59)
+#define MHPMEVENT_VUINH			(_UL(1) << 58)
+#else
+#define MHPMEVENTH_OF			(_ULL(1) << 31)
+#define MHPMEVENTH_MINH			(_ULL(1) << 30)
+#define MHPMEVENTH_SINH			(_ULL(1) << 29)
+#define MHPMEVENTH_UINH			(_ULL(1) << 28)
+#define MHPMEVENTH_VSINH		(_ULL(1) << 27)
+#define MHPMEVENTH_VUINH		(_ULL(1) << 26)
+
+#define MHPMEVENT_OF			(MHPMEVENTH_OF << 32)
+#define MHPMEVENT_MINH			(MHPMEVENTH_MINH << 32)
+#define MHPMEVENT_SINH			(MHPMEVENTH_SINH << 32)
+#define MHPMEVENT_UINH			(MHPMEVENTH_UINH << 32)
+#define MHPMEVENT_VSINH			(MHPMEVENTH_VSINH << 32)
+#define MHPMEVENT_VUINH			(MHPMEVENTH_VUINH << 32)
+
+#endif
+
+#define MHPMEVENT_SSCOF_MASK		_ULL(0xFFFF000000000000)
+
+#if __riscv_xlen > 32
+#define ENVCFG_STCE			(_ULL(1) << 63)
+#define ENVCFG_PBMTE			(_ULL(1) << 62)
+#else
+#define ENVCFGH_STCE			(_UL(1) << 31)
+#define ENVCFGH_PBMTE			(_UL(1) << 30)
+#endif
+#define ENVCFG_CBZE			(_UL(1) << 7)
+#define ENVCFG_CBCFE			(_UL(1) << 6)
+#define ENVCFG_CBIE_SHIFT		4
+#define ENVCFG_CBIE			(_UL(0x3) << ENVCFG_CBIE_SHIFT)
+#define ENVCFG_CBIE_ILL			_UL(0x0)
+#define ENVCFG_CBIE_FLUSH		_UL(0x1)
+#define ENVCFG_CBIE_INV			_UL(0x3)
+#define ENVCFG_FIOM			_UL(0x1)
+
+/* ===== User-level CSRs ===== */
+
+/* User Trap Setup (N-extension) */
+#define CSR_USTATUS			0x000
+#define CSR_UIE				0x004
+#define CSR_UTVEC			0x005
+
+/* User Trap Handling (N-extension) */
+#define CSR_USCRATCH			0x040
+#define CSR_UEPC			0x041
+#define CSR_UCAUSE			0x042
+#define CSR_UTVAL			0x043
+#define CSR_UIP				0x044
+
+/* User Floating-point CSRs */
+#define CSR_FFLAGS			0x001
+#define CSR_FRM				0x002
+#define CSR_FCSR			0x003
+
+/* User Counters/Timers */
+#define CSR_CYCLE			0xc00
+#define CSR_TIME			0xc01
+#define CSR_INSTRET			0xc02
+#define CSR_HPMCOUNTER3			0xc03
+#define CSR_HPMCOUNTER4			0xc04
+#define CSR_HPMCOUNTER5			0xc05
+#define CSR_HPMCOUNTER6			0xc06
+#define CSR_HPMCOUNTER7			0xc07
+#define CSR_HPMCOUNTER8			0xc08
+#define CSR_HPMCOUNTER9			0xc09
+#define CSR_HPMCOUNTER10		0xc0a
+#define CSR_HPMCOUNTER11		0xc0b
+#define CSR_HPMCOUNTER12		0xc0c
+#define CSR_HPMCOUNTER13		0xc0d
+#define CSR_HPMCOUNTER14		0xc0e
+#define CSR_HPMCOUNTER15		0xc0f
+#define CSR_HPMCOUNTER16		0xc10
+#define CSR_HPMCOUNTER17		0xc11
+#define CSR_HPMCOUNTER18		0xc12
+#define CSR_HPMCOUNTER19		0xc13
+#define CSR_HPMCOUNTER20		0xc14
+#define CSR_HPMCOUNTER21		0xc15
+#define CSR_HPMCOUNTER22		0xc16
+#define CSR_HPMCOUNTER23		0xc17
+#define CSR_HPMCOUNTER24		0xc18
+#define CSR_HPMCOUNTER25		0xc19
+#define CSR_HPMCOUNTER26		0xc1a
+#define CSR_HPMCOUNTER27		0xc1b
+#define CSR_HPMCOUNTER28		0xc1c
+#define CSR_HPMCOUNTER29		0xc1d
+#define CSR_HPMCOUNTER30		0xc1e
+#define CSR_HPMCOUNTER31		0xc1f
+#define CSR_CYCLEH			0xc80
+#define CSR_TIMEH			0xc81
+#define CSR_INSTRETH			0xc82
+#define CSR_HPMCOUNTER3H		0xc83
+#define CSR_HPMCOUNTER4H		0xc84
+#define CSR_HPMCOUNTER5H		0xc85
+#define CSR_HPMCOUNTER6H		0xc86
+#define CSR_HPMCOUNTER7H		0xc87
+#define CSR_HPMCOUNTER8H		0xc88
+#define CSR_HPMCOUNTER9H		0xc89
+#define CSR_HPMCOUNTER10H		0xc8a
+#define CSR_HPMCOUNTER11H		0xc8b
+#define CSR_HPMCOUNTER12H		0xc8c
+#define CSR_HPMCOUNTER13H		0xc8d
+#define CSR_HPMCOUNTER14H		0xc8e
+#define CSR_HPMCOUNTER15H		0xc8f
+#define CSR_HPMCOUNTER16H		0xc90
+#define CSR_HPMCOUNTER17H		0xc91
+#define CSR_HPMCOUNTER18H		0xc92
+#define CSR_HPMCOUNTER19H		0xc93
+#define CSR_HPMCOUNTER20H		0xc94
+#define CSR_HPMCOUNTER21H		0xc95
+#define CSR_HPMCOUNTER22H		0xc96
+#define CSR_HPMCOUNTER23H		0xc97
+#define CSR_HPMCOUNTER24H		0xc98
+#define CSR_HPMCOUNTER25H		0xc99
+#define CSR_HPMCOUNTER26H		0xc9a
+#define CSR_HPMCOUNTER27H		0xc9b
+#define CSR_HPMCOUNTER28H		0xc9c
+#define CSR_HPMCOUNTER29H		0xc9d
+#define CSR_HPMCOUNTER30H		0xc9e
+#define CSR_HPMCOUNTER31H		0xc9f
+
+/* ===== Supervisor-level CSRs ===== */
+
+/* Supervisor Trap Setup */
+#define CSR_SSTATUS			0x100
+#define CSR_SEDELEG			0x102
+#define CSR_SIDELEG			0x103
+#define CSR_SIE				0x104
+#define CSR_STVEC			0x105
+#define CSR_SCOUNTEREN			0x106
+
+/* Supervisor Configuration */
+#define CSR_SENVCFG			0x10a
+
+/* Supervisor Trap Handling */
+#define CSR_SSCRATCH			0x140
+#define CSR_SEPC			0x141
+#define CSR_SCAUSE			0x142
+#define CSR_STVAL			0x143
+#define CSR_SIP				0x144
+
+/* Supervisor Protection and Translation */
+#define CSR_SATP			0x180
+
+/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_SISELECT			0x150
+#define CSR_SIREG			0x151
+
+/* Supervisor-Level Interrupts (AIA) */
+#define CSR_STOPI			0xdb0
+
+/* Supervisor-Level IMSIC Interface (AIA) */
+#define CSR_SSETEIPNUM			0x158
+#define CSR_SCLREIPNUM			0x159
+#define CSR_SSETEIENUM			0x15a
+#define CSR_SCLREIENUM			0x15b
+#define CSR_STOPEI			0x15c
+
+/* Supervisor-Level High-Half CSRs (AIA) */
+#define CSR_SIEH			0x114
+#define CSR_SIPH			0x154
+
+/* Supervisor stateen CSRs */
+#define CSR_SSTATEEN0			0x10C
+#define CSR_SSTATEEN1			0x10D
+#define CSR_SSTATEEN2			0x10E
+#define CSR_SSTATEEN3			0x10F
+
+/* ===== Hypervisor-level CSRs ===== */
+
+/* Hypervisor Trap Setup (H-extension) */
+#define CSR_HSTATUS			0x600
+#define CSR_HEDELEG			0x602
+#define CSR_HIDELEG			0x603
+#define CSR_HIE				0x604
+#define CSR_HCOUNTEREN			0x606
+#define CSR_HGEIE			0x607
+
+/* Hypervisor Configuration */
+#define CSR_HENVCFG			0x60a
+#define CSR_HENVCFGH			0x61a
+
+/* Hypervisor Trap Handling (H-extension) */
+#define CSR_HTVAL			0x643
+#define CSR_HIP				0x644
+#define CSR_HVIP			0x645
+#define CSR_HTINST			0x64a
+#define CSR_HGEIP			0xe12
+
+/* Hypervisor Protection and Translation (H-extension) */
+#define CSR_HGATP			0x680
+
+/* Hypervisor Counter/Timer Virtualization Registers (H-extension) */
+#define CSR_HTIMEDELTA			0x605
+#define CSR_HTIMEDELTAH			0x615
+
+/* Virtual Supervisor Registers (H-extension) */
+#define CSR_VSSTATUS			0x200
+#define CSR_VSIE			0x204
+#define CSR_VSTVEC			0x205
+#define CSR_VSSCRATCH			0x240
+#define CSR_VSEPC			0x241
+#define CSR_VSCAUSE			0x242
+#define CSR_VSTVAL			0x243
+#define CSR_VSIP			0x244
+#define CSR_VSATP			0x280
+
+/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
+#define CSR_HVIEN			0x608
+#define CSR_HVICTL			0x609
+#define CSR_HVIPRIO1			0x646
+#define CSR_HVIPRIO2			0x647
+
+/* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA) */
+#define CSR_VSISELECT			0x250
+#define CSR_VSIREG			0x251
+
+/* VS-Level Interrupts (H-extension with AIA) */
+#define CSR_VSTOPI			0xeb0
+
+/* VS-Level IMSIC Interface (H-extension with AIA) */
+#define CSR_VSSETEIPNUM		0x258
+#define CSR_VSCLREIPNUM		0x259
+#define CSR_VSSETEIENUM		0x25a
+#define CSR_VSCLREIENUM		0x25b
+#define CSR_VSTOPEI			0x25c
+
+/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
+#define CSR_HIDELEGH			0x613
+#define CSR_HVIENH			0x618
+#define CSR_HVIPH			0x655
+#define CSR_HVIPRIO1H			0x656
+#define CSR_HVIPRIO2H			0x657
+#define CSR_VSIEH			0x214
+#define CSR_VSIPH			0x254
+
+/* Hypervisor stateen CSRs */
+#define CSR_HSTATEEN0			0x60C
+#define CSR_HSTATEEN0H			0x61C
+#define CSR_HSTATEEN1			0x60D
+#define CSR_HSTATEEN1H			0x61D
+#define CSR_HSTATEEN2			0x60E
+#define CSR_HSTATEEN2H			0x61E
+#define CSR_HSTATEEN3			0x60F
+#define CSR_HSTATEEN3H			0x61F
+
+/* ===== Machine-level CSRs ===== */
+
+/* Machine Information Registers */
+#define CSR_MVENDORID			0xf11
+#define CSR_MARCHID			0xf12
+#define CSR_MIMPID			0xf13
+#define CSR_MHARTID			0xf14
+
+/* Machine Trap Setup */
+#define CSR_MSTATUS			0x300
+#define CSR_MISA			0x301
+#define CSR_MEDELEG			0x302
+#define CSR_MIDELEG			0x303
+#define CSR_MIE				0x304
+#define CSR_MTVEC			0x305
+#define CSR_MCOUNTEREN			0x306
+#define CSR_MSTATUSH			0x310
+
+/* Machine Configuration */
+#define CSR_MENVCFG			0x30a
+#define CSR_MENVCFGH			0x31a
+
+/* Machine Trap Handling */
+#define CSR_MSCRATCH			0x340
+#define CSR_MEPC			0x341
+#define CSR_MCAUSE			0x342
+#define CSR_MTVAL			0x343
+#define CSR_MIP				0x344
+#define CSR_MTINST			0x34a
+#define CSR_MTVAL2			0x34b
+
+/* Machine Memory Protection */
+#define CSR_PMPCFG0			0x3a0
+#define CSR_PMPCFG1			0x3a1
+#define CSR_PMPCFG2			0x3a2
+#define CSR_PMPCFG3			0x3a3
+#define CSR_PMPCFG4			0x3a4
+#define CSR_PMPCFG5			0x3a5
+#define CSR_PMPCFG6			0x3a6
+#define CSR_PMPCFG7			0x3a7
+#define CSR_PMPCFG8			0x3a8
+#define CSR_PMPCFG9			0x3a9
+#define CSR_PMPCFG10			0x3aa
+#define CSR_PMPCFG11			0x3ab
+#define CSR_PMPCFG12			0x3ac
+#define CSR_PMPCFG13			0x3ad
+#define CSR_PMPCFG14			0x3ae
+#define CSR_PMPCFG15			0x3af
+#define CSR_PMPADDR0			0x3b0
+#define CSR_PMPADDR1			0x3b1
+#define CSR_PMPADDR2			0x3b2
+#define CSR_PMPADDR3			0x3b3
+#define CSR_PMPADDR4			0x3b4
+#define CSR_PMPADDR5			0x3b5
+#define CSR_PMPADDR6			0x3b6
+#define CSR_PMPADDR7			0x3b7
+#define CSR_PMPADDR8			0x3b8
+#define CSR_PMPADDR9			0x3b9
+#define CSR_PMPADDR10			0x3ba
+#define CSR_PMPADDR11			0x3bb
+#define CSR_PMPADDR12			0x3bc
+#define CSR_PMPADDR13			0x3bd
+#define CSR_PMPADDR14			0x3be
+#define CSR_PMPADDR15			0x3bf
+#define CSR_PMPADDR16			0x3c0
+#define CSR_PMPADDR17			0x3c1
+#define CSR_PMPADDR18			0x3c2
+#define CSR_PMPADDR19			0x3c3
+#define CSR_PMPADDR20			0x3c4
+#define CSR_PMPADDR21			0x3c5
+#define CSR_PMPADDR22			0x3c6
+#define CSR_PMPADDR23			0x3c7
+#define CSR_PMPADDR24			0x3c8
+#define CSR_PMPADDR25			0x3c9
+#define CSR_PMPADDR26			0x3ca
+#define CSR_PMPADDR27			0x3cb
+#define CSR_PMPADDR28			0x3cc
+#define CSR_PMPADDR29			0x3cd
+#define CSR_PMPADDR30			0x3ce
+#define CSR_PMPADDR31			0x3cf
+#define CSR_PMPADDR32			0x3d0
+#define CSR_PMPADDR33			0x3d1
+#define CSR_PMPADDR34			0x3d2
+#define CSR_PMPADDR35			0x3d3
+#define CSR_PMPADDR36			0x3d4
+#define CSR_PMPADDR37			0x3d5
+#define CSR_PMPADDR38			0x3d6
+#define CSR_PMPADDR39			0x3d7
+#define CSR_PMPADDR40			0x3d8
+#define CSR_PMPADDR41			0x3d9
+#define CSR_PMPADDR42			0x3da
+#define CSR_PMPADDR43			0x3db
+#define CSR_PMPADDR44			0x3dc
+#define CSR_PMPADDR45			0x3dd
+#define CSR_PMPADDR46			0x3de
+#define CSR_PMPADDR47			0x3df
+#define CSR_PMPADDR48			0x3e0
+#define CSR_PMPADDR49			0x3e1
+#define CSR_PMPADDR50			0x3e2
+#define CSR_PMPADDR51			0x3e3
+#define CSR_PMPADDR52			0x3e4
+#define CSR_PMPADDR53			0x3e5
+#define CSR_PMPADDR54			0x3e6
+#define CSR_PMPADDR55			0x3e7
+#define CSR_PMPADDR56			0x3e8
+#define CSR_PMPADDR57			0x3e9
+#define CSR_PMPADDR58			0x3ea
+#define CSR_PMPADDR59			0x3eb
+#define CSR_PMPADDR60			0x3ec
+#define CSR_PMPADDR61			0x3ed
+#define CSR_PMPADDR62			0x3ee
+#define CSR_PMPADDR63			0x3ef
+
+/* Machine Counters/Timers */
+#define CSR_MCYCLE			0xb00
+#define CSR_MINSTRET			0xb02
+#define CSR_MHPMCOUNTER3		0xb03
+#define CSR_MHPMCOUNTER4		0xb04
+#define CSR_MHPMCOUNTER5		0xb05
+#define CSR_MHPMCOUNTER6		0xb06
+#define CSR_MHPMCOUNTER7		0xb07
+#define CSR_MHPMCOUNTER8		0xb08
+#define CSR_MHPMCOUNTER9		0xb09
+#define CSR_MHPMCOUNTER10		0xb0a
+#define CSR_MHPMCOUNTER11		0xb0b
+#define CSR_MHPMCOUNTER12		0xb0c
+#define CSR_MHPMCOUNTER13		0xb0d
+#define CSR_MHPMCOUNTER14		0xb0e
+#define CSR_MHPMCOUNTER15		0xb0f
+#define CSR_MHPMCOUNTER16		0xb10
+#define CSR_MHPMCOUNTER17		0xb11
+#define CSR_MHPMCOUNTER18		0xb12
+#define CSR_MHPMCOUNTER19		0xb13
+#define CSR_MHPMCOUNTER20		0xb14
+#define CSR_MHPMCOUNTER21		0xb15
+#define CSR_MHPMCOUNTER22		0xb16
+#define CSR_MHPMCOUNTER23		0xb17
+#define CSR_MHPMCOUNTER24		0xb18
+#define CSR_MHPMCOUNTER25		0xb19
+#define CSR_MHPMCOUNTER26		0xb1a
+#define CSR_MHPMCOUNTER27		0xb1b
+#define CSR_MHPMCOUNTER28		0xb1c
+#define CSR_MHPMCOUNTER29		0xb1d
+#define CSR_MHPMCOUNTER30		0xb1e
+#define CSR_MHPMCOUNTER31		0xb1f
+#define CSR_MCYCLEH			0xb80
+#define CSR_MINSTRETH			0xb82
+#define CSR_MHPMCOUNTER3H		0xb83
+#define CSR_MHPMCOUNTER4H		0xb84
+#define CSR_MHPMCOUNTER5H		0xb85
+#define CSR_MHPMCOUNTER6H		0xb86
+#define CSR_MHPMCOUNTER7H		0xb87
+#define CSR_MHPMCOUNTER8H		0xb88
+#define CSR_MHPMCOUNTER9H		0xb89
+#define CSR_MHPMCOUNTER10H		0xb8a
+#define CSR_MHPMCOUNTER11H		0xb8b
+#define CSR_MHPMCOUNTER12H		0xb8c
+#define CSR_MHPMCOUNTER13H		0xb8d
+#define CSR_MHPMCOUNTER14H		0xb8e
+#define CSR_MHPMCOUNTER15H		0xb8f
+#define CSR_MHPMCOUNTER16H		0xb90
+#define CSR_MHPMCOUNTER17H		0xb91
+#define CSR_MHPMCOUNTER18H		0xb92
+#define CSR_MHPMCOUNTER19H		0xb93
+#define CSR_MHPMCOUNTER20H		0xb94
+#define CSR_MHPMCOUNTER21H		0xb95
+#define CSR_MHPMCOUNTER22H		0xb96
+#define CSR_MHPMCOUNTER23H		0xb97
+#define CSR_MHPMCOUNTER24H		0xb98
+#define CSR_MHPMCOUNTER25H		0xb99
+#define CSR_MHPMCOUNTER26H		0xb9a
+#define CSR_MHPMCOUNTER27H		0xb9b
+#define CSR_MHPMCOUNTER28H		0xb9c
+#define CSR_MHPMCOUNTER29H		0xb9d
+#define CSR_MHPMCOUNTER30H		0xb9e
+#define CSR_MHPMCOUNTER31H		0xb9f
+
+/* Machine Counter Setup */
+#define CSR_MCOUNTINHIBIT		0x320
+#define CSR_MHPMEVENT3			0x323
+#define CSR_MHPMEVENT4			0x324
+#define CSR_MHPMEVENT5			0x325
+#define CSR_MHPMEVENT6			0x326
+#define CSR_MHPMEVENT7			0x327
+#define CSR_MHPMEVENT8			0x328
+#define CSR_MHPMEVENT9			0x329
+#define CSR_MHPMEVENT10			0x32a
+#define CSR_MHPMEVENT11			0x32b
+#define CSR_MHPMEVENT12			0x32c
+#define CSR_MHPMEVENT13			0x32d
+#define CSR_MHPMEVENT14			0x32e
+#define CSR_MHPMEVENT15			0x32f
+#define CSR_MHPMEVENT16			0x330
+#define CSR_MHPMEVENT17			0x331
+#define CSR_MHPMEVENT18			0x332
+#define CSR_MHPMEVENT19			0x333
+#define CSR_MHPMEVENT20			0x334
+#define CSR_MHPMEVENT21			0x335
+#define CSR_MHPMEVENT22			0x336
+#define CSR_MHPMEVENT23			0x337
+#define CSR_MHPMEVENT24			0x338
+#define CSR_MHPMEVENT25			0x339
+#define CSR_MHPMEVENT26			0x33a
+#define CSR_MHPMEVENT27			0x33b
+#define CSR_MHPMEVENT28			0x33c
+#define CSR_MHPMEVENT29			0x33d
+#define CSR_MHPMEVENT30			0x33e
+#define CSR_MHPMEVENT31			0x33f
+
+/* For RV32 */
+#define CSR_MHPMEVENT3H			0x723
+#define CSR_MHPMEVENT4H			0x724
+#define CSR_MHPMEVENT5H			0x725
+#define CSR_MHPMEVENT6H			0x726
+#define CSR_MHPMEVENT7H			0x727
+#define CSR_MHPMEVENT8H			0x728
+#define CSR_MHPMEVENT9H			0x729
+#define CSR_MHPMEVENT10H		0x72a
+#define CSR_MHPMEVENT11H		0x72b
+#define CSR_MHPMEVENT12H		0x72c
+#define CSR_MHPMEVENT13H		0x72d
+#define CSR_MHPMEVENT14H		0x72e
+#define CSR_MHPMEVENT15H		0x72f
+#define CSR_MHPMEVENT16H		0x730
+#define CSR_MHPMEVENT17H		0x731
+#define CSR_MHPMEVENT18H		0x732
+#define CSR_MHPMEVENT19H		0x733
+#define CSR_MHPMEVENT20H		0x734
+#define CSR_MHPMEVENT21H		0x735
+#define CSR_MHPMEVENT22H		0x736
+#define CSR_MHPMEVENT23H		0x737
+#define CSR_MHPMEVENT24H		0x738
+#define CSR_MHPMEVENT25H		0x739
+#define CSR_MHPMEVENT26H		0x73a
+#define CSR_MHPMEVENT27H		0x73b
+#define CSR_MHPMEVENT28H		0x73c
+#define CSR_MHPMEVENT29H		0x73d
+#define CSR_MHPMEVENT30H		0x73e
+#define CSR_MHPMEVENT31H		0x73f
+
+/* Counter Overflow CSR */
+#define CSR_SCOUNTOVF			0xda0
+
+/* Debug/Trace Registers */
+#define CSR_TSELECT			0x7a0
+#define CSR_TDATA1			0x7a1
+#define CSR_TDATA2			0x7a2
+#define CSR_TDATA3			0x7a3
+
+/* Debug Mode Registers */
+#define CSR_DCSR			0x7b0
+#define CSR_DPC				0x7b1
+#define CSR_DSCRATCH0			0x7b2
+#define CSR_DSCRATCH1			0x7b3
+
+/* Machine-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_MISELECT			0x350
+#define CSR_MIREG			0x351
+
+/* Machine-Level Interrupts (AIA) */
+#define CSR_MTOPI			0xfb0
+
+/* Machine-Level IMSIC Interface (AIA) */
+#define CSR_MSETEIPNUM			0x358
+#define CSR_MCLREIPNUM			0x359
+#define CSR_MSETEIENUM			0x35a
+#define CSR_MCLREIENUM			0x35b
+#define CSR_MTOPEI			0x35c
+
+/* Virtual Interrupts for Supervisor Level (AIA) */
+#define CSR_MVIEN			0x308
+#define CSR_MVIP			0x309
+
+/* Smstateen extension registers */
+/* Machine stateen CSRs */
+#define CSR_MSTATEEN0			0x30C
+#define CSR_MSTATEEN0H			0x31C
+#define CSR_MSTATEEN1			0x30D
+#define CSR_MSTATEEN1H			0x31D
+#define CSR_MSTATEEN2			0x30E
+#define CSR_MSTATEEN2H			0x31E
+#define CSR_MSTATEEN3			0x30F
+#define CSR_MSTATEEN3H			0x31F
+
+/* Machine-Level High-Half CSRs (AIA) */
+#define CSR_MIDELEGH			0x313
+#define CSR_MIEH			0x314
+#define CSR_MVIENH			0x318
+#define CSR_MVIPH			0x319
+#define CSR_MIPH			0x354
+
+/* ===== Trap/Exception Causes ===== */
+
+/* Exception cause high bit - is an interrupt if set */
+#define CAUSE_IRQ_FLAG			(_UL(1) << (__riscv_xlen - 1))
+
+#define CAUSE_MISALIGNED_FETCH		0x0
+#define CAUSE_FETCH_ACCESS		0x1
+#define CAUSE_ILLEGAL_INSTRUCTION	0x2
+#define CAUSE_BREAKPOINT		0x3
+#define CAUSE_MISALIGNED_LOAD		0x4
+#define CAUSE_LOAD_ACCESS		0x5
+#define CAUSE_MISALIGNED_STORE		0x6
+#define CAUSE_STORE_ACCESS		0x7
+#define CAUSE_USER_ECALL		0x8
+#define CAUSE_SUPERVISOR_ECALL		0x9
+#define CAUSE_VIRTUAL_SUPERVISOR_ECALL	0xa
+#define CAUSE_MACHINE_ECALL		0xb
+#define CAUSE_FETCH_PAGE_FAULT		0xc
+#define CAUSE_LOAD_PAGE_FAULT		0xd
+#define CAUSE_STORE_PAGE_FAULT		0xf
+#define CAUSE_FETCH_GUEST_PAGE_FAULT	0x14
+#define CAUSE_LOAD_GUEST_PAGE_FAULT	0x15
+#define CAUSE_VIRTUAL_INST_FAULT	0x16
+#define CAUSE_STORE_GUEST_PAGE_FAULT	0x17
+
+/* Common defines for all smstateen */
+#define SMSTATEEN_MAX_COUNT		4
+#define SMSTATEEN0_CS_SHIFT		0
+#define SMSTATEEN0_CS			(_ULL(1) << SMSTATEEN0_CS_SHIFT)
+#define SMSTATEEN0_FCSR_SHIFT		1
+#define SMSTATEEN0_FCSR			(_ULL(1) << SMSTATEEN0_FCSR_SHIFT)
+#define SMSTATEEN0_IMSIC_SHIFT		58
+#define SMSTATEEN0_IMSIC		(_ULL(1) << SMSTATEEN0_IMSIC_SHIFT)
+#define SMSTATEEN0_AIA_SHIFT		59
+#define SMSTATEEN0_AIA			(_ULL(1) << SMSTATEEN0_AIA_SHIFT)
+#define SMSTATEEN0_SVSLCT_SHIFT		60
+#define SMSTATEEN0_SVSLCT		(_ULL(1) << SMSTATEEN0_SVSLCT_SHIFT)
+#define SMSTATEEN0_HSENVCFG_SHIFT	62
+#define SMSTATEEN0_HSENVCFG		(_ULL(1) << SMSTATEEN0_HSENVCFG_SHIFT)
+#define SMSTATEEN_STATEN_SHIFT		63
+#define SMSTATEEN_STATEN		(_ULL(1) << SMSTATEEN_STATEN_SHIFT)
+
+/* ===== Instruction Encodings ===== */
+
+#define INSN_MATCH_LB			0x3
+#define INSN_MASK_LB			0x707f
+#define INSN_MATCH_LH			0x1003
+#define INSN_MASK_LH			0x707f
+#define INSN_MATCH_LW			0x2003
+#define INSN_MASK_LW			0x707f
+#define INSN_MATCH_LD			0x3003
+#define INSN_MASK_LD			0x707f
+#define INSN_MATCH_LBU			0x4003
+#define INSN_MASK_LBU			0x707f
+#define INSN_MATCH_LHU			0x5003
+#define INSN_MASK_LHU			0x707f
+#define INSN_MATCH_LWU			0x6003
+#define INSN_MASK_LWU			0x707f
+#define INSN_MATCH_SB			0x23
+#define INSN_MASK_SB			0x707f
+#define INSN_MATCH_SH			0x1023
+#define INSN_MASK_SH			0x707f
+#define INSN_MATCH_SW			0x2023
+#define INSN_MASK_SW			0x707f
+#define INSN_MATCH_SD			0x3023
+#define INSN_MASK_SD			0x707f
+
+#define INSN_MATCH_FLW			0x2007
+#define INSN_MASK_FLW			0x707f
+#define INSN_MATCH_FLD			0x3007
+#define INSN_MASK_FLD			0x707f
+#define INSN_MATCH_FLQ			0x4007
+#define INSN_MASK_FLQ			0x707f
+#define INSN_MATCH_FSW			0x2027
+#define INSN_MASK_FSW			0x707f
+#define INSN_MATCH_FSD			0x3027
+#define INSN_MASK_FSD			0x707f
+#define INSN_MATCH_FSQ			0x4027
+#define INSN_MASK_FSQ			0x707f
+
+#define INSN_MATCH_C_LD			0x6000
+#define INSN_MASK_C_LD			0xe003
+#define INSN_MATCH_C_SD			0xe000
+#define INSN_MASK_C_SD			0xe003
+#define INSN_MATCH_C_LW			0x4000
+#define INSN_MASK_C_LW			0xe003
+#define INSN_MATCH_C_SW			0xc000
+#define INSN_MASK_C_SW			0xe003
+#define INSN_MATCH_C_LDSP		0x6002
+#define INSN_MASK_C_LDSP		0xe003
+#define INSN_MATCH_C_SDSP		0xe002
+#define INSN_MASK_C_SDSP		0xe003
+#define INSN_MATCH_C_LWSP		0x4002
+#define INSN_MASK_C_LWSP		0xe003
+#define INSN_MATCH_C_SWSP		0xc002
+#define INSN_MASK_C_SWSP		0xe003
+
+#define INSN_MATCH_C_FLD		0x2000
+#define INSN_MASK_C_FLD			0xe003
+#define INSN_MATCH_C_FLW		0x6000
+#define INSN_MASK_C_FLW			0xe003
+#define INSN_MATCH_C_FSD		0xa000
+#define INSN_MASK_C_FSD			0xe003
+#define INSN_MATCH_C_FSW		0xe000
+#define INSN_MASK_C_FSW			0xe003
+#define INSN_MATCH_C_FLDSP		0x2002
+#define INSN_MASK_C_FLDSP		0xe003
+#define INSN_MATCH_C_FSDSP		0xa002
+#define INSN_MASK_C_FSDSP		0xe003
+#define INSN_MATCH_C_FLWSP		0x6002
+#define INSN_MASK_C_FLWSP		0xe003
+#define INSN_MATCH_C_FSWSP		0xe002
+#define INSN_MASK_C_FSWSP		0xe003
+
+#define INSN_MASK_WFI			0xffffff00
+#define INSN_MATCH_WFI			0x10500000
+
+#define INSN_16BIT_MASK			0x3
+#define INSN_32BIT_MASK			0x1c
+
+#define INSN_IS_16BIT(insn)		\
+	(((insn) & INSN_16BIT_MASK) != INSN_16BIT_MASK)
+#define INSN_IS_32BIT(insn)		\
+	(((insn) & INSN_16BIT_MASK) == INSN_16BIT_MASK && \
+	 ((insn) & INSN_32BIT_MASK) != INSN_32BIT_MASK)
+
+#define INSN_LEN(insn)			(INSN_IS_16BIT(insn) ? 2 : 4)
+
+#if __riscv_xlen == 64
+#define LOG_REGBYTES			3
+#else
+#define LOG_REGBYTES			2
+#endif
+#define REGBYTES			(1 << LOG_REGBYTES)
+
+#define SH_RD				7
+#define SH_RS1				15
+#define SH_RS2				20
+#define SH_RS2C				2
+
+#define RV_X(x, s, n)			(((x) >> (s)) & ((1 << (n)) - 1))
+#define RVC_LW_IMM(x)			((RV_X(x, 6, 1) << 2) | \
+					 (RV_X(x, 10, 3) << 3) | \
+					 (RV_X(x, 5, 1) << 6))
+#define RVC_LD_IMM(x)			((RV_X(x, 10, 3) << 3) | \
+					 (RV_X(x, 5, 2) << 6))
+#define RVC_LWSP_IMM(x)			((RV_X(x, 4, 3) << 2) | \
+					 (RV_X(x, 12, 1) << 5) | \
+					 (RV_X(x, 2, 2) << 6))
+#define RVC_LDSP_IMM(x)			((RV_X(x, 5, 2) << 3) | \
+					 (RV_X(x, 12, 1) << 5) | \
+					 (RV_X(x, 2, 3) << 6))
+#define RVC_SWSP_IMM(x)			((RV_X(x, 9, 4) << 2) | \
+					 (RV_X(x, 7, 2) << 6))
+#define RVC_SDSP_IMM(x)			((RV_X(x, 10, 3) << 3) | \
+					 (RV_X(x, 7, 3) << 6))
+#define RVC_RS1S(insn)			(8 + RV_X(insn, SH_RD, 3))
+#define RVC_RS2S(insn)			(8 + RV_X(insn, SH_RS2C, 3))
+#define RVC_RS2(insn)			RV_X(insn, SH_RS2C, 5)
+
+#define SHIFT_RIGHT(x, y)		\
+	((y) < 0 ? ((x) << -(y)) : ((x) >> (y)))
+
+#define REG_MASK			\
+	((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES))
+
+#define REG_OFFSET(insn, pos)		\
+	(SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK)
+
+#define REG_PTR(insn, pos, regs)	\
+	(unsigned long *)((unsigned long)(regs) + REG_OFFSET(insn, pos))
+
+#define GET_RM(insn)			(((insn) >> 12) & 7)
+
+#define GET_RS1(insn, regs)		(*REG_PTR(insn, SH_RS1, regs))
+#define GET_RS2(insn, regs)		(*REG_PTR(insn, SH_RS2, regs))
+#define GET_RS1S(insn, regs)		(*REG_PTR(RVC_RS1S(insn), 0, regs))
+#define GET_RS2S(insn, regs)		(*REG_PTR(RVC_RS2S(insn), 0, regs))
+#define GET_RS2C(insn, regs)		(*REG_PTR(insn, SH_RS2C, regs))
+#define GET_SP(regs)			(*REG_PTR(2, 0, regs))
+#define SET_RD(insn, regs, val)		(*REG_PTR(insn, SH_RD, regs) = (val))
+#define IMM_I(insn)			((s32)(insn) >> 20)
+#define IMM_S(insn)			(((s32)(insn) >> 25 << 5) | \
+					 (s32)(((insn) >> 7) & 0x1f))
+#define MASK_FUNCT3			0x7000
+
+/* clang-format on */
+
+#endif
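
The operand-decode macros near the end of the header (RV_X, RVC_RS1S, RVC_RS2S, the RVC_*_IMM helpers) can be exercised outside Xen. The sketch below copies a few of them verbatim and wraps them in a hypothetical helper, decode_c_lw(), which is not part of the patch; the instruction word 0x41c8 is a hand-assembled `c.lw a0, 4(a1)` chosen for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Copies of the decode macros from the header above, reproduced so
 * the snippet stands alone. */
#define RV_X(x, s, n)        (((x) >> (s)) & ((1 << (n)) - 1))
#define RVC_LW_IMM(x)        ((RV_X(x, 6, 1) << 2) | \
                              (RV_X(x, 10, 3) << 3) | \
                              (RV_X(x, 5, 1) << 6))
#define RVC_RS1S(insn)       (8 + RV_X(insn, 7, 3))
#define RVC_RS2S(insn)       (8 + RV_X(insn, 2, 3))
#define INSN_16BIT_MASK      0x3
#define INSN_IS_16BIT(insn)  (((insn) & INSN_16BIT_MASK) != INSN_16BIT_MASK)
#define INSN_MATCH_C_LW      0x4000
#define INSN_MASK_C_LW       0xe003

/* Hypothetical helper: decode a compressed c.lw into its operands.
 * Returns 0 on success, -1 if insn is not a 16-bit c.lw. */
static int decode_c_lw(uint32_t insn, unsigned int *rs1,
                       unsigned int *rd, unsigned int *imm)
{
    if ( !INSN_IS_16BIT(insn) || (insn & INSN_MASK_C_LW) != INSN_MATCH_C_LW )
        return -1;
    *rs1 = RVC_RS1S(insn);   /* base register: bits [9:7], offset by x8 */
    *rd  = RVC_RS2S(insn);   /* destination:   bits [4:2], offset by x8 */
    *imm = RVC_LW_IMM(insn); /* scaled byte offset */
    return 0;
}
```

For 0x41c8 this yields rs1 = 11 (a1), rd = 10 (a0), imm = 4, matching the CL-format bit placement the macros encode.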
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481857.747061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssH-0007OY-0b; Fri, 20 Jan 2023 15:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481857.747061; Fri, 20 Jan 2023 15:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssG-0007Mq-JO; Fri, 20 Jan 2023 15:00:12 +0000
Received: by outflank-mailman (input) for mailman id 481857;
 Fri, 20 Jan 2023 15:00:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssF-0006SQ-Ir
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:11 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 217d729b-98d3-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:00:08 +0100 (CET)
Received: by mail-wr1-x429.google.com with SMTP id t5so5117397wrq.1
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:08 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 217d729b-98d3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=peTzxyrEbjxaYuy2YOboW/eTigzM3d5RhpXU1HhKiX8=;
        b=Jd7hpmTwX5oJ/BKdTyLWb0yBlwZ8F7pujCsWNO9JiHgCZxJlWyUyeDePAwa2XY3+vu
         STwpHsSNWcQGklILTZCdj55xHBZX6/nZ2zL5+GZ/hpGhueB/PJRBSfUdEOIKSA11/omL
         K9p5u2cVOPc4M2R30GlhGmMCBjeZa/eoXS9bQsK5H0LgpirergUc6ohYFhAR1c4m0vOo
         AEHtIeoU+4JoX/vyK9Kz1kNWnh7qp6CRLpb+y3Kt4rOPP/J0JogAFBEGYFhE7eZNOawG
         Hi1B/a3jjmwUEaO9MH+Uykd9ftDt/sv/aFEgBboCgH47qONf7ye2ccvluvQ3MWB3z0vZ
         KtQQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=peTzxyrEbjxaYuy2YOboW/eTigzM3d5RhpXU1HhKiX8=;
        b=MAVAWAkscLRMKjLA8sOfys0jPE8gC9cjAHoyPEOmp/XV5XrTwzdcLzxTjXwzROYcTy
         zM8LL16UeQqHsfbZHr2OaMizGU17DikLySuFEpxKLHsdMj4+1tJ+Clo+Koc8zW7yUd8W
         jR//immoqf3DpErQtrIwfd0A4ICwSqircPBTQTKuh96EQ0Ch4h1/TROEUipWG6nSytEg
         bHEGAEixQ4B6vkrRoutnezNY7hmLu9G7cOn3RtciAF4D02fXrrrgvhxqt5d8mQQ67hUf
         kxgWPzWEar8ucrjVm3MaG1Vd39d5L8OPnmo1U88Fw/XNGqvNav5Z/XJyq8HtE/Az/CCZ
         rc8A==
X-Gm-Message-State: AFqh2krFUFZV1Gr43LPojNpkqaw6xwBIWLgXH7CZ+AF38bicQ62LRu/m
	VHhJ00rlwCBEA9DMfQk6XAXXg1sEIIxNSg==
X-Google-Smtp-Source: AMrXdXtGbMAHhAk14XqQ7Ux7P65mZKKBUsGBFWK80ms59XKwMpAQY3POn4NprlX0ImLXn6DEHIgMhA==
X-Received: by 2002:adf:f18e:0:b0:2bd:e8bd:79ce with SMTP id h14-20020adff18e000000b002bde8bd79cemr12731524wro.20.1674226808038;
        Fri, 20 Jan 2023 07:00:08 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 05/14] xen/riscv: add early_printk_hnum() function
Date: Fri, 20 Jan 2023 16:59:45 +0200
Message-Id: <633ced21788a3abf5079c9a191794616bb1ad351.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the ability to print a hex number.
It can be useful for printing register values as debug information
in BUG(), WARN(), etc.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/early_printk.c             | 39 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h |  2 ++
 2 files changed, 41 insertions(+)

diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
index 6f590e712b..876d022dd6 100644
--- a/xen/arch/riscv/early_printk.c
+++ b/xen/arch/riscv/early_printk.c
@@ -43,3 +43,42 @@ void early_printk(const char *str)
         str++;
     }
 }
+
+static void reverse(char *s, int length)
+{
+    int c;
+    char *begin, *end, temp;
+
+    begin  = s;
+    end    = s + length - 1;
+
+    for ( c = 0; c < length/2; c++ )
+    {
+        temp   = *end;
+        *end   = *begin;
+        *begin = temp;
+
+        begin++;
+        end--;
+    }
+}
+
+void early_printk_hnum(const register_t reg_val)
+{
+    char hex[] = "0123456789ABCDEF";
+    char buf[17] = {0};
+
+    register_t num = reg_val;
+    unsigned int count = 0;
+
+    for ( count = 0; num != 0 || count == 0; count++, num >>= 4 )
+        buf[count] = hex[num & 0x0000000f];
+
+    buf[count] = '\0';
+
+    reverse(buf, count);
+
+    early_printk("0x");
+    early_printk(buf);
+    early_printk("\n");
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
index 05106e160d..f6d7580eb0 100644
--- a/xen/arch/riscv/include/asm/early_printk.h
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -5,8 +5,10 @@
 
 #ifdef CONFIG_EARLY_PRINTK
 void early_printk(const char *str);
+void early_printk_hnum(const register_t reg_val);
 #else
 static inline void early_printk(const char *s) {};
+static inline void early_printk_hnum(const register_t reg_val) {};
 #endif
 
 #endif /* __EARLY_PRINTK_H__ */
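
early_printk_hnum() builds the string least-significant nibble first and then reverses it in place. A minimal host-side sketch of the same approach follows; the name to_hex and the u64 typedef are stand-ins (register_t and the Xen helpers are not available outside the tree), and the do/while form guarantees at least one digit so a zero value prints as "0":

```c
#include <assert.h>

/* "u64" stands in for register_t in this host-side sketch. */
typedef unsigned long long u64;

static void to_hex(u64 num, char *buf /* at least 17 bytes */)
{
    static const char hex[] = "0123456789ABCDEF";
    unsigned int count = 0, i;

    /* Emit nibbles least-significant first; do/while ensures at
     * least one digit is produced even when num is zero. */
    do {
        buf[count++] = hex[num & 0xf];
        num >>= 4;
    } while ( num != 0 );

    buf[count] = '\0';

    /* Reverse in place, mirroring the reverse() helper in the patch. */
    for ( i = 0; i < count / 2; i++ )
    {
        char t = buf[i];
        buf[i] = buf[count - 1 - i];
        buf[count - 1 - i] = t;
    }
}
```

Calling to_hex(0x1A2B, buf) fills buf with "1A2B"; a 64-bit value uses at most 16 digits plus the terminator, hence the 17-byte buffer.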
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481860.747095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssK-0008Lo-6R; Fri, 20 Jan 2023 15:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481860.747095; Fri, 20 Jan 2023 15:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssJ-0008Jr-Qf; Fri, 20 Jan 2023 15:00:15 +0000
Received: by outflank-mailman (input) for mailman id 481860;
 Fri, 20 Jan 2023 15:00:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssH-0006Kg-6z
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:13 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 23c0fd10-98d3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 16:00:12 +0100 (CET)
Received: by mail-wr1-x42d.google.com with SMTP id d14so1414141wrr.9
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:12 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23c0fd10-98d3-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZjpYBpBo3Rvm4msdSTsMXPTvqTVEFs1X4iKpD2JoZQU=;
        b=lbAg5dv6suGwRBTCKXEyvvFajHOIaWRpUJEdSPJuBfhaq5FSurrZnxrfpX0gtdoSPI
         VS4Dkwg2tRGxZcE90N9W6vilda/AuaWcGR3JgMerMjXUrtt6uuW30q3KFryoQ4dj2haZ
         6Rf2JP0E3IKNt42KJbRGSH0/vZQikQPHca+kSAfGBm9wcH9maOSvGzIZvFEwTIuBJNi5
         7U+CdWG9BtInXKRKGS68hkZ/7JhtQ1TDCXY0rYE8mKICBzzPRDjxByt/xnSKZjdYD3ry
         WB9sWCPWegEhRiN1bhzioo2PcRWmXkfGzaevgljYO2SvFP0V0ar5+hr3DpAU5hcfgWjA
         3OfQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ZjpYBpBo3Rvm4msdSTsMXPTvqTVEFs1X4iKpD2JoZQU=;
        b=q6KVNpPnMkt7pbLyoSwblskdOaMnPfKkNbKXdLYAGV0rJvA8Eqx/55CXOfLjd8srBc
         oGaqpm+M133+x69a7FwgOH6pWmvHXIma0yxMan0pWnEj+tjqv8JCIDbYYX9mZF0jgh8q
         NyEhqCWuLKgbvSZudPMnfCbF4k17i5jl3HqNbmn9KNflVGcDUHCpiMX4zvogwgu0yS2i
         AMDKDClNJKFft0AEDp4VhCfmFFR+WzdkhDCfSdk5p0bK/sZerSZ0HobgVAenXEBkKzvp
         2+hFXNk3pwgvpcV+WjvvC8VNfAyEJdAmROEeQTARPPuFMi4T20Cw8gygeZH7a9CJYb9j
         ztew==
X-Gm-Message-State: AFqh2kryi7WDcO2chgW3IOf55OGAATpNnXci9gpzdRwFLeeTCeTSPCBt
	ZvkY/N6eEQFd39lzKP3IdWYGVwO+HHmusw==
X-Google-Smtp-Source: AMrXdXteAT6I9N77L+ttW3wX8XDgO3yTGlXjVcwDsz4APn1u6Fh217pSeg8S9yJgMugS9zwgTeUbJw==
X-Received: by 2002:adf:f20d:0:b0:2bd:f549:e66 with SMTP id p13-20020adff20d000000b002bdf5490e66mr12883813wro.63.1674226811821;
        Fri, 20 Jan 2023 07:00:11 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 09/14] xen/riscv: introduce do_unexpected_trap()
Date: Fri, 20 Jan 2023 16:59:49 +0200
Message-Id: <74ca10d9be1dfc3aed4b3b21a79eae88c9df26a4.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces a function whose purpose is to print the cause of
an exception and then execute the "wfi" instruction.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/traps.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index dd64f053a5..fc25138a4b 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -95,7 +95,19 @@ const char *decode_cause(unsigned long cause)
     return decode_trap_cause(cause);
 }
 
-void __handle_exception(struct cpu_user_regs *cpu_regs)
+static void do_unexpected_trap(const struct cpu_user_regs *regs)
 {
+    unsigned long cause = csr_read(CSR_SCAUSE);
+
+    early_printk("Unhandled exception: ");
+    early_printk(decode_cause(cause));
+    early_printk("\n");
+
+    /* Effectively die: park the CPU, since panic() is not available yet. */
     wait_for_interrupt();
 }
+
+void __handle_exception(struct cpu_user_regs *cpu_regs)
+{
+    do_unexpected_trap(cpu_regs);
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481859.747073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssI-0007g7-0A; Fri, 20 Jan 2023 15:00:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481859.747073; Fri, 20 Jan 2023 15:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssH-0007bV-MP; Fri, 20 Jan 2023 15:00:13 +0000
Received: by outflank-mailman (input) for mailman id 481859;
 Fri, 20 Jan 2023 15:00:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssG-0006SQ-Db
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:12 +0000
Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com
 [2a00:1450:4864:20::435])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2206a693-98d3-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:00:09 +0100 (CET)
Received: by mail-wr1-x435.google.com with SMTP id bk16so5076185wrb.11
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:09 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2206a693-98d3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=yh7/N5u6kGLdrdBWuFmNK/JJM/RYHrBASpUY9ocixBU=;
        b=EYKVmJZyk9pfZpdYN5vCWCoeQodKHAjnyKnp65fqyC8y5bDh7TmXHloKlP3dUzEsBP
         3Z1AvSU720OYJPhG68ZwHaX6RJIYr4MpZstHXVoCufieOinjFYjDZqTpBJd5Hr64KNwS
         3l4x9bA9mIs/1BkotUu8VNigdx/ohxx+xVxOtBDrqr4HDPASlsrR9mXKGWRmiQ7DwMrA
         LLBjSitBt7Y431s8t0ItalMwKNG3nkGmc7aS86btMLd71eymJPDZkjjSRI3ugdo06FeJ
         jykGruWn57i+ZzMzAQZe3z7z8j8EWLXKANE57YFlUo8ulZi0qD2twvwvjH3qEav19NO6
         zYDQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=yh7/N5u6kGLdrdBWuFmNK/JJM/RYHrBASpUY9ocixBU=;
        b=wNkteBZGMC8XYLZHn8ul1qp+tKecU/NTxKxB2kNGuwV2HJrcQ2Eju6IH5ziliPnzh+
         9W1XyT0441GvqVOMsik5be/HqdRceBzPIM3xTqpKWiMrGemix0h/Z9KwRFoHfXCkvNg9
         YOZADPCGq18lOMY31uRJ77vfmg+GEX7S4Ful+M16X4zcayEGGbdf5xlnXDtVWf84kU23
         vsKm08uMFUU4EQ8A73yY24WzzQx53ZlIK8l8YRMaABW0v84SWOOxEu1LMCGCd/xrJFK8
         KAWCBMtuXH5KUsA8WgzGUvxNVjZ4G2fM2UZ3JxSplKGmj1BYaIZgPCjyx2t0m4RmcvD3
         Ql/Q==
X-Gm-Message-State: AFqh2kqkRd519ISfG9iPCrCEaZ6Ys1Phtrdn4LYEdS5bd1kdZYt3/Ip8
	j9lUnNJGzXwTXTYE8aY202If5mOOjS15HQ==
X-Google-Smtp-Source: AMrXdXtbpO2/pOVAqMv8ULwz04KaLGgKtUYk8aqM7v0wkBRAErj9hxGMMM7REUO1/uplSGptgrqliw==
X-Received: by 2002:adf:e881:0:b0:26a:6e7d:5782 with SMTP id d1-20020adfe881000000b0026a6e7d5782mr13254827wrm.35.1674226808903;
        Fri, 20 Jan 2023 07:00:08 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v1 06/14] xen/riscv: introduce exception context
Date: Fri, 20 Jan 2023 16:59:46 +0200
Message-Id: <00ecc26833738377003ad21603c198ae4278cfd3.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the set of registers that should be saved to and
restored from the stack when an exception occurs, together with a set
of defines used while saving/restoring the exception context.

The <asm/processor.h> header was originally introduced in Bobby's
patch series; it is mostly reused here, with the parts that are not
needed now removed.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/include/asm/processor.h | 114 +++++++++++++++++++++++++
 1 file changed, 114 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/processor.h

diff --git a/xen/arch/riscv/include/asm/processor.h b/xen/arch/riscv/include/asm/processor.h
new file mode 100644
index 0000000000..5898a09ce6
--- /dev/null
+++ b/xen/arch/riscv/include/asm/processor.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: MIT */
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@gmail.com>
+ * Copyright 2023 (C) Vates
+ *
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#include <asm/types.h>
+
+#define RISCV_CPU_USER_REGS_zero        0
+#define RISCV_CPU_USER_REGS_ra          1
+#define RISCV_CPU_USER_REGS_sp          2
+#define RISCV_CPU_USER_REGS_gp          3
+#define RISCV_CPU_USER_REGS_tp          4
+#define RISCV_CPU_USER_REGS_t0          5
+#define RISCV_CPU_USER_REGS_t1          6
+#define RISCV_CPU_USER_REGS_t2          7
+#define RISCV_CPU_USER_REGS_s0          8
+#define RISCV_CPU_USER_REGS_s1          9
+#define RISCV_CPU_USER_REGS_a0          10
+#define RISCV_CPU_USER_REGS_a1          11
+#define RISCV_CPU_USER_REGS_a2          12
+#define RISCV_CPU_USER_REGS_a3          13
+#define RISCV_CPU_USER_REGS_a4          14
+#define RISCV_CPU_USER_REGS_a5          15
+#define RISCV_CPU_USER_REGS_a6          16
+#define RISCV_CPU_USER_REGS_a7          17
+#define RISCV_CPU_USER_REGS_s2          18
+#define RISCV_CPU_USER_REGS_s3          19
+#define RISCV_CPU_USER_REGS_s4          20
+#define RISCV_CPU_USER_REGS_s5          21
+#define RISCV_CPU_USER_REGS_s6          22
+#define RISCV_CPU_USER_REGS_s7          23
+#define RISCV_CPU_USER_REGS_s8          24
+#define RISCV_CPU_USER_REGS_s9          25
+#define RISCV_CPU_USER_REGS_s10         26
+#define RISCV_CPU_USER_REGS_s11         27
+#define RISCV_CPU_USER_REGS_t3          28
+#define RISCV_CPU_USER_REGS_t4          29
+#define RISCV_CPU_USER_REGS_t5          30
+#define RISCV_CPU_USER_REGS_t6          31
+#define RISCV_CPU_USER_REGS_sepc        32
+#define RISCV_CPU_USER_REGS_sstatus     33
+#define RISCV_CPU_USER_REGS_pregs       34
+#define RISCV_CPU_USER_REGS_last        35
+
+#define RISCV_CPU_USER_REGS_OFFSET(x)   ((RISCV_CPU_USER_REGS_##x) * __SIZEOF_POINTER__)
+#define RISCV_CPU_USER_REGS_SIZE        RISCV_CPU_USER_REGS_OFFSET(last)
+
+#ifndef __ASSEMBLY__
+
+/* On stack VCPU state */
+struct cpu_user_regs
+{
+    register_t zero;
+    register_t ra;
+    register_t sp;
+    register_t gp;
+    register_t tp;
+    register_t t0;
+    register_t t1;
+    register_t t2;
+    register_t s0;
+    register_t s1;
+    register_t a0;
+    register_t a1;
+    register_t a2;
+    register_t a3;
+    register_t a4;
+    register_t a5;
+    register_t a6;
+    register_t a7;
+    register_t s2;
+    register_t s3;
+    register_t s4;
+    register_t s5;
+    register_t s6;
+    register_t s7;
+    register_t s8;
+    register_t s9;
+    register_t s10;
+    register_t s11;
+    register_t t3;
+    register_t t4;
+    register_t t5;
+    register_t t6;
+    register_t sepc;
+    register_t sstatus;
+    /* pointer to previous stack_cpu_regs */
+    register_t pregs;
+};
+
+static inline void wait_for_interrupt(void)
+{
+    __asm__ __volatile__ ("wfi");
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481856.747057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssG-0007Ky-Ij; Fri, 20 Jan 2023 15:00:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481856.747057; Fri, 20 Jan 2023 15:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssG-0007JX-8u; Fri, 20 Jan 2023 15:00:12 +0000
Received: by outflank-mailman (input) for mailman id 481856;
 Fri, 20 Jan 2023 15:00:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssF-0006SQ-4y
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:11 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2113fc1c-98d3-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:00:08 +0100 (CET)
Received: by mail-wr1-x432.google.com with SMTP id bk16so5076090wrb.11
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:08 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2113fc1c-98d3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=opQz8gCT9iZSR+WTMArSNrtKt78jXvFW2kTnX4DfyVc=;
        b=WOtan2LjZ2jfOwtlK8bLlMIwol9pKEn7hPN/TZFTpNHBzRMlBhKC/czgmoEFpzncDu
         a5FlIBHjy+DeXabFmAxMAg7CtIy5EByiam7FBsv39XjVD8cb7u0nIN4pNA4wbCf0D0PV
         g+NbtZqC3xroIZIMYI8RFEQcHIwZH9TVGweOOeJ826qcdzVBo5xCdPmvpyuTqxGizpUA
         ThLdu/eAllPPuDi9esG5XNywIvo/bdwo3+Xf2lGcK9a1UpKSKgz6FqFG/oM+aQlf2ori
         tc+rsxpzrXes37X+12Md2eQ35jYikvimz0gFyhA99DPEeh/EXCnABrEvdFtYzi9q/GPN
         yjGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=opQz8gCT9iZSR+WTMArSNrtKt78jXvFW2kTnX4DfyVc=;
        b=gg7KpFLJ5vd+XG/Yr4aV/dRS479cI+bmd8f9awMrctzaSYr0zetqvGPy4HfTR4Fbkl
         rgcvGLQTk8R3dUvkPpzyRDR9X2hq3hfY36J3Yxp9DAM3xTgmPuBiA0DLXL3UXktqDg4X
         upHrB3IPa6OLYF7ApU+M8hQnWOe4/RmjKV+PRPJQTl14VUaUf5k89k2EF8emnutonfgF
         U90BPXPvRpVoMbkovhyahS940ZC1tJ1lcmuaM5j7lphP3dRyfLP/3wFEt462nRqSYyE6
         G8egU7F9bd/c8GayagvND93MgZzW5ge5EKVgHdU/smfn6LuffTURyzJS0Vt/vnK9oG9n
         o0Sw==
X-Gm-Message-State: AFqh2kqWME1KAJ9y+cuqdTNoIVQMjcT7zGXXMkGfYLUeXqbCePfjo4N7
	SvYDDtRda93LqZtk5m8QCW2CzaMy0zxL9w==
X-Google-Smtp-Source: AMrXdXuScvj5uQgGPJ3XTwgT2aMxFJsoyM1b85V3S9fi5MqBWbFJc4rQb6iymVthAIGhVLhFI2P7IQ==
X-Received: by 2002:a05:6000:10d:b0:2be:bfc5:c2ef with SMTP id o13-20020a056000010d00b002bebfc5c2efmr1738675wrx.49.1674226807202;
        Fri, 20 Jan 2023 07:00:07 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 04/14] xen/riscv: add <asm/csr.h> header
Date: Fri, 20 Jan 2023 16:59:44 +0200
Message-Id: <afc53b9bee58b5d386f105ee8f23a411d5a15bed.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/include/asm/csr.h | 82 ++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/csr.h

diff --git a/xen/arch/riscv/include/asm/csr.h b/xen/arch/riscv/include/asm/csr.h
new file mode 100644
index 0000000000..1a879c6c4d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/csr.h
@@ -0,0 +1,82 @@
+/*
+ * Taken from Linux.
+ *
+ * SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2015 Regents of the University of California
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <asm/asm.h>
+#include <xen/const.h>
+#include <asm/riscv_encoding.h>
+
+#ifndef __ASSEMBLY__
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " __ASM_STR(csr)	\
+			      : "=r" (__v) :			\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+/*
+#define csr_swap(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_read_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#define csr_read_clear(csr, val)				\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+*/
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481853.747030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssC-0006kC-Ff; Fri, 20 Jan 2023 15:00:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481853.747030; Fri, 20 Jan 2023 15:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssC-0006jo-BT; Fri, 20 Jan 2023 15:00:08 +0000
Received: by outflank-mailman (input) for mailman id 481853;
 Fri, 20 Jan 2023 15:00:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssB-0006Kg-5O
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:07 +0000
Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com
 [2a00:1450:4864:20::435])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1f2cbeb6-98d3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 16:00:04 +0100 (CET)
Received: by mail-wr1-x435.google.com with SMTP id n7so5102531wrx.5
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:04 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f2cbeb6-98d3-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=b4D+uu+k0+I2IfYB1s/wnaEQNVajcaJWUGoB5Co8S0g=;
        b=o5aqMMLDum3iAFf5vYbbWfjS3IKHcX4+HgWUaNDwB9aygQPMaYFjM6xG8VvrZeqiT5
         juFi3kb3HA5KXAKDHNG8J+Uqm/F6qXJzcthIQ8y/2+j7vrHD1pp5joMCWfOJ5clyKPpM
         NTPqijSKBflLYJ8NxFcJIXPhVa5Wgltmud0qFKdQYoM+bbPSbSMsQZave4gX68JEjoGX
         MJ0tHDtcAiiz998oqSw9ZeXyWLdnOHXB1oU0n9w0EMptfkIgyN5xGGVWnwZ6wZvTKqTJ
         j8g4UDRkbnnUMts9PWJwnpZGZ9zZIs0dxhHRN0O+8BKApyINsi1WSZ3uNKEIsCWzpwU7
         AELQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=b4D+uu+k0+I2IfYB1s/wnaEQNVajcaJWUGoB5Co8S0g=;
        b=ckKg0OZokC7uQFGXYDKqJOBL2mHdc7yr1Q9ALZNu7/pqSW7gzX0/byvE+jtq2KAvz/
         yuqMhJwmMIp/NgibSdVMq7WnF1iDBHMcg/V+RSbTjfWRcg7TL5UHgDI1LUBvYTkm9oy3
         GjGX3kH6+VUnzKxwqO+YX/QfwWu7VHj61x02XT74ROK6+qBZn5zs5T7+GlfoE59kwSgR
         H+2siP1vRMjBVXcHF/KBAR7Omb1uqSI159dEPHzL4FPmAJ22LpHM8iuFw7vCnMplzles
         Nqek76W2gk3I89+Fm41gQOYeViEM6F1DIOp6VQoZ1G8ABgpqyrh9jp/REM4y9zyhmbdj
         KAQQ==
X-Gm-Message-State: AFqh2kqdS1JCKesJQyiPvzL+ZQzBwC4LJakjASE0rEtPbMgGagEkUtqA
	r4KbC/Zi5K5VWoLhlU+x3pxXyZ6z5U95Zg==
X-Google-Smtp-Source: AMrXdXvl8I9pwLlW83BJzfK/DNqKxEl8Cj8Lmkhd89JGrPriE66dgOepOsyIAke/nyXgFQKXXwwqFg==
X-Received: by 2002:a05:6000:1c6:b0:2bd:e40d:98d5 with SMTP id t6-20020a05600001c600b002bde40d98d5mr11462585wrx.0.1674226802982;
        Fri, 20 Jan 2023 07:00:02 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v1 00/14] RISCV basic exception handling implementation
Date: Fri, 20 Jan 2023 16:59:40 +0200
Message-Id: <cover.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series is based on another one [Basic early_printk and smoke
test implementation] which hasn't been committed yet.

The patch series provides a basic implementation of exception handling.
It covers only the basics: decoding the cause of an exception,
saving/restoring registers, and executing the "wfi" instruction when an
exception cannot be handled.

To verify that exception handling works, the macros from <asm/bug.h>
(BUG/WARN/run_in_exception/assert_failed) were implemented. The
implementation of the macros uses the "ebreak" instruction and sets up
a bug frame table for each type of macro.
Register save/restore was also implemented so that execution can return
and continue after WARN/run_in_exception.
Not all of the macros' functionality was implemented, as some of it
requires hard-panicking the system, which is not available yet. The
'wfi' instruction is used instead of a hard panic, but this should
definitely be changed in the near future.
show_execution_state() and stack trace discovery were not implemented
as they are not necessary for now.

Oleksii Kurochko (14):
  xen/riscv: add _zicsr to CFLAGS
  xen/riscv: add <asm/asm.h> header
  xen/riscv: add <asm/riscv_encoding.h> header
  xen/riscv: add <asm/csr.h> header
  xen/riscv: add early_printk_hnum() function
  xen/riscv: introduce exception context
  xen/riscv: introduce exception handlers implementation
  xen/riscv: introduce decode_cause() stuff
  xen/riscv: introduce do_unexpected_trap()
  xen/riscv: mask all interrupts
  xen/riscv: introduce setup_trap_handler()
  xen/riscv: introduce an implementation of macros from <asm/bug.h>
  xen/riscv: test basic handling stuff
  automation: add smoke test to verify macros from bug.h

 automation/scripts/qemu-smoke-riscv64.sh    |   2 +
 xen/arch/riscv/Makefile                     |   2 +
 xen/arch/riscv/arch.mk                      |   2 +-
 xen/arch/riscv/early_printk.c               |  39 +
 xen/arch/riscv/entry.S                      |  97 ++
 xen/arch/riscv/include/asm/asm.h            |  54 ++
 xen/arch/riscv/include/asm/bug.h            | 120 +++
 xen/arch/riscv/include/asm/csr.h            |  82 ++
 xen/arch/riscv/include/asm/early_printk.h   |   2 +
 xen/arch/riscv/include/asm/processor.h      | 114 +++
 xen/arch/riscv/include/asm/riscv_encoding.h | 945 ++++++++++++++++++++
 xen/arch/riscv/include/asm/traps.h          |  13 +
 xen/arch/riscv/riscv64/head.S               |   5 +
 xen/arch/riscv/setup.c                      |  27 +
 xen/arch/riscv/traps.c                      | 229 +++++
 xen/arch/riscv/xen.lds.S                    |  10 +
 16 files changed, 1742 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/entry.S
 create mode 100644 xen/arch/riscv/include/asm/asm.h
 create mode 100644 xen/arch/riscv/include/asm/bug.h
 create mode 100644 xen/arch/riscv/include/asm/csr.h
 create mode 100644 xen/arch/riscv/include/asm/processor.h
 create mode 100644 xen/arch/riscv/include/asm/riscv_encoding.h
 create mode 100644 xen/arch/riscv/include/asm/traps.h
 create mode 100644 xen/arch/riscv/traps.c

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481858.747067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssH-0007X6-BI; Fri, 20 Jan 2023 15:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481858.747067; Fri, 20 Jan 2023 15:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssH-0007T6-2B; Fri, 20 Jan 2023 15:00:13 +0000
Received: by outflank-mailman (input) for mailman id 481858;
 Fri, 20 Jan 2023 15:00:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssF-0006Kg-GE
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:11 +0000
Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com
 [2a00:1450:4864:20::436])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 229ae9d3-98d3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 16:00:10 +0100 (CET)
Received: by mail-wr1-x436.google.com with SMTP id r2so5090823wrv.7
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:10 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 229ae9d3-98d3-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=5On9Ljb8od/hZFXAUMkhhR6b6lBaYm3tiocvPuOanJo=;
        b=pDhH7j2TPsx7enpELQT3c9yjlF6JakH1voO/6HzAr1AnpbCtuB8xtmohCX50LHYd9J
         nShRQkrBDx+z9L74wDnzNvtardP8lLtdWWoTWe5Cz5EfvvO351iXoI0BYmOlFy2s8v5p
         oeKTSapJuc/lEDWDsDapTg9Lbdzo/oyKIVCKpVoDGcba+L0fVGP8heeV0G4sAGW3D8k9
         wmoVO57V3yLUJAJz89DupuOhmL5fnyId+d7JRVU+UvvNqKeoRqD/mRwaum/L9Fq2WwSX
         oGlz8Z/Q5HuXbU6t0I6X2KIZlcb7dftlKsyxlHrSQX2oVaweZvF0Xhf5cnvdaA7k5bTV
         ZHiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=5On9Ljb8od/hZFXAUMkhhR6b6lBaYm3tiocvPuOanJo=;
        b=ggVBx6IaRBXONhtjZgaVRXh88xF3V7gvKGRfpa6eIwtVjJymBEmERGeZ/+mncBa4Ib
         ChzPZw9IxOp0qiQ2/yDKT7VmyoYTWaB/5lF3pCaej58k+K0J0pnOVrkAvG+Y5t2Yf/gZ
         /6l6tklzzBqK6kpdw82ejT6txn2iT03fUdsCC54oUblozumPJAUuXIMD2pUMdAVa+zGT
         uiSznHH4+/BPJIeBxg92vij/7qrGGcnfftYAe135eLQhx+LwpAKndUoRXyZ+EemcOfKr
         uT5iU2T9fh29MyPHbree2xIRDLM0+KwjuRBjMfVgbjjBWlmutggARv2g9aBlB2G+InAn
         o65w==
X-Gm-Message-State: AFqh2kquFLtOKbIiNJGeUb/DyXeOtbRJh5S3gEZm14y8FErZKivFG6Tw
	RAr9Da/cK8r8QnQ03EO9CDWs4sN48SH5zw==
X-Google-Smtp-Source: AMrXdXu4zBsqjxfCKndCaQ8TRBozx/7/N8JgZeecljxPkjZZzt4PdmgjucuapdmrIiLbLvapLMX2HA==
X-Received: by 2002:adf:eb43:0:b0:2bd:d542:e010 with SMTP id u3-20020adfeb43000000b002bdd542e010mr12250630wrn.46.1674226809884;
        Fri, 20 Jan 2023 07:00:09 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 07/14] xen/riscv: introduce exception handlers implementation
Date: Fri, 20 Jan 2023 16:59:47 +0200
Message-Id: <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces an implementation of the basic exception handlers:
- to save/restore the context
- to handle the exception itself. For now the handler only calls
  wait_for_interrupt(), nothing more.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/Makefile            |  2 +
 xen/arch/riscv/entry.S             | 97 ++++++++++++++++++++++++++++++
 xen/arch/riscv/include/asm/traps.h | 13 ++++
 xen/arch/riscv/traps.c             | 13 ++++
 4 files changed, 125 insertions(+)
 create mode 100644 xen/arch/riscv/entry.S
 create mode 100644 xen/arch/riscv/include/asm/traps.h
 create mode 100644 xen/arch/riscv/traps.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 1a4f1a6015..443f6bf15f 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,7 +1,9 @@
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-y += entry.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
+obj-y += traps.o
 
 $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
diff --git a/xen/arch/riscv/entry.S b/xen/arch/riscv/entry.S
new file mode 100644
index 0000000000..f7d46f42bb
--- /dev/null
+++ b/xen/arch/riscv/entry.S
@@ -0,0 +1,97 @@
+#include <asm/asm.h>
+#include <asm/processor.h>
+#include <asm/riscv_encoding.h>
+#include <asm/traps.h>
+
+        .global handle_exception
+        .align 4
+
+handle_exception:
+
+    /* Exceptions from xen */
+save_to_stack:
+        /* Save context to stack */
+        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) - RISCV_CPU_USER_REGS_SIZE) (sp)
+        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
+        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
+        j       save_context
+
+save_context:
+        /* Save registers */
+        REG_S   ra, RISCV_CPU_USER_REGS_OFFSET(ra)(sp)
+        REG_S   gp, RISCV_CPU_USER_REGS_OFFSET(gp)(sp)
+        REG_S   t1, RISCV_CPU_USER_REGS_OFFSET(t1)(sp)
+        REG_S   t2, RISCV_CPU_USER_REGS_OFFSET(t2)(sp)
+        REG_S   s0, RISCV_CPU_USER_REGS_OFFSET(s0)(sp)
+        REG_S   s1, RISCV_CPU_USER_REGS_OFFSET(s1)(sp)
+        REG_S   a0, RISCV_CPU_USER_REGS_OFFSET(a0)(sp)
+        REG_S   a1, RISCV_CPU_USER_REGS_OFFSET(a1)(sp)
+        REG_S   a2, RISCV_CPU_USER_REGS_OFFSET(a2)(sp)
+        REG_S   a3, RISCV_CPU_USER_REGS_OFFSET(a3)(sp)
+        REG_S   a4, RISCV_CPU_USER_REGS_OFFSET(a4)(sp)
+        REG_S   a5, RISCV_CPU_USER_REGS_OFFSET(a5)(sp)
+        REG_S   a6, RISCV_CPU_USER_REGS_OFFSET(a6)(sp)
+        REG_S   a7, RISCV_CPU_USER_REGS_OFFSET(a7)(sp)
+        REG_S   s2, RISCV_CPU_USER_REGS_OFFSET(s2)(sp)
+        REG_S   s3, RISCV_CPU_USER_REGS_OFFSET(s3)(sp)
+        REG_S   s4, RISCV_CPU_USER_REGS_OFFSET(s4)(sp)
+        REG_S   s5, RISCV_CPU_USER_REGS_OFFSET(s5)(sp)
+        REG_S   s6, RISCV_CPU_USER_REGS_OFFSET(s6)(sp)
+        REG_S   s7, RISCV_CPU_USER_REGS_OFFSET(s7)(sp)
+        REG_S   s8, RISCV_CPU_USER_REGS_OFFSET(s8)(sp)
+        REG_S   s9, RISCV_CPU_USER_REGS_OFFSET(s9)(sp)
+        REG_S   s10, RISCV_CPU_USER_REGS_OFFSET(s10)(sp)
+        REG_S   s11, RISCV_CPU_USER_REGS_OFFSET(s11)(sp)
+        REG_S   t3, RISCV_CPU_USER_REGS_OFFSET(t3)(sp)
+        REG_S   t4, RISCV_CPU_USER_REGS_OFFSET(t4)(sp)
+        REG_S   t5, RISCV_CPU_USER_REGS_OFFSET(t5)(sp)
+        REG_S   t6, RISCV_CPU_USER_REGS_OFFSET(t6)(sp)
+        csrr    t0, CSR_SEPC
+        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sepc)(sp)
+        csrr    t0, CSR_SSTATUS
+        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sstatus)(sp)
+
+        mv      a0, sp
+        jal     __handle_exception
+
+restore_registers:
+        /* Restore stack_cpu_regs */
+        REG_L   t0, RISCV_CPU_USER_REGS_OFFSET(sepc)(sp)
+        csrw    CSR_SEPC, t0
+        REG_L   t0, RISCV_CPU_USER_REGS_OFFSET(sstatus)(sp)
+        csrw    CSR_SSTATUS, t0
+
+        REG_L   ra, RISCV_CPU_USER_REGS_OFFSET(ra)(sp)
+        REG_L   gp, RISCV_CPU_USER_REGS_OFFSET(gp)(sp)
+        REG_L   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
+        REG_L   t1, RISCV_CPU_USER_REGS_OFFSET(t1)(sp)
+        REG_L   t2, RISCV_CPU_USER_REGS_OFFSET(t2)(sp)
+        REG_L   s0, RISCV_CPU_USER_REGS_OFFSET(s0)(sp)
+        REG_L   s1, RISCV_CPU_USER_REGS_OFFSET(s1)(sp)
+        REG_L   a0, RISCV_CPU_USER_REGS_OFFSET(a0)(sp)
+        REG_L   a1, RISCV_CPU_USER_REGS_OFFSET(a1)(sp)
+        REG_L   a2, RISCV_CPU_USER_REGS_OFFSET(a2)(sp)
+        REG_L   a3, RISCV_CPU_USER_REGS_OFFSET(a3)(sp)
+        REG_L   a4, RISCV_CPU_USER_REGS_OFFSET(a4)(sp)
+        REG_L   a5, RISCV_CPU_USER_REGS_OFFSET(a5)(sp)
+        REG_L   a6, RISCV_CPU_USER_REGS_OFFSET(a6)(sp)
+        REG_L   a7, RISCV_CPU_USER_REGS_OFFSET(a7)(sp)
+        REG_L   s2, RISCV_CPU_USER_REGS_OFFSET(s2)(sp)
+        REG_L   s3, RISCV_CPU_USER_REGS_OFFSET(s3)(sp)
+        REG_L   s4, RISCV_CPU_USER_REGS_OFFSET(s4)(sp)
+        REG_L   s5, RISCV_CPU_USER_REGS_OFFSET(s5)(sp)
+        REG_L   s6, RISCV_CPU_USER_REGS_OFFSET(s6)(sp)
+        REG_L   s7, RISCV_CPU_USER_REGS_OFFSET(s7)(sp)
+        REG_L   s8, RISCV_CPU_USER_REGS_OFFSET(s8)(sp)
+        REG_L   s9, RISCV_CPU_USER_REGS_OFFSET(s9)(sp)
+        REG_L   s10, RISCV_CPU_USER_REGS_OFFSET(s10)(sp)
+        REG_L   s11, RISCV_CPU_USER_REGS_OFFSET(s11)(sp)
+        REG_L   t3, RISCV_CPU_USER_REGS_OFFSET(t3)(sp)
+        REG_L   t4, RISCV_CPU_USER_REGS_OFFSET(t4)(sp)
+        REG_L   t5, RISCV_CPU_USER_REGS_OFFSET(t5)(sp)
+        REG_L   t6, RISCV_CPU_USER_REGS_OFFSET(t6)(sp)
+
+        /* Restore sp */
+        REG_L   sp, RISCV_CPU_USER_REGS_OFFSET(sp)(sp)
+
+        sret
diff --git a/xen/arch/riscv/include/asm/traps.h b/xen/arch/riscv/include/asm/traps.h
new file mode 100644
index 0000000000..816ab1178a
--- /dev/null
+++ b/xen/arch/riscv/include/asm/traps.h
@@ -0,0 +1,13 @@
+#ifndef __ASM_TRAPS_H__
+#define __ASM_TRAPS_H__
+
+#include <asm/processor.h>
+
+#ifndef __ASSEMBLY__
+
+void __handle_exception(struct cpu_user_regs *cpu_regs);
+void handle_exception(void);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_TRAPS_H__ */
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
new file mode 100644
index 0000000000..3201b851ef
--- /dev/null
+++ b/xen/arch/riscv/traps.c
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2023 Vates
+ *
+ * RISC-V Trap handlers
+ */
+#include <asm/processor.h>
+#include <asm/traps.h>
+
+void __handle_exception(struct cpu_user_regs *cpu_regs)
+{
+    wait_for_interrupt();
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481861.747102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssK-0008RJ-MF; Fri, 20 Jan 2023 15:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481861.747102; Fri, 20 Jan 2023 15:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssK-0008PA-82; Fri, 20 Jan 2023 15:00:16 +0000
Received: by outflank-mailman (input) for mailman id 481861;
 Fri, 20 Jan 2023 15:00:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssH-0006SQ-Dq
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:13 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 23371b36-98d3-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:00:11 +0100 (CET)
Received: by mail-wr1-x429.google.com with SMTP id t5so5117565wrq.1
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:11 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23371b36-98d3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Oa0wgdXnZxM249MLdfrJG7qfiY3SAdUUZ8LhXWtM0Ic=;
        b=ZABVCYsyUwKcpQX/0rzLULBtNWpkV9f4/4PXMYrYkEgKk+mKZn/DNDloFJKUDTrixv
         Po36LgKBFSxolnriU06W0pe/TW3yjv8dxO0RLkHDxLXKyfcwtwFYfZ7aTYKt0eD95p6T
         7MINqvik3DYun61yVCym92uqCPZfh/fVQtS/HKpqV8TpkUCMBbtef+r4cHS6Ko8i5qef
         ukUJ1HY6Pti6D0kGKBKsCsCgiHtUS+FzSR6JEtntdKA4SsLCb7HONIDZfSYceVGi0/TV
         wASdlX+wR+hAOEs4Acc28rWr915dBjeqzLqvf0emA8noZjE1s91x7Bxq7DpuJb00pl3a
         VG3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Oa0wgdXnZxM249MLdfrJG7qfiY3SAdUUZ8LhXWtM0Ic=;
        b=yZCeN4MmUps7HohmDwzlNSRffXoCVYC7CFwYlL8jpk5/rC7hXfHW0tww+wWhp7t7ru
         mJnpwF8qZsoCXq5M59EYWqEaKjM4eYmSsA/pOCELhPp2zNcxkEnIuIGSIO1+NA9PeGSG
         YtFcXQjmv/GJJmcbrARTATJMOfB1k5Y4HuuuoRPhXcUGsxDwvYmzn43ZGMRjw6kZgn46
         S4iI8DFTcdpsTTVN8Q7w7nQhgTonx/5gLfM2Xp3YXBfK3ZlVN0JhJtePiBEhEzl9XOYg
         JQkqJOoFZrq+FLbuQfEXmk3jZDacb4lNUqegUVvkNkzON4vCfPF9LB+rIKmoqkUP/+C1
         8fzA==
X-Gm-Message-State: AFqh2komjlIdxNS2IDsBiAjZVSS6B9SFoTr5mnWSQ40EZ99+TNnD6J7w
	taCCJBWhtohJQUKPf5Pty/dGMPAhBwAAUw==
X-Google-Smtp-Source: AMrXdXuEs6tkrKfGNgQJxeUkzovrnpB0U2Hqnxd8SMBOuezNI2Q2tP5yjGlQzzQTreLtSGeScTUsOg==
X-Received: by 2002:a5d:4526:0:b0:2bc:839c:134d with SMTP id j6-20020a5d4526000000b002bc839c134dmr13159877wra.4.1674226810899;
        Fri, 20 Jan 2023 07:00:10 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 08/14] xen/riscv: introduce decode_cause() stuff
Date: Fri, 20 Jan 2023 16:59:48 +0200
Message-Id: <c798832ec19cb94c0a27e8cff8f5bd6d1aa6ae7e.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the helpers needed to decode the cause of an
exception.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/traps.c | 88 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 88 insertions(+)

diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index 3201b851ef..dd64f053a5 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -4,8 +4,96 @@
  *
  * RISC-V Trap handlers
  */
+#include <asm/csr.h>
+#include <asm/early_printk.h>
 #include <asm/processor.h>
 #include <asm/traps.h>
+#include <xen/errno.h>
+
+const char *decode_trap_cause(unsigned long cause)
+{
+    switch ( cause )
+    {
+    case CAUSE_MISALIGNED_FETCH:
+        return "Instruction Address Misaligned";
+    case CAUSE_FETCH_ACCESS:
+        return "Instruction Access Fault";
+    case CAUSE_ILLEGAL_INSTRUCTION:
+        return "Illegal Instruction";
+    case CAUSE_BREAKPOINT:
+        return "Breakpoint";
+    case CAUSE_MISALIGNED_LOAD:
+        return "Load Address Misaligned";
+    case CAUSE_LOAD_ACCESS:
+        return "Load Access Fault";
+    case CAUSE_MISALIGNED_STORE:
+        return "Store/AMO Address Misaligned";
+    case CAUSE_STORE_ACCESS:
+        return "Store/AMO Access Fault";
+    case CAUSE_USER_ECALL:
+        return "Environment Call from U-Mode";
+    case CAUSE_SUPERVISOR_ECALL:
+        return "Environment Call from S-Mode";
+    case CAUSE_MACHINE_ECALL:
+        return "Environment Call from M-Mode";
+    case CAUSE_FETCH_PAGE_FAULT:
+        return "Instruction Page Fault";
+    case CAUSE_LOAD_PAGE_FAULT:
+        return "Load Page Fault";
+    case CAUSE_STORE_PAGE_FAULT:
+        return "Store/AMO Page Fault";
+    case CAUSE_FETCH_GUEST_PAGE_FAULT:
+        return "Instruction Guest Page Fault";
+    case CAUSE_LOAD_GUEST_PAGE_FAULT:
+        return "Load Guest Page Fault";
+    case CAUSE_VIRTUAL_INST_FAULT:
+        return "Virtualized Instruction Fault";
+    case CAUSE_STORE_GUEST_PAGE_FAULT:
+        return "Guest Store/AMO Page Fault";
+    default:
+        return "UNKNOWN";
+    }
+}
+
+const char *decode_reserved_interrupt_cause(unsigned long irq_cause)
+{
+    switch ( irq_cause )
+    {
+    case IRQ_M_SOFT:
+        return "M-mode Software Interrupt";
+    case IRQ_M_TIMER:
+        return "M-mode TIMER Interrupt";
+    case IRQ_M_EXT:
+        return "M-mode External Interrupt";
+    default:
+        return "UNKNOWN IRQ type";
+    }
+}
+
+const char *decode_interrupt_cause(unsigned long cause)
+{
+    unsigned long irq_cause = cause & ~CAUSE_IRQ_FLAG;
+
+    switch ( irq_cause )
+    {
+    case IRQ_S_SOFT:
+        return "Supervisor Software Interrupt";
+    case IRQ_S_TIMER:
+        return "Supervisor Timer Interrupt";
+    case IRQ_S_EXT:
+        return "Supervisor External Interrupt";
+    default:
+        return decode_reserved_interrupt_cause(irq_cause);
+    }
+}
+
+const char *decode_cause(unsigned long cause)
+{
+    if ( cause & CAUSE_IRQ_FLAG )
+        return decode_interrupt_cause(cause);
+
+    return decode_trap_cause(cause);
+}
 
 void __handle_exception(struct cpu_user_regs *cpu_regs)
 {
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481854.747040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssE-0006zX-Lr; Fri, 20 Jan 2023 15:00:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481854.747040; Fri, 20 Jan 2023 15:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssE-0006zQ-Iv; Fri, 20 Jan 2023 15:00:10 +0000
Received: by outflank-mailman (input) for mailman id 481854;
 Fri, 20 Jan 2023 15:00:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssC-0006SQ-ST
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:08 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f98dc38-98d3-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:00:05 +0100 (CET)
Received: by mail-wr1-x429.google.com with SMTP id d2so5080044wrp.8
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:05 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f98dc38-98d3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jK6mrWrK6nj6EcTqrtHlS0lQnNqAK7Gdih4oSkDJSNU=;
        b=PGGCvQ/UGzX+z4r4reiswrOo0xSrrAeKU36OfQ1XPhFFwQsKXAUnCRhRhhCwZpODgX
         pP+qUjIh/V8SLoVPnaB7kYkql1UGEuzStJ7cNb1c+hGunckig8eH5zJHkFrkVPYnfBeZ
         31QmeVU6/ojyoNnZTHfFiobLNYrv5XA1V9wiibjUONni+t5u+FMgFvEyhbMykJ8NQKlk
         4ZPCpBc+9pqi+rEtAOS/3vPlPlD07pIXqggiHCKR4dmeXr72J32PO39EQWORbP+UtTQn
         MM34Pl4kTaRXTye3xUNPpDfQJjWSiHDmNFy8uhliwTAgeSj/KVhXDPKHwnAthDkR5Dfm
         mQHg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jK6mrWrK6nj6EcTqrtHlS0lQnNqAK7Gdih4oSkDJSNU=;
        b=XFn+nXprntoBZ0caoS5Rom2x8yp7lNEqnrJwi0l8GlVqoMX/jChVCecJ5uXgRSD0la
         Zqxmm+Y/6TLoLlaHXyR4cWg8RcJ+WeMd0bc1tY/TkL39bcrTnWZt7GrFSTUf4y/Ghxso
         yupyRbLn4czYXBukySLJ64oNChRd1oYSIpvRyJLAXdxW5VRRM0cFbzMT5Wfc3djznW5z
         FS/ISIjABPatj+sALH89wA59NWHmRZ9ONiIbP0QMWyBAtRKSPnRXKx0lABr0eZsMYIDl
         2RM+kWSpsxoxvcP1OQ3J5K2PLwyHoYJpjga50MjR4oVkZtyCJBE5Mxjj/IaEe5F5YvQg
         DstA==
X-Gm-Message-State: AFqh2krXAwLFhsKElfWGvmbX6p/cNDgKUD9DUKjW9SvuRqPvA+a4vo1c
	p/ugYfKghb/FBBA+E4BbImBzrCWaVyzfXA==
X-Google-Smtp-Source: AMrXdXto0DaaA4bhlG4PeNRSIYd5z9hie3V+VCoiAEl+va4mBZPBSKs96ZCBOtiMYKHJi+ncwgZUZA==
X-Received: by 2002:a5d:6f0a:0:b0:2be:15f8:af1d with SMTP id ay10-20020a5d6f0a000000b002be15f8af1dmr15035330wrb.66.1674226804856;
        Fri, 20 Jan 2023 07:00:04 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 02/14] xen/riscv: add <asm/asm.h> header
Date: Fri, 20 Jan 2023 16:59:42 +0200
Message-Id: <621e8ef8c6a721927ecade5bb41cdc85df386bbf.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/include/asm/asm.h | 54 ++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/asm.h

diff --git a/xen/arch/riscv/include/asm/asm.h b/xen/arch/riscv/include/asm/asm.h
new file mode 100644
index 0000000000..6d426ecea7
--- /dev/null
+++ b/xen/arch/riscv/include/asm/asm.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+
+#if __SIZEOF_POINTER__ == 8
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.dword
+#else
+#define RISCV_PTR		".dword"
+#endif
+#elif __SIZEOF_POINTER__ == 4
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.word
+#else
+#define RISCV_PTR		".word"
+#endif
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#if (__SIZEOF_INT__ == 4)
+#define RISCV_INT		__ASM_STR(.word)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define RISCV_SHORT		__ASM_STR(.half)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481862.747107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssL-00009L-Et; Fri, 20 Jan 2023 15:00:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481862.747107; Fri, 20 Jan 2023 15:00:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssK-00007D-QO; Fri, 20 Jan 2023 15:00:16 +0000
Received: by outflank-mailman (input) for mailman id 481862;
 Fri, 20 Jan 2023 15:00:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssJ-0006SQ-9R
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:15 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2477e016-98d3-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:00:13 +0100 (CET)
Received: by mail-wr1-x429.google.com with SMTP id d2so5080529wrp.8
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:13 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2477e016-98d3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Fe0eCk7Y2xW3r2WIytl0p5j+qQaI829KhiJfOtG83A0=;
        b=jIUmaA3XUraRJJAH8KPW+MjheJZlQAEZXAlkV0Ekv6KkzdeBGH6ASGvkvjUuZR3iRk
         jFH42EV6W1820nMwJzNCphjSLDJSORRv2AdRNgZ5Gz5ElEVhihzFpFR0uwJ5F9mwHb/j
         9Jw7/wVoX8dDHcgKIsl2GLJei9hOUAXCoVSIjwy526yJZgkuenlbbgIcXuj2uNiRWlLf
         u1Je+p+2pvDKaeOIMQepTEKid/NT7ocuqFG/IkZ4ol9MhynxB6h7CNvMrirj6MEIH/Cr
         AXrBP3ia/L2S4UEXWHrikcecXqQpBht4EEI6g59oNvuwpT2fnad0XfZNBWTJyHeAdYfl
         +qmA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Fe0eCk7Y2xW3r2WIytl0p5j+qQaI829KhiJfOtG83A0=;
        b=6bwNmsKrHgjf2bVGDuDggUrKu8VtRR/ryn9Jg1ZjLcHIO7jGypCrlDByUDeeVjOiba
         ibNtakYza21jH+T/UFXNWeXpPDRYmnPlC1L4lI3DfFs+Qv8MWpJvC0Uq597z8VA0YqOP
         YPNdEUMFnXsk5Q6So2l58+LCgSJKf7WZJATrPPJSr25Apc5kJXVyoS09evMtlfB/7Zfs
         J6vQfGaITPhEvmu9cA/GmhuBYvpqVVqDhvMv7UEiuUVeSxCHy7Fr5ddz1dRm2dAi5P2R
         9dXARTbR25TWKszGlnYyVS8S6+9xF9/B17z4VfS3bh2bA0fGb2IneL9QzDULKl+fmWoa
         Va1A==
X-Gm-Message-State: AFqh2koP8136qBBwKP1pgrnwNAkM0NuutpRH/LFjzGrt444m2n0+4lKa
	XPZi+B9ciNoCYW286xwYINRq+tLamRLvVg==
X-Google-Smtp-Source: AMrXdXt4JWVsRr18Xw60hT1COAZWenDPxPHrl7+0Iefg1VDxeFvBkM2RgM3zdh5s71wY+nqBE3s5Rg==
X-Received: by 2002:adf:e310:0:b0:2bd:d8f1:2edf with SMTP id b16-20020adfe310000000b002bdd8f12edfmr13237260wrj.49.1674226812961;
        Fri, 20 Jan 2023 07:00:12 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 10/14] xen/riscv: mask all interrupts
Date: Fri, 20 Jan 2023 16:59:50 +0200
Message-Id: <0153a210de96733880fb3f6fddd902862cc2eaca.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/riscv64/head.S | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index d444dd8aad..ffd95f9f89 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,6 +1,11 @@
+#include <asm/riscv_encoding.h>
+
         .section .text.header, "ax", %progbits
 
 ENTRY(start)
+        /* Mask all interrupts */
+        csrw    CSR_SIE, zero
+
         la      sp, cpu0_boot_stack
         li      t0, STACK_SIZE
         add     sp, sp, t0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481863.747119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssN-0000SB-8j; Fri, 20 Jan 2023 15:00:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481863.747119; Fri, 20 Jan 2023 15:00:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssM-0000OQ-1l; Fri, 20 Jan 2023 15:00:18 +0000
Received: by outflank-mailman (input) for mailman id 481863;
 Fri, 20 Jan 2023 15:00:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssJ-0006Kg-Bn
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:15 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 250e287b-98d3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 16:00:14 +0100 (CET)
Received: by mail-wm1-x32b.google.com with SMTP id
 iv8-20020a05600c548800b003db04a0a46bso1306072wmb.0
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:14 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 250e287b-98d3-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=K0WSKOVdhD4de0NUU+Vb5gE8iDFJmgnw6gPb5txxrQY=;
        b=e8j5MC51WuX1KTMJ4V2AqT+Klh1/1PCd2GapwqibHdga4oR6gQyn4MNcchnerhVGQy
         Zo0jkGlnLaxnG+ETs5b9y+DQBzK5vWxFcda1YtMY9S4BZrHoICTqFCbL1BaWa6Omqtfw
         SvPbOzPgha6xA4ivGfslns5gFQLiBWOk+1mYqi9Z46CL+A5FGyCz7OULzqwtx6o7YS8H
         qkz+kVn3FQwxufSSfukNf8iR8vbH+EPKbkJ8KZNpSJhhDMoJ4++fZeJcsZH4rYKYGPcK
         BijLMuxwbQ/yG3/+e/uoj6hC+UbkWPJNad5TX+Uzc4+dIOMQgEbFRqx+l3zdPaIX5/J0
         xaaQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=K0WSKOVdhD4de0NUU+Vb5gE8iDFJmgnw6gPb5txxrQY=;
        b=kW0IVZ15cb8gnC1B8+qi/C3IAXdqpROwWvXzK3fnvDXKTG27mMWOsSW3l8FcAUR0mO
         B5+OZWKl5lxHoqB4IOEtkcpqAXbGJMaAqxd1Hlq2LjneMYtyJpXAOD9cUuS5Xr5LZFhS
         LQkC7VJ5EV1ebAGtX7teoSjZtHwApLAyoUXc3gvmTB1HBjYvr/M1jP9jlYCdNmNEZ5xj
         uH82L02b8e905xm092YkOdUJSqBxpasJ3W/56BgtHied6fCxMqpvwO2ZTWBq2X3qEoRI
         hnLNKq26bTMm8MFe4+mGjYIJGX/jPTEfcDbYeYO+hTYCPR/qzIaSdYs6zO3ANqYAjvN5
         V7bQ==
X-Gm-Message-State: AFqh2ko3t0cxle4JpI/SZT73rhHI0MFPjRxv+TFLjWshZhQK/t2t4GGv
	1VQG+XGXTtz0/iFZP9iF8NE0m+jrClEHSQ==
X-Google-Smtp-Source: AMrXdXuk77JOnphr6uSGPpR0kTywUv+ylWZkshdS4OpyJnEN1ImoV2Xg9Ua/LTaJTKstYoR4X1nNpQ==
X-Received: by 2002:a05:600c:1e09:b0:3d1:f16d:5848 with SMTP id ay9-20020a05600c1e0900b003d1f16d5848mr14355775wmb.26.1674226813902;
        Fri, 20 Jan 2023 07:00:13 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 11/14] xen/riscv: introduce setup_trap_handler()
Date: Fri, 20 Jan 2023 16:59:51 +0200
Message-Id: <b8d03f33aea498bb5fde4ccdc16f023bbe208e7f.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce setup_trap_handler() which programs the stvec CSR with the
address of handle_exception, and call it from start_xen().

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/setup.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index d09ffe1454..174e134c93 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,16 +1,27 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/csr.h>
 #include <asm/early_printk.h>
+#include <asm/traps.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
+static void setup_trap_handler(void)
+{
+    unsigned long addr = (unsigned long)&handle_exception;
+    csr_write(CSR_STVEC, addr);
+}
+
 void __init noreturn start_xen(void)
 {
     early_printk("Hello from C env\n");
 
+    setup_trap_handler();
+    early_printk("exception handler has been set up\n");
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481864.747128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssO-0000lp-AV; Fri, 20 Jan 2023 15:00:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481864.747128; Fri, 20 Jan 2023 15:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssN-0000hO-C7; Fri, 20 Jan 2023 15:00:19 +0000
Received: by outflank-mailman (input) for mailman id 481864;
 Fri, 20 Jan 2023 15:00:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssK-0006Kg-SR
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:16 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 26156ae8-98d3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 16:00:16 +0100 (CET)
Received: by mail-wr1-x42d.google.com with SMTP id d14so1414359wrr.9
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:16 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26156ae8-98d3-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=SUG1QTMGPWyaQd+Wzg6VRLvIzEpGSEc+bJIbP4HX+Ac=;
        b=BUzCp98P2KOhLr2xTPj8dfCxehOuf8RpFf/IIH+m9Dvk9buBG26N+OEouINtD4tWKx
         BNBHn3E1J49dz8d0SE+6SOV+Ayp1vswtlZkrdE5xMNuCXjuSnY8RiiIOs0DZuvnmuBc+
         Q9eIjCj72Ol/NRq0g1hsjBCH5mi9pY4gWd9YowkeNaAExQHiRc4iajd1+mmyatXi4E5N
         2cohliapAgw23gsVY9tNQJakdH2QxVGKwHn2xxsE5z59y1F9l7kHgg0JaAM2+1yYkOCx
         6sGxWGLM5b+Ly0ebXvTRuPl5q5q6jhrAEk66iQ3rO09Rk9Ocsvn6qw8gXlKal6KAuY1h
         LhPQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=SUG1QTMGPWyaQd+Wzg6VRLvIzEpGSEc+bJIbP4HX+Ac=;
        b=YTpzHGPDRO3dNEmqY+gP1aEiMdwHjc7UazK1yP54ZTp5PqzxnO1LOL89s+13o2OS2K
         zxHiYyTVXQOsGo79S/L9o9MhPHPX819NubQdU196b/lSj+m3hZ8/b0T1Ql92s9aEgjRw
         JH+JLCC11vOyDy+3L3SkLPKVnfoAM0xXxxsPPUIMhbxv2QkVaJ5CLdyrE8yVFTchVo4C
         738e+9NtOZtHu89tPS0fxE480BbNe0vb77Disg6T9gt999k8LQDrSXOBIfBdgYoyDk2s
         YR+alN04O2Cs2bcRlI1e6SBWNFft5cyuR3A/Oq82jLntZVk2zh5gCTbX/qZBoQksGqlA
         woKw==
X-Gm-Message-State: AFqh2kptdI9njuhyWkhtLSVy9w5gLkKHKRH+Vt3Cd666IqFTAh0ulaoz
	VR+idJ2TauhA18uUrA08TvB9qOxXmx7GPA==
X-Google-Smtp-Source: AMrXdXsikABfoFsOfWXV3DcSlpLN++qeOfD9vBk6m7CYBLT/K/Rzbxd8OY/6qrcT/YvP8Yw90QI3Ew==
X-Received: by 2002:a5d:68c9:0:b0:24f:11eb:2988 with SMTP id p9-20020a5d68c9000000b0024f11eb2988mr14058938wrw.71.1674226815692;
        Fri, 20 Jan 2023 07:00:15 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 13/14] xen/riscv: test basic handling stuff
Date: Fri, 20 Jan 2023 16:59:53 +0200
Message-Id: <10254478415a1417872a5c89cba1811b6483fd78.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Temporarily exercise run_in_exception_handler() and WARN() from
start_xen() to verify that the <asm/bug.h> infrastructure works.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/setup.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 174e134c93..35ab9d25c6 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,6 +1,7 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/bug.h>
 #include <asm/csr.h>
 #include <asm/early_printk.h>
 #include <asm/traps.h>
@@ -15,12 +16,27 @@ static void setup_trap_handler(void)
     csr_write(CSR_STVEC, addr);
 }
 
+static void test_run_in_exception(struct cpu_user_regs *regs)
+{
+    early_printk("If you see this message, ");
+    early_printk("run_in_exception_handler is most likely working\n");
+}
+
+static void test_macros_from_bug_h(void)
+{
+    run_in_exception_handler(test_run_in_exception);
+    WARN();
+    early_printk("If you see this message, ");
+    early_printk("WARN is most likely working\n");
+}
+
 void __init noreturn start_xen(void)
 {
     early_printk("Hello from C env\n");
 
     setup_trap_handler();
-    early_printk("exception handler has been set up\n");
+
+    test_macros_from_bug_h();
 
     for ( ;; )
         asm volatile ("wfi");
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481865.747131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssO-0000r9-Kh; Fri, 20 Jan 2023 15:00:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481865.747131; Fri, 20 Jan 2023 15:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssO-0000pi-26; Fri, 20 Jan 2023 15:00:20 +0000
Received: by outflank-mailman (input) for mailman id 481865;
 Fri, 20 Jan 2023 15:00:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssL-0006Kg-PI
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:17 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 267f584d-98d3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 16:00:17 +0100 (CET)
Received: by mail-wr1-x434.google.com with SMTP id h12so1098388wrv.10
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:17 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 267f584d-98d3-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=78yWSa2oWiLzMGFflcvH/ERn5FdBlTt3em76ND/K9vE=;
        b=XvHOyf10LUlT6LmVnhY1ibZnrqr/iIjqulVWUGapL3LP6TjwBnILSqN37umGlIfWGo
         5vXq93ElkWy1VPP89tq0TnvEiYktTtL1UJ6o2MFWuyxffggA9/ufzv8xQDgJq0r48LX8
         xZlJ4GC+N2lWm5J+XFSclYN8MSXttXQFcNK0v0pImeBKP5y+5aZ/arZsHiCu+GrXvrUC
         L7gYf/bJQ25pvI/IwUm3vPXdxmMoz3ZT5Kk8M8S7qvJNV4ttCbYbcu8oDU+OKlDgcOto
         pSCt8VxSsJ6SkIO6EUjJ/Jvfk2crTEFq8Jgj2SC/6d/rA3/uTyl+8hcZVaEz5JmVKwUW
         UCTQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=78yWSa2oWiLzMGFflcvH/ERn5FdBlTt3em76ND/K9vE=;
        b=w2msBVPA1rgEwKjQrV9tKQ4sPVdUDfUcT1qfmFdZst709U5J4iMARICherPhNPkEIR
         B0gcVb1bUgQcGUjFritUsUzmTFamphZ7JBKQ2qa9luA26OXwSjKki370764GHpReOFdF
         3WLf5uRgRmrWa6c9E0C5XTGAc8xFdenmuuTEb+SvrKCyPllCEsLksWf4tbsh2JuRDavz
         OpGIMIGGSEcbuEpt1sH2XQgERuFkyOY4LcB9vJM1cJdv0GkQVwqElSFazlgXM96Jj6EQ
         BK+okF/8662y8xa/LDlTONvvEToxhXpJM4Z/WLeIbLTnH5B+f2pz0/GX3ZTdp5xapAY/
         mSsg==
X-Gm-Message-State: AFqh2kp8WThQFqNonKfsbxYcPFyvPfSc5G4fs9zi66/TdKeJ9LZ46pIV
	slItx/eh25T19YsSfoXs5zDVikMF6rIS7w==
X-Google-Smtp-Source: AMrXdXty68AY1Bi6ypLUFvDVa6BIpSI51vivEHVWuO/BZ4HJwY6WN3iurrzFblkw5msdZClxnfBk/g==
X-Received: by 2002:adf:fa88:0:b0:2bd:d85f:55cc with SMTP id h8-20020adffa88000000b002bdd85f55ccmr12704200wrr.21.1674226816452;
        Fri, 20 Jan 2023 07:00:16 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v1 14/14] automation: add smoke test to verify macros from bug.h
Date: Fri, 20 Jan 2023 16:59:54 +0200
Message-Id: <4ce72535e44f49e82ad23f4e7dc004a67344b823.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Extend the riscv64 QEMU smoke test to check the serial log for the
messages printed by the <asm/bug.h> self-tests.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 automation/scripts/qemu-smoke-riscv64.sh | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
index e0f06360bc..e7cc7f1442 100755
--- a/automation/scripts/qemu-smoke-riscv64.sh
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -17,4 +17,6 @@ qemu-system-riscv64 \
 
 set -e
 (grep -q "Hello from C env" smoke.serial) || exit 1
+(grep -q "run_in_exception_handler is most likely working" smoke.serial) || exit 1
+(grep -q "WARN is most likely working" smoke.serial) || exit 1
 exit 0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:00:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481866.747142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssP-0001AI-Mp; Fri, 20 Jan 2023 15:00:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481866.747142; Fri, 20 Jan 2023 15:00:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIssP-00015Y-5x; Fri, 20 Jan 2023 15:00:21 +0000
Received: by outflank-mailman (input) for mailman id 481866;
 Fri, 20 Jan 2023 15:00:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVut=5R=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pIssL-0006SQ-VA
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:00:18 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 25b2fbdd-98d3-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:00:15 +0100 (CET)
Received: by mail-wm1-x32b.google.com with SMTP id
 f19-20020a1c6a13000000b003db0ef4dedcso6017950wmc.4
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:00:15 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 o2-20020a5d58c2000000b002bdbead763csm25349811wrf.95.2023.01.20.07.00.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:00:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25b2fbdd-98d3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AohNoimI/yhgb0o3Ba+qV4NmjlQD+20RTtBoHbVgDJ4=;
        b=bClvr+doJ8sqt0dhmwoLUhRQBOzD8FJ7aRNEBI6Lv8lroNqFp83vN0+aZkwzAAGnyv
         PsjzBDdAKuwANCTy9kcY2OTYwAoVlTOLHVdC0ztrggwBCemzVM3Qh8xLP4dHcBDu6rHa
         HQN4dQKAwdOQOwZR3b9HcRcE1hPx0k39kFIdK0NDDg1zmu22xmDnMniQFV/OK6yfQrwe
         ZQ7pyDOrI1q8TDa6Gtmfk6RBr7UhJj2uSKbX1AbaquU8IuemxnICfBUCTX0/X1ab8cjr
         dX5jmwJVL0BIiTaxNo33agKTbKqKcDZiNOWp8T3VG5cDdEj3J8IRLg+Y1OdOMJNJg9un
         4eHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=AohNoimI/yhgb0o3Ba+qV4NmjlQD+20RTtBoHbVgDJ4=;
        b=ZQdoXABo3zcRd722L+77sSvfw/eeHKaInDc00cSabW5IvNhZ1HXr9CWl86YGZj+Zwd
         eDUvkZcJUtUJwMo9Yi4zzQDOad91wERX3/2JpAyJjprMLF7r+EheMIcVD2lBEtot3+48
         I4AXHwd5w/EdjlnaK8ZKwR737UNGZmVRtc4Iys+zkx6RGU/C0k84U9Vgbg0/mDcEa2w2
         M1XcJV9Ps59hcpbSre/2WRQfepGh/vadn4NCSavTDyLxnSAIq1MOC+mrLm0UQ/aN5cOp
         Fgu/pFv8o+CSVN78H75LRK3PGpcS82+XocWq3cKpkrFbF9pmFaDztG4mmjLwOtOIlJYV
         KHOA==
X-Gm-Message-State: AFqh2kp6avAJDMrm7FqDTk1XfZC81l+a+g3F9YL6P6o58Pw4+NaZRD9h
	pRirFsOImyIjSlddhzRFxbyVJV0k2vDhfw==
X-Google-Smtp-Source: AMrXdXsk5FWNrQr7OBANffsAgrKjctoT/969G1bCCgqUY2ooV3Ha4IY073V3hCu8uwOxhKSpwhz3ew==
X-Received: by 2002:a05:600c:1c1a:b0:3da:fbd2:a324 with SMTP id j26-20020a05600c1c1a00b003dafbd2a324mr14582175wms.36.1674226814825;
        Fri, 20 Jan 2023 07:00:14 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v1 12/14] xen/riscv: introduce an implementation of macros from <asm/bug.h>
Date: Fri, 20 Jan 2023 16:59:52 +0200
Message-Id: <a0788e4744b04597fbd3e71c2bef0bd76843a066.1674226563.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674226563.git.oleksii.kurochko@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the macros BUG(), WARN(), run_in_exception_handler()
and assert_failed().

The implementation uses the "ebreak" instruction in combination with
a distinct bug frame table for each type, whose entries describe the
trap site (filename, line number and, for asserts, the failed predicate).

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/include/asm/bug.h | 120 +++++++++++++++++++++++++++++++
 xen/arch/riscv/traps.c           | 116 ++++++++++++++++++++++++++++++
 xen/arch/riscv/xen.lds.S         |  10 +++
 3 files changed, 246 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/bug.h

diff --git a/xen/arch/riscv/include/asm/bug.h b/xen/arch/riscv/include/asm/bug.h
new file mode 100644
index 0000000000..d17ffdcc4d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/bug.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2021-2023 Vates
+ *
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#include <xen/stringify.h>
+#include <xen/types.h>
+
+#ifndef __ASSEMBLY__
+
+struct bug_frame {
+    signed int loc_disp;    /* Relative address to the bug address */
+    signed int file_disp;   /* Relative address to the filename */
+    signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
+    uint16_t line;          /* Line number */
+    uint32_t pad0:16;       /* Padding for 8-byte alignment */
+};
+
+#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
+#define bug_file(b) ((const void *)(b) + (b)->file_disp)
+#define bug_line(b) ((b)->line)
+#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
+
+#define BUGFRAME_run_fn 0
+#define BUGFRAME_warn   1
+#define BUGFRAME_bug    2
+#define BUGFRAME_assert 3
+
+#define BUGFRAME_NR     4
+
+#define __INSN_LENGTH_MASK      _UL(0x3)
+#define __INSN_LENGTH_32        _UL(0x3)
+#define __COMPRESSED_INSN_MASK  _UL(0xffff)
+
+#define __BUG_INSN_32           _UL(0x00100073) /* ebreak */
+#define __BUG_INSN_16           _UL(0x9002) /* c.ebreak */
+
+#define GET_INSN_LENGTH(insn)                                           \
+({                                                                      \
+    unsigned long __len;                                                \
+    __len = (((insn) & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) ?       \
+            4UL : 2UL;                                                  \
+    __len;                                                              \
+})
+
+typedef u32 bug_insn_t;
+
+/* These are defined by the architecture */
+int is_valid_bugaddr(bug_insn_t addr);
+
+#define BUG_FN_REG t0
+
+/* Many versions of GCC don't support the asm %c parameter, which would
+ * be preferable to this unpleasantness. We use mergeable string
+ * sections to avoid multiple copies of the string appearing in the
+ * Xen image. BUGFRAME_run_fn needs to be handled separately.
+ */
+#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
+    asm ("1:ebreak\n"                                                       \
+         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
+         "2:\t.asciz " __stringify(file) "\n"                               \
+         "3:\n"                                                             \
+         ".if " #has_msg "\n"                                               \
+         "\t.asciz " #msg "\n"                                              \
+         ".endif\n"                                                         \
+         ".popsection\n"                                                    \
+         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
+         "4:\n"                                                             \
+         ".p2align 2\n"                                                     \
+         ".long (1b - 4b)\n"                                                \
+         ".long (2b - 4b)\n"                                                \
+         ".long (3b - 4b)\n"                                                \
+         ".hword " __stringify(line) ", 0\n"                                \
+         ".popsection");                                                    \
+} while (0)
+
+/*
+ * GCC will not allow "i" to be used when PIE is enabled (Xen doesn't set
+ * the flag but instead relies on the compiler's default). So the easiest
+ * way to implement run_in_exception_handler() is to pass the function to
+ * be called in a fixed register.
+ */
+#define run_in_exception_handler(fn) do {                                   \
+    asm ("mv " __stringify(BUG_FN_REG) ", %0\n"                             \
+         "1:ebreak\n"                                                       \
+         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","       \
+         "             \"a\", %%progbits\n"                                 \
+         "2:\n"                                                             \
+         ".p2align 2\n"                                                     \
+         ".long (1b - 2b)\n"                                                \
+         ".long 0, 0, 0\n"                                                  \
+         ".popsection" :: "r" (fn) : __stringify(BUG_FN_REG) );             \
+} while (0)
+
+#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
+
+#define BUG() do {                                              \
+    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
+    unreachable();                                              \
+} while (0)
+
+#define assert_failed(msg) do {                                 \
+    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
+    unreachable();                                              \
+} while (0)
+
+extern const struct bug_frame __start_bug_frames[],
+                              __stop_bug_frames_0[],
+                              __stop_bug_frames_1[],
+                              __stop_bug_frames_2[],
+                              __stop_bug_frames_3[];
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index fc25138a4b..8b719a5ef5 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -4,6 +4,7 @@
  *
  * RISC-V Trap handlers
  */
+#include <asm/bug.h>
 #include <asm/csr.h>
 #include <asm/early_printk.h>
 #include <asm/processor.h>
@@ -107,7 +108,122 @@ static void do_unexpected_trap(const struct cpu_user_regs *regs)
     wait_for_interrupt();
 }
 
+void show_execution_state(const struct cpu_user_regs *regs)
+{
+    early_printk("implement show_execution_state(regs)\n");
+}
+
+int do_bug_frame(struct cpu_user_regs *regs, vaddr_t pc)
+{
+    struct bug_frame *start, *end;
+    struct bug_frame *bug = NULL;
+    unsigned int id = 0;
+    const char *filename, *predicate;
+    int lineno;
+
+    unsigned long bug_frames[] = {
+        (unsigned long)&__start_bug_frames[0],
+        (unsigned long)&__stop_bug_frames_0[0],
+        (unsigned long)&__stop_bug_frames_1[0],
+        (unsigned long)&__stop_bug_frames_2[0],
+        (unsigned long)&__stop_bug_frames_3[0],
+    };
+
+    for ( id = 0; id < BUGFRAME_NR; id++ )
+    {
+        start = (struct bug_frame *)bug_frames[id];
+        end = (struct bug_frame *)bug_frames[id + 1];
+
+        while ( start != end )
+        {
+            if ( (vaddr_t)bug_loc(start) == pc )
+            {
+                bug = start;
+                goto found;
+            }
+
+            start++;
+        }
+    }
+
+found:
+    if ( bug == NULL )
+        return -ENOENT;
+
+    if ( id == BUGFRAME_run_fn )
+    {
+        void (*fn)(const struct cpu_user_regs *) = (void *)regs->BUG_FN_REG;
+
+        fn(regs);
+
+        goto end;
+    }
+
+    /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
+    filename = bug_file(bug);
+    lineno = bug_line(bug);
+
+    switch ( id )
+    {
+    case BUGFRAME_warn:
+        early_printk("Xen WARN at ");
+        early_printk(filename);
+        early_printk(":");
+        early_printk_hnum(lineno);
+
+        show_execution_state(regs);
+
+        goto end;
+
+    case BUGFRAME_bug:
+        early_printk("Xen BUG at ");
+        early_printk(filename);
+        early_printk(":");
+        early_printk_hnum(lineno);
+
+        show_execution_state(regs);
+        early_printk("change wait_for_interrupt() to panic() when common code is available\n");
+        wait_for_interrupt();
+
+    case BUGFRAME_assert:
+        /* ASSERT: decode the predicate string pointer. */
+        predicate = bug_msg(bug);
+
+        early_printk("Assertion \'");
+        early_printk(predicate);
+        early_printk("\' failed at ");
+        early_printk(filename);
+        early_printk(":");
+        early_printk_hnum(lineno);
+
+        show_execution_state(regs);
+        early_printk("change wait_for_interrupt to panic() when common is available\n");
+        wait_for_interrupt();
+    }
+
+    return -EINVAL;
+end:
+    regs->sepc += GET_INSN_LENGTH(*(bug_insn_t *)pc);
+
+    return 0;
+}
+
+int is_valid_bugaddr(bug_insn_t insn)
+{
+    if ( (insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32 )
+        return insn == __BUG_INSN_32;
+
+    return (insn & __COMPRESSED_INSN_MASK) == __BUG_INSN_16;
+}
+
 void __handle_exception(struct cpu_user_regs *cpu_regs)
 {
+    register_t pc = cpu_regs->sepc;
+    uint32_t instr = *(bug_insn_t *)pc;
+
+    if ( is_valid_bugaddr(instr) && !do_bug_frame(cpu_regs, pc) )
+        return;
+
     do_unexpected_trap(cpu_regs);
 }
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index ca57cce75c..139e2d04cb 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -39,6 +39,16 @@ SECTIONS
     . = ALIGN(PAGE_SIZE);
     .rodata : {
         _srodata = .;          /* Read-only data */
+        /* Bug frames table */
+        __start_bug_frames = .;
+        *(.bug_frames.0)
+        __stop_bug_frames_0 = .;
+        *(.bug_frames.1)
+        __stop_bug_frames_1 = .;
+        *(.bug_frames.2)
+        __stop_bug_frames_2 = .;
+        *(.bug_frames.3)
+        __stop_bug_frames_3 = .;
         *(.rodata)
         *(.rodata.*)
         *(.data.rel.ro)
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:09:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:09:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481932.747170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIt1Q-0006iG-Ts; Fri, 20 Jan 2023 15:09:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481932.747170; Fri, 20 Jan 2023 15:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIt1Q-0006i9-Qr; Fri, 20 Jan 2023 15:09:40 +0000
Received: by outflank-mailman (input) for mailman id 481932;
 Fri, 20 Jan 2023 15:09:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pIt1Q-0006i3-0n
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:09:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIt1P-0008To-MY; Fri, 20 Jan 2023 15:09:39 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[10.95.149.154]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pIt1P-00039Q-G4; Fri, 20 Jan 2023 15:09:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <cd673f97-9c0d-286b-e973-7a85c84dd576@xen.org>
Date: Fri, 20 Jan 2023 15:09:37 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-3-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
 <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
 <ba37ee02-c07c-2803-0867-149c779890b6@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <ba37ee02-c07c-2803-0867-149c779890b6@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 20/01/2023 14:40, Michal Orzel wrote:
> Hello,

Hi,

> 
> On 20/01/2023 10:32, Julien Grall wrote:
>>
>>
>> Hi,
>>
>> On 19/01/2023 22:54, Stefano Stabellini wrote:
>>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
>>>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>>>> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
>>>> address.
>>>>
>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>>
>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>
>>
>> I have committed the patch.
> The CI test jobs (static-mem) failed on this patch:
> https://gitlab.com/xen-project/xen/-/pipelines/752911309

Thanks for the report.

> 
> I took a look at it and this is because in the test script we
> try to find a node whose unit-address does not have leading zeroes.
> However, after this patch, we switched to PRIpaddr which is defined as 016lx/016llx and
> we end up creating nodes like:
> 
> memory@0000000050000000
> 
> instead of:
> 
> memory@50000000
> 
> We could modify the script, 

TBH, I think it was a mistake for the script to rely on how Xen describes 
the memory banks in the Device-Tree.

For instance, from my understanding, it would be valid for Xen to create 
a single node for all the banks, or even to omit the unit-address if 
there is only one bank.

> but do we really want to create nodes
> with leading zeroes? The dt spec does not mention it, although [1]
> specifies that the Linux convention is not to have leading zeroes.

Reading through the spec in [2], it suggests the current naming is fine. 
That said, the examples match the Linux convention (I guess that's not 
surprising...).

I am open to removing the leading zeroes. However, I think the CI also 
needs to be updated (see above for why).

Cheers,

[2] https://www.devicetree.org/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:24:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:24:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481938.747180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItFW-0000eo-6n; Fri, 20 Jan 2023 15:24:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481938.747180; Fri, 20 Jan 2023 15:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItFW-0000eh-2S; Fri, 20 Jan 2023 15:24:14 +0000
Received: by outflank-mailman (input) for mailman id 481938;
 Fri, 20 Jan 2023 15:24:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pItFU-0000eb-Ly
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:24:12 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7c670f11-98d6-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 16:24:11 +0100 (CET)
Received: from mail-mw2nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 10:24:05 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SN7PR03MB7059.namprd03.prod.outlook.com (2603:10b6:806:354::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 15:24:02 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 15:24:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c670f11-98d6-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 2/2] x86/shadow: mark more of sh_page_fault() HVM-only
Thread-Topic: [PATCH 2/2] x86/shadow: mark more of sh_page_fault() HVM-only
Thread-Index: AQHZLAix9A0R4pcRYE+MObCQxsOEfa6nbmCA
Date: Fri, 20 Jan 2023 15:24:02 +0000
Message-ID: <c44f7536-b86f-4f39-f097-2ab25676cd63@citrix.com>
References: <9e79449a-fd12-f497-695b-79a50cc913c7@suse.com>
 <a4d6ac67-15c6-a7c7-d27c-b98544395a52@suse.com>
In-Reply-To: <a4d6ac67-15c6-a7c7-d27c-b98544395a52@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Type: text/plain; charset="utf-8"
Content-ID: <842913583D17B54A94C373B437423D41@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 19/01/2023 1:19 pm, Jan Beulich wrote:
> Neither p2m_mmio_dm nor the types p2m_is_readonly() checks for are
> applicable to PV; specifically get_gfn() won't ever return such a type
> for PV domains. Adjacent to those checks is yet another is_hvm_...()
> one. With that block made conditional, another conditional block near
> the end of the function can be widened.

Why?  I presume you're referring to the emulate_readonly label?

Could I request "with that block made conditional, the emulate_readonly
label becomes conditional too."

"widened" is actually widened in both directions, AFAICT, to include
both the emulate_readonly and mmio labels.

But looking at the code, I think we mean emulated MMIO only, because it
comes from p2m_mmio_dm only?

>
> Furthermore the shadow_mode_refcounts() check immediately ahead of the
> aforementioned newly inserted #ifdef

This would be far easier to follow if you said the emulate label.

>  renders redundant two subsequent
> is_hvm_domain() checks (the latter of which was already redundant with
> the former).
>
> Also guest_mode() checks are pointless when we already know we're
> dealing with a HVM domain.
>
> Finally style-adjust a comment which otherwise would be fully visible as
> patch context anyway.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

So I think this is all ok, but honestly it would be far easier to review
if it was split into two patches - first the #ifdef rearranging, and the
cleanup second.

> ---
> I'm not convinced of the usefulness of the ASSERT() immediately after
> the "mmio" label. Additionally I think the code there would better move
> to the single place where we presently have "goto mmio", bringing
> things more in line with the other handle_mmio_with_translation()
> invocation site.

That would remove the poorly named label, and make it clearly emulated
MMIO only.  So 3 patches with this movement as the first one?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:29:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481945.747189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItKr-0001Om-RY; Fri, 20 Jan 2023 15:29:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481945.747189; Fri, 20 Jan 2023 15:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItKr-0001Of-Ox; Fri, 20 Jan 2023 15:29:45 +0000
Received: by outflank-mailman (input) for mailman id 481945;
 Fri, 20 Jan 2023 15:29:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pItKq-0001OZ-Ej
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:29:44 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 41f3ca0b-98d7-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 16:29:42 +0100 (CET)
Received: from mail-bn8nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 10:29:39 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5412.namprd03.prod.outlook.com (2603:10b6:208:291::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 15:29:37 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 15:29:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41f3ca0b-98d7-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 01/14] xen/riscv: add _zicsr to CFLAGS
Thread-Topic: [PATCH v1 01/14] xen/riscv: add _zicsr to CFLAGS
Thread-Index: AQHZLN/sD4BhGIHIWE+hX+U44mzDDK6nbkKA
Date: Fri, 20 Jan 2023 15:29:37 +0000
Message-ID: <d5d9a305-3501-cbc4-1c8a-1a62bd08d588@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <3617dc882193166580ae7e5d122447e924cab524.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <3617dc882193166580ae7e5d122447e924cab524.1674226563.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BLAPR03MB5412:EE_
x-ms-office365-filtering-correlation-id: 8d288807-7b9b-4536-7689-08dafafb2430
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <8836139EBFC4684BAAF1279C6B10CEB8@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8d288807-7b9b-4536-7689-08dafafb2430
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 15:29:37.6768
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5412

On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> Work with some registers requires csr command which is part of
> Zicsr.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  xen/arch/riscv/arch.mk | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
> index 012dc677c3..95b41d9f3e 100644
> --- a/xen/arch/riscv/arch.mk
> +++ b/xen/arch/riscv/arch.mk
> @@ -10,7 +10,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
>  # into the upper half _or_ the lower half of the address space.
>  # -mcmodel=medlow would force Xen into the lower half.
>  
> -CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
> +CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany

Should we just go straight for G, rather than bumping it along every
time we make a tweak?

~Andrew
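For context, the extension-suffix pattern being discussed can be sketched as a make fragment (simplified from the quoted xen/arch/riscv/arch.mk; the base ISA string `rv64ima` here is illustrative, not the file's exact contents):

```make
# Base ISA string (illustrative value, not the real arch.mk default).
riscv-march-y := rv64ima

# Each enabled single-letter extension appends its letter to the string.
riscv-march-$(CONFIG_RISCV_ISA_C) := $(riscv-march-y)c

# Multi-letter extensions such as Zicsr are appended with an underscore
# separator; this is what the patch adds so that csr* instructions assemble.
CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany
```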


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:31:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:31:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481950.747200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItMj-0002kU-7k; Fri, 20 Jan 2023 15:31:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481950.747200; Fri, 20 Jan 2023 15:31:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItMj-0002kN-3N; Fri, 20 Jan 2023 15:31:41 +0000
Received: by outflank-mailman (input) for mailman id 481950;
 Fri, 20 Jan 2023 15:31:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pItMg-0002kH-Vb
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:31:38 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 85793393-98d7-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:31:36 +0100 (CET)
Received: from mail-mw2nam04lp2174.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 10:31:33 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5412.namprd03.prod.outlook.com (2603:10b6:208:291::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 15:31:30 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 15:31:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85793393-98d7-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674228696;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=5xWWkPa50EhbsjIYsMVHqH9LEfGHOJQyq4+E2+vQpPo=;
  b=Gzv6efnIYPg6MN/verESqd6qm06732Oxkm9+V37wlOjsbkq4kCqsBUpt
   ENGc1LMwVk5jfws5Sa/RokQbVxCQ0jrUd23Vck+GNCQxaMn2xjdhwrNWE
   nrJxeGe2IBxSyAT25rf6PKeqCRm6d7qcygOW6U7jxnVcxhRVlcVmgpXzP
   Q=;
X-IronPort-RemoteIP: 104.47.73.174
X-IronPort-MID: 93953849
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,232,1669093200"; 
   d="scan'208";a="93953849"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LusK9udmKbE9wvYzZ2RSwv8uBErnGx6VU7VJGI9oPb0pgNQSBs4ROB3IfO7dNyMoRMSo5GQzQkm/fx55QdwgvBhwhb6DuJJkuDnXAHlAbRRB4PxnaxMw+7D7yL/J0TdUSAn4qpnMsExTEzHF3iEO+imZOmkE6kyY5uGjQmQtT8oxw7VQyNkOiii9LfKXJxXMnw5Idmym5kYQki71c0n2pyrglWy5WnpnyNZMqzFW9oYqBVy1DVVMIAm0nXtM2Hi9967qPS0vZwElbMcBM15S+2wuf1hk1oA+VfBpIveIYV2tPcbL+u01I1++ddKl9gx4mB9gDAYektRgOBH+rzkLAg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5xWWkPa50EhbsjIYsMVHqH9LEfGHOJQyq4+E2+vQpPo=;
 b=SlziT16DQWh5il4VGFiqUzdn+RSkq2GKvUkZ+VPxH55ufNJqWft1M0VOzAyy54/FQkZmzMx2RybHxrX6LfAXJEeE0vyGLOvAoqfFuRezpHsrP9WBWq2gFTCBLaz8LshoWCjnT+v7kGaWPZfSmg2/yAZDUyelGJV6QrvUsjTq4hNpe4BaxDA9BHhdrTniSrn5ZB9O0bgsNRF4NaE8cUv2+g/f438ho3hm4X1/LJ+NaF2MQOFqZxsxmG5Gzvs9vHEfWJiGMWuq0/KiJ/gJTa0O/Zu4jqzyLg/zymi5wXAGi/uSADUg1DB1z0PWDnJ5bF9UX8oowD8VzJvwC8RORfSi/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5xWWkPa50EhbsjIYsMVHqH9LEfGHOJQyq4+E2+vQpPo=;
 b=vIh05cDGOnrx2dkzy6/WNVS9l75eDajSoH41fYyOnlQvQeaYEiaToGuS0TcY4b812+/vO+YeOEjg7N/TGdf6+hGt4ibTAa86AyMaUwAyrZkkBxpFzyi/aLLNK2/ccGU5ssZAFCvO9jPPxHS+dVxpa1gIgXLm6PZYAYuPHvsfyQo=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 02/14] xen/riscv: add <asm/asm.h> header
Thread-Topic: [PATCH v1 02/14] xen/riscv: add <asm/asm.h> header
Thread-Index: AQHZLN/mQOdIReGXikafzlTdFxmXvK6nbsiA
Date: Fri, 20 Jan 2023 15:31:30 +0000
Message-ID: <610308ac-3440-e84e-02ec-928f0652e9d3@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <621e8ef8c6a721927ecade5bb41cdc85df386bbf.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <621e8ef8c6a721927ecade5bb41cdc85df386bbf.1674226563.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BLAPR03MB5412:EE_
x-ms-office365-filtering-correlation-id: b8c99169-2b1d-4d5a-d25e-08dafafb6733
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <691BF11F7EF20547B1B6B44D1CA62712@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b8c99169-2b1d-4d5a-d25e-08dafafb6733
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 15:31:30.1374
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5412

On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

There's some stuff in here which is not RISCV-specific.  We really want
to dedup with the other architectures and move into common.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:37:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481957.747209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItSc-0003Pc-Rr; Fri, 20 Jan 2023 15:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481957.747209; Fri, 20 Jan 2023 15:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItSc-0003PV-Ok; Fri, 20 Jan 2023 15:37:46 +0000
Received: by outflank-mailman (input) for mailman id 481957;
 Fri, 20 Jan 2023 15:37:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=islB=5R=redhat.com=imammedo@srs-se1.protection.inumbo.net>)
 id 1pItSb-0003PP-Hi
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:37:45 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6072c019-98d8-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:37:42 +0100 (CET)
Received: from mail-ej1-f70.google.com (mail-ej1-f70.google.com
 [209.85.218.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-311-ynSzOyI2M92Q8d7NniEQ-A-1; Fri, 20 Jan 2023 10:37:39 -0500
Received: by mail-ej1-f70.google.com with SMTP id
 sb39-20020a1709076da700b0086b1cfb06f0so4022535ejc.4
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 07:37:39 -0800 (PST)
Received: from imammedo.users.ipa.redhat.com (nat-pool-brq-t.redhat.com.
 [213.175.37.10]) by smtp.gmail.com with ESMTPSA id
 eg49-20020a05640228b100b00488117821ffsm17591730edb.31.2023.01.20.07.37.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 07:37:35 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6072c019-98d8-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1674229061;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Lf+VUNzHo6930QhHDjlYS5a/9FVCtfkspKq7OQ6oVEY=;
	b=a0rMRdsdO9Th5o8Sm5dJoRDcemvLrMD4n/r9GP+KMgd4Y4K+qno9oiSP4hYvxFnCSCqDsB
	FZp44/8/MwChd7jNkJ4fO7VCB1MTlb7/CGcPh5AfW140yisvHthoKPkaIBk4jWwuJKejZP
	EimqZ5tXl1GW0/o3s+wBa260mYXrFxE=
X-MC-Unique: ynSzOyI2M92Q8d7NniEQ-A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Lf+VUNzHo6930QhHDjlYS5a/9FVCtfkspKq7OQ6oVEY=;
        b=hP0PCkZZPmEoHiTsyIiMgYfYWjCXIjz9ReSvnSLOvWxBgQbid+/ohldBToXtlcRd5z
         qo54EdBACouV/FQ1sOQW1y8toR6ee6Ye7Mzf6M54wFqVSYwfSOx/O3f1Tmycdcr3nmEf
         Zm6T0t/79nhWYHN+JchjBtbpMgK4v8Lmt9JiRQIPVyQQHefz7+s3VpE02DriDAc6d97s
         yUYyF6mLgF0/dNgdORoHtyJHxNSRrH5pRp3nx+VYg0mL6Kf28s2qScL9+CBGxVwVt5mX
         N/taxZKuWNFrBVo3KlTQ94T1sekIoHE1ac1H+yO75l4dHw42pnr9FdsuII3PYNGua8kw
         3S4Q==
X-Gm-Message-State: AFqh2krOFQETGXvWSgj4CRoPS3jJ8seoW4aOq4xSW8A6KyogBoAwawHd
	0jefW1sKHoKUfuJYXvsHhYhMpYlYqkWeQWm9Cuxod1Fq0LYxHIWSxDd/mHj8t/cv9ZAcicdSYka
	AAmW2VjkwVEtgMjnBHz+2OX3hDDU=
X-Received: by 2002:a05:6402:413:b0:498:b9ea:1894 with SMTP id q19-20020a056402041300b00498b9ea1894mr14217260edv.15.1674229058192;
        Fri, 20 Jan 2023 07:37:38 -0800 (PST)
X-Google-Smtp-Source: AMrXdXt9Neac975gd5nkcxdpVzeTBwC+wqA5CjhhwnQrnrtHBQwgyExGpn/wMyvV8tTW0X127YXwMw==
X-Received: by 2002:a05:6402:413:b0:498:b9ea:1894 with SMTP id q19-20020a056402041300b00498b9ea1894mr14217220edv.15.1674229057884;
        Fri, 20 Jan 2023 07:37:37 -0800 (PST)
Date: Fri, 20 Jan 2023 16:37:34 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Cc: Thomas Gleixner <tglx@linutronix.de>, linux-kernel@vger.kernel.org,
 amakhalov@vmware.com, ganb@vmware.com, ankitja@vmware.com,
 bordoloih@vmware.com, keerthanak@vmware.com, blamoreaux@vmware.com,
 namit@vmware.com, Peter Zijlstra <peterz@infradead.org>, Ingo Molnar
 <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>, "Rafael J.
 Wysocki" <rafael.j.wysocki@intel.com>, "Paul E. McKenney"
 <paulmck@kernel.org>, Wyes Karny <wyes.karny@amd.com>, Lewis Caroll
 <lewis.carroll@amd.com>, Tom Lendacky <thomas.lendacky@amd.com>, Juergen
 Gross <jgross@suse.com>, x86@kernel.org, VMware PV-Drivers Reviewers
 <pv-drivers@vmware.com>, virtualization@lists.linux-foundation.org,
 kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
 state
Message-ID: <20230120163734.63e62444@imammedo.users.ipa.redhat.com>
In-Reply-To: <ecb9a22e-fd6e-67f0-d916-ad16033fc13c@csail.mit.edu>
References: <20230116060134.80259-1-srivatsa@csail.mit.edu>
	<20230116155526.05d37ff9@imammedo.users.ipa.redhat.com>
	<87bkmui5z4.ffs@tglx>
	<ecb9a22e-fd6e-67f0-d916-ad16033fc13c@csail.mit.edu>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.36; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Fri, 20 Jan 2023 05:55:11 -0800
"Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:

> Hi Igor and Thomas,
> 
> Thank you for your review!
> 
> On 1/19/23 1:12 PM, Thomas Gleixner wrote:
> > On Mon, Jan 16 2023 at 15:55, Igor Mammedov wrote:  
> >> "Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:  
> >>> Fix this by preventing the use of mwait idle state in the vCPU offline
> >>> play_dead() path for any hypervisor, even if mwait support is
> >>> available.  
> >>
> >> if mwait is enabled, it's very likely guest to have cpuidle
> >> enabled and using the same mwait as well. So exiting early from
> >>  mwait_play_dead(), might just punt workflow down:
> >>   native_play_dead()
> >>         ...
> >>         mwait_play_dead();
> >>         if (cpuidle_play_dead())   <- possible mwait here                                              
> >>                 hlt_play_dead(); 
> >>
> >> and it will end up in mwait again and only if that fails
> >> it will go HLT route and maybe transition to VMM.  
> > 
> > Good point.
> >   
> >> Instead of workaround on guest side,
> >> shouldn't hypervisor force VMEXIT on being unplugged vCPU when it's
> >> actually hot-unplugging vCPU? (ex: QEMU kicks vCPU out from guest
> >> context when it is removing vCPU, among other things)  
> > 
> > For a pure guest side CPU unplug operation:
> > 
> >     guest$ echo 0 >/sys/devices/system/cpu/cpu$N/online
> > 
> > the hypervisor is not involved at all. The vCPU is not removed in that
> > case.
> >   
> 
> Agreed, and this is indeed the scenario I was targeting with this patch,
> as opposed to vCPU removal from the host side. I'll add this clarification
> to the commit message.

The commit message explicitly said:
"which prevents the hypervisor from running other vCPUs or workloads on the
corresponding pCPU."

and that implies unplug on the hypervisor side as well.
Why? Because when the hypervisor exposes mwait to a guest, it has to
reserve/pin a pCPU for each present vCPU, and other VMs/workloads can safely
run on that pCPU only once it can no longer be reused by the VM where it
was originally used.

Now consider the following worst (and most likely) case without unplug
on the hypervisor side:

 1. vm1mwait: pin pCPU2 to vCPU2
 2. vm1mwait: guest$ echo 0 >/sys/devices/system/cpu/cpu2/online
        -> HLT -> VMEXIT
 --
 3. vm2mwait: pin pCPU2 to vCPUx and start VM
 4. vm2mwait: guest OS onlines vCPUx and starts using it, incl.
       going into the idle => mwait state
 --
 5. vm1mwait: it still thinks that vCPU2 is present, so it can rightfully do:
       guest$ echo 1 >/sys/devices/system/cpu/cpu2/online
 --              
 6.1 best case: vm1mwait's online fails after a timeout
 6.2 worst case: vm2mwait does a VMEXIT on vCPUx around the time-frame when
     vm1mwait onlines vCPU2; the online may succeed, and then vm2mwait's
     vCPUx will be stuck (possibly indefinitely) until, for some reason, a
     VMEXIT happens on vm1mwait's vCPU2 _and_ the host decides to schedule
     vCPUx on pCPU2, which would make vm1mwait stuck on vCPU2.
So either way it's expected behavior.

And if there is no intention to unplug the vCPU on the hypervisor side,
then a VMEXIT on play_dead is not really necessary (mwait is better
than HLT), since the hypervisor can't safely reuse the pCPU elsewhere
anyway and the vCPU goes into deep sleep within guest context.

PS:
The only case where doing HLT/VMEXIT on play_dead might work out
would be if the new workload were neither pinned to the same pCPU nor
using mwait (i.e. the host can migrate it elsewhere and schedule
vCPU2 back on pCPU2).


> > So to ensure that this ends up in HLT something like the below is
> > required.
> > 
> > Note, the removal of the comment after mwait_play_dead() is intentional
> > because the comment is completely bogus. Not having MWAIT is not a
> > failure. But that wants to be a separate patch.
> >   
> 
> Sounds good, will do and post a new version.
> 
> Thank you!
> 
> Regards,
> Srivatsa
> VMware Photon OS
> 
> 
> > Thanks,
> > 
> >         tglx
> > ---        
> > diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> > index 55cad72715d9..3f1f20f71ec5 100644
> > --- a/arch/x86/kernel/smpboot.c
> > +++ b/arch/x86/kernel/smpboot.c
> > @@ -1833,7 +1833,10 @@ void native_play_dead(void)
> >  	play_dead_common();
> >  	tboot_shutdown(TB_SHUTDOWN_WFS);
> >  
> > -	mwait_play_dead();	/* Only returns on failure */
> > +	if (this_cpu_has(X86_FEATURE_HYPERVISOR))
> > +		hlt_play_dead();
> > +
> > +	mwait_play_dead();
> >  	if (cpuidle_play_dead())
> >  		hlt_play_dead();
> >  }
> > 
> > 
> >   
> >   
> 



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:39:37 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 05/14] xen/riscv: add early_printk_hnum() function
Thread-Topic: [PATCH v1 05/14] xen/riscv: add early_printk_hnum() function
Thread-Index: AQHZLN/tUylHypY/2ESw6BfK5kdGDa6ncO0A
Date: Fri, 20 Jan 2023 15:39:10 +0000
Message-ID: <53b7651a-4274-1e2f-fb97-d30f3ddbac1d@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <633ced21788a3abf5079c9a191794616bb1ad351.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <633ced21788a3abf5079c9a191794616bb1ad351.1674226563.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> Add ability to print hex number.
> It might be useful to print register value as debug information
> in BUG(), WARN(), etc...
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

I think it would be better to get s(n)printf() working than to take
these.  We're going to need to get it working soon anyway, and will be
much easier than doing the full printk() infrastructure.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:45:17 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175997-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 175997: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 15:45:06 +0000

flight 175997 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175997/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
baseline version:
 xen                  90caa47aa33efb6408514ce09721624a2bdf6694

Last test of basis   175995  2023-01-20 09:00:27 Z    0 days
Testing same since   175997  2023-01-20 13:01:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   90caa47aa3..89cc5d96a9  89cc5d96a9d1fce81cf58b6814dac62a9e07fbee -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:53:52 2023
Date: Fri, 20 Jan 2023 16:53:29 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: Chuck Zmudzinski <brchuckz@netscape.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Bernhard Beschow <shentey@gmail.com>,
 qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Paul
 Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, Richard
 Henderson <richard.henderson@linaro.org>, Eduardo Habkost
 <eduardo@habkost.net>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <philmd@linaro.org>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230120165329.46f254b4@imammedo.users.ipa.redhat.com>
In-Reply-To: <a3aed167-74f7-aa4c-1bc6-84f116feb702@aol.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
	<a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
	<20230110030331-mutt-send-email-mst@kernel.org>
	<a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
	<D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
	<9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
	<7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
	<20230112180314-mutt-send-email-mst@kernel.org>
	<128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
	<20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
	<88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
	<20230116163342.467039a0@imammedo.users.ipa.redhat.com>
	<fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
	<20230117113513.4e692539@imammedo.users.ipa.redhat.com>
	<a3aed167-74f7-aa4c-1bc6-84f116feb702@aol.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Tue, 17 Jan 2023 09:43:52 -0500
Chuck Zmudzinski <brchuckz@aol.com> wrote:

> On 1/17/2023 5:35 AM, Igor Mammedov wrote:
> > On Mon, 16 Jan 2023 13:00:53 -0500
> > Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> > =20
> > > On 1/16/23 10:33, Igor Mammedov wrote: =20
> > > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > >    =20
> > > >> On 1/13/23 4:33=E2=80=AFAM, Igor Mammedov wrote:   =20
> > > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > >> >      =20
> > > >> >> On 1/12/23 6:03=E2=80=AFPM, Michael S. Tsirkin wrote:     =20
> > > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wr=
ote:       =20
> > > >> >> >> I think the change Michael suggests is very minimalistic: Mo=
ve the if
> > > >> >> >> condition around xen_igd_reserve_slot() into the function it=
self and
> > > >> >> >> always call it there unconditionally -- basically turning th=
ree lines
> > > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem sp=
ecific,
> > > >> >> >> Michael further suggests to rename it to something more gene=
ral. All
> > > >> >> >> in all no big changes required.       =20
> > > >> >> >=20
> > > >> >> > yes, exactly.
> > > >> >> >        =20
> > > >> >>=20
> > > >> >> OK, got it. I can do that along with the other suggestions.    =
 =20
> > > >> >=20
> > > >> > have you considered instead of reservation, putting a slot check=
 in device model
> > > >> > and if it's intel igd being passed through, fail at realize time=
  if it can't take
> > > >> > required slot (with a error directing user to fix command line)?=
     =20
> > > >>=20
> > > >> Yes, but the core pci code currently already fails at realize time
> > > >> with a useful error message if the user tries to use slot 2 for th=
e
> > > >> igd, because of the xen platform device which has slot 2. The user
> > > >> can fix this without patching qemu, but having the user fix it on
> > > >> the command line is not the best way to solve the problem, primari=
ly
> > > >> because the user would need to hotplug the xen platform device via=
 a
> > > >> command line option instead of having the xen platform device adde=
d by
> > > >> pc_xen_hvm_init functions almost immediately after creating the pc=
i
> > > >> bus, and that delay in adding the xen platform device degrades
> > > >> startup performance of the guest.
> > > >>    =20
> > > >> > That could be less complicated than dealing with slot reservatio=
ns at the cost of
> > > >> > being less convenient.     =20
> > > >>=20
> > > >> And also a cost of reduced startup performance   =20
> > > >=20
> > > > Could you clarify how it affects performance (and how much).
> > > > (as I see, setup done at board_init time is roughly the same
> > > > as with '-device foo' CLI options, modulo time needed to parse
> > > > options which should be negligible. and both ways are done before
> > > > guest runs)   =20
> > >=20
> > > I preface my answer by saying there is a v9, but you don't
> > > need to look at that. I will answer all your questions here.
> > >=20
> > > I am going by what I observe on the main HDMI display with the
> > > different approaches. With the approach of not patching Qemu
> > > to fix this, which requires adding the Xen platform device a
> > > little later, the length of time it takes to fully load the
> > > guest is increased. I also noticed with Linux guests that use
> > > the grub bootloader, the grub vga driver cannot display the
> > > grub boot menu at the native resolution of the display, which
> > > in the tested case is 1920x1080, when the Xen platform device
> > > is added via a command line option instead of by the
> > > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > > to Qemu, the grub menu is displayed at the full, 1920x1080
> > > native resolution of the display. Once the guest fully loads,
> > > there is no noticeable difference in performance. It is mainly
> > > a degradation in startup performance, not performance once
> > > the guest OS is fully loaded. =20
> > above hints on presence of bug[s] in igd-passthru implementation,
> > and this patch effectively hides problem instead of trying to figure
> > out what's wrong and fixing it.
> > =20
>=20
> Why did you delete the rest of my answers to your other observations and
> not respond to them?

They are preserved on the mailing list in your previous email.
It's usually fine to trim non-relevant parts and keep only the part/context
that is being replied to.

I didn't want to argue the point that it's usually the job of the management
layer to arrange correct IGD passthrough for QEMU, as Alex pointed out.
Hence those parts got trimmed.


>=20



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 15:55:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 15:55:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481983.747250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItjD-0007YH-9l; Fri, 20 Jan 2023 15:54:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481983.747250; Fri, 20 Jan 2023 15:54:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItjD-0007Y8-5F; Fri, 20 Jan 2023 15:54:55 +0000
Received: by outflank-mailman (input) for mailman id 481983;
 Fri, 20 Jan 2023 15:54:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pItjB-0007Xw-EG
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 15:54:53 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c45139b7-98da-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 16:54:49 +0100 (CET)
Received: from mail-bn8nam04lp2043.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 10:54:46 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB4935.namprd03.prod.outlook.com (2603:10b6:a03:1e3::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.26; Fri, 20 Jan
 2023 15:54:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 15:54:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c45139b7-98da-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674230090;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=6kU454m2W3k+QEM9kV0wYwdUPDVwotb1yhUhbg/UduM=;
  b=WTMjXeUCH0qbQ9MYg7NELdHZdIRwcFh72K/VeLBC1O3uyWsZkg6Bm0Rx
   yf9nY3chMhkgWR/NqgiTLwcB4lk0ReewwK4sT9qqLQLAdYn7QHy+oH5hO
   nxWVIVoewbsAVmMnVvKlNwOugElJRPtr1ZJepUOzucUYy6WQyvMwWzAw5
   4=;
X-IronPort-RemoteIP: 104.47.74.43
X-IronPort-MID: 93506737
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,232,1669093200"; 
   d="scan'208";a="93506737"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e/jcMW/jv+T56ehPQQySEXd9XMspUEmOpEu22zxQ4AzFS7MM5xe4n2UNMe2t3ljTdIo6iZGD9c9+gSh10fiPOc+XsxcAVHJEzJmcJstJbiSSH14dWqJ+FZLgkrLPp8zitM0Z9nTQ1q8GzdHP9MC7B3NuXgFchWrJhefuxJO2JNGVWtFFUNwM0+IY/YoxdtaqxE7HAwXB3J9nJ5KTkdCc12RHGolHQ9d6sSPv21TixkV7+cQ3rBZc8na+InWJSLz4HYv4x63ruYfH+c3lLnzLx7zFCC5tbXJVrX3iEk/xRVIkb54VPHAUZlVwsd354IZb5TX5E47a6cYluCqKUiKbbQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6kU454m2W3k+QEM9kV0wYwdUPDVwotb1yhUhbg/UduM=;
 b=BpRyhF3o1dfdBeKq4fuvBsvkFN3osayOhzhnLYdfvqin1nbYeXHCz+CYhrnVOla8GU/oAksy2TpFGIZhTk2NmDJJl1xrQ0HR0fO0iusyqrE8ucJtuMW/sO8GqXW+q3PoKBUWygIDlHGjHm/MYCkUL2NgmwRFHHlSgmE4DJPpnfcuKmGXWEFDpXw/E/t0mud9mA7fSxcSkuUZ3v1SuDYNpYYlOi8+Htsyb6isVMA2HfxE5kVy3NCeQ/7iFxcakYBUSPemOTQSsqaYJBe4bqllyesVuRqerYY+H2mi9t7gu/AxT/qwhHsqZl3wCcJM+glJLBez+KIpwpDVQs+q+soDhw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6kU454m2W3k+QEM9kV0wYwdUPDVwotb1yhUhbg/UduM=;
 b=n5rNNPZDDvXUtN2245noDB1++NeqSpfHLuYwgYLpzun3QnBzuJ8DktVQhSXwr2hcC6s5xbh9h/HvBKLEDjQmqJdbGJue2lmdZEuTmXPgaZP7xwCJe7lpNSYhmmXr+0WWd4DcaP3ACmPQUOk4DOIguMFgqjMvMaXREFzBZ4WrYjI=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: Re: [PATCH v1 06/14] xen/riscv: introduce exception context
Thread-Topic: [PATCH v1 06/14] xen/riscv: introduce exception context
Thread-Index: AQHZLOAEizmhskK0b0Wez8lGfG74vK6ndUcA
Date: Fri, 20 Jan 2023 15:54:44 +0000
Message-ID: <fd276566-6b7d-ea64-a90a-a0c198ccf36c@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <00ecc26833738377003ad21603c198ae4278cfd3.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <00ecc26833738377003ad21603c198ae4278cfd3.1674226563.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BY5PR03MB4935:EE_
x-ms-office365-filtering-correlation-id: df5e30b0-5e09-465b-ae9e-08dafafea65d
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <ED7C7E558690214FA31184660D24DD47@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: df5e30b0-5e09-465b-ae9e-08dafafea65d
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 15:54:44.5984
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4935

On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> diff --git a/xen/arch/riscv/include/asm/processor.h b/xen/arch/riscv/include/asm/processor.h
> new file mode 100644
> index 0000000000..5898a09ce6
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/processor.h
> @@ -0,0 +1,114 @@
> +/* SPDX-License-Identifier: MIT */
> +/******************************************************************************
> + *
> + * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
> + * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@gmail.com>
> + * Copyright 2023 (C) Vates
> + *
> + */
> +
> +#ifndef _ASM_RISCV_PROCESSOR_H
> +#define _ASM_RISCV_PROCESSOR_H
> +
> +#include <asm/types.h>
> +
> +#define RISCV_CPU_USER_REGS_zero        0
> +#define RISCV_CPU_USER_REGS_ra          1
> +#define RISCV_CPU_USER_REGS_sp          2
> +#define RISCV_CPU_USER_REGS_gp          3
> +#define RISCV_CPU_USER_REGS_tp          4
> +#define RISCV_CPU_USER_REGS_t0          5
> +#define RISCV_CPU_USER_REGS_t1          6
> +#define RISCV_CPU_USER_REGS_t2          7
> +#define RISCV_CPU_USER_REGS_s0          8
> +#define RISCV_CPU_USER_REGS_s1          9
> +#define RISCV_CPU_USER_REGS_a0          10
> +#define RISCV_CPU_USER_REGS_a1          11
> +#define RISCV_CPU_USER_REGS_a2          12
> +#define RISCV_CPU_USER_REGS_a3          13
> +#define RISCV_CPU_USER_REGS_a4          14
> +#define RISCV_CPU_USER_REGS_a5          15
> +#define RISCV_CPU_USER_REGS_a6          16
> +#define RISCV_CPU_USER_REGS_a7          17
> +#define RISCV_CPU_USER_REGS_s2          18
> +#define RISCV_CPU_USER_REGS_s3          19
> +#define RISCV_CPU_USER_REGS_s4          20
> +#define RISCV_CPU_USER_REGS_s5          21
> +#define RISCV_CPU_USER_REGS_s6          22
> +#define RISCV_CPU_USER_REGS_s7          23
> +#define RISCV_CPU_USER_REGS_s8          24
> +#define RISCV_CPU_USER_REGS_s9          25
> +#define RISCV_CPU_USER_REGS_s10         26
> +#define RISCV_CPU_USER_REGS_s11         27
> +#define RISCV_CPU_USER_REGS_t3          28
> +#define RISCV_CPU_USER_REGS_t4          29
> +#define RISCV_CPU_USER_REGS_t5          30
> +#define RISCV_CPU_USER_REGS_t6          31
> +#define RISCV_CPU_USER_REGS_sepc        32
> +#define RISCV_CPU_USER_REGS_sstatus     33
> +#define RISCV_CPU_USER_REGS_pregs       34
> +#define RISCV_CPU_USER_REGS_last        35

This block wants moving into the asm-offsets infrastructure, but I
suspect they won't want to survive in this form.

edit: yeah, definitely not this form.  RISCV_CPU_USER_REGS_OFFSET is a
recipe for bugs.

> +
> +#define RISCV_CPU_USER_REGS_OFFSET(x)   ((RISCV_CPU_USER_REGS_##x) * __SIZEOF_POINTER__)
> +#define RISCV_CPU_USER_REGS_SIZE        RISCV_CPU_USER_REGS_OFFSET(last)
> +
> +#ifndef __ASSEMBLY__
> +
> +/* On stack VCPU state */
> +struct cpu_user_regs
> +{
> +    register_t zero;

unsigned long.

> +    register_t ra;
> +    register_t sp;
> +    register_t gp;
> +    register_t tp;
> +    register_t t0;
> +    register_t t1;
> +    register_t t2;
> +    register_t s0;
> +    register_t s1;
> +    register_t a0;
> +    register_t a1;
> +    register_t a2;
> +    register_t a3;
> +    register_t a4;
> +    register_t a5;
> +    register_t a6;
> +    register_t a7;
> +    register_t s2;
> +    register_t s3;
> +    register_t s4;
> +    register_t s5;
> +    register_t s6;
> +    register_t s7;
> +    register_t s8;
> +    register_t s9;
> +    register_t s10;
> +    register_t s11;
> +    register_t t3;
> +    register_t t4;
> +    register_t t5;
> +    register_t t6;
> +    register_t sepc;
> +    register_t sstatus;
> +    /* pointer to previous stack_cpu_regs */
> +    register_t pregs;

Stale comment?  Also, surely this wants to be cpu_user_regs *pregs; ?

> +};
> +
> +static inline void wait_for_interrupt(void)

There's no point writing out the name in longhand for a wrapper around a
single instruction.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 16:02:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 16:02:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481993.747270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItqs-0001T3-EL; Fri, 20 Jan 2023 16:02:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481993.747270; Fri, 20 Jan 2023 16:02:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItqs-0001Sw-BN; Fri, 20 Jan 2023 16:02:50 +0000
Received: by outflank-mailman (input) for mailman id 481993;
 Fri, 20 Jan 2023 16:02:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=islB=5R=redhat.com=imammedo@srs-se1.protection.inumbo.net>)
 id 1pItqq-0001S8-Vc
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 16:02:49 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e05087ad-98db-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 17:02:45 +0100 (CET)
Received: from mail-ed1-f69.google.com (mail-ed1-f69.google.com
 [209.85.208.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-632-d8gPAWXTMzq3olbTxg8uVA-1; Fri, 20 Jan 2023 11:02:43 -0500
Received: by mail-ed1-f69.google.com with SMTP id
 l17-20020a056402255100b00472d2ff0e59so4167091edb.19
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 08:02:42 -0800 (PST)
Received: from imammedo.users.ipa.redhat.com (nat-pool-brq-t.redhat.com.
 [213.175.37.10]) by smtp.gmail.com with ESMTPSA id
 l1-20020a1709063d2100b0086d70b9c023sm8983330ejf.63.2023.01.20.08.02.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 08:02:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e05087ad-98db-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1674230564;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ltffmf7efxRjeBXWrWYsu1MTZJnXzkkEAcN40f2IiyQ=;
	b=QUNmQgrTW+se7kdGGKKlaBQfnVgJj7N5W+bXSgIfY2SNhAPyE7TOat5e+t6tSop0nJOWJ2
	mO1I5d9OqIoXGWY6hAaY6hpmz/K0P6uQEFCTRZg+soB6kRL2Hh9u/puVHkpFYgGBSVfv7s
	8KAnBVbRxbIR/ViLQbESZ5K85p0oGpI=
X-MC-Unique: d8gPAWXTMzq3olbTxg8uVA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=LqK9LshMvEORLrMKTzFodPJbRAbdUgjc/AwkyEARM60=;
        b=4q3gDDekj2uBW9VGUXAqbwGkH+k7x2YrKL/O1INiMx5XMFF7lHVfdbxJue9VB/Vdlc
         WHP0iKWnH2hGLkEj7219Ds94B1VRgjrq4aZmSW01uxBy1K1f/kK38idIidI7p37eGFIM
         HRdnpFLLSTOa4GgOf2I2VevQy9C9Im/kJzKOHc8OmPFEUak+BEoSK6+YBjzjqO7clOho
         jKwOB+bHYAbw29Tb0VnHbfYMIVLiFihFwl2y4K+z/GwLYp8yU5CYbGGmS2jDg+P+u8I3
         Jrrud5feoZPc8KwzLueHv0+zoCIG3rB6NDhb1jtvVsHNpe2j4UpGFV1zipVarwgXv6Hk
         lNyg==
X-Gm-Message-State: AFqh2krSjaplxy+xu+7Y8hdZTj9T8QJOXwPZ06petcFENPoOZjADgTby
	mALAjX9PIJJwY9zaFMfge8WL8T+8EPKf4G8Bxok469tiiIcLuC3zlK1gdDW6hJLmDC7lHhZvVif
	noeiPe5LqgsTrV9FAfeUTssUCL1U=
X-Received: by 2002:a17:906:1f57:b0:872:2cc4:6886 with SMTP id d23-20020a1709061f5700b008722cc46886mr11043970ejk.30.1674230561772;
        Fri, 20 Jan 2023 08:02:41 -0800 (PST)
X-Google-Smtp-Source: AMrXdXthoGh0EHUow3P0wDivX/RlxEqI+itykWmbgiIvnMaGm/zCGmO2sh2c7h+h1DgYquwHUIEv8g==
X-Received: by 2002:a17:906:1f57:b0:872:2cc4:6886 with SMTP id d23-20020a1709061f5700b008722cc46886mr11043949ejk.30.1674230561580;
        Fri, 20 Jan 2023 08:02:41 -0800 (PST)
Date: Fri, 20 Jan 2023 17:01:47 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, Bernhard Beschow
 <shentey@gmail.com>, qemu-devel@nongnu.org, Stefano Stabellini
 <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Paolo Bonzini
 <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org, Philippe
 =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@linaro.org>, Anthony Perard
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230120170147.4f3bbda6@imammedo.users.ipa.redhat.com>
In-Reply-To: <1c7d1fd7-d29b-1ae1-a7f4-0fea811b56c5@aol.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
	<a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
	<20230110030331-mutt-send-email-mst@kernel.org>
	<a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
	<D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
	<9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
	<7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
	<20230112180314-mutt-send-email-mst@kernel.org>
	<128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
	<20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
	<88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
	<20230116163342.467039a0@imammedo.users.ipa.redhat.com>
	<fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
	<20230117113513.4e692539@imammedo.users.ipa.redhat.com>
	<1c7d1fd7-d29b-1ae1-a7f4-0fea811b56c5@aol.com>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.36; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Tue, 17 Jan 2023 09:50:23 -0500
Chuck Zmudzinski <brchuckz@aol.com> wrote:

> On 1/17/2023 5:35 AM, Igor Mammedov wrote:
> > On Mon, 16 Jan 2023 13:00:53 -0500
> > Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> > =20
> > > On 1/16/23 10:33, Igor Mammedov wrote: =20
> > > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > >    =20
> > > >> On 1/13/23 4:33=E2=80=AFAM, Igor Mammedov wrote:   =20
> > > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > >> >      =20
> > > >> >> On 1/12/23 6:03=E2=80=AFPM, Michael S. Tsirkin wrote:     =20
> > > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wr=
ote:       =20
> > > >> >> >> I think the change Michael suggests is very minimalistic: Mo=
ve the if
> > > >> >> >> condition around xen_igd_reserve_slot() into the function it=
self and
> > > >> >> >> always call it there unconditionally -- basically turning th=
ree lines
> > > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem sp=
ecific,
> > > >> >> >> Michael further suggests to rename it to something more gene=
ral. All
> > > >> >> >> in all no big changes required.       =20
> > > >> >> >=20
> > > >> >> > yes, exactly.
> > > >> >> >        =20
> > > >> >>=20
> > > >> >> OK, got it. I can do that along with the other suggestions.    =
 =20
> > > >> >=20
> > > >> > have you considered instead of reservation, putting a slot check=
 in device model
> > > >> > and if it's intel igd being passed through, fail at realize time=
  if it can't take
> > > >> > required slot (with an error directing user to fix command line)?=
     =20
> > > >>=20
> > > >> Yes, but the core pci code currently already fails at realize time
> > > >> with a useful error message if the user tries to use slot 2 for the
> > > >> igd, because of the xen platform device which has slot 2. The user
> > > >> can fix this without patching qemu, but having the user fix it on
> > > >> the command line is not the best way to solve the problem, primarily
> > > >> because the user would need to hotplug the xen platform device via a
> > > >> command line option instead of having the xen platform device added by
> > > >> the pc_xen_hvm_init functions almost immediately after creating the pci
> > > >> bus, and that delay in adding the xen platform device degrades
> > > >> startup performance of the guest.
> > > >>
> > > >> > That could be less complicated than dealing with slot reservations,
> > > >> > at the cost of being less convenient.
> > > >>
> > > >> And also at the cost of reduced startup performance.
> > > >
> > > > Could you clarify how it affects performance (and how much).
> > > > (as I see, setup done at board_init time is roughly the same
> > > > as with '-device foo' CLI options, modulo time needed to parse
> > > > options which should be negligible. and both ways are done before
> > > > guest runs)
> > >
> > > I preface my answer by saying there is a v9, but you don't
> > > need to look at that. I will answer all your questions here.
> > >
> > > I am going by what I observe on the main HDMI display with the
> > > different approaches. With the approach of not patching Qemu
> > > to fix this, which requires adding the Xen platform device a
> > > little later, the length of time it takes to fully load the
> > > guest is increased. I also noticed with Linux guests that use
> > > the grub bootloader, the grub vga driver cannot display the
> > > grub boot menu at the native resolution of the display, which
> > > in the tested case is 1920x1080, when the Xen platform device
> > > is added via a command line option instead of by the
> > > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > > to Qemu, the grub menu is displayed at the full, 1920x1080
> > > native resolution of the display. Once the guest fully loads,
> > > there is no noticeable difference in performance. It is mainly
> > > a degradation in startup performance, not performance once
> > > the guest OS is fully loaded.
> > The above hints at the presence of bug[s] in the igd-passthru implementation,
> > and this patch effectively hides the problem instead of trying to figure
> > out what's wrong and fixing it.
> >
>
> I agree that with the current Qemu there is a bug in the igd-passthru
> implementation. My proposed patch fixes it. So why not support the
> proposed patch with a Reviewed-by tag?
I see the patch as a workaround (it might be a reasonable one) and
it's up to the Xen maintainers to give a Reviewed-by/accept it.

Though I'd add to the commit message something about the performance issues
you are experiencing when trying to pass through the IGD manually,
as one of the justifications for the workaround (it might push the Xen folks
to investigate the issue, and it won't be lost/forgotten on the mailing list).



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 16:02:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 16:02:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.481990.747260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItqj-0001BD-1g; Fri, 20 Jan 2023 16:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 481990.747260; Fri, 20 Jan 2023 16:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItqi-0001B6-V2; Fri, 20 Jan 2023 16:02:40 +0000
Received: by outflank-mailman (input) for mailman id 481990;
 Fri, 20 Jan 2023 16:02:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pItqh-0001Aw-TL; Fri, 20 Jan 2023 16:02:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pItqh-0001qa-Pa; Fri, 20 Jan 2023 16:02:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pItqh-0003bl-9U; Fri, 20 Jan 2023 16:02:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pItqh-0000rA-91; Fri, 20 Jan 2023 16:02:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B9ZAObwNdUY/u+w92AYmSpNRQNpeS2vubHTH7JKg+3c=; b=cw/ueXjbINwEmi9e/vsQBiNJmU
	vuhB51bCtCQonH+2EhQiP7zsk4uhec4gb40/4MEj0fhEC9bzGoay3p7xWtLFy7wsxlaD0FtvLTq90
	SZDotGlP/Bia8aB0m35GYUjohYXWTUVolAtKoLQc9iatXpgBI8vwwzMleiDXCjqfjbJo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175992-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 175992: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d368967cb1039b5c4cccb62b5a4b9468c50cd143
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 16:02:39 +0000

flight 175992 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175992/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                d368967cb1039b5c4cccb62b5a4b9468c50cd143
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  104 days
Failing since        173470  2022-10-08 06:21:34 Z  104 days  213 attempts
Testing same since   175992  2023-01-20 06:11:32 Z    0 days    1 attempts

------------------------------------------------------------
3375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 516960 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 16:03:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 16:03:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482001.747280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItrh-0002F4-Od; Fri, 20 Jan 2023 16:03:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482001.747280; Fri, 20 Jan 2023 16:03:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pItrh-0002Ex-LM; Fri, 20 Jan 2023 16:03:41 +0000
Received: by outflank-mailman (input) for mailman id 482001;
 Fri, 20 Jan 2023 16:03:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nUaQ=5R=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pItrg-0002Dj-5c
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 16:03:40 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2048.outbound.protection.outlook.com [40.107.223.48])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ff3bb2b1-98db-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 17:03:37 +0100 (CET)
Received: from DM6PR06CA0007.namprd06.prod.outlook.com (2603:10b6:5:120::20)
 by DM4PR12MB5722.namprd12.prod.outlook.com (2603:10b6:8:5d::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.26; Fri, 20 Jan
 2023 16:03:34 +0000
Received: from DM6NAM11FT088.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:120:cafe::83) by DM6PR06CA0007.outlook.office365.com
 (2603:10b6:5:120::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27 via Frontend
 Transport; Fri, 20 Jan 2023 16:03:34 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT088.mail.protection.outlook.com (10.13.172.147) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Fri, 20 Jan 2023 16:03:34 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 20 Jan
 2023 10:03:33 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 20 Jan 2023 10:03:31 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff3bb2b1-98db-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NQXdluOwXLyM5mVhZXeEJP0pNTQhCdSkwNcuO1qK/QTn+i6MEiBE2GgTDaCdN6zOkNQn0/cGZiexGw2kB0aNrzCIx1wm6RromiLVjPBp5v0zJWuYRTPfNMDeY8NwK0nXyF+tbnUaxKxpactFE/J4UZ/TkEOVsd+IVO72979+xsIPTcCaOTk73wsTpnvRsXXwAuUok7Wg+uUHmz1lDp+C0ddDH1j33vvf8p8OYtl1mtl1o4sYvcqass8jlP8tUdqHBXTae6Tar8wqmjBcpzrjPvs+BTX8Ytuai0LDdsGPbhI9sveOV00rJ+57vssHVh3OuWpg/yxXQI2qb8mnL478KA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WAYvzcLgC+VmA+eUVGn/Va5dzSjMM1zIXoiSSZqCQC8=;
 b=PXtC9VAWvLtFBZzVXdwRaHQupw+KFwpg+VGPCrl0Rgo0/F8DlPxmhpgAmXokMAjX+OrIPLcoPqqPPfWlfYMnh/+DQhUsyztSTZExSyKZHpuxxUwis1Ofl4DKvoShcSvli45cZRgkIOrP9auxvBFMZHuotpCw+csSA5TsDrn7MBeryTl3k+rrbJaEEzThStxtA2B5hAjWG/jucTAJjE6uQF4HXx54fk82hDDB2l0EVvZYAjcHMsgABStMLpPjQdegYMYNbWeetrQtUYn/fwezHovX9RfH3XdAHilIMRd6VCwFPWJQq1zqhQuBuEc3SLql2FfT4mx4V3zCNePalCrRVw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WAYvzcLgC+VmA+eUVGn/Va5dzSjMM1zIXoiSSZqCQC8=;
 b=SGxV4kDM+t3COtVNPINA66hs+muGhUEt1CoD7hLifQzAOzI6iFJEzaetR81qUhYJbnZQA+Z0gGy1uYKVaAh+xUAY/klt/0lby6eN8+BXhbM30cHwknNC6N5LW8UYDflyDRw8afpQEXdBPv1ovagnY8JF8V/yfPHA+vW46aLH2UQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <2017e0d4-dd02-e81d-99f4-1ef47fc9e774@amd.com>
Date: Fri, 20 Jan 2023 17:03:31 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Ayan Kumar Halder <ayan.kumar.halder@amd.com>
CC: <xen-devel@lists.xenproject.org>, <stefano.stabellini@amd.com>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-3-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
 <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
 <ba37ee02-c07c-2803-0867-149c779890b6@amd.com>
 <cd673f97-9c0d-286b-e973-7a85c84dd576@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <cd673f97-9c0d-286b-e973-7a85c84dd576@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT088:EE_|DM4PR12MB5722:EE_
X-MS-Office365-Filtering-Correlation-Id: 2b6f4354-922a-44ca-3b2b-08dafaffe200
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BDb5TJbB+3pP/ULoobozaBrXNRKWrJGcpFOIItFIgDhT4RxyZ3vF9UdPYVWqxcyhNnfNqnPmV0Y9CfLYEBdgftTduifjumpVJTHlS3c8gK4c+bEu7AA5khSdVclCmmfZEt90YaCr2ia7VxaTSriaAOvxTyIRYkIszGX+E5x2oKFtIzLWUAlKusDzBe4xaTXILcX002hTAC62xAscqO6fVdbuIRnh1i3pYhxdexyg94Cj48AoJCDBLhFowOJ73WyHZLcNgtnSYZ29onaeripYQdd9wX9AwjsqrS9CSB9k5k0JvFjH5PM2/26IpvfiMMW9Ew5FYT0Tmz0fsx3XrghvAdKO9nR9hPQJMk1v6QvwwCAkzJmhi176/kO/i6Fw7E9g//6iMLblLJ0Tpn12MIw6E9kQI39SjpQTUGJFKMSmJrJqTTDiDsdIDRdOiAJVOGxvypBhSTN+WccOp5vfim4rvWLlXdMCc/0p6OI4SOeXAKY8vmenGDCsVpQYzmFkMgIYxViAs5SbK+pFlx7LnLj+Krf+Jrp2CiAEnUhTXIjByJV8T9zBUzxmFXPPGqkmQOVwwl7oUJletOWc58WJDEyCcQy//JoYPWzwzUcne3WBdKe0Oo/DkavWuEE4vW4QAoVBtnyyuFT5XS3jEI4fvvYBkrN+9RaYCMwyO9IxZhNOM7HbmjlBuHKvLUzA4fihJPqeuJcSPVXV1hleEx7k0ct5ivZRVU/THbu0TdFSG7goXHW9P5qkkkjJxvwWmIjMUDMZXJ3s0eU7LAYfK8PAHqOpBqH7BZ0+pu4e5yvJV0DiQBzbLS+oV6UcEa3qsIqJf5MuSoiXjJAoXJHutborZZMaLw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(346002)(396003)(376002)(136003)(39860400002)(451199015)(36840700001)(40470700004)(46966006)(81166007)(31686004)(478600001)(53546011)(356005)(82740400003)(186003)(26005)(86362001)(966005)(31696002)(82310400005)(83380400001)(41300700001)(36860700001)(2906002)(44832011)(8936002)(16576012)(110136005)(54906003)(6636002)(2616005)(316002)(5660300002)(426003)(4326008)(70206006)(70586007)(8676002)(336012)(47076005)(40480700001)(40460700003)(36756003)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jan 2023 16:03:34.0835
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b6f4354-922a-44ca-3b2b-08dafaffe200
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT088.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5722

Hi Julien,

On 20/01/2023 16:09, Julien Grall wrote:
> 
> 
> On 20/01/2023 14:40, Michal Orzel wrote:
>> Hello,
> 
> Hi,
> 
>>
>> On 20/01/2023 10:32, Julien Grall wrote:
>>>
>>>
>>> Hi,
>>>
>>> On 19/01/2023 22:54, Stefano Stabellini wrote:
>>>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>>>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
>>>>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>>>>> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
>>>>> address.
>>>>>
>>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>>>
>>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>>
>>>
>>> I have committed the patch.
>> The CI test jobs (static-mem) failed on this patch:
>> https://gitlab.com/xen-project/xen/-/pipelines/752911309
> 
> Thanks for the report.
> 
>>
>> I took a look at it and this is because in the test script we
>> try to find a node whose unit-address does not have leading zeroes.
>> However, after this patch, we switched to PRIpaddr which is defined as 016lx/016llx and
>> we end up creating nodes like:
>>
>> memory@0000000050000000
>>
>> instead of:
>>
>> memory@50000000
>>
>> We could modify the script,
> 
> TBH, I think it was a mistake for the script to rely on how Xen describes
> the memory banks in the Device-Tree.
> 
> For instance, from my understanding, it would be valid for Xen to create
> a single node for all the banks, or even to omit the unit-address if
> there is only one bank.
> 
>> but do we really want to create nodes
>> with leading zeroes? The dt spec does not mention it, although [1]
>> specifies that the Linux convention is not to have leading zeroes.
> 
> Reading through the spec in [2], it suggests the current naming is
> fine. That said, the example matches the Linux convention (I guess that's
> not surprising...).
> 
> I am open to removing the leading zeroes. However, I think the CI also
> needs to be updated (see above why).
Yes, the CI needs to be updated as well.
I'm in favor of removing the leading zeroes because this is what Xen uses in
all the other places when creating nodes (or copying them from the host dtb),
including xl when creating the dtb for domUs. Having a mismatch may be
confusing, and a unit-address with leading zeroes looks unusual.

> 
> Cheers,
> 
> [2] https://www.devicetree.org/
> 
> --
> Julien Grall

~Michal



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 16:54:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 16:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482009.747290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIueF-0007eD-E8; Fri, 20 Jan 2023 16:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482009.747290; Fri, 20 Jan 2023 16:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIueF-0007e6-BE; Fri, 20 Jan 2023 16:53:51 +0000
Received: by outflank-mailman (input) for mailman id 482009;
 Fri, 20 Jan 2023 16:53:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqBo=5R=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIueE-0007e0-AF
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 16:53:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0147c130-98e3-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 17:53:47 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8B45B61FC5;
 Fri, 20 Jan 2023 16:53:46 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E8147C433EF;
 Fri, 20 Jan 2023 16:53:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0147c130-98e3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674233626;
	bh=/ZTGcC6OrH01JRw2wdYJEThHDa4K6dER1no9buA1bHs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=r+Cb/yQr9nPdDs/nby1frdhjelxc1pMnMn9JEHxlbbJIuFF0ATbfpGf0Pt5NmO/zZ
	 4JQFFP99pet7ooHmr6sQ4Fk4yhqFrb8faim5JKP2lk7VtnJ9gJTF1DGbgN588ZgH0G
	 VKVsfBcEtaJ41ccv2wlx2Ac5n2y4M1no/14VcM80cKH2j3yw+QAV1Lx2ykHzdBljNU
	 uL1+dcfmNY6VN95KrWxztv4He0LjXYdjV4RfHeoySatOgwAnSHMaclK9nELzRbp0C1
	 2yw/adVBSEyyFZseyxOwDhJXOA5otF5zfjO4T124DW/PgPLHyBmw+6lexxsMq+2XA7
	 z+XqJX25sD7VA==
Date: Fri, 20 Jan 2023 08:53:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
    Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
Subject: Re: [XEN v2 10/11] xen/arm: Restrict zeroeth_table_offset for
 ARM_64
In-Reply-To: <6743ca84-e54e-23be-575f-899a8770d523@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301200853330.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-11-ayan.kumar.halder@amd.com> <alpine.DEB.2.22.394.2301191553570.731018@ubuntu-linux-20-04-desktop> <6743ca84-e54e-23be-575f-899a8770d523@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 Jan 2023, Julien Grall wrote:
> > >   #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
> > > +#else
> > > +#define zeroeth_table_offset(va)  0
> > 
> > Rather than 0 it might be better to have 32, hence zeroing the input
> > address
> I don't understand why you suggest 32. The macro is meant to return the index
> in the 0th table. So return 0 is correct here.

This suggestion was a mistake; 0 is fine.


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 16:58:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 16:58:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482014.747300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIuiD-0008FP-U2; Fri, 20 Jan 2023 16:57:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482014.747300; Fri, 20 Jan 2023 16:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIuiD-0008FI-RI; Fri, 20 Jan 2023 16:57:57 +0000
Received: by outflank-mailman (input) for mailman id 482014;
 Fri, 20 Jan 2023 16:57:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqBo=5R=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIuiC-0008F9-L1
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 16:57:56 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 949c208d-98e3-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 17:57:55 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 536BC61FF6;
 Fri, 20 Jan 2023 16:57:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BA198C43443;
 Fri, 20 Jan 2023 16:57:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 949c208d-98e3-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674233872;
	bh=rshliT6W9whYI/W69MejgSTc2GK1+IatbbNou9Lkc/w=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZVvZqRbh0sVcOy9/szyduSEGhMEDY1c+or4Jkmy8//rYqawYtnIO+M/pD+D0xL4Se
	 p/LAKRUs93sic37Jo8Ga2xdzpmzu2F//+kQXO3Vy4wadC1UJH1b3R6Tr2CeRqvSEDV
	 zeylM+BMirmOWDbM3a/V/6o+8+fJoWqiEdiIzD5Lr4STtvBHpL7BOvNlxi8FCcnWJ/
	 uYraWnuC1HCHq4UhE3YW1DvqExyGKW4OiYb5Z5cTyOFbkUomvoURrMsb4xTlmTWotg
	 VT0Zir0duBQy4zLzNjbWNrKf6ztyRKKP+PvGV6VfzQ92+t7TakJDrHOsgynz1nNmfN
	 YbbfvPAwcJXVA==
Date: Fri, 20 Jan 2023 08:57:47 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Chuck Zmudzinski <brchuckz@aol.com>
cc: Igor Mammedov <imammedo@redhat.com>, 
    Chuck Zmudzinski <brchuckz@netscape.net>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, 
    Paolo Bonzini <pbonzini@redhat.com>, 
    Richard Henderson <richard.henderson@linaro.org>, 
    Eduardo Habkost <eduardo@habkost.net>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    xen-devel@lists.xenproject.org, 
    =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Thomas Huth <thuth@redhat.com>, 
    Eric Auger <eric.auger@redhat.com>, 
    Alex Williamson <alex.williamson@redhat.com>, Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
In-Reply-To: <b6f7d6dd-3b9b-2cc7-32ab-8521802e1fed@aol.com>
Message-ID: <alpine.DEB.2.22.394.2301200855390.731018@ubuntu-linux-20-04-desktop>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com> <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com> <20230110030331-mutt-send-email-mst@kernel.org> <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com> <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com> <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com> <20230112180314-mutt-send-email-mst@kernel.org> <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com> <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com> <20230116163342.467039a0@imammedo.users.ipa.redhat.com> <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net> <20230117120416.0aa041d6@imammedo.users.ipa.redhat.com>
 <b6f7d6dd-3b9b-2cc7-32ab-8521802e1fed@aol.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1193688626-1674233782=:731018"
Content-ID: <alpine.DEB.2.22.394.2301200857030.731018@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1193688626-1674233782=:731018
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2301200857031.731018@ubuntu-linux-20-04-desktop>

On Tue, 17 Jan 2023, Chuck Zmudzinski wrote:
> On 1/17/2023 6:04 AM, Igor Mammedov wrote:
> > On Mon, 16 Jan 2023 13:00:53 -0500
> > Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> >
> > > On 1/16/23 10:33, Igor Mammedov wrote:
> > > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > >   
> > > >> On 1/13/23 4:33 AM, Igor Mammedov wrote:  
> > > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > >> >     
> > > >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:    
> > > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:      
> > > >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> > > >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> > > >> >> >> always call it there unconditionally -- basically turning three lines
> > > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> > > >> >> >> Michael further suggests to rename it to something more general. All
> > > >> >> >> in all no big changes required.      
> > > >> >> > 
> > > >> >> > yes, exactly.
> > > >> >> >       
> > > >> >> 
> > > >> >> OK, got it. I can do that along with the other suggestions.    
> > > >> > 
> > > >> > have you considered instead of reservation, putting a slot check in device model
> > > >> > and if it's intel igd being passed through, fail at realize time  if it can't take
> > > >> > required slot (with a error directing user to fix command line)?    
> > > >> 
> > > >> Yes, but the core pci code currently already fails at realize time
> > > >> with a useful error message if the user tries to use slot 2 for the
> > > >> igd, because of the xen platform device which has slot 2. The user
> > > >> can fix this without patching qemu, but having the user fix it on
> > > >> the command line is not the best way to solve the problem, primarily
> > > >> because the user would need to hotplug the xen platform device via a
> > > >> command line option instead of having the xen platform device added by
> > > >> pc_xen_hvm_init functions almost immediately after creating the pci
> > > >> bus, and that delay in adding the xen platform device degrades
> > > >> startup performance of the guest.
> > > >>   
> > > >> > That could be less complicated than dealing with slot reservations at the cost of
> > > >> > being less convenient.    
> > > >> 
> > > >> And also a cost of reduced startup performance  
> > > > 
> > > > Could you clarify how it affects performance (and how much).
> > > > (as I see, setup done at board_init time is roughly the same
> > > > as with '-device foo' CLI options, modulo time needed to parse
> > > > options which should be negligible. and both ways are done before
> > > > guest runs)  
> > > 
> > > I preface my answer by saying there is a v9, but you don't
> > > need to look at that. I will answer all your questions here.
> > > 
> > > I am going by what I observe on the main HDMI display with the
> > > different approaches. With the approach of not patching Qemu
> > > to fix this, which requires adding the Xen platform device a
> > > little later, the length of time it takes to fully load the
> > > guest is increased. I also noticed with Linux guests that use
> > > the grub bootloader, the grub vga driver cannot display the
> > > grub boot menu at the native resolution of the display, which
> > > in the tested case is 1920x1080, when the Xen platform device
> > > is added via a command line option instead of by the
> > > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > > to Qemu, the grub menu is displayed at the full, 1920x1080
> > > native resolution of the display. Once the guest fully loads,
> > > there is no noticeable difference in performance. It is mainly
> > > a degradation in startup performance, not performance once
> > > the guest OS is fully loaded.
> >
> > Looking at igd-assign.txt, it recommends to add IGD using '-device' CLI
> > option, and actually drop at least graphics defaults explicitly.
> > So it is expected to work fine even when IGD is constructed with
> > '-device'.
> >
> > Could you provide full CLI current xen starts QEMU with and then
> > a CLI you used (with explicit -device for IGD) that leads
> > to reduced performance?
> >
> > CCing vfio folks who might have an idea what could be wrong based
> > on vfio experience.
> 
> Actually, the igd is not added with an explicit -device option using Xen:
> 
>    1573 ?        Ssl    0:42 /usr/bin/qemu-system-i386 -xen-domid 1 -no-shutdown -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-1,server,nowait -mon chardev=libxl-cmd,mode=control -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-1,server,nowait -mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config -name windows -vnc none -display none -serial pty -boot order=c -smp 4,maxcpus=4 -net none -machine xenfv,max-ram-below-4g=3758096384,igd-passthru=on -m 6144 -drive file=/dev/loop0,if=ide,index=0,media=disk,format=raw,cache=writeback -drive file=/dev/disk/by-uuid/A44AA4984AA468AE,if=ide,index=1,media=disk,format=raw,cache=writeback
> 
> I think it is added by xl (libxl management tool) when the guest is created
> using the qmp-libxl socket that appears on the command line, but I am not 100
> percent sure. So, with libxl, the command line alone does not tell the whole
> story. The xl.cfg file has a line like this to define the pci devices passed through,
> and in qemu they are type XEN_PT devices, not VFIO devices:
> 
> pci = [ '00:1b.0','00:14.0','00:02.0@02' ]
> 
> This means three host pci devices are passed through, the ones on the
> host at slot 1b.0, 14.0, and 02.0. Of course the device at 02 is the igd.
> The @02 means libxl is requesting slot 2 in the guest for the igd, the
> other 2 devices are just auto assigned a slot by Qemu. Qemu cannot
> assign the igd to slot 2 for xenfv machines without a patch that prevents
> the Xen platform device from grabbing slot 2. That is what this patch
> accomplishes.

In principle I think this change is OK. Apologies that this patch is at
v9 and none of the Xen/QEMU maintainers took a look at it yet. I'll try
to look at it today.
--8323329-1193688626-1674233782=:731018--


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 17:02:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 17:02:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482021.747310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIumW-0001KB-Jd; Fri, 20 Jan 2023 17:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482021.747310; Fri, 20 Jan 2023 17:02:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIumW-0001K4-Gq; Fri, 20 Jan 2023 17:02:24 +0000
Received: by outflank-mailman (input) for mailman id 482021;
 Fri, 20 Jan 2023 17:02:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6CDk=5R=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1pIumV-0001Jy-H3
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 17:02:23 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34bc3a7a-98e4-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 18:02:22 +0100 (CET)
Received: by mail-ed1-x533.google.com with SMTP id x10so7485165edd.10
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 09:02:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34bc3a7a-98e4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=P/reDGfLQUDu+XX9ekjpDd8Mv6YCEE5/O+FBfPa46zo=;
        b=CW//LJmHlo/DnU1omjb5W5cyEhO1E67hO2RQcJljiWNblOB5PGWdkob4XYkd2MIAoR
         /Xcjhy9LU40Wnr8jeJLFvNpFSC/t5uIfhxdq+dZbg+jCVnbKlHRPEUdzarQEQFINPzfL
         qMJhXvT6AaBZr/YjWFjn7PspxsQWkXarjt4jk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=P/reDGfLQUDu+XX9ekjpDd8Mv6YCEE5/O+FBfPa46zo=;
        b=wYr+TLVg8KQ/Mi7ISzV5IQa36zc1H3j4MXROgKM6vLoLjvSgqst+p9Qyj63b+V+OfJ
         Cxh2a6v0c9X+Pq3XCA1oYv6tclZhz2g1uUjg6ewgvZJ/Zo6Wv1OPRP3Ka/eQaHK6MHCP
         CWlBS86d42VtxgEQUH5qfL6X4WFX+1IwrpgSgFT+V1+gL18LF9OluI/u33lw94hbaLHm
         xWnNQJi9kgzJOtZJ0nNNy44GMyFEBl0cOxN3vqPwwStgxW6f9J0dAzodZ+kJC4Lp03W7
         l3TqKpoBGfcZF5yo19xlyvR0XyeFKjUji0poqfQhc1wVa22Ph8xL+SLaW8haqeWVCSxZ
         TPjA==
X-Gm-Message-State: AFqh2koop9Adh7yujLtspW5i2c2WDoGID3LjTShQnicgtdP711nZvb1e
	/3xV8AqnMcN5FY1tccZskjs+HvQbxrX7Fn8DYkKHrA==
X-Google-Smtp-Source: AMrXdXuxL2yfe5gt2d0Y/WkhfgT5bASVojcep7tLJZNOLYPZpUMZpngFWW0bn8AVQQRgWeFSEkfq7p0uB82yW8lsx6U=
X-Received: by 2002:aa7:db8b:0:b0:496:e060:44ff with SMTP id
 u11-20020aa7db8b000000b00496e06044ffmr1787457edt.226.1674234142096; Fri, 20
 Jan 2023 09:02:22 -0800 (PST)
MIME-Version: 1.0
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com> <d91b5315-a5bb-a6ee-c9bb-58974c733a4e@suse.com>
In-Reply-To: <d91b5315-a5bb-a6ee-c9bb-58974c733a4e@suse.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Fri, 20 Jan 2023 17:02:11 +0000
Message-ID: <CA+zSX=ZVK_7xpgraJyC3__uORqXo8F9Atj9gCF+oO7OyfRrtYg@mail.gmail.com>
Subject: Re: [PATCH v2 1/9] x86/shadow: replace sh_reset_l3_up_pointers()
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Content-Type: multipart/alternative; boundary="0000000000004f97b105f2b5033a"

--0000000000004f97b105f2b5033a
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 11, 2023 at 1:52 PM Jan Beulich <jbeulich@suse.com> wrote:

> Rather than doing a separate hash walk (and then even using the vCPU
> variant, which is to go away), do the up-pointer-clearing right in
> sh_unpin(), as an alternative to the (now further limited) enlisting on
> a "free floating" list fragment. This utilizes the fact that such list
> fragments are traversed only for multi-page shadows (in shadow_free()).
> Furthermore sh_terminate_list() is a safe guard only anyway, which isn't
> in use in the common case (it actually does anything only for BIGMEM
> configurations).
>

One thing that seems strange about this patch is that you're essentially
adding a field to the domain shadow struct in lieu of adding another
argument to sh_unpin() (unless the bit is referenced elsewhere in
subsequent patches, which I haven't reviewed, in part because about half of
them don't apply cleanly to the current tree).

 -George

--0000000000004f97b105f2b5033a--


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 17:12:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 17:12:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482027.747320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIuvs-0002nZ-GK; Fri, 20 Jan 2023 17:12:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482027.747320; Fri, 20 Jan 2023 17:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIuvs-0002nS-DN; Fri, 20 Jan 2023 17:12:04 +0000
Received: by outflank-mailman (input) for mailman id 482027;
 Fri, 20 Jan 2023 17:12:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIuvr-0002mz-5V; Fri, 20 Jan 2023 17:12:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIuvr-0003Nm-3B; Fri, 20 Jan 2023 17:12:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIuvq-0006yt-QQ; Fri, 20 Jan 2023 17:12:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIuvq-0000oq-Q1; Fri, 20 Jan 2023 17:12:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BYQzHe/KJUriQlr3CawW/UOHH5sj/pjSxmHduOoIRwE=; b=6BPvnxFncxE/4TDbgzJCCnLlsB
	Y200NapPPbgCCilAWlJzXcmf6p2Y979nbdoX6GijHKZhA651AYK8Wv76bx4OoCECSF95S14BcL/lW
	7Kft6v6bmfDSnGULBvZP2wXKmXzwA2EF8myFPO7KowNtmee81SlPnRBUilNuVi972YqU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176000-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176000: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=bf5678b5802685e07583e3c7ec56d883cbdd5da3
X-Osstest-Versions-That:
    ovmf=51411435d559c55eaf38c02baf5d76da78bb658d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 17:12:02 +0000

flight 176000 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176000/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 bf5678b5802685e07583e3c7ec56d883cbdd5da3
baseline version:
 ovmf                 51411435d559c55eaf38c02baf5d76da78bb658d

Last test of basis   175981  2023-01-19 12:43:49 Z    1 days
Testing same since   176000  2023-01-20 14:10:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiewen Yao <Jiewen.yao@Intel.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   51411435d5..bf5678b580  bf5678b5802685e07583e3c7ec56d883cbdd5da3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 17:22:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 17:22:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482034.747330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIv5u-0004Im-Gc; Fri, 20 Jan 2023 17:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482034.747330; Fri, 20 Jan 2023 17:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIv5u-0004If-DX; Fri, 20 Jan 2023 17:22:26 +0000
Received: by outflank-mailman (input) for mailman id 482034;
 Fri, 20 Jan 2023 17:22:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIv5t-0004IV-6R; Fri, 20 Jan 2023 17:22:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIv5t-0003iJ-3p; Fri, 20 Jan 2023 17:22:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIv5s-0007KW-Mi; Fri, 20 Jan 2023 17:22:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIv5s-0005wW-MG; Fri, 20 Jan 2023 17:22:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IXrDIFbpx+6o6T+W6w/FDgZlvuRSdvuiOiV3EI9NWcY=; b=YJXIPpu8b0LMAJttyOx24rQtbJ
	rnToyF7oaIcTp18kjips4nvlgvbEk8wD4Mw4P/Avs3juUwvKnUHqjeXX90JKeYoZf4ijcZDiZoSHf
	aX06b8xtJUL5jnh+qet2ceV33RzbCgzXp2qSpniEztvCzxtk5ispG3X6M5oL5OIidLl0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175994-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 175994: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 17:22:24 +0000

flight 175994 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175994/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt      8 xen-boot                   fail pass in 175987
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 20 guest-start/debianhvm.repeat fail pass in 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install    fail pass in 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install     fail pass in 175987

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install   fail in 175987 like 175965
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 175987 like 175965
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop   fail in 175987 like 175965
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 175987 like 175965
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 175987 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175987
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175987
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175987
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175987
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175987
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175987
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 17:36:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 17:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482043.747340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIvJJ-00060C-SB; Fri, 20 Jan 2023 17:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482043.747340; Fri, 20 Jan 2023 17:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIvJJ-000605-PT; Fri, 20 Jan 2023 17:36:17 +0000
Received: by outflank-mailman (input) for mailman id 482043;
 Fri, 20 Jan 2023 17:36:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIvJI-0005zz-5k
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 17:36:16 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ee1d2d07-98e8-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 18:36:12 +0100 (CET)
Received: from mail-mw2nam12lp2048.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 12:36:05 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6865.namprd03.prod.outlook.com (2603:10b6:303:1b5::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Fri, 20 Jan
 2023 17:36:03 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 17:36:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee1d2d07-98e8-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674236172;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=oqEnxtvyof8znt0qn7lvlru2ub9fDToLj+p44VLQhUY=;
  b=QZKAlrfza1xGsQoy4NHCz46pXK6O6ibJpbB25b8+y1A2poSiUyuwyMyu
   FeqHgRCHb8i+VDZ7H13JzDOanbwdc8CCNP41OMEnf7oIeqNtTLp1uPJUa
   1NBPEKJ98QY7izse0UXurvK5ryDOQ8Q+CHISRqErTOI6pjmPaqDxpUbqm
   Q=;
X-IronPort-RemoteIP: 104.47.66.48
X-IronPort-MID: 96001523
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,232,1669093200"; 
   d="scan'208";a="96001523"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JdKWkmoyho6462BUo8Mre7WMMj5VUDB2zZp5iMvjk45WIm1fVP87eRfUhVk/tnIIPT7fjX1ygLwxvM5+mYERqrAkkf0nEBi2PmUi8raQzboChLiz4XIjFYtHXOWnUQD7qJovzBdj6MgjO/I1m7z+oRkOCN99XmGHkQOnDYj0JqtCwPlauV9R1b9wIECcHbAezLJiYfOFBfotAkr6PQKImb/WqdsAPJsuuF09n6I/Vk8zIs1CmAHDg4JDzWHG3F0o4Z015eACdY+Z5FPKLzi6SFh5mGTl95HUsZ0Ef7V1RRPHQfiwxIrtQGFTDH3W/k6QR6BL/0Ia2VJqvXA8CeiISw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oqEnxtvyof8znt0qn7lvlru2ub9fDToLj+p44VLQhUY=;
 b=TMFnNcx9SG/SP33U9azhaERHHLPShaTH1ytatSmXweLGXcPgsp3R1DfcyjsB1WebldSVaA0/fuB1px/s+99PD+fgA10opwdEnTCS6ybjjlqs+hf29b9YhAjqFmHhmgdiJxXuB+6/SjFgvJ1zdc1vOzsH6NnMa8m8qSYx8/mRtg6MtF+u0MxJk+CCtarAw3xOVmvoiICt5sy97LUoRlMTepuiIBI3S9sfh3zm6Sn+vd2PlD/C+JyEgucc29IH4Gp1w+1wWTjPvQfFe5gi79vYciYVXo0+6r35IwTDVyhsEdU6BlZ0WZAu+XVPJmOD6VhoLoQlO5j4+uKqgFmIvjHMXw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oqEnxtvyof8znt0qn7lvlru2ub9fDToLj+p44VLQhUY=;
 b=rmh5HIiT6oid+UAktE/0mHowXu97orF0AqkgusRsyu8FK/OZUln/AMN1WKorhRjv/rrk64fIJNk60TNxcAP/2KoWheHPbFhY0yij49WGh8+ZnlvV7ZpUrkDOFj94Wwdv/XR1rEzFq9uOgbow0FFc/ykylpc7gtG3yOdKEmAg9KA=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Thread-Topic: [PATCH v2 9/9] x86/shadow: harden shadow_size()
Thread-Index:
 AQHZJcS6xQlTtpJxX0O79yAhgOZJ+66Z2cOAgACwbwCAAAw+gIAAA0qAgAhqSQCAAT6YAIADXTAA
Date: Fri, 20 Jan 2023 17:36:02 +0000
Message-ID: <97f74b0a-5d70-9f8b-1e39-f8ff22650984@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <9bea51eb-4fbd-b061-52d7-c6c234d060a1@suse.com>
 <c5d201ac-89ca-6baa-d685-5bef2497183f@citrix.com>
 <a438f16b-7d45-d7e4-2191-4ed7b2077785@suse.com>
 <71e7ba34-f1ea-16fd-ec01-bb2aa454270c@citrix.com>
 <b49793f8-55be-0746-815d-ab7b627d3baa@suse.com>
 <733edba4-6913-97a4-f949-4f8899a3bba9@citrix.com>
 <58790f2f-ba46-df0c-2620-370e3994faea@suse.com>
In-Reply-To: <58790f2f-ba46-df0c-2620-370e3994faea@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|MW4PR03MB6865:EE_
x-ms-office365-filtering-correlation-id: a1593ce8-7dfa-4147-a088-08dafb0ccd5d
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <596681536EF28C4FAE8797F56AABD189@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a1593ce8-7dfa-4147-a088-08dafb0ccd5d
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 17:36:02.9834
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: OtrBjM/krzXDiIFopx5plv9UD2VW21o0XRDA2LsrahPGZGLTeSorfrHHAhMgscDrb+SwM3UemrCCQp+h9r4DD9A25XM4JIWN+It0+L9Af0s=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6865

T24gMTgvMDEvMjAyMyAyOjEzIHBtLCBKYW4gQmV1bGljaCB3cm90ZToNCj4gT24gMTcuMDEuMjAy
MyAyMDoxMywgQW5kcmV3IENvb3BlciB3cm90ZToNCj4+IE9uIDEyLzAxLzIwMjMgMTA6NDIgYW0s
IEphbiBCZXVsaWNoIHdyb3RlOg0KPj4+IE9uIDEyLjAxLjIwMjMgMTE6MzEsIEFuZHJldyBDb29w
ZXIgd3JvdGU6DQo+Pj4+IE9uIDEyLzAxLzIwMjMgOTo0NyBhbSwgSmFuIEJldWxpY2ggd3JvdGU6
DQo+Pj4+PiBPbiAxMi4wMS4yMDIzIDAwOjE1LCBBbmRyZXcgQ29vcGVyIHdyb3RlOg0KPj4+Pj4+
IE9uIDExLzAxLzIwMjMgMTo1NyBwbSwgSmFuIEJldWxpY2ggd3JvdGU6DQo+Pj4+Pj4+IE1ha2Ug
SFZNPXkgcmVsZWFzZSBidWlsZCBiZWhhdmlvciBwcm9uZSBhZ2FpbnN0IGFycmF5IG92ZXJydW4s
IGJ5DQo+Pj4+Pj4+IChhYil1c2luZyBhcnJheV9hY2Nlc3Nfbm9zcGVjKCkuIFRoaXMgaXMgaW4g
cGFydGljdWxhciB0byBndWFyZCBhZ2FpbnN0DQo+Pj4+Pj4+IGUuZy4gU0hfdHlwZV91bnVzZWQg
bWFraW5nIGl0IGhlcmUgdW5pbnRlbnRpb25hbGx5Lg0KPj4+Pj4+Pg0KPj4+Pj4+PiBTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+DQo+Pj4+Pj4+IC0tLQ0KPj4+
Pj4+PiB2MjogTmV3Lg0KPj4+Pj4+Pg0KPj4+Pj4+PiAtLS0gYS94ZW4vYXJjaC94ODYvbW0vc2hh
ZG93L3ByaXZhdGUuaA0KPj4+Pj4+PiArKysgYi94ZW4vYXJjaC94ODYvbW0vc2hhZG93L3ByaXZh
dGUuaA0KPj4+Pj4+PiBAQCAtMjcsNiArMjcsNyBAQA0KPj4+Pj4+PiAgLy8gYmVlbiBpbmNsdWRl
ZC4uLg0KPj4+Pj4+PiAgI2luY2x1ZGUgPGFzbS9wYWdlLmg+DQo+Pj4+Pj4+ICAjaW5jbHVkZSA8
eGVuL2RvbWFpbl9wYWdlLmg+DQo+Pj4+Pj4+ICsjaW5jbHVkZSA8eGVuL25vc3BlYy5oPg0KPj4+
Pj4+PiAgI2luY2x1ZGUgPGFzbS94ODZfZW11bGF0ZS5oPg0KPj4+Pj4+PiAgI2luY2x1ZGUgPGFz
bS9odm0vc3VwcG9ydC5oPg0KPj4+Pj4+PiAgI2luY2x1ZGUgPGFzbS9hdG9taWMuaD4NCj4+Pj4+
Pj4gQEAgLTM2OCw3ICszNjksNyBAQCBzaGFkb3dfc2l6ZSh1bnNpZ25lZCBpbnQgc2hhZG93X3R5
cGUpDQo+Pj4+Pj4+ICB7DQo+Pj4+Pj4+ICAjaWZkZWYgQ09ORklHX0hWTQ0KPj4+Pj4+PiAgICAg
IEFTU0VSVChzaGFkb3dfdHlwZSA8IEFSUkFZX1NJWkUoc2hfdHlwZV90b19zaXplKSk7DQo+Pj4+
Pj4+IC0gICAgcmV0dXJuIHNoX3R5cGVfdG9fc2l6ZVtzaGFkb3dfdHlwZV07DQo+Pj4+Pj4+ICsg
ICAgcmV0dXJuIGFycmF5X2FjY2Vzc19ub3NwZWMoc2hfdHlwZV90b19zaXplLCBzaGFkb3dfdHlw
ZSk7DQo+Pj4+Pj4gSSBkb24ndCB0aGluayB0aGlzIGlzIHdhcnJhbnRlZC4NCj4+Pj4+Pg0KPj4+
Pj4+IEZpcnN0LCBpZiB0aGUgY29tbWl0IG1lc3NhZ2Ugd2VyZSBhY2N1cmF0ZSwgdGhlbiBpdCdz
IGEgcHJvYmxlbSBmb3IgYWxsDQo+Pj4+Pj4gYXJyYXlzIG9mIHNpemUgU0hfdHlwZV91bnVzZWQs
IHlldCB5b3UndmUgb25seSBhZGp1c3RlZCBhIHNpbmdsZSBpbnN0YW5jZS4NCj4+Pj4+IEJlY2F1
c2UgSSB0aGluayB0aGUgcmlzayBpcyBoaWdoZXIgaGVyZSB0aGFuIGZvciBvdGhlciBhcnJheXMu
IEluDQo+Pj4+PiBvdGhlciBjYXNlcyB3ZSBoYXZlIHN1aXRhYmxlIGJ1aWxkLXRpbWUgY2hlY2tz
IChIQVNIX0NBTExCQUNLU19DSEVDSygpDQo+Pj4+PiBpbiBwYXJ0aWN1bGFyKSB3aGljaCB3b3Vs
ZCB0cmlwIHVwb24gaW5hcHByb3ByaWF0ZSB1c2Ugb2Ygb25lIG9mIHRoZQ0KPj4+Pj4gdHlwZXMg
d2hpY2ggYXJlIGFsaWFzZWQgdG8gU0hfdHlwZV91bnVzZWQgd2hlbiAhSFZNLg0KPj4+Pj4NCj4+
Pj4+PiBTZWNvbmRseSwgaWYgaXQgd2VyZSByZWxpYWJseSAxNiB0aGVuIHdlIGNvdWxkIGRvIHRo
ZSBiYXNpY2FsbHktZnJlZQ0KPj4+Pj4+ICJ0eXBlICY9IDE1OyIgbW9kaWZpY2F0aW9uLsKgIChJ
dCBhcHBlYXJzIG15IGNoYW5nZSB0byBkbyB0aGlzDQo+Pj4+Pj4gYXV0b21hdGljYWxseSBoYXNu
J3QgYmVlbiB0YWtlbiB5ZXQuKSwgYnV0IHdlJ2xsIGVuZCB1cCB3aXRoIGxmZW5jZQ0KPj4+Pj4+
IHZhcmlhdGlvbiBoZXJlLg0KPj4+Pj4gSG93IGNvdWxkIGFueXRoaW5nIGJlICJyZWxpYWJseSAx
NiI/IFN1Y2ggZW51bXMgY2FuIGNoYW5nZSBhdCBhbnkgdGltZToNCj4+Pj4+IFRoZXkgZGlkIHdo
ZW4gbWFraW5nIEhWTSB0eXBlcyBjb25kaXRpb25hbCwgYW5kIHRoZXkgd2lsbCBhZ2FpbiB3aGVu
DQo+Pj4+PiBhZGRpbmcgdHlwZXMgbmVlZGVkIGZvciA1LWxldmVsIHBhZ2luZy4NCj4+Pj4+DQo+
Pj4+Pj4gQnV0IHRoZSB2YWx1ZSBpc24ndCBhdHRhY2tlciBjb250cm9sbGVkLsKgIHNoYWRvd190
eXBlIGFsd2F5cyBjb21lcyBmcm9tDQo+Pj4+Pj4gWGVuJ3MgbWV0YWRhdGEgYWJvdXQgdGhlIGd1
ZXN0LCBub3QgdGhlIGd1ZXN0IGl0c2VsZi7CoCBTbyBJIGRvbid0IHNlZQ0KPj4+Pj4+IGhvdyB0
aGlzIGNhbiBjb25jZWl2YWJseSBiZSBhIHNwZWN1bGF0aXZlIGlzc3VlIGV2ZW4gaW4gcHJpbmNp
cGxlLg0KPj4+Pj4gSSBkaWRuJ3Qgc2F5IGFueXRoaW5nIGFib3V0IHRoZXJlIGJlaW5nIGEgc3Bl
Y3VsYXRpdmUgaXNzdWUgaGVyZS4gSXQNCj4+Pj4+IGlzIGZvciB0aGlzIHZlcnkgcmVhc29uIHRo
YXQgSSB3cm90ZSAiKGFiKXVzaW5nIi4NCj4+Pj4gVGhlbiBpdCBpcyBlbnRpcmVseSB3cm9uZyB0
byBiZSB1c2luZyBhIG5vc3BlYyBhY2Nlc3Nvci4NCj4+Pj4NCj4+Pj4gU3BlY3VsYXRpb24gcHJv
YmxlbXMgYXJlIHN1YnRsZSBlbm91Z2gsIHdpdGhvdXQgZmFsc2UgdXNlcyBvZiB0aGUgc2FmZXR5
DQo+Pj4+IGhlbHBlcnMuDQo+Pj4+DQo+Pj4+IElmIHlvdSB3YW50IHRvICJoYXJkZW4iIGFnYWlu
c3QgcnVudGltZSBhcmNoaXRlY3R1cmFsIGVycm9ycywgeW91IHdhbnQNCj4+Pj4gdG8gdXAgdGhl
IEFTU0VSVCgpIHRvIGEgQlVHKCksIHdoaWNoIHdpbGwgZXhlY3V0ZSBmYXN0ZXIgdGhhbiBzdGlj
a2luZw0KPj4+PiBhbiBoaWRpbmcgYW4gbGZlbmNlIGluIGhlcmUsIGFuZCBub3QgaGlkZSB3aGF0
IGlzIG9idmlvdXNseSBhIG1ham9yDQo+Pj4+IG1hbGZ1bmN0aW9uIGluIHRoZSBzaGFkb3cgcGFn
ZXRhYmxlIGxvZ2ljLg0KPj4+IEkgc2hvdWxkIGhhdmUgY29tbWVudGVkIG9uIHRoaXMgZWFybGll
ciBhbHJlYWR5OiBXaGF0IGxmZW5jZSBhcmUgeW91DQo+Pj4gdGFsa2luZyBhYm91dD8NCj4+IFRo
ZSBvbmUgSSB0aG91Z2h0IHdhcyBoaWRpbmcgdW5kZXIgYXJyYXlfYWNjZXNzX25vc3BlYygpLCBi
dXQgSSBmb3Jnb3QNCj4+IHdlJ2QgZG9uZSB0aGUgc2JiIHRyaWNrLg0KPj4NCj4+PiBBcyB0byBC
VUcoKSAtIHRoZSBnb2FsIGhlcmUgc3BlY2lmaWNhbGx5IGlzIHRvIGF2b2lkIGENCj4+PiBjcmFz
aCBpbiByZWxlYXNlIGJ1aWxkcywgYnkgZm9yY2luZyB0aGUgZnVuY3Rpb24gdG8gcmV0dXJuIHpl
cm8gKHZpYQ0KPj4+IGhhdmluZyBpdCB1c2UgYXJyYXkgc2xvdCB6ZXJvIGZvciBvdXQgb2YgcmFu
Z2UgaW5kZXhlcykuDQo+PiBJJ20gdmVyeSB1bmVhc3kgYWJvdXQgaGF2aW5nIHNvbWV0aGluZyB0
aGlzIGRlZXAgaW5zaWRlIGEgY29tcG9uZW50LA0KPj4gd2hpY2ggQVNTRVJUKClzIGNvcnJlY3Ru
ZXNzIGRvaW5nIHNvbWV0aGluZyBjdXN0b20gbGlrZSB0aGlzICJqdXN0IHRvDQo+PiBhdm9pZCBj
cmFzaGluZyIuDQo+Pg0KPj4gVGhlcmUgaXMgZWl0aGVyIHNvbWV0aGluZyBpbXBvcnRhbnQgd2hp
Y2ggbWFrZXMgdGhpcyBtb3JlIGxpa2VseSB0aGFuDQo+PiBtb3N0IHRvIGdvIHdyb25nIGF0IHJ1
bnRpbWUsIG9yIHRoZXJlIGlzIG5vdC7CoCBBbmQgaG9uZXN0bHksIEkgY2FuJ3Qgc2VlDQo+PiB3
aHkgaXQgaXMgbW9yZSByaXNreSBhdCBydW50aW1lLg0KPiBXZWxsLCBva2F5LiBJIGRpZCBleHBs
YWluIHdoeSBJIHRoaW5rIHRoZXJlIGlzIGFuIGluY3JlYXNlZCByaXNrIGhlcmUuDQo+DQo+PiBJ
ZiB3ZSByZWFsbHkgZG8gbmVlZCB0byBjbGFtcCBpdCwgdGhlbiB3ZSBuZWVkIGEgYnJhbmQgbmV3
IGhlbHBlciB3aXRoIGENCj4+IG5hbWUgdGhhdCBkb2Vzbid0IHJlZmVyZW5jZSBzcGVjdWxhdGlv
biBhdCBhbGwuwqAgSXQncyBmaW5lIGZvciAqX25vc3BlYw0KPj4gdG8gcmV1c2UgdGhpcyBoZWxw
ZXIsIHN0YXRpbmcgdGhlIHNhZmV0eSBvZiBkb2luZyBzbywgYnV0IGF0IGEgY29kZQ0KPj4gbGV2
ZWwgdGhlcmUgbmVlZCB0byBiZSB0d28gYXBwcm9wcmlhdGVseSBuYW1lZCBoZWxwZXJzIGZvciB0
aGVpciB0d28NCj4+IGxvZ2ljYWxseS11bnJlbGF0ZWQgcHVycG9zZXMuDQo+IEkgZG9uJ3QgdGhp
bmsgYW55dGhpbmcgY2FuIHNlbnNpYmx5IGJlIG1hZGUgZm9yIG1vcmUgZ2VuZXJhbCBwdXJwb3Nl
DQo+IChub3Qgc3BlY3VsYXRpb24gcmVsYXRlZCkgdXNlLiBIZXJlIEknbSBzcGVjaWZpY2FsbHkg
dXRpbGl6aW5nIHRoYXQNCj4gYXJyYXkgc2xvdCAwIGlzIGJlaW5nIHBpY2tlZCBhcyB0aGUgZmFs
bGJhY2sgc2xvdCBfYW5kXyB0aGF0IHNsb3QgaXMNCj4gYWN0dWFsbHkgc3VpdGFibGUuIFRoaXMg
bWF5IG5vdCBiZSB0aGUgY2FzZSBmb3Igb3RoZXIgYXJyYXlzLg0KPg0KPiBBbnl3YXkgLSB0YWtp
bmcgdGhpbmdzIHRvZ2V0aGVyIEkgd2lsbCB0aGVuIHNpbXBseSBjb25zaWRlciB0aGUgcGF0Y2gN
Cj4gcmVqZWN0ZWQsIGRlc3BpdGUgaXQgYmVpbmcgYSBzZWVtaW5nbHkgZWFzeSBzdGVwIG9mIGhh
cmRlbmluZy4NCg0KSSBkb24ndCBoYXZlIGEgcHJvYmxlbSwgaW4gcHJpbmNpcGxlLCB3aXRoIHNv
bWV0aGluZyBsaWtlDQphcnJheV9jbGFtcChhcnIsIGlkeCkgYXMgbG9uZyBhcyBpdCBjb21lcyB3
aXRoIGEgQVBJIGRlc2NyaXB0aW9uIG1ha2luZw0KaXQgY2xlYXIgdGhhdCBpdCB0dXJucyBvdXQt
b2YtYm91bmRzIGluZGljZXMgaW50byAwLg0KDQpCdXQgdXNlIG9mIHRoaXMgaW4gY29kZSBhbHNv
IG5lZWRzIHRvIGNvbWUgd2l0aCBhIGNvbW1lbnQgZXhwbGFpbmluZyB3aHkNCnRoZSBwaWVjZSBv
ZiBjb2RlIGlzIHJpc2t5IGVub3VnaCB0byBqdXN0aWZ5IGl0LsKgIChBcyBtdWNoIGFzIGFueXRo
aW5nDQplbHNlLCBzbyB3ZSBjYW4gZmlndXJlIG91dCB3aGVuIHRvIHJlbW92ZSBpdCBpZiB0aGUg
cHJlY29uZGl0aW9ucyBjaGFuZ2UuKQ0KDQp+QW5kcmV3DQo=


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 17:44:45 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Roger Pau Monne
	<roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Date: Fri, 20 Jan 2023 17:44:22 +0000
Message-ID: <f9c6b4ae-8334-c823-43ca-9dfe71ed029c@citrix.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
 <f1229a27-f92c-a0dc-928e-1d78b928fdd0@xen.org>
 <bd6befdf-65eb-6937-fb85-449a5fa16794@citrix.com>
 <e87f4767-eb62-fc70-788c-d8afa45f6434@suse.com>
In-Reply-To: <e87f4767-eb62-fc70-788c-d8afa45f6434@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-ID: <4FF8674BD8280044BAB9B5C01081BC6E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 18/01/2023 9:59 am, Jan Beulich wrote:
> On 17.01.2023 23:20, Andrew Cooper wrote:
>> On 24/11/2022 9:29 pm, Julien Grall wrote:
>>> On 19/10/2022 09:43, Jan Beulich wrote:
>>>> The registration by virtual/linear address has downsides: At least on
>>>> x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
>>>> PV domains the areas are inaccessible (and hence cannot be updated by
>>>> Xen) when in guest-user mode.
>>>>
>>>> In preparation of the introduction of new vCPU operations allowing to
>>>> register the respective areas (one of the two is x86-specific) by
>>>> guest-physical address, flesh out the map/unmap functions.
>>>>
>>>> Noteworthy differences from map_vcpu_info():
>>>> - areas can be registered more than once (and de-registered),
>>>> - remote vCPU-s are paused rather than checked for being down (which in
>>>>   principle can change right after the check),
>>>> - the domain lock is taken for a much smaller region.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> RFC: By using global domain page mappings the demand on the underlying
>>>>      VA range may increase significantly. I did consider to use per-
>>>>      domain mappings instead, but they exist for x86 only. Of course we
>>>>      could have arch_{,un}map_guest_area() aliasing global domain page
>>>>      mapping functions on Arm and using per-domain mappings on x86. Yet
>>>>      then again map_vcpu_info() doesn't do so either (albeit that's
>>>>      likely to be converted subsequently to use map_vcpu_area()
>>>> anyway).
>>>>
>>>> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>>>>      like map_vcpu_info() - solely relying on the type ref acquisition.
>>>>      Checking for p2m_ram_rw alone would be wrong, as at least
>>>>      p2m_ram_logdirty ought to also be okay to use here (and in similar
>>>>      cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>>>>      used here (like altp2m_vcpu_enable_ve() does) as well as in
>>>>      map_vcpu_info(), yet then again the P2M type is stale by the time
>>>>      it is being looked at anyway without the P2M lock held.
>>>>
>>>> --- a/xen/common/domain.c
>>>> +++ b/xen/common/domain.c
>>>> @@ -1563,7 +1563,82 @@ int map_guest_area(struct vcpu *v, paddr
>>>>                      struct guest_area *area,
>>>>                      void (*populate)(void *dst, struct vcpu *v))
>>>>  {
>>>> -    return -EOPNOTSUPP;
>>>> +    struct domain *currd = v->domain;
>>>> +    void *map = NULL;
>>>> +    struct page_info *pg = NULL;
>>>> +    int rc = 0;
>>>> +
>>>> +    if ( gaddr )
>>> 0 is technically a valid (guest) physical address on Arm.
>> It is on x86 too, but that's not why 0 is generally considered an
>> invalid address.
>>
>> See the multitude of XSAs, and near-XSAs which have been caused by bad
>> logic in Xen caused by trying to make a variable held in struct
>> vcpu/domain have a default value other than 0.
>>
>> It's not impossible to write such code safely, and in this case I expect
>> it can be done by the NULLness (or not) of the mapping pointer, rather
>> than by stashing the gaddr, but history has proved repeatedly that this
>> is a very fertile source of security bugs.
> I'm checking a value passed in from the guest here. No checking of internal
> state can replace that. The checks on internal state leverage zero-init:
>
>  unmap:
>     if ( pg )
>     {
>         unmap_domain_page_global(map);
>         put_page_and_type(pg);
>     }
>
> It's also not clear to me whether, like Julien looks to have read it, you
> mean to ask that I revert back to using 0 as the "invalid" (i.e. request
> for unmap) indicator.

I'm merely asking you to be extra careful and not add bugs to the error
paths.  And it appears that you have done it based on the NULLness of
the mapping pointer, which is fine.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 17:49:18 2023
Message-ID: <42b138a6-59f5-7614-d96f-30e1784c97a4@xen.org>
Date: Fri, 20 Jan 2023 17:49:07 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-3-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
 <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
 <ba37ee02-c07c-2803-0867-149c779890b6@amd.com>
 <cd673f97-9c0d-286b-e973-7a85c84dd576@xen.org>
 <2017e0d4-dd02-e81d-99f4-1ef47fc9e774@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <2017e0d4-dd02-e81d-99f4-1ef47fc9e774@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 20/01/2023 16:03, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> 
> On 20/01/2023 16:09, Julien Grall wrote:
>>
>>
>> On 20/01/2023 14:40, Michal Orzel wrote:
>>> Hello,
>>
>> Hi,
>>
>>>
>>> On 20/01/2023 10:32, Julien Grall wrote:
>>>>
>>>>
>>>> Hi,
>>>>
>>>> On 19/01/2023 22:54, Stefano Stabellini wrote:
>>>>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>>>>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
>>>>>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>>>>>> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
>>>>>> address.
>>>>>>
>>>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>>>>
>>>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>>>
>>>>
>>>> I have committed the patch.
>>> The CI test jobs (static-mem) failed on this patch:
>>> https://gitlab.com/xen-project/xen/-/pipelines/752911309
>>
>> Thanks for the report.
>>
>>>
>>> I took a look at it and this is because in the test script we
>>> try to find a node whose unit-address does not have leading zeroes.
>>> However, after this patch, we switched to PRIpaddr which is defined as 016lx/016llx and
>>> we end up creating nodes like:
>>>
>>> memory@0000000050000000
>>>
>>> instead of:
>>>
>>> memory@60000000
>>>
>>> We could modify the script,
>>
>> TBH, I think it was a mistake for the script to rely on how Xen
>> describes the memory banks in the Device-Tree.
>>
>> For instance, from my understanding, it would be valid for Xen to
>> create a single node for all the banks, or even to omit the
>> unit-address if there is only one bank.
>>
>>> but do we really want to create nodes
>>> with leading zeroes? The dt spec does not mention it, although [1]
>>> specifies that the Linux convention is not to have leading zeroes.
>>
>> Reading through the spec in [2], it suggests the current naming is
>> fine. That said, the examples match the Linux convention (I guess
>> that's not surprising...).
>>
>> I am open to removing the leading zeroes. However, I think the CI
>> also needs to be updated (see above why).
> Yes, the CI needs to be updated as well.

Can either you or Ayan look at it?

> I'm in favor of removing leading zeroes because this is what Xen uses in all
> the other places when creating nodes (or copying them from the host dtb) including xl
> when creating dtb for domUs. Having a mismatch may be confusing and having a unit-address
> with leading zeroes looks unusual.

I have decided to revert the patch, mainly because the fix will be
easier to review if it is folded into this patch.

I would consider creating a wrapper on top of fdt_begin_node() that
takes the name of the node and the unit. Something like:

/* XXX: Explain why the wrapper is needed */
static int domain_fdt_begin_node(void *fdt, const char *name, uint64_t unit)
{
    char buf[X];

    snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
    /* XXX check the return of snprintf() */

    return fdt_begin_node(fdt, buf);
}

X would be a value that is large enough to accommodate the existing
node names.

The reason I don't suggest a new PRI* is that I can't think of a name
that is short and descriptive enough to convey the difference from the
existing PRIpaddr.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 18:09:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 18:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482060.747370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIvor-0002Eq-Sh; Fri, 20 Jan 2023 18:08:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482060.747370; Fri, 20 Jan 2023 18:08:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIvor-0002Ej-Pm; Fri, 20 Jan 2023 18:08:53 +0000
Received: by outflank-mailman (input) for mailman id 482060;
 Fri, 20 Jan 2023 18:08:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FRq8=5R=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pIvoq-0002Ed-Ub
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 18:08:53 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2044.outbound.protection.outlook.com [40.107.220.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7d35dee3-98ed-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 19:08:50 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MW4PR12MB7117.namprd12.prod.outlook.com (2603:10b6:303:221::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Fri, 20 Jan
 2023 18:08:46 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%4]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 18:08:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d35dee3-98ed-11ed-91b6-6bf2151ebd3b
Message-ID: <0a7d3da6-efe7-2cf1-563a-3c5c2ec473b2@amd.com>
Date: Fri, 20 Jan 2023 18:08:40 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-3-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
 <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
 <ba37ee02-c07c-2803-0867-149c779890b6@amd.com>
 <cd673f97-9c0d-286b-e973-7a85c84dd576@xen.org>
 <2017e0d4-dd02-e81d-99f4-1ef47fc9e774@amd.com>
 <42b138a6-59f5-7614-d96f-30e1784c97a4@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <42b138a6-59f5-7614-d96f-30e1784c97a4@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Julien/Michal,

On 20/01/2023 17:49, Julien Grall wrote:
>
>
> On 20/01/2023 16:03, Michal Orzel wrote:
>> Hi Julien,
>
> Hi Michal,
>
>>
>> On 20/01/2023 16:09, Julien Grall wrote:
>>>
>>>
>>> On 20/01/2023 14:40, Michal Orzel wrote:
>>>> Hello,
>>>
>>> Hi,
>>>
>>>>
>>>> On 20/01/2023 10:32, Julien Grall wrote:
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> On 19/01/2023 22:54, Stefano Stabellini wrote:
>>>>>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>>>>>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
>>>>>>> 2. One should use 'PRIx64' to display 'u64' in hex format. The 
>>>>>>> current
>>>>>>> use of 'PRIpaddr' for printing PTE is buggy as this is not a 
>>>>>>> physical
>>>>>>> address.
>>>>>>>
>>>>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>>>>>
>>>>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>>>>
>>>>>
>>>>> I have committed the patch.
>>>> The CI test jobs (static-mem) failed on this patch:
>>>> https://gitlab.com/xen-project/xen/-/pipelines/752911309
>>>
>>> Thanks for the report.
>>>
>>>>
>>>> I took a look at it and this is because in the test script we
>>>> try to find a node whose unit-address does not have leading zeroes.
>>>> However, after this patch, we switched to PRIpaddr which is defined 
>>>> as 016lx/016llx and
>>>> we end up creating nodes like:
>>>>
>>>> memory@0000000050000000
>>>>
>>>> instead of:
>>>>
>>>> memory@60000000
>>>>
>>>> We could modify the script,
>>>
>>> TBH, I think it was a mistake for the script to rely on how Xen
>>> describes the memory banks in the Device-Tree.
>>>
>>> For instance, from my understanding, it would be valid for Xen to
>>> create a single node for all the banks, or even to omit the
>>> unit-address if there is only one bank.
>>>
>>>> but do we really want to create nodes
>>>> with leading zeroes? The dt spec does not mention it, although [1]
>>>> specifies that the Linux convention is not to have leading zeroes.
>>>
>>> Reading through the spec in [2], it suggests the current naming is
>>> fine. That said, the examples match the Linux convention (I guess
>>> that's not surprising...).
>>>
>>> I am open to removing the leading zeroes. However, I think the CI
>>> also needs to be updated (see above why).
>> Yes, the CI needs to be updated as well.
>
> Can either you or Ayan look at it?

Does this change match the expectation?

diff --git a/automation/scripts/qemu-smoke-dom0less-arm64.sh b/automation/scripts/qemu-smoke-dom0less-arm64.sh
index 2b59346fdc..9f5e700f0e 100755
--- a/automation/scripts/qemu-smoke-dom0less-arm64.sh
+++ b/automation/scripts/qemu-smoke-dom0less-arm64.sh
@@ -20,7 +20,7 @@ if [[ "${test_variant}" == "static-mem" ]]; then
      domu_size="10000000"
      passed="${test_variant} test passed"
      domU_check="
-current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@${domu_base}/reg 2>/dev/null)
+current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@$[0-9]*/reg 2>/dev/null)
  expected=$(printf \"%016x%016x\" 0x${domu_base} 0x${domu_size})
  if [[ \"\${expected}\" == \"\${current}\" ]]; then
         echo \"${passed}\"
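For reference, the "expected" string the script compares against can be reproduced standalone (assuming domu_base is 50000000, matching the memory@0000000050000000 node name mentioned earlier in the thread): the node's /reg holds two 64-bit cells, base then size, which hexdump flattens into 32 hex characters.

```shell
# Recompute the 'expected' string from the test parameters.
domu_base="50000000"   # assumed value, per the node name in the CI log
domu_size="10000000"   # from the script above
expected=$(printf "%016x%016x" "0x${domu_base}" "0x${domu_size}")
echo "${expected}"   # prints 00000000500000000000000010000000
```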

>
>> I'm in favor of removing leading zeroes because this is what Xen uses 
>> in all
>> the other places when creating nodes (or copying them from the host 
>> dtb) including xl
>> when creating dtb for domUs. Having a mismatch may be confusing and 
>> having a unit-address
>> with leading zeroes looks unusual.
>
> I have decided to revert the patch, mainly because the fix will be
> easier to review if it is folded into this patch.
>
> I would consider creating a wrapper on top of fdt_begin_node() that
> takes the name of the node and the unit. Something like:
>
> /* XXX: Explain why the wrapper is needed */
> static int domain_fdt_begin_node(void *fdt, const char *name, uint64_t unit)
> {
>    char buf[X];
>
>    snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
>    /* XXX check the return of snprintf() */
>
>    return fdt_begin_node(fdt, buf);
> }
>
> X would be a value that is large enough to accommodate the existing
> node names.
>
> The reason I don't suggest a new PRI* is that I can't think of a name
> that is short and descriptive enough to convey the difference from the
> existing PRIpaddr.

Looks fine to me. I can incorporate the change into the existing patch.

- Ayan

>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 18:16:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 18:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482065.747380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIvvc-0003eo-KI; Fri, 20 Jan 2023 18:15:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482065.747380; Fri, 20 Jan 2023 18:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIvvc-0003eh-Gy; Fri, 20 Jan 2023 18:15:52 +0000
Received: by outflank-mailman (input) for mailman id 482065;
 Fri, 20 Jan 2023 18:15:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIvvb-0003eb-KP
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 18:15:51 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7637b674-98ee-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 19:15:49 +0100 (CET)
Received: from mail-bn7nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 13:15:45 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5249.namprd03.prod.outlook.com (2603:10b6:a03:21b::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Fri, 20 Jan
 2023 18:15:39 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 18:15:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7637b674-98ee-11ed-91b6-6bf2151ebd3b
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Roger Pau
 Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Thread-Topic: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Thread-Index: AQHY4457jyaOcvAnhkOdTNwMDM8hXq6juEqAgADGjACAA7ByAA==
Date: Fri, 20 Jan 2023 18:15:38 +0000
Message-ID: <978b098a-d052-09cc-442e-9aafc816feee@citrix.com>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
 <ed4d8d85-2ba5-74c1-7c65-0ae65bf0ee06@citrix.com>
 <24a2f51b-e69d-7a44-5239-79f5f526ef01@suse.com>
In-Reply-To: <24a2f51b-e69d-7a44-5239-79f5f526ef01@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Type: text/plain; charset="utf-8"
Content-ID: <D7120D662B08B94480C9CC00DA46305E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 18/01/2023 9:55 am, Jan Beulich wrote:
> On 17.01.2023 23:04, Andrew Cooper wrote:
>> On 19/10/2022 8:43 am, Jan Beulich wrote:
>>> The registration by virtual/linear address has downsides: At least on
>>> x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
>>> PV domains the areas are inaccessible (and hence cannot be updated by
>>> Xen) when in guest-user mode.
>> They're also inaccessible in HVM guests (x86 and ARM) when Meltdown
>> mitigations are in place.
> I've added this explicitly, but ...
>
>> And let's not get started on the multitude of layering violations that is
>> guest_memory_policy() for nested virt.  In fact, prohibiting any form of
>> map-by-va is a prerequisite to any rational attempt to make nested virt
>> work.
>>
>> (In fact, that infrastructure needs purging before any other
>> architecture picks up stubs too.)
>>
>> They're also inaccessible in general because no architecture has
>> hypervisor privilege in a normal user/supervisor split, and you don't
>> know whether the mapping is over a supervisor or user mapping, and
>> settings like SMAP/PAN can cause the pagewalk to fail even when the
>> mapping is in place.
> ... I'm now merely saying that there are yet more reasons, rather than
> trying to enumerate them all.

That's fine.  I just wanted to point out that it's far more reasons than
were given the first time around.

>>> In preparation of the introduction of new vCPU operations allowing to
>>> register the respective areas (one of the two is x86-specific) by
>>> guest-physical address, flesh out the map/unmap functions.
>>>
>>> Noteworthy differences from map_vcpu_info():
>>> - areas can be registered more than once (and de-registered),
>> When registering by GFN is available, there is never a good reason to
>> register the same area twice.
> Why not? Why shouldn't different entities be permitted to register their
> areas, one after the other? This at the very least requires a way to
> de-register.

Because it's useless and extra complexity.  From the point of view of
any guest, it's an MMIO(ish) window that Xen happens to update the
content of.

You don't get systems where you can ask hardware for e.g. "another copy
of the HPET at mfn $foo please".

>> The guest maps one MMIO-like region, and then constructs all the regular
>> virtual addresses mapping it (or not) that it wants.
>>
>> This API is new, so we can enforce sane behaviour from the outset.  In
>> particular, it will help with ...
>>
>>> - remote vCPU-s are paused rather than checked for being down (which in
>>>   principle can change right after the check),
>>> - the domain lock is taken for a much smaller region.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> RFC: By using global domain page mappings the demand on the underlying
>>>      VA range may increase significantly. I did consider using per-
>>>      domain mappings instead, but they exist for x86 only. Of course we
>>>      could have arch_{,un}map_guest_area() aliasing global domain page
>>>      mapping functions on Arm and using per-domain mappings on x86. Yet
>>>      then again map_vcpu_info() doesn't do so either (albeit that's
>>>      likely to be converted subsequently to use map_vcpu_area() anyway).
>> ... this by providing a bound on the amount of vmap() space that can be
>> consumed.
> I'm afraid I don't understand. When re-registering a different area, the
> earlier one will be unmapped. The consumption of vmap space cannot grow
> (or else we'd have a resource leak and hence an XSA).

In which case you mean "can be re-registered elsewhere".  More
specifically, the area can be moved, and isn't a singleton operation
like map_vcpu_info was.

The wording as presented firmly suggests the presence of an XSA.

>>> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>>>      like map_vcpu_info() - solely relying on the type ref acquisition.
>>>      Checking for p2m_ram_rw alone would be wrong, as at least
>>>      p2m_ram_logdirty ought to also be okay to use here (and in similar
>>>      cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>>>      used here (like altp2m_vcpu_enable_ve() does) as well as in
>>>      map_vcpu_info(), yet then again the P2M type is stale by the time
>>>      it is being looked at anyway without the P2M lock held.
>> Again, another error caused by Xen not knowing the guest physical
>> address layout.  These mappings should be restricted to just RAM regions
>> and I think we want to enforce that right from the outset.
> Meaning what exactly in terms of action for me to take? As said, checking
> the P2M type is pointless. So without you being more explicit, all I can
> take your reply for is merely a comment, with no action on my part (not
> even to remove this RFC remark).

There will come a point where it will need to be prohibited to issue
this against something which isn't p2m_type_ram.  If we had a sane idea
of the guest physmap, I'd go as far as saying E820_RAM, but that's
clearly not feasible yet.

Even now, absolutely nothing good can possibly come of e.g. trying to
overlay it on the grant table, or a grant mapping.

ram || logdirty ought to exclude most of the cases we care about the
guest (not) putting the mapping in.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 18:21:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 18:21:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482072.747390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIw0T-00057r-DN; Fri, 20 Jan 2023 18:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482072.747390; Fri, 20 Jan 2023 18:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIw0T-00057k-95; Fri, 20 Jan 2023 18:20:53 +0000
Received: by outflank-mailman (input) for mailman id 482072;
 Fri, 20 Jan 2023 18:20:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v4 1/3] build: include/compat, remove typedefs
 handling
Thread-Topic: [XEN PATCH v4 1/3] build: include/compat, remove typedefs
 handling
Thread-Index: AQHZLBnwSejHh1Foa0mZP15rWae6566nn50A
Date: Fri, 20 Jan 2023 18:20:45 +0000
Message-ID: <06ac9828-0b20-fb8d-7d39-3663365d082a@citrix.com>
References: <20230119152256.15832-1-anthony.perard@citrix.com>
 <20230119152256.15832-2-anthony.perard@citrix.com>
In-Reply-To: <20230119152256.15832-2-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-ID: <480BD9E07E38C3489368FC38202C57E2@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 19/01/2023 3:22 pm, Anthony PERARD wrote:
> Partial revert of c93bd0e6ea2a ("tmem: fix 32-on-64 support")
> Since c492e19fdd05 ("xen: remove tmem from hypervisor"), this code
> isn't used anymore.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 18:24:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 18:24:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482076.747400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIw40-0005in-SR; Fri, 20 Jan 2023 18:24:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482076.747400; Fri, 20 Jan 2023 18:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIw40-0005ig-PL; Fri, 20 Jan 2023 18:24:32 +0000
Received: by outflank-mailman (input) for mailman id 482076;
 Fri, 20 Jan 2023 18:24:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v4 2/3] build: replace get-fields.sh by a python
 script
Thread-Topic: [XEN PATCH v4 2/3] build: replace get-fields.sh by a python
 script
Thread-Index: AQHZLBn16yskFr/tykaq4ezXFz5nWa6noJuA
Date: Fri, 20 Jan 2023 18:24:17 +0000
Message-ID: <04e0bc84-7e55-1a10-ea31-ddcf4291bc50@citrix.com>
References: <20230119152256.15832-1-anthony.perard@citrix.com>
 <20230119152256.15832-3-anthony.perard@citrix.com>
In-Reply-To: <20230119152256.15832-3-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-ID: <48383C2DF8CE3E45A0F68953783CC595@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 22751aa7-73ef-4880-3159-08dafb138ac4
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 18:24:17.7097
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5050

On 19/01/2023 3:22 pm, Anthony PERARD wrote:
> The get-fields.sh which generate all the include/compat/.xlat/*.h
> headers is quite slow. It takes for example nearly 3 seconds to
> generate platform.h on a recent machine, or 2.3 seconds for memory.h.
>
> Rewriting the mix of shell/sed/python into a single python script make
> the generation of those file a lot faster.
>
> No functional change, the headers generated are identical.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 18:26:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 18:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482081.747410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIw5r-0006HT-6j; Fri, 20 Jan 2023 18:26:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482081.747410; Fri, 20 Jan 2023 18:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIw5r-0006HM-47; Fri, 20 Jan 2023 18:26:27 +0000
Received: by outflank-mailman (input) for mailman id 482081;
 Fri, 20 Jan 2023 18:26:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIw5p-0006HG-Ud
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 18:26:25 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f1527ad8-98ef-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 19:26:24 +0100 (CET)
Received: from mail-dm6nam11lp2175.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Jan 2023 13:26:21 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN8PR03MB5107.namprd03.prod.outlook.com (2603:10b6:408:d8::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.13; Fri, 20 Jan
 2023 18:26:15 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.024; Fri, 20 Jan 2023
 18:26:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1527ad8-98ef-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674239184;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=+8PIQQTOGqRVI6KquiLcSlBrSlvtVdURhGAI0uKhxaI=;
  b=gjvTKvow08l25CyPkQwEBb6/55o4w1A0BLUU2PT/CdylqO2Uekkeh/nd
   2ySLM0RZxNM70uqF8lWb602FAGV37283kvpQgInhnNqJnqIhMnBiSjNSm
   uh92Zd3lg/QTwXT55rV9kSrgX8gD6RWKqcRs/6pQXHNa+LgQy09pD7Ec+
   0=;
X-IronPort-RemoteIP: 104.47.57.175
X-IronPort-MID: 93012448
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,233,1669093200"; 
   d="scan'208";a="93012448"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bAsMHTWmiPiBjtCfmvUPDfrTZjacXWvQ5PUKKBJ5oST9bYiP65xe29/+DcJ1csHRINJfz/7Rzzjz5E0oADm0wLdyIicUIzvRMjlBF1dsZeTIsJVMzURkGsN/wgaFzr6ck+uofS1mAP5ADnkUkzNXWwX1ev42vRhKiqiloyqaGIaqLFVVvEcdwn5V0HaJiGtahhCYW+m1WE3xoYrR9/oEnpNqDx05HE5V735u6VdImf3siKJTMnH7DsevBmB2BPDpH0XycR8povX8cvq4HGu7SA0YfKX9PGva2wYeSBW7Pu1q8D59Pa9x4KmCoMlYEue1leP6x+uJMbGBESKgafE/Qw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+8PIQQTOGqRVI6KquiLcSlBrSlvtVdURhGAI0uKhxaI=;
 b=PIKOIcBcKrcu/DFmjZbZVNtUtUk1zI3cmjjpXjjC1XUDeIH8GZWVbD+F2kLUzo6cIgDmMDr1GJWUS3/Ar+wvBwuqxpLmsJqDcYc40nyGb97PKA+6rIg6qCakCwoImOLUMV2psVmVqPjcw1VXmNBEzXVECSbc5ZGlChfwDqjv+octLDsQSXgrJ3H6zxANrJg2SvfvywGKUTRaYTmABXd49lXuMnzVrUrwbou+F7JdLvLybGHOQQzXfoLKTZvP+FySefQPB260nTqr3YUtVUwPynEcx6tzQ2WVdcq/VCZgqmKnaPIPhJhcCf23s4kqCPs5Kswh7LM4led4i2mXsnx6Tw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+8PIQQTOGqRVI6KquiLcSlBrSlvtVdURhGAI0uKhxaI=;
 b=QQrD33Rl4A4ULcGT4EH7+qEA7jZqpEoJMggW6HOYlsvlJuN1UjCP0uAYuIAIQytJoqlMnKumE+83tcSkKaUhN60KTB9Pmi5Bm9ijYztqbmxLf5tbNAdL2aGIJk2Tw4ZFgix1dojqFqEccuXZr2q32cbifX5Q7QCW54s9iSEwrlk=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v4 3/3] build: compat-xlat-header.py: optimisation to
 search for just '{' instead of [{}]
Thread-Topic: [XEN PATCH v4 3/3] build: compat-xlat-header.py: optimisation to
 search for just '{' instead of [{}]
Thread-Index: AQHZLBn5GrvhRXVc4kihDJHcjkxEVa6noSaA
Date: Fri, 20 Jan 2023 18:26:14 +0000
Message-ID: <60df7795-8f0b-e0f2-a790-2e00c0d4db2a@citrix.com>
References: <20230119152256.15832-1-anthony.perard@citrix.com>
 <20230119152256.15832-4-anthony.perard@citrix.com>
In-Reply-To: <20230119152256.15832-4-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|BN8PR03MB5107:EE_
x-ms-office365-filtering-correlation-id: dd5cc9ef-073e-4136-b0c3-08dafb13d041
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <BF5D1CABAF8ACC41B2CEE2D42A7441D9@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dd5cc9ef-073e-4136-b0c3-08dafb13d041
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jan 2023 18:26:14.2637
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Gji6ReEJQ9YK1AU5uFKNg4/Ct6n53TED5WlHe5BP+fdPE9+EwVvB8Ls6t4+L+/rPWqzYMC921Yu6GUMeOHIRt3wFRTNxX0lke7Q8Jt+IhAA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5107

On 19/01/2023 3:22 pm, Anthony PERARD wrote:
> `fields` and `extrafields` always all the parts of a sub-struct, so
> when there is '}', there is always a '{' before it. Also, both are
> lists.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  xen/tools/compat-xlat-header.py | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
> index ae5c9f11c9..d0a864b68e 100644
> --- a/xen/tools/compat-xlat-header.py
> +++ b/xen/tools/compat-xlat-header.py
> @@ -105,7 +105,7 @@ def handle_field(prefix, name, id, type, fields):
>          else:
>              k = id.replace('.', '_')
>              print("%sXLAT_%s_HNDL_%s(_d_, _s_);" % (prefix, name, k), end='')
> -    elif not re_brackets.search(' '.join(fields)):
> +    elif not '{' in fields:
>          tag = ' '.join(fields)
>          tag = re.sub(r'\s*(struct|union)\s+(compat_)?(\w+)\s.*', '\\3', tag)
>          print(" \\")
> @@ -290,7 +290,7 @@ def build_body(name, tokens):
>      print(" \\\n} while (0)")
>  
>  def check_field(kind, name, field, extrafields):
> -    if not re_brackets.search(' '.join(extrafields)):
> +    if not '{' in extrafields:
>          print("; \\")
>          if len(extrafields) != 0:
>              for token in extrafields:

These are the only two users of re_brackets aren't they?  In which case
you should drop the re.compile() too.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 18:36:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 18:36:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482088.747420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIwF4-0007pY-8H; Fri, 20 Jan 2023 18:35:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482088.747420; Fri, 20 Jan 2023 18:35:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIwF4-0007pR-59; Fri, 20 Jan 2023 18:35:58 +0000
Received: by outflank-mailman (input) for mailman id 482088;
 Fri, 20 Jan 2023 18:35:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3+m8=5R=google.com=seanjc@srs-se1.protection.inumbo.net>)
 id 1pIwF3-0007pL-0Y
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 18:35:57 +0000
Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com
 [2607:f8b0:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 45651643-98f1-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 19:35:54 +0100 (CET)
Received: by mail-pl1-x630.google.com with SMTP id jm10so6043229plb.13
 for <xen-devel@lists.xenproject.org>; Fri, 20 Jan 2023 10:35:54 -0800 (PST)
Received: from google.com (7.104.168.34.bc.googleusercontent.com.
 [34.168.104.7]) by smtp.gmail.com with ESMTPSA id
 y9-20020a17090aca8900b0022bb3ee9b68sm816173pjt.13.2023.01.20.10.35.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Jan 2023 10:35:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45651643-98f1-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=okolDEB21Y0jblKM4zeN9TyhMCVMpN2TkJfDtRniBvc=;
        b=HkY8uYC3eaoxfG+CDtPspZfD/eX0Nq+DRBL/w/lbBPAVctXPlKO27GnPIwXfthk5N9
         VC7gyEcJ6u4nAFPLBZZtLuaIrW5eLQ7pRUIMsQ0P7IH/vAwrMVeY6O0eq5y8N9gtXD71
         MsB/8V0TvAjpRoA/f7mCK/eJ8L5cCpz/8ba/KSvwKSwa6zBYdMvsYnc3fycaiKzhvXu0
         bdjb2uPAa1zIysmYxuisWAZGW6U9LNAlN4IJnA8+tzSANClzdAuibHS3CeaFv6CpylZE
         liAoGaxEFIkOokZ7ibDIQSN4J2us6Ws11V6wPdyXS4f5x0ophRHfbRwS5//CQJMGSnxo
         NK+g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=okolDEB21Y0jblKM4zeN9TyhMCVMpN2TkJfDtRniBvc=;
        b=FSP0sl6hTSp7P/3YVyKxEyFeNBvWBjc9ZhXXEy4sbrOaSFBQaMXWjnMrIxqi+8kDy9
         9nplI4K51MLUPWdRH+1ZqbwUQqrxP8R04UFcrb9BSA4/PEF5UPiS8U3dgeutT8l67faS
         rvqjvHWPxxxW/hz5sqJjNtIh9247CWvNK5eJJq88ZA2/FGcvrIuGB2+ce9TwLY2A5wmk
         1UmYEUaWHeFmhZ1wLDwl96CtUTZYGRJx27kCVqYHd0+YeFL0a6/BlrViUGKZWsipMZCi
         MuIIv/dDdcME26P8dRiuBxuhZ7EWrqQJxURl6FP5qL0vt8Ai+St8s18CCNqwcdaChcsp
         IRSQ==
X-Gm-Message-State: AFqh2krqg6RoXuSfnhkj2iz2pIx+n0jZC9iNaM5ido5ZduebHzCUoB1H
	tlU+3pJcQcE9lg6tXJCCzeXSiw==
X-Google-Smtp-Source: AMrXdXtygWR/fMhpmxei/j385EDCB6XPa4vZRiEc+kaRf3Blc1If9RpQwQ0WRTDgbKj1ut4JY7mfcQ==
X-Received: by 2002:a17:90a:c985:b0:219:f970:5119 with SMTP id w5-20020a17090ac98500b00219f9705119mr297717pjt.1.1674239752856;
        Fri, 20 Jan 2023 10:35:52 -0800 (PST)
Date: Fri, 20 Jan 2023 18:35:48 +0000
From: Sean Christopherson <seanjc@google.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>,
	Thomas Gleixner <tglx@linutronix.de>, linux-kernel@vger.kernel.org,
	amakhalov@vmware.com, ganb@vmware.com, ankitja@vmware.com,
	bordoloih@vmware.com, keerthanak@vmware.com, blamoreaux@vmware.com,
	namit@vmware.com, Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Wyes Karny <wyes.karny@amd.com>,
	Lewis Caroll <lewis.carroll@amd.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Juergen Gross <jgross@suse.com>, x86@kernel.org,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
 state
Message-ID: <Y8rfBBBicRMk+Hut@google.com>
References: <20230116060134.80259-1-srivatsa@csail.mit.edu>
 <20230116155526.05d37ff9@imammedo.users.ipa.redhat.com>
 <87bkmui5z4.ffs@tglx>
 <ecb9a22e-fd6e-67f0-d916-ad16033fc13c@csail.mit.edu>
 <20230120163734.63e62444@imammedo.users.ipa.redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230120163734.63e62444@imammedo.users.ipa.redhat.com>

On Fri, Jan 20, 2023, Igor Mammedov wrote:
> On Fri, 20 Jan 2023 05:55:11 -0800
> "Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:
> 
> > Hi Igor and Thomas,
> > 
> > Thank you for your review!
> > 
> > On 1/19/23 1:12 PM, Thomas Gleixner wrote:
> > > On Mon, Jan 16 2023 at 15:55, Igor Mammedov wrote:  
> > >> "Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:  
> > >>> Fix this by preventing the use of mwait idle state in the vCPU offline
> > >>> play_dead() path for any hypervisor, even if mwait support is
> > >>> available.  
> > >>
> > >> if mwait is enabled, it's very likely guest to have cpuidle
> > >> enabled and using the same mwait as well. So exiting early from
> > >>  mwait_play_dead(), might just punt workflow down:
> > >>   native_play_dead()
> > >>         ...
> > >>         mwait_play_dead();
> > >>         if (cpuidle_play_dead())   <- possible mwait here                                              
> > >>                 hlt_play_dead(); 
> > >>
> > >> and it will end up in mwait again and only if that fails
> > >> it will go HLT route and maybe transition to VMM.  
> > > 
> > > Good point.
> > >   
> > >> Instead of workaround on guest side,
> > >> shouldn't hypervisor force VMEXIT on being unplugged vCPU when it's
> > >> actually hot-unplugging vCPU? (ex: QEMU kicks vCPU out from guest
> > >> context when it is removing vCPU, among other things)  
> > > 
> > > For a pure guest side CPU unplug operation:
> > > 
> > >     guest$ echo 0 >/sys/devices/system/cpu/cpu$N/online
> > > 
> > > the hypervisor is not involved at all. The vCPU is not removed in that
> > > case.
> > >   
> > 
> > Agreed, and this is indeed the scenario I was targeting with this patch,
> > as opposed to vCPU removal from the host side. I'll add this clarification
> > to the commit message.

Forcing HLT doesn't solve anything; it's perfectly legal to pass through HLT.  I
guarantee there are use cases that pass through HLT but _not_ MONITOR/MWAIT, and
ones that pass through all of them.
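[Editorial note: the fallback chain quoted earlier in the thread (mwait_play_dead(), then cpuidle_play_dead(), then hlt_play_dead()) can be modelled with a toy sketch. The booleans are hypothetical stand-ins for the real capability checks; this illustrates the ordering argument only and is not kernel code:]

```python
def play_dead(guest_has_mwait: bool, cpuidle_reaches_mwait: bool) -> str:
    """Toy model of the offline path's fallback order, as quoted in-thread."""
    if guest_has_mwait:
        return "mwait"            # parks in guest context, no VMEXIT
    if cpuidle_reaches_mwait:
        return "cpuidle (mwait)"  # cpuidle may pick an MWAIT state anyway
    return "hlt"                  # typically exits to the hypervisor

# Skipping only the first MWAIT step is not enough: cpuidle can still
# land the offlined vCPU in an MWAIT-based idle state.
assert play_dead(False, True) == "cpuidle (mwait)"
assert play_dead(False, False) == "hlt"
```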

> commit message explicitly said:
> "which prevents the hypervisor from running other vCPUs or workloads on the
> corresponding pCPU."
> 
> and that implies unplug on hypervisor side as well.
> Why? That's because when hypervisor exposes mwait to guest, it has to reserve/pin
> a pCPU for each of present vCPUs. And you can safely run other VMs/workloads
> on that pCPU only after it's not possible for it to be reused by VM where
> it was used originally.

Pinning isn't strictly required from a safety perspective.  The latency of context
switching may suffer due to wake times, but preempting a vCPU that is in C1 (or
deeper) won't cause functional problems.  Passing through an entire socket
(or whatever scope triggers extra fun) might be a different story, but pinning
isn't strictly required.

That said, I 100% agree that this is expected behavior and not a bug.  Letting the
guest execute MWAIT or HLT means the host won't have perfect visibility into guest
activity state.

Oversubscribing a pCPU and exposing MWAIT and/or HLT to vCPUs is generally not done
precisely because the guest will always appear busy without extra effort on the
host.  E.g. KVM requires an explicit opt-in from userspace to expose MWAIT and/or
HLT.

If someone really wants to efficiently oversubscribe pCPUs and pass through MWAIT,
then their best option is probably to have a paravirt interface so that the guest
can tell the host it's offlining a vCPU.  Barring that, the host could inspect the
guest when preempting a vCPU to try and guesstimate how much work the vCPU is
actually doing in order to make better scheduling decisions.

> Now consider following worst (and most likely) case without unplug
> on hypervisor side:
> 
>  1. vm1mwait: pin pCPU2 to vCPU2
>  2. vm1mwait: guest$ echo 0 >/sys/devices/system/cpu/cpu2/online
>         -> HLT -> VMEXIT
>  --
>  3. vm2mwait: pin pCPU2 to vCPUx and start VM
>  4. vm2mwait: guest OS onlines Vcpu and starts using it incl.
>        going into idle=>mwait state
>  --
>  5. vm1mwait: it still thinks that vCPU2 is present, so it can rightfully do:
>        guest$ echo 1 >/sys/devices/system/cpu/cpu2/online
>  --              
>  6.1 best case vm1mwait online fails after timeout
>  6.2 worse case: vm2mwait does VMEXIT on vCPUx around time-frame when
>      vm1mwait onlines vCPU2, the online may succeed and then vm2mwait's
>      vCPUx will be stuck (possibly indefinitely) until for some reason
>      VMEXIT happens on vm1mwait's vCPU2 _and_ host decides to schedule
>      vCPUx on pCPU2 which would make vm1mwait stuck on vCPU2.
> So either way it's expected behavior.
> 
> And if there is no intention to unplug vCPU on hypervisor side,
> then VMEXIT on play_dead is not really necessary (mwait is better
> than HLT), since hypervisor can't safely reuse pCPU elsewhere and
> VCPU goes into deep sleep within guest context.
> 
> PS:
> The only case where making HLT/VMEXIT on play_dead might work out,
> would be if new workload weren't pinned to the same pCPU nor
> used mwait (i.e. host can migrate it elsewhere and schedule
> vCPU2 back on pCPU2).


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:27:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 19:27:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482093.747430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIx2L-0004Z7-2U; Fri, 20 Jan 2023 19:26:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482093.747430; Fri, 20 Jan 2023 19:26:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIx2K-0004Z0-V9; Fri, 20 Jan 2023 19:26:52 +0000
Received: by outflank-mailman (input) for mailman id 482093;
 Fri, 20 Jan 2023 19:26:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9Xek=5R=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pIx2I-0004Yu-Pg
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 19:26:51 +0000
Received: from sonic312-24.consmr.mail.gq1.yahoo.com
 (sonic312-24.consmr.mail.gq1.yahoo.com [98.137.69.205])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 603735d8-98f8-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 20:26:46 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic312.consmr.mail.gq1.yahoo.com with HTTP; Fri, 20 Jan 2023 19:26:44 +0000
Received: by hermes--production-bf1-6bb65c4965-lwg94 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID b91055a077e70f722712dffc6046fa22; 
 Fri, 20 Jan 2023 19:26:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 603735d8-98f8-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1674242804; bh=Yn4KGPZXDuvS4Bf0u52UWsMUa8+cfIZGayqP+lGXJSg=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=o84QnXgTczeuGnY2fpvxTYUKg7IZIXtrRc2vB0jRWVeVZPFdJ7lCe0Awdvow/HYOs06c1O2d9Pc7pF+Kk88sG7EPBNxkTg/03XQXVbuaSUgj62Nn9USfNOCBzFut9QHGqCO35J/BW79E2cpYQzikhPW3IO+1QKfIgqarVsbbDo8PpWr7TS0LYOVunVrLNW9EQvWOxeOltW0+TpemX8bATVJOlEMbLK6+XXxcyZbfOS3OiuWrEDoFUcswrlJZTLaENWAPJaVbMOtWg7o2m0R24KY6IduBBtHnaoZAgWIVWAg7gVQkgX6fhsixFncCcLnJ8Qi+T5pO5avs1EQDau+SSw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674242804; bh=aOgE+fNrxPyiCFcakFMAoN38bhQ7RnYAHqqTDk/pR9P=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=YRmzCYrq4Vpvt0qbjg9JzXAcFaDqX7V3RT3/yMrMC2WN3q4x1Duw3J52ZLFEYQ000rdsegK0HbPUMAXiyub/NpuF/AXOecE8xbW6bWkahel/QaBPT18Rbid4RZxY3A5wuxDiYhHAxxItcYlxBj5Zno2sgY5UUDAeiAEwTr7hyRlllrkF8c/mvetgHKrDStlVS13ge+wKkp/0UcXr8vSLUzlg4UQMh46mHnIfxMX+TR1RgIVbRDRkCqwyPHa6WF3E5hGeIRwIgU0QllNQQfor2rKM7ZiizgNc2lGcsDXkuTo7vEfBKpa57RvGuJPfxse+7ZnswOjSDLt+qHuWKOixAw==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <0a536399-f1b8-d30a-f288-8ed6b719f15c@aol.com>
Date: Fri, 20 Jan 2023 14:26:43 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Igor Mammedov <imammedo@redhat.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Bernhard Beschow <shentey@gmail.com>,
 qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, Anthony Perard <anthony.perard@citrix.com>,
 Thomas Huth <thuth@redhat.com>, Eric Auger <eric.auger@redhat.com>,
 Alex Williamson <alex.williamson@redhat.com>, Peter Xu <peterx@redhat.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
 <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
 <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
 <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
 <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
 <20230117120416.0aa041d6@imammedo.users.ipa.redhat.com>
 <b6f7d6dd-3b9b-2cc7-32ab-8521802e1fed@aol.com>
 <alpine.DEB.2.22.394.2301200855390.731018@ubuntu-linux-20-04-desktop>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <alpine.DEB.2.22.394.2301200855390.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 6541

On 1/20/2023 11:57 AM, Stefano Stabellini wrote:
> On Tue, 17 Jan 2023, Chuck Zmudzinski wrote:
> > On 1/17/2023 6:04 AM, Igor Mammedov wrote:
> > > On Mon, 16 Jan 2023 13:00:53 -0500
> > > Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> > >
> > > > On 1/16/23 10:33, Igor Mammedov wrote:
> > > > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > > >   
> > > > >> On 1/13/23 4:33 AM, Igor Mammedov wrote:  
> > > > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > > > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > > >> >     
> > > > >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:    
> > > > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:      
> > > > >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> > > > >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> > > > >> >> >> always call it there unconditionally -- basically turning three lines
> > > > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> > > > >> >> >> Michael further suggests to rename it to something more general. All
> > > > >> >> >> in all no big changes required.      
> > > > >> >> > 
> > > > >> >> > yes, exactly.
> > > > >> >> >       
> > > > >> >> 
> > > > >> >> OK, got it. I can do that along with the other suggestions.    
> > > > >> > 
> > > > >> > have you considered instead of reservation, putting a slot check in device model
> > > > >> > and if it's intel igd being passed through, fail at realize time  if it can't take
> > > > >> > required slot (with a error directing user to fix command line)?    
> > > > >> 
> > > > >> Yes, but the core pci code already fails at realize time with a
> > > > >> useful error message if the user tries to use slot 2 for the
> > > > >> igd, because the xen platform device holds slot 2. The user
> > > > >> can fix this without patching qemu, but having the user fix it on
> > > > >> the command line is not the best way to solve the problem, primarily
> > > > >> because the user would need to hotplug the xen platform device via a
> > > > >> command line option instead of having it added by the
> > > > >> pc_xen_hvm_init functions almost immediately after creating the pci
> > > > >> bus, and that delay in adding the xen platform device degrades
> > > > >> startup performance of the guest.
> > > > >>   
> > > > >> > That could be less complicated than dealing with slot reservations at the cost of
> > > > >> > being less convenient.    
> > > > >> 
> > > > >> And also a cost of reduced startup performance  
> > > > > 
> > > > > Could you clarify how it affects performance (and how much).
> > > > > (as I see, setup done at board_init time is roughly the same
> > > > > as with '-device foo' CLI options, modulo time needed to parse
> > > > > options which should be negligible. and both ways are done before
> > > > > guest runs)  
> > > > 
> > > > I preface my answer by saying there is a v9, but you don't
> > > > need to look at that. I will answer all your questions here.
> > > > 
> > > > I am going by what I observe on the main HDMI display with the
> > > > different approaches. With the approach of not patching Qemu
> > > > to fix this, which requires adding the Xen platform device a
> > > > little later, the length of time it takes to fully load the
> > > > guest is increased. I also noticed with Linux guests that use
> > > > the grub bootloader, the grub vga driver cannot display the
> > > > grub boot menu at the native resolution of the display, which
> > > > in the tested case is 1920x1080, when the Xen platform device
> > > > is added via a command line option instead of by the
> > > > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > > > to Qemu, the grub menu is displayed at the full, 1920x1080
> > > > native resolution of the display. Once the guest fully loads,
> > > > there is no noticeable difference in performance. It is mainly
> > > > a degradation in startup performance, not performance once
> > > > the guest OS is fully loaded.
> > >
> > > Looking at igd-assign.txt, it recommends to add IGD using '-device' CLI
> > > option, and actually drop at least graphics defaults explicitly.
> > > So it is expected to work fine even when IGD is constructed with
> > > '-device'.
> > >
> > > Could you provide full CLI current xen starts QEMU with and then
> > > a CLI you used (with explicit -device for IGD) that leads
> > > to reduced performance?
> > >
> > > CCing vfio folks who might have an idea what could be wrong based
> > > on vfio experience.
> > 
> > Actually, the igd is not added with an explicit -device option using Xen:
> > 
> >    1573 ?        Ssl    0:42 /usr/bin/qemu-system-i386 -xen-domid 1 -no-shutdown -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-1,server,nowait -mon chardev=libxl-cmd,mode=control -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-1,server,nowait -mon chardev=libxenstat-cmd,mode=control -nodefaults -no-user-config -name windows -vnc none -display none -serial pty -boot order=c -smp 4,maxcpus=4 -net none -machine xenfv,max-ram-below-4g=3758096384,igd-passthru=on -m 6144 -drive file=/dev/loop0,if=ide,index=0,media=disk,format=raw,cache=writeback -drive file=/dev/disk/by-uuid/A44AA4984AA468AE,if=ide,index=1,media=disk,format=raw,cache=writeback
> > 
> > I think it is added by xl (libxl management tool) when the guest is created
> > using the qmp-libxl socket that appears on the command line, but I am not 100
> > percent sure. So, with libxl, the command line alone does not tell the whole
> > story. The xl.cfg file has a line like this to define the pci devices passed through,
> > and in qemu they are type XEN_PT devices, not VFIO devices:
> > 
> > pci = [ '00:1b.0','00:14.0','00:02.0@02' ]
> > 
> > This means three host pci devices are passed through, the ones on the
> > host at slots 1b.0, 14.0, and 02.0. Of course the device at 02 is the igd.
> > The @02 means libxl is requesting slot 2 in the guest for the igd; the
> > other 2 devices are just auto-assigned a slot by Qemu. Qemu cannot
> > assign the igd to slot 2 for xenfv machines without a patch that prevents
> > the Xen platform device from grabbing slot 2. That is what this patch
> > accomplishes.
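For illustration, the BDF@vslot entries in such a pci list split mechanically
into a host address and an optional requested guest slot. A minimal shell
sketch (pure text processing, not libxl's actual parser; the entry value is
taken from the config above):

```shell
# Split an xl.cfg pci entry of the form HOST_BDF[@VSLOT] (sketch only;
# libxl's real parser also handles extra options like permissive=1).
entry='00:02.0@02'
host_bdf=${entry%%@*}    # host bus:dev.fn
guest_slot=${entry#*@}   # requested guest slot (only meaningful if '@' present)
echo "host=$host_bdf guest_slot=$guest_slot"
# prints: host=00:02.0 guest_slot=02
```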
>
> In principle I think this change is OK. Apologies that this patch is at
> v9 and none of the Xen/QEMU maintainers have taken a look at it yet. I'll
> try to look at it today.

Thanks!


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:44:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 19:44:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482098.747440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJb-0006vR-IO; Fri, 20 Jan 2023 19:44:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482098.747440; Fri, 20 Jan 2023 19:44:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJb-0006vK-F4; Fri, 20 Jan 2023 19:44:43 +0000
Received: by outflank-mailman (input) for mailman id 482098;
 Fri, 20 Jan 2023 19:44:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8tu/=5R=citrix.com=prvs=37768f290=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIxJZ-0006vE-R3
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 19:44:41 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id deafe211-98fa-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 20:44:39 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: deafe211-98fa-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674243879;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Td+vdsM413b3H/N28Va3gpuwUXIu5Ol+q6BrZxznVVc=;
  b=RBEfXn1zqhiPIcJuWal92wJNJwHVrQk4/Pgpp86qlbS36JZkt/gJCE3l
   DlAmB5sNhHw4Rq8BOqITqIgGgLZ1iWqwAr0Vbw55Qb/VObGRmds7RqyzQ
   S2UzXNdMTM7dDIscXdaJF1Tzjl2VggrSCadIvRZGKLHpHAJAO8UVGd5zh
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93536163
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,233,1669093200"; 
   d="scan'208";a="93536163"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Juergen Gross <jgross@suse.com>, George
 Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: [XEN PATCH for-4.17 v6 0/5] Toolstack build system improvement, toward non-recursive makefiles
Date: Fri, 20 Jan 2023 19:44:26 +0000
Message-ID: <20230120194431.55922-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.toolstack-build-system-v6

Changes in v6:
- For unstable libs, use --default-symver instead of a generated
  version-script.
- Two new patches to deal with $(xenlibs-*,) macros.
- Second patch description reworded.

Changes in v5:
- rebased on staging
- added "tools: Rework linking options for ocaml binding libraries"

Changes in v4:
- several new patches
- some changes to other patches listed in their changelogs

Changes in v3:
- rebased
- several new patches, starting with 13/25 "tools/libs/util: cleanup Makefile"
- introducing macros to deal with linking with in-tree xen libraries
- Add -Werror to CFLAGS for all builds in tools/

Changes in v2:
- one new patch
- other changes described in patch notes

Hi everyone,

I've been looking at reworking the build system we have for "tools/",
transforming it into something that suits it better. There are a lot of
dependencies between the different sub-directories, so it would be nice if GNU
make could actually handle them. This is possible with "non-recursive makefiles".

With non-recursive makefiles, make has to load/include all the makefiles
and thus has a complete overview of all the dependencies. This allows
make to build the necessary targets in other directories, and we won't need to
build sub-directories one by one.
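As a rough illustration of the idea (file names and targets are made up;
subdirmk adds namespacing and autoconf hooks on top of this bare pattern), a
top-level Makefile can include per-directory fragments so make resolves
cross-directory dependencies in one invocation:

```shell
# Sketch: one make run builds a target whose prerequisite lives in another
# directory's fragment, because all fragments share a single namespace.
demo=$(mktemp -d)
mkdir -p "$demo/libs" "$demo/xl"
printf 'include libs/Rules.mk\ninclude xl/Rules.mk\n' > "$demo/Makefile"
printf 'libs/libdemo.a:\n\ttouch $@\n' > "$demo/libs/Rules.mk"
# xl/xl depends on a target defined in the libs fragment:
printf 'xl/xl: libs/libdemo.a\n\ttouch $@\n' > "$demo/xl/Rules.mk"
make -s -C "$demo" xl/xl   # builds libs/libdemo.a first, then xl/xl
```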

To help with this transformation, I've chosen to go with a recent project
called "subdirmk". It helps deal with the fact that all makefiles will share
the same namespace, it hooks into autoconf, and we can easily run `make` from
any subdirectory. Together, "autoconf" and "subdirmk" will also help us get
closer to being able to do out-of-tree builds of the tools, but I'm mainly
looking to have non-recursive makefiles.

Link to the project:
    https://www.chiark.greenend.org.uk/ucgi/~ian/git/subdirmk.git/

But before getting to the main course, I've got quite a few cleanups and some
changes to the makefiles. I start the patch series with patches that remove old
leftover stuff, then start reworking makefiles. There are some common changes,
like removing the "build" targets in many places, as "all" would be the more
common way to spell it and "all" is the default target anyway. There are other
changes related to the conversion to "subdirmk": I start to use the variable
$(TARGETS) in several makefiles; this variable will have a special meaning in
subdirmk, which will build those targets by default.
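A minimal sketch of that $(TARGETS) convention (illustrative names only; in
subdirmk the variable additionally drives the per-directory default target):

```shell
# TARGETS lists what a directory builds by default; "all" just depends on it.
demo=$(mktemp -d)
printf 'TARGETS := a.txt b.txt\nall: $(TARGETS)\n%%.txt:\n\techo $* > $@\n' \
    > "$demo/Makefile"
make -s -C "$demo"          # default goal "all" builds everything in TARGETS
cat "$demo/a.txt"           # prints: a
```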

As for the conversion to non-recursive makefiles with subdirmk, I have a WIP
branch; it contains some changes that I'm trying out, some notes, and the
conversion, one Makefile per commit. Cleanups are still needed and some
makefiles are not converted yet, but it's otherwise mostly done.

    https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.toolstack-build-system-v1-wip-extra

With that branch, you could try something like:
    ./configure; cd tools/xl; make
and `xl` should be built, as well as all the Xen libraries it needs.
Also, things like `make clean` or a rebuild should be faster across the
whole tools/ directory.

Cheers,

Anthony PERARD (5):
  libs: Fix auto-generation of version-script for unstable libs
  libs/light: Rework targets prerequisites
  libs/light: Makefile cleanup
  tools: Introduce macro $(xenlibs-cflags,) and introduce $(USELIBS) in
    subdirs
  tools: Rework $(xenlibs-ldlibs, ) to provide library flags only.

 tools/console/client/Makefile     |  8 ++++--
 tools/console/daemon/Makefile     | 11 ++++----
 tools/helpers/Makefile            | 29 ++++++++-----------
 tools/libs/call/Makefile          |  1 +
 tools/libs/ctrl/Makefile          |  3 --
 tools/libs/devicemodel/Makefile   |  1 +
 tools/libs/evtchn/Makefile        |  1 +
 tools/libs/foreignmemory/Makefile |  1 +
 tools/libs/gnttab/Makefile        |  1 +
 tools/libs/guest/Makefile         |  3 --
 tools/libs/hypfs/Makefile         |  1 +
 tools/libs/light/Makefile         | 47 ++++++++++++++++---------------
 tools/libs/stat/Makefile          |  2 +-
 tools/libs/store/Makefile         |  1 +
 tools/libs/toolcore/Makefile      |  1 +
 tools/libs/toollog/Makefile       |  1 +
 tools/libs/util/Makefile          |  3 --
 tools/libs/vchan/Makefile         |  3 --
 tools/Rules.mk                    | 22 +++++++++------
 tools/libs/libs.mk                | 11 ++++----
 tools/libs/light/libxl_x86_acpi.c |  2 +-
 .gitignore                        |  6 ----
 22 files changed, 75 insertions(+), 84 deletions(-)

-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:45:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 19:45:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482099.747450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJk-0007Cu-Sp; Fri, 20 Jan 2023 19:44:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482099.747450; Fri, 20 Jan 2023 19:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJk-0007Cn-Pp; Fri, 20 Jan 2023 19:44:52 +0000
Received: by outflank-mailman (input) for mailman id 482099;
 Fri, 20 Jan 2023 19:44:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8tu/=5R=citrix.com=prvs=37768f290=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIxJj-0006vE-Kx
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 19:44:51 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e5f4d844-98fa-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 20:44:50 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5f4d844-98fa-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674243889;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=yUuDd1gBVYbDkE4V3W6tzEt+GcRRvH9lKf79QyzlwRo=;
  b=DlfZabN+4uDl62V2QsM9LZ6s318H2J5IwETN5iPrqBm3DNummPkxxF2V
   FQU+flX+hXswe6tJnp/06DyukKFOnldenWyNXYV+nzreJ2UogenoPjH0r
   09YmQS0lnI6ILXqpYGFWiJ7QzA4sIyMIfyBEYyfcM/6pnXMR9E6m2NyWB
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 96015186
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,233,1669093200"; 
   d="scan'208";a="96015186"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [XEN PATCH for-4.17 v6 1/5] libs: Fix auto-generation of version-script for unstable libs
Date: Fri, 20 Jan 2023 19:44:27 +0000
Message-ID: <20230120194431.55922-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230120194431.55922-1-anthony.perard@citrix.com>
References: <20230120194431.55922-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

When there isn't a version-script for a shared library (as for the
unstable libs), we create one based on the current Xen version. But
that version-script becomes out-of-date as soon as Xen's version
changes, and make has no way to regenerate the version-script on
rebuild.

For unstable libs, we only need the symver to be different from any
previous release of Xen. There's an option, "--default-symver", which
allows using the soname as the symver, and as the soname contains the Xen
release version, it will be different for every release. With
--default-symver we don't need to generate a version-script.

But we also need to know if there's already an existing version script;
for that, we introduce $(version-script), to be used to point to the
path of the existing script. (Guessing whether a version script exists for a
stable library with, for example, $(wildcard) won't work, as a file would
exist when building the library without this patch.)

We don't need the version-script unless we are making the shared
library, so it is removed from the "all" target.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v6:
    - use --default-symver instead of generating a version-script.
      (option was available in binutils 2.16, according to the doc, so that
      should work.)
    
    v4:
    - new patch
    
    CC: Andrew Cooper <Andrew.Cooper3@citrix.com>

 tools/libs/call/Makefile          |  1 +
 tools/libs/ctrl/Makefile          |  3 ---
 tools/libs/devicemodel/Makefile   |  1 +
 tools/libs/evtchn/Makefile        |  1 +
 tools/libs/foreignmemory/Makefile |  1 +
 tools/libs/gnttab/Makefile        |  1 +
 tools/libs/guest/Makefile         |  3 ---
 tools/libs/hypfs/Makefile         |  1 +
 tools/libs/light/Makefile         |  1 -
 tools/libs/stat/Makefile          |  2 +-
 tools/libs/store/Makefile         |  1 +
 tools/libs/toolcore/Makefile      |  1 +
 tools/libs/toollog/Makefile       |  1 +
 tools/libs/util/Makefile          |  3 ---
 tools/libs/vchan/Makefile         |  3 ---
 tools/libs/libs.mk                | 10 ++++------
 .gitignore                        |  6 ------
 17 files changed, 14 insertions(+), 26 deletions(-)

diff --git a/tools/libs/call/Makefile b/tools/libs/call/Makefile
index 103f5ad360..56a964b517 100644
--- a/tools/libs/call/Makefile
+++ b/tools/libs/call/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
 MINOR    = 3
+version-script := libxencall.map
 
 include Makefile.common
 
diff --git a/tools/libs/ctrl/Makefile b/tools/libs/ctrl/Makefile
index 93442ab389..094e84b8d8 100644
--- a/tools/libs/ctrl/Makefile
+++ b/tools/libs/ctrl/Makefile
@@ -10,6 +10,3 @@ PKG_CONFIG_NAME := Xencontrol
 NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-clean::
-	rm -f libxenctrl.map
diff --git a/tools/libs/devicemodel/Makefile b/tools/libs/devicemodel/Makefile
index b70dd774e4..20d1d112e7 100644
--- a/tools/libs/devicemodel/Makefile
+++ b/tools/libs/devicemodel/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
 MINOR    = 4
+version-script := libxendevicemodel.map
 
 include Makefile.common
 
diff --git a/tools/libs/evtchn/Makefile b/tools/libs/evtchn/Makefile
index 3dad3840c6..18cdaab89e 100644
--- a/tools/libs/evtchn/Makefile
+++ b/tools/libs/evtchn/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
 MINOR    = 2
+version-script := libxenevtchn.map
 
 include Makefile.common
 
diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
index b70dd774e4..81398e88b1 100644
--- a/tools/libs/foreignmemory/Makefile
+++ b/tools/libs/foreignmemory/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
 MINOR    = 4
+version-script := libxenforeignmemory.map
 
 include Makefile.common
 
diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
index 3dad3840c6..4528830bdc 100644
--- a/tools/libs/gnttab/Makefile
+++ b/tools/libs/gnttab/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
 MINOR    = 2
+version-script := libxengnttab.map
 
 include Makefile.common
 
diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index 19d3ff2fdb..93338a9301 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -14,6 +14,3 @@ NO_HEADERS_CHK := y
 include $(XEN_ROOT)/tools/libs/libs.mk
 
 libxenguest.so.$(MAJOR).$(MINOR): LDLIBS += $(ZLIB_LIBS) -lz
-
-clean::
-	rm -f libxenguest.map
diff --git a/tools/libs/hypfs/Makefile b/tools/libs/hypfs/Makefile
index 630e1e6f3e..7fae5c750d 100644
--- a/tools/libs/hypfs/Makefile
+++ b/tools/libs/hypfs/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
 MINOR    = 0
+version-script := libxenhypfs.map
 
 LDLIBS += -lz
 
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 4fddcc6f51..cd3fa855e1 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -262,6 +262,5 @@ clean::
 	$(RM) testidl.c.new testidl.c *.api-ok
 	$(RM) $(TEST_PROGS) libxenlight_test.so libxl_test_*.opic
 	$(RM) -r __pycache__
-	$(RM) libxenlight.map
 	$(RM) $(AUTOSRCS) $(AUTOINCS)
 	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) clean
diff --git a/tools/libs/stat/Makefile b/tools/libs/stat/Makefile
index 7eaf50e91e..ee5c42bf7b 100644
--- a/tools/libs/stat/Makefile
+++ b/tools/libs/stat/Makefile
@@ -134,4 +134,4 @@ uninstall:: uninstall-perl-bindings
 endif
 
 clean::
-	$(RM) libxenstat.map $(BINDINGS) $(BINDINGSRC)
+	$(RM) $(BINDINGS) $(BINDINGSRC)
diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
index 3557a8c76d..daed9d148f 100644
--- a/tools/libs/store/Makefile
+++ b/tools/libs/store/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR = 4
 MINOR = 0
+version-script := libxenstore.map
 
 ifeq ($(CONFIG_Linux),y)
 LDLIBS += -ldl
diff --git a/tools/libs/toolcore/Makefile b/tools/libs/toolcore/Makefile
index 0d92b68b3b..20671dadd0 100644
--- a/tools/libs/toolcore/Makefile
+++ b/tools/libs/toolcore/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR	= 1
 MINOR	= 0
+version-script := libxentoolcore.map
 
 LIBHEADER := xentoolcore.h
 
diff --git a/tools/libs/toollog/Makefile b/tools/libs/toollog/Makefile
index 2361b8cbf1..d612227c85 100644
--- a/tools/libs/toollog/Makefile
+++ b/tools/libs/toollog/Makefile
@@ -3,6 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR	= 1
 MINOR	= 0
+version-script := libxentoollog.map
 
 include Makefile.common
 
diff --git a/tools/libs/util/Makefile b/tools/libs/util/Makefile
index 493d2e00be..e016baf888 100644
--- a/tools/libs/util/Makefile
+++ b/tools/libs/util/Makefile
@@ -47,6 +47,3 @@ $(OBJS-y) $(PIC_OBJS): $(AUTOINCS)
 %.c %.h:: %.l
 	@rm -f $*.[ch]
 	$(FLEX) --header-file=$*.h --outfile=$*.c $<
-
-clean::
-	$(RM) libxenutil.map
diff --git a/tools/libs/vchan/Makefile b/tools/libs/vchan/Makefile
index ac2bff66f5..a1ef60ac8e 100644
--- a/tools/libs/vchan/Makefile
+++ b/tools/libs/vchan/Makefile
@@ -11,6 +11,3 @@ OBJS-y += io.o
 NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-clean::
-	rm -f libxenvchan.map
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 3eb91fc8f3..0e4b5e0bd0 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -4,6 +4,7 @@
 #   PKG_CONFIG: name of pkg-config file (xen$(LIBNAME).pc if empty)
 #   MAJOR:   major version of lib (Xen version if empty)
 #   MINOR:   minor version of lib (0 if empty)
+#   version-script: Specify the name of a version script to the linker.
 
 LIBNAME := $(notdir $(CURDIR))
 
@@ -53,7 +54,7 @@ $(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_INCLUDE)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 
 .PHONY: all
-all: $(TARGETS) $(PKG_CONFIG_LOCAL) libxen$(LIBNAME).map $(LIBHEADERS)
+all: $(TARGETS) $(PKG_CONFIG_LOCAL) $(LIBHEADERS)
 
 ifneq ($(NO_HEADERS_CHK),y)
 all: headers.chk
@@ -71,9 +72,6 @@ headers.lst: FORCE
 	@{ set -e; $(foreach h,$(LIBHEADERS),echo $(h);) } > $@.tmp
 	@$(call move-if-changed,$@.tmp,$@)
 
-libxen$(LIBNAME).map:
-	echo 'VERS_$(MAJOR).$(MINOR) { global: *; };' >$@
-
 lib$(LIB_FILE_NAME).a: $(OBJS-y)
 	$(AR) rc $@ $^
 
@@ -82,8 +80,8 @@ lib$(LIB_FILE_NAME).so: lib$(LIB_FILE_NAME).so.$(MAJOR)
 lib$(LIB_FILE_NAME).so.$(MAJOR): lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)
 	$(SYMLINK_SHLIB) $< $@
 
-lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR): $(PIC_OBJS) libxen$(LIBNAME).map
-	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,lib$(LIB_FILE_NAME).so.$(MAJOR) -Wl,--version-script=libxen$(LIBNAME).map $(SHLIB_LDFLAGS) -o $@ $(PIC_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
+lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR): $(PIC_OBJS) $(version-script)
+	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,lib$(LIB_FILE_NAME).so.$(MAJOR) -Wl,$(if $(version-script),--version-script=$(version-script),--default-symver) $(SHLIB_LDFLAGS) -o $@ $(PIC_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
 
 # If abi-dumper is available, write out the ABI analysis
 ifneq ($(ABI_DUMPER),)
diff --git a/.gitignore b/.gitignore
index 880ba88c55..d9c906670d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -110,8 +110,6 @@ tools/config.cache
 config/Tools.mk
 config/Stubdom.mk
 config/Docs.mk
-tools/libs/ctrl/libxenctrl.map
-tools/libs/guest/libxenguest.map
 tools/libs/guest/xc_bitops.h
 tools/libs/guest/xc_core.h
 tools/libs/guest/xc_core_arm.h
@@ -121,7 +119,6 @@ tools/libs/light/_*.[ch]
 tools/libs/light/*.pyc
 tools/libs/light/_libxl.api-for-check
 tools/libs/light/*.api-ok
-tools/libs/light/libxenlight.map
 tools/libs/light/libxl-save-helper
 tools/libs/light/dsdt*
 tools/libs/light/mk_dsdt
@@ -131,13 +128,10 @@ tools/libs/light/testidl.c
 tools/libs/light/test_timedereg
 tools/libs/light/test_fdderegrace
 tools/libs/light/tmp.*
-tools/libs/stat/libxenstat.map
 tools/libs/store/list.h
 tools/libs/store/utils.h
 tools/libs/store/xs_lib.c
 tools/libs/util/libxlu_cfg_y.output
-tools/libs/util/libxenutil.map
-tools/libs/vchan/libxenvchan.map
 tools/debugger/gdb/gdb-6.2.1-linux-i386-xen/*
 tools/debugger/gdb/gdb-6.2.1/*
 tools/debugger/gdb/gdb-6.2.1.tar.bz2
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:45:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 19:45:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482100.747460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJn-0007T8-4Q; Fri, 20 Jan 2023 19:44:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482100.747460; Fri, 20 Jan 2023 19:44:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJn-0007T1-1I; Fri, 20 Jan 2023 19:44:55 +0000
Received: by outflank-mailman (input) for mailman id 482100;
 Fri, 20 Jan 2023 19:44:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8tu/=5R=citrix.com=prvs=37768f290=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIxJm-0006vE-9G
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 19:44:54 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e776542c-98fa-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 20:44:52 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e776542c-98fa-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674243892;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=wkzc8EHjHeIToBQwmAPpP3ELZs15aV2ACnx/cVw+uyo=;
  b=ftb0DCwjopmb3pSx+teFq8NvYUw7DWhEq1lQ6m/hxwGaIK8Msfn9TEhX
   hiTRYnVXIe0qR0X7v76f2VHKbOxzKhhUW2LG/g9fPurVF2Lc3D9zo743h
   K08OU2II4mkU8b/6piPc1EwNSaniGzPAtYnzaxfaaNLVcvcOA8Qs1xNh8
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 92472092
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Qg9Zi6+iZD9/AiYE3Y+bDrUDlH6TJUtcMsCJ2f8bNWPcYEJGY0x3z
 2UbCmvQOvrYa2L1ftF+b4zgp04F7ZeAzdVhHQA5qS08E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKucYHsZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kIw1BjOkGlA5AdmPKoT5AS2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDkl/1
 eY/OhsOQCnfnt2b3omfQ+JU2PsseZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAj3/jczpeuRSNqLA++WT7xw1tyrn9dtHSf7RmQO0ExBrH/
 DqXpQwVBDkZGtvE2we+1kiuodWXvynZaoUuEJqRo6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0RN54A+A8rgaXxcL84QmDAXMfZiVcc9Fgv8gzLQHGz
 XfQwYmvX2Y29uTIFzTErOz8QS6O1TY9HE8YQj0vTiU8v8DcjZ8IqhvEdohcH/vg5jHqIg3Yz
 zePpSk4orwci88Xyqm2lWz6byKQSovhFVBsuFiONo6xxkYgPdP+OdT0gbTOxawYRLt1WGVtq
 5TtdyK2yOkVRa+AmyWWKAnmNOH4vq3VWNEwbLMGInXAy9hO0yT5FWy13N2YDB0xWvvogRezP
 CfuVfp5vfe/xkeCY65teJ6WAM8316XmHtmNfqmKMYYUOcksLV/bpHkGiausM4bFyhBEfUYXY
 MfzTCpRJSxCVfQPIMSeGY/xLoPHNghhnDiOFPgXPjys0KaEZW79dFv2GALmUwzN14vd+F+92
 48GZ6O3J+B3DLWWjt//rdRCcjjn7BETWfjLliCgXrfaclo7Qzt9V6S5LHFIU9UNopm5X9zgp
 hmVMnK0AnKm7ZEbAW1mskxeVY4=
IronPort-HdrOrdr: A9a23:0UvX76j0lAuPl4zJVFek+OFeSHBQXssji2hC6mlwRA09TyX4ra
 2TdZEgvnXJYVkqKRIdcK+7Scu9qB/nm6KdgrN8AV7BZmnbUQKTRelfBODZrAEIdReeygdV79
 YET5RD
X-IronPort-AV: E=Sophos;i="5.97,233,1669093200"; 
   d="scan'208";a="92472092"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, Juergen
 Gross <jgross@suse.com>
Subject: [XEN PATCH for-4.17 v6 2/5] libs/light: Rework targets prerequisites
Date: Fri, 20 Jan 2023 19:44:28 +0000
Message-ID: <20230120194431.55922-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230120194431.55922-1-anthony.perard@citrix.com>
References: <20230120194431.55922-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

No need for $(AUTOSRCS): GNU make can generate them as needed when
trying to build the objects. Also, those two AUTOSRCS don't need to be
a prerequisite of "all". As for the "clean" target, those two files
are already removed via "_*.c".

We don't need $(AUTOINCS) either:
- As for both _libxl_save_msgs*.h headers, we are adding more
  selective dependencies so the headers will still be generated as
  needed.
- The "clean" rule already deletes the _*.h files, so AUTOINCS isn't
  needed there.

"libxl_internal_json.h" doesn't seem to have ever existed, so the
dependency is removed.

Rework object prerequisites so that they depend on either "libxl.h"
or "libxl_internal.h". "libxl.h" is not normally included directly by
the source code, as "libxl_internal.h" is used instead, but "libxl.h"
is a prerequisite of "libxl_internal.h", so generated headers will
still be generated as needed.

Make doesn't need "libxl.h" to generate "testidl.c"; "libxl.h" is only
needed later, when building "testidl.o". This avoids having to
regenerate "testidl.c" when only "libxl.h" has changed. Also use the
automatic variables $< and $@.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v6:
    - rebased, part of the patch committed as 4ff0811
    - reword commit message
    
    v4:
    - new patch

 tools/libs/light/Makefile | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index cd3fa855e1..b28447a2ae 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -148,9 +148,6 @@ LIBXL_TEST_OBJS += $(foreach t, $(LIBXL_TESTS_INSIDE),libxl_test_$t.opic)
 TEST_PROG_OBJS += $(foreach t, $(LIBXL_TESTS_PROGS),test_$t.o) test_common.o
 TEST_PROGS += $(foreach t, $(LIBXL_TESTS_PROGS),test_$t)
 
-AUTOINCS = _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
-AUTOSRCS = _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
-
 CLIENTS = testidl libxl-save-helper
 
 SAVE_HELPER_OBJS = libxl_save_helper.o _libxl_save_msgs_helper.o
@@ -178,13 +175,13 @@ libxl_x86_acpi.o libxl_x86_acpi.opic: CFLAGS += -I$(XEN_ROOT)/tools
 $(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxenguest)
 
 testidl.o: CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenlight)
-testidl.c: libxl_types.idl gentest.py $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
-	$(PYTHON) gentest.py libxl_types.idl testidl.c.new
-	mv testidl.c.new testidl.c
+testidl.c: libxl_types.idl gentest.py
+	$(PYTHON) gentest.py $< $@.new
+	mv -f $@.new $@
 
-all: $(CLIENTS) $(TEST_PROGS) $(AUTOSRCS) $(AUTOINCS)
+all: $(CLIENTS) $(TEST_PROGS)
 
-$(OBJS-y) $(PIC_OBJS) $(SAVE_HELPER_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): $(AUTOINCS) libxl.api-ok
+$(OBJS-y) $(PIC_OBJS) $(SAVE_HELPER_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): libxl.api-ok
 
 $(DSDT_FILES-y): acpi
 
@@ -196,7 +193,7 @@ libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
 	touch $@
 
-_libxl.api-for-check: $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
+_libxl.api-for-check: $(XEN_INCLUDE)/libxl.h
 	$(CC) $(CPPFLAGS) $(CFLAGS) -c -E $< $(APPEND_CFLAGS) \
 		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
 		>$@.new
@@ -208,14 +205,22 @@ _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
 	$(PERL) -w $< $@ >$@.new
 	$(call move-if-changed,$@.new,$@)
 
+#
+# headers dependencies on generated headers
+#
 $(XEN_INCLUDE)/libxl.h: $(XEN_INCLUDE)/_libxl_types.h
 $(XEN_INCLUDE)/libxl_json.h: $(XEN_INCLUDE)/_libxl_types_json.h
 libxl_internal.h: $(XEN_INCLUDE)/libxl.h $(XEN_INCLUDE)/libxl_json.h
 libxl_internal.h: _libxl_types_internal.h _libxl_types_private.h _libxl_types_internal_private.h
-libxl_internal_json.h: _libxl_types_internal_json.h
+libxl_internal.h: _libxl_save_msgs_callout.h
 
-$(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): $(XEN_INCLUDE)/libxl.h
+#
+# objects dependencies on headers that depends on generated headers
+#
+$(TEST_PROG_OBJS): $(XEN_INCLUDE)/libxl.h
 $(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS): libxl_internal.h
+$(SAVE_HELPER_OBJS): $(XEN_INCLUDE)/libxl.h _libxl_save_msgs_helper.h
+testidl.o: $(XEN_INCLUDE)/libxl.h
 
 # This exploits the 'multi-target pattern rule' trick.
 # gentypes.py should be executed only once to make all the targets.
@@ -262,5 +267,4 @@ clean::
 	$(RM) testidl.c.new testidl.c *.api-ok
 	$(RM) $(TEST_PROGS) libxenlight_test.so libxl_test_*.opic
 	$(RM) -r __pycache__
-	$(RM) $(AUTOSRCS) $(AUTOINCS)
 	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) clean
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:45:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 19:45:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482101.747470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJo-0007k1-Cj; Fri, 20 Jan 2023 19:44:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482101.747470; Fri, 20 Jan 2023 19:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJo-0007jq-9b; Fri, 20 Jan 2023 19:44:56 +0000
Received: by outflank-mailman (input) for mailman id 482101;
 Fri, 20 Jan 2023 19:44:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8tu/=5R=citrix.com=prvs=37768f290=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIxJn-0007So-5n
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 19:44:55 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e7997d30-98fa-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 20:44:52 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7997d30-98fa-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674243892;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=qbIop1KczSm54uhv5dm8fTDAeLbDBaP++MO1eIvO/KI=;
  b=UrZkweEhhieC1HE1Oj90vdmqCb32sFEGeOaXaUkFCesrUzcExh74KnrM
   wZL3oxEjZPWUn9r8GgWJ391/PitXL3dwSZ6D+6Pv9KwILRDyQjDMcOs9J
   +iphaWP2XKmGlzgJmVBac4693spBVk+KoO58O6mCr0pmVDXlBj9WuxQmb
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93020460
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:iVGmQ6Al5g3PdBVW/z3jw5YqxClBgxIJ4kV8jS/XYbTApG4r3zRUx
 2QdXG6BPP2KZ2T8fdpzaNu+90ICsZXRxodkQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtMpvlDs15K6p4GpC5gRlDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw8O10RmxB0
 KYhLB9QRE3fv/+o4ZerY7w57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTIJs4gOevgGi5azBCoUiZjaE2/3LS3Ep6172F3N/9K4fSH50JwB7wS
 mTu3mLDXABCZNyjzhG8ok/xnL/hpwDQcddHfFG/3qEz2wDCroAJMzUJUXOrrP//jVSxM/pdJ
 FYT4TEGtrUp+QqgSdyVdw21pjuIswARX/JUEvYm80edx6zM+QGbC2MYCDlbZ7QbWNQeHGJwk
 AXTxpWwWGIp6efOIZ6AyluKhTm5Om8YIkpYXCsrECFYv+H+vqAWnjuaG76PD5WJptHyHDjxx
 RWDoy4/m6gfgKY36kmrwbzUq2ny/8aUF2bZ8i2SBzv4tV0hOOZJcqTysTDmAeB8wJF1p7Vrl
 FwNgICg4e8HFvlhfwTdEbxWTNlFCxtoWQAwYGKD/LF7rVxBHkJPm6gKuFlDyL9BaJpsRNMQS
 Ba7VfltzJFSJmC2SqR8fpi8Dc8npYC5S4u5DKuFM4MePsApHONiwM2ITRTIt4wKuBF8+ZzTx
 L/BKZr8ZZrkIfoPIMWKqxc1juZwm3FWKZL7TpHn1RW3uYdyl1bMIYrpxGCmN7hjhIvd+VW9z
 jqqH5fSo/mpeLGkM3a/HE96BQxiEEXX8riv8pwHK7XZflY9cIzjYteIqY4cl0Vet/w9vo/1E
 ruVASe0FHKXaaX7FDi3
IronPort-HdrOrdr: A9a23:ix/VBKq0DbD/C/hmaFsaDlkaV5sGLNV00zEX/kB9WHVpm5Oj5q
 eTdaUgpHvJYWgqKRQdcIi7Sdu9qXO1z+8O3WBjB8bWYOCGghrqEGgG1+DfKlLbalPDH4JmpN
 9dmu1Fea7N5DtB/ITHCWuDYqcdKbC8mcjY55a5vg5QpENRGtFdBmxCe3um+zhNNXZ77O0CZe
 ahD6R81kGdkHIsDrXZOpD6ZYf+Tibw+a4OnSRmO/fc0mOzZMyThIIS7iL34j4uFxd0hZsy+2
 nMlAL0oo2lrvGA0xfZk0PD8phMn9Pl691bQOiBkNIcJDnAghuhIN0JYczHgBkF5MWUrHo6mt
 jFpBkte+x19nPqZ2mw5Tf9xgX61z4qynn6jXuVm2Hqr8DVTC8zT+BBmYVaWB3E7FdIhqA47I
 t7m0ai87ZHBxLJmyrwo/LSUQtxq0ayqX0+1cYOkn12S+IlGflshL1a2HkQPIYLHSr85oxiOv
 JpFtvg6PFfdk7fR2zFv1No3MenUh0Ib067qwk5y5SoOgpt7SpEJngjtZEid7A7hc4Aoqx/lr
 /522JT5e5zp4EtHPxA7aw6ML+K4yT2MGXx2SSpUBPa/eg8SgTwgo+y77Mv6O6wfpsUiJM0hZ
 TaSVtd8XU/YkT0FKS1rdN2Gz32MSmAtA7Wu45jzok8vqe5SKvgMCWFRlxrm8y8o+8HCsmeX/
 qoIppZD/LqMGOrQO9yrk3DcogXLWNbXNweu949VV7LqsXXKpfyvuiedPrIPrLiHTstR2u6CH
 oeWzr4ItlG8ymQKz7FqQmUX2modl30/Jp2HqSf9+8PyJIVPokJqQQRgUTR3LDHFdSDiN19QK
 JTGsKtrkrgnxj+wY/h1RQgBiZg
X-IronPort-AV: E=Sophos;i="5.97,233,1669093200"; 
   d="scan'208";a="93020460"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH for-4.17 v6 3/5] libs/light: Makefile cleanup
Date: Fri, 20 Jan 2023 19:44:29 +0000
Message-ID: <20230120194431.55922-4-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230120194431.55922-1-anthony.perard@citrix.com>
References: <20230120194431.55922-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Rework the "libacpi.h" include in "libxl_x86_acpi.c" so as to be more
selective about the include path, and only add "tools/libacpi/". Also,
"libxl_dom.c" doesn't use "libacpi.h" anymore. Use "-iquote" for the
libacpi headers.

Get rid of the weird "$(eval stem =" in the middle of a recipe and use
the make automatic variable "$(*F)" instead.

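A minimal sketch of what "$(*F)" does in a pattern rule (illustration
only; the rule and file names here are hypothetical):

```make
# In a pattern rule, $* is the stem matched by '%', and $(*F) is the
# file-within-directory part of that stem, i.e. $(notdir $*). So for
# a target like sub/dir/_libxl_types.h, $* could be "sub/dir/s" while
# $(*F) is just "s", which is what the recipe wants.
_libxl_type%.h: libxl_type%.idl
	@echo "stem: $*  file part: $(*F)"
```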
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v4:
    - new patch

 tools/libs/light/Makefile         | 16 +++++++---------
 tools/libs/light/libxl_x86_acpi.c |  2 +-
 2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index b28447a2ae..96daeabc47 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -170,8 +170,7 @@ LDLIBS += $(LDLIBS-y)
 $(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
 $(ACPI_OBJS) $(ACPI_PIC_OBJS): CFLAGS += -I. -DLIBACPI_STDUTILS=\"$(CURDIR)/libxl_x86_acpi.h\"
 $(TEST_PROG_OBJS) _libxl.api-for-check: CFLAGS += $(CFLAGS_libxentoollog) $(CFLAGS_libxentoolcore)
-libxl_dom.o libxl_dom.opic: CFLAGS += -I$(XEN_ROOT)/tools  # include libacpi/x86.h
-libxl_x86_acpi.o libxl_x86_acpi.opic: CFLAGS += -I$(XEN_ROOT)/tools
+libxl_x86_acpi.o libxl_x86_acpi.opic: CFLAGS += -iquote $(ACPI_PATH)
 $(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxenguest)
 
 testidl.o: CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenlight)
@@ -225,13 +224,12 @@ testidl.o: $(XEN_INCLUDE)/libxl.h
 # This exploits the 'multi-target pattern rule' trick.
 # gentypes.py should be executed only once to make all the targets.
 _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_type%.idl gentypes.py idl.py
-	$(eval stem = $(notdir $*))
-	$(PYTHON) gentypes.py libxl_type$(stem).idl __libxl_type$(stem).h __libxl_type$(stem)_private.h \
-		__libxl_type$(stem)_json.h  __libxl_type$(stem).c
-	$(call move-if-changed,__libxl_type$(stem).h,_libxl_type$(stem).h)
-	$(call move-if-changed,__libxl_type$(stem)_private.h,_libxl_type$(stem)_private.h)
-	$(call move-if-changed,__libxl_type$(stem)_json.h,_libxl_type$(stem)_json.h)
-	$(call move-if-changed,__libxl_type$(stem).c,_libxl_type$(stem).c)
+	$(PYTHON) gentypes.py libxl_type$(*F).idl __libxl_type$(*F).h __libxl_type$(*F)_private.h \
+		__libxl_type$(*F)_json.h  __libxl_type$(*F).c
+	$(call move-if-changed,__libxl_type$(*F).h,_libxl_type$(*F).h)
+	$(call move-if-changed,__libxl_type$(*F)_private.h,_libxl_type$(*F)_private.h)
+	$(call move-if-changed,__libxl_type$(*F)_json.h,_libxl_type$(*F)_json.h)
+	$(call move-if-changed,__libxl_type$(*F).c,_libxl_type$(*F).c)
 
 .PRECIOUS: _libxl_type%.h _libxl_type%.c
 
diff --git a/tools/libs/light/libxl_x86_acpi.c b/tools/libs/light/libxl_x86_acpi.c
index 57a6b63790..22eb160659 100644
--- a/tools/libs/light/libxl_x86_acpi.c
+++ b/tools/libs/light/libxl_x86_acpi.c
@@ -16,7 +16,7 @@
 #include "libxl_arch.h"
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/e820.h>
-#include "libacpi/libacpi.h"
+#include "libacpi.h"
 
  /* Number of pages holding ACPI tables */
 #define NUM_ACPI_PAGES 16
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:45:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 19:45:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482102.747480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJq-00083m-R9; Fri, 20 Jan 2023 19:44:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482102.747480; Fri, 20 Jan 2023 19:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxJq-00083b-NV; Fri, 20 Jan 2023 19:44:58 +0000
Received: by outflank-mailman (input) for mailman id 482102;
 Fri, 20 Jan 2023 19:44:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8tu/=5R=citrix.com=prvs=37768f290=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIxJp-0007So-81
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 19:44:57 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e99920fc-98fa-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 20:44:54 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e99920fc-98fa-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674243894;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=fh1/fP1mhvs+CCh30pp4CnSP8cqF7mY8PIja6gPahWo=;
  b=AE+pFJRR2f5GPi3prwKn3GM5537qaFuLSUKfhY6IojnQf03qNZ9UONoH
   29iJZexjzXdrd3fTZ6O1J+iE644PgfDFtTOO3lPCu5McpmzrxsqqFW9pJ
   Z8KzEWBK3BNOiwKKXBUP1Y9d5zJOnJhU6xtxHUGDORZgWoZBh6Nuf7F+I
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93020461
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:SxzEw6OQwPfn1tDvrR2gl8FynXyQoLVcMsEvi/4bfWQNrUonhGFWn
 2AbWmCPPfbcZmT3e9BzYdvg80JTvZ6HxoRrHQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQA+KmU4YoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9Suv3rRC9H5qyo42tB5ARmPpingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0ux8AiZD+
 tlCETQiMzre29qr3pOeWsA506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUo3RHZ8NwhjBz
 o7A10DoIzsxJO2Y9X2q83aJp9eXlyb2Y41HQdVU8dY12QbOlwT/EiY+RVa95PW0lEO6c9ZeM
 FAPvDojq7Ao806mRcW7WAe3yFabujYMVtwWFPc1gDxh0YKNvVzfXDJdCGccNpp/7pReqSEWO
 kGhldjqQjFgleesTV3A3OrIlS6sGA0FBDpXDcMbdjct797mqYA1qxvASNd/DaK45uHI9SHML
 yOi93Zn2ehK5SIf/+DipA2c3WrwznTcZlRtjjg7SF5J+e+QiGSNQ4WzoWbW4v9bRGpyZgnQ5
 SNU8yRyAQ1nMH1sqMBuaL9XdF1M2xpjGGeE6WOD57F7q1yQF4eLJOi8Gg1WKkZzKdojcjT0e
 kLVsg45zMYNYyfwNv4qOtLtU5xCIU3c+TLNDKi8gj1mO8gZSeN61Hs2OR74M57FziDAbp3Ty
 b/EKJ3xXB72+IxszSasRvd17FPY7nlW+I8nfriil07P+ePHNBaopUItbAPmghYRsPnV/204M
 r93a6O39vmoeLSnMnmKqtRPcQtiwLpSLcmelvG7v9WremJOcFzNwdeKqV/9U+SJR5hoq9o=
IronPort-HdrOrdr: A9a23:OExSj6lFFYnKjizn/1RoicDHNVfpDfMBiWdD5ihNYBxZY6Wkfp
 +V7ZMmPE7P+VIssS8b6LW90fG7MAHhHZ4c2/hqAV7QZniShILIFvAg0WKG+Vbd8kLFh5BgPM
 tbAtBD4ZjLfCtHZKXBkUuF+rQbsai6GcmT7I+urQYKPHhXguNbnndE422gYzBLrXx9dOUE/e
 2nl7Z6TlSbCA8qh8KAZghnYwE8nbL2fWndDCLu+yRH1OB1t1mVAUHBfyRwIy1xbxp/hZMZtU
 TVmQ3w4auu99m91x/nzmfWq7hGhdf7zdNHJcqUzuwYMC/lhAqEbJloH+TqhkFwnMifrHIR1P
 XcqRYpOMp+r1vXY2GOuBPonyXwzTo07Hfm6FmAxV/uu9bwSj4WA9dIwahZbhzawUw9u8wU6t
 MP40up875sST/QliX04NbFEztwkFCvnHYkmekPy1RCTIo3ctZq3Moi1XIQNK1FMDPx6YghHu
 UrJtrb/uxqfVSTaG2clnVzwearQm84En69MxE/U42uomBrdUJCvhElLf8k7yo9HVUGOsV5Dt
 H/Q/9VfXd1P5ArhOxGdbk8qICMexjwqFr3QRWvyBLcZeY60jv22ujKyaRw6+ewdJMSypwu3J
 zHTVNDrGY3P1njEMuUwfRwg17wqUiGLHjQI/tlltdEk6y5QKCuPTyISVgoncflq/IDAtfDU/
 L2PJ5NGffsIWbnBI4MhmTFKtlvAGhbVNdQtscwWlqIrM6OIor2tvbDePKWILb2Cz4rVm72H3
 NGVjnuI8dL6FytRxbD8VnscmKofla68YN7EaDc8eRWwI8RNpdUugxQkli97tHjE0wOjkX3Rj
 o1HFrKqNLxmYDtxxeA04xAAGsUMnpo
X-IronPort-AV: E=Sophos;i="5.97,233,1669093200"; 
   d="scan'208";a="93020461"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH for-4.17 v6 4/5] tools: Introduce macro $(xenlibs-cflags,) and introduce $(USELIBS) in subdirs
Date: Fri, 20 Jan 2023 19:44:30 +0000
Message-ID: <20230120194431.55922-5-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230120194431.55922-1-anthony.perard@citrix.com>
References: <20230120194431.55922-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Introduce $(xenlibs-cflags,) to get the CFLAGS needed to build with
the Xen libraries listed as arguments. This mainly gives the ability
to use the same list of Xen libs as with the other macro,
$(xenlibs-ldlibs,). It also avoids listing $(CFLAGS_xeninclude) more
than once.

We will use $(USELIBS) to list the Xen libraries used by a
subdirectory or a binary. Since we usually want the CFLAGS, LDFLAGS
and LDLIBS of possibly several Xen libs, we don't need to duplicate
the list for each kind of flag. This change to use $(USELIBS) is only
done in console/ and helpers/ for now, as those already use the
$(xenlibs-ldlibs,) macro.
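A plausible shape for such a macro (a sketch only; the actual
definition is in tools/Rules.mk, and the per-library CFLAGS_libxen*
variables are assumed to already exist there):

```make
# Map each short library name (e.g. "ctrl store") to its existing
# CFLAGS_libxen<name> variable and concatenate the results.
xenlibs-cflags = $(foreach lib,$(1),$(CFLAGS_libxen$(lib)))

# Usage in a subdirectory Makefile:
#   USELIBS := ctrl store
#   CFLAGS  += $(call xenlibs-cflags,$(USELIBS))
```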

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v6:
    - new patch

 tools/console/client/Makefile |  7 ++++---
 tools/console/daemon/Makefile | 10 ++++------
 tools/helpers/Makefile        | 26 +++++++++-----------------
 tools/Rules.mk                |  6 ++++++
 4 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/tools/console/client/Makefile b/tools/console/client/Makefile
index 62d89fdeb9..071262c9ae 100644
--- a/tools/console/client/Makefile
+++ b/tools/console/client/Makefile
@@ -1,11 +1,12 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += $(CFLAGS_libxenctrl)
-CFLAGS += $(CFLAGS_libxenstore)
+USELIBS := ctrl store
+
+CFLAGS += $(call xenlibs-cflags,$(USELIBS))
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 
-LDLIBS += $(call xenlibs-ldlibs,ctrl store)
+LDLIBS += $(call xenlibs-ldlibs,$(USELIBS))
 LDLIBS += $(SOCKET_LIBS)
 
 OBJS-y := main.o
diff --git a/tools/console/daemon/Makefile b/tools/console/daemon/Makefile
index 9fc3b6711f..e53c874eee 100644
--- a/tools/console/daemon/Makefile
+++ b/tools/console/daemon/Makefile
@@ -1,15 +1,13 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += $(CFLAGS_libxenctrl)
-CFLAGS += $(CFLAGS_libxenstore)
-CFLAGS += $(CFLAGS_libxenevtchn)
-CFLAGS += $(CFLAGS_libxengnttab)
-CFLAGS += $(CFLAGS_libxenforeignmemory)
+USELIBS := ctrl store evtchn gnttab foreignmemory
+
+CFLAGS += $(call xenlibs-cflags,$(USELIBS))
 CFLAGS-$(CONFIG_ARM) += -DCONFIG_ARM
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 
-LDLIBS += $(call xenlibs-ldlibs,ctrl store evtchn gnttab foreignmemory)
+LDLIBS += $(call xenlibs-ldlibs,$(USELIBS))
 LDLIBS += $(SOCKET_LIBS)
 LDLIBS += $(UTIL_LIBS)
 LDLIBS += -lrt
diff --git a/tools/helpers/Makefile b/tools/helpers/Makefile
index 09590eb5b6..0d4df01365 100644
--- a/tools/helpers/Makefile
+++ b/tools/helpers/Makefile
@@ -15,29 +15,21 @@ TARGETS += init-dom0less
 endif
 endif
 
+XEN_INIT_DOM0_USELIBS := ctrl toollog store light
 XEN_INIT_DOM0_OBJS = xen-init-dom0.o init-dom-json.o
-$(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
-$(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenstore)
-$(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenlight)
-$(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
-xen-init-dom0: LDLIBS += $(call xenlibs-ldlibs,ctrl toollog store light)
+$(XEN_INIT_DOM0_OBJS): CFLAGS += $(call xenlibs-cflags,$(XEN_INIT_DOM0_USELIBS))
+xen-init-dom0: LDLIBS += $(call xenlibs-ldlibs,$(XEN_INIT_DOM0_USELIBS))
 
+INIT_XENSTORE_DOMAIN_USELIBS := toollog store ctrl guest light
 INIT_XENSTORE_DOMAIN_OBJS = init-xenstore-domain.o init-dom-json.o
-$(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
-$(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenguest)
-$(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
-$(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenstore)
-$(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(CFLAGS_libxenlight)
+$(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(call xenlibs-cflags,$(INIT_XENSTORE_DOMAIN_USELIBS))
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h
-init-xenstore-domain: LDLIBS += $(call xenlibs-ldlibs,toollog store ctrl guest light)
+init-xenstore-domain: LDLIBS += $(call xenlibs-ldlibs,$(INIT_XENSTORE_DOMAIN_USELIBS))
 
+INIT_DOM0LESS_USELIBS := ctrl evtchn toollog store light guest foreignmemory
 INIT_DOM0LESS_OBJS = init-dom0less.o init-dom-json.o
-$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
-$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenstore)
-$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenlight)
-$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenctrl)
-$(INIT_DOM0LESS_OBJS): CFLAGS += $(CFLAGS_libxenevtchn)
-init-dom0less: LDLIBS += $(call xenlibs-ldlibs,ctrl evtchn toollog store light guest foreignmemory)
+$(INIT_DOM0LESS_OBJS): CFLAGS += $(call xenlibs-cflags,$(INIT_DOM0LESS_USELIBS))
+init-dom0less: LDLIBS += $(call xenlibs-ldlibs,$(INIT_DOM0LESS_USELIBS))
 
 .PHONY: all
 all: $(TARGETS)
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 6e135387bd..d7c1f61cdf 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -88,6 +88,12 @@ define xenlibs-dependencies
         $(USELIBS_$(lib)) $(call xenlibs-dependencies,$(USELIBS_$(lib)))))
 endef
 
+define xenlibs-cflags
+    $(CFLAGS_xeninclude) \
+    $(foreach lib,$(1), \
+	$(filter-out $(CFLAGS_xeninclude),$(CFLAGS_libxen$(lib))))
+endef
+
 # Flags for linking recursive dependencies of Xen libraries in $(1)
 define xenlibs-rpath
     $(addprefix -Wl$(comma)-rpath-link=$(XEN_ROOT)/tools/libs/,$(call xenlibs-dependencies,$(1)))
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:45:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 19:45:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482110.747490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxK1-0000Kc-4n; Fri, 20 Jan 2023 19:45:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482110.747490; Fri, 20 Jan 2023 19:45:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxK1-0000KS-1L; Fri, 20 Jan 2023 19:45:09 +0000
Received: by outflank-mailman (input) for mailman id 482110;
 Fri, 20 Jan 2023 19:45:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8tu/=5R=citrix.com=prvs=37768f290=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pIxK0-0006vE-Ca
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 19:45:08 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id efe55da1-98fa-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 20:45:06 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efe55da1-98fa-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674243906;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=czljIDkPlFECumG7RW3BRd1WcvEq3rBtflkOOFMBXuk=;
  b=L9TkPakuEwcpPhaVlAdEo3F4GCluaY577X5AP4Q4IQoPokZDu/Lqifwy
   eEbJesvTDEj2jusmIx1VGIbq26ZoxAOJLXOiTrL4B+asXNVM9Xv0lW/wi
   LotfjsLEUVMkeg5SU+YuyfAFIzHnBaRSR88B5B5vdMnLXPXu6HMFTz1wg
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93989459
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,233,1669093200"; 
   d="scan'208";a="93989459"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH for-4.17 v6 5/5] tools: Rework $(xenlibs-ldlibs, ) to provide library flags only.
Date: Fri, 20 Jan 2023 19:44:31 +0000
Message-ID: <20230120194431.55922-6-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230120194431.55922-1-anthony.perard@citrix.com>
References: <20230120194431.55922-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

LDLIBS usually contains only the library flags (that is, the `-l`
flags), as proposed in the GNU make manual, while LDFLAGS holds extra
linker flags such as `-L`.

Rework the make macro $(xenlibs-ldlibs,) to provide only the library
flags. $(xenlibs-ldflags,) already provides only the extra flags,
such as the -rpath-link flags.

Also fix the "test_%" recipe in libs/light, as "libxenlight.so" in
$(LDLIBS_libxenlight) is now replaced by "-lxenlight". Instead of
just changing the filter, we start using the $(xenlibs-*,) macros.
For LDFLAGS, we only need the flags for libxenlight; those for
toolcore and toollog are already in $(LDFLAGS), as they are
dependencies of libxenlight.so.
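
For reference, this split follows the GNU make convention for its
built-in link rule, which places $(LDFLAGS) before the object files
and $(LDLIBS) after them. A minimal sketch, with a hypothetical
init-tool target:

```make
# GNU make's implicit rule for linking a single object file is
# roughly:
#   $(CC) $(LDFLAGS) n.o $(LOADLIBES) $(LDLIBS) -o n
# so -L search paths belong in LDFLAGS and -l flags in LDLIBS:
init-tool: init-tool.o
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)

# e.g. with this series, for a tool using libxenlight:
#   LDFLAGS += -L$(XEN_ROOT)/tools/libs/light   (from xenlibs-ldflags)
#   LDLIBS  += -lxenlight                       (from xenlibs-ldlibs)
```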

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    v6:
    - new patch

 tools/console/client/Makefile |  1 +
 tools/console/daemon/Makefile |  1 +
 tools/helpers/Makefile        |  3 +++
 tools/libs/light/Makefile     |  2 +-
 tools/Rules.mk                | 16 +++++++---------
 tools/libs/libs.mk            |  1 +
 6 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/tools/console/client/Makefile b/tools/console/client/Makefile
index 071262c9ae..ea7819c03e 100644
--- a/tools/console/client/Makefile
+++ b/tools/console/client/Makefile
@@ -6,6 +6,7 @@ USELIBS := ctrl store
 CFLAGS += $(call xenlibs-cflags,$(USELIBS))
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 
+LDFLAGS += $(call xenlibs-ldflags,$(USELIBS))
 LDLIBS += $(call xenlibs-ldlibs,$(USELIBS))
 LDLIBS += $(SOCKET_LIBS)
 
diff --git a/tools/console/daemon/Makefile b/tools/console/daemon/Makefile
index e53c874eee..400611fc2d 100644
--- a/tools/console/daemon/Makefile
+++ b/tools/console/daemon/Makefile
@@ -7,6 +7,7 @@ CFLAGS += $(call xenlibs-cflags,$(USELIBS))
 CFLAGS-$(CONFIG_ARM) += -DCONFIG_ARM
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 
+LDFLAGS += $(call xenlibs-ldflags,$(USELIBS))
 LDLIBS += $(call xenlibs-ldlibs,$(USELIBS))
 LDLIBS += $(SOCKET_LIBS)
 LDLIBS += $(UTIL_LIBS)
diff --git a/tools/helpers/Makefile b/tools/helpers/Makefile
index 0d4df01365..5db88dc81b 100644
--- a/tools/helpers/Makefile
+++ b/tools/helpers/Makefile
@@ -18,17 +18,20 @@ endif
 XEN_INIT_DOM0_USELIBS := ctrl toollog store light
 XEN_INIT_DOM0_OBJS = xen-init-dom0.o init-dom-json.o
 $(XEN_INIT_DOM0_OBJS): CFLAGS += $(call xenlibs-cflags,$(XEN_INIT_DOM0_USELIBS))
+xen-init-dom0: LDFLAGS += $(call xenlibs-ldflags,$(XEN_INIT_DOM0_USELIBS))
 xen-init-dom0: LDLIBS += $(call xenlibs-ldlibs,$(XEN_INIT_DOM0_USELIBS))
 
 INIT_XENSTORE_DOMAIN_USELIBS := toollog store ctrl guest light
 INIT_XENSTORE_DOMAIN_OBJS = init-xenstore-domain.o init-dom-json.o
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += $(call xenlibs-cflags,$(INIT_XENSTORE_DOMAIN_USELIBS))
 $(INIT_XENSTORE_DOMAIN_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h
+init-xenstore-domain: LDFLAGS += $(call xenlibs-ldflags,$(INIT_XENSTORE_DOMAIN_USELIBS))
 init-xenstore-domain: LDLIBS += $(call xenlibs-ldlibs,$(INIT_XENSTORE_DOMAIN_USELIBS))
 
 INIT_DOM0LESS_USELIBS := ctrl evtchn toollog store light guest foreignmemory
 INIT_DOM0LESS_OBJS = init-dom0less.o init-dom-json.o
 $(INIT_DOM0LESS_OBJS): CFLAGS += $(call xenlibs-cflags,$(INIT_DOM0LESS_USELIBS))
+init-dom0less: LDFLAGS += $(call xenlibs-ldflags,$(INIT_DOM0LESS_USELIBS))
 init-dom0less: LDLIBS += $(call xenlibs-ldlibs,$(INIT_DOM0LESS_USELIBS))
 
 .PHONY: all
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 96daeabc47..273f3d0864 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -244,7 +244,7 @@ libxenlight_test.so: $(PIC_OBJS) $(LIBXL_TEST_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LDLIBS) $(APPEND_LDFLAGS)
 
 test_%: test_%.o test_common.o libxenlight_test.so
-	$(CC) $(LDFLAGS) -o $@ $^ $(filter-out %libxenlight.so, $(LDLIBS_libxenlight)) $(LDLIBS_libxentoollog) $(LDLIBS_libxentoolcore) -lyajl $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) $(call xenlibs-ldflags,light) -o $@ $^ $(call xenlibs-ldlibs,toollog toolcore) -lyajl $(APPEND_LDFLAGS)
 
 libxl-save-helper: $(SAVE_HELPER_OBJS) libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxentoollog) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxentoolcore) $(APPEND_LDFLAGS)
diff --git a/tools/Rules.mk b/tools/Rules.mk
index d7c1f61cdf..007a64f2f5 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -105,12 +105,6 @@ define xenlibs-libs
         $(XEN_ROOT)/tools/libs/$(lib)/lib$(FILENAME_$(lib))$(libextension))
 endef
 
-# Flags for linking against all Xen libraries listed in $(1)
-define xenlibs-ldlibs
-    $(call xenlibs-rpath,$(1)) $(call xenlibs-libs,$(1)) \
-    $(foreach lib,$(1),$(xenlibs-ldlibs-$(lib)))
-endef
-
 # Provide needed flags for linking an in-tree Xen library by an external
 # project (or when it is necessary to link with "-lxen$(1)" instead of using
 # the full path to the library).
@@ -119,12 +113,16 @@ define xenlibs-ldflags
     $(foreach lib,$(1),-L$(XEN_ROOT)/tools/libs/$(lib))
 endef
 
+# Flags for linking against all Xen libraries listed in $(1)
+define xenlibs-ldlibs
+    $(foreach lib,$(1),-l$(FILENAME_$(lib)) $(xenlibs-ldlibs-$(lib)))
+endef
+
 # Flags for linking against all Xen libraries listed in $(1) but by making use
 # of -L and -l instead of providing a path to the shared library.
 define xenlibs-ldflags-ldlibs
     $(call xenlibs-ldflags,$(1)) \
-    $(foreach lib,$(1), -l$(FILENAME_$(lib))) \
-    $(foreach lib,$(1),$(xenlibs-ldlibs-$(lib)))
+    $(call xenlibs-ldlibs,$(1))
 endef
 
 define LIB_defs
@@ -132,7 +130,7 @@ define LIB_defs
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
  CFLAGS_libxen$(1) = $$(CFLAGS_xeninclude)
  SHLIB_libxen$(1) = $$(call xenlibs-rpath,$(1)) -Wl,-rpath-link=$$(XEN_libxen$(1))
- LDLIBS_libxen$(1) = $$(call xenlibs-ldlibs,$(1))
+ LDLIBS_libxen$(1) = $$(call xenlibs-ldflags,$(1)) $$(call xenlibs-ldlibs,$(1))
 endef
 
 $(foreach lib,$(LIBS_LIBS),$(eval $(call LIB_defs,$(lib))))
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 0e4b5e0bd0..fc6aa7ede9 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -17,6 +17,7 @@ CFLAGS   += -Wmissing-prototypes
 CFLAGS   += $(CFLAGS_xeninclude)
 CFLAGS   += $(foreach lib, $(USELIBS_$(LIBNAME)), $(CFLAGS_libxen$(lib)))
 
+LDFLAGS += $(call xenlibs-ldflags,$(USELIBS_$(LIBNAME)))
 LDLIBS += $(call xenlibs-ldlibs,$(USELIBS_$(LIBNAME)))
 
 PIC_OBJS := $(OBJS-y:.o=.opic)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:50:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 19:50:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482124.747500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxP6-0002pZ-QR; Fri, 20 Jan 2023 19:50:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482124.747500; Fri, 20 Jan 2023 19:50:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIxP6-0002pS-Lu; Fri, 20 Jan 2023 19:50:24 +0000
Received: by outflank-mailman (input) for mailman id 482124;
 Fri, 20 Jan 2023 19:50:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9Xek=5R=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pIxP5-0002pM-JM
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 19:50:23 +0000
Received: from sonic306-20.consmr.mail.gq1.yahoo.com
 (sonic306-20.consmr.mail.gq1.yahoo.com [98.137.68.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aa604e2c-98fb-11ed-b8d1-410ff93cb8f0;
 Fri, 20 Jan 2023 20:50:19 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic306.consmr.mail.gq1.yahoo.com with HTTP; Fri, 20 Jan 2023 19:50:17 +0000
Received: by hermes--production-bf1-6bb65c4965-5wbr2 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 8fc9638bc0257b7883f21f7dd4d66667; 
 Fri, 20 Jan 2023 19:50:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa604e2c-98fb-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1674244217; bh=SCHuvnfr1DQTZbv8fhW3Qs+S+h+rzzwnUhhfLPF/T9o=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=G3jnl+Pyb8+niUB5VXEYWSwchyd6LhAil8OdwnxzqF/P6rzUZQTDGV9csY9EjB8Yo83A4XRsRx3XQFrv7vHcoji1ZfH7wzWBJgJH2HHnePnXMADmeGCjlJtq5K1GghxRpIDb7XO/Deo9IOX1ZQhTCo4j94ECTtZKxRnk670ZWp6Qkd7npSEs5DKPjGuii8kCuTDHF9fFz/hdOm0BIFD8WzdaR7HcYl/tTn5DwjqvRX30kI2PhJ6QOX1jY5fpVvjBpxjIIvdTnAEuk41XW3v91C4Og5Fg3f6vaS4SqV3bMV8R/Qvu3BhqEz3xfZG+FBXQq/cLJyYsBqAUQGQABnx3Vg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674244217; bh=wi+hWtfbphqrS3C5+SASxj+pJcKsFAAo7ruRUgzoYGe=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=hhStvQ992O28uIO9Bm0HRF8B8PTXXvCfn0a7RCD9eaLaTVqkvtP2cNWPyztiuoeRc5woNhrBFkJ4Gxvp8Wq+Mlg/iw0DVwvFR6OyM8usmPt+J2EM171PnNunAMU9eKkchNI49zpDaoT/nv7z3OEaziBPy1ZPkpHJvEN3S9WN1gyQViNAH3xnrWTd8KiKylfUBiHA0Gv8hFm/OQdzP7uYTPuhtESFKM7tKJUJxANKR2Ov6P5WBKgA8qSpBffyT0FXASD05P4X20mIaxv8zHr9xufZcW95C9uFQ1nU6Qzc6lNRVtzDSpzM7G3l5OAhlxUSHl5LhLyQBOUuM2u98E7+YA==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <29ee7fb7-4357-b06a-0ea4-65699211f874@aol.com>
Date: Fri, 20 Jan 2023 14:50:14 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v8] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
To: Igor Mammedov <imammedo@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
 Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, Anthony Perard <anthony.perard@citrix.com>
References: <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz.ref@aol.com>
 <a09d2427397621eaecee4c46b33507a99cc5f161.1673334040.git.brchuckz@aol.com>
 <20230110030331-mutt-send-email-mst@kernel.org>
 <a6994521-68d5-a05b-7be2-a8c605733467@aol.com>
 <D785501E-F95D-4A22-AFD0-85133F8CE56D@gmail.com>
 <9f63e7a6-e434-64b4-f082-7f5a0ab8d5bf@aol.com>
 <7208A064-2A25-4DBB-BF19-6797E96AB00C@gmail.com>
 <20230112180314-mutt-send-email-mst@kernel.org>
 <128d8ee2-8ee9-0a76-10de-af4c1b364179@aol.com>
 <20230113103310.3da703ab@imammedo.users.ipa.redhat.com>
 <88af50cb-4ebd-7995-70cf-f23ac33c5e45@aol.com>
 <20230116163342.467039a0@imammedo.users.ipa.redhat.com>
 <fce262ea-e0d5-d670-787c-62d91b040739@netscape.net>
 <20230117113513.4e692539@imammedo.users.ipa.redhat.com>
 <1c7d1fd7-d29b-1ae1-a7f4-0fea811b56c5@aol.com>
 <20230120170147.4f3bbda6@imammedo.users.ipa.redhat.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230120170147.4f3bbda6@imammedo.users.ipa.redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21062 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 6553

On 1/20/2023 11:01 AM, Igor Mammedov wrote:
> On Tue, 17 Jan 2023 09:50:23 -0500
> Chuck Zmudzinski <brchuckz@aol.com> wrote:
>
> > On 1/17/2023 5:35 AM, Igor Mammedov wrote:
> > > On Mon, 16 Jan 2023 13:00:53 -0500
> > > Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> > >  
> > > > On 1/16/23 10:33, Igor Mammedov wrote:  
> > > > > On Fri, 13 Jan 2023 16:31:26 -0500
> > > > > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > > >     
> > > > >> On 1/13/23 4:33 AM, Igor Mammedov wrote:    
> > > > >> > On Thu, 12 Jan 2023 23:14:26 -0500
> > > > >> > Chuck Zmudzinski <brchuckz@aol.com> wrote:
> > > > >> >       
> > > > >> >> On 1/12/23 6:03 PM, Michael S. Tsirkin wrote:      
> > > > >> >> > On Thu, Jan 12, 2023 at 10:55:25PM +0000, Bernhard Beschow wrote:        
> > > > >> >> >> I think the change Michael suggests is very minimalistic: Move the if
> > > > >> >> >> condition around xen_igd_reserve_slot() into the function itself and
> > > > >> >> >> always call it there unconditionally -- basically turning three lines
> > > > >> >> >> into one. Since xen_igd_reserve_slot() seems very problem specific,
> > > > >> >> >> Michael further suggests to rename it to something more general. All
> > > > >> >> >> in all no big changes required.        
> > > > >> >> > 
> > > > >> >> > yes, exactly.
> > > > >> >> >         
> > > > >> >> 
> > > > >> >> OK, got it. I can do that along with the other suggestions.      
> > > > >> > 
> > > > >> > have you considered instead of reservation, putting a slot check in device model
> > > > >> > and if it's intel igd being passed through, fail at realize time  if it can't take
> > > > >> > required slot (with a error directing user to fix command line)?      
> > > > >> 
> > > > >> Yes, but the core pci code currently already fails at realize time
> > > > >> with a useful error message if the user tries to use slot 2 for the
> > > > >> igd, because of the xen platform device which has slot 2. The user
> > > > >> can fix this without patching qemu, but having the user fix it on
> > > > >> the command line is not the best way to solve the problem, primarily
> > > > >> because the user would need to hotplug the xen platform device via a
> > > > >> command line option instead of having the xen platform device added by
> > > > >> pc_xen_hvm_init functions almost immediately after creating the pci
> > > > >> bus, and that delay in adding the xen platform device degrades
> > > > >> startup performance of the guest.
> > > > >>     
> > > > >> > That could be less complicated than dealing with slot reservations at the cost of
> > > > >> > being less convenient.      
> > > > >> 
> > > > >> And also a cost of reduced startup performance    
> > > > > 
> > > > > Could you clarify how it affects performance (and how much).
> > > > > (as I see, setup done at board_init time is roughly the same
> > > > > as with '-device foo' CLI options, modulo time needed to parse
> > > > > options which should be negligible. and both ways are done before
> > > > > guest runs)    
> > > > 
> > > > I preface my answer by saying there is a v9, but you don't
> > > > need to look at that. I will answer all your questions here.
> > > > 
> > > > I am going by what I observe on the main HDMI display with the
> > > > different approaches. With the approach of not patching Qemu
> > > > to fix this, which requires adding the Xen platform device a
> > > > little later, the length of time it takes to fully load the
> > > > guest is increased. I also noticed with Linux guests that use
> > > > the grub bootloader, the grub vga driver cannot display the
> > > > grub boot menu at the native resolution of the display, which
> > > > in the tested case is 1920x1080, when the Xen platform device
> > > > is added via a command line option instead of by the
> > > > pc_xen_hvm_init_pci function in pc_piix.c, but with this patch
> > > > to Qemu, the grub menu is displayed at the full, 1920x1080
> > > > native resolution of the display. Once the guest fully loads,
> > > > there is no noticeable difference in performance. It is mainly
> > > > a degradation in startup performance, not performance once
> > > > the guest OS is fully loaded.  
> > > above hints on presence of bug[s] in igd-passthru implementation,
> > > and this patch effectively hides problem instead of trying to figure
> > > out what's wrong and fixing it.
> > >  
> > 
> > I agree that the with the current Qemu there is a bug in the igd-passthru
> > implementation. My proposed patch fixes it. So why not support the
> > proposed patch with a Reviewed-by tag?
> I see the patch as workaround (it might be a reasonable one) and
> it's upto xen maintainers to give Reviewed-by/accept it.
>
> Though I'd add to commit message something about performance issues
> you are experiencing when trying to passthrough IGD manually

No, the device that needs to be passed through manually with the
workaround is the Xen platform device, and the workaround (as I
understand it) is the patch to libxl, not this patch to Qemu. The igd
is passed through the same way with these patches I have proposed,
whether Qemu or libxl is patched, and IIUC that is by using QMP and
the Qemu device listener.

> as one of justifications for workaround (it might push Xen folks
> to investigate the issue and it won't be lost/forgotten on mail-list).
>

I could add that on the next iteration.

As far as Xen folks investigating further, that would be welcome.
I think the best way would be for the pc_xen_hvm_init functions
to add the igd at slot 2 and the xen platform device at slot 3
when igd-passthru=on is set on the command line. But I don't
know if it is possible for the pc_xen_hvm_init functions to
attach a device of type XEN_PT, which is the device type of the igd
when using Xen, without changing the interface between libxl and
Qemu. My patches fix this without changing that interface, but the
Qemu patch does it in a way that achieves better startup performance
than the patch to libxl, because the xen platform device can be
added sooner during guest creation by patching Qemu than by
patching libxl.

Currently, IIUC, the XEN_PT devices are added through QMP
and the Qemu device listener. If there was a way for Qemu
(the pc_xen_hvm_init_pci function) to actively attach the igd
instead of having libxl add it via QMP and the Qemu device
listener, that would also work and might improve performance
more because then both the igd and the xen platform device
would be added early during the creation of the guest.

Thanks.


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 19:54:32 2023
Date: Fri, 20 Jan 2023 19:54:16 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Andrew Cooper <andrew.cooper3@citrix.com>, "Juergen
 Gross" <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>
Subject: Re: [XEN PATCH v6 0/5] Toolstack build system improvement, toward
 non-recursive makefiles
Message-ID: <Y8rxaDoaIWEVUiol@perard.uk.xensource.com>
References: <20230120194431.55922-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230120194431.55922-1-anthony.perard@citrix.com>

The "for-4.17" in the subject is a mistake; the series is for staging.

It is just a leftover from the last time I sent the series.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 20:26:59 2023
Date: Fri, 20 Jan 2023 21:26:33 +0100
From: Borislav Petkov <bp@alien8.de>
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?utf-8?B?SsO2cmcgUsO2ZGVs?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 3/7] x86/power: De-paravirt restore_processor_state()
Message-ID: <Y8r4+V4Y42Y09WMw@zn.tnic>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.708895882@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230116143645.708895882@infradead.org>

On Mon, Jan 16, 2023 at 03:25:36PM +0100, Peter Zijlstra wrote:
> Since Xen PV doesn't use restore_processor_state(), and we're going to
> have to avoid CALL/RET until at least GS is restored,

Drop the "we":

"..., and CALL/RET cannot happen before GS has been restored, ..."

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 20:29:22 2023
Date: Fri, 20 Jan 2023 21:29:07 +0100
From: Borislav Petkov <bp@alien8.de>
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?utf-8?B?SsO2cmcgUsO2ZGVs?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 4/7] x86/power: Inline write_cr[04]()
Message-ID: <Y8r5k+IW6jAJJTSm@zn.tnic>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.768035056@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230116143645.768035056@infradead.org>

On Mon, Jan 16, 2023 at 03:25:37PM +0100, Peter Zijlstra wrote:
> Since we can't do CALL/RET until GS is restored and CR[04] pinning is
	^^

Ditto like for the previous one.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 20:33:18 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176001-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176001: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93017efd7c441420318e46443a06e40fa6f1b6d4
X-Osstest-Versions-That:
    xen=89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 20:33:06 +0000

flight 176001 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176001/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93017efd7c441420318e46443a06e40fa6f1b6d4
baseline version:
 xen                  89cc5d96a9d1fce81cf58b6814dac62a9e07fbee

Last test of basis   175997  2023-01-20 13:01:52 Z    0 days
Testing same since   176001  2023-01-20 16:04:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   89cc5d96a9..93017efd7c  93017efd7c441420318e46443a06e40fa6f1b6d4 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 21:35:16 2023
Date: Fri, 20 Jan 2023 13:34:32 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Chuck Zmudzinski <brchuckz@aol.com>
cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    Paolo Bonzini <pbonzini@redhat.com>, 
    Richard Henderson <richard.henderson@linaro.org>, 
    Eduardo Habkost <eduardo@habkost.net>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    xen-devel@lists.xenproject.org, qemu-stable@nongnu.org
Subject: Re: [PATCH v9] xen/pt: reserve PCI slot 2 for Intel igd-passthru
In-Reply-To: <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz@aol.com>
Message-ID: <alpine.DEB.2.22.394.2301201334250.731018@ubuntu-linux-20-04-desktop>
References: <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz.ref@aol.com> <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz@aol.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 14 Jan 2023, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workaround is not good: Configure Xen HVM guests to use
> the old and no longer maintained Qemu traditional device model available
> from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> Michael Tsirkin:
> * Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
>   XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values which enables the checks for
> the Intel IGD to succeed. The verification that the host device is an
> Intel IGD to be passed through is done by checking the domain, bus, slot,
> and function values as well as by checking that gfx_passthru is enabled,
> the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>

Hi Chuck,

The approach looks OK in principle to me. I only have one question: for
other PCI devices (not Intel IGD), where is xen_host_pci_device_get
called now?

It looks like xen_igd_reserve_slot would return without setting
slot_reserved_mask, hence xen_igd_clear_slot would also return without
calling xen_host_pci_device_get. And xen_pt_realize doesn't call
xen_host_pci_device_get any longer.

Am I missing something?


> ---
> Notes that might be helpful to reviewers of patched code in hw/xen:
> 
> The new functions and types are based on recommendations from Qemu docs:
> https://qemu.readthedocs.io/en/latest/devel/qom.html
> 
> Notes that might be helpful to reviewers of patched code in hw/i386:
> 
> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> not affect builds that do not have CONFIG_XEN defined.
> 
> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> existing function that is only true when Qemu is built with
> xen-pci-passthrough enabled and the administrator has configured the Xen
> HVM guest with Qemu's igd-passthru=on option.
> 
> v2: Remove From: <email address> tag at top of commit message
> 
> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> 
>     is changed to
> 
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     I hoped that I could use the test in v2, since it matches the
>     other tests for the Intel IGD in Qemu and Xen, but those tests
>     do not work because the necessary data structures are not set with
>     their values yet. So instead use the test that the administrator
>     has enabled gfx_passthru and the device address on the host is
>     02.0. This test does detect the Intel IGD correctly.
> 
> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>     email address to match the address used by the same author in commits
>     be9c61da and c0e86b76
>     
>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> 
> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>     for the Intel IGD that uses the same criteria as in other places.
>     This involved moving the call to xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>     Intel IGD in xen_igd_clear_slot:
>     
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     is changed to
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
>     Added an explanation for the move of xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot to the commit message.
> 
>     Rebase.
> 
> v6: Fix logging by removing these lines from the move from xen_pt_realize
>     to xen_igd_clear_slot that was done in v5:
> 
>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>                " to devfn 0x%x\n",
>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                s->dev.devfn);
> 
>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>     set yet in xen_igd_clear_slot.
> 
> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>     v7 was intended to be.
> 
> v8: Inhibit out of context log message and needless processing by
>     adding 2 lines at the top of the new xen_igd_clear_slot function:
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>         return;
> 
>     Rebase. This removed an unnecessary header file from xen_pt.h 
> 
> v9: Move check for xen_igd_gfx_pt_enabled() from pc_piix.c to xen_pt.c
> 
>     Move #include "hw/pci/pci_bus.h" from xen_pt.h to xen_pt.c
> 
>     Introduce macros for the IGD devfn constants and use them to compute
>     the value of XEN_PCI_IGD_SLOT_MASK
> 
>     Also use the new macros at an appropriate place in xen_pt_realize
> 
>     Add Cc: to stable - This has been broken for a long time, ever since
>                         support for igd-passthru was added to Qemu 7
>                         years ago.
> 
>     Mention new macros in the commit message (Michael Tsirkin)
> 
>     N.B.: I could not follow the suggestion to move the statement
>     pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK; to after
>     pci_qdev_realize for symmetry. Doing that results in an error when
>     creating the guest:
>     
>     libxl: error: libxl_qmp.c:1837:qmp_ev_parse_error_messages: Domain 4:PCI: slot 2 function 0 not available for xen-pci-passthrough, reserved
>     libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 4:libxl__device_pci_add failed for PCI device 0:0:2.0 (rc -28)
>     libxl: error: libxl_create.c:1921:domcreate_attach_devices: Domain 4:unable to add pci devices
> 
>  hw/i386/pc_piix.c    |  1 +
>  hw/xen/xen_pt.c      | 61 ++++++++++++++++++++++++++++++++++++--------
>  hw/xen/xen_pt.h      | 20 +++++++++++++++
>  hw/xen/xen_pt_stub.c |  4 +++
>  4 files changed, 75 insertions(+), 11 deletions(-)
> 
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index b48047f50c..8fc96eb63b 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -405,6 +405,7 @@ static void pc_xen_hvm_init(MachineState *machine)
>      }
>  
>      pc_xen_hvm_init_pci(machine);
> +    xen_igd_reserve_slot(pcms->bus);
>      pci_create_simple(pcms->bus, -1, "xen-platform");
>  }
>  #endif
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 0ec7e52183..51f100f64a 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -57,6 +57,7 @@
>  #include <sys/ioctl.h>
>  
>  #include "hw/pci/pci.h"
> +#include "hw/pci/pci_bus.h"
>  #include "hw/qdev-properties.h"
>  #include "hw/qdev-properties-system.h"
>  #include "hw/xen/xen.h"
> @@ -780,15 +781,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                 s->dev.devfn);
>  
> -    xen_host_pci_device_get(&s->real_device,
> -                            s->hostaddr.domain, s->hostaddr.bus,
> -                            s->hostaddr.slot, s->hostaddr.function,
> -                            errp);
> -    if (*errp) {
> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> -        return;
> -    }
> -
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> @@ -803,8 +795,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>      s->io_listener = xen_pt_io_listener;
>  
>      /* Setup VGA bios for passthrough GFX */
> -    if ((s->real_device.domain == 0) && (s->real_device.bus == 0) &&
> -        (s->real_device.dev == 2) && (s->real_device.func == 0)) {
> +    if ((s->real_device.domain == XEN_PCI_IGD_DOMAIN) &&
> +        (s->real_device.bus == XEN_PCI_IGD_BUS) &&
> +        (s->real_device.dev == XEN_PCI_IGD_DEV) &&
> +        (s->real_device.func == XEN_PCI_IGD_FN)) {
>          if (!is_igd_vga_passthrough(&s->real_device)) {
>              error_setg(errp, "Need to enable igd-passthru if you're trying"
>                      " to passthrough IGD GFX");
> @@ -950,11 +944,55 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>  }
>  
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +    if (!xen_igd_gfx_pt_enabled())
> +        return;
> +
> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> +}
> +
> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> +{
> +    ERRP_GUARD();
> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> +
> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
> +        return;
> +
> +    xen_host_pci_device_get(&s->real_device,
> +                            s->hostaddr.domain, s->hostaddr.bus,
> +                            s->hostaddr.slot, s->hostaddr.function,
> +                            errp);
> +    if (*errp) {
> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> +        return;
> +    }
> +
> +    if (is_igd_vga_passthrough(&s->real_device) &&
> +        s->real_device.domain == XEN_PCI_IGD_DOMAIN &&
> +        s->real_device.bus == XEN_PCI_IGD_BUS &&
> +        s->real_device.dev == XEN_PCI_IGD_DEV &&
> +        s->real_device.func == XEN_PCI_IGD_FN &&
> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> +    }
> +    xpdc->pci_qdev_realize(qdev, errp);
> +}
> +
>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>  
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> +    xpdc->pci_qdev_realize = dc->realize;
> +    dc->realize = xen_igd_clear_slot;
>      k->realize = xen_pt_realize;
>      k->exit = xen_pt_unregister_device;
>      k->config_read = xen_pt_pci_read_config;
> @@ -977,6 +1015,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>      .instance_size = sizeof(XenPCIPassthroughState),
>      .instance_finalize = xen_pci_passthrough_finalize,
>      .class_init = xen_pci_passthrough_class_init,
> +    .class_size = sizeof(XenPTDeviceClass),
>      .instance_init = xen_pci_passthrough_instance_init,
>      .interfaces = (InterfaceInfo[]) {
>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index cf10fc7bbf..e184699740 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -40,7 +40,20 @@ typedef struct XenPTReg XenPTReg;
>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>  
> +#define XEN_PT_DEVICE_CLASS(klass) \
> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> +
> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> +
> +typedef struct XenPTDeviceClass {
> +    PCIDeviceClass parent_class;
> +    XenPTQdevRealize pci_qdev_realize;
> +} XenPTDeviceClass;
> +
>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>                                             XenHostPCIDevice *dev);
> @@ -75,6 +88,13 @@ typedef int (*xen_pt_conf_byte_read)
>  
>  #define XEN_PCI_INTEL_OPREGION 0xfc
>  
> +#define XEN_PCI_IGD_DOMAIN 0
> +#define XEN_PCI_IGD_BUS 0
> +#define XEN_PCI_IGD_DEV 2
> +#define XEN_PCI_IGD_FN 0
> +#define XEN_PCI_IGD_SLOT_MASK \
> +    (1UL << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
> +
>  typedef enum {
>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> index 2d8cac8d54..5c108446a8 100644
> --- a/hw/xen/xen_pt_stub.c
> +++ b/hw/xen/xen_pt_stub.c
> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>          error_setg(errp, "Xen PCI passthrough support not built in");
>      }
>  }
> +
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +}
> -- 
> 2.39.0
> 


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 21:52:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 21:52:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482179.747560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIzIa-0000eh-R9; Fri, 20 Jan 2023 21:51:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482179.747560; Fri, 20 Jan 2023 21:51:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIzIa-0000ea-Mi; Fri, 20 Jan 2023 21:51:48 +0000
Received: by outflank-mailman (input) for mailman id 482179;
 Fri, 20 Jan 2023 21:51:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIzIZ-0000eO-Jw; Fri, 20 Jan 2023 21:51:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIzIZ-0002Ld-IA; Fri, 20 Jan 2023 21:51:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pIzIZ-00076d-30; Fri, 20 Jan 2023 21:51:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pIzIZ-0004lu-2c; Fri, 20 Jan 2023 21:51:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JV5yVRlVfd3FUZsztYq/vHCaQHBq3KfrsuBBDq096yI=; b=kH4X+ZNYFJXeMjxkU3yVsVI9Qy
	/uAdVW6FTSB0Lc6fp2NnTeGASGtRfpATXvuPqF8/FUlVIKRlRNQFFk2/dTRFPYB/0zJIXLtOksLCy
	QaT2FVDy1FIH5eyJGqdG9Vvxvs+Uqa95SQFHiowN3tyYQjO1jCa2rIShjoeevtScWv8c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176004-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176004: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=7afef31b2b17d1a8d5248eb562352c6d3505ea14
X-Osstest-Versions-That:
    ovmf=bf5678b5802685e07583e3c7ec56d883cbdd5da3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Jan 2023 21:51:47 +0000

flight 176004 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176004/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7afef31b2b17d1a8d5248eb562352c6d3505ea14
baseline version:
 ovmf                 bf5678b5802685e07583e3c7ec56d883cbdd5da3

Last test of basis   176000  2023-01-20 14:10:45 Z    0 days
Testing same since   176004  2023-01-20 17:41:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  devel@edk2.groups.io <devel@edk2.groups.io>
  Jan Engelhardt <jengelh@inai.de>
  Tomas Pilar <quic_tpilar@quicinc.com>
  Tomas Pilar <tomas@quicinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   bf5678b580..7afef31b2b  7afef31b2b17d1a8d5248eb562352c6d3505ea14 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 22:00:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 22:00:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482186.747570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIzQk-00029M-M1; Fri, 20 Jan 2023 22:00:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482186.747570; Fri, 20 Jan 2023 22:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIzQk-00029F-J4; Fri, 20 Jan 2023 22:00:14 +0000
Received: by outflank-mailman (input) for mailman id 482186;
 Fri, 20 Jan 2023 22:00:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vlkl=5R=citrix.com=prvs=3778cfab1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pIzQj-000297-0E
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 22:00:13 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce1235cb-990d-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 23:00:11 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce1235cb-990d-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674252011;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=BMNzXhVJVhIQdPw7Pe3ULx8YFfeG4MP+Aq00oYn5wLU=;
  b=P8uyxF1a9uEPq+iipJtK3qiq8cnhDZOxbPqE8TbTlrXrbtzvPj4gl4hH
   3n3SgPdPFbkLbQtLq6njUp9x2vJ2M5qvSPIGx9W8ks+SDrclurCCPurgE
   XF90y5xbOvi6h/IzIaWRsLpzPp89yO6JtsKTpk5QlugxcNQXyxIzvRB16
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93551029
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>, Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH] Changelog: Add details about new features for SPR
Date: Fri, 20 Jan 2023 22:00:04 +0000
Message-ID: <20230120220004.7456-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
CC: Henry Wang <Henry.Wang@arm.com>

A reminder to everyone: write the changelog as it happens, rather than
scrambling to remember 8 months of development just as the release is
happening.
---
 CHANGELOG.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 675413971360..b116163b62c5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,14 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 
 ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
 
+### Added
+ - On x86, support for features new in Intel Sapphire Rapids CPUs:
+   - PKS (Protection Key Supervisor) available to HVM/PVH guests.
+   - VM-Notify used by Xen to mitigate certain micro-architectural pipeline
+     livelocks, instead of crashing the entire server.
+   - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
+     wide impact of a guest misusing atomic instructions.
+
 ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
 
 ### Changed
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 20 22:29:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 22:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482191.747580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIzsO-0004fV-RA; Fri, 20 Jan 2023 22:28:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482191.747580; Fri, 20 Jan 2023 22:28:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIzsO-0004fO-OL; Fri, 20 Jan 2023 22:28:48 +0000
Received: by outflank-mailman (input) for mailman id 482191;
 Fri, 20 Jan 2023 22:28:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqBo=5R=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIzsM-0004fI-Pq
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 22:28:46 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ccf24aac-9911-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 23:28:45 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 88BB1B8281D;
 Fri, 20 Jan 2023 22:28:44 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1B0C0C433D2;
 Fri, 20 Jan 2023 22:28:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccf24aac-9911-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674253723;
	bh=0zA2rPCCHQ259R5K2c/kxHPddRl9N5ci+NTgr6kWmys=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CzTV9BoGqdM9g3GMcTCMI01ve13/PEi1zvuCxGx/cp1dgR6PU+wY4NZmxggP92XI9
	 vwL3Qpm5IW1naj/5TXXFxbr5nUXqiyQSfE+kUCgF7T3ixNMd8k3DAos21bRZWchvmM
	 QZCdABED/w1+pKpf0O578lamoLGDBpvrgFou3vlF1+FQ5mcaicKhf3PU4QcE7lON8v
	 ZdcjJuBSU47lNbd2gctk90F4mG+rzVJcjy6XxkoHHI/bDqzqWEDR2rCRM8Sxlz5ppi
	 4iLudz6RpgAcEg7TufTX2eJ/R60ta6YvqeHdze6O5lYTYq54HLsjmda7jQZontXn+B
	 Rbrp4oAyJGIYA==
Date: Fri, 20 Jan 2023 14:28:40 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com, michal.orzel@amd.com
Subject: Re: [XEN v5] xen/arm: Probe the load/entry point address of an uImage
 correctly
In-Reply-To: <20230113122423.22902-1-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301201422410.731018@ubuntu-linux-20-04-desktop>
References: <20230113122423.22902-1-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 13 Jan 2023, Ayan Kumar Halder wrote:
> Currently, kernel_uimage_probe() does not read the load/entry point address
> set in the uImage header. Thus, info->zimage.start is 0 (default value). This
> causes kernel_zimage_place() to treat the binary (contained within uImage)
> as a position-independent executable. Thus, it loads it at an incorrect
> address.
> 
> The correct approach would be to read "uimage.load" and set
> info->zimage.start. This will ensure that the binary is loaded at the
> correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
> address).
> 
> If the user provides the load address (ie "uimage.load") as 0x0, then the
> image is treated as a position-independent executable. Xen can load such an
> image at any address it considers appropriate. A position-independent
> executable cannot have a fixed entry point address.
> 
> This behavior is applicable for both arm32 and arm64 platforms.
> 
> Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
> point address set in the uImage header. With this commit, Xen will use them.
> This makes the behavior of Xen consistent with uboot for uimage headers.
> 
> Users who want to use Xen with statically partitioned domains can provide a
> non-zero load address and entry address for the dom0/domU kernel. The load
> and entry address provided must be within the memory region allocated by
> Xen.
> 
> A deviation from uboot behaviour is that we consider load address == 0x0
> to denote that the image supports position independent execution. This
> is to make the behavior consistent across uImage and zImage.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from v1 :-
> 1. Added a check to ensure load address and entry address are the same.
> 2. Considered load address == 0x0 as position independent execution.
> 3. Ensured that the uImage header interpretation is consistent across
> arm32 and arm64.
> 
> v2 :-
> 1. Mentioned the change in existing behavior in booting.txt.
> 2. Updated booting.txt with a new section to document "Booting Guests".
> 
> v3 :-
> 1. Removed the constraint that the entry point should be same as the load
> address. Thus, Xen uses both the load address and entry point to determine
> where the image is to be copied and the start address.
> 2. Updated documentation to denote that load address and start address
> should be within the memory region allocated by Xen.
> 3. Added constraint that user cannot provide entry point for a position
> independent executable (PIE) image.
> 
> v4 :-
> 1. Explicitly mentioned the version in booting.txt from when the uImage
> probing behavior has changed.
> 2. Logged the requested load address and entry point parsed from the uImage
> header.
> 3. Some style issues.
> 
>  docs/misc/arm/booting.txt         | 26 ++++++++++++++++
>  xen/arch/arm/include/asm/kernel.h |  2 +-
>  xen/arch/arm/kernel.c             | 49 +++++++++++++++++++++++++++++--
>  3 files changed, 73 insertions(+), 4 deletions(-)
> 
> diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
> index 3e0c03e065..aeb0123e8d 100644
> --- a/docs/misc/arm/booting.txt
> +++ b/docs/misc/arm/booting.txt
> @@ -23,6 +23,28 @@ The exceptions to this on 32-bit ARM are as follows:
>  
>  There are no exception on 64-bit ARM.
>  
> +Booting Guests
> +--------------
> +
> +Xen supports the legacy image header[3], zImage protocol for 32-bit
> +ARM Linux[1] and Image protocol defined for ARM64[2].
> +
> +Until Xen 4.17, in case of legacy image protocol, Xen ignored the load
> +address and entry point specified in the header. This has now changed.
> +
> +Now, it loads the image at the load address provided in the header.
> +And the entry point is used as the kernel start address.
> +
> +A deviation from uboot is that Xen treats "load address == 0x0" as
> +position independent execution (PIE). Thus, Xen will load such an image
> +at an address it considers appropriate. Also, user cannot specify the
> +entry point of a PIE image since the start address cannot be
> +predetermined.
> +
> +Users who want to use Xen with statically partitioned domains, can provide
> +the fixed non zero load address and start address for the dom0/domU kernel.
> +The load address and start address specified by the user in the header must
> +be within the memory region allocated by Xen.
>  
>  Firmware/bootloader requirements
>  --------------------------------
> @@ -39,3 +61,7 @@ Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/t
>  
>  [2] linux/Documentation/arm64/booting.rst
>  Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst
> +
> +[3] legacy format header
> +Latest version: https://source.denx.de/u-boot/u-boot/-/blob/master/include/image.h#L315
> +https://linux.die.net/man/1/mkimage
> diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
> index 5bb30c3f2f..4617cdc83b 100644
> --- a/xen/arch/arm/include/asm/kernel.h
> +++ b/xen/arch/arm/include/asm/kernel.h
> @@ -72,7 +72,7 @@ struct kernel_info {
>  #ifdef CONFIG_ARM_64
>              paddr_t text_offset; /* 64-bit Image only */
>  #endif
> -            paddr_t start; /* 32-bit zImage only */
> +            paddr_t start; /* Must be 0 for 64-bit Image */
>          } zimage;
>      };
>  };
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 23b840ea9e..0b7f591857 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -127,7 +127,7 @@ static paddr_t __init kernel_zimage_place(struct kernel_info *info)
>      paddr_t load_addr;
>  
>  #ifdef CONFIG_ARM_64
> -    if ( info->type == DOMAIN_64BIT )
> +    if ( (info->type == DOMAIN_64BIT) && (info->zimage.start == 0) )
>          return info->mem.bank[0].start + info->zimage.text_offset;

This is an issue because if we have a zImage64 kernel binary with a
uimage header, we are not setting zimage.text_offset appropriately, if I
am not mistaken.

The way booting.txt is written in this patch, I think the matching
behavior would be that if there is a uimage header, then the zImage64
header is ignored. If the uimage header has uimage.load == zero, then
we should allocate the load_addr for the kernel (PIE case).

I think it would also be OK if we choose the different behavior that if
there is a uimage header but uimage.load == zero, then we look at
zImage64.text_offset next.

Either way is OK for me as long as it is clearly specified in
booting.txt.




>  #endif
>  
> @@ -162,7 +162,12 @@ static void __init kernel_zimage_load(struct kernel_info *info)
>      void *kernel;
>      int rc;
>  
> -    info->entry = load_addr;
> +    /*
> +     * If the image does not have a fixed entry point, then use the load
> +     * address as the entry point.
> +     */
> +    if ( info->entry == 0 )
> +        info->entry = load_addr;
>  
>      place_modules(info, load_addr, load_addr + len);
>  
> @@ -223,10 +228,38 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>      if ( len > size - sizeof(uimage) )
>          return -EINVAL;
>  
> +    info->zimage.start = be32_to_cpu(uimage.load);
> +    info->entry = be32_to_cpu(uimage.ep);
> +
> +    /*
> +     * While uboot considers 0x0 to be a valid load/start address, for Xen
> +     * to maintain parity with zImage, we consider 0x0 to denote position
> +     * independent image. That means Xen is free to load such an image at
> +     * any valid address.
> +     */
> +    if ( info->zimage.start == 0 )
> +        printk(XENLOG_INFO
> +               "No load address provided. Xen will decide where to load it.\n");
> +    else
> +        printk(XENLOG_INFO
> +               "Provided load address: %"PRIpaddr" and entry address: %"PRIpaddr"\n",
> +               info->zimage.start, info->entry);
> +
> +    /*
> +     * If the image supports position independent execution, then user cannot
> +     * provide an entry point as Xen will load such an image at any appropriate
> +     * memory address. Thus, we need to return error.
> +     */
> +    if ( (info->zimage.start == 0) && (info->entry != 0) )
> +    {
> +        printk(XENLOG_ERR
> +               "Entry point cannot be non zero for PIE image.\n");
> +        return -EINVAL;
> +    }
> +
>      info->zimage.kernel_addr = addr + sizeof(uimage);
>      info->zimage.len = len;
>  
> -    info->entry = info->zimage.start;
>      info->load = kernel_zimage_load;
>  
>  #ifdef CONFIG_ARM_64
> @@ -366,6 +399,7 @@ static int __init kernel_zimage64_probe(struct kernel_info *info,
>      info->zimage.kernel_addr = addr;
>      info->zimage.len = end - start;
>      info->zimage.text_offset = zimage.text_offset;
> +    info->zimage.start = 0;
>  
>      info->load = kernel_zimage_load;
>  
> @@ -436,6 +470,15 @@ int __init kernel_probe(struct kernel_info *info,
>      u64 kernel_addr, initrd_addr, dtb_addr, size;
>      int rc;
>  
> +    /*
> +     * We need to initialize start to 0. This field may be populated during
> +     * kernel_xxx_probe() if the image has a fixed entry point (for e.g.
> +     * uimage.ep).
> +     * We will use this to determine if the image has a fixed entry point or
> +     * the load address should be used as the start address.
> +     */
> +    info->entry = 0;
> +
>      /* domain is NULL only for the hardware domain */
>      if ( domain == NULL )
>      {
> -- 
> 2.17.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 22:30:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 22:30:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482196.747589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIzuV-000617-71; Fri, 20 Jan 2023 22:30:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482196.747589; Fri, 20 Jan 2023 22:30:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pIzuV-000610-4E; Fri, 20 Jan 2023 22:30:59 +0000
Received: by outflank-mailman (input) for mailman id 482196;
 Fri, 20 Jan 2023 22:30:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqBo=5R=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pIzuU-00060u-HB
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 22:30:58 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1b2bea67-9912-11ed-91b6-6bf2151ebd3b;
 Fri, 20 Jan 2023 23:30:57 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D2A5262098;
 Fri, 20 Jan 2023 22:30:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2A051C433EF;
 Fri, 20 Jan 2023 22:30:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b2bea67-9912-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674253855;
	bh=h52ElCM10Z0p5uVE/m+qQ5BRAyYedUn9CRq/B3e7F2c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bDoFKdH5REjD2HBlZQDeRHqKuB2T/OPf8VWR8v/90lvGG1vdLHeT1OPUCqvyXByb4
	 NJP70kZI5s6rGrfYoGqylVNX+2u94cYmXau9QEnGGIrYmeD4dAsbRT4NeKO1KE0sMG
	 zGu1i10TQdN3wdx8uFUS7Sams6/qSHJRGJc9Z8WSgMjvusXesN53rKRL7k8b0n/GeB
	 OI3ySpv3vgtHrpMdVUKkkviHevkyVgNx3v3jMDYMwgLvys3OeGgEpRlRA0elawPpGy
	 tjWt9myV+duqsDWQKw5GOMAEImlGFxG26w9OKFxBX4PlyWzrllPASRU6MCd6c3d03t
	 SI+YFbiG9xLzQ==
Date: Fri, 20 Jan 2023 14:30:52 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v5] xen/arm: Probe the load/entry point address of an uImage
 correctly
In-Reply-To: <bb7ca965-c9f4-445f-dfe9-d96d0b3d8707@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301201404320.731018@ubuntu-linux-20-04-desktop>
References: <20230113122423.22902-1-ayan.kumar.halder@amd.com> <bb7ca965-c9f4-445f-dfe9-d96d0b3d8707@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Jan 2023, Michal Orzel wrote:
> Hello all,
> 
> On 13/01/2023 13:24, Ayan Kumar Halder wrote:
> > Currently, kernel_uimage_probe() does not read the load/entry point address
> > set in the uImage header. Thus, info->zimage.start is 0 (default value). This
> > causes kernel_zimage_place() to treat the binary (contained within the uImage)
> > as a position independent executable. Thus, it loads it at an incorrect
> > address.
> > 
> > The correct approach would be to read "uimage.load" and set
> > info->zimage.start. This will ensure that the binary is loaded at the
> > correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
> > address).
> > 
> > If the user provides a load address (i.e. "uimage.load") of 0x0, the image is
> > treated as a position independent executable. Xen can load such an image at
> > any address it considers appropriate. A position independent executable
> > cannot have a fixed entry point address.
> > 
> > This behavior is applicable for both arm32 and arm64 platforms.
> > 
> > Earlier, for arm32 and arm64 platforms, Xen ignored the load and entry
> > point address set in the uImage header. With this commit, Xen will use them.
> > This makes the behavior of Xen consistent with U-Boot for uImage headers.
> > 
> > Users who want to use Xen with statically partitioned domains can provide
> > non-zero load and entry addresses for the dom0/domU kernel. The load and
> > entry addresses provided must be within the memory region allocated by
> > Xen.
> > 
> > A deviation from U-Boot behaviour is that we consider a load address of 0x0
> > to denote that the image supports position independent execution. This is
> > to make the behavior consistent across uImage and zImage.
> > 
> > Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> 
> When looking at this patch, I spotted one incorrect Xen behavior related to supporting uImage kernels.
> And if we want to document that we support such kernels, we should take a look at it.
> 
> Xen supports gzip compressed kernels and it tries to decompress them before kernel_XXX_probe.
> For zImage and Image, their respective headers are built into the kernel itself and then such image is compressed.
> This results in a gzip header appearing right at the top of the image and the workflow works smoothly.
> 
> However, the uImage header is always added as a last step of the image preparation, using the mkimage utility,
> and it appears right at the top of the image. That is why the uImage header has a field "ih_comp" used to indicate the compression type.
> So a gzip compressed uImage will have the uImage header before the gzip header. Because Xen tries to decompress the image
> before probing its header, this will not work for uImage.

It looks like, to solve both this problem and the other one about
zimage64.text_offset, we would need to move the kernel_uimage_probe()
call earlier, before kernel_decompress() and before
kernel_zimage64_probe().

However, I can see that this patch is already complex as is, so I would
also be OK if any additional changes (e.g. moving kernel_uimage_probe()
earlier) were done in a separate patch on top.
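For reference, the legacy uImage header Michal describes can be sketched as a C struct. Field names and the magic value follow U-Boot's include/image.h; treat this as an illustrative sketch rather than the authoritative definition:

```c
#include <stdint.h>

/* Legacy uImage header: 64 bytes prepended by mkimage, all multi-byte
 * fields stored big-endian on disk. Sketch after U-Boot's include/image.h. */
typedef struct uimage_header {
    uint32_t ih_magic;    /* 0x27051956 for legacy uImages */
    uint32_t ih_hcrc;     /* CRC32 of the header (with this field zeroed) */
    uint32_t ih_time;     /* image creation timestamp */
    uint32_t ih_size;     /* payload size in bytes */
    uint32_t ih_load;     /* load address ("uimage.load") */
    uint32_t ih_ep;       /* entry point address ("uimage.ep") */
    uint32_t ih_dcrc;     /* CRC32 of the payload */
    uint8_t  ih_os;       /* operating system */
    uint8_t  ih_arch;     /* CPU architecture */
    uint8_t  ih_type;     /* image type (kernel, ramdisk, ...) */
    uint8_t  ih_comp;     /* compression type (none, gzip, bzip2, ...) */
    uint8_t  ih_name[32]; /* image name */
} uimage_header_t;
```

Because this 64-byte header sits in front of the (possibly gzip compressed) payload, a probe that only looks at the first bytes of the file sees the uImage magic, not the gzip magic.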


From xen-devel-bounces@lists.xenproject.org Fri Jan 20 23:02:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Jan 2023 23:02:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482201.747600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ0O3-00012m-Lq; Fri, 20 Jan 2023 23:01:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482201.747600; Fri, 20 Jan 2023 23:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ0O3-00012f-HI; Fri, 20 Jan 2023 23:01:31 +0000
Received: by outflank-mailman (input) for mailman id 482201;
 Fri, 20 Jan 2023 23:01:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqBo=5R=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pJ0O2-00012Z-QL
 for xen-devel@lists.xenproject.org; Fri, 20 Jan 2023 23:01:30 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5ddabf3e-9916-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 00:01:26 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8EACF620D9;
 Fri, 20 Jan 2023 23:01:25 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C11F8C433EF;
 Fri, 20 Jan 2023 23:01:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ddabf3e-9916-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674255685;
	bh=9gQIzIMM5cpDiv+kGDBATwvbTuivHW/jbIB8qFx/DIs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Lfvwb1p5WWreKh/wXmf8LkJtCv/9Rb0tabEzYiI3VmfeCmIaPfwz3iWhRjfOogzQe
	 PY4ExOyvn7d3Qjpxko7HASllFs6/7UrfaJ9lAO9euBl74sTTmuvTYGb4bwftM0L6wj
	 MawzY0m5AgCz9cSEdxiGNxYRY1v3moAFLVZ00fz5seLakPCbNF1uUi8o16Pb1ULXWl
	 s4tDuWn/0etFiBLKj533O47aTKXfl/aokxULITyG+kjJr/9qClSK5rw2X6dT2FyHFm
	 4nOiLsjNdaVL68VbmtoNbjqoYVc0FbKlc5q4BGLNGzLvaKNx0PdcAXaG65BT2pKKWO
	 bs469z/9opGVg==
Date: Fri, 20 Jan 2023 15:01:22 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayankuma@amd.com>
cc: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@amd.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
    Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
In-Reply-To: <0a7d3da6-efe7-2cf1-563a-3c5c2ec473b2@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301201455100.731018@ubuntu-linux-20-04-desktop>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com> <20230117174358.15344-3-ayan.kumar.halder@amd.com> <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop> <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org> <ba37ee02-c07c-2803-0867-149c779890b6@amd.com>
 <cd673f97-9c0d-286b-e973-7a85c84dd576@xen.org> <2017e0d4-dd02-e81d-99f4-1ef47fc9e774@amd.com> <42b138a6-59f5-7614-d96f-30e1784c97a4@xen.org> <0a7d3da6-efe7-2cf1-563a-3c5c2ec473b2@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-450487461-1674255684=:731018"


--8323329-450487461-1674255684=:731018
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 20 Jan 2023, Ayan Kumar Halder wrote:
> Hi Julien/Michal,
> 
> On 20/01/2023 17:49, Julien Grall wrote:
> > 
> > 
> > On 20/01/2023 16:03, Michal Orzel wrote:
> > > Hi Julien,
> > 
> > Hi Michal,
> > 
> > > 
> > > On 20/01/2023 16:09, Julien Grall wrote:
> > > > 
> > > > 
> > > > On 20/01/2023 14:40, Michal Orzel wrote:
> > > > > Hello,
> > > > 
> > > > Hi,
> > > > 
> > > > > 
> > > > > On 20/01/2023 10:32, Julien Grall wrote:
> > > > > > 
> > > > > > 
> > > > > > Hi,
> > > > > > 
> > > > > > On 19/01/2023 22:54, Stefano Stabellini wrote:
> > > > > > > On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
> > > > > > > > 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
> > > > > > > > 2. One should use 'PRIx64' to display 'u64' in hex format. The current
> > > > > > > > use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
> > > > > > > > address.
> > > > > > > > 
> > > > > > > > Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> > > > > > > 
> > > > > > > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > > > > > 
> > > > > > 
> > > > > > I have committed the patch.
> > > > > The CI test jobs (static-mem) failed on this patch:
> > > > > https://gitlab.com/xen-project/xen/-/pipelines/752911309
> > > > 
> > > > Thanks for the report.
> > > > 
> > > > > 
> > > > > I took a look at it and this is because in the test script we
> > > > > try to find a node whose unit-address does not have leading zeroes.
> > > > > However, after this patch, we switched to PRIpaddr which is defined as
> > > > > 016lx/016llx and
> > > > > we end up creating nodes like:
> > > > > 
> > > > > memory@0000000050000000
> > > > > 
> > > > > instead of:
> > > > > 
> > > > > memory@50000000
> > > > > 
> > > > > We could modify the script,
> > > > 
> > > > TBH, I think it was a mistake for the script to rely on how Xen describe
> > > > the memory banks in the Device-Tree.
> > > > 
> > > > For instance, from my understanding, it would be valid for Xen to create
> > > > a single node for all the banks or even omitting the unit-address if
> > > > there is only one bank.
> > > > 
> > > > > but do we really want to create nodes
> > > > > with leading zeroes? The dt spec does not mention it, although [1]
> > > > > specifies that the Linux convention is not to have leading zeroes.
> > > > 
> > > > Reading through the spec in [2], it suggests the current naming is
> > > > fine. That said, the example matches the Linux convention (I guess that's
> > > > not surprising...).
> > > > 
> > > > I am open to removing the leading zeroes. However, I think the CI also
> > > > needs to be updated (see above why).
> > > Yes, the CI needs to be updated as well.
> > 
> > Can either you or Ayan look at it?
> 
> Does this change match the expectation?
> 
> diff --git a/automation/scripts/qemu-smoke-dom0less-arm64.sh
> b/automation/scripts/qemu-smoke-dom0less-arm64.sh
> index 2b59346fdc..9f5e700f0e 100755
> --- a/automation/scripts/qemu-smoke-dom0less-arm64.sh
> +++ b/automation/scripts/qemu-smoke-dom0less-arm64.sh
> @@ -20,7 +20,7 @@ if [[ "${test_variant}" == "static-mem" ]]; then
>      domu_size="10000000"
>      passed="${test_variant} test passed"
>      domU_check="
> -current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@${domu_base}/reg 2>/dev/null)
> +current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@$[0-9]*/reg 2>/dev/null)
>  expected=$(printf \"%016x%016x\" 0x${domu_base} 0x${domu_size})
>  if [[ \"\${expected}\" == \"\${current}\" ]]; then
>         echo \"${passed}\"

We need to check for ${domu_base} with or without leading zeroes:

current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@*(0)${domu_base}/reg 2>/dev/null)
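For reference, the difference between the padded and unpadded unit-addresses discussed above comes purely from the format width: Xen's PRIpaddr expands to a zero-padded 16-digit hex format on 64-bit, while the Linux convention drops leading zeroes. A standalone sketch in plain C (not Xen's actual node-building code):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Format a memory node name the way Xen currently does (PRIpaddr expands
 * to a 016-width hex format on 64-bit) or the Linux way (no padding). */
static void format_unit_address(char *buf, size_t len, uint64_t base,
                                int padded)
{
    if (padded)
        snprintf(buf, len, "memory@%016" PRIx64, base); /* e.g. memory@0000000050000000 */
    else
        snprintf(buf, len, "memory@%" PRIx64, base);    /* e.g. memory@50000000 */
}
```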
--8323329-450487461-1674255684=:731018--


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 00:03:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 00:03:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482208.747609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ1LJ-0007w9-FU; Sat, 21 Jan 2023 00:02:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482208.747609; Sat, 21 Jan 2023 00:02:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ1LJ-0007w2-Cn; Sat, 21 Jan 2023 00:02:45 +0000
Received: by outflank-mailman (input) for mailman id 482208;
 Sat, 21 Jan 2023 00:02:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ1LI-0007vs-1U; Sat, 21 Jan 2023 00:02:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ1LH-0006OU-St; Sat, 21 Jan 2023 00:02:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ1LH-0002Z1-GK; Sat, 21 Jan 2023 00:02:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ1LH-0008KC-Fs; Sat, 21 Jan 2023 00:02:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mgOKd8s7LSQFIqS9JT/k9yaeAqMYCFcKeC9fi5HZJtM=; b=JH3dtMJkQEoP3b7AciqzZqQujw
	MCEDkXYqpc+BgaXjHDFtROjVwdq8CSZA3u7mZHBqMTE3b3MKboVJlA9eE5G9uhd8X0mEL5Gj8Nyld
	sXwvpnqrP4sMiNIWNiUMCrEltDmDTNWX9Kkmx4aGl0IzwhdnUwm3mBTWP3DmY2TuvjBQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176005-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176005: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=56f3782633c252dcec9c96eb345f95fb9557cea7
X-Osstest-Versions-That:
    xen=93017efd7c441420318e46443a06e40fa6f1b6d4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 00:02:43 +0000

flight 176005 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176005/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  56f3782633c252dcec9c96eb345f95fb9557cea7
baseline version:
 xen                  93017efd7c441420318e46443a06e40fa6f1b6d4

Last test of basis   176001  2023-01-20 16:04:11 Z    0 days
Testing same since   176005  2023-01-20 21:01:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93017efd7c..56f3782633  56f3782633c252dcec9c96eb345f95fb9557cea7 -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 00:12:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 00:12:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482240.747637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ1UC-0001yK-PC; Sat, 21 Jan 2023 00:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482240.747637; Sat, 21 Jan 2023 00:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ1UC-0001yD-LX; Sat, 21 Jan 2023 00:11:56 +0000
Received: by outflank-mailman (input) for mailman id 482240;
 Sat, 21 Jan 2023 00:11:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eSPV=5S=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pJ1UA-0001y7-OP
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 00:11:55 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32fb0a02-9920-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 01:11:49 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 71456620E2;
 Sat, 21 Jan 2023 00:11:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4DA64C433EF;
 Sat, 21 Jan 2023 00:11:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32fb0a02-9920-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674259907;
	bh=JOOcFI5BGNG8tuy0rnWZiPmdPFOfvqWcJ0i+DJ6qQRo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=m57w1oTEy7yduviAhTNnJvLJ1OkcgVKBYI0Q10G7tOPy3ZC7ZDz6wjV/ag4Su4Pl2
	 ufAOmcVscSgGUgznscQTgguTysfqxClvkDTcXi+6YHjgPq7QBuy245bEwkzMVWdevv
	 LcTDCYaLQAEEEsnfQqLz8FWJyAW8DksHUt1TgtLi9lj32c51BZHGRR9MwUdlYf/L8B
	 XMgiSZaImG29r4gDhaiJYlnJFIEsRMCKObFOkrCBnoRcNtwPZdU+vFrEn7OCmoSnUI
	 JXrCZRjXsoThAYJ9yfHgm5A+QUo8cUyKGBgPP+GM+VTgpHE+zKruPa+0DXIob1/w2/
	 67xT1MGGwH3rA==
Date: Fri, 20 Jan 2023 16:11:44 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Vipul Suneja <vsuneja63@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, oleksandr_andrushchenko@epam.com, 
    oleksandr_tyshchenko@epam.com, jgross@suse.com, boris.ostrovsky@oracle.com, 
    Bertrand.Marquis@arm.com, Stewart.Hildebrand@amd.com, michal.orzel@amd.com, 
    vikram.garhwal@amd.com
Subject: Re: Porting Xen in raspberry pi4B
In-Reply-To: <CALAP8f_op8wS=7AaZF0wCjZm8aSmQMfEY5Bv+30+8UDGmQrezA@mail.gmail.com>
Message-ID: <alpine.DEB.2.22.394.2301201604090.731018@ubuntu-linux-20-04-desktop>
References: <CALAP8f--jyG=ufJ9WGtL6qoeGdsykjNK85G3q50SzJm5+wOzhQ@mail.gmail.com> <alpine.DEB.2.22.394.2211221605470.1049131@ubuntu-linux-20-04-desktop> <CALAP8f_QiHN4dP3z+LQgKdGeo8-=9DMyk0W7+x6P2eHvnOD_wQ@mail.gmail.com> <alpine.DEB.2.22.394.2212011128430.4039@ubuntu-linux-20-04-desktop>
 <CALAP8f_b=0m0dqj9a50UYXYfw9X873i07sG9eyxFSqxF0yEneQ@mail.gmail.com> <alpine.DEB.2.22.394.2212121406270.3075842@ubuntu-linux-20-04-desktop> <CALAP8f9JY23ZyDGnku4iWf5YCamSQKsZtdZj3MhX9TrF7wgEpw@mail.gmail.com> <alpine.DEB.2.22.394.2212131518180.315094@ubuntu-linux-20-04-desktop>
 <CALAP8f-fka4jicvLhzS8NFyyqD_NnffMxrZmqpz-x9JnL7Oy7w@mail.gmail.com> <alpine.DEB.2.22.394.2212141443130.315094@ubuntu-linux-20-04-desktop> <CALAP8f8yOdG_g0GpWG5ZPZ0BKiaKCyM2N4V6x_8Fr08f7QjpvA@mail.gmail.com> <alpine.DEB.2.22.394.2212221523390.4079@ubuntu-linux-20-04-desktop>
 <CALAP8f8yvZUKJEXXL8qcoy9=nJ1G97OtiWSv7tk1LDerEWUqiw@mail.gmail.com> <CALAP8f_op8wS=7AaZF0wCjZm8aSmQMfEY5Bv+30+8UDGmQrezA@mail.gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1937431632-1674259579=:731018"
Content-ID: <alpine.DEB.2.22.394.2301201606230.731018@ubuntu-linux-20-04-desktop>


--8323329-1937431632-1674259579=:731018
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2301201606231.731018@ubuntu-linux-20-04-desktop>

Hi Vipul,

Sorry for the late reply.

Unfortunately I don't have a simple answer for you. If I were you, I
would add printf's everywhere in QEMU (or use gdb) until I figure out
exactly why the graphics events and pixels don't propagate from
./hw/display/xenfb.c to ui/console.c and to ui/vnc.c.

You should be able to trace all the way from your VNC client requests
received by ui/vnc.c, to the pixel surface created by
qemu_create_displaysurface. You should also be able to check if the
displaysurface buffer has the right content (if it is not all black then
at least one of the bytes should *not* be 0xff.)
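That byte check can be done with a trivial helper like the following (a standalone sketch; in practice the buffer pointer and length would come from QEMU's display surface, and the background byte value depends on the pixel format):

```c
#include <stdbool.h>
#include <stddef.h>

/* Return true if any byte of the pixel buffer differs from `background`,
 * i.e. the surface is not a uniform fill. Illustrative only: the real
 * buffer and its size would come from QEMU's display surface. */
static bool buffer_has_content(const unsigned char *buf, size_t len,
                               unsigned char background)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] != background)
            return true;
    }
    return false;
}
```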

That should tell you if the problem is between xenfb.c and the guest,
or between ui/vnc.c and xenfb.c.

displaysurface is the common interface/API to move the pixels between
xenfb.c and ui/vnc.c.

Cheers,

Stefano


On Tue, 10 Jan 2023, Vipul Suneja wrote:
> Hi Stefano,
> Thanks!
> 
> Any further input as per the attached logs?
> 
> Regards,
> Vipul Kumar
> 
> On Mon, Dec 26, 2022 at 11:30 PM Vipul Suneja <vsuneja63@gmail.com> wrote:
>       Hi Stefano,
> 
>       Thanks!
> 
>       As you mentioned the functions qemu_create_displaysurface, qemu_create_displaysurface_from, dpy_gfx_replace_surface,
>       dpy_gfx_update and dpy_gfx_check_format, I found that
>       these functions are not part of the /ui/vnc.c source but are defined in /ui/console.c. None of these functions
>       is called from the vnc.c source itself. I have added debug logs for
>       all of these functions in console.c, but I can see in the logs that only qemu_create_displaysurface and
>       dpy_gfx_replace_surface are invoked. I also tried vncviewer
>       on the host machine, but the other functions are still not invoked. Attaching the log file; any other suggestion based on it, or any
>       input for debugging the VNC source file?
> You can also try to use another QEMU UI like SDL to see if the problem is specific to VNC only.
> I already tried with SDL, by adding "vfb=[ 'type=sdl' ]" to the guest configuration file, but it failed and the
> guest machine did not start. Please correct me if the configuration or steps to use SDL are wrong.
> 
> Thanks & Regards,
> Vipul Kumar
> 
> On Fri, Dec 23, 2022 at 5:13 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       Hi Vipul,
> 
>       Great that you managed to set up a debugging environment. The logs look
>       very promising: it looks like xenfb.c is handling events as expected.
>       So it would seem that the xen-fbfront.c -> xenfb.c connection is
>       working.
> 
>       So the next step is to check that the xenfb.c -> ./ui/vnc.c connection is working.
> 
>       It could be that the pixels and mouse events arrive just fine in
>       xenfb.c, but then there is an issue with exporting them to the vncserver
>       implementation inside QEMU, which is ./ui/vnc.c. The interesting
>       functions there are qemu_create_displaysurface,
>       qemu_create_displaysurface_from, dpy_gfx_replace_surface,
>       dpy_gfx_update, and dpy_gfx_check_format.
> 
>       Specifically dpy_gfx_update should cause VNC to render the new area.
> 
>       qemu_create_displaysurface_from lets VNC use the xenfb buffer directly,
>       rather than using a secondary buffer and memory copies.
>       Interestingly, dpy_gfx_check_format should be used to check if it is
>       appropriate to share the buffer (qemu_create_displaysurface_from) or not
>       (qemu_create_displaysurface) but we don't call it.
> 
>       I think it would be good to add a call to dpy_gfx_check_format in
>       xenfb_update where we call qemu_create_displaysurface_from and also add
>       a printk.
> 
>       You can try to disable the buffer sharing by replacing
>       qemu_create_displaysurface_from with qemu_create_displaysurface. You can
>       also try to use another QEMU UI like SDL to see if the problem is
>       specific to VNC only.
> 
>       Cheers,
> 
>       Stefano
> 
> 
>       On Mon, 19 Dec 2022, Vipul Suneja wrote:
>       > Hi Stefano,
>       >
>       > Thanks!
>       >
>       > I managed to prepare a patch adding debug printf logs in xenfb.c and re-compile QEMU in the Yocto image. Just for
>       > reference, I have included logs in all the functions.
>       > Attaching the QEMU log file; I can see the entry and exit logs for "xenfb_handle_events" and "xenfb_map_fb" coming up
>       > after the host machine boots up. Can you please assist further: which parameters should be cross-checked, or do you have
>       > any other input as per the logs?
>       >
>       > Thanks & Regards,
>       > Vipul Kumar
>       >
>       > On Thu, Dec 15, 2022 at 4:17 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       Hi Vipul,
>       >
>       >       For QEMU you actually need to follow the Yocto build process to update
>       >       the QEMU binary. That is because QEMU is a userspace application with
>       >       lots of library dependencies so we cannot just do "make" with a
>       >       cross-compiler like in the case of Xen.
>       >
>       >       So you need to make changes to QEMU and then add those changes as a
>       >       patch to the Yocto QEMU build recipe, or configure Yocto to use your
>       >       local tree to build QEMU. I am not a Yocto expert and the Yocto community
>       >       would be a better place to ask for advice there. You can see from here
>       >       some instructions on how to build Xen using a local tree, see the usage
>       >       of EXTERNALSRC (note that this is *not* what you need: you need to build
>       >       QEMU with a local tree, not Xen. But I thought that the wikipage might
>       >       still be a starting point)
>       >
>       >       https://wiki.xenproject.org/wiki/Xen_on_ARM_and_Yocto
>       >
>       >       Cheers,
>       >
>       >       Stefano
>       >
>       >
>       >       On Thu, 15 Dec 2022, Vipul Suneja wrote:
>       >       > Hi Stefano,
>       >       >
>       >       > Thanks!
>       >       >
>       >       > I could see QEMU 6.2.0 compiled and installed in the host image xen-image-minimal. I also found the xenfb.c
>       >       > source file and modified it with debug logs.
>       >       > I have set up a cross-compile environment and did 'make clean' and 'make all' to recompile, but it is failing.
>       >       > In case I am doing something wrong, can you please assist me
>       >       > with the correct steps to compile QEMU? Below are the error logs:
>       >       >
>       >       >
>       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/build$ make all
>       >       > [1/3864] Compiling C object libslirp.a.p/slirp_src_arp_table.c.o
>       >       > [2/3864] Compiling C object subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o
>       >       > [3/3864] Linking static target subprojects/libvhost-user/libvhost-user.a
>       >       > [4/3864] Compiling C object libslirp.a.p/slirp_src_vmstate.c.o
>       >       > [5/3864] Compiling C object libslirp.a.p/slirp_src_dhcpv6.c.o
>       >       > [6/3864] Compiling C object libslirp.a.p/slirp_src_dnssearch.c.o
>       >       > [7/3864] Compiling C object libslirp.a.p/slirp_src_bootp.c.o
>       >       > [8/3864] Compiling C object libslirp.a.p/slirp_src_cksum.c.o
>       >       > [9/3864] Compiling C object libslirp.a.p/slirp_src_if.c.o
>       >       > [10/3864] Compiling C object libslirp.a.p/slirp_src_ip6_icmp.c.o
>       >       > [11/3864] Compiling C object libslirp.a.p/slirp_src_ip6_input.c.o
>       >       > [12/3864] Compiling C object libslirp.a.p/slirp_src_ip6_output.c.o
>       >       > [13/3864] Compiling C object libslirp.a.p/slirp_src_ip_icmp.c.o
>       >       > [14/3864] Compiling C object libslirp.a.p/slirp_src_ip_input.c.o
>       >       > [15/3864] Compiling C object libslirp.a.p/slirp_src_ip_output.c.o
>       >       > [16/3864] Compiling C object libslirp.a.p/slirp_src_mbuf.c.o
>       >       > [17/3864] Compiling C object libslirp.a.p/slirp_src_misc.c.o
>       >       > [18/3864] Compiling C object libslirp.a.p/slirp_src_ncsi.c.o
>       >       > [19/3864] Compiling C object libslirp.a.p/slirp_src_ndp_table.c.o
>       >       > [20/3864] Compiling C object libslirp.a.p/slirp_src_sbuf.c.o
>       >       > [21/3864] Compiling C object libslirp.a.p/slirp_src_slirp.c.o
>       >       > [22/3864] Compiling C object libslirp.a.p/slirp_src_socket.c.o
>       >       > [23/3864] Compiling C object libslirp.a.p/slirp_src_state.c.o
>       >       > [24/3864] Compiling C object libslirp.a.p/slirp_src_stream.c.o
>       >       > [25/3864] Compiling C object libslirp.a.p/slirp_src_tcp_input.c.o
>       >       > [26/3864] Compiling C object libslirp.a.p/slirp_src_tcp_output.c.o
>       >       > [27/3864] Compiling C object libslirp.a.p/slirp_src_tcp_subr.c.o
>       >       > [28/3864] Compiling C object libslirp.a.p/slirp_src_tcp_timer.c.o
>       >       > [29/3864] Compiling C object libslirp.a.p/slirp_src_tftp.c.o
>       >       > [30/3864] Compiling C object libslirp.a.p/slirp_src_udp.c.o
>       >       > [31/3864] Compiling C object libslirp.a.p/slirp_src_udp6.c.o
>       >       > [32/3864] Compiling C object libslirp.a.p/slirp_src_util.c.o
>       >       > [33/3864] Compiling C object libslirp.a.p/slirp_src_version.c.o
>       >       > [34/3864] Linking static target libslirp.a
>       >       > [35/3864] Generating qemu-version.h with a custom command (wrapped by meson to capture output)
>       >       > FAILED: qemu-version.h
>       >       > /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/recipe-sysroot-native/usr/bin/meson
>       >       > --internal exe --capture qemu-version.h --
>       >       > /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/qemu-6.2.0/scripts/qemu-version.sh
>       >       > /home/agl/Automotive/ADAS_Infotainment/Platform/Poky_Kirkstone/build/tmp/work/cortexa72-poky-linux/qemu/6.2.0-r0/qemu-6.2.0 '' 6.2.0
>       >       > /usr/bin/env: ‘nativepython3’: No such file or directory
>       >       > ninja: build stopped: subcommand failed.
>       >       > make: *** [Makefile:162: run-ninja] Error 1
>       >       >
>       >       > Thanks & Regards,
>       >       > Vipul Kumar
>       >       >
>       >       > On Wed, Dec 14, 2022 at 4:55 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >       Hi Vipul,
>       >       >
>       >       >       Good progress! The main function we should check is "xenfb_refresh", but
>       >       >       from the logs it looks like it is called several times, which means that
>       >       >       everything seems to be working as expected on the Linux side.
>       >       >
>       >       >       It is time to investigate the QEMU side:
>       >       >       ./hw/display/xenfb.c:xenfb_handle_events
>       >       >       ./hw/display/xenfb.c:xenfb_map_fb
>       >       >
>       >       >       I wonder if the issue is internal to QEMU. You might want to use an
>       >       >       older QEMU version to check if it works, maybe 6.0 or 5.0 or even 4.0.
>       >       >       I also wonder if it is a problem between xenfb.c and the rest of QEMU. I
>       >       >       would investigate how xenfb->pixels is rendered by the rest of QEMU.
>       >       >       Specifically you might want to look at the call to
>       >       >       qemu_create_displaysurface, qemu_create_displaysurface_from and
>       >       >       dpy_gfx_replace_surface in xenfb_update.
>       >       >
>       >       >       I hope this helps.
>       >       >
>       >       >       Cheers,
>       >       >
>       >       >       Stefano
>       >       >
>       >       >
>       >       >       On Tue, 13 Dec 2022, Vipul Suneja wrote:
>       >       >       > Hi Stefano,
>       >       >       >
>       >       >       > Thanks!
>       >       >       >
>       >       >       > I modified the xen-fbfront.c source file, included printk debug logs, & cross-compiled it. I included the
>       >       >       > printk debug logs at the entry & exit of all functions of the xen-fbfront.c file.
>       >       >       > I generated the kernel module & loaded it in the guest machine at bootup. I could see lots of logs coming
>       >       >       > up, and could see multiple functions being invoked even though I have not used vncviewer on the host.
>       >       >       > Attaching the log file for reference. Any specific function or parameters that have to be checked, or any
>       >       >       > other suggestion as per the logs?
>       >       >       >
>       >       >       > Thanks & Regards,
>       >       >       > Vipul Kumar
>       >       >       >
>       >       >       > On Tue, Dec 13, 2022 at 3:44 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >       >       Hi Vipul,
>       >       >       >
>       >       >       >       I am online on IRC OFTC #xendevel (https://www.oftc.net/, you need a
>       >       >       >       registered nickname to join #xendevel).
>       >       >       >
>       >       >       >       For development and debugging I find that it is a lot easier to
>       >       >       >       crosscompile the kernel "by hand", and do a monolithic build, rather
>       >       >       >       than going through Yocto.
>       >       >       >
>       >       >       >       For instance the following builds for me:
>       >       >       >
>       >       >       >       cd linux.git
>       >       >       >       export ARCH=arm64
>       >       >       >       export CROSS_COMPILE=/path/to/cross-compiler
>       >       >       >       make defconfig
>       >       >       >       [add printks to drivers/video/fbdev/xen-fbfront.c]
>       >       >       >       make -j8 Image.gz
>       >       >       >
>       >       >       >       And Image.gz boots on Xen as DomU kernel without issues.
>       >       >       >
>       >       >       >       Cheers,
>       >       >       >
>       >       >       >       Stefano
>       >       >       >
>       >       >       >       On Sat, 10 Dec 2022, Vipul Suneja wrote:
>       >       >       >       > Hi Stefano,
>       >       >       >       >
>       >       >       >       > Thanks!
>       >       >       >       >
>       >       >       >       > I have included printk debug logs in the xen-fbfront.c source file. While cross-compiling to
>       >       >       >       > generate the .ko with the "xen-guest-image-minimal" toolchain, it's throwing a "modpost not
>       >       >       >       > found" error. I could see the modpost.c source file, but the final modpost executable is
>       >       >       >       > missing. Any input on this? Below are the logs:
>       >       >       >       >
>       >       >       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$ make
>       >       >       >       > make ARCH=arm64 -I/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/usr/include/asm -C
>       >       >       >       > /opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build
>       >       >       >       > M=/home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer modules
>       >       >       >       > make[1]: Entering directory
>       >       '/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build'
>       >       >       >       > arch/arm64/Makefile:36: Detected assembler with broken .inst; disassembly will be unreliable
>       >       >       >       > warning: the compiler differs from the one used to build the kernel
>       >       >       >       >   The kernel was built by: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
>       >       >       >       >   You are using:           aarch64-poky-linux-gcc (GCC) 11.3.0
>       >       >       >       >   CC [M]
>        /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/xen-fbfront.o
>       >       >       >       >   MODPOST
>       /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/Module.symvers
>       >       >       >       > /bin/sh: 1: scripts/mod/modpost: not found
>       >       >       >       > make[2]: *** [scripts/Makefile.modpost:133:
>       >       >       >       /home/agl/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer/Module.symvers]
>       >       >       >       > Error 127
>       >       >       >       > make[1]: *** [Makefile:1813: modules] Error 2
>       >       >       >       > make[1]: Leaving directory
>       >       '/opt/poky/4.0.5/sysroots/cortexa72-poky-linux/lib/modules/5.15.72-yocto-standard/build'
>       >       >       >       > make: *** [Makefile:5: all] Error 2
>       >       >       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$ ls -l
>       >       >       >       > total 324
>       >       >       >       > -rwxrwxrwx 1 agl agl    359 Dec 10 22:41 Makefile
>       >       >       >       > -rw-rw-r-- 1 agl agl     90 Dec 10 22:49 modules.order
>       >       >       >       > -rw-r--r-- 1 agl agl  18331 Dec  1 20:32 xen-fbfront.c
>       >       >       >       > -rw-rw-r-- 1 agl agl     90 Dec 10 22:49 xen-fbfront.mod
>       >       >       >       > -rw-rw-r-- 1 agl agl 297832 Dec 10 22:49 xen-fbfront.o
>       >       >       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$ file
>       xen-fbfront.o
>       >       >       >       > xen-fbfront.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), with debug_info, not
>       stripped
>       >       >       >       > agl@agl-OptiPlex-7010:~/Automotive/ADAS_Infotainment/project/Application/Xen/Framebuffer$
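[Editor's note: the "scripts/mod/modpost: not found" failure above happens because modpost is a host-side helper binary that SDK kernel build directories do not always ship, even when modpost.c is present. A small pre-flight check is sketched below; the helper name is made up for illustration.]

```shell
# check_modpost: report whether a kernel build dir ships the modpost
# host binary needed for out-of-tree module builds (hypothetical helper).
check_modpost() {
    if [ -x "$1/scripts/mod/modpost" ]; then
        echo ok
    else
        echo "modpost missing in $1"
    fi
}
```

If it reports missing, regenerating the kernel's host tools (e.g. `make modules_prepare` in that build directory) is the usual first step; with a read-only Yocto SDK sysroot, that directory may not be writable, in which case populating it via the `kernel-devsrc` recipe may be needed instead.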
>       >       >       >       >
>       >       >       >       > I have connected an HDMI-based 1980x1024 resolution display screen to the raspberrypi4 for
>       >       >       >       > testing purposes. I hope connecting this display to the rpi4 should be ok.
>       >       >       >       >
>       >       >       >       > Is there any other way we can connect for a detailed discussion on the display bring-up issue?
>       >       >       >       > This would really help to resolve it.
>       >       >       >       >
>       >       >       >       > Thanks & Regards,
>       >       >       >       > Vipul Kumar
>       >       >       >       >
>       >       >       >       > On Fri, Dec 2, 2022 at 1:02 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >       >       >       On Thu, 1 Dec 2022, Vipul Suneja wrote:
>       >       >       >       >       > Hi Stefano,
>       >       >       >       >       > Thanks!
>       >       >       >       >       >
>       >       >       >       >       > I am exploring both options here: modification of the framebuffer source file & setting up an
>       >       >       >       >       > x11vnc server in the guest.
>       >       >       >       >       > Other than these, I would like to share a few findings with you.
>       >       >       >       >       >
>       >       >       >       >       > 1. If I keep "CONFIG_XEN_FBDEV_FRONTEND=y" then xen-fbfront.ko is not generated, but if I keep
>       >       >       >       >       > "CONFIG_XEN_FBDEV_FRONTEND=m" then I can see xen-fbfront.ko & it loads as well. The same goes
>       >       >       >       >       > for the other frontend/backend drivers. Do we need to configure these drivers as a module (m)
>       >       >       >       >       > only?
>       >       >       >       >
>       >       >       >       >       xen-fbfront should work both as a module (xen-fbfront.ko) or built-in
>       >       >       >       >       (CONFIG_XEN_FBDEV_FRONTEND=y).
>       >       >       >       >
>       >       >       >       >
>       >       >       >       >
>       >       >       >       >       > 2. I could see the xenstored service running on the host, but it's always failing on the guest
>       >       >       >       >       > machine. I could see it in the bootup logs & via systemctl status also.
>       >       >       >       >
>       >       >       >       >       That is normal. xenstored is only meant to be run in Dom0, not in the
>       >       >       >       >       domUs. If you use the same rootfs for Dom0 and DomU then xenstored will
>       >       >       >       >       fail starting in the DomU (but should succeed in Dom0), which is what we
>       >       >       >       >       want.
>       >       >       >       >
>       >       >       >       >       If you run "xenstore-ls" in Dom0, you'll see a bunch of entries,
>       >       >       >       >       including some of them related to "vfb" which is the virtual framebuffer
>       >       >       >       >       protocol. You should also see an entry called "state" set to "4" which
>       >       >       >       >       means "connected". state = 4 is usually when everything works. Normally
>       >       >       >       >       when things don't work state != 4.
>       >       >       >       >
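[Editor's note: the state numbers discussed above come from Xen's XenbusState enumeration in the public header xen/include/public/io/xenbus.h. A small lookup sketch for interpreting xenstore-ls output:]

```shell
# Map a XenbusState number (as shown by xenstore-ls) to its name,
# per Xen's public io/xenbus.h enumeration.
xenbus_state_name() {
    case "$1" in
        1) echo Initialising ;;
        2) echo InitWait ;;
        3) echo Initialised ;;
        4) echo Connected ;;     # the value you want to see
        5) echo Closing ;;
        6) echo Closed ;;
        7) echo Reconfiguring ;;
        8) echo Reconfigured ;;
        *) echo Unknown ;;
    esac
}
```

For example, `xenbus_state_name "$(xenstore-read /local/domain/1/device/vfb/0/state)"` in Dom0 would print the vfb state by name (the xenstore path here follows the layout shown later in this thread).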
>       >       >       >       >
>       >       >       >       >
>       >       >       >       >       > Below are the logs:
>       >       >       >       >       > [  OK  ] Reached target Basic System.
>       >       >       >       >       > [  OK  ] Started Kernel Logging Service.
>       >       >       >       >       > [  OK  ] Started System Logging Service.
>       >       >       >       >       >          Starting D-Bus System Message Bus...
>       >       >       >       >       >          Starting User Login Management...
>       >       >       >       >       >          Starting Permit User Sessions...
>       >       >       >       >       >          Starting The Xen xenstore...
>       >       >       >       >       >          Starting OpenSSH Key Generation...
>       >       >       >       >       > [FAILED] Failed to start The Xen xenstore.
>       >       >       >       >       > See 'systemctl status xenstored.service' for details.
>       >       >       >       >       > [DEPEND] Dependency failed for qemu for xen dom0 disk backend.
>       >       >       >       >       > [DEPEND] Dependency failed for Xend…p guests on boot and shutdown.
>       >       >       >       >       > [DEPEND] Dependency failed for xen-…des, JSON configuration stub).
>       >       >       >       >       > [DEPEND] Dependency failed for Xenc…guest consoles and hypervisor.
>       >       >       >       >       > [  OK  ] Finished Permit User Sessions.
>       >       >       >       >       > [  OK  ] Started Getty on tty1.
>       >       >       >       >       > [  OK  ] Started Serial Getty on hvc0.
>       >       >       >       >       > [  OK  ] Started Serial Getty on ttyS0.
>       >       >       >       >       > [  OK  ] Reached target Login Prompts.
>       >       >       >       >       >          Starting Xen-watchdog - run xen watchdog daemon...
>       >       >       >       >       > [  OK  ] Started D-Bus System Message Bus.
>       >       >       >       >       > [  OK  ] Started Xen-watchdog - run xen watchdog daemon.
>       >       >       >       >       > [  OK  ] Finished OpenSSH Key Generation.
>       >       >       >       >       > [  OK  ] Started User Login Management.
>       >       >       >       >       > [  OK  ] Reached target Multi-User System.
>       >       >       >       >       >          Starting Record Runlevel Change in UTMP...
>       >       >       >       >       > [  OK  ] Finished Record Runlevel Change in UTMP.
>       >       >       >       >       > fbcon: Taking over console
>       >       >       >       >       >
>       >       >       >       >       > Poky (Yocto Project Reference Distro) 4.0.4 raspberrypi4-64 hvc0
>       >       >       >       >       >
>       >       >       >       >       > raspberrypi4-64 login: root
>       >       >       >       >       > root@raspberrypi4-64:~#
>       >       >       >       >       > root@raspberrypi4-64:~#
>       >       >       >       >       > root@raspberrypi4-64:~# systemctl status xenstored.service
>       >       >       >       >       > x xenstored.service - The Xen xenstore
>       >       >       >       >       >      Loaded: loaded (/lib/systemd/system/xenstored.service; enabled; vendor preset:
>       enabled)
>       >       >       >       >       >      Active: failed (Result: exit-code) since Thu 2022-12-01 06:12:05 UTC; 26s ago
>       >       >       >       >       >     Process: 195 ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
>       (code=exited,
>       >       status=1/FAILURE)
>       >       >       >       >       >
>       >       >       >       >       > Dec 01 06:12:04 raspberrypi4-64 systemd[1]: Starting The Xen xenstore...
>       >       >       >       >       > Dec 01 06:12:05 raspberrypi4-64 systemd[1]: xenstored.service: Control pro...URE
>       >       >       >       >       > Dec 01 06:12:05 raspberrypi4-64 systemd[1]: xenstored.service: Failed with...e'.
>       >       >       >       >       > Dec 01 06:12:05 raspberrypi4-64 systemd[1]: Failed to start The Xen xenstore.
>       >       >       >       >       > Hint: Some lines were ellipsized, use -l to show in full.
>       >       >       >       >       > root@raspberrypi4-64:~# 
>       >       >       >       >       >
>       >       >       >       >       > Any input on these?
>       >       >       >       >       >
>       >       >       >       >       > Thanks & Regards,
>       >       >       >       >       > Vipul Kumar
>       >       >       >       >       >
>       >       >       >       >       > On Wed, Nov 23, 2022 at 5:41 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >       >       >       >       Hi Vipul,
>       >       >       >       >       >
>       >       >       >       >       >       I cannot spot any issue in the configuration; in particular you have:
>       >       >       >       >       >
>       >       >       >       >       >       CONFIG_XEN_FBDEV_FRONTEND=y
>       >       >       >       >       >
>       >       >       >       >       >       which is what you need.
>       >       >       >       >       >
>       >       >       >       >       >       The only thing I can suggest is to add printks to the Linux frontend
>       >       >       >       >       >       driver (the one running in the domU) which is
>       >       >       >       >       >       drivers/video/fbdev/xen-fbfront.c and printfs to the QEMU backend
>       >       >       >       >       >       (running in Dom0) which is hw/display/xenfb.c to figure out what is
>       >       >       >       >       >       going on.
>       >       >       >       >       >
>       >       >       >       >       >
>       >       >       >       >       >       Alternatively, you can setup PV network with the domU, such as:
>       >       >       >       >       >
>       >       >       >       >       >         vif=['']
>       >       >       >       >       >
>       >       >       >       >       >       and then run X11 and an x11vnc server in your domU. You should be able to
>       >       >       >       >       >       connect to it using vncviewer at the network IP of your domU.
>       >       >       >       >       >
>       >       >       >       >       >       Basically you are skipping the problem because instead of using the PV
>       >       >       >       >       >       framebuffer protocol, you just use VNC over the network with the domU.
>       >       >       >       >       >
>       >       >       >       >       >
>       >       >       >       >       >       Cheers,
>       >       >       >       >       >
>       >       >       >       >       >       Stefano
>       >       >       >       >       >
>       >       >       >       >       >
>       >       >       >       >       >       On Tue, 22 Nov 2022, Vipul Suneja wrote:
>       >       >       >       >       >       > Hi Stefano,
>       >       >       >       >       >       > Thanks for the support!
>       >       >       >       >       >       >
>       >       >       >       >       >       > Looks like I have tried all the combinations & possible ways to get the display up, but
>       >       >       >       >       >       > failed. Is there any document or pdf for porting xen on raspberrypi4?
>       >       >       >       >       >       > I could find lots of links describing the same, but couldn't see any official user guide
>       >       >       >       >       >       > or document from the xen community on it. If there is something to refer to, please share
>       >       >       >       >       >       > it with me.
>       >       >       >       >       >       > I am attaching the kernel configuration file also; please take a look in case I have
>       >       >       >       >       >       > missed anything.
>       >       >       >       >       >       > Any other suggestions or input from your end would be really helpful.
>       >       >       >       >       >       >
>       >       >       >       >       >       > Regards,
>       >       >       >       >       >       > Vipul Kumar
>       >       >       >       >       >       >
>       >       >       >       >       >       > On Fri, Nov 11, 2022 at 6:40 AM Stefano Stabellini <sstabellini@kernel.org>
>       wrote:
>       >       >       >       >       >       >       Hi Vipul,
>       >       >       >       >       >       >
>       >       >       >       >       >       >       Sorry for the late reply. From the earlier logs that you sent, it looks
>       >       >       >       >       >       >       like everything should be working correctly. Specifically:
>       >       >       >       >       >       >
>       >       >       >       >       >       >            vfb = ""
>       >       >       >       >       >       >             1 = ""
>       >       >       >       >       >       >              0 = ""
>       >       >       >       >       >       >               frontend = "/local/domain/1/device/vfb/0"
>       >       >       >       >       >       >               frontend-id = "1"
>       >       >       >       >       >       >               online = "1"
>       >       >       >       >       >       >               state = "4"
>       >       >       >       >       >       >               vnc = "1"
>       >       >       >       >       >       >               vnclisten = "127.0.0.1"
>       >       >       >       >       >       >               vncdisplay = "0"
>       >       >       >       >       >       >               vncunused = "1"
>       >       >       >       >       >       >               sdl = "0"
>       >       >       >       >       >       >               opengl = "0"
>       >       >       >       >       >       >               feature-resize = "1"
>       >       >       >       >       >       >               hotplug-status = "connected"
>       >       >       >       >       >       >               request-update = "1"
>       >       >       >       >       >       >
>       >       >       >       >       >       >       state "4" means "connected". So I would expect that you should be able
>       >       >       >       >       >       >       to connect to the vnc server using vncviewer. You might not see anything
>       >       >       >       >       >       >       (black screen) but you should definitely be able to connect.
>       >       >       >       >       >       >
>       >       >       >       >       >       >       I wouldn't try to launch x11 in the guest just yet. fbcon in Linux is
>       >       >       >       >       >       >       enough to render something on the screen. You should be able to see the
>       >       >       >       >       >       >       Linux text-based console rendered graphically, connecting to it via vnc.
>       >       >       >       >       >       >
>       >       >       >       >       >       >       Sorry for the basic question, but have you tried all the following?
>       >       >       >       >       >       >
>       >       >       >       >       >       >       vncviewer 127.0.0.1:0
>       >       >       >       >       >       >       vncviewer 127.0.0.1:1
>       >       >       >       >       >       >       vncviewer 127.0.0.1:2
>       >       >       >       >       >       >       vncviewer 127.0.0.1:5900
>       >       >       >       >       >       >       vncviewer 127.0.0.1:5901
>       >       >       >       >       >       >       vncviewer 127.0.0.1:5902
>       >       >       >       >       >       >
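[Editor's note: the reason the list above mixes small numbers and 59xx numbers is the RFB convention that VNC display N listens on TCP port 5900 + N, so display :0 corresponds to port 5900. A one-line sketch of the mapping:]

```shell
# RFB convention: TCP port = 5900 + VNC display number.
vnc_port() { echo $((5900 + $1)); }
vnc_port 0   # prints 5900
vnc_port 2   # prints 5902
```

This is also useful for checking with `ss -ltn` (or `netstat -ltn`) in Dom0 which display QEMU's VNC server actually bound.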
>       >       >       >       >       >       >       Given that from the xenstore-ls logs everything seems to work correctly
>       >       >       >       >       >       >       I am not sure what else to suggest. You might have to add printf to QEMU
>       >       >       >       >       >       >       ui/vnc.c and hw/display/xenfb.c to see what is going wrong.
>       >       >       >       >       >       >
>       >       >       >       >       >       >       Cheers,
>       >       >       >       >       >       >
>       >       >       >       >       >       >       Stefano
>       >       >       >       >       >       >
>       >       >       >       >       >       >
>       >       >       >       >       >       >       On Mon, 7 Nov 2022, Vipul Suneja wrote:
>       >       >       >       >       >       >       > Hi Stefano,
>       >       >       >       >       >       >       > Thanks!
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > Any input further on "xenstore-ls" logs?
>       >       >       >       >       >       >       >
>       >       >       >       >       >       > I am trying to run the x0vncserver & x11vnc server manually on the guest machine
>       >       >       >       >       >       > (xen_guest_image_minimal image), but it's failing with the below error.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > root@raspberrypi4-64:/usr/bin# x0vncserver
>       >       >       >       >       >       >       > x0vncserver: unable to open display ""
>       >       >       >       >       >       >       > root@raspberrypi4-64:/usr/bin#
>       >       >       >       >       >       >       > root@raspberrypi4-64:/usr/bin# x11vnc
>       >       >       >       >       >       >       > ###############################################################
>       >       >       >       >       >       >       > #@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  **  WARNING  **  WARNING  **  WARNING  **  WARNING  **   @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@        YOU ARE RUNNING X11VNC WITHOUT A PASSWORD!!        @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  This means anyone with network access to this computer   @#
>       >       >       >       >       >       >       > #@  may be able to view and control your desktop.            @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@ >>> If you did not mean to do this Press CTRL-C now!! <<< @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  You can create an x11vnc password file by running:       @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@       x11vnc -storepasswd password /path/to/passfile      @#
>       >       >       >       >       >       >       > #@  or   x11vnc -storepasswd /path/to/passfile               @#
>       >       >       >       >       >       >       > #@  or   x11vnc -storepasswd                                 @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  (the last one will use ~/.vnc/passwd)                    @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  and then starting x11vnc via:                            @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@      x11vnc -rfbauth /path/to/passfile                    @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  an existing ~/.vnc/passwd file from another VNC          @#
>       >       >       >       >       >       >       > #@  application will work fine too.                          @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  You can also use the -passwdfile or -passwd options.     @#
>       >       >       >       >       >       >       > #@  (note -passwd is unsafe if local users are not trusted)  @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  Make sure any -rfbauth and -passwdfile password files    @#
>       >       >       >       >       >       >       > #@  cannot be read by untrusted users.                       @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  Use x11vnc -usepw to automatically use your              @#
>       >       >       >       >       >       >       > #@  ~/.vnc/passwd or ~/.vnc/passwdfile password files.       @#
>       >       >       >       >       >       >       > #@  (and prompt you to create ~/.vnc/passwd if neither       @#
>       >       >       >       >       >       >       > #@  file exists.)  Under -usepw, x11vnc will exit if it      @#
>       >       >       >       >       >       >       > #@  cannot find a password to use.                           @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  Even with a password, the subsequent VNC traffic is      @#
>       >       >       >       >       >       >       > #@  sent in the clear.  Consider tunnelling via ssh(1):      @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@    http://www.karlrunge.com/x11vnc/#tunnelling            @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  Or using the x11vnc SSL options: -ssl and -stunnel       @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  Please Read the documentation for more info about        @#
>       >       >       >       >       >       >       > #@  passwords, security, and encryption.                     @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@    http://www.karlrunge.com/x11vnc/faq.html#faq-passwd    @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@  To disable this warning use the -nopw option, or put     @#
>       >       >       >       >       >       >       > #@  'nopw' on a line in your ~/.x11vncrc file.               @#
>       >       >       >       >       >       >       > #@                                                           @#
>       >       >       >       >       >       >       > #@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#
>       >       >       >       >       >       >       > ###############################################################
>       >       >       >       >       >       >       > 09/03/2018 12:58:41 x11vnc version: 0.9.16 lastmod: 2019-01-05  pid: 424
>       >       >       >       >       >       >       > 09/03/2018 12:58:41 XOpenDisplay("") failed.
>       >       >       >       >       >       >       > 09/03/2018 12:58:41 Trying again with XAUTHLOCALHOSTNAME=localhost ...
>       >       >       >       >       >       >       > 09/03/2018 12:58:41
>       >       >       >       >       >       >       > 09/03/2018 12:58:41 *** XOpenDisplay failed. No -display or DISPLAY.
>       >       >       >       >       >       >       > 09/03/2018 12:58:41 *** Trying ":0" in 4 seconds.  Press Ctrl-C to abort.
>       >       >       >       >       >       >       > 09/03/2018 12:58:41 *** 1 2 3 4
>       >       >       >       >       >       >       > 09/03/2018 12:58:45 XOpenDisplay(":0") failed.
>       >       >       >       >       >       >       > 09/03/2018 12:58:45 Trying again with XAUTHLOCALHOSTNAME=localhost ...
>       >       >       >       >       >       >       > 09/03/2018 12:58:45 XOpenDisplay(":0") failed.
>       >       >       >       >       >       >       > 09/03/2018 12:58:45 Trying again with unset XAUTHLOCALHOSTNAME ...
>       >       >       >       >       >       >       > 09/03/2018 12:58:45
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > 09/03/2018 12:58:45 ***************************************
>       >       >       >       >       >       >       > 09/03/2018 12:58:45 *** XOpenDisplay failed (:0)
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > *** x11vnc was unable to open the X DISPLAY: ":0", it cannot continue.
>       >       >       >       >       >       >       > *** There may be "Xlib:" error messages above with details about the failure.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > Some tips and guidelines:
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > ** An X server (the one you wish to view) must be running before x11vnc is
>       >       >       >       >       >       >       >    started: x11vnc does not start the X server.  (however, see the -create
>       >       >       >       >       >       >       >    option if that is what you really want).
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > ** You must use -display <disp>, -OR- set and export your $DISPLAY
>       >       >       >       >       >       >       >    environment variable to refer to the display of the desired X server.
>       >       >       >       >       >       >       >  - Usually the display is simply ":0" (in fact x11vnc uses this if you forget
>       >       >       >       >       >       >       >    to specify it), but in some multi-user situations it could be ":1", ":2",
>       >       >       >       >       >       >       >    or even ":137".  Ask your administrator or a guru if you are having
>       >       >       >       >       >       >       >    difficulty determining what your X DISPLAY is.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > ** Next, you need to have sufficient permissions (Xauthority)
>       >       >       >       >       >       >       >    to connect to the X DISPLAY.   Here are some Tips:
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >  - Often, you just need to run x11vnc as the user logged into the X session.
>       >       >       >       >       >       >       >    So make sure to be that user when you type x11vnc.
>       >       >       >       >       >       >       >  - Being root is usually not enough because the incorrect MIT-MAGIC-COOKIE
>       >       >       >       >       >       >       >    file may be accessed.  The cookie file contains the secret key that
>       >       >       >       >       >       >       >    allows x11vnc to connect to the desired X DISPLAY.
>       >       >       >       >       >       >       >  - You can explicitly indicate which MIT-MAGIC-COOKIE file should be used
>       >       >       >       >       >       >       >    by the -auth option, e.g.:
>       >       >       >       >       >       >       >        x11vnc -auth /home/someuser/.Xauthority -display :0
>       >       >       >       >       >       >       >        x11vnc -auth /tmp/.gdmzndVlR -display :0
>       >       >       >       >       >       >       >    you must have read permission for the auth file.
>       >       >       >       >       >       >       >    See also '-auth guess' and '-findauth' discussed below.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > ** If NO ONE is logged into an X session yet, but there is a greeter login
>       >       >       >       >       >       >       >    program like "gdm", "kdm", "xdm", or "dtlogin" running, you will need
>       >       >       >       >       >       >       >    to find and use the raw display manager MIT-MAGIC-COOKIE file.
>       >       >       >       >       >       >       >    Some examples for various display managers:
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >      gdm:     -auth /var/gdm/:0.Xauth
>       >       >       >       >       >       >       >               -auth /var/lib/gdm/:0.Xauth
>       >       >       >       >       >       >       >      kdm:     -auth /var/lib/kdm/A:0-crWk72
>       >       >       >       >       >       >       >               -auth /var/run/xauth/A:0-crWk72
>       >       >       >       >       >       >       >      xdm:     -auth /var/lib/xdm/authdir/authfiles/A:0-XQvaJk
>       >       >       >       >       >       >       >      dtlogin: -auth /var/dt/A:0-UgaaXa
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >    Sometimes the command "ps wwwwaux | grep auth" can reveal the file location.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >    Starting with x11vnc 0.9.9 you can have it try to guess by using:
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >               -auth guess
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >    (see also the x11vnc -findauth option.)
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >    Only root will have read permission for the file, and so x11vnc must be run
>       >       >       >       >       >       >       >    as root (or copy it).  The random characters in the filenames will of course
>       >       >       >       >       >       >       >    change and the directory the cookie file resides in is system dependent.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > See also: http://www.karlrunge.com/x11vnc/faq.html
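[Editor's note: condensed, the recovery sequence the quoted x11vnc output recommends looks like the sketch below. The flags are documented x11vnc options, but the display ":0", the "user@host" target, and port 5900 are illustrative assumptions, not taken from this thread.]

```shell
# Sketch only: assumes x11vnc and ssh are installed, and that an X server
# is already running on display :0 (x11vnc does not start one).

# 1. Create a VNC password file (written to ~/.vnc/passwd when run bare):
x11vnc -storepasswd

# 2. Attach to the display, letting x11vnc guess the MIT-MAGIC-COOKIE file
#    ('-auth guess' needs read access to the cookie, often meaning root):
x11vnc -display :0 -auth guess -rfbauth ~/.vnc/passwd

# 3. VNC traffic is sent in the clear, so tunnel it over ssh from the
#    viewing machine ("user@host" is a placeholder for the VNC server):
ssh -L 5900:localhost:5900 user@host
# ...then point a VNC viewer at localhost:5900 on the viewing machine.
```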
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > Regards,
>       >       >       >       >       >       >       > Vipul Kumar
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > On Thu, Nov 3, 2022 at 10:27 PM Vipul Suneja <vsuneja63@gmail.com> wrote:
>       >       >       >       >       >       >       >       Hi Stefano,
>       >       >       >       >       >       >       > Thanks!
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > I used xen-guest-image-minimal (a simple console-based image) as a guest,
>       >       >       >       >       >       >       > with fbcon and fbdev enabled in the kernel configuration, but I still get
>       >       >       >       >       >       >       > the same error: it can't open the display.
>       >       >       >       >       >       >       > Below is the output of "xenstore-ls":
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > root@raspberrypi4-64:~/guest1# xenstore-ls
>       >       >       >       >       >       >       > tool = ""
>       >       >       >       >       >       >       >  xenstored = ""
>       >       >       >       >       >       >       > local = ""
>       >       >       >       >       >       >       >  domain = ""
>       >       >       >       >       >       >       >   0 = ""
>       >       >       >       >       >       >       >    control = ""
>       >       >       >       >       >       >       >     feature-poweroff = "1"
>       >       >       >       >       >       >       >     feature-reboot = "1"
>       >       >       >       >       >       >       >    domid = "0"
>       >       >       >       >       >       >       >    name = "Domain-0"
>       >       >       >       >       >       >       >    device-model = ""
>       >       >       >       >       >       >       >     0 = ""
>       >       >       >       >       >       >       >      backends = ""
>       >       >       >       >       >       >       >       console = ""
>       >       >       >       >       >       >       >       vkbd = ""
>       >       >       >       >       >       >       >       vfb = ""
>       >       >       >       >       >       >       >       qnic = ""
>       >       >       >       >       >       >       >      state = "running"
>       >       >       >       >       >       >       >     1 = ""
>       >       >       >       >       >       >       >      backends = ""
>       >       >       >       >       >       >       >       console = ""
>       >       >       >       >       >       >       >       vkbd = ""
>       >       >       >       >       >       >       >       vfb = ""
>       >       >       >       >       >       >       >       qnic = ""
>       >       >       >       >       >       >       >      state = "running"
>       >       >       >       >       >       >       >    backend = ""
>       >       >       >       >       >       >       >     vbd = ""
>       >       >       >       >       >       >       >      1 = ""
>       >       >       >       >       >       >       >       51712 = ""
>       >       >       >       >       >       >       >        frontend = "/local/domain/1/device/vbd/51712"
>       >       >       >       >       >       >       >        params = "/home/root/guest2/xen-guest-image-minimal-raspberrypi4-64.ext3"
>       >       >       >       >       >       >       >        script = "/etc/xen/scripts/block"
>       >       >       >       >       >       >       >        frontend-id = "1"
>       >       >       >       >       >       >       >        online = "1"
>       >       >       >       >       >       >       >        removable = "0"
>       >       >       >       >       >       >       >        bootable = "1"
>       >       >       >       >       >       >       >        state = "4"
>       >       >       >       >       >       >       >        dev = "xvda"
>       >       >       >       >       >       >       >        type = "phy"
>       >       >       >       >       >       >       >        mode = "w"
>       >       >       >       >       >       >       >        device-type = "disk"
>       >       >       >       >       >       >       >        discard-enable = "1"
>       >       >       >       >       >       >       >        feature-max-indirect-segments = "256"
>       >       >       >       >       >       >       >        multi-queue-max-queues = "4"
>       >       >       >       >       >       >       >        max-ring-page-order = "4"
>       >       >       >       >       >       >       >        node = "/dev/loop0"
>       >       >       >       >       >       >       >        physical-device = "7:0"
>       >       >       >       >       >       >       >        physical-device-path = "/dev/loop0"
>       >       >       >       >       >       >       >        hotplug-status = "connected"
>       >       >       >       >       >       >       >        feature-flush-cache = "1"
>       >       >       >       >       >       >       >        discard-granularity = "4096"
>       >       >       >       >       >       >       >        discard-alignment = "0"
>       >       >       >       >       >       >       >        discard-secure = "0"
>       >       >       >       >       >       >       >        feature-discard = "1"
>       >       >       >       >       >       >       >        feature-barrier = "1"
>       >       >       >       >       >       >       >        feature-persistent = "1"
>       >       >       >       >       >       >       >        sectors = "1794048"
>       >       >       >       >       >       >       >        info = "0"
>       >       >       >       >       >       >       >        sector-size = "512"
>       >       >       >       >       >       >       >        physical-sector-size = "512"
>       >       >       >       >       >       >       >     vfb = ""
>       >       >       >       >       >       >       >      1 = ""
>       >       >       >       >       >       >       >       0 = ""
>       >       >       >       >       >       >       >        frontend = "/local/domain/1/device/vfb/0"
>       >       >       >       >       >       >       >        frontend-id = "1"
>       >       >       >       >       >       >       >        online = "1"
>       >       >       >       >       >       >       >        state = "4"
>       >       >       >       >       >       >       >        vnc = "1"
>       >       >       >       >       >       >       >        vnclisten = "127.0.0.1"
>       >       >       >       >       >       >       >        vncdisplay = "0"
>       >       >       >       >       >       >       >        vncunused = "1"
>       >       >       >       >       >       >       >        sdl = "0"
>       >       >       >       >       >       >       >        opengl = "0"
>       >       >       >       >       >       >       >        feature-resize = "1"
>       >       >       >       >       >       >       >        hotplug-status = "connected"
>       >       >       >       >       >       >       >        request-update = "1"
>       >       >       >       >       >       >       >     vkbd = ""
>       >       >       >       >       >       >       >      1 = ""
>       >       >       >       >       >       >       >       0 = ""
>       >       >       >       >       >       >       >        frontend = "/local/domain/1/device/vkbd/0"
>       >       >       >       >       >       >       >        frontend-id = "1"
>       >       >       >       >       >       >       >        online = "1"
>       >       >       >       >       >       >       >        state = "4"
>       >       >       >       >       >       >       >        feature-abs-pointer = "1"
>       >       >       >       >       >       >       >        feature-raw-pointer = "1"
>       >       >       >       >       >       >       >        hotplug-status = "connected"
>       >       >       >       >       >       >       >     console = ""
>       >       >       >       >       >       >       >      1 = ""
>       >       >       >       >       >       >       >       0 = ""
>       >       >       >       >       >       >       >        frontend = "/local/domain/1/console"
>       >       >       >       >       >       >       >        frontend-id = "1"
>       >       >       >       >       >       >       >        online = "1"
>       >       >       >       >       >       >       >        state = "1"
>       >       >       >       >       >       >       >        protocol = "vt100"
>       >       >       >       >       >       >       >     vif = ""
>       >       >       >       >       >       >       >      1 = ""
>       >       >       >       >       >       >       >       0 = ""
>       >       >       >       >       >       >       >        frontend = "/local/domain/1/device/vif/0"
>       >       >       >       >       >       >       >        frontend-id = "1"
>       >       >       >       >       >       >       >        online = "1"
>       >       >       >       >       >       >       >        state = "4"
>       >       >       >       >       >       >       >        script = "/etc/xen/scripts/vif-bridge"
>       >       >       >       >       >       >       >        mac = "e4:5f:01:cd:7b:dd"
>       >       >       >       >       >       >       >        bridge = "xenbr0"
>       >       >       >       >       >       >       >        handle = "0"
>       >       >       >       >       >       >       >        type = "vif"
>       >       >       >       >       >       >       >        hotplug-status = "connected"
>       >       >       >       >       >       >       >        feature-sg = "1"
>       >       >       >       >       >       >       >        feature-gso-tcpv4 = "1"
>       >       >       >       >       >       >       >        feature-gso-tcpv6 = "1"
>       >       >       >       >       >       >       >        feature-ipv6-csum-offload = "1"
>       >       >       >       >       >       >       >        feature-rx-copy = "1"
>       >       >       >       >       >       >       >        feature-xdp-headroom = "1"
>       >       >       >       >       >       >       >        feature-rx-flip = "0"
>       >       >       >       >       >       >       >        feature-multicast-control = "1"
>       >       >       >       >       >       >       >        feature-dynamic-multicast-control = "1"
>       >       >       >       >       >       >       >        feature-split-event-channels = "1"
>       >       >       >       >       >       >       >        multi-queue-max-queues = "4"
>       >       >       >       >       >       >       >        feature-ctrl-ring = "1"
>       >       >       >       >       >       >       >   1 = ""
>       >       >       >       >       >       >       >    vm = "/vm/d81ec5a9-5bf9-4f2b-89e8-0f60d6da948f"
>       >       >       >       >       >       >       >    name = "guest2"
>       >       >       >       >       >       >       >    cpu = ""
>       >       >       >       >       >       >       >     0 = ""
>       >       >       >       >       >       >       >      availability = "online"
>       >       >       >       >       >       >       >     1 = ""
>       >       >       >       >       >       >       >      availability = "online"
>       >       >       >       >       >       >       >    memory = ""
>       >       >       >       >       >       >       >     static-max = "2097152"
>       >       >       >       >       >       >       >     target = "2097152"
>       >       >       >       >       >       >       >     videoram = "0"
>       >       >       >       >       >       >       >    device = ""
>       >       >       >       >       >       >       >     suspend = ""
>       >       >       >       >       >       >       >      event-channel = ""
>       >       >       >       >       >       >       >     vbd = ""
>       >       >       >       >       >       >       >      51712 = ""
>       >       >       >       >       >       >       >       backend = "/local/domain/0/backend/vbd/1/51712"
>       >       >       >       >       >       >       >       backend-id = "0"
>       >       >       >       >       >       >       >       state = "4"
>       >       >       >       >       >       >       >       virtual-device = "51712"
>       >       >       >       >       >       >       >       device-type = "disk"
>       >       >       >       >       >       >       >       multi-queue-num-queues = "2"
>       >       >       >       >       >       >       >       queue-0 = ""
>       >       >       >       >       >       >       >        ring-ref = "8"
>       >       >       >       >       >       >       >        event-channel = "4"
>       >       >       >       >       >       >       >       queue-1 = ""
>       >       >       >       >       >       >       >        ring-ref = "9"
>       >       >       >       >       >       >       >        event-channel = "5"
>       >       >       >       >       >       >       >       protocol = "arm-abi"
>       >       >       >       >       >       >       >       feature-persistent = "1"
>       >       >       >       >       >       >       >     vfb = ""
>       >       >       >       >       >       >       >      0 = ""
>       >       >       >       >       >       >       >       backend = "/local/domain/0/backend/vfb/1/0"
>       >       >       >       >       >       >       >       backend-id = "0"
>       >       >       >       >       >       >       >       state = "4"
>       >       >       >       >       >       >       >       page-ref = "275022"
>       >       >       >       >       >       >       >       event-channel = "3"
>       >       >       >       >       >       >       >       protocol = "arm-abi"
>       >       >       >       >       >       >       >       feature-update = "1"
>       >       >       >       >       >       >       >     vkbd = ""
>       >       >       >       >       >       >       >      0 = ""
>       >       >       >       >       >       >       >       backend = "/local/domain/0/backend/vkbd/1/0"
>       >       >       >       >       >       >       >       backend-id = "0"
>       >       >       >       >       >       >       >       state = "4"
>       >       >       >       >       >       >       >       request-abs-pointer = "1"
>       >       >       >       >       >       >       >       page-ref = "275322"
>       >       >       >       >       >       >       >       page-gref = "1284"
>       >       >       >       >       >       >       >       event-channel = "10"
>       >       >       >       >       >       >       >     vif = ""
>       >       >       >       >       >       >       >      0 = ""
>       >       >       >       >       >       >       >       backend = "/local/domain/0/backend/vif/1/0"
>       >       >       >       >       >       >       >       backend-id = "0"
>       >       >       >       >       >       >       >       state = "4"
>       >       >       >       >       >       >       >       handle = "0"
>       >       >       >       >       >       >       >       mac = "e4:5f:01:cd:7b:dd"
>       >       >       >       >       >       >       >       mtu = "1500"
>       >       >       >       >       >       >       >       xdp-headroom = "0"
>       >       >       >       >       >       >       >       multi-queue-num-queues = "2"
>       >       >       >       >       >       >       >       queue-0 = ""
>       >       >       >       >       >       >       >        tx-ring-ref = "1280"
>       >       >       >       >       >       >       >        rx-ring-ref = "1281"
>       >       >       >       >       >       >       >        event-channel-tx = "6"
>       >       >       >       >       >       >       >        event-channel-rx = "7"
>       >       >       >       >       >       >       >       queue-1 = ""
>       >       >       >       >       >       >       >        tx-ring-ref = "1282"
>       >       >       >       >       >       >       >        rx-ring-ref = "1283"
>       >       >       >       >       >       >       >        event-channel-tx = "8"
>       >       >       >       >       >       >       >        event-channel-rx = "9"
>       >       >       >       >       >       >       >       request-rx-copy = "1"
>       >       >       >       >       >       >       >       feature-rx-notify = "1"
>       >       >       >       >       >       >       >       feature-sg = "1"
>       >       >       >       >       >       >       >       feature-gso-tcpv4 = "1"
>       >       >       >       >       >       >       >       feature-gso-tcpv6 = "1"
>       >       >       >       >       >       >       >       feature-ipv6-csum-offload = "1"
>       >       >       >       >       >       >       >    control = ""
>       >       >       >       >       >       >       >     shutdown = ""
>       >       >       >       >       >       >       >     feature-poweroff = "1"
>       >       >       >       >       >       >       >     feature-reboot = "1"
>       >       >       >       >       >       >       >     feature-suspend = ""
>       >       >       >       >       >       >       >     sysrq = ""
>       >       >       >       >       >       >       >     platform-feature-multiprocessor-suspend = "1"
>       >       >       >       >       >       >       >     platform-feature-xs_reset_watches = "1"
>       >       >       >       >       >       >       >    data = ""
>       >       >       >       >       >       >       >    drivers = ""
>       >       >       >       >       >       >       >    feature = ""
>       >       >       >       >       >       >       >    attr = ""
>       >       >       >       >       >       >       >    error = ""
>       >       >       >       >       >       >       >    domid = "1"
>       >       >       >       >       >       >       >    store = ""
>       >       >       >       >       >       >       >     port = "1"
>       >       >       >       >       >       >       >     ring-ref = "233473"
>       >       >       >       >       >       >       >    console = ""
>       >       >       >       >       >       >       >     backend = "/local/domain/0/backend/console/1/0"
>       >       >       >       >       >       >       >     backend-id = "0"
>       >       >       >       >       >       >       >     limit = "1048576"
>       >       >       >       >       >       >       >     type = "xenconsoled"
>       >       >       >       >       >       >       >     output = "pty"
>       >       >       >       >       >       >       >     tty = "/dev/pts/1"
>       >       >       >       >       >       >       >     port = "2"
>       >       >       >       >       >       >       >     ring-ref = "233472"
>       >       >       >       >       >       >       >     vnc-listen = "127.0.0.1"
>       >       >       >       >       >       >       >     vnc-port = "5900"
>       >       >       >       >       >       >       >    image = ""
>       >       >       >       >       >       >       >     device-model-pid = "788"
>       >       >       >       >       >       >       > vm = ""
>       >       >       >       >       >       >       >  d81ec5a9-5bf9-4f2b-89e8-0f60d6da948f = ""
>       >       >       >       >       >       >       >   name = "guest2"
>       >       >       >       >       >       >       >   uuid = "d81ec5a9-5bf9-4f2b-89e8-0f60d6da948f"
>       >       >       >       >       >       >       >   start_time = "1520600274.27"
>       >       >       >       >       >       >       > libxl = ""
>       >       >       >       >       >       >       >  1 = ""
>       >       >       >       >       >       >       >   device = ""
>       >       >       >       >       >       >       >    vbd = ""
>       >       >       >       >       >       >       >     51712 = ""
>       >       >       >       >       >       >       >      frontend = "/local/domain/1/device/vbd/51712"
>       >       >       >       >       >       >       >      backend = "/local/domain/0/backend/vbd/1/51712"
>       >       >       >       >       >       >       >      params = "/home/root/guest2/xen-guest-image-minimal-raspberrypi4-64.ext3"
>       >       >       >       >       >       >       >      script = "/etc/xen/scripts/block"
>       >       >       >       >       >       >       >      frontend-id = "1"
>       >       >       >       >       >       >       >      online = "1"
>       >       >       >       >       >       >       >      removable = "0"
>       >       >       >       >       >       >       >      bootable = "1"
>       >       >       >       >       >       >       >      state = "1"
>       >       >       >       >       >       >       >      dev = "xvda"
>       >       >       >       >       >       >       >      type = "phy"
>       >       >       >       >       >       >       >      mode = "w"
>       >       >       >       >       >       >       >      device-type = "disk"
>       >       >       >       >       >       >       >      discard-enable = "1"
>       >       >       >       >       >       >       >    vfb = ""
>       >       >       >       >       >       >       >     0 = ""
>       >       >       >       >       >       >       >      frontend = "/local/domain/1/device/vfb/0"
>       >       >       >       >       >       >       >      backend = "/local/domain/0/backend/vfb/1/0"
>       >       >       >       >       >       >       >      frontend-id = "1"
>       >       >       >       >       >       >       >      online = "1"
>       >       >       >       >       >       >       >      state = "1"
>       >       >       >       >       >       >       >      vnc = "1"
>       >       >       >       >       >       >       >      vnclisten = "127.0.0.1"
>       >       >       >       >       >       >       >      vncdisplay = "0"
>       >       >       >       >       >       >       >      vncunused = "1"
>       >       >       >       >       >       >       >      sdl = "0"
>       >       >       >       >       >       >       >      opengl = "0"
>       >       >       >       >       >       >       >    vkbd = ""
>       >       >       >       >       >       >       >     0 = ""
>       >       >       >       >       >       >       >      frontend = "/local/domain/1/device/vkbd/0"
>       >       >       >       >       >       >       >      backend = "/local/domain/0/backend/vkbd/1/0"
>       >       >       >       >       >       >       >      frontend-id = "1"
>       >       >       >       >       >       >       >      online = "1"
>       >       >       >       >       >       >       >      state = "1"
>       >       >       >       >       >       >       >    console = ""
>       >       >       >       >       >       >       >     0 = ""
>       >       >       >       >       >       >       >      frontend = "/local/domain/1/console"
>       >       >       >       >       >       >       >      backend = "/local/domain/0/backend/console/1/0"
>       >       >       >       >       >       >       >      frontend-id = "1"
>       >       >       >       >       >       >       >      online = "1"
>       >       >       >       >       >       >       >      state = "1"
>       >       >       >       >       >       >       >      protocol = "vt100"
>       >       >       >       >       >       >       >    vif = ""
>       >       >       >       >       >       >       >     0 = ""
>       >       >       >       >       >       >       >      frontend = "/local/domain/1/device/vif/0"
>       >       >       >       >       >       >       >      backend = "/local/domain/0/backend/vif/1/0"
>       >       >       >       >       >       >       >      frontend-id = "1"
>       >       >       >       >       >       >       >      online = "1"
>       >       >       >       >       >       >       >      state = "1"
>       >       >       >       >       >       >       >      script = "/etc/xen/scripts/vif-bridge"
>       >       >       >       >       >       >       >      mac = "e4:5f:01:cd:7b:dd"
>       >       >       >       >       >       >       >      bridge = "xenbr0"
>       >       >       >       >       >       >       >      handle = "0"
>       >       >       >       >       >       >       >      type = "vif"
>       >       >       >       >       >       >       >      hotplug-status = ""
>       >       >       >       >       >       >       >   type = "pvh"
>       >       >       >       >       >       >       >   dm-version = "qemu_xen"
>       >       >       >       >       >       >       > root@raspberrypi4-64:~/guest1#
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > Any input as per above? Looking forward to hearing from you.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > Regards,
>       >       >       >       >       >       >       > Vipul Kumar
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       > On Wed, Oct 26, 2022 at 5:21 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >       >       >       >       >       >       Hi Vipul,
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       If you look at the QEMU logs, it says:
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       VNC server running on 127.0.0.1:5900
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       That is the VNC server you need to connect to. So in theory:
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >         vncviewer 127.0.0.1:5900
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       should work correctly.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       If you have:
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >         vfb = ["type=vnc"]
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       in your xl config file and you have "fbdev" in your Linux guest, it should work.
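For readers following along, a minimal vfb stanza of the kind discussed here could look as follows in the xl guest config. This is an illustrative sketch only, not Vipul's actual file; the listen address and display number are assumptions, and `vnclisten`/`vncdisplay`/`vncunused` are the xl VFB_SPEC keys:

```
# Illustrative vfb stanza for an xl guest config (not the actual file
# from this thread); vnc=1 enables the QEMU-backed VNC server.
vfb = [ 'vnc=1, vnclisten=127.0.0.1, vncdisplay=0' ]
```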
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       If you connect to the VNC server but you get a black screen, it might be a guest configuration issue. I would try with a simpler guest, text only (no X11, no Wayland) and enable the fbdev console (fbcon). See Documentation/fb/fbcon.rst in Linux. You should be able to see a graphical console over VNC.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       If that works, then you know that the fbdev kernel driver (xen-fbfront) works correctly.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       If it doesn't work, the output of "xenstore-ls" would be interesting.
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       Cheers,
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       Stefano
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       On Wed, 19 Oct 2022, Vipul Suneja wrote:
>       >       >       >       >       >       >       >       > Hi Stefano,
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > Thanks for the response!
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > I am following the same link you shared from the beginning. Tried the command "vncviewer localhost:0" in DOM0 but same issue "Can't open display", below are the logs:
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > root@raspberrypi4-64:~# vncviewer localhost:0
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > TigerVNC Viewer 64-bit v1.11.0
>       >       >       >       >       >       >       >       > Built on: 2020-09-08 12:16
>       >       >       >       >       >       >       >       > Copyright (C) 1999-2020 TigerVNC Team and many others (see README.rst)
>       >       >       >       >       >       >       >       > See https://www.tigervnc.org for information on TigerVNC.
>       >       >       >       >       >       >       >       > Can't open display:
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > Below are the netstat logs; I couldn't see anything running at port 5900 or 5901:
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > root@raspberrypi4-64:~# netstat -tuwx
>       >       >       >       >       >       >       >       > Active Internet connections (w/o servers)
>       >       >       >       >       >       >       >       > Proto Recv-Q Send-Q Local Address           Foreign Address         State
>       >       >       >       >       >       >       >       > tcp        0    164 192.168.1.39:ssh        192.168.1.38:37472      ESTABLISHED
>       >       >       >       >       >       >       >       > Active UNIX domain sockets (w/o servers)
>       >       >       >       >       >       >       >       > Proto RefCnt Flags       Type       State         I-Node Path
>       >       >       >       >       >       >       >       > unix  8      [ ]         DGRAM      CONNECTED      10565  /dev/log
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10891  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      13791
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10843  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10573  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  2      [ ]         DGRAM      CONNECTED      14510
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      13249
>       >       >       >       >       >       >       >       > unix  2      [ ]         DGRAM      CONNECTED      13887
>       >       >       >       >       >       >       >       > unix  2      [ ]         DGRAM      CONNECTED      10599
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      14005
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      13258
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      13248
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      14003
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10572  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10786  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  3      [ ]         DGRAM      CONNECTED      13186
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10864  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10812  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  2      [ ]         DGRAM      CONNECTED      14083
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10813  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      14068
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      13256
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10571  /var/run/xenstored/socket
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      10842
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      13985
>       >       >       >       >       >       >       >       > unix  3      [ ]         DGRAM      CONNECTED      13185
>       >       >       >       >       >       >       >       > unix  2      [ ]         STREAM     CONNECTED      13884
>       >       >       >       >       >       >       >       > unix  2      [ ]         DGRAM      CONNECTED      14528
>       >       >       >       >       >       >       >       > unix  2      [ ]         DGRAM      CONNECTED      13785
>       >       >       >       >       >       >       >       > unix  3      [ ]         STREAM     CONNECTED      14034
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > Attaching xen log files of /var/log/xen.
>       >       >       >       >       >       >       >       > I didn't understand the role of QEMU here because, as mentioned earlier, I am porting on a raspberrypi 4B.
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > Regards,
>       >       >       >       >       >       >       >       > Vipul Kumar
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       > On Wed, Oct 19, 2022 at 12:43 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >       >       >       >       >       >       >       It usually works the way it is described in the guide:
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >         https://www.virtuatopia.com/index.php?title=Configuring_a_VNC_based_Graphical_Console_for_a_Xen_Paravirtualized_domainU_Guest
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       You don't need to install any VNC-related server software because it is already provided by Xen (to be precise it is provided by QEMU working together with Xen.)
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       You only need the vnc client in dom0 so that you can connect, but you could also run the vnc client from another host. So basically the following should work when executed in Dom0 after creating DomU:
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >         vncviewer localhost:0
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       Can you attach the Xen and QEMU logs (/var/log/xen/*)? And also use netstat -taunp to check if there is anything running at port 5900 or 5901?
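As an aside on the numbering used in the commands above: a VNC "display" is just an offset from TCP port 5900, which is why `vncviewer 127.0.0.1:5900` and `vncviewer localhost:0` name the same server. A small sketch (the `ss` invocation is an assumption about what is installed in the Dom0 image; older images may only have `netstat`):

```shell
# VNC display N corresponds to TCP port 5900 + N:
# "localhost:0" means port 5900, ":1" means 5901.
display=0
port=$((5900 + display))
echo "display :$display -> port $port"   # prints "display :0 -> port 5900"

# In Dom0, whether QEMU's VNC server is actually listening can be
# checked with something like (assumes iproute2's ss is present):
#   ss -tlnp | grep 590
```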
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       Cheers,
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       Stefano
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       On Tue, 18 Oct 2022, Vipul Suneja wrote:
>       >       >       >       >       >       >       >       >       > Hi Stefano,
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       > Thanks for the response!
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       > I could install tigerVNC, x11vnc & libvncserver in the Dom0 xen-image-minimal, but only managed to install libvncserver (couldn't install tigervnc & x11vnc because of missing x11 support, it's wayland) in the DOMU custom graphical image. I tried running vncviewer with IP address & port in dom0 to access the domu graphical image display as per the below commands.
>       >       >       >       >       >       >       >       >       >  
>       >       >       >       >       >       >       >       >       >  vncviewer 192.168.1.42:5901
>       >       >       >       >       >       >       >       >       >  
>       >       >       >       >       >       >       >       >       >  But it shows "Can't open display", below are the logs:
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       > root@raspberrypi4-64:~/guest1# vncviewer 192.168.1.42:5901
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       > TigerVNC Viewer 64-bit v1.11.0
>       >       >       >       >       >       >       >       >       > Built on: 2020-09-08 12:16
>       >       >       >       >       >       >       >       >       > Copyright (C) 1999-2020 TigerVNC Team and many others (see README.rst)
>       >       >       >       >       >       >       >       >       > See https://www.tigervnc.org for information on TigerVNC.
>       >       >       >       >       >       >       >       >       > Can't open display:
>       >       >       >       >       >       >       >       >       > root@raspberrypi4-64:~/guest1#
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       > I am not exactly sure what the issue is, but I thought libvncserver alone in DOMU would be enough to get access; however, it did not work.
>       >       >       >       >       >       >       >       >       > If TigerVNC is the issue here, then is there any other VNC source which could be installed for both x11 & wayland supported images?
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       > Regards,
>       >       >       >       >       >       >       >       >       > Vipul Kumar
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       > On Tue, Oct 18, 2022 at 2:40 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >       >       >       >       >       >       >       >       VNC is typically easier to set up, because SDL needs extra libraries at build time and runtime. If QEMU is built without SDL support it won't start when you ask for SDL.
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       VNC should work with both x11 and wayland in your domU. It doesn't work at the x11 level, it exposes a special fbdev device in your domU that should work with:
>       >       >       >       >       >       >       >       >       >       - a graphical console in Linux domU
>       >       >       >       >       >       >       >       >       >       - x11
>       >       >       >       >       >       >       >       >       >       - wayland (but I haven't tested this so I am not 100% sure about it)
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       When you say "it doesn't work", what do you mean? Do you get a black window?
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       You need CONFIG_XEN_FBDEV_FRONTEND in Linux domU (drivers/video/fbdev/xen-fbfront.c). I would try to get a graphical text console up and running in your domU before attempting x11/wayland.
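For completeness, the domU kernel options behind this advice would typically look something like the fragment below. This is a sketch: the exact set should be double-checked against the kernel version in use, and the vkbd-related option is only needed if keyboard/mouse input over the virtual framebuffer is wanted:

```
# domU kernel .config fragment (illustrative)
CONFIG_XEN_FBDEV_FRONTEND=y          # xen-fbfront, the Xen virtual framebuffer
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y   # xen-kbdfront, pairs with the vkbd device
CONFIG_FRAMEBUFFER_CONSOLE=y         # fbcon, puts a text console on the framebuffer
```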
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       Cheers,
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       Stefano
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       On Mon, 17 Oct 2022, Vipul Suneja wrote:
>       >       >       >       >       >       >       >       >       >       > Hi,
>       >       >       >       >       >       >       >       >       >       > Thanks!
>       >       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       > I have ported a xen minimal image as DOM0 & a custom wayland GUI based image as DOMU on a raspberry pi4B. I am trying to bring the GUI display up for the guest machine. I tried using sdl, and included the below line in the guest.conf file:
>       >       >       >       >       >       >       >       >       >       > vfb= [ 'sdl=1' ]
>       >       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       > But it is throwing below error:
>       >       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       > root@raspberrypi4-64:~/guest1# xl create -c guest1.cfg
>       >       >       >       >       >       >       >       >       >       > Parsing config from guest1.cfg
>       >       >       >       >       >       >       >       >       >       > libxl: error: libxl_qmp.c:1400:qmp_ev_fd_callback: Domain 3:error on QMP socket: Connection reset by peer
>       >       >       >       >       >       >       >       >       >       > libxl: error: libxl_qmp.c:1439:qmp_ev_fd_callback: Domain 3:Error happened with the QMP connection to QEMU
>       >       >       >       >       >       >       >       >       >       > libxl: error: libxl_dm.c:3351:device_model_postconfig_done: Domain 3:Post DM startup configs failed, rc=-26
>       >       >       >       >       >       >       >       >       >       > libxl: error: libxl_create.c:1867:domcreate_devmodel_started: Domain 3:device model did not start: -26
>       >       >       >       >       >       >       >       >       >       > libxl: error: libxl_aoutils.c:646:libxl__kill_xs_path: Device Model already exited
>       >       >       >       >       >       >       >       >       >       > libxl: error: libxl_domain.c:1183:libxl__destroy_domid: Domain 3:Non-existant domain
>       >       >       >       >       >       >       >       >       >       > libxl: error: libxl_domain.c:1137:domain_destroy_callback: Domain 3:Unable to destroy guest
>       >       >       >       >       >       >       >       >       >       > libxl: error: libxl_domain.c:1064:domain_destroy_cb: Domain 3:Destruction of domain failed
>       >       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       > Another way is VNC, i could install tigervnc in
>       DOM0 but same
>       >       i
>       >       >       couldn't in
>       >       >       >       guest
>       >       >       >       >       machine
>       >       >       >       >       >       because it
>       >       >       >       >       >       >       >       doesn't support
>       >       >       >       >       >       >       >       >       >       x11(supports wayland
>       >       >       >       >       >       >       >       >       >       > only). I am completely blocked here, Need your
>       support to
>       >       enable the
>       >       >       display
>       >       >       >       up.
>       >       >       >       >       >       >       >       >       >       > Any alternative of VNC which could work in both
>       x11 & wayland
>       >       >       supported
>       >       >       >       images?
>       >       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       > Any input on VNC, SDL or any other way to
>       proceed on this?
>       >       Looking
>       >       >       forward to
>       >       >       >       hearing
>       >       >       >       >       from
>       >       >       >       >       >       you.
>       >       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >       > Regards,
>       >       >       >       >       >       >       >       >       >       > Vipul Kumar
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >       >
>       >       >       >       >       >       >
>       >       >       >       >       >       >
>       >       >       >       >       >       >
>       >       >       >       >       >
>       >       >       >       >       >
>       >       >       >       >       >
>       >       >       >       >
>       >       >       >       >
>       >       >       >       >
>       >       >       >
>       >       >       >
>       >       >       >
>       >       >
>       >       >
>       >       >
>       >
>       >
>       >
> 
> 
> 


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 00:39:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 00:39:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482247.747647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ1ue-0004WK-3d; Sat, 21 Jan 2023 00:39:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482247.747647; Sat, 21 Jan 2023 00:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ1ue-0004WD-0z; Sat, 21 Jan 2023 00:39:16 +0000
Received: by outflank-mailman (input) for mailman id 482247;
 Sat, 21 Jan 2023 00:39:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eSPV=5S=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pJ1uc-0004W7-DJ
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 00:39:14 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0586d203-9924-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 01:39:11 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5F792620DB;
 Sat, 21 Jan 2023 00:39:10 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8C0A5C433D2;
 Sat, 21 Jan 2023 00:39:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0586d203-9924-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674261549;
	bh=r6TrL3MmJfVWQ3D06p87nOy6/W+RlWWefkrb8VdDTTY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=fo4WdESFdHAMey6Se60xMa/YaUz8uTEhgZiJaRkChgleZJoRCAMoNwKdqkfpFEQmN
	 ISNofNKusDO94ZcG1umZMIHZMChDNRBUmLTEyyPEJKKvN8z2ogOTL+QNMLTZup4Eq7
	 tWpaLG2IkP/G6r62A6GZniTf7N8q/PH4t8MtHu9HkYvvA0Ow3yjXRAMkmyi8Yyw1l+
	 S8KlCxesmKbjK0u36MKKMo1Wc9QtHZ3JZOyRUZr3MVn4qUaXxY3aXxgsssh6l0GA7U
	 xDVv8UecrcYul2IDw9+U/8Gnot7DHU6hTFFgM4QSQbavllFBYNMSyY/GmnUqycesXy
	 TZ/o0aNi5XX0g==
Date: Fri, 20 Jan 2023 16:39:06 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] xen/cppcheck: add parameter to skip given MISRA
 rules
In-Reply-To: <20230106104108.14740-3-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2301201638580.731018@ubuntu-linux-20-04-desktop>
References: <20230106104108.14740-1-luca.fancellu@arm.com> <20230106104108.14740-3-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 6 Jan 2023, Luca Fancellu wrote:
> Add a parameter to skip the given MISRA rules during the cppcheck
> analysis. The rules are specified as a comma-separated list using
> the MISRA number notation (e.g. 1.1,1.3,...).
> 
> Modify the convert_misra_doc.py script to take an extra parameter
> giving a comma-separated list of MISRA rules to be skipped.
> While there, fix some typos in the help and print functions.
> 
> Modify settings.py and cppcheck_analysis.py to add a new
> parameter (--cppcheck-skip-rules) used to specify a list of
> MISRA rules to be skipped during the cppcheck analysis.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
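
The skip mechanism described in the commit message is small enough to sketch
outside the build system. The following is a minimal, hypothetical Python
model of what the patched convert_misra_doc.py does with the -s argument:
rules passed via --cppcheck-skip-rules are seeded into the skip list before
the missing-rule scan, so they never receive a dummy description. The
function and variable names here are illustrative, not the script's actual
internals.

```python
# Hypothetical model of the new skip-rules handling in convert_misra_doc.py.
def build_skip_list(force_skip, rule_list, misra_c2012_rules):
    """Return the rules to skip: forced ones plus undocumented ones."""
    # Rules passed via -s/--cppcheck-skip-rules are skipped unconditionally.
    skip_list = [r for r in force_skip.split(',') if r]

    # Any rule missing from the documentation is also skipped,
    # unless it is already in the forced list.
    for section, count in misra_c2012_rules.items():
        for sub in range(1, count + 1):
            rule = "{}.{}".format(section, sub)
            if rule not in rule_list and rule not in skip_list:
                skip_list.append(rule)
    return skip_list

# Example: sections 1 and 2 each have two rules; only 1.1 and 2.2 are
# documented in rules.rst; the user asked to skip 2.1.
skip = build_skip_list("2.1", ["1.1", "2.2"], {1: 2, 2: 2})
```

With this ordering, 2.1 stays in the skip list without ever being written
out as "No description for rule 2.1", matching the intent of the patch.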


> ---
>  xen/scripts/xen_analysis/cppcheck_analysis.py |  8 +++--
>  xen/scripts/xen_analysis/settings.py          | 35 +++++++++++--------
>  xen/tools/convert_misra_doc.py                | 28 ++++++++++-----
>  3 files changed, 46 insertions(+), 25 deletions(-)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
> index 0e952a169641..cc1f403d315e 100644
> --- a/xen/scripts/xen_analysis/cppcheck_analysis.py
> +++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
> @@ -153,11 +153,15 @@ def generate_cppcheck_deps():
>      if settings.cppcheck_misra:
>          cppcheck_flags = cppcheck_flags + " --addon=cppcheck-misra.json"
>  
> +        skip_rules_arg = ""
> +        if settings.cppcheck_skip_rules != "":
> +            skip_rules_arg = "-s {}".format(settings.cppcheck_skip_rules)
> +
>          utils.invoke_command(
>              "{}/convert_misra_doc.py -i {}/docs/misra/rules.rst"
> -            " -o {}/cppcheck-misra.txt -j {}/cppcheck-misra.json"
> +            " -o {}/cppcheck-misra.txt -j {}/cppcheck-misra.json {}"
>                  .format(settings.tools_dir, settings.repo_dir,
> -                        settings.outdir, settings.outdir),
> +                        settings.outdir, settings.outdir, skip_rules_arg),
>              False, CppcheckDepsPhaseError,
>              "An error occured when running:\n{}"
>          )
> diff --git a/xen/scripts/xen_analysis/settings.py b/xen/scripts/xen_analysis/settings.py
> index a8502e554e95..8c0d357fe0dc 100644
> --- a/xen/scripts/xen_analysis/settings.py
> +++ b/xen/scripts/xen_analysis/settings.py
> @@ -24,6 +24,7 @@ cppcheck_binpath = "cppcheck"
>  cppcheck_html = False
>  cppcheck_htmlreport_binpath = "cppcheck-htmlreport"
>  cppcheck_misra = False
> +cppcheck_skip_rules = ""
>  make_forward_args = ""
>  outdir = xen_dir
>  
> @@ -53,20 +54,22 @@ Cppcheck report creation phase runs only when --run-cppcheck is passed to the
>  script.
>  
>  Options:
> -  --build-only          Run only the commands to build Xen with the optional
> -                        make arguments passed to the script
> -  --clean-only          Run only the commands to clean the analysis artifacts
> -  --cppcheck-bin=       Path to the cppcheck binary (Default: {})
> -  --cppcheck-html       Produce an additional HTML output report for Cppcheck
> -  --cppcheck-html-bin=  Path to the cppcheck-html binary (Default: {})
> -  --cppcheck-misra      Activate the Cppcheck MISRA analysis
> -  --distclean           Clean analysis artifacts and reports
> -  -h, --help            Print this help
> -  --no-build            Skip the build Xen phase
> -  --no-clean            Don\'t clean the analysis artifacts on exit
> -  --run-coverity        Run the analysis for the Coverity tool
> -  --run-cppcheck        Run the Cppcheck analysis tool on Xen
> -  --run-eclair          Run the analysis for the Eclair tool
> +  --build-only            Run only the commands to build Xen with the optional
> +                          make arguments passed to the script
> +  --clean-only            Run only the commands to clean the analysis artifacts
> +  --cppcheck-bin=         Path to the cppcheck binary (Default: {})
> +  --cppcheck-html         Produce an additional HTML output report for Cppcheck
> +  --cppcheck-html-bin=    Path to the cppcheck-html binary (Default: {})
> +  --cppcheck-misra        Activate the Cppcheck MISRA analysis
> +  --cppcheck-skip-rules=  List of MISRA rules to be skipped, comma separated.
> +                          (e.g. --cppcheck-skip-rules=1.1,20.7,8.4)
> +  --distclean             Clean analysis artifacts and reports
> +  -h, --help              Print this help
> +  --no-build              Skip the build Xen phase
> +  --no-clean              Don\'t clean the analysis artifacts on exit
> +  --run-coverity          Run the analysis for the Coverity tool
> +  --run-cppcheck          Run the Cppcheck analysis tool on Xen
> +  --run-eclair            Run the analysis for the Eclair tool
>  """
>      print(msg.format(sys.argv[0], cppcheck_binpath,
>                       cppcheck_htmlreport_binpath))
> @@ -78,6 +81,7 @@ def parse_commandline(argv):
>      global cppcheck_html
>      global cppcheck_htmlreport_binpath
>      global cppcheck_misra
> +    global cppcheck_skip_rules
>      global make_forward_args
>      global outdir
>      global step_get_make_vars
> @@ -115,6 +119,9 @@ def parse_commandline(argv):
>              cppcheck_htmlreport_binpath = args_with_content_regex.group(2)
>          elif option == "--cppcheck-misra":
>              cppcheck_misra = True
> +        elif args_with_content_regex and \
> +             args_with_content_regex.group(1) == "--cppcheck-skip-rules":
> +            cppcheck_skip_rules = args_with_content_regex.group(2)
>          elif option == "--distclean":
>              target_distclean = True
>          elif (option == "--help") or (option == "-h"):
> diff --git a/xen/tools/convert_misra_doc.py b/xen/tools/convert_misra_doc.py
> index 13074d8a2e91..8984ec625fa7 100755
> --- a/xen/tools/convert_misra_doc.py
> +++ b/xen/tools/convert_misra_doc.py
> @@ -4,12 +4,14 @@
>  This script is converting the misra documentation RST file into a text file
>  that can be used as text-rules for cppcheck.
>  Usage:
> -    convert_misr_doc.py -i INPUT [-o OUTPUT] [-j JSON]
> +    convert_misra_doc.py -i INPUT [-o OUTPUT] [-j JSON] [-s RULES,[...,RULES]]
>  
>      INPUT  - RST file containing the list of misra rules.
>      OUTPUT - file to store the text output to be used by cppcheck.
>               If not specified, the result will be printed to stdout.
>      JSON   - cppcheck json file to be created (optional).
> +    RULES  - list of rules to skip during the analysis, comma separated
> +             (e.g. 1.1,1.2,1.3,...)
>  """
>  
>  import sys, getopt, re
> @@ -47,21 +49,25 @@ def main(argv):
>      outfile = ''
>      outstr = sys.stdout
>      jsonfile = ''
> +    force_skip = ''
>  
>      try:
> -        opts, args = getopt.getopt(argv,"hi:o:j:",["input=","output=","json="])
> +        opts, args = getopt.getopt(argv,"hi:o:j:s:",
> +                                   ["input=","output=","json=","skip="])
>      except getopt.GetoptError:
> -        print('convert-misra.py -i <input> [-o <output>] [-j <json>')
> +        print('convert-misra.py -i <input> [-o <output>] [-j <json>] [-s <rules>]')
>          sys.exit(2)
>      for opt, arg in opts:
>          if opt == '-h':
> -            print('convert-misra.py -i <input> [-o <output>] [-j <json>')
> +            print('convert-misra.py -i <input> [-o <output>] [-j <json>] [-s <rules>]')
>              print('  If output is not specified, print to stdout')
>              sys.exit(1)
>          elif opt in ("-i", "--input"):
>              infile = arg
>          elif opt in ("-o", "--output"):
>              outfile = arg
> +        elif opt in ("-s", "--skip"):
> +            force_skip = arg
>          elif opt in ("-j", "--json"):
>              jsonfile = arg
>  
> @@ -169,14 +175,18 @@ def main(argv):
>  
>      skip_list = []
>  
> +    # Add rules to be skipped anyway
> +    for r in force_skip.split(','):
> +        skip_list.append(r)
> +
>      # Search for missing rules and add a dummy text with the rule number
>      for i in misra_c2012_rules:
>          for j in list(range(1,misra_c2012_rules[i]+1)):
> -            if str(i) + '.' + str(j) not in rule_list:
> -                outstr.write('Rule ' + str(i) + '.' + str(j) + '\n')
> -                outstr.write('No description for rule ' + str(i) + '.' + str(j)
> -                             + '\n')
> -                skip_list.append(str(i) + '.' + str(j))
> +            rule_str = str(i) + '.' + str(j)
> +            if (rule_str not in rule_list) and (rule_str not in skip_list):
> +                outstr.write('Rule ' + rule_str + '\n')
> +                outstr.write('No description for rule ' + rule_str + '\n')
> +                skip_list.append(rule_str)
>  
>      # Make cppcheck happy by starting the appendix
>      outstr.write('Appendix B\n')
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 00:53:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 00:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482252.747657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ280-0006of-9U; Sat, 21 Jan 2023 00:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482252.747657; Sat, 21 Jan 2023 00:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ280-0006oY-6V; Sat, 21 Jan 2023 00:53:04 +0000
Received: by outflank-mailman (input) for mailman id 482252;
 Sat, 21 Jan 2023 00:53:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L7i5=5S=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pJ27y-0006oQ-9E
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 00:53:02 +0000
Received: from sonic301-20.consmr.mail.gq1.yahoo.com
 (sonic301-20.consmr.mail.gq1.yahoo.com [98.137.64.146])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f1e1eece-9925-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 01:52:58 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic301.consmr.mail.gq1.yahoo.com with HTTP; Sat, 21 Jan 2023 00:52:56 +0000
Received: by hermes--production-bf1-6bb65c4965-54llb (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 22943f10d0b4b39ec2c7a268d39e9ecc; 
 Sat, 21 Jan 2023 00:52:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1e1eece-9925-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netscape.net; s=a2048; t=1674262376; bh=49WGzloNQn8FAamKCSNbg9g20OI68XYnRq+lyN6caEs=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=ZMV3S61U+8nFAeEts2TujSxf7e5dH7ku7oZHj4kNlu4nGXUiiSs+C93svm9jRUqRSWxp+ecQtd7FnM8VQQ8oegqZrfkAi89F0pyKHm8XNcvJmLm6YH8nHXO7NVgLv0X+y7UvTAGjSaAT+AMkbSey+ZqjJf6tM9RC/HiGG3A/8yy+GvUfkWtT4EDXyn9SS9b8tBL4CsJaK1kB00i5IoAhApCImLwm6Qq4fE47IPE8c2EHO5hAnEosVuL7t/bWrfK9opEwTzSl/bO94MUcl48VGNtwJRbDkFjBURyNeabu1nnB81UjD7irZwdyoyTG28Utwgl2af7OcaMRkzzhdLjnQw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674262376; bh=p+1H5TTVr8rFhgUCBtjM7DGX4rzN4EZKEGrMXpK37Fd=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=OSCNodl6tH1GDlheHHH5x7LUNlWFeSmgjbNcFxVz9+U8RhTVGdt2NqoKjrRKNufmzqBnf9QXBnO8WXnTZywa5NwJdW5UaD5YrJVN1POu0tMyzQMwIkNdSBKrQq3mq030O5kWQglsbMjMmD4KlEMz9g9Qg4S6iXWP22PtLBAAh0zqkl8fsTSA+5EdDJw+OBQ3Ya11rj7hoh8ronjPwxH257ppzO1NL7JzRJklqDiXzsHPh0o1VgSRPUZqTzAQNy8/Vq4sWyyy952z965w6j4u9CmfS3ZBg8ZkyLtDn3IdhbQhu9Ew+y/fu1HILgnmqH6GjImv/y0D4TdsvBLaIS9J6g==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <669190e6-fbbf-7dd6-b879-8082fa2e5339@netscape.net>
Date: Fri, 20 Jan 2023 19:52:50 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v9] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: qemu-devel@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, qemu-stable@nongnu.org
References: <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz.ref@aol.com>
 <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz@aol.com>
 <alpine.DEB.2.22.394.2301201334250.731018@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@netscape.net>
In-Reply-To: <alpine.DEB.2.22.394.2301201334250.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21096 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 17330

On 1/20/23 4:34 PM, Stefano Stabellini wrote:
> On Sat, 14 Jan 2023, Chuck Zmudzinski wrote:
> > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > as noted in docs/igd-assign.txt in the Qemu source code.
> > 
> > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > a different slot. This problem often prevents the guest from booting.
> > 
> > The only available workaround is not good: Configure Xen HVM guests to use
> > the old and no longer maintained Qemu traditional device model available
> > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > 
> > To implement this feature in the Qemu upstream device model for Xen HVM
> > guests, introduce the following new functions, types, and macros:
> > 
> > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > * typedef XenPTQdevRealize function pointer
> > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > 
> > Michael Tsirkin:
> > * Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
> >   XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK
> > 
> > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > the xl toolstack with the gfx_passthru option enabled, which sets the
> > igd-passthru=on option to Qemu for the Xen HVM machine type.
> > 
> > The new xen_igd_reserve_slot function also needs to be implemented in
> > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > in which case it does nothing.
> > 
> > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > 
> > Move the call to xen_host_pci_device_get, and the associated error
> > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > initialize the device class and vendor values which enables the checks for
> > the Intel IGD to succeed. The verification that the host device is an
> > Intel IGD to be passed through is done by checking the domain, bus, slot,
> > and function values as well as by checking that gfx_passthru is enabled,
> > the device class is VGA, and the device vendor is Intel.
> > 
> > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>
> Hi Chuck,
>
> The approach looks OK in principle to me. I only have one question: for
> other PCI devices (not Intel IGD), where is xen_host_pci_device_get
> called now?

I think you are right that there might be a problem for
devices added after the Intel IGD. I believe I only tested
the case where the Intel IGD is the last device added.
I expect that if I add the Intel IGD first, there will be
problems when the subsequent TYPE_XEN_PT_DEVICE devices are
added, because xen_pt_realize will not like it if
xen_host_pci_device_get is not called. I will check
this over the weekend and, if a change is needed, I will
post it in v10.

I also think there is the same problem when the bit in
slot_reserved_mask is never set, which happens when
the guest has PCI devices passed through but Qemu is
not configured with the igd-passthru=on option for
the xenfv machine type. I will also test this over the
weekend.
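
For concreteness, slot_reserved_mask is a per-slot bitmask, and my reading
of the patch is that XEN_PCI_IGD_SLOT_MASK is simply the bit for slot 2
(computed from XEN_PCI_IGD_DEV). A quick Python model of that arithmetic —
the macro names are from the patch, the computation is an assumption on my
part:

```python
# Model of the slot_reserved_mask arithmetic: one bit per PCI slot.
XEN_PCI_IGD_DEV = 2                       # the IGD must sit at 00:02.0
XEN_PCI_IGD_SLOT_MASK = 1 << XEN_PCI_IGD_DEV

slot_reserved_mask = 0
# xen_igd_reserve_slot: mark slot 2 reserved when the PCI bus is created.
slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK
reserved = bool(slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)

# xen_igd_clear_slot: release the reservation so the IGD itself can land
# in slot 2 during realize.
slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK
```

The case I am worried about above is exactly the one where the |= step
never happens, so the & test is false for every device.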

> It looks like that xen_igd_reserve_slot would return without setting
> slot_reserved_mask, hence xen_igd_clear_slot would also return without
> calling xen_host_pci_device_get. And xen_pt_realize doesn't call
> xen_host_pci_device_get any longer.
>
> Am I missing something?

No, you are not missing anything. You are pointing to some
cases that I need to test that probably would not work.
I think the fix is to have this at the beginning of
xen_igd_clear_slot instead of what I have now:

    xen_host_pci_device_get(&s->real_device,
                            s->hostaddr.domain, s->hostaddr.bus,
                            s->hostaddr.slot, s->hostaddr.function,
                            errp);
    if (*errp) {
        error_append_hint(errp, "Failed to \"open\" the real pci device");
        return;
    }

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
        return;

That way xen_host_pci_device_get would still get called
even when xen_igd_clear_slot returns early because the bit
in the mask was not set.
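
To make the proposed reordering concrete, here is a small Python model of
the control flow (not the actual C code): the host-device "open" runs
unconditionally for every passthrough device, and only the mask-clearing
step is skipped when slot 2 was never reserved.

```python
# Model of the proposed xen_igd_clear_slot ordering: "open" the host
# device first, then bail out early if slot 2 was never reserved.
XEN_PCI_IGD_SLOT_MASK = 1 << 2

def igd_clear_slot(slot_reserved_mask, open_device):
    """open_device stands in for xen_host_pci_device_get."""
    open_device()                      # always runs, for every XEN_PT device
    if not (slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK):
        return slot_reserved_mask      # nothing reserved: nothing to clear
    return slot_reserved_mask & ~XEN_PCI_IGD_SLOT_MASK

opened = []
# Case 1: igd-passthru=on, slot 2 reserved -> device opened, bit cleared.
mask1 = igd_clear_slot(XEN_PCI_IGD_SLOT_MASK, lambda: opened.append(1))
# Case 2: no reservation -> the device is still opened, mask untouched.
mask2 = igd_clear_slot(0, lambda: opened.append(2))
```

In both cases the device gets "opened", which is the property the current
v9 ordering loses when the early return comes first.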

Thanks for your careful review of the patch. I think you
did find a real mistake that needs to be fixed in v10, which
I hope to post with the above-mentioned change early next
week.

Chuck
>
>
> > ---
> > Notes that might be helpful to reviewers of patched code in hw/xen:
> > 
> > The new functions and types are based on recommendations from Qemu docs:
> > https://qemu.readthedocs.io/en/latest/devel/qom.html
> > 
> > Notes that might be helpful to reviewers of patched code in hw/i386:
> > 
> > The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> > not affect builds that do not have CONFIG_XEN defined.
> > 
> > xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> > existing function that is only true when Qemu is built with
> > xen-pci-passthrough enabled and the administrator has configured the Xen
> > HVM guest with Qemu's igd-passthru=on option.
> > 
> > v2: Remove From: <email address> tag at top of commit message
> > 
> > v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> > 
> >     if (is_igd_vga_passthrough(&s->real_device) &&
> >         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> > 
> >     is changed to
> > 
> >     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
> >         && (s->hostaddr.function == 0)) {
> > 
> >     I hoped that I could use the test in v2, since it matches the
> >     other tests for the Intel IGD in Qemu and Xen, but those tests
> >     do not work because the necessary data structures are not set with
> >     their values yet. So instead use the test that the administrator
> >     has enabled gfx_passthru and the device address on the host is
> >     02.0. This test does detect the Intel IGD correctly.
> > 
> > v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
> >     email address to match the address used by the same author in commits
> >     be9c61da and c0e86b76
> >     
> >     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> > 
> > v5: The patch of xen_pt.c was re-worked to allow a more consistent test
> >     for the Intel IGD that uses the same criteria as in other places.
> >     This involved moving the call to xen_host_pci_device_get from
> >     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
> >     Intel IGD in xen_igd_clear_slot:
> >     
> >     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
> >         && (s->hostaddr.function == 0)) {
> > 
> >     is changed to
> > 
> >     if (is_igd_vga_passthrough(&s->real_device) &&
> >         s->real_device.domain == 0 && s->real_device.bus == 0 &&
> >         s->real_device.dev == 2 && s->real_device.func == 0 &&
> >         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> > 
> >     Added an explanation for the move of xen_host_pci_device_get from
> >     xen_pt_realize to xen_igd_clear_slot to the commit message.
> > 
> >     Rebase.
> > 
> > v6: Fix logging by removing these lines from the move from xen_pt_realize
> >     to xen_igd_clear_slot that was done in v5:
> > 
> >     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
> >                " to devfn 0x%x\n",
> >                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
> >                s->dev.devfn);
> > 
> >     This log needs to be in xen_pt_realize because s->dev.devfn is not
> >     set yet in xen_igd_clear_slot.
> > 
> > v7: The v7 that was posted to the mailing list was incorrect. v8 is what
> >     v7 was intended to be.
> > 
> > v8: Inhibit out of context log message and needless processing by
> >     adding 2 lines at the top of the new xen_igd_clear_slot function:
> > 
> >     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
> >         return;
> > 
> >     Rebase. This removed an unnecessary header file from xen_pt.h 
> > 
> > v9: Move check for xen_igd_gfx_pt_enabled() from pc_piix.c to xen_pt.c
> > 
> >     Move #include "hw/pci/pci_bus.h" from xen_pt.h to xen_pt.c
> > 
> >     Introduce macros for the IGD devfn constants and use them to compute
> >     the value of XEN_PCI_IGD_SLOT_MASK
> > 
> >     Also use the new macros at an appropriate place in xen_pt_realize
> > 
> >     Add Cc: to stable - This has been broken for a long time, ever since
> >                         support for igd-passthru was added to QEMU 7
> >                         years ago.
> > 
> >     Mention new macros in the commit message (Michael Tsirkin)
> > 
> >     N.B.: I could not follow the suggestion to move the statement
> >     pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK; to after
> >     pci_qdev_realize for symmetry. Doing that results in an error when
> >     creating the guest:
> >     
> >     libxl: error: libxl_qmp.c:1837:qmp_ev_parse_error_messages: Domain 4:PCI: slot 2 function 0 not available for xen-pci-passthrough, reserved
> >     libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 4:libxl__device_pci_add failed for PCI device 0:0:2.0 (rc -28)
> >     libxl: error: libxl_create.c:1921:domcreate_attach_devices: Domain 4:unable to add pci devices
> > 
> >  hw/i386/pc_piix.c    |  1 +
> >  hw/xen/xen_pt.c      | 61 ++++++++++++++++++++++++++++++++++++--------
> >  hw/xen/xen_pt.h      | 20 +++++++++++++++
> >  hw/xen/xen_pt_stub.c |  4 +++
> >  4 files changed, 75 insertions(+), 11 deletions(-)
> > 
> > diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> > index b48047f50c..8fc96eb63b 100644
> > --- a/hw/i386/pc_piix.c
> > +++ b/hw/i386/pc_piix.c
> > @@ -405,6 +405,7 @@ static void pc_xen_hvm_init(MachineState *machine)
> >      }
> >  
> >      pc_xen_hvm_init_pci(machine);
> > +    xen_igd_reserve_slot(pcms->bus);
> >      pci_create_simple(pcms->bus, -1, "xen-platform");
> >  }
> >  #endif
> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> > index 0ec7e52183..51f100f64a 100644
> > --- a/hw/xen/xen_pt.c
> > +++ b/hw/xen/xen_pt.c
> > @@ -57,6 +57,7 @@
> >  #include <sys/ioctl.h>
> >  
> >  #include "hw/pci/pci.h"
> > +#include "hw/pci/pci_bus.h"
> >  #include "hw/qdev-properties.h"
> >  #include "hw/qdev-properties-system.h"
> >  #include "hw/xen/xen.h"
> > @@ -780,15 +781,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
> >                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
> >                 s->dev.devfn);
> >  
> > -    xen_host_pci_device_get(&s->real_device,
> > -                            s->hostaddr.domain, s->hostaddr.bus,
> > -                            s->hostaddr.slot, s->hostaddr.function,
> > -                            errp);
> > -    if (*errp) {
> > -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> > -        return;
> > -    }
> > -
> >      s->is_virtfn = s->real_device.is_virtfn;
> >      if (s->is_virtfn) {
> >          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> > @@ -803,8 +795,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
> >      s->io_listener = xen_pt_io_listener;
> >  
> >      /* Setup VGA bios for passthrough GFX */
> > -    if ((s->real_device.domain == 0) && (s->real_device.bus == 0) &&
> > -        (s->real_device.dev == 2) && (s->real_device.func == 0)) {
> > +    if ((s->real_device.domain == XEN_PCI_IGD_DOMAIN) &&
> > +        (s->real_device.bus == XEN_PCI_IGD_BUS) &&
> > +        (s->real_device.dev == XEN_PCI_IGD_DEV) &&
> > +        (s->real_device.func == XEN_PCI_IGD_FN)) {
> >          if (!is_igd_vga_passthrough(&s->real_device)) {
> >              error_setg(errp, "Need to enable igd-passthru if you're trying"
> >                      " to passthrough IGD GFX");
> > @@ -950,11 +944,55 @@ static void xen_pci_passthrough_instance_init(Object *obj)
> >      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
> >  }
> >  
> > +void xen_igd_reserve_slot(PCIBus *pci_bus)
> > +{
> > +    if (!xen_igd_gfx_pt_enabled())
> > +        return;
> > +
> > +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> > +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> > +}
> > +
> > +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> > +{
> > +    ERRP_GUARD();
> > +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> > +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> > +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> > +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> > +
> > +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
> > +        return;
> > +
> > +    xen_host_pci_device_get(&s->real_device,
> > +                            s->hostaddr.domain, s->hostaddr.bus,
> > +                            s->hostaddr.slot, s->hostaddr.function,
> > +                            errp);
> > +    if (*errp) {
> > +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> > +        return;
> > +    }
> > +
> > +    if (is_igd_vga_passthrough(&s->real_device) &&
> > +        s->real_device.domain == XEN_PCI_IGD_DOMAIN &&
> > +        s->real_device.bus == XEN_PCI_IGD_BUS &&
> > +        s->real_device.dev == XEN_PCI_IGD_DEV &&
> > +        s->real_device.func == XEN_PCI_IGD_FN &&
> > +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> > +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> > +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> > +    }
> > +    xpdc->pci_qdev_realize(qdev, errp);
> > +}
> > +
> >  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
> >  {
> >      DeviceClass *dc = DEVICE_CLASS(klass);
> >      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> >  
> > +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> > +    xpdc->pci_qdev_realize = dc->realize;
> > +    dc->realize = xen_igd_clear_slot;
> >      k->realize = xen_pt_realize;
> >      k->exit = xen_pt_unregister_device;
> >      k->config_read = xen_pt_pci_read_config;
> > @@ -977,6 +1015,7 @@ static const TypeInfo xen_pci_passthrough_info = {
> >      .instance_size = sizeof(XenPCIPassthroughState),
> >      .instance_finalize = xen_pci_passthrough_finalize,
> >      .class_init = xen_pci_passthrough_class_init,
> > +    .class_size = sizeof(XenPTDeviceClass),
> >      .instance_init = xen_pci_passthrough_instance_init,
> >      .interfaces = (InterfaceInfo[]) {
> >          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> > diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> > index cf10fc7bbf..e184699740 100644
> > --- a/hw/xen/xen_pt.h
> > +++ b/hw/xen/xen_pt.h
> > @@ -40,7 +40,20 @@ typedef struct XenPTReg XenPTReg;
> >  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
> >  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
> >  
> > +#define XEN_PT_DEVICE_CLASS(klass) \
> > +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> > +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> > +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> > +
> > +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> > +
> > +typedef struct XenPTDeviceClass {
> > +    PCIDeviceClass parent_class;
> > +    XenPTQdevRealize pci_qdev_realize;
> > +} XenPTDeviceClass;
> > +
> >  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> > +void xen_igd_reserve_slot(PCIBus *pci_bus);
> >  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
> >  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
> >                                             XenHostPCIDevice *dev);
> > @@ -75,6 +88,13 @@ typedef int (*xen_pt_conf_byte_read)
> >  
> >  #define XEN_PCI_INTEL_OPREGION 0xfc
> >  
> > +#define XEN_PCI_IGD_DOMAIN 0
> > +#define XEN_PCI_IGD_BUS 0
> > +#define XEN_PCI_IGD_DEV 2
> > +#define XEN_PCI_IGD_FN 0
> > +#define XEN_PCI_IGD_SLOT_MASK \
> > +    (1UL << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
> > +
> >  typedef enum {
> >      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
> >      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> > diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> > index 2d8cac8d54..5c108446a8 100644
> > --- a/hw/xen/xen_pt_stub.c
> > +++ b/hw/xen/xen_pt_stub.c
> > @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
> >          error_setg(errp, "Xen PCI passthrough support not built in");
> >      }
> >  }
> > +
> > +void xen_igd_reserve_slot(PCIBus *pci_bus)
> > +{
> > +}
> > -- 
> > 2.39.0
> > 



From xen-devel-bounces@lists.xenproject.org Sat Jan 21 02:49:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 02:49:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482260.747667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ3wd-0007rO-Ph; Sat, 21 Jan 2023 02:49:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482260.747667; Sat, 21 Jan 2023 02:49:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ3wd-0007rH-L9; Sat, 21 Jan 2023 02:49:27 +0000
Received: by outflank-mailman (input) for mailman id 482260;
 Sat, 21 Jan 2023 02:49:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ3wc-0007r7-8K; Sat, 21 Jan 2023 02:49:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ3wc-0000xw-5q; Sat, 21 Jan 2023 02:49:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ3wb-0006ib-P1; Sat, 21 Jan 2023 02:49:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ3wb-0003HC-OD; Sat, 21 Jan 2023 02:49:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VmnJKj9yILnodFi4uCThvXl+5m4UDiBjswfulxnN13Y=; b=jAp7kas6i5aGS9ztx4aGNtPcHr
	txC+ovopUWxPooF1KVVOEIr6JUuW240b5Spi6f7/ZHvy3skGTRG6gfUtND/79eu1Zu0bCHR1CCfYx
	NhBN0kGCYiqIJrxDVsUXT3RLXXswwOw3W1IM2rGrTBU6NHPc1OB2hrqHsX9oMyXvR47Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176002-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176002: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d368967cb1039b5c4cccb62b5a4b9468c50cd143
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 02:49:25 +0000

flight 176002 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176002/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 175992

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                d368967cb1039b5c4cccb62b5a4b9468c50cd143
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  105 days
Failing since        173470  2022-10-08 06:21:34 Z  104 days  214 attempts
Testing same since   175992  2023-01-20 06:11:32 Z    0 days    2 attempts

------------------------------------------------------------
3375 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 516960 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 02:53:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 02:53:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482267.747677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ40P-0000op-Ar; Sat, 21 Jan 2023 02:53:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482267.747677; Sat, 21 Jan 2023 02:53:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJ40P-0000oi-7E; Sat, 21 Jan 2023 02:53:21 +0000
Received: by outflank-mailman (input) for mailman id 482267;
 Sat, 21 Jan 2023 02:53:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ40N-0000nj-Ma; Sat, 21 Jan 2023 02:53:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ40N-00013r-Kx; Sat, 21 Jan 2023 02:53:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ40N-0006mj-84; Sat, 21 Jan 2023 02:53:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJ40N-0004yK-7W; Sat, 21 Jan 2023 02:53:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HSa56s0j9BsYisQRErbjShRypwEIe1uqwj8JH7GjYNk=; b=kkSSeJHJ7B/yhj+jaMLcBp31Ef
	WydbMwQOZBXMNzQkmW4FARgiGg9SkP2Qf5jSz8Sc88BHRhhK21LJsnNkp1JOpAi2Cgj35DgFodZjf
	SMMKmcCnKnk0Nc3n+IHTKgyvV7H18EWF3OJLKPL/1BKT8eRc0bZ5F9MHVRmV9M5cg6Xc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-175998-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 175998: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:host-ping-check-native:fail:heisenbug
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=239b8b0699a222fd21da1c5fdeba0a2456085a47
X-Osstest-Versions-That:
    qemuu=7ec8aeb6048018680c06fb9205c01ca6bda08846
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 02:53:19 +0000

flight 175998 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/175998/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 175991 pass in 175998
 test-armhf-armhf-xl-rtds      6 host-ping-check-native     fail pass in 175991
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail pass in 175991

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 175991 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 175991 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175977
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175977
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175977
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175977
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175977
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175977
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175977
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175977
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                239b8b0699a222fd21da1c5fdeba0a2456085a47
baseline version:
 qemuu                7ec8aeb6048018680c06fb9205c01ca6bda08846

Last test of basis   175977  2023-01-19 09:20:40 Z    1 days
Testing same since   175991  2023-01-20 00:10:07 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Cédric Le Goater <clg@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Guoyi Tu <tugy@chinatelecom.cn>
  Hoa Nguyen <hoanguyen@ucdavis.edu>
  Laurent Vivier <laurent@vivier.eu>
  Li-Wen Hsu <lwhsu@lwhsu.org>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yuval Shaia <yuval.shaia.ml@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   7ec8aeb604..239b8b0699  239b8b0699a222fd21da1c5fdeba0a2456085a47 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 04:15:32 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176006-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176006: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d60c20260c7e82fe5344d06c20d718e0cc03b8b
X-Osstest-Versions-That:
    xen=56f3782633c252dcec9c96eb345f95fb9557cea7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 04:15:07 +0000

flight 176006 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176006/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b
baseline version:
 xen                  56f3782633c252dcec9c96eb345f95fb9557cea7

Last test of basis   176005  2023-01-20 21:01:59 Z    0 days
Testing same since   176006  2023-01-21 01:01:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   56f3782633..1d60c20260  1d60c20260c7e82fe5344d06c20d718e0cc03b8b -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 07:01:58 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176003-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176003: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 07:01:29 +0000

flight 176003 xen-unstable real [real]
flight 176010 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176003/
http://logs.test-lab.xenproject.org/osstest/logs/176010/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail pass in 176010-retest
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 176010-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    0 days
Testing same since   176003  2023-01-20 17:40:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 495 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 10:25:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 10:25:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482297.747715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJB3R-0003Sr-3r; Sat, 21 Jan 2023 10:24:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482297.747715; Sat, 21 Jan 2023 10:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJB3R-0003Sk-1F; Sat, 21 Jan 2023 10:24:57 +0000
Received: by outflank-mailman (input) for mailman id 482297;
 Sat, 21 Jan 2023 10:24:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pRw9=5S=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pJB3P-0003Se-QL
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 10:24:56 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2050.outbound.protection.outlook.com [40.107.243.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6bda7fd-9975-11ed-91b6-6bf2151ebd3b;
 Sat, 21 Jan 2023 11:24:53 +0100 (CET)
Received: from BN8PR07CA0022.namprd07.prod.outlook.com (2603:10b6:408:ac::35)
 by SN7PR12MB6983.namprd12.prod.outlook.com (2603:10b6:806:261::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Sat, 21 Jan
 2023 10:24:48 +0000
Received: from BN8NAM11FT095.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ac:cafe::d8) by BN8PR07CA0022.outlook.office365.com
 (2603:10b6:408:ac::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.28 via Frontend
 Transport; Sat, 21 Jan 2023 10:24:48 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT095.mail.protection.outlook.com (10.13.176.206) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Sat, 21 Jan 2023 10:24:48 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Sat, 21 Jan
 2023 04:24:47 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Sat, 21 Jan
 2023 02:24:47 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Sat, 21 Jan 2023 04:24:45 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6bda7fd-9975-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=kernel.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <bab01caa-1827-c672-c0e3-35b2b5b67a69@amd.com>
Date: Sat, 21 Jan 2023 11:24:39 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v2 02/11] xen/arm: Use the correct format specifier
To: Stefano Stabellini <sstabellini@kernel.org>, Ayan Kumar Halder
	<ayankuma@amd.com>
CC: Julien Grall <julien@xen.org>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>, <xen-devel@lists.xenproject.org>,
	<stefano.stabellini@amd.com>, <Volodymyr_Babchuk@epam.com>,
	<bertrand.marquis@arm.com>
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-3-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191454080.731018@ubuntu-linux-20-04-desktop>
 <c7e5fbf3-9e90-7008-0299-f53b20566b9a@xen.org>
 <ba37ee02-c07c-2803-0867-149c779890b6@amd.com>
 <cd673f97-9c0d-286b-e973-7a85c84dd576@xen.org>
 <2017e0d4-dd02-e81d-99f4-1ef47fc9e774@amd.com>
 <42b138a6-59f5-7614-d96f-30e1784c97a4@xen.org>
 <0a7d3da6-efe7-2cf1-563a-3c5c2ec473b2@amd.com>
 <alpine.DEB.2.22.394.2301201455100.731018@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <alpine.DEB.2.22.394.2301201455100.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT095:EE_|SN7PR12MB6983:EE_
X-MS-Office365-Filtering-Correlation-Id: 2316f92a-734e-428d-1df7-08dafb99b949
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jan 2023 10:24:48.3022
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2316f92a-734e-428d-1df7-08dafb99b949
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT095.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB6983

Hi Stefano, Ayan, Julien

On 21/01/2023 00:01, Stefano Stabellini wrote:
> 
> 
> On Fri, 20 Jan 2023, Ayan Kumar Halder wrote:
>> Hi Julien/Michal,
>>
>> On 20/01/2023 17:49, Julien Grall wrote:
>>>
>>>
>>> On 20/01/2023 16:03, Michal Orzel wrote:
>>>> Hi Julien,
>>>
>>> Hi Michal,
>>>
>>>>
>>>> On 20/01/2023 16:09, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 20/01/2023 14:40, Michal Orzel wrote:
>>>>>> Hello,
>>>>>
>>>>> Hi,
>>>>>
>>>>>>
>>>>>> On 20/01/2023 10:32, Julien Grall wrote:
>>>>>>>
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> On 19/01/2023 22:54, Stefano Stabellini wrote:
>>>>>>>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>>>>>>>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables.
>>>>>>>>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>>>>>>>>> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
>>>>>>>>> address.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>>>>>>>
>>>>>>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>>>>>>
>>>>>>>
>>>>>>> I have committed the patch.
>>>>>> The CI test jobs (static-mem) failed on this patch:
>>>>>> https://gitlab.com/xen-project/xen/-/pipelines/752911309
>>>>>
>>>>> Thanks for the report.
>>>>>
>>>>>>
>>>>>> I took a look at it and this is because in the test script we
>>>>>> try to find a node whose unit-address does not have leading zeroes.
>>>>>> However, after this patch, we switched to PRIpaddr which is defined as
>>>>>> 016lx/016llx and
>>>>>> we end up creating nodes like:
>>>>>>
>>>>>> memory@0000000050000000
>>>>>>
>>>>>> instead of:
>>>>>>
>>>>>> memory@60000000
>>>>>>
>>>>>> We could modify the script,
>>>>>
>>>>> TBH, I think it was a mistake for the script to rely on how Xen
>>>>> describes the memory banks in the Device-Tree.
>>>>>
>>>>> For instance, from my understanding, it would be valid for Xen to create
>>>>> a single node for all the banks, or even to omit the unit-address if
>>>>> there is only one bank.
>>>>>
>>>>>> but do we really want to create nodes
>>>>>> with leading zeroes? The dt spec does not mention it, although [1]
>>>>>> specifies that the Linux convention is not to have leading zeroes.
>>>>>
>>>>> Reading through the spec in [2], it suggests the current naming is
>>>>> fine. That said, the example matches the Linux convention (I guess that's
>>>>> not surprising...).
>>>>>
>>>>> I am open to removing the leading zeroes. However, I think the CI also
>>>>> needs to be updated (see above for why).
>>>> Yes, the CI needs to be updated as well.
>>>
>>> Can either you or Ayan look at it?
>>
>> Does this change match the expectation?
>>
>> diff --git a/automation/scripts/qemu-smoke-dom0less-arm64.sh
>> b/automation/scripts/qemu-smoke-dom0less-arm64.sh
>> index 2b59346fdc..9f5e700f0e 100755
>> --- a/automation/scripts/qemu-smoke-dom0less-arm64.sh
>> +++ b/automation/scripts/qemu-smoke-dom0less-arm64.sh
>> @@ -20,7 +20,7 @@ if [[ "${test_variant}" == "static-mem" ]]; then
>>      domu_size="10000000"
>>      passed="${test_variant} test passed"
>>      domU_check="
>> -current=\$(hexdump -e '16/1 \"%02x\"'
>> /proc/device-tree/memory@${domu_base}/reg 2>/dev/null)
>> +current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@$[0-9]*/reg
>> 2>/dev/null)
>>  expected=$(printf \"%016x%016x\" 0x${domu_base} 0x${domu_size})
>>  if [[ \"\${expected}\" == \"\${current}\" ]]; then
>>         echo \"${passed}\"
> 
> We need to check for ${domu_base} with or without leading zeroes:
> 
> current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@*(0)${domu_base}/reg 2>/dev/null)
This check is still tied to the way Xen exposes a memory node in the device tree, which might change,
as Julien suggested. We need a check that does not rely on the device tree.
My proposal is to use /proc/iomem, which prints memory ranges in %08x format.
It would look as follows:

mem_range=$(printf \"%08x-%08x\" ${domu_base} $(( ${domu_base} + ${domu_size} - 1 )))
if grep -q \${mem_range} /proc/iomem; then
    echo ${passed}
fi
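[For illustration only, not part of the original mail: the proposed check, expanded standalone with hypothetical values. domu_base/domu_size follow the CI script's convention of hex strings without a 0x prefix, so they are prefixed with 0x before the arithmetic.]

```shell
# Hypothetical values, matching the CI script's conventions:
# hex strings without a 0x prefix.
domu_base=50000000
domu_size=10000000

# /proc/iomem prints ranges as start-end, zero-padded to 8 hex digits.
mem_range=$(printf "%08x-%08x" \
    $(( 0x${domu_base} )) \
    $(( 0x${domu_base} + 0x${domu_size} - 1 )))

echo "${mem_range}"    # 50000000-5fffffff

# In the real test this would then be checked with something like:
# if grep -q "${mem_range}" /proc/iomem; then echo "${passed}"; fi
```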

If you are ok with that, I will push a patch on Monday.

~Michal


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 11:02:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 11:02:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482306.747732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJBdd-0007mi-3X; Sat, 21 Jan 2023 11:02:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482306.747732; Sat, 21 Jan 2023 11:02:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJBdd-0007mb-0c; Sat, 21 Jan 2023 11:02:21 +0000
Received: by outflank-mailman (input) for mailman id 482306;
 Sat, 21 Jan 2023 11:02:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L7i5=5S=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pJBdb-0007mV-Vy
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 11:02:20 +0000
Received: from sonic309-20.consmr.mail.gq1.yahoo.com
 (sonic309-20.consmr.mail.gq1.yahoo.com [98.137.65.146])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0fb0c722-997b-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 12:02:16 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic309.consmr.mail.gq1.yahoo.com with HTTP; Sat, 21 Jan 2023 11:02:13 +0000
Received: by hermes--production-ne1-749986b79f-kcqbz (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 99bb84dd0c6e14c52c100f8a2c2801b6; 
 Sat, 21 Jan 2023 11:02:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fb0c722-997b-11ed-b8d1-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Igor Mammedov <imammedo@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-stable@nongnu.org
Subject: [PATCH v10] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Date: Sat, 21 Jan 2023 06:02:00 -0500
Message-Id: <d473914c4d2dc38ae87dca4b898d75b44751c9cb.1674297794.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <d473914c4d2dc38ae87dca4b898d75b44751c9cb.1674297794.git.brchuckz.ref@aol.com>
Content-Length: 15930

Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
as noted in docs/igd-assign.txt in the Qemu source code.

Currently, when the xl toolstack is used to configure a Xen HVM guest with
Intel IGD passthrough to the guest with the Qemu upstream device model,
a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
a different slot. This problem often prevents the guest from booting.

Neither of the available workarounds is good: either configure Xen HVM
guests to use the old and no longer maintained Qemu traditional device
model available from xenbits.xen.org, which does reserve slot 2 for the
Intel IGD, or use the "pc" machine type instead of the "xenfv" machine
type and add the xen platform device at slot 3 using a command line
option. The second workaround causes some degradation in startup
performance, such as a longer boot time and reduced resolution of the
grub menu displayed on the monitor. By fixing the "xenfv" machine type
directly, this patch avoids that reduced startup performance when using
the Qemu upstream device model for Xen HVM guests configured with the
igd-passthru=on option.

To implement this feature in the Qemu upstream device model for Xen HVM
guests, introduce the following new functions, types, and macros:

* XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
* XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
* typedef XenPTQdevRealize function pointer
* XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
* xen_igd_reserve_slot and xen_igd_clear_slot functions

Michael Tsirkin:
* Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
  XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK

The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
the xl toolstack with the gfx_passthru option enabled, which sets the
igd-passthru=on option to Qemu for the Xen HVM machine type.

The new xen_igd_reserve_slot function also needs a stub implementation
in hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage when
Qemu is configured with --enable-xen and --disable-xen-pci-passthrough;
the stub does nothing.

The new xen_igd_clear_slot function overrides qdev->realize of the parent
PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
created in hw/i386/pc_piix.c for the case when igd-passthru=on.

Move the call to xen_host_pci_device_get, and the associated error
handling, from xen_pt_realize to the new xen_igd_clear_slot function to
initialize the device class and vendor values, which enables the checks
for the Intel IGD to succeed. The host device is verified to be an
Intel IGD to be passed through by checking the domain, bus, slot, and
function values, as well as by checking that gfx_passthru is enabled,
the device class is VGA, and the device vendor is Intel.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Notes that might be helpful to reviewers of patched code in hw/xen:

The new functions and types are based on recommendations from Qemu docs:
https://qemu.readthedocs.io/en/latest/devel/qom.html

Notes that might be helpful to reviewers of patched code in hw/i386:

The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
not affect builds that do not have CONFIG_XEN defined.

xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
existing function that is only true when Qemu is built with
xen-pci-passthrough enabled and the administrator has configured the Xen
HVM guest with Qemu's igd-passthru=on option.

v2: Remove From: <email address> tag at top of commit message

v3: Changed the test for the Intel IGD in xen_igd_clear_slot:

    if (is_igd_vga_passthrough(&s->real_device) &&
        (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {

    is changed to

    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    I hoped that I could use the test in v2, since it matches the
    other tests for the Intel IGD in Qemu and Xen, but those tests
    do not work because the necessary data structures are not set with
    their values yet. So instead use the test that the administrator
    has enabled gfx_passthru and the device address on the host is
    02.0. This test does detect the Intel IGD correctly.

v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
    email address to match the address used by the same author in commits
    be9c61da and c0e86b76
    
    Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc

v5: The patch of xen_pt.c was re-worked to allow a more consistent test
    for the Intel IGD that uses the same criteria as in other places.
    This involved moving the call to xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot and updating the checks for the
    Intel IGD in xen_igd_clear_slot:
    
    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    is changed to

    if (is_igd_vga_passthrough(&s->real_device) &&
        s->real_device.domain == 0 && s->real_device.bus == 0 &&
        s->real_device.dev == 2 && s->real_device.func == 0 &&
        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {

    Added an explanation for the move of xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot to the commit message.

    Rebase.

v6: Fix logging by removing these lines from the move from xen_pt_realize
    to xen_igd_clear_slot that was done in v5:

    XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
               " to devfn 0x%x\n",
               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
               s->dev.devfn);

    This log needs to be in xen_pt_realize because s->dev.devfn is not
    set yet in xen_igd_clear_slot.

v7: The v7 that was posted to the mailing list was incorrect. v8 is what
    v7 was intended to be.

v8: Inhibit out of context log message and needless processing by
    adding 2 lines at the top of the new xen_igd_clear_slot function:

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
        return;

    Rebase. This removed an unnecessary header file from xen_pt.h 

v9: Move check for xen_igd_gfx_pt_enabled() from pc_piix.c to xen_pt.c

    Move #include "hw/pci/pci_bus.h" from xen_pt.h to xen_pt.c

    Introduce macros for the IGD devfn constants and use them to compute
    the value of XEN_PCI_IGD_SLOT_MASK

    Also use the new macros at an appropriate place in xen_pt_realize

    Add Cc: to stable - This has been broken for a long time, ever since
                        support for igd-passthru was added to Qemu 7
                        years ago.

    Mention new macros in the commit message (Michael Tsirkin)

    N.B.: I could not follow the suggestion to move the statement
    pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK; to after
    pci_qdev_realize for symmetry. Doing that results in an error when
    creating the guest:
    
    libxl: error: libxl_qmp.c:1837:qmp_ev_parse_error_messages: Domain 4:PCI: slot 2 function 0 not available for xen-pci-passthrough, reserved
    libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 4:libxl__device_pci_add failed for PCI device 0:0:2.0 (rc -28)
    libxl: error: libxl_create.c:1921:domcreate_attach_devices: Domain 4:unable to add pci devices

v10: Change in xen_pt.c at xen_igd_clear_slot from

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
        return;

    xen_host_pci_device_get(&s->real_device,
                            s->hostaddr.domain, s->hostaddr.bus,
                            s->hostaddr.slot, s->hostaddr.function,
                            errp);
    if (*errp) {
        error_append_hint(errp, "Failed to \"open\" the real pci device");
        return;
    }

to:

    xen_host_pci_device_get(&s->real_device,
                            s->hostaddr.domain, s->hostaddr.bus,
                            s->hostaddr.slot, s->hostaddr.function,
                            errp);
    if (*errp) {
        error_append_hint(errp, "Failed to \"open\" the real pci device");
        return;
    }

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
        xpdc->pci_qdev_realize(qdev, errp);
        return;
    }

     Testing shows this fixes the problem of xen_host_pci_device_get
     and xpdc->pci_qdev_realize not being called when
     xen_igd_clear_slot returned early because the bit reserving slot 2
     in slot_reserved_mask was not set; without this change, guest
     creation fails in those cases. Thanks, Stefano!
     
     Also, in addition to mentioning in the commit message the
     workaround of using the traditional qemu device model available
     from xenbits.xen.org, mention the workaround of using the "pc"
     machine type instead of the "xenfv" machine type, which results in
     reduced startup performance.
     
     Rebase.
     
     Add Igor Mammedov <imammedo@redhat.com> to Cc.

 hw/i386/pc_piix.c    |  1 +
 hw/xen/xen_pt.c      | 63 ++++++++++++++++++++++++++++++++++++--------
 hw/xen/xen_pt.h      | 20 ++++++++++++++
 hw/xen/xen_pt_stub.c |  4 +++
 4 files changed, 77 insertions(+), 11 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index df64dd8dcc..a9d535c815 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -421,6 +421,7 @@ static void pc_xen_hvm_init(MachineState *machine)
     }
 
     pc_xen_hvm_init_pci(machine);
+    xen_igd_reserve_slot(pcms->bus);
     pci_create_simple(pcms->bus, -1, "xen-platform");
 }
 #endif
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 8db0532632..4716ce6d4e 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -57,6 +57,7 @@
 #include <sys/ioctl.h>
 
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_bus.h"
 #include "hw/qdev-properties.h"
 #include "hw/qdev-properties-system.h"
 #include "hw/xen/xen.h"
@@ -780,15 +781,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
                s->dev.devfn);
 
-    xen_host_pci_device_get(&s->real_device,
-                            s->hostaddr.domain, s->hostaddr.bus,
-                            s->hostaddr.slot, s->hostaddr.function,
-                            errp);
-    if (*errp) {
-        error_append_hint(errp, "Failed to \"open\" the real pci device");
-        return;
-    }
-
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
@@ -803,8 +795,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
     s->io_listener = xen_pt_io_listener;
 
     /* Setup VGA bios for passthrough GFX */
-    if ((s->real_device.domain == 0) && (s->real_device.bus == 0) &&
-        (s->real_device.dev == 2) && (s->real_device.func == 0)) {
+    if ((s->real_device.domain == XEN_PCI_IGD_DOMAIN) &&
+        (s->real_device.bus == XEN_PCI_IGD_BUS) &&
+        (s->real_device.dev == XEN_PCI_IGD_DEV) &&
+        (s->real_device.func == XEN_PCI_IGD_FN)) {
         if (!is_igd_vga_passthrough(&s->real_device)) {
             error_setg(errp, "Need to enable igd-passthru if you're trying"
                     " to passthrough IGD GFX");
@@ -950,11 +944,57 @@ static void xen_pci_passthrough_instance_init(Object *obj)
     PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
 }
 
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+    if (!xen_igd_gfx_pt_enabled())
+        return;
+
+    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
+    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
+}
+
+static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
+{
+    ERRP_GUARD();
+    PCIDevice *pci_dev = (PCIDevice *)qdev;
+    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
+    PCIBus *pci_bus = pci_get_bus(pci_dev);
+
+    xen_host_pci_device_get(&s->real_device,
+                            s->hostaddr.domain, s->hostaddr.bus,
+                            s->hostaddr.slot, s->hostaddr.function,
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
+        return;
+    }
+
+    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
+        xpdc->pci_qdev_realize(qdev, errp);
+        return;
+    }
+
+    if (is_igd_vga_passthrough(&s->real_device) &&
+        s->real_device.domain == XEN_PCI_IGD_DOMAIN &&
+        s->real_device.bus == XEN_PCI_IGD_BUS &&
+        s->real_device.dev == XEN_PCI_IGD_DEV &&
+        s->real_device.func == XEN_PCI_IGD_FN &&
+        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
+        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
+        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
+    }
+    xpdc->pci_qdev_realize(qdev, errp);
+}
+
 static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
+    xpdc->pci_qdev_realize = dc->realize;
+    dc->realize = xen_igd_clear_slot;
     k->realize = xen_pt_realize;
     k->exit = xen_pt_unregister_device;
     k->config_read = xen_pt_pci_read_config;
@@ -977,6 +1017,7 @@ static const TypeInfo xen_pci_passthrough_info = {
     .instance_size = sizeof(XenPCIPassthroughState),
     .instance_finalize = xen_pci_passthrough_finalize,
     .class_init = xen_pci_passthrough_class_init,
+    .class_size = sizeof(XenPTDeviceClass),
     .instance_init = xen_pci_passthrough_instance_init,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_CONVENTIONAL_PCI_DEVICE },
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index cf10fc7bbf..e184699740 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -40,7 +40,20 @@ typedef struct XenPTReg XenPTReg;
 #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
 OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
 
+#define XEN_PT_DEVICE_CLASS(klass) \
+    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
+#define XEN_PT_DEVICE_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
+
+typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
+
+typedef struct XenPTDeviceClass {
+    PCIDeviceClass parent_class;
+    XenPTQdevRealize pci_qdev_realize;
+} XenPTDeviceClass;
+
 uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void xen_igd_reserve_slot(PCIBus *pci_bus);
 void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
                                            XenHostPCIDevice *dev);
@@ -75,6 +88,13 @@ typedef int (*xen_pt_conf_byte_read)
 
 #define XEN_PCI_INTEL_OPREGION 0xfc
 
+#define XEN_PCI_IGD_DOMAIN 0
+#define XEN_PCI_IGD_BUS 0
+#define XEN_PCI_IGD_DEV 2
+#define XEN_PCI_IGD_FN 0
+#define XEN_PCI_IGD_SLOT_MASK \
+    (1UL << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
+
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
     XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
index 2d8cac8d54..5c108446a8 100644
--- a/hw/xen/xen_pt_stub.c
+++ b/hw/xen/xen_pt_stub.c
@@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
         error_setg(errp, "Xen PCI passthrough support not built in");
     }
 }
+
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Sat Jan 21 11:20:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 11:20:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482311.747742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJBuz-0001kI-Ju; Sat, 21 Jan 2023 11:20:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482311.747742; Sat, 21 Jan 2023 11:20:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJBuz-0001kB-H9; Sat, 21 Jan 2023 11:20:17 +0000
Received: by outflank-mailman (input) for mailman id 482311;
 Sat, 21 Jan 2023 11:20:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJBux-0001k0-Jl; Sat, 21 Jan 2023 11:20:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJBux-000605-G1; Sat, 21 Jan 2023 11:20:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJBux-0007Th-3O; Sat, 21 Jan 2023 11:20:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJBux-0004pO-2q; Sat, 21 Jan 2023 11:20:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RtxLL6oTJhorlmzgfxzkA4cuZyszUG2FvJb0l2x2YlE=; b=ZOGm/PbAD1ng4m9MgBMmovkup8
	oUyMppK0Er2+9Db/4MrcfX/mfryH8d3d21++gi/a10qf0EmRHdTEBesA87+XaWNUx59eJ1PwbvfXo
	hCZlqAAkxRNUwynD7IvONg7fpj+rBVfT3uYL1GEgVzbadwfdZ6rxgliX+uly22gB1OF8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176007-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176007: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f883675bf6522b52cd75dc3de791680375961769
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 11:20:15 +0000

flight 176007 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176007/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                f883675bf6522b52cd75dc3de791680375961769
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  105 days
Failing since        173470  2022-10-08 06:21:34 Z  105 days  215 attempts
Testing same since   176007  2023-01-21 02:54:45 Z    0 days    1 attempts

------------------------------------------------------------
3414 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 524922 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 12:48:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 12:48:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482330.747758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJDHY-0001mS-C4; Sat, 21 Jan 2023 12:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482330.747758; Sat, 21 Jan 2023 12:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJDHY-0001mL-9H; Sat, 21 Jan 2023 12:47:40 +0000
Received: by outflank-mailman (input) for mailman id 482330;
 Sat, 21 Jan 2023 12:47:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJDHX-0001mB-JV; Sat, 21 Jan 2023 12:47:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJDHX-000885-Hc; Sat, 21 Jan 2023 12:47:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJDHX-0002g3-6v; Sat, 21 Jan 2023 12:47:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJDHX-0003IZ-6W; Sat, 21 Jan 2023 12:47:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qfkLlbiGPopY1csOEORJjqve/Q2cfAAdyXCZmmx/DtE=; b=woSpO9JKUUMaZcW7B8u+zFfvG0
	XwngNu1N5WGrkZ+tuDVP9heOaB1PRZn7VTjqlxaaTLNeC1LDlpbfeOpXZG6YZqmsfTnsNBLsMMiyz
	sLMTnEY/ESS3Z5NYT9rLem+pYUyBJ9Po8uAXU1x8KHh1fbZwaMMEgqBPPh0NqY20/wEs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176009-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 176009: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=57b0678590708de081e4498e164b86d5c8c85024
X-Osstest-Versions-That:
    libvirt=16bfbc8cd2b4a039d3e846dceca807a9cc15849b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 12:47:39 +0000

flight 176009 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176009/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175967
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175967
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175967
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              57b0678590708de081e4498e164b86d5c8c85024
baseline version:
 libvirt              16bfbc8cd2b4a039d3e846dceca807a9cc15849b

Last test of basis   175967  2023-01-19 04:22:35 Z    2 days
Testing same since   176009  2023-01-21 04:23:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiang Jiacheng <jiangjiacheng@huawei.com>
  Ján Tomko <jtomko@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   16bfbc8cd2..57b0678590  57b0678590708de081e4498e164b86d5c8c85024 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 15:57:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 15:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482346.747786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJGEn-0003dy-3n; Sat, 21 Jan 2023 15:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482346.747786; Sat, 21 Jan 2023 15:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJGEn-0003dr-0t; Sat, 21 Jan 2023 15:57:01 +0000
Received: by outflank-mailman (input) for mailman id 482346;
 Sat, 21 Jan 2023 15:56:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L7i5=5S=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pJGEl-0003dl-Bw
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 15:56:59 +0000
Received: from sonic306-20.consmr.mail.gq1.yahoo.com
 (sonic306-20.consmr.mail.gq1.yahoo.com [98.137.68.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3a16a039-99a4-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 16:56:56 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic306.consmr.mail.gq1.yahoo.com with HTTP; Sat, 21 Jan 2023 15:56:53 +0000
Received: by hermes--production-bf1-6bb65c4965-lwg94 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 3b42b4ec092870dceab4c344b0b1924d; 
 Sat, 21 Jan 2023 15:56:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a16a039-99a4-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1674316613; bh=WU8FP7owwX+CycfOFbHvgnjn9NginFB8A8uVIaqHxNI=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=XKfVBNYYZULrrGbIiTQRukYmbY11GF8PLjjocwyyimi/yMbqNsfMm3+j+E73a9QSx0qhTldBnGXPKEewzVsI0CtNhRZGFMq/ZNYu20JL0uB/g4p9y/ZL/tV9AeUxqdQtuPBbkAuqM+X12r8n4cwF/yqyOrIHurB798ngeOGIKTdSU3GFbjYj02Wc2PgteTN0zC8t8frivykcewxyuPmdTDmPo688B/1nwtH9v2NWM6K77BMhIoB4pmQVFBCGrmUwa4d6JCnyM2+2vkanpRgle3kKjZcmwjJHWpttocfmb2Cor8ppEWcq8lhNRQUS5b3/H3U9J06cRdn9AxxUFwSdPw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674316613; bh=dWTfVFhHGIBFuo+Yur0KbTA7NHI539FuYle6e4ppJfT=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=lJ8eXPCqNecWRGQD1sXIItfLaitU83ooB/vQP7ZKXVSvufdPNURdo1CQP0NOMVgK6rc1sPtlg/yrqTWfa8aehCvEFo7+nfZ/eRJP91L1e543viJKeYGoPbIAblYQz5rhVZy3DGdJ5RppDL3hoG3RTsOktZwEj+xb3RcoEfQ1eplLC4Jm4R0Kq11vGgG+TEMFMnRQBzmwYc2iz5ciI9EToznbf4P0BKTmv0dzI+/XFq/L7IqG+x9pY1PUvmXNl45wIWSExqU3ZCJc+Qu5hZLXnbhhoQHskKlMnT04tSGEnEekCetMUY0vDf98Vr3pGNmJIEG384dP/sLPyPEEfPSzvQ==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <a679c956-5609-553d-ebc0-f6e4b22b70ac@aol.com>
Date: Sat, 21 Jan 2023 10:56:47 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v9] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: qemu-devel@nongnu.org, Anthony Perard <anthony.perard@citrix.com>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, qemu-stable@nongnu.org
References: <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz.ref@aol.com>
 <974c616b8632f1d7ca3917f8143d8cebf946a55c.1673672956.git.brchuckz@aol.com>
 <alpine.DEB.2.22.394.2301201334250.731018@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <alpine.DEB.2.22.394.2301201334250.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.21096 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3893

On 1/20/2023 4:34 PM, Stefano Stabellini wrote:
> On Sat, 14 Jan 2023, Chuck Zmudzinski wrote:
> > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > as noted in docs/igd-assign.txt in the Qemu source code.
> > 
> > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > a different slot. This problem often prevents the guest from booting.
> > 
> > The only available workaround is not good: Configure Xen HVM guests to use
> > the old and no longer maintained Qemu traditional device model available
> > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > 
> > To implement this feature in the Qemu upstream device model for Xen HVM
> > guests, introduce the following new functions, types, and macros:
> > 
> > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > * typedef XenPTQdevRealize function pointer
> > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > 
> > Michael Tsirkin:
> > * Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
> >   XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK
> > 
> > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > the xl toolstack with the gfx_passthru option enabled, which sets the
> > igd-passthru=on option to Qemu for the Xen HVM machine type.
> > 
> > The new xen_igd_reserve_slot function also needs to be implemented in
> > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > in which case it does nothing.
> > 
> > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > 
> > Move the call to xen_host_pci_device_get, and the associated error
> > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > initialize the device class and vendor values which enables the checks for
> > the Intel IGD to succeed. The verification that the host device is an
> > Intel IGD to be passed through is done by checking the domain, bus, slot,
> > and function values as well as by checking that gfx_passthru is enabled,
> > the device class is VGA, and the device vendor is Intel.
> > 
> > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>
> Hi Chuck,
>
> The approach looks OK in principle to me. I only have one question: for
> other PCI devices (not Intel IGD), where is xen_host_pci_device_get
> called now?
>
> It looks like xen_igd_reserve_slot would return without setting
> slot_reserved_mask, hence xen_igd_clear_slot would also return without
> calling xen_host_pci_device_get. And xen_pt_realize doesn't call
> xen_host_pci_device_get any longer.
>
> Am I missing something?

Thanks for catching this. With v9 guest creation fails when the bit in
slot_reserved_mask that reserves slot 2 is not set.

It fails because, when the bit in slot_reserved_mask that reserves slot 2 is
not set, v9 not only fails to call xen_host_pci_device_get, it also fails to
call xpdc->pci_qdev_realize. So I uploaded v10 to fix that here:

https://lore.kernel.org/qemu-devel/d473914c4d2dc38ae87dca4b898d75b44751c9cb.1674297794.git.brchuckz@aol.com/

Tests with v10 show it is now working for all cases.

Thanks,

Chuck


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 16:04:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 16:04:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482351.747795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJGM6-0005dT-Rz; Sat, 21 Jan 2023 16:04:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482351.747795; Sat, 21 Jan 2023 16:04:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJGM6-0005dM-P5; Sat, 21 Jan 2023 16:04:34 +0000
Received: by outflank-mailman (input) for mailman id 482351;
 Sat, 21 Jan 2023 16:04:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJGM4-0005dC-SS; Sat, 21 Jan 2023 16:04:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJGM4-0004mA-Ot; Sat, 21 Jan 2023 16:04:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJGM4-0002r0-GW; Sat, 21 Jan 2023 16:04:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJGM4-0003wA-Fx; Sat, 21 Jan 2023 16:04:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6mXS0K3Pg9WALWh5aI0M4AH3/itjgTD8qFPKV7BzBLw=; b=TZfuHqYOvDvPUUUTXTj/tOmNWC
	xx2FFRJsd7MsHNPyqHqGJJ3kT07GC3RuYDFbxn/IdUdnnvUSSMUC+/COkxwGX2KfX6AqPIfjwIyqE
	VkQoCxBf0hojh5ivfMOR3uMK5wum60J9KmJjMqSEmQWENPuRFS2Q2lFIrlezLTsxOVrw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176008-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 176008: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start.2:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=fcb7e040f5c69ca1f0678f991ab5354488a9e192
X-Osstest-Versions-That:
    qemuu=239b8b0699a222fd21da1c5fdeba0a2456085a47
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 16:04:32 +0000

flight 176008 qemu-mainline real [real]
flight 176018 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176008/
http://logs.test-lab.xenproject.org/osstest/logs/176018/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install         fail pass in 176018-retest
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail pass in 176018-retest
 test-amd64-amd64-xl-qcow2    22 guest-start.2       fail pass in 176018-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 176018 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175998
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175998
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175998
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175998
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175998
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175998
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175998
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175998
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                fcb7e040f5c69ca1f0678f991ab5354488a9e192
baseline version:
 qemuu                239b8b0699a222fd21da1c5fdeba0a2456085a47

Last test of basis   175998  2023-01-20 13:23:56 Z    1 days
Testing same since   176008  2023-01-21 02:55:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Markus Armbruster <armbru@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   239b8b0699..fcb7e040f5  fcb7e040f5c69ca1f0678f991ab5354488a9e192 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 17:35:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 17:35:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482362.747812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJHlE-00067w-O9; Sat, 21 Jan 2023 17:34:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482362.747812; Sat, 21 Jan 2023 17:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJHlE-00067p-JE; Sat, 21 Jan 2023 17:34:36 +0000
Received: by outflank-mailman (input) for mailman id 482362;
 Sat, 21 Jan 2023 17:34:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJHlD-00067f-KP; Sat, 21 Jan 2023 17:34:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJHlD-0006gi-2l; Sat, 21 Jan 2023 17:34:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJHlC-0005dS-ND; Sat, 21 Jan 2023 17:34:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJHlC-00007z-Mj; Sat, 21 Jan 2023 17:34:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e09NiGBCK/9Aag0mYXB9kwq7x9kFiVgVPwlnqM6wfvA=; b=cN+mC7gP1HzmADTBzoiQ+Ggc92
	99x2v0BqNoBSYs72fq3lNKKNGKUcye3edAISe4iGCck7FvatCY32FVsp5qpZALx9qYhSxFarJv3LM
	8sXXXt7So+Grc9bVLBXEGm7l3t2sxHErajQWeDmXELa0ljOeXuBiWEdk/8appj9Jji5o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176011-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176011: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d60c20260c7e82fe5344d06c20d718e0cc03b8b
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 17:34:34 +0000

flight 176011 xen-unstable real [real]
flight 176021 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176011/
http://logs.test-lab.xenproject.org/osstest/logs/176021/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail pass in 176021-retest
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 176021-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    1 days
Failing since        176003  2023-01-20 17:40:27 Z    0 days    2 attempts
Testing same since   176011  2023-01-21 07:04:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 762 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 18:07:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 18:07:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482371.747828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJIGQ-00018W-AW; Sat, 21 Jan 2023 18:06:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482371.747828; Sat, 21 Jan 2023 18:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJIGQ-00018P-6T; Sat, 21 Jan 2023 18:06:50 +0000
Received: by outflank-mailman (input) for mailman id 482371;
 Sat, 21 Jan 2023 18:06:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L7i5=5S=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pJIGP-00018J-GV
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 18:06:49 +0000
Received: from sonic314-20.consmr.mail.gq1.yahoo.com
 (sonic314-20.consmr.mail.gq1.yahoo.com [98.137.69.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d619a92-99b6-11ed-91b6-6bf2151ebd3b;
 Sat, 21 Jan 2023 19:06:46 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Sat, 21 Jan 2023 18:06:44 +0000
Received: by hermes--production-bf1-6bb65c4965-7k2xj (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 933f982256ad66195c1bc98306f1939a; 
 Sat, 21 Jan 2023 18:06:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d619a92-99b6-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <420b03de-3b1a-2096-529f-d18bfdf0cf53@aol.com>
Date: Sat, 21 Jan 2023 13:06:40 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
 "Michael S. Tsirkin" <mst@redhat.com>, Igor Mammedov <imammedo@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.21096 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4282

On 1/6/2023 9:03 AM, Anthony PERARD wrote:
> On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
> > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > as noted in docs/igd-assign.txt in the Qemu source code.
> > 
> > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > a different slot. This problem often prevents the guest from booting.
> > 
> > The only available workaround is not good: Configure Xen HVM guests to use
> > the old and no longer maintained Qemu traditional device model available
> > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > 
> > To implement this feature in the Qemu upstream device model for Xen HVM
> > guests, introduce the following new functions, types, and macros:
> > 
> > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > * typedef XenPTQdevRealize function pointer
> > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > 
> > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > the xl toolstack with the gfx_passthru option enabled, which sets the
> > igd-passthru=on option to Qemu for the Xen HVM machine type.
> > 
> > The new xen_igd_reserve_slot function also needs to be implemented in
> > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> > when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> > in which case it does nothing.
> > 
> > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > 
> > Move the call to xen_host_pci_device_get, and the associated error
> > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > initialize the device class and vendor values, which enables the checks
> > for the Intel IGD to succeed. The verification that the host device is
> > an Intel IGD to be passed through is done by checking the domain, bus,
> > slot, and function values, as well as by checking that gfx_passthru is
> > enabled, the device class is VGA, and the device vendor is Intel.
> > 
> > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>
>
> This patch looks good enough. It only changes the "xenfv" machine, so it
> doesn't prevent a proper fix from being done in the toolstack, libxl.
>
> The change in xen_pci_passthrough_class_init() to try to run some code
> before pci_qdev_realize() could potentially break in the future due to
> being uncommon, but hopefully that will be OK.
>
> So if no work to fix libxl appears soon, I'm OK with this patch:
>
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
>
> Thanks,
>

Hi Anthony,

If you have been following this patch, it is now at v10. Since there is
now an alternative approach, patching libxl to use the "pc" machine
instead of patching Qemu to fix the "xenfv" machine, and there have been
other changes, I did not carry your Reviewed-by tag into the later versions.

I presume you are not interested in taking on the technical debt of
patching libxl as proposed in this libxl patch:

https://lore.kernel.org/xen-devel/20230110073201.mdUvSjy1vKtxPriqMQuWAxIjQzf1eAqIlZgal1u3GBI@z/

because it would be harder to maintain and would give worse startup
performance with the Intel IGD than patching Qemu to fix the "xenfv"
machine type directly.

So are you OK with v10 of this patch? If so, you can add your Reviewed-by
to v10, which has several changes since v6 requested by the other
reviewers (Michael, Stefano, Igor).

The v10 of the patch is here:

https://lore.kernel.org/qemu-devel/d473914c4d2dc38ae87dca4b898d75b44751c9cb.1674297794.git.brchuckz@aol.com/

Thanks,

Chuck


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 21:40:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 21:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482382.747850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJLaX-0004Kr-NN; Sat, 21 Jan 2023 21:39:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482382.747850; Sat, 21 Jan 2023 21:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJLaX-0004Kk-JN; Sat, 21 Jan 2023 21:39:49 +0000
Received: by outflank-mailman (input) for mailman id 482382;
 Sat, 21 Jan 2023 21:39:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ql0K=5S=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pJLaV-0004Ke-LA
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 21:39:47 +0000
Received: from mail-qv1-xf2f.google.com (mail-qv1-xf2f.google.com
 [2607:f8b0:4864:20::f2f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1e75ed17-99d4-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 22:39:44 +0100 (CET)
Received: by mail-qv1-xf2f.google.com with SMTP id u20so6385239qvq.4
 for <xen-devel@lists.xenproject.org>; Sat, 21 Jan 2023 13:39:44 -0800 (PST)
Received: from shine.lan ([2001:470:8:67e:4282:e612:9c15:499])
 by smtp.gmail.com with ESMTPSA id
 w6-20020a05620a424600b00705be892191sm24202402qko.56.2023.01.21.13.39.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 21 Jan 2023 13:39:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e75ed17-99d4-11ed-b8d1-410ff93cb8f0
X-Received: by 2002:a0c:fc03:0:b0:537:4b09:670f with SMTP id z3-20020a0cfc03000000b005374b09670fmr9193415qvo.25.1674337183131;
        Sat, 21 Jan 2023 13:39:43 -0800 (PST)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Dongli Zhang <dongli.zhang@oracle.com>
Subject: [PATCH 0/2] tools: guest kexec fixes
Date: Sat, 21 Jan 2023 16:39:06 -0500
Message-Id: <20230121213908.6504-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Two toolstack fixes for guest kexec.  These restore kexec enough that I
got a Debian bullseye guest to kexec once.  Trying to kexec the guest a
second time left it spinning at 100% CPU after initiating the kexec; I
haven't looked into that part yet.

Regards,
Jason

Jason Andryuk (2):
  libxl: Fix guest kexec - skip cpuid policy
  Revert "tools/xenstore: simplify loop handling connection I/O"

 tools/libs/light/libxl_create.c   |  4 ++--
 tools/libs/light/libxl_dom.c      |  5 +++--
 tools/libs/light/libxl_internal.h |  2 +-
 tools/xenstore/xenstored_core.c   | 19 +++++++++++++++++--
 4 files changed, 23 insertions(+), 7 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sat Jan 21 21:40:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 21:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482384.747870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJLac-0004pE-5j; Sat, 21 Jan 2023 21:39:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482384.747870; Sat, 21 Jan 2023 21:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJLac-0004p6-2s; Sat, 21 Jan 2023 21:39:54 +0000
Received: by outflank-mailman (input) for mailman id 482384;
 Sat, 21 Jan 2023 21:39:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ql0K=5S=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pJLab-0004Ke-Ap
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 21:39:53 +0000
Received: from mail-qv1-xf2e.google.com (mail-qv1-xf2e.google.com
 [2607:f8b0:4864:20::f2e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 226ad58a-99d4-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 22:39:51 +0100 (CET)
Received: by mail-qv1-xf2e.google.com with SMTP id g10so6368251qvo.6
 for <xen-devel@lists.xenproject.org>; Sat, 21 Jan 2023 13:39:51 -0800 (PST)
Received: from shine.lan ([2001:470:8:67e:4282:e612:9c15:499])
 by smtp.gmail.com with ESMTPSA id
 w6-20020a05620a424600b00705be892191sm24202402qko.56.2023.01.21.13.39.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 21 Jan 2023 13:39:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 226ad58a-99d4-11ed-b8d1-410ff93cb8f0
X-Received: by 2002:a0c:b2d4:0:b0:534:8f10:e1a with SMTP id d20-20020a0cb2d4000000b005348f100e1amr29409884qvf.0.1674337189583;
        Sat, 21 Jan 2023 13:39:49 -0800 (PST)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 2/2] Revert "tools/xenstore: simplify loop handling connection I/O"
Date: Sat, 21 Jan 2023 16:39:08 -0500
Message-Id: <20230121213908.6504-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230121213908.6504-1-jandryuk@gmail.com>
References: <20230121213908.6504-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

I'm observing guest kexec trigger xenstored to abort on a double free.

gdb output:
Program received signal SIGABRT, Aborted.
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
44    ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
    at ./nptl/pthread_kill.c:44
    at ./nptl/pthread_kill.c:78
    at ./nptl/pthread_kill.c:89
    at ../sysdeps/posix/raise.c:26
    at talloc.c:119
    ptr=ptr@entry=0x559fae724290) at talloc.c:232
    at xenstored_core.c:2945
(gdb) frame 5
    at talloc.c:119
119            TALLOC_ABORT("Bad talloc magic value - double free");
(gdb) frame 7
    at xenstored_core.c:2945
2945                talloc_increase_ref_count(conn);
(gdb) p conn
$1 = (struct connection *) 0x559fae724290

Looking at a xenstore trace, we have:
IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
DESTROY watch 0x559fae73f630
DESTROY watch 0x559fae75ddf0
DESTROY watch 0x559fae75ec30
DESTROY watch 0x559fae75ea60
DESTROY watch 0x559fae732c00
DESTROY watch 0x559fae72cea0
DESTROY watch 0x559fae728fc0
DESTROY watch 0x559fae729570
DESTROY connection 0x559fae724290
orphaned node /local/domain/3/device/suspend/event-channel deleted
orphaned node /local/domain/3/device/vbd/51712 deleted
orphaned node /local/domain/3/device/vkbd/0 deleted
orphaned node /local/domain/3/device/vif/0 deleted
orphaned node /local/domain/3/control/shutdown deleted
orphaned node /local/domain/3/control/feature-poweroff deleted
orphaned node /local/domain/3/control/feature-reboot deleted
orphaned node /local/domain/3/control/feature-suspend deleted
orphaned node /local/domain/3/control/feature-s3 deleted
orphaned node /local/domain/3/control/feature-s4 deleted
orphaned node /local/domain/3/control/sysrq deleted
orphaned node /local/domain/3/data deleted
orphaned node /local/domain/3/drivers deleted
orphaned node /local/domain/3/feature deleted
orphaned node /local/domain/3/attr deleted
orphaned node /local/domain/3/error deleted
orphaned node /local/domain/3/console/backend-id deleted

and no further output.

The trace shows that DESTROY was called for connection 0x559fae724290,
but that is the same pointer (conn) main() was looping through from
connections.  So it wasn't actually removed from the connections list?

Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
connection I/O" fixes the abort/double free.  I think the use of
list_for_each_entry_safe is incorrect: it only makes traversal safe
against deletion of the current entry, but RELEASE/do_release can delete
some other entry in the connections list.  I think the observed abort
happens because list_for_each_entry_safe is left with next pointing to
the deleted connection, which is then used in the subsequent iteration.

Add a comment explaining the unsuitability of list_for_each_entry_safe.
Also note that the old code takes a talloc reference on next, which
prevents a use-after-free.

This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
I didn't verify the stale pointers, which is why there are a lot of "I
think" qualifiers.  But with the commit reverted, xenstored keeps
running, whereas it was aborting consistently before.
---
 tools/xenstore/xenstored_core.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 78a3edaa4e..029e3852fc 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2941,8 +2941,23 @@ int main(int argc, char *argv[])
 			}
 		}
 
-		list_for_each_entry_safe(conn, next, &connections, list) {
-			talloc_increase_ref_count(conn);
+		/*
+		 * list_for_each_entry_safe is not suitable here: it is only
+		 * safe against deletion of the current entry.  handle_input
+		 * may delete other entries as well, and if the deleted entry
+		 * is the cached next, the following iteration would be a
+		 * use-after-free.
+		 */
+		next = list_entry(connections.next, typeof(*conn), list);
+		if (&next->list != &connections)
+			talloc_increase_ref_count(next);
+		while (&next->list != &connections) {
+			conn = next;
+
+			next = list_entry(conn->list.next,
+					  typeof(*conn), list);
+			if (&next->list != &connections)
+				talloc_increase_ref_count(next);
 
 			if (conn_can_read(conn))
 				handle_input(conn);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sat Jan 21 21:40:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 21:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482383.747860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJLaa-0004Zn-U5; Sat, 21 Jan 2023 21:39:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482383.747860; Sat, 21 Jan 2023 21:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJLaa-0004Zg-Qc; Sat, 21 Jan 2023 21:39:52 +0000
Received: by outflank-mailman (input) for mailman id 482383;
 Sat, 21 Jan 2023 21:39:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ql0K=5S=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pJLaY-0004Ke-QT
 for xen-devel@lists.xenproject.org; Sat, 21 Jan 2023 21:39:50 +0000
Received: from mail-qt1-x832.google.com (mail-qt1-x832.google.com
 [2607:f8b0:4864:20::832])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20f4fb03-99d4-11ed-b8d1-410ff93cb8f0;
 Sat, 21 Jan 2023 22:39:49 +0100 (CET)
Received: by mail-qt1-x832.google.com with SMTP id g16so4885622qtu.2
 for <xen-devel@lists.xenproject.org>; Sat, 21 Jan 2023 13:39:48 -0800 (PST)
Received: from shine.lan ([2001:470:8:67e:4282:e612:9c15:499])
 by smtp.gmail.com with ESMTPSA id
 w6-20020a05620a424600b00705be892191sm24202402qko.56.2023.01.21.13.39.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 21 Jan 2023 13:39:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20f4fb03-99d4-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VlprdPagtmcclyYEQlFBNVYwZD5+y5cWhZ95XyI3Leo=;
        b=TgMzWx9ifX/noFmbJ2gaApzq0RK4TGGBMBSNDRAsGREzeFsuh0gzj8i863CeBSIcZS
         2ME+DLqYW2BMfbEVe+WovwJ9sLFLT0rvbs/8VUbdttQ6p8c/GZ2l0A4vAqCNQApafay1
         c5X/T8WS6OsVRfOO8xdH6+bhlUNhHDi+jKhW0PVV4+nNxTnif01QX1prVKr78i8rG6j7
         A1CTJSCrvcyem2AgwVbFpz/U9nu5ZPkaSiLYJkyhj9WiKw6KJ7te/8RkrZRdSVZ4nrto
         iSIVx5mEwmloMF0VwaL8hN8wGmE8i+f7TZW7HXj4KTZqQgGXcK+bNzbZhItptg6uStTK
         aocg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=VlprdPagtmcclyYEQlFBNVYwZD5+y5cWhZ95XyI3Leo=;
        b=T2KkMNSMGGI7B6nLEbY/971pQZyGIjdgjPNFVGtdlKHG0fG9FZJOiGkyaj4NO8JOAW
         OJdUclvh4Ueuh5SXBeCWVydgTGMvQeLrZ0y9vyPukRvybPkbUIQFL6LxW386fdQ280Ha
         9hthboLRSqB9kMsI6NOWezNzvILQLu4ycri0LY5cwYPCiE2BV4fq9OyorJW81J2lVJn4
         NXch4maxIhCu2S5vvYIt10VrxckZRFXfNnGJCtpGB3PvazHd34I/U2IFyW9Kn7pUKI62
         i8wQyMeJvapl6J+nPPlp1O2rwcKPVm2yCZHXmjoZb8UTznx+cCUr/3SeDS3SQf+sQbP7
         rNTw==
X-Gm-Message-State: AFqh2kroY4hz4Ko5/c7FCL+GoPzKx5Qgy6RvR9PCZp99/HOYb29EdJZi
	zbg+WteCQxE1oSLUtV4IjweNlNOLaRs=
X-Google-Smtp-Source: AMrXdXuv2Ijx/AyZiagmueTAO6PlPJBQtrFWQiM+a7Cq1+SAIrbfpmcChWa9Yfug9Q+/CUKjGK6ymA==
X-Received: by 2002:a05:622a:4208:b0:3b6:2cdb:e240 with SMTP id cp8-20020a05622a420800b003b62cdbe240mr27853568qtb.18.1674337187386;
        Sat, 21 Jan 2023 13:39:47 -0800 (PST)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Dongli Zhang <dongli.zhang@oracle.com>
Subject: [PATCH 1/2] libxl: Fix guest kexec - skip cpuid policy
Date: Sat, 21 Jan 2023 16:39:07 -0500
Message-Id: <20230121213908.6504-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230121213908.6504-1-jandryuk@gmail.com>
References: <20230121213908.6504-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a domain performs a kexec (soft reset), libxl__build_pre() is
called with the existing domid.  Calling libxl__cpuid_legacy() on the
existing domain fails because the cpuid policy has already been set, so
the guest is not rebuilt and the kexec does not complete:

xc: error: Failed to set d1's policy (err leaf 0xffffffff, subleaf 0xffffffff, msr 0xffffffff) (17 = File exists): Internal error
libxl: error: libxl_cpuid.c:494:libxl__cpuid_legacy: Domain 1:Failed to apply CPUID policy: File exists
libxl: error: libxl_create.c:1641:domcreate_rebuild_done: Domain 1:cannot (re-)build domain: -3
libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/1/type': No such file or directory
libxl: warning: libxl_dom.c:49:libxl__domain_type: unable to get domain type for domid=1, assuming HVM

During a soft_reset, skip calling libxl__cpuid_legacy() to avoid the
issue.  Before the commit named in Fixes: below, the
libxl__cpuid_legacy() failure was ignored, so the kexec would continue.

Fixes: 34990446ca91 ("libxl: don't ignore the return value from xc_cpuid_apply_policy")
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
Probably a backport candidate since this has been broken for a while.
---
 tools/libs/light/libxl_create.c   | 4 ++--
 tools/libs/light/libxl_dom.c      | 5 +++--
 tools/libs/light/libxl_internal.h | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 5cddc3df79..587a515dff 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -510,7 +510,7 @@ int libxl__domain_build(libxl__gc *gc,
     struct timeval start_time;
     int i, ret;
 
-    ret = libxl__build_pre(gc, domid, d_config, state);
+    ret = libxl__build_pre(gc, domid, d_config, state, false);
     if (ret)
         goto out;
 
@@ -1440,7 +1440,7 @@ static void domcreate_bootloader_done(libxl__egc *egc,
         goto out;
     }
 
-    rc = libxl__build_pre(gc, domid, d_config, state);
+    rc = libxl__build_pre(gc, domid, d_config, state, dcs->soft_reset);
     if (rc)
         goto out;
 
diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index b454f988fb..7cebf5047f 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -241,7 +241,8 @@ static int numa_place_domain(libxl__gc *gc, uint32_t domid,
 }
 
 int libxl__build_pre(libxl__gc *gc, uint32_t domid,
-              libxl_domain_config *d_config, libxl__domain_build_state *state)
+              libxl_domain_config *d_config, libxl__domain_build_state *state,
+              bool soft_reset)
 {
     libxl_domain_build_info *const info = &d_config->b_info;
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -382,7 +383,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     /* Construct a CPUID policy, but only for brand new domains.  Domains
      * being migrated-in/restored have CPUID handled during the
      * static_data_done() callback. */
-    if (!state->restore)
+    if (!state->restore && !soft_reset)
         rc = libxl__cpuid_legacy(ctx, domid, false, info);
 
 out:
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 0dc8b8f210..f0af44b523 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1418,7 +1418,7 @@ _hidden void libxl__domain_build_state_dispose(libxl__domain_build_state *s);
 
 _hidden int libxl__build_pre(libxl__gc *gc, uint32_t domid,
               libxl_domain_config * const d_config,
-              libxl__domain_build_state *state);
+              libxl__domain_build_state *state, bool soft_reset);
 _hidden int libxl__build_post(libxl__gc *gc, uint32_t domid,
                libxl_domain_build_info *info, libxl__domain_build_state *state,
                char **vms_ents, char **local_ents);
-- 
2.34.1
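[Editor's note] The shape of this fix -- threading a flag through so a one-shot initialization is skipped when re-entering the build path for an already-existing domain -- can be sketched with simplified stand-ins. `build_pre`, `cpuid_legacy`, the `policy_set` array, and the `-17` (EEXIST-style) error are all illustrative, not the real libxl/libxc API:

```c
#include <stdbool.h>

/* Toy per-domid record of whether a CPUID policy was already applied. */
static int policy_set[8];

/* Stand-in for libxl__cpuid_legacy(): applying a policy twice fails,
 * mirroring the EEXIST error in the commit message (toy return code). */
static int cpuid_legacy(int domid)
{
    if (policy_set[domid])
        return -17;           /* policy already set: fail like EEXIST */
    policy_set[domid] = 1;
    return 0;
}

/* Mirrors libxl__build_pre() after the patch: the CPUID policy is only
 * constructed for brand-new domains -- skipped for migrated/restored
 * domains, and now also for soft reset, where the domain (and hence its
 * policy) already exists. */
static int build_pre(int domid, bool restore, bool soft_reset)
{
    int rc = 0;

    if (!restore && !soft_reset)
        rc = cpuid_legacy(domid);
    return rc;
}
```

Without the `soft_reset` check, the second `build_pre()` call for the same domid (the kexec path) would fail with the already-set-policy error; with it, the rebuild proceeds and the existing policy is left untouched.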



From xen-devel-bounces@lists.xenproject.org Sat Jan 21 22:24:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 22:24:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482399.747886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJMHy-00029P-NQ; Sat, 21 Jan 2023 22:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482399.747886; Sat, 21 Jan 2023 22:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJMHy-00029I-K4; Sat, 21 Jan 2023 22:24:42 +0000
Received: by outflank-mailman (input) for mailman id 482399;
 Sat, 21 Jan 2023 22:24:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJMHx-000298-3S; Sat, 21 Jan 2023 22:24:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJMHx-0004cx-19; Sat, 21 Jan 2023 22:24:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJMHw-00051I-Ht; Sat, 21 Jan 2023 22:24:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJMHw-0002S0-HR; Sat, 21 Jan 2023 22:24:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EBmEXA0rGmvNxB82HivbiAvnUWg8DHC9BXl8jVVOvAg=; b=T7R/dbopgOV7Yc6NpQUa4RbxlG
	MWWfwpvp+VzLTGtceVO6u9tgsXImkOq5XrS8VLgs2ELnrXehcGiUo15GBsCaHxzicVYBd1sRP+SbX
	DUtXzxLCfUH5z31oPB5X4dPvyFnT/Kq9/iG1FRUl7/C+oifm+SYWzXXPCaq17tr/gQQM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176016-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176016: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f883675bf6522b52cd75dc3de791680375961769
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 22:24:40 +0000

flight 176016 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176016/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                f883675bf6522b52cd75dc3de791680375961769
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  106 days
Failing since        173470  2022-10-08 06:21:34 Z  105 days  216 attempts
Testing same since   176007  2023-01-21 02:54:45 Z    0 days    2 attempts

------------------------------------------------------------
3414 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 524922 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 21 23:46:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Jan 2023 23:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482412.747908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJNYt-0001m0-Vw; Sat, 21 Jan 2023 23:46:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482412.747908; Sat, 21 Jan 2023 23:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJNYt-0001lt-RS; Sat, 21 Jan 2023 23:46:15 +0000
Received: by outflank-mailman (input) for mailman id 482412;
 Sat, 21 Jan 2023 23:46:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJNYs-0001ld-85; Sat, 21 Jan 2023 23:46:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJNYs-0006Px-01; Sat, 21 Jan 2023 23:46:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJNYr-0001ot-H4; Sat, 21 Jan 2023 23:46:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJNYr-00023q-GZ; Sat, 21 Jan 2023 23:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oH1effZU/xpEJnFYoEV113QYvC/OxlrGnYmy/H2AGNc=; b=1kE2tiZXilDEIABrqSwc/vh9L6
	YGuo75j5UWpxAXwfTjxp6ECZa3BMxMjofrfmWmRsjWuM4mlvQsmcb8XQDDsEPAFtesd8NXtraPPmb
	dwskzenYke626jMIFUkLbpfGaEEkqTyyFeqaWxV6xTSKALyj57cGR+pvLmyiPD4NUAts=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176022-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 176022: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-credit2:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=65cc5ccf06a74c98de73ec683d9a543baa302a12
X-Osstest-Versions-That:
    qemuu=fcb7e040f5c69ca1f0678f991ab5354488a9e192
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Jan 2023 23:46:13 +0000

flight 176022 qemu-mainline real [real]
flight 176031 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176022/
http://logs.test-lab.xenproject.org/osstest/logs/176031/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-credit2 20 guest-localmigrate/x10 fail pass in 176031-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 176008
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 176008
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 176008
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 176008
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 176008
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 176008
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 176008
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 176008
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                65cc5ccf06a74c98de73ec683d9a543baa302a12
baseline version:
 qemuu                fcb7e040f5c69ca1f0678f991ab5354488a9e192

Last test of basis   176008  2023-01-21 02:55:28 Z    0 days
Testing same since   176022  2023-01-21 16:08:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Bresticker <abrestic@rivosinc.com>
  Bin Meng <bmeng@tinylab.org>
  Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Dongxue Zhang <elta.era@gmail.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   fcb7e040f5..65cc5ccf06  65cc5ccf06a74c98de73ec683d9a543baa302a12 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 00:12:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 00:12:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482419.747918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJNyI-0005em-Ec; Sun, 22 Jan 2023 00:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482419.747918; Sun, 22 Jan 2023 00:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJNyI-0005ef-Bu; Sun, 22 Jan 2023 00:12:30 +0000
Received: by outflank-mailman (input) for mailman id 482419;
 Sun, 22 Jan 2023 00:12:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJNyG-0005eV-SE; Sun, 22 Jan 2023 00:12:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJNyG-0007Te-Of; Sun, 22 Jan 2023 00:12:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJNyG-0002ag-8j; Sun, 22 Jan 2023 00:12:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJNyG-0008P9-8G; Sun, 22 Jan 2023 00:12:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-coresched-i386-xl
Message-Id: <E1pJNyG-0008P9-8G@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Jan 2023 00:12:28 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-coresched-i386-xl
testid guest-localmigrate

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176032/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-coresched-i386-xl.guest-localmigrate.html
Revision IDs in each graph node refer, respectively, to the Trees above.
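The cs-bisection-step run below narrows the first failing changeset by repeatedly testing revisions between a known-good basis pass and the observed failure, then reproducing both sides of the transition. The core search can be sketched as a plain bisection over an ordered revision list; this is an illustrative simplification only, since the real harness walks a multi-tree revision-tuple graph and retries flaky or inconclusive results:

```python
# Minimal bisection sketch: find the first "bad" revision in an ordered
# history, given a predicate reporting whether a revision fails.
# Illustrative only -- osstest's cs-bisection-step additionally handles
# multi-tree revision tuples, repro retries, and inconclusive results.

def bisect_first_bad(revisions, is_bad):
    """Return the first revision for which is_bad() is True.

    Assumes revisions[0] is good, revisions[-1] is bad, and history has
    a single good->bad transition (monotone failure).
    """
    lo, hi = 0, len(revisions) - 1  # lo is known good, hi is known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid  # failure reproduced: first bad is at or before mid
        else:
            lo = mid  # still passing: first bad is after mid
    return revisions[hi]

# Toy history mirroring this report: the regression lands at 1894049f
# (abbreviated hashes, ordering assumed for illustration).
history = ["c1df06af", "20279afd", "1894049f", "1d60c202"]
bad_from = history.index("1894049f")
first_bad = bisect_first_bad(history, lambda r: history.index(r) >= bad_from)
```

With the toy history above, the search lands on `1894049f`, matching the "Bug introduced" / "Bug not present" pair reported by the harness.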

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-coresched-i386-xl.guest-localmigrate --summary-out=tmp/176032.bisection-summary --basis-template=175994 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-coresched-i386-xl guest-localmigrate
Searching for failure / basis pass:
 176011 fail [host=nobling0] / 175994 [host=nobling1] 175987 [host=pinot0] 175965 [host=debina1] 175734 [host=debina0] 175726 [host=debina0] 175720 [host=debina1] 175714 [host=pinot1] 175694 [host=pinot0] 175671 [host=debina0] 175651 [host=nobling1] 175635 [host=debina1] 175624 [host=debina0] 175612 [host=debina0] 175601 [host=nobling1] 175592 [host=pinot0] 175573 ok.
Failure / basis pass flights: 176011 / 175573
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 c1df06afe578f698ebe91a1e3817463b9d165123
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#1cf02b05b27c48775a25699e61b93b814b9ae042-625eb5e96dc96aa7fddef59a08edae215527f19c git://xenbits.xen.org/xen.git#c1df06afe578f698ebe91a1e3817463b9d165123-1d60c20260c7e82fe5344d06c20d718e0cc03b8b
From git://cache:9419/git://xenbits.xen.org/qemu-xen
   fcb7e040f5..65cc5ccf06  upstream-tested -> origin/upstream-tested
Loaded 10003 nodes in revision graph
Searching for test results:
 175592 [host=pinot0]
 175601 [host=nobling1]
 175612 [host=debina0]
 175624 [host=debina0]
 175635 [host=debina1]
 175651 [host=nobling1]
 175671 [host=debina0]
 175694 [host=pinot0]
 175714 [host=pinot1]
 175720 [host=debina1]
 175726 [host=debina0]
 175734 [host=debina0]
 175834 []
 175861 []
 175890 []
 175907 []
 175931 []
 175956 []
 175965 [host=debina1]
 175987 [host=pinot0]
 175994 [host=nobling1]
 176003 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
 176012 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 c1df06afe578f698ebe91a1e3817463b9d165123
 176013 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
 176014 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176015 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c a1a618208bf53469f5e3eaa14202ba777d33f442
 176017 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 686b80c1ae4cc338334eb5df4836df526109377a
 176019 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176011 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176020 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176023 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176024 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176026 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176027 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176028 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176030 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176032 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 175569 [host=nobling1]
 175573 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 c1df06afe578f698ebe91a1e3817463b9d165123
Searching for interesting versions
 Result found: flight 175573 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f, results HASH(0x5647c4252b30) HASH(0x5647c426cd30) HASH(0x5647c424e520) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x5647c4265f70) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x5647c4265df0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 c1df06afe578f698ebe91a1e3817463b9d165123, results HASH(0x5647c425e0a8) HASH(0x5647c426d330) Result found: flight 176003 (fail), for basis failure (at ancestor ~988)
 Repro found: flight 176012 (pass), for basis pass
 Repro found: flight 176023 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
No revisions left to test, checking graph state.
 Result found: flight 176024 (pass), for last pass
 Result found: flight 176026 (fail), for first failure
 Repro found: flight 176027 (pass), for last pass
 Repro found: flight 176028 (fail), for first failure
 Repro found: flight 176030 (pass), for last pass
 Repro found: flight 176032 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176032/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-coresched-i386-xl.guest-localmigrate.{dot,ps,png,html,svg}.
----------------------------------------
176032: tolerable ALL FAIL

flight 176032 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/176032/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate      fail baseline untested


jobs:
 test-amd64-coresched-i386-xl                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Jan 22 00:57:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 00:57:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482430.747930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJOfk-0001aJ-3N; Sun, 22 Jan 2023 00:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482430.747930; Sun, 22 Jan 2023 00:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJOfk-0001aC-0Y; Sun, 22 Jan 2023 00:57:24 +0000
Received: by outflank-mailman (input) for mailman id 482430;
 Sun, 22 Jan 2023 00:57:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8mh1=5T=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pJOfi-0001Zq-UM
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 00:57:23 +0000
Received: from sonic308-8.consmr.mail.gq1.yahoo.com
 (sonic308-8.consmr.mail.gq1.yahoo.com [98.137.68.32])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b6d8a3ea-99ef-11ed-b8d1-410ff93cb8f0;
 Sun, 22 Jan 2023 01:57:18 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic308.consmr.mail.gq1.yahoo.com with HTTP; Sun, 22 Jan 2023 00:57:15 +0000
Received: by hermes--production-bf1-6bb65c4965-z9bcn (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID e61cba04d8e7c7463f511e9383474b44; 
 Sun, 22 Jan 2023 00:57:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6d8a3ea-99ef-11ed-b8d1-410ff93cb8f0
X-Sonic-MF: <brchuckz@aim.com>
From: Chuck Zmudzinski <brchuckz@aol.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Igor Mammedov <imammedo@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-stable@nongnu.org
Subject: [PATCH v11] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Date: Sat, 21 Jan 2023 19:57:02 -0500
Message-Id: <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz@aol.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
References: <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz.ref@aol.com>
Content-Length: 16468

Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
as noted in docs/igd-assign.txt in the Qemu source code.

Currently, when the xl toolstack is used to configure a Xen HVM guest with
Intel IGD passthrough to the guest with the Qemu upstream device model,
a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
a different slot. This problem often prevents the guest from booting.

The only available workarounds are not good: configure Xen HVM guests to
use the old and no longer maintained Qemu traditional device model
available from xenbits.xen.org, which does reserve slot 2 for the Intel
IGD, or use the "pc" machine type instead of the "xenfv" machine type and
add the xen platform device at slot 3 using a command line option
instead of patching qemu to fix the "xenfv" machine type directly. The
second workaround causes some degradation in startup performance, such as
a longer boot time and reduced resolution of the grub menu that is
displayed on the monitor. This patch avoids that reduced startup
performance when using the Qemu upstream device model for Xen HVM guests
configured with the igd-passthru=on option.

To implement this feature in the Qemu upstream device model for Xen HVM
guests, introduce the following new functions, types, and macros:

* XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
* XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
* typedef XenPTQdevRealize function pointer
* XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
* xen_igd_reserve_slot and xen_igd_clear_slot functions

Michael Tsirkin:
* Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
  XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK

The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
the xl toolstack with the gfx_passthru option enabled, which sets the
igd-passthru=on option to Qemu for the Xen HVM machine type.

The new xen_igd_reserve_slot function also needs to be implemented in
hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
in which case it does nothing.

The new xen_igd_clear_slot function overrides qdev->realize of the parent
PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
created in hw/i386/pc_piix.c for the case when igd-passthru=on.

Move the call to xen_host_pci_device_get, and the associated error
handling, from xen_pt_realize to the new xen_igd_clear_slot function to
initialize the device class and vendor values, which enables the checks
for the Intel IGD to succeed. The verification that the host device is an
Intel IGD to be passed through is done by checking the domain, bus, slot,
and function values, as well as by checking that gfx_passthru is enabled,
the device class is VGA, and the device vendor is Intel.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
---
Notes that might be helpful to reviewers of patched code in hw/xen:

The new functions and types are based on recommendations from Qemu docs:
https://qemu.readthedocs.io/en/latest/devel/qom.html

Notes that might be helpful to reviewers of patched code in hw/i386:

The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
not affect builds that do not have CONFIG_XEN defined.

xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
existing function that is only true when Qemu is built with
xen-pci-passthrough enabled and the administrator has configured the Xen
HVM guest with Qemu's igd-passthru=on option.

v2: Remove From: <email address> tag at top of commit message

v3: Changed the test for the Intel IGD in xen_igd_clear_slot:

    if (is_igd_vga_passthrough(&s->real_device) &&
        (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {

    is changed to

    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    I hoped that I could use the test in v2, since it matches the
    other tests for the Intel IGD in Qemu and Xen, but those tests
    do not work because the necessary data structures are not set with
    their values yet. So instead use the test that the administrator
    has enabled gfx_passthru and the device address on the host is
    02.0. This test does detect the Intel IGD correctly.

v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
    email address to match the address used by the same author in commits
    be9c61da and c0e86b76
    
    Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc

v5: The patch of xen_pt.c was re-worked to allow a more consistent test
    for the Intel IGD that uses the same criteria as in other places.
    This involved moving the call to xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot and updating the checks for the
    Intel IGD in xen_igd_clear_slot:
    
    if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
        && (s->hostaddr.function == 0)) {

    is changed to

    if (is_igd_vga_passthrough(&s->real_device) &&
        s->real_device.domain == 0 && s->real_device.bus == 0 &&
        s->real_device.dev == 2 && s->real_device.func == 0 &&
        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {

    Added an explanation for the move of xen_host_pci_device_get from
    xen_pt_realize to xen_igd_clear_slot to the commit message.

    Rebase.

v6: Fix logging by removing these lines from the move from xen_pt_realize
    to xen_igd_clear_slot that was done in v5:

    XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
               " to devfn 0x%x\n",
               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
               s->dev.devfn);

    This log needs to be in xen_pt_realize because s->dev.devfn is not
    set yet in xen_igd_clear_slot.

v7: The v7 that was posted to the mailing list was incorrect. v8 is what
    v7 was intended to be.

v8: Inhibit an out-of-context log message and needless processing by
    adding 2 lines at the top of the new xen_igd_clear_slot function:

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
        return;

    Rebase. This removed an unnecessary header file from xen_pt.h 

v9: Move check for xen_igd_gfx_pt_enabled() from pc_piix.c to xen_pt.c

    Move #include "hw/pci/pci_bus.h" from xen_pt.h to xen_pt.c

    Introduce macros for the IGD devfn constants and use them to compute
    the value of XEN_PCI_IGD_SLOT_MASK

    Also use the new macros at an appropriate place in xen_pt_realize

    Add Cc: to stable - This has been broken for a long time, ever since
                        support for igd-passthru was added to Qemu 7
                        years ago.

    Mention new macros in the commit message (Michael Tsirkin)

    N.B.: I could not follow the suggestion to move the statement
    pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK; to after
    pci_qdev_realize for symmetry. Doing that results in an error when
    creating the guest:
    
    libxl: error: libxl_qmp.c:1837:qmp_ev_parse_error_messages: Domain 4:PCI: slot 2 function 0 not available for xen-pci-passthrough, reserved
    libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 4:libxl__device_pci_add failed for PCI device 0:0:2.0 (rc -28)
    libxl: error: libxl_create.c:1921:domcreate_attach_devices: Domain 4:unable to add pci devices

v10: Change in xen_pt.c at xen_igd_clear_slot from

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
        return;

    xen_host_pci_device_get(&s->real_device,
                            s->hostaddr.domain, s->hostaddr.bus,
                            s->hostaddr.slot, s->hostaddr.function,
                            errp);
    if (*errp) {
        error_append_hint(errp, "Failed to \"open\" the real pci device");
        return;
    }

to:

    xen_host_pci_device_get(&s->real_device,
                            s->hostaddr.domain, s->hostaddr.bus,
                            s->hostaddr.slot, s->hostaddr.function,
                            errp);
    if (*errp) {
        error_append_hint(errp, "Failed to \"open\" the real pci device");
        return;
    }

    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
        xpdc->pci_qdev_realize(qdev, errp);
        return;
    }

     Testing shows this fixes the problem of xen_host_pci_device_get
     and xpdc->pci_qdev_realize not being called if xen_igd_clear_slot
     returns because the bit to reserve slot 2 in slot_reserved_mask is
     not set. Without this change, guest creation fails in the cases
     when the bit to reserve slot 2 in slot_reserved_mask is not set.
     Thanks, Stefano!
     
     Also, in addition to mentioning the workaround of using the
     traditional qemu device model available from xenbits.xen.org in the
     commit message, also mention in the commit message the workaround
     of using the "pc" machine type instead of the "xenfv" machine type,
     which results in reduced startup performance.
     
     Rebase.
     
     Add Igor Mammedov <imammedo@redhat.com> to Cc.

v11: Fix a style mistake that went unnoticed in the past few versions.
     No functional change in v11. I also ran more tests on different
     guests, such as guests that don't have igd-passthru=on set. No
     regressions were observed.
     
     The style mistake (missing braces) is fixed as follows:

xen_pt.c at xen_igd_reserve_slot is changed from

    if (!xen_igd_gfx_pt_enabled())
        return;

to

    if (!xen_igd_gfx_pt_enabled()) {
        return;
    }

 hw/i386/pc_piix.c    |  1 +
 hw/xen/xen_pt.c      | 64 ++++++++++++++++++++++++++++++++++++--------
 hw/xen/xen_pt.h      | 20 ++++++++++++++
 hw/xen/xen_pt_stub.c |  4 +++
 4 files changed, 78 insertions(+), 11 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index df64dd8dcc..a9d535c815 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -421,6 +421,7 @@ static void pc_xen_hvm_init(MachineState *machine)
     }
 
     pc_xen_hvm_init_pci(machine);
+    xen_igd_reserve_slot(pcms->bus);
     pci_create_simple(pcms->bus, -1, "xen-platform");
 }
 #endif
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 8db0532632..85c93cffcf 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -57,6 +57,7 @@
 #include <sys/ioctl.h>
 
 #include "hw/pci/pci.h"
+#include "hw/pci/pci_bus.h"
 #include "hw/qdev-properties.h"
 #include "hw/qdev-properties-system.h"
 #include "hw/xen/xen.h"
@@ -780,15 +781,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
                s->dev.devfn);
 
-    xen_host_pci_device_get(&s->real_device,
-                            s->hostaddr.domain, s->hostaddr.bus,
-                            s->hostaddr.slot, s->hostaddr.function,
-                            errp);
-    if (*errp) {
-        error_append_hint(errp, "Failed to \"open\" the real pci device");
-        return;
-    }
-
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
@@ -803,8 +795,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
     s->io_listener = xen_pt_io_listener;
 
     /* Setup VGA bios for passthrough GFX */
-    if ((s->real_device.domain == 0) && (s->real_device.bus == 0) &&
-        (s->real_device.dev == 2) && (s->real_device.func == 0)) {
+    if ((s->real_device.domain == XEN_PCI_IGD_DOMAIN) &&
+        (s->real_device.bus == XEN_PCI_IGD_BUS) &&
+        (s->real_device.dev == XEN_PCI_IGD_DEV) &&
+        (s->real_device.func == XEN_PCI_IGD_FN)) {
         if (!is_igd_vga_passthrough(&s->real_device)) {
             error_setg(errp, "Need to enable igd-passthru if you're trying"
                     " to passthrough IGD GFX");
@@ -950,11 +944,58 @@ static void xen_pci_passthrough_instance_init(Object *obj)
     PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
 }
 
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+    if (!xen_igd_gfx_pt_enabled()) {
+        return;
+    }
+
+    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
+    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
+}
+
+static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
+{
+    ERRP_GUARD();
+    PCIDevice *pci_dev = (PCIDevice *)qdev;
+    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
+    PCIBus *pci_bus = pci_get_bus(pci_dev);
+
+    xen_host_pci_device_get(&s->real_device,
+                            s->hostaddr.domain, s->hostaddr.bus,
+                            s->hostaddr.slot, s->hostaddr.function,
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
+        return;
+    }
+
+    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
+        xpdc->pci_qdev_realize(qdev, errp);
+        return;
+    }
+
+    if (is_igd_vga_passthrough(&s->real_device) &&
+        s->real_device.domain == XEN_PCI_IGD_DOMAIN &&
+        s->real_device.bus == XEN_PCI_IGD_BUS &&
+        s->real_device.dev == XEN_PCI_IGD_DEV &&
+        s->real_device.func == XEN_PCI_IGD_FN &&
+        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
+        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
+        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
+    }
+    xpdc->pci_qdev_realize(qdev, errp);
+}
+
 static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
+    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
+    xpdc->pci_qdev_realize = dc->realize;
+    dc->realize = xen_igd_clear_slot;
     k->realize = xen_pt_realize;
     k->exit = xen_pt_unregister_device;
     k->config_read = xen_pt_pci_read_config;
@@ -977,6 +1018,7 @@ static const TypeInfo xen_pci_passthrough_info = {
     .instance_size = sizeof(XenPCIPassthroughState),
     .instance_finalize = xen_pci_passthrough_finalize,
     .class_init = xen_pci_passthrough_class_init,
+    .class_size = sizeof(XenPTDeviceClass),
     .instance_init = xen_pci_passthrough_instance_init,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_CONVENTIONAL_PCI_DEVICE },
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index cf10fc7bbf..e184699740 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -40,7 +40,20 @@ typedef struct XenPTReg XenPTReg;
 #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
 OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
 
+#define XEN_PT_DEVICE_CLASS(klass) \
+    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
+#define XEN_PT_DEVICE_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
+
+typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
+
+typedef struct XenPTDeviceClass {
+    PCIDeviceClass parent_class;
+    XenPTQdevRealize pci_qdev_realize;
+} XenPTDeviceClass;
+
 uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void xen_igd_reserve_slot(PCIBus *pci_bus);
 void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
                                            XenHostPCIDevice *dev);
@@ -75,6 +88,13 @@ typedef int (*xen_pt_conf_byte_read)
 
 #define XEN_PCI_INTEL_OPREGION 0xfc
 
+#define XEN_PCI_IGD_DOMAIN 0
+#define XEN_PCI_IGD_BUS 0
+#define XEN_PCI_IGD_DEV 2
+#define XEN_PCI_IGD_FN 0
+#define XEN_PCI_IGD_SLOT_MASK \
+    (1UL << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
+
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
     XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
index 2d8cac8d54..5c108446a8 100644
--- a/hw/xen/xen_pt_stub.c
+++ b/hw/xen/xen_pt_stub.c
@@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
         error_setg(errp, "Xen PCI passthrough support not built in");
     }
 }
+
+void xen_igd_reserve_slot(PCIBus *pci_bus)
+{
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Sun Jan 22 01:09:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 01:09:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482438.747940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJOrQ-0001QR-6s; Sun, 22 Jan 2023 01:09:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482438.747940; Sun, 22 Jan 2023 01:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJOrQ-0001QK-40; Sun, 22 Jan 2023 01:09:28 +0000
Received: by outflank-mailman (input) for mailman id 482438;
 Sun, 22 Jan 2023 01:09:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8mh1=5T=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pJOrP-0001QD-8T
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 01:09:27 +0000
Received: from sonic307-8.consmr.mail.gq1.yahoo.com
 (sonic307-8.consmr.mail.gq1.yahoo.com [98.137.64.32])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68505ec5-99f1-11ed-91b6-6bf2151ebd3b;
 Sun, 22 Jan 2023 02:09:25 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic307.consmr.mail.gq1.yahoo.com with HTTP; Sun, 22 Jan 2023 01:09:22 +0000
Received: by hermes--production-ne1-749986b79f-29jwl (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 72821de4fdb96083f42679ac48af847f; 
 Sun, 22 Jan 2023 01:09:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68505ec5-99f1-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <71bde2bc-ee04-7213-ce93-18e0a6188a03@aol.com>
Date: Sat, 21 Jan 2023 20:09:15 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org
References: <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz.ref@aol.com>
 <830263507e8f1a24a94f81909d5102c4b204e938.1672615492.git.brchuckz@aol.com>
 <Y7gqSLo8pMm4gfV+@perard.uk.xensource.com>
 <420b03de-3b1a-2096-529f-d18bfdf0cf53@aol.com>
In-Reply-To: <420b03de-3b1a-2096-529f-d18bfdf0cf53@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21096 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 4881

On 1/21/23 1:06 PM, Chuck Zmudzinski wrote:
> On 1/6/2023 9:03 AM, Anthony PERARD wrote:
>> On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
>> > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>> > as noted in docs/igd-assign.txt in the Qemu source code.
>> > 
>> > Currently, when the xl toolstack is used to configure a Xen HVM guest with
>> > Intel IGD passthrough to the guest with the Qemu upstream device model,
>> > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>> > a different slot. This problem often prevents the guest from booting.
>> > 
>> > The only available workaround is not good: Configure Xen HVM guests to use
>> > the old and no longer maintained Qemu traditional device model available
>> > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
>> > 
>> > To implement this feature in the Qemu upstream device model for Xen HVM
>> > guests, introduce the following new functions, types, and macros:
>> > 
>> > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>> > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>> > * typedef XenPTQdevRealize function pointer
>> > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>> > * xen_igd_reserve_slot and xen_igd_clear_slot functions
>> > 
>> > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>> > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>> > the xl toolstack with the gfx_passthru option enabled, which sets the
>> > igd-passthru=on option to Qemu for the Xen HVM machine type.
>> > 
>> > The new xen_igd_reserve_slot function also needs to be implemented in
>> > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>> > when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>> > in which case it does nothing.
>> > 
>> > The new xen_igd_clear_slot function overrides qdev->realize of the parent
>> > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>> > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>> > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>> > 
>> > Move the call to xen_host_pci_device_get, and the associated error
>> > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>> > initialize the device class and vendor values which enables the checks for
>> > the Intel IGD to succeed. The verification that the host device is an
>> > Intel IGD to be passed through is done by checking the domain, bus, slot,
>> > and function values as well as by checking that gfx_passthru is enabled,
>> > the device class is VGA, and the device vendor is Intel.
>> > 
>> > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>>
>>
>> This patch looks good enough. It only changes the "xenfv" machine so it
>> doesn't prevent a proper fix to be done in the toolstack libxl.
>>
>> The change in xen_pci_passthrough_class_init() to try to run some code
>> before pci_qdev_realize() could potentially break in the future due to
>> being uncommon, but hopefully that will be ok.
>>
>> So if no work to fix libxl appear soon, I'm ok with this patch:
>>
>> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
>>
>> Thanks,
>>
> 
> Hi Anthony,
> 
> If you have been following this patch it is now at v10. Since there is
> another approach of patching libxl by using the "pc" machine instead of
> patching Qemu to fix the "xenfv" machine and there have been other
> changes, I did not include your Reviewed-by tag in the later versions.
> 
> I presume you are not interested in dealing with the technical debt
> of patching libxl as proposed by this patch to libxl:
> 
> https://lore.kernel.org/xen-devel/20230110073201.mdUvSjy1vKtxPriqMQuWAxIjQzf1eAqIlZgal1u3GBI@z/
> 
> because it would be more difficult to maintain and result in reduced
> startup performance with the Intel IGD than by patching Qemu and
> fixing the "xenfv" machine type with the Intel IGD directly.
> 
> So are you OK with v10 of this patch? If so, you can add your Reviewed-by
> again to v10. The v10 has several changes since v6 as requested by other
> reviewers (Michael, Stefano, Igor).
> 
> The v10 of the patch is here:
> 
> https://lore.kernel.org/qemu-devel/d473914c4d2dc38ae87dca4b898d75b44751c9cb.1674297794.git.brchuckz@aol.com/
> 
> Thanks,
> 
> Chuck

Sorry to bother you again, Anthony, but no one noticed a style
mistake that has been present in the past few versions. The
v11 fixes that without making any other changes since v10, so
if you want to add your Reviewed-by to the most recent version,
here it is (v11) (you should also have it in your Inbox):

https://lore.kernel.org/qemu-devel/b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz@aol.com/

Thanks,

Chuck


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 01:51:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 01:51:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482448.747957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJPVd-0006XP-G6; Sun, 22 Jan 2023 01:51:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482448.747957; Sun, 22 Jan 2023 01:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJPVd-0006XI-Bz; Sun, 22 Jan 2023 01:51:01 +0000
Received: by outflank-mailman (input) for mailman id 482448;
 Sun, 22 Jan 2023 01:51:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJPVc-0006X8-FG; Sun, 22 Jan 2023 01:51:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJPVc-00081C-5Y; Sun, 22 Jan 2023 01:51:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJPVb-0007JG-RG; Sun, 22 Jan 2023 01:50:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJPVb-0003EG-Qm; Sun, 22 Jan 2023 01:50:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RwsFDmPndLlaGiL9zo8I2YypdGu3n1vykAHPkN4tER0=; b=soz5PzMUAKP4Q6zm2NcC5ubnV8
	47krNZTSqU2uGzegQf9Y/dcAimvMRuOcNTEstTRyCGJpjQSuhQIUAmLit5HEyH1qWD011lPuGfKEL
	MOQ4GSZLFuvBdBZXeBnLDOt0UaBoPPmilmSrQLZOrrzwqPL/YM8RUsnp/I1P5rclbsJs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176025-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176025: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d60c20260c7e82fe5344d06c20d718e0cc03b8b
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Jan 2023 01:50:59 +0000

flight 176025 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176025/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail in 176011 pass in 176025
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 176011 pass in 176025
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-localmigrate/x10 fail pass in 176011

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 176011 like 175987
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 175994
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    1 days
Failing since        176003  2023-01-20 17:40:27 Z    1 days    3 attempts
Testing same since   176011  2023-01-21 07:04:02 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 762 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 08:40:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 08:40:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482478.747991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJVtM-0003sL-3s; Sun, 22 Jan 2023 08:39:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482478.747991; Sun, 22 Jan 2023 08:39:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJVtM-0003sE-0b; Sun, 22 Jan 2023 08:39:56 +0000
Received: by outflank-mailman (input) for mailman id 482478;
 Sun, 22 Jan 2023 08:39:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJVtL-0003s4-6n; Sun, 22 Jan 2023 08:39:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJVtL-0001G2-4G; Sun, 22 Jan 2023 08:39:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJVtK-0000Ax-NQ; Sun, 22 Jan 2023 08:39:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJVtK-00027J-Mv; Sun, 22 Jan 2023 08:39:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mZhCIedJOqj8fjrf3LitOk8gS8pPutWBg85bKXnbLGQ=; b=LLCYvK6AqVe8gyEcpIVcW0y1OL
	zW/u6TFkxANWXzLg6stVdqA4R6WAIPAF2XLrc9yA+HaMduTQcWzBp3HU22t/cAXfg6M57HqEWRujU
	w4NN8DSxAxqtJ8kY+Lr3tFBy42POAd5gT2K5yiDNyRK0StecPB7gGmz5PdhU6Mvl8Fxo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176029-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176029: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:host-install(5):broken:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f67144022885344375ad03593e7a290cc614da34
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Jan 2023 08:39:54 +0000

flight 176029 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176029/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-libvirt-xsm  5 host-install(5)        broken REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                f67144022885344375ad03593e7a290cc614da34
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  106 days
Failing since        173470  2022-10-08 06:21:34 Z  106 days  217 attempts
Testing same since   176029  2023-01-21 22:42:32 Z    0 days    1 attempts

------------------------------------------------------------
3434 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-libvirt-xsm broken
broken-step test-amd64-amd64-libvirt-xsm host-install(5)

Not pushing.

(No revision log; it would be 527640 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 08:41:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 08:41:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482487.748002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJVue-0005Fa-L1; Sun, 22 Jan 2023 08:41:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482487.748002; Sun, 22 Jan 2023 08:41:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJVue-0005FT-Gf; Sun, 22 Jan 2023 08:41:16 +0000
Received: by outflank-mailman (input) for mailman id 482487;
 Sun, 22 Jan 2023 08:41:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4w8v=5T=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pJVud-0005Dr-4t
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 08:41:15 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 85f334f0-9a30-11ed-91b6-6bf2151ebd3b;
 Sun, 22 Jan 2023 09:41:13 +0100 (CET)
Received: from mail-ej1-f69.google.com (mail-ej1-f69.google.com
 [209.85.218.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id
 us-mta-653-TSQxcbJBPk20S3ff5sTKdA-1; Sun, 22 Jan 2023 03:41:09 -0500
Received: by mail-ej1-f69.google.com with SMTP id
 xj11-20020a170906db0b00b0077b6ecb23fcso6056552ejb.5
 for <xen-devel@lists.xenproject.org>; Sun, 22 Jan 2023 00:41:09 -0800 (PST)
Received: from redhat.com ([2.52.149.29]) by smtp.gmail.com with ESMTPSA id
 b2-20020a17090630c200b00780b1979adesm20328133ejb.218.2023.01.22.00.41.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 22 Jan 2023 00:41:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85f334f0-9a30-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1674376871;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=HEVKilx5YBcf/mXBNnV23IoE/y9DzD1mBE2GGYY6Hh4=;
	b=VdmnbTUqM5e0WlzcOaOxYY6qN9FKSGhQ8DsR7U/zjq8vaBSemLseU/3d6zbFSJPntSwbS2
	RL1JaavW7sZK3ezkUTyB98A8KXx+ndiUig3amR/9174SYOLUVdRPs05npBJOLjO6RJcnvB
	0GIxz7ZDizCTTRjmBDrYc1RTEQo/xdQ=
X-MC-Unique: TSQxcbJBPk20S3ff5sTKdA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=HEVKilx5YBcf/mXBNnV23IoE/y9DzD1mBE2GGYY6Hh4=;
        b=UYHCgJy9ywxih77ADoFU+ygEb4nusEwW3cB07UXiyMNBj4MzHUz3WsCsZN+HZjnrLf
         KglENPr4QFQfPShBdEsMUZXlXw9HGfmbZw49CTf3Gk+Y7OHVXNRCnIBsYJDyj5laEcZX
         MuLauPcSi9da7t1CGNS8BSLf5Rr55MXAf58c8zmKaSgERsK8NxPOVpRnp85VOjrQ3lDD
         SC6mVUfI/YSFh7EswwxKKk3zcQXd/+tc4gyb3Gcnm4gdPWeDwkP1tvZT8DyJcWMNKISv
         3QhdCpFYar7h0z04I2OvM7wzGP+Uk+h+C3qMtPk8N1DdEdmhMIdtyw8CwL4dNpPZmczl
         yIRw==
X-Gm-Message-State: AFqh2krd/1Y7qpUGZtmzBGxtNEg7RWZwcej3DOpoGU5wHQbtA1Cd2Mzi
	yjm8DXtyoj/u4YZ7+5TUMZ3pvZMeEnIlA1plgBM+qWAJHdq9ttOYpmBGPMzp86RkZLpwNtkANCV
	zb904zX7pk0DQ5dxkXeR1YSKf/Hc=
X-Received: by 2002:a17:906:9405:b0:859:1a3c:8b5c with SMTP id q5-20020a170906940500b008591a3c8b5cmr21000324ejx.53.1674376868487;
        Sun, 22 Jan 2023 00:41:08 -0800 (PST)
X-Google-Smtp-Source: AMrXdXs7reMoniqcaok89PE4o/OsoHyHN7DRIVK4pw+iTcNtKfpNrmTt9ZhffDzqx43B1O8H2JIlNQ==
X-Received: by 2002:a17:906:9405:b0:859:1a3c:8b5c with SMTP id q5-20020a170906940500b008591a3c8b5cmr21000307ejx.53.1674376868156;
        Sun, 22 Jan 2023 00:41:08 -0800 (PST)
Date: Sun, 22 Jan 2023 03:40:58 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Igor Mammedov <imammedo@redhat.com>, xen-devel@lists.xenproject.org,
	qemu-stable@nongnu.org
Subject: Re: [PATCH v11] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Message-ID: <20230122033928-mutt-send-email-mst@kernel.org>
References: <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz.ref@aol.com>
 <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz@aol.com>
MIME-Version: 1.0
In-Reply-To: <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz@aol.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sat, Jan 21, 2023 at 07:57:02PM -0500, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workarounds are not good: either configure Xen HVM
> guests to use the old and no longer maintained Qemu traditional device
> model available from xenbits.xen.org, which does reserve slot 2 for the
> Intel IGD, or use the "pc" machine type instead of the "xenfv" machine
> type and add the xen platform device at slot 3 using a command line
> option instead of patching qemu to fix the "xenfv" machine type
> directly. The second workaround causes some degradation in startup
> performance, such as a longer boot time and reduced resolution of the
> grub menu displayed on the monitor. This patch avoids that reduced
> startup performance when using the Qemu upstream device model for Xen
> HVM guests configured with the igd-passthru=on option.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> Michael Tsirkin:
> * Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
>   XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values, which enables the checks
> for the Intel IGD to succeed. The verification that the host device is
> an Intel IGD to be passed through is done by checking the domain, bus,
> slot, and function values, as well as by checking that gfx_passthru is
> enabled, the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> ---
> Notes that might be helpful to reviewers of patched code in hw/xen:
> 
> The new functions and types are based on recommendations from Qemu docs:
> https://qemu.readthedocs.io/en/latest/devel/qom.html
> 
> Notes that might be helpful to reviewers of patched code in hw/i386:
> 
> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> not affect builds that do not have CONFIG_XEN defined.
> 
> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> existing function that is only true when Qemu is built with
> xen-pci-passthrough enabled and the administrator has configured the Xen
> HVM guest with Qemu's igd-passthru=on option.
> 
> v2: Remove From: <email address> tag at top of commit message
> 
> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> 
>     is changed to
> 
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     I hoped that I could use the test in v2, since it matches the
>     other tests for the Intel IGD in Qemu and Xen, but those tests
>     do not work because the necessary data structures are not set with
>     their values yet. So instead use the test that the administrator
>     has enabled gfx_passthru and the device address on the host is
>     02.0. This test does detect the Intel IGD correctly.
> 
> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>     email address to match the address used by the same author in commits
>     be9c61da and c0e86b76
>     
>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> 
> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>     for the Intel IGD that uses the same criteria as in other places.
>     This involved moving the call to xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>     Intel IGD in xen_igd_clear_slot:
>     
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     is changed to
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
>     Added an explanation for the move of xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot to the commit message.
> 
>     Rebase.
> 
> v6: Fix logging by removing these lines from the move from xen_pt_realize
>     to xen_igd_clear_slot that was done in v5:
> 
>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>                " to devfn 0x%x\n",
>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                s->dev.devfn);
> 
>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>     set yet in xen_igd_clear_slot.
> 
> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>     v7 was intended to be.
> 
> v8: Inhibit out of context log message and needless processing by
>     adding 2 lines at the top of the new xen_igd_clear_slot function:
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>         return;
> 
>     Rebase. This removed an unnecessary header file from xen_pt.h 
> 
> v9: Move check for xen_igd_gfx_pt_enabled() from pc_piix.c to xen_pt.c
> 
>     Move #include "hw/pci/pci_bus.h" from xen_pt.h to xen_pt.c
> 
>     Introduce macros for the IGD devfn constants and use them to compute
>     the value of XEN_PCI_IGD_SLOT_MASK
> 
>     Also use the new macros at an appropriate place in xen_pt_realize
> 
>     Add Cc: to stable - This has been broken for a long time, ever since
>                         support for igd-passthru was added to Qemu 7
>                         years ago.
> 
>     Mention new macros in the commit message (Michael Tsirkin)
> 
>     N.B.: I could not follow the suggestion to move the statement
>     pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK; to after
>     pci_qdev_realize for symmetry. Doing that results in an error when
>     creating the guest:
>     
>     libxl: error: libxl_qmp.c:1837:qmp_ev_parse_error_messages: Domain 4:PCI: slot 2 function 0 not available for xen-pci-passthrough, reserved
>     libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 4:libxl__device_pci_add failed for PCI device 0:0:2.0 (rc -28)
>     libxl: error: libxl_create.c:1921:domcreate_attach_devices: Domain 4:unable to add pci devices
> 
> v10: Change in xen_pt.c at xen_igd_clear_slot from
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>         return;
> 
>     xen_host_pci_device_get(&s->real_device,
>                             s->hostaddr.domain, s->hostaddr.bus,
>                             s->hostaddr.slot, s->hostaddr.function,
>                             errp);
>     if (*errp) {
>         error_append_hint(errp, "Failed to \"open\" the real pci device");
>         return;
>     }
> 
> to:
> 
>     xen_host_pci_device_get(&s->real_device,
>                             s->hostaddr.domain, s->hostaddr.bus,
>                             s->hostaddr.slot, s->hostaddr.function,
>                             errp);
>     if (*errp) {
>         error_append_hint(errp, "Failed to \"open\" the real pci device");
>         return;
>     }
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
>         xpdc->pci_qdev_realize(qdev, errp);
>         return;
>     }
> 
>      Testing shows this fixes the problem of xen_host_pci_device_get
>      and xpdc->pci_qdev_realize not being called if xen_igd_clear_slot
>      returns because the bit to reserve slot 2 in slot_reserved_mask is
>      not set. Without this change, guest creation fails in the cases
>      when the bit to reserve slot 2 in slot_reserved_mask is not set.
>      Thanks, Stefano!
>      
>      Also, in addition to mentioning the workaround of using the
>      traditional qemu device model available from xenbits.xen.org in the
>      commit message, also mention in the commit message the workaround
>      of using the "pc" machine type instead of the "xenfv" machine type,
>      which results in reduced startup performance.
>      
>      Rebase.
>      
>      Add Igor Mammedov <imammedo@redhat.com> to Cc.
> 
> v11: I noticed a style mistake that had gone undetected in the past
>      few versions. This version fixes it. No functional change in v11.
>      I also did more tests on different guests, such as guests that
>      don't have igd-passthru=on set. No regressions were observed.
>      
>      The style mistake (missing braces) is fixed as follows:
> 
> xen_pt.c at xen_igd_reserve_slot is changed from
> 
>     if (!xen_igd_gfx_pt_enabled())
>         return;
> 
> to
> 
>     if (!xen_igd_gfx_pt_enabled()) {
>         return;
>     }
> 
>  hw/i386/pc_piix.c    |  1 +
>  hw/xen/xen_pt.c      | 64 ++++++++++++++++++++++++++++++++++++--------
>  hw/xen/xen_pt.h      | 20 ++++++++++++++
>  hw/xen/xen_pt_stub.c |  4 +++
>  4 files changed, 78 insertions(+), 11 deletions(-)


For the 1-liner in PC:

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

but I really leave this up to Xen maintainers to decide whether
the patch is needed and makes sense.


> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index df64dd8dcc..a9d535c815 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -421,6 +421,7 @@ static void pc_xen_hvm_init(MachineState *machine)
>      }
>  
>      pc_xen_hvm_init_pci(machine);
> +    xen_igd_reserve_slot(pcms->bus);
>      pci_create_simple(pcms->bus, -1, "xen-platform");
>  }
>  #endif
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 8db0532632..85c93cffcf 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -57,6 +57,7 @@
>  #include <sys/ioctl.h>
>  
>  #include "hw/pci/pci.h"
> +#include "hw/pci/pci_bus.h"
>  #include "hw/qdev-properties.h"
>  #include "hw/qdev-properties-system.h"
>  #include "hw/xen/xen.h"
> @@ -780,15 +781,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                 s->dev.devfn);
>  
> -    xen_host_pci_device_get(&s->real_device,
> -                            s->hostaddr.domain, s->hostaddr.bus,
> -                            s->hostaddr.slot, s->hostaddr.function,
> -                            errp);
> -    if (*errp) {
> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> -        return;
> -    }
> -
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> @@ -803,8 +795,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>      s->io_listener = xen_pt_io_listener;
>  
>      /* Setup VGA bios for passthrough GFX */
> -    if ((s->real_device.domain == 0) && (s->real_device.bus == 0) &&
> -        (s->real_device.dev == 2) && (s->real_device.func == 0)) {
> +    if ((s->real_device.domain == XEN_PCI_IGD_DOMAIN) &&
> +        (s->real_device.bus == XEN_PCI_IGD_BUS) &&
> +        (s->real_device.dev == XEN_PCI_IGD_DEV) &&
> +        (s->real_device.func == XEN_PCI_IGD_FN)) {
>          if (!is_igd_vga_passthrough(&s->real_device)) {
>              error_setg(errp, "Need to enable igd-passthru if you're trying"
>                      " to passthrough IGD GFX");
> @@ -950,11 +944,58 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>  }
>  
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +    if (!xen_igd_gfx_pt_enabled()) {
> +        return;
> +    }
> +
> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> +}
> +
> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> +{
> +    ERRP_GUARD();
> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> +
> +    xen_host_pci_device_get(&s->real_device,
> +                            s->hostaddr.domain, s->hostaddr.bus,
> +                            s->hostaddr.slot, s->hostaddr.function,
> +                            errp);
> +    if (*errp) {
> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> +        return;
> +    }
> +
> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
> +        xpdc->pci_qdev_realize(qdev, errp);
> +        return;
> +    }
> +
> +    if (is_igd_vga_passthrough(&s->real_device) &&
> +        s->real_device.domain == XEN_PCI_IGD_DOMAIN &&
> +        s->real_device.bus == XEN_PCI_IGD_BUS &&
> +        s->real_device.dev == XEN_PCI_IGD_DEV &&
> +        s->real_device.func == XEN_PCI_IGD_FN &&
> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> +    }
> +    xpdc->pci_qdev_realize(qdev, errp);
> +}
> +
>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>  
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> +    xpdc->pci_qdev_realize = dc->realize;
> +    dc->realize = xen_igd_clear_slot;
>      k->realize = xen_pt_realize;
>      k->exit = xen_pt_unregister_device;
>      k->config_read = xen_pt_pci_read_config;
> @@ -977,6 +1018,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>      .instance_size = sizeof(XenPCIPassthroughState),
>      .instance_finalize = xen_pci_passthrough_finalize,
>      .class_init = xen_pci_passthrough_class_init,
> +    .class_size = sizeof(XenPTDeviceClass),
>      .instance_init = xen_pci_passthrough_instance_init,
>      .interfaces = (InterfaceInfo[]) {
>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index cf10fc7bbf..e184699740 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -40,7 +40,20 @@ typedef struct XenPTReg XenPTReg;
>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>  
> +#define XEN_PT_DEVICE_CLASS(klass) \
> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> +
> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> +
> +typedef struct XenPTDeviceClass {
> +    PCIDeviceClass parent_class;
> +    XenPTQdevRealize pci_qdev_realize;
> +} XenPTDeviceClass;
> +
>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>                                             XenHostPCIDevice *dev);
> @@ -75,6 +88,13 @@ typedef int (*xen_pt_conf_byte_read)
>  
>  #define XEN_PCI_INTEL_OPREGION 0xfc
>  
> +#define XEN_PCI_IGD_DOMAIN 0
> +#define XEN_PCI_IGD_BUS 0
> +#define XEN_PCI_IGD_DEV 2
> +#define XEN_PCI_IGD_FN 0
> +#define XEN_PCI_IGD_SLOT_MASK \
> +    (1UL << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
> +
>  typedef enum {
>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> index 2d8cac8d54..5c108446a8 100644
> --- a/hw/xen/xen_pt_stub.c
> +++ b/hw/xen/xen_pt_stub.c
> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>          error_setg(errp, "Xen PCI passthrough support not built in");
>      }
>  }
> +
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +}
> -- 
> 2.39.0



From xen-devel-bounces@lists.xenproject.org Sun Jan 22 09:22:02 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176035-mainreport@xen.org>
Subject: [xen-unstable test] 176035: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Jan 2023 09:21:45 +0000

flight 176035 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176035/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-localmigrate/x10 fail in 176025 pass in 176035
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 176025
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 176025

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail in 176025 like 175994
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    2 days
Failing since        176003  2023-01-20 17:40:27 Z    1 days    4 attempts
Testing same since   176011  2023-01-21 07:04:02 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 762 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 14:20:52 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176040-mainreport@xen.org>
Subject: [linux-linus test] 176040: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Jan 2023 14:20:33 +0000

flight 176040 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176040/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                2241ab53cbb5cdb08a6b2d4688feb13971058f65
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  106 days
Failing since        173470  2022-10-08 06:21:34 Z  106 days  218 attempts
Testing same since   176040  2023-01-22 08:42:55 Z    0 days    1 attempts

------------------------------------------------------------
3434 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 527690 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 15:35:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 15:35:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482521.748057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJcMr-00035B-Jp; Sun, 22 Jan 2023 15:34:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482521.748057; Sun, 22 Jan 2023 15:34:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJcMr-000354-HL; Sun, 22 Jan 2023 15:34:49 +0000
Received: by outflank-mailman (input) for mailman id 482521;
 Sun, 22 Jan 2023 15:34:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8mh1=5T=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pJcMp-00034y-TD
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 15:34:48 +0000
Received: from sonic309-20.consmr.mail.gq1.yahoo.com
 (sonic309-20.consmr.mail.gq1.yahoo.com [98.137.65.146])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 49ec1a7c-9a6a-11ed-b8d1-410ff93cb8f0;
 Sun, 22 Jan 2023 16:34:42 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic309.consmr.mail.gq1.yahoo.com with HTTP; Sun, 22 Jan 2023 15:34:40 +0000
Received: by hermes--production-ne1-749986b79f-pj46j (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID b81cd148e0d46ea76b74c98f1fecb1c2; 
 Sun, 22 Jan 2023 15:34:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49ec1a7c-9a6a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1674401680; bh=ldV7DJKNE9QE4UW/+hSADKTOEQM2m7ucgg+uLp2/g2I=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=nDit4zG51CT72iJXFItTXlXVAZPPuDx1CuUswa9pM9VqlRS4SUGyq8OOCbjC3yZb51N8rvnLw1RwQn+yNDohh5PabiBkKkXkHXplj0NXtbIUl9I3kXKk/ydZKq7QOm4C0RQ31JmGA7bBZ0yWVRaz9lup768oCj6VMBmxzeXlJWk/Isg4DR4RSo+D5Gxels/JVxBUd1IxvZfh/DeSt2fDV3MJc7IdY+fOzB1sJ6mCYrkcLJZbZbzOes/v3ChjD3Vq66dcf3C+lWpUYuWBL2XO62khKglVKmomYAYRbOqYb5gGuq7ByRlbD1yiBoOiz7HMtQaiK7T5Uli4BArycKn4Gg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674401680; bh=v8j/j6wsuyoeavp/FKTI0hekL9KwDaT68oZnC5FmyD4=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=cOUFuF9pE4jUZKxyFfyWfRT6gEdIKHuq+BR3arsUlVyN8SWx2brhiZn7VllPM9Z6PwsKFjUOMBNSshTcOVG3PvlQFG9auBxh4aM5fpSFRu9/R0FUtDpg8VXPSiYdG26eq4nYVc67szLPLBujP2Yo+wRYN9fvByaWtjZkvdcVP1gXMSonxiXyfYGpvch3Yp2AXiTyXosvy+uACvUIyg4WkNMh51/DrkiwbICbtmUQjLnYnryz8TTQWAv+rW6v9h4PbCKFVDnnJ69ObmVuBGbgxwKtf3f/dWX13UOqzdygDHe1p2EOS6JNbZ8RDBAW+QTPis2N19DjnQyHw3jsSDpvxg==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <6e6b3de1-ff14-9658-41cc-25b6e62138f0@aol.com>
Date: Sun, 22 Jan 2023 10:34:33 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.0
Subject: Re: [PATCH v11] xen/pt: reserve PCI slot 2 for Intel igd-passthru
To: "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
Cc: qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Igor Mammedov <imammedo@redhat.com>, xen-devel@lists.xenproject.org,
 qemu-stable@nongnu.org
References: <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz.ref@aol.com>
 <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz@aol.com>
 <20230122033928-mutt-send-email-mst@kernel.org>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <20230122033928-mutt-send-email-mst@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21096 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 18303

On 1/22/23 3:40 AM, Michael S. Tsirkin wrote:
> On Sat, Jan 21, 2023 at 07:57:02PM -0500, Chuck Zmudzinski wrote:
>> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
>> as noted in docs/igd-assign.txt in the Qemu source code.
>> 
>> Currently, when the xl toolstack is used to configure a Xen HVM guest
>> with Intel IGD passthrough using the Qemu upstream device model, a Qemu
>> emulated PCI device will occupy slot 2 and the Intel IGD will occupy
>> a different slot. This problem often prevents the guest from booting.
>> 
>> The only available workarounds are not good: configure Xen HVM guests to
>> use the old and no longer maintained Qemu traditional device model
>> available from xenbits.xen.org, which does reserve slot 2 for the Intel
>> IGD, or use the "pc" machine type instead of the "xenfv" machine type
>> and add the xen platform device at slot 3 using a command line option
>> instead of patching qemu to fix the "xenfv" machine type directly. The
>> second workaround causes some degradation in startup performance, such
>> as a longer boot time and reduced resolution of the grub menu that is
>> displayed on the monitor. This patch avoids that reduced startup
>> performance when using the Qemu upstream device model for Xen HVM guests
>> configured with the igd-passthru=on option.
>> 
>> To implement this feature in the Qemu upstream device model for Xen HVM
>> guests, introduce the following new functions, types, and macros:
>> 
>> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
>> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
>> * typedef XenPTQdevRealize function pointer
>> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
>> * xen_igd_reserve_slot and xen_igd_clear_slot functions
>> 
>> Michael Tsirkin:
>> * Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
>>   XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK
>> 
>> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
>> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
>> the xl toolstack with the gfx_passthru option enabled, which sets the
>> igd-passthru=on option to Qemu for the Xen HVM machine type.
>> 
>> The new xen_igd_reserve_slot function also needs to be implemented in
>> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
>> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
>> in which case it does nothing.
>> 
>> The new xen_igd_clear_slot function overrides qdev->realize of the parent
>> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
>> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
>> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
>> 
>> Move the call to xen_host_pci_device_get, and the associated error
>> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
>> initialize the device class and vendor values which enables the checks for
>> the Intel IGD to succeed. The verification that the host device is an
>> Intel IGD to be passed through is done by checking the domain, bus, slot,
>> and function values as well as by checking that gfx_passthru is enabled,
>> the device class is VGA, and the device vendor is Intel.
>> 
>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>> ---
>> Notes that might be helpful to reviewers of patched code in hw/xen:
>> 
>> The new functions and types are based on recommendations from Qemu docs:
>> https://qemu.readthedocs.io/en/latest/devel/qom.html
>> 
>> Notes that might be helpful to reviewers of patched code in hw/i386:
>> 
>> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
>> not affect builds that do not have CONFIG_XEN defined.
>> 
>> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
>> existing function that is only true when Qemu is built with
>> xen-pci-passthrough enabled and the administrator has configured the Xen
>> HVM guest with Qemu's igd-passthru=on option.
>> 
>> v2: Remove From: <email address> tag at top of commit message
>> 
>> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
>> 
>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
>> 
>>     is changed to
>> 
>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>         && (s->hostaddr.function == 0)) {
>> 
>>     I hoped that I could use the test in v2, since it matches the
>>     other tests for the Intel IGD in Qemu and Xen, but those tests
>>     do not work because the necessary data structures are not set with
>>     their values yet. So instead use the test that the administrator
>>     has enabled gfx_passthru and the device address on the host is
>>     02.0. This test does detect the Intel IGD correctly.
>> 
>> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>>     email address to match the address used by the same author in commits
>>     be9c61da and c0e86b76
>>     
>>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
>> 
>> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>>     for the Intel IGD that uses the same criteria as in other places.
>>     This involved moving the call to xen_host_pci_device_get from
>>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>>     Intel IGD in xen_igd_clear_slot:
>>     
>>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>>         && (s->hostaddr.function == 0)) {
>> 
>>     is changed to
>> 
>>     if (is_igd_vga_passthrough(&s->real_device) &&
>>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>> 
>>     Added an explanation for the move of xen_host_pci_device_get from
>>     xen_pt_realize to xen_igd_clear_slot to the commit message.
>> 
>>     Rebase.
>> 
>> v6: Fix logging by removing these lines from the move from xen_pt_realize
>>     to xen_igd_clear_slot that was done in v5:
>> 
>>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>>                " to devfn 0x%x\n",
>>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>                s->dev.devfn);
>> 
>>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>>     set yet in xen_igd_clear_slot.
>> 
>> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>>     v7 was intended to be.
>> 
>> v8: Inhibit out of context log message and needless processing by
>>     adding 2 lines at the top of the new xen_igd_clear_slot function:
>> 
>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>         return;
>> 
>>     Rebase. This removed an unnecessary header file from xen_pt.h 
>> 
>> v9: Move check for xen_igd_gfx_pt_enabled() from pc_piix.c to xen_pt.c
>> 
>>     Move #include "hw/pci/pci_bus.h" from xen_pt.h to xen_pt.c
>> 
>>     Introduce macros for the IGD devfn constants and use them to compute
>>     the value of XEN_PCI_IGD_SLOT_MASK
>> 
>>     Also use the new macros at an appropriate place in xen_pt_realize
>> 
>>     Add Cc: to stable - This has been broken for a long time, ever since
>>                         support for igd-passthru was added to Qemu 7
>>                         years ago.
>> 
>>     Mention new macros in the commit message (Michael Tsirkin)
>> 
>>     N.B.: I could not follow the suggestion to move the statement
>>     pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK; to after
>>     pci_qdev_realize for symmetry. Doing that results in an error when
>>     creating the guest:
>>     
>>     libxl: error: libxl_qmp.c:1837:qmp_ev_parse_error_messages: Domain 4:PCI: slot 2 function 0 not available for xen-pci-passthrough, reserved
>>     libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 4:libxl__device_pci_add failed for PCI device 0:0:2.0 (rc -28)
>>     libxl: error: libxl_create.c:1921:domcreate_attach_devices: Domain 4:unable to add pci devices
>> 
>> v10: Change in xen_pt.c at xen_igd_clear_slot from
>> 
>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>>         return;
>> 
>>     xen_host_pci_device_get(&s->real_device,
>>                             s->hostaddr.domain, s->hostaddr.bus,
>>                             s->hostaddr.slot, s->hostaddr.function,
>>                             errp);
>>     if (*errp) {
>>         error_append_hint(errp, "Failed to \"open\" the real pci device");
>>         return;
>>     }
>> 
>> to:
>> 
>>     xen_host_pci_device_get(&s->real_device,
>>                             s->hostaddr.domain, s->hostaddr.bus,
>>                             s->hostaddr.slot, s->hostaddr.function,
>>                             errp);
>>     if (*errp) {
>>         error_append_hint(errp, "Failed to \"open\" the real pci device");
>>         return;
>>     }
>> 
>>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
>>         xpdc->pci_qdev_realize(qdev, errp);
>>         return;
>>     }
>> 
>>      Testing shows this fixes the problem of xen_host_pci_device_get
>>      and xpdc->pci_qdev_realize not being called if xen_igd_clear_slot
>>      returns because the bit to reserve slot 2 in slot_reserved_mask is
>>      not set. Without this change, guest creation fails in the cases
>>      when the bit to reserve slot 2 in slot_reserved_mask is not set.
>>      Thanks, Stefano!
>>      
>>      Also, in addition to mentioning the workaround of using the
>>      traditional qemu device model available from xenbits.xen.org in the
>>      commit message, also mention in the commit message the workaround
>>      of using the "pc" machine type instead of the "xenfv" machine type,
>>      which results in reduced startup performance.
>>      
>>      Rebase.
>>      
>>      Add Igor Mammedov <imammedo@redhat.com> to Cc.
>> 
>> v11: Fix a style mistake that went unnoticed in the past few versions.
>>      No functional change in v11. I also did more tests on different
>>      guests, such as guests that don't have igd-passthru=on set. No
>>      regressions were observed.
>>      
>>      The style mistake (missing braces) is fixed as follows:
>> 
>> xen_pt.c at xen_igd_reserve_slot is changed from
>> 
>>     if (!xen_igd_gfx_pt_enabled())
>>         return;
>> 
>> to
>> 
>>     if (!xen_igd_gfx_pt_enabled()) {
>>         return;
>>     }
>> 
>>  hw/i386/pc_piix.c    |  1 +
>>  hw/xen/xen_pt.c      | 64 ++++++++++++++++++++++++++++++++++++--------
>>  hw/xen/xen_pt.h      | 20 ++++++++++++++
>>  hw/xen/xen_pt_stub.c |  4 +++
>>  4 files changed, 78 insertions(+), 11 deletions(-)
> 
> 
> For the 1-liner in PC:
> 
> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> 
> but I really leave this up to Xen maintainers to decide whether
> the patch is needed and makes sense.

Thanks, Michael!

Stefano, I think I answered the important question you raised when you
looked at v9 in v10/v11.

Anthony, the patch has been through several iterations and
improvements since you reviewed v6.

So I think this is ready if the Xen developers want to do it.

Thanks,

Chuck

> 
> 
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index df64dd8dcc..a9d535c815 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -421,6 +421,7 @@ static void pc_xen_hvm_init(MachineState *machine)
>>      }
>>  
>>      pc_xen_hvm_init_pci(machine);
>> +    xen_igd_reserve_slot(pcms->bus);
>>      pci_create_simple(pcms->bus, -1, "xen-platform");
>>  }
>>  #endif
>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> index 8db0532632..85c93cffcf 100644
>> --- a/hw/xen/xen_pt.c
>> +++ b/hw/xen/xen_pt.c
>> @@ -57,6 +57,7 @@
>>  #include <sys/ioctl.h>
>>  
>>  #include "hw/pci/pci.h"
>> +#include "hw/pci/pci_bus.h"
>>  #include "hw/qdev-properties.h"
>>  #include "hw/qdev-properties-system.h"
>>  #include "hw/xen/xen.h"
>> @@ -780,15 +781,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>>                 s->dev.devfn);
>>  
>> -    xen_host_pci_device_get(&s->real_device,
>> -                            s->hostaddr.domain, s->hostaddr.bus,
>> -                            s->hostaddr.slot, s->hostaddr.function,
>> -                            errp);
>> -    if (*errp) {
>> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
>> -        return;
>> -    }
>> -
>>      s->is_virtfn = s->real_device.is_virtfn;
>>      if (s->is_virtfn) {
>>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
>> @@ -803,8 +795,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>>      s->io_listener = xen_pt_io_listener;
>>  
>>      /* Setup VGA bios for passthrough GFX */
>> -    if ((s->real_device.domain == 0) && (s->real_device.bus == 0) &&
>> -        (s->real_device.dev == 2) && (s->real_device.func == 0)) {
>> +    if ((s->real_device.domain == XEN_PCI_IGD_DOMAIN) &&
>> +        (s->real_device.bus == XEN_PCI_IGD_BUS) &&
>> +        (s->real_device.dev == XEN_PCI_IGD_DEV) &&
>> +        (s->real_device.func == XEN_PCI_IGD_FN)) {
>>          if (!is_igd_vga_passthrough(&s->real_device)) {
>>              error_setg(errp, "Need to enable igd-passthru if you're trying"
>>                      " to passthrough IGD GFX");
>> @@ -950,11 +944,58 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>>  }
>>  
>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>> +{
>> +    if (!xen_igd_gfx_pt_enabled()) {
>> +        return;
>> +    }
>> +
>> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
>> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
>> +}
>> +
>> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
>> +{
>> +    ERRP_GUARD();
>> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
>> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
>> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
>> +
>> +    xen_host_pci_device_get(&s->real_device,
>> +                            s->hostaddr.domain, s->hostaddr.bus,
>> +                            s->hostaddr.slot, s->hostaddr.function,
>> +                            errp);
>> +    if (*errp) {
>> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
>> +        return;
>> +    }
>> +
>> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
>> +        xpdc->pci_qdev_realize(qdev, errp);
>> +        return;
>> +    }
>> +
>> +    if (is_igd_vga_passthrough(&s->real_device) &&
>> +        s->real_device.domain == XEN_PCI_IGD_DOMAIN &&
>> +        s->real_device.bus == XEN_PCI_IGD_BUS &&
>> +        s->real_device.dev == XEN_PCI_IGD_DEV &&
>> +        s->real_device.func == XEN_PCI_IGD_FN &&
>> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
>> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
>> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
>> +    }
>> +    xpdc->pci_qdev_realize(qdev, errp);
>> +}
>> +
>>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>>  {
>>      DeviceClass *dc = DEVICE_CLASS(klass);
>>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>>  
>> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
>> +    xpdc->pci_qdev_realize = dc->realize;
>> +    dc->realize = xen_igd_clear_slot;
>>      k->realize = xen_pt_realize;
>>      k->exit = xen_pt_unregister_device;
>>      k->config_read = xen_pt_pci_read_config;
>> @@ -977,6 +1018,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>>      .instance_size = sizeof(XenPCIPassthroughState),
>>      .instance_finalize = xen_pci_passthrough_finalize,
>>      .class_init = xen_pci_passthrough_class_init,
>> +    .class_size = sizeof(XenPTDeviceClass),
>>      .instance_init = xen_pci_passthrough_instance_init,
>>      .interfaces = (InterfaceInfo[]) {
>>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
>> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
>> index cf10fc7bbf..e184699740 100644
>> --- a/hw/xen/xen_pt.h
>> +++ b/hw/xen/xen_pt.h
>> @@ -40,7 +40,20 @@ typedef struct XenPTReg XenPTReg;
>>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>>  
>> +#define XEN_PT_DEVICE_CLASS(klass) \
>> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
>> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
>> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
>> +
>> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
>> +
>> +typedef struct XenPTDeviceClass {
>> +    PCIDeviceClass parent_class;
>> +    XenPTQdevRealize pci_qdev_realize;
>> +} XenPTDeviceClass;
>> +
>>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
>> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>>                                             XenHostPCIDevice *dev);
>> @@ -75,6 +88,13 @@ typedef int (*xen_pt_conf_byte_read)
>>  
>>  #define XEN_PCI_INTEL_OPREGION 0xfc
>>  
>> +#define XEN_PCI_IGD_DOMAIN 0
>> +#define XEN_PCI_IGD_BUS 0
>> +#define XEN_PCI_IGD_DEV 2
>> +#define XEN_PCI_IGD_FN 0
>> +#define XEN_PCI_IGD_SLOT_MASK \
>> +    (1UL << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
>> +
>>  typedef enum {
>>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
>> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
>> index 2d8cac8d54..5c108446a8 100644
>> --- a/hw/xen/xen_pt_stub.c
>> +++ b/hw/xen/xen_pt_stub.c
>> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>>          error_setg(errp, "Xen PCI passthrough support not built in");
>>      }
>>  }
>> +
>> +void xen_igd_reserve_slot(PCIBus *pci_bus)
>> +{
>> +}
>> -- 
>> 2.39.0
> 



From xen-devel-bounces@lists.xenproject.org Sun Jan 22 17:09:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 17:09:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482530.748071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJdpI-0004QF-L9; Sun, 22 Jan 2023 17:08:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482530.748071; Sun, 22 Jan 2023 17:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJdpI-0004Q8-HM; Sun, 22 Jan 2023 17:08:16 +0000
Received: by outflank-mailman (input) for mailman id 482530;
 Sun, 22 Jan 2023 17:08:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJdpH-0004Py-In; Sun, 22 Jan 2023 17:08:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJdpH-0004Xt-F9; Sun, 22 Jan 2023 17:08:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJdpH-0000ON-2z; Sun, 22 Jan 2023 17:08:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJdpH-0003uX-2Y; Sun, 22 Jan 2023 17:08:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LhfUk3fHRxZtoDDwmjPQhvAA2I+FgpeizbFN+aDeKIQ=; b=5ufIYtvtioDjAZKPzYwK8JBR/l
	Ci0/PKHtsIMGZFvLXAFluhltbY7mlLnNSUyqrlUnS4uCgozv5CiViCnxars4hgQkeiubayTJLNH0U
	89XI0FrK99aoOpbsXGsTWdZuqc1Vq+u14RrUdhhW5Phd7w6DD/1kW0p/HaifEUeT5KIg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176042-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176042: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d60c20260c7e82fe5344d06c20d718e0cc03b8b
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Jan 2023 17:08:15 +0000

flight 176042 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176042/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 176035 pass in 176042
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 176035 pass in 176042
 test-arm64-arm64-xl-vhd      17 guest-start/debian.repeat  fail pass in 176035

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    2 days
Failing since        176003  2023-01-20 17:40:27 Z    1 days    5 attempts
Testing same since   176011  2023-01-21 07:04:02 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 762 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 22:59:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 22:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482544.748102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjId-0003ui-QM; Sun, 22 Jan 2023 22:58:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482544.748102; Sun, 22 Jan 2023 22:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjId-0003ub-NB; Sun, 22 Jan 2023 22:58:55 +0000
Received: by outflank-mailman (input) for mailman id 482544;
 Sun, 22 Jan 2023 22:58:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJjId-0003uR-BL; Sun, 22 Jan 2023 22:58:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJjId-00044s-74; Sun, 22 Jan 2023 22:58:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJjIc-0006pn-KL; Sun, 22 Jan 2023 22:58:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJjIc-0005ja-Js; Sun, 22 Jan 2023 22:58:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FNnkQIV7vOKf7LE9iotW6TnglNBCsfrRjy1bI+o77wo=; b=0cXcutj7j6lfZOiU51bBHotiFV
	N/UXxYwrucNCf/pe/nVC2GJDtuKE2WXHy5Z2JRfKRTQ7VkMGQAl/7CGeQJcn1FNkwtHiLHBTQmcl5
	H6zaDLAgmZRi3UCLgtGEV8kA9XNyXMirLEOMQi45FS/TqRvohGoIZUeFV1DhfpDIHT4M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176046-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176046: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2241ab53cbb5cdb08a6b2d4688feb13971058f65
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Jan 2023 22:58:54 +0000

flight 176046 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176046/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 build-arm64-pvops             6 kernel-build   fail in 176040 REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 176040

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 176040 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 176040 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                2241ab53cbb5cdb08a6b2d4688feb13971058f65
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  107 days
Failing since        173470  2022-10-08 06:21:34 Z  106 days  219 attempts
Testing same since   176040  2023-01-22 08:42:55 Z    0 days    2 attempts

------------------------------------------------------------
3434 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 527690 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 22:59:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 22:59:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482549.748112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjJ3-0004Ln-9q; Sun, 22 Jan 2023 22:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482549.748112; Sun, 22 Jan 2023 22:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjJ3-0004Lg-71; Sun, 22 Jan 2023 22:59:21 +0000
Received: by outflank-mailman (input) for mailman id 482549;
 Sun, 22 Jan 2023 22:59:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zCZZ=5T=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pJjJ2-0004Jk-7P
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 22:59:20 +0000
Received: from mail-vs1-xe2f.google.com (mail-vs1-xe2f.google.com
 [2607:f8b0:4864:20::e2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 66011f1c-9aa8-11ed-91b6-6bf2151ebd3b;
 Sun, 22 Jan 2023 23:59:18 +0100 (CET)
Received: by mail-vs1-xe2f.google.com with SMTP id q125so11242835vsb.0
 for <xen-devel@lists.xenproject.org>; Sun, 22 Jan 2023 14:59:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66011f1c-9aa8-11ed-91b6-6bf2151ebd3b
MIME-Version: 1.0
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <621e8ef8c6a721927ecade5bb41cdc85df386bbf.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To: <621e8ef8c6a721927ecade5bb41cdc85df386bbf.1674226563.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 23 Jan 2023 08:58:51 +1000
Message-ID: <CAKmqyKO-12sczsNdVsAov_nxhSazPM0HruXRzq034UvyMqQgAg@mail.gmail.com>
Subject: Re: [PATCH v1 02/14] xen/riscv: add <asm/asm.h> header
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/arch/riscv/include/asm/asm.h | 54 ++++++++++++++++++++++++++++++++
>  1 file changed, 54 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/asm.h
>
> diff --git a/xen/arch/riscv/include/asm/asm.h b/xen/arch/riscv/include/asm/asm.h
> new file mode 100644
> index 0000000000..6d426ecea7
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/asm.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: (GPL-2.0-only) */
> +/*
> + * Copyright (C) 2015 Regents of the University of California
> + */
> +
> +#ifndef _ASM_RISCV_ASM_H
> +#define _ASM_RISCV_ASM_H
> +
> +#ifdef __ASSEMBLY__
> +#define __ASM_STR(x)   x
> +#else
> +#define __ASM_STR(x)   #x
> +#endif
> +
> +#if __riscv_xlen == 64
> +#define __REG_SEL(a, b)        __ASM_STR(a)
> +#elif __riscv_xlen == 32
> +#define __REG_SEL(a, b)        __ASM_STR(b)
> +#else
> +#error "Unexpected __riscv_xlen"
> +#endif
> +
> +#define REG_L          __REG_SEL(ld, lw)
> +#define REG_S          __REG_SEL(sd, sw)
> +
> +#if __SIZEOF_POINTER__ == 8
> +#ifdef __ASSEMBLY__
> +#define RISCV_PTR              .dword
> +#else
> +#define RISCV_PTR              ".dword"
> +#endif
> +#elif __SIZEOF_POINTER__ == 4
> +#ifdef __ASSEMBLY__
> +#define RISCV_PTR              .word
> +#else
> +#define RISCV_PTR              ".word"
> +#endif
> +#else
> +#error "Unexpected __SIZEOF_POINTER__"
> +#endif
> +
> +#if (__SIZEOF_INT__ == 4)
> +#define RISCV_INT              __ASM_STR(.word)
> +#else
> +#error "Unexpected __SIZEOF_INT__"
> +#endif
> +
> +#if (__SIZEOF_SHORT__ == 2)
> +#define RISCV_SHORT            __ASM_STR(.half)
> +#else
> +#error "Unexpected __SIZEOF_SHORT__"
> +#endif
> +
> +#endif /* _ASM_RISCV_ASM_H */
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 23:25:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 23:25:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482560.748125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjhg-0007rE-A6; Sun, 22 Jan 2023 23:24:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482560.748125; Sun, 22 Jan 2023 23:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjhg-0007r7-7H; Sun, 22 Jan 2023 23:24:48 +0000
Received: by outflank-mailman (input) for mailman id 482560;
 Sun, 22 Jan 2023 23:24:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zCZZ=5T=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pJjhe-0007r1-OU
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 23:24:47 +0000
Received: from mail-vs1-xe2f.google.com (mail-vs1-xe2f.google.com
 [2607:f8b0:4864:20::e2f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f30daabc-9aab-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 00:24:43 +0100 (CET)
Received: by mail-vs1-xe2f.google.com with SMTP id q125so11281918vsb.0
 for <xen-devel@lists.xenproject.org>; Sun, 22 Jan 2023 15:24:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f30daabc-9aab-11ed-b8d1-410ff93cb8f0
MIME-Version: 1.0
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <fe153cbeffd4ba4e158271ccd2449628f4973481.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To: <fe153cbeffd4ba4e158271ccd2449628f4973481.1674226563.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 23 Jan 2023 09:24:15 +1000
Message-ID: <CAKmqyKPS5CoNofMR-7X2NQs6NYaEqP-agB8jkdfmW9rAhxaqNA@mail.gmail.com>
Subject: Re: [PATCH v1 03/14] xen/riscv: add <asm/riscv_encoding.h> header
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 21, 2023 at 1:01 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/arch/riscv/include/asm/riscv_encoding.h | 945 ++++++++++++++++++++
>  1 file changed, 945 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/riscv_encoding.h
>
> diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
> new file mode 100644
> index 0000000000..8a43d49f7a
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/riscv_encoding.h
> @@ -0,0 +1,945 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later OR BSD-2-Clause */
> +/*
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + *
> + * Authors:
> + *   Anup Patel <anup.patel@wdc.com>
> + *
> + * The source has been largely adapted from OpenSBI:
> + * include/sbi/riscv_encodnig.h
> + *
> + */
> +
> +#ifndef __RISCV_ENCODING_H__
> +#define __RISCV_ENCODING_H__
> +
> +#define _UL(X) _AC(X, UL)
> +#define _ULL(X) _AC(X, ULL)
> +
> +/* clang-format off */
> +#define MSTATUS_SIE                    _UL(0x00000002)
> +#define MSTATUS_MIE                    _UL(0x00000008)
> +#define MSTATUS_SPIE_SHIFT             5
> +#define MSTATUS_SPIE                   (_UL(1) << MSTATUS_SPIE_SHIFT)
> +#define MSTATUS_UBE                    _UL(0x00000040)
> +#define MSTATUS_MPIE                   _UL(0x00000080)
> +#define MSTATUS_SPP_SHIFT              8
> +#define MSTATUS_SPP                    (_UL(1) << MSTATUS_SPP_SHIFT)
> +#define MSTATUS_MPP_SHIFT              11
> +#define MSTATUS_MPP                    (_UL(3) << MSTATUS_MPP_SHIFT)
> +#define MSTATUS_FS                     _UL(0x00006000)
> +#define MSTATUS_FS_OFF                 _UL(0x00000000)
> +#define MSTATUS_FS_INITIAL             _UL(0x00002000)
> +#define MSTATUS_FS_CLEAN               _UL(0x00004000)
> +#define MSTATUS_FS_DIRTY               _UL(0x00006000)
> +#define MSTATUS_XS                     _UL(0x00018000)
> +#define MSTATUS_XS_OFF                 _UL(0x00000000)
> +#define MSTATUS_XS_INITIAL             _UL(0x00008000)
> +#define MSTATUS_XS_CLEAN               _UL(0x00010000)
> +#define MSTATUS_XS_DIRTY               _UL(0x00018000)
> +#define MSTATUS_VS                     _UL(0x01800000)
> +#define MSTATUS_VS_OFF                 _UL(0x00000000)
> +#define MSTATUS_VS_INITIAL             _UL(0x00800000)
> +#define MSTATUS_VS_CLEAN               _UL(0x01000000)
> +#define MSTATUS_VS_DIRTY               _UL(0x01800000)
> +#define MSTATUS_MPRV                   _UL(0x00020000)
> +#define MSTATUS_SUM                    _UL(0x00040000)
> +#define MSTATUS_MXR                    _UL(0x00080000)
> +#define MSTATUS_TVM                    _UL(0x00100000)
> +#define MSTATUS_TW                     _UL(0x00200000)
> +#define MSTATUS_TSR                    _UL(0x00400000)
> +#define MSTATUS32_SD                   _UL(0x80000000)
> +#if __riscv_xlen == 64
> +#define MSTATUS_UXL                    _ULL(0x0000000300000000)
> +#define MSTATUS_SXL                    _ULL(0x0000000C00000000)
> +#define MSTATUS_SBE                    _ULL(0x0000001000000000)
> +#define MSTATUS_MBE                    _ULL(0x0000002000000000)
> +#define MSTATUS_MPV                    _ULL(0x0000008000000000)
> +#else
> +#define MSTATUSH_SBE                   _UL(0x00000010)
> +#define MSTATUSH_MBE                   _UL(0x00000020)
> +#define MSTATUSH_MPV                   _UL(0x00000080)
> +#endif
> +#define MSTATUS32_SD                   _UL(0x80000000)
> +#define MSTATUS64_SD                   _ULL(0x8000000000000000)
> +
> +#define SSTATUS_SIE                    MSTATUS_SIE
> +#define SSTATUS_SPIE_SHIFT             MSTATUS_SPIE_SHIFT
> +#define SSTATUS_SPIE                   MSTATUS_SPIE
> +#define SSTATUS_SPP_SHIFT              MSTATUS_SPP_SHIFT
> +#define SSTATUS_SPP                    MSTATUS_SPP
> +#define SSTATUS_FS                     MSTATUS_FS
> +#define SSTATUS_FS_OFF                 MSTATUS_FS_OFF
> +#define SSTATUS_FS_INITIAL             MSTATUS_FS_INITIAL
> +#define SSTATUS_FS_CLEAN               MSTATUS_FS_CLEAN
> +#define SSTATUS_FS_DIRTY               MSTATUS_FS_DIRTY
> +#define SSTATUS_XS                     MSTATUS_XS
> +#define SSTATUS_XS_OFF                 MSTATUS_XS_OFF
> +#define SSTATUS_XS_INITIAL             MSTATUS_XS_INITIAL
> +#define SSTATUS_XS_CLEAN               MSTATUS_XS_CLEAN
> +#define SSTATUS_XS_DIRTY               MSTATUS_XS_DIRTY
> +#define SSTATUS_VS                     MSTATUS_VS
> +#define SSTATUS_VS_OFF                 MSTATUS_VS_OFF
> +#define SSTATUS_VS_INITIAL             MSTATUS_VS_INITIAL
> +#define SSTATUS_VS_CLEAN               MSTATUS_VS_CLEAN
> +#define SSTATUS_VS_DIRTY               MSTATUS_VS_DIRTY
> +#define SSTATUS_SUM                    MSTATUS_SUM
> +#define SSTATUS_MXR                    MSTATUS_MXR
> +#define SSTATUS32_SD                   MSTATUS32_SD
> +#define SSTATUS64_UXL                  MSTATUS_UXL
> +#define SSTATUS64_SD                   MSTATUS64_SD
> +
> +#if __riscv_xlen == 64
> +#define HSTATUS_VSXL                   _UL(0x300000000)
> +#define HSTATUS_VSXL_SHIFT             32
> +#endif
> +#define HSTATUS_VTSR                   _UL(0x00400000)
> +#define HSTATUS_VTW                    _UL(0x00200000)
> +#define HSTATUS_VTVM                   _UL(0x00100000)
> +#define HSTATUS_VGEIN                  _UL(0x0003f000)
> +#define HSTATUS_VGEIN_SHIFT            12
> +#define HSTATUS_HU                     _UL(0x00000200)
> +#define HSTATUS_SPVP                   _UL(0x00000100)
> +#define HSTATUS_SPV                    _UL(0x00000080)
> +#define HSTATUS_GVA                    _UL(0x00000040)
> +#define HSTATUS_VSBE                   _UL(0x00000020)
> +
> +#define IRQ_S_SOFT                     1
> +#define IRQ_VS_SOFT                    2
> +#define IRQ_M_SOFT                     3
> +#define IRQ_S_TIMER                    5
> +#define IRQ_VS_TIMER                   6
> +#define IRQ_M_TIMER                    7
> +#define IRQ_S_EXT                      9
> +#define IRQ_VS_EXT                     10
> +#define IRQ_M_EXT                      11
> +#define IRQ_S_GEXT                     12
> +#define IRQ_PMU_OVF                    13
> +
> +#define MIP_SSIP                       (_UL(1) << IRQ_S_SOFT)
> +#define MIP_VSSIP                      (_UL(1) << IRQ_VS_SOFT)
> +#define MIP_MSIP                       (_UL(1) << IRQ_M_SOFT)
> +#define MIP_STIP                       (_UL(1) << IRQ_S_TIMER)
> +#define MIP_VSTIP                      (_UL(1) << IRQ_VS_TIMER)
> +#define MIP_MTIP                       (_UL(1) << IRQ_M_TIMER)
> +#define MIP_SEIP                       (_UL(1) << IRQ_S_EXT)
> +#define MIP_VSEIP                      (_UL(1) << IRQ_VS_EXT)
> +#define MIP_MEIP                       (_UL(1) << IRQ_M_EXT)
> +#define MIP_SGEIP                      (_UL(1) << IRQ_S_GEXT)
> +#define MIP_LCOFIP                     (_UL(1) << IRQ_PMU_OVF)
> +
> +#define SIP_SSIP                       MIP_SSIP
> +#define SIP_STIP                       MIP_STIP
> +
> +#define PRV_U                          _UL(0)
> +#define PRV_S                          _UL(1)
> +#define PRV_M                          _UL(3)
> +
> +#define SATP32_MODE                    _UL(0x80000000)
> +#define SATP32_MODE_SHIFT              31
> +#define SATP32_ASID                    _UL(0x7FC00000)
> +#define SATP32_ASID_SHIFT              22
> +#define SATP32_PPN                     _UL(0x003FFFFF)
> +#define SATP64_MODE                    _ULL(0xF000000000000000)
> +#define SATP64_MODE_SHIFT              60
> +#define SATP64_ASID                    _ULL(0x0FFFF00000000000)
> +#define SATP64_ASID_SHIFT              44
> +#define SATP64_PPN                     _ULL(0x00000FFFFFFFFFFF)
> +
> +#define SATP_MODE_OFF                  _UL(0)
> +#define SATP_MODE_SV32                 _UL(1)
> +#define SATP_MODE_SV39                 _UL(8)
> +#define SATP_MODE_SV48                 _UL(9)
> +#define SATP_MODE_SV57                 _UL(10)
> +#define SATP_MODE_SV64                 _UL(11)
> +
> +#define HGATP_MODE_OFF                 _UL(0)
> +#define HGATP_MODE_SV32X4              _UL(1)
> +#define HGATP_MODE_SV39X4              _UL(8)
> +#define HGATP_MODE_SV48X4              _UL(9)
> +
> +#define HGATP32_MODE_SHIFT             31
> +#define HGATP32_VMID_SHIFT             22
> +#define HGATP32_VMID_MASK              _UL(0x1FC00000)
> +#define HGATP32_PPN                    _UL(0x003FFFFF)
> +
> +#define HGATP64_MODE_SHIFT             60
> +#define HGATP64_VMID_SHIFT             44
> +#define HGATP64_VMID_MASK              _ULL(0x03FFF00000000000)
> +#define HGATP64_PPN                    _ULL(0x00000FFFFFFFFFFF)
> +
> +#define PMP_R                          _UL(0x01)
> +#define PMP_W                          _UL(0x02)
> +#define PMP_X                          _UL(0x04)
> +#define PMP_A                          _UL(0x18)
> +#define PMP_A_TOR                      _UL(0x08)
> +#define PMP_A_NA4                      _UL(0x10)
> +#define PMP_A_NAPOT                    _UL(0x18)
> +#define PMP_L                          _UL(0x80)
> +
> +#define PMP_SHIFT                      2
> +#define PMP_COUNT                      64
> +#if __riscv_xlen == 64
> +#define PMP_ADDR_MASK                  ((_ULL(0x1) << 54) - 1)
> +#else
> +#define PMP_ADDR_MASK                  _UL(0xFFFFFFFF)
> +#endif
> +
> +#if __riscv_xlen == 64
> +#define MSTATUS_SD                     MSTATUS64_SD
> +#define SSTATUS_SD                     SSTATUS64_SD
> +#define SATP_MODE                      SATP64_MODE
> +#define SATP_MODE_SHIFT                        SATP64_MODE_SHIFT
> +
> +#define HGATP_PPN                      HGATP64_PPN
> +#define HGATP_VMID_SHIFT               HGATP64_VMID_SHIFT
> +#define HGATP_VMID_MASK                        HGATP64_VMID_MASK
> +#define HGATP_MODE_SHIFT               HGATP64_MODE_SHIFT
> +#else
> +#define MSTATUS_SD                     MSTATUS32_SD
> +#define SSTATUS_SD                     SSTATUS32_SD
> +#define SATP_MODE                      SATP32_MODE
> +#define SATP_MODE_SHIFT                        SATP32_MODE_SHIFT
> +
> +#define HGATP_PPN                      HGATP32_PPN
> +#define HGATP_VMID_SHIFT               HGATP32_VMID_SHIFT
> +#define HGATP_VMID_MASK                        HGATP32_VMID_MASK
> +#define HGATP_MODE_SHIFT               HGATP32_MODE_SHIFT
> +#endif
> +
> +#define TOPI_IID_SHIFT                 16
> +#define TOPI_IID_MASK                  0xfff
> +#define TOPI_IPRIO_MASK                0xff
> +
> +#if __riscv_xlen == 64
> +#define MHPMEVENT_OF                   (_UL(1) << 63)
> +#define MHPMEVENT_MINH                 (_UL(1) << 62)
> +#define MHPMEVENT_SINH                 (_UL(1) << 61)
> +#define MHPMEVENT_UINH                 (_UL(1) << 60)
> +#define MHPMEVENT_VSINH                        (_UL(1) << 59)
> +#define MHPMEVENT_VUINH                        (_UL(1) << 58)
> +#else
> +#define MHPMEVENTH_OF                  (_ULL(1) << 31)
> +#define MHPMEVENTH_MINH                        (_ULL(1) << 30)
> +#define MHPMEVENTH_SINH                        (_ULL(1) << 29)
> +#define MHPMEVENTH_UINH                        (_ULL(1) << 28)
> +#define MHPMEVENTH_VSINH               (_ULL(1) << 27)
> +#define MHPMEVENTH_VUINH               (_ULL(1) << 26)
> +
> +#define MHPMEVENT_OF                   (MHPMEVENTH_OF << 32)
> +#define MHPMEVENT_MINH                 (MHPMEVENTH_MINH << 32)
> +#define MHPMEVENT_SINH                 (MHPMEVENTH_SINH << 32)
> +#define MHPMEVENT_UINH                 (MHPMEVENTH_UINH << 32)
> +#define MHPMEVENT_VSINH                        (MHPMEVENTH_VSINH << 32)
> +#define MHPMEVENT_VUINH                        (MHPMEVENTH_VUINH << 32)
> +
> +#endif
> +
> +#define MHPMEVENT_SSCOF_MASK           _ULL(0xFFFF000000000000)
> +
> +#if __riscv_xlen > 32
> +#define ENVCFG_STCE                    (_ULL(1) << 63)
> +#define ENVCFG_PBMTE                   (_ULL(1) << 62)
> +#else
> +#define ENVCFGH_STCE                   (_UL(1) << 31)
> +#define ENVCFGH_PBMTE                  (_UL(1) << 30)
> +#endif
> +#define ENVCFG_CBZE                    (_UL(1) << 7)
> +#define ENVCFG_CBCFE                   (_UL(1) << 6)
> +#define ENVCFG_CBIE_SHIFT              4
> +#define ENVCFG_CBIE                    (_UL(0x3) << ENVCFG_CBIE_SHIFT)
> +#define ENVCFG_CBIE_ILL                        _UL(0x0)
> +#define ENVCFG_CBIE_FLUSH              _UL(0x1)
> +#define ENVCFG_CBIE_INV                        _UL(0x3)
> +#define ENVCFG_FIOM                    _UL(0x1)
> +
> +/* ===== User-level CSRs ===== */
> +
> +/* User Trap Setup (N-extension) */
> +#define CSR_USTATUS                    0x000
> +#define CSR_UIE                                0x004
> +#define CSR_UTVEC                      0x005
> +
> +/* User Trap Handling (N-extension) */
> +#define CSR_USCRATCH                   0x040
> +#define CSR_UEPC                       0x041
> +#define CSR_UCAUSE                     0x042
> +#define CSR_UTVAL                      0x043
> +#define CSR_UIP                                0x044
> +
> +/* User Floating-point CSRs */
> +#define CSR_FFLAGS                     0x001
> +#define CSR_FRM                                0x002
> +#define CSR_FCSR                       0x003
> +
> +/* User Counters/Timers */
> +#define CSR_CYCLE                      0xc00
> +#define CSR_TIME                       0xc01
> +#define CSR_INSTRET                    0xc02
> +#define CSR_HPMCOUNTER3                        0xc03
> +#define CSR_HPMCOUNTER4                        0xc04
> +#define CSR_HPMCOUNTER5                        0xc05
> +#define CSR_HPMCOUNTER6                        0xc06
> +#define CSR_HPMCOUNTER7                        0xc07
> +#define CSR_HPMCOUNTER8                        0xc08
> +#define CSR_HPMCOUNTER9                        0xc09
> +#define CSR_HPMCOUNTER10               0xc0a
> +#define CSR_HPMCOUNTER11               0xc0b
> +#define CSR_HPMCOUNTER12               0xc0c
> +#define CSR_HPMCOUNTER13               0xc0d
> +#define CSR_HPMCOUNTER14               0xc0e
> +#define CSR_HPMCOUNTER15               0xc0f
> +#define CSR_HPMCOUNTER16               0xc10
> +#define CSR_HPMCOUNTER17               0xc11
> +#define CSR_HPMCOUNTER18               0xc12
> +#define CSR_HPMCOUNTER19               0xc13
> +#define CSR_HPMCOUNTER20               0xc14
> +#define CSR_HPMCOUNTER21               0xc15
> +#define CSR_HPMCOUNTER22               0xc16
> +#define CSR_HPMCOUNTER23               0xc17
> +#define CSR_HPMCOUNTER24               0xc18
> +#define CSR_HPMCOUNTER25               0xc19
> +#define CSR_HPMCOUNTER26               0xc1a
> +#define CSR_HPMCOUNTER27               0xc1b
> +#define CSR_HPMCOUNTER28               0xc1c
> +#define CSR_HPMCOUNTER29               0xc1d
> +#define CSR_HPMCOUNTER30               0xc1e
> +#define CSR_HPMCOUNTER31               0xc1f
> +#define CSR_CYCLEH                     0xc80
> +#define CSR_TIMEH                      0xc81
> +#define CSR_INSTRETH                   0xc82
> +#define CSR_HPMCOUNTER3H               0xc83
> +#define CSR_HPMCOUNTER4H               0xc84
> +#define CSR_HPMCOUNTER5H               0xc85
> +#define CSR_HPMCOUNTER6H               0xc86
> +#define CSR_HPMCOUNTER7H               0xc87
> +#define CSR_HPMCOUNTER8H               0xc88
> +#define CSR_HPMCOUNTER9H               0xc89
> +#define CSR_HPMCOUNTER10H              0xc8a
> +#define CSR_HPMCOUNTER11H              0xc8b
> +#define CSR_HPMCOUNTER12H              0xc8c
> +#define CSR_HPMCOUNTER13H              0xc8d
> +#define CSR_HPMCOUNTER14H              0xc8e
> +#define CSR_HPMCOUNTER15H              0xc8f
> +#define CSR_HPMCOUNTER16H              0xc90
> +#define CSR_HPMCOUNTER17H              0xc91
> +#define CSR_HPMCOUNTER18H              0xc92
> +#define CSR_HPMCOUNTER19H              0xc93
> +#define CSR_HPMCOUNTER20H              0xc94
> +#define CSR_HPMCOUNTER21H              0xc95
> +#define CSR_HPMCOUNTER22H              0xc96
> +#define CSR_HPMCOUNTER23H              0xc97
> +#define CSR_HPMCOUNTER24H              0xc98
> +#define CSR_HPMCOUNTER25H              0xc99
> +#define CSR_HPMCOUNTER26H              0xc9a
> +#define CSR_HPMCOUNTER27H              0xc9b
> +#define CSR_HPMCOUNTER28H              0xc9c
> +#define CSR_HPMCOUNTER29H              0xc9d
> +#define CSR_HPMCOUNTER30H              0xc9e
> +#define CSR_HPMCOUNTER31H              0xc9f
> +
> +/* ===== Supervisor-level CSRs ===== */
> +
> +/* Supervisor Trap Setup */
> +#define CSR_SSTATUS                    0x100
> +#define CSR_SEDELEG                    0x102
> +#define CSR_SIDELEG                    0x103
> +#define CSR_SIE                                0x104
> +#define CSR_STVEC                      0x105
> +#define CSR_SCOUNTEREN                 0x106
> +
> +/* Supervisor Configuration */
> +#define CSR_SENVCFG                    0x10a
> +
> +/* Supervisor Trap Handling */
> +#define CSR_SSCRATCH                   0x140
> +#define CSR_SEPC                       0x141
> +#define CSR_SCAUSE                     0x142
> +#define CSR_STVAL                      0x143
> +#define CSR_SIP                                0x144
> +
> +/* Supervisor Protection and Translation */
> +#define CSR_SATP                       0x180
> +
> +/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
> +#define CSR_SISELECT                   0x150
> +#define CSR_SIREG                      0x151
> +
> +/* Supervisor-Level Interrupts (AIA) */
> +#define CSR_STOPI                      0xdb0
> +
> +/* Supervisor-Level IMSIC Interface (AIA) */
> +#define CSR_SSETEIPNUM                 0x158
> +#define CSR_SCLREIPNUM                 0x159
> +#define CSR_SSETEIENUM                 0x15a
> +#define CSR_SCLREIENUM                 0x15b
> +#define CSR_STOPEI                     0x15c
> +
> +/* Supervisor-Level High-Half CSRs (AIA) */
> +#define CSR_SIEH                       0x114
> +#define CSR_SIPH                       0x154
> +
> +/* Supervisor stateen CSRs */
> +#define CSR_SSTATEEN0                  0x10C
> +#define CSR_SSTATEEN1                  0x10D
> +#define CSR_SSTATEEN2                  0x10E
> +#define CSR_SSTATEEN3                  0x10F
> +
> +/* ===== Hypervisor-level CSRs ===== */
> +
> +/* Hypervisor Trap Setup (H-extension) */
> +#define CSR_HSTATUS                    0x600
> +#define CSR_HEDELEG                    0x602
> +#define CSR_HIDELEG                    0x603
> +#define CSR_HIE                                0x604
> +#define CSR_HCOUNTEREN                 0x606
> +#define CSR_HGEIE                      0x607
> +
> +/* Hypervisor Configuration */
> +#define CSR_HENVCFG                    0x60a
> +#define CSR_HENVCFGH                   0x61a
> +
> +/* Hypervisor Trap Handling (H-extension) */
> +#define CSR_HTVAL                      0x643
> +#define CSR_HIP                                0x644
> +#define CSR_HVIP                       0x645
> +#define CSR_HTINST                     0x64a
> +#define CSR_HGEIP                      0xe12
> +
> +/* Hypervisor Protection and Translation (H-extension) */
> +#define CSR_HGATP                      0x680
> +
> +/* Hypervisor Counter/Timer Virtualization Registers (H-extension) */
> +#define CSR_HTIMEDELTA                 0x605
> +#define CSR_HTIMEDELTAH                        0x615
> +
> +/* Virtual Supervisor Registers (H-extension) */
> +#define CSR_VSSTATUS                   0x200
> +#define CSR_VSIE                       0x204
> +#define CSR_VSTVEC                     0x205
> +#define CSR_VSSCRATCH                  0x240
> +#define CSR_VSEPC                      0x241
> +#define CSR_VSCAUSE                    0x242
> +#define CSR_VSTVAL                     0x243
> +#define CSR_VSIP                       0x244
> +#define CSR_VSATP                      0x280
> +
> +/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
> +#define CSR_HVIEN                      0x608
> +#define CSR_HVICTL                     0x609
> +#define CSR_HVIPRIO1                   0x646
> +#define CSR_HVIPRIO2                   0x647
> +
> +/* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA) */
> +#define CSR_VSISELECT                  0x250
> +#define CSR_VSIREG                     0x251
> +
> +/* VS-Level Interrupts (H-extension with AIA) */
> +#define CSR_VSTOPI                     0xeb0
> +
> +/* VS-Level IMSIC Interface (H-extension with AIA) */
> +#define CSR_VSSETEIPNUM                0x258
> +#define CSR_VSCLREIPNUM                0x259
> +#define CSR_VSSETEIENUM                0x25a
> +#define CSR_VSCLREIENUM                0x25b
> +#define CSR_VSTOPEI                    0x25c
> +
> +/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
> +#define CSR_HIDELEGH                   0x613
> +#define CSR_HVIENH                     0x618
> +#define CSR_HVIPH                      0x655
> +#define CSR_HVIPRIO1H                  0x656
> +#define CSR_HVIPRIO2H                  0x657
> +#define CSR_VSIEH                      0x214
> +#define CSR_VSIPH                      0x254
> +
> +/* Hypervisor stateen CSRs */
> +#define CSR_HSTATEEN0                  0x60C
> +#define CSR_HSTATEEN0H                 0x61C
> +#define CSR_HSTATEEN1                  0x60D
> +#define CSR_HSTATEEN1H                 0x61D
> +#define CSR_HSTATEEN2                  0x60E
> +#define CSR_HSTATEEN2H                 0x61E
> +#define CSR_HSTATEEN3                  0x60F
> +#define CSR_HSTATEEN3H                 0x61F
> +
> +/* ===== Machine-level CSRs ===== */
> +
> +/* Machine Information Registers */
> +#define CSR_MVENDORID                  0xf11
> +#define CSR_MARCHID                    0xf12
> +#define CSR_MIMPID                     0xf13
> +#define CSR_MHARTID                    0xf14
> +
> +/* Machine Trap Setup */
> +#define CSR_MSTATUS                    0x300
> +#define CSR_MISA                       0x301
> +#define CSR_MEDELEG                    0x302
> +#define CSR_MIDELEG                    0x303
> +#define CSR_MIE                                0x304
> +#define CSR_MTVEC                      0x305
> +#define CSR_MCOUNTEREN                 0x306
> +#define CSR_MSTATUSH                   0x310
> +
> +/* Machine Configuration */
> +#define CSR_MENVCFG                    0x30a
> +#define CSR_MENVCFGH                   0x31a
> +
> +/* Machine Trap Handling */
> +#define CSR_MSCRATCH                   0x340
> +#define CSR_MEPC                       0x341
> +#define CSR_MCAUSE                     0x342
> +#define CSR_MTVAL                      0x343
> +#define CSR_MIP                                0x344
> +#define CSR_MTINST                     0x34a
> +#define CSR_MTVAL2                     0x34b
> +
> +/* Machine Memory Protection */
> +#define CSR_PMPCFG0                    0x3a0
> +#define CSR_PMPCFG1                    0x3a1
> +#define CSR_PMPCFG2                    0x3a2
> +#define CSR_PMPCFG3                    0x3a3
> +#define CSR_PMPCFG4                    0x3a4
> +#define CSR_PMPCFG5                    0x3a5
> +#define CSR_PMPCFG6                    0x3a6
> +#define CSR_PMPCFG7                    0x3a7
> +#define CSR_PMPCFG8                    0x3a8
> +#define CSR_PMPCFG9                    0x3a9
> +#define CSR_PMPCFG10                   0x3aa
> +#define CSR_PMPCFG11                   0x3ab
> +#define CSR_PMPCFG12                   0x3ac
> +#define CSR_PMPCFG13                   0x3ad
> +#define CSR_PMPCFG14                   0x3ae
> +#define CSR_PMPCFG15                   0x3af
> +#define CSR_PMPADDR0                   0x3b0
> +#define CSR_PMPADDR1                   0x3b1
> +#define CSR_PMPADDR2                   0x3b2
> +#define CSR_PMPADDR3                   0x3b3
> +#define CSR_PMPADDR4                   0x3b4
> +#define CSR_PMPADDR5                   0x3b5
> +#define CSR_PMPADDR6                   0x3b6
> +#define CSR_PMPADDR7                   0x3b7
> +#define CSR_PMPADDR8                   0x3b8
> +#define CSR_PMPADDR9                   0x3b9
> +#define CSR_PMPADDR10                  0x3ba
> +#define CSR_PMPADDR11                  0x3bb
> +#define CSR_PMPADDR12                  0x3bc
> +#define CSR_PMPADDR13                  0x3bd
> +#define CSR_PMPADDR14                  0x3be
> +#define CSR_PMPADDR15                  0x3bf
> +#define CSR_PMPADDR16                  0x3c0
> +#define CSR_PMPADDR17                  0x3c1
> +#define CSR_PMPADDR18                  0x3c2
> +#define CSR_PMPADDR19                  0x3c3
> +#define CSR_PMPADDR20                  0x3c4
> +#define CSR_PMPADDR21                  0x3c5
> +#define CSR_PMPADDR22                  0x3c6
> +#define CSR_PMPADDR23                  0x3c7
> +#define CSR_PMPADDR24                  0x3c8
> +#define CSR_PMPADDR25                  0x3c9
> +#define CSR_PMPADDR26                  0x3ca
> +#define CSR_PMPADDR27                  0x3cb
> +#define CSR_PMPADDR28                  0x3cc
> +#define CSR_PMPADDR29                  0x3cd
> +#define CSR_PMPADDR30                  0x3ce
> +#define CSR_PMPADDR31                  0x3cf
> +#define CSR_PMPADDR32                  0x3d0
> +#define CSR_PMPADDR33                  0x3d1
> +#define CSR_PMPADDR34                  0x3d2
> +#define CSR_PMPADDR35                  0x3d3
> +#define CSR_PMPADDR36                  0x3d4
> +#define CSR_PMPADDR37                  0x3d5
> +#define CSR_PMPADDR38                  0x3d6
> +#define CSR_PMPADDR39                  0x3d7
> +#define CSR_PMPADDR40                  0x3d8
> +#define CSR_PMPADDR41                  0x3d9
> +#define CSR_PMPADDR42                  0x3da
> +#define CSR_PMPADDR43                  0x3db
> +#define CSR_PMPADDR44                  0x3dc
> +#define CSR_PMPADDR45                  0x3dd
> +#define CSR_PMPADDR46                  0x3de
> +#define CSR_PMPADDR47                  0x3df
> +#define CSR_PMPADDR48                  0x3e0
> +#define CSR_PMPADDR49                  0x3e1
> +#define CSR_PMPADDR50                  0x3e2
> +#define CSR_PMPADDR51                  0x3e3
> +#define CSR_PMPADDR52                  0x3e4
> +#define CSR_PMPADDR53                  0x3e5
> +#define CSR_PMPADDR54                  0x3e6
> +#define CSR_PMPADDR55                  0x3e7
> +#define CSR_PMPADDR56                  0x3e8
> +#define CSR_PMPADDR57                  0x3e9
> +#define CSR_PMPADDR58                  0x3ea
> +#define CSR_PMPADDR59                  0x3eb
> +#define CSR_PMPADDR60                  0x3ec
> +#define CSR_PMPADDR61                  0x3ed
> +#define CSR_PMPADDR62                  0x3ee
> +#define CSR_PMPADDR63                  0x3ef
> +
> +/* Machine Counters/Timers */
> +#define CSR_MCYCLE                     0xb00
> +#define CSR_MINSTRET                   0xb02
> +#define CSR_MHPMCOUNTER3               0xb03
> +#define CSR_MHPMCOUNTER4               0xb04
> +#define CSR_MHPMCOUNTER5               0xb05
> +#define CSR_MHPMCOUNTER6               0xb06
> +#define CSR_MHPMCOUNTER7               0xb07
> +#define CSR_MHPMCOUNTER8               0xb08
> +#define CSR_MHPMCOUNTER9               0xb09
> +#define CSR_MHPMCOUNTER10              0xb0a
> +#define CSR_MHPMCOUNTER11              0xb0b
> +#define CSR_MHPMCOUNTER12              0xb0c
> +#define CSR_MHPMCOUNTER13              0xb0d
> +#define CSR_MHPMCOUNTER14              0xb0e
> +#define CSR_MHPMCOUNTER15              0xb0f
> +#define CSR_MHPMCOUNTER16              0xb10
> +#define CSR_MHPMCOUNTER17              0xb11
> +#define CSR_MHPMCOUNTER18              0xb12
> +#define CSR_MHPMCOUNTER19              0xb13
> +#define CSR_MHPMCOUNTER20              0xb14
> +#define CSR_MHPMCOUNTER21              0xb15
> +#define CSR_MHPMCOUNTER22              0xb16
> +#define CSR_MHPMCOUNTER23              0xb17
> +#define CSR_MHPMCOUNTER24              0xb18
> +#define CSR_MHPMCOUNTER25              0xb19
> +#define CSR_MHPMCOUNTER26              0xb1a
> +#define CSR_MHPMCOUNTER27              0xb1b
> +#define CSR_MHPMCOUNTER28              0xb1c
> +#define CSR_MHPMCOUNTER29              0xb1d
> +#define CSR_MHPMCOUNTER30              0xb1e
> +#define CSR_MHPMCOUNTER31              0xb1f
> +#define CSR_MCYCLEH                    0xb80
> +#define CSR_MINSTRETH                  0xb82
> +#define CSR_MHPMCOUNTER3H              0xb83
> +#define CSR_MHPMCOUNTER4H              0xb84
> +#define CSR_MHPMCOUNTER5H              0xb85
> +#define CSR_MHPMCOUNTER6H              0xb86
> +#define CSR_MHPMCOUNTER7H              0xb87
> +#define CSR_MHPMCOUNTER8H              0xb88
> +#define CSR_MHPMCOUNTER9H              0xb89
> +#define CSR_MHPMCOUNTER10H             0xb8a
> +#define CSR_MHPMCOUNTER11H             0xb8b
> +#define CSR_MHPMCOUNTER12H             0xb8c
> +#define CSR_MHPMCOUNTER13H             0xb8d
> +#define CSR_MHPMCOUNTER14H             0xb8e
> +#define CSR_MHPMCOUNTER15H             0xb8f
> +#define CSR_MHPMCOUNTER16H             0xb90
> +#define CSR_MHPMCOUNTER17H             0xb91
> +#define CSR_MHPMCOUNTER18H             0xb92
> +#define CSR_MHPMCOUNTER19H             0xb93
> +#define CSR_MHPMCOUNTER20H             0xb94
> +#define CSR_MHPMCOUNTER21H             0xb95
> +#define CSR_MHPMCOUNTER22H             0xb96
> +#define CSR_MHPMCOUNTER23H             0xb97
> +#define CSR_MHPMCOUNTER24H             0xb98
> +#define CSR_MHPMCOUNTER25H             0xb99
> +#define CSR_MHPMCOUNTER26H             0xb9a
> +#define CSR_MHPMCOUNTER27H             0xb9b
> +#define CSR_MHPMCOUNTER28H             0xb9c
> +#define CSR_MHPMCOUNTER29H             0xb9d
> +#define CSR_MHPMCOUNTER30H             0xb9e
> +#define CSR_MHPMCOUNTER31H             0xb9f
> +
> +/* Machine Counter Setup */
> +#define CSR_MCOUNTINHIBIT              0x320
> +#define CSR_MHPMEVENT3                 0x323
> +#define CSR_MHPMEVENT4                 0x324
> +#define CSR_MHPMEVENT5                 0x325
> +#define CSR_MHPMEVENT6                 0x326
> +#define CSR_MHPMEVENT7                 0x327
> +#define CSR_MHPMEVENT8                 0x328
> +#define CSR_MHPMEVENT9                 0x329
> +#define CSR_MHPMEVENT10                        0x32a
> +#define CSR_MHPMEVENT11                        0x32b
> +#define CSR_MHPMEVENT12                        0x32c
> +#define CSR_MHPMEVENT13                        0x32d
> +#define CSR_MHPMEVENT14                        0x32e
> +#define CSR_MHPMEVENT15                        0x32f
> +#define CSR_MHPMEVENT16                        0x330
> +#define CSR_MHPMEVENT17                        0x331
> +#define CSR_MHPMEVENT18                        0x332
> +#define CSR_MHPMEVENT19                        0x333
> +#define CSR_MHPMEVENT20                        0x334
> +#define CSR_MHPMEVENT21                        0x335
> +#define CSR_MHPMEVENT22                        0x336
> +#define CSR_MHPMEVENT23                        0x337
> +#define CSR_MHPMEVENT24                        0x338
> +#define CSR_MHPMEVENT25                        0x339
> +#define CSR_MHPMEVENT26                        0x33a
> +#define CSR_MHPMEVENT27                        0x33b
> +#define CSR_MHPMEVENT28                        0x33c
> +#define CSR_MHPMEVENT29                        0x33d
> +#define CSR_MHPMEVENT30                        0x33e
> +#define CSR_MHPMEVENT31                        0x33f
> +
> +/* For RV32 */
> +#define CSR_MHPMEVENT3H                        0x723
> +#define CSR_MHPMEVENT4H                        0x724
> +#define CSR_MHPMEVENT5H                        0x725
> +#define CSR_MHPMEVENT6H                        0x726
> +#define CSR_MHPMEVENT7H                        0x727
> +#define CSR_MHPMEVENT8H                        0x728
> +#define CSR_MHPMEVENT9H                        0x729
> +#define CSR_MHPMEVENT10H               0x72a
> +#define CSR_MHPMEVENT11H               0x72b
> +#define CSR_MHPMEVENT12H               0x72c
> +#define CSR_MHPMEVENT13H               0x72d
> +#define CSR_MHPMEVENT14H               0x72e
> +#define CSR_MHPMEVENT15H               0x72f
> +#define CSR_MHPMEVENT16H               0x730
> +#define CSR_MHPMEVENT17H               0x731
> +#define CSR_MHPMEVENT18H               0x732
> +#define CSR_MHPMEVENT19H               0x733
> +#define CSR_MHPMEVENT20H               0x734
> +#define CSR_MHPMEVENT21H               0x735
> +#define CSR_MHPMEVENT22H               0x736
> +#define CSR_MHPMEVENT23H               0x737
> +#define CSR_MHPMEVENT24H               0x738
> +#define CSR_MHPMEVENT25H               0x739
> +#define CSR_MHPMEVENT26H               0x73a
> +#define CSR_MHPMEVENT27H               0x73b
> +#define CSR_MHPMEVENT28H               0x73c
> +#define CSR_MHPMEVENT29H               0x73d
> +#define CSR_MHPMEVENT30H               0x73e
> +#define CSR_MHPMEVENT31H               0x73f
> +
> +/* Counter Overflow CSR */
> +#define CSR_SCOUNTOVF                  0xda0
> +
> +/* Debug/Trace Registers */
> +#define CSR_TSELECT                    0x7a0
> +#define CSR_TDATA1                     0x7a1
> +#define CSR_TDATA2                     0x7a2
> +#define CSR_TDATA3                     0x7a3
> +
> +/* Debug Mode Registers */
> +#define CSR_DCSR                       0x7b0
> +#define CSR_DPC                                0x7b1
> +#define CSR_DSCRATCH0                  0x7b2
> +#define CSR_DSCRATCH1                  0x7b3
> +
> +/* Machine-Level Window to Indirectly Accessed Registers (AIA) */
> +#define CSR_MISELECT                   0x350
> +#define CSR_MIREG                      0x351
> +
> +/* Machine-Level Interrupts (AIA) */
> +#define CSR_MTOPI                      0xfb0
> +
> +/* Machine-Level IMSIC Interface (AIA) */
> +#define CSR_MSETEIPNUM                 0x358
> +#define CSR_MCLREIPNUM                 0x359
> +#define CSR_MSETEIENUM                 0x35a
> +#define CSR_MCLREIENUM                 0x35b
> +#define CSR_MTOPEI                     0x35c
> +
> +/* Virtual Interrupts for Supervisor Level (AIA) */
> +#define CSR_MVIEN                      0x308
> +#define CSR_MVIP                       0x309
> +
> +/* Smstateen extension registers */
> +/* Machine stateen CSRs */
> +#define CSR_MSTATEEN0                  0x30C
> +#define CSR_MSTATEEN0H                 0x31C
> +#define CSR_MSTATEEN1                  0x30D
> +#define CSR_MSTATEEN1H                 0x31D
> +#define CSR_MSTATEEN2                  0x30E
> +#define CSR_MSTATEEN2H                 0x31E
> +#define CSR_MSTATEEN3                  0x30F
> +#define CSR_MSTATEEN3H                 0x31F
> +
> +/* Machine-Level High-Half CSRs (AIA) */
> +#define CSR_MIDELEGH                   0x313
> +#define CSR_MIEH                       0x314
> +#define CSR_MVIENH                     0x318
> +#define CSR_MVIPH                      0x319
> +#define CSR_MIPH                       0x354
> +
> +/* ===== Trap/Exception Causes ===== */
> +
> +/* High bit of the cause register: set if the trap is an interrupt */
> +#define CAUSE_IRQ_FLAG                 (_UL(1) << (__riscv_xlen - 1))
> +
> +#define CAUSE_MISALIGNED_FETCH         0x0
> +#define CAUSE_FETCH_ACCESS             0x1
> +#define CAUSE_ILLEGAL_INSTRUCTION      0x2
> +#define CAUSE_BREAKPOINT               0x3
> +#define CAUSE_MISALIGNED_LOAD          0x4
> +#define CAUSE_LOAD_ACCESS              0x5
> +#define CAUSE_MISALIGNED_STORE         0x6
> +#define CAUSE_STORE_ACCESS             0x7
> +#define CAUSE_USER_ECALL               0x8
> +#define CAUSE_SUPERVISOR_ECALL         0x9
> +#define CAUSE_VIRTUAL_SUPERVISOR_ECALL 0xa
> +#define CAUSE_MACHINE_ECALL            0xb
> +#define CAUSE_FETCH_PAGE_FAULT         0xc
> +#define CAUSE_LOAD_PAGE_FAULT          0xd
> +#define CAUSE_STORE_PAGE_FAULT         0xf
> +#define CAUSE_FETCH_GUEST_PAGE_FAULT   0x14
> +#define CAUSE_LOAD_GUEST_PAGE_FAULT    0x15
> +#define CAUSE_VIRTUAL_INST_FAULT       0x16
> +#define CAUSE_STORE_GUEST_PAGE_FAULT   0x17
> +
> +/* Common defines for all smstateen */
> +#define SMSTATEEN_MAX_COUNT            4
> +#define SMSTATEEN0_CS_SHIFT            0
> +#define SMSTATEEN0_CS                  (_ULL(1) << SMSTATEEN0_CS_SHIFT)
> +#define SMSTATEEN0_FCSR_SHIFT          1
> +#define SMSTATEEN0_FCSR                        (_ULL(1) << SMSTATEEN0_FCSR_SHIFT)
> +#define SMSTATEEN0_IMSIC_SHIFT         58
> +#define SMSTATEEN0_IMSIC               (_ULL(1) << SMSTATEEN0_IMSIC_SHIFT)
> +#define SMSTATEEN0_AIA_SHIFT           59
> +#define SMSTATEEN0_AIA                 (_ULL(1) << SMSTATEEN0_AIA_SHIFT)
> +#define SMSTATEEN0_SVSLCT_SHIFT                60
> +#define SMSTATEEN0_SVSLCT              (_ULL(1) << SMSTATEEN0_SVSLCT_SHIFT)
> +#define SMSTATEEN0_HSENVCFG_SHIFT      62
> +#define SMSTATEEN0_HSENVCFG            (_ULL(1) << SMSTATEEN0_HSENVCFG_SHIFT)
> +#define SMSTATEEN_STATEN_SHIFT         63
> +#define SMSTATEEN_STATEN               (_ULL(1) << SMSTATEEN_STATEN_SHIFT)
> +
> +/* ===== Instruction Encodings ===== */
> +
> +#define INSN_MATCH_LB                  0x3
> +#define INSN_MASK_LB                   0x707f
> +#define INSN_MATCH_LH                  0x1003
> +#define INSN_MASK_LH                   0x707f
> +#define INSN_MATCH_LW                  0x2003
> +#define INSN_MASK_LW                   0x707f
> +#define INSN_MATCH_LD                  0x3003
> +#define INSN_MASK_LD                   0x707f
> +#define INSN_MATCH_LBU                 0x4003
> +#define INSN_MASK_LBU                  0x707f
> +#define INSN_MATCH_LHU                 0x5003
> +#define INSN_MASK_LHU                  0x707f
> +#define INSN_MATCH_LWU                 0x6003
> +#define INSN_MASK_LWU                  0x707f
> +#define INSN_MATCH_SB                  0x23
> +#define INSN_MASK_SB                   0x707f
> +#define INSN_MATCH_SH                  0x1023
> +#define INSN_MASK_SH                   0x707f
> +#define INSN_MATCH_SW                  0x2023
> +#define INSN_MASK_SW                   0x707f
> +#define INSN_MATCH_SD                  0x3023
> +#define INSN_MASK_SD                   0x707f
> +
> +#define INSN_MATCH_FLW                 0x2007
> +#define INSN_MASK_FLW                  0x707f
> +#define INSN_MATCH_FLD                 0x3007
> +#define INSN_MASK_FLD                  0x707f
> +#define INSN_MATCH_FLQ                 0x4007
> +#define INSN_MASK_FLQ                  0x707f
> +#define INSN_MATCH_FSW                 0x2027
> +#define INSN_MASK_FSW                  0x707f
> +#define INSN_MATCH_FSD                 0x3027
> +#define INSN_MASK_FSD                  0x707f
> +#define INSN_MATCH_FSQ                 0x4027
> +#define INSN_MASK_FSQ                  0x707f
> +
> +#define INSN_MATCH_C_LD                        0x6000
> +#define INSN_MASK_C_LD                 0xe003
> +#define INSN_MATCH_C_SD                        0xe000
> +#define INSN_MASK_C_SD                 0xe003
> +#define INSN_MATCH_C_LW                        0x4000
> +#define INSN_MASK_C_LW                 0xe003
> +#define INSN_MATCH_C_SW                        0xc000
> +#define INSN_MASK_C_SW                 0xe003
> +#define INSN_MATCH_C_LDSP              0x6002
> +#define INSN_MASK_C_LDSP               0xe003
> +#define INSN_MATCH_C_SDSP              0xe002
> +#define INSN_MASK_C_SDSP               0xe003
> +#define INSN_MATCH_C_LWSP              0x4002
> +#define INSN_MASK_C_LWSP               0xe003
> +#define INSN_MATCH_C_SWSP              0xc002
> +#define INSN_MASK_C_SWSP               0xe003
> +
> +#define INSN_MATCH_C_FLD               0x2000
> +#define INSN_MASK_C_FLD                        0xe003
> +#define INSN_MATCH_C_FLW               0x6000
> +#define INSN_MASK_C_FLW                        0xe003
> +#define INSN_MATCH_C_FSD               0xa000
> +#define INSN_MASK_C_FSD                        0xe003
> +#define INSN_MATCH_C_FSW               0xe000
> +#define INSN_MASK_C_FSW                        0xe003
> +#define INSN_MATCH_C_FLDSP             0x2002
> +#define INSN_MASK_C_FLDSP              0xe003
> +#define INSN_MATCH_C_FSDSP             0xa002
> +#define INSN_MASK_C_FSDSP              0xe003
> +#define INSN_MATCH_C_FLWSP             0x6002
> +#define INSN_MASK_C_FLWSP              0xe003
> +#define INSN_MATCH_C_FSWSP             0xe002
> +#define INSN_MASK_C_FSWSP              0xe003
> +
> +#define INSN_MASK_WFI                  0xffffff00
> +#define INSN_MATCH_WFI                 0x10500000
> +
> +#define INSN_16BIT_MASK                        0x3
> +#define INSN_32BIT_MASK                        0x1c
> +
> +#define INSN_IS_16BIT(insn)            \
> +       (((insn) & INSN_16BIT_MASK) != INSN_16BIT_MASK)
> +#define INSN_IS_32BIT(insn)            \
> +       (((insn) & INSN_16BIT_MASK) == INSN_16BIT_MASK && \
> +        ((insn) & INSN_32BIT_MASK) != INSN_32BIT_MASK)
> +
> +#define INSN_LEN(insn)                 (INSN_IS_16BIT(insn) ? 2 : 4)
> +
> +#if __riscv_xlen == 64
> +#define LOG_REGBYTES                   3
> +#else
> +#define LOG_REGBYTES                   2
> +#endif
> +#define REGBYTES                       (1 << LOG_REGBYTES)
> +
> +#define SH_RD                          7
> +#define SH_RS1                         15
> +#define SH_RS2                         20
> +#define SH_RS2C                                2
> +
> +#define RV_X(x, s, n)                  (((x) >> (s)) & ((1 << (n)) - 1))
> +#define RVC_LW_IMM(x)                  ((RV_X(x, 6, 1) << 2) | \
> +                                        (RV_X(x, 10, 3) << 3) | \
> +                                        (RV_X(x, 5, 1) << 6))
> +#define RVC_LD_IMM(x)                  ((RV_X(x, 10, 3) << 3) | \
> +                                        (RV_X(x, 5, 2) << 6))
> +#define RVC_LWSP_IMM(x)                        ((RV_X(x, 4, 3) << 2) | \
> +                                        (RV_X(x, 12, 1) << 5) | \
> +                                        (RV_X(x, 2, 2) << 6))
> +#define RVC_LDSP_IMM(x)                        ((RV_X(x, 5, 2) << 3) | \
> +                                        (RV_X(x, 12, 1) << 5) | \
> +                                        (RV_X(x, 2, 3) << 6))
> +#define RVC_SWSP_IMM(x)                        ((RV_X(x, 9, 4) << 2) | \
> +                                        (RV_X(x, 7, 2) << 6))
> +#define RVC_SDSP_IMM(x)                        ((RV_X(x, 10, 3) << 3) | \
> +                                        (RV_X(x, 7, 3) << 6))
> +#define RVC_RS1S(insn)                 (8 + RV_X(insn, SH_RD, 3))
> +#define RVC_RS2S(insn)                 (8 + RV_X(insn, SH_RS2C, 3))
> +#define RVC_RS2(insn)                  RV_X(insn, SH_RS2C, 5)
> +
> +#define SHIFT_RIGHT(x, y)              \
> +       ((y) < 0 ? ((x) << -(y)) : ((x) >> (y)))
> +
> +#define REG_MASK                       \
> +       ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES))
> +
> +#define REG_OFFSET(insn, pos)          \
> +       (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK)
> +
> +#define REG_PTR(insn, pos, regs)       \
> +       (unsigned long *)((unsigned long)(regs) + REG_OFFSET(insn, pos))
> +
> +#define GET_RM(insn)                   (((insn) >> 12) & 7)
> +
> +#define GET_RS1(insn, regs)            (*REG_PTR(insn, SH_RS1, regs))
> +#define GET_RS2(insn, regs)            (*REG_PTR(insn, SH_RS2, regs))
> +#define GET_RS1S(insn, regs)           (*REG_PTR(RVC_RS1S(insn), 0, regs))
> +#define GET_RS2S(insn, regs)           (*REG_PTR(RVC_RS2S(insn), 0, regs))
> +#define GET_RS2C(insn, regs)           (*REG_PTR(insn, SH_RS2C, regs))
> +#define GET_SP(regs)                   (*REG_PTR(2, 0, regs))
> +#define SET_RD(insn, regs, val)                (*REG_PTR(insn, SH_RD, regs) = (val))
> +#define IMM_I(insn)                    ((s32)(insn) >> 20)
> +#define IMM_S(insn)                    (((s32)(insn) >> 25 << 5) | \
> +                                        (s32)(((insn) >> 7) & 0x1f))
> +#define MASK_FUNCT3                    0x7000
> +
> +/* clang-format on */
> +
> +#endif
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 23:26:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 23:26:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482567.748135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjia-0008RB-Og; Sun, 22 Jan 2023 23:25:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482567.748135; Sun, 22 Jan 2023 23:25:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjia-0008R4-Kr; Sun, 22 Jan 2023 23:25:44 +0000
Received: by outflank-mailman (input) for mailman id 482567;
 Sun, 22 Jan 2023 23:25:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zCZZ=5T=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pJjiZ-0007r1-E7
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 23:25:43 +0000
Received: from mail-vk1-xa2e.google.com (mail-vk1-xa2e.google.com
 [2607:f8b0:4864:20::a2e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 159eb004-9aac-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 00:25:41 +0100 (CET)
Received: by mail-vk1-xa2e.google.com with SMTP id q21so5202335vka.3
 for <xen-devel@lists.xenproject.org>; Sun, 22 Jan 2023 15:25:41 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 159eb004-9aac-11ed-b8d1-410ff93cb8f0
MIME-Version: 1.0
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <afc53b9bee58b5d386f105ee8f23a411d5a15bed.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To: <afc53b9bee58b5d386f105ee8f23a411d5a15bed.1674226563.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 23 Jan 2023 09:25:14 +1000
Message-ID: <CAKmqyKOmAg9D0r-k6Z3VoVSTynY58z0GUb+oCrkzi_Q9HZju_w@mail.gmail.com>
Subject: Re: [PATCH v1 04/14] xen/riscv: add <asm/csr.h> header
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/arch/riscv/include/asm/csr.h | 82 ++++++++++++++++++++++++++++++++
>  1 file changed, 82 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/csr.h
>
> diff --git a/xen/arch/riscv/include/asm/csr.h b/xen/arch/riscv/include/asm/csr.h
> new file mode 100644
> index 0000000000..1a879c6c4d
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/csr.h
> @@ -0,0 +1,82 @@
> +/*
> + * Taken from Linux.
> + *
> + * SPDX-License-Identifier: GPL-2.0-only
> + *
> + * Copyright (C) 2015 Regents of the University of California
> + */
> +
> +#ifndef _ASM_RISCV_CSR_H
> +#define _ASM_RISCV_CSR_H
> +
> +#include <asm/asm.h>
> +#include <xen/const.h>
> +#include <asm/riscv_encoding.h>
> +
> +#ifndef __ASSEMBLY__
> +
> +#define csr_read(csr)                                          \
> +({                                                             \
> +       register unsigned long __v;                             \
> +       __asm__ __volatile__ ("csrr %0, " __ASM_STR(csr)        \
> +                             : "=r" (__v) :                    \
> +                             : "memory");                      \
> +       __v;                                                    \
> +})
> +
> +#define csr_write(csr, val)                                    \
> +({                                                             \
> +       unsigned long __v = (unsigned long)(val);               \
> +       __asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0"     \
> +                             : : "rK" (__v)                    \
> +                             : "memory");                      \
> +})
> +
> +/*
> +#define csr_swap(csr, val)                                     \
> +({                                                             \
> +       unsigned long __v = (unsigned long)(val);               \
> +       __asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\
> +                             : "=r" (__v) : "rK" (__v)         \
> +                             : "memory");                      \
> +       __v;                                                    \
> +})
> +
> +#define csr_read_set(csr, val)                                 \
> +({                                                             \
> +       unsigned long __v = (unsigned long)(val);               \
> +       __asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\
> +                             : "=r" (__v) : "rK" (__v)         \
> +                             : "memory");                      \
> +       __v;                                                    \
> +})
> +
> +#define csr_set(csr, val)                                      \
> +({                                                             \
> +       unsigned long __v = (unsigned long)(val);               \
> +       __asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0"     \
> +                             : : "rK" (__v)                    \
> +                             : "memory");                      \
> +})
> +
> +#define csr_read_clear(csr, val)                               \
> +({                                                             \
> +       unsigned long __v = (unsigned long)(val);               \
> +       __asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\
> +                             : "=r" (__v) : "rK" (__v)         \
> +                             : "memory");                      \
> +       __v;                                                    \
> +})
> +
> +#define csr_clear(csr, val)                                    \
> +({                                                             \
> +       unsigned long __v = (unsigned long)(val);               \
> +       __asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0"     \
> +                             : : "rK" (__v)                    \
> +                             : "memory");                      \
> +})
> +*/
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* _ASM_RISCV_CSR_H */
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 23:30:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 23:30:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482573.748145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjmh-0000fO-9Q; Sun, 22 Jan 2023 23:29:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482573.748145; Sun, 22 Jan 2023 23:29:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjmh-0000fH-6L; Sun, 22 Jan 2023 23:29:59 +0000
Received: by outflank-mailman (input) for mailman id 482573;
 Sun, 22 Jan 2023 23:29:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zCZZ=5T=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pJjmf-0000fB-PF
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 23:29:57 +0000
Received: from mail-vs1-xe2f.google.com (mail-vs1-xe2f.google.com
 [2607:f8b0:4864:20::e2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ac7bff8e-9aac-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 00:29:54 +0100 (CET)
Received: by mail-vs1-xe2f.google.com with SMTP id k4so11216601vsc.4
 for <xen-devel@lists.xenproject.org>; Sun, 22 Jan 2023 15:29:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac7bff8e-9aac-11ed-91b6-6bf2151ebd3b
MIME-Version: 1.0
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To: <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 23 Jan 2023 09:29:27 +1000
Message-ID: <CAKmqyKP7CXjWu9DzdtjGn_qX3et8eUtaYmMEPLtxPL8EE=vVEQ@mail.gmail.com>
Subject: Re: [PATCH v1 07/14] xen/riscv: introduce exception handlers implementation
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> The patch introduces an implementation of basic exception handlers:
> - to save/restore context
> - to handle the exception itself. For now the handler only calls
>   wait_for_interrupt, nothing more.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/arch/riscv/Makefile            |  2 +
>  xen/arch/riscv/entry.S             | 97 ++++++++++++++++++++++++++++++
>  xen/arch/riscv/include/asm/traps.h | 13 ++++
>  xen/arch/riscv/traps.c             | 13 ++++
>  4 files changed, 125 insertions(+)
>  create mode 100644 xen/arch/riscv/entry.S
>  create mode 100644 xen/arch/riscv/include/asm/traps.h
>  create mode 100644 xen/arch/riscv/traps.c
>
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 1a4f1a6015..443f6bf15f 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,7 +1,9 @@
>  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> +obj-y += entry.o
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += sbi.o
>  obj-y += setup.o
> +obj-y += traps.o
>
>  $(TARGET): $(TARGET)-syms
>         $(OBJCOPY) -O binary -S $< $@
> diff --git a/xen/arch/riscv/entry.S b/xen/arch/riscv/entry.S
> new file mode 100644
> index 0000000000..f7d46f42bb
> --- /dev/null
> +++ b/xen/arch/riscv/entry.S
> @@ -0,0 +1,97 @@
> +#include <asm/asm.h>
> +#include <asm/processor.h>
> +#include <asm/riscv_encoding.h>
> +#include <asm/traps.h>
> +
> +        .global handle_exception
> +        .align 4
> +
> +handle_exception:
> +
> +    /* Exceptions from xen */
> +save_to_stack:
> +        /* Save context to stack */
> +        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) - RISCV_CPU_USER_REGS_SIZE) (sp)
> +        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
> +        j       save_context
> +
> +save_context:
> +        /* Save registers */
> +        REG_S   ra, RISCV_CPU_USER_REGS_OFFSET(ra)(sp)
> +        REG_S   gp, RISCV_CPU_USER_REGS_OFFSET(gp)(sp)
> +        REG_S   t1, RISCV_CPU_USER_REGS_OFFSET(t1)(sp)
> +        REG_S   t2, RISCV_CPU_USER_REGS_OFFSET(t2)(sp)
> +        REG_S   s0, RISCV_CPU_USER_REGS_OFFSET(s0)(sp)
> +        REG_S   s1, RISCV_CPU_USER_REGS_OFFSET(s1)(sp)
> +        REG_S   a0, RISCV_CPU_USER_REGS_OFFSET(a0)(sp)
> +        REG_S   a1, RISCV_CPU_USER_REGS_OFFSET(a1)(sp)
> +        REG_S   a2, RISCV_CPU_USER_REGS_OFFSET(a2)(sp)
> +        REG_S   a3, RISCV_CPU_USER_REGS_OFFSET(a3)(sp)
> +        REG_S   a4, RISCV_CPU_USER_REGS_OFFSET(a4)(sp)
> +        REG_S   a5, RISCV_CPU_USER_REGS_OFFSET(a5)(sp)
> +        REG_S   a6, RISCV_CPU_USER_REGS_OFFSET(a6)(sp)
> +        REG_S   a7, RISCV_CPU_USER_REGS_OFFSET(a7)(sp)
> +        REG_S   s2, RISCV_CPU_USER_REGS_OFFSET(s2)(sp)
> +        REG_S   s3, RISCV_CPU_USER_REGS_OFFSET(s3)(sp)
> +        REG_S   s4, RISCV_CPU_USER_REGS_OFFSET(s4)(sp)
> +        REG_S   s5, RISCV_CPU_USER_REGS_OFFSET(s5)(sp)
> +        REG_S   s6, RISCV_CPU_USER_REGS_OFFSET(s6)(sp)
> +        REG_S   s7, RISCV_CPU_USER_REGS_OFFSET(s7)(sp)
> +        REG_S   s8, RISCV_CPU_USER_REGS_OFFSET(s8)(sp)
> +        REG_S   s9, RISCV_CPU_USER_REGS_OFFSET(s9)(sp)
> +        REG_S   s10, RISCV_CPU_USER_REGS_OFFSET(s10)(sp)
> +        REG_S   s11, RISCV_CPU_USER_REGS_OFFSET(s11)(sp)
> +        REG_S   t3, RISCV_CPU_USER_REGS_OFFSET(t3)(sp)
> +        REG_S   t4, RISCV_CPU_USER_REGS_OFFSET(t4)(sp)
> +        REG_S   t5, RISCV_CPU_USER_REGS_OFFSET(t5)(sp)
> +        REG_S   t6, RISCV_CPU_USER_REGS_OFFSET(t6)(sp)
> +        csrr    t0, CSR_SEPC
> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sepc)(sp)
> +        csrr    t0, CSR_SSTATUS
> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sstatus)(sp)
> +
> +        mv      a0, sp
> +        jal     __handle_exception
> +
> +restore_registers:
> +        /* Restore stack_cpu_regs */
> +        REG_L   t0, RISCV_CPU_USER_REGS_OFFSET(sepc)(sp)
> +        csrw    CSR_SEPC, t0
> +        REG_L   t0, RISCV_CPU_USER_REGS_OFFSET(sstatus)(sp)
> +        csrw    CSR_SSTATUS, t0
> +
> +        REG_L   ra, RISCV_CPU_USER_REGS_OFFSET(ra)(sp)
> +        REG_L   gp, RISCV_CPU_USER_REGS_OFFSET(gp)(sp)
> +        REG_L   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
> +        REG_L   t1, RISCV_CPU_USER_REGS_OFFSET(t1)(sp)
> +        REG_L   t2, RISCV_CPU_USER_REGS_OFFSET(t2)(sp)
> +        REG_L   s0, RISCV_CPU_USER_REGS_OFFSET(s0)(sp)
> +        REG_L   s1, RISCV_CPU_USER_REGS_OFFSET(s1)(sp)
> +        REG_L   a0, RISCV_CPU_USER_REGS_OFFSET(a0)(sp)
> +        REG_L   a1, RISCV_CPU_USER_REGS_OFFSET(a1)(sp)
> +        REG_L   a2, RISCV_CPU_USER_REGS_OFFSET(a2)(sp)
> +        REG_L   a3, RISCV_CPU_USER_REGS_OFFSET(a3)(sp)
> +        REG_L   a4, RISCV_CPU_USER_REGS_OFFSET(a4)(sp)
> +        REG_L   a5, RISCV_CPU_USER_REGS_OFFSET(a5)(sp)
> +        REG_L   a6, RISCV_CPU_USER_REGS_OFFSET(a6)(sp)
> +        REG_L   a7, RISCV_CPU_USER_REGS_OFFSET(a7)(sp)
> +        REG_L   s2, RISCV_CPU_USER_REGS_OFFSET(s2)(sp)
> +        REG_L   s3, RISCV_CPU_USER_REGS_OFFSET(s3)(sp)
> +        REG_L   s4, RISCV_CPU_USER_REGS_OFFSET(s4)(sp)
> +        REG_L   s5, RISCV_CPU_USER_REGS_OFFSET(s5)(sp)
> +        REG_L   s6, RISCV_CPU_USER_REGS_OFFSET(s6)(sp)
> +        REG_L   s7, RISCV_CPU_USER_REGS_OFFSET(s7)(sp)
> +        REG_L   s8, RISCV_CPU_USER_REGS_OFFSET(s8)(sp)
> +        REG_L   s9, RISCV_CPU_USER_REGS_OFFSET(s9)(sp)
> +        REG_L   s10, RISCV_CPU_USER_REGS_OFFSET(s10)(sp)
> +        REG_L   s11, RISCV_CPU_USER_REGS_OFFSET(s11)(sp)
> +        REG_L   t3, RISCV_CPU_USER_REGS_OFFSET(t3)(sp)
> +        REG_L   t4, RISCV_CPU_USER_REGS_OFFSET(t4)(sp)
> +        REG_L   t5, RISCV_CPU_USER_REGS_OFFSET(t5)(sp)
> +        REG_L   t6, RISCV_CPU_USER_REGS_OFFSET(t6)(sp)
> +
> +        /* Restore sp */
> +        REG_L   sp, RISCV_CPU_USER_REGS_OFFSET(sp)(sp)
> +
> +        sret
> diff --git a/xen/arch/riscv/include/asm/traps.h b/xen/arch/riscv/include/asm/traps.h
> new file mode 100644
> index 0000000000..816ab1178a
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/traps.h
> @@ -0,0 +1,13 @@
> +#ifndef __ASM_TRAPS_H__
> +#define __ASM_TRAPS_H__
> +
> +#include <asm/processor.h>
> +
> +#ifndef __ASSEMBLY__
> +
> +void __handle_exception(struct cpu_user_regs *cpu_regs);
> +void handle_exception(void);
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* __ASM_TRAPS_H__ */
> diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
> new file mode 100644
> index 0000000000..3201b851ef
> --- /dev/null
> +++ b/xen/arch/riscv/traps.c
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/*
> + * Copyright (C) 2023 Vates
> + *
> + * RISC-V Trap handlers
> + */
> +#include <asm/processor.h>
> +#include <asm/traps.h>
> +
> +void __handle_exception(struct cpu_user_regs *cpu_regs)
> +{
> +    wait_for_interrupt();
> +}
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 23:38:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Jan 2023 23:38:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482579.748158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjvD-00028m-5A; Sun, 22 Jan 2023 23:38:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482579.748158; Sun, 22 Jan 2023 23:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJjvD-00028f-2F; Sun, 22 Jan 2023 23:38:47 +0000
Received: by outflank-mailman (input) for mailman id 482579;
 Sun, 22 Jan 2023 23:38:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zCZZ=5T=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pJjvB-00028Z-NA
 for xen-devel@lists.xenproject.org; Sun, 22 Jan 2023 23:38:45 +0000
Received: from mail-vs1-xe2b.google.com (mail-vs1-xe2b.google.com
 [2607:f8b0:4864:20::e2b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e7d6aaad-9aad-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 00:38:43 +0100 (CET)
Received: by mail-vs1-xe2b.google.com with SMTP id t10so11250424vsr.3
 for <xen-devel@lists.xenproject.org>; Sun, 22 Jan 2023 15:38:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7d6aaad-9aad-11ed-b8d1-410ff93cb8f0
MIME-Version: 1.0
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <c798832ec19cb94c0a27e8cff8f5bd6d1aa6ae7e.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To: <c798832ec19cb94c0a27e8cff8f5bd6d1aa6ae7e.1674226563.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 23 Jan 2023 09:38:16 +1000
Message-ID: <CAKmqyKNkO27JqujvNF7t_ewX-oS+=8hhp7E5ZcZRz9cYA-czhg@mail.gmail.com>
Subject: Re: [PATCH v1 08/14] xen/riscv: introduce decode_cause() stuff
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> The patch introduces the helpers needed to decode the cause of an
> exception.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/arch/riscv/traps.c | 88 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 88 insertions(+)
>
> diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
> index 3201b851ef..dd64f053a5 100644
> --- a/xen/arch/riscv/traps.c
> +++ b/xen/arch/riscv/traps.c
> @@ -4,8 +4,96 @@
>   *
>   * RISC-V Trap handlers
>   */
> +#include <asm/csr.h>
> +#include <asm/early_printk.h>
>  #include <asm/processor.h>
>  #include <asm/traps.h>
> +#include <xen/errno.h>
> +
> +const char *decode_trap_cause(unsigned long cause)
> +{
> +    switch ( cause )
> +    {
> +    case CAUSE_MISALIGNED_FETCH:
> +        return "Instruction Address Misaligned";
> +    case CAUSE_FETCH_ACCESS:
> +        return "Instruction Access Fault";
> +    case CAUSE_ILLEGAL_INSTRUCTION:
> +        return "Illegal Instruction";
> +    case CAUSE_BREAKPOINT:
> +        return "Breakpoint";
> +    case CAUSE_MISALIGNED_LOAD:
> +        return "Load Address Misaligned";
> +    case CAUSE_LOAD_ACCESS:
> +        return "Load Access Fault";
> +    case CAUSE_MISALIGNED_STORE:
> +        return "Store/AMO Address Misaligned";
> +    case CAUSE_STORE_ACCESS:
> +        return "Store/AMO Access Fault";
> +    case CAUSE_USER_ECALL:
> +        return "Environment Call from U-Mode";
> +    case CAUSE_SUPERVISOR_ECALL:
> +        return "Environment Call from S-Mode";
> +    case CAUSE_MACHINE_ECALL:
> +        return "Environment Call from M-Mode";
> +    case CAUSE_FETCH_PAGE_FAULT:
> +        return "Instruction Page Fault";
> +    case CAUSE_LOAD_PAGE_FAULT:
> +        return "Load Page Fault";
> +    case CAUSE_STORE_PAGE_FAULT:
> +        return "Store/AMO Page Fault";
> +    case CAUSE_FETCH_GUEST_PAGE_FAULT:
> +        return "Instruction Guest Page Fault";
> +    case CAUSE_LOAD_GUEST_PAGE_FAULT:
> +        return "Load Guest Page Fault";
> +    case CAUSE_VIRTUAL_INST_FAULT:
> +        return "Virtualized Instruction Fault";
> +    case CAUSE_STORE_GUEST_PAGE_FAULT:
> +        return "Guest Store/AMO Page Fault";
> +    default:
> +        return "UNKNOWN";
> +    }
> +}
> +
> +const char *decode_reserved_interrupt_cause(unsigned long irq_cause)
> +{
> +    switch ( irq_cause )
> +    {
> +    case IRQ_M_SOFT:
> +        return "M-mode Software Interrupt";
> +    case IRQ_M_TIMER:
> +        return "M-mode TIMER Interrupt";
> +    case IRQ_M_EXT:
> +        return "M-mode External Interrupt";
> +    default:
> +        return "UNKNOWN IRQ type";
> +    }
> +}
> +
> +const char *decode_interrupt_cause(unsigned long cause)
> +{
> +    unsigned long irq_cause = cause & ~CAUSE_IRQ_FLAG;
> +
> +    switch ( irq_cause )
> +    {
> +    case IRQ_S_SOFT:
> +        return "Supervisor Software Interrupt";
> +    case IRQ_S_TIMER:
> +        return "Supervisor Timer Interrupt";
> +    case IRQ_S_EXT:
> +        return "Supervisor External Interrupt";
> +    default:
> +        return decode_reserved_interrupt_cause(irq_cause);
> +    }
> +}
> +
> +const char *decode_cause(unsigned long cause)
> +{
> +    if ( cause & CAUSE_IRQ_FLAG )
> +        return decode_interrupt_cause(cause);
> +
> +    return decode_trap_cause(cause);
> +}
>
>  void __handle_exception(struct cpu_user_regs *cpu_regs)
>  {
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 23:40:08 2023
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <74ca10d9be1dfc3aed4b3b21a79eae88c9df26a4.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To: <74ca10d9be1dfc3aed4b3b21a79eae88c9df26a4.1674226563.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 23 Jan 2023 09:39:28 +1000
Message-ID: <CAKmqyKNtFGoXmF1SJWO+JBJQvPSyDYEfpaYn2YBMQ=BsCk6VPQ@mail.gmail.com>
Subject: Re: [PATCH v1 09/14] xen/riscv: introduce do_unexpected_trap()
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> The patch introduces a function whose purpose is to print the
> cause of an exception and then execute the "wfi" instruction.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  xen/arch/riscv/traps.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
> index dd64f053a5..fc25138a4b 100644
> --- a/xen/arch/riscv/traps.c
> +++ b/xen/arch/riscv/traps.c
> @@ -95,7 +95,19 @@ const char *decode_cause(unsigned long cause)
>      return decode_trap_cause(cause);
>  }
>
> -void __handle_exception(struct cpu_user_regs *cpu_regs)
> +static void do_unexpected_trap(const struct cpu_user_regs *regs)
>  {
> +    unsigned long cause = csr_read(CSR_SCAUSE);
> +
> +    early_printk("Unhandled exception: ");
> +    early_printk(decode_cause(cause));
> +    early_printk("\n");
> +
> +    /* kind of die... */
>      wait_for_interrupt();

We could put this in a loop to ensure we never progress.

Alistair

>  }
> +
> +void __handle_exception(struct cpu_user_regs *cpu_regs)
> +{
> +    do_unexpected_trap(cpu_regs);
> +}
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 23:40:45 2023
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <0153a210de96733880fb3f6fddd902862cc2eaca.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To: <0153a210de96733880fb3f6fddd902862cc2eaca.1674226563.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 23 Jan 2023 09:40:04 +1000
Message-ID: <CAKmqyKN2_SUHzRnUeU-YCPLUWCo4eEAL+eFOpfON3N0FDB2Svg@mail.gmail.com>
Subject: Re: [PATCH v1 10/14] xen/riscv: mask all interrupts
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/arch/riscv/riscv64/head.S | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> index d444dd8aad..ffd95f9f89 100644
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,6 +1,11 @@
> +#include <asm/riscv_encoding.h>
> +
>          .section .text.header, "ax", %progbits
>
>  ENTRY(start)
> +        /* Mask all interrupts */
> +        csrw    CSR_SIE, zero
> +
>          la      sp, cpu0_boot_stack
>          li      t0, STACK_SIZE
>          add     sp, sp, t0
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Sun Jan 22 23:42:12 2023
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <b8d03f33aea498bb5fde4ccdc16f023bbe208e7f.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To: <b8d03f33aea498bb5fde4ccdc16f023bbe208e7f.1674226563.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 23 Jan 2023 09:41:41 +1000
Message-ID: <CAKmqyKNGEfzOnAfKNePro5MDG3i6x7ycNvE+DT-rUzEfXk+KdA@mail.gmail.com>
Subject: Re: [PATCH v1 11/14] xen/riscv: introduce setup_trap_handler()
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/arch/riscv/setup.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index d09ffe1454..174e134c93 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -1,16 +1,27 @@
>  #include <xen/compile.h>
>  #include <xen/init.h>
>
> +#include <asm/csr.h>
>  #include <asm/early_printk.h>
> +#include <asm/traps.h>
>
>  /* Xen stack for bringing up the first CPU. */
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
>      __aligned(STACK_SIZE);
>
> +static void setup_trap_handler(void)
> +{
> +    unsigned long addr = (unsigned long)&handle_exception;
> +    csr_write(CSR_STVEC, addr);
> +}
> +
>  void __init noreturn start_xen(void)
>  {
>      early_printk("Hello from C env\n");
>
> +    setup_trap_handler();
> +    early_printk("exception handler has been set up\n");
> +
>      for ( ;; )
>          asm volatile ("wfi");
>
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 01:05:43 2023
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-libvirt-pair
Message-Id: <E1pJlGm-0001PY-5w@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 01:05:08 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-libvirt-pair
testid guest-migrate/src_host/dst_host

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176054/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-libvirt-pair.guest-migrate--src_host--dst_host.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-libvirt-pair.guest-migrate--src_host--dst_host --summary-out=tmp/176054.bisection-summary --basis-template=175994 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-libvirt-pair guest-migrate/src_host/dst_host
Searching for failure / basis pass:
 176042 fail [dst_host=debina1,src_host=debina0] / 175994 [dst_host=albana0,src_host=albana1] 175987 [dst_host=pinot1,src_host=pinot0] 175965 [dst_host=italia0,src_host=italia1] 175734 [dst_host=elbling1,src_host=elbling0] 175726 [dst_host=elbling0,src_host=elbling1] 175720 [dst_host=fiano0,src_host=fiano1] 175714 [dst_host=huxelrebe0,src_host=huxelrebe1] 175694 [dst_host=fiano1,src_host=fiano0] 175671 [dst_host=albana1,src_host=albana0] 175651 [dst_host=debina0,src_host=debina1] 175635 [dst_host=italia1,src_host=italia0] 175624 [dst_host=nobling1,src_host=nobling0] 175612 [dst_host=nobling0,src_host=nobling1] 175601 [dst_host=albana0,src_host=albana1] 175592 [dst_host=nocera0,src_host=nocera1] 175573 [dst_host=italia0,src_host=italia1] 175569 [dst_host=nocera1,src_host=nocera0] 175541 [dst_host=pinot1,src_host=pinot0] 175534 [dst_host=pinot0,src_host=pinot1] 175526 ok.
Failure / basis pass flights: 176042 / 175526
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 57b0678590708de081e4498e164b86d5c8c85024 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
Basis pass 0f2396751fccdc9f742230763880f70dbd977f3b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 7eef80e06ed2282bbcec3619d860c6aacb0515d8
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#0f2396751fccdc9f742230763880f70dbd977f3b-57b0678590708de081e4498e164b86d5c8c85024 https://gitlab.com/keycodemap/keycodemapdb.git#57ba70da5312170883a3d622cd2aa3fd0e2ec7ae-57ba70da5312170883a3d622cd2aa3fd0e2ec7ae git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#1cf02b05b27c48775a25699e61b93b814b9ae042-625eb5e96dc96aa7fddef59a08edae215527f19c git://xenbits.xen.org/xen.git#7eef80e06ed2282bbcec3619d860c6aacb0515d8-1d60c20260c7e82fe5344d06c20d718e0cc03b8b
Loaded 15003 nodes in revision graph
Searching for test results:
 175592 [dst_host=nocera0,src_host=nocera1]
 175601 [dst_host=albana0,src_host=albana1]
 175612 [dst_host=nobling0,src_host=nobling1]
 175624 [dst_host=nobling1,src_host=nobling0]
 175635 [dst_host=italia1,src_host=italia0]
 175651 [dst_host=debina0,src_host=debina1]
 175671 [dst_host=albana1,src_host=albana0]
 175694 [dst_host=fiano1,src_host=fiano0]
 175714 [dst_host=huxelrebe0,src_host=huxelrebe1]
 175720 [dst_host=fiano0,src_host=fiano1]
 175726 [dst_host=elbling0,src_host=elbling1]
 175734 [dst_host=elbling1,src_host=elbling0]
 175834 []
 175861 []
 175890 []
 175907 []
 175931 []
 175956 []
 175965 [dst_host=italia0,src_host=italia1]
 175987 [dst_host=pinot1,src_host=pinot0]
 175994 [dst_host=albana0,src_host=albana1]
 176003 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
 176011 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176033 pass 0f2396751fccdc9f742230763880f70dbd977f3b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 7eef80e06ed2282bbcec3619d860c6aacb0515d8
 176025 fail 57b0678590708de081e4498e164b86d5c8c85024 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176034 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176036 fail 57b0678590708de081e4498e164b86d5c8c85024 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176037 pass 0fcdb512d4aed9730f082f2da7cd5b9c3694d271 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 38525f6f73f906699f77a1af86c16b4eaad48e04
 176038 pass e8871a9ce03c75925f7ad315e7efb9277e366aa3 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176039 pass cba964b145515e998a370cda6594d6d8c6d90ba2 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176035 fail 57b0678590708de081e4498e164b86d5c8c85024 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176041 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d4994ac79ed96550f8e8c9a682d468e83db4dfe
 176043 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d99732f2b092173d8600fa818aee3fa51046bb0
 176044 pass 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176045 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176042 fail 57b0678590708de081e4498e164b86d5c8c85024 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176047 pass 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176049 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176050 pass 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176051 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176052 pass 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176054 fail 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 175517 [dst_host=elbling1,src_host=elbling0]
 175520 [dst_host=albana1,src_host=albana0]
 175526 pass 0f2396751fccdc9f742230763880f70dbd977f3b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 7eef80e06ed2282bbcec3619d860c6aacb0515d8
 175534 [dst_host=pinot0,src_host=pinot1]
 175541 [dst_host=pinot1,src_host=pinot0]
 175554 [dst_host=nocera1,src_host=nocera0]
 175562 [dst_host=nocera1,src_host=nocera0]
 175569 [dst_host=nocera1,src_host=nocera0]
 175573 [dst_host=italia0,src_host=italia1]
Searching for interesting versions
 Result found: flight 175526 (pass), for basis pass
 For basis failure, parent search stopping at 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f, results HASH(0x556d3f995a18) HASH(0x556d3f9a60a8) HASH(0x556d3f9b3928)
 For basis failure, parent search stopping at 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x556d3f99c658)
 For basis failure, parent search stopping at cba964b145515e998a370cda6594d6d8c6d90ba2 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x556d3f996b20)
 For basis failure, parent search stopping at e8871a9ce03c75925f7ad315e7efb9277e366aa3 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x556d3f997120)
 For basis failure, parent search stopping at 0fcdb512d4aed9730f082f2da7cd5b9c3694d271 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 38525f6f73f906699f77a1af86c16b4eaad48e04, results HASH(0x556d3f9ac9e8)
 For basis failure, parent search stopping at 0f2396751fccdc9f742230763880f70dbd977f3b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 7eef80e06ed2282bbcec3619d860c6aacb0515d8, results HASH(0x556d3f99e360) HASH(0x556d3e9e82d8)
 Result found: flight 176003 (fail), for basis failure (at ancestor ~1186)
 Repro found: flight 176033 (pass), for basis pass
 Repro found: flight 176035 (fail), for basis failure
 0 revisions at 16bfbc8cd2b4a039d3e846dceca807a9cc15849b 57ba70da5312170883a3d622cd2aa3fd0e2ec7ae c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
No revisions left to test, checking graph state.
 Result found: flight 176047 (pass), for last pass
 Result found: flight 176049 (fail), for first failure
 Repro found: flight 176050 (pass), for last pass
 Repro found: flight 176051 (fail), for first failure
 Repro found: flight 176052 (pass), for last pass
 Repro found: flight 176054 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176054/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

pnmtopng: 191 colors found
Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-libvirt-pair.guest-migrate--src_host--dst_host.{dot,ps,png,html,svg}.
----------------------------------------
176054: tolerable FAIL

flight 176054 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/176054/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail baseline untested


jobs:
 build-i386-libvirt                                           pass    
 test-amd64-i386-libvirt-pair                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 02:20:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 02:20:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482611.748214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJmRF-0002Tq-TE; Mon, 23 Jan 2023 02:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482611.748214; Mon, 23 Jan 2023 02:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJmRF-0002Sp-PL; Mon, 23 Jan 2023 02:20:01 +0000
Received: by outflank-mailman (input) for mailman id 482611;
 Mon, 23 Jan 2023 02:20:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJmRE-0002RW-NL; Mon, 23 Jan 2023 02:20:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJmRE-00089y-JV; Mon, 23 Jan 2023 02:20:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJmRE-0007L8-1G; Mon, 23 Jan 2023 02:20:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJmRE-0001Hs-0g; Mon, 23 Jan 2023 02:20:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Fy/Hl7ZQMxaWnihBhuw145uuXQnkPPvh1R7Q0gCNnJg=; b=xntfu5J5Nk3Ui3HPB3E4lhdbkX
	vBljHfz+5djHKpGmCNmjrFCNMcLrCw+/WCR5iXapaS2rfBK041KMDbn5Fp67px9YdraONu4r6SmDK
	umv/e95bORq4+RgZsbEKPxFp56ETfamPII7pDxTQrcYJVwV70brFLoI2LjA7Rges3k0w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176048-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176048: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d60c20260c7e82fe5344d06c20d718e0cc03b8b
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 02:20:00 +0000

flight 176048 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176048/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd 17 guest-start/debian.repeat fail in 176042 pass in 176048
 test-amd64-i386-libvirt-xsm   7 xen-install                fail pass in 176042

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 176042 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    2 days
Failing since        176003  2023-01-20 17:40:27 Z    2 days    6 attempts
Testing same since   176011  2023-01-21 07:04:02 Z    1 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 762 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 07:31:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 07:31:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482634.748236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJrHg-0000Md-4U; Mon, 23 Jan 2023 07:30:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482634.748236; Mon, 23 Jan 2023 07:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJrHg-0000MW-1F; Mon, 23 Jan 2023 07:30:28 +0000
Received: by outflank-mailman (input) for mailman id 482634;
 Mon, 23 Jan 2023 07:30:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vc9q=5U=kernel.org=ardb@srs-se1.protection.inumbo.net>)
 id 1pJrHe-0000MQ-Sf
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 07:30:26 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cd316f06-9aef-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 08:30:25 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 6832DB80C7B
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:30:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 18B00C4339C
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:30:23 +0000 (UTC)
Received: by mail-lf1-f50.google.com with SMTP id b3so16870693lfv.2
 for <xen-devel@lists.xenproject.org>; Sun, 22 Jan 2023 23:30:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd316f06-9aef-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674459023;
	bh=yqHRmBM/5HKjzaBie8odadDl/b5Q4ya5gnQh2zFliW0=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=iGq5+eY5tyMw6oahuiHN28IBRDzPcwah1Tj6QMnoc1/Er+SEnF1kUj3FyEewGhNAL
	 2oNaeTuCjq9wQ3S3cN+Qi8Ti9iMARl2TUxhxKLUgDjaq0Ips8RY8Rvz4gPVCo155uX
	 DxdsLaADc4afozv2jPJRMwYKItIWNsUWIoia9NUqTKLnegidJJXvHDXSchAiqXvk5a
	 3tuFLLfahb1o891CjD83Y6UCad/u+SmZyAfnsBOyFjiVOfUMZE39xhev/Zpcjye+ze
	 b09krGV86uoDA0XlqIxHr35Y2sao3CduQgPq6gK6B39SY9wjZbdPIsVbZrDHvqbhvd
	 2eWEn/ASR54ug==
X-Gm-Message-State: AFqh2krzWypcmQDUFe7Oi3g7ZviLvkg5uZwcMuk6/g7Z9FGAf32CLTLU
	h1oDnR8RsGfpooeNGo+O3wt8b16HL9zsrh0S9L8=
X-Google-Smtp-Source: AMrXdXvLGWnD0SLJx80UTiS66Ijd6HhijiZdhKVE7te2RkGIhVL2UPwS2V4brcBlC75bspAFezet9YOtuuT2k/ow/ZA=
X-Received: by 2002:a19:675e:0:b0:4b6:f37c:c123 with SMTP id
 e30-20020a19675e000000b004b6f37cc123mr1643063lfj.539.1674459021083; Sun, 22
 Jan 2023 23:30:21 -0800 (PST)
MIME-Version: 1.0
References: <20221003112625.972646-1-ardb@kernel.org> <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
In-Reply-To: <b18879e0329c785d35f2aa2164413bb56419c684.1674153153.git.demi@invisiblethingslab.com>
From: Ard Biesheuvel <ardb@kernel.org>
Date: Mon, 23 Jan 2023 08:30:09 +0100
X-Gmail-Original-Message-ID: <CAMj1kXELH7+d5141yhBudrA0vtOOkCfVucwGBpag9u4mU4Q0iA@mail.gmail.com>
Message-ID: <CAMj1kXELH7+d5141yhBudrA0vtOOkCfVucwGBpag9u4mU4Q0iA@mail.gmail.com>
Subject: Re: [PATCH v3 0/5] efi: Support ESRT under Xen
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Thu, 19 Jan 2023 at 20:04, Demi Marie Obenour
<demi@invisiblethingslab.com> wrote:
>
> This patch series fixes handling of EFI tables when running under Xen.
> These fixes allow the ESRT to be loaded when running paravirtualized in
> dom0, making the use of EFI capsule updates possible.
>
> Demi Marie Obenour (5):
>   efi: memmap: Disregard bogus entries instead of returning them
>   efi: xen: Implement memory descriptor lookup based on hypercall
>   efi: Apply allowlist to EFI configuration tables when running under
>     Xen
>   efi: Actually enable the ESRT under Xen
>   efi: Warn if trying to reserve memory under Xen
>

I have given these a spin on a system with a dodgy ESRT (the region in
question is not covered by the memory map at all), and things are
exactly as broken as before, which is good.

I have queued these up in efi/next now, they should appear in -next tomorrow.


>  drivers/firmware/efi/efi.c  | 22 ++++++++++++-
>  drivers/firmware/efi/esrt.c | 15 +++------
>  drivers/xen/efi.c           | 61 +++++++++++++++++++++++++++++++++++++
>  include/linux/efi.h         |  3 ++
>  4 files changed, 90 insertions(+), 11 deletions(-)
>
> --
> Sincerely,
> Demi Marie Obenour (she/her/hers)
> Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 07:31:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 07:31:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482638.748245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJrIL-0000qz-EH; Mon, 23 Jan 2023 07:31:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482638.748245; Mon, 23 Jan 2023 07:31:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJrIL-0000qs-Ax; Mon, 23 Jan 2023 07:31:09 +0000
Received: by outflank-mailman (input) for mailman id 482638;
 Mon, 23 Jan 2023 07:31:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJrIK-0000qZ-Ec; Mon, 23 Jan 2023 07:31:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJrIK-0007v6-BL; Mon, 23 Jan 2023 07:31:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJrIJ-00059V-TB; Mon, 23 Jan 2023 07:31:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJrIJ-0004gC-Sj; Mon, 23 Jan 2023 07:31:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8LYg9kR5gn3FhcZ74ytGG8U6NJTkBXZKqRWaxQTmj1M=; b=EVJLZQutWEbWMVr0GScGryQL1b
	qSkg5SYdsoxhoF1VK/cDwvgxAl6bhfas0vq44ZifT8Oz1C9DiqXJxFcVebY1ejbxE9anlS0hnprMp
	Ho7ufXW3162MPU9jqHyyU+BlQ/vAf48x1kjv+GrwhAWKB17W2//yuaetFWwVTos156V0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176053-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176053: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2475bf0250dee99b477e0c56d7dc9d7ac3f04117
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 07:31:07 +0000

flight 176053 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176053/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-freebsd11-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                2475bf0250dee99b477e0c56d7dc9d7ac3f04117
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  107 days
Failing since        173470  2022-10-08 06:21:34 Z  107 days  220 attempts
Testing same since   176053  2023-01-22 23:10:33 Z    0 days    1 attempts

------------------------------------------------------------
3437 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 527952 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 08:13:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 08:13:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482650.748262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJrws-0005te-3K; Mon, 23 Jan 2023 08:13:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482650.748262; Mon, 23 Jan 2023 08:13:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJrwr-0005tX-VX; Mon, 23 Jan 2023 08:13:01 +0000
Received: by outflank-mailman (input) for mailman id 482650;
 Mon, 23 Jan 2023 08:13:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJrwq-0005tR-Hz
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 08:13:00 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2061.outbound.protection.outlook.com [40.107.6.61])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bebb3e22-9af5-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 09:12:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7817.eurprd04.prod.outlook.com (2603:10a6:10:1ef::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 08:12:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 08:12:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bebb3e22-9af5-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Urj9IZp8efoUe0LDLudnnkCyLxYTNVKHOQo1QzDZImH7WLaYiomHuEbt8huKMrT/KiAsbsn4VArgBzp1vXjZlp5GgOBMDfMY1ovBlupzm8oLWG6vpE9UBnh58aZm1JSqEWafJOTENcL/ptrUSiNGYE79moNoZ+Xm6Bm9QPtWdk4/SQJpi8kwLxUrRNo3kyp/p3Y2cZo+brJTRDSOf5N6IIJ67ErzobC/XainSARwse2p6bjULFMydy/FmW7Je8q3m49ZFWDxJi/c4oXY9h+1HgIkY3EfxsQLnk2KoY0GL8ctlyPqWvImo5wxZ0G8RfQlEWV3xyJjpSQe+MKZ6n2AWA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hwnSNtbVgtH/Q1KGLqJiOs0k3oDNWbq9OOtJzqQmaMA=;
 b=Ruhc4ai4waJrLGQnsA/N8JZnE1rzLaoE7QMBDvr9zhBBekJ6phAkqVGUwoBiUZ8rShIK5J9y6S+ROOoYfhRQ+rt5uVw7CmbwGjU6v3XSDoSardRd2HWNjfOu/rfgp9Sq4clgPz9BGdZManSMMVAuxpYgR0oee1BfI0PTBCBiiIe/ZsWcuuHxCEnu19yb+TNtGiMy1rF13z/uOxbQ3MN99svym/22rsOYcYkzQzeThvhAyPh12ySyKi05durZI3f7y9JRsPvUxCyMONfFRwHD9/T/Rf8hbuitVPdbIEJa0pNRqnfWNng4dfq31KhKwsKYj8FAMmnecdA4Mh3nB/Nyrg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hwnSNtbVgtH/Q1KGLqJiOs0k3oDNWbq9OOtJzqQmaMA=;
 b=vKF0K6UGPetpP4WLwbftiEf2m49LCjypM9zGNT3BjVzaBotlhuRxsWpnWG+rimGCONyY2sUT/GghOItfrx7lOGNNW2O9ROXVWI9KJVCfGJtl7FgImY5XNqdr7AqlTxKqjVz794bi7wm6DvkjVLagPltTFQ5MWAX38qDmNQxs22vzTxqOw2fpCLnI42tSroU6OIIPpRl0ja4OmO8Q3rflyTW2/9u0F7tLZG9bfMkIRzZ0g5orgBCZrIGD0YYxj3pjFUj6y+gn7PZgWn4XTQVRR3ru7NRVJRPDKi4XfGs2teGbci7xjPWiqtrtr7Jzqi/wu62DV0DMl0O342iNbNgAww==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <942e1164-5ed0-bdda-424f-90134b0e22c5@suse.com>
Date: Mon, 23 Jan 2023 09:12:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when HVM+PV32
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0070.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7817:EE_
X-MS-Office365-Filtering-Correlation-Id: 7480f1cc-ee76-46aa-acf5-08dafd19a0df
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BuOqWvvn86iOBaZebz9NVgMDfkglZGmSWuwhK8+v7ASAjelVuR1OxD8W/MiAaVa+f+n/E8pnd8a8IP44m96Ui0H/lZSJCt8FQ3aFSk8ZUbEUcKwLI5yICOUOJTEbEB4k/pwoN3wiap55Rb0CYqi353KaALKrLRMcI+nAbyGDoHLVbss1pxYb5gUkM9t0FAKMGsXYof7QtEdQ7Z+E/PZ2h1TQYErtJAXEbb9xcXnb99x7ZWoEp3nIOExfSvB5zOnOemW+z//24fipytVUle8xGgX3MjEKE3EMG3fQ3q55PylAcqSwFLLtberbmP80wFd4T+T9P3w2PoJhWV0PyzpQ58TukzocuGZxbz+GyuXx47Y/nzWGyG3VqxcvABN2iKA1/OGw0p/Ct+Mqk5pcx6Sf2AwvHiUq0Tr6Hvablyhx9l43CH88dpyimtx0urBjHQhTo3tCpu1wwPZJjpvPGdM+mNnGLTSMTJfm87YHeANtkZgh2TR23fpPH9R3Bc+91b8gFZ0q47GUBf82MpuPYmUTrvRjP17Fqjg5DZV9wj9kqubGv3kQbzB+9rGJkJ2su0gDewaw1Pq4KfsTKU7p0fbctfARWUShrwJ7OVmWhIXyax9S5X7ixtXHva3VmqsmfNg819/e6w+ZMe8m+XtiAUCtXdbh68L3WnxVQM/sjEIShqgoj7zHyzkeFPnrl4hjw9RRUBkJtXJ5zB6OZDohb5Vg90DXhIyQdaEiokXUClETRk0=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(396003)(136003)(39860400002)(346002)(366004)(376002)(451199015)(31696002)(38100700002)(2906002)(4744005)(41300700001)(5660300002)(8936002)(4326008)(6916009)(8676002)(26005)(6506007)(186003)(6512007)(316002)(66556008)(2616005)(54906003)(66946007)(478600001)(6486002)(66476007)(86362001)(31686004)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7480f1cc-ee76-46aa-acf5-08dafd19a0df
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 08:12:54.3088
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: eAB9JiycWTxm3XpXHNyrzlj+TfYxIYpWQKa3NOXTOj1He6NUMk1/gX/OfisuvKaqrHVs2u6odjgbZHr9yu9C5A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7817

While the table is used only when HVM=y, the table entry of course also
needs to be properly populated when PV32=y. Fully removing the table
entry was therefore wrong.

Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -56,7 +56,9 @@ const uint8_t sh_type_to_size[] = {
     [SH_type_l1_64_shadow]   = 1,
     [SH_type_fl1_64_shadow]  = 1,
     [SH_type_l2_64_shadow]   = 1,
-/*  [SH_type_l2h_64_shadow]  = 1,  PV32-only */
+#ifdef CONFIG_PV32
+    [SH_type_l2h_64_shadow]  = 1,
+#endif
     [SH_type_l3_64_shadow]   = 1,
     [SH_type_l4_64_shadow]   = 1,
     [SH_type_p2m_table]      = 1,


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 08:18:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 08:18:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482655.748272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJs1m-0006WI-Ks; Mon, 23 Jan 2023 08:18:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482655.748272; Mon, 23 Jan 2023 08:18:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJs1m-0006WB-Hn; Mon, 23 Jan 2023 08:18:06 +0000
Received: by outflank-mailman (input) for mailman id 482655;
 Mon, 23 Jan 2023 08:18:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJs1k-0006W1-Sd; Mon, 23 Jan 2023 08:18:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJs1k-0001DX-OG; Mon, 23 Jan 2023 08:18:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJs1k-0006KQ-HQ; Mon, 23 Jan 2023 08:18:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJs1k-0003SK-Gy; Mon, 23 Jan 2023 08:18:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176059-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176059: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=37d3eb026a766b2405daae47e02094c2ec248646
X-Osstest-Versions-That:
    ovmf=7afef31b2b17d1a8d5248eb562352c6d3505ea14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 08:18:04 +0000

flight 176059 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176059/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 37d3eb026a766b2405daae47e02094c2ec248646
baseline version:
 ovmf                 7afef31b2b17d1a8d5248eb562352c6d3505ea14

Last test of basis   176004  2023-01-20 17:41:43 Z    2 days
Testing same since   176059  2023-01-23 06:10:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Bobek <jbobek@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7afef31b2b..37d3eb026a  37d3eb026a766b2405daae47e02094c2ec248646 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 08:23:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 08:23:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482665.748282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJs6m-000805-EA; Mon, 23 Jan 2023 08:23:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482665.748282; Mon, 23 Jan 2023 08:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJs6m-0007zy-A8; Mon, 23 Jan 2023 08:23:16 +0000
Received: by outflank-mailman (input) for mailman id 482665;
 Mon, 23 Jan 2023 08:23:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJs6l-0007zs-8w
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 08:23:15 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2082.outbound.protection.outlook.com [40.107.15.82])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2d55f9bd-9af7-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 09:23:13 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8721.eurprd04.prod.outlook.com (2603:10a6:20b:428::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 08:23:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 08:23:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d55f9bd-9af7-11ed-b8d1-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3ded1704-383f-8b18-bd76-c093525c783b@suse.com>
Date: Mon, 23 Jan 2023 09:23:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
 <ed4d8d85-2ba5-74c1-7c65-0ae65bf0ee06@citrix.com>
 <24a2f51b-e69d-7a44-5239-79f5f526ef01@suse.com>
 <978b098a-d052-09cc-442e-9aafc816feee@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <978b098a-d052-09cc-442e-9aafc816feee@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FRYP281CA0013.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::23)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8721:EE_
X-MS-Office365-Filtering-Correlation-Id: d6a0b513-53b9-4dfc-2909-08dafd1b103d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d6a0b513-53b9-4dfc-2909-08dafd1b103d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 08:23:10.6609
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iRxcVc30rh6xI0QXfq22SAKck5xy010xeXB6YNMtc77EXWXnlFlu9IvcqRdFl3zgyyYYwLpQYRLflVkY/I+KPQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8721

On 20.01.2023 19:15, Andrew Cooper wrote:
> On 18/01/2023 9:55 am, Jan Beulich wrote:
>> On 17.01.2023 23:04, Andrew Cooper wrote:
>>> On 19/10/2022 8:43 am, Jan Beulich wrote:
>>>> Noteworthy differences from map_vcpu_info():
>>>> - areas can be registered more than once (and de-registered),
>>> When register by GFN is available, there is never a good reason to
>>> register the same area twice.
>> Why not? Why shouldn't different entities be permitted to register their
>> areas, one after the other? This at the very least requires a way to
>> de-register.
> 
> Because it's useless and extra complexity.  From the point of view of
> any guest, its an MMIO(ish) window that Xen happens to update the
> content of.
> 
> You don't get systems where you can ask hardware for e.g. "another copy
> of the HPET at mfn $foo please".

I/O ports appear in multiple places on many systems. I think MMIO regions
can, too. And then I don't see why there couldn't be a way to actually
control this (via e.g. some chipset specific register).

>>>> RFC: By using global domain page mappings the demand on the underlying
>>>>      VA range may increase significantly. I did consider to use per-
>>>>      domain mappings instead, but they exist for x86 only. Of course we
>>>>      could have arch_{,un}map_guest_area() aliasing global domain page
>>>>      mapping functions on Arm and using per-domain mappings on x86. Yet
>>>>      then again map_vcpu_info() doesn't do so either (albeit that's
>>>>      likely to be converted subsequently to use map_vcpu_area() anyway).
>>> ... this by providing a bound on the amount of vmap() space can be consumed.
>> I'm afraid I don't understand. When re-registering a different area, the
>> earlier one will be unmapped. The consumption of vmap space cannot grow
>> (or else we'd have a resource leak and hence an XSA).
> 
> In which case you mean "can be re-registered elsewhere".  More
> specifically, the area can be moved, and isn't a singleton operation
> like map_vcpu_info was.
> 
> The wording as presented firmly suggests the presence of an XSA.

You mean the "map_vcpu_info() doesn't do so either"? That talks about the
function not using per-domain mappings. There's no connection at all that
I can see to a missed unmapping, which at this point is the only thing I
can deduce you might be referring to.

>>>> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>>>>      like map_vcpu_info() - solely relying on the type ref acquisition.
>>>>      Checking for p2m_ram_rw alone would be wrong, as at least
>>>>      p2m_ram_logdirty ought to also be okay to use here (and in similar
>>>>      cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>>>>      used here (like altp2m_vcpu_enable_ve() does) as well as in
>>>>      map_vcpu_info(), yet then again the P2M type is stale by the time
>>>>      it is being looked at anyway without the P2M lock held.
>>> Again, another error caused by Xen not knowing the guest physical
>>> address layout.  These mappings should be restricted to just RAM regions
>>> and I think we want to enforce that right from the outset.
>> Meaning what exactly in terms of action for me to take? As said, checking
>> the P2M type is pointless. So without you being more explicit, all I can
>> take your reply for is merely a comment, with no action on my part (not
>> even to remove this RFC remark).
> 
> There will become a point where it will need to become prohibited to
> issue this against something which isn't p2m_type_ram.  If we had a sane
> idea of the guest physmap, I'd go as far as saying E820_RAM, but that's
> clearly not feasible yet.
> 
> Even now, absolutely nothing good can possibly come of e.g. trying to
> overlay it on the grant table, or a grant mapping.
> 
> ram || logdirty ought to exclude most cases we care about the guest
> (not) putting the mapping.

It's still not clear to me what you want me to do: If I add the P2M type
check here including log-dirty, then this will be inconsistent with what
we do elsewhere _and_ useless code (for the time being). I hope you're
not making a scope-creeping request for me to "fix" all the other places
(I may not have found all) where such a P2M type check is either missing
or failing to include log-dirty.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 08:30:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 08:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482670.748292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsD8-0000DF-2y; Mon, 23 Jan 2023 08:29:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482670.748292; Mon, 23 Jan 2023 08:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsD7-0000D8-Vs; Mon, 23 Jan 2023 08:29:49 +0000
Received: by outflank-mailman (input) for mailman id 482670;
 Mon, 23 Jan 2023 08:29:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJsD6-0000D2-Q0
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 08:29:48 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2042.outbound.protection.outlook.com [40.107.104.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 17f1f94b-9af8-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 09:29:46 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6881.eurprd04.prod.outlook.com (2603:10a6:208:18b::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 08:29:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 08:29:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17f1f94b-9af8-11ed-b8d1-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cc2e4d71-0322-65c9-b58f-742e5d8ec2e9@suse.com>
Date: Mon, 23 Jan 2023 09:29:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest
 areas
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bcab8340-6bfd-8dfc-efe1-564e520b3a06@suse.com>
 <5a571fd9-b0c2-216e-a444-102397a22ca0@suse.com>
 <ed4d8d85-2ba5-74c1-7c65-0ae65bf0ee06@citrix.com>
 <24a2f51b-e69d-7a44-5239-79f5f526ef01@suse.com>
 <978b098a-d052-09cc-442e-9aafc816feee@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <978b098a-d052-09cc-442e-9aafc816feee@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0112.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6881:EE_
X-MS-Office365-Filtering-Correlation-Id: 9a195c1c-8af6-41bc-1685-08dafd1bfb0c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9a195c1c-8af6-41bc-1685-08dafd1bfb0c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 08:29:44.6039
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: slgkiyqhJ7PE9YwCm3XxmN4d8aDCFOtJ9XqXFSD1BF1TOV+++zjzC/IxcfH8Kd79ELQ1w+4pZ/NjipL2S/ouyw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6881

On 20.01.2023 19:15, Andrew Cooper wrote:
> On 18/01/2023 9:55 am, Jan Beulich wrote:
>> On 17.01.2023 23:04, Andrew Cooper wrote:
>>> On 19/10/2022 8:43 am, Jan Beulich wrote:
>>>> In preparation for the introduction of new vCPU operations allowing the
>>>> respective areas (one of the two is x86-specific) to be registered by
>>>> guest-physical address, flesh out the map/unmap functions.
>>>>
>>>> Noteworthy differences from map_vcpu_info():
>>>> - areas can be registered more than once (and de-registered),
>>> When registering by GFN is available, there is never a good reason to
>>> register the same area twice.
>> Why not? Why shouldn't different entities be permitted to register their
>> areas, one after the other? This at the very least requires a way to
>> de-register.
> 
> Because it's useless and extra complexity.

As to this: looking at the code, I think preventing re-registration would
actually add complexity (just a little - an extra check). Things come out
more naturally, from what I can tell, by allowing it. This can also be
seen in "common: convert vCPU info area registration", where I'm actually
adding such a (conditional) check to maintain the "no re-registration"
property of the sub-op there. Granted, there can then be an argument
towards making that check unconditional ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 08:31:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 08:31:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482675.748302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsEk-0001Yk-DX; Mon, 23 Jan 2023 08:31:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482675.748302; Mon, 23 Jan 2023 08:31:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsEk-0001Yd-Af; Mon, 23 Jan 2023 08:31:30 +0000
Received: by outflank-mailman (input) for mailman id 482675;
 Mon, 23 Jan 2023 08:31:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MxFs=5U=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pJsEj-0001YV-Ep
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 08:31:29 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 544743c4-9af8-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 09:31:27 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 433723369B;
 Mon, 23 Jan 2023 08:31:27 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 185B01357F;
 Mon, 23 Jan 2023 08:31:27 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id bs2tBN9FzmP9VQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 23 Jan 2023 08:31:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 544743c4-9af8-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674462687; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ewVBAu9a3/y/RNyxMOCnGlgWgktLuFtjJXwjo6hq/d0=;
	b=b+zG+v2gU2ZkfLCUWiwr5KPFX+Oak5e5aFjJYsVyR1gZQnCREg3EpeApAGe37H3ujXaGqK
	/MoO2SMl94Hoho72DdRhvu4uY8eFLX7UNuqD6GHHc4hzdElKob3jA6W2L/gUFBSI+8q9fc
	ZgMkZ8AGW+aAzW+jNYIULqBZpYB+CQE=
Message-ID: <d369672f-394b-f8f3-18ee-a72594a86204@suse.com>
Date: Mon, 23 Jan 2023 09:31:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] libxl: Fix guest kexec - skip cpuid policy
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Dongli Zhang <dongli.zhang@oracle.com>
References: <20230121213908.6504-1-jandryuk@gmail.com>
 <20230121213908.6504-2-jandryuk@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230121213908.6504-2-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------9i3ud0TwJ0a0B0BFjVKP0hEO"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------9i3ud0TwJ0a0B0BFjVKP0hEO
Content-Type: multipart/mixed; boundary="------------C18HrWHGgTJm0HnQ0nMFlr4J";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Dongli Zhang <dongli.zhang@oracle.com>
Message-ID: <d369672f-394b-f8f3-18ee-a72594a86204@suse.com>
Subject: Re: [PATCH 1/2] libxl: Fix guest kexec - skip cpuid policy
References: <20230121213908.6504-1-jandryuk@gmail.com>
 <20230121213908.6504-2-jandryuk@gmail.com>
In-Reply-To: <20230121213908.6504-2-jandryuk@gmail.com>

--------------C18HrWHGgTJm0HnQ0nMFlr4J
Content-Type: multipart/mixed; boundary="------------qiOjBqHYKhXU9u03ofetlxRb"

--------------qiOjBqHYKhXU9u03ofetlxRb
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 21.01.23 22:39, Jason Andryuk wrote:
> When a domain performs a kexec (soft reset), libxl__build_pre() is
> called with the existing domid.  Calling libxl__cpuid_legacy() on the
> existing domain fails since the cpuid policy has already been set, and
> the guest isn't rebuilt and doesn't kexec.
> 
> xc: error: Failed to set d1's policy (err leaf 0xffffffff, subleaf 0xffffffff, msr 0xffffffff) (17 = File exists): Internal error
> libxl: error: libxl_cpuid.c:494:libxl__cpuid_legacy: Domain 1:Failed to apply CPUID policy: File exists
> libxl: error: libxl_create.c:1641:domcreate_rebuild_done: Domain 1:cannot (re-)build domain: -3
> libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/1/type': No such file or directory
> libxl: warning: libxl_dom.c:49:libxl__domain_type: unable to get domain type for domid=1, assuming HVM
> 
> During a soft_reset, skip calling libxl__cpuid_legacy() to avoid the
> issue.  Before the fixes commit, the libxl__cpuid_legacy() failure would
> have been ignored, so kexec would continue.
> 
> Fixes: 34990446ca91 "libxl: don't ignore the return value from xc_cpuid_apply_policy"
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
> Probably a backport candidate since this has been broken for a while.
> ---
>   tools/libs/light/libxl_create.c   | 4 ++--
>   tools/libs/light/libxl_dom.c      | 5 +++--
>   tools/libs/light/libxl_internal.h | 2 +-
>   3 files changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
> index 5cddc3df79..587a515dff 100644
> --- a/tools/libs/light/libxl_create.c
> +++ b/tools/libs/light/libxl_create.c
> @@ -510,7 +510,7 @@ int libxl__domain_build(libxl__gc *gc,
>       struct timeval start_time;
>       int i, ret;
>   
> -    ret = libxl__build_pre(gc, domid, d_config, state);
> +    ret = libxl__build_pre(gc, domid, d_config, state, false);

Instead of adding a parameter to libxl__build_pre() I'd rather add another
bool "soft_reset" to libxl__domain_build_state.

This would be more similar to the libxl__domain_build_state->restore use
case.


Juergen
--------------qiOjBqHYKhXU9u03ofetlxRb
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------qiOjBqHYKhXU9u03ofetlxRb--

--------------C18HrWHGgTJm0HnQ0nMFlr4J--

--------------9i3ud0TwJ0a0B0BFjVKP0hEO
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPORd4FAwAAAAAACgkQsN6d1ii/Ey+6
WQf/XE5XoskGYucst2H3x73kW45nMSEnqJP5CoitIczEdR8Z8Df7OjLL+gDXC6TfF4WfaM0Di4q9
tY5UOpALvtkQcZ8vOA9jDxvfdheajouo8iYGLHg9LnIxxTMdGOjSqhesYY3g44aX2BvZ1vhAjYMU
Td2t07anWJ366MvTX2P2oSRqoXSGDFgzbCX77V9ngX3elhgPiDN9Riiq7IfkMVIwtyIH8ZnARi8e
L/3j2cBEuWdfrg71ERXcGaVdGZlMaNivbxZoUcVBMOGZIQvcCKaVM2HpdraHR3FvE4lTiPPi5dsZ
kZcGijJiZR1RC/LeMJx9ciVtmjgbLeyStIzo3b7eQw==
=/hnd
-----END PGP SIGNATURE-----

--------------9i3ud0TwJ0a0B0BFjVKP0hEO--


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 08:37:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 08:37:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482683.748311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsKH-0002Gq-4t; Mon, 23 Jan 2023 08:37:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482683.748311; Mon, 23 Jan 2023 08:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsKH-0002Gj-2E; Mon, 23 Jan 2023 08:37:13 +0000
Received: by outflank-mailman (input) for mailman id 482683;
 Mon, 23 Jan 2023 08:37:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MxFs=5U=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pJsKG-0002Gd-B6
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 08:37:12 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 20f6ae7d-9af9-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 09:37:11 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 90DA01F385;
 Mon, 23 Jan 2023 08:37:10 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 65BBE1357F;
 Mon, 23 Jan 2023 08:37:10 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id j+iTFzZHzmOGWQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 23 Jan 2023 08:37:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20f6ae7d-9af9-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674463030; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hS5XGFExAsqsp+MS5HknuutdmdrcrusqkkUDZna1C68=;
	b=Z0HmZz9iqB8m36g6PeUuLp/ZQgsMqPCJPzkVlcefDTIfmpPBZ9guE13huLRnnc5I8j+9SQ
	PNGStIjBSgmgMZzc561FUpemEJeQBetQW9a6iMovY5pstP9LKkRKy1mdg/F45FcmHN/uhU
	E/QilRI03nFVeMF8WQoUTXFFZC1JmKY=
Message-ID: <35169441-18ab-b937-8063-b40b16ff3bd4@suse.com>
Date: Mon, 23 Jan 2023 09:37:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230121213908.6504-1-jandryuk@gmail.com>
 <20230121213908.6504-3-jandryuk@gmail.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 2/2] Revert "tools/xenstore: simplify loop handling
 connection I/O"
In-Reply-To: <20230121213908.6504-3-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------w8Uw4fiU27yCTPZCNOp0sChW"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------w8Uw4fiU27yCTPZCNOp0sChW
Content-Type: multipart/mixed; boundary="------------zzHsIoN0Z56OzUvf3pUHZ9mX";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <35169441-18ab-b937-8063-b40b16ff3bd4@suse.com>
Subject: Re: [PATCH 2/2] Revert "tools/xenstore: simplify loop handling
 connection I/O"
References: <20230121213908.6504-1-jandryuk@gmail.com>
 <20230121213908.6504-3-jandryuk@gmail.com>
In-Reply-To: <20230121213908.6504-3-jandryuk@gmail.com>

--------------zzHsIoN0Z56OzUvf3pUHZ9mX
Content-Type: multipart/mixed; boundary="------------YeR8jgEvZz17dcIP1lV3VJ2I"

--------------YeR8jgEvZz17dcIP1lV3VJ2I
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 21.01.23 22:39, Jason Andryuk wrote:
> I'm observing guest kexec trigger xenstored to abort on a double free.
> 
> gdb output:
> Program received signal SIGABRT, Aborted.
> __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
> 44    ./nptl/pthread_kill.c: No such file or directory.
> (gdb) bt
>      at ./nptl/pthread_kill.c:44
>      at ./nptl/pthread_kill.c:78
>      at ./nptl/pthread_kill.c:89
>      at ../sysdeps/posix/raise.c:26
>      at talloc.c:119
>      ptr=ptr@entry=0x559fae724290) at talloc.c:232
>      at xenstored_core.c:2945
> (gdb) frame 5
>      at talloc.c:119
> 119            TALLOC_ABORT("Bad talloc magic value - double free");
> (gdb) frame 7
>      at xenstored_core.c:2945
> 2945                talloc_increase_ref_count(conn);
> (gdb) p conn
> $1 = (struct connection *) 0x559fae724290
> 
> Looking at a xenstore trace, we have:
> IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-dom
> id )
> wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 disc
> ard
> wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 disc
> ard
> wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 disc
> ard
> wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 disc
> ard
> OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
> wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 disc
> ard
> wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 disc
> ard
> IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
> DESTROY watch 0x559fae73f630
> DESTROY watch 0x559fae75ddf0
> DESTROY watch 0x559fae75ec30
> DESTROY watch 0x559fae75ea60
> DESTROY watch 0x559fae732c00
> DESTROY watch 0x559fae72cea0
> DESTROY watch 0x559fae728fc0
> DESTROY watch 0x559fae729570
> DESTROY connection 0x559fae724290
> orphaned node /local/domain/3/device/suspend/event-channel deleted
> orphaned node /local/domain/3/device/vbd/51712 deleted
> orphaned node /local/domain/3/device/vkbd/0 deleted
> orphaned node /local/domain/3/device/vif/0 deleted
> orphaned node /local/domain/3/control/shutdown deleted
> orphaned node /local/domain/3/control/feature-poweroff deleted
> orphaned node /local/domain/3/control/feature-reboot deleted
> orphaned node /local/domain/3/control/feature-suspend deleted
> orphaned node /local/domain/3/control/feature-s3 deleted
> orphaned node /local/domain/3/control/feature-s4 deleted
> orphaned node /local/domain/3/control/sysrq deleted
> orphaned node /local/domain/3/data deleted
> orphaned node /local/domain/3/drivers deleted
> orphaned node /local/domain/3/feature deleted
> orphaned node /local/domain/3/attr deleted
> orphaned node /local/domain/3/error deleted
> orphaned node /local/domain/3/console/backend-id deleted
> 
> and no further output.
> 
> The trace shows that DESTROY was called for connection 0x559fae724290,
> but that is the same pointer (conn) main() was looping through from
> connections.  So it wasn't actually removed from the connections list?
> 
> Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
> connection I/O" fixes the abort/double free.  I think the use of
> list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
> traversal safe for deleting the current iterator, but RELEASE/do_release
> will delete some other entry in the connections list.  I think the
> observed abort is because list_for_each_entry has next pointing to the
> deleted connection, and it is used in the subsequent iteration.
> 
> Add a comment explaining the unsuitability of list_for_each_entry_safe.
> Also notice that the old code takes a reference on next, which would
> prevent a use-after-free.
> 
> This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Good catch!

Reviewed-by: Juergen Gross <jgross@suse.com>

with one nit: a "Fixes:" tag for commit e8e6e42279a5 should be added.

> ---
> I didn't verify the stale pointers, which is why there are a lot of "I
> think" qualifiers.  But reverting the commit has xenstored still running
> whereas it was aborting consistently beforehand.

Your analysis seems to be fine. Soft reset handling includes a
"XS_RELEASE" message for the affected guest, which results in the
struct domain and the associated connection being freed. This can
happen to be the connection in the "next" pointer, resulting in
the crash you've observed.


Juergen
--------------YeR8jgEvZz17dcIP1lV3VJ2I--

--------------zzHsIoN0Z56OzUvf3pUHZ9mX--

--------------w8Uw4fiU27yCTPZCNOp0sChW
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPORzUFAwAAAAAACgkQsN6d1ii/Ey/V
Kwf/YdTA7bZQ4tj2cZIU9nUfIjeCqTpaX+8sVuAsWdBk0bhnKIq2z9HGYu2wSQN2Y5pLQ9yIxj7Z
KxCKUKFdp0hoxcE0xMWcc87zD2T8P4/jcnzxTixJ0WcsNtrEkfOx+m5GZz+C3ldrE6tIF6WAwu3X
aN6ZkRBBtCxd+8Mpx06fnyL/qb6hYrMKWH/Ibj7hvwF1ussOeUnOhtwiDNHyhbk/cLZLOyxxmjNL
gnafdtzmznSmIquSoqDTC3uo/Kgv79Gn8UxOBFR7o7n8QMz5cMSbPZzwJPp93l19lk4aP99j57M0
G6f3dNBOJCt8pXuVdQBCE8s2xuFeYj9prJCv0x1VBA==
=PtDk
-----END PGP SIGNATURE-----

--------------w8Uw4fiU27yCTPZCNOp0sChW--


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 08:41:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 08:41:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482688.748322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsON-0003gQ-L0; Mon, 23 Jan 2023 08:41:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482688.748322; Mon, 23 Jan 2023 08:41:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsON-0003gJ-I9; Mon, 23 Jan 2023 08:41:27 +0000
Received: by outflank-mailman (input) for mailman id 482688;
 Mon, 23 Jan 2023 08:41:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJsOM-0003gD-Kk
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 08:41:26 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2072.outbound.protection.outlook.com [40.107.7.72])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b8195672-9af9-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 09:41:24 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9399.eurprd04.prod.outlook.com (2603:10a6:102:2b3::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Mon, 23 Jan
 2023 08:41:22 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 08:41:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8195672-9af9-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YRoso7ukkMjRvm7Q/htS97DNtL7Xdg+5cbLedE8zrypMg19Kc8zGU62wYPARZJTprb+BWHRPLrWG+ZhkSy7POrkCIHH0bekXYau+FVvvNuC7Wu1Mh3cJamYNmMRg/7Ve3+HxEgfpuveaKGKAGXrbpb3LKnRlBHet2dYMqC9GgqIxy7rbbb4sIfThLI3szw/q3mvwhmZnSzNZ5Uqu73aGBnY2sjtwTeVyhyPcJu6Vbu4KsXa4V2x/FyVMKmrKc4pwQxE3Q7sHM1cL/SBdmm2m1k4X/tjB540To8yAIouyriLg/O4jwDnHI2+5fcCaSEm1AJDo6Joy3wnu2SO2NwyLiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YG1T+TYun78WSMhRDDopvvtxl8H4dbRCAs0zMaay1Rw=;
 b=SMh0mStV3SzWzkImGgwitvFIna6Ua6p0t9EJBgi7D1LMXxGg7SofFM/7lHEizQo5gH7Aob44alXIUe7EOkVpd3rB8R/eijniN4JRoHo3sr1Q1SGHH6Vs9NpvjmyV+mK/mx0Dd5K63OT+rFxngytzozJzZiQBFQuQi0TjrkyAzuF+x5w+sDTpve+oWEL4Y1Jp9T3tbDlmvQPsL/kTdlyG00l/NsDf90yAZRNhXJyzytbemMfNV9SLUhArqE+K0HykIKCyupiw9C5w5DfiGCnWdNJIz5L3GzYlrV36BEGJg4LYgS3pMTELsE7p0nab7hP18Z4qqrjRGn/4Q0ue4yWkGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YG1T+TYun78WSMhRDDopvvtxl8H4dbRCAs0zMaay1Rw=;
 b=oAuEGARU/Qj1bMyMISippamv4VSBLtO9r0rFj6BiTiVVfAnl4ZesDe3raxYjw5gfbrU/AV2CWPwZJTrFegIBhlXKpVcI9vZo+tbPCIoau49ox6yliFvxZjsyCcKzSDKWqslXyKjZAz6WlASGRyRYGiWtqKAoDJnaNF1hUVrI+baa3ysUSel95QxfNVcgPLftzXBdZWz6LCqLeuOQj7FFka7qQTYNRU2bWv9u7Y/rkSwc8c/bo56CY7zx/h6KXzB6iiL3P73Stv4X5kXNObmL2HqM8OOPunjgr8zQ/yF5AZpA4uvb2NIklBQDxpn81B+hY9vt7UxBaRaFYZGFB1CeZA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c8ca4781-13ac-add6-1ae0-558f8d0da052@suse.com>
Date: Mon, 23 Jan 2023 09:41:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/9] x86/shadow: replace sh_reset_l3_up_pointers()
Content-Language: en-US
To: George Dunlap <george.dunlap@cloud.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <d91b5315-a5bb-a6ee-c9bb-58974c733a4e@suse.com>
 <CA+zSX=ZVK_7xpgraJyC3__uORqXo8F9Atj9gCF+oO7OyfRrtYg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CA+zSX=ZVK_7xpgraJyC3__uORqXo8F9Atj9gCF+oO7OyfRrtYg@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0050.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9399:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ca99d1c-bbb4-4783-92d5-08dafd1d9ab7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8cqhio7bep0c67AtG0bAvGKsKj/KZwLXaYP+XDRSWXRlHxkW2aaCE4sUWbUjSoZNu46MNlb1q+mDEd0L8/rcYVSRxJC69e0Lrls7PBeN5SRZwIETZH8VFmI9faM9SLYmGuvf5JOiQDFG9kI8yXddXDZZ88Dfn8p3KavzVNIgWuCF9ttwXujOW9xEZz9YNXJemROY14kB50VkHl2RxDQNQKrRdLLoQXK66ve77AjQ2C4ENdRk1QZKJPeXQXc1JOcYLA6Ym+l8uEBP7/ve6pEJXTnK9/0vZOo+3xWQJtIl9bao7wf+DAWdKKhgBhl8OXY3iloFbcIfUgPvQ3cnlXsl8dCksM9C/EGAtIBsYEUjHs45oqT3rysHpAgGAd27dlLwW4IhqEbgdUHZDVQfPBuwt3FTyhZlmmlw816fReFnusrGss6v9BnaLJiyqqkIKbE4Kv7dJgEI5+yUc1xzucPutm5yh026RUbYgeOpIFSvwLGzgR+6tYxgsPNFN3JoLWUHXUU8P3XsnP2skOTQtC+2jQoyCmSpvyp5nDTzsh4yLUPvni8/unWUNhpSWCMXxkdmkoNp4cUQBhlvdJjp9WdaYy8lWnRHEI6XzhImaiXkLOu5iIWnjpOgNAxBb780r/NY1rKV2uiT+0IjkeyySWuFdrW5dpJ+VQ5P4Zt0//36T6qqmT4Iu4p6RKwPm3mHFVStTGbFFKmfXBrV144m36Qj+KgKP+o2d9dVLFd4gu9hyls=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(376002)(39860400002)(396003)(366004)(346002)(136003)(451199015)(83380400001)(31686004)(86362001)(54906003)(5660300002)(6486002)(31696002)(38100700002)(8936002)(478600001)(6512007)(2616005)(41300700001)(2906002)(316002)(4326008)(6916009)(36756003)(8676002)(186003)(53546011)(66476007)(66946007)(6506007)(26005)(66556008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MitpakZCS21wdVhnMGRkTkVmeVVNaUFZTkYzN3NzOHFJM3JxaFhpZzNvYnRH?=
 =?utf-8?B?dHFPdFp4R1hXaFhJN0w5SDVXWklsL1dRWDBJNzdBMVllNittQ3ZJd090VjBY?=
 =?utf-8?B?ZmhMTXlpR3VFYTdEaTJhNklPRE50YmhKL0UxalQxZ081NkJjNU5qYXVsbXc3?=
 =?utf-8?B?RTRqVDhYK3ppUnlnbHBraXlFZVltcXBQYnNSVEpzbnpaMEV6L1o1TC84Umpm?=
 =?utf-8?B?LzhSQ204VDR4TWNKSTVlWTl1Uk9BRWJQSGhGSmZvcSs0MEJxeEpiWjh3NEgx?=
 =?utf-8?B?WFNLYThpRHpkWGd6eTNZOFJGZnVlK1NCemNkbXJGRERSblVQZE5KL1lhZFpN?=
 =?utf-8?B?N1RCWENvSFdDejFJdjIrcWh3R3ByL1VYWFFxaVRqQkZUdmVQRDZwcHcycDNZ?=
 =?utf-8?B?WXZNTk0xbXJvN3dNb2RHNEFwc3hFcTFWTzlTK004QVZKVmx2NG1HMU5udU02?=
 =?utf-8?B?L3dZUzM4NytjeW9HNzBoRGltWXB0Q3BLbjZBVE1aNC9EeGs3MUJyU1JtSlRT?=
 =?utf-8?B?Zys0Y0gyWTdSOUpKVFJFaDQ1aWdteVNGZytlc01XSm05WmV0QUQ4SVJUUXJC?=
 =?utf-8?B?QVhOOEVlRkpmVnp5VmI5N2V6WXErMHl1NWVlajVuZzVEVi80YlRIRERzcTZN?=
 =?utf-8?B?aFBOQlpjOXNNS3hkbmYvQWhraDV2Zk1kMnhOWDdnb3RrcWptMEEzN1k0MHFH?=
 =?utf-8?B?QitTaklSYmNPRUZOOVd6THhCSzNjNlJZZzNiK1dXbFhNd3pUL0N6aXpmMjBB?=
 =?utf-8?B?L0h6TUdhUG1janh4bHZlS1UwOHJSd1RwM1lubXdhM2dHUjVIUTdhTWZDTFVC?=
 =?utf-8?B?d1VCL3BJeTRwdzlScUFHcjFyQ2dvbnVjejh2YVVqYTZHTVBHQTlaczIwVlVF?=
 =?utf-8?B?c3BTaDROSi9TbC92SzBUOUM2a1U1cjFIMG43S1pIWS9pRVBrelFVOGRPTFgr?=
 =?utf-8?B?N2FHMHdjUmlrMy8ycGw1eUxwT01OSUM0QVgzNjNJME5XUHc2aTVYUm8xNCtL?=
 =?utf-8?B?TEdOWk9HemtjWHhjMHQyKzc2RkVIYlM5RzVHV09nL1ZMWW5JT2Y0a1ZVRytT?=
 =?utf-8?B?M3gzQXdzZ0xlQmtFSmFQUmFTNTFTelBxSlEvaFI1VWxVZVB3U2FkQkwzNEpZ?=
 =?utf-8?B?anZqKzIydkJuL1NlTkE1L0s2NzdRUkEyazFwWFF4QXNZZzFsTThvVy9LS1Jr?=
 =?utf-8?B?NWxhTWFxR3JKNDRGZkhreTFZS3VJZ3NMVUNQS2RGZnB4UXJZV2NmSk8yTkZU?=
 =?utf-8?B?d3VoN2c3ZmhZcDBqa2R5VFhrRUx0ODVEZGRyZjVxaTZ0NXhrUGd4dEdEdVlh?=
 =?utf-8?B?Qi96QjY1M3htTnBkbXFyRkNneWIwejlrbzVNT1gyektwM2RYK2lkRVU0ZXBt?=
 =?utf-8?B?WXJraXdMODJRSk0wSUFCNHE0eFF5Z1NyV2NSNk1yZkVSbnVJdEUzSHdGd1pE?=
 =?utf-8?B?QSsyZ2pRZEwxaEY3ZVV3TlpNWXd1UnJIWExuajc3b2dLYXJ2T2pxYk0wWDlY?=
 =?utf-8?B?M3NwVGIyTlFCbzZ0aVlkeFQ5aTkrSlNmU2RSdHhtRjB3TnB2dlVSM0VPdmZu?=
 =?utf-8?B?VVpDSWdiaVY1eEptcS9KUExzV3ZPcmRIZm5kU3dwdjBvdXFVTU8raTY3WXZ2?=
 =?utf-8?B?VUxzM0FhVmtaTmVKdFFCU0FXYlhJc3R5T1drQUs2eXV5S0pnSVZoTUpCTGF4?=
 =?utf-8?B?U1EzaXNjVkFOTG1KSi9NM09pWXFnWXZhT2ZZb0RzZnJHQVBOS2VMZlcyQ0o2?=
 =?utf-8?B?VFFuOU9PQm1WN1RxdXBIcmlObnh0QlhQRTZrR0dYN3I4U25ESXh3NnVpUkc0?=
 =?utf-8?B?WGlCeTZ6K2MxUk94TFh1RDBEM2NwVWVFaHE3Tk1UeUlTUWFtNHp2WjZCakl3?=
 =?utf-8?B?S0ZlVEhWKys4ZitKaDcrT0FqcXZFNy84RGRrSTBYbE5aREJHWStWT0w0dTc1?=
 =?utf-8?B?TnUxUUVqT1V6Rm5OajVncWo1aHVqYjlJbGIzVGFUMERQTndsb1VBOGF5Z1Zp?=
 =?utf-8?B?T3RESkp0aTU1c281NkVTYU5pa0VzRG5SaEw2dERndkZNd0Ficm1pRTJNZFUv?=
 =?utf-8?B?UkhycUdKdzg2Mk56WFEyOTIrY1IxUEpyU1Y1RVhSZ0M2YmVBMVFUWFVKU0JH?=
 =?utf-8?Q?JSDVDFoO/YdsmdVFVN/60Q73g?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ca99d1c-bbb4-4783-92d5-08dafd1d9ab7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 08:41:21.9509
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: P775tVfYeSpI1QQu0KoT9LX7gnzBUd7kHDdGCbZWx+tNfRdQOPM4dKZtNfXt+lMpQAOG8EI2PoKolbFSiNWmKw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9399

On 20.01.2023 18:02, George Dunlap wrote:
> On Wed, Jan 11, 2023 at 1:52 PM Jan Beulich <jbeulich@suse.com> wrote:
> 
>> Rather than doing a separate hash walk (and then even using the vCPU
>> variant, which is to go away), do the up-pointer-clearing right in
>> sh_unpin(), as an alternative to the (now further limited) enlisting on
>> a "free floating" list fragment. This utilizes the fact that such list
>> fragments are traversed only for multi-page shadows (in shadow_free()).
>> Furthermore sh_terminate_list() is only a safeguard anyway, which isn't
>> in use in the common case (it actually does anything only for BIGMEM
>> configurations).
> 
> One thing that seems strange about this patch is that you're essentially
> adding a field to the domain shadow struct in lieu of adding another
> argument to sh_unpin() (unless the bit is referenced elsewhere in
> subsequent patches, which I haven't reviewed, in part because about half of
> them don't apply cleanly to the current tree).

Well, to me adding another parameter to sh_unpin() would have looked odd;
the new field looks slightly cleaner to me. But changing that is merely a
matter of taste, so if you and e.g. Andrew think that approach was better,
I could switch to that. And no, I don't foresee further uses of the field.

As to half of the patches not applying: Some were already applied out of
order, and others therefore need slight re-basing. Until now I saw no
reason to re-send the remaining patches just for that.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 08:45:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 08:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482693.748332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsRk-0004He-5X; Mon, 23 Jan 2023 08:44:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482693.748332; Mon, 23 Jan 2023 08:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJsRk-0004HX-1Z; Mon, 23 Jan 2023 08:44:56 +0000
Received: by outflank-mailman (input) for mailman id 482693;
 Mon, 23 Jan 2023 08:44:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJsRi-0004HN-GJ; Mon, 23 Jan 2023 08:44:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJsRi-0001je-6W; Mon, 23 Jan 2023 08:44:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pJsRh-0007DI-PP; Mon, 23 Jan 2023 08:44:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pJsRh-00019L-Oi; Mon, 23 Jan 2023 08:44:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z0mHZwSCw6kDGw1+szrjKINsUS8EHpblsKLuu5K4/iM=; b=hA3DpffvElEGuFbm+6k/M2Ekrw
	Y/58CcM6y/HwNNJQaWLoE90wbv0Sy3RoMtJ6MVIsM4ddr8vRPjGQQd5sEY+8zKD+O/ICq9s1Pp65P
	C663/CLWCto9PbtT8f7OAwvHtXYaWxkY7Fo8/V1EMKrwYES9pMlqRoUUWAgJqG9P/dfY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176056-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176056: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d60c20260c7e82fe5344d06c20d718e0cc03b8b
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 08:44:53 +0000

flight 176056 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176056/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd 17 guest-start/debian.repeat fail in 176042 pass in 176056
 test-amd64-i386-libvirt-xsm   7 xen-install                fail pass in 176042

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 176042 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    3 days
Failing since        176003  2023-01-20 17:40:27 Z    2 days    7 attempts
Testing same since   176011  2023-01-21 07:04:02 Z    2 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 762 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 09:55:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 09:55:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482705.748348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJtXS-00033W-DA; Mon, 23 Jan 2023 09:54:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482705.748348; Mon, 23 Jan 2023 09:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJtXS-00033P-A7; Mon, 23 Jan 2023 09:54:54 +0000
Received: by outflank-mailman (input) for mailman id 482705;
 Mon, 23 Jan 2023 09:54:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=StYb=5U=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pJtXR-00033J-BP
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 09:54:53 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2044.outbound.protection.outlook.com [40.107.223.44])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f91aef37-9b03-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 10:54:49 +0100 (CET)
Received: from BL0PR02CA0139.namprd02.prod.outlook.com (2603:10b6:208:35::44)
 by BY5PR12MB4147.namprd12.prod.outlook.com (2603:10b6:a03:205::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 09:54:44 +0000
Received: from BL02EPF0000EE3F.namprd05.prod.outlook.com
 (2603:10b6:208:35:cafe::a4) by BL0PR02CA0139.outlook.office365.com
 (2603:10b6:208:35::44) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Mon, 23 Jan 2023 09:54:44 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BL02EPF0000EE3F.mail.protection.outlook.com (10.167.241.133) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.12 via Frontend Transport; Mon, 23 Jan 2023 09:54:44 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 23 Jan
 2023 03:54:44 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 23 Jan
 2023 01:54:43 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 23 Jan 2023 03:54:42 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f91aef37-9b03-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P+Ix8zQCLQLg+3Jy/GU64tC+Ru0t1rOrbvdLCr1rMyfTo6zynRv8PBi76yNmoBfmMiNzavPIRHlnBrFSnlVGydJzPQ338u/ZzC9+4GHTkQcoYcgEGPmG5BI9RzigHdZS2Cv04VaBpyoqikJgp0YbWnQrFSB/VSQch8rupb5ffcvQfgKHKqu1XuJkhQn6Lgdf7OFbJmX1tKWwFY6DhMtiGvYyKiMme+7ujqudVi59Lr+iqqPPoYOggbBkbbXuLBS1LEc5QDzttmOiQGz1FPH1USaytsBIN9ufCxhO2HswpLXIC5fXwF7h7XrHFHjHuWaG5MbZhSGF5PDDdvJqu9+ieg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=I+ChblAooxSW6QftgYEoMsg2d3B2eVain5PW+PHapxY=;
 b=T4Azov7srwkH3JnYzkBYz7OTSkrBDB8QJMcVttexiae8+8/0qaU6Or++0ioTfnia+FHF4d/W1Aunk9Jf8FMn3F382uTAzfn1DWg5IydLGG6BPmz39K7zcOs56F1sJ1r1zRebplucfogVR2647i/z549RlVaA8WBBdMKUVRylQZ58R82+/K2cH5ANKkj2LVUGPUKQwFUi8a6T0ulB1pMPUVzRhJylVLGtXIwUEqJOcmGC9OqsA4uZ/X06oik6GTz8jVESLU1E44kgTtnZaBRSrBwg/DmOkl1Fs2squP9LACnsPDEJ6F2iZfQLL4XXNKy6UDkOUpwz/X9DlH32PDflrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=I+ChblAooxSW6QftgYEoMsg2d3B2eVain5PW+PHapxY=;
 b=EH4LrsKgJecrvuYJa9YXqshbyIiCtoS8K8yJIt0ywfkBCwk5VOuG3oJKvNCVSGmeDVac975gDejb1FtJwLEbBKXKSD+vVLgpR+QYytyopvnOdX2++/Ib8EL3Ngh1qk8B0VByXR1QVbXxALBgHXto4lXenxE/iCG4Qzk9i6Ns9Rg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <3949c194-114c-cd4f-3f7c-c57d423a7955@amd.com>
Date: Mon, 23 Jan 2023 10:54:42 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN][RFC PATCH v4 07/16] xen/iommu: Move spin_lock from
 iommu_dt_device_is_assigned to caller
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <julien@xen.org>, <Luca.Fancellu@arm.com>
References: <20221207061815.7404-1-vikram.garhwal@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221207061815.7404-1-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF0000EE3F:EE_|BY5PR12MB4147:EE_
X-MS-Office365-Filtering-Correlation-Id: dbf21649-dcf9-47ee-34bf-08dafd27dae3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	b9v5renwULcEJeoasmGHq4drFuEq1m9aiCl2YV6SoqP98p4qT1zMMv1VsnJ6uTX84ZnOY4qi0AS0idbAcF/d6aXyp1ETGvOG/08gAWXJqFeUNgLJkMEVC8RijhrFs4sNIFVD8LHv+xZlnr5oUdhqA9ccQjulwYLEL/pzgDeurI7KG5P9kMMGZRFPz8+oirSF5AxH99jWHitaeyBayOF5A20HyuSN+DiwG/yEB4ZWenWk/rTPuaDQVByPqcazVjgAOziI/0fZYrGikvuWt3bmU+9yAVLRYYMKjTVei5sCYN22q2WuBaZzOSPuLQJ1NBpJXCj8k53hx9e7aTfnR7ezEuy4wJOmMVi9YbF1om4qfIOzss4v8FW+/10P8oXKbWy58lfey45ZOgwC/HFI3xXFaLS5r3UwBQssKx5yEky8qznm1v2LjlFAGxghb67Kxdd9Y5bhMXzqEGiD9lS73KO3kbLdOERPnu7YNC4Gw49rSZ2vk/WfpOQbKRO0DX4ZPzcAwJMHN/yb2B1sULjXnWnLLh4LYfcxl+sATecc/R6dnvohj9HA0GziLKag7hjL3zNMttejhTRzWTgj+K8M/EE4YuhcswZvud6Cba+fcXwgrdebDVSuagVt7eqUTsWDlhBxBy4pEFNUzAMXlRRLG7T8sT/NxU+fV1zrK5OemGWGeT/nLNor/lkbIMY/7IvHhWRU82ESumFI/8noRaFjylshTlxyDkwKkkJ6pPRjuMI54tY=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(396003)(136003)(39850400004)(346002)(376002)(451199015)(36840700001)(46966006)(36860700001)(83380400001)(31696002)(5660300002)(82740400003)(86362001)(81166007)(41300700001)(356005)(2906002)(44832011)(8936002)(82310400005)(4326008)(40480700001)(8676002)(186003)(53546011)(26005)(47076005)(426003)(336012)(316002)(16576012)(70206006)(54906003)(2616005)(70586007)(110136005)(31686004)(478600001)(36756003)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 09:54:44.3752
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dbf21649-dcf9-47ee-34bf-08dafd27dae3
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF0000EE3F.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4147

Hi Vikram,

On 07/12/2022 07:18, Vikram Garhwal wrote:
> 
> 
> Rename iommu_dt_device_is_assigned() to iommu_dt_device_is_assigned_lock().
s/lock/locked/

> 
> Moving spin_lock to caller was done to prevent the concurrent access to
> iommu_dt_device_is_assigned while doing add/remove/assign/deassign.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  xen/drivers/passthrough/device_tree.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 1c32d7b50c..bb4cf7784d 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -83,16 +83,15 @@ fail:
>      return rc;
>  }
> 
> -static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
> +static bool_t
> +    iommu_dt_device_is_assigned_locked(const struct dt_device_node *dev)
This should not be indented
>  {
>      bool_t assigned = 0;
> 
>      if ( !dt_device_is_protected(dev) )
>          return 0;
> 
> -    spin_lock(&dtdevs_lock);
>      assigned = !list_empty(&dev->domain_list);
> -    spin_unlock(&dtdevs_lock);
> 
>      return assigned;
>  }
> @@ -213,27 +212,43 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
>          if ( (d && d->is_dying) || domctl->u.assign_device.flags )
>              break;
> 
> +        spin_lock(&dtdevs_lock);
> +
>          ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
>                                      domctl->u.assign_device.u.dt.size,
>                                      &dev);
>          if ( ret )
> +        {
> +            spin_unlock(&dtdevs_lock);
> +
I think removing a blank line here and in other places would look better.
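As an aside, the pattern under review here, hoisting the lock out of the helper so a caller can combine the check with a mutation under one lock acquisition, can be sketched in isolation. This is a minimal illustration only: plain pthreads stand in for Xen's spinlock, and all names (`devs_lock`, `fake_device`, `try_remove_device`) are hypothetical, not the actual Xen code.

```c
#include <pthread.h>
#include <stdbool.h>

/* Stand-in for dtdevs_lock; in Xen this is a spinlock_t. */
static pthread_mutex_t devs_lock = PTHREAD_MUTEX_INITIALIZER;

struct fake_device {
    bool is_protected;
    int  domain_list_len;
};

/*
 * The "_locked" suffix documents the contract: the caller must
 * already hold devs_lock.  The helper itself takes no lock.
 */
static bool device_is_assigned_locked(const struct fake_device *dev)
{
    if ( !dev->is_protected )
        return false;

    return dev->domain_list_len > 0;
}

/* A caller serialising the check with the subsequent mutation. */
static int try_remove_device(struct fake_device *dev)
{
    int rc = 0;

    pthread_mutex_lock(&devs_lock);

    if ( device_is_assigned_locked(dev) )
        rc = -1;                      /* -EBUSY in the real code */
    else
        dev->is_protected = false;    /* "remove" the device */

    pthread_mutex_unlock(&devs_lock);

    return rc;
}
```

The point of the refactoring is visible above: with the lock inside the helper, another thread could assign the device between the check and the removal; with the lock in the caller, check and removal are atomic with respect to devs_lock.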

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 10:01:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 10:01:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482710.748358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJtdD-0004YX-1l; Mon, 23 Jan 2023 10:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482710.748358; Mon, 23 Jan 2023 10:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJtdC-0004YQ-UT; Mon, 23 Jan 2023 10:00:50 +0000
Received: by outflank-mailman (input) for mailman id 482710;
 Mon, 23 Jan 2023 10:00:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=StYb=5U=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pJtdB-0004YG-Eq
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 10:00:49 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2045.outbound.protection.outlook.com [40.107.244.45])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ceca4751-9b04-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 11:00:47 +0100 (CET)
Received: from DS7PR03CA0182.namprd03.prod.outlook.com (2603:10b6:5:3b6::7) by
 MW5PR12MB5652.namprd12.prod.outlook.com (2603:10b6:303:1a0::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 10:00:42 +0000
Received: from DS1PEPF0000E632.namprd02.prod.outlook.com
 (2603:10b6:5:3b6:cafe::c9) by DS7PR03CA0182.outlook.office365.com
 (2603:10b6:5:3b6::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Mon, 23 Jan 2023 10:00:42 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E632.mail.protection.outlook.com (10.167.17.136) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.10 via Frontend Transport; Mon, 23 Jan 2023 10:00:41 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 23 Jan
 2023 04:00:41 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 23 Jan 2023 04:00:39 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceca4751-9b04-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QfbJW3y1trkUtR/0s7lwsITNZiW+yZRjc60sYCnSgQp0cDIAXXKKwdWWT6ad6kiTXkv2Dtyr9YhSOR49BJ5Sc7OY9w9DTbyDBWUt2S27oqKgUTFdMaky5RKZmXlsY1XboECu/+UFK2cNuoc7Tl5ktP7f94Yfxw7StZiAOsLBRT6FVYQyJd23BJ+l9iIzSJpUtynFvzGsl5h796qvHMpvBGlOoz25ulrYvoWy4PFMthlcxtxpImQ6G6Dv409PpPUsZXIEX27rNNtUHDmekxSRKin6gbmUKaxZ8c0vZcvDm8Vpq3XPtWbBgaQOy7TcC0zqYUencu0wJGz4ihkrnQyh/g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2zYprjtrKsdKBHl/ZKkPyjDmFujiX6ygwUsmkay0LJo=;
 b=DIBN5MG6VzJ5TkJz+eyFTPrnfGZdNf45Zqr7qAy1F7J1j3ZhDLPTeY6iLO4szS0LSk8209+Rvp+TL8i6r94AWwQ4IbHNVVt1DRf2gUTCk9NEzuDQ3c5e8Xo60iDF/MKvWTMsIU7FmiZHlanszgN9NhjQgXk/jXFXGwCQ+tJuztjvT+uVTDfiqErqh6ySXqeJS3ZVIndv5q+G47Ro6EJoQXV1dO80KCj3xZbo3JUXrrekNOw5qG6ujRaCLe+SiU/i1htLYEZdxGuIOw6E5pFHkaXAoAYJSY60wGizPXXRQjad7BqI/bTgrh6cOEk8neI2D/z/oSYkg02VFTfZOXRQkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2zYprjtrKsdKBHl/ZKkPyjDmFujiX6ygwUsmkay0LJo=;
 b=S5dVQp9ab4kj19wycuH4POqer8+CeH3F6JKXlJZoMjzpZkISWjLUnrKf0Hum20zIYYtPzSCT1aoDFHDmdUB0Jg1xHKPvbTkwAzmvyaoIxjiLtuhQD3doR1XqPTybHMFxKrt52WW2DN+Iy3a0JFs1R13FNGx0QvNXKOoas8CWv1I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <311dc97d-e924-e9ad-25d3-1135d4b24f7e@amd.com>
Date: Mon, 23 Jan 2023 11:00:39 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN][RFC PATCH v4 09/16] xen/iommu: Introduce
 iommu_remove_dt_device()
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <julien@xen.org>, <Luca.Fancellu@arm.com>, "Jan
 Beulich" <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20221207061815.7404-1-vikram.garhwal@amd.com>
 <20221207061815.7404-3-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221207061815.7404-3-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E632:EE_|MW5PR12MB5652:EE_
X-MS-Office365-Filtering-Correlation-Id: b72a9767-4c01-45a5-00a6-08dafd28afd6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	cQlqtCLflGtP8rmKLMJ/ItTR5MMovAKI4Wtti/udQsjgTEkPCjZqrMUQ2PwwWEtL6gXztTqdv9Q8BmjYM+ANH1kPzTkXF31Kb4ipycSoMLCH7utFck3jWkocjXlZE0uaDayWviX2vFRvirMMCXyTsvxITayWQRVkPDG3JFO9suBzNvP1PUXPRuTSM5UmOZ99LDb4lNXEgq25FVzyZHUEF3W514IEnVC1FKU2qUKm7xyiNO35+OV8S8JmlpS9kUobXwHG83LYtLFFO4Xd/SCIo2H5jxUOgGPNxNTgmF6foWyRDlPgEojpJt3V2kbmHGg3dQcgPtKe9Gv5B8kQ0HXmB87YvBWjOgLR8Lesqt4SEyggf4ACLy8S24KJxHn/HyCKoM/OeOGpNGbqPylslrp2Nyrf2brC/zt0l4Dkmnr+h/e9SjdNFTVbOMkgc97wly5I/ibgK7hlsGCeHCJeyOSbRRyCIUADS+a70OEdfgY0tKZxu8Dpk+qp8NVTuBFkux2+7V/NMYX5ewUsITo39LX7Q3WpXKUhTzGh3jzaTtxosD/qa34WZr8GP5l5xsGMBtvx5ycdgNH5TSOpNtAQjLrYVpEFRBMBqPAeC397vKCTGDmRlvNHXLJD/tlxUwaQn/BVvakR9dB0RuDSqVvnRcxuxC1JluNOKQMAQBTAwWl5ZXwuq2Oid3EhRcDKmezoVzbuO1TzT8CYwjNcDio/KmOEFG4uD6eV3CGrfun0ZsuwaCGinrLunMHjoq4iQ0QZXEhbBcK7bEedUTOrfwEHAN5eww==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(396003)(39860400002)(376002)(346002)(451199015)(36840700001)(46966006)(40470700004)(478600001)(31686004)(2616005)(47076005)(426003)(4326008)(8676002)(26005)(186003)(336012)(316002)(70586007)(70206006)(16576012)(83380400001)(53546011)(41300700001)(36860700001)(8936002)(2906002)(5660300002)(44832011)(82740400003)(81166007)(31696002)(82310400005)(54906003)(110136005)(86362001)(36756003)(40460700003)(356005)(40480700001)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 10:00:41.6278
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b72a9767-4c01-45a5-00a6-08dafd28afd6
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E632.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW5PR12MB5652

Hi Vikram,

On 07/12/2022 07:18, Vikram Garhwal wrote:
> 
> 
> Remove master device from the IOMMU.
Adding some description of the purpose would be beneficial.

> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/drivers/passthrough/device_tree.c | 38 +++++++++++++++++++++++++++
>  xen/include/xen/iommu.h               |  2 ++
>  2 files changed, 40 insertions(+)
> 
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 457df333a0..a8ba0b0d17 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -126,6 +126,44 @@ int iommu_release_dt_devices(struct domain *d)
>      return 0;
>  }
> 
> +int iommu_remove_dt_device(struct dt_device_node *np)
> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    struct device *dev = dt_to_dev(np);
> +    int rc;
> +
Aren't we missing a check whether the IOMMU is enabled?

> +    if ( !ops )
> +        return -EOPNOTSUPP;
-EINVAL to match the return values returned by other functions?

> +
> +    spin_lock(&dtdevs_lock);
> +
> +    if ( iommu_dt_device_is_assigned_locked(np) ) {
Incorrect coding style. The opening brace should be placed on the next line.

> +        rc = -EBUSY;
> +        goto fail;
> +    }
> +
> +    /*
> +     * The driver which supports generic IOMMU DT bindings must have
> +     * these callback implemented.
> +     */
> +    if ( !ops->remove_device ) {
Incorrect coding style. The opening brace should be placed on the next line.

> +        rc = -EOPNOTSUPP;
-EINVAL to match the return values returned by other functions?

> +        goto fail;
> +    }
> +
> +    /*
> +     * Remove master device from the IOMMU if latter is present and available.
> +     */
No need for a multi-line comment style.

> +    rc = ops->remove_device(0, dev);
> +
> +    if ( rc == 0 )
!rc is preferred.

> +        iommu_fwspec_free(dev);
> +
> +fail:
> +    spin_unlock(&dtdevs_lock);
> +    return rc;
> +}
> +
>  int iommu_add_dt_device(struct dt_device_node *np)
>  {
>      const struct iommu_ops *ops = iommu_get_ops();
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 4f22fc1bed..1b36c0419d 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -225,6 +225,8 @@ int iommu_release_dt_devices(struct domain *d);
>   */
>  int iommu_add_dt_device(struct dt_device_node *np);
> 
> +int iommu_remove_dt_device(struct dt_device_node *np);
These prototypes appear to be placed in alphabetical order, so your function
should be placed before the add function.

> +
>  int iommu_do_dt_domctl(struct xen_domctl *, struct domain *,
>                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
> 
> --
> 2.17.1
> 
> 

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 10:06:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 10:06:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482715.748367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJtii-0005Be-LH; Mon, 23 Jan 2023 10:06:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482715.748367; Mon, 23 Jan 2023 10:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJtii-0005BX-Ii; Mon, 23 Jan 2023 10:06:32 +0000
Received: by outflank-mailman (input) for mailman id 482715;
 Mon, 23 Jan 2023 10:06:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pJtih-0005BR-9X
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 10:06:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pJtig-0003po-Gl; Mon, 23 Jan 2023 10:06:30 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.11.171]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pJtig-0000An-AD; Mon, 23 Jan 2023 10:06:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=18SE7SHjckfBxjtButz+tpG9m2iYFLekvCknyMkwnvw=; b=BJi+ASAGDKHxe6Vd8duudiouhA
	xbz99un8c/Touv/nq60o8yQ6sscsGcuXm/KtKN//S5692jxM7fKRJnj4CuCEBmxTCXuD648/aLsuE
	1R0kKcNBN8q/oV7fno0j8+e0gqqk9gXj4a1/2Dc7hLsZ7p4ALTPCVUz3+UXT3cU4PweU=;
Message-ID: <07dac0bc-6415-259f-6410-0ca285997a81@xen.org>
Date: Mon, 23 Jan 2023 10:06:28 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN][RFC PATCH v4 09/16] xen/iommu: Introduce
 iommu_remove_dt_device()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Vikram Garhwal <vikram.garhwal@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Luca.Fancellu@arm.com,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20221207061815.7404-1-vikram.garhwal@amd.com>
 <20221207061815.7404-3-vikram.garhwal@amd.com>
 <311dc97d-e924-e9ad-25d3-1135d4b24f7e@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <311dc97d-e924-e9ad-25d3-1135d4b24f7e@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 23/01/2023 10:00, Michal Orzel wrote:
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>   xen/drivers/passthrough/device_tree.c | 38 +++++++++++++++++++++++++++
>>   xen/include/xen/iommu.h               |  2 ++
>>   2 files changed, 40 insertions(+)
>>
>> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
>> index 457df333a0..a8ba0b0d17 100644
>> --- a/xen/drivers/passthrough/device_tree.c
>> +++ b/xen/drivers/passthrough/device_tree.c
>> @@ -126,6 +126,44 @@ int iommu_release_dt_devices(struct domain *d)
>>       return 0;
>>   }
>>
>> +int iommu_remove_dt_device(struct dt_device_node *np)
>> +{
>> +    const struct iommu_ops *ops = iommu_get_ops();
>> +    struct device *dev = dt_to_dev(np);
>> +    int rc;
>> +
> Aren't we missing a check if iommu is enabled?
> 
>> +    if ( !ops )
>> +        return -EOPNOTSUPP;
> -EINVAL to match the return values returned by other functions?

The meaning of -EINVAL is quite overloaded, so it would be better to use 
a mix of errno values to help differentiate the error paths.

In this case, '!ops' means there is no possibility (read "support") to 
remove the device. So I think -EOPNOTSUPP is suitable.

> 
>> +
>> +    spin_lock(&dtdevs_lock);
>> +
>> +    if ( iommu_dt_device_is_assigned_locked(np) ) {
> Incorrect coding style. The opening brace should be placed on the next line.
> 
>> +        rc = -EBUSY;
>> +        goto fail;
>> +    }
>> +
>> +    /*
>> +     * The driver which supports generic IOMMU DT bindings must have
>> +     * these callback implemented.
>> +     */
>> +    if ( !ops->remove_device ) {
> Incorrect coding style. The opening brace should be placed on the next line.
> 
>> +        rc = -EOPNOTSUPP;
> -EINVAL to match the return values returned by other functions?

Ditto.
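
The error-path discipline being argued for, a distinct errno per failure mode rather than -EINVAL everywhere, can be illustrated with a self-contained sketch. All names here (`fake_ops`, `remove_sketch`, `dummy_remove`) are hypothetical stand-ins, not the Xen API:

```c
#include <errno.h>
#include <stddef.h>

struct fake_ops {
    int (*remove_device)(void *dev);
};

/*
 * Each failure path returns a distinct errno, so a caller can tell
 * "removal is not supported at all" (-EOPNOTSUPP) apart from
 * "device is still in use" (-EBUSY) purely from the return value.
 */
static int remove_sketch(const struct fake_ops *ops, void *dev, int assigned)
{
    if ( !ops || !ops->remove_device )
        return -EOPNOTSUPP;   /* no driver hook: operation unsupported */

    if ( assigned )
        return -EBUSY;        /* device still assigned to a domain */

    return ops->remove_device(dev);
}

/* A trivial driver hook for demonstration. */
static int dummy_remove(void *dev)
{
    (void)dev;
    return 0;
}

static const struct fake_ops ops_ok   = { .remove_device = dummy_remove };
static const struct fake_ops ops_none = { .remove_device = NULL };
```

With this scheme a caller (or a log reader) can distinguish the three outcomes without any extra context, which is exactly what collapsing everything to -EINVAL would lose.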

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 10:31:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 10:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482723.748380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJu6M-0008Sl-La; Mon, 23 Jan 2023 10:30:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482723.748380; Mon, 23 Jan 2023 10:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJu6M-0008Se-J2; Mon, 23 Jan 2023 10:30:58 +0000
Received: by outflank-mailman (input) for mailman id 482723;
 Mon, 23 Jan 2023 10:30:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oetg=5U=citrix.com=prvs=380ad89a7=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pJu6L-0008SY-Bt
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 10:30:57 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 03a5d671-9b09-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 11:30:55 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03a5d671-9b09-11ed-91b6-6bf2151ebd3b
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93238071
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,239,1669093200"; 
   d="scan'208";a="93238071"
Date: Mon, 23 Jan 2023 10:30:30 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, George
 Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [XEN PATCH v4 3/3] build: compat-xlat-header.py: optimisation to
 search for just '{' instead of [{}]
Message-ID: <Y85hxvyTHa/nXZ9H@perard.uk.xensource.com>
References: <20230119152256.15832-1-anthony.perard@citrix.com>
 <20230119152256.15832-4-anthony.perard@citrix.com>
 <60df7795-8f0b-e0f2-a790-2e00c0d4db2a@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <60df7795-8f0b-e0f2-a790-2e00c0d4db2a@citrix.com>

On Fri, Jan 20, 2023 at 06:26:14PM +0000, Andrew Cooper wrote:
> On 19/01/2023 3:22 pm, Anthony PERARD wrote:
> > `fields` and `extrafields` always contain all the parts of a sub-struct, so
> > when there is a '}', there is always a '{' before it. Also, both are
> > lists.
> >
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> > ---
> >  xen/tools/compat-xlat-header.py | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
> > index ae5c9f11c9..d0a864b68e 100644
> > --- a/xen/tools/compat-xlat-header.py
> > +++ b/xen/tools/compat-xlat-header.py
> > @@ -105,7 +105,7 @@ def handle_field(prefix, name, id, type, fields):
> >          else:
> >              k = id.replace('.', '_')
> >              print("%sXLAT_%s_HNDL_%s(_d_, _s_);" % (prefix, name, k), end='')
> > -    elif not re_brackets.search(' '.join(fields)):
> > +    elif not '{' in fields:
> >          tag = ' '.join(fields)
> >          tag = re.sub(r'\s*(struct|union)\s+(compat_)?(\w+)\s.*', '\\3', tag)
> >          print(" \\")
> > @@ -290,7 +290,7 @@ def build_body(name, tokens):
> >      print(" \\\n} while (0)")
> >  
> >  def check_field(kind, name, field, extrafields):
> > -    if not re_brackets.search(' '.join(extrafields)):
> > +    if not '{' in extrafields:
> >          print("; \\")
> >          if len(extrafields) != 0:
> >              for token in extrafields:
> 
> These are the only two users of re_brackets aren't they? In which case
> you should drop the re.compile() too.

Indeed, I missed that; we can drop re_brackets.
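As a standalone check of the simplification (hypothetical token lists, mirroring the shapes the script passes around): because `fields` is a list of tokens in which braces only ever appear as their own tokens, testing `'{' in fields` agrees with searching the joined string for `[{}]`:

```python
import re

re_brackets = re.compile(r'[{}]')  # the pattern the patch makes unused

def old_has_no_substruct(fields):
    # Original form: join the tokens, regex-search for either brace.
    return not re_brackets.search(' '.join(fields))

def new_has_no_substruct(fields):
    # Patched form: plain membership test on the token list.
    return '{' not in fields

flat = ['struct', 'compat_foo', 'bar']
nested = ['union', '{', 'uint32_t', 'lo;', '}', 'u']
for fields in (flat, nested):
    assert old_has_no_substruct(fields) == new_has_no_substruct(fields)
```

The membership test also avoids building a throwaway joined string on every call.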

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 10:43:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 10:43:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482730.748391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuIh-0001hl-TH; Mon, 23 Jan 2023 10:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482730.748391; Mon, 23 Jan 2023 10:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuIh-0001he-QJ; Mon, 23 Jan 2023 10:43:43 +0000
Received: by outflank-mailman (input) for mailman id 482730;
 Mon, 23 Jan 2023 10:43:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rv8W=5U=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pJuIg-0001hY-JI
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 10:43:42 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cd1788cf-9b0a-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 11:43:41 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id
 d4-20020a05600c3ac400b003db1de2aef0so8200576wms.2
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 02:43:41 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 u3-20020a7bc043000000b003d1d5a83b2esm10155933wmc.35.2023.01.23.02.43.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 02:43:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd1788cf-9b0a-11ed-91b6-6bf2151ebd3b
X-Gm-Message-State: AFqh2ko66E3cCRwFP4rSQSvxkly28a/0esLtiI8GD9YFdbSOfvOUdSJy
	HfYk520dx7PvSN0AcPeOQkc=
X-Google-Smtp-Source: AMrXdXtp4hacxzT/Rg4/n9tUFUxbWTSelR8WjJ2kUxCHX0amWX7eGTyhPcTHrPeveXwdq6NBDGLl6w==
X-Received: by 2002:a05:600c:539b:b0:3d9:f836:3728 with SMTP id hg27-20020a05600c539b00b003d9f8363728mr23704479wmb.11.1674470620984;
        Mon, 23 Jan 2023 02:43:40 -0800 (PST)
Message-ID: <d3e2c18e443439d18f8ece31c9419e30a19be8c5.camel@gmail.com>
Subject: Re: [PATCH v1 01/14] xen/riscv: add _zicsr to CFLAGS
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
	Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Date: Mon, 23 Jan 2023 12:43:39 +0200
In-Reply-To: <d5d9a305-3501-cbc4-1c8a-1a62bd08d588@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <3617dc882193166580ae7e5d122447e924cab524.1674226563.git.oleksii.kurochko@gmail.com>
	 <d5d9a305-3501-cbc4-1c8a-1a62bd08d588@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-20 at 15:29 +0000, Andrew Cooper wrote:
> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> > Work with some registers requires csr command which is part of
> > Zicsr.
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >  xen/arch/riscv/arch.mk | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
> > index 012dc677c3..95b41d9f3e 100644
> > --- a/xen/arch/riscv/arch.mk
> > +++ b/xen/arch/riscv/arch.mk
> > @@ -10,7 +10,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)        := $(riscv-march-y)c
> >  # into the upper half _or_ the lower half of the address space.
> >  # -mcmodel=medlow would force Xen into the lower half.
> >  
> > -CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
> > +CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany
> 
> Should we just go straight for G, rather than bumping it along every
> time we make a tweak?
> 
I didn't go straight for G because it represents the “IMAFDZicsr Zifencei”
base and extensions, so taking it would also require adding FPU support
(at least it requires {save,restore}_fp_state), and I am not sure we
need that in general.

Another reason is that the Linux kernel adds the _zicsr extension
separately (though I am not sure this can be considered a serious
argument):
https://elixir.bootlin.com/linux/latest/source/arch/riscv/Makefile#L58
https://lore.kernel.org/all/20221024113000.891820486@linuxfoundation.org/

> ~Andrew
~Oleksii
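The string accumulation that arch.mk performs can be modelled as below. This is a hedged sketch with a hypothetical helper and default base string; the real logic is the riscv-march-y chain in xen/arch/riscv/arch.mk:

```python
def riscv_march(base="rv64ima", isa_c=True, with_zicsr=True):
    """Compose a -march= string the way arch.mk chains riscv-march-y (sketch)."""
    march = base
    if isa_c:            # CONFIG_RISCV_ISA_C appends 'c'
        march += "c"
    if with_zicsr:       # the patch under discussion appends '_zicsr'
        march += "_zicsr"
    return march

# e.g. the resulting flag: CFLAGS += -march=rv64imac_zicsr -mstrict-align -mcmodel=medany
```

Each config knob only ever appends to the string, which is why extensions get "bumped along" one suffix at a time rather than switching the whole base to G.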



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 10:44:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 10:44:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482732.748401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuIy-00025h-6G; Mon, 23 Jan 2023 10:44:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482732.748401; Mon, 23 Jan 2023 10:44:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuIy-00025a-2b; Mon, 23 Jan 2023 10:44:00 +0000
Received: by outflank-mailman (input) for mailman id 482732;
 Mon, 23 Jan 2023 10:43:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTo8=5U=citrix.com=prvs=380e0b34c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pJuIw-0001hY-Ni
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 10:43:59 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d5ed916f-9b0a-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 11:43:57 +0100 (CET)
Received: from mail-dm6nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jan 2023 05:43:55 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA2PR03MB5676.namprd03.prod.outlook.com (2603:10b6:806:116::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 10:43:50 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 10:43:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5ed916f-9b0a-11ed-91b6-6bf2151ebd3b
X-IronPort-RemoteIP: 104.47.57.170
X-IronPort-MID: 93239769
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,239,1669093200"; 
   d="scan'208";a="93239769"
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when
 HVM+PV32
Thread-Topic: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when
 HVM+PV32
Thread-Index: AQHZLwKEbZ+pvxq41k6VhQDqg2rUMq6r0SKA
Date: Mon, 23 Jan 2023 10:43:50 +0000
Message-ID: <79420a4f-358a-f404-7965-e5f215234ba9@citrix.com>
References: <942e1164-5ed0-bdda-424f-90134b0e22c5@suse.com>
In-Reply-To: <942e1164-5ed0-bdda-424f-90134b0e22c5@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SA2PR03MB5676:EE_
x-ms-office365-filtering-correlation-id: 61e6db6e-1458-40fc-4c2d-08dafd2eb6b4
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <C2DDA3452430574999ABD8376C4882AB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 61e6db6e-1458-40fc-4c2d-08dafd2eb6b4
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jan 2023 10:43:50.1782
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: h8vr9857neVobJwYfp3Lq0S8/RoywzSy4FqZozAAglr4QkcSLpm3e2SBM2ZM6Ze9Q8pfN6l6JejvDFCcWvdx551EzcSO6/GMzUixfHa7Na8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5676

On 23/01/2023 8:12 am, Jan Beulich wrote:
> While the table is used only when HVM=y, the table entry of course needs
> to be properly populated when also PV32=y. Fully removing the table
> entry we therefore wrong.
>
> Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Erm, why?

The safety justification for the original patch was that this is HVM
only code.  And it really is HVM only code - it's genuinely compiled out
for !HVM builds.

So if putting this entry back in fixes the regression OSSTest
identified, then either SH_type_l2h_64_shadow isn't PV32-only, or we
have PV guests entering HVM-only logic.  Either way, the precondition
for correctness of the original patch is violated, and it needs
reverting on those grounds alone.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 10:47:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 10:47:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482742.748410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuM1-0002sk-K6; Mon, 23 Jan 2023 10:47:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482742.748410; Mon, 23 Jan 2023 10:47:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuM1-0002sd-HV; Mon, 23 Jan 2023 10:47:09 +0000
Received: by outflank-mailman (input) for mailman id 482742;
 Mon, 23 Jan 2023 10:47:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJuM0-0002sV-Aa
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 10:47:08 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2041.outbound.protection.outlook.com [40.107.247.41])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 47b63d1a-9b0b-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 11:47:07 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8574.eurprd04.prod.outlook.com (2603:10a6:102:215::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 10:47:05 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 10:47:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47b63d1a-9b0b-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2ec2a36e-4264-6c12-c2e6-1af85c91f1f6@suse.com>
Date: Mon, 23 Jan 2023 11:47:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when
 HVM+PV32
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <942e1164-5ed0-bdda-424f-90134b0e22c5@suse.com>
 <79420a4f-358a-f404-7965-e5f215234ba9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <79420a4f-358a-f404-7965-e5f215234ba9@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FRYP281CA0014.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::24)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8574:EE_
X-MS-Office365-Filtering-Correlation-Id: 0dd21813-8d9c-4cdf-ffe7-08dafd2f2a79
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0dd21813-8d9c-4cdf-ffe7-08dafd2f2a79
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 10:47:04.7712
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: l4mlMF4eIcIOlI4zP8B1/p8KTkc7dXQ7JgbyhYJ1g0n0f2/OAh7qE975x3m4ruN4ZYxkcltYkU8SlUDLkHTfTw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8574

On 23.01.2023 11:43, Andrew Cooper wrote:
> On 23/01/2023 8:12 am, Jan Beulich wrote:
>> While the table is used only when HVM=y, the table entry of course needs
>> to be properly populated when also PV32=y. Fully removing the table
>> entry was therefore wrong.
>>
>> Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Erm, why?
> 
> The safety justification for the original patch was that this is HVM
> only code.  And it really is HVM only code - it's genuinely compiled out
> for !HVM builds.

Right, and we have logic taking care of the !HVM case. But when HVM=y,
that same logic uses this "HVM-only" table for all PV types as well.
Hence the PV32-special type needs a non-zero entry when, besides HVM=y,
PV32=y is enabled too.

> So if putting this entry back in fixes the regression OSSTest
> identified, then either SH_type_l2h_64_shadow isn't PV32-only, or we
> have PV guests entering HVM-only logic.  Either way, the precondition
> for correctness of the original patch is violated, and it needs
> reverting on those grounds alone.

I disagree - the table isn't needed when !HVM, and as such can be
considered HVM-only. It merely needs to deal with all cases correctly
when HVM=y.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 11:01:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 11:01:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482750.748420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuZH-0005PB-Uu; Mon, 23 Jan 2023 11:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482750.748420; Mon, 23 Jan 2023 11:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuZH-0005P5-RT; Mon, 23 Jan 2023 11:00:51 +0000
Received: by outflank-mailman (input) for mailman id 482750;
 Mon, 23 Jan 2023 11:00:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJuZF-0005Os-Op
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 11:00:49 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2072.outbound.protection.outlook.com [40.107.22.72])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2fdcaf94-9b0d-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 12:00:46 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8271.eurprd04.prod.outlook.com (2603:10a6:102:1ca::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 11:00:43 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 11:00:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fdcaf94-9b0d-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GQ0jl+d5ztb4sjCjGEYxwAZRbYFE3ji9hsLD/JAs7ciNB90Ew38tCjShPy55HI0b0eIvcnvxNG8GaT980ZbqdBH4W/uYsoQzjAbTKg2NHCRDoQ10RfJUAr5b8xm3q2WHL+OrCrxUSg16sbr3P0a23+PMHrP5X21waAG/oieAqUPWcA94JzqFwZTlwBGyZBK2LwVmD4mHwqNFqu1L9socc3QmRjrSDvCurgV9KRtL0QkVfPx/xbJuFgUDtpPV+5UoRz+/14CgIrSLavTkdZW5dx67TUbpVpyjT8OLWw8dUfDxaC67A7SyZYyvlQr3V4li9dK2aycLTvU+iJYKlKQ7bQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4ZYWa4RhsCkfQRLtp+DAQup+Zia8TMo5i5Y2bh0Yl3Q=;
 b=Sws9cXrnz6ucmGqSuw/Lq7dB+L4UixjjZw1RxSZXzRXRLrnQEt8z7UBnYwNvdLWowBnT78wSk0+0l7U3VPZg2hohWbbm+tAflq2lDgac3vp2GQrV2XkutvDBwtTLPJUauDrid7n7qLGQElCLfIq+HG7r/8UVA5ERFoOvfNgkpzaDSosGN8MVKrbfB7yACD3+7yMDjyVpfBxJ3yxbvOCPoyZU1QJrTGAgI0XXUOCQvoLB8Nx6iZNAr2Ifg/qCNowBMcgPD2L2pJJljSBcxPL9WQkGtu0pnGlYPkEWHKi0mGWKf2Z6lEzMpJx271q2wVkTDKCi9AbYRpngENKkcShEuA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4ZYWa4RhsCkfQRLtp+DAQup+Zia8TMo5i5Y2bh0Yl3Q=;
 b=bmtLG5xQb9K8o1Vr1Q5xyNC4CCG8UWylw3wy3vavMhQw0d0hIPGEYS9YuHvHzt2b2YsqKG43j6mWnOYAbBrXsSEftv/ugx3o4wazq98/t3EtqhsL1vlCq932LARi92VdE/qmsMk2WozplxC70+p9VE6S46EcEELUPUSGwNvFskygf5+X+u72uzzf1hNnQla1zY5ax+gTW4bmu4Drnyc0fjezWbRs/lcKj8YdEO+h1gku9RIwbnind0Tws9TFlUVrWhtRN5ftg81iklIt8wE16vKUH0tzvN2B3zLSQjGYQfbaepgKv07j0omynjPISQTee9IIHEivyGJDLwc6F6iekw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e82c32fa-4318-96e8-4c0a-eee26b1cea74@suse.com>
Date: Mon, 23 Jan 2023 12:00:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 02/14] xen/riscv: add <asm/asm.h> header
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <621e8ef8c6a721927ecade5bb41cdc85df386bbf.1674226563.git.oleksii.kurochko@gmail.com>
 <610308ac-3440-e84e-02ec-928f0652e9d3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <610308ac-3440-e84e-02ec-928f0652e9d3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0048.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8271:EE_
X-MS-Office365-Filtering-Correlation-Id: 5793c51d-71af-4bf1-f291-08dafd31127e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5793c51d-71af-4bf1-f291-08dafd31127e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 11:00:43.3911
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: s6gXupZPcDAZ3SgJU3p1sHGORPJRto2rtVSrrRWF9W0rapuSmfECcOKqpN798acWqfAqqxSC6D6jwGRIeaPTcA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8271

On 20.01.2023 16:31, Andrew Cooper wrote:
> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> There's some stuff in here which is not RISCV-specific.  We really want
> to dedup with the other architectures and move into common.

I have to admit that I'm not fully convinced in this case: What an arch
may or may not need in support of its assembly code can vary heavily.
Anything moved out would need to be truly generic. Then again xen/asm.h
feels like a slightly odd name when, as kind of already implied above,
assembly code is at times very specific to an architecture (including
e.g. formatting constraints or whether labels are to be followed by
colons).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 11:10:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 11:10:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482757.748430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuiL-0006xd-RR; Mon, 23 Jan 2023 11:10:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482757.748430; Mon, 23 Jan 2023 11:10:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJuiL-0006xW-Os; Mon, 23 Jan 2023 11:10:13 +0000
Received: by outflank-mailman (input) for mailman id 482757;
 Mon, 23 Jan 2023 11:10:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJuiK-0006xQ-PM
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 11:10:12 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2080.outbound.protection.outlook.com [40.107.6.80])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 803ea0d2-9b0e-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 12:10:10 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7480.eurprd04.prod.outlook.com (2603:10a6:10:1a3::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 11:10:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 11:10:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 803ea0d2-9b0e-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l9hz/iO31AnNCDzJ52ohe2jheCWNwrLUB8+OWAsPqQbElbo8GA5JiEmD5CKtKZxcdrDoGxcjGblA9CIzwspst5Ykip31kbuB0NStwCwq4raXUGAIMu68Zsz/LzRSJtwUYC0zJWLBHzZMpF+HShE2tAbMoE/I0NXT4zy4MAT48OB47ceQNVMA5cWbS4ib83/izFAnxWK0GjKKE8u5odEBunazi2ZAU6BsG9UD8Sosd2b68FMQ3vmioqtBJLzICzYPH5NJWUxbpdE9ATvZjjI97tfpEatmqWEaaUL8Q5J3Awh2nQixp2MOiE2oo9rQWHWsF/UFsgyb3YdueCnANztOLw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TI0/4CHvullVluTKKM10tQVQxLa2eSvSGGZSIoHfyV0=;
 b=ATgzWRUJgsYF9fWtfz/zkRsLHBvs1jUbfrmB+4fmTwTFiu7ZIB5BZKMXWd9utVaR/O96JQCbyFkIoEXx7WK60u2yMc2x6HWPvRea36e3F9v8J2oxf0RSH0PH6Ag/e1LUswhs9W05em7akOpiyPVNCNiLnDnphMbcKENvFv+uJnNa2RcyWdL/kvAwwZBORQALAEMmF7jUIZnB+kJkVn95IWVMfjCGlMMctQ//k30HjLy2u4IESUmi2Jldc/4fZ9DPCHg2aLs5alIpSD62/ZC3mjy90URePasmziYQgCQ1POUg/YYMQTke+0D/BmnStAaUFLs5og6c+wSnjHKZo66lvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TI0/4CHvullVluTKKM10tQVQxLa2eSvSGGZSIoHfyV0=;
 b=J0cBw6XVbLhPGZ1TDvMEJnqBbxt/1cUdggMYOAMR13bRuHlvhGvkqzx4qW9+wcTSKdCDdEFF4SCQsFSWke+zduYrMEshsA3ZjtH4WUgBPvC7LW/tH6fldpAfa+Wi/CkLIHT3chlBUAjgYBRA9up1aK0w+9o+C+GZTSMkmfreAkS78/lEblmsT3uf8RBr1Yz8Zk+euKOof+Jr8fiwD6EwbWRPKOKnF64Yf5oZFOKHOnoLobZkDVOuSTGoVIP+gskDhxrGnGy+EmptCsiGyxBABdtd0jHHLeLwWFBswd4oIyROPGJaCrdvqtNkf7sRZwL9BSznHtUTqw19bLjRYxRY3Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <35c3175c-47f7-80c9-c417-1320aab02de7@suse.com>
Date: Mon, 23 Jan 2023 12:10:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 05/14] xen/riscv: add early_printk_hnum() function
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <633ced21788a3abf5079c9a191794616bb1ad351.1674226563.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <633ced21788a3abf5079c9a191794616bb1ad351.1674226563.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0129.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7480:EE_
X-MS-Office365-Filtering-Correlation-Id: 2052be8f-e2bc-4037-e602-08dafd3262b5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2052be8f-e2bc-4037-e602-08dafd3262b5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 11:10:07.5907
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: z094t/72ss2HWYXI1GkBAW/SbGf6BQbrj+QqQT7l3DyvFRJBycWgtCCkMXVdcw6LP6RgZCfjsG87kspeglu2Ew==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7480

On 20.01.2023 15:59, Oleksii Kurochko wrote:
> Add the ability to print a hex number.
> It might be useful to print a register value as debug information
> in BUG(), WARN(), etc.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Orthogonal to Andrew's reply (following which I think would be best)
a couple of comments which may be applicable elsewhere as well:

> --- a/xen/arch/riscv/early_printk.c
> +++ b/xen/arch/riscv/early_printk.c
> @@ -43,3 +43,42 @@ void early_printk(const char *str)
>          str++;
>      }
>  }
> +
> +static void reverse(char *s, int length)

Please can you get things const-correct (const char *s) and
signedness-correct (unsigned int length) from the beginning? We're
converting other code as we touch it, but that is extremely slow going,
so such issues are best avoided in new code in the first place.

> +{
> +    int c;
> +    char *begin, *end, temp;
> +
> +    begin  = s;
> +    end    = s + length - 1;
> +
> +    for ( c = 0; c < length/2; c++ )

Style: Blanks around binary operators.

> +    {
> +        temp   = *end;
> +        *end   = *begin;
> +        *begin = temp;
> +
> +        begin++;
> +        end--;
> +    }
> +}
> +
> +void early_printk_hnum(const register_t reg_val)

Likely this function wants to be __init? (All functions that can be
should also be made so.) With that, reverse() then would also want
to become __init.

As to the const here vs the remark further up: In cases like this one
we typically don't use const. You're free to keep it of course, but
I think it should at least be purged from the declaration (and maybe
also the stub).

> +{
> +    char hex[] = "0123456789ABCDEF";

static const char __initconst?

> +    char buf[17] = {0};
> +
> +    register_t num = reg_val;
> +    unsigned int count = 0;
> +
> +    for ( count = 0; num != 0; count++, num >>= 4 )
> +        buf[count] = hex[num & 0x0000000f];

Just 0xf?
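Pulling these remarks together, here is a sketch of how the helper might look with unsigned lengths, blanks around binary operators, a static const hex table, and the plain 0xf mask. This is an illustration, not the actual patch: register_t is stood in by unsigned long, the Xen-specific __init/__initconst annotations are only noted in comments so the snippet builds standalone, and format_hnum() with its buffer-based interface is this sketch's own naming — the real function prints directly.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for Xen's arch-defined register-width type. */
typedef unsigned long register_t;

/* Would be __init in Xen; omitted so the snippet builds standalone. */
static void reverse(char *s, unsigned int length)
{
    char *begin = s;
    char *end = s + length - 1;
    unsigned int c;

    for ( c = 0; c < length / 2; c++ )      /* blanks around '/' */
    {
        char temp = *end;

        *end-- = *begin;
        *begin++ = temp;
    }
}

/* Would also be __init; fills buf least-significant nibble first. */
static void format_hnum(register_t reg_val, char *buf)
{
    static const char hex[] = "0123456789ABCDEF";   /* __initconst in Xen */
    register_t num = reg_val;
    unsigned int count = 0;

    for ( ; num != 0; count++, num >>= 4 )
        buf[count] = hex[num & 0xf];                /* just 0xf */

    if ( count == 0 )                               /* value 0 still prints */
        buf[count++] = '0';

    buf[count] = '\0';
    reverse(buf, count);
}
```

The zero special case is an addition here: with the quoted loop alone, a register value of 0 would produce an empty string.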

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 11:11:02 2023
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v1 02/14] xen/riscv: add <asm/asm.h> header
Date: Mon, 23 Jan 2023 11:10:48 +0000
Message-ID: <9a82205e-acfd-0704-43aa-ab01e19d0e85@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <621e8ef8c6a721927ecade5bb41cdc85df386bbf.1674226563.git.oleksii.kurochko@gmail.com>
 <610308ac-3440-e84e-02ec-928f0652e9d3@citrix.com>
 <e82c32fa-4318-96e8-4c0a-eee26b1cea74@suse.com>
In-Reply-To: <e82c32fa-4318-96e8-4c0a-eee26b1cea74@suse.com>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

On 23/01/2023 11:00 am, Jan Beulich wrote:
> On 20.01.2023 16:31, Andrew Cooper wrote:
>> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> There's some stuff in here which is not RISCV-specific.  We really want
>> to dedup with the other architectures and move into common.
> I have to admit that I'm not fully convinced in this case: What an arch
> may or may not need in support of its assembly code may heavily vary. It
> would need to be very generic thing which could be moved out. Then again
> xen/asm.h feels like slightly odd a name with, as kind of already implied
> above, assembly code being at times very specific to an architecture
> (including e.g. formatting constraints or whether labels are to be
> followed by colons).

Half of this header file is re-inventing generic concepts that we
already spell differently in the Xen codebase.

It is the difference between bolting something on the side, and
integrating the code properly.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 11:13:43 2023
Message-ID: <189b2298-a84c-5fb2-7005-30c3f939096d@suse.com>
Date: Mon, 23 Jan 2023 12:13:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 06/14] xen/riscv: introduce exception context
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <00ecc26833738377003ad21603c198ae4278cfd3.1674226563.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <00ecc26833738377003ad21603c198ae4278cfd3.1674226563.git.oleksii.kurochko@gmail.com>

On 20.01.2023 15:59, Oleksii Kurochko wrote:
> +/* On stack VCPU state */
> +struct cpu_user_regs
> +{
> +    register_t zero;
> +    register_t ra;
> +    register_t sp;
> +    register_t gp;
> +    register_t tp;
> +    register_t t0;
> +    register_t t1;
> +    register_t t2;
> +    register_t s0;
> +    register_t s1;
> +    register_t a0;
> +    register_t a1;
> +    register_t a2;
> +    register_t a3;
> +    register_t a4;
> +    register_t a5;
> +    register_t a6;
> +    register_t a7;
> +    register_t s2;
> +    register_t s3;
> +    register_t s4;
> +    register_t s5;
> +    register_t s6;
> +    register_t s7;
> +    register_t s8;
> +    register_t s9;
> +    register_t s10;
> +    register_t s11;
> +    register_t t3;
> +    register_t t4;
> +    register_t t5;
> +    register_t t6;
> +    register_t sepc;
> +    register_t sstatus;
> +    /* pointer to previous stack_cpu_regs */
> +    register_t pregs;
> +};

What is the planned correlation of this to what x86 and Arm have in their
public headers (under the same name)? I think the public header wants
spelling out first, and if a different internal structure is intended to
be used, the interaction between the two would then want outlining in
the description here.
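On keeping such a structure usable from assembly: its layout must match the fixed offsets at which the entry code saves each register, and Xen derives those offsets at build time via its asm-offsets machinery. Purely as an illustration of the idea, a standalone sketch with an abbreviated copy of the struct above and unsigned long standing in for register_t — the member subset and assertion messages are this sketch's assumptions:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long register_t;   /* stand-in for the arch-defined type */

/* Abbreviated copy of the structure quoted above. */
struct cpu_user_regs {
    register_t zero;
    register_t ra;
    register_t sp;
    register_t gp;
    register_t tp;
    /* ... remaining members elided ... */
};

/*
 * The assembly save sequence stores each register at a fixed offset;
 * build-time checks catch the C struct drifting out of sync with it.
 */
_Static_assert(offsetof(struct cpu_user_regs, ra) == 1 * sizeof(register_t),
               "ra offset out of sync with the entry code");
_Static_assert(offsetof(struct cpu_user_regs, sp) == 2 * sizeof(register_t),
               "sp offset out of sync with the entry code");
```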

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 11:17:26 2023
Message-ID: <e8e6bee5-ce59-b171-6134-4473b396df00@suse.com>
Date: Mon, 23 Jan 2023 12:17:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 20.01.2023 15:59, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/entry.S
> @@ -0,0 +1,97 @@
> +#include <asm/asm.h>
> +#include <asm/processor.h>
> +#include <asm/riscv_encoding.h>
> +#include <asm/traps.h>
> +
> +        .global handle_exception
> +        .align 4
> +
> +handle_exception:
> +
> +    /* Exceptions from xen */
> +save_to_stack:
> +        /* Save context to stack */
> +        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) - RISCV_CPU_USER_REGS_SIZE) (sp)
> +        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
> +        j       save_context
> +
> +save_context:

Just curious: why not simply fall through here, i.e. why the J, which really
is a NOP in this case?
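For illustration, the fall-through variant of the quoted hunk could look like
this (a sketch only, reusing the labels and macros from the patch):

```asm
handle_exception:
        /* Exceptions from xen: save context to stack */
        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) - RISCV_CPU_USER_REGS_SIZE) (sp)
        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
        /* fall through */
save_context:
```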

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 11:38:15 2023
Message-ID: <740420d4-9e7a-6d3a-1b7d-05e16727fd2f@suse.com>
Date: Mon, 23 Jan 2023 12:37:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 12/14] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <a0788e4744b04597fbd3e71c2bef0bd76843a066.1674226563.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a0788e4744b04597fbd3e71c2bef0bd76843a066.1674226563.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 20.01.2023 15:59, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/bug.h
> @@ -0,0 +1,120 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + * Copyright (C) 2021-2023 Vates
> + *
> + */
> +
> +#ifndef _ASM_RISCV_BUG_H
> +#define _ASM_RISCV_BUG_H
> +
> +#include <xen/stringify.h>
> +#include <xen/types.h>
> +
> +#ifndef __ASSEMBLY__
> +
> +struct bug_frame {
> +    signed int loc_disp;    /* Relative address to the bug address */
> +    signed int file_disp;   /* Relative address to the filename */
> +    signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
> +    uint16_t line;          /* Line number */
> +    uint32_t pad0:16;       /* Padding for 8-bytes align */
> +};
> +
> +#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
> +#define bug_file(b) ((const void *)(b) + (b)->file_disp);
> +#define bug_line(b) ((b)->line)
> +#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
> +
> +#define BUGFRAME_run_fn 0
> +#define BUGFRAME_warn   1
> +#define BUGFRAME_bug    2
> +#define BUGFRAME_assert 3
> +
> +#define BUGFRAME_NR     4
> +
> +#define __INSN_LENGTH_MASK  _UL(0x3)
> +#define __INSN_LENGTH_32    _UL(0x3)
> +#define __COMPRESSED_INSN_MASK	_UL(0xffff)
> +
> +#define __BUG_INSN_32	_UL(0x00100073) /* ebreak */
> +#define __BUG_INSN_16	_UL(0x9002) /* c.ebreak */

May I suggest that you avoid double-underscore (or other reserved) names
where possible?
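As a sketch, the constants could lose the double underscores and the length
check could become an inline helper (the names here are illustrative, not
taken from the patch):

```c
#include <stdint.h>

/* Illustrative renaming that avoids reserved double-underscore names. */
#define INSN_LENGTH_MASK     0x3UL
#define INSN_LENGTH_32       0x3UL
#define COMPRESSED_INSN_MASK 0xffffUL
#define BUG_INSN_32          0x00100073UL /* ebreak */
#define BUG_INSN_16          0x9002UL     /* c.ebreak */

/* Inline function instead of the GET_INSN_LENGTH() statement expression. */
static inline unsigned long insn_length(unsigned long insn)
{
    return ((insn & INSN_LENGTH_MASK) == INSN_LENGTH_32) ? 4UL : 2UL;
}
```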

> +#define GET_INSN_LENGTH(insn)						\
> +({									\
> +	unsigned long __len;						\
> +	__len = ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) ?	\
> +		4UL : 2UL;						\
> +	__len;								\
> +})
> +
> +typedef u32 bug_insn_t;

This is problematic beyond the use of u32 instead of uint32_t. You use it once, ...

> +/* These are defined by the architecture */
> +int is_valid_bugaddr(bug_insn_t addr);

... in a call to this function, but you can't assume that you can access
32 bits when the insn you look at might be a compressed one. To be on the
safe side I'd suggest either avoiding such a type altogether, or
introducing two types (32- and 16-bit), which then of course need to be
used properly in their respective contexts.
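A possible shape for the two-type approach, with a fetch helper that only
reads the second halfword once the first one indicates a 32-bit encoding
(all names hypothetical, not from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint16_t bug_insn16_t;  /* compressed (c.*) encoding */
typedef uint32_t bug_insn32_t;  /* full-size encoding */

#define INSN_LENGTH_MASK 0x3u
#define INSN_LENGTH_32   0x3u

static inline bool insn_is_compressed(bug_insn16_t first_half)
{
    return (first_half & INSN_LENGTH_MASK) != INSN_LENGTH_32;
}

/*
 * Fetch a full instruction at pc without touching the upper halfword
 * of a compressed one (RISC-V instruction parcels are little-endian).
 */
static inline bug_insn32_t insn_fetch(const void *pc)
{
    const bug_insn16_t *parcel = pc;
    bug_insn16_t lo = parcel[0];

    if ( insn_is_compressed(lo) )
        return lo;

    return lo | ((bug_insn32_t)parcel[1] << 16);
}
```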


> +#define BUG_FN_REG t0
> +
> +/* Many versions of GCC don't support the asm %c parameter which would
> + * be preferable to this unpleasantness. We use mergeable string
> + * sections to avoid multiple copies of the string appearing in the
> + * Xen image. BUGFRAME_run_fn needs to be handled separately.
> + */
> +#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
> +    asm ("1:ebreak\n"														\

Something's odd with the padding here; looks like there might be hard tabs.

> +         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
> +         "2:\t.asciz " __stringify(file) "\n"                               \
> +         "3:\n"                                                             \
> +         ".if " #has_msg "\n"                                               \
> +         "\t.asciz " #msg "\n"                                              \
> +         ".endif\n"                                                         \
> +         ".popsection\n"                                                    \
> +         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
> +         "4:\n"                                                             \
> +         ".p2align 2\n"                                                     \
> +         ".long (1b - 4b)\n"                                                \
> +         ".long (2b - 4b)\n"                                                \
> +         ".long (3b - 4b)\n"                                                \
> +         ".hword " __stringify(line) ", 0\n"                                \
> +         ".popsection");                                                    \
> +} while (0)
> +
> +/*
> + * GCC will not allow "i" to be used when PIE is enabled (Xen doesn't set
> + * the flag but instead relies on the compiler's default). So the easiest
> + * way to implement run_in_exception_handler() is to pass the function to
> + * be called in a fixed register.
> + */
> +#define  run_in_exception_handler(fn) do {                                  \

With

    register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);

you should be able to avoid ...

> +    asm ("mv " __stringify(BUG_FN_REG) ", %0\n"                            	\

... this and simply use ...

> +         "1:ebreak\n"                                                  		\
> +         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","       \
> +         "             \"a\", %%progbits\n"                                 \
> +         "2:\n"                                                             \
> +         ".p2align 2\n"                                                     \
> +         ".long (1b - 2b)\n"                                                \
> +         ".long 0, 0, 0\n"                                                  \
> +         ".popsection" :: "r" (fn) : __stringify(BUG_FN_REG) );             \

   :: "r" (fn_) );

here. See x86's alternative_callN() for similar examples.
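Put together, the macro might then read roughly as follows (a RISC-V-specific,
untested sketch along the lines of the suggestion; the explicit clobber of
BUG_FN_REG becomes unnecessary since the compiler already knows the register
holds fn_):

```c
#define run_in_exception_handler(fn) do {                                   \
    register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);                 \
                                                                            \
    asm ("1:ebreak\n"                                                       \
         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","       \
         "             \"a\", %%progbits\n"                                 \
         "2:\n"                                                             \
         ".p2align 2\n"                                                     \
         ".long (1b - 2b)\n"                                                \
         ".long 0, 0, 0\n"                                                  \
         ".popsection" :: "r" (fn_) );                                      \
} while ( 0 )
```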

> @@ -107,7 +108,122 @@ static void do_unexpected_trap(const struct cpu_user_regs *regs)
>      wait_for_interrupt();
>  }
>  
> +void show_execution_state(const struct cpu_user_regs *regs)
> +{
> +    early_printk("implement show_execution_state(regs)\n");
> +}
> +
> +int do_bug_frame(struct cpu_user_regs *regs, vaddr_t pc)
> +{
> +    struct bug_frame *start, *end;
> +    struct bug_frame *bug = NULL;
> +    unsigned int id = 0;
> +    const char *filename, *predicate;
> +    int lineno;
> +
> +    unsigned long bug_frames[] = {
> +        (unsigned long)&__start_bug_frames[0],
> +        (unsigned long)&__stop_bug_frames_0[0],
> +        (unsigned long)&__stop_bug_frames_1[0],
> +        (unsigned long)&__stop_bug_frames_2[0],
> +        (unsigned long)&__stop_bug_frames_3[0],
> +    };
> +
> +    for ( id = 0; id < BUGFRAME_NR; id++ )
> +    {
> +        start = (struct  bug_frame *)bug_frames[id];
> +        end = (struct  bug_frame *)bug_frames[id + 1];
> +
> +        while ( start != end )
> +        {
> +            if ( (vaddr_t)bug_loc(start) == pc )
> +            {
> +                bug = start;
> +                goto found;
> +            }
> +
> +            start++;
> +        }
> +    }
> +
> +found:

Please indent labels by at least one blank; see ./CODING_STYLE.

> +    if ( bug == NULL )
> +        return -ENOENT;
> +
> +    if ( id == BUGFRAME_run_fn )
> +    {
> +        void (*fn)(const struct cpu_user_regs *) = (void *)regs->BUG_FN_REG;
> +
> +        fn(regs);
> +
> +        goto end;
> +    }
> +
> +    /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
> +    filename = bug_file(bug);
> +    lineno = bug_line(bug);
> +
> +    switch ( id )
> +    {
> +    case BUGFRAME_warn:
> +        early_printk("Xen WARN at ");
> +        early_printk(filename);
> +        early_printk(":");
> +        early_printk_hnum(lineno);
> +
> +        show_execution_state(regs);
> +
> +        goto end;
> +
> +    case BUGFRAME_bug:
> +        early_printk("Xen BUG at ");
> +        early_printk(filename);
> +        early_printk(":");
> +        early_printk_hnum(lineno);
> +
> +        show_execution_state(regs);
> +        early_printk("change wait_for_interrupt to panic() when common is available\n");
> +        wait_for_interrupt();
> +
> +    case BUGFRAME_assert:
> +        /* ASSERT: decode the predicate string pointer. */
> +        predicate = bug_msg(bug);
> +
> +        early_printk("Assertion \'");
> +        early_printk(predicate);
> +        early_printk("\' failed at ");
> +        early_printk(filename);
> +        early_printk(":");
> +        early_printk_hnum(lineno);
> +
> +        show_execution_state(regs);
> +        early_printk("change wait_for_interrupt to panic() when common is available\n");
> +        wait_for_interrupt();
> +    }
> +
> +    return -EINVAL;
> +end:
> +    regs->sepc += GET_INSN_LENGTH(*(bug_insn_t *)pc);
> +
> +    return 0;
> +}
> +
> +int is_valid_bugaddr(bug_insn_t insn)
> +{
> +    if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
> +        return (insn == __BUG_INSN_32);
> +    else
> +        return ((insn & __COMPRESSED_INSN_MASK) == __BUG_INSN_16);
> +}
> +
>  void __handle_exception(struct cpu_user_regs *cpu_regs)
>  {
> +    register_t pc = cpu_regs->sepc;
> +    uint32_t instr = *(bug_insn_t *)pc;
> +
> +    if (is_valid_bugaddr(instr))
> +        if (!do_bug_frame(cpu_regs, pc)) return;
> +
> +// die:

Perhaps better to omit the label until it's actually needed? In any
event you shouldn't be using C++-style comments (see ./CODING_STYLE
again).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 11:50:43 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20c6fe80-9b14-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674474629;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=uA4REJyAREFunQrTC+sFGsAZOIbdT7QofAVB5zoTSoM=;
  b=PGNtMFm7WMsd3G4ShrfB1DuVqhy+D20RnMROUKf4qOtRAdlqzFWdaIjn
   sW0bnvFbS999+YNVycNsivEw6bzefhTOi/0dKjbY2gDCDqvuXar2Fl0cX
   KfLpnp8AFANyEMRZ/bym4AFzZeYSht9yorg4fU5d1UoKqIzzURfMh2UC5
   I=;
X-IronPort-RemoteIP: 104.47.55.173
X-IronPort-MID: 93835459
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,239,1669093200"; 
   d="scan'208";a="93835459"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dYp/6W8Ms6GY4JHquJYVbYluKQ8vHvuzcSrTopHqat6E1mvMewK0mX1LDRr3Ena2QVLVc4YgU5qTt3cqmmLeQUYttKDdFBzOGwoJE9E3X41OnoNDdI58OvhIQ4nn6dXnOS2LaRvTLbw59uv9dXTEedTDWywbdh4VufZ73iT+FNB76Edzc3k92IH+aiZCd2tOIyXb7gflmBcIio/L9FeiizIHVkTMi6psYlSAj9O0EzWOqWDMzaIYie6826Pi2OanKJ1YdCCz9eBPG0RzCEzQZAI77+LaUIM3IZhYW3IayjJGR0FHJon7K9vNEJz09rsC0sCpKv7LB1PZSJC3WOrW0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=uA4REJyAREFunQrTC+sFGsAZOIbdT7QofAVB5zoTSoM=;
 b=MhYxeajkLMRNWC8VJcdWw9fU1rpy7gsYgyKsUW8KpkK6UuMkvrvLBnFP+AMxjzNNw6ZylkTe9Qkn9BVkys/hrDJxz0ELkSDIxT/RJkHQzf6WsS34DUncYiLzvGreOvS4zPNb26w4pcR4OlNqRZRVE4z8KXx22YyES2riGHN+pANrso+Fk7Wh6PVWOsyUJemSsVnIGTXXDM3zdvX9Cj9rdQmzrCLQCo2zS5t85d0Tc80Ev5CFTvWPVL8IRXxnpcQPTSqW2gVrhdKl+T3UfKGkbUgwesETO+GNDEbcYj/NhkJwnjRrr5eq+bZV7q0Gdf5i4+ovdJdkcD4ACnOF5pxmWw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uA4REJyAREFunQrTC+sFGsAZOIbdT7QofAVB5zoTSoM=;
 b=SByzrPVAS0DjFO814rfHFAobIig/wwoqVViDXCacMuKZGfMZ+1Les1Y2mLljV5D91e9I791P+06D6+XmlwnMoQSw4unco3NSNlgs5PwvToVCUtHJqf2h6n0iYmVUsvOS1LaTRHxp43M2TvGPSXZOnON7AECnaULpVsP7uIkO05E=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
Thread-Topic: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
Thread-Index: AQHZLOAVkIPSVA/PBEeRaJXa6cnAQq6r6ACA
Date: Mon, 23 Jan 2023 11:50:23 +0000
Message-ID: <ac6f02e8-c493-7914-f3c4-32b4ebe1bc26@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SA2PR03MB5945:EE_
x-ms-office365-filtering-correlation-id: 9672f598-e79d-43c7-159f-08dafd380322
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <5CEB97476E406648BBA189C3D3AF9306@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9672f598-e79d-43c7-159f-08dafd380322
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jan 2023 11:50:23.8282
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Q6Qwa42FgrZCZo4m3lLI2dt5KBMb75LthiwLPg2wr3TXJDJhSog+wFqkwjDT4mk8x1q46uLmBM6AHhuN8vDqNWSd2tW0a6EjYIrxYPN7YvE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5945

On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> diff --git a/xen/arch/riscv/entry.S b/xen/arch/riscv/entry.S
> new file mode 100644
> index 0000000000..f7d46f42bb
> --- /dev/null
> +++ b/xen/arch/riscv/entry.S
> @@ -0,0 +1,97 @@
> +#include <asm/asm.h>
> +#include <asm/processor.h>
> +#include <asm/riscv_encoding.h>
> +#include <asm/traps.h>
> +
> +        .global handle_exception
> +        .align 4
> +
> +handle_exception:

ENTRY() which takes care of the global and the align.

Also, you want a size and type at the end, just like in head.S  (Sorry,
we *still* don't have any sane infrastructure for doing that nicely.
Opencode it for now.)

> +
> +    /* Exceptions from xen */
> +save_to_stack:

This label isn't used at all, is it?

> +        /* Save context to stack */
> +        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) - RISCV_CPU_USER_REGS_SIZE) (sp)
> +        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)

Exceptions on RISC-V don't adjust the stack pointer.  This logic depends
on interrupting Xen code, and Xen not having suffered a stack overflow
(and actually, that the space on the stack for all registers also
doesn't overflow).

Which might be fine for now, but I think it warrants a comment somewhere
(probably at handle_exception itself) stating the expectations while
it's still a work in progress.  So in this case something like:

/* Work-in-progress:  Depends on interrupting Xen, and the stack being
good. */


But, do we want to allocate stemp right away (even with an empty
struct), and get tp set up properly?

That said, aren't we going to have to rewrite this when enabling H mode
anyway?

> +        j       save_context
> +
> +save_context:

I'd drop this.  It's a nop right now.

> <snip>
> +        csrr    t0, CSR_SEPC
> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sepc)(sp)
> +        csrr    t0, CSR_SSTATUS
> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sstatus)(sp)

So something I've noticed about CSRs through this series.

The C CSR macros are set up to use real CSR names, but the CSR_*
constants used in C and ASM are raw numbers.

If we're using raw numbers, then the C CSR accessors should be static
inlines instead, but the advantage of using names is the toolchain can
issue an error when we reference a CSR not supported by the current
extensions.

We ought to use a single form, consistently through Xen.  How feasible
will it be to use names throughout?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 12:04:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 12:04:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482796.748498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJvYJ-0007D0-DU; Mon, 23 Jan 2023 12:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482796.748498; Mon, 23 Jan 2023 12:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJvYJ-0007Ct-95; Mon, 23 Jan 2023 12:03:55 +0000
Received: by outflank-mailman (input) for mailman id 482796;
 Mon, 23 Jan 2023 12:03:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rv8W=5U=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pJvYH-0007Cn-M9
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 12:03:53 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ffc96085-9b15-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 13:03:50 +0100 (CET)
Received: by mail-wr1-x434.google.com with SMTP id e3so10547065wru.13
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 04:03:51 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 l29-20020adfa39d000000b002bf95500254sm6470342wrb.64.2023.01.23.04.03.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 04:03:50 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffc96085-9b15-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=dJyG60Emt0d9opA5oYTuh+ei5Kgic0d6ECamgZhcveE=;
        b=AwVUvxs3UWuv6cqzLYmGUUoIzGxAiPCmbLvG16Ra31vELrZw7rAtK1r4qSSQsMdyEx
         UEyvnbOkHZ7UTB56qljZTyWHXwfFd8OIYOoBR2BIbT0VhjNe6gIYGldSJQ04eMqr12c4
         X5xRce2KUv7Uq3j8moTiuGt4z2CzShISoVY2Gj93lfhFFCOppelBgnVQ93nmeZxecMFf
         u80y0+QNFuyGYGh/jf0yHIZelPzFMsppnbNw1Vj+hf2nD+LgjBsKVwz2y2TK2H7WQ92n
         95ASv4eVkfE1BhQtzXZA4ItKU5FUQFgwrjrIfiw96dkyp/8mfBbfdOl5VyYpgrgavRNS
         rZ/Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=dJyG60Emt0d9opA5oYTuh+ei5Kgic0d6ECamgZhcveE=;
        b=Qm8AtbybkzQC5JdbC3qFGylo+MFsiQ89/ADMduovS6D3mh513CtF5+lQB5NCyuSbvM
         CCCNryiJan818lRSPmfbOfVDCtawUNek4SNV8t13ijw6spaUE6o6AIZC1XMcDoPN106A
         BYuoSln2gNnB1NJra9phZ7SAn3loEIAGZMdHzVqkWFZVaL8kMRre2AfSna1cSIpiDceI
         V9N1ANSAuAM+N3Jhg2/L60EF+reRUt8YrU2ZHvqmHZo7HU5EixWGpBEf7OrRyakvmJQj
         F+ccsz8hRWRSWUx3w673h9VitdmxGOJOFNb6BXp2EPayEsoAaJsSTF6LcowqsSzGswr6
         AngQ==
X-Gm-Message-State: AFqh2kqHkOwigX5AYYJAfAe3a+BmI0KXRWBc3e5qbBp/3kJ8IegigTm3
	NUebXhtjf9glz9DgHCKN/sI=
X-Google-Smtp-Source: AMrXdXvzVvY0sSc6mychZYQRLA7WRw8pSOy5305CVWHutTLEMk/W7xMy6zaOarHbOgClZwR4f6+S2w==
X-Received: by 2002:a05:6000:608:b0:28f:29b3:1a7f with SMTP id bn8-20020a056000060800b0028f29b31a7fmr25567498wrb.36.1674475430694;
        Mon, 23 Jan 2023 04:03:50 -0800 (PST)
Message-ID: <bb6b85f147d5d7933532fb27f78fa93ce6209b22.camel@gmail.com>
Subject: Re: [PATCH v1 06/14] xen/riscv: introduce exception context
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
Date: Mon, 23 Jan 2023 14:03:49 +0200
In-Reply-To: <fd276566-6b7d-ea64-a90a-a0c198ccf36c@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <00ecc26833738377003ad21603c198ae4278cfd3.1674226563.git.oleksii.kurochko@gmail.com>
	 <fd276566-6b7d-ea64-a90a-a0c198ccf36c@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-20 at 15:54 +0000, Andrew Cooper wrote:
> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> > diff --git a/xen/arch/riscv/include/asm/processor.h
> > b/xen/arch/riscv/include/asm/processor.h
> > new file mode 100644
> > index 0000000000..5898a09ce6
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/processor.h
> > @@ -0,0 +1,114 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*****************************************************************
> > *************
> > + *
> > + * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
> > + * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@gmail.com>
> > + * Copyright 2023 (C) Vates
> > + *
> > + */
> > +
> > +#ifndef _ASM_RISCV_PROCESSOR_H
> > +#define _ASM_RISCV_PROCESSOR_H
> > +
> > +#include <asm/types.h>
> > +
> > +#define RISCV_CPU_USER_REGS_zero        0
> > +#define RISCV_CPU_USER_REGS_ra          1
> > +#define RISCV_CPU_USER_REGS_sp          2
> > +#define RISCV_CPU_USER_REGS_gp          3
> > +#define RISCV_CPU_USER_REGS_tp          4
> > +#define RISCV_CPU_USER_REGS_t0          5
> > +#define RISCV_CPU_USER_REGS_t1          6
> > +#define RISCV_CPU_USER_REGS_t2          7
> > +#define RISCV_CPU_USER_REGS_s0          8
> > +#define RISCV_CPU_USER_REGS_s1          9
> > +#define RISCV_CPU_USER_REGS_a0          10
> > +#define RISCV_CPU_USER_REGS_a1          11
> > +#define RISCV_CPU_USER_REGS_a2          12
> > +#define RISCV_CPU_USER_REGS_a3          13
> > +#define RISCV_CPU_USER_REGS_a4          14
> > +#define RISCV_CPU_USER_REGS_a5          15
> > +#define RISCV_CPU_USER_REGS_a6          16
> > +#define RISCV_CPU_USER_REGS_a7          17
> > +#define RISCV_CPU_USER_REGS_s2          18
> > +#define RISCV_CPU_USER_REGS_s3          19
> > +#define RISCV_CPU_USER_REGS_s4          20
> > +#define RISCV_CPU_USER_REGS_s5          21
> > +#define RISCV_CPU_USER_REGS_s6          22
> > +#define RISCV_CPU_USER_REGS_s7          23
> > +#define RISCV_CPU_USER_REGS_s8          24
> > +#define RISCV_CPU_USER_REGS_s9          25
> > +#define RISCV_CPU_USER_REGS_s10         26
> > +#define RISCV_CPU_USER_REGS_s11         27
> > +#define RISCV_CPU_USER_REGS_t3          28
> > +#define RISCV_CPU_USER_REGS_t4          29
> > +#define RISCV_CPU_USER_REGS_t5          30
> > +#define RISCV_CPU_USER_REGS_t6          31
> > +#define RISCV_CPU_USER_REGS_sepc        32
> > +#define RISCV_CPU_USER_REGS_sstatus     33
> > +#define RISCV_CPU_USER_REGS_pregs       34
> > +#define RISCV_CPU_USER_REGS_last        35
>
> This block wants moving into the asm-offsets infrastructure, but I
> suspect they won't want to survive in this form.
>
> edit: yeah, definitely not this form.  RISCV_CPU_USER_REGS_OFFSET is a
> recipe for bugs.
>
Thanks for the recommendation; I'll take it into account while working
on the new version of the patch series.

> > +
> > +#define RISCV_CPU_USER_REGS_OFFSET(x)   ((RISCV_CPU_USER_REGS_##x) * __SIZEOF_POINTER__)
> > +#define RISCV_CPU_USER_REGS_SIZE        RISCV_CPU_USER_REGS_OFFSET(last)
> > +
> > +#ifndef __ASSEMBLY__
> > +
> > +/* On stack VCPU state */
> > +struct cpu_user_regs
> > +{
> > +    register_t zero;
>
> unsigned long.
Why is it better to define them as 'unsigned long' instead of
register_t?
>
> > +    register_t ra;
> > +    register_t sp;
> > +    register_t gp;
> > +    register_t tp;
> > +    register_t t0;
> > +    register_t t1;
> > +    register_t t2;
> > +    register_t s0;
> > +    register_t s1;
> > +    register_t a0;
> > +    register_t a1;
> > +    register_t a2;
> > +    register_t a3;
> > +    register_t a4;
> > +    register_t a5;
> > +    register_t a6;
> > +    register_t a7;
> > +    register_t s2;
> > +    register_t s3;
> > +    register_t s4;
> > +    register_t s5;
> > +    register_t s6;
> > +    register_t s7;
> > +    register_t s8;
> > +    register_t s9;
> > +    register_t s10;
> > +    register_t s11;
> > +    register_t t3;
> > +    register_t t4;
> > +    register_t t5;
> > +    register_t t6;
> > +    register_t sepc;
> > +    register_t sstatus;
> > +    /* pointer to previous stack_cpu_regs */
> > +    register_t pregs;
>
> Stale comment?  Also, surely this wants to be cpu_user_regs *pregs; ?
>
Not really.
Another structure will be introduced later:
	struct pcpu_info {
	...
		struct cpu_user_regs *stack_cpu_regs;
	...
	};
And stack_cpu_regs will be updated during context saving, before the
jump to __handle_exception:

    	/* new_stack_cpu_regs.pregs = old_stack_cpu_regs */
    	REG_L   t0, RISCV_PCPUINFO_OFFSET(stack_cpu_regs)(tp)
    	REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(pregs)(sp)
    	/* Update stack_cpu_regs */
    	REG_S   sp, RISCV_PCPUINFO_OFFSET(stack_cpu_regs)(tp)
I skipped this part, as pcpu_info isn't used anywhere yet, but reserved
some space for pregs in advance.

> > +};
> > +
> > +static inline void wait_for_interrupt(void)
>
> There's no point writing out the name in longhand for a wrapper around
> a single instruction.
>
Will change it to "... wfi(void)"
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 12:06:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 12:06:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482801.748507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJvaO-0007lL-P0; Mon, 23 Jan 2023 12:06:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482801.748507; Mon, 23 Jan 2023 12:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJvaO-0007lE-L1; Mon, 23 Jan 2023 12:06:04 +0000
Received: by outflank-mailman (input) for mailman id 482801;
 Mon, 23 Jan 2023 12:06:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rv8W=5U=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pJvaM-0007l8-Ng
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 12:06:02 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4cec20a5-9b16-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 13:06:00 +0100 (CET)
Received: by mail-wr1-x42d.google.com with SMTP id h16so10554388wrz.12
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 04:06:00 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 g12-20020a5d554c000000b00275970a85f4sm4410911wrw.74.2023.01.23.04.05.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 04:05:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cec20a5-9b16-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=jx8yhtLhUxbPyqeo5BHexpyRBPbyotw5yVBZSoQ0OeQ=;
        b=fE8bOZ8noGR5zAz0LtaTxzcXftdLnYqw6bpRhwJ2MUEV2J1UbL12h2L//4GST+GSj7
         y8KR1OCx5yE+4ajqaFcNe/MzlwOTEWk8EDB7fVfzyVNjZHNH3qmLCfZAgHm5qQZGUDGM
         Kap+N21gw26HSD1ep23rArwU4Bj6e2dtRHlQGDGY+lZNuAiqcVXqh4sbziNdgdfIjnjL
         7cHpHhLQyr9p9cGmv/13DXSm/UpychwOTL+hfPUDTNuCO1DdPsNB9qYRgKn1gzQ1oYJu
         nN7h7KJVjl1YFBZFoDyZOba7LDZ+HTmfspH258zHZwJdww4R8usBcO3L1JedMuF8/b2b
         GK6Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=jx8yhtLhUxbPyqeo5BHexpyRBPbyotw5yVBZSoQ0OeQ=;
        b=TFq6cIgKjF3DY5M67Ij+RO8629ejMD+RdWRrDsr5rZTzJWhnfSPcE/+z8hVVzLPnBg
         gcR1UFcZq/2z2Z262kNjzJxGLOBrI0HWZZRDiagsDUN8gfgYuExBGcW+SnrBKp+9ByQK
         IxDVR3xHkgnLeoyTMhc7Nx+cbzRGaoyzZDOO9/N5Qrtk9QVJ3M6hxQX51dLlEI7BGgF2
         AftOWnAvJh69H07Z3/N00y8CzbYumCS53Chp2/5Dmr3J/LRZ8YlbkAbII47Z9pkTxSr3
         ygi7vXBM6nlA3ZNAq8WMj1bB3N4e9W025yU0dzNaEuhqLMiUXODZEdh7sxXnVp0ZdF8L
         VDcg==
X-Gm-Message-State: AFqh2kovkhCzfbT5uuBuvbuIJGX0CWVmtnl02eUf0/g4roS5TavhPY7M
	S/oQNfZ/Qqfe9zSZcpFk51mgCJg2XksXYg==
X-Google-Smtp-Source: AMrXdXtKxxmI7uoLoAtk7ZMl1C10DR5Z6Suwl5eMPh/3ZR/dzwg1U4GPMX3QOEWsxWF168ZSfea9kw==
X-Received: by 2002:adf:f005:0:b0:2bd:e55a:1e1d with SMTP id j5-20020adff005000000b002bde55a1e1dmr15099670wro.12.1674475560322;
        Mon, 23 Jan 2023 04:06:00 -0800 (PST)
Message-ID: <bd87af0043ea58247b762d8d96a13f50e4eb262b.camel@gmail.com>
Subject: Re: [PATCH v1 05/14] xen/riscv: add early_printk_hnum() function
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
	Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Date: Mon, 23 Jan 2023 14:05:59 +0200
In-Reply-To: <53b7651a-4274-1e2f-fb97-d30f3ddbac1d@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <633ced21788a3abf5079c9a191794616bb1ad351.1674226563.git.oleksii.kurochko@gmail.com>
	 <53b7651a-4274-1e2f-fb97-d30f3ddbac1d@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-20 at 15:39 +0000, Andrew Cooper wrote:
> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> > Add the ability to print hex numbers.
> > It might be useful to print register values as debug information
> > in BUG(), WARN(), etc...
> >
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> I think it would be better to get s(n)printf() working than to take
> these.  We're going to need to get it working soon anyway, and it will
> be much easier than doing the full printk() infrastructure.
>
Agreed here.

I re-checked the patch, and this function is not used anywhere now
(it looks like it was needed only for my personal debugging).

This patch can be dropped now.
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 12:10:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 12:10:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482805.748517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJveF-0000HW-A3; Mon, 23 Jan 2023 12:10:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482805.748517; Mon, 23 Jan 2023 12:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJveF-0000Gf-6J; Mon, 23 Jan 2023 12:10:03 +0000
Received: by outflank-mailman (input) for mailman id 482805;
 Mon, 23 Jan 2023 12:10:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTo8=5U=citrix.com=prvs=380e0b34c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pJveE-00008T-LP
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 12:10:02 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id da731c71-9b16-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 13:09:59 +0100 (CET)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jan 2023 07:09:43 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5306.namprd03.prod.outlook.com (2603:10b6:5:243::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 12:09:41 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 12:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da731c71-9b16-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674475799;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=ou5sAyXk6SHDdw+3MAn2dDSblhTDS4EeWN8QGUyPqV8=;
  b=fSD6j+rCwGrUltvNFf9E9Qn34sUd6X0d4XTdLxm2pcC9pDUWxb95RdGw
   PGd1hG1Eu4PcuqH4Y0TqaUNcdqhdvmfXsBHhNDgCUcOGvIwoZX0ibXoI9
   rIx/dVQ0ncHjh50NgEnMP/CKtBwkg7iu5QvPtAXyzutSW6W+U2zVzDQVM
   c=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 93765027
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,239,1669093200"; 
   d="scan'208";a="93765027"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GJs/jer36UfZuJUqyOS9tHW5/z0H6Ld0rdntlYNsMncOh+Xq6V4A8tPg25yuiRwRkmB+pBFoea2tgh96TpnT+ypmJmHMOO8akweAJlhjS+D+/T9+WKrz+IywSTcPrSYVzt3o+YqO83Jo3IY87yFwdQc8ky4K8BP7UN7WxWeM+V76b1wgRSP6WN2+njJ2QGtg/KG/Q1KFfzbCZ18u4myYu0vj09a8nL6/voL60cTynj+RahOl7O071pfEdX2myyZz7hV7C1NpD3mhWcW2VQreLvtrrjZL+Y/cXX0Vo0o1obQtLcDu4Sc554B+JFeo5iBgIGHidOxB/u1iqsLuz1kHVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ou5sAyXk6SHDdw+3MAn2dDSblhTDS4EeWN8QGUyPqV8=;
 b=iaLRMrxE3WRsXMUrC2RG0YFzmgC1nGrW0jd4B4anYJMTNuksJaEInSeNAIUS7sVOv34QmXWBiX7IlLy8XY0DCc9VkhQjnzU1PmQOSMTjmsX4qIA47zBSEFxX5TzI0t8LusHpxWvCwkINIFY8YX366hT0msXMaO9rDQ0bW5FQ6OJyMCibGU5A2bV49o6o5e1VRVapEeprDHe+/0pCACJQDzUKJ1okmMpUnWItgQUvu90KaXfqyq4VBCfn3SspjxuXhgKVV4BlPZflZ6LSxw3AuxnaWUTR8E4dak8fN8RbmbcqAmLBuy2lwksf7ZfM1rS8HVXDABA0fGxJo3wR3KoLkw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ou5sAyXk6SHDdw+3MAn2dDSblhTDS4EeWN8QGUyPqV8=;
 b=DuZbBYerpsrVDG8qpK3Kz+1R08+UpGzMMo4qW7k80/CaTYBUe/6MzUzDzTfBchmpZSfQh+23rPqLRZIDaUcDHT0hOWiu4AFSK2jJC+MldyE08AIEw0vvw3oh0KA2xn9loYMgkYt6aJGxrvZYwJl65l0EQdm+FcJSjNb5tXBSGog=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 08/14] xen/riscv: introduce decode_cause() stuff
Thread-Topic: [PATCH v1 08/14] xen/riscv: introduce decode_cause() stuff
Thread-Index: AQHZLN///evNj7qtDEG2WE9mE7PhWK6r7WMA
Date: Mon, 23 Jan 2023 12:09:40 +0000
Message-ID: <00af9dc0-1a3c-ef37-3d4d-b0a307349bf3@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <c798832ec19cb94c0a27e8cff8f5bd6d1aa6ae7e.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <c798832ec19cb94c0a27e8cff8f5bd6d1aa6ae7e.1674226563.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|DM6PR03MB5306:EE_
x-ms-office365-filtering-correlation-id: 2192ac11-4cb3-4339-2e41-08dafd3ab4d9
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <5D7A5DDEFB017F44B7FB53E7E225468E@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2192ac11-4cb3-4339-2e41-08dafd3ab4d9
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jan 2023 12:09:41.0096
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: peN8i70rrznHt6kifnGrYUCW/sTYWIzdoME267a3cKVb8OCyHQexixKIUEmupQ1DN3IlE79IiM7DrA71WyP5UtZeZidnt7n0zVt363Ti7A8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5306

On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
> index 3201b851ef..dd64f053a5 100644
> --- a/xen/arch/riscv/traps.c
> +++ b/xen/arch/riscv/traps.c
> @@ -4,8 +4,96 @@
>   *
>   * RISC-V Trap handlers
>   */
> +#include <asm/csr.h>
> +#include <asm/early_printk.h>
>  #include <asm/processor.h>
>  #include <asm/traps.h>
> +#include <xen/errno.h>
> +
> +const char *decode_trap_cause(unsigned long cause)

These should be static, as you've not put a declaration in a header
file.  But as it stands, you'll then get a compiler warning on
decode_cause() as it's not used.

I would merge this patch with the following patch, as the following
patch is very related to this, and then you can get everything nicely
static without unused warnings.

> +{
> +    switch ( cause )
> +    {
> +    case CAUSE_MISALIGNED_FETCH:
> +        return "Instruction Address Misaligned";
> +    case CAUSE_FETCH_ACCESS:
> +        return "Instruction Access Fault";
> +    case CAUSE_ILLEGAL_INSTRUCTION:
> +        return "Illegal Instruction";
> +    case CAUSE_BREAKPOINT:
> +        return "Breakpoint";
> +    case CAUSE_MISALIGNED_LOAD:
> +        return "Load Address Misaligned";
> +    case CAUSE_LOAD_ACCESS:
> +        return "Load Access Fault";
> +    case CAUSE_MISALIGNED_STORE:
> +        return "Store/AMO Address Misaligned";
> +    case CAUSE_STORE_ACCESS:
> +        return "Store/AMO Access Fault";
> +    case CAUSE_USER_ECALL:
> +        return "Environment Call from U-Mode";
> +    case CAUSE_SUPERVISOR_ECALL:
> +        return "Environment Call from S-Mode";
> +    case CAUSE_MACHINE_ECALL:
> +        return "Environment Call from M-Mode";
> +    case CAUSE_FETCH_PAGE_FAULT:
> +        return "Instruction Page Fault";
> +    case CAUSE_LOAD_PAGE_FAULT:
> +        return "Load Page Fault";
> +    case CAUSE_STORE_PAGE_FAULT:
> +        return "Store/AMO Page Fault";
> +    case CAUSE_FETCH_GUEST_PAGE_FAULT:
> +        return "Instruction Guest Page Fault";
> +    case CAUSE_LOAD_GUEST_PAGE_FAULT:
> +        return "Load Guest Page Fault";
> +    case CAUSE_VIRTUAL_INST_FAULT:
> +        return "Virtualized Instruction Fault";
> +    case CAUSE_STORE_GUEST_PAGE_FAULT:
> +        return "Guest Store/AMO Page Fault";
> +    default:
> +        return "UNKNOWN";

This style tends to lead to poor code generation.  You probably want:

const char *decode_trap_cause(unsigned long cause)
{
    static const char *const trap_causes[] = {
        [CAUSE_MISALIGNED_FETCH] = "Instruction Address Misaligned",
        ...
        [CAUSE_STORE_GUEST_PAGE_FAULT] = "Guest Store/AMO Page Fault",
    };

    if ( cause < ARRAY_SIZE(trap_causes) && trap_causes[cause] )
        return trap_causes[cause];
    return "UNKNOWN";
}

(note the trailing comma on the final entry, which is there to simplify
future diffs)

However, given the hope to get snprintf() wired up, you actually want
to adjust this to:

    if ( cause < ARRAY_SIZE(trap_causes) )
        return trap_causes[cause];
    return NULL;

And render the raw cause number for the unknown case, because that is
far more useful for whomever is debugging.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 12:25:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 12:25:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482814.748527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJvt1-0002Tt-Rj; Mon, 23 Jan 2023 12:25:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482814.748527; Mon, 23 Jan 2023 12:25:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJvt1-0002Tm-ON; Mon, 23 Jan 2023 12:25:19 +0000
Received: by outflank-mailman (input) for mailman id 482814;
 Mon, 23 Jan 2023 12:25:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTo8=5U=citrix.com=prvs=380e0b34c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pJvsz-0002Tg-Vh
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 12:25:17 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fc649b21-9b18-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 13:25:15 +0100 (CET)
Received: from mail-bn8nam04lp2042.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jan 2023 07:25:11 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA2PR03MB5771.namprd03.prod.outlook.com (2603:10b6:806:11e::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 12:25:09 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 12:25:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc649b21-9b18-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674476714;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=tc1YmgXbYpKbeyJj17ADgPXtFAEmUBock6Dh2MfcGqg=;
  b=IgwtgrCjJ2QGULyhfG1XkT+7dwWlMGxhBDin1VhFC00MKt1UgHLkQiZR
   CFqciJGjmVVNsqgSCkQ8VZFDF/qH/fniemabKfKjOpA9fEfA2LonZUEQh
   ND3EC2kLpVs/RnE7LBIytP2SBWzhpMnRTE7rwoMkO3omULFrF2FQSIIEb
   Q=;
X-IronPort-RemoteIP: 104.47.74.42
X-IronPort-MID: 94222697
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,239,1669093200"; 
   d="scan'208";a="94222697"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WSTyCrUK8CPHIAF4XFlOV9N+spO7Cfr9rk+fAnqxzMfDJrA1ytp3RNmaq3ljzJFUiEigdk4MWKk/cV0qbVZvpqmSXAmK3Aok0W0DBy9PV9mCcQnEPd2WhczFMGgqii51rRiA06w/eDihYJo2GMew6pfvsNcZhC4iTZxELeT2iynXfINqgwi/zP20pbHdz67Dil2QZMl3ZClCjMvG99Yhrp8i1F2U3uq+k9Buz/iIKf2FIU2rZVWJN0e/AYLTsQZSrzyIJdopHs0EGCcN1U05ohstMqvAvgry+VTs7FbslVGCoDJrSVM13iahFyRPrbg9NGclkApL8OBC+tZ3ZeIEbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tc1YmgXbYpKbeyJj17ADgPXtFAEmUBock6Dh2MfcGqg=;
 b=AzFaSAcntHrGq0b71rie+eq+vXaXQ3wUz6OAGUWG2X6DS12SAuSQxHReystPzn14y6fhmNjxz+9i1k3CNuuUbKsodMxEdf+CGePRdqRmjxS06StAAf+WpNrQyYiPDmT3b4l2iIJK4iZPBazwyX33DInMz2OBbFGlZGLFnT/dM4b6jUQAhhzkfq4vP3XPschXAA4545bkSiIgV+nzNuh8EdbAMZ9qLoGm2CtMKQDSTkDMKrvZriE0Hz5dOWXynX+4QdDEESn4g16ifqnNNe9eQIyEd0NK6Dik/EitlopqPhxuk4dDUt6HvK2W9CDMQb+x3MEcb/I6c5kBB2CW7oZxPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tc1YmgXbYpKbeyJj17ADgPXtFAEmUBock6Dh2MfcGqg=;
 b=Slf3A9so/C/DeS1VAr6fw2qjdjGy6n8E4ZGx/prO0KV26TOW7NWj7WwypgRJpCeVn8Giu3YjkaQgYHPlOrds+cG1lKXfBiKUfZhTjUsJUklxt2XLmrVbuEse+dqpzqk3ilqowRQyFsBf62Zua6vf1oGyrJUDWGUBTjNVaYE1vDo=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii <oleksii.kurochko@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: Re: [PATCH v1 06/14] xen/riscv: introduce exception context
Thread-Topic: [PATCH v1 06/14] xen/riscv: introduce exception context
Thread-Index: AQHZLOAEizmhskK0b0Wez8lGfG74vK6ndUcAgAR2eoCAAAX1gA==
Date: Mon, 23 Jan 2023 12:25:09 +0000
Message-ID: <94e29752-ee20-2268-7701-9976c93c2882@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <00ecc26833738377003ad21603c198ae4278cfd3.1674226563.git.oleksii.kurochko@gmail.com>
 <fd276566-6b7d-ea64-a90a-a0c198ccf36c@citrix.com>
 <bb6b85f147d5d7933532fb27f78fa93ce6209b22.camel@gmail.com>
In-Reply-To: <bb6b85f147d5d7933532fb27f78fa93ce6209b22.camel@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SA2PR03MB5771:EE_
x-ms-office365-filtering-correlation-id: aba698d6-e654-4fa6-01eb-08dafd3cde3d
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <B45BF90510D5D64593EFE42C19AAC0B9@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: aba698d6-e654-4fa6-01eb-08dafd3cde3d
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jan 2023 12:25:09.4126
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: JhcD1CUJSF2bXVopSM7Vfh4XgHc3xcX5OFP6y8g4+HgCundfDhM2GkxUp5yIavoaBHpO4WOFV8W+V3HDkc53KQ3BMbQhvc5Q+BbkABrfbuk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5771

On 23/01/2023 12:03 pm, Oleksii wrote:
> On Fri, 2023-01-20 at 15:54 +0000, Andrew Cooper wrote:
>> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
>>> +
>>> +#define RISCV_CPU_USER_REGS_OFFSET(x)   ((RISCV_CPU_USER_REGS_##x)
>>> * __SIZEOF_POINTER__)
>>> +#define RISCV_CPU_USER_REGS_SIZE
>>> RISCV_CPU_USER_REGS_OFFSET(last)
>>> +
>>> +#ifndef __ASSEMBLY__
>>> +
>>> +/* On stack VCPU state */
>>> +struct cpu_user_regs
>>> +{
>>> +    register_t zero;
>> unsigned long.
> Why is it better to define them as 'unsigned long' instead of
> register_t?

Because there is a material cost to deliberately hiding the type, in
terms of code clarity and legibility.

Things like register_t and vaddr_t are nonsense in a POSIX-y build
environment where these things are spelled "unsigned long", not to
mention that the associated infrastructure is longer than the
non-obfuscated form.

~Andrew
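
The offset-generation scheme being debated can be sketched as a self-contained example. The names below mirror the quoted patch but this is an illustration with a shortened register list, not Xen's actual header; `__SIZEOF_POINTER__` is a GCC/Clang predefined macro.

```c
#include <stddef.h>

/* Enumerate the frame slots once; both C and assembly derive offsets
 * from the enumerator values (register list shortened for brevity). */
enum {
    RISCV_CPU_USER_REGS_zero,
    RISCV_CPU_USER_REGS_sepc,
    RISCV_CPU_USER_REGS_sstatus,
    RISCV_CPU_USER_REGS_last,
};

#define RISCV_CPU_USER_REGS_OFFSET(x) \
    ((RISCV_CPU_USER_REGS_##x) * __SIZEOF_POINTER__)
#define RISCV_CPU_USER_REGS_SIZE RISCV_CPU_USER_REGS_OFFSET(last)

/* Members spelled "unsigned long" rather than register_t, per the
 * suggestion above. */
struct cpu_user_regs {
    unsigned long zero;
    unsigned long sepc;
    unsigned long sstatus;
};

/* The enum-derived offsets must agree with the C struct layout. */
_Static_assert(offsetof(struct cpu_user_regs, sepc) ==
               RISCV_CPU_USER_REGS_OFFSET(sepc), "sepc offset mismatch");
_Static_assert(sizeof(struct cpu_user_regs) == RISCV_CPU_USER_REGS_SIZE,
               "frame size mismatch");
```

The `_Static_assert`s fire at build time, which is how this kind of enum/struct agreement is normally enforced.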


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 12:31:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 12:31:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482819.748537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJvyP-0003ug-H0; Mon, 23 Jan 2023 12:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482819.748537; Mon, 23 Jan 2023 12:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJvyP-0003uX-DM; Mon, 23 Jan 2023 12:30:53 +0000
Received: by outflank-mailman (input) for mailman id 482819;
 Mon, 23 Jan 2023 12:30:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTo8=5U=citrix.com=prvs=380e0b34c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pJvyO-0003uR-3d
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 12:30:52 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c35e050c-9b19-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 13:30:49 +0100 (CET)
Received: from mail-co1nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jan 2023 07:30:40 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH2PR03MB5301.namprd03.prod.outlook.com (2603:10b6:610:9d::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 12:30:36 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 12:30:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c35e050c-9b19-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674477049;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=o5BpS3JAQgNTbeaqBa9kJtRJ25xvDWQITu5TzmiHT8A=;
  b=KzK8EamNwTidi1x18wDPPBvhTVqSaGOV+dN8eBFUsxsyi9oBlzqxFFOO
   0YGRcDM+E96d/Oq0a4o3jod+YP3x15fL8iZrO1JPKMVFAG7Bbq+GnvAMy
   DUwP8jcS5R/bmMerP3LKw1JihAmQ/hwWcdwxPeLpvTlAoybkAldBoLgL3
   Q=;
X-IronPort-RemoteIP: 104.47.56.171
X-IronPort-MID: 92705050
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,239,1669093200"; 
   d="scan'208";a="92705050"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cebFiT4XOxgJCV1i2TikX6uAMk8p2Y+o6sEeLv5kjFWzqqLupAuecov+HdpJYnovXXoCNhSKKrNQ+0tUc1AY/lz56MulHYnEIYMCD8Pi7UcjHL5JcYnCYqXQSqFJJ0GT+0k52iPgqo/9Qyq/ig40Qpv6X5ZCU/ZvNvup4V3L7D99cjxpNJCyA1AGhlg9K7eoneGIlFscIxEXhM4Bbb3anQFhF+A3gdw7WP+LMSXebme6s99jc4uaM4+NH1Bvj6JOMbhiK4PadwXyvmqIpDcDS6CuY1ZCR9GJIcKLvf+6pRXYvYkGBJm97SWl98gZTI7hyqw12rwhyH4kFec1cQ1Q9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=o5BpS3JAQgNTbeaqBa9kJtRJ25xvDWQITu5TzmiHT8A=;
 b=SmMmyhZtN00NHtF2ts83stQRYx5Ev4cOajxMtPVBTJXWzA3Z2Y4IRp38VBuZpy7rqxb25wuy0byAckUGGF9g4aBiMELVIoUw2DZzJjZDFmkc1ji2Ow/S0yo3wrho9fLjJ4+e0Woj4CihGi9u+P5RFigJRH+Mufihd7+Ty5ejmjtt2PxmlxC64/H0V40tDwLCoo+SBE9bfAVc79oxz0gXt+10w5thszuglA1RJhZ5ouGhmodB/Xt0fq7RbYs6d8RZ/GkXhUDWX5vLdbQ0xICC/H67w0meA5LBI9ICr210msEcgJRLwzUka4qYYj3IL6WcDlxzyyoUv0/GXsdw6+inbA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o5BpS3JAQgNTbeaqBa9kJtRJ25xvDWQITu5TzmiHT8A=;
 b=dqzbM2c5fu5/CLjJRXulhh4DVsx18VB9w8y+By5E0IC0FqiNioCSMakcnwqmKEh27ST9IEjmLgy6QOgWoJs/gPPVhwfexClj/myK7Xh8MU2hKn5m8UAuzouRXOW+xoJGrD13lm+k2QWq+bUJR6PAAB0X3rko8NfWHc+vC6t1GUk=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Tim
 (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when
 HVM+PV32
Thread-Topic: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when
 HVM+PV32
Thread-Index: AQHZLwKEbZ+pvxq41k6VhQDqg2rUMq6r0SKAgAAA54CAABzuAA==
Date: Mon, 23 Jan 2023 12:30:36 +0000
Message-ID: <04f5c9ba-24aa-c9be-e8de-a867c897835a@citrix.com>
References: <942e1164-5ed0-bdda-424f-90134b0e22c5@suse.com>
 <79420a4f-358a-f404-7965-e5f215234ba9@citrix.com>
 <2ec2a36e-4264-6c12-c2e6-1af85c91f1f6@suse.com>
In-Reply-To: <2ec2a36e-4264-6c12-c2e6-1af85c91f1f6@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CH2PR03MB5301:EE_
x-ms-office365-filtering-correlation-id: 10d82e8a-76ec-4807-c240-08dafd3da123
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <0A3EB2B5D0548C4985A1DBD6D74EF802@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 10d82e8a-76ec-4807-c240-08dafd3da123
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jan 2023 12:30:36.4029
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: SybmrRBxlGIOVA8x1kNQjLOlxzo9KtPJm4HsI8Sj0Fh13J0mpBFOkJFYr7TaWFhezoowVli8jUoKftF1YfAWn0keot6mASis9jKDJq1Tm44=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR03MB5301

On 23/01/2023 10:47 am, Jan Beulich wrote:
> On 23.01.2023 11:43, Andrew Cooper wrote:
>> On 23/01/2023 8:12 am, Jan Beulich wrote:
>>> While the table is used only when HVM=y, the table entry of course needs
>>> to be properly populated when also PV32=y. Fully removing the table
>>> entry was therefore wrong.
>>>
>>> Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Erm, why?
>>
>> The safety justification for the original patch was that this is HVM
>> only code.  And it really is HVM only code - it's genuinely compiled out
>> for !HVM builds.
> Right, and we have logic taking care of the !HVM case. But that same
> logic uses this "HVM-only" table when HVM=y also for all PV types.

Ok - this is what needs fixing then.

This is a layering violation which has successfully tricked you into
making a buggy patch.

I'm unwilling to bet this will be the final time either...  "this file
is HVM-only, therefore no PV paths enter it" is a reasonable
expectation, and should be true.

~Andrew
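
The hazard under discussion can be reconstructed in miniature (hypothetical names and values, not Xen's real table): a size table trimmed under one config option while shared logic still indexes it for 32-bit PV guests.

```c
/* Shadow types, shortened for illustration. */
enum { SH_type_l1_shadow, SH_type_l2_shadow, SH_type_l2h_shadow, SH_type_max };

#define CONFIG_HVM  1
#define CONFIG_PV32 1

static const unsigned char sh_type_to_size[SH_type_max] = {
    [SH_type_l1_shadow]  = 1,
    [SH_type_l2_shadow]  = 1,
#if defined(CONFIG_HVM) && defined(CONFIG_PV32)
    /* The point of the patch: with HVM=y this table is the one
     * consulted for PV32 guests too, so the L2H entry must stay
     * populated whenever PV32=y. */
    [SH_type_l2h_shadow] = 1,
#endif
};

/* Pages to allocate for a shadow of the given type; a zero entry here
 * would mean under-allocating for 32-bit PV guests. */
static unsigned int shadow_size(unsigned int shadow_type)
{
    return sh_type_to_size[shadow_type];
}
```

With the `#if` dropped entirely (as the original cleanup did), the designated initializer for `SH_type_l2h_shadow` disappears and the entry silently becomes 0.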


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 12:41:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 12:41:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482825.748547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJw8L-0005P8-F4; Mon, 23 Jan 2023 12:41:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482825.748547; Mon, 23 Jan 2023 12:41:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJw8L-0005P1-Bx; Mon, 23 Jan 2023 12:41:09 +0000
Received: by outflank-mailman (input) for mailman id 482825;
 Mon, 23 Jan 2023 12:41:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJw8K-0005Ov-LB
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 12:41:08 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2076.outbound.protection.outlook.com [40.107.247.76])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 331e04dd-9b1b-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 13:41:04 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8928.eurprd04.prod.outlook.com (2603:10a6:102:20f::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 12:41:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 12:41:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 331e04dd-9b1b-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PgpWq0sW8MlH2JmE6pFw5VVY8Yk+NMwsrX4khg4F8JVy73DCMYFfbHrfFg0EsU7HV6zrOVgghqWpMx+OrFr7tWeHE+s3MEIxmZaUBvSmtJ1lTpNxV5nEQeR8UHxWmwT7tp7W12TMJ1n8COozxpStAks+/5Il2aWcpQSJJ2GIL3XHMPU17m0HNbDumb7IFalhGxXI8/k076QSc5LciReT/jTN4YSU4Y9Wk588B7IdCPaHd/GGy3Wzm9l+kUrE4Zx551UgnDMg3YRLQ2RHaMueSN3z5CGKa56wJLdt69uCLVFItcmzblah+3sS3a/lyVlRfFFh2rHZborjmloybWpB7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JHmEEX19wYmeouYaOvjhxtXA/9ROfVN2luM/OduK9RU=;
 b=Dy24Vst+6B0h0zAtc0bJB4H9WWftoQdiW+W56JtRodBKsPKN4CpsXhKyn8nZsB3oDNLG8IXzdOx0vx8On/njrFPp5KahA2j9XCWOYX0Nhg0V6ILi5MVeRMCMw/Acy9p/qAf2y77WTwUR0q1aRiYbfkJWv0n2ZEDF1JVlXheq+A759KkG5qFeK9d3SIe1yQ31Vw2bBMuFVpwC9I0qpDHZfZrDO8IHVbPyJodQduc3B6hojTQs3FSjETVdQec0ItKqG9VlAiDjM8YxVRPicrg3t7MZabUWWEXmNeYvns10thF1uCRVSuHgsV41nk6elT8bkNxfBqA0RnwAzXKxt+TWDQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JHmEEX19wYmeouYaOvjhxtXA/9ROfVN2luM/OduK9RU=;
 b=gUNZROMesxiO49GAoIkDU4kGzIaPuuXxXruhGZkm2MGyZT3Vo14wSnKPhyIdPluWbHAAtZs2t/KsF+AMXBr+G/Q2GJryceWtffR5BEUjYzD7+u3ReVnc0emktDPvnsRx68YCZVc2ssc0ctHcCw7m+wZnDo+Sl20QMN/JAeRYsk788YBUKkvkp0lOTfqCO3wdutQt0yZNBs/b7gMgiQ4fo6/XF2Q0qKSujI0V+iXQVScXKgl4S5F9RfOEFYlCQMAfBGqfGKt8tzhzBTWpfOb9IyvalfZOtKSmovfw1uHzye9Xsh+XhQFRdDdAHIe5G0vRqrKyYJ/JgHbDUIWOgL7o/g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f7179c87-fcd8-9a7e-ce8f-4e33035d73c4@suse.com>
Date: Mon, 23 Jan 2023 13:41:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
 <ac6f02e8-c493-7914-f3c4-32b4ebe1bc26@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ac6f02e8-c493-7914-f3c4-32b4ebe1bc26@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0093.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8928:EE_
X-MS-Office365-Filtering-Correlation-Id: 4da8c941-841b-46ba-8719-08dafd3f161d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4da8c941-841b-46ba-8719-08dafd3f161d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 12:41:02.4022
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Dtgb0eVTNq1+Tan5SkJxFny3WlsaYaIrOpZEQ1lP/1nKGC0XGksTxGSWVOVNeZ34DHVvU+jbuWRPJEzan71pFQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8928

On 23.01.2023 12:50, Andrew Cooper wrote:
> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
>> +        csrr    t0, CSR_SEPC
>> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sepc)(sp)
>> +        csrr    t0, CSR_SSTATUS
>> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sstatus)(sp)
> 
> So something I've noticed about CSRs through this series.
> 
> The C CSR macros are set up to use real CSR names, but the CSR_*
> constants used in C and ASM are raw numbers.
> 
> If we're using raw numbers, then the C CSR accessors should be static
> inlines instead, but the advantage of using names is the toolchain can
> issue an error when we reference a CSR not supported by the current
> extensions.

That's a default-off diagnostic iirc, so we'd gain something here only
when explicitly turning that on as well.

Jan
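
The name-vs-number distinction can be sketched with a hypothetical accessor (illustrative, not Xen's actual header). The CSR_* values are the raw numbers assigned by the RISC-V privileged spec; the macro stringizes its argument into the mnemonic, so either spelling can be passed, and what the assembler can diagnose depends on which one is used. Expanding the macro of course requires a RISC-V target.

```c
/* Raw CSR numbers from the RISC-V privileged spec. */
#define CSR_SSTATUS 0x100
#define CSR_SEPC    0x141

#define __stringify_1(x) #x
#define __stringify(x)   __stringify_1(x)

/* csr_read(CSR_SEPC) emits "csrr %0, 0x141" - any number assembles,
 * so a typo'd or unsupported CSR is never caught.
 * csr_read(sepc) emits "csrr %0, sepc" - the assembler can reject
 * names outside the selected extensions, though as noted above that
 * diagnostic may need enabling explicitly. */
#define csr_read(csr) ({                                            \
    unsigned long v_;                                               \
    asm volatile ( "csrr %0, " __stringify(csr) : "=r" (v_) );      \
    v_;                                                             \
})
```

Since the macro body is only seen by the compiler when expanded, the definitions themselves are portable; only uses of `csr_read()` are RISC-V-specific.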


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 12:49:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 12:49:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <12bcdc9b-52bf-ad10-a3ec-286d00372be0@suse.com>
Date: Mon, 23 Jan 2023 13:49:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when
 HVM+PV32
Content-Language: en-US
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <942e1164-5ed0-bdda-424f-90134b0e22c5@suse.com>
 <79420a4f-358a-f404-7965-e5f215234ba9@citrix.com>
 <2ec2a36e-4264-6c12-c2e6-1af85c91f1f6@suse.com>
 <04f5c9ba-24aa-c9be-e8de-a867c897835a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <04f5c9ba-24aa-c9be-e8de-a867c897835a@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 23.01.2023 13:30, Andrew Cooper wrote:
> On 23/01/2023 10:47 am, Jan Beulich wrote:
>> On 23.01.2023 11:43, Andrew Cooper wrote:
>>> On 23/01/2023 8:12 am, Jan Beulich wrote:
>>>> While the table is used only when HVM=y, the table entry of course needs
>>>> to be properly populated when also PV32=y. Fully removing the table
>>>> entry was therefore wrong.
>>>>
>>>> Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> Erm, why?
>>>
>>> The safety justification for the original patch was that this is HVM
>>> only code.  And it really is HVM only code - it's genuinely compiled out
>>> for !HVM builds.
>> Right, and we have logic taking care of the !HVM case. But that same
>> logic uses this "HVM-only" table when HVM=y also for all PV types.
> 
> Ok - this is what needs fixing then.
> 
> This is a layering violation which has successfully tricked you into
> making a buggy patch.
> 
> I'm unwilling to bet this will be the final time either...  "this file
> is HVM-only, therefore no PV paths enter it" is a reasonable
> expectation, and should be true.

Nice abstract consideration, but would you mind pointing out how you
envision shadow_size() looking while meeting your constraints _and_
meeting my demand of no excess #ifdef-ary? The way I'm reading your
reply is that you're asking to special-case L2H _right in_
shadow_size(). Then again, see also my remark in the original (now
known to be faulty) patch regarding such special casing. I could of
course follow that route, regardless of HVM (i.e. unlike said there,
not just for the #else part) ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 13:11:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 13:11:25 +0000
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>, <julien@xen.org>,
	<ayankuma@amd.com>
Subject: [PATCH] automation: Modify static-mem check in qemu-smoke-dom0less-arm64.sh
Date: Mon, 23 Jan 2023 14:10:23 +0100
Message-ID: <20230123131023.9408-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

At the moment, the static-mem check relies on the way Xen exposes the
memory banks in the device tree. As this might change, the check should
be modified to be generic and not rely on the device tree. In this case,
let's use /proc/iomem, which exposes the memory ranges in %08x format
as follows:
<start_addr>-<end_addr> : <description>

This way, we can grep /proc/iomem for an entry covering the memory
region defined by the static-mem configuration and carrying the
"System RAM" description. If it exists, mark the test as passed. Also,
take the opportunity to add the 0x prefix to the domu_{base,size}
definitions rather than adding it in front of each occurrence.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Patch made as part of the discussion:
https://lore.kernel.org/xen-devel/ba37ee02-c07c-2803-0867-149c779890b6@amd.com/

CC: Julien, Ayan
---
 automation/scripts/qemu-smoke-dom0less-arm64.sh | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/automation/scripts/qemu-smoke-dom0less-arm64.sh b/automation/scripts/qemu-smoke-dom0less-arm64.sh
index 2b59346fdcfd..182a4b6c18fc 100755
--- a/automation/scripts/qemu-smoke-dom0less-arm64.sh
+++ b/automation/scripts/qemu-smoke-dom0less-arm64.sh
@@ -16,14 +16,13 @@ fi
 
 if [[ "${test_variant}" == "static-mem" ]]; then
     # Memory range that is statically allocated to DOM1
-    domu_base="50000000"
-    domu_size="10000000"
+    domu_base="0x50000000"
+    domu_size="0x10000000"
     passed="${test_variant} test passed"
     domU_check="
-current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@${domu_base}/reg 2>/dev/null)
-expected=$(printf \"%016x%016x\" 0x${domu_base} 0x${domu_size})
-if [[ \"\${expected}\" == \"\${current}\" ]]; then
-	echo \"${passed}\"
+mem_range=$(printf \"%08x-%08x\" ${domu_base} $(( ${domu_base} + ${domu_size} - 1 )))
+if grep -q -x \"\${mem_range} : System RAM\" /proc/iomem; then
+    echo \"${passed}\"
 fi
 "
 fi
@@ -126,7 +125,7 @@ UBOOT_SOURCE="boot.source"
 UBOOT_SCRIPT="boot.scr"' > binaries/config
 
 if [[ "${test_variant}" == "static-mem" ]]; then
-    echo -e "\nDOMU_STATIC_MEM[0]=\"0x${domu_base} 0x${domu_size}\"" >> binaries/config
+    echo -e "\nDOMU_STATIC_MEM[0]=\"${domu_base} ${domu_size}\"" >> binaries/config
 fi
 
 if [[ "${test_variant}" == "boot-cpupools" ]]; then
-- 
2.25.1
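The range computation used by the new check can be sketched standalone as follows (values taken from the patch; run here outside the domU, so the final grep against /proc/iomem is only shown as a comment):

```shell
# Statically allocated DOM1 range, as in the patch.
domu_base="0x50000000"
domu_size="0x10000000"

# /proc/iomem lists inclusive ranges as "<start>-<end> : <description>"
# in %08x format, hence the "- 1" for the end address.
mem_range=$(printf "%08x-%08x" "${domu_base}" $(( domu_base + domu_size - 1 )))
echo "${mem_range}"   # 50000000-5fffffff

# On the domU, the check then becomes:
# grep -q -x "${mem_range} : System RAM" /proc/iomem
```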



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 13:45:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 13:45:19 +0000
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <xuwei5@hisilicon.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v3 0/3] Pre-requisite patches for supporting 32 bit physical address
Date: Mon, 23 Jan 2023 13:44:48 +0000
Message-ID: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain

Hi All,

This series includes some patches and fixes identified during the review of
"[XEN v2 00/11] Add support for 32 bit physical address".

Patch 1/3: The previous version caused CI to fail; this patch attempts to
fix that.

Patch 2/3: This was pointed out by Jan during the review of
"[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size".
Similar to Patch 1/3, it can also be considered a pre-requisite for
supporting 32 bit physical addresses.

Patch 3/3: This was also pointed out by Jan during the review of
"[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size".

Ayan Kumar Halder (3):
  xen/arm: Use the correct format specifier
  xen/drivers: ns16550: Fix the use of simple_strtoul() for extracting
    u64
  xen/drivers: ns16550: Fix an incorrect assignment to uart->io_size

 xen/arch/arm/domain_build.c | 45 +++++++++++++++++--------------------
 xen/arch/arm/gic-v2.c       |  6 ++---
 xen/arch/arm/mm.c           |  2 +-
 xen/drivers/char/ns16550.c  |  6 ++---
 4 files changed, 28 insertions(+), 31 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 13:45:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 13:45:24 +0000
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <xuwei5@hisilicon.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v3 1/3] xen/arm: Use the correct format specifier
Date: Mon, 23 Jan 2023 13:44:49 +0000
Message-ID: <20230123134451.47185-2-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
References: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
when creating nodes in the fdt, an address that appears in a node name
should be printed with 'PRIx64', in conformance with the following
rule from https://elinux.org/Device_Tree_Linux

. node names
"unit-address does not have leading zeros"

As 'PRIpaddr' pads with leading zeros, we cannot use it here.

So, introduce a wrapper, domain_fdt_begin_node(), which prints the
physical address using 'PRIx64'.

2. One should use 'PRIx64' to display a 'u64' in hex format. The
current use of 'PRIpaddr' for printing a PTE is buggy, as a PTE's raw
bits are not a physical address.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from -

v1 - 1. Moved the patch earlier.
2. Moved a part of change from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr"
into this patch.

v2 - 1. Use PRIx64 for appending addresses to fdt node names. This fixes the CI failure.

 xen/arch/arm/domain_build.c | 45 +++++++++++++++++--------------------
 xen/arch/arm/gic-v2.c       |  6 ++---
 xen/arch/arm/mm.c           |  2 +-
 3 files changed, 25 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f35f4d2456..97c2395f9a 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1288,6 +1288,20 @@ static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
     return res;
 }
 
+static int __init domain_fdt_begin_node(void *fdt, const char *name,
+                                        uint64_t unit)
+{
+    /*
+     * The size of the buffer to hold the longest possible string ie
+     * interrupt-controller@ + a 64-bit number + \0
+     */
+    char buf[38];
+
+    /* ePAPR 3.4 */
+    snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
+    return fdt_begin_node(fdt, buf);
+}
+
 static int __init make_memory_node(const struct domain *d,
                                    void *fdt,
                                    int addrcells, int sizecells,
@@ -1296,8 +1310,6 @@ static int __init make_memory_node(const struct domain *d,
     unsigned int i;
     int res, reg_size = addrcells + sizecells;
     int nr_cells = 0;
-    /* Placeholder for memory@ + a 64-bit number + \0 */
-    char buf[24];
     __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
     __be32 *cells;
 
@@ -1314,9 +1326,7 @@ static int __init make_memory_node(const struct domain *d,
 
     dt_dprintk("Create memory node\n");
 
-    /* ePAPR 3.4 */
-    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "memory", mem->bank[i].start);
     if ( res )
         return res;
 
@@ -1375,16 +1385,13 @@ static int __init make_shm_memory_node(const struct domain *d,
     {
         uint64_t start = mem->bank[i].start;
         uint64_t size = mem->bank[i].size;
-        /* Placeholder for xen-shmem@ + a 64-bit number + \0 */
-        char buf[27];
         const char compat[] = "xen,shared-memory-v1";
         /* Worst case addrcells + sizecells */
         __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
         __be32 *cells;
         unsigned int len = (addrcells + sizecells) * sizeof(__be32);
 
-        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start);
-        res = fdt_begin_node(fdt, buf);
+        res = domain_fdt_begin_node(fdt, "xen-shmem", mem->bank[i].start);
         if ( res )
             return res;
 
@@ -2716,12 +2723,9 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
     __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
     __be32 *cells;
     const struct domain *d = kinfo->d;
-    /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
-    char buf[38];
 
-    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
-             vgic_dist_base(&d->arch.vgic));
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "interrupt-controller",
+                                vgic_dist_base(&d->arch.vgic));
     if ( res )
         return res;
 
@@ -2771,14 +2775,10 @@ static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
     int res = 0;
     __be32 *reg, *cells;
     const struct domain *d = kinfo->d;
-    /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
-    char buf[38];
     unsigned int i, len = 0;
 
-    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
-             vgic_dist_base(&d->arch.vgic));
-
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "interrupt-controller",
+                                vgic_dist_base(&d->arch.vgic));
     if ( res )
         return res;
 
@@ -2858,11 +2858,8 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
     __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
     __be32 *cells;
     struct domain *d = kinfo->d;
-    /* Placeholder for sbsa-uart@ + a 64-bit number + \0 */
-    char buf[27];
 
-    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIx64, d->arch.vpl011.base_addr);
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "sbsa-uart", d->arch.vpl011.base_addr);
     if ( res )
         return res;
 
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 61802839cb..5d4d298b86 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1049,7 +1049,7 @@ static void __init gicv2_dt_init(void)
     if ( csize < SZ_8K )
     {
         printk(XENLOG_WARNING "GICv2: WARNING: "
-               "The GICC size is too small: %#"PRIx64" expected %#x\n",
+               "The GICC size is too small: %#"PRIpaddr" expected %#x\n",
                csize, SZ_8K);
         if ( platform_has_quirk(PLATFORM_QUIRK_GIC_64K_STRIDE) )
         {
@@ -1280,11 +1280,11 @@ static int __init gicv2_init(void)
         gicv2.map_cbase += aliased_offset;
 
         printk(XENLOG_WARNING
-               "GICv2: Adjusting CPU interface base to %#"PRIx64"\n",
+               "GICv2: Adjusting CPU interface base to %#"PRIpaddr"\n",
                cbase + aliased_offset);
     } else if ( csize == SZ_128K )
         printk(XENLOG_WARNING
-               "GICv2: GICC size=%#"PRIx64" but not aliased\n",
+               "GICv2: GICC size=%#"PRIpaddr" but not aliased\n",
                csize);
 
     gicv2.map_hbase = ioremap_nocache(hbase, PAGE_SIZE);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0fc6f2992d..fab54618ab 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -249,7 +249,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
 
         pte = mapping[offsets[level]];
 
-        printk("%s[0x%03x] = 0x%"PRIpaddr"\n",
+        printk("%s[0x%03x] = 0x%"PRIx64"\n",
                level_strs[level], offsets[level], pte.bits);
 
         if ( level == 3 || !pte.walk.valid || !pte.walk.table )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 13:45:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 13:45:38 +0000
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <xuwei5@hisilicon.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v3 2/3] xen/drivers: ns16550: Fix the use of simple_strtoul() for extracting u64
Date: Mon, 23 Jan 2023 13:44:50 +0000
Message-ID: <20230123134451.47185-3-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
References: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

One should use simple_strtoull() (instead of simple_strtoul()) to
assign a value to a 'u64' variable. The reason is that u64 maps to
'unsigned long long' on all supported platforms (i.e. Arm32, Arm64 and
x86), whereas 'unsigned long' is only 32 bits wide on Arm32, so
simple_strtoul() would truncate values above 4 GiB there.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from -

v1,v2 - NA (This patch is introduced in v3).

 xen/drivers/char/ns16550.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 58d0ccd889..43e1f971ab 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -1532,7 +1532,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
         else
 #endif
         {
-            uart->io_base = simple_strtoul(conf, &conf, 0);
+            uart->io_base = simple_strtoull(conf, &conf, 0);
         }
     }
 
@@ -1603,7 +1603,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
                        "Can't use io_base with dev=pci or dev=amt options\n");
                 break;
             }
-            uart->io_base = simple_strtoul(param_value, NULL, 0);
+            uart->io_base = simple_strtoull(param_value, NULL, 0);
             break;
 
         case irq:
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 13:45:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 13:45:48 +0000
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <xuwei5@hisilicon.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v3 3/3] xen/drivers: ns16550: Fix an incorrect assignment to uart->io_size
Date: Mon, 23 Jan 2023 13:44:51 +0000
Message-ID: <20230123134451.47185-4-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
References: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF0000C404:EE_|IA0PR12MB8225:EE_
X-MS-Office365-Filtering-Correlation-Id: af5965ca-5167-482d-6424-08dafd481e69
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 13:45:41.5573
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: af5965ca-5167-482d-6424-08dafd481e69
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF0000C404.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB8225

uart->io_size represents the size in bytes. Thus, when serial_port.bit_width
is assigned to it, the value must first be converted from bits to bytes.

Fixes: 17b516196c55 ("ns16550: add ACPI support for ARM only")
Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from -

v1, v2 - N/A (new patch introduced in v3).

 xen/drivers/char/ns16550.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 43e1f971ab..092f6b9c4b 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -1870,7 +1870,7 @@ static int __init ns16550_acpi_uart_init(const void *data)
     uart->parity = spcr->parity;
     uart->stop_bits = spcr->stop_bits;
     uart->io_base = spcr->serial_port.address;
-    uart->io_size = spcr->serial_port.bit_width;
+    uart->io_size = DIV_ROUND_UP(spcr->serial_port.bit_width, BITS_PER_BYTE);
     uart->reg_shift = spcr->serial_port.bit_offset;
     uart->reg_width = spcr->serial_port.access_width;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 13:53:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 13:53:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482868.748622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxFX-0007cU-L8; Mon, 23 Jan 2023 13:52:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482868.748622; Mon, 23 Jan 2023 13:52:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxFX-0007cN-IK; Mon, 23 Jan 2023 13:52:39 +0000
Received: by outflank-mailman (input) for mailman id 482868;
 Mon, 23 Jan 2023 13:52:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJxFW-0007cF-4J
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 13:52:38 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2046.outbound.protection.outlook.com [40.107.21.46])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 30df0c2a-9b25-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 14:52:35 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7217.eurprd04.prod.outlook.com (2603:10a6:20b:1db::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Mon, 23 Jan
 2023 13:52:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 13:52:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30df0c2a-9b25-11ed-b8d1-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <469c9ef3-767a-358c-5e70-a1e0d9b1a4ca@suse.com>
Date: Mon, 23 Jan 2023 14:52:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 03/14] xen/riscv: add <asm/riscv_encoding.h> header
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <fe153cbeffd4ba4e158271ccd2449628f4973481.1674226563.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <fe153cbeffd4ba4e158271ccd2449628f4973481.1674226563.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0148.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7217:EE_
X-MS-Office365-Filtering-Correlation-Id: 3894622b-f3d3-4044-8f20-08dafd490b2b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3894622b-f3d3-4044-8f20-08dafd490b2b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 13:52:19.1159
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mElxe53gxcCK/rvuxtBIvb0AZf7SFN20AFNsUiDHwgI1mU08vOUrcUdDHIbLdKwqkqPZkZzDSLL1G9W4EyllAg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7217

On 20.01.2023 15:59, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

I was about to commit this, but ...

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/riscv_encoding.h
> @@ -0,0 +1,945 @@
> +/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
> +/*
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + *
> + * Authors:
> + *   Anup Patel <anup.patel@wdc.com>

... this raises a patch authorship question: Are you missing their
S-o-b: and/or From:?

> + * The source has been largely adapted from OpenSBI:
> + * include/sbi/riscv_encodnig.h

Nit: Typo.

> + * 

Nit: trailing blank.

There also look to be hard tabs in the file. This is fine if the file
is being imported (almost) verbatim from elsewhere, but then the origin
wants stating in an Origin: tag (see docs/process/sending-patches.pandoc).

>[...]
> +#define IMM_I(insn)			((s32)(insn) >> 20)
> +#define IMM_S(insn)			(((s32)(insn) >> 25 << 5) | \
> +					 (s32)(((insn) >> 7) & 0x1f))

Please can you avoid introducing new instances of s<N> or u<N>? See
./CODING_STYLE.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 13:57:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 13:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482874.748633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxKB-0008FC-7B; Mon, 23 Jan 2023 13:57:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482874.748633; Mon, 23 Jan 2023 13:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxKB-0008F5-4U; Mon, 23 Jan 2023 13:57:27 +0000
Received: by outflank-mailman (input) for mailman id 482874;
 Mon, 23 Jan 2023 13:57:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJxK9-0008Ez-TO
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 13:57:25 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2085.outbound.protection.outlook.com [40.107.22.85])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dd0be682-9b25-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 14:57:24 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7108.eurprd04.prod.outlook.com (2603:10a6:208:19e::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 13:57:22 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 13:57:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd0be682-9b25-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <08ac78fd-85d4-2a43-1922-3128d5fd8d21@suse.com>
Date: Mon, 23 Jan 2023 14:57:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 04/14] xen/riscv: add <asm/csr.h> header
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <afc53b9bee58b5d386f105ee8f23a411d5a15bed.1674226563.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <afc53b9bee58b5d386f105ee8f23a411d5a15bed.1674226563.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0082.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7108:EE_
X-MS-Office365-Filtering-Correlation-Id: 8665ccc3-0699-4422-9cfd-08dafd49bfe6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8665ccc3-0699-4422-9cfd-08dafd49bfe6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 13:57:22.1910
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EdZHR+EP4h0T7EezKBbKuFJURvMHneTVUybv7rSduWod1qReG5SWhaV7+sFwguwiwiC+eLtN/GTkuWz9Wbd0IQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7108

On 20.01.2023 15:59, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/csr.h
> @@ -0,0 +1,82 @@
> +/*
> + * Take from Linux.

This again means you want an Origin: tag. Whether the comment itself is
useful depends on how much customization you expect there to be down
the road. But wait - the header here is quite dissimilar from Linux'es,
so the description wants to go into further detail. That would then want
to include why 5 of the 7 functions are actually commented out at this
point.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:02:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:02:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482879.748643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxOB-0001IQ-Nd; Mon, 23 Jan 2023 14:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482879.748643; Mon, 23 Jan 2023 14:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxOB-0001IJ-Kz; Mon, 23 Jan 2023 14:01:35 +0000
Received: by outflank-mailman (input) for mailman id 482879;
 Mon, 23 Jan 2023 14:01:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJxOA-0001ID-H1
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:01:34 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2077.outbound.protection.outlook.com [40.107.8.77])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70c2b671-9b26-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:01:32 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7397.eurprd04.prod.outlook.com (2603:10a6:10:1a9::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:01:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:01:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70c2b671-9b26-11ed-b8d1-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b24040f3-4dbc-dd91-3d7e-97bc614679e3@suse.com>
Date: Mon, 23 Jan 2023 15:01:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v3 2/3] xen/drivers: ns16550: Fix the use of
 simple_strtoul() for extracting u64
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, wl@xen.org,
 xuwei5@hisilicon.com, xen-devel@lists.xenproject.org
References: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
 <20230123134451.47185-3-ayan.kumar.halder@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230123134451.47185-3-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0117.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7397:EE_
X-MS-Office365-Filtering-Correlation-Id: c65db9c0-e0bf-4203-50f8-08dafd4a52b9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c65db9c0-e0bf-4203-50f8-08dafd4a52b9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:01:28.5973
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7397

On 23.01.2023 14:44, Ayan Kumar Halder wrote:
> One should use simple_strtoull() (instead of simple_strtoul()) to
> assign a value to a 'u64' variable, since u64 can be represented by
> 'unsigned long long' on all the platforms (i.e. Arm32, Arm64 and x86).

Suggested-by: Jan Beulich <jbeulich@suse.com>
(or Reported-by or Requested-by, to your liking)

> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:04:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:04:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482886.748653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxQb-0001wm-8Y; Mon, 23 Jan 2023 14:04:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482886.748653; Mon, 23 Jan 2023 14:04:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxQb-0001wf-4D; Mon, 23 Jan 2023 14:04:05 +0000
Received: by outflank-mailman (input) for mailman id 482886;
 Mon, 23 Jan 2023 14:04:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rv8W=5U=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pJxQZ-0001wU-Lz
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:04:03 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ca4ceee5-9b26-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 15:04:02 +0100 (CET)
Received: by mail-wr1-x434.google.com with SMTP id n7so10914778wrx.5
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 06:04:02 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 w8-20020adf8bc8000000b002bdc39849d1sm30752416wra.44.2023.01.23.06.04.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 06:04:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca4ceee5-9b26-11ed-91b6-6bf2151ebd3b
X-Received: by 2002:a5d:5150:0:b0:242:3353:26ed with SMTP id u16-20020a5d5150000000b00242335326edmr21663152wrt.62.1674482641488;
        Mon, 23 Jan 2023 06:04:01 -0800 (PST)
Message-ID: <941146ccaf2d4b38ffd05d4d6163fadf46ebb829.camel@gmail.com>
Subject: Re: [PATCH v1 03/14] xen/riscv: add <asm/riscv_encoding.h> header
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>,
 xen-devel@lists.xenproject.org
Date: Mon, 23 Jan 2023 16:04:00 +0200
In-Reply-To: <469c9ef3-767a-358c-5e70-a1e0d9b1a4ca@suse.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <fe153cbeffd4ba4e158271ccd2449628f4973481.1674226563.git.oleksii.kurochko@gmail.com>
	 <469c9ef3-767a-358c-5e70-a1e0d9b1a4ca@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-23 at 14:52 +0100, Jan Beulich wrote:
> On 20.01.2023 15:59, Oleksii Kurochko wrote:
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> I was about to commit this, but ...
>
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/riscv_encoding.h
> > @@ -0,0 +1,945 @@
> > +/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
> > +/*
> > + * Copyright (c) 2019 Western Digital Corporation or its
> > affiliates.
> > + *
> > + * Authors:
> > + *   Anup Patel <anup.patel@wdc.com>
>
> ... this raises a patch authorship question: Are you missing her/his
> S-o-b: and/or From:? 
>
It is not clear who should be in S-o-b and/or From, so let me explain
the situation:

Anup Patel <anup.patel@wdc.com> is a person who introduced
riscv_encoding.h in OpenSBI.

Who introduced the header to Xen isn't clear, as I see 3 people
who did it:
- Bobby Eshleman <bobbyeshleman@gmail.com>
- Alistair Francis <alistair.francis@wdc.com>
- One more person whose last name, unfortunately, I can't find.
And in all cases I saw that the author is different.

> > + * The source has been largely adapted from OpenSBI:
> > + * include/sbi/riscv_encodnig.h
>
> Nit: Typo.
>
> > + * 
>
> Nit: trailing blank.
>
> There also look to be hard tabs in the file. This is fine if the file
> is being imported (almost) verbatim from elsewhere, but then the
> origin wants stating in an Origin: tag (see
> docs/process/sending-patches.pandoc).
>
> > [...]
> > +#define IMM_I(insn)                    ((s32)(insn) >> 20)
> > +#define IMM_S(insn)                    (((s32)(insn) >> 25 << 5) | \
> > +                                        (s32)(((insn) >> 7) & 0x1f))
>
> Please can you avoid introducing new instances of s<N> or u<N>? See
> ./CODING_STYLE.
>
Thanks. I will update the header.
> Jan



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:07:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482891.748663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxTV-0002XU-M1; Mon, 23 Jan 2023 14:07:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482891.748663; Mon, 23 Jan 2023 14:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxTV-0002XN-Hl; Mon, 23 Jan 2023 14:07:05 +0000
Received: by outflank-mailman (input) for mailman id 482891;
 Mon, 23 Jan 2023 14:07:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJxTT-0002XH-LX
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:07:03 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2053.outbound.protection.outlook.com [40.107.15.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34d39dc3-9b27-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:07:01 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8849.eurprd04.prod.outlook.com (2603:10a6:20b:42c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:06:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:06:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34d39dc3-9b27-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <333ab24c-3f46-235b-c88a-ebc6ac25f504@suse.com>
Date: Mon, 23 Jan 2023 15:06:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 03/14] xen/riscv: add <asm/riscv_encoding.h> header
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <fe153cbeffd4ba4e158271ccd2449628f4973481.1674226563.git.oleksii.kurochko@gmail.com>
 <469c9ef3-767a-358c-5e70-a1e0d9b1a4ca@suse.com>
 <941146ccaf2d4b38ffd05d4d6163fadf46ebb829.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <941146ccaf2d4b38ffd05d4d6163fadf46ebb829.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0050.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8849:EE_
X-MS-Office365-Filtering-Correlation-Id: 28ab8d53-9479-4603-7c3e-08dafd4b1818
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 28ab8d53-9479-4603-7c3e-08dafd4b1818
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:06:59.6697
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8849

On 23.01.2023 15:04, Oleksii wrote:
> On Mon, 2023-01-23 at 14:52 +0100, Jan Beulich wrote:
>> On 20.01.2023 15:59, Oleksii Kurochko wrote:
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>
>> I was about to commit this, but ...
>>
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/riscv_encoding.h
>>> @@ -0,0 +1,945 @@
>>> +/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
>>> +/*
>>> + * Copyright (c) 2019 Western Digital Corporation or its
>>> affiliates.
>>> + *
>>> + * Authors:
>>> + *   Anup Patel <anup.patel@wdc.com>
>>
>> ... this raises a patch authorship question: Are you missing her/his
>> S-o-b: and/or From:? 
>>
> It is not clear who should be in S-o-b and/or From, so let me explain
> the situation:
> 
> Anup Patel <anup.patel@wdc.com> is a person who introduced
> riscv_encoding.h in OpenSBI.
> 
> Who introduced the header to Xen isn't clear, as I see 3 people
> who did it:
> - Bobby Eshleman <bobbyeshleman@gmail.com>
> - Alistair Francis <alistair.francis@wdc.com>
> - One more person whose last name, unfortunately, I can't find.
> And in all cases I saw that the author is different.

Then maybe simply move the "Author:" part into ...

>>> + * The source has been largely adapted from OpenSBI:
>>> + * include/sbi/riscv_encodnig.h

... this sentence, e.g. by appending "originally authored by ..."?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:24:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:24:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482896.748673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxjc-0004v2-31; Mon, 23 Jan 2023 14:23:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482896.748673; Mon, 23 Jan 2023 14:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxjb-0004uv-Uv; Mon, 23 Jan 2023 14:23:43 +0000
Received: by outflank-mailman (input) for mailman id 482896;
 Mon, 23 Jan 2023 14:23:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rv8W=5U=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pJxja-0004up-HK
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:23:42 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 87c9383d-9b29-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:23:40 +0100 (CET)
Received: by mail-wm1-x32c.google.com with SMTP id
 l41-20020a05600c1d2900b003daf986faaeso8686129wms.3
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 06:23:39 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 n42-20020a05600c3baa00b003d96efd09b7sm12362975wms.19.2023.01.23.06.23.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 06:23:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87c9383d-9b29-11ed-b8d1-410ff93cb8f0
X-Received: by 2002:a05:600c:540b:b0:3da:282b:e774 with SMTP id he11-20020a05600c540b00b003da282be774mr32190327wmb.38.1674483818824;
        Mon, 23 Jan 2023 06:23:38 -0800 (PST)
Message-ID: <b6ef89747db4f8ce48dd66e7db8565a6d25f96b2.camel@gmail.com>
Subject: Re: [PATCH v1 04/14] xen/riscv: add <asm/csr.h> header
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>,
 xen-devel@lists.xenproject.org
Date: Mon, 23 Jan 2023 16:23:37 +0200
In-Reply-To: <08ac78fd-85d4-2a43-1922-3128d5fd8d21@suse.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <afc53b9bee58b5d386f105ee8f23a411d5a15bed.1674226563.git.oleksii.kurochko@gmail.com>
	 <08ac78fd-85d4-2a43-1922-3128d5fd8d21@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-23 at 14:57 +0100, Jan Beulich wrote:
> On 20.01.2023 15:59, Oleksii Kurochko wrote:
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/csr.h
> > @@ -0,0 +1,82 @@
> > +/*
> > + * Take from Linux.
>
> This again means you want an Origin: tag. Whether the comment itself
> is useful depends on how much customization you expect there to be
> down the road. But wait - the header here is quite dissimilar from
> Linux'es, so the description wants to go into further detail. That
> would then want to include why 5 of the 7 functions are actually
> commented out at this point.
>
I forgot to remove them. They were commented out because they aren't
used yet, but there is probably a case for adding them from the start.

I am curious whether "Take from Linux" is needed at all.
Should what was removed from the original header [1] be described?

[1]
https://elixir.bootlin.com/linux/latest/source/arch/riscv/include/asm/csr.h
> Jan



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:25:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:25:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482901.748683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxlU-0005TQ-EO; Mon, 23 Jan 2023 14:25:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482901.748683; Mon, 23 Jan 2023 14:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxlU-0005TJ-Bi; Mon, 23 Jan 2023 14:25:40 +0000
Received: by outflank-mailman (input) for mailman id 482901;
 Mon, 23 Jan 2023 14:25:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJxlS-0005T0-TY
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:25:38 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2050.outbound.protection.outlook.com [40.107.6.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce5e81bb-9b29-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 15:25:38 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8364.eurprd04.prod.outlook.com (2603:10a6:10:24c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:25:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:25:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce5e81bb-9b29-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Yj08t50dFk16FIhQb5ApbXidiriX33Nw9JDs8WeDa4NaNvHxZkj9wqGqI24F6S9/emivw89tbIyZYkmSjLng61t24K1MeweOlPHy8+Uu9qG1gAKxoGhhu/yqmYvqyLIPg8UpqNeeRE0rS1rWDmI3WjqaB1lFY0ILkUKHkqaMACuMQ+6vmiXj4Iv51h3ucaAmLEw5ywZT4Ah1Fs3UI64HUOwGFR5isAPdjC5j5vWds7n+F8zag7cA42LF166n8cH+eNzeMnBFenSlg/cblN9XIMaFDzjH58/wcjDZgtXlSozsXgD+W7/U6LQL01hC3FRrbpA8LwJhDNDtuJmY+xBWjg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zR7td2ZufsM6/aLyC5TjPKvdiWy47PbslhSMJYf2mTQ=;
 b=A3F1+/M9GN2OrRjOuaP3ZvtMGAoiebujecWPr7jdFX/bdvJFhVq6Yl2tWASgZFzsO830m8bHpcZpSGnRYqlDc9MBwo3H2TNbIoKVePhQQOC3MP+m4oM6LmBqDfU/i78cX6HI47hPDF8mvWXwT2pbMCdL2/4TBNMEfvv8/W5lxPbo68y+SFSRHsvETuA9Bc+PItB6HSVhx5ASkVvYh7QOGhB1rJry4MEJzE8m7cdU/1keRF6laLgI+rRqDmxYM4zS39f5H4f3oyfB/MODLfz/b0eIdcM4gG2Yf7zPiiWkky/vgKc8ZHg/zs/AgG5m7oEsqjS3mo+QBNcDxoRp7wd8gA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zR7td2ZufsM6/aLyC5TjPKvdiWy47PbslhSMJYf2mTQ=;
 b=IcMC9N1Rw16aTzj4z7lnngYjXx761nYL4IQjms+laeN47smWMpvBwpk88vxtT3q1h8CbdrsRAY9GEo3oxOwb4dSJBXZi+qAumEe7ZdmOhR8xnD3KO52qNjsDFInv88721hmlZE1a+4B9a/sBunP57BigLC4p0nww9eMFFRb+ppwSLBfkxfjJ3jngGq7BSLtKl+Mnm+yqWQULvR4VIHGDJIh+Fj+f6jQapEGID/eIjCkFoV+v4C20p0+SIICH2d9Mx0ZMmzg8MEAWtP24OWbobOihbZVATh/0eOa9eGf7uHOiueClpv61t0BzJo+D+pEPnGGuRcPmrCpT6ZFADinTgA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0e682cd4-3cc0-461d-ee53-13a894797f17@suse.com>
Date: Mon, 23 Jan 2023 15:25:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/3] x86/shadow: sh_page_fault() adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0136.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8364:EE_
X-MS-Office365-Filtering-Correlation-Id: 6d009104-a649-4fb0-4c9e-08dafd4db166
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	soWIxH7k6B4oMa/q12DsgXMY/iqWg0wV3j4Sn3RupAXW0kZgzBgJYmqmlJM5lFr8d9L2ZrYgxE3UQUz1NzHDIzt15xEJozE884qwu67xN9UxrNhQxfMZpkHVWaiccao0O0eOpGAQK5c3N7Rw8j8yEjLtVtkFQHqFEuaOdZVANMtir5vahLm7P6/sfpfRT46PvAui18bBgWqXVmY7mMD5ykPn80ndtmpB6y4lh343Ly8GYYUp9dZz4Vj1at5kOXqJkGvXNQAO44mEN+hH0mnJX2Rzsvz6EMz+BiCu5HE8r/0JIObHfGIcqnMJ2y9ZFFsF5aR+gwZpiiW/NpIj4iGP5zEIryW4ZaD/00qiNUCjY36nhx7RKVV9AZQ9MRM+T+ulIApsRazePseXfG8rn2tfqestd1uC/DG/8AaJqFZe9DfRjxqGr0lKjxG7Ufq29l7qq4Syk1zO+ViPjs1q/clWz6wdTJ1D0iMiHVfdWKkwEnaKZoHWMSfpwojoblTa4F+2zl+oaEi7EjbGgLoxkaYQE8H6m6vEGbk8oLOOKjaH1NbItmfbOfAgGCcIoXH9q7UU91AMSYZOeKmMKf96/fYfbrX12M89kStHJgGCCCWEgc1KmmXnzH4FKZ9YJibhNRPSmghKaq0CdkkYBxxH2eBs6CeESZ72qJGQ9HrMmM+x1YWI+UEDfJC48TyacatxVfgCBSSgkm5nC0oqZ1tn+SZiq1m0w1a0tM2lEuwAXql7580=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(39860400002)(346002)(366004)(376002)(136003)(396003)(451199015)(36756003)(86362001)(2906002)(38100700002)(8936002)(4326008)(41300700001)(5660300002)(558084003)(31696002)(478600001)(6486002)(31686004)(186003)(6916009)(8676002)(6506007)(26005)(6512007)(66946007)(54906003)(66556008)(66476007)(2616005)(316002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?am5HSHRwUEhYK0M0T2xaM0wwbmVGS3QrY0Z5SXVuTWM5alB4RWtoTlNPaEpp?=
 =?utf-8?B?N0dYb2daVnRiSlk5eVFoV2N6N01OVXU0cWlRRGpHT29kR2RuRnNHNWowcWpQ?=
 =?utf-8?B?ZU5RR1Z5WlIwRS9WWEMwS2tpYVNHMUpwckFlUVkzNXplWmF6Q1VzSFQ0Zk9n?=
 =?utf-8?B?NGtZcG13ZHZNaG05a3NieThWS1dtK2hJaTd0a2ppaWprRWg1azYvd1lSS1h1?=
 =?utf-8?B?VzRJNWRLQTYwSGMxZTc3QXcyN3RTdzlVdnhra1hWVWRrc0VDSzZBcE1jRFhJ?=
 =?utf-8?B?cG1HeVk2RkVyVDFla29ZSFdOR2FyQmd4Nlo4SDlNdG4rdzI0Vlc0eHZpQXlW?=
 =?utf-8?B?c2czaFkzSDNQRnRyaFJwTjZvdk9sbGRXYW5IU2VFY0c4NlpIRDIvOENNVDQ0?=
 =?utf-8?B?czF1UnNpV3RodHVLK2ZXaGNENVcwQTAxK0F4SGxTQ095NkxtRlMzMGlrZ2Vz?=
 =?utf-8?B?SVlRaWhTd0Q3NmhMYkdoeGtpQk14RHk1Ujk0SzMxQzJFZGo2Z2NyZU5YVTZo?=
 =?utf-8?B?bHB3SWFpd3c5N3NudEFqS2dJRlZiR25pang5QTRIc0ZaTTNwakxpOUwwVGpX?=
 =?utf-8?B?SUdxTUp0Ri9Id2xzUy9yWEQ0cklKWWtqZkFmN1VzVzVETVN1bFlxOEc4QmhJ?=
 =?utf-8?B?K1BYRUJsc0R2YkN1NlJxVWRYT0FUY3VTWXdiaTFEQ3h0dU9PdlZpYS94cFAz?=
 =?utf-8?B?ZVpqY2RqUUI3aUNKZ3hMZVJaY2dIQ3ZaNkwramt2eHRSM2kyUnJ0U1gvYXRn?=
 =?utf-8?B?VnUzYVVjSzJSS1hXak1qN00vQks5aVovR1JRaUF6MmdiZWVrSXgxS2Y5bTVU?=
 =?utf-8?B?L2xBenN1ZEpMc3NVMmRxTzNPUkhGUEdJYkl0VFpySkNCR1lTRXBrajFVa2tN?=
 =?utf-8?B?dmx1WlBUeGJxUitkVGFYNFNyQjNOdXEyRTY2SEdSUG9hR0tob1lPUFVnQzFu?=
 =?utf-8?B?WkdXSUoyOWRkVXRTOFpzVjFIM3kwVEJibDVYOCtma3NuVlFuSVZMcUxvUUNn?=
 =?utf-8?B?TlNzbEI4UmIwZUhVQkZyb3h4VmZzSkJJT21DM0o2ZHZNWndKRzhyQm81TVg5?=
 =?utf-8?B?YmloTldDNzJCdDk3djhVTUhXd2p5ZGhlZVBBQ2wyVkdMM1NIM0pDVVdlTEZ4?=
 =?utf-8?B?MDQ1VG05RU5uRXhSYUFHcUtnRUFnbE9hcWppVnFZa2dwVmtMbnBJMHhzSTVU?=
 =?utf-8?B?ZXZScDFBRWNlYm9sVkc0dGRkajg2WkdmUGpSd3loYkgxZjJENHE5aGVsTmYz?=
 =?utf-8?B?TEpCbERYbW9pcVBWaDROSDJtNDgybE9EUFVpK0E1dk9GQTFyUGhibHFlTTRX?=
 =?utf-8?B?SlM3S3lyVXlWOEo4bjhJMnRTWDRSVE5nYnRIb3BuWmlKVGhvTVZUcm1DL2ln?=
 =?utf-8?B?L1RjMjJvMEJyZ3NjL0dHNU5IZ2hycHJ5eHlSV2J5U3ZORG9aSzdEMWxTNTNu?=
 =?utf-8?B?L2dZaGN3TjNZaXlvZGN4UzRFZzJGa0pJaDh0Z3BiL2ZBTHd3VW54aVVPQVM4?=
 =?utf-8?B?TnlLQ2RpWC81MnlMYkw4NVd0L0VTTGhpWnhEZlZrYlVjdVdWaFdEWEgxV1Vv?=
 =?utf-8?B?aFMvRlJaSXNZSDIrb0VFNnN5VGU2SkZEUjFqTFVHaHdWZ0w0ekIvcWRSbWxX?=
 =?utf-8?B?cXRmUldEcmd6ZzMvZDBFZS83bFFHcXE3WlhmMTY5TmFiUHZVd2pmbUp2MjQ4?=
 =?utf-8?B?aytLd0Q4NVFYY3EvNmlTQmZSTlN1SWlWcmUxcG84aEpXQ1krUi81UFQ0Tmkx?=
 =?utf-8?B?enc2YTBOTjhhd2dsSi9lR3JTVzlXbTN0OTErNmRVcnp0THQ5Z2dqRWlVanQ1?=
 =?utf-8?B?OHRqWE5OY1hFWVJzVHhGOCtZaHpWV1JuUlMyYVRYK3BsRDNGaCtNcU5uVUVz?=
 =?utf-8?B?RDhRczV2NXNaRWdpWk5TTkJZWjR4UzRZakRzQWxEeTZYb3dCcW9NYUxoaUcr?=
 =?utf-8?B?OVU0eHBUSEtxb1MwRktIeDJYQUpvMVhKTHMrQXJjYjV0bm5adTEwSmhyaVNB?=
 =?utf-8?B?eDNVS1RpWXBmaHViRHZIRG5CRVBDcmhDTEl1UWxGeVBUWXVIOUZrMUxrNVpK?=
 =?utf-8?B?Y1J4OXROOWxacThvQW5Jdk9YY1VUMFVLMVNVeG9KU1k1ZmJwb2xqNHVQd1NB?=
 =?utf-8?Q?gHaiKWmSInI73GCc7vrnDlx6K?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6d009104-a649-4fb0-4c9e-08dafd4db166
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:25:35.8802
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kTeHSHTiib5RFzGKRx+XhSniL1J5pcrCvd7fd5NLkW61Jqz+Y03XLkZN/UU1CO1RB0j8DyKaGUYIOxI17V+QvA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8364

The original 2nd patch of v1 was split in two and extended by a 3rd
one (the 1st one here).

1: move dm-mmio handling code in sh_page_fault()
2: mark more of sh_page_fault() HVM-only
3: drop dead code from HVM-only sh_page_fault() pieces

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:27:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:27:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482907.748692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxmp-00065y-Sv; Mon, 23 Jan 2023 14:27:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482907.748692; Mon, 23 Jan 2023 14:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxmp-00065r-Pt; Mon, 23 Jan 2023 14:27:03 +0000
Received: by outflank-mailman (input) for mailman id 482907;
 Mon, 23 Jan 2023 14:27:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJxmo-00065j-3o
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:27:02 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2077.outbound.protection.outlook.com [40.107.7.77])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fffa1792-9b29-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 15:27:01 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8418.eurprd04.prod.outlook.com (2603:10a6:20b:3fa::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:26:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:26:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fffa1792-9b29-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MZuB4NEpDfHAm9UqxKgqAyAE40yjC8WRRHZ4ZCaOGkVRIuHE37I47FcloqtMQk0Rk+Qy8zRQeYYVmEJ7jZx1db5WClxnHvAjC7AYXGgF+uaywgdeUpejiSEh2+4+vfx2UcPOBKxZOpvLXlLg4Oz3YcQts6iOhCWhiXyKFGKCVY9OxoZPXgJvCgBNxe2ni6HITpoMsHZMn0CaJdK/oW1t2LuiFFbCP0+3f5c8OEtdmero88oiYh2k7Gtu3Ib1mixnULhqzH7ggr/v4fsMMP+oJOdSbMER/AKaYM5H9T8c/sP60KLwQD+hFchIJg4SUIkMlGA/IMmE+1fYTnVOVbOWXw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=B/gm2cl/X9TcLNxL3ej+ogFLZ1GGnpcQ/Jniu05nakk=;
 b=l+3DiY9qlcbAKAyy5IWRdpH/qlLBL7uY4fe0CKECDhXrp7taIzOgdGU6PTV/+qz64vUQyNE9bQQRqmvAtdN3usW/uqyO1FjhBkxSPRIO6hkC0YFRD3NGmppojvwamlFqw4cM2LFZ9HEJHeqZaS9BTeG5VnUy9anLqMOL7tjjZtzN4XQJ6wsBXeoj8+vsp/P4Xm4peVaX0sbOEZjpiloQ/sn+ugJOhOTA/aV3O5zMxTnBLMB5uMTEAL+LQam6i7QSCgh13zjrxxQhjvPOYbiISqfYPqBuwEBI63n0zTc3FAqsKVFF5++gw7buY0EDXJLQ9Z3gW/5lWqQhex6r0LCycQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B/gm2cl/X9TcLNxL3ej+ogFLZ1GGnpcQ/Jniu05nakk=;
 b=SFKeol+dT6CLMvfBkZgyAsVtlI89hqB1gJxAvzLRPoQ8DXY+oeG60zflT0mdFGbASnsR7Jl1yj1hbir+Sagj4EyrKOG/GIK6/VgFd0pjnhlyEg/G2AhhDTI/L7Bezsp0Gxp1Dfyotxtcx3ERpwI7+KqVx5gXNtGP+B6vPBN54EqZDESTXeMiZOUmRDRETlQoLRlK4xWGmHHyeOXtf2qJkwY0cfdpfVLwktANC5q3AlVgsXhn8wNn+dkbLwRatFb5o9BxYqpWD0MaZLpgbNddFb5K3NSLI12+h0veY+KSgJBhe1qtB2eGBORHSgXNwyzXSXqhBw1O73RyX9poyQNOBA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5d8a938e-cb4a-a989-1849-d702cd25d890@suse.com>
Date: Mon, 23 Jan 2023 15:26:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in
 sh_page_fault()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <0e682cd4-3cc0-461d-ee53-13a894797f17@suse.com>
In-Reply-To: <0e682cd4-3cc0-461d-ee53-13a894797f17@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0077.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8418:EE_
X-MS-Office365-Filtering-Correlation-Id: c001ad8d-46e1-401c-07ba-08dafd4de302
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7SlaED7EiEKyii893uyehhNUSkjG1v3J0TpJTBs39z9Lh1M/dgLDFEKgptIcj1xlG8NtzBOWKpKWsl0lEw78YBVqhX70DB2y+Fx8w4Ts/LWEOJHGam5qipqvyRodOPILHJAQhdALFDsaoKx3p20vAhu0cNDv+0Oce5ygw8ebASyG9tzV3sj2A4wOk/lAC2Y7ytUfqqaLJa/4JSATvcf28aCfvn8Y/fpyX2+GgmNp2K3t3NeJCZDWcunAUZhlV/Hq3SNKGXti6WLbqC8Q/Vf5WwG5Tdc0SIoS2pH+CfE8T2L6JeTWwFKFBnp1/paQwpkgkcLKIeNtk7/RgmHmjvAwQeIqlLgcuKBkCW/F0Z0ue8kWfD1PhSyXVul4MZiFzJINSnK0Ow34wKyfleyQlx8+jYS5roGTfZESI3MiOnzgvJ7m/Ibsl+aF2M9EOaPOE8LCPwTYXJbx73L9ljq5OBoVofU0EPCJLVpNfbpCYxcCBmk7ms//MAAg6iA07rWEnHdhyjj4/NDnw7VyFtyHo8AedABy7f07A3xShTvaskFXO0y+WlAiM9tW+1WW1ChllLBYQY96DzaFyi0KPyrCeUK22kfvxwBuEBir4P1ILsC/VFGlyLQ7VadoGoiF67PpbTYuydr+gM/osXkmxO6JkRjrLdaxfGnHut200UydeulTcfWmdk8MbMxoZ/Np36PnhGXlyNA5w/7keZ/Q5XLqHVnDE2nvTjUgnhj0WNNZUalYKzw=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(366004)(396003)(39850400004)(376002)(136003)(346002)(451199015)(38100700002)(83380400001)(31696002)(41300700001)(86362001)(2906002)(8936002)(5660300002)(4326008)(6916009)(8676002)(26005)(186003)(6512007)(6506007)(66476007)(316002)(54906003)(66946007)(2616005)(66556008)(478600001)(6486002)(31686004)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?V0ZWeG5tRWt0dnVNdCs4UjU0Z3JGaWlaOW1RMmIrRHo3eHc5YnVFcEdxUFM3?=
 =?utf-8?B?SDVmU2ZEcDVIK0xSS0tPZkxtcFpuVmNsZVp2VmhoRm1NdzcwSUkrTnUza1NY?=
 =?utf-8?B?Q1hjNlVLN2NXNXFqaEJ1TUc0VDQ5TVlMUnppS1NCWUMvNjQvRWdSWFZXRTgw?=
 =?utf-8?B?alpQb1V3dll2SDVMSnBwTCtNb2VYc21qaVFybEZ6VmhSWHRHMDU4ZzZ5cEpw?=
 =?utf-8?B?Y3dEZzRmWVVhWThnTDREOUtrMHFIbnNydGF2ZGhBUklCdlRsbVF0Qm1XTVNB?=
 =?utf-8?B?WDVhRGFDeEpYdGZuMkhYYlV4ditJU1R5ZXdpTXVUL2hnWU9IUmpjWjFoMXI1?=
 =?utf-8?B?WE43cEg3VDhYS2ZTVDgzM0ZVbVI5Rm4yOUw0QThpaXN3TlgzSHVoS0RmN0hQ?=
 =?utf-8?B?aloyUmJsenJmZ2IxOW1sc1FzSzdaa3o0dkh0OG4xZzBHck52YmxUMUI1Q2dE?=
 =?utf-8?B?bFJQdTFLU1F5VWNyRjZ2N1o5MExFTlRmT3NKbjRaendCZzZ2OWZtTWprbEFC?=
 =?utf-8?B?aGNTZUtOZkpnRWMzSS9rV0xQZEpjZkI4aU9XeldjUGZLdXVqUFFhWWRRcDVX?=
 =?utf-8?B?NlpvbGJWeDFqT1JnUmNTM0VZazBSWnVtdHpTbDBuMG1PUjVQenhvMnQvOUJR?=
 =?utf-8?B?U2FGeXp5Y0VGUnVBcTJBb0ZHdFdpYVNiaDdHTEZFbHFpMmgvTnpRbWtUM1U1?=
 =?utf-8?B?eGtrM2Y5RmwxaDhGbUlxUFFIeEhhdEpJMlJ1Y21jZDdMbHBWMTEvY0cwK1lY?=
 =?utf-8?B?RWtBT0ZQUzcvbms0cU9XTGlWbjZjVHBUTFZ4Wm1qaEtaRTFuMkNEREt6U1pm?=
 =?utf-8?B?cm5xdXREbUJ1dUw5UkJhUzkzd3JPYkFRUVNOWW1VanRHNkpEVXEvQzdaT3JG?=
 =?utf-8?B?N0dXeG96c095T2NsdXkyN3N3S2VqTEltSnRhT25Mc3FuVGo1VGNIVzhjVVVn?=
 =?utf-8?B?UGVxV3ROVDBMTU05ZHpuY1ZkNWExWEhiSWJ3Nnd4OGpad2ZBUHd4aGF5STR6?=
 =?utf-8?B?UmFuNU5tenVHT1FPdytWYndQT1NIN1c5WWJhRW52bkNidnBjTXMzNnBpZUpo?=
 =?utf-8?B?MFpma3ZQdTZ6K3JvcTdnSVBYUjMxelhDKzFMdnN5T1hQSzJudk5nVlVaaWdt?=
 =?utf-8?B?NFh5cjIrc1VPdmt3U1dpVm45cGZDUmNzdit0TGZneHBLQWFjRnVaVFZvYXNK?=
 =?utf-8?B?b21MaDB5SFJWY0haYVFTVlZqN3B5OW1mZ0RhbWgxZzRSV0Vpdm4ybjRHd256?=
 =?utf-8?B?eVlMd04rWnFGcEVyTHVoWTVUcklMZlZISEg1QlZuV01ubmNSaTJEb2QwSEVa?=
 =?utf-8?B?RXl0ZjJLY01JT0dkUjZjcGdKamdSQVdsODY2TndKK0RocmZCN2U4Q1hKc1Uy?=
 =?utf-8?B?bzR2clNzV3lObGxkM01vRzN1c0RRbjRKN0NjVFN3TkxtV2dPSFpPVkxDa2pK?=
 =?utf-8?B?QnhMSkRkRmZMSEhxTS9DMVNXY2NJTXNTWHRoREd0K0VBZkh3V3JnZytZcVlR?=
 =?utf-8?B?Z2VibzNCdXo5WStxQzNRMWU5QnRWckdabXlqbGdISlhaZXo1cEtMQVZYNGJE?=
 =?utf-8?B?MW5iZ25XNnR4REEzWDBDY3lVUG1YUEVRS1BBemVPbGkvTkY1bnpzV1Rkb1JY?=
 =?utf-8?B?eDdrZWdBaWJsamZicTlZbldHTEoxSEtrckhsdWE4a3czN05MMTEwUlQ2ZmZR?=
 =?utf-8?B?d0lRNjUxLzBRb1M2THNtYlAvc0dQTk5DaVdjTjVBbG41NjdYbTg1UzluTDY2?=
 =?utf-8?B?dE1LS2RRTnFYSVQxTGEzV0NqQ3g3bUxpQkxQUzNac0ZqdnN3QWkvd2N2NWFa?=
 =?utf-8?B?Vm9ybk4rUGJtMXR2cWVJMnFmRXduMTNOT0FCa1VDaGtPczliYllrQjFPbldW?=
 =?utf-8?B?TmZ2NmE5V2FIQUhXMVU1R1B6YmZhTE9Cc1N1SHlLVk4zTkJtUUtoM1krODBa?=
 =?utf-8?B?Qkd5dHRNTDJva1ZmYVkxRllrSSt0UGFzeCtYTDN4U1hiRVI2MUJ3SC9PbEVm?=
 =?utf-8?B?VHlIRmVWcjc3YlpHblBCMzVYNjhHTkpwS085eDZIWFAvL0FFZkxTQlp5eXF2?=
 =?utf-8?B?ZHVtR29IbkRETnFZclJ5SWY1T3hPZTR4NEFzaFIrT2VpY3NvWFdmRkdrQUJE?=
 =?utf-8?Q?LMi2RlNvlarGR4B/pn7gjyFD7?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c001ad8d-46e1-401c-07ba-08dafd4de302
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:26:59.1091
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UyXXUYNvArHdFMH/GuKTS42saqi+hZS1MBn8DF+VpdCFHPU+0iPmE4WmmxnID/RjGdz9FifQvMcEJYEPOvkLQA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8418

Do away with the partly mis-named "mmio" label there, which really is
only about emulated MMIO. Move the code to the place where the sole
"goto" was. Re-order steps slightly: Assertion first, perfc increment
outside of the locked region, and "gpa" calculation closer to the first
use of the variable. Also make the HVM conditional cover the entire
if(), as p2m_mmio_dm isn't applicable to PV; specifically get_gfn()
won't ever return this type for PV domains.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2588,13 +2588,33 @@ static int cf_check sh_page_fault(
         goto emulate;
     }
 
+#ifdef CONFIG_HVM
+
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm )
     {
+        ASSERT(is_hvm_vcpu(v));
+        if ( !guest_mode(regs) )
+            goto not_a_shadow_fault;
+
+        sh_audit_gw(v, &gw);
         gpa = guest_walk_to_gpa(&gw);
-        goto mmio;
+        SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
+        shadow_audit_tables(v);
+        sh_reset_early_unshadow(v);
+
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+
+        perfc_incr(shadow_fault_mmio);
+        trace_shadow_gen(TRC_SHADOW_MMIO, va);
+
+        return handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
+               ? EXCRET_fault_fixed : 0;
     }
 
+#endif /* CONFIG_HVM */
+
     /* Ignore attempts to write to read-only memory. */
     if ( p2m_is_readonly(p2mt) && (ft == ft_demand_write) )
         goto emulate_readonly; /* skip over the instruction */
@@ -2867,25 +2887,6 @@ static int cf_check sh_page_fault(
     return EXCRET_fault_fixed;
 #endif /* CONFIG_HVM */
 
- mmio:
-    if ( !guest_mode(regs) )
-        goto not_a_shadow_fault;
-#ifdef CONFIG_HVM
-    ASSERT(is_hvm_vcpu(v));
-    perfc_incr(shadow_fault_mmio);
-    sh_audit_gw(v, &gw);
-    SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
-    shadow_audit_tables(v);
-    sh_reset_early_unshadow(v);
-    paging_unlock(d);
-    put_gfn(d, gfn_x(gfn));
-    trace_shadow_gen(TRC_SHADOW_MMIO, va);
-    return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
-            ? EXCRET_fault_fixed : 0);
-#else
-    BUG();
-#endif
-
  not_a_shadow_fault:
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("not a shadow fault\n");



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:27:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482911.748703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxnK-0006bx-5a; Mon, 23 Jan 2023 14:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482911.748703; Mon, 23 Jan 2023 14:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxnK-0006bq-2g; Mon, 23 Jan 2023 14:27:34 +0000
Received: by outflank-mailman (input) for mailman id 482911;
 Mon, 23 Jan 2023 14:27:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJxnI-00065j-NK
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:27:32 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2070.outbound.protection.outlook.com [40.107.7.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 125cae2e-9b2a-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 15:27:32 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8418.eurprd04.prod.outlook.com (2603:10a6:20b:3fa::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:27:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:27:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 125cae2e-9b2a-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CFhPwne6xGjpzyuaAr+F18YfrsCiFaZy2rZf6chyGVX6MeppfChgis2uymYtxsQ3DB0+tG1qSw2LEypfHEea3dIQqAgEYUwQEzsmJ479GSpyMkcXOasmyMN2C0L0IqChaEqtdAD/Wu6nMoTp111JZyovSb4PpZ8SmhHKcWZRqBKiujGCbwWhpvThxSlrAxqyXI9nLhOLRYDi/OvSedwweQPlFTOqHagUcLTYfvidPWVye2NIrtXKeCXZ1c9P9cUb3qwbpBeEYUBtM/oR8SLD7lvaO+vx586buEe6msb8NEB/tzd+mq8sDOIlEEYmy4Bd0FJ2dKrJqTnaZiJmycp09w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RrwYPI3hJNohdFkp2eOgTzxDlzUCzI3WQX/iB1o8qcY=;
 b=NszGQ2wLV+rey8pCha3anzNR9cJAC9QScKS4nFNqZBq7mBSTHtILzotMMkUGdE8yFX2VUYKOso9PYlsgCTgOCeWi8kwmDt3tjtNCA8iBNg2Drj3E5hI7C9zUDTWsglpOezHsn9f4QD/msyXMYet+fX7dNfx/SKmPU8PHL7jjuXbjB/RzMQXDaef9G4gpP08nqwKg66HytfUPUKDRxIAeoRyCbqkNs1F98uE85gqzO9Dw/rvMeaUlQDi5pd74NrNKzV4Egjg98qtjlgz07ucUB4YAxo4yEhaSzsnwkXsj9+a1ONXA/ut3P5+NgLt11S8h70NQFqEQtzoY7EtrRk4H6g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RrwYPI3hJNohdFkp2eOgTzxDlzUCzI3WQX/iB1o8qcY=;
 b=vhTqeO2RRZWsMZ315uCIHwy5xldVGYfjqnns1CtDlvcv0eXTzq4cM6bnfLBblX1YRtSn535xJjJbslDjtBwTEstHn42pgtkfpfIzZsMDUKcrncISuBF4oICskCfAF6XWC5/SOQSAiDr41A+xG2o1mBwnl4NK/Xk/6VLW9UaXPAkWiH2x6FMv8GzxTPAME8gxbjbwdMOAc26jKBlykkrNWiwaFtmH5ll8RU8wVE9E389p5mjPg+kGquH47q/eqiCpKShfi4kswvLHW26ZasLNJAvqq63pU42zAn5LMMchEM0kTYsVA1YSkjn6luZcuc72IL2kMM7WW9k4isRqVD+T5g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b0b6309c-680e-a764-8f62-3ae5d0751917@suse.com>
Date: Mon, 23 Jan 2023 15:27:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 2/3] x86/shadow: mark more of sh_page_fault() HVM-only
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <0e682cd4-3cc0-461d-ee53-13a894797f17@suse.com>
In-Reply-To: <0e682cd4-3cc0-461d-ee53-13a894797f17@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0085.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8418:EE_
X-MS-Office365-Filtering-Correlation-Id: dfadfc46-692e-4258-9a9d-08dafd4df608
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:

The types p2m_is_readonly() checks for aren't applicable to PV;
specifically, get_gfn() won't ever return any such type for PV domains.
Extend the HVM-conditional block of code, also past the subsequent
HVM-only if(). This way the "emulate_readonly" label also becomes
unreachable when !HVM, so move the #ifdef there upwards as well.
Noticing the earlier shadow_mode_refcounts() check, move it up even
further, right after that check. With that, the "done" label also needs
marking as potentially unused.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Parts split off to a subsequent patch.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2613,8 +2613,6 @@ static int cf_check sh_page_fault(
                ? EXCRET_fault_fixed : 0;
     }
 
-#endif /* CONFIG_HVM */
-
     /* Ignore attempts to write to read-only memory. */
     if ( p2m_is_readonly(p2mt) && (ft == ft_demand_write) )
         goto emulate_readonly; /* skip over the instruction */
@@ -2633,12 +2631,14 @@ static int cf_check sh_page_fault(
         goto emulate;
     }
 
+#endif /* CONFIG_HVM */
+
     perfc_incr(shadow_fault_fixed);
     d->arch.paging.log_dirty.fault_count++;
     sh_reset_early_unshadow(v);
 
     trace_shadow_fixup(gw.l1e, va);
- done:
+ done: __maybe_unused;
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("fixed\n");
     shadow_audit_tables(v);
@@ -2650,6 +2650,7 @@ static int cf_check sh_page_fault(
     if ( !shadow_mode_refcounts(d) || !guest_mode(regs) )
         goto not_a_shadow_fault;
 
+#ifdef CONFIG_HVM
     /*
      * We do not emulate user writes. Instead we use them as a hint that the
      * page is no longer a page table. This behaviour differs from native, but
@@ -2677,7 +2678,6 @@ static int cf_check sh_page_fault(
         goto not_a_shadow_fault;
     }
 
-#ifdef CONFIG_HVM
     /* Unshadow if we are writing to a toplevel pagetable that is
      * flagged as a dying process, and that is not currently used. */
     if ( sh_mfn_is_a_page_table(gmfn) && is_hvm_domain(d) &&



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:27:58 2023
Message-ID: <d8b5e168-2977-bd16-6345-7aecd778419b@suse.com>
Date: Mon, 23 Jan 2023 15:27:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 3/3] x86/shadow: drop dead code from HVM-only
 sh_page_fault() pieces
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <0e682cd4-3cc0-461d-ee53-13a894797f17@suse.com>
In-Reply-To: <0e682cd4-3cc0-461d-ee53-13a894797f17@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

The shadow_mode_refcounts() check right after the "emulate" label
renders redundant two subsequent is_hvm_domain() checks (the latter of
which was already redundant with the former).

Also, guest_mode() checks are pointless when we already know we're
dealing with an HVM domain.

Finally, style-adjust a comment which would otherwise be fully visible
as patch context anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New, split off from earlier patch.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2594,8 +2594,6 @@ static int cf_check sh_page_fault(
     if ( p2mt == p2m_mmio_dm )
     {
         ASSERT(is_hvm_vcpu(v));
-        if ( !guest_mode(regs) )
-            goto not_a_shadow_fault;
 
         sh_audit_gw(v, &gw);
         gpa = guest_walk_to_gpa(&gw);
@@ -2647,7 +2645,7 @@ static int cf_check sh_page_fault(
     return EXCRET_fault_fixed;
 
  emulate:
-    if ( !shadow_mode_refcounts(d) || !guest_mode(regs) )
+    if ( !shadow_mode_refcounts(d) )
         goto not_a_shadow_fault;
 
 #ifdef CONFIG_HVM
@@ -2672,16 +2670,11 @@ static int cf_check sh_page_fault(
      * caught by user-mode page-table check above.
      */
  emulate_readonly:
-    if ( !is_hvm_domain(d) )
-    {
-        ASSERT_UNREACHABLE();
-        goto not_a_shadow_fault;
-    }
-
-    /* Unshadow if we are writing to a toplevel pagetable that is
-     * flagged as a dying process, and that is not currently used. */
-    if ( sh_mfn_is_a_page_table(gmfn) && is_hvm_domain(d) &&
-         mfn_to_page(gmfn)->pagetable_dying )
+    /*
+     * Unshadow if we are writing to a toplevel pagetable that is
+     * flagged as a dying process, and that is not currently used.
+     */
+    if ( sh_mfn_is_a_page_table(gmfn) && mfn_to_page(gmfn)->pagetable_dying )
     {
         int used = 0;
         struct vcpu *tmp;



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:31:00 2023
Message-ID: <01297097-d8ce-a22c-a616-f98691d3ad4f@gmail.com>
Date: Mon, 23 Jan 2023 16:30:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
Subject: Re: [PATCH] automation: Modify static-mem check in
 qemu-smoke-dom0less-arm64.sh
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org, ayankuma@amd.com
References: <20230123131023.9408-1-michal.orzel@amd.com>
Content-Language: en-US
From: Xenia Ragiadakou <burzalodowa@gmail.com>
In-Reply-To: <20230123131023.9408-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 1/23/23 15:10, Michal Orzel wrote:
> At the moment, the static-mem check relies on the way Xen exposes the
> memory banks in device tree. As this might change, the check should be
> modified to be generic and not to rely on device tree. In this case,
> let's use /proc/iomem which exposes the memory ranges in %08x format
> as follows:
> <start_addr>-<end_addr> : <description>
> 
> This way, we can grep in /proc/iomem for an entry containing memory
> region defined by the static-mem configuration with "System RAM"
> description. If it exists, mark the test as passed. Also, take the
> opportunity to add a 0x prefix to the domu_{base,size} definition rather than
> adding it in front of each occurrence.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Also, you fixed the hard tab.

> ---
> Patch made as part of the discussion:
> https://lore.kernel.org/xen-devel/ba37ee02-c07c-2803-0867-149c779890b6@amd.com/
> 
> CC: Julien, Ayan
> ---
>   automation/scripts/qemu-smoke-dom0less-arm64.sh | 13 ++++++-------
>   1 file changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/automation/scripts/qemu-smoke-dom0less-arm64.sh b/automation/scripts/qemu-smoke-dom0less-arm64.sh
> index 2b59346fdcfd..182a4b6c18fc 100755
> --- a/automation/scripts/qemu-smoke-dom0less-arm64.sh
> +++ b/automation/scripts/qemu-smoke-dom0less-arm64.sh
> @@ -16,14 +16,13 @@ fi
>   
>   if [[ "${test_variant}" == "static-mem" ]]; then
>       # Memory range that is statically allocated to DOM1
> -    domu_base="50000000"
> -    domu_size="10000000"
> +    domu_base="0x50000000"
> +    domu_size="0x10000000"
>       passed="${test_variant} test passed"
>       domU_check="
> -current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@${domu_base}/reg 2>/dev/null)
> -expected=$(printf \"%016x%016x\" 0x${domu_base} 0x${domu_size})
> -if [[ \"\${expected}\" == \"\${current}\" ]]; then
> -	echo \"${passed}\"
> +mem_range=$(printf \"%08x-%08x\" ${domu_base} $(( ${domu_base} + ${domu_size} - 1 )))
> +if grep -q -x \"\${mem_range} : System RAM\" /proc/iomem; then
> +    echo \"${passed}\"
>   fi
>   "
>   fi
> @@ -126,7 +125,7 @@ UBOOT_SOURCE="boot.source"
>   UBOOT_SCRIPT="boot.scr"' > binaries/config
>   
>   if [[ "${test_variant}" == "static-mem" ]]; then
> -    echo -e "\nDOMU_STATIC_MEM[0]=\"0x${domu_base} 0x${domu_size}\"" >> binaries/config
> +    echo -e "\nDOMU_STATIC_MEM[0]=\"${domu_base} ${domu_size}\"" >> binaries/config
>   fi
>   
>   if [[ "${test_variant}" == "boot-cpupools" ]]; then

-- 
Xenia
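
[Editorial note: outside the CI harness, the reworked check being reviewed
reduces to roughly the fragment below. The literal addresses are the ones from
the patch; on a host without that exact static region (or without root, since
/proc/iomem hides addresses from unprivileged users) the grep simply won't
match.]

```shell
#!/bin/sh
# Memory range statically allocated to DOM1, as in the patch.
domu_base="0x50000000"
domu_size="0x10000000"

# /proc/iomem lists ranges as "<start>-<end> : <description>" in %08x format.
mem_range=$(printf "%08x-%08x" ${domu_base} $(( domu_base + domu_size - 1 )))
echo "${mem_range}"   # 50000000-5fffffff

if grep -q -x "${mem_range} : System RAM" /proc/iomem 2>/dev/null; then
    echo "static-mem test passed"
fi
```

The grep -x keeps the match anchored to a whole line, so a larger enclosing
"System RAM" range would not pass the check by accident.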


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:31:54 2023
Message-ID: <7e928fe9-1aec-f93f-b82b-4ecce9f49265@suse.com>
Date: Mon, 23 Jan 2023 15:31:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v1 04/14] xen/riscv: add <asm/csr.h> header
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <afc53b9bee58b5d386f105ee8f23a411d5a15bed.1674226563.git.oleksii.kurochko@gmail.com>
 <08ac78fd-85d4-2a43-1922-3128d5fd8d21@suse.com>
 <b6ef89747db4f8ce48dd66e7db8565a6d25f96b2.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b6ef89747db4f8ce48dd66e7db8565a6d25f96b2.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0013.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8100:EE_
X-MS-Office365-Filtering-Correlation-Id: f4c42148-14d6-4e57-c916-08dafd4e8f2b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sZttMoe1U4ZZE1hysdBnOK9xdINcmQmBIeS9aHbEhzweQyU6CHN1ddmby6puVwBA4a6KP6G7TcehFTyzi1IQzobkeS1OkPBYNrBFAevqxA3rOe8Zh484D/V3JjvWiW+ajEmJ30kzQpx0aoVhOd6M3+hydOca/uq3mJ7pQQhMDc/Y98818SYERSrK71tXyJKeLaL+GJTHFDzroYBjlczNIkLyl5MnfNoeYcAI/14ENmjo0l9BA/qSuFr89jME08D9f+FmSdXVFhzCRD0NNqD2edjk1/0YfcCS0gc3+xoShGsRLzZhvWkZlSJD26fpHV4ixAA7VEkWCPovoJwvGxdIQ+jM+O6ojmNxM7ufKP0aRETun9TkYIZ/qeqs2TwtKPPIQyhW7YrAd+Mdj3kTdh11f8wt5ZsajghdbbNmGMOxiGPAafrZ0GpOTn22xW3Conn7cEg4HqxDkGQv0M5AAHNCLyK2RuvVxKnFBezfzPz6O9SCFzLkYInWLCPT6oP2HfwUzINdw2eGZNlFOksLssED4hC/g0Jdi3w+eG9m73gaoAdVWtpxBMCSPP7J0AGSIvyrcgem0/8kNkhuhTFd0QFdcfPqlfzyzO4NHHWPXt1geOiVGxTQeQJw6dVsnMhnHK0bC78H8iWqEgbrVAF26sDFxbTQBDfcUkFHW8l47cCVnLD0P3pa/IyFrBrUs0KdeUEkepYoUFT1a5DAq905wU+PZ405k6QQPjVk/ooKqlMqpMM=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(396003)(136003)(366004)(346002)(376002)(39860400002)(451199015)(38100700002)(36756003)(31696002)(86362001)(478600001)(316002)(54906003)(6486002)(8676002)(66946007)(66556008)(6916009)(4326008)(66476007)(2616005)(31686004)(2906002)(6506007)(53546011)(8936002)(186003)(26005)(41300700001)(83380400001)(6512007)(5660300002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f4c42148-14d6-4e57-c916-08dafd4e8f2b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:31:47.9040
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8100

On 23.01.2023 15:23, Oleksii wrote:
> On Mon, 2023-01-23 at 14:57 +0100, Jan Beulich wrote:
>> On 20.01.2023 15:59, Oleksii Kurochko wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/csr.h
>>> @@ -0,0 +1,82 @@
>>> +/*
>>> + * Take from Linux.
>>
>> This again means you want an Origin: tag. Whether the comment itself
>> is
>> useful depends on how much customization you expect there to be down
>> the road. But wait - the header here is quite dissimilar from
>> Linux'es,
>> so the description wants to go into further detail. That would then
>> want
>> to include why 5 of the 7 functions are actually commented out at
>> this
>> point.
>>
> I forgot to remove them. They were commented out because they aren't
> used yet. But it probably makes sense to add them from the start.
> 
> I am curious if "Take from Linux" is needed at all?

As I said, I was wondering too. The less you take from Linux (and the
more you add on top), the less useful such a comment is going to be.

> Should it be described what was removed from the original header [1] ?

In the description, yes (or, if it's very little, simply say that much
more is present there). Doing so in the leading comment in the header
risks going stale very quickly.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:33:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:33:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482933.748743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxsY-0001Nb-Jm; Mon, 23 Jan 2023 14:32:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482933.748743; Mon, 23 Jan 2023 14:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJxsY-0001NU-H2; Mon, 23 Jan 2023 14:32:58 +0000
Received: by outflank-mailman (input) for mailman id 482933;
 Mon, 23 Jan 2023 14:32:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ngzL=5U=tibco.com=sdyasli@srs-se1.protection.inumbo.net>)
 id 1pJxsW-0000nx-F3
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:32:56 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d2aea49c-9b2a-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:32:54 +0100 (CET)
Received: by mail-ed1-x533.google.com with SMTP id v30so14779997edb.9
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 06:32:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2aea49c-9b2a-11ed-b8d1-410ff93cb8f0
X-Received: by 2002:a05:6402:f15:b0:49e:f062:99f1 with SMTP id
 i21-20020a0564020f1500b0049ef06299f1mr1318823eda.146.1674484374038; Mon, 23
 Jan 2023 06:32:54 -0800 (PST)
MIME-Version: 1.0
References: <20230111142329.4379-1-sergey.dyasli@citrix.com> <a728fa61-eb33-f348-ca72-caec45154889@suse.com>
In-Reply-To: <a728fa61-eb33-f348-ca72-caec45154889@suse.com>
From: Sergey Dyasli <sergey.dyasli@cloud.com>
Date: Mon, 23 Jan 2023 14:32:43 +0000
Message-ID: <CAPRVcudS_LR4_dXPrLZ5KspHqvrp0vPxSD_8RkogLes+ZZ-NDw@mail.gmail.com>
Subject: Re: [PATCH v2] x86/ucode/AMD: apply the patch early on every logical thread
To: Jan Beulich <jbeulich@suse.com>
Cc: Sergey Dyasli <sergey.dyasli@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Mon, Jan 16, 2023 at 2:47 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 11.01.2023 15:23, Sergey Dyasli wrote:
> > --- a/xen/arch/x86/cpu/microcode/amd.c
> > +++ b/xen/arch/x86/cpu/microcode/amd.c
> > @@ -176,8 +176,13 @@ static enum microcode_match_result compare_revisions(
> >      if ( new_rev > old_rev )
> >          return NEW_UCODE;
> >
> > -    if ( opt_ucode_allow_same && new_rev == old_rev )
> > -        return NEW_UCODE;
> > +    if ( new_rev == old_rev )
> > +    {
> > +        if ( opt_ucode_allow_same )
> > +            return NEW_UCODE;
> > +        else
> > +            return SAME_UCODE;
> > +    }
>
> I find this misleading: "same" should not depend on the command line
> option.

The alternative diff I was considering is this:

--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -179,6 +179,9 @@ static enum microcode_match_result compare_revisions(
     if ( opt_ucode_allow_same && new_rev == old_rev )
         return NEW_UCODE;

+    if ( new_rev == old_rev )
+        return SAME_UCODE;
+
     return OLD_UCODE;
 }

Do you think the logic is clearer this way? Or should I simply remove
"else" from the first diff above?

> In fact the command line option should affect only the cases
> where ucode is actually to be loaded; it should not affect cases where
> the check is done merely to know whether the cache needs updating.
>
> With that e.g. microcode_update_helper() should then also be adjusted:
> It shouldn't say merely "newer" when "allow-same" is in effect.

I haven't tried late-loading an older ucode blob to see this
inconsistency, but you're probably right. I'll test and adjust the
message.

Sergey


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:51:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:51:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482940.748753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyAf-0003rD-5p; Mon, 23 Jan 2023 14:51:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482940.748753; Mon, 23 Jan 2023 14:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyAf-0003r6-1g; Mon, 23 Jan 2023 14:51:41 +0000
Received: by outflank-mailman (input) for mailman id 482940;
 Mon, 23 Jan 2023 14:51:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJyAe-0003r0-Kk
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:51:40 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2040.outbound.protection.outlook.com [40.107.105.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7079880f-9b2d-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:51:38 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7773.eurprd04.prod.outlook.com (2603:10a6:102:cd::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:51:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:51:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7079880f-9b2d-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Date: Mon, 23 Jan 2023 15:51:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/8] runstate/time area registration by (guest) physical
 address
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0067.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7773:EE_
X-MS-Office365-Filtering-Correlation-Id: 14beab0c-d545-4aac-5f97-08dafd5153c7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 14beab0c-d545-4aac-5f97-08dafd5153c7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:51:36.8287
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7773

Since it was indicated that introducing specific new vCPU ops may be
beneficial independently of the introduction of a fully
physical-address-based ABI flavor, here we go. There continue to be a
number of open questions throughout the series, the resolution of which
is one of the main goals of this v2 posting.

1: domain: GADDR based shared guest area registration alternative - cleanup
3: domain: update GADDR based runstate guest area
4: x86: update GADDR based secondary time area
5: x86/mem-sharing: copy GADDR based shared guest areas
6: domain: map/unmap GADDR based shared guest areas
7: domain: introduce GADDR based runstate area registration alternative
8: x86: introduce GADDR based secondary time area registration alternative
9: common: convert vCPU info area registration

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:53:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:53:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482947.748763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyCN-0004UY-Jc; Mon, 23 Jan 2023 14:53:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482947.748763; Mon, 23 Jan 2023 14:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyCN-0004UR-Gm; Mon, 23 Jan 2023 14:53:27 +0000
Received: by outflank-mailman (input) for mailman id 482947;
 Mon, 23 Jan 2023 14:53:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJyCM-0004Rs-DL
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:53:26 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2082.outbound.protection.outlook.com [40.107.20.82])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id afabc85b-9b2d-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:53:24 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7773.eurprd04.prod.outlook.com (2603:10a6:102:cd::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:53:22 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:53:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afabc85b-9b2d-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4a9fd51e-d083-ad95-8f38-2829c980bb66@suse.com>
Date: Mon, 23 Jan 2023 15:53:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH RFC v2 1/8] domain: GADDR based shared guest area registration
 alternative - teardown
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
In-Reply-To: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0016.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7773:EE_
X-MS-Office365-Filtering-Correlation-Id: b814be01-3ba0-4110-1e7f-08dafd5192eb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b814be01-3ba0-4110-1e7f-08dafd5192eb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:53:22.7906
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9xCEzB4ZIIs59guhEr+jSabhPV0O5enaoa2pKbyyI507xppZ1NS4FXl/Q2Cz+4yyAoeXlryYLN4ppqzOcTb9MQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7773

In preparation for the introduction of new vCPU operations allowing the
respective areas (one of the two is x86-specific) to be registered by
guest-physical address, add the necessary domain cleanup hooks.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
RFC: Zapping the areas in pv_shim_shutdown() may not be strictly
     necessary: As I understand it, unmap_vcpu_info() is called only
     because the vCPU info area cannot be re-registered. Beyond that I
     guess the assumption is that the areas would only be re-registered
     as they were before. If that's not the case, I wonder whether the
     guest handles for both areas shouldn't also be zapped.
---
v2: Add assertion in unmap_guest_area().

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1014,7 +1014,10 @@ int arch_domain_soft_reset(struct domain
     }
 
     for_each_vcpu ( d, v )
+    {
         set_xen_guest_handle(v->arch.time_info_guest, NULL);
+        unmap_guest_area(v, &v->arch.time_guest_area);
+    }
 
  exit_put_gfn:
     put_gfn(d, gfn_x(gfn));
@@ -2329,6 +2332,8 @@ int domain_relinquish_resources(struct d
             if ( ret )
                 return ret;
 
+            unmap_guest_area(v, &v->arch.time_guest_area);
+
             vpmu_destroy(v);
         }
 
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -658,6 +658,7 @@ struct arch_vcpu
 
     /* A secondary copy of the vcpu time info. */
     XEN_GUEST_HANDLE(vcpu_time_info_t) time_info_guest;
+    struct guest_area time_guest_area;
 
     struct arch_vm_event *vm_event;
 
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -394,8 +394,10 @@ int pv_shim_shutdown(uint8_t reason)
 
     for_each_vcpu ( d, v )
     {
-        /* Unmap guest vcpu_info pages. */
+        /* Unmap guest vcpu_info page and runstate/time areas. */
         unmap_vcpu_info(v);
+        unmap_guest_area(v, &v->runstate_guest_area);
+        unmap_guest_area(v, &v->arch.time_guest_area);
 
         /* Reset the periodic timer to the default value. */
         vcpu_set_periodic_timer(v, MILLISECS(10));
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -963,7 +963,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
             unmap_vcpu_info(v);
+            unmap_guest_area(v, &v->runstate_guest_area);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings 
          * have to be put before we call put_domain. */
@@ -1417,6 +1420,7 @@ int domain_soft_reset(struct domain *d,
     {
         set_xen_guest_handle(runstate_guest(v), NULL);
         unmap_vcpu_info(v);
+        unmap_guest_area(v, &v->runstate_guest_area);
     }
 
     rc = arch_domain_soft_reset(d);
@@ -1568,6 +1572,19 @@ void unmap_vcpu_info(struct vcpu *v)
     put_page_and_type(mfn_to_page(mfn));
 }
 
+/*
+ * This is only intended to be used for domain cleanup (or more generally only
+ * with at least the respective vCPU, if it's not the current one, reliably
+ * paused).
+ */
+void unmap_guest_area(struct vcpu *v, struct guest_area *area)
+{
+    struct domain *d = v->domain;
+
+    if ( v != current )
+        ASSERT(atomic_read(&v->pause_count) | atomic_read(&d->pause_count));
+}
+
 int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct vcpu_guest_context *ctxt;
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -5,6 +5,12 @@
 #include <xen/types.h>
 
 #include <public/xen.h>
+
+struct guest_area {
+    struct page_info *pg;
+    void *map;
+};
+
 #include <asm/domain.h>
 #include <asm/numa.h>
 
@@ -76,6 +82,11 @@ void arch_vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
+                   struct guest_area *area,
+                   void (*populate)(void *dst, struct vcpu *v));
+void unmap_guest_area(struct vcpu *v, struct guest_area *area);
+
 int arch_domain_create(struct domain *d,
                        struct xen_domctl_createdomain *config,
                        unsigned int flags);
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -202,6 +202,7 @@ struct vcpu
         XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
     } runstate_guest; /* guest address */
 #endif
+    struct guest_area runstate_guest_area;
     unsigned int     new_state;
 
     /* Has the FPU been initialised? */



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:54:03 2023
Message-ID: <0cc7c5ec-b5db-f9a6-cfd8-c05a22f417c7@suse.com>
Date: Mon, 23 Jan 2023 15:53:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH RFC v2 2/8] domain: update GADDR based runstate guest area
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
In-Reply-To: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Before adding a new vCPU operation to register the runstate area by
guest-physical address, add code to actually keep such areas up-to-date.

Note that the area will be updated exclusively following the model
enabled by VMASST_TYPE_runstate_update_flag for areas registered by
virtual address.

Note further that pages aren't marked dirty when written to (matching
the handling of space mapped by map_vcpu_info()), on the basis that the
registrations are lost anyway across migration (or would need re-
populating at the target for transparent migration). Plus the contents
of the areas in question have to be deemed volatile in the first place
(so saving a "most recent" value is pretty meaningless even for e.g.
snapshotting).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: HVM guests (on x86) can change bitness and hence layout (and size!
     and alignment) of the runstate area. I don't think it is an option
     to require 32-bit code to pass a range such that even the 64-bit
     layout wouldn't cross a page boundary (and be suitably aligned). I
     also don't see any other good solution, so for now a crude approach
     with an extra boolean is used (using has_32bit_shinfo() isn't race
     free and could hence lead to overrunning the mapped space).
---
v2: Drop VM-assist conditionals.

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1616,14 +1616,53 @@ bool update_runstate_area(struct vcpu *v
     struct guest_memory_policy policy = { };
     void __user *guest_handle = NULL;
     struct vcpu_runstate_info runstate;
+    struct vcpu_runstate_info *map = v->runstate_guest_area.map;
+
+    memcpy(&runstate, &v->runstate, sizeof(runstate));
+
+    if ( map )
+    {
+        uint64_t *pset;
+#ifdef CONFIG_COMPAT
+        struct compat_vcpu_runstate_info *cmap = NULL;
+
+        if ( v->runstate_guest_area_compat )
+            cmap = (void *)map;
+#endif
+
+        /*
+         * NB: No VM_ASSIST(v->domain, runstate_update_flag) check here.
+         *     Always using that updating model.
+         */
+#ifdef CONFIG_COMPAT
+        if ( cmap )
+            pset = &cmap->state_entry_time;
+        else
+#endif
+            pset = &map->state_entry_time;
+        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+        write_atomic(pset, runstate.state_entry_time);
+        smp_wmb();
+
+#ifdef CONFIG_COMPAT
+        if ( cmap )
+            XLAT_vcpu_runstate_info(cmap, &runstate);
+        else
+#endif
+            *map = runstate;
+
+        smp_wmb();
+        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        write_atomic(pset, runstate.state_entry_time);
+
+        return true;
+    }
 
     if ( guest_handle_is_null(runstate_guest(v)) )
         return true;
 
     update_guest_memory_policy(v, &policy);
 
-    memcpy(&runstate, &v->runstate, sizeof(runstate));
-
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
 #ifdef CONFIG_COMPAT
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -231,6 +231,8 @@ struct vcpu
 #ifdef CONFIG_COMPAT
     /* A hypercall is using the compat ABI? */
     bool             hcall_compat;
+    /* Physical runstate area registered via compat ABI? */
+    bool             runstate_guest_area_compat;
 #endif
 
 #ifdef CONFIG_IOREQ_SERVER



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:54:26 2023
Message-ID: <74e428a9-9309-4ef6-16cf-37f7f9d5c8f7@suse.com>
Date: Mon, 23 Jan 2023 15:54:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 3/8] x86: update GADDR based secondary time area
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
In-Reply-To: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Before adding a new vCPU operation to register the secondary time area
by guest-physical address, add code to actually keep such areas up-to-
date.

Note that pages aren't marked dirty when written to (matching the
handling of space mapped by map_vcpu_info()), on the basis that the
registrations are lost anyway across migration (or would need re-
populating at the target for transparent migration). Plus the contents
of the areas in question have to be deemed volatile in the first place
(so saving a "most recent" value is pretty meaningless even for e.g.
snapshotting).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1462,12 +1462,34 @@ static void __update_vcpu_system_time(st
         v->arch.pv.pending_system_time = _u;
 }
 
+static void write_time_guest_area(struct vcpu_time_info *map,
+                                  const struct vcpu_time_info *src)
+{
+    /* 1. Update userspace version. */
+    write_atomic(&map->version, src->version);
+    smp_wmb();
+
+    /* 2. Update all other userspace fields. */
+    *map = *src;
+
+    /* 3. Update userspace version again. */
+    smp_wmb();
+    write_atomic(&map->version, version_update_end(src->version));
+}
+
 bool update_secondary_system_time(struct vcpu *v,
                                   struct vcpu_time_info *u)
 {
     XEN_GUEST_HANDLE(vcpu_time_info_t) user_u = v->arch.time_info_guest;
+    struct vcpu_time_info *map = v->arch.time_guest_area.map;
     struct guest_memory_policy policy = { .nested_guest_mode = false };
 
+    if ( map )
+    {
+        write_time_guest_area(map, u);
+        return true;
+    }
+
     if ( guest_handle_is_null(user_u) )
         return true;
 



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:55:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:55:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482963.748793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyEG-0006Cd-LD; Mon, 23 Jan 2023 14:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482963.748793; Mon, 23 Jan 2023 14:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyEG-0006CW-Gu; Mon, 23 Jan 2023 14:55:24 +0000
Received: by outflank-mailman (input) for mailman id 482963;
 Mon, 23 Jan 2023 14:55:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJyEF-0005Sn-GU
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:55:23 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2044.outbound.protection.outlook.com [40.107.104.44])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5853384-9b2d-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:55:21 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8422.eurprd04.prod.outlook.com (2603:10a6:20b:3ea::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:55:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:55:19 +0000
X-Inumbo-ID: f5853384-9b2d-11ed-b8d1-410ff93cb8f0
Message-ID: <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com>
Date: Mon, 23 Jan 2023 15:55:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest areas
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
In-Reply-To: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

In preparation for the introduction of new vCPU operations that allow
registering the respective areas (one of the two is x86-specific) by
guest-physical address, add the necessary fork handling (with the
backing function yet to be filled in).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
     hvm_set_nonreg_state(cd_vcpu, &nrs);
 }
 
+static int copy_guest_area(struct guest_area *cd_area,
+                           const struct guest_area *d_area,
+                           struct vcpu *cd_vcpu,
+                           const struct domain *d)
+{
+    mfn_t d_mfn, cd_mfn;
+
+    if ( !d_area->pg )
+        return 0;
+
+    d_mfn = page_to_mfn(d_area->pg);
+
+    /* Allocate & map a page for the area if it hasn't been already. */
+    if ( !cd_area->pg )
+    {
+        gfn_t gfn = mfn_to_gfn(d, d_mfn);
+        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
+        p2m_type_t p2mt;
+        p2m_access_t p2ma;
+        unsigned int offset;
+        int ret;
+
+        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
+        if ( mfn_eq(cd_mfn, INVALID_MFN) )
+        {
+            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
+
+            if ( !pg )
+                return -ENOMEM;
+
+            cd_mfn = page_to_mfn(pg);
+            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
+
+            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
+                                 p2m->default_access, -1);
+            if ( ret )
+                return ret;
+        }
+        else if ( p2mt != p2m_ram_rw )
+            return -EBUSY;
+
+        /*
+         * Simply specify the entire range up to the end of the page. All the
+         * function uses it for is a check for not crossing page boundaries.
+         */
+        offset = PAGE_OFFSET(d_area->map);
+        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
+                             PAGE_SIZE - offset, cd_area, NULL);
+        if ( ret )
+            return ret;
+    }
+    else
+        cd_mfn = page_to_mfn(cd_area->pg);
+
+    copy_domain_page(cd_mfn, d_mfn);
+
+    return 0;
+}
+
 static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
 {
     struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
@@ -1745,6 +1804,16 @@ static int copy_vcpu_settings(struct dom
             copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
         }
 
+        /* Same for the (physically registered) runstate and time info areas. */
+        ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
+                              &d_vcpu->runstate_guest_area, cd_vcpu, d);
+        if ( ret )
+            return ret;
+        ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
+                              &d_vcpu->arch.time_guest_area, cd_vcpu, d);
+        if ( ret )
+            return ret;
+
         ret = copy_vpmu(d_vcpu, cd_vcpu);
         if ( ret )
             return ret;
@@ -1987,7 +2056,10 @@ int mem_sharing_fork_reset(struct domain
 
  state:
     if ( reset_state )
+    {
         rc = copy_settings(d, pd);
+        /* TBD: What to do here with -ERESTART? */
+    }
 
     domain_unpause(d);
 
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1572,6 +1572,13 @@ void unmap_vcpu_info(struct vcpu *v)
     put_page_and_type(mfn_to_page(mfn));
 }
 
+int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
+                   struct guest_area *area,
+                   void (*populate)(void *dst, struct vcpu *v))
+{
+    return -EOPNOTSUPP;
+}
+
 /*
  * This is only intended to be used for domain cleanup (or more generally only
  * with at least the respective vCPU, if it's not the current one, reliably



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:55:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:55:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482965.748803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyEa-0006f2-S4; Mon, 23 Jan 2023 14:55:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482965.748803; Mon, 23 Jan 2023 14:55:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyEa-0006em-PK; Mon, 23 Jan 2023 14:55:44 +0000
Received: by outflank-mailman (input) for mailman id 482965;
 Mon, 23 Jan 2023 14:55:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJyEZ-0005Sn-Tl
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:55:44 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2068.outbound.protection.outlook.com [40.107.104.68])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 018eac98-9b2e-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:55:41 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8422.eurprd04.prod.outlook.com (2603:10a6:20b:3ea::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:55:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:55:40 +0000
X-Inumbo-ID: 018eac98-9b2e-11ed-b8d1-410ff93cb8f0
Message-ID: <ab024dbb-cd79-83aa-9032-ad0ff78927f7@suse.com>
Date: Mon, 23 Jan 2023 15:55:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 5/8] domain: map/unmap GADDR based shared guest areas
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
In-Reply-To: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Registration by virtual/linear address has downsides: at least on x86,
the access is expensive for HVM/PVH domains. Furthermore, for 64-bit
PV domains the areas are inaccessible (and hence cannot be updated by
Xen) while in guest-user mode, and for HVM guests they may be
inaccessible when Meltdown mitigations are in place. (There are yet
more issues.)

In preparation for the introduction of new vCPU operations that allow
registering the respective areas (one of the two is x86-specific) by
guest-physical address, flesh out the map/unmap functions.

Noteworthy differences from map_vcpu_info():
- areas can be registered more than once (and de-registered),
- remote vCPUs are paused rather than checked for being down (which in
  principle could change right after the check),
- the domain lock is taken for a much smaller region.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: By using global domain page mappings, the demand on the underlying
     VA range may increase significantly. I did consider using
     per-domain mappings instead, but they exist for x86 only. Of
     course we could have arch_{,un}map_guest_area() alias the global
     domain page mapping functions on Arm and use per-domain mappings
     on x86. Then again, map_vcpu_info() doesn't (and can't) do so.

RFC: In map_guest_area() I'm not checking the P2M type, instead - just
     like map_vcpu_info() - relying solely on the type-ref acquisition.
     Checking for p2m_ram_rw alone would be wrong, as at least
     p2m_ram_logdirty ought to also be okay to use here (and in similar
     cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
     used here (like altp2m_vcpu_enable_ve() does), as well as in
     map_vcpu_info(); then again, without the P2M lock held, the P2M
     type is stale by the time it is looked at anyway.
---
v2: currd -> d, to cover mem-sharing's copy_guest_area(). Re-base over
    change(s) earlier in the series. Use ~0 as "unmap" request indicator.

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1576,7 +1576,82 @@ int map_guest_area(struct vcpu *v, paddr
                    struct guest_area *area,
                    void (*populate)(void *dst, struct vcpu *v))
 {
-    return -EOPNOTSUPP;
+    struct domain *d = v->domain;
+    void *map = NULL;
+    struct page_info *pg = NULL;
+    int rc = 0;
+
+    if ( ~gaddr )
+    {
+        unsigned long gfn = PFN_DOWN(gaddr);
+        unsigned int align;
+        p2m_type_t p2mt;
+
+        if ( gfn != PFN_DOWN(gaddr + size - 1) )
+            return -ENXIO;
+
+#ifdef CONFIG_COMPAT
+        if ( has_32bit_shinfo(d) )
+            align = alignof(compat_ulong_t);
+        else
+#endif
+            align = alignof(xen_ulong_t);
+        if ( gaddr & (align - 1) )
+            return -ENXIO;
+
+        rc = check_get_page_from_gfn(d, _gfn(gfn), false, &p2mt, &pg);
+        if ( rc )
+            return rc;
+
+        if ( !get_page_type(pg, PGT_writable_page) )
+        {
+            put_page(pg);
+            return -EACCES;
+        }
+
+        map = __map_domain_page_global(pg);
+        if ( !map )
+        {
+            put_page_and_type(pg);
+            return -ENOMEM;
+        }
+        map += PAGE_OFFSET(gaddr);
+    }
+
+    if ( v != current )
+    {
+        if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        {
+            rc = -ERESTART;
+            goto unmap;
+        }
+
+        vcpu_pause(v);
+
+        spin_unlock(&d->hypercall_deadlock_mutex);
+    }
+
+    domain_lock(d);
+
+    if ( map )
+        populate(map, v);
+
+    SWAP(area->pg, pg);
+    SWAP(area->map, map);
+
+    domain_unlock(d);
+
+    if ( v != current )
+        vcpu_unpause(v);
+
+ unmap:
+    if ( pg )
+    {
+        unmap_domain_page_global(map);
+        put_page_and_type(pg);
+    }
+
+    return rc;
 }
 
 /*
@@ -1587,9 +1662,24 @@ int map_guest_area(struct vcpu *v, paddr
 void unmap_guest_area(struct vcpu *v, struct guest_area *area)
 {
     struct domain *d = v->domain;
+    void *map;
+    struct page_info *pg;
 
     if ( v != current )
         ASSERT(atomic_read(&v->pause_count) | atomic_read(&d->pause_count));
+
+    domain_lock(d);
+    map = area->map;
+    area->map = NULL;
+    pg = area->pg;
+    area->pg = NULL;
+    domain_unlock(d);
+
+    if ( pg )
+    {
+        unmap_domain_page_global(map);
+        put_page_and_type(pg);
+    }
 }
 
 int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:56:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482973.748813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyFF-0007HZ-4u; Mon, 23 Jan 2023 14:56:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482973.748813; Mon, 23 Jan 2023 14:56:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyFF-0007HS-21; Mon, 23 Jan 2023 14:56:25 +0000
Received: by outflank-mailman (input) for mailman id 482973;
 Mon, 23 Jan 2023 14:56:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJyFD-0007BS-Oo
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:56:23 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2062.outbound.protection.outlook.com [40.107.104.62])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19d9b1c3-9b2e-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 15:56:22 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8422.eurprd04.prod.outlook.com (2603:10a6:20b:3ea::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:56:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:56:21 +0000
X-Inumbo-ID: 19d9b1c3-9b2e-11ed-91b6-6bf2151ebd3b
Message-ID: <97fe3de2-a647-ede6-1831-1e301976b83e@suse.com>
Date: Mon, 23 Jan 2023 15:56:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 6/8] domain: introduce GADDR based runstate area
 registration alternative
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
In-Reply-To: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0038.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8422:EE_
X-MS-Office365-Filtering-Correlation-Id: 37c0268a-90b4-4562-ff05-08dafd51fd3f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 37c0268a-90b4-4562-ff05-08dafd51fd3f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:56:21.1073
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bpjL5HXUBmS6BTOABqsxSM5R2W3orsmx8uxX45BOTCQbGF5QiPXSmKtwim++UxUF6y4V2rXP6YEu/JGv3K9bSw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8422

The registration by virtual/linear address has downsides: at least on
x86, the access is expensive for HVM/PVH domains. Furthermore, for
64-bit PV domains the area is inaccessible (and hence cannot be updated
by Xen) while in guest-user mode.

Introduce a new vCPU operation that allows registering the runstate
area by guest-physical address.

A downside, at least in theory, of physically registered areas is that
a PV guest then won't see the dirty (and perhaps also the accessed) bits
set in the respective page table entries.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Extend comment in public header.

--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -12,6 +12,22 @@
 CHECK_vcpu_get_physid;
 #undef xen_vcpu_get_physid
 
+static void cf_check
+runstate_area_populate(void *map, struct vcpu *v)
+{
+    if ( is_pv_vcpu(v) )
+        v->arch.pv.need_update_runstate_area = false;
+
+    v->runstate_guest_area_compat = true;
+
+    if ( v == current )
+    {
+        struct compat_vcpu_runstate_info *info = map;
+
+        XLAT_vcpu_runstate_info(info, &v->runstate);
+    }
+}
+
 int
 compat_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
@@ -57,6 +73,25 @@ compat_vcpu_op(int cmd, unsigned int vcp
 
         break;
     }
+
+    case VCPUOP_register_runstate_phys_area:
+    {
+        struct compat_vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.p, arg, 1) )
+            break;
+
+        rc = map_guest_area(v, area.addr.p,
+                            sizeof(struct compat_vcpu_runstate_info),
+                            &v->runstate_guest_area,
+                            runstate_area_populate);
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op, "iih",
+                                               cmd, vcpuid, arg);
+
+        break;
+    }
 
     case VCPUOP_register_vcpu_time_memory_area:
     {
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1803,6 +1803,26 @@ bool update_runstate_area(struct vcpu *v
     return rc;
 }
 
+static void cf_check
+runstate_area_populate(void *map, struct vcpu *v)
+{
+#ifdef CONFIG_PV
+    if ( is_pv_vcpu(v) )
+        v->arch.pv.need_update_runstate_area = false;
+#endif
+
+#ifdef CONFIG_COMPAT
+    v->runstate_guest_area_compat = false;
+#endif
+
+    if ( v == current )
+    {
+        struct vcpu_runstate_info *info = map;
+
+        *info = v->runstate;
+    }
+}
+
 long common_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
@@ -1977,6 +1997,25 @@ long common_vcpu_op(int cmd, struct vcpu
 
         break;
     }
+
+    case VCPUOP_register_runstate_phys_area:
+    {
+        struct vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.p, arg, 1) )
+            break;
+
+        rc = map_guest_area(v, area.addr.p,
+                            sizeof(struct vcpu_runstate_info),
+                            &v->runstate_guest_area,
+                            runstate_area_populate);
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op, "iih",
+                                               cmd, vcpuid, arg);
+
+        break;
+    }
 
     default:
         rc = -ENOSYS;
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -218,6 +218,19 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+/*
+ * Like the respective VCPUOP_register_*_memory_area, just using the "addr.p"
+ * field of the supplied struct as a guest physical address (i.e. in GFN space).
+ * The respective area may not cross a page boundary.  Pass ~0 to unregister an
+ * area.  Note that as long as an area is registered by physical address, the
+ * linear address based area will not be serviced (updated) by the hypervisor.
+ *
+ * Note that the area registered via VCPUOP_register_runstate_memory_area will
+ * be updated in the same manner as the one registered via virtual address PLUS
+ * VMASST_TYPE_runstate_update_flag engaged by the domain.
+ */
+#define VCPUOP_register_runstate_phys_area      14
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
 /*



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:56:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:56:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482977.748823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyFi-0007sS-IB; Mon, 23 Jan 2023 14:56:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482977.748823; Mon, 23 Jan 2023 14:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyFi-0007sL-Er; Mon, 23 Jan 2023 14:56:54 +0000
Received: by outflank-mailman (input) for mailman id 482977;
 Mon, 23 Jan 2023 14:56:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJyFg-0007rt-LU
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:56:52 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2064.outbound.protection.outlook.com [40.107.22.64])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a82a8cf-9b2e-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 15:56:50 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6972.eurprd04.prod.outlook.com (2603:10a6:10:11c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:56:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:56:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a82a8cf-9b2e-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iMbyuNrlVdi8icIzsRH7UDjCL9zmS33iQrhfZKDgy+EcdAOyWwpo0gAL5BuLtyNNwpZPwn9Hu/fd8xYE/cMzEqWQXrQ2ymJkvDTyIdFbCh5Hgi+Dr69ZIBs28AZ104XbtSvWOo3ssfWnHP3nKnRNy2RFq96HYXc/f5nM0Yn3RECjTlMzYsMBOlD8yEXIps2MvDgdnBK7oDyNilkkwoBPGzvKpHqwSpH5DMKtfeeRyc3nD8fIsF0bypP56T4fH/3mvqPTIQqBfOuc2tGxINy/rGzvOUYE/2faukPII9r5eUMqNKeQ1L/qdh+38dtQoAms+6rEjnuB69lw8aiFTIHNdQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zfjVl067685QYNdMenPbNcSHcpvBJTSgTSpwaVubY4k=;
 b=hUdI7FMfnripvPIaWBiug2g/lyYHFvxP5ezFYIwr+Bz7nFJR92YVu3g/WDLaD5U/xolkLFiTMgR2ICPNXoGMEGI2mj0W9UQrQNpzPlEAkHodigHW75OmgS/xdlkK817fyusrDl0tP2xeXxIsJvr6CIrrj8VhXPLmzMZF1wn4kidqhvU7VQOzhRwQ8SyvW3ra7biZttgiMg87sbelG3MS6C6nWcc/9IR2dQM+z5Ir2jnGFSV1/sJ2B1dEveMZ6Rp3JBj1BA1OfWFXUT7zSLpIu26vgU485mlAXhw2WzdTpr+CLVuEocd17KHyv4fwOBnj2Bo/NU5mgxMltP7q3ROhgA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zfjVl067685QYNdMenPbNcSHcpvBJTSgTSpwaVubY4k=;
 b=SLM641KhnryrfiSIgYXIoQPRTcJyUpa7+r5MCV5M5As38syHXvIZ+5nGiSPBNAhzSIY6HveaOevEu+LEc9U2KqDhCVeNEdNZjvBciMoG4x3MSEcx3W7C0B0MUA1BlSiRXsCGIASZsM5DXXX8/GTlJrXeB/dhcFZWOSjqKhpwAuWAc/L2sIg9Li9IfoyC43iM4JNyXuIbP+J05wsQVpU0XTccmfWsnv34IiiKhYr72P23/T5n5EfCwNtRrnN60C0yL9zNPewu43hb6EwqwfD5CsFjkjn5FkR4rrJMIniWeUOxRLaN6twttwbZWbO2CuZ6WjrvIGgD6eB9FtQa8txReQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ce2ee037-1d43-52fe-e934-1c21e977b03a@suse.com>
Date: Mon, 23 Jan 2023 15:56:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 7/8] x86: introduce GADDR based secondary time area
 registration alternative
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
In-Reply-To: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0202.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6972:EE_
X-MS-Office365-Filtering-Correlation-Id: b5ede3fa-d391-4b78-acb2-08dafd520d6a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b5ede3fa-d391-4b78-acb2-08dafd520d6a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:56:48.2305
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NbfjHjq+qT0boZANSHwHTVrVDo1Blev2cTDgX0voZ66m3uGOrjRHtK14FKhMPMr8M76nzCZa3GNLLp5kxPzRqQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6972

The registration by virtual/linear address has downsides: the access is
expensive for HVM/PVH domains. Furthermore, for 64-bit PV domains the
area is inaccessible (and hence cannot be updated by Xen) while in
guest-user mode.

Introduce a new vCPU operation that allows registering the secondary
time area by guest-physical address.

A downside, at least in theory, of physically registered areas is that
a PV guest then won't see the dirty (and perhaps also the accessed) bits
set in the respective page table entries.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Forge version in force_update_secondary_system_time().

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1499,6 +1499,15 @@ int arch_vcpu_reset(struct vcpu *v)
     return 0;
 }
 
+static void cf_check
+time_area_populate(void *map, struct vcpu *v)
+{
+    if ( is_pv_vcpu(v) )
+        v->arch.pv.pending_system_time.version = 0;
+
+    force_update_secondary_system_time(v, map);
+}
+
 long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
@@ -1536,6 +1545,25 @@ long do_vcpu_op(int cmd, unsigned int vc
 
         break;
     }
+
+    case VCPUOP_register_vcpu_time_phys_area:
+    {
+        struct vcpu_register_time_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.p, arg, 1) )
+            break;
+
+        rc = map_guest_area(v, area.addr.p,
+                            sizeof(vcpu_time_info_t),
+                            &v->arch.time_guest_area,
+                            time_area_populate);
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op, "iih",
+                                               cmd, vcpuid, arg);
+
+        break;
+    }
 
     case VCPUOP_get_physid:
     {
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -681,6 +681,8 @@ void domain_cpu_policy_changed(struct do
 
 bool update_secondary_system_time(struct vcpu *,
                                   struct vcpu_time_info *);
+void force_update_secondary_system_time(struct vcpu *,
+                                        struct vcpu_time_info *);
 
 void vcpu_show_execution_state(struct vcpu *);
 void vcpu_show_registers(const struct vcpu *);
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1524,6 +1524,16 @@ void force_update_vcpu_system_time(struc
     __update_vcpu_system_time(v, 1);
 }
 
+void force_update_secondary_system_time(struct vcpu *v,
+                                        struct vcpu_time_info *map)
+{
+    struct vcpu_time_info u;
+
+    collect_time_info(v, &u);
+    u.version = -1; /* Compensate for version_update_end(). */
+    write_time_guest_area(map, &u);
+}
+
 static void update_domain_rtc(void)
 {
     struct domain *d;
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -115,6 +115,7 @@ compat_vcpu_op(int cmd, unsigned int vcp
 
     case VCPUOP_send_nmi:
     case VCPUOP_get_physid:
+    case VCPUOP_register_vcpu_time_phys_area:
         rc = do_vcpu_op(cmd, vcpuid, arg);
         break;
 
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -230,6 +230,7 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_register_ti
  * VMASST_TYPE_runstate_update_flag engaged by the domain.
  */
 #define VCPUOP_register_runstate_phys_area      14
+#define VCPUOP_register_vcpu_time_phys_area     15
 
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 14:57:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 14:57:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482985.748833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyGG-0008S1-Rd; Mon, 23 Jan 2023 14:57:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482985.748833; Mon, 23 Jan 2023 14:57:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyGG-0008Ru-OP; Mon, 23 Jan 2023 14:57:28 +0000
Received: by outflank-mailman (input) for mailman id 482985;
 Mon, 23 Jan 2023 14:57:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJyGF-0007BS-4r
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 14:57:27 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2076.outbound.protection.outlook.com [40.107.22.76])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f8fd8df-9b2e-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 15:57:25 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6972.eurprd04.prod.outlook.com (2603:10a6:10:11c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 14:57:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 14:57:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f8fd8df-9b2e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BBL5oriJJl9QVmA217MpxakpE+Qh9lBxqdfPRykcVGBQ+kYV6ur31uGRg5vTuFE3oJ9AgKcRdwWMa3ONVbInZp4+KLB762IQ4wMQnnH2vqlFvNnoKFFbgU6UjfgcPIQAjPSAnivogA7rgstKujrpRh1lJzK9lo58PESuP+SdZWo61wF4wfF4YG5qVD5yhzub6wDCzwkhos+cQrGTofQSMHFK5d2mw4Hxfpe+cfuOJR1BHWXiMqgrEU++CyYN/FIOqRDCtOw8ykhMb4F4+3upfuFnRytGlJvwUcf1n8N0y9SlRtO20sZ3S+l9mZbCV9bhURHTTlery2EeSZmdqYM13g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4vRXvWQLyGKAmG7Bji/j/VfcIcDZDAV0kqzYj4NHDKM=;
 b=ibdqDd717LlSPHfSaxIXXgWfWJbHocf82gZgDEC1yaMy5UN7+JWVSsSSGsy/y+PPgqKsYk90DXY2LbwFXDRx+l4TXxCQ2woVaLtOxEiYNAtuw5ZhpMtQgvaueKjdTiqanVEF0URlYgpd5AVY2ERjP/0+YD0Vt//0W/RC8K3q3ak7hdffckr0nFkt9JQkosWEjuyehnMuAuwqumiMvKqg6w+YYC/4im522C7+Ldmi8I5K2M4qpjCyZy8YwOb2RXo0MJr3W5iWiV6X2xa2xDRFtmavkYCSSt0DHY0jrOtABmKiLD4lRzXCrGRdEIB9E1JxCA2GMxxb/hDsV3M9wc/Wcg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4vRXvWQLyGKAmG7Bji/j/VfcIcDZDAV0kqzYj4NHDKM=;
 b=sKDKCou3ewMLQIzmPETiGkeFOd0g9A28wL7IODL2G15LYJ0nBvbvo0pmp2OMmiWJZ4FxYRSM4AWjndjgKCFiP0h0K9x0xQa8Tatqxv4fj9Ho69BGCd6QSLJyc+RR2Xm1ZVoMLfJ3jLwzVuJk4dmMfPcMZ05aYL7UktVnDZ/VZRA5O3uYT2w+RHQGBucTaZic9FB2XzRSSNWtbB/voN5ei60BurTA2LiH7PXnrs1jbQ82T1CIfsSHzKUnURZLH5wGyka9T2pnc0TnX+afBcsdpJzLhl4oJIe4/xl+OldkK1YzPo95Q9LQovKMMFlrXmm9CyXPvS+ONj8AC1VpBLJVgQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5ef83f20-e1d0-83a5-168d-912f5be17ca2@suse.com>
Date: Mon, 23 Jan 2023 15:57:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v2 8/8] common: convert vCPU info area registration
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
In-Reply-To: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0145.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 14:57:23.6501
 (UTC)

Switch to using map_guest_area(). Noteworthy differences from
map_vcpu_info():
- remote vCPU-s are paused rather than checked for being down (which in
  principle can change right after the check),
- the domain lock is taken for a much smaller region,
- the error code for an attempt to re-register the area is now -EBUSY,
- we could in principle permit de-registration when no area was
  previously registered (which would permit "probing", if necessary for
  anything).
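
The re-registration rule from the list above (second attempt fails with
-EBUSY; de-registration resets the area) can be modeled with a toy sketch.
This is not the actual Xen code — the names (toy_map_guest_area,
toy_unmap_guest_area) and the simplified struct are made up for
illustration only:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Toy stand-in for Xen's struct guest_area: non-NULL pg == registered. */
struct guest_area {
    void *pg;
    void *map;
};

/* A second registration of an already mapped area now fails with -EBUSY. */
static int toy_map_guest_area(struct guest_area *area, void *pg, void *map)
{
    if ( area->pg )
        return -EBUSY;
    area->pg = pg;
    area->map = map;
    return 0;
}

/* De-registration clears the area (the real code resets vcpu_info instead). */
static void toy_unmap_guest_area(struct guest_area *area)
{
    area->map = NULL;
    area->pg = NULL;
}
```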

Note that this eliminates a bug in copy_vcpu_settings(): The function
allocated a new page regardless of whether the GFN already had a mapping,
in particular breaking the case of two vCPU-s having their info areas on
the same page.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: I'm not really certain whether the preliminary check (ahead of
     calling map_guest_area()) is worthwhile to have.
---
v2: Re-base over changes earlier in the series. Properly enforce no re-
    registration. Avoid several casts by introducing local variables.

--- a/xen/arch/x86/include/asm/shared.h
+++ b/xen/arch/x86/include/asm/shared.h
@@ -26,17 +26,20 @@ static inline void arch_set_##field(stru
 #define GET_SET_VCPU(type, field)                               \
 static inline type arch_get_##field(const struct vcpu *v)       \
 {                                                               \
+    const vcpu_info_t *vi = v->vcpu_info_area.map;              \
+                                                                \
     return !has_32bit_shinfo(v->domain) ?                       \
-           v->vcpu_info->native.arch.field :                    \
-           v->vcpu_info->compat.arch.field;                     \
+           vi->native.arch.field : vi->compat.arch.field;       \
 }                                                               \
 static inline void arch_set_##field(struct vcpu *v,             \
                                     type val)                   \
 {                                                               \
+    vcpu_info_t *vi = v->vcpu_info_area.map;                    \
+                                                                \
     if ( !has_32bit_shinfo(v->domain) )                         \
-        v->vcpu_info->native.arch.field = val;                  \
+        vi->native.arch.field = val;                            \
     else                                                        \
-        v->vcpu_info->compat.arch.field = val;                  \
+        vi->compat.arch.field = val;                            \
 }
 
 #else
@@ -57,12 +60,16 @@ static inline void arch_set_##field(stru
 #define GET_SET_VCPU(type, field)                           \
 static inline type arch_get_##field(const struct vcpu *v)   \
 {                                                           \
-    return v->vcpu_info->arch.field;                        \
+    const vcpu_info_t *vi = v->vcpu_info_area.map;          \
+                                                            \
+    return vi->arch.field;                                  \
 }                                                           \
 static inline void arch_set_##field(struct vcpu *v,         \
                                     type val)               \
 {                                                           \
-    v->vcpu_info->arch.field = val;                         \
+    vcpu_info_t *vi = v->vcpu_info_area.map;                \
+                                                            \
+    vi->arch.field = val;                                   \
 }
 
 #endif
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1758,53 +1758,24 @@ static int copy_vpmu(struct vcpu *d_vcpu
 static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
 {
     unsigned int i;
-    struct p2m_domain *p2m = p2m_get_hostp2m(cd);
     int ret = -EINVAL;
 
     for ( i = 0; i < cd->max_vcpus; i++ )
     {
         struct vcpu *d_vcpu = d->vcpu[i];
         struct vcpu *cd_vcpu = cd->vcpu[i];
-        mfn_t vcpu_info_mfn;
 
         if ( !d_vcpu || !cd_vcpu )
             continue;
 
-        /* Copy & map in the vcpu_info page if the guest uses one */
-        vcpu_info_mfn = d_vcpu->vcpu_info_mfn;
-        if ( !mfn_eq(vcpu_info_mfn, INVALID_MFN) )
-        {
-            mfn_t new_vcpu_info_mfn = cd_vcpu->vcpu_info_mfn;
-
-            /* Allocate & map the page for it if it hasn't been already */
-            if ( mfn_eq(new_vcpu_info_mfn, INVALID_MFN) )
-            {
-                gfn_t gfn = mfn_to_gfn(d, vcpu_info_mfn);
-                unsigned long gfn_l = gfn_x(gfn);
-                struct page_info *page;
-
-                if ( !(page = alloc_domheap_page(cd, 0)) )
-                    return -ENOMEM;
-
-                new_vcpu_info_mfn = page_to_mfn(page);
-                set_gpfn_from_mfn(mfn_x(new_vcpu_info_mfn), gfn_l);
-
-                ret = p2m->set_entry(p2m, gfn, new_vcpu_info_mfn,
-                                     PAGE_ORDER_4K, p2m_ram_rw,
-                                     p2m->default_access, -1);
-                if ( ret )
-                    return ret;
-
-                ret = map_vcpu_info(cd_vcpu, gfn_l,
-                                    PAGE_OFFSET(d_vcpu->vcpu_info));
-                if ( ret )
-                    return ret;
-            }
-
-            copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
-        }
-
-        /* Same for the (physically registered) runstate and time info areas. */
+        /*
+         * Copy and map the vcpu_info page and the (physically registered)
+         * runstate and time info areas.
+         */
+        ret = copy_guest_area(&cd_vcpu->vcpu_info_area,
+                              &d_vcpu->vcpu_info_area, cd_vcpu, d);
+        if ( ret )
+            return ret;
         ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
                               &d_vcpu->runstate_guest_area, cd_vcpu, d);
         if ( ret )
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -395,7 +395,7 @@ int pv_shim_shutdown(uint8_t reason)
     for_each_vcpu ( d, v )
     {
         /* Unmap guest vcpu_info page and runstate/time areas. */
-        unmap_vcpu_info(v);
+        unmap_guest_area(v, &v->vcpu_info_area);
         unmap_guest_area(v, &v->runstate_guest_area);
         unmap_guest_area(v, &v->arch.time_guest_area);
 
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1438,7 +1438,7 @@ static void __update_vcpu_system_time(st
     struct vcpu_time_info *u = &vcpu_info(v, time), _u;
     const struct domain *d = v->domain;
 
-    if ( v->vcpu_info == NULL )
+    if ( !v->vcpu_info_area.map )
         return;
 
     collect_time_info(v, &_u);
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -53,7 +53,7 @@ void __dummy__(void)
 
     OFFSET(VCPU_processor, struct vcpu, processor);
     OFFSET(VCPU_domain, struct vcpu, domain);
-    OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info);
+    OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info_area.map);
     OFFSET(VCPU_trap_bounce, struct vcpu, arch.pv.trap_bounce);
     OFFSET(VCPU_thread_flags, struct vcpu, arch.flags);
     OFFSET(VCPU_event_addr, struct vcpu, arch.pv.event_callback_eip);
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -97,7 +97,7 @@ static void _show_registers(
     if ( context == CTXT_hypervisor )
         printk(" %pS", _p(regs->rip));
     printk("\nRFLAGS: %016lx   ", regs->rflags);
-    if ( (context == CTXT_pv_guest) && v && v->vcpu_info )
+    if ( (context == CTXT_pv_guest) && v && v->vcpu_info_area.map )
         printk("EM: %d   ", !!vcpu_info(v, evtchn_upcall_mask));
     printk("CONTEXT: %s", context_names[context]);
     if ( v && !is_idle_vcpu(v) )
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -49,7 +49,7 @@ int compat_common_vcpu_op(int cmd, struc
     {
     case VCPUOP_initialise:
     {
-        if ( v->vcpu_info == &dummy_vcpu_info )
+        if ( v->vcpu_info_area.map == &dummy_vcpu_info )
             return -EINVAL;
 
 #ifdef CONFIG_HVM
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -127,10 +127,10 @@ static void vcpu_info_reset(struct vcpu
 {
     struct domain *d = v->domain;
 
-    v->vcpu_info = ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
-                    ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
-                    : &dummy_vcpu_info);
-    v->vcpu_info_mfn = INVALID_MFN;
+    v->vcpu_info_area.map =
+        ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
+         ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
+         : &dummy_vcpu_info);
 }
 
 static void vmtrace_free_buffer(struct vcpu *v)
@@ -964,7 +964,7 @@ int domain_kill(struct domain *d)
             return -ERESTART;
         for_each_vcpu ( d, v )
         {
-            unmap_vcpu_info(v);
+            unmap_guest_area(v, &v->vcpu_info_area);
             unmap_guest_area(v, &v->runstate_guest_area);
         }
         d->is_dying = DOMDYING_dead;
@@ -1419,7 +1419,7 @@ int domain_soft_reset(struct domain *d,
     for_each_vcpu ( d, v )
     {
         set_xen_guest_handle(runstate_guest(v), NULL);
-        unmap_vcpu_info(v);
+        unmap_guest_area(v, &v->vcpu_info_area);
         unmap_guest_area(v, &v->runstate_guest_area);
     }
 
@@ -1467,111 +1467,6 @@ int vcpu_reset(struct vcpu *v)
     return rc;
 }
 
-/*
- * Map a guest page in and point the vcpu_info pointer at it.  This
- * makes sure that the vcpu_info is always pointing at a valid piece
- * of memory, and it sets a pending event to make sure that a pending
- * event doesn't get missed.
- */
-int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset)
-{
-    struct domain *d = v->domain;
-    void *mapping;
-    vcpu_info_t *new_info;
-    struct page_info *page;
-    unsigned int align;
-
-    if ( offset > (PAGE_SIZE - sizeof(*new_info)) )
-        return -ENXIO;
-
-#ifdef CONFIG_COMPAT
-    BUILD_BUG_ON(sizeof(*new_info) != sizeof(new_info->compat));
-    if ( has_32bit_shinfo(d) )
-        align = alignof(new_info->compat);
-    else
-#endif
-        align = alignof(*new_info);
-    if ( offset & (align - 1) )
-        return -ENXIO;
-
-    if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
-        return -EINVAL;
-
-    /* Run this command on yourself or on other offline VCPUS. */
-    if ( (v != current) && !(v->pause_flags & VPF_down) )
-        return -EINVAL;
-
-    page = get_page_from_gfn(d, gfn, NULL, P2M_UNSHARE);
-    if ( !page )
-        return -EINVAL;
-
-    if ( !get_page_type(page, PGT_writable_page) )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    mapping = __map_domain_page_global(page);
-    if ( mapping == NULL )
-    {
-        put_page_and_type(page);
-        return -ENOMEM;
-    }
-
-    new_info = (vcpu_info_t *)(mapping + offset);
-
-    if ( v->vcpu_info == &dummy_vcpu_info )
-    {
-        memset(new_info, 0, sizeof(*new_info));
-#ifdef XEN_HAVE_PV_UPCALL_MASK
-        __vcpu_info(v, new_info, evtchn_upcall_mask) = 1;
-#endif
-    }
-    else
-    {
-        memcpy(new_info, v->vcpu_info, sizeof(*new_info));
-    }
-
-    v->vcpu_info = new_info;
-    v->vcpu_info_mfn = page_to_mfn(page);
-
-    /* Set new vcpu_info pointer /before/ setting pending flags. */
-    smp_wmb();
-
-    /*
-     * Mark everything as being pending just to make sure nothing gets
-     * lost.  The domain will get a spurious event, but it can cope.
-     */
-#ifdef CONFIG_COMPAT
-    if ( !has_32bit_shinfo(d) )
-        write_atomic(&new_info->native.evtchn_pending_sel, ~0);
-    else
-#endif
-        write_atomic(&vcpu_info(v, evtchn_pending_sel), ~0);
-    vcpu_mark_events_pending(v);
-
-    return 0;
-}
-
-/*
- * Unmap the vcpu info page if the guest decided to place it somewhere
- * else. This is used from domain_kill() and domain_soft_reset().
- */
-void unmap_vcpu_info(struct vcpu *v)
-{
-    mfn_t mfn = v->vcpu_info_mfn;
-
-    if ( mfn_eq(mfn, INVALID_MFN) )
-        return;
-
-    unmap_domain_page_global((void *)
-                             ((unsigned long)v->vcpu_info & PAGE_MASK));
-
-    vcpu_info_reset(v); /* NB: Clobbers v->vcpu_info_mfn */
-
-    put_page_and_type(mfn_to_page(mfn));
-}
-
 int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
                    struct guest_area *area,
                    void (*populate)(void *dst, struct vcpu *v))
@@ -1633,14 +1528,44 @@ int map_guest_area(struct vcpu *v, paddr
 
     domain_lock(d);
 
-    if ( map )
-        populate(map, v);
+    /* No re-registration of the vCPU info area. */
+    if ( area != &v->vcpu_info_area || !area->pg )
+    {
+        if ( map )
+            populate(map, v);
 
-    SWAP(area->pg, pg);
-    SWAP(area->map, map);
+        SWAP(area->pg, pg);
+        SWAP(area->map, map);
+    }
+    else
+        rc = -EBUSY;
 
     domain_unlock(d);
 
+    /* Set pending flags /after/ new vcpu_info pointer was set. */
+    if ( area == &v->vcpu_info_area && !rc )
+    {
+        /*
+         * Mark everything as being pending just to make sure nothing gets
+         * lost.  The domain will get a spurious event, but it can cope.
+         */
+#ifdef CONFIG_COMPAT
+        if ( !has_32bit_shinfo(d) )
+        {
+            vcpu_info_t *info = area->map;
+
+            /* For VCPUOP_register_vcpu_info handling in common_vcpu_op(). */
+            BUILD_BUG_ON(sizeof(*info) != sizeof(info->compat));
+            write_atomic(&info->native.evtchn_pending_sel, ~0);
+        }
+        else
+#endif
+            write_atomic(&vcpu_info(v, evtchn_pending_sel), ~0);
+        vcpu_mark_events_pending(v);
+
+        force_update_vcpu_system_time(v);
+    }
+
     if ( v != current )
         vcpu_unpause(v);
 
@@ -1670,7 +1595,10 @@ void unmap_guest_area(struct vcpu *v, st
 
     domain_lock(d);
     map = area->map;
-    area->map = NULL;
+    if ( area == &v->vcpu_info_area )
+        vcpu_info_reset(v);
+    else
+        area->map = NULL;
     pg = area->pg;
     area->pg = NULL;
     domain_unlock(d);
@@ -1803,6 +1731,27 @@ bool update_runstate_area(struct vcpu *v
     return rc;
 }
 
+/*
+ * This makes sure that the vcpu_info is always pointing at a valid piece of
+ * memory, and it sets a pending event to make sure that a pending event
+ * doesn't get missed.
+ */
+static void cf_check
+vcpu_info_populate(void *map, struct vcpu *v)
+{
+    vcpu_info_t *info = map;
+
+    if ( v->vcpu_info_area.map == &dummy_vcpu_info )
+    {
+        memset(info, 0, sizeof(*info));
+#ifdef XEN_HAVE_PV_UPCALL_MASK
+        __vcpu_info(v, info, evtchn_upcall_mask) = 1;
+#endif
+    }
+    else
+        memcpy(info, v->vcpu_info_area.map, sizeof(*info));
+}
+
 static void cf_check
 runstate_area_populate(void *map, struct vcpu *v)
 {
@@ -1832,7 +1781,7 @@ long common_vcpu_op(int cmd, struct vcpu
     switch ( cmd )
     {
     case VCPUOP_initialise:
-        if ( v->vcpu_info == &dummy_vcpu_info )
+        if ( v->vcpu_info_area.map == &dummy_vcpu_info )
             return -EINVAL;
 
         rc = arch_initialise_vcpu(v, arg);
@@ -1956,16 +1905,29 @@ long common_vcpu_op(int cmd, struct vcpu
     case VCPUOP_register_vcpu_info:
     {
         struct vcpu_register_vcpu_info info;
+        paddr_t gaddr;
 
         rc = -EFAULT;
         if ( copy_from_guest(&info, arg, 1) )
             break;
 
-        domain_lock(d);
-        rc = map_vcpu_info(v, info.mfn, info.offset);
-        domain_unlock(d);
+        rc = -EINVAL;
+        gaddr = gfn_to_gaddr(_gfn(info.mfn)) + info.offset;
+        if ( !~gaddr ||
+             gfn_x(gaddr_to_gfn(gaddr)) != info.mfn )
+            break;
 
-        force_update_vcpu_system_time(v);
+        /* Preliminary check only; see map_guest_area(). */
+        rc = -EBUSY;
+        if ( v->vcpu_info_area.pg )
+            break;
+
+        /* See the BUILD_BUG_ON() in vcpu_info_populate(). */
+        rc = map_guest_area(v, gaddr, sizeof(vcpu_info_t),
+                            &v->vcpu_info_area, vcpu_info_populate);
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op, "iih",
+                                               cmd, vcpuid, arg);
 
         break;
     }
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -79,9 +79,6 @@ void cf_check free_pirq_struct(void *);
 int  arch_vcpu_create(struct vcpu *v);
 void arch_vcpu_destroy(struct vcpu *v);
 
-int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset);
-void unmap_vcpu_info(struct vcpu *v);
-
 int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
                    struct guest_area *area,
                    void (*populate)(void *dst, struct vcpu *v));
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -175,7 +175,7 @@ struct vcpu
 
     int              processor;
 
-    vcpu_info_t     *vcpu_info;
+    struct guest_area vcpu_info_area;
 
     struct domain   *domain;
 
@@ -288,9 +288,6 @@ struct vcpu
 
     struct waitqueue_vcpu *waitqueue_vcpu;
 
-    /* Guest-specified relocation of vcpu_info. */
-    mfn_t            vcpu_info_mfn;
-
     struct evtchn_fifo_vcpu *evtchn_fifo;
 
     /* vPCI per-vCPU area, used to store data for long running operations. */
--- a/xen/include/xen/shared.h
+++ b/xen/include/xen/shared.h
@@ -44,6 +44,7 @@ typedef struct vcpu_info vcpu_info_t;
 extern vcpu_info_t dummy_vcpu_info;
 
 #define shared_info(d, field)      __shared_info(d, (d)->shared_info, field)
-#define vcpu_info(v, field)        __vcpu_info(v, (v)->vcpu_info, field)
+#define vcpu_info(v, field)        \
+        __vcpu_info(v, (vcpu_info_t *)(v)->vcpu_info_area.map, field)
 
 #endif /* __XEN_SHARED_H__ */



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:04:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:04:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.482989.748853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyN4-0001wy-Te; Mon, 23 Jan 2023 15:04:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 482989.748853; Mon, 23 Jan 2023 15:04:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJyN4-0001wr-Qb; Mon, 23 Jan 2023 15:04:30 +0000
Received: by outflank-mailman (input) for mailman id 482989;
 Mon, 23 Jan 2023 15:04:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJyN3-0001wN-Nt
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:04:29 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2086.outbound.protection.outlook.com [40.107.20.86])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3adcb33b-9b2f-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 16:04:27 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7044.eurprd04.prod.outlook.com (2603:10a6:208:191::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.25; Mon, 23 Jan
 2023 15:04:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 15:04:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3adcb33b-9b2f-11ed-b8d1-410ff93cb8f0
Message-ID: <97244d8e-7906-01f5-2904-187b8a790e31@suse.com>
Date: Mon, 23 Jan 2023 16:04:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] x86/ucode/AMD: apply the patch early on every logical
 thread
Content-Language: en-US
To: Sergey Dyasli <sergey.dyasli@cloud.com>
Cc: Sergey Dyasli <sergey.dyasli@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230111142329.4379-1-sergey.dyasli@citrix.com>
 <a728fa61-eb33-f348-ca72-caec45154889@suse.com>
 <CAPRVcudS_LR4_dXPrLZ5KspHqvrp0vPxSD_8RkogLes+ZZ-NDw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAPRVcudS_LR4_dXPrLZ5KspHqvrp0vPxSD_8RkogLes+ZZ-NDw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 23.01.2023 15:32, Sergey Dyasli wrote:
> On Mon, Jan 16, 2023 at 2:47 PM Jan Beulich <jbeulich@suse.com> wrote:
>> On 11.01.2023 15:23, Sergey Dyasli wrote:
>>> --- a/xen/arch/x86/cpu/microcode/amd.c
>>> +++ b/xen/arch/x86/cpu/microcode/amd.c
>>> @@ -176,8 +176,13 @@ static enum microcode_match_result compare_revisions(
>>>      if ( new_rev > old_rev )
>>>          return NEW_UCODE;
>>>
>>> -    if ( opt_ucode_allow_same && new_rev == old_rev )
>>> -        return NEW_UCODE;
>>> +    if ( new_rev == old_rev )
>>> +    {
>>> +        if ( opt_ucode_allow_same )
>>> +            return NEW_UCODE;
>>> +        else
>>> +            return SAME_UCODE;
>>> +    }
>>
>> I find this misleading: "same" should not depend on the command line
>> option.
> 
> The alternative diff I was considering is this:
> 
> --- a/xen/arch/x86/cpu/microcode/amd.c
> +++ b/xen/arch/x86/cpu/microcode/amd.c
> @@ -179,6 +179,9 @@ static enum microcode_match_result compare_revisions(
>      if ( opt_ucode_allow_same && new_rev == old_rev )
>          return NEW_UCODE;
> 
> +    if ( new_rev == old_rev )
> +        return SAME_UCODE;
> +
>      return OLD_UCODE;
>  }
> 
> Do you think the logic is clearer this way? Or should I simply remove
> "else" from the first diff above?

Neither addresses my comment. I think the command line option check
needs to move out of this function, into ...

>> In fact the command line option should affect only the cases
>> where ucode is actually to be loaded; it should not affect cases where
>> the check is done merely to know whether the cache needs updating.

... some (but not all) of the callers.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:04:31 2023
Message-ID: <61603a7c4ba09012fedbad48b3d7d028ffc9443c.camel@gmail.com>
Subject: Re: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>,
 xen-devel@lists.xenproject.org
Date: Mon, 23 Jan 2023 17:04:19 +0200
In-Reply-To: <e8e6bee5-ce59-b171-6134-4473b396df00@suse.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
	 <e8e6bee5-ce59-b171-6134-4473b396df00@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-23 at 12:17 +0100, Jan Beulich wrote:
> On 20.01.2023 15:59, Oleksii Kurochko wrote:
> > --- /dev/null
> > +++ b/xen/arch/riscv/entry.S
> > @@ -0,0 +1,97 @@
> > +#include <asm/asm.h>
> > +#include <asm/processor.h>
> > +#include <asm/riscv_encoding.h>
> > +#include <asm/traps.h>
> > +
> > +        .global handle_exception
> > +        .align 4
> > +
> > +handle_exception:
> > +
> > +    /* Exceptions from xen */
> > +save_to_stack:
> > +        /* Save context to stack */
> > +        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) -
> > RISCV_CPU_USER_REGS_SIZE) (sp)
> > +        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
> > +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
> > +        j       save_context
> > +
> > +save_context:
>
> Just curious: Why not simply fall through here, i.e. why the J which
> really is a NOP in this case?
>
There is no specific reason; I left it in for possible future use.
I will remove it in the next patch version.
> Jan



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:08:53 2023
Message-ID: <1d4848eb-4b99-1492-45d1-c0ce2b0ae6a6@suse.com>
Date: Mon, 23 Jan 2023 16:08:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in
 sh_page_fault()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <0e682cd4-3cc0-461d-ee53-13a894797f17@suse.com>
 <5d8a938e-cb4a-a989-1849-d702cd25d890@suse.com>
In-Reply-To: <5d8a938e-cb4a-a989-1849-d702cd25d890@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 23.01.2023 15:26, Jan Beulich wrote:
> Do away with the partly mis-named "mmio" label there, which really is
> only about emulated MMIO. Move the code to the place where the sole
> "goto" was. Re-order steps slightly: Assertion first, perfc increment
> outside of the locked region, and "gpa" calculation closer to the first
> use of the variable. Also make the HVM conditional cover the entire
> if(), as p2m_mmio_dm isn't applicable to PV; specifically get_gfn()
> won't ever return this type for PV domains.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.
> 
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c

I've sent a stale patch, I'm sorry. This further hunk is needed to keep
!HVM builds working:

@@ -2144,8 +2144,8 @@ static int cf_check sh_page_fault(
     gfn_t gfn = _gfn(0);
     mfn_t gmfn, sl1mfn = _mfn(0);
     shadow_l1e_t sl1e, *ptr_sl1e;
-    paddr_t gpa;
 #ifdef CONFIG_HVM
+    paddr_t gpa;
     struct sh_emulate_ctxt emul_ctxt;
     const struct x86_emulate_ops *emul_ops;
     int r;

Jan

> @@ -2588,13 +2588,33 @@ static int cf_check sh_page_fault(
>          goto emulate;
>      }
>  
> +#ifdef CONFIG_HVM
> +
>      /* Need to hand off device-model MMIO to the device model */
>      if ( p2mt == p2m_mmio_dm )
>      {
> +        ASSERT(is_hvm_vcpu(v));
> +        if ( !guest_mode(regs) )
> +            goto not_a_shadow_fault;
> +
> +        sh_audit_gw(v, &gw);
>          gpa = guest_walk_to_gpa(&gw);
> -        goto mmio;
> +        SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
> +        shadow_audit_tables(v);
> +        sh_reset_early_unshadow(v);
> +
> +        paging_unlock(d);
> +        put_gfn(d, gfn_x(gfn));
> +
> +        perfc_incr(shadow_fault_mmio);
> +        trace_shadow_gen(TRC_SHADOW_MMIO, va);
> +
> +        return handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
> +               ? EXCRET_fault_fixed : 0;
>      }
>  
> +#endif /* CONFIG_HVM */
> +
>      /* Ignore attempts to write to read-only memory. */
>      if ( p2m_is_readonly(p2mt) && (ft == ft_demand_write) )
>          goto emulate_readonly; /* skip over the instruction */
> @@ -2867,25 +2887,6 @@ static int cf_check sh_page_fault(
>      return EXCRET_fault_fixed;
>  #endif /* CONFIG_HVM */
>  
> - mmio:
> -    if ( !guest_mode(regs) )
> -        goto not_a_shadow_fault;
> -#ifdef CONFIG_HVM
> -    ASSERT(is_hvm_vcpu(v));
> -    perfc_incr(shadow_fault_mmio);
> -    sh_audit_gw(v, &gw);
> -    SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
> -    shadow_audit_tables(v);
> -    sh_reset_early_unshadow(v);
> -    paging_unlock(d);
> -    put_gfn(d, gfn_x(gfn));
> -    trace_shadow_gen(TRC_SHADOW_MMIO, va);
> -    return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
> -            ? EXCRET_fault_fixed : 0);
> -#else
> -    BUG();
> -#endif
> -
>   not_a_shadow_fault:
>      sh_audit_gw(v, &gw);
>      SHADOW_PRINTK("not a shadow fault\n");
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:17:50 2023
Message-ID: <ea3c256c0f5a7f09a2504c548e649a0cf0edcb43.camel@gmail.com>
Subject: Re: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
	Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Date: Mon, 23 Jan 2023 17:17:32 +0200
In-Reply-To: <ac6f02e8-c493-7914-f3c4-32b4ebe1bc26@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
	 <ac6f02e8-c493-7914-f3c4-32b4ebe1bc26@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-23 at 11:50 +0000, Andrew Cooper wrote:
> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> > diff --git a/xen/arch/riscv/entry.S b/xen/arch/riscv/entry.S
> > new file mode 100644
> > index 0000000000..f7d46f42bb
> > --- /dev/null
> > +++ b/xen/arch/riscv/entry.S
> > @@ -0,0 +1,97 @@
> > +#include <asm/asm.h>
> > +#include <asm/processor.h>
> > +#include <asm/riscv_encoding.h>
> > +#include <asm/traps.h>
> > +
> > +        .global handle_exception
> > +        .align 4
> > +
> > +handle_exception:
>
> ENTRY() which takes care of the global and the align.
>
> Also, you want a size and type at the end, just like in head.S  (Sorry,
> we *still* don't have any sane infrastructure for doing that nicely.
> Opencode it for now.)
>
> > +
> > +    /* Exceptions from xen */
> > +save_to_stack:
>
> This label isn't used at all, is it?
>
> > +        /* Save context to stack */
> > +        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) -
> > RISCV_CPU_USER_REGS_SIZE) (sp)
> > +        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
> > +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
>
> Exceptions on RISC-V don't adjust the stack pointer.  This logic
> depends on interrupting Xen code, and Xen not having suffered a stack
> overflow (and actually, that the space on the stack for all registers
> also doesn't overflow).
>
> Which might be fine for now, but I think it warrants a comment
> somewhere (probably at handle_exception itself) stating the
> expectations while it's still a work in progress.  So in this case
> something like:
>
> /* Work-in-progress:  Depends on interrupting Xen, and the stack being
> good. */
>
> But, do we want to allocate stemp right away (even with an empty
> struct), and get tp set up properly?
>
I am not sure I understand what you mean about stemp. Could you please
clarify a little?

> That said, aren't we going to have to rewrite this when enabling
> H mode anyway?
I based this code on code from Bobby's repo (on top of which, with some
additional patches, I've successfully run Dom0), so I am not sure it
will be rewritten.
Probably I don't understand which part you are talking about.

Regarding H mode: to be honest, I didn't see where the code switches to
it. Maybe Bobby or Alistair can explain it to me?
>
> > +        j       save_context
> > +
> > +save_context:
>
> I'd drop this.  It's a nop right now.
>
> > <snip>
> > +        csrr    t0, CSR_SEPC
> > +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sepc)(sp)
> > +        csrr    t0, CSR_SSTATUS
> > +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(sstatus)(sp)
>
> So something I've noticed about CSRs through this series.
>
> The C CSR macros are set up to use real CSR names, but the CSR_*
> constants used in C and ASM are raw numbers.
>
> If we're using raw numbers, then the C CSR accessors should be static
> inlines instead, but the advantage of using names is the toolchain can
> issue an error when we reference a CSR not supported by the current
> extensions.
>
> We ought to use a single form, consistently through Xen.  How feasible
> will it be to use names throughout?
>
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:20:21 2023
MIME-Version: 1.0
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <d91b5315-a5bb-a6ee-c9bb-58974c733a4e@suse.com> <CA+zSX=ZVK_7xpgraJyC3__uORqXo8F9Atj9gCF+oO7OyfRrtYg@mail.gmail.com>
 <c8ca4781-13ac-add6-1ae0-558f8d0da052@suse.com>
In-Reply-To: <c8ca4781-13ac-add6-1ae0-558f8d0da052@suse.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Mon, 23 Jan 2023 15:20:01 +0000
Message-ID: <CA+zSX=b2o_sbC+CwLUm2F5QnSKaGBSayUPgsLheLWHob8jUnrg@mail.gmail.com>
Subject: Re: [PATCH v2 1/9] x86/shadow: replace sh_reset_l3_up_pointers()
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
Content-Type: multipart/alternative; boundary="0000000000007d7a3e05f2efef00"

--0000000000007d7a3e05f2efef00
Content-Type: text/plain; charset="UTF-8"

On Mon, Jan 23, 2023 at 8:41 AM Jan Beulich <jbeulich@suse.com> wrote:

> On 20.01.2023 18:02, George Dunlap wrote:
> > On Wed, Jan 11, 2023 at 1:52 PM Jan Beulich <jbeulich@suse.com> wrote:
> >
> >> Rather than doing a separate hash walk (and then even using the vCPU
> >> variant, which is to go away), do the up-pointer-clearing right in
> >> sh_unpin(), as an alternative to the (now further limited) enlisting on
> >> a "free floating" list fragment. This utilizes the fact that such list
> >> fragments are traversed only for multi-page shadows (in shadow_free()).
> >> Furthermore sh_terminate_list() is a safe guard only anyway, which isn't
> >> in use in the common case (it actually does anything only for BIGMEM
> >> configurations).
> >
> > One thing that seems strange about this patch is that you're essentially
> > adding a field to the domain shadow struct in lieu of adding another
> > another argument to sh_unpin() (unless the bit is referenced elsewhere in
> > subsequent patches, which I haven't reviewed, in part because about half
> of
> > them don't apply cleanly to the current tree).
>
> Well, to me adding another parameter to sh_unpin() would have looked odd;
> the new field looks slightly cleaner to me. But changing that is merely a
> matter of taste, so if you and e.g. Andrew think that approach was better,
> I could switch to that. And no, I don't foresee further uses of the field.
>

You're about to call sh_unpin(), and you want to tell that function to
change its behavior.  What's so odd about adding an argument to the
function to indicate the behavior?  Instead you're adding a bit of global
state which is carried around 100% of the time, even when that function
isn't being called.  That's not what people normally expect; it makes the
code harder to reason about.

It would certainly be ugly to have to add "false" to every other instance
of sh_unpin; but the normal way you get around that is to redefine
sh_unpin() as a wrapper which calls the other function with the 'false'
argument set.
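As a rough illustration of the wrapper pattern described above (structure and field names here are made up for the example, not the real shadow code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a shadow page; fields are illustrative. */
struct shadow_page { int pins; bool up_cleared; };

/* The rare caller that needs the special behavior passes the flag
 * explicitly... */
static void _sh_unpin(struct shadow_page *sp, bool clear_l3_up)
{
    sp->pins--;
    if ( clear_l3_up )
        sp->up_cleared = true;
}

/* ...while a wrapper keeps every existing call site's one-argument
 * form, and its behavior, unchanged. */
static void sh_unpin(struct shadow_page *sp)
{
    _sh_unpin(sp, false);
}
```

This keeps the extra behavior local to the one call site that wants it, instead of carrying it in global state.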

You asked me to review this for a second opinion on the safety of clearing
the up-pointer this way, not because you need an ack; so I don't really
want to block the patch for non-functional reasons.  But I think this is
one of the "death by a thousand cuts" that makes the shadow code more
fragile and difficult for new people to approach and understand.

Re the original question: I've stared at the code for a bit now, and I
can't see anything obviously wrong or dangerous about it.

But it does make me ask, why do we need the "unpinning_l3" pseudo-argument
at all?  Is there any reason not to unconditionally zero out sp->up when we
find a head_type of SH_type_l3_64_shadow?  As far as I can tell, sp->list
doesn't require any special state.  Why do we make the effort to leave it
alone when we're not unpinning all l3s?

In fact, is there a way to unpin an l3 shadow *other* than when we're
unpinning all l3's?  If so, then this patch, as written, is broken -- the
original code clears the up-pointer for *all* L3_64 shadows, regardless of
whether they're on the pinned list; the new patch will only clear the ones
on the pinned list.  But unconditionally clearing sp->up could actually fix
that.

Thoughts?

> As to half of the patches not applying: Some were already applied out
> of order, and others therefore need re-basing slightly. Till now I saw
> no reason to re-send the remaining patches just for that.

Sorry if that sounded like complaining; I was only being preemptively
defensive against the potential accusation that the answer would have been
obvious if I'd just continued reviewing the series. :-) (And indeed if the
whole series had applied I would have checked that the final result didn't
have any other references to it.)

 -George

--0000000000007d7a3e05f2efef00--


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Marco Solieri <marco.solieri@minervasys.tech>
Subject: [PATCH v4 01/11] xen/common: add cache coloring common code
Date: Mon, 23 Jan 2023 16:47:25 +0100
Message-Id: <20230123154735.74832-2-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit adds the Last Level Cache (LLC) coloring common header,
Kconfig options, and stub functions for domain coloring. Since this is
an arch-specific feature, the actual implementation is deferred to later
patches and the Kconfig options are placed under xen/arch.

LLC colors are represented as a dynamic array plus its size; since these
have to be passed at domain creation time, domain_create() is replaced
by domain_create_llc_colored(). domain_create() is then turned into a
wrapper around the colored version so that all original call sites
remain untouched.

Based on original work from: Luca Miccio <lucmiccio@gmail.com>

Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
---
v4:
- Kconfig options moved to xen/arch
- removed range for CONFIG_NR_LLC_COLORS
- added "llc_coloring_enabled" global to later implement the boot-time
  switch
- added domain_create_llc_colored() to be able to pass colors
- added is_domain_llc_colored() macro
---
 xen/arch/Kconfig               | 17 +++++++++++
 xen/common/Kconfig             |  3 ++
 xen/common/domain.c            | 23 +++++++++++++--
 xen/common/keyhandler.c        |  4 +++
 xen/include/xen/llc_coloring.h | 54 ++++++++++++++++++++++++++++++++++
 xen/include/xen/sched.h        |  9 ++++++
 6 files changed, 107 insertions(+), 3 deletions(-)
 create mode 100644 xen/include/xen/llc_coloring.h

diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
index 7028f7b74f..39c23f2528 100644
--- a/xen/arch/Kconfig
+++ b/xen/arch/Kconfig
@@ -28,3 +28,20 @@ config NR_NUMA_NODES
 	  associated with multiple-nodes management. It is the upper bound of
 	  the number of NUMA nodes that the scheduler, memory allocation and
 	  other NUMA-aware components can handle.
+
+config LLC_COLORING
+	bool "Last Level Cache (LLC) coloring" if EXPERT
+	depends on HAS_LLC_COLORING
+
+config NR_LLC_COLORS
+	int "Maximum number of LLC colors"
+	default 128
+	depends on LLC_COLORING
+	help
+	  Controls the build-time size of various arrays associated with LLC
+	  coloring. Refer to the documentation for how to compute the number
+	  of colors supported by the platform.
+	  The default value corresponds to an 8 MiB 16-way LLC, which should be
+	  more than is needed in the general case.
+	  Note that if, at any time, a color configuration with more colors than
+	  the maximum is employed, an error is produced.
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index f1ea3199c8..c796c633f1 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -49,6 +49,9 @@ config HAS_IOPORTS
 config HAS_KEXEC
 	bool
 
+config HAS_LLC_COLORING
+	bool
+
 config HAS_PDX
 	bool
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 626debbae0..87aae86081 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -7,6 +7,7 @@
 #include <xen/compat.h>
 #include <xen/init.h>
 #include <xen/lib.h>
+#include <xen/llc_coloring.h>
 #include <xen/ctype.h>
 #include <xen/err.h>
 #include <xen/param.h>
@@ -549,9 +550,11 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
     return arch_sanitise_domain_config(config);
 }
 
-struct domain *domain_create(domid_t domid,
-                             struct xen_domctl_createdomain *config,
-                             unsigned int flags)
+struct domain *domain_create_llc_colored(domid_t domid,
+                                         struct xen_domctl_createdomain *config,
+                                         unsigned int flags,
+                                         unsigned int *llc_colors,
+                                         unsigned int num_llc_colors)
 {
     struct domain *d, **pd, *old_hwdom = NULL;
     enum { INIT_watchdog = 1u<<1,
@@ -663,6 +666,10 @@ struct domain *domain_create(domid_t domid,
         d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
 
         radix_tree_init(&d->pirq_tree);
+
+        if ( llc_coloring_enabled &&
+             (err = domain_llc_coloring_init(d, llc_colors, num_llc_colors)) )
+            return ERR_PTR(err);
     }
 
     if ( (err = arch_domain_create(d, config, flags)) != 0 )
@@ -769,6 +776,13 @@ struct domain *domain_create(domid_t domid,
     return ERR_PTR(err);
 }
 
+struct domain *domain_create(domid_t domid,
+                             struct xen_domctl_createdomain *config,
+                             unsigned int flags)
+{
+    return domain_create_llc_colored(domid, config, flags, NULL, 0);
+}
+
 void __init setup_system_domains(void)
 {
     /*
@@ -1103,6 +1117,9 @@ static void cf_check complete_domain_destroy(struct rcu_head *head)
     struct vcpu *v;
     int i;
 
+    if ( is_domain_llc_colored(d) )
+        domain_llc_coloring_free(d);
+
     /*
      * Flush all state for the vCPU previously having run on the current CPU.
      * This is in particular relevant for x86 HVM ones on VMX, so that this
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 0a551033c4..56f7731595 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -6,6 +6,7 @@
 #include <xen/debugger.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
+#include <xen/llc_coloring.h>
 #include <xen/param.h>
 #include <xen/shutdown.h>
 #include <xen/event.h>
@@ -307,6 +308,9 @@ static void cf_check dump_domains(unsigned char key)
 
         arch_dump_domain_info(d);
 
+        if ( is_domain_llc_colored(d) )
+            domain_dump_llc_colors(d);
+
         rangeset_domain_printk(d);
 
         dump_pageframe_info(d);
diff --git a/xen/include/xen/llc_coloring.h b/xen/include/xen/llc_coloring.h
new file mode 100644
index 0000000000..625930d378
--- /dev/null
+++ b/xen/include/xen/llc_coloring.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Last Level Cache (LLC) coloring common header
+ *
+ * Copyright (C) 2022 Xilinx Inc.
+ *
+ * Authors:
+ *    Carlo Nonato <carlo.nonato@minervasys.tech>
+ */
+#ifndef __COLORING_H__
+#define __COLORING_H__
+
+#include <xen/sched.h>
+#include <public/domctl.h>
+
+#ifdef CONFIG_HAS_LLC_COLORING
+
+#include <asm/llc_coloring.h>
+
+extern bool llc_coloring_enabled;
+
+int domain_llc_coloring_init(struct domain *d, unsigned int *colors,
+                             unsigned int num_colors);
+void domain_llc_coloring_free(struct domain *d);
+void domain_dump_llc_colors(struct domain *d);
+
+#else
+
+#define llc_coloring_enabled (false)
+
+static inline int domain_llc_coloring_init(struct domain *d,
+                                           unsigned int *colors,
+                                           unsigned int num_colors)
+{
+    return 0;
+}
+static inline void domain_llc_coloring_free(struct domain *d) {}
+static inline void domain_dump_llc_colors(struct domain *d) {}
+
+#endif /* CONFIG_HAS_LLC_COLORING */
+
+#define is_domain_llc_colored(d) (llc_coloring_enabled)
+
+#endif /* __COLORING_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
\ No newline at end of file
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 12be794002..754f6cb1da 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -602,6 +602,9 @@ struct domain
 
     /* Holding CDF_* constant. Internal flags for domain creation. */
     unsigned int cdf;
+
+    unsigned int *llc_colors;
+    unsigned int num_llc_colors;
 };
 
 static inline struct page_list_head *page_to_list(
@@ -685,6 +688,12 @@ static inline void domain_update_node_affinity(struct domain *d)
  */
 int arch_sanitise_domain_config(struct xen_domctl_createdomain *config);
 
+struct domain *domain_create_llc_colored(domid_t domid,
+                                         struct xen_domctl_createdomain *config,
+                                         unsigned int flags,
+                                         unsigned int *colors,
+                                         unsigned int num_colors);
+
 /*
  * Create a domain: the configuration is only necessary for real domain
  * (domid < DOMID_FIRST_RESERVED).
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v4 00/11] Arm cache coloring
Date: Mon, 23 Jan 2023 16:47:24 +0100
Message-Id: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Shared caches in multi-core CPU architectures represent a problem for the
predictability of memory access latency. This jeopardizes the applicability
of many Arm platforms in real-time critical and mixed-criticality
scenarios. We introduce support for cache partitioning with page
coloring, a transparent software technique that enables isolation
between domains and Xen, and thus avoids cache interference.

When creating a domain, a simple syntax (e.g. `0-3` or `4-11`) allows
the user to assign cache partition IDs, called colors, where assigning
different colors guarantees that no mutual cache eviction will ever
happen. This instructs the Xen memory allocator to provide the i-th color
assignee only with pages that map to color i, i.e. that are indexed in the
i-th cache partition.

The proposed implementation supports the dom0less feature, but it doesn't
support the static-mem feature.
The solution has been tested in several scenarios, including on Xilinx Zynq
MPSoCs.

v4 global changes:
- added the "llc" acronym (Last Level Cache) in multiple places in code
  (e.g. coloring.{c|h} -> llc_coloring.{c|h}) to better describe the
  feature and to remove ambiguity with the too generic "colors". "llc" is
  also shorter than "cache"
- reordered patches again since the code is now split into common + arch

Carlo Nonato (8):
  xen/common: add cache coloring common code
  xen/arm: add cache coloring initialization
  xen: extend domctl interface for cache coloring
  tools: add support for cache coloring configuration
  xen/arm: add support for cache coloring configuration via device-tree
  xen/arm: use colored allocator for p2m page tables
  Revert "xen/arm: Remove unused BOOT_RELOC_VIRT_START"
  xen/arm: add cache coloring support for Xen

Luca Miccio (3):
  xen/arm: add Dom0 cache coloring support
  xen: add cache coloring allocator for domains
  xen/arm: add Xen cache colors command line parameter

 docs/man/xl.cfg.5.pod.in                |  10 +
 docs/misc/arm/cache-coloring.rst        | 223 +++++++++++++
 docs/misc/arm/device-tree/booting.txt   |   4 +
 docs/misc/xen-command-line.pandoc       |  61 ++++
 tools/libs/ctrl/xc_domain.c             |  17 +
 tools/libs/light/libxl_create.c         |   2 +
 tools/libs/light/libxl_types.idl        |   1 +
 tools/xl/xl_parse.c                     |  38 ++-
 xen/arch/Kconfig                        |  29 ++
 xen/arch/arm/Kconfig                    |   1 +
 xen/arch/arm/Makefile                   |   1 +
 xen/arch/arm/alternative.c              |   9 +-
 xen/arch/arm/arm64/head.S               |  50 +++
 xen/arch/arm/arm64/mm.c                 |  26 +-
 xen/arch/arm/domain_build.c             |  35 ++-
 xen/arch/arm/include/asm/config.h       |   4 +-
 xen/arch/arm/include/asm/llc_coloring.h |  65 ++++
 xen/arch/arm/include/asm/mm.h           |  10 +-
 xen/arch/arm/include/asm/processor.h    |  16 +
 xen/arch/arm/llc_coloring.c             | 397 ++++++++++++++++++++++++
 xen/arch/arm/mm.c                       |  95 +++++-
 xen/arch/arm/p2m.c                      |  11 +-
 xen/arch/arm/psci.c                     |   9 +-
 xen/arch/arm/setup.c                    |  82 ++++-
 xen/arch/arm/smpboot.c                  |   9 +-
 xen/arch/arm/xen.lds.S                  |   2 +-
 xen/common/Kconfig                      |   3 +
 xen/common/domain.c                     |  23 +-
 xen/common/domctl.c                     |  12 +-
 xen/common/keyhandler.c                 |   4 +
 xen/common/page_alloc.c                 | 247 +++++++++++++--
 xen/include/public/domctl.h             |   6 +-
 xen/include/xen/llc_coloring.h          |  63 ++++
 xen/include/xen/mm.h                    |  33 ++
 xen/include/xen/sched.h                 |   9 +
 35 files changed, 1552 insertions(+), 55 deletions(-)
 create mode 100644 docs/misc/arm/cache-coloring.rst
 create mode 100644 xen/arch/arm/include/asm/llc_coloring.h
 create mode 100644 xen/arch/arm/llc_coloring.c
 create mode 100644 xen/include/xen/llc_coloring.h

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483030.748979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3b-0002JA-SP; Mon, 23 Jan 2023 15:48:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483030.748979; Mon, 23 Jan 2023 15:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3b-0002Ih-N5; Mon, 23 Jan 2023 15:48:27 +0000
Received: by outflank-mailman (input) for mailman id 483030;
 Mon, 23 Jan 2023 15:48:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kihy=5U=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pJz3Z-0000MU-Ob
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:48:25 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 587185a9-9b35-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 16:48:13 +0100 (CET)
Received: by mail-ej1-x635.google.com with SMTP id u19so31603883ejm.8
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:48:22 -0800 (PST)
Received: from carlo-ubuntu.mo54.unimo.it (nonato.mo54.unimo.it.
 [155.185.85.8]) by smtp.gmail.com with ESMTPSA id
 r2-20020a17090609c200b007bd28b50305sm22170978eje.200.2023.01.23.07.48.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 07:48:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 587185a9-9b35-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=unwT7Y7OWtE54v3kQMluasF3dALE7ek/pArNr4zV0vs=;
        b=x+Qbc6lFifFuxZQCKmzfNTnCyqgSkVVZQGP9bUpYY0PCMpA4ZcSMPDXyehIT2yz/Lc
         UPVQQuD3j+nnvWHIxgiwVCGHJCx3vPzedA2q8QDn+KR22P9m3oBoot529sg2TnYd6LIT
         C5VuP4XbJX4bgzXliwU3hsT0G8VKMC+261Xd79faml3WZn+ZbBp71XdKrHo9qeHPNLlN
         fAaXTQHyy52cohduEcBK7uCAGgQcz5IhwQulmLD2LAGnRrUcc66XnTtdBTQ0Y5jhKSGU
         9yXmsPCQDE6oTfiTbND/6fIgvHRCs7ImIJNURLNhXSKqzxnylGXJowGnImE7jOSoRgCL
         pcMw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=unwT7Y7OWtE54v3kQMluasF3dALE7ek/pArNr4zV0vs=;
        b=naHuQhY7GPYR2eJQjMflwmVdypqDVRRg7wMMfD3mrjwCwt4XiOqiCcIbbQ66r71o4M
         riUnI/efqqNE2pfObg3DMI8H53q8MBqqi/DP5OmgJHdLd6LdLkbw3PpjSFi8NzAbFYTd
         VD7C6rX/BPhgETZ/6Py5aOnBsdeAe0d4QthsK6QiyOmpNGaPFJDeOvGQwuBtPUTiJrWc
         apc7ClwtNA3TKA/jziu8vMIiWZmNjUgiNg1BkkMBV55FWFEOYZNr1CxxhEK2M2Eg4j0J
         tpQL6D7HpshABevelMc/GFTOG1s9koaHoTDmkomkB3jplBA/t6deDwVoL+Gk5OMCxMWD
         lbJQ==
X-Gm-Message-State: AFqh2krmWcYAQ2sC/05Srrp5edc7X9C8JnXtUsjLHVWmxfsdB7BplWuE
	vTUMJ0TBkzkdRTIltkTi5L8MH3hzaawsBYl1
X-Google-Smtp-Source: AMrXdXsxI82Ko73dhXL16y/EF4Cb/Hoz/sbENqdlcq7QbgIMixTIFXzhJsKfrD6J6EVzC9GK6WZXzw==
X-Received: by 2002:a17:907:7da0:b0:86d:67b0:6292 with SMTP id oz32-20020a1709077da000b0086d67b06292mr38167762ejc.73.1674488901726;
        Mon, 23 Jan 2023 07:48:21 -0800 (PST)
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Luca Miccio <lucmiccio@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Marco Solieri <marco.solieri@minervasys.tech>,
	Carlo Nonato <carlo.nonato@minervasys.tech>
Subject: [PATCH v4 07/11] xen: add cache coloring allocator for domains
Date: Mon, 23 Jan 2023 16:47:31 +0100
Message-Id: <20230123154735.74832-8-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Luca Miccio <lucmiccio@gmail.com>

This commit adds a new memory page allocator that implements the cache
coloring mechanism. The allocation algorithm follows the given domain color
configuration and maximizes contiguity in the page selection of multiple
subsequent requests.

Pages are stored in a color-indexed array of lists, each one sorted by
machine address, which is referred to as the "colored heap". Those lists are
filled by a simple init function which computes the color of each page.
When a domain requests a page, the allocator takes one from the lists
whose colors match the domain configuration. It chooses the page with the
lowest machine address, so that contiguous pages are sequentially
allocated when this is made possible by a color assignment which includes
adjacent colors.

The allocator can only handle requests of order 0, since a single color
granularity is represented in memory by one page.

The buddy allocator must coexist with the colored one because the Xen heap
isn't colored. For this reason a new Kconfig option and a command line
parameter are added to let the user set the amount of memory reserved for
the buddy allocator. Even when cache coloring is enabled, this memory
isn't managed by the colored allocator.

Colored heap information is dumped in the dump_heap() debug-key function.

Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
---
v4:
- moved colored allocator code after buddy allocator because it now has
  some dependencies on buddy functions
- buddy_alloc_size is now used only by the colored allocator
- fixed a bug that allowed the buddy to merge pages when they were colored
- free_color_heap_page() now calls mark_page_free()
- free_color_heap_page() uses the frametable array for faster searches
- added a FIXME comment for the linear search in free_color_heap_page()
- removed alloc_color_domheap_page() to let the colored allocator exploit
  some more buddy allocator code
- alloc_color_heap_page() now allocs min address pages first
- reduced the mess in end_boot_allocator(): use the first loop for
  init_color_heap_pages()
- fixed page_list_add_prev() (list.h) since it was doing the opposite of
  what it was supposed to do
- fixed page_list_add_prev() (non list.h) to also check for next's existence
- removed unused page_list_add_next()
- moved p2m code in another patch
---
 docs/misc/arm/cache-coloring.rst  |  49 ++++++
 docs/misc/xen-command-line.pandoc |  14 ++
 xen/arch/Kconfig                  |  12 ++
 xen/arch/arm/include/asm/mm.h     |   3 +
 xen/arch/arm/llc_coloring.c       |  12 ++
 xen/common/page_alloc.c           | 247 +++++++++++++++++++++++++++---
 xen/include/xen/llc_coloring.h    |   5 +
 xen/include/xen/mm.h              |  33 ++++
 8 files changed, 355 insertions(+), 20 deletions(-)

diff --git a/docs/misc/arm/cache-coloring.rst b/docs/misc/arm/cache-coloring.rst
index a28f75dc26..d56dafe815 100644
--- a/docs/misc/arm/cache-coloring.rst
+++ b/docs/misc/arm/cache-coloring.rst
@@ -15,10 +15,16 @@ In Kconfig:
   value meaning and when it should be changed).
 
         CONFIG_NR_LLC_COLORS=<n>
+- If needed, change the amount of memory reserved for the buddy allocator
+  (see `Colored allocator and buddy allocator`_).
+
+        CONFIG_BUDDY_ALLOCATOR_SIZE=<n>
 
 Compile Xen and the toolstack and then:
 
 - Set the `llc-coloring=on` command line option.
+- If needed, set the amount of memory reserved for the buddy allocator
+  via the appropriate command line option.
 - Set `Coloring parameters and domain configurations`_.
 
 Background
@@ -162,6 +168,18 @@ Dom0less configurations (relative documentation in
 **Note:** If no color configuration is provided for a domain, the default one,
 which corresponds to all available colors, is used instead.
 
+Colored allocator and buddy allocator
+*************************************
+
+The colored allocator distributes pages based on the color configurations of
+domains so that each domain only gets pages of its own colors.
+The colored allocator is meant as an alternative to the buddy allocator because
+its allocation policy is by definition incompatible with the generic one. Since
+the Xen heap is not colored yet, we need to support the coexistence of the two
+allocators, and some memory must be left for the buddy one.
+The buddy allocator memory can be reserved from the Xen Kconfig or with the
+help of a command-line option.
+
 Known issues and limitations
 ****************************
 
@@ -172,3 +190,34 @@ In the domain configuration, "xen,static-mem" allows memory to be statically
 allocated to the domain. This isn't possibile when LLC coloring is enabled,
 because that memory can't be guaranteed to use only colors assigned to the
 domain.
+
+Cache coloring is intended only for embedded systems
+####################################################
+
+The current implementation aims to satisfy the need for predictability in
+embedded systems with small amounts of memory to be managed in a colored way.
+Given that, some shortcuts are taken in the development. Expect worse
+performance on larger systems.
+
+Colored allocator can only make use of order-0 pages
+####################################################
+
+The cache coloring technique relies on memory mappings and on the smallest
+amount of memory that can be mapped to achieve the maximum number of colors
+(cache partitions) possible. This amount is what is normally called a page and,
+in Xen terminology, the order-0 page is the smallest one. The fairly simple
+colored allocator currently implemented makes use only of such pages.
+It must be said that a more complex one could, in theory, adopt higher order
+pages if the color selection contained adjacent colors. Two subsequent colors,
+for example, can be represented by an order-1 page, four colors correspond to
+an order-2 page, etc.
+
+Failure to boot colored DomUs with large memory sizes
+#####################################################
+
+If the Linux kernel used for Dom0 does not contain the upstream commit
+3941552aec1e04d63999988a057ae09a1c56ebeb and uses the hypercall buffer device,
+colored DomUs with memory size larger than 127 MB cannot be created. This is
+caused by the default limit of this buffer of 64 pages. The solution is to
+manually apply the above patch, or to check if there is an updated version of
+the kernel in use for Dom0 that contains this change.
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index eb105c03af..a89c0cef61 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -299,6 +299,20 @@ can be maintained with the pv-shim mechanism.
     cause Xen not to use Indirect Branch Tracking even when support is
     available in hardware.
 
+### buddy-alloc-size (arm64)
+> `= <size>`
+
+> Default: `64M`
+
+Amount of memory reserved for the buddy allocator when the colored allocator
+is active. This option is available only when `CONFIG_LLC_COLORING` is
+enabled. The colored allocator is meant as an alternative to the buddy
+allocator, because its allocation policy is by definition incompatible with
+the generic one. Since the Xen heap is not colored yet, we need to
+support the coexistence of the two allocators for now. This optional
+parameter, intended for experts only, sets the amount of memory reserved
+for the buddy allocator.
+
 ### clocksource (x86)
 > `= pit | hpet | acpi | tsc`
 
diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
index 39c23f2528..4378b5f338 100644
--- a/xen/arch/Kconfig
+++ b/xen/arch/Kconfig
@@ -45,3 +45,15 @@ config NR_LLC_COLORS
 	  more than what needed in the general case.
 	  Note that if, at any time, a color configuration with more colors than
 	  the maximum is employed, an error is produced.
+
+config BUDDY_ALLOCATOR_SIZE
+	int "Buddy allocator reserved memory size (MiB)"
+	default "64"
+	depends on LLC_COLORING
+	help
+	  Amount of memory reserved for the buddy allocator to work alongside
+	  the colored one. The colored allocator is meant as an alternative to the
+	  buddy allocator because its allocation policy is by definition
+	  incompatible with the generic one. Since the Xen heap is not colored yet,
+	  we need to support the coexistence of the two allocators and some memory
+	  must be left for the buddy one.
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index bff6923f3e..596293f792 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -128,6 +128,9 @@ struct page_info
 #else
 #define PGC_static     0
 #endif
+/* Page is cache colored */
+#define _PGC_colored      PG_shift(4)
+#define PGC_colored       PG_mask(1, 4)
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
diff --git a/xen/arch/arm/llc_coloring.c b/xen/arch/arm/llc_coloring.c
index ba5279a022..22612d455b 100644
--- a/xen/arch/arm/llc_coloring.c
+++ b/xen/arch/arm/llc_coloring.c
@@ -33,6 +33,8 @@ static paddr_t __ro_after_init addr_col_mask;
 static unsigned int __ro_after_init dom0_colors[CONFIG_NR_LLC_COLORS];
 static unsigned int __ro_after_init dom0_num_colors;
 
+#define addr_to_color(addr) (((addr) & addr_col_mask) >> PAGE_SHIFT)
+
 /*
  * Parse the coloring configuration given in the buf string, following the
  * syntax below.
@@ -299,6 +301,16 @@ unsigned int *llc_colors_from_str(const char *str, unsigned int *num_colors)
     return colors;
 }
 
+unsigned int page_to_llc_color(const struct page_info *pg)
+{
+    return addr_to_color(page_to_maddr(pg));
+}
+
+unsigned int get_nr_llc_colors(void)
+{
+    return nr_colors;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index e40473f71e..59bd6bcdac 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -126,6 +126,7 @@
 #include <xen/irq.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
+#include <xen/llc_coloring.h>
 #include <xen/mm.h>
 #include <xen/nodemask.h>
 #include <xen/numa.h>
@@ -158,6 +159,10 @@
 #define PGC_static 0
 #endif
 
+#ifndef PGC_colored
+#define PGC_colored 0
+#endif
+
 #ifndef PGT_TYPE_INFO_INITIALIZER
 #define PGT_TYPE_INFO_INITIALIZER 0
 #endif
@@ -924,6 +929,13 @@ static struct page_info *get_free_buddy(unsigned int zone_lo,
     }
 }
 
+/* Initialise fields which have other uses for free pages. */
+static void init_free_page_fields(struct page_info *pg)
+{
+    pg->u.inuse.type_info = PGT_TYPE_INFO_INITIALIZER;
+    page_set_owner(pg, NULL);
+}
+
 /* Allocate 2^@order contiguous pages. */
 static struct page_info *alloc_heap_pages(
     unsigned int zone_lo, unsigned int zone_hi,
@@ -1032,10 +1044,7 @@ static struct page_info *alloc_heap_pages(
             accumulate_tlbflush(&need_tlbflush, &pg[i],
                                 &tlbflush_timestamp);
 
-        /* Initialise fields which have other uses for free pages. */
-        pg[i].u.inuse.type_info = PGT_TYPE_INFO_INITIALIZER;
-        page_set_owner(&pg[i], NULL);
-
+        init_free_page_fields(&pg[i]);
     }
 
     spin_unlock(&heap_lock);
@@ -1488,7 +1497,7 @@ static void free_heap_pages(
             /* Merge with predecessor block? */
             if ( !mfn_valid(page_to_mfn(predecessor)) ||
                  !page_state_is(predecessor, free) ||
-                 (predecessor->count_info & PGC_static) ||
+                 (predecessor->count_info & (PGC_static | PGC_colored)) ||
                  (PFN_ORDER(predecessor) != order) ||
                  (page_to_nid(predecessor) != node) )
                 break;
@@ -1512,7 +1521,7 @@ static void free_heap_pages(
             /* Merge with successor block? */
             if ( !mfn_valid(page_to_mfn(successor)) ||
                  !page_state_is(successor, free) ||
-                 (successor->count_info & PGC_static) ||
+                 (successor->count_info & (PGC_static | PGC_colored)) ||
                  (PFN_ORDER(successor) != order) ||
                  (page_to_nid(successor) != node) )
                 break;
@@ -1928,6 +1937,182 @@ static unsigned long avail_heap_pages(
     return free_pages;
 }
 
+#ifdef CONFIG_LLC_COLORING
+/*************************
+ * COLORED SIDE-ALLOCATOR
+ *
+ * Pages are grouped by LLC color in lists which are globally referred to as the
+ * color heap. Lists are populated in end_boot_allocator().
+ * After initialization there will be N lists where N is the number of
+ * available colors on the platform.
+ */
+typedef struct page_list_head colored_pages_t;
+static colored_pages_t *__ro_after_init _color_heap;
+static unsigned long *__ro_after_init free_colored_pages;
+
+/* Memory required for buddy allocator to work with colored one */
+static unsigned long __initdata buddy_alloc_size =
+    CONFIG_BUDDY_ALLOCATOR_SIZE << 20;
+
+#define color_heap(color) (&_color_heap[color])
+
+static bool is_free_colored_page(struct page_info *page)
+{
+    return page && (page->count_info & PGC_state_free) &&
+                   (page->count_info & PGC_colored);
+}
+
+/*
+ * The {free|alloc}_color_heap_page() functions overwrite pg->count_info, but
+ * they do it in the same way as the buddy allocator corresponding functions do:
+ * protecting the access with a critical section using heap_lock.
+ */
+static void free_color_heap_page(struct page_info *pg)
+{
+    unsigned int color = page_to_llc_color(pg), nr_colors = get_nr_llc_colors();
+    unsigned long pdx = page_to_pdx(pg);
+    colored_pages_t *head = color_heap(color);
+    struct page_info *prev = pdx >= nr_colors ? pg - nr_colors : NULL;
+    struct page_info *next = pdx + nr_colors < FRAMETABLE_NR ? pg + nr_colors
+                                                             : NULL;
+
+    spin_lock(&heap_lock);
+
+    if ( is_free_colored_page(prev) )
+        next = page_list_next(prev, head);
+    else if ( !is_free_colored_page(next) )
+    {
+        /*
+         * FIXME: linear search is slow, but also note that the frametable is
+         * used to find free pages in the immediate neighborhood of pg in
+         * constant time. When freeing contiguous pages, the insert position of
+         * most of them is found without the linear search.
+         */
+        page_list_for_each( next, head )
+        {
+            if ( page_to_maddr(next) > page_to_maddr(pg) )
+                break;
+        }
+    }
+
+    mark_page_free(pg, page_to_mfn(pg));
+    pg->count_info |= PGC_colored;
+    free_colored_pages[color]++;
+    page_list_add_prev(pg, next, head);
+
+    spin_unlock(&heap_lock);
+}
+
+static struct page_info *alloc_color_heap_page(unsigned int memflags,
+                                               struct domain *d)
+{
+    struct page_info *pg = NULL;
+    unsigned int i, color;
+    bool need_tlbflush = false;
+    uint32_t tlbflush_timestamp = 0;
+
+    spin_lock(&heap_lock);
+
+    for ( i = 0; i < d->num_llc_colors; i++ )
+    {
+        struct page_info *tmp;
+
+        if ( page_list_empty(color_heap(d->llc_colors[i])) )
+            continue;
+
+        tmp = page_list_first(color_heap(d->llc_colors[i]));
+        if ( !pg || page_to_maddr(tmp) < page_to_maddr(pg) )
+        {
+            pg = tmp;
+            color = d->llc_colors[i];
+        }
+    }
+
+    if ( !pg )
+    {
+        spin_unlock(&heap_lock);
+        return NULL;
+    }
+
+    pg->count_info = PGC_state_inuse | PGC_colored;
+    free_colored_pages[color]--;
+    page_list_del(pg, color_heap(color));
+
+    if ( !(memflags & MEMF_no_tlbflush) )
+        accumulate_tlbflush(&need_tlbflush, pg, &tlbflush_timestamp);
+
+    init_free_page_fields(pg);
+
+    spin_unlock(&heap_lock);
+
+    if ( need_tlbflush )
+        filtered_flush_tlb_mask(tlbflush_timestamp);
+
+    flush_page_to_ram(mfn_x(page_to_mfn(pg)),
+                      !(memflags & MEMF_no_icache_flush));
+
+    return pg;
+}
+
+static void __init init_color_heap_pages(struct page_info *pg,
+                                         unsigned long nr_pages)
+{
+    unsigned int i;
+
+    if ( buddy_alloc_size )
+    {
+        unsigned long buddy_pages = min(PFN_DOWN(buddy_alloc_size), nr_pages);
+
+        init_heap_pages(pg, buddy_pages);
+        nr_pages -= buddy_pages;
+        buddy_alloc_size -= buddy_pages << PAGE_SHIFT;
+        pg += buddy_pages;
+    }
+
+    if ( !_color_heap )
+    {
+        unsigned int nr_colors = get_nr_llc_colors();
+
+        _color_heap = xmalloc_array(colored_pages_t, nr_colors);
+        BUG_ON(!_color_heap);
+        free_colored_pages = xzalloc_array(unsigned long, nr_colors);
+        BUG_ON(!free_colored_pages);
+
+        for ( i = 0; i < nr_colors; i++ )
+            INIT_PAGE_LIST_HEAD(color_heap(i));
+    }
+
+    printk(XENLOG_DEBUG
+           "Init color heap with %lu pages starting from: %#"PRIx64"\n",
+           nr_pages, page_to_maddr(pg));
+
+    for ( i = 0; i < nr_pages; i++ )
+        free_color_heap_page(&pg[i]);
+}
+
+static void dump_color_heap(void)
+{
+    unsigned int color;
+
+    printk("Dumping color heap info\n");
+    for ( color = 0; color < get_nr_llc_colors(); color++ )
+        printk("Color heap[%u]: %lu pages\n", color, free_colored_pages[color]);
+}
+
+#else /* !CONFIG_LLC_COLORING */
+
+static void free_color_heap_page(struct page_info *pg) {}
+static void __init init_color_heap_pages(struct page_info *pg,
+                                         unsigned long nr_pages) {}
+static struct page_info *alloc_color_heap_page(unsigned int memflags,
+                                               struct domain *d)
+{
+    return NULL;
+}
+static void dump_color_heap(void) {}
+
+#endif /* CONFIG_LLC_COLORING */
+
 void __init end_boot_allocator(void)
 {
     unsigned int i;
@@ -1936,12 +2121,19 @@ void __init end_boot_allocator(void)
     for ( i = 0; i < nr_bootmem_regions; i++ )
     {
         struct bootmem_region *r = &bootmem_region_list[i];
-        if ( (r->s < r->e) &&
-             (mfn_to_nid(_mfn(r->s)) == cpu_to_node(0)) )
+        if ( r->s < r->e )
         {
-            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
-            r->e = r->s;
-            break;
+            if ( llc_coloring_enabled )
+            {
+                init_color_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
+                r->e = r->s;
+            }
+            else if ( mfn_to_nid(_mfn(r->s)) == cpu_to_node(0) )
+            {
+                init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
+                r->e = r->s;
+                break;
+            }
         }
     }
     for ( i = nr_bootmem_regions; i-- > 0; )
@@ -2332,6 +2524,7 @@ int assign_pages(
 {
     int rc = 0;
     unsigned int i;
+    unsigned long allowed_flags = (PGC_extra | PGC_static | PGC_colored);
 
     spin_lock(&d->page_alloc_lock);
 
@@ -2349,7 +2542,7 @@ int assign_pages(
 
         for ( i = 0; i < nr; i++ )
         {
-            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_static)));
+            ASSERT(!(pg[i].count_info & ~allowed_flags));
             if ( pg[i].count_info & PGC_extra )
                 extra_pages++;
         }
@@ -2408,8 +2601,8 @@ int assign_pages(
         ASSERT(page_get_owner(&pg[i]) == NULL);
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
-        pg[i].count_info =
-            (pg[i].count_info & (PGC_extra | PGC_static)) | PGC_allocated | 1;
+        pg[i].count_info = (pg[i].count_info & allowed_flags) |
+                           PGC_allocated | 1;
 
         page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
@@ -2443,7 +2636,14 @@ struct page_info *alloc_domheap_pages(
     if ( memflags & MEMF_no_owner )
         memflags |= MEMF_no_refcount;
 
-    if ( !dma_bitsize )
+    /* Only domains are supported for coloring */
+    if ( d && is_domain_llc_colored(d) )
+    {
+        /* Colored allocation must be done on 0 order */
+        if ( order || (pg = alloc_color_heap_page(memflags, d)) == NULL )
+            return NULL;
+    }
+    else if ( !dma_bitsize )
         memflags &= ~MEMF_no_dma;
     else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
         pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
@@ -2468,7 +2668,10 @@ struct page_info *alloc_domheap_pages(
         }
         if ( assign_page(pg, order, d, memflags) )
         {
-            free_heap_pages(pg, order, memflags & MEMF_no_scrub);
+            if ( pg->count_info & PGC_colored )
+                free_color_heap_page(pg);
+            else
+                free_heap_pages(pg, order, memflags & MEMF_no_scrub);
             return NULL;
         }
     }
@@ -2551,7 +2754,10 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
             scrub = 1;
         }
 
-        free_heap_pages(pg, order, scrub);
+        if ( pg->count_info & PGC_colored )
+            free_color_heap_page(pg);
+        else
+            free_heap_pages(pg, order, scrub);
     }
 
     if ( drop_dom_ref )
@@ -2658,6 +2864,9 @@ static void cf_check dump_heap(unsigned char key)
             continue;
         printk("Node %d has %lu unscrubbed pages\n", i, node_need_scrub[i]);
     }
+
+    if ( llc_coloring_enabled )
+        dump_color_heap();
 }
 
 static __init int cf_check register_heap_trigger(void)
@@ -2790,9 +2999,7 @@ static bool prepare_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
          * to PGC_state_inuse.
          */
         pg[i].count_info = PGC_static | PGC_state_inuse;
-        /* Initialise fields which have other uses for free pages. */
-        pg[i].u.inuse.type_info = PGT_TYPE_INFO_INITIALIZER;
-        page_set_owner(&pg[i], NULL);
+        init_free_page_fields(&pg[i]);
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/xen/llc_coloring.h b/xen/include/xen/llc_coloring.h
index 2855f38296..2e9abf3b3a 100644
--- a/xen/include/xen/llc_coloring.h
+++ b/xen/include/xen/llc_coloring.h
@@ -17,6 +17,8 @@
 
 #include <asm/llc_coloring.h>
 
+struct page_info;
+
 extern bool llc_coloring_enabled;
 
 int domain_llc_coloring_init(struct domain *d, unsigned int *colors,
@@ -26,6 +28,9 @@ void domain_dump_llc_colors(struct domain *d);
 
 unsigned int *llc_colors_from_guest(struct xen_domctl_createdomain *config);
 
+unsigned int page_to_llc_color(const struct page_info *pg);
+unsigned int get_nr_llc_colors(void);
+
 #else
 
 #define llc_coloring_enabled (false)
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 9d14aed74b..8ea72c744e 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -299,6 +299,33 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
     }
     head->tail = page;
 }
+static inline void
+_page_list_add(struct page_info *page, struct page_info *prev,
+               struct page_info *next)
+{
+    page->list.prev = page_to_pdx(prev);
+    page->list.next = page_to_pdx(next);
+    prev->list.next = page_to_pdx(page);
+    next->list.prev = page_to_pdx(page);
+}
+static inline void
+page_list_add_prev(struct page_info *page, struct page_info *next,
+                   struct page_list_head *head)
+{
+    struct page_info *prev;
+
+    if ( !next )
+    {
+        page_list_add_tail(page, head);
+        return;
+    }
+
+    prev = page_list_prev(next, head);
+    if ( !prev )
+        page_list_add(page, head);
+    else
+        _page_list_add(page, prev, next);
+}
 static inline bool_t
 __page_list_del_head(struct page_info *page, struct page_list_head *head,
                      struct page_info *next, struct page_info *prev)
@@ -451,6 +478,12 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
     list_add_tail(&page->list, head);
 }
 static inline void
+page_list_add_prev(struct page_info *page, struct page_info *next,
+                   struct page_list_head *head)
+{
+    list_add_tail(&page->list, &next->list);
+}
+static inline void
 page_list_del(struct page_info *page, struct page_list_head *head)
 {
     list_del(&page->list);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483025.748929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3V-0000rW-0g; Mon, 23 Jan 2023 15:48:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483025.748929; Mon, 23 Jan 2023 15:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3U-0000qx-Ro; Mon, 23 Jan 2023 15:48:20 +0000
Received: by outflank-mailman (input) for mailman id 483025;
 Mon, 23 Jan 2023 15:48:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kihy=5U=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pJz3T-0000MU-Ag
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:48:19 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 55591b41-9b35-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 16:48:08 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id bk15so31563027ejb.9
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:48:17 -0800 (PST)
Received: from carlo-ubuntu.mo54.unimo.it (nonato.mo54.unimo.it.
 [155.185.85.8]) by smtp.gmail.com with ESMTPSA id
 r2-20020a17090609c200b007bd28b50305sm22170978eje.200.2023.01.23.07.48.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 07:48:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55591b41-9b35-11ed-b8d1-410ff93cb8f0
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Luca Miccio <lucmiccio@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Marco Solieri <marco.solieri@minervasys.tech>,
	Carlo Nonato <carlo.nonato@minervasys.tech>
Subject: [PATCH v4 03/11] xen/arm: add Dom0 cache coloring support
Date: Mon, 23 Jan 2023 16:47:27 +0100
Message-Id: <20230123154735.74832-4-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Luca Miccio <lucmiccio@gmail.com>

This commit allows the user to set the cache coloring configuration for
Dom0 via a command line parameter.
Since cache coloring and static memory are incompatible, directly mapping
Dom0 isn't possible when coloring is enabled.

A common configuration syntax for cache colors is also introduced.
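The common syntax (a comma-separated list of colors and inclusive hyphen ranges, ascending and without duplicates) can be sketched as follows. This is an illustrative Python model of the rules, not the C parser added by this series; the function name and the default color limit are mine:

```python
# Hypothetical sketch of the color-selection syntax, e.g. "1-2,5-8".
# Rules modeled from the docs: ascending order, no duplicates, inclusive
# ranges, all colors below the platform limit (16 assumed here).
def parse_colors(config: str, nr_colors: int = 16) -> list[int]:
    colors: list[int] = []
    for part in config.split(","):
        lo, _, hi = part.partition("-")
        start, end = int(lo), int(hi) if hi else int(lo)
        if start > end or end >= nr_colors:
            raise ValueError(f"invalid range: {part}")
        if colors and start <= colors[-1]:
            raise ValueError(f"colors not ascending/unique: {part}")
        colors.extend(range(start, end + 1))
    return colors
```

For instance, "1-2,5-8" expands to the selection [1, 2, 5, 6, 7, 8], matching the table in the documentation hunk below.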

Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
---
v4:
- dom0 colors are dynamically allocated as for any other domain
  (colors are duplicated in dom0_colors and in the new array, but logic
  is simpler)
---
 docs/misc/arm/cache-coloring.rst        | 32 ++++++++++++++++++++++---
 xen/arch/arm/domain_build.c             | 17 +++++++++++--
 xen/arch/arm/include/asm/llc_coloring.h |  4 ++++
 xen/arch/arm/llc_coloring.c             | 14 +++++++++++
 4 files changed, 62 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/cache-coloring.rst b/docs/misc/arm/cache-coloring.rst
index 0244d2f606..c2e0e87426 100644
--- a/docs/misc/arm/cache-coloring.rst
+++ b/docs/misc/arm/cache-coloring.rst
@@ -83,12 +83,38 @@ manually set the way size is left to the user to overcome probing failures
 or for debugging/testing purposes. See the `Coloring parameters and domain
 configurations`_ section for more information.
 
+Colors selection format
+***********************
+
+Regardless of the memory pool that has to be colored (Xen, Dom0/DomUs),
+the color selection can be expressed using the same syntax. In particular, a
+comma-separated list of colors or ranges of colors is used.
+Ranges are hyphen-separated intervals (such as `0-4`) and are inclusive on both
+sides.
+
+Note that:
+ - no spaces are allowed between values.
+ - no overlapping ranges or duplicated colors are allowed.
+ - values must be written in ascending order.
+
+Examples:
+
++---------------------+-----------------------------------+
+|**Configuration**    |**Actual selection**               |
++---------------------+-----------------------------------+
+|  1-2,5-8            | [1, 2, 5, 6, 7, 8]                |
++---------------------+-----------------------------------+
+|  4-8,10,11,12       | [4, 5, 6, 7, 8, 10, 11, 12]       |
++---------------------+-----------------------------------+
+|  0                  | [0]                               |
++---------------------+-----------------------------------+
+
 Coloring parameters and domain configurations
 *********************************************
 
-LLC way size (as previously discussed) can be set using the appropriate command
-line parameter. See the relevant documentation in
-"docs/misc/xen-command-line.pandoc".
+LLC way size (as previously discussed) and Dom0 colors can be set using the
+appropriate command line parameters. See the relevant documentation
+in "docs/misc/xen-command-line.pandoc".
 
 **Note:** If no color configuration is provided for a domain, the default one,
 which corresponds to all available colors, is used instead.
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f35f4d2456..093d4ad6f6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2,6 +2,7 @@
 #include <xen/init.h>
 #include <xen/compile.h>
 #include <xen/lib.h>
+#include <xen/llc_coloring.h>
 #include <xen/mm.h>
 #include <xen/param.h>
 #include <xen/domain_page.h>
@@ -4014,7 +4015,10 @@ static int __init construct_dom0(struct domain *d)
     /* type must be set before allocate_memory */
     d->arch.type = kinfo.type;
 #endif
-    allocate_memory_11(d, &kinfo);
+    if ( is_domain_llc_colored(d) )
+        allocate_memory(d, &kinfo);
+    else
+        allocate_memory_11(d, &kinfo);
     find_gnttab_region(d, &kinfo);
 
 #ifdef CONFIG_STATIC_SHM
@@ -4060,6 +4064,8 @@ void __init create_dom0(void)
         .max_maptrack_frames = -1,
         .grant_opts = XEN_DOMCTL_GRANT_version(opt_gnttab_max_version),
     };
+    unsigned int *llc_colors = NULL;
+    unsigned int num_llc_colors = 0, flags = CDF_privileged;
 
     /* The vGIC for DOM0 is exactly emulating the hardware GIC */
     dom0_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
@@ -4076,7 +4082,14 @@ void __init create_dom0(void)
     if ( iommu_enabled )
         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
 
-    dom0 = domain_create(0, &dom0_cfg, CDF_privileged | CDF_directmap);
+    if ( llc_coloring_enabled )
+        llc_colors = dom0_llc_colors(&num_llc_colors);
+    else
+        flags |= CDF_directmap;
+
+    dom0 = domain_create_llc_colored(0, &dom0_cfg, flags, llc_colors,
+                                     num_llc_colors);
+
     if ( IS_ERR(dom0) || (alloc_dom0_vcpu0(dom0) == NULL) )
         panic("Error creating domain 0\n");
 
diff --git a/xen/arch/arm/include/asm/llc_coloring.h b/xen/arch/arm/include/asm/llc_coloring.h
index c7985c8fd0..382ff7de47 100644
--- a/xen/arch/arm/include/asm/llc_coloring.h
+++ b/xen/arch/arm/include/asm/llc_coloring.h
@@ -17,9 +17,13 @@
 
 bool __init llc_coloring_init(void);
 
+unsigned int *dom0_llc_colors(unsigned int *num_colors);
+
 #else /* !CONFIG_LLC_COLORING */
 
 static inline bool __init llc_coloring_init(void) { return true; }
+static inline unsigned int *dom0_llc_colors(
+    unsigned int *num_colors) { return NULL; }
 
 #endif /* CONFIG_LLC_COLORING */
 
diff --git a/xen/arch/arm/llc_coloring.c b/xen/arch/arm/llc_coloring.c
index 44b601915e..51f057d7c9 100644
--- a/xen/arch/arm/llc_coloring.c
+++ b/xen/arch/arm/llc_coloring.c
@@ -261,6 +261,20 @@ void domain_dump_llc_colors(struct domain *d)
     print_colors(d->llc_colors, d->num_llc_colors);
 }
 
+unsigned int *dom0_llc_colors(unsigned int *num_colors)
+{
+    unsigned int *colors;
+
+    if ( !dom0_num_colors )
+        return NULL;
+
+    colors = alloc_colors(dom0_num_colors);
+    memcpy(colors, dom0_colors, sizeof(unsigned int) * dom0_num_colors);
+    *num_colors = dom0_num_colors;
+
+    return colors;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483024.748916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3T-0000S4-Py; Mon, 23 Jan 2023 15:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483024.748916; Mon, 23 Jan 2023 15:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3T-0000RA-Gx; Mon, 23 Jan 2023 15:48:19 +0000
Received: by outflank-mailman (input) for mailman id 483024;
 Mon, 23 Jan 2023 15:48:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kihy=5U=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pJz3S-00006V-Bh
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:48:18 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5a0824ac-9b35-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 16:48:16 +0100 (CET)
Received: by mail-ej1-x632.google.com with SMTP id kt14so31671499ejc.3
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:48:16 -0800 (PST)
Received: from carlo-ubuntu.mo54.unimo.it (nonato.mo54.unimo.it.
 [155.185.85.8]) by smtp.gmail.com with ESMTPSA id
 r2-20020a17090609c200b007bd28b50305sm22170978eje.200.2023.01.23.07.48.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 07:48:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a0824ac-9b35-11ed-91b6-6bf2151ebd3b
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Marco Solieri <marco.solieri@minervasys.tech>
Subject: [PATCH v4 02/11] xen/arm: add cache coloring initialization
Date: Mon, 23 Jan 2023 16:47:26 +0100
Message-Id: <20230123154735.74832-3-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit implements functions declared in the LLC coloring common header
for arm64 and adds documentation. It also adds two command line options: a
runtime switch for the cache coloring feature and the LLC way size
parameter.

The feature init function auto-probes the cache layout to retrieve the LLC
way size, which is used to compute the number of platform colors. It also
adds a debug-key to dump general cache coloring info.
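The way-size arithmetic described above can be sketched as follows. This is a hedged illustration of the computation (function and parameter names are mine, not Xen's): the number of platform colors is the LLC way size divided by the page size, aligned down to a power of 2 as the coloring implementation requires.

```python
# Illustrative model: number of LLC colors from the way size.
# way size = total LLC size / number of associative ways;
# colors   = way size / page size, aligned down to a power of 2.
def nr_llc_colors(llc_way_size: int, page_size: int = 4096) -> int:
    n = llc_way_size // page_size
    if n == 0:
        return 0
    # Align down to the previous power of 2.
    return 1 << (n.bit_length() - 1)
```

E.g. for the Cortex-A53 example in the docs (16-way associative 1 MiB LLC, 4 KiB pages), the way size is 64 KiB and the platform can isolate 16 colors.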

The domain init function, instead, allocates default colors if needed and
checks the provided configuration for errors.

Note that, as of this patch, there is no mechanism yet for actually
configuring cache colors for domains, so all configurations fall back
to the default one.

Based on original work from: Luca Miccio <lucmiccio@gmail.com>

Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
---
v4:
- added "llc-coloring" cmdline option for the boot-time switch
- dom0 colors are now checked during domain init as for any other domain
- fixed processor.h masks bit width
- check for overflow in parse_color_config()
- check_colors() now checks also that colors are sorted and unique
---
 docs/misc/arm/cache-coloring.rst        | 105 +++++++++
 docs/misc/xen-command-line.pandoc       |  37 ++++
 xen/arch/arm/Kconfig                    |   1 +
 xen/arch/arm/Makefile                   |   1 +
 xen/arch/arm/include/asm/llc_coloring.h |  36 ++++
 xen/arch/arm/include/asm/processor.h    |  16 ++
 xen/arch/arm/llc_coloring.c             | 272 ++++++++++++++++++++++++
 xen/arch/arm/setup.c                    |   7 +
 8 files changed, 475 insertions(+)
 create mode 100644 docs/misc/arm/cache-coloring.rst
 create mode 100644 xen/arch/arm/include/asm/llc_coloring.h
 create mode 100644 xen/arch/arm/llc_coloring.c

diff --git a/docs/misc/arm/cache-coloring.rst b/docs/misc/arm/cache-coloring.rst
new file mode 100644
index 0000000000..0244d2f606
--- /dev/null
+++ b/docs/misc/arm/cache-coloring.rst
@@ -0,0 +1,105 @@
+Xen cache coloring user guide
+=============================
+
+The cache coloring support in Xen allows reserving Last Level Cache (LLC)
+partitions for Dom0, DomUs and Xen itself. Currently only ARM64 is supported.
+
+In order to enable and use it, a few steps are needed.
+
+In Kconfig:
+
+- Enable LLC coloring.
+
+        CONFIG_LLC_COLORING=y
+- If needed, change the maximum number of colors (refer to menuconfig help for
+  value meaning and when it should be changed).
+
+        CONFIG_NR_LLC_COLORS=<n>
+
+Compile Xen and the toolstack and then:
+
+- Set the `llc-coloring=on` command line option.
+- Set `Coloring parameters and domain configurations`_.
+
+Background
+**********
+
+The cache hierarchy of a modern multi-core CPU typically has its first levels
+dedicated to each core (hence using multiple cache units), while the last
+level is shared among all of them. Such a configuration implies that memory
+operations on one core (e.g. running a DomU) can generate interference on
+another core (e.g. hosting another DomU). Cache coloring makes it possible to
+eliminate this mutual interference, thus guaranteeing higher and more
+predictable performance for memory accesses.
+The key concept underlying cache coloring is a fragmentation of the memory
+space into a set of sub-spaces called colors that are mapped to disjoint cache
+partitions. Technically, the whole memory space is first divided into a number
+of subsequent regions. Then each region is in turn divided into a number of
+subsequent sub-colors. The generic i-th color is then obtained by all the
+i-th sub-colors in each region.
+
+.. raw:: html
+
+    <pre>
+                            Region j            Region j+1
+                .....................   ............
+                .                     . .
+                .                       .
+            _ _ _______________ _ _____________________ _ _
+                |     |     |     |     |     |     |
+                | c_0 | c_1 |     | c_n | c_0 | c_1 |
+           _ _ _|_____|_____|_ _ _|_____|_____|_____|_ _ _
+                    :                       :
+                    :                       :...         ... .
+                    :                            color 0
+                    :...........................         ... .
+                                                :
+          . . ..................................:
+    </pre>
+
+There are two pragmatic lessons to be learnt.
+
+1. If one wants to avoid cache interference between two domains, different
+   colors need to be used for their memory.
+
+2. Color assignment should favor contiguity in the partitioning. E.g.,
+   assigning colors (0,1) to domain I and (2,3) to domain J is better than
+   assigning colors (0,2) to I and (1,3) to J.
+
+How to compute the number of colors
+***********************************
+
+To compute the number of available colors for a specific platform, the size of
+an LLC way and the page size used by Xen must be known. The first parameter
+can be found in the processor manual or can also be computed by dividing the
+total cache size by the number of its ways. The second parameter is the
+minimum amount of memory that can be mapped by the hypervisor, so dividing the
+way size by the page size yields the total number of cache partitions. For
+example, an Arm Cortex-A53 with a 16-way associative 1 MiB LLC can isolate up
+to 16 colors when pages are 4 KiB in size.
+
+Cache layout is probed automatically by Xen itself, but the possibility to
+manually set the way size is left to the user to overcome probing failures
+or for debugging/testing purposes. See the `Coloring parameters and domain
+configurations`_ section for more information.
+
+Coloring parameters and domain configurations
+*********************************************
+
+LLC way size (as previously discussed) can be set using the appropriate command
+line parameter. See the relevant documentation in
+"docs/misc/xen-command-line.pandoc".
+
+**Note:** If no color configuration is provided for a domain, the default one,
+which corresponds to all available colors, is used instead.
+
+Known issues and limitations
+****************************
+
+"xen,static-mem" isn't supported when coloring is enabled
+#########################################################
+
+In the domain configuration, "xen,static-mem" allows memory to be statically
+allocated to the domain. This isn't possible when LLC coloring is enabled,
+because that memory can't be guaranteed to use only colors assigned to the
+domain.
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 923910f553..eb105c03af 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -908,6 +908,15 @@ Controls for the dom0 IOMMU setup.
 
 Specify a list of IO ports to be excluded from dom0 access.
 
+### dom0-llc-colors (arm64)
+> `= List of [ <integer> | <integer>-<integer> ]`
+
+> Default: `All available LLC colors`
+
+Specify the Dom0 LLC color configuration. This option is available only when
+`CONFIG_LLC_COLORING` is enabled. If the parameter is not set, all available
+colors are chosen and the user is warned on the Xen serial console.
+
 ### dom0_max_vcpus
 
 Either:
@@ -1645,6 +1654,34 @@ This option is intended for debugging purposes only.  Enable MSR_DEBUGCTL.LBR
 in hypervisor context to be able to dump the Last Interrupt/Exception To/From
 record with other registers.
 
+### llc-coloring (arm64)
+> `= <boolean>`
+
+> Default: `false`
+
+Flag to enable or disable LLC coloring support at runtime. This option is
+available only when `CONFIG_LLC_COLORING` is enabled. See the general
+cache coloring documentation for more info.
+
+### llc-way-size (arm64)
+> `= <size>`
+
+> Default: `Obtained from the hardware`
+
+Specify the way size of the Last Level Cache. This option is available only
+when `CONFIG_LLC_COLORING` is enabled. It is an optional, expert-only parameter
+used to calculate the number of available LLC colors on the platform. It can
+be obtained by dividing the total LLC size by the number of its associative
+ways.
+By default, the value is automatically computed by probing the hardware, but
+it can be set manually for specific needs. Those include failed probing and
+debugging/testing purposes, so that it's possible to emulate platforms with a
+different number of supported colors.
+An important detail to highlight is that the current implementation of the
+cache coloring technique requires the number of colors to be a power of 2,
+and consequently the LLC way size must be too. A value that doesn't match
+this requirement is aligned down to the previous power of 2.
+
 ### lock-depth-size
 > `= <integer>`
 
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..97eac24ee3 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -9,6 +9,7 @@ config ARM_64
 	select 64BIT
 	select ARM_EFI
 	select HAS_FAST_MULTIPLY
+	select HAS_LLC_COLORING
 
 config ARM
 	def_bool y
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4d076b278b..7f5cb8ef26 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
 obj-y += irq.o
 obj-y += kernel.init.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-$(CONFIG_LLC_COLORING) += llc_coloring.o
 obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
diff --git a/xen/arch/arm/include/asm/llc_coloring.h b/xen/arch/arm/include/asm/llc_coloring.h
new file mode 100644
index 0000000000..c7985c8fd0
--- /dev/null
+++ b/xen/arch/arm/include/asm/llc_coloring.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Last Level Cache (LLC) coloring support for ARM
+ *
+ * Copyright (C) 2022 Xilinx Inc.
+ *
+ * Authors:
+ *    Luca Miccio <lucmiccio@gmail.com>
+ *    Carlo Nonato <carlo.nonato@minervasys.tech>
+ */
+#ifndef __ASM_ARM_COLORING_H__
+#define __ASM_ARM_COLORING_H__
+
+#include <xen/init.h>
+
+#ifdef CONFIG_LLC_COLORING
+
+bool __init llc_coloring_init(void);
+
+#else /* !CONFIG_LLC_COLORING */
+
+static inline bool __init llc_coloring_init(void) { return true; }
+
+#endif /* CONFIG_LLC_COLORING */
+
+#endif /* __ASM_ARM_COLORING_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
\ No newline at end of file
diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 1dd81d7d52..1c5f6d6e7a 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -18,6 +18,22 @@
 #define CTR_IDC_SHIFT       28
 #define CTR_DIC_SHIFT       29
 
+/* CCSIDR Current Cache Size ID Register */
+#define CCSIDR_LINESIZE_MASK            _AC(0x7, ULL)
+#define CCSIDR_NUMSETS_SHIFT            13
+#define CCSIDR_NUMSETS_MASK             _AC(0x3fff, ULL)
+#define CCSIDR_NUMSETS_SHIFT_FEAT_CCIDX 32
+#define CCSIDR_NUMSETS_MASK_FEAT_CCIDX  _AC(0xffffff, ULL)
+
+/* CCSELR Cache Size Selection Register */
+#define CCSELR_LEVEL_MASK  _AC(0x7, UL)
+#define CCSELR_LEVEL_SHIFT 1
+
+/* CLIDR Cache Level ID Register */
+#define CLIDR_CTYPEn_SHIFT(n) (3 * ((n) - 1))
+#define CLIDR_CTYPEn_MASK     _AC(0x7, UL)
+#define CLIDR_CTYPEn_LEVELS   7
+
 #define ICACHE_POLICY_VPIPT  0
 #define ICACHE_POLICY_AIVIVT 1
 #define ICACHE_POLICY_VIPT   2
diff --git a/xen/arch/arm/llc_coloring.c b/xen/arch/arm/llc_coloring.c
new file mode 100644
index 0000000000..44b601915e
--- /dev/null
+++ b/xen/arch/arm/llc_coloring.c
@@ -0,0 +1,272 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Last Level Cache (LLC) coloring support for ARM
+ *
+ * Copyright (C) 2022 Xilinx Inc.
+ *
+ * Authors:
+ *    Luca Miccio <lucmiccio@gmail.com>
+ *    Carlo Nonato <carlo.nonato@minervasys.tech>
+ */
+#include <xen/bitops.h>
+#include <xen/errno.h>
+#include <xen/keyhandler.h>
+#include <xen/llc_coloring.h>
+#include <xen/param.h>
+#include <xen/types.h>
+
+#include <asm/processor.h>
+#include <asm/sysregs.h>
+
+bool llc_coloring_enabled;
+boolean_param("llc-coloring", llc_coloring_enabled);
+
+/* Size of an LLC way */
+static unsigned int __ro_after_init llc_way_size;
+size_param("llc-way-size", llc_way_size);
+/* Number of colors available in the LLC */
+static unsigned int __ro_after_init nr_colors = CONFIG_NR_LLC_COLORS;
+/* Mask to extract coloring relevant bits */
+static paddr_t __ro_after_init addr_col_mask;
+
+static unsigned int __ro_after_init dom0_colors[CONFIG_NR_LLC_COLORS];
+static unsigned int __ro_after_init dom0_num_colors;
+
+/*
+ * Parse the coloring configuration given in the buf string, following the
+ * syntax below.
+ *
+ * COLOR_CONFIGURATION ::= COLOR | RANGE,...,COLOR | RANGE
+ * RANGE               ::= COLOR-COLOR
+ *
+ * Example: "0,2-6,15-16" represents the set of colors: 0,2,3,4,5,6,15,16.
+ */
+static int parse_color_config(const char *buf, unsigned int *colors,
+                              unsigned int *num_colors)
+{
+    const char *s = buf;
+
+    if ( !colors || !num_colors )
+        return -EINVAL;
+
+    *num_colors = 0;
+
+    while ( *s != '\0' )
+    {
+        if ( *s != ',' )
+        {
+            unsigned int color, start, end;
+
+            start = simple_strtoul(s, &s, 0);
+
+            if ( *s == '-' )    /* Range */
+            {
+                s++;
+                end = simple_strtoul(s, &s, 0);
+            }
+            else                /* Single value */
+                end = start;
+
+            if ( start > end || (end - start) > UINT_MAX - *num_colors ||
+                 *num_colors + (end - start) >= nr_colors )
+                return -EINVAL;
+            for ( color = start; color <= end; color++ )
+                colors[(*num_colors)++] = color;
+        }
+        else
+            s++;
+    }
+
+    return 0;
+}
+
+static int __init parse_dom0_colors(const char *s)
+{
+    return parse_color_config(s, dom0_colors, &dom0_num_colors);
+}
+custom_param("dom0-llc-colors", parse_dom0_colors);
+
+/* Return the LLC way size by probing the hardware */
+static unsigned int __init get_llc_way_size(void)
+{
+    register_t ccsidr_el1;
+    register_t clidr_el1 = READ_SYSREG(CLIDR_EL1);
+    register_t csselr_el1 = READ_SYSREG(CSSELR_EL1);
+    register_t id_aa64mmfr2_el1 = READ_SYSREG(ID_AA64MMFR2_EL1);
+    uint32_t ccsidr_numsets_shift = CCSIDR_NUMSETS_SHIFT;
+    uint32_t ccsidr_numsets_mask = CCSIDR_NUMSETS_MASK;
+    unsigned int n, line_size, num_sets;
+
+    for ( n = CLIDR_CTYPEn_LEVELS;
+          n != 0 && !((clidr_el1 >> CLIDR_CTYPEn_SHIFT(n)) & CLIDR_CTYPEn_MASK);
+          n-- );
+
+    if ( n == 0 )
+        return 0;
+
+    WRITE_SYSREG(((n - 1) & CCSELR_LEVEL_MASK) << CCSELR_LEVEL_SHIFT,
+                 CSSELR_EL1);
+    isb();
+
+    ccsidr_el1 = READ_SYSREG(CCSIDR_EL1);
+
+    /* Arm ARM: (Log2(Number of bytes in cache line)) - 4 */
+    line_size = 1 << ((ccsidr_el1 & CCSIDR_LINESIZE_MASK) + 4);
+
+    /* If FEAT_CCIDX is enabled, CCSIDR_EL1 has a different bit layout */
+    if ( (id_aa64mmfr2_el1 >> ID_AA64MMFR2_CCIDX_SHIFT) & 0x7 )
+    {
+        ccsidr_numsets_shift = CCSIDR_NUMSETS_SHIFT_FEAT_CCIDX;
+        ccsidr_numsets_mask = CCSIDR_NUMSETS_MASK_FEAT_CCIDX;
+    }
+    /* Arm ARM: (Number of sets in cache) - 1 */
+    num_sets = ((ccsidr_el1 >> ccsidr_numsets_shift) & ccsidr_numsets_mask) + 1;
+
+    printk(XENLOG_INFO "LLC found: L%u (line size: %u bytes, number of sets: %u)\n",
+           n, line_size, num_sets);
+
+    /* Restore value in CSSELR_EL1 */
+    WRITE_SYSREG(csselr_el1, CSSELR_EL1);
+    isb();
+
+    return line_size * num_sets;
+}
+
+static bool check_colors(unsigned int *colors, unsigned int num_colors)
+{
+    unsigned int i;
+
+    if ( num_colors > nr_colors )
+        return false;
+
+    for ( i = 0; i < num_colors; i++ )
+        if ( colors[i] >= nr_colors ||
+             (i != num_colors - 1 && colors[i] >= colors[i + 1]) )
+            return false;
+
+    return true;
+}
+
+static void print_colors(unsigned int *colors, unsigned int num_colors)
+{
+    unsigned int i;
+
+    printk("[ ");
+    for ( i = 0; i < num_colors; i++ )
+        printk("%u ", colors[i]);
+    printk("]\n");
+}
+
+static void dump_coloring_info(unsigned char key)
+{
+    printk("'%c' pressed -> dumping LLC coloring general info\n", key);
+    printk("LLC way size: %u KiB\n", llc_way_size >> 10);
+    printk("Number of LLC colors supported: %u\n", nr_colors);
+    printk("Address to LLC color mask: %#"PRIpaddr"\n", addr_col_mask);
+}
+
+bool __init llc_coloring_init(void)
+{
+    if ( !llc_way_size && !(llc_way_size = get_llc_way_size()) )
+    {
+        printk(XENLOG_ERR
+               "Probed LLC way size is 0 and no custom value provided\n");
+        return false;
+    }
+
+    /*
+     * The maximum number of colors must be a power of 2 so that they map
+     * to address bits, which requires the LLC way size to be one as well.
+     */
+    if ( llc_way_size & (llc_way_size - 1) )
+    {
+        printk(XENLOG_WARNING "LLC way size (%u) isn't a power of 2.\n",
+               llc_way_size);
+        llc_way_size = 1U << flsl(llc_way_size);
+        printk(XENLOG_WARNING
+               "Using %u instead. Performance will be suboptimal\n",
+               llc_way_size);
+    }
+
+    nr_colors = llc_way_size >> PAGE_SHIFT;
+
+    if ( nr_colors < 2 || nr_colors > CONFIG_NR_LLC_COLORS )
+    {
+        printk(XENLOG_ERR "Number of LLC colors (%u) not in range [2, %u]\n",
+               nr_colors, CONFIG_NR_LLC_COLORS);
+        return false;
+    }
+
+    addr_col_mask = (nr_colors - 1) << PAGE_SHIFT;
+
+    register_keyhandler('K', dump_coloring_info, "dump LLC coloring info", 1);
+
+    return true;
+}
+
+static unsigned int *alloc_colors(unsigned int num_colors)
+{
+    unsigned int *colors = xmalloc_array(unsigned int, num_colors);
+
+    if ( !colors )
+        panic("Unable to allocate LLC colors\n");
+
+    return colors;
+}
+
+int domain_llc_coloring_init(struct domain *d, unsigned int *colors,
+                             unsigned int num_colors)
+{
+    unsigned int i;
+
+    if ( is_domain_direct_mapped(d) )
+    {
+        printk(XENLOG_ERR
+               "LLC coloring and direct mapping are incompatible (%pd)\n", d);
+        return -EINVAL;
+    }
+
+    if ( !colors || num_colors == 0 )
+    {
+        printk(XENLOG_WARNING
+               "LLC color config not found for %pd. Using default\n", d);
+        colors = alloc_colors(nr_colors);
+        num_colors = nr_colors;
+        for ( i = 0; i < nr_colors; i++ )
+            colors[i] = i;
+    }
+
+    d->llc_colors = colors;
+    d->num_llc_colors = num_colors;
+
+    if ( !check_colors(d->llc_colors, d->num_llc_colors) )
+    {
+        /* d->llc_colors will be freed in domain_destroy() */
+        printk(XENLOG_ERR "Bad LLC color config for %pd\n", d);
+        print_colors(d->llc_colors, d->num_llc_colors);
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+void domain_llc_coloring_free(struct domain *d)
+{
+    xfree(d->llc_colors);
+}
+
+void domain_dump_llc_colors(struct domain *d)
+{
+    printk("%pd has %u LLC colors: ", d, d->num_llc_colors);
+    print_colors(d->llc_colors, d->num_llc_colors);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90..c04e5012f0 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -12,6 +12,7 @@
 #include <xen/device_tree.h>
 #include <xen/domain_page.h>
 #include <xen/grant_table.h>
+#include <xen/llc_coloring.h>
 #include <xen/types.h>
 #include <xen/string.h>
 #include <xen/serial.h>
@@ -1026,6 +1027,12 @@ void __init start_xen(unsigned long boot_phys_offset,
     printk("Command line: %s\n", cmdline);
     cmdline_parse(cmdline);
 
+    if ( llc_coloring_enabled )
+    {
+        if ( !llc_coloring_init() )
+            panic("Xen LLC coloring support: setup failed\n");
+    }
+
     setup_mm();
 
     /* Parse the ACPI tables for possible boot-time configuration */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Marco Solieri <marco.solieri@minervasys.tech>
Subject: [PATCH v4 06/11] xen/arm: add support for cache coloring configuration via device-tree
Date: Mon, 23 Jan 2023 16:47:30 +0100
Message-Id: <20230123154735.74832-7-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit adds the "llc-colors" Device Tree property, which can be used
to assign color configurations to DomUs in Dom0less setups. The syntax is
the same as for every other color configuration.

Based on original work from: Luca Miccio <lucmiccio@gmail.com>

Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
---
 docs/misc/arm/cache-coloring.rst        | 43 +++++++++++++++++++++++++
 docs/misc/arm/device-tree/booting.txt   |  4 +++
 xen/arch/arm/domain_build.c             | 18 ++++++++++-
 xen/arch/arm/include/asm/llc_coloring.h |  3 ++
 xen/arch/arm/llc_coloring.c             | 10 ++++++
 5 files changed, 77 insertions(+), 1 deletion(-)

diff --git a/docs/misc/arm/cache-coloring.rst b/docs/misc/arm/cache-coloring.rst
index c2e0e87426..a28f75dc26 100644
--- a/docs/misc/arm/cache-coloring.rst
+++ b/docs/misc/arm/cache-coloring.rst
@@ -116,6 +116,49 @@ LLC way size (as previously discussed) and Dom0 colors can be set using the
 appropriate command line parameters. See the relevant documentation
 in "docs/misc/xen-command-line.pandoc".
 
+DomU colors can be set either in the xl configuration file (see the
+documentation in "docs/man/xl.cfg.pod.5.in") or via Device Tree, also for
+Dom0less configurations (see the documentation in
+"docs/misc/arm/device-tree/booting.txt"), as in the following example:
+
+.. raw:: html
+
+    <pre>
+        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1G dom0_max_vcpus=1 sched=null llc-coloring=on llc-way-size=64K xen-llc-colors=0-1 dom0-llc-colors=2-6";
+        xen,dom0-bootargs = "console=hvc0 earlycon=xen earlyprintk=xen root=/dev/ram0";
+
+        dom0 {
+            compatible = "xen,linux-zimage", "xen,multiboot-module";
+            reg = <0x0 0x1000000 0x0 15858176>;
+        };
+
+        dom0-ramdisk {
+            compatible = "xen,linux-initrd", "xen,multiboot-module";
+            reg = <0x0 0x2000000 0x0 20638062>;
+        };
+
+        domU0 {
+            #address-cells = <0x1>;
+            #size-cells = <0x1>;
+            compatible = "xen,domain";
+            memory = <0x0 0x40000>;
+            llc-colors = "4-8,10,11,12";
+            cpus = <0x1>;
+            vpl011 = <0x1>;
+
+            module@2000000 {
+                compatible = "multiboot,kernel", "multiboot,module";
+                reg = <0x2000000 0xffffff>;
+                bootargs = "console=ttyAMA0";
+            };
+
+            module@30000000 {
+                compatible = "multiboot,ramdisk", "multiboot,module";
+                reg = <0x3000000 0xffffff>;
+            };
+        };
+    </pre>
+
 **Note:** If no color configuration is provided for a domain, the default one,
 which corresponds to all available colors, is used instead.
 
diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 3879340b5e..ad71c16b00 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -162,6 +162,10 @@ with the following properties:
 
     An integer specifying the number of vcpus to allocate to the guest.
 
+- llc-colors
+    A string specifying the LLC color configuration for the guest.
+    Refer to "docs/misc/arm/cache-coloring.rst" for syntax.
+
 - vpl011
 
     An empty property to enable/disable a virtual pl011 for the guest to
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 093d4ad6f6..2c1307d349 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3854,6 +3854,8 @@ void __init create_domUs(void)
     struct dt_device_node *node;
     const struct dt_device_node *cpupool_node,
                                 *chosen = dt_find_node_by_path("/chosen");
+    const char *llc_colors_str;
+    unsigned int *llc_colors = NULL, num_llc_colors = 0;
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3960,12 +3962,26 @@ void __init create_domUs(void)
             d_cfg.max_maptrack_frames = val;
         }
 
+        if ( !dt_property_read_string(node, "llc-colors", &llc_colors_str) )
+        {
+            if ( !llc_coloring_enabled )
+                printk(XENLOG_WARNING
+                       "'llc-colors' found, but LLC coloring is disabled\n");
+            else if ( dt_find_property(node, "xen,static-mem", NULL) )
+                panic("static-mem and LLC coloring are incompatible\n");
+            else
+                llc_colors = llc_colors_from_str(llc_colors_str,
+                                                 &num_llc_colors);
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
          * domain_create() with a domid > 0. (domid == 0 is reserved for Dom0)
          */
-        d = domain_create(++max_init_domid, &d_cfg, flags);
+        d = domain_create_llc_colored(++max_init_domid, &d_cfg, flags,
+                                      llc_colors, num_llc_colors);
+
         if ( IS_ERR(d) )
             panic("Error creating domain %s\n", dt_node_name(node));
 
diff --git a/xen/arch/arm/include/asm/llc_coloring.h b/xen/arch/arm/include/asm/llc_coloring.h
index 382ff7de47..7a01b8841c 100644
--- a/xen/arch/arm/include/asm/llc_coloring.h
+++ b/xen/arch/arm/include/asm/llc_coloring.h
@@ -18,12 +18,15 @@
 bool __init llc_coloring_init(void);
 
 unsigned int *dom0_llc_colors(unsigned int *num_colors);
+unsigned int *llc_colors_from_str(const char *str, unsigned int *num_colors);
 
 #else /* !CONFIG_LLC_COLORING */
 
 static inline bool __init llc_coloring_init(void) { return true; }
 static inline unsigned int *dom0_llc_colors(
     unsigned int *num_colors) { return NULL; }
+static inline unsigned int *llc_colors_from_str(
+    const char *str, unsigned int *num_colors) { return NULL; }
 
 #endif /* CONFIG_LLC_COLORING */
 
diff --git a/xen/arch/arm/llc_coloring.c b/xen/arch/arm/llc_coloring.c
index 2d0457cdbc..ba5279a022 100644
--- a/xen/arch/arm/llc_coloring.c
+++ b/xen/arch/arm/llc_coloring.c
@@ -289,6 +289,16 @@ unsigned int *llc_colors_from_guest(struct xen_domctl_createdomain *config)
     return colors;
 }
 
+unsigned int *llc_colors_from_str(const char *str, unsigned int *num_colors)
+{
+    unsigned int *colors = alloc_colors(nr_colors);
+
+    if ( parse_color_config(str, colors, num_colors) )
+        panic("Error parsing LLC color configuration\n");
+
+    return colors;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Marco Solieri <marco.solieri@minervasys.tech>
Subject: [PATCH v4 09/11] Revert "xen/arm: Remove unused BOOT_RELOC_VIRT_START"
Date: Mon, 23 Jan 2023 16:47:33 +0100
Message-Id: <20230123154735.74832-10-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This reverts commit 0c18fb76323bfb13615b6f13c98767face2d8097 (not clean).

This is not a clean revert, since the memory layout has been reworked in
the meantime, but it is sufficiently similar to one. The only difference
is that BOOT_RELOC_VIRT_START must now match the new layout.

Cache coloring support for Xen needs to relocate Xen code and data into a
new colored physical space. BOOT_RELOC_VIRT_START will be used as the
virtual base address of a temporary mapping to this new space.

Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
---
 xen/arch/arm/include/asm/config.h | 4 +++-
 xen/arch/arm/mm.c                 | 1 +
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index c5d407a749..5359acd529 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -96,7 +96,8 @@
  *   2M -   4M   Xen text, data, bss
  *   4M -   6M   Fixmap: special-purpose 4K mapping slots
  *   6M -  10M   Early boot mapping of FDT
- *  10M -  12M   Livepatch vmap (if compiled in)
+ *  10M -  12M   Early relocation address (used when relocating Xen)
+ *               and later for livepatch vmap (if compiled in)
  *
  *   1G -   2G   VMAP: ioremap and early_ioremap
  *
@@ -133,6 +134,7 @@
 #define BOOT_FDT_VIRT_START     (FIXMAP_VIRT_START + FIXMAP_VIRT_SIZE)
 #define BOOT_FDT_VIRT_SIZE      _AT(vaddr_t, MB(4))
 
+#define BOOT_RELOC_VIRT_START   (BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE)
 #ifdef CONFIG_LIVEPATCH
 #define LIVEPATCH_VMAP_START    (BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE)
 #define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index b9c698088b..7015a0f841 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -145,6 +145,7 @@ static void __init __maybe_unused build_assertions(void)
     /* 2MB aligned regions */
     BUILD_BUG_ON(XEN_VIRT_START & ~SECOND_MASK);
     BUILD_BUG_ON(FIXMAP_ADDR(0) & ~SECOND_MASK);
+    BUILD_BUG_ON(BOOT_RELOC_VIRT_START & ~SECOND_MASK);
     /* 1GB aligned regions */
 #ifdef CONFIG_ARM_32
     BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Marco Solieri <marco.solieri@minervasys.tech>
Subject: [PATCH v4 08/11] xen/arm: use colored allocator for p2m page tables
Date: Mon, 23 Jan 2023 16:47:32 +0100
Message-Id: <20230123154735.74832-9-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Cache colored domains can benefit from having their p2m page tables
allocated with the same coloring schema, so that isolation is achieved for
those kinds of memory accesses as well.
To do that, the domain struct is passed down to the allocator and the
MEMF_no_owner flag is used.

Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
---
v4:
- fixed p2m page allocation using MEMF_no_owner memflag
---
 xen/arch/arm/p2m.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 948f199d84..f9faeb61af 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -4,6 +4,7 @@
 #include <xen/iocap.h>
 #include <xen/ioreq.h>
 #include <xen/lib.h>
+#include <xen/llc_coloring.h>
 #include <xen/sched.h>
 #include <xen/softirq.h>
 
@@ -56,7 +57,10 @@ static struct page_info *p2m_alloc_page(struct domain *d)
      */
     if ( is_hardware_domain(d) )
     {
-        pg = alloc_domheap_page(NULL, 0);
+        if ( is_domain_llc_colored(d) )
+            pg = alloc_domheap_page(d, MEMF_no_owner);
+        else
+            pg = alloc_domheap_page(NULL, 0);
         if ( pg == NULL )
             printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
     }
@@ -105,7 +109,10 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
         if ( d->arch.paging.p2m_total_pages < pages )
         {
             /* Need to allocate more memory from domheap */
-            pg = alloc_domheap_page(NULL, 0);
+            if ( is_domain_llc_colored(d) )
+                pg = alloc_domheap_page(d, MEMF_no_owner);
+            else
+                pg = alloc_domheap_page(NULL, 0);
             if ( pg == NULL )
             {
                 printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483026.748933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3V-0000wr-BG; Mon, 23 Jan 2023 15:48:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483026.748933; Mon, 23 Jan 2023 15:48:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3V-0000wC-6T; Mon, 23 Jan 2023 15:48:21 +0000
Received: by outflank-mailman (input) for mailman id 483026;
 Mon, 23 Jan 2023 15:48:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kihy=5U=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pJz3T-00006V-H5
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:48:19 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5b4ebd1d-9b35-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 16:48:18 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id az20so31697997ejc.1
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:48:18 -0800 (PST)
Received: from carlo-ubuntu.mo54.unimo.it (nonato.mo54.unimo.it.
 [155.185.85.8]) by smtp.gmail.com with ESMTPSA id
 r2-20020a17090609c200b007bd28b50305sm22170978eje.200.2023.01.23.07.48.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 07:48:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b4ebd1d-9b35-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wnBfVoJg9BJDdsJ9FVXJ5loGQJGgvjA6ZxNM1ora6pg=;
        b=q0ghmJW9sJbCOkKZCuP6aqaXmTD2uOqfhnRw/qssKHan9iebRjVdoGXERWA6zE28VL
         kyWUFGZrSIbLlW4iSOxfZtBmIMuyDSjrtY78W63SB7OS6GrveHRmZS2AhdpwJUybfNlo
         ShBV2pNaWMC6Vmyg07pJN6WuJ9i8rYGxAhIpVpr5H4shJ4OWrFpMrZAE5pwUfwd7Woyn
         cdcZwWtwpRKacHM2DmMXYONt7zJIEHER5aA/O78dsmDcIwGktYM2JQaOwyYO4YzD1yjZ
         iE+7tXIBY+m0gjoS322LpIcARZ2H2d8fH+ON82+PPg6apuFDuK/9+YYE1ahhBYpnWp6+
         ToOA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=wnBfVoJg9BJDdsJ9FVXJ5loGQJGgvjA6ZxNM1ora6pg=;
        b=zPT9JAP3MIYiT4bliIoWysJG5IZN8noGR9Wk8xAx48vXpCmGBneL3z3+W/XI0Blq4t
         iB3/ELhydpuxM5NRtfc63bMGneODq9du0hmztuW5LfrW98wU+Rquor48YKHYmurofjqm
         SAl4pH8stQ7RaRXUw+DcLf4Cq1PgNHTNGN8KqFmhH/q6ix5NuVOP33mrBLgsU4X+Kc9b
         wnNMumwdU5KCYRNEaG4UJsXf0yBEDkPR2MCB4oEl1hGs+GauiYistowSmryb0ib53ymv
         nllMjCmTCOQds+HrLJ3Ju85xrw4W+z5GIs77mLInTyR/0klcwh0thoOnbMOvg7+HsoES
         CEnw==
X-Gm-Message-State: AFqh2kobiOIoH4uVpC0YcuOuh3NCxY9j4AEbvy0lPypI903VV51jrIxK
	vfZ5wMsrEQauKRUeTT1S2uuDlkVj9/mB0dM8
X-Google-Smtp-Source: AMrXdXsmBdhNxcDwwp12asj0hXt0Q2CTR14/XB1lp++K8Dldee+U4brxt+QK7C4jD5PB3X87FgcFfg==
X-Received: by 2002:a17:906:4351:b0:84d:141f:6784 with SMTP id z17-20020a170906435100b0084d141f6784mr21893886ejm.29.1674488898062;
        Mon, 23 Jan 2023 07:48:18 -0800 (PST)
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Marco Solieri <marco.solieri@minervasys.tech>
Subject: [PATCH v4 04/11] xen: extend domctl interface for cache coloring
Date: Mon, 23 Jan 2023 16:47:28 +0100
Message-Id: <20230123154735.74832-5-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit updates the domctl interface to let the toolstack set a
domain's cache coloring configuration at creation time.
It also implements the functionality for arm64.

Based on original work from: Luca Miccio <lucmiccio@gmail.com>

Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
---
v4:
- updated XEN_DOMCTL_INTERFACE_VERSION
---
 xen/arch/arm/llc_coloring.c    | 14 ++++++++++++++
 xen/common/domctl.c            | 12 +++++++++++-
 xen/include/public/domctl.h    |  6 +++++-
 xen/include/xen/llc_coloring.h |  4 ++++
 4 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/llc_coloring.c b/xen/arch/arm/llc_coloring.c
index 51f057d7c9..2d0457cdbc 100644
--- a/xen/arch/arm/llc_coloring.c
+++ b/xen/arch/arm/llc_coloring.c
@@ -10,6 +10,7 @@
  */
 #include <xen/bitops.h>
 #include <xen/errno.h>
+#include <xen/guest_access.h>
 #include <xen/keyhandler.h>
 #include <xen/llc_coloring.h>
 #include <xen/param.h>
@@ -275,6 +276,19 @@ unsigned int *dom0_llc_colors(unsigned int *num_colors)
     return colors;
 }
 
+unsigned int *llc_colors_from_guest(struct xen_domctl_createdomain *config)
+{
+    unsigned int *colors;
+
+    if ( !config->num_llc_colors )
+        return NULL;
+
+    colors = alloc_colors(config->num_llc_colors);
+    copy_from_guest(colors, config->llc_colors, config->num_llc_colors);
+
+    return colors;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ad71ad8a4c..505626ec46 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -8,6 +8,7 @@
 
 #include <xen/types.h>
 #include <xen/lib.h>
+#include <xen/llc_coloring.h>
 #include <xen/err.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
@@ -409,6 +410,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     {
         domid_t        dom;
         static domid_t rover = 0;
+        unsigned int *llc_colors = NULL, num_llc_colors = 0;
 
         dom = op->domain;
         if ( (dom > 0) && (dom < DOMID_FIRST_RESERVED) )
@@ -434,7 +436,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             rover = dom;
         }
 
-        d = domain_create(dom, &op->u.createdomain, false);
+        if ( llc_coloring_enabled )
+        {
+            llc_colors = llc_colors_from_guest(&op->u.createdomain);
+            num_llc_colors = op->u.createdomain.num_llc_colors;
+        }
+
+        d = domain_create_llc_colored(dom, &op->u.createdomain, false,
+                                      llc_colors, num_llc_colors);
+
         if ( IS_ERR(d) )
         {
             ret = PTR_ERR(d);
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 51be28c3de..49cccc8503 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -21,7 +21,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
@@ -92,6 +92,10 @@ struct xen_domctl_createdomain {
     /* CPU pool to use; specify 0 or a specific existing pool */
     uint32_t cpupool_id;
 
+    /* IN LLC coloring parameters */
+    uint32_t num_llc_colors;
+    XEN_GUEST_HANDLE(uint32) llc_colors;
+
     struct xen_arch_domainconfig arch;
 };
 
diff --git a/xen/include/xen/llc_coloring.h b/xen/include/xen/llc_coloring.h
index 625930d378..2855f38296 100644
--- a/xen/include/xen/llc_coloring.h
+++ b/xen/include/xen/llc_coloring.h
@@ -24,6 +24,8 @@ int domain_llc_coloring_init(struct domain *d, unsigned int *colors,
 void domain_llc_coloring_free(struct domain *d);
 void domain_dump_llc_colors(struct domain *d);
 
+unsigned int *llc_colors_from_guest(struct xen_domctl_createdomain *config);
+
 #else
 
 #define llc_coloring_enabled (false)
@@ -36,6 +38,8 @@ static inline int domain_llc_coloring_init(struct domain *d,
 }
 static inline void domain_llc_coloring_free(struct domain *d) {}
 static inline void domain_dump_llc_colors(struct domain *d) {}
+static inline unsigned int *llc_colors_from_guest(
+    struct xen_domctl_createdomain *config) { return NULL; }
 
 #endif /* CONFIG_HAS_LLC_COLORING */
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483027.748949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3W-0001N6-Rd; Mon, 23 Jan 2023 15:48:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483027.748949; Mon, 23 Jan 2023 15:48:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3W-0001LS-Ke; Mon, 23 Jan 2023 15:48:22 +0000
Received: by outflank-mailman (input) for mailman id 483027;
 Mon, 23 Jan 2023 15:48:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kihy=5U=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pJz3V-0000MU-Lt
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:48:21 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56be8a4d-9b35-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 16:48:11 +0100 (CET)
Received: by mail-ej1-x62a.google.com with SMTP id hw16so31558504ejc.10
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:48:19 -0800 (PST)
Received: from carlo-ubuntu.mo54.unimo.it (nonato.mo54.unimo.it.
 [155.185.85.8]) by smtp.gmail.com with ESMTPSA id
 r2-20020a17090609c200b007bd28b50305sm22170978eje.200.2023.01.23.07.48.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 07:48:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56be8a4d-9b35-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bzd4gFUIBqg+ZHD9eDkWBdjmYnoZGDhG9GSzcFzYQKg=;
        b=ETabNKT0EsSZcg15EdQF7dFCM9/xiqUlEIWdA2DWtS5tL9/T5ibgP30fXWC/9WZVJl
         18jmVJaFmqGlvDg/zkXGZcHWjRDlkao01TQGZyZArYNvKMtwgAqCbldtwmyscHyyZFb7
         snpK8rXogh46SgegLnBG/UPTEgeFyyNdyqJefHnYrbfe5MlqOfV7Tf9XmcsYPJ4lCl0T
         znmCxVp/aRl/DcejlPKG7POpNdalluMuldgC6B6gR05SAwICtVkfkS4kwri38rP1tgC5
         BlefBNdlRWRR8D2N7pStaU8hT+058YVUk12ZxQieUUEHfNChQvlORjK2hnCpyaRbjnhK
         hJMA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bzd4gFUIBqg+ZHD9eDkWBdjmYnoZGDhG9GSzcFzYQKg=;
        b=N3Gsw9zl0pFfDnUGAmiCp+GSFhtOQMcvg1WrFR3QJLKAY3UKszUNGMl45hpxKYaFjD
         /cUav8Xj4Q/vVByts4LDlKv5J1MUXMMIk1YyA5DEAdEKBuYKZdQ4y+c1eqQwra6E7ghF
         3dmlz/tCrgmJ+JUGxkF3QIJMmMeiujsYX/QVeNgd47yW7UAqtHpeqdt5hkYdDHZlcwTz
         c0IkENlLW92uA11aV6syY510qWsGR9yrpEtLW7m2WuKH/6nEh5e/kqO05O1K7v0k6i+w
         Ygg85oBebcedKiN5xEfrEbVPC6ZKwiFQIB7o1L8BIo3a8gR9Erui52wh/Xyyj5+UO86g
         zm3Q==
X-Gm-Message-State: AFqh2krNVWWZQKfJ7kEOF1Vbxmfc5WJm0ymMuCqwyzQdIPi0o/2WVO5i
	YowvgnyYSS6+M8Sbt5oLHRaVMID/V4kghes/
X-Google-Smtp-Source: AMrXdXuoiU5iC1OsinUzS2gN900TrrzR7pYn/wJvPrR27p8cQgk49O2CCW3ZKOJgKiy1L4wIQIxxsw==
X-Received: by 2002:a17:907:d40c:b0:872:af53:a028 with SMTP id vi12-20020a170907d40c00b00872af53a028mr24373339ejc.61.1674488899145;
        Mon, 23 Jan 2023 07:48:19 -0800 (PST)
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Marco Solieri <marco.solieri@minervasys.tech>
Subject: [PATCH v4 05/11] tools: add support for cache coloring configuration
Date: Mon, 23 Jan 2023 16:47:29 +0100
Message-Id: <20230123154735.74832-6-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new "llc_colors" parameter that defines the LLC color assignment
for a domain. The parameter is a list of strings, each representing
either a single color or a color range, using the same syntax as the
other color configuration options described in the documentation.
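For instance, a guest configuration could assign colors 0 through 3
plus color 8 to a domain with (color values are illustrative only):

```
llc_colors = [ "0-3", "8" ]
```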

Documentation is also added.

Based on original work from: Luca Miccio <lucmiccio@gmail.com>

Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
---
v4:
- removed overlapping color ranges checks during parsing
- moved hypercall buffer initialization in libxenctrl
---
 docs/man/xl.cfg.5.pod.in         | 10 +++++++++
 tools/libs/ctrl/xc_domain.c      | 17 ++++++++++++++
 tools/libs/light/libxl_create.c  |  2 ++
 tools/libs/light/libxl_types.idl |  1 +
 tools/xl/xl_parse.c              | 38 +++++++++++++++++++++++++++++++-
 5 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 024bceeb61..96f9249c3d 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2903,6 +2903,16 @@ Currently, only the "sbsa_uart" model is supported for ARM.
 
 =back
 
+=over 4
+
+=item B<llc_colors=[ "RANGE", "RANGE", ...]>
+
+Specify the Last Level Cache (LLC) color configuration for the guest.
+B<RANGE> can be either a single color value or a hyphen-separated closed
+interval of colors (such as "0-4").
+
+=back
+
 =head3 x86
 
 =over 4
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index e939d07157..064f54c349 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -28,6 +28,20 @@ int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
 {
     int err;
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER(uint32_t, llc_colors);
+
+    if ( config->num_llc_colors )
+    {
+        size_t bytes = sizeof(uint32_t) * config->num_llc_colors;
+
+        llc_colors = xc_hypercall_buffer_alloc(xch, llc_colors, bytes);
+        if ( llc_colors == NULL ) {
+            PERROR("Could not allocate LLC colors for xc_domain_create");
+            return -ENOMEM;
+        }
+        memcpy(llc_colors, config->llc_colors.p, bytes);
+        set_xen_guest_handle(config->llc_colors, llc_colors);
+    }
 
     domctl.cmd = XEN_DOMCTL_createdomain;
     domctl.domain = *pdomid;
@@ -39,6 +53,9 @@ int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
     *pdomid = (uint16_t)domctl.domain;
     *config = domctl.u.createdomain;
 
+    if ( llc_colors )
+        xc_hypercall_buffer_free(xch, llc_colors);
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index beec3f6b6f..6d0c768241 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -638,6 +638,8 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .grant_opts = XEN_DOMCTL_GRANT_version(b_info->max_grant_version),
             .vmtrace_size = ROUNDUP(b_info->vmtrace_buf_kb << 10, XC_PAGE_SHIFT),
             .cpupool_id = info->poolid,
+            .num_llc_colors = b_info->num_llc_colors,
+            .llc_colors.p = b_info->llc_colors,
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 0cfad8508d..1f944ca6d7 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -562,6 +562,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("ioports",          Array(libxl_ioport_range, "num_ioports")),
     ("irqs",             Array(uint32, "num_irqs")),
     ("iomem",            Array(libxl_iomem_range, "num_iomem")),
+    ("llc_colors",       Array(uint32, "num_llc_colors")),
     ("claim_mode",	     libxl_defbool),
     ("event_channels",   uint32),
     ("kernel",           string),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 853e9f357a..0f8c469fb5 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1297,8 +1297,9 @@ void parse_config_data(const char *config_source,
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms,
                    *usbctrls, *usbdevs, *p9devs, *vdispls, *pvcallsifs_devs;
     XLU_ConfigList *channels, *ioports, *irqs, *iomem, *viridian, *dtdevs,
-                   *mca_caps;
+                   *mca_caps, *llc_colors;
     int num_ioports, num_irqs, num_iomem, num_cpus, num_viridian, num_mca_caps;
+    int num_llc_colors;
     int pci_power_mgmt = 0;
     int pci_msitranslate = 0;
     int pci_permissive = 0;
@@ -1447,6 +1448,41 @@ void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxmem", &l, 0))
         b_info->max_memkb = l * 1024;
 
+    if (!xlu_cfg_get_list(config, "llc_colors", &llc_colors, &num_llc_colors, 0)) {
+        int k, cur_index = 0;
+
+        b_info->num_llc_colors = 0;
+        for (i = 0; i < num_llc_colors; i++) {
+            uint32_t start = 0, end = 0;
+
+            buf = xlu_cfg_get_listitem(llc_colors, i);
+            if (!buf) {
+                fprintf(stderr,
+                        "xl: Can't get element %d in LLC color list\n", i);
+                exit(1);
+            }
+
+            if (sscanf(buf, "%" SCNu32 "-%" SCNu32, &start, &end) != 2) {
+                if (sscanf(buf, "%" SCNu32, &start) != 1) {
+                    fprintf(stderr, "xl: Invalid LLC color range: %s\n", buf);
+                    exit(1);
+                }
+                end = start;
+            } else if (start > end) {
+                fprintf(stderr,
+                        "xl: Start LLC color is greater than end: %s\n", buf);
+                exit(1);
+            }
+
+            b_info->num_llc_colors += (end - start) + 1;
+            b_info->llc_colors = (uint32_t *)realloc(b_info->llc_colors,
+                        sizeof(*b_info->llc_colors) * b_info->num_llc_colors);
+
+            for (k = start; k <= end; k++)
+                b_info->llc_colors[cur_index++] = k;
+        }
+    }
+
     if (!xlu_cfg_get_long (config, "vcpus", &l, 0)) {
         vcpus = l;
         if (libxl_cpu_bitmap_alloc(ctx, &b_info->avail_vcpus, l)) {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483032.748997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3d-0002oW-TU; Mon, 23 Jan 2023 15:48:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483032.748997; Mon, 23 Jan 2023 15:48:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3d-0002mz-Lp; Mon, 23 Jan 2023 15:48:29 +0000
Received: by outflank-mailman (input) for mailman id 483032;
 Mon, 23 Jan 2023 15:48:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kihy=5U=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pJz3c-00006V-3K
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:48:28 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6078c854-9b35-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 16:48:27 +0100 (CET)
Received: by mail-ej1-x631.google.com with SMTP id vw16so31591843ejc.12
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:48:27 -0800 (PST)
Received: from carlo-ubuntu.mo54.unimo.it (nonato.mo54.unimo.it.
 [155.185.85.8]) by smtp.gmail.com with ESMTPSA id
 r2-20020a17090609c200b007bd28b50305sm22170978eje.200.2023.01.23.07.48.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 07:48:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6078c854-9b35-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ptsteZfIhpfF2UQfbP6Zi8kakL6wMfzWUQYlwBmp7AM=;
        b=2DpkluxTwEf/rRNWROpMgUV2g5OJe6zodA3z8G79y/tFiEa/xuzYvvNsjq/cJp25DW
         ch2upGGp+OeeGZEH6uBjtwDVFTSlDD8ATwiiqu5BC06x7KMJiD/PaAIMqCoaXEumAZ+K
         +woTazI9Bikae17grp8UM2v2O6EqVxuhcxIymq04xG0UN8WDtJiUYAFHecber9pBv+ws
         75JLet58stpmZ34/+wVep9dcDpkOD73930kMyI/ET80Bf/OW60CznHxRlYnimonmum2A
         ZCmPHqZ/hUQSvmpcWT19bI3V0n9vRSwJOOGDFQHA7N4aO9FIwsdHVKgUjcYYaYRJJGJv
         8UJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ptsteZfIhpfF2UQfbP6Zi8kakL6wMfzWUQYlwBmp7AM=;
        b=xPtW4JrAjcThHGahy3YpOb7auqnFAYLGy4p2HHe3xH4hBGfpLWV9FuazxggQZRjxtr
         rCYPnCozuDtNwjEhpFyJt6JDdRLX3dqYq16/U6rgjQVtQj4Ry6ikq2ApqmzD0ZgjzSlI
         O0ecd2hIMS5D6VEKd1cQjSfcSphs3Dky1g8siAKTqzoNCX0Mp+IdhcoiIaT31snFSWqs
         JSnwTSxdRrTTKzPH2z+8WFOwcxeqLlSvpgHf4FFPnGLKw71V3dpAhpfD/DUP5MIT5TPp
         0TOjsNsGcHfD6Nm7neIwbGwujvDBvjSKtYA8cu0RFfdtVnLcGjyKSYJ2U57ZojMDM4MV
         SPyg==
X-Gm-Message-State: AFqh2krpmuO6DKKEMw+Fs/7bB8rZTLb5whqH59wkIIrFoyqr7+h/Nmqd
	vRQfX274/VWOg6vnRevaxNbxw6g036932PV0
X-Google-Smtp-Source: AMrXdXufOtlIu4zh2FVI6Bx7clwNuhDAjNxjB+9zPKqe3JEQrOrwigNGOjnHHO0yrWiiAXQBqeVl5A==
X-Received: by 2002:a17:906:a09:b0:7c1:4a3a:dc97 with SMTP id w9-20020a1709060a0900b007c14a3adc97mr32277336ejf.0.1674488906739;
        Mon, 23 Jan 2023 07:48:26 -0800 (PST)
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Luca Miccio <lucmiccio@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Marco Solieri <marco.solieri@minervasys.tech>,
	Carlo Nonato <carlo.nonato@minervasys.tech>
Subject: [PATCH v4 10/11] xen/arm: add Xen cache colors command line parameter
Date: Mon, 23 Jan 2023 16:47:34 +0100
Message-Id: <20230123154735.74832-11-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Luca Miccio <lucmiccio@gmail.com>

This commit adds a new command line parameter to configure Xen cache
colors. These colors can be dumped with the cache coloring info debug-key.

By default, Xen uses the first color.
Benchmarking the VM interrupt response time provides an estimate of the
LLC usage of Xen's most latency-critical runtime task. Results on an Arm
Cortex-A53 on a Xilinx Zynq UltraScale+ XCZU9EG show that one color,
which reserves 64 KiB of L2, is enough to attain the best responsiveness.

More colors are instead very likely to be needed on processors whose L1
cache is physically indexed and physically tagged, such as the
Cortex-A57. In such cases, coloring applies to the L1 as well, and there
are typically two distinct L1 colors, so reserving only one color for
Xen would senselessly partition a cache that is already private, i.e.
underutilize it. The default number of Xen colors is nonetheless set to
one.

Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
---
 docs/misc/xen-command-line.pandoc | 10 ++++++++++
 xen/arch/arm/llc_coloring.c       | 30 ++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index a89c0cef61..d486946648 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2796,6 +2796,16 @@ In the case that x2apic is in use, this option switches between physical and
 clustered mode.  The default, given no hint from the **FADT**, is cluster
 mode.
 
+### xen-llc-colors (arm64)
+> `= List of [ <integer> | <integer>-<integer> ]`
+
+> Default: `0: the lowermost color`
+
+Specify Xen LLC color configuration. This option is available only when
+`CONFIG_LLC_COLORING` is enabled.
+Two colors are most likely needed on platforms where private caches are
+physically indexed, e.g. the L1 instruction cache of the Arm Cortex-A57.
+
 ### xenheap_megabytes (arm32)
 > `= <size>`
 
diff --git a/xen/arch/arm/llc_coloring.c b/xen/arch/arm/llc_coloring.c
index 22612d455b..745e93a61a 100644
--- a/xen/arch/arm/llc_coloring.c
+++ b/xen/arch/arm/llc_coloring.c
@@ -19,6 +19,10 @@
 #include <asm/processor.h>
 #include <asm/sysregs.h>
 
+/* By default Xen uses the lowest color */
+#define XEN_DEFAULT_COLOR       0
+#define XEN_DEFAULT_NUM_COLORS  1
+
 bool llc_coloring_enabled;
 boolean_param("llc-coloring", llc_coloring_enabled);
 
@@ -33,6 +37,9 @@ static paddr_t __ro_after_init addr_col_mask;
 static unsigned int __ro_after_init dom0_colors[CONFIG_NR_LLC_COLORS];
 static unsigned int __ro_after_init dom0_num_colors;
 
+static unsigned int __ro_after_init xen_colors[CONFIG_NR_LLC_COLORS];
+static unsigned int __ro_after_init xen_num_colors;
+
 #define addr_to_color(addr) (((addr) & addr_col_mask) >> PAGE_SHIFT)
 
 /*
@@ -83,6 +90,12 @@ static int parse_color_config(const char *buf, unsigned int *colors,
     return *s ? -EINVAL : 0;
 }
 
+static int __init parse_xen_colors(const char *s)
+{
+    return parse_color_config(s, xen_colors, &xen_num_colors);
+}
+custom_param("xen-llc-colors", parse_xen_colors);
+
 static int __init parse_dom0_colors(const char *s)
 {
     return parse_color_config(s, dom0_colors, &dom0_num_colors);
@@ -166,6 +179,8 @@ static void dump_coloring_info(unsigned char key)
     printk("LLC way size: %u KiB\n", llc_way_size >> 10);
     printk("Number of LLC colors supported: %u\n", nr_colors);
     printk("Address to LLC color mask: 0x%lx\n", addr_col_mask);
+    printk("Xen LLC colors: ");
+    print_colors(xen_colors, xen_num_colors);
 }
 
 bool __init llc_coloring_init(void)
@@ -202,6 +217,21 @@ bool __init llc_coloring_init(void)
 
     addr_col_mask = (nr_colors - 1) << PAGE_SHIFT;
 
+    if ( !xen_num_colors )
+    {
+        printk(XENLOG_WARNING
+               "Xen LLC color config not found. Using default color: %u\n",
+               XEN_DEFAULT_COLOR);
+        xen_colors[0] = XEN_DEFAULT_COLOR;
+        xen_num_colors = XEN_DEFAULT_NUM_COLORS;
+    }
+
+    if ( !check_colors(xen_colors, xen_num_colors) )
+    {
+        printk(XENLOG_ERR "Bad LLC color config for Xen\n");
+        return false;
+    }
+
     register_keyhandler('K', dump_coloring_info, "dump LLC coloring info", 1);
 
     return true;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:48:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483036.749009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3l-0003wT-Em; Mon, 23 Jan 2023 15:48:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483036.749009; Mon, 23 Jan 2023 15:48:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz3l-0003vV-Ae; Mon, 23 Jan 2023 15:48:37 +0000
Received: by outflank-mailman (input) for mailman id 483036;
 Mon, 23 Jan 2023 15:48:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kihy=5U=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pJz3k-00006V-87
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:48:36 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64f195a5-9b35-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 16:48:34 +0100 (CET)
Received: by mail-ej1-x632.google.com with SMTP id kt14so31674250ejc.3
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 07:48:34 -0800 (PST)
Received: from carlo-ubuntu.mo54.unimo.it (nonato.mo54.unimo.it.
 [155.185.85.8]) by smtp.gmail.com with ESMTPSA id
 r2-20020a17090609c200b007bd28b50305sm22170978eje.200.2023.01.23.07.48.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 07:48:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64f195a5-9b35-11ed-91b6-6bf2151ebd3b
From: Carlo Nonato <carlo.nonato@minervasys.tech>
To: xen-devel@lists.xenproject.org
Cc: Carlo Nonato <carlo.nonato@minervasys.tech>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Marco Solieri <marco.solieri@minervasys.tech>
Subject: [PATCH v4 11/11] xen/arm: add cache coloring support for Xen
Date: Mon, 23 Jan 2023 16:47:35 +0100
Message-Id: <20230123154735.74832-12-carlo.nonato@minervasys.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit adds cache coloring support for Xen's own physical space.

It extends setup_pagetables() to make use of the Xen cache coloring
configuration. Page table construction is essentially the same, except
that PTEs point to a new temporarily mapped, physically colored space.

The temporary mapping is also used to relocate Xen to the new physical
space, starting at the address taken from the old get_xen_paddr()
function, which is brought back for the occasion.
The temporary mapping is finally converted to a mapping of the "old"
(meaning the original physical space) Xen code, so that the boot CPU can
still address the variables and functions used by secondary CPUs until
they enable the MMU. This is needed when the boot CPU brings up other
CPUs (psci.c and smpboot.c) and when the TTBR value is passed to them
(init_secondary_pagetables()).

Finally, since the alternative framework needs to remap the Xen text and
inittext sections, this operation must be done in a coloring-aware way.
The function xen_remap_colored() is introduced for that purpose.

Based on original work from: Luca Miccio <lucmiccio@gmail.com>

Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
---
v4:
- removed set_value_for_secondary() because it was wrongly cleaning cache
- relocate_xen() now calls switch_ttbr_id()
---
 xen/arch/arm/alternative.c              |  9 ++-
 xen/arch/arm/arm64/head.S               | 50 +++++++++++++
 xen/arch/arm/arm64/mm.c                 | 26 +++++--
 xen/arch/arm/include/asm/llc_coloring.h | 22 ++++++
 xen/arch/arm/include/asm/mm.h           |  7 +-
 xen/arch/arm/llc_coloring.c             | 45 ++++++++++++
 xen/arch/arm/mm.c                       | 94 ++++++++++++++++++++++---
 xen/arch/arm/psci.c                     |  9 ++-
 xen/arch/arm/setup.c                    | 75 +++++++++++++++++++-
 xen/arch/arm/smpboot.c                  |  9 ++-
 xen/arch/arm/xen.lds.S                  |  2 +-
 11 files changed, 325 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
index f00e3b9b3c..29f1ff34d4 100644
--- a/xen/arch/arm/alternative.c
+++ b/xen/arch/arm/alternative.c
@@ -9,6 +9,7 @@
 #include <xen/init.h>
 #include <xen/types.h>
 #include <xen/kernel.h>
+#include <xen/llc_coloring.h>
 #include <xen/mm.h>
 #include <xen/vmap.h>
 #include <xen/smp.h>
@@ -209,8 +210,12 @@ void __init apply_alternatives_all(void)
      * The text and inittext section are read-only. So re-map Xen to
      * be able to patch the code.
      */
-    xenmap = __vmap(&xen_mfn, 1U << xen_order, 1, 1, PAGE_HYPERVISOR,
-                    VMAP_DEFAULT);
+    if ( llc_coloring_enabled )
+        xenmap = xen_remap_colored(xen_mfn, xen_size);
+    else
+        xenmap = __vmap(&xen_mfn, 1U << xen_order, 1, 1, PAGE_HYPERVISOR,
+                        VMAP_DEFAULT);
+
     /* Re-mapping Xen is not expected to fail during boot. */
     BUG_ON(!xenmap);
 
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index a61b4d3c27..9ed7610afa 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -801,6 +801,56 @@ fail:   PRINT("- Boot failed -\r\n")
         b     1b
 ENDPROC(fail)
 
+GLOBAL(_end_boot)
+
+/* Copy Xen to new location and switch TTBR
+ * x0    ttbr
+ * x1    source address
+ * x2    destination address
+ * x3    length
+ *
+ * Source and destination must be word aligned; length is rounded up
+ * to a 16 byte boundary.
+ *
+ * MUST BE VERY CAREFUL when saving things to RAM over the copy */
+ENTRY(relocate_xen)
+        /* Copy 16 bytes at a time using:
+         *   x9: counter
+         *   x10: data
+         *   x11: data
+         *   x12: source
+         *   x13: destination
+         */
+        mov     x9, x3
+        mov     x12, x1
+        mov     x13, x2
+
+1:      ldp     x10, x11, [x12], #16
+        stp     x10, x11, [x13], #16
+
+        subs    x9, x9, #16
+        bgt     1b
+
+        /* Flush destination from dcache using:
+         * x9: counter
+         * x10: step
+         * x11: vaddr
+         */
+        dsb   sy        /* So the CPU issues all writes to the range */
+
+        mov   x9, x3
+        ldr   x10, =dcache_line_bytes /* x10 := step */
+        ldr   x10, [x10]
+        mov   x11, x2
+
+1:      dc    cvac, x11
+
+        add   x11, x11, x10
+        subs  x9, x9, x10
+        bgt   1b
+
+        b switch_ttbr_id
+
 /*
  * Switch TTBR
  *
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
index 2ede4e75ae..4419381fdd 100644
--- a/xen/arch/arm/arm64/mm.c
+++ b/xen/arch/arm/arm64/mm.c
@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
 #include <xen/init.h>
+#include <xen/llc_coloring.h>
 #include <xen/mm.h>
 
 #include <asm/setup.h>
@@ -121,26 +122,43 @@ void update_identity_mapping(bool enable)
 }
 
 extern void switch_ttbr_id(uint64_t ttbr);
+extern void relocate_xen(uint64_t ttbr, void *src, void *dst, size_t len);
 
 typedef void (switch_ttbr_fn)(uint64_t ttbr);
+typedef void (relocate_xen_fn)(uint64_t ttbr, void *src, void *dst, size_t len);
 
 void __init switch_ttbr(uint64_t ttbr)
 {
-    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
-    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;
+    vaddr_t vaddr, id_addr;
     lpae_t pte;
 
+    if ( llc_coloring_enabled )
+        vaddr = (vaddr_t)relocate_xen;
+    else
+        vaddr = (vaddr_t)switch_ttbr_id;
+
+    id_addr = virt_to_maddr(vaddr);
+
     /* Enable the identity mapping in the boot page tables */
     update_identity_mapping(true);
     /* Enable the identity mapping in the runtime page tables */
-    pte = pte_of_xenaddr((vaddr_t)switch_ttbr_id);
+    pte = pte_of_xenaddr(vaddr);
     pte.pt.table = 1;
     pte.pt.xn = 0;
     pte.pt.ro = 1;
     write_pte(&xen_third_id[third_table_offset(id_addr)], pte);
 
     /* Switch TTBR */
-    fn(ttbr);
+    if ( llc_coloring_enabled )
+    {
+        relocate_xen_fn *fn = (relocate_xen_fn *)id_addr;
+        fn(ttbr, _start, (void *)BOOT_RELOC_VIRT_START, _end - _start);
+    }
+    else
+    {
+        switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;
+        fn(ttbr);
+    }
 
     /*
      * Disable the identity mapping in the runtime page tables.
diff --git a/xen/arch/arm/include/asm/llc_coloring.h b/xen/arch/arm/include/asm/llc_coloring.h
index 7a01b8841c..ae5c4ff606 100644
--- a/xen/arch/arm/include/asm/llc_coloring.h
+++ b/xen/arch/arm/include/asm/llc_coloring.h
@@ -15,11 +15,28 @@
 
 #ifdef CONFIG_LLC_COLORING
 
+#include <xen/mm-frame.h>
+
+/**
+ * Iterate over each Xen mfn in the colored space.
+ * @mfn:    the current mfn. The first non-colored mfn must be provided as the
+ *          starting point.
+ * @i:      loop index.
+ */
+#define for_each_xen_colored_mfn(mfn, i)        \
+    for ( i = 0, mfn = xen_colored_mfn(mfn);    \
+          i < (_end - _start) >> PAGE_SHIFT;    \
+          i++, mfn = xen_colored_mfn(mfn_add(mfn, 1)) )
+
 bool __init llc_coloring_init(void);
 
 unsigned int *dom0_llc_colors(unsigned int *num_colors);
 unsigned int *llc_colors_from_str(const char *str, unsigned int *num_colors);
 
+paddr_t xen_colored_map_size(paddr_t size);
+mfn_t xen_colored_mfn(mfn_t mfn);
+void *xen_remap_colored(mfn_t xen_fn, paddr_t xen_size);
+
 #else /* !CONFIG_LLC_COLORING */
 
 static inline bool __init llc_coloring_init(void) { return true; }
@@ -27,6 +44,11 @@ static inline unsigned int *dom0_llc_colors(
     unsigned int *num_colors) { return NULL; }
 static inline unsigned int *llc_colors_from_str(
     const char *str, unsigned int *num_colors) { return NULL; }
+static inline paddr_t xen_colored_map_size(paddr_t size) { return 0; }
+static inline void *xen_remap_colored(mfn_t xen_fn, paddr_t xen_size)
+{
+    return NULL;
+}
 
 #endif /* CONFIG_LLC_COLORING */
 
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 596293f792..1b3be348b7 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -195,14 +195,19 @@ extern unsigned long total_pages;
 
 #define PDX_GROUP_SHIFT SECOND_SHIFT
 
+#define virt_to_reloc_virt(virt) \
+    (((vaddr_t)virt) - XEN_VIRT_START + BOOT_RELOC_VIRT_START)
+
 /* Boot-time pagetable setup */
-extern void setup_pagetables(unsigned long boot_phys_offset);
+extern void setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr);
 /* Map FDT in boot pagetable */
 extern void *early_fdt_map(paddr_t fdt_paddr);
 /* Switch to a new root page-tables */
 extern void switch_ttbr(uint64_t ttbr);
 /* Remove early mappings */
 extern void remove_early_mappings(void);
+/* Remove early LLC coloring mappings */
+extern void remove_llc_coloring_mappings(void);
 /* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr to the
  * new page table */
 extern int init_secondary_pagetables(int cpu);
diff --git a/xen/arch/arm/llc_coloring.c b/xen/arch/arm/llc_coloring.c
index 745e93a61a..ded1f33ad5 100644
--- a/xen/arch/arm/llc_coloring.c
+++ b/xen/arch/arm/llc_coloring.c
@@ -15,6 +15,7 @@
 #include <xen/llc_coloring.h>
 #include <xen/param.h>
 #include <xen/types.h>
+#include <xen/vmap.h>
 
 #include <asm/processor.h>
 #include <asm/sysregs.h>
@@ -41,6 +42,8 @@ static unsigned int __ro_after_init xen_colors[CONFIG_NR_LLC_COLORS];
 static unsigned int __ro_after_init xen_num_colors;
 
 #define addr_to_color(addr) (((addr) & addr_col_mask) >> PAGE_SHIFT)
+#define addr_set_color(addr, color) (((addr) & ~addr_col_mask) \
+                                     | ((color) << PAGE_SHIFT))
 
 /*
  * Parse the coloring configuration given in the buf string, following the
@@ -341,6 +344,48 @@ unsigned int get_nr_llc_colors(void)
     return nr_colors;
 }
 
+paddr_t xen_colored_map_size(paddr_t size)
+{
+    return ROUNDUP(size * nr_colors, XEN_PADDR_ALIGN);
+}
+
+mfn_t xen_colored_mfn(mfn_t mfn)
+{
+    paddr_t maddr = mfn_to_maddr(mfn);
+    unsigned int i, color = addr_to_color(maddr);
+
+    for ( i = 0; i < xen_num_colors; i++ )
+    {
+        if ( color == xen_colors[i] )
+            return mfn;
+        else if ( color < xen_colors[i] )
+            return maddr_to_mfn(addr_set_color(maddr, xen_colors[i]));
+    }
+
+    /* Jump to next color space (llc_way_size bytes) and use the first color */
+    return maddr_to_mfn(addr_set_color(maddr + llc_way_size, xen_colors[0]));
+}
+
+void *xen_remap_colored(mfn_t xen_mfn, paddr_t xen_size)
+{
+    unsigned int i;
+    void *xenmap;
+    mfn_t *xen_colored_mfns = xmalloc_array(mfn_t, xen_size >> PAGE_SHIFT);
+
+    if ( !xen_colored_mfns )
+        panic("Can't allocate LLC colored MFNs\n");
+
+    for_each_xen_colored_mfn( xen_mfn, i )
+    {
+        xen_colored_mfns[i] = xen_mfn;
+    }
+
+    xenmap = vmap(xen_colored_mfns, xen_size >> PAGE_SHIFT);
+    xfree(xen_colored_mfns);
+
+    return xenmap;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7015a0f841..f14fb98088 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -14,6 +14,7 @@
 #include <xen/guest_access.h>
 #include <xen/init.h>
 #include <xen/libfdt/libfdt.h>
+#include <xen/llc_coloring.h>
 #include <xen/mm.h>
 #include <xen/pfn.h>
 #include <xen/pmap.h>
@@ -96,6 +97,9 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
 DEFINE_PAGE_TABLE(xen_pgtable);
 static DEFINE_PAGE_TABLE(xen_first);
 #define THIS_CPU_PGTABLE xen_pgtable
+#ifdef CONFIG_LLC_COLORING
+static DEFINE_PAGE_TABLE(xen_colored_temp);
+#endif
 #else
 #define HYP_PT_ROOT_LEVEL 1
 /* Per-CPU pagetable pages */
@@ -391,7 +395,12 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
 
 lpae_t pte_of_xenaddr(vaddr_t va)
 {
-    paddr_t ma = va + phys_offset;
+    paddr_t ma;
+
+    if ( llc_coloring_enabled )
+        ma = virt_to_maddr(virt_to_reloc_virt(va));
+    else
+        ma = va + phys_offset;
 
     return mfn_to_xen_entry(maddr_to_mfn(ma), MT_NORMAL);
 }
@@ -484,9 +493,54 @@ static void clear_table(void *table)
     clean_and_invalidate_dcache_va_range(table, PAGE_SIZE);
 }
 
-/* Boot-time pagetable setup.
- * Changes here may need matching changes in head.S */
-void __init setup_pagetables(unsigned long boot_phys_offset)
+#ifdef CONFIG_LLC_COLORING
+static void __init create_llc_coloring_mappings(paddr_t xen_paddr)
+{
+    lpae_t pte;
+    unsigned int i;
+    mfn_t mfn = maddr_to_mfn(xen_paddr);
+
+    for_each_xen_colored_mfn( mfn, i )
+    {
+        pte = mfn_to_xen_entry(mfn, MT_NORMAL);
+        pte.pt.table = 1; /* level 3 mappings always have this bit set */
+        xen_colored_temp[i] = pte;
+    }
+
+    pte = mfn_to_xen_entry(virt_to_mfn(xen_colored_temp), MT_NORMAL);
+    pte.pt.table = 1;
+    write_pte(&boot_second[second_table_offset(BOOT_RELOC_VIRT_START)], pte);
+}
+
+void __init remove_llc_coloring_mappings(void)
+{
+    int rc;
+
+    /* destroy the _PAGE_BLOCK mapping */
+    rc = modify_xen_mappings(BOOT_RELOC_VIRT_START,
+                             BOOT_RELOC_VIRT_START + SZ_2M,
+                             _PAGE_BLOCK);
+    BUG_ON(rc);
+}
+#else
+static void __init create_llc_coloring_mappings(paddr_t xen_paddr) {}
+void __init remove_llc_coloring_mappings(void) {}
+#endif /* CONFIG_LLC_COLORING */
+
+/*
+ * Boot-time pagetable setup with coloring support
+ * Changes here may need matching changes in head.S
+ *
+ * The coloring support consists of:
+ * - Create a temporary colored mapping that conforms to Xen color selection.
+ * - pte_of_xenaddr takes care of translating the virtual addresses to the
+ *   new colored physical space and then returns the PTE, so that the page table
+ *   initialization can remain the same.
+ * - Copy Xen to the new colored physical space by exploiting the temporary
+ *   mapping.
+ * - Update TTBR0_EL2 with the new root page table address.
+ */
+void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
 {
     uint64_t ttbr;
     lpae_t pte, *p;
@@ -494,6 +548,9 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 
     phys_offset = boot_phys_offset;
 
+    if ( llc_coloring_enabled )
+        create_llc_coloring_mappings(xen_paddr);
+
     arch_setup_page_tables();
 
 #ifdef CONFIG_ARM_64
@@ -543,10 +600,13 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     pte.pt.table = 1;
     xen_second[second_table_offset(FIXMAP_ADDR(0))] = pte;
 
+    if ( llc_coloring_enabled )
+        ttbr = virt_to_maddr(virt_to_reloc_virt(xen_pgtable));
+    else
 #ifdef CONFIG_ARM_64
-    ttbr = (uintptr_t) xen_pgtable + phys_offset;
+        ttbr = (uintptr_t) xen_pgtable + phys_offset;
 #else
-    ttbr = (uintptr_t) cpu0_pgtable + phys_offset;
+        ttbr = (uintptr_t) cpu0_pgtable + phys_offset;
 #endif
 
     switch_ttbr(ttbr);
@@ -556,6 +616,18 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 #ifdef CONFIG_ARM_32
     per_cpu(xen_pgtable, 0) = cpu0_pgtable;
 #endif
+
+    /*
+     * Keep the original Xen memory mapped because secondary CPUs still point
+     * to it and a few variables need to be accessed by the boot CPU in order
+     * to let them boot. This mapping will also replace the one created at the
+     * beginning of setup_pagetables().
+     */
+    if ( llc_coloring_enabled )
+        map_pages_to_xen(BOOT_RELOC_VIRT_START,
+                         maddr_to_mfn(XEN_VIRT_START + phys_offset),
+                         SZ_2M >> PAGE_SHIFT, PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
+
 }
 
 static void clear_boot_pagetables(void)
@@ -576,12 +648,18 @@ static void clear_boot_pagetables(void)
 #ifdef CONFIG_ARM_64
 int init_secondary_pagetables(int cpu)
 {
+    uint64_t *init_ttbr_addr = &init_ttbr;
+
     clear_boot_pagetables();
 
+    if ( llc_coloring_enabled )
+        init_ttbr_addr = (uint64_t *)virt_to_reloc_virt(&init_ttbr);
+
     /* Set init_ttbr for this CPU coming up. All CPus share a single setof
      * pagetables, but rewrite it each time for consistency with 32 bit. */
-    init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
-    clean_dcache(init_ttbr);
+    *init_ttbr_addr = virt_to_maddr(xen_pgtable);
+    clean_dcache(*init_ttbr_addr);
+
     return 0;
 }
 #else
diff --git a/xen/arch/arm/psci.c b/xen/arch/arm/psci.c
index 695d2fa1f1..fdc798dd14 100644
--- a/xen/arch/arm/psci.c
+++ b/xen/arch/arm/psci.c
@@ -11,6 +11,7 @@
 
 #include <xen/types.h>
 #include <xen/init.h>
+#include <xen/llc_coloring.h>
 #include <xen/mm.h>
 #include <xen/smp.h>
 #include <asm/cpufeature.h>
@@ -39,9 +40,13 @@ static uint32_t psci_cpu_on_nr;
 int call_psci_cpu_on(int cpu)
 {
     struct arm_smccc_res res;
+    vaddr_t init_secondary_addr = (vaddr_t)init_secondary;
 
-    arm_smccc_smc(psci_cpu_on_nr, cpu_logical_map(cpu), __pa(init_secondary),
-                  &res);
+    if ( llc_coloring_enabled )
+        init_secondary_addr = virt_to_reloc_virt(init_secondary);
+
+    arm_smccc_smc(psci_cpu_on_nr, cpu_logical_map(cpu),
+                  __pa(init_secondary_addr), &res);
 
     return PSCI_RET(res);
 }
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index c04e5012f0..72da5a8e5e 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -456,7 +456,7 @@ static void * __init relocate_fdt(paddr_t dtb_paddr, size_t dtb_size)
     return fdt;
 }
 
-#ifdef CONFIG_ARM_32
+#if defined (CONFIG_ARM_32) || defined(CONFIG_LLC_COLORING)
 /*
  * Returns the end address of the highest region in the range s..e
  * with required size and alignment that does not conflict with the
@@ -548,7 +548,9 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
     }
     return e;
 }
+#endif
 
+#ifdef CONFIG_ARM_32
 /*
  * Find a contiguous region that fits in the static heap region with
  * required size and alignment, and return the end address of the region
@@ -622,6 +624,62 @@ static paddr_t __init next_module(paddr_t s, paddr_t *end)
     return lowest;
 }
 
+#ifdef CONFIG_LLC_COLORING
+/**
+ * get_xen_paddr - get physical address to relocate Xen to
+ *
+ * Xen is relocated as close to the top of RAM as possible and
+ * aligned to a XEN_PADDR_ALIGN boundary.
+ */
+static paddr_t __init get_xen_paddr(uint32_t xen_size)
+{
+    struct meminfo *mi = &bootinfo.mem;
+    paddr_t min_size;
+    paddr_t paddr = 0;
+    int i;
+
+    min_size = (xen_size + (XEN_PADDR_ALIGN-1)) & ~(XEN_PADDR_ALIGN-1);
+
+    /* Find the highest bank with enough space. */
+    for ( i = 0; i < mi->nr_banks; i++ )
+    {
+        const struct membank *bank = &mi->bank[i];
+        paddr_t s, e;
+
+        if ( bank->size >= min_size )
+        {
+            e = consider_modules(bank->start, bank->start + bank->size,
+                                 min_size, XEN_PADDR_ALIGN, 0);
+            if ( !e )
+                continue;
+
+#ifdef CONFIG_ARM_32
+            /* Xen must be under 4GB */
+            if ( e > 0x100000000ULL )
+                e = 0x100000000ULL;
+            if ( e < bank->start )
+                continue;
+#endif
+
+            s = e - min_size;
+
+            if ( s > paddr )
+                paddr = s;
+        }
+    }
+
+    if ( !paddr )
+        panic("Not enough memory to relocate Xen\n");
+
+    printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+           paddr, paddr + min_size);
+
+    return paddr;
+}
+#else
+static paddr_t __init get_xen_paddr(uint32_t xen_size) { return 0; }
+#endif
+
 static void __init init_pdx(void)
 {
     paddr_t bank_start, bank_size, bank_end;
@@ -1004,8 +1062,6 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Initialize traps early allow us to get backtrace when an error occurred */
     init_traps();
 
-    setup_pagetables(boot_phys_offset);
-
     smp_clear_cpu_maps();
 
     device_tree_flattened = early_fdt_map(fdt_paddr);
@@ -1031,8 +1087,13 @@ void __init start_xen(unsigned long boot_phys_offset,
     {
         if ( !llc_coloring_init() )
             panic("Xen LLC coloring support: setup failed\n");
+        xen_bootmodule->size = xen_colored_map_size(_end - _start);
+        xen_bootmodule->start = get_xen_paddr(xen_bootmodule->size);
     }
 
+    setup_pagetables(boot_phys_offset, xen_bootmodule->start);
+    device_tree_flattened = early_fdt_map(fdt_paddr);
+
     setup_mm();
 
     /* Parse the ACPI tables for possible boot-time configuration */
@@ -1147,6 +1208,14 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     setup_virt_paging();
 
+    /*
+     * The removal is done earlier than discard_initial_modules because the
+     * livepatch init uses a virtual address equal to BOOT_RELOC_VIRT_START.
+     * Remove LLC coloring mappings to expose a clear state to the livepatch
+     * module.
+     */
+    if ( llc_coloring_enabled )
+        remove_llc_coloring_mappings();
     do_initcalls();
 
     /*
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 4a89b3a834..7e437724b4 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -13,6 +13,7 @@
 #include <xen/domain_page.h>
 #include <xen/errno.h>
 #include <xen/init.h>
+#include <xen/llc_coloring.h>
 #include <xen/mm.h>
 #include <xen/param.h>
 #include <xen/sched.h>
@@ -445,6 +446,7 @@ int __cpu_up(unsigned int cpu)
 {
     int rc;
     s_time_t deadline;
+    unsigned long *smp_up_cpu_addr = &smp_up_cpu;
 
     printk("Bringing up CPU%d\n", cpu);
 
@@ -460,9 +462,12 @@ int __cpu_up(unsigned int cpu)
     /* Tell the remote CPU what its logical CPU ID is. */
     init_data.cpuid = cpu;
 
+    if ( llc_coloring_enabled )
+        smp_up_cpu_addr = (unsigned long *)virt_to_reloc_virt(&smp_up_cpu);
+
     /* Open the gate for this CPU */
-    smp_up_cpu = cpu_logical_map(cpu);
-    clean_dcache(smp_up_cpu);
+    *smp_up_cpu_addr = cpu_logical_map(cpu);
+    clean_dcache(*smp_up_cpu_addr);
 
     rc = arch_cpu_up(cpu);
 
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 3f7ebd19f3..a69c43e961 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -212,7 +212,7 @@ SECTIONS
        . = ALIGN(POINTER_ALIGN);
        __bss_end = .;
   } :text
-  _end = . ;
+  _end = ALIGN(PAGE_SIZE);
 
   /* Section for the device tree blob (if any). */
   .dtb : { *(.dtb) } :text
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 15:52:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 15:52:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483057.749019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz7L-0007s2-3i; Mon, 23 Jan 2023 15:52:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483057.749019; Mon, 23 Jan 2023 15:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJz7L-0007rt-11; Mon, 23 Jan 2023 15:52:19 +0000
Received: by outflank-mailman (input) for mailman id 483057;
 Mon, 23 Jan 2023 15:52:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJz7J-0007rG-6e
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 15:52:17 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2047.outbound.protection.outlook.com [40.107.14.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e8975095-9b35-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 16:52:16 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8932.eurprd04.prod.outlook.com (2603:10a6:20b:42f::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.28; Mon, 23 Jan
 2023 15:52:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 15:52:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8975095-9b35-11ed-91b6-6bf2151ebd3b
Message-ID: <8fc9366a-3a1b-6c40-499d-b16bce681c64@suse.com>
Date: Mon, 23 Jan 2023 16:52:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 00/11] Arm cache coloring
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.01.2023 16:47, Carlo Nonato wrote:
> Shared caches in multi-core CPU architectures represent a problem for
> predictability of memory access latency. This jeopardizes applicability
> of many Arm platforms in real-time critical and mixed-criticality
> scenarios. We introduce support for cache partitioning with page
> coloring, a transparent software technique that enables isolation
> between domains and Xen, and thus avoids cache interference.
> 
> When creating a domain, a simple syntax (e.g. `0-3` or `4-11`) allows
> the user to define assignments of cache partitions ids, called colors,
> where assigning different colors guarantees no mutual eviction on cache
> will ever happen. This instructs the Xen memory allocator to provide
> the i-th color assignee only with pages that map to color i, i.e. that
> are indexed in the i-th cache partition.
> 
> The proposed implementation supports the dom0less feature, but doesn't
> support the static-mem feature.
> The solution has been tested in several scenarios, including Xilinx Zynq
> MPSoCs.
> 
> v4 global changes:
> - added "llc" acronym (Last Level Cache) in multiple places in code
>   (e.g. coloring.{c|h} -> llc_coloring.{c|h}) to better describe the

Can you please use dashes rather than underscores in the names of new
files?

Jan
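
As background for the cover letter quoted above, the page-to-color mapping it describes can be sketched as follows. This is an illustrative sketch only, not code from the series; it assumes 4 KiB pages and a power-of-two number of colors.

```shell
# Illustrative sketch (not the series' code): with 4 KiB pages, a page's
# LLC color is given by the cache set-index bits just above the page offset.
page_color() {
    local paddr=$1 nr_colors=$2
    echo $(( (paddr >> 12) & (nr_colors - 1) ))
}

# Adjacent pages get different colors; pages whose addresses differ only
# above the color bits share a color, i.e. the same cache partition.
page_color 0x50000000 16   # color 0
page_color 0x50001000 16   # color 1
page_color 0x50010000 16   # color 0 again (can evict the first page)
```

Assigning disjoint color sets to two domains thus guarantees their pages land in disjoint cache partitions, which is what prevents mutual eviction.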


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:01:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:01:02 +0000
Message-ID: <be452e89-506d-a187-f918-4e7450c52bb1@amd.com>
Date: Mon, 23 Jan 2023 16:00:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] automation: Modify static-mem check in
 qemu-smoke-dom0less-arm64.sh
To: Xenia Ragiadakou <burzalodowa@gmail.com>,
 Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org
References: <20230123131023.9408-1-michal.orzel@amd.com>
 <01297097-d8ce-a22c-a616-f98691d3ad4f@gmail.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <01297097-d8ce-a22c-a616-f98691d3ad4f@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 23/01/2023 14:30, Xenia Ragiadakou wrote:
>
> On 1/23/23 15:10, Michal Orzel wrote:
>> At the moment, the static-mem check relies on the way Xen exposes the
>> memory banks in device tree. As this might change, the check should be
>> modified to be generic and not to rely on device tree. In this case,
>> let's use /proc/iomem which exposes the memory ranges in %08x format
>> as follows:
>> <start_addr>-<end_addr> : <description>
>>
>> This way, we can grep in /proc/iomem for an entry containing memory
>> region defined by the static-mem configuration with "System RAM"
>> description. If it exists, mark the test as passed. Also, take the
>> opportunity to add 0x prefix to domu_{base,size} definition rather than
>> adding it in front of each occurrence.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>
> Reviewed-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>
> Also you fixed the hard tab.
>
>> ---
>> Patch made as part of the discussion:
>> https://lore.kernel.org/xen-devel/ba37ee02-c07c-2803-0867-149c779890b6@amd.com/ 
>>
>>
>> CC: Julien, Ayan
>> ---
>>   automation/scripts/qemu-smoke-dom0less-arm64.sh | 13 ++++++-------
>>   1 file changed, 6 insertions(+), 7 deletions(-)
>>
>> diff --git a/automation/scripts/qemu-smoke-dom0less-arm64.sh b/automation/scripts/qemu-smoke-dom0less-arm64.sh
>> index 2b59346fdcfd..182a4b6c18fc 100755
>> --- a/automation/scripts/qemu-smoke-dom0less-arm64.sh
>> +++ b/automation/scripts/qemu-smoke-dom0less-arm64.sh
>> @@ -16,14 +16,13 @@ fi
>>     if [[ "${test_variant}" == "static-mem" ]]; then
>>       # Memory range that is statically allocated to DOM1
>> -    domu_base="50000000"
>> -    domu_size="10000000"
>> +    domu_base="0x50000000"
>> +    domu_size="0x10000000"
>>       passed="${test_variant} test passed"
>>       domU_check="
>> -current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@${domu_base}/reg 2>/dev/null)
>> -expected=$(printf \"%016x%016x\" 0x${domu_base} 0x${domu_size})
>> -if [[ \"\${expected}\" == \"\${current}\" ]]; then
>> -    echo \"${passed}\"
>> +mem_range=$(printf \"%08x-%08x\" ${domu_base} $(( ${domu_base} + ${domu_size} - 1 )))
>> +if grep -q -x \"\${mem_range} : System RAM\" /proc/iomem; then
>> +    echo \"${passed}\"
>>   fi
>>   "
>>   fi
>> @@ -126,7 +125,7 @@ UBOOT_SOURCE="boot.source"
>>   UBOOT_SCRIPT="boot.scr"' > binaries/config
>>     if [[ "${test_variant}" == "static-mem" ]]; then
>> -    echo -e "\nDOMU_STATIC_MEM[0]=\"0x${domu_base} 0x${domu_size}\"" >> binaries/config
>> +    echo -e "\nDOMU_STATIC_MEM[0]=\"${domu_base} ${domu_size}\"" >> binaries/config
>>   fi
>>     if [[ "${test_variant}" == "boot-cpupools" ]]; then
>
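
The check from the diff above can be tried standalone. A minimal sketch, hard-coding the script's example static-mem region (base 0x50000000, size 0x10000000):

```shell
# Sketch of the /proc/iomem-based check from the patch above, with the
# script's example static-mem region hard-coded.
domu_base="0x50000000"
domu_size="0x10000000"

# /proc/iomem lists ranges as "<start>-<end> : <description>" in %08x format.
mem_range=$(printf "%08x-%08x" "${domu_base}" $(( domu_base + domu_size - 1 )))
echo "${mem_range}"

# Inside the domU, this exact line must appear for the test to pass:
if grep -q -x "${mem_range} : System RAM" /proc/iomem; then
    echo "static-mem test passed"
fi
```

For the values above, `mem_range` evaluates to `50000000-5fffffff`, matching how the kernel formats the statically allocated bank in /proc/iomem.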


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:10:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:10:30 +0000
Sender: tamas@tklengyel.com
MIME-Version: 1.0
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com> <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com>
In-Reply-To: <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 23 Jan 2023 11:09:37 -0500
Message-ID: <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest areas
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: multipart/alternative; boundary="0000000000005f6bb505f2f0a22f"

--0000000000005f6bb505f2f0a22f
Content-Type: text/plain; charset="UTF-8"

On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> In preparation of the introduction of new vCPU operations allowing to
> register the respective areas (one of the two is x86-specific) by
> guest-physical address, add the necessary fork handling (with the
> backing function yet to be filled in).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>  }
>
> +static int copy_guest_area(struct guest_area *cd_area,
> +                           const struct guest_area *d_area,
> +                           struct vcpu *cd_vcpu,
> +                           const struct domain *d)
> +{
> +    mfn_t d_mfn, cd_mfn;
> +
> +    if ( !d_area->pg )
> +        return 0;
> +
> +    d_mfn = page_to_mfn(d_area->pg);
> +
> +    /* Allocate & map a page for the area if it hasn't been already. */
> +    if ( !cd_area->pg )
> +    {
> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> +        p2m_type_t p2mt;
> +        p2m_access_t p2ma;
> +        unsigned int offset;
> +        int ret;
> +
> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> +        {
> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> +
> +            if ( !pg )
> +                return -ENOMEM;
> +
> +            cd_mfn = page_to_mfn(pg);
> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> +
> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> +                                 p2m->default_access, -1);
> +            if ( ret )
> +                return ret;
> +        }
> +        else if ( p2mt != p2m_ram_rw )
> +            return -EBUSY;
> +
> +        /*
> +         * Simply specify the entire range up to the end of the page. All the
> +         * function uses it for is a check for not crossing page boundaries.
> +         */
> +        offset = PAGE_OFFSET(d_area->map);
> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
> +                             PAGE_SIZE - offset, cd_area, NULL);
> +        if ( ret )
> +            return ret;
> +    }
> +    else
> +        cd_mfn = page_to_mfn(cd_area->pg);

Everything to this point seems to be non mem-sharing/forking related. Could
these live somewhere else? There must be some other place where allocating
these areas happens already for non-fork VMs so it would make sense to just
refactor that code to be callable from here.

> +
> +    copy_domain_page(cd_mfn, d_mfn);
> +
> +    return 0;
> +}
> +
>  static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
>  {
>      struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
> @@ -1745,6 +1804,16 @@ static int copy_vcpu_settings(struct dom
>              copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
>          }
>
> +        /* Same for the (physically registered) runstate and time info areas. */
> +        ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
> +                              &d_vcpu->runstate_guest_area, cd_vcpu, d);
> +        if ( ret )
> +            return ret;
> +        ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
> +                              &d_vcpu->arch.time_guest_area, cd_vcpu, d);
> +        if ( ret )
> +            return ret;
> +
>          ret = copy_vpmu(d_vcpu, cd_vcpu);
>          if ( ret )
>              return ret;
> @@ -1987,7 +2056,10 @@ int mem_sharing_fork_reset(struct domain
>
>   state:
>      if ( reset_state )
> +    {
>          rc = copy_settings(d, pd);
> +        /* TBD: What to do here with -ERESTART? */

Where is -ERESTART coming from?

 int copy_vcpu_settings(struct dom<br>&gt; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);<br>&gt=
; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}<br>&gt;<br>&gt; + =C2=A0 =C2=A0 =C2=
=A0 =C2=A0/* Same for the (physically registered) runstate and time info ar=
eas. */<br>&gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0ret =3D copy_guest_area(&amp;c=
d_vcpu-&gt;runstate_guest_area,<br>&gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&a=
mp;d_vcpu-&gt;runstate_guest_area, cd_vcpu, d);<br>&gt; + =C2=A0 =C2=A0 =C2=
=A0 =C2=A0if ( ret )<br>&gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0ret=
urn ret;<br>&gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0ret =3D copy_guest_area(&amp;=
cd_vcpu-&gt;arch.time_guest_area,<br>&gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&a=
mp;d_vcpu-&gt;arch.time_guest_area, cd_vcpu, d);<br>&gt; + =C2=A0 =C2=A0 =
=C2=A0 =C2=A0if ( ret )<br>&gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0=
return ret;<br>&gt; +<br>&gt; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0ret =3D cop=
y_vpmu(d_vcpu, cd_vcpu);<br>&gt; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if ( ret=
 )<br>&gt; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0return ret;<br>&=
gt; @@ -1987,7 +2056,10 @@ int mem_sharing_fork_reset(struct domain<br>&gt;=
<br>&gt; =C2=A0 state:<br>&gt; =C2=A0 =C2=A0 =C2=A0if ( reset_state )<br>&g=
t; + =C2=A0 =C2=A0{<br>&gt; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0rc =3D copy_s=
ettings(d, pd);<br><div>&gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0/* TBD: What to d=
o here with -ERESTART? */</div><div><br></div><div>Where does ERESTART comi=
ng from?<br></div></div>



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:14:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483112.749055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJzSd-0003ot-HT; Mon, 23 Jan 2023 16:14:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483112.749055; Mon, 23 Jan 2023 16:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJzSd-0003om-E3; Mon, 23 Jan 2023 16:14:19 +0000
Received: by outflank-mailman (input) for mailman id 483112;
 Mon, 23 Jan 2023 16:14:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJzSb-0003oe-Mi
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 16:14:17 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2079.outbound.protection.outlook.com [40.107.20.79])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fb901ec1-9b38-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 17:14:16 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7511.eurprd04.prod.outlook.com (2603:10a6:20b:23f::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 16:14:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 16:14:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb901ec1-9b38-11ed-91b6-6bf2151ebd3b
Message-ID: <11198de8-fcda-5e19-0ab8-25056dc47341@suse.com>
Date: Mon, 23 Jan 2023 17:14:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/9] x86/shadow: replace sh_reset_l3_up_pointers()
Content-Language: en-US
To: George Dunlap <george.dunlap@cloud.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>
References: <f17d07b3-2558-70b8-9d1b-b599a54a2d59@suse.com>
 <d91b5315-a5bb-a6ee-c9bb-58974c733a4e@suse.com>
 <CA+zSX=ZVK_7xpgraJyC3__uORqXo8F9Atj9gCF+oO7OyfRrtYg@mail.gmail.com>
 <c8ca4781-13ac-add6-1ae0-558f8d0da052@suse.com>
 <CA+zSX=b2o_sbC+CwLUm2F5QnSKaGBSayUPgsLheLWHob8jUnrg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CA+zSX=b2o_sbC+CwLUm2F5QnSKaGBSayUPgsLheLWHob8jUnrg@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0104.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 23.01.2023 16:20, George Dunlap wrote:
> Re the original question: I've stared at the code for a bit now, and I
> can't see anything obviously wrong or dangerous about it.
> 
> But it does make me ask, why do we need the "unpinning_l3" pseudo-argument
> at all?  Is there any reason not to unconditionally zero out sp->up when we
> find a head_type of SH_type_l3_64_shadow?  As far as I can tell, sp->list
> doesn't require any special state.  Why do we make the effort to leave it
> alone when we're not unpinning all l3s?

This was an attempt to retain original behavior as much as possible, but I'm
afraid that, ...

> In fact, is there a way to unpin an l3 shadow *other* than when we're
> unpinning all l3's?

... since the answer here is of course "yes", ...

>  If so, then this patch, as written, is broken -- the
> original code clears the up-pointer for *all* L3_64 shadows, regardless of
> whether they're on the pinned list; the new patch will only clear the ones
> on the pinned list.  But unconditionally clearing sp->up could actually fix
> that.

... you're right, and I failed (went too far) with that attempt. Plus it'll
naturally resolve the parameter-vs-state aspect.

Jan
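
The simplification being converged on above — unconditionally zeroing the up-pointer of every L3 shadow encountered, rather than keying the behavior off an "unpinning" pseudo-argument — can be modelled in a few lines. These are toy types, not Xen's actual shadow structures, shown only to illustrate why dropping the flag removes the behavioral split:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of a shadow page: a head type and an up-pointer. */
enum sh_type { SH_TYPE_L2_64, SH_TYPE_L3_64 };

struct shadow_page {
    enum sh_type head_type;
    struct shadow_page *up;
    struct shadow_page *next;   /* walk list */
};

/*
 * Clear ->up for every L3 shadow on the list, regardless of whether it
 * is pinned: with no separate "unpinning" flag there is no conditional
 * path that can diverge from the original clear-them-all behavior.
 */
static void reset_l3_up_pointers(struct shadow_page *head)
{
    for (struct shadow_page *sp = head; sp; sp = sp->next)
        if (sp->head_type == SH_TYPE_L3_64)
            sp->up = NULL;
}
```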


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:17:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:17:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483117.749065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJzVh-0004QV-W2; Mon, 23 Jan 2023 16:17:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483117.749065; Mon, 23 Jan 2023 16:17:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJzVh-0004QO-Rm; Mon, 23 Jan 2023 16:17:29 +0000
Received: by outflank-mailman (input) for mailman id 483117;
 Mon, 23 Jan 2023 16:17:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kihy=5U=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pJzVg-0004QI-Qk
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 16:17:28 +0000
Received: from mail-ed1-x531.google.com (mail-ed1-x531.google.com
 [2a00:1450:4864:20::531])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6dd9d5b6-9b39-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 17:17:27 +0100 (CET)
Received: by mail-ed1-x531.google.com with SMTP id 18so15156412edw.7
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 08:17:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dd9d5b6-9b39-11ed-91b6-6bf2151ebd3b
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech> <8fc9366a-3a1b-6c40-499d-b16bce681c64@suse.com>
In-Reply-To: <8fc9366a-3a1b-6c40-499d-b16bce681c64@suse.com>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Mon, 23 Jan 2023 17:17:16 +0100
Message-ID: <CAG+AhRVt660Gw_c7H1PHyKkxfuzGJzyXrx23HBvFGQMcHguzgQ@mail.gmail.com>
Subject: Re: [PATCH v4 00/11] Arm cache coloring
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Jan,

On Mon, Jan 23, 2023 at 4:52 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 23.01.2023 16:47, Carlo Nonato wrote:
> > Shared caches in multi-core CPU architectures represent a problem for
> > predictability of memory access latency. This jeopardizes applicability
> > of many Arm platforms in real-time critical and mixed-criticality
> > scenarios. We introduce support for cache partitioning with page
> > coloring, a transparent software technique that enables isolation
> > between domains and Xen, and thus avoids cache interference.
> >
> > When creating a domain, a simple syntax (e.g. `0-3` or `4-11`) allows
> > the user to define assignments of cache partition ids, called colors,
> > where assigning different colors guarantees no mutual eviction on cache
> > will ever happen. This instructs the Xen memory allocator to provide
> > the i-th color assignee only with pages that map to color i, i.e. that
> > are indexed in the i-th cache partition.
> >
> > The proposed implementation supports the dom0less feature.
> > The proposed implementation doesn't support the static-mem feature.
> > The solution has been tested in several scenarios, including Xilinx Zynq
> > MPSoCs.
> >
> > v4 global changes:
> > - added "llc" acronym (Last Level Cache) in multiple places in code
> >   (e.g. coloring.{c|h} -> llc_coloring.{c|h}) to better describe the
>
> Please can you use dashes in favor of underscores in the names of new
> files?

Yes, ok.

> Jan

I also forgot to mention that this patch series applies on top of the most
recent version of Julien's series
(https://marc.info/?l=xen-devel&m=167360469228247).

Thanks.
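
The core idea in the quoted cover letter — a page's "color" is determined by the physical-address bits that index the last-level cache, so handing a domain only pages of its assigned colors keeps it in a disjoint cache partition — can be illustrated with a small calculation. The names, the fixed `PAGE_SHIFT`, and the power-of-two color count are assumptions for the sketch; the actual series derives the number of colors from the LLC geometry:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12  /* 4 KiB pages assumed */

/*
 * With a physically indexed LLC, consecutive page frames cycle through
 * the available colors: the low bits of the page frame number select
 * which cache partition the page's lines land in.  num_colors is
 * assumed to be a power of two.
 */
static unsigned int page_color(uint64_t paddr, unsigned int num_colors)
{
    return (paddr >> PAGE_SHIFT) & (num_colors - 1);
}

/* Two pages can evict each other in the LLC only if their colors match. */
static int may_conflict(uint64_t a, uint64_t b, unsigned int num_colors)
{
    return page_color(a, num_colors) == page_color(b, num_colors);
}
```

An allocator restricted to serving a domain only pages whose `page_color()` falls in that domain's assigned color set therefore guarantees no mutual eviction between differently colored domains.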


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:19:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:19:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483122.749075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJzXe-00050P-BE; Mon, 23 Jan 2023 16:19:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483122.749075; Mon, 23 Jan 2023 16:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJzXe-00050I-7A; Mon, 23 Jan 2023 16:19:30 +0000
Received: by outflank-mailman (input) for mailman id 483122;
 Mon, 23 Jan 2023 16:19:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ozL9=5U=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pJzXd-00050A-15
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 16:19:29 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2052.outbound.protection.outlook.com [40.107.94.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b3fe0745-9b39-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 17:19:26 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by PH7PR12MB7872.namprd12.prod.outlook.com (2603:10b6:510:27c::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 16:19:21 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 16:19:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3fe0745-9b39-11ed-b8d1-410ff93cb8f0
Message-ID: <efda7e8c-165d-43dc-4728-f89fa4e636da@amd.com>
Date: Mon, 23 Jan 2023 16:19:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 01/18] xen/arm64: flushtlb: Reduce scope of barrier for
 local TLB flush
To: xen-devel@lists.xenproject.org
References: <20221212095523.52683-1-julien@xen.org>
 <20221212095523.52683-2-julien@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20221212095523.52683-2-julien@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P123CA0017.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:ba::22) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0


On 12/12/2022 09:55, Julien Grall wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Per D5-4929 in ARM DDI 0487H.a:
> "A DSB NSH is sufficient to ensure completion of TLB maintenance
>   instructions that apply to a single PE. A DSB ISH is sufficient to
>   ensure completion of TLB maintenance instructions that apply to PEs
>   in the same Inner Shareable domain.
> "
>
> This means the barrier after local TLB flushes can be reduced to
> non-shareable.
>
> Note that the scope of the barrier in the workaround has not been
> changed because Linux v6.1-rc8 is also using 'ish' and I couldn't
> find anything in the Neoverse N1 documentation suggesting that 'nsh'
> would be sufficient.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> ---
>
>      I have used an older version of the Arm Arm because the explanation
>      in the latest (ARM DDI 0487I.a) is less obvious. I reckon the paragraph
>      about DSB in D8.13.8 is missing the shareability. But this is implied
>      in B2.3.11:
>
>      "If the required access types of the DSB is reads and writes, the
>       following instructions issued by PEe before the DSB are complete for
>       the required shareability domain:
>
>       [...]
>
>       — All TLB maintenance instructions.
>      "
>
>      Changes in v3:
>          - Patch added
> ---
>   xen/arch/arm/include/asm/arm64/flushtlb.h | 27 ++++++++++++++---------
>   1 file changed, 16 insertions(+), 11 deletions(-)
>
> diff --git a/xen/arch/arm/include/asm/arm64/flushtlb.h b/xen/arch/arm/include/asm/arm64/flushtlb.h
> index 7c5431518741..39d429ace552 100644
> --- a/xen/arch/arm/include/asm/arm64/flushtlb.h
> +++ b/xen/arch/arm/include/asm/arm64/flushtlb.h
> @@ -12,8 +12,9 @@
>    * ARM64_WORKAROUND_REPEAT_TLBI:
>    * Modification of the translation table for a virtual address might lead to
>    * read-after-read ordering violation.
> - * The workaround repeats TLBI+DSB operation for all the TLB flush operations.
> - * While this is stricly not necessary, we don't want to take any risk.
> + * The workaround repeats TLBI+DSB ISH operation for all the TLB flush
> + * operations. While this is strictly not necessary, we don't want to
> + * take any risk.
>    *
>    * For Xen page-tables the ISB will discard any instructions fetched
>    * from the old mappings.
> @@ -21,38 +22,42 @@
>    * For the Stage-2 page-tables the ISB ensures the completion of the DSB
>    * (and therefore the TLB invalidation) before continuing. So we know
>    * the TLBs cannot contain an entry for a mapping we may have removed.
> + *
> + * Note that for local TLB flush, using non-shareable (nsh) is sufficient
> + * (see D5-4929 in ARM DDI 0487H.a). However, the memory barrier
> + * for the workaround is left as inner-shareable to match Linux.

Nit: It might be good to mention the Linux commit ID.

>    */
> -#define TLB_HELPER(name, tlbop)                  \
> +#define TLB_HELPER(name, tlbop, sh)              \
>   static inline void name(void)                    \
>   {                                                \
>       asm volatile(                                \
> -        "dsb  ishst;"                            \
> +        "dsb  "  # sh  "st;"                     \
>           "tlbi "  # tlbop  ";"                    \
>           ALTERNATIVE(                             \
>               "nop; nop;",                         \
> -            "dsb  ish;"                          \
> +            "dsb  "  # sh  ";"                   \
>               "tlbi "  # tlbop  ";",               \
>               ARM64_WORKAROUND_REPEAT_TLBI,        \
>               CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \
> -        "dsb  ish;"                              \
> +        "dsb  "  # sh  ";"                       \
>           "isb;"                                   \
>           : : : "memory");                         \
>   }
>
>   /* Flush local TLBs, current VMID only. */
> -TLB_HELPER(flush_guest_tlb_local, vmalls12e1);
> +TLB_HELPER(flush_guest_tlb_local, vmalls12e1, nsh);
>
>   /* Flush innershareable TLBs, current VMID only */
> -TLB_HELPER(flush_guest_tlb, vmalls12e1is);
> +TLB_HELPER(flush_guest_tlb, vmalls12e1is, ish);
>
>   /* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -TLB_HELPER(flush_all_guests_tlb_local, alle1);
> +TLB_HELPER(flush_all_guests_tlb_local, alle1, nsh);
>
>   /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
> -TLB_HELPER(flush_all_guests_tlb, alle1is);
> +TLB_HELPER(flush_all_guests_tlb, alle1is, ish);
>
>   /* Flush all hypervisor mappings from the TLB of the local processor. */
> -TLB_HELPER(flush_xen_tlb_local, alle2);
> +TLB_HELPER(flush_xen_tlb_local, alle2, nsh);
>
>   /* Flush TLB of local processor for address va. */
>   static inline void  __flush_xen_tlb_one_local(vaddr_t va)
> --
> 2.38.1
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:24:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:24:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483130.749088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJzca-0006Ur-2A; Mon, 23 Jan 2023 16:24:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483130.749088; Mon, 23 Jan 2023 16:24:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pJzcZ-0006Uk-Vb; Mon, 23 Jan 2023 16:24:35 +0000
Received: by outflank-mailman (input) for mailman id 483130;
 Mon, 23 Jan 2023 16:24:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pJzcZ-0006Ue-0m
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 16:24:35 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061.outbound.protection.outlook.com [40.107.21.61])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b0ac85d-9b3a-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 17:24:32 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8594.eurprd04.prod.outlook.com (2603:10a6:20b:425::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.28; Mon, 23 Jan
 2023 16:24:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 16:24:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b0ac85d-9b3a-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com>
Date: Mon, 23 Jan 2023 17:24:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest
 areas
Content-Language: en-US
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
 <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com>
 <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0108.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8594:EE_
X-MS-Office365-Filtering-Correlation-Id: 13d7ef72-ab85-48cf-818a-08dafd5e4e46
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 13d7ef72-ab85-48cf-818a-08dafd5e4e46
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 16:24:30.9905
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dXiEbKU1aPVXni6M8W9ZGuR9V9uNTwGKs01RL6Pxahz9Ou+jENi++rmK8MvxlMMdweyA3s3uszzZenfFDaPH1g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8594

On 23.01.2023 17:09, Tamas K Lengyel wrote:
> On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>> --- a/xen/arch/x86/mm/mem_sharing.c
>> +++ b/xen/arch/x86/mm/mem_sharing.c
>> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>>  }
>>
>> +static int copy_guest_area(struct guest_area *cd_area,
>> +                           const struct guest_area *d_area,
>> +                           struct vcpu *cd_vcpu,
>> +                           const struct domain *d)
>> +{
>> +    mfn_t d_mfn, cd_mfn;
>> +
>> +    if ( !d_area->pg )
>> +        return 0;
>> +
>> +    d_mfn = page_to_mfn(d_area->pg);
>> +
>> +    /* Allocate & map a page for the area if it hasn't been already. */
>> +    if ( !cd_area->pg )
>> +    {
>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
>> +        p2m_type_t p2mt;
>> +        p2m_access_t p2ma;
>> +        unsigned int offset;
>> +        int ret;
>> +
>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
>> +        {
>> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
>> +
>> +            if ( !pg )
>> +                return -ENOMEM;
>> +
>> +            cd_mfn = page_to_mfn(pg);
>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
>> +
>> +        ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
>> +                                 p2m->default_access, -1);
>> +            if ( ret )
>> +                return ret;
>> +        }
>> +        else if ( p2mt != p2m_ram_rw )
>> +            return -EBUSY;
>> +
>> +        /*
>> +         * Simply specify the entire range up to the end of the page. All the
>> +         * function uses it for is a check for not crossing page boundaries.
>> +         */
>> +        offset = PAGE_OFFSET(d_area->map);
>> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
>> +                             PAGE_SIZE - offset, cd_area, NULL);
>> +        if ( ret )
>> +            return ret;
>> +    }
>> +    else
>> +        cd_mfn = page_to_mfn(cd_area->pg);
> 
> Everything to this point seems to be non mem-sharing/forking related. Could
> these live somewhere else? There must be some other place where allocating
> these areas happens already for non-fork VMs so it would make sense to just
> refactor that code to be callable from here.

It is the "copy" aspect which makes this mem-sharing (or really fork)
specific. Plus in the end this is no different from what you have
there right now for copying the vCPU info area. In the final patch
that other code gets removed by re-using the code here.

I also haven't been able to spot anything that could be factored
out (and one might expect that if there was something, then the vCPU
info area copying should also already have used it). map_guest_area()
is all that is used for other purposes as well.

>> +
>> +    copy_domain_page(cd_mfn, d_mfn);
>> +
>> +    return 0;
>> +}
>> +
>>  static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
>>  {
>>      struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
>> @@ -1745,6 +1804,16 @@ static int copy_vcpu_settings(struct dom
>>              copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
>>          }
>>
>> +        /* Same for the (physically registered) runstate and time info areas. */
>> +        ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
>> +                              &d_vcpu->runstate_guest_area, cd_vcpu, d);
>> +        if ( ret )
>> +            return ret;
>> +        ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
>> +                              &d_vcpu->arch.time_guest_area, cd_vcpu, d);
>> +        if ( ret )
>> +            return ret;
>> +
>>          ret = copy_vpmu(d_vcpu, cd_vcpu);
>>          if ( ret )
>>              return ret;
>> @@ -1987,7 +2056,10 @@ int mem_sharing_fork_reset(struct domain
>>
>>   state:
>>      if ( reset_state )
>> +    {
>>          rc = copy_settings(d, pd);
>> +        /* TBD: What to do here with -ERESTART? */
> 
> Where is ERESTART coming from?

From map_guest_area()'s attempt to acquire the hypercall deadlock mutex,
in order to then pause the subject vCPU. I suppose that in the forking
case it may already be paused, but then there's no way map_guest_area()
could know. Looking at the pause count is fragile, as there's no
guarantee that the vCPU won't be unpaused while we're still doing work
on it. Hence I view such checks as only suitable for assertions.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:56:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:56:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483136.749108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK07K-0001lc-Py; Mon, 23 Jan 2023 16:56:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483136.749108; Mon, 23 Jan 2023 16:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK07K-0001lV-MG; Mon, 23 Jan 2023 16:56:22 +0000
Received: by outflank-mailman (input) for mailman id 483136;
 Mon, 23 Jan 2023 16:56:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rv8W=5U=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pK07J-0001Uj-LH
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 16:56:21 +0000
Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com
 [2a00:1450:4864:20::333])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc7c58b3-9b3e-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 17:56:20 +0100 (CET)
Received: by mail-wm1-x333.google.com with SMTP id g10so9558430wmo.1
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 08:56:20 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 fc17-20020a05600c525100b003db1d9553e7sm12419990wmb.32.2023.01.23.08.56.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 08:56:20 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc7c58b3-9b3e-11ed-91b6-6bf2151ebd3b
X-Received: by 2002:a7b:c5cb:0:b0:3d3:4f99:bb32 with SMTP id n11-20020a7bc5cb000000b003d34f99bb32mr23611659wmk.36.1674492980339;
        Mon, 23 Jan 2023 08:56:20 -0800 (PST)
Message-ID: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
Subject: [RISC-V] Switch to H-mode
From: Oleksii <oleksii.kurochko@gmail.com>
To: Alistair Francis <alistair.francis@wdc.com>, 
	xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>
Date: Mon, 23 Jan 2023 18:56:19 +0200
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

Hi Alistair and community,

I am working on upstream RISC-V support for Xen, based on your and
Bobby's patches.

While adding the RISC-V support I realized that Xen is run in S-mode.
Output of OpenSBI:
    ...
    Domain0 Next Mode         : S-mode
    ...
So my first question is: shouldn't it be run in H-mode?

If I am right, then it looks like we have to patch OpenSBI to add
support for H-mode, as it is not supported now:
[1]
https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_domain.c#L380
[2]
https://github.com/riscv-software-src/opensbi/blob/master/include/sbi/riscv_encoding.h#L110
Please correct me if I am wrong.

The other option I see is to switch to H-mode in U-Boot, as I understand
the classical boot flow is:
    OpenSBI -> U-Boot -> Xen -> Domain{0,...}
That is, if it is at all possible, since U-Boot will be in S-mode after
OpenSBI.

Thanks in advance.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:56:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:56:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483135.749098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK07B-0001Uw-IL; Mon, 23 Jan 2023 16:56:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483135.749098; Mon, 23 Jan 2023 16:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK07B-0001Up-FA; Mon, 23 Jan 2023 16:56:13 +0000
Received: by outflank-mailman (input) for mailman id 483135;
 Mon, 23 Jan 2023 16:56:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pK07A-0001Uj-5R
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 16:56:12 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2084.outbound.protection.outlook.com [40.107.241.84])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d39a7748-9b3e-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 17:56:06 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8208.eurprd04.prod.outlook.com (2603:10a6:102:1c7::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 16:56:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 16:56:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d39a7748-9b3e-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e9073354-cc8f-8496-1fb6-d53ff5879ccf@suse.com>
Date: Mon, 23 Jan 2023 17:56:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when
 HVM+PV32
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <942e1164-5ed0-bdda-424f-90134b0e22c5@suse.com>
 <79420a4f-358a-f404-7965-e5f215234ba9@citrix.com>
 <2ec2a36e-4264-6c12-c2e6-1af85c91f1f6@suse.com>
 <04f5c9ba-24aa-c9be-e8de-a867c897835a@citrix.com>
 <12bcdc9b-52bf-ad10-a3ec-286d00372be0@suse.com>
In-Reply-To: <12bcdc9b-52bf-ad10-a3ec-286d00372be0@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0176.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8208:EE_
X-MS-Office365-Filtering-Correlation-Id: 69f1dfe4-8bd6-4b25-4f8d-08dafd62b661
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	M6KHcvx/Coe8dUjMRXWZ0B/7mFlc9s7Ac1lXBU+dAxLw6lEO47TEaYTlgeDncPpzlwWZHMKNPdKcVVtqgd30T/xF0m+PZfosfrqB+YbueQy/XX3QXifWkWjU4KW781qSVo2BphZWgGidiIYtd1/NzA1zPLYN+saJclGfbs67/6ePY1sg6ac/alFPyU/tXROqIQGpGPZ84O1HrBORa2vip2/+lbhetUAzqYRu6Y1ZOZ+uhUJ0lm3T4AccIWbhJ+T+OkTiULQ0SZF9PLccGcU8tkRKoKQMok0oYMH451id7JaBZhaMcZGiirsU6j7aQdX0jClJXdi0kYho9CmiwiBaGgwhGzvM8GgAVYhJ/H8QE2mHkYiIXnTiwthd7+L8RWJXUV10U/K1qavhviCR9AFSnXQCNqWsjrAV++rHrIb05t9EcDDMME1oFaJwza3QyjXnZh11PbrU5eEZQkKeSlratYdthdlnt95RFMKjZeItubfEGaoORCF+jUvp33kkc6leAKl2v3oMLXxKaqZtenbbFMHelLCuWb5Y/wMNohjEpue3KLQHHemy0eD2jlqSe3KYdNfDxBa8oRYhzgcJmOkSAoQ5RvGTAJR0jiyGq7RAyS7QIVApuD+MohyPNllfpOJjc7MWSCvfIn19dbUyN9HpTGVcbkAKgl2vLtm4uCIiZo2tNjcbP66odkFfYh9yorJigS0zpbcAqFdqgMur2JCXI8n6xIxb2LVB8B2aIeRFIJU=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(346002)(366004)(39850400004)(136003)(376002)(396003)(451199015)(36756003)(2906002)(5660300002)(38100700002)(8936002)(4326008)(41300700001)(31696002)(86362001)(66476007)(6486002)(478600001)(31686004)(6916009)(6512007)(6506007)(53546011)(186003)(26005)(8676002)(2616005)(316002)(66946007)(66556008)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WVhyVW9FTTQwZThIOU02Q3RnbW5XNVhNNGxGdTBjejQ3RndVbFljQmRPVWh2?=
 =?utf-8?B?Mm9xQUQzcW9LUXJDQ0hIOVlvQVI4YzQ0L2lEUGFOVEJMdmYyMjJKY2NSOWJR?=
 =?utf-8?B?NW4yMmxaSWRhRzZIalhSUU13MHpZT09yL3dGMGVEeStMZUVwWWppcUxJb2Qz?=
 =?utf-8?B?QUFxTk15TlY0YTc3RFRqaysxRFlNVEZYczNrK09xQUMvTko1NUJWWHZxMzY0?=
 =?utf-8?B?NDh2ZTJQOGU5M0Zjc2pUUVJNbHptVS9CViswY0F4d0hmdFdNN1BhdHNFVGk2?=
 =?utf-8?B?ZnkvUW5Vd2xUVGtMNEdSNGZGMFMxR3lxUjZOMitNZ1hMcXRiOUlpVkpxNUNa?=
 =?utf-8?B?NFNrMUhUemxHejU0QituQytlM0pNK09xRFRIbjN5eUhIYWRVWXNQOVhVQVdw?=
 =?utf-8?B?WjRPR3pWVkxGcEg4OFErR1VFT0kwS2ZFMnBxS2FwUDVPTkgrYWV6UUlNdE1M?=
 =?utf-8?B?aTV0TDVrc25MUHMyZDBCOWFJaDBXK0JhQWVJMkVwTGxHWElOWWNxM3J0WUw5?=
 =?utf-8?B?bkRoUTE2OXRmSXZmRmViR3J2VlJpZGVlUkluTTlMZVV0NGQ0cnlVMzh1eWdU?=
 =?utf-8?B?bjhzNm1tVXMzNHN4cStyQkNJeEtMTnpDOUNXV0lpMDhKVzU3dXM4endldGhn?=
 =?utf-8?B?TVVvQWM0cEVCYm0rUUNWckhTMVpnL0pmdkxpNkN4a0dyRkJ1bjhTWDMzMGdk?=
 =?utf-8?B?d1dCZ1ZuK2xiUGVIdnAyZ2taNWRKYlhublNDVFpOcXhCR0llOGJKcHRZMTA4?=
 =?utf-8?B?NFhHekV6TENBTFlsai95RGV0MjdyZXA4UGM1QXlzRDVDc001NXdGODhubzhk?=
 =?utf-8?B?djdicEJmSmRBclpyaTNwSm12bUtKU2Q3NC90NkJGVFM5di9rZGpiYS9hLzFT?=
 =?utf-8?B?S1F1SEY4VlBUQ2xKdW1HOWZBSVhqU1BVYksrN0VWd3BCV3kyQ2ttbTNnZDZB?=
 =?utf-8?B?L2dHUVdhMHBSdUd4WGhzc0ZKUUZMQUhJSGF1Mm5sMVYzV0ZCRHNKdEk3RGFq?=
 =?utf-8?B?TUNiNCsxYWN2a1oyRVNBQWhrRnFBbStuTmpjR20vZTZwd3lyWndHcjg2cXlx?=
 =?utf-8?B?TkVBNjJNWU9iNGI2MlMrOTJZUStSQkxwR3l4dXI1VzJFeXpSLzhaYU5mWE9J?=
 =?utf-8?B?aWxZdy9YY21lZG5YV3JFeHplbXRpSGg4Um5RbGdTNERBdlorSUNHOXNEQXdV?=
 =?utf-8?B?QU1WZ28yRHF3amR6WngxUzRnR2xLNEo1UkN6R0I1Sm02Rkoya2VpY0U3OXNr?=
 =?utf-8?B?d1QyUkt0WmJGK3lBY0tUVE9qNVRLaFF4UnBIS0R0cUI1N2ZTejJCUHJXWk1T?=
 =?utf-8?B?bWFFczMvZzBycUMzU3pBWmlwY2c2Ny9KdE1LREozbk0rdFNjL3JrckI2K1I5?=
 =?utf-8?B?OVV4TmlWVG9VV0p0aEh5cXp2aC9WQWpWTnBwNTlCRXVjazFOV0tibFRKSDhE?=
 =?utf-8?B?TzV4eGNzRyswd0NmU0QwUS90dWVVZzRtQkNnT2NoS2lBdVRONHJmbzRDM3pN?=
 =?utf-8?B?dWpzTjZ0WXNRTGllWlRxbmtkQ3FLNzA0c2xHQlFaelFrMlMzUHJQMHRGMVk1?=
 =?utf-8?B?NjFHbFd4a3g1TmVsNGovSmd5eHRHMEcyUXJTZHBLMFo5UzFlVVcycGRjaWtP?=
 =?utf-8?B?TlUzK0hPbjZnSVNjaHhQZS9zWitYUWVkRlIydllWd1JxTGtTbFFWN0VhcFVx?=
 =?utf-8?B?Y2h0Y0JvK0ZXay9rY0ExTVF2QmV5aEs3ZjdicklMc0dNYUsrSmVSU2ZXRnVG?=
 =?utf-8?B?c3FuTVdSS2w1MmdRN0VXbENxT2YxcFlPQ1NhY0RVQ1lZQnFDTTJLbXR5ZVJ2?=
 =?utf-8?B?OWRkRk1GTkNEOUwzV2VtVkRWUEwxazFYMGFjQU0wUnE4bm5tT3hnNWV4RXNJ?=
 =?utf-8?B?VUxuNXVUTVo4YmxQSHNNRHdNbDlZN3NYQW9TbGhBOTExaWVGc1dDT3haWTF3?=
 =?utf-8?B?YWw1VGMreTNXRUdFeWZlT1d5T1VoRHF0dHBoWXZ6RHU3OVFaN1FuaVEvMDJv?=
 =?utf-8?B?TURBWU5vR0FaMXVEM0hBNlViWURDL1J4N0pqZVZHcElkVWxPS0JrYkNpUnEy?=
 =?utf-8?B?SmtTcW9ScXM2cldtY29HVWQ0NEZidEZWbi9EQWxTbUtPdExheXZsbmxPcUpD?=
 =?utf-8?Q?fe/PSE9fRBcVntlssuiA+Qj7D?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 69f1dfe4-8bd6-4b25-4f8d-08dafd62b661
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 16:56:03.6518
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 40bfHUTuVArF536D89ArqVM/s+LuXygtdzk8+jtMer7VIqB7yP3tU3Y5cWvzJAgal7c1YPUcChoEpG6RpEwIiQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8208

On 23.01.2023 13:49, Jan Beulich wrote:
> On 23.01.2023 13:30, Andrew Cooper wrote:
>> On 23/01/2023 10:47 am, Jan Beulich wrote:
>>> On 23.01.2023 11:43, Andrew Cooper wrote:
>>>> On 23/01/2023 8:12 am, Jan Beulich wrote:
>>>>> While the table is used only when HVM=y, the table entry of course needs
>>>>> to be properly populated when also PV32=y. Fully removing the table
>>>>> entry was therefore wrong.
>>>>>
>>>>> Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> Erm, why?
>>>>
>>>> The safety justification for the original patch was that this is HVM
>>>> only code.

Coming back to this: There was no such claim. There was a claim about
the type in question being PV32-only, and there was a comparison with
other types which are HVM-only.

>>>>  And it really is HVM only code - it's genuinely compiled out
>>>> for !HVM builds.
>>> Right, and we have logic taking care of the !HVM case. But that same
>>> logic uses this "HVM-only" table when HVM=y also for all PV types.
>>
>> Ok - this is what needs fixing then.
>>
>> This is a layering violation which has successfully tricked you into
>> making a buggy patch.
>>
>> I'm unwilling to bet this will be the final time either...  "this file
>> is HVM-only, therefore no PV paths enter it" is a reasonable
>> expectation, and should be true.
> 
> Nice abstract consideration, but would you mind pointing out how you envision
> shadow_size() to look while meeting your constraints _and_ meeting my
> demand of no excess #ifdef-ary? The way I'm reading your reply is that
> you ask to special case L2H _right in_ shadow_size(). Then again see also
> my remark in the original (now known faulty) patch regarding such special
> casing. I could of course follow that route, regardless of HVM (i.e.
> unlike said there not just for the #else part) ...

Actually no, that remark was about the opposite (!PV32) case, so if I
took both together, the result would be:

static inline unsigned int
shadow_size(unsigned int shadow_type)
{
#ifdef CONFIG_HVM
#ifdef CONFIG_PV32
    if ( shadow_type == SH_type_l2h_64_shadow )
        return 1;
#endif
    ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
    return sh_type_to_size[shadow_type];
#else
#ifndef CONFIG_PV32
    if ( shadow_type == SH_type_l2h_64_shadow )
        return 0;
#endif
    ASSERT(shadow_type < SH_type_unused);
    return shadow_type != SH_type_none;
#endif
}

I think that's quite a bit worse than using sh_type_to_size[] for all
kinds of guests uniformly when HVM=y. This

static inline unsigned int
shadow_size(unsigned int shadow_type)
{
    if ( shadow_type == SH_type_l2h_64_shadow )
        return IS_ENABLED(CONFIG_PV32);
#ifdef CONFIG_HVM
    ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
    return sh_type_to_size[shadow_type];
#else
    ASSERT(shadow_type < SH_type_unused);
    return shadow_type != SH_type_none;
#endif
}

is also only marginally better, as we would really be better off avoiding
any such open-coding.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 16:58:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 16:58:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483147.749118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK09g-0002fI-7q; Mon, 23 Jan 2023 16:58:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483147.749118; Mon, 23 Jan 2023 16:58:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK09g-0002fB-4a; Mon, 23 Jan 2023 16:58:48 +0000
Received: by outflank-mailman (input) for mailman id 483147;
 Mon, 23 Jan 2023 16:58:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K5hw=5U=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pK09e-0002f5-Mo
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 16:58:46 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2081.outbound.protection.outlook.com [40.107.15.81])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32a3b535-9b3f-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 17:58:45 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8208.eurprd04.prod.outlook.com (2603:10a6:102:1c7::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 16:58:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 16:58:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32a3b535-9b3f-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kHeA42YueRqNfTfix+huWZ91uayUiqEYKjij3TTxZsj0OgFl521zI0LPTHPzeafaZ6l9wvOhi22d0jUe9xZkxBV3FtFv0ZDS9rz5Ff8E+1fo+aqpj72QPmH5Lsu2cdDQoQ8fZLCmZ43IdP7WRKQ4FYg3c936yQrL1048zJloAUYds4k+iFWoJeeyP5UO02MNQsakE18LPK7ELrgw2UgLAZuonQoG/razSe6DCmrYCYNqRS1skZWbsZOI785iVE3Ll6j3PMuG1mfatrSN1UCp46d+XYFbOBpC7Jg9RJ27TgrWAOvFFwpO97UFYA2yOsKXuF+0ULq1ttkOIxxu4SqScg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0yHRVQxuv4tZGsOTtvADB08XHjyedRpuCMjpilpE5Nw=;
 b=Da3AfREpsmoC5mf5SC56I3+20W/kt4jYKx2wW3ub0yOe4qlyHiTiD9DmayIZOnbaor594F5ABk9i1WNvsxlrM3gQWQQhOmc7C461EPtUNHmIyo6KslUdXS/SwJ4eiLbV+xzzuWwZ/pVR36Rbi0ajGoC4/EFxnKuWWSj2nyUigAncjvL64bI0bJEmGhtXNeF/jEdah8GLKoF+u21FL4MouwdWTHdnbZtAzBmnN8XMlUn8DttOgW4EFSNjhZiuuXXrgOL7w8ocB+GK6x8q+IDw6cq/T1wrpM6ZJtWUS+lkTlczywFJNRJocIku2xrytC5tOkrDxnYQ2VbjAp6mAlEujg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0yHRVQxuv4tZGsOTtvADB08XHjyedRpuCMjpilpE5Nw=;
 b=cWQ+4CecHQeYQViXgbBP9kculCMDRgT/EtJ6xt+AzxGvx4u6oq2EbdFWxTUFKm63tO+6/L2YVFFwA1ohDBTbvEKfOQikNEF1Ggy9OoEdphLd69+NfdCnmW+WF+Z+yUxpna28b57N+KyQqJiZ6eq6REiJ0MMyeRZ2cOpmTidSGOHovJKyDJv7Ano4sVaMOIFK2UbTgFgIcawtfKTNLQXTYSIixed7D2nP0uxgsrMUDaWOsuDKG1fHZCx+Kv/BKLtlbbVQ4WaKvFsPxq/ypBr59MsOgu5RgI4dIJTwBhMJoctBg6EX+V4gw+cpAp7GCthlmfGiMxWRYdXgbIX6bivjng==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f13acc85-3cc3-d685-2e32-e47bcbed893d@suse.com>
Date: Mon, 23 Jan 2023 17:58:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/shadow: sh_type_to_size[] needs L2H entry when
 HVM+PV32
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
 "Tim (Xen.org)" <tim@xen.org>, George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <942e1164-5ed0-bdda-424f-90134b0e22c5@suse.com>
 <79420a4f-358a-f404-7965-e5f215234ba9@citrix.com>
 <2ec2a36e-4264-6c12-c2e6-1af85c91f1f6@suse.com>
 <04f5c9ba-24aa-c9be-e8de-a867c897835a@citrix.com>
 <12bcdc9b-52bf-ad10-a3ec-286d00372be0@suse.com>
 <e9073354-cc8f-8496-1fb6-d53ff5879ccf@suse.com>
In-Reply-To: <e9073354-cc8f-8496-1fb6-d53ff5879ccf@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0103.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8208:EE_
X-MS-Office365-Filtering-Correlation-Id: d3caec77-1802-4d8e-c6a9-08dafd6315fb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qCHerRxpbs4xUMZTCeWuANf6/Lyc+s9jcugjDHaNmWe0OFIQf8eNLW7gdB6f/2qrlYEicwYYe3KPMzQ60LIic2jEISriuqfHw4t269PI/vWaW42LmsgEm4d+yRXqUkXEgsbSnWGXVYOxRVjfqxC4PQKFyKe0J9mmy/DR/+Mz+584ZtvQ7/AIiPs99NX8VnaTRxKRK8q92lqTr5llOeORHvJK+uM+GGnB007+zr/Ex4dRgPIR4NCzYAAl/FcWE1RrFcgfQJe6Yr4bb8A7YWxVaya+LiZh70JCS8de0yZB2/UQLk+tdjbQ5ijXqO5gWc8Ea0KokQB2wKOieToIPNddhYeqpr/DlNWFocNjdRveCMWS1ehquWOKinoK8+8G+59S62qgYw5XQeVOb9SO6kz4bU1af+R+FyrO/Y4g7Wxa3YfkSj8BBcsHcGKPqqBId1T/5AdvbV0mviZbK8H0FCUW2SjRVSRHfVajft7n+WmtfV82FIWeVo4erw7G5ZvVfQoapyljdUL4eP+z2GDY15noIa/UrNhQR8QP17jXCWC6KGqBIw0D5H1NtRNy5H1tSmzaFPlQjAwZWcMv861nJt2WPyqXQVj7FxqtmRHS97NSKpp8Z/bYD8gv1W70Z7SUrhLym7p36mehPC5+axdthk0TYQh4qcnk5d/qipARXcrxrpe9t/mOY19PfqQsBy+Rp+ScQNVQgTFTMYpVhoUy4wf2O8ePAFUjCP5UViSe8GBw0w4=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(346002)(366004)(39850400004)(136003)(376002)(396003)(451199015)(36756003)(2906002)(5660300002)(38100700002)(8936002)(4326008)(41300700001)(31696002)(86362001)(66476007)(6486002)(478600001)(31686004)(6916009)(6512007)(6506007)(53546011)(186003)(26005)(8676002)(2616005)(316002)(66946007)(66556008)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cFVGOVNSeUlpOVpUOXJaU0ZibXVNYWRyRVloWFo1YXVHRUVsaWhqV0kwZHIv?=
 =?utf-8?B?YW5TYnBXcWV0U0lFSTlaUjNBVkMzK0FNZjJ1TVlQcm1INzhBdW1RYXV0N3lS?=
 =?utf-8?B?Yk5FUUhSU0drR3NNOW5mdTNEZ0RwOThyQXhvK3haK0VkT0ZabUt3aHM2UlFK?=
 =?utf-8?B?VUJMMCtPbXRUdDg5WExhYnVZUUY2d0JkREVZSVdNSkR4WEU4d2U1TVM0aVRs?=
 =?utf-8?B?cmk3bFN5TjZQeE1HVWlGV2ZiQktMa2tVR3ZmT0dMczE0cVdHYzF5aG9QNmFB?=
 =?utf-8?B?NFBlQkZDWXhvcWNqT2JCcUg1WGMzdkVYNlBqUkFITWJjUXRvZUNDdFRzbHJ2?=
 =?utf-8?B?bXk3c2h2SUJhTVNVOEd5Z2tyZjd1S09aMUZZSG40STMyVmRsMzg0T2dMZ2pq?=
 =?utf-8?B?RW0zYzVCbmhuSTdsTkpqbTBPTHZVeWs4d29iTWpNUlhPbWVucHhUNUprcUV6?=
 =?utf-8?B?RXU3ZHBmb0RnQWZMc3p2Z1RMOFRIUHQ5ZXkxOURXdXRCZFN3YzlnNndORC81?=
 =?utf-8?B?aHBNY1pBQWt6c2JJbDk4cHRPSHU5YkZTVFdzUW4yTG1qRU40R0FkQi8raGJW?=
 =?utf-8?B?Y20xaUJudjZVeEJpQ1A4NXpDOWZDS3g4U2tOMTJuR3JPZDRlSGlaVzBZMmY1?=
 =?utf-8?B?RUNBQ0ZNbmdSOVFwazlZNkxBUzlmckE2MzRZbjMvTXNOREc0Zk1xbm1pYlI0?=
 =?utf-8?B?MXFTNVdrVEdjODdscU94YlU5TU5KeGVwSmFSbDFuWmprS2JieVBmU2xJelZP?=
 =?utf-8?B?NGFtNTZEOHQrbEYrczlaZkthZUJvVzh2ZHVuWW81bXlUZUFNZG4rMmhpUFFt?=
 =?utf-8?B?d2tqTldoWXlpWjFFT05jczhrTFc4MjJtMTNGNUlsWTdqVWdXUWRoUmlxT1Rr?=
 =?utf-8?B?TnoxbW11TUFXU2FZNGpQRTJCVmFNVmhoMTVtNDh1cEd4U3RiYTVlVTNORzRz?=
 =?utf-8?B?WFVOM25QanczUWNVbEdpQ2Y1SWdsOW5VMkNxOVZEdDZabExIbjZvV3dBZ0xh?=
 =?utf-8?B?NVMzclhtUS84Y2QrY1ZIM085dUgzM3BZLzdidGNkcU5wb3lLLytNdWFxNXEx?=
 =?utf-8?B?ajlBdlJQYjhEVnRhTkJOUk11ZU5IbXU3WDAxUGNXWFVEN043ckVmVExNQXVC?=
 =?utf-8?B?eldIQWMvOUdOOExVNTkyR1B0c2d5Y1BBdTNsbGlQa3BQcVUwTHlqRXZWM2hJ?=
 =?utf-8?B?SmtubXNNWHJaekpQMm9TaDR3WElyQkx4NE1VdFdrSlh5NXp6OXFMOXRUNDJm?=
 =?utf-8?B?WGdFMGlQWHVGNmpzTnZEVER2SEpxZ3kwN1Yyak5IeUtHU1A5RFFISDI1MFJR?=
 =?utf-8?B?dWhsVDNudUtkWUI3MU52NzgxOCt1YmV3MCtPTFMyR1BZditKQnNnSGpwTE1t?=
 =?utf-8?B?MDNIUHRGaTh0TTFRL08zaURWSllQamwyWWd1aVdaN25mNGtwVFVaRTBDSEJt?=
 =?utf-8?B?S2VqSm1HUm5IcVNlUktINWs4MkM2VmFGZmM1NXNkaVdqUjdEdnpCbG5jNXo1?=
 =?utf-8?B?dWJ4ZUhybnp1czl3LzJSc0wwNGZMdUYwa2hTSy9maGV3S29uRXVRRUJYSCto?=
 =?utf-8?B?T1BmTU1nQnBHVUt1RUNWbjBkNHU3SFV2ejR3TmVxWGRJZzVGYnBmNWZ6N0s4?=
 =?utf-8?B?VnZ3VEpxNWE0aVBId3pqQlh0QnFVT096dUhpeFlFZ3FuRlUvU2dUNmI4T3pO?=
 =?utf-8?B?Y2svaDlYc05WTXhCNkZsdE0rMHhWK215MjgwZ0pBSDBsZ3hlZHpuQjJWNGVZ?=
 =?utf-8?B?UXFvSU00TlRBdEhWc1FUZGswR09iUHdDQlMrWDEyeWF4UzVEeDQ0bk9BMG1t?=
 =?utf-8?B?S2pBU1BxKzBHM29PYk5MMXpFNEprNVRJL3plRnpPaVZHNU1ETGRoQ1lHMXdI?=
 =?utf-8?B?dHgxTUJ1M3kyU0VOckMxMnpHZG55MXdFL29WaHVEaFpnRDJYVzZLL2tIZ1FW?=
 =?utf-8?B?WExWSDV6c1RKN0pFSGF6Q20wQ3BPaTRYbTQrT01BbjVyTVhQbFJhcmZkUTVR?=
 =?utf-8?B?dFR3QlNCYVE0cWMvLzNsT2FZWEhjTnNTcnlrR1I1YXVjUCt4azRMNVRGQjl0?=
 =?utf-8?B?MStpbzU3eUUzdTNSeE1mN2lmZ0dVaUhJeW4ySVBCUnB5amIyeVd0c3VFbHJL?=
 =?utf-8?Q?iy5Cfmf1UM2zUlO2aF+1Y4N0I?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d3caec77-1802-4d8e-c6a9-08dafd6315fb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 16:58:44.0320
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TtFAwhwpz1FATcJ7Ufd9RB11aAmDIBDiWaRpwZtFJ/atW7epiIn0dk08K8MIrY+hpHsANfERCPxcqLKlERaGHA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8208

On 23.01.2023 17:56, Jan Beulich wrote:
> On 23.01.2023 13:49, Jan Beulich wrote:
>> On 23.01.2023 13:30, Andrew Cooper wrote:
>>> This is a layering violation which has successfully tricked you into
>>> making a buggy patch.
>>>
>>> I'm unwilling to bet this will be the final time either...  "this file
>>> is HVM-only, therefore no PV paths enter it" is a reasonable
>>> expectation, and should be true.
>>
>> Nice abstract consideration, but would you mind pointing out how you envision
>> shadow_size() to look while meeting your constraints _and_ meeting my
>> demand of no excess #ifdef-ary? The way I'm reading your reply is that
>> you ask to special case L2H _right in_ shadow_size(). Then again see also
>> my remark in the original (now known faulty) patch regarding such special
>> casing. I could of course follow that route, regardless of HVM (i.e.
>> unlike said there not just for the #else part) ...
> 
> Actually no, that remark was about the opposite (!PV32) case, so if I
> took both together, the result would be:
> 
> static inline unsigned int
> shadow_size(unsigned int shadow_type)
> {
> #ifdef CONFIG_HVM
> #ifdef CONFIG_PV32
>     if ( shadow_type == SH_type_l2h_64_shadow )
>         return 1;
> #endif
>     ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
>     return sh_type_to_size[shadow_type];
> #else
> #ifndef CONFIG_PV32
>     if ( shadow_type == SH_type_l2h_64_shadow )
>         return 0;
> #endif
>     ASSERT(shadow_type < SH_type_unused);
>     return shadow_type != SH_type_none;
> #endif
> }
> 
> I think that's quite a bit worse than using sh_type_to_size[] for all
> kinds of guests uniformly when HVM=y. This
> 
> static inline unsigned int
> shadow_size(unsigned int shadow_type)
> {
>     if ( shadow_type == SH_type_l2h_64_shadow )
>         return IS_ENABLED(CONFIG_PV32);

Which might better use opt_pv32 instead, if we really were to go this route.

Jan

> #ifdef CONFIG_HVM
>     ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
>     return sh_type_to_size[shadow_type];
> #else
>     ASSERT(shadow_type < SH_type_unused);
>     return shadow_type != SH_type_none;
> #endif
> }
> 
> is also only marginally better, as we would really be better off avoiding
> any such open-coding.
> 
> Jan
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 17:08:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 17:08:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483153.749131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK0IX-0004Dz-3s; Mon, 23 Jan 2023 17:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483153.749131; Mon, 23 Jan 2023 17:07:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK0IX-0004Ds-0g; Mon, 23 Jan 2023 17:07:57 +0000
Received: by outflank-mailman (input) for mailman id 483153;
 Mon, 23 Jan 2023 17:07:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ozL9=5U=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pK0IV-0004Dm-Qf
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 17:07:56 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on2051.outbound.protection.outlook.com [40.107.100.51])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78eee164-9b40-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 18:07:53 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by SA3PR12MB7784.namprd12.prod.outlook.com (2603:10b6:806:317::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 17:07:49 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 17:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78eee164-9b40-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cbhZCkfxCbeCrqnwkhUL7AY9fDAHChnbFBCWSqOehalrhDWm8FOauFsLDDQ/3ARaOSFLONMzT2p86tp4d+NK+peS6Csw8YCSMTxQflavjYZ5v233AP6aKKkzK3lWyHJ6lxj2LKsYmDIkNNskcBlZAEsAFNuo/MH5hRpJ/AGkMzYdxKGiBLe7UTGlYRgiIkbeyYC9jZNCSHH4JCp/7FkN7DKtLNBuLMo5Yfr5+THazy+A5PkaCRSSt5VSTmPBjLGLuRtmLiLEkhP0HyiPMP73QwrgjDYMH4RDaYl5ob1hadWgAfierdQPVoeU7wa2ExtT66/x5gAdgdbHLer7ls/mgQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/1NhhlgkYmiyjOuNwuPRzQIdQAahxEJSbuj37pvXjV0=;
 b=Lr6lSxxYWsC42s1AeQD/pxRjOAxiGG+pd39luHckNF73AxuXyS/6GqQ5tUANPQSfNcQnL+XjSfz1IN2z4uR1EBQHVsthvVmqCkgZqWJmrzyUN7b5TGjeWErUFZsKQltNTDWlU+jcKSakSZ3uCEDOWZwjVx4vCa+zueuIhaaB0JmFIdSlNvkK1vSlTQORWnDNzDl4pVEywCJtVcvKidXxsc4uUK6I/xS5npoa6ePvImZXnczxR4jaD7cbcj3x9fnPs7LSYhK+lX8agFC9nNFnzzhzeGEmLVmLfDvkI5hsGFZCNhmOPziMTBGkH+HN4MJgIBvR54UPA+p7YvSKnk5QiA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/1NhhlgkYmiyjOuNwuPRzQIdQAahxEJSbuj37pvXjV0=;
 b=TFVFCKd5YG8N7+PFrsATi4bDFr/l9AVzXWZ2sWDh35fTAo1i7ivfRHw4c0PVf5zTkUnhgcRydJP9ieWgUUa1xBSIulIkRVvt80u1QU6a9oOypZUTDfE+Z40lTr3jIgALoJ+ZO9+MYzwzWeiFkhbkN/PROVnC7wwy0PVTp/Q07ps=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <d311518f-c464-29a3-d517-565a30e36dcd@amd.com>
Date: Mon, 23 Jan 2023 17:07:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 12/40] xen/mpu: introduce helpers for MPU enablement
To: xen-devel@lists.xenproject.org
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-13-Penny.Zheng@arm.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230113052914.3845596-13-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0237.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:b::33) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|SA3PR12MB7784:EE_
X-MS-Office365-Filtering-Correlation-Id: 97c3ecef-5dd6-4746-016b-08dafd645b2b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hxaCWB1nj7+mOj0F9M0ErtO9M4oVfiEUlFeOA8WVA659vxZR1InJQ26/IHOHUd5wwBzYqUbzzLHVGR183h/BJPm3LpPksrBzXBIVctPFwYx+9pdknl0sQ/8UtRHqpREnLcSqpmmyozgn1Y54qXRLSOPjedH4lchpJ43nQ9Ih834hXfpCg8GdxV81Eqr8B0BsWQfO7tWDCdOOyhgD9wQv5jLzCz2QOh9IYhgnUMyCxTaHG8lJLwgWax9zyuTcuWHAeg1U+iZkjbfedydffuJHR3M0+s9slLrOMPUpSx2hrOy9Nxs9F7OpxbLmcmiTpS9mP7wPybQxq0B8x2kaAKxyGOeVr+zJ463AXkfU/jFCBkPm41HMYVNPIJuRRjHWjY7KFOo/+rqZqjEpJ4D1EQCkkHIo62tn+fM/Ez7sJVVsaH48Xry0tgMWXZmstVAJ1KMsGwYaCQmK7DpHQEqdvvDTzMsc5GNnsMqJ2LWPjoQ+76mVEAyyqT7sv8saOudJ8Q8LsQSQdkCyo/3A/Kt/842K1ncgPoiXNKNpz4j/zdz25zsE7NLbqnrYUocsr/0RSsxgAb7dK8ywzA0vw1neaP5FDVtdD+387R2u88n4m2hp96tcgaL0nmKy1PA/dbjOj5pNfiNmIYNdm4I6J6bo9+nSfNkzHabWsFb9kXVAQjSUyc27EOo8FbS0gt0V0uGuff9N+2f+tRl8gXXsjgjcMJw2O6QQksP4GfMeieR/qnzsfOA=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230022)(4636009)(366004)(346002)(396003)(136003)(376002)(39860400002)(451199015)(38100700002)(36756003)(31696002)(478600001)(316002)(66946007)(6486002)(8676002)(66556008)(6916009)(66476007)(2616005)(2906002)(31686004)(6506007)(53546011)(83380400001)(26005)(6666004)(5660300002)(6512007)(41300700001)(186003)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?V1FhMmdxR0FTeTJCem10ZWVFSFpWQzExWENHRlRaVktoY2FkVmFhcFh2Z0o5?=
 =?utf-8?B?dlNNN3FxNnI2ODczdHBESWh6WU03b2ZkdWtzS01BK0JWN0JQc1NMNHV3aHl6?=
 =?utf-8?B?YThwdEU3eFdtWDQwVnc2bWMyMUJveFVQSjZHMWVzdlM4Z3hqK28ySHNEZGtH?=
 =?utf-8?B?aVVlMWJZMkJ0bk0wdkEyWktmc2FWaXVhV0V2MmM1akcxWFM2bEZ0WU5DZjdW?=
 =?utf-8?B?c0E4dEV2ZCtQajNsdGxmSnJ5bmFWMk5jUDRuYmE0RGxBZTZaTlpjcHlNekhC?=
 =?utf-8?B?bjB0SDdRMjZteC82UGpKMUo0U3ZUdCtxMXhKNnpsR0EyZHUxb1d1U0lsaVNF?=
 =?utf-8?B?eStKL3ZMZ1hiaEVhZ25FUEdoK25nUy9URGwvMDVwUldEVTl3enE1YUxUaGFK?=
 =?utf-8?B?R1ZGQkF3bGw3dlVaYTRqK3I2U2RuQ2pteUlVejV6NHVqQiswOHdPVjFvVnF2?=
 =?utf-8?B?NmpNQmJEQTlJWFdEMG1iVWg2T3ZORFJVL2JFdU9wQ01mT2g0b2VwUU9GbUFi?=
 =?utf-8?B?MmR2SHRhQTRPY1g2bzRhTzhYWnF3UmZsUzZBUnUvVElSNkYreFVkVGdQNWdW?=
 =?utf-8?B?Q3VLQlpvOSsrM3NaVzhnSk12WkEyOTcySDBFQ0tLTDVSMlhBSFdiaTJnWFFw?=
 =?utf-8?B?clcvYmtSNW16N1g1RG9lYWZ4UW1oaDBQUXpqRWE0a0w4YTA4OEVpWUN6K1RM?=
 =?utf-8?B?MGc1R3hpRXZ5WnRTZjFEL3V2UURHbmhucVY5ZEFVTmV1Zld6eEYzWHFiSVBV?=
 =?utf-8?B?bHBxaU1QREtvdlFBNmtEWWZMNXJ4TTBDUDRFOXI0MmE2bWl2Ly9vMkRIWW9o?=
 =?utf-8?B?aFRuRVVPalYvUFBYV3kvYXQ2NmRrM2hxZmJWUENwWWVubUZFdTVXN0tpN0JG?=
 =?utf-8?B?TldwS0lEVFpnbExqVE1VSUZpVXAyZGhvQ3Jac2M1OUxBNFczLzc3U2dyR1NC?=
 =?utf-8?B?M3l5OWRsbFRkK0E4M0g2TXdOamVJcWhselBMcU4xa1RZK09UeVh2UklVUVgx?=
 =?utf-8?B?bUx1UWxLM2YvSWpKNFd2NW9vV0Q4TkVKLzFMdVlvM21jaXppNkpleHpzVjR6?=
 =?utf-8?B?c3ZNZzhFQVJZN3NSUTF3Z3FlZy9FTUdEM2RYak1oaVVzWGVUWEhvNUI4YjVP?=
 =?utf-8?B?dlBNbEtkWmhHU0ZsdnlNcjhXeCtKYjdOWk56Z2FiUnNaVWtkaDJQQzZoVldU?=
 =?utf-8?B?N3RYbHNpWjU1NStmb0dETmp2V0xWV1M0ZWdZRXNwOUNRb2JCU3dDaU5QbGo4?=
 =?utf-8?B?U3drdVgwMUxRZUNhS3F6NWZWNU5BMUNEWVkwdzVQVlYxaERuaVBVeUtjUTRz?=
 =?utf-8?B?K1Q5VFY1d20wcjRydFRzMlY2dnFGRW5zZy9ZeXNTUWpLTzNwbXVZSk9vSnpG?=
 =?utf-8?B?WmpUZTZrdVJzNHRmVG1qc21vOVc3VXJIcW83MUVMTnIrZTh4T3g4OFhVcUt6?=
 =?utf-8?B?OEhkMzRyaEJIRy9YczhzRkhSaW1wTzFzNUVKWmxzSzlVWkJiM0xDUWx0b1V1?=
 =?utf-8?B?a0Job2VROE9aWGlBcjNmbVFuNG1CekJzcS80K2RWQW5USFVSUzUrTEM5a0Jl?=
 =?utf-8?B?R3ZmQjNLNE1PaGxZQThHT0VJV1hKWW10Vk5YQVhMTk5TNHlqdnRlRVlwK3Ex?=
 =?utf-8?B?U0dRZlpvRy9KVXEwWmthVUU2MThHTTJCQ281K0lZREJ0Z0N4NUpFaXl5cjRn?=
 =?utf-8?B?clYzbEJyV0FxQmJ4SGVNOUhoQ0RnWVptanBiUUo2NFlOZFd6dHozRWsybTlY?=
 =?utf-8?B?N1dVYkxMaGNwNnVvc0p4TWI0RWwya0l0N2k1TVBTdlVOcStmK3EvS0JqY1ky?=
 =?utf-8?B?MTlkUFp5bVp2WjB4T05SOU9rcFpOKzJSY2puRGdCZ1huNXg0MFJaQnBycXZN?=
 =?utf-8?B?RFlHd1REL0k4U2RoVzBSRlJDSEFCM1V6a3VsRzFydEd0RWZTaStoUjNsdVVN?=
 =?utf-8?B?Wm5oT3dvM1RqcS9iOWwzanBUY0pRbHpsSG9kZjlKemR2UE5hUVR3MTFQWFY0?=
 =?utf-8?B?SGd6anZsam10dFhnRGYvK2x1aklLV2UyTGVFMG82c1g4azNIRVpNdjk4Rjgy?=
 =?utf-8?B?Mm5ZdDM0a3pwU2NCQ2RMY2xlOVhtOW1qNm5QbE83RzA2SFBUaFFlU05RK0tI?=
 =?utf-8?Q?7DuSO08+KCsdPxTLFYfNlMlqO?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 97c3ecef-5dd6-4746-016b-08dafd645b2b
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jan 2023 17:07:49.6635
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ri9ddwSJMBhj/hBNvM87L+tuDFXLkgQdmf8sywf77cRHh5CACS0ryih+EHIy8g5sgiuL3veFK5hd+oiSwW1+zQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7784

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
>
> We need a new helper for Xen to enable the MPU at boot time.
> The new helper is semantically consistent with the original enable_mmu.
>
> If the Background region is enabled, the MPU uses the default memory
> map as the Background region for generating memory attributes when
> the MPU is disabled. Since the default memory map of the Armv8-R
> AArch64 architecture is IMPLEMENTATION DEFINED, we always turn off
> the Background region.
>
> In this patch, we also introduce a neutral name, enable_mm, for Xen
> to enable the MMU/MPU. This helps us keep a single code flow in
> head.S.
>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>   xen/arch/arm/arm64/head.S     |  5 +++--
>   xen/arch/arm/arm64/head_mmu.S |  4 ++--
>   xen/arch/arm/arm64/head_mpu.S | 19 +++++++++++++++++++
>   3 files changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 145e3d53dc..7f3f973468 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -258,7 +258,8 @@ real_start_efi:
>            * and memory regions for MPU systems.
>            */
>           bl    prepare_early_mappings
> -        bl    enable_mmu
> +        /* Turn on MMU or MPU */
> +        bl    enable_mm
>
>           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
>           ldr   x0, =primary_switched
> @@ -316,7 +317,7 @@ GLOBAL(init_secondary)
>           bl    check_cpu_mode
>           bl    cpu_init
>           bl    prepare_early_mappings
> -        bl    enable_mmu
> +        bl    enable_mm
>
>           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
>           ldr   x0, =secondary_switched
> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> index 2346f755df..b59c40495f 100644
> --- a/xen/arch/arm/arm64/head_mmu.S
> +++ b/xen/arch/arm/arm64/head_mmu.S
> @@ -217,7 +217,7 @@ ENDPROC(prepare_early_mappings)
>    *
>    * Clobbers x0 - x3
>    */
> -ENTRY(enable_mmu)
> +ENTRY(enable_mm)
>           PRINT("- Turning on paging -\r\n")
>
>           /*
> @@ -239,7 +239,7 @@ ENTRY(enable_mmu)
>           msr   SCTLR_EL2, x0          /* now paging is enabled */
>           isb                          /* Now, flush the icache */
>           ret
> -ENDPROC(enable_mmu)
> +ENDPROC(enable_mm)
>
>   /*
>    * Remove the 1:1 map from the page-tables. It is not easy to keep track
> diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
> index 0b97ce4646..e2ac69b0cc 100644
> --- a/xen/arch/arm/arm64/head_mpu.S
> +++ b/xen/arch/arm/arm64/head_mpu.S
> @@ -315,6 +315,25 @@ ENDPROC(prepare_early_mappings)
>
>   GLOBAL(_end_boot)
>
> +/*
> + * Enable the EL2 MPU and data cache.
> + * If the Background region is enabled, the MPU uses the default memory
> + * map as the Background region for generating memory attributes when
> + * the MPU is disabled.
> + * Since the default memory map of the Armv8-R AArch64 architecture is
> + * IMPLEMENTATION DEFINED, we turn off the Background region here.
> + */
> +ENTRY(enable_mm)
> +    mrs   x0, SCTLR_EL2
> +    orr   x0, x0, #SCTLR_Axx_ELx_M    /* Enable MPU */
> +    orr   x0, x0, #SCTLR_Axx_ELx_C    /* Enable D-cache */
> +    orr   x0, x0, #SCTLR_Axx_ELx_WXN  /* Enable WXN */
> +    dsb   sy
> +    msr   SCTLR_EL2, x0
> +    isb
> +    ret
> +ENDPROC(enable_mm)

Can this be renamed to enable_mpu() or enable_mpu_and_cache()?

Can we also have the corresponding disable function in this patch?

Also (compared with "[PATCH v6 10/11] xen/arm64: introduce helpers for
MPU enable/disable"), I see that you have added #SCTLR_Axx_ELx_WXN. What
is the reason for this?

- Ayan

> +
>   /*
>    * Local variables:
>    * mode: ASM
> --
> 2.25.1
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 17:09:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 17:09:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483158.749140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK0KI-0004nH-EO; Mon, 23 Jan 2023 17:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483158.749140; Mon, 23 Jan 2023 17:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK0KI-0004nA-BV; Mon, 23 Jan 2023 17:09:46 +0000
Received: by outflank-mailman (input) for mailman id 483158;
 Mon, 23 Jan 2023 17:09:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK0KH-0004n0-CN; Mon, 23 Jan 2023 17:09:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK0KH-0006Fs-AW; Mon, 23 Jan 2023 17:09:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK0KG-0004Kf-Rq; Mon, 23 Jan 2023 17:09:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pK0KG-0002fl-RJ; Mon, 23 Jan 2023 17:09:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z44Q9bxv61Z+wl36a33pOxs5Ow5WtbuIIR9E9Pdcm3c=; b=U+LGe6hqN9qEIxDjB9lC/jA81y
	LeUtjLDEfdmY7s4Nhty9qdMsj1yPNR5yhaYE73wNHlqVJ2BhJcB1KWZybpRXZqPh/GkLeW9y0C5zc
	aocLmyQtxU3sErEbYD9kQH/EU7Ol1vMfFA30yVdqPVY1ehjGhRiCTZRsaPDIxTzcXdxM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176060-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176060: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2475bf0250dee99b477e0c56d7dc9d7ac3f04117
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 17:09:44 +0000

flight 176060 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176060/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                2475bf0250dee99b477e0c56d7dc9d7ac3f04117
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  107 days
Failing since        173470  2022-10-08 06:21:34 Z  107 days  221 attempts
Testing same since   176053  2023-01-22 23:10:33 Z    0 days    2 attempts

------------------------------------------------------------
3437 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 527952 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 18:24:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 18:24:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483169.749154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK1Tk-0004ZX-UA; Mon, 23 Jan 2023 18:23:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483169.749154; Mon, 23 Jan 2023 18:23:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK1Tk-0004ZQ-QE; Mon, 23 Jan 2023 18:23:36 +0000
Received: by outflank-mailman (input) for mailman id 483169;
 Mon, 23 Jan 2023 18:23:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK1Tj-0004ZG-F4; Mon, 23 Jan 2023 18:23:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK1Tj-0007yB-1N; Mon, 23 Jan 2023 18:23:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK1Ti-0007rc-Ey; Mon, 23 Jan 2023 18:23:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pK1Ti-000095-EP; Mon, 23 Jan 2023 18:23:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NfbStVFqTb75Kl6Y6VhgG29DxugCLGEqd5ko27fPR+E=; b=k8yBTJ9hlJyobhRGEeXt/gXQZD
	TMutpC+kLEEdbAA7yUa3UozY/Y09A2RUp/xJQsVI8XgDJrXdDC6u8khdWU1P6TaCEDNxvmLi+a/l2
	KLjYMsov1B+Nypmh34BWSdbd8xnoonuWJTDqFBpByFBR/HGiBi4xi/DRipfwcV6puZbQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176066-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176066: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d60324d8af9404014cfcc37bba09e9facfd02fcf
X-Osstest-Versions-That:
    xen=1d60c20260c7e82fe5344d06c20d718e0cc03b8b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 18:23:34 +0000

flight 176066 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176066/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d60324d8af9404014cfcc37bba09e9facfd02fcf
baseline version:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b

Last test of basis   176006  2023-01-21 01:01:52 Z    2 days
Testing same since   176066  2023-01-23 15:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1d60c20260..d60324d8af  d60324d8af9404014cfcc37bba09e9facfd02fcf -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 18:33:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 18:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483176.749164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK1cs-00062u-Pg; Mon, 23 Jan 2023 18:33:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483176.749164; Mon, 23 Jan 2023 18:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK1cs-00062n-MJ; Mon, 23 Jan 2023 18:33:02 +0000
Received: by outflank-mailman (input) for mailman id 483176;
 Mon, 23 Jan 2023 18:33:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FuoH=5U=tklengyel.com=bounce+e181d6.cd840-xen-devel=lists.xenproject.org@srs-se1.protection.inumbo.net>)
 id 1pK1cr-00062h-Ci
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 18:33:01 +0000
Received: from m228-12.mailgun.net (m228-12.mailgun.net [159.135.228.12])
 by se1-gles-flk1.inumbo.com (Halon) with UTF8SMTPS
 id 5baaebac-9b4c-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 19:32:58 +0100 (CET)
Received: from mail-yb1-f177.google.com (mail-yb1-f177.google.com
 [209.85.219.177]) by
 0ee4fa7eb591 with SMTP id 63ced2d85a6accb7bc5d1e89 (version=TLS1.3,
 cipher=TLS_AES_128_GCM_SHA256); Mon, 23 Jan 2023 18:32:56 GMT
Received: by mail-yb1-f177.google.com with SMTP id d62so15986650ybh.8
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 10:32:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 5baaebac-9b4c-11ed-b8d1-410ff93cb8f0
DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=tklengyel.com;
 q=dns/txt; s=mailo; t=1674498776; x=1674505976; h=Content-Type: Cc: To: To:
 Subject: Subject: Message-ID: Date: From: From: In-Reply-To: References:
 MIME-Version: Sender: Sender;
 bh=LGB29pxzb5btJ5hSYYdA6mTpc+hdzrCf1QUGnXiIyXM=;
 b=rfS2ayXlwQ8NByY6xycf69JhX0jWSGaR07w5Hll0p5+OIzIi2tN3Yrl2nHhhIj54w3HtKtrNgw1iHt1ZAjoEVqgCncMEpT+IN6F+EZEv3S2c8fB0dpf1g4PDPmbaEmTJYLjWTZr0C6kAnDcurR2OkbKyqK1+RDDmVO/j2WUVtNXphA5r+TQEtMU8UXwjTO4kR6d45Zccn4wBSM7lhZNDh19UoC2DysO+7Vm7YKcrI0qN5ZwPmsuSSf6z4vuYaKUKI+U2pfh81927GyKwiliUnunGl3cbaC99CczPzqcqDp2R3+HeHKlzAgmfvadKssF01aHBnHFvcuqQnxZIRM89gg==
X-Mailgun-Sending-Ip: 159.135.228.12
X-Mailgun-Sid: WyIyYTNmOCIsInhlbi1kZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZyIsImNkODQwIl0=
Sender: tamas@tklengyel.com
X-Gm-Message-State: AFqh2kqZI9oiGznABPcj4cNYTYyOtSykEfLDN4Q+E5Wq6Y65es20gbVs
	noJQKweqyf9/LbvmS+bN1ljGSq995nmMnPYUEk4=
X-Google-Smtp-Source: AMrXdXuJpvEMmnpUcv0t7QXW4xRwP38dTmWln4+johZCwx4GhzwMNXO+zsv8V4vxIdF97mtwW2sWYYzB1sGbEGjUpHM=
X-Received: by 2002:a25:ae8d:0:b0:7fb:dcd4:475 with SMTP id
 b13-20020a25ae8d000000b007fbdcd40475mr2069340ybj.31.1674498776399; Mon, 23
 Jan 2023 10:32:56 -0800 (PST)
MIME-Version: 1.0
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
 <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com> <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
 <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com>
In-Reply-To: <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 23 Jan 2023 13:32:20 -0500
X-Gmail-Original-Message-ID: <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
Message-ID: <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest areas
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: multipart/alternative; boundary="000000000000be97bf05f2f2a008"

--000000000000be97bf05f2f2a008
Content-Type: text/plain; charset="UTF-8"

On Mon, Jan 23, 2023 at 11:24 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 23.01.2023 17:09, Tamas K Lengyel wrote:
> > On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com> wrote:
> >> --- a/xen/arch/x86/mm/mem_sharing.c
> >> +++ b/xen/arch/x86/mm/mem_sharing.c
> >> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
> >>      hvm_set_nonreg_state(cd_vcpu, &nrs);
> >>  }
> >>
> >> +static int copy_guest_area(struct guest_area *cd_area,
> >> +                           const struct guest_area *d_area,
> >> +                           struct vcpu *cd_vcpu,
> >> +                           const struct domain *d)
> >> +{
> >> +    mfn_t d_mfn, cd_mfn;
> >> +
> >> +    if ( !d_area->pg )
> >> +        return 0;
> >> +
> >> +    d_mfn = page_to_mfn(d_area->pg);
> >> +
> >> +    /* Allocate & map a page for the area if it hasn't been already. */
> >> +    if ( !cd_area->pg )
> >> +    {
> >> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> >> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> >> +        p2m_type_t p2mt;
> >> +        p2m_access_t p2ma;
> >> +        unsigned int offset;
> >> +        int ret;
> >> +
> >> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> >> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> >> +        {
> >> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> >> +
> >> +            if ( !pg )
> >> +                return -ENOMEM;
> >> +
> >> +            cd_mfn = page_to_mfn(pg);
> >> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> >> +
> >> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> >> +                                 p2m->default_access, -1);
> >> +            if ( ret )
> >> +                return ret;
> >> +        }
> >> +        else if ( p2mt != p2m_ram_rw )
> >> +            return -EBUSY;
> >> +
> >> +        /*
> >> +         * Simply specify the entire range up to the end of the page. All the
> >> +         * function uses it for is a check for not crossing page boundaries.
> >> +         */
> >> +        offset = PAGE_OFFSET(d_area->map);
> >> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
> >> +                             PAGE_SIZE - offset, cd_area, NULL);
> >> +        if ( ret )
> >> +            return ret;
> >> +    }
> >> +    else
> >> +        cd_mfn = page_to_mfn(cd_area->pg);
> >
> > Everything to this point seems to be non mem-sharing/forking related.
> > Could these live somewhere else? There must be some other place where
> > allocating these areas happens already for non-fork VMs, so it would
> > make sense to just refactor that code to be callable from here.
>
> It is the "copy" aspect which makes this mem-sharing (or really fork)
> specific. Plus in the end this is no different from what you have
> there right now for copying the vCPU info area. In the final patch
> that other code gets removed by re-using the code here.

Yes, the copy part is fork-specific. Arguably, if there were a way to do the
allocation of the vcpu_info page elsewhere, I would prefer that, but as long
as the only requirement is allocate-page-and-copy-from-parent, I'm OK with
that logic living here, because it's really straightforward. But now you
also do extra sanity checks here, which are harder to comprehend in this
context alone. What if extra sanity checks are needed in the future? Or what
if the sanity checks diverge from where this happens for normal VMs, because
someone overlooks that they need to be synced here too?

> I also haven't been able to spot anything that could be factored
> out (and one might expect that if there was something, then the vCPU
> info area copying should also already have used it). map_guest_area()
> is all that is used for other purposes as well.

Well, there must be a location where all this happens for normal VMs as
well, no? Why not factor that code so that it can be called from here, so
that we don't have to track sanity check requirements in two different
locations? Or is that sanity checking not required for normal VMs? If so,
why?

> >> +
> >> +    copy_domain_page(cd_mfn, d_mfn);
> >> +
> >> +    return 0;
> >> +}
> >> +
> >>  static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
> >>  {
> >>      struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
> >> @@ -1745,6 +1804,16 @@ static int copy_vcpu_settings(struct dom
> >>              copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
> >>          }
> >>
> >> +        /* Same for the (physically registered) runstate and time info areas. */
> >> +        ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
> >> +                              &d_vcpu->runstate_guest_area, cd_vcpu, d);
> >> +        if ( ret )
> >> +            return ret;
> >> +        ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
> >> +                              &d_vcpu->arch.time_guest_area, cd_vcpu, d);
> >> +        if ( ret )
> >> +            return ret;
> >> +
> >>          ret = copy_vpmu(d_vcpu, cd_vcpu);
> >>          if ( ret )
> >>              return ret;
> >> @@ -1987,7 +2056,10 @@ int mem_sharing_fork_reset(struct domain
> >>
> >>   state:
> >>      if ( reset_state )
> >> +    {
> >>          rc = copy_settings(d, pd);
> >> +        /* TBD: What to do here with -ERESTART? */
> >
> > Where is ERESTART coming from?
>
> From map_guest_area()'s attempt to acquire the hypercall deadlock mutex,
> in order to then pause the subject vCPU. I suppose that in the forking
> case it may already be paused, but then there's no way map_guest_area()
> could know. Looking at the pause count is fragile, as there's no
> guarantee that the vCPU won't be unpaused while we're still doing work on
> it. Hence I view such checks as only suitable for assertions.

Since map_guest_area is only used as a sanity check here, and it only happens
when the page is being set up for the fork, why can't the sanity check be
done on the parent? The parent is guaranteed to be paused while forks are
active, so there is no ERESTART concern, and from the looks of it, whatever
issue the sanity check is looking for would be visible on the parent just as
well as on the fork. But I would like to understand why that sanity checking
is required in the first place.

Thanks,
Tamas

--000000000000be97bf05f2f2a008--


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 19:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 19:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483183.749179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK2gw-0004uv-PG; Mon, 23 Jan 2023 19:41:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483183.749179; Mon, 23 Jan 2023 19:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK2gw-0004uo-Mi; Mon, 23 Jan 2023 19:41:18 +0000
Received: by outflank-mailman (input) for mailman id 483183;
 Mon, 23 Jan 2023 19:41:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK2gv-0004ue-6Z; Mon, 23 Jan 2023 19:41:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK2gv-00019b-2i; Mon, 23 Jan 2023 19:41:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK2gu-0003SE-FG; Mon, 23 Jan 2023 19:41:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pK2gu-0005vp-EZ; Mon, 23 Jan 2023 19:41:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8yjdwdHsUn1gIRJpQnL6eJo0muhydNqq59Smok4ub88=; b=3wslDyoLpb3iBhoxRUmiuPY4mA
	zfmo4TbvqYSfIcTgQLACktNoRqhfQ41iKlH8ipd8aWfztuiwgyRGNDEnQQXMlqHkOb9F6oouJgGe6
	Ednsa3l4zWF7SJjFJjzRXSYYQAieLnDgkLvkl5ukCKO1GWm91tBVzz+XcFeBV3Zurgs4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176062-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176062: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1d60c20260c7e82fe5344d06c20d718e0cc03b8b
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 19:41:16 +0000

flight 176062 xen-unstable real [real]
flight 176073 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176062/
http://logs.test-lab.xenproject.org/osstest/logs/176073/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1d60c20260c7e82fe5344d06c20d718e0cc03b8b
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    3 days
Failing since        176003  2023-01-20 17:40:27 Z    3 days    8 attempts
Testing same since   176011  2023-01-21 07:04:02 Z    2 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 762 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 20:09:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 20:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483194.749193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK38I-0007ge-4j; Mon, 23 Jan 2023 20:09:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483194.749193; Mon, 23 Jan 2023 20:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK38I-0007gX-1Z; Mon, 23 Jan 2023 20:09:34 +0000
Received: by outflank-mailman (input) for mailman id 483194;
 Mon, 23 Jan 2023 20:09:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTo8=5U=citrix.com=prvs=380e0b34c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pK38G-0007gR-Ld
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 20:09:32 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d6efda1b-9b59-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 21:09:29 +0100 (CET)
Received: from mail-mw2nam12lp2040.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jan 2023 15:09:15 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB6115.namprd03.prod.outlook.com (2603:10b6:610:ba::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 20:09:12 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 20:09:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6efda1b-9b59-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674504569;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=IMeUOonufG0fr/sflWmj3Tq5wJA/R6PntS6lJh0IYHI=;
  b=VbSRsjZrKya2D+8D0jQuQMLn6NrHieQf3vAmJqdWoHmoBidCQiAlFN7+
   qSv8/Vn4oqPNo7mkct1u9gVzedpYixmHhPrjZL2c8iyyi/C5xGU/oR5IM
   1mVlg+8mtXA+NTkZuSC7XME48FjEcMipH++pggUwRoMsV4PT6aGNYMs12
   Q=;
X-IronPort-RemoteIP: 104.47.66.40
X-IronPort-MID: 93330975
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,240,1669093200"; 
   d="scan'208";a="93330975"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RsmFmCnJFvilSpr7gFM95EFQe17nfM3l0QAVw22Jj0IOclXeCTjk4Kpde+zCz4ge3sfUCBFRhoSrzEXA9uzw5HOO+B3LvE0SAHt918AAMYuZuBscpY8Ns5Y26aivg5QerrMPPbR3shyUXs+0wkgd5RzmhJ9UXxL3z+teLaYval91SBgi8EjrPwTY1NiEOyDFPy3TMxy/WsMZ0tJGltONtO764/8Rw1m5WjJ0nEoALHVskjiSbSBIzLRaDEYyVZYXhe1+qbw4SaFJ+YWAhx+ZBR6uFtwdT2CuX6YsWUe/TCctf/pPbzkAFxvuVBdztul8U2YU9QpBfU/UzCgZOETmjg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IMeUOonufG0fr/sflWmj3Tq5wJA/R6PntS6lJh0IYHI=;
 b=RVGGRVzhUbB5y6rqyBy/OmwoMbN3FXeyRFaNGCtDBDDqFnmi6NlghNGj63WH6BZ6c6EBZbEpMCgq2n9dawA4N0tiMZERpNvm57B0wI34l347JQEPXs+9SyYDASjdBabf4+QX/AS0vMANso7L2th9Dbrwb3snUAOzvQiv1tMNgkZcolTTh2XKrITN0ECF7kS0csLfP8A/xr3NSsCSv1O9cGChZkBTNKLNJASgY+3HSQLUQ4aBWT1FVuobKH933apsvl29zvGrFzFRS3u7UTLh8Y/8hjahOrv7/Ba7aIcUEPzSKUT5qic9RllCPAfKochDc/MPq4T2svPodqNTwebftA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IMeUOonufG0fr/sflWmj3Tq5wJA/R6PntS6lJh0IYHI=;
 b=kWADXomgu474RiOnl2WovZCrzCpQeE5/SX8kS3s3UxDTVa7Km43QmFKXAnTS2NOivwAinAtrP2pU1Rxzs4KE7yNwmdY024F0ZTjbxzAkT5LieMrMGJ+A5VH72dcAcoPp3FZKPbUD00yBZTRQmeA/mZJfqUPf7Ji905R1/wHV+8s=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii <oleksii.kurochko@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
Thread-Topic: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
Thread-Index: AQHZLOAVkIPSVA/PBEeRaJXa6cnAQq6r6ACAgAA54ACAAFF9gA==
Date: Mon, 23 Jan 2023 20:09:12 +0000
Message-ID: <29b9b149-d3b3-922b-c17f-d86b7f949fca@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
 <ac6f02e8-c493-7914-f3c4-32b4ebe1bc26@citrix.com>
 <ea3c256c0f5a7f09a2504c548e649a0cf0edcb43.camel@gmail.com>
In-Reply-To: <ea3c256c0f5a7f09a2504c548e649a0cf0edcb43.camel@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CH0PR03MB6115:EE_
x-ms-office365-filtering-correlation-id: b82f8fff-52a1-4d91-20f7-08dafd7db1da
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <0DEFD24B816E7846AFB476766BFA0BC3@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b82f8fff-52a1-4d91-20f7-08dafd7db1da
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jan 2023 20:09:12.2792
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6115

On 23/01/2023 3:17 pm, Oleksii wrote:
> On Mon, 2023-01-23 at 11:50 +0000, Andrew Cooper wrote:
>> On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
>>> +        /* Save context to stack */
>>> +        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) -
>>> RISCV_CPU_USER_REGS_SIZE) (sp)
>>> +        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
>>> +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
>> Exceptions on RISC-V don't adjust the stack pointer.  This logic
>> depends
>> on interrupting Xen code, and Xen not having suffered a stack
>> overflow
>> (and actually, that the space on the stack for all registers also
>> doesn't overflow).
>>
>> Which might be fine for now, but I think it warrants a comment
>> somewhere
>> (probably at handle_exception itself) stating the expectations while
>> it's still a work in progress.  So in this case something like:
>>
>> /* Work-in-progress:  Depends on interrupting Xen, and the stack
>> being
>> good. */
>>
>>
>> But, do we want to allocate stemp right away (even with an empty
>> struct), and get tp set up properly?
>>
> I am not sure that I get you here about stemp. Could you please clarify
> a little bit.

Sorry - sscratch, not stemp - I got the name wrong.

All registers are the interrupted context, not Xen's context.  This
includes the stack pointer, global pointer, and thread pointer.

Trap setup is supposed to stash Xen's tp in sscratch so on an
interrupt/exception, it can exchange sscratch with tp and recover the
stack pointer.

Linux plays games with having sscratch be 0 while in the kernel, and uses
this to determine whether the exception occurred in kernel or user
mode.  This is a massive can of re-entrancy bugs that appears to be baked
into the architecture.

I genuinely can't figure out a safe way to cope with a stack overflow,
or a bad tp, because it is not safe to take a pagefault until the exception
prologue has completed.  If you do, you'll switch back to the
interrupted task's tp and use that as if it were Xen's.

~Andrew
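
[The sscratch/tp exchange discussed in this thread follows the usual RISC-V
trap-entry pattern. A minimal sketch, with hypothetical offset names (TP_GUEST_SP,
TP_XEN_SP are illustrative, not Xen's actual layout):

        handle_exception:
                /* Atomically swap sscratch and tp: tp now holds Xen's
                 * thread pointer, sscratch holds the interrupted tp. */
                csrrw   tp, sscratch, tp
                /* Stash the interrupted sp and load Xen's stack, via
                 * per-cpu data reachable from tp. */
                REG_S   sp, TP_GUEST_SP(tp)
                REG_L   sp, TP_XEN_SP(tp)
                /* ... save the remaining registers, then call into C ... */

This only works if trap setup stored Xen's tp in sscratch beforehand, which is
the expectation being discussed above.]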


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 20:13:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 20:13:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483199.749203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK3Bs-0000ca-Ki; Mon, 23 Jan 2023 20:13:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483199.749203; Mon, 23 Jan 2023 20:13:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK3Bs-0000cT-Hd; Mon, 23 Jan 2023 20:13:16 +0000
Received: by outflank-mailman (input) for mailman id 483199;
 Mon, 23 Jan 2023 20:13:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK3Br-0000cL-2A
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 20:13:15 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5ca0cf35-9b5a-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 21:13:12 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id E180CB80DE1;
 Mon, 23 Jan 2023 20:13:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B3D57C433EF;
 Mon, 23 Jan 2023 20:13:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ca0cf35-9b5a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674504790;
	bh=P7/uM5QvtaIxmCuQb5zfjRw6WzYKyJDmoPb4RNinqSc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=oXUQ5yirGysZOUfIoK5GFG/sxB7DaVmj/5Vk3mkpinW+KSqXkvpHJdt+zZ9Uu5cr7
	 +Potjph8v1HOGuox9fp7QPvi8sIK7IfRleX7TrhS9yr0IKQi/EDxq436LBIkqF27Zd
	 2wlIKMLUzu+KcT9Ooy/Ny8/PoVspP8UUsXKJu61kbujRSoFw4pX8SmC8x7bTEtXNZI
	 DKJpa6sBWdmrRPhyYQZVIr43+QNebGL/s7xoUh7PqixMtrGqN2D61z/tKLRqtW12+o
	 oKrTPslPzz8jUkUo5+NiVZ6A27u0fE3Mt4n6Z9HDgyt3Shhm8gqKyFKQ1Isy/y9fPK
	 ie7IyqjkDNdFg==
Date: Mon, 23 Jan 2023 12:12:58 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org, 
    ayankuma@amd.com
Subject: Re: [PATCH] automation: Modify static-mem check in
 qemu-smoke-dom0less-arm64.sh
In-Reply-To: <20230123131023.9408-1-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301231212510.1978264@ubuntu-linux-20-04-desktop>
References: <20230123131023.9408-1-michal.orzel@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Jan 2023, Michal Orzel wrote:
> At the moment, the static-mem check relies on the way Xen exposes the
> memory banks in device tree. As this might change, the check should be
> modified to be generic and not to rely on device tree. In this case,
> let's use /proc/iomem which exposes the memory ranges in %08x format
> as follows:
> <start_addr>-<end_addr> : <description>
> 
> This way, we can grep in /proc/iomem for an entry containing the memory
> region defined by the static-mem configuration with a "System RAM"
> description. If it exists, mark the test as passed. Also, take the
> opportunity to add 0x prefix to domu_{base,size} definition rather than
> adding it in front of each occurrence.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Patch made as part of the discussion:
> https://lore.kernel.org/xen-devel/ba37ee02-c07c-2803-0867-149c779890b6@amd.com/
> 
> CC: Julien, Ayan
> ---
>  automation/scripts/qemu-smoke-dom0less-arm64.sh | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/automation/scripts/qemu-smoke-dom0less-arm64.sh b/automation/scripts/qemu-smoke-dom0less-arm64.sh
> index 2b59346fdcfd..182a4b6c18fc 100755
> --- a/automation/scripts/qemu-smoke-dom0less-arm64.sh
> +++ b/automation/scripts/qemu-smoke-dom0less-arm64.sh
> @@ -16,14 +16,13 @@ fi
>  
>  if [[ "${test_variant}" == "static-mem" ]]; then
>      # Memory range that is statically allocated to DOM1
> -    domu_base="50000000"
> -    domu_size="10000000"
> +    domu_base="0x50000000"
> +    domu_size="0x10000000"
>      passed="${test_variant} test passed"
>      domU_check="
> -current=\$(hexdump -e '16/1 \"%02x\"' /proc/device-tree/memory@${domu_base}/reg 2>/dev/null)
> -expected=$(printf \"%016x%016x\" 0x${domu_base} 0x${domu_size})
> -if [[ \"\${expected}\" == \"\${current}\" ]]; then
> -	echo \"${passed}\"
> +mem_range=$(printf \"%08x-%08x\" ${domu_base} $(( ${domu_base} + ${domu_size} - 1 )))
> +if grep -q -x \"\${mem_range} : System RAM\" /proc/iomem; then
> +    echo \"${passed}\"
>  fi
>  "
>  fi
> @@ -126,7 +125,7 @@ UBOOT_SOURCE="boot.source"
>  UBOOT_SCRIPT="boot.scr"' > binaries/config
>  
>  if [[ "${test_variant}" == "static-mem" ]]; then
> -    echo -e "\nDOMU_STATIC_MEM[0]=\"0x${domu_base} 0x${domu_size}\"" >> binaries/config
> +    echo -e "\nDOMU_STATIC_MEM[0]=\"${domu_base} ${domu_size}\"" >> binaries/config
>  fi
>  
>  if [[ "${test_variant}" == "boot-cpupools" ]]; then
> -- 
> 2.25.1
> 
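
[For reference, the check the patch installs boils down to the following
standalone sketch. domu_base/domu_size are the example values from the patch;
in the real script this fragment runs inside the domU, and the grep only
matches on a system whose /proc/iomem actually contains that range.]

```shell
# Standalone sketch of the new static-mem check.
domu_base="0x50000000"
domu_size="0x10000000"

# /proc/iomem lists ranges as "%08x-%08x : <description>", so build the
# expected "start-end" string from the base address and size.
mem_range=$(printf "%08x-%08x" "${domu_base}" $(( domu_base + domu_size - 1 )))
echo "${mem_range}"    # 50000000-5fffffff

# Pass if the exact line "<range> : System RAM" exists.
if grep -q -x "${mem_range} : System RAM" /proc/iomem 2>/dev/null; then
    echo "static-mem test passed"
fi
```

[This matches the domU_check fragment in the diff once the escaping needed for
embedding it in a here-string is stripped away.]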


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 20:28:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 20:28:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483205.749216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK3QD-0002Gm-Uf; Mon, 23 Jan 2023 20:28:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483205.749216; Mon, 23 Jan 2023 20:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK3QD-0002Gf-RG; Mon, 23 Jan 2023 20:28:05 +0000
Received: by outflank-mailman (input) for mailman id 483205;
 Mon, 23 Jan 2023 20:28:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK3QC-0002GX-1m
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 20:28:04 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6dd05750-9b5c-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 21:28:01 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 810A761029;
 Mon, 23 Jan 2023 20:27:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 314CEC433D2;
 Mon, 23 Jan 2023 20:27:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dd05750-9b5c-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674505678;
	bh=pTUgUj0rudWGkaabcAfieQ/Wga7zg1phd06XWWevQoM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=h9/ou6rQLxZs2xz2xXYqaZL8Mntkl4pJr0XJQKOZzCZPIxqThKJ0UYYvZ85/q81t3
	 FWjkk9Ft2XS8XUUtwxKzfiZv2OCmnHmuSa4SIBwJNKV1215HDInSy2ArCAWwNynMKh
	 OnniTykvy8zaApiSjg0cp8Ibdx8yUYj5NdHo3vDsREx9qeH9M9Riie8woOdWt5/m44
	 x0bpREPgLQ0+BfrgQKzg8QEEGeGnHxXbJ7YnDvvL40/SLuHKe0vd9diHXFtwGTzOnk
	 WZitmV1KIW57SDM+qt2WSFKvBv8tw0pOqPz7t3+JPFuHEOrEwEuE4LGGkzXp1hPaUe
	 tRVJwKt8Vd/4Q==
Date: Mon, 23 Jan 2023 12:27:55 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Chuck Zmudzinski <brchuckz@aol.com>
cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    Paolo Bonzini <pbonzini@redhat.com>, 
    Richard Henderson <richard.henderson@linaro.org>, 
    Eduardo Habkost <eduardo@habkost.net>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    Igor Mammedov <imammedo@redhat.com>, xen-devel@lists.xenproject.org, 
    qemu-stable@nongnu.org
Subject: Re: [PATCH v11] xen/pt: reserve PCI slot 2 for Intel igd-passthru
In-Reply-To: <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz@aol.com>
Message-ID: <alpine.DEB.2.22.394.2301231227430.1978264@ubuntu-linux-20-04-desktop>
References: <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz.ref@aol.com> <b1b4a21fe9a600b1322742dda55a40e9961daa57.1674346505.git.brchuckz@aol.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 21 Jan 2023, Chuck Zmudzinski wrote:
> Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> as noted in docs/igd-assign.txt in the Qemu source code.
> 
> Currently, when the xl toolstack is used to configure a Xen HVM guest with
> Intel IGD passthrough to the guest with the Qemu upstream device model,
> a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> a different slot. This problem often prevents the guest from booting.
> 
> The only available workarounds are not good: configure Xen HVM guests to
> use the old and no longer maintained Qemu traditional device model
> available from xenbits.xen.org, which does reserve slot 2 for the Intel
> IGD, or use the "pc" machine type instead of the "xenfv" machine type and
> add the xen platform device at slot 3 using a command line option
> instead of patching qemu to fix the "xenfv" machine type directly. The
> second workaround causes some degradation in startup performance such as
> a longer boot time and reduced resolution of the grub menu that is
> displayed on the monitor. This patch avoids that reduced startup
> performance when using the Qemu upstream device model for Xen HVM guests
> configured with the igd-passthru=on option.
> 
> To implement this feature in the Qemu upstream device model for Xen HVM
> guests, introduce the following new functions, types, and macros:
> 
> * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> * typedef XenPTQdevRealize function pointer
> * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> * xen_igd_reserve_slot and xen_igd_clear_slot functions
> 
> Michael Tsirkin:
> * Introduce XEN_PCI_IGD_DOMAIN, XEN_PCI_IGD_BUS, XEN_PCI_IGD_DEV, and
>   XEN_PCI_IGD_FN - use them to compute the value of XEN_PCI_IGD_SLOT_MASK
> 
> The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> the xl toolstack with the gfx_passthru option enabled, which sets the
> igd-passthru=on option to Qemu for the Xen HVM machine type.
> 
> The new xen_igd_reserve_slot function also needs to be implemented in
> hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage for the case
> when Qemu is configured with --enable-xen and --disable-xen-pci-passthrough,
> in which case it does nothing.
> 
> The new xen_igd_clear_slot function overrides qdev->realize of the parent
> PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> 
> Move the call to xen_host_pci_device_get, and the associated error
> handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> initialize the device class and vendor values which enables the checks for
> the Intel IGD to succeed. The verification that the host device is an
> Intel IGD to be passed through is done by checking the domain, bus, slot,
> and function values as well as by checking that gfx_passthru is enabled,
> the device class is VGA, and the device vendor is Intel.
> 
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> ---
> Notes that might be helpful to reviewers of patched code in hw/xen:
> 
> The new functions and types are based on recommendations from Qemu docs:
> https://qemu.readthedocs.io/en/latest/devel/qom.html
> 
> Notes that might be helpful to reviewers of patched code in hw/i386:
> 
> The small patch to hw/i386/pc_piix.c is protected by CONFIG_XEN so it does
> not affect builds that do not have CONFIG_XEN defined.
> 
> xen_igd_gfx_pt_enabled() in the patched hw/i386/pc_piix.c file is an
> existing function that is only true when Qemu is built with
> xen-pci-passthrough enabled and the administrator has configured the Xen
> HVM guest with Qemu's igd-passthru=on option.
> 
> v2: Remove From: <email address> tag at top of commit message
> 
> v3: Changed the test for the Intel IGD in xen_igd_clear_slot:
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         (s->real_device.vendor_id == PCI_VENDOR_ID_INTEL)) {
> 
>     is changed to
> 
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     I hoped that I could use the test in v2, since it matches the
>     other tests for the Intel IGD in Qemu and Xen, but those tests
>     do not work because the necessary data structures are not set with
>     their values yet. So instead use the test that the administrator
>     has enabled gfx_passthru and the device address on the host is
>     02.0. This test does detect the Intel IGD correctly.
> 
> v4: Use brchuckz@aol.com instead of brchuckz@netscape.net for the author's
>     email address to match the address used by the same author in commits
>     be9c61da and c0e86b76
>     
>     Change variable for XEN_PT_DEVICE_CLASS: xptc changed to xpdc
> 
> v5: The patch of xen_pt.c was re-worked to allow a more consistent test
>     for the Intel IGD that uses the same criteria as in other places.
>     This involved moving the call to xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot and updating the checks for the
>     Intel IGD in xen_igd_clear_slot:
>     
>     if (xen_igd_gfx_pt_enabled() && (s->hostaddr.slot == 2)
>         && (s->hostaddr.function == 0)) {
> 
>     is changed to
> 
>     if (is_igd_vga_passthrough(&s->real_device) &&
>         s->real_device.domain == 0 && s->real_device.bus == 0 &&
>         s->real_device.dev == 2 && s->real_device.func == 0 &&
>         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> 
>     Added an explanation for the move of xen_host_pci_device_get from
>     xen_pt_realize to xen_igd_clear_slot to the commit message.
> 
>     Rebase.
> 
> v6: Fix logging by removing these lines from the move from xen_pt_realize
>     to xen_igd_clear_slot that was done in v5:
> 
>     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
>                " to devfn 0x%x\n",
>                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                s->dev.devfn);
> 
>     This log needs to be in xen_pt_realize because s->dev.devfn is not
>     set yet in xen_igd_clear_slot.
> 
> v7: The v7 that was posted to the mailing list was incorrect. v8 is what
>     v7 was intended to be.
> 
> v8: Inhibit out of context log message and needless processing by
>     adding 2 lines at the top of the new xen_igd_clear_slot function:
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>         return;
> 
>     Rebase. This removed an unnecessary header file from xen_pt.h 
> 
> v9: Move check for xen_igd_gfx_pt_enabled() from pc_piix.c to xen_pt.c
> 
>     Move #include "hw/pci/pci_bus.h" from xen_pt.h to xen_pt.c
> 
>     Introduce macros for the IGD devfn constants and use them to compute
>     the value of XEN_PCI_IGD_SLOT_MASK
> 
>     Also use the new macros at an appropriate place in xen_pt_realize
> 
>     Add Cc: to stable - This has been broken for a long time, ever since
>                         support for igd-passthru was added to Qemu 7
>                         years ago.
> 
>     Mention new macros in the commit message (Michael Tsirkin)
> 
>     N.B.: I could not follow the suggestion to move the statement
>     pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK; to after
>     pci_qdev_realize for symmetry. Doing that results in an error when
>     creating the guest:
>     
>     libxl: error: libxl_qmp.c:1837:qmp_ev_parse_error_messages: Domain 4:PCI: slot 2 function 0 not available for xen-pci-passthrough, reserved
>     libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 4:libxl__device_pci_add failed for PCI device 0:0:2.0 (rc -28)
>     libxl: error: libxl_create.c:1921:domcreate_attach_devices: Domain 4:unable to add pci devices
> 
> v10: Change in xen_pt.c at xen_igd_clear_slot from
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK))
>         return;
> 
>     xen_host_pci_device_get(&s->real_device,
>                             s->hostaddr.domain, s->hostaddr.bus,
>                             s->hostaddr.slot, s->hostaddr.function,
>                             errp);
>     if (*errp) {
>         error_append_hint(errp, "Failed to \"open\" the real pci device");
>         return;
>     }
> 
> to:
> 
>     xen_host_pci_device_get(&s->real_device,
>                             s->hostaddr.domain, s->hostaddr.bus,
>                             s->hostaddr.slot, s->hostaddr.function,
>                             errp);
>     if (*errp) {
>         error_append_hint(errp, "Failed to \"open\" the real pci device");
>         return;
>     }
> 
>     if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
>         xpdc->pci_qdev_realize(qdev, errp);
>         return;
>     }
> 
>      Testing shows this fixes the problem of xen_host_pci_device_get
>      and xpdc->pci_qdev_realize not being called if xen_igd_clear_slot
>      returns because the bit to reserve slot 2 in slot_reserved_mask is
>      not set. Without this change, guest creation fails in the cases
>      when the bit to reserve slot 2 in slot_reserved_mask is not set.
>      Thanks, Stefano!
>      
>      Also, in addition to mentioning the workaround of using the
>      traditional qemu device model available from xenbits.xen.org in the
>      commit message, also mention in the commit message the workaround
>      of using the "pc" machine type instead of the "xenfv" machine type,
>      which results in reduced startup performance.
>      
>      Rebase.
>      
>      Add Igor Mammedov <imammedo@redhat.com> to Cc.
> 
> v11: I noticed a style mistake that had gone unnoticed in the past few
>      versions. This version fixes it. No
>      functional change in v11. I also did more tests on different guests
>      such as guests that don't have igd-passthru=on set. No regressions
>      were observed.
>      
>      The style mistake (missing braces) is fixed as follows:
> 
> xen_pt.c at xen_igd_reserve_slot is changed from
> 
>     if (!xen_igd_gfx_pt_enabled())
>         return;
> 
> to
> 
>     if (!xen_igd_gfx_pt_enabled()) {
>         return;
>     }
> 
>  hw/i386/pc_piix.c    |  1 +
>  hw/xen/xen_pt.c      | 64 ++++++++++++++++++++++++++++++++++++--------
>  hw/xen/xen_pt.h      | 20 ++++++++++++++
>  hw/xen/xen_pt_stub.c |  4 +++
>  4 files changed, 78 insertions(+), 11 deletions(-)

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index df64dd8dcc..a9d535c815 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -421,6 +421,7 @@ static void pc_xen_hvm_init(MachineState *machine)
>      }
>  
>      pc_xen_hvm_init_pci(machine);
> +    xen_igd_reserve_slot(pcms->bus);
>      pci_create_simple(pcms->bus, -1, "xen-platform");
>  }
>  #endif
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 8db0532632..85c93cffcf 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -57,6 +57,7 @@
>  #include <sys/ioctl.h>
>  
>  #include "hw/pci/pci.h"
> +#include "hw/pci/pci_bus.h"
>  #include "hw/qdev-properties.h"
>  #include "hw/qdev-properties-system.h"
>  #include "hw/xen/xen.h"
> @@ -780,15 +781,6 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>                 s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
>                 s->dev.devfn);
>  
> -    xen_host_pci_device_get(&s->real_device,
> -                            s->hostaddr.domain, s->hostaddr.bus,
> -                            s->hostaddr.slot, s->hostaddr.function,
> -                            errp);
> -    if (*errp) {
> -        error_append_hint(errp, "Failed to \"open\" the real pci device");
> -        return;
> -    }
> -
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> @@ -803,8 +795,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
>      s->io_listener = xen_pt_io_listener;
>  
>      /* Setup VGA bios for passthrough GFX */
> -    if ((s->real_device.domain == 0) && (s->real_device.bus == 0) &&
> -        (s->real_device.dev == 2) && (s->real_device.func == 0)) {
> +    if ((s->real_device.domain == XEN_PCI_IGD_DOMAIN) &&
> +        (s->real_device.bus == XEN_PCI_IGD_BUS) &&
> +        (s->real_device.dev == XEN_PCI_IGD_DEV) &&
> +        (s->real_device.func == XEN_PCI_IGD_FN)) {
>          if (!is_igd_vga_passthrough(&s->real_device)) {
>              error_setg(errp, "Need to enable igd-passthru if you're trying"
>                      " to passthrough IGD GFX");
> @@ -950,11 +944,58 @@ static void xen_pci_passthrough_instance_init(Object *obj)
>      PCI_DEVICE(obj)->cap_present |= QEMU_PCI_CAP_EXPRESS;
>  }
>  
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +    if (!xen_igd_gfx_pt_enabled()) {
> +        return;
> +    }
> +
> +    XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
> +    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
> +}
> +
> +static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
> +{
> +    ERRP_GUARD();
> +    PCIDevice *pci_dev = (PCIDevice *)qdev;
> +    XenPCIPassthroughState *s = XEN_PT_DEVICE(pci_dev);
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
> +    PCIBus *pci_bus = pci_get_bus(pci_dev);
> +
> +    xen_host_pci_device_get(&s->real_device,
> +                            s->hostaddr.domain, s->hostaddr.bus,
> +                            s->hostaddr.slot, s->hostaddr.function,
> +                            errp);
> +    if (*errp) {
> +        error_append_hint(errp, "Failed to \"open\" the real pci device");
> +        return;
> +    }
> +
> +    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
> +        xpdc->pci_qdev_realize(qdev, errp);
> +        return;
> +    }
> +
> +    if (is_igd_vga_passthrough(&s->real_device) &&
> +        s->real_device.domain == XEN_PCI_IGD_DOMAIN &&
> +        s->real_device.bus == XEN_PCI_IGD_BUS &&
> +        s->real_device.dev == XEN_PCI_IGD_DEV &&
> +        s->real_device.func == XEN_PCI_IGD_FN &&
> +        s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
> +        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
> +        XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
> +    }
> +    xpdc->pci_qdev_realize(qdev, errp);
> +}
> +
>  static void xen_pci_passthrough_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
>      PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
>  
> +    XenPTDeviceClass *xpdc = XEN_PT_DEVICE_CLASS(klass);
> +    xpdc->pci_qdev_realize = dc->realize;
> +    dc->realize = xen_igd_clear_slot;
>      k->realize = xen_pt_realize;
>      k->exit = xen_pt_unregister_device;
>      k->config_read = xen_pt_pci_read_config;
> @@ -977,6 +1018,7 @@ static const TypeInfo xen_pci_passthrough_info = {
>      .instance_size = sizeof(XenPCIPassthroughState),
>      .instance_finalize = xen_pci_passthrough_finalize,
>      .class_init = xen_pci_passthrough_class_init,
> +    .class_size = sizeof(XenPTDeviceClass),
>      .instance_init = xen_pci_passthrough_instance_init,
>      .interfaces = (InterfaceInfo[]) {
>          { INTERFACE_CONVENTIONAL_PCI_DEVICE },
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index cf10fc7bbf..e184699740 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -40,7 +40,20 @@ typedef struct XenPTReg XenPTReg;
>  #define TYPE_XEN_PT_DEVICE "xen-pci-passthrough"
>  OBJECT_DECLARE_SIMPLE_TYPE(XenPCIPassthroughState, XEN_PT_DEVICE)
>  
> +#define XEN_PT_DEVICE_CLASS(klass) \
> +    OBJECT_CLASS_CHECK(XenPTDeviceClass, klass, TYPE_XEN_PT_DEVICE)
> +#define XEN_PT_DEVICE_GET_CLASS(obj) \
> +    OBJECT_GET_CLASS(XenPTDeviceClass, obj, TYPE_XEN_PT_DEVICE)
> +
> +typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);
> +
> +typedef struct XenPTDeviceClass {
> +    PCIDeviceClass parent_class;
> +    XenPTQdevRealize pci_qdev_realize;
> +} XenPTDeviceClass;
> +
>  uint32_t igd_read_opregion(XenPCIPassthroughState *s);
> +void xen_igd_reserve_slot(PCIBus *pci_bus);
>  void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
>  void xen_igd_passthrough_isa_bridge_create(XenPCIPassthroughState *s,
>                                             XenHostPCIDevice *dev);
> @@ -75,6 +88,13 @@ typedef int (*xen_pt_conf_byte_read)
>  
>  #define XEN_PCI_INTEL_OPREGION 0xfc
>  
> +#define XEN_PCI_IGD_DOMAIN 0
> +#define XEN_PCI_IGD_BUS 0
> +#define XEN_PCI_IGD_DEV 2
> +#define XEN_PCI_IGD_FN 0
> +#define XEN_PCI_IGD_SLOT_MASK \
> +    (1UL << PCI_SLOT(PCI_DEVFN(XEN_PCI_IGD_DEV, XEN_PCI_IGD_FN)))
> +
>  typedef enum {
>      XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
>      XEN_PT_GRP_TYPE_EMU,            /* emul reg group */
> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> index 2d8cac8d54..5c108446a8 100644
> --- a/hw/xen/xen_pt_stub.c
> +++ b/hw/xen/xen_pt_stub.c
> @@ -20,3 +20,7 @@ void xen_igd_gfx_pt_set(bool value, Error **errp)
>          error_setg(errp, "Xen PCI passthrough support not built in");
>      }
>  }
> +
> +void xen_igd_reserve_slot(PCIBus *pci_bus)
> +{
> +}
> -- 
> 2.39.0
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:10:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:10:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483212.749226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK459-0007aG-0s; Mon, 23 Jan 2023 21:10:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483212.749226; Mon, 23 Jan 2023 21:10:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK458-0007a9-UY; Mon, 23 Jan 2023 21:10:22 +0000
Received: by outflank-mailman (input) for mailman id 483212;
 Mon, 23 Jan 2023 21:10:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK457-0007Zy-1u; Mon, 23 Jan 2023 21:10:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK456-0003VJ-VP; Mon, 23 Jan 2023 21:10:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK456-0008UX-Je; Mon, 23 Jan 2023 21:10:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pK456-0006Nj-JB; Mon, 23 Jan 2023 21:10:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=RHsT7JaQFRtNwu+niMifT0cW09SjemPacgblphRvUcg=; b=uXJx6zlJ9Wp5vLIr+ktA0RRYjQ
	0fhq1KJq5odY1FJwJxQsiQS99ji6mmJN6uKkAssZBvvpDTOfDFV5ydHtqhvzy86l/feOV3I5S10cF
	c2BNt5aMryXdrEduB72QzJg9uxKneTvz6WAfeVXoFSF1ohhFrLP1eORyMKeWKWvQEJqQ=;
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-xsm
Message-Id: <E1pK456-0006Nj-JB@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 21:10:20 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-xsm
testid guest-localmigrate

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176075/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-xsm.guest-localmigrate.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-xsm.guest-localmigrate --summary-out=tmp/176075.bisection-summary --basis-template=175994 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-xsm guest-localmigrate
Searching for failure / basis pass:
 176062 fail [host=fiano0] / 175994 [host=elbling0] 175987 [host=fiano1] 175965 [host=elbling1] 175734 [host=debina1] 175726 [host=italia0] 175720 [host=pinot1] 175714 [host=nobling0] 175694 [host=albana1] 175671 [host=nobling1] 175651 [host=debina0] 175635 [host=huxelrebe0] 175624 [host=nocera1] 175612 [host=albana0] 175601 [host=italia0] 175592 ok.
Failure / basis pass flights: 176062 / 175592
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 671f50ffab3329c5497208da89620322b9721a77
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#1cf02b05b27c48775a25699e61b93b8\
 14b9ae042-625eb5e96dc96aa7fddef59a08edae215527f19c git://xenbits.xen.org/xen.git#671f50ffab3329c5497208da89620322b9721a77-1d60c20260c7e82fe5344d06c20d718e0cc03b8b
Loaded 10003 nodes in revision graph
Searching for test results:
 175592 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 671f50ffab3329c5497208da89620322b9721a77
 175601 [host=italia0]
 175612 [host=albana0]
 175624 [host=nocera1]
 175635 [host=huxelrebe0]
 175651 [host=debina0]
 175671 [host=nobling1]
 175694 [host=albana1]
 175714 [host=nobling0]
 175720 [host=pinot1]
 175726 [host=italia0]
 175734 [host=debina1]
 175834 []
 175861 []
 175890 []
 175907 []
 175931 []
 175956 []
 175965 [host=elbling1]
 175987 [host=fiano1]
 175994 [host=elbling0]
 176003 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
 176011 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176025 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176035 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176042 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176048 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176055 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 671f50ffab3329c5497208da89620322b9721a77
 176057 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176058 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176056 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176061 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c a1a618208bf53469f5e3eaa14202ba777d33f442
 176063 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 41dbbfb5966f2517916333d1885ee68018161f48
 176064 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 321b1b5eb351a5836d26817d7db48052e623b411
 176065 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176067 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176062 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176070 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176071 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176074 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176075 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
Searching for interesting versions
 Result found: flight 175592 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f, results HASH(0x55831ca44980) HASH(0x55831ca597c8) HASH(0x55831c072b98) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96\
 dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x55831ca6a5d8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 671f50ffab3329c5497208da89620322b9721a77, results HASH(0x55831ca51900) HASH(0x55831ca62a10) Result found: flight 176003 (fail), for basis failure (at ancestor ~988)
 Repro found: flight 176055 (pass), for basis pass
 Repro found: flight 176056 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
No revisions left to test, checking graph state.
 Result found: flight 176065 (pass), for last pass
 Result found: flight 176067 (fail), for first failure
 Repro found: flight 176070 (pass), for last pass
 Repro found: flight 176071 (fail), for first failure
 Repro found: flight 176074 (pass), for last pass
 Repro found: flight 176075 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176075/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-xsm.guest-localmigrate.{dot,ps,png,html,svg}.
----------------------------------------
176075: tolerable ALL FAIL

flight 176075 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/176075/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-xsm       18 guest-localmigrate      fail baseline untested


jobs:
 test-amd64-i386-xl-xsm                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:18:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483219.749236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4D6-0008Ha-RU; Mon, 23 Jan 2023 21:18:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483219.749236; Mon, 23 Jan 2023 21:18:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4D6-0008HT-O1; Mon, 23 Jan 2023 21:18:36 +0000
Received: by outflank-mailman (input) for mailman id 483219;
 Mon, 23 Jan 2023 21:18:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fbAi=5U=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1pK4D4-0008HL-RP
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:18:34 +0000
Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com
 [2607:f8b0:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c99bd64-9b63-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 22:18:32 +0100 (CET)
Received: by mail-pl1-x634.google.com with SMTP id jl3so12738928plb.8
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 13:18:32 -0800 (PST)
Received: from localhost (c-73-164-155-12.hsd1.wa.comcast.net. [73.164.155.12])
 by smtp.gmail.com with ESMTPSA id
 s20-20020a056a00179400b0058dbd7a5e0esm41148pfg.89.2023.01.23.13.18.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 23 Jan 2023 13:18:29 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c99bd64-9b63-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=FvP9ePGiLeIZbacBX9f8fdS0Vg9jpqMmwkb1uVgjCts=;
        b=UDNpBpONiaB9zcbgLJizT2LLfBkdM91UiNorkXXmdm2ov6CsWj8AEoq3IplXmSuw0R
         HQpPDhIIWI5wmAtVLCt9irzNLHnuRXJ5EJ72jAQXN79NXb2bvEpJy6ONHKbddwdNZGVF
         socue5HfVuhHf65wjhKM/vfz9eXVZyGwGV6OWBLdrMQ5fBYA2IsuX3rnKz+07dpyIbpV
         FLG2DB9gqyMRguVIunopxVF/vNTDCyOp2B3WeFGZ5iZWzYoZNeD3TCTZQkyelgNsquOA
         PMt+u9qaJxOpcMpKirEHunELchXB8qs7lEHAd5bgpXXcbAj5nnYTL8vfrkY88uB5W3Rj
         lOIw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FvP9ePGiLeIZbacBX9f8fdS0Vg9jpqMmwkb1uVgjCts=;
        b=VfYa9TX38vbRD2jVunaq9CEbw/bQCY7BHzBw+QjB8hQAYRpPDO2Wo6OL6DGeBxhWBz
         XTKNjjut1aIc/lgBwW/rZkHGiIhPeEGavwigQ8EJOEntSBAqV1CuV1Y66jFZR/gUD1Uq
         TzS1p0PrnrXBVcwUSlLlkCpO+kz0VD0DhsZdhxgALLEdLpfVArreUjpfM34gnvgAU33u
         4rngciuO7hh0Fygk2b5rSE9KvyMDj6OpbrZbvAUOW3Fet/MBsXu4YgjpdpGBLRT7GgU9
         /3unj3xU5umLcP17APZpkY/Me3xReQRMgJIS3Ypnt6KGEQE3i2xFPntToc+jv++a0W/g
         eZBg==
X-Gm-Message-State: AFqh2kq456nanPClrvuPp0+mX7C5hbcoaDYYkc/KyPdJGaDY4dg3T+x8
	QMDqReqk1McbADx4w4Ky96C0rUJ7Zsykc0fT
X-Google-Smtp-Source: AMrXdXtIsVq55xFM1TXfauRZ0Olcu8Fh2eUd4zV2fCWxhsPdeK/jj/SDi8p/SzbMV25wS+yjJqEoiQ==
X-Received: by 2002:a05:6a20:1611:b0:b4:6f9:ef7d with SMTP id l17-20020a056a20161100b000b406f9ef7dmr33569889pzj.35.1674508710356;
        Mon, 23 Jan 2023 13:18:30 -0800 (PST)
Date: Thu, 19 Jan 2023 13:05:09 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
	xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>
Subject: Re: [RISC-V] Switch  to H-mode
Message-ID: <Y8lABYJoQ5Qt4DAt@bullseye>
References: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>

On Mon, Jan 23, 2023 at 06:56:19PM +0200, Oleksii wrote:
> Hi Alistair and community,
> 
> I am working on RISC-V support upstream for Xen based on your and Bobby
> patches.
> 
> Adding the RISC-V support I realized that Xen is run in S-mode. Output
> of OpenSBI:
>     ...
>     Domain0 Next Mode         : S-mode
>     ...
> So my first question is: shouldn't it be in H-mode?
> 
> If I am right, then it looks like we have to patch OpenSBI to
> add support for H-mode, as it is not supported now:
> [1]
> https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_domain.c#L380
> [2]
> https://github.com/riscv-software-src/opensbi/blob/master/include/sbi/riscv_encoding.h#L110
> Please correct me if I am wrong.
> 
> The other option I see is to switch to H-mode in U-boot, as I understand
> the classical boot flow is:
>     OpenSBI -> U-boot -> Xen -> Domain{0,...}
> That is, if it is possible at all, since U-boot will be in S-mode after OpenSBI.
> 
> Thanks in advance.
> 
> ~ Oleksii
> 

Ah, what you are seeing there is that OpenSBI's Next Mode excludes the
virtualization mode (it treats HS and S synonymously) and is only used
for setting mstatus.MPP. The code also has next_virt for setting MPV,
but I don't think that is exposed via the device tree yet. For Xen,
you'd want next_mode = PRIV_S and next_virt = 0 (HS mode, not VS mode).
For interested readers, the relevant setup prior to mret is here:
https://github.com/riscv-software-src/opensbi/blob/001106d19b21cd6443ae7f7f6d4d048d80e9ecac/lib/sbi/sbi_hart.c#L759

As long as next_mode and next_virt are set correctly, Xen should be
launching in HS mode. I believe this is also the default in the stock
build for Domain0, unless something has changed.

Thanks,
Bobby


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:19:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:19:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483223.749246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4Dq-0000Mi-49; Mon, 23 Jan 2023 21:19:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483223.749246; Mon, 23 Jan 2023 21:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4Dq-0000MZ-1Q; Mon, 23 Jan 2023 21:19:22 +0000
Received: by outflank-mailman (input) for mailman id 483223;
 Mon, 23 Jan 2023 21:19:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4Dp-0000K1-5U
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:19:21 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9934eb3f-9b63-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 22:19:20 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A512460F2E;
 Mon, 23 Jan 2023 21:19:18 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7EEDCC433EF;
 Mon, 23 Jan 2023 21:19:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9934eb3f-9b63-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674508758;
	bh=cqCtiK4ukFa/0TkCKXPnG484yhJtk7LDq13c08g11Vg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pS8ibk1ZZt8GC0ocWk5XVf29YuTsCO/kKEIM6wMtmsqytwZwahW+NYCq0y5AQ7+qq
	 rzWkKm9cNBFmzImy4HZ3k0BZ55J4QgpiAvtuP8oxhavV/xQHayE63vR453za2s66Di
	 3jp45qmciluE42hL6niGZvdyZX/lBdkdQMY4/u0HTV/z7sbzFxgTpOVw8rEytXRPL3
	 3aJ7IXT+2IzkNP90SmxVEdq6j6FD4a7NtGKibi+IjVDpWGPXTo6wqFh4aKMc48p7X+
	 NsgesAZ47JT/ITBCyAOTbmnGwfKX8nCcyNAMlrDaIoYhlyJ5SuUAJzyeZesQDJ6cHL
	 5cg3gdvWnohPQ==
Date: Mon, 23 Jan 2023 13:19:14 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com, andrew.cooper3@citrix.com, 
    george.dunlap@citrix.com, jbeulich@suse.com, wl@xen.org, 
    xuwei5@hisilicon.com
Subject: Re: [XEN v3 1/3] xen/arm: Use the correct format specifier
In-Reply-To: <20230123134451.47185-2-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301231313370.1978264@ubuntu-linux-20-04-desktop>
References: <20230123134451.47185-1-ayan.kumar.halder@amd.com> <20230123134451.47185-2-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Jan 2023, Ayan Kumar Halder wrote:
> 1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
> while creating nodes in fdt, the address (if present in the node name)
> should be represented using 'PRIx64'. This is to conform with the
> following rule from https://elinux.org/Device_Tree_Linux
> 
> . node names
> "unit-address does not have leading zeros"
> 
> As 'PRIpaddr' introduces leading zeros, we cannot use it.
> 
> So, we have introduced a wrapper, i.e. domain_fdt_begin_node(), which
> represents the physical address using 'PRIx64'.
> 
> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
> use of 'PRIpaddr' for printing a PTE is buggy, as a PTE is not a
> physical address.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from -
> 
> v1 - 1. Moved the patch earlier.
> 2. Moved a part of change from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr"
> into this patch.
> 
> v2 - 1. Use PRIx64 for appending addresses to fdt node names. This fixes the CI failure.
> 
>  xen/arch/arm/domain_build.c | 45 +++++++++++++++++--------------------
>  xen/arch/arm/gic-v2.c       |  6 ++---
>  xen/arch/arm/mm.c           |  2 +-

The changes to mm.c and gic-v2.c look OK and I'd ack them already. One
question on the changes to domain_build.c below.


>  3 files changed, 25 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f35f4d2456..97c2395f9a 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1288,6 +1288,20 @@ static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
>      return res;
>  }
>  
> +static int __init domain_fdt_begin_node(void *fdt, const char *name,
> +                                        uint64_t unit)
> +{
> +    /*
> +     * The size of the buffer to hold the longest possible string ie
> +     * interrupt-controller@ + a 64-bit number + \0
> +     */
> +    char buf[38];
> +
> +    /* ePAPR 3.4 */
> +    snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
> +    return fdt_begin_node(fdt, buf);
> +}
> +
>  static int __init make_memory_node(const struct domain *d,
>                                     void *fdt,
>                                     int addrcells, int sizecells,
> @@ -1296,8 +1310,6 @@ static int __init make_memory_node(const struct domain *d,
>      unsigned int i;
>      int res, reg_size = addrcells + sizecells;
>      int nr_cells = 0;
> -    /* Placeholder for memory@ + a 64-bit number + \0 */
> -    char buf[24];
>      __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
>      __be32 *cells;
>  
> @@ -1314,9 +1326,7 @@ static int __init make_memory_node(const struct domain *d,
>  
>      dt_dprintk("Create memory node\n");
>  
> -    /* ePAPR 3.4 */
> -    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
> -    res = fdt_begin_node(fdt, buf);
> +    res = domain_fdt_begin_node(fdt, "memory", mem->bank[i].start);

Basically this "hides" the paddr_t->uint64_t cast because it happens
implicitly when passing mem->bank[i].start as an argument to
domain_fdt_begin_node.

To be honest, I don't know whether that is necessary. A plain cast would
also be fine:

    snprintf(buf, sizeof(buf), "memory@%"PRIx64, (uint64_t)mem->bank[i].start);
    res = fdt_begin_node(fdt, buf);

Julien, what do you prefer?


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:21:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:21:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483229.749255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4G5-0001kx-Fb; Mon, 23 Jan 2023 21:21:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483229.749255; Mon, 23 Jan 2023 21:21:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4G5-0001kq-Co; Mon, 23 Jan 2023 21:21:41 +0000
Received: by outflank-mailman (input) for mailman id 483229;
 Mon, 23 Jan 2023 21:21:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4G4-0001kk-Hj
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:21:40 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ebff4c33-9b63-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 22:21:39 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by sin.source.kernel.org (Postfix) with ESMTPS id EEC6ACE0A3A;
 Mon, 23 Jan 2023 21:21:34 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 954E9C433D2;
 Mon, 23 Jan 2023 21:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebff4c33-9b63-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674508893;
	bh=N2++xv7fB+bLpim+051wjFzQ44LlKVqyIDwEHnyenYk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=PeZbhxoOJIAb8nl36NFxJfKCeuY8FOjdGGPftVahugM7OrVgYQRfeKyqa3ETSJrV8
	 tYPwIG0sTom/oHwPBKCEV8Cg0/kZLE48ZTinvzBWqanTgE61hGkUKlMaepmrMCtM9y
	 hZrUuR+Q2jGu5ksccFiO/TaBZoFBp22iy7WR7kDA2GsobCqDMfYLOyT6p3b8rWnaIG
	 8O30crAJIgooeAICpdcPA6+wtehxGt2dJajPnVc2Zy2iDW6sXHkQZuMVIa/aTxv4SN
	 hsJIp6QMgtDUATYAn6khY/UG6QaTurHFkDiJ9Oddb63TvFT7u9HMhqWUCQ+JD08HDo
	 VSgC1HvsfZzIw==
Date: Mon, 23 Jan 2023 13:21:29 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com, andrew.cooper3@citrix.com, 
    george.dunlap@citrix.com, jbeulich@suse.com, wl@xen.org, 
    xuwei5@hisilicon.com
Subject: Re: [XEN v3 3/3] xen/drivers: ns16550: Fix an incorrect assignment
 to uart->io_size
In-Reply-To: <20230123134451.47185-4-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301231321170.1978264@ubuntu-linux-20-04-desktop>
References: <20230123134451.47185-1-ayan.kumar.halder@amd.com> <20230123134451.47185-4-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Jan 2023, Ayan Kumar Halder wrote:
> uart->io_size represents the size in bytes. Thus, when serial_port.bit_width
> is assigned to it, the value should first be converted from bits to bytes.
> 
> Fixes: 17b516196c55 ("ns16550: add ACPI support for ARM only")
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> Changes from -
> 
> v1, v2 - NA (New patch introduced in v3).
> 
>  xen/drivers/char/ns16550.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 43e1f971ab..092f6b9c4b 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -1870,7 +1870,7 @@ static int __init ns16550_acpi_uart_init(const void *data)
>      uart->parity = spcr->parity;
>      uart->stop_bits = spcr->stop_bits;
>      uart->io_base = spcr->serial_port.address;
> -    uart->io_size = spcr->serial_port.bit_width;
> +    uart->io_size = DIV_ROUND_UP(spcr->serial_port.bit_width, BITS_PER_BYTE);
>      uart->reg_shift = spcr->serial_port.bit_offset;
>      uart->reg_width = spcr->serial_port.access_width;
>  
> -- 
> 2.17.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:29:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:29:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483240.749269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4Na-0002Xf-Dr; Mon, 23 Jan 2023 21:29:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483240.749269; Mon, 23 Jan 2023 21:29:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4Na-0002XY-A0; Mon, 23 Jan 2023 21:29:26 +0000
Received: by outflank-mailman (input) for mailman id 483240;
 Mon, 23 Jan 2023 21:29:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4NZ-0002XS-E3
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:29:25 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 01d3983d-9b65-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 22:29:24 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 0DD6BB80DE1;
 Mon, 23 Jan 2023 21:29:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AAD88C433EF;
 Mon, 23 Jan 2023 21:29:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01d3983d-9b65-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674509362;
	bh=Km89VvCrM4ipmAbAkoIqOMm1S2WE6/sE8KNzh0pK6jE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=oy3Uru5h+3ZXmeEG6U4P8V52sH7j+yEI+Ncw2bFEmmILnrepDa2RwKKmTiN3YE3d0
	 iCu6JsraCimuP2mjtre91Qzn82UbeTXZgpjVNSfgqbj+2RAfOTlqrRnHXkEixSFmC4
	 Z22Hab6W7fA33BIp3q0iCnpcaeV0g91LAzrWBhUyc4a4HdhvM2lVZFRth00rgZ93xK
	 KFV1kBG2UTrWBkn2LxQ/QzHfbU6Z3w1B14f8PywIaQCeaz+im2FokdoA16p7VT59DI
	 JGluzGq+8Dbdo8Jz34mTM62GphFxRT1uaT1NVPrTAUxTOVFyZZEQHZZEnngFn8Sj4W
	 Oqp8gAN2FMCCw==
Date: Mon, 23 Jan 2023 13:29:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 01/22] xen/common: page_alloc: Re-order includes
In-Reply-To: <20221216114853.8227-2-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231327060.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-2-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Order the includes with the xen headers first, then asm headers, and
> lastly the public headers. Within each category, they are sorted alphabetically.
> 
> Note that the includes protected by CONFIG_X86 haven't been sorted,
> to avoid adding multiple #ifdefs.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

This patch no longer applies as-is. Assuming it gets ported to
the latest staging appropriately:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ----
> 
>     I am open to sorting the includes protected by CONFIG_X86
>     and adding multiple #ifdefs if this is preferred.
> ---
>  xen/common/page_alloc.c | 29 ++++++++++++++++-------------
>  1 file changed, 16 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 0c93a1078702..0a950288e241 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -120,27 +120,30 @@
>   *   regions within it.
>   */
>  
> +#include <xen/domain_page.h>
> +#include <xen/event.h>
>  #include <xen/init.h>
> -#include <xen/types.h>
> +#include <xen/irq.h>
> +#include <xen/keyhandler.h>
>  #include <xen/lib.h>
> -#include <xen/sched.h>
> -#include <xen/spinlock.h>
>  #include <xen/mm.h>
> +#include <xen/nodemask.h>
> +#include <xen/numa.h>
>  #include <xen/param.h>
> -#include <xen/irq.h>
> -#include <xen/softirq.h>
> -#include <xen/domain_page.h>
> -#include <xen/keyhandler.h>
>  #include <xen/perfc.h>
>  #include <xen/pfn.h>
> -#include <xen/numa.h>
> -#include <xen/nodemask.h>
> -#include <xen/event.h>
> +#include <xen/types.h>
> +#include <xen/sched.h>
> +#include <xen/softirq.h>
> +#include <xen/spinlock.h>
> +
> +#include <asm/flushtlb.h>
> +#include <asm/numa.h>
> +#include <asm/page.h>
> +
>  #include <public/sysctl.h>
>  #include <public/sched.h>
> -#include <asm/page.h>
> -#include <asm/numa.h>
> -#include <asm/flushtlb.h>
> +
>  #ifdef CONFIG_X86
>  #include <asm/guest.h>
>  #include <asm/p2m.h>
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:35:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:35:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483245.749279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4Sp-0003xU-W6; Mon, 23 Jan 2023 21:34:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483245.749279; Mon, 23 Jan 2023 21:34:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4Sp-0003xN-TA; Mon, 23 Jan 2023 21:34:51 +0000
Received: by outflank-mailman (input) for mailman id 483245;
 Mon, 23 Jan 2023 21:34:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4So-0003xH-D6
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:34:50 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c2a6ac55-9b65-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 22:34:49 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id D9B31B80DDB;
 Mon, 23 Jan 2023 21:34:46 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CDDE6C433D2;
 Mon, 23 Jan 2023 21:34:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2a6ac55-9b65-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674509685;
	bh=wgFkjV3SdYmmH+9RTBBV8Ge7w0PaC0+RNFpr+j5DR3Y=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZTD+HmHAPxHDocmsH9CZRT3aZUKgeFCYKV1mT9n+40oQT0hnDZCrdIwb5svkFqRZY
	 d7NiNed7XvI/N4T2ZjSHH/2GO+MTPzpgcdRoi3J4045tLHYpCHWSSjUqHA9ay24kWY
	 dfRFMtY6AkSyiRddYZYB1OFWc1Nx/9QA9J6uXRTtbL4yMl8jX0XE9FTJz1jKXx29Fv
	 wmribVdm5rAFsWYco/v1zSxRGMtDPNoDuDwwqtTRk/4xc//hgZnrAcnWoUz9V8nhC1
	 kZyQYdINFMeW0HIGTn61MXTrrs2yEKA0sWXiPHv+CpZqCb1TzEIrMzKIw1Fe2rc11N
	 kz6Hlt1kNhHCg==
Date: Mon, 23 Jan 2023 13:34:42 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Wei Liu <wl@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    David Woodhouse <dwmw2@amazon.com>, Hongyan Xia <hongyxia@amazon.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH 02/22] x86/setup: move vm_init() before acpi calls
In-Reply-To: <20221216114853.8227-3-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231332520.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-3-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> After the direct map removal, pages from the boot allocator are not
> mapped at all in the direct map. Although we have map_domain_page, its
> mappings are ephemeral and less helpful for mappings larger than a
> page, so we want a mechanism to globally map a range of pages, which is
> what vmap is for. Therefore, we bring vm_init into the early boot stage.
> 
> To allow vmap to be initialised and used in early boot, we need to
> modify vmap to receive pages from the boot allocator during the early
> boot stage.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: David Woodhouse <dwmw2@amazon.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

For the arm and common parts:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/setup.c |  4 ++--
>  xen/arch/x86/setup.c | 31 ++++++++++++++++++++-----------
>  xen/common/vmap.c    | 37 +++++++++++++++++++++++++++++--------
>  3 files changed, 51 insertions(+), 21 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90e3..2311726f5ddd 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -1028,6 +1028,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>  
>      setup_mm();
>  
> +    vm_init();
> +
>      /* Parse the ACPI tables for possible boot-time configuration */
>      acpi_boot_table_init();
>  
> @@ -1039,8 +1041,6 @@ void __init start_xen(unsigned long boot_phys_offset,
>       */
>      system_state = SYS_STATE_boot;
>  
> -    vm_init();
> -
>      if ( acpi_disabled )
>      {
>          printk("Booting using Device Tree\n");
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 6bb5bc7c84be..1c2e09711eb0 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -870,6 +870,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      unsigned long eb_start, eb_end;
>      bool acpi_boot_table_init_done = false, relocated = false;
>      int ret;
> +    bool vm_init_done = false;
>      struct ns16550_defaults ns16550 = {
>          .data_bits = 8,
>          .parity    = 'n',
> @@ -1442,12 +1443,23 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>              continue;
>  
>          if ( !acpi_boot_table_init_done &&
> -             s >= (1ULL << 32) &&
> -             !acpi_boot_table_init() )
> +             s >= (1ULL << 32) )
>          {
> -            acpi_boot_table_init_done = true;
> -            srat_parse_regions(s);
> -            setup_max_pdx(raw_max_page);
> +            /*
> +             * We only initialise vmap and acpi after going through the bottom
> +             * 4GiB, so that we have enough pages in the boot allocator.
> +             */
> +            if ( !vm_init_done )
> +            {
> +                vm_init();
> +                vm_init_done = true;
> +            }
> +            if ( !acpi_boot_table_init() )
> +            {
> +                acpi_boot_table_init_done = true;
> +                srat_parse_regions(s);
> +                setup_max_pdx(raw_max_page);
> +            }
>          }
>  
>          if ( pfn_to_pdx((e - 1) >> PAGE_SHIFT) >= max_pdx )
> @@ -1624,6 +1636,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>  
>      init_frametable();
>  
> +    if ( !vm_init_done )
> +        vm_init();
> +
>      if ( !acpi_boot_table_init_done )
>          acpi_boot_table_init();
>  
> @@ -1661,12 +1676,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>          end_boot_allocator();
>  
>      system_state = SYS_STATE_boot;
> -    /*
> -     * No calls involving ACPI code should go between the setting of
> -     * SYS_STATE_boot and vm_init() (or else acpi_os_{,un}map_memory()
> -     * will break).
> -     */
> -    vm_init();
>  
>      bsp_stack = cpu_alloc_stack(0);
>      if ( !bsp_stack )
> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
> index 4fd6b3067ec1..1340c7c6faf6 100644
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -34,9 +34,20 @@ void __init vm_init_type(enum vmap_region type, void *start, void *end)
>  
>      for ( i = 0, va = (unsigned long)vm_bitmap(type); i < nr; ++i, va += PAGE_SIZE )
>      {
> -        struct page_info *pg = alloc_domheap_page(NULL, 0);
> +        mfn_t mfn;
> +        int rc;
>  
> -        map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
> +        if ( system_state == SYS_STATE_early_boot )
> +            mfn = alloc_boot_pages(1, 1);
> +        else
> +        {
> +            struct page_info *pg = alloc_domheap_page(NULL, 0);
> +
> +            BUG_ON(!pg);
> +            mfn = page_to_mfn(pg);
> +        }
> +        rc = map_pages_to_xen(va, mfn, 1, PAGE_HYPERVISOR);
> +        BUG_ON(rc);
>          clear_page((void *)va);
>      }
>      bitmap_fill(vm_bitmap(type), vm_low[type]);
> @@ -62,7 +73,7 @@ static void *vm_alloc(unsigned int nr, unsigned int align,
>      spin_lock(&vm_lock);
>      for ( ; ; )
>      {
> -        struct page_info *pg;
> +        mfn_t mfn;
>  
>          ASSERT(vm_low[t] == vm_top[t] || !test_bit(vm_low[t], vm_bitmap(t)));
>          for ( start = vm_low[t]; start < vm_top[t]; )
> @@ -97,9 +108,16 @@ static void *vm_alloc(unsigned int nr, unsigned int align,
>          if ( vm_top[t] >= vm_end[t] )
>              return NULL;
>  
> -        pg = alloc_domheap_page(NULL, 0);
> -        if ( !pg )
> -            return NULL;
> +        if ( system_state == SYS_STATE_early_boot )
> +            mfn = alloc_boot_pages(1, 1);
> +        else
> +        {
> +            struct page_info *pg = alloc_domheap_page(NULL, 0);
> +
> +            if ( !pg )
> +                return NULL;
> +            mfn = page_to_mfn(pg);
> +        }
>  
>          spin_lock(&vm_lock);
>  
> @@ -107,7 +125,7 @@ static void *vm_alloc(unsigned int nr, unsigned int align,
>          {
>              unsigned long va = (unsigned long)vm_bitmap(t) + vm_top[t] / 8;
>  
> -            if ( !map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR) )
> +            if ( !map_pages_to_xen(va, mfn, 1, PAGE_HYPERVISOR) )
>              {
>                  clear_page((void *)va);
>                  vm_top[t] += PAGE_SIZE * 8;
> @@ -117,7 +135,10 @@ static void *vm_alloc(unsigned int nr, unsigned int align,
>              }
>          }
>  
> -        free_domheap_page(pg);
> +        if ( system_state == SYS_STATE_early_boot )
> +            init_boot_pages(mfn_to_maddr(mfn), mfn_to_maddr(mfn) + PAGE_SIZE);
> +        else
> +            free_domheap_page(mfn_to_page(mfn));
>  
>          if ( start >= vm_top[t] )
>          {
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:40:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483250.749289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4Xs-00058O-JD; Mon, 23 Jan 2023 21:40:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483250.749289; Mon, 23 Jan 2023 21:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4Xs-00057c-GI; Mon, 23 Jan 2023 21:40:04 +0000
Received: by outflank-mailman (input) for mailman id 483250;
 Mon, 23 Jan 2023 21:40:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4Xr-0004u2-Lu
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:40:03 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7d6f33bb-9b66-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 22:40:02 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6827C61119;
 Mon, 23 Jan 2023 21:40:00 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 80206C433D2;
 Mon, 23 Jan 2023 21:39:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d6f33bb-9b66-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674509999;
	bh=7z4gMYKOdsyiK4P1pFGZ62cPu6iRIegwPcJ767kAfzU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XEsxf5WHwY0gbrOKaK/w97t3oUuLByo4dkrp1ZIyHVrt2Erx3ZkPgR3fVXL7hj/n2
	 mfF2NGfsZ4m2Zh0ZcvMeS7f3yGntwEL1rFDLrtsI/oB61Afy101Ui9DzTh0YnbtFMt
	 fVCXIGfnCrLayTMGHGDwB1xCQ1Avk+qMqTp2Vi5cnEnrCJY9jcXOjZyOcBnBvibdqZ
	 tQyy2mcSQiFXOl459YHhqiuJ938PYOmUl/ohYgP0/w0J5D+pwMJw3JSUyxOcG0+2F6
	 M2Sv6DZDW9gJTeCpUS9HBRi+TQMeGtj6qHdUxhovLoNJOuyK8pIW67QtHAKCRnXcry
	 H09o3ExWcfnag==
Date: Mon, 23 Jan 2023 13:39:57 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Hongyan Xia <hongyxia@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH 03/22] acpi: vmap pages in acpi_os_alloc_memory
In-Reply-To: <20221216114853.8227-4-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231339310.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-4-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> Also, introduce a wrapper around vmap that maps a contiguous range for
> boot allocations. Unfortunately, the new helper cannot be a static inline
> because the dependencies are a mess. We would need to re-include
> asm/page.h (which was removed in aa4b9d1ee653 "include: don't use
> asm/page.h from common headers"), and that no longer looks to be enough
> because bits from asm/cpufeature.h are used in the definition of PAGE_NX.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

I saw Jan's comments and I agree with them, but I also wanted to note
that I reviewed this patch and it looks OK:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ----
> 
>     Changes since Hongyan's version:
>         * Rename vmap_boot_pages() to vmap_contig_pages()
>         * Move the new helper in vmap.c to avoid compilation issue
>         * Don't use __pa() to translate the virtual address
> ---
>  xen/common/vmap.c      |  5 +++++
>  xen/drivers/acpi/osl.c | 13 +++++++++++--
>  xen/include/xen/vmap.h |  2 ++
>  3 files changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
> index 1340c7c6faf6..78f051a67682 100644
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -244,6 +244,11 @@ void *vmap(const mfn_t *mfn, unsigned int nr)
>      return __vmap(mfn, 1, nr, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
>  }
>  
> +void *vmap_contig_pages(mfn_t mfn, unsigned int nr_pages)
> +{
> +    return __vmap(&mfn, nr_pages, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
> +}
> +
>  void vunmap(const void *va)
>  {
>      unsigned long addr = (unsigned long)va;
> diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> index 389505f78666..44a9719b0dcf 100644
> --- a/xen/drivers/acpi/osl.c
> +++ b/xen/drivers/acpi/osl.c
> @@ -221,7 +221,11 @@ void *__init acpi_os_alloc_memory(size_t sz)
>  	void *ptr;
>  
>  	if (system_state == SYS_STATE_early_boot)
> -		return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1)));
> +	{
> +		mfn_t mfn = alloc_boot_pages(PFN_UP(sz), 1);
> +
> +		return vmap_contig_pages(mfn, PFN_UP(sz));
> +	}
>  
>  	ptr = xmalloc_bytes(sz);
>  	ASSERT(!ptr || is_xmalloc_memory(ptr));
> @@ -246,5 +250,10 @@ void __init acpi_os_free_memory(void *ptr)
>  	if (is_xmalloc_memory(ptr))
>  		xfree(ptr);
>  	else if (ptr && system_state == SYS_STATE_early_boot)
> -		init_boot_pages(__pa(ptr), __pa(ptr) + PAGE_SIZE);
> +	{
> +		paddr_t addr = mfn_to_maddr(vmap_to_mfn(ptr));
> +
> +		vunmap(ptr);
> +		init_boot_pages(addr, addr + PAGE_SIZE);
> +	}
>  }
> diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
> index b0f7632e8985..3c06c7c3ba30 100644
> --- a/xen/include/xen/vmap.h
> +++ b/xen/include/xen/vmap.h
> @@ -23,6 +23,8 @@ void *vmalloc_xen(size_t size);
>  void *vzalloc(size_t size);
>  void vfree(void *va);
>  
> +void *vmap_contig_pages(mfn_t mfn, unsigned int nr_pages);
> +
>  void __iomem *ioremap(paddr_t, size_t);
>  
>  static inline void iounmap(void __iomem *va)
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:45:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:45:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483255.749299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4dJ-00064C-82; Mon, 23 Jan 2023 21:45:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483255.749299; Mon, 23 Jan 2023 21:45:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4dJ-000645-5J; Mon, 23 Jan 2023 21:45:41 +0000
Received: by outflank-mailman (input) for mailman id 483255;
 Mon, 23 Jan 2023 21:45:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4dH-00063z-UO
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:45:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4558684d-9b67-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 22:45:37 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E000D61049;
 Mon, 23 Jan 2023 21:45:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C5CE1C433EF;
 Mon, 23 Jan 2023 21:45:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4558684d-9b67-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674510335;
	bh=ZuPiGptW3KXe3Gus67y+vUCaqRxd7Br0s0k42Kd7ZFk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mo5NZw06EPAsOHzp5eaIfVoYB603EpTYjqDpzsoeHiiS6N+L12qA6aG2jRctO7UXr
	 UqhT1aDL5hadel/BQnUb2MyQqG+QmCPq5B/DiMxfwmYzFegc/1bIFBy6f11Waasb1J
	 dH1o1wFMPost/zL6vhc+7UuKIE+CoDA8rLHrvkKubCfBM0E7cpiJr7BOIWwr5aB1hY
	 EMr3i3vfl62fWl4EAMFPFNn6KiL3QrOnT07dLIymx2ktKXyGgHpY9T4QAUyqDzTdri
	 LHah9VN3E5uG/ZfY1EHAY8IVG3J4OTyT43kZYfLDvs1lluZWNWjTozoL9I6ZVPlRB1
	 8rI3mSOcQeHLA==
Date: Mon, 23 Jan 2023 13:45:32 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Hongyan Xia <hongyxia@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH 11/22] x86: add a boot option to enable and disable the
 direct map
In-Reply-To: <20221216114853.8227-12-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231345010.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-12-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> Also add a helper function to retrieve it. Change arch_mfns_in_directmap
> to check this option before returning.
> 
> This is added as a boot command line option, not a Kconfig option, to
> allow the user to experiment with the feature without rebuilding the
> hypervisor.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
> 
>     TODO:
>         * Do we also want to provide a Kconfig option?
> 
>     Changes since Hongyan's version:
>         * Reword the commit message
>         * opt_directmap is only modified during boot so mark it as
>           __ro_after_init
> ---
>  docs/misc/xen-command-line.pandoc | 12 ++++++++++++
>  xen/arch/arm/include/asm/mm.h     |  5 +++++
>  xen/arch/x86/include/asm/mm.h     | 17 ++++++++++++++++-
>  xen/arch/x86/mm.c                 |  3 +++
>  xen/arch/x86/setup.c              |  2 ++
>  5 files changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index b7ee97be762e..a63e4612acac 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -760,6 +760,18 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>  additionally a trace buffer of the specified size is allocated per cpu.
>  The debug trace feature is only enabled in debugging builds of Xen.
>  
> +### directmap (x86)
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Enable or disable the direct map region in Xen.
> +
> +By default, Xen creates the direct map region which maps physical memory
> +in that region. Setting this to no will remove the direct map, blocking
> +exploits that leak secrets via speculative memory access in the direct
> +map.
> +
>  ### dma_bits
>  > `= <integer>`
>  
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 68adcac9fa8d..2366928d71aa 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -406,6 +406,11 @@ static inline void page_set_xenheap_gfn(struct page_info *p, gfn_t gfn)
>      } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
>  }
>  
> +static inline bool arch_has_directmap(void)
> +{
> +    return true;

Shouldn't arch_has_directmap() return false for arm32?



> +}
> +
>  #endif /*  __ARCH_ARM_MM__ */
>  /*
>   * Local variables:
> diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
> index db29e3e2059f..cf8b20817c6c 100644
> --- a/xen/arch/x86/include/asm/mm.h
> +++ b/xen/arch/x86/include/asm/mm.h
> @@ -464,6 +464,8 @@ static inline int get_page_and_type(struct page_info *page,
>      ASSERT(((_p)->count_info & PGC_count_mask) != 0);          \
>      ASSERT(page_get_owner(_p) == (_d))
>  
> +extern bool opt_directmap;
> +
>  /******************************************************************************
>   * With shadow pagetables, the different kinds of address start
>   * to get get confusing.
> @@ -620,13 +622,26 @@ extern const char zero_page[];
>  /* Build a 32bit PSE page table using 4MB pages. */
>  void write_32bit_pse_identmap(uint32_t *l2);
>  
> +static inline bool arch_has_directmap(void)
> +{
> +    return opt_directmap;
> +}
> +
>  /*
>   * x86 maps part of physical memory via the directmap region.
>   * Return whether the range of MFN falls in the directmap region.
> + *
> + * When boot command line sets directmap=no, we will not have a direct map at
> + * all so this will always return false.
>   */
>  static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>  {
> -    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
> +    unsigned long eva;
> +
> +    if ( !arch_has_directmap() )
> +        return false;
> +
> +    eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
>  
>      return (mfn + nr) <= (virt_to_mfn(eva - 1) + 1);
>  }
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 041bd4cfde17..e76e135b96fc 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -157,6 +157,9 @@ l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>  l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>      l1_fixmap_x[L1_PAGETABLE_ENTRIES];
>  
> +bool __ro_after_init opt_directmap = true;
> +boolean_param("directmap", opt_directmap);
> +
>  /* Frame table size in pages. */
>  unsigned long max_page;
>  unsigned long total_pages;
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 1c2e09711eb0..2cb051c6e4e7 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1423,6 +1423,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      if ( highmem_start )
>          xenheap_max_mfn(PFN_DOWN(highmem_start - 1));
>  
> +    printk("Booting with directmap %s\n", arch_has_directmap() ? "on" : "off");
> +
>      /*
>       * Walk every RAM region and map it in its entirety (on x86/64, at least)
>       * and notify it to the boot allocator.
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:48:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:48:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483260.749308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4fX-0006ce-KW; Mon, 23 Jan 2023 21:47:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483260.749308; Mon, 23 Jan 2023 21:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4fX-0006cX-Hl; Mon, 23 Jan 2023 21:47:59 +0000
Received: by outflank-mailman (input) for mailman id 483260;
 Mon, 23 Jan 2023 21:47:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4fW-0006cP-GU
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:47:58 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98ab82b1-9b67-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 22:47:57 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 11B51610F4;
 Mon, 23 Jan 2023 21:47:56 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2F3A4C433D2;
 Mon, 23 Jan 2023 21:47:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98ab82b1-9b67-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674510475;
	bh=R42fdma7M0xrNIB0RtL9NSlfB6xWahp+yeTnjQSPPmM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ozF0Cstm5EJeRXweNi9D51a4wmbbFI7JWY2eAGk9Zb7rnIS5F5HWBIYQxsF2aLH/t
	 fc9FBm9xEJfVQ9/4uVOEXw8ODI7ovrdQ/RLxjFWvZUTw3kcIGW/E/mxhK5o0SG6ilz
	 NcHJM2RVWfeLHbTkQEBgSBdPYzXUIw16lOdFhq9K9a+7d6gRh53+0lTHB6QKn30A5G
	 X4DW3lEzZscwg9AJ69XxeG+BZqiRk7NfrqahexdO5sAh7vq3F1KirwSJeODXMMGxIx
	 qU8IOGffkwc6q0jVAUmhM5QAJGHqkzpYhjgXPhcqp4Hce1K/olP06XXvjRhM4aad6T
	 AZa8TEhHNldvQ==
Date: Mon, 23 Jan 2023 13:47:52 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 12/22] xen/arm: fixmap: Rename the fixmap slots to follow
 the x86 convention
In-Reply-To: <20221216114853.8227-13-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231347430.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-13-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment the fixmap slots are prefixed differently between arm and
> x86.
> 
> Some of them (e.g. the PMAP slots) are used in common code. So it would
> be better if they are named the same way to avoid having to create
> aliases.
> 
> I have decided to use the x86 naming because it requires fewer changes. So
> all the Arm fixmap slots will now be prefixed with FIX rather than
> FIXMAP.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ----
> 
>     Note that potentially more renaming could be done to share
>     more code in the future. I have decided to not do that to avoid going
>     down a rabbit hole.
> ---
>  xen/arch/arm/acpi/lib.c                 | 18 +++++++++---------
>  xen/arch/arm/include/asm/early_printk.h |  2 +-
>  xen/arch/arm/include/asm/fixmap.h       | 16 ++++++++--------
>  xen/arch/arm/kernel.c                   |  6 +++---
>  xen/common/pmap.c                       |  8 ++++----
>  5 files changed, 25 insertions(+), 25 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
> index 41d521f720ac..736cf09ecaa8 100644
> --- a/xen/arch/arm/acpi/lib.c
> +++ b/xen/arch/arm/acpi/lib.c
> @@ -40,10 +40,10 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>          return NULL;
>  
>      offset = phys & (PAGE_SIZE - 1);
> -    base = FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) + offset;
> +    base = FIXMAP_ADDR(FIX_ACPI_BEGIN) + offset;
>  
>      /* Check the fixmap is big enough to map the region */
> -    if ( (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - base) < size )
> +    if ( (FIXMAP_ADDR(FIX_ACPI_END) + PAGE_SIZE - base) < size )
>          return NULL;
>  
>      /* With the fixmap, we can only map one region at the time */
> @@ -54,7 +54,7 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>  
>      size += offset;
>      mfn = maddr_to_mfn(phys);
> -    idx = FIXMAP_ACPI_BEGIN;
> +    idx = FIX_ACPI_BEGIN;
>  
>      do {
>          set_fixmap(idx, mfn, PAGE_HYPERVISOR);
> @@ -72,8 +72,8 @@ bool __acpi_unmap_table(const void *ptr, unsigned long size)
>      unsigned int idx;
>  
>      /* We are only handling fixmap address in the arch code */
> -    if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
> -         (vaddr >= (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE)) )
> +    if ( (vaddr < FIXMAP_ADDR(FIX_ACPI_BEGIN)) ||
> +         (vaddr >= (FIXMAP_ADDR(FIX_ACPI_END) + PAGE_SIZE)) )
>          return false;
>  
>      /*
> @@ -81,16 +81,16 @@ bool __acpi_unmap_table(const void *ptr, unsigned long size)
>       * for the ACPI fixmap region. The caller is expected to free with
>       * the same address.
>       */
> -    ASSERT((vaddr & PAGE_MASK) == FIXMAP_ADDR(FIXMAP_ACPI_BEGIN));
> +    ASSERT((vaddr & PAGE_MASK) == FIXMAP_ADDR(FIX_ACPI_BEGIN));
>  
>      /* The region allocated fit in the ACPI fixmap region. */
> -    ASSERT(size < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - vaddr));
> +    ASSERT(size < (FIXMAP_ADDR(FIX_ACPI_END) + PAGE_SIZE - vaddr));
>      ASSERT(fixmap_inuse);
>  
>      fixmap_inuse = false;
>  
> -    size += vaddr - FIXMAP_ADDR(FIXMAP_ACPI_BEGIN);
> -    idx = FIXMAP_ACPI_BEGIN;
> +    size += vaddr - FIXMAP_ADDR(FIX_ACPI_BEGIN);
> +    idx = FIX_ACPI_BEGIN;
>  
>      do
>      {
> diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
> index c5149b2976da..a5f48801f476 100644
> --- a/xen/arch/arm/include/asm/early_printk.h
> +++ b/xen/arch/arm/include/asm/early_printk.h
> @@ -17,7 +17,7 @@
>  
>  /* need to add the uart address offset in page to the fixmap address */
>  #define EARLY_UART_VIRTUAL_ADDRESS \
> -    (FIXMAP_ADDR(FIXMAP_CONSOLE) + (CONFIG_EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
> +    (FIXMAP_ADDR(FIX_CONSOLE) + (CONFIG_EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
>  
>  #endif /* !CONFIG_EARLY_PRINTK */
>  
> diff --git a/xen/arch/arm/include/asm/fixmap.h b/xen/arch/arm/include/asm/fixmap.h
> index d0c9a52c8c28..154db85686c2 100644
> --- a/xen/arch/arm/include/asm/fixmap.h
> +++ b/xen/arch/arm/include/asm/fixmap.h
> @@ -8,17 +8,17 @@
>  #include <xen/pmap.h>
>  
>  /* Fixmap slots */
> -#define FIXMAP_CONSOLE  0  /* The primary UART */
> -#define FIXMAP_MISC     1  /* Ephemeral mappings of hardware */
> -#define FIXMAP_ACPI_BEGIN  2  /* Start mappings of ACPI tables */
> -#define FIXMAP_ACPI_END    (FIXMAP_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1)  /* End mappings of ACPI tables */
> -#define FIXMAP_PMAP_BEGIN (FIXMAP_ACPI_END + 1) /* Start of PMAP */
> -#define FIXMAP_PMAP_END (FIXMAP_PMAP_BEGIN + NUM_FIX_PMAP - 1) /* End of PMAP */
> +#define FIX_CONSOLE  0  /* The primary UART */
> +#define FIX_MISC     1  /* Ephemeral mappings of hardware */
> +#define FIX_ACPI_BEGIN  2  /* Start mappings of ACPI tables */
> +#define FIX_ACPI_END    (FIX_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1)  /* End mappings of ACPI tables */
> +#define FIX_PMAP_BEGIN (FIX_ACPI_END + 1) /* Start of PMAP */
> +#define FIX_PMAP_END (FIX_PMAP_BEGIN + NUM_FIX_PMAP - 1) /* End of PMAP */
>  
> -#define FIXMAP_LAST FIXMAP_PMAP_END
> +#define FIX_LAST FIX_PMAP_END
>  
>  #define FIXADDR_START FIXMAP_ADDR(0)
> -#define FIXADDR_TOP FIXMAP_ADDR(FIXMAP_LAST)
> +#define FIXADDR_TOP FIXMAP_ADDR(FIX_LAST)
>  
>  #ifndef __ASSEMBLY__
>  
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 23b840ea9ea8..56800750fd9c 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -49,7 +49,7 @@ struct minimal_dtb_header {
>   */
>  void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len)
>  {
> -    void *src = (void *)FIXMAP_ADDR(FIXMAP_MISC);
> +    void *src = (void *)FIXMAP_ADDR(FIX_MISC);
>  
>      while (len) {
>          unsigned long l, s;
> @@ -57,10 +57,10 @@ void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len)
>          s = paddr & (PAGE_SIZE-1);
>          l = min(PAGE_SIZE - s, len);
>  
> -        set_fixmap(FIXMAP_MISC, maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC);
> +        set_fixmap(FIX_MISC, maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC);
>          memcpy(dst, src + s, l);
>          clean_dcache_va_range(dst, l);
> -        clear_fixmap(FIXMAP_MISC);
> +        clear_fixmap(FIX_MISC);
>  
>          paddr += l;
>          dst += l;
> diff --git a/xen/common/pmap.c b/xen/common/pmap.c
> index 14517198aae3..6e3ba9298df4 100644
> --- a/xen/common/pmap.c
> +++ b/xen/common/pmap.c
> @@ -32,8 +32,8 @@ void *__init pmap_map(mfn_t mfn)
>  
>      __set_bit(idx, inuse);
>  
> -    slot = idx + FIXMAP_PMAP_BEGIN;
> -    ASSERT(slot >= FIXMAP_PMAP_BEGIN && slot <= FIXMAP_PMAP_END);
> +    slot = idx + FIX_PMAP_BEGIN;
> +    ASSERT(slot >= FIX_PMAP_BEGIN && slot <= FIX_PMAP_END);
>  
>      /*
>       * We cannot use set_fixmap() here. We use PMAP when the domain map
> @@ -53,10 +53,10 @@ void __init pmap_unmap(const void *p)
>      unsigned int slot = virt_to_fix((unsigned long)p);
>  
>      ASSERT(system_state < SYS_STATE_smp_boot);
> -    ASSERT(slot >= FIXMAP_PMAP_BEGIN && slot <= FIXMAP_PMAP_END);
> +    ASSERT(slot >= FIX_PMAP_BEGIN && slot <= FIX_PMAP_END);
>      ASSERT(!in_irq());
>  
> -    idx = slot - FIXMAP_PMAP_BEGIN;
> +    idx = slot - FIX_PMAP_BEGIN;
>  
>      __clear_bit(idx, inuse);
>      arch_pmap_unmap(slot);
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 21:57:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 21:57:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483267.749319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4p7-0008DC-MS; Mon, 23 Jan 2023 21:57:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483267.749319; Mon, 23 Jan 2023 21:57:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4p7-0008D5-JJ; Mon, 23 Jan 2023 21:57:53 +0000
Received: by outflank-mailman (input) for mailman id 483267;
 Mon, 23 Jan 2023 21:57:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pK4p5-0008Cj-Pm
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 21:57:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK4p4-0004iI-QV; Mon, 23 Jan 2023 21:57:50 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK4p4-0000MS-Kp; Mon, 23 Jan 2023 21:57:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=P0z1hxn/ltkAJpHB2GWK5p0RpyMp/HeoKYGxTsOcH6I=; b=tucYJqFMb2RNn13rIQNb2TbRnh
	3PZLZsDzsveg+mmHeVxwK5OKXXEx0Nh8LX+yY9nkJvZTT2Cd/GV8eT7mEARiUpkYdLhiYZqLDpdAH
	kGED+9pd2i1qYevtPS4DtxUMsWrWNVx3aEJtPfAhks1NOTSpxZdWNQYi1YBen718JXmI=;
Message-ID: <e391f6d0-5288-2216-0d11-ef683fd7ebf8@xen.org>
Date: Mon, 23 Jan 2023 21:57:48 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 01/22] xen/common: page_alloc: Re-order includes
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-2-julien@xen.org>
 <alpine.DEB.2.22.394.2301231327060.1978264@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301231327060.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 23/01/2023 21:29, Stefano Stabellini wrote:
> On Fri, 16 Dec 2022, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Order the includes with the xen headers first, then asm headers, and
>> the public headers last. Within each category, they are sorted alphabetically.
>>
>> Note that the includes protected by CONFIG_X86 haven't been sorted, to
>> avoid adding multiple #ifdefs.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> This patch doesn't apply as is any longer.

That's expected given that I committed this patch a month ago (see my 
answer to Jan's e-mail on 23rd December).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:02:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 22:02:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483272.749328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4t6-0001DL-6q; Mon, 23 Jan 2023 22:02:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483272.749328; Mon, 23 Jan 2023 22:02:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4t6-0001DE-3o; Mon, 23 Jan 2023 22:02:00 +0000
Received: by outflank-mailman (input) for mailman id 483272;
 Mon, 23 Jan 2023 22:01:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pK4t4-0001D8-G0
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 22:01:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK4t3-0004ql-FP; Mon, 23 Jan 2023 22:01:57 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK4t3-0000fO-A4; Mon, 23 Jan 2023 22:01:57 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=yFpe2KVEJ2lFuASDCYbfpvVJtAEx0uiL7bNCS4k4EXE=; b=sjxZqaXQfExJhdjT33PvwQsfWh
	w+l9bi3FHC/OwAY31W2Zf4GKG01sI8hSsXvogOcYMwm6rwvFVkkkKCIKmX8EwwHum6V+0K2oFQykB
	bVZ6bjPKet5zTc1SqvYi1MSnNLkJ1u3Gkbc7tQNl6gwjRwRWAWhmYmqe2BmBSiz1XerI=;
Message-ID: <2dd5c041-ef70-64e0-dc22-25a0c813c5de@xen.org>
Date: Mon, 23 Jan 2023 22:01:55 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 11/22] x86: add a boot option to enable and disable the
 direct map
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-12-julien@xen.org>
 <alpine.DEB.2.22.394.2301231345010.1978264@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301231345010.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 23/01/2023 21:45, Stefano Stabellini wrote:
>> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
>> index 68adcac9fa8d..2366928d71aa 100644
>> --- a/xen/arch/arm/include/asm/mm.h
>> +++ b/xen/arch/arm/include/asm/mm.h
>> @@ -406,6 +406,11 @@ static inline void page_set_xenheap_gfn(struct page_info *p, gfn_t gfn)
>>       } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
>>   }
>>   
>> +static inline bool arch_has_directmap(void)
>> +{
>> +    return true;
> 
> Shouldn't arch_has_directmap return false for arm32?

We still have a directmap on Arm32, but it only covers the xenheap.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:04:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 22:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483277.749339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4uv-0001n6-Hg; Mon, 23 Jan 2023 22:03:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483277.749339; Mon, 23 Jan 2023 22:03:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4uv-0001mz-Ex; Mon, 23 Jan 2023 22:03:53 +0000
Received: by outflank-mailman (input) for mailman id 483277;
 Mon, 23 Jan 2023 22:03:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4uu-0001mb-IL
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 22:03:52 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d10673bf-9b69-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 23:03:50 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 9EC46B80DD4;
 Mon, 23 Jan 2023 22:03:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1EB72C433D2;
 Mon, 23 Jan 2023 22:03:47 +0000 (UTC)
X-Inumbo-ID: d10673bf-9b69-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674511428;
	bh=1sW3BkkqzetZkyypVgS7OGxzxKEd0zxhoJpCmMoJrjk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MLxGYK3V8qRGxFfUU7gfVIeAqyhosC8lZhVhNtmAQoasmUzn0g5Jtrx2StGyI39ff
	 Y8H0sIjGSNyJByxulABl7rEPfjV1Mku3/qJv1vo25dk1jveX752hobJjQNCcJYp10r
	 xnHUQAWlH2d2uA+yiWUvtWE9mnN00JjjO/VRELzwpvyDQq0HpBmzOT1ITkAN7Fg8P9
	 lXaxmXGDSxY0Fgw62sIIXMg6u7qyfnyHAfeIcWdksvRQgGsefDQrXkq0Lj/YTQWKfv
	 jy08GUFuPs3evxl89vvLZKISFmciR4tkBc8hbXNMDVWYqo5hadu27P7P0c813m2z4j
	 XBLRjtQvUtNTg==
Date: Mon, 23 Jan 2023 14:03:45 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Hongyan Xia <hongyxia@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH 17/22] x86/setup: vmap heap nodes when they are outside
 the direct map
In-Reply-To: <20221216114853.8227-18-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231358440.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-18-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> When we do not have a direct map, archs_mfn_in_direct_map() will always
> return false, thus init_node_heap() will allocate xenheap pages from an
> existing node for the metadata of a new node. This means that the
> metadata of a new node is in a different node, slowing down heap
> allocation.
> 
> Since we now have early vmap, vmap the metadata locally in the new node.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
> 
>     Changes from Hongyan's version:
>         * arch_mfn_in_direct_map() was renamed to
>           arch_mfns_in_direct_map()
>         * Use vmap_contig_pages() rather than __vmap(...).
>         * Add missing include (xen/vmap.h) so it compiles on Arm
> ---
>  xen/common/page_alloc.c | 42 +++++++++++++++++++++++++++++++----------
>  1 file changed, 32 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 0c4af5a71407..581c15d74dfb 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -136,6 +136,7 @@
>  #include <xen/sched.h>
>  #include <xen/softirq.h>
>  #include <xen/spinlock.h>
> +#include <xen/vmap.h>
>  
>  #include <asm/flushtlb.h>
>  #include <asm/numa.h>
> @@ -597,22 +598,43 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>          needed = 0;
>      }
>      else if ( *use_tail && nr >= needed &&
> -              arch_mfns_in_directmap(mfn + nr - needed, needed) &&
>                (!xenheap_bits ||
>                 !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>      {
> -        _heap[node] = mfn_to_virt(mfn + nr - needed);
> -        avail[node] = mfn_to_virt(mfn + nr - 1) +
> -                      PAGE_SIZE - sizeof(**avail) * NR_ZONES;
> -    }
> -    else if ( nr >= needed &&
> -              arch_mfns_in_directmap(mfn, needed) &&
> +        if ( arch_mfns_in_directmap(mfn + nr - needed, needed) )
> +        {
> +            _heap[node] = mfn_to_virt(mfn + nr - needed);
> +            avail[node] = mfn_to_virt(mfn + nr - 1) +
> +                          PAGE_SIZE - sizeof(**avail) * NR_ZONES;
> +        }
> +        else
> +        {
> +            mfn_t needed_start = _mfn(mfn + nr - needed);
> +
> +            _heap[node] = vmap_contig_pages(needed_start, needed);
> +            BUG_ON(!_heap[node]);

I see a BUG_ON here, but init_node_heap is not __init. I am asking because
BUG_ON is only a good idea during init time. Should init_node_heap be
__init (not necessarily in this patch, but still)?


> +            avail[node] = (void *)(_heap[node]) + (needed << PAGE_SHIFT) -
> +                          sizeof(**avail) * NR_ZONES;
> +        }
> +    } else if ( nr >= needed &&
>                (!xenheap_bits ||
>                 !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>      {
> -        _heap[node] = mfn_to_virt(mfn);
> -        avail[node] = mfn_to_virt(mfn + needed - 1) +
> -                      PAGE_SIZE - sizeof(**avail) * NR_ZONES;
> +        if ( arch_mfns_in_directmap(mfn, needed) )
> +        {
> +            _heap[node] = mfn_to_virt(mfn);
> +            avail[node] = mfn_to_virt(mfn + needed - 1) +
> +                          PAGE_SIZE - sizeof(**avail) * NR_ZONES;
> +        }
> +        else
> +        {
> +            mfn_t needed_start = _mfn(mfn);
> +
> +            _heap[node] = vmap_contig_pages(needed_start, needed);
> +            BUG_ON(!_heap[node]);
> +            avail[node] = (void *)(_heap[node]) + (needed << PAGE_SHIFT) -
> +                          sizeof(**avail) * NR_ZONES;
> +        }
>          *use_tail = false;
>      }
>      else if ( get_order_from_bytes(sizeof(**_heap)) ==
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:06:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 22:06:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483282.749349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4xc-0002Pv-Uv; Mon, 23 Jan 2023 22:06:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483282.749349; Mon, 23 Jan 2023 22:06:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4xc-0002Po-Rj; Mon, 23 Jan 2023 22:06:40 +0000
Received: by outflank-mailman (input) for mailman id 483282;
 Mon, 23 Jan 2023 22:06:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK4xb-0002Pi-Ht
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 22:06:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34738f38-9b6a-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 23:06:37 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D5EBC6113C;
 Mon, 23 Jan 2023 22:06:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 77791C433D2;
 Mon, 23 Jan 2023 22:06:34 +0000 (UTC)
X-Inumbo-ID: 34738f38-9b6a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674511595;
	bh=CPExy/CVX14ZaDL7i2mDVsKaJL1MKCn9QWrX72FwgUM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ikIrfVP+OIX8IbnrHRUR7ZT9FLzoxEOno2ijoQJnvUkQ749oMerVTkYXd9H2LB/f/
	 3T39Fu+w9A4AhjNlYCgl4mGA3ED4s7wAaTN7vYEXxC6GcDt+yxSlfDrsWez6R09o7V
	 jspHI7eAFmy1TZcBH29ZB8LMISU/JaNr0zy+Gs6v42ci+atBVwrLAxOTUlh+xtnupX
	 93wCZduVehnFdz0iy+oX+5G39m5paMxIJKKudLB66XQQ1VR+5TjWa4mgQLnwlWiiW/
	 c0/Dc4URQ0fXDnLJm1vhcmilioED8DNRefjhxMxtqh90aUEGvEFpmxRqG2aAY62OnN
	 s+ump0w28C2zg==
Date: Mon, 23 Jan 2023 14:06:32 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 19/22] xen/arm32: mm: Rename 'first' to 'root' in
 init_secondary_pagetables()
In-Reply-To: <20221216114853.8227-20-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231406240.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-20-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The arm32 version of init_secondary_pagetables() will soon be re-used
> for arm64 as well, where the root table starts at level 0 rather than level 1.
> 
> So rename 'first' to 'root'.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/mm.c | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0fc6f2992dd1..4e208f7d20c8 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -571,32 +571,30 @@ int init_secondary_pagetables(int cpu)
>  #else
>  int init_secondary_pagetables(int cpu)
>  {
> -    lpae_t *first;
> +    lpae_t *root = alloc_xenheap_page();
>  
> -    first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
> -
> -    if ( !first )
> +    if ( !root )
>      {
> -        printk("CPU%u: Unable to allocate the first page-table\n", cpu);
> +        printk("CPU%u: Unable to allocate the root page-table\n", cpu);
>          return -ENOMEM;
>      }
>  
>      /* Initialise root pagetable from root of boot tables */
> -    memcpy(first, cpu0_pgtable, PAGE_SIZE);
> -    per_cpu(xen_pgtable, cpu) = first;
> +    memcpy(root, cpu0_pgtable, PAGE_SIZE);
> +    per_cpu(xen_pgtable, cpu) = root;
>  
>      if ( !init_domheap_mappings(cpu) )
>      {
>          printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu);
>          per_cpu(xen_pgtable, cpu) = NULL;
> -        free_xenheap_page(first);
> +        free_xenheap_page(root);
>          return -ENOMEM;
>      }
>  
>      clear_boot_pagetables();
>  
>      /* Set init_ttbr for this CPU coming up */
> -    init_ttbr = __pa(first);
> +    init_ttbr = __pa(root);
>      clean_dcache(init_ttbr);
>  
>      return 0;
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:07:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 22:07:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483286.749359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4yX-0002w1-6i; Mon, 23 Jan 2023 22:07:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483286.749359; Mon, 23 Jan 2023 22:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK4yX-0002vu-44; Mon, 23 Jan 2023 22:07:37 +0000
Received: by outflank-mailman (input) for mailman id 483286;
 Mon, 23 Jan 2023 22:07:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pK4yW-0002vo-9m
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 22:07:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK4yT-0004zk-Pg; Mon, 23 Jan 2023 22:07:33 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK4yT-0000t7-JB; Mon, 23 Jan 2023 22:07:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2NxIrUoJGYNAjv4fKwE7bBm/FlFpPnGkwoMYc2rZZSQ=; b=2gKQ8Y0hTH5SadsB4dZYLHnPpE
	7oMsF8S21u6k0tAwag2KwYX9+jlKxBds7sos+//5F3KXjjGxgpbHrUswrJl+j3T7qCH46jlo+rQKZ
	jqFHvzfSphsCEsVGiy7UzOppr7mDapYipGM9sP7xtWmTNdzYi4Np3A3Ej3FQ+t1/qqJo=;
Message-ID: <af94ef17-0891-4540-4238-ef842b8af249@xen.org>
Date: Mon, 23 Jan 2023 22:07:31 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v3 1/3] xen/arm: Use the correct format specifier
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, xuwei5@hisilicon.com
References: <20230123134451.47185-1-ayan.kumar.halder@amd.com>
 <20230123134451.47185-2-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301231313370.1978264@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301231313370.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 23/01/2023 21:19, Stefano Stabellini wrote:
> On Mon, 23 Jan 2023, Ayan Kumar Halder wrote:
>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
>> while creating nodes in fdt, the address (if present in the node name)
>> should be represented using 'PRIx64'. This is to be in conformance
>> with the following rule present in https://elinux.org/Device_Tree_Linux
>>
>> . node names
>> "unit-address does not have leading zeros"
>>
>> As 'PRIpaddr' introduces leading zeros, we cannot use it.
>>
>> So, we have introduced a wrapper, i.e. domain_fdt_begin_node(), which
>> will represent the physical address using 'PRIx64'.
>>
>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
>> address.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>>
>> Changes from -
>>
>> v1 - 1. Moved the patch earlier.
>> 2. Moved a part of change from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr"
>> into this patch.
>>
>> v2 - 1. Use PRIx64 for appending addresses to fdt node names. This fixes the CI failure.
>>
>>   xen/arch/arm/domain_build.c | 45 +++++++++++++++++--------------------
>>   xen/arch/arm/gic-v2.c       |  6 ++---
>>   xen/arch/arm/mm.c           |  2 +-
> 
> The changes to mm.c and gic-v2.c look OK and I'd ack them already. One
> question on the changes to domain_build.c below.
> 
> 
>>   3 files changed, 25 insertions(+), 28 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index f35f4d2456..97c2395f9a 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -1288,6 +1288,20 @@ static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
>>       return res;
>>   }
>>   
>> +static int __init domain_fdt_begin_node(void *fdt, const char *name,
>> +                                        uint64_t unit)
>> +{
>> +    /*
>> +     * The size of the buffer to hold the longest possible string ie
>> +     * interrupt-controller@ + a 64-bit number + \0
>> +     */
>> +    char buf[38];
>> +
>> +    /* ePAPR 3.4 */
>> +    snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);

The return wants to be checked.

>> +    return fdt_begin_node(fdt, buf);
>> +}
>> +
>>   static int __init make_memory_node(const struct domain *d,
>>                                      void *fdt,
>>                                      int addrcells, int sizecells,
>> @@ -1296,8 +1310,6 @@ static int __init make_memory_node(const struct domain *d,
>>       unsigned int i;
>>       int res, reg_size = addrcells + sizecells;
>>       int nr_cells = 0;
>> -    /* Placeholder for memory@ + a 64-bit number + \0 */
>> -    char buf[24];
>>       __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
>>       __be32 *cells;
>>   
>> @@ -1314,9 +1326,7 @@ static int __init make_memory_node(const struct domain *d,
>>   
>>       dt_dprintk("Create memory node\n");
>>   
>> -    /* ePAPR 3.4 */
>> -    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
>> -    res = fdt_begin_node(fdt, buf);
>> +    res = domain_fdt_begin_node(fdt, "memory", mem->bank[i].start);
> 
> Basically this "hides" the paddr_t->uint64_t cast because it happens
> implicitly when passing mem->bank[i].start as an argument to
> domain_fdt_begin_node.
> 
> To be honest, I don't know if it is necessary. Also a normal cast would
> be fine:
> 
>      snprintf(buf, sizeof(buf), "memory@%"PRIx64, (uint64_t)mem->bank[i].start);
>      res = fdt_begin_node(fdt, buf);
The problem with the open-coded version is that you would need to explain 
the cast everywhere (I dislike unexplained ones).

I don't particularly mind the 'hidden cast', but I think we need to explain 
on top of domain_fdt_begin_node() why it is necessary.

> 
> Julien, what do you prefer?

Definitely the function because that's what I suggested (see the 
rationale above).
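For illustration, here is a stand-alone sketch (not Xen code; build_node_name() is a hypothetical analogue of domain_fdt_begin_node()) showing why a plain %"PRIx64" format satisfies the "unit-address does not have leading zeros" rule while a zero-padded format like PRIpaddr would not, with the snprintf() return value checked as requested above:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical stand-alone analogue of domain_fdt_begin_node()'s name
 * formatting. %"PRIx64" prints no leading zeros, as device tree node
 * unit-addresses require; the snprintf() return value is checked so a
 * truncated name is reported instead of silently used.
 */
int build_node_name(char *buf, size_t len, const char *name, uint64_t unit)
{
    /* "interrupt-controller@" (21) + 16 hex digits + '\0' needs 38 bytes */
    int ret = snprintf(buf, len, "%s@%"PRIx64, name, unit);

    if ( ret < 0 || (size_t)ret >= len )
        return -1;

    return 0;
}
```

For example, build_node_name(buf, sizeof(buf), "memory", 0x40000000) yields "memory@40000000", whereas a zero-padded "%016"PRIx64 would produce "memory@0000000040000000" and break the rule.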

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:21:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 22:21:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483292.749369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5Bm-0005LS-Ct; Mon, 23 Jan 2023 22:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483292.749369; Mon, 23 Jan 2023 22:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5Bm-0005LL-AE; Mon, 23 Jan 2023 22:21:18 +0000
Received: by outflank-mailman (input) for mailman id 483292;
 Mon, 23 Jan 2023 22:21:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK5Bl-0005LF-26
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 22:21:17 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f7a1077-9b6c-11ed-91b6-6bf2151ebd3b;
 Mon, 23 Jan 2023 23:21:15 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id AB40D610A3;
 Mon, 23 Jan 2023 22:21:13 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 40292C433D2;
 Mon, 23 Jan 2023 22:21:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f7a1077-9b6c-11ed-91b6-6bf2151ebd3b
Date: Mon, 23 Jan 2023 14:21:10 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 20/22] xen/arm64: mm: Use per-pCPU page-tables
In-Reply-To: <20221216114853.8227-21-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231421000.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-21-julien@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, on Arm64, every pCPU shares the same page-tables.
> 
> In a follow-up patch, we will make it possible to remove the
> direct map, and therefore a mapcache will be necessary.
> 
> While we have plenty of spare virtual address space to reserve
> a part for each pCPU, sharing the page-tables means that temporary
> mappings (e.g. guest memory) would be accessible by every pCPU.
> 
> In order to increase our security posture, it would be better if
> those mappings are only accessible by the pCPU doing the temporary
> mapping.
> 
> In addition to that, per-pCPU page-tables open the way to a
> per-domain mapping area.
> 
> Arm32 is already using per-pCPU page-tables so most of the code
> can be re-used. Arm64 doesn't yet have support for the mapcache,
> so a stub is provided (moved to its own header asm/domain_page.h).
> 
> Take the opportunity to fix a typo in a comment that is modified.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/domain_page.c             |  2 ++
>  xen/arch/arm/include/asm/arm32/mm.h    |  8 -----
>  xen/arch/arm/include/asm/domain_page.h | 13 ++++++++
>  xen/arch/arm/include/asm/mm.h          |  5 +++
>  xen/arch/arm/mm.c                      | 42 +++++++-------------------
>  xen/arch/arm/setup.c                   |  1 +
>  6 files changed, 32 insertions(+), 39 deletions(-)
>  create mode 100644 xen/arch/arm/include/asm/domain_page.h
> 
> diff --git a/xen/arch/arm/domain_page.c b/xen/arch/arm/domain_page.c
> index b7c02c919064..4540b3c5f24c 100644
> --- a/xen/arch/arm/domain_page.c
> +++ b/xen/arch/arm/domain_page.c
> @@ -3,6 +3,8 @@
>  #include <xen/pmap.h>
>  #include <xen/vmap.h>
>  
> +#include <asm/domain_page.h>
> +
>  /* Override macros from asm/page.h to make them work with mfn_t */
>  #undef virt_to_mfn
>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
> index 8bfc906e7178..6b039d9ceaa2 100644
> --- a/xen/arch/arm/include/asm/arm32/mm.h
> +++ b/xen/arch/arm/include/asm/arm32/mm.h
> @@ -1,12 +1,6 @@
>  #ifndef __ARM_ARM32_MM_H__
>  #define __ARM_ARM32_MM_H__
>  
> -#include <xen/percpu.h>
> -
> -#include <asm/lpae.h>
> -
> -DECLARE_PER_CPU(lpae_t *, xen_pgtable);
> -
>  /*
>   * Only a limited amount of RAM, called xenheap, is always mapped on ARM32.
>   * For convenience always return false.
> @@ -16,8 +10,6 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>      return false;
>  }
>  
> -bool init_domheap_mappings(unsigned int cpu);
> -
>  #endif /* __ARM_ARM32_MM_H__ */
>  
>  /*
> diff --git a/xen/arch/arm/include/asm/domain_page.h b/xen/arch/arm/include/asm/domain_page.h
> new file mode 100644
> index 000000000000..e9f52685e2ec
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/domain_page.h
> @@ -0,0 +1,13 @@
> +#ifndef __ASM_ARM_DOMAIN_PAGE_H__
> +#define __ASM_ARM_DOMAIN_PAGE_H__
> +
> +#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
> +bool init_domheap_mappings(unsigned int cpu);
> +#else
> +static inline bool init_domheap_mappings(unsigned int cpu)
> +{
> +    return true;
> +}
> +#endif
> +
> +#endif /* __ASM_ARM_DOMAIN_PAGE_H__ */
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 2366928d71aa..7a2c775f9562 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -2,6 +2,9 @@
>  #define __ARCH_ARM_MM__
>  
>  #include <xen/kernel.h>
> +#include <xen/percpu.h>
> +
> +#include <asm/lpae.h>
>  #include <asm/page.h>
>  #include <public/xen.h>
>  #include <xen/pdx.h>
> @@ -14,6 +17,8 @@
>  # error "unknown ARM variant"
>  #endif
>  
> +DECLARE_PER_CPU(lpae_t *, xen_pgtable);
> +
>  /* Align Xen to a 2 MiB boundary. */
>  #define XEN_PADDR_ALIGN (1 << 21)
>  
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 4e208f7d20c8..2af751af9003 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -24,6 +24,7 @@
>  
>  #include <xsm/xsm.h>
>  
> +#include <asm/domain_page.h>
>  #include <asm/fixmap.h>
>  #include <asm/setup.h>
>  
> @@ -90,20 +91,19 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
>   * xen_second, xen_fixmap and xen_xenmap are always shared between all
>   * PCPUs.
>   */
> +/* Per-CPU pagetable pages */
> +/* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
> +DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> +
> +/* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
> +static DEFINE_PAGE_TABLE(cpu0_pgtable);
> +#define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
>  
>  #ifdef CONFIG_ARM_64
>  #define HYP_PT_ROOT_LEVEL 0
> -static DEFINE_PAGE_TABLE(xen_pgtable);
>  static DEFINE_PAGE_TABLE(xen_first);
> -#define THIS_CPU_PGTABLE xen_pgtable
>  #else
>  #define HYP_PT_ROOT_LEVEL 1
> -/* Per-CPU pagetable pages */
> -/* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
> -DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> -#define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
> -/* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
> -static DEFINE_PAGE_TABLE(cpu0_pgtable);
>  #endif
>  
>  /* Common pagetable leaves */
> @@ -481,14 +481,13 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
>  
>      phys_offset = boot_phys_offset;
>  
> +    p = cpu0_pgtable;
> +
>  #ifdef CONFIG_ARM_64
> -    p = (void *) xen_pgtable;
>      p[0] = pte_of_xenaddr((uintptr_t)xen_first);
>      p[0].pt.table = 1;
>      p[0].pt.xn = 0;
>      p = (void *) xen_first;
> -#else
> -    p = (void *) cpu0_pgtable;
>  #endif
>  
>      /* Map xen second level page-table */
> @@ -527,19 +526,13 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
>      pte.pt.table = 1;
>      xen_second[second_table_offset(FIXMAP_ADDR(0))] = pte;
>  
> -#ifdef CONFIG_ARM_64
> -    ttbr = (uintptr_t) xen_pgtable + phys_offset;
> -#else
>      ttbr = (uintptr_t) cpu0_pgtable + phys_offset;
> -#endif
>  
>      switch_ttbr(ttbr);
>  
>      xen_pt_enforce_wnx();
>  
> -#ifdef CONFIG_ARM_32
>      per_cpu(xen_pgtable, 0) = cpu0_pgtable;
> -#endif
>  }
>  
>  static void clear_boot_pagetables(void)
> @@ -557,18 +550,6 @@ static void clear_boot_pagetables(void)
>      clear_table(boot_third);
>  }
>  
> -#ifdef CONFIG_ARM_64
> -int init_secondary_pagetables(int cpu)
> -{
> -    clear_boot_pagetables();
> -
> -    /* Set init_ttbr for this CPU coming up. All CPus share a single setof
> -     * pagetables, but rewrite it each time for consistency with 32 bit. */
> -    init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
> -    clean_dcache(init_ttbr);
> -    return 0;
> -}
> -#else
>  int init_secondary_pagetables(int cpu)
>  {
>      lpae_t *root = alloc_xenheap_page();
> @@ -599,7 +580,6 @@ int init_secondary_pagetables(int cpu)
>  
>      return 0;
>  }
> -#endif
>  
>  /* MMU setup for secondary CPUS (which already have paging enabled) */
>  void mmu_init_secondary_cpu(void)
> @@ -1089,7 +1069,7 @@ static int xen_pt_update(unsigned long virt,
>      unsigned long left = nr_mfns;
>  
>      /*
> -     * For arm32, page-tables are different on each CPUs. Yet, they share
> +     * Page-tables are different on each CPU. Yet, they share
>       * some common mappings. It is assumed that only common mappings
>       * will be modified with this function.
>       *
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 2311726f5ddd..88d9d90fb5ad 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -39,6 +39,7 @@
>  #include <asm/gic.h>
>  #include <asm/cpuerrata.h>
>  #include <asm/cpufeature.h>
> +#include <asm/domain_page.h>
>  #include <asm/platform.h>
>  #include <asm/procinfo.h>
>  #include <asm/setup.h>
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:23:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 22:23:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483299.749379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5DW-0005wU-Qu; Mon, 23 Jan 2023 22:23:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483299.749379; Mon, 23 Jan 2023 22:23:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5DW-0005wN-OL; Mon, 23 Jan 2023 22:23:06 +0000
Received: by outflank-mailman (input) for mailman id 483299;
 Mon, 23 Jan 2023 22:23:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pK5DV-0005wF-Fu
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 22:23:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK5DU-0005Yv-Q2; Mon, 23 Jan 2023 22:23:04 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK5DU-0001ZS-Ks; Mon, 23 Jan 2023 22:23:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <ea6c03f4-13ad-e312-1827-8e1c5ea1363e@xen.org>
Date: Mon, 23 Jan 2023 22:23:02 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-18-julien@xen.org>
 <alpine.DEB.2.22.394.2301231358440.1978264@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 17/22] x86/setup: vmap heap nodes when they are outside
 the direct map
In-Reply-To: <alpine.DEB.2.22.394.2301231358440.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 23/01/2023 22:03, Stefano Stabellini wrote:
> On Fri, 16 Dec 2022, Julien Grall wrote:
>> From: Hongyan Xia <hongyxia@amazon.com>
>>
>> When we do not have a direct map, arch_mfns_in_directmap() will always
>> return false, thus init_node_heap() will allocate xenheap pages from an
>> existing node for the metadata of a new node. This means that the
>> metadata of a new node is in a different node, slowing down heap
>> allocation.
>>
>> Since we now have early vmap, vmap the metadata locally in the new node.
>>
>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ----
>>
>>      Changes from Hongyan's version:
>>          * arch_mfn_in_direct_map() was renamed to
>>            arch_mfns_in_direct_map()
>>          * Use vmap_contig_pages() rather than __vmap(...).
>>          * Add missing include (xen/vmap.h) so it compiles on Arm
>> ---
>>   xen/common/page_alloc.c | 42 +++++++++++++++++++++++++++++++----------
>>   1 file changed, 32 insertions(+), 10 deletions(-)
>>
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index 0c4af5a71407..581c15d74dfb 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -136,6 +136,7 @@
>>   #include <xen/sched.h>
>>   #include <xen/softirq.h>
>>   #include <xen/spinlock.h>
>> +#include <xen/vmap.h>
>>   
>>   #include <asm/flushtlb.h>
>>   #include <asm/numa.h>
>> @@ -597,22 +598,43 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>>           needed = 0;
>>       }
>>       else if ( *use_tail && nr >= needed &&
>> -              arch_mfns_in_directmap(mfn + nr - needed, needed) &&
>>                 (!xenheap_bits ||
>>                  !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>       {
>> -        _heap[node] = mfn_to_virt(mfn + nr - needed);
>> -        avail[node] = mfn_to_virt(mfn + nr - 1) +
>> -                      PAGE_SIZE - sizeof(**avail) * NR_ZONES;
>> -    }
>> -    else if ( nr >= needed &&
>> -              arch_mfns_in_directmap(mfn, needed) &&
>> +        if ( arch_mfns_in_directmap(mfn + nr - needed, needed) )
>> +        {
>> +            _heap[node] = mfn_to_virt(mfn + nr - needed);
>> +            avail[node] = mfn_to_virt(mfn + nr - 1) +
>> +                          PAGE_SIZE - sizeof(**avail) * NR_ZONES;
>> +        }
>> +        else
>> +        {
>> +            mfn_t needed_start = _mfn(mfn + nr - needed);
>> +
>> +            _heap[node] = vmap_contig_pages(needed_start, needed);
>> +            BUG_ON(!_heap[node]);
> 
> I see a BUG_ON here but init_node_heap is not __init.

FWIW, this would not be the first BUG_ON() in this function.

> Asking because
> BUG_ON is only a good idea during init time. Should init_node_heap be
> __init (not necessarily in this patch, but still)?
AFAIK, there are two uses outside of __init:
   1) Free the init sections
   2) Memory hotplug

In the first case, we will likely need to panic() in case of an error. 
For the second case, I am not entirely sure.

But there would be a fair bit of plumbing and thinking (how do you deal 
with the case where part of the memory was already added?).

Anyway, I don't think I am making the function worse, so I would rather 
not open that can of worms (yet).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:32:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 22:32:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483304.749389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5MU-0007Uw-Mk; Mon, 23 Jan 2023 22:32:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483304.749389; Mon, 23 Jan 2023 22:32:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5MU-0007Up-Ju; Mon, 23 Jan 2023 22:32:22 +0000
Received: by outflank-mailman (input) for mailman id 483304;
 Mon, 23 Jan 2023 22:32:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiRr=5U=wdc.com=prvs=3802d7ee7=Alistair.Francis@srs-se1.protection.inumbo.net>)
 id 1pK5MT-0007Ui-Pm
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 22:32:22 +0000
Received: from esa2.hgst.iphmx.com (esa2.hgst.iphmx.com [68.232.143.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9b16daf-9b6d-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 23:32:17 +0100 (CET)
Received: from mail-bn1nam02lp2049.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.49])
 by ob1.hgst.iphmx.com with ESMTP; 24 Jan 2023 06:32:04 +0800
Received: from SJ0PR04MB7872.namprd04.prod.outlook.com (2603:10b6:a03:303::20)
 by MN2PR04MB6768.namprd04.prod.outlook.com (2603:10b6:208:1ea::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 22:32:02 +0000
Received: from SJ0PR04MB7872.namprd04.prod.outlook.com
 ([fe80::d767:b3d:a9d4:e1fa]) by SJ0PR04MB7872.namprd04.prod.outlook.com
 ([fe80::d767:b3d:a9d4:e1fa%8]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 22:32:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9b16daf-9b6d-11ed-b8d1-410ff93cb8f0
From: Alistair Francis <Alistair.Francis@wdc.com>
To: "oleksii.kurochko@gmail.com" <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "bobbyeshleman@gmail.com" <bobbyeshleman@gmail.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>, "gianluca@rivosinc.com"
	<gianluca@rivosinc.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>
Subject: Re: [RISC-V] Switch  to H-mode
Thread-Topic: [RISC-V] Switch  to H-mode
Thread-Index: AQHZL0ujmIfMEOwMNEi0n3X25HhaKK6slm0A
Date: Mon, 23 Jan 2023 22:32:01 +0000
Message-ID: <0f810fd4de8b5b05ceddd53a9ce26e8be9014eb2.camel@wdc.com>
References: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
In-Reply-To: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.46.3 (by Flathub.org) 
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=wdc.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: SJ0PR04MB7872:EE_|MN2PR04MB6768:EE_
x-ms-office365-filtering-correlation-id: f1330f0e-ddbc-42ad-621d-08dafd91a5aa
wdcipoutbound: EOP-TRUE
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0

On Mon, 2023-01-23 at 18:56 +0200, Oleksii wrote:
> Hi Alistair and community,
> 
> I am working on RISC-V support upstream for Xen, based on your and
> Bobby's patches.
> 
> While adding the RISC-V support I realized that Xen is run in S-mode.
> Output of OpenSBI:
>     ...
>     Domain0 Next Mode        : S-mode
>     ...
> So my first question is: shouldn't it be in H-mode?

There is no H-mode in RISC-V.

When the Hypervisor extension exists, the standard S-mode automatically
becomes HS-mode. The two names can be used interchangeably (although
the spec calls it HS-mode).

In this way Linux (with or without KVM support) and Xen all boot in the
same mode and can choose to use virtualisation if desired.

> 
> If I am right, then it looks like we have to patch OpenSBI to add
> support for H-mode, as it is not supported now:
> [1]
> https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_domain.c#L380
> [2]
> https://github.com/riscv-software-src/opensbi/blob/master/include/sbi/riscv_encoding.h#L110
> Please correct me if I am wrong.
> 
> The other option I see is to switch to H-mode in U-Boot, as I
> understand the classical boot flow is:
>     OpenSBI -> U-Boot -> Xen -> Domain{0,...}
> If that is at all possible, since U-Boot will be in S-mode after
> OpenSBI.

S-mode is where you want to be. That's what Xen should start in.

Alistair

> 
> Thanks in advance.
> 
> ~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:34:52 2023
Date: Mon, 23 Jan 2023 14:34:39 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 21/22] xen/arm64: Implement a mapcache for arm64
In-Reply-To: <20221216114853.8227-22-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231434260.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-22-julien@xen.org>

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, on arm64, map_domain_page() is implemented using
> virt_to_mfn(). Therefore it is relying on the directmap.
> 
> In a follow-up patch, we will allow the admin to remove the directmap.
> Therefore we want to implement a mapcache.
> 
> Thankfully there is already one for arm32. So select ARCH_MAP_DOMAIN_PAGE
> and add the necessary boilerplate to support 64-bit:
>     - The page-table starts at level 0, so we need to allocate the level
>       1 page-table
>     - map_domain_page() should check if the page is in the directmap. If
>       yes, then use virt_to_mfn() to limit the performance impact
>       when the directmap is still enabled (this will be selectable
>       on the command line).
> 
> Take the opportunity to replace first_table_offset(...) with offsets[...].
> 
> Note that, so far, arch_mfns_in_directmap() always returns true on
> arm64. So the mapcache is not yet used. This will change in a
> follow-up patch.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ----
> 
>     There are a few TODOs:
>         - It is becoming more critical to fix the mapcache
>           implementation (this is not compliant with the Arm Arm)
>         - Evaluate the performance
> ---
>  xen/arch/arm/Kconfig              |  1 +
>  xen/arch/arm/domain_page.c        | 47 +++++++++++++++++++++++++++----
>  xen/arch/arm/include/asm/config.h |  7 +++++
>  xen/arch/arm/include/asm/mm.h     |  5 ++++
>  xen/arch/arm/mm.c                 |  6 ++--
>  xen/arch/arm/setup.c              |  4 +++
>  6 files changed, 62 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c7f..9c58b2d5c3aa 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -9,6 +9,7 @@ config ARM_64
>  	select 64BIT
>  	select ARM_EFI
>  	select HAS_FAST_MULTIPLY
> +	select ARCH_MAP_DOMAIN_PAGE
>  
>  config ARM
>  	def_bool y
> diff --git a/xen/arch/arm/domain_page.c b/xen/arch/arm/domain_page.c
> index 4540b3c5f24c..f3547dc853ef 100644
> --- a/xen/arch/arm/domain_page.c
> +++ b/xen/arch/arm/domain_page.c
> @@ -1,4 +1,5 @@
>  /* SPDX-License-Identifier: GPL-2.0-or-later */
> +#include <xen/domain_page.h>
>  #include <xen/mm.h>
>  #include <xen/pmap.h>
>  #include <xen/vmap.h>
> @@ -8,6 +9,8 @@
>  /* Override macros from asm/page.h to make them work with mfn_t */
>  #undef virt_to_mfn
>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> +#undef mfn_to_virt
> +#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn))
>  
>  /* cpu0's domheap page tables */
>  static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
> @@ -31,13 +34,30 @@ bool init_domheap_mappings(unsigned int cpu)
>  {
>      unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
>      lpae_t *root = per_cpu(xen_pgtable, cpu);
> +    lpae_t *first;
>      unsigned int i, first_idx;
>      lpae_t *domheap;
>      mfn_t mfn;
>  
> +    /* Convenience aliases */
> +    DECLARE_OFFSETS(offsets, DOMHEAP_VIRT_START);
> +
>      ASSERT(root);
>      ASSERT(!per_cpu(xen_dommap, cpu));
>  
> +    /*
> +     * On Arm64, the root is at level 0. Therefore we need an extra step
> +     * to allocate the first level page-table.
> +     */
> +#ifdef CONFIG_ARM_64
> +    if ( create_xen_table(&root[offsets[0]]) )
> +        return false;
> +
> +    first = xen_map_table(lpae_get_mfn(root[offsets[0]]));
> +#else
> +    first = root;
> +#endif
> +
>      /*
>       * The domheap for cpu0 is initialized before the heap is initialized.
>       * So we need to use pre-allocated pages.
> @@ -58,16 +78,20 @@ bool init_domheap_mappings(unsigned int cpu)
>       * domheap mapping pages.
>       */
>      mfn = virt_to_mfn(domheap);
> -    first_idx = first_table_offset(DOMHEAP_VIRT_START);
> +    first_idx = offsets[1];
>      for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
>      {
>          lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
>          pte.pt.table = 1;
> -        write_pte(&root[first_idx + i], pte);
> +        write_pte(&first[first_idx + i], pte);
>      }
>  
>      per_cpu(xen_dommap, cpu) = domheap;
>  
> +#ifdef CONFIG_ARM_64
> +    xen_unmap_table(first);
> +#endif
> +
>      return true;
>  }
>  
> @@ -91,6 +115,10 @@ void *map_domain_page(mfn_t mfn)
>      lpae_t pte;
>      int i, slot;
>  
> +    /* Bypass the mapcache if the page is in the directmap */
> +    if ( arch_mfns_in_directmap(mfn_x(mfn), 1) )
> +        return mfn_to_virt(mfn);
> +
>      local_irq_save(flags);
>  
>      /* The map is laid out as an open-addressed hash table where each
> @@ -151,15 +179,24 @@ void *map_domain_page(mfn_t mfn)
>  }
>  
>  /* Release a mapping taken with map_domain_page() */
> -void unmap_domain_page(const void *va)
> +void unmap_domain_page(const void *ptr)
>  {
> +    unsigned long va = (unsigned long)ptr;
>      unsigned long flags;
>      lpae_t *map = this_cpu(xen_dommap);
> -    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> +    unsigned int slot;
>  
> -    if ( !va )
> +    /*
> +     * map_domain_page() may not have mapped anything if the address
> +     * is part of the directmap. So ignore anything outside of the
> +     * domheap.
> +     */
> +    if ( (va < DOMHEAP_VIRT_START) ||
> +         ((va - DOMHEAP_VIRT_START) >= DOMHEAP_VIRT_SIZE) )
>          return;
>  
> +    slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> +
>      local_irq_save(flags);
>  
>      ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 0fefed1b8aa9..12b7f1f1b9ea 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -156,6 +156,13 @@
>  #define FRAMETABLE_SIZE        GB(32)
>  #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>  
> +#define DOMHEAP_VIRT_START     SLOT0(255)
> +#define DOMHEAP_VIRT_SIZE      GB(2)
> +
> +#define DOMHEAP_ENTRIES        1024 /* 1024 2MB mapping slots */
> +/* Number of domheap pagetable pages required at the second level (2MB mappings) */
> +#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
> +
>  #define DIRECTMAP_VIRT_START   SLOT0(256)
>  #define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
>  #define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 7a2c775f9562..d73abf1bf763 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -416,6 +416,11 @@ static inline bool arch_has_directmap(void)
>      return true;
>  }
>  
> +/* Helpers to allocate, map and unmap a Xen page-table */
> +int create_xen_table(lpae_t *entry);
> +lpae_t *xen_map_table(mfn_t mfn);
> +void xen_unmap_table(const lpae_t *table);
> +
>  #endif /*  __ARCH_ARM_MM__ */
>  /*
>   * Local variables:
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 2af751af9003..f5fb957554a5 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -177,7 +177,7 @@ static void __init __maybe_unused build_assertions(void)
>  #undef CHECK_SAME_SLOT
>  }
>  
> -static lpae_t *xen_map_table(mfn_t mfn)
> +lpae_t *xen_map_table(mfn_t mfn)
>  {
>      /*
>       * During early boot, map_domain_page() may be unusable. Use the
> @@ -189,7 +189,7 @@ static lpae_t *xen_map_table(mfn_t mfn)
>      return map_domain_page(mfn);
>  }
>  
> -static void xen_unmap_table(const lpae_t *table)
> +void xen_unmap_table(const lpae_t *table)
>  {
>      /*
>       * During early boot, xen_map_table() will not use map_domain_page()
> @@ -699,7 +699,7 @@ void *ioremap(paddr_t pa, size_t len)
>      return ioremap_attr(pa, len, PAGE_HYPERVISOR_NOCACHE);
>  }
>  
> -static int create_xen_table(lpae_t *entry)
> +int create_xen_table(lpae_t *entry)
>  {
>      mfn_t mfn;
>      void *p;
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 88d9d90fb5ad..b1a8f91bb385 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -923,6 +923,10 @@ static void __init setup_mm(void)
>       */
>      populate_boot_allocator();
>  
> +    if ( !init_domheap_mappings(smp_processor_id()) )
> +        panic("CPU%u: Unable to prepare the domheap page-tables\n",
> +              smp_processor_id());
> +
>      total_pages = 0;
>  
>      for ( i = 0; i < banks->nr_banks; i++ )
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:53:05 2023
Date: Mon, 23 Jan 2023 14:52:30 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 22/22] xen/arm64: Allow the admin to enable/disable the
 directmap
In-Reply-To: <20221216114853.8227-23-julien@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231437170.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-23-julien@xen.org>

On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Implement the same command line option as x86 to enable/disable the
> directmap. By default this is kept enabled.
> 
> Also modify setup_directmap_mappings() to populate the L0 entries
> related to the directmap area.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
>     This patch is in an RFC state; we need to decide what to do for arm32.
> 
>     Also, this is moving code that was introduced in this series. So
>     this will need to be fixed in the next version (assuming Arm64 will
>     be ready).
> 
>     This was sent early as PoC to enable secret-free hypervisor
>     on Arm64.
> ---
>  docs/misc/xen-command-line.pandoc   |  2 +-
>  xen/arch/arm/include/asm/arm64/mm.h |  2 +-
>  xen/arch/arm/include/asm/mm.h       | 12 +++++----
>  xen/arch/arm/mm.c                   | 40 +++++++++++++++++++++++++++--
>  xen/arch/arm/setup.c                |  1 +
>  5 files changed, 48 insertions(+), 9 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index a63e4612acac..948035286acc 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -760,7 +760,7 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>  additionally a trace buffer of the specified size is allocated per cpu.
>  The debug trace feature is only enabled in debugging builds of Xen.
>  
> -### directmap (x86)
> +### directmap (arm64, x86)
>  > `= <boolean>`
>  
>  > Default: `true`
> diff --git a/xen/arch/arm/include/asm/arm64/mm.h b/xen/arch/arm/include/asm/arm64/mm.h
> index aa2adac63189..8b5dcb091750 100644
> --- a/xen/arch/arm/include/asm/arm64/mm.h
> +++ b/xen/arch/arm/include/asm/arm64/mm.h
> @@ -7,7 +7,7 @@
>   */
>  static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>  {
> -    return true;
> +    return opt_directmap;
>  }
>  
>  #endif /* __ARM_ARM64_MM_H__ */
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index d73abf1bf763..ef9ad3b366e3 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -9,6 +9,13 @@
>  #include <public/xen.h>
>  #include <xen/pdx.h>
>  
> +extern bool opt_directmap;
> +
> +static inline bool arch_has_directmap(void)
> +{
> +    return opt_directmap;
> +}
> +
>  #if defined(CONFIG_ARM_32)
>  # include <asm/arm32/mm.h>
>  #elif defined(CONFIG_ARM_64)
> @@ -411,11 +418,6 @@ static inline void page_set_xenheap_gfn(struct page_info *p, gfn_t gfn)
>      } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
>  }
>  
> -static inline bool arch_has_directmap(void)
> -{
> -    return true;
> -}
> -
>  /* Helpers to allocate, map and unmap a Xen page-table */
>  int create_xen_table(lpae_t *entry);
>  lpae_t *xen_map_table(mfn_t mfn);
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index f5fb957554a5..925d81c450e8 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -15,6 +15,7 @@
>  #include <xen/init.h>
>  #include <xen/libfdt/libfdt.h>
>  #include <xen/mm.h>
> +#include <xen/param.h>
>  #include <xen/pfn.h>
>  #include <xen/pmap.h>
>  #include <xen/sched.h>
> @@ -131,6 +132,12 @@ vaddr_t directmap_virt_start __read_mostly;
>  unsigned long directmap_base_pdx __read_mostly;
>  #endif
>  
> +bool __ro_after_init opt_directmap = true;
> +/* TODO: Decide what to do for arm32. */
> +#ifdef CONFIG_ARM_64
> +boolean_param("directmap", opt_directmap);
> +#endif
> +
>  unsigned long frametable_base_pdx __read_mostly;
>  unsigned long frametable_virt_end __read_mostly;
>  
> @@ -606,16 +613,27 @@ void __init setup_directmap_mappings(unsigned long base_mfn,
>      directmap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
>  }
>  #else /* CONFIG_ARM_64 */
> -/* Map the region in the directmap area. */
> +/*
> + * This either populate a valid fdirect map, or allocates empty L1 tables

fdirect/direct


> + * and creates the L0 entries for the given region in the direct map
> + * depending on arch_has_directmap().
> + *
> + * When directmap=no, we still need to populate empty L1 tables in the
> + * directmap region. The reason is that the root page-table (i.e. L0)
> + * is per-CPU and secondary CPUs will initialize their root page-table
> + * based on the pCPU0 one. So L0 entries will be shared if they are
> + * pre-populated. We also rely on the fact that L1 tables are never
> + * freed.

You are saying that in case of directmap=no we are still creating empty
L1 tables and L0 entries because secondary CPUs will need them when they
initialize their root pagetables.

But why? Secondary CPUs will not be using the directmap either? Why do
secondary CPUs need the empty L1 tables?



> + */
>  void __init setup_directmap_mappings(unsigned long base_mfn,
>                                       unsigned long nr_mfns)
>  {
> +    unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
>      int rc;
>  
>      /* First call sets the directmap physical and virtual offset. */
>      if ( mfn_eq(directmap_mfn_start, INVALID_MFN) )
>      {
> -        unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
>  
>          directmap_mfn_start = _mfn(base_mfn);
>          directmap_base_pdx = mfn_to_pdx(_mfn(base_mfn));
> @@ -636,6 +654,24 @@ void __init setup_directmap_mappings(unsigned long base_mfn,
>          panic("cannot add directmap mapping at %lx below heap start %lx\n",
>                base_mfn, mfn_x(directmap_mfn_start));
>  
> +
> +    if ( !arch_has_directmap() )
> +    {
> +        vaddr_t vaddr = (vaddr_t)__mfn_to_virt(base_mfn);
> +        unsigned int i, slot;
> +
> +        slot = first_table_offset(vaddr);
> +        nr_mfns += base_mfn - mfn_gb;
> +        for ( i = 0; i < nr_mfns; i += BIT(XEN_PT_LEVEL_ORDER(0), UL), slot++ )
> +        {
> +            lpae_t *entry = &cpu0_pgtable[slot];
> +
> +            if ( !lpae_is_valid(*entry) && !create_xen_table(entry) )
> +                panic("Unable to populate zeroeth slot %u\n", slot);
> +        }
> +        return;
> +    }
> +
>      rc = map_pages_to_xen((vaddr_t)__mfn_to_virt(base_mfn),
>                            _mfn(base_mfn), nr_mfns,
>                            PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index b1a8f91bb385..83ded03c7b1f 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -1032,6 +1032,7 @@ void __init start_xen(unsigned long boot_phys_offset,
>      cmdline_parse(cmdline);
>  
>      setup_mm();
> +    printk("Booting with directmap %s\n", arch_has_directmap() ? "on" : "off");
>  
>      vm_init();
>  
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:56:50 2023
Date: Mon, 23 Jan 2023 14:56:36 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Hongyan Xia <hongyxia@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH 17/22] x86/setup: vmap heap nodes when they are outside
 the direct map
In-Reply-To: <ea6c03f4-13ad-e312-1827-8e1c5ea1363e@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231452470.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-18-julien@xen.org> <alpine.DEB.2.22.394.2301231358440.1978264@ubuntu-linux-20-04-desktop> <ea6c03f4-13ad-e312-1827-8e1c5ea1363e@xen.org>

On Mon, 23 Jan 2023, Julien Grall wrote:
> On 23/01/2023 22:03, Stefano Stabellini wrote:
> > On Fri, 16 Dec 2022, Julien Grall wrote:
> > > From: Hongyan Xia <hongyxia@amazon.com>
> > > 
> > > When we do not have a direct map, arch_mfns_in_directmap() will always
> > > return false, thus init_node_heap() will allocate xenheap pages from an
> > > existing node for the metadata of a new node. This means that the
> > > metadata of a new node is in a different node, slowing down heap
> > > allocation.
> > > 
> > > Since we now have early vmap, vmap the metadata locally in the new node.
> > > 
> > > Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > > 
> > > ----
> > > 
> > >      Changes from Hongyan's version:
> > >          * arch_mfn_in_direct_map() was renamed to
> > >            arch_mfns_in_direct_map()
> > >          * Use vmap_contig_pages() rather than __vmap(...).
> > >          * Add missing include (xen/vmap.h) so it compiles on Arm
> > > ---
> > >   xen/common/page_alloc.c | 42 +++++++++++++++++++++++++++++++----------
> > >   1 file changed, 32 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> > > index 0c4af5a71407..581c15d74dfb 100644
> > > --- a/xen/common/page_alloc.c
> > > +++ b/xen/common/page_alloc.c
> > > @@ -136,6 +136,7 @@
> > >   #include <xen/sched.h>
> > >   #include <xen/softirq.h>
> > >   #include <xen/spinlock.h>
> > > +#include <xen/vmap.h>
> > >     #include <asm/flushtlb.h>
> > >   #include <asm/numa.h>
> > > @@ -597,22 +598,43 @@ static unsigned long init_node_heap(int node,
> > > unsigned long mfn,
> > >           needed = 0;
> > >       }
> > >       else if ( *use_tail && nr >= needed &&
> > > -              arch_mfns_in_directmap(mfn + nr - needed, needed) &&
> > >                 (!xenheap_bits ||
> > >                  !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
> > >       {
> > > -        _heap[node] = mfn_to_virt(mfn + nr - needed);
> > > -        avail[node] = mfn_to_virt(mfn + nr - 1) +
> > > -                      PAGE_SIZE - sizeof(**avail) * NR_ZONES;
> > > -    }
> > > -    else if ( nr >= needed &&
> > > -              arch_mfns_in_directmap(mfn, needed) &&
> > > +        if ( arch_mfns_in_directmap(mfn + nr - needed, needed) )
> > > +        {
> > > +            _heap[node] = mfn_to_virt(mfn + nr - needed);
> > > +            avail[node] = mfn_to_virt(mfn + nr - 1) +
> > > +                          PAGE_SIZE - sizeof(**avail) * NR_ZONES;
> > > +        }
> > > +        else
> > > +        {
> > > +            mfn_t needed_start = _mfn(mfn + nr - needed);
> > > +
> > > +            _heap[node] = vmap_contig_pages(needed_start, needed);
> > > +            BUG_ON(!_heap[node]);
> > 
> > I see a BUG_ON here but init_node_heap is not __init.
> 
> FWIW, this is not the patch introducing the first BUG_ON() in this function.
> 
> > Asking because
> > BUG_ON is only a good idea during init time. Should init_node_heap be
> > __init (not necessarily in this patch, but still)?
> AFAIK, there are two uses outside of __init:
>   1) Free the init sections
>   2) Memory hotplug
> 
> In the first case, we will likely need to panic() in case of an error. For
> the second case, I am not entirely sure.
> 
> But there would be a fair bit of plumbing and thinking (how do you deal with
> the case where part of the memory was already added?).
> 
> Anyway, I don't think I am making the function worse, so I would rather not
> open that can of worms (yet).

I am only trying to check that we are not introducing any BUG_ONs that
could be triggered at runtime. We don't have a rule that requires a
function with a BUG_ON to be __init; however, that is a simple and nice
way to check that the BUG_ON is appropriate.

In this specific case, you are right that there are already 2 BUG_ONs
in this function so you are not making things worse.

Aside from Jan's code style comment:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 22:57:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 22:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483325.749428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5kb-0003Kk-D4; Mon, 23 Jan 2023 22:57:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483325.749428; Mon, 23 Jan 2023 22:57:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5kb-0003Kd-AV; Mon, 23 Jan 2023 22:57:17 +0000
Received: by outflank-mailman (input) for mailman id 483325;
 Mon, 23 Jan 2023 22:57:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMpG=5U=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK5kZ-00038N-UO
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 22:57:16 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 46398847-9b71-11ed-b8d1-410ff93cb8f0;
 Mon, 23 Jan 2023 23:57:13 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id AE38961031;
 Mon, 23 Jan 2023 22:57:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 86106C433EF;
 Mon, 23 Jan 2023 22:57:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46398847-9b71-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674514632;
	bh=GniFwgcaAWQxgdXWlx3+kZYStenbwsfAq5vmYFxpr/Y=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=r0mxdAUYGEZY4N1NdcpWa9+MDNtNyh73VkHN4FMsqEqmGXAX2+eRhWB/UoRsGWUkh
	 RcXZLNLcufl0+p1Y6/d12nThMraiJJZ+jgyOZe8VMvcitkrAyygtEMxiQlS5h44Q73
	 eCpXWDpaozHRXtOyizos6GCee8+HP3eU7zGMVNni200L0fuxyJAhwMy+cR7lZ2tO+u
	 X2LCeYMFYbQzzfi6p0/TxzJvIt48iTkMjfsn4YTFX1aQhztjF5FhF2+vicMgNGMrO2
	 XhHJNg7jYkzznH/46Pyd1E38yXyDqqsQgCG3i5kWjfkvqbu+dc/cV1/uCLzy5dtAZr
	 hgFpbfgN54O3A==
Date: Mon, 23 Jan 2023 14:57:08 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
    Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com, 
    andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com, 
    wl@xen.org, xuwei5@hisilicon.com
Subject: Re: [XEN v3 1/3] xen/arm: Use the correct format specifier
In-Reply-To: <af94ef17-0891-4540-4238-ef842b8af249@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231456510.1978264@ubuntu-linux-20-04-desktop>
References: <20230123134451.47185-1-ayan.kumar.halder@amd.com> <20230123134451.47185-2-ayan.kumar.halder@amd.com> <alpine.DEB.2.22.394.2301231313370.1978264@ubuntu-linux-20-04-desktop> <af94ef17-0891-4540-4238-ef842b8af249@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Jan 2023, Julien Grall wrote:
> Hi Stefano,
> 
> On 23/01/2023 21:19, Stefano Stabellini wrote:
> > On Mon, 23 Jan 2023, Ayan Kumar Halder wrote:
> > > 1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
> > > while creating nodes in fdt, the address (if present in the node name)
> > > should be represented using 'PRIx64'. This is to be in conformance
> > > with the following rule present in https://elinux.org/Device_Tree_Linux
> > > 
> > > . node names
> > > "unit-address does not have leading zeros"
> > > 
> > > As 'PRIpaddr' introduces leading zeros, we cannot use it.
> > > 
> > > So, we have introduced a wrapper, i.e. domain_fdt_begin_node(), which
> > > will represent the physical address using 'PRIx64'.
> > > 
> > > 2. One should use 'PRIx64' to display 'u64' in hex format. The current
> > > use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
> > > address.
> > > 
> > > Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> > > ---
> > > 
> > > Changes from -
> > > 
> > > v1 - 1. Moved the patch earlier.
> > > 2. Moved a part of change from "[XEN v1 8/9] xen/arm: Other adaptations
> > > required to support 32bit paddr"
> > > into this patch.
> > > 
> > > v2 - 1. Use PRIx64 for appending addresses to fdt node names. This fixes
> > > the CI failure.
> > > 
> > >   xen/arch/arm/domain_build.c | 45 +++++++++++++++++--------------------
> > >   xen/arch/arm/gic-v2.c       |  6 ++---
> > >   xen/arch/arm/mm.c           |  2 +-
> > 
> > The changes to mm.c and gic-v2.c look OK and I'd ack them already. One
> > question on the changes to domain_build.c below.
> > 
> > 
> > >   3 files changed, 25 insertions(+), 28 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > index f35f4d2456..97c2395f9a 100644
> > > --- a/xen/arch/arm/domain_build.c
> > > +++ b/xen/arch/arm/domain_build.c
> > > @@ -1288,6 +1288,20 @@ static int __init fdt_property_interrupts(const
> > > struct kernel_info *kinfo,
> > >       return res;
> > >   }
> > >   +static int __init domain_fdt_begin_node(void *fdt, const char *name,
> > > +                                        uint64_t unit)
> > > +{
> > > +    /*
> > > +     * The size of the buffer to hold the longest possible string ie
> > > +     * interrupt-controller@ + a 64-bit number + \0
> > > +     */
> > > +    char buf[38];
> > > +
> > > +    /* ePAPR 3.4 */
> > > +    snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
> 
> The return wants to be checked.
> 
> > > +    return fdt_begin_node(fdt, buf);
> > > +}
> > > +
> > >   static int __init make_memory_node(const struct domain *d,
> > >                                      void *fdt,
> > >                                      int addrcells, int sizecells,
> > > @@ -1296,8 +1310,6 @@ static int __init make_memory_node(const struct
> > > domain *d,
> > >       unsigned int i;
> > >       int res, reg_size = addrcells + sizecells;
> > >       int nr_cells = 0;
> > > -    /* Placeholder for memory@ + a 64-bit number + \0 */
> > > -    char buf[24];
> > >       __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
> > >       __be32 *cells;
> > >   @@ -1314,9 +1326,7 @@ static int __init make_memory_node(const struct
> > > domain *d,
> > >         dt_dprintk("Create memory node\n");
> > >   -    /* ePAPR 3.4 */
> > > -    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
> > > -    res = fdt_begin_node(fdt, buf);
> > > +    res = domain_fdt_begin_node(fdt, "memory", mem->bank[i].start);
> > 
> > Basically this "hides" the paddr_t->uint64_t cast because it happens
> > implicitly when passing mem->bank[i].start as an argument to
> > domain_fdt_begin_node.
> > 
> > To be honest, I don't know if it is necessary. Also a normal cast would
> > be fine:
> > 
> >      snprintf(buf, sizeof(buf), "memory@%"PRIx64,
> > (uint64_t)mem->bank[i].start);
> >      res = fdt_begin_node(fdt, buf);
> The problem with the open-coded version is that you would need to explain the
> cast everywhere (I dislike unexplained ones).
> 
> I don't particularly mind the 'hidden cast', but I think we need to explain on
> top of domain_fdt_begin_node() why it is necessary.
> 
> > 
> > Julien, what do you prefer?
> 
> Definitely the function because that's what I suggested (see the rationale
> above).

OK, no worries


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 23:09:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 23:09:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483331.749439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5wI-000529-FO; Mon, 23 Jan 2023 23:09:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483331.749439; Mon, 23 Jan 2023 23:09:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5wI-000522-CN; Mon, 23 Jan 2023 23:09:22 +0000
Received: by outflank-mailman (input) for mailman id 483331;
 Mon, 23 Jan 2023 23:09:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTo8=5U=citrix.com=prvs=380e0b34c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pK5wH-00051w-M6
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 23:09:21 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5c6513b-9b72-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 00:09:18 +0100 (CET)
Received: from mail-bn8nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jan 2023 18:09:15 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH2PR03MB5288.namprd03.prod.outlook.com (2603:10b6:610:9b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 23:09:13 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 23:09:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5c6513b-9b72-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674515358;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=MS2c150mQ60+qHYVrs8WlxwdtITVkNalm9pR6FWHaBQ=;
  b=G/xyv2XavWQo1X+284oua8o34HFw0juf79K/SvBbcs/5MW2X+xEXPur8
   mFPvg7+Hlan29SW+m/hx4tuM4uz4lY5twy/lq/CK0EXfX99MG1GLv+R3R
   QWHURDEvbzkprKygcAxePZnPUTd+NXu1aXjZo8ywCH387kIEKagnyywwI
   I=;
X-IronPort-RemoteIP: 104.47.55.169
X-IronPort-MID: 93936651
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,240,1669093200"; 
   d="scan'208";a="93936651"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DHQ4BuL/CTm9DdfWN5XHInUfLsdatFeEs2Xp0mLpox6BVUIi370wTnis3eK4KR/AfkQgRm1zjxBM5XIQcY0BfyoEX5Q/Y/kxL6zvQ1oAKsI59wcZFzUGEYRrZHqzG0PA44z1t4O/7bUbu36bxEdeY+HqEtVWiZOsP434ojiOKM4XcFun/6idm9YeY2P11ycsGnA3TH1VEUGDIy/1ngZRHjx1AUUc1/VfVsx7JGisITS9r1hQEVrsKn0XUyq9F4sH7EqDXVKqYJUqEiPi43R+dS/6kJuWd0PAvRpINUP26zX/6tMar4P96N/7yuV77yIaxP6llvSCBgTA6ysCkJEfkw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MS2c150mQ60+qHYVrs8WlxwdtITVkNalm9pR6FWHaBQ=;
 b=hL4+yYPdWsOS1Rd8K7i8sUiBjwe94m5pvoetmE98JY8cx78jbB7uDjkZKaBi4Gye8AlMbifOBYES6vPl1yFyBO2n2Sc1htwrGhbF1qgy9Yge+yaI+NCgZEek1OYAqiCZ+pooGFHzpHnLUeLiDzqYix/19RFJk+XLUsVnfP64cIDHP1o9yz/KiakNqkOI5OrACTv2i6ZbCLt8psczpExP1A+e89zd+4oo2gU/FvkeV5XYy0Ws9VrGqh/n7KY2sBsSultDsdJ23wE5P606aAE6qiF9pCZ+yPlJ0yBUn9aJYpMUtcPOzS0tHQbyATDB93Jw6D/2uo6bpEPH0vyA24ZmHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MS2c150mQ60+qHYVrs8WlxwdtITVkNalm9pR6FWHaBQ=;
 b=PnezOMziIqeInhldw7pPh9oy+/szRqODlIKBGVnKB1qe4rbytjT56iUC3OMvBDgo1D+zvXm9nCW9BmPgQpo0J6ZlCsuq1uUPXtfKcTVnTGB12cBSOsaZ8ci4aRDLXa5L4xQbtqbsi51vMxL2F7IIcAJcGNPdRWH4O5MXkyPwKVk=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Bobby Eshleman <bobbyeshleman@gmail.com>, Oleksii
	<oleksii.kurochko@gmail.com>
CC: Alistair Francis <alistair.francis@wdc.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>
Subject: Re: [RISC-V] Switch to H-mode
Thread-Topic: [RISC-V] Switch to H-mode
Thread-Index: AQHZL3+1jFKdxw/fjUKAFDmhXbPnOg==
Date: Mon, 23 Jan 2023 23:09:13 +0000
Message-ID: <d5be3bcd-8835-5a2f-12b0-2b2aaa98b9b4@citrix.com>
References: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
 <Y8lABYJoQ5Qt4DAt@bullseye>
In-Reply-To: <Y8lABYJoQ5Qt4DAt@bullseye>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|CH2PR03MB5288:EE_
x-ms-office365-filtering-correlation-id: 4c800549-5e21-4881-3cea-08dafd96d7a5
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <1DB5F6E353DA9C48A7411F014E077F19@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4c800549-5e21-4881-3cea-08dafd96d7a5
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jan 2023 23:09:13.0886
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR03MB5288

On 19/01/2023 1:05 pm, Bobby Eshleman wrote:
> On Mon, Jan 23, 2023 at 06:56:19PM +0200, Oleksii wrote:
>> Hi Alistair and community,
>>
>> I am working on RISC-V support upstream for Xen based on your and Bobby
>> patches.
>>
>> Adding the RISC-V support I realized that Xen is ran in S-mode. Output
>> of OpenSBI:
>>     ...
>>     Domain0 Next Mode         : S-mode
>>     ...
>> So the first my question is shouldn't it be in H-mode?
>>
>> If I am right than it looks like we have to do a patch to OpenSBI to
>> add support of H-mode as it is not supported now:
>> [1]
>> https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_domain.c#L380
>> [2]
>> https://github.com/riscv-software-src/opensbi/blob/master/include/sbi/riscv_encoding.h#L110
>> Please correct me if I am wrong.
>>
>> The other option I see is to switch to H-mode in U-boot as I understand
>> the classical boot flow is:
>>     OpenSBI -> U-boot -> Xen -> Domain{0,...}
>> If it is at all possible since U-boot will be in S mode after OpenSBI.
>>
>> Thanks in advance.
>>
>> ~ Oleksii
>>
> Ah, what you are seeing there is that the openSBI's Next Mode excludes
> the virtualization mode (it treats HS and S synonymously) and it is only
> used for setting the mstatus MPP. The code also has next_virt for
> setting the MPV but I don't think that is exposed via the device tree
> yet. For Xen, you'd want next_mode = PRIV_S and next_virt = 0 (HS mode,
> not VS mode). The relevant setup prior to mret is here for interested
> readers:
> https://github.com/riscv-software-src/opensbi/blob/001106d19b21cd6443ae7f7f6d4d048d80e9ecac/lib/sbi/sbi_hart.c#L759
>
> As long as the next_mode and next_virt are set correctly, then Xen
> should be launching in HS mode. I do believe this should be default for
> the stock build too for Domain0, unless something has changed.

Ok, so everything ought to be doing the right thing, even if it doesn't
show up clearly in the logging.

At some point, Xen is going to need a `if ( !hs-mode ) panic();`,
because we can't operate dom0 properly if Xen is in plain S-mode.

I suggested that we try and make csr_read_safe() work, then try and read
`hstatus` to probe if the H extension is active.

Does this sound reasonable, or is there a better option?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 23:10:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 23:10:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483338.749452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5wu-0005ax-Ry; Mon, 23 Jan 2023 23:10:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483338.749452; Mon, 23 Jan 2023 23:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK5wu-0005aq-Ov; Mon, 23 Jan 2023 23:10:00 +0000
Received: by outflank-mailman (input) for mailman id 483338;
 Mon, 23 Jan 2023 23:09:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pK5wt-0005Zo-Qo
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 23:09:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK5ws-0006eN-KM; Mon, 23 Jan 2023 23:09:58 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pK5ws-0003Jp-E7; Mon, 23 Jan 2023 23:09:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=m30gMGbz86vHEQc32Ld/PoTqe7JIgGIPRBgxZGLecmU=; b=GvSjpHXojRJSWNHnVgAn/56D8a
	8w2poN6/O0Bhk6qhN/6rF0tqLE08gi6kmBJFy6SV6jhe8O7Bbuex0JzTo0Pp8pF7stEFCb6OUEQWU
	dqUzWNV4UOWDKDsh2QUEUAuYIfhm1Vhf+oLzpQWz5cKFhsuzctkPp3yopDaVdtz/jNBk=;
Message-ID: <92c4daa2-d841-3109-c1ec-4bdb088d6670@xen.org>
Date: Mon, 23 Jan 2023 23:09:56 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 22/22] xen/arm64: Allow the admin to enable/disable the
 directmap
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-23-julien@xen.org>
 <alpine.DEB.2.22.394.2301231437170.1978264@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301231437170.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 23/01/2023 22:52, Stefano Stabellini wrote:
> On Fri, 16 Dec 2022, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Implement the same command line option as x86 to enable/disable the
>> directmap. By default this is kept enabled.
>>
>> Also modify setup_directmap_mappings() to populate the L0 entries
>> related to the directmap area.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ----
>>      This patch is in an RFC state: we need to decide what to do for arm32.
>>
>>      Also, this is moving code that was introduced in this series, so
>>      this will need to be fixed in the next version (assuming Arm64 will
>>      be ready).
>>
>>      This was sent early as a PoC to enable a secret-free hypervisor
>>      on Arm64.
>> ---
>>   docs/misc/xen-command-line.pandoc   |  2 +-
>>   xen/arch/arm/include/asm/arm64/mm.h |  2 +-
>>   xen/arch/arm/include/asm/mm.h       | 12 +++++----
>>   xen/arch/arm/mm.c                   | 40 +++++++++++++++++++++++++++--
>>   xen/arch/arm/setup.c                |  1 +
>>   5 files changed, 48 insertions(+), 9 deletions(-)
>>
>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>> index a63e4612acac..948035286acc 100644
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -760,7 +760,7 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>>   additionally a trace buffer of the specified size is allocated per cpu.
>>   The debug trace feature is only enabled in debugging builds of Xen.
>>   
>> -### directmap (x86)
>> +### directmap (arm64, x86)
>>   > `= <boolean>`
>>   
>>   > Default: `true`
>> diff --git a/xen/arch/arm/include/asm/arm64/mm.h b/xen/arch/arm/include/asm/arm64/mm.h
>> index aa2adac63189..8b5dcb091750 100644
>> --- a/xen/arch/arm/include/asm/arm64/mm.h
>> +++ b/xen/arch/arm/include/asm/arm64/mm.h
>> @@ -7,7 +7,7 @@
>>    */
>>   static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>>   {
>> -    return true;
>> +    return opt_directmap;
>>   }
>>   
>>   #endif /* __ARM_ARM64_MM_H__ */
>> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
>> index d73abf1bf763..ef9ad3b366e3 100644
>> --- a/xen/arch/arm/include/asm/mm.h
>> +++ b/xen/arch/arm/include/asm/mm.h
>> @@ -9,6 +9,13 @@
>>   #include <public/xen.h>
>>   #include <xen/pdx.h>
>>   
>> +extern bool opt_directmap;
>> +
>> +static inline bool arch_has_directmap(void)
>> +{
>> +    return opt_directmap;
>> +}
>> +
>>   #if defined(CONFIG_ARM_32)
>>   # include <asm/arm32/mm.h>
>>   #elif defined(CONFIG_ARM_64)
>> @@ -411,11 +418,6 @@ static inline void page_set_xenheap_gfn(struct page_info *p, gfn_t gfn)
>>       } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
>>   }
>>   
>> -static inline bool arch_has_directmap(void)
>> -{
>> -    return true;
>> -}
>> -
>>   /* Helpers to allocate, map and unmap a Xen page-table */
>>   int create_xen_table(lpae_t *entry);
>>   lpae_t *xen_map_table(mfn_t mfn);
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index f5fb957554a5..925d81c450e8 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -15,6 +15,7 @@
>>   #include <xen/init.h>
>>   #include <xen/libfdt/libfdt.h>
>>   #include <xen/mm.h>
>> +#include <xen/param.h>
>>   #include <xen/pfn.h>
>>   #include <xen/pmap.h>
>>   #include <xen/sched.h>
>> @@ -131,6 +132,12 @@ vaddr_t directmap_virt_start __read_mostly;
>>   unsigned long directmap_base_pdx __read_mostly;
>>   #endif
>>   
>> +bool __ro_after_init opt_directmap = true;
>> +/* TODO: Decide what to do for arm32. */
>> +#ifdef CONFIG_ARM_64
>> +boolean_param("directmap", opt_directmap);
>> +#endif
>> +
>>   unsigned long frametable_base_pdx __read_mostly;
>>   unsigned long frametable_virt_end __read_mostly;
>>   
>> @@ -606,16 +613,27 @@ void __init setup_directmap_mappings(unsigned long base_mfn,
>>       directmap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
>>   }
>>   #else /* CONFIG_ARM_64 */
>> -/* Map the region in the directmap area. */
>> +/*
>> + * This either populate a valid fdirect map, or allocates empty L1 tables
> 
> fdirect/direct
> 
> 
>> + * and creates the L0 entries for the given region in the direct map
>> + * depending on arch_has_directmap().
>> + *
>> + * When directmap=no, we still need to populate empty L1 tables in the
>> + * directmap region. The reason is that the root page-table (i.e. L0)
>> + * is per-CPU and secondary CPUs will initialize their root page-table
>> + * based on the pCPU0 one. So L0 entries will be shared if they are
>> + * pre-populated. We also rely on the fact that L1 tables are never
>> + * freed.
> 
> You are saying that in case of directmap=no we are still creating empty
> L1 tables and L0 entries because secondary CPUs will need them when they
> initialize their root pagetables.
> 
> But why? Secondary CPUs will not be using the directmap either? Why do
> secondary CPUs need the empty L1 tables?

 From the cover letter,

"
The subject is probably a misnomer. The directmap is still present but
the RAM is not mapped by default. Instead, the region will still be used
to map pages allocated via alloc_xenheap_pages().

The advantage is that the solution is simple (so IMHO good enough to be
merged as a tech preview). The disadvantage is that the page allocator
does not try to keep all the xenheap pages together, so we may end up
with increased page-table usage.

In the longer term, we should consider removing the directmap
completely and switching to vmap(). The main problem with this approach
is that mfn_to_virt() is used frequently in the code, so we would need
to cache the mapping (maybe in struct page_info).
"

I can add a summary in the commit message.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 23:21:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 23:21:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483345.749462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK68E-000831-To; Mon, 23 Jan 2023 23:21:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483345.749462; Mon, 23 Jan 2023 23:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK68E-00082u-Qg; Mon, 23 Jan 2023 23:21:42 +0000
Received: by outflank-mailman (input) for mailman id 483345;
 Mon, 23 Jan 2023 23:21:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTo8=5U=citrix.com=prvs=380e0b34c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pK68C-00082o-V3
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 23:21:41 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aece4d75-9b74-11ed-91b6-6bf2151ebd3b;
 Tue, 24 Jan 2023 00:21:38 +0100 (CET)
Received: from mail-mw2nam12lp2042.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 Jan 2023 18:21:35 -0500
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6876.namprd03.prod.outlook.com (2603:10b6:a03:43b::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Mon, 23 Jan
 2023 23:21:32 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::c679:226f:52fa:4c19%6]) with mapi id 15.20.6002.033; Mon, 23 Jan 2023
 23:21:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aece4d75-9b74-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674516098;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=9VAQ+PgfRLyqSM3sIrJHXqHBo+yHSLvVHai9VtXAOVc=;
  b=axQ9tY2VgrLq3OvPJMu2ufAWvHSa535xVmb7HQsDifWXekwLeMsGfLkf
   glgbTyyXdNnfFyd0oGefW6Ua+6C6fA+nIGOFCxMns8xqFIfzIgILOjlzK
   kn0rY1LtzXn10masE40ZtLVS/Hc6sfU9jNxmHRxdGgGEK+BMG2BEb7U0x
   4=;
X-IronPort-RemoteIP: 104.47.66.42
X-IronPort-MID: 93868080
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,240,1669093200"; 
   d="scan'208";a="93868080"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fa5W9EOm8znadrPFHvvqhCeE4XdvNHGlTFNcAQ48Wgl1YRcZEJ3XtVltRncVxxiiaitOP31XHbfX7fbHgP8GT6vn7upI9Qjey7RMJp2oj78yfv+4/vhKYdYUbbz8BO+TJAj7NpQSExnY/y22zpaggieX9nvPCnTjHd2Vw4yho3NePS2Bi0AxHvmBoQgd9016xd6KSH68OsG2sKXW2XLH4pBoKmF0OngP10XCiAul4VwrR21hUjNL8OX6SgjNP1pQRgiJebAYlDLejUGrd1ki1kaP1+GF0qG9cbKGgPJCObkTCaaNWdEPGenmf3ucRTM/+vJOijIjHzxgmrJP3oE33w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9VAQ+PgfRLyqSM3sIrJHXqHBo+yHSLvVHai9VtXAOVc=;
 b=n0VUAHy8c8kwHB9+v9URprFRjBWK/vsehDLaiTj0zMQsOnQYwFWA7PE5D1GxXo8nY2H0DIf2/5zEixRZXziuuPT74XR5dbvPrxW9JlQNWRX/yH7XKo7RRjVNi1MlO0wKBdgkeMsZwhYVlTmFo8uLlO+2YwSa0z/jBSnSthSRJXx5DGKSMYvXotDRmL/MEbboJGnGbHgGc9q5R14fJdnBPLMePoG7jmw1eAE0SQd1lwqBvCry1FLIPL94ZE4L44UjvSfOCInCOhoM6h2+QHHpUpWCM+7q3R02O8zBK4J/365D4PdtYdV2Rb16YiJEyHQsYeX3aIUri3ZFzw77Q7hAOA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9VAQ+PgfRLyqSM3sIrJHXqHBo+yHSLvVHai9VtXAOVc=;
 b=sbiUf3LtAq2/nlz9Kfm8jZcp6tQ24RI4HsjYPqtxM9FOWjFIs0/E9XH4or+S7qBoG8ulKrCpO9XJPcYJzjHl9CqrsN+j2td/TO/R0f5Dc/rttKwSexdhEZXW6W8JADt7Z881lnye5FjtreKz0L3Oi5rCFrMSluEIgdiCF+1AW8A=
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	<gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v1 11/14] xen/riscv: introduce setup_trap_handler()
Thread-Topic: [PATCH v1 11/14] xen/riscv: introduce setup_trap_handler()
Thread-Index: AQHZLOAf2Nb+ieXL2kOG2Uab6ou5Zq6sqRgA
Date: Mon, 23 Jan 2023 23:21:30 +0000
Message-ID: <649d27c9-d8fb-e6d6-5b4a-a28ed4773f1b@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <b8d03f33aea498bb5fde4ccdc16f023bbe208e7f.1674226563.git.oleksii.kurochko@gmail.com>
In-Reply-To:
 <b8d03f33aea498bb5fde4ccdc16f023bbe208e7f.1674226563.git.oleksii.kurochko@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6876:EE_
x-ms-office365-filtering-correlation-id: 693343b8-ca98-4ab9-7560-08dafd988f7c
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <33AD6D6D84F6D94E8710024B9A2E812C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 693343b8-ca98-4ab9-7560-08dafd988f7c
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Jan 2023 23:21:31.0344
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ish8hN88PD0GyYUqyTIZW28n7SfQYNtaL7YxGGHTbmJHyQyIe3topu2az4lPf9FfWgt09TtDUxgd5cXCte3m1VdG+eVOJKF1H6oYUa9fzE8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6876

On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
>  xen/arch/riscv/setup.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index d09ffe1454..174e134c93 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -1,16 +1,27 @@
>  #include <xen/compile.h>
>  #include <xen/init.h>
>  
> +#include <asm/csr.h>
>  #include <asm/early_printk.h>
> +#include <asm/traps.h>
>  
>  /* Xen stack for bringing up the first CPU. */
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
>      __aligned(STACK_SIZE);
>  
> +static void setup_trap_handler(void)

We'd normally call this trap_init(), but it wants to live in traps.c
rather than setup.c, with a prototype in traps.h.

> +{
> +    unsigned long addr = (unsigned long)&handle_exception;

Coding style.  Newline between variable declarations and code.

Having looked at the spec, the entrypoint should be named handle_trap
rather than handle_exception.  Per the spec, a trap is either an
interrupt or an exception based on the IRQ flag in CAUSE.

That adjustment to naming wants to percolate down through the calltree
and also in earlier patches.

To avoid the __handle_exception() function in C, you could call the C
version do_trap() which is a reasonably common idiom in other
architectures.

> +    csr_write(CSR_STVEC, addr);
> +}
> +
>  void __init noreturn start_xen(void)
>  {
>      early_printk("Hello from C env\n");
>  
> +    setup_trap_handler();
> +    early_printk("exception handler has been setup\n");

Personally I don't think this printk() adds any value.  It's not one
looked at by the smoke test, and it's only a single CSRW away from the
"Hello from C" message.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 23:22:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 23:22:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483349.749472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK68v-00006D-5i; Mon, 23 Jan 2023 23:22:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483349.749472; Mon, 23 Jan 2023 23:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK68v-000066-31; Mon, 23 Jan 2023 23:22:25 +0000
Received: by outflank-mailman (input) for mailman id 483349;
 Mon, 23 Jan 2023 23:22:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK68t-00005r-Rn; Mon, 23 Jan 2023 23:22:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK68t-00072N-PB; Mon, 23 Jan 2023 23:22:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pK68t-0005UT-8q; Mon, 23 Jan 2023 23:22:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pK68t-0005A1-8M; Mon, 23 Jan 2023 23:22:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dter0dnEPdO+vwd3bMiBLxu08C3t44U3nInevzmAvFs=; b=jvjXlzXPADT9j8znZsse8Ocx4H
	lEostSVRNpGNvLUgJMAzvF3DId5HqSkadNN1HyUbLyV39DFGGa6WpS0vhx4p7+ezDaPgOIfwy/vJp
	mMxcKGPZCB9IDeIyOSOPRWmGsyPntIuxVodm+0zw4xuiZgRIdrmSNzVxvlgbRDI1sjeY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176069-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 176069: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=00b1faea41d283e931256aa78aa975a369ec3ae6
X-Osstest-Versions-That:
    qemuu=65cc5ccf06a74c98de73ec683d9a543baa302a12
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Jan 2023 23:22:23 +0000

flight 176069 qemu-mainline real [real]
flight 176078 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176069/
http://logs.test-lab.xenproject.org/osstest/logs/176078/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 176022

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install         fail pass in 176078-retest
 test-arm64-arm64-xl-credit2   8 xen-boot            fail pass in 176078-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 176078 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 176078 never pass
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 176078 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 176022
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 176022
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 176022
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 176022
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 176022
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 176022
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 176022
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 176022
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                00b1faea41d283e931256aa78aa975a369ec3ae6
baseline version:
 qemuu                65cc5ccf06a74c98de73ec683d9a543baa302a12

Last test of basis   176022  2023-01-21 16:08:39 Z    2 days
Testing same since   176069  2023-01-23 16:08:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Corey Minyard <cminyard@mvista.com>
  David Reiss <dreiss@meta.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 486 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 23 23:36:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Jan 2023 23:36:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483361.749485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK6MQ-0001uy-HN; Mon, 23 Jan 2023 23:36:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483361.749485; Mon, 23 Jan 2023 23:36:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK6MQ-0001ur-Ed; Mon, 23 Jan 2023 23:36:22 +0000
Received: by outflank-mailman (input) for mailman id 483361;
 Mon, 23 Jan 2023 23:36:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4SGE=5U=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pK6MO-0001ul-DV
 for xen-devel@lists.xenproject.org; Mon, 23 Jan 2023 23:36:20 +0000
Received: from mail-vs1-xe2d.google.com (mail-vs1-xe2d.google.com
 [2607:f8b0:4864:20::e2d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc61ce0e-9b76-11ed-91b6-6bf2151ebd3b;
 Tue, 24 Jan 2023 00:36:19 +0100 (CET)
Received: by mail-vs1-xe2d.google.com with SMTP id j185so14754704vsc.13
 for <xen-devel@lists.xenproject.org>; Mon, 23 Jan 2023 15:36:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc61ce0e-9b76-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=FOIXWMjA1OKXKH+4ug2Z84+dNixIPvYJggXC+9ot1Uc=;
        b=gaY3CdBeOrCJ4qnmbVHLkdXcUAmqxaoK3uZBgPASGGQkeh3iNjKmjkLtESBaKllme4
         XpzUbT8Q5Ean6KLtWZQCUXJrCwVQn12c7aqebzGB9LPiwxJzFAdLB66pulArCaONnNrz
         1oSVJa7igMsEzaPSIlc3ulKFDsagL2O8szpDclp1hQ1mhUkHrAhjKSqQWUb1GrIQ38Bf
         iiuRPVvV3FhXGaIp892hPajZBpaPMerzhLRPDqTY3NyyBN1se/Php56CuedShnRtppk2
         0mh0WGbx6ZaV5BZ/3n3Rw5dL3/9L9L3vOHUlebforbkQaVih3bC9F60fBD1YxtJ7bEpB
         Z6hw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=FOIXWMjA1OKXKH+4ug2Z84+dNixIPvYJggXC+9ot1Uc=;
        b=4s3IyF86MdBDyB4LfanpTYJoOTkk8nKNtkyqMCftQIUx/zpUio4WElIvdqBYY/tepL
         Whidi8qkIDXLO4GwBV7hquY95Pqe3X/pUDbJk4TiwW9MUvYkJwpbL2G3M4rmd8sDjRqO
         tUm9LRV8tqXgR84w91wpJFA+fKP8Lrukcbj8Bn7zzlSdQgFcY8XLrUJpgsR28TTg2UwS
         6C7iBkgzxd+E3okX+Rx7/OkliyY+fD3CRFrkbcUXaZO8VL9lKsDUECOcLlf7osfzy8z9
         sMlHVt7u2vj5r848M41NWfwBgDp/HOA60QRfjIPKKTenbZh5TFSVAdDoWaP0CY/vx/wC
         Knjg==
X-Gm-Message-State: AFqh2kpmQ7p/i4k2lfZTrfVz77/372T9JjzCeBfENF5Gd04Db0/LhZJg
	nRmBc7OgZHFmu8YwtO0vyALyZR1OLBFxC2gAawChQkeYwpc=
X-Google-Smtp-Source: AMrXdXuniWJFV1aRIBePsnoExkHbqdfyms8YeQxUmJpUH+2Ys3a7Ba2ZL7/0XujDLFCEOw1lEmX+fGaclNinI5+AaDI=
X-Received: by 2002:a05:6102:cd4:b0:3d0:c2e9:cb77 with SMTP id
 g20-20020a0561020cd400b003d0c2e9cb77mr3379230vst.54.1674516978218; Mon, 23
 Jan 2023 15:36:18 -0800 (PST)
MIME-Version: 1.0
References: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
 <Y8lABYJoQ5Qt4DAt@bullseye> <d5be3bcd-8835-5a2f-12b0-2b2aaa98b9b4@citrix.com>
In-Reply-To: <d5be3bcd-8835-5a2f-12b0-2b2aaa98b9b4@citrix.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 24 Jan 2023 09:35:52 +1000
Message-ID: <CAKmqyKMSj5BsGs7RtsB2TV6eL=LaAMHc=3nF0+c0kY8_m_RYxg@mail.gmail.com>
Subject: Re: [RISC-V] Switch to H-mode
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>, Oleksii <oleksii.kurochko@gmail.com>, 
	Alistair Francis <alistair.francis@wdc.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 24, 2023 at 9:09 AM Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>
> On 19/01/2023 1:05 pm, Bobby Eshleman wrote:
> > On Mon, Jan 23, 2023 at 06:56:19PM +0200, Oleksii wrote:
> >> Hi Alistair and community,
> >>
> >> I am working on RISC-V support upstream for Xen based on your and Bobby
> >> patches.
> >>
> >> While adding RISC-V support I realized that Xen is run in S-mode. Output
> >> of OpenSBI:
> >>     ...
> >>     Domain0 Next Mode         : S-mode
> >>     ...
> >> So my first question is: shouldn't it be in H-mode?
> >>
> >> If I am right, then it looks like we need to patch OpenSBI to
> >> add H-mode support, as it is not supported now:
> >> [1]
> >> https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_domain.c#L380
> >> [2]
> >> https://github.com/riscv-software-src/opensbi/blob/master/include/sbi/riscv_encoding.h#L110
> >> Please correct me if I am wrong.
> >>
> >> The other option I see is to switch to H-mode in U-boot as I understand
> >> the classical boot flow is:
> >>     OpenSBI -> U-boot -> Xen -> Domain{0,...}
> >> That is, if it is at all possible, since U-boot will be in S-mode after OpenSBI.
> >>
> >> Thanks in advance.
> >>
> >> ~ Oleksii
> >>
> > Ah, what you are seeing there is that OpenSBI's Next Mode excludes
> > the virtualization mode (it treats HS and S synonymously) and is only
> > used for setting the mstatus MPP. The code also has next_virt for
> > setting the MPV but I don't think that is exposed via the device tree
> > yet. For Xen, you'd want next_mode = PRIV_S and next_virt = 0 (HS mode,
> > not VS mode). The relevant setup prior to mret is here for interested
> > readers:
> > https://github.com/riscv-software-src/opensbi/blob/001106d19b21cd6443ae7f7f6d4d048d80e9ecac/lib/sbi/sbi_hart.c#L759
> >
> > As long as next_mode and next_virt are set correctly, Xen should be
> > launching in HS mode. I do believe this should be the default for the
> > stock build for Domain0 too, unless something has changed.
>
> Ok, so everything ought to be doing the right thing, even if it doesn't
> show up clearly in the logging.
>
> At some point, Xen is going to need a `if ( !hs-mode ) panic();`,
> because we can't operate dom0 properly if Xen is in plain S-mode.

There are going to be two cases where Xen won't be able to continue:
if it's booting on hardware that doesn't have the Hypervisor extension,
or if it's booting in VS-mode (inside a guest).

In theory Xen could run as a nested Hypervisor in VS-mode, but let's
worry about that later.

>
> I suggested that we try to make csr_read_safe() work, then try to read
> `hstatus` to probe if the H extension is active.

I think that makes sense. We don't need to probe for hstatus until
late in the boot though. We should get the console up first so we can
print a useful message. Xen won't need to touch the h* CSRs until it's
about to start a guest.

Alistair

>
> Does this sound reasonable, or is there a better option?
>
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 00:13:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 00:13:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483366.749495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK6vj-0006sy-Tq; Tue, 24 Jan 2023 00:12:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483366.749495; Tue, 24 Jan 2023 00:12:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pK6vj-0006sr-Qr; Tue, 24 Jan 2023 00:12:51 +0000
Received: by outflank-mailman (input) for mailman id 483366;
 Tue, 24 Jan 2023 00:12:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wk+X=5V=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pK6vi-0006sl-98
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 00:12:50 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d5138a01-9b7b-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 01:12:47 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 3E705B80EF1;
 Tue, 24 Jan 2023 00:12:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9D047C433D2;
 Tue, 24 Jan 2023 00:12:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5138a01-9b7b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674519165;
	bh=+zVOav+xfHiiyz9QiF0DoSvUa/Eu3zqbeOq7fVYdDXY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=AcWNaqO8OUbXno/TKsAHr49S8QhV8f2ZXXC/S42/B5nAb4HCZJPK6/ijyjfBqzy8b
	 fvb9/vjhFsa0KE5T762qjdb2PILHr94//t8ms/h08G6bUVifVLpV2s8ggCx5KXmBVg
	 wXBo1pthLQZn6wAe4BNVBPHik109jnLD81kG4/AERN1+/3C8Q5eeYHnwv7Ogs8KBL4
	 xnnSTacY1IOsZXEZCtP/4ilL+HUF/qOKKt8UoUlaUGAmE+gqpny2jsEplSwHR2CYHQ
	 aZI1SjGCJ3I7BlyEwFSr2yjF7yJrpujNfkAa6PSac18CCMg6I5qh0ttCVDCpG/gwff
	 VdSMc7lEDlOMA==
Date: Mon, 23 Jan 2023 16:12:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Wei Liu <wl@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 22/22] xen/arm64: Allow the admin to enable/disable the
 directmap
In-Reply-To: <92c4daa2-d841-3109-c1ec-4bdb088d6670@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301231605291.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-23-julien@xen.org> <alpine.DEB.2.22.394.2301231437170.1978264@ubuntu-linux-20-04-desktop> <92c4daa2-d841-3109-c1ec-4bdb088d6670@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Jan 2023, Julien Grall wrote:
> Hi Stefano,
> 
> On 23/01/2023 22:52, Stefano Stabellini wrote:
> > On Fri, 16 Dec 2022, Julien Grall wrote:
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > Implement the same command line option as x86 to enable/disable the
> > > directmap. By default this is kept enabled.
> > > 
> > > Also modify setup_directmap_mappings() to populate the L0 entries
> > > related to the directmap area.
> > > 
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > > 
> > > ----
> > >      This patch is in an RFC state we need to decide what to do for arm32.
> > > 
> > >      Also, this is moving code that was introduced in this series. So
> > >      this will need to be fix in the next version (assuming Arm64 will
> > >      be ready).
> > > 
> > >      This was sent early as PoC to enable secret-free hypervisor
> > >      on Arm64.
> > > ---
> > >   docs/misc/xen-command-line.pandoc   |  2 +-
> > >   xen/arch/arm/include/asm/arm64/mm.h |  2 +-
> > >   xen/arch/arm/include/asm/mm.h       | 12 +++++----
> > >   xen/arch/arm/mm.c                   | 40 +++++++++++++++++++++++++++--
> > >   xen/arch/arm/setup.c                |  1 +
> > >   5 files changed, 48 insertions(+), 9 deletions(-)
> > > 
> > > diff --git a/docs/misc/xen-command-line.pandoc
> > > b/docs/misc/xen-command-line.pandoc
> > > index a63e4612acac..948035286acc 100644
> > > --- a/docs/misc/xen-command-line.pandoc
> > > +++ b/docs/misc/xen-command-line.pandoc
> > > @@ -760,7 +760,7 @@ Specify the size of the console debug trace buffer. By
> > > specifying `cpu:`
> > >   additionally a trace buffer of the specified size is allocated per cpu.
> > >   The debug trace feature is only enabled in debugging builds of Xen.
> > >   -### directmap (x86)
> > > +### directmap (arm64, x86)
> > >   > `= <boolean>`
> > >     > Default: `true`
> > > diff --git a/xen/arch/arm/include/asm/arm64/mm.h
> > > b/xen/arch/arm/include/asm/arm64/mm.h
> > > index aa2adac63189..8b5dcb091750 100644
> > > --- a/xen/arch/arm/include/asm/arm64/mm.h
> > > +++ b/xen/arch/arm/include/asm/arm64/mm.h
> > > @@ -7,7 +7,7 @@
> > >    */
> > >   static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned
> > > long nr)
> > >   {
> > > -    return true;
> > > +    return opt_directmap;
> > >   }
> > >     #endif /* __ARM_ARM64_MM_H__ */
> > > diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> > > index d73abf1bf763..ef9ad3b366e3 100644
> > > --- a/xen/arch/arm/include/asm/mm.h
> > > +++ b/xen/arch/arm/include/asm/mm.h
> > > @@ -9,6 +9,13 @@
> > >   #include <public/xen.h>
> > >   #include <xen/pdx.h>
> > >   +extern bool opt_directmap;
> > > +
> > > +static inline bool arch_has_directmap(void)
> > > +{
> > > +    return opt_directmap;
> > > +}
> > > +
> > >   #if defined(CONFIG_ARM_32)
> > >   # include <asm/arm32/mm.h>
> > >   #elif defined(CONFIG_ARM_64)
> > > @@ -411,11 +418,6 @@ static inline void page_set_xenheap_gfn(struct
> > > page_info *p, gfn_t gfn)
> > >       } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
> > >   }
> > >   -static inline bool arch_has_directmap(void)
> > > -{
> > > -    return true;
> > > -}
> > > -
> > >   /* Helpers to allocate, map and unmap a Xen page-table */
> > >   int create_xen_table(lpae_t *entry);
> > >   lpae_t *xen_map_table(mfn_t mfn);
> > > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > > index f5fb957554a5..925d81c450e8 100644
> > > --- a/xen/arch/arm/mm.c
> > > +++ b/xen/arch/arm/mm.c
> > > @@ -15,6 +15,7 @@
> > >   #include <xen/init.h>
> > >   #include <xen/libfdt/libfdt.h>
> > >   #include <xen/mm.h>
> > > +#include <xen/param.h>
> > >   #include <xen/pfn.h>
> > >   #include <xen/pmap.h>
> > >   #include <xen/sched.h>
> > > @@ -131,6 +132,12 @@ vaddr_t directmap_virt_start __read_mostly;
> > >   unsigned long directmap_base_pdx __read_mostly;
> > >   #endif
> > >   +bool __ro_after_init opt_directmap = true;
> > > +/* TODO: Decide what to do for arm32. */
> > > +#ifdef CONFIG_ARM_64
> > > +boolean_param("directmap", opt_directmap);
> > > +#endif
> > > +
> > >   unsigned long frametable_base_pdx __read_mostly;
> > >   unsigned long frametable_virt_end __read_mostly;
> > >   @@ -606,16 +613,27 @@ void __init setup_directmap_mappings(unsigned long
> > > base_mfn,
> > >       directmap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
> > >   }
> > >   #else /* CONFIG_ARM_64 */
> > > -/* Map the region in the directmap area. */
> > > +/*
> > > + * This either populate a valid fdirect map, or allocates empty L1 tables
> > 
> > fdirect/direct
> > 
> > 
> > > + * and creates the L0 entries for the given region in the direct map
> > > + * depending on arch_has_directmap().
> > > + *
> > > + * When directmap=no, we still need to populate empty L1 tables in the
> > > + * directmap region. The reason is that the root page-table (i.e. L0)
> > > + * is per-CPU and secondary CPUs will initialize their root page-table
> > > + * based on the pCPU0 one. So L0 entries will be shared if they are
> > > + * pre-populated. We also rely on the fact that L1 tables are never
> > > + * freed.
> > 
> > You are saying that in case of directmap=no we are still creating empty
> > L1 tables and L0 entries because secondary CPUs will need them when they
> > initialize their root pagetables.
> > 
> > But why? Secondary CPUs will not be using the directmap either? Why do
> > seconday CPUs need the empty L1 tables?
> 
> From the cover letter,
> 
> "
> The subject is probably a misnomer. The directmap is still present, but
> the RAM is not mapped by default. Instead, the region will still be used
> to map pages allocated via alloc_xenheap_pages().
> 
> The advantage is that the solution is simple (so IMHO good enough to be
> merged as a tech preview). The disadvantage is that the page allocator
> does not try to keep all the xenheap pages together, so we may end up
> with increased page-table usage.
> 
> In the longer term, we should consider removing the direct map
> completely and switching to vmap(). The main problem with this approach
> is that mfn_to_virt() is used frequently in the code, so we would need
> to cache the mapping (maybe in struct page_info).
> "

Ah yes! I see now that we are relying on the same area
(alloc_xenheap_pages() calls page_to_virt() and then map_pages_to_xen()).

map_pages_to_xen() is able to create pagetable entries at every level,
but we need to make sure they are shared across per-cpu pagetables. To
make that happen, we are pre-creating the L0/L1 entries here so that
they become common across all per-cpu pagetables and then we let
map_pages_to_xen() do its job.

Did I understand it right?


> I can add summary in the commit message.

I would suggest improving the in-code comment on top of
setup_directmap_mappings() a bit. I might also be able to help with the
text once I am sure I have understood what is going on :-)


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 01:07:09 2023
Date: Thu, 19 Jan 2023 16:10:15 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Oleksii <oleksii.kurochko@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>
Subject: Re: [RISC-V] Switch to H-mode
Message-ID: <Y8lrZ18B8gpAyXTw@bullseye>
References: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
 <Y8lABYJoQ5Qt4DAt@bullseye>
 <d5be3bcd-8835-5a2f-12b0-2b2aaa98b9b4@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d5be3bcd-8835-5a2f-12b0-2b2aaa98b9b4@citrix.com>

On Mon, Jan 23, 2023 at 11:09:13PM +0000, Andrew Cooper wrote:
> On 19/01/2023 1:05 pm, Bobby Eshleman wrote:
> > On Mon, Jan 23, 2023 at 06:56:19PM +0200, Oleksii wrote:
> >> Hi Alistair and community,
> >>
> >> I am working on upstream RISC-V support for Xen, based on your and
> >> Bobby's patches.
> >>
> >> While adding RISC-V support, I realized that Xen is run in S-mode.
> >> Output of OpenSBI:
> >>     ...
> >>     Domain0 Next Mode         : S-mode
> >>     ...
> >> So my first question is: shouldn't it be in H-mode?
> >>
> >> If I am right, then it looks like we have to patch OpenSBI to add
> >> support for H-mode, as it is not supported now:
> >> [1]
> >> https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_domain.c#L380
> >> [2]
> >> https://github.com/riscv-software-src/opensbi/blob/master/include/sbi/riscv_encoding.h#L110
> >> Please correct me if I am wrong.
> >>
> >> The other option I see is to switch to H-mode in U-boot, as I
> >> understand the classical boot flow is:
> >>     OpenSBI -> U-boot -> Xen -> Domain{0,...}
> >> That is, if it is possible at all, since U-boot will be in S-mode
> >> after OpenSBI.
> >>
> >> Thanks in advance.
> >>
> >> ~ Oleksii
> >>
> > Ah, what you are seeing there is that OpenSBI's Next Mode excludes
> > the virtualization mode (it treats HS and S synonymously) and it is only
> > used for setting the mstatus MPP. The code also has next_virt for
> > setting the MPV but I don't think that is exposed via the device tree
> > yet. For Xen, you'd want next_mode = PRIV_S and next_virt = 0 (HS mode,
> > not VS mode). The relevant setup prior to mret is here for interested
> > readers:
> > https://github.com/riscv-software-src/opensbi/blob/001106d19b21cd6443ae7f7f6d4d048d80e9ecac/lib/sbi/sbi_hart.c#L759
> >
> > As long as next_mode and next_virt are set correctly, Xen should be
> > launching in HS mode. I believe this is also the default for the
> > stock build for Domain0, unless something has changed.
> 
> Ok, so everything ought to be doing the right thing, even if it doesn't
> show up clearly in the logging.
> 

Right.

> At some point, Xen is going to need a `if ( !hs-mode ) panic();`,
> because we can't operate dom0 properly if Xen is in plain S-mode.
> 
> I suggested that we try and make csr_read_safe() work, then try and read
> `hstatus` to probe if the H extension is active.
> 
> Does this sound reasonable, or is there a better option?
> 
> ~Andrew

That sounds reasonable to me.

The alternative is parsing the isa string from the dtb, which also seems
fine, but I'm not sure if it is better per se.

Best,
Bobby


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 03:00:40 2023
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Dongli Zhang <dongli.zhang@oracle.com>
Subject: [PATCH v2 1/2] libxl: Fix guest kexec - skip cpuid policy
Date: Mon, 23 Jan 2023 21:59:38 -0500
Message-Id: <20230124025939.6480-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230124025939.6480-1-jandryuk@gmail.com>
References: <20230124025939.6480-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a domain performs a kexec (soft reset), libxl__build_pre() is
called with the existing domid.  Calling libxl__cpuid_legacy() on the
existing domain fails since the cpuid policy has already been set, and
the guest isn't rebuilt and doesn't kexec.

xc: error: Failed to set d1's policy (err leaf 0xffffffff, subleaf 0xffffffff, msr 0xffffffff) (17 = File exists): Internal error
libxl: error: libxl_cpuid.c:494:libxl__cpuid_legacy: Domain 1:Failed to apply CPUID policy: File exists
libxl: error: libxl_create.c:1641:domcreate_rebuild_done: Domain 1:cannot (re-)build domain: -3
libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/1/type': No such file or directory
libxl: warning: libxl_dom.c:49:libxl__domain_type: unable to get domain type for domid=1, assuming HVM

During a soft_reset, skip calling libxl__cpuid_legacy() to avoid the
issue.  Before the commit referenced in Fixes: below, the
libxl__cpuid_legacy() failure would have been ignored, so kexec would
continue.

Fixes: 34990446ca91 "libxl: don't ignore the return value from xc_cpuid_apply_policy"
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
Probably a backport candidate since this has been broken for a while.

v2:
Use soft_reset field in libxl__domain_build_state. - Juergen
---
 tools/libs/light/libxl_create.c   | 2 ++
 tools/libs/light/libxl_dom.c      | 2 +-
 tools/libs/light/libxl_internal.h | 1 +
 3 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 5cddc3df79..2eaffe7906 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -2210,6 +2210,8 @@ static int do_domain_soft_reset(libxl_ctx *ctx,
                               aop_console_how);
     cdcs->domid_out = &domid_out;
 
+    state->soft_reset = true;
+
     dom_path = libxl__xs_get_dompath(gc, domid);
     if (!dom_path) {
         LOGD(ERROR, domid, "failed to read domain path");
diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index b454f988fb..f6311eea6e 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -382,7 +382,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     /* Construct a CPUID policy, but only for brand new domains.  Domains
      * being migrated-in/restored have CPUID handled during the
      * static_data_done() callback. */
-    if (!state->restore)
+    if (!state->restore && !state->soft_reset)
         rc = libxl__cpuid_legacy(ctx, domid, false, info);
 
 out:
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 0dc8b8f210..ad982d691a 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1411,6 +1411,7 @@ typedef struct {
     /* Whether this domain is being migrated/restored, or booting fresh.  Only
      * applicable to the primary domain, not support domains (e.g. stub QEMU). */
     bool restore;
+    bool soft_reset;
 } libxl__domain_build_state;
 
 _hidden void libxl__domain_build_state_init(libxl__domain_build_state *s);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 03:00:40 2023
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 2/2] Revert "tools/xenstore: simplify loop handling connection I/O"
Date: Mon, 23 Jan 2023 21:59:39 -0500
Message-Id: <20230124025939.6480-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230124025939.6480-1-jandryuk@gmail.com>
References: <20230124025939.6480-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

I'm observing guest kexec trigger xenstored to abort on a double free.

gdb output:
Program received signal SIGABRT, Aborted.
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
44    ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
    at ./nptl/pthread_kill.c:44
    at ./nptl/pthread_kill.c:78
    at ./nptl/pthread_kill.c:89
    at ../sysdeps/posix/raise.c:26
    at talloc.c:119
    ptr=ptr@entry=0x559fae724290) at talloc.c:232
    at xenstored_core.c:2945
(gdb) frame 5
    at talloc.c:119
119            TALLOC_ABORT("Bad talloc magic value - double free");
(gdb) frame 7
    at xenstored_core.c:2945
2945                talloc_increase_ref_count(conn);
(gdb) p conn
$1 = (struct connection *) 0x559fae724290

Looking at a xenstore trace, we have:
IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
DESTROY watch 0x559fae73f630
DESTROY watch 0x559fae75ddf0
DESTROY watch 0x559fae75ec30
DESTROY watch 0x559fae75ea60
DESTROY watch 0x559fae732c00
DESTROY watch 0x559fae72cea0
DESTROY watch 0x559fae728fc0
DESTROY watch 0x559fae729570
DESTROY connection 0x559fae724290
orphaned node /local/domain/3/device/suspend/event-channel deleted
orphaned node /local/domain/3/device/vbd/51712 deleted
orphaned node /local/domain/3/device/vkbd/0 deleted
orphaned node /local/domain/3/device/vif/0 deleted
orphaned node /local/domain/3/control/shutdown deleted
orphaned node /local/domain/3/control/feature-poweroff deleted
orphaned node /local/domain/3/control/feature-reboot deleted
orphaned node /local/domain/3/control/feature-suspend deleted
orphaned node /local/domain/3/control/feature-s3 deleted
orphaned node /local/domain/3/control/feature-s4 deleted
orphaned node /local/domain/3/control/sysrq deleted
orphaned node /local/domain/3/data deleted
orphaned node /local/domain/3/drivers deleted
orphaned node /local/domain/3/feature deleted
orphaned node /local/domain/3/attr deleted
orphaned node /local/domain/3/error deleted
orphaned node /local/domain/3/console/backend-id deleted

and no further output.

The trace shows that DESTROY was called for connection 0x559fae724290,
but that is the same pointer (conn) main() was looping through from
connections.  So it wasn't actually removed from the connections list?

Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
connection I/O" fixes the abort/double free.  I think the use of
list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
traversal safe for deleting the current iterator, but RELEASE/do_release
will delete some other entry in the connections list.  I think the
observed abort happens because list_for_each_entry_safe leaves next
pointing to the deleted connection, which is then used in the subsequent
iteration.

Add a comment explaining the unsuitability of list_for_each_entry_safe.
Also note that the old code takes a reference on next, which prevents a
use-after-free.

This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.

Fixes: e8e6e42279a5 "tools/xenstore: simplify loop handling connection I/O"
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
I didn't verify the stale pointers, which is why there are a lot of "I
think" qualifiers.  But with the commit reverted, xenstored keeps
running, whereas it was aborting consistently beforehand.

v2: Add Fixes
---
 tools/xenstore/xenstored_core.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 78a3edaa4e..029e3852fc 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2941,8 +2941,23 @@ int main(int argc, char *argv[])
 			}
 		}
 
-		list_for_each_entry_safe(conn, next, &connections, list) {
-			talloc_increase_ref_count(conn);
+		/*
+		 * list_for_each_entry_safe is not suitable here because
+		 * handle_input may delete entries besides the current one, but
+		 * those may be in the temporary next which would trigger a
+		 * use-after-free.  list_for_each_entry_safe is only safe for
+		 * deleting the current entry.
+		 */
+		next = list_entry(connections.next, typeof(*conn), list);
+		if (&next->list != &connections)
+			talloc_increase_ref_count(next);
+		while (&next->list != &connections) {
+			conn = next;
+
+			next = list_entry(conn->list.next,
+					  typeof(*conn), list);
+			if (&next->list != &connections)
+				talloc_increase_ref_count(next);
 
 			if (conn_can_read(conn))
 				handle_input(conn);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 03:00:40 2023
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Dongli Zhang <dongli.zhang@oracle.com>
Subject: [PATCH v2 0/2] tools: guest kexec fixes
Date: Mon, 23 Jan 2023 21:59:37 -0500
Message-Id: <20230124025939.6480-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Two toolstack fixes for guest kexec.  This restored kexec enough that I
got Debian bullseye to kexec once.  Trying to kexec the guest a second
time had it spinning at 100% CPU after initiating the kexec.  Haven't
looked into that part yet.

Regards,
Jason

Jason Andryuk (2):
  libxl: Fix guest kexec - skip cpuid policy
  Revert "tools/xenstore: simplify loop handling connection I/O"

 tools/libs/light/libxl_create.c   |  2 ++
 tools/libs/light/libxl_dom.c      |  2 +-
 tools/libs/light/libxl_internal.h |  1 +
 tools/xenstore/xenstored_core.c   | 19 +++++++++++++++++--
 4 files changed, 21 insertions(+), 3 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 04:36:10 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176072-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176072: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2475bf0250dee99b477e0c56d7dc9d7ac3f04117
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 04:35:51 +0000

flight 176072 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176072/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 8 xen-boot fail pass in 176060
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 176060

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 176060 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                2475bf0250dee99b477e0c56d7dc9d7ac3f04117
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  108 days
Failing since        173470  2022-10-08 06:21:34 Z  107 days  222 attempts
Testing same since   176053  2023-01-22 23:10:33 Z    1 days    3 attempts

------------------------------------------------------------
3437 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 527952 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 06:56:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 06:56:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483416.749591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKDDj-0006xm-86; Tue, 24 Jan 2023 06:55:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483416.749591; Tue, 24 Jan 2023 06:55:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKDDj-0006xf-5C; Tue, 24 Jan 2023 06:55:51 +0000
Received: by outflank-mailman (input) for mailman id 483416;
 Tue, 24 Jan 2023 06:55:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKDDh-0006xV-KZ; Tue, 24 Jan 2023 06:55:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKDDh-0000Mk-FE; Tue, 24 Jan 2023 06:55:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKDDg-0002xX-UV; Tue, 24 Jan 2023 06:55:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKDDg-0007a1-U7; Tue, 24 Jan 2023 06:55:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m17hlU1Cqfm404JIRxNTbNmYXxdQxZyvp0BSUm6ogZg=; b=HAe42Y5HmEJxWGXzhlK5uOYTHU
	WsB7QQlS/NjPeXm3K7CNk6FlVVbaVDF/SH74A7/W3o63zKFFG50absYSamF/o5E6msJCPDjhrt2EZ
	iC+RH/+CdPg1D4RdogZgK5b3MPkKHB+Qdb6cMAcCt1eBV14VgMNmLheozhaWhAHO+Nlc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176076-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176076: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d60324d8af9404014cfcc37bba09e9facfd02fcf
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 06:55:48 +0000

flight 176076 xen-unstable real [real]
flight 176088 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176076/
http://logs.test-lab.xenproject.org/osstest/logs/176088/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 176088-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d60324d8af9404014cfcc37bba09e9facfd02fcf
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    3 days
Failing since        176003  2023-01-20 17:40:27 Z    3 days    9 attempts
Testing same since   176076  2023-01-23 20:14:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 787 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 07:11:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 07:11:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483425.749601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKDSe-00014J-O1; Tue, 24 Jan 2023 07:11:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483425.749601; Tue, 24 Jan 2023 07:11:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKDSe-00014C-KR; Tue, 24 Jan 2023 07:11:16 +0000
Received: by outflank-mailman (input) for mailman id 483425;
 Tue, 24 Jan 2023 07:11:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKDSd-000142-Cl; Tue, 24 Jan 2023 07:11:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKDSd-0000hL-4F; Tue, 24 Jan 2023 07:11:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKDSc-00046H-MD; Tue, 24 Jan 2023 07:11:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKDSc-000213-Lm; Tue, 24 Jan 2023 07:11:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vfC2F/jOc2mXT1nAhRo8c11YBPMWqbtXUijMSBUmBz4=; b=P3pjQDO4fLX5HWpGH4sf2C+FUk
	kd7OuztgsAfWZ+uTAT4XAcvLtbVIgjPgFhUjTqhOuXBeNKLEjhm2RVQ6R+lM4RfZS5+4nlpIWKKca
	Wg+NUoAcfaWHaWJ+zcFBY5LVA9nbBZ017yrZiCqZwLyj3ZXxjSVGijaOwuX8yGjrh5Ew=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176080-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 176080: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=00b1faea41d283e931256aa78aa975a369ec3ae6
X-Osstest-Versions-That:
    qemuu=65cc5ccf06a74c98de73ec683d9a543baa302a12
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 07:11:14 +0000

flight 176080 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176080/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 176069 pass in 176080
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 176069 pass in 176080
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 176069 pass in 176080
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  7 xen-install fail pass in 176069

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 176022
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 176022
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 176022
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 176022
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 176022
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 176022
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 176022
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 176022
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                00b1faea41d283e931256aa78aa975a369ec3ae6
baseline version:
 qemuu                65cc5ccf06a74c98de73ec683d9a543baa302a12

Last test of basis   176022  2023-01-21 16:08:39 Z    2 days
Testing same since   176069  2023-01-23 16:08:45 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Corey Minyard <cminyard@mvista.com>
  David Reiss <dreiss@meta.com>
  Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   65cc5ccf06..00b1faea41  00b1faea41d283e931256aa78aa975a369ec3ae6 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 08:14:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 08:14:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483435.749614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKERE-000899-Ph; Tue, 24 Jan 2023 08:13:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483435.749614; Tue, 24 Jan 2023 08:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKERE-000892-ME; Tue, 24 Jan 2023 08:13:52 +0000
Received: by outflank-mailman (input) for mailman id 483435;
 Tue, 24 Jan 2023 08:13:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HFQP=5V=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKERD-00088u-CG
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 08:13:51 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2054.outbound.protection.outlook.com [40.107.105.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 078348c2-9bbf-11ed-91b6-6bf2151ebd3b;
 Tue, 24 Jan 2023 09:13:49 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8686.eurprd04.prod.outlook.com (2603:10a6:102:21d::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Tue, 24 Jan
 2023 08:13:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Tue, 24 Jan 2023
 08:13:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 078348c2-9bbf-11ed-91b6-6bf2151ebd3b
Message-ID: <93431f73-9c9e-ab27-c50f-18dc4a3469fa@suse.com>
Date: Tue, 24 Jan 2023 09:13:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/shadow: special-case SH_type_l2h_64_shadow in
 shadow_size()
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The type is valid even in HVM=y plus PV32=y builds. Hence either the
respective sh_type_to_size[] entry needs to be non-zero [1], or an
override is needed. With the table sitting in a HVM-only file, it was
requested that the table be left alone. Leverage the need for an
override to make the size actually dependent on a runtime property,
not just a build time one.

Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

[1] https://lists.xen.org/archives/html/xen-devel/2023-01/msg01586.html
---
This is an alternative to "x86/shadow: sh_type_to_size[] needs L2H entry
when HVM+PV32". I continue to think that's the better solution, but the
main goal is to get the regression sorted, so I'm (hesitantly) willing
to go this less optimal route. While there's a benefit to making the
size dynamic at run time rather than just at build time (in principle
we could go further and make it depend on domain type), the downsides
are extra code and a scalability concern: things will get unwieldy once
a few more types want special-casing for (more or less) similar reasons.

Leaving the table alone is questionable in the first place: It's still
used for both HVM and PV domains. With the change here all we avoid is
the use of that one table entry. Its HVM-only-ness is a build property,
not (like in many other cases) a runtime one.

--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -29,6 +29,7 @@
 #include <xen/domain_page.h>
 #include <asm/x86_emulate.h>
 #include <asm/hvm/support.h>
+#include <asm/pv/domain.h>
 #include <asm/atomic.h>
 
 #include "../mm-locks.h"
@@ -366,6 +367,10 @@ extern const u8 sh_type_to_size[SH_type_
 static inline unsigned int
 shadow_size(unsigned int shadow_type)
 {
+#ifdef SH_type_l2h_64_shadow
+    if ( shadow_type == SH_type_l2h_64_shadow )
+        return opt_pv32;
+#endif
 #ifdef CONFIG_HVM
     ASSERT(shadow_type < ARRAY_SIZE(sh_type_to_size));
     return sh_type_to_size[shadow_type];
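In case it helps review, the pattern the hunk above introduces — a static size table with a single entry overridden by a runtime setting — can be modeled in isolation. A minimal sketch with hypothetical stand-ins (TYPE_L2H, type_to_size[] and this opt_pv32 are illustrative only, not Xen's actual definitions):

```c
#include <assert.h>

/* Hypothetical stand-ins for sh_type_to_size[] and opt_pv32. */
#define TYPE_L2H 5                 /* models SH_type_l2h_64_shadow */
static const unsigned char type_to_size[] = { 0, 1, 1, 1, 1, 0, 1 };
static int opt_pv32 = 1;           /* runtime property, e.g. a command line option */

static unsigned int shadow_size(unsigned int type)
{
    /*
     * Special-case the one type whose table entry is 0 in this build:
     * its size depends on a runtime setting rather than on the table.
     */
    if ( type == TYPE_L2H )
        return opt_pv32;           /* 1 page when PV32 is enabled, else 0 */

    return type_to_size[type];
}
```

With this shape, flipping the runtime flag changes the reported size without touching the (possibly HVM-only) table.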


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 09:01:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 09:01:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483450.749627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKFBX-0005H0-KZ; Tue, 24 Jan 2023 09:01:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483450.749627; Tue, 24 Jan 2023 09:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKFBX-0005Gt-Hk; Tue, 24 Jan 2023 09:01:43 +0000
Received: by outflank-mailman (input) for mailman id 483450;
 Tue, 24 Jan 2023 09:01:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EcKj=5V=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pKFBV-0005Gn-PJ
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 09:01:42 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2070.outbound.protection.outlook.com [40.107.244.70])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b50c4a59-9bc5-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 10:01:38 +0100 (CET)
Received: from BN0PR02CA0010.namprd02.prod.outlook.com (2603:10b6:408:e4::15)
 by IA0PR12MB7507.namprd12.prod.outlook.com (2603:10b6:208:441::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Tue, 24 Jan
 2023 09:01:31 +0000
Received: from BN8NAM11FT005.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e4:cafe::a8) by BN0PR02CA0010.outlook.office365.com
 (2603:10b6:408:e4::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Tue, 24 Jan 2023 09:01:31 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT005.mail.protection.outlook.com (10.13.176.69) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Tue, 24 Jan 2023 09:01:30 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 24 Jan
 2023 03:01:29 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 24 Jan 2023 03:01:28 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b50c4a59-9bc5-11ed-b8d1-410ff93cb8f0
Message-ID: <b1b5c81a-733d-6bf3-c711-0af5b68009db@amd.com>
Date: Tue, 24 Jan 2023 10:01:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN][RFC PATCH v4 12/16] xen/arm: Implement device tree node
 removal functionalities
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <julien@xen.org>, <Luca.Fancellu@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
References: <20221207061815.7404-1-vikram.garhwal@amd.com>
 <20221207061815.7404-6-vikram.garhwal@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221207061815.7404-6-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Vikram,

You received extensive feedback for v3 from others, so I will limit my review to just coding
and general comments.

On 07/12/2022 07:18, Vikram Garhwal wrote:
> 
> 
> Introduce sysctl XEN_SYSCTL_dt_overlay to remove device-tree nodes added using
> device tree overlay.
> 
> xl dt_overlay remove file.dtbo:
>     Removes all the nodes in a given dtbo.
>     First, removes IRQ permissions and MMIO accesses. Next, it finds the nodes
>     in dt_host and delete the device node entries from dt_host.
> 
>     The nodes get removed only if it is not used by any of dom0 or domio.
> 
> Also, added overlay_track struct to keep the track of added node through device
> tree overlay. overlay_track has dt_host_new which is unflattened form of updated
> fdt and name of overlay nodes. When a node is removed, we also free the memory
> used by overlay_track for the particular overlay node.
> 
> Nested overlay removal is supported in sequential manner only i.e. if
> overlay_child nests under overlay_parent, it is assumed that user first removes
> overlay_child and then removes overlay_parent.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/Makefile          |   1 +
>  xen/common/dt_overlay.c      | 411 +++++++++++++++++++++++++++++++++++
>  xen/common/sysctl.c          |   5 +
>  xen/include/public/sysctl.h  |  19 ++
>  xen/include/xen/dt_overlay.h |  55 +++++
>  5 files changed, 491 insertions(+)
>  create mode 100644 xen/common/dt_overlay.c
>  create mode 100644 xen/include/xen/dt_overlay.h
> 
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 3baf83d527..58a35f55b2 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -7,6 +7,7 @@ obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
>  obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
>  obj-$(CONFIG_IOREQ_SERVER) += dm.o
>  obj-y += domain.o
> +obj-$(CONFIG_OVERLAY_DTB) += dt_overlay.o
>  obj-y += event_2l.o
>  obj-y += event_channel.o
>  obj-y += event_fifo.o
> diff --git a/xen/common/dt_overlay.c b/xen/common/dt_overlay.c
> new file mode 100644
> index 0000000000..477341f0aa
> --- /dev/null
> +++ b/xen/common/dt_overlay.c
> @@ -0,0 +1,411 @@
> +/*
> + * xen/common/dt_overlay.c
New files should start with an SPDX comment expressing the license.

> + *
> + * Device tree overlay support in Xen.
> + *
> + * Copyright (c) 2022 AMD Inc.
> + * Written by Vikram Garhwal <vikram.garhwal@amd.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms and conditions of the GNU General Public
> + * License, version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +#include <xen/iocap.h>
> +#include <xen/xmalloc.h>
> +#include <asm/domain_build.h>
> +#include <xen/dt_overlay.h>
> +#include <xen/guest_access.h>
> +
> +static LIST_HEAD(overlay_tracker);
> +static DEFINE_SPINLOCK(overlay_lock);
> +
> +/* Find last descendants of the device_node. */
> +static struct dt_device_node *find_last_descendants_node(
> +                                            struct dt_device_node *device_node)
> +{
> +    struct dt_device_node *child_node;
> +
> +    for ( child_node = device_node->child; child_node->sibling != NULL;
> +          child_node = child_node->sibling )
> +    {
> +    }
> +
> +    /* If last child_node also have children. */
> +    if ( child_node->child )
> +        child_node = find_last_descendants_node(child_node);
Please add a blank line here.

> +    return child_node;
> +}
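For what it's worth, the sibling walk plus recursion above can be exercised standalone. A self-contained model (the hypothetical struct tnode stands in for Xen's struct dt_device_node; as in the original, the caller must ensure node->child is non-NULL):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of the child/sibling links in struct dt_device_node. */
struct tnode {
    int id;
    struct tnode *child;
    struct tnode *sibling;
};

/*
 * Walk to the last sibling among the node's children, then recurse into
 * that sibling's own children; mirrors find_last_descendants_node().
 */
static struct tnode *find_last_descendant(struct tnode *node)
{
    struct tnode *c;

    for ( c = node->child; c->sibling != NULL; c = c->sibling )
        ;

    if ( c->child )
        c = find_last_descendant(c);

    return c;
}
```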
> +
> +static int dt_overlay_remove_node(struct dt_device_node *device_node)
> +{
> +    struct dt_device_node *np;
> +    struct dt_device_node *parent_node;
> +    struct dt_device_node *device_node_last_descendant = device_node->child;
> +
> +    parent_node = device_node->parent;
> +
> +    if ( parent_node == NULL )
> +    {
> +        dt_dprintk("%s's parent node not found\n", device_node->name);
> +        return -EFAULT;
> +    }
> +
> +    np = parent_node->child;
> +
> +    if ( np == NULL )
> +    {
> +        dt_dprintk("parent node %s's not found\n", parent_node->name);
> +        return -EFAULT;
> +    }
> +
> +    /* If node to be removed is only child node or first child. */
> +    if ( !dt_node_cmp(np->full_name, device_node->full_name) )
> +    {
> +        parent_node->child = np->sibling;
> +
> +        /*
> +         * Iterate over all child nodes of device_node. Given that we are
> +         * removing the parent node, we need to remove all its descendants too.
> +         */
> +        if ( device_node_last_descendant )
> +        {
> +            device_node_last_descendant =
> +                                        find_last_descendants_node(device_node);
> +            parent_node->allnext = device_node_last_descendant->allnext;
> +        }
> +        else
> +            parent_node->allnext = np->allnext;
> +
> +        return 0;
> +    }
> +
> +    for ( np = parent_node->child; np->sibling != NULL; np = np->sibling )
> +    {
> +        if ( !dt_node_cmp(np->sibling->full_name, device_node->full_name) )
> +        {
> +            /* Found the node. Now we remove it. */
> +            np->sibling = np->sibling->sibling;
> +
> +            if ( np->child )
> +                np = find_last_descendants_node(np);
> +
> +            /*
> +             * Iterate over all child nodes of device_node. Given that we are
> +             * removing the parent node, we need to remove all its descendants too.
> +             */
> +            if ( device_node_last_descendant )
> +                device_node_last_descendant =
> +                                        find_last_descendants_node(device_node);
> +
> +            if ( device_node_last_descendant )
> +                np->allnext = device_node_last_descendant->allnext;
> +            else
> +                np->allnext = np->allnext->allnext;
> +
> +            break;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +/* Basic sanity check for the dtbo tool stack provided to Xen. */
> +static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
> +{
> +    if ( (fdt_totalsize(overlay_fdt) != overlay_fdt_size) ||
> +          fdt_check_header(overlay_fdt) )
> +    {
> +        printk(XENLOG_ERR "The overlay FDT is not a valid Flat Device Tree\n");
> +        return -EINVAL;
> +    }
> +
> +    return 0;
> +}
> +
> +/* Count number of nodes till one level of __overlay__ tag. */
> +static unsigned int overlay_node_count(void *fdto)
> +{
> +    unsigned int num_overlay_nodes = 0;
> +    int fragment;
> +
> +    fdt_for_each_subnode(fragment, fdto, 0)
> +    {
> +        int subnode;
> +        int overlay;
> +
> +        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
> +
> +        /*
> +         * overlay value can be < 0. But fdt_for_each_subnode() loop checks for
> +         * overlay >= 0. So, no need for an overlay >= 0 check here.
> +         */
> +        fdt_for_each_subnode(subnode, fdto, overlay)
> +        {
> +            num_overlay_nodes++;
> +        }
> +    }
> +
> +    return num_overlay_nodes;
> +}
> +
> +static int handle_remove_irq_iommu(struct dt_device_node *device_node)
> +{
> +    int rc = 0;
> +    struct domain *d = hardware_domain;
> +    domid_t domid = 0;
No need for this assignment.

> +    unsigned int naddr, len;
> +    unsigned int i, nirq;
> +    u64 addr, size;
We should not be using types like these anymore. Use uint64_t.

> +
> +    domid = dt_device_used_by(device_node);
> +
> +    dt_dprintk("Checking if node %s is used by any domain\n",
> +               device_node->full_name);
> +
> +    /* Remove the node iff it's assigned to domain 0 or domain io. */
> +    if ( domid != 0 && domid != DOMID_IO )
> +    {
> +        printk(XENLOG_ERR "Device %s is being used by domain %d. Removing nodes failed\n",
> +               device_node->full_name, domid);
> +        return -EINVAL;
> +    }
> +
> +    dt_dprintk("Removing node: %s\n", device_node->full_name);
> +
> +    nirq = dt_number_of_irq(device_node);
> +
> +    /* Remove IRQ permission */
> +    for ( i = 0; i < nirq; i++ )
> +    {
> +        rc = platform_get_irq(device_node, i);
> +
> +        if ( irq_access_permitted(d, rc) == false )
> +        {
> +            printk(XENLOG_ERR "IRQ %d is not routed to domain %d\n", rc,
> +                   domid);
> +            return -EINVAL;
> +        }
> +        /*
> +         * TODO: We don't handle shared IRQs for now. So, it is assumed that
> +         * the IRQ is not shared with other devices.
> +         */
> +        rc = irq_deny_access(d, rc);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "unable to revoke access for irq %u for %s\n",
> +                   i, device_node->full_name);
> +            return rc;
> +        }
> +    }
> +
> +    /* Check if iommu property exists. */
> +    if ( dt_get_property(device_node, "iommus", &len) )
> +    {
> +
Remove extra line.

> +        rc = iommu_remove_dt_device(device_node);
> +        if ( rc != 0 && rc != -ENXIO )
> +            return rc;
> +    }
> +
> +    naddr = dt_number_of_address(device_node);
> +
> +    /* Remove mmio access. */
> +    for ( i = 0; i < naddr; i++ )
> +    {
> +        rc = dt_device_get_address(device_node, i, &addr, &size);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> +                   i, dt_node_full_name(device_node));
> +            return rc;
> +        }
> +
> +        rc = iomem_deny_access(d, paddr_to_pfn(addr),
> +                               paddr_to_pfn(PAGE_ALIGN(addr + size - 1)));
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to remove dom%d access to"
> +                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
> +                   d->domain_id,
> +                   addr & PAGE_MASK, PAGE_ALIGN(addr + size) - 1);
> +            return rc;
> +        }
What about removing p2m mappings (comment from Julien on v3)?

> +
> +    }
> +
> +    return rc;
> +}
> +
> +/* Removes all descendants of the given node. */
> +static int remove_all_descendant_nodes(struct dt_device_node *device_node)
> +{
> +    int rc = 0;
> +    struct dt_device_node *child_node;
> +
> +    for ( child_node = device_node->child; child_node != NULL;
> +         child_node = child_node->sibling )
> +    {
> +        if ( child_node->child )
> +            remove_all_descendant_nodes(child_node);
> +
> +        rc = handle_remove_irq_iommu(child_node);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    return rc;
> +}
> +
> +/* Remove nodes from dt_host. */
> +static int remove_nodes(const struct overlay_track *tracker)
> +{
> +    int rc = 0;
> +    struct dt_device_node *overlay_node;
> +    unsigned int j;
> +
> +    for ( j = 0; j < tracker->num_nodes; j++ )
> +    {
> +        overlay_node = (struct dt_device_node *)tracker->nodes_address[j];
> +        if ( overlay_node == NULL )
> +        {
> +            printk(XENLOG_ERR "Device %s is not present in the tree. Removing nodes failed\n",
> +                   overlay_node->full_name);
> +            return -EINVAL;
> +        }
> +
> +        rc = remove_all_descendant_nodes(overlay_node);
> +
> +        /* All children nodes are unmapped. Now remove the node itself. */
> +        rc = handle_remove_irq_iommu(overlay_node);
> +        if ( rc )
> +            return rc;
> +
> +        read_lock(&dt_host->lock);
> +
> +        rc = dt_overlay_remove_node(overlay_node);
> +        if ( rc )
> +        {
> +            read_unlock(&dt_host->lock);
> +
> +            return rc;
> +        }
> +
> +        read_unlock(&dt_host->lock);
> +    }
> +
> +    return rc;
> +}
> +
> +/*
> + * First finds the device node to remove. Check if the device is being used by
> + * any dom and finally remove it from dt_host. IOMMU is already being taken care
> + * while destroying the domain.
> + */
> +static long handle_remove_overlay_nodes(void *overlay_fdt,
> +                                        uint32_t overlay_fdt_size)
> +{
> +    int rc = 0;
> +    struct overlay_track *entry, *temp, *track;
> +    bool found_entry = false;
> +
> +    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
> +    if ( rc )
> +        return rc;
> +
> +    if ( overlay_node_count(overlay_fdt) == 0 )
> +        return -ENOMEM;
> +
> +    spin_lock(&overlay_lock);
> +
> +    /*
> +     * First check if the dtbo is correct, i.e. it should be one of the dtbos
> +     * which were used when dynamically adding the nodes.
> +     * Limitation: Cases with same node names but different property are not
> +     * supported currently. We are relying on user to provide the same dtbo
> +     * as it was used when adding the nodes.
> +     */
> +    list_for_each_entry_safe( entry, temp, &overlay_tracker, entry )
> +    {
> +        if ( memcmp(entry->overlay_fdt, overlay_fdt, overlay_fdt_size) == 0 )
> +        {
> +            track = entry;
> +            found_entry = true;
> +            break;
> +        }
> +    }
> +
> +    if ( found_entry == false )
> +    {
> +        rc = -EINVAL;
> +
> +        printk(XENLOG_ERR "Cannot find any matching tracker with input dtbo."
> +               " Removing nodes is supported for only prior added dtbo. Please"
> +               " provide a valid dtbo which was used to add the nodes.\n");
> +        goto out;
> +
> +    }
> +
> +    rc = remove_nodes(entry);
> +
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Removing node failed\n");
> +        goto out;
> +    }
> +
> +    list_del(&entry->entry);
> +
> +    xfree(entry->dt_host_new);
> +    xfree(entry->fdt);
> +    xfree(entry->overlay_fdt);
> +
> +    xfree(entry->nodes_address);
> +
> +    xfree(entry);
> +
> +out:
> +    spin_unlock(&overlay_lock);
> +    return rc;
> +}
> +
> +long dt_sysctl(struct xen_sysctl_dt_overlay *op)
> +{
> +    long ret = 0;
No need to assign a value that will be reassigned anyway.

> +    void *overlay_fdt;
> +
> +    if ( op->overlay_fdt_size <= 0 || op->overlay_fdt_size > 500000 )
FWICS, you want to limit the fdt size to 500KB, which should be 512000.
Also, it would be clearer to use KB(500); otherwise such a value is a bit ambiguous.
And overlay_fdt_size is unsigned, so it cannot be < 0.
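For concreteness, a minimal sketch of the suggested check, with stand-ins for Xen's KB() macro and -EINVAL so it is self-contained (the helper name is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins; Xen's real KB() lives in xen/include/xen/config.h. */
#define KB(x)   ((uint64_t)(x) << 10)
#define EINVAL  22

/* Hypothetical helper validating the size the way the review suggests. */
static int check_overlay_fdt_size(uint32_t overlay_fdt_size)
{
    /* The field is unsigned, so only zero and the upper bound need checking. */
    if ( overlay_fdt_size == 0 || overlay_fdt_size > KB(500) )
        return -EINVAL;

    return 0;
}
```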

> +        return -EINVAL;
> +
> +    overlay_fdt = xmalloc_bytes(op->overlay_fdt_size);
If you allocate the buffer here and the op is not XEN_SYSCTL_DT_OVERLAY_REMOVE,
then you will end up never freeing it.
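One way to restructure it is a single exit path that frees the buffer no matter which op was requested. A runnable model of that idea, with stand-ins for xmalloc_bytes()/xfree() (the counter exists only to demonstrate that nothing leaks, including for unsupported ops):

```c
#include <assert.h>
#include <stdlib.h>

#define XEN_SYSCTL_DT_OVERLAY_REMOVE 2
#define ENOMEM     12
#define EOPNOTSUPP 95

static int live_allocs;  /* counts outstanding buffers, for demonstration */

static void *xmalloc_bytes(size_t n) { live_allocs++; return malloc(n); }
static void xfree(void *p) { if ( p != NULL ) live_allocs--; free(p); }

/* Stand-in for the real handler. */
static long handle_remove_overlay_nodes(void *fdt, size_t size) { return 0; }

static long dt_sysctl_model(unsigned int overlay_op, size_t size)
{
    long ret;
    void *overlay_fdt = xmalloc_bytes(size);

    if ( overlay_fdt == NULL )
        return -ENOMEM;

    switch ( overlay_op )
    {
    case XEN_SYSCTL_DT_OVERLAY_REMOVE:
        ret = handle_remove_overlay_nodes(overlay_fdt, size);
        break;

    default:
        ret = -EOPNOTSUPP;
        break;
    }

    /* Single exit: the buffer is freed no matter which op was requested. */
    xfree(overlay_fdt);
    return ret;
}
```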

> +
> +    if ( overlay_fdt == NULL )
> +        return -ENOMEM;
> +
> +    ret = copy_from_guest(overlay_fdt, op->overlay_fdt, op->overlay_fdt_size);
> +    if ( ret )
> +    {
> +        gprintk(XENLOG_ERR, "copy from guest failed\n");
> +        xfree(overlay_fdt);
> +
> +        return -EFAULT;
> +    }
> +
> +    switch ( op->overlay_op )
> +    {
> +    case XEN_SYSCTL_DT_OVERLAY_REMOVE:
> +        ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
> +        xfree(overlay_fdt);
> +
> +        break;
> +
> +    default:
> +        break;
> +    }
> +
> +    return ret;
> +}
Don't you want to put an Emacs comment block here?

> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 02505ab044..bb338b7c27 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -28,6 +28,7 @@
>  #include <xen/pmstat.h>
>  #include <xen/livepatch.h>
>  #include <xen/coverage.h>
> +#include <xen/dt_overlay.h>
> 
>  long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>  {
> @@ -482,6 +483,10 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>              copyback = 1;
>          break;
> 
> +    case XEN_SYSCTL_dt_overlay:
If you protect xen_sysctl_dt_overlay with ARM ifdefery as Jan suggested,
then you should move this handling to arch_do_sysctl.
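A rough sketch of what that move could look like, as a self-contained model (CONFIG_ARM is force-defined here purely for illustration; the real change would live in xen/arch/arm/sysctl.c):

```c
#include <assert.h>

#define CONFIG_ARM 1              /* pretend we are building for Arm */
#define XEN_SYSCTL_dt_overlay 30
#define ENOSYS 38

static long dt_sysctl_stub(void) { return 0; }  /* stand-in for dt_sysctl() */

/*
 * Common code falls through to arch_do_sysctl() for commands it does not
 * handle; an Arm-only sysctl would be dispatched here behind the ifdef.
 */
static long arch_do_sysctl_model(unsigned int cmd)
{
    long ret;

    switch ( cmd )
    {
#ifdef CONFIG_ARM
    case XEN_SYSCTL_dt_overlay:
        ret = dt_sysctl_stub();
        break;
#endif

    default:
        ret = -ENOSYS;
        break;
    }

    return ret;
}
```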

> +        ret = dt_sysctl(&op->u.dt_overlay);
> +        break;
> +
>      default:
>          ret = arch_do_sysctl(op, u_sysctl);
>          copyback = 0;
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index 5672906729..4bc76bbe27 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -1079,6 +1079,23 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
>  #endif
> 
> +#define XEN_SYSCTL_DT_OVERLAY_ADD                   1
I'm not sure whether the ADD macro should be added in this patch.

> +#define XEN_SYSCTL_DT_OVERLAY_REMOVE                2
> +
> +/*
> + * XEN_SYSCTL_dt_overlay
> + * Performs addition/removal of device tree nodes under parent node using dtbo.
> + * This does in three steps:
> + *  - Adds/Removes the nodes from dt_host.
> + *  - Adds/Removes IRQ permission for the nodes.
> + *  - Adds/Removes MMIO accesses.
> + */
> +struct xen_sysctl_dt_overlay {
> +    XEN_GUEST_HANDLE_64(void) overlay_fdt;
FWICS, this is the output variable and it would be beneficial to add a comment.
Also, usually IN variables appear first.

> +    uint32_t overlay_fdt_size;  /* Overlay dtb size. */
> +    uint8_t overlay_op; /* Add or remove. */
These are the input variables, so the comment should be e.g. /* IN: Overlay dtb size */

> +};
> +
>  struct xen_sysctl {
>      uint32_t cmd;
>  #define XEN_SYSCTL_readconsole                    1
> @@ -1109,6 +1126,7 @@ struct xen_sysctl {
>  #define XEN_SYSCTL_livepatch_op                  27
>  /* #define XEN_SYSCTL_set_parameter              28 */
>  #define XEN_SYSCTL_get_cpu_policy                29
> +#define XEN_SYSCTL_dt_overlay                    30
>      uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
>      union {
>          struct xen_sysctl_readconsole       readconsole;
> @@ -1139,6 +1157,7 @@ struct xen_sysctl {
>  #if defined(__i386__) || defined(__x86_64__)
>          struct xen_sysctl_cpu_policy        cpu_policy;
>  #endif
> +        struct xen_sysctl_dt_overlay        dt_overlay;
>          uint8_t                             pad[128];
>      } u;
>  };
> diff --git a/xen/include/xen/dt_overlay.h b/xen/include/xen/dt_overlay.h
> new file mode 100644
> index 0000000000..30f4b86586
> --- /dev/null
> +++ b/xen/include/xen/dt_overlay.h
> @@ -0,0 +1,55 @@
> +/*
Missing SPDX comment at the top of the file.

> + * xen/common/dt_overlay.h
Incorrect path.

> + *
> + * Device tree overlay support in Xen.
> + *
> + * Copyright (c) 2022 AMD Inc.
> + * Written by Vikram Garhwal <vikram.garhwal@amd.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms and conditions of the GNU General Public
> + * License, version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +#ifndef __XEN_DT_SYSCTL_H__
> +#define __XEN_DT_SYSCTL_H__
> +
> +#include <xen/list.h>
> +#include <xen/libfdt/libfdt.h>
> +#include <xen/device_tree.h>
> +#include <xen/rangeset.h>
> +
> +/*
> + * struct overlay_track describes the nodes added through a dtbo.
> + * @entry: List pointer.
> + * @dt_host_new: Pointer to the unflattened 'updated fdt'.
> + * @fdt: Stores the updated fdt.
> + * @overlay_fdt: Stores the overlay fdt used to add the nodes.
> + * @nodes_address: Stores the address of each added node.
> + * @num_nodes: Stores the total number of nodes in the overlay dtb.
> + */
> +struct overlay_track {
> +    struct list_head entry;
> +    struct dt_device_node *dt_host_new;
> +    void *fdt;
> +    void *overlay_fdt;
> +    unsigned long *nodes_address;
> +    unsigned int num_nodes;
> +};
> +
> +struct xen_sysctl_dt_overlay;
> +
> +#ifdef CONFIG_OVERLAY_DTB
> +long dt_sysctl(struct xen_sysctl_dt_overlay *op);
> +#else
> +static inline long dt_sysctl(struct xen_sysctl_dt_overlay *op)
> +{
> +    return -ENOSYS;
> +}
> +#endif
> +#endif
Don't you want to put an Emacs comment block here?

> --
> 2.17.1
> 
> 

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 09:32:26 2023
Message-ID: <57b2c406fc1ac735d9b45150a7d7353d93a90a31.camel@gmail.com>
Subject: Re: [RISC-V] Switch  to H-mode
From: Oleksii <oleksii.kurochko@gmail.com>
To: Alistair Francis <Alistair.Francis@wdc.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: "bobbyeshleman@gmail.com" <bobbyeshleman@gmail.com>, 
 "sstabellini@kernel.org" <sstabellini@kernel.org>, "gianluca@rivosinc.com"
 <gianluca@rivosinc.com>,  "andrew.cooper3@citrix.com"
 <andrew.cooper3@citrix.com>
Date: Tue, 24 Jan 2023 11:31:58 +0200
In-Reply-To: <0f810fd4de8b5b05ceddd53a9ce26e8be9014eb2.camel@wdc.com>
References: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
	 <0f810fd4de8b5b05ceddd53a9ce26e8be9014eb2.camel@wdc.com>

On Mon, 2023-01-23 at 22:32 +0000, Alistair Francis wrote:
> On Mon, 2023-01-23 at 18:56 +0200, Oleksii wrote:
> > Hi Alistair and community,
> >
> > I am working on RISC-V support upstream for Xen based on your and
> > Bobby's patches.
> >
> > Adding the RISC-V support I realized that Xen is run in S-mode.
> > Output of OpenSBI:
> >     ...
> >     Domain0 Next Mode         : S-mode
> >     ...
> > So my first question is: shouldn't it be in H-mode?
>
> There is no H-mode in RISC-V.
>
> When the Hypervisor extension exists, the standard S-mode automatically
> becomes HS-mode. The two names can be used interchangeably (although
> the spec calls it HS-mode).
>
> In this way Linux (with or without KVM support) and Xen all boot in the
> same mode and can choose to use virtualisation if desired.
>
> >
> > If I am right then it looks like we have to do a patch to OpenSBI to
> > add support of H-mode, as it is not supported now:
> > [1] https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_domain.c#L380
> > [2] https://github.com/riscv-software-src/opensbi/blob/master/include/sbi/riscv_encoding.h#L110
> > Please correct me if I am wrong.
> >
> > The other option I see is to switch to H-mode in U-boot, as I understand
> > the classical boot flow is:
> >     OpenSBI -> U-boot -> Xen -> Domain{0,...}
> > If it is at all possible, since U-boot will be in S-mode after OpenSBI.
>
> S-mode is where you want to be. That's what Xen should start in.
>
Thanks for the clarification.

~ Oleksii
> Alistair
>
> >
> > Thanks in advance.
> >
> > ~ Oleksii
>



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 09:34:02 2023
Message-ID: <4536fdefd885719106d6de1de4a3ed16aa39f10f.camel@gmail.com>
Subject: Re: [RISC-V] Switch  to H-mode
From: Oleksii <oleksii.kurochko@gmail.com>
To: Bobby Eshleman <bobbyeshleman@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, 
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
 Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
 <gianluca@rivosinc.com>
Date: Tue, 24 Jan 2023 11:33:47 +0200
In-Reply-To: <Y8lABYJoQ5Qt4DAt@bullseye>
References: <18aa47afaebce70b00c3b5866a4809605240e619.camel@gmail.com>
	 <Y8lABYJoQ5Qt4DAt@bullseye>

On Thu, 2023-01-19 at 13:05 +0000, Bobby Eshleman wrote:
> On Mon, Jan 23, 2023 at 06:56:19PM +0200, Oleksii wrote:
> > Hi Alistair and community,
> >
> > I am working on RISC-V support upstream for Xen based on your and
> > Bobby's patches.
> >
> > Adding the RISC-V support I realized that Xen is run in S-mode.
> > Output of OpenSBI:
> >     ...
> >     Domain0 Next Mode         : S-mode
> >     ...
> > So my first question is: shouldn't it be in H-mode?
> >
> > If I am right then it looks like we have to do a patch to OpenSBI to
> > add support of H-mode, as it is not supported now:
> > [1] https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_domain.c#L380
> > [2] https://github.com/riscv-software-src/opensbi/blob/master/include/sbi/riscv_encoding.h#L110
> > Please correct me if I am wrong.
> >
> > The other option I see is to switch to H-mode in U-boot, as I understand
> > the classical boot flow is:
> >     OpenSBI -> U-boot -> Xen -> Domain{0,...}
> > If it is at all possible, since U-boot will be in S-mode after OpenSBI.
> >
> > Thanks in advance.
> >
> > ~ Oleksii
> >
>
> Ah, what you are seeing there is that OpenSBI's Next Mode excludes
> the virtualization mode (it treats HS and S synonymously) and it is
> only used for setting the mstatus MPP. The code also has next_virt for
> setting the MPV, but I don't think that is exposed via the device tree
> yet. For Xen, you'd want next_mode = PRIV_S and next_virt = 0 (HS mode,
> not VS mode). The relevant setup prior to mret is here for interested
> readers:
> https://github.com/riscv-software-src/opensbi/blob/001106d19b21cd6443ae7f7f6d4d048d80e9ecac/lib/sbi/sbi_hart.c#L759
>
> As long as the next_mode and next_virt are set correctly, then Xen
> should be launching in HS mode. I do believe this should be the default
> for the stock build too for Domain0, unless something has changed.
>
I had found the same in OpenSBI before but wasn't 100% sure I was
right. Now it is clear.

Thanks for your explanation.

> Thanks,
> Bobby

~ Oleksii
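For interested readers, the MPP/MPV behaviour Bobby describes can be sketched concretely. This is a rough model only, with RV64 bit positions taken from the RISC-V privileged specification; prep_mret() and the constant names are illustrative, not OpenSBI's actual code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * RV64 mstatus fields relevant here:
 *   MPP (bits 12:11)               - privilege level restored by mret;
 *   MPV (bit 39, hypervisor ext.)  - virtualization mode V after mret.
 */
#define PRV_S             1UL
#define MSTATUS_MPP_SHIFT 11
#define MSTATUS_MPV_SHIFT 39

static uint64_t prep_mret(uint64_t mstatus, uint64_t next_mode,
                          uint64_t next_virt)
{
    mstatus &= ~((3UL << MSTATUS_MPP_SHIFT) | (1UL << MSTATUS_MPV_SHIFT));
    mstatus |= next_mode << MSTATUS_MPP_SHIFT;  /* PRV_S -> (H)S-mode */
    mstatus |= next_virt << MSTATUS_MPV_SHIFT;  /* 0 -> V=0: HS, not VS */
    return mstatus;
}
```

With next_mode = PRV_S and next_virt = 0, mret lands in HS-mode, which is exactly where Xen should start.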


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 10:57:02 2023
Message-ID: <678e765c-9ab7-14d8-ecbd-6edeb230c41d@amd.com>
Date: Tue, 24 Jan 2023 11:56:26 +0100
Subject: Re: [XEN][RFC PATCH v4 13/16] xen/arm: Implement device tree node
 addition functionalities
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <julien@xen.org>, <Luca.Fancellu@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
References: <20221207061815.7404-1-vikram.garhwal@amd.com>
 <20221207061815.7404-7-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20221207061815.7404-7-vikram.garhwal@amd.com>

Hi Vikram,

On 07/12/2022 07:18, Vikram Garhwal wrote:
> 
> 
> Update sysctl XEN_SYSCTL_dt_overlay to enable support for dtbo nodes addition
> using device tree overlay.
> 
> xl dt_overlay add file.dtbo:
>     Each time overlay nodes are added using .dtbo, a new fdt(memcpy of
>     device_tree_flattened) is created and updated with overlay nodes. This
>     updated fdt is further unflattened to a dt_host_new. Next, it checks if any
>     of the overlay nodes already exists in the dt_host. If overlay nodes doesn't
>     exist then find the overlay nodes in dt_host_new, find the overlay node's
>     parent in dt_host and add the nodes as child under their parent in the
>     dt_host. The node is attached as the last node under target parent.
> 
>     Finally, add IRQs, add device to IOMMUs, set permissions and map MMIO for the
>     overlay node.
> 
> When a node is added using overlay, a new entry is allocated in the
> overlay_track to keep the track of memory allocation due to addition of overlay
> node. This is helpful for freeing the memory allocated when a device tree node
> is removed.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/dt_overlay.c | 465 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 465 insertions(+)
> 
> diff --git a/xen/common/dt_overlay.c b/xen/common/dt_overlay.c
> index 477341f0aa..f5426b9dab 100644
> --- a/xen/common/dt_overlay.c
> +++ b/xen/common/dt_overlay.c
> @@ -38,9 +38,29 @@ static struct dt_device_node *find_last_descendants_node(
>      /* If last child_node also have children. */
>      if ( child_node->child )
>          child_node = find_last_descendants_node(child_node);
> +
This should be done in a previous patch, as I pointed out in patch 12.

>      return child_node;
>  }
> 
> +/*
> + * Returns next node to the input node. If node has children then return
> + * last descendant's next node.
> +*/
> +static struct dt_device_node *dt_find_next_node(struct dt_device_node *dt,
> +                                                struct dt_device_node *node)
I think the node should be const.

> +{
> +    struct dt_device_node *np;
> +
> +    dt_for_each_device_node(dt, np)
> +        if ( np == node )
> +            break;
> +
> +    if ( np->child )
> +        np = find_last_descendants_node(np);
> +
> +    return np->allnext;
> +}
> +
>  static int dt_overlay_remove_node(struct dt_device_node *device_node)
>  {
>      struct dt_device_node *np;
> @@ -114,6 +134,74 @@ static int dt_overlay_remove_node(struct dt_device_node *device_node)
>      return 0;
>  }
> 
> +static int dt_overlay_add_node(struct dt_device_node *device_node,
> +                               const char *parent_node_path)
> +{
> +    struct dt_device_node *parent_node;
> +    struct dt_device_node *np, *np_last_descendant;
> +    struct dt_device_node *next_node;
> +    struct dt_device_node *device_node_last_descendant;
> +
> +    parent_node = dt_find_node_by_path(parent_node_path);
> +
> +    if ( parent_node == NULL )
> +    {
> +        dt_dprintk("Node not found. Overlay node will not be added\n");
> +        return -EINVAL;
> +    }
> +
> +    /* If parent has no child. */
> +    if ( parent_node->child == NULL )
> +    {
> +        next_node = parent_node->allnext;
> +        device_node->parent = parent_node;
> +        parent_node->allnext = device_node;
> +        parent_node->child = device_node;
> +    }
> +    else
> +    {
> +        /* If parent has at least one child node.
> +         * Iterate to the last child node of parent.
> +         */
> +        for ( np = parent_node->child; np->sibling != NULL; np = np->sibling )
> +        {
> +        }
Instead of {}, you could just put a ';' at the end of the for statement.
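For illustration, a minimal sketch of that empty-body idiom (the struct here is a simplified hypothetical stand-in, not the real struct dt_device_node):

```c
#include <stddef.h>

/* Hypothetical stand-in for the sibling-linked node list. */
struct node {
    struct node *sibling;
};

/* Walk to the last sibling; all the work happens in the for header,
 * so a bare ';' replaces the empty braces. */
static struct node *last_sibling(struct node *np)
{
    for ( ; np->sibling != NULL; np = np->sibling )
        ;
    return np;
}
```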

> +
> +        /* Iterate over all child nodes of np node. */
> +        if ( np->child )
> +        {
> +            np_last_descendant = find_last_descendants_node(np);
> +
> +            next_node = np_last_descendant->allnext;
> +            np_last_descendant->allnext = device_node;
> +        }
> +        else
> +        {
> +            next_node = np->allnext;
> +            np->allnext = device_node;
> +        }
> +
> +        device_node->parent = parent_node;
> +        np->sibling = device_node;
> +        np->sibling->sibling = NULL;
> +    }
> +
> +    /* Iterate over all child nodes of device_node to add children too. */
> +    if ( device_node->child )
> +    {
> +        device_node_last_descendant = find_last_descendants_node(device_node);
> +        /* Plug next_node at the end of last children of device_node. */
> +        device_node_last_descendant->allnext = next_node;
> +    }
> +    else
> +    {
> +        /* Now plug next_node at the end of device_node. */
> +        device_node->allnext = next_node;
> +    }
> +
> +    return 0;
> +}
> +
>  /* Basic sanity check for the dtbo tool stack provided to Xen. */
>  static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
>  {
> @@ -153,6 +241,79 @@ static unsigned int overlay_node_count(void *fdto)
>      return num_overlay_nodes;
>  }
> 
> +/*
> + * overlay_get_nodes_info will get full name with path for all the nodes which
> + * are in one level of __overlay__ tag. This is useful when checking node for
> + * duplication i.e. dtbo tries to add nodes which already exists in device tree.
> + */
> +static int overlay_get_nodes_info(const void *fdto, char ***nodes_full_path,
> +                                  unsigned int num_overlay_nodes)
> +{
> +    int fragment;
> +    unsigned int node_num = 0;
node_num should be declared inside the second for loop.

> +
> +    *nodes_full_path = xzalloc_bytes(num_overlay_nodes * sizeof(char *));
> +
> +    if ( *nodes_full_path == NULL )
> +        return -ENOMEM;
> +
> +    fdt_for_each_subnode(fragment, fdto, 0)
> +    {
> +        int target;
> +        int overlay;
> +        int subnode;
> +        const char *target_path;
> +
> +        target = fdt_overlay_target_offset(device_tree_flattened, fdto,
> +                                           fragment, &target_path);
> +        if ( target < 0 )
> +            return target;
> +
> +        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
> +
> +        /*
> +         * overlay value can be < 0. But fdt_for_each_subnode() loop checks for
> +         * overlay >= 0. So, no need for a overlay>=0 check here.
> +         */
> +        fdt_for_each_subnode(subnode, fdto, overlay)
> +        {
> +            const char *node_name = NULL;
> +            int node_name_len = 0;
No need for assignment.

> +            unsigned int target_path_len = strlen(target_path);
> +            unsigned int node_full_name_len = 0;
No need for assignment.
> +
> +            node_name = fdt_get_name(fdto, subnode, &node_name_len);
> +
> +            if ( node_name == NULL )
> +                return -EINVAL;
If node_name is NULL, the error code is stored in node_name_len, so shouldn't
you return that instead of -EINVAL?
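To sketch what I mean (mock_get_name() is a hypothetical stand-in mirroring the libfdt contract, where a failing fdt_get_name() returns NULL and stores a negative error code through the length pointer):

```c
#include <stddef.h>

/* Hypothetical stand-in for fdt_get_name(): on failure it returns NULL
 * and stores a negative error code through *lenp. */
static const char *mock_get_name(int fail, int *lenp)
{
    if ( fail )
    {
        *lenp = -22; /* pretend the callee stored an error code here */
        return NULL;
    }
    *lenp = 4;
    return "node";
}

/* Propagate the stored error instead of a fixed -EINVAL. */
static int get_name_checked(int fail, const char **name)
{
    int len;

    *name = mock_get_name(fail, &len);
    if ( *name == NULL )
        return len; /* negative error code from the callee */

    return len;     /* non-negative name length on success */
}
```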

> +
> +            /*
> +             * Magic number 2 is for adding '/'. This is done to keep the
> +             * node_full_name in the correct full node name format.
> +             */
> +            node_full_name_len = target_path_len + node_name_len + 2;
> +
> +            (*nodes_full_path)[node_num] = xmalloc_bytes(node_full_name_len);
> +
> +            if ( (*nodes_full_path)[node_num] == NULL )
> +                return -ENOMEM;
> +
> +            memcpy((*nodes_full_path)[node_num], target_path, target_path_len);
> +
> +            (*nodes_full_path)[node_num][target_path_len] = '/';
> +
> +            memcpy((*nodes_full_path)[node_num] + target_path_len + 1,
> +                    node_name, node_name_len);
> +
> +            (*nodes_full_path)[node_num][node_full_name_len - 1] = '\0';
> +
> +            node_num++;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>  static int handle_remove_irq_iommu(struct dt_device_node *device_node)
>  {
>      int rc = 0;
> @@ -373,6 +534,302 @@ out:
>      return rc;
>  }
> 
> +/*
> + * Handles IRQ and IOMMU mapping for the overlay_node and all descendants of the
> + * overlay_nodes.
> + */
> +static int handle_add_irq_iommu(struct domain *d,
> +                                struct dt_device_node *overlay_node)
> +{
> +    int rc = 0;
No need for assignment.

> +    unsigned int naddr, i, len;
> +    u64 addr, size;
Should be uint64_t.

> +    struct dt_device_node *np;
> +
> +    /* First let's handle the interrupts. */
> +    rc = handle_device_interrupts(d, overlay_node, false);
> +    if ( rc )
To match the handle_device_interrupts() behavior on failure, you should check ( rc < 0 ).

> +    {
> +        printk(XENLOG_ERR "Interrupt failed\n");
> +        return rc;
> +    }
> +
> +    /* Check if iommu property exists. */
> +    if ( dt_get_property(overlay_node, "iommus", &len) )
> +    {
> +
> +        /* Add device to IOMMUs. */
> +        rc = iommu_add_dt_device(overlay_node);
> +        if ( rc < 0 )
> +        {
> +            printk(XENLOG_ERR "Failed to add %s to the IOMMU\n",
> +                   dt_node_full_name(overlay_node));
> +            return rc;
> +        }
> +    }
> +
> +    /* Set permissions. */
> +    naddr = dt_number_of_address(overlay_node);
> +
> +    dt_dprintk("%s passthrough = %d naddr = %u\n",
> +               dt_node_full_name(overlay_node), false, naddr);
> +
> +    /* Give permission for map MMIOs */
> +    for ( i = 0; i < naddr; i++ )
> +    {
> +        struct map_range_data mr_data = { .d = d,
> +                                          .p2mt = p2m_mmio_direct_c,
> +                                          .skip_mapping = true };
> +
> +        rc = dt_device_get_address(overlay_node, i, &addr, &size);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> +                   i, dt_node_full_name(overlay_node));
> +            return rc;
> +        }
> +
> +        rc = map_range_to_domain(overlay_node, addr, size, &mr_data);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    /* Map IRQ and IOMMU for overlay_node's children. */
> +    for ( np = overlay_node->child; np != NULL; np = np->sibling)
> +    {
> +        rc = handle_add_irq_iommu(d, np);
Shouldn't you stop looping if rc is not zero?
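Something along these lines (a minimal sketch with hypothetical stand-in types; handle() plays the role of the recursive handle_add_irq_iommu() call):

```c
#include <stddef.h>

/* Hypothetical stand-ins for the node list and the per-node handler. */
struct n {
    struct n *sibling;
    int fail;
};

static int handle(struct n *np)
{
    return np->fail ? -5 : 0;
}

/* Stop at the first failure instead of letting a later iteration
 * overwrite rc with 0. */
static int handle_children(struct n *first)
{
    int rc = 0;
    struct n *np;

    for ( np = first; np != NULL; np = np->sibling )
    {
        rc = handle(np);
        if ( rc )
            break;
    }

    return rc;
}
```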

> +    }
> +
> +    return rc;
> +}
> +
> +/*
> + * Adds device tree nodes under target node.
> + * We use tr->dt_host_new to unflatten the updated device_tree_flattened. This
> + * is done to avoid the removal of device_tree generation, iomem regions mapping
> + * to hardware domain done by handle_node().
> + */
> +static long handle_add_overlay_nodes(void *overlay_fdt,
> +                                     uint32_t overlay_fdt_size)
> +{
> +    int rc = 0, j = 0, i;
No need for assignments.

> +    struct dt_device_node *overlay_node, *prev_node, *next_node;
> +    struct domain *d = hardware_domain;
> +    struct overlay_track *tr = NULL;
> +    char **nodes_full_path = NULL;
> +    unsigned int new_fdt_size;
> +
> +    tr = xzalloc(struct overlay_track);
> +    if ( tr == NULL )
> +    {
No need for braces.

> +        return -ENOMEM;
> +    }
> +
> +    new_fdt_size = fdt_totalsize(device_tree_flattened) +
> +                                 fdt_totalsize(overlay_fdt);
> +
> +    tr->fdt = xzalloc_bytes(new_fdt_size);
> +    if ( tr->fdt == NULL )
What about the allocated tr? Shouldn't you free it? This also applies to ...

> +        return -ENOMEM;
> +
> +    tr->num_nodes = overlay_node_count(overlay_fdt);
> +    if ( tr->num_nodes == 0 )
> +    {
> +        xfree(tr->fdt);
here and ...

> +        return -ENOMEM;
> +    }
> +
> +    tr->nodes_address = xzalloc_bytes(tr->num_nodes * sizeof(unsigned long));
> +    if ( tr->nodes_address == NULL )
> +    {
> +        xfree(tr->fdt);
here and ...

> +        return -ENOMEM;
> +    }
> +
> +    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
> +    if ( rc )
> +    {
> +        xfree(tr->fdt);
here and ...

> +        return rc;
> +    }
> +
> +    /*
> +     * Keep a copy of overlay_fdt as fdt_overlay_apply will change the input
> +     * overlay's content(magic) when applying overlay.
> +     */
> +    tr->overlay_fdt = xzalloc_bytes(overlay_fdt_size);
> +    if ( tr->overlay_fdt == NULL )
> +    {
> +        xfree(tr->fdt);
here.
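To avoid repeating the frees on every early exit, a centralized-unwind pattern could be used instead. A minimal sketch, with simplified stand-in names rather than the real Xen structures and allocators:

```c
#include <stdlib.h>

/* Hypothetical stand-in for struct overlay_track. */
struct track {
    void *fdt;
    void *nodes;
};

/* All early exits jump to one label that frees whatever was allocated
 * so far; freeing a still-NULL member is a no-op. fail_second simulates
 * a failure after the first allocation succeeded. */
static int setup(struct track **out, int fail_second)
{
    struct track *tr = calloc(1, sizeof(*tr));
    int rc = -1; /* stand-in for -ENOMEM */

    if ( tr == NULL )
        return rc;

    tr->fdt = malloc(16);
    if ( tr->fdt == NULL )
        goto err;

    if ( fail_second )
        goto err;

    tr->nodes = malloc(16);
    if ( tr->nodes == NULL )
        goto err;

    *out = tr;
    return 0;

 err:
    free(tr->fdt);
    free(tr->nodes);
    free(tr);
    return rc;
}
```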

> +        return -ENOMEM;
> +    }
> +
> +    memcpy(tr->overlay_fdt, overlay_fdt, overlay_fdt_size);
> +
> +    spin_lock(&overlay_lock);
> +
> +    memcpy(tr->fdt, device_tree_flattened,
> +           fdt_totalsize(device_tree_flattened));
> +
> +    /* Open tr->fdt with more space to accommodate the overlay_fdt. */
> +    fdt_open_into(tr->fdt, tr->fdt, new_fdt_size);
> +
> +    /*
> +     * overlay_get_nodes_info is called to get the node information from dtbo.
> +     * This is done before fdt_overlay_apply() because the overlay apply will
> +     * erase the magic of overlay_fdt.
> +     */
> +    rc = overlay_get_nodes_info(overlay_fdt, &nodes_full_path,
> +                                tr->num_nodes);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Getting nodes information failed with error %d\n",
> +               rc);
> +        goto err;
> +    }
> +
> +    rc = fdt_overlay_apply(tr->fdt, overlay_fdt);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Adding overlay node failed with error %d\n", rc);
> +        goto err;
> +    }
> +
> +    /*
> +     * Check if any of the node already exists in dt_host. If node already exits
> +     * we can return here as this overlay_fdt is not suitable for overlay ops.
> +     */
> +    for ( j = 0; j < tr->num_nodes; j++ )
> +    {
> +        overlay_node = dt_find_node_by_path(nodes_full_path[j]);
> +        if ( overlay_node != NULL )
> +        {
> +            printk(XENLOG_ERR "node %s exists in device tree\n",
> +                   nodes_full_path[j]);
> +            rc = -EINVAL;
> +            goto err;
> +        }
> +    }
> +
> +    /* Unflatten the tr->fdt into a new dt_host. */
> +    rc = unflatten_device_tree(tr->fdt, &tr->dt_host_new);
> +    if ( rc < 0 )
> +        goto err;
> +
> +    for ( j = 0; j < tr->num_nodes; j++ )
> +    {
> +        dt_dprintk("Adding node: %s\n", nodes_full_path[j]);
> +
> +        /* Find the newly added node in tr->dt_host_new by it's full path. */
> +        overlay_node = device_tree_find_node_by_path(tr->dt_host_new,
> +                                                     nodes_full_path[j]);
> +        if ( overlay_node == NULL )
> +        {
> +            dt_dprintk("%s node not found\n", nodes_full_path[j]);
> +            rc = -EFAULT;
> +            goto remove_node;
> +        }
> +
> +        /*
> +         * Find previous and next node to overlay_node in dt_host_new. We will
> +         * need these nodes to fix the dt_host_new mapping. When overlay_node is
> +         * take out of dt_host_new tree and added to dt_host, link between
> +         * previous node and next_node is broken. We will need to refresh
> +         * dt_host_new with correct linking for any other overlay nodes
> +         * extraction in future.
> +         */
> +        dt_for_each_device_node(tr->dt_host_new, prev_node)
> +            if ( prev_node->allnext == overlay_node )
> +                break;
> +
> +        next_node = dt_find_next_node(tr->dt_host_new, overlay_node);
> +
> +        read_lock(&dt_host->lock);
> +
> +        /* Add the node to dt_host. */
> +        rc = dt_overlay_add_node(overlay_node, overlay_node->parent->full_name);
> +        if ( rc )
> +        {
> +            read_unlock(&dt_host->lock);
> +
> +            /* Node not added in dt_host. */
> +            goto remove_node;
> +        }
> +
> +        read_unlock(&dt_host->lock);
> +
> +        prev_node->allnext = next_node;
> +
> +        overlay_node = dt_find_node_by_path(overlay_node->full_name);
> +        if ( overlay_node == NULL )
> +        {
> +            /* Sanity check. But code will never come here. */
> +            ASSERT_UNREACHABLE();
> +            goto remove_node;
> +        }
> +
> +        rc = handle_add_irq_iommu(d, overlay_node);
Shouldn't you check rc and exit the loop earlier in case of failure?

> +
> +        /* Keep overlay_node address in tracker. */
> +        tr->nodes_address[j] = (unsigned long)overlay_node;
> +    }
> +
> +    INIT_LIST_HEAD(&tr->entry);
> +    list_add_tail(&tr->entry, &overlay_tracker);
> +
> +    spin_unlock(&overlay_lock);
> +
> +    if ( nodes_full_path != NULL )
> +    {
> +        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
> +              i++ )
> +        {
> +            xfree(nodes_full_path[i]);
> +        }
> +        xfree(nodes_full_path);
> +    }
> +
> +    return rc;
> +
> +/*
> + * Failure case. We need to remove the nodes, free tracker(if tr exists) and
> + * tr->dt_host_new.
> + */
> +remove_node:
> +    tr->num_nodes = j;
> +    rc = remove_nodes(tr);
> +
> +    if ( rc )
> +    {
> +        /* If removing node fails, this may cause memory leaks. */
> +        printk(XENLOG_ERR "Removing node failed.\n");
> +        spin_unlock(&overlay_lock);
> +        return rc;
> +    }
> +
> +err:
> +    spin_unlock(&overlay_lock);
> +
> +    xfree(tr->dt_host_new);
> +    xfree(tr->fdt);
> +    xfree(tr->overlay_fdt);
> +    xfree(tr->nodes_address);
> +
> +    if ( nodes_full_path != NULL )
> +    {
> +        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
> +              i++ )
> +        {
> +            xfree(nodes_full_path[i]);
> +        }
> +        xfree(nodes_full_path);
> +    }
> +
> +    xfree(tr);
> +
> +    return rc;
> +}
> +
>  long dt_sysctl(struct xen_sysctl_dt_overlay *op)
>  {
>      long ret = 0;
> @@ -397,6 +854,14 @@ long dt_sysctl(struct xen_sysctl_dt_overlay *op)
> 
>      switch ( op->overlay_op )
>      {
> +    case XEN_SYSCTL_DT_OVERLAY_ADD:
> +        ret = handle_add_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
> +
> +        if ( ret )
> +            xfree(overlay_fdt);
> +
> +        break;
> +
>      case XEN_SYSCTL_DT_OVERLAY_REMOVE:
>          ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
>          xfree(overlay_fdt);
> --
> 2.17.1
> 
> 

~Michal



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 11:19:26 2023
Message-ID: <6099e6fb-0a3e-c6da-2766-d61c2c3d1e96@suse.com>
Date: Tue, 24 Jan 2023 12:19:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest
 areas
Content-Language: en-US
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
 <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com>
 <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
 <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com>
 <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.01.2023 19:32, Tamas K Lengyel wrote:
> On Mon, Jan 23, 2023 at 11:24 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 23.01.2023 17:09, Tamas K Lengyel wrote:
>>> On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>> --- a/xen/arch/x86/mm/mem_sharing.c
>>>> +++ b/xen/arch/x86/mm/mem_sharing.c
>>>> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
>>>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>>>>  }
>>>>
>>>> +static int copy_guest_area(struct guest_area *cd_area,
>>>> +                           const struct guest_area *d_area,
>>>> +                           struct vcpu *cd_vcpu,
>>>> +                           const struct domain *d)
>>>> +{
>>>> +    mfn_t d_mfn, cd_mfn;
>>>> +
>>>> +    if ( !d_area->pg )
>>>> +        return 0;
>>>> +
>>>> +    d_mfn = page_to_mfn(d_area->pg);
>>>> +
>>>> +    /* Allocate & map a page for the area if it hasn't been already.
> */
>>>> +    if ( !cd_area->pg )
>>>> +    {
>>>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
>>>> +        p2m_type_t p2mt;
>>>> +        p2m_access_t p2ma;
>>>> +        unsigned int offset;
>>>> +        int ret;
>>>> +
>>>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL,
> NULL);
>>>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
>>>> +        {
>>>> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain,
>>> 0);
>>>> +
>>>> +            if ( !pg )
>>>> +                return -ENOMEM;
>>>> +
>>>> +            cd_mfn = page_to_mfn(pg);
>>>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
>>>> +
>>>> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K,
>>> p2m_ram_rw,
>>>> +                                 p2m->default_access, -1);
>>>> +            if ( ret )
>>>> +                return ret;
>>>> +        }
>>>> +        else if ( p2mt != p2m_ram_rw )
>>>> +            return -EBUSY;
>>>> +
>>>> +        /*
>>>> +         * Simply specify the entire range up to the end of the page.
>>> All the
>>>> +         * function uses it for is a check for not crossing page
>>> boundaries.
>>>> +         */
>>>> +        offset = PAGE_OFFSET(d_area->map);
>>>> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
>>>> +                             PAGE_SIZE - offset, cd_area, NULL);
>>>> +        if ( ret )
>>>> +            return ret;
>>>> +    }
>>>> +    else
>>>> +        cd_mfn = page_to_mfn(cd_area->pg);
>>>
>>> Everything to this point seems to be non mem-sharing/forking related.
> Could
>>> these live somewhere else? There must be some other place where
> allocating
>>> these areas happens already for non-fork VMs so it would make sense to
> just
>>> refactor that code to be callable from here.
>>
>> It is the "copy" aspect which makes this mem-sharing (or really fork)
>> specific. Plus in the end this is no different from what you have
>> there right now for copying the vCPU info area. In the final patch
>> that other code gets removed by re-using the code here.
> 
> Yes, the copy part is fork-specific. Arguably if there was a way to do the
> allocation of the page for vcpu_info I would prefer that being elsewhere,
> but while the only requirement is allocate-page and copy from parent I'm OK
> with that logic being in here because it's really straight forward. But now
> you also do extra sanity checks here which are harder to comprehend in this
> context alone.

What sanity checks are you talking about (also below, where you claim
map_guest_area() would be used only to sanity check)?

> What if extra sanity checks will be needed in the future? Or
> the sanity checks in the future diverge from where this happens for normal
> VMs because someone overlooks this needing to be synched here too?
> 
>> I also haven't been able to spot anything that could be factored
>> out (and one might expect that if there was something, then the vCPU
>> info area copying should also already have used it). map_guest_area()
>> is all that is used for other purposes as well.
> 
> Well, there must be a location where all this happens for normal VMs as
> well, no?

That's map_guest_area(). What is needed here but not elsewhere is the
populating of the GFN underlying the to-be-mapped area. That's the code
being added here, mirroring what you need to do for the vCPU info page.
Similar code isn't needed elsewhere because the guest-invoked operation
is purely a "map" - the underlying pages are already expected to be
populated (which of course we check, or else we wouldn't know what page
to actually map).

> Why not factor that code so that it can be called from here, so
> that we don't have to track sanity check requirements in two different
> locations? Or for normal VMs that sanity checking bit isn't required? If
> so, why?

As per above, I'm afraid that I'm lost with these questions. I simply
don't know what you're talking about.

>>>> +
>>>> +    copy_domain_page(cd_mfn, d_mfn);
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>>  static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
>>>>  {
>>>>      struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
>>>> @@ -1745,6 +1804,16 @@ static int copy_vcpu_settings(struct dom
>>>>              copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
>>>>          }
>>>>
>>>> +        /* Same for the (physically registered) runstate and time info areas. */
>>>> +        ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
>>>> +                              &d_vcpu->runstate_guest_area, cd_vcpu, d);
>>>> +        if ( ret )
>>>> +            return ret;
>>>> +        ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
>>>> +                              &d_vcpu->arch.time_guest_area, cd_vcpu, d);
>>>> +        if ( ret )
>>>> +            return ret;
>>>> +
>>>>          ret = copy_vpmu(d_vcpu, cd_vcpu);
>>>>          if ( ret )
>>>>              return ret;
>>>> @@ -1987,7 +2056,10 @@ int mem_sharing_fork_reset(struct domain
>>>>
>>>>   state:
>>>>      if ( reset_state )
>>>> +    {
>>>>          rc = copy_settings(d, pd);
>>>> +        /* TBD: What to do here with -ERESTART? */
>>>
>>> Where is ERESTART coming from?
>>
>> From map_guest_area()'s attempt to acquire the hypercall deadlock mutex,
>> in order to then pause the subject vCPU. I suppose that in the forking
>> case it may already be paused, but then there's no way map_guest_area()
>> could know. Looking at the pause count is fragile, as there's no
>> guarantee that the vCPU may be unpaused while we're still doing work on
>> it. Hence I view such checks as only suitable for assertions.
> 
> Since map_guest_area is only used to sanity check, and it only happens when
> the page is being set up for the fork, why can't the sanity check be done
> on the parent?

As above, I'm afraid I simply don't understand what you're asking.

> The parent is guaranteed to be paused when forks are active, so there is
> no ERESTART concern, and from the looks of it, if there is a concern the
> sanity check is looking for, it would be visible on the parent just as
> well as on the fork.

The parent being paused isn't of interest to map_guest_area(). It's the
subject vcpu (i.e. in the forked instance) where we require this. Thinking
of it - the forked domain wasn't started yet, was it? We could then avoid
the pausing (and the acquiring of the hypercall deadlock mutex) based on
->creation_finished still being "false", or even simply based on
v->domain != current->domain. Then there wouldn't be any chance anymore of
-ERESTART making it here.
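The idea sketched above can be modelled in a few lines. This is NOT actual Xen code: the structs, the constant, and map_area() are stand-ins invented for illustration; only the decision logic matters - a vCPU of a domain that has never been started (creation_finished still false) cannot be running, so the pause, the hypercall deadlock mutex, and hence any -ERESTART can be skipped.

```c
#include <stdbool.h>

#define ERESTART 85  /* stand-in value; only the sign and the flow matter */

/* Toy stand-ins for the Xen structures referenced above. */
struct domain {
    bool creation_finished;
};

struct vcpu {
    struct domain *domain;
};

/*
 * Returns 0 on success, -ERESTART when the (modelled) hypercall deadlock
 * mutex is contended and the caller must retry the hypercall.
 */
static int map_area(const struct vcpu *v, bool mutex_busy)
{
    if ( v->domain->creation_finished )
    {
        /* Normal path: must take the mutex and pause the subject vCPU. */
        if ( mutex_busy )
            return -ERESTART;
    }

    /* Fork path: the domain never ran, so no pause is needed. */
    return 0;
}
```

Under this model a never-started fork can never see -ERESTART, regardless of mutex contention.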

> But I would like to understand why that sanity checking
> is required in the first place.

See further up.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:24:28 2023
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <xuwei5@hisilicon.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v4 0/3] Pre-requisite patches for supporting 32 bit physical address
Date: Tue, 24 Jan 2023 12:23:33 +0000
Message-ID: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain

Hi All,

This series includes some patches and fixes identified during the review of
"[XEN v2 00/11] Add support for 32 bit physical address".

Patch 1/3: The previous version caused CI to fail. This patch fixes that.

Patch 2/3: This was pointed out by Jan during the review of
"[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size".
Like Patch 1/3, it can also be considered a prerequisite for supporting
32 bit physical addresses.

Patch 3/3: This was also pointed out by Jan during the review of
"[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size".

Ayan Kumar Halder (3):
  xen/arm: Use the correct format specifier
  xen/drivers: ns16550: Fix the use of simple_strtoul() for extracting
    u64
  xen/drivers: ns16550: Fix an incorrect assignment to uart->io_size

 xen/arch/arm/domain_build.c | 64 +++++++++++++++++++++++--------------
 xen/arch/arm/gic-v2.c       |  6 ++--
 xen/arch/arm/mm.c           |  2 +-
 xen/drivers/char/ns16550.c  |  6 ++--
 4 files changed, 47 insertions(+), 31 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:24:48 2023
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <xuwei5@hisilicon.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v4 1/3] xen/arm: Use the correct format specifier
Date: Tue, 24 Jan 2023 12:23:34 +0000
Message-ID: <20230124122336.40993-2-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
References: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
while creating nodes in the fdt, the address (if present in the node
name) should be represented using 'PRIx64'. This is in conformance
with the following rule from https://elinux.org/Device_Tree_Linux:

. node names
"unit-address does not have leading zeros"

As 'PRIpaddr' introduces leading zeros, we cannot use it.
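The difference can be shown with a small standalone program. format_unit() mirrors what the patch does: plain PRIx64 emits the unit-address without leading zeros. The zero-padded "%016" format mentioned in the comment is a stand-in for what PRIpaddr expands to on arm64 (that expansion is an assumption, not taken from the patch).

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Format "name@unit" the way the patch does.  "%"PRIx64 prints the
 * unit-address without leading zeros ("memory@40000000"), satisfying
 * the device-tree node-name rule.  A zero-padded format such as
 * "%016"PRIx64 would instead yield "memory@0000000040000000", which
 * violates the "no leading zeros" unit-address rule.
 */
static int format_unit(char *buf, size_t size, const char *name,
                       uint64_t unit)
{
    return snprintf(buf, size, "%s@%"PRIx64, name, unit);
}
```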

So, we have introduced a wrapper, i.e. domain_fdt_begin_node(), which
represents the physical address using 'PRIx64'.

2. One should use 'PRIx64' to display a 'u64' in hex format. The current
use of 'PRIpaddr' for printing a PTE is buggy, as a PTE is not a
physical address.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from:

v1 - 1. Moved the patch earlier in the series.
     2. Moved a part of the change from "[XEN v1 8/9] xen/arm: Other
        adaptations required to support 32bit paddr" into this patch.

v2 - 1. Use PRIx64 for appending addresses to fdt node names. This
        fixes the CI failure.

v3 - 1. Added a comment on top of domain_fdt_begin_node().
     2. Check the return value of snprintf() in domain_fdt_begin_node().
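The v3 snprintf() check relies on snprintf() returning the length the complete string would have required, not the number of bytes actually written. A minimal sketch of that pattern (checked_format() and ERR_TRUNCATED are illustrative names, not taken from the patch):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define ERR_TRUNCATED 1  /* stand-in for -FDT_ERR_TRUNCATED */

/*
 * C99 snprintf() returns the number of characters the full string
 * would have needed (excluding the '\0'), even when the buffer was
 * too small.  Comparing that return value against the buffer size is
 * therefore a complete truncation check, as done in
 * domain_fdt_begin_node().
 */
static int checked_format(char *buf, size_t size, const char *name,
                          uint64_t unit)
{
    int ret = snprintf(buf, size, "%s@%"PRIx64, name, unit);

    if ( ret < 0 || (size_t)ret >= size )
        return -ERR_TRUNCATED;  /* buffer needed at least ret + 1 bytes */

    return 0;
}
```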

 xen/arch/arm/domain_build.c | 64 +++++++++++++++++++++++--------------
 xen/arch/arm/gic-v2.c       |  6 ++--
 xen/arch/arm/mm.c           |  2 +-
 3 files changed, 44 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f35f4d2456..81a213cf9a 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1288,6 +1288,39 @@ static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
     return res;
 }
 
+/*
+ * Wrapper to convert a physical address from paddr_t to uint64_t and
+ * invoke fdt_begin_node(). This is required as the physical address
+ * provided as part of the node name must not contain any leading
+ * zeroes. Thus, one should use PRIx64 (instead of PRIpaddr) to append
+ * the unit (which contains the physical address) to the name when
+ * generating a node name.
+ */
+static int __init domain_fdt_begin_node(void *fdt, const char *name,
+                                        uint64_t unit)
+{
+    /*
+     * The size of the buffer to hold the longest possible string, i.e.
+     * "interrupt-controller@" + a 64-bit number + '\0'.
+     */
+    char buf[38];
+    int ret;
+
+    /* ePAPR 3.4 */
+    ret = snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
+
+    if ( ret >= sizeof(buf) )
+    {
+        printk(XENLOG_ERR
+               "Insufficient buffer. Minimum size required is %d\n",
+               ret + 1);
+
+        return -FDT_ERR_TRUNCATED;
+    }
+
+    return fdt_begin_node(fdt, buf);
+}
+
 static int __init make_memory_node(const struct domain *d,
                                    void *fdt,
                                    int addrcells, int sizecells,
@@ -1296,8 +1329,6 @@ static int __init make_memory_node(const struct domain *d,
     unsigned int i;
     int res, reg_size = addrcells + sizecells;
     int nr_cells = 0;
-    /* Placeholder for memory@ + a 64-bit number + \0 */
-    char buf[24];
     __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
     __be32 *cells;
 
@@ -1314,9 +1345,7 @@ static int __init make_memory_node(const struct domain *d,
 
     dt_dprintk("Create memory node\n");
 
-    /* ePAPR 3.4 */
-    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "memory", mem->bank[i].start);
     if ( res )
         return res;
 
@@ -1375,16 +1404,13 @@ static int __init make_shm_memory_node(const struct domain *d,
     {
         uint64_t start = mem->bank[i].start;
         uint64_t size = mem->bank[i].size;
-        /* Placeholder for xen-shmem@ + a 64-bit number + \0 */
-        char buf[27];
         const char compat[] = "xen,shared-memory-v1";
         /* Worst case addrcells + sizecells */
         __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
         __be32 *cells;
         unsigned int len = (addrcells + sizecells) * sizeof(__be32);
 
-        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start);
-        res = fdt_begin_node(fdt, buf);
+        res = domain_fdt_begin_node(fdt, "xen-shmem", mem->bank[i].start);
         if ( res )
             return res;
 
@@ -2716,12 +2742,9 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
     __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
     __be32 *cells;
     const struct domain *d = kinfo->d;
-    /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
-    char buf[38];
 
-    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
-             vgic_dist_base(&d->arch.vgic));
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "interrupt-controller",
+                                vgic_dist_base(&d->arch.vgic));
     if ( res )
         return res;
 
@@ -2771,14 +2794,10 @@ static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
     int res = 0;
     __be32 *reg, *cells;
     const struct domain *d = kinfo->d;
-    /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
-    char buf[38];
     unsigned int i, len = 0;
 
-    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
-             vgic_dist_base(&d->arch.vgic));
-
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "interrupt-controller",
+                                vgic_dist_base(&d->arch.vgic));
     if ( res )
         return res;
 
@@ -2858,11 +2877,8 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
     __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
     __be32 *cells;
     struct domain *d = kinfo->d;
-    /* Placeholder for sbsa-uart@ + a 64-bit number + \0 */
-    char buf[27];
 
-    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIx64, d->arch.vpl011.base_addr);
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "sbsa-uart", d->arch.vpl011.base_addr);
     if ( res )
         return res;
 
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 61802839cb..5d4d298b86 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1049,7 +1049,7 @@ static void __init gicv2_dt_init(void)
     if ( csize < SZ_8K )
     {
         printk(XENLOG_WARNING "GICv2: WARNING: "
-               "The GICC size is too small: %#"PRIx64" expected %#x\n",
+               "The GICC size is too small: %#"PRIpaddr" expected %#x\n",
                csize, SZ_8K);
         if ( platform_has_quirk(PLATFORM_QUIRK_GIC_64K_STRIDE) )
         {
@@ -1280,11 +1280,11 @@ static int __init gicv2_init(void)
         gicv2.map_cbase += aliased_offset;
 
         printk(XENLOG_WARNING
-               "GICv2: Adjusting CPU interface base to %#"PRIx64"\n",
+               "GICv2: Adjusting CPU interface base to %#"PRIpaddr"\n",
                cbase + aliased_offset);
     } else if ( csize == SZ_128K )
         printk(XENLOG_WARNING
-               "GICv2: GICC size=%#"PRIx64" but not aliased\n",
+               "GICv2: GICC size=%#"PRIpaddr" but not aliased\n",
                csize);
 
     gicv2.map_hbase = ioremap_nocache(hbase, PAGE_SIZE);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0fc6f2992d..fab54618ab 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -249,7 +249,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
 
         pte = mapping[offsets[level]];
 
-        printk("%s[0x%03x] = 0x%"PRIpaddr"\n",
+        printk("%s[0x%03x] = 0x%"PRIx64"\n",
                level_strs[level], offsets[level], pte.bits);
 
         if ( level == 3 || !pte.walk.valid || !pte.walk.table )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:24:59 2023
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TwJ1LfxOnpXeWIA1K07X5XJHKcSKQkZLrmyDBfROR78=;
 b=EBvOLbbmAZQ1bbqTGm01RAJ6zZlvgdtM1gFEAL8NZAhnP2ol6fDxVz0vt9togQFRMXuVn2n48rr65rXVc28RhZjhfy22s6CGz5sF05mWZlnFj9LGmuEr2b5ZY8kc5zySplbggBwF/7XNqDRuZ/cFBsc7zhctkj7pEbDg0kWLORL8wUWKA43HwOU2XvqbbMLxKBMn9XH/XyI4/uWBvJTBCOG2bYCY+bWBJKIFovbv20Bp/oJg571VTKg869ftq8muEWE0KgSZu3m+IYk9poU9ZeYsfwNrLownVez9j9a10JKAP/txZaibsuACww2M6/nI48pRFQMEul22qt2KoYeFFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TwJ1LfxOnpXeWIA1K07X5XJHKcSKQkZLrmyDBfROR78=;
 b=Un8EkCK6ClH8ZOO+otxiTkA1imf8Q2BOCnaGKysCDs19F+wd47asG5lhQMT0RjpF28iH0IYU/vMaHWp4AhEnzWXM+5P+PrOBlEIXiNXretH3D90zwMTZCIJKOd3Gx8KUidBmTpmbdoIocGKdmQG3OypT5AgmqdvxKIz6cUKOeF4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <xuwei5@hisilicon.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v4 2/3] xen/drivers: ns16550: Fix the use of simple_strtoul() for extracting u64
Date: Tue, 24 Jan 2023 12:23:35 +0000
Message-ID: <20230124122336.40993-3-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
References: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C96E:EE_|LV2PR12MB5966:EE_
X-MS-Office365-Filtering-Correlation-Id: 5a2be0ed-8941-410a-74a0-08dafe05f363
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	l8gzHSm+U/Rdcj9/xuWRuAlV6PxoIoGSn0JGGYCUk3zeDLdonKdY2RMlYpSeY5jKOLV0QwTmrDhfKSMaFnjTox/qoZCJe8Zd+fXy9XY/SuhiYoyrmvWjZsM0I2J2QYSjEb4UfSmLA9k/2bcWIA+hL/FHMTAfO6dy49AwfsmnsEH7VxvLk23zE/iLs3Ac91phMi8Vrfzd+sMxCncBhgNotxBQrUrAQoX5YssTK4Kij6nTUGVtKPBhV50jt7bk/kDVGbl0ElVmXalpJ8faMgbk2vglef/kDthVsT/mL1mJaTdWYh8h60lsjaQnv6dOb0dZq3h/OPN7gZichfD/Y/K8vVYB9vFFd7VeNJk+1UK+kMK6EhrP8iZmUt2u0AUC/remQAYEAI4qJvhbz2HQ+mI92DTrbnf+2wbYBpmbJHz8Rv66oErRfVuVeIykAGOK4R/n/dHyMJgZHcfnQVmrc2ZIaBNMLA0hSc8dMuXfMLT7C3RdNkBuKpfVjvfUCgA3e2womi4M2JGqU64PJseiH7EJ9UOZbajRcOU/xKSZ+oNJSaTSrzTgfsQVTyoTvF9BMETfO4pQda9qjMTRyMuNT/HghXv5Z/vqHdiyze0LWaHVGLRobMo+0PUfK321fRVfYKQ4hNaxEYymeTyoY9PuOxP6h7jRmKlkTNQpuZzaz/i1t/7EOX4ERGoH6mRfmmgyu+ZYQ14sHsUZI7pueZO+jRqQbBL9gtsIHxkEbmiVc4V3UdQ=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(136003)(346002)(396003)(39860400002)(376002)(451199015)(40470700004)(36840700001)(46966006)(83380400001)(36860700001)(70586007)(86362001)(70206006)(6916009)(4326008)(8676002)(54906003)(316002)(36756003)(40480700001)(186003)(26005)(356005)(478600001)(336012)(81166007)(2616005)(1076003)(103116003)(47076005)(8936002)(7416002)(5660300002)(40460700003)(426003)(82740400003)(82310400005)(2906002)(41300700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jan 2023 12:24:33.7081
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a2be0ed-8941-410a-74a0-08dafe05f363
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C96E.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV2PR12MB5966

Use simple_strtoull() instead of simple_strtoul() when assigning to a
'u64' variable: 'unsigned long long' is guaranteed to be able to hold a
u64 on all supported platforms (ie Arm32, Arm64 and x86), whereas
'unsigned long' is only 32 bits wide on Arm32.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---

Changes from -

v1, v2 - NA (New patch introduced in v3).

v3 - Added Suggested-by and Reviewed-by tags.

 xen/drivers/char/ns16550.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 58d0ccd889..43e1f971ab 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -1532,7 +1532,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
         else
 #endif
         {
-            uart->io_base = simple_strtoul(conf, &conf, 0);
+            uart->io_base = simple_strtoull(conf, &conf, 0);
         }
     }
 
@@ -1603,7 +1603,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
                        "Can't use io_base with dev=pci or dev=amt options\n");
                 break;
             }
-            uart->io_base = simple_strtoul(param_value, NULL, 0);
+            uart->io_base = simple_strtoull(param_value, NULL, 0);
             break;
 
         case irq:
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:25:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 12:25:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483531.749739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKILy-00059C-UZ; Tue, 24 Jan 2023 12:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483531.749739; Tue, 24 Jan 2023 12:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKILy-00058w-Qt; Tue, 24 Jan 2023 12:24:42 +0000
Received: by outflank-mailman (input) for mailman id 483531;
 Tue, 24 Jan 2023 12:24:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KMs4=5V=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pKILx-0004Vt-SC
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 12:24:41 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2080.outbound.protection.outlook.com [40.107.243.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 12d5bbbb-9be2-11ed-91b6-6bf2151ebd3b;
 Tue, 24 Jan 2023 13:24:41 +0100 (CET)
Received: from DM6PR05CA0065.namprd05.prod.outlook.com (2603:10b6:5:335::34)
 by MN0PR12MB5882.namprd12.prod.outlook.com (2603:10b6:208:37a::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Tue, 24 Jan
 2023 12:24:38 +0000
Received: from CY4PEPF0000C973.namprd02.prod.outlook.com
 (2603:10b6:5:335:cafe::a0) by DM6PR05CA0065.outlook.office365.com
 (2603:10b6:5:335::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.16 via Frontend
 Transport; Tue, 24 Jan 2023 12:24:37 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CY4PEPF0000C973.mail.protection.outlook.com (10.167.242.11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.14 via Frontend Transport; Tue, 24 Jan 2023 12:24:36 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 24 Jan
 2023 06:24:36 -0600
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 24 Jan 2023 06:24:34 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12d5bbbb-9be2-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CqCmN4op8+KBFwNOy5VK+5EJxoGcsCTuQCyakfyh/MehtmCpxTGYE03XuFiq3nzsU7f3z8AmYBc4hfOadY2N0b6AygyuuwGVIkJn6IRVMLmNBKv6ntOYWJH5Aqqz+ZaIY87epZpRuiHYJZO+6GZPYgw3E0Szg0sYMMRiTr+c2ssXVLN1R8b4rq66qmKXztxxtCEWP4azXaw5bfrIOFrabfHnH0yzyPKu/+66kMXmLWmsTI5PrH8DZBmp9L58iISCcs5MCKPBJSPCNFvhvMxEnWA+EqGZdtgt1ASwtabunY3sFX0dxdL3b4l7OcjkUGSOauT6JKtNjkwVxGTB7B8KAQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xYkmJ/OdKUndVT1CuPr9J8refiPRMUq7ilyeAaA5ma8=;
 b=YmAgGG+17SjCNW82QeyVPWml+YBWZy3s5jVsy5AtbXIXJZoo3e9X9EhKO3fGOP/fFG3huUInGvor3MgwQ8Mvxzj6UUUDQiq8dpuqleUD+og2T4Rb0BbyeijTuHkgLbyT63c3nIme14UZCkGbnLe9ck4TdfSCgVG8eTKLPNylFUCC4pVbFPaXcdYR7Oy80bp4Udg1/xtdyQrMZ2nuv9wyU5UbGnqQXcWkCOxaKwcoTGReE7iMjvU1lA+1QHLbHyuKZmcxBl6KqnWTRokqDIJ7d4XRqWvzuxoN9DkunAJ2cYwKCecMxHpdY//rsuu67dDisFAkv1XFXa8BzhqWbM0AjQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xYkmJ/OdKUndVT1CuPr9J8refiPRMUq7ilyeAaA5ma8=;
 b=h9MgXR/zNNjYmYAeodUvrt3UgOjnQJf3KTBwvr3M2r/4k1qeH3hBV/uh45Slmk0uWgGe2DAsf/5HHL6dPiqd0Wx9GAdOnStc28FjrDvhyzZSjLu0tY8SKK1rmnk0Fh78alK+JEZwwPN/DGcxEBzLEuPuVpOPLDKKhgbIF+30TX4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <xuwei5@hisilicon.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v4 3/3] xen/drivers: ns16550: Fix an incorrect assignment to uart->io_size
Date: Tue, 24 Jan 2023 12:23:36 +0000
Message-ID: <20230124122336.40993-4-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
References: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C973:EE_|MN0PR12MB5882:EE_
X-MS-Office365-Filtering-Correlation-Id: 6139d27d-a81e-49cd-c9e0-08dafe05f557
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kG+zs+z5V4QE6umXtsi1ccsDnE99BjO/4xR8nbjQ39LFWlBsUdu5ZW7oTaVmtZxW6fnrM+lv+jtLmYaxWk3pnicOBW+RuR6GsOI8b5vsC9OSrigtVRZxMnvMJBXOijDIO0E5XVs5PZkbCcemrwat/eNWXCOBOpa/7I4aaoDdQiO5raSWfdBEo+kPjg9BhsC9pSS9AMBZZ1uueiifygdfTI7MuU0WfPvujdJ1cjLtwctTYJyH1PTRXQiPCbpBl8EpsOC2CqBL8oRa8vYRltDqaQvzfbUtSXnXx7rx36DFg6KSfOIORR5fyYfOmk3gN8XnY0S/uSrDRV1IhLlLkeiFCW6EZLyDzkESdgSQFcHOgbrLqT9Xi0lQU2W0in9U9Ak+gVB7nuhe8VKj4faCf/5NTAF1MH9QXluUPuxgxAHys4YiAPsQYz0UhfELtqBRSmsy8ZelU/U1AdN81of1vYKez31UdshdGvQYInafr7orjjUGoJeihBQJ2hJcP1E3DnpZ5gdTRhBrDP7K2vOzcjZ1jHuu2CsiN92g/1YEv/BkiT/ZSc5Hdvz1WfN9z+sn6STkM2kZdme5SF5r/og6nIaKip6A/yr1VM6s9IjVrn/4GAjrUJ5sjfHsSFYzBRpYo/seBPfE6egyrjxRQb0v88aw9omI5ZScjoL6YxmBSqgPyZGJBgt8azqjO6dCPQf73frEtyxkGuWDbYEtWfcl7N/qlKAJq1Z11MaKDeK0R1fVNGY=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230022)(4636009)(39860400002)(396003)(346002)(376002)(136003)(451199015)(40470700004)(36840700001)(46966006)(36860700001)(36756003)(316002)(8676002)(86362001)(70586007)(4326008)(70206006)(6916009)(54906003)(186003)(26005)(40480700001)(83380400001)(356005)(478600001)(336012)(81166007)(2616005)(103116003)(1076003)(7416002)(5660300002)(47076005)(8936002)(40460700003)(41300700001)(426003)(2906002)(82740400003)(82310400005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jan 2023 12:24:36.9715
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6139d27d-a81e-49cd-c9e0-08dafe05f557
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C973.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB5882

uart->io_size represents a size in bytes, but serial_port.bit_width is
expressed in bits. Convert the value to bytes (rounding up) before
assigning it.

Fixes: 17b516196c ("ns16550: add ACPI support for ARM only")
Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---

Changes from -

v1, v2 - NA (New patch introduced in v3).

v3 - Added Reviewed-by and Reported-by tags.

 xen/drivers/char/ns16550.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 43e1f971ab..092f6b9c4b 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -1870,7 +1870,7 @@ static int __init ns16550_acpi_uart_init(const void *data)
     uart->parity = spcr->parity;
     uart->stop_bits = spcr->stop_bits;
     uart->io_base = spcr->serial_port.address;
-    uart->io_size = spcr->serial_port.bit_width;
+    uart->io_size = DIV_ROUND_UP(spcr->serial_port.bit_width, BITS_PER_BYTE);
     uart->reg_shift = spcr->serial_port.bit_offset;
     uart->reg_width = spcr->serial_port.access_width;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:42:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 12:42:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483547.749752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcZ-0008JM-DN; Tue, 24 Jan 2023 12:41:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483547.749752; Tue, 24 Jan 2023 12:41:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcZ-0008JF-A9; Tue, 24 Jan 2023 12:41:51 +0000
Received: by outflank-mailman (input) for mailman id 483547;
 Tue, 24 Jan 2023 12:41:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jOcK=5V=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pKIcX-0008J9-Lt
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 12:41:49 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 77e892a4-9be4-11ed-91b6-6bf2151ebd3b;
 Tue, 24 Jan 2023 13:41:48 +0100 (CET)
Received: by mail-ej1-x629.google.com with SMTP id v6so38671796ejg.6
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 04:41:48 -0800 (PST)
Received: from uni.router.wind (adsl-208.109.242.227.tellas.gr.
 [109.242.227.208]) by smtp.googlemail.com with ESMTPSA id
 bj10-20020a170906b04a00b0086b0d53cde2sm825419ejb.201.2023.01.24.04.41.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Jan 2023 04:41:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77e892a4-9be4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=And5MUKJQR/wk63gLOcXdYnpoe091+y1Os1GWl1dLpM=;
        b=Qj5nZN5/oOs8U+AUP9KtbZ/bcFpK/SGYOZZLPYtE61gbPS0W0bXkIirUdhqFnZdi4G
         Nx4k/xlgpHqLXSzuRH2rBw177io0iPVf5nkhgeeu4/w74V4dz6QfMJ0VbpsCSIpLWCPS
         RgAZKoQlXIDhrj5OpmRgvk4P6dns29PiKy0FCL8pdqZ/tZDLfp//btQ9hwE+K5PHku8h
         X3vYC08u96mPyhnUj3do7IajIc+//cWuVVblfgP3rnqgNNThzI+J8flpXcpTONd9UzaU
         Xf/6XJvhMK7W33oTdkBlf06NUcfJlD9aoYLwFoMBAjrDFIqHMhZk6+Or9tw9SkqFGHMg
         Vs9Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=And5MUKJQR/wk63gLOcXdYnpoe091+y1Os1GWl1dLpM=;
        b=r4AT+W3/0Dkvnd/z4sJoOXWVapueo+5HN+sxzqGIf2zpxPOaeC8gW0paAQ6bRzMVEf
         GRfQHWsaVD5rSy4OeHMAoDqSzb/t37irqgmW1Ba/MbdITALhS53lgKKWdElH2V71DAh7
         J1SS52K+vT1E8XAiOmFa+tUMOFmQNQ9w/NolOaq27Evptm3L12umXGoKhWrFNDvwEiCK
         v8bV3L4dASFpajcqOySp3YQZ7e/l/XLAUqz8zVUHYKKByFc9ymAyFdceCcZeEikEdoHZ
         0+4PosZytxtbpqoGZJMrEWwML8gXU3oC6PfpCGKk4yFo5w0O4ePzZI2gtJogw30p+D55
         dEOA==
X-Gm-Message-State: AFqh2kraIR5xZUCvl92ijXPagk4GF7XLvJhBTOyx8lYIwWpLF5XNLCzt
	KpxNIaIF1KTo6Si89UsxViM0Qs/DeeM=
X-Google-Smtp-Source: AMrXdXu7CvUxDkCr90FAwG0UrE6PWxE4k7AdI7HBcTxrTyEilRfrbAwIiyxLhIq8VgTWiVF0k65bYw==
X-Received: by 2002:a17:907:ca85:b0:86f:ae1f:9234 with SMTP id ul5-20020a170907ca8500b0086fae1f9234mr31691357ejc.7.1674564107980;
        Tue, 24 Jan 2023 04:41:47 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 0/5] Make x86 IOMMU driver support configurable
Date: Tue, 24 Jan 2023 14:41:37 +0200
Message-Id: <20230124124142.38500-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series makes the x86 IOMMU driver support configurable. Currently,
irrespective of the target platform, both the AMD and Intel IOMMU drivers
are built, because the existing Kconfig infrastructure does not provide
any facilities for finer-grained configuration.

The series adds two new Kconfig options, AMD_IOMMU and INTEL_IOMMU, that can be
used to generate a tailored iommu configuration for a given platform.

This version of the series is rebased on top of the current staging and
addresses the comments made on version 3.
Patches
"[v3 1/8] x86/iommu: amd_iommu_perdev_intremap is AMD-Vi specific"
"[v3 2/8] x86/iommu: iommu_igfx and iommu_qinval are Intel VT-d specific"
"[v3 4/8] x86/acpi: separate AMD-Vi and VT-d specific functions"
are not included in this series because they have been already merged.

Xenia Ragiadakou (5):
  x86/iommu: snoop control is allowed only by Intel VT-d
  x86/iommu: make code addressing CVE-2011-1898 no VT-d specific
  x86/iommu: call pi_update_irte through an hvm_function callback
  x86/dpci: move hvm_dpci_isairq_eoi() to generic HVM code
  x86/iommu: make AMD-Vi and Intel VT-d support configurable

 xen/arch/x86/hvm/vmx/vmx.c               | 41 +++++++++++++++
 xen/arch/x86/include/asm/hvm/hvm.h       | 10 ++++
 xen/arch/x86/include/asm/iommu.h         |  4 --
 xen/drivers/passthrough/Kconfig          | 22 +++++++-
 xen/drivers/passthrough/vtd/intremap.c   | 36 -------------
 xen/drivers/passthrough/vtd/iommu.c      |  5 +-
 xen/drivers/passthrough/vtd/x86/Makefile |  1 -
 xen/drivers/passthrough/vtd/x86/hvm.c    | 64 ------------------------
 xen/drivers/passthrough/x86/hvm.c        | 50 ++++++++++++++++--
 xen/drivers/passthrough/x86/iommu.c      |  5 ++
 xen/include/xen/iommu.h                  |  8 ++-
 11 files changed, 131 insertions(+), 115 deletions(-)
 delete mode 100644 xen/drivers/passthrough/vtd/x86/hvm.c

-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:42:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 12:42:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483548.749761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcc-00007A-LQ; Tue, 24 Jan 2023 12:41:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483548.749761; Tue, 24 Jan 2023 12:41:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcc-000070-ID; Tue, 24 Jan 2023 12:41:54 +0000
Received: by outflank-mailman (input) for mailman id 483548;
 Tue, 24 Jan 2023 12:41:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jOcK=5V=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pKIca-0008Um-Hd
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 12:41:52 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7909615f-9be4-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 13:41:50 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id tz11so38739581ejc.0
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 04:41:50 -0800 (PST)
Received: from uni.router.wind (adsl-208.109.242.227.tellas.gr.
 [109.242.227.208]) by smtp.googlemail.com with ESMTPSA id
 bj10-20020a170906b04a00b0086b0d53cde2sm825419ejb.201.2023.01.24.04.41.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Jan 2023 04:41:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7909615f-9be4-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=zmfm31eZhrV7KPD8IshZRyc6q/g4zltkTtUdKEtttQQ=;
        b=EQFj81wwdwMkQd5RRP8BhkKgYUtuOOVbwCt217519o9TM4cqXC+TKEmtodfCdRYMjn
         bYd3GEsh/dpy7/g84zlbSI9/4SRAjZdlcpIt+kWsh6rgIUhb8V4mD1s1sVN3HePWV2n8
         UrJ/+xkJlKoCrJFU53NA9yap0QxBVFot0BK5Ph24AzexkUnfE7gVvVXyYeN9WbwUzq2C
         GTTxM5GYE465NX4dnqrkQkenKVrg3f4w7tiQWQCav7D9Mx4o2F4ku8kB9UQu6q9UJ98z
         z6zYsv6XlX6Q1meO0bMdQGIXvMK/HI54+4tyG+xlliQ8VQZ8wZlPhJ/kiic2s2rmq7GV
         oZWg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=zmfm31eZhrV7KPD8IshZRyc6q/g4zltkTtUdKEtttQQ=;
        b=BBWlPf23+2aEFFt9vOsRp6RGKgVSyENw6Ga1rYk/j/k5fyrxvR+NSwkWAt+JQnbKNQ
         V/3roW57+Ko1E9/erXPr9YWlasZNjfs2p357MvXXWGNPvgH8DrbduYdGIKVQBpBzAo6H
         UnbRNiiq/GkBblQfXFhzCt+Z/us36A4Vh1y/b02y2AOnFC5vK1R4kQWLqJcR6a6TOwdn
         b0NnkSIypp2IsCPwmlwboBzsDqkwPZ69Xi7uWNzcjNUQA32qpFP8t1268GZL4f6WZna3
         tN9tuWdkKo3Z6mwyohSu8TaHMUdiC58q9M2naKSwzpzSOXLC095nx+DyTc431f5SHVIu
         nh1A==
X-Gm-Message-State: AFqh2ko2Y8N4WyePL8mF3uqH1idh5J/9jZ69kS+M3zYx4WkeQZmAp4kV
	wDzZ3Xl2OCKgdTq3UMnbkxRqQ14j3dM=
X-Google-Smtp-Source: AMrXdXsBSp8DWfrFCuLMLBmMTWvsN+rMvrspA2sLzBE5wcEmUbwdVPYEG1X/KxMGCAdUAQRwS51wxA==
X-Received: by 2002:a17:907:6745:b0:86c:f7ac:71f7 with SMTP id qm5-20020a170907674500b0086cf7ac71f7mr30134092ejc.8.1674564110000;
        Tue, 24 Jan 2023 04:41:50 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 1/5] x86/iommu: snoop control is allowed only by Intel VT-d
Date: Tue, 24 Jan 2023 14:41:38 +0200
Message-Id: <20230124124142.38500-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230124124142.38500-1-burzalodowa@gmail.com>
References: <20230124124142.38500-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The AMD-Vi driver forces coherent accesses by hardwiring the FC bit to 1.
Since iommu_snoop is consulted only when an IOMMU is in use, it can be
reduced to a #define expanding to true when Xen is configured with only
the AMD IOMMU enabled.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

No changes in v4

Depends on Jan's "x86/shadow: make iommu_snoop usage consistent with HAP's"
being applied first.

 xen/include/xen/iommu.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 4f22fc1bed..626731941b 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -74,7 +74,12 @@ extern enum __packed iommu_intremap {
    iommu_intremap_restricted,
    iommu_intremap_full,
 } iommu_intremap;
-extern bool iommu_igfx, iommu_qinval, iommu_snoop;
+extern bool iommu_igfx, iommu_qinval;
+#ifdef CONFIG_INTEL_IOMMU
+extern bool iommu_snoop;
+#else
+# define iommu_snoop true
+#endif /* CONFIG_INTEL_IOMMU */
 #else
 # define iommu_intremap false
 # define iommu_snoop false
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:42:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 12:42:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483549.749772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcd-0000Nl-UJ; Tue, 24 Jan 2023 12:41:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483549.749772; Tue, 24 Jan 2023 12:41:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcd-0000Nc-Qw; Tue, 24 Jan 2023 12:41:55 +0000
Received: by outflank-mailman (input) for mailman id 483549;
 Tue, 24 Jan 2023 12:41:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jOcK=5V=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pKIcc-0008Um-FQ
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 12:41:54 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a554de8-9be4-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 13:41:52 +0100 (CET)
Received: by mail-ej1-x62d.google.com with SMTP id kt14so38689985ejc.3
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 04:41:52 -0800 (PST)
Received: from uni.router.wind (adsl-208.109.242.227.tellas.gr.
 [109.242.227.208]) by smtp.googlemail.com with ESMTPSA id
 bj10-20020a170906b04a00b0086b0d53cde2sm825419ejb.201.2023.01.24.04.41.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Jan 2023 04:41:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a554de8-9be4-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=w2olX0nzQBjitvSin0uiejmf+7loZKGT4TX0YsAFXQ0=;
        b=WXd780q1ocBcbk6sCNzF6Xey6SZmr18KWI+2TiUMqD3MK3Smf3q7iku4p7K+kUGgHU
         CjkjZf248ClmEFSAhMA63gEUAcOyL2+yvG4XlRLuIRMODcMtebpNvez7gV/+2XTA6ZYW
         2VGlerwlLet7vqZyh0enwE2RjGhubyeOggTp3uAyQ35WYi+jxSFxa6ys5QLy8opFKC9h
         0JPKwd4X62elZPmnUccB/dXc8I9zY5SHsvhL9Ii7hjUTQDobLqXg0dV+tFA38K+Zxl0S
         X6tAdUGC30ndH+2ahdZtAgekYpf7alUYP97U/dwk2vtnWmCnbGtfBCQ9JMV83XjpvPpa
         DDQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=w2olX0nzQBjitvSin0uiejmf+7loZKGT4TX0YsAFXQ0=;
        b=xvMQOLOyFr4rAnNEhWjiF52IAqb1lIdhaFHXgLh7c4Wtnmw2WGjt41ye/pmKv/55J3
         bbzRnmc9nXM6MzhhDy/6qijhhxNx+2+VxZMk/Py8D0OeHl+2U+bG2RMFjsRiMPq4NE9/
         8pzLN/PVUJEs3KJ8egQiZp/cYv6Jjom89nX0XPPzUyw++T1Uj7WCLc1wRchqX1+VJ8zj
         Z2unpDImMWT0arFSHXFGdskR8skccm1BN+epC674ZMG0jlHWPvy5084sDya7kLWnnhwj
         H6fzuQXRYRqjMF0dEO7WTb6mDZmxeUSV8VnCPQH6guG3baeVHRYBdEBQbcNwK/UyeYha
         ec2Q==
X-Gm-Message-State: AFqh2koYLLYYXI9bhItxYziIQ+JjIyMABC04ju4x/FQMhWyBSC1OoV1e
	o4UvpO+U07i9ghaZo02+BiiyghVnsB4=
X-Google-Smtp-Source: AMrXdXuoIBMj013AoW65KBsYxEJXpnwAVaZyigFL3YJLPK2E8PyP0ab2oH+ARij84GXj+pgKmauWyQ==
X-Received: by 2002:a17:907:3ad0:b0:870:7c88:4668 with SMTP id fi16-20020a1709073ad000b008707c884668mr25029985ejc.68.1674564111992;
        Tue, 24 Jan 2023 04:41:51 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 2/5] x86/iommu: make code addressing CVE-2011-1898 not VT-d specific
Date: Tue, 24 Jan 2023 14:41:39 +0200
Message-Id: <20230124124142.38500-3-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230124124142.38500-1-burzalodowa@gmail.com>
References: <20230124124142.38500-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variable untrusted_msi indicates whether the system is vulnerable to
CVE-2011-1898 due to the absence of interrupt remapping support.
Although AMD iommus with interrupt remapping disabled are also affected,
this case is not handled yet. Given that the issue is not VT-d specific,
and to accommodate future use of the flag to also cover the AMD iommu
case, move the definition of the flag out of the VT-d specific code into the
common x86 iommu code.

Also, since the current implementation assumes that only PV guests are prone
to this attack, take the opportunity to define untrusted_msi only when PV is
enabled.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---

Changes in v4:
  - in vtd code, guard with CONFIG_PV the use of untrusted_msi
  - mention in commit log that CVE-2011-1898 currently is not addressed for
    AMD iommus with disabled intremap
  - add Jan's Reviewed-by tag

 xen/drivers/passthrough/vtd/iommu.c | 5 ++---
 xen/drivers/passthrough/x86/iommu.c | 5 +++++
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 62e143125d..e97b1fe8cd 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -54,9 +54,6 @@
                                  ? dom_iommu(d)->arch.vtd.pgd_maddr \
                                  : (pdev)->arch.vtd.pgd_maddr)
 
-/* Possible unfiltered LAPIC/MSI messages from untrusted sources? */
-bool __read_mostly untrusted_msi;
-
 bool __read_mostly iommu_igfx = true;
 bool __read_mostly iommu_qinval = true;
 #ifndef iommu_snoop
@@ -2770,6 +2767,7 @@ static int cf_check reassign_device_ownership(
         if ( !has_arch_pdevs(target) )
             vmx_pi_hooks_assign(target);
 
+#ifdef CONFIG_PV
         /*
          * Devices assigned to untrusted domains (here assumed to be any domU)
          * can attempt to send arbitrary LAPIC/MSI messages. We are unprotected
@@ -2778,6 +2776,7 @@ static int cf_check reassign_device_ownership(
         if ( !iommu_intremap && !is_hardware_domain(target) &&
              !is_system_domain(target) )
             untrusted_msi = true;
+#endif
 
         ret = domain_context_mapping(target, devfn, pdev);
 
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f671b0f2bb..c5021ea023 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -36,6 +36,11 @@ bool __initdata iommu_superpages = true;
 
 enum iommu_intremap __read_mostly iommu_intremap = iommu_intremap_full;
 
+#ifdef CONFIG_PV
+/* Possible unfiltered LAPIC/MSI messages from untrusted sources? */
+bool __read_mostly untrusted_msi;
+#endif
+
 #ifndef iommu_intpost
 /*
  * In the current implementation of VT-d posted interrupts, in some extreme
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:42:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 12:42:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483550.749782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcg-0000fZ-6E; Tue, 24 Jan 2023 12:41:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483550.749782; Tue, 24 Jan 2023 12:41:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcg-0000fJ-30; Tue, 24 Jan 2023 12:41:58 +0000
Received: by outflank-mailman (input) for mailman id 483550;
 Tue, 24 Jan 2023 12:41:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jOcK=5V=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pKIcf-0008Um-2J
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 12:41:57 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7ba15fb2-9be4-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 13:41:54 +0100 (CET)
Received: by mail-ed1-x535.google.com with SMTP id s3so18152422edd.4
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 04:41:54 -0800 (PST)
Received: from uni.router.wind (adsl-208.109.242.227.tellas.gr.
 [109.242.227.208]) by smtp.googlemail.com with ESMTPSA id
 bj10-20020a170906b04a00b0086b0d53cde2sm825419ejb.201.2023.01.24.04.41.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Jan 2023 04:41:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ba15fb2-9be4-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1Nu8ZqoSCJrh9/McWp3IN7UXZ38cJPK4OKj3F45cAog=;
        b=d+F72xaSZt1toJYte3aEmpa32g8w7JNzjAHaGcBfFH6Q34IFtABHt4x777I2nbNQLu
         Vis7YVf7NKeBF4uHVZe1b452nHzcSTGlnZccNOjs9VuNOFW+W8gpmT3Vq6U6n7qjSZiq
         AYqAUsoHeMZ4k10oJwb3qpKMZfK2kaIsnFzHlgbMI9iRbUuoKak4FCPYXlDG9J2gKJQ6
         LlqU9yS9tiHNdNsNzgsD41Q3z3dMPpkTKwfHmA5jyP3XaHnU8lEmiSqy48RUSqQK0a4i
         iiv2urgsulfwi4XNAxRF8jY3JLGnVBrRmSCYRxslS6EhyrH8H4WbaqIziO1yOjx6zW0D
         JICA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1Nu8ZqoSCJrh9/McWp3IN7UXZ38cJPK4OKj3F45cAog=;
        b=qmJPtxwgG9Y0YxpQZrGZqrkfOrM7V4Ff45QafnEJX82TBttWi6ipG+AHPGAr5TtsC4
         OZQGvpFkW+ePwknMPIGOJw9SLO7tRJvAo4R1o8j1kJ5wOS6DsHxVNiqk6He1E/lo5E79
         B1Ee41AWKbV8tGz3Lb67GE+t15Q7l3QLlHoY4D5LWwWdrdT7d1LQT8VCMOwOfofxPFc4
         Xc22cTzt6uhbJuJUxXnVVmL7y0WaTfDmuX4jbWEo5ytkluXAJ4vKa7sKmqEaf7xmjLdI
         +r3MKhncQrRj16Ij58ouydCCcdOyllmIoz1IFSPSkAM46/UfggMIoeNglfX6fbxrL7Rm
         PIcA==
X-Gm-Message-State: AO0yUKXd0O8OVnTDOsDbIUcuhAHs3roihlOkhuC0Sg2d5TyobPDHdbnr
	InMh+Dh1pg0/3u5rQQaHitVXCb+Qt34=
X-Google-Smtp-Source: AK7set87mxamyZnHvL9xdTE6v82DviNp4u2S57Bdrz81bYdzYbo2x/+A4hn1xUxMTlXitNRFZVtueQ==
X-Received: by 2002:a05:6402:1745:b0:4a0:8bb5:78cc with SMTP id v5-20020a056402174500b004a08bb578ccmr1697674edx.20.1674564114084;
        Tue, 24 Jan 2023 04:41:54 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v4 3/5] x86/iommu: call pi_update_irte through an hvm_function callback
Date: Tue, 24 Jan 2023 14:41:40 +0200
Message-Id: <20230124124142.38500-4-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230124124142.38500-1-burzalodowa@gmail.com>
References: <20230124124142.38500-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Posted interrupt support in Xen is currently implemented only for Intel
platforms. Instead of calling pi_update_irte() directly from the
common hvm code, add a pi_update_irte callback to the hvm_function_table.
Then, create a wrapper function hvm_pi_update_irte() to be used by the
common hvm code.

In the pi_update_irte callback prototype, pass the vcpu as the first parameter
instead of the posted-interrupt descriptor, which is platform specific, and
remove the const qualifier from the gvec parameter, since it is not needed
and does not compile with the alternative code patching in use.

Since the posted interrupt descriptor is Intel VT-x specific while
msi_msg_write_remap_rte() is iommu specific, open code pi_update_irte() inside
vmx_pi_update_irte(), but replace msi_msg_write_remap_rte() with the generic
iommu_update_ire_from_msi(). That way vmx_pi_update_irte() is no longer bound
to Intel VT-d.

Remove the now unused pi_update_irte() implementation.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

Changes in v4:
  - remove the now unused inclusion of asm/hvm/vmx/vmcs.h

 xen/arch/x86/hvm/vmx/vmx.c             | 41 ++++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/hvm.h     | 10 +++++++
 xen/arch/x86/include/asm/iommu.h       |  4 ---
 xen/drivers/passthrough/vtd/intremap.c | 36 ----------------------
 xen/drivers/passthrough/x86/hvm.c      |  5 ++--
 5 files changed, 53 insertions(+), 43 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ade2a25ce7..270bc98195 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -397,6 +397,43 @@ void vmx_pi_hooks_deassign(struct domain *d)
     domain_unpause(d);
 }
 
+/*
+ * This function is used to update the IRTE for posted-interrupt
+ * when guest changes MSI/MSI-X information.
+ */
+static int cf_check vmx_pi_update_irte(const struct vcpu *v,
+                                       const struct pirq *pirq, uint8_t gvec)
+{
+    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
+    struct irq_desc *desc;
+    struct msi_desc *msi_desc;
+    int rc;
+
+    desc = pirq_spin_lock_irq_desc(pirq, NULL);
+    if ( !desc )
+        return -EINVAL;
+
+    msi_desc = desc->msi_desc;
+    if ( !msi_desc )
+    {
+        rc = -ENODEV;
+        goto unlock_out;
+    }
+    msi_desc->pi_desc = pi_desc;
+    msi_desc->gvec = gvec;
+
+    spin_unlock_irq(&desc->lock);
+
+    ASSERT(pcidevs_locked());
+
+    return iommu_update_ire_from_msi(msi_desc, &msi_desc->msg);
+
+ unlock_out:
+    spin_unlock_irq(&desc->lock);
+
+    return rc;
+}
+
 static const struct lbr_info {
     u32 base, count;
 } p4_lbr[] = {
@@ -2986,8 +3023,12 @@ const struct hvm_function_table * __init start_vmx(void)
     {
         alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
         if ( iommu_intpost )
+        {
             alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
 
+            vmx_function_table.pi_update_irte = vmx_pi_update_irte;
+        }
+
         vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
         vmx_function_table.sync_pir_to_irr     = vmx_sync_pir_to_irr;
         vmx_function_table.test_pir            = vmx_test_pir;
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 65768c797e..80e4565bd2 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -28,6 +28,8 @@
 #include <asm/x86_emulate.h>
 #include <asm/hvm/asid.h>
 
+struct pirq; /* needed by pi_update_irte */
+
 #ifdef CONFIG_HVM_FEP
 /* Permit use of the Forced Emulation Prefix in HVM guests */
 extern bool_t opt_hvm_fep;
@@ -213,6 +215,8 @@ struct hvm_function_table {
     void (*sync_pir_to_irr)(struct vcpu *v);
     bool (*test_pir)(const struct vcpu *v, uint8_t vector);
     void (*handle_eoi)(uint8_t vector, int isr);
+    int (*pi_update_irte)(const struct vcpu *v, const struct pirq *pirq,
+                          uint8_t gvec);
 
     /*Walk nested p2m  */
     int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa,
@@ -776,6 +780,12 @@ static inline void hvm_set_nonreg_state(struct vcpu *v,
         alternative_vcall(hvm_funcs.set_nonreg_state, v, nrs);
 }
 
+static inline int hvm_pi_update_irte(const struct vcpu *v,
+                                     const struct pirq *pirq, uint8_t gvec)
+{
+    return alternative_call(hvm_funcs.pi_update_irte, v, pirq, gvec);
+}
+
 #else  /* CONFIG_HVM */
 
 #define hvm_enabled false
diff --git a/xen/arch/x86/include/asm/iommu.h b/xen/arch/x86/include/asm/iommu.h
index fc0afe35bf..586c7434f2 100644
--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -21,7 +21,6 @@
 #include <asm/apicdef.h>
 #include <asm/cache.h>
 #include <asm/processor.h>
-#include <asm/hvm/vmx/vmcs.h>
 
 #define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
 
@@ -129,9 +128,6 @@ void iommu_identity_map_teardown(struct domain *d);
 
 extern bool untrusted_msi;
 
-int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-                   const uint8_t gvec);
-
 extern bool iommu_non_coherent, iommu_superpages;
 
 static inline void iommu_sync_cache(const void *addr, unsigned int size)
diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
index 1512e4866b..b39bc83282 100644
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -866,39 +866,3 @@ void cf_check intel_iommu_disable_eim(void)
     for_each_drhd_unit ( drhd )
         disable_qinval(drhd->iommu);
 }
-
-/*
- * This function is used to update the IRTE for posted-interrupt
- * when guest changes MSI/MSI-X information.
- */
-int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
-    const uint8_t gvec)
-{
-    struct irq_desc *desc;
-    struct msi_desc *msi_desc;
-    int rc;
-
-    desc = pirq_spin_lock_irq_desc(pirq, NULL);
-    if ( !desc )
-        return -EINVAL;
-
-    msi_desc = desc->msi_desc;
-    if ( !msi_desc )
-    {
-        rc = -ENODEV;
-        goto unlock_out;
-    }
-    msi_desc->pi_desc = pi_desc;
-    msi_desc->gvec = gvec;
-
-    spin_unlock_irq(&desc->lock);
-
-    ASSERT(pcidevs_locked());
-
-    return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
-
- unlock_out:
-    spin_unlock_irq(&desc->lock);
-
-    return rc;
-}
diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index a16e0e5344..e720461a14 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -381,8 +381,7 @@ int pt_irq_create_bind(
 
         /* Use interrupt posting if it is supported. */
         if ( iommu_intpost )
-            pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
-                           info, pirq_dpci->gmsi.gvec);
+            hvm_pi_update_irte(vcpu, info, pirq_dpci->gmsi.gvec);
 
         if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
         {
@@ -672,7 +671,7 @@ int pt_irq_destroy_bind(
             what = "bogus";
     }
     else if ( pirq_dpci && pirq_dpci->gmsi.posted )
-        pi_update_irte(NULL, pirq, 0);
+        hvm_pi_update_irte(NULL, pirq, 0);
 
     if ( pirq_dpci && (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
          list_empty(&pirq_dpci->digl_list) )
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:42:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 12:42:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483551.749792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIch-0000wW-Kl; Tue, 24 Jan 2023 12:41:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483551.749792; Tue, 24 Jan 2023 12:41:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIch-0000wI-HY; Tue, 24 Jan 2023 12:41:59 +0000
Received: by outflank-mailman (input) for mailman id 483551;
 Tue, 24 Jan 2023 12:41:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jOcK=5V=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pKIcf-0008J9-Po
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 12:41:57 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ce0a908-9be4-11ed-91b6-6bf2151ebd3b;
 Tue, 24 Jan 2023 13:41:56 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id ud5so38686018ejc.4
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 04:41:56 -0800 (PST)
Received: from uni.router.wind (adsl-208.109.242.227.tellas.gr.
 [109.242.227.208]) by smtp.googlemail.com with ESMTPSA id
 bj10-20020a170906b04a00b0086b0d53cde2sm825419ejb.201.2023.01.24.04.41.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Jan 2023 04:41:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ce0a908-9be4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=9EgviqXdsqUwriDOWVGOdIK/T7/metkpbG7A4cQJ2uM=;
        b=dwsOZAS5rGzi1Ofj87fXv0WD7WUuIB75h0TEF5Ruc/RaQMx/79fNgMlZvoNKa0k5Nc
         DHjFz6sXcnNXrLLhcy43dlptrgkzZ64vEWGaj40GCxbEqL/wAFeLixT7Pcum+xzNmE/b
         V4u6CebPD8EqbGQK8j3NXWKjDLm2tyOWqwMxF1SD1KbQ7IUFUIzaeCwoReSAgHXT8kaY
         kiw1Fdrjp20reAcib9nqPY9VMYq+/H+/u0wuZsUWJ6JN3+EWnm+nKgeUSU5Yx0F3Qdzb
         JgFEo4N10r8eu18p5XEfuMJ9qiJ98h1/smycwYb+kbN1t16I3hKcZqX0qKpTXHItdpAC
         O9VQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=9EgviqXdsqUwriDOWVGOdIK/T7/metkpbG7A4cQJ2uM=;
        b=aukeWaVH0LuuLimBHY2ZN4/eMi39pEw9mSIGJEwSxzwIStz4XqsI6uDZ29j/b0OZux
         UTKTOeIAy22hvhigiV7A/wv7XX+Tbh8MD7+PqhoE3emErQKnxxgD1vC0ZtlwPCBe2p4p
         RGxleo2iIqwjV3TyNrJXwX+P72nGAJxWLi0c/fo0qdd5fVVtCe/cSJGoP9KGQv+ZNll/
         yRThwJMa2czpIr0Wzyun1W1AllEyERnTqhZFU2AlHq7yFAk0442S3a+wXcewq12Lnjs6
         7S8GNl1cRFbDBLWyXxez90bTZjKZMNWBJ6T1xplCIBXxL5aQtaf4uxey7L+c0pHmZ+B2
         3G/A==
X-Gm-Message-State: AFqh2kqDWhxDgzdLEH5h28wjCAVkVW7drFH6VGSowjZDj+h27ZJjxiuY
	jX1Qt2NJ1A9STR1IOt8be0Q3q9XTLwM=
X-Google-Smtp-Source: AMrXdXuu898LxZLuSz2oDd0XfXT03BihJWbG9k2uXtx2TqQAtr+MEkO3mu7cDr2DGovejAcxS3YntA==
X-Received: by 2002:a17:906:4b4c:b0:871:e336:cd2a with SMTP id j12-20020a1709064b4c00b00871e336cd2amr28087550ejv.47.1674564116369;
        Tue, 24 Jan 2023 04:41:56 -0800 (PST)
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 4/5] x86/dpci: move hvm_dpci_isairq_eoi() to generic HVM code
Date: Tue, 24 Jan 2023 14:41:41 +0200
Message-Id: <20230124124142.38500-5-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230124124142.38500-1-burzalodowa@gmail.com>
References: <20230124124142.38500-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function hvm_dpci_isairq_eoi() has no dependencies on VT-d driver code
and can be moved from xen/drivers/passthrough/vtd/x86/hvm.c to
xen/drivers/passthrough/x86/hvm.c, along with the corresponding copyrights.

Remove the now empty xen/drivers/passthrough/vtd/x86/hvm.c.

Since the function is used only in this file, declare it static.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---

No changes in v4

 xen/drivers/passthrough/vtd/x86/Makefile |  1 -
 xen/drivers/passthrough/vtd/x86/hvm.c    | 64 ------------------------
 xen/drivers/passthrough/x86/hvm.c        | 45 +++++++++++++++++
 xen/include/xen/iommu.h                  |  1 -
 4 files changed, 45 insertions(+), 66 deletions(-)
 delete mode 100644 xen/drivers/passthrough/vtd/x86/hvm.c

diff --git a/xen/drivers/passthrough/vtd/x86/Makefile b/xen/drivers/passthrough/vtd/x86/Makefile
index 4ef00a4c5b..fe20a0b019 100644
--- a/xen/drivers/passthrough/vtd/x86/Makefile
+++ b/xen/drivers/passthrough/vtd/x86/Makefile
@@ -1,3 +1,2 @@
 obj-y += ats.o
-obj-$(CONFIG_HVM) += hvm.o
 obj-y += vtd.o
diff --git a/xen/drivers/passthrough/vtd/x86/hvm.c b/xen/drivers/passthrough/vtd/x86/hvm.c
deleted file mode 100644
index bc776cf7da..0000000000
--- a/xen/drivers/passthrough/vtd/x86/hvm.c
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Copyright (c) 2008, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; If not, see <http://www.gnu.org/licenses/>.
- *
- * Copyright (C) Allen Kay <allen.m.kay@intel.com>
- * Copyright (C) Weidong Han <weidong.han@intel.com>
- */
-
-#include <xen/iommu.h>
-#include <xen/irq.h>
-#include <xen/sched.h>
-
-static int cf_check _hvm_dpci_isairq_eoi(
-    struct domain *d, struct hvm_pirq_dpci *pirq_dpci, void *arg)
-{
-    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
-    unsigned int isairq = (long)arg;
-    const struct dev_intx_gsi_link *digl;
-
-    list_for_each_entry ( digl, &pirq_dpci->digl_list, list )
-    {
-        unsigned int link = hvm_pci_intx_link(digl->device, digl->intx);
-
-        if ( hvm_irq->pci_link.route[link] == isairq )
-        {
-            hvm_pci_intx_deassert(d, digl->device, digl->intx);
-            if ( --pirq_dpci->pending == 0 )
-                pirq_guest_eoi(dpci_pirq(pirq_dpci));
-        }
-    }
-
-    return 0;
-}
-
-void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq)
-{
-    struct hvm_irq_dpci *dpci = NULL;
-
-    ASSERT(isairq < NR_ISAIRQS);
-    if ( !is_iommu_enabled(d) )
-        return;
-
-    write_lock(&d->event_lock);
-
-    dpci = domain_get_irq_dpci(d);
-
-    if ( dpci && test_bit(isairq, dpci->isairq_map) )
-    {
-        /* Multiple mirq may be mapped to one isa irq */
-        pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
-    }
-    write_unlock(&d->event_lock);
-}
diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index e720461a14..6bbd04bf3d 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -14,6 +14,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  *
  * Copyright (C) Allen Kay <allen.m.kay@intel.com>
+ * Copyright (C) Weidong Han <weidong.han@intel.com>
  * Copyright (C) Xiaohui Xin <xiaohui.xin@intel.com>
  */
 
@@ -924,6 +925,50 @@ static void hvm_gsi_eoi(struct domain *d, unsigned int gsi)
     hvm_pirq_eoi(pirq);
 }
 
+static int cf_check _hvm_dpci_isairq_eoi(
+    struct domain *d, struct hvm_pirq_dpci *pirq_dpci, void *arg)
+{
+    const struct hvm_irq *hvm_irq = hvm_domain_irq(d);
+    unsigned int isairq = (long)arg;
+    const struct dev_intx_gsi_link *digl;
+
+    list_for_each_entry ( digl, &pirq_dpci->digl_list, list )
+    {
+        unsigned int link = hvm_pci_intx_link(digl->device, digl->intx);
+
+        if ( hvm_irq->pci_link.route[link] == isairq )
+        {
+            hvm_pci_intx_deassert(d, digl->device, digl->intx);
+            if ( --pirq_dpci->pending == 0 )
+                pirq_guest_eoi(dpci_pirq(pirq_dpci));
+        }
+    }
+
+    return 0;
+}
+
+static void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq)
+{
+    const struct hvm_irq_dpci *dpci = NULL;
+
+    ASSERT(isairq < NR_ISAIRQS);
+
+    if ( !is_iommu_enabled(d) )
+        return;
+
+    write_lock(&d->event_lock);
+
+    dpci = domain_get_irq_dpci(d);
+
+    if ( dpci && test_bit(isairq, dpci->isairq_map) )
+    {
+        /* Multiple mirq may be mapped to one isa irq */
+        pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
+    }
+
+    write_unlock(&d->event_lock);
+}
+
 void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi)
 {
     const struct hvm_irq_dpci *hvm_irq_dpci;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 626731941b..405db59971 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -201,7 +201,6 @@ int hvm_do_IRQ_dpci(struct domain *, struct pirq *);
 int pt_irq_create_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
 int pt_irq_destroy_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
 
-void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq);
 struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *);
 void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
 
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 12:42:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 12:42:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483556.749802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcw-0001vz-14; Tue, 24 Jan 2023 12:42:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483556.749802; Tue, 24 Jan 2023 12:42:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKIcv-0001vX-Rn; Tue, 24 Jan 2023 12:42:13 +0000
Received: by outflank-mailman (input) for mailman id 483556;
 Tue, 24 Jan 2023 12:42:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jOcK=5V=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pKIct-0008Um-Px
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 12:42:11 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84a944e9-9be4-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 13:42:10 +0100 (CET)
Received: by mail-wr1-x42b.google.com with SMTP id h12so9748480wrv.10
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 04:42:10 -0800 (PST)
Received: from uni.router.wind (adsl-208.109.242.227.tellas.gr.
 [109.242.227.208]) by smtp.googlemail.com with ESMTPSA id
 bj10-20020a170906b04a00b0086b0d53cde2sm825419ejb.201.2023.01.24.04.41.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Jan 2023 04:41:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84a944e9-9be4-11ed-b8d1-410ff93cb8f0
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 5/5] x86/iommu: make AMD-Vi and Intel VT-d support configurable
Date: Tue, 24 Jan 2023 14:41:42 +0200
Message-Id: <20230124124142.38500-6-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230124124142.38500-1-burzalodowa@gmail.com>
References: <20230124124142.38500-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Provide the user with configuration control over the IOMMU support by making
the AMD_IOMMU and INTEL_IOMMU options user-selectable and able to be turned off.

However, there are cases where IOMMU support is required, for instance on a
system with more than 254 CPUs. In order to prevent users from unknowingly
disabling it and ending up with a broken hypervisor, make the support
user-selectable only if EXPERT is enabled.

To preserve the current default configuration of an x86 system, both options
depend on X86 and default to Y.
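
As an illustration (not part of the patch itself), a hypothetical xen/.config
fragment with EXPERT enabled and only one of the two drivers selected could
then look like:

```
CONFIG_EXPERT=y
CONFIG_AMD_IOMMU=y
# CONFIG_INTEL_IOMMU is not set
```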

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---

No changes in v4

 xen/drivers/passthrough/Kconfig | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 5c65567744..864fcf3b0c 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -38,10 +38,28 @@ config IPMMU_VMSA
 endif
 
 config AMD_IOMMU
-	def_bool y if X86
+	bool "AMD IOMMU" if EXPERT
+	depends on X86
+	default y
+	help
+	  Enables I/O virtualization on platforms that implement the
+	  AMD I/O Virtualization Technology (IOMMU).
+
+	  If your system includes an IOMMU implementing AMD-Vi, say Y.
+	  This is required if your system has more than 254 CPUs.
+	  If in doubt, say Y.
 
 config INTEL_IOMMU
-	def_bool y if X86
+	bool "Intel VT-d" if EXPERT
+	depends on X86
+	default y
+	help
+	  Enables I/O virtualization on platforms that implement the
+	  Intel Virtualization Technology for Directed I/O (Intel VT-d).
+
+	  If your system includes an IOMMU implementing Intel VT-d, say Y.
+	  This is required if your system has more than 254 CPUs.
+	  If in doubt, say Y.
 
 config IOMMU_FORCE_PT_SHARE
 	bool
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 13:00:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 13:00:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483579.749812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKItw-0004RR-DP; Tue, 24 Jan 2023 12:59:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483579.749812; Tue, 24 Jan 2023 12:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKItw-0004RK-Ae; Tue, 24 Jan 2023 12:59:48 +0000
Received: by outflank-mailman (input) for mailman id 483579;
 Tue, 24 Jan 2023 12:59:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKItv-0004RA-EV; Tue, 24 Jan 2023 12:59:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKItv-0000uU-CA; Tue, 24 Jan 2023 12:59:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKItu-0007B3-Vy; Tue, 24 Jan 2023 12:59:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKItu-0000Qw-VU; Tue, 24 Jan 2023 12:59:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176085-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 176085: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    libvirt:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=7b5777afcbe508a15a509444ff6e951e7201f321
X-Osstest-Versions-That:
    libvirt=57b0678590708de081e4498e164b86d5c8c85024
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 12:59:46 +0000

flight 176085 libvirt real [real]
flight 176093 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176085/
http://logs.test-lab.xenproject.org/osstest/logs/176093/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install         fail pass in 176093-retest
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail pass in 176093-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 176093 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 176009
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 176009
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 176009
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              7b5777afcbe508a15a509444ff6e951e7201f321
baseline version:
 libvirt              57b0678590708de081e4498e164b86d5c8c85024

Last test of basis   176009  2023-01-21 04:23:35 Z    3 days
Testing same since   176085  2023-01-24 04:20:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   57b0678590..7b5777afcb  7b5777afcbe508a15a509444ff6e951e7201f321 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 15:49:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 15:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483600.749846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKLXc-0005dA-NG; Tue, 24 Jan 2023 15:48:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483600.749846; Tue, 24 Jan 2023 15:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKLXc-0005d3-KG; Tue, 24 Jan 2023 15:48:56 +0000
Received: by outflank-mailman (input) for mailman id 483600;
 Tue, 24 Jan 2023 15:48:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pasG=5V=list.ru=valor@srs-se1.protection.inumbo.net>)
 id 1pKLXa-0005cx-Ml
 for xen-devel@lists.xen.org; Tue, 24 Jan 2023 15:48:55 +0000
Received: from smtp40.i.mail.ru (smtp40.i.mail.ru [95.163.41.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9845d4fc-9bfe-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 16:48:50 +0100 (CET)
Received: by smtp40.i.mail.ru with esmtpa (envelope-from <valor@list.ru>)
 id 1pKLXV-008lrn-3x
 for xen-devel@lists.xen.org; Tue, 24 Jan 2023 18:48:49 +0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9845d4fc-9bfe-11ed-b8d1-410ff93cb8f0
Message-ID: <ce6e1346-bfea-f047-4a9e-f19c9c48e851@list.ru>
Date: Tue, 24 Jan 2023 18:48:48 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
To: xen-devel@lists.xen.org
Content-Language: ru, en-US
From: =?UTF-8?B?0JrQvtCy0LDQu9GR0LIg0KHQtdGA0LPQtdC5?= <valor@list.ru>
Subject: Xen Kdump analysis with crash utility
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hello,

I'm trying to start using Kdump in my Xen 4.16 setup with Ubuntu 18.04.6
(kernel 5.4.0-137-generic).

I was able to load the dump-capture kernel with kexec-tools and collect a
crash dump with makedumpfile like this:
```
makedumpfile -E -X -d 0 /proc/vmcore /var/crash/dump
```
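
For context, loading the dump-capture kernel with kexec-tools beforehand looks
roughly like the sketch below. The kernel/initrd paths and the append line are
hypothetical and distro-specific (they are not taken from my setup above), so
the script only assembles and prints the kexec command for review instead of
executing it:

```shell
#!/bin/sh
# Sketch: assemble the kexec invocation that loads a dump-capture kernel.
# All paths and parameters are example values; adjust for your own system.
KVER="5.4.0-137-generic"                  # dump-capture kernel version (assumed)
APPEND="irqpoll nr_cpus=1 reset_devices"  # typical kdump kernel parameters

# -p loads the panic (crash) kernel rather than performing a live kexec.
CMD="kexec -p /boot/vmlinuz-${KVER} --initrd=/boot/initrd.img-${KVER} --append='${APPEND}'"

# Print for review; run it as root once the paths are verified.
echo "${CMD}"
```

After the command has been loaded once, a subsequent panic boots into the
dump-capture kernel, where /proc/vmcore can be saved with makedumpfile as
shown above.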

This dump file can be used to analyze Dom0 panics.

However, I have some issues when analyzing the dump file for the Xen
hypervisor itself:
```
  ~/src/crash/crash --hyper ~/xen-syms-dbg/usr/lib/debug/xen-syms /var/crash/202301241536/dump.202301241536

crash 8.0.2++
...
GNU gdb (GDB) 10.2
...
crash: invalid kernel virtual address: 1ef8  type: "fill_pcpu_struct"
WARNING: cannot fill pcpu_struct.

crash: cannot read cpu_info.
```

As far as I know, the crash utility's developer community doesn't actively
support Xen. From
https://github.com/crash-utility/crash/issues/21#issuecomment-330847410 :
```
I cannot help you with Xen-related issues because Red Hat stopped releasing
Xen kernels several years ago (RHEL5 was the last Red Hat kernel that
contained a Xen kernel).  Since then, ongoing Xen kernel support in the crash
utility has been maintained by engineers who work for other distributions
that still offer Xen kernels.
```

Does anybody use kdump to analyze Xen crashes? Could anybody share some
tips and tricks for using crash or other tools with such dumps?

Thanks a lot.
-- 
Best regards,
Sergey Kovalev



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 15:58:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 15:58:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483607.749856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKLh1-0007Fc-Ls; Tue, 24 Jan 2023 15:58:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483607.749856; Tue, 24 Jan 2023 15:58:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKLh1-0007FV-JF; Tue, 24 Jan 2023 15:58:39 +0000
Received: by outflank-mailman (input) for mailman id 483607;
 Tue, 24 Jan 2023 15:58:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HFQP=5V=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKLgz-0007FM-Dm
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 15:58:37 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2083.outbound.protection.outlook.com [40.107.21.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f383c12f-9bff-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 16:58:35 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6796.eurprd04.prod.outlook.com (2603:10a6:10:11e::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Tue, 24 Jan
 2023 15:58:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Tue, 24 Jan 2023
 15:58:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f383c12f-9bff-11ed-b8d1-410ff93cb8f0
Message-ID: <06a46a30-88a2-2cbe-1c99-7f700e469160@suse.com>
Date: Tue, 24 Jan 2023 16:58:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN v4 2/3] xen/drivers: ns16550: Fix the use of
 simple_strtoul() for extracting u64
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, wl@xen.org,
 xuwei5@hisilicon.com, xen-devel@lists.xenproject.org
References: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
 <20230124122336.40993-3-ayan.kumar.halder@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230124122336.40993-3-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0048.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6796:EE_
X-MS-Office365-Filtering-Correlation-Id: 2365b4a4-c460-46ca-1288-08dafe23d605
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jan 2023 15:58:30.3375
 (UTC)

On 24.01.2023 13:23, Ayan Kumar Halder wrote:
> One should use simple_strtoull() (instead of simple_strtoul()) to
> assign a value to a 'u64' variable: u64 can be represented by
> 'unsigned long long' on all platforms (i.e. Arm32, Arm64 and x86),
> whereas 'unsigned long' is only 32 bits wide on Arm32.
> 
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

While committing I've taken the liberty of replacing "extracting" with
"parsing" in the title.

Jan
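For readers outside the Xen tree, the point generalizes to the standard C
library. Below is a minimal, hypothetical sketch: parse_u64() is an
illustrative helper that does not exist in Xen, and the standard strtoull()
stands in for Xen's simple_strtoull(), whose signature differs.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper for illustration only. strtoull() returns an
 * unsigned long long, which is at least 64 bits on every platform, so
 * assigning the result to a u64 is lossless on Arm32 as well as on
 * Arm64 and x86. strtoul() instead returns an unsigned long, which is
 * only 32 bits wide on Arm32 and would saturate to ULONG_MAX (with
 * errno set to ERANGE) when the input exceeds 32 bits. */
static uint64_t parse_u64(const char *s)
{
    return (uint64_t)strtoull(s, NULL, 0); /* base 0: accept 0x prefix */
}
```

On an ILP32 build, strtoul("0x123456789abcdef0", NULL, 0) would yield
0xffffffff, while the variant above returns the full 64-bit value everywhere.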


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 16:08:19 2023
Message-ID: <023806c5-66f6-aeb2-d53d-7242f5dd7b1d@suse.com>
Date: Tue, 24 Jan 2023 17:08:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 3/5] x86/iommu: call pi_update_irte through an
 hvm_function callback
Content-Language: en-US
To: Xenia Ragiadakou <burzalodowa@gmail.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230124124142.38500-1-burzalodowa@gmail.com>
 <20230124124142.38500-4-burzalodowa@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230124124142.38500-4-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.01.2023 13:41, Xenia Ragiadakou wrote:
> Posted interrupt support in Xen is currently implemented only for
> Intel platforms. Instead of calling pi_update_irte() directly from the
> common hvm code, add a pi_update_irte callback to the hvm_function_table.
> Then, create a wrapper function hvm_pi_update_irte() to be used by the
> common hvm code.
> 
> In the pi_update_irte callback prototype, pass the vcpu as the first
> parameter instead of the platform-specific posted-interrupt descriptor,
> and remove the const qualifier from the parameter gvec, since it is not
> needed and does not compile with the alternative code patching in use.
> 
> Since the posted interrupt descriptor is Intel VT-x specific while
> msi_msg_write_remap_rte() is IOMMU-specific, open-code pi_update_irte()
> inside vmx_pi_update_irte(), but replace msi_msg_write_remap_rte() with
> the generic iommu_update_ire_from_msi(). That way vmx_pi_update_irte()
> is no longer bound to Intel VT-d.
> 
> Remove the now unused pi_update_irte() implementation.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
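The shape of the change can be sketched in plain C. This is a simplified,
hypothetical model of the callback-table pattern the commit message
describes, not the actual Xen code: the struct layout, signatures, and
return values are stand-ins.

```c
#include <stddef.h>

struct vcpu { unsigned int id; };

/* Simplified stand-in for Xen's hvm_function_table: platforms with
 * posted-interrupt support (Intel VT-x) install a hook here; others
 * leave it NULL. */
struct hvm_function_table {
    int (*pi_update_irte)(const struct vcpu *v, unsigned int pirq,
                          unsigned int gvec);
};

/* VT-x implementation; in the real patch this open-codes the old
 * pi_update_irte() and calls the generic iommu_update_ire_from_msi()
 * rather than the VT-d-specific msi_msg_write_remap_rte(). */
static int vmx_pi_update_irte(const struct vcpu *v, unsigned int pirq,
                              unsigned int gvec)
{
    (void)v; (void)pirq; (void)gvec;
    return 0; /* success in this sketch */
}

static const struct hvm_function_table hvm_funcs = {
    .pi_update_irte = vmx_pi_update_irte,
};

/* Wrapper used by common HVM code: it dispatches through the table,
 * so callers need no knowledge of the Intel-specific implementation. */
static int hvm_pi_update_irte(const struct vcpu *v, unsigned int pirq,
                              unsigned int gvec)
{
    return hvm_funcs.pi_update_irte
           ? hvm_funcs.pi_update_irte(v, pirq, gvec)
           : -1; /* no posted-interrupt support on this platform */
}
```

The indirection is what makes the common code platform-agnostic: only the
table initializer mentions VT-x.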




From xen-devel-bounces@lists.xenproject.org Tue Jan 24 16:12:06 2023
Date: Tue, 24 Jan 2023 16:11:47 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Bernhard Beschow <shentey@gmail.com>
CC: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Hervé Poussineau <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Eduardo Habkost <eduardo@habkost.net>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 Chuck Zmudzinski <brchuckz@aol.com>, "Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [PATCH v2 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
Message-ID: <Y9ADQ/Yu8QQD0oyD@perard.uk.xensource.com>
References: <20230104144437.27479-1-shentey@gmail.com>
 <20230118051230-mutt-send-email-mst@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230118051230-mutt-send-email-mst@kernel.org>

On Wed, Jan 18, 2023 at 05:13:03AM -0500, Michael S. Tsirkin wrote:
> On Wed, Jan 04, 2023 at 03:44:31PM +0100, Bernhard Beschow wrote:
> > This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally
> > removes it. The motivation is to 1/ decouple PIIX from Xen and 2/ make
> > Xen in the PC machine agnostic to the precise southbridge being used.
> > 2/ will become particularly interesting once PIIX4 becomes usable in the
> > PC machine, avoiding the "Frankenstein" use of PIIX4_ACPI in PIIX3.
> 
> Looks ok to me.
> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> 
> Feel free to merge through Xen tree.

Hi Bernhard,

The series currently doesn't apply on master, and a quick attempt at
applying the series it is based on also failed. Could you rebase it, or
would you prefer to wait until the other series "Consolidate
PIIX..." is fully applied?

Thanks.

> > Testing done:
> > None, because I don't know how to conduct this properly :(
> > 
> > Based-on: <20221221170003.2929-1-shentey@gmail.com>
> >           "[PATCH v4 00/30] Consolidate PIIX south bridges"

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 16:17:06 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176086-mainreport@xen.org>
Subject: [linux-linus test] 176086: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 16:16:54 +0000

flight 176086 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176086/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-amd64-amd64-libvirt     17 guest-saverestore        fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7bf70dbb18820b37406fdfa2aaf14c2f5c71a11a
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  108 days
Failing since        173470  2022-10-08 06:21:34 Z  108 days  223 attempts
Testing same since   176086  2023-01-24 04:39:45 Z    0 days    1 attempts

------------------------------------------------------------
3438 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 528145 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 16:22:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 16:22:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483631.749896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKM4J-0004Rw-Cj; Tue, 24 Jan 2023 16:22:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483631.749896; Tue, 24 Jan 2023 16:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKM4J-0004Rp-9p; Tue, 24 Jan 2023 16:22:43 +0000
Received: by outflank-mailman (input) for mailman id 483631;
 Tue, 24 Jan 2023 16:22:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HFQP=5V=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKM4H-0004Rj-LK
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 16:22:41 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2052.outbound.protection.outlook.com [40.107.21.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 515abc4c-9c03-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 17:22:38 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9245.eurprd04.prod.outlook.com (2603:10a6:102:2a0::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Tue, 24 Jan
 2023 16:22:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Tue, 24 Jan 2023
 16:22:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 515abc4c-9c03-11ed-b8d1-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <96d6da16-5708-bae4-56a8-9efd4137eff4@suse.com>
Date: Tue, 24 Jan 2023 17:20:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 02/11] xen/arm: add cache coloring initialization
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-3-carlo.nonato@minervasys.tech>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230123154735.74832-3-carlo.nonato@minervasys.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0039.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9245:EE_
X-MS-Office365-Filtering-Correlation-Id: a2d90ed3-12de-45ab-5f10-08dafe2734c0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a2d90ed3-12de-45ab-5f10-08dafe2734c0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jan 2023 16:22:37.0579
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9245

On 23.01.2023 16:47, Carlo Nonato wrote:
> +static unsigned int *alloc_colors(unsigned int num_colors)
> +{
> +    unsigned int *colors = xmalloc_array(unsigned int, num_colors);
> +
> +    if ( !colors )
> +        panic("Unable to allocate LLC colors\n");

Already for Dom0 creation I view this as an unacceptable form of error
"handling". Later you even hook this up to a domctl for DomU creation,
at which point panic()ing is entirely unacceptable.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 16:29:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 16:29:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483636.749905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKMAW-0005CD-1U; Tue, 24 Jan 2023 16:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483636.749905; Tue, 24 Jan 2023 16:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKMAV-0005C6-V4; Tue, 24 Jan 2023 16:29:07 +0000
Received: by outflank-mailman (input) for mailman id 483636;
 Tue, 24 Jan 2023 16:29:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HFQP=5V=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKMAU-0005C0-3l
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 16:29:06 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2042.outbound.protection.outlook.com [40.107.14.42])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 37adac9f-9c04-11ed-91b6-6bf2151ebd3b;
 Tue, 24 Jan 2023 17:29:05 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8699.eurprd04.prod.outlook.com (2603:10a6:20b:43e::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Tue, 24 Jan
 2023 16:29:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Tue, 24 Jan 2023
 16:29:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37adac9f-9c04-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9bfee6d9-9cb2-262e-5a46-91b0bf35d60b@suse.com>
Date: Tue, 24 Jan 2023 17:29:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 04/11] xen: extend domctl interface for cache coloring
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-5-carlo.nonato@minervasys.tech>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230123154735.74832-5-carlo.nonato@minervasys.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0096.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8699:EE_
X-MS-Office365-Filtering-Correlation-Id: e2d5c666-9b71-4274-57dc-08dafe281a58
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e2d5c666-9b71-4274-57dc-08dafe281a58
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jan 2023 16:29:02.2365
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8699

On 23.01.2023 16:47, Carlo Nonato wrote:
> @@ -275,6 +276,19 @@ unsigned int *dom0_llc_colors(unsigned int *num_colors)
>      return colors;
>  }
>  
> +unsigned int *llc_colors_from_guest(struct xen_domctl_createdomain *config)

const struct ...?

> +{
> +    unsigned int *colors;
> +
> +    if ( !config->num_llc_colors )
> +        return NULL;
> +
> +    colors = alloc_colors(config->num_llc_colors);

Error handling needs to occur here; the panic() in alloc_colors() needs
to go away.

> @@ -434,7 +436,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>              rover = dom;
>          }
>  
> -        d = domain_create(dom, &op->u.createdomain, false);
> +        if ( llc_coloring_enabled )
> +        {
> +            llc_colors = llc_colors_from_guest(&op->u.createdomain);
> +            num_llc_colors = op->u.createdomain.num_llc_colors;

I think it would be better to avoid setting num_llc_colors to non-zero
when you got back NULL from the function. It's confusing at best.

> @@ -92,6 +92,10 @@ struct xen_domctl_createdomain {
>      /* CPU pool to use; specify 0 or a specific existing pool */
>      uint32_t cpupool_id;
>  
> +    /* IN LLC coloring parameters */
> +    uint32_t num_llc_colors;
> +    XEN_GUEST_HANDLE(uint32) llc_colors;

Despite your earlier replies I continue to be unconvinced that this
is information which needs to be available right at domain_create.
Without that you'd also get away without the sufficiently odd
domain_create_llc_colored(). (Odd because: Think of two or three
more extended features appearing, all of which want a special cased
domain_create().)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 16:37:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 16:37:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483641.749915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKMIN-0006hx-Q1; Tue, 24 Jan 2023 16:37:15 +0000
Message-ID: <a470be46-ab6e-3970-2b04-6f4035adf1cb@suse.com>
Date: Tue, 24 Jan 2023 17:37:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 01/11] xen/common: add cache coloring common code
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-2-carlo.nonato@minervasys.tech>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230123154735.74832-2-carlo.nonato@minervasys.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.01.2023 16:47, Carlo Nonato wrote:
> @@ -769,6 +776,13 @@ struct domain *domain_create(domid_t domid,
>      return ERR_PTR(err);
>  }
>  
> +struct domain *domain_create(domid_t domid,
> +                             struct xen_domctl_createdomain *config,
> +                             unsigned int flags)
> +{
> +    return domain_create_llc_colored(domid, config, flags, 0, 0);

Please can you use NULL when you mean a null pointer?

> --- /dev/null
> +++ b/xen/include/xen/llc_coloring.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Last Level Cache (LLC) coloring common header
> + *
> + * Copyright (C) 2022 Xilinx Inc.
> + *
> + * Authors:
> + *    Carlo Nonato <carlo.nonato@minervasys.tech>
> + */
> +#ifndef __COLORING_H__
> +#define __COLORING_H__
> +
> +#include <xen/sched.h>
> +#include <public/domctl.h>
> +
> +#ifdef CONFIG_HAS_LLC_COLORING
> +
> +#include <asm/llc_coloring.h>
> +
> +extern bool llc_coloring_enabled;
> +
> +int domain_llc_coloring_init(struct domain *d, unsigned int *colors,
> +                             unsigned int num_colors);
> +void domain_llc_coloring_free(struct domain *d);
> +void domain_dump_llc_colors(struct domain *d);
> +
> +#else
> +
> +#define llc_coloring_enabled (false)

While I agree this is needed, ...

> +static inline int domain_llc_coloring_init(struct domain *d,
> +                                           unsigned int *colors,
> +                                           unsigned int num_colors)
> +{
> +    return 0;
> +}
> +static inline void domain_llc_coloring_free(struct domain *d) {}
> +static inline void domain_dump_llc_colors(struct domain *d) {}

... I don't think you need any of these. Instead, the declarations above
simply need to be visible unconditionally, so the compiler can see them
when processing consuming code. We rely on DCE (dead code elimination) to
remove such references in many other places.

> +#endif /* CONFIG_HAS_LLC_COLORING */
> +
> +#define is_domain_llc_colored(d) (llc_coloring_enabled)
> +
> +#endif /* __COLORING_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> \ No newline at end of file

This wants taking care of: the file should end with a newline.

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -602,6 +602,9 @@ struct domain
>  
>      /* Holding CDF_* constant. Internal flags for domain creation. */
>      unsigned int cdf;
> +
> +    unsigned int *llc_colors;
> +    unsigned int num_llc_colors;
>  };

Why outside of any #ifdef, and why not in struct arch_domain?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 16:51:01 2023
Message-ID: <a74381ce-d204-1f40-7ccc-2be3bbc3ebd1@suse.com>
Date: Tue, 24 Jan 2023 17:50:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 07/11] xen: add cache coloring allocator for domains
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Luca Miccio <lucmiccio@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-8-carlo.nonato@minervasys.tech>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230123154735.74832-8-carlo.nonato@minervasys.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.01.2023 16:47, Carlo Nonato wrote:
> From: Luca Miccio <lucmiccio@gmail.com>
> 
> This commit adds a new memory page allocator that implements the cache
> coloring mechanism. The allocation algorithm follows the given domain color
> configuration and maximizes contiguity in the page selection of multiple
> subsequent requests.
> 
> Pages are stored in a color-indexed array of lists, each one sorted by
> machine address, that is referred to as "colored heap". Those lists are
> filled by a simple init function which computes the color of each page.
> When a domain requests a page, the allocator takes one from those lists
> whose colors equal the domain configuration. It chooses the page with the
> lowest machine address such that contiguous pages are sequentially
> allocated if this is made possible by a color assignment which includes
> adjacent colors.

What use is this with ...

> The allocator can handle only requests with order equal to 0 since the
> single color granularity is represented in memory by one page.

... this restriction? Plus, as I understand it, there's no guarantee of contiguous pages
coming back in any event (because things depend on what may have been
allocated / freed earlier on), so why even give the impression of there
being a way to obtain contiguous pages?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 16:58:14 2023
Message-ID: <85697ead-bbca-bbdd-0fc3-c08d1da43978@suse.com>
Date: Tue, 24 Jan 2023 17:58:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 07/11] xen: add cache coloring allocator for domains
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>,
 xen-devel@lists.xenproject.org
Cc: Luca Miccio <lucmiccio@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-8-carlo.nonato@minervasys.tech>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230123154735.74832-8-carlo.nonato@minervasys.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.01.2023 16:47, Carlo Nonato wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -299,6 +299,20 @@ can be maintained with the pv-shim mechanism.
>      cause Xen not to use Indirect Branch Tracking even when support is
>      available in hardware.
>  
> +### buddy-alloc-size (arm64)
> +> `= <size>`
> +
> +> Default: `64M`
> +
> +Amount of memory reserved for the buddy allocator when the colored allocator
> +is active. This option is available only when `CONFIG_LLC_COLORING` is
> +enabled. The colored allocator is meant as an alternative to the buddy
> +allocator, because its allocation policy is by definition incompatible with
> +the generic one. Since the Xen heap is not colored yet, the two allocators
> +must coexist for now. This optional, expert-only parameter sets the amount
> +of memory reserved for the buddy allocator.
> +
>  ### clocksource (x86)
>  > `= pit | hpet | acpi | tsc`
>  

This hunk looks to be the result of a bad merge, as the new option should
go ahead of "cet", not after it.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 17:07:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 17:07:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483660.749945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKMls-00032c-Nw; Tue, 24 Jan 2023 17:07:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483660.749945; Tue, 24 Jan 2023 17:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKMls-00032V-L2; Tue, 24 Jan 2023 17:07:44 +0000
Received: by outflank-mailman (input) for mailman id 483660;
 Tue, 24 Jan 2023 17:07:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0kvp=5V=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pKMlr-00032P-8x
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 17:07:43 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9c72c4bf-9c09-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 18:07:41 +0100 (CET)
Received: by mail-ej1-x62d.google.com with SMTP id ud5so40846370ejc.4
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 09:07:41 -0800 (PST)
Received: from ?IPv6:::1?
 (p200300faaf0bb2007c8246afca181621.dip0.t-ipconnect.de.
 [2003:fa:af0b:b200:7c82:46af:ca18:1621])
 by smtp.gmail.com with ESMTPSA id
 o25-20020a17090637d900b008536ff0bb44sm1150293ejc.109.2023.01.24.09.07.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 24 Jan 2023 09:07:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c72c4bf-9c09-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4IXISPY5stzync/KUBvKXq0ONfLxyCKnySoOrUOcUuM=;
        b=OFfSeCBodDaa0PquypwkxlnGdqhgExe5Tqmdj54i0tKDCWz5KEbPbM+YhL4kagRWq0
         pLVThJ/3Z2B1f48k/uj1IbPmGaJv27zzbXeeDPITp4wZyJTVHUQdz6hOum32x1CKch4E
         X6imBIQHUYKHD40TqI2rhvWI4U4vzqudSzI/NMwFeisI35VGGlvK431L7XDyiR8cZTNU
         7jbKBs0cFwZhmkHyjsDk18aaPvX6HZfR4xNJDfRCXWYobNMmW3fo/iSO/YpDwkI+GRtQ
         j+8NDATKZBMIsOvPkZGoZ/bb+SXNlE1kbGlSX1eeYRN/yJE9iIFLMKEiA+StjVABdV+s
         9LcA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4IXISPY5stzync/KUBvKXq0ONfLxyCKnySoOrUOcUuM=;
        b=rqbi9VWgX9PHbDCh6p/ELlZ1J0XlYGFxiyOtg0OEEMvhXO2PsC1Q4v1wIldI0m4yBe
         Z3LaLD3Ws5GVseY9V+VnjUHp0Wl1x3T/IU1Yxf5KK73LaFHlnQYH8f1fq2kxZYEUkaaJ
         0nzXnj1/wEtKoS+925+bO8MP6Atr2lfrpbIv5IQpaEytrwko9Nsw7yImAIqPFiqBIX0q
         wMZoAWLzutVOf9W0EUHwrEsDRrbeFM5xzfzMq2giPTZ+4ErkAHxRk8MD2c28LEyLTHRA
         YOF62gxEUaCATR9qFXliHNEABXwAOai7oWUBt6+HKD35Y9OQvm59s6XBzjwHRD503BTN
         fNHQ==
X-Gm-Message-State: AFqh2kpx+XI4v7MuHgLY5ER1tYiS6gP0tKUfXNm5f0ghCAhZxHIJwpnd
	Y0rF196vOmxuCvbxcOpmzvg=
X-Google-Smtp-Source: AMrXdXuxMXeA+QJHu7+lnoisAh5OXs04+/6pXredGjOy405Q3BgjsuIdIBcKce8ALFJUS3/FmbgIqA==
X-Received: by 2002:a17:906:3783:b0:86f:e116:295 with SMTP id n3-20020a170906378300b0086fe1160295mr34112197ejc.4.1674580060813;
        Tue, 24 Jan 2023 09:07:40 -0800 (PST)
Date: Tue, 24 Jan 2023 17:07:30 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Paul Durrant <paul@xen.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Eduardo Habkost <eduardo@habkost.net>,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>,
 Chuck Zmudzinski <brchuckz@aol.com>, "Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [PATCH v2 0/6] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <Y9ADQ/Yu8QQD0oyD@perard.uk.xensource.com>
References: <20230104144437.27479-1-shentey@gmail.com> <20230118051230-mutt-send-email-mst@kernel.org> <Y9ADQ/Yu8QQD0oyD@perard.uk.xensource.com>
Message-ID: <0C2B1FE4-BB48-4C38-9161-6569BA1D6226@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On 24 January 2023 16:11:47 UTC, Anthony PERARD <anthony.perard@citrix.com> wrote:
>On Wed, Jan 18, 2023 at 05:13:03AM -0500, Michael S. Tsirkin wrote:
>> On Wed, Jan 04, 2023 at 03:44:31PM +0100, Bernhard Beschow wrote:
>> > This series first renders TYPE_PIIX3_XEN_DEVICE redundant and finally
>> > removes it. The motivation is to 1/ decouple PIIX from Xen and 2/ to
>> > make Xen in the PC machine agnostic to the precise southbridge being
>> > used. 2/ will become particularly interesting once PIIX4 becomes usable
>> > in the PC machine, avoiding the "Frankenstein" use of PIIX4_ACPI in
>> > PIIX3.
>> 
>> Looks ok to me.
>> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
>> 
>> Feel free to merge through the Xen tree.
>
>Hi Bernhard,

Hi Anthony,

>The series currently doesn't apply on master. And a quick try at
>applying the series it is based on also failed. Could you rebase it, or
>maybe you would prefer to wait until the other series "Consolidate
>PIIX..." is fully applied?

Thanks for looking into it!

You can get the compilable series from
https://patchew.org/QEMU/20230104144437.27479-1-shentey@gmail.com/ . If it
doesn't work for you, let me know, then I can rebase onto master. All
necessary dependencies for the series have been upstreamed in the meantime.

Thanks,
Bernhard
>
>Thanks.
>
>> > Testing done:
>> > None, because I don't know how to conduct this properly :(
>> > 
>> > Based-on: <20221221170003.2929-1-shentey@gmail.com>
>> >           "[PATCH v4 00/30] Consolidate PIIX south bridges"
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 18:07:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 18:07:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483667.749962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKNh7-0001JL-5M; Tue, 24 Jan 2023 18:06:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483667.749962; Tue, 24 Jan 2023 18:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKNh7-0001JE-2K; Tue, 24 Jan 2023 18:06:53 +0000
Received: by outflank-mailman (input) for mailman id 483667;
 Tue, 24 Jan 2023 18:06:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKNh5-0001J8-BB
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 18:06:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKNh4-00010X-9f; Tue, 24 Jan 2023 18:06:50 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKNh4-0007v2-2C; Tue, 24 Jan 2023 18:06:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=MtoYxdNXe+9/rcUSwzHrdCqqZexWYYTDy12Qbz3+P7w=; b=R3lo7Kh9Y9pEufOSLzj+4AYvsI
	NgK5EuK8E6y1VjzUYjNVgSPgDGAgrNz5l20uhcdDEjE7z4yYYLewPFBxJZVFlvINAuDkyanC2l317
	Z5XSE1aPZNuM+KEiWGfNY8ovHKIBM8P9yv8Qbhwt78uESAHi6gGEHzjjcNW8e+1tZwNM=;
Message-ID: <8995c20f-7d0d-5138-b802-d70c116b84e7@xen.org>
Date: Tue, 24 Jan 2023 18:06:47 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 22/22] xen/arm64: Allow the admin to enable/disable the
 directmap
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-23-julien@xen.org>
 <alpine.DEB.2.22.394.2301231437170.1978264@ubuntu-linux-20-04-desktop>
 <92c4daa2-d841-3109-c1ec-4bdb088d6670@xen.org>
 <alpine.DEB.2.22.394.2301231605291.1978264@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301231605291.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 24/01/2023 00:12, Stefano Stabellini wrote:
> On Mon, 23 Jan 2023, Julien Grall wrote:
> Ah yes! I see it now that we are relying on the same area
> (alloc_xenheap_pages calls page_to_virt() then map_pages_to_xen()).
> 
> map_pages_to_xen() is able to create pagetable entries at every level,
> but we need to make sure they are shared across per-cpu pagetables. To
> make that happen, we are pre-creating the L0/L1 entries here so that
> they become common across all per-cpu pagetables and then we let
> map_pages_to_xen() do its job.
> 
> Did I understand it right?

Your understanding is correct.

>> I can add summary in the commit message.
> 
> I would suggest to improve a bit the in-code comment on top of
> setup_directmap_mappings. I might also be able to help with the text
> once I am sure I understood what is going on :-)

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 18:16:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 18:16:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483674.749972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKNqH-0002vq-46; Tue, 24 Jan 2023 18:16:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483674.749972; Tue, 24 Jan 2023 18:16:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKNqH-0002vj-1G; Tue, 24 Jan 2023 18:16:21 +0000
Received: by outflank-mailman (input) for mailman id 483674;
 Tue, 24 Jan 2023 18:16:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKNqG-0002vd-3C
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 18:16:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKNqD-0001Kk-QT; Tue, 24 Jan 2023 18:16:17 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKNqD-0008Nt-JB; Tue, 24 Jan 2023 18:16:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=NY7JwPPR50y+fz7nwQdvaJwsfN4uh4lspWJSX8WfcUQ=; b=oOfkjowMIrpX675yS6yWavVdaZ
	RZq3h5KKZgzaI5jDOnvGIJEOmFiYLo+RwOyKVuGN7O0kqH+O2Bd3nQcpGDgK2vow0XU9x1pOZJnjL
	GgSGOdpykyacfVwbgQTRBBagC1kuDPXhv27yzm3EYcsSS3FlGLIdXC4IfGk0ZKbFP/8E=;
Message-ID: <631b5639-0f48-a872-219b-607d0a521960@xen.org>
Date: Tue, 24 Jan 2023 18:16:14 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v4 1/3] xen/arm: Use the correct format specifier
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, xuwei5@hisilicon.com
References: <20230124122336.40993-1-ayan.kumar.halder@amd.com>
 <20230124122336.40993-2-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230124122336.40993-2-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 24/01/2023 12:23, Ayan Kumar Halder wrote:
> 1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
> while creating nodes in fdt, the address (if present in the node name)
> should be represented using 'PRIx64'. This is to be in conformance
> with the following rule present in https://elinux.org/Device_Tree_Linux
> 
> . node names
> "unit-address does not have leading zeros"
> 
> As 'PRIpaddr' introduces leading zeros, we cannot use it.
> 
> So, we have introduced a wrapper, i.e. domain_fdt_begin_node(), which will
> represent the physical address using 'PRIx64'.
> 
> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
> address.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from -
> 
> v1 - 1. Moved the patch earlier.
> 2. Moved a part of change from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr"
> into this patch.
> 
> v2 - 1. Use PRIx64 for appending addresses to fdt node names. This fixes the CI failure.
> 
> v3 - 1. Added a comment on top of domain_fdt_begin_node().
> 2. Check for the return of snprintf() in domain_fdt_begin_node().
> 
>   xen/arch/arm/domain_build.c | 64 +++++++++++++++++++++++--------------
>   xen/arch/arm/gic-v2.c       |  6 ++--
>   xen/arch/arm/mm.c           |  2 +-
>   3 files changed, 44 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f35f4d2456..81a213cf9a 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1288,6 +1288,39 @@ static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
>       return res;
>   }
>   
> +/*
> + * Wrapper to convert physical address from paddr_t to uint64_t and
> + * invoke fdt_begin_node(). This is required as the physical address
> + * provided as a part of node name should not contain any leading

s/as a part/as part/ ?

> + * zeroes. Thus, one should use PRIx64 (instead of PRIpaddr) to append
> + * unit (which contains the physical address) with name to generate a
> + * node name.
> + */
> +static int __init domain_fdt_begin_node(void *fdt, const char *name,
> +                                        uint64_t unit)
> +{
> +    /*
> +     * The size of the buffer to hold the longest possible string ie

I think this should be "i.e." and there is possibly a missing full stop 
before hand.

> +     * interrupt-controller@ + a 64-bit number + \0
> +     */
> +    char buf[38];
> +    int ret;
> +
> +    /* ePAPR 3.4 */
> +    ret = snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
> +
> +    if ( ret >= sizeof(buf) )
> +    {
> +        printk(XENLOG_ERR
> +               "Insufficient buffer. Minimum size required is %d\n",
> +               ( ret + 1 ));

The parenthesis are unnecessary. But if you want them, then you should 
not add a space after ( and before ).

> +
> +        return -FDT_ERR_TRUNCATED;
> +    }
> +
> +    return fdt_begin_node(fdt, buf);
> +}
> +

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 18:22:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 18:22:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483679.749981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKNvs-0004LK-Oj; Tue, 24 Jan 2023 18:22:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483679.749981; Tue, 24 Jan 2023 18:22:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKNvs-0004LC-MA; Tue, 24 Jan 2023 18:22:08 +0000
Received: by outflank-mailman (input) for mailman id 483679;
 Tue, 24 Jan 2023 18:22:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKNvr-0004L5-BU
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 18:22:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKNvq-0001Qa-Sa; Tue, 24 Jan 2023 18:22:06 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKNvq-0000DU-MX; Tue, 24 Jan 2023 18:22:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=vVB2uH+yR7egd+Uws0JxTVET7xDa/Cfg533Zfr/q1FY=; b=FxdcLXUsoOqUFmVx+xO7pF1Qm/
	2j5YQM1QmdUPJZ+4wDiHPjO2KdPVQOGAcNREvlmEKBQxJv426JxpkyRjcnVvNufME0vVmr0/5GvLN
	iiW/cK6B02/D/ZgSv6zP2F1WI0P2spqgaS8AXlRSFMsqkD09zKi0IctOmzYAtMt/4pPA=;
Message-ID: <25264dca-acf6-7ad1-e8a5-a1b893eab30d@xen.org>
Date: Tue, 24 Jan 2023 18:22:04 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and
 p2m_final_teardown()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, Henry Wang <Henry.Wang@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <wei.chen@arm.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-4-Henry.Wang@arm.com>
 <d9861060-22ba-5fce-eef6-a7f2ef01526a@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d9861060-22ba-5fce-eef6-a7f2ef01526a@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 20/01/2023 10:23, Michal Orzel wrote:
> Hi Henry,
> 
> On 16/01/2023 02:58, Henry Wang wrote:
>>
>>
>> With the change in the previous patch, the initial 16 pages in the P2M
>> pool are not necessary anymore. Drop them to simplify the code.
>>
>> Also, the call to p2m_teardown() from arch_domain_destroy() is no longer
>> necessary since the P2M allocation moved out of arch_domain_create().
>> Drop the code and the in-code comment mentioning it.
>>
>> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
>> ---
>> I am not entirely sure if I should also drop the "TODO" on top of
>> p2m_set_entry(), because although we are sure no p2m pages are
>> populated at the domain_create() stage now, we are not sure whether
>> anyone will add more in the future... Any comments?
>> ---
>>   xen/arch/arm/include/asm/p2m.h |  4 ----
>>   xen/arch/arm/p2m.c             | 20 +-------------------
>>   2 files changed, 1 insertion(+), 23 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
>> index bf5183e53a..cf06d3cc21 100644
>> --- a/xen/arch/arm/include/asm/p2m.h
>> +++ b/xen/arch/arm/include/asm/p2m.h
>> @@ -200,10 +200,6 @@ int p2m_init(struct domain *d);
>>    *  - p2m_final_teardown() will be called when domain struct is been
>>    *    freed. This *cannot* be preempted and therefore one small
>>    *    resources should be freed here.
>> - *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
>> - *  free the P2M when failures happen in the domain creation with P2M pages
>> - *  already in use. In this case p2m_teardown() is called non-preemptively and
>> - *  p2m_teardown() will always return 0.
>>    */
>>   int p2m_teardown(struct domain *d, bool allow_preemption);
>>   void p2m_final_teardown(struct domain *d);
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 7de7d822e9..d41a316d18 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1744,13 +1744,9 @@ void p2m_final_teardown(struct domain *d)
>>       /*
>>        * No need to call relinquish_p2m_mapping() here because
>>        * p2m_final_teardown() is called either after domain_relinquish_resources()
>> -     * where relinquish_p2m_mapping() has been called, or from failure path of
>> -     * domain_create()/arch_domain_create() where mappings that require
>> -     * p2m_put_l3_page() should never be created. For the latter case, also see
>> -     * comment on top of the p2m_set_entry() for more info.
>> +     * where relinquish_p2m_mapping() has been called.
>>        */
>>
>> -    BUG_ON(p2m_teardown(d, false));
> Because you remove this,
>>       ASSERT(page_list_empty(&p2m->pages));
> you no longer need this assert, right?
I think the ASSERT() is still useful as it at least shows that the pages
should have been freed before the call to p2m_final_teardown().

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 18:28:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 18:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483684.749992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKO1f-00055U-DA; Tue, 24 Jan 2023 18:28:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483684.749992; Tue, 24 Jan 2023 18:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKO1f-00055N-9x; Tue, 24 Jan 2023 18:28:07 +0000
Received: by outflank-mailman (input) for mailman id 483684;
 Tue, 24 Jan 2023 18:28:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKO1e-000551-6I
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 18:28:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKO1d-0001WU-MH; Tue, 24 Jan 2023 18:28:05 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKO1d-0000MG-Bl; Tue, 24 Jan 2023 18:28:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=F4RyW7ieQKR2gtA7gaemy2f7d3xMbPhWD7eLZ+R1iOY=; b=DGhLiTu3HG0aTldb4lmA9HhieW
	/KEQO7OnNei1f7uMcUSE4onId/E5f9s2i5zmVJkHswAKs7UPEWtTui/enlgRec0mQbI6wqNfkvskj
	rIco85ZYxoCEu2wJ87HhIza13QaQJWywqZXh+SXpXM4zLRXG1nU2nVW0k863SZ8HN6e0=;
Message-ID: <59f4d24a-44cf-fa8f-bdac-2af036f2cd30@xen.org>
Date: Tue, 24 Jan 2023 18:28:03 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 1/3] xen/arm: Reduce redundant clear root pages when
 teardown p2m
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, Henry Wang <Henry.Wang@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <wei.chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-2-Henry.Wang@arm.com>
 <36821aa0-4e88-57f7-3f8b-35ba0529fabf@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <36821aa0-4e88-57f7-3f8b-35ba0529fabf@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 20/01/2023 09:43, Michal Orzel wrote:
> Hi Henry,
> 
> On 16/01/2023 02:58, Henry Wang wrote:
>>
>>
>> Currently, p2m for a domain will be teardown from two paths:
>> (1) The normal path when a domain is destroyed.
>> (2) The arch_domain_destroy() in the failure path of domain creation.
>>
>> When tearing down the p2m from (1), clearing and cleaning the root
>> only needs to be done once rather than on every call of p2m_teardown().
>> If the p2m teardown comes from (2), clearing and cleaning the root
>> is unnecessary because the domain is never scheduled.
>>
>> Therefore, this patch introduces a helper `p2m_clear_root_pages()` to
>> clear and clean the root, and moves this logic outside of
>> p2m_teardown(). With this movement, the `page_list_empty(&p2m->pages)`
>> check can be dropped.
>>
>> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
>> ---
>> Was: [PATCH v2] xen/arm: Reduce redundant clear root pages when
>> teardown p2m. Picked into this series with changes from the original v1:
>> 1. Introduce a new PROGRESS state for p2m_clear_root_pages() to avoid
>>     multiple calls when p2m_teardown() is preempted.
>> 2. Move p2m_force_tlb_flush_sync() to p2m_clear_root_pages().
>> ---
>>   xen/arch/arm/domain.c          | 12 ++++++++++++
>>   xen/arch/arm/include/asm/p2m.h |  1 +
>>   xen/arch/arm/p2m.c             | 34 ++++++++++++++--------------------
>>   3 files changed, 27 insertions(+), 20 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 99577adb6c..961dab9166 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -959,6 +959,7 @@ enum {
>>       PROG_xen,
>>       PROG_page,
>>       PROG_mapping,
>> +    PROG_p2m_root,
>>       PROG_p2m,
>>       PROG_p2m_pool,
>>       PROG_done,
>> @@ -1021,6 +1022,17 @@ int domain_relinquish_resources(struct domain *d)
>>           if ( ret )
>>               return ret;
>>
>> +    PROGRESS(p2m_root):
>> +        /*
>> +         * We are about to free the intermediate page-tables, so clear the
>> +         * root to prevent any walk from using them.
> The comment from here...
>> +         * The domain will not be scheduled anymore, so in theory we should
>> +         * not need to flush the TLBs. Do it as a safety measure.
>> +         * Note that all the devices have already been de-assigned. So we don't
>> +         * need to flush the IOMMU TLB here.
>> +         */
> to here does not make much sense in this place and should be moved to p2m_clear_root_pages(),
> where a user can see the call to p2m_force_tlb_flush_sync().

+1

> Apart from that:
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 18:55:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 18:55:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483691.750005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKORa-000085-GL; Tue, 24 Jan 2023 18:54:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483691.750005; Tue, 24 Jan 2023 18:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKORa-00007y-D6; Tue, 24 Jan 2023 18:54:54 +0000
Received: by outflank-mailman (input) for mailman id 483691;
 Tue, 24 Jan 2023 18:54:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKORZ-00007s-B6
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 18:54:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKORY-00025e-TI; Tue, 24 Jan 2023 18:54:52 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKORY-0001Rv-MS; Tue, 24 Jan 2023 18:54:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=qiKkbjNHba8n4ibZQ1qefH3703iGasc0Lc7/wN4aHn0=; b=nfoSOeuZ1gqyRWoDjv+JqC1q5O
	zxbi04sZpXfUMJs3lPfpSicRJbZgr8HSs7WoGEA1fDDburKwQ9PnZoww7z7wSRNqw50qDL8wKT+vY
	ZPoqTYdpvccfGOnoesyeCK4pvo8JQnh4dEs4KZJc04TirYlBEft9aWAz4BXY4VIfPDVA=;
Message-ID: <af8cec04-9817-0830-d989-b7453abdd931@xen.org>
Date: Tue, 24 Jan 2023 18:54:50 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 12/40] xen/mpu: introduce helpers for MPU enablement
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-13-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-13-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> We need a new helper for Xen to enable the MPU at boot time.
> The new helper is semantically consistent with the original enable_mmu.
> 
> If the Background region is enabled, then the MPU uses the default memory
> map as the Background region for generating the memory
> attributes when the MPU is disabled.
> Since the default memory map of the Armv8-R AArch64 architecture is
> IMPLEMENTATION DEFINED, we always turn off the Background region.

You are saying this. But I don't see any code below clearing 
SCTLR_EL2.BR. Can you clarify?

> 
> In this patch, we also introduce a neutral name enable_mm for
> Xen to enable MMU/MPU. This can help us to keep one code flow
> in head.S

NIT: Missing full stop.

> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>   xen/arch/arm/arm64/head.S     |  5 +++--
>   xen/arch/arm/arm64/head_mmu.S |  4 ++--
>   xen/arch/arm/arm64/head_mpu.S | 19 +++++++++++++++++++
>   3 files changed, 24 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 145e3d53dc..7f3f973468 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -258,7 +258,8 @@ real_start_efi:
>            * and memory regions for MPU systems.
>            */
>           bl    prepare_early_mappings
> -        bl    enable_mmu
> +        /* Turn on MMU or MPU */
> +        bl    enable_mm
>   
>           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
>           ldr   x0, =primary_switched
> @@ -316,7 +317,7 @@ GLOBAL(init_secondary)
>           bl    check_cpu_mode
>           bl    cpu_init
>           bl    prepare_early_mappings
> -        bl    enable_mmu
> +        bl    enable_mm
>   
>           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
>           ldr   x0, =secondary_switched
> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> index 2346f755df..b59c40495f 100644
> --- a/xen/arch/arm/arm64/head_mmu.S
> +++ b/xen/arch/arm/arm64/head_mmu.S
> @@ -217,7 +217,7 @@ ENDPROC(prepare_early_mappings)
>    *
>    * Clobbers x0 - x3
>    */
> -ENTRY(enable_mmu)
> +ENTRY(enable_mm)
>           PRINT("- Turning on paging -\r\n")
>   
>           /*
> @@ -239,7 +239,7 @@ ENTRY(enable_mmu)
>           msr   SCTLR_EL2, x0          /* now paging is enabled */
>           isb                          /* Now, flush the icache */
>           ret
> -ENDPROC(enable_mmu)
> +ENDPROC(enable_mm)
>   
>   /*
>    * Remove the 1:1 map from the page-tables. It is not easy to keep track
> diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
> index 0b97ce4646..e2ac69b0cc 100644
> --- a/xen/arch/arm/arm64/head_mpu.S
> +++ b/xen/arch/arm/arm64/head_mpu.S
> @@ -315,6 +315,25 @@ ENDPROC(prepare_early_mappings)
>   
>   GLOBAL(_end_boot)
>   
> +/*
> + * Enable EL2 MPU and data cache
> + * If the Background region is enabled, then the MPU uses the default memory
> + * map as the Background region for generating the memory
> + * attributes when MPU is disabled.
> + * Since the default memory map of the Armv8-R AArch64 architecture is
> + * IMPLEMENTATION DEFINED, we intend to turn off the Background region here.

Please document which registers you are clobbering. See the MMU code for 
examples of how to do it.

> + */
> +ENTRY(enable_mm)
> +    mrs   x0, SCTLR_EL2
> +    orr   x0, x0, #SCTLR_Axx_ELx_M    /* Enable MPU */
> +    orr   x0, x0, #SCTLR_Axx_ELx_C    /* Enable D-cache */
> +    orr   x0, x0, #SCTLR_Axx_ELx_WXN  /* Enable WXN */
> +    dsb   sy

Please document the reason for each dsb. In this case, it is not entirely 
clear what this one is for.

> +    msr   SCTLR_EL2, x0
> +    isb

Likewise for the isb.

> +    ret
> +ENDPROC(enable_mm)
> +
Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 19:09:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 19:09:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483696.750015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKOfK-0001tG-Kt; Tue, 24 Jan 2023 19:09:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483696.750015; Tue, 24 Jan 2023 19:09:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKOfK-0001t9-IF; Tue, 24 Jan 2023 19:09:06 +0000
Received: by outflank-mailman (input) for mailman id 483696;
 Tue, 24 Jan 2023 19:09:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKOfJ-0001t3-0U
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 19:09:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKOfI-0002M9-LZ; Tue, 24 Jan 2023 19:09:04 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKOfI-0002Ff-CE; Tue, 24 Jan 2023 19:09:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=TJNXNMFwokljr+TcruYlTqOvD9ILefY2hlX2fJ0KNhY=; b=Ulk0gUs/7OfUTw/8SeNfi//KSq
	HBoBgalLzaIJqiOlpNIm7W5asXT7jLKuEXccKjia37A0tRgQlobABKPnmsoSAMepYel5zOK91zhiG
	6p6rrTEMC9Hyvo0mxVdcAzV5F32aGQRO9UOEQkOOYETy+f0ugKtE9jYK+RHrdn9/viVw=;
Message-ID: <23f49916-dd2a-a956-1e6b-6dbb41a8817b@xen.org>
Date: Tue, 24 Jan 2023 19:09:02 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-14-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-14-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> In an MMU system, we map the UART in the fixmap (when earlyprintk is used).
> However, in an MPU system, we map the UART with a transient MPU memory
> region.
> 
> So we introduce a new unified function setup_early_uart to replace
> the previous setup_fixmap.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>   xen/arch/arm/arm64/head.S               |  2 +-
>   xen/arch/arm/arm64/head_mmu.S           |  4 +-
>   xen/arch/arm/arm64/head_mpu.S           | 52 +++++++++++++++++++++++++
>   xen/arch/arm/include/asm/early_printk.h |  1 +
>   4 files changed, 56 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 7f3f973468..a92883319d 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -272,7 +272,7 @@ primary_switched:
>            * afterwards.
>            */
>           bl    remove_identity_mapping
> -        bl    setup_fixmap
> +        bl    setup_early_uart
>   #ifdef CONFIG_EARLY_PRINTK
>           /* Use a virtual address to access the UART. */
>           ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> index b59c40495f..a19b7c873d 100644
> --- a/xen/arch/arm/arm64/head_mmu.S
> +++ b/xen/arch/arm/arm64/head_mmu.S
> @@ -312,7 +312,7 @@ ENDPROC(remove_identity_mapping)
>    *
>    * Clobbers x0 - x3
>    */
> -ENTRY(setup_fixmap)
> +ENTRY(setup_early_uart)

This function does more than enable the early UART. It also sets up 
the fixmap even when earlyprintk is not configured.

I am not entirely sure what the name should be. Maybe this needs to be 
split further.

>   #ifdef CONFIG_EARLY_PRINTK
>           /* Add UART to the fixmap table */
>           ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
> @@ -325,7 +325,7 @@ ENTRY(setup_fixmap)
>           dsb   nshst
>   
>           ret
> -ENDPROC(setup_fixmap)
> +ENDPROC(setup_early_uart)
>   
>   /* Fail-stop */
>   fail:   PRINT("- Boot failed -\r\n")
> diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
> index e2ac69b0cc..72d1e0863d 100644
> --- a/xen/arch/arm/arm64/head_mpu.S
> +++ b/xen/arch/arm/arm64/head_mpu.S
> @@ -18,8 +18,10 @@
>   #define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
>   #define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
>   #define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
> +#define REGION_DEVICE_PRBAR     0x22    /* SH=10 AP=00 XN=10 */
>   
>   #define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
> +#define REGION_DEVICE_PRLAR     0x09    /* NS=0 ATTR=100 EN=1 */
>   
>   /*
>    * Macro to round up the section address to be PAGE_SIZE aligned
> @@ -334,6 +336,56 @@ ENTRY(enable_mm)
>       ret
>   ENDPROC(enable_mm)
>   
> +/*
> + * Map the early UART with a new transient MPU memory region.
> + *

Missing "Inputs: "

> + * x27: region selector
> + * x28: prbar
> + * x29: prlar
> + *
> + * Clobbers x0 - x4
> + *
> + */
> +ENTRY(setup_early_uart)
> +#ifdef CONFIG_EARLY_PRINTK
> +    /* stack LR as write_pr will be called later like nested function */
> +    mov   x3, lr
> +
> +    /*
> +     * MPU region for early UART is a transient region, since it will be
> +     * replaced by specific device memory layout when FDT gets parsed.

I would rather not mention "FDT" here because this code is independent 
of the firmware table used.

However, any reason to use a transient region rather than the one that 
will be used for the UART driver?

> +     */
> +    load_paddr x0, next_transient_region_idx
> +    ldr   x4, [x0]
> +
> +    ldr   x28, =CONFIG_EARLY_UART_BASE_ADDRESS
> +    and   x28, x28, #MPU_REGION_MASK
> +    mov   x1, #REGION_DEVICE_PRBAR
> +    orr   x28, x28, x1

This needs some documentation to explain the logic. Maybe even a macro.

> +
> +    ldr x29, =(CONFIG_EARLY_UART_BASE_ADDRESS + EARLY_UART_SIZE)
> +    roundup_section x29

Does this mean we could give access to more than necessary? Shouldn't we 
instead prevent compilation if the size doesn't align with the section size?

> +    /* Limit address is inclusive */
> +    sub   x29, x29, #1
> +    and   x29, x29, #MPU_REGION_MASK
> +    mov   x2, #REGION_DEVICE_PRLAR
> +    orr   x29, x29, x2
> +
> +    mov   x27, x4

This needs some documentation like:

x27: region selector

See how we documented the existing helpers.

> +    bl    write_pr
> +
> +    /* Create a new entry in xen_mpumap for early UART */
> +    create_mpu_entry xen_mpumap, x4, x28, x29, x1, x2
> +
> +    /* Update next_transient_region_idx */
> +    sub   x4, x4, #1
> +    str   x4, [x0]
> +
> +    mov   lr, x3
> +    ret
> +#endif
> +ENDPROC(setup_early_uart)
> +
>   /*
>    * Local variables:
>    * mode: ASM
> diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
> index 44a230853f..d87623e6d5 100644
> --- a/xen/arch/arm/include/asm/early_printk.h
> +++ b/xen/arch/arm/include/asm/early_printk.h
> @@ -22,6 +22,7 @@
>    * for EARLY_UART_VIRTUAL_ADDRESS.
>    */
>   #define EARLY_UART_VIRTUAL_ADDRESS CONFIG_EARLY_UART_BASE_ADDRESS
> +#define EARLY_UART_SIZE            0x1000

Shouldn't this be PAGE_SIZE? If not, how did you come up with the number?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 19:21:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 19:21:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483703.750025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKOqh-0004Jx-Q5; Tue, 24 Jan 2023 19:20:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483703.750025; Tue, 24 Jan 2023 19:20:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKOqh-0004Jq-NR; Tue, 24 Jan 2023 19:20:51 +0000
Received: by outflank-mailman (input) for mailman id 483703;
 Tue, 24 Jan 2023 19:20:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKOqh-0004Jk-1Z
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 19:20:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKOqg-0002jQ-NJ; Tue, 24 Jan 2023 19:20:50 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKOqg-0002bt-HI; Tue, 24 Jan 2023 19:20:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=q7kJYGeYvDl/GmALDTlhFfE2TblM4Q+GFsoTUC9C/rE=; b=zEnzKUW3K29MiBbiJyDaKaBOoz
	etVZMOs0xYEb2isjvqGtr8tNcvUkws/Sy+6B0PJDoliunMGWvgskfYepwtXniGMRZs4/mN+UHgoBD
	UIMba+FCWChkf2Ar19BR0Vg0n4cMVaXgMFJuLtD5ZsbAzRt8GUbMPSE1Jioo7v12X6tQ=;
Message-ID: <9530e5d4-d621-97e5-a7d5-0c928d030ff5@xen.org>
Date: Tue, 24 Jan 2023 19:20:48 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 18/40] xen/mpu: introduce helper
 access_protection_region
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-19-Penny.Zheng@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113052914.3845596-19-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> Each EL2 MPU protection region can be configured using PRBAR<n>_EL2 and
> PRLAR<n>_EL2.
> 
> This commit introduces a new helper access_protection_region() to access
> EL2 MPU protection region, including both read/write operations.
> 
> As explained in section G1.3.18 of the reference manual for Armv8-R
> AArch64, the system registers PRBAR<n>_EL2 and PRLAR<n>_EL2 provide access
> to the EL2 MPU region determined by the value of 'n' and
> PRSELR_EL2.REGION, as PRSELR_EL2.REGION<7:4>:n (n = 0, 1, 2, ..., 15).
> For example, to access regions 16 to 31:
> - Set PRSELR_EL2 to 0b1xxxx
> - Region 16 configuration is accessible through PRBAR0_EL2 and PRLAR0_EL2
> - Region 17 configuration is accessible through PRBAR1_EL2 and PRLAR1_EL2
> - Region 18 configuration is accessible through PRBAR2_EL2 and PRLAR2_EL2
> - ...
> - Region 31 configuration is accessible through PRBAR15_EL2 and PRLAR15_EL2
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>   xen/arch/arm/mm_mpu.c | 151 ++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 151 insertions(+)
> 
> diff --git a/xen/arch/arm/mm_mpu.c b/xen/arch/arm/mm_mpu.c
> index c9e17ab6da..f2b494449c 100644
> --- a/xen/arch/arm/mm_mpu.c
> +++ b/xen/arch/arm/mm_mpu.c
> @@ -46,6 +46,157 @@ uint64_t __ro_after_init next_transient_region_idx;
>   /* Maximum number of supported MPU memory regions by the EL2 MPU. */
>   uint64_t __ro_after_init max_xen_mpumap;
>   
> +/* Write a MPU protection region */
> +#define WRITE_PROTECTION_REGION(sel, pr, prbar_el2, prlar_el2) ({       \
> +    uint64_t _sel = sel;                                                \
> +    const pr_t *_pr = pr;                                               \
> +    asm volatile(                                                       \
> +        "msr "__stringify(PRSELR_EL2)", %0;" /* Selects the region */   \

This is an open-coding version of WRITE_SYSREG(). Can we use it instead?

> +        "dsb sy;"                                                       \

What is this dsb for? Also is the 'sy' really necessary?

> +        "msr "__stringify(prbar_el2)", %1;" /* Write PRBAR<n>_EL2 */    \

WRITE_SYSREG()?

> +        "msr "__stringify(prlar_el2)", %2;" /* Write PRLAR<n>_EL2 */    \

WRITE_SYSREG()?

> +        "dsb sy;"                                                       \

Same remark about the dsb. But I would consider moving the dsb and the 
selection part outside of the macro, so they could live outside of the 
switch and reduce the generated code.

> +        : : "r" (_sel), "r" (_pr->prbar.bits), "r" (_pr->prlar.bits));  \
> +})
> +
> +/* Read a MPU protection region */
> +#define READ_PROTECTION_REGION(sel, prbar_el2, prlar_el2) ({            \

My comments on WRITE_PROTECTION_REGION also apply here. But you would want 
to use READ_SYSREG() for the 'mrs'.

> +    uint64_t _sel = sel;                                                \
> +    pr_t _pr;                                                           \
> +    asm volatile(                                                       \
> +        "msr "__stringify(PRSELR_EL2)", %2;" /* Selects the region */   \
> +        "dsb sy;"                                                       \
> +        "mrs %0, "__stringify(prbar_el2)";" /* Read PRBAR<n>_EL2 */     \
> +        "mrs %1, "__stringify(prlar_el2)";" /* Read PRLAR<n>_EL2 */     \
> +        "dsb sy;"                                                       \
> +        : "=r" (_pr.prbar.bits), "=r" (_pr.prlar.bits) : "r" (_sel));   \
> +    _pr;                                                                \
> +})
> +
> +/*
> + * Access MPU protection region, including both read/write operations.
> + * Armv8-R AArch64 at most supports 255 MPU protection regions.
> + * See section G1.3.18 of the reference manual for Armv8-R AArch64,
> + * PRBAR<n>_EL2 and PRLAR<n>_EL2 provide access to the EL2 MPU region
> + * determined by the value of 'n' and PRSELR_EL2.REGION as
> + * PRSELR_EL2.REGION<7:4>:n(n = 0, 1, 2, ... , 15)
> + * For example to access regions from 16 to 31 (0b10000 to 0b11111):
> + * - Set PRSELR_EL2 to 0b1xxxx
> + * - Region 16 configuration is accessible through PRBAR0_ELx and PRLAR0_ELx
> + * - Region 17 configuration is accessible through PRBAR1_ELx and PRLAR1_ELx
> + * - Region 18 configuration is accessible through PRBAR2_ELx and PRLAR2_ELx
> + * - ...
> + * - Region 31 configuration is accessible through PRBAR15_ELx and PRLAR15_ELx
> + *
> + * @read: if it is read operation.
> + * @pr_read: mpu protection region returned by read op.
> + * @pr_write: const mpu protection region passed through write op.
> + * @sel: mpu protection region selector
> + */
> +static void access_protection_region(bool read, pr_t *pr_read,
> +                                     const pr_t *pr_write, uint64_t sel)

I would prefer introducing two helpers (one for the read operation, the 
other for the write operation). This would make the code a bit easier 
to read.

> +{
> +    switch ( sel & 0xf )
> +    {
> +    case 0:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR0_EL2, PRLAR0_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR0_EL2, PRLAR0_EL2);
> +        break;
> +    case 1:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR1_EL2, PRLAR1_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR1_EL2, PRLAR1_EL2);
> +        break;
> +    case 2:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR2_EL2, PRLAR2_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR2_EL2, PRLAR2_EL2);
> +        break;
> +    case 3:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR3_EL2, PRLAR3_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR3_EL2, PRLAR3_EL2);
> +        break;
> +    case 4:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR4_EL2, PRLAR4_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR4_EL2, PRLAR4_EL2);
> +        break;
> +    case 5:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR5_EL2, PRLAR5_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR5_EL2, PRLAR5_EL2);
> +        break;
> +    case 6:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR6_EL2, PRLAR6_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR6_EL2, PRLAR6_EL2);
> +        break;
> +    case 7:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR7_EL2, PRLAR7_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR7_EL2, PRLAR7_EL2);
> +        break;
> +    case 8:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR8_EL2, PRLAR8_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR8_EL2, PRLAR8_EL2);
> +        break;
> +    case 9:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR9_EL2, PRLAR9_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR9_EL2, PRLAR9_EL2);
> +        break;
> +    case 10:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR10_EL2, PRLAR10_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR10_EL2, PRLAR10_EL2);
> +        break;
> +    case 11:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR11_EL2, PRLAR11_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR11_EL2, PRLAR11_EL2);
> +        break;
> +    case 12:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR12_EL2, PRLAR12_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR12_EL2, PRLAR12_EL2);
> +        break;
> +    case 13:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR13_EL2, PRLAR13_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR13_EL2, PRLAR13_EL2);
> +        break;
> +    case 14:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR14_EL2, PRLAR14_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR14_EL2, PRLAR14_EL2);
> +        break;
> +    case 15:
> +        if ( read )
> +            *pr_read = READ_PROTECTION_REGION(sel, PRBAR15_EL2, PRLAR15_EL2);
> +        else
> +            WRITE_PROTECTION_REGION(sel, pr_write, PRBAR15_EL2, PRLAR15_EL2);
> +        break;

What if the caller passes a number higher than 15?

> +    }
> +}
> +
>   /* TODO: Implementation on the first usage */
>   void dump_hyp_walk(vaddr_t addr)
>   {

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 19:31:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 19:31:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483709.750038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKP0U-0005uZ-Pl; Tue, 24 Jan 2023 19:30:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483709.750038; Tue, 24 Jan 2023 19:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKP0U-0005uS-Mu; Tue, 24 Jan 2023 19:30:58 +0000
Received: by outflank-mailman (input) for mailman id 483709;
 Tue, 24 Jan 2023 19:30:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKP0T-0005uM-7g
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 19:30:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKP0S-0002tl-QK; Tue, 24 Jan 2023 19:30:56 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKP0S-0002we-K2; Tue, 24 Jan 2023 19:30:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=7YdYPJl312OnHEj+UbDLejAvKnI79E7H68dLcKXm0eE=; b=ntku88/Nzcguov7lpxUwVrUSuH
	R9smszwqprQsVodjomKvbV90fbIhMsw2bRA2mfPJyN5XhOU+Udlt0E/S0Tdpb3saymldfqxWm8p8K
	HiMNAQ/0Sin/fKqw4t8vCG28JObVWd+n/WPXSpPfafNGaEt2ScdHcgcODZjjG+XywxyI=;
Message-ID: <0b185f5f-6d0c-8c4c-1332-b064befdc3ee@xen.org>
Date: Tue, 24 Jan 2023 19:30:54 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 05/14] xen/arm: Clean-up the memory layout
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Michal Orzel <michal.orzel@amd.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-6-julien@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113101136.479-6-julien@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/01/2023 10:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> In a follow-up patch, the base address for the common mappings will
> vary between arm32 and arm64. To avoid any duplication, define
> every mapping in the common region from the previous one.
> 
> Take the opportunity to:
>      * add missing *_SIZE for FIXMAP_VIRT_* and XEN_VIRT_*
>      * switch to MB()/GB() to avoid hexadecimal (easier to read)
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

There was a context conflict with Michal's recent patch (see 
b2220f85256a). As this is minor, I have decided to handle it on commit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 19:32:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 19:32:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483713.750047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKP1T-0006QO-2T; Tue, 24 Jan 2023 19:31:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483713.750047; Tue, 24 Jan 2023 19:31:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKP1S-0006QH-W6; Tue, 24 Jan 2023 19:31:58 +0000
Received: by outflank-mailman (input) for mailman id 483713;
 Tue, 24 Jan 2023 19:31:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KMs4=5V=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pKP1R-0006Ph-Od
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 19:31:58 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2060.outbound.protection.outlook.com [40.107.220.60])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1552b3e-9c1d-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 20:31:55 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by CY8PR12MB7609.namprd12.prod.outlook.com (2603:10b6:930:99::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Tue, 24 Jan
 2023 19:31:50 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%6]) with mapi id 15.20.6002.033; Tue, 24 Jan 2023
 19:31:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1552b3e-9c1d-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hKwODMGGQJZ2dq3ya76qTegSOFq4NLF3YDQw3zDl6Js71kYY9QRiPCPtzWFbnOwOgKEPX3rsySWRHhhmuLeVbLddSQTn8xkwVPrb2YLeqRFS0RavZzdsZ5pLknNfxYpSwM69FCdT3GjKMZYGQoYh2U3w9FVgyCQUnc3n4k9fyebO20WLzugv7majYAO5I4QNUaiY7q5s+q4b2Nxeg5c+77CN07b6/vDXy5eqRxcG+t0y8zekIolTLwB91OtWSa/P7uVRUa2BDudYF1R9vXtK2kdTzbBJw+w3jDD2q47PQixxmlRfaiG0F6Jbe8yXfQvdA7pFn4jWCOqFYIZHDoBZUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CAMDv2Hz/qbJSvTbWP8NB24C++T7IAE0be5qijZjxb8=;
 b=TFmCdiaGXd7YqN90ox23/d0Tx/Jfh//Oo0io9ap442B1/0jl3XlM3YOu0l0ELgJ6/zkece5Uq9zxx9lFTjejbSddQ2rcLAPqsY3jaZs/utE7I2BQAuzPwyf64O7eW/ZFBocHbKebbtderZMNGrzSHKg5DmOJWlA1pvywC7YMjl4jme5cDChUKqmNxsWr93mG0P7ZBSytHoShBvXedwwvuBBH431XIALpXudHMq32jXxuqVE1Gjqh1wSbhWyewf9LujkU7oT7ZSJwxAlgFSb6H9OHDiCyBw+1x+HjykcHrMfKnEvW6tYbL8jZXlQKOvcymJyyCwQeTjL7yJMz4ZuG3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CAMDv2Hz/qbJSvTbWP8NB24C++T7IAE0be5qijZjxb8=;
 b=mybJJ2iHl2fCk6Z3JSHA1ukZz2S6ApG+j7GLW4VPrsqjCWGDT/aRr4Fr+thdr4lEHlcGwVoOu3qjOv7RTMVD/SWLl02pbe1LmXofoJ1nQpvpEJ267JgzeBRJJxHVGs5JgBcqAmgZgK875qW0AaUc9pRxmJYbfs7Pn4S024sQB1g=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <b3ae278f-b7a0-9273-fbc7-a3362905818d@amd.com>
Date: Tue, 24 Jan 2023 19:31:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 00/41] xen/arm: Add Armv8-R64 MPU support to Xen -
 Part#1
To: Penny Zheng <Penny.Zheng@arm.com>, xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230113052914.3845596-1-Penny.Zheng@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0358.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18d::21) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|CY8PR12MB7609:EE_
X-MS-Office365-Filtering-Correlation-Id: 5b335160-9192-47bd-a259-08dafe41a3be
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b335160-9192-47bd-a259-08dafe41a3be
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jan 2023 19:31:50.3292
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: obnVnAhPyzV69rI0W9Cun/wraAeLdN+cnTHoWxm6kQWHv1CTOJzp6eqJhpQd0gqArlrm2qO5YnRFwuIH87bZrA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB7609

Hi Penny,

On 13/01/2023 05:28, Penny Zheng wrote:
> The Armv8-R architecture profile was designed to support use cases
> that have a high sensitivity to deterministic execution (e.g. fuel
> injection, brake control, drive trains, motor control, etc.).
>
> Arm announced Armv8-R in 2013; it is the latest generation of the Arm
> architecture targeted at the Real-time profile. It introduces
> virtualization at the highest security level while retaining the
> Protected Memory System Architecture (PMSA) based on a Memory
> Protection Unit (MPU). In 2020, Arm announced the Cortex-R82, the
> first Arm 64-bit Cortex-R processor, based on Armv8-R64. The latest
> Armv8-R64 documentation can be found at [1]. The features of the
> Armv8-R64 architecture are:
>    - An exception model that is compatible with the Armv8-A model
>    - Virtualization with support for guest operating systems
>    - PMSA virtualization using MPUs in EL2.
>    - Adds support for the 64-bit A64 instruction set.
>    - Supports up to 48-bit physical addressing.
>    - Supports three Exception Levels (ELs)
>          - Secure EL2 - The Highest Privilege
>          - Secure EL1 - RichOS (MMU) or RTOS (MPU)
>          - Secure EL0 - Application Workloads
>   - Supports only a single Security state - Secure.
>   - MPU in EL1 & EL2 is configurable, MMU in EL1 is configurable.
>
> This patch series implements Armv8-R64 MPU support for Xen,
> based on the discussion in
> "Proposal for Porting Xen to Armv8-R64 - DraftC" [2].
>
> We will implement the Armv8-R64 and MPU support in three stages:
> 1. Boot Xen itself to idle thread, do not create any guests on it.
> 2. Support to boot MPU and MMU domains on Armv8-R64 Xen.
> 3. SMP and other advanced features of Xen support on Armv8-R64.
>
> As guest support is not implemented in this part#1 series of MPU
> support, Xen cannot create any guests at boot time. So in this
> patch series, we provide an extra DNM commit at the end for users
> to test Xen booting to idle on an MPU system.
>
> We will split these patches into several parts; this series is
> part#1, v1 is in [3], and the full PoC can be found in [4]. More
> software for Armv8-R64 can be found in [5].
>
> [1] https://developer.arm.com/documentation/ddi0600/latest
> [2] https://lists.xenproject.org/archives/html/xen-devel/2022-05/msg00643.html
> [3] https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg00289.html
> [4] https://gitlab.com/xen-project/people/weic/xen/-/tree/integration/mpu_v2
> [5] https://armv8r64-refstack.docs.arm.com/en/v5.0/
>
> Penny Zheng (28):
>    xen/mpu: build up start-of-day Xen MPU memory region map
>    xen/mpu: introduce helpers for MPU enablement
>    xen/mpu: introduce unified function setup_early_uart to map early UART
>    xen/arm64: head: Jump to the runtime mapping in enable_mm()
>    xen/arm: introduce setup_mm_mappings
>    xen/mpu: plump virt/maddr/mfn convertion in MPU system
>    xen/mpu: introduce helper access_protection_region
>    xen/mpu: populate a new region in Xen MPU mapping table
>    xen/mpu: plump early_fdt_map in MPU systems
>    xen/arm: move MMU-specific setup_mm to setup_mmu.c
>    xen/mpu: implement MPU version of setup_mm in setup_mpu.c
>    xen/mpu: initialize frametable in MPU system
>    xen/mpu: introduce "mpu,xxx-memory-section"
>    xen/mpu: map MPU guest memory section before static memory
>      initialization
>    xen/mpu: destroy an existing entry in Xen MPU memory mapping table
>    xen/mpu: map device memory resource in MPU system
>    xen/mpu: map boot module section in MPU system
>    xen/mpu: introduce mpu_memory_section_contains for address range check
>    xen/mpu: disable VMAP sub-system for MPU systems
>    xen/mpu: disable FIXMAP in MPU system
>    xen/mpu: implement MPU version of ioremap_xxx
>    xen/mpu: free init memory in MPU system
>    xen/mpu: destroy boot modules and early FDT mapping in MPU system
>    xen/mpu: Use secure hypervisor timer for AArch64v8R
>    xen/mpu: move MMU specific P2M code to p2m_mmu.c
>    xen/mpu: implement setup_virt_paging for MPU system
>    xen/mpu: re-order xen_mpumap in arch_init_finialize
>    xen/mpu: add Kconfig option to enable Armv8-R AArch64 support
>
> Wei Chen (13):
>    xen/arm: remove xen_phys_start and xenheap_phys_end from config.h
>    xen/arm: make ARM_EFI selectable for Arm64
>    xen/arm: adjust Xen TLB helpers for Armv8-R64 PMSA
>    xen/arm: add an option to define Xen start address for Armv8-R
>    xen/arm64: prepare for moving MMU related code from head.S
>    xen/arm64: move MMU related code from head.S to head_mmu.S
>    xen/arm64: add .text.idmap for Xen identity map sections
>    xen/arm: use PA == VA for EARLY_UART_VIRTUAL_ADDRESS on Armv-8R
>    xen/arm: decouple copy_from_paddr with FIXMAP
>    xen/arm: split MMU and MPU config files from config.h
>    xen/arm: move MMU-specific memory management code to mm_mmu.c/mm_mmu.h
>    xen/arm: check mapping status and attributes for MPU copy_from_paddr
>    xen/mpu: make Xen boot to idle on MPU systems(DNM)
>
>   xen/arch/arm/Kconfig                      |   44 +-
>   xen/arch/arm/Makefile                     |   17 +-
>   xen/arch/arm/arm64/Makefile               |    5 +
>   xen/arch/arm/arm64/head.S                 |  466 +----
>   xen/arch/arm/arm64/head_mmu.S             |  399 ++++
>   xen/arch/arm/arm64/head_mpu.S             |  394 ++++
>   xen/arch/arm/bootfdt.c                    |   13 +-
>   xen/arch/arm/domain_build.c               |    4 +
>   xen/arch/arm/include/asm/alternative.h    |   15 +
>   xen/arch/arm/include/asm/arm64/flushtlb.h |   25 +
>   xen/arch/arm/include/asm/arm64/macros.h   |   51 +
>   xen/arch/arm/include/asm/arm64/mpu.h      |  174 ++
>   xen/arch/arm/include/asm/arm64/sysregs.h  |   77 +
>   xen/arch/arm/include/asm/config.h         |  105 +-
>   xen/arch/arm/include/asm/config_mmu.h     |  112 +
>   xen/arch/arm/include/asm/config_mpu.h     |   25 +
>   xen/arch/arm/include/asm/cpregs.h         |    4 +-
>   xen/arch/arm/include/asm/cpuerrata.h      |   12 +
>   xen/arch/arm/include/asm/cpufeature.h     |    7 +
>   xen/arch/arm/include/asm/early_printk.h   |   13 +
>   xen/arch/arm/include/asm/fixmap.h         |   28 +-
>   xen/arch/arm/include/asm/flushtlb.h       |   22 +
>   xen/arch/arm/include/asm/mm.h             |   78 +-
>   xen/arch/arm/include/asm/mm_mmu.h         |   77 +
>   xen/arch/arm/include/asm/mm_mpu.h         |   54 +
>   xen/arch/arm/include/asm/p2m.h            |   27 +-
>   xen/arch/arm/include/asm/p2m_mmu.h        |   28 +
>   xen/arch/arm/include/asm/processor.h      |   13 +
>   xen/arch/arm/include/asm/setup.h          |   39 +
>   xen/arch/arm/kernel.c                     |   31 +-
>   xen/arch/arm/mm.c                         | 1340 +-----------
>   xen/arch/arm/mm_mmu.c                     | 1376 +++++++++++++
>   xen/arch/arm/mm_mpu.c                     | 1056 ++++++++++
>   xen/arch/arm/p2m.c                        | 2282 +--------------------
>   xen/arch/arm/p2m_mmu.c                    | 2257 ++++++++++++++++++++
>   xen/arch/arm/p2m_mpu.c                    |  274 +++
>   xen/arch/arm/platforms/Kconfig            |   16 +-
>   xen/arch/arm/setup.c                      |  394 +---
>   xen/arch/arm/setup_mmu.c                  |  391 ++++
>   xen/arch/arm/setup_mpu.c                  |  208 ++
>   xen/arch/arm/time.c                       |   14 +-
>   xen/arch/arm/traps.c                      |    2 +
>   xen/arch/arm/xen.lds.S                    |   10 +-
>   xen/arch/x86/Kconfig                      |    1 +
>   xen/common/Kconfig                        |    6 +
>   xen/common/Makefile                       |    2 +-
>   xen/include/xen/vmap.h                    |   93 +-
>   47 files changed, 7500 insertions(+), 4581 deletions(-)
>   create mode 100644 xen/arch/arm/arm64/head_mmu.S
>   create mode 100644 xen/arch/arm/arm64/head_mpu.S
>   create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h
>   create mode 100644 xen/arch/arm/include/asm/config_mmu.h
>   create mode 100644 xen/arch/arm/include/asm/config_mpu.h
>   create mode 100644 xen/arch/arm/include/asm/mm_mmu.h
>   create mode 100644 xen/arch/arm/include/asm/mm_mpu.h
>   create mode 100644 xen/arch/arm/include/asm/p2m_mmu.h
>   create mode 100644 xen/arch/arm/mm_mmu.c
>   create mode 100644 xen/arch/arm/mm_mpu.c
>   create mode 100644 xen/arch/arm/p2m_mmu.c
>   create mode 100644 xen/arch/arm/p2m_mpu.c
>   create mode 100644 xen/arch/arm/setup_mmu.c
>   create mode 100644 xen/arch/arm/setup_mpu.c
>
> --
> 2.25.1
>
>
I applied this series and there were some compilation issues:

1. drivers/passthrough/arm/smmu.c:1240:29: error: ‘P2M_ROOT_LEVEL’ 
undeclared (first use in this function)
  1240 |                 reg |= (2 - P2M_ROOT_LEVEL) << TTBCR_SL0_SHIFT;

2. drivers/passthrough/arm/smmu-v3.c:1211:24: error: ‘P2M_ROOT_LEVEL’ 
undeclared (first use in this function)
  1211 |         vtcr->sl = 2 - P2M_ROOT_LEVEL;

For the above two issues, I have disabled SMMU.

3. /scratch/ayankuma/xen_v8r_64/xen/arch/arm/arm64/head.S:470: undefined 
reference to `init_ttbr'
You might need to wrap it with some #ifdef.

Can you provide the dts and the config file with which you have tested?

I see that the console gets stuck at this line:

"(XEN) Command line: console=dtuart dtuart=serial0"

Looking into setup_static_mappings(),

     for ( uint8_t i = MSINFO_GUEST; i < MSINFO_MAX; i++ )
     {
 #ifdef CONFIG_EARLY_PRINTK
         if ( i == MSINFO_DEVICE )
             /*
              * Destroy early UART mapping before mapping device memory
              * section.
              * WARNING: console will be inaccessible temporarily.
              */
             destroy_xen_mappings(CONFIG_EARLY_UART_BASE_ADDRESS,
                                  CONFIG_EARLY_UART_BASE_ADDRESS +
                                  EARLY_UART_SIZE);
 #endif
         map_mpu_memory_section_on_boot(i, mpu_section_mattr[i]);
         <<<<----- Is this expected to map "mpu,device-memory-section"?
     }

- Ayan



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 19:35:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 19:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483721.750058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKP5D-0007EC-Lg; Tue, 24 Jan 2023 19:35:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483721.750058; Tue, 24 Jan 2023 19:35:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKP5D-0007E5-Ic; Tue, 24 Jan 2023 19:35:51 +0000
Received: by outflank-mailman (input) for mailman id 483721;
 Tue, 24 Jan 2023 19:35:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKP5B-0007Dx-VE
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 19:35:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKP5B-0002z6-Lt; Tue, 24 Jan 2023 19:35:49 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKP5B-0003EC-GL; Tue, 24 Jan 2023 19:35:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=40ovQOdqdNKYosNiIyhFCiP3TotRxN751t34JAYVahw=; b=d8TvLErFib9Tss2uolGcDLAWUD
	Z+sYD/QIoucmBDvhJ/FExJAPZ3L3BWt7JXJJ8EA9vSwFMkYyqUjOGaTcBknc7uCzauYCZg8LbxkVM
	VgoytSZCXhf8dzrYIdWOp1wOdc5jYKZAXpEMjxH/5nDkmDA4Yl42evtM19LK1u3z4RXU=;
Message-ID: <715433bd-69ff-8682-bd15-fc8f7502ea61@xen.org>
Date: Tue, 24 Jan 2023 19:35:47 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 00/14] xen/arm: Don't switch TTBR while the MMU is on
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230113101136.479-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/01/2023 10:11, Julien Grall wrote:
> Julien Grall (14):
>    xen/arm64: flushtlb: Reduce scope of barrier for local TLB flush
>    xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB
>      flush by VA
>    xen/arm32: flushtlb: Reduce scope of barrier for local TLB flush
>    xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush
>    xen/arm: Clean-up the memory layout
>    xen/arm32: head: Replace "ldr rX, =<label>" with "mov_w rX, <label>"
>    xen/arm32: head: Jump to the runtime mapping in enable_mmu()
>    xen/arm32: head: Introduce an helper to flush the TLBs
>    xen/arm32: head: Remove restriction where to load Xen

I have committed up to this patch. I still need to go through the 
comments on the rest.

>    xen/arm32: head: Widen the use of the temporary mapping
>    xen/arm64: Rework the memory layout
>    xen/arm64: mm: Introduce helpers to prepare/enable/disable the
>      identity mapping
>    xen/arm64: mm: Rework switch_ttbr()
>    xen/arm64: smpboot: Directly switch to the runtime page-tables
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 19:43:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 19:43:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483726.750069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKPCL-0000E4-EJ; Tue, 24 Jan 2023 19:43:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483726.750069; Tue, 24 Jan 2023 19:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKPCL-0000Dx-9T; Tue, 24 Jan 2023 19:43:13 +0000
Received: by outflank-mailman (input) for mailman id 483726;
 Tue, 24 Jan 2023 19:43:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKPCK-0000Dr-BB
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 19:43:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKPCK-000366-1X; Tue, 24 Jan 2023 19:43:12 +0000
Received: from [54.239.6.189] (helo=[192.168.20.46])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKPCJ-0003YK-QC; Tue, 24 Jan 2023 19:43:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=cpybMUAdKrtWBFuwNoYQoaHCzxjprMe8QEaNvpWeyHs=; b=vHqk4TWPF3dDSVAHiSuE19sVfa
	udWGiieZ6Onnv27Yv3vL/PUBZV+pxxJmS7ymItLU5A56fUXW49y10RmNlpp77CZG4NAfIw91W3Hot
	6t6ANMzCX63k/ub2ZRbp2e02tsg8LV3Uxy7IpchICO7TLd3uo/8SPYnPEnMEPe6VVpMk=;
Message-ID: <5c18827c-ffc2-1c31-bd7c-812ca05c4bc3@xen.org>
Date: Tue, 24 Jan 2023 19:43:09 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary
 mapping
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-11-julien@xen.org>
 <0271e540-d3b0-fb9b-0f66-015abb45231c@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <0271e540-d3b0-fb9b-0f66-015abb45231c@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 16/01/2023 08:20, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> 
> On 13/01/2023 11:11, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, the temporary mapping is only used when the virtual
>> runtime region of Xen is clashing with the physical region.
>>
>> In follow-up patches, we will rework how secondary CPU bring-up works
>> and it will be convenient to use the fixmap area for accessing
>> the root page-table (it is per-cpu).
>>
>> Rework the code to use the temporary mapping when the Xen physical
>> address does not overlap with the temporary mapping.
>>
>> This also has the advantage of simplifying the logic to identity-map
>> Xen.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ----
>>
>> Even though this patch rewrites part of the previous patch, I decided
>> to keep them separate to help the review.
>>
>> The "follow-up patches" are still in draft at the moment. I still
>> haven't found a way to split them nicely without requiring too much
>> more work on the coloring side.
>>
>> I have provided some medium-term goal in the cover letter.
>>
>>      Changes in v3:
>>          - Resolve conflicts after switching from "ldr rX, <label>" to
>>            "mov_w rX, <label>" in a previous patch
>>
>>      Changes in v2:
>>          - Patch added
>> ---
>>   xen/arch/arm/arm32/head.S | 82 +++++++--------------------------------
>>   1 file changed, 15 insertions(+), 67 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>> index 3800efb44169..ce858e9fc4da 100644
>> --- a/xen/arch/arm/arm32/head.S
>> +++ b/xen/arch/arm/arm32/head.S
>> @@ -459,7 +459,6 @@ ENDPROC(cpu_init)
>>   create_page_tables:
>>           /* Prepare the page-tables for mapping Xen */
>>           mov_w r0, XEN_VIRT_START
>> -        create_table_entry boot_pgtable, boot_second, r0, 1
>>           create_table_entry boot_second, boot_third, r0, 2
>>
>>           /* Setup boot_third: */
>> @@ -479,67 +478,37 @@ create_page_tables:
>>           cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
>>           blo   1b
>>
>> -        /*
>> -         * If Xen is loaded at exactly XEN_VIRT_START then we don't
>> -         * need an additional 1:1 mapping, the virtual mapping will
>> -         * suffice.
>> -         */
>> -        cmp   r9, #XEN_VIRT_START
>> -        moveq pc, lr
>> -
>>           /*
>>            * Setup the 1:1 mapping so we can turn the MMU on. Note that
>>            * only the first page of Xen will be part of the 1:1 mapping.
>> -         *
>> -         * In all the cases, we will link boot_third_id. So create the
>> -         * mapping in advance.
>>            */
>> +        create_table_entry boot_pgtable, boot_second_id, r9, 1
>> +        create_table_entry boot_second_id, boot_third_id, r9, 2
>>           create_mapping_entry boot_third_id, r9, r9
>>
>>           /*
>> -         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
>> -         * then the 1:1 mapping will use its own set of page-tables from
>> -         * the second level.
>> +         * Find the first slot used. If the slot is not the same
>> +         * as XEN_TMP_FIRST_SLOT, then we will want to switch
> Do you mean TEMPORARY_AREA_FIRST_SLOT?

Yes. I have fixed it in my tree.

> 
>> +         * to the temporary mapping before jumping to the runtime
>> +         * virtual mapping.
>>            */
>>           get_table_slot r1, r9, 1     /* r1 := first slot */
>> -        cmp   r1, #XEN_FIRST_SLOT
>> -        beq   1f
>> -        create_table_entry boot_pgtable, boot_second_id, r9, 1
>> -        b     link_from_second_id
>> -
>> -1:
>> -        /*
>> -         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
>> -         * 1:1 mapping will use its own set of page-tables from the
>> -         * third level.
>> -         */
>> -        get_table_slot r1, r9, 2     /* r1 := second slot */
>> -        cmp   r1, #XEN_SECOND_SLOT
>> -        beq   virtphys_clash
>> -        create_table_entry boot_second, boot_third_id, r9, 2
>> -        b     link_from_third_id
>> +        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
>> +        bne   use_temporary_mapping
>>
>> -link_from_second_id:
>> -        create_table_entry boot_second_id, boot_third_id, r9, 2
>> -link_from_third_id:
>> -        /* Good news, we are not clashing with Xen virtual mapping */
>> +        mov_w r0, XEN_VIRT_START
>> +        create_table_entry boot_pgtable, boot_second, r0, 1
>>           mov   r12, #0                /* r12 := temporary mapping not created */
>>           mov   pc, lr
>>
>> -virtphys_clash:
>> +use_temporary_mapping:
>>           /*
>> -         * The identity map clashes with boot_third. Link boot_first_id and
>> -         * map Xen to a temporary mapping. See switch_to_runtime_mapping
>> -         * for more details.
>> +         * The identity mapping is not using the first slot
>> +         * TEMPORARY_AREA_FIRST_SLOT. Create a temporary mapping.
>> +         * See switch_to_runtime_mapping for more details.
>>            */
>> -        PRINT("- Virt and Phys addresses clash  -\r\n")
>>           PRINT("- Create temporary mapping -\r\n")
>>
>> -        /*
>> -         * This will override the link to boot_second in XEN_FIRST_SLOT.
>> -         * The page-tables are not live yet. So no need to use
>> -         * break-before-make.
>> -         */
>>           create_table_entry boot_pgtable, boot_second_id, r9, 1
>>           create_table_entry boot_second_id, boot_third_id, r9, 2
> Do we need to duplicate this if we just did the same in create_page_tables before branching to
> use_temporary_mapping?

Hmmm... Possibly not. I will give it a try and let you know.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 19:51:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 19:51:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483732.750080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKPJm-0001ne-6H; Tue, 24 Jan 2023 19:50:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483732.750080; Tue, 24 Jan 2023 19:50:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKPJm-0001nX-3A; Tue, 24 Jan 2023 19:50:54 +0000
Received: by outflank-mailman (input) for mailman id 483732;
 Tue, 24 Jan 2023 19:50:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKPJk-0001nK-Ur; Tue, 24 Jan 2023 19:50:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKPJk-0003O4-PS; Tue, 24 Jan 2023 19:50:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKPJk-00034M-8d; Tue, 24 Jan 2023 19:50:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKPJk-0006HG-8C; Tue, 24 Jan 2023 19:50:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NjAok9SfvM+UFhKtFiaTVyjkn4fdlQJ7ToTW/tVo1To=; b=0et7QOHWSNN+W5VWGodw2Rk4cv
	GBcuEdXeXVxIww+lRtCprhiavWCU/Mr5GRaFHjMubayloU5941gDEOD+uGxxumA+SSfbl/hnV/D2O
	WS+UtVowzt7FnC2ypC3gl0cCCW7ig0JtxIXY+8klmdMgomBZBc3iA3hf+9GKZ5bxcHnA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176090-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 176090: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:regression
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=90245959a5b936ee013266e5d1e6a508ed69274e
X-Osstest-Versions-That:
    linux=1349fe3a332ad3d1ece60806225ca7955aba9f56
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 19:50:52 +0000

flight 176090 linux-5.4 real [real]
flight 176102 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176090/
http://logs.test-lab.xenproject.org/osstest/logs/176102/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail REGR. vs. 175958

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175968
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 175958
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 175968
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat    fail like 175968
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175968
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175968
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat    fail  like 175968
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175968
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175968
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175968
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175968
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175968
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175968
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175968
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175968
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175968
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                90245959a5b936ee013266e5d1e6a508ed69274e
baseline version:
 linux                1349fe3a332ad3d1ece60806225ca7955aba9f56

Last test of basis   175968  2023-01-19 04:24:06 Z    5 days
Testing same since   176090  2023-01-24 06:43:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abel Vesa <abel.vesa@linaro.org>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Ali Mirghasemi <ali.mirghasemi1376@gmail.com>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arend van Spriel <arend.vanspriel@broadcom.com>
  Borislav Petkov <bp@suse.de>
  Chris Wilson <chris@chris-wilson.co.uk>
  Daniel Scally <dan.scally@ideasonboard.com>
  Daniil Tatianin <d-tatianin@yandex-team.ru>
  David Hildenbrand <david@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Hui <dinghui@sangfor.com.cn>
  Duke Xin(辛安文) <duke_xinanwen@163.com>
  Enzo Matsumiya <ematsumiya@suse.de>
  Filipe Manana <fdmanana@suse.com>
  Flavio Suligoi <f.suligoi@asem.it>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hao Sun <sunhao.th@gmail.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  hongao <hongao@uniontech.com>
  Hugh Dickins <hughd@google.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jernej Skrabec <jernej.skrabec@gmail.com>
  Jimmy Hu <hhhuuu@google.com>
  Jiri Slaby (SUSE) <jirislaby@kernel.org>
  Johan Hovold <johan@kernel.org>
  Jordy Zomer <jordyzomer@google.com>
  Joshua Ashton <joshua@froggi.es>
  Juhyung Park <qkrwngud825@gmail.com>
  Kalle Valo <kvalo@kernel.org>
  Khazhismel Kumykov <khazhy@chromium.org>
  Khazhismel Kumykov <khazhy@google.com>
  Leon Romanovsky <leon@kernel.org>
  Maciej Żenczykowski <maze@google.com>
  Martin KaFai Lau <martin.lau@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Michael Adler <michael.adler@siemens.com>
  Mohan Kumar <mkumard@nvidia.com>
  Nathan Chancellor <nathan@kernel.org>
  Ola Jeppsson <ola@snap.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Prashant Malani <pmalani@chromium.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Richard Genoud <richard.genoud@gmail.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Ron Lee <ron.lee@intel.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Samuel Holland <samuel@sholland.org>
  Sasa Dragic <sasa.dragic@gmail.com>
  Sasha Levin <sashal@kernel.org>
  Shawn.Shao <shawn.shao@jaguarmicro.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Steve French <stfrench@microsoft.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Takashi Iwai <tiwai@suse.de>
  Tobias Schramm <t.schramm@manjaro.org>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vinod Koul <vkoul@kernel.org>
  YingChi Long <me@inclyc.cn>
  Yuchi Yang <yangyuchi66@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1865 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 20:07:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 20:07:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483742.750094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKPZP-0003hO-Ph; Tue, 24 Jan 2023 20:07:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483742.750094; Tue, 24 Jan 2023 20:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKPZP-0003hH-Me; Tue, 24 Jan 2023 20:07:03 +0000
Received: by outflank-mailman (input) for mailman id 483742;
 Tue, 24 Jan 2023 20:07:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKPZO-0003h7-S3; Tue, 24 Jan 2023 20:07:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKPZO-0003jr-RD; Tue, 24 Jan 2023 20:07:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKPZO-00040z-Es; Tue, 24 Jan 2023 20:07:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKPZO-00009k-ER; Tue, 24 Jan 2023 20:07:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YGjWEQIswyyZkMxro+3ka9dIz4sHlT8wKGQI62tgnNM=; b=Pk3YfJB9mEl2XRymIaMwKcxpKY
	8jhl6NpFGoRAPr5nmJfzLxw6oabRjPxqa0ggXVDBsVoaoV6EP+sNKljj3qh6VDT3ASy81Csa9F/mT
	z5MvR1VNt1WzFd1k9vzMOUmFO9AowY79VzsG6Fl+icoi6hk0RJLO5go+r9nzMAWnWXjg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176099-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176099: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=352c89f72ddb67b8d9d4e492203f8c77f85c8df1
X-Osstest-Versions-That:
    xen=d60324d8af9404014cfcc37bba09e9facfd02fcf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 20:07:02 +0000

flight 176099 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176099/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  352c89f72ddb67b8d9d4e492203f8c77f85c8df1
baseline version:
 xen                  d60324d8af9404014cfcc37bba09e9facfd02fcf

Last test of basis   176066  2023-01-23 15:00:27 Z    1 days
Testing same since   176099  2023-01-24 16:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d60324d8af..352c89f72d  352c89f72ddb67b8d9d4e492203f8c77f85c8df1 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 20:48:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 20:48:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483749.750104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKQDd-0000HQ-UX; Tue, 24 Jan 2023 20:48:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483749.750104; Tue, 24 Jan 2023 20:48:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKQDd-0000H7-R9; Tue, 24 Jan 2023 20:48:37 +0000
Received: by outflank-mailman (input) for mailman id 483749;
 Tue, 24 Jan 2023 20:48:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wk+X=5V=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKQDc-0000Gr-4i
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 20:48:36 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 76bfc8fc-9c28-11ed-b8d1-410ff93cb8f0;
 Tue, 24 Jan 2023 21:48:32 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 04BCEB816B7;
 Tue, 24 Jan 2023 20:48:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 66B6CC433D2;
 Tue, 24 Jan 2023 20:48:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76bfc8fc-9c28-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674593311;
	bh=MxjMVgyKFUUOfQsy++5aBofv+dMz8sBE/NCuVELEzyY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sq4tjHPi8vtTuCf6hxFBNgj7ghD3xAKsB4ucdcst2w3SbSjXhluz0UI7Bk2mRxpCR
	 nXbVCoU6ULgonQ9KPDdSweqfhiWSWvrkWcVINhw6kxmYIL+zUIUSDEEB1XnmX4ahhH
	 Bx7cKqBn4RwjPqntYKDzKnlqieO7EWXavgA8/5OfFRuab0maQQ3fiUN47ZLTlo0P8O
	 k5yE6lZzg3Mc4rnxyEyojF0N7H7bhyQ28WAr5A/Kf2IkpgsUtYdtOCmssebVIvbwjE
	 +WT9Qgg/aNiRqIZA0M9chrkbOo0xhJp/5ONfrYAM+mdTRFrWIc6+qJyeZk0u6xZUKn
	 jofdx9MbBEw6A==
Date: Tue, 24 Jan 2023 12:48:28 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Wei Liu <wl@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 22/22] xen/arm64: Allow the admin to enable/disable the
 directmap
In-Reply-To: <8995c20f-7d0d-5138-b802-d70c116b84e7@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301241244500.1978264@ubuntu-linux-20-04-desktop>
References: <20221216114853.8227-1-julien@xen.org> <20221216114853.8227-23-julien@xen.org> <alpine.DEB.2.22.394.2301231437170.1978264@ubuntu-linux-20-04-desktop> <92c4daa2-d841-3109-c1ec-4bdb088d6670@xen.org> <alpine.DEB.2.22.394.2301231605291.1978264@ubuntu-linux-20-04-desktop>
 <8995c20f-7d0d-5138-b802-d70c116b84e7@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 24 Jan 2023, Julien Grall wrote:
> Hi Stefano,
> 
> On 24/01/2023 00:12, Stefano Stabellini wrote:
> > On Mon, 23 Jan 2023, Julien Grall wrote:
> > Ah yes! I see it now that we are relying on the same area
> > (alloc_xenheap_pages calls page_to_virt() then map_pages_to_xen()).
> > 
> > map_pages_to_xen() is able to create pagetable entries at every level,
> > but we need to make sure they are shared across per-cpu pagetables. To
> > make that happen, we are pre-creating the L0/L1 entries here so that
> > they become common across all per-cpu pagetables and then we let
> > map_pages_to_xen() do its job.
> > 
> > Did I understand it right?
> 
> Your understanding is correct.

Great!


> > > I can add summary in the commit message.
> > 
> > I would suggest to improve a bit the in-code comment on top of
> > setup_directmap_mappings. I might also be able to help with the text
> > once I am sure I understood what is going on :-)

How about this comment (feel free to edit/improve it as well, just a
suggestion):

In the !arch_has_directmap() case this function allocates empty L1
tables and creates the L0 entries for the direct map region.

When the direct map is disabled, alloc_xenheap_pages results in the page
being temporarily mapped in the usual xenheap address range via
map_pages_to_xen(). map_pages_to_xen() is able to create pagetable
entries at every level, but we need to make sure they are shared across
per-cpu pagetables. For this reason, this function creates the L0
entries and empty L1 tables in advance, so that they are common to all
per-cpu pagetables.


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 22:27:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 22:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483758.750120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKRkp-0002mV-V6; Tue, 24 Jan 2023 22:26:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483758.750120; Tue, 24 Jan 2023 22:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKRkp-0002mO-SO; Tue, 24 Jan 2023 22:26:59 +0000
Received: by outflank-mailman (input) for mailman id 483758;
 Tue, 24 Jan 2023 22:26:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKRko-0002mE-0P; Tue, 24 Jan 2023 22:26:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKRkn-0006mn-TG; Tue, 24 Jan 2023 22:26:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKRkn-0001hC-FV; Tue, 24 Jan 2023 22:26:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKRkn-0005wf-Ey; Tue, 24 Jan 2023 22:26:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dGT8mSjI495+qBs7+hm990fUPX1+zl8otZQzPpWrLrk=; b=BlA+lsdiTW657OfpELt/ATlq0a
	1xhNe6h7HmgAHcTVqyKiCZuJSLxPzKqhVAN/O9m57n73on7pD1y51tvLmuYBEACPatiLCFcGijFWJ
	yiP9pBimlMw4GoDHEXaTavfvKH4+wxVihAPHE4HfMytx8pzyawFxG5WlebTFP1l0pppE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176091-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176091: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d60324d8af9404014cfcc37bba09e9facfd02fcf
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 22:26:57 +0000

flight 176091 xen-unstable real [real]
flight 176103 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176091/
http://logs.test-lab.xenproject.org/osstest/logs/176103/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d60324d8af9404014cfcc37bba09e9facfd02fcf
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    4 days
Failing since        176003  2023-01-20 17:40:27 Z    4 days   10 attempts
Testing same since   176076  2023-01-23 20:14:44 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 787 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 22:35:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 22:35:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483768.750130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKRsz-0004Qb-UZ; Tue, 24 Jan 2023 22:35:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483768.750130; Tue, 24 Jan 2023 22:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKRsz-0004QU-Qh; Tue, 24 Jan 2023 22:35:25 +0000
Received: by outflank-mailman (input) for mailman id 483768;
 Tue, 24 Jan 2023 22:35:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mREw=5V=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pKRsy-0004QO-EI
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 22:35:24 +0000
Received: from hedgehog.birch.relay.mailchannels.net
 (hedgehog.birch.relay.mailchannels.net [23.83.209.81])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 621c667d-9c37-11ed-91b6-6bf2151ebd3b;
 Tue, 24 Jan 2023 23:35:21 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id A3FBB641B11
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 22:35:19 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id 323FE6419D2
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 22:35:19 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.103.24.31 (trex/6.7.1); Tue, 24 Jan 2023 22:35:19 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a306.dreamhost.com (Postfix) with ESMTPSA id 4P1hck4Z67zf7
 for <xen-devel@lists.xenproject.org>; Tue, 24 Jan 2023 14:35:18 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e006a by kmjvbox (DragonFly Mail Agent v0.12);
 Tue, 24 Jan 2023 14:35:16 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 621c667d-9c37-11ed-91b6-6bf2151ebd3b
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1674599719; a=rsa-sha256;
	cv=none;
	b=na7ndZ/tJ7Wzepuo09WIEZUPa2Ur3Nxjlfe1JykQ1vtDWhf9zu8yhW74wmJAcL/nNge5ai
	YKnJn+4WiBi7+slkin556bmHu4jzJF5T+Jheji9RMHDStEDYGpXuWBRUnTwbPCp/AkPh2z
	v1vPHYfsXLWoWIHNvGuNt9+xU+3vOXCmtfRF+fwSJzlWBqVowtRNzRaIl0FwJUodbqRJp2
	1JaanLK5a8LXWfYupXZWBAdoQph+SWIcXbbO4ADNLXH51iWfcXWsbadb2CiAfb2XXZ9crw
	FuFqEYWFhxvGlfBGjYl0H7Lquj7uM3OG5YF1Q154/NFC9u0r0crbO8NQx4PaJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1674599719;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 dkim-signature; bh=/rnaVJdqABJD+0Xa4hNZqIvtAk2JjITERqbT0AFef+k=;
	b=VUFhB5qbzhlhyxCFK4du1nPoFYemSHM+zkdVetC6i/UY3ceoG3ObHByafFHRUp+m0ZCyt3
	sVyVUXu5CtGCRkgMDkaw9xj6vUEYqXqmfX+uTjmIxebOojPZaJtAKLqQi9ZiZivmk1STLa
	6ZY/vEn7oyeP3GIBBgcMiQfCbh62uDah0IJK+tCbTWzQYadT27USkgYJKX5Z+3wedAsGeK
	gLqx6FmoV2LHn0JtQIHlp2mpO2ZD5Xy/ipSYkO8lZ0zEHuhPUZ8PjOPxYW2hVqyZc3b1Zb
	/Nea1MV9K62x30UttDjjdBQVgL9lRuqfsNEUmBjfYcNY41OTFNJd34mDwAusZA==
ARC-Authentication-Results: i=1;
	rspamd-69c95c757c-zt9v2;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Wide-Eyed-Spicy: 79a11c450360c0dc_1674599719422_4293014534
X-MC-Loop-Signature: 1674599719422:603297941
X-MC-Ingress-Time: 1674599719421
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1674599718;
	bh=/rnaVJdqABJD+0Xa4hNZqIvtAk2JjITERqbT0AFef+k=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=DCwDfkrjRjWhSLFFfdj2dLanchkSI4muoA5sq2O2hGbpxiHWlMqkrXZWRhq1zb0Eg
	 yG6KGvxN1IvDut6OH6xEilCiB1laUUTSFeC1O/x1V+t6RztzIgtKDWbcLkhzPtR/bM
	 x2nCRKpRjWfEp22eX+bsrW2UzIQrz+rezO8mVdpw=
Date: Tue, 24 Jan 2023 14:35:16 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>
Subject: [PATCH] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230124223516.GA1962@templeofstupid.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Cpuid leaf 4 contains information about the state of the tsc: its
mode, plus some additional details.  A commit that is queued for
Linux would like to use this to determine whether the tsc mode has been
set to 'no emulation' in order to make some decisions about which
clocksource is more reliable.

Expose this information in the public API headers so that it can
subsequently be imported into Linux and used there.

Link: https://lore.kernel.org/xen-devel/eda8d9f2-3013-1b68-0df8-64d7f13ee35e@suse.com/
Link: https://lore.kernel.org/xen-devel/0835453d-9617-48d5-b2dc-77a2ac298bad@oracle.com/
Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
---
 xen/include/public/arch-x86/cpuid.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index 7ecd16ae05..97dc970417 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -71,6 +71,12 @@
  *             EDX: shift amount for tsc->ns conversion
  * Sub-leaf 2: EAX: host tsc frequency in kHz
  */
+#define XEN_CPUID_TSC_EMULATED       (1u << 0)
+#define XEN_CPUID_HOST_TSC_RELIABLE  (1u << 1)
+#define XEN_CPUID_RDTSCP_INSTR_AVAIL (1u << 2)
+#define XEN_CPUID_TSC_MODE_DEFAULT   (0)
+#define XEN_CPUID_TSC_MODE_EMULATE   (1u)
+#define XEN_CPUID_TSC_MODE_NOEMULATE (2u)
 
 /*
  * Leaf 5 (0x40000x04)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 24 23:53:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 23:53:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483780.750146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKT6M-0004gJ-Jd; Tue, 24 Jan 2023 23:53:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483780.750146; Tue, 24 Jan 2023 23:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKT6M-0004gC-Gn; Tue, 24 Jan 2023 23:53:18 +0000
Received: by outflank-mailman (input) for mailman id 483780;
 Tue, 24 Jan 2023 23:53:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wk+X=5V=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKT6K-0004g6-QX
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 23:53:17 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 44509881-9c42-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 00:53:15 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 4797DB8091C;
 Tue, 24 Jan 2023 23:53:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 24F64C433D2;
 Tue, 24 Jan 2023 23:53:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44509881-9c42-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674604393;
	bh=F4qaZF8nm21TTmi8T/OgAKB21y/MRxlXkjSnWqnyWzI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bdGmK3Hb7zC1OA9TYSpU/Np0inm6D9EQ1RtY3Kq93R469HRDy9os0vev4XCIOMbQF
	 PDlZMCOWSDNp/QlPoA4mxuiV4/dWsiGHY+Z4rm9tsa0dicC+cFcFMIzlAfkhMErHmS
	 j0PFiay/h/cwNXnWXUiQ5uhjOE8/9CVmSRTL9CI3rCmejBrbmRU+Z4uWA8wf6S5/mD
	 CuBnkZn/Mpk3KSmh8EavQ8ZHG0fDvBouXQKeRQ6pceKXwHrBCkBJmMqx5bF00JGbgq
	 EiPRUaZXCywaUxEMNMYCBZ7JvhHI+ZKZ1mcAKC+4UJz3w+56ygX7RqDwthvqO/z22h
	 /X8oXtEVqnu2A==
Date: Tue, 24 Jan 2023 15:53:10 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Gianluca Guida <gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH] automation: rename RISCV_64 container and jobs
In-Reply-To: <cea2d287fd65033d8631bf9905ad00652bf11035.1673367923.git.oleksii.kurochko@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2301241552240.1978264@ubuntu-linux-20-04-desktop>
References: <cea2d287fd65033d8631bf9905ad00652bf11035.1673367923.git.oleksii.kurochko@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 10 Jan 2023, Oleksii Kurochko wrote:
> All RISCV_64-related stuff was renamed to be consistent with
> ARM (arm32 is cross-built, as is RISCV_64).
> 
> The patch is based on the following patch series:
> [PATCH *] Basic early_printk and smoke test implementation
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed and committed.

Now that the container name is archlinux:current-riscv64, make sure to
update it appropriately in the patch "automation: add RISC-V smoke test"
when/if you send a new version.


> ---
>  ...v64.dockerfile => current-riscv64.dockerfile} |  0
>  automation/gitlab-ci/build.yaml                  | 16 ++++++++--------
>  automation/gitlab-ci/test.yaml                   |  4 ++--
>  automation/scripts/containerize                  |  2 +-
>  4 files changed, 11 insertions(+), 11 deletions(-)
>  rename automation/build/archlinux/{riscv64.dockerfile => current-riscv64.dockerfile} (100%)
> 
> diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/current-riscv64.dockerfile
> similarity index 100%
> rename from automation/build/archlinux/riscv64.dockerfile
> rename to automation/build/archlinux/current-riscv64.dockerfile
> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> index 6784974619..7ccd153375 100644
> --- a/automation/gitlab-ci/build.yaml
> +++ b/automation/gitlab-ci/build.yaml
> @@ -647,31 +647,31 @@ alpine-3.12-gcc-debug-arm64-boot-cpupools:
>        CONFIG_BOOT_TIME_CPUPOOLS=y
>  
>  # RISC-V 64 cross-build
> -riscv64-cross-gcc:
> +archlinux-current-gcc-riscv64:
>    extends: .gcc-riscv64-cross-build
>    variables:
> -    CONTAINER: archlinux:riscv64
> +    CONTAINER: archlinux:current-riscv64
>      KBUILD_DEFCONFIG: tiny64_defconfig
>      HYPERVISOR_ONLY: y
>  
> -riscv64-cross-gcc-debug:
> +archlinux-current-gcc-riscv64-debug:
>    extends: .gcc-riscv64-cross-build-debug
>    variables:
> -    CONTAINER: archlinux:riscv64
> +    CONTAINER: archlinux:current-riscv64
>      KBUILD_DEFCONFIG: tiny64_defconfig
>      HYPERVISOR_ONLY: y
>  
> -riscv64-cross-gcc-randconfig:
> +archlinux-current-gcc-riscv64-randconfig:
>    extends: .gcc-riscv64-cross-build
>    variables:
> -    CONTAINER: archlinux:riscv64
> +    CONTAINER: archlinux:current-riscv64
>      KBUILD_DEFCONFIG: tiny64_defconfig
>      RANDCONFIG: y
>  
> -riscv64-cross-gcc-debug-randconfig:
> +archlinux-current-gcc-riscv64-debug-randconfig:
>    extends: .gcc-riscv64-cross-build-debug
>    variables:
> -    CONTAINER: archlinux:riscv64
> +    CONTAINER: archlinux:current-riscv64
>      KBUILD_DEFCONFIG: tiny64_defconfig
>      RANDCONFIG: y
>  
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index 64f47a0ab9..4ca3e54862 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -57,7 +57,7 @@
>  .qemu-riscv64:
>    extends: .test-jobs-common
>    variables:
> -    CONTAINER: archlinux:riscv64
> +    CONTAINER: archlinux:current-riscv64
>      LOGFILE: qemu-smoke-riscv64.log
>    artifacts:
>      paths:
> @@ -252,7 +252,7 @@ qemu-smoke-riscv64-gcc:
>    script:
>      - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
>    needs:
> -    - riscv64-cross-gcc
> +    - archlinux-current-gcc-riscv64
>  
>  # Yocto test jobs
>  yocto-qemuarm64:
> diff --git a/automation/scripts/containerize b/automation/scripts/containerize
> index 0f4645c4cc..9e508918bf 100755
> --- a/automation/scripts/containerize
> +++ b/automation/scripts/containerize
> @@ -27,7 +27,7 @@ case "_${CONTAINER}" in
>      _alpine) CONTAINER="${BASE}/alpine:3.12" ;;
>      _alpine-arm64v8) CONTAINER="${BASE}/alpine:3.12-arm64v8" ;;
>      _archlinux|_arch) CONTAINER="${BASE}/archlinux:current" ;;
> -    _riscv64) CONTAINER="${BASE}/archlinux:riscv64" ;;
> +    _riscv64) CONTAINER="${BASE}/archlinux:current-riscv64" ;;
>      _centos7) CONTAINER="${BASE}/centos:7" ;;
>      _centos72) CONTAINER="${BASE}/centos:7.2" ;;
>      _fedora) CONTAINER="${BASE}/fedora:29";;
> -- 
> 2.38.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 23:54:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 23:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483783.750155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKT72-0005Bd-Sn; Tue, 24 Jan 2023 23:54:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483783.750155; Tue, 24 Jan 2023 23:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKT72-0005BW-QB; Tue, 24 Jan 2023 23:54:00 +0000
Received: by outflank-mailman (input) for mailman id 483783;
 Tue, 24 Jan 2023 23:54:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wk+X=5V=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKT72-0005BO-88
 for xen-devel@lists.xenproject.org; Tue, 24 Jan 2023 23:54:00 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5e24ee1c-9c42-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 00:53:59 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 891B661014;
 Tue, 24 Jan 2023 23:53:57 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 22B8CC433D2;
 Tue, 24 Jan 2023 23:53:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e24ee1c-9c42-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674604436;
	bh=ZMmOyANc1PW2IMYFXQWjo4aOEGKdzzO7BIAZeYh+vc0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ubHmsD4a6TKoRVGtI1ZI1X3ogYXhxXCrEBubqh3EDBqL9bFWOiOJmfRdvqELu/o89
	 xDkmf1LGko1TNY2O2wnf+Iz9LpEofhsrl+RFdTMqcnvgsTZ8nHgg2XqCy9NdgJt5XO
	 scRSefg188vPdvr2QsdUcqiDSV+JdZkWnvkwzfLsr+x6Mr6Vcr7kdrRAaa6F/q2cyN
	 0B6ZQlclrlCL8eimIUTyKrPfX1XpsQkptU1/mcHZMadOhJNNtVrvt2HOkC2N9fU1Ui
	 a6MQRWDLXBpMQRPDatpkX34hMj59QFUK5ZQwTmlbL/s4uv+caU8nhURoHuGsBAHDfz
	 C2nJBExrgSzoA==
Date: Tue, 24 Jan 2023 15:53:54 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Gianluca Guida <gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v1 14/14] automation: add smoke test to verify macros
 from bug.h
In-Reply-To: <4ce72535e44f49e82ad23f4e7dc004a67344b823.1674226563.git.oleksii.kurochko@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2301241553220.1978264@ubuntu-linux-20-04-desktop>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com> <4ce72535e44f49e82ad23f4e7dc004a67344b823.1674226563.git.oleksii.kurochko@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 Jan 2023, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

I think you should remove the old greps. This script is part of the Xen
repository and in sync with the codebase, so it is OK to keep only the
most recent version of the grep string.

> ---
>  automation/scripts/qemu-smoke-riscv64.sh | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
> index e0f06360bc..e7cc7f1442 100755
> --- a/automation/scripts/qemu-smoke-riscv64.sh
> +++ b/automation/scripts/qemu-smoke-riscv64.sh
> @@ -17,4 +17,6 @@ qemu-system-riscv64 \
>  
>  set -e
>  (grep -q "Hello from C env" smoke.serial) || exit 1
> +(grep -q "run_in_exception_handler is most likely working" smoke.serial) || exit 1
> +(grep -q "WARN is most likely working" smoke.serial) || exit 1
>  exit 0
> -- 
> 2.39.0
> 


From xen-devel-bounces@lists.xenproject.org Tue Jan 24 23:54:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Jan 2023 23:54:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483789.750166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKT7X-0005j8-4y; Tue, 24 Jan 2023 23:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483789.750166; Tue, 24 Jan 2023 23:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKT7X-0005j1-1M; Tue, 24 Jan 2023 23:54:31 +0000
Received: by outflank-mailman (input) for mailman id 483789;
 Tue, 24 Jan 2023 23:54:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKT7W-0005ip-7S; Tue, 24 Jan 2023 23:54:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKT7W-0000EV-4i; Tue, 24 Jan 2023 23:54:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKT7V-0006f1-KS; Tue, 24 Jan 2023 23:54:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKT7V-0005jh-K0; Tue, 24 Jan 2023 23:54:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1+vAPLVNo+3r/12O7Owm7uRZ2Llalxq04RMKFKyzLiI=; b=5lPeGyvmFL00H9gEfFyedUiPuA
	LWq8VVBnaeVDy0chQGwpprMPbQYVpVMTDHduerSaNevx/8JGpBcMg9R2s5+WHqNF9U9RmQ4f7v7m9
	3Otj4QDIafnTuoLtoet73B2+xz/0tzpgQBrvgpM94VP5pLindQwuhwd/+b7X097xGatI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176096-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 176096: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=13356edb87506c148b163b8c7eb0695647d00c2a
X-Osstest-Versions-That:
    qemuu=00b1faea41d283e931256aa78aa975a369ec3ae6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Jan 2023 23:54:29 +0000

flight 176096 qemu-mainline real [real]
flight 176109 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176096/
http://logs.test-lab.xenproject.org/osstest/logs/176109/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail pass in 176109-retest
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 176109-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 176069
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  7 xen-install    fail like 176080
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 176080
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 176080
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 176080
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 176080
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 176080
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 176080
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 176080
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 176080
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                13356edb87506c148b163b8c7eb0695647d00c2a
baseline version:
 qemuu                00b1faea41d283e931256aa78aa975a369ec3ae6

Last test of basis   176080  2023-01-23 23:38:33 Z    1 days
Testing same since   176096  2023-01-24 12:38:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chao Gao <chao.gao@intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Stefan Hajnoczi <stefanha@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   00b1faea41..13356edb87  13356edb87506c148b163b8c7eb0695647d00c2a -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 00:27:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 00:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483802.750176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKTdb-0001rx-8P; Wed, 25 Jan 2023 00:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483802.750176; Wed, 25 Jan 2023 00:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKTdb-0001rq-5U; Wed, 25 Jan 2023 00:27:39 +0000
Received: by outflank-mailman (input) for mailman id 483802;
 Wed, 25 Jan 2023 00:27:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKTdZ-0001rg-G2; Wed, 25 Jan 2023 00:27:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKTdZ-0001Zl-Eu; Wed, 25 Jan 2023 00:27:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKTdZ-0008K7-3P; Wed, 25 Jan 2023 00:27:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKTdZ-0001eA-2u; Wed, 25 Jan 2023 00:27:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N/62X3IJQs2U2MD/FxpOUs/BRetj6j9fL8XdJXG3wVU=; b=gxUfcvAOBPgy7kLTm+2ASg9A+Q
	JNQ03DUZA9JeLa6azeoXCyOjNgNA8Y4TMDYZoYvyzj8EpdWhXHbv8pMfUq3RmExkySIFpnOK6LHQG
	ffi/WMHxeGtTbP8+ntOBHnqlv8p5SObHPjnK98Z0+G2OBnPomwP9MstP7P75DlUNr30E=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176107-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176107: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fbd9b5fb4c26546d6f207036917283d2f1569d9c
X-Osstest-Versions-That:
    xen=352c89f72ddb67b8d9d4e492203f8c77f85c8df1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 00:27:37 +0000

flight 176107 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176107/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fbd9b5fb4c26546d6f207036917283d2f1569d9c
baseline version:
 xen                  352c89f72ddb67b8d9d4e492203f8c77f85c8df1

Last test of basis   176099  2023-01-24 16:00:29 Z    0 days
Testing same since   176107  2023-01-24 21:02:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Henry Wang <Henry.Wang@arm.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   352c89f72d..fbd9b5fb4c  fbd9b5fb4c26546d6f207036917283d2f1569d9c -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 00:37:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 00:37:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483812.750187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKTmt-0003T7-Ai; Wed, 25 Jan 2023 00:37:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483812.750187; Wed, 25 Jan 2023 00:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKTmt-0003T0-50; Wed, 25 Jan 2023 00:37:15 +0000
Received: by outflank-mailman (input) for mailman id 483812;
 Wed, 25 Jan 2023 00:37:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKTms-0003Sq-6f; Wed, 25 Jan 2023 00:37:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKTms-0001k0-4i; Wed, 25 Jan 2023 00:37:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKTmr-0000Rj-Oc; Wed, 25 Jan 2023 00:37:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKTmr-0002o4-O9; Wed, 25 Jan 2023 00:37:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=dMyYcceVseUuMXZSp2GLwQKrdyTdj3gdrwz32OA5eDA=; b=JiRf72lNCPLCNuqEJTdsHWE9TZ
	7imus3hNdPbkG73XKYBaUmAhVmTab3fjMCgNivN/fOjsRITgYLygIil7S3xk8bBSYAW0tStR2DA8x
	Mp85lvhHkbyBhsqF6AKM7xM6k8HIXIWS6a3DetV0LPI/udXVR+lZLenJoWq4l2vrFRHc=;
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl
Message-Id: <E1pKTmr-0002o4-O9@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 00:37:13 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl
testid guest-localmigrate

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176111/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl.guest-localmigrate.html
Revision IDs in each graph node refer, respectively, to the Trees above.
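[Editorial note for readers unfamiliar with these reports: the bisection above narrows the failing changeset by repeatedly testing commits between a known-good and a known-bad revision. The core idea can be sketched as a binary search over a linear commit range. This is a simplified illustration only, not osstest's actual cs-bisection-step, which operates on revision tuples across several trees and retries each candidate flight; the commit list and predicate below are toy values mirroring the hashes in this report.]

```python
def bisect_first_bad(commits, is_bad):
    """Return the first commit for which is_bad() holds, assuming the
    history is monotone: good...good bad...bad, with commits[0] known
    good and commits[-1] known bad."""
    lo, hi = 0, len(commits) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid   # bug is at mid or earlier
        else:
            lo = mid   # bug landed after mid
    return commits[hi]

# Toy history mirroring this report: the bug lands at 1894049f
# ("x86/shadow: L2H shadow type is PV32-only").
history = ["c1df06af", "20279afd", "1894049f", "d60324d8"]
bad = bisect_first_bad(history, lambda c: c in {"1894049f", "d60324d8"})
# bad == "1894049f"
```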

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl.guest-localmigrate --summary-out=tmp/176111.bisection-summary --basis-template=175994 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl guest-localmigrate
Searching for failure / basis pass:
 176091 fail [host=fiano1] / 175994 [host=huxelrebe0] 175987 [host=nobling0] 175965 [host=nobling1] 175734 [host=pinot1] 175726 [host=albana0] 175720 [host=italia0] 175714 [host=debina1] 175694 [host=italia1] 175671 [host=albana1] 175651 [host=fiano0] 175635 [host=pinot0] 175624 [host=debina0] 175612 [host=nocera0] 175601 [host=huxelrebe0] 175592 [host=albana0] 175573 [host=nocera1] 175569 ok.
Failure / basis pass flights: 176091 / 175569
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 c1df06afe578f698ebe91a1e3817463b9d165123
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#1cf02b05b27c48775a25699e61b93b814b9ae042-625eb5e96dc96aa7fddef59a08edae215527f19c git://xenbits.xen.org/xen.git#c1df06afe578f698ebe91a1e3817463b9d165123-d60324d8af9404014cfcc37bba09e9facfd02fcf
From git://cache:9419/git://xenbits.xen.org/qemu-xen
   00b1faea41..13356edb87  upstream-tested -> origin/upstream-tested
From git://cache:9419/git://xenbits.xen.org/xen
   352c89f72d..fbd9b5fb4c  smoke      -> origin/smoke
   fbd9b5fb4c..3b760245f7  staging    -> origin/staging
Loaded 10003 nodes in revision graph
Searching for test results:
 175592 [host=albana0]
 175601 [host=huxelrebe0]
 175612 [host=nocera0]
 175624 [host=debina0]
 175635 [host=pinot0]
 175651 [host=fiano0]
 175671 [host=albana1]
 175694 [host=italia1]
 175714 [host=debina1]
 175720 [host=italia0]
 175726 [host=albana0]
 175734 [host=pinot1]
 175834 []
 175861 []
 175890 []
 175907 []
 175931 []
 175956 []
 175965 [host=nobling1]
 175987 [host=nobling0]
 175994 [host=huxelrebe0]
 176003 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
 176011 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176025 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176035 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176042 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176048 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176056 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176062 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176077 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 c1df06afe578f698ebe91a1e3817463b9d165123
 176081 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176082 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176076 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176084 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c a1a618208bf53469f5e3eaa14202ba777d33f442
 176089 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176092 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 686b80c1ae4cc338334eb5df4836df526109377a
 176094 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176095 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176097 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176098 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176091 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176101 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176104 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176108 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176111 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 175569 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 c1df06afe578f698ebe91a1e3817463b9d165123
 175573 [host=nocera1]
Searching for interesting versions
 Result found: flight 175569 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f, results HASH(0x5642b5ba6e90) HASH(0x5642b5bc4140) HASH(0x5642b5bc8cc0)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x5642b51c6130)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x5642b5bb6d40)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 c1df06afe578f698ebe91a1e3817463b9d165123, results HASH(0x5642b5bb2d08) HASH(0x5642b5bcb5c8)
 Result found: flight 176003 (fail), for basis failure (at ancestor ~989)
 Repro found: flight 176077 (pass), for basis pass
 Repro found: flight 176089 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
No revisions left to test, checking graph state.
 Result found: flight 176097 (pass), for last pass
 Result found: flight 176098 (fail), for first failure
 Repro found: flight 176101 (pass), for last pass
 Repro found: flight 176104 (fail), for first failure
 Repro found: flight 176108 (pass), for last pass
 Repro found: flight 176111 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176111/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl.guest-localmigrate.{dot,ps,png,html,svg}.
----------------------------------------
176111: tolerable ALL FAIL

flight 176111 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/176111/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl           18 guest-localmigrate      fail baseline untested


jobs:
 test-amd64-i386-xl                                           fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 00:57:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 00:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483820.750199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKU6T-0006BU-Un; Wed, 25 Jan 2023 00:57:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483820.750199; Wed, 25 Jan 2023 00:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKU6T-0006BN-R0; Wed, 25 Jan 2023 00:57:29 +0000
Received: by outflank-mailman (input) for mailman id 483820;
 Wed, 25 Jan 2023 00:57:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKU6S-0006BH-GN
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 00:57:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 39daefba-9c4b-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 01:57:23 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0E0596137F;
 Wed, 25 Jan 2023 00:57:22 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 96415C433EF;
 Wed, 25 Jan 2023 00:57:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39daefba-9c4b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674608241;
	bh=x0yXbfLgcj/oiBFB9ck8jxO/ZesjK0dK4t36vQV1vjI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=tZETO2QrIpw5shZW4ymnVM6V8OIYXijYmCk/FUJaFkM4r/uAPeG25poAplAqcJTif
	 /Kr+cHKQkshkWzMapYjaSkixMUp/JfdPVTU6A6m12+fz287gyeVgjxCZAz4yTqU1bv
	 8nVJEvXO+X1NPDgDc/FkfC3OEZ1OcAL3y4o3vO6eawXlwVE72Nwx3y+f7baeej9y1Y
	 dKFTRhoi6NAEhdO3Fwcl4loWU48/iIH2T4qoW/9h2bqWeCkAm5X2xyqUBwId/dwjr6
	 qRpcruXlqG1bTVgbJCZtv55TTnVRtXRvXqfjcl0zDChINwgWvIHxZEqKFCouxXQ1P/
	 X3/CaWJ7BeiNA==
Date: Tue, 24 Jan 2023 16:57:19 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Henry Wang <Henry.Wang@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
In-Reply-To: <20221214031654.2815589-2-Henry.Wang@arm.com>
Message-ID: <alpine.DEB.2.22.394.2301241657120.1978264@ubuntu-linux-20-04-desktop>
References: <20221214031654.2815589-1-Henry.Wang@arm.com> <20221214031654.2815589-2-Henry.Wang@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 14 Dec 2022, Henry Wang wrote:
> As more and more types of static region are introduced, and all of
> these static regions are defined in bootinfo.reserved_mem, it is
> necessary for Xen to check reserved memory regions for overlap: such
> a check helps users identify device tree misconfiguration at an early
> stage of boot.
> 
> Currently we have 3 types of static region, namely
> (1) static memory
> (2) static heap
> (3) static shared memory
> 
> (1) and (2) are parsed by the function `device_tree_get_meminfo()` and
> (3) is parsed using its own logic. The parsed information for all of
> these types is stored in `struct meminfo`.
> 
> Therefore, to unify the overlap checking logic for all of these types,
> this commit first introduces a helper `meminfo_overlap_check()` and
> a function `check_reserved_regions_overlap()` that checks whether an
> input physical address range overlaps any of the existing memory
> regions defined in bootinfo. It then uses
> `check_reserved_regions_overlap()` in `device_tree_get_meminfo()` to
> perform the overlap check for (1) and (2), and replaces the original
> overlap check for (3) with `check_reserved_regions_overlap()`.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v1 -> v2:
> 1. Split original `overlap_check()` to `meminfo_overlap_check()`.
> 2. Rework commit message.
> ---
>  xen/arch/arm/bootfdt.c           | 13 +++++-----
>  xen/arch/arm/include/asm/setup.h |  2 ++
>  xen/arch/arm/setup.c             | 42 ++++++++++++++++++++++++++++++++
>  3 files changed, 50 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 0085c28d74..e2f6c7324b 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -88,6 +88,9 @@ static int __init device_tree_get_meminfo(const void *fdt, int node,
>      for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
>      {
>          device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +        if ( mem == &bootinfo.reserved_mem &&
> +             check_reserved_regions_overlap(start, size) )
> +            return -EINVAL;
>          /* Some DT may describe empty bank, ignore them */
>          if ( !size )
>              continue;
> @@ -482,7 +485,9 @@ static int __init process_shm_node(const void *fdt, int node,
>                  return -EINVAL;
>              }
>  
> -            if ( (end <= mem->bank[i].start) || (paddr >= bank_end) )
> +            if ( check_reserved_regions_overlap(paddr, size) )
> +                return -EINVAL;
> +            else
>              {
>                  if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 )
>                      continue;
> @@ -493,12 +498,6 @@ static int __init process_shm_node(const void *fdt, int node,
>                      return -EINVAL;
>                  }
>              }
> -            else
> -            {
> -                printk("fdt: shared memory region overlap with an existing entry %#"PRIpaddr" - %#"PRIpaddr"\n",
> -                        mem->bank[i].start, bank_end);
> -                return -EINVAL;
> -            }
>          }
>      }
>  
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index fdbf68aadc..6a9f88ecbb 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -143,6 +143,8 @@ void fw_unreserved_regions(paddr_t s, paddr_t e,
>  size_t boot_fdt_info(const void *fdt, paddr_t paddr);
>  const char *boot_fdt_cmdline(const void *fdt);
>  
> +int check_reserved_regions_overlap(paddr_t region_start, paddr_t region_size);
> +
>  struct bootmodule *add_boot_module(bootmodule_kind kind,
>                                     paddr_t start, paddr_t size, bool domU);
>  struct bootmodule *boot_module_find_by_kind(bootmodule_kind kind);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90..e6eeb3a306 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -261,6 +261,31 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>      cb(s, e);
>  }
>  
> +static int __init meminfo_overlap_check(struct meminfo *meminfo,
> +                                        paddr_t region_start,
> +                                        paddr_t region_end)
> +{
> +    paddr_t bank_start = INVALID_PADDR, bank_end = 0;
> +    unsigned int i, bank_num = meminfo->nr_banks;
> +
> +    for ( i = 0; i < bank_num; i++ )
> +    {
> +        bank_start = meminfo->bank[i].start;
> +        bank_end = bank_start + meminfo->bank[i].size;
> +
> +        if ( region_end <= bank_start || region_start >= bank_end )
> +            continue;
> +        else
> +        {
> +            printk("Region %#"PRIpaddr" - %#"PRIpaddr" overlapping with bank[%u] %#"PRIpaddr" - %#"PRIpaddr"\n",
> +                   region_start, region_end, i, bank_start, bank_end);
> +            return -EINVAL;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>  void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>                                    void (*cb)(paddr_t, paddr_t),
>                                    unsigned int first)
> @@ -271,7 +296,24 @@ void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>          cb(s, e);
>  }
>  
> +/*
> + * Given an input physical address range, check if this range is overlapping
> + * with the existing reserved memory regions defined in bootinfo.
> + * Return 0 if the input physical address range is not overlapping with any
> + * existing reserved memory regions, otherwise -EINVAL.
> + */
> +int __init check_reserved_regions_overlap(paddr_t region_start,
> +                                          paddr_t region_size)
> +{
> +    paddr_t region_end = region_start + region_size;
> +
> +    /* Check if input region is overlapping with bootinfo.reserved_mem banks */
> +    if ( meminfo_overlap_check(&bootinfo.reserved_mem,
> +                               region_start, region_end) )
> +        return -EINVAL;
>  
> +    return 0;
> +}
>  
>  struct bootmodule __init *add_boot_module(bootmodule_kind kind,
>                                            paddr_t start, paddr_t size,
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 00:57:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 00:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483821.750209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKU6e-0006S9-53; Wed, 25 Jan 2023 00:57:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483821.750209; Wed, 25 Jan 2023 00:57:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKU6e-0006S2-2C; Wed, 25 Jan 2023 00:57:40 +0000
Received: by outflank-mailman (input) for mailman id 483821;
 Wed, 25 Jan 2023 00:57:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKU6d-0006BH-2e
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 00:57:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 41f67389-9c4b-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 01:57:36 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B6A6F613CA;
 Wed, 25 Jan 2023 00:57:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4A68CC433EF;
 Wed, 25 Jan 2023 00:57:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41f67389-9c4b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674608255;
	bh=uR4rLCUDe+6STB91mcRK4XdS3Se6/BvvlfNg+xzjRXU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Odv7VUCMqbwpXGCyPVTVkSSUCEmP9ljkS1FeJbgGG4DRUpKU651Wdz0BIsi8a/u+t
	 rErf/dSnvfTgnlmJJAz+R+uvsOc9PND4pPdt7E8fx0HdSyaA+fgIEQxowYZnoQAvrB
	 CGRMcMF6b/wskiYJKQTU4ZU5CB9d2rugRBrrd+YrYs3pNE/a1xAPMzam+ghcwgrIri
	 P3zPPYtJJ3IfDo4XuyRsBzo7M8mGRAE9f1pD7ry7owsydYDShR5lpw/moqgdEnj52W
	 xtTkHiQlRH58Q3KspWpKmzaB4S02Qxo38118u1aMeeiXNTGsK3AObXiJN/kmyyC3Ev
	 hWNTkruN0y0tg==
Date: Tue, 24 Jan 2023 16:57:32 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Henry Wang <Henry.Wang@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 2/3] xen/arm: Extend the memory overlap check to
 include bootmodules
In-Reply-To: <20221214031654.2815589-3-Henry.Wang@arm.com>
Message-ID: <alpine.DEB.2.22.394.2301241657240.1978264@ubuntu-linux-20-04-desktop>
References: <20221214031654.2815589-1-Henry.Wang@arm.com> <20221214031654.2815589-3-Henry.Wang@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 14 Dec 2022, Henry Wang wrote:
> As with the static regions defined in bootinfo.reserved_mem, the
> bootmodule regions defined in bootinfo.modules must not overlap
> memory regions in either bootinfo.reserved_mem or bootinfo.modules.
> 
> Therefore, this commit introduces a helper `bootmodules_overlap_check()`
> and uses it to extend the check in `check_reserved_regions_overlap()`
> so that memory regions in bootinfo.modules are included. It also calls
> `check_reserved_regions_overlap()` in `add_boot_module()` to return
> early if an overlap is detected.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v1 -> v2:
> 1. Split original `overlap_check()` to `bootmodules_overlap_check()`.
> 2. Rework commit message.
> ---
>  xen/arch/arm/setup.c | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index e6eeb3a306..ba0152f868 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -286,6 +286,31 @@ static int __init meminfo_overlap_check(struct meminfo *meminfo,
>      return 0;
>  }
>  
> +static int __init bootmodules_overlap_check(struct bootmodules *bootmodules,
> +                                            paddr_t region_start,
> +                                            paddr_t region_end)
> +{
> +    paddr_t mod_start = INVALID_PADDR, mod_end = 0;
> +    unsigned int i, mod_num = bootmodules->nr_mods;
> +
> +    for ( i = 0; i < mod_num; i++ )
> +    {
> +        mod_start = bootmodules->module[i].start;
> +        mod_end = mod_start + bootmodules->module[i].size;
> +
> +        if ( region_end <= mod_start || region_start >= mod_end )
> +            continue;
> +        else
> +        {
> +            printk("Region %#"PRIpaddr" - %#"PRIpaddr" overlapping with mod[%u] %#"PRIpaddr" - %#"PRIpaddr"\n",
> +                   region_start, region_end, i, mod_start, mod_end);
> +            return -EINVAL;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>  void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>                                    void (*cb)(paddr_t, paddr_t),
>                                    unsigned int first)
> @@ -312,6 +337,11 @@ int __init check_reserved_regions_overlap(paddr_t region_start,
>                                 region_start, region_end) )
>          return -EINVAL;
>  
> +    /* Check if input region is overlapping with bootmodules */
> +    if ( bootmodules_overlap_check(&bootinfo.modules,
> +                                   region_start, region_end) )
> +        return -EINVAL;
> +
>      return 0;
>  }
>  
> @@ -329,6 +359,10 @@ struct bootmodule __init *add_boot_module(bootmodule_kind kind,
>                 boot_module_kind_as_string(kind), start, start + size);
>          return NULL;
>      }
> +
> +    if ( check_reserved_regions_overlap(start, size) )
> +        return NULL;
> +
>      for ( i = 0 ; i < mods->nr_mods ; i++ )
>      {
>          mod = &mods->module[i];
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 00:57:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 00:57:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483823.750219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKU6q-0006pN-K9; Wed, 25 Jan 2023 00:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483823.750219; Wed, 25 Jan 2023 00:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKU6q-0006pG-Gx; Wed, 25 Jan 2023 00:57:52 +0000
Received: by outflank-mailman (input) for mailman id 483823;
 Wed, 25 Jan 2023 00:57:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKU6o-0006BH-Q9
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 00:57:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 48f9b03a-9c4b-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 01:57:48 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9F5D860B86;
 Wed, 25 Jan 2023 00:57:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 413A2C433EF;
 Wed, 25 Jan 2023 00:57:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48f9b03a-9c4b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674608267;
	bh=C7QNZDgeCr1RHT/hC6pNJEHyiWlr0MI9uHe5zEejtcQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=aYlbX9L1YVfwYY2oxWcwMIIsBnCeoRw7krUdToxYmXdCu9jI1fpTKjbW98cZUiQx0
	 egJujhcgsIQ2oCV6HDmHiJwpUNZLVmqeEhXRODIpPoFBSXsnt/JdFiMc1EnUTSYWtH
	 CiJp2K+fpgWMQH34171g3YCthpfGIBK4FtOobfUElBv/DT1RRZLI5AXW2mEdiC3Fj5
	 2FhzjjMMwmwOOI9cj1R9iksYN0OOAhOUH3afS51XTjVtJ5/W+uj5e737GusZcAL8Nk
	 0EaSIhwmyhOf8NYrYjxnULcOHZm27TJUv1yARuhpPV8BkE2Es2KOr9jlYT2ETdUnIq
	 JY0L/+mcQSsPw==
Date: Tue, 24 Jan 2023 16:57:44 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Henry Wang <Henry.Wang@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Extend the memory overlap check to
 include EfiACPIReclaimMemory
In-Reply-To: <20221214031654.2815589-4-Henry.Wang@arm.com>
Message-ID: <alpine.DEB.2.22.394.2301241657380.1978264@ubuntu-linux-20-04-desktop>
References: <20221214031654.2815589-1-Henry.Wang@arm.com> <20221214031654.2815589-4-Henry.Wang@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 14 Dec 2022, Henry Wang wrote:
> As with the static regions and boot modules, memory regions of type
> EfiACPIReclaimMemory (recorded in bootinfo.acpi when CONFIG_ACPI is
> enabled) must not overlap memory regions in bootinfo.reserved_mem or
> bootinfo.modules.
> 
> Therefore, this commit reuses `meminfo_overlap_check()` to further
> extend the check in `check_reserved_regions_overlap()` so that memory
> regions in bootinfo.acpi are included. If the extended
> `check_reserved_regions_overlap()` reports an error,
> `meminfo_add_bank()` defined in `efi-boot.h` returns early.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v1 -> v2:
> 1. Rebase on top of patch #1 and #2.
> ---
>  xen/arch/arm/efi/efi-boot.h | 10 ++++++++--
>  xen/arch/arm/setup.c        |  6 ++++++
>  2 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
> index 43a836c3a7..6121ba1f2f 100644
> --- a/xen/arch/arm/efi/efi-boot.h
> +++ b/xen/arch/arm/efi/efi-boot.h
> @@ -161,13 +161,19 @@ static bool __init meminfo_add_bank(struct meminfo *mem,
>                                      EFI_MEMORY_DESCRIPTOR *desc)
>  {
>      struct membank *bank;
> +    paddr_t start = desc->PhysicalStart;
> +    paddr_t size = desc->NumberOfPages * EFI_PAGE_SIZE;
>  
>      if ( mem->nr_banks >= NR_MEM_BANKS )
>          return false;
> +#ifdef CONFIG_ACPI
> +    if ( check_reserved_regions_overlap(start, size) )
> +        return false;
> +#endif
>  
>      bank = &mem->bank[mem->nr_banks];
> -    bank->start = desc->PhysicalStart;
> -    bank->size = desc->NumberOfPages * EFI_PAGE_SIZE;
> +    bank->start = start;
> +    bank->size = size;
>  
>      mem->nr_banks++;
>  
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index ba0152f868..a0cb2dd588 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -342,6 +342,12 @@ int __init check_reserved_regions_overlap(paddr_t region_start,
>                                     region_start, region_end) )
>          return -EINVAL;
>  
> +#ifdef CONFIG_ACPI
> +    /* Check if input region is overlapping with ACPI EfiACPIReclaimMemory */
> +    if ( meminfo_overlap_check(&bootinfo.acpi, region_start, region_end) )
> +        return -EINVAL;
> +#endif
> +
>      return 0;
>  }
>  
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 03:54:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 03:54:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483840.750238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKWr9-0000Cr-FU; Wed, 25 Jan 2023 03:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483840.750238; Wed, 25 Jan 2023 03:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKWr9-0000Cj-A4; Wed, 25 Jan 2023 03:53:51 +0000
Received: by outflank-mailman (input) for mailman id 483840;
 Wed, 25 Jan 2023 03:53:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKWr7-0000CZ-Uw; Wed, 25 Jan 2023 03:53:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKWr7-0005JR-S0; Wed, 25 Jan 2023 03:53:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKWr7-0002EB-EG; Wed, 25 Jan 2023 03:53:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKWr7-0004zf-Di; Wed, 25 Jan 2023 03:53:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=emo+GFN8i/gsl2+mOt91zVqUXlLNzNYroDZd65BkOBE=; b=KoglEPlSiv9SjDBqEkOteRJOYj
	ziGR14Nc1hxyVcJsVGmosEO3BOOfUs/0WOImFVUQOwa5nOotBmytA5TxAI4sXaVeRjoFba4pUdqi9
	+/poCFmyElzwQtfSY1bsqG43zyQq2Im1/hb6tE0DM1eAw82LtFRT+V4NJKL05e+G1qSQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176100-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176100: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt:guest-saverestore:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7bf70dbb18820b37406fdfa2aaf14c2f5c71a11a
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 03:53:49 +0000

flight 176100 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176100/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt    17 guest-saverestore fail in 176086 pass in 176100
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 176086
 test-amd64-amd64-xl-vhd      21 guest-start/debian.repeat  fail pass in 176086

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7bf70dbb18820b37406fdfa2aaf14c2f5c71a11a
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  109 days
Failing since        173470  2022-10-08 06:21:34 Z  108 days  224 attempts
Testing same since   176086  2023-01-24 04:39:45 Z    0 days    2 attempts

------------------------------------------------------------
3438 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 528145 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 04:04:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 04:04:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483847.750248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKX1W-0001tx-CC; Wed, 25 Jan 2023 04:04:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483847.750248; Wed, 25 Jan 2023 04:04:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKX1W-0001tq-9W; Wed, 25 Jan 2023 04:04:34 +0000
Received: by outflank-mailman (input) for mailman id 483847;
 Wed, 25 Jan 2023 04:04:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKX1U-0001tg-CL; Wed, 25 Jan 2023 04:04:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKX1U-0005b5-2i; Wed, 25 Jan 2023 04:04:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKX1T-0002bO-KC; Wed, 25 Jan 2023 04:04:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKX1T-0002Ko-Jh; Wed, 25 Jan 2023 04:04:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hXQZBKGk2ysDCLALXuKwhrZia1yADvpYpbv45Pji76g=; b=V/fGG4zsD4p6an6h09EELr+pTH
	u9Tf/NALjg3J6r1hvgdja0urzlujL/CZotpdOcyWWsBD94uXCjQ2x01HkoYqmTlOT88QSU1oPUCem
	wBe/0zxXlhovQURbFu1O0eV7F7r9QOOcRqtrro4s4Z+kBIqv22qnscpaLVQQV78WsCEM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176113-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176113: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3b760245f74ab2022b1aa4da842c4545228c2e83
X-Osstest-Versions-That:
    xen=fbd9b5fb4c26546d6f207036917283d2f1569d9c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 04:04:31 +0000

flight 176113 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176113/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3b760245f74ab2022b1aa4da842c4545228c2e83
baseline version:
 xen                  fbd9b5fb4c26546d6f207036917283d2f1569d9c

Last test of basis   176107  2023-01-24 21:02:17 Z    0 days
Testing same since   176113  2023-01-25 01:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fbd9b5fb4c..3b760245f7  3b760245f74ab2022b1aa4da842c4545228c2e83 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 06:52:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 06:52:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483857.750264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKZdl-0003tk-Gv; Wed, 25 Jan 2023 06:52:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483857.750264; Wed, 25 Jan 2023 06:52:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKZdl-0003td-Bw; Wed, 25 Jan 2023 06:52:13 +0000
Received: by outflank-mailman (input) for mailman id 483857;
 Wed, 25 Jan 2023 06:52:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKZdk-0003tT-4X; Wed, 25 Jan 2023 06:52:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKZdk-0001Zc-0Z; Wed, 25 Jan 2023 06:52:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKZdj-0004Mf-J7; Wed, 25 Jan 2023 06:52:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKZdj-00020F-Id; Wed, 25 Jan 2023 06:52:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4t4Cdt0GjBraTbVpVScDQHbzbpu339AFxqZg9apyEss=; b=T+gneBXPc9awdfE5udIVfwHde2
	9Dj5kATjwI1zKVQ7zCdnCxNP6eaHmODIWuhwUP/wv7GgXsdYmQaCP6dEUduqlw/gs+kn6roZdGCUa
	paB5EyiLYCLLRo9wFtL608mquDCUGVlqF/cSlNrlnCeATrBecw//NDLP9OUCJObGqatQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176106-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 176106: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=90245959a5b936ee013266e5d1e6a508ed69274e
X-Osstest-Versions-That:
    linux=1349fe3a332ad3d1ece60806225ca7955aba9f56
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 06:52:11 +0000

flight 176106 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176106/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 176090 pass in 176106
 test-amd64-i386-libvirt       7 xen-install                fail pass in 176090
 test-armhf-armhf-xl-multivcpu 14 guest-start               fail pass in 176090
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 176090
 test-armhf-armhf-libvirt-raw 13 guest-start                fail pass in 176090

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 175968
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 176090 like 175958
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail in 176090 like 175968
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 176090 like 175968
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 176090 like 175968
 test-amd64-i386-libvirt     15 migrate-support-check fail in 176090 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 176090 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 176090 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 176090 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175968
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175968
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175968
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175968
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175968
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175968
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175968
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175968
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175968
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat    fail  like 175968
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175968
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                90245959a5b936ee013266e5d1e6a508ed69274e
baseline version:
 linux                1349fe3a332ad3d1ece60806225ca7955aba9f56

Last test of basis   175968  2023-01-19 04:24:06 Z    6 days
Testing same since   176090  2023-01-24 06:43:46 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abel Vesa <abel.vesa@linaro.org>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Ali Mirghasemi <ali.mirghasemi1376@gmail.com>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arend van Spriel <arend.vanspriel@broadcom.com>
  Borislav Petkov <bp@suse.de>
  Chris Wilson <chris@chris-wilson.co.uk>
  Daniel Scally <dan.scally@ideasonboard.com>
  Daniil Tatianin <d-tatianin@yandex-team.ru>
  David Hildenbrand <david@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Hui <dinghui@sangfor.com.cn>
  Duke Xin(辛安文) <duke_xinanwen@163.com>
  Enzo Matsumiya <ematsumiya@suse.de>
  Filipe Manana <fdmanana@suse.com>
  Flavio Suligoi <f.suligoi@asem.it>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hao Sun <sunhao.th@gmail.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  hongao <hongao@uniontech.com>
  Hugh Dickins <hughd@google.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jernej Skrabec <jernej.skrabec@gmail.com>
  Jimmy Hu <hhhuuu@google.com>
  Jiri Slaby (SUSE) <jirislaby@kernel.org>
  Johan Hovold <johan@kernel.org>
  Jordy Zomer <jordyzomer@google.com>
  Joshua Ashton <joshua@froggi.es>
  Juhyung Park <qkrwngud825@gmail.com>
  Kalle Valo <kvalo@kernel.org>
  Khazhismel Kumykov <khazhy@chromium.org>
  Khazhismel Kumykov <khazhy@google.com>
  Leon Romanovsky <leon@kernel.org>
  Maciej Żenczykowski <maze@google.com>
  Martin KaFai Lau <martin.lau@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Michael Adler <michael.adler@siemens.com>
  Mohan Kumar <mkumard@nvidia.com>
  Nathan Chancellor <nathan@kernel.org>
  Ola Jeppsson <ola@snap.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Prashant Malani <pmalani@chromium.org>
  Ricardo Ribalda <ribalda@chromium.org>
  Richard Genoud <richard.genoud@gmail.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Ron Lee <ron.lee@intel.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Samuel Holland <samuel@sholland.org>
  Sasa Dragic <sasa.dragic@gmail.com>
  Sasha Levin <sashal@kernel.org>
  Shawn.Shao <shawn.shao@jaguarmicro.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Steve French <stfrench@microsoft.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Takashi Iwai <tiwai@suse.de>
  Tobias Schramm <t.schramm@manjaro.org>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vinod Koul <vkoul@kernel.org>
  YingChi Long <me@inclyc.cn>
  Yuchi Yang <yangyuchi66@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   1349fe3a332a..90245959a5b9  90245959a5b936ee013266e5d1e6a508ed69274e -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 06:57:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 06:57:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483866.750274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKZil-0004hS-5n; Wed, 25 Jan 2023 06:57:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483866.750274; Wed, 25 Jan 2023 06:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKZil-0004hL-2i; Wed, 25 Jan 2023 06:57:23 +0000
Received: by outflank-mailman (input) for mailman id 483866;
 Wed, 25 Jan 2023 06:57:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YOW=5W=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKZik-0004hF-6V
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 06:57:22 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2059.outbound.protection.outlook.com [40.107.6.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 82a2b530-9c7d-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 07:57:19 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8311.eurprd04.prod.outlook.com (2603:10a6:20b:3b3::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Wed, 25 Jan
 2023 06:57:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Wed, 25 Jan 2023
 06:57:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82a2b530-9c7d-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GusuMdDprl11HR7Wfd2TxM/tQvacf/s/VBiIOSDeBztnfR7o3CHdYy9xHB/bv50qW42Iv/r/BQMQwqNapCLeOp7M8lnLGgnxeFtwJ4ASSZ5IuNqMtEm9ZkkmqSLb7MrEXbKhJK89IvS2LoUz/JAFs/RqKNpbB6uFvQHTO/eErJbYKYRWnjpkegfdz5dYGSPofc1nKwCbWW19PlkvTMY6Gzq8bDaNLjBoMCanIBxMvgjLR0na0PD2utnv8Pi5q/FQNhWRpUqsPPaTvEEw4r8pceZ4IOeWNVls/T/Jcb4dBnn3FuYSh2XxPB2mVRRMAh69vj6rkOEgd3iHgtm5AgRYfw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LMGxywd3OXSdXfdzAAZ9ttYRe4lHqqnocHfh7aGa3FI=;
 b=Y+ZBx0UQuG54rtKBtjJuQj1YjoB4wGocyKvuEBZmapgI33sjvp8fhOX5Nn9fG76bqMAtUIRzQTCznw7fzaBSPpkX9s9FKW/UzAdQpOnep1iGJ6ATW71k++UrLhDQ1arFSkn8x/vfJrSzbXTT0Y45FjbVcGcfbVQWgnCSCongsk1kymQvtnQgkSZ3OCJvm8sYxJ44jWR25o5sn7vH+GnA8dloM1BSFob5BiLfO86FP2K2JXeKJGhgsshma/Ytg5DY0zDdToQsBrFjtFOOVDaxpWzatwYLVvvSXnMtnDZG4eIquMPRdqnIpK7UFScJrUMU1QZlLamf2jigAV71/Or/bA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LMGxywd3OXSdXfdzAAZ9ttYRe4lHqqnocHfh7aGa3FI=;
 b=Ph4ujBK3DUEtgw5dDK9E+ehVOpZ9NAhVQwD6NFdjr1+69urqKUEi26eXJ3rVVlK6sUmNYybH1V05dLytUdDTW+xGRgFEQ8GxP+Wn3vEqFmJfFZl+BDhQZvgNsKf+xKogHk9RZnmYThnO56l/sHa2i6OxOVlv+25w4dELIq6TqhjVOnJUeOrVQLd6ABQB2Jel0wvQq4q1zGJOgFjvb2w7vmKr0xRt1uV6Ot4olEjDAIW0y+sbfFqTwbWsUvoPN2pCoTwG26Vy+9GqwpRnaG79h32nYQWIYotdUnaNu6+t3/Hk3/NPe6TghqHLOeLeovKUd42W4CHIcpdt6snPOD0tVA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <145a827e-4b09-5a85-cb12-eb8f3e0c4f2a@suse.com>
Date: Wed, 25 Jan 2023 07:57:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/x86: public: add TSC defines for cpuid leaf 4
Content-Language: en-US
To: Krister Johansen <kjlx@templeofstupid.com>
Cc: xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 David Reaver <me@davidreaver.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230124223516.GA1962@templeofstupid.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230124223516.GA1962@templeofstupid.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0110.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8311:EE_
X-MS-Office365-Filtering-Correlation-Id: 06c5b104-54f1-451c-4367-08dafea165c8
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8FlnjAqWsHLO9JAL76PZYFahNh2wNIz2Su42BCMLnz+h6VbJPi97OTcYECNSAvFOWOVecgW10ZLDNwE8HA4aI+xAwP/nlTKrymIpLzXA+R7N72/ycthMMJ5S38vsmpav64t3gjsTgenB3ad1fVj0Kh2/KAs2h2Vv+OgnQC1oryIiFcE0Vk7BZaGgeiziTopRX8uIe5KoYi8BV1XUI+YW/4IZJMwqi3PQs9Jd3opLdSqv8rFyLIdi9AQv/IJGUcr29Ae/tieeBjPQ+K19ddV5LBrqffIorSbT5YrmR43cYAU9VCxvMawt18B3hPrZmrRUZyJVicE4k9s2fj7DUhfwKe5g3B0jn7PKLDSYofaNtzB5woJVsUkOBj3HanYtbE67KUGFp4ay5ZwjZ2MdYVygVKMVjpO67CZUUvhgdMLAbUuUpPfLjOd3dphvMUk0VJ5PXF6H/NQ2Q9VGkB06QzsUqPIoA5dKdvhZ5jyTAqNh19jVF3/5drdjL5z4ri9CVzNHrv2H1T7ZPj7YBbo82WA5Cg9Rhf5b1zRRlV0PAsb3jn2RSzmr8Bl9nxkrYXo54xjJgTAvJOMNOkbT7ZEhBK/J3gHfWM91k/8KYLUxIn3RVAnovAJ/+7+Pr3lmizz/hsncreMu1XR+5ueVGSQvchJHJMHrWnCTCXjloa6T6A+2TesNlqZeiXEm+87154AbnNhwFHvYLr9cEMu6nZ8dS8xyRdNrvikEaV1SYu4SIpdo7oc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(396003)(376002)(366004)(346002)(39860400002)(136003)(451199018)(478600001)(6486002)(31686004)(66556008)(26005)(6512007)(316002)(66946007)(54906003)(41300700001)(8676002)(4326008)(6916009)(2616005)(4744005)(8936002)(66476007)(53546011)(6506007)(2906002)(5660300002)(186003)(36756003)(86362001)(31696002)(38100700002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?QUpJMFVobU5ZMFNpdW9TZm8vKzZwa0tGWlJBVU5ZTkhzTjIycHEzamFJOWdY?=
 =?utf-8?B?blJoa0VxUGliZ3djWThDVEhYNlU2ZW0vYWpxbnkvQTRjMzg1NDlWMXAvZTVJ?=
 =?utf-8?B?VjU4NjVlOWp5Z1ZGcVp6K29hVEhMU3hNaTNWR2p6ckFkZ3VjZnl0SkhLR0Rn?=
 =?utf-8?B?RXdIYlAvTkFlUG1GVTJ6MEkxaXVCL1AwTGhoVUM3bFBlYkJraGxST0tRUjdK?=
 =?utf-8?B?bTE1UStzWWxlYVdUeU0yMkxmZGNLb2lvdWtTbGRDNnl3ZXlvSWtYQ0ZNZnRl?=
 =?utf-8?B?WThaaHFtTnI4QTk1aUZIbmdEWWp2SHNIQXBBRzJ4ZldZSDJSZFZuaU5PNWFU?=
 =?utf-8?B?VWFpcUlQbDh5QzVWczM3aUxXNUF0bGlDdkhIQXErSzlTZ1Bhby9XZ2ROMWU3?=
 =?utf-8?B?SiswWEVWTWpubzZlY0FkNDJROWViTWw1VjMveVg2Ry9tQ29DS3hjVjNhWlll?=
 =?utf-8?B?dktCZjZ4aUNKbVBvWkJTdTlLSDRFUWZ5NlRQc0lueWNIQTdrNFpCeEZHbk9L?=
 =?utf-8?B?Nms3ZFN2WjhkSS9VOWU0aHl3TloyYVRoWUVES2JnYkx1N2hCOW5iMHNmUUJy?=
 =?utf-8?B?aUZxMk03dXhTOTN6MWRKZTNNVUFUNTBlVmFud3JKcTJ3dmtPaWttK0gzQVNU?=
 =?utf-8?B?bkExTEN0TWt1OWlQZDRETVRLWDZKdnhxY0lVaENmUnpBYTdTVU1laFczS3dz?=
 =?utf-8?B?R21WT3JIY1U1dE1uRFRvMVdNNHkwQWNPOHZHNUhGeHZaVHRYMUh4eGVKTFl1?=
 =?utf-8?B?a2JXQ1ZNWkp1RStkWWpaVldZRlQzWk5KaDhRV3BGMnFMRjZ6aitPQ0ZqQzN5?=
 =?utf-8?B?SFdvWk5XTzRZZXExTFJLNmpaWXBPWW0yYWttM1JuN3J2SHVmeTMzcDNzbU96?=
 =?utf-8?B?TC9nczJsV3A2YmxjTE1XdTczRHU3YWIrYVV4SVlJdFpkVGJ0OHJNbUF4V3Zz?=
 =?utf-8?B?ZkJiNTF2bXVSVDlxdDJoVUhjcCtmZnJFUno1WWJYWDY5N210aHp4MXdENlRu?=
 =?utf-8?B?Wk5qYWMvYlk0MHFITVRkVHIrQzZSUnN5d1RCMVhxMFRrbzhqK0QrK1RBM1k2?=
 =?utf-8?B?b0djM2FkZVNqdHpvd25YMEQ3R01WRUV3c0VkaWlrTE56VWxpRFZGTnpQZnBl?=
 =?utf-8?B?ZjM1TnRyOUdrazM0MldQejRHTzJCWk4xY2VwL1NlMlRFc1BRRUQ1SVFIbmY0?=
 =?utf-8?B?cEI2R0tqaHFJS1FvT0VCc2pXQzdxcXd2NWRqNUpxQlZsQ2hEa2k1ZFZuekFS?=
 =?utf-8?B?MUFHaGZ4QmdQbUpWZnlsdEc5blV3N0liNVJLeVdWc0FlM0JhVUROUkcrQUI5?=
 =?utf-8?B?LzR3OFVOVDBUeW5DOTd0QStVYzh1RTNWVlN5YUs0UGhDNHBEOFRyZis0NTEv?=
 =?utf-8?B?MUhwWTFZUjJjQVFudnA2ZTUwMjhxdDZIWkxFZ3B0VVBhams0SnhKbE4rWmEw?=
 =?utf-8?B?Y3kxdEJZZU90YnBjWFErS0QzNEpHTFJ1cElKSjNISHlWNGh1eEJOZ0M3OFlz?=
 =?utf-8?B?cVNDTTBieUQ5L2ZLWU9TYXBiQkx4YkkrNkxiaW5mVDRvWWYycTU2YzVkeXZu?=
 =?utf-8?B?a1Z1anFZR1JkWkF6SEFJckdxSzlnL20xTmk5KzR5Y2l3MnhKQ2VXeTZUeXg0?=
 =?utf-8?B?RlVqOW16WC9hVFVBRDNvLys3NGM5UmErVXlqdWRueFZtVlA2ZktlNHBVSlo3?=
 =?utf-8?B?cGo5OGUzMVIwR2NSdkV2VDMyRi9vdHJ1Vm5YeTNOTW9kY2hnbHRuc1dwVXZU?=
 =?utf-8?B?S2g4Ty9kVHBQYkZWZGNWQjlvMnM0R1FBU2pza0NXQ2g3M3p2KzcvcnZyTHBB?=
 =?utf-8?B?aVpMQUovQVBUU0RpRlQreTIxNXN6K1h2Z2lidEY4Zm1JT0VBMGgvNTJtN2dP?=
 =?utf-8?B?SXdab3h6RjlhaCtNNnp1NUpOZFRrcHA1RkFqQkJVaE8vb0NnYU1KWTE3SU5t?=
 =?utf-8?B?VHZ1UUtWMzgyM3UzREVlUHpiTjZYTXYxdDRYOStIVHM0VmpZREZDTVhVTEQz?=
 =?utf-8?B?NU83VjZSKy9xcVVydkQ1clpjcXIwUXJtamRIM2RJeGF6R29QVmZ2M2t5NXpV?=
 =?utf-8?B?NnVpek9kOHRXcFRMMUI3cW1IUk4yYVJla3FhcXhtQm54ZDRFQk5XWFc4K0NR?=
 =?utf-8?Q?MQOr6nV+00KwjchV9bgaV2ZpJ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 06c5b104-54f1-451c-4367-08dafea165c8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 06:57:17.9190
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Bw+nEkbGObAITccix27FtLWZZvPGMX741BEZiPbtaSid2AKYJPQMlHzi8lFfm6E+AS+86tmRJSGqeMRedZnNQw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8311

On 24.01.2023 23:35, Krister Johansen wrote:
> --- a/xen/include/public/arch-x86/cpuid.h
> +++ b/xen/include/public/arch-x86/cpuid.h
> @@ -71,6 +71,12 @@
>   *             EDX: shift amount for tsc->ns conversion
>   * Sub-leaf 2: EAX: host tsc frequency in kHz
>   */
> +#define XEN_CPUID_TSC_EMULATED       (1u << 0)
> +#define XEN_CPUID_HOST_TSC_RELIABLE  (1u << 1)
> +#define XEN_CPUID_RDTSCP_INSTR_AVAIL (1u << 2)
> +#define XEN_CPUID_TSC_MODE_DEFAULT   (0)
> +#define XEN_CPUID_TSC_MODE_EMULATE   (1u)
> +#define XEN_CPUID_TSC_MODE_NOEMULATE (2u)

This could do with a blank line between the two groups. You're also
missing mode 3. Plus, as a formal remark, please follow patch
submission rules: patches are sent To: the list, with maintainers on
Cc:.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483892.750290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbPi-0000gw-On; Wed, 25 Jan 2023 08:45:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483892.750290; Wed, 25 Jan 2023 08:45:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbPi-0000gp-MA; Wed, 25 Jan 2023 08:45:50 +0000
Received: by outflank-mailman (input) for mailman id 483892;
 Wed, 25 Jan 2023 08:45:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbPh-0000gj-KQ
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:45:49 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2060c.outbound.protection.outlook.com
 [2a01:111:f400:7e89::60c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8e701aae-9c8c-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:45:03 +0100 (CET)
Received: from DM6PR03CA0095.namprd03.prod.outlook.com (2603:10b6:5:333::28)
 by SA0PR12MB7074.namprd12.prod.outlook.com (2603:10b6:806:2d5::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:44:21 +0000
Received: from DM6NAM11FT030.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:333:cafe::b7) by DM6PR03CA0095.outlook.office365.com
 (2603:10b6:5:333::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:21 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT030.mail.protection.outlook.com (10.13.172.146) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.21 via Frontend Transport; Wed, 25 Jan 2023 08:44:20 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:19 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:19 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e701aae-9c8c-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bcGycvZFyRxpAEIqfsI3cKxKUUlrd+yXyzoDHglrn9qeQvM8vemgXPAlpfzqLE3mk6Z3gxhCvAmsQDt56MerLAmaRsAS19aGxuMMIbyLDjEkSBg3NXXNh/DTBIqHwT3bUpV0nrm4pCtbcBVqKxSCfP1lZ5V+Co4mBJPkVyMmuc6J2Ctyca7lyv19fFl4XIxeQisx/PLFerzVEB+Yr4m/xRSkmMxsdsWIV8WWcBd7NT1m8UB/7rMo0NLan4+VM007G1ISo+FjdwxXmIQcwQX4yqNctH69o9mHl8VH93H7WP1tNcQu9mcbHJmcn0ilAHBUPdV8q0oVCqbMrqy6eeVomg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Qx1KuYQR6ecGUTs1AcZ1xyNLJ/NWGF8kUn8IPVLjxhM=;
 b=EifB5JC9/QBhZQWAF+0Cfc+W3atOg0bZ+VeDRRaMrgSLwQCmhJOkq/qb57VgPLoAUp28RpPYd0laY1LK9g1tvtf65tqzFLfoNc49xp0XKDC75HsrgzEcVfD5vTJeTMgaAfivfSiLFFAh7FuCtWxDSKjmzxZI8LDRUHh1vHBJ1CbegzocDr0djbm7RH+zZRwIXSOxQ68lm0idOkstB4B/0t+q16h0d3KsF7XDmpbf8nGeFkgmlojmQj+AL/jscjlo3Yly9/bCnNUxFeG9TaZ3217+T+2xzThpFHQWXKcsRoOYV/1r32SppVRsfZPKTLWwvZIMIYoOPhvqab2f50zlYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Qx1KuYQR6ecGUTs1AcZ1xyNLJ/NWGF8kUn8IPVLjxhM=;
 b=dLhpK0DVCBzbVycLkTcZNXCo7w1fJV7Ldqx+SJW+Gqf4IW2v6OH+p/qGwKAfhAtjHhWfxElrwAjrVKwnLN8be7Ob9BuSjflrAEyjJd3ItasK5p/v182eE+nMIGTOPgNJGLvDsMo6kYuldYJbw1vsyuk4u7u+b9nufLrZZ/3QNsk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>
Subject: [QEMU][PATCH v3 00/10] Introduce xenpvh machine for arm architecture
Date: Wed, 25 Jan 2023 00:43:45 -0800
Message-ID: <20230125084356.6684-1-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT030:EE_|SA0PR12MB7074:EE_
X-MS-Office365-Filtering-Correlation-Id: 5a434cbf-ab58-498c-0e99-08dafeb05a15
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zDqgLfzZ3BkeV9hJO+jv9ivBkrALiRdtGC1QlTxa1HnY2+19PEk8Dm7U6AZ7CxyPwCN8pkZWwMiQ6JFAzin/mgByR8pMQex5HM01a/PgqwSt4Ugm3Xz/r739Ihaoz0gupbvVnDthCJQj3Ub7P8X6ZR+Qij2+RzPxnrxVYrA4bIELzAGITHuBeW0QmNnuG8WpDbGVW5zyfyc+aim6DX+y8Sn5EtHwKNGvwvjut8/qizPHKEyBqxlHSSUf4oWRYDzYVgfgLXxU+56W5wxObmcefK61CQlDQXOVW02+P22d1+QzowY8pkRukPEz4zVaJI9z7/em8mktksUDtFEW1fcLIGIyLxL3pwmEGQJ4qMDZxRLITUKYWLl7j/Uh6ByZp3oGr8wyHJiiwwCFw1UnpctUO0a7Du6pjsfOpiOIEEywpu/dy3+X+y3VjoR5GDjYi/fccpZ9U5UVk/l+hPqiw8yRtrOztpweY7oNPRtAfcUF18p9ZxTsv8pJdvsZYBO09k+N4JZ/8ar+5dJAxmwA35e6571dbB/MGB8zr9kmTEorJ+gIzwfNYOOJ/7RSxgKSOm5uYc983JXTIVIlZ12i/S2Vwc6sNCqmaOKaxVk4hzDe9q4409UmnWqPaZkvIZeIfnuFWqggmkOThqKiiYv6S7TzJ8dfej+SDZarMg49JHjGKpLqmIOqtqoNg30lYA/MX0kedfuTVOdyYvpGC8PmpdN6KISHZ+us8iAzIIool9xg29c=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(136003)(39860400002)(346002)(396003)(376002)(451199018)(40470700004)(46966006)(36840700001)(70586007)(70206006)(8676002)(4326008)(54906003)(40480700001)(316002)(41300700001)(36860700001)(336012)(86362001)(40460700003)(44832011)(2616005)(1076003)(82310400005)(2906002)(82740400003)(8936002)(356005)(5660300002)(426003)(186003)(83380400001)(47076005)(478600001)(6666004)(36756003)(6916009)(26005)(81166007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:20.4593
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a434cbf-ab58-498c-0e99-08dafeb05a15
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT030.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB7074

MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hi,
This series adds a xenpvh machine for aarch64. The motivation behind creating a
xenpvh machine with IOREQ and TPM support was to enable each guest on Xen
aarch64 to have its own unique, emulated TPM.

This series does the following:
    1. Moves common Xen functionality from hw/i386/xen to hw/xen/ so it can
       be used for aarch64.
    2. Adds a minimal xenpvh arm machine which creates an IOREQ server and
       supports TPM.

Also, checkpatch.pl fails for 03/12 and 06/12. These failures are due to
moving old code, which did not follow the QEMU coding style, to a new
location. No new code was added.

Regards,
Vikram

ChangeLog:
    v2 -> v3:
        1. Changed the machine name to xenpvh as per Juergen's input.
        2. Added docs/system/xenpvh.rst documentation.
        3. Removed GUEST_TPM_BASE and added tpm_base_address as a property.
        4. Corrected CONFIG_TPM related issues.
        5. Added a xen_register_backend() function call to xen_register_ioreq().
        6. Applied Oleksandr's suggestion, i.e. removed the extra interface
           opening and used the accel=xen option.

    v1 -> v2:
    Merged patches 05 and 06.
    04/12: xen-hvm-common.c:
        1. Moved xen_be_init() and xen_be_register_common() from
           xen_register_ioreq() to xen_register_backend().
        2. Changed g_malloc to g_new and perror to error_setg_errno.
        3. Created a local helper function for Xen IOREQ registration.
        4. Fixed build issues with the inclusion of xenstore.h.
        5. Fixed minor errors.

Stefano Stabellini (5):
  hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
  xen-hvm: reorganize xen-hvm and move common function to xen-hvm-common
  include/hw/xen/xen_common: return error from xen_create_ioreq_server
  hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration
    failure
  meson.build: do not set have_xen_pci_passthrough for aarch64 targets

Vikram Garhwal (5):
  hw/i386/xen/: move xen-mapcache.c to hw/xen/
  hw/i386/xen: rearrange xen_hvm_init_pc
  hw/xen/xen-hvm-common: Use g_new and error_setg_errno
  hw/arm: introduce xenpvh machine
  meson.build: enable xenpv machine build for ARM

 docs/system/arm/xenpvh.rst       |   34 +
 docs/system/target-arm.rst       |    1 +
 hw/arm/meson.build               |    2 +
 hw/arm/xen_arm.c                 |  184 +++++
 hw/i386/meson.build              |    1 +
 hw/i386/xen/meson.build          |    1 -
 hw/i386/xen/trace-events         |   19 -
 hw/i386/xen/xen-hvm.c            | 1084 +++---------------------------
 hw/xen/meson.build               |    7 +
 hw/xen/trace-events              |   19 +
 hw/xen/xen-hvm-common.c          |  889 ++++++++++++++++++++++++
 hw/{i386 => }/xen/xen-mapcache.c |    0
 include/hw/arm/xen_arch_hvm.h    |    9 +
 include/hw/i386/xen_arch_hvm.h   |   11 +
 include/hw/xen/arch_hvm.h        |    5 +
 include/hw/xen/xen-hvm-common.h  |   97 +++
 include/hw/xen/xen_common.h      |   13 +-
 meson.build                      |    4 +-
 18 files changed, 1363 insertions(+), 1017 deletions(-)
 create mode 100644 docs/system/arm/xenpvh.rst
 create mode 100644 hw/arm/xen_arm.c
 create mode 100644 hw/xen/xen-hvm-common.c
 rename hw/{i386 => }/xen/xen-mapcache.c (100%)
 create mode 100644 include/hw/arm/xen_arch_hvm.h
 create mode 100644 include/hw/i386/xen_arch_hvm.h
 create mode 100644 include/hw/xen/arch_hvm.h
 create mode 100644 include/hw/xen/xen-hvm-common.h

-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483894.750305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbPw-00012j-Bk; Wed, 25 Jan 2023 08:46:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483894.750305; Wed, 25 Jan 2023 08:46:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbPw-00011s-80; Wed, 25 Jan 2023 08:46:04 +0000
Received: by outflank-mailman (input) for mailman id 483894;
 Wed, 25 Jan 2023 08:46:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbPv-0000gj-Jv
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:03 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20619.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 970f5534-9c8c-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:45:17 +0100 (CET)
Received: from DS7PR05CA0092.namprd05.prod.outlook.com (2603:10b6:8:56::10) by
 SA3PR12MB7877.namprd12.prod.outlook.com (2603:10b6:806:31b::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:44:35 +0000
Received: from DM6NAM11FT021.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:56:cafe::e3) by DS7PR05CA0092.outlook.office365.com
 (2603:10b6:8:56::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:34 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT021.mail.protection.outlook.com (10.13.173.76) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:44:34 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:33 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:23 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 970f5534-9c8c-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=G29PBh7Q//H4VRO1nigiPDvX3sAA1yAaKLmXvbI7Co9W1bRWbKh+Cec6MjztCuIWvuhHzcf59hNLWI2PwCJ0oKFS9nYPdSWi9/jcyAJ+Jff3b56VcmInAFObCTIlV7MgztF6ull7RR9/xLcctF7AMohAkFvs9QO+YiKNe50FCx61RSFCqs+Mg0Pow9WPRTFhRCPEVUIf1ZuG+7zzYYOk3f2zbSZlLZrEZBLy0mtbQMg3kKmRf6YGN6OECcXo8tDfmSGh+Lf4ouVKBYKxasGUz7H6OHMJcYvT7QAM7OGomSHFDSG2tvUNY8vvlwKpwhayY7obyTqnPrxn4FSmp8Yw8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z7qBBSFtT16dKo24xhhVW2fFUFVUWqNupyxQmZ/Rxf8=;
 b=eLOp4Hs/DeOsIJjTESCYLUSu8gCKJYEzVnXLdOmQxHHPJpA+iN1YI9J9/6d8F0Y1xPN866lNOmx44lXyzfhmP//g8LCzBPGVxamImFKBz+dqo/c0OROWSwnHWQl382LQfSph/BYZ4NYHfb0bpsVL4qlTYNDuNBi1qHTV/qwj0o0XWU3XbjVDNCJyD/SadkzqBzlEjpdMe/COKapnpZNaph51UFIUFpzwfWobSEq879FO6IQcm53yeF0rN3pqTBqqFfYfC8upj95Okct0MToopdI6vli1JA1MUzrcsW9w0r9MFvXYguW7a2k74XUrA/IJaBeoyuaaE7EdLQ7Lg0jqJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z7qBBSFtT16dKo24xhhVW2fFUFVUWqNupyxQmZ/Rxf8=;
 b=CelUXT+ym5KFe2kLYH40QX4Hi47zAVtLuKj2vXVGu3jeLuZkCIr6LWuzv/krv/+2jDaL2yfbztmJrl6npBcY54mtXYAR/2d+YhnJGWpVNvWnSsKwP7s+9s/7FZKkU3tIM5RBlLX2p8pKCANBLl/J6LKqoEmnBq8han/1lDnNwOs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Paolo Bonzini
	<pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v3 01/10] hw/i386/xen/: move xen-mapcache.c to hw/xen/
Date: Wed, 25 Jan 2023 00:43:46 -0800
Message-ID: <20230125084356.6684-2-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT021:EE_|SA3PR12MB7877:EE_
X-MS-Office365-Filtering-Correlation-Id: 143b188b-9fee-43fa-77b7-08dafeb06280
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IezT27L6yStktY1CqCs1KqhaXS6gIJXqUBISPyav23MuWr43mQu+evXfv5PbmF+Zrrp20WkyaI2ukrfKRmVwdLjk0lftg88EV7W6vqsUsoNUuU7T1MTYB+w/oaQNVugDJXv6aP4navVEpiQEh1+BkUJALUEqsrzbpbgHBImtsWyewlcNynR2plHUuxq3IsE9dvxSVYk7HzGyqZbaRA0WcOWLAcx2ibzl/vA4G4hX72McW841u3dlTYvzpjd3inOUE8cPWEtDKP3V5wGstkotGewzHFiCGg5yj5b3tMpumye8sleypu7vU+90G9xY06wDKZBFMXQLxz6M4dwe3lWpe452g97PJBcm/oNF4jO46BPYk1TGUfpgOrB2kWijC4zUkgCriXvzmLIJ2JwSj3cz7598/frfUkDDmX2PA1QQN5dOERj9SAx7ew4S5ACC4Cgkbl6dco2sTp8231jeW8Us7wWLIGi0EhfBKaK9nf0wJv+2SlC/ULkZJPo8H8ZyhDH3RRkNFw0iR8o09FM7tmIwgRfxEUFZiZCoDPXxjwJdxRubAOgFUGdpTpJW26/fllP4jhPNE81LujiuASnwHM+vUwC0ZXJujJr0E2zHyg+t8Knlk/bHMcWXtLCVOJQyufBRPPIO5B9cBbcN/AfBNjXyAPbJ1xi84Nn7RfwZfKjxXfezHD8IurI0Par/RRBSHWYFSrHhYu2+ujtqnVrPKROBUXDDenFXpGombBKFCuGxqgA=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(346002)(396003)(136003)(39860400002)(451199018)(40470700004)(46966006)(36840700001)(83380400001)(41300700001)(36860700001)(426003)(1076003)(6666004)(6916009)(478600001)(70206006)(86362001)(70586007)(2616005)(36756003)(54906003)(336012)(82310400005)(356005)(44832011)(5660300002)(47076005)(82740400003)(81166007)(2906002)(316002)(40460700003)(40480700001)(26005)(8676002)(186003)(4326008)(7416002)(8936002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:34.5850
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 143b188b-9fee-43fa-77b7-08dafeb06280
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT021.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7877

xen-mapcache.c contains common functions which can be used to enable Xen on
aarch64 with IOREQ handling. Move it from hw/i386/xen to hw/xen to make it
accessible to both aarch64 and x86.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 hw/i386/meson.build              | 1 +
 hw/i386/xen/meson.build          | 1 -
 hw/i386/xen/trace-events         | 5 -----
 hw/xen/meson.build               | 4 ++++
 hw/xen/trace-events              | 5 +++++
 hw/{i386 => }/xen/xen-mapcache.c | 0
 6 files changed, 10 insertions(+), 6 deletions(-)
 rename hw/{i386 => }/xen/xen-mapcache.c (100%)

diff --git a/hw/i386/meson.build b/hw/i386/meson.build
index 213e2e82b3..cfdbfdcbcb 100644
--- a/hw/i386/meson.build
+++ b/hw/i386/meson.build
@@ -33,5 +33,6 @@ subdir('kvm')
 subdir('xen')
 
 i386_ss.add_all(xenpv_ss)
+i386_ss.add_all(xen_ss)
 
 hw_arch += {'i386': i386_ss}
diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index be84130300..2fcc46e6ca 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -1,6 +1,5 @@
 i386_ss.add(when: 'CONFIG_XEN', if_true: files(
   'xen-hvm.c',
-  'xen-mapcache.c',
   'xen_apic.c',
   'xen_platform.c',
   'xen_pvdevice.c',
diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index 5d6be61090..a0c89d91c4 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -21,8 +21,3 @@ xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
 cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 
-# xen-mapcache.c
-xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
-xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
-xen_map_cache_return(void* ptr) "%p"
-
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index ae0ace3046..19d0637c46 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -22,3 +22,7 @@ else
 endif
 
 specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
+
+xen_ss = ss.source_set()
+
+xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index 3da3fd8348..2c8f238f42 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -41,3 +41,8 @@ xs_node_vprintf(char *path, char *value) "%s %s"
 xs_node_vscanf(char *path, char *value) "%s %s"
 xs_node_watch(char *path) "%s"
 xs_node_unwatch(char *path) "%s"
+
+# xen-mapcache.c
+xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
+xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
+xen_map_cache_return(void* ptr) "%p"
diff --git a/hw/i386/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
similarity index 100%
rename from hw/i386/xen/xen-mapcache.c
rename to hw/xen/xen-mapcache.c
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483893.750300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbPw-0000yS-03; Wed, 25 Jan 2023 08:46:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483893.750300; Wed, 25 Jan 2023 08:46:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbPv-0000yL-TJ; Wed, 25 Jan 2023 08:46:03 +0000
Received: by outflank-mailman (input) for mailman id 483893;
 Wed, 25 Jan 2023 08:46:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbPu-0000gj-Js
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:02 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on20613.outbound.protection.outlook.com
 [2a01:111:f400:7eb2::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 96d110d0-9c8c-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:45:17 +0100 (CET)
Received: from DM6PR02CA0086.namprd02.prod.outlook.com (2603:10b6:5:1f4::27)
 by DS7PR12MB6141.namprd12.prod.outlook.com (2603:10b6:8:9b::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:44:35 +0000
Received: from DM6NAM11FT006.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1f4:cafe::e0) by DM6PR02CA0086.outlook.office365.com
 (2603:10b6:5:1f4::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:35 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT006.mail.protection.outlook.com (10.13.173.104) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Wed, 25 Jan 2023 08:44:35 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:35 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:34 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96d110d0-9c8c-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l4KXXaQMRaejNI2NxT1L9b7SVGKe0MgvOJqGQgp6rmk/aOSXo7MV6bSz9DquPztYCcpBGq6jxVvfal4ZSe2IvSeKCXO4DHkWKb7lIhaboJz3YJxdSHQDulSpYwZXNpdylx76NSbh5lNbODQ5MioXWLfEHqIv/l+0x0NDYXTqQ+skiCSHf+XieCaWKNIe6Qk3nTrPr2cKeWaaOf3GuUd410WGXpDa3IZfKk97iu6o2Krju0Gt3i+c7tf72XJpRc3+6bAEhu2ju5TeDMaU3j0f+Kv0vptRDJv36lkJINZvAZrg5L+ZxeTgohs5TrVECNiSk0gwJXaDSZtwhfhMpr8jgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xQRygdwVbnDYBJcZUcE7RFsjWA1hPALnId6WXnaDoUU=;
 b=PF/BTKfZemBKX/vU7XaCm1b8YBKuTB8Nukp4b8RVLlcAdPz9HESzvGQGR9zL6PvbbR6RN0iIDyY3SXDWlCnGrCMHz/KblIP7t+DMABQzCcnc+BMPEWmRdwLca3+Q6cOS4HqW6swxxhxVE2zvWqHWE/ZeUbUN22m4gCnihbT/6i0ILhetfYBl34CvnXJEyvFkfrJJYLfJKpFoAlW6rX28tWNHXa60LCWQWh2V6UNVeN/YismU9y05fOiIuaz0OYUYDos0EQjoIpVCilG82t2C/QrbsSotjdogD7sRR0tRJTNJ3Na4QQXFL21WpNjCPXTk6A62Z8AwvhHv/q0vvrxq5g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xQRygdwVbnDYBJcZUcE7RFsjWA1hPALnId6WXnaDoUU=;
 b=CCUdbyNLKzmKHDW2NvgsV7EgKKyR2ZJ9x7Bka1hrvKbNtyaALdO7CWPbhZvK/JLTp6L0AjYHfU+i4vBFA8UQvhJfWm9ogpvuGzyo+gcF2Kqj98ayDacL697OCpoZ1AcBqHZf1PTyt7dYjQ52ZEGmrSQBKz2RVixIsB+0vsom+/c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, "Richard
 Henderson" <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>
Subject: [QEMU][PATCH v3 02/10] hw/i386/xen: rearrange xen_hvm_init_pc
Date: Wed, 25 Jan 2023 00:43:47 -0800
Message-ID: <20230125084356.6684-3-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT006:EE_|DS7PR12MB6141:EE_
X-MS-Office365-Filtering-Correlation-Id: 866e63c7-a9a8-4a9a-c8e3-08dafeb0633a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/bbNmkVnikww/QVbtNyaUG2m/czeFkCwYQddtI3MF2oDEpgyCXsLLd+mo7Pifwx+urE0TNWvkdbnfOUgSUH6bMZAHIPyc/BU71GIdHmXrYoL23HsB2BfeFqrf8hgCimb2E+XJ6aWbUx7rSpg+kRjSynSmBHQb/DAElrEbcSePDSJ9dEistzjilH5JD4rTxtr7VRxnXgSavuK7S2E3+s273LyY4HATyAtK0EY6jlxzblDuSkPMVl0oez6dKsXzSasaWNO20bhvs3/ugYkQ6GrRK4cHqACt2DWmM30CuBE/geIuWGGu7vWeXY/EdLmCVG1v/O6tqUNdmQ7GVFnkTbZ9qQH5FvS41nNG9/l8JsqH/Cu0LVKuGif2IwtvrZmG6lFbA1TXgdLmraDMr7mDl0JURYb6fHrXxJ1o4uBXD0xdKiLIPyNL37f8BNq/l5on3R09BKYE/SyyWUz7XGD694k7HGSzAu/qnv21b2/FCvidB9vcO9mhUD5Sqh8AgiQp9B+7ISQ7WbkraKstit96SQdoFmS2BwBBrmUT5Mywz+flI6ektj/nWN+s1dXOHMql35BZmmHwGUMtVBTb5+1RkIg5ebA3S4ju5fZ9yclPnfAn9jAO6zmNF5E8QikhrpfcAMh57/5UoqXjeVW0SvLMaexlIdX7+8ViM5psSELzP/kLjfnAwnPHDHXLfr7m4a6b3iys/C7PjY0KWmY5IC8OvFvOSzgLyYcRknrUvJjWG0+Azw=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(136003)(39860400002)(376002)(396003)(346002)(451199018)(40470700004)(36840700001)(46966006)(82310400005)(4326008)(82740400003)(86362001)(356005)(36756003)(2906002)(40460700003)(70586007)(8676002)(186003)(26005)(6916009)(81166007)(6666004)(47076005)(41300700001)(426003)(54906003)(336012)(40480700001)(1076003)(5660300002)(36860700001)(2616005)(83380400001)(44832011)(8936002)(478600001)(7416002)(70206006)(316002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:35.8061
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 866e63c7-a9a8-4a9a-c8e3-08dafeb0633a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT006.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB6141

In preparation for moving most of the xen-hvm code to an arch-neutral location,
move the non-IOREQ references to:
- xen_get_vmport_regs_pfn
- xen_suspend_notifier
- xen_wakeup_notifier
- xen_ram_init

towards the end of the xen_hvm_init_pc() function.

This is done to keep the common IOREQ functions in one place; they will be
moved to a new function in the next patch in order to make them common to both
x86 and aarch64 machines.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/i386/xen/xen-hvm.c | 49 ++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index b9a6f7f538..1fba0e0ae1 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -1416,12 +1416,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
-
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
-
     /*
      * Register wake-up support in QMP query-current-machine API
      */
@@ -1432,23 +1426,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
         goto err;
     }
 
-    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
-    if (!rc) {
-        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
-            xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
-                                 1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
-            error_report("map shared vmport IO page returned error %d handle=%p",
-                         errno, xen_xc);
-            goto err;
-        }
-    } else if (rc != -ENOSYS) {
-        error_report("get vmport regs pfn returned error %d, rc=%d",
-                     errno, rc);
-        goto err;
-    }
-
     /* Note: cpus is empty at this point in init */
     state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
 
@@ -1486,7 +1463,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 #else
     xen_map_cache_init(NULL, state);
 #endif
-    xen_ram_init(pcms, ms->ram_size, ram_memory);
 
     qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
 
@@ -1513,6 +1489,31 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
+    state->suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&state->suspend);
+
+    state->wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&state->wakeup);
+
+    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
+    if (!rc) {
+        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
+        state->shared_vmport_page =
+            xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
+                                 1, &ioreq_pfn, NULL);
+        if (state->shared_vmport_page == NULL) {
+            error_report("map shared vmport IO page returned error %d handle=%p",
+                         errno, xen_xc);
+            goto err;
+        }
+    } else if (rc != -ENOSYS) {
+        error_report("get vmport regs pfn returned error %d, rc=%d",
+                     errno, rc);
+        goto err;
+    }
+
+    xen_ram_init(pcms, ms->ram_size, ram_memory);
+
     /* Disable ACPI build because Xen handles it */
     pcms->acpi_build_enabled = false;
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483896.750320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQE-0001qg-Jp; Wed, 25 Jan 2023 08:46:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483896.750320; Wed, 25 Jan 2023 08:46:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQE-0001qY-Gh; Wed, 25 Jan 2023 08:46:22 +0000
Received: by outflank-mailman (input) for mailman id 483896;
 Wed, 25 Jan 2023 08:46:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbQD-0000gj-L8
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:21 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20607.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::607])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a2591a8f-9c8c-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:45:36 +0100 (CET)
Received: from DS7PR05CA0082.namprd05.prod.outlook.com (2603:10b6:8:57::23) by
 SA1PR12MB7343.namprd12.prod.outlook.com (2603:10b6:806:2b5::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:44:53 +0000
Received: from DM6NAM11FT093.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:57:cafe::c3) by DS7PR05CA0082.outlook.office365.com
 (2603:10b6:8:57::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.20 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:53 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT093.mail.protection.outlook.com (10.13.172.235) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:44:52 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:52 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:51 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2591a8f-9c8c-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UWjzAdTG6FoBkN4VDTnTSvQz1D1OfqFDSKOveTec9XCbZiEOgMbjg7xlooci1aLAywFuG2s4609qFWzFXIOnJsJrftZDukRlBewEd+EEy5Q4A2yfXHBBplkm6+/KXR2eoyegwp1Qe+a1CEtJe9tHnaYTHq4CdI0l+v/cHF4KJb4sIVM6bANO8zVCyK71+/mlo0mCmXQT1hcPFjaEbOimjW84PDTA6XuoP3Nyk0glF7rfdQPUXxPDEv+OFLf8tz0/0Xoo62IXT/WsuqcxEk4xY7iFt6E2NZ2H9s0zlFhpKCiVGoR1EyWd9IYUqRVMoGJVPcVa1eD+e+WfBYsnuOR9YQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/pArBdHE6pupZI0iEmfLZok1fiWO0kN2Oo56dAzFs5Q=;
 b=FY0ZSyu8iRwp2MB6nfhOXejjO2odNeqCK59ZLMa/IQQLMC2zELC5f/W3shE4HK3XhF3GI1QLsynBj05si6BEItHkTFcMoFyYYt2gpEFBBwcMop8J7HUaOjGgDNMYYPENotf4OGOq063VVElEROoriZEXokp4IpVbA0l3uQ12/ooJddv0ek19R6n66aN+Mm9GdH8+B5w0quoXjGwyrEgKbOI/Au9MLXZpFoki8jjzapi84eHB+4SSzt4Ao44NQIXMAHBROTtPVLsJiV+JRwuQuilFXzHwdSZSmkzSmLfRJlhRixUN7BMdOm+KcNqs43fNBYMVykS57OqVf5OZ3Y56eA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/pArBdHE6pupZI0iEmfLZok1fiWO0kN2Oo56dAzFs5Q=;
 b=Lgdse+349LyOqGLCHcSKWqZGXX2S1rn5X36/4YUCOI+OSanxmPe23n+ZU7QqyRYLtzu9JeCnlEIdZlCwBOqxsZiMnR62PqIk3n5Ni1r4igFEJ3IegvHitbjD91B3enw1gprYOtwdRzWQYhsCBS6In5iRcBUZjKszPBi8+SOf8v8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>
Subject: [QEMU][PATCH v3 03/10] hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
Date: Wed, 25 Jan 2023 00:43:48 -0800
Message-ID: <20230125084356.6684-4-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT093:EE_|SA1PR12MB7343:EE_
X-MS-Office365-Filtering-Correlation-Id: ea2cdfe9-c8d6-43bc-1ee9-08dafeb06d75
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	R/+hzhp4rBaMNuv9m8sJk2mH+LF62in/hp4KvrYWVglqmqZW7BpmUYPxvlQ77K1OMQUl8Po+ZsO3jo3+nIJAEacrU1KrPRLfKOikjCC9PdBavyxzsv18moCJER2GWjHrFAYuomEcGoqIk/Ug8pytG17qpcKA9i/wZcl9fz6qZiCPDfirXiSal8W9ZRBzx415JBnwWWs+NUPXHjcQNu7Fc8WqMwgXOe5EMJON+dTMhVlx+dNTma4IicaT5RMHh2pRT+wrbRlFgN5/ipxOrc4VYA+TDTBapUS/dWT7wyK+ecPNcC+Ju2pv87CsDYnnRF3t4W5LfRVXikTG8QxfkvidUQs7R42B3iqmds5itPS4N3bU5n1YCRf5+D8sn9bPiWqXTsJ0t7TUerePJCLXXqPf/NA3lpT5zTXcwC3RcZgOUaXpuqe3Wd/s/ye1w+lTWVmqtkQ3sHVd3PJDbM7scNCmzsiwkdU3HlYu2WQenx5eVe0IA64pxcf83pOGO64diNEDcQkHhymCGTjDgjcNf2MGToVv7jowM7B8y7OsDJHe1bz8yHxgJQUtrZjVK3VwGIl92xnGKA21uPylZkNAuqHYbG+eSWNsY+acJuVVET0sPSaV42QhJV+2HMl+hqq8PrrOy0ncjs6EDpMw951UzdHVWMCCWLwScVB67pnzObhgPKMjpVIbKzN/x4GfkWYnABOMUGOCJH62/zHdzN0Nc/7ldpTIKclh3scQH/TWjTBMmRw=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199018)(40470700004)(36840700001)(46966006)(83380400001)(36860700001)(426003)(336012)(54906003)(47076005)(6916009)(2616005)(478600001)(66574015)(8676002)(186003)(36756003)(1076003)(70206006)(70586007)(356005)(2906002)(86362001)(82740400003)(7416002)(41300700001)(26005)(81166007)(5660300002)(316002)(40480700001)(40460700003)(6666004)(44832011)(4326008)(82310400005)(8936002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:52.9656
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ea2cdfe9-c8d6-43bc-1ee9-08dafeb06d75
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT093.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7343

From: Stefano Stabellini <stefano.stabellini@amd.com>

In preparation for moving most of the xen-hvm code to an arch-neutral location, move:
- shared_vmport_page
- log_for_dirtybit
- dirty_bitmap
- suspend
- wakeup

out of the XenIOState struct, as these fields are only used on x86, especially the ones
related to dirty logging.
The updated XenIOState can then be used for both aarch64 and x86.

Also, remove free_phys_offset as it was unused.
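
The shape of the change can be sketched as follows (a simplified, self-contained
illustration with stand-in types, not the actual QEMU declarations): the
arch-neutral struct keeps only fields every architecture needs, while the
x86-only dirty-tracking state becomes file-scope statics in the x86 file.

```c
#include <assert.h>
#include <stdlib.h>

/* Arch-neutral state: only fields every architecture needs
 * (stand-ins for ioservid_t / shared_iopage_t * etc.). */
typedef struct XenIOState {
    int ioservid;
    void *shared_page;
} XenIOState;

/* x86-only state becomes file-scope statics in hw/i386/xen/xen-hvm.c,
 * so XenIOState itself can move to an arch-neutral file. */
static const void *log_for_dirtybit;  /* XenPhysmap * in the real code */
static unsigned long *dirty_bitmap;

/* Mirrors the teardown done in xen_remove_from_physmap/xen_log_stop:
 * drop the tracked range and release the bitmap buffer. */
static void reset_dirty_tracking(void)
{
    log_for_dirtybit = NULL;
    free(dirty_bitmap);  /* g_free() in QEMU proper */
    dirty_bitmap = NULL;
}
```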

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/i386/xen/xen-hvm.c | 58 ++++++++++++++++++++-----------------------
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 1fba0e0ae1..06c446e7be 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -73,6 +73,7 @@ struct shared_vmport_iopage {
 };
 typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
+static shared_vmport_iopage_t *shared_vmport_page;
 
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
 {
@@ -95,6 +96,11 @@ typedef struct XenPhysmap {
 } XenPhysmap;
 
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
+static const XenPhysmap *log_for_dirtybit;
+/* Buffer used by xen_sync_dirty_bitmap */
+static unsigned long *dirty_bitmap;
+static Notifier suspend;
+static Notifier wakeup;
 
 typedef struct XenPciDevice {
     PCIDevice *pci_dev;
@@ -105,7 +111,6 @@ typedef struct XenPciDevice {
 typedef struct XenIOState {
     ioservid_t ioservid;
     shared_iopage_t *shared_page;
-    shared_vmport_iopage_t *shared_vmport_page;
     buffered_iopage_t *buffered_io_page;
     xenforeignmemory_resource_handle *fres;
     QEMUTimer *buffered_io_timer;
@@ -125,14 +130,8 @@ typedef struct XenIOState {
     MemoryListener io_listener;
     QLIST_HEAD(, XenPciDevice) dev_list;
     DeviceListener device_listener;
-    hwaddr free_phys_offset;
-    const XenPhysmap *log_for_dirtybit;
-    /* Buffer used by xen_sync_dirty_bitmap */
-    unsigned long *dirty_bitmap;
 
     Notifier exit;
-    Notifier suspend;
-    Notifier wakeup;
 } XenIOState;
 
 /* Xen specific function for piix pci */
@@ -462,10 +461,10 @@ static int xen_remove_from_physmap(XenIOState *state,
     }
 
     QLIST_REMOVE(physmap, list);
-    if (state->log_for_dirtybit == physmap) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+    if (log_for_dirtybit == physmap) {
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
     }
     g_free(physmap);
 
@@ -626,16 +625,16 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
         return;
     }
 
-    if (state->log_for_dirtybit == NULL) {
-        state->log_for_dirtybit = physmap;
-        state->dirty_bitmap = g_new(unsigned long, bitmap_size);
-    } else if (state->log_for_dirtybit != physmap) {
+    if (log_for_dirtybit == NULL) {
+        log_for_dirtybit = physmap;
+        dirty_bitmap = g_new(unsigned long, bitmap_size);
+    } else if (log_for_dirtybit != physmap) {
         /* Only one range for dirty bitmap can be tracked. */
         return;
     }
 
     rc = xen_track_dirty_vram(xen_domid, start_addr >> TARGET_PAGE_BITS,
-                              npages, state->dirty_bitmap);
+                              npages, dirty_bitmap);
     if (rc < 0) {
 #ifndef ENODATA
 #define ENODATA  ENOENT
@@ -650,7 +649,7 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
     }
 
     for (i = 0; i < bitmap_size; i++) {
-        unsigned long map = state->dirty_bitmap[i];
+        unsigned long map = dirty_bitmap[i];
         while (map != 0) {
             j = ctzl(map);
             map &= ~(1ul << j);
@@ -676,12 +675,10 @@ static void xen_log_start(MemoryListener *listener,
 static void xen_log_stop(MemoryListener *listener, MemoryRegionSection *section,
                          int old, int new)
 {
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-
     if (old & ~new & (1 << DIRTY_MEMORY_VGA)) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
         /* Disable dirty bit tracking */
         xen_track_dirty_vram(xen_domid, 0, 0, NULL);
     }
@@ -1021,9 +1018,9 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
 {
     vmware_regs_t *vmport_regs;
 
-    assert(state->shared_vmport_page);
+    assert(shared_vmport_page);
     vmport_regs =
-        &state->shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
+        &shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
     QEMU_BUILD_BUG_ON(sizeof(*req) < sizeof(*vmport_regs));
 
     current_cpu = state->cpu_by_vcpu_id[state->send_vcpu];
@@ -1468,7 +1465,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 
     state->memory_listener = xen_memory_listener;
     memory_listener_register(&state->memory_listener, &address_space_memory);
-    state->log_for_dirtybit = NULL;
 
     state->io_listener = xen_io_listener;
     memory_listener_register(&state->io_listener, &address_space_io);
@@ -1489,19 +1485,19 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
+    suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&suspend);
 
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
+    wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&wakeup);
 
     rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
     if (!rc) {
         DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
+        shared_vmport_page =
             xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
                                  1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
+        if (shared_vmport_page == NULL) {
             error_report("map shared vmport IO page returned error %d handle=%p",
                          errno, xen_xc);
             goto err;
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483898.750329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQG-00029y-QH; Wed, 25 Jan 2023 08:46:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483898.750329; Wed, 25 Jan 2023 08:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQG-00029n-Nj; Wed, 25 Jan 2023 08:46:24 +0000
Received: by outflank-mailman (input) for mailman id 483898;
 Wed, 25 Jan 2023 08:46:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbQF-0000gj-96
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:23 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20602.outbound.protection.outlook.com
 [2a01:111:f400:7eab::602])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a3f50a45-9c8c-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:45:38 +0100 (CET)
Received: from DM6PR02CA0162.namprd02.prod.outlook.com (2603:10b6:5:332::29)
 by MW3PR12MB4554.namprd12.prod.outlook.com (2603:10b6:303:55::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:44:56 +0000
Received: from DM6NAM11FT076.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:332:cafe::67) by DM6PR02CA0162.outlook.office365.com
 (2603:10b6:5:332::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:56 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT076.mail.protection.outlook.com (10.13.173.204) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Wed, 25 Jan 2023 08:44:56 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:56 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:55 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3f50a45-9c8c-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j9kgrq0J9pbxL/AyOVO121qVFYZ063IgjxO6MYfS8kZ6I2xzzo/0TaHpOuOMHbMREsM0IVSNP2YKU9Quf7Lgxhycx8qQMIjpIzRHtyl4SlNFIPXTUZ8uNOScIrJqQHvhyiprZZdLglOuatXv8FBfoOaKzGOZHv52Apt3OnV/TV8OjV3OsokUI3bndiiUuxr2bnEYFPIea98VaTXdCwbFBjVGsiCt1L8aVd6F6wJuO2Ofz8jEwQib6JBpGZFbFTgvwbhBMQZxy9NPO/3yz/AbvuB/EDxMct2hNEO8F6CghH7AY05L0519JhbGJlouh/ifoHsf5Pd8KLPljQuTPvP0Cg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HpZAZyFrLxyZiJ9zE+SxaljhwMZQFl/Jyy62XvsgXdo=;
 b=Nu/nQPJqzmo1eOYxI4nu9mgpcDlCoydMlgtvQroZyasqkyVIkEFbsfQ7nR20+WRJvShDWgGQ8Ls/UKSs9MTPHeHNiWetulM6XqBPU5D8neSi0gCHb2xqxTSwjv9kDHOQVG5ZQ36EqojdYvD7UpNtAOBiyAxhW/jdPDVJskHhtx50+cj0v+AFmROecZmhbeJG4FD2uz/L+IVT7R6j7jaGRPcKufRoHMWsVvmJS4K5efay9TQx8aza0Cl051m22JWFodbHnEtNdgOzW5UPqEtAG80jRVhBpRgxh4R5Fcdnmt0sp4RghtKHWpJImuVtl+Vh+fauQq3EDUrtZlGmZDYNHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HpZAZyFrLxyZiJ9zE+SxaljhwMZQFl/Jyy62XvsgXdo=;
 b=xalL8quzwxceQL9a2kWLO96cnp50A6Q1b60MtG9yC+7hULPHxQ+X4Sxqe3Fp30N0agvViACnNq2sEtYQQ0VOpkbM+71chf40QrxV81G0RHz480Ina1HmG3jj8hSgyHar7EO/VLpLanU72n2AnV1o43XVPOFNh2n2iTWgtbNlJLw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v3 05/10] include/hw/xen/xen_common: return error from xen_create_ioreq_server
Date: Wed, 25 Jan 2023 00:43:51 -0800
Message-ID: <20230125084356.6684-7-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT076:EE_|MW3PR12MB4554:EE_
X-MS-Office365-Filtering-Correlation-Id: de517362-066b-4f42-431b-08dafeb06fa1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	rI9Qh+RwFGOxbzWR/uj2h9k4uG4aM+W8gkP4KqlVZzBguMBTbmjS/SgKx53lJkvCF81NOiouIz/7/52Kqbx6za9OW/KjfXm3uKfk+BOPMrZpxRv0QO9ITObQ2Hie7t5MQ2NVm/QaiP2tVNUdYYJCXOQgOd0XVUAvkaonqqS19plnDqJxwXau6j/4kLiasmHUTke3IFsamv0pZ/9EYOj78fmptHfY9hkghgRayg27lGYXqluiXLqPJuBBAfIipvRCFAADZbRsRNWyU7OhyvXEnp8u5cLZgV+Rq+n+S6SXJ7yrgI3+a7zCpC/MtmXvAxJas2wCunEGbRXY6UKnhelIkKfzF43KZitdSR434hK0rfkWkAbgDVhC1Y4UhkW+GEEQqMiN8upahGYzTkep2/A48e1mw2Os0TVQEHMf/CJJ/GGO73ji9SFcQxTuzebmrNH9C5XBc5c6bKRG6SgHBfJVi6JWBAgGpMZYJl+zzZumUqlPcDilPyptxFffxWKqWmIybwmMEDWW3bYox/IiswBRyOHC5wVap0lEw+YWMZUUCzNrKKG5958UQ+o2eqPKeSOL365vu0laht0JDVunb1i7ZQJ2N8EnDmfrxeB7iPjPVR2hGy1hQB5FsjJSwk8vQpralMC6neW4akmfTTJEGtlKojxT2C0gWu26jICmFYUtx8JA/lRlaEGPgbbkq5IJ0aTTLgw8RwW9uClFMz62HwQ5R/c4LHDG2m4I2+aDR9pPeTI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(346002)(136003)(39860400002)(396003)(376002)(451199018)(36840700001)(40470700004)(46966006)(44832011)(82740400003)(86362001)(40460700003)(356005)(81166007)(36756003)(2906002)(82310400005)(41300700001)(2616005)(6666004)(186003)(47076005)(8936002)(478600001)(6916009)(8676002)(70586007)(70206006)(4326008)(5660300002)(83380400001)(40480700001)(36860700001)(336012)(316002)(26005)(1076003)(54906003)(426003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:56.6103
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: de517362-066b-4f42-431b-08dafeb06fa1
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT076.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW3PR12MB4554

From: Stefano Stabellini <stefano.stabellini@amd.com>

This is done to prepare for enabling xenpv support on the ARM architecture.
On ARM it is possible to have a functioning xenpv machine with only the
PV backends and no IOREQ server. If IOREQ server creation fails,
continue on to the PV backend initialization.
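
With the return value plumbed through, an init path can treat IOREQ-server
creation as optional. A minimal sketch of that control flow, using
hypothetical stand-in functions rather than QEMU's actual init code:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-in for xen_create_ioreq_server():
 * returns 0 on success, negative on failure. */
static int create_ioreq_server_stub(int dom, int *ioservid)
{
    (void)dom;
    *ioservid = 0;
    return -1;  /* pretend creation failed, as it can on ARM */
}

/* On failure, skip IOREQ handling but still bring up the PV backends. */
static int xen_init_sketch(int dom, int *have_ioreq)
{
    int ioservid;
    int rc = create_ioreq_server_stub(dom, &ioservid);

    *have_ioreq = (rc == 0);
    if (rc < 0) {
        fprintf(stderr, "IOREQ server unavailable, PV backends only\n");
    }
    /* PV backend setup would run unconditionally from here. */
    return 0;
}
```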

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 include/hw/xen/xen_common.h | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 9a13a756ae..9ec69582b3 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -467,9 +467,10 @@ static inline void xen_unmap_pcidev(domid_t dom,
 {
 }
 
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
+static inline int xen_create_ioreq_server(domid_t dom,
+                                          ioservid_t *ioservid)
 {
+    return 0;
 }
 
 static inline void xen_destroy_ioreq_server(domid_t dom,
@@ -600,8 +601,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
                                                   PCI_FUNC(pci_dev->devfn));
 }
 
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
+static inline int xen_create_ioreq_server(domid_t dom,
+                                          ioservid_t *ioservid)
 {
     int rc = xendevicemodel_create_ioreq_server(xen_dmod, dom,
                                                 HVM_IOREQSRV_BUFIOREQ_ATOMIC,
@@ -609,12 +610,14 @@ static inline void xen_create_ioreq_server(domid_t dom,
 
     if (rc == 0) {
         trace_xen_ioreq_server_create(*ioservid);
-        return;
+        return rc;
     }
 
     *ioservid = 0;
     use_default_ioreq_server = true;
     trace_xen_default_ioreq_server();
+
+    return rc;
 }
 
 static inline void xen_destroy_ioreq_server(domid_t dom,
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483900.750339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQJ-0002T2-6G; Wed, 25 Jan 2023 08:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483900.750339; Wed, 25 Jan 2023 08:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQJ-0002Sm-2j; Wed, 25 Jan 2023 08:46:27 +0000
Received: by outflank-mailman (input) for mailman id 483900;
 Wed, 25 Jan 2023 08:46:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbQH-0001qZ-K3
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:25 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2061a.outbound.protection.outlook.com
 [2a01:111:f400:fe59::61a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bfb9bbf6-9c8c-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:46:24 +0100 (CET)
Received: from DS7PR03CA0010.namprd03.prod.outlook.com (2603:10b6:5:3b8::15)
 by IA0PR12MB8280.namprd12.prod.outlook.com (2603:10b6:208:3df::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:45:00 +0000
Received: from DM6NAM11FT016.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b8:cafe::c8) by DS7PR03CA0010.outlook.office365.com
 (2603:10b6:5:3b8::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:45:00 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT016.mail.protection.outlook.com (10.13.173.139) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:44:59 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:59 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:58 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfb9bbf6-9c8c-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cVnhG3HCmOtvqu1mYz1m1obU98TlbaJreLPp6WYTG253DMzCzz/1mF5fZFZoBOhQl2wh07EnqB4AzVqRUVs8sG0fzFKhRnKmJxlNGVspjoqWstRMAdbOH2ELKGasmNlDO1qjWtGwJUAf1Smn4Ig6w8/5jIy2bN+1+nKTPjQ8PL8Na4kI7z/bwKNCpeg3irH3YtnVbsBQRyZu88Ea9VcU/oxSHRKOLe//Ko/dGE2lUIWpRh/y537TYrKan0qNnhKFaeqrQi7s1U8XuINEkZVa9T9bI+jsRsOCBNrM77udok/1utCpC9zBSR8oJQ1XFs44v1daMO6GH6KZRDwxY1v8KQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hmpuP41Ztn/lKlMcuw44+3sOiW4TYVQ+KI34gBksLz8=;
 b=Igq/ddRoW3iSeidStXB/isGPacoiS0QR7skTpOmXN8NnrBylbK4HPFbMSUZoU3PFb1zxWFm1qlIcu+ARxIdZ3R5lak2AsaeBOdXlPDVQVlmtqMjYACZ63K2NpX7LrNTeBu1TjXE1152/+TZMRtWDRY1uhaCFDDtvHy+yn61fL8mcP0NGEKUxvJRatz7CVE/50elk4SyhG7ajiKhoDiauZYkGU6ljeC/elbdog8sIjU0MDAEZWjTByVp1hPIntUqp/Ppbl8cLZzfEPoJEWYuGoAmaDGMqZg4iTYCqNI303LFfKdb692ErsJUUxslx5QnYeerU4QS02Dru+C8QChV3WQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hmpuP41Ztn/lKlMcuw44+3sOiW4TYVQ+KI34gBksLz8=;
 b=gHUjRYwJKjkyikvIw8q/F94USVX5BA7AAzVnnVYWtl0JvuCPlBApjmR9G1xif4w4eg7ScUtVXqqBQrLe7AD3lu67xvhGzOX/68cJBH3mMsxt5TSNF5GQaYQN+aiH0uDESa2xLwv++idxpNgDttLf6sRA2ueKyTkHNrBMJM8l0HY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Paolo Bonzini
	<pbonzini@redhat.com>, =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?=
	<marcandre.lureau@redhat.com>, =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?=
	<berrange@redhat.com>, Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [QEMU][PATCH v3 08/10] meson.build: do not set have_xen_pci_passthrough for aarch64 targets
Date: Wed, 25 Jan 2023 00:43:54 -0800
Message-ID: <20230125084356.6684-10-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT016:EE_|IA0PR12MB8280:EE_
X-MS-Office365-Filtering-Correlation-Id: a129ade1-a24f-4838-1dc1-08dafeb07184
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	O1M/qNgEplGq3joZFwMs4OZCGQTYWfbGduEWggrYnGHhqpvO61NSXc/h5to6V8zDyeA+QMJdgYwBEzemXtZGw9ze8rOb9ayY1X9GsMXE7Jm8u+Xj7+yYGzjr8mBjASH7orZbX+LrniTnBbZndl28/as37GmhbnET3YdVP6Oz+93flvBKGoPpQkTMbVZ44O5jTQOvQpoAN1g+XZwskRG78TFPBFNPei826ogX6msqF56CNGxHsiuglfX80pTGRqPgM9HRexIAMKhjP9LZJb/4tSYOi1G25fb0hSQCysJ2FcJeYYw91AuUXMcpzz/ToCi1QWH4+fBRUqzePtdPQpDIIP2/3fyn2eRhEjDWq1THLfXbahBnG8WWWLIOh9sCi9hHgBToQ3VfRS4Id1qH8JDMepkkskrIwOTLYrj15nlxoyjEcCT/JPvs4pgIKqTZojYZ/4RZPCsxOdFCx0OuYi3cvkql/wsYy7e/c+Jy69vHFlz9lxzvPdKGqJRfNxVmaJxuGdZGNmXkcljKkOfCJKcuVoqFsJuu8KlLDJHrlXEcDhOV+Ohv2JlhCozSJ5/jXiJQE9RCm2F/7LI3B5/POqw40sbOiCtqT5d/GlF7I3bqCSSgcrwKpfX+qNbkBc4S5Wxiklm2RR3DopCkt4HdgyG/vXZ3sXwqfwFxrJy2kordT6vHK17iGAdLxbrTP308aoaR0N+Vk7VRB98ZyPefhp2S9xAVOgRce5r91zk6Bo0IDRY=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(136003)(396003)(346002)(39860400002)(451199018)(40470700004)(36840700001)(46966006)(4744005)(6666004)(5660300002)(83380400001)(40480700001)(41300700001)(8936002)(82310400005)(86362001)(426003)(82740400003)(81166007)(186003)(2906002)(36860700001)(2616005)(70586007)(1076003)(26005)(44832011)(478600001)(356005)(36756003)(54906003)(47076005)(8676002)(70206006)(316002)(4326008)(336012)(40460700003)(6916009)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:59.7748
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a129ade1-a24f-4838-1dc1-08dafeb07184
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT016.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB8280

From: Stefano Stabellini <stefano.stabellini@amd.com>

have_xen_pci_passthrough is only used for Xen x86 VMs.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 meson.build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/meson.build b/meson.build
index 6d3b665629..693802adb2 100644
--- a/meson.build
+++ b/meson.build
@@ -1471,6 +1471,8 @@ have_xen_pci_passthrough = get_option('xen_pci_passthrough') \
            error_message: 'Xen PCI passthrough requested but Xen not enabled') \
   .require(targetos == 'linux',
            error_message: 'Xen PCI passthrough not available on this platform') \
+  .require(cpu == 'x86' or cpu == 'x86_64',
+           error_message: 'Xen PCI passthrough only supported on x86 CPUs') \
   .allowed()
 
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483901.750346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQJ-0002Vg-M4; Wed, 25 Jan 2023 08:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483901.750346; Wed, 25 Jan 2023 08:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQJ-0002V3-CF; Wed, 25 Jan 2023 08:46:27 +0000
Received: by outflank-mailman (input) for mailman id 483901;
 Wed, 25 Jan 2023 08:46:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbQI-0001qZ-FL
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:26 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bfc1936e-9c8c-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:46:25 +0100 (CET)
Received: from DS7PR03CA0175.namprd03.prod.outlook.com (2603:10b6:5:3b2::30)
 by SA3PR12MB7976.namprd12.prod.outlook.com (2603:10b6:806:312::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:45:01 +0000
Received: from DM6NAM11FT013.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b2:cafe::f7) by DS7PR03CA0175.outlook.office365.com
 (2603:10b6:5:3b2::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:45:01 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT013.mail.protection.outlook.com (10.13.173.142) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:45:01 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:45:00 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:59 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfc1936e-9c8c-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bMvtlcHaVxs1OkH9HYg0x6WAgGAscarLNNSQ+XJhZ4Yo2cgqiK6N3JHglsp3FV9JR/is43x/mYUb8H3y/0uXOWNO61tfQUatswOkAw2yQ/EMH3kCJ9DzcW+0E7kktkGpCbHyJjdIL+t7dwmfU1ScY5JFWmbm5n7ZE4YtsTjzwnryOgTIYJ/9TeH87lXZryeJiGkUBEsoTr7W8eMloAa7aV2DY88J+JuugAUSLZyQCOLj5WcElVTCQyPc638yuOhDSFiRJg1gZdtVZbeuW7m+ATVuMaP1pTzFLHQQOlnKjB+F30qr+pYibnIdQr13aZ20yqnlzttfhzQdXBu+S6l20A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A+E4BfA2jNdGJLu/KOY6Tiy/3hPOgzwaMVA9okKy1Xk=;
 b=Vws0NdyaoU4swVXjcXEXBKoqfzhUcIoKqjZLHX05/Wq3uv6bP6NUjCXnUGHgqtNFwOUAbTg1AA41P/TiISP63MdXOg9dpz5W38pxocMGULKZ3KOzF2JptHcHwff+yX8pCIVHmcotR/arA1e2V2KKAPpPmbsrq312mDRNFfLm+g8AIZiiHQ4RaAYKAwU+zW7C4cEW53uKYcv8PBNXrufwkVqJ1urS5tNwxJDyY8PAWNmRHqE6UHR7nuApj+0YrSAjDPuK76nDs4Ev00A2KoFycNLnaDQjFJiP/smc1aSruUCH+t8ylZK8TY8A98GlBQ1+NYThiL6W3JnphMSWXUFycg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A+E4BfA2jNdGJLu/KOY6Tiy/3hPOgzwaMVA9okKy1Xk=;
 b=XyhZ73Tckdb6Rhe6SP7Sy5L3mG7G0MIaB7mEMdx1Nka03sDkmgEyL5TsLXBhioupwIma5VXN6kTpj4pYOhXAhZWJpAGsq289p6fqHmnrMhw0YMLmqND0huXFhAN61cpTOv6s3O23V1WlbnZyj1Zlu5hSJ6ryHTSfwMmMDnIA+7s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Peter Maydell
	<peter.maydell@linaro.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
	"open list:ARM TCG CPUs" <qemu-arm@nongnu.org>
Subject: [QEMU][PATCH v3 09/10] hw/arm: introduce xenpvh machine
Date: Wed, 25 Jan 2023 00:43:55 -0800
Message-ID: <20230125084356.6684-11-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT013:EE_|SA3PR12MB7976:EE_
X-MS-Office365-Filtering-Correlation-Id: 00f51690-a587-4396-ff9f-08dafeb07252
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	iukmimeQS2l48JWPADIxQ1FDxcgtBJAgIQfvh96HLpiu4+SXCUIyKZA552IXRFsygOZAqDtUospneXTUDpTz4guTURruQGfsXfGQqP/FTugEB+pC6WR8vDSzUSn+BdVxtMxYUsbDL2uxfX8YJc+VcFaZhXgLuQeth6amTbTIzlDFi2GsPmmKGYuBwJVEjDztA2xW/dZbcK6FqrKCTwHsr/YNmwEq7JczFTL6iTFQIDXhz8EjR4tALTJTyq00R9OCnsCf9EYHS7lZBAWUt3MjDlbHHNCKwCIp66zEtMWpu6PJQkv4kIkqdli7gb21oz4YPJTPBai9IwMhAYvNJdovF38WGPCfBzdU0XcqT7e6Uh2VSV0zWq38iCx1xXq0Dp/FPRkSaWsf+XAnQ/SZnMRLuVFp6c5zxSNcXE9Qm5iQeP+0hm6eeUHQz3mfQ29OY9IR0ke0LycBNeK5zL8XSP0+mqiQPbH3AN3rn7lfgTeyWYbI7/BeSJSru2LTjrAndeiQlbiQ0yQPN1SDjfWH46gd6lhsX9XbSY5gReLwFaEfm9KvTN2YyviGbU6mabnr0AWAFhssOeOZuUjez4f62TvYnZfJzuJVhFO2Tb2DLPrvXc4b/CRPPuroq7OMaaQ8hDdlvfOaHgqMdHJKonHNJdYJr+hkDjatEFUjqNqxmrURtLWhQRI8z/+k3WXg1R1J4N6xy6KlFGxpoftNt0VLp82fBIqoEh/raFZNN5aA+EJcCmZa6Ia56G3FLYSh3PYa2PZ5bzBNllWHU8ELLtCynL4n+w==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199018)(40470700004)(36840700001)(46966006)(83380400001)(36860700001)(426003)(336012)(54906003)(47076005)(6916009)(2616005)(478600001)(8676002)(186003)(36756003)(1076003)(70206006)(70586007)(966005)(356005)(2906002)(86362001)(82740400003)(41300700001)(26005)(81166007)(5660300002)(316002)(40480700001)(40460700003)(6666004)(44832011)(4326008)(82310400005)(8936002)(66899018)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:45:01.1306
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 00f51690-a587-4396-ff9f-08dafeb07252
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT013.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7976

Add a new machine, xenpvh, which creates an IOREQ server to register/connect
with the Xen hypervisor.

Optional: when CONFIG_TPM is enabled, the machine also creates a
tpm-tis-device, adds a TPM emulator, connects to a swtpm instance running on
the host machine via a chardev socket, and supports TPM functionality for a
guest domain.

Extra command line arguments for the aarch64 xenpvh QEMU to connect to swtpm:
    -chardev socket,id=chrtpm,path=/tmp/myvtpm2/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -machine tpm-base-addr=0x0c000000

swtpm implements a software TPM emulator (TPM 1.2 and TPM 2.0) built on
libtpms and provides access to TPM functionality over socket, chardev and
CUSE interfaces.
GitHub repo: https://github.com/stefanberger/swtpm
Example of starting swtpm on the host machine:
    mkdir /tmp/vtpm2
    swtpm socket --tpmstate dir=/tmp/vtpm2 \
    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 docs/system/arm/xenpvh.rst    |  34 +++++++
 docs/system/target-arm.rst    |   1 +
 hw/arm/meson.build            |   2 +
 hw/arm/xen_arm.c              | 184 ++++++++++++++++++++++++++++++++++
 include/hw/arm/xen_arch_hvm.h |   9 ++
 include/hw/xen/arch_hvm.h     |   2 +
 6 files changed, 232 insertions(+)
 create mode 100644 docs/system/arm/xenpvh.rst
 create mode 100644 hw/arm/xen_arm.c
 create mode 100644 include/hw/arm/xen_arch_hvm.h

diff --git a/docs/system/arm/xenpvh.rst b/docs/system/arm/xenpvh.rst
new file mode 100644
index 0000000000..e1655c7ab8
--- /dev/null
+++ b/docs/system/arm/xenpvh.rst
@@ -0,0 +1,34 @@
+XENPVH (``xenpvh``)
+=========================================
+This machine creates an IOREQ server to register/connect with the Xen Hypervisor.
+
+When TPM is enabled, this machine also creates a tpm-tis-device at a
+user-provided TPM base address, adds a TPM emulator and connects to a swtpm
+application running on the host machine via a chardev socket. This enables
+xenpvh to support TPM functionality for a guest domain.
+
+More information about TPM usage and installing the swtpm Linux application
+can be found at: docs/specs/tpm.rst.
+
+Example of starting swtpm on the host machine:
+.. code-block:: console
+
+    mkdir /tmp/vtpm2
+    swtpm socket --tpmstate dir=/tmp/vtpm2 \
+    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
+
+Sample QEMU xenpvh commands for running and connecting with Xen:
+.. code-block:: console
+
+    qemu-system-aarch64 -xen-domid 1 \
+    -chardev socket,id=libxl-cmd,path=qmp-libxl-1,server=on,wait=off \
+    -mon chardev=libxl-cmd,mode=control \
+    -chardev socket,id=libxenstat-cmd,path=qmp-libxenstat-1,server=on,wait=off \
+    -mon chardev=libxenstat-cmd,mode=control \
+    -xen-attach -name guest0 -vnc none -display none -nographic \
+    -machine xenpvh -m 1301 \
+    -chardev socket,id=chrtpm,path=/tmp/vtpm2/swtpm-sock \
+    -tpmdev emulator,id=tpm0,chardev=chrtpm -machine tpm-base-addr=0x0C000000
+
+In the above QEMU command, the last two lines connect the xenpvh QEMU to
+swtpm via a chardev socket.
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index 91ebc26c6d..af8d7c77d6 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -106,6 +106,7 @@ undocumented; you can get a complete list by running
    arm/stm32
    arm/virt
    arm/xlnx-versal-virt
+   arm/xenpvh
 
 Emulated CPU architecture support
 =================================
diff --git a/hw/arm/meson.build b/hw/arm/meson.build
index b036045603..06bddbfbb8 100644
--- a/hw/arm/meson.build
+++ b/hw/arm/meson.build
@@ -61,6 +61,8 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true: files('fsl-imx7.c', 'mcimx7d-sabre.
 arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
 arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
 arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
+arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
+arm_ss.add_all(xen_ss)
 
 softmmu_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmu-common.c'))
 softmmu_ss.add(when: 'CONFIG_EXYNOS4', if_true: files('exynos4_boards.c'))
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
new file mode 100644
index 0000000000..12b19e3609
--- /dev/null
+++ b/hw/arm/xen_arm.c
@@ -0,0 +1,184 @@
+/*
+ * QEMU ARM Xen PV Machine
+ *
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/error-report.h"
+#include "qapi/qapi-commands-migration.h"
+#include "qapi/visitor.h"
+#include "hw/boards.h"
+#include "hw/sysbus.h"
+#include "sysemu/block-backend.h"
+#include "sysemu/tpm_backend.h"
+#include "sysemu/sysemu.h"
+#include "hw/xen/xen-legacy-backend.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "sysemu/tpm.h"
+#include "hw/xen/arch_hvm.h"
+
+#define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
+OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
+
+static MemoryListener xen_memory_listener = {
+    .region_add = xen_region_add,
+    .region_del = xen_region_del,
+    .log_start = NULL,
+    .log_stop = NULL,
+    .log_sync = NULL,
+    .log_global_start = NULL,
+    .log_global_stop = NULL,
+    .priority = 10,
+};
+
+struct XenArmState {
+    /*< private >*/
+    MachineState parent;
+
+    XenIOState *state;
+
+    struct {
+        uint64_t tpm_base_addr;
+    } cfg;
+};
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    hw_error("Invalid ioreq type 0x%x\n", req->type);
+
+    return;
+}
+
+void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
+                         bool add)
+{
+}
+
+void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+}
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+}
+
+#ifdef CONFIG_TPM
+static void xen_enable_tpm(XenArmState *xam)
+{
+    Error *errp = NULL;
+    DeviceState *dev;
+    SysBusDevice *busdev;
+
+    TPMBackend *be = qemu_find_tpm_be("tpm0");
+    if (be == NULL) {
+        DPRINTF("Couldn't find the backend for tpm0\n");
+        return;
+    }
+    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
+    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
+    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
+    busdev = SYS_BUS_DEVICE(dev);
+    sysbus_realize_and_unref(busdev, &error_fatal);
+    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
+
+    DPRINTF("Connected tpmdev at address 0x%" PRIx64 "\n", xam->cfg.tpm_base_addr);
+}
+#endif
+
+static void xen_arm_init(MachineState *machine)
+{
+    XenArmState *xam = XEN_ARM(machine);
+
+    xam->state = g_new0(XenIOState, 1);
+
+    xen_register_ioreq(xam->state, machine->smp.cpus, xen_memory_listener);
+
+#ifdef CONFIG_TPM
+    if (xam->cfg.tpm_base_addr) {
+        xen_enable_tpm(xam);
+    } else {
+        DPRINTF("tpm-base-addr is not provided. TPM will not be enabled\n");
+    }
+#endif
+
+    return;
+}
+
+#ifdef CONFIG_TPM
+static void xen_arm_get_tpm_base_addr(Object *obj, Visitor *v,
+                                      const char *name, void *opaque,
+                                      Error **errp)
+{
+    XenArmState *xam = XEN_ARM(obj);
+    uint64_t value = xam->cfg.tpm_base_addr;
+
+    visit_type_uint64(v, name, &value, errp);
+}
+
+static void xen_arm_set_tpm_base_addr(Object *obj, Visitor *v,
+                                      const char *name, void *opaque,
+                                      Error **errp)
+{
+    XenArmState *xam = XEN_ARM(obj);
+    uint64_t value;
+
+    if (!visit_type_uint64(v, name, &value, errp)) {
+        return;
+    }
+
+    xam->cfg.tpm_base_addr = value;
+}
+#endif
+
+static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
+{
+
+    MachineClass *mc = MACHINE_CLASS(oc);
+    mc->desc = "Xen Para-virtualized ARM machine";
+    mc->init = xen_arm_init;
+    mc->max_cpus = 1;
+    mc->default_machine_opts = "accel=xen";
+
+#ifdef CONFIG_TPM
+    object_class_property_add(oc, "tpm-base-addr", "uint64_t",
+                              xen_arm_get_tpm_base_addr,
+                              xen_arm_set_tpm_base_addr,
+                              NULL, NULL);
+    object_class_property_set_description(oc, "tpm-base-addr",
+                                          "Set Base address for TPM device.");
+
+    machine_class_allow_dynamic_sysbus_dev(mc, TYPE_TPM_TIS_SYSBUS);
+#endif
+}
+
+static const TypeInfo xen_arm_machine_type = {
+    .name = TYPE_XEN_ARM,
+    .parent = TYPE_MACHINE,
+    .class_init = xen_arm_machine_class_init,
+    .instance_size = sizeof(XenArmState),
+};
+
+static void xen_arm_machine_register_types(void)
+{
+    type_register_static(&xen_arm_machine_type);
+}
+
+type_init(xen_arm_machine_register_types)
diff --git a/include/hw/arm/xen_arch_hvm.h b/include/hw/arm/xen_arch_hvm.h
new file mode 100644
index 0000000000..8fd645e723
--- /dev/null
+++ b/include/hw/arm/xen_arch_hvm.h
@@ -0,0 +1,9 @@
+#ifndef HW_XEN_ARCH_ARM_HVM_H
+#define HW_XEN_ARCH_ARM_HVM_H
+
+#include <xen/hvm/ioreq.h>
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
+void arch_xen_set_memory(XenIOState *state,
+                         MemoryRegionSection *section,
+                         bool add);
+#endif
diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
index 26674648d8..c7c515220d 100644
--- a/include/hw/xen/arch_hvm.h
+++ b/include/hw/xen/arch_hvm.h
@@ -1,3 +1,5 @@
 #if defined(TARGET_I386) || defined(TARGET_X86_64)
 #include "hw/i386/xen_arch_hvm.h"
+#elif defined(TARGET_ARM) || defined(TARGET_AARCH64)
+#include "hw/arm/xen_arch_hvm.h"
 #endif
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483902.750359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQK-0002wl-Ra; Wed, 25 Jan 2023 08:46:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483902.750359; Wed, 25 Jan 2023 08:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQK-0002vH-NT; Wed, 25 Jan 2023 08:46:28 +0000
Received: by outflank-mailman (input) for mailman id 483902;
 Wed, 25 Jan 2023 08:46:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbQJ-0000gj-AF
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:27 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7eab::61d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5aa0e51-9c8c-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:45:42 +0100 (CET)
Received: from DS7PR03CA0229.namprd03.prod.outlook.com (2603:10b6:5:3ba::24)
 by DM4PR12MB5167.namprd12.prod.outlook.com (2603:10b6:5:396::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Wed, 25 Jan
 2023 08:44:57 +0000
Received: from DM6NAM11FT114.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3ba:cafe::a4) by DS7PR03CA0229.outlook.office365.com
 (2603:10b6:5:3ba::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:57 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT114.mail.protection.outlook.com (10.13.172.206) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Wed, 25 Jan 2023 08:44:57 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:57 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:56 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5aa0e51-9c8c-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n6GieLWep7fHehfvtO0P7hxmp62pV8AyyOXQ+4WMhZDO8667MwB5/I+XmVm8PoYlfnPNP7R1Gm7EqtCxjXDCffFD2jNV/blLYSh7sCgbuhMxjFQ9G5k9+1HE/RFC0BQVvyTJsoDGRgyLBqX2o9wglmv9pl98yIuuWlBg64eEm/GBdjmR2gacct/GYpv3JHXl2Qn4iJj3oJfFAehTQMaFt6KfkOOmAYZ94azOmYpjNzc6U/tVAgfUyP4vjhXYjMrPw96GpfRRXXLcpXvkHMKb7zDvmyZgYI4XzzfVQ/BF0CvxlVhFBm5ePei4zW1ccciPRJbNrYubS5wpt2AUZwelhw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=URxTp3Cj5zWl6imZ6G51I5Cd2LY5Q6C4OO3dPqmmi+4=;
 b=GYyvDxPxnwwehsB2XdCkyPJImoshu+qrGPsy3uhPQo0H+UKylWR7pgPcyVAk0nhcOIJ1b/gqcNqhRyp4P7eNHcYCxhwc6C0l9I7pXMzb+lJJz94hYJ60vlxPtCOiMMCr0hR1YHEh0CzDfU70pALAUCzBAs8hLNAl/SDGicEg4twpTiUhcohNUA0ReD4ml6zetGQwyfK2tms25c9EVEKtAhSgp57UzuVnRbbnDA8p7gDfeOArV0HNo/SeJ6eVluJ8hIUidQl32Rh7tbKpFlNM0jb62CzO1x+UmcmjSv6wpXMBOAdCVF46nn9S7xP9lExDBrZVKp91gTanUNUNCz25TA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=URxTp3Cj5zWl6imZ6G51I5Cd2LY5Q6C4OO3dPqmmi+4=;
 b=OzN/oGFGZ/17Pbl6+cdTxHoOS8AaWzRb3syr/0+m42Ii5mOw7KOO0RKA69H4Km208CzdmUMGy51w8vO6J8kyUYkzbEMwwaO/wHzj789eDcE83uMQ89QTcHjIG6ltkAOB8aHr1KcIHvhbSiT9J4O93kv9Pve0E+uyd8adBjTnNGU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v3 06/10] hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration failure
Date: Wed, 25 Jan 2023 00:43:52 -0800
Message-ID: <20230125084356.6684-8-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT114:EE_|DM4PR12MB5167:EE_
X-MS-Office365-Filtering-Correlation-Id: e7226f32-1b6c-4f14-8ea2-08dafeb0703e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OarZNzlyYufzoapGTPFK6aLHsMCMHkniKdDziHZ7pyAzGW60lwZmwqQQYNRP+7+YLuGyIMigTuylaREeSsCrR8yXKKGsNtGFqbMDQRQpvZKhttpTtEmLDgoDbMZLsMkFQPgEXWJj2GL+5DO0FHsml0oKqbfSIN1QWK5poiQuTPvWHHTFjR0R3okIqxkvz1n9zX1nmL9xJLzciqEUeRAAv5IVnonNcxnb227h6DNj6nCWryp3yls5EtmVLeqcZcLnm+0THwsnBpXJ6wPiOdQdGFjLqN6fy15PwtUmyESBh4gK9CQ0K0it7UfgCKDzVQ0HxMu/ClORCQwGFcZg+aXAkvaH4UyrpAYR2e3v2FTrryqxzq0sYaBBV/dYbN8fPgb+ulSZlSafRqU3uMqzh3ap9vvv3gWeXdFdY9XNh3YMaACk4wSSS/31ZMAsdLKKlxDBN9U5hu3vIUFDzCJ/GQRn1ZNpoV+vfuFjoEDuW9U9qFWIaIAGQxPOQBM3cdYhNcKMvJ+XQatVbAkXE66+V/YVjoz7UnVbhL4GuE1qMPHvTaWf+ZtzNUCC68QqwgDFnDbMVlhdQ9NxbtlFUjv1DedphMtm2awILugjuk7YijIlXlJq6WVvl25umeAL8NpWjvVYS5YU4duaFuA2ZMnVUPbtfSN0cn4Q3SoExiqWlErU+OTVJdM+TGY4DEZ9PVNXWRKtV/d0+HXvyotCwzedRXFQvrEt8Ub/mVWYpzz+pSzo2Fo=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(396003)(136003)(346002)(376002)(451199018)(40470700004)(36840700001)(46966006)(47076005)(82310400005)(54906003)(86362001)(40460700003)(36756003)(40480700001)(81166007)(356005)(36860700001)(82740400003)(44832011)(316002)(2906002)(41300700001)(4326008)(8936002)(70206006)(70586007)(6916009)(8676002)(5660300002)(186003)(26005)(1076003)(2616005)(83380400001)(426003)(6666004)(478600001)(336012)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:57.6412
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e7226f32-1b6c-4f14-8ea2-08dafeb0703e
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT114.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5167

From: Stefano Stabellini <stefano.stabellini@amd.com>

On ARM it is possible to have a functioning xenpv machine with only the
PV backends and no IOREQ server. If the IOREQ server creation fails,
continue to the PV backend initialization.

Also, move the IOREQ registration and mapping subroutine into a new
function, xen_do_ioreq_register().

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 hw/xen/xen-hvm-common.c | 53 ++++++++++++++++++++++++++++-------------
 1 file changed, 36 insertions(+), 17 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index e748d8d423..94dbbe97ed 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -777,25 +777,12 @@ err:
     exit(1);
 }
 
-void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
-                        MemoryListener xen_memory_listener)
+static void xen_do_ioreq_register(XenIOState *state,
+                                           unsigned int max_cpus,
+                                           MemoryListener xen_memory_listener)
 {
     int i, rc;
 
-    state->xce_handle = xenevtchn_open(NULL, 0);
-    if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
-        goto err;
-    }
-
-    state->xenstore = xs_daemon_open();
-    if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
-        goto err;
-    }
-
-    xen_create_ioreq_server(xen_domid, &state->ioservid);
-
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
 
@@ -859,12 +846,44 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
     QLIST_INIT(&state->dev_list);
     device_listener_register(&state->device_listener);
 
+    return;
+
+err:
+    error_report("xen hardware virtual machine initialisation failed");
+    exit(1);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener)
+{
+    int rc;
+
+    state->xce_handle = xenevtchn_open(NULL, 0);
+    if (state->xce_handle == NULL) {
+        perror("xen: event channel open");
+        goto err;
+    }
+
+    state->xenstore = xs_daemon_open();
+    if (state->xenstore == NULL) {
+        perror("xen: xenstore open");
+        goto err;
+    }
+
+    rc = xen_create_ioreq_server(xen_domid, &state->ioservid);
+    if (!rc) {
+        xen_do_ioreq_register(state, max_cpus, xen_memory_listener);
+    } else {
+        warn_report("xen: failed to create ioreq server");
+    }
+
     xen_bus_init();
 
     xen_register_backend(state);
 
     return;
+
 err:
-    error_report("xen hardware virtual machine initialisation failed");
+    error_report("xen hardware virtual machine backend registration failed");
     exit(1);
 }
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483903.750366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQL-00031M-LC; Wed, 25 Jan 2023 08:46:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483903.750366; Wed, 25 Jan 2023 08:46:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQL-0002zL-5m; Wed, 25 Jan 2023 08:46:29 +0000
Received: by outflank-mailman (input) for mailman id 483903;
 Wed, 25 Jan 2023 08:46:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbQJ-0001qZ-IG
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:27 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c107994c-9c8c-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:46:26 +0100 (CET)
Received: from DM6PR08CA0043.namprd08.prod.outlook.com (2603:10b6:5:1e0::17)
 by SJ0PR12MB5674.namprd12.prod.outlook.com (2603:10b6:a03:42c::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Wed, 25 Jan
 2023 08:44:55 +0000
Received: from DM6NAM11FT115.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1e0:cafe::50) by DM6PR08CA0043.outlook.office365.com
 (2603:10b6:5:1e0::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.17 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:55 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT115.mail.protection.outlook.com (10.13.173.33) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.21 via Frontend Transport; Wed, 25 Jan 2023 08:44:54 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:53 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:52 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c107994c-9c8c-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Peter Maydell <peter.maydell@linaro.org>,
	"open list:ARM TCG CPUs" <qemu-arm@nongnu.org>
Subject: [QEMU][PATCH v3 04/12] xen_arm: Add "accel = xen" and drop extra interface openings
Date: Wed, 25 Jan 2023 00:43:49 -0800
Message-ID: <20230125084356.6684-5-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT115:EE_|SJ0PR12MB5674:EE_
X-MS-Office365-Filtering-Correlation-Id: 73687c1e-53a1-42a7-fe2f-08dafeb06e6d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:54.5966
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 73687c1e-53a1-42a7-fe2f-08dafeb06e6d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT115.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR12MB5674

In order to use virtio backends, we need to make sure that the Xen accelerator
is enabled (xen_enabled() returns true), as the memory/cache subsystems
check xen_enabled() to perform Xen-specific actions. Without it,
the xen-mapcache (which is needed for mapping guest memory) is not used.
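
As a minimal sketch of the gating described above (illustrative only, not the
actual QEMU code; all `_sketch` names are assumptions), subsystems consult a
global "is the Xen accelerator active?" predicate before taking Xen-specific
paths such as the mapcache:

```c
/* Hedged sketch: a global predicate like QEMU's xen_enabled(), consulted
 * by backends before taking Xen-specific code paths. */
#include <assert.h>
#include <stdbool.h>

static bool xen_allowed_sketch; /* set when the machine starts with accel=xen */

static bool xen_enabled_sketch(void)
{
    return xen_allowed_sketch;
}

/* A backend init that refuses to run unless the accelerator is active,
 * mirroring how guest memory mapping depends on the xen-mapcache. */
static int virtio_backend_init_sketch(void)
{
    if (!xen_enabled_sketch()) {
        return -1; /* no mapcache, so guest memory cannot be mapped */
    }
    return 0;
}
```

Setting `accel=xen` on the machine is what flips the real predicate; the patch
below achieves that via `mc->default_machine_opts`.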

Also drop the extra interface opening, as this is already done in xen-all.c
(so xen_init_ioreq() can be dropped completely), and skip virtio/TPM
initialization when device emulation is not available.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 hw/arm/xen_arm.c | 29 ++---------------------------
 1 file changed, 2 insertions(+), 27 deletions(-)

diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
index fde919df29..4ac425a3c5 100644
--- a/hw/arm/xen_arm.c
+++ b/hw/arm/xen_arm.c
@@ -137,30 +137,6 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
 {
 }
 
-static int xen_init_ioreq(XenIOState *state, unsigned int max_cpus)
-{
-    xen_dmod = xendevicemodel_open(0, 0);
-    xen_xc = xc_interface_open(0, 0, 0);
-
-    if (xen_xc == NULL) {
-        perror("xen: can't open xen interface\n");
-        return -1;
-    }
-
-    xen_fmem = xenforeignmemory_open(0, 0);
-    if (xen_fmem == NULL) {
-        perror("xen: can't open xen fmem interface\n");
-        xc_interface_close(xen_xc);
-        return -1;
-    }
-
-    xen_register_ioreq(state, max_cpus, xen_memory_listener);
-
-    xenstore_record_dm_state(xenstore, "running");
-
-    return 0;
-}
-
 static void xen_enable_tpm(void)
 {
 #ifdef CONFIG_TPM
@@ -198,9 +174,7 @@ static void xen_arm_init(MachineState *machine)
 
     xen_init_ram(machine);
 
-    if (xen_init_ioreq(xam->state, machine->smp.cpus)) {
-        return;
-    }
+    xen_register_ioreq(xam->state, machine->smp.cpus, xen_memory_listener);
 
     xen_create_virtio_mmio_devices(xam);
 
@@ -218,6 +192,7 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
     mc->max_cpus = 1;
     /* Set explicitly here to make sure that real ram_size is passed */
     mc->default_ram_size = 0;
+    mc->default_machine_opts = "accel=xen";
 
     machine_class_allow_dynamic_sysbus_dev(mc, TYPE_TPM_TIS_SYSBUS);
 }
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483904.750375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQM-0003Ms-Nq; Wed, 25 Jan 2023 08:46:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483904.750375; Wed, 25 Jan 2023 08:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQM-0003LU-EV; Wed, 25 Jan 2023 08:46:30 +0000
Received: by outflank-mailman (input) for mailman id 483904;
 Wed, 25 Jan 2023 08:46:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbQK-0000gj-AS
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:28 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7e88::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a4db92c9-9c8c-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:45:39 +0100 (CET)
Received: from DM6PR08CA0057.namprd08.prod.outlook.com (2603:10b6:5:1e0::31)
 by DS7PR12MB5934.namprd12.prod.outlook.com (2603:10b6:8:7d::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Wed, 25 Jan
 2023 08:44:55 +0000
Received: from DM6NAM11FT115.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1e0:cafe::8) by DM6PR08CA0057.outlook.office365.com
 (2603:10b6:5:1e0::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:55 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT115.mail.protection.outlook.com (10.13.173.33) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.21 via Frontend Transport; Wed, 25 Jan 2023 08:44:55 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:54 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:53 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4db92c9-9c8c-11ed-b8d1-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, "Richard
 Henderson" <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>
Subject: [QEMU][PATCH v3 04/10] xen-hvm: reorganize xen-hvm and move common function to xen-hvm-common
Date: Wed, 25 Jan 2023 00:43:50 -0800
Message-ID: <20230125084356.6684-6-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT115:EE_|DS7PR12MB5934:EE_
X-MS-Office365-Filtering-Correlation-Id: e011e546-3b3d-4504-08d0-08dafeb06efd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:55.5341
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e011e546-3b3d-4504-08d0-08dafeb06efd
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT115.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB5934

From: Stefano Stabellini <stefano.stabellini@amd.com>

This patch does the following:
1. Create arch_handle_ioreq() and arch_xen_set_memory(). This is done in
    preparation for moving most of the xen-hvm code to an arch-neutral
    location: move the x86-specific portion of xen_set_memory to
    arch_xen_set_memory, and move handle_vmport_ioreq to arch_handle_ioreq.

2. Pure code movement: move common functions to hw/xen/xen-hvm-common.c.
    Extract common functionality from hw/i386/xen/xen-hvm.c and move it to
    hw/xen/xen-hvm-common.c. These common functions are useful for creating
    an IOREQ server.

    xen_hvm_init_pc() contains the architecture-independent code for creating
    and mapping an IOREQ server, connecting memory and IO listeners, and
    initializing a Xen bus and registering backends. Move this common Xen code
    to a new function, xen_register_ioreq(), which can be used by both x86 and
    ARM machines.

    The following functions are moved to hw/xen/xen-hvm-common.c:
        xen_vcpu_eport(), xen_vcpu_ioreq(), xen_ram_alloc(), xen_set_memory(),
        xen_region_add(), xen_region_del(), xen_io_add(), xen_io_del(),
        xen_device_realize(), xen_device_unrealize(),
        cpu_get_ioreq_from_shared_memory(), cpu_get_ioreq(), do_inp(),
        do_outp(), rw_phys_req_item(), read_phys_req_item(),
        write_phys_req_item(), cpu_ioreq_pio(), cpu_ioreq_move(),
        cpu_ioreq_config(), handle_ioreq(), handle_buffered_iopage(),
        handle_buffered_io(), cpu_handle_ioreq(), xen_main_loop_prepare(),
        xen_hvm_change_state_handler(), xen_exit_notifier(),
        xen_map_ioreq_server(), destroy_hvm_domain() and
        xen_shutdown_fatal_error()

3. Remove the static qualifier from the functions below:
    1. xen_region_add()
    2. xen_region_del()
    3. xen_io_add()
    4. xen_io_del()
    5. xen_device_realize()
    6. xen_device_unrealize()
    7. xen_hvm_change_state_handler()
    8. cpu_ioreq_pio()
    9. xen_exit_notifier()

4. Replace TARGET_PAGE_SIZE with XC_PAGE_SIZE to match the page size used by Xen.
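
The point of item 4 can be sketched as follows (illustrative only; the
`_SKETCH` macros are local stand-ins, not the real Xen headers): Xen's
interfaces work in units of XC_PAGE_SIZE (4 KiB), while TARGET_PAGE_SIZE is a
property of the emulated CPU and need not match it, so the mask/shift
arithmetic must use the Xen constant.

```c
/* Hedged sketch of Xen-page alignment arithmetic, the kind of math the
 * moved listener code performs before issuing mapping hypercalls. */
#include <assert.h>
#include <stdint.h>

#define XC_PAGE_SHIFT_SKETCH 12
#define XC_PAGE_SIZE_SKETCH  ((uint64_t)1 << XC_PAGE_SHIFT_SKETCH) /* 4096 */
#define XC_PAGE_MASK_SKETCH  (~(XC_PAGE_SIZE_SKETCH - 1))

/* Align an address down to the containing Xen page boundary. */
static uint64_t xc_page_align_down_sketch(uint64_t addr)
{
    return addr & XC_PAGE_MASK_SKETCH;
}

/* Round a length up to a whole number of Xen pages. */
static uint64_t xc_page_align_up_sketch(uint64_t len)
{
    return (len + XC_PAGE_SIZE_SKETCH - 1) & XC_PAGE_MASK_SKETCH;
}
```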

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 hw/i386/xen/trace-events        |   14 -
 hw/i386/xen/xen-hvm.c           | 1023 ++-----------------------------
 hw/xen/meson.build              |    5 +-
 hw/xen/trace-events             |   14 +
 hw/xen/xen-hvm-common.c         |  870 ++++++++++++++++++++++++++
 include/hw/i386/xen_arch_hvm.h  |   11 +
 include/hw/xen/arch_hvm.h       |    3 +
 include/hw/xen/xen-hvm-common.h |   97 +++
 8 files changed, 1063 insertions(+), 974 deletions(-)
 create mode 100644 hw/xen/xen-hvm-common.c
 create mode 100644 include/hw/i386/xen_arch_hvm.h
 create mode 100644 include/hw/xen/arch_hvm.h
 create mode 100644 include/hw/xen/xen-hvm-common.h

diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index a0c89d91c4..5d0a8d6dcf 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -7,17 +7,3 @@ xen_platform_log(char *s) "xen platform: %s"
 xen_pv_mmio_read(uint64_t addr) "WARNING: read from Xen PV Device MMIO space (address 0x%"PRIx64")"
 xen_pv_mmio_write(uint64_t addr) "WARNING: write to Xen PV Device MMIO space (address 0x%"PRIx64")"
 
-# xen-hvm.c
-xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
-xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "0x%"PRIx64" size 0x%lx, log_dirty %i"
-handle_ioreq(void *req, uint32_t type, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p type=%d dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-handle_ioreq_read(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p read type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-handle_ioreq_write(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p write type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-cpu_ioreq_pio(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p pio dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-cpu_ioreq_pio_read_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio read reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
-cpu_ioreq_pio_write_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio write reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
-cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p copy dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
-cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 06c446e7be..b81c671598 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -10,46 +10,20 @@
 
 #include "qemu/osdep.h"
 #include "qemu/units.h"
+#include "qapi/error.h"
+#include "qapi/qapi-commands-migration.h"
+#include "trace.h"
 
-#include "cpu.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pci_host.h"
 #include "hw/i386/pc.h"
 #include "hw/irq.h"
-#include "hw/hw.h"
 #include "hw/i386/apic-msidef.h"
-#include "hw/xen/xen_common.h"
-#include "hw/xen/xen-legacy-backend.h"
-#include "hw/xen/xen-bus.h"
 #include "hw/xen/xen-x86.h"
-#include "qapi/error.h"
-#include "qapi/qapi-commands-migration.h"
-#include "qemu/error-report.h"
-#include "qemu/main-loop.h"
 #include "qemu/range.h"
-#include "sysemu/runstate.h"
-#include "sysemu/sysemu.h"
-#include "sysemu/xen.h"
-#include "sysemu/xen-mapcache.h"
-#include "trace.h"
 
-#include <xen/hvm/ioreq.h>
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/arch_hvm.h"
 #include <xen/hvm/e820.h>
 
-//#define DEBUG_XEN_HVM
-
-#ifdef DEBUG_XEN_HVM
-#define DPRINTF(fmt, ...) \
-    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
-#else
-#define DPRINTF(fmt, ...) \
-    do { } while (0)
-#endif
-
-static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi;
-static MemoryRegion *framebuffer;
-static bool xen_in_migration;
-
 /* Compatibility with older version */
 
 /* This allows QEMU to build on a system that has Xen 4.5 or earlier
@@ -75,26 +49,9 @@ typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
 static shared_vmport_iopage_t *shared_vmport_page;
 
-static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
-{
-    return shared_page->vcpu_ioreq[i].vp_eport;
-}
-static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
-{
-    return &shared_page->vcpu_ioreq[vcpu];
-}
-
-#define BUFFER_IO_MAX_DELAY  100
-
-typedef struct XenPhysmap {
-    hwaddr start_addr;
-    ram_addr_t size;
-    const char *name;
-    hwaddr phys_offset;
-
-    QLIST_ENTRY(XenPhysmap) list;
-} XenPhysmap;
-
+static MemoryRegion ram_640k, ram_lo, ram_hi;
+static MemoryRegion *framebuffer;
+static bool xen_in_migration;
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
 static const XenPhysmap *log_for_dirtybit;
 /* Buffer used by xen_sync_dirty_bitmap */
@@ -102,38 +59,6 @@ static unsigned long *dirty_bitmap;
 static Notifier suspend;
 static Notifier wakeup;
 
-typedef struct XenPciDevice {
-    PCIDevice *pci_dev;
-    uint32_t sbdf;
-    QLIST_ENTRY(XenPciDevice) entry;
-} XenPciDevice;
-
-typedef struct XenIOState {
-    ioservid_t ioservid;
-    shared_iopage_t *shared_page;
-    buffered_iopage_t *buffered_io_page;
-    xenforeignmemory_resource_handle *fres;
-    QEMUTimer *buffered_io_timer;
-    CPUState **cpu_by_vcpu_id;
-    /* the evtchn port for polling the notification, */
-    evtchn_port_t *ioreq_local_port;
-    /* evtchn remote and local ports for buffered io */
-    evtchn_port_t bufioreq_remote_port;
-    evtchn_port_t bufioreq_local_port;
-    /* the evtchn fd for polling */
-    xenevtchn_handle *xce_handle;
-    /* which vcpu we are serving */
-    int send_vcpu;
-
-    struct xs_handle *xenstore;
-    MemoryListener memory_listener;
-    MemoryListener io_listener;
-    QLIST_HEAD(, XenPciDevice) dev_list;
-    DeviceListener device_listener;
-
-    Notifier exit;
-} XenIOState;
-
 /* Xen specific function for piix pci */
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
@@ -246,42 +171,6 @@ static void xen_ram_init(PCMachineState *pcms,
     }
 }
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
-                   Error **errp)
-{
-    unsigned long nr_pfn;
-    xen_pfn_t *pfn_list;
-    int i;
-
-    if (runstate_check(RUN_STATE_INMIGRATE)) {
-        /* RAM already populated in Xen */
-        fprintf(stderr, "%s: do not alloc "RAM_ADDR_FMT
-                " bytes of ram at "RAM_ADDR_FMT" when runstate is INMIGRATE\n",
-                __func__, size, ram_addr);
-        return;
-    }
-
-    if (mr == &ram_memory) {
-        return;
-    }
-
-    trace_xen_ram_alloc(ram_addr, size);
-
-    nr_pfn = size >> TARGET_PAGE_BITS;
-    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
-
-    for (i = 0; i < nr_pfn; i++) {
-        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
-    }
-
-    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
-        error_setg(errp, "xen: failed to populate ram at " RAM_ADDR_FMT,
-                   ram_addr);
-    }
-
-    g_free(pfn_list);
-}
-
 static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
 {
     XenPhysmap *physmap = NULL;
@@ -471,144 +360,6 @@ static int xen_remove_from_physmap(XenIOState *state,
     return 0;
 }
 
-static void xen_set_memory(struct MemoryListener *listener,
-                           MemoryRegionSection *section,
-                           bool add)
-{
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-    hwaddr start_addr = section->offset_within_address_space;
-    ram_addr_t size = int128_get64(section->size);
-    bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
-    hvmmem_type_t mem_type;
-
-    if (section->mr == &ram_memory) {
-        return;
-    } else {
-        if (add) {
-            xen_map_memory_section(xen_domid, state->ioservid,
-                                   section);
-        } else {
-            xen_unmap_memory_section(xen_domid, state->ioservid,
-                                     section);
-        }
-    }
-
-    if (!memory_region_is_ram(section->mr)) {
-        return;
-    }
-
-    if (log_dirty != add) {
-        return;
-    }
-
-    trace_xen_client_set_memory(start_addr, size, log_dirty);
-
-    start_addr &= TARGET_PAGE_MASK;
-    size = TARGET_PAGE_ALIGN(size);
-
-    if (add) {
-        if (!memory_region_is_rom(section->mr)) {
-            xen_add_to_physmap(state, start_addr, size,
-                               section->mr, section->offset_within_region);
-        } else {
-            mem_type = HVMMEM_ram_ro;
-            if (xen_set_mem_type(xen_domid, mem_type,
-                                 start_addr >> TARGET_PAGE_BITS,
-                                 size >> TARGET_PAGE_BITS)) {
-                DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
-                        start_addr);
-            }
-        }
-    } else {
-        if (xen_remove_from_physmap(state, start_addr, size) < 0) {
-            DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
-        }
-    }
-}
-
-static void xen_region_add(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    memory_region_ref(section->mr);
-    xen_set_memory(listener, section, true);
-}
-
-static void xen_region_del(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    xen_set_memory(listener, section, false);
-    memory_region_unref(section->mr);
-}
-
-static void xen_io_add(MemoryListener *listener,
-                       MemoryRegionSection *section)
-{
-    XenIOState *state = container_of(listener, XenIOState, io_listener);
-    MemoryRegion *mr = section->mr;
-
-    if (mr->ops == &unassigned_io_ops) {
-        return;
-    }
-
-    memory_region_ref(mr);
-
-    xen_map_io_section(xen_domid, state->ioservid, section);
-}
-
-static void xen_io_del(MemoryListener *listener,
-                       MemoryRegionSection *section)
-{
-    XenIOState *state = container_of(listener, XenIOState, io_listener);
-    MemoryRegion *mr = section->mr;
-
-    if (mr->ops == &unassigned_io_ops) {
-        return;
-    }
-
-    xen_unmap_io_section(xen_domid, state->ioservid, section);
-
-    memory_region_unref(mr);
-}
-
-static void xen_device_realize(DeviceListener *listener,
-                               DeviceState *dev)
-{
-    XenIOState *state = container_of(listener, XenIOState, device_listener);
-
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
-        PCIDevice *pci_dev = PCI_DEVICE(dev);
-        XenPciDevice *xendev = g_new(XenPciDevice, 1);
-
-        xendev->pci_dev = pci_dev;
-        xendev->sbdf = PCI_BUILD_BDF(pci_dev_bus_num(pci_dev),
-                                     pci_dev->devfn);
-        QLIST_INSERT_HEAD(&state->dev_list, xendev, entry);
-
-        xen_map_pcidev(xen_domid, state->ioservid, pci_dev);
-    }
-}
-
-static void xen_device_unrealize(DeviceListener *listener,
-                                 DeviceState *dev)
-{
-    XenIOState *state = container_of(listener, XenIOState, device_listener);
-
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
-        PCIDevice *pci_dev = PCI_DEVICE(dev);
-        XenPciDevice *xendev, *next;
-
-        xen_unmap_pcidev(xen_domid, state->ioservid, pci_dev);
-
-        QLIST_FOREACH_SAFE(xendev, &state->dev_list, entry, next) {
-            if (xendev->pci_dev == pci_dev) {
-                QLIST_REMOVE(xendev, entry);
-                g_free(xendev);
-                break;
-            }
-        }
-    }
-}
-
 static void xen_sync_dirty_bitmap(XenIOState *state,
                                   hwaddr start_addr,
                                   ram_addr_t size)
@@ -716,277 +467,6 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
-static MemoryListener xen_io_listener = {
-    .name = "xen-io",
-    .region_add = xen_io_add,
-    .region_del = xen_io_del,
-    .priority = 10,
-};
-
-static DeviceListener xen_device_listener = {
-    .realize = xen_device_realize,
-    .unrealize = xen_device_unrealize,
-};
-
-/* get the ioreq packets from share mem */
-static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
-{
-    ioreq_t *req = xen_vcpu_ioreq(state->shared_page, vcpu);
-
-    if (req->state != STATE_IOREQ_READY) {
-        DPRINTF("I/O request not ready: "
-                "%x, ptr: %x, port: %"PRIx64", "
-                "data: %"PRIx64", count: %u, size: %u\n",
-                req->state, req->data_is_ptr, req->addr,
-                req->data, req->count, req->size);
-        return NULL;
-    }
-
-    xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */
-
-    req->state = STATE_IOREQ_INPROCESS;
-    return req;
-}
-
-/* use poll to get the port notification */
-/* ioreq_vec--out,the */
-/* retval--the number of ioreq packet */
-static ioreq_t *cpu_get_ioreq(XenIOState *state)
-{
-    MachineState *ms = MACHINE(qdev_get_machine());
-    unsigned int max_cpus = ms->smp.max_cpus;
-    int i;
-    evtchn_port_t port;
-
-    port = xenevtchn_pending(state->xce_handle);
-    if (port == state->bufioreq_local_port) {
-        timer_mod(state->buffered_io_timer,
-                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-        return NULL;
-    }
-
-    if (port != -1) {
-        for (i = 0; i < max_cpus; i++) {
-            if (state->ioreq_local_port[i] == port) {
-                break;
-            }
-        }
-
-        if (i == max_cpus) {
-            hw_error("Fatal error while trying to get io event!\n");
-        }
-
-        /* unmask the wanted port again */
-        xenevtchn_unmask(state->xce_handle, port);
-
-        /* get the io packet from shared memory */
-        state->send_vcpu = i;
-        return cpu_get_ioreq_from_shared_memory(state, i);
-    }
-
-    /* read error or read nothing */
-    return NULL;
-}
-
-static uint32_t do_inp(uint32_t addr, unsigned long size)
-{
-    switch (size) {
-        case 1:
-            return cpu_inb(addr);
-        case 2:
-            return cpu_inw(addr);
-        case 4:
-            return cpu_inl(addr);
-        default:
-            hw_error("inp: bad size: %04x %lx", addr, size);
-    }
-}
-
-static void do_outp(uint32_t addr,
-        unsigned long size, uint32_t val)
-{
-    switch (size) {
-        case 1:
-            return cpu_outb(addr, val);
-        case 2:
-            return cpu_outw(addr, val);
-        case 4:
-            return cpu_outl(addr, val);
-        default:
-            hw_error("outp: bad size: %04x %lx", addr, size);
-    }
-}
-
-/*
- * Helper functions which read/write an object from/to physical guest
- * memory, as part of the implementation of an ioreq.
- *
- * Equivalent to
- *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
- *                          val, req->size, 0/1)
- * except without the integer overflow problems.
- */
-static void rw_phys_req_item(hwaddr addr,
-                             ioreq_t *req, uint32_t i, void *val, int rw)
-{
-    /* Do everything unsigned so overflow just results in a truncated result
-     * and accesses to undesired parts of guest memory, which is up
-     * to the guest */
-    hwaddr offset = (hwaddr)req->size * i;
-    if (req->df) {
-        addr -= offset;
-    } else {
-        addr += offset;
-    }
-    cpu_physical_memory_rw(addr, val, req->size, rw);
-}
-
-static inline void read_phys_req_item(hwaddr addr,
-                                      ioreq_t *req, uint32_t i, void *val)
-{
-    rw_phys_req_item(addr, req, i, val, 0);
-}
-static inline void write_phys_req_item(hwaddr addr,
-                                       ioreq_t *req, uint32_t i, void *val)
-{
-    rw_phys_req_item(addr, req, i, val, 1);
-}
-
-
-static void cpu_ioreq_pio(ioreq_t *req)
-{
-    uint32_t i;
-
-    trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
-                         req->data, req->count, req->size);
-
-    if (req->size > sizeof(uint32_t)) {
-        hw_error("PIO: bad size (%u)", req->size);
-    }
-
-    if (req->dir == IOREQ_READ) {
-        if (!req->data_is_ptr) {
-            req->data = do_inp(req->addr, req->size);
-            trace_cpu_ioreq_pio_read_reg(req, req->data, req->addr,
-                                         req->size);
-        } else {
-            uint32_t tmp;
-
-            for (i = 0; i < req->count; i++) {
-                tmp = do_inp(req->addr, req->size);
-                write_phys_req_item(req->data, req, i, &tmp);
-            }
-        }
-    } else if (req->dir == IOREQ_WRITE) {
-        if (!req->data_is_ptr) {
-            trace_cpu_ioreq_pio_write_reg(req, req->data, req->addr,
-                                          req->size);
-            do_outp(req->addr, req->size, req->data);
-        } else {
-            for (i = 0; i < req->count; i++) {
-                uint32_t tmp = 0;
-
-                read_phys_req_item(req->data, req, i, &tmp);
-                do_outp(req->addr, req->size, tmp);
-            }
-        }
-    }
-}
-
-static void cpu_ioreq_move(ioreq_t *req)
-{
-    uint32_t i;
-
-    trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
-                         req->data, req->count, req->size);
-
-    if (req->size > sizeof(req->data)) {
-        hw_error("MMIO: bad size (%u)", req->size);
-    }
-
-    if (!req->data_is_ptr) {
-        if (req->dir == IOREQ_READ) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->addr, req, i, &req->data);
-            }
-        } else if (req->dir == IOREQ_WRITE) {
-            for (i = 0; i < req->count; i++) {
-                write_phys_req_item(req->addr, req, i, &req->data);
-            }
-        }
-    } else {
-        uint64_t tmp;
-
-        if (req->dir == IOREQ_READ) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->addr, req, i, &tmp);
-                write_phys_req_item(req->data, req, i, &tmp);
-            }
-        } else if (req->dir == IOREQ_WRITE) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->data, req, i, &tmp);
-                write_phys_req_item(req->addr, req, i, &tmp);
-            }
-        }
-    }
-}
-
-static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
-{
-    uint32_t sbdf = req->addr >> 32;
-    uint32_t reg = req->addr;
-    XenPciDevice *xendev;
-
-    if (req->size != sizeof(uint8_t) && req->size != sizeof(uint16_t) &&
-        req->size != sizeof(uint32_t)) {
-        hw_error("PCI config access: bad size (%u)", req->size);
-    }
-
-    if (req->count != 1) {
-        hw_error("PCI config access: bad count (%u)", req->count);
-    }
-
-    QLIST_FOREACH(xendev, &state->dev_list, entry) {
-        if (xendev->sbdf != sbdf) {
-            continue;
-        }
-
-        if (!req->data_is_ptr) {
-            if (req->dir == IOREQ_READ) {
-                req->data = pci_host_config_read_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->size);
-                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
-                                            req->size, req->data);
-            } else if (req->dir == IOREQ_WRITE) {
-                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
-                                             req->size, req->data);
-                pci_host_config_write_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->data, req->size);
-            }
-        } else {
-            uint32_t tmp;
-
-            if (req->dir == IOREQ_READ) {
-                tmp = pci_host_config_read_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->size);
-                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
-                                            req->size, tmp);
-                write_phys_req_item(req->data, req, 0, &tmp);
-            } else if (req->dir == IOREQ_WRITE) {
-                read_phys_req_item(req->data, req, 0, &tmp);
-                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
-                                             req->size, tmp);
-                pci_host_config_write_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    tmp, req->size);
-            }
-        }
-    }
-}
-
 static void regs_to_cpu(vmware_regs_t *vmport_regs, ioreq_t *req)
 {
     X86CPU *cpu;
@@ -1030,226 +510,6 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
     current_cpu = NULL;
 }
 
-static void handle_ioreq(XenIOState *state, ioreq_t *req)
-{
-    trace_handle_ioreq(req, req->type, req->dir, req->df, req->data_is_ptr,
-                       req->addr, req->data, req->count, req->size);
-
-    if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
-            (req->size < sizeof (target_ulong))) {
-        req->data &= ((target_ulong) 1 << (8 * req->size)) - 1;
-    }
-
-    if (req->dir == IOREQ_WRITE)
-        trace_handle_ioreq_write(req, req->type, req->df, req->data_is_ptr,
-                                 req->addr, req->data, req->count, req->size);
-
-    switch (req->type) {
-        case IOREQ_TYPE_PIO:
-            cpu_ioreq_pio(req);
-            break;
-        case IOREQ_TYPE_COPY:
-            cpu_ioreq_move(req);
-            break;
-        case IOREQ_TYPE_VMWARE_PORT:
-            handle_vmport_ioreq(state, req);
-            break;
-        case IOREQ_TYPE_TIMEOFFSET:
-            break;
-        case IOREQ_TYPE_INVALIDATE:
-            xen_invalidate_map_cache();
-            break;
-        case IOREQ_TYPE_PCI_CONFIG:
-            cpu_ioreq_config(state, req);
-            break;
-        default:
-            hw_error("Invalid ioreq type 0x%x\n", req->type);
-    }
-    if (req->dir == IOREQ_READ) {
-        trace_handle_ioreq_read(req, req->type, req->df, req->data_is_ptr,
-                                req->addr, req->data, req->count, req->size);
-    }
-}
-
-static bool handle_buffered_iopage(XenIOState *state)
-{
-    buffered_iopage_t *buf_page = state->buffered_io_page;
-    buf_ioreq_t *buf_req = NULL;
-    bool handled_ioreq = false;
-    ioreq_t req;
-    int qw;
-
-    if (!buf_page) {
-        return 0;
-    }
-
-    memset(&req, 0x00, sizeof(req));
-    req.state = STATE_IOREQ_READY;
-    req.count = 1;
-    req.dir = IOREQ_WRITE;
-
-    for (;;) {
-        uint32_t rdptr = buf_page->read_pointer, wrptr;
-
-        xen_rmb();
-        wrptr = buf_page->write_pointer;
-        xen_rmb();
-        if (rdptr != buf_page->read_pointer) {
-            continue;
-        }
-        if (rdptr == wrptr) {
-            break;
-        }
-        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
-        req.size = 1U << buf_req->size;
-        req.addr = buf_req->addr;
-        req.data = buf_req->data;
-        req.type = buf_req->type;
-        xen_rmb();
-        qw = (req.size == 8);
-        if (qw) {
-            if (rdptr + 1 == wrptr) {
-                hw_error("Incomplete quad word buffered ioreq");
-            }
-            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
-                                           IOREQ_BUFFER_SLOT_NUM];
-            req.data |= ((uint64_t)buf_req->data) << 32;
-            xen_rmb();
-        }
-
-        handle_ioreq(state, &req);
-
-        /* Only req.data may get updated by handle_ioreq(), albeit even that
-         * should not happen as such data would never make it to the guest (we
-         * can only usefully see writes here after all).
-         */
-        assert(req.state == STATE_IOREQ_READY);
-        assert(req.count == 1);
-        assert(req.dir == IOREQ_WRITE);
-        assert(!req.data_is_ptr);
-
-        qatomic_add(&buf_page->read_pointer, qw + 1);
-        handled_ioreq = true;
-    }
-
-    return handled_ioreq;
-}
-
-static void handle_buffered_io(void *opaque)
-{
-    XenIOState *state = opaque;
-
-    if (handle_buffered_iopage(state)) {
-        timer_mod(state->buffered_io_timer,
-                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-    } else {
-        timer_del(state->buffered_io_timer);
-        xenevtchn_unmask(state->xce_handle, state->bufioreq_local_port);
-    }
-}
-
-static void cpu_handle_ioreq(void *opaque)
-{
-    XenIOState *state = opaque;
-    ioreq_t *req = cpu_get_ioreq(state);
-
-    handle_buffered_iopage(state);
-    if (req) {
-        ioreq_t copy = *req;
-
-        xen_rmb();
-        handle_ioreq(state, &copy);
-        req->data = copy.data;
-
-        if (req->state != STATE_IOREQ_INPROCESS) {
-            fprintf(stderr, "Badness in I/O request ... not in service?!: "
-                    "%x, ptr: %x, port: %"PRIx64", "
-                    "data: %"PRIx64", count: %u, size: %u, type: %u\n",
-                    req->state, req->data_is_ptr, req->addr,
-                    req->data, req->count, req->size, req->type);
-            destroy_hvm_domain(false);
-            return;
-        }
-
-        xen_wmb(); /* Update ioreq contents /then/ update state. */
-
-        /*
-         * We do this before we send the response so that the tools
-         * have the opportunity to pick up on the reset before the
-         * guest resumes and does a hlt with interrupts disabled which
-         * causes Xen to powerdown the domain.
-         */
-        if (runstate_is_running()) {
-            ShutdownCause request;
-
-            if (qemu_shutdown_requested_get()) {
-                destroy_hvm_domain(false);
-            }
-            request = qemu_reset_requested_get();
-            if (request) {
-                qemu_system_reset(request);
-                destroy_hvm_domain(true);
-            }
-        }
-
-        req->state = STATE_IORESP_READY;
-        xenevtchn_notify(state->xce_handle,
-                         state->ioreq_local_port[state->send_vcpu]);
-    }
-}
-
-static void xen_main_loop_prepare(XenIOState *state)
-{
-    int evtchn_fd = -1;
-
-    if (state->xce_handle != NULL) {
-        evtchn_fd = xenevtchn_fd(state->xce_handle);
-    }
-
-    state->buffered_io_timer = timer_new_ms(QEMU_CLOCK_REALTIME, handle_buffered_io,
-                                                 state);
-
-    if (evtchn_fd != -1) {
-        CPUState *cpu_state;
-
-        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
-        CPU_FOREACH(cpu_state) {
-            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
-                    __func__, cpu_state->cpu_index, cpu_state);
-            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
-        }
-        qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
-    }
-}
-
-
-static void xen_hvm_change_state_handler(void *opaque, bool running,
-                                         RunState rstate)
-{
-    XenIOState *state = opaque;
-
-    if (running) {
-        xen_main_loop_prepare(state);
-    }
-
-    xen_set_ioreq_server_state(xen_domid,
-                               state->ioservid,
-                               (rstate == RUN_STATE_RUNNING));
-}
-
-static void xen_exit_notifier(Notifier *n, void *data)
-{
-    XenIOState *state = container_of(n, XenIOState, exit);
-
-    xen_destroy_ioreq_server(xen_domid, state->ioservid);
-    if (state->fres != NULL) {
-        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
-    }
-
-    xenevtchn_close(state->xce_handle);
-    xs_daemon_close(state->xenstore);
-}
-
 #ifdef XEN_COMPAT_PHYSMAP
 static void xen_read_physmap(XenIOState *state)
 {
@@ -1309,178 +569,17 @@ static void xen_wakeup_notifier(Notifier *notifier, void *data)
     xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 0);
 }
 
-static int xen_map_ioreq_server(XenIOState *state)
-{
-    void *addr = NULL;
-    xen_pfn_t ioreq_pfn;
-    xen_pfn_t bufioreq_pfn;
-    evtchn_port_t bufioreq_evtchn;
-    int rc;
-
-    /*
-     * Attempt to map using the resource API and fall back to normal
-     * foreign mapping if this is not supported.
-     */
-    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
-    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
-    state->fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
-                                         XENMEM_resource_ioreq_server,
-                                         state->ioservid, 0, 2,
-                                         &addr,
-                                         PROT_READ | PROT_WRITE, 0);
-    if (state->fres != NULL) {
-        trace_xen_map_resource_ioreq(state->ioservid, addr);
-        state->buffered_io_page = addr;
-        state->shared_page = addr + TARGET_PAGE_SIZE;
-    } else if (errno != EOPNOTSUPP) {
-        error_report("failed to map ioreq server resources: error %d handle=%p",
-                     errno, xen_xc);
-        return -1;
-    }
-
-    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
-                                   (state->shared_page == NULL) ?
-                                   &ioreq_pfn : NULL,
-                                   (state->buffered_io_page == NULL) ?
-                                   &bufioreq_pfn : NULL,
-                                   &bufioreq_evtchn);
-    if (rc < 0) {
-        error_report("failed to get ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        return rc;
-    }
-
-    if (state->shared_page == NULL) {
-        DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
-
-        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                  PROT_READ | PROT_WRITE,
-                                                  1, &ioreq_pfn, NULL);
-        if (state->shared_page == NULL) {
-            error_report("map shared IO page returned error %d handle=%p",
-                         errno, xen_xc);
-        }
-    }
-
-    if (state->buffered_io_page == NULL) {
-        DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
-
-        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                       PROT_READ | PROT_WRITE,
-                                                       1, &bufioreq_pfn,
-                                                       NULL);
-        if (state->buffered_io_page == NULL) {
-            error_report("map buffered IO page returned error %d", errno);
-            return -1;
-        }
-    }
-
-    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
-        return -1;
-    }
-
-    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
-
-    state->bufioreq_remote_port = bufioreq_evtchn;
-
-    return 0;
-}
-
 void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 {
     MachineState *ms = MACHINE(pcms);
     unsigned int max_cpus = ms->smp.max_cpus;
-    int i, rc;
+    int rc;
     xen_pfn_t ioreq_pfn;
     XenIOState *state;
 
     state = g_new0(XenIOState, 1);
 
-    state->xce_handle = xenevtchn_open(NULL, 0);
-    if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
-        goto err;
-    }
-
-    state->xenstore = xs_daemon_open();
-    if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
-        goto err;
-    }
-
-    xen_create_ioreq_server(xen_domid, &state->ioservid);
-
-    state->exit.notify = xen_exit_notifier;
-    qemu_add_exit_notifier(&state->exit);
-
-    /*
-     * Register wake-up support in QMP query-current-machine API
-     */
-    qemu_register_wakeup_support();
-
-    rc = xen_map_ioreq_server(state);
-    if (rc < 0) {
-        goto err;
-    }
-
-    /* Note: cpus is empty at this point in init */
-    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
-
-    rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
-    if (rc < 0) {
-        error_report("failed to enable ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        goto err;
-    }
-
-    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
-
-    /* FIXME: how about if we overflow the page here? */
-    for (i = 0; i < max_cpus; i++) {
-        rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                        xen_vcpu_eport(state->shared_page, i));
-        if (rc == -1) {
-            error_report("shared evtchn %d bind error %d", i, errno);
-            goto err;
-        }
-        state->ioreq_local_port[i] = rc;
-    }
-
-    rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                    state->bufioreq_remote_port);
-    if (rc == -1) {
-        error_report("buffered evtchn bind error %d", errno);
-        goto err;
-    }
-    state->bufioreq_local_port = rc;
-
-    /* Init RAM management */
-#ifdef XEN_COMPAT_PHYSMAP
-    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
-#else
-    xen_map_cache_init(NULL, state);
-#endif
-
-    qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
-
-    state->memory_listener = xen_memory_listener;
-    memory_listener_register(&state->memory_listener, &address_space_memory);
-
-    state->io_listener = xen_io_listener;
-    memory_listener_register(&state->io_listener, &address_space_io);
-
-    state->device_listener = xen_device_listener;
-    QLIST_INIT(&state->dev_list);
-    device_listener_register(&state->device_listener);
-
-    xen_bus_init();
-
-    /* Initialize backend core & drivers */
-    if (xen_be_init() != 0) {
-        error_report("xen backend core setup failed");
-        goto err;
-    }
-    xen_be_register_common();
+    xen_register_ioreq(state, max_cpus, xen_memory_listener);
 
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
@@ -1520,59 +619,11 @@ err:
     exit(1);
 }
 
-void destroy_hvm_domain(bool reboot)
-{
-    xc_interface *xc_handle;
-    int sts;
-    int rc;
-
-    unsigned int reason = reboot ? SHUTDOWN_reboot : SHUTDOWN_poweroff;
-
-    if (xen_dmod) {
-        rc = xendevicemodel_shutdown(xen_dmod, xen_domid, reason);
-        if (!rc) {
-            return;
-        }
-        if (errno != ENOTTY /* old Xen */) {
-            perror("xendevicemodel_shutdown failed");
-        }
-        /* well, try the old thing then */
-    }
-
-    xc_handle = xc_interface_open(0, 0, 0);
-    if (xc_handle == NULL) {
-        fprintf(stderr, "Cannot acquire xenctrl handle\n");
-    } else {
-        sts = xc_domain_shutdown(xc_handle, xen_domid, reason);
-        if (sts != 0) {
-            fprintf(stderr, "xc_domain_shutdown failed to issue %s, "
-                    "sts %d, %s\n", reboot ? "reboot" : "poweroff",
-                    sts, strerror(errno));
-        } else {
-            fprintf(stderr, "Issued domain %d %s\n", xen_domid,
-                    reboot ? "reboot" : "poweroff");
-        }
-        xc_interface_close(xc_handle);
-    }
-}
-
 void xen_register_framebuffer(MemoryRegion *mr)
 {
     framebuffer = mr;
 }
 
-void xen_shutdown_fatal_error(const char *fmt, ...)
-{
-    va_list ap;
-
-    va_start(ap, fmt);
-    vfprintf(stderr, fmt, ap);
-    va_end(ap);
-    fprintf(stderr, "Will destroy the domain.\n");
-    /* destroy the domain */
-    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
-}
-
 void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 {
     if (unlikely(xen_in_migration)) {
@@ -1604,3 +655,57 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
         memory_global_dirty_log_stop(GLOBAL_DIRTY_MIGRATION);
     }
 }
+
+void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
+                                bool add)
+{
+    hwaddr start_addr = section->offset_within_address_space;
+    ram_addr_t size = int128_get64(section->size);
+    bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
+    hvmmem_type_t mem_type;
+
+    if (!memory_region_is_ram(section->mr)) {
+        return;
+    }
+
+    if (log_dirty != add) {
+        return;
+    }
+
+    trace_xen_client_set_memory(start_addr, size, log_dirty);
+
+    start_addr &= TARGET_PAGE_MASK;
+    size = TARGET_PAGE_ALIGN(size);
+
+    if (add) {
+        if (!memory_region_is_rom(section->mr)) {
+            xen_add_to_physmap(state, start_addr, size,
+                               section->mr, section->offset_within_region);
+        } else {
+            mem_type = HVMMEM_ram_ro;
+            if (xen_set_mem_type(xen_domid, mem_type,
+                                 start_addr >> TARGET_PAGE_BITS,
+                                 size >> TARGET_PAGE_BITS)) {
+                DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
+                        start_addr);
+            }
+        }
+    } else {
+        if (xen_remove_from_physmap(state, start_addr, size) < 0) {
+            DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
+        }
+    }
+}
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    switch (req->type) {
+    case IOREQ_TYPE_VMWARE_PORT:
+        handle_vmport_ioreq(state, req);
+        break;
+    default:
+        hw_error("Invalid ioreq type 0x%x\n", req->type);
+    }
+
+    return;
+}
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index 19d0637c46..008e036d63 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -25,4 +25,7 @@ specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
 
 xen_ss = ss.source_set()
 
-xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
+xen_ss.add(when: 'CONFIG_XEN', if_true: files(
+  'xen-mapcache.c',
+  'xen-hvm-common.c',
+))
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index 2c8f238f42..02ca1183da 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -42,6 +42,20 @@ xs_node_vscanf(char *path, char *value) "%s %s"
 xs_node_watch(char *path) "%s"
 xs_node_unwatch(char *path) "%s"
 
+# xen-hvm-common.c
+xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
+xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "0x%"PRIx64" size 0x%lx, log_dirty %i"
+handle_ioreq(void *req, uint32_t type, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p type=%d dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+handle_ioreq_read(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p read type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+handle_ioreq_write(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p write type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+cpu_ioreq_pio(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p pio dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+cpu_ioreq_pio_read_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio read reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
+cpu_ioreq_pio_write_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio write reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
+cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p copy dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
+cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
+cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
+
 # xen-mapcache.c
 xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
 xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
new file mode 100644
index 0000000000..e748d8d423
--- /dev/null
+++ b/hw/xen/xen-hvm-common.c
@@ -0,0 +1,870 @@
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qapi/error.h"
+#include "trace.h"
+
+#include "hw/pci/pci_host.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/xen-legacy-backend.h"
+#include "hw/xen/xen-bus.h"
+#include "hw/boards.h"
+#include "hw/xen/arch_hvm.h"
+
+MemoryRegion ram_memory;
+
+void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
+                   Error **errp)
+{
+    unsigned long nr_pfn;
+    xen_pfn_t *pfn_list;
+    int i;
+
+    if (runstate_check(RUN_STATE_INMIGRATE)) {
+        /* RAM already populated in Xen */
+        fprintf(stderr, "%s: do not alloc "RAM_ADDR_FMT
+                " bytes of ram at "RAM_ADDR_FMT" when runstate is INMIGRATE\n",
+                __func__, size, ram_addr);
+        return;
+    }
+
+    if (mr == &ram_memory) {
+        return;
+    }
+
+    trace_xen_ram_alloc(ram_addr, size);
+
+    nr_pfn = size >> TARGET_PAGE_BITS;
+    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
+
+    for (i = 0; i < nr_pfn; i++) {
+        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
+    }
+
+    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
+        error_setg(errp, "xen: failed to populate ram at " RAM_ADDR_FMT,
+                   ram_addr);
+    }
+
+    g_free(pfn_list);
+}
+
+
+static void xen_set_memory(struct MemoryListener *listener,
+                           MemoryRegionSection *section,
+                           bool add)
+{
+    XenIOState *state = container_of(listener, XenIOState, memory_listener);
+
+    if (section->mr == &ram_memory) {
+        return;
+    } else {
+        if (add) {
+            xen_map_memory_section(xen_domid, state->ioservid,
+                                   section);
+        } else {
+            xen_unmap_memory_section(xen_domid, state->ioservid,
+                                     section);
+        }
+    }
+    arch_xen_set_memory(state, section, add);
+}
+
+void xen_region_add(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    memory_region_ref(section->mr);
+    xen_set_memory(listener, section, true);
+}
+
+void xen_region_del(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    xen_set_memory(listener, section, false);
+    memory_region_unref(section->mr);
+}
+
+void xen_io_add(MemoryListener *listener,
+                       MemoryRegionSection *section)
+{
+    XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops) {
+        return;
+    }
+
+    memory_region_ref(mr);
+
+    xen_map_io_section(xen_domid, state->ioservid, section);
+}
+
+void xen_io_del(MemoryListener *listener,
+                       MemoryRegionSection *section)
+{
+    XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops) {
+        return;
+    }
+
+    xen_unmap_io_section(xen_domid, state->ioservid, section);
+
+    memory_region_unref(mr);
+}
+
+void xen_device_realize(DeviceListener *listener,
+                               DeviceState *dev)
+{
+    XenIOState *state = container_of(listener, XenIOState, device_listener);
+
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
+        PCIDevice *pci_dev = PCI_DEVICE(dev);
+        XenPciDevice *xendev = g_new(XenPciDevice, 1);
+
+        xendev->pci_dev = pci_dev;
+        xendev->sbdf = PCI_BUILD_BDF(pci_dev_bus_num(pci_dev),
+                                     pci_dev->devfn);
+        QLIST_INSERT_HEAD(&state->dev_list, xendev, entry);
+
+        xen_map_pcidev(xen_domid, state->ioservid, pci_dev);
+    }
+}
+
+void xen_device_unrealize(DeviceListener *listener,
+                                 DeviceState *dev)
+{
+    XenIOState *state = container_of(listener, XenIOState, device_listener);
+
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
+        PCIDevice *pci_dev = PCI_DEVICE(dev);
+        XenPciDevice *xendev, *next;
+
+        xen_unmap_pcidev(xen_domid, state->ioservid, pci_dev);
+
+        QLIST_FOREACH_SAFE(xendev, &state->dev_list, entry, next) {
+            if (xendev->pci_dev == pci_dev) {
+                QLIST_REMOVE(xendev, entry);
+                g_free(xendev);
+                break;
+            }
+        }
+    }
+}
+
+MemoryListener xen_io_listener = {
+    .region_add = xen_io_add,
+    .region_del = xen_io_del,
+    .priority = 10,
+};
+
+DeviceListener xen_device_listener = {
+    .realize = xen_device_realize,
+    .unrealize = xen_device_unrealize,
+};
+
+/* get the ioreq packets from shared memory */
+static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
+{
+    ioreq_t *req = xen_vcpu_ioreq(state->shared_page, vcpu);
+
+    if (req->state != STATE_IOREQ_READY) {
+        DPRINTF("I/O request not ready: "
+                "%x, ptr: %x, port: %"PRIx64", "
+                "data: %"PRIx64", count: %u, size: %u\n",
+                req->state, req->data_is_ptr, req->addr,
+                req->data, req->count, req->size);
+        return NULL;
+    }
+
+    xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */
+
+    req->state = STATE_IOREQ_INPROCESS;
+    return req;
+}
+
+/* use poll to get the port notification */
+/* returns the pending ioreq packet from shared memory, */
+/* or NULL if none is ready */
+static ioreq_t *cpu_get_ioreq(XenIOState *state)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+    unsigned int max_cpus = ms->smp.max_cpus;
+    int i;
+    evtchn_port_t port;
+
+    port = xenevtchn_pending(state->xce_handle);
+    if (port == state->bufioreq_local_port) {
+        timer_mod(state->buffered_io_timer,
+                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+        return NULL;
+    }
+
+    if (port != -1) {
+        for (i = 0; i < max_cpus; i++) {
+            if (state->ioreq_local_port[i] == port) {
+                break;
+            }
+        }
+
+        if (i == max_cpus) {
+            hw_error("Fatal error while trying to get io event!\n");
+        }
+
+        /* unmask the wanted port again */
+        xenevtchn_unmask(state->xce_handle, port);
+
+        /* get the io packet from shared memory */
+        state->send_vcpu = i;
+        return cpu_get_ioreq_from_shared_memory(state, i);
+    }
+
+    /* read error or read nothing */
+    return NULL;
+}
+
+static uint32_t do_inp(uint32_t addr, unsigned long size)
+{
+    switch (size) {
+        case 1:
+            return cpu_inb(addr);
+        case 2:
+            return cpu_inw(addr);
+        case 4:
+            return cpu_inl(addr);
+        default:
+            hw_error("inp: bad size: %04x %lx", addr, size);
+    }
+}
+
+static void do_outp(uint32_t addr,
+        unsigned long size, uint32_t val)
+{
+    switch (size) {
+        case 1:
+            return cpu_outb(addr, val);
+        case 2:
+            return cpu_outw(addr, val);
+        case 4:
+            return cpu_outl(addr, val);
+        default:
+            hw_error("outp: bad size: %04x %lx", addr, size);
+    }
+}
+
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(hwaddr addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
+{
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    hwaddr offset = (hwaddr)req->size * i;
+    if (req->df) {
+        addr -= offset;
+    } else {
+        addr += offset;
+    }
+    cpu_physical_memory_rw(addr, val, req->size, rw);
+}
+
+static inline void read_phys_req_item(hwaddr addr,
+                                      ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(hwaddr addr,
+                                       ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 1);
+}
+
+
+void cpu_ioreq_pio(ioreq_t *req)
+{
+    uint32_t i;
+
+    trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
+                         req->data, req->count, req->size);
+
+    if (req->size > sizeof(uint32_t)) {
+        hw_error("PIO: bad size (%u)", req->size);
+    }
+
+    if (req->dir == IOREQ_READ) {
+        if (!req->data_is_ptr) {
+            req->data = do_inp(req->addr, req->size);
+            trace_cpu_ioreq_pio_read_reg(req, req->data, req->addr,
+                                         req->size);
+        } else {
+            uint32_t tmp;
+
+            for (i = 0; i < req->count; i++) {
+                tmp = do_inp(req->addr, req->size);
+                write_phys_req_item(req->data, req, i, &tmp);
+            }
+        }
+    } else if (req->dir == IOREQ_WRITE) {
+        if (!req->data_is_ptr) {
+            trace_cpu_ioreq_pio_write_reg(req, req->data, req->addr,
+                                          req->size);
+            do_outp(req->addr, req->size, req->data);
+        } else {
+            for (i = 0; i < req->count; i++) {
+                uint32_t tmp = 0;
+
+                read_phys_req_item(req->data, req, i, &tmp);
+                do_outp(req->addr, req->size, tmp);
+            }
+        }
+    }
+}
+
+static void cpu_ioreq_move(ioreq_t *req)
+{
+    uint32_t i;
+
+    trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
+                         req->data, req->count, req->size);
+
+    if (req->size > sizeof(req->data)) {
+        hw_error("MMIO: bad size (%u)", req->size);
+    }
+
+    if (!req->data_is_ptr) {
+        if (req->dir == IOREQ_READ) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->addr, req, i, &req->data);
+            }
+        } else if (req->dir == IOREQ_WRITE) {
+            for (i = 0; i < req->count; i++) {
+                write_phys_req_item(req->addr, req, i, &req->data);
+            }
+        }
+    } else {
+        uint64_t tmp;
+
+        if (req->dir == IOREQ_READ) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
+            }
+        } else if (req->dir == IOREQ_WRITE) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
+            }
+        }
+    }
+}
+
+static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
+{
+    uint32_t sbdf = req->addr >> 32;
+    uint32_t reg = req->addr;
+    XenPciDevice *xendev;
+
+    if (req->size != sizeof(uint8_t) && req->size != sizeof(uint16_t) &&
+        req->size != sizeof(uint32_t)) {
+        hw_error("PCI config access: bad size (%u)", req->size);
+    }
+
+    if (req->count != 1) {
+        hw_error("PCI config access: bad count (%u)", req->count);
+    }
+
+    QLIST_FOREACH(xendev, &state->dev_list, entry) {
+        if (xendev->sbdf != sbdf) {
+            continue;
+        }
+
+        if (!req->data_is_ptr) {
+            if (req->dir == IOREQ_READ) {
+                req->data = pci_host_config_read_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->size);
+                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
+                                            req->size, req->data);
+            } else if (req->dir == IOREQ_WRITE) {
+                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
+                                             req->size, req->data);
+                pci_host_config_write_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->data, req->size);
+            }
+        } else {
+            uint32_t tmp;
+
+            if (req->dir == IOREQ_READ) {
+                tmp = pci_host_config_read_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->size);
+                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
+                                            req->size, tmp);
+                write_phys_req_item(req->data, req, 0, &tmp);
+            } else if (req->dir == IOREQ_WRITE) {
+                read_phys_req_item(req->data, req, 0, &tmp);
+                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
+                                             req->size, tmp);
+                pci_host_config_write_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    tmp, req->size);
+            }
+        }
+    }
+}
+
+static void handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    trace_handle_ioreq(req, req->type, req->dir, req->df, req->data_is_ptr,
+                       req->addr, req->data, req->count, req->size);
+
+    if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
+            (req->size < sizeof (target_ulong))) {
+        req->data &= ((target_ulong) 1 << (8 * req->size)) - 1;
+    }
+
+    if (req->dir == IOREQ_WRITE)
+        trace_handle_ioreq_write(req, req->type, req->df, req->data_is_ptr,
+                                 req->addr, req->data, req->count, req->size);
+
+    switch (req->type) {
+        case IOREQ_TYPE_PIO:
+            cpu_ioreq_pio(req);
+            break;
+        case IOREQ_TYPE_COPY:
+            cpu_ioreq_move(req);
+            break;
+        case IOREQ_TYPE_TIMEOFFSET:
+            break;
+        case IOREQ_TYPE_INVALIDATE:
+            xen_invalidate_map_cache();
+            break;
+        case IOREQ_TYPE_PCI_CONFIG:
+            cpu_ioreq_config(state, req);
+            break;
+        default:
+            arch_handle_ioreq(state, req);
+    }
+    if (req->dir == IOREQ_READ) {
+        trace_handle_ioreq_read(req, req->type, req->df, req->data_is_ptr,
+                                req->addr, req->data, req->count, req->size);
+    }
+}
+
+static int handle_buffered_iopage(XenIOState *state)
+{
+    buffered_iopage_t *buf_page = state->buffered_io_page;
+    buf_ioreq_t *buf_req = NULL;
+    ioreq_t req;
+    int qw;
+
+    if (!buf_page) {
+        return 0;
+    }
+
+    memset(&req, 0x00, sizeof(req));
+    req.state = STATE_IOREQ_READY;
+    req.count = 1;
+    req.dir = IOREQ_WRITE;
+
+    for (;;) {
+        uint32_t rdptr = buf_page->read_pointer, wrptr;
+
+        xen_rmb();
+        wrptr = buf_page->write_pointer;
+        xen_rmb();
+        if (rdptr != buf_page->read_pointer) {
+            continue;
+        }
+        if (rdptr == wrptr) {
+            break;
+        }
+        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
+        req.size = 1U << buf_req->size;
+        req.addr = buf_req->addr;
+        req.data = buf_req->data;
+        req.type = buf_req->type;
+        xen_rmb();
+        qw = (req.size == 8);
+        if (qw) {
+            if (rdptr + 1 == wrptr) {
+                hw_error("Incomplete quad word buffered ioreq");
+            }
+            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
+                                           IOREQ_BUFFER_SLOT_NUM];
+            req.data |= ((uint64_t)buf_req->data) << 32;
+            xen_rmb();
+        }
+
+        handle_ioreq(state, &req);
+
+        /* Only req.data may get updated by handle_ioreq(), albeit even that
+         * should not happen as such data would never make it to the guest (we
+         * can only usefully see writes here after all).
+         */
+        assert(req.state == STATE_IOREQ_READY);
+        assert(req.count == 1);
+        assert(req.dir == IOREQ_WRITE);
+        assert(!req.data_is_ptr);
+
+        qatomic_add(&buf_page->read_pointer, qw + 1);
+    }
+
+    return req.count;
+}
+
+static void handle_buffered_io(void *opaque)
+{
+    XenIOState *state = opaque;
+
+    if (handle_buffered_iopage(state)) {
+        timer_mod(state->buffered_io_timer,
+                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+    } else {
+        timer_del(state->buffered_io_timer);
+        xenevtchn_unmask(state->xce_handle, state->bufioreq_local_port);
+    }
+}
+
+static void cpu_handle_ioreq(void *opaque)
+{
+    XenIOState *state = opaque;
+    ioreq_t *req = cpu_get_ioreq(state);
+
+    handle_buffered_iopage(state);
+    if (req) {
+        ioreq_t copy = *req;
+
+        xen_rmb();
+        handle_ioreq(state, &copy);
+        req->data = copy.data;
+
+        if (req->state != STATE_IOREQ_INPROCESS) {
+            fprintf(stderr, "Badness in I/O request ... not in service?!: "
+                    "%x, ptr: %x, port: %"PRIx64", "
+                    "data: %"PRIx64", count: %u, size: %u, type: %u\n",
+                    req->state, req->data_is_ptr, req->addr,
+                    req->data, req->count, req->size, req->type);
+            destroy_hvm_domain(false);
+            return;
+        }
+
+        xen_wmb(); /* Update ioreq contents /then/ update state. */
+
+        /*
+         * We do this before we send the response so that the tools
+         * have the opportunity to pick up on the reset before the
+         * guest resumes and does a hlt with interrupts disabled which
+         * causes Xen to powerdown the domain.
+         */
+        if (runstate_is_running()) {
+            ShutdownCause request;
+
+            if (qemu_shutdown_requested_get()) {
+                destroy_hvm_domain(false);
+            }
+            request = qemu_reset_requested_get();
+            if (request) {
+                qemu_system_reset(request);
+                destroy_hvm_domain(true);
+            }
+        }
+
+        req->state = STATE_IORESP_READY;
+        xenevtchn_notify(state->xce_handle,
+                         state->ioreq_local_port[state->send_vcpu]);
+    }
+}
+
+static void xen_main_loop_prepare(XenIOState *state)
+{
+    int evtchn_fd = -1;
+
+    if (state->xce_handle != NULL) {
+        evtchn_fd = xenevtchn_fd(state->xce_handle);
+    }
+
+    state->buffered_io_timer = timer_new_ms(QEMU_CLOCK_REALTIME, handle_buffered_io,
+                                                 state);
+
+    if (evtchn_fd != -1) {
+        CPUState *cpu_state;
+
+        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
+        CPU_FOREACH(cpu_state) {
+            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
+                    __func__, cpu_state->cpu_index, cpu_state);
+            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
+        }
+        qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
+    }
+}
+
+
+void xen_hvm_change_state_handler(void *opaque, bool running,
+                                         RunState rstate)
+{
+    XenIOState *state = opaque;
+
+    if (running) {
+        xen_main_loop_prepare(state);
+    }
+
+    xen_set_ioreq_server_state(xen_domid,
+                               state->ioservid,
+                               (rstate == RUN_STATE_RUNNING));
+}
+
+void xen_exit_notifier(Notifier *n, void *data)
+{
+    XenIOState *state = container_of(n, XenIOState, exit);
+
+    xen_destroy_ioreq_server(xen_domid, state->ioservid);
+
+    xenevtchn_close(state->xce_handle);
+    xs_daemon_close(state->xenstore);
+}
+
+static int xen_map_ioreq_server(XenIOState *state)
+{
+    void *addr = NULL;
+    xenforeignmemory_resource_handle *fres;
+    xen_pfn_t ioreq_pfn;
+    xen_pfn_t bufioreq_pfn;
+    evtchn_port_t bufioreq_evtchn;
+    int rc;
+
+    /*
+     * Attempt to map using the resource API and fall back to normal
+     * foreign mapping if this is not supported.
+     */
+    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
+    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
+    fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
+                                         XENMEM_resource_ioreq_server,
+                                         state->ioservid, 0, 2,
+                                         &addr,
+                                         PROT_READ | PROT_WRITE, 0);
+    if (fres != NULL) {
+        trace_xen_map_resource_ioreq(state->ioservid, addr);
+        state->buffered_io_page = addr;
+        state->shared_page = addr + XC_PAGE_SIZE;
+    } else if (errno != EOPNOTSUPP) {
+        error_report("failed to map ioreq server resources: error %d handle=%p",
+                     errno, xen_xc);
+        return -1;
+    }
+
+    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
+                                   (state->shared_page == NULL) ?
+                                   &ioreq_pfn : NULL,
+                                   (state->buffered_io_page == NULL) ?
+                                   &bufioreq_pfn : NULL,
+                                   &bufioreq_evtchn);
+    if (rc < 0) {
+        error_report("failed to get ioreq server info: error %d handle=%p",
+                     errno, xen_xc);
+        return rc;
+    }
+
+    if (state->shared_page == NULL) {
+        DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
+
+        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                  PROT_READ | PROT_WRITE,
+                                                  1, &ioreq_pfn, NULL);
+        if (state->shared_page == NULL) {
+            error_report("map shared IO page returned error %d handle=%p",
+                         errno, xen_xc);
+        }
+    }
+
+    if (state->buffered_io_page == NULL) {
+        DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
+
+        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                       PROT_READ | PROT_WRITE,
+                                                       1, &bufioreq_pfn,
+                                                       NULL);
+        if (state->buffered_io_page == NULL) {
+            error_report("map buffered IO page returned error %d", errno);
+            return -1;
+        }
+    }
+
+    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
+        return -1;
+    }
+
+    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
+
+    state->bufioreq_remote_port = bufioreq_evtchn;
+
+    return 0;
+}
+
+void destroy_hvm_domain(bool reboot)
+{
+    xc_interface *xc_handle;
+    int sts;
+    int rc;
+
+    unsigned int reason = reboot ? SHUTDOWN_reboot : SHUTDOWN_poweroff;
+
+    if (xen_dmod) {
+        rc = xendevicemodel_shutdown(xen_dmod, xen_domid, reason);
+        if (!rc) {
+            return;
+        }
+        if (errno != ENOTTY /* old Xen */) {
+            perror("xendevicemodel_shutdown failed");
+        }
+        /* well, try the old thing then */
+    }
+
+    xc_handle = xc_interface_open(0, 0, 0);
+    if (xc_handle == NULL) {
+        fprintf(stderr, "Cannot acquire xenctrl handle\n");
+    } else {
+        sts = xc_domain_shutdown(xc_handle, xen_domid, reason);
+        if (sts != 0) {
+            fprintf(stderr, "xc_domain_shutdown failed to issue %s, "
+                    "sts %d, %s\n", reboot ? "reboot" : "poweroff",
+                    sts, strerror(errno));
+        } else {
+            fprintf(stderr, "Issued domain %d %s\n", xen_domid,
+                    reboot ? "reboot" : "poweroff");
+        }
+        xc_interface_close(xc_handle);
+    }
+}
+
+void xen_shutdown_fatal_error(const char *fmt, ...)
+{
+    va_list ap;
+
+    va_start(ap, fmt);
+    vfprintf(stderr, fmt, ap);
+    va_end(ap);
+    fprintf(stderr, "Will destroy the domain.\n");
+    /* destroy the domain */
+    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
+}
+
+static void xen_register_backend(XenIOState *state)
+{
+    /* Initialize backend core & drivers */
+    if (xen_be_init() != 0) {
+        error_report("xen backend core setup failed");
+        goto err;
+    }
+
+    xen_be_register_common();
+
+    return;
+
+err:
+    error_report("xen hardware virtual machine backend registration failed");
+    exit(1);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener)
+{
+    int i, rc;
+
+    state->xce_handle = xenevtchn_open(NULL, 0);
+    if (state->xce_handle == NULL) {
+        perror("xen: event channel open");
+        goto err;
+    }
+
+    state->xenstore = xs_daemon_open();
+    if (state->xenstore == NULL) {
+        perror("xen: xenstore open");
+        goto err;
+    }
+
+    xen_create_ioreq_server(xen_domid, &state->ioservid);
+
+    state->exit.notify = xen_exit_notifier;
+    qemu_add_exit_notifier(&state->exit);
+
+    /*
+     * Register wake-up support in QMP query-current-machine API
+     */
+    qemu_register_wakeup_support();
+
+    rc = xen_map_ioreq_server(state);
+    if (rc < 0) {
+        goto err;
+    }
+
+    /* Note: cpus is empty at this point in init */
+    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+
+    rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
+    if (rc < 0) {
+        error_report("failed to enable ioreq server info: error %d handle=%p",
+                     errno, xen_xc);
+        goto err;
+    }
+
+    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
+
+    /* FIXME: how about if we overflow the page here? */
+    for (i = 0; i < max_cpus; i++) {
+        rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                        xen_vcpu_eport(state->shared_page, i));
+        if (rc == -1) {
+            error_report("shared evtchn %d bind error %d", i, errno);
+            goto err;
+        }
+        state->ioreq_local_port[i] = rc;
+    }
+
+    rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                    state->bufioreq_remote_port);
+    if (rc == -1) {
+        error_report("buffered evtchn bind error %d", errno);
+        goto err;
+    }
+    state->bufioreq_local_port = rc;
+
+    /* Init RAM management */
+#ifdef XEN_COMPAT_PHYSMAP
+    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
+#else
+    xen_map_cache_init(NULL, state);
+#endif
+
+    qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
+
+    state->memory_listener = xen_memory_listener;
+    memory_listener_register(&state->memory_listener, &address_space_memory);
+
+    state->io_listener = xen_io_listener;
+    memory_listener_register(&state->io_listener, &address_space_io);
+
+    state->device_listener = xen_device_listener;
+    QLIST_INIT(&state->dev_list);
+    device_listener_register(&state->device_listener);
+
+    xen_bus_init();
+
+    xen_register_backend(state);
+
+    return;
+err:
+    error_report("xen hardware virtual machine initialisation failed");
+    exit(1);
+}
diff --git a/include/hw/i386/xen_arch_hvm.h b/include/hw/i386/xen_arch_hvm.h
new file mode 100644
index 0000000000..1000f8f543
--- /dev/null
+++ b/include/hw/i386/xen_arch_hvm.h
@@ -0,0 +1,11 @@
+#ifndef HW_XEN_ARCH_I386_HVM_H
+#define HW_XEN_ARCH_I386_HVM_H
+
+#include <xen/hvm/ioreq.h>
+#include "hw/xen/xen-hvm-common.h"
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
+void arch_xen_set_memory(XenIOState *state,
+                         MemoryRegionSection *section,
+                         bool add);
+#endif
diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
new file mode 100644
index 0000000000..26674648d8
--- /dev/null
+++ b/include/hw/xen/arch_hvm.h
@@ -0,0 +1,3 @@
+#if defined(TARGET_I386) || defined(TARGET_X86_64)
+#include "hw/i386/xen_arch_hvm.h"
+#endif
diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
new file mode 100644
index 0000000000..c16057835f
--- /dev/null
+++ b/include/hw/xen/xen-hvm-common.h
@@ -0,0 +1,97 @@
+#ifndef HW_XEN_HVM_COMMON_H
+#define HW_XEN_HVM_COMMON_H
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+
+#include "cpu.h"
+#include "hw/pci/pci.h"
+#include "hw/hw.h"
+#include "hw/xen/xen_common.h"
+#include "sysemu/runstate.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
+#include "sysemu/xen-mapcache.h"
+
+#include <xen/hvm/ioreq.h>
+
+extern MemoryRegion ram_memory;
+extern MemoryListener xen_io_listener;
+extern DeviceListener xen_device_listener;
+
+//#define DEBUG_XEN_HVM
+
+#ifdef DEBUG_XEN_HVM
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
+{
+    return shared_page->vcpu_ioreq[i].vp_eport;
+}
+static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
+{
+    return &shared_page->vcpu_ioreq[vcpu];
+}
+
+#define BUFFER_IO_MAX_DELAY  100
+
+typedef struct XenPhysmap {
+    hwaddr start_addr;
+    ram_addr_t size;
+    const char *name;
+    hwaddr phys_offset;
+
+    QLIST_ENTRY(XenPhysmap) list;
+} XenPhysmap;
+
+typedef struct XenPciDevice {
+    PCIDevice *pci_dev;
+    uint32_t sbdf;
+    QLIST_ENTRY(XenPciDevice) entry;
+} XenPciDevice;
+
+typedef struct XenIOState {
+    ioservid_t ioservid;
+    shared_iopage_t *shared_page;
+    buffered_iopage_t *buffered_io_page;
+    QEMUTimer *buffered_io_timer;
+    CPUState **cpu_by_vcpu_id;
+    /* the evtchn port for polling the notification */
+    evtchn_port_t *ioreq_local_port;
+    /* evtchn remote and local ports for buffered io */
+    evtchn_port_t bufioreq_remote_port;
+    evtchn_port_t bufioreq_local_port;
+    /* the evtchn fd for polling */
+    xenevtchn_handle *xce_handle;
+    /* which vcpu we are serving */
+    int send_vcpu;
+
+    struct xs_handle *xenstore;
+    MemoryListener memory_listener;
+    MemoryListener io_listener;
+    QLIST_HEAD(, XenPciDevice) dev_list;
+    DeviceListener device_listener;
+
+    Notifier exit;
+} XenIOState;
+
+void xen_exit_notifier(Notifier *n, void *data);
+
+void xen_region_add(MemoryListener *listener, MemoryRegionSection *section);
+void xen_region_del(MemoryListener *listener, MemoryRegionSection *section);
+void xen_io_add(MemoryListener *listener, MemoryRegionSection *section);
+void xen_io_del(MemoryListener *listener, MemoryRegionSection *section);
+void xen_device_realize(DeviceListener *listener, DeviceState *dev);
+void xen_device_unrealize(DeviceListener *listener, DeviceState *dev);
+
+void xen_hvm_change_state_handler(void *opaque, bool running, RunState rstate);
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener);
+
+void cpu_ioreq_pio(ioreq_t *req);
+#endif /* HW_XEN_HVM_COMMON_H */
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c193687c-9c8c-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Cfg9uFbvX1/0e14oGpq+1K2b9EWGcTTpXqCO+WKfrAV0uvy/DOcdCq9KCT7LngqSGtSZEDa0PzpSN94IVhoRQ8CM6OsRdkY9xYo7fRS5nzJouRVbX9eLLCs3KUsNxtFrjd16IQbkbku9r53OYcBqTLMJQIkI/YrHXlIpGJyQOJXCjwGbw1VdkCalKTwXQNLRVXm30cYfob3MnDHFpBGYEkbY5rBwEqdlFPm82phkxr6AYCdXK3wdP7HDdNddVJxGlFF1rADDiCigXWeEZbVUNYyC4Z1gBWqjExV/R6L36nkG6trRkNb7olQHDXWvE8v58cviU1k5QE5In6B6myzXNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5c1IzY0YQxuTPTSwy7UvS0mMbpPve8RkE4gupE88sNQ=;
 b=MvnEVCvleSxLhbqfIp7SB78q35wMCpcm0KlSSe4zP4KvG6A2bsGVEiEi9EfRI+kkgw1FM00DWfiVZ2PleejTg+MFWXjxDbx0h2HXsGmhHPhn+ZZpFdriU/7ziVN0iNt96wVPC/+JHON0lxdVZVlTBZj7PwZ5Gy4NHQAbgfbFQkcjUdppwpB72ApTtNnqQYWHE6wbkP9/xMdyVf598u3+hMRmz7mq44WtdMRIsfmLerfEGySw2+SD5F7YM3iHt8DlYpaHSDNGdoIYWVKCoJfcLzn3c935y1MAnsaP7Ik1jSnF2K90UsxDgd6qD/loPE3enUvG3kgN6cr2FztHL/pFwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5c1IzY0YQxuTPTSwy7UvS0mMbpPve8RkE4gupE88sNQ=;
 b=DTkjj+xqDyjsf3389KNNx6xqUwuT5FttfA8WRl8QKr+zswooVD+S4P87UlDmWECG4JOkqJbeHXlOC6Vug1Fm0i5Hi4NmcT7eCmWtt7uIS6/hmhaHwolyNzv6K0LNdlDvm8wlm6ZtZim9Hp1h9pzFdl82FKM7p8GAEU3bm006WVg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Paolo Bonzini
	<pbonzini@redhat.com>, =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?=
	<marcandre.lureau@redhat.com>, =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?=
	<berrange@redhat.com>, Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [QEMU][PATCH v3 10/10] meson.build: enable xenpv machine build for ARM
Date: Wed, 25 Jan 2023 00:43:56 -0800
Message-ID: <20230125084356.6684-12-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT005:EE_|MN2PR12MB4174:EE_
X-MS-Office365-Filtering-Correlation-Id: 3abb656f-ac22-4400-7d39-08dafeb07351
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nuaeDtjyRP8hHVUv/smR+2PIkN4yrogeGguf8oOIcfyKBgjr498jP9iqETLQ6XCYj/WXgixBguhxkxOMsFxZaUlS9nBBUBzI56+o4hW21wToyVnLuKUAuTfJlgtV83qNjIWwkdn/jzeourv0rclO0tVHrIQPoRWtY+r5DvJ4p6tLn+W5uJibN8xeb/vS/ft30u9QeSnCBFqNsrsDiBxGRV4cz3P0tVhi/kibpqVU7PzKmhEkYfbGKAfnCc3IJ01WsVSk0gdATn+eHUrxgaOwsXVjMTqOVvX0xPdH6ViVqL/LNTdYyH1+wqYML9NCIWpOeh7sU2nFW0T9xdKA91+9kpkqrKQGlaiqSmdCuynV0T43axoKSJFP6BkYwpfFZabeFN8+EZAl76d3+/ze4ZK7KpFpwcAQIEPUsjPHmNwPC7veYeidQ8Ei9gBhiTc1jG1i3to1CbgmwXTqmqySbnG9mdpSmsZqUE/PcbCqV4M3NW85p49WQjJpLL+/LAYL/isjNtXsKmF54lCVK6gsSXu7Te+5/I8cwfTPen2upytfVOBH+3DHr4P0stUiXvsuB5et+RRVE0NrsEBu3lJPSuBCr25U/bJQ87/nzReOGqJvEb3HOomzbJsSccqU0jWVh26oycx0Czh52tbohET7fMryOJLSlxvK3IuWzOP/OHGW8cSzl4NWWlA9yRhRJyiTpvNEvkFN0Fts9OrsrNtidRfwnvbM50rAzHuWNC5+t3jJmGI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199018)(40470700004)(36840700001)(46966006)(83380400001)(36860700001)(426003)(336012)(54906003)(47076005)(6916009)(2616005)(478600001)(66574015)(8676002)(186003)(36756003)(1076003)(70206006)(70586007)(356005)(4744005)(2906002)(86362001)(82740400003)(41300700001)(26005)(81166007)(5660300002)(316002)(40480700001)(40460700003)(6666004)(44832011)(4326008)(82310400005)(8936002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:45:02.7984
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3abb656f-ac22-4400-7d39-08dafeb07351
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT005.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4174

Add aarch64-softmmu to the CONFIG_XEN accelerator targets so the xenpv machine can also be built for ARM targets.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meson.build b/meson.build
index 693802adb2..13c4ad1017 100644
--- a/meson.build
+++ b/meson.build
@@ -135,7 +135,7 @@ endif
 if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
   # i386 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {
-    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
+    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu', 'aarch64-softmmu'],
   }
 endif
 if cpu in ['x86', 'x86_64']
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:46:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:46:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483908.750400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQQ-0004Cj-13; Wed, 25 Jan 2023 08:46:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483908.750400; Wed, 25 Jan 2023 08:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbQP-0004CT-SV; Wed, 25 Jan 2023 08:46:33 +0000
Received: by outflank-mailman (input) for mailman id 483908;
 Wed, 25 Jan 2023 08:46:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbQO-0000gj-Hd
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:46:32 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a98de2fc-9c8c-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:45:47 +0100 (CET)
Received: from DM5PR07CA0060.namprd07.prod.outlook.com (2603:10b6:4:ad::25) by
 DM6PR12MB4236.namprd12.prod.outlook.com (2603:10b6:5:212::14) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.33; Wed, 25 Jan 2023 08:44:58 +0000
Received: from DM6NAM11FT086.eop-nam11.prod.protection.outlook.com
 (2603:10b6:4:ad:cafe::75) by DM5PR07CA0060.outlook.office365.com
 (2603:10b6:4:ad::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21 via Frontend
 Transport; Wed, 25 Jan 2023 08:44:58 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT086.mail.protection.outlook.com (10.13.173.75) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:44:58 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:44:58 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:44:57 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a98de2fc-9c8c-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MJnZEXqmcNQyqzsamWn/Tz+f9f3+M7rbDNVOmONVC0ymBvg8hFRRyLKhqn8id4CmxS/DogQDcoqNRx1Zst0yMrI1xqrTO6kQd4d8xEablEjkardOELJ1mWIS7XsD0mx9CJl8/PyJD3x41U8VxKjVDAr8QN6xfWz2C06S5JFFgmnMhf3oXGY6bMXoFtZiIwYdw+omwGy8SL2oMsV1nGASIOvNRaMnEtk2TNNbI9Kw13nZFz3G08dklfbVXpxWX4R6pgHqsqadjbOJS2CtQVp2igulO7R1gDQJAqDWd+GixvkDglTcXrqEMHgT8wx/so+s/OD++zHxIcL0AKXePGln6w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mqy76BJj/3k0CZFIEfqRDFKvHIQ6y9gkszdQF5vGxqY=;
 b=EnIHaXP+GOrQkDB6A0iU0C5OVGBGxPLtGaw8aZxwbgfWdZBnhm3XfSlAJUPJIlimAiCEqdWs8xMwGSm/JEDZn3IEc+ge2frJdYQlc/5OZIRp4CWFVxDf74o8apKQbvCuvEwnrnm1BKgWC92MShtevPZakpfWkFztgIKtnanU1mu2NIS8ckoDY86rOvdM9g4kAirGWgCgeEuZmNC0O+GY3Jj1yeWIZk1KVpwOpb13gWbl2Md1vYZI08ZdHJJmOSNqbUQSd97vaCD3xBEB+OfZPbRIFBI9YoxDoYBeq20/NytGg+4HB9nND0KyTXOOf8kMtSse2iFBQchCrU429HXEWA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mqy76BJj/3k0CZFIEfqRDFKvHIQ6y9gkszdQF5vGxqY=;
 b=3YI2BHCAmWP+aeECoe5Xnvfgdh2ySj746NHyooaoOkY7yQdpyAnsZkQkmrx5DFu0NBCUVqGzmb6yTgoa13Ms5HBEi9rh2T3hGNxJb0PahemNuxhWkGpsTo1jTYn+wZVnVtv7XW3aD1EYZSg6hBdztY5PRvmNh2DP2k2ZVztwD5c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v3 07/10] hw/xen/xen-hvm-common: Use g_new and error_setg_errno
Date: Wed, 25 Jan 2023 00:43:53 -0800
Message-ID: <20230125084356.6684-9-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125084356.6684-1-vikram.garhwal@amd.com>
References: <20230125084356.6684-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT086:EE_|DM6PR12MB4236:EE_
X-MS-Office365-Filtering-Correlation-Id: 41276d37-3ce8-4d3a-3e8d-08dafeb070d8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2NAWaaAs3EQ6yHLucfvolSmlAzpVexLqF7NEL8H8rb1wWOUlLIaa1prnR9BZidoAfkoW/Kms68HuFMIjoZZhh+nsjOGgBX2+SNMn+j/EhiG+AgZqiRz/JHqrXc1qQqWFmZtatF5Zy94J1I2SFF7cA/5jEC/c3YdS668/bduP0mu3/V6LOunqX6g7fN3MEHIBfX1VThcWGsSTkVmKA7nUhlXab75SiYTUQBOVBCL3kEWXJbkR6jNB9DdCEOEcYsKYXPSYiLseDl9FQwR3WQC3CiLrIhh1x7y09oi15ZX3EZV2Z60U6JC/1wXbTO2IN0hVZvJgCaJOdPP4+xpd6OU4gNMQ8OTpnIlVESNdvRPJb7k58gnX/M7gIMAlchJahcgs7nVvHp+RKvVFd4ELFldW5NJoN3VBJbSRNYu2TC9HscTwIT2gYUwT0ZHWprjOGRwQ4jl3wDTZNVpLBfnWGdZRuksHCqHYnBvJcqrOPT95I/BO8pcOhN44l9MVB6x5HLAEG/Na8Qp+IMW1Stwwns1L4gHIrV/dUuimweEK7xCWH31D/wPbEkNJKXdmXP9uRCq8QbEbL40q0IoIGvF50FHN1BvuzWZbNEFVw9/Fa6sDJ7wuqQi8xX/bkLQFlf+JG6Ju8kGAH4PTi8RUa1FEovtDGZycf3DfaWPCuKeL6SoP0Zh1jAWgPrThCOi3RBu5jmi5cWxVCpM6ukYIsxFLfqoJZ38SSdlftKghscl11Nv1CKs=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(396003)(39860400002)(136003)(346002)(451199018)(40470700004)(36840700001)(46966006)(4326008)(316002)(86362001)(70206006)(36860700001)(6916009)(36756003)(1076003)(44832011)(5660300002)(426003)(2906002)(82740400003)(82310400005)(41300700001)(40460700003)(47076005)(8936002)(2616005)(6666004)(186003)(40480700001)(26005)(8676002)(54906003)(70586007)(478600001)(336012)(83380400001)(356005)(81166007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:44:58.6484
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 41276d37-3ce8-4d3a-3e8d-08dafeb070d8
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT086.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4236

Replace g_malloc with the type-checked g_new, and perror with error_report (which includes the errno value in the message) so failures go through QEMU's error-reporting infrastructure.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 hw/xen/xen-hvm-common.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index 94dbbe97ed..01c8ec1956 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -34,7 +34,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
     trace_xen_ram_alloc(ram_addr, size);
 
     nr_pfn = size >> TARGET_PAGE_BITS;
-    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
+    pfn_list = g_new(xen_pfn_t, nr_pfn);
 
     for (i = 0; i < nr_pfn; i++) {
         pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
@@ -726,7 +726,7 @@ void destroy_hvm_domain(bool reboot)
             return;
         }
         if (errno != ENOTTY /* old Xen */) {
-            perror("xendevicemodel_shutdown failed");
+            error_report("xendevicemodel_shutdown failed with error %d", errno);
         }
         /* well, try the old thing then */
     }
@@ -797,7 +797,7 @@ static void xen_do_ioreq_register(XenIOState *state,
     }
 
     /* Note: cpus is empty at this point in init */
-    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
 
     rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
     if (rc < 0) {
@@ -806,7 +806,7 @@ static void xen_do_ioreq_register(XenIOState *state,
         goto err;
     }
 
-    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
+    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
 
     /* FIXME: how about if we overflow the page here? */
     for (i = 0; i < max_cpus; i++) {
@@ -860,13 +860,13 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
 
     state->xce_handle = xenevtchn_open(NULL, 0);
     if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
+        error_report("xen: event channel open failed with error %d", errno);
         goto err;
     }
 
     state->xenstore = xs_daemon_open();
     if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
+        error_report("xen: xenstore open failed with error %d", errno);
         goto err;
     }
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:56:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483959.750409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZS-0000Cq-Vl; Wed, 25 Jan 2023 08:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483959.750409; Wed, 25 Jan 2023 08:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZS-0000Cj-T1; Wed, 25 Jan 2023 08:55:54 +0000
Received: by outflank-mailman (input) for mailman id 483959;
 Wed, 25 Jan 2023 08:55:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbZR-0000CY-K6
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:55:53 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20625.outbound.protection.outlook.com
 [2a01:111:f400:fe59::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5f3f3c7-9c8d-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:55:08 +0100 (CET)
Received: from DM6PR03CA0016.namprd03.prod.outlook.com (2603:10b6:5:40::29) by
 IA1PR12MB6329.namprd12.prod.outlook.com (2603:10b6:208:3e5::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:54:25 +0000
Received: from DM6NAM11FT078.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:40:cafe::ef) by DM6PR03CA0016.outlook.office365.com
 (2603:10b6:5:40::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:25 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT078.mail.protection.outlook.com (10.13.173.183) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Wed, 25 Jan 2023 08:54:24 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:23 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:22 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5f3f3c7-9c8d-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AgV50fT1m/j3iQ4CYF+16+F+pUcFGY/i24169WJ2xmreMYrggyhyPYqaUWyngZsKgOAev912mR59bonx5Tq+ISuMuKDFVYJ59aULzyHLMhcTK2nYhyAIJvQwgsBRmBByy46nlx4jlBwIwM9M3sQN/1cmzWPlwBsjwICuJP8hfbbdMQSp9IITjP3e8JODJApdsC2tQ3aw/eShzyOXAy0BlHEvfEfzkJpAQlE76f1xwkC6O2HSmtXugDIYeWROal7/Nxgetcjx9RuV2WSmw8p9C2yFjn1OxT6v5yAniEflkR7SxYs0jskikxXgNHNMqQQzt0ha5SDKD7NxoIvJwlWi+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z7qBBSFtT16dKo24xhhVW2fFUFVUWqNupyxQmZ/Rxf8=;
 b=nObe/ObNQ0St78OeVXgMcWXShbX7KsAG+V5dLpyNcuIXrVHOFewLGGIQ0oG3VZS0NB4jTbyxDH8wf/GVy7Dz6YD7UEhO3OpO6se4RYjHGYXqxec4DJ8u38HtMqcIeS/IyrKxO6mkdFc3hH3IjmS1U//t0rYamzWntaNHE1x5AnjkJ99nq5wGzamZzFxBqLUuQ8+hCBjiqh1kD/EIgtRLNOyvlgif1hzxntMXidchg08FQdm+CdRAi3VfP5Z0zw6B6z2v1OIN1rPu1EODgi/1q2J/Xhac3TNGlwmY0BZqDoCWPFyDDV5n1tgd/wUXva0l9sWAZO6MHUmRDB8DbgZj2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z7qBBSFtT16dKo24xhhVW2fFUFVUWqNupyxQmZ/Rxf8=;
 b=izip4ENKNevPyyqu5gkt87AXgE0Jj5ZazEJTypsMcDLCs3ffrWOCLK3zxK8kqKi6enZj9yfTWKwKLjBWgPK7ihuLqTtsDjJM65ZW8wtpw0Y8tGxLbDfNZIw/EJYJP6ZxSY4LR/hmaTdazLzsVi3gwOg/85FYguorojVJCL0PgF8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, "Michael S. Tsirkin"
	<mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, "Paolo
 Bonzini" <pbonzini@redhat.com>, Richard Henderson
	<richard.henderson@linaro.org>, Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard
	<anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: [QEMU][PATCH v4 01/10] hw/i386/xen/: move xen-mapcache.c to hw/xen/
Date: Wed, 25 Jan 2023 00:53:58 -0800
Message-ID: <20230125085407.7144-2-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT078:EE_|IA1PR12MB6329:EE_
X-MS-Office365-Filtering-Correlation-Id: e095d815-3abf-4042-6778-08dafeb1c220
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WOpLBQnMfwGAg0Y1y5ZtTt0aUgrDIh7Fe1/5bETPnHOPRHlbTwljkAEdFUejfYmBLYz01w2VifAqXIgyoRmssiQRJH1EwyZkAZ3EwzHKeiygI3ojVPpitDP5E4H4wjUNQBtqxX0Tam7JTb22U6VbYB+UuZ+s393BHiq4X0NR4eL7R1FsT+DWtYEsEonvFN1bUMLwGP/cklt1GhptjMsT7CfbP+axFRBelFmQnBis/JFxAdeJ1KZITYnMqHl0Rsp1OADyzCztC0VHdt1QDySLH9jA4Bu+qyKTgdFP6t+kQkoLo3Ko/va76CjnuEa8zSLtsfO45sXFk9rESlWDhDWErZc7xMiIyJXP5t4JJqGf1azTYwQSHTmVLFFDMgDjXIZA+40GUj6PIhJrm0ngLonV/g2lLVE307TVsYZc76laZb7e2i44C4ob94+eMvWXVCW/wWHkOZIfXgzTq3tbvyYOJhtLM6qEMSNyjv9t4ofd25yyMf1RTTBVh1NKJYa5+5Y913HjBYKils1HBNyg0g2/miKiwz5Qb/bB/4k5wu2FAqmsHVJ3vYBkUKxPiuR8DBP5grhFcEvA6POnPobSNsU3z8jRKb+bSVTplvo7W+/BxiA8Kdz2Cv5rulmrais64saZfh07rzT+1X7tZQvc5j/tUI0fiUwE20l5FShWZF7TjqN3YOjZ2S8UJcKWBIzi4bkgZ4o8BqqRbI0Mwso5EjbRt3HqUzTotHEsrZnJLDREtQU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(136003)(376002)(346002)(396003)(451199018)(36840700001)(46966006)(40470700004)(36860700001)(40460700003)(36756003)(6666004)(2616005)(2906002)(54906003)(26005)(1076003)(336012)(186003)(41300700001)(7416002)(81166007)(4326008)(70206006)(8936002)(44832011)(70586007)(6916009)(8676002)(316002)(86362001)(82740400003)(82310400005)(356005)(478600001)(40480700001)(426003)(5660300002)(83380400001)(47076005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:24.5135
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e095d815-3abf-4042-6778-08dafeb1c220
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT078.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6329

xen-mapcache.c contains common functions that can be reused when enabling Xen on
aarch64 with IOREQ handling. Move it from hw/i386/xen/ to hw/xen/ so it is
accessible to both aarch64 and x86 builds.
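On the Meson side, the move works through source sets: a source set collects files guarded by config keys, and add_all() merges one set into another, so a file defined once under hw/xen/ can be pulled into any per-architecture build. A hedged sketch of the pattern (the names demo_ss and arch_ss are illustrative, not from the tree):

```meson
# In the shared subdirectory's meson.build: create a source set and
# add the file so it is only compiled when CONFIG_XEN is enabled.
demo_ss = ss.source_set()
demo_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))

# In a per-architecture meson.build: merge the whole set in, so both
# x86 and (later) aarch64 targets pick up the same source file.
arch_ss.add_all(demo_ss)
```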

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 hw/i386/meson.build              | 1 +
 hw/i386/xen/meson.build          | 1 -
 hw/i386/xen/trace-events         | 5 -----
 hw/xen/meson.build               | 4 ++++
 hw/xen/trace-events              | 5 +++++
 hw/{i386 => }/xen/xen-mapcache.c | 0
 6 files changed, 10 insertions(+), 6 deletions(-)
 rename hw/{i386 => }/xen/xen-mapcache.c (100%)

diff --git a/hw/i386/meson.build b/hw/i386/meson.build
index 213e2e82b3..cfdbfdcbcb 100644
--- a/hw/i386/meson.build
+++ b/hw/i386/meson.build
@@ -33,5 +33,6 @@ subdir('kvm')
 subdir('xen')
 
 i386_ss.add_all(xenpv_ss)
+i386_ss.add_all(xen_ss)
 
 hw_arch += {'i386': i386_ss}
diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index be84130300..2fcc46e6ca 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -1,6 +1,5 @@
 i386_ss.add(when: 'CONFIG_XEN', if_true: files(
   'xen-hvm.c',
-  'xen-mapcache.c',
   'xen_apic.c',
   'xen_platform.c',
   'xen_pvdevice.c',
diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index 5d6be61090..a0c89d91c4 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -21,8 +21,3 @@ xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
 cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 
-# xen-mapcache.c
-xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
-xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
-xen_map_cache_return(void* ptr) "%p"
-
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index ae0ace3046..19d0637c46 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -22,3 +22,7 @@ else
 endif
 
 specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
+
+xen_ss = ss.source_set()
+
+xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index 3da3fd8348..2c8f238f42 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -41,3 +41,8 @@ xs_node_vprintf(char *path, char *value) "%s %s"
 xs_node_vscanf(char *path, char *value) "%s %s"
 xs_node_watch(char *path) "%s"
 xs_node_unwatch(char *path) "%s"
+
+# xen-mapcache.c
+xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
+xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
+xen_map_cache_return(void* ptr) "%p"
diff --git a/hw/i386/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
similarity index 100%
rename from hw/i386/xen/xen-mapcache.c
rename to hw/xen/xen-mapcache.c
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:56:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:56:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483960.750420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZd-0000VM-8Q; Wed, 25 Jan 2023 08:56:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483960.750420; Wed, 25 Jan 2023 08:56:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZd-0000VF-48; Wed, 25 Jan 2023 08:56:05 +0000
Received: by outflank-mailman (input) for mailman id 483960;
 Wed, 25 Jan 2023 08:56:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbZb-0000CY-65
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:03 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20615.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::615])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fb5a7d7c-9c8d-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:55:17 +0100 (CET)
Received: from DM6PR12CA0015.namprd12.prod.outlook.com (2603:10b6:5:1c0::28)
 by DM6PR12MB4172.namprd12.prod.outlook.com (2603:10b6:5:212::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:54:33 +0000
Received: from DM6NAM11FT044.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1c0:cafe::93) by DM6PR12CA0015.outlook.office365.com
 (2603:10b6:5:1c0::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:33 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT044.mail.protection.outlook.com (10.13.173.185) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6002.13 via Frontend Transport; Wed, 25 Jan 2023 08:54:32 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:31 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:30 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb5a7d7c-9c8d-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AMpDNAxixgJL7512FqAuSfWr6ha1jFX+Jrf9SRlMN5OwVKXdi+gNte58535aiV8E/Nip98ICJThV/PYQRk7TR/O94ArzIbd6h19v5D9IsbKKIb3S09XpmNQ1L91BsnQ7UnKE6LjQb6vBmfvV5qYzAewliENfQ99ZDGivO7qJCPIQH2zUSueDXTCIZRvUXN2snJ4RAi2l08hGTe82v/4QMPbQKDFidbpJEj2V90WTmkRsig+UD8GAKK5dqxXs4vzV26RY8FGYpX3EB68ZSEEP212Uy1Vbr2jsyxykKsi8Ny5mLoGg6TSLNL9kLQ85QRL8A+0jrvbujSLHW3RG3KSIXg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/pArBdHE6pupZI0iEmfLZok1fiWO0kN2Oo56dAzFs5Q=;
 b=nDdmZhDUiMPmMkApAcDIA1P9OpFtF+nrRPCxo91SOQf2pZ2xCtCbAJ6VhaiNeGDZFIVJwylVyCsAp4pwPEDw5nec+iPjPBV94J5lYJ45DIDPMjJgPCp1SvYfUCFUyxCd6QIMYDsj9IdoOJE2CJji5FXXCnIpGHMTDd4WzMas5cv8Jngoqnh0ArkoIZ7A4gjUTRp6fBdcawXOmiA5XlzKXaWOt+DQCA1VOZ+OcmnRzLepkaLsl8RzGG5KttOYCFb2/LLAzj/NLlSs18m3MVPPnZaSTpYk6VWMEBNSXQAlI4bg9ttq6SQ7UONLl/WV7vMaxQ4XKTYS4AxQFNuaurdBJw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/pArBdHE6pupZI0iEmfLZok1fiWO0kN2Oo56dAzFs5Q=;
 b=HLU+BFcBNXAaVd/z2syJEG3/8eHOJEL1fx1xcNS8OO2S0oG4Dm670ImZ2ez3X2LU7HA+uboXWquxPLx37gs2cTieH2OT0XKLB+CGCbisZzSPvX8xgj64ZpKCK16wk7cVWa8JKoIIU/gv76UzB/j+wZCDPps8xKNpb3+kvyMxB6w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>
Subject: [QEMU][PATCH v4 03/10] hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
Date: Wed, 25 Jan 2023 00:54:00 -0800
Message-ID: <20230125085407.7144-4-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT044:EE_|DM6PR12MB4172:EE_
X-MS-Office365-Filtering-Correlation-Id: 15f73fff-e87f-4a7a-48e5-08dafeb1c70a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:32.7415
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 15f73fff-e87f-4a7a-48e5-08dafeb1c70a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT044.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4172

From: Stefano Stabellini <stefano.stabellini@amd.com>

In preparation for moving most of the xen-hvm code to an arch-neutral location, move:
- shared_vmport_page
- log_for_dirtybit
- dirty_bitmap
- suspend
- wakeup

out of the XenIOState struct, as these are only used on x86 (especially the
ones related to dirty logging). The updated XenIOState can then be shared
between aarch64 and x86.

Also, remove free_phys_offset as it was unused.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/i386/xen/xen-hvm.c | 58 ++++++++++++++++++++-----------------------
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 1fba0e0ae1..06c446e7be 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -73,6 +73,7 @@ struct shared_vmport_iopage {
 };
 typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
+static shared_vmport_iopage_t *shared_vmport_page;
 
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
 {
@@ -95,6 +96,11 @@ typedef struct XenPhysmap {
 } XenPhysmap;
 
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
+static const XenPhysmap *log_for_dirtybit;
+/* Buffer used by xen_sync_dirty_bitmap */
+static unsigned long *dirty_bitmap;
+static Notifier suspend;
+static Notifier wakeup;
 
 typedef struct XenPciDevice {
     PCIDevice *pci_dev;
@@ -105,7 +111,6 @@ typedef struct XenPciDevice {
 typedef struct XenIOState {
     ioservid_t ioservid;
     shared_iopage_t *shared_page;
-    shared_vmport_iopage_t *shared_vmport_page;
     buffered_iopage_t *buffered_io_page;
     xenforeignmemory_resource_handle *fres;
     QEMUTimer *buffered_io_timer;
@@ -125,14 +130,8 @@ typedef struct XenIOState {
     MemoryListener io_listener;
     QLIST_HEAD(, XenPciDevice) dev_list;
     DeviceListener device_listener;
-    hwaddr free_phys_offset;
-    const XenPhysmap *log_for_dirtybit;
-    /* Buffer used by xen_sync_dirty_bitmap */
-    unsigned long *dirty_bitmap;
 
     Notifier exit;
-    Notifier suspend;
-    Notifier wakeup;
 } XenIOState;
 
 /* Xen specific function for piix pci */
@@ -462,10 +461,10 @@ static int xen_remove_from_physmap(XenIOState *state,
     }
 
     QLIST_REMOVE(physmap, list);
-    if (state->log_for_dirtybit == physmap) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+    if (log_for_dirtybit == physmap) {
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
     }
     g_free(physmap);
 
@@ -626,16 +625,16 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
         return;
     }
 
-    if (state->log_for_dirtybit == NULL) {
-        state->log_for_dirtybit = physmap;
-        state->dirty_bitmap = g_new(unsigned long, bitmap_size);
-    } else if (state->log_for_dirtybit != physmap) {
+    if (log_for_dirtybit == NULL) {
+        log_for_dirtybit = physmap;
+        dirty_bitmap = g_new(unsigned long, bitmap_size);
+    } else if (log_for_dirtybit != physmap) {
         /* Only one range for dirty bitmap can be tracked. */
         return;
     }
 
     rc = xen_track_dirty_vram(xen_domid, start_addr >> TARGET_PAGE_BITS,
-                              npages, state->dirty_bitmap);
+                              npages, dirty_bitmap);
     if (rc < 0) {
 #ifndef ENODATA
 #define ENODATA  ENOENT
@@ -650,7 +649,7 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
     }
 
     for (i = 0; i < bitmap_size; i++) {
-        unsigned long map = state->dirty_bitmap[i];
+        unsigned long map = dirty_bitmap[i];
         while (map != 0) {
             j = ctzl(map);
             map &= ~(1ul << j);
@@ -676,12 +675,10 @@ static void xen_log_start(MemoryListener *listener,
 static void xen_log_stop(MemoryListener *listener, MemoryRegionSection *section,
                          int old, int new)
 {
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-
     if (old & ~new & (1 << DIRTY_MEMORY_VGA)) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
         /* Disable dirty bit tracking */
         xen_track_dirty_vram(xen_domid, 0, 0, NULL);
     }
@@ -1021,9 +1018,9 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
 {
     vmware_regs_t *vmport_regs;
 
-    assert(state->shared_vmport_page);
+    assert(shared_vmport_page);
     vmport_regs =
-        &state->shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
+        &shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
     QEMU_BUILD_BUG_ON(sizeof(*req) < sizeof(*vmport_regs));
 
     current_cpu = state->cpu_by_vcpu_id[state->send_vcpu];
@@ -1468,7 +1465,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 
     state->memory_listener = xen_memory_listener;
     memory_listener_register(&state->memory_listener, &address_space_memory);
-    state->log_for_dirtybit = NULL;
 
     state->io_listener = xen_io_listener;
     memory_listener_register(&state->io_listener, &address_space_io);
@@ -1489,19 +1485,19 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
+    suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&suspend);
 
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
+    wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&wakeup);
 
     rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
     if (!rc) {
         DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
+        shared_vmport_page =
             xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
                                  1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
+        if (shared_vmport_page == NULL) {
             error_report("map shared vmport IO page returned error %d handle=%p",
                          errno, xen_xc);
             goto err;
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:56:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:56:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483961.750430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZp-0000zV-Ld; Wed, 25 Jan 2023 08:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483961.750430; Wed, 25 Jan 2023 08:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZp-0000zO-II; Wed, 25 Jan 2023 08:56:17 +0000
Received: by outflank-mailman (input) for mailman id 483961;
 Wed, 25 Jan 2023 08:56:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbZo-0000CY-S5
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:16 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20624.outbound.protection.outlook.com
 [2a01:111:f400:7eab::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 03a4143e-9c8e-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:55:31 +0100 (CET)
Received: from DM6PR02CA0098.namprd02.prod.outlook.com (2603:10b6:5:1f4::39)
 by IA1PR12MB8312.namprd12.prod.outlook.com (2603:10b6:208:3fc::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:54:45 +0000
Received: from DM6NAM11FT019.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1f4:cafe::bd) by DM6PR02CA0098.outlook.office365.com
 (2603:10b6:5:1f4::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:45 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT019.mail.protection.outlook.com (10.13.172.172) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:54:45 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:44 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:43 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03a4143e-9c8e-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c3x/o+4D0sGp88pcO1gKkDC1dYpkc8lXDZyi/Jpuavg/QFOcay8K4GWLg0aEYousnVIV6Y0BMcH+oPYLEEOlQNKdkEY0VHXNHpfk0uNDt7+DX6hgHC1NnCbC7jC+yUvWeirOEFWMmWXx66h4uSoSpFr6FNtUmb/ds+sFTrcHn1wRFnh/mz7Ce1MLH0yV6/vKEiM8ZGqaHnQXCziTuNvqDrgSeHqIZh+XzaOwJYOzmj+kV/godniKIk3JaCWLrJJ4KBWMoEAPH+SyAeJFRyCWEDivoUmXOQbBUyDREYLEdkbFudTELcIlhy8zYvYal2iTQtOHNgK4/OtkgEjUihjAxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=URxTp3Cj5zWl6imZ6G51I5Cd2LY5Q6C4OO3dPqmmi+4=;
 b=C16Q07x/usFkzOGhugD9BMLkxA78/VLthsz70L0fb1vBaHmkY038W7lMS0aFc+jNBuRDiwtZ9bNecTFkLC4WbQkJZoNG5W7Y0Y5pcLDmlX+uH2/952fVL/TOdNGZNXXwCr3Mv506ckOLC/s1bntx7t4zG7uUnGE0BDwWC1qd0aGPSWWx5L9b23bFj7I6GFD4K/OFIqjbjNaqEfbQxP/6gx+3Iq1IrwmEI4EtfVUeKIp9pqyAAP2LcJAA3V/xJcfhEO5nnCcGsdGgBUffLKfxun4BIePcqvzz/YG8rPt7Vvch2oQTep+4EqyWaqzA/xPVf3miOQA2DRGq/HzzcHxUKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=URxTp3Cj5zWl6imZ6G51I5Cd2LY5Q6C4OO3dPqmmi+4=;
 b=d6jq8Y7r1TRgnT8lPajuS2J4PaZpnpyo3I8YTBfrP5H/pV4+DFDXUqYDdioSbj9HlXrVjH6VrK7d40Cv9YDW/vWcsr6sHWVCRzbUD49Vhu59doJTR25efWA+24Q2COsbhYk0p1JH750N2Nca4GPim5qZOKeA2vW36d1uZLL1IAU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v4 06/10] hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration failure
Date: Wed, 25 Jan 2023 00:54:03 -0800
Message-ID: <20230125085407.7144-7-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT019:EE_|IA1PR12MB8312:EE_
X-MS-Office365-Filtering-Correlation-Id: 31b10660-926b-4149-e40c-08dafeb1ce6b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:45.1409
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 31b10660-926b-4149-e40c-08dafeb1ce6b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT019.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB8312

From: Stefano Stabellini <stefano.stabellini@amd.com>

On ARM it is possible to have a functioning xenpv machine with only the
PV backends and no IOREQ server. If IOREQ server creation fails, continue
with PV backend initialization.

Also, move the IOREQ registration and mapping code into a new function,
xen_do_ioreq_register().

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 hw/xen/xen-hvm-common.c | 53 ++++++++++++++++++++++++++++-------------
 1 file changed, 36 insertions(+), 17 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index e748d8d423..94dbbe97ed 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -777,25 +777,12 @@ err:
     exit(1);
 }
 
-void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
-                        MemoryListener xen_memory_listener)
+static void xen_do_ioreq_register(XenIOState *state,
+                                           unsigned int max_cpus,
+                                           MemoryListener xen_memory_listener)
 {
     int i, rc;
 
-    state->xce_handle = xenevtchn_open(NULL, 0);
-    if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
-        goto err;
-    }
-
-    state->xenstore = xs_daemon_open();
-    if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
-        goto err;
-    }
-
-    xen_create_ioreq_server(xen_domid, &state->ioservid);
-
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
 
@@ -859,12 +846,44 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
     QLIST_INIT(&state->dev_list);
     device_listener_register(&state->device_listener);
 
+    return;
+
+err:
+    error_report("xen hardware virtual machine initialisation failed");
+    exit(1);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener)
+{
+    int rc;
+
+    state->xce_handle = xenevtchn_open(NULL, 0);
+    if (state->xce_handle == NULL) {
+        perror("xen: event channel open");
+        goto err;
+    }
+
+    state->xenstore = xs_daemon_open();
+    if (state->xenstore == NULL) {
+        perror("xen: xenstore open");
+        goto err;
+    }
+
+    rc = xen_create_ioreq_server(xen_domid, &state->ioservid);
+    if (!rc) {
+        xen_do_ioreq_register(state, max_cpus, xen_memory_listener);
+    } else {
+        warn_report("xen: failed to create ioreq server");
+    }
+
     xen_bus_init();
 
     xen_register_backend(state);
 
     return;
+
 err:
-    error_report("xen hardware virtual machine initialisation failed");
+    error_report("xen hardware virtual machine backend registration failed");
     exit(1);
 }
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:56:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:56:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483962.750440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZr-0001HM-UP; Wed, 25 Jan 2023 08:56:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483962.750440; Wed, 25 Jan 2023 08:56:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZr-0001HF-Qq; Wed, 25 Jan 2023 08:56:19 +0000
Received: by outflank-mailman (input) for mailman id 483962;
 Wed, 25 Jan 2023 08:56:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbZq-00012q-9c
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:18 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe5b::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1f8427dd-9c8e-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:56:14 +0100 (CET)
Received: from DM6PR21CA0020.namprd21.prod.outlook.com (2603:10b6:5:174::30)
 by DM6PR12MB4059.namprd12.prod.outlook.com (2603:10b6:5:215::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:54:20 +0000
Received: from DM6NAM11FT067.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:174:cafe::d7) by DM6PR21CA0020.outlook.office365.com
 (2603:10b6:5:174::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6064.4 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:20 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT067.mail.protection.outlook.com (10.13.172.76) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:54:19 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:19 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:18 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f8427dd-9c8e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l3/Kh3JJFa2tw0RwGwSINvgrPgcX3x+Q9W3Djxo/clwYnAphGAPG0ALItJODsGgor8DIKx//qaU+8CNO8a+aPyrTS/0dDXmxLgsAlcmTGRx2UIC1z0tPuUySowXD3RYy3Rz54NL8wLt+PLr1DzBdrkIsBo2hKy5nBt154/94Hu+fEHPGerIKlRt+8lyM5ld0v18EDexL3jON2zPFIOzJDNV6Sr7QZRqSBg5F417xz/doLuApffw+yDYrV26mvJDdzNTqLlJL3GSjbsOP/ldkcVb4208zJ2pMpRMFfKs3g9NE6QAOsUxKZT0216ePOgb+CKMIRZ3MYsl6T9+etZcLIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fses6GPjSDeO1Rg1V/rPgyl+4LoVZw/rTYn2KGu2M1M=;
 b=lrwHhrNHLdbtpk263ufDE+TbQMZS/JWr7KKufVct5fbmUTMvBR46mXUPwuIYzhKWcvJt7CkAvMe7nxuvoy/R/cSd6Dlbp88B7I1FW8ZUNUQh6XcfL5UCiUf8+nXJB3nwcDNvOus3RCSSXMtg4wmhSmfjuo1hDpqF08/LIpUtCrChZUHUbJ+nYzF3nZvWyY25T+4iypB5tJ3OlmkkLTHdox37av3OglRYbv6NRpejfQXLvZH3l21T8kOleAxppELKejf4Ofa5TxZRIkhHtna2Y72ZBGdgD/aGo/v8KLSyS4goKH4qF5nm1d6bXYH8PMM4ig2Wwz/f6mOa/IMokL658Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fses6GPjSDeO1Rg1V/rPgyl+4LoVZw/rTYn2KGu2M1M=;
 b=ipTINMEqsjL8Xi7RvBb70hIHY6k1H7G4ch7LKt8gprRn+hpFqa1Wzepnr2G3oTJ4HbQXLPTeeLOmnnzenHS+QYkG2Nq6Uy5zPRPQJtMsDO08ztqzLV07XsOwTGC/9MtbymrdvunoCvyVTjQaV82mwMUyjAL04iUlOskeMdZU/qQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>
Subject: [QEMU][PATCH v4 00/10] Introduce xenpvh machine for arm architecture 
Date: Wed, 25 Jan 2023 00:53:57 -0800
Message-ID: <20230125085407.7144-1-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT067:EE_|DM6PR12MB4059:EE_
X-MS-Office365-Filtering-Correlation-Id: 7bcc5be2-42bc-4f46-ff0b-08dafeb1bf55
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	da4g3wEWlfXMyhYj3QlmYfiSAVVJaZOBOraNWQ6m6glAY2rR4l6IIOVq/STE3zW+ryhUk1AsQU5OZw7pQ2IovRMTgIZbu+NZAk4mFxv/5sNvAnUKg10wfHrMdL850LHWTJG/F9m36ooPjDdMOm5s0/qBvGtmj9dHHAJ4nbKyWBhTBj7temYYJHz+Pp5jNRo6tuxzcg7sCSOVZpzqrTiq3Q98pJP9t3OTGH0OKihsUH8GGkeNX/rATYzy76fcMKETOg6V5QJb/22DEHXJrn4JBin/la6/x5dNXRG8Ky3p10+K3PB6GWtu8b4PJP0Ccs883mgog/vC7ola6CmC23I7W1Bnc3ofzaTPuTj290R6B+ye/W9Hx531UkuX+KCmXwf4lzNkujNJMY3+Ut73TWZPyqmkY11NgK1yuwLMZb8DnN+8ZBcpLOHrbhlFy93S0qQjz6/tPX8l8P10W4FQWL8P7rayb4FzpRgYkw9LTjqcz2RVJZrAwvUXZX4i2ogWqX+aW+QX2jSLrRAxQx5vetMVvKq/80BogI+naoKmLR/a0ltPqbc62yEOxj/JLK12ZsRi1xf5dMw0AvcKuhniBrriKMN6rpgJyqZsu0QxQgfX5lAYaI5dgxyAG6gjlStF5ZlkLyzdMGJc21wEZ/5AZ0W1ntaB7rFVqPbjZ/nImU8tmohrxWY6sYYFqP2+rg1qG/3PQw5ixd5OLOXmTs2oPJdnUiIAlkOGxONzqVQSPjO3My4=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199018)(36840700001)(40470700004)(46966006)(36860700001)(83380400001)(426003)(336012)(54906003)(47076005)(6916009)(2616005)(478600001)(8676002)(186003)(36756003)(1076003)(70206006)(70586007)(356005)(2906002)(86362001)(41300700001)(82740400003)(81166007)(26005)(5660300002)(4743002)(316002)(40480700001)(40460700003)(6666004)(44832011)(4326008)(82310400005)(8936002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:19.8315
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7bcc5be2-42bc-4f46-ff0b-08dafeb1bf55
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT067.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4059

MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hi,
This series adds a xenpvh machine for aarch64. The motivation behind creating the
xenpvh machine with IOREQ and TPM support is to enable each guest on Xen aarch64
to have its own unique, emulated TPM.

This series does the following:
    1. Moves common Xen functionality from hw/i386/xen to hw/xen/ so it can
       be used for aarch64.
    2. Adds a minimal xenpvh Arm machine which creates an IOREQ server and
       supports TPM.

Also, checkpatch.pl fails for 03/12 and 06/12. These failures are due to
moving old code, which did not follow the QEMU coding style, to a new
location. No new code was added.

Regards,
Vikram

ChangeLog:
    v3->v4:
        Removed the out-of-series 04/12 patch.

    v2->v3:
        1. Changed the machine name to xenpvh as per Juergen's input.
        2. Added docs/system/xenpvh.rst documentation.
        3. Removed GUEST_TPM_BASE and added tpm_base_address as a property.
        4. Corrected CONFIG_TPM related issues.
        5. Added a xen_register_backend() function call to xen_register_ioreq().
        6. Applied Oleksandr's suggestion, i.e. removed the extra interface
           opening and used the accel=xen option.

    v1->v2:
        Merged patches 05 and 06.
        04/12: xen-hvm-common.c:
            1. Moved xen_be_init() and xen_be_register_common() from
               xen_register_ioreq() to xen_register_backend().
            2. Changed g_malloc to g_new and perror to error_setg_errno.
            3. Created a local subroutine for Xen_IOREQ_register.
            4. Fixed build issues with the inclusion of xenstore.h.
            5. Fixed minor errors.

Stefano Stabellini (5):
  hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
  xen-hvm: reorganize xen-hvm and move common function to xen-hvm-common
  include/hw/xen/xen_common: return error from xen_create_ioreq_server
  hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration
    failure
  meson.build: do not set have_xen_pci_passthrough for aarch64 targets

Vikram Garhwal (5):
  hw/i386/xen/: move xen-mapcache.c to hw/xen/
  hw/i386/xen: rearrange xen_hvm_init_pc
  hw/xen/xen-hvm-common: Use g_new and error_setg_errno
  hw/arm: introduce xenpvh machine
  meson.build: enable xenpv machine build for ARM

 docs/system/arm/xenpvh.rst       |   34 +
 docs/system/target-arm.rst       |    1 +
 hw/arm/meson.build               |    2 +
 hw/arm/xen_arm.c                 |  184 +++++
 hw/i386/meson.build              |    1 +
 hw/i386/xen/meson.build          |    1 -
 hw/i386/xen/trace-events         |   19 -
 hw/i386/xen/xen-hvm.c            | 1084 +++---------------------------
 hw/xen/meson.build               |    7 +
 hw/xen/trace-events              |   19 +
 hw/xen/xen-hvm-common.c          |  889 ++++++++++++++++++++++++
 hw/{i386 => }/xen/xen-mapcache.c |    0
 include/hw/arm/xen_arch_hvm.h    |    9 +
 include/hw/i386/xen_arch_hvm.h   |   11 +
 include/hw/xen/arch_hvm.h        |    5 +
 include/hw/xen/xen-hvm-common.h  |   97 +++
 include/hw/xen/xen_common.h      |   13 +-
 meson.build                      |    4 +-
 18 files changed, 1363 insertions(+), 1017 deletions(-)
 create mode 100644 docs/system/arm/xenpvh.rst
 create mode 100644 hw/arm/xen_arm.c
 create mode 100644 hw/xen/xen-hvm-common.c
 rename hw/{i386 => }/xen/xen-mapcache.c (100%)
 create mode 100644 include/hw/arm/xen_arch_hvm.h
 create mode 100644 include/hw/i386/xen_arch_hvm.h
 create mode 100644 include/hw/xen/arch_hvm.h
 create mode 100644 include/hw/xen/xen-hvm-common.h

-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:56:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:56:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483964.750450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZw-0001fd-6q; Wed, 25 Jan 2023 08:56:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483964.750450; Wed, 25 Jan 2023 08:56:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbZw-0001fQ-3i; Wed, 25 Jan 2023 08:56:24 +0000
Received: by outflank-mailman (input) for mailman id 483964;
 Wed, 25 Jan 2023 08:56:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbZu-0000CY-P9
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:22 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20630.outbound.protection.outlook.com
 [2a01:111:f400:7eab::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 071fa16b-9c8e-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:55:37 +0100 (CET)
Received: from DM6PR06CA0038.namprd06.prod.outlook.com (2603:10b6:5:54::15) by
 PH7PR12MB6490.namprd12.prod.outlook.com (2603:10b6:510:1f5::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.33; Wed, 25 Jan 2023 08:54:48 +0000
Received: from DM6NAM11FT063.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:54:cafe::c2) by DM6PR06CA0038.outlook.office365.com
 (2603:10b6:5:54::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.20 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:48 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT063.mail.protection.outlook.com (10.13.172.219) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.21 via Frontend Transport; Wed, 25 Jan 2023 08:54:47 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:46 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 00:54:46 -0800
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:45 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 071fa16b-9c8e-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZXVp6JaPtuzxwlY5i3IJ6BbSnIk1EtKOnrrHHvoMe52VdlFVddVIIniaBIkjUeBFblD21vzuRien8hi/IuV2EM2HjvtwVGPRh5Pbi6zWcyCH3XztHD2rW3LaeicskGf2eX4V5QbT/QdLCnPyTS/i8YqLrKq5PJfw7P/A/+/iUTbNyuPhIth2/1oL7eiuVFGJc2aPwMFaEyeNOmRmHJWOdyovLculRi3i2zeqboQIZPC/ur//EIn3+kz+mg3aGUnTCHlRBq2tXTsRWOEm04xeuwiIL6lwSf877MKFv2V48d4cy86lm5ZSnckVFShYeR/IXY5LaLNdyeDA8EWUyRGDrA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mqy76BJj/3k0CZFIEfqRDFKvHIQ6y9gkszdQF5vGxqY=;
 b=LreCuP1shWsGCxizkNtaNRqUQ0YHJVq+l4udbzMvY1za0SA2b7tGY67GDYhVntS1rGWRbje7l5l1oagU5h9r+bYYGqUfmU9XWXIfx6Z9fWayM3AOTHGNrBRzk2GBKjrf0vMCPC4HXaVCST4Ht1q1gmoRgjSzBcmHjulht2cH6dHXNwSBc2VrffAcbHF3G84FrSduwMAWG12Hy/VlePThqmst4ZV/4p2aTOaZdZ/C3tD3hOAFIWvOkyoamIAaBdNa/cIvybmuSldUEg1B0imbkeSI8fmie2JoaMl8GxUg/XRPpOQoaItriaro5pMLWBxYnZrGvc+STkip2TT4RTob2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mqy76BJj/3k0CZFIEfqRDFKvHIQ6y9gkszdQF5vGxqY=;
 b=g3EUzN8vS+ZJOA/ATNFCIKQ5BTMy12NwUJkebUN4eahBXO5NU0GCtN3aJiTR3lhmkUMIcslO/eX+bnCblN7Zz9zrNO/LeSi793Pw9rXE22Fj8t/8x7B9ydrKcbW5IdzNV3aBALsALSND8WIKIlpGAiHRDHkGvieoN78ayDkGi2A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v4 07/10] hw/xen/xen-hvm-common: Use g_new and error_setg_errno
Date: Wed, 25 Jan 2023 00:54:04 -0800
Message-ID: <20230125085407.7144-8-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT063:EE_|PH7PR12MB6490:EE_
X-MS-Office365-Filtering-Correlation-Id: a3428d60-6b23-40c5-d9c3-08dafeb1cf87
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Zc8C8J07RmauMRfAH9gxUSWIHSJI7GhDjuy5OaYKOqPUcuuqN6oZO+6/L870NfIGonmMjM+x8Md1Reiq/GuUcwUU5B6yoznsckcWmGkCmuOViMO1bqWwNcmc90I5K5n4r+ueun686FRjEhg6vdHK13GXs8+G3cubecHVANpoTSBDpW50VuielDKxIYdN/EMH2OyL49Hh2fJFlBE+MMy7vyNnJtWSivmnGWg2cNWChnG+2+lmyfPIuINSoNfGUCl+7HFUw3L+9sUm1tgqXT4KPfPJ9o/6Bmhre7Tbm3oWFplhCHTFqFXqElhAJo/HDCYE5DrOCuP+EyQm+muHDJNmxBwrXaGc5aRpLBpo6jcI6Mkz5eQOQaWK55IoYrZKEJuqcNEE6dgnm8EsSBvyymNxZWI3KnkED12i250ZpW4CUv1c7wONA47DyshYrgcZkrhtFwJzeeSRfZXnSj74M8671U/ajcYm5uUBrTucNYvY+zF4uNUABrI6xlsgz05wXc2/n/Xd74upFrUbwvB7QnV035FvulZankslT8c6xCG7uPH2An7EUXvv3SwIXASqkSFrgTq84kJxLf8Zku/eHeDj+/gyB7a3eo5d1I3sByKaWb/KyYVGXJU7EGnYnm6XVVPemqD/GPYUMAuOuhL+NckRQAKIDmGskYUVncxQ8vgnx8zmRwh8HYXeMhZwLJrloC/BlFn4Wvnft/5kloOyUUrHC5BuC4V3AiSy/itrsMyoPWk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(346002)(136003)(396003)(39860400002)(451199018)(46966006)(36840700001)(40470700004)(4326008)(36860700001)(70586007)(86362001)(54906003)(70206006)(6916009)(36756003)(1076003)(44832011)(8936002)(5660300002)(426003)(2906002)(82740400003)(41300700001)(82310400005)(40460700003)(47076005)(6666004)(26005)(83380400001)(40480700001)(186003)(8676002)(2616005)(316002)(336012)(81166007)(478600001)(356005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:47.0018
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a3428d60-6b23-40c5-d9c3-08dafeb1cf87
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT063.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6490

Replace g_malloc with g_new and perror with error_setg_errno.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 hw/xen/xen-hvm-common.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index 94dbbe97ed..01c8ec1956 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -34,7 +34,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
     trace_xen_ram_alloc(ram_addr, size);
 
     nr_pfn = size >> TARGET_PAGE_BITS;
-    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
+    pfn_list = g_new(xen_pfn_t, nr_pfn);
 
     for (i = 0; i < nr_pfn; i++) {
         pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
@@ -726,7 +726,7 @@ void destroy_hvm_domain(bool reboot)
             return;
         }
         if (errno != ENOTTY /* old Xen */) {
-            perror("xendevicemodel_shutdown failed");
+            error_report("xendevicemodel_shutdown failed with error %d", errno);
         }
         /* well, try the old thing then */
     }
@@ -797,7 +797,7 @@ static void xen_do_ioreq_register(XenIOState *state,
     }
 
     /* Note: cpus is empty at this point in init */
-    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
 
     rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
     if (rc < 0) {
@@ -806,7 +806,7 @@ static void xen_do_ioreq_register(XenIOState *state,
         goto err;
     }
 
-    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
+    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
 
     /* FIXME: how about if we overflow the page here? */
     for (i = 0; i < max_cpus; i++) {
@@ -860,13 +860,13 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
 
     state->xce_handle = xenevtchn_open(NULL, 0);
     if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
+        error_report("xen: event channel open failed with error %d", errno);
         goto err;
     }
 
     state->xenstore = xs_daemon_open();
     if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
+        error_report("xen: xenstore open failed with error %d", errno);
         goto err;
     }
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:56:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:56:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483971.750460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKba3-0002HB-N6; Wed, 25 Jan 2023 08:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483971.750460; Wed, 25 Jan 2023 08:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKba3-0002FS-J4; Wed, 25 Jan 2023 08:56:31 +0000
Received: by outflank-mailman (input) for mailman id 483971;
 Wed, 25 Jan 2023 08:56:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKba1-00012q-K3
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:29 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7eaa::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 12267ac9-9c8e-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:56:28 +0100 (CET)
Received: from DM6PR08CA0036.namprd08.prod.outlook.com (2603:10b6:5:80::49) by
 MN2PR12MB4111.namprd12.prod.outlook.com (2603:10b6:208:1de::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:54:29 +0000
Received: from DM6NAM11FT016.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:80:cafe::4a) by DM6PR08CA0036.outlook.office365.com
 (2603:10b6:5:80::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:29 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT016.mail.protection.outlook.com (10.13.173.139) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:54:28 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:27 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:26 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12267ac9-9c8e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JQ/YrdxXbBAwraVU9uU5qScpbPMmTfefIKg3RaAS/k0Q1tjvLnIhyeex3eJRUzGjBtELaVLGX5doUqYpF7m9fTwEa2B8YGitLmJA5ch/jHSclbmPlTkQgY62ebViJ6OzkVlNW13DipfGxgDjcluphGqgmFxe7GadPqhdChlMVVzdeq4GhvpD9azkWCisnuimfwKUiNB6oIIp6HxeD64YBTJoeWbkMkh2YSHGrpj+PoLdfjXvQaeNd1SOhCQpF9tW4Wwe3ojhjcJbtw7JFWlIWOYCp1T1JERX6veNT/u+4vAfP9hjZqRgT/drPiqsNmxe+n+TtT6bbMjdCqFLvjrKzg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xQRygdwVbnDYBJcZUcE7RFsjWA1hPALnId6WXnaDoUU=;
 b=hq2+A92pf5JiH/kPCKdTdCq6Eb+huETOJgJzldmtDkg9+Q3KNuYO54I98CacnOvbmySA5vZEcN9Yj1t0rsQmob0lTOM/oUqaO4E3h1N0SVjowQjT4KaObl7ovhfduUMLh/HB3FXFwUFXIyZwQmCdjXBgxcg20nPiEyuc8W+LOTefbXVvqfg1TsZfeVfVOGAcjni3pAGroBqIXbBGbG6o9/0Kh7Cii0AeoMPixdSn8ZMpWJqGmgJkpnYccbfhhkXIfTa/pJrLkR5AtZVKuVrA8dFxc27iJ3kOAoiQhgnCH2lGbscrExaU4Noc7XVImmiwiTGqVxbFNHgFHVEP5LU7Kw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xQRygdwVbnDYBJcZUcE7RFsjWA1hPALnId6WXnaDoUU=;
 b=ibrq85+4pNNvO0lO2iNeJy43rqs3yV8Z/DeLrhaFQRApIJ8IDTpYZlrBMvQ6XBKaJTByN3ygbfcmP5UefnmyO3ESTvvta0CFuV7z/X3KYkplHnxXk2NNYYLBtLcWjPmVMSbMiLVGKNlKq5mhYpzFlRzcmCOYTGdqP6/EG3j10f0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>
Subject: [QEMU][PATCH v4 02/10] hw/i386/xen: rearrange xen_hvm_init_pc
Date: Wed, 25 Jan 2023 00:53:59 -0800
Message-ID: <20230125085407.7144-3-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT016:EE_|MN2PR12MB4111:EE_
X-MS-Office365-Filtering-Correlation-Id: 91a567b6-f4cd-441a-df48-08dafeb1c48a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	olvmbCUD/rGjlgOvMdbQOW61JeNYXVjoNjTN0g+fcWLXzEMCOViwvhRBknhEiMg1TAC/tirgPm88rrtR1dT62eXX1fybfUfUtIGbqFlfin/h5F2/xRiVuR1QQYSl/KRqTMVbnmsUX8+01b4V7HK+a5OKkN+ERLA+y0+QETDbv+qkccByylm0+jP7DnzBEe854yeTP1b/prDEUi51B5NlwkELvzyN5LK7bN/6up4woLIy5aWeNLuEKs6/0uE3wwppShUuf2qtUA1G9TaBKnPe4YgcMMVbCkZrvNIs374pIwpSLteB7qU4zOBPkoCUzS5wJBwelhg/gmCOqbHblVKDzjsdZ9/w2lNykoOpkzYNuir3AegdF9/AzPFiyltnTdlbPMPOXandm6j92qowyg4SEOwSAs9rFapLT4azXHoXFdr88aw3HMQ7dsfdGQ9j6JeiUx+RC+vTcdqk4GOVU2yDi8w9K/BeRQ5yUSQVKrV8sGhCaZBPXcZ7sx2r4RJT0i7TSA7esdinYlbqZEJKDT5Dg5aaYxE+BqsrsfYi+gG9x/2Yvqw9eVIJF1dYuzqQXTfer/9AMfstvEZH98bddclpWsv9HRPRx46Pi4rsvrslY9XImlqgsF4WbfB/1F9enf8Ir0itQqibc7tqN1nDwrmmaqCk7y20FFFdGh+QlLCOkMgtMXE/S+eqo902GMBSZYiKlxd0hPwXDDN/AvHEOhNZlr5DHWHzuRDuQpPAeVJglxc=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(136003)(39860400002)(376002)(396003)(346002)(451199018)(40470700004)(36840700001)(46966006)(82310400005)(82740400003)(4326008)(86362001)(2906002)(36756003)(356005)(40460700003)(70586007)(8676002)(186003)(26005)(6916009)(81166007)(6666004)(47076005)(41300700001)(426003)(54906003)(336012)(40480700001)(1076003)(5660300002)(36860700001)(2616005)(83380400001)(44832011)(8936002)(478600001)(7416002)(70206006)(316002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:28.5667
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 91a567b6-f4cd-441a-df48-08dafeb1c48a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT016.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4111

In preparation for moving most of the xen-hvm code to an arch-neutral location,
move the non-IOREQ references to:
- xen_get_vmport_regs_pfn
- xen_suspend_notifier
- xen_wakeup_notifier
- xen_ram_init

toward the end of the xen_hvm_init_pc() function.

This keeps the common IOREQ functions in one place; they will be moved to a
new function in the next patch in order to make them common to both the x86
and aarch64 machines.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/i386/xen/xen-hvm.c | 49 ++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index b9a6f7f538..1fba0e0ae1 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -1416,12 +1416,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
-
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
-
     /*
      * Register wake-up support in QMP query-current-machine API
      */
@@ -1432,23 +1426,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
         goto err;
     }
 
-    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
-    if (!rc) {
-        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
-            xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
-                                 1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
-            error_report("map shared vmport IO page returned error %d handle=%p",
-                         errno, xen_xc);
-            goto err;
-        }
-    } else if (rc != -ENOSYS) {
-        error_report("get vmport regs pfn returned error %d, rc=%d",
-                     errno, rc);
-        goto err;
-    }
-
     /* Note: cpus is empty at this point in init */
     state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
 
@@ -1486,7 +1463,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 #else
     xen_map_cache_init(NULL, state);
 #endif
-    xen_ram_init(pcms, ms->ram_size, ram_memory);
 
     qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
 
@@ -1513,6 +1489,31 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
+    state->suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&state->suspend);
+
+    state->wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&state->wakeup);
+
+    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
+    if (!rc) {
+        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
+        state->shared_vmport_page =
+            xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
+                                 1, &ioreq_pfn, NULL);
+        if (state->shared_vmport_page == NULL) {
+            error_report("map shared vmport IO page returned error %d handle=%p",
+                         errno, xen_xc);
+            goto err;
+        }
+    } else if (rc != -ENOSYS) {
+        error_report("get vmport regs pfn returned error %d, rc=%d",
+                     errno, rc);
+        goto err;
+    }
+
+    xen_ram_init(pcms, ms->ram_size, ram_memory);
+
     /* Disable ACPI build because Xen handles it */
     pcms->acpi_build_enabled = false;
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 08:56:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 08:56:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483973.750465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKba4-0002LG-1H; Wed, 25 Jan 2023 08:56:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483973.750465; Wed, 25 Jan 2023 08:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKba3-0002Kl-Rl; Wed, 25 Jan 2023 08:56:31 +0000
Received: by outflank-mailman (input) for mailman id 483973;
 Wed, 25 Jan 2023 08:56:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKba3-0000CY-2u
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:31 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on20602.outbound.protection.outlook.com
 [2a01:111:f400:7eb2::602])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b2c09dc-9c8e-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:55:45 +0100 (CET)
Received: from DM6PR05CA0060.namprd05.prod.outlook.com (2603:10b6:5:335::29)
 by DM4PR12MB8449.namprd12.prod.outlook.com (2603:10b6:8:17f::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.27; Wed, 25 Jan
 2023 08:54:53 +0000
Received: from DM6NAM11FT023.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:335:cafe::75) by DM6PR05CA0060.outlook.office365.com
 (2603:10b6:5:335::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:53 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT023.mail.protection.outlook.com (10.13.173.96) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:54:53 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:52 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 00:54:51 -0800
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:51 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b2c09dc-9c8e-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oGopsUYhhHnYRJDCP/FiCXesns1Bq2PZcXNpP/03+CX2sPshXW7cPuFrO6KMy77kLVB+wVYmvM4dA+rCfj5KYw+W0Uj/G8mQ5bvfAGCeSTqxiRg8tlCRSK59DqTcxZ9aRI2wXhAlAXIIWik4J5WZAM52GpyBaWOdmSRzv8lNnnVWv81fAnWhi2BB52B6eFtfXjYEMFqcv+xLn6BBsTPrFh19ZLrSjkAy4IlztevVAl9Kvx2/HccJIfHARAI7ZHFKcih0wOGp5m24dtsbQyyAzOzoUlaXydr+Tuwi5Nlq3tVqVJbYyPmRHTpM+e/5OcYL/cWCGT3j4paleoEQjCxt0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A+E4BfA2jNdGJLu/KOY6Tiy/3hPOgzwaMVA9okKy1Xk=;
 b=L/zQzX0TroeeF0E1ELRyHNYPImy8BOPolMX2IUrbbOrMTbR5zQBUz7bXGfQoey8kWqiZuTmuu6xdWXkCfObaUpVd84/raXUs/Y7qZBnsnP4YJIevEOF+xWEcV36Jh4Q1IB8Vh3glflu5Wn364b8z/xzY/O3mjHtnssN2UZgj4zFvd6wQrbd5fC860qJQ5Vn1eY/E93Ei0/w4rZe4TQRqdV44E8dVE7vhGOnUOuLXz56Syg7u6ZxoBa+YoheyS/MirB4H6yTL+F3mfrYCpwA2rExIpwcKE3+Jcla+T0gju/ShMueFZ7APruODEtR83aGWGTqRiW1s94nZGbiSS7wRpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A+E4BfA2jNdGJLu/KOY6Tiy/3hPOgzwaMVA9okKy1Xk=;
 b=TtuPwwtLq+2QytKpmlBdCei0wMZTmTBPumausEOJzQDX8flkSQe05cFOVThzVjIC+FwE6ykioijOvyvai1ayAtgGz5/KbXPo06wgmw/5qKdn2Gjrkln/3RqBzmoQn5Q94MpyFORXUpD1nqfkWFTvUrkqOTGfhmDF/yPN3Wek8Mk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Peter Maydell
	<peter.maydell@linaro.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
	"open list:ARM TCG CPUs" <qemu-arm@nongnu.org>
Subject: [QEMU][PATCH v4 09/10] hw/arm: introduce xenpvh machine
Date: Wed, 25 Jan 2023 00:54:06 -0800
Message-ID: <20230125085407.7144-10-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT023:EE_|DM4PR12MB8449:EE_
X-MS-Office365-Filtering-Correlation-Id: f8222557-8e9b-4499-8928-08dafeb1d321
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UESb5+TkqkWIwH0Fk09Gz/AONNGLrpxZctozO5EQTfFd15E3K/lMrH+akIosfkTfzdCCLiMATGxz30lN2LoSpVbycYIAS8TsoeNUBeXFhHvUGrl1XNOJwA6WArE2KH0W9JMKhiaD00lIH2QFLZuVUJbktGmQgdiBjGfuRkXS2xDsh+iTkwlfG4aYuJ0HYORkpslUNJOR2g17clAKZq7YUr3W1oLQTBh8etkZiimaEYiC6QACGnnMaNk/3RFz9qGASuWTea0X+5AyqwEqr+nT0VQ2JbrnTBOGfAKvfUtNykdsUAsA+YVlKnXypEYCi45Fbccj44aV+gWBa8tqz/PmaYw2thYlNRU5wo97gQeCA+jz2xM1lOFuU+n2NCLuRY0ZX7eBAlKduknkev/PXqUxbH8mxpoI3JQhrM8kCO1ar/jOAUyrJV33yEU18tnm1WdgCo+i15LWEe5wK9Bg8oOtn0SW/qo1Ozl0pPsJHRtsqIH8Z4HIO3Z4aJk5foPlGnhsFs7bbZGFxdSv5reusifeA6kub2NAdpiHosT5LV71G6/Q4/9ygqWjhG5ezR5lhltGt9gsqfvB4RbBGIgkb3ca+3fRP5z/wNfUHxq1bbze9/ozbSMRITX8OMs5O8ZbTFkJUqmQeWPyqYijxjEQNGOUmPWv3y5/FuSa9g3AgWo0vOLbvSmJPBiBx3E3zw5nkmfiEt0TnkQa+oQ+GqAq26TxR7fRX5x6objEQQtA9cjJl6oOksaFaFVh64YX8fD8r3wYHrB3HAuy+5O9rpR5KVQcrQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(396003)(136003)(376002)(346002)(39860400002)(451199018)(40470700004)(46966006)(36840700001)(82310400005)(966005)(1076003)(86362001)(186003)(26005)(478600001)(70206006)(44832011)(41300700001)(82740400003)(336012)(8676002)(426003)(4326008)(47076005)(70586007)(8936002)(5660300002)(40460700003)(40480700001)(36860700001)(2616005)(83380400001)(6916009)(36756003)(6666004)(356005)(81166007)(54906003)(316002)(2906002)(66899018)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:53.0422
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f8222557-8e9b-4499-8928-08dafeb1d321
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT023.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB8449

Add a new machine, xenpvh, which creates an IOREQ server to register/connect
with the Xen hypervisor.

Optional: when CONFIG_TPM is enabled, the machine also creates a
tpm-tis-device, adds a TPM emulator, connects to a swtpm instance running on
the host machine via a chardev socket, and supports TPM functionality for a
guest domain.

Extra command-line options for aarch64 xenpvh QEMU to connect to swtpm:
    -chardev socket,id=chrtpm,path=/tmp/myvtpm2/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -machine tpm-base-addr=0x0c000000

swtpm implements a software TPM emulator (TPM 1.2 and TPM 2.0) built on
libtpms and provides access to TPM functionality over socket, chardev and
CUSE interfaces.
GitHub repo: https://github.com/stefanberger/swtpm
Example of starting swtpm on the host machine:
    mkdir /tmp/vtpm2
    swtpm socket --tpmstate dir=/tmp/vtpm2 \
    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
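
Before launching QEMU it can be useful to check that the swtpm control socket
started above is actually accepting connections. The following is a minimal,
hypothetical Python sketch (not part of this patch); the `ctrl_socket_ready`
helper name is illustrative, and the path is the one from the swtpm example
above:

```python
import os
import socket

def ctrl_socket_ready(path: str, timeout: float = 1.0) -> bool:
    """Return True if a Unix socket at `path` exists and accepts connections."""
    if not os.path.exists(path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        # Socket file exists but nothing is listening (e.g. a stale swtpm run).
        return False
    finally:
        s.close()

if __name__ == "__main__":
    # Path taken from the swtpm example above; adjust to your --ctrl setting.
    path = "/tmp/vtpm2/swtpm-sock"
    print("ready" if ctrl_socket_ready(path) else "not ready")
```

If this reports "not ready", QEMU's -chardev socket option would fail to
connect with the same path.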

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 docs/system/arm/xenpvh.rst    |  34 +++++++
 docs/system/target-arm.rst    |   1 +
 hw/arm/meson.build            |   2 +
 hw/arm/xen_arm.c              | 184 ++++++++++++++++++++++++++++++++++
 include/hw/arm/xen_arch_hvm.h |   9 ++
 include/hw/xen/arch_hvm.h     |   2 +
 6 files changed, 232 insertions(+)
 create mode 100644 docs/system/arm/xenpvh.rst
 create mode 100644 hw/arm/xen_arm.c
 create mode 100644 include/hw/arm/xen_arch_hvm.h

diff --git a/docs/system/arm/xenpvh.rst b/docs/system/arm/xenpvh.rst
new file mode 100644
index 0000000000..e1655c7ab8
--- /dev/null
+++ b/docs/system/arm/xenpvh.rst
@@ -0,0 +1,36 @@
+XENPVH (``xenpvh``)
+=========================================
+This machine creates an IOREQ server to register/connect with the Xen
+hypervisor.
+
+When TPM is enabled, this machine also creates a tpm-tis-device at a
+user-provided TPM base address, adds a TPM emulator and connects to a swtpm
+application running on the host machine via a chardev socket. This enables
+xenpvh to support TPM functionality for a guest domain.
+
+More information about TPM use and installing the swtpm Linux application can
+be found at: docs/specs/tpm.rst.
+
+Example of starting swtpm on the host machine:
+
+.. code-block:: console
+
+    mkdir /tmp/vtpm2
+    swtpm socket --tpmstate dir=/tmp/vtpm2 \
+    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
+
+Sample QEMU xenpvh command for running and connecting with Xen:
+
+.. code-block:: console
+
+    qemu-system-aarch64 -xen-domid 1 \
+    -chardev socket,id=libxl-cmd,path=qmp-libxl-1,server=on,wait=off \
+    -mon chardev=libxl-cmd,mode=control \
+    -chardev socket,id=libxenstat-cmd,path=qmp-libxenstat-1,server=on,wait=off \
+    -mon chardev=libxenstat-cmd,mode=control \
+    -xen-attach -name guest0 -vnc none -display none -nographic \
+    -machine xenpvh -m 1301 \
+    -chardev socket,id=chrtpm,path=/tmp/vtpm2/swtpm-sock \
+    -tpmdev emulator,id=tpm0,chardev=chrtpm -machine tpm-base-addr=0x0C000000
+
+In the QEMU command above, the last two lines connect the xenpvh QEMU to swtpm
+via a chardev socket.
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index 91ebc26c6d..af8d7c77d6 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -106,6 +106,7 @@ undocumented; you can get a complete list by running
    arm/stm32
    arm/virt
    arm/xlnx-versal-virt
+   arm/xenpvh
 
 Emulated CPU architecture support
 =================================
diff --git a/hw/arm/meson.build b/hw/arm/meson.build
index b036045603..06bddbfbb8 100644
--- a/hw/arm/meson.build
+++ b/hw/arm/meson.build
@@ -61,6 +61,8 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true: files('fsl-imx7.c', 'mcimx7d-sabre.
 arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
 arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
 arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
+arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
+arm_ss.add_all(xen_ss)
 
 softmmu_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmu-common.c'))
 softmmu_ss.add(when: 'CONFIG_EXYNOS4', if_true: files('exynos4_boards.c'))
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
new file mode 100644
index 0000000000..12b19e3609
--- /dev/null
+++ b/hw/arm/xen_arm.c
@@ -0,0 +1,184 @@
+/*
+ * QEMU ARM Xen PV Machine
+ *
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/error-report.h"
+#include "qapi/qapi-commands-migration.h"
+#include "qapi/visitor.h"
+#include "hw/boards.h"
+#include "hw/sysbus.h"
+#include "sysemu/block-backend.h"
+#include "sysemu/tpm_backend.h"
+#include "sysemu/sysemu.h"
+#include "hw/xen/xen-legacy-backend.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "sysemu/tpm.h"
+#include "hw/xen/arch_hvm.h"
+
+#define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
+OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
+
+static MemoryListener xen_memory_listener = {
+    .region_add = xen_region_add,
+    .region_del = xen_region_del,
+    .log_start = NULL,
+    .log_stop = NULL,
+    .log_sync = NULL,
+    .log_global_start = NULL,
+    .log_global_stop = NULL,
+    .priority = 10,
+};
+
+struct XenArmState {
+    /*< private >*/
+    MachineState parent;
+
+    XenIOState *state;
+
+    struct {
+        uint64_t tpm_base_addr;
+    } cfg;
+};
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    hw_error("Invalid ioreq type 0x%x\n", req->type);
+
+    return;
+}
+
+void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
+                         bool add)
+{
+}
+
+void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+}
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+}
+
+#ifdef CONFIG_TPM
+static void xen_enable_tpm(XenArmState *xam)
+{
+    Error *errp = NULL;
+    DeviceState *dev;
+    SysBusDevice *busdev;
+
+    TPMBackend *be = qemu_find_tpm_be("tpm0");
+    if (be == NULL) {
+        DPRINTF("Couldn't find the backend for tpm0\n");
+        return;
+    }
+    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
+    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
+    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
+    busdev = SYS_BUS_DEVICE(dev);
+    sysbus_realize_and_unref(busdev, &error_fatal);
+    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
+
+    DPRINTF("Connected tpmdev at address 0x%" PRIx64 "\n", xam->cfg.tpm_base_addr);
+}
+#endif
+
+static void xen_arm_init(MachineState *machine)
+{
+    XenArmState *xam = XEN_ARM(machine);
+
+    xam->state = g_new0(XenIOState, 1);
+
+    xen_register_ioreq(xam->state, machine->smp.cpus, xen_memory_listener);
+
+#ifdef CONFIG_TPM
+    if (xam->cfg.tpm_base_addr) {
+        xen_enable_tpm(xam);
+    } else {
+        DPRINTF("tpm-base-addr is not provided. TPM will not be enabled\n");
+    }
+#endif
+
+    return;
+}
+
+#ifdef CONFIG_TPM
+static void xen_arm_get_tpm_base_addr(Object *obj, Visitor *v,
+                                      const char *name, void *opaque,
+                                      Error **errp)
+{
+    XenArmState *xam = XEN_ARM(obj);
+    uint64_t value = xam->cfg.tpm_base_addr;
+
+    visit_type_uint64(v, name, &value, errp);
+}
+
+static void xen_arm_set_tpm_base_addr(Object *obj, Visitor *v,
+                                      const char *name, void *opaque,
+                                      Error **errp)
+{
+    XenArmState *xam = XEN_ARM(obj);
+    uint64_t value;
+
+    if (!visit_type_uint64(v, name, &value, errp)) {
+        return;
+    }
+
+    xam->cfg.tpm_base_addr = value;
+}
+#endif
+
+static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
+{
+
+    MachineClass *mc = MACHINE_CLASS(oc);
+    mc->desc = "Xen Para-virtualized PC";
+    mc->init = xen_arm_init;
+    mc->max_cpus = 1;
+    mc->default_machine_opts = "accel=xen";
+
+#ifdef CONFIG_TPM
+    object_class_property_add(oc, "tpm-base-addr", "uint64_t",
+                              xen_arm_get_tpm_base_addr,
+                              xen_arm_set_tpm_base_addr,
+                              NULL, NULL);
+    object_class_property_set_description(oc, "tpm-base-addr",
+                                          "Set Base address for TPM device.");
+
+    machine_class_allow_dynamic_sysbus_dev(mc, TYPE_TPM_TIS_SYSBUS);
+#endif
+}
+
+static const TypeInfo xen_arm_machine_type = {
+    .name = TYPE_XEN_ARM,
+    .parent = TYPE_MACHINE,
+    .class_init = xen_arm_machine_class_init,
+    .instance_size = sizeof(XenArmState),
+};
+
+static void xen_arm_machine_register_types(void)
+{
+    type_register_static(&xen_arm_machine_type);
+}
+
+type_init(xen_arm_machine_register_types)
diff --git a/include/hw/arm/xen_arch_hvm.h b/include/hw/arm/xen_arch_hvm.h
new file mode 100644
index 0000000000..8fd645e723
--- /dev/null
+++ b/include/hw/arm/xen_arch_hvm.h
@@ -0,0 +1,9 @@
+#ifndef HW_XEN_ARCH_ARM_HVM_H
+#define HW_XEN_ARCH_ARM_HVM_H
+
+#include <xen/hvm/ioreq.h>
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
+void arch_xen_set_memory(XenIOState *state,
+                         MemoryRegionSection *section,
+                         bool add);
+#endif
diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
index 26674648d8..c7c515220d 100644
--- a/include/hw/xen/arch_hvm.h
+++ b/include/hw/xen/arch_hvm.h
@@ -1,3 +1,5 @@
 #if defined(TARGET_I386) || defined(TARGET_X86_64)
 #include "hw/i386/xen_arch_hvm.h"
+#elif defined(TARGET_ARM) || defined(TARGET_AARCH64)
+#include "hw/arm/xen_arch_hvm.h"
 #endif
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:06:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:06:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483995.750479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbju-0005Kl-4P; Wed, 25 Jan 2023 09:06:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483995.750479; Wed, 25 Jan 2023 09:06:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbju-0005Ke-1Y; Wed, 25 Jan 2023 09:06:42 +0000
Received: by outflank-mailman (input) for mailman id 483995;
 Wed, 25 Jan 2023 09:06:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbai-00012q-TC
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:57:12 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e8c::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 26026597-9c8e-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:57:03 +0100 (CET)
Received: from MW4P220CA0011.NAMP220.PROD.OUTLOOK.COM (2603:10b6:303:115::16)
 by BY5PR12MB4097.namprd12.prod.outlook.com (2603:10b6:a03:213::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:55:00 +0000
Received: from CO1NAM11FT102.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:115:cafe::9d) by MW4P220CA0011.outlook.office365.com
 (2603:10b6:303:115::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:55:00 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT102.mail.protection.outlook.com (10.13.175.87) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Wed, 25 Jan 2023 08:54:58 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:57 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 00:54:57 -0800
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:56 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26026597-9c8e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kxWAVTOBVOUjL5UYlCO84yiyrkTvRDA4n7XtfxX7eKRC8MRclmE72TUiQ89XDARHwfu6JOPXdbX6tJX8HDGcgXTczweqLEQMbQ8nX6YoakDdA75UmAb+Bg30WurKLvcUFESnb0b+1A1bmZTdw0HX5iCuYv492sohT6MZ1j7njzyjgxrChKD9WI+GH/iUaRHF4UDTwRNNcdbPQL0e1ZJuDkj01kNDs1pYKhdj9DQgvSseyhQO2KoEh5BH8aUvPkKpBcYsdZ4uN540yN7u4Qg//W3++7fpS3JYnmYxb0jZSSRCO+cEOfIjf6CtqDFU/k2n9uP9/tJu2r2cHc6yidOIxw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5c1IzY0YQxuTPTSwy7UvS0mMbpPve8RkE4gupE88sNQ=;
 b=jWjVVchFI7l5w6SpbKbyeHS6mytJWIg0DDoNnEVQQC9zpufPeYIHXUuNhUn1j4JkcAIIZYUZOsBzrUjt08VQSfgf26zZQkAozDX9as/3msijr5h7uooxShFra8i8mGaonREP/V2+oGIQw+mprs5NEmO7p0l6pTewCHswbqPmsOVqWR4qeEVVxT1gRZ8F3+TAJeiRRF8MEZaVVEODXnqpgNVA/T1VWDiSNi2lMWXIg3iTD7QAJjGVMFvhSSyhWSAGXlZujN7at6jvkAdadIEqf/geTt9HC8o/4/amiBmm5pEsAHFLI1HI/Pk/1riPyJxDWmYPpu0WlosHqORTnytQ9Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5c1IzY0YQxuTPTSwy7UvS0mMbpPve8RkE4gupE88sNQ=;
 b=hlARu3HTkF23xk9tSeTT9xRxQfkJSimxcFiFQEte+F8SNvnUAEQspE8SuV6KvePw3BdgR47pQaBGTMsWy/zZfV0E3JioxFpxN70LvU5ZnLdc9kU/45ZryIujTGzi+CGmjGPnl/jDu7OGIsamavfPEbFdUlPjjhYJ/H4CSzqXf+Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Paolo Bonzini
	<pbonzini@redhat.com>, =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?=
	<marcandre.lureau@redhat.com>, =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?=
	<berrange@redhat.com>, Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [QEMU][PATCH v4 10/10] meson.build: enable xenpv machine build for ARM
Date: Wed, 25 Jan 2023 00:54:07 -0800
Message-ID: <20230125085407.7144-11-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT102:EE_|BY5PR12MB4097:EE_
X-MS-Office365-Filtering-Correlation-Id: 781d4595-95aa-49c5-90cc-08dafeb1d692
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	X8BA/G5jP91Ma0luh2+A2kdn49JOEazXISIIsHSdexkIRnaHxdVRTDfJH9iNXtJIaR0nZjBZjLliBn2SB2sFCekNvcqiLgQO6eVmlNzagJ/BHbVu3NsPbX5OWx7s31mPgwbZQpkid6EJfpPrILcnawNAM9J+3wqKc+UVQRq3uHDvyWfcyH+C2ARWEGONZ5sUi3l83vGItQQbqjTv37EVfAIjXzs9mr95+HiCD7p60pw/WYChj+LSdQpl3/CbUbqkc61pfYNQ42EMinhwSD4Izbl1JFPyqW2Lvaf4hf7/a6jmkgqh4BYPkJyQnqU2vOP1KH1jGWLBhfe+cQC0qApsXsVfZk7BE5nEzN+Rs3CVITzSXhdhIU7/Ct1WUfbGrqsS6ByvJX9eNkgYRyCdgjEeNJsYvidyv1gMFXJ6JpKNJ8cb1lC8SsWcKazBvSQNW85YAOTVmiMT+j9f4W/eYHPWw2rnAeT/gMrhiLtqJiXChdZon9obqEfW/XGOZoZqLObKnC6jWkEFjpwfw2LR4RJm/YK/EBWPy+Biv/MadKKsJtEAPfmwh1lXgyfHEaa/o4QB1Hc60GOw/tz4RIoT9uO7MyMDyqLDfL7o+uqZhAJEL+UVPreZ+A3IFAVcRXN/xUzg9Cttkoxjj5ROduik9EjYT2CQj8ki5+b4uy211qIwaCzEZh8E5kzdpBxZoA2wfA6md3nlWvEf5Y68tyfe+jkO2I8u6ZzFKt2ULlIwf4GoIwk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(136003)(396003)(376002)(39860400002)(346002)(451199018)(36840700001)(46966006)(40470700004)(82310400005)(336012)(86362001)(356005)(5660300002)(8676002)(81166007)(426003)(40480700001)(6916009)(36860700001)(36756003)(47076005)(2906002)(4744005)(66574015)(8936002)(83380400001)(44832011)(26005)(70206006)(316002)(6666004)(186003)(1076003)(478600001)(41300700001)(2616005)(40460700003)(54906003)(82740400003)(70586007)(4326008)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:58.7223
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 781d4595-95aa-49c5-90cc-08dafeb1d692
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT102.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4097

Add the aarch64 target to the CONFIG_XEN accelerator targets so that the
xenpvh machine can be built for ARM.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meson.build b/meson.build
index 693802adb2..13c4ad1017 100644
--- a/meson.build
+++ b/meson.build
@@ -135,7 +135,7 @@ endif
 if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
   # i386 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {
-    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
+    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu', 'aarch64-softmmu'],
   }
 endif
 if cpu in ['x86', 'x86_64']
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:07:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:07:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483998.750490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbk4-0005dN-BB; Wed, 25 Jan 2023 09:06:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483998.750490; Wed, 25 Jan 2023 09:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbk4-0005dG-8M; Wed, 25 Jan 2023 09:06:52 +0000
Received: by outflank-mailman (input) for mailman id 483998;
 Wed, 25 Jan 2023 09:06:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbaD-00012q-Lz
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:41 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe5a::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1c60bbda-9c8e-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:56:40 +0100 (CET)
Received: from DM6PR03CA0039.namprd03.prod.outlook.com (2603:10b6:5:100::16)
 by DS0PR12MB8573.namprd12.prod.outlook.com (2603:10b6:8:162::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 08:54:43 +0000
Received: from DM6NAM11FT012.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:100:cafe::b6) by DM6PR03CA0039.outlook.office365.com
 (2603:10b6:5:100::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:43 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT012.mail.protection.outlook.com (10.13.173.109) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:54:42 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:41 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:40 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c60bbda-9c8e-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W2gE/67T9fTJ1cEkhDxG5gATfKNecs7kxqzcMDdzBpwc41+N5hEJqTowHUXPQOLbGvE8jka+NiQBCQnDV/r5zrvDgGn0jhLhQq7fnppRwPbGqllyMrEsajoJXafc9OR80sZGCEBcMlqLmu/eEeSkPZyYT5sUxCC+/iePrEwvKWvNZ2bncMcXMJIhZqAS9/wyVog6soGV1OjBtA47xkllFzq0UHJ01m0zVwxG+XliecPKIDQnwI71aylVW3GLKrBsKzniFyggdCNbHJ6ESA8T6pTXRrlXf0z2p5xEpg5khysd+3KCWs4bFGtUjiiYFJc8gaw8D2L7vJaYUr8jxF8CpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HpZAZyFrLxyZiJ9zE+SxaljhwMZQFl/Jyy62XvsgXdo=;
 b=CG14mV1q7sGZmZuPMZ9TvNQ/d7Ta0ljhIFmZVpfAMRjDcaKmWXmSQJmbH+p9p4d/6So7GYi3VsIBPaSQQ57MUT0hpe0e4eqh6hoimhFY3ZwvIe/DLybYGGYaZhpFj63xo5qritGnNLh54fif09iBOBTEdQEIgcPuCgG1AcS4J9cYZbqCudg38H9eKF9Wvae81T5FzOh/SzKc8hE3MnkR2+htGDbyt3c7NJWFCt17JbMPEFrctjQ56s9g1t4RRkdjG7ua8l8TjWM69k6KmLEFhpO6kalGWZ72pRZymrdIeYcaUWfE/DBqM6mla8EuG3fbzQwLothXAIj2wUWjLNYdLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v4 05/10] include/hw/xen/xen_common: return error from xen_create_ioreq_server
Date: Wed, 25 Jan 2023 00:54:02 -0800
Message-ID: <20230125085407.7144-6-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT012:EE_|DS0PR12MB8573:EE_
X-MS-Office365-Filtering-Correlation-Id: 3c1a4c42-ae34-4c40-956d-08dafeb1ccdb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:42.5174
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3c1a4c42-ae34-4c40-956d-08dafeb1ccdb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT012.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB8573

From: Stefano Stabellini <stefano.stabellini@amd.com>

This is done in preparation for enabling xenpv support on the ARM
architecture. On ARM it is possible to have a functioning xenpv machine
with only the PV backends and no IOREQ server. If IOREQ server creation
fails, continue with PV backend initialization.
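As a hedged illustration of the caller pattern this new return value
enables (all names here are hypothetical stand-ins, not the real QEMU
functions), a machine-init path can now detect the failure and still
bring up the PV backends:

```c
#include <assert.h>

/* Hypothetical sketch: stand-ins for xen_create_ioreq_server() and the
 * PV backend setup. Only the control flow mirrors the patch; the
 * identifiers and behaviour are invented for illustration. */
static int create_fails;            /* simulate hypervisor rejecting the call */
static int pv_backends_initialized;

static int xen_create_ioreq_server(int dom, int *ioservid)
{
    (void)dom;
    if (create_fails) {
        *ioservid = 0;
        return -1;                  /* creation failed */
    }
    *ioservid = 1;
    return 0;
}

static void xen_bus_init(void)      /* stand-in for PV backend init */
{
    pv_backends_initialized = 1;
}

/* With an int return, the caller can note the failure and continue:
 * a xenpv machine on ARM only needs the PV backends. */
static void machine_init(int dom)
{
    int ioservid;

    if (xen_create_ioreq_server(dom, &ioservid) < 0) {
        /* no IOREQ server available: run as a PV-only machine */
    }
    xen_bus_init();
}
```

The point is that with the old void return the caller could not tell
whether an IOREQ server existed; now it can proceed either way.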

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 include/hw/xen/xen_common.h | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 9a13a756ae..9ec69582b3 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -467,9 +467,10 @@ static inline void xen_unmap_pcidev(domid_t dom,
 {
 }
 
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
+static inline int xen_create_ioreq_server(domid_t dom,
+                                          ioservid_t *ioservid)
 {
+    return 0;
 }
 
 static inline void xen_destroy_ioreq_server(domid_t dom,
@@ -600,8 +601,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
                                                   PCI_FUNC(pci_dev->devfn));
 }
 
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
+static inline int xen_create_ioreq_server(domid_t dom,
+                                          ioservid_t *ioservid)
 {
     int rc = xendevicemodel_create_ioreq_server(xen_dmod, dom,
                                                 HVM_IOREQSRV_BUFIOREQ_ATOMIC,
@@ -609,12 +610,14 @@ static inline void xen_create_ioreq_server(domid_t dom,
 
     if (rc == 0) {
         trace_xen_ioreq_server_create(*ioservid);
-        return;
+        return rc;
     }
 
     *ioservid = 0;
     use_default_ioreq_server = true;
     trace_xen_default_ioreq_server();
+
+    return rc;
 }
 
 static inline void xen_destroy_ioreq_server(domid_t dom,
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:07:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:07:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484003.750500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbkW-0006GT-LG; Wed, 25 Jan 2023 09:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484003.750500; Wed, 25 Jan 2023 09:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbkW-0006GM-HH; Wed, 25 Jan 2023 09:07:20 +0000
Received: by outflank-mailman (input) for mailman id 484003;
 Wed, 25 Jan 2023 09:07:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbaT-00012q-W8
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:58 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7eb2::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1acfe1ca-9c8e-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:56:51 +0100 (CET)
Received: from DM6PR03CA0075.namprd03.prod.outlook.com (2603:10b6:5:333::8) by
 CH0PR12MB5074.namprd12.prod.outlook.com (2603:10b6:610:e1::10) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6002.33; Wed, 25 Jan 2023 08:54:41 +0000
Received: from DM6NAM11FT035.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:333:cafe::7a) by DM6PR03CA0075.outlook.office365.com
 (2603:10b6:5:333::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33 via Frontend
 Transport; Wed, 25 Jan 2023 08:54:40 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT035.mail.protection.outlook.com (10.13.172.100) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 08:54:39 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 25 Jan
 2023 02:54:38 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 25 Jan 2023 02:54:37 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1acfe1ca-9c8e-11ed-91b6-6bf2151ebd3b
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, "Richard
 Henderson" <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>
Subject: [QEMU][PATCH v4 04/10] xen-hvm: reorganize xen-hvm and move common function to xen-hvm-common
Date: Wed, 25 Jan 2023 00:54:01 -0800
Message-ID: <20230125085407.7144-5-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT035:EE_|CH0PR12MB5074:EE_
X-MS-Office365-Filtering-Correlation-Id: 94d43469-c869-4de5-e3f4-08dafeb1cb05
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:39.4392
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 94d43469-c869-4de5-e3f4-08dafeb1cb05
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT035.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR12MB5074

From: Stefano Stabellini <stefano.stabellini@amd.com>

This patch does the following:
1. Creates arch_handle_ioreq() and arch_xen_set_memory(). This is done in
    preparation for moving most of the xen-hvm code to an arch-neutral
    location: the x86-specific portion of xen_set_memory() moves to
    arch_xen_set_memory(), and handle_vmport_ioreq() moves to
    arch_handle_ioreq().

2. Pure code movement: extract common functionality from
    hw/i386/xen/xen-hvm.c and move it to hw/xen/xen-hvm-common.c. These
    common functions are useful for creating an IOREQ server.

    xen_hvm_init_pc() contains the architecture-independent code for
    creating and mapping an IOREQ server, connecting memory and IO
    listeners, initializing a Xen bus and registering backends. Move this
    common Xen code to a new function, xen_register_ioreq(), which can be
    used by both x86 and ARM machines.

    The following functions are moved to hw/xen/xen-hvm-common.c:
        xen_vcpu_eport(), xen_vcpu_ioreq(), xen_ram_alloc(), xen_set_memory(),
        xen_region_add(), xen_region_del(), xen_io_add(), xen_io_del(),
        xen_device_realize(), xen_device_unrealize(),
        cpu_get_ioreq_from_shared_memory(), cpu_get_ioreq(), do_inp(),
        do_outp(), rw_phys_req_item(), read_phys_req_item(),
        write_phys_req_item(), cpu_ioreq_pio(), cpu_ioreq_move(),
        cpu_ioreq_config(), handle_ioreq(), handle_buffered_iopage(),
        handle_buffered_io(), cpu_handle_ioreq(), xen_main_loop_prepare(),
        xen_hvm_change_state_handler(), xen_exit_notifier(),
        xen_map_ioreq_server(), destroy_hvm_domain() and
        xen_shutdown_fatal_error()

3. Removes the static qualifier from the following functions:
    1. xen_region_add()
    2. xen_region_del()
    3. xen_io_add()
    4. xen_io_del()
    5. xen_device_realize()
    6. xen_device_unrealize()
    7. xen_hvm_change_state_handler()
    8. cpu_ioreq_pio()
    9. xen_exit_notifier()

4. Replaces TARGET_PAGE_SIZE with XC_PAGE_SIZE to match the page size
    used by Xen.
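The arch split described in point 1 can be sketched as follows. This is
a hypothetical, heavily simplified stand-in (the struct, enum values and
counters are invented for illustration, not QEMU's or Xen's real
definitions): a common dispatcher handles the shared request types and
defers everything else to a per-architecture hook.

```c
#include <assert.h>
#include <stdbool.h>

/* Invented minimal request type, standing in for Xen's ioreq_t. */
typedef struct {
    int type;
} ioreq_t;

enum { IOREQ_TYPE_PIO = 0, IOREQ_TYPE_VMWARE_PORT = 3 };

static int pio_handled;
static int vmport_handled;

/* x86 flavour of the hook (the role arch_handle_ioreq() plays under
 * hw/i386/); an ARM build would supply a version that ignores
 * VMware-port requests entirely. */
static bool arch_handle_ioreq(ioreq_t *req)
{
    if (req->type == IOREQ_TYPE_VMWARE_PORT) {
        vmport_handled++;           /* x86-only VMware port emulation */
        return true;
    }
    return false;                   /* not an arch-specific request */
}

/* Arch-neutral dispatcher (the role handle_ioreq() plays in
 * hw/xen/xen-hvm-common.c after the move): common types first,
 * then the architecture hook. */
static void handle_ioreq(ioreq_t *req)
{
    switch (req->type) {
    case IOREQ_TYPE_PIO:
        pio_handled++;              /* handled by common code */
        break;
    default:
        arch_handle_ioreq(req);
        break;
    }
}
```

The design choice is that common code never needs #ifdefs for x86-only
request types; each architecture's hook decides what it supports.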

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 hw/i386/xen/trace-events        |   14 -
 hw/i386/xen/xen-hvm.c           | 1023 ++-----------------------------
 hw/xen/meson.build              |    5 +-
 hw/xen/trace-events             |   14 +
 hw/xen/xen-hvm-common.c         |  870 ++++++++++++++++++++++++++
 include/hw/i386/xen_arch_hvm.h  |   11 +
 include/hw/xen/arch_hvm.h       |    3 +
 include/hw/xen/xen-hvm-common.h |   97 +++
 8 files changed, 1063 insertions(+), 974 deletions(-)
 create mode 100644 hw/xen/xen-hvm-common.c
 create mode 100644 include/hw/i386/xen_arch_hvm.h
 create mode 100644 include/hw/xen/arch_hvm.h
 create mode 100644 include/hw/xen/xen-hvm-common.h

diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index a0c89d91c4..5d0a8d6dcf 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -7,17 +7,3 @@ xen_platform_log(char *s) "xen platform: %s"
 xen_pv_mmio_read(uint64_t addr) "WARNING: read from Xen PV Device MMIO space (address 0x%"PRIx64")"
 xen_pv_mmio_write(uint64_t addr) "WARNING: write to Xen PV Device MMIO space (address 0x%"PRIx64")"
 
-# xen-hvm.c
-xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
-xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "0x%"PRIx64" size 0x%lx, log_dirty %i"
-handle_ioreq(void *req, uint32_t type, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p type=%d dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-handle_ioreq_read(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p read type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-handle_ioreq_write(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p write type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-cpu_ioreq_pio(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p pio dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-cpu_ioreq_pio_read_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio read reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
-cpu_ioreq_pio_write_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio write reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
-cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p copy dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
-cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 06c446e7be..b81c671598 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -10,46 +10,20 @@
 
 #include "qemu/osdep.h"
 #include "qemu/units.h"
+#include "qapi/error.h"
+#include "qapi/qapi-commands-migration.h"
+#include "trace.h"
 
-#include "cpu.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pci_host.h"
 #include "hw/i386/pc.h"
 #include "hw/irq.h"
-#include "hw/hw.h"
 #include "hw/i386/apic-msidef.h"
-#include "hw/xen/xen_common.h"
-#include "hw/xen/xen-legacy-backend.h"
-#include "hw/xen/xen-bus.h"
 #include "hw/xen/xen-x86.h"
-#include "qapi/error.h"
-#include "qapi/qapi-commands-migration.h"
-#include "qemu/error-report.h"
-#include "qemu/main-loop.h"
 #include "qemu/range.h"
-#include "sysemu/runstate.h"
-#include "sysemu/sysemu.h"
-#include "sysemu/xen.h"
-#include "sysemu/xen-mapcache.h"
-#include "trace.h"
 
-#include <xen/hvm/ioreq.h>
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/arch_hvm.h"
 #include <xen/hvm/e820.h>
 
-//#define DEBUG_XEN_HVM
-
-#ifdef DEBUG_XEN_HVM
-#define DPRINTF(fmt, ...) \
-    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
-#else
-#define DPRINTF(fmt, ...) \
-    do { } while (0)
-#endif
-
-static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi;
-static MemoryRegion *framebuffer;
-static bool xen_in_migration;
-
 /* Compatibility with older version */
 
 /* This allows QEMU to build on a system that has Xen 4.5 or earlier
@@ -75,26 +49,9 @@ typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
 static shared_vmport_iopage_t *shared_vmport_page;
 
-static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
-{
-    return shared_page->vcpu_ioreq[i].vp_eport;
-}
-static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
-{
-    return &shared_page->vcpu_ioreq[vcpu];
-}
-
-#define BUFFER_IO_MAX_DELAY  100
-
-typedef struct XenPhysmap {
-    hwaddr start_addr;
-    ram_addr_t size;
-    const char *name;
-    hwaddr phys_offset;
-
-    QLIST_ENTRY(XenPhysmap) list;
-} XenPhysmap;
-
+static MemoryRegion ram_640k, ram_lo, ram_hi;
+static MemoryRegion *framebuffer;
+static bool xen_in_migration;
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
 static const XenPhysmap *log_for_dirtybit;
 /* Buffer used by xen_sync_dirty_bitmap */
@@ -102,38 +59,6 @@ static unsigned long *dirty_bitmap;
 static Notifier suspend;
 static Notifier wakeup;
 
-typedef struct XenPciDevice {
-    PCIDevice *pci_dev;
-    uint32_t sbdf;
-    QLIST_ENTRY(XenPciDevice) entry;
-} XenPciDevice;
-
-typedef struct XenIOState {
-    ioservid_t ioservid;
-    shared_iopage_t *shared_page;
-    buffered_iopage_t *buffered_io_page;
-    xenforeignmemory_resource_handle *fres;
-    QEMUTimer *buffered_io_timer;
-    CPUState **cpu_by_vcpu_id;
-    /* the evtchn port for polling the notification, */
-    evtchn_port_t *ioreq_local_port;
-    /* evtchn remote and local ports for buffered io */
-    evtchn_port_t bufioreq_remote_port;
-    evtchn_port_t bufioreq_local_port;
-    /* the evtchn fd for polling */
-    xenevtchn_handle *xce_handle;
-    /* which vcpu we are serving */
-    int send_vcpu;
-
-    struct xs_handle *xenstore;
-    MemoryListener memory_listener;
-    MemoryListener io_listener;
-    QLIST_HEAD(, XenPciDevice) dev_list;
-    DeviceListener device_listener;
-
-    Notifier exit;
-} XenIOState;
-
 /* Xen specific function for piix pci */
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
@@ -246,42 +171,6 @@ static void xen_ram_init(PCMachineState *pcms,
     }
 }
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
-                   Error **errp)
-{
-    unsigned long nr_pfn;
-    xen_pfn_t *pfn_list;
-    int i;
-
-    if (runstate_check(RUN_STATE_INMIGRATE)) {
-        /* RAM already populated in Xen */
-        fprintf(stderr, "%s: do not alloc "RAM_ADDR_FMT
-                " bytes of ram at "RAM_ADDR_FMT" when runstate is INMIGRATE\n",
-                __func__, size, ram_addr);
-        return;
-    }
-
-    if (mr == &ram_memory) {
-        return;
-    }
-
-    trace_xen_ram_alloc(ram_addr, size);
-
-    nr_pfn = size >> TARGET_PAGE_BITS;
-    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
-
-    for (i = 0; i < nr_pfn; i++) {
-        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
-    }
-
-    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
-        error_setg(errp, "xen: failed to populate ram at " RAM_ADDR_FMT,
-                   ram_addr);
-    }
-
-    g_free(pfn_list);
-}
-
 static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
 {
     XenPhysmap *physmap = NULL;
@@ -471,144 +360,6 @@ static int xen_remove_from_physmap(XenIOState *state,
     return 0;
 }
 
-static void xen_set_memory(struct MemoryListener *listener,
-                           MemoryRegionSection *section,
-                           bool add)
-{
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-    hwaddr start_addr = section->offset_within_address_space;
-    ram_addr_t size = int128_get64(section->size);
-    bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
-    hvmmem_type_t mem_type;
-
-    if (section->mr == &ram_memory) {
-        return;
-    } else {
-        if (add) {
-            xen_map_memory_section(xen_domid, state->ioservid,
-                                   section);
-        } else {
-            xen_unmap_memory_section(xen_domid, state->ioservid,
-                                     section);
-        }
-    }
-
-    if (!memory_region_is_ram(section->mr)) {
-        return;
-    }
-
-    if (log_dirty != add) {
-        return;
-    }
-
-    trace_xen_client_set_memory(start_addr, size, log_dirty);
-
-    start_addr &= TARGET_PAGE_MASK;
-    size = TARGET_PAGE_ALIGN(size);
-
-    if (add) {
-        if (!memory_region_is_rom(section->mr)) {
-            xen_add_to_physmap(state, start_addr, size,
-                               section->mr, section->offset_within_region);
-        } else {
-            mem_type = HVMMEM_ram_ro;
-            if (xen_set_mem_type(xen_domid, mem_type,
-                                 start_addr >> TARGET_PAGE_BITS,
-                                 size >> TARGET_PAGE_BITS)) {
-                DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
-                        start_addr);
-            }
-        }
-    } else {
-        if (xen_remove_from_physmap(state, start_addr, size) < 0) {
-            DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
-        }
-    }
-}
-
-static void xen_region_add(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    memory_region_ref(section->mr);
-    xen_set_memory(listener, section, true);
-}
-
-static void xen_region_del(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    xen_set_memory(listener, section, false);
-    memory_region_unref(section->mr);
-}
-
-static void xen_io_add(MemoryListener *listener,
-                       MemoryRegionSection *section)
-{
-    XenIOState *state = container_of(listener, XenIOState, io_listener);
-    MemoryRegion *mr = section->mr;
-
-    if (mr->ops == &unassigned_io_ops) {
-        return;
-    }
-
-    memory_region_ref(mr);
-
-    xen_map_io_section(xen_domid, state->ioservid, section);
-}
-
-static void xen_io_del(MemoryListener *listener,
-                       MemoryRegionSection *section)
-{
-    XenIOState *state = container_of(listener, XenIOState, io_listener);
-    MemoryRegion *mr = section->mr;
-
-    if (mr->ops == &unassigned_io_ops) {
-        return;
-    }
-
-    xen_unmap_io_section(xen_domid, state->ioservid, section);
-
-    memory_region_unref(mr);
-}
-
-static void xen_device_realize(DeviceListener *listener,
-                               DeviceState *dev)
-{
-    XenIOState *state = container_of(listener, XenIOState, device_listener);
-
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
-        PCIDevice *pci_dev = PCI_DEVICE(dev);
-        XenPciDevice *xendev = g_new(XenPciDevice, 1);
-
-        xendev->pci_dev = pci_dev;
-        xendev->sbdf = PCI_BUILD_BDF(pci_dev_bus_num(pci_dev),
-                                     pci_dev->devfn);
-        QLIST_INSERT_HEAD(&state->dev_list, xendev, entry);
-
-        xen_map_pcidev(xen_domid, state->ioservid, pci_dev);
-    }
-}
-
-static void xen_device_unrealize(DeviceListener *listener,
-                                 DeviceState *dev)
-{
-    XenIOState *state = container_of(listener, XenIOState, device_listener);
-
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
-        PCIDevice *pci_dev = PCI_DEVICE(dev);
-        XenPciDevice *xendev, *next;
-
-        xen_unmap_pcidev(xen_domid, state->ioservid, pci_dev);
-
-        QLIST_FOREACH_SAFE(xendev, &state->dev_list, entry, next) {
-            if (xendev->pci_dev == pci_dev) {
-                QLIST_REMOVE(xendev, entry);
-                g_free(xendev);
-                break;
-            }
-        }
-    }
-}
-
 static void xen_sync_dirty_bitmap(XenIOState *state,
                                   hwaddr start_addr,
                                   ram_addr_t size)
@@ -716,277 +467,6 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
-static MemoryListener xen_io_listener = {
-    .name = "xen-io",
-    .region_add = xen_io_add,
-    .region_del = xen_io_del,
-    .priority = 10,
-};
-
-static DeviceListener xen_device_listener = {
-    .realize = xen_device_realize,
-    .unrealize = xen_device_unrealize,
-};
-
-/* get the ioreq packets from share mem */
-static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
-{
-    ioreq_t *req = xen_vcpu_ioreq(state->shared_page, vcpu);
-
-    if (req->state != STATE_IOREQ_READY) {
-        DPRINTF("I/O request not ready: "
-                "%x, ptr: %x, port: %"PRIx64", "
-                "data: %"PRIx64", count: %u, size: %u\n",
-                req->state, req->data_is_ptr, req->addr,
-                req->data, req->count, req->size);
-        return NULL;
-    }
-
-    xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */
-
-    req->state = STATE_IOREQ_INPROCESS;
-    return req;
-}
-
-/* use poll to get the port notification */
-/* ioreq_vec--out,the */
-/* retval--the number of ioreq packet */
-static ioreq_t *cpu_get_ioreq(XenIOState *state)
-{
-    MachineState *ms = MACHINE(qdev_get_machine());
-    unsigned int max_cpus = ms->smp.max_cpus;
-    int i;
-    evtchn_port_t port;
-
-    port = xenevtchn_pending(state->xce_handle);
-    if (port == state->bufioreq_local_port) {
-        timer_mod(state->buffered_io_timer,
-                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-        return NULL;
-    }
-
-    if (port != -1) {
-        for (i = 0; i < max_cpus; i++) {
-            if (state->ioreq_local_port[i] == port) {
-                break;
-            }
-        }
-
-        if (i == max_cpus) {
-            hw_error("Fatal error while trying to get io event!\n");
-        }
-
-        /* unmask the wanted port again */
-        xenevtchn_unmask(state->xce_handle, port);
-
-        /* get the io packet from shared memory */
-        state->send_vcpu = i;
-        return cpu_get_ioreq_from_shared_memory(state, i);
-    }
-
-    /* read error or read nothing */
-    return NULL;
-}
-
-static uint32_t do_inp(uint32_t addr, unsigned long size)
-{
-    switch (size) {
-        case 1:
-            return cpu_inb(addr);
-        case 2:
-            return cpu_inw(addr);
-        case 4:
-            return cpu_inl(addr);
-        default:
-            hw_error("inp: bad size: %04x %lx", addr, size);
-    }
-}
-
-static void do_outp(uint32_t addr,
-        unsigned long size, uint32_t val)
-{
-    switch (size) {
-        case 1:
-            return cpu_outb(addr, val);
-        case 2:
-            return cpu_outw(addr, val);
-        case 4:
-            return cpu_outl(addr, val);
-        default:
-            hw_error("outp: bad size: %04x %lx", addr, size);
-    }
-}
-
-/*
- * Helper functions which read/write an object from/to physical guest
- * memory, as part of the implementation of an ioreq.
- *
- * Equivalent to
- *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
- *                          val, req->size, 0/1)
- * except without the integer overflow problems.
- */
-static void rw_phys_req_item(hwaddr addr,
-                             ioreq_t *req, uint32_t i, void *val, int rw)
-{
-    /* Do everything unsigned so overflow just results in a truncated result
-     * and accesses to undesired parts of guest memory, which is up
-     * to the guest */
-    hwaddr offset = (hwaddr)req->size * i;
-    if (req->df) {
-        addr -= offset;
-    } else {
-        addr += offset;
-    }
-    cpu_physical_memory_rw(addr, val, req->size, rw);
-}
-
-static inline void read_phys_req_item(hwaddr addr,
-                                      ioreq_t *req, uint32_t i, void *val)
-{
-    rw_phys_req_item(addr, req, i, val, 0);
-}
-static inline void write_phys_req_item(hwaddr addr,
-                                       ioreq_t *req, uint32_t i, void *val)
-{
-    rw_phys_req_item(addr, req, i, val, 1);
-}
-
-
-static void cpu_ioreq_pio(ioreq_t *req)
-{
-    uint32_t i;
-
-    trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
-                         req->data, req->count, req->size);
-
-    if (req->size > sizeof(uint32_t)) {
-        hw_error("PIO: bad size (%u)", req->size);
-    }
-
-    if (req->dir == IOREQ_READ) {
-        if (!req->data_is_ptr) {
-            req->data = do_inp(req->addr, req->size);
-            trace_cpu_ioreq_pio_read_reg(req, req->data, req->addr,
-                                         req->size);
-        } else {
-            uint32_t tmp;
-
-            for (i = 0; i < req->count; i++) {
-                tmp = do_inp(req->addr, req->size);
-                write_phys_req_item(req->data, req, i, &tmp);
-            }
-        }
-    } else if (req->dir == IOREQ_WRITE) {
-        if (!req->data_is_ptr) {
-            trace_cpu_ioreq_pio_write_reg(req, req->data, req->addr,
-                                          req->size);
-            do_outp(req->addr, req->size, req->data);
-        } else {
-            for (i = 0; i < req->count; i++) {
-                uint32_t tmp = 0;
-
-                read_phys_req_item(req->data, req, i, &tmp);
-                do_outp(req->addr, req->size, tmp);
-            }
-        }
-    }
-}
-
-static void cpu_ioreq_move(ioreq_t *req)
-{
-    uint32_t i;
-
-    trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
-                         req->data, req->count, req->size);
-
-    if (req->size > sizeof(req->data)) {
-        hw_error("MMIO: bad size (%u)", req->size);
-    }
-
-    if (!req->data_is_ptr) {
-        if (req->dir == IOREQ_READ) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->addr, req, i, &req->data);
-            }
-        } else if (req->dir == IOREQ_WRITE) {
-            for (i = 0; i < req->count; i++) {
-                write_phys_req_item(req->addr, req, i, &req->data);
-            }
-        }
-    } else {
-        uint64_t tmp;
-
-        if (req->dir == IOREQ_READ) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->addr, req, i, &tmp);
-                write_phys_req_item(req->data, req, i, &tmp);
-            }
-        } else if (req->dir == IOREQ_WRITE) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->data, req, i, &tmp);
-                write_phys_req_item(req->addr, req, i, &tmp);
-            }
-        }
-    }
-}
-
-static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
-{
-    uint32_t sbdf = req->addr >> 32;
-    uint32_t reg = req->addr;
-    XenPciDevice *xendev;
-
-    if (req->size != sizeof(uint8_t) && req->size != sizeof(uint16_t) &&
-        req->size != sizeof(uint32_t)) {
-        hw_error("PCI config access: bad size (%u)", req->size);
-    }
-
-    if (req->count != 1) {
-        hw_error("PCI config access: bad count (%u)", req->count);
-    }
-
-    QLIST_FOREACH(xendev, &state->dev_list, entry) {
-        if (xendev->sbdf != sbdf) {
-            continue;
-        }
-
-        if (!req->data_is_ptr) {
-            if (req->dir == IOREQ_READ) {
-                req->data = pci_host_config_read_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->size);
-                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
-                                            req->size, req->data);
-            } else if (req->dir == IOREQ_WRITE) {
-                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
-                                             req->size, req->data);
-                pci_host_config_write_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->data, req->size);
-            }
-        } else {
-            uint32_t tmp;
-
-            if (req->dir == IOREQ_READ) {
-                tmp = pci_host_config_read_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->size);
-                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
-                                            req->size, tmp);
-                write_phys_req_item(req->data, req, 0, &tmp);
-            } else if (req->dir == IOREQ_WRITE) {
-                read_phys_req_item(req->data, req, 0, &tmp);
-                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
-                                             req->size, tmp);
-                pci_host_config_write_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    tmp, req->size);
-            }
-        }
-    }
-}
-
 static void regs_to_cpu(vmware_regs_t *vmport_regs, ioreq_t *req)
 {
     X86CPU *cpu;
@@ -1030,226 +510,6 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
     current_cpu = NULL;
 }
 
-static void handle_ioreq(XenIOState *state, ioreq_t *req)
-{
-    trace_handle_ioreq(req, req->type, req->dir, req->df, req->data_is_ptr,
-                       req->addr, req->data, req->count, req->size);
-
-    if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
-            (req->size < sizeof (target_ulong))) {
-        req->data &= ((target_ulong) 1 << (8 * req->size)) - 1;
-    }
-
-    if (req->dir == IOREQ_WRITE)
-        trace_handle_ioreq_write(req, req->type, req->df, req->data_is_ptr,
-                                 req->addr, req->data, req->count, req->size);
-
-    switch (req->type) {
-        case IOREQ_TYPE_PIO:
-            cpu_ioreq_pio(req);
-            break;
-        case IOREQ_TYPE_COPY:
-            cpu_ioreq_move(req);
-            break;
-        case IOREQ_TYPE_VMWARE_PORT:
-            handle_vmport_ioreq(state, req);
-            break;
-        case IOREQ_TYPE_TIMEOFFSET:
-            break;
-        case IOREQ_TYPE_INVALIDATE:
-            xen_invalidate_map_cache();
-            break;
-        case IOREQ_TYPE_PCI_CONFIG:
-            cpu_ioreq_config(state, req);
-            break;
-        default:
-            hw_error("Invalid ioreq type 0x%x\n", req->type);
-    }
-    if (req->dir == IOREQ_READ) {
-        trace_handle_ioreq_read(req, req->type, req->df, req->data_is_ptr,
-                                req->addr, req->data, req->count, req->size);
-    }
-}
-
-static bool handle_buffered_iopage(XenIOState *state)
-{
-    buffered_iopage_t *buf_page = state->buffered_io_page;
-    buf_ioreq_t *buf_req = NULL;
-    bool handled_ioreq = false;
-    ioreq_t req;
-    int qw;
-
-    if (!buf_page) {
-        return 0;
-    }
-
-    memset(&req, 0x00, sizeof(req));
-    req.state = STATE_IOREQ_READY;
-    req.count = 1;
-    req.dir = IOREQ_WRITE;
-
-    for (;;) {
-        uint32_t rdptr = buf_page->read_pointer, wrptr;
-
-        xen_rmb();
-        wrptr = buf_page->write_pointer;
-        xen_rmb();
-        if (rdptr != buf_page->read_pointer) {
-            continue;
-        }
-        if (rdptr == wrptr) {
-            break;
-        }
-        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
-        req.size = 1U << buf_req->size;
-        req.addr = buf_req->addr;
-        req.data = buf_req->data;
-        req.type = buf_req->type;
-        xen_rmb();
-        qw = (req.size == 8);
-        if (qw) {
-            if (rdptr + 1 == wrptr) {
-                hw_error("Incomplete quad word buffered ioreq");
-            }
-            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
-                                           IOREQ_BUFFER_SLOT_NUM];
-            req.data |= ((uint64_t)buf_req->data) << 32;
-            xen_rmb();
-        }
-
-        handle_ioreq(state, &req);
-
-        /* Only req.data may get updated by handle_ioreq(), albeit even that
-         * should not happen as such data would never make it to the guest (we
-         * can only usefully see writes here after all).
-         */
-        assert(req.state == STATE_IOREQ_READY);
-        assert(req.count == 1);
-        assert(req.dir == IOREQ_WRITE);
-        assert(!req.data_is_ptr);
-
-        qatomic_add(&buf_page->read_pointer, qw + 1);
-        handled_ioreq = true;
-    }
-
-    return handled_ioreq;
-}
-
-static void handle_buffered_io(void *opaque)
-{
-    XenIOState *state = opaque;
-
-    if (handle_buffered_iopage(state)) {
-        timer_mod(state->buffered_io_timer,
-                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-    } else {
-        timer_del(state->buffered_io_timer);
-        xenevtchn_unmask(state->xce_handle, state->bufioreq_local_port);
-    }
-}
-
-static void cpu_handle_ioreq(void *opaque)
-{
-    XenIOState *state = opaque;
-    ioreq_t *req = cpu_get_ioreq(state);
-
-    handle_buffered_iopage(state);
-    if (req) {
-        ioreq_t copy = *req;
-
-        xen_rmb();
-        handle_ioreq(state, &copy);
-        req->data = copy.data;
-
-        if (req->state != STATE_IOREQ_INPROCESS) {
-            fprintf(stderr, "Badness in I/O request ... not in service?!: "
-                    "%x, ptr: %x, port: %"PRIx64", "
-                    "data: %"PRIx64", count: %u, size: %u, type: %u\n",
-                    req->state, req->data_is_ptr, req->addr,
-                    req->data, req->count, req->size, req->type);
-            destroy_hvm_domain(false);
-            return;
-        }
-
-        xen_wmb(); /* Update ioreq contents /then/ update state. */
-
-        /*
-         * We do this before we send the response so that the tools
-         * have the opportunity to pick up on the reset before the
-         * guest resumes and does a hlt with interrupts disabled which
-         * causes Xen to powerdown the domain.
-         */
-        if (runstate_is_running()) {
-            ShutdownCause request;
-
-            if (qemu_shutdown_requested_get()) {
-                destroy_hvm_domain(false);
-            }
-            request = qemu_reset_requested_get();
-            if (request) {
-                qemu_system_reset(request);
-                destroy_hvm_domain(true);
-            }
-        }
-
-        req->state = STATE_IORESP_READY;
-        xenevtchn_notify(state->xce_handle,
-                         state->ioreq_local_port[state->send_vcpu]);
-    }
-}
-
-static void xen_main_loop_prepare(XenIOState *state)
-{
-    int evtchn_fd = -1;
-
-    if (state->xce_handle != NULL) {
-        evtchn_fd = xenevtchn_fd(state->xce_handle);
-    }
-
-    state->buffered_io_timer = timer_new_ms(QEMU_CLOCK_REALTIME, handle_buffered_io,
-                                                 state);
-
-    if (evtchn_fd != -1) {
-        CPUState *cpu_state;
-
-        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
-        CPU_FOREACH(cpu_state) {
-            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
-                    __func__, cpu_state->cpu_index, cpu_state);
-            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
-        }
-        qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
-    }
-}
-
-
-static void xen_hvm_change_state_handler(void *opaque, bool running,
-                                         RunState rstate)
-{
-    XenIOState *state = opaque;
-
-    if (running) {
-        xen_main_loop_prepare(state);
-    }
-
-    xen_set_ioreq_server_state(xen_domid,
-                               state->ioservid,
-                               (rstate == RUN_STATE_RUNNING));
-}
-
-static void xen_exit_notifier(Notifier *n, void *data)
-{
-    XenIOState *state = container_of(n, XenIOState, exit);
-
-    xen_destroy_ioreq_server(xen_domid, state->ioservid);
-    if (state->fres != NULL) {
-        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
-    }
-
-    xenevtchn_close(state->xce_handle);
-    xs_daemon_close(state->xenstore);
-}
-
 #ifdef XEN_COMPAT_PHYSMAP
 static void xen_read_physmap(XenIOState *state)
 {
@@ -1309,178 +569,17 @@ static void xen_wakeup_notifier(Notifier *notifier, void *data)
     xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 0);
 }
 
-static int xen_map_ioreq_server(XenIOState *state)
-{
-    void *addr = NULL;
-    xen_pfn_t ioreq_pfn;
-    xen_pfn_t bufioreq_pfn;
-    evtchn_port_t bufioreq_evtchn;
-    int rc;
-
-    /*
-     * Attempt to map using the resource API and fall back to normal
-     * foreign mapping if this is not supported.
-     */
-    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
-    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
-    state->fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
-                                         XENMEM_resource_ioreq_server,
-                                         state->ioservid, 0, 2,
-                                         &addr,
-                                         PROT_READ | PROT_WRITE, 0);
-    if (state->fres != NULL) {
-        trace_xen_map_resource_ioreq(state->ioservid, addr);
-        state->buffered_io_page = addr;
-        state->shared_page = addr + TARGET_PAGE_SIZE;
-    } else if (errno != EOPNOTSUPP) {
-        error_report("failed to map ioreq server resources: error %d handle=%p",
-                     errno, xen_xc);
-        return -1;
-    }
-
-    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
-                                   (state->shared_page == NULL) ?
-                                   &ioreq_pfn : NULL,
-                                   (state->buffered_io_page == NULL) ?
-                                   &bufioreq_pfn : NULL,
-                                   &bufioreq_evtchn);
-    if (rc < 0) {
-        error_report("failed to get ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        return rc;
-    }
-
-    if (state->shared_page == NULL) {
-        DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
-
-        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                  PROT_READ | PROT_WRITE,
-                                                  1, &ioreq_pfn, NULL);
-        if (state->shared_page == NULL) {
-            error_report("map shared IO page returned error %d handle=%p",
-                         errno, xen_xc);
-        }
-    }
-
-    if (state->buffered_io_page == NULL) {
-        DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
-
-        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                       PROT_READ | PROT_WRITE,
-                                                       1, &bufioreq_pfn,
-                                                       NULL);
-        if (state->buffered_io_page == NULL) {
-            error_report("map buffered IO page returned error %d", errno);
-            return -1;
-        }
-    }
-
-    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
-        return -1;
-    }
-
-    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
-
-    state->bufioreq_remote_port = bufioreq_evtchn;
-
-    return 0;
-}
-
 void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 {
     MachineState *ms = MACHINE(pcms);
     unsigned int max_cpus = ms->smp.max_cpus;
-    int i, rc;
+    int rc;
     xen_pfn_t ioreq_pfn;
     XenIOState *state;
 
     state = g_new0(XenIOState, 1);
 
-    state->xce_handle = xenevtchn_open(NULL, 0);
-    if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
-        goto err;
-    }
-
-    state->xenstore = xs_daemon_open();
-    if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
-        goto err;
-    }
-
-    xen_create_ioreq_server(xen_domid, &state->ioservid);
-
-    state->exit.notify = xen_exit_notifier;
-    qemu_add_exit_notifier(&state->exit);
-
-    /*
-     * Register wake-up support in QMP query-current-machine API
-     */
-    qemu_register_wakeup_support();
-
-    rc = xen_map_ioreq_server(state);
-    if (rc < 0) {
-        goto err;
-    }
-
-    /* Note: cpus is empty at this point in init */
-    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
-
-    rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
-    if (rc < 0) {
-        error_report("failed to enable ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        goto err;
-    }
-
-    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
-
-    /* FIXME: how about if we overflow the page here? */
-    for (i = 0; i < max_cpus; i++) {
-        rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                        xen_vcpu_eport(state->shared_page, i));
-        if (rc == -1) {
-            error_report("shared evtchn %d bind error %d", i, errno);
-            goto err;
-        }
-        state->ioreq_local_port[i] = rc;
-    }
-
-    rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                    state->bufioreq_remote_port);
-    if (rc == -1) {
-        error_report("buffered evtchn bind error %d", errno);
-        goto err;
-    }
-    state->bufioreq_local_port = rc;
-
-    /* Init RAM management */
-#ifdef XEN_COMPAT_PHYSMAP
-    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
-#else
-    xen_map_cache_init(NULL, state);
-#endif
-
-    qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
-
-    state->memory_listener = xen_memory_listener;
-    memory_listener_register(&state->memory_listener, &address_space_memory);
-
-    state->io_listener = xen_io_listener;
-    memory_listener_register(&state->io_listener, &address_space_io);
-
-    state->device_listener = xen_device_listener;
-    QLIST_INIT(&state->dev_list);
-    device_listener_register(&state->device_listener);
-
-    xen_bus_init();
-
-    /* Initialize backend core & drivers */
-    if (xen_be_init() != 0) {
-        error_report("xen backend core setup failed");
-        goto err;
-    }
-    xen_be_register_common();
+    xen_register_ioreq(state, max_cpus, xen_memory_listener);
 
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
@@ -1520,59 +619,11 @@ err:
     exit(1);
 }
 
-void destroy_hvm_domain(bool reboot)
-{
-    xc_interface *xc_handle;
-    int sts;
-    int rc;
-
-    unsigned int reason = reboot ? SHUTDOWN_reboot : SHUTDOWN_poweroff;
-
-    if (xen_dmod) {
-        rc = xendevicemodel_shutdown(xen_dmod, xen_domid, reason);
-        if (!rc) {
-            return;
-        }
-        if (errno != ENOTTY /* old Xen */) {
-            perror("xendevicemodel_shutdown failed");
-        }
-        /* well, try the old thing then */
-    }
-
-    xc_handle = xc_interface_open(0, 0, 0);
-    if (xc_handle == NULL) {
-        fprintf(stderr, "Cannot acquire xenctrl handle\n");
-    } else {
-        sts = xc_domain_shutdown(xc_handle, xen_domid, reason);
-        if (sts != 0) {
-            fprintf(stderr, "xc_domain_shutdown failed to issue %s, "
-                    "sts %d, %s\n", reboot ? "reboot" : "poweroff",
-                    sts, strerror(errno));
-        } else {
-            fprintf(stderr, "Issued domain %d %s\n", xen_domid,
-                    reboot ? "reboot" : "poweroff");
-        }
-        xc_interface_close(xc_handle);
-    }
-}
-
 void xen_register_framebuffer(MemoryRegion *mr)
 {
     framebuffer = mr;
 }
 
-void xen_shutdown_fatal_error(const char *fmt, ...)
-{
-    va_list ap;
-
-    va_start(ap, fmt);
-    vfprintf(stderr, fmt, ap);
-    va_end(ap);
-    fprintf(stderr, "Will destroy the domain.\n");
-    /* destroy the domain */
-    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
-}
-
 void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 {
     if (unlikely(xen_in_migration)) {
@@ -1604,3 +655,57 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
         memory_global_dirty_log_stop(GLOBAL_DIRTY_MIGRATION);
     }
 }
+
+void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
+                                bool add)
+{
+    hwaddr start_addr = section->offset_within_address_space;
+    ram_addr_t size = int128_get64(section->size);
+    bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
+    hvmmem_type_t mem_type;
+
+    if (!memory_region_is_ram(section->mr)) {
+        return;
+    }
+
+    if (log_dirty != add) {
+        return;
+    }
+
+    trace_xen_client_set_memory(start_addr, size, log_dirty);
+
+    start_addr &= TARGET_PAGE_MASK;
+    size = TARGET_PAGE_ALIGN(size);
+
+    if (add) {
+        if (!memory_region_is_rom(section->mr)) {
+            xen_add_to_physmap(state, start_addr, size,
+                               section->mr, section->offset_within_region);
+        } else {
+            mem_type = HVMMEM_ram_ro;
+            if (xen_set_mem_type(xen_domid, mem_type,
+                                 start_addr >> TARGET_PAGE_BITS,
+                                 size >> TARGET_PAGE_BITS)) {
+                DPRINTF("xen_set_mem_type error, addr: "TARGET_FMT_plx"\n",
+                        start_addr);
+            }
+        }
+    } else {
+        if (xen_remove_from_physmap(state, start_addr, size) < 0) {
+            DPRINTF("physmapping does not exist at "TARGET_FMT_plx"\n", start_addr);
+        }
+    }
+}
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    switch (req->type) {
+    case IOREQ_TYPE_VMWARE_PORT:
+            handle_vmport_ioreq(state, req);
+        break;
+    default:
+        hw_error("Invalid ioreq type 0x%x\n", req->type);
+    }
+
+    return;
+}
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index 19d0637c46..008e036d63 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -25,4 +25,7 @@ specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
 
 xen_ss = ss.source_set()
 
-xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
+xen_ss.add(when: 'CONFIG_XEN', if_true: files(
+  'xen-mapcache.c',
+  'xen-hvm-common.c',
+))
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index 2c8f238f42..02ca1183da 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -42,6 +42,20 @@ xs_node_vscanf(char *path, char *value) "%s %s"
 xs_node_watch(char *path) "%s"
 xs_node_unwatch(char *path) "%s"
 
+# xen-hvm-common.c
+xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
+xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "0x%"PRIx64" size 0x%lx, log_dirty %i"
+handle_ioreq(void *req, uint32_t type, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p type=%d dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+handle_ioreq_read(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p read type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+handle_ioreq_write(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p write type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+cpu_ioreq_pio(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p pio dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+cpu_ioreq_pio_read_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio read reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
+cpu_ioreq_pio_write_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio write reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
+cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p copy dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
+cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
+cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
+
 # xen-mapcache.c
 xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
 xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
new file mode 100644
index 0000000000..e748d8d423
--- /dev/null
+++ b/hw/xen/xen-hvm-common.c
@@ -0,0 +1,870 @@
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qapi/error.h"
+#include "trace.h"
+
+#include "hw/pci/pci_host.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/xen-legacy-backend.h"
+#include "hw/xen/xen-bus.h"
+#include "hw/boards.h"
+#include "hw/xen/arch_hvm.h"
+
+MemoryRegion ram_memory;
+
+void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
+                   Error **errp)
+{
+    unsigned long nr_pfn;
+    xen_pfn_t *pfn_list;
+    int i;
+
+    if (runstate_check(RUN_STATE_INMIGRATE)) {
+        /* RAM already populated in Xen */
+        fprintf(stderr, "%s: do not alloc "RAM_ADDR_FMT
+                " bytes of ram at "RAM_ADDR_FMT" when runstate is INMIGRATE\n",
+                __func__, size, ram_addr);
+        return;
+    }
+
+    if (mr == &ram_memory) {
+        return;
+    }
+
+    trace_xen_ram_alloc(ram_addr, size);
+
+    nr_pfn = size >> TARGET_PAGE_BITS;
+    pfn_list = g_malloc(sizeof(*pfn_list) * nr_pfn);
+
+    for (i = 0; i < nr_pfn; i++) {
+        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
+    }
+
+    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
+        error_setg(errp, "xen: failed to populate ram at " RAM_ADDR_FMT,
+                   ram_addr);
+    }
+
+    g_free(pfn_list);
+}
+
+
+static void xen_set_memory(struct MemoryListener *listener,
+                           MemoryRegionSection *section,
+                           bool add)
+{
+    XenIOState *state = container_of(listener, XenIOState, memory_listener);
+
+    if (section->mr == &ram_memory) {
+        return;
+    } else {
+        if (add) {
+            xen_map_memory_section(xen_domid, state->ioservid,
+                                   section);
+        } else {
+            xen_unmap_memory_section(xen_domid, state->ioservid,
+                                     section);
+        }
+    }
+    arch_xen_set_memory(state, section, add);
+}
+
+void xen_region_add(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    memory_region_ref(section->mr);
+    xen_set_memory(listener, section, true);
+}
+
+void xen_region_del(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    xen_set_memory(listener, section, false);
+    memory_region_unref(section->mr);
+}
+
+void xen_io_add(MemoryListener *listener,
+                       MemoryRegionSection *section)
+{
+    XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops) {
+        return;
+    }
+
+    memory_region_ref(mr);
+
+    xen_map_io_section(xen_domid, state->ioservid, section);
+}
+
+void xen_io_del(MemoryListener *listener,
+                       MemoryRegionSection *section)
+{
+    XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops) {
+        return;
+    }
+
+    xen_unmap_io_section(xen_domid, state->ioservid, section);
+
+    memory_region_unref(mr);
+}
+
+void xen_device_realize(DeviceListener *listener,
+                               DeviceState *dev)
+{
+    XenIOState *state = container_of(listener, XenIOState, device_listener);
+
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
+        PCIDevice *pci_dev = PCI_DEVICE(dev);
+        XenPciDevice *xendev = g_new(XenPciDevice, 1);
+
+        xendev->pci_dev = pci_dev;
+        xendev->sbdf = PCI_BUILD_BDF(pci_dev_bus_num(pci_dev),
+                                     pci_dev->devfn);
+        QLIST_INSERT_HEAD(&state->dev_list, xendev, entry);
+
+        xen_map_pcidev(xen_domid, state->ioservid, pci_dev);
+    }
+}
+
+void xen_device_unrealize(DeviceListener *listener,
+                                 DeviceState *dev)
+{
+    XenIOState *state = container_of(listener, XenIOState, device_listener);
+
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
+        PCIDevice *pci_dev = PCI_DEVICE(dev);
+        XenPciDevice *xendev, *next;
+
+        xen_unmap_pcidev(xen_domid, state->ioservid, pci_dev);
+
+        QLIST_FOREACH_SAFE(xendev, &state->dev_list, entry, next) {
+            if (xendev->pci_dev == pci_dev) {
+                QLIST_REMOVE(xendev, entry);
+                g_free(xendev);
+                break;
+            }
+        }
+    }
+}
+
+MemoryListener xen_io_listener = {
+    .region_add = xen_io_add,
+    .region_del = xen_io_del,
+    .priority = 10,
+};
+
+DeviceListener xen_device_listener = {
+    .realize = xen_device_realize,
+    .unrealize = xen_device_unrealize,
+};
+
+/* Get the ioreq packet for this vCPU from shared memory. */
+static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
+{
+    ioreq_t *req = xen_vcpu_ioreq(state->shared_page, vcpu);
+
+    if (req->state != STATE_IOREQ_READY) {
+        DPRINTF("I/O request not ready: "
+                "%x, ptr: %x, port: %"PRIx64", "
+                "data: %"PRIx64", count: %u, size: %u\n",
+                req->state, req->data_is_ptr, req->addr,
+                req->data, req->count, req->size);
+        return NULL;
+    }
+
+    xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */
+
+    req->state = STATE_IOREQ_INPROCESS;
+    return req;
+}
+
+/* Use poll to get the port notification, then fetch the pending ioreq */
+/* for the notified vCPU from shared memory. Returns NULL on a read */
+/* error or when no request is pending. */
+static ioreq_t *cpu_get_ioreq(XenIOState *state)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+    unsigned int max_cpus = ms->smp.max_cpus;
+    int i;
+    evtchn_port_t port;
+
+    port = xenevtchn_pending(state->xce_handle);
+    if (port == state->bufioreq_local_port) {
+        timer_mod(state->buffered_io_timer,
+                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+        return NULL;
+    }
+
+    if (port != -1) {
+        for (i = 0; i < max_cpus; i++) {
+            if (state->ioreq_local_port[i] == port) {
+                break;
+            }
+        }
+
+        if (i == max_cpus) {
+            hw_error("Fatal error while trying to get io event!\n");
+        }
+
+        /* unmask the wanted port again */
+        xenevtchn_unmask(state->xce_handle, port);
+
+        /* get the io packet from shared memory */
+        state->send_vcpu = i;
+        return cpu_get_ioreq_from_shared_memory(state, i);
+    }
+
+    /* read error or read nothing */
+    return NULL;
+}
+
+static uint32_t do_inp(uint32_t addr, unsigned long size)
+{
+    switch (size) {
+        case 1:
+            return cpu_inb(addr);
+        case 2:
+            return cpu_inw(addr);
+        case 4:
+            return cpu_inl(addr);
+        default:
+            hw_error("inp: bad size: %04x %lx", addr, size);
+    }
+}
+
+static void do_outp(uint32_t addr,
+        unsigned long size, uint32_t val)
+{
+    switch (size) {
+        case 1:
+            return cpu_outb(addr, val);
+        case 2:
+            return cpu_outw(addr, val);
+        case 4:
+            return cpu_outl(addr, val);
+        default:
+            hw_error("outp: bad size: %04x %lx", addr, size);
+    }
+}
+
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(hwaddr addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
+{
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    hwaddr offset = (hwaddr)req->size * i;
+    if (req->df) {
+        addr -= offset;
+    } else {
+        addr += offset;
+    }
+    cpu_physical_memory_rw(addr, val, req->size, rw);
+}
+
+static inline void read_phys_req_item(hwaddr addr,
+                                      ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(hwaddr addr,
+                                       ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 1);
+}
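The comment above rw_phys_req_item() notes that the address stepping is done with unsigned arithmetic so that overflow merely wraps (yielding a truncated guest address) instead of being undefined behaviour. A minimal stand-alone model of that address math, with hypothetical names (`req_item_addr`, `hwaddr_t` standing in for QEMU's `hwaddr`), might look like:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-alone model of rw_phys_req_item()'s address math:
 * compute the address of the i-th item, stepping downwards when the
 * direction flag (df) is set. All arithmetic is unsigned, so overflow
 * wraps rather than being undefined behaviour. */
typedef uint64_t hwaddr_t;

static hwaddr_t req_item_addr(hwaddr_t addr, uint32_t size, uint32_t i, int df)
{
    /* Widen before multiplying so size * i cannot overflow uint32_t. */
    hwaddr_t offset = (hwaddr_t)size * i;

    return df ? addr - offset : addr + offset;
}
```

The key point is widening `size` to the address type before the multiply; computing `size * i` in 32 bits first could silently truncate for large repeat counts.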
+
+
+void cpu_ioreq_pio(ioreq_t *req)
+{
+    uint32_t i;
+
+    trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
+                         req->data, req->count, req->size);
+
+    if (req->size > sizeof(uint32_t)) {
+        hw_error("PIO: bad size (%u)", req->size);
+    }
+
+    if (req->dir == IOREQ_READ) {
+        if (!req->data_is_ptr) {
+            req->data = do_inp(req->addr, req->size);
+            trace_cpu_ioreq_pio_read_reg(req, req->data, req->addr,
+                                         req->size);
+        } else {
+            uint32_t tmp;
+
+            for (i = 0; i < req->count; i++) {
+                tmp = do_inp(req->addr, req->size);
+                write_phys_req_item(req->data, req, i, &tmp);
+            }
+        }
+    } else if (req->dir == IOREQ_WRITE) {
+        if (!req->data_is_ptr) {
+            trace_cpu_ioreq_pio_write_reg(req, req->data, req->addr,
+                                          req->size);
+            do_outp(req->addr, req->size, req->data);
+        } else {
+            for (i = 0; i < req->count; i++) {
+                uint32_t tmp = 0;
+
+                read_phys_req_item(req->data, req, i, &tmp);
+                do_outp(req->addr, req->size, tmp);
+            }
+        }
+    }
+}
+
+static void cpu_ioreq_move(ioreq_t *req)
+{
+    uint32_t i;
+
+    trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
+                         req->data, req->count, req->size);
+
+    if (req->size > sizeof(req->data)) {
+        hw_error("MMIO: bad size (%u)", req->size);
+    }
+
+    if (!req->data_is_ptr) {
+        if (req->dir == IOREQ_READ) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->addr, req, i, &req->data);
+            }
+        } else if (req->dir == IOREQ_WRITE) {
+            for (i = 0; i < req->count; i++) {
+                write_phys_req_item(req->addr, req, i, &req->data);
+            }
+        }
+    } else {
+        uint64_t tmp;
+
+        if (req->dir == IOREQ_READ) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
+            }
+        } else if (req->dir == IOREQ_WRITE) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
+            }
+        }
+    }
+}
+
+static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
+{
+    uint32_t sbdf = req->addr >> 32;
+    uint32_t reg = req->addr;
+    XenPciDevice *xendev;
+
+    if (req->size != sizeof(uint8_t) && req->size != sizeof(uint16_t) &&
+        req->size != sizeof(uint32_t)) {
+        hw_error("PCI config access: bad size (%u)", req->size);
+    }
+
+    if (req->count != 1) {
+        hw_error("PCI config access: bad count (%u)", req->count);
+    }
+
+    QLIST_FOREACH(xendev, &state->dev_list, entry) {
+        if (xendev->sbdf != sbdf) {
+            continue;
+        }
+
+        if (!req->data_is_ptr) {
+            if (req->dir == IOREQ_READ) {
+                req->data = pci_host_config_read_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->size);
+                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
+                                            req->size, req->data);
+            } else if (req->dir == IOREQ_WRITE) {
+                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
+                                             req->size, req->data);
+                pci_host_config_write_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->data, req->size);
+            }
+        } else {
+            uint32_t tmp;
+
+            if (req->dir == IOREQ_READ) {
+                tmp = pci_host_config_read_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->size);
+                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
+                                            req->size, tmp);
+                write_phys_req_item(req->data, req, 0, &tmp);
+            } else if (req->dir == IOREQ_WRITE) {
+                read_phys_req_item(req->data, req, 0, &tmp);
+                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
+                                             req->size, tmp);
+                pci_host_config_write_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    tmp, req->size);
+            }
+        }
+    }
+}
+
+static void handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    trace_handle_ioreq(req, req->type, req->dir, req->df, req->data_is_ptr,
+                       req->addr, req->data, req->count, req->size);
+
+    if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
+            (req->size < sizeof (target_ulong))) {
+        req->data &= ((target_ulong) 1 << (8 * req->size)) - 1;
+    }
+
+    if (req->dir == IOREQ_WRITE)
+        trace_handle_ioreq_write(req, req->type, req->df, req->data_is_ptr,
+                                 req->addr, req->data, req->count, req->size);
+
+    switch (req->type) {
+        case IOREQ_TYPE_PIO:
+            cpu_ioreq_pio(req);
+            break;
+        case IOREQ_TYPE_COPY:
+            cpu_ioreq_move(req);
+            break;
+        case IOREQ_TYPE_TIMEOFFSET:
+            break;
+        case IOREQ_TYPE_INVALIDATE:
+            xen_invalidate_map_cache();
+            break;
+        case IOREQ_TYPE_PCI_CONFIG:
+            cpu_ioreq_config(state, req);
+            break;
+        default:
+            arch_handle_ioreq(state, req);
+    }
+    if (req->dir == IOREQ_READ) {
+        trace_handle_ioreq_read(req, req->type, req->df, req->data_is_ptr,
+                                req->addr, req->data, req->count, req->size);
+    }
+}
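handle_ioreq() masks direct (non-pointer) write data down to the low `size * 8` bits before dispatching, so a 1-byte port write never carries stale high bits. A small sketch of that masking, using `uint64_t` in place of `target_ulong` and a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of handle_ioreq()'s masking of direct write data: writes
 * narrower than the full register width keep only the low size*8 bits.
 * uint64_t stands in for target_ulong here. */
static uint64_t mask_write_data(uint64_t data, unsigned size)
{
    if (size < sizeof(uint64_t)) {
        data &= ((uint64_t)1 << (8 * size)) - 1;
    }
    return data;
}
```

The `size < sizeof(...)` guard matters: shifting a 64-bit value by 64 would be undefined behaviour, so full-width writes are passed through unmasked.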
+
+static int handle_buffered_iopage(XenIOState *state)
+{
+    buffered_iopage_t *buf_page = state->buffered_io_page;
+    buf_ioreq_t *buf_req = NULL;
+    ioreq_t req;
+    int qw;
+
+    if (!buf_page) {
+        return 0;
+    }
+
+    memset(&req, 0x00, sizeof(req));
+    req.state = STATE_IOREQ_READY;
+    req.count = 1;
+    req.dir = IOREQ_WRITE;
+
+    for (;;) {
+        uint32_t rdptr = buf_page->read_pointer, wrptr;
+
+        xen_rmb();
+        wrptr = buf_page->write_pointer;
+        xen_rmb();
+        if (rdptr != buf_page->read_pointer) {
+            continue;
+        }
+        if (rdptr == wrptr) {
+            break;
+        }
+        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
+        req.size = 1U << buf_req->size;
+        req.addr = buf_req->addr;
+        req.data = buf_req->data;
+        req.type = buf_req->type;
+        xen_rmb();
+        qw = (req.size == 8);
+        if (qw) {
+            if (rdptr + 1 == wrptr) {
+                hw_error("Incomplete quad word buffered ioreq");
+            }
+            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
+                                           IOREQ_BUFFER_SLOT_NUM];
+            req.data |= ((uint64_t)buf_req->data) << 32;
+            xen_rmb();
+        }
+
+        handle_ioreq(state, &req);
+
+        /* Only req.data may get updated by handle_ioreq(), albeit even that
+         * should not happen as such data would never make it to the guest (we
+         * can only usefully see writes here after all).
+         */
+        assert(req.state == STATE_IOREQ_READY);
+        assert(req.count == 1);
+        assert(req.dir == IOREQ_WRITE);
+        assert(!req.data_is_ptr);
+
+        qatomic_add(&buf_page->read_pointer, qw + 1);
+    }
+
+    return req.count;
+}
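The loop in handle_buffered_iopage() is a single-consumer walk over a fixed-size ring: entries between `read_pointer` and `write_pointer` are handled, then the read pointer is advanced. A simplified single-threaded model of that consume loop (hypothetical names; `SLOTS` stands in for `IOREQ_BUFFER_SLOT_NUM`, and the memory barriers and quad-word handling are elided):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal single-consumer model of the buffered-ioreq ring walk.
 * The real code re-reads the pointers with xen_rmb() barriers and
 * merges quad-word requests; this sketch keeps only the indexing. */
#define SLOTS 8

struct ring {
    uint32_t read_pointer;
    uint32_t write_pointer;
    uint32_t slot[SLOTS];
};

/* Drain all pending entries, accumulating them into *sum.
 * Returns the number of entries consumed. */
static int drain(struct ring *r, uint64_t *sum)
{
    int n = 0;

    while (r->read_pointer != r->write_pointer) {
        /* Pointers only ever grow; the modulo maps them into the ring. */
        *sum += r->slot[r->read_pointer % SLOTS];
        r->read_pointer++;   /* qatomic_add() in the real code */
        n++;
    }
    return n;
}
```

Note that the pointers are free-running counters reduced modulo the slot count on access, which is why the real code advances `read_pointer` with an atomic add rather than wrapping it explicitly.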
+
+static void handle_buffered_io(void *opaque)
+{
+    XenIOState *state = opaque;
+
+    if (handle_buffered_iopage(state)) {
+        timer_mod(state->buffered_io_timer,
+                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+    } else {
+        timer_del(state->buffered_io_timer);
+        xenevtchn_unmask(state->xce_handle, state->bufioreq_local_port);
+    }
+}
+
+static void cpu_handle_ioreq(void *opaque)
+{
+    XenIOState *state = opaque;
+    ioreq_t *req = cpu_get_ioreq(state);
+
+    handle_buffered_iopage(state);
+    if (req) {
+        ioreq_t copy = *req;
+
+        xen_rmb();
+        handle_ioreq(state, &copy);
+        req->data = copy.data;
+
+        if (req->state != STATE_IOREQ_INPROCESS) {
+            fprintf(stderr, "Badness in I/O request ... not in service?!: "
+                    "%x, ptr: %x, port: %"PRIx64", "
+                    "data: %"PRIx64", count: %u, size: %u, type: %u\n",
+                    req->state, req->data_is_ptr, req->addr,
+                    req->data, req->count, req->size, req->type);
+            destroy_hvm_domain(false);
+            return;
+        }
+
+        xen_wmb(); /* Update ioreq contents /then/ update state. */
+
+        /*
+         * We do this before we send the response so that the tools
+         * have the opportunity to pick up on the reset before the
+         * guest resumes and executes a hlt with interrupts disabled,
+         * which causes Xen to power down the domain.
+         */
+        if (runstate_is_running()) {
+            ShutdownCause request;
+
+            if (qemu_shutdown_requested_get()) {
+                destroy_hvm_domain(false);
+            }
+            request = qemu_reset_requested_get();
+            if (request) {
+                qemu_system_reset(request);
+                destroy_hvm_domain(true);
+            }
+        }
+
+        req->state = STATE_IORESP_READY;
+        xenevtchn_notify(state->xce_handle,
+                         state->ioreq_local_port[state->send_vcpu]);
+    }
+}
+
+static void xen_main_loop_prepare(XenIOState *state)
+{
+    int evtchn_fd = -1;
+
+    if (state->xce_handle != NULL) {
+        evtchn_fd = xenevtchn_fd(state->xce_handle);
+    }
+
+    state->buffered_io_timer = timer_new_ms(QEMU_CLOCK_REALTIME, handle_buffered_io,
+                                                 state);
+
+    if (evtchn_fd != -1) {
+        CPUState *cpu_state;
+
+        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
+        CPU_FOREACH(cpu_state) {
+            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
+                    __func__, cpu_state->cpu_index, cpu_state);
+            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
+        }
+        qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
+    }
+}
+
+
+void xen_hvm_change_state_handler(void *opaque, bool running,
+                                         RunState rstate)
+{
+    XenIOState *state = opaque;
+
+    if (running) {
+        xen_main_loop_prepare(state);
+    }
+
+    xen_set_ioreq_server_state(xen_domid,
+                               state->ioservid,
+                               (rstate == RUN_STATE_RUNNING));
+}
+
+void xen_exit_notifier(Notifier *n, void *data)
+{
+    XenIOState *state = container_of(n, XenIOState, exit);
+
+    xen_destroy_ioreq_server(xen_domid, state->ioservid);
+
+    xenevtchn_close(state->xce_handle);
+    xs_daemon_close(state->xenstore);
+}
+
+static int xen_map_ioreq_server(XenIOState *state)
+{
+    void *addr = NULL;
+    xenforeignmemory_resource_handle *fres;
+    xen_pfn_t ioreq_pfn;
+    xen_pfn_t bufioreq_pfn;
+    evtchn_port_t bufioreq_evtchn;
+    int rc;
+
+    /*
+     * Attempt to map using the resource API and fall back to normal
+     * foreign mapping if this is not supported.
+     */
+    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
+    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
+    fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
+                                         XENMEM_resource_ioreq_server,
+                                         state->ioservid, 0, 2,
+                                         &addr,
+                                         PROT_READ | PROT_WRITE, 0);
+    if (fres != NULL) {
+        trace_xen_map_resource_ioreq(state->ioservid, addr);
+        state->buffered_io_page = addr;
+        state->shared_page = addr + XC_PAGE_SIZE;
+    } else if (errno != EOPNOTSUPP) {
+        error_report("failed to map ioreq server resources: error %d handle=%p",
+                     errno, xen_xc);
+        return -1;
+    }
+
+    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
+                                   (state->shared_page == NULL) ?
+                                   &ioreq_pfn : NULL,
+                                   (state->buffered_io_page == NULL) ?
+                                   &bufioreq_pfn : NULL,
+                                   &bufioreq_evtchn);
+    if (rc < 0) {
+        error_report("failed to get ioreq server info: error %d handle=%p",
+                     errno, xen_xc);
+        return rc;
+    }
+
+    if (state->shared_page == NULL) {
+        DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
+
+        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                  PROT_READ | PROT_WRITE,
+                                                  1, &ioreq_pfn, NULL);
+        if (state->shared_page == NULL) {
+            error_report("map shared IO page returned error %d handle=%p",
+                         errno, xen_xc);
+        }
+    }
+
+    if (state->buffered_io_page == NULL) {
+        DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
+
+        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                       PROT_READ | PROT_WRITE,
+                                                       1, &bufioreq_pfn,
+                                                       NULL);
+        if (state->buffered_io_page == NULL) {
+            error_report("map buffered IO page returned error %d", errno);
+            return -1;
+        }
+    }
+
+    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
+        return -1;
+    }
+
+    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
+
+    state->bufioreq_remote_port = bufioreq_evtchn;
+
+    return 0;
+}
+
+void destroy_hvm_domain(bool reboot)
+{
+    xc_interface *xc_handle;
+    int sts;
+    int rc;
+
+    unsigned int reason = reboot ? SHUTDOWN_reboot : SHUTDOWN_poweroff;
+
+    if (xen_dmod) {
+        rc = xendevicemodel_shutdown(xen_dmod, xen_domid, reason);
+        if (!rc) {
+            return;
+        }
+        if (errno != ENOTTY /* old Xen */) {
+            perror("xendevicemodel_shutdown failed");
+        }
+        /* well, try the old thing then */
+    }
+
+    xc_handle = xc_interface_open(0, 0, 0);
+    if (xc_handle == NULL) {
+        fprintf(stderr, "Cannot acquire xenctrl handle\n");
+    } else {
+        sts = xc_domain_shutdown(xc_handle, xen_domid, reason);
+        if (sts != 0) {
+            fprintf(stderr, "xc_domain_shutdown failed to issue %s, "
+                    "sts %d, %s\n", reboot ? "reboot" : "poweroff",
+                    sts, strerror(errno));
+        } else {
+            fprintf(stderr, "Issued domain %d %s\n", xen_domid,
+                    reboot ? "reboot" : "poweroff");
+        }
+        xc_interface_close(xc_handle);
+    }
+}
+
+void xen_shutdown_fatal_error(const char *fmt, ...)
+{
+    va_list ap;
+
+    va_start(ap, fmt);
+    vfprintf(stderr, fmt, ap);
+    va_end(ap);
+    fprintf(stderr, "Will destroy the domain.\n");
+    /* destroy the domain */
+    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
+}
+
+static void xen_register_backend(XenIOState *state)
+{
+    /* Initialize backend core & drivers */
+    if (xen_be_init() != 0) {
+        error_report("xen backend core setup failed");
+        goto err;
+    }
+
+    xen_be_register_common();
+
+    return;
+
+err:
+    error_report("xen hardware virtual machine backend registration failed");
+    exit(1);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener)
+{
+    int i, rc;
+
+    state->xce_handle = xenevtchn_open(NULL, 0);
+    if (state->xce_handle == NULL) {
+        perror("xen: event channel open");
+        goto err;
+    }
+
+    state->xenstore = xs_daemon_open();
+    if (state->xenstore == NULL) {
+        perror("xen: xenstore open");
+        goto err;
+    }
+
+    xen_create_ioreq_server(xen_domid, &state->ioservid);
+
+    state->exit.notify = xen_exit_notifier;
+    qemu_add_exit_notifier(&state->exit);
+
+    /*
+     * Register wake-up support in QMP query-current-machine API
+     */
+    qemu_register_wakeup_support();
+
+    rc = xen_map_ioreq_server(state);
+    if (rc < 0) {
+        goto err;
+    }
+
+    /* Note: cpus is empty at this point in init */
+    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+
+    rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
+    if (rc < 0) {
+        error_report("failed to enable ioreq server info: error %d handle=%p",
+                     errno, xen_xc);
+        goto err;
+    }
+
+    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
+
+    /* FIXME: how about if we overflow the page here? */
+    for (i = 0; i < max_cpus; i++) {
+        rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                        xen_vcpu_eport(state->shared_page, i));
+        if (rc == -1) {
+            error_report("shared evtchn %d bind error %d", i, errno);
+            goto err;
+        }
+        state->ioreq_local_port[i] = rc;
+    }
+
+    rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                    state->bufioreq_remote_port);
+    if (rc == -1) {
+        error_report("buffered evtchn bind error %d", errno);
+        goto err;
+    }
+    state->bufioreq_local_port = rc;
+
+    /* Init RAM management */
+#ifdef XEN_COMPAT_PHYSMAP
+    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
+#else
+    xen_map_cache_init(NULL, state);
+#endif
+
+    qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
+
+    state->memory_listener = xen_memory_listener;
+    memory_listener_register(&state->memory_listener, &address_space_memory);
+
+    state->io_listener = xen_io_listener;
+    memory_listener_register(&state->io_listener, &address_space_io);
+
+    state->device_listener = xen_device_listener;
+    QLIST_INIT(&state->dev_list);
+    device_listener_register(&state->device_listener);
+
+    xen_bus_init();
+
+    xen_register_backend(state);
+
+    return;
+err:
+    error_report("xen hardware virtual machine initialisation failed");
+    exit(1);
+}
diff --git a/include/hw/i386/xen_arch_hvm.h b/include/hw/i386/xen_arch_hvm.h
new file mode 100644
index 0000000000..1000f8f543
--- /dev/null
+++ b/include/hw/i386/xen_arch_hvm.h
@@ -0,0 +1,11 @@
+#ifndef HW_XEN_ARCH_I386_HVM_H
+#define HW_XEN_ARCH_I386_HVM_H
+
+#include <xen/hvm/ioreq.h>
+#include "hw/xen/xen-hvm-common.h"
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
+void arch_xen_set_memory(XenIOState *state,
+                         MemoryRegionSection *section,
+                         bool add);
+#endif
diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
new file mode 100644
index 0000000000..26674648d8
--- /dev/null
+++ b/include/hw/xen/arch_hvm.h
@@ -0,0 +1,3 @@
+#if defined(TARGET_I386) || defined(TARGET_X86_64)
+#include "hw/i386/xen_arch_hvm.h"
+#endif
diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
new file mode 100644
index 0000000000..c16057835f
--- /dev/null
+++ b/include/hw/xen/xen-hvm-common.h
@@ -0,0 +1,97 @@
+#ifndef HW_XEN_HVM_COMMON_H
+#define HW_XEN_HVM_COMMON_H
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+
+#include "cpu.h"
+#include "hw/pci/pci.h"
+#include "hw/hw.h"
+#include "hw/xen/xen_common.h"
+#include "sysemu/runstate.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
+#include "sysemu/xen-mapcache.h"
+
+#include <xen/hvm/ioreq.h>
+
+extern MemoryRegion ram_memory;
+extern MemoryListener xen_io_listener;
+extern DeviceListener xen_device_listener;
+
+//#define DEBUG_XEN_HVM
+
+#ifdef DEBUG_XEN_HVM
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
+{
+    return shared_page->vcpu_ioreq[i].vp_eport;
+}
+static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
+{
+    return &shared_page->vcpu_ioreq[vcpu];
+}
+
+#define BUFFER_IO_MAX_DELAY  100
+
+typedef struct XenPhysmap {
+    hwaddr start_addr;
+    ram_addr_t size;
+    const char *name;
+    hwaddr phys_offset;
+
+    QLIST_ENTRY(XenPhysmap) list;
+} XenPhysmap;
+
+typedef struct XenPciDevice {
+    PCIDevice *pci_dev;
+    uint32_t sbdf;
+    QLIST_ENTRY(XenPciDevice) entry;
+} XenPciDevice;
+
+typedef struct XenIOState {
+    ioservid_t ioservid;
+    shared_iopage_t *shared_page;
+    buffered_iopage_t *buffered_io_page;
+    QEMUTimer *buffered_io_timer;
+    CPUState **cpu_by_vcpu_id;
+    /* the evtchn port for polling the notification */
+    evtchn_port_t *ioreq_local_port;
+    /* evtchn remote and local ports for buffered io */
+    evtchn_port_t bufioreq_remote_port;
+    evtchn_port_t bufioreq_local_port;
+    /* the evtchn fd for polling */
+    xenevtchn_handle *xce_handle;
+    /* which vcpu we are serving */
+    int send_vcpu;
+
+    struct xs_handle *xenstore;
+    MemoryListener memory_listener;
+    MemoryListener io_listener;
+    QLIST_HEAD(, XenPciDevice) dev_list;
+    DeviceListener device_listener;
+
+    Notifier exit;
+} XenIOState;
+
+void xen_exit_notifier(Notifier *n, void *data);
+
+void xen_region_add(MemoryListener *listener, MemoryRegionSection *section);
+void xen_region_del(MemoryListener *listener, MemoryRegionSection *section);
+void xen_io_add(MemoryListener *listener, MemoryRegionSection *section);
+void xen_io_del(MemoryListener *listener, MemoryRegionSection *section);
+void xen_device_realize(DeviceListener *listener, DeviceState *dev);
+void xen_device_unrealize(DeviceListener *listener, DeviceState *dev);
+
+void xen_hvm_change_state_handler(void *opaque, bool running, RunState rstate);
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener);
+
+void cpu_ioreq_pio(ioreq_t *req);
+#endif /* HW_XEN_HVM_COMMON_H */
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:07:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:07:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484013.750509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbks-0006qR-3l; Wed, 25 Jan 2023 09:07:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484013.750509; Wed, 25 Jan 2023 09:07:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKbks-0006qI-0z; Wed, 25 Jan 2023 09:07:42 +0000
Received: by outflank-mailman (input) for mailman id 484013;
 Wed, 25 Jan 2023 09:07:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6g1N=5W=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKbaQ-00012q-VX
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:56:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16ed3462-9c8e-11ed-91b6-6bf2151ebd3b
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Paolo Bonzini
	<pbonzini@redhat.com>, =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?=
	<marcandre.lureau@redhat.com>, =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?=
	<berrange@redhat.com>, Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [QEMU][PATCH v4 08/10] meson.build: do not set have_xen_pci_passthrough for aarch64 targets
Date: Wed, 25 Jan 2023 00:54:05 -0800
Message-ID: <20230125085407.7144-9-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230125085407.7144-1-vikram.garhwal@amd.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT011:EE_|MN0PR12MB5836:EE_
X-MS-Office365-Filtering-Correlation-Id: f3e33193-298d-4928-0a9b-08dafeb1d136
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 08:54:49.7274
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f3e33193-298d-4928-0a9b-08dafeb1d136
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT011.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB5836

From: Stefano Stabellini <stefano.stabellini@amd.com>

have_xen_pci_passthrough is only used for x86 Xen VMs, so do not enable it
for aarch64 targets.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 meson.build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/meson.build b/meson.build
index 6d3b665629..693802adb2 100644
--- a/meson.build
+++ b/meson.build
@@ -1471,6 +1471,8 @@ have_xen_pci_passthrough = get_option('xen_pci_passthrough') \
            error_message: 'Xen PCI passthrough requested but Xen not enabled') \
   .require(targetos == 'linux',
            error_message: 'Xen PCI passthrough not available on this platform') \
+  .require(cpu == 'x86' or cpu == 'x86_64',
+           error_message: 'Xen PCI passthrough not available on this platform') \
   .allowed()
 
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483889.750567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3H-0002BV-1X; Wed, 25 Jan 2023 09:26:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483889.750567; Wed, 25 Jan 2023 09:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3G-0002A6-KL; Wed, 25 Jan 2023 09:26:42 +0000
Received: by outflank-mailman (input) for mailman id 483889;
 Wed, 25 Jan 2023 08:39:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5DRb=5W=flex--surenb.bounces.google.com=3rurQYwYKCesfheRaOTbbTYR.PbZkRa-QRiRYYVfgf.kRacebWRPg.beT@srs-se1.protection.inumbo.net>)
 id 1pKbJI-00083W-9K
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:39:12 +0000
Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com
 [2607:f8b0:4864:20::114a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bd5e0aa4-9c8b-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:39:11 +0100 (CET)
Received: by mail-yw1-x114a.google.com with SMTP id
 00721157ae682-506466c484fso43568687b3.13
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 00:39:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd5e0aa4-9c8b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=J7ancMifNNyjJiTof8bzSGdh+LvtuOAZ/yhUxoIZ6EY=;
        b=l/hcCYqvcIly8bHSP154OLa3geKFmW7WOJHtqW6QP0WOiQCkjUgO6NFBclTznfTfXd
         q/KZ4uqcFfz6e/A7WhMCPvMZlY7cpI5ZXPygAYrMACNGYIbdZdn2/wrvbiLbPFMi0EJU
         e2xhTpttA9V49soY/4kK5pbOiY/jtu7/UBbsL9M2oZElpThGo5ueYxij1daBdATn/U6J
         lFgz4QPoUTVqu2tO2Jv29c+EKfKlUr82uf6j5jrJG0K4kH7t4tXslYBl2iUfCiGyJg2L
         sa2ZzT9fi8Nbfwma/h4EwurZmwhJrp5hy3f2wiTWrdCgSBqaWH+IxEwCQ2mYpS41PmTz
         ZJ5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=J7ancMifNNyjJiTof8bzSGdh+LvtuOAZ/yhUxoIZ6EY=;
        b=4aDsLxXHwgk9u7LiQu+YW5XR+doG+eTFElwdSy66t9UNWsY/twtC7J0XLGePhEgxh+
         Ik06KSQq/iLAS3WlfUoWvQWi4sNK2wSrTXnenUTCfRpWWcO5VcYGZ+aTUU+88+L5CMtt
         Pk4G4baebNSOhgLXXRyCQJBDY7G4E48z8/i0eUwvpry3goPsYchJJx5/wrHLxulENb/A
         GX0fmrRCv1erAGKCLGPB+UO3AVTPgbyeNEmAntwTv3UHaiPBBYfjltrLutTtCjdxJy+Q
         V5JOejZ+GKDt8k5OhxI83316HgDXEJK0l0Xi+Cpi977QqZUSi6e5rmk8AmFo8M+9iOIi
         6fMg==
X-Gm-Message-State: AFqh2kpTZq3MMTwWXhp1zdC90kzszpRHcXWrX9Tx5pEs4bG1lTA9qm5D
	F0PTcoAsKlpFgUytnnR++yfa4zQiVxY=
X-Google-Smtp-Source: AMrXdXslOvF1txeQ7QJk1q8MehIUDQnhzGgaj+Ckxm4gdtDkml40uAzU8COPOn8c5wwtHui85E8JZje3JZk=
X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:200:f7b0:20e8:ce66:f98])
 (user=surenb job=sendgmr) by 2002:a0d:ca88:0:b0:501:80db:3eca with SMTP id
 m130-20020a0dca88000000b0050180db3ecamr2010555ywd.100.1674635950405; Wed, 25
 Jan 2023 00:39:10 -0800 (PST)
Date: Wed, 25 Jan 2023 00:38:51 -0800
In-Reply-To: <20230125083851.27759-1-surenb@google.com>
Mime-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com>
X-Mailer: git-send-email 2.39.1.405.gd4c25cc71f-goog
Message-ID: <20230125083851.27759-7-surenb@google.com>
Subject: [PATCH v2 6/6] mm: export dump_mm()
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, 
	hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, 
	willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org, 
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org, 
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com, 
	surenb@google.com
Content-Type: text/plain; charset="UTF-8"

mmap_assert_write_locked() is used in the vm_flags modifiers. Because
mmap_assert_write_locked() uses dump_mm(), and vm_flags are sometimes
modified from inside a module, it is necessary to export the dump_mm()
function.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/debug.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/debug.c b/mm/debug.c
index 9d3d893dc7f4..96d594e16292 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -215,6 +215,7 @@ void dump_mm(const struct mm_struct *mm)
 		mm->def_flags, &mm->def_flags
 	);
 }
+EXPORT_SYMBOL(dump_mm);
 
 static bool page_init_poisoning __read_mostly = true;
 
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483888.750558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3G-00022f-HC; Wed, 25 Jan 2023 09:26:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483888.750558; Wed, 25 Jan 2023 09:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3G-00020l-6X; Wed, 25 Jan 2023 09:26:42 +0000
Received: by outflank-mailman (input) for mailman id 483888;
 Wed, 25 Jan 2023 08:39:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mP3z=5W=flex--surenb.bounces.google.com=3rOrQYwYKCekdfcPYMRZZRWP.NZXiPY-OPgPWWTded.iPYacZUPNe.ZcR@srs-se1.protection.inumbo.net>)
 id 1pKbJI-00083Q-30
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:39:12 +0000
Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com
 [2607:f8b0:4864:20::b4a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a25a9aec-9c8b-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:38:26 +0100 (CET)
Received: by mail-yb1-xb4a.google.com with SMTP id
 k15-20020a5b0a0f000000b007eba3f8e3baso18992171ybq.4
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 00:39:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a25a9aec-9c8b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=sj271avprjKe1me6H/esP7A1ES+jGLLWBsFdbdzx2Jg=;
        b=MF4o67HHEg6wxxFaPI24vOvqDdgAsiiopj9Jmjd6uvJHKoN0cmyBguf/551h9iDvbU
         uB2NKUhC7RaYIEm/9caYoWPUTANnNVHWS2xJdXDx/lcI6GcWgigCfEEbTlZGTMQBSsWX
         k8/CWZcxcb/BySg6e3+h9eb7w5trNFt42RvDvxcst6OfmLoa5MBI2SAUfqwKLEaTxaEa
         G0NocmVo14Y7xttJ2elLfQ94AxICUs5RwP447wA4tGne6++mYjvSxMX72QvUUwshvmYz
         3fNi++tbSZXqk3zPkMLsMlUogcx8otyfqKt95XXSqoizVnjCimFLvFXHXbaOzmHhXyvI
         XWuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=sj271avprjKe1me6H/esP7A1ES+jGLLWBsFdbdzx2Jg=;
        b=RwoslFzHhc+MbXj01AZ/xfe6MryakSJB51TZjijA8X1ZfPJLzusn8QtaFNkHymdGrJ
         CYTjHag9vnv4URLJ47Gz2jIGWvv+50VqN7NePxpRXLUaYwPaZjFhvfWUEf3GkWkIggk9
         wioaTtTkcHAsSkOkh2OawlFolZD0MDIY5WsQPn+gfxit7an2XmnvMkgTcR7AZIQ5u4bw
         V6YFdgYYKPRJc0KN0ds6BrRsts+STgzCb/aDkUvWDH/CrneYGcLxpWB8Ub/fEwN9d4aJ
         WYjfN5TF5d7Oy8Hx7V54eelK0Qn0m3Q7VoXC051ceMjwBaCY9rHAhqyJudeHOnUaM20s
         XYWQ==
X-Gm-Message-State: AFqh2kqUFOmBgQywW4pD7D9oIsYwpP66KkXoLWxJtpYG+c6oyXSerpjs
	gtybj5sA2ImBwiKv8G78v7Oq/uWxAfU=
X-Google-Smtp-Source: AMrXdXuvyVzp2fB0Stt6K1hitZVrUldY7217lrubuFDE2XAwfP/Qm87P8hidNk4h8aVCYNlJLm5UfURoDHY=
X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:200:f7b0:20e8:ce66:f98])
 (user=surenb job=sendgmr) by 2002:a25:2bc1:0:b0:7fe:35ff:fddb with SMTP id
 r184-20020a252bc1000000b007fe35fffddbmr2303833ybr.466.1674635948201; Wed, 25
 Jan 2023 00:39:08 -0800 (PST)
Date: Wed, 25 Jan 2023 00:38:50 -0800
In-Reply-To: <20230125083851.27759-1-surenb@google.com>
Mime-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com>
X-Mailer: git-send-email 2.39.1.405.gd4c25cc71f-goog
Message-ID: <20230125083851.27759-6-surenb@google.com>
Subject: [PATCH v2 5/6] mm: introduce mod_vm_flags_nolock and use it in untrack_pfn
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, 
	hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, 
	willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org, 
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org, 
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com, 
	surenb@google.com
Content-Type: text/plain; charset="UTF-8"

In cases when VMA flags are modified after the VMA has been isolated and
mmap_lock has been downgraded, the modification would trigger an assertion
failure because the mmap write lock is no longer held.
Introduce mod_vm_flags_nolock for use in such situations.
Pass a hint to untrack_pfn so that it can conditionally use
mod_vm_flags_nolock for the flags modification and avoid the assertion.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 arch/x86/mm/pat/memtype.c | 10 +++++++---
 include/linux/mm.h        | 12 +++++++++---
 include/linux/pgtable.h   |  5 +++--
 mm/memory.c               | 13 +++++++------
 mm/memremap.c             |  4 ++--
 mm/mmap.c                 | 16 ++++++++++------
 6 files changed, 38 insertions(+), 22 deletions(-)

diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index ae9645c900fa..d8adc0b42cf2 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -1046,7 +1046,7 @@ void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
  * can be for the entire vma (in which case pfn, size are zero).
  */
 void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
-		 unsigned long size)
+		 unsigned long size, bool mm_wr_locked)
 {
 	resource_size_t paddr;
 	unsigned long prot;
@@ -1065,8 +1065,12 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 		size = vma->vm_end - vma->vm_start;
 	}
 	free_pfn_range(paddr, size);
-	if (vma)
-		clear_vm_flags(vma, VM_PAT);
+	if (vma) {
+		if (mm_wr_locked)
+			clear_vm_flags(vma, VM_PAT);
+		else
+			mod_vm_flags_nolock(vma, 0, VM_PAT);
+	}
 }
 
 /*
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 55335edd1373..48d49930c411 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -656,12 +656,18 @@ static inline void clear_vm_flags(struct vm_area_struct *vma,
 	vma->vm_flags &= ~flags;
 }
 
+static inline void mod_vm_flags_nolock(struct vm_area_struct *vma,
+				       unsigned long set, unsigned long clear)
+{
+	vma->vm_flags |= set;
+	vma->vm_flags &= ~clear;
+}
+
 static inline void mod_vm_flags(struct vm_area_struct *vma,
 				unsigned long set, unsigned long clear)
 {
 	mmap_assert_write_locked(vma->vm_mm);
-	vma->vm_flags |= set;
-	vma->vm_flags &= ~clear;
+	mod_vm_flags_nolock(vma, set, clear);
 }
 
 static inline void vma_set_anonymous(struct vm_area_struct *vma)
@@ -2087,7 +2093,7 @@ static inline void zap_vma_pages(struct vm_area_struct *vma)
 }
 void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
 		struct vm_area_struct *start_vma, unsigned long start,
-		unsigned long end);
+		unsigned long end, bool mm_wr_locked);
 
 struct mmu_notifier_range;
 
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 5fd45454c073..c63cd44777ec 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1185,7 +1185,8 @@ static inline int track_pfn_copy(struct vm_area_struct *vma)
  * can be for the entire vma (in which case pfn, size are zero).
  */
 static inline void untrack_pfn(struct vm_area_struct *vma,
-			       unsigned long pfn, unsigned long size)
+			       unsigned long pfn, unsigned long size,
+			       bool mm_wr_locked)
 {
 }
 
@@ -1203,7 +1204,7 @@ extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
 			     pfn_t pfn);
 extern int track_pfn_copy(struct vm_area_struct *vma);
 extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
-			unsigned long size);
+			unsigned long size, bool mm_wr_locked);
 extern void untrack_pfn_moved(struct vm_area_struct *vma);
 #endif
 
diff --git a/mm/memory.c b/mm/memory.c
index d6902065e558..5b11b50e2c4a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1613,7 +1613,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 static void unmap_single_vma(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long start_addr,
 		unsigned long end_addr,
-		struct zap_details *details)
+		struct zap_details *details, bool mm_wr_locked)
 {
 	unsigned long start = max(vma->vm_start, start_addr);
 	unsigned long end;
@@ -1628,7 +1628,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 		uprobe_munmap(vma, start, end);
 
 	if (unlikely(vma->vm_flags & VM_PFNMAP))
-		untrack_pfn(vma, 0, 0);
+		untrack_pfn(vma, 0, 0, mm_wr_locked);
 
 	if (start != end) {
 		if (unlikely(is_vm_hugetlb_page(vma))) {
@@ -1675,7 +1675,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
  */
 void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
 		struct vm_area_struct *vma, unsigned long start_addr,
-		unsigned long end_addr)
+		unsigned long end_addr, bool mm_wr_locked)
 {
 	struct mmu_notifier_range range;
 	struct zap_details details = {
@@ -1689,7 +1689,8 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
 				start_addr, end_addr);
 	mmu_notifier_invalidate_range_start(&range);
 	do {
-		unmap_single_vma(tlb, vma, start_addr, end_addr, &details);
+		unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
+				 mm_wr_locked);
 	} while ((vma = mas_find(&mas, end_addr - 1)) != NULL);
 	mmu_notifier_invalidate_range_end(&range);
 }
@@ -1723,7 +1724,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	 * unmap 'address-end' not 'range.start-range.end' as range
 	 * could have been expanded for hugetlb pmd sharing.
 	 */
-	unmap_single_vma(&tlb, vma, address, end, details);
+	unmap_single_vma(&tlb, vma, address, end, details, false);
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
 }
@@ -2492,7 +2493,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 
 	err = remap_pfn_range_notrack(vma, addr, pfn, size, prot);
 	if (err)
-		untrack_pfn(vma, pfn, PAGE_ALIGN(size));
+		untrack_pfn(vma, pfn, PAGE_ALIGN(size), true);
 	return err;
 }
 EXPORT_SYMBOL(remap_pfn_range);
diff --git a/mm/memremap.c b/mm/memremap.c
index 08cbf54fe037..2f88f43d4a01 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -129,7 +129,7 @@ static void pageunmap_range(struct dev_pagemap *pgmap, int range_id)
 	}
 	mem_hotplug_done();
 
-	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
+	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
 	pgmap_array_delete(range);
 }
 
@@ -276,7 +276,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 	if (!is_private)
 		kasan_remove_zero_shadow(__va(range->start), range_len(range));
 err_kasan:
-	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
+	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
 err_pfn_remap:
 	pgmap_array_delete(range);
 	return error;
diff --git a/mm/mmap.c b/mm/mmap.c
index 2c6e9072e6a8..69d440997648 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -78,7 +78,7 @@ core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
 static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
 		struct vm_area_struct *vma, struct vm_area_struct *prev,
 		struct vm_area_struct *next, unsigned long start,
-		unsigned long end);
+		unsigned long end, bool mm_wr_locked);
 
 static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
 {
@@ -2136,14 +2136,14 @@ static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
 static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
 		struct vm_area_struct *vma, struct vm_area_struct *prev,
 		struct vm_area_struct *next,
-		unsigned long start, unsigned long end)
+		unsigned long start, unsigned long end, bool mm_wr_locked)
 {
 	struct mmu_gather tlb;
 
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm);
 	update_hiwater_rss(mm);
-	unmap_vmas(&tlb, mt, vma, start, end);
+	unmap_vmas(&tlb, mt, vma, start, end, mm_wr_locked);
 	free_pgtables(&tlb, mt, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
 				 next ? next->vm_start : USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb);
@@ -2391,7 +2391,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 			mmap_write_downgrade(mm);
 	}
 
-	unmap_region(mm, &mt_detach, vma, prev, next, start, end);
+	/*
+	 * We can free page tables without write-locking mmap_lock because VMAs
+	 * were isolated before we downgraded mmap_lock.
+	 */
+	unmap_region(mm, &mt_detach, vma, prev, next, start, end, !downgrade);
 	/* Statistics and freeing VMAs */
 	mas_set(&mas_detach, start);
 	remove_mt(mm, &mas_detach);
@@ -2704,7 +2708,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 
 		/* Undo any partial mapping done by a device driver. */
 		unmap_region(mm, &mm->mm_mt, vma, prev, next, vma->vm_start,
-			     vma->vm_end);
+			     vma->vm_end, true);
 	}
 	if (file && (vm_flags & VM_SHARED))
 		mapping_unmap_writable(file->f_mapping);
@@ -3031,7 +3035,7 @@ void exit_mmap(struct mm_struct *mm)
 	tlb_gather_mmu_fullmm(&tlb, mm);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use ULONG_MAX here to ensure all VMAs in the mm are unmapped */
-	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX);
+	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX, false);
 	mmap_read_unlock(mm);
 
 	/*
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483879.750528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3E-0001Qz-Uh; Wed, 25 Jan 2023 09:26:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483879.750528; Wed, 25 Jan 2023 09:26:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3E-0001QA-Qq; Wed, 25 Jan 2023 09:26:40 +0000
Received: by outflank-mailman (input) for mailman id 483879;
 Wed, 25 Jan 2023 08:39:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GHF1=5W=flex--surenb.bounces.google.com=3ourQYwYKCd8TVSFOCHPPHMF.DPNYFO-EFWFMMJTUT.YFOQSPKFDU.PSH@srs-se1.protection.inumbo.net>)
 id 1pKbJ7-00083W-3Q
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:39:01 +0000
Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com
 [2607:f8b0:4864:20::b4a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b63d7c55-9c8b-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:38:59 +0100 (CET)
Received: by mail-yb1-xb4a.google.com with SMTP id
 x188-20020a2531c5000000b00716de19d76bso19183731ybx.19
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 00:38:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b63d7c55-9c8b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=ralp0YsbeintaJ1+3rV0BJxcsbdnu/jVKuNhvnt0T5Y=;
        b=FpW3ZfeoZ58NaZWfySIfB+f4+MM7GrKuq9uCXe+gfhjrC0bvKaL2kAypJ/pJdIrezU
         Ru0cU0KNklRfuYrJ3SKXkdPgNjMfodaxcsFUMH2z2VP+LqiGmmBZi4WNjYcaaYQLe2mC
         v6uAPWcErh2g6vXcpQB3pwh9XmkkPfWLDi2bI5mgZlrfFBGfjeQ9Wv8iIQeddGuuYJGy
         vGYSJIu//+24XGZjzB+Q3VYOxZ5pJZ/HCGoJ5h7l6e1K0sxINKIjiYRXApyFQ3GXR7xJ
         ZclVjCWVjm/7Pu3pZxJbkPLxrnDwpix1sMzuU+AEqB9vICm+UZCWtOUOt5GLju8+nccL
         rTTw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=ralp0YsbeintaJ1+3rV0BJxcsbdnu/jVKuNhvnt0T5Y=;
        b=b0sr59lcMqogSZObj147Kf+cvhZr515XgFw0Q9R9h/KKb66ss1RD6RODLRkiLeRxY7
         uMcayOKPICG6MLt6ER4Q2N3rrzT9ibTMAcsNSfozPaiQlvUKLG73PLzcuVVPPuyTmqSK
         k8tcTsEaEKITKrE6uSbuLFI2RMNjVXHt903z82dbqIbQWpks58vcn2uBgRhD5oKQxawo
         hxFFq9LX7jJgUdXvnl15xUcaodwKJaRUAeOTMUcA6DSuIiO1SD4WKYYCD0k4IFMEW3VD
         DmQZ3Dx3vvDvzJDjl7nCsEkKVuhvvKIuXAmyDACyFMzAMiOeNb6y+c8vayzoJaQ3f2G4
         DbJg==
X-Gm-Message-State: AO0yUKUr8P6HCBIgnRYEOsTwDLEXZZ1c+qr6m5DcnOt0jiavQ/iHN5sn
	p2uNo5AL/FUbnUdMZ+iWzA6ADSAuT6c=
X-Google-Smtp-Source: AK7set8/stQqnGGz4OB/vqyDOBLoBX54VTTeagu3mMK0HzymWEEvjI0ujOzPLr/zvAsu1DJMDE0YK80W9/s=
X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:200:f7b0:20e8:ce66:f98])
 (user=surenb job=sendgmr) by 2002:a81:3e07:0:b0:506:6185:4fad with SMTP id
 l7-20020a813e07000000b0050661854fadmr450398ywa.451.1674635938431; Wed, 25 Jan
 2023 00:38:58 -0800 (PST)
Date: Wed, 25 Jan 2023 00:38:46 -0800
In-Reply-To: <20230125083851.27759-1-surenb@google.com>
Mime-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com>
X-Mailer: git-send-email 2.39.1.405.gd4c25cc71f-goog
Message-ID: <20230125083851.27759-2-surenb@google.com>
Subject: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, 
	hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, 
	willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org, 
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org, 
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com, 
	surenb@google.com
Content-Type: text/plain; charset="UTF-8"

vm_flags are among the VMA attributes that affect decisions such as VMA
merging and splitting. Therefore all vm_flags modifications are performed
after taking the exclusive mmap_lock, to prevent vm_flags updates from
racing with such operations. Introduce modifier functions for vm_flags to
be used whenever the flags are updated. This way we can better check and
enforce correct locking behavior during these updates.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h       | 37 +++++++++++++++++++++++++++++++++++++
 include/linux/mm_types.h |  8 +++++++-
 2 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c2f62bdce134..b71f2809caac 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -627,6 +627,43 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 }
 
+/* Use when VMA is not part of the VMA tree and needs no locking */
+static inline void init_vm_flags(struct vm_area_struct *vma,
+				 unsigned long flags)
+{
+	vma->vm_flags = flags;
+}
+
+/* Use when VMA is part of the VMA tree and modifications need coordination */
+static inline void reset_vm_flags(struct vm_area_struct *vma,
+				  unsigned long flags)
+{
+	mmap_assert_write_locked(vma->vm_mm);
+	init_vm_flags(vma, flags);
+}
+
+static inline void set_vm_flags(struct vm_area_struct *vma,
+				unsigned long flags)
+{
+	mmap_assert_write_locked(vma->vm_mm);
+	vma->vm_flags |= flags;
+}
+
+static inline void clear_vm_flags(struct vm_area_struct *vma,
+				  unsigned long flags)
+{
+	mmap_assert_write_locked(vma->vm_mm);
+	vma->vm_flags &= ~flags;
+}
+
+static inline void mod_vm_flags(struct vm_area_struct *vma,
+				unsigned long set, unsigned long clear)
+{
+	mmap_assert_write_locked(vma->vm_mm);
+	vma->vm_flags |= set;
+	vma->vm_flags &= ~clear;
+}
+
 static inline void vma_set_anonymous(struct vm_area_struct *vma)
 {
 	vma->vm_ops = NULL;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2d6d790d9bed..6c7c70bf50dd 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -491,7 +491,13 @@ struct vm_area_struct {
 	 * See vmf_insert_mixed_prot() for discussion.
 	 */
 	pgprot_t vm_page_prot;
-	unsigned long vm_flags;		/* Flags, see mm.h. */
+
+	/*
+	 * Flags, see mm.h.
+	 * WARNING! Do not modify directly.
+	 * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
+	 */
+	unsigned long vm_flags;
 
 	/*
 	 * For areas with an address space and backing store,
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483981.750583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3I-0002ep-0H; Wed, 25 Jan 2023 09:26:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483981.750583; Wed, 25 Jan 2023 09:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3H-0002aV-MZ; Wed, 25 Jan 2023 09:26:43 +0000
Received: by outflank-mailman (input) for mailman id 483981;
 Wed, 25 Jan 2023 09:02:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FU6P=5W=suse.com=mhocko@srs-se1.protection.inumbo.net>)
 id 1pKbfZ-000585-O2
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 09:02:13 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f4f16504-9c8e-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 10:02:12 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6FA971FED3;
 Wed, 25 Jan 2023 09:02:12 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 224651339E;
 Wed, 25 Jan 2023 09:02:12 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id reRPCBTw0GOqDAAAMHmgww
 (envelope-from <mhocko@suse.com>); Wed, 25 Jan 2023 09:02:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4f16504-9c8e-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674637332; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SlRMHnKgl2ysMiG3ziDgQVqy0pZ2+RMTN2tEUkXl1oQ=;
	b=CylERLDbieGu3PWTdvXLEld0Li+qbd7wIdoCwjIyDd+qG3j/xsmZaFNxHZ9BHqigKFDp2h
	G+luEBa1wjOWHFua4YAnGvwD5IYd5KDo2FdpDHLq+ELEUbPuYE6WIwO6oA3RlzeuL+Fv1J
	XkwT2InE7+nANi+BujeIlpRiAMYhk9g=
Date: Wed, 25 Jan 2023 10:02:11 +0100
From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 2/6] mm: replace VM_LOCKED_CLEAR_MASK with
 VM_LOCKED_MASK
Message-ID: <Y9DwE4Z8hB38aX6X@dhcp22.suse.cz>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-3-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-3-surenb@google.com>

On Wed 25-01-23 00:38:47, Suren Baghdasaryan wrote:
> To simplify the usage of VM_LOCKED_CLEAR_MASK in clear_vm_flags(),
> replace it with VM_LOCKED_MASK bitmask and convert all users.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  include/linux/mm.h | 4 ++--
>  kernel/fork.c      | 2 +-
>  mm/hugetlb.c       | 4 ++--
>  mm/mlock.c         | 6 +++---
>  mm/mmap.c          | 6 +++---
>  mm/mremap.c        | 2 +-
>  6 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b71f2809caac..da62bdd627bf 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -421,8 +421,8 @@ extern unsigned int kobjsize(const void *objp);
>  /* This mask defines which mm->def_flags a process can inherit its parent */
>  #define VM_INIT_DEF_MASK	VM_NOHUGEPAGE
>  
> -/* This mask is used to clear all the VMA flags used by mlock */
> -#define VM_LOCKED_CLEAR_MASK	(~(VM_LOCKED | VM_LOCKONFAULT))
> +/* This mask represents all the VMA flag bits used by mlock */
> +#define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
>  
>  /* Arch-specific flags to clear when updating VM flags on protection change */
>  #ifndef VM_ARCH_CLEAR
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 6683c1b0f460..03d472051236 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -669,7 +669,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>  			tmp->anon_vma = NULL;
>  		} else if (anon_vma_fork(tmp, mpnt))
>  			goto fail_nomem_anon_vma_fork;
> -		tmp->vm_flags &= ~(VM_LOCKED | VM_LOCKONFAULT);
> +		clear_vm_flags(tmp, VM_LOCKED_MASK);
>  		file = tmp->vm_file;
>  		if (file) {
>  			struct address_space *mapping = file->f_mapping;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d20c8b09890e..4ecdbad9a451 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6973,8 +6973,8 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
>  	unsigned long s_end = sbase + PUD_SIZE;
>  
>  	/* Allow segments to share if only one is marked locked */
> -	unsigned long vm_flags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
> -	unsigned long svm_flags = svma->vm_flags & VM_LOCKED_CLEAR_MASK;
> +	unsigned long vm_flags = vma->vm_flags & ~VM_LOCKED_MASK;
> +	unsigned long svm_flags = svma->vm_flags & ~VM_LOCKED_MASK;
>  
>  	/*
>  	 * match the virtual addresses, permission and the alignment of the
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 0336f52e03d7..5c4fff93cd6b 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -497,7 +497,7 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
>  		if (vma->vm_start != tmp)
>  			return -ENOMEM;
>  
> -		newflags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
> +		newflags = vma->vm_flags & ~VM_LOCKED_MASK;
>  		newflags |= flags;
>  		/* Here we know that  vma->vm_start <= nstart < vma->vm_end. */
>  		tmp = vma->vm_end;
> @@ -661,7 +661,7 @@ static int apply_mlockall_flags(int flags)
>  	struct vm_area_struct *vma, *prev = NULL;
>  	vm_flags_t to_add = 0;
>  
> -	current->mm->def_flags &= VM_LOCKED_CLEAR_MASK;
> +	current->mm->def_flags &= ~VM_LOCKED_MASK;
>  	if (flags & MCL_FUTURE) {
>  		current->mm->def_flags |= VM_LOCKED;
>  
> @@ -681,7 +681,7 @@ static int apply_mlockall_flags(int flags)
>  	for_each_vma(vmi, vma) {
>  		vm_flags_t newflags;
>  
> -		newflags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
> +		newflags = vma->vm_flags & ~VM_LOCKED_MASK;
>  		newflags |= to_add;
>  
>  		/* Ignore errors */
> diff --git a/mm/mmap.c b/mm/mmap.c
> index d4abc6feced1..323bd253b25a 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2671,7 +2671,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
>  					is_vm_hugetlb_page(vma) ||
>  					vma == get_gate_vma(current->mm))
> -			vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
> +			clear_vm_flags(vma, VM_LOCKED_MASK);
>  		else
>  			mm->locked_vm += (len >> PAGE_SHIFT);
>  	}
> @@ -3340,8 +3340,8 @@ static struct vm_area_struct *__install_special_mapping(
>  	vma->vm_start = addr;
>  	vma->vm_end = addr + len;
>  
> -	vma->vm_flags = vm_flags | mm->def_flags | VM_DONTEXPAND | VM_SOFTDIRTY;
> -	vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
> +	init_vm_flags(vma, (vm_flags | mm->def_flags |
> +		      VM_DONTEXPAND | VM_SOFTDIRTY) & ~VM_LOCKED_MASK);
>  	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>  
>  	vma->vm_ops = ops;
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 1b3ee02bead7..35db9752cb6a 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -687,7 +687,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>  
>  	if (unlikely(!err && (flags & MREMAP_DONTUNMAP))) {
>  		/* We always clear VM_LOCKED[ONFAULT] on the old vma */
> -		vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
> +		clear_vm_flags(vma, VM_LOCKED_MASK);
>  
>  		/*
>  		 * anon_vma links of the old vma is no longer needed after its page
> -- 
> 2.39.1

-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483886.750548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3F-0001nR-TF; Wed, 25 Jan 2023 09:26:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483886.750548; Wed, 25 Jan 2023 09:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3F-0001jJ-NT; Wed, 25 Jan 2023 09:26:41 +0000
Received: by outflank-mailman (input) for mailman id 483886;
 Wed, 25 Jan 2023 08:39:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=In1n=5W=flex--surenb.bounces.google.com=3p-rQYwYKCeQYaXKTHMUUMRK.IUSdKT-JKbKRROYZY.dKTVXUPKIZ.UXM@srs-se1.protection.inumbo.net>)
 id 1pKbJE-00083Q-Tt
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:39:09 +0000
Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com
 [2607:f8b0:4864:20::114a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f76a7c8-9c8b-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 09:38:21 +0100 (CET)
Received: by mail-yw1-x114a.google.com with SMTP id
 00721157ae682-5063c0b909eso48194837b3.7
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 00:39:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f76a7c8-9c8b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=XxFuorec8QHj/tdWOvCdzRgXK0H9Q0ku3hZrmKWXg4k=;
        b=OIEAaSiJVxxsYnj9GUESi5ULfT5TyrOplvBP1n6cQBewtyh88MOjoe9BUxsnqFME6E
         K2kPCPqCQF1V+q9x7ddFhtzD99J/T5/V7uLXhMC5glkol51CKdLncz/+KqcIjEaWu4LA
         1ZvCAJojjIV0VEcilUswIjyK04Y51awqEdCZAgu7yd10Q9InJaXmAdPv2ZR8HLbUl2lO
         /iIXP1VqCuXZdEhqyz+fQvs1AFh4aMrvFWp8wwug5TAlRaFGqemBTEn7xWJydqcYL0gc
         qkwrk2HPwsz6D+FRHYne3bo56R++/qI+bkXGQ/j1SNFair4lGzKRxX02K6tED408xqtH
         p/4Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=XxFuorec8QHj/tdWOvCdzRgXK0H9Q0ku3hZrmKWXg4k=;
        b=K5ja5P7rwf/rDYOd7yCzCDlsTrVO0F3Phh8Yb1HdxYH4CRE+KdnbuTBUEueFPe5qgM
         fcv8w3DOxxL71Qy+lnTnVXLDzz34ttnN/1coZsF9GhAPJG1+UraYGCNkZFO/SVVk93ad
         C735CiCoHCTXTJbu3NzukpO47YI9a+iSVnjK8AMwDy3nzttjk7AYgIdO0dnLjAQf8a/p
         +oKnkL79994ZKhUv4rinVVSHCNy9pSSkYzpsqUxoZTMtprwisDyS4y5z6dgRdRMZ7Ccn
         g9kuJ3RvuJptefHVqP+RrB42SkucOZBRF5xGLj15uRtitffGVIPwD9eC4TN9ZjrIK5Na
         mCCg==
X-Gm-Message-State: AFqh2kqx9ofVnodWS2t+mp2KbHFb6ZaqyXceQmtEiavTCSmAn9ptHzDo
	pKCdohtV2eFLLLoWfr2Jz4aXC+n8XwM=
X-Google-Smtp-Source: AMrXdXsfA7oqm+rAsRrFpYdLp2jj8XD8VVoLDU6MWm1+i51Ob30RMvLY07EiqipgOhh1ffcBv1f7R6om24M=
X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:200:f7b0:20e8:ce66:f98])
 (user=surenb job=sendgmr) by 2002:a25:1984:0:b0:7fe:e7f5:e228 with SMTP id
 126-20020a251984000000b007fee7f5e228mr1669778ybz.582.1674635943260; Wed, 25
 Jan 2023 00:39:03 -0800 (PST)
Date: Wed, 25 Jan 2023 00:38:48 -0800
In-Reply-To: <20230125083851.27759-1-surenb@google.com>
Mime-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com>
X-Mailer: git-send-email 2.39.1.405.gd4c25cc71f-goog
Message-ID: <20230125083851.27759-4-surenb@google.com>
Subject: [PATCH v2 3/6] mm: replace vma->vm_flags direct modifications with
 modifier calls
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, 
	hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, 
	willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org, 
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org, 
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com, 
	surenb@google.com
Content-Type: text/plain; charset="UTF-8"

Replace direct modifications to vma->vm_flags with calls to the modifier
functions, so that flag changes can be tracked and VMA locking correctness
maintained.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 arch/arm/kernel/process.c                          |  2 +-
 arch/ia64/mm/init.c                                |  8 ++++----
 arch/loongarch/include/asm/tlb.h                   |  2 +-
 arch/powerpc/kvm/book3s_xive_native.c              |  2 +-
 arch/powerpc/mm/book3s64/subpage_prot.c            |  2 +-
 arch/powerpc/platforms/book3s/vas-api.c            |  2 +-
 arch/powerpc/platforms/cell/spufs/file.c           | 14 +++++++-------
 arch/s390/mm/gmap.c                                |  3 +--
 arch/x86/entry/vsyscall/vsyscall_64.c              |  2 +-
 arch/x86/kernel/cpu/sgx/driver.c                   |  2 +-
 arch/x86/kernel/cpu/sgx/virt.c                     |  2 +-
 arch/x86/mm/pat/memtype.c                          |  6 +++---
 arch/x86/um/mem_32.c                               |  2 +-
 drivers/acpi/pfr_telemetry.c                       |  2 +-
 drivers/android/binder.c                           |  3 +--
 drivers/char/mspec.c                               |  2 +-
 drivers/crypto/hisilicon/qm.c                      |  2 +-
 drivers/dax/device.c                               |  2 +-
 drivers/dma/idxd/cdev.c                            |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c            |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c           |  4 ++--
 drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c          |  4 ++--
 drivers/gpu/drm/amd/amdkfd/kfd_events.c            |  4 ++--
 drivers/gpu/drm/amd/amdkfd/kfd_process.c           |  4 ++--
 drivers/gpu/drm/drm_gem.c                          |  2 +-
 drivers/gpu/drm/drm_gem_dma_helper.c               |  3 +--
 drivers/gpu/drm/drm_gem_shmem_helper.c             |  2 +-
 drivers/gpu/drm/drm_vm.c                           |  8 ++++----
 drivers/gpu/drm/etnaviv/etnaviv_gem.c              |  2 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c            |  4 ++--
 drivers/gpu/drm/gma500/framebuffer.c               |  2 +-
 drivers/gpu/drm/i810/i810_dma.c                    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c           |  4 ++--
 drivers/gpu/drm/mediatek/mtk_drm_gem.c             |  2 +-
 drivers/gpu/drm/msm/msm_gem.c                      |  2 +-
 drivers/gpu/drm/omapdrm/omap_gem.c                 |  3 +--
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c        |  3 +--
 drivers/gpu/drm/tegra/gem.c                        |  5 ++---
 drivers/gpu/drm/ttm/ttm_bo_vm.c                    |  3 +--
 drivers/gpu/drm/virtio/virtgpu_vram.c              |  2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c           |  2 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c            |  3 +--
 drivers/hsi/clients/cmt_speech.c                   |  2 +-
 drivers/hwtracing/intel_th/msu.c                   |  2 +-
 drivers/hwtracing/stm/core.c                       |  2 +-
 drivers/infiniband/hw/hfi1/file_ops.c              |  4 ++--
 drivers/infiniband/hw/mlx5/main.c                  |  4 ++--
 drivers/infiniband/hw/qib/qib_file_ops.c           | 13 ++++++-------
 drivers/infiniband/hw/usnic/usnic_ib_verbs.c       |  2 +-
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c    |  2 +-
 .../media/common/videobuf2/videobuf2-dma-contig.c  |  2 +-
 drivers/media/common/videobuf2/videobuf2-vmalloc.c |  2 +-
 drivers/media/v4l2-core/videobuf-dma-contig.c      |  2 +-
 drivers/media/v4l2-core/videobuf-dma-sg.c          |  4 ++--
 drivers/media/v4l2-core/videobuf-vmalloc.c         |  2 +-
 drivers/misc/cxl/context.c                         |  2 +-
 drivers/misc/habanalabs/common/memory.c            |  2 +-
 drivers/misc/habanalabs/gaudi/gaudi.c              |  4 ++--
 drivers/misc/habanalabs/gaudi2/gaudi2.c            |  8 ++++----
 drivers/misc/habanalabs/goya/goya.c                |  4 ++--
 drivers/misc/ocxl/context.c                        |  4 ++--
 drivers/misc/ocxl/sysfs.c                          |  2 +-
 drivers/misc/open-dice.c                           |  4 ++--
 drivers/misc/sgi-gru/grufile.c                     |  4 ++--
 drivers/misc/uacce/uacce.c                         |  2 +-
 drivers/sbus/char/oradax.c                         |  2 +-
 drivers/scsi/cxlflash/ocxl_hw.c                    |  2 +-
 drivers/scsi/sg.c                                  |  2 +-
 drivers/staging/media/atomisp/pci/hmm/hmm_bo.c     |  2 +-
 drivers/staging/media/deprecated/meye/meye.c       |  4 ++--
 .../media/deprecated/stkwebcam/stk-webcam.c        |  2 +-
 drivers/target/target_core_user.c                  |  2 +-
 drivers/uio/uio.c                                  |  2 +-
 drivers/usb/core/devio.c                           |  3 +--
 drivers/usb/mon/mon_bin.c                          |  3 +--
 drivers/vdpa/vdpa_user/iova_domain.c               |  2 +-
 drivers/vfio/pci/vfio_pci_core.c                   |  2 +-
 drivers/vhost/vdpa.c                               |  2 +-
 drivers/video/fbdev/68328fb.c                      |  2 +-
 drivers/video/fbdev/core/fb_defio.c                |  4 ++--
 drivers/xen/gntalloc.c                             |  2 +-
 drivers/xen/gntdev.c                               |  4 ++--
 drivers/xen/privcmd-buf.c                          |  2 +-
 drivers/xen/privcmd.c                              |  4 ++--
 fs/aio.c                                           |  2 +-
 fs/cramfs/inode.c                                  |  2 +-
 fs/erofs/data.c                                    |  2 +-
 fs/exec.c                                          |  4 ++--
 fs/ext4/file.c                                     |  2 +-
 fs/fuse/dax.c                                      |  2 +-
 fs/hugetlbfs/inode.c                               |  4 ++--
 fs/orangefs/file.c                                 |  3 +--
 fs/proc/task_mmu.c                                 |  2 +-
 fs/proc/vmcore.c                                   |  3 +--
 fs/userfaultfd.c                                   |  2 +-
 fs/xfs/xfs_file.c                                  |  2 +-
 include/linux/mm.h                                 |  2 +-
 kernel/bpf/ringbuf.c                               |  4 ++--
 kernel/bpf/syscall.c                               |  4 ++--
 kernel/events/core.c                               |  2 +-
 kernel/kcov.c                                      |  2 +-
 kernel/relay.c                                     |  2 +-
 mm/madvise.c                                       |  2 +-
 mm/memory.c                                        |  6 +++---
 mm/mlock.c                                         |  6 +++---
 mm/mmap.c                                          | 10 +++++-----
 mm/mprotect.c                                      |  2 +-
 mm/mremap.c                                        |  6 +++---
 mm/nommu.c                                         | 11 ++++++-----
 mm/secretmem.c                                     |  2 +-
 mm/shmem.c                                         |  2 +-
 mm/vmalloc.c                                       |  2 +-
 net/ipv4/tcp.c                                     |  4 ++--
 security/selinux/selinuxfs.c                       |  6 +++---
 sound/core/oss/pcm_oss.c                           |  2 +-
 sound/core/pcm_native.c                            |  9 +++++----
 sound/soc/pxa/mmp-sspa.c                           |  2 +-
 sound/usb/usx2y/us122l.c                           |  4 ++--
 sound/usb/usx2y/usX2Yhwdep.c                       |  2 +-
 sound/usb/usx2y/usx2yhwdeppcm.c                    |  2 +-
 120 files changed, 188 insertions(+), 199 deletions(-)

diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index f811733a8fc5..ec65f3ea3150 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -316,7 +316,7 @@ static int __init gate_vma_init(void)
 	gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
 	gate_vma.vm_start = 0xffff0000;
 	gate_vma.vm_end	= 0xffff0000 + PAGE_SIZE;
-	gate_vma.vm_flags = VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC;
+	init_vm_flags(&gate_vma, VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC);
 	return 0;
 }
 arch_initcall(gate_vma_init);
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index fc4e4217e87f..d355e0ce28ab 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -109,7 +109,7 @@ ia64_init_addr_space (void)
 		vma_set_anonymous(vma);
 		vma->vm_start = current->thread.rbs_bot & PAGE_MASK;
 		vma->vm_end = vma->vm_start + PAGE_SIZE;
-		vma->vm_flags = VM_DATA_DEFAULT_FLAGS|VM_GROWSUP|VM_ACCOUNT;
+		init_vm_flags(vma, VM_DATA_DEFAULT_FLAGS|VM_GROWSUP|VM_ACCOUNT);
 		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 		mmap_write_lock(current->mm);
 		if (insert_vm_struct(current->mm, vma)) {
@@ -127,8 +127,8 @@ ia64_init_addr_space (void)
 			vma_set_anonymous(vma);
 			vma->vm_end = PAGE_SIZE;
 			vma->vm_page_prot = __pgprot(pgprot_val(PAGE_READONLY) | _PAGE_MA_NAT);
-			vma->vm_flags = VM_READ | VM_MAYREAD | VM_IO |
-					VM_DONTEXPAND | VM_DONTDUMP;
+			init_vm_flags(vma, VM_READ | VM_MAYREAD | VM_IO |
+				      VM_DONTEXPAND | VM_DONTDUMP);
 			mmap_write_lock(current->mm);
 			if (insert_vm_struct(current->mm, vma)) {
 				mmap_write_unlock(current->mm);
@@ -272,7 +272,7 @@ static int __init gate_vma_init(void)
 	vma_init(&gate_vma, NULL);
 	gate_vma.vm_start = FIXADDR_USER_START;
 	gate_vma.vm_end = FIXADDR_USER_END;
-	gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
+	init_vm_flags(&gate_vma, VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC);
 	gate_vma.vm_page_prot = __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX);
 
 	return 0;
diff --git a/arch/loongarch/include/asm/tlb.h b/arch/loongarch/include/asm/tlb.h
index dd24f5898f65..51e35b44d105 100644
--- a/arch/loongarch/include/asm/tlb.h
+++ b/arch/loongarch/include/asm/tlb.h
@@ -149,7 +149,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	struct vm_area_struct vma;
 
 	vma.vm_mm = tlb->mm;
-	vma.vm_flags = 0;
+	init_vm_flags(&vma, 0);
 	if (tlb->fullmm) {
 		flush_tlb_mm(tlb->mm);
 		return;
diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
index 4f566bea5e10..7976af0f5ff8 100644
--- a/arch/powerpc/kvm/book3s_xive_native.c
+++ b/arch/powerpc/kvm/book3s_xive_native.c
@@ -324,7 +324,7 @@ static int kvmppc_xive_native_mmap(struct kvm_device *dev,
 		return -EINVAL;
 	}
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
 
 	/*
diff --git a/arch/powerpc/mm/book3s64/subpage_prot.c b/arch/powerpc/mm/book3s64/subpage_prot.c
index d73b3b4176e8..72948cdb1911 100644
--- a/arch/powerpc/mm/book3s64/subpage_prot.c
+++ b/arch/powerpc/mm/book3s64/subpage_prot.c
@@ -156,7 +156,7 @@ static void subpage_mark_vma_nohuge(struct mm_struct *mm, unsigned long addr,
 	 * VM_NOHUGEPAGE and split them.
 	 */
 	for_each_vma_range(vmi, vma, addr + len) {
-		vma->vm_flags |= VM_NOHUGEPAGE;
+		set_vm_flags(vma, VM_NOHUGEPAGE);
 		walk_page_vma(vma, &subpage_walk_ops, NULL);
 	}
 }
diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
index 9580e8e12165..d5b8e55d010a 100644
--- a/arch/powerpc/platforms/book3s/vas-api.c
+++ b/arch/powerpc/platforms/book3s/vas-api.c
@@ -525,7 +525,7 @@ static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
 	pfn = paste_addr >> PAGE_SHIFT;
 
 	/* flags, page_prot from cxl_mmap(), except we want cachable */
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_cached(vma->vm_page_prot);
 
 	prot = __pgprot(pgprot_val(vma->vm_page_prot) | _PAGE_DIRTY);
diff --git a/arch/powerpc/platforms/cell/spufs/file.c b/arch/powerpc/platforms/cell/spufs/file.c
index 62d90a5e23d1..784fa39a484a 100644
--- a/arch/powerpc/platforms/cell/spufs/file.c
+++ b/arch/powerpc/platforms/cell/spufs/file.c
@@ -291,7 +291,7 @@ static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
 
 	vma->vm_ops = &spufs_mem_mmap_vmops;
@@ -381,7 +381,7 @@ static int spufs_cntl_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	vma->vm_ops = &spufs_cntl_mmap_vmops;
@@ -1043,7 +1043,7 @@ static int spufs_signal1_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	vma->vm_ops = &spufs_signal1_mmap_vmops;
@@ -1179,7 +1179,7 @@ static int spufs_signal2_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	vma->vm_ops = &spufs_signal2_mmap_vmops;
@@ -1302,7 +1302,7 @@ static int spufs_mss_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	vma->vm_ops = &spufs_mss_mmap_vmops;
@@ -1364,7 +1364,7 @@ static int spufs_psmap_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	vma->vm_ops = &spufs_psmap_mmap_vmops;
@@ -1424,7 +1424,7 @@ static int spufs_mfc_mmap(struct file *file, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	vma->vm_ops = &spufs_mfc_mmap_vmops;
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 69af6cdf1a2a..3a695b8a1e3c 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2522,8 +2522,7 @@ static inline void thp_split_mm(struct mm_struct *mm)
 	VMA_ITERATOR(vmi, mm, 0);
 
 	for_each_vma(vmi, vma) {
-		vma->vm_flags &= ~VM_HUGEPAGE;
-		vma->vm_flags |= VM_NOHUGEPAGE;
+		mod_vm_flags(vma, VM_NOHUGEPAGE, VM_HUGEPAGE);
 		walk_page_vma(vma, &thp_split_walk_ops, NULL);
 	}
 	mm->def_flags |= VM_NOHUGEPAGE;
diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
index 4af81df133ee..e2a1626d86d8 100644
--- a/arch/x86/entry/vsyscall/vsyscall_64.c
+++ b/arch/x86/entry/vsyscall/vsyscall_64.c
@@ -391,7 +391,7 @@ void __init map_vsyscall(void)
 	}
 
 	if (vsyscall_mode == XONLY)
-		gate_vma.vm_flags = VM_EXEC;
+		init_vm_flags(&gate_vma, VM_EXEC);
 
 	BUILD_BUG_ON((unsigned long)__fix_to_virt(VSYSCALL_PAGE) !=
 		     (unsigned long)VSYSCALL_ADDR);
diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index aa9b8b868867..42c0bded93b6 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -95,7 +95,7 @@ static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
 		return ret;
 
 	vma->vm_ops = &sgx_vm_ops;
-	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
+	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO);
 	vma->vm_private_data = encl;
 
 	return 0;
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index 6a77a14eee38..0774a0bfeb28 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -105,7 +105,7 @@ static int sgx_vepc_mmap(struct file *file, struct vm_area_struct *vma)
 
 	vma->vm_ops = &sgx_vepc_vm_ops;
 	/* Don't copy VMA in fork() */
-	vma->vm_flags |= VM_PFNMAP | VM_IO | VM_DONTDUMP | VM_DONTCOPY;
+	set_vm_flags(vma, VM_PFNMAP | VM_IO | VM_DONTDUMP | VM_DONTCOPY);
 	vma->vm_private_data = vepc;
 
 	return 0;
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index fb4b1b5e0dea..ae9645c900fa 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -1000,7 +1000,7 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 
 		ret = reserve_pfn_range(paddr, size, prot, 0);
 		if (ret == 0 && vma)
-			vma->vm_flags |= VM_PAT;
+			set_vm_flags(vma, VM_PAT);
 		return ret;
 	}
 
@@ -1066,7 +1066,7 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 	}
 	free_pfn_range(paddr, size);
 	if (vma)
-		vma->vm_flags &= ~VM_PAT;
+		clear_vm_flags(vma, VM_PAT);
 }
 
 /*
@@ -1076,7 +1076,7 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
  */
 void untrack_pfn_moved(struct vm_area_struct *vma)
 {
-	vma->vm_flags &= ~VM_PAT;
+	clear_vm_flags(vma, VM_PAT);
 }
 
 pgprot_t pgprot_writecombine(pgprot_t prot)
diff --git a/arch/x86/um/mem_32.c b/arch/x86/um/mem_32.c
index cafd01f730da..bfd2c320ad25 100644
--- a/arch/x86/um/mem_32.c
+++ b/arch/x86/um/mem_32.c
@@ -16,7 +16,7 @@ static int __init gate_vma_init(void)
 	vma_init(&gate_vma, NULL);
 	gate_vma.vm_start = FIXADDR_USER_START;
 	gate_vma.vm_end = FIXADDR_USER_END;
-	gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
+	init_vm_flags(&gate_vma, VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC);
 	gate_vma.vm_page_prot = PAGE_READONLY;
 
 	return 0;
diff --git a/drivers/acpi/pfr_telemetry.c b/drivers/acpi/pfr_telemetry.c
index 27fb6cdad75f..9e339c705b5b 100644
--- a/drivers/acpi/pfr_telemetry.c
+++ b/drivers/acpi/pfr_telemetry.c
@@ -310,7 +310,7 @@ pfrt_log_mmap(struct file *file, struct vm_area_struct *vma)
 		return -EROFS;
 
 	/* changing from read to write with mprotect is not allowed */
-	vma->vm_flags &= ~VM_MAYWRITE;
+	clear_vm_flags(vma, VM_MAYWRITE);
 
 	pfrt_log_dev = to_pfrt_log_dev(file);
 
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 880224ec6abb..dd6c99223b8c 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -5572,8 +5572,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 		       proc->pid, vma->vm_start, vma->vm_end, "bad vm_flags", -EPERM);
 		return -EPERM;
 	}
-	vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
-	vma->vm_flags &= ~VM_MAYWRITE;
+	mod_vm_flags(vma, VM_DONTCOPY | VM_MIXEDMAP, VM_MAYWRITE);
 
 	vma->vm_ops = &binder_vm_ops;
 	vma->vm_private_data = proc;
diff --git a/drivers/char/mspec.c b/drivers/char/mspec.c
index f8231e2e84be..57bd36a28f95 100644
--- a/drivers/char/mspec.c
+++ b/drivers/char/mspec.c
@@ -206,7 +206,7 @@ mspec_mmap(struct file *file, struct vm_area_struct *vma,
 	refcount_set(&vdata->refcnt, 1);
 	vma->vm_private_data = vdata;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	if (vdata->type == MSPEC_UNCACHED)
 		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	vma->vm_ops = &mspec_vm_ops;
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index 007ac7a69ce7..57ecdb5c97fb 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -2363,7 +2363,7 @@ static int hisi_qm_uacce_mmap(struct uacce_queue *q,
 				return -EINVAL;
 		}
 
-		vma->vm_flags |= VM_IO;
+		set_vm_flags(vma, VM_IO);
 
 		return remap_pfn_range(vma, vma->vm_start,
 				       phys_base >> PAGE_SHIFT,
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 5494d745ced5..6e9726dfaa7e 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -308,7 +308,7 @@ static int dax_mmap(struct file *filp, struct vm_area_struct *vma)
 		return rc;
 
 	vma->vm_ops = &dax_vm_ops;
-	vma->vm_flags |= VM_HUGEPAGE;
+	set_vm_flags(vma, VM_HUGEPAGE);
 	return 0;
 }
 
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index e13e92609943..51cf836cf329 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -201,7 +201,7 @@ static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
 	if (rc < 0)
 		return rc;
 
-	vma->vm_flags |= VM_DONTCOPY;
+	set_vm_flags(vma, VM_DONTCOPY);
 	pfn = (base + idxd_get_wq_portal_full_offset(wq->id,
 				IDXD_PORTAL_LIMITED)) >> PAGE_SHIFT;
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index bb7350ea1d75..70b08a0d13cd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -257,7 +257,7 @@ static int amdgpu_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_str
 	 */
 	if (is_cow_mapping(vma->vm_flags) &&
 	    !(vma->vm_flags & VM_ACCESS_FLAGS))
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 
 	return drm_gem_ttm_mmap(obj, vma);
 }
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 6d291aa6386b..7beb8dd6a5e6 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -2879,8 +2879,8 @@ static int kfd_mmio_mmap(struct kfd_dev *dev, struct kfd_process *process,
 
 	address = dev->adev->rmmio_remap.bus_addr;
 
-	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
-				VM_DONTDUMP | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
+				VM_DONTDUMP | VM_PFNMAP);
 
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
index cd4e61bf0493..6cbe47cf9be5 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
@@ -159,8 +159,8 @@ int kfd_doorbell_mmap(struct kfd_dev *dev, struct kfd_process *process,
 	address = kfd_get_process_doorbells(pdd);
 	if (!address)
 		return -ENOMEM;
-	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
-				VM_DONTDUMP | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
+				VM_DONTDUMP | VM_PFNMAP);
 
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
index 729d26d648af..95cd20056cea 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
@@ -1052,8 +1052,8 @@ int kfd_event_mmap(struct kfd_process *p, struct vm_area_struct *vma)
 	pfn = __pa(page->kernel_address);
 	pfn >>= PAGE_SHIFT;
 
-	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE
-		       | VM_DONTDUMP | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE
+		       | VM_DONTDUMP | VM_PFNMAP);
 
 	pr_debug("Mapping signal page\n");
 	pr_debug("     start user address  == 0x%08lx\n", vma->vm_start);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index 51b1683ac5c1..b40f4b122918 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -1978,8 +1978,8 @@ int kfd_reserved_mem_mmap(struct kfd_dev *dev, struct kfd_process *process,
 		return -ENOMEM;
 	}
 
-	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND
-		| VM_NORESERVE | VM_DONTDUMP | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND
+		| VM_NORESERVE | VM_DONTDUMP | VM_PFNMAP);
 	/* Mapping pages to user process */
 	return remap_pfn_range(vma, vma->vm_start,
 			       PFN_DOWN(__pa(qpd->cwsr_kaddr)),
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index b8db675e7fb5..6ea7bcaa592b 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1047,7 +1047,7 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
 			goto err_drm_gem_object_put;
 		}
 
-		vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+		set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 		vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
 		vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 	}
diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
index 1e658c448366..41f241b9a581 100644
--- a/drivers/gpu/drm/drm_gem_dma_helper.c
+++ b/drivers/gpu/drm/drm_gem_dma_helper.c
@@ -530,8 +530,7 @@ int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
 	 * the whole buffer.
 	 */
 	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_DONTEXPAND;
+	mod_vm_flags(vma, VM_DONTEXPAND, VM_PFNMAP);
 
 	if (dma_obj->map_noncoherent) {
 		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index b602cd72a120..a5032dfac492 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -633,7 +633,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 	if (ret)
 		return ret;
 
-	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 	if (shmem->map_wc)
 		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
diff --git a/drivers/gpu/drm/drm_vm.c b/drivers/gpu/drm/drm_vm.c
index f024dc93939e..8867bb6c40e3 100644
--- a/drivers/gpu/drm/drm_vm.c
+++ b/drivers/gpu/drm/drm_vm.c
@@ -476,7 +476,7 @@ static int drm_mmap_dma(struct file *filp, struct vm_area_struct *vma)
 
 	if (!capable(CAP_SYS_ADMIN) &&
 	    (dma->flags & _DRM_DMA_USE_PCI_RO)) {
-		vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
+		clear_vm_flags(vma, VM_WRITE | VM_MAYWRITE);
 #if defined(__i386__) || defined(__x86_64__)
 		pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
 #else
@@ -492,7 +492,7 @@ static int drm_mmap_dma(struct file *filp, struct vm_area_struct *vma)
 
 	vma->vm_ops = &drm_vm_dma_ops;
 
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 
 	drm_vm_open_locked(dev, vma);
 	return 0;
@@ -560,7 +560,7 @@ static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
 		return -EINVAL;
 
 	if (!capable(CAP_SYS_ADMIN) && (map->flags & _DRM_READ_ONLY)) {
-		vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
+		clear_vm_flags(vma, VM_WRITE | VM_MAYWRITE);
 #if defined(__i386__) || defined(__x86_64__)
 		pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
 #else
@@ -628,7 +628,7 @@ static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
 	default:
 		return -EINVAL;	/* This should never happen. */
 	}
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 
 	drm_vm_open_locked(dev, vma);
 	return 0;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index c5ae5492e1af..9a5a317038a4 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -130,7 +130,7 @@ static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
 {
 	pgprot_t vm_page_prot;
 
-	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 
 	vm_page_prot = vm_get_page_prot(vma->vm_flags);
 
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 3e493f48e0d4..c330d415729c 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -274,7 +274,7 @@ static int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem *exynos_gem,
 	unsigned long vm_size;
 	int ret;
 
-	vma->vm_flags &= ~VM_PFNMAP;
+	clear_vm_flags(vma, VM_PFNMAP);
 	vma->vm_pgoff = 0;
 
 	vm_size = vma->vm_end - vma->vm_start;
@@ -368,7 +368,7 @@ static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct
 	if (obj->import_attach)
 		return dma_buf_mmap(obj->dma_buf, vma, 0);
 
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
 
 	DRM_DEV_DEBUG_KMS(to_dma_dev(obj->dev), "flags = 0x%x\n",
 			  exynos_gem->flags);
diff --git a/drivers/gpu/drm/gma500/framebuffer.c b/drivers/gpu/drm/gma500/framebuffer.c
index 8d5a37b8f110..471d5b3c1535 100644
--- a/drivers/gpu/drm/gma500/framebuffer.c
+++ b/drivers/gpu/drm/gma500/framebuffer.c
@@ -139,7 +139,7 @@ static int psbfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 	 */
 	vma->vm_ops = &psbfb_vm_ops;
 	vma->vm_private_data = (void *)fb;
-	vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i810/i810_dma.c b/drivers/gpu/drm/i810/i810_dma.c
index 9fb4dd63342f..bced8c30709e 100644
--- a/drivers/gpu/drm/i810/i810_dma.c
+++ b/drivers/gpu/drm/i810/i810_dma.c
@@ -102,7 +102,7 @@ static int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma)
 	buf = dev_priv->mmap_buffer;
 	buf_priv = buf->dev_private;
 
-	vma->vm_flags |= VM_DONTCOPY;
+	set_vm_flags(vma, VM_DONTCOPY);
 
 	buf_priv->currently_mapped = I810_BUF_MAPPED;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 0ad44f3868de..71b9e0485cb9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -979,7 +979,7 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 			i915_gem_object_put(obj);
 			return -EINVAL;
 		}
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 	}
 
 	anon = mmap_singleton(to_i915(dev));
@@ -988,7 +988,7 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 		return PTR_ERR(anon);
 	}
 
-	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
+	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO);
 
 	/*
 	 * We keep the ref on mmo->obj, not vm_file, but we require
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index 47e96b0289f9..427089733b87 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -158,7 +158,7 @@ static int mtk_drm_gem_object_mmap(struct drm_gem_object *obj,
 	 * dma_alloc_attrs() allocated a struct page table for mtk_gem, so clear
 	 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
 	 */
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
 	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 1dee0d18abbb..8aff3ae909af 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -1012,7 +1012,7 @@ static int msm_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_page_prot = msm_gem_pgprot(msm_obj, vm_get_page_prot(vma->vm_flags));
 
 	return 0;
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index cf571796fd26..9c0e7d6a3784 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -543,8 +543,7 @@ int omap_gem_mmap_obj(struct drm_gem_object *obj,
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP;
+	mod_vm_flags(vma, VM_MIXEDMAP, VM_PFNMAP);
 
 	if (omap_obj->flags & OMAP_BO_WC) {
 		vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 6edb7c52cb3d..735b64bbdcf2 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -251,8 +251,7 @@ static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
 	 * We allocated a struct page table for rk_obj, so clear
 	 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
 	 */
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
-	vma->vm_flags &= ~VM_PFNMAP;
+	mod_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);
 
 	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
 	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 979e7bc902f6..6cdc6c45ef27 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -574,7 +574,7 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
 		 * and set the vm_pgoff (used as a fake buffer offset by DRM)
 		 * to 0 as we want to map the whole buffer.
 		 */
-		vma->vm_flags &= ~VM_PFNMAP;
+		clear_vm_flags(vma, VM_PFNMAP);
 		vma->vm_pgoff = 0;
 
 		err = dma_mmap_wc(gem->dev->dev, vma, bo->vaddr, bo->iova,
@@ -588,8 +588,7 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
 	} else {
 		pgprot_t prot = vm_get_page_prot(vma->vm_flags);
 
-		vma->vm_flags |= VM_MIXEDMAP;
-		vma->vm_flags &= ~VM_PFNMAP;
+		mod_vm_flags(vma, VM_MIXEDMAP, VM_PFNMAP);
 
 		vma->vm_page_prot = pgprot_writecombine(prot);
 	}
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 5a3e4b891377..0861e6e33964 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -468,8 +468,7 @@ int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo)
 
 	vma->vm_private_data = bo;
 
-	vma->vm_flags |= VM_PFNMAP;
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_PFNMAP | VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
 	return 0;
 }
 EXPORT_SYMBOL(ttm_bo_mmap_obj);
diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c b/drivers/gpu/drm/virtio/virtgpu_vram.c
index 6b45b0429fef..5498a1dbef63 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vram.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vram.c
@@ -46,7 +46,7 @@ static int virtio_gpu_vram_mmap(struct drm_gem_object *obj,
 		return -EINVAL;
 
 	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
-	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
+	set_vm_flags(vma, VM_MIXEDMAP | VM_DONTEXPAND);
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 	vma->vm_ops = &virtio_gpu_vram_vm_ops;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
index 265f7c48d856..8c8015528b6f 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
@@ -97,7 +97,7 @@ int vmw_mmap(struct file *filp, struct vm_area_struct *vma)
 
 	/* Use VM_PFNMAP rather than VM_MIXEDMAP if not a COW mapping */
 	if (!is_cow_mapping(vma->vm_flags))
-		vma->vm_flags = (vma->vm_flags & ~VM_MIXEDMAP) | VM_PFNMAP;
+		mod_vm_flags(vma, VM_PFNMAP, VM_MIXEDMAP);
 
 	ttm_bo_put(bo); /* release extra ref taken by ttm_bo_mmap_obj() */
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4c95ebcdcc2d..18a93ad4aa1f 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -69,8 +69,7 @@ static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
 	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
 	 * the whole buffer.
 	 */
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
+	mod_vm_flags(vma, VM_MIXEDMAP | VM_DONTEXPAND, VM_PFNMAP);
 	vma->vm_pgoff = 0;
 
 	/*
diff --git a/drivers/hsi/clients/cmt_speech.c b/drivers/hsi/clients/cmt_speech.c
index 8069f795c864..952a31e742a1 100644
--- a/drivers/hsi/clients/cmt_speech.c
+++ b/drivers/hsi/clients/cmt_speech.c
@@ -1264,7 +1264,7 @@ static int cs_char_mmap(struct file *file, struct vm_area_struct *vma)
 	if (vma_pages(vma) != 1)
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_DONTDUMP | VM_DONTEXPAND;
+	set_vm_flags(vma, VM_IO | VM_DONTDUMP | VM_DONTEXPAND);
 	vma->vm_ops = &cs_char_vm_ops;
 	vma->vm_private_data = file->private_data;
 
diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
index 6c8215a47a60..a6f178bf3ded 100644
--- a/drivers/hwtracing/intel_th/msu.c
+++ b/drivers/hwtracing/intel_th/msu.c
@@ -1659,7 +1659,7 @@ static int intel_th_msc_mmap(struct file *file, struct vm_area_struct *vma)
 		atomic_dec(&msc->user_count);
 
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTCOPY;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTCOPY);
 	vma->vm_ops = &msc_mmap_ops;
 	return ret;
 }
diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
index 2712e699ba08..9a59e61c4194 100644
--- a/drivers/hwtracing/stm/core.c
+++ b/drivers/hwtracing/stm/core.c
@@ -715,7 +715,7 @@ static int stm_char_mmap(struct file *file, struct vm_area_struct *vma)
 	pm_runtime_get_sync(&stm->dev);
 
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &stm_mmap_vmops;
 	vm_iomap_memory(vma, phys, size);
 
diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index f5f9269fdc16..7294f2d33bc6 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -403,7 +403,7 @@ static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
 			ret = -EPERM;
 			goto done;
 		}
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 		addr = vma->vm_start;
 		for (i = 0 ; i < uctxt->egrbufs.numbufs; i++) {
 			memlen = uctxt->egrbufs.buffers[i].len;
@@ -528,7 +528,7 @@ static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
 		goto done;
 	}
 
-	vma->vm_flags = flags;
+	reset_vm_flags(vma, flags);
 	hfi1_cdbg(PROC,
 		  "%u:%u type:%u io/vf:%d/%d, addr:0x%llx, len:%lu(%lu), flags:0x%lx\n",
 		    ctxt, subctxt, type, mapio, vmf, memaddr, memlen,
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index c669ef6e47e7..538318c809b3 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2087,7 +2087,7 @@ static int mlx5_ib_mmap_clock_info_page(struct mlx5_ib_dev *dev,
 
 	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
 		return -EPERM;
-	vma->vm_flags &= ~VM_MAYWRITE;
+	clear_vm_flags(vma, VM_MAYWRITE);
 
 	if (!dev->mdev->clock_info)
 		return -EOPNOTSUPP;
@@ -2311,7 +2311,7 @@ static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vm
 
 		if (vma->vm_flags & VM_WRITE)
 			return -EPERM;
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 
 		/* Don't expose to user-space information it shouldn't have */
 		if (PAGE_SIZE > 4096)
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index 3937144b2ae5..16ef80df4b7f 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -733,7 +733,7 @@ static int qib_mmap_mem(struct vm_area_struct *vma, struct qib_ctxtdata *rcd,
 		}
 
 		/* don't allow them to later change with mprotect */
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 	}
 
 	pfn = virt_to_phys(kvaddr) >> PAGE_SHIFT;
@@ -769,7 +769,7 @@ static int mmap_ureg(struct vm_area_struct *vma, struct qib_devdata *dd,
 		phys = dd->physaddr + ureg;
 		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
-		vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
+		set_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND);
 		ret = io_remap_pfn_range(vma, vma->vm_start,
 					 phys >> PAGE_SHIFT,
 					 vma->vm_end - vma->vm_start,
@@ -810,8 +810,7 @@ static int mmap_piobufs(struct vm_area_struct *vma,
 	 * don't allow them to later change to readable with mprotect (for when
 	 * not initially mapped readable, as is normally the case)
 	 */
-	vma->vm_flags &= ~VM_MAYREAD;
-	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
+	mod_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND, VM_MAYREAD);
 
 	/* We used PAT if wc_cookie == 0 */
 	if (!dd->wc_cookie)
@@ -852,7 +851,7 @@ static int mmap_rcvegrbufs(struct vm_area_struct *vma,
 		goto bail;
 	}
 	/* don't allow them to later change to writable with mprotect */
-	vma->vm_flags &= ~VM_MAYWRITE;
+	clear_vm_flags(vma, VM_MAYWRITE);
 
 	start = vma->vm_start;
 
@@ -944,7 +943,7 @@ static int mmap_kvaddr(struct vm_area_struct *vma, u64 pgaddr,
 		 * Don't allow permission to later change to writable
 		 * with mprotect.
 		 */
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 	} else
 		goto bail;
 	len = vma->vm_end - vma->vm_start;
@@ -955,7 +954,7 @@ static int mmap_kvaddr(struct vm_area_struct *vma, u64 pgaddr,
 
 	vma->vm_pgoff = (unsigned long) addr >> PAGE_SHIFT;
 	vma->vm_ops = &qib_file_vm_ops;
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	ret = 1;
 
 bail:
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
index 6e8c4fbb8083..6f9237c2a26b 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
@@ -672,7 +672,7 @@ int usnic_ib_mmap(struct ib_ucontext *context,
 	usnic_dbg("\n");
 
 	us_ibdev = to_usdev(context->device);
-	vma->vm_flags |= VM_IO;
+	set_vm_flags(vma, VM_IO);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	vfid = vma->vm_pgoff;
 	usnic_dbg("Page Offset %lu PAGE_SHIFT %u VFID %u\n",
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
index 19176583dbde..7f1b7b5dd3f4 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
@@ -408,7 +408,7 @@ int pvrdma_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
 	}
 
 	/* Map UAR to kernel space, VM_LOCKED? */
-	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
+	set_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	if (io_remap_pfn_range(vma, start, context->uar.pfn, size,
 			       vma->vm_page_prot))
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index 5f1175f8b349..e66ae399749e 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -293,7 +293,7 @@ static int vb2_dc_mmap(void *buf_priv, struct vm_area_struct *vma)
 		return ret;
 	}
 
-	vma->vm_flags		|= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_private_data	= &buf->handler;
 	vma->vm_ops		= &vb2_common_vm_ops;
 
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index 959b45beb1f3..edb47240ec17 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -185,7 +185,7 @@ static int vb2_vmalloc_mmap(void *buf_priv, struct vm_area_struct *vma)
 	/*
 	 * Make sure that vm_areas for 2 buffers won't be merged together
 	 */
-	vma->vm_flags		|= VM_DONTEXPAND;
+	set_vm_flags(vma, VM_DONTEXPAND);
 
 	/*
 	 * Use common vm_area operations to track buffer refcount.
diff --git a/drivers/media/v4l2-core/videobuf-dma-contig.c b/drivers/media/v4l2-core/videobuf-dma-contig.c
index f2c439359557..c030823185ba 100644
--- a/drivers/media/v4l2-core/videobuf-dma-contig.c
+++ b/drivers/media/v4l2-core/videobuf-dma-contig.c
@@ -314,7 +314,7 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
 	}
 
 	vma->vm_ops = &videobuf_vm_ops;
-	vma->vm_flags |= VM_DONTEXPAND;
+	set_vm_flags(vma, VM_DONTEXPAND);
 	vma->vm_private_data = map;
 
 	dev_dbg(q->dev, "mmap %p: q=%p %08lx-%08lx (%lx) pgoff %08lx buf %d\n",
diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c
index 234e9f647c96..9adac4875f29 100644
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -630,8 +630,8 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
 	map->count    = 1;
 	map->q        = q;
 	vma->vm_ops   = &videobuf_vm_ops;
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
-	vma->vm_flags &= ~VM_IO; /* using shared anonymous pages */
+	/* using shared anonymous pages */
+	mod_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_IO);
 	vma->vm_private_data = map;
 	dprintk(1, "mmap %p: q=%p %08lx-%08lx pgoff %08lx bufs %d-%d\n",
 		map, q, vma->vm_start, vma->vm_end, vma->vm_pgoff, first, last);
diff --git a/drivers/media/v4l2-core/videobuf-vmalloc.c b/drivers/media/v4l2-core/videobuf-vmalloc.c
index 9b2443720ab0..48d439ccd414 100644
--- a/drivers/media/v4l2-core/videobuf-vmalloc.c
+++ b/drivers/media/v4l2-core/videobuf-vmalloc.c
@@ -247,7 +247,7 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
 	}
 
 	vma->vm_ops          = &videobuf_vm_ops;
-	vma->vm_flags       |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_private_data = map;
 
 	dprintk(1, "mmap %p: q=%p %08lx-%08lx (%lx) pgoff %08lx buf %d\n",
diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index acaa44809c58..17562e4efcb2 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -220,7 +220,7 @@ int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
 	pr_devel("%s: mmio physical: %llx pe: %i master:%i\n", __func__,
 		 ctx->psn_phys, ctx->pe , ctx->master);
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	vma->vm_ops = &cxl_mmap_vmops;
 	return 0;
diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
index 5e9ae7600d75..ad8eae764b9b 100644
--- a/drivers/misc/habanalabs/common/memory.c
+++ b/drivers/misc/habanalabs/common/memory.c
@@ -2082,7 +2082,7 @@ static int hl_ts_mmap(struct hl_mmap_mem_buf *buf, struct vm_area_struct *vma, v
 {
 	struct hl_ts_buff *ts_buff = buf->private;
 
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_DONTCOPY | VM_NORESERVE;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP | VM_DONTCOPY | VM_NORESERVE);
 	return remap_vmalloc_range(vma, ts_buff->user_buff_address, 0);
 }
 
diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
index 9f5e208701ba..4186f04da224 100644
--- a/drivers/misc/habanalabs/gaudi/gaudi.c
+++ b/drivers/misc/habanalabs/gaudi/gaudi.c
@@ -4236,8 +4236,8 @@ static int gaudi_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
 {
 	int rc;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
-			VM_DONTCOPY | VM_NORESERVE;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
+			VM_DONTCOPY | VM_NORESERVE);
 
 	rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr,
 				(dma_addr - HOST_PHYS_BASE), size);
diff --git a/drivers/misc/habanalabs/gaudi2/gaudi2.c b/drivers/misc/habanalabs/gaudi2/gaudi2.c
index e793fb2bdcbe..7311c3053944 100644
--- a/drivers/misc/habanalabs/gaudi2/gaudi2.c
+++ b/drivers/misc/habanalabs/gaudi2/gaudi2.c
@@ -5538,8 +5538,8 @@ static int gaudi2_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
 {
 	int rc;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
-			VM_DONTCOPY | VM_NORESERVE;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
+			VM_DONTCOPY | VM_NORESERVE);
 
 #ifdef _HAS_DMA_MMAP_COHERENT
 
@@ -10116,8 +10116,8 @@ static int gaudi2_block_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
 
 	address = pci_resource_start(hdev->pdev, SRAM_CFG_BAR_ID) + offset_in_bar;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
-			VM_DONTCOPY | VM_NORESERVE;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
+			VM_DONTCOPY | VM_NORESERVE);
 
 	rc = remap_pfn_range(vma, vma->vm_start, address >> PAGE_SHIFT,
 			block_size, vma->vm_page_prot);
diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
index 0f083fcf81a6..5e2aaa26ea29 100644
--- a/drivers/misc/habanalabs/goya/goya.c
+++ b/drivers/misc/habanalabs/goya/goya.c
@@ -2880,8 +2880,8 @@ static int goya_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
 {
 	int rc;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
-			VM_DONTCOPY | VM_NORESERVE;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
+			VM_DONTCOPY | VM_NORESERVE);
 
 	rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr,
 				(dma_addr - HOST_PHYS_BASE), size);
diff --git a/drivers/misc/ocxl/context.c b/drivers/misc/ocxl/context.c
index 9eb0d93b01c6..e6f941248e93 100644
--- a/drivers/misc/ocxl/context.c
+++ b/drivers/misc/ocxl/context.c
@@ -180,7 +180,7 @@ static int check_mmap_afu_irq(struct ocxl_context *ctx,
 	if ((vma->vm_flags & VM_READ) || (vma->vm_flags & VM_EXEC) ||
 		!(vma->vm_flags & VM_WRITE))
 		return -EINVAL;
-	vma->vm_flags &= ~(VM_MAYREAD | VM_MAYEXEC);
+	clear_vm_flags(vma, VM_MAYREAD | VM_MAYEXEC);
 	return 0;
 }
 
@@ -204,7 +204,7 @@ int ocxl_context_mmap(struct ocxl_context *ctx, struct vm_area_struct *vma)
 	if (rc)
 		return rc;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	vma->vm_ops = &ocxl_vmops;
 	return 0;
diff --git a/drivers/misc/ocxl/sysfs.c b/drivers/misc/ocxl/sysfs.c
index 25c78df8055d..9398246cac79 100644
--- a/drivers/misc/ocxl/sysfs.c
+++ b/drivers/misc/ocxl/sysfs.c
@@ -134,7 +134,7 @@ static int global_mmio_mmap(struct file *filp, struct kobject *kobj,
 		(afu->config.global_mmio_size >> PAGE_SHIFT))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	vma->vm_ops = &global_mmio_vmops;
 	vma->vm_private_data = afu;
diff --git a/drivers/misc/open-dice.c b/drivers/misc/open-dice.c
index 9dda47b3fd70..61b4747270aa 100644
--- a/drivers/misc/open-dice.c
+++ b/drivers/misc/open-dice.c
@@ -95,12 +95,12 @@ static int open_dice_mmap(struct file *filp, struct vm_area_struct *vma)
 		if (vma->vm_flags & VM_WRITE)
 			return -EPERM;
 		/* Ensure userspace cannot acquire VM_WRITE later. */
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 	}
 
 	/* Create write-combine mapping so all clients observe a wipe. */
 	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
-	vma->vm_flags |= VM_DONTCOPY | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTCOPY | VM_DONTDUMP);
 	return vm_iomap_memory(vma, drvdata->rmem->base, drvdata->rmem->size);
 }
 
diff --git a/drivers/misc/sgi-gru/grufile.c b/drivers/misc/sgi-gru/grufile.c
index 7ffcfc0bb587..8b777286d3b2 100644
--- a/drivers/misc/sgi-gru/grufile.c
+++ b/drivers/misc/sgi-gru/grufile.c
@@ -101,8 +101,8 @@ static int gru_file_mmap(struct file *file, struct vm_area_struct *vma)
 				vma->vm_end & (GRU_GSEG_PAGESIZE - 1))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_LOCKED |
-			 VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_LOCKED |
+			 VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_page_prot = PAGE_SHARED;
 	vma->vm_ops = &gru_vm_ops;
 
diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
index 905eff1f840e..f57e91cdb0f6 100644
--- a/drivers/misc/uacce/uacce.c
+++ b/drivers/misc/uacce/uacce.c
@@ -229,7 +229,7 @@ static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
 	if (!qfr)
 		return -ENOMEM;
 
-	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_WIPEONFORK;
+	set_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_WIPEONFORK);
 	vma->vm_ops = &uacce_vm_ops;
 	vma->vm_private_data = q;
 	qfr->type = type;
diff --git a/drivers/sbus/char/oradax.c b/drivers/sbus/char/oradax.c
index 21b7cb6e7e70..a096734daad0 100644
--- a/drivers/sbus/char/oradax.c
+++ b/drivers/sbus/char/oradax.c
@@ -389,7 +389,7 @@ static int dax_devmap(struct file *f, struct vm_area_struct *vma)
 	/* completion area is mapped read-only for user */
 	if (vma->vm_flags & VM_WRITE)
 		return -EPERM;
-	vma->vm_flags &= ~VM_MAYWRITE;
+	clear_vm_flags(vma, VM_MAYWRITE);
 
 	if (remap_pfn_range(vma, vma->vm_start, ctx->ca_buf_ra >> PAGE_SHIFT,
 			    len, vma->vm_page_prot))
diff --git a/drivers/scsi/cxlflash/ocxl_hw.c b/drivers/scsi/cxlflash/ocxl_hw.c
index 631eda2d467e..d386c25c2699 100644
--- a/drivers/scsi/cxlflash/ocxl_hw.c
+++ b/drivers/scsi/cxlflash/ocxl_hw.c
@@ -1167,7 +1167,7 @@ static int afu_mmap(struct file *file, struct vm_area_struct *vma)
 	    (ctx->psn_size >> PAGE_SHIFT))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	vma->vm_ops = &ocxlflash_vmops;
 	return 0;
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index ff9854f59964..7438adfe3bdc 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -1288,7 +1288,7 @@ sg_mmap(struct file *filp, struct vm_area_struct *vma)
 	}
 
 	sfp->mmap_called = 1;
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_private_data = sfp;
 	vma->vm_ops = &sg_mmap_vm_ops;
 out:
diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
index 5e53eed8ae95..df1c944e5058 100644
--- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
+++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
@@ -1072,7 +1072,7 @@ int hmm_bo_mmap(struct vm_area_struct *vma, struct hmm_buffer_object *bo)
 	vma->vm_private_data = bo;
 
 	vma->vm_ops = &hmm_bo_vm_ops;
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
 
 	/*
 	 * call hmm_bo_vm_open explicitly.
diff --git a/drivers/staging/media/deprecated/meye/meye.c b/drivers/staging/media/deprecated/meye/meye.c
index 5d87efd9b95c..2505e64d7119 100644
--- a/drivers/staging/media/deprecated/meye/meye.c
+++ b/drivers/staging/media/deprecated/meye/meye.c
@@ -1476,8 +1476,8 @@ static int meye_mmap(struct file *file, struct vm_area_struct *vma)
 	}
 
 	vma->vm_ops = &meye_vm_ops;
-	vma->vm_flags &= ~VM_IO;	/* not I/O memory */
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	/* not I/O memory */
+	mod_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_IO);
 	vma->vm_private_data = (void *) (offset / gbufsize);
 	meye_vm_open(vma);
 
diff --git a/drivers/staging/media/deprecated/stkwebcam/stk-webcam.c b/drivers/staging/media/deprecated/stkwebcam/stk-webcam.c
index 787edb3d47c2..196d1034f104 100644
--- a/drivers/staging/media/deprecated/stkwebcam/stk-webcam.c
+++ b/drivers/staging/media/deprecated/stkwebcam/stk-webcam.c
@@ -779,7 +779,7 @@ static int v4l_stk_mmap(struct file *fp, struct vm_area_struct *vma)
 	ret = remap_vmalloc_range(vma, sbuf->buffer, 0);
 	if (ret)
 		return ret;
-	vma->vm_flags |= VM_DONTEXPAND;
+	set_vm_flags(vma, VM_DONTEXPAND);
 	vma->vm_private_data = sbuf;
 	vma->vm_ops = &stk_v4l_vm_ops;
 	sbuf->v4lbuf.flags |= V4L2_BUF_FLAG_MAPPED;
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 2940559c3086..9fd64259904c 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1928,7 +1928,7 @@ static int tcmu_mmap(struct uio_info *info, struct vm_area_struct *vma)
 {
 	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
 
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &tcmu_vm_ops;
 
 	vma->vm_private_data = udev;
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index 43afbb7c5ab9..08802744f3b7 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -713,7 +713,7 @@ static const struct vm_operations_struct uio_logical_vm_ops = {
 
 static int uio_mmap_logical(struct vm_area_struct *vma)
 {
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &uio_logical_vm_ops;
 	return 0;
 }
diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index 837f3e57f580..d9aefa259883 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -279,8 +279,7 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
 		}
 	}
 
-	vma->vm_flags |= VM_IO;
-	vma->vm_flags |= (VM_DONTEXPAND | VM_DONTDUMP);
+	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &usbdev_vm_ops;
 	vma->vm_private_data = usbm;
 
diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index 094e812e9e69..9b2d48a65fdf 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1272,8 +1272,7 @@ static int mon_bin_mmap(struct file *filp, struct vm_area_struct *vma)
 	if (vma->vm_flags & VM_WRITE)
 		return -EPERM;
 
-	vma->vm_flags &= ~VM_MAYWRITE;
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	mod_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_MAYWRITE);
 	vma->vm_private_data = filp->private_data;
 	mon_bin_vma_open(vma);
 	return 0;
diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index e682bc7ee6c9..39dcce2e455b 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -512,7 +512,7 @@ static int vduse_domain_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct vduse_iova_domain *domain = file->private_data;
 
-	vma->vm_flags |= VM_DONTDUMP | VM_DONTEXPAND;
+	set_vm_flags(vma, VM_DONTDUMP | VM_DONTEXPAND);
 	vma->vm_private_data = domain;
 	vma->vm_ops = &vduse_domain_mmap_ops;
 
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 26a541cc64d1..86eb3fc9ffb4 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1799,7 +1799,7 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
 	 * See remap_pfn_range(), called from vfio_pci_fault() but we can't
 	 * change vm_flags within the fault handler.  Set them now.
 	 */
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &vfio_pci_mmap_ops;
 
 	return 0;
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index ec32f785dfde..7b81994a7d02 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -1315,7 +1315,7 @@ static int vhost_vdpa_mmap(struct file *file, struct vm_area_struct *vma)
 	if (vma->vm_end - vma->vm_start != notify.size)
 		return -ENOTSUPP;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &vhost_vdpa_vm_ops;
 	return 0;
 }
diff --git a/drivers/video/fbdev/68328fb.c b/drivers/video/fbdev/68328fb.c
index 7db03ed77c76..a794a740af10 100644
--- a/drivers/video/fbdev/68328fb.c
+++ b/drivers/video/fbdev/68328fb.c
@@ -391,7 +391,7 @@ static int mc68x328fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 #ifndef MMU
 	/* this is uClinux (no MMU) specific code */
 
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_start = videomemory;
 
 	return 0;
diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
index c730253ab85c..af0bfaa2d014 100644
--- a/drivers/video/fbdev/core/fb_defio.c
+++ b/drivers/video/fbdev/core/fb_defio.c
@@ -232,9 +232,9 @@ static const struct address_space_operations fb_deferred_io_aops = {
 int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
 {
 	vma->vm_ops = &fb_deferred_io_vm_ops;
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	if (!(info->flags & FBINFO_VIRTFB))
-		vma->vm_flags |= VM_IO;
+		set_vm_flags(vma, VM_IO);
 	vma->vm_private_data = info;
 	return 0;
 }
diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
index a15729beb9d1..ee4a8958dc68 100644
--- a/drivers/xen/gntalloc.c
+++ b/drivers/xen/gntalloc.c
@@ -525,7 +525,7 @@ static int gntalloc_mmap(struct file *filp, struct vm_area_struct *vma)
 
 	vma->vm_private_data = vm_priv;
 
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 
 	vma->vm_ops = &gntalloc_vmops;
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 4d9a3050de6a..6d5bb1ebb661 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -1055,10 +1055,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 
 	vma->vm_ops = &gntdev_vmops;
 
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_MIXEDMAP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP | VM_MIXEDMAP);
 
 	if (use_ptemod)
-		vma->vm_flags |= VM_DONTCOPY;
+		set_vm_flags(vma, VM_DONTCOPY);
 
 	vma->vm_private_data = map;
 	if (map->flags) {
diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
index dd5bbb6e1b6b..037547918630 100644
--- a/drivers/xen/privcmd-buf.c
+++ b/drivers/xen/privcmd-buf.c
@@ -156,7 +156,7 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
 	vma_priv->file_priv = file_priv;
 	vma_priv->users = 1;
 
-	vma->vm_flags |= VM_IO | VM_DONTEXPAND;
+	set_vm_flags(vma, VM_IO | VM_DONTEXPAND);
 	vma->vm_ops = &privcmd_buf_vm_ops;
 	vma->vm_private_data = vma_priv;
 
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 1edf45ee9890..4c8cfc6f86d8 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -934,8 +934,8 @@ static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
 	 * how to recreate these mappings */
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTCOPY |
-			 VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTCOPY |
+			 VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &privcmd_vm_ops;
 	vma->vm_private_data = NULL;
 
diff --git a/fs/aio.c b/fs/aio.c
index 650cd795aa7e..4106b3209e5e 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -392,7 +392,7 @@ static const struct vm_operations_struct aio_ring_vm_ops = {
 
 static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	vma->vm_flags |= VM_DONTEXPAND;
+	set_vm_flags(vma, VM_DONTEXPAND);
 	vma->vm_ops = &aio_ring_vm_ops;
 	return 0;
 }
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index 50e4e060db68..aa8695e685aa 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -408,7 +408,7 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
 		 * unpopulated ptes via cramfs_read_folio().
 		 */
 		int i;
-		vma->vm_flags |= VM_MIXEDMAP;
+		set_vm_flags(vma, VM_MIXEDMAP);
 		for (i = 0; i < pages && !ret; i++) {
 			vm_fault_t vmf;
 			unsigned long off = i * PAGE_SIZE;
diff --git a/fs/erofs/data.c b/fs/erofs/data.c
index f57f921683d7..e6413ced2bb1 100644
--- a/fs/erofs/data.c
+++ b/fs/erofs/data.c
@@ -429,7 +429,7 @@ static int erofs_file_mmap(struct file *file, struct vm_area_struct *vma)
 		return -EINVAL;
 
 	vma->vm_ops = &erofs_dax_vm_ops;
-	vma->vm_flags |= VM_HUGEPAGE;
+	set_vm_flags(vma, VM_HUGEPAGE);
 	return 0;
 }
 #else
diff --git a/fs/exec.c b/fs/exec.c
index c0df813d2b45..ed499e850970 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -270,7 +270,7 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
 	BUILD_BUG_ON(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP);
 	vma->vm_end = STACK_TOP_MAX;
 	vma->vm_start = vma->vm_end - PAGE_SIZE;
-	vma->vm_flags = VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP;
+	init_vm_flags(vma, VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP);
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 
 	err = insert_vm_struct(mm, vma);
@@ -834,7 +834,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	}
 
 	/* mprotect_fixup is overkill to remove the temporary stack flags */
-	vma->vm_flags &= ~VM_STACK_INCOMPLETE_SETUP;
+	clear_vm_flags(vma, VM_STACK_INCOMPLETE_SETUP);
 
 	stack_expand = 131072UL; /* randomly 32*4k (or 2*64k) pages */
 	stack_size = vma->vm_end - vma->vm_start;
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 7ac0a81bd371..baeb385b07c7 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -801,7 +801,7 @@ static int ext4_file_mmap(struct file *file, struct vm_area_struct *vma)
 	file_accessed(file);
 	if (IS_DAX(file_inode(file))) {
 		vma->vm_ops = &ext4_dax_vm_ops;
-		vma->vm_flags |= VM_HUGEPAGE;
+		set_vm_flags(vma, VM_HUGEPAGE);
 	} else {
 		vma->vm_ops = &ext4_file_vm_ops;
 	}
diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
index e23e802a8013..599969edc869 100644
--- a/fs/fuse/dax.c
+++ b/fs/fuse/dax.c
@@ -860,7 +860,7 @@ int fuse_dax_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	file_accessed(file);
 	vma->vm_ops = &fuse_dax_vm_ops;
-	vma->vm_flags |= VM_MIXEDMAP | VM_HUGEPAGE;
+	set_vm_flags(vma, VM_MIXEDMAP | VM_HUGEPAGE);
 	return 0;
 }
 
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 48f1a8ad2243..b0f59d8a7b09 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -132,7 +132,7 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 	 * way when do_mmap unwinds (may be important on powerpc
 	 * and ia64).
 	 */
-	vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND;
+	set_vm_flags(vma, VM_HUGETLB | VM_DONTEXPAND);
 	vma->vm_ops = &hugetlb_vm_ops;
 
 	ret = seal_check_future_write(info->seals, vma);
@@ -811,7 +811,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 	 * as input to create an allocation policy.
 	 */
 	vma_init(&pseudo_vma, mm);
-	pseudo_vma.vm_flags = (VM_HUGETLB | VM_MAYSHARE | VM_SHARED);
+	init_vm_flags(&pseudo_vma, VM_HUGETLB | VM_MAYSHARE | VM_SHARED);
 	pseudo_vma.vm_file = file;
 
 	for (index = start; index < end; index++) {
diff --git a/fs/orangefs/file.c b/fs/orangefs/file.c
index 167fa43b24f9..0f668db6bcf3 100644
--- a/fs/orangefs/file.c
+++ b/fs/orangefs/file.c
@@ -389,8 +389,7 @@ static int orangefs_file_mmap(struct file *file, struct vm_area_struct *vma)
 		     "orangefs_file_mmap: called on %pD\n", file);
 
 	/* set the sequential readahead hint */
-	vma->vm_flags |= VM_SEQ_READ;
-	vma->vm_flags &= ~VM_RAND_READ;
+	mod_vm_flags(vma, VM_SEQ_READ, VM_RAND_READ);
 
 	file_accessed(file);
 	vma->vm_ops = &orangefs_file_vm_ops;
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f937c4cd0214..9018645a359c 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1301,7 +1301,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			for_each_vma(vmi, vma) {
 				if (!(vma->vm_flags & VM_SOFTDIRTY))
 					continue;
-				vma->vm_flags &= ~VM_SOFTDIRTY;
+				clear_vm_flags(vma, VM_SOFTDIRTY);
 				vma_set_page_prot(vma);
 			}
 
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 09a81e4b1273..858e4e804f85 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -582,8 +582,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
 	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
 		return -EPERM;
 
-	vma->vm_flags &= ~(VM_MAYWRITE | VM_MAYEXEC);
-	vma->vm_flags |= VM_MIXEDMAP;
+	mod_vm_flags(vma, VM_MIXEDMAP, VM_MAYWRITE | VM_MAYEXEC);
 	vma->vm_ops = &vmcore_mmap_ops;
 
 	len = 0;
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index f3c75c6222de..40030059db53 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -113,7 +113,7 @@ static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
 {
 	const bool uffd_wp_changed = (vma->vm_flags ^ flags) & VM_UFFD_WP;
 
-	vma->vm_flags = flags;
+	reset_vm_flags(vma, flags);
 	/*
 	 * For shared mappings, we want to enable writenotify while
 	 * userfaultfd-wp is enabled (see vma_wants_writenotify()). We'll simply
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 595a5bcf46b9..bf777fed0dd4 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1429,7 +1429,7 @@ xfs_file_mmap(
 	file_accessed(file);
 	vma->vm_ops = &xfs_file_vm_ops;
 	if (IS_DAX(inode))
-		vma->vm_flags |= VM_HUGEPAGE;
+		set_vm_flags(vma, VM_HUGEPAGE);
 	return 0;
 }
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index da62bdd627bf..55335edd1373 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3652,7 +3652,7 @@ static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
 		 * VM_MAYWRITE as we still want them to be COW-writable.
 		 */
 		if (vma->vm_flags & VM_SHARED)
-			vma->vm_flags &= ~(VM_MAYWRITE);
+			clear_vm_flags(vma, VM_MAYWRITE);
 	}
 
 	return 0;
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 80f4b4d88aaf..d2c967cc2873 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -269,7 +269,7 @@ static int ringbuf_map_mmap_kern(struct bpf_map *map, struct vm_area_struct *vma
 		if (vma->vm_pgoff != 0 || vma->vm_end - vma->vm_start != PAGE_SIZE)
 			return -EPERM;
 	} else {
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 	}
 	/* remap_vmalloc_range() checks size and offset constraints */
 	return remap_vmalloc_range(vma, rb_map->rb,
@@ -290,7 +290,7 @@ static int ringbuf_map_mmap_user(struct bpf_map *map, struct vm_area_struct *vma
 			 */
 			return -EPERM;
 	} else {
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 	}
 	/* remap_vmalloc_range() checks size and offset constraints */
 	return remap_vmalloc_range(vma, rb_map->rb, vma->vm_pgoff + RINGBUF_PGOFF);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 64131f88c553..db19094c7ac7 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -882,10 +882,10 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
 	/* set default open/close callbacks */
 	vma->vm_ops = &bpf_map_default_vmops;
 	vma->vm_private_data = map;
-	vma->vm_flags &= ~VM_MAYEXEC;
+	clear_vm_flags(vma, VM_MAYEXEC);
 	if (!(vma->vm_flags & VM_WRITE))
 		/* disallow re-mapping with PROT_WRITE */
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 
 	err = map->ops->map_mmap(map, vma);
 	if (err)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index d56328e5080e..6745460dcf49 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6573,7 +6573,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 	 * Since pinned accounting is per vm we cannot allow fork() to copy our
 	 * vma.
 	 */
-	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &perf_mmap_vmops;
 
 	if (event->pmu->event_mapped)
diff --git a/kernel/kcov.c b/kernel/kcov.c
index e5cd09fd8a05..27fc1e26e1e1 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -489,7 +489,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 		goto exit;
 	}
 	spin_unlock_irqrestore(&kcov->lock, flags);
-	vma->vm_flags |= VM_DONTEXPAND;
+	set_vm_flags(vma, VM_DONTEXPAND);
 	for (off = 0; off < size; off += PAGE_SIZE) {
 		page = vmalloc_to_page(kcov->area + off);
 		res = vm_insert_page(vma, vma->vm_start + off, page);
diff --git a/kernel/relay.c b/kernel/relay.c
index ef12532168d9..085aa8707bc2 100644
--- a/kernel/relay.c
+++ b/kernel/relay.c
@@ -91,7 +91,7 @@ static int relay_mmap_buf(struct rchan_buf *buf, struct vm_area_struct *vma)
 		return -EINVAL;
 
 	vma->vm_ops = &relay_file_mmap_ops;
-	vma->vm_flags |= VM_DONTEXPAND;
+	set_vm_flags(vma, VM_DONTEXPAND);
 	vma->vm_private_data = buf;
 
 	return 0;
diff --git a/mm/madvise.c b/mm/madvise.c
index 7db6622f8293..74941a9784b4 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -176,7 +176,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
 	/*
 	 * vm_flags is protected by the mmap_lock held in write mode.
 	 */
-	vma->vm_flags = new_flags;
+	reset_vm_flags(vma, new_flags);
 	if (!vma->vm_file || vma_is_anon_shmem(vma)) {
 		error = replace_anon_vma_name(vma, anon_name);
 		if (error)
diff --git a/mm/memory.c b/mm/memory.c
index ec833a2e0601..d6902065e558 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1928,7 +1928,7 @@ int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
 	if (!(vma->vm_flags & VM_MIXEDMAP)) {
 		BUG_ON(mmap_read_trylock(vma->vm_mm));
 		BUG_ON(vma->vm_flags & VM_PFNMAP);
-		vma->vm_flags |= VM_MIXEDMAP;
+		set_vm_flags(vma, VM_MIXEDMAP);
 	}
 	/* Defer page refcount checking till we're about to map that page. */
 	return insert_pages(vma, addr, pages, num, vma->vm_page_prot);
@@ -1986,7 +1986,7 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
 	if (!(vma->vm_flags & VM_MIXEDMAP)) {
 		BUG_ON(mmap_read_trylock(vma->vm_mm));
 		BUG_ON(vma->vm_flags & VM_PFNMAP);
-		vma->vm_flags |= VM_MIXEDMAP;
+		set_vm_flags(vma, VM_MIXEDMAP);
 	}
 	return insert_page(vma, addr, page, vma->vm_page_prot);
 }
@@ -2452,7 +2452,7 @@ int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
 		vma->vm_pgoff = pfn;
 	}
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 
 	BUG_ON(addr >= end);
 	pfn -= addr >> PAGE_SHIFT;
diff --git a/mm/mlock.c b/mm/mlock.c
index 5c4fff93cd6b..5b2431221b4d 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -380,7 +380,7 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
 	 */
 	if (newflags & VM_LOCKED)
 		newflags |= VM_IO;
-	WRITE_ONCE(vma->vm_flags, newflags);
+	reset_vm_flags(vma, newflags);
 
 	lru_add_drain();
 	walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL);
@@ -388,7 +388,7 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
 
 	if (newflags & VM_IO) {
 		newflags &= ~VM_IO;
-		WRITE_ONCE(vma->vm_flags, newflags);
+		reset_vm_flags(vma, newflags);
 	}
 }
 
@@ -457,7 +457,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {
 		/* No work to do, and mlocking twice would be wrong */
-		vma->vm_flags = newflags;
+		reset_vm_flags(vma, newflags);
 	} else {
 		mlock_vma_pages_range(vma, start, end, newflags);
 	}
diff --git a/mm/mmap.c b/mm/mmap.c
index 323bd253b25a..2c6e9072e6a8 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2558,7 +2558,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	vma_iter_set(&vmi, addr);
 	vma->vm_start = addr;
 	vma->vm_end = end;
-	vma->vm_flags = vm_flags;
+	init_vm_flags(vma, vm_flags);
 	vma->vm_page_prot = vm_get_page_prot(vm_flags);
 	vma->vm_pgoff = pgoff;
 
@@ -2686,7 +2686,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	 * then new mapped in-place (which must be aimed as
 	 * a completely new data area).
 	 */
-	vma->vm_flags |= VM_SOFTDIRTY;
+	set_vm_flags(vma, VM_SOFTDIRTY);
 
 	vma_set_page_prot(vma);
 
@@ -2911,7 +2911,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		init_vma_prep(&vp, vma);
 		vma_prepare(&vp);
 		vma->vm_end = addr + len;
-		vma->vm_flags |= VM_SOFTDIRTY;
+		set_vm_flags(vma, VM_SOFTDIRTY);
 		vma_iter_store(vmi, vma);
 
 		vma_complete(&vp, vmi, mm);
@@ -2928,7 +2928,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	vma->vm_start = addr;
 	vma->vm_end = addr + len;
 	vma->vm_pgoff = addr >> PAGE_SHIFT;
-	vma->vm_flags = flags;
+	init_vm_flags(vma, flags);
 	vma->vm_page_prot = vm_get_page_prot(flags);
 	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
 		goto mas_store_fail;
@@ -2940,7 +2940,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	mm->data_vm += len >> PAGE_SHIFT;
 	if (flags & VM_LOCKED)
 		mm->locked_vm += (len >> PAGE_SHIFT);
-	vma->vm_flags |= VM_SOFTDIRTY;
+	set_vm_flags(vma, VM_SOFTDIRTY);
 	validate_mm(mm);
 	return 0;
 
diff --git a/mm/mprotect.c b/mm/mprotect.c
index cce6a0e58fb5..fba770d889ea 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -670,7 +670,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	 * vm_flags and vm_page_prot are protected by the mmap_lock
 	 * held in write mode.
 	 */
-	vma->vm_flags = newflags;
+	reset_vm_flags(vma, newflags);
 	if (vma_wants_manual_pte_write_upgrade(vma))
 		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
 	vma_set_page_prot(vma);
diff --git a/mm/mremap.c b/mm/mremap.c
index 35db9752cb6a..0f3c78e8eea5 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -662,7 +662,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 
 	/* Conceal VM_ACCOUNT so old reservation is not undone */
 	if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP)) {
-		vma->vm_flags &= ~VM_ACCOUNT;
+		clear_vm_flags(vma, VM_ACCOUNT);
 		if (vma->vm_start < old_addr)
 			account_start = vma->vm_start;
 		if (vma->vm_end > old_addr + old_len)
@@ -718,12 +718,12 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 	/* Restore VM_ACCOUNT if one or two pieces of vma left */
 	if (account_start) {
 		vma = vma_prev(&vmi);
-		vma->vm_flags |= VM_ACCOUNT;
+		set_vm_flags(vma, VM_ACCOUNT);
 	}
 
 	if (account_end) {
 		vma = vma_next(&vmi);
-		vma->vm_flags |= VM_ACCOUNT;
+		set_vm_flags(vma, VM_ACCOUNT);
 	}
 
 	return new_addr;
diff --git a/mm/nommu.c b/mm/nommu.c
index 9a166738909e..93d052b5a0c2 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -173,7 +173,7 @@ static void *__vmalloc_user_flags(unsigned long size, gfp_t flags)
 		mmap_write_lock(current->mm);
 		vma = find_vma(current->mm, (unsigned long)ret);
 		if (vma)
-			vma->vm_flags |= VM_USERMAP;
+			set_vm_flags(vma, VM_USERMAP);
 		mmap_write_unlock(current->mm);
 	}
 
@@ -950,7 +950,8 @@ static int do_mmap_private(struct vm_area_struct *vma,
 
 	atomic_long_add(total, &mmap_pages_allocated);
 
-	region->vm_flags = vma->vm_flags |= VM_MAPPED_COPY;
+	set_vm_flags(vma, VM_MAPPED_COPY);
+	region->vm_flags = vma->vm_flags;
 	region->vm_start = (unsigned long) base;
 	region->vm_end   = region->vm_start + len;
 	region->vm_top   = region->vm_start + (total << PAGE_SHIFT);
@@ -1047,7 +1048,7 @@ unsigned long do_mmap(struct file *file,
 	region->vm_flags = vm_flags;
 	region->vm_pgoff = pgoff;
 
-	vma->vm_flags = vm_flags;
+	init_vm_flags(vma, vm_flags);
 	vma->vm_pgoff = pgoff;
 
 	if (file) {
@@ -1111,7 +1112,7 @@ unsigned long do_mmap(struct file *file,
 			vma->vm_end = start + len;
 
 			if (pregion->vm_flags & VM_MAPPED_COPY)
-				vma->vm_flags |= VM_MAPPED_COPY;
+				set_vm_flags(vma, VM_MAPPED_COPY);
 			else {
 				ret = do_mmap_shared_file(vma);
 				if (ret < 0) {
@@ -1601,7 +1602,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 	if (addr != (pfn << PAGE_SHIFT))
 		return -EINVAL;
 
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	return 0;
 }
 EXPORT_SYMBOL(remap_pfn_range);
diff --git a/mm/secretmem.c b/mm/secretmem.c
index be3fff86ba00..236a1b6b4100 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -128,7 +128,7 @@ static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
 	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
 		return -EAGAIN;
 
-	vma->vm_flags |= VM_LOCKED | VM_DONTDUMP;
+	set_vm_flags(vma, VM_LOCKED | VM_DONTDUMP);
 	vma->vm_ops = &secretmem_vm_ops;
 
 	return 0;
diff --git a/mm/shmem.c b/mm/shmem.c
index 9e1015cbad29..3d7fc7a979c6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2304,7 +2304,7 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 		return ret;
 
 	/* arm64 - allow memory tagging on RAM-based files */
-	vma->vm_flags |= VM_MTE_ALLOWED;
+	set_vm_flags(vma, VM_MTE_ALLOWED);
 
 	file_accessed(file);
 	/* This is anonymous shared memory if it is unlinked at the time of mmap */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index dfde5324e480..8fccba7aa514 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3682,7 +3682,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
 		size -= PAGE_SIZE;
 	} while (size > 0);
 
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 
 	return 0;
 }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index f713c0422f0f..cfa2e8a92fcb 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1890,10 +1890,10 @@ int tcp_mmap(struct file *file, struct socket *sock,
 {
 	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
 		return -EPERM;
-	vma->vm_flags &= ~(VM_MAYWRITE | VM_MAYEXEC);
+	clear_vm_flags(vma, VM_MAYWRITE | VM_MAYEXEC);
 
 	/* Instruct vm_insert_page() to not mmap_read_lock(mm) */
-	vma->vm_flags |= VM_MIXEDMAP;
+	set_vm_flags(vma, VM_MIXEDMAP);
 
 	vma->vm_ops = &tcp_vm_ops;
 	return 0;
diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
index 0a6894cdc54d..9037deb5979e 100644
--- a/security/selinux/selinuxfs.c
+++ b/security/selinux/selinuxfs.c
@@ -262,7 +262,7 @@ static int sel_mmap_handle_status(struct file *filp,
 	if (vma->vm_flags & VM_WRITE)
 		return -EPERM;
 	/* disallow mprotect() turns it into writable */
-	vma->vm_flags &= ~VM_MAYWRITE;
+	clear_vm_flags(vma, VM_MAYWRITE);
 
 	return remap_pfn_range(vma, vma->vm_start,
 			       page_to_pfn(status),
@@ -506,13 +506,13 @@ static int sel_mmap_policy(struct file *filp, struct vm_area_struct *vma)
 {
 	if (vma->vm_flags & VM_SHARED) {
 		/* do not allow mprotect to make mapping writable */
-		vma->vm_flags &= ~VM_MAYWRITE;
+		clear_vm_flags(vma, VM_MAYWRITE);
 
 		if (vma->vm_flags & VM_WRITE)
 			return -EACCES;
 	}
 
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &sel_mmap_policy_ops;
 
 	return 0;
diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
index ac2efeb63a39..52473e2acd07 100644
--- a/sound/core/oss/pcm_oss.c
+++ b/sound/core/oss/pcm_oss.c
@@ -2910,7 +2910,7 @@ static int snd_pcm_oss_mmap(struct file *file, struct vm_area_struct *area)
 	}
 	/* set VM_READ access as well to fix memset() routines that do
 	   reads before writes (to improve performance) */
-	area->vm_flags |= VM_READ;
+	set_vm_flags(area, VM_READ);
 	if (substream == NULL)
 		return -ENXIO;
 	runtime = substream->runtime;
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 9c122e757efe..f716bdb70afe 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -3675,8 +3675,9 @@ static int snd_pcm_mmap_status(struct snd_pcm_substream *substream, struct file
 		return -EINVAL;
 	area->vm_ops = &snd_pcm_vm_ops_status;
 	area->vm_private_data = substream;
-	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
-	area->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
+	mod_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP,
+		     VM_WRITE | VM_MAYWRITE);
+
 	return 0;
 }
 
@@ -3712,7 +3713,7 @@ static int snd_pcm_mmap_control(struct snd_pcm_substream *substream, struct file
 		return -EINVAL;
 	area->vm_ops = &snd_pcm_vm_ops_control;
 	area->vm_private_data = substream;
-	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP);
 	return 0;
 }
 
@@ -3828,7 +3829,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = {
 int snd_pcm_lib_default_mmap(struct snd_pcm_substream *substream,
 			     struct vm_area_struct *area)
 {
-	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP);
 	if (!substream->ops->page &&
 	    !snd_dma_buffer_mmap(snd_pcm_get_dma_buf(substream), area))
 		return 0;
diff --git a/sound/soc/pxa/mmp-sspa.c b/sound/soc/pxa/mmp-sspa.c
index fb5a4390443f..fdd72d9bb46c 100644
--- a/sound/soc/pxa/mmp-sspa.c
+++ b/sound/soc/pxa/mmp-sspa.c
@@ -404,7 +404,7 @@ static int mmp_pcm_mmap(struct snd_soc_component *component,
 			struct snd_pcm_substream *substream,
 			struct vm_area_struct *vma)
 {
-	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	return remap_pfn_range(vma, vma->vm_start,
 		substream->dma_buffer.addr >> PAGE_SHIFT,
diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c
index e558931cce16..b51db622a69b 100644
--- a/sound/usb/usx2y/us122l.c
+++ b/sound/usb/usx2y/us122l.c
@@ -224,9 +224,9 @@ static int usb_stream_hwdep_mmap(struct snd_hwdep *hw,
 	}
 
 	area->vm_ops = &usb_stream_hwdep_vm_ops;
-	area->vm_flags |= VM_DONTDUMP;
+	set_vm_flags(area, VM_DONTDUMP);
 	if (!read)
-		area->vm_flags |= VM_DONTEXPAND;
+		set_vm_flags(area, VM_DONTEXPAND);
 	area->vm_private_data = us122l;
 	atomic_inc(&us122l->mmap_count);
 out:
diff --git a/sound/usb/usx2y/usX2Yhwdep.c b/sound/usb/usx2y/usX2Yhwdep.c
index c29da0341bc5..3abe6d891f98 100644
--- a/sound/usb/usx2y/usX2Yhwdep.c
+++ b/sound/usb/usx2y/usX2Yhwdep.c
@@ -61,7 +61,7 @@ static int snd_us428ctls_mmap(struct snd_hwdep *hw, struct file *filp, struct vm
 	}
 
 	area->vm_ops = &us428ctls_vm_ops;
-	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP);
 	area->vm_private_data = hw->private_data;
 	return 0;
 }
diff --git a/sound/usb/usx2y/usx2yhwdeppcm.c b/sound/usb/usx2y/usx2yhwdeppcm.c
index 767a227d54da..22ce93b2fb24 100644
--- a/sound/usb/usx2y/usx2yhwdeppcm.c
+++ b/sound/usb/usx2y/usx2yhwdeppcm.c
@@ -706,7 +706,7 @@ static int snd_usx2y_hwdep_pcm_mmap(struct snd_hwdep *hw, struct file *filp, str
 		return -ENODEV;
 
 	area->vm_ops = &snd_usx2y_hwdep_pcm_vm_ops;
-	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	set_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP);
 	area->vm_private_data = hw->private_data;
 	return 0;
 }
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Date: Wed, 25 Jan 2023 00:38:45 -0800
Mime-Version: 1.0
X-Mailer: git-send-email 2.39.1.405.gd4c25cc71f-goog
Message-ID: <20230125083851.27759-1-surenb@google.com>
Subject: [PATCH v2 0/6] introduce vm_flags modifier functions
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, 
	hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, 
	willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org, 
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org, 
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com, 
	surenb@google.com
Content-Type: text/plain; charset="UTF-8"

This patchset was originally published as part of the per-VMA locking work [1]
and was split out after a suggestion that it is viable on its own and that
doing so would facilitate the review process. It is now a prerequisite for the
next version of the per-VMA lock patchset, which reuses the vm_flags modifier
functions to lock the VMA when vm_flags are being updated.

VMA vm_flags modifications are usually done under exclusive mmap_lock
protection because this attribute affects other decisions like VMA merging
or splitting, and races should be prevented. Introduce vm_flags modifier
functions to enforce correct locking.

[1] https://lore.kernel.org/all/20230109205336.3665937-1-surenb@google.com/
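
Based on how the callers in this series use the API, the modifier functions
can be modeled in a small userspace sketch. The struct layout, the
`mmap_write_locked` field, and the flag values below are stand-ins, not the
kernel definitions; the real functions assert that mmap_lock is held for
writing via mmap_assert_write_locked():

```c
#include <assert.h>

typedef unsigned long vm_flags_t;

/* Minimal stand-in for the kernel's vm_area_struct. */
struct vm_area_struct {
	vm_flags_t vm_flags;
	int mmap_write_locked;	/* models mmap_assert_write_locked() */
};

static void assert_write_locked(struct vm_area_struct *vma)
{
	assert(vma->mmap_write_locked);
}

/* For a VMA not yet visible to other threads; no locking assertion. */
static void init_vm_flags(struct vm_area_struct *vma, vm_flags_t flags)
{
	vma->vm_flags = flags;
}

/* Replaces "vma->vm_flags |= flags". */
static void set_vm_flags(struct vm_area_struct *vma, vm_flags_t flags)
{
	assert_write_locked(vma);
	vma->vm_flags |= flags;
}

/* Replaces "vma->vm_flags &= ~flags". */
static void clear_vm_flags(struct vm_area_struct *vma, vm_flags_t flags)
{
	assert_write_locked(vma);
	vma->vm_flags &= ~flags;
}

/* Set and clear in one call, as in orangefs_file_mmap(). */
static void mod_vm_flags(struct vm_area_struct *vma, vm_flags_t set,
			 vm_flags_t clear)
{
	assert_write_locked(vma);
	vma->vm_flags |= set;
	vma->vm_flags &= ~clear;
}

/* Replaces a direct assignment "vma->vm_flags = flags". */
static void reset_vm_flags(struct vm_area_struct *vma, vm_flags_t flags)
{
	assert_write_locked(vma);
	vma->vm_flags = flags;
}
```

init_vm_flags() is intended for VMAs that are not yet visible to other
threads, which is why it is the only variant that skips the locking check.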

The patchset applies cleanly over mm-unstable branch of mm tree.

My apologies for an extremely large distribution list. The patch touches
lots of files and many are in arch/ and drivers/.

Suren Baghdasaryan (6):
  mm: introduce vma->vm_flags modifier functions
  mm: replace VM_LOCKED_CLEAR_MASK with VM_LOCKED_MASK
  mm: replace vma->vm_flags direct modifications with modifier calls
  mm: replace vma->vm_flags indirect modification in ksm_madvise
  mm: introduce mod_vm_flags_nolock and use it in untrack_pfn
  mm: export dump_mm()

 arch/arm/kernel/process.c                     |  2 +-
 arch/ia64/mm/init.c                           |  8 +--
 arch/loongarch/include/asm/tlb.h              |  2 +-
 arch/powerpc/kvm/book3s_hv_uvmem.c            |  5 +-
 arch/powerpc/kvm/book3s_xive_native.c         |  2 +-
 arch/powerpc/mm/book3s64/subpage_prot.c       |  2 +-
 arch/powerpc/platforms/book3s/vas-api.c       |  2 +-
 arch/powerpc/platforms/cell/spufs/file.c      | 14 ++---
 arch/s390/mm/gmap.c                           |  8 +--
 arch/x86/entry/vsyscall/vsyscall_64.c         |  2 +-
 arch/x86/kernel/cpu/sgx/driver.c              |  2 +-
 arch/x86/kernel/cpu/sgx/virt.c                |  2 +-
 arch/x86/mm/pat/memtype.c                     | 14 +++--
 arch/x86/um/mem_32.c                          |  2 +-
 drivers/acpi/pfr_telemetry.c                  |  2 +-
 drivers/android/binder.c                      |  3 +-
 drivers/char/mspec.c                          |  2 +-
 drivers/crypto/hisilicon/qm.c                 |  2 +-
 drivers/dax/device.c                          |  2 +-
 drivers/dma/idxd/cdev.c                       |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c      |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c     |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_events.c       |  4 +-
 drivers/gpu/drm/amd/amdkfd/kfd_process.c      |  4 +-
 drivers/gpu/drm/drm_gem.c                     |  2 +-
 drivers/gpu/drm/drm_gem_dma_helper.c          |  3 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c        |  2 +-
 drivers/gpu/drm/drm_vm.c                      |  8 +--
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  2 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c       |  4 +-
 drivers/gpu/drm/gma500/framebuffer.c          |  2 +-
 drivers/gpu/drm/i810/i810_dma.c               |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |  4 +-
 drivers/gpu/drm/mediatek/mtk_drm_gem.c        |  2 +-
 drivers/gpu/drm/msm/msm_gem.c                 |  2 +-
 drivers/gpu/drm/omapdrm/omap_gem.c            |  3 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c   |  3 +-
 drivers/gpu/drm/tegra/gem.c                   |  5 +-
 drivers/gpu/drm/ttm/ttm_bo_vm.c               |  3 +-
 drivers/gpu/drm/virtio/virtgpu_vram.c         |  2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c      |  2 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c       |  3 +-
 drivers/hsi/clients/cmt_speech.c              |  2 +-
 drivers/hwtracing/intel_th/msu.c              |  2 +-
 drivers/hwtracing/stm/core.c                  |  2 +-
 drivers/infiniband/hw/hfi1/file_ops.c         |  4 +-
 drivers/infiniband/hw/mlx5/main.c             |  4 +-
 drivers/infiniband/hw/qib/qib_file_ops.c      | 13 +++--
 drivers/infiniband/hw/usnic/usnic_ib_verbs.c  |  2 +-
 .../infiniband/hw/vmw_pvrdma/pvrdma_verbs.c   |  2 +-
 .../common/videobuf2/videobuf2-dma-contig.c   |  2 +-
 .../common/videobuf2/videobuf2-vmalloc.c      |  2 +-
 drivers/media/v4l2-core/videobuf-dma-contig.c |  2 +-
 drivers/media/v4l2-core/videobuf-dma-sg.c     |  4 +-
 drivers/media/v4l2-core/videobuf-vmalloc.c    |  2 +-
 drivers/misc/cxl/context.c                    |  2 +-
 drivers/misc/habanalabs/common/memory.c       |  2 +-
 drivers/misc/habanalabs/gaudi/gaudi.c         |  4 +-
 drivers/misc/habanalabs/gaudi2/gaudi2.c       |  8 +--
 drivers/misc/habanalabs/goya/goya.c           |  4 +-
 drivers/misc/ocxl/context.c                   |  4 +-
 drivers/misc/ocxl/sysfs.c                     |  2 +-
 drivers/misc/open-dice.c                      |  4 +-
 drivers/misc/sgi-gru/grufile.c                |  4 +-
 drivers/misc/uacce/uacce.c                    |  2 +-
 drivers/sbus/char/oradax.c                    |  2 +-
 drivers/scsi/cxlflash/ocxl_hw.c               |  2 +-
 drivers/scsi/sg.c                             |  2 +-
 .../staging/media/atomisp/pci/hmm/hmm_bo.c    |  2 +-
 drivers/staging/media/deprecated/meye/meye.c  |  4 +-
 .../media/deprecated/stkwebcam/stk-webcam.c   |  2 +-
 drivers/target/target_core_user.c             |  2 +-
 drivers/uio/uio.c                             |  2 +-
 drivers/usb/core/devio.c                      |  3 +-
 drivers/usb/mon/mon_bin.c                     |  3 +-
 drivers/vdpa/vdpa_user/iova_domain.c          |  2 +-
 drivers/vfio/pci/vfio_pci_core.c              |  2 +-
 drivers/vhost/vdpa.c                          |  2 +-
 drivers/video/fbdev/68328fb.c                 |  2 +-
 drivers/video/fbdev/core/fb_defio.c           |  4 +-
 drivers/xen/gntalloc.c                        |  2 +-
 drivers/xen/gntdev.c                          |  4 +-
 drivers/xen/privcmd-buf.c                     |  2 +-
 drivers/xen/privcmd.c                         |  4 +-
 fs/aio.c                                      |  2 +-
 fs/cramfs/inode.c                             |  2 +-
 fs/erofs/data.c                               |  2 +-
 fs/exec.c                                     |  4 +-
 fs/ext4/file.c                                |  2 +-
 fs/fuse/dax.c                                 |  2 +-
 fs/hugetlbfs/inode.c                          |  4 +-
 fs/orangefs/file.c                            |  3 +-
 fs/proc/task_mmu.c                            |  2 +-
 fs/proc/vmcore.c                              |  3 +-
 fs/userfaultfd.c                              |  2 +-
 fs/xfs/xfs_file.c                             |  2 +-
 include/linux/mm.h                            | 51 +++++++++++++++++--
 include/linux/mm_types.h                      |  8 ++-
 include/linux/pgtable.h                       |  5 +-
 kernel/bpf/ringbuf.c                          |  4 +-
 kernel/bpf/syscall.c                          |  4 +-
 kernel/events/core.c                          |  2 +-
 kernel/fork.c                                 |  2 +-
 kernel/kcov.c                                 |  2 +-
 kernel/relay.c                                |  2 +-
 mm/debug.c                                    |  1 +
 mm/hugetlb.c                                  |  4 +-
 mm/khugepaged.c                               |  2 +
 mm/ksm.c                                      |  2 +
 mm/madvise.c                                  |  2 +-
 mm/memory.c                                   | 19 +++----
 mm/memremap.c                                 |  4 +-
 mm/mlock.c                                    | 12 ++---
 mm/mmap.c                                     | 32 +++++++-----
 mm/mprotect.c                                 |  2 +-
 mm/mremap.c                                   |  8 +--
 mm/nommu.c                                    | 11 ++--
 mm/secretmem.c                                |  2 +-
 mm/shmem.c                                    |  2 +-
 mm/vmalloc.c                                  |  2 +-
 net/ipv4/tcp.c                                |  4 +-
 security/selinux/selinuxfs.c                  |  6 +--
 sound/core/oss/pcm_oss.c                      |  2 +-
 sound/core/pcm_native.c                       |  9 ++--
 sound/soc/pxa/mmp-sspa.c                      |  2 +-
 sound/usb/usx2y/us122l.c                      |  4 +-
 sound/usb/usx2y/usX2Yhwdep.c                  |  2 +-
 sound/usb/usx2y/usx2yhwdeppcm.c               |  2 +-
 129 files changed, 292 insertions(+), 233 deletions(-)

-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483882.750533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3F-0001XS-7N; Wed, 25 Jan 2023 09:26:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483882.750533; Wed, 25 Jan 2023 09:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3F-0001Vg-2x; Wed, 25 Jan 2023 09:26:41 +0000
Received: by outflank-mailman (input) for mailman id 483882;
 Wed, 25 Jan 2023 08:39:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+ALF=5W=flex--surenb.bounces.google.com=3pOrQYwYKCeEVXUHQEJRRJOH.FRPaHQ-GHYHOOLVWV.aHQSURMHFW.RUJ@srs-se1.protection.inumbo.net>)
 id 1pKbJ8-00083W-RP
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:39:02 +0000
Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com
 [2607:f8b0:4864:20::b4a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b7a3fc2b-9c8b-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:39:02 +0100 (CET)
Received: by mail-yb1-xb4a.google.com with SMTP id
 a62-20020a25ca41000000b0080b838a5199so3259467ybg.6
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 00:39:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7a3fc2b-9c8b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=lALTt0Cs0OJR7OUarHpyHJwBeXB89z2/3DAseSYxnFw=;
        b=n8rHaHJ5xFxeipCQlmXzBA+dLl0Q/zQ/7gpJayuWnen6FrB+fqCbvqlW8MPEXS4yhs
         bSL1pYwp+yfHB69Hp8M4s6QhrR4Bh+owtXwR86TSKSSBvXCHkuElvnXJB2pYl2OH40Sk
         6JqqQAtuTJiqmWpETSSPHy8wmwcbh6uJm29hqrWWlo46flaJs7O8ixI8LA96LhuTgZ3K
         6bp3GycG9gG7sxo2rAWEXyc1+zeJtRrm7H/AIZ+tvA74dyl+qbdsA4sZuPo2ibgOOxUJ
         sgArwf+v4A/cnZMkh1Y6cHQmLN0WEk5ISpsmps7k3RlDvYj2gT49VkTmf7cSvZeX/37U
         drJg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=lALTt0Cs0OJR7OUarHpyHJwBeXB89z2/3DAseSYxnFw=;
        b=EqneT7Krabjd8D2KjospDR4qxPB7O28v8b//w9JWKxTjNQCTmuWALYX4qG3Uxg0+3K
         aewb0dErl5vXMiNd9KwvEm1gUR61Ez6fpQxsMFD6ftgSL1dZNtJDAdrzCQDZbxetQ1C1
         hg9hlINh1SrDr4PUWd7zca5ouTK0/2X1PKh9C5SLzKYWbFcR+6N318IlJ8CnLL++wK7T
         Yv61X4swbvdfFkuWGknW0J2Cyi6mH54lBwDTzx5djuzDd98i0KsM25gut2qeooVezR5+
         fbPNdBlwJtCh5ig7b5PABFSJTEoTCYxBi0Zo7prFnERWYk5BsoNWfUVU7cGhlG4ELIOj
         WE0g==
X-Gm-Message-State: AFqh2kpK/e5DrGDXFN2kiTKjZljKzqYfswYQ2mKZhWoBIjJy/yBrSayQ
	93OtZq4ozJ5SNrUTEhnhtPZH1Vh1tEI=
X-Google-Smtp-Source: AMrXdXtyE3jRCV+tlQlIzUhs3264UUIvOIjga3fyBI7LKMqSD9M/fdJ0aCCVUOrfvnEcBmXTcpHYPERTKZU=
X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:200:f7b0:20e8:ce66:f98])
 (user=surenb job=sendgmr) by 2002:a25:c057:0:b0:802:898f:6e73 with SMTP id
 c84-20020a25c057000000b00802898f6e73mr2020239ybf.411.1674635940754; Wed, 25
 Jan 2023 00:39:00 -0800 (PST)
Date: Wed, 25 Jan 2023 00:38:47 -0800
In-Reply-To: <20230125083851.27759-1-surenb@google.com>
Mime-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com>
X-Mailer: git-send-email 2.39.1.405.gd4c25cc71f-goog
Message-ID: <20230125083851.27759-3-surenb@google.com>
Subject: [PATCH v2 2/6] mm: replace VM_LOCKED_CLEAR_MASK with VM_LOCKED_MASK
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, 
	hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, 
	willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org, 
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org, 
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com, 
	surenb@google.com
Content-Type: text/plain; charset="UTF-8"

To simplify the usage of VM_LOCKED_CLEAR_MASK in clear_vm_flags(),
replace it with the VM_LOCKED_MASK bitmask and convert all users.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h | 4 ++--
 kernel/fork.c      | 2 +-
 mm/hugetlb.c       | 4 ++--
 mm/mlock.c         | 6 +++---
 mm/mmap.c          | 6 +++---
 mm/mremap.c        | 2 +-
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b71f2809caac..da62bdd627bf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -421,8 +421,8 @@ extern unsigned int kobjsize(const void *objp);
 /* This mask defines which mm->def_flags a process can inherit its parent */
 #define VM_INIT_DEF_MASK	VM_NOHUGEPAGE
 
-/* This mask is used to clear all the VMA flags used by mlock */
-#define VM_LOCKED_CLEAR_MASK	(~(VM_LOCKED | VM_LOCKONFAULT))
+/* This mask represents all the VMA flag bits used by mlock */
+#define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
 
 /* Arch-specific flags to clear when updating VM flags on protection change */
 #ifndef VM_ARCH_CLEAR
diff --git a/kernel/fork.c b/kernel/fork.c
index 6683c1b0f460..03d472051236 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -669,7 +669,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 			tmp->anon_vma = NULL;
 		} else if (anon_vma_fork(tmp, mpnt))
 			goto fail_nomem_anon_vma_fork;
-		tmp->vm_flags &= ~(VM_LOCKED | VM_LOCKONFAULT);
+		clear_vm_flags(tmp, VM_LOCKED_MASK);
 		file = tmp->vm_file;
 		if (file) {
 			struct address_space *mapping = file->f_mapping;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d20c8b09890e..4ecdbad9a451 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6973,8 +6973,8 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
 	unsigned long s_end = sbase + PUD_SIZE;
 
 	/* Allow segments to share if only one is marked locked */
-	unsigned long vm_flags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
-	unsigned long svm_flags = svma->vm_flags & VM_LOCKED_CLEAR_MASK;
+	unsigned long vm_flags = vma->vm_flags & ~VM_LOCKED_MASK;
+	unsigned long svm_flags = svma->vm_flags & ~VM_LOCKED_MASK;
 
 	/*
 	 * match the virtual addresses, permission and the alignment of the
diff --git a/mm/mlock.c b/mm/mlock.c
index 0336f52e03d7..5c4fff93cd6b 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -497,7 +497,7 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
 		if (vma->vm_start != tmp)
 			return -ENOMEM;
 
-		newflags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
+		newflags = vma->vm_flags & ~VM_LOCKED_MASK;
 		newflags |= flags;
 		/* Here we know that  vma->vm_start <= nstart < vma->vm_end. */
 		tmp = vma->vm_end;
@@ -661,7 +661,7 @@ static int apply_mlockall_flags(int flags)
 	struct vm_area_struct *vma, *prev = NULL;
 	vm_flags_t to_add = 0;
 
-	current->mm->def_flags &= VM_LOCKED_CLEAR_MASK;
+	current->mm->def_flags &= ~VM_LOCKED_MASK;
 	if (flags & MCL_FUTURE) {
 		current->mm->def_flags |= VM_LOCKED;
 
@@ -681,7 +681,7 @@ static int apply_mlockall_flags(int flags)
 	for_each_vma(vmi, vma) {
 		vm_flags_t newflags;
 
-		newflags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
+		newflags = vma->vm_flags & ~VM_LOCKED_MASK;
 		newflags |= to_add;
 
 		/* Ignore errors */
diff --git a/mm/mmap.c b/mm/mmap.c
index d4abc6feced1..323bd253b25a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2671,7 +2671,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
 					is_vm_hugetlb_page(vma) ||
 					vma == get_gate_vma(current->mm))
-			vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+			clear_vm_flags(vma, VM_LOCKED_MASK);
 		else
 			mm->locked_vm += (len >> PAGE_SHIFT);
 	}
@@ -3340,8 +3340,8 @@ static struct vm_area_struct *__install_special_mapping(
 	vma->vm_start = addr;
 	vma->vm_end = addr + len;
 
-	vma->vm_flags = vm_flags | mm->def_flags | VM_DONTEXPAND | VM_SOFTDIRTY;
-	vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+	init_vm_flags(vma, (vm_flags | mm->def_flags |
+		      VM_DONTEXPAND | VM_SOFTDIRTY) & ~VM_LOCKED_MASK);
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 
 	vma->vm_ops = ops;
diff --git a/mm/mremap.c b/mm/mremap.c
index 1b3ee02bead7..35db9752cb6a 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -687,7 +687,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 
 	if (unlikely(!err && (flags & MREMAP_DONTUNMAP))) {
 		/* We always clear VM_LOCKED[ONFAULT] on the old vma */
-		vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+		clear_vm_flags(vma, VM_LOCKED_MASK);
 
 		/*
 		 * anon_vma links of the old vma is no longer needed after its page
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.483884.750544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3F-0001hq-Kq; Wed, 25 Jan 2023 09:26:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 483884.750544; Wed, 25 Jan 2023 09:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKc3F-0001e4-Ck; Wed, 25 Jan 2023 09:26:41 +0000
Received: by outflank-mailman (input) for mailman id 483884;
 Wed, 25 Jan 2023 08:39:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t0Nh=5W=flex--surenb.bounces.google.com=3qerQYwYKCeYacZMVJOWWOTM.KWUfMV-LMdMTTQaba.fMVXZWRMKb.WZO@srs-se1.protection.inumbo.net>)
 id 1pKbJD-00083W-SK
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 08:39:07 +0000
Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com
 [2607:f8b0:4864:20::114a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id baaf7a3c-9c8b-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 09:39:07 +0100 (CET)
Received: by mail-yw1-x114a.google.com with SMTP id
 00721157ae682-4d5097a95f5so179443777b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 00:39:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: baaf7a3c-9c8b-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=mVuGMhZwlEMXe0FAwC4rLE7OnnAfe6cAroxVvXklBMk=;
        b=VwtVZ3ffN/XX+4OSphvkCPu5LWSZpevMhnHytsAi7M7gsxKAkKbzq3/Ez6cAZGs+/z
         JzIcfmxATmRyceN+/Zt8AqltlXqbiRJWNjswPpQNjZE22YwQ3Z8QnOSAOet+uqjmXlIE
         OyZz8lnbt23bEWFA19bN7QyWM6BlHY9bxVkgVCuFfE21FaJHoilx3LiQj39QBuuIrUeb
         wQOUdgl76g8l+p4COnuz8oTsyo6A7P+o3cLtEEcXBMjD1VU3kk8/P76KYyyBr91hidcu
         b/JvxTM3TLDOWOtw79r0Hu/8F/9388RbMrqUiDk2vbRh2AxEmOh9ht/7njQAxt7Q0X5z
         07nA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=mVuGMhZwlEMXe0FAwC4rLE7OnnAfe6cAroxVvXklBMk=;
        b=NXpjnuEO9br+tJBgupZRXcPMesJyYyIaWlm1R3Y+jWvQVIRnaSqlyD7A3H35Gkd2tU
         aZ+AWyM17ljqn2T89w8rTgkEX3N0kUk68EaUJK7kKKKJdjOBpgeZn+KR35gWMwyIgOi2
         JO+2ohXMZvsvnfQFx5NZ9nk2eHBFwZjNhq6+4B1ie7XJX3loOc8QCBWDD8Ii/cXo4nbt
         NWQZzwtK7xcLdsvEM6Wq2fXcuJOmA8SA0Qu3RCzSV5kg3yHtzMKmMgT0T3uvjsnwgR3J
         E3ovMUChkwFw3WjYrVUX6xt4s9STECnzHZo2W4YeGq+WpEHYZ+eNYNK4DBOzmIwtGLky
         fa7A==
X-Gm-Message-State: AO0yUKVCeqnK6x2sho09qDltk5RLS/gdhOp4cY0srY+rU+rzN8LdZ7qs
	1vdIPcu8pNoM0pHlQgxCFHl7AzFRVIQ=
X-Google-Smtp-Source: AK7set/eGcbmxK47Xow2TUKC1msmny+SAO+Fcyz1BhvcA/uBRY0Kash94Ztfh9nd8yz0OXKqanY9SYPz0MI=
X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:200:f7b0:20e8:ce66:f98])
 (user=surenb job=sendgmr) by 2002:a25:c247:0:b0:80b:6201:bee7 with SMTP id
 s68-20020a25c247000000b0080b6201bee7mr702095ybf.340.1674635945821; Wed, 25
 Jan 2023 00:39:05 -0800 (PST)
Date: Wed, 25 Jan 2023 00:38:49 -0800
In-Reply-To: <20230125083851.27759-1-surenb@google.com>
Mime-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com>
X-Mailer: git-send-email 2.39.1.405.gd4c25cc71f-goog
Message-ID: <20230125083851.27759-5-surenb@google.com>
Subject: [PATCH v2 4/6] mm: replace vma->vm_flags indirect modification in ksm_madvise
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, 
	hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, 
	willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org, 
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org, 
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com, 
	surenb@google.com
Content-Type: text/plain; charset="UTF-8"

Replace indirect modifications to vma->vm_flags with calls to modifier
functions so that flag changes can be tracked and VMA locking correctness
is maintained. Add a BUG_ON check in ksm_madvise() to catch indirect
vm_flags modification attempts.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 arch/powerpc/kvm/book3s_hv_uvmem.c | 5 ++++-
 arch/s390/mm/gmap.c                | 5 ++++-
 mm/khugepaged.c                    | 2 ++
 mm/ksm.c                           | 2 ++
 4 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 1d67baa5557a..325a7a47d348 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -393,6 +393,7 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
 {
 	unsigned long gfn = memslot->base_gfn;
 	unsigned long end, start = gfn_to_hva(kvm, gfn);
+	unsigned long vm_flags;
 	int ret = 0;
 	struct vm_area_struct *vma;
 	int merge_flag = (merge) ? MADV_MERGEABLE : MADV_UNMERGEABLE;
@@ -409,12 +410,14 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
 			ret = H_STATE;
 			break;
 		}
+		vm_flags = vma->vm_flags;
 		ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
-			  merge_flag, &vma->vm_flags);
+			  merge_flag, &vm_flags);
 		if (ret) {
 			ret = H_STATE;
 			break;
 		}
+		reset_vm_flags(vma, vm_flags);
 		start = vma->vm_end;
 	} while (end > vma->vm_end);
 
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 3a695b8a1e3c..d5eb47dcdacb 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2587,14 +2587,17 @@ int gmap_mark_unmergeable(void)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
+	unsigned long vm_flags;
 	int ret;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	for_each_vma(vmi, vma) {
+		vm_flags = vma->vm_flags;
 		ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
-				  MADV_UNMERGEABLE, &vma->vm_flags);
+				  MADV_UNMERGEABLE, &vm_flags);
 		if (ret)
 			return ret;
+		reset_vm_flags(vma, vm_flags);
 	}
 	mm->def_flags &= ~VM_MERGEABLE;
 	return 0;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8abc59345bf2..76b24cd0c179 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -354,6 +354,8 @@ struct attribute_group khugepaged_attr_group = {
 int hugepage_madvise(struct vm_area_struct *vma,
 		     unsigned long *vm_flags, int advice)
 {
+	/* vma->vm_flags can be changed only using modifier functions */
+	BUG_ON(vm_flags == &vma->vm_flags);
 	switch (advice) {
 	case MADV_HUGEPAGE:
 #ifdef CONFIG_S390
diff --git a/mm/ksm.c b/mm/ksm.c
index 04f1c8c2df11..992b2be9f5e6 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2573,6 +2573,8 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 	struct mm_struct *mm = vma->vm_mm;
 	int err;
 
+	/* vma->vm_flags can be changed only using modifier functions */
+	BUG_ON(vm_flags == &vma->vm_flags);
 	switch (advice) {
 	case MADV_MERGEABLE:
 		/*
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:54 2023
Date: Wed, 25 Jan 2023 09:56:22 +0100
From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
Message-ID: <Y9DuttqjdKSRCVYh@dhcp22.suse.cz>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-2-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-2-surenb@google.com>

On Wed 25-01-23 00:38:46, Suren Baghdasaryan wrote:
> vm_flags are among VMA attributes which affect decisions like VMA merging
> and splitting. Therefore all vm_flags modifications are performed after
> taking exclusive mmap_lock to prevent vm_flags updates racing with such
> operations. Introduce modifier functions for vm_flags to be used whenever
> flags are updated. This way we can better check and control correct
> locking behavior during these updates.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  include/linux/mm.h       | 37 +++++++++++++++++++++++++++++++++++++
>  include/linux/mm_types.h |  8 +++++++-
>  2 files changed, 44 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c2f62bdce134..b71f2809caac 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -627,6 +627,43 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
>  	INIT_LIST_HEAD(&vma->anon_vma_chain);
>  }
>  
> +/* Use when VMA is not part of the VMA tree and needs no locking */
> +static inline void init_vm_flags(struct vm_area_struct *vma,
> +				 unsigned long flags)
> +{
> +	vma->vm_flags = flags;
> +}
> +
> +/* Use when VMA is part of the VMA tree and modifications need coordination */
> +static inline void reset_vm_flags(struct vm_area_struct *vma,
> +				  unsigned long flags)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	init_vm_flags(vma, flags);
> +}
> +
> +static inline void set_vm_flags(struct vm_area_struct *vma,
> +				unsigned long flags)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	vma->vm_flags |= flags;
> +}
> +
> +static inline void clear_vm_flags(struct vm_area_struct *vma,
> +				  unsigned long flags)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	vma->vm_flags &= ~flags;
> +}
> +
> +static inline void mod_vm_flags(struct vm_area_struct *vma,
> +				unsigned long set, unsigned long clear)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	vma->vm_flags |= set;
> +	vma->vm_flags &= ~clear;
> +}
> +
>  static inline void vma_set_anonymous(struct vm_area_struct *vma)
>  {
>  	vma->vm_ops = NULL;
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 2d6d790d9bed..6c7c70bf50dd 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -491,7 +491,13 @@ struct vm_area_struct {
>  	 * See vmf_insert_mixed_prot() for discussion.
>  	 */
>  	pgprot_t vm_page_prot;
> -	unsigned long vm_flags;		/* Flags, see mm.h. */
> +
> +	/*
> +	 * Flags, see mm.h.
> +	 * WARNING! Do not modify directly.
> +	 * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
> +	 */
> +	unsigned long vm_flags;
>  
>  	/*
>  	 * For areas with an address space and backing store,
> -- 
> 2.39.1

-- 
Michal Hocko
SUSE Labs
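For reference, the semantics of the helpers quoted above can be sketched as a self-contained userspace model: names mirror the patch, but the structs are stand-ins and mmap_assert_write_locked() is replaced by a plain assert on a stand-in lock flag.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for struct mm_struct / vm_area_struct. */
struct mm { bool write_locked; };
struct vma {
	struct mm *vm_mm;
	unsigned long vm_flags;	/* do not modify directly */
};

/* Use when the VMA is not yet in the VMA tree: no locking required. */
static inline void init_vm_flags(struct vma *vma, unsigned long flags)
{
	vma->vm_flags = flags;
}

/* All remaining modifiers assert that mmap_lock is held for writing. */
static inline void set_vm_flags(struct vma *vma, unsigned long flags)
{
	assert(vma->vm_mm->write_locked);
	vma->vm_flags |= flags;
}

static inline void clear_vm_flags(struct vma *vma, unsigned long flags)
{
	assert(vma->vm_mm->write_locked);
	vma->vm_flags &= ~flags;
}

static inline void mod_vm_flags(struct vma *vma,
				unsigned long set, unsigned long clear)
{
	assert(vma->vm_mm->write_locked);
	vma->vm_flags |= set;
	vma->vm_flags &= ~clear;
}
```

The point of funneling every update through these functions is that the locking assertion fires on any path that mutates flags without the exclusive mmap_lock, which a raw `vma->vm_flags |= X` could never check.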


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:26:55 2023
Date: Wed, 25 Jan 2023 10:09:37 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
Message-ID: <Y9Dx0cPXF2yoLwww@hirez.programming.kicks-ass.net>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-2-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-2-surenb@google.com>

On Wed, Jan 25, 2023 at 12:38:46AM -0800, Suren Baghdasaryan wrote:

> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 2d6d790d9bed..6c7c70bf50dd 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -491,7 +491,13 @@ struct vm_area_struct {
>  	 * See vmf_insert_mixed_prot() for discussion.
>  	 */
>  	pgprot_t vm_page_prot;
> -	unsigned long vm_flags;		/* Flags, see mm.h. */
> +
> +	/*
> +	 * Flags, see mm.h.
> +	 * WARNING! Do not modify directly.
> +	 * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
> +	 */
> +	unsigned long vm_flags;

We have __private and ACCESS_PRIVATE() to help with enforcing this.
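For context, those annotations live in include/linux/compiler_types.h: under sparse (__CHECKER__), __private marks the member with a noderef attribute so any direct dereference is flagged, while ACCESS_PRIVATE() force-casts the annotation away for the blessed accessors; in a normal build both compile away. A rough sketch of the mechanism applied to a vm_flags-like member (the struct and read_vm_flags() are illustrative, not kernel code):

```c
/* Rough sketch of __private / ACCESS_PRIVATE() from
 * include/linux/compiler_types.h.  Under sparse the noderef address
 * space makes direct member access a warning; outside sparse the
 * macros are no-ops and everything compiles as plain C. */
#ifdef __CHECKER__
# define __force	__attribute__((force))
# define __private	__attribute__((noderef))
# define ACCESS_PRIVATE(p, member) \
	(*((typeof((p)->member) __force *) &(p)->member))
#else
# define __private
# define ACCESS_PRIVATE(p, member) ((p)->member)
#endif

struct vma_like {
	unsigned long __private vm_flags;	/* direct access flagged by sparse */
};

/* Accessors go through ACCESS_PRIVATE(); everyone else gets a sparse
 * warning if they touch vm_flags directly. */
static inline unsigned long read_vm_flags(struct vma_like *v)
{
	return ACCESS_PRIVATE(v, vm_flags);
}

static inline void write_vm_flags(struct vma_like *v, unsigned long flags)
{
	ACCESS_PRIVATE(v, vm_flags) = flags;
}
```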


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:33:37 2023
Date: Wed, 25 Jan 2023 10:30:53 +0100
From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 3/6] mm: replace vma->vm_flags direct modifications
 with modifier calls
Message-ID: <Y9D2zXpy+9iyZNun@dhcp22.suse.cz>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-4-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-4-surenb@google.com>

On Wed 25-01-23 00:38:48, Suren Baghdasaryan wrote:
> Replace direct modifications to vma->vm_flags with calls to modifier
> functions to be able to track flag changes and to keep vma locking
> correctness.

Is this manual (git grep) based work, or did you use Coccinelle to
generate the patch?

My potentially incomplete check
$ git grep ">[[:space:]]*vm_flags[[:space:]]*[&|^]="

shows that nothing should be left after this. There are still quite a
lot of direct checks of the flags (more than 600). Maybe it would be
good to make the flags accessible only via accessors, which would also
prevent any future direct setting of those flags in an uncontrolled way.

Anyway
Acked-by: Michal Hocko <mhocko@suse.com>
-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:41:21 2023
Date: Wed, 25 Jan 2023 10:38:53 +0100
From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 4/6] mm: replace vma->vm_flags indirect modification
 in ksm_madvise
Message-ID: <Y9D4rWEsajV/WfNx@dhcp22.suse.cz>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-5-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-5-surenb@google.com>

On Wed 25-01-23 00:38:49, Suren Baghdasaryan wrote:
> Replace indirect modifications to vma->vm_flags with calls to modifier
> functions to be able to track flag changes and to keep vma locking
> correctness. Add a BUG_ON check in ksm_madvise() to catch indirect
> vm_flags modification attempts.

Those BUG_ONs scream too much IMHO. KSM is MM-internal code, so I
guess we should be willing to trust it.

> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Michal Hocko <mhocko@suse.com>
-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:43:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484093.750646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKcJB-0001xA-0X; Wed, 25 Jan 2023 09:43:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484093.750646; Wed, 25 Jan 2023 09:43:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKcJA-0001x3-SB; Wed, 25 Jan 2023 09:43:08 +0000
Received: by outflank-mailman (input) for mailman id 484093;
 Wed, 25 Jan 2023 09:42:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FU6P=5W=suse.com=mhocko@srs-se1.protection.inumbo.net>)
 id 1pKcIa-0001k2-N8
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 09:42:32 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 96933d9f-9c94-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 10:42:31 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 41D0E1F8B3;
 Wed, 25 Jan 2023 09:42:31 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EA4FB1358F;
 Wed, 25 Jan 2023 09:42:30 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id yLixOIb50GN6IgAAMHmgww
 (envelope-from <mhocko@suse.com>); Wed, 25 Jan 2023 09:42:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96933d9f-9c94-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674639751; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=wTTa+i+nU+PdO+h/KcQj1hMAey36ZrEColPN1KlfYvE=;
	b=clWvQxTxYrcNsHq7lTW0Gmn0i9h9Vid00Q4BuOr2XE/g1EjsCUMp3x1tEVjyMZZDFxjZ7J
	MRDv0FaWEHjjbLpg/9bXRPSZ0dzlfrRGaR9q7sau82jdMWVZcIKDfvaIB1lskwM2rm1evI
	GWHFNFcuhlRvYoauPeDnqXLmG4Hflh8=
Date: Wed, 25 Jan 2023 10:42:30 +0100
From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 5/6] mm: introduce mod_vm_flags_nolock and use it in
 untrack_pfn
Message-ID: <Y9D5hjcprLI92VKf@dhcp22.suse.cz>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-6-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-6-surenb@google.com>

On Wed 25-01-23 00:38:50, Suren Baghdasaryan wrote:
> In cases when VMA flags are modified after the VMA was isolated and
> mmap_lock was downgraded, flag modifications would trigger an assertion
> because the mmap write lock is not held.
> Introduce mod_vm_flags_nolock to be used in such situations.
> Pass a hint to untrack_pfn to conditionally use mod_vm_flags_nolock for
> flag modification and to avoid the assertion.

Neither the changelog nor the documentation of mod_vm_flags_nolock
really explains when it is safe to use it. This is really important for
potential future users.

> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  arch/x86/mm/pat/memtype.c | 10 +++++++---
>  include/linux/mm.h        | 12 +++++++++---
>  include/linux/pgtable.h   |  5 +++--
>  mm/memory.c               | 13 +++++++------
>  mm/memremap.c             |  4 ++--
>  mm/mmap.c                 | 16 ++++++++++------
>  6 files changed, 38 insertions(+), 22 deletions(-)
> 
> diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> index ae9645c900fa..d8adc0b42cf2 100644
> --- a/arch/x86/mm/pat/memtype.c
> +++ b/arch/x86/mm/pat/memtype.c
> @@ -1046,7 +1046,7 @@ void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
>   * can be for the entire vma (in which case pfn, size are zero).
>   */
>  void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> -		 unsigned long size)
> +		 unsigned long size, bool mm_wr_locked)
>  {
>  	resource_size_t paddr;
>  	unsigned long prot;
> @@ -1065,8 +1065,12 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
>  		size = vma->vm_end - vma->vm_start;
>  	}
>  	free_pfn_range(paddr, size);
> -	if (vma)
> -		clear_vm_flags(vma, VM_PAT);
> +	if (vma) {
> +		if (mm_wr_locked)
> +			clear_vm_flags(vma, VM_PAT);
> +		else
> +			mod_vm_flags_nolock(vma, 0, VM_PAT);
> +	}
>  }
>  
>  /*
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 55335edd1373..48d49930c411 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -656,12 +656,18 @@ static inline void clear_vm_flags(struct vm_area_struct *vma,
>  	vma->vm_flags &= ~flags;
>  }
>  
> +static inline void mod_vm_flags_nolock(struct vm_area_struct *vma,
> +				       unsigned long set, unsigned long clear)
> +{
> +	vma->vm_flags |= set;
> +	vma->vm_flags &= ~clear;
> +}
> +
>  static inline void mod_vm_flags(struct vm_area_struct *vma,
>  				unsigned long set, unsigned long clear)
>  {
>  	mmap_assert_write_locked(vma->vm_mm);
> -	vma->vm_flags |= set;
> -	vma->vm_flags &= ~clear;
> +	mod_vm_flags_nolock(vma, set, clear);
>  }
>  
>  static inline void vma_set_anonymous(struct vm_area_struct *vma)
> @@ -2087,7 +2093,7 @@ static inline void zap_vma_pages(struct vm_area_struct *vma)
>  }
>  void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
>  		struct vm_area_struct *start_vma, unsigned long start,
> -		unsigned long end);
> +		unsigned long end, bool mm_wr_locked);
>  
>  struct mmu_notifier_range;
>  
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 5fd45454c073..c63cd44777ec 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1185,7 +1185,8 @@ static inline int track_pfn_copy(struct vm_area_struct *vma)
>   * can be for the entire vma (in which case pfn, size are zero).
>   */
>  static inline void untrack_pfn(struct vm_area_struct *vma,
> -			       unsigned long pfn, unsigned long size)
> +			       unsigned long pfn, unsigned long size,
> +			       bool mm_wr_locked)
>  {
>  }
>  
> @@ -1203,7 +1204,7 @@ extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
>  			     pfn_t pfn);
>  extern int track_pfn_copy(struct vm_area_struct *vma);
>  extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> -			unsigned long size);
> +			unsigned long size, bool mm_wr_locked);
>  extern void untrack_pfn_moved(struct vm_area_struct *vma);
>  #endif
>  
> diff --git a/mm/memory.c b/mm/memory.c
> index d6902065e558..5b11b50e2c4a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1613,7 +1613,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>  static void unmap_single_vma(struct mmu_gather *tlb,
>  		struct vm_area_struct *vma, unsigned long start_addr,
>  		unsigned long end_addr,
> -		struct zap_details *details)
> +		struct zap_details *details, bool mm_wr_locked)
>  {
>  	unsigned long start = max(vma->vm_start, start_addr);
>  	unsigned long end;
> @@ -1628,7 +1628,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
>  		uprobe_munmap(vma, start, end);
>  
>  	if (unlikely(vma->vm_flags & VM_PFNMAP))
> -		untrack_pfn(vma, 0, 0);
> +		untrack_pfn(vma, 0, 0, mm_wr_locked);
>  
>  	if (start != end) {
>  		if (unlikely(is_vm_hugetlb_page(vma))) {
> @@ -1675,7 +1675,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
>   */
>  void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
>  		struct vm_area_struct *vma, unsigned long start_addr,
> -		unsigned long end_addr)
> +		unsigned long end_addr, bool mm_wr_locked)
>  {
>  	struct mmu_notifier_range range;
>  	struct zap_details details = {
> @@ -1689,7 +1689,8 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
>  				start_addr, end_addr);
>  	mmu_notifier_invalidate_range_start(&range);
>  	do {
> -		unmap_single_vma(tlb, vma, start_addr, end_addr, &details);
> +		unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
> +				 mm_wr_locked);
>  	} while ((vma = mas_find(&mas, end_addr - 1)) != NULL);
>  	mmu_notifier_invalidate_range_end(&range);
>  }
> @@ -1723,7 +1724,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>  	 * unmap 'address-end' not 'range.start-range.end' as range
>  	 * could have been expanded for hugetlb pmd sharing.
>  	 */
> -	unmap_single_vma(&tlb, vma, address, end, details);
> +	unmap_single_vma(&tlb, vma, address, end, details, false);
>  	mmu_notifier_invalidate_range_end(&range);
>  	tlb_finish_mmu(&tlb);
>  }
> @@ -2492,7 +2493,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
>  
>  	err = remap_pfn_range_notrack(vma, addr, pfn, size, prot);
>  	if (err)
> -		untrack_pfn(vma, pfn, PAGE_ALIGN(size));
> +		untrack_pfn(vma, pfn, PAGE_ALIGN(size), true);
>  	return err;
>  }
>  EXPORT_SYMBOL(remap_pfn_range);
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 08cbf54fe037..2f88f43d4a01 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -129,7 +129,7 @@ static void pageunmap_range(struct dev_pagemap *pgmap, int range_id)
>  	}
>  	mem_hotplug_done();
>  
> -	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
> +	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
>  	pgmap_array_delete(range);
>  }
>  
> @@ -276,7 +276,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
>  	if (!is_private)
>  		kasan_remove_zero_shadow(__va(range->start), range_len(range));
>  err_kasan:
> -	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
> +	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
>  err_pfn_remap:
>  	pgmap_array_delete(range);
>  	return error;
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 2c6e9072e6a8..69d440997648 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -78,7 +78,7 @@ core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
>  static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
>  		struct vm_area_struct *vma, struct vm_area_struct *prev,
>  		struct vm_area_struct *next, unsigned long start,
> -		unsigned long end);
> +		unsigned long end, bool mm_wr_locked);
>  
>  static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
>  {
> @@ -2136,14 +2136,14 @@ static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
>  static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
>  		struct vm_area_struct *vma, struct vm_area_struct *prev,
>  		struct vm_area_struct *next,
> -		unsigned long start, unsigned long end)
> +		unsigned long start, unsigned long end, bool mm_wr_locked)
>  {
>  	struct mmu_gather tlb;
>  
>  	lru_add_drain();
>  	tlb_gather_mmu(&tlb, mm);
>  	update_hiwater_rss(mm);
> -	unmap_vmas(&tlb, mt, vma, start, end);
> +	unmap_vmas(&tlb, mt, vma, start, end, mm_wr_locked);
>  	free_pgtables(&tlb, mt, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
>  				 next ? next->vm_start : USER_PGTABLES_CEILING);
>  	tlb_finish_mmu(&tlb);
> @@ -2391,7 +2391,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  			mmap_write_downgrade(mm);
>  	}
>  
> -	unmap_region(mm, &mt_detach, vma, prev, next, start, end);
> +	/*
> +	 * We can free page tables without write-locking mmap_lock because VMAs
> +	 * were isolated before we downgraded mmap_lock.
> +	 */
> +	unmap_region(mm, &mt_detach, vma, prev, next, start, end, !downgrade);
>  	/* Statistics and freeing VMAs */
>  	mas_set(&mas_detach, start);
>  	remove_mt(mm, &mas_detach);
> @@ -2704,7 +2708,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  
>  		/* Undo any partial mapping done by a device driver. */
>  		unmap_region(mm, &mm->mm_mt, vma, prev, next, vma->vm_start,
> -			     vma->vm_end);
> +			     vma->vm_end, true);
>  	}
>  	if (file && (vm_flags & VM_SHARED))
>  		mapping_unmap_writable(file->f_mapping);
> @@ -3031,7 +3035,7 @@ void exit_mmap(struct mm_struct *mm)
>  	tlb_gather_mmu_fullmm(&tlb, mm);
>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
>  	/* Use ULONG_MAX here to ensure all VMAs in the mm are unmapped */
> -	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX);
> +	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX, false);
>  	mmap_read_unlock(mm);
>  
>  	/*
> -- 
> 2.39.1

-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 09:43:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 09:43:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484095.750656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKcJX-0002R7-9Q; Wed, 25 Jan 2023 09:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484095.750656; Wed, 25 Jan 2023 09:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKcJX-0002R0-6U; Wed, 25 Jan 2023 09:43:31 +0000
Received: by outflank-mailman (input) for mailman id 484095;
 Wed, 25 Jan 2023 09:43:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FU6P=5W=suse.com=mhocko@srs-se1.protection.inumbo.net>)
 id 1pKcJA-0001wv-3x
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 09:43:08 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id abe741dd-9c94-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 10:43:07 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id DD33E21C75;
 Wed, 25 Jan 2023 09:43:06 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8FA761358F;
 Wed, 25 Jan 2023 09:43:06 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id usqPIqr50GPHIgAAMHmgww
 (envelope-from <mhocko@suse.com>); Wed, 25 Jan 2023 09:43:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abe741dd-9c94-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674639786; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=/H7fv8j3//amIhsuvsnszBORL7yxYM5gHjte1pGnd6k=;
	b=cJ84t8+Z/oyAwTTCtaCrRcZJz3vmv0agsy9pwxx1mjpy2wVyb9bbJa/fovrQrusm/Svjk5
	J6I5Atn+BG/uY1djc0bMaR2EhFHHAjtT6MjckOpioFidRZmoo/SxgZTemFBWI/hmnJ034Z
	w1GOPFueJOKBcny/L6MDveuVWEd9TYY=
Date: Wed, 25 Jan 2023 10:43:05 +0100
From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 6/6] mm: export dump_mm()
Message-ID: <Y9D5qS02j/fPLP/6@dhcp22.suse.cz>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-7-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-7-surenb@google.com>

On Wed 25-01-23 00:38:51, Suren Baghdasaryan wrote:
> mmap_assert_write_locked() is used in vm_flags modifiers. Because
> mmap_assert_write_locked() uses dump_mm() and vm_flags are sometimes
> modified from inside a module, it's necessary to export the
> dump_mm() function.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/debug.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/debug.c b/mm/debug.c
> index 9d3d893dc7f4..96d594e16292 100644
> --- a/mm/debug.c
> +++ b/mm/debug.c
> @@ -215,6 +215,7 @@ void dump_mm(const struct mm_struct *mm)
>  		mm->def_flags, &mm->def_flags
>  	);
>  }
> +EXPORT_SYMBOL(dump_mm);
>  
>  static bool page_init_poisoning __read_mostly = true;
>  
> -- 
> 2.39.1

-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 10:20:06 2023
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5] xen/arm: Use the correct format specifier
Date: Wed, 25 Jan 2023 10:19:43 +0000
Message-ID: <20230125101943.1854-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain

1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
when creating nodes in an fdt, any address that appears in the node
name should be formatted with 'PRIx64'. This conforms to the following
rule from https://elinux.org/Device_Tree_Linux

. node names
"unit-address does not have leading zeros"

Since 'PRIpaddr' introduces leading zeros, we cannot use it.

So, we introduce a wrapper, domain_fdt_begin_node(), which formats the
physical address using 'PRIx64'.

2. One should use 'PRIx64' to display a 'u64' in hex format. The
current use of 'PRIpaddr' to print a PTE is buggy, as a PTE is not a
physical address.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - 1. Moved the patch earlier.
2. Moved a part of change from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr"
into this patch.

v2 - 1. Use PRIx64 for appending addresses to fdt node names. This fixes the CI failure.

v3 - 1. Added a comment on top of domain_fdt_begin_node().
2. Check for the return of snprintf() in domain_fdt_begin_node().

v4 - 1. Grammatical error fixes.

 xen/arch/arm/domain_build.c | 64 +++++++++++++++++++++++--------------
 xen/arch/arm/gic-v2.c       |  6 ++--
 xen/arch/arm/mm.c           |  2 +-
 3 files changed, 44 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c2b97fa21e..a798e0b256 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1288,6 +1288,39 @@ static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
     return res;
 }
 
+/*
+ * Wrapper to convert physical address from paddr_t to uint64_t and
+ * invoke fdt_begin_node(). This is required as the physical address
+ * provided as part of node name should not contain any leading
+ * zeroes. Thus, one should use PRIx64 (instead of PRIpaddr) to append
+ * unit (which contains the physical address) with name to generate a
+ * node name.
+ */
+static int __init domain_fdt_begin_node(void *fdt, const char *name,
+                                        uint64_t unit)
+{
+    /*
+     * The size of the buffer to hold the longest possible string (i.e.
+     * interrupt-controller@ + a 64-bit number + \0).
+     */
+    char buf[38];
+    int ret;
+
+    /* ePAPR 3.4 */
+    ret = snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
+
+    if ( ret >= sizeof(buf) )
+    {
+        printk(XENLOG_ERR
+               "Insufficient buffer. Minimum size required is %d\n",
+               (ret + 1));
+
+        return -FDT_ERR_TRUNCATED;
+    }
+
+    return fdt_begin_node(fdt, buf);
+}
+
 static int __init make_memory_node(const struct domain *d,
                                    void *fdt,
                                    int addrcells, int sizecells,
@@ -1296,8 +1329,6 @@ static int __init make_memory_node(const struct domain *d,
     unsigned int i;
     int res, reg_size = addrcells + sizecells;
     int nr_cells = 0;
-    /* Placeholder for memory@ + a 64-bit number + \0 */
-    char buf[24];
     __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
     __be32 *cells;
 
@@ -1314,9 +1345,7 @@ static int __init make_memory_node(const struct domain *d,
 
     dt_dprintk("Create memory node\n");
 
-    /* ePAPR 3.4 */
-    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "memory", mem->bank[i].start);
     if ( res )
         return res;
 
@@ -1375,16 +1404,13 @@ static int __init make_shm_memory_node(const struct domain *d,
     {
         uint64_t start = mem->bank[i].start;
         uint64_t size = mem->bank[i].size;
-        /* Placeholder for xen-shmem@ + a 64-bit number + \0 */
-        char buf[27];
         const char compat[] = "xen,shared-memory-v1";
         /* Worst case addrcells + sizecells */
         __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
         __be32 *cells;
         unsigned int len = (addrcells + sizecells) * sizeof(__be32);
 
-        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start);
-        res = fdt_begin_node(fdt, buf);
+        res = domain_fdt_begin_node(fdt, "xen-shmem", mem->bank[i].start);
         if ( res )
             return res;
 
@@ -2716,12 +2742,9 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
     __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
     __be32 *cells;
     const struct domain *d = kinfo->d;
-    /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
-    char buf[38];
 
-    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
-             vgic_dist_base(&d->arch.vgic));
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "interrupt-controller",
+                                vgic_dist_base(&d->arch.vgic));
     if ( res )
         return res;
 
@@ -2771,14 +2794,10 @@ static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
     int res = 0;
     __be32 *reg, *cells;
     const struct domain *d = kinfo->d;
-    /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
-    char buf[38];
     unsigned int i, len = 0;
 
-    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
-             vgic_dist_base(&d->arch.vgic));
-
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "interrupt-controller",
+                                vgic_dist_base(&d->arch.vgic));
     if ( res )
         return res;
 
@@ -2858,11 +2877,8 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
     __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
     __be32 *cells;
     struct domain *d = kinfo->d;
-    /* Placeholder for sbsa-uart@ + a 64-bit number + \0 */
-    char buf[27];
 
-    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIx64, d->arch.vpl011.base_addr);
-    res = fdt_begin_node(fdt, buf);
+    res = domain_fdt_begin_node(fdt, "sbsa-uart", d->arch.vpl011.base_addr);
     if ( res )
         return res;
 
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 61802839cb..5d4d298b86 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1049,7 +1049,7 @@ static void __init gicv2_dt_init(void)
     if ( csize < SZ_8K )
     {
         printk(XENLOG_WARNING "GICv2: WARNING: "
-               "The GICC size is too small: %#"PRIx64" expected %#x\n",
+               "The GICC size is too small: %#"PRIpaddr" expected %#x\n",
                csize, SZ_8K);
         if ( platform_has_quirk(PLATFORM_QUIRK_GIC_64K_STRIDE) )
         {
@@ -1280,11 +1280,11 @@ static int __init gicv2_init(void)
         gicv2.map_cbase += aliased_offset;
 
         printk(XENLOG_WARNING
-               "GICv2: Adjusting CPU interface base to %#"PRIx64"\n",
+               "GICv2: Adjusting CPU interface base to %#"PRIpaddr"\n",
                cbase + aliased_offset);
     } else if ( csize == SZ_128K )
         printk(XENLOG_WARNING
-               "GICv2: GICC size=%#"PRIx64" but not aliased\n",
+               "GICv2: GICC size=%#"PRIpaddr" but not aliased\n",
                csize);
 
     gicv2.map_hbase = ioremap_nocache(hbase, PAGE_SIZE);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index f758cad545..b99806af99 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -263,7 +263,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
 
         pte = mapping[offsets[level]];
 
-        printk("%s[0x%03x] = 0x%"PRIpaddr"\n",
+        printk("%s[0x%03x] = 0x%"PRIx64"\n",
                level_strs[level], offsets[level], pte.bits);
 
         if ( level == 3 || !pte.walk.valid || !pte.walk.table )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 10:46:45 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176110-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176110: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-examine-bios:xen-install:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=352c89f72ddb67b8d9d4e492203f8c77f85c8df1
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 10:46:29 +0000

flight 176110 xen-unstable real [real]
flight 176119 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176110/
http://logs.test-lab.xenproject.org/osstest/logs/176119/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-examine-bios  6 xen-install              fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail pass in 176119-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  352c89f72ddb67b8d9d4e492203f8c77f85c8df1
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    5 days
Failing since        176003  2023-01-20 17:40:27 Z    4 days   11 attempts
Testing same since   176110  2023-01-24 22:40:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 816 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 11:18:45 2023
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-2-carlo.nonato@minervasys.tech> <a470be46-ab6e-3970-2b04-6f4035adf1cb@suse.com>
In-Reply-To: <a470be46-ab6e-3970-2b04-6f4035adf1cb@suse.com>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Wed, 25 Jan 2023 12:18:14 +0100
Message-ID: <CAG+AhRX9DVW5EfXKQoDG9hmcE0FORydTZd0pNm-0uqwddaN9NQ@mail.gmail.com>
Subject: Re: [PATCH v4 01/11] xen/common: add cache coloring common code
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Jan, Julien

On Tue, Jan 24, 2023 at 5:37 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 23.01.2023 16:47, Carlo Nonato wrote:
> > @@ -769,6 +776,13 @@ struct domain *domain_create(domid_t domid,
> >      return ERR_PTR(err);
> >  }
> >
> > +struct domain *domain_create(domid_t domid,
> > +                             struct xen_domctl_createdomain *config,
> > +                             unsigned int flags)
> > +{
> > +    return domain_create_llc_colored(domid, config, flags, 0, 0);
>
> Please can you use NULL when you mean a null pointer?
>
> > --- /dev/null
> > +++ b/xen/include/xen/llc_coloring.h
> > @@ -0,0 +1,54 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Last Level Cache (LLC) coloring common header
> > + *
> > + * Copyright (C) 2022 Xilinx Inc.
> > + *
> > + * Authors:
> > + *    Carlo Nonato <carlo.nonato@minervasys.tech>
> > + */
> > +#ifndef __COLORING_H__
> > +#define __COLORING_H__
> > +
> > +#include <xen/sched.h>
> > +#include <public/domctl.h>
> > +
> > +#ifdef CONFIG_HAS_LLC_COLORING
> > +
> > +#include <asm/llc_coloring.h>
> > +
> > +extern bool llc_coloring_enabled;
> > +
> > +int domain_llc_coloring_init(struct domain *d, unsigned int *colors,
> > +                             unsigned int num_colors);
> > +void domain_llc_coloring_free(struct domain *d);
> > +void domain_dump_llc_colors(struct domain *d);
> > +
> > +#else
> > +
> > +#define llc_coloring_enabled (false)
>
> While I agree this is needed, ...
>
> > +static inline int domain_llc_coloring_init(struct domain *d,
> > +                                           unsigned int *colors,
> > +                                           unsigned int num_colors)
> > +{
> > +    return 0;
> > +}
> > +static inline void domain_llc_coloring_free(struct domain *d) {}
> > +static inline void domain_dump_llc_colors(struct domain *d) {}
>
> ... I don't think you need any of these. Instead the declarations above
> simply need to be visible unconditionally (to be visible to the compiler
> when processing consuming code). We rely on DCE to remove such references
> in many other places.

So this is true for any other stub function that I used in the series, right?
All of them are guarded by the same kind of if statement: a check of the
llc_coloring_enabled value which, when coloring is disabled in Kconfig, is
always false, so DCE kicks in. Sorry for being so verbose, but I just want
to be sure I understood correctly.

> > +#endif /* CONFIG_HAS_LLC_COLORING */
> > +
> > +#define is_domain_llc_colored(d) (llc_coloring_enabled)
> > +
> > +#endif /* __COLORING_H__ */
> > +
> > +/*
> > + * Local variables:
> > + * mode: C
> > + * c-file-style: "BSD"
> > + * c-basic-offset: 4
> > + * tab-width: 4
> > + * indent-tabs-mode: nil
> > + * End:
> > + */
> > \ No newline at end of file
>
> This wants taking care of.
>
> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -602,6 +602,9 @@ struct domain
> >
> >      /* Holding CDF_* constant. Internal flags for domain creation. */
> >      unsigned int cdf;
> > +
> > +    unsigned int *llc_colors;
> > +    unsigned int num_llc_colors;
> >  };
>
> Why outside of any #ifdef, and why not in struct arch_domain?

Moving this into sched.h seemed like the natural continuation of the common +
arch-specific split. Notice that this split also exists because Julien pointed
out (as you did in some earlier revision) that cache coloring could be used
by other architectures in the future (even if x86 is excluded). Having two
maintainers saying the same thing sounded like a good reason to do that.

The missing #ifdef comes from a discussion I had with Julien in v2 about the
domctl interface, where he suggested removing it
(https://marc.info/?l=xen-devel&m=166151802002263). We were talking about
a different struct, but I thought the principle was the same. Anyway, I would
like the #ifdef too.

So @Jan, @Julien, can you help me fix this once for all?

> Jan

Thanks.

- Carlo Nonato


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 11:22:00 2023
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6] xen/arm: Probe the load/entry point address of an uImage correctly
Date: Wed, 25 Jan 2023 11:21:31 +0000
Message-ID: <20230125112131.19682-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain

Currently, kernel_uimage_probe() does not read the load/entry point address
set in the uImage header. Thus, info->zimage.start is 0 (the default value).
This causes kernel_zimage_place() to treat the binary (contained within the
uImage) as a position independent executable, and so it loads it at an
incorrect address.

The correct approach is to read "uimage.load" and set info->zimage.start.
This ensures that the binary is loaded at the correct address. Also, read
"uimage.ep" and set info->entry (i.e. the kernel entry address).

If the user provides a load address (i.e. "uimage.load") of 0x0, then the
image is treated as a position independent executable. Xen can load such an
image at any address it considers appropriate. A position independent
executable cannot have a fixed entry point address.

This behavior is applicable for both arm32 and arm64 platforms.

Earlier, for both arm32 and arm64 platforms, Xen ignored the load and entry
point addresses set in the uImage header. With this commit, Xen will use them.
This makes the behavior of Xen consistent with U-Boot for uImage headers.

Users who want to use Xen with statically partitioned domains can provide a
non-zero load address and entry address for the dom0/domU kernel. The load
and entry addresses provided must be within the memory region allocated by
Xen.

A deviation from U-Boot behaviour is that we consider a load address of 0x0
to denote that the image supports position independent execution. This is
to make the behavior consistent across uImage and zImage.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from v1 :-
1. Added a check to ensure load address and entry address are the same.
2. Considered load address == 0x0 as position independent execution.
3. Ensured that the uImage header interpretation is consistent across
arm32 and arm64.

v2 :-
1. Mentioned the change in existing behavior in booting.txt.
2. Updated booting.txt with a new section to document "Booting Guests".

v3 :-
1. Removed the constraint that the entry point should be same as the load
address. Thus, Xen uses both the load address and entry point to determine
where the image is to be copied and the start address.
2. Updated documentation to denote that load address and start address
should be within the memory region allocated by Xen.
3. Added constraint that user cannot provide entry point for a position
independent executable (PIE) image.

v4 :-
1. Explicitly mentioned the version in booting.txt from when the uImage
probing behavior has changed.
2. Logged the requested load address and entry point parsed from the uImage
header.
3. Some style issues.

v5 :-
1. Set info->zimage.text_offset = 0 in kernel_uimage_probe().
2. Mention that if the kernel has a legacy image header on top of a
zImage/zImage64 header, then the attributes from the legacy image header are
used to determine the load address, entry point, etc. Thus, the
zImage/zImage64 header is effectively ignored.

This is true because Xen currently does not support recursive probing of
kernel headers, i.e. if a uImage header is probed, then Xen will not attempt
to see whether there is an underlying zImage/zImage64 header.

 docs/misc/arm/booting.txt         | 30 ++++++++++++++++
 xen/arch/arm/include/asm/kernel.h |  2 +-
 xen/arch/arm/kernel.c             | 58 +++++++++++++++++++++++++++++--
 3 files changed, 86 insertions(+), 4 deletions(-)

diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
index 3e0c03e065..1837579aef 100644
--- a/docs/misc/arm/booting.txt
+++ b/docs/misc/arm/booting.txt
@@ -23,6 +23,32 @@ The exceptions to this on 32-bit ARM are as follows:
 
 There are no exception on 64-bit ARM.
 
+Booting Guests
+--------------
+
+Xen supports the legacy image header[3], the zImage protocol for 32-bit
+ARM Linux[1] and the Image protocol defined for ARM64[2].
+
+Until Xen 4.17, in the case of the legacy image protocol, Xen ignored the
+load address and entry point specified in the header. This has now changed.
+
+Now, Xen loads the image at the load address provided in the header, and
+the entry point is used as the kernel start address.
+
+A deviation from U-Boot is that Xen treats "load address == 0x0" as
+position independent execution (PIE). Thus, Xen will load such an image
+at an address it considers appropriate. Also, the user cannot specify the
+entry point of a PIE image, since the start address cannot be
+predetermined.
+
+Users who want to use Xen with statically partitioned domains can provide
+a fixed non-zero load address and start address for the dom0/domU kernel.
+The load address and start address specified by the user in the header must
+be within the memory region allocated by Xen.
+
+Also, it is to be noted that if the user provides the legacy image header
+on top of a zImage or Image header, then Xen uses only the attributes of
+the legacy image header to determine the load address, entry point, etc.
 
 Firmware/bootloader requirements
 --------------------------------
@@ -39,3 +65,7 @@ Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/t
 
 [2] linux/Documentation/arm64/booting.rst
 Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst
+
+[3] legacy format header
+Latest version: https://source.denx.de/u-boot/u-boot/-/blob/master/include/image.h#L315
+https://linux.die.net/man/1/mkimage
diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
index 5bb30c3f2f..4617cdc83b 100644
--- a/xen/arch/arm/include/asm/kernel.h
+++ b/xen/arch/arm/include/asm/kernel.h
@@ -72,7 +72,7 @@ struct kernel_info {
 #ifdef CONFIG_ARM_64
             paddr_t text_offset; /* 64-bit Image only */
 #endif
-            paddr_t start; /* 32-bit zImage only */
+            paddr_t start; /* Must be 0 for 64-bit Image */
         } zimage;
     };
 };
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 23b840ea9e..36081e73f1 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -127,7 +127,7 @@ static paddr_t __init kernel_zimage_place(struct kernel_info *info)
     paddr_t load_addr;
 
 #ifdef CONFIG_ARM_64
-    if ( info->type == DOMAIN_64BIT )
+    if ( (info->type == DOMAIN_64BIT) && (info->zimage.start == 0) )
         return info->mem.bank[0].start + info->zimage.text_offset;
 #endif
 
@@ -162,7 +162,12 @@ static void __init kernel_zimage_load(struct kernel_info *info)
     void *kernel;
     int rc;
 
-    info->entry = load_addr;
+    /*
+     * If the image does not have a fixed entry point, then use the load
+     * address as the entry point.
+     */
+    if ( info->entry == 0 )
+        info->entry = load_addr;
 
     place_modules(info, load_addr, load_addr + len);
 
@@ -223,10 +228,38 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
     if ( len > size - sizeof(uimage) )
         return -EINVAL;
 
+    info->zimage.start = be32_to_cpu(uimage.load);
+    info->entry = be32_to_cpu(uimage.ep);
+
+    /*
+     * While U-Boot considers 0x0 to be a valid load/start address, for Xen
+     * to maintain parity with zImage, we consider 0x0 to denote a position
+     * independent image. That means Xen is free to load such an image at
+     * any valid address.
+     */
+    if ( info->zimage.start == 0 )
+        printk(XENLOG_INFO
+               "No load address provided. Xen will decide where to load it.\n");
+    else
+        printk(XENLOG_INFO
+               "Provided load address: %"PRIpaddr" and entry address: %"PRIpaddr"\n",
+               info->zimage.start, info->entry);
+
+    /*
+     * If the image supports position independent execution, then the user
+     * cannot provide an entry point, as Xen will load such an image at any
+     * appropriate memory address. Thus, we need to return an error.
+     */
+    if ( (info->zimage.start == 0) && (info->entry != 0) )
+    {
+        printk(XENLOG_ERR
+               "Entry point cannot be non-zero for a PIE image.\n");
+        return -EINVAL;
+    }
+
     info->zimage.kernel_addr = addr + sizeof(uimage);
     info->zimage.len = len;
 
-    info->entry = info->zimage.start;
     info->load = kernel_zimage_load;
 
 #ifdef CONFIG_ARM_64
@@ -242,6 +275,15 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
         printk(XENLOG_ERR "Unsupported uImage arch type %d\n", uimage.arch);
         return -EINVAL;
     }
+
+    /*
+     * If there is a uImage header, then we do not parse the zImage or
+     * zImage64 header. In other words, if the user provides a uImage header
+     * on top of a zImage or zImage64 header, Xen uses the attributes of the
+     * uImage header only. Thus, Xen uses the uimage.load attribute to
+     * determine the load address, and zimage.text_offset is ignored.
+     */
+    info->zimage.text_offset = 0;
 #endif
 
     return 0;
@@ -366,6 +408,7 @@ static int __init kernel_zimage64_probe(struct kernel_info *info,
     info->zimage.kernel_addr = addr;
     info->zimage.len = end - start;
     info->zimage.text_offset = zimage.text_offset;
+    info->zimage.start = 0;
 
     info->load = kernel_zimage_load;
 
@@ -436,6 +479,15 @@ int __init kernel_probe(struct kernel_info *info,
     u64 kernel_addr, initrd_addr, dtb_addr, size;
     int rc;
 
+    /*
+     * We need to initialize info->entry to 0. This field may be populated
+     * during kernel_xxx_probe() if the image has a fixed entry point (e.g.
+     * uimage.ep).
+     * We will use this to determine whether the image has a fixed entry
+     * point or the load address should be used as the start address.
+     */
+    info->entry = 0;
+
     /* domain is NULL only for the hardware domain */
     if ( domain == NULL )
     {
-- 
2.17.1
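For readers unfamiliar with the legacy format, the probing rule added by this patch can be sketched in Python. This is a minimal illustration, not Xen code: the 64-byte legacy header layout follows U-Boot's include/image.h (all 32-bit fields big-endian, mirroring the be32_to_cpu() calls in kernel_uimage_probe()), CRC fields are left zero, and the arch/os/type constants are assumed values for a Linux arm64 kernel image.

```python
import struct

UIMAGE_MAGIC = 0x27051956  # legacy uImage magic (from U-Boot's image.h)

# 64-byte legacy header: seven big-endian u32s, four u8s, 32-byte name.
HDR_FMT = ">IIIIIIIBBBB32s"

def make_uimage_header(load, ep, payload_size):
    """Pack a minimal legacy header (CRCs left 0 for illustration)."""
    return struct.pack(HDR_FMT, UIMAGE_MAGIC, 0, 0, payload_size,
                       load, ep, 0,
                       5, 22, 2, 0,       # os/arch/type/comp (assumed values)
                       b"demo-kernel")

def probe(header):
    """Mirror of the patch's parsing: load and ep are read as big-endian,
    and load == 0 marks a PIE image whose entry point must also be 0."""
    fields = struct.unpack(HDR_FMT, header)
    load, ep = fields[4], fields[5]
    if load == 0 and ep != 0:
        raise ValueError("entry point cannot be non-zero for a PIE image")
    return load, ep

# Fixed load/entry addresses: both are honoured.
assert probe(make_uimage_header(0x48000000, 0x48000000, 1024)) == (0x48000000, 0x48000000)
# PIE image: load 0 is accepted only together with entry 0.
assert probe(make_uimage_header(0, 0, 1024)) == (0, 0)
```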



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 11:26:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 11:26:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484259.750797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKduu-0004xV-Hn; Wed, 25 Jan 2023 11:26:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484259.750797; Wed, 25 Jan 2023 11:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKduu-0004xO-Er; Wed, 25 Jan 2023 11:26:12 +0000
Received: by outflank-mailman (input) for mailman id 484259;
 Wed, 25 Jan 2023 11:26:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=prA8=5W=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pKdut-0004xI-0a
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 11:26:11 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on2051.outbound.protection.outlook.com [40.107.102.51])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10223aa6-9ca3-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 12:26:09 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by IA0PR12MB8225.namprd12.prod.outlook.com (2603:10b6:208:408::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 11:25:50 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%6]) with mapi id 15.20.6002.033; Wed, 25 Jan 2023
 11:25:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10223aa6-9ca3-11ed-91b6-6bf2151ebd3b
Message-ID: <4bc7437d-3052-508a-6eba-5f2fb96fc0ab@amd.com>
Date: Wed, 25 Jan 2023 11:25:44 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v5] xen/arm: Probe the load/entry point address of an uImage
 correctly
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 julien@xen.org, Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 michal.orzel@amd.com
References: <20230113122423.22902-1-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301201422410.731018@ubuntu-linux-20-04-desktop>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <alpine.DEB.2.22.394.2301201422410.731018@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO3P265CA0032.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:387::15) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0

Hi Stefano,

On 20/01/2023 22:28, Stefano Stabellini wrote:
> On Fri, 13 Jan 2023, Ayan Kumar Halder wrote:
>> Currently, kernel_uimage_probe() does not read the load/entry point address
>> set in the uImage header. Thus, info->zimage.start is 0 (default value). This
>> causes kernel_zimage_place() to treat the binary (contained within uImage)
>> as a position independent executable. Thus, it loads it at an incorrect
>> address.
>>
>> The correct approach would be to read "uimage.load" and set
>> info->zimage.start. This will ensure that the binary is loaded at the
>> correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
>> address).
>>
>> If user provides load address (ie "uimage.load") as 0x0, then the image is
>> treated as position independent executable. Xen can load such an image at
>> any address it considers appropriate. A position independent executable
>> cannot have a fixed entry point address.
>>
>> This behavior is applicable for both arm32 and arm64 platforms.
>>
>> Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
>> point address set in the uImage header. With this commit, Xen will use them.
>> This makes the behavior of Xen consistent with uboot for uimage headers.
>>
>> Users who want to use Xen with statically partitioned domains, can provide
>> non zero load address and entry address for the dom0/domU kernel. It is
>> required that the load and entry address provided must be within the memory
>> region allocated by Xen.
>>
>> A deviation from uboot behaviour is that we consider load address == 0x0,
>> to denote that the image supports position independent execution. This
>> is to make the behavior consistent across uImage and zImage.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>>
>> Changes from v1 :-
>> 1. Added a check to ensure load address and entry address are the same.
>> 2. Considered load address == 0x0 as position independent execution.
>> 3. Ensured that the uImage header interpretation is consistent across
>> arm32 and arm64.
>>
>> v2 :-
>> 1. Mentioned the change in existing behavior in booting.txt.
>> 2. Updated booting.txt with a new section to document "Booting Guests".
>>
>> v3 :-
>> 1. Removed the constraint that the entry point should be same as the load
>> address. Thus, Xen uses both the load address and entry point to determine
>> where the image is to be copied and the start address.
>> 2. Updated documentation to denote that load address and start address
>> should be within the memory region allocated by Xen.
>> 3. Added constraint that user cannot provide entry point for a position
>> independent executable (PIE) image.
>>
>> v4 :-
>> 1. Explicitly mentioned the version in booting.txt from when the uImage
>> probing behavior has changed.
>> 2. Logged the requested load address and entry point parsed from the uImage
>> header.
>> 3. Some style issues.
>>
>>   docs/misc/arm/booting.txt         | 26 ++++++++++++++++
>>   xen/arch/arm/include/asm/kernel.h |  2 +-
>>   xen/arch/arm/kernel.c             | 49 +++++++++++++++++++++++++++++--
>>   3 files changed, 73 insertions(+), 4 deletions(-)
>>
>> diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
>> index 3e0c03e065..aeb0123e8d 100644
>> --- a/docs/misc/arm/booting.txt
>> +++ b/docs/misc/arm/booting.txt
>> @@ -23,6 +23,28 @@ The exceptions to this on 32-bit ARM are as follows:
>>   
>>   There are no exception on 64-bit ARM.
>>   
>> +Booting Guests
>> +--------------
>> +
>> +Xen supports the legacy image header[3], zImage protocol for 32-bit
>> +ARM Linux[1] and Image protocol defined for ARM64[2].
>> +
>> +Until Xen 4.17, in case of legacy image protocol, Xen ignored the load
>> +address and entry point specified in the header. This has now changed.
>> +
>> +Now, it loads the image at the load address provided in the header.
>> +And the entry point is used as the kernel start address.
>> +
>> +A deviation from uboot is that, Xen treats "load address == 0x0" as
>> +position independent execution (PIE). Thus, Xen will load such an image
>> +at an address it considers appropriate. Also, user cannot specify the
>> +entry point of a PIE image since the start address cannot be
>> +predetermined.
>> +
>> +Users who want to use Xen with statically partitioned domains, can provide
>> +the fixed non zero load address and start address for the dom0/domU kernel.
>> +The load address and start address specified by the user in the header must
>> +be within the memory region allocated by Xen.
>>   
>>   Firmware/bootloader requirements
>>   --------------------------------
>> @@ -39,3 +61,7 @@ Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/t
>>   
>>   [2] linux/Documentation/arm64/booting.rst
>>   Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst
>> +
>> +[3] legacy format header
>> +Latest version: https://source.denx.de/u-boot/u-boot/-/blob/master/include/image.h#L315
>> +https://linux.die.net/man/1/mkimage
>> diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
>> index 5bb30c3f2f..4617cdc83b 100644
>> --- a/xen/arch/arm/include/asm/kernel.h
>> +++ b/xen/arch/arm/include/asm/kernel.h
>> @@ -72,7 +72,7 @@ struct kernel_info {
>>   #ifdef CONFIG_ARM_64
>>               paddr_t text_offset; /* 64-bit Image only */
>>   #endif
>> -            paddr_t start; /* 32-bit zImage only */
>> +            paddr_t start; /* Must be 0 for 64-bit Image */
>>           } zimage;
>>       };
>>   };
>> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
>> index 23b840ea9e..0b7f591857 100644
>> --- a/xen/arch/arm/kernel.c
>> +++ b/xen/arch/arm/kernel.c
>> @@ -127,7 +127,7 @@ static paddr_t __init kernel_zimage_place(struct kernel_info *info)
>>       paddr_t load_addr;
>>   
>>   #ifdef CONFIG_ARM_64
>> -    if ( info->type == DOMAIN_64BIT )
>> +    if ( (info->type == DOMAIN_64BIT) && (info->zimage.start == 0) )
>>           return info->mem.bank[0].start + info->zimage.text_offset;
> This is an issue because if we have a zImage64 kernel binary with a
> uimage header, we are not setting zimage.text_offset appropriately, if I
> am not mistaken.
>
> The way booting.txt is written in this patch, I think the matching
> behavior would be that if there is a uimage header, then the zImage64
> header is ignored.

I have followed this approach in

"[XEN v6] xen/arm: Probe the load/entry point address of an uImage 
correctly"

- Ayan

> If the uimage header has uimage.load == zero, then
> we should allocate the load_addr for the kernel (PIE case).
>
> I think it would also be OK if we choose the different behavior that if
> there is a uimage header but uimage.load == zero, then we look at
> zImage64.text_offset next.
>
> Either way is OK for me as long as it is clearly specified in
> booting.txt.
>
>
>
>
>>   #endif
>>   
>> @@ -162,7 +162,12 @@ static void __init kernel_zimage_load(struct kernel_info *info)
>>       void *kernel;
>>       int rc;
>>   
>> -    info->entry = load_addr;
>> +    /*
>> +     * If the image does not have a fixed entry point, then use the load
>> +     * address as the entry point.
>> +     */
>> +    if ( info->entry == 0 )
>> +        info->entry = load_addr;
>>   
>>       place_modules(info, load_addr, load_addr + len);
>>   
>> @@ -223,10 +228,38 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>>       if ( len > size - sizeof(uimage) )
>>           return -EINVAL;
>>   
>> +    info->zimage.start = be32_to_cpu(uimage.load);
>> +    info->entry = be32_to_cpu(uimage.ep);
>> +
>> +    /*
>> +     * While uboot considers 0x0 to be a valid load/start address, for Xen
>> +     * to maintain parity with zImage, we consider 0x0 to denote position
>> +     * independent image. That means Xen is free to load such an image at
>> +     * any valid address.
>> +     */
>> +    if ( info->zimage.start == 0 )
>> +        printk(XENLOG_INFO
>> +               "No load address provided. Xen will decide where to load it.\n");
>> +    else
>> +        printk(XENLOG_INFO
>> +               "Provided load address: %"PRIpaddr" and entry address: %"PRIpaddr"\n",
>> +               info->zimage.start, info->entry);
>> +
>> +    /*
>> +     * If the image supports position independent execution, then user cannot
>> +     * provide an entry point as Xen will load such an image at any appropriate
>> +     * memory address. Thus, we need to return error.
>> +     */
>> +    if ( (info->zimage.start == 0) && (info->entry != 0) )
>> +    {
>> +        printk(XENLOG_ERR
>> +               "Entry point cannot be non zero for PIE image.\n");
>> +        return -EINVAL;
>> +    }
>> +
>>       info->zimage.kernel_addr = addr + sizeof(uimage);
>>       info->zimage.len = len;
>>   
>> -    info->entry = info->zimage.start;
>>       info->load = kernel_zimage_load;
>>   
>>   #ifdef CONFIG_ARM_64
>> @@ -366,6 +399,7 @@ static int __init kernel_zimage64_probe(struct kernel_info *info,
>>       info->zimage.kernel_addr = addr;
>>       info->zimage.len = end - start;
>>       info->zimage.text_offset = zimage.text_offset;
>> +    info->zimage.start = 0;
>>   
>>       info->load = kernel_zimage_load;
>>   
>> @@ -436,6 +470,15 @@ int __init kernel_probe(struct kernel_info *info,
>>       u64 kernel_addr, initrd_addr, dtb_addr, size;
>>       int rc;
>>   
>> +    /*
>> +     * We need to initialize info->entry to 0. This field may be populated
>> +     * during kernel_xxx_probe() if the image has a fixed entry point (e.g.
>> +     * uimage.ep).
>> +     * We will use this to determine whether the image has a fixed entry
>> +     * point or the load address should be used as the start address.
>> +     */
>> +    info->entry = 0;
>> +
>>       /* domain is NULL only for the hardware domain */
>>       if ( domain == NULL )
>>       {
>> -- 
>> 2.17.1
>>
>>
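The behaviour converged on in this thread — the uImage header's attributes take precedence over the zImage64 header, and a load address of 0 means PIE — can be sketched as follows. This is a simplified, hypothetical model of the decision; the real logic lives in kernel_zimage_place() and operates on struct kernel_info, and the addresses below are made up.

```python
def kernel_place(is_64bit, zimage_start, text_offset, bank_start):
    """Sketch of the placement decision after the patch: a 64-bit image is
    placed via text_offset only when no fixed load address was given
    (zimage_start == 0); otherwise the header's load address wins."""
    if is_64bit and zimage_start == 0:
        return bank_start + text_offset
    return zimage_start

# PIE Image: Xen picks the first memory bank start plus text_offset.
assert kernel_place(True, 0x0, 0x80000, 0x40000000) == 0x40080000
# uImage with a fixed load address: text_offset (zeroed by the patch
# when a uImage header is present) is ignored.
assert kernel_place(True, 0x48000000, 0x0, 0x40000000) == 0x48000000
```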


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 11:27:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 11:27:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484264.750807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKdvb-0005Tw-T5; Wed, 25 Jan 2023 11:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484264.750807; Wed, 25 Jan 2023 11:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKdvb-0005Tp-QS; Wed, 25 Jan 2023 11:26:55 +0000
Received: by outflank-mailman (input) for mailman id 484264;
 Wed, 25 Jan 2023 11:26:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKdva-0005TV-4a
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 11:26:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKdvZ-0008Ry-SE; Wed, 25 Jan 2023 11:26:53 +0000
Received: from [54.239.6.189] (helo=[192.168.17.90])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKdvZ-0001jT-MB; Wed, 25 Jan 2023 11:26:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <bb52731d-94b3-694b-8038-8c87dd986654@xen.org>
Date: Wed, 25 Jan 2023 11:26:51 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221214031654.2815589-1-Henry.Wang@arm.com>
 <20221214031654.2815589-2-Henry.Wang@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20221214031654.2815589-2-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Henry,

On 14/12/2022 03:16, Henry Wang wrote:
> As we have more and more types of static region, and all of these
> static regions are defined in bootinfo.reserved_mem, it is necessary
> to add an overlap check for reserved memory regions in Xen, because
> such a check helps users identify device tree misconfiguration at an
> early stage of boot.
> 
> Currently we have 3 types of static region, namely
> (1) static memory
> (2) static heap
> (3) static shared memory
> 
> (1) and (2) are parsed by the function `device_tree_get_meminfo()` and
> (3) is parsed using its own logic. All parsed information of these
> types is stored in `struct meminfo`.
> 
> Therefore, to unify the overlap checking logic for all of these types,
> this commit firstly introduces a helper `meminfo_overlap_check()` and
> a function `check_reserved_regions_overlap()` to check if an input
> physical address range is overlapping with the existing memory regions
> defined in bootinfo. After that, use `check_reserved_regions_overlap()`
> in `device_tree_get_meminfo()` to do the overlap check of (1) and (2)
> and replace the original overlap check of (3) with
> `check_reserved_regions_overlap()`.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> ---
> v1 -> v2:
> 1. Split original `overlap_check()` to `meminfo_overlap_check()`.
> 2. Rework commit message.
> ---
>   xen/arch/arm/bootfdt.c           | 13 +++++-----
>   xen/arch/arm/include/asm/setup.h |  2 ++
>   xen/arch/arm/setup.c             | 42 ++++++++++++++++++++++++++++++++
>   3 files changed, 50 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 0085c28d74..e2f6c7324b 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -88,6 +88,9 @@ static int __init device_tree_get_meminfo(const void *fdt, int node,
>       for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
>       {
>           device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +        if ( mem == &bootinfo.reserved_mem &&
> +             check_reserved_regions_overlap(start, size) )
> +            return -EINVAL;
>           /* Some DT may describe empty bank, ignore them */
>           if ( !size )
>               continue;
> @@ -482,7 +485,9 @@ static int __init process_shm_node(const void *fdt, int node,
>                   return -EINVAL;
>               }
>   
> -            if ( (end <= mem->bank[i].start) || (paddr >= bank_end) )
> +            if ( check_reserved_regions_overlap(paddr, size) )
> +                return -EINVAL;
> +            else
>               {
>                   if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 )
>                       continue;
> @@ -493,12 +498,6 @@ static int __init process_shm_node(const void *fdt, int node,
>                       return -EINVAL;
>                   }
>               }
> -            else
> -            {
> -                printk("fdt: shared memory region overlap with an existing entry %#"PRIpaddr" - %#"PRIpaddr"\n",
> -                        mem->bank[i].start, bank_end);
> -                return -EINVAL;
> -            }
>           }
>       }
>   
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index fdbf68aadc..6a9f88ecbb 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -143,6 +143,8 @@ void fw_unreserved_regions(paddr_t s, paddr_t e,
>   size_t boot_fdt_info(const void *fdt, paddr_t paddr);
>   const char *boot_fdt_cmdline(const void *fdt);
>   
> +int check_reserved_regions_overlap(paddr_t region_start, paddr_t region_size);
> +
>   struct bootmodule *add_boot_module(bootmodule_kind kind,
>                                      paddr_t start, paddr_t size, bool domU);
>   struct bootmodule *boot_module_find_by_kind(bootmodule_kind kind);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90..e6eeb3a306 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -261,6 +261,31 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>       cb(s, e);
>   }
>   
> +static int __init meminfo_overlap_check(struct meminfo *meminfo,
> +                                        paddr_t region_start,
> +                                        paddr_t region_end)

I am starting to dislike the use of 'end' for a couple of reasons:
   1) It is never clear whether this is inclusive or exclusive
   2) When it is exclusive, this does not work properly if the region
finishes at (2^64 - 1), as 'end' would be 0

I have started to clean up the Arm code to avoid all those issues. So
for new code, I would rather prefer that we use 'start' and 'size' to
describe a region.

> +{
> +    paddr_t bank_start = INVALID_PADDR, bank_end = 0;
> +    unsigned int i, bank_num = meminfo->nr_banks;
> +
> +    for ( i = 0; i < bank_num; i++ )
> +    {
> +        bank_start = meminfo->bank[i].start;
> +        bank_end = bank_start + meminfo->bank[i].size;
> +
> +        if ( region_end <= bank_start || region_start >= bank_end )
> +            continue;
> +        else
> +        {
> +            printk("Region %#"PRIpaddr" - %#"PRIpaddr" overlapping with bank[%u] %#"PRIpaddr" - %#"PRIpaddr"\n",
> +                   region_start, region_end, i, bank_start, bank_end);
> +            return -EINVAL;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>   void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>                                     void (*cb)(paddr_t, paddr_t),
>                                     unsigned int first)
> @@ -271,7 +296,24 @@ void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>           cb(s, e);
>   }
>   
> +/*
> + * Given an input physical address range, check if this range is overlapping
> + * with the existing reserved memory regions defined in bootinfo.
> + * Return 0 if the input physical address range is not overlapping with any
> + * existing reserved memory regions, otherwise -EINVAL.
> + */
> +int __init check_reserved_regions_overlap(paddr_t region_start,
> +                                          paddr_t region_size)

None of the callers seems to care about the return value (other than 
whether it failed or not). So I would prefer if this returned a boolean 
indicating whether the check passes.

> +{
> +    paddr_t region_end = region_start + region_size;
> +
> +    /* Check if input region is overlapping with bootinfo.reserved_mem banks */
> +    if ( meminfo_overlap_check(&bootinfo.reserved_mem,
> +                               region_start, region_end) )
> +        return -EINVAL;
>   
> +    return 0;
> +}
>   
>   struct bootmodule __init *add_boot_module(bootmodule_kind kind,
>                                             paddr_t start, paddr_t size,

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 11:27:27 2023
References: <CAFD1rPdT5Tod+qdit50EWBN6WyRuK2ybb2G2HmOAayAV7uyBuA@mail.gmail.com>
 <7ddac120-29c5-d4fa-2bc7-9da6b1cf2dd9@citrix.com> <CAFD1rPfv1jCNkcPP1KBLDr1e+_aa7+aCphVTjZG-xAnbkcnNGQ@mail.gmail.com>
In-Reply-To: <CAFD1rPfv1jCNkcPP1KBLDr1e+_aa7+aCphVTjZG-xAnbkcnNGQ@mail.gmail.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Wed, 25 Jan 2023 11:27:03 +0000
Message-ID: <CA+zSX=YW7tFzmreeh1YXaVoseUBjadsrCNRA8f5vf4EoknA2=g@mail.gmail.com>
Subject: Re: Usage of Xen Security Data in VulnerableCode
To: Tushar Goel <tushar.goel.dav@gmail.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Xen Security <security@xen.org>, 
	Philippe Ombredanne <pombredanne@nexb.com>, jmhoran@nexb.com
Content-Type: text/plain; charset="UTF-8"

On Thu, Jan 19, 2023 at 1:10 PM Tushar Goel <tushar.goel.dav@gmail.com>
wrote:

> Hi Andrew,
>
> > Maybe we want to make it CC-BY-4 to require people to reference back to
> > the canonical upstream ?
> Thanks for your response, can we have a more declarative statement on
> the license from your end
> and also can you please provide your acknowledgement over the usage of
> Xen security data in vulnerablecode.
>

Hey Tushar,

Informally, the Xen Project Security Team is happy for you to include the
data from xsa.json in your open-source vulnerability database.  As a
courtesy we'd request that it be documented where the information came
from.  (I think if the data includes links to the advisories on our
website, that will suffice.)

Formally, we're not copyright lawyers; but we don't think there's anything
copyright-able in the xsa.json: There is no editorial or creative control
in the generation of that file; it's just a collection of facts which you
could re-generate by scanning all the advisories.  (In fact that's exactly
how the file is created; i.e., the collection of advisory texts is our
"source of truth".)

We do have "Officially license all advisory text as CC-BY-4" on our to-do
list; if you'd be more comfortable with an official license for xsa.json as
well, we can add that to the list.

 -George



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 11:29:16 2023
Message-ID: <e86d2b48-2da7-f21c-d191-85615a934c81@xen.org>
Date: Wed, 25 Jan 2023 11:29:12 +0000
Subject: Re: [PATCH v2 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
To: Henry Wang <Henry.Wang@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20221214031654.2815589-1-Henry.Wang@arm.com>
 <20221214031654.2815589-2-Henry.Wang@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20221214031654.2815589-2-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed



On 14/12/2022 03:16, Henry Wang wrote:
> As we are adding more and more types of static region, and all of
> these static regions are defined in bootinfo.reserved_mem, it is
> necessary to add an overlap check for reserved memory regions in Xen,
> because such a check will help users identify misconfigurations in
> the device tree at an early stage of boot.
> 
> Currently we have 3 types of static region, namely
> (1) static memory
> (2) static heap
> (3) static shared memory
> 
> (1) and (2) are parsed by the function `device_tree_get_meminfo()` and
> (3) is parsed using its own logic. All of the parsed information for
> these types is stored in `struct meminfo`.
> 
> Therefore, to unify the overlap checking logic for all of these types,
> this commit firstly introduces a helper `meminfo_overlap_check()` and
> a function `check_reserved_regions_overlap()` to check if an input
> physical address range is overlapping with the existing memory regions
> defined in bootinfo. After that, use `check_reserved_regions_overlap()`
> in `device_tree_get_meminfo()` to do the overlap check of (1) and (2)
> and replace the original overlap check of (3) with
> `check_reserved_regions_overlap()`.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> ---
> v1 -> v2:
> 1. Split original `overlap_check()` to `meminfo_overlap_check()`.
> 2. Rework commit message.
> ---
>   xen/arch/arm/bootfdt.c           | 13 +++++-----
>   xen/arch/arm/include/asm/setup.h |  2 ++
>   xen/arch/arm/setup.c             | 42 ++++++++++++++++++++++++++++++++
>   3 files changed, 50 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 0085c28d74..e2f6c7324b 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -88,6 +88,9 @@ static int __init device_tree_get_meminfo(const void *fdt, int node,
>       for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
>       {
>           device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +        if ( mem == &bootinfo.reserved_mem &&
> +             check_reserved_regions_overlap(start, size) )
> +            return -EINVAL;
>           /* Some DT may describe empty bank, ignore them */
>           if ( !size )
>               continue;
> @@ -482,7 +485,9 @@ static int __init process_shm_node(const void *fdt, int node,
>                   return -EINVAL;
>               }
>   
> -            if ( (end <= mem->bank[i].start) || (paddr >= bank_end) )
> +            if ( check_reserved_regions_overlap(paddr, size) )
> +                return -EINVAL;
> +            else
>               {
>                   if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 )
>                       continue;
> @@ -493,12 +498,6 @@ static int __init process_shm_node(const void *fdt, int node,
>                       return -EINVAL;
>                   }
>               }
> -            else
> -            {
> -                printk("fdt: shared memory region overlap with an existing entry %#"PRIpaddr" - %#"PRIpaddr"\n",
> -                        mem->bank[i].start, bank_end);
> -                return -EINVAL;
> -            }
>           }
>       }
>   
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index fdbf68aadc..6a9f88ecbb 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -143,6 +143,8 @@ void fw_unreserved_regions(paddr_t s, paddr_t e,
>   size_t boot_fdt_info(const void *fdt, paddr_t paddr);
>   const char *boot_fdt_cmdline(const void *fdt);
>   
> +int check_reserved_regions_overlap(paddr_t region_start, paddr_t region_size);
> +
>   struct bootmodule *add_boot_module(bootmodule_kind kind,
>                                      paddr_t start, paddr_t size, bool domU);
>   struct bootmodule *boot_module_find_by_kind(bootmodule_kind kind);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90..e6eeb3a306 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -261,6 +261,31 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>       cb(s, e);
>   }
>   
> +static int __init meminfo_overlap_check(struct meminfo *meminfo,
> +                                        paddr_t region_start,
> +                                        paddr_t region_end)
> +{
> +    paddr_t bank_start = INVALID_PADDR, bank_end = 0;
> +    unsigned int i, bank_num = meminfo->nr_banks;
> +
> +    for ( i = 0; i < bank_num; i++ )
> +    {
> +        bank_start = meminfo->bank[i].start;
> +        bank_end = bank_start + meminfo->bank[i].size;
> +
> +        if ( region_end <= bank_start || region_start >= bank_end )
> +            continue;
> +        else
> +        {
> +            printk("Region %#"PRIpaddr" - %#"PRIpaddr" overlapping with bank[%u] %#"PRIpaddr" - %#"PRIpaddr"\n",

AFAICT, in messages, the end would be inclusive. But here...

> +                   region_start, region_end, i, bank_start, bank_end);

... it would be exclusive. I would suggest printing using the format 
[start, end[ or decrementing the values by 1.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 11:37:42 2023
Date: Wed, 25 Jan 2023 11:37:14 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Chuck Zmudzinski <brchuckz@aol.com>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, <qemu-devel@nongnu.org>
Subject: Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default
 for igd-passthru
Message-ID: <Y9EUarVVWr223API@perard.uk.xensource.com>
References: <cover.1673300848.git.brchuckz.ref@aol.com>
 <cover.1673300848.git.brchuckz@aol.com>
In-Reply-To: <cover.1673300848.git.brchuckz@aol.com>

On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
> I call attention to the commit message of the first patch which points
> out that using the "pc" machine and adding the xen platform device on
> the qemu upstream command line is not functionally equivalent to using
> the "xenfv" machine which automatically adds the xen platform device
> earlier in the guest creation process. As a result, there is a noticeable
> reduction in the performance of the guest during startup with the "pc"
> machine type even if the xen platform device is added via the qemu
> command line options, although eventually both Linux and Windows guests
> perform equally well once the guest operating system is fully loaded.

There shouldn't be a difference between the "xenfv" machine and the
"pc" machine with the "xen-platform" device added, at least with
regards to disk or network access.

The first patch of the series is using the "pc" machine without any
"xen-platform" device, so we can't compare startup performance based on
that.

> Specifically, startup time is longer and neither the grub vga drivers
> nor the windows vga drivers in early startup perform as well when the
> xen platform device is added via the qemu command line instead of being
> added immediately after the other emulated i440fx pci devices when the
> "xenfv" machine type is used.

The "xen-platform" device is mostly a hint to the guest that it can use
pv-disk and pv-network devices. I don't think it would change anything
with regards to graphics.

> For example, when using the "pc" machine, which adds the xen platform
> device using a command line option, the Linux guest could not display
> the grub boot menu at the native resolution of the monitor, but with the
> "xenfv" machine, the grub menu is displayed at the full 1920x1080
> native resolution of the monitor for testing. So improved startup
> performance is an advantage for the patch for qemu.

I've just found out that when doing IGD passthrough, the "xenfv" and
"pc" machines are much more different than I thought ... :-(
pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
turn copies some information from the real host bridge.
I guess this new host bridge helps when the firmware sets up the
graphics for grub.

> I also call attention to the last point of the commit message of the
> second patch and the comments for reviewers section of the second patch.
> This approach, as opposed to fixing this in qemu upstream, makes
> maintaining the code in libxl__build_device_model_args_new more
> difficult and therefore increases the chances of problems caused by
> coding errors and typos for users of libxl. So that is another advantage
> of the patch for qemu.

We would just need to use a different approach in libxl when generating
the command line, and we could probably avoid duplication. I was hoping
to have a patch series for libxl that would switch to the "pc" machine
instead of "xenfv" for all configurations, but based on the point above
(the IGD-specific changes to "xenfv"), I guess we can't really do
anything from libxl to fix IGD passthrough.

> OTOH, fixing this in qemu causes newer qemu versions to behave
> differently than previous versions of qemu, which the qemu community
> does not like, although they seem OK with the other patch since it only
> affects qemu "xenfv" machine types. However, they do not want the patch
> to affect toolstacks like libvirt that do not use qemu upstream's
> autoconfiguration options as much as libxl does. Of course, libvirt
> can manage qemu "xenfv" machines, so existing "xenfv" guests configured
> manually by libvirt could be adversely affected by the patch to qemu,
> but only if those same guests are also configured for igd-passthrough,
> which is likely a very small number of possibly affected libvirt users
> of qemu.
> 
> A year or two ago I tried to configure guests for pci passthrough on xen
> using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
> could not convert an xl.cfg file with a configuration item
> pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
> So it is unlikely there are any users out there using libvirt to
> configure xen hvm guests for igd passthrough on xen, and those are the
> only users that could be adversely affected by the simpler patch to qemu
> to fix this.

FYI, libvirt should be using libxl to create guests; I don't think there
is another way for libvirt to create Xen guests.
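For context, the kind of xl.cfg passthrough configuration discussed above looks roughly like the following (a hypothetical fragment; the device address and options are illustrative, not taken from this thread):

```
# Illustrative xl.cfg fragment for an HVM guest with IGD passthrough
type = "hvm"
gfx_passthru = 1
pci = [ "00:02.0" ]
```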



So overall, unfortunately the "pc" machine in QEMU isn't suitable for
IGD passthrough, as the "xenfv" machine already has some workarounds to
make IGD work and just needs some more.

I've seen that the patch for QEMU is now reviewed, so I'll look at
having it merged soonish.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 13:11:01 2023
Message-ID: <6c952571-6a8d-e4fc-36ec-b5b79dac40f6@suse.com>
Date: Wed, 25 Jan 2023 14:10:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 01/11] xen/common: add cache coloring common code
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>,
 xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-2-carlo.nonato@minervasys.tech>
 <a470be46-ab6e-3970-2b04-6f4035adf1cb@suse.com>
 <CAG+AhRX9DVW5EfXKQoDG9hmcE0FORydTZd0pNm-0uqwddaN9NQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAG+AhRX9DVW5EfXKQoDG9hmcE0FORydTZd0pNm-0uqwddaN9NQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0124.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7932:EE_
X-MS-Office365-Filtering-Correlation-Id: b22bbbf9-bbd3-4428-b688-08dafed58320
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b22bbbf9-bbd3-4428-b688-08dafed58320
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 13:10:21.3888
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6ZpsmjwO7SdZXFeZ8zBbppRKLgl2UCM5mBCJzH1gOj+FwOa9o/7YCAXuSaXTIvk7qz57504upDJYOpewdL4TGA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7932

On 25.01.2023 12:18, Carlo Nonato wrote:
> On Tue, Jan 24, 2023 at 5:37 PM Jan Beulich <jbeulich@suse.com> wrote:
>> On 23.01.2023 16:47, Carlo Nonato wrote:
>>> --- /dev/null
>>> +++ b/xen/include/xen/llc_coloring.h
>>> @@ -0,0 +1,54 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Last Level Cache (LLC) coloring common header
>>> + *
>>> + * Copyright (C) 2022 Xilinx Inc.
>>> + *
>>> + * Authors:
>>> + *    Carlo Nonato <carlo.nonato@minervasys.tech>
>>> + */
>>> +#ifndef __COLORING_H__
>>> +#define __COLORING_H__
>>> +
>>> +#include <xen/sched.h>
>>> +#include <public/domctl.h>
>>> +
>>> +#ifdef CONFIG_HAS_LLC_COLORING
>>> +
>>> +#include <asm/llc_coloring.h>
>>> +
>>> +extern bool llc_coloring_enabled;
>>> +
>>> +int domain_llc_coloring_init(struct domain *d, unsigned int *colors,
>>> +                             unsigned int num_colors);
>>> +void domain_llc_coloring_free(struct domain *d);
>>> +void domain_dump_llc_colors(struct domain *d);
>>> +
>>> +#else
>>> +
>>> +#define llc_coloring_enabled (false)
>>
>> While I agree this is needed, ...
>>
>>> +static inline int domain_llc_coloring_init(struct domain *d,
>>> +                                           unsigned int *colors,
>>> +                                           unsigned int num_colors)
>>> +{
>>> +    return 0;
>>> +}
>>> +static inline void domain_llc_coloring_free(struct domain *d) {}
>>> +static inline void domain_dump_llc_colors(struct domain *d) {}
>>
>> ... I don't think you need any of these. Instead the declarations above
>> simply need to be visible unconditionally (to be visible to the compiler
>> when processing consuming code). We rely on DCE to remove such references
>> in many other places.
> 
> So this is true for any other stub function that I used in the series, right?

Likely. I didn't look at most of the Arm-only pieces.
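For reference, the pattern being described could be sketched standalone as below (hypothetical names modelled on the quoted header, not the actual Xen code; in Xen proper even the stub would be dropped, since with the declaration visible unconditionally the constant-false predicate lets the compiler's dead-code elimination remove the call, so no definition is referenced at link time):

```c
#include <stdbool.h>

/* CONFIG_HAS_LLC_COLORING is deliberately left undefined here,
 * modelling a build without the feature. */
#ifdef CONFIG_HAS_LLC_COLORING
extern bool llc_coloring_enabled;
int domain_llc_coloring_init(unsigned int *colors, unsigned int num_colors);
#else
#define llc_coloring_enabled false
/*
 * A stub is kept in this sketch only so it also links when built
 * without optimization; relying on DCE, real code would keep just the
 * declaration above visible in both configurations.
 */
static inline int domain_llc_coloring_init(unsigned int *colors,
                                           unsigned int num_colors)
{
    (void)colors;
    (void)num_colors;
    return 0;
}
#endif

/* Consuming code is written once and compiles in both configurations. */
int setup_llc_coloring(unsigned int *colors, unsigned int num_colors)
{
    if ( llc_coloring_enabled )
        return domain_llc_coloring_init(colors, num_colors);
    return 0;
}
```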

>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -602,6 +602,9 @@ struct domain
>>>
>>>      /* Holding CDF_* constant. Internal flags for domain creation. */
>>>      unsigned int cdf;
>>> +
>>> +    unsigned int *llc_colors;
>>> +    unsigned int num_llc_colors;
>>>  };
>>
>> Why outside of any #ifdef, and why not in struct arch_domain?
> 
> Moving this into sched.h seemed like the natural continuation of the common +
> arch-specific split. Note that this split also exists because Julien pointed
> out (as you did on some earlier revision) that cache coloring could be used
> by other arches in the future (even if x86 is excluded). Having two
> maintainers say the same thing sounded like a good reason to do that.

If you mean this to be usable by other arch-es as well (which I would
welcome, as I think I had expressed on an earlier version), then I think
more pieces want to be in common code. But putting the fields here and all
users of them in arch-specific code (which I think is the way I saw it)
doesn't look very logical to me. IOW to me there exist only two possible
approaches: As much as possible in common code, or common code being
disturbed as little as possible.
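One possible shape of the "common code disturbed as little as possible" option would be to keep the fields in the common struct but behind the feature's Kconfig guard (a hypothetical sketch mirroring the quoted patch, not the actual struct domain):

```c
/*
 * Hypothetical sketch: the LLC coloring fields stay in the common
 * struct, but a CONFIG_HAS_LLC_COLORING guard means builds without the
 * feature carry no extra state. Not the actual Xen struct domain.
 */
struct domain_sketch {
    /* Holding CDF_* constant. Internal flags for domain creation. */
    unsigned int cdf;

#ifdef CONFIG_HAS_LLC_COLORING
    unsigned int *llc_colors;
    unsigned int num_llc_colors;
#endif
};

/* With CONFIG_HAS_LLC_COLORING undefined (as in this sketch), the
 * struct is no bigger than its unconditional members require. */
unsigned long domain_sketch_size(void)
{
    return sizeof(struct domain_sketch);
}
```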

> The missing #ifdef comes from a discussion I had with Julien in v2 about
> domctl interface where he suggested removing it
> (https://marc.info/?l=xen-devel&m=166151802002263).

I went about five levels deep in the replies, without finding any such reply
from Julien. Can you please be more specific with the link, so readers don't
need to endlessly dig?

Jan

> We were talking about
> a different struct, but I thought the principle was the same. Anyway I
> would like the #ifdef too.
> 
> So @Jan, @Julien, can you help me fix this once and for all?
> 
> Thanks.
> 
> - Carlo Nonato



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 13:21:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 13:21:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484311.750863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKfil-0005Ez-Jk; Wed, 25 Jan 2023 13:21:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484311.750863; Wed, 25 Jan 2023 13:21:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKfil-0005Es-Gz; Wed, 25 Jan 2023 13:21:47 +0000
Received: by outflank-mailman (input) for mailman id 484311;
 Wed, 25 Jan 2023 13:21:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GtWz=5W=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pKfik-0005Em-7K
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 13:21:46 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2073.outbound.protection.outlook.com [40.107.8.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 35a3eceb-9cb3-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 14:21:43 +0100 (CET)
Received: from DU2PR04CA0173.eurprd04.prod.outlook.com (2603:10a6:10:2b0::28)
 by DU0PR08MB9655.eurprd08.prod.outlook.com (2603:10a6:10:447::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 13:21:39 +0000
Received: from DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b0:cafe::4) by DU2PR04CA0173.outlook.office365.com
 (2603:10a6:10:2b0::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.34 via Frontend
 Transport; Wed, 25 Jan 2023 13:21:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT028.mail.protection.outlook.com (100.127.142.236) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6023.16 via Frontend Transport; Wed, 25 Jan 2023 13:21:39 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Wed, 25 Jan 2023 13:21:39 +0000
Received: from 59ade3a6353a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D9B7593F-7726-4184-9E6F-A835AEBBD4E9.1; 
 Wed, 25 Jan 2023 13:21:33 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 59ade3a6353a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 25 Jan 2023 13:21:33 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DB9PR08MB9825.eurprd08.prod.outlook.com (2603:10a6:10:462::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 13:21:31 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::1909:220b:70ee:a5c3]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::1909:220b:70ee:a5c3%7]) with mapi id 15.20.6002.033; Wed, 25 Jan 2023
 13:21:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35a3eceb-9cb3-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k0988EFARxls/2WUpzJM70zTAbLoSmEkUSsQwSp7PJg=;
 b=ZKW9SHH2ULX9O90kjlMc1aC2yJqzDB4sQ48kwFNyESJyEM4eclBNHpwgfxIrIpw15ux9APcV/LLVNiNwp+5hSG7FyPAcycuw/P0IXY1rUjZZMi+o/8BpAUgcT4bRWGR3XUe0B0uk+vdPXd2gU+cNN65agmHcmBu8FbULgHi28/I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 708563b4c9ae479b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dB2bpc6m0J4hTnmASH7aw9SATgfZ34ekPfDD9o1PFVx1J+0V6bAQ20ZzLPQea1iaoZLHl5+2j4brH1/ULJaKqnfMRflHgi9ZHnPajh5FhqoG8aaIthrGEBac1SwqH+Yzg2FMlnRR1/3uhc1zTejAVjkvO1AFjkp9Zv+xig78DrqZhbiNyPUmt3TXFUMr+ZUm8IHAqM0NRLo5H1jL7o3/FL/pS8ZrcCaIKx9eWE+zaDYRa+MNBP5vRPtcJ8Id7XBSQoLoBk5VkECPHfZCD66Wh+0KZTWa8e/lmz2rz4kwbNCOomQjdlR0vAD4J6mAxuuroIoXw1xTO4p9tdTzGGzpwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=k0988EFARxls/2WUpzJM70zTAbLoSmEkUSsQwSp7PJg=;
 b=SKe2r8dTjbs5KuaRqyOvenZJZiCCosUIZu3uAwN8LfGe1oTUfAuZvjojzF9dfEhRwdZvAbn59qk+znqpsqMFkuXKlN/6mwac1SYY8OKfs5t6xm6W9LA91esMqV4EDPT1OwNnJvSURbfmJr4I0A4OQyTWVQuCza2YvLRz9vSQhAkuQ7nrkqfstmctDtJqz7yhK0xkzOE74GHdVi5sl5OQ7561Xg7UeN8AHeKppOi9j/8CLMdsrKH5pNx/ZE8QeQeUDQDZokTMygLO1H3mk9aNTbQBhgQurhfb/CQvfk+U8b3lqqk54be4yzIeoZvsHycksLYTKkVq02NFC3Wbn1ZYaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k0988EFARxls/2WUpzJM70zTAbLoSmEkUSsQwSp7PJg=;
 b=ZKW9SHH2ULX9O90kjlMc1aC2yJqzDB4sQ48kwFNyESJyEM4eclBNHpwgfxIrIpw15ux9APcV/LLVNiNwp+5hSG7FyPAcycuw/P0IXY1rUjZZMi+o/8BpAUgcT4bRWGR3XUe0B0uk+vdPXd2gU+cNN65agmHcmBu8FbULgHi28/I=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Nick
 Rosbrook <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [RFC PATCH 0/8] SVE feature for arm guests
Thread-Topic: [RFC PATCH 0/8] SVE feature for arm guests
Thread-Index: AQHZJcpxlecBD3vJ/066zddcxj2mrq6ZcImAgAE+YoCAAVwdAIATKTYA
Date: Wed, 25 Jan 2023 13:21:30 +0000
Message-ID: <B81C0C5E-6FEC-4B71-B740-8418EC1BBA06@arm.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <3e4ce6c0-9949-1312-f492-913b7dd2cf18@xen.org>
 <EB12FEDD-F3EC-401A-9648-77D7B28F6750@arm.com>
 <f8823ca1-450f-7522-d5db-41f124195ab3@xen.org>
In-Reply-To: <f8823ca1-450f-7522-d5db-41f124195ab3@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|DB9PR08MB9825:EE_|DBAEUR03FT028:EE_|DU0PR08MB9655:EE_
X-MS-Office365-Filtering-Correlation-Id: 5cb3f806-a3c8-4d28-35b2-08dafed717b1
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A693D382F7B0344A950662B7F827E277@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9825
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fb89ed59-fbff-4a59-0b1b-08dafed7123f
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 13:21:39.5103
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5cb3f806-a3c8-4d28-35b2-08dafed717b1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9655

Hi Julien,

> On 13 Jan 2023, at 09:44, Julien Grall <julien@xen.org> wrote:
>
> Hi Luca,
>
> On 12/01/2023 11:58, Luca Fancellu wrote:
>>> On 11 Jan 2023, at 16:59, Julien Grall <julien@xen.org> wrote:
>>> On 11/01/2023 14:38, Luca Fancellu wrote:
>>>> This series introduces the possibility for Dom0 and DomU guests to use
>>>> sve/sve2 instructions.
>>>> The SVE feature introduces new instructions and registers to improve
>>>> performance of floating point operations.
>>>> The SVE feature is advertised using the SVE field of the
>>>> ID_AA64PFR0_EL1 register, and when available the ID_AA64ZFR0_EL1
>>>> register provides additional information about the implemented version
>>>> and other SVE features.
>>>> New registers added by the SVE feature are Z0-Z31, P0-P15, FFR and
>>>> ZCR_ELx.
>>>> Z0-Z31 are scalable vector registers whose size is implementation
>>>> defined and ranges from 128 bits up to a maximum of 2048; the term
>>>> "vector length" will be used to refer to this quantity.
>>>> P0-P15 are predicate registers whose size is the vector length divided
>>>> by 8; the FFR (First Fault Register) has the same size.
>>>> ZCR_ELx is a register that can control and restrict the maximum vector
>>>> length used by the <x> exception level and all the lower exception
>>>> levels, so for example EL3 can restrict the vector length usable by
>>>> EL3, EL2, EL1 and EL0.
>>>> The platform has a maximum implemented vector length, so for every
>>>> value written to the ZCR register, if this value is above the
>>>> implemented length, the lower value will be used. The RDVL instruction
>>>> can be used to check what vector length the HW is using after setting
>>>> ZCR.
>>>> For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is
>>>> no need to save them separately: saving Z0-Z31 implicitly saves V0-V31
>>>> as well.
>>>> SVE usage can be trapped using a flag in CPTR_EL2, hence in this
>>>> series the register is added to the domain state, to be able to trap
>>>> only the guests that are not allowed to use SVE.
>>>> This series introduces a command line parameter to enable Dom0 to use
>>>> SVE and to set its maximum vector length, which by default is 0,
>>>> meaning the guest is not allowed to use SVE. Values from 128 to 2048
>>>> mean the guest can use SVE with the selected value as the maximum
>>>> allowed vector length (which could be lower if the implemented one is
>>>> lower).
>>>> For DomUs, an XL parameter with the same semantics is introduced and a
>>>> dom0less DTB binding is created.
>>>> The context switch is the most critical part because there can be big
>>>> registers to be saved; in this series a simple approach is used and
>>>> the context is saved/restored every time for the guests that are
>>>> allowed to use SVE.
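[The size relationships in the quoted description can be sketched in plain C; illustrative helpers only, not code from the series:]

```c
/*
 * Illustrative helpers for the SVE sizing rules quoted above: the
 * effective vector length is the ZCR-requested length capped by the
 * implemented maximum, Z registers hold VL bits, and each predicate
 * register (and the FFR) holds VL/8 bits.
 */
unsigned int sve_effective_vl(unsigned int zcr_vl, unsigned int max_impl_vl)
{
    /* Writing a larger value than implemented results in the lower one. */
    return zcr_vl < max_impl_vl ? zcr_vl : max_impl_vl;
}

unsigned int sve_preg_bits(unsigned int vl)
{
    return vl / 8;   /* P0-P15 and FFR: one bit per byte of a Z register */
}
```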
>>>
>>> This would be OK for an initial approach. But I would be wary of
>>> officially supporting SVE because of the potentially large impact on
>>> other users.
>>>
>>> What's the long term plan?
>> Hi Julien,
>> For the future we can plan some work and decide together how to handle
>> the context switch; we might need some suggestions from you (Arm
>> maintainers) to design that part in the best way from a functional and
>> security perspective.
> I think SVE will need to be lazily saved/restored. So on context switch,
> we would record that the context belongs to a previous domain. The first
> time the current domain tries to access SVE, we would then load it.

We should try to avoid that kind of thing because it makes real-time
analysis a lot more complex.
The only use case where this would make the system a lot faster is when
only one guest is using SVE (which might be a use case); other than that,
this will just create delays when someone else is trying to use SVE,
instead of having a fixed delay at context switch.
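[The trade-off under discussion might be modelled like this; hypothetical types and names, not Xen code. The lazy scheme makes the switch itself cheap but moves a variable-latency trap into the guest's execution, which is what complicates real-time analysis:]

```c
#include <stddef.h>

/*
 * Hypothetical model of lazy SVE save/restore: switching away from the
 * vCPU whose state is live in HW only arms a trap (modelling the
 * CPTR_EL2 SVE trap bit); the save/restore cost is paid on the first
 * SVE access by the incoming vCPU, if that access ever happens.
 */
struct sve_vcpu {
    int trap_armed;   /* models CPTR_EL2 trapping SVE accesses */
};

static struct sve_vcpu *sve_state_owner;   /* whose state is live in HW */

void sve_context_switch_to(struct sve_vcpu *next)
{
    /* Cheap path: no register save/restore here, just arm the trap. */
    if ( sve_state_owner != next )
        next->trap_armed = 1;
}

void sve_trap_handler(struct sve_vcpu *curr)
{
    /* Save the previous owner's Z/P/FFR/ZCR and load ours (elided). */
    sve_state_owner = curr;
    curr->trap_armed = 0;
}
```

[The eager alternative pays the full save/restore on every switch, but keeps the latency fixed and analyzable, which is the property argued for above.]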

>
>> For now we might flag the feature as unsupported, explaining in the
>> Kconfig help that switching between SVE and non-SVE guests, or between
>> SVE guests, might add latency compared to switching between non-SVE
>> guests.
>
> I am OK with that. I actually like the idea of spelling it out because
> that helps us remember what the gaps in the code are :).

I like this solution too.

Cheers
Bertrand

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 13:43:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 13:43:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484317.750873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKg3K-00082k-BQ; Wed, 25 Jan 2023 13:43:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484317.750873; Wed, 25 Jan 2023 13:43:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKg3K-00082d-8W; Wed, 25 Jan 2023 13:43:02 +0000
Received: by outflank-mailman (input) for mailman id 484317;
 Wed, 25 Jan 2023 13:43:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKg3J-00082T-40; Wed, 25 Jan 2023 13:43:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKg3I-0002uG-Uv; Wed, 25 Jan 2023 13:43:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKg3I-0001AR-Hm; Wed, 25 Jan 2023 13:43:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKg3I-00012E-HN; Wed, 25 Jan 2023 13:43:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dpzjpaEzJwsooSzw7Ibfbl+C9BV+q5EeXs+2pURBt+c=; b=NV+IuMpUUHezg4ZT/4EekWftCe
	s7W4xkiaDIpZzN9f4Pr4fI8oAsTL3U7Guo4WoFgV2QmEGxCeRoFwBwMDAX4Vz+ytqUQ99bJ6vNMCW
	kf/FATCzHgpsGwEbyDXkXzacd7Dy2iN4yC6j7cWfDosV/F1BIaIpWNfy/6W9Z9VT4sWI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176116-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 176116: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-i386-libvirt-raw:xen-install:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=d5ecc2aa779d48be32bba51a6c8c16635c52721d
X-Osstest-Versions-That:
    libvirt=7b5777afcbe508a15a509444ff6e951e7201f321
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 13:43:00 +0000

flight 176116 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176116/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw   7 xen-install                  fail  like 176085
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 176085
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 176085
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 176085
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              d5ecc2aa779d48be32bba51a6c8c16635c52721d
baseline version:
 libvirt              7b5777afcbe508a15a509444ff6e951e7201f321

Last test of basis   176085  2023-01-24 04:20:12 Z    1 days
Testing same since   176116  2023-01-25 04:18:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Brooks Swinnerton <bswinnerton@gmail.com>
  Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Shaleen Bathla <shaleen.bathla@oracle.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   7b5777afcb..d5ecc2aa77  d5ecc2aa779d48be32bba51a6c8c16635c52721d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 13:51:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 13:51:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484328.750886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKgB6-0001Ld-CZ; Wed, 25 Jan 2023 13:51:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484328.750886; Wed, 25 Jan 2023 13:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKgB6-0001LW-8r; Wed, 25 Jan 2023 13:51:04 +0000
Received: by outflank-mailman (input) for mailman id 484328;
 Wed, 25 Jan 2023 13:51:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKgB4-0001LQ-UP
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 13:51:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKgB4-0003Cd-95; Wed, 25 Jan 2023 13:51:02 +0000
Received: from [54.239.6.189] (helo=[192.168.17.90])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKgB4-0007bj-2R; Wed, 25 Jan 2023 13:51:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=wiP8CMMp3FpxQblRKq04jMFfYF1wWspDeZNaHMMxxdU=; b=3C4dYN68cWP2rPtxYNKZOaRlcH
	GBjBzXaldWQWYexuhkE5ig1+r/GfLHqHFT9K0TlTORurFgkcHrv4+W4ywHkdfNHEyYGgpyWF0tnRW
	P2wAPiSF6u+WrfR4PZb9fLxajqSzYggPuTo6spsPryjA1lMcD0PbdL4rN2I5L0OBr7i0=;
Message-ID: <c8ef3347-2756-51b6-d474-268963abbef5@xen.org>
Date: Wed, 25 Jan 2023 13:50:58 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 0/8] SVE feature for arm guests
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Luca Fancellu <Luca.Fancellu@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
References: <20230111143826.3224-1-luca.fancellu@arm.com>
 <3e4ce6c0-9949-1312-f492-913b7dd2cf18@xen.org>
 <EB12FEDD-F3EC-401A-9648-77D7B28F6750@arm.com>
 <f8823ca1-450f-7522-d5db-41f124195ab3@xen.org>
 <B81C0C5E-6FEC-4B71-B740-8418EC1BBA06@arm.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <B81C0C5E-6FEC-4B71-B740-8418EC1BBA06@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 25/01/2023 13:21, Bertrand Marquis wrote:
>> On 13 Jan 2023, at 09:44, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> On 12/01/2023 11:58, Luca Fancellu wrote:
>>>> On 11 Jan 2023, at 16:59, Julien Grall <julien@xen.org> wrote:
>>>> On 11/01/2023 14:38, Luca Fancellu wrote:
>>>>> This series introduces the possibility for Dom0 and DomU guests to use
>>>>> sve/sve2 instructions.
>>>>> The SVE feature introduces new instructions and registers to improve
>>>>> performance of floating point operations.
>>>>> The SVE feature is advertised via the SVE field of the ID_AA64PFR0_EL1
>>>>> register, and when available the ID_AA64ZFR0_EL1 register provides
>>>>> additional information about the implemented version and other SVE features.
>>>>> New registers added by the SVE feature are Z0-Z31, P0-P15, FFR, ZCR_ELx.
>>>>> Z0-Z31 are scalable vector registers whose size is implementation defined
>>>>> and ranges from 128 bits up to a maximum of 2048; the term vector length
>>>>> will be used to refer to this quantity.
>>>>> P0-P15 are predicate registers whose size is the vector length divided by
>>>>> 8; the FFR (First Fault Register) has the same size.
>>>>> ZCR_ELx is a register that can control and restrict the maximum vector
>>>>> length used by the <x> exception level and all the lower exception levels,
>>>>> so for example EL3 can restrict the vector length usable by EL3,2,1,0.
>>>>> The platform has a maximum implemented vector length, so if a value
>>>>> written to a ZCR register is above the implemented length, the lower
>>>>> value will be used. The RDVL instruction can be used to check what vector
>>>>> length the HW is using after setting ZCR.
>>>>> For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is no
>>>>> need to save them separately: saving Z0-Z31 implicitly also saves V0-V31.
>>>>> SVE usage can be trapped using a flag in CPTR_EL2, hence in this series
>>>>> the register is added to the domain state, to be able to trap only the
>>>>> guests that are not allowed to use SVE.
>>>>> This series introduces a command line parameter to enable Dom0 to use SVE
>>>>> and to set its maximum vector length, which by default is 0, meaning the
>>>>> guest is not allowed to use SVE. Values from 128 to 2048 mean the guest
>>>>> can use SVE, with the selected value used as the maximum allowed vector
>>>>> length (which could be lower if the implemented one is lower).
>>>>> For DomUs, an XL parameter with the same semantics is introduced and a
>>>>> dom0less DTB binding is created.
>>>>> The context switch is the most critical part because there can be big
>>>>> registers to save; in this series a simple approach is used and the
>>>>> context is saved/restored every time for the guests that are allowed to
>>>>> use SVE.
>>>>
>>>> This would be OK for an initial approach. But I would be wary of officially supporting SVE because of the potentially large impact on other users.
>>>>
>>>> What's the long term plan?
>>> Hi Julien,
>>> For the future we can plan some work and decide together how to handle the context switch;
>>> we might need some suggestions from you (the Arm maintainers) to design that part in the
>>> best way from a functional and security perspective.
>> I think SVE will need to be lazily saved/restored. So on context switch, we would mark the context as belonging to a previous domain. The first time the current domain tries to access SVE after that, we would load it.
> 
> We should try to avoid those kinds of things because they make real-time analysis a lot more complex.

The choice of SVE (including the vector length) is per-domain. If all 
the VMs are using the same vector length, then the delay would indeed be 
fixed. Otherwise, the delay will vary depending on the scheduling choice.

It is not clear to me how this is better for real time analysis.

> The only use case where this would make the system a lot faster is if there is only one guest using SVE (which might be a use case); other than that, this will just create delays when someone else is trying to use SVE, instead of having a fixed delay at context switch.

Even in the case you mention, I think it will highly depend on the cost 
of context switching SVE. I have been told this is quite large, and one 
surely doesn't want to spend an extra thousand cycles when receiving an 
interrupt (I don't expect handlers to use SVE).

I think we need to understand the workload (and cost) in order to decide 
whether it should be eager/lazy.

At least, I know that in Linux, only the parts common with VFP are 
guaranteed to be preserved (see [1]). So the expectation seems to be 
that SVE use will be short-lived.
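
[Editor's note: the lazy save/restore scheme discussed above can be sketched as
follows. This is an illustrative, self-contained simulation only, not Xen's
actual implementation; the struct names (vcpu, sve_state), the trap flag, and
the handler are simplified stand-ins for CPTR_EL2 trapping.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for per-vCPU SVE state: Z0-Z31 at max 2048-bit VL. */
struct sve_state {
    unsigned char z[32][256];
};

struct vcpu {
    struct sve_state sve;
    bool sve_trap_enabled;   /* models the CPTR_EL2 SVE trap bit */
};

/* The vCPU whose SVE state is currently live in the HW registers. */
static struct vcpu *sve_owner;

/*
 * On context switch: do NOT save/restore SVE eagerly. Just re-arm the
 * trap so the first SVE access by the incoming vCPU faults to Xen.
 */
static void ctxt_switch_to(struct vcpu *next)
{
    next->sve_trap_enabled = true;
}

/*
 * Trap handler: only here, on first use, is SVE state actually moved.
 * A vCPU that never touches SVE never pays the cost.
 */
static void handle_sve_trap(struct vcpu *curr)
{
    if (sve_owner != curr) {
        /* if (sve_owner) -> save sve_owner->sve from HW registers here */
        /* restore curr->sve into the HW registers here */
        sve_owner = curr;
    }
    /* Disable the trap so later accesses run at full speed. */
    curr->sve_trap_enabled = false;
}
```

The point of contention in the thread is visible in the sketch: the cost of the
save/restore pair moves from every context switch to the first SVE use after a
switch, which helps SVE-light workloads but makes the worst-case latency depend
on which domain last owned the registers.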

> 
>>
>>> For now we might flag the feature as unsupported, explaining in the Kconfig help that switching
>>> between SVE and non-SVE guests, or between SVE guests, might add latency compared to
>>> switching between non-SVE guests.
>>
>> I am OK with that. I actually like the idea of spelling it out because that helps us remember what the gaps in the code are :).
> 
> I like this solution too.
> 
> Cheers
> Bertrand
> 
>>
>> Cheers,
>>
>> -- 
>> Julien Grall
> 

[1] https://www.kernel.org/doc/Documentation/arm64/sve.txt

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 14:44:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 14:44:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484335.750898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKh0U-0007v3-A7; Wed, 25 Jan 2023 14:44:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484335.750898; Wed, 25 Jan 2023 14:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKh0U-0007uw-6w; Wed, 25 Jan 2023 14:44:10 +0000
Received: by outflank-mailman (input) for mailman id 484335;
 Wed, 25 Jan 2023 14:44:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKh0S-0007um-Gc; Wed, 25 Jan 2023 14:44:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKh0S-0004SF-D6; Wed, 25 Jan 2023 14:44:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKh0R-0004Ac-WE; Wed, 25 Jan 2023 14:44:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKh0R-0004gQ-Vp; Wed, 25 Jan 2023 14:44:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6MyFkatM2E0QPicRk9tpfDF+DWUV7H1ccJnuZEgT1UM=; b=VBO/y92J6J/6R/9m7cE62xnwhb
	mEheZ8Tn9UiL3YqXHBctSmwbuRwvTUMzcrxJFH81LKkLAOVeFEGVy62z9WCLA9zRyT72hvAqx1tyJ
	/QSxa9yM4afUKJyiaRpqmOO+NbEHnT9vxr2Axg55JLEDMgEYM5fpRYq5/yfmWx9NK5yA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176115-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176115: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=948ef7bb70c4acaf74d87420ea3a1190862d4548
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 14:44:07 +0000

flight 176115 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176115/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                948ef7bb70c4acaf74d87420ea3a1190862d4548
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  109 days
Failing since        173470  2022-10-08 06:21:34 Z  109 days  225 attempts
Testing same since   176115  2023-01-25 03:57:20 Z    0 days    1 attempts

------------------------------------------------------------
3442 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 528925 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 14:44:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 14:44:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484338.750909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKh11-0008K2-IS; Wed, 25 Jan 2023 14:44:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484338.750909; Wed, 25 Jan 2023 14:44:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKh11-0008Jt-FX; Wed, 25 Jan 2023 14:44:43 +0000
Received: by outflank-mailman (input) for mailman id 484338;
 Wed, 25 Jan 2023 14:44:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Jk/=5W=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pKh10-0008IQ-2G
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 14:44:42 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cb325fcb-9cbe-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 15:44:38 +0100 (CET)
Received: by mail-wr1-x432.google.com with SMTP id h12so13292694wrv.10
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 06:44:38 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 x18-20020a5d4912000000b002be099f78c0sm4767318wrq.69.2023.01.25.06.44.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Jan 2023 06:44:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb325fcb-9cbe-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=AGfYi2dLWPro9fJQ4wRS2nm01X4dtE6uH7MTB6rpS58=;
        b=SpgEdUNwxeim+2QQYdEZBwvnPnXBOy7B477oN2KlSlThWxspuY33Mxw0cto+sh3jaA
         eYmI2FUv2jJdMFdZiPQXi1UK3NE0knIX/x4kHSGxs7XZfR5qPlrSNi+v2mCxdYWTIA8C
         le0tn+vk7cezqXnnygUHjA6i0KiFFFgyBBx/Ko65Uvw+jd2ORD/f5IRjNb/G0/p0zFH0
         IPjAl3auwDwwqZP/nTfoPuNZxRzZC30v1isGOZPZj5o0rCJqI4psgyO7UUsZPY7WoqqT
         m2u2V7whGQIL8+W2hH8GXGrf7KqDIB+dhH7RWWDCh+cDkaGQvei1GivRWm9s+JhZKN6z
         7VRA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=AGfYi2dLWPro9fJQ4wRS2nm01X4dtE6uH7MTB6rpS58=;
        b=UpnfNtd8nX69U6yra6UVLguWYCarn4mJBMxm5WcCn+88r6+QuUkKnjENXoUBTMSQua
         zsGCP614aCKVTSsyFx9b6MxqMpDXRc6wvj1FS1PKDujFRpRHzGZd/qgjl/ih0NpIsVkl
         Vhfh+Gofa7kutazY3OaGEODy5KoBRM0EasMFZ1r2w4ZKyJxH9Q24RLXpafo5AhOmyaGI
         uIxOtW34tM+GC+XSY8pLk6ZO+VorkX0It7234itBbasHhG1J+EsCTzDwHEAxhCyCjMP0
         XdYNXPIrPhV0Bm79NDe0VqzXdLOLsKcoArwX+8qktZs+3NUXNoUjum11fI8jwBp8Ak7B
         6dIQ==
X-Gm-Message-State: AO0yUKW9oLl8LrUc7RnLbAI6NgTW+6aD3Nz1vBoCJBPJ89AK7YqmuKbu
	4w4LlFDOY1AbUJ0AOvg//Kw=
X-Google-Smtp-Source: AK7set+YS47CcAsqUF8VmcHUi5yS5X5sIwTaIP2GU3XU2VVKCW/ipZ1Ta/fSpoyl+mx697UTt03oxw==
X-Received: by 2002:adf:f705:0:b0:2bf:bb0a:e486 with SMTP id r5-20020adff705000000b002bfbb0ae486mr2005953wrp.30.1674657878055;
        Wed, 25 Jan 2023 06:44:38 -0800 (PST)
Message-ID: <ca20e076ae8af5ae0924a19e73352ac9d7e7a202.camel@gmail.com>
Subject: Re: [PATCH v1 07/14] xen/riscv: introduce exception handlers
 implementation
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida
	 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
	Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Date: Wed, 25 Jan 2023 16:44:36 +0200
In-Reply-To: <ac6f02e8-c493-7914-f3c4-32b4ebe1bc26@citrix.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <7a459ea843d5823ee2c50b0e44dded5bdb554ca6.1674226563.git.oleksii.kurochko@gmail.com>
	 <ac6f02e8-c493-7914-f3c4-32b4ebe1bc26@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37)
MIME-Version: 1.0

On Mon, 2023-01-23 at 11:50 +0000, Andrew Cooper wrote:
> 
> 
> > +        /* Save context to stack */
> > +        REG_S   sp, (RISCV_CPU_USER_REGS_OFFSET(sp) -
> > RISCV_CPU_USER_REGS_SIZE) (sp)
> > +        addi    sp, sp, -RISCV_CPU_USER_REGS_SIZE
> > +        REG_S   t0, RISCV_CPU_USER_REGS_OFFSET(t0)(sp)
> 
> Exceptions on RISC-V don't adjust the stack pointer.  This logic
> depends on interrupting Xen code, and Xen not having suffered a stack
> overflow (and actually, that the space on the stack for all registers
> also doesn't overflow).
> 
>=20
I may have missed something, but the idea of the code above was to
reserve memory on the stack for saving the registers which can be
changed in __handler_exception(), as the code at the point where the
exception occurred will expect that the register values weren't
changed.
Otherwise, if we don't reserve memory on the stack, it will be
corrupted by REG_S, which is basically an SD instruction.
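The offset arithmetic being discussed can be sketched as follows (a
hypothetical Python simulation; FRAME_SIZE and OFFSET_SP are
illustrative stand-ins for RISCV_CPU_USER_REGS_SIZE and
RISCV_CPU_USER_REGS_OFFSET(sp), not the real values):

```python
# Hypothetical illustration of the stack-frame reservation above.
# REG_S sp, (OFFSET(sp) - FRAME_SIZE)(sp) stores the *old* sp into the
# not-yet-reserved frame; addi sp, sp, -FRAME_SIZE then claims the frame.
FRAME_SIZE = 32          # assumed size of the register-save frame
OFFSET_SP = 8            # assumed offset of the saved-sp slot in the frame

memory = {}              # address -> value, standing in for RAM
sp = 1000                # some stack pointer inside Xen's stack

# REG_S sp, (OFFSET_SP - FRAME_SIZE)(sp): write old sp below current sp
memory[sp + OFFSET_SP - FRAME_SIZE] = sp
# addi sp, sp, -FRAME_SIZE: only now reserve the frame
sp -= FRAME_SIZE
# The saved value is found at the expected slot of the new frame
assert memory[sp + OFFSET_SP] == sp + FRAME_SIZE
```

The store lands below the current sp, which is exactly why the frame
must be considered reserved: anything already living at those addresses
would be overwritten.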




From xen-devel-bounces@lists.xenproject.org Wed Jan 25 14:57:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 14:57:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484350.750922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhCp-00020K-ME; Wed, 25 Jan 2023 14:56:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484350.750922; Wed, 25 Jan 2023 14:56:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhCp-00020D-Ie; Wed, 25 Jan 2023 14:56:55 +0000
Received: by outflank-mailman (input) for mailman id 484350;
 Wed, 25 Jan 2023 14:56:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G8Uq=5W=xenbits.xen.org=julieng@srs-se1.protection.inumbo.net>)
 id 1pKhCo-0001zz-A4
 for xen-devel@lists.xen.org; Wed, 25 Jan 2023 14:56:54 +0000
Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7dcf57f9-9cc0-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 15:56:49 +0100 (CET)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1pKhCZ-0004qG-Gl; Wed, 25 Jan 2023 14:56:39 +0000
Received: from julieng by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1pKhCZ-0001p5-FK; Wed, 25 Jan 2023 14:56:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dcf57f9-9cc0-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=4x+zYFZJ2fux5Y6wWNRS5h2J8/Cv4GXccaEhicN1EBc=; b=gJeN+Tbf+gVwtZasWDv9LrtxkB
	HIua5PqCH9ryAxObrVZLIXuLVCD1CQNRoVFedpw4Yw31Ua3XrxPvE2eR7UCj8YTSNO4L7f6TY8zar
	TlO1teALImTV5W+p56d12sVwXZEKu7LMZikjaKniuE3h7MfVVVPX/KMhmyZjfvJ4ceck=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 425 v1 (CVE-2022-42330) - Guests can cause
 Xenstore crash via soft reset
Message-Id: <E1pKhCZ-0001p5-FK@xenbits.xenproject.org>
Date: Wed, 25 Jan 2023 14:56:39 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2022-42330 / XSA-425

            Guests can cause Xenstore crash via soft reset

ISSUE DESCRIPTION
=================

When a guest issues a "Soft Reset" (e.g. for performing a kexec), the
libxl based Xen toolstack will normally perform an XS_RELEASE Xenstore
operation.

Due to a bug in xenstored this can result in a crash of xenstored.

Any other use of XS_RELEASE will have the same impact.

IMPACT
======

A malicious guest could try to kexec until it hits the xenstored bug,
resulting in the inability to perform any further domain administration
like starting new guests, or adding/removing resources to or from any
existing guest.

VULNERABLE SYSTEMS
==================

Only Xen version 4.17 is vulnerable. Systems running an older version
of Xen are not vulnerable.

All Xen systems using C xenstored are vulnerable. Systems using the
OCaml variant of xenstored are not vulnerable.

Systems running only PV guests (x86 only) are not vulnerable, as long as
they are using a libxl based toolstack.

MITIGATION
==========

The problem can be avoided by either:

- - using the OCaml xenstored variant

- - explicitly configuring guests to NOT perform the "Soft Reset" action
  by adding:
    on_soft_reset="reboot"
  or similar to the guest's configuration. This will break kexec in the
  guest, though.

NOTE REGARDING LACK OF EMBARGO
==============================

This issue was discussed in public already.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa425.patch           xen-unstable, Xen 4.17.x

$ sha256sum xsa425*
49f322c955fe7857cc824bba80625e56f582fdf0a4b244f513b6750e15ba5e48  xsa425.patch
$

-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmPRQroMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZEpsIAJmIVB2lvqT2Qdp0pPSoaJIxXxuGE320kVTWmudB
F2WbRCxeubqoOC/MyHTLOujMix6wBHnbm1cMQo0r4Vah/KX34vPS3wYqDZQYZtES
aEkOQ+214QLAS2futcT0gde9idKpShI9jjWSRwcH01a7V6tlwwidc4V0luUFV0iX
EKHPJ89rbbCMP1fOq5B+C7UP8oyiHItNWPWPFBwtUeXKvFiPOoyUPCoTHG8CCYHG
WiVbeaZab7x/9+WUwXJ6hZqZiVr6NqoaItOx9Nbw4yCHwJlAj2UfA9skmqtGbPbB
vxhkbIgOeiWoPvZgTGQjzZLosWO5+y30Fv5QYIbjA2/1OSQ=
=7kiM
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa425.patch"
Content-Disposition: attachment; filename="xsa425.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFzb24gQW5kcnl1ayA8amFuZHJ5dWtAZ21haWwuY29tPgpTdWJq
ZWN0OiBSZXZlcnQgInRvb2xzL3hlbnN0b3JlOiBzaW1wbGlmeSBsb29wIGhh
bmRsaW5nIGNvbm5lY3Rpb24gSS9PIgoKSSdtIG9ic2VydmluZyBndWVzdCBr
ZXhlYyB0cmlnZ2VyIHhlbnN0b3JlZCB0byBhYm9ydCBvbiBhIGRvdWJsZSBm
cmVlLgoKZ2RiIG91dHB1dDoKUHJvZ3JhbSByZWNlaXZlZCBzaWduYWwgU0lH
QUJSVCwgQWJvcnRlZC4KX19wdGhyZWFkX2tpbGxfaW1wbGVtZW50YXRpb24g
KG5vX3RpZD0wLCBzaWdubz02LCB0aHJlYWRpZD0xNDA2NDU2MTQyNTgxMTIp
IGF0IC4vbnB0bC9wdGhyZWFkX2tpbGwuYzo0NAo0NCAgICAuL25wdGwvcHRo
cmVhZF9raWxsLmM6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkuCihnZGIp
IGJ0CiAgICBhdCAuL25wdGwvcHRocmVhZF9raWxsLmM6NDQKICAgIGF0IC4v
bnB0bC9wdGhyZWFkX2tpbGwuYzo3OAogICAgYXQgLi9ucHRsL3B0aHJlYWRf
a2lsbC5jOjg5CiAgICBhdCAuLi9zeXNkZXBzL3Bvc2l4L3JhaXNlLmM6MjYK
ICAgIGF0IHRhbGxvYy5jOjExOQogICAgcHRyPXB0ckBlbnRyeT0weDU1OWZh
ZTcyNDI5MCkgYXQgdGFsbG9jLmM6MjMyCiAgICBhdCB4ZW5zdG9yZWRfY29y
ZS5jOjI5NDUKKGdkYikgZnJhbWUgNQogICAgYXQgdGFsbG9jLmM6MTE5CjEx
OSAgICAgICAgICAgIFRBTExPQ19BQk9SVCgiQmFkIHRhbGxvYyBtYWdpYyB2
YWx1ZSAtIGRvdWJsZSBmcmVlIik7CihnZGIpIGZyYW1lIDcKICAgIGF0IHhl
bnN0b3JlZF9jb3JlLmM6Mjk0NQoyOTQ1ICAgICAgICAgICAgICAgIHRhbGxv
Y19pbmNyZWFzZV9yZWZfY291bnQoY29ubik7CihnZGIpIHAgY29ubgokMSA9
IChzdHJ1Y3QgY29ubmVjdGlvbiAqKSAweDU1OWZhZTcyNDI5MAoKTG9va2lu
ZyBhdCBhIHhlbnN0b3JlIHRyYWNlLCB3ZSBoYXZlOgpJTiAweDU1OWZhZTcx
ZjI1MCAyMDIzMDEyMCAxNzo0MDo1MyBSRUFEICgvbG9jYWwvZG9tYWluLzMv
aW1hZ2UvZGV2aWNlLW1vZGVsLWRvbQppZCApCndybDogZG9tICAgIDAgICAg
ICAxICBtc2VjICAgICAgMTAwMDAgY3JlZGl0ICAgICAxMDAwMDAwIHJlc2Vy
dmUgICAgICAgIDEwMCBkaXNjCmFyZAp3cmw6IGRvbSAgICAzICAgICAgMSAg
bXNlYyAgICAgIDEwMDAwIGNyZWRpdCAgICAgMTAwMDAwMCByZXNlcnZlICAg
ICAgICAxMDAgZGlzYwphcmQKd3JsOiBkb20gICAgMCAgICAgIDAgIG1zZWMg
ICAgICAxMDAwMCBjcmVkaXQgICAgIDEwMDAwMDAgcmVzZXJ2ZSAgICAgICAg
ICAwIGRpc2MKYXJkCndybDogZG9tICAgIDMgICAgICAwICBtc2VjICAgICAg
MTAwMDAgY3JlZGl0ICAgICAxMDAwMDAwIHJlc2VydmUgICAgICAgICAgMCBk
aXNjCmFyZApPVVQgMHg1NTlmYWU3MWYyNTAgMjAyMzAxMjAgMTc6NDA6NTMg
RVJST1IgKEVOT0VOVCApCndybDogZG9tICAgIDAgICAgICAxICBtc2VjICAg
ICAgMTAwMDAgY3JlZGl0ICAgICAxMDAwMDAwIHJlc2VydmUgICAgICAgIDEw
MCBkaXNjCmFyZAp3cmw6IGRvbSAgICAzICAgICAgMSAgbXNlYyAgICAgIDEw
MDAwIGNyZWRpdCAgICAgMTAwMDAwMCByZXNlcnZlICAgICAgICAxMDAgZGlz
YwphcmQKSU4gMHg1NTlmYWU3MWYyNTAgMjAyMzAxMjAgMTc6NDA6NTMgUkVM
RUFTRSAoMyApCkRFU1RST1kgd2F0Y2ggMHg1NTlmYWU3M2Y2MzAKREVTVFJP
WSB3YXRjaCAweDU1OWZhZTc1ZGRmMApERVNUUk9ZIHdhdGNoIDB4NTU5ZmFl
NzVlYzMwCkRFU1RST1kgd2F0Y2ggMHg1NTlmYWU3NWVhNjAKREVTVFJPWSB3
YXRjaCAweDU1OWZhZTczMmMwMApERVNUUk9ZIHdhdGNoIDB4NTU5ZmFlNzJj
ZWEwCkRFU1RST1kgd2F0Y2ggMHg1NTlmYWU3MjhmYzAKREVTVFJPWSB3YXRj
aCAweDU1OWZhZTcyOTU3MApERVNUUk9ZIGNvbm5lY3Rpb24gMHg1NTlmYWU3
MjQyOTAKb3JwaGFuZWQgbm9kZSAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3N1
c3BlbmQvZXZlbnQtY2hhbm5lbCBkZWxldGVkCm9ycGhhbmVkIG5vZGUgL2xv
Y2FsL2RvbWFpbi8zL2RldmljZS92YmQvNTE3MTIgZGVsZXRlZApvcnBoYW5l
ZCBub2RlIC9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmtiZC8wIGRlbGV0ZWQK
b3JwaGFuZWQgbm9kZSAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZpZi8wIGRl
bGV0ZWQKb3JwaGFuZWQgbm9kZSAvbG9jYWwvZG9tYWluLzMvY29udHJvbC9z
aHV0ZG93biBkZWxldGVkCm9ycGhhbmVkIG5vZGUgL2xvY2FsL2RvbWFpbi8z
L2NvbnRyb2wvZmVhdHVyZS1wb3dlcm9mZiBkZWxldGVkCm9ycGhhbmVkIG5v
ZGUgL2xvY2FsL2RvbWFpbi8zL2NvbnRyb2wvZmVhdHVyZS1yZWJvb3QgZGVs
ZXRlZApvcnBoYW5lZCBub2RlIC9sb2NhbC9kb21haW4vMy9jb250cm9sL2Zl
YXR1cmUtc3VzcGVuZCBkZWxldGVkCm9ycGhhbmVkIG5vZGUgL2xvY2FsL2Rv
bWFpbi8zL2NvbnRyb2wvZmVhdHVyZS1zMyBkZWxldGVkCm9ycGhhbmVkIG5v
ZGUgL2xvY2FsL2RvbWFpbi8zL2NvbnRyb2wvZmVhdHVyZS1zNCBkZWxldGVk
Cm9ycGhhbmVkIG5vZGUgL2xvY2FsL2RvbWFpbi8zL2NvbnRyb2wvc3lzcnEg
ZGVsZXRlZApvcnBoYW5lZCBub2RlIC9sb2NhbC9kb21haW4vMy9kYXRhIGRl
bGV0ZWQKb3JwaGFuZWQgbm9kZSAvbG9jYWwvZG9tYWluLzMvZHJpdmVycyBk
ZWxldGVkCm9ycGhhbmVkIG5vZGUgL2xvY2FsL2RvbWFpbi8zL2ZlYXR1cmUg
ZGVsZXRlZApvcnBoYW5lZCBub2RlIC9sb2NhbC9kb21haW4vMy9hdHRyIGRl
bGV0ZWQKb3JwaGFuZWQgbm9kZSAvbG9jYWwvZG9tYWluLzMvZXJyb3IgZGVs
ZXRlZApvcnBoYW5lZCBub2RlIC9sb2NhbC9kb21haW4vMy9jb25zb2xlL2Jh
Y2tlbmQtaWQgZGVsZXRlZAoKYW5kIG5vIGZ1cnRoZXIgb3V0cHV0LgoKVGhl
IHRyYWNlIHNob3dzIHRoYXQgREVTVFJPWSB3YXMgY2FsbGVkIGZvciBjb25u
ZWN0aW9uIDB4NTU5ZmFlNzI0MjkwLApidXQgdGhhdCBpcyB0aGUgc2FtZSBw
b2ludGVyIChjb25uKSBtYWluKCkgd2FzIGxvb3BpbmcgdGhyb3VnaCBmcm9t
CmNvbm5lY3Rpb25zLiAgU28gaXQgd2Fzbid0IGFjdHVhbGx5IHJlbW92ZWQg
ZnJvbSB0aGUgY29ubmVjdGlvbnMgbGlzdD8KClJldmVydGluZyBjb21taXQg
ZThlNmU0MjI3OWE1ICJ0b29scy94ZW5zdG9yZTogc2ltcGxpZnkgbG9vcCBo
YW5kbGluZwpjb25uZWN0aW9uIEkvTyIgZml4ZXMgdGhlIGFib3J0L2RvdWJs
ZSBmcmVlLiAgSSB0aGluayB0aGUgdXNlIG9mCmxpc3RfZm9yX2VhY2hfZW50
cnlfc2FmZSBpcyBpbmNvcnJlY3QuICBsaXN0X2Zvcl9lYWNoX2VudHJ5X3Nh
ZmUgbWFrZXMKdHJhdmVyc2FsIHNhZmUgZm9yIGRlbGV0aW5nIHRoZSBjdXJy
ZW50IGl0ZXJhdG9yLCBidXQgUkVMRUFTRS9kb19yZWxlYXNlCndpbGwgZGVs
ZXRlIHNvbWUgb3RoZXIgZW50cnkgaW4gdGhlIGNvbm5lY3Rpb25zIGxpc3Qu
ICBJIHRoaW5rIHRoZQpvYnNlcnZlZCBhYm9ydCBpcyBiZWNhdXNlIGxpc3Rf
Zm9yX2VhY2hfZW50cnkgaGFzIG5leHQgcG9pbnRpbmcgdG8gdGhlCmRlbGV0
ZWQgY29ubmVjdGlvbiwgYW5kIGl0IGlzIHVzZWQgaW4gdGhlIHN1YnNlcXVl
bnQgaXRlcmF0aW9uLgoKQWRkIGEgY29tbWVudCBleHBsYWluaW5nIHRoZSB1
bnN1aXRhYmlsaXR5IG9mIGxpc3RfZm9yX2VhY2hfZW50cnlfc2FmZS4KQWxz
byBub3RpY2UgdGhhdCB0aGUgb2xkIGNvZGUgdGFrZXMgYSByZWZlcmVuY2Ug
b24gbmV4dCB3aGljaCB3b3VsZApwcmV2ZW50cyBhIHVzZS1hZnRlci1mcmVl
LgoKVGhpcyByZXZlcnRzIGNvbW1pdCBlOGU2ZTQyMjc5YTU3MjMyMzljNWM0
MGJhNGM3ZjU3OWE5Nzk0NjVkLgoKVGhpcyBpcyBYU0EtNDI1L0NWRS0yMDIy
LTQyMzMwLgoKRml4ZXM6IGU4ZTZlNDIyNzlhNSAoInRvb2xzL3hlbnN0b3Jl
OiBzaW1wbGlmeSBsb29wIGhhbmRsaW5nIGNvbm5lY3Rpb24gSS9PIikKU2ln
bmVkLW9mZi1ieTogSmFzb24gQW5kcnl1ayA8amFuZHJ5dWtAZ21haWwuY29t
PgpSZXZpZXdlZC1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29t
PgpSZXZpZXdlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNv
bT4KLS0tCiB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIHwgMTkg
KysrKysrKysrKysrKysrKystLQogMSBmaWxlIGNoYW5nZWQsIDE3IGluc2Vy
dGlvbnMoKyksIDIgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9vbHMv
eGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYyBiL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9jb3JlLmMKaW5kZXggNzhhM2VkYWE0ZS4uMDI5ZTM4NTJmYyAx
MDA2NDQKLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwor
KysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCkBAIC0yOTQx
LDggKzI5NDEsMjMgQEAgaW50IG1haW4oaW50IGFyZ2MsIGNoYXIgKmFyZ3Zb
XSkKIAkJCX0KIAkJfQogCi0JCWxpc3RfZm9yX2VhY2hfZW50cnlfc2FmZShj
b25uLCBuZXh0LCAmY29ubmVjdGlvbnMsIGxpc3QpIHsKLQkJCXRhbGxvY19p
bmNyZWFzZV9yZWZfY291bnQoY29ubik7CisJCS8qCisJCSAqIGxpc3RfZm9y
X2VhY2hfZW50cnlfc2FmZSBpcyBub3Qgc3VpdGFibGUgaGVyZSBiZWNhdXNl
CisJCSAqIGhhbmRsZV9pbnB1dCBtYXkgZGVsZXRlIGVudHJpZXMgYmVzaWRl
cyB0aGUgY3VycmVudCBvbmUsIGJ1dAorCQkgKiB0aG9zZSBtYXkgYmUgaW4g
dGhlIHRlbXBvcmFyeSBuZXh0IHdoaWNoIHdvdWxkIHRyaWdnZXIgYQorCQkg
KiB1c2UtYWZ0ZXItZnJlZS4gIGxpc3RfZm9yX2VhY2hfZW50cnlfc2FmZSBp
cyBvbmx5IHNhZmUgZm9yCisJCSAqIGRlbGV0aW5nIHRoZSBjdXJyZW50IGVu
dHJ5LgorCQkgKi8KKwkJbmV4dCA9IGxpc3RfZW50cnkoY29ubmVjdGlvbnMu
bmV4dCwgdHlwZW9mKCpjb25uKSwgbGlzdCk7CisJCWlmICgmbmV4dC0+bGlz
dCAhPSAmY29ubmVjdGlvbnMpCisJCQl0YWxsb2NfaW5jcmVhc2VfcmVmX2Nv
dW50KG5leHQpOworCQl3aGlsZSAoJm5leHQtPmxpc3QgIT0gJmNvbm5lY3Rp
b25zKSB7CisJCQljb25uID0gbmV4dDsKKworCQkJbmV4dCA9IGxpc3RfZW50
cnkoY29ubi0+bGlzdC5uZXh0LAorCQkJCQkgIHR5cGVvZigqY29ubiksIGxp
c3QpOworCQkJaWYgKCZuZXh0LT5saXN0ICE9ICZjb25uZWN0aW9ucykKKwkJ
CQl0YWxsb2NfaW5jcmVhc2VfcmVmX2NvdW50KG5leHQpOwogCiAJCQlpZiAo
Y29ubl9jYW5fcmVhZChjb25uKSkKIAkJCQloYW5kbGVfaW5wdXQoY29ubik7
Ci0tIAoyLjM0LjEK

--=separator--


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 15:24:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 15:24:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484410.750968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhdS-0007lb-Pk; Wed, 25 Jan 2023 15:24:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484410.750968; Wed, 25 Jan 2023 15:24:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhdS-0007lU-MI; Wed, 25 Jan 2023 15:24:26 +0000
Received: by outflank-mailman (input) for mailman id 484410;
 Wed, 25 Jan 2023 15:24:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YOW=5W=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKhdR-0007lJ-O6
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 15:24:26 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2084.outbound.protection.outlook.com [40.107.8.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58444f55-9cc4-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 16:24:23 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8948.eurprd04.prod.outlook.com (2603:10a6:20b:42f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 15:24:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Wed, 25 Jan 2023
 15:24:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58444f55-9cc4-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JoryF+zXzbdVZnKxwKq1JxwkHvDcdlHX1piCQfei9pV0pv/AuGdY96256hF88egQzQrP7YtKTs5EHnKiELTfsuSs7LDDO+RWXEB4VzHKtT4XNHvIST4fsd8QN/FUVfGVu7aha40lmBDAp8nqv0dQ2MQvCREMrtqs2uddfFrK+tVqrcFv/UjfpnBEQa9Fvj+FqjxNU6khAcgTN8uL1L3vfK4pYU7gXGgFVBoJ6ckFmpg2A96EuktuOIVC/R2hAjdWpE1xGMJwV781Cwqa487Y3RCXznrgo2G2GaKiT4BT/R7p7FkOo7/qKNxSbVaGzUOqYVdGshMEPnEJzBnz/Y+1tg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6LGB6M/zSypAQqdu7F039ipa7i1HTcq09wHgGgOd0tE=;
 b=lkeIpm38aXAkp718tIVyAZcwOJlpoHDGGwg+ny/XNVWB6seMPbGhhi8TniM8IE/XaNVmnzhu65krF3i6Q6yPomSSegT0zGVoCMgWSHYegePJ16Af2kQIJfWwnkxqv07L1GIGOgOR6bJeset/r2zW3OWbF8ENOb+uBNi1op4JKW9cZTu8MS6q+eHE8BtTzjw/ranWQ0Uq5oFpgF1LdxWYyCW76Sx0PXoAldqrGyILkyqpYyxjBOwbSCQzsj3SRpC51wKGH7gIR7lsW+VfvHOvwid+VwuqWbg+/AzGZRavhF5ymzk8ABt179DFB8+H2+o1OHpxs/NklaEDJB1ZhwQ56g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6LGB6M/zSypAQqdu7F039ipa7i1HTcq09wHgGgOd0tE=;
 b=43GevqWSHikkUEp8tnTUGPv+/lDVXgteuYPd2xo6wN42g7fa25ivkKnYrvLTJ8qBm7XUhWSQjDx/GujDLZPGXVcRLXtICezorcu2fXzBBQIr7eUe8HVMffR/JnwIBUfoP0MnIDbWWvHzmCvhlh9Rp5PaBw+glarrfID32e2DpTdskZIi2QjlHc3kCuBzDcajZXd2qS6u/ikw5+30boTw5c/kJ+MJjszVzqX5Vhv2xDFLctJSsBjASsXQIbwbFw85xLU+eCX5ZHgx0gDlFqDOsjIsc25JTsL7k6CwJHdTI/29HZqaV/C23SSGCc4vPtKQX77cV3JmHEJTt/nbkD6ooA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
Date: Wed, 25 Jan 2023 16:24:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/4] x86/spec-ctrl: IBPB improvements
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0120.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8948:EE_
X-MS-Office365-Filtering-Correlation-Id: c5a365d8-4397-47b1-6ef8-08dafee83b51
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NHg8pu1oi0Rd1/V/Otr8WY8ta9VeApn3Ec4loYc/h60xBDxUqdm1wpfc1eamgSpYzxcTYNdlq0gXZxakLIrcy4omhyqr9LAL2fv2/XnkHkdngh/FCFUpP7zlDL9lrZkQumwmUOD7/z2D5IJfV4afXF7drvRCvvd1mPBL43srP9kQt09+oJY6AsCKYXR6cz6Nkxpi46pt0MAdR+V+xXGHgw1GpQto6lqBGxUxKcbP5L9+FmHefF07yzDvZKmDIhc5wgEF62lYts/k29Ff+9HC0RmzaWYdPpb9PpKbtQXJqSmLQ9m5RZ/SQ0hAk4b35NrrHC/4Nz13vOUW7zFIaNQEPg4UDW/45ofaNK584knLwT9N2cdJeWa1+9wPQc61TPQTYDGoy3w+yUATbPhR3B4wk6jpsiQ9A1qk+pUpdv7TRsJQJ3REHK3ItJZJS+Or7ph6Mu/5fYhtzkMci0GkgpTXNuTomrUm24R9wrHB6ArNj2sm5mZdRHrN2HMNzr6c42VT5WBl78zUuQQccI6/Qbg967o6Pj13/vJo/xAPUPj9j5/r0Khe2ddwMVwCRDpORt4MF3JibWzdXwcYwax6czLPM02k8R+c9LnTjEYKaYAlXqg90h4fP4mqcscfrqjuvq1rKgA0mVUKAC8bsuQr+/Avwfz8e+eHZ99bND1GIi7BNqrnBGlo4Be84BsfGyM3of9aaYZSc5TSMFSKaItvQl3fdb1RhyifnFkwXMgiuC73Hy4=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(39860400002)(396003)(366004)(376002)(346002)(136003)(451199018)(83380400001)(38100700002)(26005)(5660300002)(4744005)(2906002)(41300700001)(86362001)(8936002)(4326008)(6512007)(316002)(6506007)(8676002)(186003)(66476007)(66556008)(2616005)(54906003)(478600001)(6916009)(66946007)(36756003)(31696002)(6486002)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WHNtQ0VUdE9FUFp1Q2Y0V01jMWl0YXFTbC9QZ3RIWTBOMjRrUlVjYlNLejh4?=
 =?utf-8?B?WjBsMEQ2bGljV0d1WXRCa2V4b1FpcURzRTU5TDlMbGlMS2twa0U4QjEyOXFk?=
 =?utf-8?B?RU4vQVFldE9XcXQvNFY1RW1VdEJWQlVMMWlzRGRmYytkT2JzbSt0WDFNTTV6?=
 =?utf-8?B?TGZJZWFQbGJoa1ExalhHbmdST3NCWVVvYUtyVUw5cnEwOVdnM1BvT1F5MjV0?=
 =?utf-8?B?dkZ1ZmxxS05lYXRlcnZjQXVDVjMyS3JFbGxnSzhidVNjd256N2F0NjlTR2pE?=
 =?utf-8?B?N0JDUkkzNUVhaDY0dyt4OGYvSC95dGRwVUc0TzYreVRoVTBHREUwV0Z4TUdQ?=
 =?utf-8?B?WXAyUnQwcU9nMlUrZ3RIc1FWNExvbmtBNlpDbyt0cHgralNpNHFJckg5cTRT?=
 =?utf-8?B?U1pJQ25wSGFRcHJ3VTI4a3NiZ2FTeFp4cUtXU0Y5cnMzYmREa2FSbm5DQktx?=
 =?utf-8?B?R2pKdEo4Q1VTaDRXVDh3d3lVL1IyREI2VFg2Y2FlMlF6bGIvbjVKUjlHUTJL?=
 =?utf-8?B?R2kyZDVoc0hJS1pIZEhZMGVuSzl0LzhFQWVGaGZYN2sxeGhScDE2ZE14bnJy?=
 =?utf-8?B?T0w2eHNHN0krVkhIQlRrMEg2NFk1SFNWTlIrRDdKZVd2KzRkd2JLZTNIS0ow?=
 =?utf-8?B?TmJWRWhFbHZkT2tDQm9PbzZuNTkwM0JpdlVka0VROXBHVlZKYW0vWXVhWi82?=
 =?utf-8?B?VlNUOEplSHZhN1pKME9DdFhwZlRxSC9wSi90RmNncWVtMFA2RTZmbnE3bHJt?=
 =?utf-8?B?QVhTQ0JWVy92eVAyaWdOQWdUeWRNb25mMXpjVE5oQ1hRdzdOZ3VGU2dQKzBx?=
 =?utf-8?B?SXhXZHovek5WZXAxRkluWm1ocDVrWUVXSFp5Slk2T3hZNlMrSy81OW5wVVo1?=
 =?utf-8?B?RTBjTU81Sm9VdnhHT3pjM1hQaGdVZDE3UTBXVnc0TzYxdmdnbFB2K3lZL1Jh?=
 =?utf-8?B?b2lqSGJUV25zaUR1NHRHbVlnNlRDZ1Q2RkZJSlduOWtpWkZIUTd3MnRYN0hX?=
 =?utf-8?B?SzUwS3BTakg2ZHp1eDg0a29Ga1FSdWNRdGxadFVtWDNWamNyVXdyYmdqUzZy?=
 =?utf-8?B?bVVKdytOSVIxcUdqMVVjU2FEbHQvZThJREs5UUl4QlJSdUVDYzZwT1B0NHVB?=
 =?utf-8?B?Q2FBamY3NEF4TVFxM0Ivd0V6bTUybHRLVHczMmZyMHhsWGdmbmpBSmEySzRE?=
 =?utf-8?B?S2VpMDJIbXkzQnNsWFcwS0tBL3pRa2JHdTNUUStkaEhOeVd3MUcyRHJBV0hm?=
 =?utf-8?B?a0pkSGxlQnJTZzdRY3FSZGJVb3VTVXdJWWFzblEzYjdMOUFTUm9mRnJSdHR1?=
 =?utf-8?B?bFBIRDVZallWaHo2cU0zbjlkbnRSaGM1dStRRU0ya3BHc1FJeXhzUGVsMnA3?=
 =?utf-8?B?TVV3UC9uZHNpRnh0d0RPSVpUQkFjSmc4alpXR1pMTCtsLzJpcDZ0anJGeFY2?=
 =?utf-8?B?VUxlVkZGMTQ5TU91NnV0eVk0SlBFWGRuV1kwakdNb2FzTmw4S3VVenBpUVlI?=
 =?utf-8?B?RmF6THpSU2tYY3Q5NEpzS2IxQ2d2cmJmMUw4K2dhMmFkMmplZUROcjE2N2d1?=
 =?utf-8?B?TklERXl6VitBeEJtMmJseVJUak4zQW95L2Z5cUlNUkx4SXdLZUFPR2RmaU8z?=
 =?utf-8?B?WGVETGxXZWgzYmkxRWh5UXE5YW1uZ2xYcWxyTWpnM2VJbmVpUVRnMy9tZWlB?=
 =?utf-8?B?NjRDN3BHUkE0VStUbTQvSjBTSHh1S2lCa01acERxcWhiMlRQL1BWYng0N2tq?=
 =?utf-8?B?YVRrZklzanVsTU44VldyZzNYakwrbmx0d0tpOVp1VnlVTlg0cXR6MHpNL0wy?=
 =?utf-8?B?V0tIZFQ3QlNWRGNkSEY0VHlBdXFrY3B5WlJkRlJmelFqUDg2Rk5GUDBEUjlK?=
 =?utf-8?B?SUJiQTNVeUVObXlzaEl0a3cyVWhOQm96V003dlFBNnhPaUlTSFRwTjBWLzN3?=
 =?utf-8?B?dXhsc2MxMXo1REhPdUpXUFNvVjA3dWV3QVNLWTNvWlhqNHNxc080RVNrRERG?=
 =?utf-8?B?aUJmM3FtVzNXR09yL3Nva0lOTGNEdmo1Z3c0aTBJMEhGU0FBR3gvZGhQM0Ur?=
 =?utf-8?B?Mm9RekRGU3RLZWZWS1ZyYXJuYWtLdnRZYkN1bWo3alZFRnNKc2ZWRExFT0xm?=
 =?utf-8?Q?pSgpCmQPArAzWTcYnyt11oBHa?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c5a365d8-4397-47b1-6ef8-08dafee83b51
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 15:24:20.9437
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2PGgNULCcOcrVDLTvb1QpHvXdUSN2mvTpvf7bVCzuFrmE4QpHAKdVf/8TS7v/9vn8cQj05ZKebGXQpOXz1nhSQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8948

Versions of the two final patches were submitted standalone earlier
on. The series here tries to carry out a suggestion from Andrew,
which the two of us have been discussing. The previously posted
patches are then re-based on top, utilizing the new functionality.

1: spec-ctrl: add logic to issue IBPB on exit to guest
2: spec-ctrl: defer context-switch IBPB until guest entry
3: limit issuing of IBPB during context switch
4: PV: issue branch prediction barrier when switching 64-bit guest to kernel mode

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 15:25:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 15:25:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484415.750978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKheg-0008PW-3V; Wed, 25 Jan 2023 15:25:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484415.750978; Wed, 25 Jan 2023 15:25:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKheg-0008PP-0c; Wed, 25 Jan 2023 15:25:42 +0000
Received: by outflank-mailman (input) for mailman id 484415;
 Wed, 25 Jan 2023 15:25:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YOW=5W=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKhee-0008PB-Hv
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 15:25:40 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2081.outbound.protection.outlook.com [40.107.7.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8502e035-9cc4-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 16:25:38 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9347.eurprd04.prod.outlook.com (2603:10a6:10:357::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 15:25:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Wed, 25 Jan 2023
 15:25:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8502e035-9cc4-11ed-b8d1-410ff93cb8f0
Message-ID: <8ee98cc0-21d3-100a-ffcc-37cd466e7761@suse.com>
Date: Wed, 25 Jan 2023 16:25:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v3 1/4] x86/spec-ctrl: add logic to issue IBPB on exit to
 guest
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
In-Reply-To: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0147.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6bdc3063-2c0e-47b2-e53d-08dafee867f8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 15:25:35.8297
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3vASHap+7b0xjNm+kcg8lgg9XU6B3vL4Ygt9XYF68WfkO1Xc/nobvhvlcBGqCM6dUjPjYrPYPYdnt/dtBKIduA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9347

In order to be able to defer the context-switch IBPB to the last
possible point, add logic to the exit-to-guest paths to issue the
barrier there, including the "IBPB doesn't flush the RSB/RAS"
workaround. Since alternatives, for now at least, can't nest, emit a
JMP to skip past both constructs where both are needed. This may be
more efficient anyway, as the sequence of NOPs is pretty long.

LFENCEs are omitted - for HVM a VM entry is imminent, which is an event
we already deem sufficiently serializing elsewhere. For 32-bit PV we're
going through IRET, which ought to be good enough as well. While 64-bit
PV may use SYSRET, there are several further conditional branches on
that path, all of which are unprotected.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I have to admit that I'm not really certain about the placement of the
IBPB wrt the MSR_SPEC_CTRL writes. For now I've simply used "opposite of
entry".

Since we're going to run out of SCF_* bits soon and since the new flag
is meaningful only in struct cpu_info's spec_ctrl_flags, we could choose
to widen that field to 16 bits right away and then use bit 8 (or higher)
for the purpose here.
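
As a plain-C illustration of the intended "one-shot" semantics (hypothetical
names and a trivial counter in place of the WRMSR; the real logic is the
BTR/WRMSR sequence in the assembly macro added by this patch), the exit path
amounts to a combined test-and-clear:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Model of one-shot SCF_exit_ibpb consumption: the flag is tested and
 * cleared together, so at most one barrier is issued per arming.
 * issue_ibpb() stands in for wrmsr(MSR_PRED_CMD, PRED_CMD_IBPB).
 */
#define SCF_exit_ibpb_bit 6
#define SCF_exit_ibpb     (1u << SCF_exit_ibpb_bit)

static unsigned int ibpb_issued;

static void issue_ibpb(void) { ++ibpb_issued; }

static bool spec_ctrl_exit_ibpb(uint8_t *spec_ctrl_flags)
{
    if ( !(*spec_ctrl_flags & SCF_exit_ibpb) )
        return false;                      /* flag clear: skip the barrier */
    *spec_ctrl_flags &= ~SCF_exit_ibpb;    /* one-shot: clear on consumption */
    issue_ibpb();
    return true;
}
```

Arming the flag once and then exiting twice would thus issue exactly one
barrier, matching what the BTRL-based sequence achieves in assembly.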
---
v3: New.

--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -75,6 +75,12 @@ __UNLIKELY_END(nsvm_hap)
         .endm
         ALTERNATIVE "", svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
 
+        ALTERNATIVE "jmp 2f", __stringify(DO_SPEC_CTRL_EXIT_IBPB disp=(2f-1f)), \
+                    X86_FEATURE_IBPB_EXIT_HVM
+1:
+        ALTERNATIVE "", DO_OVERWRITE_RSB, X86_BUG_IBPB_NO_RET
+2:
+
         pop  %r15
         pop  %r14
         pop  %r13
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -86,7 +86,8 @@ UNLIKELY_END(realmode)
         jz .Lvmx_vmentry_restart
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo              Clob:    */
+        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo              Clob: acd */
+        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_IBPB, X86_FEATURE_IBPB_EXIT_HVM
         DO_SPEC_CTRL_COND_VERW
 
         mov  VCPU_hvm_guest_cr2(%rbx),%rax
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -39,8 +39,10 @@ XEN_CPUFEATURE(XEN_LBR,           X86_SY
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
 XEN_CPUFEATURE(XEN_IBT,           X86_SYNTH(27)) /* Xen uses CET Indirect Branch Tracking */
-XEN_CPUFEATURE(IBPB_ENTRY_PV,     X86_SYNTH(28)) /* MSR_PRED_CMD used by Xen for PV */
-XEN_CPUFEATURE(IBPB_ENTRY_HVM,    X86_SYNTH(29)) /* MSR_PRED_CMD used by Xen for HVM */
+XEN_CPUFEATURE(IBPB_ENTRY_PV,     X86_SYNTH(28)) /* MSR_PRED_CMD used by Xen when entered from PV */
+XEN_CPUFEATURE(IBPB_ENTRY_HVM,    X86_SYNTH(29)) /* MSR_PRED_CMD used by Xen when entered from HVM */
+XEN_CPUFEATURE(IBPB_EXIT_PV,      X86_SYNTH(30)) /* MSR_PRED_CMD used by Xen when exiting to PV */
+XEN_CPUFEATURE(IBPB_EXIT_HVM,     X86_SYNTH(31)) /* MSR_PRED_CMD used by Xen when exiting to HVM */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
--- a/xen/arch/x86/include/asm/current.h
+++ b/xen/arch/x86/include/asm/current.h
@@ -55,9 +55,13 @@ struct cpu_info {
 
     /* See asm/spec_ctrl_asm.h for usage. */
     unsigned int shadow_spec_ctrl;
+    /*
+     * spec_ctrl_flags can be accessed as a 32-bit entity and hence needs
+     * placing suitably.
+     */
+    uint8_t      spec_ctrl_flags;
     uint8_t      xen_spec_ctrl;
     uint8_t      last_spec_ctrl;
-    uint8_t      spec_ctrl_flags;
 
     /*
      * The following field controls copying of the L4 page table of 64-bit
--- a/xen/arch/x86/include/asm/spec_ctrl.h
+++ b/xen/arch/x86/include/asm/spec_ctrl.h
@@ -36,6 +36,8 @@
 #define SCF_verw       (1 << 3)
 #define SCF_ist_ibpb   (1 << 4)
 #define SCF_entry_ibpb (1 << 5)
+#define SCF_exit_ibpb_bit 6
+#define SCF_exit_ibpb  (1 << SCF_exit_ibpb_bit)
 
 /*
  * The IST paths (NMI/#MC) can interrupt any arbitrary context.  Some
--- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
+++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
@@ -117,6 +117,27 @@
 .L\@_done:
 .endm
 
+.macro DO_SPEC_CTRL_EXIT_IBPB disp=0
+/*
+ * Requires %rsp=regs
+ * Clobbers %rax, %rcx, %rdx
+ *
+ * Conditionally issue IBPB if SCF_exit_ibpb is active.  The macro invocation
+ * may be followed by X86_BUG_IBPB_NO_RET workaround code.  The "disp" argument
+ * is to allow invocation sites to pass in the extra amount of code which needs
+ * skipping in case no action is necessary.
+ *
+ * The flag is a "one-shot" indicator, so it is being cleared at the same time.
+ */
+    btrl    $SCF_exit_ibpb_bit, CPUINFO_spec_ctrl_flags(%rsp)
+    jnc     .L\@_skip + (\disp)
+    mov     $MSR_PRED_CMD, %ecx
+    mov     $PRED_CMD_IBPB, %eax
+    xor     %edx, %edx
+    wrmsr
+.L\@_skip:
+.endm
+
 .macro DO_OVERWRITE_RSB tmp=rax
 /*
  * Requires nothing
@@ -272,6 +293,14 @@
 #define SPEC_CTRL_EXIT_TO_PV                                            \
     ALTERNATIVE "",                                                     \
         DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_PV;              \
+    ALTERNATIVE __stringify(jmp PASTE(.Lscexitpv_done, __LINE__)),      \
+        __stringify(DO_SPEC_CTRL_EXIT_IBPB                              \
+                    disp=(PASTE(.Lscexitpv_done, __LINE__) -            \
+                          PASTE(.Lscexitpv_rsb, __LINE__))),            \
+        X86_FEATURE_IBPB_EXIT_PV;                                       \
+PASTE(.Lscexitpv_rsb, __LINE__):                                        \
+    ALTERNATIVE "", DO_OVERWRITE_RSB, X86_BUG_IBPB_NO_RET;              \
+PASTE(.Lscexitpv_done, __LINE__):                                       \
     DO_SPEC_CTRL_COND_VERW
 
 /*
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -8,6 +8,7 @@
 #include <asm/page.h>
 #include <asm/processor.h>
 #include <asm/desc.h>
+#include <xen/lib.h>
 #include <public/xen.h>
 #include <irq_vectors.h>
 
@@ -156,7 +157,7 @@ ENTRY(compat_restore_all_guest)
         mov VCPUMSR_spec_ctrl_raw(%rax), %eax
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        SPEC_CTRL_EXIT_TO_PV    /* Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
+        SPEC_CTRL_EXIT_TO_PV    /* Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: acd */
 
         RESTORE_ALL adj=8 compat=1
 .Lft0:  iretq
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -9,6 +9,7 @@
 #include <asm/asm_defns.h>
 #include <asm/page.h>
 #include <asm/processor.h>
+#include <xen/lib.h>
 #include <public/xen.h>
 #include <irq_vectors.h>
 
@@ -187,7 +188,7 @@ restore_all_guest:
         mov   %r15d, %eax
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        SPEC_CTRL_EXIT_TO_PV    /* Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
+        SPEC_CTRL_EXIT_TO_PV    /* Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: acd */
 
         RESTORE_ALL
         testw $TRAP_syscall,4(%rsp)



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 15:26:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 15:26:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484421.750988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhfE-0000Vs-Bk; Wed, 25 Jan 2023 15:26:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484421.750988; Wed, 25 Jan 2023 15:26:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhfE-0000Vl-8z; Wed, 25 Jan 2023 15:26:16 +0000
Received: by outflank-mailman (input) for mailman id 484421;
 Wed, 25 Jan 2023 15:26:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YOW=5W=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKhfD-0008PB-9z
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 15:26:15 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2086.outbound.protection.outlook.com [40.107.6.86])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99fbe8b4-9cc4-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 16:26:13 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8948.eurprd04.prod.outlook.com (2603:10a6:20b:42f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 15:26:12 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Wed, 25 Jan 2023
 15:26:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99fbe8b4-9cc4-11ed-b8d1-410ff93cb8f0
Message-ID: <23ea08db-3b64-5d1a-6743-19abb7bd6529@suse.com>
Date: Wed, 25 Jan 2023 16:26:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v3 2/4] x86/spec-ctrl: defer context-switch IBPB until guest
 entry
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
In-Reply-To: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0149.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c52ef015-6633-4f42-40ed-08dafee87d77
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 15:26:11.9056
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FNOIvU6IDt41a0vuZYiCBYG+TcCTA9loUDmfyaG8tHSiVH8ToWZ8k/vM2V7AaKPOWBA9UZxBh6Y6jo8u0ow3AQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8948

In order to avoid clobbering Xen's own predictions, defer the barrier as
much as possible. Merely mark the CPU as needing a barrier issued the
next time we're exiting to guest context.
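
A rough C model of the deferral (hypothetical helper names and simplified
types, with a counter standing in for the IBPB WRMSR): the context switch
merely arms the flag when the guest prediction context changes, and the
exit-to-guest path consumes it, so Xen's own predictions made in between
are left intact:

```c
#include <assert.h>
#include <stdint.h>

#define SCF_exit_ibpb (1u << 6)

struct cpu_info { uint8_t spec_ctrl_flags; };

static unsigned int barriers; /* stand-in for IBPB via MSR_PRED_CMD */

/*
 * Arm the deferred barrier only when switching to a different guest
 * prediction context; no WRMSR happens here any more.
 */
static void model_context_switch(struct cpu_info *info,
                                 unsigned int *last_id, unsigned int next_id)
{
    if ( *last_id != next_id )
    {
        info->spec_ctrl_flags |= SCF_exit_ibpb;
        *last_id = next_id;
    }
}

/*
 * Consume the flag on exit to guest, issuing the barrier at the last
 * possible point.
 */
static void model_exit_to_guest(struct cpu_info *info)
{
    if ( info->spec_ctrl_flags & SCF_exit_ibpb )
    {
        info->spec_ctrl_flags &= ~SCF_exit_ibpb;
        ++barriers;
    }
}
```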

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I couldn't find any sensible (central/unique) place to move the comment
which is being deleted alongside spec_ctrl_new_guest_context().
---
v3: New.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2038,7 +2038,7 @@ void context_switch(struct vcpu *prev, s
              */
             if ( *last_id != next_id )
             {
-                spec_ctrl_new_guest_context();
+                info->spec_ctrl_flags |= SCF_exit_ibpb;
                 *last_id = next_id;
             }
         }
--- a/xen/arch/x86/include/asm/spec_ctrl.h
+++ b/xen/arch/x86/include/asm/spec_ctrl.h
@@ -67,28 +67,6 @@
 void init_speculation_mitigations(void);
 void spec_ctrl_init_domain(struct domain *d);
 
-/*
- * Switch to a new guest prediction context.
- *
- * This flushes all indirect branch predictors (BTB, RSB/RAS), so guest code
- * which has previously run on this CPU can't attack subsequent guest code.
- *
- * As this flushes the RSB/RAS, it destroys the predictions of the calling
- * context.  For best performace, arrange for this to be used when we're going
- * to jump out of the current context, e.g. with reset_stack_and_jump().
- *
- * For hardware which mis-implements IBPB, fix up by flushing the RSB/RAS
- * manually.
- */
-static always_inline void spec_ctrl_new_guest_context(void)
-{
-    wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB);
-
-    /* (ab)use alternative_input() to specify clobbers. */
-    alternative_input("", "DO_OVERWRITE_RSB", X86_BUG_IBPB_NO_RET,
-                      : "rax", "rcx");
-}
-
 extern int8_t opt_ibpb_ctxt_switch;
 extern bool opt_ssbd;
 extern int8_t opt_eager_fpu;
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -854,6 +854,11 @@ static void __init ibpb_calculations(voi
      */
     if ( opt_ibpb_ctxt_switch == -1 )
         opt_ibpb_ctxt_switch = !(opt_ibpb_entry_hvm && opt_ibpb_entry_pv);
+    if ( opt_ibpb_ctxt_switch )
+    {
+        setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_PV);
+        setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_HVM);
+    }
 }
 
 /* Calculate whether this CPU is vulnerable to L1TF. */



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 15:26:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 15:26:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484425.750998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhfi-000134-Or; Wed, 25 Jan 2023 15:26:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484425.750998; Wed, 25 Jan 2023 15:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhfi-00012v-KL; Wed, 25 Jan 2023 15:26:46 +0000
Received: by outflank-mailman (input) for mailman id 484425;
 Wed, 25 Jan 2023 15:26:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YOW=5W=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKhfg-0008PB-Qd
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 15:26:44 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2085.outbound.protection.outlook.com [40.107.14.85])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ab9f44a7-9cc4-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 16:26:42 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8948.eurprd04.prod.outlook.com (2603:10a6:20b:42f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 15:26:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Wed, 25 Jan 2023
 15:26:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab9f44a7-9cc4-11ed-b8d1-410ff93cb8f0
Message-ID: <c39faba2-1ab6-71da-f748-1545aac8290b@suse.com>
Date: Wed, 25 Jan 2023 16:26:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v3 3/4] x86: limit issuing of IBPB during context switch
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
In-Reply-To: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0049.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8948:EE_
X-MS-Office365-Filtering-Correlation-Id: e7be8421-044f-4350-b2f4-08dafee88f0e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	h/nj97cr8vLxJVhcB/2C3S68VXOlpq1DL0vMgHYb/eQsjD17sxVwWJXDAt5hMe/rTOSE++pRDGVQmLW2GJVBfDUsDYSXwQ+wWMJ4rp4t/O6IwE0lRX7m/D9RLcanM1sQrr+OoJ9k/DtHiIhW2bnFYTWs1/Y8Xx+QFHu6vPc5XGbhvbsmAAZq0C9RiZoSBNbaqsG3s00L6v1g5v/TBJNsNWKHvF6hPn5ghP+LLbYVeSVLKLpj3umeea841J9waEwkaljEo+G43KCV6iAit/2rLSZBDE9JfE5xA/uR2G4KxEgL7qKIWok+Utwu6coupun4HB+CJIoKMG/DfyK9KafsMHdXAutESXGv7ecBK0x8sdY1TbLp1lf6znCKsPM9Q906wpXkBxuso/g+B1NmrFy400G50u1XPZtxR+S4Il/llYtP+hbAjzcMMr/p7O9p2vB1lVg5gSstCXUCh9zTCJOKTbH+vpqc03hXF8t5dlhIgqac6jDWSGhwTdM1lCiZLaJva9ISd2OHMuIHPpUGqMz379P7AfDVB4MoOwQr5rB0EMNiqNBlaDoA8BV1Dp9DzLrc7Pu1kH2s1VVKgPkOSOhk+DeEoxQDLV2Lx9BPVkyMd7/U7NR3Kj5a69N3rKLrEsymeEQrWI/FvPmk0Gp86Qf3Grm7h7WtV0GNOlGGUIwlbTk2EXRu0qcZIfIS49U6Gv5f2S9n7LDLN+ylkw3k2uD4bQHAEcQkYnfv+ZmC8t+zTFA=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(39860400002)(396003)(366004)(376002)(346002)(136003)(451199018)(38100700002)(26005)(5660300002)(4744005)(2906002)(41300700001)(86362001)(8936002)(4326008)(6512007)(316002)(6506007)(8676002)(186003)(66476007)(66556008)(2616005)(54906003)(478600001)(6916009)(66946007)(36756003)(31696002)(6486002)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?S2ZFMjlVb283WUxwOGkyY2hKZkIvcDlaT1FDckJTTDhsRWxBQWJ5dGFRNTh4?=
 =?utf-8?B?SXhJekRaUXJjTnFYRVJScEpwTHhXcUlqbmJyOEs2dnhnbkhwTmZJZzVXNkFD?=
 =?utf-8?B?K1FYMUVTeElYcGRrWm4zWnFXQ2k5TGdzcWptVkMwUUdoc1daR2d2K2twZU14?=
 =?utf-8?B?M2xVOHZnYm5la2NlMW84QkVRNlNzMHArd0ordWd6V3FxejN0d0JneHgrdzBt?=
 =?utf-8?B?SFNuemdWcjlqSmhhb3JiMCtzaU9QZWJNNm8xaUhQaEtBVG14UlhvODFnTUpO?=
 =?utf-8?B?alRINTNyTWtwTG1aUzFNOXc5aGRLeWtEYjJtTmwwZUVZU3k4cVlBKzg3Z21x?=
 =?utf-8?B?aC9ZUURxV3VFVDF3VHFxUGhlMmFyeFFjYWordk5keUh3aUJiSWVOM0lraG1U?=
 =?utf-8?B?ZzFDRTZkOWVnS1RQZGhDdTlYSnJsWm44WStJbm82RGM2OWllTE51NUd5Y01t?=
 =?utf-8?B?VGtTbmlpMFVUNWlTZGZnclJsUFhMdHJFc25MTGlDYUhUNFU1dExTMEpMcllF?=
 =?utf-8?B?Y05oakl0VHRoUlVGdjJteGRkdG8wRW9uS0QwejdITEJoZzV0U0pTZVloU2dE?=
 =?utf-8?B?cEV1eUx1dWJvZHgyZFRJMGpPb3NDZWZPSHl6amFIbDVRYUFPeGEzNEpSaVNs?=
 =?utf-8?B?dUF4SGlNQVMram5HZ0V5bHRCMTF0M1NXREdGT2M1Y2YvRGxQMjU3TVJ2Yk1j?=
 =?utf-8?B?VncwdEYyK1hyTTVPcE9IU2pxa0J6bkJKVkFCcWNNRGloMU4wblNsV05rVFlP?=
 =?utf-8?B?c1JObHZQY2l4bnRYdGszdlpyNDhuTGQ4R0RQZDhDdnJnTll5ZUhRRnJoOEJH?=
 =?utf-8?B?dnI5VUZLQlZSYXR0VHptVFp2QVhuazFtOHRNZXhKaHpGRDdGeUtRb2x4bjBa?=
 =?utf-8?B?bTdON21YU1RYYXRXZkZnZWx6azMzcmhXV0U3QjVaOEE1R29YWCtHTlAva3hW?=
 =?utf-8?B?UnRYTnBPd3owSU5tYVVoSE4rSTJWOC82SnJENVZzN0xOdGhqKyt5N2o5dFYr?=
 =?utf-8?B?cDVDb05RYzZwQm42b2xQVTdpb1JWWnJIYjRKSHNUOGJ1YUFKV084WVVId0Vk?=
 =?utf-8?B?dFhMSlg1QmprTVRGQTU2clRYenNpSllrSHdUTEc3aEgrelpBWlRpdDdwcUJq?=
 =?utf-8?B?Q2d6V2o0aGxYSERURW83ZDdlUVUyWTdLZGo5akZLM0dHWXh1ZlI2WTQyVzU3?=
 =?utf-8?B?ajJkdm1QL0dRbE90STMxRFoyTHBSU2w3c2ZnYTVYSUljWnplVE5sY3JTbDZv?=
 =?utf-8?B?cjBqU1poS0p3MFZyZGplK2RtZGYxQzh5dWNteXkvYWZLNnVXankzK3hwOFUy?=
 =?utf-8?B?NFFoVk92dXB5S0tiT3VPamx2bUZhY1ZPaUtMR0pKcDFCMWFPQng2eERFMzBX?=
 =?utf-8?B?bTFFMzR4c3FYUXZGNkg5MzhzRTZERENoUDhpZnIvM2I2ZXYrT0dLWlBMMnFV?=
 =?utf-8?B?dUxEblVtMG1MNGZ5VWVrNGxuYy9NQk5FMlVoR0EzVFFmWlVremF3MVBaV0M3?=
 =?utf-8?B?dHNYTzVlTjBvUU1VZ0VITERmQ2oyNVVMcE5WMnB2NmlSMTVKUDNmZlVGZ2xM?=
 =?utf-8?B?UjQ1VCs1NS9XVHBCai9vMHp4UHZrS0UxN3VuZW84ZUNrUVhhV0Y1amxHN0ps?=
 =?utf-8?B?T2YyY1dKTVJ5SkNYYWsyT0RaTXpNTEI1eHNnNEVtajc5TXlueXNnaENTckx1?=
 =?utf-8?B?cnQ5dWdzQjdGRk02Z0ZqTE9CRG13V1VtaW1xMTlkR2lKOG8vcVFKRVc4am55?=
 =?utf-8?B?SmUweEpYeUM1L3JwMkZ2UjdVMlNSWHJvclh1NGpIUy9HSVhUOS94eS9pN3FV?=
 =?utf-8?B?NW56KzlZd3QyY0J0bWNWUFhSS2NBNWdpMStmY29ZSGtFZVRjZ0JKWHZDaC96?=
 =?utf-8?B?bG10ZzZ2UVl3T21JZTFXQTlwVFdGdDNCWjE1ckdaZEUrNlZGNmFocU1HWW12?=
 =?utf-8?B?UUd0UFExSlRsaklmK05Za2t4eStwSkN3OU9CWnUrbXRDUFV1cGhpckhyTzZn?=
 =?utf-8?B?UytJbnhhRDZ2MFdCWnY3eGpxbXZwOUp1S0VrQ08rcmd3QnhSYnpFT3pPVG5H?=
 =?utf-8?B?bFJsWmtqaGk0a0x0VEJRRnB6RXcvWTFYNS9QMmEvbTY2NUVEYWl0Q0JuRXdv?=
 =?utf-8?Q?iNND5m0FLichtQdvjmg/B6jIU?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e7be8421-044f-4350-b2f4-08dafee88f0e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 15:26:41.3725
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RIYie6s0JoJqtRru9GQK+4juuWunlc5kWx4e++VJaK11ykHnSEV9Sdor9Gz2duKJ+/lzJ2VMwxU2ZVoUcnWKUA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8948

When the outgoing vCPU had IBPB issued upon entering Xen, there's no
need for a second barrier during the context switch.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Fold into series.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2015,7 +2015,8 @@ void context_switch(struct vcpu *prev, s
 
         ctxt_switch_levelling(next);
 
-        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) )
+        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) &&
+             !(prevd->arch.spec_ctrl_flags & SCF_entry_ibpb) )
         {
             static DEFINE_PER_CPU(unsigned int, last);
             unsigned int *last_id = &this_cpu(last);



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 15:27:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 15:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484432.751007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhg9-0001ca-VF; Wed, 25 Jan 2023 15:27:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484432.751007; Wed, 25 Jan 2023 15:27:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhg9-0001cT-ST; Wed, 25 Jan 2023 15:27:13 +0000
Received: by outflank-mailman (input) for mailman id 484432;
 Wed, 25 Jan 2023 15:27:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8YOW=5W=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKhg9-0008PB-BK
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 15:27:13 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2045.outbound.protection.outlook.com [40.107.8.45])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bc73d8e7-9cc4-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 16:27:11 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8295.eurprd04.prod.outlook.com (2603:10a6:20b:3b0::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Wed, 25 Jan
 2023 15:27:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6002.033; Wed, 25 Jan 2023
 15:27:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc73d8e7-9cc4-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EmrsoaMcuaqEvysV4AVdeOJZhpbqg4/Wm77DCxX8k7qFrUudr3j0eWi8H7H6nodf+kvODhQsoF4JlM9U4ZkkV6kqw7iKraffDW+xBzk56jzF5z1JXPkzLUK29aaavnyJZ2JKimhq98e2KGhwz3RcHXDDDqJOqAytpFp2sVzF2x8tp0cpSWiWX+dkoolUIJFpuuoGRPzzcHfC5U1X+3ig0zUjtrnXMevGy2zsGIoF1o9MVfWJHFTg3hr9RXVYGJRGk9NAtZ18pSPS1u7JvqxMwOy0dp9+N1O5agWgJIBFEw7V79xVbsLjC4JMfjq2cKKtCo1GRUPiGVk8kh/w+S/iFA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=uPkRfwew94fNX+zunfbWzw+IG6xaC8yyVufkSJVeAIQ=;
 b=V5FJQnfdXphepJF44Tn1IE7c8gG5SWAnNgWCqfha7qxuTffv2Jf33tbZ0aFqiSjwVbPRonNpBQeaIqckfqL9eb7u/ElCojlm4eoxSvSJEk6kCgDYAm/LDAI2+hIBL5JhAjqLWPmHJaQwUnzjUzb0IUEs24zMeDrylZsSCiBgqtKoc1gHjzrPohZYli0FrWz8ET5LnIZfXStB/5TXFim3fD6Wx/Mt9e2lqtLvi6dDUd+irNhuo6W8zbro8UNU4msBrAWUU3knZOveqdl3tyvnNUzPaHSiVMRxEuOPbVrvp4uBkIgkmCOCEX9szlCyfCUqlpvuLITQQbMdjklDpfDvgg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uPkRfwew94fNX+zunfbWzw+IG6xaC8yyVufkSJVeAIQ=;
 b=D5FmR54IefBLJmFMWH4qi+4qs+2q4u03PaRumYGktJMimE8YIEr8QNLVkNuY8z13MdLoj84CNlWe0MuRQuJa370x/bPNF55u4kfb5w4e18WooTMz9ku+pOanxfLVZr2UUHGqnTI6Uw2796PI6tsMCerDlzlePq5LDr/u7xExkTrYrInq0p0qyt4nJgc6ZF5A+VvdPqc1dxblc+49SRy1H+mOUaoV3cPfCQaM7gA3bgO9HaXIcmYgTpGWLG9drixMC81YprLfB4P5ssgD8ULPkL7mK8DshdAoigjmROC+337snxzzykRrHNeZRehfvYEXAsOK9v1Gon53BMq8/CyeoQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <33cf5bff-4d74-c5f5-0c2b-d773d10f2fb2@suse.com>
Date: Wed, 25 Jan 2023 16:27:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: [PATCH v3 4/4] x86/PV: issue branch prediction barrier when switching
 64-bit guest to kernel mode
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
In-Reply-To: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0129.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8295:EE_
X-MS-Office365-Filtering-Correlation-Id: 1287c27f-06d8-4f7c-a238-08dafee89ef6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	G0Xo+7iaaXzwQMF1BPJIZBnW9erBhL/AHcPCtqSTQ3RfFdUZTG52YuFX1WE9JqtMrrsk2rT31GHzXmnA0HYktdzZIYh3JP3sdQwyc8LMUiosx6hMG8uepWUJfjjFjn1lleQ4jHMHdsLKPl5rht4nFL8QVgBbjRukJVowl1XE8Z54gzAau2GawC61W0uF7JQvDDFjc6Raf53YvmdTYOGIYd7wieWRyJAFGcRd9afPZS1cnARZ1zRMu9xWxv/Cc18kxf2Q+9Zj9wDucFpaHCeMb/4Jh4GjXUtQyOTyROIscPCTGKktcnojt4+z1akPnWAlDxAM9ndT+e9LM0bFUxPYDGfJTlQHgnX5NVEdHjPU8IbgV6nniO5iq5ArsWFI9DqPWa28r7aS4GISogIBUy/BVGbngJM1vLr1OJduJ6bImOAz8zmZQC1KzDujhtubOKGl71wxYaV6m2R+GTEEfjY9IO+nxXBZDNzpxkFg70KlxEDVyVtZUigwvHFsZA65XKCULD5DGs0iLsJD7xTmMmCqpiy9vMWqhk20HLwp/poc33T182lnev2LVQijvAu/XIRMU5E8S0TPU45FLOpodwRSRgEim9u+b15bKNijs+fu3XuW3IMHvLgwL+GSNcR2iJFzjRdTGwxS87F76G5MRzwHgsddT4AoOcW5JumTk3w4OzgQL1TeD6XFAw9i33ZIXiJAmsxQYRjBFAU07ZC71e0VAhq0AJ0HxjXUXdB91UEthV0=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(346002)(366004)(39860400002)(376002)(136003)(396003)(451199018)(36756003)(86362001)(2906002)(4326008)(2616005)(83380400001)(8936002)(5660300002)(66476007)(31696002)(316002)(478600001)(6486002)(6512007)(31686004)(186003)(8676002)(6916009)(26005)(6506007)(38100700002)(66946007)(54906003)(66556008)(41300700001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RUhpQ1NEZnhQT2N0enFSUEFUTVRxcGROZmFUUmdRN094ZXNORzBZWWpHdFpJ?=
 =?utf-8?B?WFJDWnVtS2RaNG1uK3VzMk9RUHdSR1RnYUJxZXBXUUNNMGdoS3hvTFl4WEFj?=
 =?utf-8?B?ZXVRQlAwOFF6WlRvRjZnaE9USXFMUmpIdUNTelJDRU80dDFVUEVuUGViK3pE?=
 =?utf-8?B?eEZQTGVLcHpXbWc4d295V2JrK1ZBYkpHOHhUL0JpZC9abGU1dmZpRVU2SzF3?=
 =?utf-8?B?MEJLL1ZJQlg4OXBxTzBScm9oanFlN2hDbGtGV1A0MVlFTEgxK25oVkEzeWs0?=
 =?utf-8?B?eVdyeXEzMUNURVNDRGtMc1Rkc1RSQkFxTG9KTTZ5UXFlMU5UeWZ5UmJmUm9l?=
 =?utf-8?B?N3BsakVGRm03SkdRZG1FNzdDUFpNZ2JqUWd5RnRVdWJuZjZqL0NHZmE4YlQz?=
 =?utf-8?B?NkYrUGZvMUJBdk1NZHd5dUxPQ3RVd2MwaDVKRXR4dWNqVFRBQlVQOWY4TXk4?=
 =?utf-8?B?TFFlM0E1VkFFeFN5OFFPZ3Q2Rjh5aWI4N0FOYTA3Rm9jUm53OVNyZWo4OWRk?=
 =?utf-8?B?SEVMUmZLckFGWUxTeGUraVl0SFA2Q3pWV2I1dit2Kzh1cFd0bjBWNUZYYUNR?=
 =?utf-8?B?QVN5Z0JhQ0t6Mlh2ekw5T3RoM2hIWFQxRFhtWm9SSXllVmpOZmlZMWhmc09K?=
 =?utf-8?B?WG5HWnhhMUlFT0NlS1NVdE03TjZpVjZUQ203RkVaOXhnTUUxRGVLcnVWd0VF?=
 =?utf-8?B?Um4wbENtMVlFdXZEQ29mUTh2Rk96d00yRWNxRVZqTGFtTktoK2JPZDVManZZ?=
 =?utf-8?B?UE95ZEN5b3UzZisxdVJGYWd6WnNOQWc5MjdxbS84YkcyalNkMCsvRy8yeVZa?=
 =?utf-8?B?aHV3NFBBdExmSVZOYjQ4SW5ZRjFKeGJzSnQrZjBMVGF6QkJKMU00TDQ5YWtw?=
 =?utf-8?B?Q2ozbkJsVzdFdnhiWTQ0YTdUR05lcytIeld2cSs1Y3BmKzJOK002TGMwZmVs?=
 =?utf-8?B?M2djU2YyQXRsMEsxQTF4ZkFaRjNJLzFETmlwOFNZaXlWVTEzaFdxOVJoYjJa?=
 =?utf-8?B?bzU1NTU4VmZva3owUUVTS0pBckFjbGxVZHVPS0FlckoyNEVwUGNlNHBqNXZR?=
 =?utf-8?B?YnVqMTYxMzk4NklpZnFmMmk0TGJLTE4wNVZHeG9BZkdSUTUxZG53QkZLVVp4?=
 =?utf-8?B?c1JKSFdrS2pYV2c0dmNvMVlObHhFR0ZPbk5YUG8vU01YY2o4c2FXM04vdzJR?=
 =?utf-8?B?SExRWXhjdEFCN0FHa1BqaVM1ek8yMFNneWZXMlNhTmYxaUdiOFFDNmxGOVUx?=
 =?utf-8?B?ZWMwSG1pQ1hUOVdKUnJBd1lXVGRucHk5UmZnTmZBYzZpL1J5SVp2UjhFQnlI?=
 =?utf-8?B?b1hscnlqcWQwV1JINUMwenNtTFo0cHl6M1U2MzZQeE1iMTIvWTdRck9UTmZS?=
 =?utf-8?B?ZXhOV2NEWGRJY05QSjBZZUVTZUJlUEZsVmh4WkM4OS9uUUFBQmdta3dLcFRL?=
 =?utf-8?B?VG5FVHNVSjV2R3Aza0pUY0VpTm9QNEY4VDNrRCtTd1ZkWXY4cVhvbGxLeXNL?=
 =?utf-8?B?dldsSVhjQzZDYzhYM0pGWUpvaUVFOUFNeWxPbG5qZ1hGeUxDTkN1aWcwOE5x?=
 =?utf-8?B?b0FjYzhXV3lvYzIvalM0cjNzdUczT1JSREgzZVM5Z1ZtOUNlbzJ6ay9UZFNl?=
 =?utf-8?B?Yks5Smgyd1NUN2NtdFFYNTdIY1pLbFRyL25uWEdrSXNXdmdEZjVYaFNIWi9S?=
 =?utf-8?B?R1lvOHNwdElJUEIyT29YMm5jRjFjNVgwT2RrWEVTMjZuY2k5c3pQTDM5bUVU?=
 =?utf-8?B?MDN3eXBzenI0VDhjcmlMNkpndFRDZmR3Tkxwd1dSa1lXaG83NEI4V3V0dVhM?=
 =?utf-8?B?c3RiZUpVVjFpK0E5aCtBSmdIWEF6ZnhhWUpoQ2UxMnhpR1dFTG5Wci9BdU81?=
 =?utf-8?B?VDBZWVVYZkZFaFFSVlRjYThCYzdQTzM4UHpyQ1FveFFPVnl3SHNSeWNKdUdl?=
 =?utf-8?B?K3psdmdKYWMvL1JVenFCSnlhd2ZHSGhKQ0w5eXNiNkZTS0JpbTVQaWdjVDc2?=
 =?utf-8?B?c3RlQ3BDUTVORnRUVWFNbFBLTTFCcjM0azh0c1JTUU13ZlUrYm96U2dFY0NG?=
 =?utf-8?B?TkowZUUwWHpvVTdjYVZVa1RiYVoyUmxlMEhacUpaOW9IRFJWSCsyT0hxclc3?=
 =?utf-8?Q?TOtHAnN+D421NM3pGjigj6Rtt?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1287c27f-06d8-4f7c-a238-08dafee89ef6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jan 2023 15:27:08.0740
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: U3erKZWRXGrqZ9ZEY9zj75An4uY/+PkWG0p0l4H47jHyrusBWrmdK2/QSKupPPUWgai2w/oxYu9QF+Sqrb3mcw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8295

Since both kernel and user mode run in ring 3, they run in the same
"predictor mode". While the kernel could take care of this itself, doing
so would be yet another item distinguishing PV from native. Additionally,
we're in a much better position to issue the barrier command, and we can
save a #GP (for privileged instruction emulation) this way.

To allow recovering the lost performance, introduce a new VM assist
allowing the guest kernel to suppress this barrier. Make availability of
the assist dependent upon the command line control, such that kernels
have a way to know whether their request actually took effect.

Note that because of its use in PV64_VM_ASSIST_MASK, the declaration of
opt_ibpb_mode_switch can't live in asm/spec_ctrl.h.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Is the placement of the clearing of opt_ibpb_ctxt_switch correct in
parse_spec_ctrl()? Shouldn't it live ahead of the "disable_common"
label, as being about guest protection, not Xen's?

Adding setting of the variable to the "pv" sub-case in parse_spec_ctrl()
didn't seem quite right to me, considering that we default it to the
opposite of opt_ibpb_entry_pv.
---
v3: Leverage exit-IBPB. Introduce separate command line control.
v2: Leverage entry-IBPB. Add VM assist. Re-base.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2315,8 +2315,8 @@ By default SSBD will be mitigated at run
 ### spec-ctrl (x86)
 > `= List of [ <bool>, xen=<bool>, {pv,hvm}=<bool>,
 >              {msr-sc,rsb,md-clear,ibpb-entry}=<bool>|{pv,hvm}=<bool>,
->              bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ssbd,psfd,
->              eager-fpu,l1d-flush,branch-harden,srb-lock,
+>              bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ibpb-mode-switch,
+>              ssbd,psfd,eager-fpu,l1d-flush,branch-harden,srb-lock,
 >              unpriv-mmio}=<bool> ]`
 
 Controls for speculative execution sidechannel mitigations.  By default, Xen
@@ -2398,7 +2398,10 @@ default.
 
 On hardware supporting IBPB (Indirect Branch Prediction Barrier), the `ibpb=`
 option can be used to force (the default) or prevent Xen from issuing branch
-prediction barriers on vcpu context switches.
+prediction barriers on vcpu context switches.  On such hardware the
+`ibpb-mode-switch` option can be used to control whether, by default, Xen
+would issue branch prediction barriers when 64-bit PV guests switch from
+user to kernel mode.  If enabled, guest kernels can opt out of this behavior.
 
 On all hardware, the `eager-fpu=` option can be used to force or prevent Xen
 from using fully eager FPU context switches.  This is currently implemented as
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -742,6 +742,8 @@ static inline void pv_inject_sw_interrup
     pv_inject_event(&event);
 }
 
+extern int8_t opt_ibpb_mode_switch;
+
 #define PV32_VM_ASSIST_MASK ((1UL << VMASST_TYPE_4gb_segments)        | \
                              (1UL << VMASST_TYPE_4gb_segments_notify) | \
                              (1UL << VMASST_TYPE_writable_pagetables) | \
@@ -753,7 +755,9 @@ static inline void pv_inject_sw_interrup
  * but we can't make such requests fail all of the sudden.
  */
 #define PV64_VM_ASSIST_MASK (PV32_VM_ASSIST_MASK                      | \
-                             (1UL << VMASST_TYPE_m2p_strict))
+                             (1UL << VMASST_TYPE_m2p_strict)          | \
+                             ((opt_ibpb_mode_switch + 0UL) <<           \
+                              VMASST_TYPE_mode_switch_no_ibpb))
 #define HVM_VM_ASSIST_MASK  (1UL << VMASST_TYPE_runstate_update_flag)
 
 #define arch_vm_assist_valid_mask(d) \
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -455,6 +455,7 @@ static void _toggle_guest_pt(struct vcpu
 void toggle_guest_mode(struct vcpu *v)
 {
     const struct domain *d = v->domain;
+    struct cpu_info *cpu_info = get_cpu_info();
     unsigned long gs_base;
 
     ASSERT(!is_pv_32bit_vcpu(v));
@@ -467,15 +468,21 @@ void toggle_guest_mode(struct vcpu *v)
     if ( v->arch.flags & TF_kernel_mode )
         v->arch.pv.gs_base_kernel = gs_base;
     else
+    {
         v->arch.pv.gs_base_user = gs_base;
+
+        if ( opt_ibpb_mode_switch &&
+             !(d->arch.spec_ctrl_flags & SCF_entry_ibpb) &&
+             !VM_ASSIST(d, mode_switch_no_ibpb) )
+            cpu_info->spec_ctrl_flags |= SCF_exit_ibpb;
+    }
+
     asm volatile ( "swapgs" );
 
     _toggle_guest_pt(v);
 
     if ( d->arch.pv.xpti )
     {
-        struct cpu_info *cpu_info = get_cpu_info();
-
         cpu_info->root_pgt_changed = true;
         cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
                            (d->arch.pv.pcid ? get_pcid_bits(v, true) : 0);
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -60,6 +60,7 @@ bool __ro_after_init opt_ssbd;
 int8_t __initdata opt_psfd = -1;
 
 int8_t __ro_after_init opt_ibpb_ctxt_switch = -1;
+int8_t __ro_after_init opt_ibpb_mode_switch = -1;
 int8_t __read_mostly opt_eager_fpu = -1;
 int8_t __read_mostly opt_l1d_flush = -1;
 static bool __initdata opt_branch_harden = true;
@@ -111,6 +112,8 @@ static int __init cf_check parse_spec_ct
             if ( opt_pv_l1tf_domu < 0 )
                 opt_pv_l1tf_domu = 0;
 
+            opt_ibpb_mode_switch = 0;
+
             if ( opt_tsx == -1 )
                 opt_tsx = -3;
 
@@ -271,6 +274,8 @@ static int __init cf_check parse_spec_ct
         /* Misc settings. */
         else if ( (val = parse_boolean("ibpb", s, ss)) >= 0 )
             opt_ibpb_ctxt_switch = val;
+        else if ( (val = parse_boolean("ibpb-mode-switch", s, ss)) >= 0 )
+            opt_ibpb_mode_switch = val;
         else if ( (val = parse_boolean("eager-fpu", s, ss)) >= 0 )
             opt_eager_fpu = val;
         else if ( (val = parse_boolean("l1d-flush", s, ss)) >= 0 )
@@ -527,7 +532,7 @@ static void __init print_details(enum in
 
 #endif
 #ifdef CONFIG_PV
-    printk("  Support for PV VMs:%s%s%s%s%s%s\n",
+    printk("  Support for PV VMs:%s%s%s%s%s%s%s\n",
            (boot_cpu_has(X86_FEATURE_SC_MSR_PV) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_PV) ||
             boot_cpu_has(X86_FEATURE_IBPB_ENTRY_PV) ||
@@ -536,7 +541,8 @@ static void __init print_details(enum in
            boot_cpu_has(X86_FEATURE_SC_RSB_PV)       ? " RSB"           : "",
            opt_eager_fpu                             ? " EAGER_FPU"     : "",
            opt_md_clear_pv                           ? " MD_CLEAR"      : "",
-           boot_cpu_has(X86_FEATURE_IBPB_ENTRY_PV)   ? " IBPB-entry"    : "");
+           boot_cpu_has(X86_FEATURE_IBPB_ENTRY_PV)   ? " IBPB-entry"    : "",
+           opt_ibpb_mode_switch                      ? " IBPB-mode-switch" : "");
 
     printk("  XPTI (64-bit PV only): Dom0 %s, DomU %s (with%s PCID)\n",
            opt_xpti_hwdom ? "enabled" : "disabled",
@@ -804,7 +810,8 @@ static void __init ibpb_calculations(voi
     /* Check we have hardware IBPB support before using it... */
     if ( !boot_cpu_has(X86_FEATURE_IBRSB) && !boot_cpu_has(X86_FEATURE_IBPB) )
     {
-        opt_ibpb_entry_hvm = opt_ibpb_entry_pv = opt_ibpb_ctxt_switch = 0;
+        opt_ibpb_entry_hvm = opt_ibpb_entry_pv = 0;
+        opt_ibpb_mode_switch = opt_ibpb_ctxt_switch = 0;
         opt_ibpb_entry_dom0 = false;
         return;
     }
@@ -859,6 +866,18 @@ static void __init ibpb_calculations(voi
         setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_PV);
         setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_HVM);
     }
+
+#ifdef CONFIG_PV
+    /*
+     * If we're using IBPB-on-entry to protect against PV guests, then
+     * there's no need to also issue IBPB on a guest user->kernel switch.
+     */
+    if ( opt_ibpb_mode_switch == -1 )
+        opt_ibpb_mode_switch = !opt_ibpb_entry_pv ||
+                               (!opt_ibpb_entry_dom0 && !opt_dom0_pvh);
+    if ( opt_ibpb_mode_switch )
+        setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_PV);
+#endif
 }
 
 /* Calculate whether this CPU is vulnerable to L1TF. */
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -554,6 +554,16 @@ DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
  */
 #define VMASST_TYPE_m2p_strict           32
 
+/*
+ * x86-64 guests: Suppress IBPB on guest-user to guest-kernel mode switch.
+ *
+ * By default (on affected and capable hardware) as a safety measure Xen,
+ * to cover for the fact that guest-kernel and guest-user modes are both
+ * running in ring 3 (and hence share prediction context), would issue a
+ * barrier for user->kernel mode switches of PV guests.
+ */
+#define VMASST_TYPE_mode_switch_no_ibpb  33
+
 #if __XEN_INTERFACE_VERSION__ < 0x00040600
 #define MAX_VMASST_TYPE                  3
 #endif



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 15:35:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 15:35:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484443.751018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhoF-0003Yt-Os; Wed, 25 Jan 2023 15:35:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484443.751018; Wed, 25 Jan 2023 15:35:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhoF-0003Ym-ME; Wed, 25 Jan 2023 15:35:35 +0000
Received: by outflank-mailman (input) for mailman id 484443;
 Wed, 25 Jan 2023 15:35:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t/5Q=5W=tklengyel.com=bounce+e181d6.cd840-xen-devel=lists.xenproject.org@srs-se1.protection.inumbo.net>)
 id 1pKhoE-0003Yg-SB
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 15:35:35 +0000
Received: from m228-12.mailgun.net (m228-12.mailgun.net [159.135.228.12])
 by se1-gles-flk1.inumbo.com (Halon) with UTF8SMTPS
 id e6a0976b-9cc5-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 16:35:31 +0100 (CET)
Received: from mail-yb1-f170.google.com (mail-yb1-f170.google.com
 [209.85.219.170]) by
 077977e9653f with SMTP id 63d14c4243cbeb0aa6d64684 (version=TLS1.3,
 cipher=TLS_AES_128_GCM_SHA256); Wed, 25 Jan 2023 15:35:30 GMT
Received: by mail-yb1-f170.google.com with SMTP id h5so6548002ybj.8
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 07:35:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: e6a0976b-9cc5-11ed-b8d1-410ff93cb8f0
DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=tklengyel.com;
 q=dns/txt; s=mailo; t=1674660930; x=1674668130; h=Content-Type: Cc: To: To:
 Subject: Subject: Message-ID: Date: From: From: In-Reply-To: References:
 MIME-Version: Sender: Sender;
 bh=X7l6HbJqqvRylbgNUlZYlc0NjtUAG9ARY9qVVyuvW5M=;
 b=KNfpQHR/hEU21mCwxZzyK8dhYyI8pzM4WBmB7OurY0rYKolqpruV0XVaK5+HbKZaoAPQJhmVVTk7DKTYF4Xp8/adxSZ9S6PHLzznUOV+STHWuqhnI3EJqVlwPFLGX4m1SdktOqQQuT+HpVayHVlE8R0aX2re6KT1kHgOBoOCK7tc3YbJyNHc+UqjBqKfDurQgrs2/ZWnF70PN5mWnmRn4sTvh1qPQJ5Qjg4ZricF0ysaIJXx+136RmVUxcLZsjKUUPD64YAp6AE/4cOAMId7+OBuvTn+R0TxIWhasr3J6RNe85PiXATREjhiOHhdeK83+V59xQ6L/9CcIxLU1a73wQ==
X-Mailgun-Sending-Ip: 159.135.228.12
X-Mailgun-Sid: WyIyYTNmOCIsInhlbi1kZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZyIsImNkODQwIl0=
Sender: tamas@tklengyel.com
X-Gm-Message-State: AFqh2kr2ALba2enni5p0IaG8HYQ1/4GWxyAWMTl7zQPcl4oWm2lr00bZ
	QAsoGuHDbKnI0O5wKg0nyRYWthHN0PrxHkEbtWc=
X-Google-Smtp-Source: AMrXdXs2nqdRH1H5xL60afJ7t5p/goADoygaHGKB46sYpG4tc7yVX9LT2YAOHDUb4cXAu673rjXC5IzmzXwwJ6s8F94=
X-Received: by 2002:a25:4001:0:b0:803:47:aa4 with SMTP id n1-20020a254001000000b0080300470aa4mr2354070yba.183.1674660929558;
 Wed, 25 Jan 2023 07:35:29 -0800 (PST)
MIME-Version: 1.0
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
 <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com> <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
 <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com> <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
 <6099e6fb-0a3e-c6da-2766-d61c2c3d1e96@suse.com>
In-Reply-To: <6099e6fb-0a3e-c6da-2766-d61c2c3d1e96@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 25 Jan 2023 10:34:53 -0500
X-Gmail-Original-Message-ID: <CABfawh=1XUWbeRJJZQsYVLyZX-Ez8=D2YYCgBYvDGQemHeJkzA@mail.gmail.com>
Message-ID: <CABfawh=1XUWbeRJJZQsYVLyZX-Ez8=D2YYCgBYvDGQemHeJkzA@mail.gmail.com>
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest areas
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: multipart/alternative; boundary="000000000000d3623505f31861c7"

--000000000000d3623505f31861c7
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 24, 2023 at 6:19 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 23.01.2023 19:32, Tamas K Lengyel wrote:
> > On Mon, Jan 23, 2023 at 11:24 AM Jan Beulich <jbeulich@suse.com> wrote:
> >> On 23.01.2023 17:09, Tamas K Lengyel wrote:
> >>> On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>> --- a/xen/arch/x86/mm/mem_sharing.c
> >>>> +++ b/xen/arch/x86/mm/mem_sharing.c
> >>>> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
> >>>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
> >>>>  }
> >>>>
> >>>> +static int copy_guest_area(struct guest_area *cd_area,
> >>>> +                           const struct guest_area *d_area,
> >>>> +                           struct vcpu *cd_vcpu,
> >>>> +                           const struct domain *d)
> >>>> +{
> >>>> +    mfn_t d_mfn, cd_mfn;
> >>>> +
> >>>> +    if ( !d_area->pg )
> >>>> +        return 0;
> >>>> +
> >>>> +    d_mfn = page_to_mfn(d_area->pg);
> >>>> +
> >>>> +    /* Allocate & map a page for the area if it hasn't been already. */
> >>>> +    if ( !cd_area->pg )
> >>>> +    {
> >>>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> >>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> >>>> +        p2m_type_t p2mt;
> >>>> +        p2m_access_t p2ma;
> >>>> +        unsigned int offset;
> >>>> +        int ret;
> >>>> +
> >>>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> >>>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> >>>> +        {
> >>>> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> >>>> +
> >>>> +            if ( !pg )
> >>>> +                return -ENOMEM;
> >>>> +
> >>>> +            cd_mfn = page_to_mfn(pg);
> >>>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> >>>> +
> >>>> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> >>>> +                                 p2m->default_access, -1);
> >>>> +            if ( ret )
> >>>> +                return ret;
> >>>> +        }
> >>>> +        else if ( p2mt != p2m_ram_rw )
> >>>> +            return -EBUSY;
> >>>> +
> >>>> +        /*
> >>>> +         * Simply specify the entire range up to the end of the page. All the
> >>>> +         * function uses it for is a check for not crossing page boundaries.
> >>>> +         */
> >>>> +        offset = PAGE_OFFSET(d_area->map);
> >>>> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
> >>>> +                             PAGE_SIZE - offset, cd_area, NULL);
> >>>> +        if ( ret )
> >>>> +            return ret;
> >>>> +    }
> >>>> +    else
> >>>> +        cd_mfn = page_to_mfn(cd_area->pg);
> >>>
> >>> Everything to this point seems to be non mem-sharing/forking related.
> >>> Could these live somewhere else? There must be some other place where
> >>> allocating these areas happens already for non-fork VMs so it would
> >>> make sense to just refactor that code to be callable from here.
> >>
> >> It is the "copy" aspect which makes this mem-sharing (or really fork)
> >> specific. Plus in the end this is no different from what you have
> >> there right now for copying the vCPU info area. In the final patch
> >> that other code gets removed by re-using the code here.
> >
> > Yes, the copy part is fork-specific. Arguably if there was a way to do
> > the allocation of the page for vcpu_info I would prefer that being
> > elsewhere, but while the only requirement is allocate-page and copy
> > from parent I'm OK with that logic being in here because it's really
> > straight forward. But now you also do extra sanity checks here which
> > are harder to comprehend in this context alone.
>
> What sanity checks are you talking about (also below, where you claim
> map_guest_area() would be used only to sanity check)?

Did I misread your comment above, "All the function uses it for is a check
for not crossing page boundaries"? That sounds to me like a simple sanity
check; it's unclear why it matters, though, and why it's needed only for
forks.

>
> > What if extra sanity checks will be needed in the future? Or the
> > sanity checks in the future diverge from where this happens for normal
> > VMs because someone overlooks this needing to be synched here too?
> >
> >> I also haven't been able to spot anything that could be factored
> >> out (and one might expect that if there was something, then the vCPU
> >> info area copying should also already have used it). map_guest_area()
> >> is all that is used for other purposes as well.
> >
> > Well, there must be a location where all this happens for normal VMs as
> > well, no?
>
> That's map_guest_area(). What is needed here but not elsewhere is the
> populating of the GFN underlying the to-be-mapped area. That's the code
> being added here, mirroring what you need to do for the vCPU info page.
> Similar code isn't needed elsewhere because the guest invoked operation
> is purely a "map" - the underlying pages are already expected to be
> populated (which of course we check, or else we wouldn't know what page
> to actually map).

Populated by what and when?

>
> > Why not factor that code so that it can be called from here, so
> > that we don't have to track sanity check requirements in two different
> > locations? Or for normal VMs that sanity checking bit isn't required? If
> > so, why?
>
> As per above, I'm afraid that I'm lost with these questions. I simply
> don't know what you're talking about.

You are adding code here that allocates memory and copies the contents of
similarly allocated memory from the parent. You perform extra sanity
checks, for reasons unknown, that seem to be needed only here. It is
unclear why you are doing that, and why the same code paths that allocate
this memory for the parent can't simply be reused, so that the only thing
left to do here would be copying the contents from the parent.

Tamas

--000000000000d3623505f31861c7--


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 15:46:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 15:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484452.751027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhyd-0005Ma-Rq; Wed, 25 Jan 2023 15:46:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484452.751027; Wed, 25 Jan 2023 15:46:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKhyd-0005MT-P0; Wed, 25 Jan 2023 15:46:19 +0000
Received: by outflank-mailman (input) for mailman id 484452;
 Wed, 25 Jan 2023 15:46:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Be9=5W=citrix.com=prvs=3821facd5=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pKhyc-0005ML-23
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 15:46:18 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65ef12d4-9cc7-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 16:46:15 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65ef12d4-9cc7-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674661575;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=LC9PJZe33Ic+OCc6OuGe9WiNV9QYfHLh+F4cOUUQabc=;
  b=OR4ysfNTMisJSn8AGXP0joh3Srh+CTmO5kMoF3dmLjMcjflycS/qpJnX
   7HP/LWvyovOyFKbotNUwTdoanP+uNdbX8qrvN4vqNZyftW7r9HbgZ2F7w
   hrlHyHsWg5qRIxQ+fkq/hcnnKIL3kvchtBH78pI0FzQrcnsHt2Av1R5/a
   k=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93115270
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:7w3MQKnDarIF24mM8ie0E1ro5gxLJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xJOXWrVbKyLamv8LY9xboS29hhXvMDdx9JqHAM5rnwxHiMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icf3grHmeIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7auaVA8w5ARkPqgS5gKGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 eUFCmETPzSBvOHsnZexUeszjMkjI/C+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglH2dSFYr1SE47I6+WHJwCR60aT3McqTcduPLSlQthfD/
 zubpTuhav0cHN+v4gTfz2uMv86RwBrGCa5PSeey9OE/1TV/wURMUUZLBDNXu8KRiEe4V8hON
 k889S8nrKx0/0uuJvHtUhv9rHOasxo0X9tLD/Z8+AyL0rDT4QuSGi4DVDEpQMMinN87Q3otz
 FDht9HmHzt0q5WOVGmQsLyTqFuaNS8TImsDIz0ERA0Ky975qYo3g1TESdMLOKetg8f8Az3Y3
 zGApy94jLIW5fPnzI3iowqB2Wj14MGUEEhsvF6/sn+ZAh1RfZOHNpL5zVrg7qwdCYyCTAaLs
 XgLop3LhAwRNq2lmCuISeQLObim4feZLTHR6WJS84kdGyeFoCD6I90JiN1qDAIwa5tfJ2e1C
 KPGkVkJjKK/KkdGekOej2iZL80xhZbtGt3+Phw/RoofO8MhHONrEcwHWKJx44wPuBJ0+U3cE
 c3BGSpJMZr9IfoP8dZOb71BuYLHPwhnrY8pebj1zg68zZ2Vb2OPRLEOPTOmN75msP7U/V+Oq
 o4BZ6NmLimzt8WnMkHqHXM7dwhWfRDX+7iowyCoSgJzClU/QzxwYxMg6bggZ5Zkj8xoehTgp
 xmAtrtj4AOn3xXvcFzaAk2PnZuzBf6TW1pnZ31zVbtpslB/CbuSAFA3LMVqLeh8qrcypRO2J
 tFcE/i97j10Ymyv01wggVPV9eSOqDzDadqyAheY
IronPort-HdrOrdr: A9a23:OZP6xa9vgj+MaBC/xu9uk+AoI+orL9Y04lQ7vn2ZKSY5TiX4rb
 HIoB1/73XJYVkqN03I9ervBEDEewK+yXcX2/h0AV7BZmnbUQKTRekP0WKh+UyDJ8SXzIVgPM
 xbAs1D4bPLbGSTjazBkXWF+9RL+qj5zEh/792usUuETmtRGtBdBx8SMHf8LqXvLjM2f6bQEv
 Cnl7N6jgvlQ1s7ROKhCEIIWuDSzue76a4PMXY9dmYaABDlt0LS1ILH
X-IronPort-AV: E=Sophos;i="5.97,245,1669093200"; 
   d="scan'208";a="93115270"
Date: Wed, 25 Jan 2023 15:45:55 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Dongli Zhang <dongli.zhang@oracle.com>
Subject: Re: [PATCH v2 1/2] libxl: Fix guest kexec - skip cpuid policy
Message-ID: <Y9FOs1C4YETC7Lgu@perard.uk.xensource.com>
References: <20230124025939.6480-1-jandryuk@gmail.com>
 <20230124025939.6480-2-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230124025939.6480-2-jandryuk@gmail.com>

On Mon, Jan 23, 2023 at 09:59:38PM -0500, Jason Andryuk wrote:
> When a domain performs a kexec (soft reset), libxl__build_pre() is
> called with the existing domid.  Calling libxl__cpuid_legacy() on the
> existing domain fails since the cpuid policy has already been set, and
> the guest isn't rebuilt and doesn't kexec.
> 
> xc: error: Failed to set d1's policy (err leaf 0xffffffff, subleaf 0xffffffff, msr 0xffffffff) (17 = File exists): Internal error
> libxl: error: libxl_cpuid.c:494:libxl__cpuid_legacy: Domain 1:Failed to apply CPUID policy: File exists
> libxl: error: libxl_create.c:1641:domcreate_rebuild_done: Domain 1:cannot (re-)build domain: -3
> libxl: error: libxl_xshelp.c:201:libxl__xs_read_mandatory: xenstore read failed: `/libxl/1/type': No such file or directory
> libxl: warning: libxl_dom.c:49:libxl__domain_type: unable to get domain type for domid=1, assuming HVM
> 
> During a soft_reset, skip calling libxl__cpuid_legacy() to avoid the
> issue.  Before the fixes commit, the libxl__cpuid_legacy() failure would

s/fixes/fixed/ or maybe better just write: "before commit 34990446ca91".

> have been ignored, so kexec would continue.
> 
> Fixes: 34990446ca91 "libxl: don't ignore the return value from xc_cpuid_apply_policy"

FYI, the tag format puts () around the commit title:
    Fixes: 34990446ca91 ("libxl: don't ignore the return value from xc_cpuid_apply_policy")
I have this in my git config file to help generate those:
[alias]
    fixes = log -1 --abbrev=12 --format=tformat:'Fixes: %h (\"%s\")'


> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
> Probably a backport candidate since this has been broken for a while.
> 
> v2:
> Use soft_reset field in libxl__domain_build_state. - Juergen

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 16:18:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 16:18:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484458.751038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKiTg-0001b5-9q; Wed, 25 Jan 2023 16:18:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484458.751038; Wed, 25 Jan 2023 16:18:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKiTg-0001ay-72; Wed, 25 Jan 2023 16:18:24 +0000
Received: by outflank-mailman (input) for mailman id 484458;
 Wed, 25 Jan 2023 16:18:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lDzi=5W=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pKiTe-0001as-Qp
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 16:18:23 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e13eabab-9ccb-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 17:18:19 +0100 (CET)
Received: by mail-ej1-x630.google.com with SMTP id hw16so48963377ejc.10
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 08:18:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e13eabab-9ccb-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=bqRwAxYb38Rra289y0CJ69vKEaXm7dY70FgkG0q6CeA=;
        b=clWIstxGC/XBTSYAkXI3NB6fdw6wEV6W3ztYzSSBFBZRdZhvOS1y1VyurXa1D5e6YS
         P4bxHj3/piOLJ15+0flyc4s+Xp9Y/cfKH4VuCIUZtrC2pIujbKtvRcDD5V18jUU9bAol
         1mC3dYptus1dULDc21ZHWl2LrJsQcpJpZ+sroVdZLxklNzv1hZ18Vw0UtIiA+aqxX0zl
         mJiCdc5KfU7Pv9irYarA8ADk8VZoA2LQE6u+Q8PKbRKcMbnYTOmDroBDxN7rV4jS83ot
         WmpUXoZ96jkYHSIxlkbzIgCG1Bu4KFmeOfTW6JYQ1i54H+1O4aFBvkpubCC8mlqfYLla
         PvMA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=bqRwAxYb38Rra289y0CJ69vKEaXm7dY70FgkG0q6CeA=;
        b=NXF/ziwzQloCoGdG7uZF0qPQgjOJnGSZkUeD/zak0IHNomjDWmVjtHXs4LWorLBLQq
         5US39T1Mnzq7+q7rmt1FrprtkYk/77O+VyldPGAj0yXKippvptNl6X1mAJYhmCNAnz57
         PusqnoPO9aG7fhDQKQMPeR1l8FuS+pmUYl89+40pUWlAImAdK3XZphbgwH0Nbw053d+1
         3rYnRzUZhb2F8LGXYvcscEqLninChJYx3sx3zvlMHGGuyQrs513rJ8lPR5RgbvyHW9G1
         Kt9tTVb8Pz+tdfRQG+9PEBdDO7cX8SUOsLM/Tw2BmfBv48UmXGY7bc9fRCn6tODfvGKQ
         D1zQ==
X-Gm-Message-State: AFqh2kr5JsspezesvL0341Yh7Qxiy7w9Atl1TxqBANws7QyPAsQu4BhU
	V2qYfYi3ySYCCNOfmT/AffYX7B1Y9lvlTqYxpoJS6w==
X-Google-Smtp-Source: AMrXdXuJ+Dd+putufdCXRUhJ0xotbP9SUcd8vZE5AnE+VGSO7DwmBS7WnZcAk9I7oeFolV91DPuzKpSZar6J9ceRf3w=
X-Received: by 2002:a17:906:340b:b0:7ad:a2ef:c62 with SMTP id
 c11-20020a170906340b00b007ada2ef0c62mr4232687ejb.126.1674663499162; Wed, 25
 Jan 2023 08:18:19 -0800 (PST)
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-2-carlo.nonato@minervasys.tech> <a470be46-ab6e-3970-2b04-6f4035adf1cb@suse.com>
 <CAG+AhRX9DVW5EfXKQoDG9hmcE0FORydTZd0pNm-0uqwddaN9NQ@mail.gmail.com> <6c952571-6a8d-e4fc-36ec-b5b79dac40f6@suse.com>
In-Reply-To: <6c952571-6a8d-e4fc-36ec-b5b79dac40f6@suse.com>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Wed, 25 Jan 2023 17:18:08 +0100
Message-ID: <CAG+AhRUOBgPsT9yU3EtqSPj5VX70H1DsUL_dOWguapC+u3iSvw@mail.gmail.com>
Subject: Re: [PATCH v4 01/11] xen/common: add cache coloring common code
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org, 
	Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 2:10 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.01.2023 12:18, Carlo Nonato wrote:
> > On Tue, Jan 24, 2023 at 5:37 PM Jan Beulich <jbeulich@suse.com> wrote:
> >> On 23.01.2023 16:47, Carlo Nonato wrote:
> >>> --- /dev/null
> >>> +++ b/xen/include/xen/llc_coloring.h
> >>> @@ -0,0 +1,54 @@
> >>> +/* SPDX-License-Identifier: GPL-2.0 */
> >>> +/*
> >>> + * Last Level Cache (LLC) coloring common header
> >>> + *
> >>> + * Copyright (C) 2022 Xilinx Inc.
> >>> + *
> >>> + * Authors:
> >>> + *    Carlo Nonato <carlo.nonato@minervasys.tech>
> >>> + */
> >>> +#ifndef __COLORING_H__
> >>> +#define __COLORING_H__
> >>> +
> >>> +#include <xen/sched.h>
> >>> +#include <public/domctl.h>
> >>> +
> >>> +#ifdef CONFIG_HAS_LLC_COLORING
> >>> +
> >>> +#include <asm/llc_coloring.h>
> >>> +
> >>> +extern bool llc_coloring_enabled;
> >>> +
> >>> +int domain_llc_coloring_init(struct domain *d, unsigned int *colors,
> >>> +                             unsigned int num_colors);
> >>> +void domain_llc_coloring_free(struct domain *d);
> >>> +void domain_dump_llc_colors(struct domain *d);
> >>> +
> >>> +#else
> >>> +
> >>> +#define llc_coloring_enabled (false)
> >>
> >> While I agree this is needed, ...
> >>
> >>> +static inline int domain_llc_coloring_init(struct domain *d,
> >>> +                                           unsigned int *colors,
> >>> +                                           unsigned int num_colors)
> >>> +{
> >>> +    return 0;
> >>> +}
> >>> +static inline void domain_llc_coloring_free(struct domain *d) {}
> >>> +static inline void domain_dump_llc_colors(struct domain *d) {}
> >>
> >> ... I don't think you need any of these. Instead the declarations above
> >> simply need to be visible unconditionally (to be visible to the compiler
> >> when processing consuming code). We rely on DCE to remove such references
> >> in many other places.
> >
> > So this is true for any other stub function that I used in the series, right?
>
> Likely. I didn't look at most of the Arm-only pieces.
>
> >>> --- a/xen/include/xen/sched.h
> >>> +++ b/xen/include/xen/sched.h
> >>> @@ -602,6 +602,9 @@ struct domain
> >>>
> >>>      /* Holding CDF_* constant. Internal flags for domain creation. */
> >>>      unsigned int cdf;
> >>> +
> >>> +    unsigned int *llc_colors;
> >>> +    unsigned int num_llc_colors;
> >>>  };
> >>
> >> Why outside of any #ifdef, and why not in struct arch_domain?
> >
> > Moving this in sched.h seemed like the natural continuation of the common +
> > arch specific split. Notice that this split is also because Julien pointed
> > out (as you did in some earlier revision) that cache coloring can be used
> > by other arch in the future (even if x86 is excluded). Having two maintainers
> > saying the same thing sounded like a good reason to do that.
>
> If you mean this to be usable by other arch-es as well (which I would
> welcome, as I think I had expressed on an earlier version), then I think
> more pieces want to be in common code. But putting the fields here and all
> users of them in arch-specific code (which I think is the way I saw it)
> doesn't look very logical to me. IOW to me there exist only two possible
> approaches: As much as possible in common code, or common code being
> disturbed as little as possible.

This means having an llc-coloring.c in common code to hold the common
implementation, right?
Anyway, right now there is already another user of those fields in common
code: page_alloc.c.

> > The missing #ifdef comes from a discussion I had with Julien in v2 about
> > domctl interface where he suggested removing it
> > (https://marc.info/?l=xen-devel&m=166151802002263).
>
> I went about five levels deep in the replies, without finding any such reply
> from Julien. Can you please be more specific with the link, so readers don't
> need to endlessly dig?

https://marc.info/?l=xen-devel&m=166669617917298

quote (me and then Julien):
>> We can also think of moving the coloring fields from this
>> struct to the common one (xen_domctl_createdomain) protecting them with
>> the proper #ifdef (but we are targeting only arm64...).

> Your code is targeting arm64 but fundamentally this is an arm64 specific
> feature. IOW, this could be used in the future on other arch. So I think
> it would make sense to define it in common without the #ifdef.

> Jan
>
> > We were talking about
> > a different struct, but I thought the principle was the same. Anyway I would
> > like the #ifdef too.
> >
> > So @Jan, @Julien, can you help me fix this once for all?
> >
> > Thanks.
> >
> > - Carlo Nonato
>


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 16:27:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 16:27:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484463.751047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKicg-0003Fn-4u; Wed, 25 Jan 2023 16:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484463.751047; Wed, 25 Jan 2023 16:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKicg-0003Fg-2B; Wed, 25 Jan 2023 16:27:42 +0000
Received: by outflank-mailman (input) for mailman id 484463;
 Wed, 25 Jan 2023 16:27:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lDzi=5W=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pKicf-0003Fa-9a
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 16:27:41 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2fc39f21-9ccd-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 17:27:40 +0100 (CET)
Received: by mail-ed1-x535.google.com with SMTP id x36so22315909ede.13
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 08:27:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fc39f21-9ccd-11ed-91b6-6bf2151ebd3b
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-5-carlo.nonato@minervasys.tech> <9bfee6d9-9cb2-262e-5a46-91b0bf35d60b@suse.com>
In-Reply-To: <9bfee6d9-9cb2-262e-5a46-91b0bf35d60b@suse.com>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Wed, 25 Jan 2023 17:27:29 +0100
Message-ID: <CAG+AhRW+45gt7ZyOYSjaQZbfLORNsJVeADk_Tb7j9CEyTcY6QQ@mail.gmail.com>
Subject: Re: [PATCH v4 04/11] xen: extend domctl interface for cache coloring
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Wei Liu <wl@xen.org>, Marco Solieri <marco.solieri@minervasys.tech>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Jan,

On Tue, Jan 24, 2023 at 5:29 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 23.01.2023 16:47, Carlo Nonato wrote:
> > @@ -275,6 +276,19 @@ unsigned int *dom0_llc_colors(unsigned int *num_colors)
> >      return colors;
> >  }
> >
> > +unsigned int *llc_colors_from_guest(struct xen_domctl_createdomain *config)
>
> const struct ...?
>
> > +{
> > +    unsigned int *colors;
> > +
> > +    if ( !config->num_llc_colors )
> > +        return NULL;
> > +
> > +    colors = alloc_colors(config->num_llc_colors);
>
> Error handling needs to occur here; the panic() in alloc_colors() needs
> to go away.
>
> > @@ -434,7 +436,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >              rover = dom;
> >          }
> >
> > -        d = domain_create(dom, &op->u.createdomain, false);
> > +        if ( llc_coloring_enabled )
> > +        {
> > +            llc_colors = llc_colors_from_guest(&op->u.createdomain);
> > +            num_llc_colors = op->u.createdomain.num_llc_colors;
>
> I think you would better avoid setting num_llc_colors to non-zero if
> you got back NULL from the function. It's at best confusing.
>
> > @@ -92,6 +92,10 @@ struct xen_domctl_createdomain {
> >      /* CPU pool to use; specify 0 or a specific existing pool */
> >      uint32_t cpupool_id;
> >
> > +    /* IN LLC coloring parameters */
> > +    uint32_t num_llc_colors;
> > +    XEN_GUEST_HANDLE(uint32) llc_colors;
>
> Despite your earlier replies I continue to be unconvinced that this
> is information which needs to be available right at domain_create.
> Without that you'd also get away without the sufficiently odd
> domain_create_llc_colored(). (Odd because: Think of two or three
> more extended features appearing, all of which want a special cased
> domain_create().)

Yes, I definitely see your point. Still, there is the p2m table allocation
problem that you and Julien discussed previously. I'm not sure I
understood what the agreed approach is.

> Jan


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 16:53:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 16:53:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484470.751058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKj1W-0006xn-6P; Wed, 25 Jan 2023 16:53:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484470.751058; Wed, 25 Jan 2023 16:53:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKj1W-0006xg-3O; Wed, 25 Jan 2023 16:53:22 +0000
Received: by outflank-mailman (input) for mailman id 484470;
 Wed, 25 Jan 2023 16:53:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r4Sd=5W=citrix.com=prvs=382279126=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pKj1V-0006xH-Cb
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 16:53:21 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c35bb39a-9cd0-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 17:53:18 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c35bb39a-9cd0-11ed-b8d1-410ff93cb8f0
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: [PATCH] x86/shadow: Fix PV32 shadowing in !HVM builds
Date: Wed, 25 Jan 2023 16:53:08 +0000
Message-ID: <20230125165308.22897-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The OSSTest bisector identified an issue with c/s 1894049fa283 ("x86/shadow:
L2H shadow type is PV32-only") in !HVM builds.

The bug is ultimately caused by sh_type_to_size[] not actually being specific
to HVM guests, and its position in shadow/hvm.c misled the reasoning.

To fix the issue that OSSTest identified, SH_type_l2h_64_shadow must still
have the value 1 in any CONFIG_PV32 build.  But simply adjusting this leaves
us with misleading logic, and a reasonable chance of making a related error
again in the future.

In hindsight, moving sh_type_to_size[] out of common.c in the first place was
a mistake.  Therefore, move sh_type_to_size[] back into common.c, leaving a
comment explaining why it happens to be inside an HVM conditional.

This effectively reverts the second half of 4fec945409fc ("x86/shadow: adjust
and move sh_type_to_size[]") while retaining the other improvements from the
same changeset.

While making this change, also adjust the sh_type_to_size[] declaration to
match its definition.

Fixes: 4fec945409fc ("x86/shadow: adjust and move sh_type_to_size[]")
Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Tim Deegan <tim@xen.org>

I was unsure whether it was reasonable to move the table back into its old
position but it can live pretty much anywhere in common.c as far as I'm
concerned.
---
 xen/arch/x86/mm/shadow/common.c  | 38 ++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/hvm.c     | 31 -------------------------------
 xen/arch/x86/mm/shadow/private.h |  2 +-
 3 files changed, 39 insertions(+), 32 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 26901b8b3bcf..a74b15e3e75b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -39,6 +39,44 @@
 #include <public/sched.h>
 #include "private.h"
 
+/*
+ * This table shows the allocation behaviour of the different modes:
+ *
+ * Xen paging      64b  64b  64b
+ * Guest paging    32b  pae  64b
+ * PV or HVM       HVM  HVM   *
+ * Shadow paging   pae  pae  64b
+ *
+ * sl1 size         8k   4k   4k
+ * sl2 size        16k   4k   4k
+ * sl3 size         -    -    4k
+ * sl4 size         -    -    4k
+ *
+ * Note: our accessor, shadow_size(), can optimise out this table in PV-only
+ * builds.
+ */
+#ifdef CONFIG_HVM
+const uint8_t sh_type_to_size[] = {
+    [SH_type_l1_32_shadow]   = 2,
+    [SH_type_fl1_32_shadow]  = 2,
+    [SH_type_l2_32_shadow]   = 4,
+    [SH_type_l1_pae_shadow]  = 1,
+    [SH_type_fl1_pae_shadow] = 1,
+    [SH_type_l2_pae_shadow]  = 1,
+    [SH_type_l1_64_shadow]   = 1,
+    [SH_type_fl1_64_shadow]  = 1,
+    [SH_type_l2_64_shadow]   = 1,
+#ifdef CONFIG_PV32
+    [SH_type_l2h_64_shadow]  = 1,
+#endif
+    [SH_type_l3_64_shadow]   = 1,
+    [SH_type_l4_64_shadow]   = 1,
+    [SH_type_p2m_table]      = 1,
+    [SH_type_monitor_table]  = 1,
+    [SH_type_oos_snapshot]   = 1,
+};
+#endif /* CONFIG_HVM */
+
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
 
 static int cf_check sh_enable_log_dirty(struct domain *, bool log_global);
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 918865cf1b6a..88c3c16322f2 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -33,37 +33,6 @@
 
 #include "private.h"
 
-/*
- * This table shows the allocation behaviour of the different modes:
- *
- * Xen paging      64b  64b  64b
- * Guest paging    32b  pae  64b
- * PV or HVM       HVM  HVM   *
- * Shadow paging   pae  pae  64b
- *
- * sl1 size         8k   4k   4k
- * sl2 size        16k   4k   4k
- * sl3 size         -    -    4k
- * sl4 size         -    -    4k
- */
-const uint8_t sh_type_to_size[] = {
-    [SH_type_l1_32_shadow]   = 2,
-    [SH_type_fl1_32_shadow]  = 2,
-    [SH_type_l2_32_shadow]   = 4,
-    [SH_type_l1_pae_shadow]  = 1,
-    [SH_type_fl1_pae_shadow] = 1,
-    [SH_type_l2_pae_shadow]  = 1,
-    [SH_type_l1_64_shadow]   = 1,
-    [SH_type_fl1_64_shadow]  = 1,
-    [SH_type_l2_64_shadow]   = 1,
-/*  [SH_type_l2h_64_shadow]  = 1,  PV32-only */
-    [SH_type_l3_64_shadow]   = 1,
-    [SH_type_l4_64_shadow]   = 1,
-    [SH_type_p2m_table]      = 1,
-    [SH_type_monitor_table]  = 1,
-    [SH_type_oos_snapshot]   = 1,
-};
-
 /**************************************************************************/
 /* x86 emulator support for the shadow code
  */
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 7d6c846c8037..79d82364fc92 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -362,7 +362,7 @@ static inline int mfn_oos_may_write(mfn_t gmfn)
 #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
 
 /* Figure out the size (in pages) of a given shadow type */
-extern const u8 sh_type_to_size[SH_type_unused];
+extern const uint8_t sh_type_to_size[SH_type_unused];
 static inline unsigned int
 shadow_size(unsigned int shadow_type)
 {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 17:01:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 17:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484481.751068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKj98-0000FP-4i; Wed, 25 Jan 2023 17:01:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484481.751068; Wed, 25 Jan 2023 17:01:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKj98-0000FI-1c; Wed, 25 Jan 2023 17:01:14 +0000
Received: by outflank-mailman (input) for mailman id 484481;
 Wed, 25 Jan 2023 17:01:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Jk/=5W=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pKj96-0000FC-ST
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 17:01:12 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dd94de6a-9cd1-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 18:01:10 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id
 o17-20020a05600c511100b003db021ef437so1780803wms.4
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 09:01:10 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 u11-20020a05600c19cb00b003d9fb04f658sm2613027wmq.4.2023.01.25.09.01.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Jan 2023 09:01:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd94de6a-9cd1-11ed-b8d1-410ff93cb8f0
Message-ID: <df6bd499b06c2e4997a3b647624aa2163e7f23d6.camel@gmail.com>
Subject: Re: [PATCH v1 09/14] xen/riscv: introduce do_unexpected_trap()
From: Oleksii <oleksii.kurochko@gmail.com>
To: Alistair Francis <alistair23@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>
Date: Wed, 25 Jan 2023 19:01:08 +0200
In-Reply-To: <CAKmqyKNtFGoXmF1SJWO+JBJQvPSyDYEfpaYn2YBMQ=BsCk6VPQ@mail.gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <74ca10d9be1dfc3aed4b3b21a79eae88c9df26a4.1674226563.git.oleksii.kurochko@gmail.com>
	 <CAKmqyKNtFGoXmF1SJWO+JBJQvPSyDYEfpaYn2YBMQ=BsCk6VPQ@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-23 at 09:39 +1000, Alistair Francis wrote:
> On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
> <oleksii.kurochko@gmail.com> wrote:
> >
> > The patch introduces the function the purpose of which is to print
> > a cause of an exception and call "wfi" instruction.
> >
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> >  xen/arch/riscv/traps.c | 14 +++++++++++++-
> >  1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
> > index dd64f053a5..fc25138a4b 100644
> > --- a/xen/arch/riscv/traps.c
> > +++ b/xen/arch/riscv/traps.c
> > @@ -95,7 +95,19 @@ const char *decode_cause(unsigned long cause)
> >      return decode_trap_cause(cause);
> >  }
> >
> > -void __handle_exception(struct cpu_user_regs *cpu_regs)
> > +static void do_unexpected_trap(const struct cpu_user_regs *regs)
> >  {
> > +    unsigned long cause = csr_read(CSR_SCAUSE);
> > +
> > +    early_printk("Unhandled exception: ");
> > +    early_printk(decode_cause(cause));
> > +    early_printk("\n");
> > +
> > +    // kind of die...
> >      wait_for_interrupt();
>
> We could put this in a loop, to ensure we never progress
>
I think that right now there is no big difference in how we stop,
because we have only 1 CPU, interrupts are disabled and we are in an
exception handler, so it looks like nothing can interrupt us.
And in the future it will be changed to panic(), so we won't need wfi()
here any more.
>

Oleksii


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 17:11:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 17:11:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484491.751078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKjJE-000224-9E; Wed, 25 Jan 2023 17:11:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484491.751078; Wed, 25 Jan 2023 17:11:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKjJE-00021x-4o; Wed, 25 Jan 2023 17:11:40 +0000
Received: by outflank-mailman (input) for mailman id 484491;
 Wed, 25 Jan 2023 17:11:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKjJC-00021r-JP
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 17:11:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKjJC-0008K4-30; Wed, 25 Jan 2023 17:11:38 +0000
Received: from [54.239.6.189] (helo=[192.168.17.90])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKjJB-0006cj-TG; Wed, 25 Jan 2023 17:11:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <a790bbf6-10aa-214a-b3ef-d6cd7e849384@xen.org>
Date: Wed, 25 Jan 2023 17:11:35 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v1 09/14] xen/riscv: introduce do_unexpected_trap()
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>,
 Alistair Francis <alistair23@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <74ca10d9be1dfc3aed4b3b21a79eae88c9df26a4.1674226563.git.oleksii.kurochko@gmail.com>
 <CAKmqyKNtFGoXmF1SJWO+JBJQvPSyDYEfpaYn2YBMQ=BsCk6VPQ@mail.gmail.com>
 <df6bd499b06c2e4997a3b647624aa2163e7f23d6.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <df6bd499b06c2e4997a3b647624aa2163e7f23d6.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 25/01/2023 17:01, Oleksii wrote:
> On Mon, 2023-01-23 at 09:39 +1000, Alistair Francis wrote:
>> On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
>> <oleksii.kurochko@gmail.com> wrote:
>>>
>>> The patch introduces the function the purpose of which is to print
>>> a cause of an exception and call "wfi" instruction.
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>>   xen/arch/riscv/traps.c | 14 +++++++++++++-
>>>   1 file changed, 13 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
>>> index dd64f053a5..fc25138a4b 100644
>>> --- a/xen/arch/riscv/traps.c
>>> +++ b/xen/arch/riscv/traps.c
>>> @@ -95,7 +95,19 @@ const char *decode_cause(unsigned long cause)
>>>       return decode_trap_cause(cause);
>>>   }
>>>
>>> -void __handle_exception(struct cpu_user_regs *cpu_regs)
>>> +static void do_unexpected_trap(const struct cpu_user_regs *regs)
>>>   {
>>> +    unsigned long cause = csr_read(CSR_SCAUSE);
>>> +
>>> +    early_printk("Unhandled exception: ");
>>> +    early_printk(decode_cause(cause));
>>> +    early_printk("\n");
>>> +
>>> +    // kind of die...
>>>       wait_for_interrupt();
>>
>> We could put this in a loop, to ensure we never progress
>>
> I think that right now there is no big difference in how we stop,
> because we have only 1 CPU, interrupts are disabled and we are in an
> exception handler, so it looks like nothing can interrupt us.

 From my understanding of the specification, WFI is a hint, so it could
be implemented as a NOP.

Therefore it would sound better to wrap it in a loop. That said...

> And in the future it will be changed to panic(), so we won't need wfi()
> here any more.

... ideally using panic() right now would be the best.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 17:15:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 17:15:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484496.751088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKjMb-0002d0-Ne; Wed, 25 Jan 2023 17:15:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484496.751088; Wed, 25 Jan 2023 17:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKjMb-0002ct-JR; Wed, 25 Jan 2023 17:15:09 +0000
Received: by outflank-mailman (input) for mailman id 484496;
 Wed, 25 Jan 2023 17:15:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ToUz=5W=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pKjMa-0002cg-9H
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 17:15:08 +0000
Received: from ppsw-43.srv.uis.cam.ac.uk (ppsw-43.srv.uis.cam.ac.uk
 [131.111.8.143]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d04ab1f3-9cd3-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 18:15:07 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:35168)
 by ppsw-43.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.139]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pKjMT-000KrU-Ux (Exim 4.96) (return-path <amc96@srcf.net>);
 Wed, 25 Jan 2023 17:15:01 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 534AE1FBD8;
 Wed, 25 Jan 2023 17:15:01 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d04ab1f3-9cd3-11ed-91b6-6bf2151ebd3b
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <c12689b9-950c-94f5-e54b-490eea19b066@srcf.net>
Date: Wed, 25 Jan 2023 17:15:01 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Oleksii <oleksii.kurochko@gmail.com>,
 Alistair Francis <alistair23@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <74ca10d9be1dfc3aed4b3b21a79eae88c9df26a4.1674226563.git.oleksii.kurochko@gmail.com>
 <CAKmqyKNtFGoXmF1SJWO+JBJQvPSyDYEfpaYn2YBMQ=BsCk6VPQ@mail.gmail.com>
 <df6bd499b06c2e4997a3b647624aa2163e7f23d6.camel@gmail.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v1 09/14] xen/riscv: introduce do_unexpected_trap()
In-Reply-To: <df6bd499b06c2e4997a3b647624aa2163e7f23d6.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25/01/2023 5:01 pm, Oleksii wrote:
> On Mon, 2023-01-23 at 09:39 +1000, Alistair Francis wrote:
>> On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
>> <oleksii.kurochko@gmail.com> wrote:
>>> The patch introduces a function whose purpose is to print the
>>> cause of an exception and execute the "wfi" instruction.
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>>  xen/arch/riscv/traps.c | 14 +++++++++++++-
>>>  1 file changed, 13 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
>>> index dd64f053a5..fc25138a4b 100644
>>> --- a/xen/arch/riscv/traps.c
>>> +++ b/xen/arch/riscv/traps.c
>>> @@ -95,7 +95,19 @@ const char *decode_cause(unsigned long cause)
>>>      return decode_trap_cause(cause);
>>>  }
>>>
>>> -void __handle_exception(struct cpu_user_regs *cpu_regs)
>>> +static void do_unexpected_trap(const struct cpu_user_regs *regs)
>>>  {
>>> +    unsigned long cause = csr_read(CSR_SCAUSE);
>>> +
>>> +    early_printk("Unhandled exception: ");
>>> +    early_printk(decode_cause(cause));
>>> +    early_printk("\n");
>>> +
>>> +    // kind of die...
>>>      wait_for_interrupt();
>> We could put this in a loop, to ensure we never progress
>>
> I think that right now there is no big difference in how we stop,
> because we have only 1 CPU, interrupts are disabled, and we are in an
> exception handler, so it looks like nothing can interrupt us.
> And in future it will be changed to panic(), so we won't need wfi()
> here any more.

WFI is permitted to be implemented as a NOP by hardware.  Furthermore,
WFI with interrupts already disabled is a supported use case, and will
resume execution without taking the interrupt that became pending.

You need an infinite loop of WFIs for execution to halt here.
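
The pattern being asked for can be sketched as follows (illustrative
only; the function name and the noreturn annotation are not from the
patch under review):

```c
/*
 * Park the CPU.  Since WFI may be implemented as a NOP, and with
 * interrupts disabled it may resume without taking the pending
 * interrupt, only the loop guarantees we never make progress.
 */
static void __attribute__((noreturn)) die(void)
{
    for ( ;; )
        asm volatile ( "wfi" ::: "memory" );
}
```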

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 17:50:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 17:50:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484506.751104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKjuI-0006Y9-GM; Wed, 25 Jan 2023 17:49:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484506.751104; Wed, 25 Jan 2023 17:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKjuI-0006Y2-By; Wed, 25 Jan 2023 17:49:58 +0000
Received: by outflank-mailman (input) for mailman id 484506;
 Wed, 25 Jan 2023 17:49:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ToUz=5W=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pKjuH-0006Xw-Ml
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 17:49:57 +0000
Received: from ppsw-43.srv.uis.cam.ac.uk (ppsw-43.srv.uis.cam.ac.uk
 [131.111.8.143]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac85d094-9cd8-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 18:49:54 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:39600)
 by ppsw-43.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.139]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pKjuD-000T2l-V4 (Exim 4.96) (return-path <amc96@srcf.net>);
 Wed, 25 Jan 2023 17:49:53 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 713F61FA26;
 Wed, 25 Jan 2023 17:49:53 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac85d094-9cd8-11ed-b8d1-410ff93cb8f0
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <a4e9272b-6110-e041-13d0-6746f721135e@srcf.net>
Date: Wed, 25 Jan 2023 17:49:53 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v3 0/4] x86/spec-ctrl: IPBP improvements
In-Reply-To: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25/01/2023 3:24 pm, Jan Beulich wrote:
> Versions of the two final patches were submitted standalone earlier
> on. The series here tries to carry out a suggestion from Andrew,
> which the two of us have been discussing. Then said previously posted
> patches are re-based on top, utilizing the new functionality.
>
> 1: spec-ctrl: add logic to issue IBPB on exit to guest
> 2: spec-ctrl: defer context-switch IBPB until guest entry
> 3: limit issuing of IBPB during context switch
> 4: PV: issue branch prediction barrier when switching 64-bit guest to kernel mode

In the subject, you mean IBPB.  I think all the individual patches are fine.

Do you have an implementation of VMASST_TYPE_mode_switch_no_ibpb for
Linux yet?  The thing I'd like to avoid is that we commit this perf hit
to Xen without lining Linux up to be able to skip it.
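
For reference, the Linux-side opt-out being asked about could plausibly
be a one-shot vm_assist call during PV guest init.  This is only a
sketch: VMASST_TYPE_mode_switch_no_ibpb is the name proposed by this
series (it is not in the public headers yet), and the function name here
is made up.

```c
/* Hypothetical Linux init hook; not actual Linux or Xen code. */
static void __init xen_skip_mode_switch_ibpb(void)
{
	/*
	 * Tell Xen this kernel does not rely on the 64-bit user->kernel
	 * mode switch acting as an implicit branch prediction barrier.
	 */
	if (HYPERVISOR_vm_assist(VMASST_CMD_enable,
				 VMASST_TYPE_mode_switch_no_ibpb))
		pr_info("xen: mode_switch_no_ibpb assist not supported\n");
}
```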

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 18:27:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 18:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484512.751114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKkUh-0002tz-8V; Wed, 25 Jan 2023 18:27:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484512.751114; Wed, 25 Jan 2023 18:27:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKkUh-0002ts-4M; Wed, 25 Jan 2023 18:27:35 +0000
Received: by outflank-mailman (input) for mailman id 484512;
 Wed, 25 Jan 2023 18:27:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmdc=5W=citrix.com=prvs=382ccbc00=Per.Bilse@srs-se1.protection.inumbo.net>)
 id 1pKkUg-0002tm-6L
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 18:27:34 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ecb563a4-9cdd-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 19:27:30 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecb563a4-9cdd-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674671250;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=HfetvvXoeGVeKHCAxwKgly1dttDJk6a1tWc5GK5C/pM=;
  b=AYti+agZ6F8s41Az/zatDwRrVSMZUa3J05TWwHPF6Z8QsRhc5PUFgFnZ
   ftVXEubv8ZtxcIFf5jUsbkL+kmzkBzhsbmSHVdny0bpmUJ4F5MKfBQ41N
   czYrJH6Yvfulj1oMQdxbr0Pdj1s818/9YYVkpVvQVOMJ2qV4iaG0TH5To
   Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93141906
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:wZ6azapJJBSffdgDS3PCTb5OmDleBmIQZRIvgKrLsJaIsI4StFCzt
 garIBmEaPnbambweIsgYIy1p0oOvcTUyIBmGVRvriFhRitGopuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpAFc+E0/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WxwUmAWP6gR5weHzSRNUPrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXACgHRRTfiMe6+qCQardMr+smJsa1I7pK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVgpUjTj6sz+GX7xw1tyrn9dtHSf7RmQO0ExR/E/
 zOeoQwVBDk8FMyykjy/ykuun9CevAbUdItDG5uno6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0WdBdDuk74wGl0bfP7kCSAW1sZiFFQMwrsokxXzNC6
 7OSt4q3X3o16uTTEC/DsO7O9lteJBT5M0cabwQAEQQg7+Pxi6FtrjvgS9xsTrGM24id9S7L/
 xiGqy03hrM2hMEN1rmm8V2vvw9AtqQlXSZuuFyJAzvNAhdRIdf8Otf2sQSzAeNodt7xc7WXg
 JQTdyFyBsgqBIrFqiGCSf5l8FqBt6fca220bbKC8vAcG9WRF5yLJ9g4DNJWfh0B3iM4ldjBP
 ifuVft5vsM7AZdTRfYfj3iNI8or17P8Mt/uS+rZaNFDCrAoKlDboXEzPxXJhz69+KTJrU3YE
 c7LGftA8F5AUfg3pNZIb7l1PUAXKtAWmjqIGMGTI+WP2ruCfn+FIYrpw3PXBt3VGJis+V2Pm
 /4Gbpvi9vmqeLGmCsUh2dJJfA9iwLlSLcyelvG7gcbafVs/QjFxUqONqV7jEqQ895loei7z1
 inVcidlJJDX3BUr9S3ihqhfVY7S
IronPort-HdrOrdr: A9a23:w+9ThalqSkUUTHuaRfT3qmK7PqHpDfIT3DAbv31ZSRFFG/FwWf
 re5cjztCWE8Ar5PUtLpTnuAtjkfZqxz+8W3WBVB8bAYOCEggqVxeNZnO/fKlTbckWUygce78
 ddmsNFebrN5DZB/KDHCcqDf+rIAuPrzEllv4jjJr5WIz1XVw==
X-IronPort-AV: E=Sophos;i="5.97,246,1669093200"; 
   d="scan'208";a="93141906"
From: Per Bilse <per.bilse@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Per Bilse <per.bilse@citrix.com>, Jan Beulich <jbeulich@suse.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2] Create a Kconfig option to set preferred reboot method
Date: Wed, 25 Jan 2023 18:27:06 +0000
Message-ID: <20230125182706.1480160-1-per.bilse@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Provide a user-friendly option to specify preferred reboot details at
compile time.  It uses the same internals as the command line 'reboot'
parameter, and will be overridden by a choice on the command line.

Signed-off-by: Per Bilse <per.bilse@citrix.com>
---
v2: Incorporate feedback from initial patch.  Separating out warm
reboot as a separate boolean led to a proliferation of code changes,
so we now use the details from Kconfig to assemble a reboot string
identical to what would be specified on the command line.  This leads
to minimal changes and additions to the code.
---
 xen/arch/x86/Kconfig    | 84 +++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/shutdown.c | 30 ++++++++++++++-
 2 files changed, 112 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 6a7825f4ba..b881a118f1 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -306,6 +306,90 @@ config MEM_SHARING
 	bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED
 	depends on HVM
 
+config REBOOT_SYSTEM_DEFAULT
+	bool "Xen-defined reboot method"
+	default y
+	help
+	  Xen will choose the most appropriate reboot method,
+	  which will be a Xen SCHEDOP hypercall if running as
+	  a Xen guest, otherwise EFI, ACPI, or by way of the
+	  keyboard controller, depending on system features.
+	  Disabling this will allow you to specify how the
+	  system will be rebooted.
+
+choice
+	bool "Reboot method"
+	depends on !REBOOT_SYSTEM_DEFAULT
+	default REBOOT_METHOD_ACPI
+	help
+	  This is a compiled-in alternative to specifying the
+	  reboot method on the Xen command line.  Specifying a
+	  method on the command line will override both this
+	  configuration and the warm boot option below.
+
+	  none    Suppress automatic reboot after panics or crashes
+	  triple  Force a triple fault (init)
+	  kbd     Use the keyboard controller
+	  acpi    Use the RESET_REG in the FADT
+	  pci     Use the so-called "PCI reset register", CF9
+	  power   Like 'pci' but for a full power-cycle reset
+	  efi     Use the EFI reboot (if running under EFI)
+	  xen     Use Xen SCHEDOP hypercall (if running under Xen as a guest)
+
+	config REBOOT_METHOD_NONE
+	bool "none"
+
+	config REBOOT_METHOD_TRIPLE
+	bool "triple"
+
+	config REBOOT_METHOD_KBD
+	bool "kbd"
+
+	config REBOOT_METHOD_ACPI
+	bool "acpi"
+
+	config REBOOT_METHOD_PCI
+	bool "pci"
+
+	config REBOOT_METHOD_POWER
+	bool "power"
+
+	config REBOOT_METHOD_EFI
+	bool "efi"
+
+	config REBOOT_METHOD_XEN
+	bool "xen"
+	depends on !XEN_GUEST
+
+endchoice
+
+config REBOOT_METHOD
+	string
+	default "none"   if REBOOT_METHOD_NONE
+	default "triple" if REBOOT_METHOD_TRIPLE
+	default "kbd"    if REBOOT_METHOD_KBD
+	default "acpi"   if REBOOT_METHOD_ACPI
+	default "pci"    if REBOOT_METHOD_PCI
+	default "Power"  if REBOOT_METHOD_POWER
+	default "efi"    if REBOOT_METHOD_EFI
+	default "xen"    if REBOOT_METHOD_XEN
+
+config REBOOT_WARM
+	bool "Warm reboot"
+	default n
+	help
+	  By default the system will perform a cold reboot.
+	  Enable this to carry out a warm reboot.  This
+	  configuration will have no effect if a "reboot="
+	  string is supplied on the Xen command line; in this
+	  case the reboot string must include "warm" if a warm
+	  reboot is desired.
+
+config REBOOT_TEMPERATURE
+	string
+	default "warm" if REBOOT_WARM
+	default "cold" if !REBOOT_WARM && !REBOOT_SYSTEM_DEFAULT
+
 endmenu
 
 source "common/Kconfig"
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 7619544d14..4969af1316 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -28,6 +28,19 @@
 #include <asm/apic.h>
 #include <asm/guest.h>
 
+/*
+ * We don't define a compiled-in reboot string if both method and
+ * temperature are defaults, in which case we can compile better code.
+ */
+#ifdef CONFIG_REBOOT_METHOD
+#define REBOOT_STR CONFIG_REBOOT_METHOD "," CONFIG_REBOOT_TEMPERATURE
+#else
+#ifdef CONFIG_REBOOT_TEMPERATURE
+#define REBOOT_STR CONFIG_REBOOT_TEMPERATURE
+#endif
+#endif
+
+/* Do not modify without updating arch/x86/Kconfig, see below. */
 enum reboot_type {
         BOOT_INVALID,
         BOOT_TRIPLE = 't',
@@ -42,10 +55,13 @@ enum reboot_type {
 static int reboot_mode;
 
 /*
- * reboot=t[riple] | k[bd] | a[cpi] | p[ci] | n[o] | [e]fi [, [w]arm | [c]old]
+ * These constants are duplicated in full in arch/x86/Kconfig, keep in sync.
+ *
+ * reboot=t[riple] | k[bd] | a[cpi] | p[ci] | P[ower] | n[one] | [e]fi
+ *                                                     [, [w]arm | [c]old]
  * warm   Don't set the cold reboot flag
  * cold   Set the cold reboot flag
- * no     Suppress automatic reboot after panics or crashes
+ * none   Suppress automatic reboot after panics or crashes
  * triple Force a triple fault (init)
  * kbd    Use the keyboard controller. cold reset (default)
  * acpi   Use the RESET_REG in the FADT
@@ -56,7 +72,12 @@ static int reboot_mode;
  */
 static enum reboot_type reboot_type = BOOT_INVALID;
 
+/* If we don't have a compiled-in reboot string, this won't be called after start-up. */
+#ifndef REBOOT_STR
 static int __init cf_check set_reboot_type(const char *str)
+#else
+static int cf_check set_reboot_type(const char *str)
+#endif
 {
     int rc = 0;
 
@@ -145,6 +166,11 @@ void machine_halt(void)
 
 static void default_reboot_type(void)
 {
+#ifdef REBOOT_STR
+    if ( reboot_type == BOOT_INVALID )
+        set_reboot_type(REBOOT_STR);
+#endif
+
     if ( reboot_type != BOOT_INVALID )
         return;
 
-- 
2.31.1
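
As a stand-alone illustration of the string format REBOOT_STR feeds
into, here is a simplified sketch of the first-letter parsing documented
in the comment above (the real parser is set_reboot_type(), which also
walks comma-separated tokens; the enum values mirror Xen's shutdown.c):

```c
#include <assert.h>
#include <string.h>

/* Option letters from the comment in xen/arch/x86/shutdown.c:
 * t[riple] | k[bd] | a[cpi] | p[ci] | P[ower] | n[one] | [e]fi
 *                                             [, [w]arm | [c]old] */
enum reboot_type {
    BOOT_INVALID = 0,
    BOOT_TRIPLE  = 't',
    BOOT_KBD     = 'k',
    BOOT_ACPI    = 'a',
    BOOT_CF9     = 'p',
    BOOT_CF9_PWR = 'P',
    BOOT_EFI     = 'e',
};

/* Simplified: classify a reboot string such as "acpi,cold" by its first
 * letter, and report whether a warm reboot was requested. */
static enum reboot_type parse_reboot(const char *str, int *warm)
{
    enum reboot_type type;

    switch ( *str )
    {
    case 't': case 'k': case 'a': case 'p': case 'P': case 'e':
        type = (enum reboot_type)*str;
        break;
    case 'n': /* "none": suppress automatic reboot */
    default:
        type = BOOT_INVALID;
        break;
    }

    *warm = strstr(str, "warm") != NULL;

    return type;
}
```

Note that 'p' and 'P' select different methods, which is presumably why
the Kconfig default string above uses "Power" with a capital P.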



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 18:32:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 18:32:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484517.751124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKkZ6-0004IW-PQ; Wed, 25 Jan 2023 18:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484517.751124; Wed, 25 Jan 2023 18:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKkZ6-0004IP-MS; Wed, 25 Jan 2023 18:32:08 +0000
Received: by outflank-mailman (input) for mailman id 484517;
 Wed, 25 Jan 2023 18:32:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dnVv=5W=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pKkZ5-0004IJ-Jn
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 18:32:07 +0000
Received: from caracal.birch.relay.mailchannels.net
 (caracal.birch.relay.mailchannels.net [23.83.209.30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8f97e226-9cde-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 19:32:04 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id 07950500ED2
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 18:32:02 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id 8034F50144E
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 18:32:01 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.97.74.44 (trex/6.7.1); Wed, 25 Jan 2023 18:32:01 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a306.dreamhost.com (Postfix) with ESMTPSA id 4P2C9X4Xh6zTR
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 10:32:00 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e00e2 by kmjvbox (DragonFly Mail Agent v0.12);
 Wed, 25 Jan 2023 10:31:59 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f97e226-9cde-11ed-b8d1-410ff93cb8f0
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1674671521; a=rsa-sha256;
	cv=none;
	b=kr8jnzAkaLLX580X7SPq7SzTh9zUL60gexwnpSocbZxRcvjgvYA3b4PMom9MHspUiPF/4g
	k6USz/RVwpEQoe4uNBmzGsq34Gru36ePqs6TLrWhUeZXfXcMW0/wsCQVv/ZVbkln5TuGif
	ZKQjtkHj9lQUqWSy6k4IjGIHdG4eyki9HcYmavHSeBiELg1jNzN+zojfC6qfdTf8cV4Agk
	i2nIHBGngGDypOERCxwWTC8Ra+f27+54Md/WLXoscM9Kg7AgRMoOffcjE4CImt15kuw9+y
	5VPKc+uVa8w5A5jjEC9gXAtRPVLDWE8kua4f2xL1HBZ+ZO8IiVq8JL/j2OCPpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1674671521;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:dkim-signature;
	bh=qxwJU1KGPHSbufcjGUiRlF4u7gN7908PHDzD9oDlrkU=;
	b=C/QeFOk5WBEiEMplgSWwzOvwqxU/Wwa+U+hbKhBZa1mv66NkCAh4NMGaB2zGz6l5U9q1jH
	U7n2A9VJKHHoJRQlSU4SHDlZGCpvmeH/JRTxr6kiMn3Onlx+/1LcMdgAo+uS0z9odW1G+O
	ECE11cp0k0rz3V+2BNrZgSJbQSdERyO1dDsyhgFulVeQG+ic+jWBMEIIUG6pV64GzTBHHz
	LSlXAs5TprSKuw/VBGRjtn6m/FXqgPhgjuDo0+kkexS79MQNTQsKIdVszRDKVYW1O7l7SH
	bLRrHRFkXL84JUT10v9QmzlOmKZQhRR3Osr2Tqbb+JFzxrM6P+6PtsPqWkF39g==
ARC-Authentication-Results: i=1;
	rspamd-6989874cc5-vfcqp;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Spill-Average: 0bdc106079f3d167_1674671521842_3279774127
X-MC-Loop-Signature: 1674671521842:647148034
X-MC-Ingress-Time: 1674671521841
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1674671520;
	bh=qxwJU1KGPHSbufcjGUiRlF4u7gN7908PHDzD9oDlrkU=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=Rw2OFdLCEOwAbp/1i7CZHRg8P+T2J8WuPv8M3Nn8kxUho5cxbHkJxVO4LW+hvULCo
	 SjQ2s5aDxaGy6ZFOp06CAKr0KvPLDL4oZI3WPSagoLBxznOSl3EhSLKui8vx7D8QqJ
	 XcNQqtkuHSd/9sSNyNabeAAcYMr0H6AINrbanQ6c=
Date: Wed, 25 Jan 2023 10:31:59 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230125183159.GA1963@templeofstupid.com>
References: <20230124223516.GA1962@templeofstupid.com>
 <145a827e-4b09-5a85-cb12-eb8f3e0c4f2a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <145a827e-4b09-5a85-cb12-eb8f3e0c4f2a@suse.com>

On Wed, Jan 25, 2023 at 07:57:16AM +0100, Jan Beulich wrote:
> On 24.01.2023 23:35, Krister Johansen wrote:
> > --- a/xen/include/public/arch-x86/cpuid.h
> > +++ b/xen/include/public/arch-x86/cpuid.h
> > @@ -71,6 +71,12 @@
> >   *             EDX: shift amount for tsc->ns conversion
> >   * Sub-leaf 2: EAX: host tsc frequency in kHz
> >   */
> > +#define XEN_CPUID_TSC_EMULATED       (1u << 0)
> > +#define XEN_CPUID_HOST_TSC_RELIABLE  (1u << 1)
> > +#define XEN_CPUID_RDTSCP_INSTR_AVAIL (1u << 2)
> > +#define XEN_CPUID_TSC_MODE_DEFAULT   (0)
> > +#define XEN_CPUID_TSC_MODE_EMULATE   (1u)
> > +#define XEN_CPUID_TSC_MODE_NOEMULATE (2u)
> 
> This could do with a blank line between the two groups. You're also
> missing mode 3. Plus, as a formal remark, please follow patch
> submission rules: They are sent To: the list, with maintainers on
> Cc:.

Thanks for the feedback.  I'll make those changes.

My apologies for the breach of etiquette, and thank you for the reminder
about the norms.  I'll correct the To: and CC: headers on the next go
around.

-K


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 18:33:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 18:33:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484521.751133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKkaf-0004s4-47; Wed, 25 Jan 2023 18:33:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484521.751133; Wed, 25 Jan 2023 18:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKkaf-0004rx-18; Wed, 25 Jan 2023 18:33:45 +0000
Received: by outflank-mailman (input) for mailman id 484521;
 Wed, 25 Jan 2023 18:33:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dnVv=5W=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pKkad-0004rn-Uo
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 18:33:43 +0000
Received: from crocodile.elm.relay.mailchannels.net
 (crocodile.elm.relay.mailchannels.net [23.83.212.45])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c98d1fd9-9cde-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 19:33:41 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id 25BFC881D46
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 18:33:39 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id 8EC9C881C5C
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 18:33:38 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.126.30.49 (trex/6.7.1); Wed, 25 Jan 2023 18:33:38 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a306.dreamhost.com (Postfix) with ESMTPSA id 4P2CCQ1YPDzK3
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 10:33:38 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e00e2 by kmjvbox (DragonFly Mail Agent v0.12);
 Wed, 25 Jan 2023 10:33:37 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c98d1fd9-9cde-11ed-91b6-6bf2151ebd3b
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1674671618; a=rsa-sha256;
	cv=none;
	b=0RcDVdnfrfpZO++0fp/wERntWG6DUztZnG2iq28H16d12M7DJEGQQoj1QheyUuXcgnMhPA
	nkv+xt4MRVjEIEKg+t6WaZfh14lmJj0isg1S1dI8RbMD813+1d4iJoOZaZ2cPdzfmW66ea
	4OE6hO/TowJHfI5vo+cI9M9QMg1MUAV8ank6MUmpIeU6M4HfxYuJL45dZUZGd5ZQxbVm2Z
	QsL4MnEa5PaNVUoFMAIeW41ZuJkdvRe4kP3c7QuV+LHqxhb2F/9gHOt5FQ8G1lPQFTaajb
	zPIfHEilx2e4rMdPOab3CleXm6o15uVgEBEZS7hJGY3lrIMRiDEViUaHXYqo+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1674671618;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:dkim-signature;
	bh=eqyFtuln9teDjnwVY8VK7WjssFBHkmh8oTKDG+ZK2Dg=;
	b=iooMJgKXGkqohds3FAEuulD2ymWzfGD8M8Q/jP9FiKGctDuiUS1NE+CdwS4Tf4u1fMWHiT
	FnAxR06uOCsFohEA/nc8YjyI/puk0ctlmOTHEvBuqvY60N2oX8F9j6pG0d8flTkP66WcIz
	UZafvLARpFUMbVO9HwRPtngvctong+lJ5dl3VaZF+0BX2GtfODy7DUuE4P621DNw/LHjfw
	u+I0A702GgqNvuaulE/f0vPBAKdJffyqG0qXLA+NKWL8P0IJmk6Ic7CS5bTfobaEP6+1OX
	Fr/sPuV8WoMBf+7R3d580dR55eHnBstUCYXqGEvk3saIsKhh/a6wyurhMiOCbA==
ARC-Authentication-Results: i=1;
	rspamd-65f5b7cf85-z6rsq;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Cure-Gusty: 65b9a5b53bc53093_1674671618808_1379431902
X-MC-Loop-Signature: 1674671618808:1002556764
X-MC-Ingress-Time: 1674671618808
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1674671618;
	bh=eqyFtuln9teDjnwVY8VK7WjssFBHkmh8oTKDG+ZK2Dg=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=jTYoayBwAJyox93l/wMGptzqafXCYrPR9Oascr3rXA8ynmDNqiGto3CJNq1wd80ic
	 fEZEVmdZtOW06CnMG4XZFajP6M+zd341nk89Nb+q0GyISCnGfzx4lc4uiwut3n734C
	 Blo3ouPHsd60YXfSE+Rlas1IaH1a2b8Blg/z4zu8=
Date: Wed, 25 Jan 2023 10:33:37 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>
Subject: [PATCH v2] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230125183337.GC1963@templeofstupid.com>
References: <20230124223531.GB1962@templeofstupid.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230124223531.GB1962@templeofstupid.com>

CPUID leaf 4 contains information about the state of the TSC: its mode,
plus some additional details.  A commit queued for Linux would like to use
this to determine whether the TSC mode has been set to 'no emulation' in
order to make some decisions about which clocksource is more reliable.

Expose this information in the public API headers so that it can
subsequently be imported into Linux and used there.

Link: https://lore.kernel.org/xen-devel/eda8d9f2-3013-1b68-0df8-64d7f13ee35e@suse.com/
Link: https://lore.kernel.org/xen-devel/0835453d-9617-48d5-b2dc-77a2ac298bad@oracle.com/
Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
---
v2:
  - Fix whitespace between comment and #defines (feedback from Jan Beulich)
  - Add tsc mode 3: no emulate TSC_AUX (feedback from Jan Beulich)
---
 xen/include/public/arch-x86/cpuid.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index 7ecd16ae05..090f7f0034 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -72,6 +72,14 @@
  * Sub-leaf 2: EAX: host tsc frequency in kHz
  */
 
+#define XEN_CPUID_TSC_EMULATED               (1u << 0)
+#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
+#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
+#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
+#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
+#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
+#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)
+
 /*
  * Leaf 5 (0x40000x04)
  * HVM-specific features
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 18:45:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 18:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484535.751144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKklo-0006jl-CN; Wed, 25 Jan 2023 18:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484535.751144; Wed, 25 Jan 2023 18:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKklo-0006je-9b; Wed, 25 Jan 2023 18:45:16 +0000
Received: by outflank-mailman (input) for mailman id 484535;
 Wed, 25 Jan 2023 18:45:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dnVv=5W=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pKkln-0006jY-3v
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 18:45:15 +0000
Received: from buffalo.larch.relay.mailchannels.net
 (buffalo.larch.relay.mailchannels.net [23.83.213.24])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6580acb2-9ce0-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 19:45:12 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id ED582201E66
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 18:45:08 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id 8EBA4201DEE
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 18:45:08 +0000 (UTC)
Received: from pdx1-sub0-mail-a306.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.109.196.199 (trex/6.7.1); Wed, 25 Jan 2023 18:45:08 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a306.dreamhost.com (Postfix) with ESMTPSA id 4P2CSh1HffzK1
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 10:45:08 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e00e2 by kmjvbox (DragonFly Mail Agent v0.12);
 Wed, 25 Jan 2023 10:45:06 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6580acb2-9ce0-11ed-91b6-6bf2151ebd3b
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1674672308; a=rsa-sha256;
	cv=none;
	b=sHXSvZg6a4ESmjJMDTauMoz+oej81nxqM9LHKP0M5N1bYchPmFqQ1z12sRlki06O4/Yy8u
	fQioyNctYw7yLf3H3omMbI1IEn6bYPee2xkhAcGRnU9KRa0AD8/XvOVgZEB9f8RNTruJdz
	CxKztOuikyFXhxREeim5zJttpaeUkXMUGOo2+hoAxEMnKuKjAjOIL46HQKa3vHQLBMFP+Q
	nITR0DK6DZbH+h4lqmaK5Xpb47aYk+HifnGo8nL0fm8p4pwICvLnV/8hcTNAmS1qXt3jp7
	aBVCOAuin2B1GIkd1hS71NUz1LFhUhMzAPC2bVap+Nn46SNLif5ws5CqF5Pi4w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1674672308;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:dkim-signature;
	bh=dwwSySAB0/noOy7j2ESEKeWWjrMfxpM7nNQbHo69AIU=;
	b=Sx50BYuCU9K4AzSxAjRIxlMX1HraVB4wVzDlta9Ie5Xiy1llqKot2YZ61xW3WuqEX3Ynmq
	0Mt1n+wHMDbX7VV2Ie4+m60VoB/6OhvWm52kGb+FhS20NaPsnrotFJUUUMcbjM2J5MFb2J
	hAM0BZ0E8CQK32uy0V6S6wyyvUOW8eWfVh0718ip7nqBbzPvC2lfrmNf+Y9Piglv1hiVXt
	udfHrXgm4bp3krCayU/T8Nxf+rIRFzRg035mRyN3NZ+xuNLJokM351tCXts7grwgVV5o51
	NLHYi8UNk+3vzFrMePHn5Oca+/Hud7JzGmlpL54saCMp+QJmjY5j9KY7DMSDjA==
ARC-Authentication-Results: i=1;
	rspamd-6989874cc5-vfcqp;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Good
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Abortive-Reign: 4b1828fe3d6c6c3c_1674672308827_357499150
X-MC-Loop-Signature: 1674672308827:2558123209
X-MC-Ingress-Time: 1674672308827
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1674672308;
	bh=dwwSySAB0/noOy7j2ESEKeWWjrMfxpM7nNQbHo69AIU=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=PYR3YyMwqygtLxvD5YTYV0Dw+In3vPsoMFuy8e/LaadPOi3qXMr4jAIy2xHSG/oQT
	 t5qDfzZygJYJtu2GQNfyT7jNELjKh6vSxRF/h+OwF0fqOMI+6YnsOVAOI8xi5ut1qh
	 DY9BQiLXKm/gwGHdrf3hQyxmFg7gCf+jqzw9KcwI=
Date: Wed, 25 Jan 2023 10:45:06 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>
Subject: [PATCH v2] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230125184506.GE1963@templeofstupid.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230124223516.GA1962@templeofstupid.com>

CPUID leaf 4 contains information about the state of the TSC: its mode,
plus some additional details.  A commit queued for Linux would like to use
this to determine whether the TSC mode has been set to 'no emulation' in
order to make some decisions about which clocksource is more reliable.

Expose this information in the public API headers so that it can
subsequently be imported into Linux and used there.

Link: https://lore.kernel.org/xen-devel/eda8d9f2-3013-1b68-0df8-64d7f13ee35e@suse.com/
Link: https://lore.kernel.org/xen-devel/0835453d-9617-48d5-b2dc-77a2ac298bad@oracle.com/
Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
---
v2.1:
  - Correct In-Reply-To header for proper threading
v2:
  - Fix whitespace between comment and #defines (feedback from Jan Beulich)
  - Add tsc mode 3: no emulate TSC_AUX (feedback from Jan Beulich)
---
 xen/include/public/arch-x86/cpuid.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index 7ecd16ae05..090f7f0034 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -72,6 +72,14 @@
  * Sub-leaf 2: EAX: host tsc frequency in kHz
  */
 
+#define XEN_CPUID_TSC_EMULATED               (1u << 0)
+#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
+#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
+#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
+#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
+#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
+#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)
+
 /*
  * Leaf 5 (0x40000x04)
  * HVM-specific features
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 20:20:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 20:20:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484550.751165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKmFw-00010g-Iv; Wed, 25 Jan 2023 20:20:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484550.751165; Wed, 25 Jan 2023 20:20:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKmFw-00010Z-GP; Wed, 25 Jan 2023 20:20:28 +0000
Received: by outflank-mailman (input) for mailman id 484550;
 Wed, 25 Jan 2023 20:20:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7BNa=5W=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pKmFv-00010T-Gk
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 20:20:27 +0000
Received: from sonic310-21.consmr.mail.gq1.yahoo.com
 (sonic310-21.consmr.mail.gq1.yahoo.com [98.137.69.147])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b23372aa-9ced-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 21:20:24 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic310.consmr.mail.gq1.yahoo.com with HTTP; Wed, 25 Jan 2023 20:20:22 +0000
Received: by hermes--production-ne1-746bc6c6c4-wq9r9 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 2ac2f41faba65782946ecdf6ca09df93; 
 Wed, 25 Jan 2023 20:20:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b23372aa-9ced-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1674678022; bh=wxgIj3YaWUYlS3Z+C9Gh5GQC6tw9/QqKuYdS8h0YavU=; h=Date:Subject:To:Cc:References:From:In-Reply-To:From:Subject:Reply-To; b=BKoXZc9FPFBbXIsHPBUchBdwrDvfFyWVitHSf0s2/II2w3GPgimJy/I728ueUMkMWjRw6tLPHWYErH4PyapVlouNNwS5AdbLqkFK/w1Ve0F5DsHo3q0mdroom4MsvvldC3yn3VcsgjdOvvRDapf90rwbDdZb/wqGyMN0dpAapQiOUBq9EFHe5qFgxQ+9NSr1B92CODLDxdQD/xav2ovyZ2WFuHmzjYUOOWGCntYRczD7DV5SV5xO8X9/9EBDo4LTitdkicqgMCMChS96skRa4ydrrcLYjG5sjqGSHViYvOj+tZcmJYCVVArfsWejTBAAhJc3N1dE6wcTPK1kwGbxBA==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674678022; bh=3TQtTIUh0wupZImavFLQ8RVWW78DhFn+WpCO8H3vcmJ=; h=X-Sonic-MF:Date:Subject:To:From:From:Subject; b=OEAYP5H6DLExazYSgWVCVKJKzZY1GlZcAYK3nTctoArjEUl0p4kymL5KaM6bwwSgQBm0FS2/AVV+w6hQ4VCOXDHv4G6gzRJ2B8OohbnuH50z3JqxZ4eBf6WTlJ5xnBSJN9/mDmnjGyLdS2/Sci51Hmkf9DJEAzTux7DOj6YIIGfKP4ohJOA5DoZQRmx6v+l+PYglFJEqb70rDXPCGYuDoFvUlKx4wuX46tZtlR/QSpslFF5WNbF/OJiZhbO1RzqQYRU6FttjehOXMEMc/2BlhBLgG7YiDR46Pg/nckQFS6TSNo84+b+RYOu/XVzV8meoyz0kRbiOG7jSg9iif/c5MQ==
X-YMail-OSG: RJOOyIUVM1ntvRVshJc0ZyBGDbe9YhAAiYg0J6unhsMZkx2HGulPelIgSk_mVc2
 wl7l6En8fxXXCyBQgUt9DMVjGwRVZoY1BUq0prMOAySG2VP38.Yq6VNGO47RbGGYtT79yWG5KjEm
 z8POalhuTnVAjHtCfk.Q7LGL4FsL1AOz7SMD87xWGpAVykGRV5XJo8dFzOw8k9snOYjCGHeO6PKo
 Kr6Ykp2YPuF7bath4ltkKtQJErB1.Yx_PMUiAnU4rrrh97Wc9SOONrljoHqv6S5c61f8J9chy7RS
 Ap2fDEzNq0Ma2QjQhSN5Tc1rUBaNURBSZb9F4Tu.xq_YIXa6rMYcEoNzCj4CjZ_tfINATDGqHOPL
 GPhS7GQk4LES_cMgVY2mEAklMdZoqcuTXykXenPjpf.srgdqnq4nZeo76dB1snb7YzaygRb9kwOM
 3LjdVi5NpyFeSBsYoXs8yGWqCRazx5IUGDw7VZWlnffRuW67lf8v.duEWAQt9sJPSr.XqD3e35us
 NbTQ8ib4l2u1Ip95oAaKl.9bkSBBnwsQDRNqSQz1ppjXD6A53fQ75o2CjCrJKeCQ74RbRbfusF_X
 UcvsEr.1_N487mIk2Lhn2h7smObmUx06uATXAOd05RshIn48zrXHWc6UnmmY_ySu.LhOYSa0jJv3
 i8CbWYJtNmSiJoDmrq3vTbQdvKl2E.bEyk862dW4LwIjJFBc7kVxlzCPpxZwJB1F7VIeJ3QqvjzV
 OdkbJ9ZVgkoa1Znjd_yswtnlQn_P.s6hMGheqXQ9VC66e.bYw21X4az8vvkw26je1e3G1uwfAtIb
 qTYrJwF9SWMH0ioJgXcPrTSmM_si1k8hFa6muWis4WFgx4_bnzTHvWl4elYtRjeIwXqOPIKukHsz
 OoG.WBDq_ag7igwTd3uYxxm40kpvyS1fkk3x21GvKcmPoJU0fAZcplc8jO7OiBjtpkYd4Pa6G.JP
 30.4BE7sHFOKskNAq2Zd2FsyZhqQaFe2p.2qoDr_n6TziLtUoHb45Fe88A9D6_7PELzcr2UnP.As
 pmQJ5eG.TZllqeeLP39UXyYwoIdgYZx.nfc7i3CN0yPwdRiMu_cYzUML7WSXG8CLLgGsgicpN52x
 szP04Cldz94SnFUuazqYv4ra896JOtxTwKb02z96Z6J.jyndHd8wDWy3z9JDoMAoiVQ2CGzcJ_ow
 HDMv1oOXHZoH1EbeKjTB8jpoFtUn_dkW5ZSturdmZJwhPU1URmKwSZtwGCOQDJqgo8P9ctI3f3K5
 L9u5bqPBiI6EQKaQs73KDUhxp9pWw.uxgtJGlxuGR3ev4bo_LAtWN6GauZ7x8BQuCViRoSOoYF4T
 Rl2X0dH5U6Vp_NmGe.r0T4L6NUxn3u3FTCJd_AHS0NsDQ_QkgUlJQkRQtefHMBR1sqaq5oqQbG3z
 rr5Qmvcedw5QzwTs7xoH_.P.SUWaPuc2_Ye1.DQsXt5DyIazmvn4WllKY1Jsdd5ID4solZLt3nqs
 ybzHRT.Fx_jhSrgmHZ29fqiWP8Y1WgZ9Qi4kTHIUHZevN40dibvo9gCaAVSdzQMuq29tMAbzwFLm
 0KGSG9Prlue70gI15GUXaaBLgRPutMYPnGG9Qzqz2iaiXv2imbDN5cBz_rUl0SLzI80T2KEgFaip
 1b6HUz0sEHF_fLXM61g4k_zbkjKcKM5Q93yrpvZjvMzJ7ENABIdbIim2v5nLQVFHDt304Wlc8NNJ
 YW407Eq37spiP46AEeWsqo6JNFZidTSUGJSjzwMl1ilJqiH548BttI6kx4HvJPC9kAS3auJ0MehG
 c9z6K5ZVjwov8cNuP6CsSMHgp06XBE5jG7.BsnDbRfu7c7Ky2Okm1F65E8JNKa5QcFEW7EfCpn5a
 UV4oMMtte5vHZsRQlyCMjJhGsv6zzXyjV0BFoywG5C1oKf6qSn3DrVEp.spUNq48ZhrquoAWG4jf
 vSzwagPDmUj7P2jqjrfpiDXZTAupqvGokWy4axqwcu6CVQN6Ca6uofcvnWtrWJ5F4vX83rdxwtfB
 r4hFWn3msYZO7RqGgbLMulMZg3Pxd4dcUcNPbUguCrJHN.Mfs2vHDpy0waV7CCXMt0v46Re8EH8G
 _WgtM5btQWxKlbuJviPQHi9V0e7DI_WyZ2YRGKUOw7yE5sa0sPANhP8YfHO_zIjgo3Gd8QMErKZb
 BZh0Rt711tIQ1Q.TnKmIsLpOqf8QmPCAfq6LeKzBiMHgHAIf9xAltXyKMHQDf.U8hDv6opuEsHvD
 _E2w30fP99A--
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <18d8d8e5-bb1a-8a82-622f-3c6a60b97660@aol.com>
Date: Wed, 25 Jan 2023 15:20:16 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default
 for igd-passthru
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, qemu-devel@nongnu.org
References: <cover.1673300848.git.brchuckz.ref@aol.com>
 <cover.1673300848.git.brchuckz@aol.com>
 <Y9EUarVVWr223API@perard.uk.xensource.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <Y9EUarVVWr223API@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.21123 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 6653

On 1/25/2023 6:37 AM, Anthony PERARD wrote:
> On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
> > I call attention to the commit message of the first patch which points
> > out that using the "pc" machine and adding the xen platform device on
> > the qemu upstream command line is not functionally equivalent to using
> > the "xenfv" machine which automatically adds the xen platform device
> > earlier in the guest creation process. As a result, there is a noticeable
> > reduction in the performance of the guest during startup with the "pc"
> > machine type even if the xen platform device is added via the qemu
> > command line options, although eventually both Linux and Windows guests
> > perform equally well once the guest operating system is fully loaded.
>
> There shouldn't be a difference between the "xenfv" machine and the
> "pc" machine with the "xen-platform" device added, at least with
> regards to access to disk or network.
>
> The first patch of the series is using the "pc" machine without any
> "xen-platform" device, so we can't compare startup performance based on
> that.
>
> > Specifically, startup time is longer and neither the grub vga drivers
> > nor the windows vga drivers in early startup perform as well when the
> > xen platform device is added via the qemu command line instead of being
> > added immediately after the other emulated i440fx pci devices when the
> > "xenfv" machine type is used.
>
> The "xen-platform" device is mostly an hint to a guest that they can use
> pv-disk and pv-network devices. I don't think it would change anything
> with regards to graphics.
>
> > For example, when using the "pc" machine, which adds the xen platform
> > device using a command line option, the Linux guest could not display
> > the grub boot menu at the native resolution of the monitor, but with
> > the "xenfv" machine, the grub menu is displayed at the full 1920x1080
> > native resolution of the monitor used for testing. So improved startup
> > performance is an advantage of the patch for qemu.
>
> I've just found out that when doing IGD passthrough, the "xenfv" and
> "pc" machines are much more different than I thought ... :-(
> pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
> turn copies some information from the real host bridge.
> I guess this new host bridge helps when the firmware sets up the
> graphics for grub.
>
> > I also call attention to the last point of the commit message of the
> > second patch and the comments for reviewers section of the second patch.
> > This approach, as opposed to fixing this in qemu upstream, makes
> > maintaining the code in libxl__build_device_model_args_new more
> > difficult and therefore increases the chances of problems caused by
> > coding errors and typos for users of libxl. So that is another advantage
> > of the patch for qemu.
>
> We would just need to use a different approach in libxl when generating
> the command line. We could probably avoid duplication. I was hoping to
> have a patch series for libxl that would change the machine used to
> start using "pc" instead of "xenfv" for all configurations, but based on
> the point above (the IGD-specific changes to "xenfv"), I guess we can't
> really do anything from libxl to fix IGD passthrough.
>
> > OTOH, fixing this in qemu causes newer qemu versions to behave
> > differently than previous versions of qemu, which the qemu community
> > does not like, although they seem OK with the other patch since it
> > only affects qemu "xenfv" machine types. They do not want the patch
> > to affect toolstacks like libvirt that do not use qemu upstream's
> > autoconfiguration options as much as libxl does. Of course, libvirt
> > can manage qemu "xenfv" machines, so existing "xenfv" guests
> > configured manually by libvirt could be adversely affected by the
> > patch to qemu, but only if those same guests are also configured for
> > igd-passthrough, which is likely a very small number of libvirt users
> > of qemu.
> > 
> > A year or two ago I tried to configure guests for pci passthrough on xen
> > using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
> > could not convert an xl.cfg file with a configuration item
> > pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
> > So it is unlikely there are any users out there using libvirt to
> > configure xen hvm guests for igd passthrough on xen, and those are the
> > only users that could be adversely affected by the simpler patch to qemu
> > to fix this.
>
> FYI, libvirt should be using libxl to create guests; I don't think there
> is another way for libvirt to create xen guests.

I have had success using libvirt as a frontend to libxl for most of my xen
guests, except for HVM guests that have pci devices passed through, because
the tool to convert an xl.cfg file to libvirt xml was not able to convert the
pci = ... line in xl.cfg. Perhaps newer versions of libvirt can do it; I
haven't tried since a couple of years ago, with an older version of libvirt.
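For context, the kind of xl.cfg fragment that the conversion tool could not handle looks roughly like this (the BDF values and the exact option set are made up for illustration):

```
# Hypothetical xl.cfg fragment: HVM guest with two PCI devices
# passed through, including the IGD at 00:02.0.
builder = "hvm"
gfx_passthru = 1
pci = [ "00:02.0", "00:19.0" ]
```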

>
>
>
> So overall, unfortunately the "pc" machine in QEMU isn't suitable to do
> IGD passthrough as the "xenfv" machine has already some workaround to
> make IGD work and just need some more.
>
> I've seen that the patch for QEMU is now reviewed, so I look at having
> it merged soonish.

Hi Anthony,

Thanks for looking at this, and for also looking at the Qemu patch
to fix it. As I said earlier, I think the qemu patch is probably a
better fix for the IGD problem than this patch to libxl.

Regarding the rest of your comments, I think the Xen developers
need to decide what the roadmap for the future development of
Xen HVM machines on x86 is before deciding on any further
changes. I have not noticed much development in this area
in the past few years, except for Bernhard Beschow, who has been
doing some work to make the piix3 stuff more maintainable in
Qemu upstream. When that is done, it might be an opportunity to
do some work improving the "xenfv" machine in Qemu upstream.
The "pc" machine type is, of course, a very old machine type
to still be using as the device model for modern systems.

I noticed that about four or five years ago a patch set was
proposed to use "q35" instead of "pc" for Xen HVM guests in
Qemu upstream, but there did not seem to be any agreement
about the best way to implement that change, with some saying
more of it should be implemented outside of Qemu, by libxl or
maybe hvmloader instead. If anyone can describe whether there
is a roadmap for the future of Xen HVM on x86, that would be helpful.
Thanks,

Chuck


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 20:57:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 20:57:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484555.751176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKmpz-0004vA-BG; Wed, 25 Jan 2023 20:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484555.751176; Wed, 25 Jan 2023 20:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKmpz-0004v3-8J; Wed, 25 Jan 2023 20:57:43 +0000
Received: by outflank-mailman (input) for mailman id 484555;
 Wed, 25 Jan 2023 20:57:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKmpy-0004ux-Dp
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 20:57:42 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7fa0eef-9cf2-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 21:57:41 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 4E7EFB81BAA;
 Wed, 25 Jan 2023 20:57:40 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B8872C433D2;
 Wed, 25 Jan 2023 20:57:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7fa0eef-9cf2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674680258;
	bh=NjOPVgeL6TzTpB1raybc4GpDzLzYle8JRXOrLMSBex8=;
	h=From:To:Cc:Subject:Date:From;
	b=bfeTCjxBCoUud7hi+9tT+TDGcx1QFAdoxLOtHXWet0Mb/L72k3B7fMgJq+CQs+mAq
	 lZgq8+yDRhzNzH4DiIpGuvwR56kPb3I/+mOqKGQvMO8TQd0pxi2671LfqsdQjmheho
	 4Ul8fvBnm8FlxfZAHzpJ2lBMRWrEgA7jLkC1lD/Y+ziPCVl4B19jYZ1lQIOzsk7L2c
	 uZgDBR8o7bAK+WX0jSrmnRuGUfnVFb1TcPLRwSVw7YnouNP8k7aFDrHl/71qnqWikb
	 zzTUsG4ygPfj39RQYXpc/MLSsIssrUFUKVMmT7JZfftkS7DnXWNuZOVCgKucc6C/sK
	 al2jYZRLSJKzA==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	george.dunlap@citrix.com,
	andrew.cooper3@citrix.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	Bertrand.Marquis@arm.com,
	julien@xen.org,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: [PATCH] Add more rules to docs/misra/rules.rst
Date: Wed, 25 Jan 2023 12:57:35 -0800
Message-Id: <20230125205735.2662514-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

As agreed during the last MISRA C discussion, I am adding the following
MISRA C rules: 7.1, 7.3, 18.3.

I am also adding 13.1 and 18.2 that were "agreed pending an analysis on
the amount of violations".

In the case of 13.1 there are zero violations reported by cppcheck.

In the case of 18.2, there are zero violations reported by cppcheck
after deviating the linker symbols, as discussed.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 docs/misra/rules.rst | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
index dcceab9388..1da79f33c1 100644
--- a/docs/misra/rules.rst
+++ b/docs/misra/rules.rst
@@ -138,6 +138,16 @@ existing codebase are work-in-progress.
      - Single-bit named bit fields shall not be of a signed type
      -
 
+   * - `Rule 7.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_07_01.c>`_
+     - Required
+     - Octal constants shall not be used
+     -
+
+   * - `Rule 7.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_07_03.c>`_
+     - Required
+     - The lowercase character l shall not be used in a literal suffix
+     -
+
    * - `Rule 8.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_01.c>`_
      - Required
      - Types shall be explicitly specified
@@ -200,6 +210,11 @@ existing codebase are work-in-progress.
        expression which has potential side effects
      -
 
+   * - `Rule 13.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_01_1.c>`_
+     - Required
+     - Initializer lists shall not contain persistent side effects
+     -
+
    * - `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
      - Required
      - A loop counter shall not have essentially floating type
@@ -227,6 +242,16 @@ existing codebase are work-in-progress.
        static keyword between the [ ]
      -
 
+   * - `Rule 18.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_02.c>`_
+     - Required
+     - Subtraction between pointers shall only be applied to pointers that address elements of the same array
+     -
+
+   * - `Rule 18.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_03.c>`_
+     - Required
+     - The relational operators > >= < and <= shall not be applied to objects of pointer type except where they point into the same object
+     -
+
    * - `Rule 19.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_19_01.c>`_
      - Mandatory
      - An object shall not be assigned or copied to an overlapping
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 21:07:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 21:07:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484560.751186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKmyn-0006XT-71; Wed, 25 Jan 2023 21:06:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484560.751186; Wed, 25 Jan 2023 21:06:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKmyn-0006XM-47; Wed, 25 Jan 2023 21:06:49 +0000
Received: by outflank-mailman (input) for mailman id 484560;
 Wed, 25 Jan 2023 21:06:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKmym-0006XG-3t
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 21:06:48 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2d3bbcd1-9cf4-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 22:06:46 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id B14A8B81BA4;
 Wed, 25 Jan 2023 21:06:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7C905C433D2;
 Wed, 25 Jan 2023 21:06:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d3bbcd1-9cf4-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674680804;
	bh=m6k4jbuGQlX2dImWgEurG1RtDsvQrOrANOYLEJHIYvE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gMyyFfExEUd+w6CsMnjL3uYVMMQc4wjO8Ja+5lB4qaEPQfQwcK/3vm+NwJiD3ZkXr
	 +gWFL6EaC8Kf8hZkzDnDx+XC4oQqxdq8Gzcvsigfss9pbNlLUg7URk0QaGBqEC+nez
	 Zi332TSbBSyH94YwopJUYSgPm97BHLJbq0v8iBL63W9WNDzQ45meajCj3oBX2JFkdK
	 Z/r0B2xJDbOVlabdEwDbI2grTsovWkkmYu/yAX1McQUuL4YGp68HfnCXmv7hLvCOBj
	 cFln/lqjX8pqgVgqnOGHkbqsjtoOOfkYuDoxokuxxEBhn1oG90B0ufH/DZo9bOJMln
	 XrHvtPck+I7hQ==
Date: Wed, 25 Jan 2023 13:06:41 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v6] xen/arm: Probe the load/entry point address of an uImage
 correctly
In-Reply-To: <20230125112131.19682-1-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301251302360.1978264@ubuntu-linux-20-04-desktop>
References: <20230125112131.19682-1-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Jan 2023, Ayan Kumar Halder wrote:
> Currently, kernel_uimage_probe() does not read the load/entry point address
> set in the uImage header. Thus, info->zimage.start is 0 (default value). This
> causes kernel_zimage_place() to treat the binary (contained within uImage)
> as a position independent executable. Thus, it loads it at an incorrect
> address.
> 
> The correct approach would be to read "uimage.load" and set
> info->zimage.start. This will ensure that the binary is loaded at the
> correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
> address).
> 
> If user provides load address (ie "uimage.load") as 0x0, then the image is
> treated as position independent executable. Xen can load such an image at
> any address it considers appropriate. A position independent executable
> cannot have a fixed entry point address.
> 
> This behavior is applicable for both arm32 and arm64 platforms.
> 
> Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
> point address set in the uImage header. With this commit, Xen will use them.
> This makes the behavior of Xen consistent with uboot for uimage headers.
> 
> Users who want to use Xen with statically partitioned domains, can provide
> non zero load address and entry address for the dom0/domU kernel. It is
> required that the load and entry address provided must be within the memory
> region allocated by Xen.
> 
> A deviation from uboot behaviour is that we consider load address == 0x0
> to denote that the image supports position independent execution. This
> is to make the behavior consistent across uImage and zImage.
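
The zero-load-address rule described above can be sketched in plain,
standalone C (illustrative only: be32_to_host() and probe_load_entry()
are hypothetical names for this sketch, not Xen's actual helpers, which
use be32_to_cpu() on the real uImage header struct):

```c
#include <stdint.h>

/* Convert a big-endian 32-bit uImage header field to host order. */
static uint32_t be32_to_host(const uint8_t b[4])
{
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/*
 * Sketch of the rule from the commit message: a zero load address
 * denotes a position independent executable (PIE), in which case a
 * fixed entry point makes no sense and is rejected.
 * Returns 0 on success, -1 for a PIE image with a non-zero entry point.
 */
static int probe_load_entry(const uint8_t load_be[4], const uint8_t ep_be[4],
                            uint32_t *load, uint32_t *entry)
{
    *load = be32_to_host(load_be);
    *entry = be32_to_host(ep_be);

    if ( *load == 0 && *entry != 0 )
        return -1;

    return 0;
}
```
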
>
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from v1 :-
> 1. Added a check to ensure load address and entry address are the same.
> 2. Considered load address == 0x0 as position independent execution.
> 3. Ensured that the uImage header interpretation is consistent across
> arm32 and arm64.
> 
> v2 :-
> 1. Mentioned the change in existing behavior in booting.txt.
> 2. Updated booting.txt with a new section to document "Booting Guests".
> 
> v3 :-
> 1. Removed the constraint that the entry point should be same as the load
> address. Thus, Xen uses both the load address and entry point to determine
> where the image is to be copied and the start address.
> 2. Updated documentation to denote that load address and start address
> should be within the memory region allocated by Xen.
> 3. Added constraint that user cannot provide entry point for a position
> independent executable (PIE) image.
> 
> v4 :-
> 1. Explicitly mentioned the version in booting.txt from when the uImage
> probing behavior has changed.
> 2. Logged the requested load address and entry point parsed from the uImage
> header.
> 3. Some style issues.
> 
> v5 :-
> 1. Set info->zimage.text_offset = 0 in kernel_uimage_probe().
> 2. Mention that if the kernel has a legacy image header on top of zImage/zImage64
> header, then the attributes from the legacy image header are used to determine the load
> address, entry point, etc. Thus, zImage/zImage64 header is effectively ignored.
> 
> This is true because Xen currently does not support recursive probing of kernel
> headers ie if uImage header is probed, then Xen will not attempt to see if there
> is an underlying zImage/zImage64 header.
> 
>  docs/misc/arm/booting.txt         | 30 ++++++++++++++++
>  xen/arch/arm/include/asm/kernel.h |  2 +-
>  xen/arch/arm/kernel.c             | 58 +++++++++++++++++++++++++++++--
>  3 files changed, 86 insertions(+), 4 deletions(-)
> 
> diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
> index 3e0c03e065..1837579aef 100644
> --- a/docs/misc/arm/booting.txt
> +++ b/docs/misc/arm/booting.txt
> @@ -23,6 +23,32 @@ The exceptions to this on 32-bit ARM are as follows:
>  
>  There are no exception on 64-bit ARM.
>  
> +Booting Guests
> +--------------
> +
> +Xen supports the legacy image header[3], zImage protocol for 32-bit
> +ARM Linux[1] and Image protocol defined for ARM64[2].
> +
> +Until Xen 4.17, in case of legacy image protocol, Xen ignored the load
> +address and entry point specified in the header. This has now changed.
> +
> +Now, it loads the image at the load address provided in the header.
> +And the entry point is used as the kernel start address.
> +
> +A deviation from uboot is that, Xen treats "load address == 0x0" as
> +position independent execution (PIE). Thus, Xen will load such an image
> +at an address it considers appropriate. Also, user cannot specify the
> +entry point of a PIE image since the start address cannot be
> +predetermined.
> +
> +Users who want to use Xen with statically partitioned domains, can provide
> +the fixed non zero load address and start address for the dom0/domU kernel.
> +The load address and start address specified by the user in the header must
> +be within the memory region allocated by Xen.
> +
> +Also, it is to be noted that if user provides the legacy image header on top of
> +zImage or Image header, then Xen uses the attrbutes of legacy image header only
                                             ^ attributes                    ^ remove only

> +to determine the load address, entry point, etc.

Also add:

"""
Known limitation: compressed kernels with a uboot header are not
working.
"""

These few minor changes to the documentation can be done on commit:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



>  Firmware/bootloader requirements
>  --------------------------------
> @@ -39,3 +65,7 @@ Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/t
>  
>  [2] linux/Documentation/arm64/booting.rst
>  Latest version: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst
> +
> +[3] legacy format header
> +Latest version: https://source.denx.de/u-boot/u-boot/-/blob/master/include/image.h#L315
> +https://linux.die.net/man/1/mkimage
> diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
> index 5bb30c3f2f..4617cdc83b 100644
> --- a/xen/arch/arm/include/asm/kernel.h
> +++ b/xen/arch/arm/include/asm/kernel.h
> @@ -72,7 +72,7 @@ struct kernel_info {
>  #ifdef CONFIG_ARM_64
>              paddr_t text_offset; /* 64-bit Image only */
>  #endif
> -            paddr_t start; /* 32-bit zImage only */
> +            paddr_t start; /* Must be 0 for 64-bit Image */
>          } zimage;
>      };
>  };
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 23b840ea9e..36081e73f1 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -127,7 +127,7 @@ static paddr_t __init kernel_zimage_place(struct kernel_info *info)
>      paddr_t load_addr;
>  
>  #ifdef CONFIG_ARM_64
> -    if ( info->type == DOMAIN_64BIT )
> +    if ( (info->type == DOMAIN_64BIT) && (info->zimage.start == 0) )
>          return info->mem.bank[0].start + info->zimage.text_offset;
>  #endif
>  
> @@ -162,7 +162,12 @@ static void __init kernel_zimage_load(struct kernel_info *info)
>      void *kernel;
>      int rc;
>  
> -    info->entry = load_addr;
> +    /*
> +     * If the image does not have a fixed entry point, then use the load
> +     * address as the entry point.
> +     */
> +    if ( info->entry == 0 )
> +        info->entry = load_addr;
>  
>      place_modules(info, load_addr, load_addr + len);
>  
> @@ -223,10 +228,38 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>      if ( len > size - sizeof(uimage) )
>          return -EINVAL;
>  
> +    info->zimage.start = be32_to_cpu(uimage.load);
> +    info->entry = be32_to_cpu(uimage.ep);
> +
> +    /*
> +     * While uboot considers 0x0 to be a valid load/start address, for Xen
> +     * to maintain parity with zImage, we consider 0x0 to denote position
> +     * independent image. That means Xen is free to load such an image at
> +     * any valid address.
> +     */
> +    if ( info->zimage.start == 0 )
> +        printk(XENLOG_INFO
> +               "No load address provided. Xen will decide where to load it.\n");
> +    else
> +        printk(XENLOG_INFO
> +               "Provided load address: %"PRIpaddr" and entry address: %"PRIpaddr"\n",
> +               info->zimage.start, info->entry);
> +
> +    /*
> +     * If the image supports position independent execution, then user cannot
> +     * provide an entry point as Xen will load such an image at any appropriate
> +     * memory address. Thus, we need to return error.
> +     */
> +    if ( (info->zimage.start == 0) && (info->entry != 0) )
> +    {
> +        printk(XENLOG_ERR
> +               "Entry point cannot be non zero for PIE image.\n");
> +        return -EINVAL;
> +    }
> +
>      info->zimage.kernel_addr = addr + sizeof(uimage);
>      info->zimage.len = len;
>  
> -    info->entry = info->zimage.start;
>      info->load = kernel_zimage_load;
>  
>  #ifdef CONFIG_ARM_64
> @@ -242,6 +275,15 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>          printk(XENLOG_ERR "Unsupported uImage arch type %d\n", uimage.arch);
>          return -EINVAL;
>      }
> +
> +    /*
> +     * If there is a uImage header, then we do not parse zImage or zImage64
> +     * header. In other words if the user provides a uImage header on top of
> +     * zImage or zImage64 header, Xen uses the attributes of uImage header only.
> +     * Thus, Xen uses uimage.load attribute to determine the load address and
> +     * zimage.text_offset is ignored.
> +     */
> +    info->zimage.text_offset = 0;
>  #endif
>  
>      return 0;
> @@ -366,6 +408,7 @@ static int __init kernel_zimage64_probe(struct kernel_info *info,
>      info->zimage.kernel_addr = addr;
>      info->zimage.len = end - start;
>      info->zimage.text_offset = zimage.text_offset;
> +    info->zimage.start = 0;
>  
>      info->load = kernel_zimage_load;
>  
> @@ -436,6 +479,15 @@ int __init kernel_probe(struct kernel_info *info,
>      u64 kernel_addr, initrd_addr, dtb_addr, size;
>      int rc;
>  
> +    /*
> +     * We need to initialize start to 0. This field may be populated during
> +     * kernel_xxx_probe() if the image has a fixed entry point (e.g.
> +     * uimage.ep).
> +     * We will use this to determine if the image has a fixed entry point or
> +     * the load address should be used as the start address.
> +     */
> +    info->entry = 0;
> +
>      /* domain is NULL only for the hardware domain */
>      if ( domain == NULL )
>      {
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 21:09:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 21:09:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484567.751196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKn1a-0007Bf-Nk; Wed, 25 Jan 2023 21:09:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484567.751196; Wed, 25 Jan 2023 21:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKn1a-0007BY-L6; Wed, 25 Jan 2023 21:09:42 +0000
Received: by outflank-mailman (input) for mailman id 484567;
 Wed, 25 Jan 2023 21:09:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKn1Z-0007BS-Af
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 21:09:41 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 93d27d8f-9cf4-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 22:09:38 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 3EA3FB81BA4;
 Wed, 25 Jan 2023 21:09:38 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F2680C433D2;
 Wed, 25 Jan 2023 21:09:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93d27d8f-9cf4-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674680976;
	bh=eP9p7Owq6sX9WhkkEIrdPKlxpEYqi5HBwp29OQpgcpU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=E1neAMK1neHJjFK5rjSwLns75F6rA67sZnu8o9QsoNKBsbcwGWJaA9UoxdsEYScki
	 JL2vRPb7SO7nRflvIwK1FFmbqO4FCZqzc8qtRrfqeZTw2OBRdUbHizwXu/kIxfB+Xa
	 DenjYV3L4ekJ1xgCstNIx90j4uPZDC3X+B7P7NGj60EvhQImwtVLqRhgoy10PeomLs
	 E62l8oa6faeT9mT5sL19fPCaTdGL49WYUx4jyMjOKLigkbU1xncB1ovdmzJOhSHmKy
	 VF8rtlWSDkAW01pIaBtBfDYKBJ0RLSfzqgowB1m44at7iZih2lHrfLzwkuz92blLdv
	 yOWOcdKgIOEHg==
Date: Wed, 25 Jan 2023 13:09:34 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [XEN v5] xen/arm: Use the correct format specifier
In-Reply-To: <20230125101943.1854-1-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301251309160.1978264@ubuntu-linux-20-04-desktop>
References: <20230125101943.1854-1-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Jan 2023, Ayan Kumar Halder wrote:
> 1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
> while creating nodes in fdt, the address (if present in the node name)
> should be represented using 'PRIx64'. This is to be in conformance
> with the following rule present in https://elinux.org/Device_Tree_Linux
> 
> . node names
> "unit-address does not have leading zeros"
> 
> As 'PRIpaddr' introduces leading zeros, we cannot use it.
> 
> So, we have introduced a wrapper ie domain_fdt_begin_node() which will
> represent physical address using 'PRIx64'.
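
For illustration, the leading-zeros point can be shown with a
self-contained sketch (format_node_name() here is a hypothetical
stand-in for this example, not the domain_fdt_begin_node() helper the
patch adds):

```c
#include <inttypes.h>
#include <stdio.h>

/*
 * Build a device-tree node name "name@unit".  PRIx64 emits no leading
 * zeros, which is what the "unit-address does not have leading zeros"
 * rule requires; a zero-padded format such as "%016"PRIx64 would
 * violate it by producing e.g. "memory@0000000040000000".
 */
static int format_node_name(char *buf, size_t len,
                            const char *name, uint64_t unit)
{
    return snprintf(buf, len, "%s@%" PRIx64, name, unit);
}
```
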
> 
> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
> address.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>


Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

(I checked that Ayan also addressed Julien's latest comments.)


> ---
> Changes from -
> 
> v1 - 1. Moved the patch earlier.
> 2. Moved a part of change from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr"
> into this patch.
> 
> v2 - 1. Use PRIx64 for appending addresses to fdt node names. This fixes the CI failure.
> 
> v3 - 1. Added a comment on top of domain_fdt_begin_node().
> 2. Check for the return of snprintf() in domain_fdt_begin_node().
> 
> v4 - 1. Grammatical error fixes.
> 
>  xen/arch/arm/domain_build.c | 64 +++++++++++++++++++++++--------------
>  xen/arch/arm/gic-v2.c       |  6 ++--
>  xen/arch/arm/mm.c           |  2 +-
>  3 files changed, 44 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index c2b97fa21e..a798e0b256 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1288,6 +1288,39 @@ static int __init fdt_property_interrupts(const struct kernel_info *kinfo,
>      return res;
>  }
>  
> +/*
> + * Wrapper to convert physical address from paddr_t to uint64_t and
> + * invoke fdt_begin_node(). This is required as the physical address
> + * provided as part of node name should not contain any leading
> + * zeroes. Thus, one should use PRIx64 (instead of PRIpaddr) to append
> + * unit (which contains the physical address) with name to generate a
> + * node name.
> + */
> +static int __init domain_fdt_begin_node(void *fdt, const char *name,
> +                                        uint64_t unit)
> +{
> +    /*
> +     * The size of the buffer to hold the longest possible string (i.e.
> +     * interrupt-controller@ + a 64-bit number + \0).
> +     */
> +    char buf[38];
> +    int ret;
> +
> +    /* ePAPR 3.4 */
> +    ret = snprintf(buf, sizeof(buf), "%s@%"PRIx64, name, unit);
> +
> +    if ( ret >= sizeof(buf) )
> +    {
> +        printk(XENLOG_ERR
> +               "Insufficient buffer. Minimum size required is %d\n",
> +               (ret + 1));
> +
> +        return -FDT_ERR_TRUNCATED;
> +    }
> +
> +    return fdt_begin_node(fdt, buf);
> +}
> +
>  static int __init make_memory_node(const struct domain *d,
>                                     void *fdt,
>                                     int addrcells, int sizecells,
> @@ -1296,8 +1329,6 @@ static int __init make_memory_node(const struct domain *d,
>      unsigned int i;
>      int res, reg_size = addrcells + sizecells;
>      int nr_cells = 0;
> -    /* Placeholder for memory@ + a 64-bit number + \0 */
> -    char buf[24];
>      __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
>      __be32 *cells;
>  
> @@ -1314,9 +1345,7 @@ static int __init make_memory_node(const struct domain *d,
>  
>      dt_dprintk("Create memory node\n");
>  
> -    /* ePAPR 3.4 */
> -    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[i].start);
> -    res = fdt_begin_node(fdt, buf);
> +    res = domain_fdt_begin_node(fdt, "memory", mem->bank[i].start);
>      if ( res )
>          return res;
>  
> @@ -1375,16 +1404,13 @@ static int __init make_shm_memory_node(const struct domain *d,
>      {
>          uint64_t start = mem->bank[i].start;
>          uint64_t size = mem->bank[i].size;
> -        /* Placeholder for xen-shmem@ + a 64-bit number + \0 */
> -        char buf[27];
>          const char compat[] = "xen,shared-memory-v1";
>          /* Worst case addrcells + sizecells */
>          __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
>          __be32 *cells;
>          unsigned int len = (addrcells + sizecells) * sizeof(__be32);
>  
> -        snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start);
> -        res = fdt_begin_node(fdt, buf);
> +        res = domain_fdt_begin_node(fdt, "xen-shmem", mem->bank[i].start);
>          if ( res )
>              return res;
>  
> @@ -2716,12 +2742,9 @@ static int __init make_gicv2_domU_node(struct kernel_info *kinfo)
>      __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
>      __be32 *cells;
>      const struct domain *d = kinfo->d;
> -    /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
> -    char buf[38];
>  
> -    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> -             vgic_dist_base(&d->arch.vgic));
> -    res = fdt_begin_node(fdt, buf);
> +    res = domain_fdt_begin_node(fdt, "interrupt-controller",
> +                                vgic_dist_base(&d->arch.vgic));
>      if ( res )
>          return res;
>  
> @@ -2771,14 +2794,10 @@ static int __init make_gicv3_domU_node(struct kernel_info *kinfo)
>      int res = 0;
>      __be32 *reg, *cells;
>      const struct domain *d = kinfo->d;
> -    /* Placeholder for interrupt-controller@ + a 64-bit number + \0 */
> -    char buf[38];
>      unsigned int i, len = 0;
>  
> -    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> -             vgic_dist_base(&d->arch.vgic));
> -
> -    res = fdt_begin_node(fdt, buf);
> +    res = domain_fdt_begin_node(fdt, "interrupt-controller",
> +                                vgic_dist_base(&d->arch.vgic));
>      if ( res )
>          return res;
>  
> @@ -2858,11 +2877,8 @@ static int __init make_vpl011_uart_node(struct kernel_info *kinfo)
>      __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
>      __be32 *cells;
>      struct domain *d = kinfo->d;
> -    /* Placeholder for sbsa-uart@ + a 64-bit number + \0 */
> -    char buf[27];
>  
> -    snprintf(buf, sizeof(buf), "sbsa-uart@%"PRIx64, d->arch.vpl011.base_addr);
> -    res = fdt_begin_node(fdt, buf);
> +    res = domain_fdt_begin_node(fdt, "sbsa-uart", d->arch.vpl011.base_addr);
>      if ( res )
>          return res;
>  
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 61802839cb..5d4d298b86 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -1049,7 +1049,7 @@ static void __init gicv2_dt_init(void)
>      if ( csize < SZ_8K )
>      {
>          printk(XENLOG_WARNING "GICv2: WARNING: "
> -               "The GICC size is too small: %#"PRIx64" expected %#x\n",
> +               "The GICC size is too small: %#"PRIpaddr" expected %#x\n",
>                 csize, SZ_8K);
>          if ( platform_has_quirk(PLATFORM_QUIRK_GIC_64K_STRIDE) )
>          {
> @@ -1280,11 +1280,11 @@ static int __init gicv2_init(void)
>          gicv2.map_cbase += aliased_offset;
>  
>          printk(XENLOG_WARNING
> -               "GICv2: Adjusting CPU interface base to %#"PRIx64"\n",
> +               "GICv2: Adjusting CPU interface base to %#"PRIpaddr"\n",
>                 cbase + aliased_offset);
>      } else if ( csize == SZ_128K )
>          printk(XENLOG_WARNING
> -               "GICv2: GICC size=%#"PRIx64" but not aliased\n",
> +               "GICv2: GICC size=%#"PRIpaddr" but not aliased\n",
>                 csize);
>  
>      gicv2.map_hbase = ioremap_nocache(hbase, PAGE_SIZE);
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index f758cad545..b99806af99 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -263,7 +263,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
>  
>          pte = mapping[offsets[level]];
>  
> -        printk("%s[0x%03x] = 0x%"PRIpaddr"\n",
> +        printk("%s[0x%03x] = 0x%"PRIx64"\n",
>                 level_strs[level], offsets[level], pte.bits);
>  
>          if ( level == 3 || !pte.walk.valid || !pte.walk.table )
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 21:10:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 21:10:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484571.751206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKn2B-0008OS-0t; Wed, 25 Jan 2023 21:10:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484571.751206; Wed, 25 Jan 2023 21:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKn2A-0008OL-Tg; Wed, 25 Jan 2023 21:10:18 +0000
Received: by outflank-mailman (input) for mailman id 484571;
 Wed, 25 Jan 2023 21:10:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ToUz=5W=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pKn29-0008Nq-NY
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 21:10:17 +0000
Received: from ppsw-33.srv.uis.cam.ac.uk (ppsw-33.srv.uis.cam.ac.uk
 [131.111.8.133]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9f22214-9cf4-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 22:10:16 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:34984)
 by ppsw-33.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.137]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pKn22-0008CT-R3 (Exim 4.96) (return-path <amc96@srcf.net>);
 Wed, 25 Jan 2023 21:10:10 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 29F7B1FBD8;
 Wed, 25 Jan 2023 21:10:10 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9f22214-9cf4-11ed-91b6-6bf2151ebd3b
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <718f6fd0-cb96-6f72-87ff-7382582d89f9@srcf.net>
Date: Wed, 25 Jan 2023 21:10:08 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
 <8ee98cc0-21d3-100a-ffcc-37cd466e7761@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v3 1/4] x86/spec-ctrl: add logic to issue IBPB on exit to
 guest
In-Reply-To: <8ee98cc0-21d3-100a-ffcc-37cd466e7761@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25/01/2023 3:25 pm, Jan Beulich wrote:
> In order to be able to defer the context switch IBPB to the last
> possible point, add logic to the exit-to-guest paths to issue the
> barrier there, including the "IBPB doesn't flush the RSB/RAS"
> workaround. Since alternatives, for now at least, can't nest, emit JMP
> to skip past both constructs where both are needed. This may be more
> efficient anyway, as the sequence of NOPs is pretty long.

It is very uarch specific as to when a jump is less overhead than a line
of nops.

In all CPUs liable to be running Xen, even unconditional jumps take up
branch prediction resource, because all branch prediction is pre-decode
these days, so branch locations/types/destinations all need deriving
from %rip and "history" alone.

So whether a branch or a line of nops is better is a tradeoff between
how much competition there is for branch prediction resource, and how
efficiently the CPU can brute-force its way through a long line of nops.

But a very interesting datapoint.  It turns out that AMD Zen4 CPUs
macrofuse adjacent nops, including longnops, because it reduces the
amount of execute/retire resources required.  And a lot of
kernel/hypervisor fastpaths have a lot of nops these days.


For us, the "can't nest" is singularly more important than any worry
about uarch behaviour.  We've frankly got much lower hanging fruit than
worrying about one branch vs a few nops.

> LFENCEs are omitted - for HVM a VM entry is imminent, which already
> elsewhere we deem sufficiently serializing an event. For 32-bit PV
> we're going through IRET, which ought to be good enough as well. While
> 64-bit PV may use SYSRET, there are several more conditional branches
> there which are all unprotected.

Privilege changes are serialising-ish, and this behaviour has been
guaranteed moving forwards, although not documented coherently.

CPL (well - privilege, which includes SMM, root/non-root, etc) is not
written speculatively.  So any logic which needs to modify privilege has
to block until it is known to be an architectural execution path.

This gets us "lfence-like" or "dispatch serialising" behaviour, which is
also the reason why INT3 is our go-to speculation halting instruction. 
Microcode has to be entirely certain we are going to deliver an
interrupt/exception/etc before it can start reading the IDT/etc.

Either way, we've been promised that all instructions like IRET,
SYS{CALL,RET,ENTER,EXIT}, VM{RUN,LAUNCH,RESUME} (and ERET{U,S} in the
future FRED world) do, and shall continue to not execute speculatively.

Which in practice means we don't need to worry about Spectre-v1 attack
against codepaths which hit an exit-from-xen path, in terms of skipping
protections.

We do need to be careful about memory accesses and potential double
dereferences, but all the data is on the top of the stack for XPTI
reasons.  About the only concern is v->arch.msrs->* in the HVM path, and
we're fine with the current layout (AFAICT).

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I have to admit that I'm not really certain about the placement of the
> IBPB wrt the MSR_SPEC_CTRL writes. For now I've simply used "opposite of
> entry".

It really doesn't matter.  They're independent operations that both need
doing, and are fully serialising so can't parallelise.

But on this note, WRMSRNS and WRMSRLIST are on the horizon.  The CPUs
which implement these instructions are the ones which also ought not to
need any adjustments in the exit paths.  So I think it is specifically
not worth trying to make any effort to turn *these* WRMSR's into more
optimised forms.

But WRMSRLIST was designed specifically for this kind of usecase
(actually, more for the main context switch path) where you can prepare
the list of MSRs in memory, including the ability to conditionally skip
certain entries by adjusting the index field.


It occurs to me, having written this out, that what we actually want
to do is have slightly custom not-quite-alternative blocks.  We have a
sequence of independent code blocks, and a small block at the end that
happens to contain an IRET.

We could remove the nops at boot time if we treated it as one large
region, with the IRET at the end also able to have a variable position,
and assembled the "active" blocks tightly from the start.  Complications
would include adjusting the IRET extable entry, but this isn't
insurmountable.  Entrypoints are a bit more tricky but could be done by
packing from the back forward, and adjusting the entry position.
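
A sketch of just the offset computation (sizes and flags invented; real
patching would also need to copy the code and do the extable fixup
mentioned above):

```c
/* Toy model of packing "active" alternative blocks tightly from the
 * start of one large region, with the final (IRET) block landing
 * immediately after the last active block.  Inactive blocks get no
 * space at all, instead of being nopped out. */
struct blk {
    unsigned int size;
    int active;
};

static unsigned int pack_blocks(const struct blk *b, unsigned int n,
                                unsigned int *off)
{
    unsigned int cursor = 0;

    for ( unsigned int i = 0; i < n; i++ )
    {
        off[i] = cursor;           /* where block i would be placed */
        if ( b[i].active )
            cursor += b[i].size;
    }

    return cursor;                 /* new position of the trailing IRET */
}
```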

Either way, something to ponder.  (It's also possible that it doesn't
make a measurable difference until we get to FRED, at which point we
have a set of fresh entry-points to write anyway, and a slight glimmer
of hope of not needing to pollute them with speculation workarounds...)

> Since we're going to run out of SCF_* bits soon and since the new flag
> is meaningful only in struct cpu_info's spec_ctrl_flags, we could choose
> to widen that field to 16 bits right away and then use bit 8 (or higher)
> for the purpose here.

I really don't think it matters.  We've got plenty of room, and the
flexibility to shuffle, in both structures.  It's absolutely not worth
trying to introduce asymmetries to save 1 bit.

> --- a/xen/arch/x86/include/asm/current.h
> +++ b/xen/arch/x86/include/asm/current.h
> @@ -55,9 +55,13 @@ struct cpu_info {
>  
>      /* See asm/spec_ctrl_asm.h for usage. */
>      unsigned int shadow_spec_ctrl;
> +    /*
> +     * spec_ctrl_flags can be accessed as a 32-bit entity and hence needs
> +     * placing suitably.

I'd suggest "is accessed as a 32-bit entity, and wants aligning suitably" ?

If I've followed the logic correctly.  (I can't say I was specifically
aware that the bit test instructions didn't have byte forms, but I
suspect such instruction forms would be very very niche.)
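
For illustration, the constraint as I understand it (a cut-down stand-in
for struct cpu_info, not the real layout):

```c
#include <stddef.h>
#include <stdint.h>

/* bt/btr/bts take 16/32/64-bit memory operands, so a byte-sized flags
 * field accessed with btrl wants placing such that the containing
 * aligned 32-bit word stays inside related state.  Here the flags byte
 * starts an aligned dword, and the rest of that dword is the two other
 * spec_ctrl bytes plus padding. */
struct demo_info {
    unsigned int shadow_spec_ctrl;
    uint8_t      spec_ctrl_flags;   /* first byte of an aligned dword */
    uint8_t      xen_spec_ctrl;
    uint8_t      last_spec_ctrl;
    uint8_t      pad;               /* remainder of the dword btrl touches */
};
```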

> +     */
> +    uint8_t      spec_ctrl_flags;
>      uint8_t      xen_spec_ctrl;
>      uint8_t      last_spec_ctrl;
> -    uint8_t      spec_ctrl_flags;
>  
>      /*
>       * The following field controls copying of the L4 page table of 64-bit
> --- a/xen/arch/x86/include/asm/spec_ctrl.h
> +++ b/xen/arch/x86/include/asm/spec_ctrl.h
> @@ -36,6 +36,8 @@
>  #define SCF_verw       (1 << 3)
>  #define SCF_ist_ibpb   (1 << 4)
>  #define SCF_entry_ibpb (1 << 5)
> +#define SCF_exit_ibpb_bit 6
> +#define SCF_exit_ibpb  (1 << SCF_exit_ibpb_bit)

One option to avoid the second define is to use ILOG2() with btrl.

Of all the common forms of doing this, it's the only one I'm aware of
which avoids needing the second define.
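
i.e. something like this (using __builtin_clz as a stand-in for Xen's
ILOG2(), and assuming the flag stays at bit 6):

```c
/* Derive the bit position from the mask at build time, so only the mask
 * needs a define.  ILOG2 here is a stand-in for Xen's macro. */
#define ILOG2(x) (31u - (unsigned int)__builtin_clz(x))

#define SCF_exit_ibpb (1u << 6)

/* The asm use would then be:  btrl $ILOG2(SCF_exit_ibpb), mem  */
```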

>  
>  /*
>   * The IST paths (NMI/#MC) can interrupt any arbitrary context.  Some
> --- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
> +++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
> @@ -117,6 +117,27 @@
>  .L\@_done:
>  .endm
>  
> +.macro DO_SPEC_CTRL_EXIT_IBPB disp=0
> +/*
> + * Requires %rsp=regs
> + * Clobbers %rax, %rcx, %rdx
> + *
> + * Conditionally issue IBPB if SCF_exit_ibpb is active.  The macro invocation
> + * may be followed by X86_BUG_IBPB_NO_RET workaround code.  The "disp" argument
> + * is to allow invocation sites to pass in the extra amount of code which needs
> + * skipping in case no action is necessary.
> + *
> + * The flag is a "one-shot" indicator, so it is being cleared at the same time.
> + */
> +    btrl    $SCF_exit_ibpb_bit, CPUINFO_spec_ctrl_flags(%rsp)
> +    jnc     .L\@_skip + (\disp)
> +    mov     $MSR_PRED_CMD, %ecx
> +    mov     $PRED_CMD_IBPB, %eax
> +    xor     %edx, %edx
> +    wrmsr
> +.L\@_skip:
> +.endm
> +
>  .macro DO_OVERWRITE_RSB tmp=rax
>  /*
>   * Requires nothing
> @@ -272,6 +293,14 @@
>  #define SPEC_CTRL_EXIT_TO_PV                                            \
>      ALTERNATIVE "",                                                     \
>          DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_PV;              \
> +    ALTERNATIVE __stringify(jmp PASTE(.Lscexitpv_done, __LINE__)),      \
> +        __stringify(DO_SPEC_CTRL_EXIT_IBPB                              \
> +                    disp=(PASTE(.Lscexitpv_done, __LINE__) -            \
> +                          PASTE(.Lscexitpv_rsb, __LINE__))),            \
> +        X86_FEATURE_IBPB_EXIT_PV;                                       \
> +PASTE(.Lscexitpv_rsb, __LINE__):                                        \
> +    ALTERNATIVE "", DO_OVERWRITE_RSB, X86_BUG_IBPB_NO_RET;              \
> +PASTE(.Lscexitpv_done, __LINE__):                                       \
>      DO_SPEC_CTRL_COND_VERW

What's wrong with the normal %= trick?  The use of __LINE__ makes this
hard to subsequently livepatch, so I'd prefer to avoid it if possible.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 21:11:46 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176121-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176121: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 21:11:38 +0000

flight 176121 xen-unstable real [real]
flight 176129 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176121/
http://logs.test-lab.xenproject.org/osstest/logs/176129/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail pass in 176129-retest
 test-amd64-i386-freebsd10-amd64 19 guest-localmigrate/x10 fail pass in 176129-retest
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 176129-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3b760245f74ab2022b1aa4da842c4545228c2e83
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    5 days
Failing since        176003  2023-01-20 17:40:27 Z    5 days   12 attempts
Testing same since   176121  2023-01-25 10:51:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1067 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 21:14:50 2023
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-pair
Message-Id: <E1pKn6N-00066g-5L@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 21:14:39 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-pair
testid guest-migrate/src_host/dst_host

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176130/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-pair.guest-migrate--src_host--dst_host.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-pair.guest-migrate--src_host--dst_host --summary-out=tmp/176130.bisection-summary --basis-template=175994 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-pair guest-migrate/src_host/dst_host
Searching for failure / basis pass:
 176121 fail [dst_host=elbling1,src_host=elbling0] / 175994 [dst_host=nocera0,src_host=nocera1] 175987 [dst_host=huxelrebe1,src_host=huxelrebe0] 175965 [dst_host=albana0,src_host=albana1] 175734 [dst_host=italia0,src_host=italia1] 175726 [dst_host=nocera1,src_host=nocera0] 175714 [dst_host=albana1,src_host=albana0] 175694 [dst_host=huxelrebe0,src_host=huxelrebe1] 175671 [dst_host=elbling0,src_host=elbling1] 175651 [dst_host=debina0,src_host=debina1] 175635 [dst_host=italia1,src_host=italia0] 175624 [dst_host=fiano0,src_host=fiano1] 175612 [dst_host=pinot1,src_host=pinot0] 175601 ok.
Failure / basis pass flights: 176121 / 175601
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 2b21cbbb339fb14414f357a6683b1df74c36fda2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#1cf02b05b27c48775a25699e61b93b814b9ae042-625eb5e96dc96aa7fddef59a08edae215527f19c git://xenbits.xen.org/xen.git#2b21cbbb339fb14414f357a6683b1df74c36fda2-3b760245f74ab2022b1aa4da842c4545228c2e83
Loaded 10003 nodes in revision graph
Searching for test results:
 175592 [dst_host=nobling1,src_host=nobling0]
 175601 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 2b21cbbb339fb14414f357a6683b1df74c36fda2
 175612 [dst_host=pinot1,src_host=pinot0]
 175624 [dst_host=fiano0,src_host=fiano1]
 175635 [dst_host=italia1,src_host=italia0]
 175651 [dst_host=debina0,src_host=debina1]
 175671 [dst_host=elbling0,src_host=elbling1]
 175694 [dst_host=huxelrebe0,src_host=huxelrebe1]
 175714 [dst_host=albana1,src_host=albana0]
 175720 [dst_host=nocera1,src_host=nocera0]
 175726 [dst_host=nocera1,src_host=nocera0]
 175734 [dst_host=italia0,src_host=italia1]
 175834 []
 175861 []
 175890 []
 175907 []
 175931 []
 175956 []
 175965 [dst_host=albana0,src_host=albana1]
 175987 [dst_host=huxelrebe1,src_host=huxelrebe0]
 175994 [dst_host=nocera0,src_host=nocera1]
 176003 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
 176011 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176025 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176035 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176042 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176048 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176056 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176062 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176076 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176091 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176112 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 2b21cbbb339fb14414f357a6683b1df74c36fda2
 176110 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 352c89f72ddb67b8d9d4e492203f8c77f85c8df1
 176114 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176117 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176118 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c ab5fa21c8d91f7057f0373ac63abc659f05b0c69
 176120 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 352c89f72ddb67b8d9d4e492203f8c77f85c8df1
 176123 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d99732f2b092173d8600fa818aee3fa51046bb0
 176124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176126 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176128 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176130 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176122 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f8fdceefbb1193ec81667eb40b83bc525cb71204
Searching for interesting versions
 Result found: flight 175601 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f, results HASH(0x55c7bafc14e8) HASH(0x55c7baff1430) HASH(0x55c7bafea1c8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 2b21cbbb339fb14414f357a6683b1df74c36fda2, results HASH(0x55c7bafcb3b8) HASH(0x55c7bafc9830) Result found: flight 176003 (fail), for basis failure (at ancestor ~1002)
 Repro found: flight 176112 (pass), for basis pass
 Repro found: flight 176121 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
No revisions left to test, checking graph state.
 Result found: flight 176117 (pass), for last pass
 Result found: flight 176124 (fail), for first failure
 Repro found: flight 176126 (pass), for last pass
 Repro found: flight 176127 (fail), for first failure
 Repro found: flight 176128 (pass), for last pass
 Repro found: flight 176130 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176130/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-pair.guest-migrate--src_host--dst_host.{dot,ps,png,html,svg}.
----------------------------------------
176130: tolerable ALL FAIL

flight 176130 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/176130/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-pair 26 guest-migrate/src_host/dst_host fail baseline untested


jobs:
 test-amd64-i386-pair                                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 21:56:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 21:56:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484596.751238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKnju-0006I1-1t; Wed, 25 Jan 2023 21:55:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484596.751238; Wed, 25 Jan 2023 21:55:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKnjt-0006Hu-VS; Wed, 25 Jan 2023 21:55:29 +0000
Received: by outflank-mailman (input) for mailman id 484596;
 Wed, 25 Jan 2023 21:55:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKnjt-0006Ho-2L
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 21:55:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f9745774-9cfa-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 22:55:26 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 776B261617;
 Wed, 25 Jan 2023 21:55:25 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 45DCDC433D2;
 Wed, 25 Jan 2023 21:55:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9745774-9cfa-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674683724;
	bh=11+YS4X+XuTHVoAhxZs3uahEcdGhI+UChCKBPLj63h4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=m5mxuaEzJVDELptAiXncQR0RH0C2faHVpX5k7miFSwLgvs3biZdfHBlU35eumllYF
	 FW6mY55A6RrbaoGbSdPj6E++6hXcdYijji4makH5EfYqNkEIzmkE/GACWbBxErXrXY
	 lEDB6sTPE/ocPi4xdxs6kG5yZfkXZNAHC+iccfvyznIIslDHkC3n+fTv9lEFRR5V+c
	 Mhf5Su9OaKF2TQzF2rbBQlJQsSAwaftoS6cK4QYe9unsdNuACbEyMzCHS2COTftC26
	 lapU0BfWtlrWDzScGz7ZIjgL5mfUpo76bo/trZ2pIbnXrmb59qfLXzYJH3FGqJjXUE
	 FtyV5e9cVUXcw==
Date: Wed, 25 Jan 2023 13:55:21 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Vikram Garhwal <vikram.garhwal@amd.com>
cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
    stefano.stabellini@amd.com, alex.bennee@linaro.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    Paolo Bonzini <pbonzini@redhat.com>, 
    Richard Henderson <richard.henderson@linaro.org>, 
    Eduardo Habkost <eduardo@habkost.net>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: Re: [QEMU][PATCH v4 04/10] xen-hvm: reorganize xen-hvm and move
 common function to xen-hvm-common
In-Reply-To: <20230125085407.7144-5-vikram.garhwal@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301251329520.1978264@ubuntu-linux-20-04-desktop>
References: <20230125085407.7144-1-vikram.garhwal@amd.com> <20230125085407.7144-5-vikram.garhwal@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Jan 2023, Vikram Garhwal wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> This patch does the following:
> 1. Creates arch_handle_ioreq() and arch_xen_set_memory(). This is done in
>     preparation for moving most of the xen-hvm code to an arch-neutral location:
>     move the x86-specific portion of xen_set_memory to arch_xen_set_memory,
>     and move handle_vmport_ioreq to arch_handle_ioreq.
> 
> 2. Pure code movement: move common functions to hw/xen/xen-hvm-common.c
>     Extract common functionalities from hw/i386/xen/xen-hvm.c and move them to
>     hw/xen/xen-hvm-common.c. These common functions are useful for creating
>     an IOREQ server.
> 
>     xen_hvm_init_pc() contains the architecture independent code for creating
>     and mapping an IOREQ server, connecting memory and IO listeners, initializing
>     a xen bus and registering backends. Moved this common xen code to a new
>     function xen_register_ioreq() which can be used by both x86 and ARM machines.
> 
>     Following functions are moved to hw/xen/xen-hvm-common.c:
>         xen_vcpu_eport(), xen_vcpu_ioreq(), xen_ram_alloc(), xen_set_memory(),
>         xen_region_add(), xen_region_del(), xen_io_add(), xen_io_del(),
>         xen_device_realize(), xen_device_unrealize(),
>         cpu_get_ioreq_from_shared_memory(), cpu_get_ioreq(), do_inp(),
>         do_outp(), rw_phys_req_item(), read_phys_req_item(),
>         write_phys_req_item(), cpu_ioreq_pio(), cpu_ioreq_move(),
>         cpu_ioreq_config(), handle_ioreq(), handle_buffered_iopage(),
>         handle_buffered_io(), cpu_handle_ioreq(), xen_main_loop_prepare(),
>         xen_hvm_change_state_handler(), xen_exit_notifier(),
>         xen_map_ioreq_server(), destroy_hvm_domain() and
>         xen_shutdown_fatal_error()
> 
> 3. Removed static type from below functions:
>     1. xen_region_add()
>     2. xen_region_del()
>     3. xen_io_add()
>     4. xen_io_del()
>     5. xen_device_realize()
>     6. xen_device_unrealize()
>     7. xen_hvm_change_state_handler()
>     8. cpu_ioreq_pio()
>     9. xen_exit_notifier()
> 
> 4. Replace TARGET_PAGE_SIZE with XC_PAGE_SIZE to match the page size with Xen.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>

One comment below

[...]

> +void xen_exit_notifier(Notifier *n, void *data)
> +{
> +    XenIOState *state = container_of(n, XenIOState, exit);
> +
> +    xen_destroy_ioreq_server(xen_domid, state->ioservid);

In the original code we had:

-    if (state->fres != NULL) {
-        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
-    }

Should we add it here?


I went through the manual process of comparing all the code additions
and deletions (not fun!) and everything checks out except for this.


> +    xenevtchn_close(state->xce_handle);
> +    xs_daemon_close(state->xenstore);
> +}


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 21:58:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 21:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484601.751249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKnn9-0006sI-GY; Wed, 25 Jan 2023 21:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484601.751249; Wed, 25 Jan 2023 21:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKnn9-0006sB-Dw; Wed, 25 Jan 2023 21:58:51 +0000
Received: by outflank-mailman (input) for mailman id 484601;
 Wed, 25 Jan 2023 21:58:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKnn8-0006rz-Az
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 21:58:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 71899ab7-9cfb-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 22:58:48 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0EBA8616A1;
 Wed, 25 Jan 2023 21:58:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5CE73C433EF;
 Wed, 25 Jan 2023 21:58:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71899ab7-9cfb-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674683926;
	bh=CctgY9rPc7Uype53cxREDxzCw01bCr9SqjFOC+BRSIY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WXjMkSoanXUQhAi3Rbv8aMAGAH1jFgCe/RYw7tGNwreaW4z8zAEE0dUk3cdV/8FBj
	 2o580OUMHmemvJOj24TkcvLwbBOIEpGiX77v3Hv9bXxVWf22p5sIVu6Em950pZ9CI9
	 QQmpVcQodzkO/45cEBt7bLxrV13tUDS0y8SBwmR1iXT4qvuxQFUsODOQOU4vqzyP/M
	 ZaHLFQ46DmZJtFoHoe1bxd3MfoZiJ/oolV55BzO1s9bd//XGqk76GZ+a0kQXiNNJke
	 ZLnP2qx/XrvQklmrqWWldW+AVjf9+BJmlzCAKYqUtYIyL5QKy0DrM3Z9GO8GiFX07F
	 9k6VlhIN+x6yg==
Date: Wed, 25 Jan 2023 13:58:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Vikram Garhwal <vikram.garhwal@amd.com>
cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
    stefano.stabellini@amd.com, alex.bennee@linaro.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [QEMU][PATCH v4 05/10] include/hw/xen/xen_common: return error
 from xen_create_ioreq_server
In-Reply-To: <20230125085407.7144-6-vikram.garhwal@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301251357590.1978264@ubuntu-linux-20-04-desktop>
References: <20230125085407.7144-1-vikram.garhwal@amd.com> <20230125085407.7144-6-vikram.garhwal@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Jan 2023, Vikram Garhwal wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> This is done to prepare for enabling xenpv support for the ARM architecture.
> On ARM it is possible to have a functioning xenpv machine with only the
> PV backends and no IOREQ server. If the IOREQ server creation fails,
> continue to the PV backends initialization.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

I know I am a co-author of the patch, but just for record-keeping, to
note that I also reviewed this patch:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  include/hw/xen/xen_common.h | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
> index 9a13a756ae..9ec69582b3 100644
> --- a/include/hw/xen/xen_common.h
> +++ b/include/hw/xen/xen_common.h
> @@ -467,9 +467,10 @@ static inline void xen_unmap_pcidev(domid_t dom,
>  {
>  }
>  
> -static inline void xen_create_ioreq_server(domid_t dom,
> -                                           ioservid_t *ioservid)
> +static inline int xen_create_ioreq_server(domid_t dom,
> +                                          ioservid_t *ioservid)
>  {
> +    return 0;
>  }
>  
>  static inline void xen_destroy_ioreq_server(domid_t dom,
> @@ -600,8 +601,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
>                                                    PCI_FUNC(pci_dev->devfn));
>  }
>  
> -static inline void xen_create_ioreq_server(domid_t dom,
> -                                           ioservid_t *ioservid)
> +static inline int xen_create_ioreq_server(domid_t dom,
> +                                          ioservid_t *ioservid)
>  {
>      int rc = xendevicemodel_create_ioreq_server(xen_dmod, dom,
>                                                  HVM_IOREQSRV_BUFIOREQ_ATOMIC,
> @@ -609,12 +610,14 @@ static inline void xen_create_ioreq_server(domid_t dom,
>  
>      if (rc == 0) {
>          trace_xen_ioreq_server_create(*ioservid);
> -        return;
> +        return rc;
>      }
>  
>      *ioservid = 0;
>      use_default_ioreq_server = true;
>      trace_xen_default_ioreq_server();
> +
> +    return rc;
>  }
>  
>  static inline void xen_destroy_ioreq_server(domid_t dom,
> -- 
> 2.17.0
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 22:01:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 22:01:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484606.751259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKnpw-0008Ho-Uo; Wed, 25 Jan 2023 22:01:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484606.751259; Wed, 25 Jan 2023 22:01:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKnpw-0008Hh-Rh; Wed, 25 Jan 2023 22:01:44 +0000
Received: by outflank-mailman (input) for mailman id 484606;
 Wed, 25 Jan 2023 22:01:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKnpv-0008Hb-RX
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 22:01:43 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d959978d-9cfb-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 23:01:42 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3546E616C2;
 Wed, 25 Jan 2023 22:01:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 79BA5C433D2;
 Wed, 25 Jan 2023 22:01:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d959978d-9cfb-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674684100;
	bh=Z8Z+kV8zvQPaDPI2dAWyQXmqy22O8fHjplm46zAzBBY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=OYQzzHlz6UEq1bihNNY79A4KseLIpdDOw/8yuhwDfoTgoIXbNoc3QyW7p2oBAb1+M
	 qRudygQYHXR53MBv6o1G3v+dvCtawgUbMWEnpdmbxPMrb1zw4ZupEF/ccb/tVFxsEw
	 4NHjdO9XomyyAI2mvFE5B0vBB+M5+p2vVgDAebSG6QZlSbv3dvvNDLE53BkxlopI+0
	 5fJpAn5wHvkHRPAnoX3Y1baotdFFzGL8rpmtAcwLn1TJZ+eX0Xh1Md+RbDyDMUiIMh
	 oBrh+OlWxWJzlciHoZkcpV4qWGl2K6Q5/SpnuooGPhp6Bjxg1Byax5dYcHY+USt+Xo
	 em3rWPl+NqwIg==
Date: Wed, 25 Jan 2023 14:01:37 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Vikram Garhwal <vikram.garhwal@amd.com>
cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
    stefano.stabellini@amd.com, alex.bennee@linaro.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [QEMU][PATCH v4 06/10] hw/xen/xen-hvm-common: skip ioreq creation
 on ioreq registration failure
In-Reply-To: <20230125085407.7144-7-vikram.garhwal@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301251400500.1978264@ubuntu-linux-20-04-desktop>
References: <20230125085407.7144-1-vikram.garhwal@amd.com> <20230125085407.7144-7-vikram.garhwal@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Jan 2023, Vikram Garhwal wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> On ARM it is possible to have a functioning xenpv machine with only the
> PV backends and no IOREQ server. If the IOREQ server creation fails, continue
> to the PV backends initialization.
> 
> Also, move the IOREQ registration and mapping subroutine to a new
> function, xen_do_ioreq_register().
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

As per my previous reply, even though I am listed as co-author, to track
that I did review this version of the patch:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  hw/xen/xen-hvm-common.c | 53 ++++++++++++++++++++++++++++-------------
>  1 file changed, 36 insertions(+), 17 deletions(-)
> 
> diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> index e748d8d423..94dbbe97ed 100644
> --- a/hw/xen/xen-hvm-common.c
> +++ b/hw/xen/xen-hvm-common.c
> @@ -777,25 +777,12 @@ err:
>      exit(1);
>  }
>  
> -void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
> -                        MemoryListener xen_memory_listener)
> +static void xen_do_ioreq_register(XenIOState *state,
> +                                           unsigned int max_cpus,
> +                                           MemoryListener xen_memory_listener)
>  {
>      int i, rc;
>  
> -    state->xce_handle = xenevtchn_open(NULL, 0);
> -    if (state->xce_handle == NULL) {
> -        perror("xen: event channel open");
> -        goto err;
> -    }
> -
> -    state->xenstore = xs_daemon_open();
> -    if (state->xenstore == NULL) {
> -        perror("xen: xenstore open");
> -        goto err;
> -    }
> -
> -    xen_create_ioreq_server(xen_domid, &state->ioservid);
> -
>      state->exit.notify = xen_exit_notifier;
>      qemu_add_exit_notifier(&state->exit);
>  
> @@ -859,12 +846,44 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
>      QLIST_INIT(&state->dev_list);
>      device_listener_register(&state->device_listener);
>  
> +    return;
> +
> +err:
> +    error_report("xen hardware virtual machine initialisation failed");
> +    exit(1);
> +}
> +
> +void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
> +                        MemoryListener xen_memory_listener)
> +{
> +    int rc;
> +
> +    state->xce_handle = xenevtchn_open(NULL, 0);
> +    if (state->xce_handle == NULL) {
> +        perror("xen: event channel open");
> +        goto err;
> +    }
> +
> +    state->xenstore = xs_daemon_open();
> +    if (state->xenstore == NULL) {
> +        perror("xen: xenstore open");
> +        goto err;
> +    }
> +
> +    rc = xen_create_ioreq_server(xen_domid, &state->ioservid);
> +    if (!rc) {
> +        xen_do_ioreq_register(state, max_cpus, xen_memory_listener);
> +    } else {
> +        warn_report("xen: failed to create ioreq server");
> +    }
> +
>      xen_bus_init();
>  
>      xen_register_backend(state);
>  
>      return;
> +
>  err:
> -    error_report("xen hardware virtual machine initialisation failed");
> +    error_report("xen hardware virtual machine backend registration failed");
>      exit(1);
>  }
> -- 
> 2.17.0
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 22:07:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 22:07:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484613.751268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKnvO-0000gc-N8; Wed, 25 Jan 2023 22:07:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484613.751268; Wed, 25 Jan 2023 22:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKnvO-0000gV-K9; Wed, 25 Jan 2023 22:07:22 +0000
Received: by outflank-mailman (input) for mailman id 484613;
 Wed, 25 Jan 2023 22:07:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKnvO-0000gP-3Q
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 22:07:22 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a2f81672-9cfc-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 23:07:20 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 838B6B81BE0;
 Wed, 25 Jan 2023 22:07:19 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2822EC433EF;
 Wed, 25 Jan 2023 22:07:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2f81672-9cfc-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674684438;
	bh=+67Nm9MEXf9tqfA8JJgsd8crv/UF0jOLjrPwb+n11Ac=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gX2egELc5SZdxp4rDaGX8Ol0GPtMmRl5zkdC1lek8jGKXzo/WlEzz9dOGDyWBeJ2J
	 rvugQsULNsrMQEGoTG6CSA9jeJvlk3tTuWmv/8uwxcsVMSni6aw/lKjv2nKJ+/XuSX
	 dleZRbL7JhfPuuac7yck8DtnBFs5vizHGSzcJAFG3bWlwL70YOU2nunxYUdqIajnjt
	 D5lKT90yWlBrWrnO+qGTuy02g4NMasnmKlOJNfVz+e8bzPTkfmJe64r8vv1kPalDJ8
	 CeW1u8F74Se5eLiZtJAAutrsU/RAiG9GI3JwVYQFaiEeeedTDY3wdzzQ+Kwavkg3bj
	 Nv8JVqzFZirfQ==
Date: Wed, 25 Jan 2023 14:07:15 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Vikram Garhwal <vikram.garhwal@amd.com>
cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
    stefano.stabellini@amd.com, alex.bennee@linaro.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [QEMU][PATCH v4 07/10] hw/xen/xen-hvm-common: Use g_new and
 error_setg_errno
In-Reply-To: <20230125085407.7144-8-vikram.garhwal@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301251406170.1978264@ubuntu-linux-20-04-desktop>
References: <20230125085407.7144-1-vikram.garhwal@amd.com> <20230125085407.7144-8-vikram.garhwal@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Jan 2023, Vikram Garhwal wrote:
> Replace g_malloc with g_new and perror with error_setg_errno.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  hw/xen/xen-hvm-common.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
> index 94dbbe97ed..01c8ec1956 100644
> --- a/hw/xen/xen-hvm-common.c
> +++ b/hw/xen/xen-hvm-common.c
> @@ -34,7 +34,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
>      trace_xen_ram_alloc(ram_addr, size);
>  
>      nr_pfn = size >> TARGET_PAGE_BITS;
> -    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
> +    pfn_list = g_new(xen_pfn_t, nr_pfn);
>  
>      for (i = 0; i < nr_pfn; i++) {
>          pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
> @@ -726,7 +726,7 @@ void destroy_hvm_domain(bool reboot)
>              return;
>          }
>          if (errno != ENOTTY /* old Xen */) {
> -            perror("xendevicemodel_shutdown failed");
> +            error_report("xendevicemodel_shutdown failed with error %d", errno);

You can use strerror(errno), here and below.
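For illustration only (a standalone sketch, not QEMU's error_report() API, and the helper name is hypothetical): strerror() turns the errno value into a human-readable message, which reads better than printing the raw error number:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper (not QEMU code): report a failure with a readable
 * cause string instead of a bare errno number. */
static void report_failure(const char *what, int err)
{
    fprintf(stderr, "%s failed: %s\n", what, strerror(err));
}
```

So report_failure("xendevicemodel_shutdown", errno) would print the textual description of errno rather than its numeric value.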

Either way:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



>          }
>          /* well, try the old thing then */
>      }
> @@ -797,7 +797,7 @@ static void xen_do_ioreq_register(XenIOState *state,
>      }
>  
>      /* Note: cpus is empty at this point in init */
> -    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
> +    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
>  
>      rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
>      if (rc < 0) {
> @@ -806,7 +806,7 @@ static void xen_do_ioreq_register(XenIOState *state,
>          goto err;
>      }
>  
> -    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
> +    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
>  
>      /* FIXME: how about if we overflow the page here? */
>      for (i = 0; i < max_cpus; i++) {
> @@ -860,13 +860,13 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
>  
>      state->xce_handle = xenevtchn_open(NULL, 0);
>      if (state->xce_handle == NULL) {
> -        perror("xen: event channel open");
> +        error_report("xen: event channel open failed with error %d", errno);
>          goto err;
>      }
>  
>      state->xenstore = xs_daemon_open();
>      if (state->xenstore == NULL) {
> -        perror("xen: xenstore open");
> +        error_report("xen: xenstore open failed with error %d", errno);
>          goto err;
>      }
>  
> -- 
> 2.17.0
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 22:20:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 22:20:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484619.751278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKo7h-00033N-RV; Wed, 25 Jan 2023 22:20:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484619.751278; Wed, 25 Jan 2023 22:20:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKo7h-00033G-OZ; Wed, 25 Jan 2023 22:20:05 +0000
Received: by outflank-mailman (input) for mailman id 484619;
 Wed, 25 Jan 2023 22:20:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V6li=5W=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pKo7g-0002mN-Cv
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 22:20:04 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6898e916-9cfe-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 23:20:01 +0100 (CET)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-523-Eg2ockkrMBCcGSKv08KftQ-1; Wed, 25 Jan 2023 17:19:55 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id AE2F43C0CD39;
 Wed, 25 Jan 2023 22:19:53 +0000 (UTC)
Received: from localhost (unknown [10.39.192.105])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B4F4CC15BA0;
 Wed, 25 Jan 2023 22:19:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6898e916-9cfe-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1674685200;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SXWTS7PEfK4jfis5AMTJ0jfF1+NWNTH+CYWW0b9TfAk=;
	b=IVY4iK60xsypQIQzi6BKwM4EAFcXxDsEmaTwU9c1UiyJaR4RTOjldQ4B9NxX7VTpqpxE35
	exrVAU2L4q2yijlNQtBbQDfAp9J0QLIJ8k0ZevUvCfgMhvAWg/kb1QKMJ6g1n8QVruFIX5
	qkqBv6Q7y0qC4mgmBdFtoFdLOiYrezE=
X-MC-Unique: Eg2ockkrMBCcGSKv08KftQ-1
Date: Wed, 25 Jan 2023 17:19:49 -0500
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Alexander Bulekov <alxndr@bu.edu>
Cc: qemu-devel@nongnu.org,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Mauro Matteo Cascella <mcascell@redhat.com>,
	Peter Xu <peterx@redhat.com>, Jason Wang <jasowang@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>, Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>, Bandan Das <bsd@redhat.com>,
	"Edgar E . Iglesias" <edgar.iglesias@gmail.com>,
	Darren Kenny <darren.kenny@oracle.com>,
	Bin Meng <bin.meng@windriver.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Daniel P =?iso-8859-1?Q?=2E_Berrang=E9?= <berrange@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Jon Maloy <jmaloy@redhat.com>, Siqi Chen <coc.cyqh@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>, Amit Shah <amit@kernel.org>,
	=?iso-8859-1?Q?Marc-Andr=E9?= Lureau <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Keith Busch <kbusch@kernel.org>, Klaus Jensen <its@irrelevant.dk>,
	Fam Zheng <fam@euphon.net>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	"open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>,
	"open list:virtio-blk" <qemu-block@nongnu.org>,
	"open list:i.MX31 (kzm)" <qemu-arm@nongnu.org>,
	"open list:Old World (g3beige)" <qemu-ppc@nongnu.org>
Subject: Re: [PATCH v4 3/3] hw: replace most qemu_bh_new calls with
 qemu_bh_new_guarded
Message-ID: <Y9GrBTALs18YkSKG@fedora>
References: <20230119070308.321653-1-alxndr@bu.edu>
 <20230119070308.321653-4-alxndr@bu.edu>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="M2N6eqImXDUC6qNs"
Content-Disposition: inline
In-Reply-To: <20230119070308.321653-4-alxndr@bu.edu>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--M2N6eqImXDUC6qNs
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Jan 19, 2023 at 02:03:08AM -0500, Alexander Bulekov wrote:
> This protects devices from bh->mmio reentrancy issues.
> 
> Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
> ---
>  hw/9pfs/xen-9p-backend.c        | 4 +++-
>  hw/block/dataplane/virtio-blk.c | 3 ++-
>  hw/block/dataplane/xen-block.c  | 5 +++--
>  hw/block/virtio-blk.c           | 5 +++--
>  hw/char/virtio-serial-bus.c     | 3 ++-
>  hw/display/qxl.c                | 9 ++++++---
>  hw/display/virtio-gpu.c         | 6 ++++--
>  hw/ide/ahci.c                   | 3 ++-
>  hw/ide/core.c                   | 3 ++-
>  hw/misc/imx_rngc.c              | 6 ++++--
>  hw/misc/macio/mac_dbdma.c       | 2 +-
>  hw/net/virtio-net.c             | 3 ++-
>  hw/nvme/ctrl.c                  | 6 ++++--
>  hw/scsi/mptsas.c                | 3 ++-
>  hw/scsi/scsi-bus.c              | 3 ++-
>  hw/scsi/vmw_pvscsi.c            | 3 ++-
>  hw/usb/dev-uas.c                | 3 ++-
>  hw/usb/hcd-dwc2.c               | 3 ++-
>  hw/usb/hcd-ehci.c               | 3 ++-
>  hw/usb/hcd-uhci.c               | 2 +-
>  hw/usb/host-libusb.c            | 6 ++++--
>  hw/usb/redirect.c               | 6 ++++--
>  hw/usb/xen-usb.c                | 3 ++-
>  hw/virtio/virtio-balloon.c      | 5 +++--
>  hw/virtio/virtio-crypto.c       | 3 ++-
>  25 files changed, 66 insertions(+), 35 deletions(-)

Should scripts/checkpatch.pl complain when qemu_bh_new() or aio_bh_new()
are called from hw/? Adding a check is important so new instances cannot
be added accidentally in the future.
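checkpatch.pl itself is written in Perl; purely as a sketch of the matching logic such a rule might use (the function name and exact patterns are hypothetical), the idea is to flag raw bottom-half allocations in device code under hw/:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch of a checkpatch-style rule: flag raw qemu_bh_new()
 * or aio_bh_new() calls in device code under hw/, where the _guarded
 * variants should be used instead. */
static bool has_unguarded_bh_call(const char *path, const char *line)
{
    if (strncmp(path, "hw/", 3) != 0) {
        return false;   /* only device code needs the guarded variant */
    }
    /* The trailing '(' keeps qemu_bh_new_guarded() from matching. */
    return strstr(line, "qemu_bh_new(") != NULL ||
           strstr(line, "aio_bh_new(") != NULL;
}
```

A real checkpatch rule would also need a way to whitelist deliberate exceptions, but the path prefix plus the trailing parenthesis already distinguishes the guarded calls from the raw ones.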

Stefan

--M2N6eqImXDUC6qNs
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmPRqwUACgkQnKSrs4Gr
c8gRpgf/XSMSSsqXecEYX86MfvLzEtDZEi3FpITflNUsNp60gUR3RxhOHYM1uYKt
JJsD58pqNmAqE+w3Yp3IfsHiqN2/Nn4M11DhA+LTtr/aOBDj2Avtn8cjlD9B/9sv
pdBlSmT9qdqNSMV0Vf3PeTQoFPgO0HfszA90SWOxVtRSPY5+I0ogrcBQnF9CniKP
ckWq3++62BxEnQDD74thjGagTPexUnMER/G5RGu7bEHZZPnVCUZSrRN413f5eco2
8hJZwVfvUmr/28Pn57ShdIm0T4VAr8A2T4+BpaGpc5oiBL5Jxblkj7J8+A78tv0v
jeSYTaMqrl98ceiYFrymlZCjC9aaOQ==
=MJ1Z
-----END PGP SIGNATURE-----

--M2N6eqImXDUC6qNs--



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 22:20:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 22:20:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484620.751289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKo7k-0003Mw-2l; Wed, 25 Jan 2023 22:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484620.751289; Wed, 25 Jan 2023 22:20:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKo7j-0003Mp-W9; Wed, 25 Jan 2023 22:20:07 +0000
Received: by outflank-mailman (input) for mailman id 484620;
 Wed, 25 Jan 2023 22:20:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V6li=5W=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pKo7j-0002mN-2E
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 22:20:07 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6ab7840e-9cfe-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 23:20:05 +0100 (CET)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-653-dZa9kRTuPoi_YhvH_rzgJQ-1; Wed, 25 Jan 2023 17:19:59 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8D7D7100F903;
 Wed, 25 Jan 2023 22:19:58 +0000 (UTC)
Received: from localhost (unknown [10.39.192.105])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B172F1121330;
 Wed, 25 Jan 2023 22:19:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ab7840e-9cfe-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1674685204;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Na3xe+QZCW01A+v/Yj9/ftxjQvjcENPbaW+sNyyenXI=;
	b=hjaRrraQwN1OUa+3jPir2N2v2uRYvv1fwKVHsz5Y2ogcLPnV7nNcFnIBgb3cagFlPZWDMH
	U5hLWDwq22ZPRp2D9lb7EGTqC74KxbGEM78uOopLse5dW+Aa2y11ltFuP9e+SO3iUdc7NI
	0cAaRkA18kw8ZiUll34Y0PVtfEbeenQ=
X-MC-Unique: dZa9kRTuPoi_YhvH_rzgJQ-1
Date: Wed, 25 Jan 2023 17:19:55 -0500
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Alexander Bulekov <alxndr@bu.edu>
Cc: qemu-devel@nongnu.org,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Mauro Matteo Cascella <mcascell@redhat.com>,
	Peter Xu <peterx@redhat.com>, Jason Wang <jasowang@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>, Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>, Bandan Das <bsd@redhat.com>,
	"Edgar E . Iglesias" <edgar.iglesias@gmail.com>,
	Darren Kenny <darren.kenny@oracle.com>,
	Bin Meng <bin.meng@windriver.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Daniel P =?iso-8859-1?Q?=2E_Berrang=E9?= <berrange@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Jon Maloy <jmaloy@redhat.com>, Siqi Chen <coc.cyqh@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>, Amit Shah <amit@kernel.org>,
	=?iso-8859-1?Q?Marc-Andr=E9?= Lureau <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Keith Busch <kbusch@kernel.org>, Klaus Jensen <its@irrelevant.dk>,
	Fam Zheng <fam@euphon.net>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	"open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>,
	"open list:virtio-blk" <qemu-block@nongnu.org>,
	"open list:i.MX31 (kzm)" <qemu-arm@nongnu.org>,
	"open list:Old World (g3beige)" <qemu-ppc@nongnu.org>
Subject: Re: [PATCH v4 3/3] hw: replace most qemu_bh_new calls with
 qemu_bh_new_guarded
Message-ID: <Y9GrC87Nbp6ViSBj@fedora>
References: <20230119070308.321653-1-alxndr@bu.edu>
 <20230119070308.321653-4-alxndr@bu.edu>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="PR/kp6ea097A6OkX"
Content-Disposition: inline
In-Reply-To: <20230119070308.321653-4-alxndr@bu.edu>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3


--PR/kp6ea097A6OkX
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Jan 19, 2023 at 02:03:08AM -0500, Alexander Bulekov wrote:
> This protects devices from bh->mmio reentrancy issues.
> 
> Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
> ---
>  hw/9pfs/xen-9p-backend.c        | 4 +++-
>  hw/block/dataplane/virtio-blk.c | 3 ++-
>  hw/block/dataplane/xen-block.c  | 5 +++--
>  hw/block/virtio-blk.c           | 5 +++--
>  hw/char/virtio-serial-bus.c     | 3 ++-
>  hw/display/qxl.c                | 9 ++++++---
>  hw/display/virtio-gpu.c         | 6 ++++--
>  hw/ide/ahci.c                   | 3 ++-
>  hw/ide/core.c                   | 3 ++-
>  hw/misc/imx_rngc.c              | 6 ++++--
>  hw/misc/macio/mac_dbdma.c       | 2 +-
>  hw/net/virtio-net.c             | 3 ++-
>  hw/nvme/ctrl.c                  | 6 ++++--
>  hw/scsi/mptsas.c                | 3 ++-
>  hw/scsi/scsi-bus.c              | 3 ++-
>  hw/scsi/vmw_pvscsi.c            | 3 ++-
>  hw/usb/dev-uas.c                | 3 ++-
>  hw/usb/hcd-dwc2.c               | 3 ++-
>  hw/usb/hcd-ehci.c               | 3 ++-
>  hw/usb/hcd-uhci.c               | 2 +-
>  hw/usb/host-libusb.c            | 6 ++++--
>  hw/usb/redirect.c               | 6 ++++--
>  hw/usb/xen-usb.c                | 3 ++-
>  hw/virtio/virtio-balloon.c      | 5 +++--
>  hw/virtio/virtio-crypto.c       | 3 ++-
>  25 files changed, 66 insertions(+), 35 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

--PR/kp6ea097A6OkX
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmPRqwsACgkQnKSrs4Gr
c8j5WQf9F8Eg3nUxaGWZRHD3I8wFILb8NIBkmrWVzzVmfcZcxecQEQX/AJnoFfSP
7SY83PXVaWxyH5HHtNQMGMMchQ2bMO4m/8Rci3LPGKgDkauPzWbQVdj4mqkODrnl
/T+qIamwv5Zu7ddBh68Fi5qnA9OUGc6ycrKaQ0tDjA0xQ9j2ubdIw3i+KLLuUKLo
woyYb5kim7fMMt/1kVhUOM21c85TsFqe1hsyUjkbWN5fO3JifQPwoFjpYvNWDHdu
ysmdkchT2ekz2COepAVv5yc7yWI1ID8r7i//3xKrA486/GIm3XickhVNWoYNoU0e
Qo53IYuDqiMcWlTjSsfBJbq4CQa5iw==
=f/lH
-----END PGP SIGNATURE-----

--PR/kp6ea097A6OkX--



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 22:20:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 22:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484622.751299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKo85-0003s1-Ca; Wed, 25 Jan 2023 22:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484622.751299; Wed, 25 Jan 2023 22:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKo85-0003qw-9R; Wed, 25 Jan 2023 22:20:29 +0000
Received: by outflank-mailman (input) for mailman id 484622;
 Wed, 25 Jan 2023 22:20:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a83O=5W=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pKo83-0002mN-D9
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 22:20:27 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 768c0cf3-9cfe-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 23:20:25 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id BBAAB616A0;
 Wed, 25 Jan 2023 22:20:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BB66FC433EF;
 Wed, 25 Jan 2023 22:20:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 768c0cf3-9cfe-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674685223;
	bh=nM4CDPgkZO79xaze5PzbZuLPxNC5muBKuCXpjbrA80Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eNQUn3AqEr00S/f7FAjrqFGr7giK78v9HDuKttvf3Ev4uKCzJ+a8aV2ovvJWFYbSS
	 Z3J+rhvpdGe2xaSedDcRtvwB6vXngxmzRHw35K+F194v2HV30HC+bTcuruZXWaY8yF
	 wm7hK3bHP5pED6bg889/Zt5aSaPIk54QKYh1Vzj4N1k5oZ5AkzfZmwkgZ0y9nYSwC9
	 n4jacnhe0tfrreaTrnUp6hJKZy4aLwPSUKzsTqeTzBYLAan4PVh7mKHRWMba5pPToC
	 zaFvqMhx6oo/sIA/F6u/WoRevnqa+Ekqx/Uwmcv0mGs/NWDSb8otRvneISX/XWDNZj
	 SbLN0eQyLfR7Q==
Date: Wed, 25 Jan 2023 14:20:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Vikram Garhwal <vikram.garhwal@amd.com>
cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
    stefano.stabellini@amd.com, alex.bennee@linaro.org, 
    Peter Maydell <peter.maydell@linaro.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    "open list:ARM TCG CPUs" <qemu-arm@nongnu.org>
Subject: Re: [QEMU][PATCH v4 09/10] hw/arm: introduce xenpvh machine
In-Reply-To: <20230125085407.7144-10-vikram.garhwal@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301251410440.1978264@ubuntu-linux-20-04-desktop>
References: <20230125085407.7144-1-vikram.garhwal@amd.com> <20230125085407.7144-10-vikram.garhwal@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Jan 2023, Vikram Garhwal wrote:
> Add a new machine, xenpvh, which creates an IOREQ server to
> register/connect with the Xen hypervisor.
> 
> Optional: when CONFIG_TPM is enabled, it also creates a tpm-tis-device,
> adds a TPM emulator, and connects to swtpm running on the host machine
> via a chardev socket, to support TPM functionality for a guest domain.
> 
> Extra command line for aarch64 xenpvh QEMU to connect to swtpm:
>     -chardev socket,id=chrtpm,path=/tmp/myvtpm2/swtpm-sock \
>     -tpmdev emulator,id=tpm0,chardev=chrtpm \
>     -machine tpm-base-addr=0x0c000000 \
> 
> swtpm implements a TPM software emulator (TPM 1.2 and TPM 2.0) built on
> libtpms and provides access to TPM functionality over socket, chardev and
> CUSE interfaces.
> Github repo: https://github.com/stefanberger/swtpm
> Example for starting swtpm on host machine:
>     mkdir /tmp/vtpm2
>     swtpm socket --tpmstate dir=/tmp/vtpm2 \
>     --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> ---
>  docs/system/arm/xenpvh.rst    |  34 +++++++
>  docs/system/target-arm.rst    |   1 +
>  hw/arm/meson.build            |   2 +
>  hw/arm/xen_arm.c              | 184 ++++++++++++++++++++++++++++++++++
>  include/hw/arm/xen_arch_hvm.h |   9 ++
>  include/hw/xen/arch_hvm.h     |   2 +
>  6 files changed, 232 insertions(+)
>  create mode 100644 docs/system/arm/xenpvh.rst
>  create mode 100644 hw/arm/xen_arm.c
>  create mode 100644 include/hw/arm/xen_arch_hvm.h
> 
> diff --git a/docs/system/arm/xenpvh.rst b/docs/system/arm/xenpvh.rst
> new file mode 100644
> index 0000000000..e1655c7ab8
> --- /dev/null
> +++ b/docs/system/arm/xenpvh.rst
> @@ -0,0 +1,34 @@
> +XENPVH (``xenpvh``)
> +=========================================
> +This machine creates an IOREQ server to register/connect with the Xen Hypervisor.
> +
> +When TPM is enabled, this machine also creates a tpm-tis-device at a
> +user-provided TPM base address, adds a TPM emulator and connects to a swtpm
> +application running on the host machine via a chardev socket. This enables
> +xenpvh to support TPM functionalities for a guest domain.
> +
> +More information about TPM use and installing swtpm linux application can be
> +found at: docs/specs/tpm.rst.
> +
> +Example for starting swtpm on host machine:
> +.. code-block:: console
> +
> +    mkdir /tmp/vtpm2
> +    swtpm socket --tpmstate dir=/tmp/vtpm2 \
> +    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
> +
> +Sample QEMU xenpvh commands for running and connecting with Xen:
> +.. code-block:: console
> +
> +    qemu-system-aarch64 -xen-domid 1 \
> +    -chardev socket,id=libxl-cmd,path=qmp-libxl-1,server=on,wait=off \
> +    -mon chardev=libxl-cmd,mode=control \
> +    -chardev socket,id=libxenstat-cmd,path=qmp-libxenstat-1,server=on,wait=off \
> +    -mon chardev=libxenstat-cmd,mode=control \
> +    -xen-attach -name guest0 -vnc none -display none -nographic \
> +    -machine xenpvh -m 1301 \
> +    -chardev socket,id=chrtpm,path=/tmp/vtpm2/swtpm-sock \
> +    -tpmdev emulator,id=tpm0,chardev=chrtpm -machine tpm-base-addr=0x0C000000
> +
> +In the above QEMU command, the last two lines connect the xenpvh QEMU to
> +swtpm via a chardev socket.
> diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
> index 91ebc26c6d..af8d7c77d6 100644
> --- a/docs/system/target-arm.rst
> +++ b/docs/system/target-arm.rst
> @@ -106,6 +106,7 @@ undocumented; you can get a complete list by running
>     arm/stm32
>     arm/virt
>     arm/xlnx-versal-virt
> +   arm/xenpvh
>  
>  Emulated CPU architecture support
>  =================================
> diff --git a/hw/arm/meson.build b/hw/arm/meson.build
> index b036045603..06bddbfbb8 100644
> --- a/hw/arm/meson.build
> +++ b/hw/arm/meson.build
> @@ -61,6 +61,8 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true: files('fsl-imx7.c', 'mcimx7d-sabre.
>  arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
>  arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
>  arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
> +arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
> +arm_ss.add_all(xen_ss)
>  
>  softmmu_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmu-common.c'))
>  softmmu_ss.add(when: 'CONFIG_EXYNOS4', if_true: files('exynos4_boards.c'))
> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> new file mode 100644
> index 0000000000..12b19e3609
> --- /dev/null
> +++ b/hw/arm/xen_arm.c
> @@ -0,0 +1,184 @@
> +/*
> + * QEMU ARM Xen PV Machine
                   ^ PVH


> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to deal
> + * in the Software without restriction, including without limitation the rights
> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> + * copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> + * THE SOFTWARE.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/error-report.h"
> +#include "qapi/qapi-commands-migration.h"
> +#include "qapi/visitor.h"
> +#include "hw/boards.h"
> +#include "hw/sysbus.h"
> +#include "sysemu/block-backend.h"
> +#include "sysemu/tpm_backend.h"
> +#include "sysemu/sysemu.h"
> +#include "hw/xen/xen-legacy-backend.h"
> +#include "hw/xen/xen-hvm-common.h"
> +#include "sysemu/tpm.h"
> +#include "hw/xen/arch_hvm.h"
> +
> +#define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
> +OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
> +
> +static MemoryListener xen_memory_listener = {
> +    .region_add = xen_region_add,
> +    .region_del = xen_region_del,
> +    .log_start = NULL,
> +    .log_stop = NULL,
> +    .log_sync = NULL,
> +    .log_global_start = NULL,
> +    .log_global_stop = NULL,
> +    .priority = 10,
> +};
> +
> +struct XenArmState {
> +    /*< private >*/
> +    MachineState parent;
> +
> +    XenIOState *state;
> +
> +    struct {
> +        uint64_t tpm_base_addr;
> +    } cfg;
> +};
> +
> +void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
> +{
> +    hw_error("Invalid ioreq type 0x%x\n", req->type);
> +
> +    return;
> +}
> +
> +void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
> +                         bool add)
> +{
> +}
> +
> +void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
> +{
> +}
> +
> +void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
> +{
> +}
> +
> +#ifdef CONFIG_TPM
> +static void xen_enable_tpm(XenArmState *xam)
> +{
> +    Error *errp = NULL;
> +    DeviceState *dev;
> +    SysBusDevice *busdev;
> +
> +    TPMBackend *be = qemu_find_tpm_be("tpm0");
> +    if (be == NULL) {
> +        DPRINTF("Couldn't find the backend for tpm0\n");
> +        return;
> +    }
> +    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
> +    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
> +    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
> +    busdev = SYS_BUS_DEVICE(dev);
> +    sysbus_realize_and_unref(busdev, &error_fatal);
> +    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
> +
> +    DPRINTF("Connected tpmdev at address 0x%lx\n", xam->cfg.tpm_base_addr);
> +}
> +#endif
> +
> +static void xen_arm_init(MachineState *machine)
> +{
> +    XenArmState *xam = XEN_ARM(machine);
> +
> +    xam->state =  g_new0(XenIOState, 1);
> +
> +    xen_register_ioreq(xam->state, machine->smp.cpus, xen_memory_listener);
> +
> +#ifdef CONFIG_TPM
> +    if (xam->cfg.tpm_base_addr) {
> +        xen_enable_tpm(xam);
> +    } else {
> +        DPRINTF("tpm-base-addr is not provided. TPM will not be enabled\n");
> +    }

I would remove the "else", we already have a DPRINTF at the end of
xen_enable_tpm.


> +#endif
> +
> +    return;

the return is unnecessary


> +}
> +
> +#ifdef CONFIG_TPM
> +static void xen_arm_get_tpm_base_addr(Object *obj, Visitor *v,
> +                                      const char *name, void *opaque,
> +                                      Error **errp)
> +{
> +    XenArmState *xam = XEN_ARM(obj);
> +    uint64_t value = xam->cfg.tpm_base_addr;
> +
> +    visit_type_uint64(v, name, &value, errp);
> +}
> +
> +static void xen_arm_set_tpm_base_addr(Object *obj, Visitor *v,
> +                                      const char *name, void *opaque,
> +                                      Error **errp)
> +{
> +    XenArmState *xam = XEN_ARM(obj);
> +    uint64_t value;
> +
> +    if (!visit_type_uint64(v, name, &value, errp)) {
> +        return;
> +    }
> +
> +    xam->cfg.tpm_base_addr = value;
> +}
> +#endif
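
The tpm-base-addr getter/setter above pushes the value through QEMU's visitor
interface in both directions. The round trip can be sketched in standalone C
with simplified stand-in types (Visitor, XenArmStateDemo and this
visit_type_uint64 are illustrative stand-ins, not the real QOM/visitor API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for QEMU's Visitor and XenArmState, to show the data flow only. */
typedef struct Visitor {
    uint64_t slot;   /* the "wire" value the visitor carries */
    bool is_set;     /* true for a property write, false for a read */
} Visitor;

typedef struct XenArmStateDemo {
    struct { uint64_t tpm_base_addr; } cfg;
} XenArmStateDemo;

/* visit_type_uint64 either reads from or writes to the visitor "wire". */
static bool visit_type_uint64(Visitor *v, uint64_t *value)
{
    if (v->is_set) {
        *value = v->slot;    /* property write: wire -> object */
    } else {
        v->slot = *value;    /* property read: object -> wire */
    }
    return true;
}

static void get_tpm_base_addr(XenArmStateDemo *xam, Visitor *v)
{
    uint64_t value = xam->cfg.tpm_base_addr;

    v->is_set = false;
    visit_type_uint64(v, &value);
}

static void set_tpm_base_addr(XenArmStateDemo *xam, Visitor *v)
{
    uint64_t value;

    v->is_set = true;
    if (!visit_type_uint64(v, &value)) {
        return;
    }
    xam->cfg.tpm_base_addr = value;
}
```

A read copies the object field out to the visitor; a write copies the visitor
value into xam->cfg.tpm_base_addr, the same direction split the two callbacks
above implement.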
> +
> +static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
> +{
> +
> +    MachineClass *mc = MACHINE_CLASS(oc);
> +    mc->desc = "Xen Para-virtualized PC";
> +    mc->init = xen_arm_init;
> +    mc->max_cpus = 1;
> +    mc->default_machine_opts = "accel=xen";
> +
> +#ifdef CONFIG_TPM
> +    object_class_property_add(oc, "tpm-base-addr", "uint64_t",
> +                              xen_arm_get_tpm_base_addr,
> +                              xen_arm_set_tpm_base_addr,
> +                              NULL, NULL);
> +    object_class_property_set_description(oc, "tpm-base-addr",
> +                                          "Set Base address for TPM device.");
> +
> +    machine_class_allow_dynamic_sysbus_dev(mc, TYPE_TPM_TIS_SYSBUS);
> +#endif
> +}
> +
> +static const TypeInfo xen_arm_machine_type = {
> +    .name = TYPE_XEN_ARM,
> +    .parent = TYPE_MACHINE,
> +    .class_init = xen_arm_machine_class_init,
> +    .instance_size = sizeof(XenArmState),
> +};
> +
> +static void xen_arm_machine_register_types(void)
> +{
> +    type_register_static(&xen_arm_machine_type);
> +}
> +
> +type_init(xen_arm_machine_register_types)
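
type_init() on the last line boils down to running the registration function
before main() and recording the TypeInfo. A standalone sketch of that
mechanism (TypeInfoDemo and the registry are simplified stand-ins for QOM,
using the GCC/Clang constructor attribute that QEMU's module init relies on):

```c
#include <stddef.h>

/* Simplified stand-in for QEMU's type registry, not the real QOM code. */
typedef struct TypeInfoDemo {
    const char *name;
    const char *parent;
} TypeInfoDemo;

static const TypeInfoDemo *registry[8];
static size_t registry_len;

static void type_register_static(const TypeInfoDemo *info)
{
    registry[registry_len++] = info;
}

static const TypeInfoDemo xen_arm_machine_type = {
    .name = "xenpvh-machine",   /* stand-in for MACHINE_TYPE_NAME("xenpvh") */
    .parent = "machine",
};

/* Rough equivalent of type_init(xen_arm_machine_register_types): the
 * constructor attribute makes this run before main(). */
__attribute__((constructor))
static void xen_arm_machine_register_types(void)
{
    type_register_static(&xen_arm_machine_type);
}
```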
> diff --git a/include/hw/arm/xen_arch_hvm.h b/include/hw/arm/xen_arch_hvm.h
> new file mode 100644
> index 0000000000..8fd645e723
> --- /dev/null
> +++ b/include/hw/arm/xen_arch_hvm.h
> @@ -0,0 +1,9 @@
> +#ifndef HW_XEN_ARCH_ARM_HVM_H
> +#define HW_XEN_ARCH_ARM_HVM_H
> +
> +#include <xen/hvm/ioreq.h>
> +void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
> +void arch_xen_set_memory(XenIOState *state,
> +                         MemoryRegionSection *section,
> +                         bool add);
> +#endif
> diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
> index 26674648d8..c7c515220d 100644
> --- a/include/hw/xen/arch_hvm.h
> +++ b/include/hw/xen/arch_hvm.h
> @@ -1,3 +1,5 @@
>  #if defined(TARGET_I386) || defined(TARGET_X86_64)
>  #include "hw/i386/xen_arch_hvm.h"
> +#elif defined(TARGET_ARM) || defined(TARGET_AARCH64)
> +#include "hw/arm/xen_arch_hvm.h"
>  #endif
> -- 
> 2.17.0
> 


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 22:39:12 2023
Message-ID: <35e561e8-df9e-5ca8-7367-07db3388b0ac@amd.com>
Date: Wed, 25 Jan 2023 14:38:47 -0800
Subject: Re: [QEMU][PATCH v4 04/10] xen-hvm: reorganize xen-hvm and move
 common function to xen-hvm-common
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
 stefano.stabellini@amd.com, alex.bennee@linaro.org,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
 <20230125085407.7144-5-vikram.garhwal@amd.com>
 <alpine.DEB.2.22.394.2301251329520.1978264@ubuntu-linux-20-04-desktop>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <alpine.DEB.2.22.394.2301251329520.1978264@ubuntu-linux-20-04-desktop>

Hi Stefano,

On 1/25/23 1:55 PM, Stefano Stabellini wrote:
> On Wed, 25 Jan 2023, Vikram Garhwal wrote:
>> From: Stefano Stabellini <stefano.stabellini@amd.com>
>>
>> This patch does following:
>> 1. creates arch_handle_ioreq() and arch_xen_set_memory(). This is done in
>>      preparation for moving most of xen-hvm code to an arch-neutral location,
>>      move the x86-specific portion of xen_set_memory to arch_xen_set_memory.
>>      Also, move handle_vmport_ioreq to arch_handle_ioreq.
>>
>> 2. Pure code movement: move common functions to hw/xen/xen-hvm-common.c
>>      Extract common functionalities from hw/i386/xen/xen-hvm.c and move them to
>>      hw/xen/xen-hvm-common.c. These common functions are useful for creating
>>      an IOREQ server.
>>
>>      xen_hvm_init_pc() contains the architecture independent code for creating
>>      and mapping a IOREQ server, connecting memory and IO listeners, initializing
>>      a xen bus and registering backends. Moved this common xen code to a new
>>      function xen_register_ioreq() which can be used by both x86 and ARM machines.
>>
>>      Following functions are moved to hw/xen/xen-hvm-common.c:
>>          xen_vcpu_eport(), xen_vcpu_ioreq(), xen_ram_alloc(), xen_set_memory(),
>>          xen_region_add(), xen_region_del(), xen_io_add(), xen_io_del(),
>>          xen_device_realize(), xen_device_unrealize(),
>>          cpu_get_ioreq_from_shared_memory(), cpu_get_ioreq(), do_inp(),
>>          do_outp(), rw_phys_req_item(), read_phys_req_item(),
>>          write_phys_req_item(), cpu_ioreq_pio(), cpu_ioreq_move(),
>>          cpu_ioreq_config(), handle_ioreq(), handle_buffered_iopage(),
>>          handle_buffered_io(), cpu_handle_ioreq(), xen_main_loop_prepare(),
>>          xen_hvm_change_state_handler(), xen_exit_notifier(),
>>          xen_map_ioreq_server(), destroy_hvm_domain() and
>>          xen_shutdown_fatal_error()
>>
>> 3. Removed static type from below functions:
>>      1. xen_region_add()
>>      2. xen_region_del()
>>      3. xen_io_add()
>>      4. xen_io_del()
>>      5. xen_device_realize()
>>      6. xen_device_unrealize()
>>      7. xen_hvm_change_state_handler()
>>      8. cpu_ioreq_pio()
>>      9. xen_exit_notifier()
>>
>> 4. Replace TARGET_PAGE_SIZE with XC_PAGE_SIZE to match the page size with Xen.
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> One comment below
>
> [...]
>
>> +void xen_exit_notifier(Notifier *n, void *data)
>> +{
>> +    XenIOState *state = container_of(n, XenIOState, exit);
>> +
>> +    xen_destroy_ioreq_server(xen_domid, state->ioservid);
> In the original code we had:
>
> -    if (state->fres != NULL) {
> -        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
> -    }
>
> Should we add it here?
>
>
> I went through the manual process of comparing all the code additions
> and deletions (not fun!) and everything checks out except for this.
Thanks for catching this. There were two recent commits upstream and 
I missed those. I rechecked and there are actually three other lines 
which need updating. I will address it in v5.
>
>> +    xenevtchn_close(state->xce_handle);
>> +    xs_daemon_close(state->xenstore);
>> +}
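
For reference, the container_of() in xen_exit_notifier() above is what lets
the callback recover its XenIOState from the embedded Notifier. A standalone
sketch of the pattern (simplified stand-in types, not the real QEMU
definitions):

```c
#include <stddef.h>

/* Stand-ins for QEMU's Notifier and XenIOState, to show how container_of()
 * recovers the enclosing state from a pointer to an embedded member. */
typedef struct Notifier {
    void (*notify)(struct Notifier *n, void *data);
} Notifier;

typedef struct XenIOStateDemo {
    int ioservid;
    Notifier exit;              /* embedded, as in the real XenIOState */
} XenIOStateDemo;

/* container_of: subtract the member's offset to find the enclosing struct. */
#define container_of(ptr, type, member) \
    ((type *)(void *)((char *)(ptr) - offsetof(type, member)))

/* What xen_exit_notifier() does on entry: map the Notifier* back to state. */
static XenIOStateDemo *state_from_notifier(Notifier *n)
{
    return container_of(n, XenIOStateDemo, exit);
}
```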


From xen-devel-bounces@lists.xenproject.org Wed Jan 25 23:00:26 2023
Message-ID: <1294362f-4359-949e-3673-6198a78310be@linaro.org>
Date: Wed, 25 Jan 2023 23:59:48 +0100
Subject: Re: [QEMU][PATCH v4 01/10] hw/i386/xen/: move xen-mapcache.c to
 hw/xen/
To: Vikram Garhwal <vikram.garhwal@amd.com>, qemu-devel@nongnu.org,
 Thomas Huth <thuth@redhat.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 alex.bennee@linaro.org, "Michael S. Tsirkin" <mst@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
 <20230125085407.7144-2-vikram.garhwal@amd.com>
From: Philippe Mathieu-Daudé <philmd@linaro.org>
In-Reply-To: <20230125085407.7144-2-vikram.garhwal@amd.com>

On 25/1/23 09:53, Vikram Garhwal wrote:
> xen-mapcache.c contains common functions which can be used for enabling Xen on
> aarch64 with IOREQ handling. Moving it out from hw/i386/xen to hw/xen to make it
> accessible for both aarch64 and x86.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> ---
>   hw/i386/meson.build              | 1 +
>   hw/i386/xen/meson.build          | 1 -
>   hw/i386/xen/trace-events         | 5 -----
>   hw/xen/meson.build               | 4 ++++
>   hw/xen/trace-events              | 5 +++++
>   hw/{i386 => }/xen/xen-mapcache.c | 0
>   6 files changed, 10 insertions(+), 6 deletions(-)
>   rename hw/{i386 => }/xen/xen-mapcache.c (100%)
> 
> diff --git a/hw/i386/meson.build b/hw/i386/meson.build
> index 213e2e82b3..cfdbfdcbcb 100644
> --- a/hw/i386/meson.build
> +++ b/hw/i386/meson.build
> @@ -33,5 +33,6 @@ subdir('kvm')
>   subdir('xen')
>   
>   i386_ss.add_all(xenpv_ss)
> +i386_ss.add_all(xen_ss)
>   
>   hw_arch += {'i386': i386_ss}
> diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
> index be84130300..2fcc46e6ca 100644
> --- a/hw/i386/xen/meson.build
> +++ b/hw/i386/xen/meson.build
> @@ -1,6 +1,5 @@
>   i386_ss.add(when: 'CONFIG_XEN', if_true: files(
>     'xen-hvm.c',
> -  'xen-mapcache.c',
>     'xen_apic.c',
>     'xen_platform.c',
>     'xen_pvdevice.c',
> diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
> index 5d6be61090..a0c89d91c4 100644
> --- a/hw/i386/xen/trace-events
> +++ b/hw/i386/xen/trace-events
> @@ -21,8 +21,3 @@ xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
>   cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
>   cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
>   
> -# xen-mapcache.c
> -xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
> -xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
> -xen_map_cache_return(void* ptr) "%p"
> -
> diff --git a/hw/xen/meson.build b/hw/xen/meson.build
> index ae0ace3046..19d0637c46 100644
> --- a/hw/xen/meson.build
> +++ b/hw/xen/meson.build
> @@ -22,3 +22,7 @@ else
>   endif
>   
>   specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
> +
> +xen_ss = ss.source_set()
> +
> +xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))

Can't we add it to softmmu_ss directly?

> diff --git a/hw/xen/trace-events b/hw/xen/trace-events
> index 3da3fd8348..2c8f238f42 100644
> --- a/hw/xen/trace-events
> +++ b/hw/xen/trace-events
> @@ -41,3 +41,8 @@ xs_node_vprintf(char *path, char *value) "%s %s"
>   xs_node_vscanf(char *path, char *value) "%s %s"
>   xs_node_watch(char *path) "%s"
>   xs_node_unwatch(char *path) "%s"
> +
> +# xen-mapcache.c
> +xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
> +xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
> +xen_map_cache_return(void* ptr) "%p"
> diff --git a/hw/i386/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> similarity index 100%
> rename from hw/i386/xen/xen-mapcache.c
> rename to hw/xen/xen-mapcache.c



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 23:20:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 23:20:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484648.751335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKp3j-0003XJ-HM; Wed, 25 Jan 2023 23:20:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484648.751335; Wed, 25 Jan 2023 23:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKp3j-0003X3-BS; Wed, 25 Jan 2023 23:20:03 +0000
Received: by outflank-mailman (input) for mailman id 484648;
 Wed, 25 Jan 2023 23:20:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7BNa=5W=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pKp3h-0003Fi-Rq
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 23:20:02 +0000
Received: from sonic314-20.consmr.mail.gq1.yahoo.com
 (sonic314-20.consmr.mail.gq1.yahoo.com [98.137.69.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c844030c-9d06-11ed-91b6-6bf2151ebd3b;
 Thu, 26 Jan 2023 00:19:59 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic314.consmr.mail.gq1.yahoo.com with HTTP; Wed, 25 Jan 2023 23:19:56 +0000
Received: by hermes--production-ne1-746bc6c6c4-8sf8l (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID c284c3e27cd5358d47453ecd001766b3; 
 Wed, 25 Jan 2023 23:19:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c844030c-9d06-11ed-91b6-6bf2151ebd3b
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <de3a3992-8f56-086a-e19e-bac9233d4265@aol.com>
Date: Wed, 25 Jan 2023 18:19:52 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default
 for igd-passthru
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, qemu-devel@nongnu.org
References: <cover.1673300848.git.brchuckz.ref@aol.com>
 <cover.1673300848.git.brchuckz@aol.com>
 <Y9EUarVVWr223API@perard.uk.xensource.com>
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
In-Reply-To: <Y9EUarVVWr223API@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.21096 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 5670

On 1/25/2023 6:37 AM, Anthony PERARD wrote:
> On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
> > I call attention to the commit message of the first patch which points
> > out that using the "pc" machine and adding the xen platform device on
> > the qemu upstream command line is not functionally equivalent to using
> > the "xenfv" machine which automatically adds the xen platform device
> > earlier in the guest creation process. As a result, there is a noticeable
> > reduction in the performance of the guest during startup with the "pc"
> > machine type even if the xen platform device is added via the qemu
> > command line options, although eventually both Linux and Windows guests
> > perform equally well once the guest operating system is fully loaded.
>
> There shouldn't be a difference between "xenfv" machine or using the
> "pc" machine while adding the "xen-platform" device, at least with
> regards to access to disk or network.
>
> The first patch of the series is using the "pc" machine without any
> "xen-platform" device, so we can't compare startup performance based on
> that.
>
> > Specifically, startup time is longer and neither the grub vga drivers
> > nor the windows vga drivers in early startup perform as well when the
> > xen platform device is added via the qemu command line instead of being
> > added immediately after the other emulated i440fx pci devices when the
> > "xenfv" machine type is used.
>
> The "xen-platform" device is mostly a hint to a guest that it can use
> pv-disk and pv-network devices. I don't think it would change anything
> with regards to graphics.
>
> > For example, when using the "pc" machine, which adds the xen platform
> > device using a command line option, the Linux guest could not display
> > the grub boot menu at the native resolution of the monitor, but with the
> > "xenfv" machine, the grub menu is displayed at the full 1920x1080
> > native resolution of the monitor used for testing. So improved startup
> > performance is an advantage of the patch for qemu.
>
> I've just found out that when doing IGD passthrough, both machine
> "xenfv" and "pc" are much more different than I thought ... :-(
> pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
> turn copies some information from the real host bridge.
> I guess this new host bridge helps when the firmware sets up the graphics
> for grub.

I am surprised it works at all with the "pc" machine, that is, without the
TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE that is used in the "xenfv"
machine. This only seems to affect the legacy grub vga driver and the legacy
Windows vga driver during early boot. Still, I much prefer keeping the "xenfv"
machine for Intel IGD to this workaround of patching libxl to use the "pc"
machine.
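For concreteness, the two machine setups being compared in this thread differ
roughly like this (flags heavily abbreviated; this is a sketch, not an actual
libxl-generated command line):

```
# "xenfv": the machine itself instantiates the xen platform device early,
# alongside the other emulated i440fx PCI devices
qemu-system-i386 -machine xenfv ...

# "pc": the device has to be requested explicitly on the command line,
# so it appears later in guest creation
qemu-system-i386 -machine pc ... -device xen-platform
```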

>
> > I also call attention to the last point of the commit message of the
> > second patch and the comments for reviewers section of the second patch.
> > This approach, as opposed to fixing this in qemu upstream, makes
> > maintaining the code in libxl__build_device_model_args_new more
> > difficult and therefore increases the chances of problems caused by
> > coding errors and typos for users of libxl. So that is another advantage
> > of the patch for qemu.
>
> We would just need to use a different approach in libxl when generating
> the command line. We could probably avoid duplication. I was hoping to
> have a patch series for libxl that would change the machine used to start
> using "pc" instead of "xenfv" for all configurations, but based on the
> point above (the IGD-specific change to "xenfv"), I guess we can't
> really do anything from libxl to fix IGD passthrough.

We could switch to the "pc" machine, but we would also need to patch
qemu so the "pc" machine uses the special device the "xenfv"
machine uses (TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE).
So it is simpler to just apply the other qemu patch and not patch
libxl at all to fix this.

>
> > OTOH, fixing this in qemu causes newer qemu versions to behave
> > differently than previous versions, which the qemu community does not
> > like. They seem OK with the other patch since it only affects the qemu
> > "xenfv" machine type, but they do not want the patch to affect
> > toolstacks like libvirt that do not use qemu upstream's
> > autoconfiguration options as much as libxl does. Of course, libvirt
> > can manage qemu "xenfv" machines, so existing "xenfv" guests configured
> > manually by libvirt could be adversely affected by the patch to qemu,
> > but only if those same guests are also configured for igd-passthrough,
> > which is likely a very small number of possibly affected libvirt users
> > of qemu.
> > 
> > A year or two ago I tried to configure guests for pci passthrough on xen
> > using libvirt's tool to convert a libxl xl.cfg file to libvirt xml. It
> > could not convert an xl.cfg file with a configuration item
> > pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...] for pci passthrough.
> > So it is unlikely there are any users out there using libvirt to
> > configure xen hvm guests for igd passthrough on xen, and those are the
> > only users that could be adversely affected by the simpler patch to qemu
> > to fix this.
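For reference, the kind of xl.cfg stanza that the conversion tool could not
handle looks like this (the BDFs and the permissive option here are purely
illustrative, not taken from an actual config):

```
# xl.cfg: pass two host PCI devices through to the guest
pci = [ "0000:00:02.0,permissive=1", "0000:00:1b.0" ]
```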
>
> FYI, libvirt should be using libxl to create guests; I don't think there
> is another way for libvirt to create xen guests.
>
>
>
> So overall, unfortunately the "pc" machine in QEMU isn't suitable for
> IGD passthrough, as the "xenfv" machine already has some workarounds to
> make IGD work and just needs some more.
>
> I've seen that the patch for QEMU is now reviewed, so I'll look at having
> it merged soonish.
>
> Thanks,
>



From xen-devel-bounces@lists.xenproject.org Wed Jan 25 23:53:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Jan 2023 23:53:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484656.751348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKpaH-0008BL-8H; Wed, 25 Jan 2023 23:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484656.751348; Wed, 25 Jan 2023 23:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKpaH-0008BE-5c; Wed, 25 Jan 2023 23:53:41 +0000
Received: by outflank-mailman (input) for mailman id 484656;
 Wed, 25 Jan 2023 23:53:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKpaF-0008B4-PY; Wed, 25 Jan 2023 23:53:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKpaF-0000tz-LZ; Wed, 25 Jan 2023 23:53:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKpaF-0000Si-7C; Wed, 25 Jan 2023 23:53:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKpaF-0001FF-6U; Wed, 25 Jan 2023 23:53:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176125-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176125: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=948ef7bb70c4acaf74d87420ea3a1190862d4548
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Jan 2023 23:53:39 +0000

flight 176125 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176125/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 15 guest-saverestore fail pass in 176115

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                948ef7bb70c4acaf74d87420ea3a1190862d4548
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  110 days
Failing since        173470  2022-10-08 06:21:34 Z  109 days  226 attempts
Testing same since   176115  2023-01-25 03:57:20 Z    0 days    2 attempts

------------------------------------------------------------
3442 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 528925 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 02:14:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 02:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484666.751367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKrmI-0006MO-9N; Thu, 26 Jan 2023 02:14:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484666.751367; Thu, 26 Jan 2023 02:14:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKrmI-0006MG-4Z; Thu, 26 Jan 2023 02:14:14 +0000
Received: by outflank-mailman (input) for mailman id 484666;
 Thu, 26 Jan 2023 02:14:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NEHk=5X=csail.mit.edu=srivatsa@srs-se1.protection.inumbo.net>)
 id 1pKrmG-0006MA-VO
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 02:14:12 +0000
Received: from outgoing2021.csail.mit.edu (outgoing2021.csail.mit.edu
 [128.30.2.78]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1db40a11-9d1f-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 03:14:09 +0100 (CET)
Received: from c-24-17-218-140.hsd1.wa.comcast.net ([24.17.218.140]
 helo=srivatsab3MD6R.vmware.com)
 by outgoing2021.csail.mit.edu with esmtpsa (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.95)
 (envelope-from <srivatsa@csail.mit.edu>) id 1pKrm9-00HW0d-KX;
 Wed, 25 Jan 2023 21:14:05 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1db40a11-9d1f-11ed-b8d1-410ff93cb8f0
To: Sean Christopherson <seanjc@google.com>,
 Igor Mammedov <imammedo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, linux-kernel@vger.kernel.org,
 amakhalov@vmware.com, ganb@vmware.com, ankitja@vmware.com,
 bordoloih@vmware.com, keerthanak@vmware.com, blamoreaux@vmware.com,
 namit@vmware.com, Peter Zijlstra <peterz@infradead.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>,
 "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
 "Paul E. McKenney" <paulmck@kernel.org>, Wyes Karny <wyes.karny@amd.com>,
 Lewis Caroll <lewis.carroll@amd.com>, Tom Lendacky
 <thomas.lendacky@amd.com>, Juergen Gross <jgross@suse.com>, x86@kernel.org,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <20230116060134.80259-1-srivatsa@csail.mit.edu>
 <20230116155526.05d37ff9@imammedo.users.ipa.redhat.com> <87bkmui5z4.ffs@tglx>
 <ecb9a22e-fd6e-67f0-d916-ad16033fc13c@csail.mit.edu>
 <20230120163734.63e62444@imammedo.users.ipa.redhat.com>
 <Y8rfBBBicRMk+Hut@google.com>
From: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
Subject: Re: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle
 state
Message-ID: <c3304b18-533b-4845-0ca8-b2680bfd715d@csail.mit.edu>
Date: Wed, 25 Jan 2023 18:14:01 -0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.12.0
MIME-Version: 1.0
In-Reply-To: <Y8rfBBBicRMk+Hut@google.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit


Hi Igor and Sean,

On 1/20/23 10:35 AM, Sean Christopherson wrote:
> On Fri, Jan 20, 2023, Igor Mammedov wrote:
>> On Fri, 20 Jan 2023 05:55:11 -0800
>> "Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:
>>
>>> Hi Igor and Thomas,
>>>
>>> Thank you for your review!
>>>
>>> On 1/19/23 1:12 PM, Thomas Gleixner wrote:
>>>> On Mon, Jan 16 2023 at 15:55, Igor Mammedov wrote:  
>>>>> "Srivatsa S. Bhat" <srivatsa@csail.mit.edu> wrote:  
>>>>>> Fix this by preventing the use of mwait idle state in the vCPU offline
>>>>>> play_dead() path for any hypervisor, even if mwait support is
>>>>>> available.  
>>>>>
>>>>> if mwait is enabled, the guest is very likely to have cpuidle
>>>>> enabled and using the same mwait as well. So exiting early from
>>>>> mwait_play_dead() might just punt the workflow further down:
>>>>>   native_play_dead()
>>>>>         ...
>>>>>         mwait_play_dead();
>>>>>         if (cpuidle_play_dead())   <- possible mwait here
>>>>>                 hlt_play_dead();
>>>>>
>>>>> and it will end up in mwait again, and only if that fails
>>>>> will it go the HLT route and maybe transition to the VMM.
>>>>
>>>> Good point.
>>>>   
>>>>> Instead of a workaround on the guest side,
>>>>> shouldn't the hypervisor force a VMEXIT on the vCPU being unplugged
>>>>> when it's actually hot-unplugging it? (e.g. QEMU kicks the vCPU out
>>>>> of guest context when removing it, among other things)
>>>>
>>>> For a pure guest side CPU unplug operation:
>>>>
>>>>     guest$ echo 0 >/sys/devices/system/cpu/cpu$N/online
>>>>
>>>> the hypervisor is not involved at all. The vCPU is not removed in that
>>>> case.
>>>>   
>>>
>>> Agreed, and this is indeed the scenario I was targeting with this patch,
>>> as opposed to vCPU removal from the host side. I'll add this clarification
>>> to the commit message.
> 
> Forcing HLT doesn't solve anything; it's perfectly legal to pass through HLT.  I
> guarantee there are use cases that pass through HLT but _not_ MONITOR/MWAIT, and
> use cases that pass through all of them.
> 
>> commit message explicitly said:
>> "which prevents the hypervisor from running other vCPUs or workloads on the
>> corresponding pCPU."
>>
>> and that implies unplug on the hypervisor side as well.
>> Why? Because when the hypervisor exposes mwait to the guest, it has to reserve/pin
>> a pCPU for each of the present vCPUs. And you can safely run other VMs/workloads
>> on that pCPU only once it's no longer possible for it to be reused by the VM where
>> it was used originally.
> 
> Pinning isn't strictly required from a safety perspective.  The latency of context
> switching may suffer due to wake times, but preempting a vCPU that is in C1 (or
> deeper) won't cause functional problems.  Passing through an entire socket
> (or whatever scope triggers extra fun) might be a different story, but pinning
> isn't strictly required.
> 
> That said, I 100% agree that this is expected behavior and not a bug.  Letting the
> guest execute MWAIT or HLT means the host won't have perfect visibility into guest
> activity state.
> 
> Oversubscribing a pCPU and exposing MWAIT and/or HLT to vCPUs is generally not done
> precisely because the guest will always appear busy without extra effort on the
> host.  E.g. KVM requires an explicit opt-in from userspace to expose MWAIT and/or
> HLT.
> 
> If someone really wants to efficiently oversubscribe pCPUs and pass through MWAIT,
> then their best option is probably to have a paravirt interface so that the guest
> can tell the host it's offlining a vCPU.  Barring that, the host could inspect the
> guest when preempting a vCPU to try and guesstimate how much work the vCPU is
> actually doing in order to make better scheduling decisions.
> 
>> Now consider the following worst (and most likely) case without unplug
>> on hypervisor side:
>>
>>  1. vm1mwait: pin pCPU2 to vCPU2
>>  2. vm1mwait: guest$ echo 0 >/sys/devices/system/cpu/cpu2/online
>>         -> HLT -> VMEXIT
>>  --
>>  3. vm2mwait: pin pCPU2 to vCPUx and start VM
>>  4. vm2mwait: guest OS onlines vCPU and starts using it, incl.
>>        going into idle=>mwait state
>>  --
>>  5. vm1mwait: it still thinks that vCPU2 is present, so it can rightfully do:
>>        guest$ echo 1 >/sys/devices/system/cpu/cpu2/online
>>  --
>>  6.1 best case: vm1mwait's online fails after a timeout
>>  6.2 worst case: vm2mwait does a VMEXIT on vCPUx around the time-frame when
>>      vm1mwait onlines vCPU2; the online may succeed and then vm2mwait's
>>      vCPUx will be stuck (possibly indefinitely) until for some reason
>>      a VMEXIT happens on vm1mwait's vCPU2 _and_ the host decides to schedule
>>      vCPUx on pCPU2, which would then make vm1mwait stuck on vCPU2.
>> So either way it's expected behavior.
>>
>> And if there is no intention to unplug the vCPU on the hypervisor side,
>> then a VMEXIT on play_dead is not really necessary (mwait is better
>> than HLT), since the hypervisor can't safely reuse the pCPU elsewhere and
>> the vCPU goes into deep sleep within guest context.
>>
>> PS:
>> The only case where forcing HLT/VMEXIT on play_dead might work out
>> would be if the new workload weren't pinned to the same pCPU and
>> didn't use mwait (i.e. the host can migrate it elsewhere and schedule
>> vCPU2 back on pCPU2).


That makes sense. Thank you both for the detailed explanation!
Let's drop this patch.

Regards,
Srivatsa
VMware Photon OS


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 02:40:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 02:40:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484672.751376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKsBE-0000pJ-6K; Thu, 26 Jan 2023 02:40:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484672.751376; Thu, 26 Jan 2023 02:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKsBE-0000pC-3h; Thu, 26 Jan 2023 02:40:00 +0000
Received: by outflank-mailman (input) for mailman id 484672;
 Thu, 26 Jan 2023 02:39:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iqmz=5X=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKsBC-0000p6-Hw
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 02:39:58 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on2058.outbound.protection.outlook.com [40.107.101.58])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b6ba8fe6-9d22-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 03:39:55 +0100 (CET)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by PH8PR12MB7302.namprd12.prod.outlook.com (2603:10b6:510:221::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Thu, 26 Jan
 2023 02:39:51 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f803:f951:a68f:663a]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f803:f951:a68f:663a%6]) with mapi id 15.20.6002.033; Thu, 26 Jan 2023
 02:39:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6ba8fe6-9d22-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <10564f02-1aef-9298-fa26-b5a8aecc4c37@amd.com>
Date: Wed, 25 Jan 2023 18:39:48 -0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [QEMU][PATCH v4 01/10] hw/i386/xen/: move xen-mapcache.c to
 hw/xen/
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org, Thomas Huth <thuth@redhat.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 alex.bennee@linaro.org, "Michael S. Tsirkin" <mst@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
 <20230125085407.7144-2-vikram.garhwal@amd.com>
 <1294362f-4359-949e-3673-6198a78310be@linaro.org>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <1294362f-4359-949e-3673-6198a78310be@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: BYAPR07CA0068.namprd07.prod.outlook.com
 (2603:10b6:a03:60::45) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|PH8PR12MB7302:EE_
X-MS-Office365-Filtering-Correlation-Id: e5abccd0-d351-4a50-7fe9-08daff469912
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e5abccd0-d351-4a50-7fe9-08daff469912
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 02:39:50.9297
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FR/AWSLeIKb5uAtnGFpnVJCpUY5cMRDSFrB4bTf2GIfcbHbizG2xlursIqHS2kSofkJfhyH1lL/7F7srMfQdDg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7302

Hi Philippe,

On 1/25/23 2:59 PM, Philippe Mathieu-Daudé wrote:
> On 25/1/23 09:53, Vikram Garhwal wrote:
>> xen-mapcache.c contains common functions which can be used for enabling
>> Xen on aarch64 with IOREQ handling. Move it out from hw/i386/xen to
>> hw/xen to make it accessible for both aarch64 and x86.
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
>> ---
>>   hw/i386/meson.build              | 1 +
>>   hw/i386/xen/meson.build          | 1 -
>>   hw/i386/xen/trace-events         | 5 -----
>>   hw/xen/meson.build               | 4 ++++
>>   hw/xen/trace-events              | 5 +++++
>>   hw/{i386 => }/xen/xen-mapcache.c | 0
>>   6 files changed, 10 insertions(+), 6 deletions(-)
>>   rename hw/{i386 => }/xen/xen-mapcache.c (100%)
>>
>> diff --git a/hw/i386/meson.build b/hw/i386/meson.build
>> index 213e2e82b3..cfdbfdcbcb 100644
>> --- a/hw/i386/meson.build
>> +++ b/hw/i386/meson.build
>> @@ -33,5 +33,6 @@ subdir('kvm')
>>   subdir('xen')
>>     i386_ss.add_all(xenpv_ss)
>> +i386_ss.add_all(xen_ss)
>>     hw_arch += {'i386': i386_ss}
>> diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
>> index be84130300..2fcc46e6ca 100644
>> --- a/hw/i386/xen/meson.build
>> +++ b/hw/i386/xen/meson.build
>> @@ -1,6 +1,5 @@
>>   i386_ss.add(when: 'CONFIG_XEN', if_true: files(
>>     'xen-hvm.c',
>> -  'xen-mapcache.c',
>>     'xen_apic.c',
>>     'xen_platform.c',
>>     'xen_pvdevice.c',
>> diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
>> index 5d6be61090..a0c89d91c4 100644
>> --- a/hw/i386/xen/trace-events
>> +++ b/hw/i386/xen/trace-events
>> @@ -21,8 +21,3 @@ xen_map_resource_ioreq(uint32_t id, void *addr) 
>> "id: %u addr: %p"
>>   cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, 
>> uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u 
>> data=0x%x"
>>   cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, 
>> uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u 
>> data=0x%x"
>>   -# xen-mapcache.c
>> -xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
>> -xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
>> -xen_map_cache_return(void* ptr) "%p"
>> -
>> diff --git a/hw/xen/meson.build b/hw/xen/meson.build
>> index ae0ace3046..19d0637c46 100644
>> --- a/hw/xen/meson.build
>> +++ b/hw/xen/meson.build
>> @@ -22,3 +22,7 @@ else
>>   endif
>>     specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: 
>> xen_specific_ss)
>> +
>> +xen_ss = ss.source_set()
>> +
>> +xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
>
> Can't we add it to softmmu_ss directly?
>
I tried adding this to softmmu_ss as per your comment in v2, but it
fails with the following error:

/mnt/qemu_ioreq_upstream/include/sysemu/xen-mapcache.h:16:8: error: attempt to use poisoned "CONFIG_XEN"
 #ifdef CONFIG_XEN
        ^
../hw/xen/xen-mapcache.c:106:6: error: redefinition of 'xen_map_cache_init'
 void xen_map_cache_init(phys_offset_to_gaddr_t f, void *opaque)

I couldn't find an easy way to fix it.

>> diff --git a/hw/xen/trace-events b/hw/xen/trace-events
>> index 3da3fd8348..2c8f238f42 100644
>> --- a/hw/xen/trace-events
>> +++ b/hw/xen/trace-events
>> @@ -41,3 +41,8 @@ xs_node_vprintf(char *path, char *value) "%s %s"
>>   xs_node_vscanf(char *path, char *value) "%s %s"
>>   xs_node_watch(char *path) "%s"
>>   xs_node_unwatch(char *path) "%s"
>> +
>> +# xen-mapcache.c
>> +xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
>> +xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
>> +xen_map_cache_return(void* ptr) "%p"
>> diff --git a/hw/i386/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
>> similarity index 100%
>> rename from hw/i386/xen/xen-mapcache.c
>> rename to hw/xen/xen-mapcache.c
>


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 02:45:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 02:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484679.751387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKsG6-0002KK-Ti; Thu, 26 Jan 2023 02:45:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484679.751387; Thu, 26 Jan 2023 02:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKsG6-0002KD-QT; Thu, 26 Jan 2023 02:45:02 +0000
Received: by outflank-mailman (input) for mailman id 484679;
 Thu, 26 Jan 2023 02:45:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iqmz=5X=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pKsG5-0002K3-PV
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 02:45:01 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on2067.outbound.protection.outlook.com [40.107.101.67])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6c48b0df-9d23-11ed-91b6-6bf2151ebd3b;
 Thu, 26 Jan 2023 03:44:59 +0100 (CET)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by SN7PR12MB6839.namprd12.prod.outlook.com (2603:10b6:806:265::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Thu, 26 Jan
 2023 02:44:50 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f803:f951:a68f:663a]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f803:f951:a68f:663a%6]) with mapi id 15.20.6002.033; Thu, 26 Jan 2023
 02:44:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c48b0df-9d23-11ed-91b6-6bf2151ebd3b
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <d6bb030b-406a-5a07-f089-2382bdd46e3c@amd.com>
Date: Wed, 25 Jan 2023 18:44:47 -0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [QEMU][PATCH v4 09/10] hw/arm: introduce xenpvh machine
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
 stefano.stabellini@amd.com, alex.bennee@linaro.org,
 Peter Maydell <peter.maydell@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 "open list:ARM TCG CPUs" <qemu-arm@nongnu.org>
References: <20230125085407.7144-1-vikram.garhwal@amd.com>
 <20230125085407.7144-10-vikram.garhwal@amd.com>
 <alpine.DEB.2.22.394.2301251410440.1978264@ubuntu-linux-20-04-desktop>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <alpine.DEB.2.22.394.2301251410440.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SJ0PR03CA0283.namprd03.prod.outlook.com
 (2603:10b6:a03:39e::18) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|SN7PR12MB6839:EE_
X-MS-Office365-Filtering-Correlation-Id: c57ce31b-6a8b-4479-e274-08daff474b4d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c57ce31b-6a8b-4479-e274-08daff474b4d
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 02:44:49.9221
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DfSZ2j9t8C55LSW1Bibkp6RWIXyJ4riDhi5n1z0Xt6cohY4gtTcMVCko7rff1vM+cDDNqFkuAxnhDl2gNdnZRw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB6839

Hi Stefano,

On 1/25/23 2:20 PM, Stefano Stabellini wrote:
> On Wed, 25 Jan 2023, Vikram Garhwal wrote:
>> Add a new machine xenpvh which creates an IOREQ server to register/connect with
>> the Xen hypervisor.
>>
>> Optional: When CONFIG_TPM is enabled, it also creates a tpm-tis-device, adds a
>> TPM emulator and connects to swtpm running on the host machine via a chardev
>> socket, to support TPM functionality for a guest domain.
>>
>> Extra command line for aarch64 xenpvh QEMU to connect to swtpm:
>>      -chardev socket,id=chrtpm,path=/tmp/myvtpm2/swtpm-sock \
>>      -tpmdev emulator,id=tpm0,chardev=chrtpm \
>>      -machine tpm-base-addr=0x0c000000 \
>>
>> swtpm implements a TPM software emulator (TPM 1.2 & TPM 2) built on libtpms and
>> provides access to TPM functionality over socket, chardev and CUSE interfaces.
>> Github repo: https://github.com/stefanberger/swtpm
>> Example for starting swtpm on host machine:
>>      mkdir /tmp/vtpm2
>>      swtpm socket --tpmstate dir=/tmp/vtpm2 \
>>      --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
>> ---
>>   docs/system/arm/xenpvh.rst    |  34 +++++++
>>   docs/system/target-arm.rst    |   1 +
>>   hw/arm/meson.build            |   2 +
>>   hw/arm/xen_arm.c              | 184 ++++++++++++++++++++++++++++++++++
>>   include/hw/arm/xen_arch_hvm.h |   9 ++
>>   include/hw/xen/arch_hvm.h     |   2 +
>>   6 files changed, 232 insertions(+)
>>   create mode 100644 docs/system/arm/xenpvh.rst
>>   create mode 100644 hw/arm/xen_arm.c
>>   create mode 100644 include/hw/arm/xen_arch_hvm.h
>>
>> diff --git a/docs/system/arm/xenpvh.rst b/docs/system/arm/xenpvh.rst
>> new file mode 100644
>> index 0000000000..e1655c7ab8
>> --- /dev/null
>> +++ b/docs/system/arm/xenpvh.rst
>> @@ -0,0 +1,34 @@
>> +XENPVH (``xenpvh``)
>> +=========================================
>> +This machine creates an IOREQ server to register/connect with the Xen Hypervisor.
>> +
>> +When TPM is enabled, this machine also creates a tpm-tis-device at a
>> +user-provided TPM base address, adds a TPM emulator and connects to a swtpm
>> +application running on the host machine via a chardev socket. This enables
>> +xenpvh to support TPM functionality for a guest domain.
>> +
>> +More information about TPM use and installing the swtpm Linux application
>> +can be found in docs/specs/tpm.rst.
>> +
>> +Example for starting swtpm on host machine:
>> +.. code-block:: console
>> +
>> +    mkdir /tmp/vtpm2
>> +    swtpm socket --tpmstate dir=/tmp/vtpm2 \
>> +    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
>> +
>> +Sample QEMU xenpvh commands for running and connecting with Xen:
>> +.. code-block:: console
>> +
>> +    qemu-system-aarch64 -xen-domid 1 \
>> +    -chardev socket,id=libxl-cmd,path=qmp-libxl-1,server=on,wait=off \
>> +    -mon chardev=libxl-cmd,mode=control \
>> +    -chardev socket,id=libxenstat-cmd,path=qmp-libxenstat-1,server=on,wait=off \
>> +    -mon chardev=libxenstat-cmd,mode=control \
>> +    -xen-attach -name guest0 -vnc none -display none -nographic \
>> +    -machine xenpvh -m 1301 \
>> +    -chardev socket,id=chrtpm,path=/tmp/vtpm2/swtpm-sock \
>> +    -tpmdev emulator,id=tpm0,chardev=chrtpm -machine tpm-base-addr=0x0C000000
>> +
>> +In the above QEMU command, the last two lines connect the xenpvh QEMU to
>> +swtpm via a chardev socket.
>> diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
>> index 91ebc26c6d..af8d7c77d6 100644
>> --- a/docs/system/target-arm.rst
>> +++ b/docs/system/target-arm.rst
>> @@ -106,6 +106,7 @@ undocumented; you can get a complete list by running
>>      arm/stm32
>>      arm/virt
>>      arm/xlnx-versal-virt
>> +   arm/xenpvh
>>   
>>   Emulated CPU architecture support
>>   =================================
>> diff --git a/hw/arm/meson.build b/hw/arm/meson.build
>> index b036045603..06bddbfbb8 100644
>> --- a/hw/arm/meson.build
>> +++ b/hw/arm/meson.build
>> @@ -61,6 +61,8 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true: files('fsl-imx7.c', 'mcimx7d-sabre.
>>   arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
>>   arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
>>   arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
>> +arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
>> +arm_ss.add_all(xen_ss)
>>   
>>   softmmu_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmu-common.c'))
>>   softmmu_ss.add(when: 'CONFIG_EXYNOS4', if_true: files('exynos4_boards.c'))
>> diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
>> new file mode 100644
>> index 0000000000..12b19e3609
>> --- /dev/null
>> +++ b/hw/arm/xen_arm.c
>> @@ -0,0 +1,184 @@
>> +/*
>> + * QEMU ARM Xen PV Machine
>                     ^ PVH
>
>
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a copy
>> + * of this software and associated documentation files (the "Software"), to deal
>> + * in the Software without restriction, including without limitation the rights
>> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
>> + * copies of the Software, and to permit persons to whom the Software is
>> + * furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
>> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
>> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
>> + * THE SOFTWARE.
>> + */
>> +
>> +#include "qemu/osdep.h"
>> +#include "qemu/error-report.h"
>> +#include "qapi/qapi-commands-migration.h"
>> +#include "qapi/visitor.h"
>> +#include "hw/boards.h"
>> +#include "hw/sysbus.h"
>> +#include "sysemu/block-backend.h"
>> +#include "sysemu/tpm_backend.h"
>> +#include "sysemu/sysemu.h"
>> +#include "hw/xen/xen-legacy-backend.h"
>> +#include "hw/xen/xen-hvm-common.h"
>> +#include "sysemu/tpm.h"
>> +#include "hw/xen/arch_hvm.h"
>> +
>> +#define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
>> +OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
>> +
>> +static MemoryListener xen_memory_listener = {
>> +    .region_add = xen_region_add,
>> +    .region_del = xen_region_del,
>> +    .log_start = NULL,
>> +    .log_stop = NULL,
>> +    .log_sync = NULL,
>> +    .log_global_start = NULL,
>> +    .log_global_stop = NULL,
>> +    .priority = 10,
>> +};
>> +
>> +struct XenArmState {
>> +    /*< private >*/
>> +    MachineState parent;
>> +
>> +    XenIOState *state;
>> +
>> +    struct {
>> +        uint64_t tpm_base_addr;
>> +    } cfg;
>> +};
>> +
>> +void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
>> +{
>> +    hw_error("Invalid ioreq type 0x%x\n", req->type);
>> +
>> +    return;
>> +}
>> +
>> +void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
>> +                         bool add)
>> +{
>> +}
>> +
>> +void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
>> +{
>> +}
>> +
>> +void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
>> +{
>> +}
>> +
>> +#ifdef CONFIG_TPM
>> +static void xen_enable_tpm(XenArmState *xam)
>> +{
>> +    Error *errp = NULL;
>> +    DeviceState *dev;
>> +    SysBusDevice *busdev;
>> +
>> +    TPMBackend *be = qemu_find_tpm_be("tpm0");
>> +    if (be == NULL) {
>> +        DPRINTF("Couldn't find the backend for tpm0\n");
>> +        return;
>> +    }
>> +    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
>> +    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
>> +    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
>> +    busdev = SYS_BUS_DEVICE(dev);
>> +    sysbus_realize_and_unref(busdev, &error_fatal);
>> +    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
>> +
>> +    DPRINTF("Connected tpmdev at address 0x%lx\n", xam->cfg.tpm_base_addr);
>> +}
>> +#endif
>> +
>> +static void xen_arm_init(MachineState *machine)
>> +{
>> +    XenArmState *xam = XEN_ARM(machine);
>> +
>> +    xam->state =  g_new0(XenIOState, 1);
>> +
>> +    xen_register_ioreq(xam->state, machine->smp.cpus, xen_memory_listener);
>> +
>> +#ifdef CONFIG_TPM
>> +    if (xam->cfg.tpm_base_addr) {
>> +        xen_enable_tpm(xam);
>> +    } else {
>> +        DPRINTF("tpm-base-addr is not provided. TPM will not be enabled\n");
>> +    }
> I would remove the "else", we already have a DPRINTF at the end of
> xen_enable_tpm.

This print is a bit different from the one in xen_enable_tpm. I added it
because the user now needs to provide "tpm-base-addr=0x0C000000" on the
command line. If no tpm-base-addr is given, cfg.tpm_base_addr stays 0x0,
and we don't want to create a TPM device at address 0x0.

Perhaps instead of a debug print, I should print a warning here?

>
>
>> +#endif
>> +
>> +    return;
> the return is unnecessary
>
>
>> +}
>> +
>> +#ifdef CONFIG_TPM
>> +static void xen_arm_get_tpm_base_addr(Object *obj, Visitor *v,
>> +                                      const char *name, void *opaque,
>> +                                      Error **errp)
>> +{
>> +    XenArmState *xam = XEN_ARM(obj);
>> +    uint64_t value = xam->cfg.tpm_base_addr;
>> +
>> +    visit_type_uint64(v, name, &value, errp);
>> +}
>> +
>> +static void xen_arm_set_tpm_base_addr(Object *obj, Visitor *v,
>> +                                      const char *name, void *opaque,
>> +                                      Error **errp)
>> +{
>> +    XenArmState *xam = XEN_ARM(obj);
>> +    uint64_t value;
>> +
>> +    if (!visit_type_uint64(v, name, &value, errp)) {
>> +        return;
>> +    }
>> +
>> +    xam->cfg.tpm_base_addr = value;
>> +}
>> +#endif
>> +
>> +static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
>> +{
>> +
>> +    MachineClass *mc = MACHINE_CLASS(oc);
>> +    mc->desc = "Xen Para-virtualized PC";
>> +    mc->init = xen_arm_init;
>> +    mc->max_cpus = 1;
>> +    mc->default_machine_opts = "accel=xen";
>> +
>> +#ifdef CONFIG_TPM
>> +    object_class_property_add(oc, "tpm-base-addr", "uint64_t",
>> +                              xen_arm_get_tpm_base_addr,
>> +                              xen_arm_set_tpm_base_addr,
>> +                              NULL, NULL);
>> +    object_class_property_set_description(oc, "tpm-base-addr",
>> +                                          "Set Base address for TPM device.");
>> +
>> +    machine_class_allow_dynamic_sysbus_dev(mc, TYPE_TPM_TIS_SYSBUS);
>> +#endif
>> +}
>> +
>> +static const TypeInfo xen_arm_machine_type = {
>> +    .name = TYPE_XEN_ARM,
>> +    .parent = TYPE_MACHINE,
>> +    .class_init = xen_arm_machine_class_init,
>> +    .instance_size = sizeof(XenArmState),
>> +};
>> +
>> +static void xen_arm_machine_register_types(void)
>> +{
>> +    type_register_static(&xen_arm_machine_type);
>> +}
>> +
>> +type_init(xen_arm_machine_register_types)
>> diff --git a/include/hw/arm/xen_arch_hvm.h b/include/hw/arm/xen_arch_hvm.h
>> new file mode 100644
>> index 0000000000..8fd645e723
>> --- /dev/null
>> +++ b/include/hw/arm/xen_arch_hvm.h
>> @@ -0,0 +1,9 @@
>> +#ifndef HW_XEN_ARCH_ARM_HVM_H
>> +#define HW_XEN_ARCH_ARM_HVM_H
>> +
>> +#include <xen/hvm/ioreq.h>
>> +void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
>> +void arch_xen_set_memory(XenIOState *state,
>> +                         MemoryRegionSection *section,
>> +                         bool add);
>> +#endif
>> diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
>> index 26674648d8..c7c515220d 100644
>> --- a/include/hw/xen/arch_hvm.h
>> +++ b/include/hw/xen/arch_hvm.h
>> @@ -1,3 +1,5 @@
>>   #if defined(TARGET_I386) || defined(TARGET_X86_64)
>>   #include "hw/i386/xen_arch_hvm.h"
>> +#elif defined(TARGET_ARM) || defined(TARGET_ARM_64)
>> +#include "hw/arm/xen_arch_hvm.h"
>>   #endif
>> -- 
>> 2.17.0
>>


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 03:34:45 2023
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Juergen Gross <jgross@suse.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	dm-devel@redhat.com
Subject: [RFC PATCH 0/7] Allow race-free block device handling
Date: Wed, 25 Jan 2023 22:33:52 -0500
Message-Id: <20230126033358.1880-1-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This work aims to allow userspace to create and destroy block devices
in a race-free and leak-free way, and to allow them to be exposed to
other Xen VMs via blkback without leaks or races.  It’s marked as RFC
for a few reasons:

- The code has been only lightly tested.  It might be unstable or
  insecure.

- The DM_DEV_CREATE ioctl gains a new flag.  Unknown flags were
  previously ignored, so this could theoretically break buggy userspace
  tools.

- I have no idea if I got the block device reference counting and
  locking correct.

Demi Marie Obenour (7):
  block: Support creating a struct file from a block device
  Allow userspace to get an FD to a newly-created DM device
  Implement diskseq checks in blkback
  Increment diskseq when releasing a loop device
  If autoclear is set, delete a no-longer-used loop device
  Minor blkback cleanups
  xen/blkback: Inform userspace that device has been opened

 block/bdev.c                        |  77 +++++++++++--
 block/genhd.c                       |   1 +
 drivers/block/loop.c                |  17 ++-
 drivers/block/xen-blkback/blkback.c |   8 +-
 drivers/block/xen-blkback/xenbus.c  | 171 ++++++++++++++++++++++------
 drivers/md/dm-ioctl.c               |  67 +++++++++--
 include/linux/blkdev.h              |   5 +
 include/uapi/linux/dm-ioctl.h       |  16 ++-
 8 files changed, 298 insertions(+), 64 deletions(-)

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 03:34:45 2023
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jens Axboe <axboe@kernel.dk>
Cc: Demi Marie Obenour <demiobenour@gmail.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Demi Marie Obenour <demi@invisiblethingslab.com>
Subject: [RFC PATCH 3/7] Implement diskseq checks in blkback
Date: Wed, 25 Jan 2023 22:33:55 -0500
Message-Id: <20230126033358.1880-4-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230126033358.1880-1-demi@invisiblethingslab.com>
References: <20230126033358.1880-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Demi Marie Obenour <demiobenour@gmail.com>

This allows specifying a disk sequence number in XenStore.  If it does
not match the disk sequence number of the underlying device, the device
will not be exported and a warning will be logged.  Userspace can use
this to eliminate race conditions due to major/minor number reuse.
Older kernels will ignore this, so it is safe for userspace to set it
unconditionally.

This also makes physical-device parsing stricter.  I do not believe this
will break any extant userspace tools.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/block/xen-blkback/xenbus.c | 137 +++++++++++++++++++++--------
 1 file changed, 100 insertions(+), 37 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 4807af1d58059394d7a992335dabaf2bc3901721..2c43bfc7ab5ba6954f11d4b949a5668660dbd290 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -24,6 +24,7 @@ struct backend_info {
 	struct xenbus_watch	backend_watch;
 	unsigned		major;
 	unsigned		minor;
+	unsigned long long	diskseq;
 	char			*mode;
 };
 
@@ -479,7 +480,7 @@ static void xen_vbd_free(struct xen_vbd *vbd)
 
 static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
 			  unsigned major, unsigned minor, int readonly,
-			  int cdrom)
+			  bool cdrom, u64 diskseq)
 {
 	struct xen_vbd *vbd;
 	struct block_device *bdev;
@@ -507,6 +508,25 @@ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
 		xen_vbd_free(vbd);
 		return -ENOENT;
 	}
+
+	if (diskseq) {
+		struct gendisk *disk = bdev->bd_disk;
+		if (unlikely(disk == NULL)) {
+			pr_err("xen_vbd_create: device %08x has no gendisk\n",
+			       vbd->pdevice);
+			xen_vbd_free(vbd);
+			return -EFAULT;
+		}
+
+		if (unlikely(disk->diskseq != diskseq)) {
+			pr_warn("xen_vbd_create: device %08x has incorrect sequence "
+				"number 0x%llx (expected 0x%llx)\n",
+				vbd->pdevice, disk->diskseq, diskseq);
+			xen_vbd_free(vbd);
+			return -ENODEV;
+		}
+	}
+
 	vbd->size = vbd_sz(vbd);
 
 	if (cdrom || disk_to_cdi(vbd->bdev->bd_disk))
@@ -690,6 +710,55 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
 	return err;
 }
 
+static bool read_physical_device(struct xenbus_device *dev,
+				 unsigned long long *diskseq,
+				 unsigned *major, unsigned *minor)
+{
+	char *physical_device, *problem;
+	int i, physical_device_length;
+	char junk;
+
+	physical_device = xenbus_read(XBT_NIL, dev->nodename, "physical-device",
+				      &physical_device_length);
+
+	if (IS_ERR(physical_device)) {
+		int err = PTR_ERR(physical_device);
+		/*
+		 * Since this watch will fire once immediately after it is
+		 * registered, we expect "does not exist" errors.  Ignore
+		 * them and wait for the hotplug scripts.
+		 */
+		if (unlikely(!XENBUS_EXIST_ERR(err)))
+			xenbus_dev_fatal(dev, err, "reading physical-device");
+		return false;
+	}
+
+	for (i = 0; i < physical_device_length; ++i)
+		if (unlikely(physical_device[i] <= 0x20 || physical_device[i] >= 0x7F)) {
+			problem = "bad byte in physical-device";
+			goto fail;
+		}
+
+	if (sscanf(physical_device, "%16llx@%8x:%8x%c",
+		   diskseq, major, minor, &junk) == 3) {
+		if (*diskseq == 0) {
+			problem = "diskseq 0 is invalid";
+			goto fail;
+		}
+	} else if (sscanf(physical_device, "%8x:%8x%c", major, minor, &junk) == 2) {
+		*diskseq = 0;
+	} else {
+		problem = "invalid physical-device";
+		goto fail;
+	}
+	kfree(physical_device);
+	return true;
+fail:
+	kfree(physical_device);
+	xenbus_dev_fatal(dev, -EINVAL, problem);
+	return false;
+}
+
 /*
  * Callback received when the hotplug scripts have placed the physical-device
  * node.  Read it and the mode node, and create a vbd.  If the frontend is
@@ -707,28 +776,17 @@ static void backend_changed(struct xenbus_watch *watch,
 	int cdrom = 0;
 	unsigned long handle;
 	char *device_type;
+	unsigned long long diskseq;
 
 	pr_debug("%s %p %d\n", __func__, dev, dev->otherend_id);
-
-	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
-			   &major, &minor);
-	if (XENBUS_EXIST_ERR(err)) {
-		/*
-		 * Since this watch will fire once immediately after it is
-		 * registered, we expect this.  Ignore it, and wait for the
-		 * hotplug scripts.
-		 */
+	if (!read_physical_device(dev, &diskseq, &major, &minor))
 		return;
-	}
-	if (err != 2) {
-		xenbus_dev_fatal(dev, err, "reading physical-device");
-		return;
-	}
 
-	if (be->major | be->minor) {
-		if (be->major != major || be->minor != minor)
-			pr_warn("changing physical device (from %x:%x to %x:%x) not supported.\n",
-				be->major, be->minor, major, minor);
+	if (be->major | be->minor | be->diskseq) {
+		if (be->major != major || be->minor != minor || be->diskseq != diskseq)
+			pr_warn("changing physical device (from %x:%x:%llx to %x:%x:%llx)"
+				" not supported.\n",
+				be->major, be->minor, be->diskseq, major, minor, diskseq);
 		return;
 	}
 
@@ -756,29 +814,34 @@ static void backend_changed(struct xenbus_watch *watch,
 
 	be->major = major;
 	be->minor = minor;
+	be->diskseq = diskseq;
 
 	err = xen_vbd_create(be->blkif, handle, major, minor,
-			     !strchr(be->mode, 'w'), cdrom);
-
-	if (err)
-		xenbus_dev_fatal(dev, err, "creating vbd structure");
-	else {
-		err = xenvbd_sysfs_addif(dev);
-		if (err) {
-			xen_vbd_free(&be->blkif->vbd);
-			xenbus_dev_fatal(dev, err, "creating sysfs entries");
-		}
-	}
+			     !strchr(be->mode, 'w'), cdrom, diskseq);
 
 	if (err) {
-		kfree(be->mode);
-		be->mode = NULL;
-		be->major = 0;
-		be->minor = 0;
-	} else {
-		/* We're potentially connected now */
-		xen_update_blkif_status(be->blkif);
+		xenbus_dev_fatal(dev, err, "creating vbd structure");
+		goto fail;
 	}
+
+	err = xenvbd_sysfs_addif(dev);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "creating sysfs entries");
+		goto free_vbd;
+	}
+
+	/* We're potentially connected now */
+	xen_update_blkif_status(be->blkif);
+	return;
+
+free_vbd:
+	xen_vbd_free(&be->blkif->vbd);
+fail:
+	kfree(be->mode);
+	be->mode = NULL;
+	be->major = 0;
+	be->minor = 0;
+	be->diskseq = 0;
 }
 
 /*
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


	ddY9ADAEaPF42/Pdj3OYxAFRo254vZsVEa8tWSr/KSF+zVoyu2EteDsynPXOj/iu
	63IO7MJyBbRlf7+vQey7wd0POoDgEkJdxFADcHNTYc1/FbN9ZjueKRKqVShvQjKd
	H52kTj6LTGwltBqYV4MSsF9pfN6Ngjul1V6orbHr317ENrVkDkF04ZJyJ/u6siGZ
	040psDMjFLte9H8xZp2q3BlERdUWC3vFfHvaXpbUYvoMJDuxj/eP7uesva1h1NAJ
	A31HPEIeEK62hAR0XVgh41M6lyaOiUZeQlINouPHd1OF+PYrKk/UTJ/5g==
X-ME-Sender: <xms:uPTRY-UJugvRHQWqLPTevHKqWNImygoOOyYewPe_uQqS9qbczCFzJQ>
    <xme:uPTRY6kQLCCCct-1GcPEKU697zRbqex5ZNGMVHmm5m1c95TxOkO3SEzC1ScFo6NML
    oOwjeNEzEMil4k>
X-ME-Received: <xmr:uPTRYyZG5ypvub0AjxhcULUmMs0YaTlx0iWR3IkE2nr6D7XtDdxS9uvROL8aCXQoMnZS-iuZdIud>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddvfedgheelucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhgggfestdekredtredttdenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepheehvdekudfffeelkeelvdevffej
    ieetudeifeeliefgleffiedvgefhheehgfeunecuffhomhgrihhnpehinhguihhrvggtth
    drnhhrpdhinhguihhrvggtthdrihgunecuvehluhhsthgvrhfuihiivgeptdenucfrrghr
    rghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:ufTRY1VuavHbMeRfNh8kDPgSScRHLLvGJKom87ozljBfd7VtLvnXOw>
    <xmx:ufTRY4kcERKeOg3A5RFmTzfWgpLEQfKpt-p4qEWyUhIR0g086pbcKg>
    <xmx:ufTRY6fYHq4xTO6jgFUdjKJN8TzKE92H_QhdqAHrdujMm1gOVJ1NeA>
    <xmx:ufTRY3ufM2M8yw78lZ1XU9KjM0U4Iyf08R5809m_oRpfv3KY2kmPCQ>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	Juergen Gross <jgross@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [RFC PATCH 6/7] Minor blkback cleanups
Date: Wed, 25 Jan 2023 22:33:57 -0500
Message-Id: <20230126033358.1880-6-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230126033358.1880-1-demi@invisiblethingslab.com>
References: <20230126033358.1880-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the segment bounds checks ahead of the computations that use the
checked values, and add build-time assertions that the request ID sits at
offset 8 in both the rw and indirect request layouts.  No functional
change intended.
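
For illustration, the build-time layout check this patch adds can be
reproduced standalone.  The structs below are simplified stand-ins for the
real blkif request union, not the actual Xen ABI definitions; the point is
that `offsetof` with a nested member designator lets the compile fail if
the shared `id` field ever moves:

```c
/* Standalone sketch of the compile-time layout check; the structs are
 * hypothetical stand-ins, not the real blkif_request definitions. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct rw_req {
	uint8_t  nseg;
	uint8_t  _pad[7];
	uint64_t id;		/* must land at offset 8 */
};

struct indirect_req {
	uint16_t nr_segments;
	uint8_t  _pad[6];
	uint64_t id;		/* must also land at offset 8 */
};

struct request {
	union {
		struct rw_req       rw;
		struct indirect_req indirect;
	} u;
};

/* Both union members must keep `id` at the same offset, so the backend
 * can read it before it knows which operation the request encodes. */
static_assert(offsetof(struct request, u.rw.id) == 8,
	      "rw id must sit at offset 8");
static_assert(offsetof(struct request, u.indirect.id) == 8,
	      "indirect id must sit at offset 8");
```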

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/block/xen-blkback/blkback.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index a5cf7f1e871c7f9ff397ab8ff1d7b9e3db686659..8a49cbe81d8895f89371bdf50d1b445c088c9b6a 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -1238,6 +1238,8 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 	nseg = req->operation == BLKIF_OP_INDIRECT ?
 	       req->u.indirect.nr_segments : req->u.rw.nr_segments;
 
+	BUILD_BUG_ON(offsetof(struct blkif_request, u.rw.id) != 8);
+	BUILD_BUG_ON(offsetof(struct blkif_request, u.indirect.id) != 8);
 	if (unlikely(nseg == 0 && operation_flags != REQ_PREFLUSH) ||
 	    unlikely((req->operation != BLKIF_OP_INDIRECT) &&
 		     (nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) ||
@@ -1261,13 +1263,13 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 		preq.sector_number     = req->u.rw.sector_number;
 		for (i = 0; i < nseg; i++) {
 			pages[i]->gref = req->u.rw.seg[i].gref;
-			seg[i].nsec = req->u.rw.seg[i].last_sect -
-				req->u.rw.seg[i].first_sect + 1;
-			seg[i].offset = (req->u.rw.seg[i].first_sect << 9);
 			if ((req->u.rw.seg[i].last_sect >= (XEN_PAGE_SIZE >> 9)) ||
 			    (req->u.rw.seg[i].last_sect <
 			     req->u.rw.seg[i].first_sect))
 				goto fail_response;
+			seg[i].nsec = req->u.rw.seg[i].last_sect -
+				req->u.rw.seg[i].first_sect + 1;
+			seg[i].offset = (req->u.rw.seg[i].first_sect << 9);
 			preq.nr_sects += seg[i].nsec;
 		}
 	} else {
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 03:34:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 03:34:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484690.751433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKt1u-0000dk-DO; Thu, 26 Jan 2023 03:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484690.751433; Thu, 26 Jan 2023 03:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKt1u-0000dZ-AT; Thu, 26 Jan 2023 03:34:26 +0000
Received: by outflank-mailman (input) for mailman id 484690;
 Thu, 26 Jan 2023 03:34:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q4q1=5X=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pKt1s-0008Fc-Cy
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 03:34:24 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 52ae7c33-9d2a-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 04:34:22 +0100 (CET)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id CBD635C0153;
 Wed, 25 Jan 2023 22:34:21 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute1.internal (MEProxy); Wed, 25 Jan 2023 22:34:21 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 25 Jan 2023 22:34:21 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52ae7c33-9d2a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1674704061; x=1674790461; bh=V4qU6xBczXYBxuAJBBtWt5596TfV6Bd4Dyk
	+YpmwRl4=; b=CuRQlVlIShgx+Q3OebmHVhmoUi/3WbvG8C7kvnQPwXe1QU0tnFD
	XmXhEOq6gJBc6ipUhy7jKIip+gM3ffn5Znohlz62xTB8F1yHGTtB13fNiydmxq8e
	2NjBisXqiKFl0vK9uPtshH9lj1xhj6MDUbOxxBL4KbfXVOMP7KwQ4DrSGSnoqz5v
	d6dv73iHBERop8YVJX5LqtEdHvUdr0xm4S09J7FjPLp5bR7H52os2QZqiFTm67i9
	WSyDDhKmA7AG5cp7E0sqeAwVvYFz7eBL5YnZTmc2OuyQSug0g4Q6R7UsbbvzwEBp
	VBXdD9UALLDVFotjfYSfUvP34A7I/PNUiFw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1674704061; x=1674790461; bh=V4qU6xBczXYBx
	uAJBBtWt5596TfV6Bd4Dyk+YpmwRl4=; b=G9bxwKaZXTcN/J3Y2b7exagftg8il
	c5jEtm6ch7lOIR5yCje41RguRYT6IRbn7gpcD3VHhIl+gCjLBkRtp4h/UAu98A0e
	7H98igegXsNN79gnXc7QzoSxLZLguaUyAYegjyRxUITnysoDIeVqdrWD/iyE9DUJ
	01Hj38zqEP6unZz26qbaMVQ0cu7ymak7KVLI3QeEssr7vVUZp21eJAhutlVu6KV/
	KL9QJ9/r/Mba6IKukxPiuLInp9TnFA58SK240NHNIxaXY2vb4wxe4pklBQI9NGVt
	5avPzNEuaIKk0yRBDDkmAc1gSsTYrCTxsk0OSevDGJ1k8LPKZX83qcDtw==
X-ME-Sender: <xms:vfTRYxA6-Mngv9kVvNeKYFelujd9X3hxqrxnsQQgQoRXnx4akNDqMA>
    <xme:vfTRY_g0kbEZ2GK9NWaXaIW2jukfJa_e8DoKuK0Qs1K8To3jRJqmKJfnAJm8He6OV
    OClK6S6edNa1b0>
X-ME-Received: <xmr:vfTRY8m1auzzdb8jSU39w5zp7FT0b4qePSogOj96JXeuvMIR8ldPIDUslbahgUZ95JUqh_QZYJQn>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddvfedgheekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhgggfestdekredtredttdenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepjeffjefggfeugeduvedvjeekgfeh
    gffhhfffjeetkeelueefffetfffhtdduheetnecuvehluhhsthgvrhfuihiivgepudenuc
    frrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhs
    lhgrsgdrtghomh
X-ME-Proxy: <xmx:vfTRY7xo8bC9Gr4gUIjDd84JXD5A1aGSZY1WcD3a0S1k6pWQFfviyg>
    <xmx:vfTRY2REdgxnXQkrWOt5bdeY1R2H8D3OLGT6Jgnwvyz-LF6f9vLeNg>
    <xmx:vfTRY-aS49RRI1jZpvQNiqfXM3wRVucZ3oNnZ3i3OTk5w4kJRrQnCg>
    <xmx:vfTRYzI9BQbkoretVj3UW_QFpClo_dTyXivnEBXZLbRkgvC5qIZA_A>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Juergen Gross <jgross@suse.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [RFC PATCH 7/7] xen/blkback: Inform userspace that device has been opened
Date: Wed, 25 Jan 2023 22:33:58 -0500
Message-Id: <20230126033358.1880-7-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230126033358.1880-1-demi@invisiblethingslab.com>
References: <20230126033358.1880-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This allows userspace to use block devices with delete-on-close
behavior, which is necessary to ensure virtual devices (such as loop or
device-mapper devices) are cleaned up automatically.  Protocol details
are included in comments.
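
A rough sketch of the userspace side of this protocol, for illustration
only: the XenStore read is injected as a callback so the fragment is
self-contained (a real hotplug script would use `xs_read()` from
libxenstore or the `xenstore-read` utility, and would block on a XenStore
watch rather than poll), and the backend path used below is hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Callback standing in for a real XenStore read; injected so the sketch
 * can be exercised without a running xenstored. */
typedef const char *(*node_reader)(const char *path);

/*
 * Return 0 once <backend_path>/opened reads "1", or -1 after max_tries
 * attempts.  Once "opened" is "1", blkback holds its own reference to
 * the device, so userspace may close it or mark it delete-on-close.
 */
static int wait_for_opened(const char *backend_path,
			   node_reader read_node, int max_tries)
{
	char path[256];
	int i;

	snprintf(path, sizeof(path), "%s/opened", backend_path);
	for (i = 0; i < max_tries; i++) {
		const char *v = read_node(path);

		if (v && strcmp(v, "1") == 0)
			return 0;
		/* A real implementation would block on a watch here. */
	}
	return -1;
}
```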

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/block/xen-blkback/xenbus.c | 34 ++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 2c43bfc7ab5ba6954f11d4b949a5668660dbd290..ca8dae05985038da490c5ac93364509913f6b4c7 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -3,6 +3,19 @@
     Copyright (C) 2005 Rusty Russell <rusty@rustcorp.com.au>
     Copyright (C) 2005 XenSource Ltd
 
+In addition to the XenStore nodes required by the Xen block device
+specification, this implementation of blkback uses a new XenStore
+node: "opened".  blkback sets "opened" to "0" before the hotplug script
+is called.  Once the device node has been opened, blkback sets "opened"
+to "1".
+
+"opened" is used exclusively by userspace.  It serves two purposes:
+
+1. It tells userspace that diskseq@major:minor syntax for "physical-device" is
+   supported.
+2. It tells userspace that it can wait for "opened" to be set to 1.  Once
+   "opened" is 1, blkback has a reference to the device, so userspace doesn't
+   need to keep one.
 
 */
 
@@ -698,6 +711,14 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
 	if (err)
 		pr_warn("%s write out 'max-ring-page-order' failed\n", __func__);
 
+	/*
+	 * This informs userspace that the "opened" node will be set to "1" when
+	 * the device has been opened successfully.
+	 */
+	err = xenbus_write(XBT_NIL, dev->nodename, "opened", "0");
+	if (err)
+		goto fail;
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -824,6 +845,19 @@ static void backend_changed(struct xenbus_watch *watch,
 		goto fail;
 	}
 
+	/*
+	 * Tell userspace that the device has been opened and that blkback has a
+	 * reference to it.  Userspace can then close the device or mark it as
+	 * delete-on-close, knowing that blkback will keep the device open as
+	 * long as necessary.
+	 */
+	err = xenbus_write(XBT_NIL, dev->nodename, "opened", "1");
+	if (err) {
+		xenbus_dev_fatal(dev, err, "%s: notifying userspace device has been opened",
+				 dev->nodename);
+		goto free_vbd;
+	}
+
 	err = xenvbd_sysfs_addif(dev);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "creating sysfs entries");
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484545.751505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUy-0005Su-0S; Thu, 26 Jan 2023 05:08:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484545.751505; Thu, 26 Jan 2023 05:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUx-0005QT-MK; Thu, 26 Jan 2023 05:08:31 +0000
Received: by outflank-mailman (input) for mailman id 484545;
 Wed, 25 Jan 2023 19:22:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EtGL=5W=google.com=surenb@srs-se1.protection.inumbo.net>)
 id 1pKlMC-00032o-6K
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 19:22:52 +0000
Received: from mail-yw1-x112e.google.com (mail-yw1-x112e.google.com
 [2607:f8b0:4864:20::112e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a86b1174-9ce5-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 20:22:51 +0100 (CET)
Received: by mail-yw1-x112e.google.com with SMTP id
 00721157ae682-4a2f8ad29d5so277874057b3.8
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 11:22:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a86b1174-9ce5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=PCnMypYs8jymH6Ufuva+9r/69Y/DZs8kYxwQEYrzQFk=;
        b=hXTUs83qvSMCw0sqrewtph/1zyRGQA8EwV9VOqASoIhN1O27YlTeLHvU30LwpkaPKE
         6ZtbduOBpeU0Mk6yScQyxHGjdOZ8W8NbAK7HCSbKR2qQVwpslpoP2Mqh8DnM60UrkSEV
         Fy/rJx/xAj6ReR1yQheSDvUeiLsUOZe36GsXUe+/pA5dp4+pot+ScO1k5jhTafZOkoJo
         vEjMe7B3/0BupSIQn5CSlkYjolaKBFMYB73xyId061H8p1ZYZbkvvK9SPdZltanRDpUl
         vOuM+M/xFo7tmOP85pcv8h038+SVqlBqYWxO3Lgun0O5W06va+PJHK2pFsHgrhvmIiGL
         lvow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=PCnMypYs8jymH6Ufuva+9r/69Y/DZs8kYxwQEYrzQFk=;
        b=FnwznxdQno0Pm0YySvq6bWGN2pBtvocxfGVX+Jwm4zCJiBNfwv9K1wLdaSB+16ElMK
         X8knzhHarQGvx2hY5sUg+xdHPaRaNeKifzXhYXRf8RXgpzbKw9JwdKuGaLS1R+B8Vira
         bz4q49Nq7J8VSARfaCUwWvpOyZrWlCe3F5tX8IbbN/BQc9VS5NtNLGoGwiUqxNTr9+tT
         0CQYj4nHKAZbLQLdN37uM1d1AJgGm45pSv1gy9F6D+ezSd6O1OP8itZth8NqCi4ivcPn
         ZCaYH3aONL9+dR1OqTMAnm6v0+vSGUQ3K9zFjIHYUFHUK2De5/WJDlrmQlHeZBzkDjYa
         7vqA==
X-Gm-Message-State: AO0yUKWpT2F04pmakUSi7BD46zTBSJISO6yM6QrVViggQIDCrQRbJ7zu
	xFwC9BNkyNdCxbGG+R5pGllJJFA4NL2OKmc5aMM5AQ==
X-Google-Smtp-Source: AK7set/Rj29H3r8vHaYccCmp943Un+QyMRF/w8dcRdt99GFw8apI+/L+5tSmXBTsBLdlTBniti7hA8kyFbpjr3H10kc=
X-Received: by 2002:a0d:d456:0:b0:507:26dc:ebd with SMTP id
 w83-20020a0dd456000000b0050726dc0ebdmr298632ywd.455.1674674569763; Wed, 25
 Jan 2023 11:22:49 -0800 (PST)
MIME-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-2-surenb@google.com>
 <Y9F19QEDX5d/44EV@casper.infradead.org>
In-Reply-To: <Y9F19QEDX5d/44EV@casper.infradead.org>
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 25 Jan 2023 11:22:38 -0800
Message-ID: <CAJuCfpH+LMFX=TT04gSMA05cz_-CXMum6fobRrduWvzm1HWPmQ@mail.gmail.com>
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
To: Matthew Wilcox <willy@infradead.org>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com, 
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, 
	mgorman@techsingularity.net, dave@stgolabs.net, liam.howlett@oracle.com, 
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org, 
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 10:33 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jan 25, 2023 at 12:38:46AM -0800, Suren Baghdasaryan wrote:
> > +/* Use when VMA is not part of the VMA tree and needs no locking */
> > +static inline void init_vm_flags(struct vm_area_struct *vma,
> > +                              unsigned long flags)
> > +{
> > +     vma->vm_flags = flags;
>
> vm_flags are supposed to have type vm_flags_t.  That's not been
> fully realised yet, but perhaps we could avoid making it worse?
>
> >       pgprot_t vm_page_prot;
> > -     unsigned long vm_flags;         /* Flags, see mm.h. */
> > +
> > +     /*
> > +      * Flags, see mm.h.
> > +      * WARNING! Do not modify directly.
> > +      * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
> > +      */
> > +     unsigned long vm_flags;
>
> Including changing this line to vm_flags_t

Good point. Will make the change. Thanks!
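
For context, the accessor pattern under discussion looks roughly like the
sketch below.  These are simplified userspace stand-ins, not the kernel
definitions: the `flag_updates` counter is a placeholder for the
locking/tracking hooks the real helpers would carry, and `vm_flags_t` is
spelled out as suggested above.

```c
typedef unsigned long vm_flags_t;	/* the type the field should use */

struct vm_area_struct {
	/*
	 * Do not modify directly; use the helpers below so every update
	 * can be tracked and locking asserted in one place.
	 */
	vm_flags_t vm_flags;
	int flag_updates;	/* stand-in for tracking/locking hooks */
};

/* Use when the VMA is not yet in the VMA tree and needs no locking. */
static inline void init_vm_flags(struct vm_area_struct *vma,
				 vm_flags_t flags)
{
	vma->vm_flags = flags;
	vma->flag_updates = 0;
}

static inline void set_vm_flags(struct vm_area_struct *vma,
				vm_flags_t flags)
{
	vma->flag_updates++;	/* real code would assert the vma lock */
	vma->vm_flags |= flags;
}

static inline void clear_vm_flags(struct vm_area_struct *vma,
				  vm_flags_t flags)
{
	vma->flag_updates++;
	vma->vm_flags &= ~flags;
}
```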


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484502.751477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUw-0004rg-6G; Thu, 26 Jan 2023 05:08:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484502.751477; Thu, 26 Jan 2023 05:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUv-0004oq-VQ; Thu, 26 Jan 2023 05:08:29 +0000
Received: by outflank-mailman (input) for mailman id 484502;
 Wed, 25 Jan 2023 17:22:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EtGL=5W=google.com=surenb@srs-se1.protection.inumbo.net>)
 id 1pKjTr-0004CI-Ir
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 17:22:39 +0000
Received: from mail-yb1-xb2d.google.com (mail-yb1-xb2d.google.com
 [2607:f8b0:4864:20::b2d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dc229028-9cd4-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 18:22:36 +0100 (CET)
Received: by mail-yb1-xb2d.google.com with SMTP id m199so8144629ybm.4
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 09:22:36 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc229028-9cd4-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=IqJXv+IInDU1mKTOWecSjx46uIp9RhIiEah7PjvPLrg=;
        b=QlZyecSvRJAlH7daZiVbTmdMCDJawwIkjqybbPcYd50INvCOOFwycJjGFsDCENBvPF
         JeBM9yeOQDknmaZC6LT0p7wlzLAJo3QYgAn7K1xQNCTJjPrh6oinOxgRNGwGedAqUfeH
         4EF6oknHwKU0N6BIZOc0CJi1hrgM4W0zxm3gicvvOuEGF7JC9HCNICDdMPEAz6Lz/B5y
         FN2dpyuojKkpe3r7ipsFxRIyHaJ4nJmkiiUGkQy6aOsuBhBZ3WzwUsGkq0uXDlGTKFEH
         iRh4N/trPN148wcbBUC1DmqB3ErHQLF9n9pkmMXooLK6oXyXf0aS41iIJv2buhnN7zSu
         X8Bw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=IqJXv+IInDU1mKTOWecSjx46uIp9RhIiEah7PjvPLrg=;
        b=5rp6rkzmt/hqEXkXuMpnrN2x6AKRB+XmjPEuuyid7mrDOZ1zb14sl9RPqY6BsoPU9/
         fkdnT6U8pNbtNJ9PLPLRzWPCtvGgVXG2mqXsWlK5VnQceSzSFXM/CM82kClXbH0vjJhG
         6u597nSC2rERBk8uvR2UOjsgswJPyX8lVqIkcaLvv15ioHit+FzgNKcroOQrEPu+Fmhe
         Wcs1d4bG/UU0ae9xN2zOU6PYf4UMjW5QjdB3gHFEUnBMIr8y2uYOVK67OP34vI4SYOWw
         9tGg99sGawD/0VQsLyOPZia/xpr1VeVk5e+lN/+Qk7x7Bk3iWAZq+jUbFaCRce0et2ns
         6ykw==
X-Gm-Message-State: AFqh2korhaMmUd/+j9j54B2bPYLaaGbSFCbv7TYfWA47ukEp43OwxPXt
	IovmCFb2aYyjKg8rRgAOS/aVErA57xwbnv9y5t4Z+Q==
X-Google-Smtp-Source: AMrXdXv634nY3BrnZ3PlTUGlyUi8i7L3YenO2gMgxW5qJ4tF6Xha6yZp+2FFeWeU8OQrXrCFeHpomo5nr++haHZW7UI=
X-Received: by 2002:a25:a408:0:b0:800:28d4:6936 with SMTP id
 f8-20020a25a408000000b0080028d46936mr2303639ybi.431.1674667354997; Wed, 25
 Jan 2023 09:22:34 -0800 (PST)
MIME-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-5-surenb@google.com>
 <Y9D4rWEsajV/WfNx@dhcp22.suse.cz> <CAJuCfpGd2eG0RSMte9OVgsRVWPo+Sj7+t8EOo8o_iKzZoh1MXA@mail.gmail.com>
 <Y9Fh9joU3vTCwYbX@dhcp22.suse.cz>
In-Reply-To: <Y9Fh9joU3vTCwYbX@dhcp22.suse.cz>
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 25 Jan 2023 09:22:23 -0800
Message-ID: <CAJuCfpEJ1U2UHBNhLx4gggN3PLZKP5RejiZL_U5ZLxU_wdviVg@mail.gmail.com>
Subject: Re: [PATCH v2 4/6] mm: replace vma->vm_flags indirect modification in ksm_madvise
To: Michal Hocko <mhocko@suse.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com, 
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net, 
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, 
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org, 
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 9:08 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 25-01-23 08:57:48, Suren Baghdasaryan wrote:
> > On Wed, Jan 25, 2023 at 1:38 AM 'Michal Hocko' via kernel-team
> > <kernel-team@android.com> wrote:
> > >
> > > On Wed 25-01-23 00:38:49, Suren Baghdasaryan wrote:
> > > > Replace indirect modifications to vma->vm_flags with calls to modifier
> > > > functions to be able to track flag changes and to keep vma locking
> > > > correctness. Add a BUG_ON check in ksm_madvise() to catch indirect
> > > > vm_flags modification attempts.
> > >
> > > Those BUG_ONs scream too much IMHO. KSM is MM-internal code, so I
> > > guess we should be willing to trust it.
> >
> > Yes, but I really want to prevent an indirect misuse since it was not
> > easy to find these. If you feel strongly about it I will remove them
> > or if you have a better suggestion I'm all for it.
>
> You can avoid that by making flags inaccessible directly, right?

Ah, you mean Peter's suggestion of using __private? I guess that would
cover it. I'll drop these BUG_ONs in the next version. Thanks!

>
> --
> Michal Hocko
> SUSE Labs


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484533.751492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUx-00058S-2V; Thu, 26 Jan 2023 05:08:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484533.751492; Thu, 26 Jan 2023 05:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUw-00055k-OA; Thu, 26 Jan 2023 05:08:30 +0000
Received: by outflank-mailman (input) for mailman id 484533;
 Wed, 25 Jan 2023 18:39:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NNIG=5W=infradead.org=willy@srs-se1.protection.inumbo.net>)
 id 1pKkg4-0005iZ-Ia
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 18:39:20 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 92c0e269-9cdf-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 19:39:18 +0100 (CET)
Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1pKkeP-0066hH-0o; Wed, 25 Jan 2023 18:37:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92c0e269-9cdf-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=j8xmcwaWWp8ClSuIcWbfPO9W4DI7+K1SIXYR1hupOAY=; b=uqNynxzucz1vO4Sybud/wnfqlG
	XxsLNOwKfxMMQq2z+2mEwSQepA2Cgl288rz8ppyGnCDD245OrpnK+lLHo83C85yAJqJOOkyy9NCp4
	qmW1PdDBkKx7H5H7CuTrTaW668Ax6XD7hJfO7woN7MoWAjzhSqQRaCuht1RFhZ03cdOhzx8vskmyo
	utYjRqoPln+RK+MIFkiqDBv2fEqneNERI4c9/pWDfp7yn00dEjgBzAOu+9ajSFVQvgY4Ml53E7t5d
	4KvoHjjGbdeWlMlk7Lgwsa8VGdyruWiJYIAak7bpfijYPXdYqIwuOD9Kmo+xWyNX8MhOoxpc82QLY
	/xONblxA==;
Date: Wed, 25 Jan 2023 18:37:36 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>, akpm@linux-foundation.org,
	michel@lespinasse.org, jglisse@google.com, mhocko@suse.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, liam.howlett@oracle.com, ldufour@linux.ibm.com,
	paulmck@kernel.org, luto@kernel.org, songliubraving@fb.com,
	peterx@redhat.com, david@redhat.com, dhowells@redhat.com,
	hughd@google.com, bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
Message-ID: <Y9F28J9njAtwifuL@casper.infradead.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-2-surenb@google.com>
 <Y9Dx0cPXF2yoLwww@hirez.programming.kicks-ass.net>
 <CAJuCfpEcVCZaCGzc-Wim25eaV5e6YG1YJAAdKwZ6JHViB0z8aw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAJuCfpEcVCZaCGzc-Wim25eaV5e6YG1YJAAdKwZ6JHViB0z8aw@mail.gmail.com>

On Wed, Jan 25, 2023 at 08:49:50AM -0800, Suren Baghdasaryan wrote:
> On Wed, Jan 25, 2023 at 1:10 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > > +     /*
> > > +      * Flags, see mm.h.
> > > +      * WARNING! Do not modify directly.
> > > +      * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
> > > +      */
> > > +     unsigned long vm_flags;
> >
> > We have __private and ACCESS_PRIVATE() to help with enforcing this.
> 
> Thanks for pointing this out, Peter! I guess for that I'll need to
> convert all read accesses and provide get_vm_flags() too? That will
> cause some additional churn (a quick search shows 801 hits over 248
> files) but maybe it's worth it? I think Michal suggested that too in
> another patch. Should I do that while we are at it?

Here's a trick I saw somewhere in the VFS:

	union {
		const vm_flags_t vm_flags;
		vm_flags_t __private __vm_flags;
	};

Now it can be read by anybody but written only by those using
ACCESS_PRIVATE.


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484479.751463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUv-0004cM-EC; Thu, 26 Jan 2023 05:08:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484479.751463; Thu, 26 Jan 2023 05:08:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUv-0004b3-9a; Thu, 26 Jan 2023 05:08:29 +0000
Received: by outflank-mailman (input) for mailman id 484479;
 Wed, 25 Jan 2023 17:00:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EtGL=5W=google.com=surenb@srs-se1.protection.inumbo.net>)
 id 1pKj8C-0000E7-9l
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 17:00:16 +0000
Received: from mail-yw1-x112f.google.com (mail-yw1-x112f.google.com
 [2607:f8b0:4864:20::112f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc682fc0-9cd1-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 18:00:14 +0100 (CET)
Received: by mail-yw1-x112f.google.com with SMTP id
 00721157ae682-50660e2d2ffso56440067b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 09:00:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc682fc0-9cd1-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=L4MXGiF0uBNS4YdEO04YxfSu/PH3d6GEYxinGGkJ0l0=;
        b=W2wM+2B23DX4UFGjrNjz2QhZIRJJ0rJbNeAj16MEZr1nVm+y2drgVLMWhYKM+RXZwl
         LyA9Ln6ru3ELYSr9r7So/z16X90Zls60BXzg/QzJqLOLV/j0Tixqf16TWNyXZGnACB9E
         QkGJl1ScGfsTp7c4ttit0pgHNn4ey7S424sXU9dWBD5cTrLynb/1YBuhhrezEYSZhLSi
         3eb09az3BMlgAYXh3d2+5Q7sC0FqvrC0NX040NYjkqv/h2gdKBNO3YpN4EQOf23xtORi
         NxWfmFe951E1Au7ZDRJqODWALmTWnaVbj7QqRA33jeb/AA79tsFrFPyzpW0c4v2lui2R
         IvJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=L4MXGiF0uBNS4YdEO04YxfSu/PH3d6GEYxinGGkJ0l0=;
        b=knpWLww94F8MT/dBoUTPhjFqnj14ieKjJ+g0rSYXFsqKrmHi6zJHJCGrH6dEpexw6F
         lrTnQ0lkla8phTS76wNlihFuZcSBOJudf6u6MOGUDL9cN79aMFAiQIxY4AR0vJk9u8FJ
         zIIfjQi25P5wLBNzpvR4TLYcMADppK/pU1PjDfQ+qCzSGXH6DvO6IdFEIUNsrCPpSJrG
         4VfpC753oGvhuFMcsehjLQVRB2rlaPGTceISmEHXuCMcc2NVjGwZJQA79NDnb8Veh+e7
         0pYWTUdC6OceKaDrJOWs8QBbsOVRwJhwIsupFxyWKedzQ7eoN1mN6RaMGK41uzu1Mw/7
         nRnQ==
X-Gm-Message-State: AFqh2kr7s67hrj5dVk2wx9OXT46RPImCId5RDTq5wgAy0rggsrTLfZ7C
	6Z34ElO7wEzfkNeMtui8IAFq65LedUl82Tf3SzeNnw==
X-Google-Smtp-Source: AMrXdXv6WFpwQ2RivVVjLGZIQcqOZe35ABaf7/kGtBR6QRxHke62z7D7Nf0zd78A9HxP1avFtZlnWBmbm7RD5GZUdQA=
X-Received: by 2002:a0d:c0c7:0:b0:502:30d7:5fff with SMTP id
 b190-20020a0dc0c7000000b0050230d75fffmr2052050ywd.347.1674666013171; Wed, 25
 Jan 2023 09:00:13 -0800 (PST)
MIME-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-6-surenb@google.com>
 <Y9D5hjcprLI92VKf@dhcp22.suse.cz>
In-Reply-To: <Y9D5hjcprLI92VKf@dhcp22.suse.cz>
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 25 Jan 2023 09:00:00 -0800
Message-ID: <CAJuCfpHHPB=VE7Q=hoxVj7GBF18rpSQ-O-5+S3EPxOB5rHOrDg@mail.gmail.com>
Subject: Re: [PATCH v2 5/6] mm: introduce mod_vm_flags_nolock and use it in untrack_pfn
To: Michal Hocko <mhocko@suse.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com, 
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net, 
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, 
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org, 
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 1:42 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 25-01-23 00:38:50, Suren Baghdasaryan wrote:
> > In cases where VMA flags are modified after the VMA has been isolated
> > and mmap_lock has been downgraded, the modification would trigger an
> > assertion because the mmap write lock is not held.
> > Introduce mod_vm_flags_nolock to be used in such situation.
> > Pass a hint to untrack_pfn to conditionally use mod_vm_flags_nolock for
> > flags modification and to avoid assertion.
>
> Neither the changelog nor the documentation of mod_vm_flags_nolock
> really explains when it is safe to use it. This is really important for
> potential future users.

True. I'll add clarification in the comments and in the changelog. Thanks!

>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> >  arch/x86/mm/pat/memtype.c | 10 +++++++---
> >  include/linux/mm.h        | 12 +++++++++---
> >  include/linux/pgtable.h   |  5 +++--
> >  mm/memory.c               | 13 +++++++------
> >  mm/memremap.c             |  4 ++--
> >  mm/mmap.c                 | 16 ++++++++++------
> >  6 files changed, 38 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> > index ae9645c900fa..d8adc0b42cf2 100644
> > --- a/arch/x86/mm/pat/memtype.c
> > +++ b/arch/x86/mm/pat/memtype.c
> > @@ -1046,7 +1046,7 @@ void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
> >   * can be for the entire vma (in which case pfn, size are zero).
> >   */
> >  void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> > -              unsigned long size)
> > +              unsigned long size, bool mm_wr_locked)
> >  {
> >       resource_size_t paddr;
> >       unsigned long prot;
> > @@ -1065,8 +1065,12 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> >               size = vma->vm_end - vma->vm_start;
> >       }
> >       free_pfn_range(paddr, size);
> > -     if (vma)
> > -             clear_vm_flags(vma, VM_PAT);
> > +     if (vma) {
> > +             if (mm_wr_locked)
> > +                     clear_vm_flags(vma, VM_PAT);
> > +             else
> > +                     mod_vm_flags_nolock(vma, 0, VM_PAT);
> > +     }
> >  }
> >
> >  /*
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 55335edd1373..48d49930c411 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -656,12 +656,18 @@ static inline void clear_vm_flags(struct vm_area_struct *vma,
> >       vma->vm_flags &= ~flags;
> >  }
> >
> > +static inline void mod_vm_flags_nolock(struct vm_area_struct *vma,
> > +                                    unsigned long set, unsigned long clear)
> > +{
> > +     vma->vm_flags |= set;
> > +     vma->vm_flags &= ~clear;
> > +}
> > +
> >  static inline void mod_vm_flags(struct vm_area_struct *vma,
> >                               unsigned long set, unsigned long clear)
> >  {
> >       mmap_assert_write_locked(vma->vm_mm);
> > -     vma->vm_flags |= set;
> > -     vma->vm_flags &= ~clear;
> > +     mod_vm_flags_nolock(vma, set, clear);
> >  }
> >
> >  static inline void vma_set_anonymous(struct vm_area_struct *vma)
> > @@ -2087,7 +2093,7 @@ static inline void zap_vma_pages(struct vm_area_struct *vma)
> >  }
> >  void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
> >               struct vm_area_struct *start_vma, unsigned long start,
> > -             unsigned long end);
> > +             unsigned long end, bool mm_wr_locked);
> >
> >  struct mmu_notifier_range;
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 5fd45454c073..c63cd44777ec 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1185,7 +1185,8 @@ static inline int track_pfn_copy(struct vm_area_struct *vma)
> >   * can be for the entire vma (in which case pfn, size are zero).
> >   */
> >  static inline void untrack_pfn(struct vm_area_struct *vma,
> > -                            unsigned long pfn, unsigned long size)
> > +                            unsigned long pfn, unsigned long size,
> > +                            bool mm_wr_locked)
> >  {
> >  }
> >
> > @@ -1203,7 +1204,7 @@ extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
> >                            pfn_t pfn);
> >  extern int track_pfn_copy(struct vm_area_struct *vma);
> >  extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> > -                     unsigned long size);
> > +                     unsigned long size, bool mm_wr_locked);
> >  extern void untrack_pfn_moved(struct vm_area_struct *vma);
> >  #endif
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index d6902065e558..5b11b50e2c4a 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1613,7 +1613,7 @@ void unmap_page_range(struct mmu_gather *tlb,
> >  static void unmap_single_vma(struct mmu_gather *tlb,
> >               struct vm_area_struct *vma, unsigned long start_addr,
> >               unsigned long end_addr,
> > -             struct zap_details *details)
> > +             struct zap_details *details, bool mm_wr_locked)
> >  {
> >       unsigned long start = max(vma->vm_start, start_addr);
> >       unsigned long end;
> > @@ -1628,7 +1628,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
> >               uprobe_munmap(vma, start, end);
> >
> >       if (unlikely(vma->vm_flags & VM_PFNMAP))
> > -             untrack_pfn(vma, 0, 0);
> > +             untrack_pfn(vma, 0, 0, mm_wr_locked);
> >
> >       if (start != end) {
> >               if (unlikely(is_vm_hugetlb_page(vma))) {
> > @@ -1675,7 +1675,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
> >   */
> >  void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
> >               struct vm_area_struct *vma, unsigned long start_addr,
> > -             unsigned long end_addr)
> > +             unsigned long end_addr, bool mm_wr_locked)
> >  {
> >       struct mmu_notifier_range range;
> >       struct zap_details details = {
> > @@ -1689,7 +1689,8 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
> >                               start_addr, end_addr);
> >       mmu_notifier_invalidate_range_start(&range);
> >       do {
> > -             unmap_single_vma(tlb, vma, start_addr, end_addr, &details);
> > +             unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
> > +                              mm_wr_locked);
> >       } while ((vma = mas_find(&mas, end_addr - 1)) != NULL);
> >       mmu_notifier_invalidate_range_end(&range);
> >  }
> > @@ -1723,7 +1724,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> >        * unmap 'address-end' not 'range.start-range.end' as range
> >        * could have been expanded for hugetlb pmd sharing.
> >        */
> > -     unmap_single_vma(&tlb, vma, address, end, details);
> > +     unmap_single_vma(&tlb, vma, address, end, details, false);
> >       mmu_notifier_invalidate_range_end(&range);
> >       tlb_finish_mmu(&tlb);
> >  }
> > @@ -2492,7 +2493,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
> >
> >       err = remap_pfn_range_notrack(vma, addr, pfn, size, prot);
> >       if (err)
> > -             untrack_pfn(vma, pfn, PAGE_ALIGN(size));
> > +             untrack_pfn(vma, pfn, PAGE_ALIGN(size), true);
> >       return err;
> >  }
> >  EXPORT_SYMBOL(remap_pfn_range);
> > diff --git a/mm/memremap.c b/mm/memremap.c
> > index 08cbf54fe037..2f88f43d4a01 100644
> > --- a/mm/memremap.c
> > +++ b/mm/memremap.c
> > @@ -129,7 +129,7 @@ static void pageunmap_range(struct dev_pagemap *pgmap, int range_id)
> >       }
> >       mem_hotplug_done();
> >
> > -     untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
> > +     untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
> >       pgmap_array_delete(range);
> >  }
> >
> > @@ -276,7 +276,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
> >       if (!is_private)
> >               kasan_remove_zero_shadow(__va(range->start), range_len(range));
> >  err_kasan:
> > -     untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
> > +     untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
> >  err_pfn_remap:
> >       pgmap_array_delete(range);
> >       return error;
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 2c6e9072e6a8..69d440997648 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -78,7 +78,7 @@ core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
> >  static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
> >               struct vm_area_struct *vma, struct vm_area_struct *prev,
> >               struct vm_area_struct *next, unsigned long start,
> > -             unsigned long end);
> > +             unsigned long end, bool mm_wr_locked);
> >
> >  static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
> >  {
> > @@ -2136,14 +2136,14 @@ static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
> >  static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
> >               struct vm_area_struct *vma, struct vm_area_struct *prev,
> >               struct vm_area_struct *next,
> > -             unsigned long start, unsigned long end)
> > +             unsigned long start, unsigned long end, bool mm_wr_locked)
> >  {
> >       struct mmu_gather tlb;
> >
> >       lru_add_drain();
> >       tlb_gather_mmu(&tlb, mm);
> >       update_hiwater_rss(mm);
> > -     unmap_vmas(&tlb, mt, vma, start, end);
> > +     unmap_vmas(&tlb, mt, vma, start, end, mm_wr_locked);
> >       free_pgtables(&tlb, mt, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
> >                                next ? next->vm_start : USER_PGTABLES_CEILING);
> >       tlb_finish_mmu(&tlb);
> > @@ -2391,7 +2391,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >                       mmap_write_downgrade(mm);
> >       }
> >
> > -     unmap_region(mm, &mt_detach, vma, prev, next, start, end);
> > +     /*
> > +      * We can free page tables without write-locking mmap_lock because VMAs
> > +      * were isolated before we downgraded mmap_lock.
> > +      */
> > +     unmap_region(mm, &mt_detach, vma, prev, next, start, end, !downgrade);
> >       /* Statistics and freeing VMAs */
> >       mas_set(&mas_detach, start);
> >       remove_mt(mm, &mas_detach);
> > @@ -2704,7 +2708,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> >
> >               /* Undo any partial mapping done by a device driver. */
> >               unmap_region(mm, &mm->mm_mt, vma, prev, next, vma->vm_start,
> > -                          vma->vm_end);
> > +                          vma->vm_end, true);
> >       }
> >       if (file && (vm_flags & VM_SHARED))
> >               mapping_unmap_writable(file->f_mapping);
> > @@ -3031,7 +3035,7 @@ void exit_mmap(struct mm_struct *mm)
> >       tlb_gather_mmu_fullmm(&tlb, mm);
> >       /* update_hiwater_rss(mm) here? but nobody should be looking */
> >       /* Use ULONG_MAX here to ensure all VMAs in the mm are unmapped */
> > -     unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX);
> > +     unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX, false);
> >       mmap_read_unlock(mm);
> >
> >       /*
> > --
> > 2.39.1
>
> --
> Michal Hocko
> SUSE Labs


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484489.751467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUv-0004ji-OU; Thu, 26 Jan 2023 05:08:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484489.751467; Thu, 26 Jan 2023 05:08:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUv-0004gZ-Ip; Thu, 26 Jan 2023 05:08:29 +0000
Received: by outflank-mailman (input) for mailman id 484489;
 Wed, 25 Jan 2023 17:08:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FU6P=5W=suse.com=mhocko@srs-se1.protection.inumbo.net>)
 id 1pKjFp-0001Ah-Dv
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 17:08:09 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6f10a9c-9cd2-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 18:08:08 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C730421CA3;
 Wed, 25 Jan 2023 17:08:07 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7757F1358F;
 Wed, 25 Jan 2023 17:08:07 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id T+qcHPdh0WP1JAAAMHmgww
 (envelope-from <mhocko@suse.com>); Wed, 25 Jan 2023 17:08:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6f10a9c-9cd2-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674666487; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=FNzsO8QkHd4bwcFXjgFFOOBC/xElVC4T91KIaFtN0Io=;
	b=D/br1xSsxQfk6apBfZHBB6RWB236iiiYqtOZ2eZ95J8WjWhuYAEVV+tXTEh0QKRmA1b4xH
	QN55Gxk1acvIRqe6qMTQwSbVuwI9C24RIc9UEoj8Q4rbZSCTfRh25I8IC+VWb1YjIErqcg
	9b1EbAjGibksBO3NPm+wpmCzLKQPsIY=
Date: Wed, 25 Jan 2023 18:08:06 +0100
From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 4/6] mm: replace vma->vm_flags indirect modification
 in ksm_madvise
Message-ID: <Y9Fh9joU3vTCwYbX@dhcp22.suse.cz>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-5-surenb@google.com>
 <Y9D4rWEsajV/WfNx@dhcp22.suse.cz>
 <CAJuCfpGd2eG0RSMte9OVgsRVWPo+Sj7+t8EOo8o_iKzZoh1MXA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAJuCfpGd2eG0RSMte9OVgsRVWPo+Sj7+t8EOo8o_iKzZoh1MXA@mail.gmail.com>

On Wed 25-01-23 08:57:48, Suren Baghdasaryan wrote:
> On Wed, Jan 25, 2023 at 1:38 AM 'Michal Hocko' via kernel-team
> <kernel-team@android.com> wrote:
> >
> > On Wed 25-01-23 00:38:49, Suren Baghdasaryan wrote:
> > > Replace indirect modifications to vma->vm_flags with calls to modifier
> > > functions to be able to track flag changes and to keep vma locking
> > > correctness. Add a BUG_ON check in ksm_madvise() to catch indirect
> > > vm_flags modification attempts.
> >
> > Those BUG_ONs scream too much IMHO. KSM is MM-internal code, so I
> > guess we should be willing to trust it.
> 
> Yes, but I really want to prevent indirect misuse, since these were not
> easy to find. If you feel strongly about it I will remove them, or if
> you have a better suggestion I'm all for it.

You can avoid that by making the flags inaccessible directly, right?
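
For context, a minimal userspace sketch of that kind of "inaccessible by default" arrangement (the struct and helper names here are illustrative, not the kernel's actual API): the flags word gets a private-looking name so direct writes stand out in review, and every modification funnels through a small set of helpers that can later grow tracking or locking checks.

```c
#include <assert.h>

/* Illustrative sketch, not kernel code: vma_sketch, vm_flags_get/set/clear
 * are made-up names for this example. */

#define VM_READ  0x1UL
#define VM_WRITE 0x2UL

struct vma_sketch {
	/* "Private" by convention: direct access is considered a bug. */
	unsigned long __vm_flags;
};

static inline unsigned long vm_flags_get(const struct vma_sketch *vma)
{
	return vma->__vm_flags;
}

static inline void vm_flags_set(struct vma_sketch *vma, unsigned long flags)
{
	/* Single choke point where change-tracking or lock assertions go. */
	vma->__vm_flags |= flags;
}

static inline void vm_flags_clear(struct vma_sketch *vma, unsigned long flags)
{
	vma->__vm_flags &= ~flags;
}
```

With this shape, a grep for `__vm_flags` outside the helpers finds every remaining direct modification, which is the property being argued for above.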

-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484468.751443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUu-0004FB-8K; Thu, 26 Jan 2023 05:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484468.751443; Thu, 26 Jan 2023 05:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUu-0004F4-3i; Thu, 26 Jan 2023 05:08:28 +0000
Received: by outflank-mailman (input) for mailman id 484468;
 Wed, 25 Jan 2023 16:50:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EtGL=5W=google.com=surenb@srs-se1.protection.inumbo.net>)
 id 1pKiyN-0006pE-1q
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 16:50:07 +0000
Received: from mail-yw1-x112c.google.com (mail-yw1-x112c.google.com
 [2607:f8b0:4864:20::112c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5072b1bf-9cd0-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 17:50:04 +0100 (CET)
Received: by mail-yw1-x112c.google.com with SMTP id
 00721157ae682-4ff1fa82bbbso225474377b3.10
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 08:50:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5072b1bf-9cd0-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=9oBkpwDuDP6Raat/HRHsNpx8MqzIjjZSJojQEumXoyE=;
        b=telMSbMEB1I/JdT8E+AOlYPkvLPDxj1UCnywL1f/zz+GxB6zyzUgN+Q3Q48l8AYc8X
         qS5rmRFlzboD9JIVNM4srPZju+Dc0OvfYTnyPjpp4sHdEutwmKfLXtqzcN+4YmFBRVgu
         A3oTpxaFO/E/7LaM1iu3pe6Iv0CmRV+MC10+sbVUAqr2eeBPD17IhS27buTMH3bCJoMu
         fEQq5XzvwDAqtF1fp8r2H4g3vh98tijISo+CkjBRkXWvZWGfdkjpWZzF8VgIKtCc5sLs
         +bZFNae0CmrxcmnBVuga8Dg0CQRlWnW+6m8p+riuFjzQ8BSrHATHKU3+MoO2RS3dWuby
         cPfQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=9oBkpwDuDP6Raat/HRHsNpx8MqzIjjZSJojQEumXoyE=;
        b=AqJ+wBTB+NFPv537epxKJFlmvwb75510fFeSG/V3LSv0+E3LFE7n840BN0D43GQoI1
         q9EuikDsPO6RvrGjMb/E8DsCkVh0pgMy+6s+izDxqmswf9Vmwvd/t/RYDyb6i6pT2lg3
         J5DB8Av58putBMilZA9qzh8k8MPON/CaBUL1fKe+Sx7nNB+0UkzO3ly2UhSG2iaoKAJP
         6H3/OCx4nenvUOZllMbwGgB4esi+30zFeVxCb0HXNGMG/6AjGlXjTEPLjMoomWzoi9bZ
         og1nBjlKBPbBPsekdHkL623KKd/GgLqwCj1wEbTQwjDMLSrV0e/XrcsswmI8dSrcPpgu
         S+RA==
X-Gm-Message-State: AFqh2kpwF8aF16UwuMl0CuF5GITkCCYO4hcKuYN6DxC1vNSo/FLepOGZ
	jy5mKbyoYS43aJNkjVIO0cs3MS4sfTscejMP3gMsHw==
X-Google-Smtp-Source: AMrXdXsah5c2WKVW38d6ZSr7pPop+n/mEJC53KDRGLQOXrHcU20AK6F0ktCN8HrtzVAtxjIN7wyafP19A/nGyLOtNU8=
X-Received: by 2002:a81:1d2:0:b0:433:f1c0:3f1c with SMTP id
 201-20020a8101d2000000b00433f1c03f1cmr4401576ywb.438.1674665403087; Wed, 25
 Jan 2023 08:50:03 -0800 (PST)
MIME-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-2-surenb@google.com>
 <Y9Dx0cPXF2yoLwww@hirez.programming.kicks-ass.net>
In-Reply-To: <Y9Dx0cPXF2yoLwww@hirez.programming.kicks-ass.net>
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 25 Jan 2023 08:49:50 -0800
Message-ID: <CAJuCfpEcVCZaCGzc-Wim25eaV5e6YG1YJAAdKwZ6JHViB0z8aw@mail.gmail.com>
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
To: Peter Zijlstra <peterz@infradead.org>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com, 
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, 
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org, 
	liam.howlett@oracle.com, ldufour@linux.ibm.com, paulmck@kernel.org, 
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 1:10 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Jan 25, 2023 at 12:38:46AM -0800, Suren Baghdasaryan wrote:
>
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 2d6d790d9bed..6c7c70bf50dd 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -491,7 +491,13 @@ struct vm_area_struct {
> >        * See vmf_insert_mixed_prot() for discussion.
> >        */
> >       pgprot_t vm_page_prot;
> > -     unsigned long vm_flags;         /* Flags, see mm.h. */
> > +
> > +     /*
> > +      * Flags, see mm.h.
> > +      * WARNING! Do not modify directly.
> > +      * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
> > +      */
> > +     unsigned long vm_flags;
>
> We have __private and ACCESS_PRIVATE() to help with enforcing this.

Thanks for pointing this out, Peter! I guess for that I'll need to
convert all read accesses and provide get_vm_flags() too? That will
cause some additional churn (a quick search shows 801 hits over 248
files) but maybe it's worth it? I think Michal suggested that too in
another patch. Should I do that while we are at it?
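
As a sketch of the `__private`/`ACCESS_PRIVATE()` idea Peter refers to: under sparse (`__CHECKER__`), `__private` expands to an attribute that makes direct dereference of the member a warning, and `ACCESS_PRIVATE()` casts through the member's address to bypass it inside the sanctioned helpers. Outside sparse both are effectively no-ops, which is what the fallback branch below models; the struct and helper names are made up for this example.

```c
#include <assert.h>

/* Userspace mimic of the kernel's sparse annotations. Under a real sparse
 * run, __CHECKER__ is defined and direct use of a __private member warns;
 * in a plain compile the macros reduce to nothing. */
#ifdef __CHECKER__
# define __private	__attribute__((noderef))
# define ACCESS_PRIVATE(p, member) \
	(*((typeof((p)->member) __force *)&(p)->member))
#else
# define __private
# define ACCESS_PRIVATE(p, member) ((p)->member)
#endif

struct vma_sketch {
	unsigned long __private vm_flags;  /* direct use warns under sparse */
};

/* All reads and writes go through helpers that use ACCESS_PRIVATE(). */
static unsigned long get_vm_flags(const struct vma_sketch *vma)
{
	return ACCESS_PRIVATE(vma, vm_flags);
}

static void set_vm_flags(struct vma_sketch *vma, unsigned long flags)
{
	ACCESS_PRIVATE(vma, vm_flags) |= flags;
}
```

The churn mentioned above comes from exactly this: once the member is `__private`, every existing read site also has to move to a helper like `get_vm_flags()`.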

>


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484477.751456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUu-0004RN-VG; Thu, 26 Jan 2023 05:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484477.751456; Thu, 26 Jan 2023 05:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUu-0004QR-Ov; Thu, 26 Jan 2023 05:08:28 +0000
Received: by outflank-mailman (input) for mailman id 484477;
 Wed, 25 Jan 2023 16:58:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EtGL=5W=google.com=surenb@srs-se1.protection.inumbo.net>)
 id 1pKj64-0007nJ-FA
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 16:58:04 +0000
Received: from mail-yb1-xb2c.google.com (mail-yb1-xb2c.google.com
 [2607:f8b0:4864:20::b2c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6d2c35dc-9cd1-11ed-b8d1-410ff93cb8f0;
 Wed, 25 Jan 2023 17:58:01 +0100 (CET)
Received: by mail-yb1-xb2c.google.com with SMTP id 129so19773684ybb.0
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 08:58:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d2c35dc-9cd1-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=G/HaQvIeifLKJJp6JsBUTA/Cu0PG4HCFXOBV2LCBpGc=;
        b=ngnRJahtqVxSN6UIjLqWU+1aTxdIhv7re4hDfLrN8LXncI6BNG1ZwqGy4tzOTLxPiH
         4VWOIuHk0tMtzp4F11bJV5cn9qN75X4TjPW2iyn4nOIj2Y2qsTa18wfmHkOGhLKlmZoS
         vXpRc/g/QGDtReYNnPGF6rlvG7jz8sM8MsRTR7LhXioKCpHgDgO7fKBddw0dzS48JeET
         78piPZYcyZX9vIYyfAzpklatM6XnaaFqr/b+XLZlxI1Y0y+wCHWFOsnbVElEi6T+CQOb
         4zMPa96gAXD4xbmjnQv7nMUvo5bgJJEK/r3Eu0NIV8f4SPGC7eT5IU6zl2RkKdozGWcK
         sknQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=G/HaQvIeifLKJJp6JsBUTA/Cu0PG4HCFXOBV2LCBpGc=;
        b=jixSNznKDtjoCkHUUoC9KNUp/POCCGmHBLelsXr5Qgb2f36aaOphvnNT2v94f+PamY
         HinbRwFK/lJjrm2JTm0CsYX4oQ170DlxXbSbph/FWqVqhf3rdtmAg51urAaEGHtE4kMJ
         d/X3CVJvlRu/6sQ3t4khAivWPeQA6zZzVxIEVb4Flz/r3tcEhtGhqderl7e+TIJgAe/F
         SFgYPRAvpjhPtmalN9CGjiV7yYUJVzIgVrjfU9fee8w+HKiarkN7U5Y9+MuIQoCo9/IV
         DlOBJFwJbllJ586oX6gZ2nddcR0BR8LaLY0mYS4IxKdOotCrv2hCcRWMw4EbiHjhCRY0
         nSXg==
X-Gm-Message-State: AO0yUKXKwWSusFvVE2GwX3t0Rzbg+UJOg90awf4tFczYT/mvFaDpvPsW
	IqNHGE/rt/XDqsG2EmkzzXDdkEKoAd/wcGTwvaYKig==
X-Google-Smtp-Source: AK7set97wrc43TM0/ovVONCfpWDKrtFzkyYmNKL3FW97qdU/yOGMPZssBTWD4MRF2UhZkPfEz4JLEj3/xF1loldnjYM=
X-Received: by 2002:a25:ad02:0:b0:80b:6fd3:84d3 with SMTP id
 y2-20020a25ad02000000b0080b6fd384d3mr714673ybi.316.1674665880846; Wed, 25 Jan
 2023 08:58:00 -0800 (PST)
MIME-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-5-surenb@google.com>
 <Y9D4rWEsajV/WfNx@dhcp22.suse.cz>
In-Reply-To: <Y9D4rWEsajV/WfNx@dhcp22.suse.cz>
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 25 Jan 2023 08:57:48 -0800
Message-ID: <CAJuCfpGd2eG0RSMte9OVgsRVWPo+Sj7+t8EOo8o_iKzZoh1MXA@mail.gmail.com>
Subject: Re: [PATCH v2 4/6] mm: replace vma->vm_flags indirect modification in ksm_madvise
To: Michal Hocko <mhocko@suse.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com, 
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net, 
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, 
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org, 
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 1:38 AM 'Michal Hocko' via kernel-team
<kernel-team@android.com> wrote:
>
> On Wed 25-01-23 00:38:49, Suren Baghdasaryan wrote:
> > Replace indirect modifications to vma->vm_flags with calls to modifier
> > functions to be able to track flag changes and to keep vma locking
> > correctness. Add a BUG_ON check in ksm_madvise() to catch indirect
> > vm_flags modification attempts.
>
> Those BUG_ONs scream too much IMHO. KSM is MM-internal code, so I
> guess we should be willing to trust it.

Yes, but I really want to prevent indirect misuse, since these were not
easy to find. If you feel strongly about it I will remove them, or if
you have a better suggestion I'm all for it.
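
To make the shape of the check concrete, here is one way such a guard can work, sketched in userspace C; everything here (names, the exact condition) is illustrative rather than the patch's actual code. A function that takes the flags word by pointer asserts that the caller passed a local copy rather than the vma field itself, so the real field can only change through the tracked modifier helpers.

```c
#include <assert.h>

/* Illustrative sketch: vma_sketch and madvise_sketch are made-up names. */

struct vma_sketch {
	unsigned long vm_flags;
};

static int madvise_sketch(struct vma_sketch *vma, unsigned long *vm_flags)
{
	/* Catch callers handing us vma->vm_flags directly: that would be an
	 * indirect modification bypassing the modifier helpers. */
	assert(vm_flags != &vma->vm_flags);

	*vm_flags |= 0x100UL;  /* e.g. a VM_MERGEABLE-style bit */
	return 0;
}
```

The caller then operates on a local copy and writes it back through a modifier helper, which is why a bypassing caller is hard to spot by inspection alone and why a runtime check was added.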

>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>
> Acked-by: Michal Hocko <mhocko@suse.com>
> --
> Michal Hocko
> SUSE Labs
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
>


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484475.751449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUu-0004Io-KP; Thu, 26 Jan 2023 05:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484475.751449; Thu, 26 Jan 2023 05:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUu-0004IT-Da; Thu, 26 Jan 2023 05:08:28 +0000
Received: by outflank-mailman (input) for mailman id 484475;
 Wed, 25 Jan 2023 16:55:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EtGL=5W=google.com=surenb@srs-se1.protection.inumbo.net>)
 id 1pKj3W-0007d7-MQ
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 16:55:26 +0000
Received: from mail-yw1-x1131.google.com (mail-yw1-x1131.google.com
 [2607:f8b0:4864:20::1131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 101f0543-9cd1-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 17:55:25 +0100 (CET)
Received: by mail-yw1-x1131.google.com with SMTP id
 00721157ae682-5063029246dso104354547b3.6
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 08:55:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 101f0543-9cd1-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=49H+5dcfdqsESvQ4qNxkOgTxyGkRYtaMvRra0jMgwoI=;
        b=S433Z2p/j0oEnqzhdMbCoBVS6yLsoVuhPxi7PQkxnyRK8RoE5tlGqQCWFiSx/y2EvP
         VAUU6f5te6yd0FF3QKoXQk59fXSc40Ni99FjmOhoS8uoMeuhfG+AT8hg6vTHrAcNtls7
         iMIrnMdjSfEnhMBuERTLvpM6IdVwE9wobd2Cb3XY0VBKEUGGTBatMri522T6kyoxGdRv
         R/YHI2CzXA+66ssNNRD0kBHVPl8TI5ear6XOrgI2lTdtmT0YpMoSddNSDob6lG0ADoLk
         +xTQAwFZ3i/oeH+aroOYDwlOGwYRh8N7mvnq0HFpqRIYUl+jcTUyG7X+0g74P9WaFx1x
         4nDw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=49H+5dcfdqsESvQ4qNxkOgTxyGkRYtaMvRra0jMgwoI=;
        b=Cam+JNGHMP9vTwVqJcCZ4xX1sudekRnJ/c1A14tZQI1ueXajOzqORvOYtQ5+OQsKKW
         TM3haIIXYoOElOcHk2cZR5FTdT19xPJqdl5EFAKNt5B5QcCrQoMcy+l4StqDZ/KbLq3A
         +1K9Bz3iJ6PiJ7hEzc2sxtjgMhdKaCNxhDi5dtGO5YYTHWXJ0LXYBrvpKHCK9OyCz8UB
         Bkf4Mu38jbMzBwDomzHyq9IMWd7T7rCWlrCBpHTNCOGzgB8AFWt7BRQ2YP6hSa1kY1LL
         lIeIE3PUpTrpILlgEBjdx3oNxsA6R/3JAq0vW06EpnQTSzG55RmUjtWYL+Rw6GfQK+v9
         wnEw==
X-Gm-Message-State: AO0yUKWtP6n13Bgn5Q9IK6QKQsKHKFj3MpkOrDKfExtDecFFgA4KaOld
	pGFAe0R71QFUx4gBqvCsPwuxkCmt3cfRdKMG5OQyQA==
X-Google-Smtp-Source: AK7set/FwLbWx6yiqkzvjcGbe3t+VxnD8xj+/iXHUbjRIwB0jRkvrcCR+VhH4DhOMzxTzJJlzd0h1TxaYfABci/YfRk=
X-Received: by 2002:a0d:d456:0:b0:507:26dc:ebd with SMTP id
 w83-20020a0dd456000000b0050726dc0ebdmr239978ywd.455.1674665724181; Wed, 25
 Jan 2023 08:55:24 -0800 (PST)
MIME-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-4-surenb@google.com>
 <Y9D2zXpy+9iyZNun@dhcp22.suse.cz>
In-Reply-To: <Y9D2zXpy+9iyZNun@dhcp22.suse.cz>
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 25 Jan 2023 08:55:12 -0800
Message-ID: <CAJuCfpG7KWnj3J_t4nN1R4gfiM5jgjsiTfL55hNa=Uvz4E835g@mail.gmail.com>
Subject: Re: [PATCH v2 3/6] mm: replace vma->vm_flags direct modifications
 with modifier calls
To: Michal Hocko <mhocko@suse.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com, 
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net, 
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, 
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org, 
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 1:30 AM 'Michal Hocko' via kernel-team
<kernel-team@android.com> wrote:
>
> On Wed 25-01-23 00:38:48, Suren Baghdasaryan wrote:
> > Replace direct modifications to vma->vm_flags with calls to modifier
> > functions to be able to track flag changes and to keep vma locking
> > correctness.
>
> Is this a manual (git grep) based work or have you used Coccinelle for
> the patch generation?

It was a manual "search and replace" and in the process I temporarily
renamed vm_flags to ensure I did not miss any usage.

>
> My potentially incomplete check
> $ git grep ">[[:space:]]*vm_flags[[:space:]]*[&|^]="
>
> shows that nothing should be left after this. There is still quite a lot
> of direct checks of the flags (more than 600). Maybe it would be good to
> make flags accessible only via accessors, which would also prevent any
> future direct setting of those flags in an uncontrolled way as well.

Yes, I think Peter's suggestion in the first patch would also require
that. It is much more churn, but probably worth it for future
maintenance. I'll add a patch which converts all readers as well.

>
> Anyway
> Acked-by: Michal Hocko <mhocko@suse.com>

Thanks for all the reviews!

> --
> Michal Hocko
> SUSE Labs
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
>


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484531.751481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUw-0004yi-HT; Thu, 26 Jan 2023 05:08:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484531.751481; Thu, 26 Jan 2023 05:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUw-0004vj-8u; Thu, 26 Jan 2023 05:08:30 +0000
Received: by outflank-mailman (input) for mailman id 484531;
 Wed, 25 Jan 2023 18:39:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NNIG=5W=infradead.org=willy@srs-se1.protection.inumbo.net>)
 id 1pKkfq-0005i0-RB
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 18:39:11 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8b2f30b6-9cdf-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 19:39:05 +0100 (CET)
Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1pKkaL-0066XZ-MG; Wed, 25 Jan 2023 18:33:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b2f30b6-9cdf-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=WhE3p5nMlMDjaV/irGLArTSJUCL3MW0izpTScdbc1OU=; b=mx74wVm8uyXRMzRkznkmPv14f3
	DM5mkZY0d2Y454KV/p1DBO6bVWUBYmLnjNwq5de4oOCp4K1tzlZ+pYhblaAnUsgNim9Cg22n4lXC7
	YVziRodKlXV3h1dcA4wCil3iZ6I2W+LteukgjO5nFw9bnJFOnLJvx0ni4Ju6wCzLw38ztU2xqwXDF
	Hz7a4pCnrIPatIpdvDFmrtxTdMVr7eH9j59LSpJj79ys6zGb7fhMV69syzXZoxm/q67WE7IxkNx+x
	YR6wIJfVUlSwCzFkthS2vqR79CZmfDxbuiYrAaQdlm8CD16FJpwzTtCFj0zPf7AtdmoIV8Mlhhb7E
	KwDK04uQ==;
Date: Wed, 25 Jan 2023 18:33:25 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
Message-ID: <Y9F19QEDX5d/44EV@casper.infradead.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-2-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-2-surenb@google.com>

On Wed, Jan 25, 2023 at 12:38:46AM -0800, Suren Baghdasaryan wrote:
> +/* Use when VMA is not part of the VMA tree and needs no locking */
> +static inline void init_vm_flags(struct vm_area_struct *vma,
> +				 unsigned long flags)
> +{
> +	vma->vm_flags = flags;

vm_flags are supposed to have type vm_flags_t.  That's not been
fully realised yet, but perhaps we could avoid making it worse?

>  	pgprot_t vm_page_prot;
> -	unsigned long vm_flags;		/* Flags, see mm.h. */
> +
> +	/*
> +	 * Flags, see mm.h.
> +	 * WARNING! Do not modify directly.
> +	 * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
> +	 */
> +	unsigned long vm_flags;

Including changing this line to vm_flags_t


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:08:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:08:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484543.751501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUx-0005Gs-KB; Thu, 26 Jan 2023 05:08:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484543.751501; Thu, 26 Jan 2023 05:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuUx-0005DF-4V; Thu, 26 Jan 2023 05:08:31 +0000
Received: by outflank-mailman (input) for mailman id 484543;
 Wed, 25 Jan 2023 19:22:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EtGL=5W=google.com=surenb@srs-se1.protection.inumbo.net>)
 id 1pKlLW-00031j-EF
 for xen-devel@lists.xenproject.org; Wed, 25 Jan 2023 19:22:10 +0000
Received: from mail-yb1-xb2d.google.com (mail-yb1-xb2d.google.com
 [2607:f8b0:4864:20::b2d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8f4831d9-9ce5-11ed-91b6-6bf2151ebd3b;
 Wed, 25 Jan 2023 20:22:09 +0100 (CET)
Received: by mail-yb1-xb2d.google.com with SMTP id h5so7435313ybj.8
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 11:22:09 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f4831d9-9ce5-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=azRxZ4iz/UmvW0JqPrqtZcKNmaFt9MhDUKzgc4KSKbs=;
        b=idbJO946IIJ5xclJl8K3ScrKphDS9TF1tg1AgsB8MPco32Ta3+qR61NnmqeUtsnx6v
         goCUeETP5lzAzmASwAR43MDdzy2mxaLRi4K6yZE3Q+T5C1FUbuWCmQDSDtqKn5s2e6iw
         g3L5+Ey8VBu5ZDVtB0a1Axssd66xmU+yYweeF+hE6jQwYilC1xMJq5jCj+NjIEVUsfL4
         tw7nY9DM/DzPYd8vrkO+1JZY6FVSYBMjhRJz+WIxF8k5j1BI+Qpdb88Is7APtmnyodC7
         tcl2VYHmnzhTEoT5QVoxpB++zHDpcXCuz10aLB8wrCQNw685txpV1MvZDG3GANSg7b1j
         9Xwg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=azRxZ4iz/UmvW0JqPrqtZcKNmaFt9MhDUKzgc4KSKbs=;
        b=atf1gOVlaXFtpQHqq5ihDQ5QScrymePGWzyh75pIHqvb1DEM4OYSIDbro7aZ3T+vyZ
         v+AzqDKIb7Vlg/7gvN8/A4SL7TQpAmT6E20Xg4RfJoCQ+s+RKsxL4ThQrFgMvZdlqeVO
         +U3zg4U4py0H6r8n/dFKLMNV2xQEBFzJhZN1AZRk1cdNDQOhDZvcrqJpA8CYmYtprKKC
         mmPa/a0oCAD6W4y7YXoFgg/iwIWw1CCFDRY0YGn6544i8whZB2PXnInf8moh3kDWGQWy
         TLbK9vkK5qG9z0TKH8+hSV0/niZduQ3b2AFfq8umxTOHlka4wwjmqS7rykn33kicgy3j
         rapQ==
X-Gm-Message-State: AO0yUKXa49kl8aj1dnd6GnGwDDfq3ZbuFXxNzu60UN51KifwD/fMXSmi
	JuN4q4BFTX0Fg9Fxs4beMebzVljpgOleeP2pM5jVMA==
X-Google-Smtp-Source: AK7set9Keebho/efVX9+GO9rxFk2PCEOZWbQBvt6Ld/Tpsz3uuvCSMXu6oGPdxC/ITtQ87qnzhMkMAEzeywlAhveKx4=
X-Received: by 2002:a25:c247:0:b0:80b:6201:bee7 with SMTP id
 s68-20020a25c247000000b0080b6201bee7mr946541ybf.340.1674674527537; Wed, 25
 Jan 2023 11:22:07 -0800 (PST)
MIME-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-2-surenb@google.com>
 <Y9Dx0cPXF2yoLwww@hirez.programming.kicks-ass.net> <CAJuCfpEcVCZaCGzc-Wim25eaV5e6YG1YJAAdKwZ6JHViB0z8aw@mail.gmail.com>
 <Y9F28J9njAtwifuL@casper.infradead.org>
In-Reply-To: <Y9F28J9njAtwifuL@casper.infradead.org>
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 25 Jan 2023 11:21:56 -0800
Message-ID: <CAJuCfpHO7g-5GZep0e7r=dFTBhVHpN3R_pHMGOqetgrKyYzMFQ@mail.gmail.com>
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
To: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>, akpm@linux-foundation.org, michel@lespinasse.org, 
	jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, 
	mgorman@techsingularity.net, dave@stgolabs.net, liam.howlett@oracle.com, 
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org, 
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 10:37 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jan 25, 2023 at 08:49:50AM -0800, Suren Baghdasaryan wrote:
> > On Wed, Jan 25, 2023 at 1:10 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > > > +     /*
> > > > +      * Flags, see mm.h.
> > > > +      * WARNING! Do not modify directly.
> > > > +      * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
> > > > +      */
> > > > +     unsigned long vm_flags;
> > >
> > > We have __private and ACCESS_PRIVATE() to help with enforcing this.
> >
> > Thanks for pointing this out, Peter! I guess for that I'll need to
> > convert all read accesses and provide get_vm_flags() too? That will
> > cause some additional churn (a quick search shows 801 hits over 248
> > files) but maybe it's worth it? I think Michal suggested that too in
> > another patch. Should I do that while we are at it?
>
> Here's a trick I saw somewhere in the VFS:
>
>         union {
>                 const vm_flags_t vm_flags;
>                 vm_flags_t __private __vm_flags;
>         };
>
> Now it can be read by anybody but written only by those using
> ACCESS_PRIVATE.

Huh, this is quite nice! I think it does not save us from the cases
where vma->vm_flags is passed by reference and modified indirectly,
like in ksm_madvise(). Though maybe such use cases are so rare (I found
only 2) that we can ignore this?


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:13:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:13:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484743.751543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuZo-0002MD-4g; Thu, 26 Jan 2023 05:13:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484743.751543; Thu, 26 Jan 2023 05:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKuZo-0002M6-1w; Thu, 26 Jan 2023 05:13:32 +0000
Received: by outflank-mailman (input) for mailman id 484743;
 Thu, 26 Jan 2023 05:13:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0+3p=5X=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pKuZm-0002M0-M3
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 05:13:30 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2a2f8711-9d38-11ed-91b6-6bf2151ebd3b;
 Thu, 26 Jan 2023 06:13:28 +0100 (CET)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id A89635C05B9;
 Thu, 26 Jan 2023 00:13:26 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute6.internal (MEProxy); Thu, 26 Jan 2023 00:13:26 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 26 Jan 2023 00:13:25 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a2f8711-9d38-11ed-91b6-6bf2151ebd3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1674710006; x=1674796406; bh=si2CIeQHBvEuBg0GKAq7c57w/l0f6keWtZb
	ZENwz+Cg=; b=TU/TLoenO7nU4Zgag4pHQ9NDhmS5vEblIyIC4EpELdyspPyxEmK
	vsUJEPyhrTPE2dHGgRwzvEoQt7hIipuxkPcGo2FmyKYxNj/fstwpizlEQ8zwJByI
	0bohzchjPAywJTCw39Xs4U0IFz1+7l014ULWIRbSOtCzfh1ZER9HkH4B+bXzyF/j
	YRJqE00gCT/FzHmm/qySX6Vlg9SafclW/NaPbmwgy6SrVze5hw48f3JKwCnkfF1x
	QJIRy+lruJ/YYL+Sp8OJcOP8tjCSj8MzAMHr8OgrNU74iUyuPWeWNRm9ulRe9lzB
	HvsfiwG0rows11oEdF1Ncd3toEisNiWfrHg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:message-id:mime-version:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1674710006; x=1674796406; bh=si2CIeQHBvEuB
	g0GKAq7c57w/l0f6keWtZbZENwz+Cg=; b=YO8NH5dPFIM8uFLgeyEWKZ+X8ljlj
	k3N28Pkp8dSwiqlIzicRI2tK2elnHGj/Yb3giv7m8wRKUdjhwd1HGficyhm3xQXU
	fBbtUvMJ5g0qfmPvqAwlBUnOhxOzSzoVf4i2Vji1Bm6lRCM1kq2lWAMP2jVw+yne
	lAeb1hoMuwdYRyb7YXYgjXnPwWsvs+AwSTagAK4yYzfHMGg7OzWk8h9k+ay1oD+W
	7pN8Y5DiEWTmVZPTwBnHjKk7dNzdyeNtQ4C7n7QeI5o5i7JXhw83fkj5xCSem3QD
	c5UgP7hVWG0PUqcEmfiQw4fzTYm2aQgLgz4kS+Z08yhoCr02DiDhZnakQ==
X-ME-Sender: <xms:9gvSY9qUe1TGob5fDDdTEgKUKT4F17CK9aCpQW6yWeiTHH14Q6pB3Q>
    <xme:9gvSY_pYSI36pRCuodyysmPy9csENTepU83j7_zze7YtZhpSIBprTHXeGniYN58fJ
    mFt3VYeJO2wbA>
X-ME-Received: <xmr:9gvSY6PWlvHydTEMQJfYIFEyf3M-uHerzRtMifpON62Wb9v7m-FqF7YKOJS00crMrEuXW2G1A2epa-nPeforirbGSUQoTpvEc0WT01SNv0vAMlvWYWqm>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddvfedgjeekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeelkefh
    udelteelleelteetveeffeetffekteetjeehlefggeekleeghefhtdehvdenucevlhhush
    htvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhes
    ihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:9gvSY45BN99xpV5q7YvHznyVgbn9nAVqJyLaKYLBh1OwZpTG8Okm_A>
    <xmx:9gvSY85BmH5WKkTZbKZ27ug5ISgPJuv7L9F2nMCwQ0-YCXZEahiizQ>
    <xmx:9gvSYwi1wYAeX0qR_y2msvZUkO5Nahg38NO9h5jZ5Sj35X8LVlX4aw>
    <xmx:9gvSY6Sws963j1I7Wb2NkG1BTO3AlsI8RiSo93hBywUHMH9goeaEaw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/python: change 's#' size type for Python >= 3.10
Date: Thu, 26 Jan 2023 06:13:10 +0100
Message-Id: <20230126051310.4149074-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.37.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Python < 3.10 by default uses the 'int' type for data+size string types
(s#) unless PY_SSIZE_T_CLEAN is defined, in which case it uses
Py_ssize_t. The former behavior was removed in Python 3.10: it is now
required to define PY_SSIZE_T_CLEAN before including Python.h and to use
Py_ssize_t for the length argument. The PY_SSIZE_T_CLEAN behavior has
been supported since Python 2.5.

Adjust bindings accordingly.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 tools/python/xen/lowlevel/xc/xc.c | 3 ++-
 tools/python/xen/lowlevel/xs/xs.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index fd008610329b..cfb2734a992b 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -4,6 +4,7 @@
  * Copyright (c) 2003-2004, K A Fraser (University of Cambridge)
  */
 
+#define PY_SSIZE_T_CLEAN
 #include <Python.h>
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
@@ -1774,7 +1775,7 @@ static PyObject *pyflask_load(PyObject *self, PyObject *args, PyObject *kwds)
 {
     xc_interface *xc_handle;
     char *policy;
-    uint32_t len;
+    Py_ssize_t len;
     int ret;
 
     static char *kwd_list[] = { "policy", NULL };
diff --git a/tools/python/xen/lowlevel/xs/xs.c b/tools/python/xen/lowlevel/xs/xs.c
index 0dad7fa5f2fc..3ba5a8b893d9 100644
--- a/tools/python/xen/lowlevel/xs/xs.c
+++ b/tools/python/xen/lowlevel/xs/xs.c
@@ -18,6 +18,7 @@
  * Copyright (C) 2005 XenSource Ltd.
  */
 
+#define PY_SSIZE_T_CLEAN
 #include <Python.h>
 
 #include <stdbool.h>
@@ -141,7 +142,7 @@ static PyObject *xspy_write(XsHandle *self, PyObject *args)
     char *thstr;
     char *path;
     char *data;
-    int data_n;
+    Py_ssize_t data_n;
     bool result;
 
     if (!xh)
-- 
2.37.3



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:20:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:20:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484767.751552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKugF-0003zN-Qu; Thu, 26 Jan 2023 05:20:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484767.751552; Thu, 26 Jan 2023 05:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKugF-0003zG-Nv; Thu, 26 Jan 2023 05:20:11 +0000
Received: by outflank-mailman (input) for mailman id 484767;
 Thu, 26 Jan 2023 05:20:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKugE-0003z1-Aa; Thu, 26 Jan 2023 05:20:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKugE-0008SM-6I; Thu, 26 Jan 2023 05:20:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKugD-0008L6-Pj; Thu, 26 Jan 2023 05:20:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKugD-0001Io-PE; Thu, 26 Jan 2023 05:20:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CdW05b1cGTFWpcFqWab1GoWLYNxPfb/ACDgvJhg2hhc=; b=bwFRP52MDnxuJ1rQk7kTLMDMoP
	zjAyMFUEVtBUHDyJOVUqdPJjxWY+3+4gue14EQliyFgPFrEbkwC0pVl154J8dgH698fTGPMJp4T96
	CgeBB8f8yOy+ZNc9BWZYrHMMwbvnBjXLFmkeV/dfpYFU70yurtBwLA0iXssv4xdwPQR0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176132-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176132: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3b760245f74ab2022b1aa4da842c4545228c2e83
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Jan 2023 05:20:09 +0000

flight 176132 xen-unstable real [real]
flight 176138 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/176132/
http://logs.test-lab.xenproject.org/osstest/logs/176138/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail pass in 176138-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3b760245f74ab2022b1aa4da842c4545228c2e83
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    5 days
Failing since        176003  2023-01-20 17:40:27 Z    5 days   13 attempts
Testing same since   176121  2023-01-25 10:51:47 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1067 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:26:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:26:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484776.751563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKumf-0004sB-LO; Thu, 26 Jan 2023 05:26:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484776.751563; Thu, 26 Jan 2023 05:26:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKumf-0004s4-IR; Thu, 26 Jan 2023 05:26:49 +0000
Received: by outflank-mailman (input) for mailman id 484776;
 Thu, 26 Jan 2023 05:26:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+L1k=5X=bu.edu=alxndr@srs-se1.protection.inumbo.net>)
 id 1pKume-0004ry-NX
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 05:26:48 +0000
Received: from esa14.hc2706-39.iphmx.com (esa14.hc2706-39.iphmx.com
 [216.71.140.199]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 05373c0a-9d3a-11ed-91b6-6bf2151ebd3b;
 Thu, 26 Jan 2023 06:26:45 +0100 (CET)
Received: from mail-io1-f70.google.com ([209.85.166.70])
 by ob1.hc2706-39.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 26 Jan 2023 00:26:39 -0500
Received: by mail-io1-f70.google.com with SMTP id
 x12-20020a5d990c000000b00707d2f838acso355004iol.21
 for <xen-devel@lists.xenproject.org>; Wed, 25 Jan 2023 21:26:39 -0800 (PST)
Received: from mozz.bu.edu (mozz.bu.edu. [128.197.127.33])
 by smtp.gmail.com with ESMTPSA id
 i7-20020a05620a074700b006fed58fc1a3sm318746qki.119.2023.01.25.21.26.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Jan 2023 21:26:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05373c0a-9d3a-11ed-91b6-6bf2151ebd3b
X-IronPort-RemoteIP: 209.85.166.70
X-IronPort-MID: 275983199
X-IronPort-Reputation: None
X-IronPort-Listener: OutgoingMail
X-IronPort-SenderGroup: RELAY_GSUITE
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:hQlNG6wgJo8/h9P6htl6t+dzxyrEfRIJ4+MujC+fZmUNrF6WrkUPm
 msYUGiGO/uMYDH8Ktl3bYSx/EIPvsPWy4A1GwNtqy00HyNBpPSeOdnIdU2Y0wF+jyHgoOCLy
 +1EN7Es+ehtFie0Si+Fa+Sn9j8kkPnSHdIQMcacUghpXwhoVSw9vhxqnu89k+ZAjMOwRgiAo
 rsemeWGULOe82MyYz18B56r8ks156yr4m1A5zTSWNgQ1LPgvyhNZH4gDfzpR5fIatE8NvK3Q
 e/F0Ia48gvxl/v6Ior4+lpTWhRiro/6ZGBiuFIPM0SRqkEqShgJ70oOHKF0hXG7Ktm+t4sZJ
 N1l7fRcQOqyV0HGsL11vxJwSkmSMUDakVNuzLfWXcG7liX7n3XQL/pGVFkbIpIe+u9OK05O6
 +UZI3MzaDbeiLfjqF67YrEEasULKcDqOMYevSglw26BS/khRp/HTuPB4towMDUY3JgfW6aDI
 ZNHN3wwNHwsYDUWUrsTIJs6jOGknFH1bntVpE/9Sa8fuTeOnVwqiem8WDbTUvChfdR2oFSjm
 mLL9lb6KEkBLZ+TkyXQpxpAgceKx0sXQrk6BLC+s/JnnlCX7mgSEwENE0u2p+GjjUyzUM4ZL
 FYbkhfCtoA3/U2vC9j6Bli2/ybCsRkbVN5dVeY97Wlh15bp3upQPUBcJhYpVTDsnJNeqeACv
 rNRo+7UOA==
IronPort-HdrOrdr: A9a23:J1cOPqtpJLzyizczaHftfL8I7skDR9V00zEX/kB9WHVpm6uj+P
 xG/c5rrCMc7Qx7ZJhOo6HjBEDtex3hHP1OjbX5Q43SPzUO0VHAROsO0WKI+Vzd8kPFh4lg/J
 YlX69iCMDhSXhW5PyKhjVQyuxQpeVvJprY4ds35BpWLT1XVw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=bu.edu; s=s1gsbu;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FG0/FU6uw0vDtarwt5+2HOJFgKERR2bBzA1uds1Yh7k=;
        b=GmkGejaNeTsdycAL/X8WF4cDWl6UjNaTGkTsucW4NaBQwCdMeQH+nl0+bKC0uHNuq3
         egx929x1Ij38QCLWMQ6sQcuHlBItNc/pmEnJ1c7ogEnQz8eRKDOnYe9jAEYdQCqmmNS0
         uzkqay0teZNBPbU8GooA5WbdJ9Da094j+Facglk6Uf0TdqZwlKAttsdJew0KB8EK2Lx2
         ggJyvXde6UBYjvc7G5xcel4pjTDCI8b6CNF/rMEPpax3sqWVQkR0XBXgvkFS+OPlzr8N
         qlb8UNKVxf23Gok6Gh66cNAZJrn85bClKvewt6Ru/E8mzPriW/3FPXTaku9tG6Ira1Br
         Jiog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=FG0/FU6uw0vDtarwt5+2HOJFgKERR2bBzA1uds1Yh7k=;
        b=1NtBRf1WdHUFi3C8MwmtckV7O43b5P5zrSn8lOV5ldzrpikHXb/OJ76rVM1a19XEd2
         0WImJyy6NIPyXceYMXC7spWQRmR6+jkdcAkqaZPym6o1JlASy/z5Mczawo7wKm9oo9MJ
         U7V1zBjOQpA6HOupRfRU/mfh1rjw5aC4N37XlLDRMIxUoTCn9r+izwI0TTGcv1kKnWZU
         /vmsakW3jHhK3cC4jEc41BY8joiiMeniyp4t43vpnb+Zr3/7ZpanPKbnvc+3EFm4xEU0
         WIzSOfP+SeQkfMrtlkaVLZn261nMxiG7FgZvxhvVqW2IRZCMdaCXdvYZjk16P51yIjGR
         /3Ww==
X-Gm-Message-State: AFqh2kopC+CUAN1x28pEnO9eYpB1tAS1K0qaOJhWKCFdHPbiG7r5Hjr1
	f+JzjLUPE5ZqRWxdHO14rGCqxc7qMrHFien9KMwHQmzhjcveeM6gNU4uzahSFz/ULpWFNyf0fgS
	mmVAcNHPnystEAA9UCL3+WKhT7+KUHgKBqvC5VYHyVg==
X-Received: by 2002:ac8:5545:0:b0:3b6:2f0d:1925 with SMTP id o5-20020ac85545000000b003b62f0d1925mr49128540qtr.64.1674710788444;
        Wed, 25 Jan 2023 21:26:28 -0800 (PST)
X-Google-Smtp-Source: AMrXdXully7MjMRtDtbZtceH3RS9913TvvPBNRP2ZPLGLwxrSXUh8elrZBVGZpyyxkhK/A9rW0V7RQ==
X-Received: by 2002:ac8:5545:0:b0:3b6:2f0d:1925 with SMTP id o5-20020ac85545000000b003b62f0d1925mr49128489qtr.64.1674710788085;
        Wed, 25 Jan 2023 21:26:28 -0800 (PST)
From: Alexander Bulekov <alxndr@bu.edu>
To: qemu-devel@nongnu.org
Cc: Alexander Bulekov <alxndr@bu.edu>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Mauro Matteo Cascella <mcascell@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Bandan Das <bsd@redhat.com>,
	"Edgar E . Iglesias" <edgar.iglesias@gmail.com>,
	Darren Kenny <darren.kenny@oracle.com>,
	Bin Meng <bin.meng@windriver.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Jon Maloy <jmaloy@redhat.com>,
	Siqi Chen <coc.cyqh@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Amit Shah <amit@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Keith Busch <kbusch@kernel.org>,
	Klaus Jensen <its@irrelevant.dk>,
	Fam Zheng <fam@euphon.net>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs),
	qemu-block@nongnu.org (open list:virtio-blk),
	qemu-arm@nongnu.org (open list:i.MX31 (kzm)),
	qemu-ppc@nongnu.org (open list:Old World (g3beige))
Subject: [PATCH v5 4/4] hw: replace most qemu_bh_new calls with qemu_bh_new_guarded
Date: Thu, 26 Jan 2023 00:25:58 -0500
Message-Id: <20230126052558.572634-5-alxndr@bu.edu>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230126052558.572634-1-alxndr@bu.edu>
References: <20230126052558.572634-1-alxndr@bu.edu>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This protects devices from bh->mmio reentrancy issues: bottom halves created
with qemu_bh_new_guarded()/aio_bh_new_guarded() honour the device's
mem_reentrancy_guard, so a BH cannot re-enter device code that is already
handling MMIO.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 hw/9pfs/xen-9p-backend.c        | 4 +++-
 hw/block/dataplane/virtio-blk.c | 3 ++-
 hw/block/dataplane/xen-block.c  | 5 +++--
 hw/block/virtio-blk.c           | 5 +++--
 hw/char/virtio-serial-bus.c     | 3 ++-
 hw/display/qxl.c                | 9 ++++++---
 hw/display/virtio-gpu.c         | 6 ++++--
 hw/ide/ahci.c                   | 3 ++-
 hw/ide/core.c                   | 3 ++-
 hw/misc/imx_rngc.c              | 6 ++++--
 hw/misc/macio/mac_dbdma.c       | 2 +-
 hw/net/virtio-net.c             | 3 ++-
 hw/nvme/ctrl.c                  | 6 ++++--
 hw/scsi/mptsas.c                | 3 ++-
 hw/scsi/scsi-bus.c              | 3 ++-
 hw/scsi/vmw_pvscsi.c            | 3 ++-
 hw/usb/dev-uas.c                | 3 ++-
 hw/usb/hcd-dwc2.c               | 3 ++-
 hw/usb/hcd-ehci.c               | 3 ++-
 hw/usb/hcd-uhci.c               | 2 +-
 hw/usb/host-libusb.c            | 6 ++++--
 hw/usb/redirect.c               | 6 ++++--
 hw/usb/xen-usb.c                | 3 ++-
 hw/virtio/virtio-balloon.c      | 5 +++--
 hw/virtio/virtio-crypto.c       | 3 ++-
 25 files changed, 66 insertions(+), 35 deletions(-)

diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
index 65c4979c3c..f077c1b255 100644
--- a/hw/9pfs/xen-9p-backend.c
+++ b/hw/9pfs/xen-9p-backend.c
@@ -441,7 +441,9 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
         xen_9pdev->rings[i].ring.out = xen_9pdev->rings[i].data +
                                        XEN_FLEX_RING_SIZE(ring_order);
 
-        xen_9pdev->rings[i].bh = qemu_bh_new(xen_9pfs_bh, &xen_9pdev->rings[i]);
+        xen_9pdev->rings[i].bh = qemu_bh_new_guarded(xen_9pfs_bh,
+                                                     &xen_9pdev->rings[i],
+                                                     &DEVICE(xen_9pdev)->mem_reentrancy_guard);
         xen_9pdev->rings[i].out_cons = 0;
         xen_9pdev->rings[i].out_size = 0;
         xen_9pdev->rings[i].inprogress = false;
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 26f965cabc..191a8c90aa 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -127,7 +127,8 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     } else {
         s->ctx = qemu_get_aio_context();
     }
-    s->bh = aio_bh_new(s->ctx, notify_guest_bh, s);
+    s->bh = aio_bh_new_guarded(s->ctx, notify_guest_bh, s,
+                               &DEVICE(s)->mem_reentrancy_guard);
     s->batch_notify_vqs = bitmap_new(conf->num_queues);
 
     *dataplane = s;
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 2785b9e849..e31806b317 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -632,8 +632,9 @@ XenBlockDataPlane *xen_block_dataplane_create(XenDevice *xendev,
     } else {
         dataplane->ctx = qemu_get_aio_context();
     }
-    dataplane->bh = aio_bh_new(dataplane->ctx, xen_block_dataplane_bh,
-                               dataplane);
+    dataplane->bh = aio_bh_new_guarded(dataplane->ctx, xen_block_dataplane_bh,
+                                       dataplane,
+                                       &DEVICE(xendev)->mem_reentrancy_guard);
 
     return dataplane;
 }
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f717550fdc..e9f516e633 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -866,8 +866,9 @@ static void virtio_blk_dma_restart_cb(void *opaque, bool running,
      * requests will be processed while starting the data plane.
      */
     if (!s->bh && !virtio_bus_ioeventfd_enabled(bus)) {
-        s->bh = aio_bh_new(blk_get_aio_context(s->conf.conf.blk),
-                           virtio_blk_dma_restart_bh, s);
+        s->bh = aio_bh_new_guarded(blk_get_aio_context(s->conf.conf.blk),
+                                   virtio_blk_dma_restart_bh, s,
+                                   &DEVICE(s)->mem_reentrancy_guard);
         blk_inc_in_flight(s->conf.conf.blk);
         qemu_bh_schedule(s->bh);
     }
diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
index 7d4601cb5d..dd619f0731 100644
--- a/hw/char/virtio-serial-bus.c
+++ b/hw/char/virtio-serial-bus.c
@@ -985,7 +985,8 @@ static void virtser_port_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    port->bh = qemu_bh_new(flush_queued_data_bh, port);
+    port->bh = qemu_bh_new_guarded(flush_queued_data_bh, port,
+                                   &dev->mem_reentrancy_guard);
     port->elem = NULL;
 }
 
diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index 6772849dec..67efa3c3ef 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -2223,11 +2223,14 @@ static void qxl_realize_common(PCIQXLDevice *qxl, Error **errp)
 
     qemu_add_vm_change_state_handler(qxl_vm_change_state_handler, qxl);
 
-    qxl->update_irq = qemu_bh_new(qxl_update_irq_bh, qxl);
+    qxl->update_irq = qemu_bh_new_guarded(qxl_update_irq_bh, qxl,
+                                          &DEVICE(qxl)->mem_reentrancy_guard);
     qxl_reset_state(qxl);
 
-    qxl->update_area_bh = qemu_bh_new(qxl_render_update_area_bh, qxl);
-    qxl->ssd.cursor_bh = qemu_bh_new(qemu_spice_cursor_refresh_bh, &qxl->ssd);
+    qxl->update_area_bh = qemu_bh_new_guarded(qxl_render_update_area_bh, qxl,
+                                              &DEVICE(qxl)->mem_reentrancy_guard);
+    qxl->ssd.cursor_bh = qemu_bh_new_guarded(qemu_spice_cursor_refresh_bh, &qxl->ssd,
+                                             &DEVICE(qxl)->mem_reentrancy_guard);
 }
 
 static void qxl_realize_primary(PCIDevice *dev, Error **errp)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 5e15c79b94..66ac9b6cc5 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1339,8 +1339,10 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
 
     g->ctrl_vq = virtio_get_queue(vdev, 0);
     g->cursor_vq = virtio_get_queue(vdev, 1);
-    g->ctrl_bh = qemu_bh_new(virtio_gpu_ctrl_bh, g);
-    g->cursor_bh = qemu_bh_new(virtio_gpu_cursor_bh, g);
+    g->ctrl_bh = qemu_bh_new_guarded(virtio_gpu_ctrl_bh, g,
+                                     &qdev->mem_reentrancy_guard);
+    g->cursor_bh = qemu_bh_new_guarded(virtio_gpu_cursor_bh, g,
+                                       &qdev->mem_reentrancy_guard);
     QTAILQ_INIT(&g->reslist);
     QTAILQ_INIT(&g->cmdq);
     QTAILQ_INIT(&g->fenceq);
diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c
index 7ce001cacd..37091150cb 100644
--- a/hw/ide/ahci.c
+++ b/hw/ide/ahci.c
@@ -1508,7 +1508,8 @@ static void ahci_cmd_done(const IDEDMA *dma)
     ahci_write_fis_d2h(ad);
 
     if (ad->port_regs.cmd_issue && !ad->check_bh) {
-        ad->check_bh = qemu_bh_new(ahci_check_cmd_bh, ad);
+        ad->check_bh = qemu_bh_new_guarded(ahci_check_cmd_bh, ad,
+                                           &DEVICE(ad)->mem_reentrancy_guard);
         qemu_bh_schedule(ad->check_bh);
     }
 }
diff --git a/hw/ide/core.c b/hw/ide/core.c
index 5d1039378f..8c8d1a8ec2 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -519,7 +519,8 @@ BlockAIOCB *ide_issue_trim(
 
     iocb = blk_aio_get(&trim_aiocb_info, s->blk, cb, cb_opaque);
     iocb->s = s;
-    iocb->bh = qemu_bh_new(ide_trim_bh_cb, iocb);
+    iocb->bh = qemu_bh_new_guarded(ide_trim_bh_cb, iocb,
+                                   &DEVICE(s)->mem_reentrancy_guard);
     iocb->ret = 0;
     iocb->qiov = qiov;
     iocb->i = -1;
diff --git a/hw/misc/imx_rngc.c b/hw/misc/imx_rngc.c
index 632c03779c..082c6980ad 100644
--- a/hw/misc/imx_rngc.c
+++ b/hw/misc/imx_rngc.c
@@ -228,8 +228,10 @@ static void imx_rngc_realize(DeviceState *dev, Error **errp)
     sysbus_init_mmio(sbd, &s->iomem);
 
     sysbus_init_irq(sbd, &s->irq);
-    s->self_test_bh = qemu_bh_new(imx_rngc_self_test, s);
-    s->seed_bh = qemu_bh_new(imx_rngc_seed, s);
+    s->self_test_bh = qemu_bh_new_guarded(imx_rngc_self_test, s,
+                                          &dev->mem_reentrancy_guard);
+    s->seed_bh = qemu_bh_new_guarded(imx_rngc_seed, s,
+                                     &dev->mem_reentrancy_guard);
 }
 
 static void imx_rngc_reset(DeviceState *dev)
diff --git a/hw/misc/macio/mac_dbdma.c b/hw/misc/macio/mac_dbdma.c
index efcc02609f..cc7e02203d 100644
--- a/hw/misc/macio/mac_dbdma.c
+++ b/hw/misc/macio/mac_dbdma.c
@@ -914,7 +914,7 @@ static void mac_dbdma_realize(DeviceState *dev, Error **errp)
 {
     DBDMAState *s = MAC_DBDMA(dev);
 
-    s->bh = qemu_bh_new(DBDMA_run_bh, s);
+    s->bh = qemu_bh_new_guarded(DBDMA_run_bh, s, &dev->mem_reentrancy_guard);
 }
 
 static void mac_dbdma_class_init(ObjectClass *oc, void *data)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 3ae909041a..a170c724de 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2885,7 +2885,8 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
         n->vqs[index].tx_vq =
             virtio_add_queue(vdev, n->net_conf.tx_queue_size,
                              virtio_net_handle_tx_bh);
-        n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
+        n->vqs[index].tx_bh = qemu_bh_new_guarded(virtio_net_tx_bh, &n->vqs[index],
+                                                  &DEVICE(vdev)->mem_reentrancy_guard);
     }
 
     n->vqs[index].tx_waiting = 0;
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f25cc2c235..dcb250e772 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -4318,7 +4318,8 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
         QTAILQ_INSERT_TAIL(&(sq->req_list), &sq->io_req[i], entry);
     }
 
-    sq->bh = qemu_bh_new(nvme_process_sq, sq);
+    sq->bh = qemu_bh_new_guarded(nvme_process_sq, sq,
+                                 &DEVICE(sq->ctrl)->mem_reentrancy_guard);
 
     if (n->dbbuf_enabled) {
         sq->db_addr = n->dbbuf_dbs + (sqid << 3);
@@ -4708,7 +4709,8 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
         }
     }
     n->cq[cqid] = cq;
-    cq->bh = qemu_bh_new(nvme_post_cqes, cq);
+    cq->bh = qemu_bh_new_guarded(nvme_post_cqes, cq,
+                                 &DEVICE(cq->ctrl)->mem_reentrancy_guard);
 }
 
 static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
diff --git a/hw/scsi/mptsas.c b/hw/scsi/mptsas.c
index c485da792c..3de288b454 100644
--- a/hw/scsi/mptsas.c
+++ b/hw/scsi/mptsas.c
@@ -1322,7 +1322,8 @@ static void mptsas_scsi_realize(PCIDevice *dev, Error **errp)
     }
     s->max_devices = MPTSAS_NUM_PORTS;
 
-    s->request_bh = qemu_bh_new(mptsas_fetch_requests, s);
+    s->request_bh = qemu_bh_new_guarded(mptsas_fetch_requests, s,
+                                        &DEVICE(dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), &dev->qdev, &mptsas_scsi_info);
 }
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index ceceafb2cd..e5c9f7a53d 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -193,7 +193,8 @@ static void scsi_dma_restart_cb(void *opaque, bool running, RunState state)
         AioContext *ctx = blk_get_aio_context(s->conf.blk);
         /* The reference is dropped in scsi_dma_restart_bh.*/
         object_ref(OBJECT(s));
-        s->bh = aio_bh_new(ctx, scsi_dma_restart_bh, s);
+        s->bh = aio_bh_new_guarded(ctx, scsi_dma_restart_bh, s,
+                                   &DEVICE(s)->mem_reentrancy_guard);
         qemu_bh_schedule(s->bh);
     }
 }
diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
index fa76696855..4de34536e9 100644
--- a/hw/scsi/vmw_pvscsi.c
+++ b/hw/scsi/vmw_pvscsi.c
@@ -1184,7 +1184,8 @@ pvscsi_realizefn(PCIDevice *pci_dev, Error **errp)
         pcie_endpoint_cap_init(pci_dev, PVSCSI_EXP_EP_OFFSET);
     }
 
-    s->completion_worker = qemu_bh_new(pvscsi_process_completion_queue, s);
+    s->completion_worker = qemu_bh_new_guarded(pvscsi_process_completion_queue, s,
+                                               &DEVICE(pci_dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), DEVICE(pci_dev), &pvscsi_scsi_info);
     /* override default SCSI bus hotplug-handler, with pvscsi's one */
diff --git a/hw/usb/dev-uas.c b/hw/usb/dev-uas.c
index 88f99c05d5..f013ded91e 100644
--- a/hw/usb/dev-uas.c
+++ b/hw/usb/dev-uas.c
@@ -937,7 +937,8 @@ static void usb_uas_realize(USBDevice *dev, Error **errp)
 
     QTAILQ_INIT(&uas->results);
     QTAILQ_INIT(&uas->requests);
-    uas->status_bh = qemu_bh_new(usb_uas_send_status_bh, uas);
+    uas->status_bh = qemu_bh_new_guarded(usb_uas_send_status_bh, uas,
+                                         &d->mem_reentrancy_guard);
 
     dev->flags |= (1 << USB_DEV_FLAG_IS_SCSI_STORAGE);
     scsi_bus_init(&uas->bus, sizeof(uas->bus), DEVICE(dev), &usb_uas_scsi_info);
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index 8755e9cbb0..a0c4e782b2 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -1364,7 +1364,8 @@ static void dwc2_realize(DeviceState *dev, Error **errp)
     s->fi = USB_FRMINTVL - 1;
     s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
-    s->async_bh = qemu_bh_new(dwc2_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(dwc2_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
 
     sysbus_init_irq(sbd, &s->irq);
 }
diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index d4da8dcb8d..c930c60921 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -2533,7 +2533,8 @@ void usb_ehci_realize(EHCIState *s, DeviceState *dev, Error **errp)
     }
 
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, ehci_work_timer, s);
-    s->async_bh = qemu_bh_new(ehci_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(ehci_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
     s->device = dev;
 
     s->vmstate = qemu_add_vm_change_state_handler(usb_ehci_vm_state_change, s);
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 30ae0104bb..bdc891f57a 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -1193,7 +1193,7 @@ void usb_uhci_common_realize(PCIDevice *dev, Error **errp)
                               USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL);
         }
     }
-    s->bh = qemu_bh_new(uhci_bh, s);
+    s->bh = qemu_bh_new_guarded(uhci_bh, s, &DEVICE(dev)->mem_reentrancy_guard);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, uhci_frame_timer, s);
     s->num_ports_vmstate = NB_PORTS;
     QTAILQ_INIT(&s->queues);
diff --git a/hw/usb/host-libusb.c b/hw/usb/host-libusb.c
index 176868d345..f500db85ab 100644
--- a/hw/usb/host-libusb.c
+++ b/hw/usb/host-libusb.c
@@ -1141,7 +1141,8 @@ static void usb_host_nodev_bh(void *opaque)
 static void usb_host_nodev(USBHostDevice *s)
 {
     if (!s->bh_nodev) {
-        s->bh_nodev = qemu_bh_new(usb_host_nodev_bh, s);
+        s->bh_nodev = qemu_bh_new_guarded(usb_host_nodev_bh, s,
+                                          &DEVICE(s)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(s->bh_nodev);
 }
@@ -1739,7 +1740,8 @@ static int usb_host_post_load(void *opaque, int version_id)
     USBHostDevice *dev = opaque;
 
     if (!dev->bh_postld) {
-        dev->bh_postld = qemu_bh_new(usb_host_post_load_bh, dev);
+        dev->bh_postld = qemu_bh_new_guarded(usb_host_post_load_bh, dev,
+                                             &DEVICE(dev)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(dev->bh_postld);
     dev->bh_postld_pending = true;
diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index fd7df599bc..39fbaaab16 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -1441,8 +1441,10 @@ static void usbredir_realize(USBDevice *udev, Error **errp)
         }
     }
 
-    dev->chardev_close_bh = qemu_bh_new(usbredir_chardev_close_bh, dev);
-    dev->device_reject_bh = qemu_bh_new(usbredir_device_reject_bh, dev);
+    dev->chardev_close_bh = qemu_bh_new_guarded(usbredir_chardev_close_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
+    dev->device_reject_bh = qemu_bh_new_guarded(usbredir_device_reject_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
     dev->attach_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL, usbredir_do_attach, dev);
 
     packet_id_queue_init(&dev->cancelled, dev, "cancelled");
diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index 0f7369e7ed..dec91294ad 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -1021,7 +1021,8 @@ static void usbback_alloc(struct XenLegacyDevice *xendev)
 
     QTAILQ_INIT(&usbif->req_free_q);
     QSIMPLEQ_INIT(&usbif->hotplug_q);
-    usbif->bh = qemu_bh_new(usbback_bh, usbif);
+    usbif->bh = qemu_bh_new_guarded(usbback_bh, usbif,
+                                    &DEVICE(xendev)->mem_reentrancy_guard);
 }
 
 static int usbback_free(struct XenLegacyDevice *xendev)
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index 746f07c4d2..309cebacc6 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -908,8 +908,9 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
         precopy_add_notifier(&s->free_page_hint_notify);
 
         object_ref(OBJECT(s->iothread));
-        s->free_page_bh = aio_bh_new(iothread_get_aio_context(s->iothread),
-                                     virtio_ballloon_get_free_page_hints, s);
+        s->free_page_bh = aio_bh_new_guarded(iothread_get_aio_context(s->iothread),
+                                             virtio_ballloon_get_free_page_hints, s,
+                                             &DEVICE(s)->mem_reentrancy_guard);
     }
 
     if (virtio_has_feature(s->host_features, VIRTIO_BALLOON_F_REPORTING)) {
diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
index 516425e26a..4c95f1096e 100644
--- a/hw/virtio/virtio-crypto.c
+++ b/hw/virtio/virtio-crypto.c
@@ -1050,7 +1050,8 @@ static void virtio_crypto_device_realize(DeviceState *dev, Error **errp)
         vcrypto->vqs[i].dataq =
                  virtio_add_queue(vdev, 1024, virtio_crypto_handle_dataq_bh);
         vcrypto->vqs[i].dataq_bh =
-                 qemu_bh_new(virtio_crypto_dataq_bh, &vcrypto->vqs[i]);
+                 qemu_bh_new_guarded(virtio_crypto_dataq_bh, &vcrypto->vqs[i],
+                                     &dev->mem_reentrancy_guard);
         vcrypto->vqs[i].vcrypto = vcrypto;
     }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 05:37:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 05:37:34 +0000
X-Google-Smtp-Source: AK7set+WOpkr7U5MlgMG9N0kqnSZnb4s8gS05hCP8kcZmvzBkPLTQWkfRVn8IbmO+1hkpeSQe1aYQ+dCrZ4xa40gl2c=
X-Received: by 2002:a05:6871:829:b0:163:2d87:3a90 with SMTP id
 q41-20020a056871082900b001632d873a90mr413948oap.1.1674711175744; Wed, 25 Jan
 2023 21:32:55 -0800 (PST)
MIME-Version: 1.0
References: <20230125085407.7144-1-vikram.garhwal@amd.com> <20230125085407.7144-8-vikram.garhwal@amd.com>
 <alpine.DEB.2.22.394.2301251406170.1978264@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2301251406170.1978264@ubuntu-linux-20-04-desktop>
From: Frediano Ziglio <freddy77@gmail.com>
Date: Thu, 26 Jan 2023 05:32:44 +0000
Message-ID: <CAHt6W4di4kUQxrXtE9Y8Nrv-H_r0OdhMBk7fo9CwDBDUaDkhnw@mail.gmail.com>
Subject: Re: [QEMU][PATCH v4 07/10] hw/xen/xen-hvm-common: Use g_new and error_setg_errno
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Vikram Garhwal <vikram.garhwal@amd.com>, qemu-devel@nongnu.org, 
	xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
	alex.bennee@linaro.org, Anthony Perard <anthony.perard@citrix.com>, 
	Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, 25 Jan 2023 at 22:07, Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Wed, 25 Jan 2023, Vikram Garhwal wrote:
> > Replace g_malloc with g_new and perror with error_setg_errno.
> >

Should "error_setg_errno" read "error_report" here?

Also in the title.

> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

Frediano


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 07:26:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 07:26:06 +0000
Message-ID: <4e723846-09c1-32c8-94ba-3755e6af0529@suse.com>
Date: Thu, 26 Jan 2023 08:25:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 04/11] xen: extend domctl interface for cache coloring
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-5-carlo.nonato@minervasys.tech>
 <9bfee6d9-9cb2-262e-5a46-91b0bf35d60b@suse.com>
In-Reply-To: <9bfee6d9-9cb2-262e-5a46-91b0bf35d60b@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
 =?utf-8?B?ZlhSVldnQXRKSzcrSWdPZGo1QXpsRnh5dlArWU9COHMrR1lWdk1wKzhQMEZO?=
 =?utf-8?B?Y2pZczA0L0RJR1dYdytNK2JuRVV1TVR4UUZIVE5VQkxGb3V1VGZ5RDdJeW1L?=
 =?utf-8?B?VExLQ3U2T1d3WVpWY0swaHEyUytEYzI4aFk5eGkrSmczUXYrWjdFaTNQNFBD?=
 =?utf-8?B?REJGV1J4WWJ5NVZLdmZRVWVhOE1sNlZPZTZDcEJhTEhQcGc0R2dOU2hkaFRQ?=
 =?utf-8?B?V3FoVjNudzMrMTVaY0VTek9tU0d4OUdCeU1mOEwzQnpwTkdsMGF6Ly9JVmFm?=
 =?utf-8?Q?h0vs2BjOXpII3aecDOQshZASJ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 54978596-43d1-4853-00c8-08daff6e7d6b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 07:25:24.5902
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5VRIq8ddhBCOUB2lFY5UG7ZuxDk66dM0sv3sH5sYnOBxCReZ2ZJpDHfymJeVaHXln8Y/MlxsQU+E6UK/P4Yo5Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7733

On 24.01.2023 17:29, Jan Beulich wrote:
> On 23.01.2023 16:47, Carlo Nonato wrote:
>> @@ -92,6 +92,10 @@ struct xen_domctl_createdomain {
>>      /* CPU pool to use; specify 0 or a specific existing pool */
>>      uint32_t cpupool_id;
>>  
>> +    /* IN LLC coloring parameters */
>> +    uint32_t num_llc_colors;
>> +    XEN_GUEST_HANDLE(uint32) llc_colors;
> 
> Despite your earlier replies I continue to be unconvinced that this
> is information which needs to be available right at domain_create.
> Without that you'd also get away without the sufficiently odd
> domain_create_llc_colored(). (Odd because: Think of two or three
> more extended features appearing, all of which want a special cased
> domain_create().)

And perhaps the real question is: Why do the two items need passing
to a special variant of domain_create() in the first place? The
necessary information is already passed to the normal function via
struct xen_domctl_createdomain. All it would take is to read the
array from guest space later, once struct domain has been allocated
and is hence available for storing the pointer. (Passing the count
separately is redundant in any event.)
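A minimal, self-contained C sketch of that suggestion (the type and
function names below are illustrative stand-ins, not the real hypervisor
API; the copy from guest space is simulated with a plain memcpy):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for the Xen types involved. */
struct createdomain_config {
    uint32_t num_llc_colors;     /* count carried in the domctl */
    const uint32_t *llc_colors;  /* stands in for XEN_GUEST_HANDLE(uint32) */
};

struct domain {
    uint32_t num_llc_colors;
    uint32_t *llc_colors;        /* pointer stored once the struct exists */
};

/*
 * The normal create path allocates struct domain first, and only then
 * copies the color array in, so no special domain_create_llc_colored()
 * variant is needed.  In Xen proper the memcpy() below would be a
 * copy_from_guest() of the handle.
 */
static struct domain *domain_create(const struct createdomain_config *cfg)
{
    struct domain *d = calloc(1, sizeof(*d));

    if (!d)
        return NULL;

    if (cfg->num_llc_colors) {
        d->llc_colors = calloc(cfg->num_llc_colors, sizeof(uint32_t));
        if (!d->llc_colors) {
            free(d);
            return NULL;
        }
        memcpy(d->llc_colors, cfg->llc_colors,
               cfg->num_llc_colors * sizeof(uint32_t));
        d->num_llc_colors = cfg->num_llc_colors;
    }

    return d;
}
```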

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 07:33:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 07:33:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484796.751598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKwkh-0003fP-8A; Thu, 26 Jan 2023 07:32:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484796.751598; Thu, 26 Jan 2023 07:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKwkh-0003fI-5R; Thu, 26 Jan 2023 07:32:55 +0000
Received: by outflank-mailman (input) for mailman id 484796;
 Thu, 26 Jan 2023 07:32:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKwkf-0003fC-Tk
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 07:32:53 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2074.outbound.protection.outlook.com [40.107.247.74])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a46901a5-9d4b-11ed-91b6-6bf2151ebd3b;
 Thu, 26 Jan 2023 08:32:52 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7351.eurprd04.prod.outlook.com (2603:10a6:10:1b2::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Thu, 26 Jan
 2023 07:32:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 07:32:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a46901a5-9d4b-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZzUgLNJ1riouUaAL0jTs8K6XJgRWWw3Ep1vI+4+c0OOsxaB3pDMqM4Qx4UbwF/4gRqoR/Re8LXyOxHrO5LAMRaEyeGiWQJ6uBqHwOqh6Gj+5qvgqa6Cm7e7a4Vn1s0kKkjd0Ivxq11RUi5g2Y9b+DnGLCWOjl8y9EP3G30Pd8J2FN1cYhvUGs4nxP8O2iPNrbhFyw+h5Mn/S2ij63Aq1LJNhFQnwcZC0rLqwbyEkVciMIRqKMnnLjrbRv4n4BdLRzXyMTNnuAUGAAXNG8fhNSewPnysZSjEhg8l/ZvqMm79ApfRpXE3S5PJ9/yPVVGPa+x/SCnGc3lTworHGQVNJKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=211q3Ct1tYDPKt6ZFKS3g1+8GWE4XUWKff1Fz9CZ/fA=;
 b=ccJWDvxF+hTCaYQ08VMT+0Hv8utlzb+EbcMPgkwE2XFCgY/npUFybDTNQ7a/iA891vU3F85GbUNZwFNzYzHUvZRNVXi+KG9lo4kW+4xV6B5d9IGlGtNu0KVRdzRErZyuudxLKa4b/11j2rWYnDiXVewe56T6xeOC0YI8LzoQUpmopp3WyjAzD7SC1kxVdS5if4ZXXLB8fofR+ZWK5bixTtDdfQYGoS8Yn8/+PSmlh0HMsSL4rQVTeSmj31XXhgBiSSp2OMYv/Mz1bP/Vc/BkgtXsU3PpCQpdFruGE3/Q2Pl+Vf3uKNTt+L2V7lAwACB7ZrmdMne+YEcWuc0UHrKcGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=211q3Ct1tYDPKt6ZFKS3g1+8GWE4XUWKff1Fz9CZ/fA=;
 b=XJggrCQ0H4FQ9PQR1JXjlpiQErgGSgIbGwRS4YJK5Ohz0KvEcXIeCtfvzOi0oENeN8J7O9ycaevXHtsPQ9t1y1ABgU/4eAYtE1yRUfQwps14dOL73mtl0yI7XAtsUNTd/3tuafwSV7jodOtU1hIfTp3QvCx5JEYRa5z5L63pw5Z7FgT060lsiYSksGVm1exZqo1BsDoxsgel9wEGcKfs3XUgum3rZg+5Pnma70JKzZYxhjTCmyXf5Ld37C9bfJGiL+lBiaG5sAQBOv4POnnXcjKeLuhrGBapQo8B3bS3YvcnFz2TMf0hOXG0Uqv+bK8m/MSS9BCGHntxOiEoQcYGgw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <101afc13-d952-b3b7-7594-ab219bd471cc@suse.com>
Date: Thu, 26 Jan 2023 08:32:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 0/4] x86/spec-ctrl: IPBP improvements
Content-Language: en-US
To: Andrew Cooper <amc96@srcf.net>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
 <a4e9272b-6110-e041-13d0-6746f721135e@srcf.net>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a4e9272b-6110-e041-13d0-6746f721135e@srcf.net>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0096.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7351:EE_
X-MS-Office365-Filtering-Correlation-Id: 4931f041-61a2-46ee-3065-08daff6f86f5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	R9gqL/Dl3qM+LuIUvo9Qsw4pz9BGNBGABRRyGHv+k5bSrb31dVVzrwHLdkF4nlUXE/h5vMw6kpGYPlO/q6RxZkeHlSjbkw9OYvFr83gaMFAyJ/Y/RoH8cyR3V8op53NmrOFGbw0CgNodvCnhdg0RMO+/eWUZy/pXgckarQvTg7/9JwAT46WEg+6P6VaOOn0DbOXYFuJhFHH2nagujZsdqeNl5pdkmGC7j01wvKpPywaxr5d40QJsy3FXePfEESI0G27uopA85AkOu6uIjItRlfHJjyA/D0eoMZKGwhWl/r6vKeJE9YaFs2usmnJ4KuuIYos1UExEdqe6PkcKuZPtjn8iw+aoYdGovjOA4JrI23Wa4fbiqSJkRVfhOnbDSfAG2QIFi4oiv1eCP1E5JDFYDlkrQ1Gm2zdWP4Zzhmw1JiXRc1AutFeMlPU8kO7xn1fklt4P00GsT13k19mtr6LOaFKnLZ5bILgVq4tTA/fl0n4zJ/OAfOXkTnXQQ7V2JrxLe5PtHNBzoup1zHjVhBt7fmv7Mc6ZruPzwmyu/QrI0sqQoONUO6k3ozQawRhnkH2Jfq5x/QqyTPHSEyTM5GrjDlab0KpK7BX0xyVC7rqpobSfMZbwrtLVRxS8YprbczoiD7fGhtoFpUsm4NfU4q2XWv/UxxqQ3ww7ccZnQSRTuvEmhLa6no3Dcm4EcrIjWvPWQ2bi/Bew8iHDeC7O1IXBTq0ZDwYrvoiMAtsMdPX+P0s=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(39860400002)(346002)(396003)(376002)(366004)(136003)(451199018)(5660300002)(186003)(2906002)(66556008)(8936002)(8676002)(4326008)(66946007)(6916009)(66476007)(31686004)(6512007)(41300700001)(83380400001)(6486002)(478600001)(26005)(6506007)(53546011)(316002)(2616005)(38100700002)(54906003)(36756003)(31696002)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cjJuTXZ4V25CaXBmcGVVeUtQK0NTZHhHRy9SMklISEMyTkZnNzBPR29kLzVQ?=
 =?utf-8?B?QklyaTV0MklubHMzZ2hrVElOcmZ4aFkyU0lCck4zYSttU2lQbzhSV1BQMWZa?=
 =?utf-8?B?SWlSWVRpMGVBdW15ZU5CbXB2YmFmQ3NqR01xb0lHeW1aYnB2NlZMa1BjQVp3?=
 =?utf-8?B?cjVRL3hWZU1laEgwS0c3RTdhVHQrS3YzTkJZSXVRV0lpV0dMd1djaEpnMlpG?=
 =?utf-8?B?Q3pVeURnY1JrQ2pJeVdTZUJ3bkRQL1puRFRhNjF4Nkh5eGoxYjNVR0hZM3dP?=
 =?utf-8?B?dDFMd24vczZOM2lscW0zbnNsTUJ1TUlLTmZaZHdNcFV3S3dCM0JMV1czeHNa?=
 =?utf-8?B?UEhGRWMydDcxZEgxNUsxaFhnb0ltd1RrOHZTV3ZRUjRXS3oxQldETTNRQU5R?=
 =?utf-8?B?dGE0cXM3bWl6VjNldEF3cTZ1dXM4cW9UUnBFQit3aXZKNldRaEVEOWFIZklJ?=
 =?utf-8?B?ZjN1NUVQSk1jeU9TcGtYR05lcE1qWUtGUGMxZXJaUjRTV3RFSXVvV3FKTEZo?=
 =?utf-8?B?VjUrL1VjMWdhWjJvZ2ljSXRSeHordlBsTE45YXMvYnp0TzNTWWFlOVZjNHRN?=
 =?utf-8?B?TlcwcjVNckx6WXYzNGUrZ1lvZDcxQndlUE1POERmdGxiRmhtSkhHeWRtOW5S?=
 =?utf-8?B?b1FWYVpGbmRvWmNxdVg1WmVZaW5COWcvL2RBVEl0YnlTcm1tZHlXOForL2Jp?=
 =?utf-8?B?VE9RUEk0RnliSlMrU2ZJTHdrZENNVEpIRlhiaUh1Wkd1ZGk3Z1JpeG52dTEr?=
 =?utf-8?B?L01RYVVXdEdkbE1LVHJ4MmI4TnVwQnNvamI4UUVubjJPWEJXa0tIK0tZQWQw?=
 =?utf-8?B?Rk00NXBrcXhVczBlWkFva2RRbFRjRDEyUDFINzBndVJCRmJlY3N0WVo3RXNq?=
 =?utf-8?B?ZE9EQURlUFkrbEhZcVhrdk1aaW51eU42RDFkM0NTWVZ1V3RXU3U2eDNmYUZv?=
 =?utf-8?B?eGZyREpMNWR3VVAvaGlYWk8rZzVaWVdoU3p5dnBWMEVMa2cvS29vUTBzVGNr?=
 =?utf-8?B?M1d2eDFpRW13cTJ0V3FaTE9zMWxmSStoendpZ2R6M2s3U0E1RjBTZGI2QSs1?=
 =?utf-8?B?VUVIY0hvenJFZnZrQmNialBYOGdSQ3orZlpxOG94bUpnSnFKWENDdEs4VTdB?=
 =?utf-8?B?ZTFNYkUvZU9oVkVvL2RoZm1XQXZ0b0V5NDFEZnB1V0pmclpJamtjOXZjWUNz?=
 =?utf-8?B?MVl6VmNlYllPdWFvYXIwY3NiY2QwT1BNNzNoMDdmT0tZNVFKZUt6RXo0ZjZX?=
 =?utf-8?B?bDgycU0vOW9KL3FXUG12SVkxeUpQM3BSNTFKR3grVWw2VXlKcmFKR1VSajhq?=
 =?utf-8?B?UEMvMEF5NXprZW9HN3gxaXozSmxVRU82WTFsNFB2VnFiZTEvNGNPZnJLTytN?=
 =?utf-8?B?K1dUeHpKRklQcC8xTEhNOUR5WTVGN1k4R1d5NTlZeG92UWVIc1p3ZjdWRjBk?=
 =?utf-8?B?OVQrY3VDTENtWjdxZGlQU25xYmVkaGJ3a2hRTlcrRFBucVFIVHRFZmJkM3or?=
 =?utf-8?B?SE5DdXd4YlgzdkdRdjEzS2lnWHNaTm14QXNLUVlWZ0ltNmNXT0t2cncyUlg0?=
 =?utf-8?B?OVh3VEdBazAzbW9UaHFDVi9JOXpzNE5iNFFlcVFRaUxwWTVDeURQcWNtLzN0?=
 =?utf-8?B?UmpXd1RVQ3NvVHcvcmRpV09sV3JoT0ZmdGZhR0QrSXo4Q09vc3hEZFdMNi94?=
 =?utf-8?B?Y2RUSmhHMEpHdEtrWHdUVCt1QUtWSUJvMk9MRElsbERPamtudnZJWVNUMUhZ?=
 =?utf-8?B?QkN0dHVxbE5SNHVLeWFTekl4T1FESHZ3VkkybTYxb3VHMitsV0RRSWNYUEpN?=
 =?utf-8?B?Z3pFeWlJeTR3OFRxMHFFOTJhZkVSZG5rWFJXT2ZzYTNQRlV5NFBVOERXVnlG?=
 =?utf-8?B?LzUxcmozU3o2VStVN1Y0NTVBb0N5VkZ3SURBMXBGaTNaVkdFaFhydFgyblJT?=
 =?utf-8?B?VFhYWkVRNGRrN2w1aWl3NVI3bnZYM0w5RlJhV0dUSmJZNlBsM1ZYN2M3UzJL?=
 =?utf-8?B?YXVwL1liVG1OSHYwdzcrWkRrUnU4OUxOWFpNckpGbXIvOUNJKzdCTnRjMkta?=
 =?utf-8?B?QjAwVXpUVmp0NUxOSHlITmtoRnZMd0xsZEQ2bCtidUZzV0ppMEFDeWh4ZU5q?=
 =?utf-8?Q?iEELosQokW5UATgC89Y/VKFIn?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4931f041-61a2-46ee-3065-08daff6f86f5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 07:32:49.8421
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qumHtgoNJLZL4BAUzPRA+vLHGyjLt9CkKj2a3L3XC9hqV10HKUJuSW9So3qOOnwYEQdnUMQUHuOmKqKnnF5WJg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7351

On 25.01.2023 18:49, Andrew Cooper wrote:
> On 25/01/2023 3:24 pm, Jan Beulich wrote:
>> Versions of the two final patches were submitted standalone earlier
>> on. The series here tries to carry out a suggestion from Andrew,
>> which the two of us have been discussing. Then said previously posted
>> patches are re-based on top, utilizing the new functionality.
>>
>> 1: spec-ctrl: add logic to issue IBPB on exit to guest
>> 2: spec-ctrl: defer context-switch IBPB until guest entry
>> 3: limit issuing of IBPB during context switch
>> 4: PV: issue branch prediction barrier when switching 64-bit guest to kernel mode
> 
> In the subject, you mean IBPB.  I think all the individual patches are fine.

Yes, I did notice the typo immediately after sending.

> Do you have an implementation of VMASST_TYPE_mode_switch_no_ibpb for
> Linux yet?  The thing I'd like to avoid is that we commit this perf hit
> to Xen, without lining Linux up to be able to skip it.

No, I don't. I haven't even looked at where invoking this might be best placed.
Also I have to admit that it's not really clear to me what the criteria are
going to be for Linux to disable this, and whether perhaps finer-grained
control might be needed (i.e. to turn it on/off dynamically under certain
conditions).

In any event this concern relates only to patch 4; I'd appreciate it if at
least the earlier three patches weren't blocked on there being something
on the Linux side. (In fact patch 3 ends up [still] being entirely independent
of the rest of the rework, unlike what I think you were expecting.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 08:02:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 08:02:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484806.751615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKxDB-00083t-0L; Thu, 26 Jan 2023 08:02:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484806.751615; Thu, 26 Jan 2023 08:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKxDA-00083m-TI; Thu, 26 Jan 2023 08:02:20 +0000
Received: by outflank-mailman (input) for mailman id 484806;
 Thu, 26 Jan 2023 08:02:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKxD8-00083g-Ud
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 08:02:19 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2089.outbound.protection.outlook.com [40.107.21.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c02977f2-9d4f-11ed-91b6-6bf2151ebd3b;
 Thu, 26 Jan 2023 09:02:17 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6923.eurprd04.prod.outlook.com (2603:10a6:10:114::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Thu, 26 Jan
 2023 08:02:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 08:02:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c02977f2-9d4f-11ed-91b6-6bf2151ebd3b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OtNvsD/z76VgcckoTJnfYWAECdfk7jyxaTyCUmgOc9eXM9pr1u6LWBbZcQ03f6+L7115/zaMPWmFkk7rTtjqFm+QRQPjM4Alq+uR2kSKkTM5qzJ/API/VdvzuvQX6Jn8pJBst4GL98nHBq6G5O77rsScCHCRog7f4bbMJMUYsq3VG3KpsfCRXyOs3MLmk6DsIYmoYOx3rJ+7lB/y5dbb+ve09Xt+DcZ5lTv5JA4K6KccgkvMmTrvSyKeFW7QAXS9Wmr22JoSv3aAwkprUOr2l/GItlig/UaNLKr2dMFrdfp959uKKfhliJNmCT8ZFadt4VyffkYi/WBuv09hLwdyDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=meOV+7T8ZQZ/6PKje+wXQ6cgl9ZXye4EOfKH5lcTyQ4=;
 b=OkctuZUuewoh6/Aq01ALyVPv64iNHCMTCD0jPuHpjdnMcd09k8bIIwM7kZlah1B97n8EbPcyYMzeZQWooMb2iHVAJvXrWlXqt2OCgxhtyi7EXBt7V5V4YhRx5Vx+8Txl+LgXY8CmJtVNrvmLNePkmFZwfj043n3opAsZcCu7ETVsYlBNXoi6aw3PSFP5FKlX//mAwLc3ytRDsckFrgXIlEUFobunFsImWjOXG18vpDTFIT+02CSIzDwLu1q5yIbsiK9PCBxD2vRku2g3yF4UNT/yyZmJavt8o6lZa26rmbKn8S1PPedzC2fwfTSUYLddgtx4dT9JJXHfQQO5y5s7ag==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=meOV+7T8ZQZ/6PKje+wXQ6cgl9ZXye4EOfKH5lcTyQ4=;
 b=vAfQ411kT6coy+8Ne2NPwDoucLwViuSLXiZF4NFFlHKrh3zMmYz2DHTqfSt7psgMN7ntKQXtVIsu8TWB3XEGMUWebNr/DpxEa0XxzXKICcFSe58FPySfMHYa28JqLI8jecYcHVQsU0v0zDf4oTrqLGNu32WrCdpRRyhEkFjhIzTDZFhiAVBI3Hv2KUH5ccPbctBkDWA7ta12a5bZhIVHSsuM5dKKdOo1SwIUYO5j+d4NosALD5BNh3TbZ/FxGo92UBqvFmJdJSzD5ja5ACTQ51iQTBG91CHfsSpP6F1G649rH+rTPWi6vbTFmbBSsUKE+dyYXlLFuqL3RCYH0GV22Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fa38f305-df29-4178-2279-17a084fdf2cd@suse.com>
Date: Thu, 26 Jan 2023 09:02:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 1/4] x86/spec-ctrl: add logic to issue IBPB on exit to
 guest
Content-Language: en-US
To: Andrew Cooper <amc96@srcf.net>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
 <8ee98cc0-21d3-100a-ffcc-37cd466e7761@suse.com>
 <718f6fd0-cb96-6f72-87ff-7382582d89f9@srcf.net>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <718f6fd0-cb96-6f72-87ff-7382582d89f9@srcf.net>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0127.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6923:EE_
X-MS-Office365-Filtering-Correlation-Id: 5b44c0fb-2359-49cb-3d80-08daff73a342
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OfxGu+uZkxMzJ15T+g9EM2fPwgeNlSafQyOEYOcjbuDRGu/nAK4nF9UOsoP1+gHhStPXjdwiQoq267HmGTVEB5xXEUcWidxXTHy+lZ6IjR2ioJPvj3XErxv1TMxRhcLXR6yHL3kYRn7MyU34Io/0ZeXgjii2ZkDerKF0rv+EUCUG4BmxFKOpfRJAEjLpdMt1/pa5AJnRSo0C6pZYi8NW2uxyY+0GeeWoRv4Fn+8Fcc55mrtuOVhBRnCAWKdVENpMsoQfykAedt3YsaNfaDgPtu3VyaWbfGuk72IykeijIHSdxO4XIfa9s5TPZaI6SEE5aeD79xYOi6bZCDdMxG1Tf+39rWRCMZO5/L+flr7t+iw14yq/vrdg9VcUO7B63cRZAPFdN1QU4u2ps990nZBKHK6o/iWgBxEy6CS4wMN4Dg6FcRLALJv9eP2OODXyfDD63Frj3+OYV+SwbgvFisCJA2nnb0mXzOGEZUDyfrIokfykjQQUVxTmU2JmoVmupjC/iY8yS7ZtlK11OVffBdMiqqwY9pFn6vIoT9gu1KX3JVVmahvmF0apaUG09JXIEqWIRP0ZUlzEfgC8+CEweaHvt6qq/Ya1pQ7cmENtYP9aYJIJrZtoSMb3C5XkPiD+Gd6xpgwXBKaCiyKkAMji9jNqkZFrkJXJ72y4LpdZBMOFCaXAmFUL0qObfIA3bRW6R3oHeXh1TxgK8FspZ5W4iQls8CvFLnSNj0S05XCOy06gEBA=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(396003)(136003)(346002)(376002)(366004)(39860400002)(451199018)(38100700002)(86362001)(31696002)(36756003)(316002)(54906003)(478600001)(2906002)(41300700001)(8676002)(31686004)(66476007)(66556008)(4326008)(6916009)(66946007)(5660300002)(8936002)(26005)(6512007)(53546011)(6486002)(186003)(2616005)(83380400001)(6506007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UkRqcWJ4ZUdDaWpOWk1ySWFjTWhiUVpkVksvK1U1bGFPcWY3Z3NadEUrUDhy?=
 =?utf-8?B?TnRkOFJJOXRsYnoyaTFhVVFHdkQ3MVQyNGJKSFlxS0ZZVzVmK3BiaXg3VzlC?=
 =?utf-8?B?bnRoMTJQNVdLRHh2THA2Y29uWTZPbXRLTmVNQlE0T3IyWHRaMkMzUlVDekNE?=
 =?utf-8?B?Q1NjRWx1UjFkSXRzZ0QzaTlaOGpBdG9qUEdWSWlWNG1PcnpUallmRFZWanBS?=
 =?utf-8?B?eENmOUdEV25sZktGZnJ4K2ZZWnVwYjNyNnVCWm5JWldQcm5Ydm9scXhDd1Nx?=
 =?utf-8?B?YkdtQXZpUVN0Vjlpd1VYc0JtZlhCbVBsLzJyd0ZibDF5T2MyZEt3a0FxbU1N?=
 =?utf-8?B?UXFVeVFLdGVLYlAvSm1jSHFLc3h3elBTMUNmbFZ5ZldCSXFXeVh0Sktpb3pC?=
 =?utf-8?B?bXJENjR1RlB5UVVoalBpMGMvWGdOeEhYcFFmZ1hNdWFXdWxiWmFxNkZqN1FI?=
 =?utf-8?B?SnhHT2R2VnZidVNMSDZITkpIV0g4STFocnpVaStIeC9Rd1hBaVd5VlJUNkRV?=
 =?utf-8?B?Wmc1d0lWSVoybG1ZT1BiTGtkZVRrL1RTNGVEeHErdUgySUIxUi9ycXBiTTBX?=
 =?utf-8?B?bzNSSlIyWmFRMkxGRkxGK2ZYWlpiVjdidy80LzgwS3dPdkI2NXpiWmhydVdJ?=
 =?utf-8?B?a2l5T0pUNFg3V2lWdm5rZDlXTFBYWmVlVU1ZMVhpMHF2QUduZDQ3ZG5LVVJ3?=
 =?utf-8?B?VXU0bWNvQWVGLy95MmJlZXRDZTZzb2JZbWVHbnk5L2dQa3hBVGVPQ0JhTXlV?=
 =?utf-8?B?ZFZaUk5HandIV0N5RXkyLzQwWnE1ZTlMZ0dLUVFPbS9SN08yQzBVYWpBd2ZP?=
 =?utf-8?B?d0xBSHFMTVZHT1ZCUE1zSUQvKzVSWmN6bWNFYmdVbStRMkNXWTduUE9CalMx?=
 =?utf-8?B?K1laSDV2dHhTQ3V2V1FVdlhQM2pDSlhKbTcxeTB4dGtmMjRzcnZVUS9vRTYr?=
 =?utf-8?B?Tkg2eU1XSnc1N2NXSmRZSVpzOWo2cmV3MWVZZTRFOGlBUStvVXltV2V0UFl1?=
 =?utf-8?B?clprcTgrRHo5K0JvM3B6YVFMek1GaWFUamRWQmxLVnQxUU9uZFA3b2hNTkFy?=
 =?utf-8?B?Vmo5bUZQME9YWFVpcHQ5UWdCWitaNGlPVFIrMWgyd2dqclVKcS9tVldWWG9Q?=
 =?utf-8?B?T3ZCZzBNT2pLSDBzR0hrS2NPZ1ViOWJRbWlUWkVKd2NPYnFTNG1UOFlBVCtH?=
 =?utf-8?B?UkY4YnQyM3BFc3RqUC83MktwNTEvbGd2eTMrRkU5T0VhbmxObUZVVHhLN0Vm?=
 =?utf-8?B?aHN6SnpjTWVka3NySkdiaXRDT0MxS1Z0TWdRMzArcmNSRXZtVmhjaHZ3eEZ5?=
 =?utf-8?B?K21YSVRCZS9GSlpGaFlWWTlSZk5rYUJIdGlzWGJ0MjIwZExDYlVHdmFrbCsv?=
 =?utf-8?B?S0ErTU1JVi80S0FpWUNmQXF1Vkw1NXRaQ0RSbURBaFY3aGFmeHV5bkZpVXlm?=
 =?utf-8?B?SVlJNVJBaHF0N3RsVlhGY2I1WmpSUzZIRWV0NVB0dDAwWCt4ZzFqZGQvUUUw?=
 =?utf-8?B?clhsa0tvbUx2Q3laL2tWb3lmMnpvUmJ4dm9OeExEUXlrb0RqL0FiUE1BckY3?=
 =?utf-8?B?VjA3TkllSGpjZjZXRlVTT2t2RnJXVkllMFFpdTdCRHYzR1V1WlNhSWJDNW9O?=
 =?utf-8?B?VXdLbWoxbE01S0JqWHdnVVJqUnpWaEZCVHRCRlFabmxJVFgydnFXMEUxN3RI?=
 =?utf-8?B?eXhXQ3Bzdm05ZmtSUTBhSzB3VTV4bG90bVJrWGVwbmdoV0lZTWhscG1GZkpt?=
 =?utf-8?B?SkJUeDB3R1NVMDM1WWVZV05oQ1YrSjZPMkRRbVk0ekx1bG54cGpvTG04Y2Jq?=
 =?utf-8?B?REd1T3NFemUwQ0IxN243WUI2alo3c1gyQWlYTWs0QXE2TG1aM09zdzZOb1B3?=
 =?utf-8?B?UGxYYWd1N2U0ZFVtQXRKbFhTeWM2UWMxWEo1LzlNYW9OazY3OGRvZlc4QVEv?=
 =?utf-8?B?S2dBSkFkUFFhMERhbUJESDFjeHdHSHRXK2w0WXl4YkJjcit0VDJuM3RkVVFk?=
 =?utf-8?B?a0ozUWYzdEpNVXNNT0dpOXRqckRQR0xhN0lQTFg0ekpaczRQT0YxMFc2bHRZ?=
 =?utf-8?B?bEJreDNiN1dWRGx2ekNVNG00WHhkOTFxdHJMM0dsZXJUKzlObDVJa2FQNjVU?=
 =?utf-8?Q?3o8t9eFfyOU4XJLhT3Owx/IRH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b44c0fb-2359-49cb-3d80-08daff73a342
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 08:02:15.3570
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fqr982xKoqse3k5VxgulOV0XWo91Pz97MS8sTIu3Yzx1sjktekmLQUBeuoOoChPBZmQrJordp+5fArIe8VCP8Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6923

On 25.01.2023 22:10, Andrew Cooper wrote:
> On 25/01/2023 3:25 pm, Jan Beulich wrote:
>> In order to be able to defer the context switch IBPB to the last
>> possible point, add logic to the exit-to-guest paths to issue the
>> barrier there, including the "IBPB doesn't flush the RSB/RAS"
>> workaround. Since alternatives, for now at least, can't nest, emit JMP
>> to skip past both constructs where both are needed. This may be more
>> efficient anyway, as the sequence of NOPs is pretty long.
> 
> It is very uarch specific as to when a jump is less overhead than a line
> of nops.
> 
> In all CPUs liable to be running Xen, even unconditional jumps take up
> branch prediction resource, because all branch prediction is pre-decode
> these days, so branch locations/types/destinations all need deriving
> from %rip and "history" alone.
> 
> So whether a branch or a line of nops is better is a tradeoff between
> how much competition there is for branch prediction resource, and how
> efficiently the CPU can brute-force its way through a long line of nops.
> 
> But a very interesting datapoint.  It turns out that AMD Zen4 CPUs
> macrofuse adjacent nops, including longnops, because it reduces the
> amount of execute/retire resources required.  And a lot of
> kernel/hypervisor fastpaths have a lot of nops these days.
> 
> 
> For us, the "can't nest" is singularly more important than any worry
> about uarch behaviour.  We've frankly got much lower hanging fruit than
> worrying about one branch vs a few nops.
> 
>> LFENCEs are omitted - for HVM a VM entry is imminent, which elsewhere
>> we already deem a sufficiently serializing event. For 32-bit PV
>> we're going through IRET, which ought to be good enough as well. While
>> 64-bit PV may use SYSRET, there are several more conditional branches
>> there which are all unprotected.
> 
> Privilege changes are serialising-ish, and this behaviour has been
> guaranteed moving forwards, although not documented coherently.
> 
> CPL (well - privilege, which includes SMM, root/non-root, etc) is not
> written speculatively.  So any logic which needs to modify privilege has
> to block until it is known to be an architectural execution path.
> 
> This gets us "lfence-like" or "dispatch serialising" behaviour, which is
> also the reason why INT3 is our go-to speculation halting instruction. 
> Microcode has to be entirely certain we are going to deliver an
> interrupt/exception/etc before it can start reading the IDT/etc.
> 
> Either way, we've been promised that all instructions like IRET,
> SYS{CALL,RET,ENTER,EXIT}, VM{RUN,LAUNCH,RESUME} (and ERET{U,S} in the
> future FRED world) do, and shall continue to not execute speculatively.
> 
> Which in practice means we don't need to worry about Spectre-v1 attack
> against codepaths which hit an exit-from-xen path, in terms of skipping
> protections.
> 
> We do need to be careful about memory accesses and potential double
> dereferences, but all the data is on the top of the stack for XPTI
> reasons.  About the only concern is v->arch.msrs->* in the HVM path, and
> we're fine with the current layout (AFAICT).
> 
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I have to admit that I'm not really certain about the placement of the
>> IBPB wrt the MSR_SPEC_CTRL writes. For now I've simply used "opposite of
>> entry".
> 
> It really doesn't matter.  They're independent operations that both need
> doing, and are fully serialising so can't parallelise.
> 
> But on this note, WRMSRNS and WRMSRLIST are on the horizon.  The CPUs
> which implement these instructions are the ones which also ought not to
> need any adjustments in the exit paths.  So I think it is specifically
> not worth trying to make any effort to turn *these* WRMSRs into more
> optimised forms.
> 
> But WRMSRLIST was designed specifically for this kind of usecase
> (actually, more for the main context switch path) where you can prepare
> the list of MSRs in memory, including the ability to conditionally skip
> certain entries by adjusting the index field.
> 
> 
> It occurs to me, having written this out, that what we actually want
> to do is have slightly custom not-quite-alternative blocks.  We have a
> sequence of independent code blocks, and a small block at the end that
> happens to contain an IRET.
> 
> We could remove the nops at boot time if we treated it as one large
> region, with the IRET at the end also able to have a variable position,
> and assembled the "active" blocks tightly from the start.  Complications
> would include adjusting the IRET extable entry, but this isn't
> insurmountable.  Entrypoints are a bit more tricky but could be done by
> packing from the back forward, and adjusting the entry position.
> 
> Either way, something to ponder.  (It's also possible that it doesn't
> make a measurable difference until we get to FRED, at which point we
> have a set of fresh entry-points to write anyway, and a slight glimmer
> of hope of not needing to pollute them with speculation workarounds...)
> 
>> Since we're going to run out of SCF_* bits soon and since the new flag
>> is meaningful only in struct cpu_info's spec_ctrl_flags, we could choose
>> to widen that field to 16 bits right away and then use bit 8 (or higher)
>> for the purpose here.
> 
> I really don't think it matters.  We've got plenty of room, and the
> flexibility to shuffle, in both structures.  It's absolutely not worth
> trying to introduce asymmetries to save 1 bit.

Thanks for all the comments up to here. Just to clarify - I've not spotted
anything there that would result in me being expected to take any action.

>> --- a/xen/arch/x86/include/asm/current.h
>> +++ b/xen/arch/x86/include/asm/current.h
>> @@ -55,9 +55,13 @@ struct cpu_info {
>>  
>>      /* See asm/spec_ctrl_asm.h for usage. */
>>      unsigned int shadow_spec_ctrl;
>> +    /*
>> +     * spec_ctrl_flags can be accessed as a 32-bit entity and hence needs
>> +     * placing suitably.
> 
> I'd suggest "is accessed as a 32-bit entity, and wants aligning suitably" ?

I've tried to choose the wording carefully: The 32-bit access is in an
alternative block, so doesn't always come into play. Hence the "may",
not "is". Alignment alone also isn't sufficient here (and mis-aligning
isn't merely a performance problem) - the following three bytes also
need to be valid to access in the first place. Hence "needs" and
"placing", not "wants" and "aligning".

> If I've followed the logic correctly.  (I can't say I was specifically
> aware that the bit test instructions didn't have byte forms, but I
> suspect such instruction forms would be very very niche.)

Yes, there not being byte forms of BT* is the sole reason here.

>> --- a/xen/arch/x86/include/asm/spec_ctrl.h
>> +++ b/xen/arch/x86/include/asm/spec_ctrl.h
>> @@ -36,6 +36,8 @@
>>  #define SCF_verw       (1 << 3)
>>  #define SCF_ist_ibpb   (1 << 4)
>>  #define SCF_entry_ibpb (1 << 5)
>> +#define SCF_exit_ibpb_bit 6
>> +#define SCF_exit_ibpb  (1 << SCF_exit_ibpb_bit)
> 
> One option to avoid the second define is to use ILOG2() with btrl.

Specifically not. The assembler doesn't know the conditional operator,
and the pre-processor won't collapse the expression resulting from
expanding ilog2().
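[The resulting two-define pattern, with the btrl semantics modeled as a C helper for clarity (in the hypervisor this is an asm instruction operating on the flags field, not C):]

```c
#include <assert.h>

#define SCF_exit_ibpb_bit 6
#define SCF_exit_ibpb  (1 << SCF_exit_ibpb_bit)

/*
 * C model of "btrl $SCF_exit_ibpb_bit, flags": report whether the bit was
 * set, then clear it.  The asm form needs the plain bit number as an
 * immediate, which is why the separate _bit #define exists.
 */
static int test_and_reset_exit_ibpb(unsigned int *flags)
{
    int was_set = (*flags >> SCF_exit_ibpb_bit) & 1;

    *flags &= ~SCF_exit_ibpb;
    return was_set;
}
```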

>> @@ -272,6 +293,14 @@
>>  #define SPEC_CTRL_EXIT_TO_PV                                            \
>>      ALTERNATIVE "",                                                     \
>>          DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_PV;              \
>> +    ALTERNATIVE __stringify(jmp PASTE(.Lscexitpv_done, __LINE__)),      \
>> +        __stringify(DO_SPEC_CTRL_EXIT_IBPB                              \
>> +                    disp=(PASTE(.Lscexitpv_done, __LINE__) -            \
>> +                          PASTE(.Lscexitpv_rsb, __LINE__))),            \
>> +        X86_FEATURE_IBPB_EXIT_PV;                                       \
>> +PASTE(.Lscexitpv_rsb, __LINE__):                                        \
>> +    ALTERNATIVE "", DO_OVERWRITE_RSB, X86_BUG_IBPB_NO_RET;              \
>> +PASTE(.Lscexitpv_done, __LINE__):                                       \
>>      DO_SPEC_CTRL_COND_VERW
> 
> What's wrong with the normal %= trick?

We're in a C macro here which is then used in assembly code. %= only
works in asm(), though. If we were in an assembler macro, I'd have
used \@. Yet wrapping the whole thing in an assembler macro would, for
my taste, hide too much information from the use sites (in particular
the X86_{FEATURE,BUG}_* which are imo relevant to be visible there).
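[For reference, the PASTE() idiom relied on above is the standard two-level token-pasting macro, which is what lets a C macro expanded in assembly produce a distinct label per use site (shown here with plain C identifiers rather than .L labels):]

```c
#include <assert.h>

/*
 * Two-step paste: the indirection through PASTE_() forces arguments such
 * as __LINE__ to be macro-expanded *before* ## concatenates them.  A
 * single-level a##b would paste the literal token "__LINE__" instead.
 */
#define PASTE_(a, b) a##b
#define PASTE(a, b)  PASTE_(a, b)
```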

>  The use of __LINE__ makes this
> hard to subsequently livepatch, so I'd prefer to avoid it if possible.

Yes, I was certainly aware this would be a concern. I couldn't think of
a (forward-looking) clean solution, though: right now we have only one
use per source file (the native and compat PV entry.S), so we could use
a context-independent label name. But as you say above, for FRED we're
likely to get new entry points, and they're likely better placed in the
same files.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 08:07:04 2023
Message-ID: <bececcba-7606-924d-aba1-f51134414fd0@suse.com>
Date: Thu, 26 Jan 2023 09:06:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 01/11] xen/common: add cache coloring common code
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>,
 xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-2-carlo.nonato@minervasys.tech>
 <a470be46-ab6e-3970-2b04-6f4035adf1cb@suse.com>
 <CAG+AhRX9DVW5EfXKQoDG9hmcE0FORydTZd0pNm-0uqwddaN9NQ@mail.gmail.com>
 <6c952571-6a8d-e4fc-36ec-b5b79dac40f6@suse.com>
 <CAG+AhRUOBgPsT9yU3EtqSPj5VX70H1DsUL_dOWguapC+u3iSvw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAG+AhRUOBgPsT9yU3EtqSPj5VX70H1DsUL_dOWguapC+u3iSvw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.01.2023 17:18, Carlo Nonato wrote:
> On Wed, Jan 25, 2023 at 2:10 PM Jan Beulich <jbeulich@suse.com> wrote:
>> On 25.01.2023 12:18, Carlo Nonato wrote:
>>> On Tue, Jan 24, 2023 at 5:37 PM Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 23.01.2023 16:47, Carlo Nonato wrote:
>>>>> --- a/xen/include/xen/sched.h
>>>>> +++ b/xen/include/xen/sched.h
>>>>> @@ -602,6 +602,9 @@ struct domain
>>>>>
>>>>>      /* Holding CDF_* constant. Internal flags for domain creation. */
>>>>>      unsigned int cdf;
>>>>> +
>>>>> +    unsigned int *llc_colors;
>>>>> +    unsigned int num_llc_colors;
>>>>>  };
>>>>
>>>> Why outside of any #ifdef, and why not in struct arch_domain?
>>>
>>> Moving this into sched.h seemed like the natural continuation of the common +
>>> arch-specific split. Notice that this split also exists because Julien pointed
>>> out (as you did in some earlier revision) that cache coloring can be used
>>> by other arches in the future (even if x86 is excluded). Having two maintainers
>>> saying the same thing sounded like a good reason to do that.
>>
>> If you mean this to be usable by other arch-es as well (which I would
>> welcome, as I think I had expressed on an earlier version), then I think
>> more pieces want to be in common code. But putting the fields here and all
>> users of them in arch-specific code (which I think is the way I saw it)
>> doesn't look very logical to me. IOW to me there exist only two possible
>> approaches: As much as possible in common code, or common code being
>> disturbed as little as possible.
> 
> This means having a llc-coloring.c in common where to put the common
> implementation, right?

Likely, yes.

> Anyway right now there is also another user of such fields in common:
> page_alloc.c.

Yet hopefully all inside suitable #ifdef.
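[A sketch of what "inside suitable #ifdef" could look like for the fields in question. CONFIG_LLC_COLORING is an assumed option name, and the struct is a stand-in for struct domain, with field names taken from the quoted patch:]

```c
#include <assert.h>

#define CONFIG_LLC_COLORING  /* pretend the option is enabled for this sketch */

struct domain_like {
    unsigned int cdf;
#ifdef CONFIG_LLC_COLORING
    unsigned int *llc_colors;      /* colors assigned to the domain */
    unsigned int num_llc_colors;
#endif
};
```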

>>> The missing #ifdef comes from a discussion I had with Julien in v2 about
>>> domctl interface where he suggested removing it
>>> (https://marc.info/?l=xen-devel&m=166151802002263).
>>
>> I went about five levels deep in the replies, without finding any such reply
>> from Julien. Can you please be more specific with the link, so readers don't
>> need to endlessly dig?
> 
> https://marc.info/?l=xen-devel&m=166669617917298
> 
> quote (me and then Julien):
>>> We can also think of moving the coloring fields from this
>>> struct to the common one (xen_domctl_createdomain) protecting them with
>>> the proper #ifdef (but we are targeting only arm64...).
> 
>> Your code is targeting arm64 but fundamentally this is an arm64 specific
>> feature. IOW, this could be used in the future on other arch. So I think
>> it would make sense to define it in common without the #ifdef.

I'm inclined to read this as a dislike for "#ifdef CONFIG_ARM64", not for
"#ifdef CONFIG_LLC_COLORING" (or whatever the name of the option was). But
I guess only Julien can clarify this ...

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 08:14:13 2023
Message-ID: <cfffcf15-c2fa-6529-d1ff-a71a7571bfe2@suse.com>
Date: Thu, 26 Jan 2023 09:13:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest
 areas
Content-Language: en-US
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
 <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com>
 <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
 <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com>
 <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
 <6099e6fb-0a3e-c6da-2766-d61c2c3d1e96@suse.com>
 <CABfawh=1XUWbeRJJZQsYVLyZX-Ez8=D2YYCgBYvDGQemHeJkzA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CABfawh=1XUWbeRJJZQsYVLyZX-Ez8=D2YYCgBYvDGQemHeJkzA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.01.2023 16:34, Tamas K Lengyel wrote:
> On Tue, Jan 24, 2023 at 6:19 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 23.01.2023 19:32, Tamas K Lengyel wrote:
>>> On Mon, Jan 23, 2023 at 11:24 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 23.01.2023 17:09, Tamas K Lengyel wrote:
>>>>> On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>> --- a/xen/arch/x86/mm/mem_sharing.c
>>>>>> +++ b/xen/arch/x86/mm/mem_sharing.c
>>>>>> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
>>>>>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>>>>>>  }
>>>>>>
>>>>>> +static int copy_guest_area(struct guest_area *cd_area,
>>>>>> +                           const struct guest_area *d_area,
>>>>>> +                           struct vcpu *cd_vcpu,
>>>>>> +                           const struct domain *d)
>>>>>> +{
>>>>>> +    mfn_t d_mfn, cd_mfn;
>>>>>> +
>>>>>> +    if ( !d_area->pg )
>>>>>> +        return 0;
>>>>>> +
>>>>>> +    d_mfn = page_to_mfn(d_area->pg);
>>>>>> +
>>>>>> +    /* Allocate & map a page for the area if it hasn't been already. */
>>>>>> +    if ( !cd_area->pg )
>>>>>> +    {
>>>>>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
>>>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
>>>>>> +        p2m_type_t p2mt;
>>>>>> +        p2m_access_t p2ma;
>>>>>> +        unsigned int offset;
>>>>>> +        int ret;
>>>>>> +
>>>>>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
>>>>>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
>>>>>> +        {
>>>>>> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
>>>>>> +
>>>>>> +            if ( !pg )
>>>>>> +                return -ENOMEM;
>>>>>> +
>>>>>> +            cd_mfn = page_to_mfn(pg);
>>>>>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
>>>>>> +
>>>>>> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
>>>>>> +                                 p2m->default_access, -1);
>>>>>> +            if ( ret )
>>>>>> +                return ret;
>>>>>> +        }
>>>>>> +        else if ( p2mt != p2m_ram_rw )
>>>>>> +            return -EBUSY;
>>>>>> +
>>>>>> +        /*
>>>>>> +         * Simply specify the entire range up to the end of the page.  All the
>>>>>> +         * function uses it for is a check for not crossing page boundaries.
>>>>>> +         */
>>>>>> +        offset = PAGE_OFFSET(d_area->map);
>>>>>> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
>>>>>> +                             PAGE_SIZE - offset, cd_area, NULL);
>>>>>> +        if ( ret )
>>>>>> +            return ret;
>>>>>> +    }
>>>>>> +    else
>>>>>> +        cd_mfn = page_to_mfn(cd_area->pg);
>>>>>
>>>>> Everything to this point seems to be non mem-sharing/forking related. Could
>>>>> these live somewhere else? There must be some other place where allocating
>>>>> these areas happens already for non-fork VMs so it would make sense to just
>>>>> refactor that code to be callable from here.
>>>>
>>>> It is the "copy" aspect which makes this mem-sharing (or really fork)
>>>> specific. Plus in the end this is no different from what you have
>>>> there right now for copying the vCPU info area. In the final patch
>>>> that other code gets removed by re-using the code here.
>>>
>>> Yes, the copy part is fork-specific. Arguably if there was a way to do
>>> the allocation of the page for vcpu_info I would prefer that being
>>> elsewhere, but while the only requirement is allocate-page and copy
>>> from parent I'm OK with that logic being in here because it's really
>>> straightforward. But now you also do extra sanity checks here which
>>> are harder to comprehend in this context alone.
>>
>> What sanity checks are you talking about (also below, where you claim
>> map_guest_area() would be used only to sanity check)?
> 
> Did I misread your comment above "All the function uses it for is a check
> for not crossing page boundaries"? That sounds to me like a simple sanity
> check, unclear why it matters though and why only for forks.

The comment is about the function's use of the range it is being passed.
It doesn't say in any way that the function is doing only sanity checking.
If the comment wording is ambiguous or unclear, I'm happy to take
improvement suggestions.

>>> What if extra sanity checks will be needed in the future? Or the
>>> sanity checks in the future diverge from where this happens for normal
>>> VMs because someone overlooks this needing to be synched here too?
>>>
>>>> I also haven't been able to spot anything that could be factored
>>>> out (and one might expect that if there was something, then the vCPU
>>>> info area copying should also already have used it). map_guest_area()
>>>> is all that is used for other purposes as well.
>>>
>>> Well, there must be a location where all this happens for normal VMs as
>>> well, no?
>>
>> That's map_guest_area(). What is needed here but not elsewhere is the
>> populating of the GFN underlying the to-be-mapped area. That's the code
>> being added here, mirroring what you need to do for the vCPU info page.
>> Similar code isn't needed elsewhere because the guest invoked operation
>> is purely a "map" - the underlying pages are already expected to be
>> populated (which of course we check, or else we wouldn't know what page
>> to actually map).
> 
> Populated by what and when?

Population happens either at domain creation or when the guest is moving
pages around (e.g. during ballooning). What is happening here (also in
the pre-existing code to deal with the vCPU info page) is the minimal
amount of "populate at creation" to meet the prereq for the mapping
operation(s).

>>> Why not factor that code so that it can be called from here, so
>>> that we don't have to track sanity check requirements in two different
>>> locations? Or for normal VMs that sanity checking bit isn't required? If
>>> so, why?
>>
>> As per above, I'm afraid that I'm lost with these questions. I simply
>> don't know what you're talking about.
> 
> You are adding code here that allocates memory and copies the content of
> similarly allocated memory from the parent. You perform extra sanity
> checks for unknown reasons that seem to be needed only here. It is
> unclear why you are doing that, and why the same code paths that
> allocate this memory for the parent can't simply be reused, so that the
> only thing left to do here would be to copy the content from the parent.

No, I'm not "adding code" in the sense that I read your comments so far.
Such code was already there (and, as pointed out somewhere, in slightly
broken form). Yes, I'm introducing a 2nd instance of this, but just to
then (in the last patch) remove the original (slightly broken) instance.
So across the entire series this is merely code movement (with
adjustments).

Jan
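As an aside for readers following the thread: the allocate-if-unpopulated decision in the quoted hunk can be modeled in a few lines. The sketch below is a deliberately simplified toy, not Xen code: `struct toy_p2m`, `populate_area_gfn()`, and the fake MFN allocator are invented stand-ins for `p2m->get_entry()`, `alloc_domheap_page()`, and `p2m->set_entry()`.

```c
#include <assert.h>
#include <stdint.h>

/* Everything here is an invented stand-in for illustration only. */
#define TOY_P2M_SLOTS   8
#define TOY_INVALID_MFN UINT64_MAX

enum toy_p2m_type { toy_p2m_invalid, toy_p2m_ram_rw, toy_p2m_ram_ro };

struct toy_p2m {
    uint64_t mfn[TOY_P2M_SLOTS];          /* GFN -> MFN, or TOY_INVALID_MFN */
    enum toy_p2m_type type[TOY_P2M_SLOTS];
};

static uint64_t toy_next_free_mfn = 100;  /* pretend page allocator */

/*
 * Mirror of the patch's decision structure: if the GFN backing the area
 * is not yet populated in the fork, allocate a page and enter it as
 * plain RAM; if it is populated but not plain RAM, refuse with -EBUSY;
 * otherwise reuse the existing page.
 */
static int populate_area_gfn(struct toy_p2m *p2m, unsigned int gfn,
                             uint64_t *out_mfn)
{
    if (p2m->mfn[gfn] == TOY_INVALID_MFN) {
        uint64_t mfn = toy_next_free_mfn++;   /* alloc_domheap_page() */
        p2m->mfn[gfn] = mfn;                  /* p2m->set_entry() */
        p2m->type[gfn] = toy_p2m_ram_rw;
        *out_mfn = mfn;
        return 0;
    }
    if (p2m->type[gfn] != toy_p2m_ram_rw)
        return -16;                           /* -EBUSY */
    *out_mfn = p2m->mfn[gfn];
    return 0;
}
```

Calling this twice for the same GFN returns the same MFN, which is the "populate at creation happens at most once" property the discussion turns on.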


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 08:41:17 2023
Message-ID: <ed76d0891b9c8f30acb14ecaf434dd1d1910b26b.camel@gmail.com>
Subject: Re: [PATCH v1 09/14] xen/riscv: introduce do_unexpected_trap()
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <amc96@srcf.net>, Alistair Francis <alistair23@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>
Date: Thu, 26 Jan 2023 10:40:36 +0200
In-Reply-To: <c12689b9-950c-94f5-e54b-490eea19b066@srcf.net>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <74ca10d9be1dfc3aed4b3b21a79eae88c9df26a4.1674226563.git.oleksii.kurochko@gmail.com>
	 <CAKmqyKNtFGoXmF1SJWO+JBJQvPSyDYEfpaYn2YBMQ=BsCk6VPQ@mail.gmail.com>
	 <df6bd499b06c2e4997a3b647624aa2163e7f23d6.camel@gmail.com>
	 <c12689b9-950c-94f5-e54b-490eea19b066@srcf.net>

On Wed, 2023-01-25 at 17:15 +0000, Andrew Cooper wrote:
> On 25/01/2023 5:01 pm, Oleksii wrote:
> > On Mon, 2023-01-23 at 09:39 +1000, Alistair Francis wrote:
> > > On Sat, Jan 21, 2023 at 1:00 AM Oleksii Kurochko
> > > <oleksii.kurochko@gmail.com> wrote:
> > > > The patch introduces a function whose purpose is to print the
> > > > cause of an exception and then execute the "wfi" instruction.
> > > >
> > > > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > > > ---
> > > >  xen/arch/riscv/traps.c | 14 +++++++++++++-
> > > >  1 file changed, 13 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
> > > > index dd64f053a5..fc25138a4b 100644
> > > > --- a/xen/arch/riscv/traps.c
> > > > +++ b/xen/arch/riscv/traps.c
> > > > @@ -95,7 +95,19 @@ const char *decode_cause(unsigned long cause)
> > > >      return decode_trap_cause(cause);
> > > >  }
> > > >
> > > > -void __handle_exception(struct cpu_user_regs *cpu_regs)
> > > > +static void do_unexpected_trap(const struct cpu_user_regs *regs)
> > > >  {
> > > > +    unsigned long cause = csr_read(CSR_SCAUSE);
> > > > +
> > > > +    early_printk("Unhandled exception: ");
> > > > +    early_printk(decode_cause(cause));
> > > > +    early_printk("\n");
> > > > +
> > > > +    // kind of die...
> > > >      wait_for_interrupt();
> > > We could put this in a loop, to ensure we never progress
> > >
> > I think that right now there is no big difference in how we stop,
> > because we have only one CPU, interrupts are disabled, and we are in
> > an exception, so it looks like nothing can interrupt us.
> > And in future this will be changed to panic(), so we won't need wfi()
> > here any more.
> 
> WFI is permitted to be implemented as a NOP by hardware. Furthermore,
> WFI with interrupts already disabled is a supported usecase, and will
> resume execution without taking the interrupt that became pending.
> 
> You need an infinite loop of WFIs for execution to halt here.
> 
Thanks a lot for the clarification!
Then it should definitely be changed to a loop.
> ~Andrew
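Andrew's point can be made concrete with a short sketch. This is illustrative RISC-V-only code, not the actual patch: the function name `toy_die` is invented, and `wfi` is issued as a bare instruction via inline assembly.

```c
/*
 * Halt this CPU for good. A single WFI is not enough: hardware may
 * implement WFI as a NOP, and with interrupts disabled it may still
 * complete and fall through, so the instruction must sit inside an
 * infinite loop.
 */
static void __attribute__((noreturn)) toy_die(void)
{
    for ( ;; )
        asm volatile ( "wfi" ::: "memory" );
}
```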



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 08:58:12 2023
Message-ID: <8a07d6f8-a07a-d435-deef-1366fad29a11@suse.com>
Date: Thu, 26 Jan 2023 09:57:43 +0100
Subject: Re: [PATCH v2] xen/x86: public: add TSC defines for cpuid leaf 4
To: Krister Johansen <kjlx@templeofstupid.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
References: <20230125184506.GE1963@templeofstupid.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230125184506.GE1963@templeofstupid.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0125.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7021:EE_
X-MS-Office365-Filtering-Correlation-Id: 19288d99-e591-49c1-9934-08daff7b642e
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KOhWYqtn1fLxx/PLTPrLB89CsknCn1FmQrdyZK1gGiSLMNupO+VPd/q0mCKbwh4oUYmGVS3iziE8J4SxCvsOJzNY51gxbEztQkI5SQCUjQKsX+vKTIZkqBPBViKeaet+mMUTiHYVva+a2xXvGh//ow2eh6wJkF3J8EeOmdXCLY81GXGs3wcZeWZvPbQPPe63xFz5OtK3UjkrOOXMfFwfmXMfNeZRh12uCEWOHpbO3XdMjstLCyJnRQOe1BG/HlUq0GgTnjolSGtPUKiwdZhpeqJybjVf9nsRAgYCBzHUoQox2HrHg4n29Jemz0Z90lgCc1r7zFyAHOiKNg34fywCnoWHl7b0HeIcUElH6koF4qhSoUgvGiD5zuIC/Bof1mBVxwHrrUBHZcRgkWu0wHzQcFr5qxfZJaBcU7fhRaPAZKtQRpWnqqUxUzFFpeN6WkAobB+FdQtINZxn8jv2/vrHMLnKxQBJV5rMlTvvP1z/5yF6xVIDnsCRpTUAp/29cMzje2L3Dm7iMoOBbZolerNai837LzDB/pRbMcM4TWtN8HWxHDkxgkP5c0/eEU7aypD6+Q877CmTlr4pPkQnyqfS+/M3iuWcNev5ywkm9TEen3HXYOE6luBlPQuWueLTvevas1e4CfFsz/+eIZurirB+CY76zP2C9MC9oNdI1w+eGSM4XATdtS6xagoHv10sXmqyHDGjeNTrsDsfv5qdP15Neycv4c837pkp01LYnBeKhFA=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(39860400002)(396003)(366004)(376002)(346002)(136003)(451199018)(2616005)(26005)(186003)(6512007)(53546011)(83380400001)(36756003)(31696002)(86362001)(38100700002)(66946007)(66556008)(66476007)(31686004)(4326008)(6916009)(8676002)(316002)(4744005)(41300700001)(5660300002)(8936002)(2906002)(478600001)(6486002)(54906003)(6506007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MFVyWGdUNXF1VnoyWmtjYlpRT3AwZlp3clNqYkRhUzNHNS9GVlJvTm03dmVM?=
 =?utf-8?B?QmZOQWhHaGExdTNsTDU2RUVjeDhLdEJlTU1aOXFNc0p5WC9TN0ZKamZzMlcz?=
 =?utf-8?B?K0NoTzhzT2dMd2ltUWdQQkdyRi9UZk5YR0JkKy8wcG1wdjNJcWRkL2p4NVR6?=
 =?utf-8?B?VWpEaWxxMmVFeFh2ekVJTXN6dFViV2szQzdlcm96MlZXNmg4V2NGT0dOZXc5?=
 =?utf-8?B?dFVmc0crenJ2TXpUOE14YlJGOGdtMzFXbURpWFpxZElPUjFsWGNTbXIrUi9S?=
 =?utf-8?B?MDdjTmVjRms2V0pvMTFSckl5NloxeWU1RnlMNjVQdU1UckRNNWdyWmpjSzVL?=
 =?utf-8?B?QzNGa3BNdXhZR0kzSHhqQXlqaVQ4M1V6UFFsTHZYYlY0amVONUtIVUpEYjlk?=
 =?utf-8?B?cHJ2a1F3SDVKbFpheTRLZlJrbno5U2hWOUZqN05SeXYzNHVsRThibVNFRDdm?=
 =?utf-8?B?M0JjaTVOTHB3WmJzY2hrTll6cFpxYVVvaGx1Vk5MK1VXR0JVenRqR0VGVzVM?=
 =?utf-8?B?YmZlTU9Id01qcER5SW5uS3d1b1pHbEJJMEI1L3RyRFFoZThnb0tPMTAwMWU4?=
 =?utf-8?B?bUIxRkNORkZNQkE1MG1jY0F1SE5WSXZ1R1FxQUF6ZlUreEVzMDJXSXh2aDdR?=
 =?utf-8?B?MGVKNVJzUVBnK3UxbXVnb2tpVitqMjhnQStITHYrRFdPSmsvdy82eDhSUjJQ?=
 =?utf-8?B?UlVmVVhzYTZtWmhvRjdwRlBiTDJSMG1OZ3pQYjd3K3BmTks5d2RQTk50c2pT?=
 =?utf-8?B?cmo4VkVpajhrU0Z3WDBUUTdWNUJ2MXU0WnJBalRLN0J0dHF6S21rQ0tKYmtK?=
 =?utf-8?B?Y0pSTHhXZWY5Tmd3MHZHMWY2Uk1ZZ0pxWk5FcVVtT2d5dWptRVI1dmNNTUVw?=
 =?utf-8?B?T3V3eHFveGRjbi9Rby9hQllnYm9IMkZXWFZTcy90bGFoTjNwV3BPdDhaTXBK?=
 =?utf-8?B?eVBtQUZ6NDBRejZjSUExQ1BjSkplbjRPSm5kOXU5aDBNcS9nMW4ySGFNZlhK?=
 =?utf-8?B?elkxNWNqN2pMMXNrVFFMZkFqS0NXQjE5aGoraEF2OEdIaHhTUW1IUkdhQWZm?=
 =?utf-8?B?ZlQvU25hYk50Z0hIbHdHRTdTaktTWlY0MDc2dFZhRldNcjNzTi9BWHltSkNV?=
 =?utf-8?B?OVl4aGExajQ4ZFJ4OXlhK0JUNG9GRTczQVZmeHV3T0o1b1FQWm9FRWhCM3hL?=
 =?utf-8?B?VW90S2ZwdFZaaHRMaDhzZlVBQVZVb0UzcTV4L0wvU3BCZE5oMmtpYW83aENv?=
 =?utf-8?B?SDFhays1R0FXY09tdjltaWlaNVhna05YWGl1WVJlNm5DU3NnYVVRem9nWldx?=
 =?utf-8?B?RDVuRW9QUmpucXJvL3JiSmg3K1djK0NhR0ZGb3NNeWFZc2sxbVQzdStqZ3FO?=
 =?utf-8?B?UHZxWUxtRnJ2MnN4L2ZreUlwM3ZYVVBRc1A1cFNQYkJUK1BNU1VJOUJqYi9x?=
 =?utf-8?B?R0Z5QXdhZ1c0VGt4STRldmxIdXVXS2Njc0gwVmxDSWJXYkJ3U1I4TTRrVjBR?=
 =?utf-8?B?aEl1QnpoOURFdVZNY3FQWCt2bHpCMkxqU0JJVVRicEFaNGIyc2tkaTB2WEMw?=
 =?utf-8?B?Nk1JSFRwU05sa2lOdGZJVjI4R3M5NEhGY25WK0lCSitsMC9paFVacTBnTVNY?=
 =?utf-8?B?WmU0R3N3OGdxUzV4REw3cEpQcE9SV0tvdUt5ZzRIT0VDZDVidDBkVVM1L3FF?=
 =?utf-8?B?eHZUWnZHd2FJUlVFSUFIbFVvbEhzL2hkWW5YTUNXUmpqVXZ4SjhMczJOL2tR?=
 =?utf-8?B?ZHp3b01aMUtFcllIQ0JQUlgrMXo4RlRaVFVBWVd1b01NTW5qcituUXVTWlFC?=
 =?utf-8?B?ZDg0Y1JtdGxkRGV0R1lXU1ZKUW55RUx1MmxlQTZpT3RlQXZRSStvQTJ2WUlx?=
 =?utf-8?B?d2NhYy8rUVViZ2pSajluTi9zWjcwTkZQdlJJcEVCc0wrb0NZOTltUmRjNDho?=
 =?utf-8?B?WDVQSnhRREZHcXF3MmxWOEdTYkdPT1VnUlBhVytYQUEwbFBtUFhIQzA2Z05v?=
 =?utf-8?B?MDlGOUR6WDVocFJpVGlNWGVPMi9XaDY4NWhtUDR1UmdXUDhqR3JaMFJFTnZw?=
 =?utf-8?B?K3JTT0ZxUVljWTdieWVvNjhSMzdXdEg3LzlHYjRWSi9FNEZUSDNocG5memJi?=
 =?utf-8?Q?7GH8iyufBYCmxY7Phv3mAGt0Y?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 19288d99-e591-49c1-9934-08daff7b642e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 08:57:45.4744
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 22nqlTiDehuZ5YlJa6PUhDWvfLK6BCdg+fQAeGLhNDSjwtu0Mf4Th4drj4ouW9sgkVhWsjn0VHhrtRX5UXzHBQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7021

On 25.01.2023 19:45, Krister Johansen wrote:
> v2:
>   - Fix whitespace between comment and #defines (feedback from Jan Beulich)

Hmm, ...

> --- a/xen/include/public/arch-x86/cpuid.h
> +++ b/xen/include/public/arch-x86/cpuid.h
> @@ -72,6 +72,14 @@
>   * Sub-leaf 2: EAX: host tsc frequency in kHz
>   */
>  
> +#define XEN_CPUID_TSC_EMULATED               (1u << 0)
> +#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
> +#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
> +#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
> +#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
> +#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
> +#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)

... while I'm fine with the leading blank line, what my earlier comment was
really about is the two separate blocks of #define-s (the flag bits and the
modes). I'll take care of this while committing; with that adjustment

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
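For readers skimming the thread, the grouping Jan describes (flag bits in one block, mode values in another) would presumably end up along these lines. This is a sketch of the intended layout based on the quoted hunk, not a quote of the committed header; the comments are illustrative, not taken from the file.

```c
/* Flag bits. */
#define XEN_CPUID_TSC_EMULATED               (1u << 0)
#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)

/* TSC mode values. */
#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)
```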


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 08:58:12 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176135-mainreport@xen.org>
Subject: [linux-linus test] 176135: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Jan 2023 08:57:57 +0000

flight 176135 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176135/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7c46948a6e9cf47ed03b0d489fde894ad46f1437
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  110 days
Failing since        173470  2022-10-08 06:21:34 Z  110 days  227 attempts
Testing same since   176135  2023-01-26 00:10:53 Z    0 days    1 attempts

------------------------------------------------------------
3442 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 529026 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:06:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484841.751675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyD2-0001PG-6q; Thu, 26 Jan 2023 09:06:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484841.751675; Thu, 26 Jan 2023 09:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyD2-0001P9-3w; Thu, 26 Jan 2023 09:06:16 +0000
Received: by outflank-mailman (input) for mailman id 484841;
 Thu, 26 Jan 2023 09:06:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKyD1-0001P3-I7
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:06:15 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2060.outbound.protection.outlook.com [40.107.105.60])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae13157c-9d58-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:06:12 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8595.eurprd04.prod.outlook.com (2603:10a6:20b:426::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Thu, 26 Jan
 2023 08:50:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 08:50:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae13157c-9d58-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X04Xvx0HrJ8yXtwKoMTUZwiMYwpDd0g6nHtrUxbGuij2Uv0MbJuH41N+++kYD3epQyO+IWAlieXnKrTK3f2CgtL6khOfAcRA61ZR2DJZu8x8PPukOvEns2tzulUomVTfy90yQJmaHyjp2bbD5nZp96GJvQbzodRyO3uOcWrH2/SFV/mrYaZW9RFYyDoy/+0olr1QzV7nHAxSEByeFtNgu/ncEsZF1PmYZLhAVLAv1pINeu5AxszKGW96Ucx1a+bv+yT5r5cGEFYzanmGnJnkDrmgeoGcgdg+4VvvuACMqt4mLX4ROvG6xNgEFqyCjOwYYXHlDv07p92SeaHchk8ybQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=07Olfahwhy/F/0iLY3tr63GquYMtGeh2XTGyUOO8lLA=;
 b=RB8Lgx8K0w04+x4cBc34vjT9xdvEewjzQF+ApS3oEs/f6h3uGFMo8HYu8ViqlUDAVN97+316sC9+lXyfbnobmU3irQenZUjBHK8tHfDIb5fS21BkX4ypdH7x4iyRYGFkjLoMKveg4D1wobj0qu4qqeA1U9Nb4uwO+j9ahSghZt2qViKY7APv018T2pZ4qq1HMMGyQdov7GzZWf/YtE4yGBqa0d4VhwQfYx9+e9j6WAUTDyVhab7p1iIF1GI8pV24RO0SYrAfGgU/AHtpSprluRAtadKcvKN8PYOnKtgSFTnBrizpUEKa1Ri5+kaGr8bdTyFjLZDUzlqGPKsPIxx6Og==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=07Olfahwhy/F/0iLY3tr63GquYMtGeh2XTGyUOO8lLA=;
 b=u9k8dz/zKz9PqJcm/t9rnXKNfoT8UoijVVVAY3Aqvg+49Qzm6tDpQZfqkExuQFGvixxhzScZWBirX9cZGxuubATySu8gzLJc8ugqTrarGCEa0u54kiaLykHYm0iW4oMI5uUSNhxkR0vuPyDu/J5eMiK3oF/dr49MPchYPMaoLLQQ/xZAoZBVuCxoMUiDxxSytWm77OFQNr9s/QO9cFgaNpJ6MsNMjIBlzR8nHsa2IlWPQ3YPY+Nzfs6OcNo9TovVXGXx39UD43cj0/vQvHISsgxM19JVzaAToOxgpgYa/Rw8VP6rlUSo92oCFN/tReJv75eo7QwsKzbbHU8x/WkwBA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a5ba3821-37fc-c724-d015-6e9dc8cf65fd@suse.com>
Date: Thu, 26 Jan 2023 09:50:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86/shadow: Fix PV32 shadowing in !HVM builds
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
 Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230125165308.22897-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230125165308.22897-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0103.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8595:EE_
X-MS-Office365-Filtering-Correlation-Id: 7fb4b5a9-877a-4f59-1f10-08daff7a57a7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7fb4b5a9-877a-4f59-1f10-08daff7a57a7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 08:50:14.9728
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yDB+5zA8N8YtxnLeiu0FPHD5VuxVIEJDVrma8BiTmLfAfkdVPvI2CWGrJb6w9xqm1O894XFR5NN9g75TWgwqIA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8595

On 25.01.2023 17:53, Andrew Cooper wrote:
> The OSSTest bisector identified an issue with c/s 1894049fa283 ("x86/shadow:
> L2H shadow type is PV32-only") in !HVM builds.
> 
> The bug is ultimately caused by sh_type_to_size[] not actually being specific
> to HVM guests, and its position in shadow/hvm.c misled the reasoning.
> 
> To fix the issue that OSSTest identified, SH_type_l2h_64_shadow must still
> have the value 1 in any CONFIG_PV32 build.  But simply adjusting this leaves
> us with misleading logic, and a reasonable chance of making a related error
> again in the future.
> 
> In hindsight, moving sh_type_to_size[] out of common.c in the first place was
> a mistake.  Therefore, move sh_type_to_size[] back to living in common.c,
> leaving a comment explaining why it happens to be inside an HVM conditional.
> 
> This effectively reverts the second half of 4fec945409fc ("x86/shadow: adjust
> and move sh_type_to_size[]") while retaining the other improvements from the
> same changeset.
> 
> While making this change, also adjust the sh_type_to_size[] declaration to
> match its definition.
> 
> Fixes: 4fec945409fc ("x86/shadow: adjust and move sh_type_to_size[]")
> Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Now it's time for me to ask: but why? Your interpretation of "HVM-only"
is simply too restricted. As said, "HVM-only" can have two meanings -
build-time HVM-only and run-time HVM-only. Code in hvm.c also expects
to be entered for PV guests when HVM=y - see the several is_hvm_...()
checks. They're all bogus though, and I have a patch pending to remove
them, but that doesn't alter the principle. See e.g. audit_p2m(), which
simply bails first thing when !paging_mode_translate(), or
p2m_pod_active(), which bails first thing when !is_hvm_domain().

Content of hvm.c (and of other files which are built only when HVM=y,
or more generally of any other files which have a Kconfig build-time
dependency in their respective Makefile) simply has to be aware of this
fact, and hence the data (the array) in question is quite fine to live
where it does.

I continue to view my 1st patch as the better approach. And in no case
do I view the 1st Fixes: tag as appropriate.

I guess we really need George or Tim to break ties here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:09:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:09:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484847.751685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyFa-0001yo-KU; Thu, 26 Jan 2023 09:08:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484847.751685; Thu, 26 Jan 2023 09:08:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyFa-0001yh-Hl; Thu, 26 Jan 2023 09:08:54 +0000
Received: by outflank-mailman (input) for mailman id 484847;
 Thu, 26 Jan 2023 09:08:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKyFZ-0001yb-Kz
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:08:53 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2057.outbound.protection.outlook.com [40.107.6.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0ce6631e-9d59-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:08:51 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9500.eurprd04.prod.outlook.com (2603:10a6:10:361::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.17; Thu, 26 Jan
 2023 09:08:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 09:08:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ce6631e-9d59-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jB0sYNfFOrFzxdoHaH4DfaWjmX8FACGONScwucrBd0htmFLkAmcU7JqWTPwL3TPJ5rKIKUBiRnf/9IIi7LCS+U1AkPM/q2Bp2/poQOPR4KmUTwS4M44mcNqGxYINyN7+X7xV9xn0QklhU6fVzAQi1DQkUNpzssDa4UTzpJ5h5B7+GlsiGVHJD6cMruJvH+XSOd+4vJMPX69BFxOTqMGXsGH27XtMwrbGhUUzxwQV0YGI7ld0TC+WF0EiqtgazdQ3yuE/kZmGpcmqFk/+JGFPwujOKAw7wvmoiAX9o4RupmIbqk5eaLmj7AV6RyoCNLmbDjuMRsqaFWRjv7TZsKN13Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6RiI8QZcCWBDagvCxjm3/OB8m9mvzCFh1dM4+LmAZCk=;
 b=dvTnxYXeZZGH0NOmgz1CyenNhIcKx2uSxTc/88GrcnvLaNL/QFpUO0Hpg1nx6tA5PMo8CJHq51GO/YA6xBDFdcf33OWUXeBDOzTSEYSGV8HrM5cGVO64oSWYo0VXz3HpVqAZEzYzWWNSNPSqJlq3bNNE8hSg3Rk8acaEq2dh3Ayen39kO4n1Ja1SCktMfrGwK5tTgvE8v/zhBqZpcvC7ofKT4o7unw3cxFwPiWE/gqg7Zmkh06zFy2hNJ9cff6Pen/qfYKhO8poKWD8UvHQypc8PeFCG0fPKJSgGajvlVCtn+nMn5ITB6goQt5H+k6lqTwqdFAsfS5iWG3kfwoU4qQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6RiI8QZcCWBDagvCxjm3/OB8m9mvzCFh1dM4+LmAZCk=;
 b=3nWxqRRs5gO8J0TraANDbye+TV6/0XjjFZicnMo4ruGHjDX+0PM11iAcpGpwMza+oT6dHdzv5YNjBdGtOoFZvlQZPQesLUI95Hcli/WyC6/MXJn9esvR1KLyVzhnFYqrDDQEk7KpxNncNlxnnYMKZBMZdCFsWVhlUbyPJj8rA/ZUgOR4Vt1nWaJnmzfxOqUwR69yYUVU1b59rwwI1/zifizgDR7VsqUxQxrNHG2teWJAOtIvQgBO54OpLDPuU3TMjAgyAi0ZaoimDHsDs8r6IPEx+ODRl10KoxPFPSGc4nEp2T0JNmWoCrMbpXVhPDaB0wl3QVX4WzvKjMHWbVWEZw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com>
Date: Thu, 26 Jan 2023 10:08:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: george.dunlap@citrix.com, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, Bertrand.Marquis@arm.com, julien@xen.org,
 Stefano Stabellini <stefano.stabellini@amd.com>,
 xen-devel@lists.xenproject.org
References: <20230125205735.2662514-1-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230125205735.2662514-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0117.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9500:EE_
X-MS-Office365-Filtering-Correlation-Id: ac9280ec-288d-4722-4adb-08daff7cee33
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ac9280ec-288d-4722-4adb-08daff7cee33
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 09:08:46.5119
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AsGgduxdXWmpfr7UgmiEq0l5S6MYPlpQqFLVOKAdDC7438Jc1hsFiUlkEPqS5q96PJureV1RlTOAhWPq74FmhA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9500

On 25.01.2023 21:57, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> As agreed during the last MISRA C discussion, I am adding the following
> MISRA C rules: 7.1, 7.3, 18.3.
> 
> I am also adding 13.1 and 18.2 that were "agreed pending an analysis on
> the amount of violations".
> 
> In the case of 13.1 there are zero violations reported by cppcheck.
> 
> In the case of 18.2, there are zero violations reported by cppcheck
> after deviating the linker symbols, as discussed.

I find this suspicious. See e.g. the ((pg) - frame_table) expressions
that both Arm and x86 have. frame_table is neither a linker-generated
symbol, nor does it represent something that the compiler (or static
analysis tools) would recognize as an "object". Still, the entire frame
table of course effectively is an object (an array), yet there's no way
for any tool to actually recognize the array dimension.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:15:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:15:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484853.751695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyLZ-0003Tj-Dn; Thu, 26 Jan 2023 09:15:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484853.751695; Thu, 26 Jan 2023 09:15:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyLZ-0003Tc-9z; Thu, 26 Jan 2023 09:15:05 +0000
Received: by outflank-mailman (input) for mailman id 484853;
 Thu, 26 Jan 2023 09:15:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKyLY-0003TW-8n
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:15:04 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2047.outbound.protection.outlook.com [40.107.6.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e93dd99b-9d59-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:15:01 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6889.eurprd04.prod.outlook.com (2603:10a6:10:11d::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Thu, 26 Jan
 2023 09:14:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 09:14:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e93dd99b-9d59-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HUkRYXfjCWdFAWWENBlBK42ErIP1HCzUv/luReeTVIkMZmq9hBEKlPwIjvNovJzSpZU4DIesLpemiHvm6PWvhZtAM+2E2B8+EytEBMXeaTCRuu+tqpv5Wpfw1CiTa4M5aTvXn2AWylrClu13uMb/g4icP7XSMJMX6lD0kRGq8xuHaUieKh7G9aA/AYJI2QAryNK7rqV8/fJw2tFMEOlrnVnm4i0tIhTAGfORzc/HLt25EX/JHVrjMVpLILyjSVEV6UAxZFfBvtobXICoH8FUws0DViXQ4Ad0SGz6/DUc5G+srVrw3un4hlKfiv+Hti+vxHo3Jz4G4Np6O8IRvlEzwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M3y46onS+R+2h2xkxHtZmw8U0u00amcjJQwPunPuj5Y=;
 b=UMMl0YiefOXnrku8QgJHLFVntBy9eZp3gWBsvXzAh6rPUXaZgYUeHxTbwZQOITYbxkHI0TCyg9Zfw0TdICF6aN8JLiMT8D0KZbWd6RQ+TpaKrz9b2MhAyuguSzphCGlNTjZ+qgRKTWuEnlJbpKErFY7fX6TIbbKuRfxZ4/8TUG0Syw8ab/WLBGFPa5sNw5v2WSCWYyRd6hA5oDmtBjsXCxYHrG4kGhExF0OTJSyPL1RK+J/NZW4TvQIkR1sGZCoVJNrqOjV/I28FYOluH4CNPTsrfGDxPt3+qQEePc69rReBIk+Yw48DPBskoCqZPQSLq356E3Bt9FSSz4V9Kp/AVg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M3y46onS+R+2h2xkxHtZmw8U0u00amcjJQwPunPuj5Y=;
 b=eKBP/g2/IV817524EJrL8dlyoWs/rZ0OMOBww6zV1tL1e/4U2hVV58wwqRHEiX0kZbeuBfTVDgHNs4AhEFmhsFrN62gdj0/UqOhKMGJEU/V80WTGtxLkfi2UZT7TKbLyQlOGpxvn/OVQovULXiMdypssVDAqDiXIsErqAsSArzWMecGFu4MBWaf5LADeRo1ak4L8wXZxEi6kqIn3vw5U/HK9Ny/K4HWtzSzxJt80fn8kx2xUIUONi0EWmcUTcA5fq3OAXDWKMi9E9Q+WK/TXbw2FR3sbBRjrjy8vhs/eA6arZKEXZurVh1O4mNIfTcoheE0nOrQocu6ldcFB6anW+g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5804fff0-b26e-bcc5-fc76-7e2be09bcd71@suse.com>
Date: Thu, 26 Jan 2023 10:14:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] tools/python: change 's#' size type for Python >= 3.10
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230126051310.4149074-1-marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230126051310.4149074-1-marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0075.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6889:EE_
X-MS-Office365-Filtering-Correlation-Id: 4f4e0eee-0ba4-4f24-ebe0-08daff7dcaef
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4f4e0eee-0ba4-4f24-ebe0-08daff7dcaef
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 09:14:56.9093
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1gM1ackvDxblD5Ljmh+SZ7s2L3U+X5L2d8zIhr0xW8MaVHX8azYVetrToh2n7m+3VoJfK9v/1wdUYRAEBPVxyg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6889

On 26.01.2023 06:13, Marek Marczykowski-Górecki wrote:
> @@ -1774,7 +1775,7 @@ static PyObject *pyflask_load(PyObject *self, PyObject *args, PyObject *kwds)
>  {
>      xc_interface *xc_handle;
>      char *policy;
> -    uint32_t len;
> +    Py_ssize_t len;

I find this suspicious - by the name, this is a signed type, when an
unsigned one was used here before (and properly so, imo).

Irrespective of that remark, I'll of course leave acking (or not) of this
to people who know Python better than I do.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:18:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:18:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484858.751705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyOw-0004F1-UA; Thu, 26 Jan 2023 09:18:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484858.751705; Thu, 26 Jan 2023 09:18:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyOw-0004Eu-RB; Thu, 26 Jan 2023 09:18:34 +0000
Received: by outflank-mailman (input) for mailman id 484858;
 Thu, 26 Jan 2023 09:18:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vvgl=5X=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1pKyOU-0004CH-IK
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:18:06 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56839980-9d5a-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:18:04 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id C2FD8B818BE;
 Thu, 26 Jan 2023 09:18:03 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0B686C433EF;
 Thu, 26 Jan 2023 09:17:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56839980-9d5a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674724682;
	bh=A3jqgSV9WUxdb3ZkujtLcqAQwnfqkoU5ybDSQdNw5cg=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=sky5enLFwJr3DfJ9b2ymbv/d7rE2oypuJvDM2cNeWmTa68+LJkoJeMjxMVF911xjR
	 /p46Fe2/pbyCsXipax42/Hou3ZWrY6fA7yy3y0E4tly4hSn7A+WSEbwQZyXAjmF/fy
	 t3crMkotzOdGBKQN+I5JbdstqSnsQJbVUv4CaD/JnI/YsWTl+N0fRV5BPO+OqFre41
	 7ERJV1xNKFLVanuVeJ94drHEBHBdP99mIJAbLamNbCF2r05esRYyfEqSbQFmRXAtIl
	 xm2AXIIoDKu6q2SCf9/o5Ej3L+rCFY/nYnTN8yhQCyREfEIhRdSCFxGuw7sIOiHW7k
	 L6KyLx8xuXeew==
Date: Thu, 26 Jan 2023 11:17:09 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
Message-ID: <Y9JFFYjfJf9uDijE@kernel.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-2-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-2-surenb@google.com>

On Wed, Jan 25, 2023 at 12:38:46AM -0800, Suren Baghdasaryan wrote:
> vm_flags are among VMA attributes which affect decisions like VMA merging
> and splitting. Therefore all vm_flags modifications are performed after
> taking exclusive mmap_lock to prevent vm_flags updates racing with such
> operations. Introduce modifier functions for vm_flags to be used whenever
> flags are updated. This way we can better check and control correct
> locking behavior during these updates.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  include/linux/mm.h       | 37 +++++++++++++++++++++++++++++++++++++
>  include/linux/mm_types.h |  8 +++++++-
>  2 files changed, 44 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c2f62bdce134..b71f2809caac 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -627,6 +627,43 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
>  	INIT_LIST_HEAD(&vma->anon_vma_chain);
>  }
>  
> +/* Use when VMA is not part of the VMA tree and needs no locking */
> +static inline void init_vm_flags(struct vm_area_struct *vma,
> +				 unsigned long flags)

I'd suggest making it vm_flags_init() etc.
That aside,

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> +{
> +	vma->vm_flags = flags;
> +}
> +
> +/* Use when VMA is part of the VMA tree and modifications need coordination */
> +static inline void reset_vm_flags(struct vm_area_struct *vma,
> +				  unsigned long flags)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	init_vm_flags(vma, flags);
> +}
> +
> +static inline void set_vm_flags(struct vm_area_struct *vma,
> +				unsigned long flags)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	vma->vm_flags |= flags;
> +}
> +
> +static inline void clear_vm_flags(struct vm_area_struct *vma,
> +				  unsigned long flags)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	vma->vm_flags &= ~flags;
> +}
> +
> +static inline void mod_vm_flags(struct vm_area_struct *vma,
> +				unsigned long set, unsigned long clear)
> +{
> +	mmap_assert_write_locked(vma->vm_mm);
> +	vma->vm_flags |= set;
> +	vma->vm_flags &= ~clear;
> +}
> +
>  static inline void vma_set_anonymous(struct vm_area_struct *vma)
>  {
>  	vma->vm_ops = NULL;
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 2d6d790d9bed..6c7c70bf50dd 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -491,7 +491,13 @@ struct vm_area_struct {
>  	 * See vmf_insert_mixed_prot() for discussion.
>  	 */
>  	pgprot_t vm_page_prot;
> -	unsigned long vm_flags;		/* Flags, see mm.h. */
> +
> +	/*
> +	 * Flags, see mm.h.
> +	 * WARNING! Do not modify directly.
> +	 * Use {init|reset|set|clear|mod}_vm_flags() functions instead.
> +	 */
> +	unsigned long vm_flags;
>  
>  	/*
>  	 * For areas with an address space and backing store,
> -- 
> 2.39.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:24:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484867.751720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyUs-0005l8-Va; Thu, 26 Jan 2023 09:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484867.751720; Thu, 26 Jan 2023 09:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyUs-0005kh-R9; Thu, 26 Jan 2023 09:24:42 +0000
Received: by outflank-mailman (input) for mailman id 484867;
 Thu, 26 Jan 2023 09:22:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vvgl=5X=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1pKySp-0005dZ-Uc
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:22:36 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f52aa8d4-9d5a-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:22:31 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D701661729;
 Thu, 26 Jan 2023 09:22:29 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id ABA23C433D2;
 Thu, 26 Jan 2023 09:21:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f52aa8d4-9d5a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674724949;
	bh=X+K66URYHlQUq77MuW3ws+0rHID3sIIyGmW2m75YS2c=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=i459OxxGFozoMnMqyvNvyIvIU77SfOS7DdoHD4GSBvt0YtXSu8iXKi+m5ie0s80g1
	 hjha3FTZQHHAaR8uUkaBgSVCHlM1KTxnQfIWOvnS3PFCjzSFvEal8SQKqzGYH+aWQV
	 MQz6yEbH2MAOR6OVELUy81XKAnHysdkP9OvMwznYNZ/C/kvRKWjLx8FPwz0queHfFv
	 hRnKmaozI8vIEBtKnIY3gn8/zAkeOWpf3uWUNZkDyF3ssPng8iOMcMXVKiuJ5jRE8r
	 dpDpzpvSg3XhfiRYXGnkb9BveSsNi5AeqIU2Q4FixENtYJ5Hq/QvaFe7nfltBxY16K
	 HFJuUR3StfxtQ==
Date: Thu, 26 Jan 2023 11:21:32 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 3/6] mm: replace vma->vm_flags direct modifications
 with modifier calls
Message-ID: <Y9JGHGs/vnTboqYY@kernel.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-4-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-4-surenb@google.com>

On Wed, Jan 25, 2023 at 12:38:48AM -0800, Suren Baghdasaryan wrote:
> Replace direct modifications to vma->vm_flags with calls to modifier
> functions to be able to track flag changes and to keep vma locking
> correctness.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  arch/arm/kernel/process.c                          |  2 +-
>  arch/ia64/mm/init.c                                |  8 ++++----
>  arch/loongarch/include/asm/tlb.h                   |  2 +-
>  arch/powerpc/kvm/book3s_xive_native.c              |  2 +-
>  arch/powerpc/mm/book3s64/subpage_prot.c            |  2 +-
>  arch/powerpc/platforms/book3s/vas-api.c            |  2 +-
>  arch/powerpc/platforms/cell/spufs/file.c           | 14 +++++++-------
>  arch/s390/mm/gmap.c                                |  3 +--
>  arch/x86/entry/vsyscall/vsyscall_64.c              |  2 +-
>  arch/x86/kernel/cpu/sgx/driver.c                   |  2 +-
>  arch/x86/kernel/cpu/sgx/virt.c                     |  2 +-
>  arch/x86/mm/pat/memtype.c                          |  6 +++---
>  arch/x86/um/mem_32.c                               |  2 +-
>  drivers/acpi/pfr_telemetry.c                       |  2 +-
>  drivers/android/binder.c                           |  3 +--
>  drivers/char/mspec.c                               |  2 +-
>  drivers/crypto/hisilicon/qm.c                      |  2 +-
>  drivers/dax/device.c                               |  2 +-
>  drivers/dma/idxd/cdev.c                            |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c            |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_chardev.c           |  4 ++--
>  drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c          |  4 ++--
>  drivers/gpu/drm/amd/amdkfd/kfd_events.c            |  4 ++--
>  drivers/gpu/drm/amd/amdkfd/kfd_process.c           |  4 ++--
>  drivers/gpu/drm/drm_gem.c                          |  2 +-
>  drivers/gpu/drm/drm_gem_dma_helper.c               |  3 +--
>  drivers/gpu/drm/drm_gem_shmem_helper.c             |  2 +-
>  drivers/gpu/drm/drm_vm.c                           |  8 ++++----
>  drivers/gpu/drm/etnaviv/etnaviv_gem.c              |  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_gem.c            |  4 ++--
>  drivers/gpu/drm/gma500/framebuffer.c               |  2 +-
>  drivers/gpu/drm/i810/i810_dma.c                    |  2 +-
>  drivers/gpu/drm/i915/gem/i915_gem_mman.c           |  4 ++--
>  drivers/gpu/drm/mediatek/mtk_drm_gem.c             |  2 +-
>  drivers/gpu/drm/msm/msm_gem.c                      |  2 +-
>  drivers/gpu/drm/omapdrm/omap_gem.c                 |  3 +--
>  drivers/gpu/drm/rockchip/rockchip_drm_gem.c        |  3 +--
>  drivers/gpu/drm/tegra/gem.c                        |  5 ++---
>  drivers/gpu/drm/ttm/ttm_bo_vm.c                    |  3 +--
>  drivers/gpu/drm/virtio/virtgpu_vram.c              |  2 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c           |  2 +-
>  drivers/gpu/drm/xen/xen_drm_front_gem.c            |  3 +--
>  drivers/hsi/clients/cmt_speech.c                   |  2 +-
>  drivers/hwtracing/intel_th/msu.c                   |  2 +-
>  drivers/hwtracing/stm/core.c                       |  2 +-
>  drivers/infiniband/hw/hfi1/file_ops.c              |  4 ++--
>  drivers/infiniband/hw/mlx5/main.c                  |  4 ++--
>  drivers/infiniband/hw/qib/qib_file_ops.c           | 13 ++++++-------
>  drivers/infiniband/hw/usnic/usnic_ib_verbs.c       |  2 +-
>  drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c    |  2 +-
>  .../media/common/videobuf2/videobuf2-dma-contig.c  |  2 +-
>  drivers/media/common/videobuf2/videobuf2-vmalloc.c |  2 +-
>  drivers/media/v4l2-core/videobuf-dma-contig.c      |  2 +-
>  drivers/media/v4l2-core/videobuf-dma-sg.c          |  4 ++--
>  drivers/media/v4l2-core/videobuf-vmalloc.c         |  2 +-
>  drivers/misc/cxl/context.c                         |  2 +-
>  drivers/misc/habanalabs/common/memory.c            |  2 +-
>  drivers/misc/habanalabs/gaudi/gaudi.c              |  4 ++--
>  drivers/misc/habanalabs/gaudi2/gaudi2.c            |  8 ++++----
>  drivers/misc/habanalabs/goya/goya.c                |  4 ++--
>  drivers/misc/ocxl/context.c                        |  4 ++--
>  drivers/misc/ocxl/sysfs.c                          |  2 +-
>  drivers/misc/open-dice.c                           |  4 ++--
>  drivers/misc/sgi-gru/grufile.c                     |  4 ++--
>  drivers/misc/uacce/uacce.c                         |  2 +-
>  drivers/sbus/char/oradax.c                         |  2 +-
>  drivers/scsi/cxlflash/ocxl_hw.c                    |  2 +-
>  drivers/scsi/sg.c                                  |  2 +-
>  drivers/staging/media/atomisp/pci/hmm/hmm_bo.c     |  2 +-
>  drivers/staging/media/deprecated/meye/meye.c       |  4 ++--
>  .../media/deprecated/stkwebcam/stk-webcam.c        |  2 +-
>  drivers/target/target_core_user.c                  |  2 +-
>  drivers/uio/uio.c                                  |  2 +-
>  drivers/usb/core/devio.c                           |  3 +--
>  drivers/usb/mon/mon_bin.c                          |  3 +--
>  drivers/vdpa/vdpa_user/iova_domain.c               |  2 +-
>  drivers/vfio/pci/vfio_pci_core.c                   |  2 +-
>  drivers/vhost/vdpa.c                               |  2 +-
>  drivers/video/fbdev/68328fb.c                      |  2 +-
>  drivers/video/fbdev/core/fb_defio.c                |  4 ++--
>  drivers/xen/gntalloc.c                             |  2 +-
>  drivers/xen/gntdev.c                               |  4 ++--
>  drivers/xen/privcmd-buf.c                          |  2 +-
>  drivers/xen/privcmd.c                              |  4 ++--
>  fs/aio.c                                           |  2 +-
>  fs/cramfs/inode.c                                  |  2 +-
>  fs/erofs/data.c                                    |  2 +-
>  fs/exec.c                                          |  4 ++--
>  fs/ext4/file.c                                     |  2 +-
>  fs/fuse/dax.c                                      |  2 +-
>  fs/hugetlbfs/inode.c                               |  4 ++--
>  fs/orangefs/file.c                                 |  3 +--
>  fs/proc/task_mmu.c                                 |  2 +-
>  fs/proc/vmcore.c                                   |  3 +--
>  fs/userfaultfd.c                                   |  2 +-
>  fs/xfs/xfs_file.c                                  |  2 +-
>  include/linux/mm.h                                 |  2 +-
>  kernel/bpf/ringbuf.c                               |  4 ++--
>  kernel/bpf/syscall.c                               |  4 ++--
>  kernel/events/core.c                               |  2 +-
>  kernel/kcov.c                                      |  2 +-
>  kernel/relay.c                                     |  2 +-
>  mm/madvise.c                                       |  2 +-
>  mm/memory.c                                        |  6 +++---
>  mm/mlock.c                                         |  6 +++---
>  mm/mmap.c                                          | 10 +++++-----
>  mm/mprotect.c                                      |  2 +-
>  mm/mremap.c                                        |  6 +++---
>  mm/nommu.c                                         | 11 ++++++-----
>  mm/secretmem.c                                     |  2 +-
>  mm/shmem.c                                         |  2 +-
>  mm/vmalloc.c                                       |  2 +-
>  net/ipv4/tcp.c                                     |  4 ++--
>  security/selinux/selinuxfs.c                       |  6 +++---
>  sound/core/oss/pcm_oss.c                           |  2 +-
>  sound/core/pcm_native.c                            |  9 +++++----
>  sound/soc/pxa/mmp-sspa.c                           |  2 +-
>  sound/usb/usx2y/us122l.c                           |  4 ++--
>  sound/usb/usx2y/usX2Yhwdep.c                       |  2 +-
>  sound/usb/usx2y/usx2yhwdeppcm.c                    |  2 +-
>  120 files changed, 188 insertions(+), 199 deletions(-)
> 
> diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
> index f811733a8fc5..ec65f3ea3150 100644
> --- a/arch/arm/kernel/process.c
> +++ b/arch/arm/kernel/process.c
> @@ -316,7 +316,7 @@ static int __init gate_vma_init(void)
>  	gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
>  	gate_vma.vm_start = 0xffff0000;
>  	gate_vma.vm_end	= 0xffff0000 + PAGE_SIZE;
> -	gate_vma.vm_flags = VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC;
> +	init_vm_flags(&gate_vma, VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC);
>  	return 0;
>  }
>  arch_initcall(gate_vma_init);
> diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
> index fc4e4217e87f..d355e0ce28ab 100644
> --- a/arch/ia64/mm/init.c
> +++ b/arch/ia64/mm/init.c
> @@ -109,7 +109,7 @@ ia64_init_addr_space (void)
>  		vma_set_anonymous(vma);
>  		vma->vm_start = current->thread.rbs_bot & PAGE_MASK;
>  		vma->vm_end = vma->vm_start + PAGE_SIZE;
> -		vma->vm_flags = VM_DATA_DEFAULT_FLAGS|VM_GROWSUP|VM_ACCOUNT;
> +		init_vm_flags(vma, VM_DATA_DEFAULT_FLAGS|VM_GROWSUP|VM_ACCOUNT);
>  		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>  		mmap_write_lock(current->mm);
>  		if (insert_vm_struct(current->mm, vma)) {
> @@ -127,8 +127,8 @@ ia64_init_addr_space (void)
>  			vma_set_anonymous(vma);
>  			vma->vm_end = PAGE_SIZE;
>  			vma->vm_page_prot = __pgprot(pgprot_val(PAGE_READONLY) | _PAGE_MA_NAT);
> -			vma->vm_flags = VM_READ | VM_MAYREAD | VM_IO |
> -					VM_DONTEXPAND | VM_DONTDUMP;
> +			init_vm_flags(vma, VM_READ | VM_MAYREAD | VM_IO |
> +				      VM_DONTEXPAND | VM_DONTDUMP);
>  			mmap_write_lock(current->mm);
>  			if (insert_vm_struct(current->mm, vma)) {
>  				mmap_write_unlock(current->mm);
> @@ -272,7 +272,7 @@ static int __init gate_vma_init(void)
>  	vma_init(&gate_vma, NULL);
>  	gate_vma.vm_start = FIXADDR_USER_START;
>  	gate_vma.vm_end = FIXADDR_USER_END;
> -	gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
> +	init_vm_flags(&gate_vma, VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC);
>  	gate_vma.vm_page_prot = __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX);
>  
>  	return 0;
> diff --git a/arch/loongarch/include/asm/tlb.h b/arch/loongarch/include/asm/tlb.h
> index dd24f5898f65..51e35b44d105 100644
> --- a/arch/loongarch/include/asm/tlb.h
> +++ b/arch/loongarch/include/asm/tlb.h
> @@ -149,7 +149,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
>  	struct vm_area_struct vma;
>  
>  	vma.vm_mm = tlb->mm;
> -	vma.vm_flags = 0;
> +	init_vm_flags(&vma, 0);
>  	if (tlb->fullmm) {
>  		flush_tlb_mm(tlb->mm);
>  		return;
> diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
> index 4f566bea5e10..7976af0f5ff8 100644
> --- a/arch/powerpc/kvm/book3s_xive_native.c
> +++ b/arch/powerpc/kvm/book3s_xive_native.c
> @@ -324,7 +324,7 @@ static int kvmppc_xive_native_mmap(struct kvm_device *dev,
>  		return -EINVAL;
>  	}
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
>  
>  	/*
> diff --git a/arch/powerpc/mm/book3s64/subpage_prot.c b/arch/powerpc/mm/book3s64/subpage_prot.c
> index d73b3b4176e8..72948cdb1911 100644
> --- a/arch/powerpc/mm/book3s64/subpage_prot.c
> +++ b/arch/powerpc/mm/book3s64/subpage_prot.c
> @@ -156,7 +156,7 @@ static void subpage_mark_vma_nohuge(struct mm_struct *mm, unsigned long addr,
>  	 * VM_NOHUGEPAGE and split them.
>  	 */
>  	for_each_vma_range(vmi, vma, addr + len) {
> -		vma->vm_flags |= VM_NOHUGEPAGE;
> +		set_vm_flags(vma, VM_NOHUGEPAGE);
>  		walk_page_vma(vma, &subpage_walk_ops, NULL);
>  	}
>  }
> diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
> index 9580e8e12165..d5b8e55d010a 100644
> --- a/arch/powerpc/platforms/book3s/vas-api.c
> +++ b/arch/powerpc/platforms/book3s/vas-api.c
> @@ -525,7 +525,7 @@ static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
>  	pfn = paste_addr >> PAGE_SHIFT;
>  
>  	/* flags, page_prot from cxl_mmap(), except we want cachable */
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_cached(vma->vm_page_prot);
>  
>  	prot = __pgprot(pgprot_val(vma->vm_page_prot) | _PAGE_DIRTY);
> diff --git a/arch/powerpc/platforms/cell/spufs/file.c b/arch/powerpc/platforms/cell/spufs/file.c
> index 62d90a5e23d1..784fa39a484a 100644
> --- a/arch/powerpc/platforms/cell/spufs/file.c
> +++ b/arch/powerpc/platforms/cell/spufs/file.c
> @@ -291,7 +291,7 @@ static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
>  
>  	vma->vm_ops = &spufs_mem_mmap_vmops;
> @@ -381,7 +381,7 @@ static int spufs_cntl_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
>  	vma->vm_ops = &spufs_cntl_mmap_vmops;
> @@ -1043,7 +1043,7 @@ static int spufs_signal1_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
>  	vma->vm_ops = &spufs_signal1_mmap_vmops;
> @@ -1179,7 +1179,7 @@ static int spufs_signal2_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
>  	vma->vm_ops = &spufs_signal2_mmap_vmops;
> @@ -1302,7 +1302,7 @@ static int spufs_mss_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
>  	vma->vm_ops = &spufs_mss_mmap_vmops;
> @@ -1364,7 +1364,7 @@ static int spufs_psmap_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
>  	vma->vm_ops = &spufs_psmap_mmap_vmops;
> @@ -1424,7 +1424,7 @@ static int spufs_mfc_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
>  	vma->vm_ops = &spufs_mfc_mmap_vmops;
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 69af6cdf1a2a..3a695b8a1e3c 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2522,8 +2522,7 @@ static inline void thp_split_mm(struct mm_struct *mm)
>  	VMA_ITERATOR(vmi, mm, 0);
>  
>  	for_each_vma(vmi, vma) {
> -		vma->vm_flags &= ~VM_HUGEPAGE;
> -		vma->vm_flags |= VM_NOHUGEPAGE;
> +		mod_vm_flags(vma, VM_NOHUGEPAGE, VM_HUGEPAGE);
>  		walk_page_vma(vma, &thp_split_walk_ops, NULL);
>  	}
>  	mm->def_flags |= VM_NOHUGEPAGE;
> diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
> index 4af81df133ee..e2a1626d86d8 100644
> --- a/arch/x86/entry/vsyscall/vsyscall_64.c
> +++ b/arch/x86/entry/vsyscall/vsyscall_64.c
> @@ -391,7 +391,7 @@ void __init map_vsyscall(void)
>  	}
>  
>  	if (vsyscall_mode == XONLY)
> -		gate_vma.vm_flags = VM_EXEC;
> +		init_vm_flags(&gate_vma, VM_EXEC);
>  
>  	BUILD_BUG_ON((unsigned long)__fix_to_virt(VSYSCALL_PAGE) !=
>  		     (unsigned long)VSYSCALL_ADDR);
> diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
> index aa9b8b868867..42c0bded93b6 100644
> --- a/arch/x86/kernel/cpu/sgx/driver.c
> +++ b/arch/x86/kernel/cpu/sgx/driver.c
> @@ -95,7 +95,7 @@ static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
>  		return ret;
>  
>  	vma->vm_ops = &sgx_vm_ops;
> -	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
> +	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO);
>  	vma->vm_private_data = encl;
>  
>  	return 0;
> diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
> index 6a77a14eee38..0774a0bfeb28 100644
> --- a/arch/x86/kernel/cpu/sgx/virt.c
> +++ b/arch/x86/kernel/cpu/sgx/virt.c
> @@ -105,7 +105,7 @@ static int sgx_vepc_mmap(struct file *file, struct vm_area_struct *vma)
>  
>  	vma->vm_ops = &sgx_vepc_vm_ops;
>  	/* Don't copy VMA in fork() */
> -	vma->vm_flags |= VM_PFNMAP | VM_IO | VM_DONTDUMP | VM_DONTCOPY;
> +	set_vm_flags(vma, VM_PFNMAP | VM_IO | VM_DONTDUMP | VM_DONTCOPY);
>  	vma->vm_private_data = vepc;
>  
>  	return 0;
> diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> index fb4b1b5e0dea..ae9645c900fa 100644
> --- a/arch/x86/mm/pat/memtype.c
> +++ b/arch/x86/mm/pat/memtype.c
> @@ -1000,7 +1000,7 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
>  
>  		ret = reserve_pfn_range(paddr, size, prot, 0);
>  		if (ret == 0 && vma)
> -			vma->vm_flags |= VM_PAT;
> +			set_vm_flags(vma, VM_PAT);
>  		return ret;
>  	}
>  
> @@ -1066,7 +1066,7 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
>  	}
>  	free_pfn_range(paddr, size);
>  	if (vma)
> -		vma->vm_flags &= ~VM_PAT;
> +		clear_vm_flags(vma, VM_PAT);
>  }
>  
>  /*
> @@ -1076,7 +1076,7 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
>   */
>  void untrack_pfn_moved(struct vm_area_struct *vma)
>  {
> -	vma->vm_flags &= ~VM_PAT;
> +	clear_vm_flags(vma, VM_PAT);
>  }
>  
>  pgprot_t pgprot_writecombine(pgprot_t prot)
> diff --git a/arch/x86/um/mem_32.c b/arch/x86/um/mem_32.c
> index cafd01f730da..bfd2c320ad25 100644
> --- a/arch/x86/um/mem_32.c
> +++ b/arch/x86/um/mem_32.c
> @@ -16,7 +16,7 @@ static int __init gate_vma_init(void)
>  	vma_init(&gate_vma, NULL);
>  	gate_vma.vm_start = FIXADDR_USER_START;
>  	gate_vma.vm_end = FIXADDR_USER_END;
> -	gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
> +	init_vm_flags(&gate_vma, VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC);
>  	gate_vma.vm_page_prot = PAGE_READONLY;
>  
>  	return 0;
> diff --git a/drivers/acpi/pfr_telemetry.c b/drivers/acpi/pfr_telemetry.c
> index 27fb6cdad75f..9e339c705b5b 100644
> --- a/drivers/acpi/pfr_telemetry.c
> +++ b/drivers/acpi/pfr_telemetry.c
> @@ -310,7 +310,7 @@ pfrt_log_mmap(struct file *file, struct vm_area_struct *vma)
>  		return -EROFS;
>  
>  	/* changing from read to write with mprotect is not allowed */
> -	vma->vm_flags &= ~VM_MAYWRITE;
> +	clear_vm_flags(vma, VM_MAYWRITE);
>  
>  	pfrt_log_dev = to_pfrt_log_dev(file);
>  
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index 880224ec6abb..dd6c99223b8c 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -5572,8 +5572,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
>  		       proc->pid, vma->vm_start, vma->vm_end, "bad vm_flags", -EPERM);
>  		return -EPERM;
>  	}
> -	vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
> -	vma->vm_flags &= ~VM_MAYWRITE;
> +	mod_vm_flags(vma, VM_DONTCOPY | VM_MIXEDMAP, VM_MAYWRITE);
>  
>  	vma->vm_ops = &binder_vm_ops;
>  	vma->vm_private_data = proc;
> diff --git a/drivers/char/mspec.c b/drivers/char/mspec.c
> index f8231e2e84be..57bd36a28f95 100644
> --- a/drivers/char/mspec.c
> +++ b/drivers/char/mspec.c
> @@ -206,7 +206,7 @@ mspec_mmap(struct file *file, struct vm_area_struct *vma,
>  	refcount_set(&vdata->refcnt, 1);
>  	vma->vm_private_data = vdata;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  	if (vdata->type == MSPEC_UNCACHED)
>  		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	vma->vm_ops = &mspec_vm_ops;
> diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
> index 007ac7a69ce7..57ecdb5c97fb 100644
> --- a/drivers/crypto/hisilicon/qm.c
> +++ b/drivers/crypto/hisilicon/qm.c
> @@ -2363,7 +2363,7 @@ static int hisi_qm_uacce_mmap(struct uacce_queue *q,
>  				return -EINVAL;
>  		}
>  
> -		vma->vm_flags |= VM_IO;
> +		set_vm_flags(vma, VM_IO);
>  
>  		return remap_pfn_range(vma, vma->vm_start,
>  				       phys_base >> PAGE_SHIFT,
> diff --git a/drivers/dax/device.c b/drivers/dax/device.c
> index 5494d745ced5..6e9726dfaa7e 100644
> --- a/drivers/dax/device.c
> +++ b/drivers/dax/device.c
> @@ -308,7 +308,7 @@ static int dax_mmap(struct file *filp, struct vm_area_struct *vma)
>  		return rc;
>  
>  	vma->vm_ops = &dax_vm_ops;
> -	vma->vm_flags |= VM_HUGEPAGE;
> +	set_vm_flags(vma, VM_HUGEPAGE);
>  	return 0;
>  }
>  
> diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
> index e13e92609943..51cf836cf329 100644
> --- a/drivers/dma/idxd/cdev.c
> +++ b/drivers/dma/idxd/cdev.c
> @@ -201,7 +201,7 @@ static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
>  	if (rc < 0)
>  		return rc;
>  
> -	vma->vm_flags |= VM_DONTCOPY;
> +	set_vm_flags(vma, VM_DONTCOPY);
>  	pfn = (base + idxd_get_wq_portal_full_offset(wq->id,
>  				IDXD_PORTAL_LIMITED)) >> PAGE_SHIFT;
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index bb7350ea1d75..70b08a0d13cd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -257,7 +257,7 @@ static int amdgpu_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_str
>  	 */
>  	if (is_cow_mapping(vma->vm_flags) &&
>  	    !(vma->vm_flags & VM_ACCESS_FLAGS))
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  
>  	return drm_gem_ttm_mmap(obj, vma);
>  }
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> index 6d291aa6386b..7beb8dd6a5e6 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> @@ -2879,8 +2879,8 @@ static int kfd_mmio_mmap(struct kfd_dev *dev, struct kfd_process *process,
>  
>  	address = dev->adev->rmmio_remap.bus_addr;
>  
> -	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
> -				VM_DONTDUMP | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
> +				VM_DONTDUMP | VM_PFNMAP);
>  
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
> index cd4e61bf0493..6cbe47cf9be5 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
> @@ -159,8 +159,8 @@ int kfd_doorbell_mmap(struct kfd_dev *dev, struct kfd_process *process,
>  	address = kfd_get_process_doorbells(pdd);
>  	if (!address)
>  		return -ENOMEM;
> -	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
> -				VM_DONTDUMP | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
> +				VM_DONTDUMP | VM_PFNMAP);
>  
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> index 729d26d648af..95cd20056cea 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> @@ -1052,8 +1052,8 @@ int kfd_event_mmap(struct kfd_process *p, struct vm_area_struct *vma)
>  	pfn = __pa(page->kernel_address);
>  	pfn >>= PAGE_SHIFT;
>  
> -	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE
> -		       | VM_DONTDUMP | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE
> +		       | VM_DONTDUMP | VM_PFNMAP);
>  
>  	pr_debug("Mapping signal page\n");
>  	pr_debug("     start user address  == 0x%08lx\n", vma->vm_start);
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> index 51b1683ac5c1..b40f4b122918 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> @@ -1978,8 +1978,8 @@ int kfd_reserved_mem_mmap(struct kfd_dev *dev, struct kfd_process *process,
>  		return -ENOMEM;
>  	}
>  
> -	vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND
> -		| VM_NORESERVE | VM_DONTDUMP | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND
> +		| VM_NORESERVE | VM_DONTDUMP | VM_PFNMAP);
>  	/* Mapping pages to user process */
>  	return remap_pfn_range(vma, vma->vm_start,
>  			       PFN_DOWN(__pa(qpd->cwsr_kaddr)),
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index b8db675e7fb5..6ea7bcaa592b 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1047,7 +1047,7 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
>  			goto err_drm_gem_object_put;
>  		}
>  
> -		vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +		set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  		vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>  		vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
>  	}
> diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
> index 1e658c448366..41f241b9a581 100644
> --- a/drivers/gpu/drm/drm_gem_dma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_dma_helper.c
> @@ -530,8 +530,7 @@ int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
>  	 * the whole buffer.
>  	 */
>  	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> -	vma->vm_flags &= ~VM_PFNMAP;
> -	vma->vm_flags |= VM_DONTEXPAND;
> +	mod_vm_flags(vma, VM_DONTEXPAND, VM_PFNMAP);
>  
>  	if (dma_obj->map_noncoherent) {
>  		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index b602cd72a120..a5032dfac492 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -633,7 +633,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>  	if (ret)
>  		return ret;
>  
> -	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>  	if (shmem->map_wc)
>  		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> diff --git a/drivers/gpu/drm/drm_vm.c b/drivers/gpu/drm/drm_vm.c
> index f024dc93939e..8867bb6c40e3 100644
> --- a/drivers/gpu/drm/drm_vm.c
> +++ b/drivers/gpu/drm/drm_vm.c
> @@ -476,7 +476,7 @@ static int drm_mmap_dma(struct file *filp, struct vm_area_struct *vma)
>  
>  	if (!capable(CAP_SYS_ADMIN) &&
>  	    (dma->flags & _DRM_DMA_USE_PCI_RO)) {
> -		vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
> +		clear_vm_flags(vma, VM_WRITE | VM_MAYWRITE);
>  #if defined(__i386__) || defined(__x86_64__)
>  		pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
>  #else
> @@ -492,7 +492,7 @@ static int drm_mmap_dma(struct file *filp, struct vm_area_struct *vma)
>  
>  	vma->vm_ops = &drm_vm_dma_ops;
>  
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  
>  	drm_vm_open_locked(dev, vma);
>  	return 0;
> @@ -560,7 +560,7 @@ static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
>  		return -EINVAL;
>  
>  	if (!capable(CAP_SYS_ADMIN) && (map->flags & _DRM_READ_ONLY)) {
> -		vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
> +		clear_vm_flags(vma, VM_WRITE | VM_MAYWRITE);
>  #if defined(__i386__) || defined(__x86_64__)
>  		pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
>  #else
> @@ -628,7 +628,7 @@ static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
>  	default:
>  		return -EINVAL;	/* This should never happen. */
>  	}
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  
>  	drm_vm_open_locked(dev, vma);
>  	return 0;
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> index c5ae5492e1af..9a5a317038a4 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> @@ -130,7 +130,7 @@ static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
>  {
>  	pgprot_t vm_page_prot;
>  
> -	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  
>  	vm_page_prot = vm_get_page_prot(vma->vm_flags);
>  
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index 3e493f48e0d4..c330d415729c 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -274,7 +274,7 @@ static int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem *exynos_gem,
>  	unsigned long vm_size;
>  	int ret;
>  
> -	vma->vm_flags &= ~VM_PFNMAP;
> +	clear_vm_flags(vma, VM_PFNMAP);
>  	vma->vm_pgoff = 0;
>  
>  	vm_size = vma->vm_end - vma->vm_start;
> @@ -368,7 +368,7 @@ static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct
>  	if (obj->import_attach)
>  		return dma_buf_mmap(obj->dma_buf, vma, 0);
>  
> -	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
>  
>  	DRM_DEV_DEBUG_KMS(to_dma_dev(obj->dev), "flags = 0x%x\n",
>  			  exynos_gem->flags);
> diff --git a/drivers/gpu/drm/gma500/framebuffer.c b/drivers/gpu/drm/gma500/framebuffer.c
> index 8d5a37b8f110..471d5b3c1535 100644
> --- a/drivers/gpu/drm/gma500/framebuffer.c
> +++ b/drivers/gpu/drm/gma500/framebuffer.c
> @@ -139,7 +139,7 @@ static int psbfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  	 */
>  	vma->vm_ops = &psbfb_vm_ops;
>  	vma->vm_private_data = (void *)fb;
> -	vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/i810/i810_dma.c b/drivers/gpu/drm/i810/i810_dma.c
> index 9fb4dd63342f..bced8c30709e 100644
> --- a/drivers/gpu/drm/i810/i810_dma.c
> +++ b/drivers/gpu/drm/i810/i810_dma.c
> @@ -102,7 +102,7 @@ static int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma)
>  	buf = dev_priv->mmap_buffer;
>  	buf_priv = buf->dev_private;
>  
> -	vma->vm_flags |= VM_DONTCOPY;
> +	set_vm_flags(vma, VM_DONTCOPY);
>  
>  	buf_priv->currently_mapped = I810_BUF_MAPPED;
>  
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> index 0ad44f3868de..71b9e0485cb9 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> @@ -979,7 +979,7 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>  			i915_gem_object_put(obj);
>  			return -EINVAL;
>  		}
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  	}
>  
>  	anon = mmap_singleton(to_i915(dev));
> @@ -988,7 +988,7 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>  		return PTR_ERR(anon);
>  	}
>  
> -	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
> +	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO);
>  
>  	/*
>  	 * We keep the ref on mmo->obj, not vm_file, but we require
> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> index 47e96b0289f9..427089733b87 100644
> --- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> +++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> @@ -158,7 +158,7 @@ static int mtk_drm_gem_object_mmap(struct drm_gem_object *obj,
>  	 * dma_alloc_attrs() allocated a struct page table for mtk_gem, so clear
>  	 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
>  	 */
> -	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>  	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
>  
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 1dee0d18abbb..8aff3ae909af 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -1012,7 +1012,7 @@ static int msm_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct
>  {
>  	struct msm_gem_object *msm_obj = to_msm_bo(obj);
>  
> -	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_page_prot = msm_gem_pgprot(msm_obj, vm_get_page_prot(vma->vm_flags));
>  
>  	return 0;
> diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
> index cf571796fd26..9c0e7d6a3784 100644
> --- a/drivers/gpu/drm/omapdrm/omap_gem.c
> +++ b/drivers/gpu/drm/omapdrm/omap_gem.c
> @@ -543,8 +543,7 @@ int omap_gem_mmap_obj(struct drm_gem_object *obj,
>  {
>  	struct omap_gem_object *omap_obj = to_omap_bo(obj);
>  
> -	vma->vm_flags &= ~VM_PFNMAP;
> -	vma->vm_flags |= VM_MIXEDMAP;
> +	mod_vm_flags(vma, VM_MIXEDMAP, VM_PFNMAP);
>  
>  	if (omap_obj->flags & OMAP_BO_WC) {
>  		vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 6edb7c52cb3d..735b64bbdcf2 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -251,8 +251,7 @@ static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
>  	 * We allocated a struct page table for rk_obj, so clear
>  	 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
>  	 */
> -	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> -	vma->vm_flags &= ~VM_PFNMAP;
> +	mod_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);
>  
>  	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>  	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
> diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
> index 979e7bc902f6..6cdc6c45ef27 100644
> --- a/drivers/gpu/drm/tegra/gem.c
> +++ b/drivers/gpu/drm/tegra/gem.c
> @@ -574,7 +574,7 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
>  		 * and set the vm_pgoff (used as a fake buffer offset by DRM)
>  		 * to 0 as we want to map the whole buffer.
>  		 */
> -		vma->vm_flags &= ~VM_PFNMAP;
> +		clear_vm_flags(vma, VM_PFNMAP);
>  		vma->vm_pgoff = 0;
>  
>  		err = dma_mmap_wc(gem->dev->dev, vma, bo->vaddr, bo->iova,
> @@ -588,8 +588,7 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
>  	} else {
>  		pgprot_t prot = vm_get_page_prot(vma->vm_flags);
>  
> -		vma->vm_flags |= VM_MIXEDMAP;
> -		vma->vm_flags &= ~VM_PFNMAP;
> +		mod_vm_flags(vma, VM_MIXEDMAP, VM_PFNMAP);
>  
>  		vma->vm_page_prot = pgprot_writecombine(prot);
>  	}
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> index 5a3e4b891377..0861e6e33964 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> @@ -468,8 +468,7 @@ int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo)
>  
>  	vma->vm_private_data = bo;
>  
> -	vma->vm_flags |= VM_PFNMAP;
> -	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_PFNMAP | VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
>  	return 0;
>  }
>  EXPORT_SYMBOL(ttm_bo_mmap_obj);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c b/drivers/gpu/drm/virtio/virtgpu_vram.c
> index 6b45b0429fef..5498a1dbef63 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vram.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vram.c
> @@ -46,7 +46,7 @@ static int virtio_gpu_vram_mmap(struct drm_gem_object *obj,
>  		return -EINVAL;
>  
>  	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> -	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_MIXEDMAP | VM_DONTEXPAND);
>  	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>  	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
>  	vma->vm_ops = &virtio_gpu_vram_vm_ops;
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
> index 265f7c48d856..8c8015528b6f 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
> @@ -97,7 +97,7 @@ int vmw_mmap(struct file *filp, struct vm_area_struct *vma)
>  
>  	/* Use VM_PFNMAP rather than VM_MIXEDMAP if not a COW mapping */
>  	if (!is_cow_mapping(vma->vm_flags))
> -		vma->vm_flags = (vma->vm_flags & ~VM_MIXEDMAP) | VM_PFNMAP;
> +		mod_vm_flags(vma, VM_PFNMAP, VM_MIXEDMAP);
>  
>  	ttm_bo_put(bo); /* release extra ref taken by ttm_bo_mmap_obj() */
>  
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4c95ebcdcc2d..18a93ad4aa1f 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -69,8 +69,7 @@ static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
>  	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>  	 * the whole buffer.
>  	 */
> -	vma->vm_flags &= ~VM_PFNMAP;
> -	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> +	mod_vm_flags(vma, VM_MIXEDMAP | VM_DONTEXPAND, VM_PFNMAP);
>  	vma->vm_pgoff = 0;
>  
>  	/*
> diff --git a/drivers/hsi/clients/cmt_speech.c b/drivers/hsi/clients/cmt_speech.c
> index 8069f795c864..952a31e742a1 100644
> --- a/drivers/hsi/clients/cmt_speech.c
> +++ b/drivers/hsi/clients/cmt_speech.c
> @@ -1264,7 +1264,7 @@ static int cs_char_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (vma_pages(vma) != 1)
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_DONTDUMP | VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_IO | VM_DONTDUMP | VM_DONTEXPAND);
>  	vma->vm_ops = &cs_char_vm_ops;
>  	vma->vm_private_data = file->private_data;
>  
> diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
> index 6c8215a47a60..a6f178bf3ded 100644
> --- a/drivers/hwtracing/intel_th/msu.c
> +++ b/drivers/hwtracing/intel_th/msu.c
> @@ -1659,7 +1659,7 @@ static int intel_th_msc_mmap(struct file *file, struct vm_area_struct *vma)
>  		atomic_dec(&msc->user_count);
>  
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTCOPY;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTCOPY);
>  	vma->vm_ops = &msc_mmap_ops;
>  	return ret;
>  }
> diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
> index 2712e699ba08..9a59e61c4194 100644
> --- a/drivers/hwtracing/stm/core.c
> +++ b/drivers/hwtracing/stm/core.c
> @@ -715,7 +715,7 @@ static int stm_char_mmap(struct file *file, struct vm_area_struct *vma)
>  	pm_runtime_get_sync(&stm->dev);
>  
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> -	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &stm_mmap_vmops;
>  	vm_iomap_memory(vma, phys, size);
>  
> diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
> index f5f9269fdc16..7294f2d33bc6 100644
> --- a/drivers/infiniband/hw/hfi1/file_ops.c
> +++ b/drivers/infiniband/hw/hfi1/file_ops.c
> @@ -403,7 +403,7 @@ static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
>  			ret = -EPERM;
>  			goto done;
>  		}
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  		addr = vma->vm_start;
>  		for (i = 0 ; i < uctxt->egrbufs.numbufs; i++) {
>  			memlen = uctxt->egrbufs.buffers[i].len;
> @@ -528,7 +528,7 @@ static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
>  		goto done;
>  	}
>  
> -	vma->vm_flags = flags;
> +	reset_vm_flags(vma, flags);
>  	hfi1_cdbg(PROC,
>  		  "%u:%u type:%u io/vf:%d/%d, addr:0x%llx, len:%lu(%lu), flags:0x%lx\n",
>  		    ctxt, subctxt, type, mapio, vmf, memaddr, memlen,
> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index c669ef6e47e7..538318c809b3 100644
> --- a/drivers/infiniband/hw/mlx5/main.c
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -2087,7 +2087,7 @@ static int mlx5_ib_mmap_clock_info_page(struct mlx5_ib_dev *dev,
>  
>  	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
>  		return -EPERM;
> -	vma->vm_flags &= ~VM_MAYWRITE;
> +	clear_vm_flags(vma, VM_MAYWRITE);
>  
>  	if (!dev->mdev->clock_info)
>  		return -EOPNOTSUPP;
> @@ -2311,7 +2311,7 @@ static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vm
>  
>  		if (vma->vm_flags & VM_WRITE)
>  			return -EPERM;
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  
>  		/* Don't expose to user-space information it shouldn't have */
>  		if (PAGE_SIZE > 4096)
> diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
> index 3937144b2ae5..16ef80df4b7f 100644
> --- a/drivers/infiniband/hw/qib/qib_file_ops.c
> +++ b/drivers/infiniband/hw/qib/qib_file_ops.c
> @@ -733,7 +733,7 @@ static int qib_mmap_mem(struct vm_area_struct *vma, struct qib_ctxtdata *rcd,
>  		}
>  
>  		/* don't allow them to later change with mprotect */
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  	}
>  
>  	pfn = virt_to_phys(kvaddr) >> PAGE_SHIFT;
> @@ -769,7 +769,7 @@ static int mmap_ureg(struct vm_area_struct *vma, struct qib_devdata *dd,
>  		phys = dd->physaddr + ureg;
>  		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  
> -		vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
> +		set_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND);
>  		ret = io_remap_pfn_range(vma, vma->vm_start,
>  					 phys >> PAGE_SHIFT,
>  					 vma->vm_end - vma->vm_start,
> @@ -810,8 +810,7 @@ static int mmap_piobufs(struct vm_area_struct *vma,
>  	 * don't allow them to later change to readable with mprotect (for when
>  	 * not initially mapped readable, as is normally the case)
>  	 */
> -	vma->vm_flags &= ~VM_MAYREAD;
> -	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
> +	mod_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND, VM_MAYREAD);
>  
>  	/* We used PAT if wc_cookie == 0 */
>  	if (!dd->wc_cookie)
> @@ -852,7 +851,7 @@ static int mmap_rcvegrbufs(struct vm_area_struct *vma,
>  		goto bail;
>  	}
>  	/* don't allow them to later change to writable with mprotect */
> -	vma->vm_flags &= ~VM_MAYWRITE;
> +	clear_vm_flags(vma, VM_MAYWRITE);
>  
>  	start = vma->vm_start;
>  
> @@ -944,7 +943,7 @@ static int mmap_kvaddr(struct vm_area_struct *vma, u64 pgaddr,
>  		 * Don't allow permission to later change to writable
>  		 * with mprotect.
>  		 */
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  	} else
>  		goto bail;
>  	len = vma->vm_end - vma->vm_start;
> @@ -955,7 +954,7 @@ static int mmap_kvaddr(struct vm_area_struct *vma, u64 pgaddr,
>  
>  	vma->vm_pgoff = (unsigned long) addr >> PAGE_SHIFT;
>  	vma->vm_ops = &qib_file_vm_ops;
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	ret = 1;
>  
>  bail:
> diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> index 6e8c4fbb8083..6f9237c2a26b 100644
> --- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> +++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> @@ -672,7 +672,7 @@ int usnic_ib_mmap(struct ib_ucontext *context,
>  	usnic_dbg("\n");
>  
>  	us_ibdev = to_usdev(context->device);
> -	vma->vm_flags |= VM_IO;
> +	set_vm_flags(vma, VM_IO);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	vfid = vma->vm_pgoff;
>  	usnic_dbg("Page Offset %lu PAGE_SHIFT %u VFID %u\n",
> diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> index 19176583dbde..7f1b7b5dd3f4 100644
> --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> @@ -408,7 +408,7 @@ int pvrdma_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
>  	}
>  
>  	/* Map UAR to kernel space, VM_LOCKED? */
> -	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	if (io_remap_pfn_range(vma, start, context->uar.pfn, size,
>  			       vma->vm_page_prot))
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> index 5f1175f8b349..e66ae399749e 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> @@ -293,7 +293,7 @@ static int vb2_dc_mmap(void *buf_priv, struct vm_area_struct *vma)
>  		return ret;
>  	}
>  
> -	vma->vm_flags		|= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_private_data	= &buf->handler;
>  	vma->vm_ops		= &vb2_common_vm_ops;
>  
> diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> index 959b45beb1f3..edb47240ec17 100644
> --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> @@ -185,7 +185,7 @@ static int vb2_vmalloc_mmap(void *buf_priv, struct vm_area_struct *vma)
>  	/*
>  	 * Make sure that vm_areas for 2 buffers won't be merged together
>  	 */
> -	vma->vm_flags		|= VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_DONTEXPAND);
>  
>  	/*
>  	 * Use common vm_area operations to track buffer refcount.
> diff --git a/drivers/media/v4l2-core/videobuf-dma-contig.c b/drivers/media/v4l2-core/videobuf-dma-contig.c
> index f2c439359557..c030823185ba 100644
> --- a/drivers/media/v4l2-core/videobuf-dma-contig.c
> +++ b/drivers/media/v4l2-core/videobuf-dma-contig.c
> @@ -314,7 +314,7 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
>  	}
>  
>  	vma->vm_ops = &videobuf_vm_ops;
> -	vma->vm_flags |= VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_DONTEXPAND);
>  	vma->vm_private_data = map;
>  
>  	dev_dbg(q->dev, "mmap %p: q=%p %08lx-%08lx (%lx) pgoff %08lx buf %d\n",
> diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c
> index 234e9f647c96..9adac4875f29 100644
> --- a/drivers/media/v4l2-core/videobuf-dma-sg.c
> +++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
> @@ -630,8 +630,8 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
>  	map->count    = 1;
>  	map->q        = q;
>  	vma->vm_ops   = &videobuf_vm_ops;
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> -	vma->vm_flags &= ~VM_IO; /* using shared anonymous pages */
> +	/* using shared anonymous pages */
> +	mod_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_IO);
>  	vma->vm_private_data = map;
>  	dprintk(1, "mmap %p: q=%p %08lx-%08lx pgoff %08lx bufs %d-%d\n",
>  		map, q, vma->vm_start, vma->vm_end, vma->vm_pgoff, first, last);
> diff --git a/drivers/media/v4l2-core/videobuf-vmalloc.c b/drivers/media/v4l2-core/videobuf-vmalloc.c
> index 9b2443720ab0..48d439ccd414 100644
> --- a/drivers/media/v4l2-core/videobuf-vmalloc.c
> +++ b/drivers/media/v4l2-core/videobuf-vmalloc.c
> @@ -247,7 +247,7 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
>  	}
>  
>  	vma->vm_ops          = &videobuf_vm_ops;
> -	vma->vm_flags       |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_private_data = map;
>  
>  	dprintk(1, "mmap %p: q=%p %08lx-%08lx (%lx) pgoff %08lx buf %d\n",
> diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
> index acaa44809c58..17562e4efcb2 100644
> --- a/drivers/misc/cxl/context.c
> +++ b/drivers/misc/cxl/context.c
> @@ -220,7 +220,7 @@ int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
>  	pr_devel("%s: mmio physical: %llx pe: %i master:%i\n", __func__,
>  		 ctx->psn_phys, ctx->pe , ctx->master);
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	vma->vm_ops = &cxl_mmap_vmops;
>  	return 0;
> diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
> index 5e9ae7600d75..ad8eae764b9b 100644
> --- a/drivers/misc/habanalabs/common/memory.c
> +++ b/drivers/misc/habanalabs/common/memory.c
> @@ -2082,7 +2082,7 @@ static int hl_ts_mmap(struct hl_mmap_mem_buf *buf, struct vm_area_struct *vma, v
>  {
>  	struct hl_ts_buff *ts_buff = buf->private;
>  
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_DONTCOPY | VM_NORESERVE;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP | VM_DONTCOPY | VM_NORESERVE);
>  	return remap_vmalloc_range(vma, ts_buff->user_buff_address, 0);
>  }
>  
> diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
> index 9f5e208701ba..4186f04da224 100644
> --- a/drivers/misc/habanalabs/gaudi/gaudi.c
> +++ b/drivers/misc/habanalabs/gaudi/gaudi.c
> @@ -4236,8 +4236,8 @@ static int gaudi_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
>  {
>  	int rc;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
> -			VM_DONTCOPY | VM_NORESERVE;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
> +			VM_DONTCOPY | VM_NORESERVE);
>  
>  	rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr,
>  				(dma_addr - HOST_PHYS_BASE), size);
> diff --git a/drivers/misc/habanalabs/gaudi2/gaudi2.c b/drivers/misc/habanalabs/gaudi2/gaudi2.c
> index e793fb2bdcbe..7311c3053944 100644
> --- a/drivers/misc/habanalabs/gaudi2/gaudi2.c
> +++ b/drivers/misc/habanalabs/gaudi2/gaudi2.c
> @@ -5538,8 +5538,8 @@ static int gaudi2_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
>  {
>  	int rc;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
> -			VM_DONTCOPY | VM_NORESERVE;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
> +			VM_DONTCOPY | VM_NORESERVE);
>  
>  #ifdef _HAS_DMA_MMAP_COHERENT
>  
> @@ -10116,8 +10116,8 @@ static int gaudi2_block_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
>  
>  	address = pci_resource_start(hdev->pdev, SRAM_CFG_BAR_ID) + offset_in_bar;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
> -			VM_DONTCOPY | VM_NORESERVE;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
> +			VM_DONTCOPY | VM_NORESERVE);
>  
>  	rc = remap_pfn_range(vma, vma->vm_start, address >> PAGE_SHIFT,
>  			block_size, vma->vm_page_prot);
> diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
> index 0f083fcf81a6..5e2aaa26ea29 100644
> --- a/drivers/misc/habanalabs/goya/goya.c
> +++ b/drivers/misc/habanalabs/goya/goya.c
> @@ -2880,8 +2880,8 @@ static int goya_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
>  {
>  	int rc;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
> -			VM_DONTCOPY | VM_NORESERVE;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP |
> +			VM_DONTCOPY | VM_NORESERVE);
>  
>  	rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr,
>  				(dma_addr - HOST_PHYS_BASE), size);
> diff --git a/drivers/misc/ocxl/context.c b/drivers/misc/ocxl/context.c
> index 9eb0d93b01c6..e6f941248e93 100644
> --- a/drivers/misc/ocxl/context.c
> +++ b/drivers/misc/ocxl/context.c
> @@ -180,7 +180,7 @@ static int check_mmap_afu_irq(struct ocxl_context *ctx,
>  	if ((vma->vm_flags & VM_READ) || (vma->vm_flags & VM_EXEC) ||
>  		!(vma->vm_flags & VM_WRITE))
>  		return -EINVAL;
> -	vma->vm_flags &= ~(VM_MAYREAD | VM_MAYEXEC);
> +	clear_vm_flags(vma, VM_MAYREAD | VM_MAYEXEC);
>  	return 0;
>  }
>  
> @@ -204,7 +204,7 @@ int ocxl_context_mmap(struct ocxl_context *ctx, struct vm_area_struct *vma)
>  	if (rc)
>  		return rc;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	vma->vm_ops = &ocxl_vmops;
>  	return 0;
> diff --git a/drivers/misc/ocxl/sysfs.c b/drivers/misc/ocxl/sysfs.c
> index 25c78df8055d..9398246cac79 100644
> --- a/drivers/misc/ocxl/sysfs.c
> +++ b/drivers/misc/ocxl/sysfs.c
> @@ -134,7 +134,7 @@ static int global_mmio_mmap(struct file *filp, struct kobject *kobj,
>  		(afu->config.global_mmio_size >> PAGE_SHIFT))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	vma->vm_ops = &global_mmio_vmops;
>  	vma->vm_private_data = afu;
> diff --git a/drivers/misc/open-dice.c b/drivers/misc/open-dice.c
> index 9dda47b3fd70..61b4747270aa 100644
> --- a/drivers/misc/open-dice.c
> +++ b/drivers/misc/open-dice.c
> @@ -95,12 +95,12 @@ static int open_dice_mmap(struct file *filp, struct vm_area_struct *vma)
>  		if (vma->vm_flags & VM_WRITE)
>  			return -EPERM;
>  		/* Ensure userspace cannot acquire VM_WRITE later. */
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  	}
>  
>  	/* Create write-combine mapping so all clients observe a wipe. */
>  	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> -	vma->vm_flags |= VM_DONTCOPY | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTCOPY | VM_DONTDUMP);
>  	return vm_iomap_memory(vma, drvdata->rmem->base, drvdata->rmem->size);
>  }
>  
> diff --git a/drivers/misc/sgi-gru/grufile.c b/drivers/misc/sgi-gru/grufile.c
> index 7ffcfc0bb587..8b777286d3b2 100644
> --- a/drivers/misc/sgi-gru/grufile.c
> +++ b/drivers/misc/sgi-gru/grufile.c
> @@ -101,8 +101,8 @@ static int gru_file_mmap(struct file *file, struct vm_area_struct *vma)
>  				vma->vm_end & (GRU_GSEG_PAGESIZE - 1))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_LOCKED |
> -			 VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_LOCKED |
> +			 VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_page_prot = PAGE_SHARED;
>  	vma->vm_ops = &gru_vm_ops;
>  
> diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
> index 905eff1f840e..f57e91cdb0f6 100644
> --- a/drivers/misc/uacce/uacce.c
> +++ b/drivers/misc/uacce/uacce.c
> @@ -229,7 +229,7 @@ static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
>  	if (!qfr)
>  		return -ENOMEM;
>  
> -	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_WIPEONFORK;
> +	set_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_WIPEONFORK);
>  	vma->vm_ops = &uacce_vm_ops;
>  	vma->vm_private_data = q;
>  	qfr->type = type;
> diff --git a/drivers/sbus/char/oradax.c b/drivers/sbus/char/oradax.c
> index 21b7cb6e7e70..a096734daad0 100644
> --- a/drivers/sbus/char/oradax.c
> +++ b/drivers/sbus/char/oradax.c
> @@ -389,7 +389,7 @@ static int dax_devmap(struct file *f, struct vm_area_struct *vma)
>  	/* completion area is mapped read-only for user */
>  	if (vma->vm_flags & VM_WRITE)
>  		return -EPERM;
> -	vma->vm_flags &= ~VM_MAYWRITE;
> +	clear_vm_flags(vma, VM_MAYWRITE);
>  
>  	if (remap_pfn_range(vma, vma->vm_start, ctx->ca_buf_ra >> PAGE_SHIFT,
>  			    len, vma->vm_page_prot))
> diff --git a/drivers/scsi/cxlflash/ocxl_hw.c b/drivers/scsi/cxlflash/ocxl_hw.c
> index 631eda2d467e..d386c25c2699 100644
> --- a/drivers/scsi/cxlflash/ocxl_hw.c
> +++ b/drivers/scsi/cxlflash/ocxl_hw.c
> @@ -1167,7 +1167,7 @@ static int afu_mmap(struct file *file, struct vm_area_struct *vma)
>  	    (ctx->psn_size >> PAGE_SHIFT))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	vma->vm_ops = &ocxlflash_vmops;
>  	return 0;
> diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
> index ff9854f59964..7438adfe3bdc 100644
> --- a/drivers/scsi/sg.c
> +++ b/drivers/scsi/sg.c
> @@ -1288,7 +1288,7 @@ sg_mmap(struct file *filp, struct vm_area_struct *vma)
>  	}
>  
>  	sfp->mmap_called = 1;
> -	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_private_data = sfp;
>  	vma->vm_ops = &sg_mmap_vm_ops;
>  out:
> diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> index 5e53eed8ae95..df1c944e5058 100644
> --- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> +++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> @@ -1072,7 +1072,7 @@ int hmm_bo_mmap(struct vm_area_struct *vma, struct hmm_buffer_object *bo)
>  	vma->vm_private_data = bo;
>  
>  	vma->vm_ops = &hmm_bo_vm_ops;
> -	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
>  
>  	/*
>  	 * call hmm_bo_vm_open explicitly.
> diff --git a/drivers/staging/media/deprecated/meye/meye.c b/drivers/staging/media/deprecated/meye/meye.c
> index 5d87efd9b95c..2505e64d7119 100644
> --- a/drivers/staging/media/deprecated/meye/meye.c
> +++ b/drivers/staging/media/deprecated/meye/meye.c
> @@ -1476,8 +1476,8 @@ static int meye_mmap(struct file *file, struct vm_area_struct *vma)
>  	}
>  
>  	vma->vm_ops = &meye_vm_ops;
> -	vma->vm_flags &= ~VM_IO;	/* not I/O memory */
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	/* not I/O memory */
> +	mod_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_IO);
>  	vma->vm_private_data = (void *) (offset / gbufsize);
>  	meye_vm_open(vma);
>  
> diff --git a/drivers/staging/media/deprecated/stkwebcam/stk-webcam.c b/drivers/staging/media/deprecated/stkwebcam/stk-webcam.c
> index 787edb3d47c2..196d1034f104 100644
> --- a/drivers/staging/media/deprecated/stkwebcam/stk-webcam.c
> +++ b/drivers/staging/media/deprecated/stkwebcam/stk-webcam.c
> @@ -779,7 +779,7 @@ static int v4l_stk_mmap(struct file *fp, struct vm_area_struct *vma)
>  	ret = remap_vmalloc_range(vma, sbuf->buffer, 0);
>  	if (ret)
>  		return ret;
> -	vma->vm_flags |= VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_DONTEXPAND);
>  	vma->vm_private_data = sbuf;
>  	vma->vm_ops = &stk_v4l_vm_ops;
>  	sbuf->v4lbuf.flags |= V4L2_BUF_FLAG_MAPPED;
> diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
> index 2940559c3086..9fd64259904c 100644
> --- a/drivers/target/target_core_user.c
> +++ b/drivers/target/target_core_user.c
> @@ -1928,7 +1928,7 @@ static int tcmu_mmap(struct uio_info *info, struct vm_area_struct *vma)
>  {
>  	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
>  
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &tcmu_vm_ops;
>  
>  	vma->vm_private_data = udev;
> diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
> index 43afbb7c5ab9..08802744f3b7 100644
> --- a/drivers/uio/uio.c
> +++ b/drivers/uio/uio.c
> @@ -713,7 +713,7 @@ static const struct vm_operations_struct uio_logical_vm_ops = {
>  
>  static int uio_mmap_logical(struct vm_area_struct *vma)
>  {
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &uio_logical_vm_ops;
>  	return 0;
>  }
> diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
> index 837f3e57f580..d9aefa259883 100644
> --- a/drivers/usb/core/devio.c
> +++ b/drivers/usb/core/devio.c
> @@ -279,8 +279,7 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
>  		}
>  	}
>  
> -	vma->vm_flags |= VM_IO;
> -	vma->vm_flags |= (VM_DONTEXPAND | VM_DONTDUMP);
> +	set_vm_flags(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &usbdev_vm_ops;
>  	vma->vm_private_data = usbm;
>  
> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index 094e812e9e69..9b2d48a65fdf 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1272,8 +1272,7 @@ static int mon_bin_mmap(struct file *filp, struct vm_area_struct *vma)
>  	if (vma->vm_flags & VM_WRITE)
>  		return -EPERM;
>  
> -	vma->vm_flags &= ~VM_MAYWRITE;
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	mod_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_MAYWRITE);
>  	vma->vm_private_data = filp->private_data;
>  	mon_bin_vma_open(vma);
>  	return 0;
> diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> index e682bc7ee6c9..39dcce2e455b 100644
> --- a/drivers/vdpa/vdpa_user/iova_domain.c
> +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> @@ -512,7 +512,7 @@ static int vduse_domain_mmap(struct file *file, struct vm_area_struct *vma)
>  {
>  	struct vduse_iova_domain *domain = file->private_data;
>  
> -	vma->vm_flags |= VM_DONTDUMP | VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_DONTDUMP | VM_DONTEXPAND);
>  	vma->vm_private_data = domain;
>  	vma->vm_ops = &vduse_domain_mmap_ops;
>  
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index 26a541cc64d1..86eb3fc9ffb4 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -1799,7 +1799,7 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
>  	 * See remap_pfn_range(), called from vfio_pci_fault() but we can't
>  	 * change vm_flags within the fault handler.  Set them now.
>  	 */
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &vfio_pci_mmap_ops;
>  
>  	return 0;
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index ec32f785dfde..7b81994a7d02 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -1315,7 +1315,7 @@ static int vhost_vdpa_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (vma->vm_end - vma->vm_start != notify.size)
>  		return -ENOTSUPP;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &vhost_vdpa_vm_ops;
>  	return 0;
>  }
> diff --git a/drivers/video/fbdev/68328fb.c b/drivers/video/fbdev/68328fb.c
> index 7db03ed77c76..a794a740af10 100644
> --- a/drivers/video/fbdev/68328fb.c
> +++ b/drivers/video/fbdev/68328fb.c
> @@ -391,7 +391,7 @@ static int mc68x328fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  #ifndef MMU
>  	/* this is uClinux (no MMU) specific code */
>  
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_start = videomemory;
>  
>  	return 0;
> diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
> index c730253ab85c..af0bfaa2d014 100644
> --- a/drivers/video/fbdev/core/fb_defio.c
> +++ b/drivers/video/fbdev/core/fb_defio.c
> @@ -232,9 +232,9 @@ static const struct address_space_operations fb_deferred_io_aops = {
>  int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  {
>  	vma->vm_ops = &fb_deferred_io_vm_ops;
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	if (!(info->flags & FBINFO_VIRTFB))
> -		vma->vm_flags |= VM_IO;
> +		set_vm_flags(vma, VM_IO);
>  	vma->vm_private_data = info;
>  	return 0;
>  }
> diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
> index a15729beb9d1..ee4a8958dc68 100644
> --- a/drivers/xen/gntalloc.c
> +++ b/drivers/xen/gntalloc.c
> @@ -525,7 +525,7 @@ static int gntalloc_mmap(struct file *filp, struct vm_area_struct *vma)
>  
>  	vma->vm_private_data = vm_priv;
>  
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  
>  	vma->vm_ops = &gntalloc_vmops;
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 4d9a3050de6a..6d5bb1ebb661 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -1055,10 +1055,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
>  
>  	vma->vm_ops = &gntdev_vmops;
>  
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_MIXEDMAP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP | VM_MIXEDMAP);
>  
>  	if (use_ptemod)
> -		vma->vm_flags |= VM_DONTCOPY;
> +		set_vm_flags(vma, VM_DONTCOPY);
>  
>  	vma->vm_private_data = map;
>  	if (map->flags) {
> diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
> index dd5bbb6e1b6b..037547918630 100644
> --- a/drivers/xen/privcmd-buf.c
> +++ b/drivers/xen/privcmd-buf.c
> @@ -156,7 +156,7 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
>  	vma_priv->file_priv = file_priv;
>  	vma_priv->users = 1;
>  
> -	vma->vm_flags |= VM_IO | VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_IO | VM_DONTEXPAND);
>  	vma->vm_ops = &privcmd_buf_vm_ops;
>  	vma->vm_private_data = vma_priv;
>  
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 1edf45ee9890..4c8cfc6f86d8 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -934,8 +934,8 @@ static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
>  {
>  	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
>  	 * how to recreate these mappings */
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTCOPY |
> -			 VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTCOPY |
> +			 VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &privcmd_vm_ops;
>  	vma->vm_private_data = NULL;
>  
> diff --git a/fs/aio.c b/fs/aio.c
> index 650cd795aa7e..4106b3209e5e 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -392,7 +392,7 @@ static const struct vm_operations_struct aio_ring_vm_ops = {
>  
>  static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
>  {
> -	vma->vm_flags |= VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_DONTEXPAND);
>  	vma->vm_ops = &aio_ring_vm_ops;
>  	return 0;
>  }
> diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> index 50e4e060db68..aa8695e685aa 100644
> --- a/fs/cramfs/inode.c
> +++ b/fs/cramfs/inode.c
> @@ -408,7 +408,7 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
>  		 * unpopulated ptes via cramfs_read_folio().
>  		 */
>  		int i;
> -		vma->vm_flags |= VM_MIXEDMAP;
> +		set_vm_flags(vma, VM_MIXEDMAP);
>  		for (i = 0; i < pages && !ret; i++) {
>  			vm_fault_t vmf;
>  			unsigned long off = i * PAGE_SIZE;
> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
> index f57f921683d7..e6413ced2bb1 100644
> --- a/fs/erofs/data.c
> +++ b/fs/erofs/data.c
> @@ -429,7 +429,7 @@ static int erofs_file_mmap(struct file *file, struct vm_area_struct *vma)
>  		return -EINVAL;
>  
>  	vma->vm_ops = &erofs_dax_vm_ops;
> -	vma->vm_flags |= VM_HUGEPAGE;
> +	set_vm_flags(vma, VM_HUGEPAGE);
>  	return 0;
>  }
>  #else
> diff --git a/fs/exec.c b/fs/exec.c
> index c0df813d2b45..ed499e850970 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -270,7 +270,7 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
>  	BUILD_BUG_ON(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP);
>  	vma->vm_end = STACK_TOP_MAX;
>  	vma->vm_start = vma->vm_end - PAGE_SIZE;
> -	vma->vm_flags = VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP;
> +	init_vm_flags(vma, VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP);
>  	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>  
>  	err = insert_vm_struct(mm, vma);
> @@ -834,7 +834,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
>  	}
>  
>  	/* mprotect_fixup is overkill to remove the temporary stack flags */
> -	vma->vm_flags &= ~VM_STACK_INCOMPLETE_SETUP;
> +	clear_vm_flags(vma, VM_STACK_INCOMPLETE_SETUP);
>  
>  	stack_expand = 131072UL; /* randomly 32*4k (or 2*64k) pages */
>  	stack_size = vma->vm_end - vma->vm_start;
> diff --git a/fs/ext4/file.c b/fs/ext4/file.c
> index 7ac0a81bd371..baeb385b07c7 100644
> --- a/fs/ext4/file.c
> +++ b/fs/ext4/file.c
> @@ -801,7 +801,7 @@ static int ext4_file_mmap(struct file *file, struct vm_area_struct *vma)
>  	file_accessed(file);
>  	if (IS_DAX(file_inode(file))) {
>  		vma->vm_ops = &ext4_dax_vm_ops;
> -		vma->vm_flags |= VM_HUGEPAGE;
> +		set_vm_flags(vma, VM_HUGEPAGE);
>  	} else {
>  		vma->vm_ops = &ext4_file_vm_ops;
>  	}
> diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
> index e23e802a8013..599969edc869 100644
> --- a/fs/fuse/dax.c
> +++ b/fs/fuse/dax.c
> @@ -860,7 +860,7 @@ int fuse_dax_mmap(struct file *file, struct vm_area_struct *vma)
>  {
>  	file_accessed(file);
>  	vma->vm_ops = &fuse_dax_vm_ops;
> -	vma->vm_flags |= VM_MIXEDMAP | VM_HUGEPAGE;
> +	set_vm_flags(vma, VM_MIXEDMAP | VM_HUGEPAGE);
>  	return 0;
>  }
>  
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 48f1a8ad2243..b0f59d8a7b09 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -132,7 +132,7 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
>  	 * way when do_mmap unwinds (may be important on powerpc
>  	 * and ia64).
>  	 */
> -	vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_HUGETLB | VM_DONTEXPAND);
>  	vma->vm_ops = &hugetlb_vm_ops;
>  
>  	ret = seal_check_future_write(info->seals, vma);
> @@ -811,7 +811,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>  	 * as input to create an allocation policy.
>  	 */
>  	vma_init(&pseudo_vma, mm);
> -	pseudo_vma.vm_flags = (VM_HUGETLB | VM_MAYSHARE | VM_SHARED);
> +	init_vm_flags(&pseudo_vma, VM_HUGETLB | VM_MAYSHARE | VM_SHARED);
>  	pseudo_vma.vm_file = file;
>  
>  	for (index = start; index < end; index++) {
> diff --git a/fs/orangefs/file.c b/fs/orangefs/file.c
> index 167fa43b24f9..0f668db6bcf3 100644
> --- a/fs/orangefs/file.c
> +++ b/fs/orangefs/file.c
> @@ -389,8 +389,7 @@ static int orangefs_file_mmap(struct file *file, struct vm_area_struct *vma)
>  		     "orangefs_file_mmap: called on %pD\n", file);
>  
>  	/* set the sequential readahead hint */
> -	vma->vm_flags |= VM_SEQ_READ;
> -	vma->vm_flags &= ~VM_RAND_READ;
> +	mod_vm_flags(vma, VM_SEQ_READ, VM_RAND_READ);
>  
>  	file_accessed(file);
>  	vma->vm_ops = &orangefs_file_vm_ops;
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index f937c4cd0214..9018645a359c 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1301,7 +1301,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  			for_each_vma(vmi, vma) {
>  				if (!(vma->vm_flags & VM_SOFTDIRTY))
>  					continue;
> -				vma->vm_flags &= ~VM_SOFTDIRTY;
> +				clear_vm_flags(vma, VM_SOFTDIRTY);
>  				vma_set_page_prot(vma);
>  			}
>  
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 09a81e4b1273..858e4e804f85 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -582,8 +582,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
>  	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
>  		return -EPERM;
>  
> -	vma->vm_flags &= ~(VM_MAYWRITE | VM_MAYEXEC);
> -	vma->vm_flags |= VM_MIXEDMAP;
> +	mod_vm_flags(vma, VM_MIXEDMAP, VM_MAYWRITE | VM_MAYEXEC);
>  	vma->vm_ops = &vmcore_mmap_ops;
>  
>  	len = 0;
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index f3c75c6222de..40030059db53 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -113,7 +113,7 @@ static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
>  {
>  	const bool uffd_wp_changed = (vma->vm_flags ^ flags) & VM_UFFD_WP;
>  
> -	vma->vm_flags = flags;
> +	reset_vm_flags(vma, flags);
>  	/*
>  	 * For shared mappings, we want to enable writenotify while
>  	 * userfaultfd-wp is enabled (see vma_wants_writenotify()). We'll simply
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 595a5bcf46b9..bf777fed0dd4 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1429,7 +1429,7 @@ xfs_file_mmap(
>  	file_accessed(file);
>  	vma->vm_ops = &xfs_file_vm_ops;
>  	if (IS_DAX(inode))
> -		vma->vm_flags |= VM_HUGEPAGE;
> +		set_vm_flags(vma, VM_HUGEPAGE);
>  	return 0;
>  }
>  
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index da62bdd627bf..55335edd1373 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3652,7 +3652,7 @@ static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
>  		 * VM_MAYWRITE as we still want them to be COW-writable.
>  		 */
>  		if (vma->vm_flags & VM_SHARED)
> -			vma->vm_flags &= ~(VM_MAYWRITE);
> +			clear_vm_flags(vma, VM_MAYWRITE);
>  	}
>  
>  	return 0;
> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> index 80f4b4d88aaf..d2c967cc2873 100644
> --- a/kernel/bpf/ringbuf.c
> +++ b/kernel/bpf/ringbuf.c
> @@ -269,7 +269,7 @@ static int ringbuf_map_mmap_kern(struct bpf_map *map, struct vm_area_struct *vma
>  		if (vma->vm_pgoff != 0 || vma->vm_end - vma->vm_start != PAGE_SIZE)
>  			return -EPERM;
>  	} else {
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  	}
>  	/* remap_vmalloc_range() checks size and offset constraints */
>  	return remap_vmalloc_range(vma, rb_map->rb,
> @@ -290,7 +290,7 @@ static int ringbuf_map_mmap_user(struct bpf_map *map, struct vm_area_struct *vma
>  			 */
>  			return -EPERM;
>  	} else {
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  	}
>  	/* remap_vmalloc_range() checks size and offset constraints */
>  	return remap_vmalloc_range(vma, rb_map->rb, vma->vm_pgoff + RINGBUF_PGOFF);
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 64131f88c553..db19094c7ac7 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -882,10 +882,10 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
>  	/* set default open/close callbacks */
>  	vma->vm_ops = &bpf_map_default_vmops;
>  	vma->vm_private_data = map;
> -	vma->vm_flags &= ~VM_MAYEXEC;
> +	clear_vm_flags(vma, VM_MAYEXEC);
>  	if (!(vma->vm_flags & VM_WRITE))
>  		/* disallow re-mapping with PROT_WRITE */
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  
>  	err = map->ops->map_mmap(map, vma);
>  	if (err)
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index d56328e5080e..6745460dcf49 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -6573,7 +6573,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>  	 * Since pinned accounting is per vm we cannot allow fork() to copy our
>  	 * vma.
>  	 */
> -	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &perf_mmap_vmops;
>  
>  	if (event->pmu->event_mapped)
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index e5cd09fd8a05..27fc1e26e1e1 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -489,7 +489,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
>  		goto exit;
>  	}
>  	spin_unlock_irqrestore(&kcov->lock, flags);
> -	vma->vm_flags |= VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_DONTEXPAND);
>  	for (off = 0; off < size; off += PAGE_SIZE) {
>  		page = vmalloc_to_page(kcov->area + off);
>  		res = vm_insert_page(vma, vma->vm_start + off, page);
> diff --git a/kernel/relay.c b/kernel/relay.c
> index ef12532168d9..085aa8707bc2 100644
> --- a/kernel/relay.c
> +++ b/kernel/relay.c
> @@ -91,7 +91,7 @@ static int relay_mmap_buf(struct rchan_buf *buf, struct vm_area_struct *vma)
>  		return -EINVAL;
>  
>  	vma->vm_ops = &relay_file_mmap_ops;
> -	vma->vm_flags |= VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_DONTEXPAND);
>  	vma->vm_private_data = buf;
>  
>  	return 0;
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 7db6622f8293..74941a9784b4 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -176,7 +176,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
>  	/*
>  	 * vm_flags is protected by the mmap_lock held in write mode.
>  	 */
> -	vma->vm_flags = new_flags;
> +	reset_vm_flags(vma, new_flags);
>  	if (!vma->vm_file || vma_is_anon_shmem(vma)) {
>  		error = replace_anon_vma_name(vma, anon_name);
>  		if (error)
> diff --git a/mm/memory.c b/mm/memory.c
> index ec833a2e0601..d6902065e558 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1928,7 +1928,7 @@ int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
>  	if (!(vma->vm_flags & VM_MIXEDMAP)) {
>  		BUG_ON(mmap_read_trylock(vma->vm_mm));
>  		BUG_ON(vma->vm_flags & VM_PFNMAP);
> -		vma->vm_flags |= VM_MIXEDMAP;
> +		set_vm_flags(vma, VM_MIXEDMAP);
>  	}
>  	/* Defer page refcount checking till we're about to map that page. */
>  	return insert_pages(vma, addr, pages, num, vma->vm_page_prot);
> @@ -1986,7 +1986,7 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
>  	if (!(vma->vm_flags & VM_MIXEDMAP)) {
>  		BUG_ON(mmap_read_trylock(vma->vm_mm));
>  		BUG_ON(vma->vm_flags & VM_PFNMAP);
> -		vma->vm_flags |= VM_MIXEDMAP;
> +		set_vm_flags(vma, VM_MIXEDMAP);
>  	}
>  	return insert_page(vma, addr, page, vma->vm_page_prot);
>  }
> @@ -2452,7 +2452,7 @@ int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
>  		vma->vm_pgoff = pfn;
>  	}
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  
>  	BUG_ON(addr >= end);
>  	pfn -= addr >> PAGE_SHIFT;
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 5c4fff93cd6b..5b2431221b4d 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -380,7 +380,7 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
>  	 */
>  	if (newflags & VM_LOCKED)
>  		newflags |= VM_IO;
> -	WRITE_ONCE(vma->vm_flags, newflags);
> +	reset_vm_flags(vma, newflags);
>  
>  	lru_add_drain();
>  	walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL);
> @@ -388,7 +388,7 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
>  
>  	if (newflags & VM_IO) {
>  		newflags &= ~VM_IO;
> -		WRITE_ONCE(vma->vm_flags, newflags);
> +		reset_vm_flags(vma, newflags);
>  	}
>  }
>  
> @@ -457,7 +457,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  
>  	if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {
>  		/* No work to do, and mlocking twice would be wrong */
> -		vma->vm_flags = newflags;
> +		reset_vm_flags(vma, newflags);
>  	} else {
>  		mlock_vma_pages_range(vma, start, end, newflags);
>  	}
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 323bd253b25a..2c6e9072e6a8 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2558,7 +2558,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  	vma_iter_set(&vmi, addr);
>  	vma->vm_start = addr;
>  	vma->vm_end = end;
> -	vma->vm_flags = vm_flags;
> +	init_vm_flags(vma, vm_flags);
>  	vma->vm_page_prot = vm_get_page_prot(vm_flags);
>  	vma->vm_pgoff = pgoff;
>  
> @@ -2686,7 +2686,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  	 * then new mapped in-place (which must be aimed as
>  	 * a completely new data area).
>  	 */
> -	vma->vm_flags |= VM_SOFTDIRTY;
> +	set_vm_flags(vma, VM_SOFTDIRTY);
>  
>  	vma_set_page_prot(vma);
>  
> @@ -2911,7 +2911,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  		init_vma_prep(&vp, vma);
>  		vma_prepare(&vp);
>  		vma->vm_end = addr + len;
> -		vma->vm_flags |= VM_SOFTDIRTY;
> +		set_vm_flags(vma, VM_SOFTDIRTY);
>  		vma_iter_store(vmi, vma);
>  
>  		vma_complete(&vp, vmi, mm);
> @@ -2928,7 +2928,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	vma->vm_start = addr;
>  	vma->vm_end = addr + len;
>  	vma->vm_pgoff = addr >> PAGE_SHIFT;
> -	vma->vm_flags = flags;
> +	init_vm_flags(vma, flags);
>  	vma->vm_page_prot = vm_get_page_prot(flags);
>  	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
>  		goto mas_store_fail;
> @@ -2940,7 +2940,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	mm->data_vm += len >> PAGE_SHIFT;
>  	if (flags & VM_LOCKED)
>  		mm->locked_vm += (len >> PAGE_SHIFT);
> -	vma->vm_flags |= VM_SOFTDIRTY;
> +	set_vm_flags(vma, VM_SOFTDIRTY);
>  	validate_mm(mm);
>  	return 0;
>  
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index cce6a0e58fb5..fba770d889ea 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -670,7 +670,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
>  	 * vm_flags and vm_page_prot are protected by the mmap_lock
>  	 * held in write mode.
>  	 */
> -	vma->vm_flags = newflags;
> +	reset_vm_flags(vma, newflags);
>  	if (vma_wants_manual_pte_write_upgrade(vma))
>  		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
>  	vma_set_page_prot(vma);
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 35db9752cb6a..0f3c78e8eea5 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -662,7 +662,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>  
>  	/* Conceal VM_ACCOUNT so old reservation is not undone */
>  	if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP)) {
> -		vma->vm_flags &= ~VM_ACCOUNT;
> +		clear_vm_flags(vma, VM_ACCOUNT);
>  		if (vma->vm_start < old_addr)
>  			account_start = vma->vm_start;
>  		if (vma->vm_end > old_addr + old_len)
> @@ -718,12 +718,12 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>  	/* Restore VM_ACCOUNT if one or two pieces of vma left */
>  	if (account_start) {
>  		vma = vma_prev(&vmi);
> -		vma->vm_flags |= VM_ACCOUNT;
> +		set_vm_flags(vma, VM_ACCOUNT);
>  	}
>  
>  	if (account_end) {
>  		vma = vma_next(&vmi);
> -		vma->vm_flags |= VM_ACCOUNT;
> +		set_vm_flags(vma, VM_ACCOUNT);
>  	}
>  
>  	return new_addr;
> diff --git a/mm/nommu.c b/mm/nommu.c
> index 9a166738909e..93d052b5a0c2 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -173,7 +173,7 @@ static void *__vmalloc_user_flags(unsigned long size, gfp_t flags)
>  		mmap_write_lock(current->mm);
>  		vma = find_vma(current->mm, (unsigned long)ret);
>  		if (vma)
> -			vma->vm_flags |= VM_USERMAP;
> +			set_vm_flags(vma, VM_USERMAP);
>  		mmap_write_unlock(current->mm);
>  	}
>  
> @@ -950,7 +950,8 @@ static int do_mmap_private(struct vm_area_struct *vma,
>  
>  	atomic_long_add(total, &mmap_pages_allocated);
>  
> -	region->vm_flags = vma->vm_flags |= VM_MAPPED_COPY;
> +	set_vm_flags(vma, VM_MAPPED_COPY);
> +	region->vm_flags = vma->vm_flags;
>  	region->vm_start = (unsigned long) base;
>  	region->vm_end   = region->vm_start + len;
>  	region->vm_top   = region->vm_start + (total << PAGE_SHIFT);
> @@ -1047,7 +1048,7 @@ unsigned long do_mmap(struct file *file,
>  	region->vm_flags = vm_flags;
>  	region->vm_pgoff = pgoff;
>  
> -	vma->vm_flags = vm_flags;
> +	init_vm_flags(vma, vm_flags);
>  	vma->vm_pgoff = pgoff;
>  
>  	if (file) {
> @@ -1111,7 +1112,7 @@ unsigned long do_mmap(struct file *file,
>  			vma->vm_end = start + len;
>  
>  			if (pregion->vm_flags & VM_MAPPED_COPY)
> -				vma->vm_flags |= VM_MAPPED_COPY;
> +				set_vm_flags(vma, VM_MAPPED_COPY);
>  			else {
>  				ret = do_mmap_shared_file(vma);
>  				if (ret < 0) {
> @@ -1601,7 +1602,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
>  	if (addr != (pfn << PAGE_SHIFT))
>  		return -EINVAL;
>  
> -	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>  	return 0;
>  }
>  EXPORT_SYMBOL(remap_pfn_range);
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index be3fff86ba00..236a1b6b4100 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -128,7 +128,7 @@ static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
>  		return -EAGAIN;
>  
> -	vma->vm_flags |= VM_LOCKED | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_LOCKED | VM_DONTDUMP);
>  	vma->vm_ops = &secretmem_vm_ops;
>  
>  	return 0;
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 9e1015cbad29..3d7fc7a979c6 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2304,7 +2304,7 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
>  		return ret;
>  
>  	/* arm64 - allow memory tagging on RAM-based files */
> -	vma->vm_flags |= VM_MTE_ALLOWED;
> +	set_vm_flags(vma, VM_MTE_ALLOWED);
>  
>  	file_accessed(file);
>  	/* This is anonymous shared memory if it is unlinked at the time of mmap */
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index dfde5324e480..8fccba7aa514 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3682,7 +3682,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
>  		size -= PAGE_SIZE;
>  	} while (size > 0);
>  
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  
>  	return 0;
>  }
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index f713c0422f0f..cfa2e8a92fcb 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -1890,10 +1890,10 @@ int tcp_mmap(struct file *file, struct socket *sock,
>  {
>  	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
>  		return -EPERM;
> -	vma->vm_flags &= ~(VM_MAYWRITE | VM_MAYEXEC);
> +	clear_vm_flags(vma, VM_MAYWRITE | VM_MAYEXEC);
>  
>  	/* Instruct vm_insert_page() to not mmap_read_lock(mm) */
> -	vma->vm_flags |= VM_MIXEDMAP;
> +	set_vm_flags(vma, VM_MIXEDMAP);
>  
>  	vma->vm_ops = &tcp_vm_ops;
>  	return 0;
> diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
> index 0a6894cdc54d..9037deb5979e 100644
> --- a/security/selinux/selinuxfs.c
> +++ b/security/selinux/selinuxfs.c
> @@ -262,7 +262,7 @@ static int sel_mmap_handle_status(struct file *filp,
>  	if (vma->vm_flags & VM_WRITE)
>  		return -EPERM;
>  	/* disallow mprotect() turns it into writable */
> -	vma->vm_flags &= ~VM_MAYWRITE;
> +	clear_vm_flags(vma, VM_MAYWRITE);
>  
>  	return remap_pfn_range(vma, vma->vm_start,
>  			       page_to_pfn(status),
> @@ -506,13 +506,13 @@ static int sel_mmap_policy(struct file *filp, struct vm_area_struct *vma)
>  {
>  	if (vma->vm_flags & VM_SHARED) {
>  		/* do not allow mprotect to make mapping writable */
> -		vma->vm_flags &= ~VM_MAYWRITE;
> +		clear_vm_flags(vma, VM_MAYWRITE);
>  
>  		if (vma->vm_flags & VM_WRITE)
>  			return -EACCES;
>  	}
>  
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &sel_mmap_policy_ops;
>  
>  	return 0;
> diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
> index ac2efeb63a39..52473e2acd07 100644
> --- a/sound/core/oss/pcm_oss.c
> +++ b/sound/core/oss/pcm_oss.c
> @@ -2910,7 +2910,7 @@ static int snd_pcm_oss_mmap(struct file *file, struct vm_area_struct *area)
>  	}
>  	/* set VM_READ access as well to fix memset() routines that do
>  	   reads before writes (to improve performance) */
> -	area->vm_flags |= VM_READ;
> +	set_vm_flags(area, VM_READ);
>  	if (substream == NULL)
>  		return -ENXIO;
>  	runtime = substream->runtime;
> diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
> index 9c122e757efe..f716bdb70afe 100644
> --- a/sound/core/pcm_native.c
> +++ b/sound/core/pcm_native.c
> @@ -3675,8 +3675,9 @@ static int snd_pcm_mmap_status(struct snd_pcm_substream *substream, struct file
>  		return -EINVAL;
>  	area->vm_ops = &snd_pcm_vm_ops_status;
>  	area->vm_private_data = substream;
> -	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> -	area->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
> +	mod_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP,
> +		     VM_WRITE | VM_MAYWRITE);
> +
>  	return 0;
>  }
>  
> @@ -3712,7 +3713,7 @@ static int snd_pcm_mmap_control(struct snd_pcm_substream *substream, struct file
>  		return -EINVAL;
>  	area->vm_ops = &snd_pcm_vm_ops_control;
>  	area->vm_private_data = substream;
> -	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP);
>  	return 0;
>  }
>  
> @@ -3828,7 +3829,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = {
>  int snd_pcm_lib_default_mmap(struct snd_pcm_substream *substream,
>  			     struct vm_area_struct *area)
>  {
> -	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP);
>  	if (!substream->ops->page &&
>  	    !snd_dma_buffer_mmap(snd_pcm_get_dma_buf(substream), area))
>  		return 0;
> diff --git a/sound/soc/pxa/mmp-sspa.c b/sound/soc/pxa/mmp-sspa.c
> index fb5a4390443f..fdd72d9bb46c 100644
> --- a/sound/soc/pxa/mmp-sspa.c
> +++ b/sound/soc/pxa/mmp-sspa.c
> @@ -404,7 +404,7 @@ static int mmp_pcm_mmap(struct snd_soc_component *component,
>  			struct snd_pcm_substream *substream,
>  			struct vm_area_struct *vma)
>  {
> -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	return remap_pfn_range(vma, vma->vm_start,
>  		substream->dma_buffer.addr >> PAGE_SHIFT,
> diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c
> index e558931cce16..b51db622a69b 100644
> --- a/sound/usb/usx2y/us122l.c
> +++ b/sound/usb/usx2y/us122l.c
> @@ -224,9 +224,9 @@ static int usb_stream_hwdep_mmap(struct snd_hwdep *hw,
>  	}
>  
>  	area->vm_ops = &usb_stream_hwdep_vm_ops;
> -	area->vm_flags |= VM_DONTDUMP;
> +	set_vm_flags(area, VM_DONTDUMP);
>  	if (!read)
> -		area->vm_flags |= VM_DONTEXPAND;
> +		set_vm_flags(area, VM_DONTEXPAND);
>  	area->vm_private_data = us122l;
>  	atomic_inc(&us122l->mmap_count);
>  out:
> diff --git a/sound/usb/usx2y/usX2Yhwdep.c b/sound/usb/usx2y/usX2Yhwdep.c
> index c29da0341bc5..3abe6d891f98 100644
> --- a/sound/usb/usx2y/usX2Yhwdep.c
> +++ b/sound/usb/usx2y/usX2Yhwdep.c
> @@ -61,7 +61,7 @@ static int snd_us428ctls_mmap(struct snd_hwdep *hw, struct file *filp, struct vm
>  	}
>  
>  	area->vm_ops = &us428ctls_vm_ops;
> -	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP);
>  	area->vm_private_data = hw->private_data;
>  	return 0;
>  }
> diff --git a/sound/usb/usx2y/usx2yhwdeppcm.c b/sound/usb/usx2y/usx2yhwdeppcm.c
> index 767a227d54da..22ce93b2fb24 100644
> --- a/sound/usb/usx2y/usx2yhwdeppcm.c
> +++ b/sound/usb/usx2y/usx2yhwdeppcm.c
> @@ -706,7 +706,7 @@ static int snd_usx2y_hwdep_pcm_mmap(struct snd_hwdep *hw, struct file *filp, str
>  		return -ENODEV;
>  
>  	area->vm_ops = &snd_usx2y_hwdep_pcm_vm_ops;
> -	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
> +	set_vm_flags(area, VM_DONTEXPAND | VM_DONTDUMP);
>  	area->vm_private_data = hw->private_data;
>  	return 0;
>  }
> -- 
> 2.39.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:24:51 2023
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Date: Thu, 26 Jan 2023 11:19:37 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 2/6] mm: replace VM_LOCKED_CLEAR_MASK with
 VM_LOCKED_MASK
Message-ID: <Y9JFqaE4n/eGoWWi@kernel.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-3-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-3-surenb@google.com>

On Wed, Jan 25, 2023 at 12:38:47AM -0800, Suren Baghdasaryan wrote:
> To simplify the usage of VM_LOCKED_CLEAR_MASK in clear_vm_flags(),
> replace it with the VM_LOCKED_MASK bitmask and convert all users.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  include/linux/mm.h | 4 ++--
>  kernel/fork.c      | 2 +-
>  mm/hugetlb.c       | 4 ++--
>  mm/mlock.c         | 6 +++---
>  mm/mmap.c          | 6 +++---
>  mm/mremap.c        | 2 +-
>  6 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b71f2809caac..da62bdd627bf 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -421,8 +421,8 @@ extern unsigned int kobjsize(const void *objp);
>  /* This mask defines which mm->def_flags a process can inherit its parent */
>  #define VM_INIT_DEF_MASK	VM_NOHUGEPAGE
>  
> -/* This mask is used to clear all the VMA flags used by mlock */
> -#define VM_LOCKED_CLEAR_MASK	(~(VM_LOCKED | VM_LOCKONFAULT))
> +/* This mask represents all the VMA flag bits used by mlock */
> +#define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
>  
>  /* Arch-specific flags to clear when updating VM flags on protection change */
>  #ifndef VM_ARCH_CLEAR
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 6683c1b0f460..03d472051236 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -669,7 +669,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>  			tmp->anon_vma = NULL;
>  		} else if (anon_vma_fork(tmp, mpnt))
>  			goto fail_nomem_anon_vma_fork;
> -		tmp->vm_flags &= ~(VM_LOCKED | VM_LOCKONFAULT);
> +		clear_vm_flags(tmp, VM_LOCKED_MASK);
>  		file = tmp->vm_file;
>  		if (file) {
>  			struct address_space *mapping = file->f_mapping;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d20c8b09890e..4ecdbad9a451 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6973,8 +6973,8 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
>  	unsigned long s_end = sbase + PUD_SIZE;
>  
>  	/* Allow segments to share if only one is marked locked */
> -	unsigned long vm_flags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
> -	unsigned long svm_flags = svma->vm_flags & VM_LOCKED_CLEAR_MASK;
> +	unsigned long vm_flags = vma->vm_flags & ~VM_LOCKED_MASK;
> +	unsigned long svm_flags = svma->vm_flags & ~VM_LOCKED_MASK;
>  
>  	/*
>  	 * match the virtual addresses, permission and the alignment of the
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 0336f52e03d7..5c4fff93cd6b 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -497,7 +497,7 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
>  		if (vma->vm_start != tmp)
>  			return -ENOMEM;
>  
> -		newflags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
> +		newflags = vma->vm_flags & ~VM_LOCKED_MASK;
>  		newflags |= flags;
>  		/* Here we know that  vma->vm_start <= nstart < vma->vm_end. */
>  		tmp = vma->vm_end;
> @@ -661,7 +661,7 @@ static int apply_mlockall_flags(int flags)
>  	struct vm_area_struct *vma, *prev = NULL;
>  	vm_flags_t to_add = 0;
>  
> -	current->mm->def_flags &= VM_LOCKED_CLEAR_MASK;
> +	current->mm->def_flags &= ~VM_LOCKED_MASK;
>  	if (flags & MCL_FUTURE) {
>  		current->mm->def_flags |= VM_LOCKED;
>  
> @@ -681,7 +681,7 @@ static int apply_mlockall_flags(int flags)
>  	for_each_vma(vmi, vma) {
>  		vm_flags_t newflags;
>  
> -		newflags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
> +		newflags = vma->vm_flags & ~VM_LOCKED_MASK;
>  		newflags |= to_add;
>  
>  		/* Ignore errors */
> diff --git a/mm/mmap.c b/mm/mmap.c
> index d4abc6feced1..323bd253b25a 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2671,7 +2671,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
>  					is_vm_hugetlb_page(vma) ||
>  					vma == get_gate_vma(current->mm))
> -			vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
> +			clear_vm_flags(vma, VM_LOCKED_MASK);
>  		else
>  			mm->locked_vm += (len >> PAGE_SHIFT);
>  	}
> @@ -3340,8 +3340,8 @@ static struct vm_area_struct *__install_special_mapping(
>  	vma->vm_start = addr;
>  	vma->vm_end = addr + len;
>  
> -	vma->vm_flags = vm_flags | mm->def_flags | VM_DONTEXPAND | VM_SOFTDIRTY;
> -	vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
> +	init_vm_flags(vma, (vm_flags | mm->def_flags |
> +		      VM_DONTEXPAND | VM_SOFTDIRTY) & ~VM_LOCKED_MASK);
>  	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>  
>  	vma->vm_ops = ops;
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 1b3ee02bead7..35db9752cb6a 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -687,7 +687,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>  
>  	if (unlikely(!err && (flags & MREMAP_DONTUNMAP))) {
>  		/* We always clear VM_LOCKED[ONFAULT] on the old vma */
> -		vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
> +		clear_vm_flags(vma, VM_LOCKED_MASK);
>  
>  		/*
>  		 * anon_vma links of the old vma is no longer needed after its page
> -- 
> 2.39.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:29:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484880.751738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyZA-0007Aw-VH; Thu, 26 Jan 2023 09:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484880.751738; Thu, 26 Jan 2023 09:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyZA-0007Ap-SM; Thu, 26 Jan 2023 09:29:08 +0000
Received: by outflank-mailman (input) for mailman id 484880;
 Thu, 26 Jan 2023 09:27:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vvgl=5X=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1pKyY3-000771-Qh
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:27:59 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b755866b-9d5b-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:27:57 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3647A61778;
 Thu, 26 Jan 2023 09:27:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3921C433EF;
 Thu, 26 Jan 2023 09:27:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b755866b-9d5b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674725274;
	bh=SoMuUCKccvLBFFlS2yaoZNAJ6vi8hf11UD5LIFXhHPI=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=j//Fl5kIolRbc8e9oVCQ2yHwbpz0TuieYLSj+mDFLpa1qwKxXUhf5PJmcO75Ujh3s
	 twG4fOSIQIUcY+3MnnJB2LFgFpgYd/SthHle/+axd8X6B40DgEguPrzEIDstErfxF8
	 ko0Mb6Gc7DtRkgMx2KamDNuuhnb/IfTXyWRUAI8okoWPpJCrU+5VBwHmsXg3EJf5FZ
	 gJ2/jlRLLWN+Ab/FNlmJJeHDKv+eOxzvt50mhgYbacQHyS9BYwkvyF434NJwFVAGsI
	 dA9tzS1LLkbpY2wsykh131+0o90nnu9297soO8LSNeHRZppCk70KOOayN+gSrVyob+
	 3jVum/MlvC8IQ==
Date: Thu, 26 Jan 2023 11:26:58 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 4/6] mm: replace vma->vm_flags indirect modification
 in ksm_madvise
Message-ID: <Y9JHYvihjxGpAFPg@kernel.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-5-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-5-surenb@google.com>

On Wed, Jan 25, 2023 at 12:38:49AM -0800, Suren Baghdasaryan wrote:
> Replace indirect modifications to vma->vm_flags with calls to modifier
> functions so that flag changes can be tracked and VMA locking
> correctness preserved. Add a BUG_ON check in ksm_madvise() to catch
> indirect vm_flags modification attempts.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  arch/powerpc/kvm/book3s_hv_uvmem.c | 5 ++++-
>  arch/s390/mm/gmap.c                | 5 ++++-
>  mm/khugepaged.c                    | 2 ++
>  mm/ksm.c                           | 2 ++
>  4 files changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index 1d67baa5557a..325a7a47d348 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -393,6 +393,7 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
>  {
>  	unsigned long gfn = memslot->base_gfn;
>  	unsigned long end, start = gfn_to_hva(kvm, gfn);
> +	unsigned long vm_flags;
>  	int ret = 0;
>  	struct vm_area_struct *vma;
>  	int merge_flag = (merge) ? MADV_MERGEABLE : MADV_UNMERGEABLE;
> @@ -409,12 +410,14 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
>  			ret = H_STATE;
>  			break;
>  		}
> +		vm_flags = vma->vm_flags;
>  		ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
> -			  merge_flag, &vma->vm_flags);
> +			  merge_flag, &vm_flags);
>  		if (ret) {
>  			ret = H_STATE;
>  			break;
>  		}
> +		reset_vm_flags(vma, vm_flags);
>  		start = vma->vm_end;
>  	} while (end > vma->vm_end);
>  
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 3a695b8a1e3c..d5eb47dcdacb 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2587,14 +2587,17 @@ int gmap_mark_unmergeable(void)
>  {
>  	struct mm_struct *mm = current->mm;
>  	struct vm_area_struct *vma;
> +	unsigned long vm_flags;
>  	int ret;
>  	VMA_ITERATOR(vmi, mm, 0);
>  
>  	for_each_vma(vmi, vma) {
> +		vm_flags = vma->vm_flags;
>  		ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
> -				  MADV_UNMERGEABLE, &vma->vm_flags);
> +				  MADV_UNMERGEABLE, &vm_flags);
>  		if (ret)
>  			return ret;
> +		reset_vm_flags(vma, vm_flags);
>  	}
>  	mm->def_flags &= ~VM_MERGEABLE;
>  	return 0;
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 8abc59345bf2..76b24cd0c179 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -354,6 +354,8 @@ struct attribute_group khugepaged_attr_group = {
>  int hugepage_madvise(struct vm_area_struct *vma,
>  		     unsigned long *vm_flags, int advice)
>  {
> +	/* vma->vm_flags can be changed only using modifier functions */
> +	BUG_ON(vm_flags == &vma->vm_flags);
>  	switch (advice) {
>  	case MADV_HUGEPAGE:
>  #ifdef CONFIG_S390
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 04f1c8c2df11..992b2be9f5e6 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2573,6 +2573,8 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>  	struct mm_struct *mm = vma->vm_mm;
>  	int err;
>  
> +	/* vma->vm_flags can be changed only using modifier functions */
> +	BUG_ON(vm_flags == &vma->vm_flags);
>  	switch (advice) {
>  	case MADV_MERGEABLE:
>  		/*
> -- 
> 2.39.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:36:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484888.751748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKygR-0000O3-Ov; Thu, 26 Jan 2023 09:36:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484888.751748; Thu, 26 Jan 2023 09:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKygR-0000Nw-M9; Thu, 26 Jan 2023 09:36:39 +0000
Received: by outflank-mailman (input) for mailman id 484888;
 Thu, 26 Jan 2023 09:35:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vvgl=5X=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1pKyfm-0000Mo-Hj
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:35:58 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d50674d8-9d5c-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:35:55 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 93D00B81D14;
 Thu, 26 Jan 2023 09:35:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 40FC4C433EF;
 Thu, 26 Jan 2023 09:35:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d50674d8-9d5c-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674725753;
	bh=Jq0Q+Dfka0m9QKRqrUV8u4PYW6lJ0tXLbKty2Z/MONQ=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=DpDvg8hLIsvs2nUChgLqcCzQb5MGT6xlFHD2548qKue6I+mFb5y6KuSL9cEj8Sv48
	 34VITUz8VdpEiQ9Lx/p/ZzxIfgWzJF/ix7JV7i7oWmkpSZBBDw/wkwDZeRctfKpgE4
	 Ug8fi9SZm0Ot2nPaKiPqjgCQxC9GT5ZFTD04h9bfnxijqaLpzmrs/o75HcASj7qn/g
	 7zriVl66jfzZ9BuQJRZl5X3Xuu5sTLySnZxHc1US6dXmWen4DqRyj7BX7yuPOxA0Ol
	 7+wswiyrJ+2czF/DMRCi3tC4jGYGzS4NDlKIoMpCo6a40qDBmQLb9AasmuzNqhmeLt
	 5DbFCNBqCE0QQ==
Date: Thu, 26 Jan 2023 11:34:54 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 5/6] mm: introduce mod_vm_flags_nolock and use it in
 untrack_pfn
Message-ID: <Y9JJPvvuvSjQ+x9h@kernel.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-6-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-6-surenb@google.com>

On Wed, Jan 25, 2023 at 12:38:50AM -0800, Suren Baghdasaryan wrote:
> In cases when VMA flags are modified after the VMA has been isolated
> and mmap_lock has been downgraded, the modification would trip an
> assertion because the mmap write lock is not held.
> Introduce mod_vm_flags_nolock for use in such situations.

vm_flags_mod_nolock?

> Pass a hint to untrack_pfn to conditionally use mod_vm_flags_nolock
> for flags modification and to avoid the assertion.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  arch/x86/mm/pat/memtype.c | 10 +++++++---
>  include/linux/mm.h        | 12 +++++++++---
>  include/linux/pgtable.h   |  5 +++--
>  mm/memory.c               | 13 +++++++------
>  mm/memremap.c             |  4 ++--
>  mm/mmap.c                 | 16 ++++++++++------
>  6 files changed, 38 insertions(+), 22 deletions(-)
> 
> diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> index ae9645c900fa..d8adc0b42cf2 100644
> --- a/arch/x86/mm/pat/memtype.c
> +++ b/arch/x86/mm/pat/memtype.c
> @@ -1046,7 +1046,7 @@ void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
>   * can be for the entire vma (in which case pfn, size are zero).
>   */
>  void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> -		 unsigned long size)
> +		 unsigned long size, bool mm_wr_locked)
>  {
>  	resource_size_t paddr;
>  	unsigned long prot;
> @@ -1065,8 +1065,12 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
>  		size = vma->vm_end - vma->vm_start;
>  	}
>  	free_pfn_range(paddr, size);
> -	if (vma)
> -		clear_vm_flags(vma, VM_PAT);
> +	if (vma) {
> +		if (mm_wr_locked)
> +			clear_vm_flags(vma, VM_PAT);
> +		else
> +			mod_vm_flags_nolock(vma, 0, VM_PAT);
> +	}
>  }
>  
>  /*
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 55335edd1373..48d49930c411 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -656,12 +656,18 @@ static inline void clear_vm_flags(struct vm_area_struct *vma,
>  	vma->vm_flags &= ~flags;
>  }
>  
> +static inline void mod_vm_flags_nolock(struct vm_area_struct *vma,
> +				       unsigned long set, unsigned long clear)
> +{
> +	vma->vm_flags |= set;
> +	vma->vm_flags &= ~clear;
> +}
> +
>  static inline void mod_vm_flags(struct vm_area_struct *vma,
>  				unsigned long set, unsigned long clear)
>  {
>  	mmap_assert_write_locked(vma->vm_mm);
> -	vma->vm_flags |= set;
> -	vma->vm_flags &= ~clear;
> +	mod_vm_flags_nolock(vma, set, clear);
>  }
>  
>  static inline void vma_set_anonymous(struct vm_area_struct *vma)
> @@ -2087,7 +2093,7 @@ static inline void zap_vma_pages(struct vm_area_struct *vma)
>  }
>  void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
>  		struct vm_area_struct *start_vma, unsigned long start,
> -		unsigned long end);
> +		unsigned long end, bool mm_wr_locked);
>  
>  struct mmu_notifier_range;
>  
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 5fd45454c073..c63cd44777ec 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1185,7 +1185,8 @@ static inline int track_pfn_copy(struct vm_area_struct *vma)
>   * can be for the entire vma (in which case pfn, size are zero).
>   */
>  static inline void untrack_pfn(struct vm_area_struct *vma,
> -			       unsigned long pfn, unsigned long size)
> +			       unsigned long pfn, unsigned long size,
> +			       bool mm_wr_locked)
>  {
>  }
>  
> @@ -1203,7 +1204,7 @@ extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
>  			     pfn_t pfn);
>  extern int track_pfn_copy(struct vm_area_struct *vma);
>  extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> -			unsigned long size);
> +			unsigned long size, bool mm_wr_locked);
>  extern void untrack_pfn_moved(struct vm_area_struct *vma);
>  #endif
>  
> diff --git a/mm/memory.c b/mm/memory.c
> index d6902065e558..5b11b50e2c4a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1613,7 +1613,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>  static void unmap_single_vma(struct mmu_gather *tlb,
>  		struct vm_area_struct *vma, unsigned long start_addr,
>  		unsigned long end_addr,
> -		struct zap_details *details)
> +		struct zap_details *details, bool mm_wr_locked)
>  {
>  	unsigned long start = max(vma->vm_start, start_addr);
>  	unsigned long end;
> @@ -1628,7 +1628,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
>  		uprobe_munmap(vma, start, end);
>  
>  	if (unlikely(vma->vm_flags & VM_PFNMAP))
> -		untrack_pfn(vma, 0, 0);
> +		untrack_pfn(vma, 0, 0, mm_wr_locked);
>  
>  	if (start != end) {
>  		if (unlikely(is_vm_hugetlb_page(vma))) {
> @@ -1675,7 +1675,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
>   */
>  void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
>  		struct vm_area_struct *vma, unsigned long start_addr,
> -		unsigned long end_addr)
> +		unsigned long end_addr, bool mm_wr_locked)
>  {
>  	struct mmu_notifier_range range;
>  	struct zap_details details = {
> @@ -1689,7 +1689,8 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
>  				start_addr, end_addr);
>  	mmu_notifier_invalidate_range_start(&range);
>  	do {
> -		unmap_single_vma(tlb, vma, start_addr, end_addr, &details);
> +		unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
> +				 mm_wr_locked);
>  	} while ((vma = mas_find(&mas, end_addr - 1)) != NULL);
>  	mmu_notifier_invalidate_range_end(&range);
>  }
> @@ -1723,7 +1724,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>  	 * unmap 'address-end' not 'range.start-range.end' as range
>  	 * could have been expanded for hugetlb pmd sharing.
>  	 */
> -	unmap_single_vma(&tlb, vma, address, end, details);
> +	unmap_single_vma(&tlb, vma, address, end, details, false);
>  	mmu_notifier_invalidate_range_end(&range);
>  	tlb_finish_mmu(&tlb);
>  }
> @@ -2492,7 +2493,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
>  
>  	err = remap_pfn_range_notrack(vma, addr, pfn, size, prot);
>  	if (err)
> -		untrack_pfn(vma, pfn, PAGE_ALIGN(size));
> +		untrack_pfn(vma, pfn, PAGE_ALIGN(size), true);
>  	return err;
>  }
>  EXPORT_SYMBOL(remap_pfn_range);
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 08cbf54fe037..2f88f43d4a01 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -129,7 +129,7 @@ static void pageunmap_range(struct dev_pagemap *pgmap, int range_id)
>  	}
>  	mem_hotplug_done();
>  
> -	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
> +	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
>  	pgmap_array_delete(range);
>  }
>  
> @@ -276,7 +276,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
>  	if (!is_private)
>  		kasan_remove_zero_shadow(__va(range->start), range_len(range));
>  err_kasan:
> -	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
> +	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
>  err_pfn_remap:
>  	pgmap_array_delete(range);
>  	return error;
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 2c6e9072e6a8..69d440997648 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -78,7 +78,7 @@ core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
>  static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
>  		struct vm_area_struct *vma, struct vm_area_struct *prev,
>  		struct vm_area_struct *next, unsigned long start,
> -		unsigned long end);
> +		unsigned long end, bool mm_wr_locked);
>  
>  static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
>  {
> @@ -2136,14 +2136,14 @@ static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
>  static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
>  		struct vm_area_struct *vma, struct vm_area_struct *prev,
>  		struct vm_area_struct *next,
> -		unsigned long start, unsigned long end)
> +		unsigned long start, unsigned long end, bool mm_wr_locked)
>  {
>  	struct mmu_gather tlb;
>  
>  	lru_add_drain();
>  	tlb_gather_mmu(&tlb, mm);
>  	update_hiwater_rss(mm);
> -	unmap_vmas(&tlb, mt, vma, start, end);
> +	unmap_vmas(&tlb, mt, vma, start, end, mm_wr_locked);
>  	free_pgtables(&tlb, mt, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
>  				 next ? next->vm_start : USER_PGTABLES_CEILING);
>  	tlb_finish_mmu(&tlb);
> @@ -2391,7 +2391,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  			mmap_write_downgrade(mm);
>  	}
>  
> -	unmap_region(mm, &mt_detach, vma, prev, next, start, end);
> +	/*
> +	 * We can free page tables without write-locking mmap_lock because VMAs
> +	 * were isolated before we downgraded mmap_lock.
> +	 */
> +	unmap_region(mm, &mt_detach, vma, prev, next, start, end, !downgrade);
>  	/* Statistics and freeing VMAs */
>  	mas_set(&mas_detach, start);
>  	remove_mt(mm, &mas_detach);
> @@ -2704,7 +2708,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  
>  		/* Undo any partial mapping done by a device driver. */
>  		unmap_region(mm, &mm->mm_mt, vma, prev, next, vma->vm_start,
> -			     vma->vm_end);
> +			     vma->vm_end, true);
>  	}
>  	if (file && (vm_flags & VM_SHARED))
>  		mapping_unmap_writable(file->f_mapping);
> @@ -3031,7 +3035,7 @@ void exit_mmap(struct mm_struct *mm)
>  	tlb_gather_mmu_fullmm(&tlb, mm);
>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
>  	/* Use ULONG_MAX here to ensure all VMAs in the mm are unmapped */
> -	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX);
> +	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX, false);
>  	mmap_read_unlock(mm);
>  
>  	/*
> -- 
> 2.39.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:44:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:44:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484895.751761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyo8-0001uj-Jy; Thu, 26 Jan 2023 09:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484895.751761; Thu, 26 Jan 2023 09:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyo8-0001uc-HB; Thu, 26 Jan 2023 09:44:36 +0000
Received: by outflank-mailman (input) for mailman id 484895;
 Thu, 26 Jan 2023 09:44:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKyo8-0001uQ-0l; Thu, 26 Jan 2023 09:44:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKyo7-0006UE-VU; Thu, 26 Jan 2023 09:44:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pKyo7-0004sy-GV; Thu, 26 Jan 2023 09:44:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pKyo7-0007DU-G6; Thu, 26 Jan 2023 09:44:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zx7UD9hPA74hO1HhZD9+3CXz7nGIRcVr9VR6oJ1lWXM=; b=URY6hDV6leYwyRThM31idH2FNH
	pFICLoaOriCysQGCmLCIqWpiLWFlmlkiDyu993PrmwENe3aNoKWlONjACx01G2vSyClQhGtIaiA0E
	URWUAUjcqxLwAYLpzPBJXmydPDdmlAjCtWWDTo2Z22hjelZMN15exXEA4ngXss8wbCRA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176139-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 176139: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-i386-libvirt-raw:xen-install:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=95a278a84591b6a4cfa170eba31c8ec60e82f940
X-Osstest-Versions-That:
    libvirt=d5ecc2aa779d48be32bba51a6c8c16635c52721d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Jan 2023 09:44:35 +0000

flight 176139 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176139/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw   7 xen-install                  fail  like 176116
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 176116
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 176116
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 176116
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              95a278a84591b6a4cfa170eba31c8ec60e82f940
baseline version:
 libvirt              d5ecc2aa779d48be32bba51a6c8c16635c52721d

Last test of basis   176116  2023-01-25 04:18:48 Z    1 days
Testing same since   176139  2023-01-26 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>
  zhenwei pi <pizhenwei@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   d5ecc2aa77..95a278a845  95a278a84591b6a4cfa170eba31c8ec60e82f940 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:57:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:57:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484905.751771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyzs-0003qT-Tk; Thu, 26 Jan 2023 09:56:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484905.751771; Thu, 26 Jan 2023 09:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKyzs-0003qM-PT; Thu, 26 Jan 2023 09:56:44 +0000
Received: by outflank-mailman (input) for mailman id 484905;
 Thu, 26 Jan 2023 09:56:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YROu=5X=gmail.com=dunlapg@srs-se1.protection.inumbo.net>)
 id 1pKyzr-0003qG-Ho
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:56:43 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb0c107d-9d5f-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:56:40 +0100 (CET)
Received: by mail-wm1-x334.google.com with SMTP id m15so743167wms.4
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 01:56:40 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb0c107d-9d5f-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=umich.edu; s=google-2016-06-03;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=gkNyTWKvIHUCpmNo3/mf7YggFUlwg6d1JADIsZE/58Y=;
        b=immGmLZwBSlsuaHB6Qca30hwHjIrgXaxL25dChDEV5PFmPCSqrvE8N19U1r0ui9AuH
         PeQa9lXb09lHZAWosrEuzXgC8yUl6X/PYE9TLDd0p6X6y5z6nJ6Xoj0iRumKnXJtQR9g
         PvXc8PV39YDMQTwVhTDfDfQuDNQ4xgaD8djgl0q1n5/Ayu91aMTzryFDYGGYSFFY75SA
         eYCH8oNTlMWwWEXbOmL92eUdIERWMdbfzelz26V9e/fH3NSlTr992hUdgOMUVg8EAR0G
         Bzs7WO2AVXV0NcOKS1NMy5AKf3dOMvVIVk/G0b9ySIbCxZY7oDJX4mupcV45NV3R78K2
         CthA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=gkNyTWKvIHUCpmNo3/mf7YggFUlwg6d1JADIsZE/58Y=;
        b=q2eH+wYxd0uquxZVqYkcor3DFunfVuRw8a+FI0zmaPonkYePYRAQJ9AfJkZMWSb6m1
         jKIRUVaO+23E9WhtAOAlZqL5e8hzyHZLFQ9wOsgEOkWtpMCNPce+ayDJgwvhcFkwJUuQ
         b8jsCFkdEhdGO7xLolUgRTtdz2qC+l/qQN7O8Rnm7yEe6Ys1t19tYawz+tUzFPPkHcq+
         rsm2CRK3Nc+a9Q3hKM9Ai+9Dmhjvj2Wfk9tB4CX4Ir6KQfy1nv/iRMVNKFbMl7ywZm9p
         4y+8pkkhenI6wAm3bRtTeaabX+VsO3K9dZhb7B0onnUrZ9drJSHCMTaI58QofF/f6twY
         LJ4w==
X-Gm-Message-State: AFqh2kqeJAz0XwQAJ9iM+hxkxAXdsl4bDnQfHzOJ0+snYhWP/xdm4lHN
	vd00zSXypGJe43SJb2KxSM9YnFxIur+iQNmLLpAaWUKZ
X-Google-Smtp-Source: AMrXdXt5TBLGY/xIqXxyqjIu1sRpcSiyvx1fR6dzBW0Dfu83KHXKTyyaDan7puFT/ReZNquokhx5QZ++dEcEqa1Xtwg=
X-Received: by 2002:a05:600c:825:b0:3db:23:37e9 with SMTP id
 k37-20020a05600c082500b003db002337e9mr1947064wmp.46.1674727000136; Thu, 26
 Jan 2023 01:56:40 -0800 (PST)
MIME-Version: 1.0
References: <20230125165308.22897-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230125165308.22897-1-andrew.cooper3@citrix.com>
From: George Dunlap <dunlapg@umich.edu>
Date: Thu, 26 Jan 2023 09:56:29 +0000
Message-ID: <CAFLBxZaMj9k+gPFOxr-f=7TJxWoE4nb4=_Vn+ZR_rqAaBzuLGw@mail.gmail.com>
Subject: Re: [PATCH] x86/shadow: Fix PV32 shadowing in !HVM builds
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Tim Deegan <tim@xen.org>
Content-Type: multipart/alternative; boundary="000000000000f0633905f327c3fc"

--000000000000f0633905f327c3fc
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 25, 2023 at 4:53 PM Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> The OSSTest bisector identified an issue with c/s 1894049fa283
> ("x86/shadow:
> L2H shadow type is PV32-only") in !HVM builds.
>
> The bug is ultimately caused by sh_type_to_size[] not actually being
> specific
> to HVM guests, and its position in shadow/hvm.c misled the reasoning.
>
> To fix the issue that OSSTest identified, SH_type_l2h_64_shadow must still
> have the value 1 in any CONFIG_PV32 build.  But simply adjusting this
> leaves
> us with misleading logic, and a reasonable chance of making a related error
> again in the future.
>
> In hindsight, moving sh_type_to_size[] out of common.c in the first place
> was a mistake.  Therefore, move sh_type_to_size[] back to living in common.c,
> leaving a comment explaining why it happens to be inside an HVM
> conditional.
>
> This effectively reverts the second half of 4fec945409fc ("x86/shadow:
> adjust
> and move sh_type_to_size[]") while retaining the other improvements from
> the
> same changeset.
>
> While making this change, also adjust the sh_type_to_size[] declaration to
> match its definition.
>
> Fixes: 4fec945409fc ("x86/shadow: adjust and move sh_type_to_size[]")
> Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

I don't have super-strong opinions here; I'd be OK with any of the
patches.  But I tend to sympathize with Andrew's arguments re communicating
what's going on.

Acked-by: George Dunlap <george.dunlap@cloud.com>

--000000000000f0633905f327c3fc--


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 09:57:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 09:57:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484906.751781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKz0H-0004Gr-4g; Thu, 26 Jan 2023 09:57:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484906.751781; Thu, 26 Jan 2023 09:57:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKz0H-0004Gk-1g; Thu, 26 Jan 2023 09:57:09 +0000
Received: by outflank-mailman (input) for mailman id 484906;
 Thu, 26 Jan 2023 09:57:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pKz0F-0003qG-Jm
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 09:57:07 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2059.outbound.protection.outlook.com [40.107.8.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9b5ea4f-9d5f-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 10:57:05 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7361.eurprd04.prod.outlook.com (2603:10a6:20b:1d2::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Thu, 26 Jan
 2023 09:57:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 09:57:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9b5ea4f-9d5f-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MmZuvCnnBRPr9tyXCAabttiQSspZplr43yea8CiTEhD0JI5ldq9cpg/A7NCw9YpV/hdQ80QEQfoBSMqDqzI4uluTwbmIDQkCl3MPR4B4KUgCjxN2B6ulc7uHt3ZWPbKMOhycBBDByzAVjmo9+wyo//KSkUihGMbug7RhPbqlT/K6S1iGbJwDyvtjyF/cNpFa1rSW2IaS2X5O6AQHV3apXQ+mI3slD28cXg7+NU4o0uuR31MmHNLMjuAVl36BmTA4LxRSEur0myUbHk4cffLcfQTZCSce5qxh1RNkPhe53mhPw8gZwIAdOLyHB0J60QTllJhniL9eYcle3V+pfU5B/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0u+EaJubThqqUuPcnofIEnkd2+LmR9QAnIGiAZG6LQw=;
 b=DRA6QNuW9+D3D+w9ARLAJVXPKpvYC826N6WT+N8YyvX9QSkjREbRpEZzXbbqdJA0Mfhg9eyoAEsrmcQcSOHCruGYaHsNEO/Rz1cRDMfJc4u9SRMTn4Ap3Ze2SXIJ+VFeob4lKNLc0B6jYVOdEusdJVo2TkYmqkg2K/Ia/K8qbqQBIaqf2IlHWLC7xjc1ang3p4o4U92wMffneTn36hNFC+QrZUOMM1BMn5foS2OCA3M61B7y0nA9rPuYcHtpyAOj/TVaYEVko9OUMTWRRoWhheffsZFGW/tY/ZdW9GUNnV4GRBDs1crUsFhfPEehqaNKSM0kS1YWZpPzJ2omDf9SKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0u+EaJubThqqUuPcnofIEnkd2+LmR9QAnIGiAZG6LQw=;
 b=XkekT/Ym1gBG4o1gZ6PygaTXiXIgWD9tc1Y4fHpurxcShfv1URnSmTNtW5pc4hkmlga3MZdXUgxHulg7obCm+XfUCb17l2I+b1zYhh5qQhK2nFuDmcd9bkZNZAhE846zOEtA+IWHYzlMoF8z70CqfcBpj7PmCxlN4Z8rbDnLoHpyab9ffG+L6Zo59BZeCGLuut3fMsTUTt6p/x6OqLMLj0U3qJ1YXZDTeVQWw5IvGKGZ4rcGxw34bjWZbgg9Ylg2RuGcDEvWt50KtQvsuIs6D200+qLGrs9aOGc+6GkplzHsOGQdrbZLmzZzxdRyd5FbgIXr9qjeHPw9GkJm2XJehA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <77576aab-93bf-5f6a-9b04-17eaf1d84ffb@suse.com>
Date: Thu, 26 Jan 2023 10:57:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/x86: public: add TSC defines for cpuid leaf 4
Content-Language: en-US
To: Krister Johansen <kjlx@templeofstupid.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
References: <20230125184506.GE1963@templeofstupid.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230125184506.GE1963@templeofstupid.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0206.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7361:EE_
X-MS-Office365-Filtering-Correlation-Id: 53e83e82-042e-4929-453c-08daff83acd4
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 53e83e82-042e-4929-453c-08daff83acd4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 09:57:03.3612
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qJma0m/9m6FCtX/1I/tPFRRhvCWOt+OO/9o57+I7UOxWjdDigBsxnbNIkJGfITvaLKneG92M9BiJ61o1Q9uGfQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7361

On 25.01.2023 19:45, Krister Johansen wrote:
> --- a/xen/include/public/arch-x86/cpuid.h
> +++ b/xen/include/public/arch-x86/cpuid.h
> @@ -72,6 +72,14 @@
>   * Sub-leaf 2: EAX: host tsc frequency in kHz
>   */
>  
> +#define XEN_CPUID_TSC_EMULATED               (1u << 0)
> +#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
> +#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
> +#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
> +#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
> +#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
> +#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)

Actually I think we'd better stick to the names found in asm/time.h
(and then replace their uses, dropping the #define-s there). If you
agree, I'd be happy to make the adjustment while committing.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 10:14:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 10:14:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484915.751791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKzGZ-00073Q-Io; Thu, 26 Jan 2023 10:13:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484915.751791; Thu, 26 Jan 2023 10:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKzGZ-00073J-G6; Thu, 26 Jan 2023 10:13:59 +0000
Received: by outflank-mailman (input) for mailman id 484915;
 Thu, 26 Jan 2023 10:13:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKzGX-00073D-L5
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 10:13:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKzGX-0007Df-7J; Thu, 26 Jan 2023 10:13:57 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=[192.168.8.102]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKzGX-0008DG-16; Thu, 26 Jan 2023 10:13:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=5tPIG1qmtNwurtmmSSmJk8cImvT1+LsA9AgPiRDKbQ4=; b=W9YoECbPg912MxaAUDxQsb9kKS
	1jx9LmXTALKek1SQoymAJ1in4a9FOMnVAVmNwsxd+dOGXcHuMytriTpZJS2O26GD9xlfXvH9iCJZE
	sf8vhdmfKxAPYOEJkOlr3S30Y9u1CkozaJtbiSVgxCJ/GDWzFyGzF6spmtev+QqRDAYY=;
Message-ID: <55e75c62-7b8c-dd6e-092e-48984aecff3b@xen.org>
Date: Thu, 26 Jan 2023 10:13:54 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v5] xen/arm: Use the correct format specifier
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230125101943.1854-1-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301251309160.1978264@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301251309160.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 25/01/2023 21:09, Stefano Stabellini wrote:
> On Wed, 25 Jan 2023, Ayan Kumar Halder wrote:
>> 1. One should use 'PRIpaddr' to display 'paddr_t' variables. However,
>> while creating nodes in fdt, the address (if present in the node name)
>> should be represented using 'PRIx64'. This is to be in conformance
>> with the following rule present in https://elinux.org/Device_Tree_Linux
>>
>> . node names
>> "unit-address does not have leading zeros"
>>
>> As 'PRIpaddr' introduces leading zeros, we cannot use it.
>>
>> So, we have introduced a wrapper, i.e. domain_fdt_begin_node(), which will
>> represent the physical address using 'PRIx64'.
>>
>> 2. One should use 'PRIx64' to display 'u64' in hex format. The current
>> use of 'PRIpaddr' for printing PTE is buggy as this is not a physical
>> address.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> 
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> (I checked that Ayan also addressed Julien's latest comments.)

They are indeed.

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 10:16:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 10:16:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484919.751800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKzIW-0007kx-Uo; Thu, 26 Jan 2023 10:16:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484919.751800; Thu, 26 Jan 2023 10:16:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKzIW-0007kq-S1; Thu, 26 Jan 2023 10:16:00 +0000
Received: by outflank-mailman (input) for mailman id 484919;
 Thu, 26 Jan 2023 10:16:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKzIW-0007kk-JP
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 10:16:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKzIS-0007H3-Lv; Thu, 26 Jan 2023 10:15:56 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=[192.168.8.102]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKzIS-0008Ei-GK; Thu, 26 Jan 2023 10:15:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=dXlLCs/wiYB4UdmaSeytTDWjasDq+tpvf201Gl4qjyA=; b=bbAUDPgrPayN8toHLvFSMeutGv
	yOm0cdgFoMog3hbeYNMScDeT330aAGR5haUmzVpUWLddQpJWKB/+2Lilu9un3UcPL4zsRmGPqIrTs
	0aSpsMEPeht+8c1FymhNCwJvG5wtakRpUfp8rSK+qnQ/tifZqkfezwmYnhpoonXbsmq8=;
Message-ID: <2be0aa77-3381-8552-a6e3-917e9005cdc2@xen.org>
Date: Thu, 26 Jan 2023 10:15:54 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 01/11] xen/common: add cache coloring common code
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-2-carlo.nonato@minervasys.tech>
 <a470be46-ab6e-3970-2b04-6f4035adf1cb@suse.com>
 <CAG+AhRX9DVW5EfXKQoDG9hmcE0FORydTZd0pNm-0uqwddaN9NQ@mail.gmail.com>
 <6c952571-6a8d-e4fc-36ec-b5b79dac40f6@suse.com>
 <CAG+AhRUOBgPsT9yU3EtqSPj5VX70H1DsUL_dOWguapC+u3iSvw@mail.gmail.com>
 <bececcba-7606-924d-aba1-f51134414fd0@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <bececcba-7606-924d-aba1-f51134414fd0@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 26/01/2023 08:06, Jan Beulich wrote:
> On 25.01.2023 17:18, Carlo Nonato wrote:
>> On Wed, Jan 25, 2023 at 2:10 PM Jan Beulich <jbeulich@suse.com> wrote:
>>> On 25.01.2023 12:18, Carlo Nonato wrote:
>>>> On Tue, Jan 24, 2023 at 5:37 PM Jan Beulich <jbeulich@suse.com> wrote:
>>>>> On 23.01.2023 16:47, Carlo Nonato wrote:
>>>>>> --- a/xen/include/xen/sched.h
>>>>>> +++ b/xen/include/xen/sched.h
>>>>>> @@ -602,6 +602,9 @@ struct domain
>>>>>>
>>>>>>       /* Holding CDF_* constant. Internal flags for domain creation. */
>>>>>>       unsigned int cdf;
>>>>>> +
>>>>>> +    unsigned int *llc_colors;
>>>>>> +    unsigned int num_llc_colors;
>>>>>>   };
>>>>>
>>>>> Why outside of any #ifdef, and why not in struct arch_domain?
>>>>
>>>> Moving this into sched.h seemed like the natural continuation of the common +
>>>> arch-specific split. Notice that this split also exists because Julien pointed
>>>> out (as you did in some earlier revision) that cache coloring can be used
>>>> by other arches in the future (even if x86 is excluded). Having two maintainers
>>>> saying the same thing sounded like a good reason to do that.
>>>
>>> If you mean this to be usable by other arch-es as well (which I would
>>> welcome, as I think I had expressed on an earlier version), then I think
>>> more pieces want to be in common code. But putting the fields here and all
>>> users of them in arch-specific code (which I think is the way I saw it)
>>> doesn't look very logical to me. IOW to me there exist only two possible
>>> approaches: As much as possible in common code, or common code being
>>> disturbed as little as possible.
>>
>> This means having a llc-coloring.c in common where to put the common
>> implementation, right?
> 
> Likely, yes.
> 
>> Anyway right now there is also another user of such fields in common:
>> page_alloc.c.
> 
> Yet hopefully all inside suitable #ifdef.
> 
>>>> The missing #ifdef comes from a discussion I had with Julien in v2 about
>>>> domctl interface where he suggested removing it
>>>> (https://marc.info/?l=xen-devel&m=166151802002263).
>>>
>>> I went about five levels deep in the replies, without finding any such reply
>>> from Julien. Can you please be more specific with the link, so readers don't
>>> need to endlessly dig?
>>
>> https://marc.info/?l=xen-devel&m=166669617917298
>>
>> quote (me and then Julien):
>>>> We can also think of moving the coloring fields from this
>>>> struct to the common one (xen_domctl_createdomain) protecting them with
>>>> the proper #ifdef (but we are targeting only arm64...).
>>
>>> Your code is targeting arm64 but fundamentally this is an arm64 specific
>>> feature. IOW, this could be used in the future on other arch. So I think
>>> it would make sense to define it in common without the #ifdef.
> 
> I'm inclined to read this as a dislike for "#ifdef CONFIG_ARM64", not for
> "#ifdef CONFIG_LLC_COLORING" (or whatever the name of the option was). But
> I guess only Julien can clarify this ...
Your interpretation is correct. I would prefer the fields to be
protected with #ifdef CONFIG_LLC_COLORING.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 10:21:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 10:21:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484924.751811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKzNQ-0000mI-HZ; Thu, 26 Jan 2023 10:21:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484924.751811; Thu, 26 Jan 2023 10:21:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKzNQ-0000mB-Ej; Thu, 26 Jan 2023 10:21:04 +0000
Received: by outflank-mailman (input) for mailman id 484924;
 Thu, 26 Jan 2023 10:21:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKzNP-0000m5-Pd
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 10:21:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKzNK-0007Vt-SO; Thu, 26 Jan 2023 10:20:58 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=[192.168.8.102]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKzNK-0008Jy-Kt; Thu, 26 Jan 2023 10:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=WBQlvO9xVxVW5S63BCblukKsAP/S14u07uD/HmlDkr4=; b=mdu0VHFTLzK/2O+RN8aygI81QP
	tYRl3XTnsdv8j8fwEsSNAfi05utn8BlN9WhgzKh3vyXUPU9d+HuOiGIH7FFR/NYOrAOcxWYUrki/z
	uzpimIhmYKAK3NaYf2FEWgccLT2LT/MxuPuoN4Q301Xw5FM449yG/fMwTt+sJ1Smsp8s=;
Message-ID: <0ec4c364-1e18-4176-ac24-ece84eb72859@xen.org>
Date: Thu, 26 Jan 2023 10:20:56 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 04/11] xen: extend domctl interface for cache coloring
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>,
 Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-5-carlo.nonato@minervasys.tech>
 <9bfee6d9-9cb2-262e-5a46-91b0bf35d60b@suse.com>
 <CAG+AhRW+45gt7ZyOYSjaQZbfLORNsJVeADk_Tb7j9CEyTcY6QQ@mail.gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CAG+AhRW+45gt7ZyOYSjaQZbfLORNsJVeADk_Tb7j9CEyTcY6QQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 25/01/2023 16:27, Carlo Nonato wrote:
> On Tue, Jan 24, 2023 at 5:29 PM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 23.01.2023 16:47, Carlo Nonato wrote:
>>> @@ -275,6 +276,19 @@ unsigned int *dom0_llc_colors(unsigned int *num_colors)
>>>       return colors;
>>>   }
>>>
>>> +unsigned int *llc_colors_from_guest(struct xen_domctl_createdomain *config)
>>
>> const struct ...?
>>
>>> +{
>>> +    unsigned int *colors;
>>> +
>>> +    if ( !config->num_llc_colors )
>>> +        return NULL;
>>> +
>>> +    colors = alloc_colors(config->num_llc_colors);
>>
>> Error handling needs to occur here; the panic() in alloc_colors() needs
>> to go away.
>>
>>> @@ -434,7 +436,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>               rover = dom;
>>>           }
>>>
>>> -        d = domain_create(dom, &op->u.createdomain, false);
>>> +        if ( llc_coloring_enabled )
>>> +        {
>>> +            llc_colors = llc_colors_from_guest(&op->u.createdomain);
>>> +            num_llc_colors = op->u.createdomain.num_llc_colors;
>>
>> I think you would better avoid setting num_llc_colors to non-zero if
>> you got back NULL from the function. It's at best confusing.
>>
>>> @@ -92,6 +92,10 @@ struct xen_domctl_createdomain {
>>>       /* CPU pool to use; specify 0 or a specific existing pool */
>>>       uint32_t cpupool_id;
>>>
>>> +    /* IN LLC coloring parameters */
>>> +    uint32_t num_llc_colors;
>>> +    XEN_GUEST_HANDLE(uint32) llc_colors;
>>
>> Despite your earlier replies I continue to be unconvinced that this
>> is information which needs to be available right at domain_create.
>> Without that you'd also get away without the sufficiently odd
>> domain_create_llc_colored(). (Odd because: Think of two or three
>> more extended features appearing, all of which want a special cased
>> domain_create().)
> 
> Yes, I definitely see your point. Still there is the p2m table allocation
> problem that you and Julien have discussed previously. I'm not sure I
> understood what the approach is.

Henry has sent a series [1] to remove the requirement to allocate the 
P2M in domain_create().

With that series applied, the requirement to pass the colors at 
domain creation should be lifted.

Cheers,

[1] 
https://lore.kernel.org/xen-devel/20230116015820.1269387-1-Henry.Wang@arm.com/

> 
>> Jan

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 10:25:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 10:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484932.751821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKzRt-0001cm-6L; Thu, 26 Jan 2023 10:25:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484932.751821; Thu, 26 Jan 2023 10:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pKzRt-0001cf-36; Thu, 26 Jan 2023 10:25:41 +0000
Received: by outflank-mailman (input) for mailman id 484932;
 Thu, 26 Jan 2023 10:25:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pKzRs-0001cZ-4u
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 10:25:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKzRr-0007bV-OH; Thu, 26 Jan 2023 10:25:39 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=[192.168.8.102]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pKzRr-00005w-Ir; Thu, 26 Jan 2023 10:25:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=0Kg20AapJpmlw0BBBiAJrM4D4nqJNkf4ZrfuQfHy8Ac=; b=hM83Cvd5PFQMtQ0hbDGpGok3Fo
	jI4Vo+4DddanwM7bn1f70wF5WdWUi3ZV2YJS0YalRTiGppVSESwF1p85oA6zPRRLzL1KbXmJucOGZ
	OFecMkx+kNle7kb8prImFOZ2HPe8ghxjxA910V6aE5THVpQkkygyMgRSWAFE1sSEef3A=;
Message-ID: <79a1cd30-b2b6-f7e1-f000-d78520ec9e0e@xen.org>
Date: Thu, 26 Jan 2023 10:25:37 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 08/11] xen/arm: use colored allocator for p2m page
 tables
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Marco Solieri <marco.solieri@minervasys.tech>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-9-carlo.nonato@minervasys.tech>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230123154735.74832-9-carlo.nonato@minervasys.tech>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Carlo,

On 23/01/2023 15:47, Carlo Nonato wrote:
> Cache colored domains can benefit from having p2m page tables allocated
> with the same coloring scheme so that isolation can also be achieved for
> those kinds of memory accesses.
> In order to do that, the domain struct is passed to the allocator and the
> MEMF_no_owner flag is used.
> 
> Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
> Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
> ---
> v4:
> - fixed p2m page allocation using MEMF_no_owner memflag
> ---
>   xen/arch/arm/p2m.c | 11 +++++++++--
>   1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 948f199d84..f9faeb61af 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -4,6 +4,7 @@
>   #include <xen/iocap.h>
>   #include <xen/ioreq.h>
>   #include <xen/lib.h>
> +#include <xen/llc_coloring.h>
>   #include <xen/sched.h>
>   #include <xen/softirq.h>
>   
> @@ -56,7 +57,10 @@ static struct page_info *p2m_alloc_page(struct domain *d)
>        */
>       if ( is_hardware_domain(d) )
>       {
> -        pg = alloc_domheap_page(NULL, 0);
> +        if ( is_domain_llc_colored(d) )
> +            pg = alloc_domheap_page(d, MEMF_no_owner);
> +        else
> +            pg = alloc_domheap_page(NULL, 0);
I don't think we need to special-case a colored domain here. You could 
simply always pass the domain with MEMF_no_owner and let the function 
decide what to do.

This approach would also be useful when NUMA will be supported on Arm 
(the series is still under review).

>           if ( pg == NULL )
>               printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
>       }
> @@ -105,7 +109,10 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
>           if ( d->arch.paging.p2m_total_pages < pages )
>           {
>               /* Need to allocate more memory from domheap */
> -            pg = alloc_domheap_page(NULL, 0);
> +            if ( is_domain_llc_colored(d) )
> +                pg = alloc_domheap_page(d, MEMF_no_owner);
> +            else
> +                pg = alloc_domheap_page(NULL, 0);

Ditto.

>               if ( pg == NULL )
>               {
>                   printk(XENLOG_ERR "Failed to allocate P2M pages.\n");

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 11:01:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 11:01:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484938.751830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL00J-0006HM-Pv; Thu, 26 Jan 2023 11:01:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484938.751830; Thu, 26 Jan 2023 11:01:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL00J-0006HF-NQ; Thu, 26 Jan 2023 11:01:15 +0000
Received: by outflank-mailman (input) for mailman id 484938;
 Thu, 26 Jan 2023 11:01:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q9FD=5X=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pL00I-0006H3-Hy
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 11:01:14 +0000
Received: from mail-ed1-x531.google.com (mail-ed1-x531.google.com
 [2a00:1450:4864:20::531])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb935eee-9d68-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 12:01:06 +0100 (CET)
Received: by mail-ed1-x531.google.com with SMTP id y11so1508531edd.6
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 03:01:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb935eee-9d68-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=673vAa67vWT2HXWDufxt2zxk3AW8Lvgl9CbqmU9Xjsk=;
        b=5fx9N+womgJAmOzVdR2OZVKfr7pSlkVI+GlUdaGJ0lGt73i5XQ8khE/66sbNlm07xp
         ZeCu2mwONMF1eLrxKOWTYs9x8YykCmZ9GudGjR55LwzodjwstnHCdVk4+Oxp6zSnfbTf
         F6bSqIfE2/p4mLo/Z7mpvSR046B2O/JQ8wSOFbFktKf4t3PEGAPPHleIWHdkC96G7B6K
         0rbAHNT2UjHUpbqGMP/5FVYD+PvGCpfrMEiaBBYojtVGtCIEApWRpBbyKzZBu4Zs+4uL
         Y6LeulWx98Ksqwosvb7LvUdH6uD0+dvc8Ny5pLYOc4dIy83sw5/tRKzGZANeJdydOG3b
         121w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=673vAa67vWT2HXWDufxt2zxk3AW8Lvgl9CbqmU9Xjsk=;
        b=7vD+gBng+yXzIBtl+Wwz+/Ioq1C5646eGOFLnqX76njNCATtRzzTVXXlwy0148eBHC
         Wwue54r6bNK6eU3EXvhyesBp68rBDFFuRv/B/sj1By/zOkvbhQIRrEksUxem7qbNftgR
         R2Gh2DZccD1PJXEJYdu4djMX1bL4JrS+1UOxFG5YD0MGsqOBVixlKEyhPd4NIIla+56s
         l45FF3QsPSiXhvDMYfVdo7jcDUT95Pw8ZffGMtwyas04hHdxfI8whS7p6O6lSo6owlkg
         TVsXfluReg6yXm1+4Gb1iTatUW2PogT5f7DhFztM47Js6qfUq/qnb/u/4wJd5dsC9b3u
         E2KA==
X-Gm-Message-State: AFqh2ko0WMcPPO/1cUTqewqtBvPGZA4bb4zjzjgRmw0oHf+SWQ+mYhjZ
	z+IpJWm2EBi/W9E2QrcnOwfpWWHX54js4xFycUIklA==
X-Google-Smtp-Source: AMrXdXshVQ8ydHOeU84+dezm9ho09eDhfc6+4l3RnUjgKXX0IGWjs8INqyyBBZEOdhU+ioX9KwdaJ7TK5INglsAS53Y=
X-Received: by 2002:a05:6402:221a:b0:49d:836e:21f9 with SMTP id
 cq26-20020a056402221a00b0049d836e21f9mr5476199edb.36.1674730866340; Thu, 26
 Jan 2023 03:01:06 -0800 (PST)
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-8-carlo.nonato@minervasys.tech> <a74381ce-d204-1f40-7ccc-2be3bbc3ebd1@suse.com>
In-Reply-To: <a74381ce-d204-1f40-7ccc-2be3bbc3ebd1@suse.com>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Thu, 26 Jan 2023 12:00:55 +0100
Message-ID: <CAG+AhRUKWfJBf5C0uqfzePMvxN-gc2gYup+oBRBA2DXnNW-txw@mail.gmail.com>
Subject: Re: [PATCH v4 07/11] xen: add cache coloring allocator for domains
To: Jan Beulich <jbeulich@suse.com>
Cc: Luca Miccio <lucmiccio@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, 
	Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Jan,

On Tue, Jan 24, 2023 at 5:50 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 23.01.2023 16:47, Carlo Nonato wrote:
> > From: Luca Miccio <lucmiccio@gmail.com>
> >
> > This commit adds a new memory page allocator that implements the cache
> > coloring mechanism. The allocation algorithm follows the given domain color
> > configuration and maximizes contiguity in the page selection of multiple
> > subsequent requests.
> >
> > Pages are stored in a color-indexed array of lists, each one sorted by
> > machine address, that is referred to as "colored heap". Those lists are
> > filled by a simple init function which computes the color of each page.
> > When a domain requests a page, the allocator takes one from those lists
> > whose colors equals the domain configuration. It chooses the page with the
> > lowest machine address such that contiguous pages are sequentially
> > allocated if this is made possible by a color assignment which includes
> > adjacent colors.
>
> What use is this with ...
>
> > The allocator can handle only requests with order equal to 0 since the
> > single color granularity is represented in memory by one page.
>
> ... this restriction? Plus aiui there's no guarantee of contiguous pages
> coming back in any event (because things depend on what may have been
> allocated / freed earlier on), so why even give the impression of there
> being a way to obtain contiguous pages?

I really need us to be on the same "page" (no pun intended) here, because we
discussed the subject multiple times and I'm probably missing important
details.

First, is physical memory contiguity important? I'm assuming this is good
because then some hardware optimization can occur when accessing memory.
I'm taking it for granted because it's what the original author of the series
thought, but I don't have an objective view of this.

Then, let's state what contiguity means with coloring:
*if* there are contiguous free pages and *if* subsequent requests are made
and *if* the coloring configuration allows it, the allocator guarantees
contiguity because it serves pages *in order*.

From the fragmentation perspective (first prerequisite), this is somewhat
similar to the buddy case where only if contiguous pages are freed they can
be allocated after. So order of operation is always important for
fragmentation in dynamic allocation. The main difference is speed
(I'm not comparing them on this aspect).

The second prerequisite requires that users of the allocator have exclusive
access to it until the request is carried out. If interleaved requests happen,
contiguity is practically impossible. How often does this happen? I view
allocation as something that happens mainly at domain creation time, one
domain at a time, which results in a lot of subsequent requests; then
contiguity (if the other prerequisites hold) isn't just an impression.

Obviously fragmentation is inherently higher with coloring because it actually
needs to partition memory, so the third prerequisite actually limits contiguity
a lot.

> Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 11:02:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 11:02:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484944.751844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL01t-0006oS-5v; Thu, 26 Jan 2023 11:02:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484944.751844; Thu, 26 Jan 2023 11:02:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL01t-0006oL-2u; Thu, 26 Jan 2023 11:02:53 +0000
Received: by outflank-mailman (input) for mailman id 484944;
 Thu, 26 Jan 2023 11:02:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q9FD=5X=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pL01r-0006oF-Pc
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 11:02:51 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8e9b41f-9d68-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 12:02:49 +0100 (CET)
Received: by mail-ej1-x62d.google.com with SMTP id qx13so3959416ejb.13
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 03:02:49 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8e9b41f-9d68-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=Bujbm72cptBXg/nA31OUEyqO7n4/E7HF5yfYemWcZ6o=;
        b=c+h2jQRkciPXrKdsi6WVM2efQhnOM8MYjV4DAldu0bohYZKPjSenZRrttjcFoLw3mx
         Ogf5nMyYp4biL3I7rzKhDeR+FbE5jKyDkD51IufU9N4qae/ZExLINErYrRm3/TngHazJ
         pefg19QhUcu/QtcXcXBQw5P71tCIFpOKRwFeqp6iuR3iMs1jHJlUH53EzD5Q+qlW2Y3f
         JFv5E+v8eaCntol8DDYBlUV2WVO3SjVBvXFGCTw9bMh84fKpJLmvU60Cd8KsNfkurXvI
         H4dzLTOHsgiIE1kABv5wuLmp0pmg1tqIu3Bfcc7LLWa4u7uhaNzo8ixeoB+qXvncIUsR
         xLnQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=Bujbm72cptBXg/nA31OUEyqO7n4/E7HF5yfYemWcZ6o=;
        b=TF1wdgkgBxPoPyfunhg+1+LvPnrtbYXxP33VLGYX7PcLP1IM6bVzh7qNEkvV/bHIMo
         tU67uuQobhBYE2i82oMzbuAqA2rQa4Egf5X/9uUzWD1q5bc3DoJbKjGrH00nAA3MwGFe
         on1Hoftf6KwQNIUd9mJBoaOKy3X2bkTLZ5LB1TjM6JqXWzDPrhqe5k/D1kItGt7jKPYA
         1qQLURJAxITy4kEJe1C53yuYdStrQ7RtNbB1GmmG+PsaWRa2YWJF3pPSm5QGndHgOQaq
         VVIXoBQPs7V0rrCf9s1/JCJJ2jQ1Sf/AxmC5dv/G23ijQwufnalPMxGYa9H/CGZY7zwQ
         D6dw==
X-Gm-Message-State: AFqh2krbo6u/Wn3WDNixg0Mn3nhC7gmFAVaRXTWSbmTn/nACwQbZN7m0
	GQ6b4+vmfNOC7caug1Jwy3HYXsp5znTRH9AvUIqSxQ==
X-Google-Smtp-Source: AMrXdXs6ZO20bnYkjz9tUNqm/EaiI9vO3ArxqEmCfUEJCp/qLipGMo6wjTU2IGtZn94XRgc4X26+HFanW7Q+KuwiZqA=
X-Received: by 2002:a17:907:d043:b0:868:dca5:b73c with SMTP id
 vb3-20020a170907d04300b00868dca5b73cmr3726723ejc.1.1674730969321; Thu, 26 Jan
 2023 03:02:49 -0800 (PST)
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-9-carlo.nonato@minervasys.tech> <79a1cd30-b2b6-f7e1-f000-d78520ec9e0e@xen.org>
In-Reply-To: <79a1cd30-b2b6-f7e1-f000-d78520ec9e0e@xen.org>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Thu, 26 Jan 2023 12:02:38 +0100
Message-ID: <CAG+AhRXNhFOFe-jmN6Lj=RH9zhnZF+=k6yT412_GB0js9pLPTA@mail.gmail.com>
Subject: Re: [PATCH v4 08/11] xen/arm: use colored allocator for p2m page tables
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Marco Solieri <marco.solieri@minervasys.tech>
Content-Type: text/plain; charset="UTF-8"

Hi Julien,

On Thu, Jan 26, 2023 at 11:25 AM Julien Grall <julien@xen.org> wrote:
>
> Hi Carlo,
>
> On 23/01/2023 15:47, Carlo Nonato wrote:
> > Cache colored domains can benefit from having p2m page tables allocated
> > with the same coloring schema, so that isolation can also be achieved for
> > that kind of memory access.
> > In order to do that, the domain struct is passed to the allocator and the
> > MEMF_no_owner flag is used.
> >
> > Signed-off-by: Carlo Nonato <carlo.nonato@minervasys.tech>
> > Signed-off-by: Marco Solieri <marco.solieri@minervasys.tech>
> > ---
> > v4:
> > - fixed p2m page allocation using MEMF_no_owner memflag
> > ---
> >   xen/arch/arm/p2m.c | 11 +++++++++--
> >   1 file changed, 9 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index 948f199d84..f9faeb61af 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -4,6 +4,7 @@
> >   #include <xen/iocap.h>
> >   #include <xen/ioreq.h>
> >   #include <xen/lib.h>
> > +#include <xen/llc_coloring.h>
> >   #include <xen/sched.h>
> >   #include <xen/softirq.h>
> >
> > @@ -56,7 +57,10 @@ static struct page_info *p2m_alloc_page(struct domain *d)
> >        */
> >       if ( is_hardware_domain(d) )
> >       {
> > -        pg = alloc_domheap_page(NULL, 0);
> > +        if ( is_domain_llc_colored(d) )
> > +            pg = alloc_domheap_page(d, MEMF_no_owner);
> > +        else
> > +            pg = alloc_domheap_page(NULL, 0);
> I don't think we need to special-case a colored domain here. You could
> simply always pass the domain/MEMF_no_owner and let the function decide
> what to do.
>
> This approach would also be useful when NUMA will be supported on Arm
> (the series is still under review).

Ok, nice. Jan also pointed this out in the previous revision.

> >           if ( pg == NULL )
> >               printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
> >       }
> > @@ -105,7 +109,10 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
> >           if ( d->arch.paging.p2m_total_pages < pages )
> >           {
> >               /* Need to allocate more memory from domheap */
> > -            pg = alloc_domheap_page(NULL, 0);
> > +            if ( is_domain_llc_colored(d) )
> > +                pg = alloc_domheap_page(d, MEMF_no_owner);
> > +            else
> > +                pg = alloc_domheap_page(NULL, 0);
>
> Ditto.
>
> >               if ( pg == NULL )
> >               {
> >                   printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 11:04:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 11:04:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484949.751854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL038-0007Nq-GO; Thu, 26 Jan 2023 11:04:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484949.751854; Thu, 26 Jan 2023 11:04:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL038-0007Nj-DW; Thu, 26 Jan 2023 11:04:10 +0000
Received: by outflank-mailman (input) for mailman id 484949;
 Thu, 26 Jan 2023 11:04:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q9FD=5X=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pL037-0007Nd-AP
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 11:04:09 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 275f1ab9-9d69-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 12:04:08 +0100 (CET)
Received: by mail-ej1-x62f.google.com with SMTP id vw16so3979166ejc.12
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 03:04:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 275f1ab9-9d69-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=9zSsK+5GG+agbABRpBeWbZ7QWbfd2vhT5X+daJUi+BI=;
        b=oG4jAzvi/710hiyV0GGd3BTGR5Xuu4RBc60L8xZm/sDv7kNJlcjEy5Ou1slimPRkCQ
         SWB7DWFdH7r8EUIbyLac3XiwU/TztgXAF+HNtG1twQVs/L0KSSXNMFL1tgROt3idA/S2
         ZF4vF8nTZfn4J869qqYabtCQLuZiGa9f57PB+gkK1P1WMKdZizC6Nw1KHesZC883R/C7
         gVV50kNe692nIkMWbBNM1Lj6cty+a2wFHzstBA9DmfsbmI5sMujkpbqa6m/Mj3BEr1/o
         mHdqDcBUt183tFBdSBpMBYAoQ+nSAE683oubTys2YwEQWlPD2ROTxlLeE/iyiJyf7DZe
         RAvA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=9zSsK+5GG+agbABRpBeWbZ7QWbfd2vhT5X+daJUi+BI=;
        b=uPBn2jtFJGcPyTIGBJRAiDulbSV+3/XCCcJjHgCC9sA6afdiwidltG+3B2rIq/u1Ii
         jdWUL/rf9EFa1d7MrspR4Zq0JB6aoNQzsOw+yordDnQVIqACh8HGgsKQ0ThMtMGJdiST
         zF8h8dTvV0i4oslsuFCOmDFT6JNWMZZC4jlRU0NILrAIQCSgSh5Reyq2WV76rIRIjAYx
         SVYzzvtUDA4Uo+V7CY1DK0XggNf9XJ09a9/nu7q77jeV++Qo8IfcVdVyXrZurqQn5EO9
         ciu2jbxytrf9jmqzqqKXZvrTP57EGu/PYEWVtcO1aEWJH2djzo7aABSOqi42zjNj4ra6
         b5QQ==
X-Gm-Message-State: AO0yUKUmeUwvQIECu7r8QD9xvc6VsQ23y0EhXyh/IcO4/raWyAnRlx+K
	YXokvY1CaJTnZRbFATzaF0fw/rItXfvQR4ug9N8i0A==
X-Google-Smtp-Source: AK7set+514+TaqtoKcnp+ViRTk2/O6lah9EyL8DwLrPpnQKH/aYB3lOSIK24xOE2Qjiie2drX1VImsPXR8agHjVqm74=
X-Received: by 2002:a17:906:308b:b0:878:42af:c614 with SMTP id
 11-20020a170906308b00b0087842afc614mr913745ejv.149.1674731047361; Thu, 26 Jan
 2023 03:04:07 -0800 (PST)
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-2-carlo.nonato@minervasys.tech> <a470be46-ab6e-3970-2b04-6f4035adf1cb@suse.com>
 <CAG+AhRX9DVW5EfXKQoDG9hmcE0FORydTZd0pNm-0uqwddaN9NQ@mail.gmail.com>
 <6c952571-6a8d-e4fc-36ec-b5b79dac40f6@suse.com> <CAG+AhRUOBgPsT9yU3EtqSPj5VX70H1DsUL_dOWguapC+u3iSvw@mail.gmail.com>
 <bececcba-7606-924d-aba1-f51134414fd0@suse.com> <2be0aa77-3381-8552-a6e3-917e9005cdc2@xen.org>
In-Reply-To: <2be0aa77-3381-8552-a6e3-917e9005cdc2@xen.org>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Thu, 26 Jan 2023 12:03:56 +0100
Message-ID: <CAG+AhRWZkrRkm12r+P2hUXAGELXVpu=2Fqpk7mO3q=+RPt9vyQ@mail.gmail.com>
Subject: Re: [PATCH v4 01/11] xen/common: add cache coloring common code
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Julien and Jan,

On Thu, Jan 26, 2023 at 11:16 AM Julien Grall <julien@xen.org> wrote:
>
> Hi Jan,
>
> On 26/01/2023 08:06, Jan Beulich wrote:
> > On 25.01.2023 17:18, Carlo Nonato wrote:
> >> On Wed, Jan 25, 2023 at 2:10 PM Jan Beulich <jbeulich@suse.com> wrote:
> >>> On 25.01.2023 12:18, Carlo Nonato wrote:
> >>>> On Tue, Jan 24, 2023 at 5:37 PM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>> On 23.01.2023 16:47, Carlo Nonato wrote:
> >>>>>> --- a/xen/include/xen/sched.h
> >>>>>> +++ b/xen/include/xen/sched.h
> >>>>>> @@ -602,6 +602,9 @@ struct domain
> >>>>>>
> >>>>>>       /* Holding CDF_* constant. Internal flags for domain creation. */
> >>>>>>       unsigned int cdf;
> >>>>>> +
> >>>>>> +    unsigned int *llc_colors;
> >>>>>> +    unsigned int num_llc_colors;
> >>>>>>   };
> >>>>>
> >>>>> Why outside of any #ifdef, and why not in struct arch_domain?
> >>>>
> >>>> Moving this into sched.h seemed like the natural continuation of the common +
> >>>> arch-specific split. Note that this split also exists because Julien pointed
> >>>> out (as you did in some earlier revision) that cache coloring could be used
> >>>> by other arches in the future (even if x86 is excluded). Having two maintainers
> >>>> say the same thing sounded like a good reason to do that.
> >>>
> >>> If you mean this to be usable by other arch-es as well (which I would
> >>> welcome, as I think I had expressed on an earlier version), then I think
> >>> more pieces want to be in common code. But putting the fields here and all
> >>> users of them in arch-specific code (which I think is the way I saw it)
> >>> doesn't look very logical to me. IOW to me there exist only two possible
> >>> approaches: As much as possible in common code, or common code being
> >>> disturbed as little as possible.
> >>
> >> This means having a llc-coloring.c in common where to put the common
> >> implementation, right?
> >
> > Likely, yes.
> >
> >> Anyway right now there is also another user of such fields in common:
> >> page_alloc.c.
> >
> > Yet hopefully all inside suitable #ifdef.
> >
> >>>> The missing #ifdef comes from a discussion I had with Julien in v2 about the
> >>>> domctl interface, where he suggested removing it
> >>>> (https://marc.info/?l=xen-devel&m=166151802002263).
> >>>
> >>> I went about five levels deep in the replies, without finding any such reply
> >>> from Julien. Can you please be more specific with the link, so readers don't
> >>> need to endlessly dig?
> >>
> >> https://marc.info/?l=xen-devel&m=166669617917298
> >>
> >> quote (me and then Julien):
> >>>> We can also think of moving the coloring fields from this
> >>>> struct to the common one (xen_domctl_createdomain) protecting them with
> >>>> the proper #ifdef (but we are targeting only arm64...).
> >>
> >>> Your code is targeting arm64 but fundamentally this is an arm64 specific
> >>> feature. IOW, this could be used in the future on other arch. So I think
> >>> it would make sense to define it in common without the #ifdef.
> >
> > I'm inclined to read this as a dislike for "#ifdef CONFIG_ARM64", not for
> > "#ifdef CONFIG_LLC_COLORING" (or whatever the name of the option was). But
> > I guess only Julien can clarify this ...
> Your interpretation is correct. I would prefer it if the fields were
> protected with #ifdef CONFIG_LLC_COLORING.

Understood. Thanks to both.

> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 11:19:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 11:19:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484955.751868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL0HL-0000ro-Qp; Thu, 26 Jan 2023 11:18:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484955.751868; Thu, 26 Jan 2023 11:18:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL0HL-0000rh-MS; Thu, 26 Jan 2023 11:18:51 +0000
Received: by outflank-mailman (input) for mailman id 484955;
 Thu, 26 Jan 2023 11:18:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q9FD=5X=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pL0HK-0000rb-Ub
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 11:18:51 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 331c5a5f-9d6b-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 12:18:46 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id kt14so4195871ejc.3
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 03:18:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 331c5a5f-9d6b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=dqzFF7QucsnVSfBoepSCqTZ+X/O6fJVPSJ88AYHvMA4=;
        b=r116+izGoWqcUtXsMn2LBmc4VbAbsal1ABTZrJhpvuL5h+nV+XhPm1Bkkndl8h3+6j
         mGXL4EEZ+NaxpmiOaGqgzYS2QVyRa4gwLqog64iGKwTTo3dV09xh+BdzPCJagOKWiMUG
         5VtuxVc5jMULIr9fAtqaLqEsGRxlGVfFcrt6CmrYOL2yqYQFSLpQjqB/x9SsaWFa7AIS
         tvpZqbXYcrMycZ8dwE1bQWb0jrVnFx/x/6pUuudOLSCYbY2FfgSGc8A5JrAa4pnMPhp4
         QF2DiWGP93Qc8K69jXMLCZK0g6YHIzAz4ZqWvr1V5FwgRIgW4zBb5KhGo0YFDTO12BIY
         5hlA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=dqzFF7QucsnVSfBoepSCqTZ+X/O6fJVPSJ88AYHvMA4=;
        b=APiq5er3CX3wxh1rc67hb/uUHTWHbw2I5rfMrnAQphNpBwXa+qkDuxIoESezo0igmG
         lSZyW2slivvbwvk8JZ8qqWXdO6Q1kRN9Vm6kZMxA1b+4ikz6CA8jHP7Sfl37h46yuhq1
         27INPqvmhpFvHvwWTGEcrHqlNIelAn9+GLlEUQOC99o9GL24t6SRrIcUyjW7bLFLfXKa
         vi3ZsznFJTrRHuM1JCUmnzGEVFjQId8Et0b6/83fDUHk1gFesTu/T6LIlGpf7oBNrm5n
         JymVsZtCCFVNUK1mz8lItX3u9ePd7Mzq/ydcex4jVGDtbucVQEvqFWiVE+x0h9JXS9cs
         oV/Q==
X-Gm-Message-State: AO0yUKVkajajI/Umj7cQ2iIYUImc8EbKqb9Pi0f2WKHGRTltpOnRTe23
	YYbjBJkO4LGnKmtCsYLcffrNEWzA4Qp4jZATKIPdHQ==
X-Google-Smtp-Source: AK7set8czsAbe20kBZW4lC5tuIlUA5aqHtVaj/ZQ214iNi1aI0+eDHUGTcg541k/2KhunXj5uj84wEejI9fxbSmYzzQ=
X-Received: by 2002:a17:906:308b:b0:878:42af:c614 with SMTP id
 11-20020a170906308b00b0087842afc614mr925274ejv.149.1674731928050; Thu, 26 Jan
 2023 03:18:48 -0800 (PST)
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-5-carlo.nonato@minervasys.tech> <9bfee6d9-9cb2-262e-5a46-91b0bf35d60b@suse.com>
 <4e723846-09c1-32c8-94ba-3755e6af0529@suse.com>
In-Reply-To: <4e723846-09c1-32c8-94ba-3755e6af0529@suse.com>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Thu, 26 Jan 2023 12:18:37 +0100
Message-ID: <CAG+AhRULW9ZWUcKpFq6_grF-8GzdKm3CqOZpwjYz5gjTg_Uukw@mail.gmail.com>
Subject: Re: [PATCH v4 04/11] xen: extend domctl interface for cache coloring
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Wei Liu <wl@xen.org>, Marco Solieri <marco.solieri@minervasys.tech>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Jan,

On Thu, Jan 26, 2023 at 8:25 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 24.01.2023 17:29, Jan Beulich wrote:
> > On 23.01.2023 16:47, Carlo Nonato wrote:
> >> @@ -92,6 +92,10 @@ struct xen_domctl_createdomain {
> >>      /* CPU pool to use; specify 0 or a specific existing pool */
> >>      uint32_t cpupool_id;
> >>
> >> +    /* IN LLC coloring parameters */
> >> +    uint32_t num_llc_colors;
> >> +    XEN_GUEST_HANDLE(uint32) llc_colors;
> >
> > Despite your earlier replies I continue to be unconvinced that this
> > is information which needs to be available right at domain_create.
> > Without that you'd also get away without the sufficiently odd
> > domain_create_llc_colored(). (Odd because: Think of two or three
> > more extended features appearing, all of which want a special cased
> > domain_create().)
>
> And perhaps the real question is: Why do the two items need passing
> to a special variant of domain_create() in the first place? The
> necessary information already is passed to the normal function via
> struct xen_domctl_createdomain. All it would take is to read the
> array from guest space later, when struct domain was already
> allocated and is hence available for storing the pointer. (Passing
> the count separately is redundant in any event.)

That was our first approach. However, struct xen_domctl_createdomain is used
both by domctl (pointing to guest memory) and by Xen itself (using Xen memory),
and Julien wasn't happy with it because it required some kind of hack.

See this message from him:

https://marc.info/?l=xen-devel&m=166637496520053

and my answer:

https://marc.info/?l=xen-devel&m=166782830201561

> Jan
>


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 11:19:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 11:19:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484961.751877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL0I8-0001SP-4s; Thu, 26 Jan 2023 11:19:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484961.751877; Thu, 26 Jan 2023 11:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL0I8-0001SI-2C; Thu, 26 Jan 2023 11:19:40 +0000
Received: by outflank-mailman (input) for mailman id 484961;
 Thu, 26 Jan 2023 11:19:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q9FD=5X=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pL0I6-0001S5-UU
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 11:19:38 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51d9962c-9d6b-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 12:19:38 +0100 (CET)
Received: by mail-ej1-x634.google.com with SMTP id m2so3687763ejb.8
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 03:19:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51d9962c-9d6b-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=fzQGTWbaPl1YN/xW9eCb7OJQZkAto1uIKcuM5Qbgrr4=;
        b=GQflZ1jhsRdZiuXW1FlGyi4SBuecrs3kxu/buHb10WvVv5xh9vzlL5wYDj3KIuzJh9
         7cBEwFyXiQBu6QEvtzScVk/744a7UPHPtSvkyoV0DoAo8yag4zp0DfxciI4A0BH9sl3I
         8rkQripbWk+DrCXxwTEqeib1BR0eHi/RM8HIzBVR1Lsd8Eb5FiW1Inmt0iPjaLRPxI3w
         JN8W/gjFhulkgFJxM2wK3Ndr+xYhxmIopU5F/eYMxQH+0DPQj6W+psy1f/m6ZPBK0ab4
         VVpLGCLrxZvVOoRJlKP2W6bpgQ3Si37Z+oiSrCRxiFhJivD0WWaNPdr16Q1oT9YQXTZI
         BiCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=fzQGTWbaPl1YN/xW9eCb7OJQZkAto1uIKcuM5Qbgrr4=;
        b=HthcU0Nta5B4os6vwzoja61T627ag19b3H9pRHwyfiOAwRZtgjldQ05PVIPVuCmf3f
         f5K2B6D+oSGpw0bnngeyEH3Pb8yRZtSs9GtqMmxhDhrglOsg/15sVPzL7r6VZz6YmV1v
         mNmyWyAWPJqwMkoL+1WhpAVGsfOFz9TLVxV7JY/xUYFldGExux/NuO/RF6hr9p2irUMM
         FAVeGIpzUzBWd1Pn0oI4Ff1uKyXLQShOcDXpqImrg37Dm9Ok5J1l/lppEiiyRguq5Kpl
         U0KBtQLblTLpCwj+46kKTRgmL/5A6lTvi61SZcFVaxpejHulTCNfbpvp11DdjKfL/EgP
         AjSQ==
X-Gm-Message-State: AFqh2kqIKp0/z6WvRgfl3B1WxcTHvQ69v8Y46URtZ/cZD/29tN6ZVsnt
	mYH9hnbtpDMBlQEISvnI6fxY6mFdvtqg3i69kDUsQg==
X-Google-Smtp-Source: AMrXdXtSdARUMR/fTagNMtjfnY8gpMcble7Fdibe1YB8/nvg2agNvodYiUdS72YXnZaQxc7KS5eBsJbFSrXp0BLNZTA=
X-Received: by 2002:a17:907:10d0:b0:84d:49c2:8701 with SMTP id
 rv16-20020a17090710d000b0084d49c28701mr4049092ejb.236.1674731977601; Thu, 26
 Jan 2023 03:19:37 -0800 (PST)
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-5-carlo.nonato@minervasys.tech> <9bfee6d9-9cb2-262e-5a46-91b0bf35d60b@suse.com>
 <CAG+AhRW+45gt7ZyOYSjaQZbfLORNsJVeADk_Tb7j9CEyTcY6QQ@mail.gmail.com> <0ec4c364-1e18-4176-ac24-ece84eb72859@xen.org>
In-Reply-To: <0ec4c364-1e18-4176-ac24-ece84eb72859@xen.org>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Thu, 26 Jan 2023 12:19:26 +0100
Message-ID: <CAG+AhRVF+XEbbkARh5VuZuh2JiE6J3Z3yXvXQCwD_vrLDhCB6Q@mail.gmail.com>
Subject: Re: [PATCH v4 04/11] xen: extend domctl interface for cache coloring
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, 
	Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Julien and Jan,

On Thu, Jan 26, 2023 at 11:21 AM Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 25/01/2023 16:27, Carlo Nonato wrote:
> > On Tue, Jan 24, 2023 at 5:29 PM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 23.01.2023 16:47, Carlo Nonato wrote:
> >>> @@ -275,6 +276,19 @@ unsigned int *dom0_llc_colors(unsigned int *num_colors)
> >>>       return colors;
> >>>   }
> >>>
> >>> +unsigned int *llc_colors_from_guest(struct xen_domctl_createdomain *config)
> >>
> >> const struct ...?
> >>
> >>> +{
> >>> +    unsigned int *colors;
> >>> +
> >>> +    if ( !config->num_llc_colors )
> >>> +        return NULL;
> >>> +
> >>> +    colors = alloc_colors(config->num_llc_colors);
> >>
> >> Error handling needs to occur here; the panic() in alloc_colors() needs
> >> to go away.
> >>
> >>> @@ -434,7 +436,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >>>               rover = dom;
> >>>           }
> >>>
> >>> -        d = domain_create(dom, &op->u.createdomain, false);
> >>> +        if ( llc_coloring_enabled )
> >>> +        {
> >>> +            llc_colors = llc_colors_from_guest(&op->u.createdomain);
> >>> +            num_llc_colors = op->u.createdomain.num_llc_colors;
> >>
> >> I think you would better avoid setting num_llc_colors to non-zero if
> >> you got back NULL from the function. It's at best confusing.
> >>
> >>> @@ -92,6 +92,10 @@ struct xen_domctl_createdomain {
> >>>       /* CPU pool to use; specify 0 or a specific existing pool */
> >>>       uint32_t cpupool_id;
> >>>
> >>> +    /* IN LLC coloring parameters */
> >>> +    uint32_t num_llc_colors;
> >>> +    XEN_GUEST_HANDLE(uint32) llc_colors;
> >>
> >> Despite your earlier replies I continue to be unconvinced that this
> >> is information which needs to be available right at domain_create.
> >> Without that you'd also get away without the sufficiently odd
> >> domain_create_llc_colored(). (Odd because: Think of two or three
> >> more extended features appearing, all of which want a special cased
> >> domain_create().)
> >
> > Yes, I definitely see your point. Still, there is the p2m table allocation
> > problem that you and Julien discussed previously. I'm not sure I
> > understood what the approach should be.
>
> Henry has sent a series [1] to remove the requirement to allocate the
> P2M in domain_create().
>
> With that series applied, the requirement to pass the colors at
> domain creation should be lifted.
>
> Cheers,
>
> [1]
> https://lore.kernel.org/xen-devel/20230116015820.1269387-1-Henry.Wang@arm.com/

Really nice. Thanks to both.

> >
> >> Jan
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:04:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:04:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484985.751887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL0z5-0007a1-V1; Thu, 26 Jan 2023 12:04:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484985.751887; Thu, 26 Jan 2023 12:04:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL0z5-0007Zu-RJ; Thu, 26 Jan 2023 12:04:03 +0000
Received: by outflank-mailman (input) for mailman id 484985;
 Thu, 26 Jan 2023 12:04:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL0z5-0007Zk-Ad; Thu, 26 Jan 2023 12:04:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL0z5-0001TC-6p; Thu, 26 Jan 2023 12:04:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL0z4-0005Qm-LA; Thu, 26 Jan 2023 12:04:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pL0z4-0008Dk-Kc; Thu, 26 Jan 2023 12:04:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PlT1ak9kVEKcDDwoRnxtfWTTEyNMRFnjOTiDjeYopW4=; b=U78Sj4vZElFUSiDTuHNCpft8ch
	Db203o7GX2kX9/jnnF9adP2k1xQLUDObDkBfFK4Is/vBtavm9YObSboNIV9ldNuowpJsoA4HHgNq2
	r/5jA3rYnp6EjGuwrI0P7GOnSW0gRPZeKtsbQvsOJIhzrcwWOm+FLcykKRUg+OVQmL7A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176140-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176140: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-coresched-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
    xen-unstable:build-arm64-pvops:kernel-build:fail:regression
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3b760245f74ab2022b1aa4da842c4545228c2e83
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Jan 2023 12:04:02 +0000

flight 176140 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176140/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl 18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-xsm       18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl           18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-pair  26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 test-amd64-i386-xl-vhd       17 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-xl-shadow    18 guest-localmigrate       fail REGR. vs. 175994
 test-amd64-i386-libvirt-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 175994
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 175994

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 175994
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3b760245f74ab2022b1aa4da842c4545228c2e83
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    6 days
Failing since        176003  2023-01-20 17:40:27 Z    5 days   14 attempts
Testing same since   176121  2023-01-25 10:51:47 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1067 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:04:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.484992.751897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL0zt-00084k-8y; Thu, 26 Jan 2023 12:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 484992.751897; Thu, 26 Jan 2023 12:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL0zt-00084d-6D; Thu, 26 Jan 2023 12:04:53 +0000
Received: by outflank-mailman (input) for mailman id 484992;
 Thu, 26 Jan 2023 12:04:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL0zs-00084T-4P; Thu, 26 Jan 2023 12:04:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL0zs-0001Tx-3Z; Thu, 26 Jan 2023 12:04:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL0zr-0005Rx-Pv; Thu, 26 Jan 2023 12:04:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pL0zr-00012J-PT; Thu, 26 Jan 2023 12:04:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+OmUHSYHsNr/JUmhwbLgyGxpx+vgRMvixWpuyMpwIZc=; b=cEgoVMw3FD9FZMJ3vXUtvufm+d
	zmjY0KFukTvtd0nscOiIPhvuFb2wATulhHkO8FXhOTRCKBhmhrppwB7EKZ74OkjHnKpyJKOoSDxaw
	ureHwQe7LfMKOxYTAEnYsf8HK5skTkAlsNjvOZj+v/wG9D6Qcs26aDSC8ESNRKZSHX60=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176144-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176144: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d0ff1cae3a1ab20ffd5a1d80658c38c113585651
X-Osstest-Versions-That:
    ovmf=37d3eb026a766b2405daae47e02094c2ec248646
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Jan 2023 12:04:51 +0000

flight 176144 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176144/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d0ff1cae3a1ab20ffd5a1d80658c38c113585651
baseline version:
 ovmf                 37d3eb026a766b2405daae47e02094c2ec248646

Last test of basis   176059  2023-01-23 06:10:49 Z    3 days
Testing same since   176144  2023-01-26 09:10:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Jake Garver <jake@nvidia.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   37d3eb026a..d0ff1cae3a  d0ff1cae3a1ab20ffd5a1d80658c38c113585651 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:06:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:06:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485001.751907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL10w-0000Nz-Kx; Thu, 26 Jan 2023 12:05:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485001.751907; Thu, 26 Jan 2023 12:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL10w-0000Ns-Gj; Thu, 26 Jan 2023 12:05:58 +0000
Received: by outflank-mailman (input) for mailman id 485001;
 Thu, 26 Jan 2023 12:05:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pL10v-0000Kk-N1
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 12:05:57 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2042.outbound.protection.outlook.com [40.107.249.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c5854333-9d71-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 13:05:49 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7892.eurprd04.prod.outlook.com (2603:10a6:20b:235::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Thu, 26 Jan
 2023 12:05:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 12:05:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5854333-9d71-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U59pi/IFyxkp89gNnR45EgeJgf6br5cm0nD0wXFYIdmvy5oRxYb0OTITB93C7B7Mun4gEJ9IJhowBKWpYHzwP2UpAsi0ZFCvJk1NmUtLoCRqDv24rL987yJsjLr43ujMPrjFtKeyU34L0G4PJoGazUynqvrH0JKXC5ZddxzBpyKiNUx8qx+GQul8Ike/HapQMt9VP/6txHavEizNdnapY3Xp3CKreAA8FqlshQqMMDFDhGf0hpgYnc2Spz+AeooyMQCSoLuLarcqyUXDMSR8+7LK0N+t9gJ+PvD+OcgJ/PDz/D7UBr/ltVH+RkT8qiuTpXhUc30iLZrH9ZZY7t3wPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Lgkh4n+3WNdn4C0TtYi6Ty1jSE4VWSqbFzvEfXccjvQ=;
 b=RL+YalR5zrICOwUR19w4QcWGcdHang2zsaCYs+kVcCIBHInYDgocxvAE+1uO2C4mcQ1kvLYPmnYh2Bf0ADKH2ZnUOHMntLS1AX9x+bqT3GuCu1JyK0j2xlxAbDsMNyRTePEt7DJ9zqt/+JJkhNx7w+KB864VHbJo27+54MODpFbk6hvxmVaEsM9J8ZSWS6x9aAn7ZNfyMgks8lVKhVn5WR3G4bDEsIm5t0CkhWEBuMkNEdFHu12uyFXn3UvgwG86C6tl/Kwkxj94FrDxZx5Gpwsu3P14arhLm1OLHwb1g4ZY2su0gSXv7NoiJGfYKbggSvd10eu1N5kLPgTvQF5RZQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Lgkh4n+3WNdn4C0TtYi6Ty1jSE4VWSqbFzvEfXccjvQ=;
 b=di1Jn64daczxPLS/QW00PoMgkBwGqHv5ujjvxw+JakMK4apknjAwOhZNBuuAIKATIlmI5rQVQhqKi8emTSUybwtteFJvtT6C+qNUphfxlQz3GK+Qe2AzwaQ108W50HGOnMKQvAkJYPNbCy2tJMlUfdFXCwXB4ku14l5KNV1//aslAqo7pZA3TE2hxh6dInh+u4v05hzpUN0sNHjiiHmPImUTlwAatPqs8xGeKCdJhF17/i8PxialkhBlHJmq6AfKuyZVPtGHevesPosoIsf1Gahtj7v+ihXGvLQTuxPC8kKHXnIQX54EJqrmvc7RkxYXK+2SIAqmfHOmB2H1iC30sw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <347328d3-dd01-4d50-b592-87544534f5df@suse.com>
Date: Thu, 26 Jan 2023 13:05:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 02/12] xen/arm: add cache coloring initialization for
 domains
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: andrew.cooper3@citrix.com, george.dunlap@citrix.com,
 stefano.stabellini@amd.com, wl@xen.org, marco.solieri@unimore.it,
 andrea.bastoni@minervasys.tech, lucmiccio@gmail.com,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20220826125111.152261-1-carlo.nonato@minervasys.tech>
 <20220826125111.152261-3-carlo.nonato@minervasys.tech>
 <308a7afa-a3c9-b500-06c1-3d4cbe8bbf65@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <308a7afa-a3c9-b500-06c1-3d4cbe8bbf65@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0120.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7892:EE_
X-MS-Office365-Filtering-Correlation-Id: 2b8d39ad-196a-4c9c-5f8d-08daff95ac46
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AwT9LERUYxnGXvJzC2aX71RuhuggoWXdfPOMrrq6QID9iZXTJLSgiusRU6dprpGIOmOvzgNqitH9f3mlTSO4ZAJPGRvF1vBNlu1PdJUrvX533eko1VIOJuUFuK7YHM5AIoOCshM61ut+9KvfwZDKPnKYo/BsaeMD2W6K1OtPQh+fDYhjNXdlYtvxU2Nx+JwJ3gYOk4wwoFeo1FqPYECyc5iWzYs4UevKnzRv1alVHJI3BlgFCTqUyjLH9tbcegJfAqmOX4xg6YgaZNzZZ2U6exxI1tNnhDsrrv1a7R7+u69A13hJKyH686pX0G57sIEnw/XHDwVRt3DjUWCwgJ3Y08HuueViJI5qKaJkn0j/vObAc4czDtWhMdAwHXB/i9pGKcrFGhGzNp0JDFzODjn9t1UPPsGiNXhpVkgffgECOMSpXeQwABJytcECAvQy9hlW9XYstyw8LT8oaq5qnCzKdcGES3GqttdfrBUkXFKE61zVyo9VVPdWP9gUmu0t6OhzJHMUMfhDroZ+4l7ls8YMaUHbxhq33QPwF5zGdICetJK91VWx6wtfxUTMhhQjf/0qT1v1apqFGHM5+3DoRz65/LOai7jnkT8L1FNVt03AKwyTmz92/RlBE8fnCk23d4nTLTi11tZ2+S+210EsoeGxdIQPcsZ0ZVtUCjsmBWO4HMRNapEr1YTtZV/ZcJf8rO/w60Yl0k80VkbfnhMWTS5kXrKwBuE0PCwFAk+Y2dZz7kg=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(366004)(136003)(39860400002)(346002)(396003)(376002)(451199018)(66556008)(66476007)(66946007)(41300700001)(4326008)(8676002)(83380400001)(316002)(31696002)(86362001)(38100700002)(6486002)(110136005)(2616005)(6512007)(36756003)(26005)(53546011)(186003)(6506007)(478600001)(2906002)(31686004)(5660300002)(8936002)(7416002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Qm0zOUcrTGc5dStBUTBwQmVjcCt3UlZoQWZXZmFid1laY1ZUOEd1b2ZBS2dl?=
 =?utf-8?B?anF1RFBWcUdPcjM1anA5K2tsSUN2aUtsTUFQaklVM3NWRkVHYmQ0enRNc3ZX?=
 =?utf-8?B?SkdscTBFRWFCTWMzaEMzOWtvVmVBdDQ0RU5aVmU0VkxiTE9jVmFsR2N0V1B2?=
 =?utf-8?B?OHpheHZtaFhUd2RnbEt2L3preW9DT282dVNNTzlzVW84cGlsYWZncjNILzNy?=
 =?utf-8?B?b1FmM1RySVNSWXpnUUwva1R6Sjl2NU15TEhGYkZoLzMzemJYVWhQdjlwc3hB?=
 =?utf-8?B?eEJRTUxQbVl1UmJmeXFrall2Y244d2ZzZlpUQVFUVjhHZWliVVpoRHNkUmpE?=
 =?utf-8?B?UEtEOGNBZUZmaU9rYVFLamZnMXZoVG5CZDhmaWtWaXNIc2VMTXFGaVZJRG5x?=
 =?utf-8?B?NjU3U04xeFdTRDlKRFJzcW54d2ErVGp2czB4Q1JNUXZWeU5zUVpUWDBkQ1BF?=
 =?utf-8?B?TU1Sb1JhYm84UWx4RllyMGRlcmRvREpaY3RSS1RBRTJQd0JoK25XSWcwSE9S?=
 =?utf-8?B?bnNyS0RpODVlOTY4NXZpSUwvYkxab3ljWEx0N2IyaTd4bCtxazhRUFN4SVdp?=
 =?utf-8?B?WTM3THFub09sMmZwUHVsYlJ1d2x2bDFCejk5dVdzUGpKcjBIajJORW8zNUlt?=
 =?utf-8?B?VktFZml3TTZYYmtqcStYZnZvRnhOT3VnanU2dVR6eVpKcldmOFNvZDlMdGo2?=
 =?utf-8?B?Qm5TdlZ5L0lCcklUaW5MOU56bVNqeVppMkRjMzJzYk5OY1RJMU5tMFd1QjNH?=
 =?utf-8?B?bmxZNkZBQWdBRWJkOWxYUkhYN3VMV3lMVEd3UXJIVTl4dlBzK1dQVkt3SWlX?=
 =?utf-8?B?c3AyQTFTTWtMdTJRT3pDT2xyUnVKeWhoOERCeXkxOGpNNzF5dG9HN1VUWHZz?=
 =?utf-8?B?WWt5bFpHcmd3OG51MDU2YU9OQXdPbUNxSTRZYmFsRlJtcm5PVGYxem1meHZw?=
 =?utf-8?B?dzFnL25EbHNyRGM0OVl2RDd1RXBVQkRJem5wMTRSMThOVHdNRzNlV1I5aW56?=
 =?utf-8?B?eXpMT2lUeVlzWFJmODN6eHAxNkhuOHdXQ3p1bXVHakJoRXdsUmFVR1lESFRq?=
 =?utf-8?B?LzQyMlNBNG5RWnVTVHFqck9zVGZtS2ZhQzR1bGY3cVRFcDJ1ZlFtZ0VuYktU?=
 =?utf-8?B?N2hZbnV4ODRWN2JtSGw0TmNuUTVzZXgxUU1GSk9QNnI0OEQzQnovYlJRMHpK?=
 =?utf-8?B?dERQMStzd1FMT28yUmRtYlJkZkNZc09HdTNKQWVmRDh5UGNsNldwdXJXY2tT?=
 =?utf-8?B?M1ZGTWhXUXQzUy8wb0wyNlIzRC8zV0pWQy93TVVvRElYN21yd3lYV29RT2U1?=
 =?utf-8?B?MEJvdE5uVWVJanp4VDJwYTJSbzNyWTlIdEJ6VnZnSFNXbjByU0hIaVE0c3pz?=
 =?utf-8?B?QTlIa1NPWlFTRUhxTEJjTlkvbjhzWWdOc2sydTlvM1BDYzg2QnVIT1NZUjZU?=
 =?utf-8?B?MVN2Qng4eXdFeDUwbnVPZ0QvTHJRdUlkUnVKZ0FrbUNVd0dqbE1XR0ZCSzhk?=
 =?utf-8?B?dlRFVFhvSUh1YlpIaml0YzNkVWV1Wk1EYWxNNjluUkNMTTJBbWdONWZHcHF2?=
 =?utf-8?B?dnkxL09GQVZ2enMxVnVTR2lIR3FVUUEwb2V0Y1lWanl2VEtoaGNMUmRTUjZY?=
 =?utf-8?B?UGN6VElnR2FvT29uL2lZbTdZd2JhVENKazVBbXVBWGZzU1JtTi84dkRRREEx?=
 =?utf-8?B?NXg4RFlkMUd5SVZxNWVwWk5IN1BlRGE1bUhxUG1pT3AvblJtZGtwUFJOMVpJ?=
 =?utf-8?B?K1RlVzFaMTZDOURLaHlmMC93ZkRhTjhkRTRqUWMyWC9JRDRSNHdYL0hSQWpp?=
 =?utf-8?B?WS9ZRHpFOHFHWVFrMDgwY21DNGN1YnpiaXlZQkpBVHp6N3creVhyU3AvVnVq?=
 =?utf-8?B?cFlBa3F1YXFxRmNEeXJZN252dHpHalhmQWorckRPazNYYnpmR1dVcFpUNjFX?=
 =?utf-8?B?Y2tZWUpmekNvYmFOL1ZnNDY1MUplSHV1K2ZDMjF3Y00zdTRHTnIzSldXdDkx?=
 =?utf-8?B?RlZMR2JRTkJydFVWMmZJNjhiQm1LREQyTlBtODNsTUkyL1d0N2dWOTdNcUFh?=
 =?utf-8?B?OGt0QkVtc3UyRlUzTkRZOGlHNUp4NnEycm1JUWlYRzZDc2R4QnJ4MjcrUGFs?=
 =?utf-8?Q?XeA/Qt32ScTnNFMJoB+ypq/+L?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b8d39ad-196a-4c9c-5f8d-08daff95ac46
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 12:05:53.3411
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bwiBN+W9fn6eTv78XbtGxqgx41ONvKAkfMtlcKNgkKVO5abuVg7Y9vFtVhYzrnUhAkvvY6DEwv/lF1UGh3waTg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7892

On 21.10.2022 20:02, Julien Grall wrote:
> On 26/08/2022 13:51, Carlo Nonato wrote:
>> This commit adds array pointers to domains as well as to the hypercall
>> and configuration structure employed in domain creation. The latter is used
>> both by the toolstack and by Xen itself to pass configuration data to the
>> domain creation function, so the XEN_GUEST_HANDLE macro must be adopted to be
>> able to access guest memory in the first case. This implies special care for
>> the copy of the configuration data into the domain data, meaning that a
>> discrimination variable for the two possible code paths (coming from Xen or
>> from the toolstack) is needed.
> 
> So this means that a toolstack could set from_guest. I know the 
> toolstack is trusted... However, we should try to limit the trust 
> when this is possible.
> 
> In this case, I would consider to modify the prototype of 
> domain_create() to pass internal information.

Since I was pointed at this in the context of reviewing v4: The way a
clone of domain_create() was introduced there is, as pointed out in
that review, not scalable. IMO the prototype shouldn't change unless
really needed (i.e. benefiting at least a fair share of callers), and
clones like this one also shouldn't be introduced.

Hopefully this is all moot now anyway, as there appears to be
agreement that this won't need to be part of domain_create() anymore.
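The alternative Julien suggests can be sketched roughly as follows. This is an illustrative model only (none of these names are Xen's actual identifiers): instead of a guest-settable from_guest discriminator living inside the shared configuration structure, the internal caller goes through a separate wrapper, so the copy path is selected by the code path rather than by toolstack-controlled data.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the coloring part of the config. */
struct color_config {
    uint16_t num_colors;
    const uint16_t *colors;   /* internal pointer; a guest handle otherwise */
};

static int copy_colors(uint16_t *dst, const struct color_config *cfg,
                       int from_guest)
{
    if ( from_guest )
        return -1;  /* the copy_from_guest() path is not modelled here */
    memcpy(dst, cfg->colors, (size_t)cfg->num_colors * sizeof(*dst));
    return 0;
}

/* Internal wrapper: hard-codes from_guest = 0, so no discriminator
 * needs to exist in (or be validated out of) the ABI structure. */
static int copy_colors_internal(uint16_t *dst, const struct color_config *cfg)
{
    return copy_colors(dst, cfg, 0);
}
```

The point of the wrapper is that an untrusted or semi-trusted caller can never flip the internal path on, because the flag is no longer data it can write.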

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:08:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:08:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485008.751917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL13K-00015j-61; Thu, 26 Jan 2023 12:08:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485008.751917; Thu, 26 Jan 2023 12:08:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL13K-00015c-1X; Thu, 26 Jan 2023 12:08:26 +0000
Received: by outflank-mailman (input) for mailman id 485008;
 Thu, 26 Jan 2023 12:08:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pL13I-00015U-Eu
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 12:08:24 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2071.outbound.protection.outlook.com [40.107.6.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 217339c5-9d72-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 13:08:23 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7253.eurprd04.prod.outlook.com (2603:10a6:10:1a2::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Thu, 26 Jan
 2023 12:08:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 12:08:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 217339c5-9d72-11ed-a5d9-ddcf98b90cbd
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EVgjPFRpbqZoSB1YAWpH57dwTeqQSLBBLndWcQ69wtLydByL+xPeDeOawS0ujUtM2pSgkNQvzVnb41hzbLDgjVQ4cSsMH5rpAj6LvggsPJ89z74aV0inX09GRMJ8nf9O/74D1KLwXgII1wAIqXNBrqvGCRANK79XlHlhtheXdCw91+nBAoRIR7zcVG5hN5bQsoDJFUpw3kKA5jzTUuwIIye6zG+pHrWBg/tjB112we1Xcs0rYbjXnZBEReRbyJA3c3YJdN4ouZ5MluQvUuP1jjYh/7LSfOL5hYiE/GRhD+6ox792wbHr+/xEKG6OSf4e/y05KHWwg2cBz7hRmCnpxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Oicwn2j9Il7hRUshoGUBWA0bFha8kNNLdMZO6PpzwG4=;
 b=Mq+StqAB0mgs64uGPVDgiXM0vKvbBJGZNHP0ES0CK87+Bm4GNs8BU7Whf2l/wWZGpPHQL/uGZEiJSKI05gaZwomlGPT537AERsJwsaZ0rZvOpj1M3SDfP4+ef6xLpdrv78IfIzhlu/JUeRtmrGhpT1blxJN1lxnm1VjNwYOEgUGOpjU1dFEDTp/2J1I3qfYb6vKCd51I9pn3W08t7FiS1QBcdRKPhUcleY23+urOrk5QTuvfINKK7Wu8UpUjz2V9a0koaf/GnT8FqKNyTwOtV41/njDODPre3CnVHppG/1oASfc7U9LVRsJHYi433cp+vz0Ijul+KbqjQ0S9nTzXgQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Oicwn2j9Il7hRUshoGUBWA0bFha8kNNLdMZO6PpzwG4=;
 b=S5XFeunJLVcKZTPb1hIWui9udf+/JoTxNtki3fQSsEg6RJRZN0qzfsrIB8Y4czbJ/lhRpM5O05Q6XHnf+En0SqBc5xkbaFV8BjNEXieTpewb0LT8lEFQ320el7lr/YJp7xXgF5r0w1EJ3ST/wKk4XCQDD4qt5JBy+oT+bYRn9aUGsw3Zns7rfE0hLmGdxgAap+93hv5JOHc4dcIluHW5t2HSNxhAZ5Rbhp1oAvFn0ZN33IxyA0kE+Szk2+5GNttxDF//+7sbN+LvDlAbkbWgOY/n/1fDpSn3ZJnVdcMK/Wz2zZ/T8KNkTbkXqa5tvQzJHBeYL/UhIAByiZnaxuxnXw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e0a27c49-e31f-8298-91ae-60ab4c47aa9a@suse.com>
Date: Thu, 26 Jan 2023 13:07:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 02/12] xen/arm: add cache coloring initialization for
 domains
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: andrew.cooper3@citrix.com, george.dunlap@citrix.com,
 stefano.stabellini@amd.com, wl@xen.org, marco.solieri@unimore.it,
 andrea.bastoni@minervasys.tech, lucmiccio@gmail.com,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20220826125111.152261-1-carlo.nonato@minervasys.tech>
 <20220826125111.152261-3-carlo.nonato@minervasys.tech>
 <308a7afa-a3c9-b500-06c1-3d4cbe8bbf65@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <308a7afa-a3c9-b500-06c1-3d4cbe8bbf65@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0117.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7253:EE_
X-MS-Office365-Filtering-Correlation-Id: 2b6639e6-e0ee-40ec-48af-08daff95f905
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	x7/5FSpMBw5lZXYJcLvKALPKUtaCaNGoo3FWsVdUe9sYP+nnBVT1wmqwEkCoWFISsOL6r/exuzsB9auIlKdBaB/OwfyvJQpt81OjyOWr3jxsqtavNbUB5AHxvBJUXazvwhqaQUkGWBLs9Wbm9lhE78bSQROe6l01icwOZDEtdK2Pitc5Ki0n4sHVnca3C6zffdlUCKpPSQL179QvhKuK2kXuZZBh8XbjFqwWD7LdTHR46itJrRajD9jEJkwzh2foFJnEZ2fluFmDtCXDvwWqSE1wWvrmYKkxTEVV8kTVKMR5wYU/z6gq0xEAoJFVjIkYO5USE7M1li59HSZ/ALrEeXQ89NCKjhZb8DgNMLHtaVyymPKSXDfu8wrNa9qXhs08Q/Sv+QEzGDdBCH0CZJtiEB9jKRacXo0KaCnO0La9WqH+T/Co0rSdbMNqWzJOmFin7A6LOw26isJKacCkPU2CbWalvV7Q0I3AZ/sbFkJrvtpxYYwswOu1cN4sICVvbHfVVFlDB2schbZObrKQcSgzZ7S+xaFIGrQzu9iMDejSifhh2FNKz+Yt6sqIEmP4mGxSCKCMJf+bLs15ymP7xUuc2yyazBNvBJXyIRjgz7+s5OnomYihh4URZVbLO2B20t9P6yP0b/d5yQT9X6396rl/lDQU++p9MjkPopRv5XltKKpSrMnKnYL/r6ei5ihtOKs1qOl7h+RxHTJf4wUMVKKxWWi84tKv48T2fRQc/C5+kYs=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(396003)(366004)(376002)(136003)(39860400002)(346002)(451199018)(36756003)(66946007)(38100700002)(478600001)(186003)(6666004)(2616005)(26005)(6506007)(53546011)(5660300002)(83380400001)(4744005)(2906002)(6512007)(7416002)(8936002)(41300700001)(86362001)(31686004)(31696002)(6486002)(316002)(110136005)(66556008)(66476007)(4326008)(8676002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cDhrczNGOTN4QzhrUmN3OHFERjhMSjBzZ1NPaUUxdndTUGxsUUF2OXVnbmM4?=
 =?utf-8?B?MnY1Ti9VNVpzWkVib0FYUEl1dXdUVnRkclJYa1ZZMnpDOEdxbFowMjBjbitF?=
 =?utf-8?B?Qm9EUGpCU2dOSFYyRGdCbGplc1NkaEtMcFFkRkh5dWZFczdEQVp6WEtQOTAr?=
 =?utf-8?B?aHNWVFI1alpOWVg4TDBrczcrZmJnbnJKbStZSXo1U0gxOVI1UlBsTnlKdW5D?=
 =?utf-8?B?VGZsdnFDVUtvNGRPeU1ZS3AvWFNMY0tKL0JLUW5qUmRFY2ZXS1luTUo5djIv?=
 =?utf-8?B?K1NJUExpbGFFbzVWei80cVM2dlBxUTlQZFRNUVBJaVZsWUIvUlZrNWVJMnow?=
 =?utf-8?B?TjJVaStNbHpOWklkc2tKWHpGaEcwRnU2YmtMRnhxR2hKaXFaRE5NckdQQUFr?=
 =?utf-8?B?RDZRZkFWZ1FFUklTM3dnNGRJaFNrQ1NSeE9hSGJhN3M1NFBaVVdIVWRIK2FV?=
 =?utf-8?B?MytsRFYvRTZPWWlrUURWdlRjRTNCb1h2ejE0V081Tk5VLzR5WENQWERaeVZP?=
 =?utf-8?B?bmZjQ0pCYTRkVGxCbGsxNjRKa1NIMk93dkZGRE9va0tRa3RMMFFuTmo1Q3hM?=
 =?utf-8?B?cTI4RG5WcC9renhIbU5OZC9oNmV2eUZ1UldkQ3l1aDg5a09aNi9FNk5ZOTF3?=
 =?utf-8?B?NUVlaHFxQmN4TXdDM0V3RGhkeUw1RGRaU3hMMm92aXFoNll4QlhCU0JTSm5j?=
 =?utf-8?B?UlQ5UmdJRGMwQkdSTW52Q1RLSDAxODkvdDVITFcwaFZGVnJXbitueS9yRHRh?=
 =?utf-8?B?T0EydDdDRExGQndXVWVuZjcxZWt1SlJOd0FWT0x3RHczMHlNVXBycGFrOFVI?=
 =?utf-8?B?cy9RcEdMRU9PQnd1ZFNFMStkRnRxRi9MYXNBUTNVTVc5RW1yS0tQbm1yN1RP?=
 =?utf-8?B?KzBvSDBGSWVmN1owNXZrb2F4QjNLRDY5UG1jWUd0eVpTVG1qN0lTcnYwWWhT?=
 =?utf-8?B?b044TDJDSDRVbk9mcGNyb0JQYktmNnJSWEVaNjI4am4xdmhIZUZ4ZHB5TmNV?=
 =?utf-8?B?VjNYWjNmY29sUm9hUkVITkk5VHpuUzJhdGxIQWZIbzZTaUllS2ZpUWlVS2pB?=
 =?utf-8?B?aTNQMytFSzJ6d3hGUFp1SXhHRi9JUllFYjAwTzNNenptKzFrM1JCNmR0ZUtP?=
 =?utf-8?B?djRsSWhWelFUMXhoSitTL3A5K25uZ25YY25nalJ2WDB4aU8vTEdyT0d1OXRz?=
 =?utf-8?B?Z25xZUdsdkdnOEVnZUplK3FsS3EwK0xxUDltVFVSTDhZSUV0Z2dZbytKdWFl?=
 =?utf-8?B?NVJQd2Z3U2FJbGdIcWFHaS90VEh2SjNhb2JKRnUvVDNLUlNnTzlNd1krRU44?=
 =?utf-8?B?NklsbnV6TW5acllsazRYWFpiVk5NaU9ralhxM1BsWkh2REczdjZmcERUR1Jz?=
 =?utf-8?B?S1hDcnkrK0Q4QlpWelBFT2o5SWJIZGpKaDkzVTBUa2IxaEZKazZ6aUxrWnR6?=
 =?utf-8?B?TThYTWNkMllYZkRRVlR0Z2pTWVdkdFp4enlnMXE0OWh6aTR6Qm1VcXR2ZHEr?=
 =?utf-8?B?b3RTZWxoQ1g2QW9heTVSNmxFMGxRc0trdnkyQzRLbXo4c2h0TTU4RE11NUFT?=
 =?utf-8?B?bnpxc0NLRUJPa01QbTdZTi80R0RHcFdQWGNETlJqa2xPY1Rycy9nL29HQzhG?=
 =?utf-8?B?YkRBeEI0a05yQjA2UlhaQUNXeHlvTG1JbDdobS8xRmRISktlaWNmUnlMT0Z0?=
 =?utf-8?B?OFdRbHYrQWU5U2R4WjJjRlZPbGhZOFFCU3JYekNNNmp1SGc4R1I3Sm1iZEwy?=
 =?utf-8?B?VW5wL2dqQWc3QlZXSkt5U0NuWjRqaG02OUR3bUVVclhmRW9nbWRjam1vblpB?=
 =?utf-8?B?NTNSZTVKdGdzNEIvT0l0UFF5ZVpFZ3VSTVJhenl6ZFNCbkkvRnB2dEFMb3B3?=
 =?utf-8?B?NXRNczVnaWV6QmoreUlLODJCeGJpK09VY0tTaGNSL2ttOGdBNHVmS1FVYVNv?=
 =?utf-8?B?aDBkTmJDb1ppQ3RIaTJpcG1DbXRwZUhhYXo2ajhLVEx0Y1FkLzlzOUg4MzlM?=
 =?utf-8?B?OWVRSnZXTlJGUitPZ0twczRhMjIzTXBEaXp5S1hzWVZZNzhSdVZvUm96THhX?=
 =?utf-8?B?eW5mNnVlMXRhdnJJZTFjbWo5aThVOXE0MXF0V3c1Z09MVnJEc1VJVXJsdjNq?=
 =?utf-8?Q?GbAXKBauJGuUcD3ng9piJ/Wpk?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b6639e6-e0ee-40ec-48af-08daff95f905
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 12:08:02.0835
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 396ApYTUTX3OuOmfaMkpYPVepq7PRlQxbH8jeP+/vMEa4DPvjYTL8I5PMijFqHNQSzbBVMa5vVo1esoVHpb0GQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7253

On 21.10.2022 20:02, Julien Grall wrote:
> On 26/08/2022 13:51, Carlo Nonato wrote:
>> @@ -335,6 +337,12 @@ struct xen_arch_domainconfig {
>>        *
>>        */
>>       uint32_t clock_frequency;
>> +    /* IN */
>> +    uint8_t from_guest;
> 
> There is an implicit padding here and ...
>> +    /* IN */
>> +    uint16_t num_colors;
> 
> ... here. For the ABI, we are trying to have all the padding explicit. 
> So the layout of the structure is clear.
> 
> Also, DOMCTL is an unstable ABI, so I think it would not be necessary to 
> check that the padding is zeroed. If it were a stable ABI, then we would 
> need to check so that the fields can be re-used in the future.

Independently of the other reply, a comment here as well: While domctl
being unstable does permit omitting zero checks, I think we're well
advised to still have them. Then we can re-use such fields without
needing to bump the interface version.
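The explicit-padding convention plus zero check discussed above can be sketched like this. The structure is a demo, not Xen's actual xen_arch_domainconfig layout; only the placement of from_guest/num_colors mirrors the quoted hunk.

```c
#include <stdint.h>

/* Demo layout: padding made explicit so the ABI layout is unambiguous. */
struct demo_domainconfig {
    uint32_t clock_frequency;
    uint8_t  from_guest;
    uint8_t  pad0;          /* explicit padding, checked to be zero */
    uint16_t num_colors;
};

/* With the padding explicit, the size is fixed and verifiable. */
_Static_assert(sizeof(struct demo_domainconfig) == 8,
               "unexpected demo_domainconfig layout");

/* Rejecting nonzero padding on input keeps pad0 reusable as a real
 * field later without bumping the interface version. */
static int validate(const struct demo_domainconfig *cfg)
{
    return cfg->pad0 == 0 ? 0 : -22; /* -EINVAL */
}
```

Without the explicit pad0, the compiler still inserts the byte, but its contents are unchecked garbage that could never safely be given a meaning later.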

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:09:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485012.751927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL14L-0001e1-FD; Thu, 26 Jan 2023 12:09:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485012.751927; Thu, 26 Jan 2023 12:09:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL14L-0001dt-B8; Thu, 26 Jan 2023 12:09:29 +0000
Received: by outflank-mailman (input) for mailman id 485012;
 Thu, 26 Jan 2023 12:09:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0+3p=5X=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pL14J-0001df-Rd
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 12:09:28 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 417314ed-9d72-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 13:09:18 +0100 (CET)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id AE6425C00E9;
 Thu, 26 Jan 2023 07:09:22 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Thu, 26 Jan 2023 07:09:22 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 26 Jan 2023 07:09:21 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 417314ed-9d72-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm3; t=1674734962; x=
	1674821362; bh=MND+nG8pu/5o9eYIVl8ltWETmqVJtI/WQW+hfRrRrds=; b=m
	ynJNHI/FA+1/zJLzLQ14gzn5u/HAUkwxpH6vJt0At08its2oxLUq/ROxHpwHfFz/
	1U5Wuch0Iz5mLoP76fvqRGT8m2qmWj37mnIhLVIuCmVnYRD7eOKpSBFhRWHudfAv
	2GdkL0prYD7yuxEL2gV3IjeGhArUjaN5vlSHrdsc0a/qzB7xG7XGa91Vro//B/Ab
	ZmoMs+z7RP0ApIrrEaVKJlAYxN6QxVR0+QmqrOlXZGXphN7Iec3cVNwKm/PaS0iC
	lWmDniNSHj/sPay0PufokSTe7M0nLu1Vl4UXnnwXR52Q+IDevypLp22/b7A/KRpD
	+5QE8U42JWvCX/vLQXapw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:in-reply-to:message-id
	:mime-version:references:reply-to:sender:subject:subject:to:to
	:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm3; t=1674734962; x=1674821362; bh=MND+nG8pu/5o9eYIVl8ltWETmqVJ
	tI/WQW+hfRrRrds=; b=rqDjc7CM0cvAd1bAmhbYRsyOJ6WRq8eG0p0oezojlekb
	zBhQQmPiP6SMt7dr73VDsiFdKYk3R0fbsvBHYu80AMYTAHWY3K1lDu9cC1FmM0Qj
	PpgW/HeyIz+anAjQ2N9ahrswAkOyYk/KR63zpqkqpb2YYgiFZgwTaWMzitRYlWFj
	GvNIAclljoDFa4bXEEs8R++cxXhOdfG8+M+UdYelIaDK8+nDo8ann1TJANf5bTOj
	xJQr9W2f9dHuOsel/AODn4KODt0YIvZxWno00khnDIfkwwVOLGazibUD6atWTnV0
	U5ksPw5MCGQqqQI/ayb4KVJ4QQvOinGld4YJAyLuTA==
X-ME-Sender: <xms:cm3SY5R2hoTzzltHGsW_4ToZeopEE9UPWIJSJwK6KIvTxmEyxvySTA>
    <xme:cm3SYyyRLyxY4EaKmQN1w2tX1YNC7nrrJLMDCD62ORK0nb22looOitckwtskj_FSp
    YWIoXey5MuXSQ>
X-ME-Received: <xmr:cm3SY-2vnoCcM8k1vC8iqp5WIWPZW1IS-E9ZTgVyF6NSqTNCvTfc4VDOdRhoTA42ZJQBTlB1uJf6v4gGoqb6ChJcoV2OKT8NGGw>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddvgedgfeeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhephfet
    udegleeuvdekudfghedufeekuefgffeukeejvdfgkeegtdehhfegueejteelnecuffhomh
    grihhnpehphihthhhonhdrohhrghenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhm
X-ME-Proxy: <xmx:cm3SYxBC_07Dp-3AD_oik2M0jEuwxJHs0mQ-nujdBFBXwPQsa9khQQ>
    <xmx:cm3SYyhFDfuboj-IBirvVJnw8cP2bYq69e-eykdEUCYlIWQtAPWMTQ>
    <xmx:cm3SY1r-jsIF17EHfr9h7YvEKIsSR2SmC9__Y6pFpVjt3CphXs-6BQ>
    <xmx:cm3SY2Z_llDIaSQD1VB8Rz04jpcgEuacwG34vITnkeF5-9GBRULOvg>
Feedback-ID: i1568416f:Fastmail
Date: Thu, 26 Jan 2023 13:09:18 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] tools/python: change 's#' size type for Python >= 3.10
Message-ID: <Y9Jtbr596V0F5cLm@mail-itl>
References: <20230126051310.4149074-1-marmarek@invisiblethingslab.com>
 <5804fff0-b26e-bcc5-fc76-7e2be09bcd71@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="c65UIZuL6tzF9P1v"
Content-Disposition: inline
In-Reply-To: <5804fff0-b26e-bcc5-fc76-7e2be09bcd71@suse.com>


--c65UIZuL6tzF9P1v
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Thu, 26 Jan 2023 13:09:18 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] tools/python: change 's#' size type for Python >= 3.10

On Thu, Jan 26, 2023 at 10:14:54AM +0100, Jan Beulich wrote:
> On 26.01.2023 06:13, Marek Marczykowski-Górecki wrote:
> > @@ -1774,7 +1775,7 @@ static PyObject *pyflask_load(PyObject *self, PyObject *args, PyObject *kwds)
> >  {
> >      xc_interface *xc_handle;
> >      char *policy;
> > -    uint32_t len;
> > +    Py_ssize_t len;
> 
> I find this suspicious - by the name, this is a signed type when an
> unsigned one was used here before (and properly, imo).

It is suspicious indeed, but correct according to the documentation:
https://docs.python.org/3/c-api/arg.html#strings-and-buffers
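(For background: with PY_SSIZE_T_CLEAN defined, "s#" fills a signed Py_ssize_t, and from Python 3.10 on that is the only supported behaviour. Jan's signedness concern can be addressed with a range check before handing the length to an interface expecting uint32_t; the helper below is purely illustrative, not code from the patch, with int64_t standing in for Py_ssize_t.)

```c
#include <stdint.h>

/* Defensive conversion of a signed Py_ssize_t-style length (modelled
 * here as int64_t) to the uint32_t the hypercall interface expects. */
static int checked_len(int64_t len, uint32_t *out)
{
    if ( len < 0 || len > (int64_t)UINT32_MAX )
        return -1;          /* negative or too large: reject */
    *out = (uint32_t)len;
    return 0;
}
```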

> Irrespective of the remark of course I'll leave acking (or not) of this
> to people knowing Python better than I do.
> 
> Jan

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--c65UIZuL6tzF9P1v
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmPSbW8ACgkQ24/THMrX
1yyP/Af8ClrtiKVKa9h/6+XbaBPSszAMSDXCnpeEhRAXRyiICnPjwUlPWBnDveMg
XOjeA7ewhtvpg+HvI92lYU1oCnYyI+7q+V24u01kFFlPQSlHkD3BFu1ReW+q2MwP
oN1mM81wJ0LB2XMTyzyi/xolW1tIqxaCPaVRjyHKqY1SqkEs0nwsebjxDrgsn1OL
kQpKmV4JR/CsCljxuU9U771l8sB6Xn9LwycdyJG9Js97sSetqshI9SnTNgMFmIkS
qn/I/vlMIDbCU/FqymDHaqsQk41SlnTbHbA5KDkWZsUjXAqg5sxWGNL70nX0P2cM
4E9MHL0T6T3sH1TO1n/DHGelcoMqXw==
=Sctc
-----END PGP SIGNATURE-----

--c65UIZuL6tzF9P1v--


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:24:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485019.751937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL1IY-0004BM-Jf; Thu, 26 Jan 2023 12:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485019.751937; Thu, 26 Jan 2023 12:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL1IY-0004BF-Fn; Thu, 26 Jan 2023 12:24:10 +0000
Received: by outflank-mailman (input) for mailman id 485019;
 Thu, 26 Jan 2023 12:24:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WJ2g=5X=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pL1IX-0004B4-9V
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 12:24:09 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 54d0c634-9d74-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 13:24:08 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 17F6E1FF3F;
 Thu, 26 Jan 2023 12:24:08 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E268713A09;
 Thu, 26 Jan 2023 12:24:07 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id OcDLNedw0mNkPgAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 26 Jan 2023 12:24:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54d0c634-9d74-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674735848; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=7jcdb0fhmVXMQ4Aeze9M9Ly9YtBIBMGWZvJo0VuLWZE=;
	b=gmqoY3ucf8w1MoqseMCGVu7Hg/Acz5FTf3ykuEdG73Gwn+cgTw+M9GSp1WdUlEsB5PzuG4
	LpVIUwgUwEbrd50+hO6ctx7TXYr03DPWbOkzcA9TW990I28/uWanrEsiQHa/x1ueSSj1ec
	LaEGzOH8yoIORxXwcqrKy5RFxsln4VU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/helpers: don't log errors when trying to load PVH xenstore-stubdom
Date: Thu, 26 Jan 2023 13:24:06 +0100
Message-Id: <20230126122406.14627-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When loading a Xenstore stubdom the loader doesn't know whether the
to-be-loaded kernel is a PVH or a PV one. So it tries to load it as
a PVH one first, and if this fails it loads it as a PV kernel.

This results in errors being logged in case the stubdom is a PV kernel.

Suppress those errors by setting the minimum logging level to
"critical" while trying to load the kernel as PVH.

Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/helpers/init-xenstore-domain.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 04e351ca29..c323cf7456 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -31,6 +31,8 @@ static int memory;
 static int maxmem;
 static xen_pfn_t console_gfn;
 static xc_evtchn_port_or_error_t console_evtchn;
+static xentoollog_level minmsglevel = XTL_PROGRESS;
+static void *logger;
 
 static struct option options[] = {
     { "kernel", 1, NULL, 'k' },
@@ -141,8 +143,11 @@ static int build(xc_interface *xch)
         goto err;
     }
 
+    /* Try PVH first, suppress errors by setting min level high. */
     dom->container_type = XC_DOM_HVM_CONTAINER;
+    xtl_stdiostream_set_minlevel(logger, XTL_CRITICAL);
     rv = xc_dom_parse_image(dom);
+    xtl_stdiostream_set_minlevel(logger, minmsglevel);
     if ( rv )
     {
         dom->container_type = XC_DOM_PV_CONTAINER;
@@ -412,8 +417,6 @@ int main(int argc, char** argv)
     char buf[16], be_path[64], fe_path[64];
     int rv, fd;
     char *maxmem_str = NULL;
-    xentoollog_level minmsglevel = XTL_PROGRESS;
-    xentoollog_logger *logger = NULL;
 
     while ( (opt = getopt_long(argc, argv, "v", options, NULL)) != -1 )
     {
@@ -456,9 +459,7 @@ int main(int argc, char** argv)
         return 2;
     }
 
-    logger = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr,
-                                                               minmsglevel, 0);
-
+    logger = xtl_createlogger_stdiostream(stderr, minmsglevel, 0);
     xch = xc_interface_open(logger, logger, 0);
     if ( !xch )
     {
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:37:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:37:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485024.751947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL1Vg-00061e-Pe; Thu, 26 Jan 2023 12:37:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485024.751947; Thu, 26 Jan 2023 12:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL1Vg-00061X-Kh; Thu, 26 Jan 2023 12:37:44 +0000
Received: by outflank-mailman (input) for mailman id 485024;
 Thu, 26 Jan 2023 12:37:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pL1Ve-00061R-PN
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 12:37:42 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7d00::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38dd6c5e-9d76-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 13:37:41 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8053.eurprd04.prod.outlook.com (2603:10a6:20b:2ad::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Thu, 26 Jan
 2023 12:37:38 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 12:37:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38dd6c5e-9d76-11ed-a5d9-ddcf98b90cbd
Message-ID: <19d5a912-cfdd-80f2-5233-15f04cfc2b88@suse.com>
Date: Thu, 26 Jan 2023 13:37:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 07/11] xen: add cache coloring allocator for domains
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Luca Miccio <lucmiccio@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-8-carlo.nonato@minervasys.tech>
 <a74381ce-d204-1f40-7ccc-2be3bbc3ebd1@suse.com>
 <CAG+AhRUKWfJBf5C0uqfzePMvxN-gc2gYup+oBRBA2DXnNW-txw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAG+AhRUKWfJBf5C0uqfzePMvxN-gc2gYup+oBRBA2DXnNW-txw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0042.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 26.01.2023 12:00, Carlo Nonato wrote:
> On Tue, Jan 24, 2023 at 5:50 PM Jan Beulich <jbeulich@suse.com> wrote:
>> On 23.01.2023 16:47, Carlo Nonato wrote:
>>> From: Luca Miccio <lucmiccio@gmail.com>
>>>
>>> This commit adds a new memory page allocator that implements the cache
>>> coloring mechanism. The allocation algorithm follows the given domain color
>>> configuration and maximizes contiguity in the page selection of multiple
>>> subsequent requests.
>>>
>>> Pages are stored in a color-indexed array of lists, each one sorted by
>>> machine address, that is referred to as "colored heap". Those lists are
>>> filled by a simple init function which computes the color of each page.
>>> When a domain requests a page, the allocator takes one from those lists
>>> whose colors equal the domain configuration. It chooses the page with the
>>> lowest machine address such that contiguous pages are sequentially
>>> allocated if this is made possible by a color assignment which includes
>>> adjacent colors.
>>
>> What use is this with ...
>>
>>> The allocator can handle only requests with order equal to 0 since the
>>> single color granularity is represented in memory by one page.
>>
>> ... this restriction? Plus aiui there's no guarantee of contiguous pages
>> coming back in any event (because things depend on what may have been
>> allocated / freed earlier on), so why even give the impression of there
>> being a way to obtain contiguous pages?
> 
> I really need us to be on the same "page" (no pun intended) here, because we
> discussed the subject multiple times and I'm probably missing important
> details.
> 
> First, is physical memory contiguity important? I'm assuming this is good
> because then some hardware optimization can occur when accessing memory.
> I'm taking it for granted because it's what the original author of the series
> thought, but I don't have an objective view of this.

I'd need to have a reference to a concrete description of such hardware
behavior to believe it. On x86 I certainly know of an "adjacent cacheline
prefetch" optimization in hardware, but that won't cross page boundaries
afair.

Contiguous memory can be advantageous for I/O (allowing bigger chunks in
individual requests or s/g list elements), and it is certainly a prereq
to using large page mappings (which earlier on you already said isn't of
interest in your use case).

> Then, let's state what contiguity means with coloring:
> *if* there are contiguous free pages and *if* subsequent requests are made
> and *if* the coloring configuration allows it, the allocator guarantees
> contiguity because it serves pages *in order*.

I don't think it can, simply because ...

> From the fragmentation perspective (first prerequisite), this is somewhat
> similar to the buddy case where only if contiguous pages are freed they can
> be allocated after. So order of operation is always important for
> fragmentation in dynamic allocation. The main difference is speed
> (I'm not comparing them on this aspect).
> 
> The second prerequisite requires that users of the allocator have exclusive
> access to it until the request is carried out. If interleaved requests happen,
> contiguity is practically impossible. How often does this happen? I view
> allocation as something that happens mainly at domain creation time, one
> domain at a time which results in a lot of subsequent requests, and then
> contiguity (if other prerequisites hold) isn't an impression.

... the "exclusive access" here is a fiction: Domain memory is populated
by the tool stack. Such tool stack allocations (even if properly serialized
in the control domain) will necessarily compete with anything done
internally in the hypervisor. Specifically the hypervisor may, at any time,
allocate a page (perhaps just for transient use), and then free that page
again, or free another one which was allocated before the tool stack
started populating domain memory.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:43:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485031.751957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL1ar-0007Vo-HO; Thu, 26 Jan 2023 12:43:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485031.751957; Thu, 26 Jan 2023 12:43:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL1ar-0007Vh-E2; Thu, 26 Jan 2023 12:43:05 +0000
Received: by outflank-mailman (input) for mailman id 485031;
 Thu, 26 Jan 2023 12:43:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pL1aq-0007VZ-9a
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 12:43:04 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0620.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8f6e22d-9d76-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 13:43:03 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8053.eurprd04.prod.outlook.com (2603:10a6:20b:2ad::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Thu, 26 Jan
 2023 12:43:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 12:43:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8f6e22d-9d76-11ed-a5d9-ddcf98b90cbd
Message-ID: <c80da13e-32c1-0499-de91-67cabc3a495e@suse.com>
Date: Thu, 26 Jan 2023 13:42:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] tools/helpers: don't log errors when trying to load PVH
 xenstore-stubdom
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230126122406.14627-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230126122406.14627-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0111.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 26.01.2023 13:24, Juergen Gross wrote:
> When loading a Xenstore stubdom the loader doesn't know whether the
> to be loaded kernel is a PVH or a PV one. So it tries to load it as
> a PVH one first, and if this fails it is loading it as a PV kernel.
> 
> This results in errors being logged in case the stubdom is a PV kernel.
> 
> Suppress those errors by setting the minimum logging level to
> "critical" while trying to load the kernel as PVH.

And if the PV loading also fails and PVH was actually expected, then the
messages will be heavily misleading? Shouldn't you instead accumulate the
PVH messages, and throw them away only in case PV loading actually worked
(or issue them at lower severity, in case they're actually of interest)?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 12:48:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 12:48:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485036.751967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL1fm-0008JC-41; Thu, 26 Jan 2023 12:48:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485036.751967; Thu, 26 Jan 2023 12:48:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL1fm-0008J3-0l; Thu, 26 Jan 2023 12:48:10 +0000
Received: by outflank-mailman (input) for mailman id 485036;
 Thu, 26 Jan 2023 12:48:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WJ2g=5X=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pL1fk-0008Iv-Eh
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 12:48:08 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aa36da72-9d77-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 13:48:00 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E9DAD1FF65;
 Thu, 26 Jan 2023 12:48:05 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C316213A09;
 Thu, 26 Jan 2023 12:48:05 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id fSzXLYV20mOzSwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 26 Jan 2023 12:48:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa36da72-9d77-11ed-b8d1-410ff93cb8f0
Message-ID: <17b8673a-2c31-1ed8-4bda-ab514ad7ad5a@suse.com>
Date: Thu, 26 Jan 2023 13:48:05 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] tools/helpers: don't log errors when trying to load PVH
 xenstore-stubdom
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230126122406.14627-1-jgross@suse.com>
 <c80da13e-32c1-0499-de91-67cabc3a495e@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <c80da13e-32c1-0499-de91-67cabc3a495e@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------Iiyi6dtwAAeART1IBbWE5c18"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------Iiyi6dtwAAeART1IBbWE5c18
Content-Type: multipart/mixed; boundary="------------5dA2ampjOGOPtEPY3t84EbHh";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
Message-ID: <17b8673a-2c31-1ed8-4bda-ab514ad7ad5a@suse.com>
Subject: Re: [PATCH] tools/helpers: don't log errors when trying to load PVH
 xenstore-stubdom
References: <20230126122406.14627-1-jgross@suse.com>
 <c80da13e-32c1-0499-de91-67cabc3a495e@suse.com>
In-Reply-To: <c80da13e-32c1-0499-de91-67cabc3a495e@suse.com>

--------------5dA2ampjOGOPtEPY3t84EbHh
Content-Type: multipart/mixed; boundary="------------4kPHOl7zIIYgX19W0LaRaMTU"

--------------4kPHOl7zIIYgX19W0LaRaMTU
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 26.01.23 13:42, Jan Beulich wrote:
> On 26.01.2023 13:24, Juergen Gross wrote:
>> When loading a Xenstore stubdom the loader doesn't know whether the
>> to be loaded kernel is a PVH or a PV one. So it tries to load it as
>> a PVH one first, and if this fails it is loading it as a PV kernel.
>>
>> This results in errors being logged in case the stubdom is a PV kernel.
>>
>> Suppress those errors by setting the minimum logging level to
>> "critical" while trying to load the kernel as PVH.
> 
> And if the PV loading also fails and PVH was actually expected, then the
> messages will be heavily misleading? Shouldn't you instead accumulate the
> PVH messages, and throw them away only in case PV loading actually worked
> (or issue them at lower severity, in case they're actually of interest)?

I think this would add a lot of code for little gain.

What I could do is to repeat the PVH load attempt with full logging if the
PV load fails, too.


Juergen
--------------4kPHOl7zIIYgX19W0LaRaMTU
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4kPHOl7zIIYgX19W0LaRaMTU--

--------------5dA2ampjOGOPtEPY3t84EbHh--

--------------Iiyi6dtwAAeART1IBbWE5c18
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPSdoUFAwAAAAAACgkQsN6d1ii/Ey9t
2gf/UCiXQXWpzH4Ar2bKpI8jkrC5cZNZwotDdDVRArKFu+GP9NtvvsRLFcXMR7OxrvudnRVSlfQI
eZRl8Ob/f/LA8ABqXu/5SkO32efiQibAUC+jPOiNlOlL6r6LlF0pnDlC05p06UnLuJ5Qq3dVKHqH
CqVz9EZkFOASCZNV9SWmsRmPKWa5N5WNSW8H/t+YxQJ6ie43GSE/3fWMfRyaJEksF+wTD2rcXoeY
AFxifUNCHsPHsWqaghkRbNJh1o9v7r6D/i8bTDpBOyze2iOin2LTKRiFxZbtiPiVBNKLde+3UDUj
BOMAmfuSwe5tczedqRKl7LSEu4tejvNEyVjYjD2hGQ==
=CVg0
-----END PGP SIGNATURE-----

--------------Iiyi6dtwAAeART1IBbWE5c18--


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 13:52:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 13:52:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485044.751983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL2fe-0007ti-Sv; Thu, 26 Jan 2023 13:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485044.751983; Thu, 26 Jan 2023 13:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL2fe-0007tb-Q8; Thu, 26 Jan 2023 13:52:06 +0000
Received: by outflank-mailman (input) for mailman id 485044;
 Thu, 26 Jan 2023 13:52:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL2fd-0007tR-L5; Thu, 26 Jan 2023 13:52:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL2fd-0004M1-Id; Thu, 26 Jan 2023 13:52:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL2fd-0003EI-Bc; Thu, 26 Jan 2023 13:52:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pL2fd-0001c7-Ag; Thu, 26 Jan 2023 13:52:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=W95Dxlic67TFuipthZEcMjDGg/0Lq/UhGNPGbhYmtBM=; b=mdmDsyd6Q9kQic3z4pq0bN2g7p
	YPru1z05ARy8kjwgSYyJaBYk+Ek3K5n0LEY2lkqqQRmW9m61w+YVf8O0MtMTXI31tFWFTEqpebG3o
	8jisLft72ok6OodEpeKcTGw7lRMCQGGx4otRSaYJR5Nc9+KaD/RyoxlUV1opBPhjom50=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176146-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176146: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1e454c2b5b1172e0fc7457e411ebaba61db8fc87
X-Osstest-Versions-That:
    xen=3b760245f74ab2022b1aa4da842c4545228c2e83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Jan 2023 13:52:05 +0000

flight 176146 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176146/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1e454c2b5b1172e0fc7457e411ebaba61db8fc87
baseline version:
 xen                  3b760245f74ab2022b1aa4da842c4545228c2e83

Last test of basis   176113  2023-01-25 01:00:30 Z    1 days
Testing same since   176146  2023-01-26 10:03:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3b760245f7..1e454c2b5b  1e454c2b5b1172e0fc7457e411ebaba61db8fc87 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 14:16:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 14:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485052.751993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL32S-0002Um-Q1; Thu, 26 Jan 2023 14:15:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485052.751993; Thu, 26 Jan 2023 14:15:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL32S-0002Uf-Mc; Thu, 26 Jan 2023 14:15:40 +0000
Received: by outflank-mailman (input) for mailman id 485052;
 Thu, 26 Jan 2023 14:15:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onDU=5X=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pL32P-0002UZ-QS
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 14:15:39 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e318ca3a-9d83-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 15:15:32 +0100 (CET)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pL31L-002TOI-34; Thu, 26 Jan 2023 14:14:32 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 784893002BF;
 Thu, 26 Jan 2023 15:15:00 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 2587D203C2B1E; Thu, 26 Jan 2023 15:15:00 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e318ca3a-9d83-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Transfer-Encoding:
	Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=iS3Z4P4S5Y5WqQAB/YRDv8NY/GBojbeqpUql4Ne3Mbs=; b=h/u1WI1OuFtZGHQY7znWc8oCl+
	RXuDaL+k7nzlzQ0mXBMm73He6KVpyLCbc0I1V7FqJjsZbdzbGyF0VPzI803s6cvfrUxNn7vZtftVS
	U1GnxKIoGxMd//et4YwQywjy3Jo/X4g8STKiP9cA9H13olFEjSaL48aMdG+OfWc+cAbIqtUE9XaAg
	OkoaF0QiKM+WLyV67D5P/VfojkNgwtMbdMkUuFomo3icyfSRKqgaHOvnn4rsXiHWlwmDr94hoXYfK
	4JvsEWHG1/Iu6uzuz4pMJJadoVJ24d0PZ0iP+cpZmpPXw3Vg00LKCtmo0an2vrna4+vVNcFOa+C2T
	UErXYD/A==;
Date: Thu, 26 Jan 2023 15:15:00 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?iso-8859-1?Q?J=F6rg_R=F6del?= <joro@8bytes.org>, jroedel@suse.de,
	kirill.shutemov@linux.intel.com, dave.hansen@intel.com,
	kai.huang@intel.com
Subject: Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from
 secondary_startup_64()
Message-ID: <Y9KK5IQbORNFUqqV@hirez.programming.kicks-ass.net>
References: <20230116142533.905102512@infradead.org>
 <20230116143645.589522290@infradead.org>
 <Y8e/yKgVZgbqgvAG@hirez.programming.kicks-ass.net>
 <5718C98C-C07A-4BD1-9182-7F3A8BDBC605@zytor.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5718C98C-C07A-4BD1-9182-7F3A8BDBC605@zytor.com>

On Thu, Jan 19, 2023 at 11:35:06AM -0800, H. Peter Anvin wrote:
> On January 18, 2023 1:45:44 AM PST, Peter Zijlstra <peterz@infradead.org> wrote:
> >On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
> >> The boot trampolines from trampoline_64.S have code flow like:
> >> 
> >>   16bit BIOS			SEV-ES				64bit EFI
> >> 
> >>   trampoline_start()		sev_es_trampoline_start()	trampoline_start_64()
> >>     verify_cpu()			  |				|
> >>   switch_to_protected:    <---------------'				v
> >>        |							pa_trampoline_compat()
> >>        v								|
> >>   startup_32()		<-----------------------------------------------'
> >>        |
> >>        v
> >>   startup_64()
> >>        |
> >>        v
> >>   tr_start() := head_64.S:secondary_startup_64()
> >> 
> >> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> >> touch the APs), there is already a verify_cpu() invocation.
> >
> >So supposedly TDX/ACPI-6.4 comes in on trampoline_startup64() for APs --
> >can any of the TDX capable folks tell me if we need verify_cpu() on
> >these?
> >
> >Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
> >force enable SSE on AMD/K7. Surely none of that is needed for these
> >shiny new chips?
> >
> >I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
> >point, but it seems really sad to need that on modern systems.
> 
> Sad, perhaps, but really better for orthogonality – fewer special cases.

I'd argue more, but whatever. XD_DISABLE is an abomination and 64bit
entry points should care about it just as much as about having LM. And
this way we have 2/3 instead of 1/3 entry points doing 'special' nonsense.

I ended up with this trainwreck; it adds verify_cpu to
pa_trampoline_compat() because for some raisin it doesn't want to
assemble when placed in trampoline_start64().

Is this really what we want?

---

--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -689,9 +689,14 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lno_longmo
 	jmp     1b
 SYM_FUNC_END(.Lno_longmode)
 
-	.globl	verify_cpu
 #include "../../kernel/verify_cpu.S"
 
+	.globl	verify_cpu
+SYM_FUNC_START_LOCAL(verify_cpu)
+	VERIFY_CPU
+	RET
+SYM_FUNC_END(verify_cpu)
+
 	.data
 SYM_DATA_START_LOCAL(gdt64)
 	.word	gdt_end - gdt - 1
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -321,6 +321,11 @@ SYM_FUNC_END(startup_32_smp)
 
 #include "verify_cpu.S"
 
+SYM_FUNC_START_LOCAL(verify_cpu)
+	VERIFY_CPU
+	RET
+SYM_FUNC_END(verify_cpu)
+
 __INIT
 SYM_FUNC_START(early_idt_handler_array)
 	# 36(%esp) %eflags
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -345,6 +345,12 @@ SYM_CODE_START(secondary_startup_64)
 SYM_CODE_END(secondary_startup_64)
 
 #include "verify_cpu.S"
+
+SYM_FUNC_START_LOCAL(verify_cpu)
+	VERIFY_CPU
+	RET
+SYM_FUNC_END(verify_cpu)
+
 #include "sev_verify_cbit.S"
 
 #ifdef CONFIG_HOTPLUG_CPU
--- a/arch/x86/kernel/verify_cpu.S
+++ b/arch/x86/kernel/verify_cpu.S
@@ -31,7 +31,7 @@
 #include <asm/cpufeatures.h>
 #include <asm/msr-index.h>
 
-SYM_FUNC_START_LOCAL(verify_cpu)
+.macro VERIFY_CPU
 	pushf				# Save caller passed flags
 	push	$0			# Kill any dangerous flags
 	popf
@@ -46,31 +46,31 @@ SYM_FUNC_START_LOCAL(verify_cpu)
 	pushfl
 	popl	%eax
 	cmpl	%eax,%ebx
-	jz	.Lverify_cpu_no_longmode	# cpu has no cpuid
+	jz	.Lverify_cpu_no_longmode_\@	# cpu has no cpuid
 #endif
 
 	movl	$0x0,%eax		# See if cpuid 1 is implemented
 	cpuid
 	cmpl	$0x1,%eax
-	jb	.Lverify_cpu_no_longmode	# no cpuid 1
+	jb	.Lverify_cpu_no_longmode_\@	# no cpuid 1
 
 	xor	%di,%di
 	cmpl	$0x68747541,%ebx	# AuthenticAMD
-	jnz	.Lverify_cpu_noamd
+	jnz	.Lverify_cpu_noamd_\@
 	cmpl	$0x69746e65,%edx
-	jnz	.Lverify_cpu_noamd
+	jnz	.Lverify_cpu_noamd_\@
 	cmpl	$0x444d4163,%ecx
-	jnz	.Lverify_cpu_noamd
+	jnz	.Lverify_cpu_noamd_\@
 	mov	$1,%di			# cpu is from AMD
-	jmp	.Lverify_cpu_check
+	jmp	.Lverify_cpu_check_\@
 
-.Lverify_cpu_noamd:
+.Lverify_cpu_noamd_\@:
 	cmpl	$0x756e6547,%ebx        # GenuineIntel?
-	jnz	.Lverify_cpu_check
+	jnz	.Lverify_cpu_check_\@
 	cmpl	$0x49656e69,%edx
-	jnz	.Lverify_cpu_check
+	jnz	.Lverify_cpu_check_\@
 	cmpl	$0x6c65746e,%ecx
-	jnz	.Lverify_cpu_check
+	jnz	.Lverify_cpu_check_\@
 
 	# only call IA32_MISC_ENABLE when:
 	# family > 6 || (family == 6 && model >= 0xd)
@@ -81,60 +81,62 @@ SYM_FUNC_START_LOCAL(verify_cpu)
 	andl	$0x0ff00f00, %eax	# mask family and extended family
 	shrl	$8, %eax
 	cmpl	$6, %eax
-	ja	.Lverify_cpu_clear_xd	# family > 6, ok
-	jb	.Lverify_cpu_check	# family < 6, skip
+	ja	.Lverify_cpu_clear_xd_\@	# family > 6, ok
+	jb	.Lverify_cpu_check_\@	# family < 6, skip
 
 	andl	$0x000f00f0, %ecx	# mask model and extended model
 	shrl	$4, %ecx
 	cmpl	$0xd, %ecx
-	jb	.Lverify_cpu_check	# family == 6, model < 0xd, skip
+	jb	.Lverify_cpu_check_\@	# family == 6, model < 0xd, skip
 
-.Lverify_cpu_clear_xd:
+.Lverify_cpu_clear_xd_\@:
 	movl	$MSR_IA32_MISC_ENABLE, %ecx
 	rdmsr
 	btrl	$2, %edx		# clear MSR_IA32_MISC_ENABLE_XD_DISABLE
-	jnc	.Lverify_cpu_check	# only write MSR if bit was changed
+	jnc	.Lverify_cpu_check_\@	# only write MSR if bit was changed
 	wrmsr
 
-.Lverify_cpu_check:
+.Lverify_cpu_check_\@:
 	movl    $0x1,%eax		# Does the cpu have what it takes
 	cpuid
 	andl	$REQUIRED_MASK0,%edx
 	xorl	$REQUIRED_MASK0,%edx
-	jnz	.Lverify_cpu_no_longmode
+	jnz	.Lverify_cpu_no_longmode_\@
 
 	movl    $0x80000000,%eax	# See if extended cpuid is implemented
 	cpuid
 	cmpl    $0x80000001,%eax
-	jb      .Lverify_cpu_no_longmode	# no extended cpuid
+	jb      .Lverify_cpu_no_longmode_\@	# no extended cpuid
 
 	movl    $0x80000001,%eax	# Does the cpu have what it takes
 	cpuid
 	andl    $REQUIRED_MASK1,%edx
 	xorl    $REQUIRED_MASK1,%edx
-	jnz     .Lverify_cpu_no_longmode
+	jnz     .Lverify_cpu_no_longmode_\@
 
-.Lverify_cpu_sse_test:
+.Lverify_cpu_sse_test_\@:
 	movl	$1,%eax
 	cpuid
 	andl	$SSE_MASK,%edx
 	cmpl	$SSE_MASK,%edx
-	je	.Lverify_cpu_sse_ok
+	je	.Lverify_cpu_sse_ok_\@
 	test	%di,%di
-	jz	.Lverify_cpu_no_longmode	# only try to force SSE on AMD
+	jz	.Lverify_cpu_no_longmode_\@	# only try to force SSE on AMD
 	movl	$MSR_K7_HWCR,%ecx
 	rdmsr
 	btr	$15,%eax		# enable SSE
 	wrmsr
 	xor	%di,%di			# don't loop
-	jmp	.Lverify_cpu_sse_test	# try again
+	jmp	.Lverify_cpu_sse_test_\@	# try again
 
-.Lverify_cpu_no_longmode:
+.Lverify_cpu_no_longmode_\@:
 	popf				# Restore caller passed flags
 	movl $1,%eax
-	RET
-.Lverify_cpu_sse_ok:
+	jmp	.Lverify_cpu_ret_\@
+
+.Lverify_cpu_sse_ok_\@:
 	popf				# Restore caller passed flags
 	xorl %eax, %eax
-	RET
-SYM_FUNC_END(verify_cpu)
+
+.Lverify_cpu_ret_\@:
+.endm
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -34,6 +34,8 @@
 #include <asm/realmode.h>
 #include "realmode.h"
 
+#include "../kernel/verify_cpu.S"
+
 	.text
 	.code16
 
@@ -52,7 +54,8 @@ SYM_CODE_START(trampoline_start)
 	# Setup stack
 	movl	$rm_stack_end, %esp
 
-	call	verify_cpu		# Verify the cpu supports long mode
+	VERIFY_CPU			# Verify the cpu supports long mode
+
 	testl   %eax, %eax		# Check for return code
 	jnz	no_longmode
 
@@ -100,8 +103,6 @@ SYM_CODE_START(sev_es_trampoline_start)
 SYM_CODE_END(sev_es_trampoline_start)
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
 
-#include "../kernel/verify_cpu.S"
-
 	.section ".text32","ax"
 	.code32
 	.balign 4
@@ -180,6 +181,8 @@ SYM_CODE_START(pa_trampoline_compat)
 	movl	$rm_stack_end, %esp
 	movw	$__KERNEL_DS, %dx
 
+	VERIFY_CPU
+
 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax
 	movl	%eax, %cr0
 	ljmpl   $__KERNEL32_CS, $pa_startup_32
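
For reference (not part of the patch): the rewritten VERIFY_CPU macro relies
on GAS's \@ pseudo-variable, which expands to the number of macro expansions
performed so far, so every expansion mints its own .Lverify_cpu_*_N local
labels and the macro can be instantiated on several entry paths without
duplicate-label errors. A loosely analogous, self-contained C sketch, using
the GCC/Clang __COUNTER__ extension (all names here are invented for
illustration):

```c
/* __COUNTER__ is a GCC/Clang/MSVC extension that yields the next
 * integer on each expansion, much like \@ inside a GAS .macro body.
 * Token-pasting it lets a macro mint a unique identifier per use. */
#define CONCAT_(a, b) a##b
#define CONCAT(a, b) CONCAT_(a, b)
#define UNIQUE_ID(prefix) CONCAT(prefix, __COUNTER__)

int UNIQUE_ID(slot_) = 1;   /* expands to e.g. slot_0 */
int UNIQUE_ID(slot_) = 2;   /* expands to e.g. slot_1 -- no redefinition */

int counter_demo(void)
{
    int a = __COUNTER__;    /* each use is distinct... */
    int b = __COUNTER__;    /* ...and strictly increasing */
    return b - a;           /* consecutive expansions differ by 1 */
}
```

Just like two VERIFY_CPU expansions produce disjoint label sets, two
UNIQUE_ID(slot_) expansions produce distinct identifiers.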


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 14:22:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 14:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485059.752003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL38o-0003y5-Ik; Thu, 26 Jan 2023 14:22:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485059.752003; Thu, 26 Jan 2023 14:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL38o-0003xy-G9; Thu, 26 Jan 2023 14:22:14 +0000
Received: by outflank-mailman (input) for mailman id 485059;
 Thu, 26 Jan 2023 14:22:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BaYt=5X=citrix.com=prvs=383a151c9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pL38m-0003xs-O8
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 14:22:12 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd2887b0-9d84-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 15:22:03 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd2887b0-9d84-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674742929;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=sVjI5Dn1EMQCairDjw6V7ktUOG7PPoBlg5eLRMuJWnM=;
  b=ZUtw0fBIFpWE3FyPvZPNuX1+4J4qWB24srfubEWLdqiBLggW0YcIOcdz
   CP3+KzqknhyBjo+3ip+oJKjA7PcbgfRZFisBfFvqxc8Dg80DjiP81+E7F
   ZgQ0Vra4YcmvIUOmwLeBXCulT++e8OAVBeAFiNXgGC1eGzVqTGFML2Vu1
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 94762590
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:qEkGLqK3OpvFu0lPFE+RMpUlxSXFcZb7ZxGr2PjKsXjdYENSgWYOz
 jEdXz2BaPeJNDD3KthzO9nk9R4PsZHWn9Y3GwBlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHv+kUrWs1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPcwP9TlK6q4mhA5wZgPasjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c55E0wV0
 q0VCAsvdzeDmcuWmq24dPdz05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TbGZoLxBvJ/
 goq+UzeJwsRLtCyxgOf406u38/Ck3nhSZMNQejQGvlC3wTImz175ActfVm0u/6ikWalRslSb
 UcT/0IGqKEo/0vtVd75XhCioVaBvxgRQcRZCPwhrh2Awaq8yw+BC3INVDJpdN0sv8hwTjsvv
 neClsntAnppt7ucVXW187aSoCmsMDMENikeaCQEJSMV7t+mrIwtgxbnStd4DLXzntDzASv3w
 T2BsG45nbp7pdEP/7W2+xbAmT3Em3TSZldrvEONBDvjt14nItf/PORE9GQ3895OPqvCaQiMn
 EMmgu+e8skuV46OqjKSFbBl8K6S296JNzjVgFhKFpYn9iiw93PLQb288A2SN28ybJ9aJGaBj
 Fv7/FoIucQNZCfCgbpfOdrZNig88UT3+T0JvNjwZ8EGXJV+fRTvEMpGNR/JhDCFfKTBfMgC1
 XannSSEVy5y5UdPlmDeqwIhPVgDmEgDKZv7H8yT8vhe+eP2iISpYbkEKkCSSesy8bmJpg7Ym
 /4GaZTWlUoGDLKhPHGLmWL2EbzsBSJjbXwRg5UHHtNv3yI8QD1xYxMv6exJl3NZc1R9yb6To
 yDVtr5ww1vjn3zXQThmmVg6AI4Dqa1X9CphVQR1ZAbA5pTWSdr3hEvpX8dtLOZPGS0K5aIcc
 sTpjO3ZUqwSEGmZo29BBXQ/xaQ7HCmWacu1F3LNSFACk1RIHmQlJveMktPTyRQz
IronPort-HdrOrdr: A9a23:7joQ56mVd68Hz6agu0E+rrCMl6rpDfIT3DAbv31ZSRFFG/FwWf
 re5cjztCWE8Ar5PUtLpTnuAtjkfZqxz+8W3WBVB8bAYOCEggqVxeNZnO/fKlTbckWUygce78
 ddmsNFebrN5DZB/KDHCcqDf+rIAuPrzEllv4jjJr5WIz1XVw==
X-IronPort-AV: E=Sophos;i="5.97,248,1669093200"; 
   d="scan'208";a="94762590"
Date: Thu, 26 Jan 2023 14:22:02 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Carlo Nonato <carlo.nonato@minervasys.tech>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Marco Solieri <marco.solieri@minervasys.tech>
Subject: Re: [PATCH v4 05/11] tools: add support for cache coloring
 configuration
Message-ID: <Y9KMio62otVChUcq@perard.uk.xensource.com>
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-6-carlo.nonato@minervasys.tech>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230123154735.74832-6-carlo.nonato@minervasys.tech>

On Mon, Jan 23, 2023 at 04:47:29PM +0100, Carlo Nonato wrote:
> diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> index e939d07157..064f54c349 100644
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -28,6 +28,20 @@ int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
>  {
>      int err;
>      DECLARE_DOMCTL;
> +    DECLARE_HYPERCALL_BUFFER(uint32_t, llc_colors);
> +
> +    if ( config->num_llc_colors )
> +    {
> +        size_t bytes = sizeof(uint32_t) * config->num_llc_colors;
> +
> +        llc_colors = xc_hypercall_buffer_alloc(xch, llc_colors, bytes);
> +        if ( llc_colors == NULL ) {
> +            PERROR("Could not allocate LLC colors for xc_domain_create");
> +            return -ENOMEM;
> +        }
> +        memcpy(llc_colors, config->llc_colors.p, bytes);
> +        set_xen_guest_handle(config->llc_colors, llc_colors);

I think these two lines look wrong. There is a double usage of
config->llc_colors, to both store a user pointer and then to store a
hypercall buffer. Also, accessing the llc_colors.p field is probably wrong.
I guess the caller of xc_domain_create() (that is, libxl) will have to
take care of the hypercall buffer. It is already filling the
xen_domctl_createdomain struct that is passed to the hypercall, so it's
probably fine for it to handle a hypercall buffer which is part of it.
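
The fix being suggested — keep the caller's pointer and the bounce buffer in
separate places, and copy rather than overwrite — can be sketched in plain C.
This is a self-contained mock, not libxc code: struct mock_config,
mock_prepare_colors() and mock_release_colors() are invented names, and plain
malloc()/free() stand in for xc_hypercall_buffer_alloc()/
xc_hypercall_buffer_free():

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Invented stand-in for xen_domctl_createdomain: the caller fills
 * user_colors; the bounce buffer lives in a separate field, so the
 * caller-visible pointer is never clobbered. */
struct mock_config {
    uint32_t num_llc_colors;
    const uint32_t *user_colors;   /* caller-owned */
    uint32_t *hypercall_colors;    /* bounce buffer, callee-owned */
};

/* Copy the user array into a freshly allocated bounce buffer
 * (mocking xc_hypercall_buffer_alloc + memcpy) without reusing
 * the field that holds the user pointer. */
int mock_prepare_colors(struct mock_config *cfg)
{
    size_t bytes = sizeof(uint32_t) * cfg->num_llc_colors;

    if (!cfg->num_llc_colors)
        return 0;                       /* nothing to bounce */
    cfg->hypercall_colors = malloc(bytes);
    if (!cfg->hypercall_colors)
        return -1;
    memcpy(cfg->hypercall_colors, cfg->user_colors, bytes);
    return 0;
}

void mock_release_colors(struct mock_config *cfg)
{
    free(cfg->hypercall_colors);
    cfg->hypercall_colors = NULL;
}
```

The point is only that the user pointer survives the call, so the caller can
still free or reuse its own array afterwards.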

What happens in the hypervisor when both num_llc_colors and llc_colors
are set to 0 in the struct xen_domctl_createdomain? Is that fine? That
would tell us whether all users of xc_domain_create() need to be updated.

Also, the ocaml binding is "broken", or at least needs updating. This is
due to the addition of llc_colors to xen_domctl_createdomain, the size
being different from the expected size.


> +    }
>  
>      domctl.cmd = XEN_DOMCTL_createdomain;
>      domctl.domain = *pdomid;
> @@ -39,6 +53,9 @@ int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
>      *pdomid = (uint16_t)domctl.domain;
>      *config = domctl.u.createdomain;
>  
> +    if ( llc_colors )
> +        xc_hypercall_buffer_free(xch, llc_colors);
> +
>      return 0;
>  }
>  
> diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
> index beec3f6b6f..6d0c768241 100644
> --- a/tools/libs/light/libxl_create.c
> +++ b/tools/libs/light/libxl_create.c
> @@ -638,6 +638,8 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
>              .grant_opts = XEN_DOMCTL_GRANT_version(b_info->max_grant_version),
>              .vmtrace_size = ROUNDUP(b_info->vmtrace_buf_kb << 10, XC_PAGE_SHIFT),
>              .cpupool_id = info->poolid,
> +            .num_llc_colors = b_info->num_llc_colors,
> +            .llc_colors.p = b_info->llc_colors,
>          };
>  
>          if (info->type != LIBXL_DOMAIN_TYPE_PV) {
> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> index 0cfad8508d..1f944ca6d7 100644
> --- a/tools/libs/light/libxl_types.idl
> +++ b/tools/libs/light/libxl_types.idl
> @@ -562,6 +562,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>      ("ioports",          Array(libxl_ioport_range, "num_ioports")),
>      ("irqs",             Array(uint32, "num_irqs")),
>      ("iomem",            Array(libxl_iomem_range, "num_iomem")),
> +    ("llc_colors",       Array(uint32, "num_llc_colors")),

For this, you are going to need to add a LIBXL_HAVE_ macro in libxl.h.
There are plenty of examples there, as well as an explanation.
A good name, I guess, would be LIBXL_HAVE_BUILDINFO_LLC_COLORS, along
with a short comment.
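For reference, such a stanza in libxl.h typically looks like the
following. This is only a sketch based on the suggestion above; the
macro name and comment wording are not from an actual commit:

```c
/*
 * LIBXL_HAVE_BUILDINFO_LLC_COLORS
 *
 * If defined, libxl_domain_build_info has the llc_colors /
 * num_llc_colors array fields for LLC cache coloring.
 */
#define LIBXL_HAVE_BUILDINFO_LLC_COLORS 1
```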

>      ("claim_mode",	     libxl_defbool),
>      ("event_channels",   uint32),
>      ("kernel",           string),


Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 14:41:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 14:41:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485066.752019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL3Qn-0006gk-77; Thu, 26 Jan 2023 14:40:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485066.752019; Thu, 26 Jan 2023 14:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL3Qn-0006gd-46; Thu, 26 Jan 2023 14:40:49 +0000
Received: by outflank-mailman (input) for mailman id 485066;
 Thu, 26 Jan 2023 14:40:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BaYt=5X=citrix.com=prvs=383a151c9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pL3Ql-0006gU-LJ
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 14:40:47 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69b40072-9d87-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 15:40:45 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69b40072-9d87-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1674744045;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=VM9sR+syhv1vlmzJLowt67XHK+AqrGCrBVP2V3L5BVM=;
  b=LSO19JMy8xZSUD/J28RqT7hOFmt4uPumbTB8ByckdYs2p+yKN2BClavk
   YqK9FrRcvag6bVrmfwv8mkoiShXxIkJsYK34tZFQX1tng+dNR/WBNKF3b
   2ESieND6oomHlKDYZVoRfQo3ZFIWz/RDBVhJp4ILN9Q2GcHqO3y7AJ4N5
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 94765648
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,248,1669093200"; 
   d="scan'208";a="94765648"
Date: Thu, 26 Jan 2023 14:40:39 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
	<marmarek@invisiblethingslab.com>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/python: change 's#' size type for Python >= 3.10
Message-ID: <Y9KQ53cVznRdT7ie@perard.uk.xensource.com>
References: <20230126051310.4149074-1-marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230126051310.4149074-1-marmarek@invisiblethingslab.com>

On Thu, Jan 26, 2023 at 06:13:10AM +0100, Marek Marczykowski-Górecki wrote:
> Python < 3.10 by default uses 'int' type for data+size string types
> (s#), unless PY_SSIZE_T_CLEAN is defined - in which case it uses
> Py_ssize_t. The former behavior was removed in Python 3.10; it is now
> required to define PY_SSIZE_T_CLEAN before including Python.h and to use
> Py_ssize_t for the length argument. The PY_SSIZE_T_CLEAN behavior is
> supported since Python 2.5.
> 
> Adjust bindings accordingly.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
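
For context, the change described above can be illustrated with a
hypothetical minimal binding fragment (not code from the Xen tree; the
function name is made up):

```c
/* PY_SSIZE_T_CLEAN must be defined before the first include of Python.h. */
#define PY_SSIZE_T_CLEAN
#include <Python.h>

/* Hypothetical parser: with PY_SSIZE_T_CLEAN, the length filled in by
 * the "s#" format unit is a Py_ssize_t; on Python < 3.10, without the
 * define, it would have been a plain int. */
static PyObject *example_len(PyObject *self, PyObject *args)
{
    const char *data;
    Py_ssize_t size;                 /* was: int size; */

    if (!PyArg_ParseTuple(args, "s#", &data, &size))
        return NULL;
    return PyLong_FromSsize_t(size);
}
```

On Python >= 3.10, calling a "#" format unit without the macro defined
fails at runtime, which is why the bindings have to be adjusted.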

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 14:49:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 14:49:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485073.752029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL3ZN-0007Wi-3X; Thu, 26 Jan 2023 14:49:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485073.752029; Thu, 26 Jan 2023 14:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL3ZN-0007Wb-0S; Thu, 26 Jan 2023 14:49:41 +0000
Received: by outflank-mailman (input) for mailman id 485073;
 Thu, 26 Jan 2023 14:48:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vvgl=5X=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1pL3Yd-0007Vf-MF
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 14:48:55 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8d85064a-9d88-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 15:48:54 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E1F826183C;
 Thu, 26 Jan 2023 14:48:52 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 05241C4339B;
 Thu, 26 Jan 2023 14:48:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d85064a-9d88-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674744532;
	bh=WNohoknU9nLfbwIXeO77neZ4yMyv64MASZK2UIAfOfc=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=OyaVsT5ePzTFS76R1FH5KLxOUgzscan6sLv55lgf+totKkalpj5P4hTJ0CFzpg2nB
	 FjnAcEI6WBL0tfwF/TTw8b+wwY6a9pfQDEoFaNDhDzv4Bx4E//TKL2HIis05iqFTqA
	 6cWnwUhcKJagQmydnhE9MWdKN98RZA7QH0/g2op6Qc+WX5kkWc6P8AxP1IQok4d4Iv
	 rC+rDq5bfpCpp87TX2zmlNbavkxhh4pjEZT5VQ1/5GQQQR5j3P3SiDxYUla0TLmDmY
	 tHoanGw9DWG+oAt/RnrhsxsQ7gltrsJN75kvWssUX4MOKayClNcjWM9UluK5fqquNs
	 P9OGQuC06AeFQ==
Date: Thu, 26 Jan 2023 16:48:04 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 6/6] mm: export dump_mm()
Message-ID: <Y9KSpNJ4y0GMwkrW@kernel.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-7-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-7-surenb@google.com>

On Wed, Jan 25, 2023 at 12:38:51AM -0800, Suren Baghdasaryan wrote:
> mmap_assert_write_locked() is used in vm_flags modifiers. Because
> mmap_assert_write_locked() uses dump_mm() and vm_flags are sometimes
> modified from inside a module, it is necessary to export the
> dump_mm() function.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  mm/debug.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/debug.c b/mm/debug.c
> index 9d3d893dc7f4..96d594e16292 100644
> --- a/mm/debug.c
> +++ b/mm/debug.c
> @@ -215,6 +215,7 @@ void dump_mm(const struct mm_struct *mm)
>  		mm->def_flags, &mm->def_flags
>  	);
>  }
> +EXPORT_SYMBOL(dump_mm);
>  
>  static bool page_init_poisoning __read_mostly = true;
>  
> -- 
> 2.39.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 14:55:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 14:55:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485079.752038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL3f7-0000gp-PB; Thu, 26 Jan 2023 14:55:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485079.752038; Thu, 26 Jan 2023 14:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL3f7-0000gi-MV; Thu, 26 Jan 2023 14:55:37 +0000
Received: by outflank-mailman (input) for mailman id 485079;
 Thu, 26 Jan 2023 14:51:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vvgl=5X=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1pL3bS-0000RC-1B
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 14:51:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5758fce-9d88-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 15:51:48 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4EA0C61756;
 Thu, 26 Jan 2023 14:51:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3695EC433EF;
 Thu, 26 Jan 2023 14:51:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5758fce-9d88-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674744706;
	bh=vTtNm5B4uxkFGkyUcD6bt7bUv740cC5zoLQ3n4iCXig=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=r54XDevqCxaIthjhpV90e/3PReNDit6CxkjYziBMldcr+DpwGPcGH7s0yFhPlYu8B
	 NohDoJ7KVIUCX5Zs7V0ydKB7t9XQRchAYOCR8XLnEbVPCZ8e7MIHv0UEcp73VODAq8
	 aFhcsGcrVCc28IZRly/gODnl3cguu4R9trJCc4u2GtQ9ygUwR4IFSGftEKj188geBR
	 3987kGANtDLWA/GzBI6W69E/F450ClXU14kAngo3y+A1wCivQR2InkOef4HeCwIwVj
	 QYEspA95Wv7WS3mfCUr9QMGnsDd2HsZvTUBOWl52ZSZiQBqHvpwy1Yy3PsmXixNWVQ
	 B28KDvwGClVRQ==
Date: Thu, 26 Jan 2023 16:50:59 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
Message-ID: <Y9KTUw/04FmBVplw@kernel.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-2-surenb@google.com>
 <Y9JFFYjfJf9uDijE@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Y9JFFYjfJf9uDijE@kernel.org>

On Thu, Jan 26, 2023 at 11:17:09AM +0200, Mike Rapoport wrote:
> On Wed, Jan 25, 2023 at 12:38:46AM -0800, Suren Baghdasaryan wrote:
> > vm_flags are among VMA attributes which affect decisions like VMA merging
> > and splitting. Therefore all vm_flags modifications are performed after
> > taking exclusive mmap_lock to prevent vm_flags updates racing with such
> > operations. Introduce modifier functions for vm_flags to be used whenever
> > flags are updated. This way we can better check and control correct
> > locking behavior during these updates.
> > 
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> >  include/linux/mm.h       | 37 +++++++++++++++++++++++++++++++++++++
> >  include/linux/mm_types.h |  8 +++++++-
> >  2 files changed, 44 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index c2f62bdce134..b71f2809caac 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -627,6 +627,43 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
> >  	INIT_LIST_HEAD(&vma->anon_vma_chain);
> >  }
> >  
> > +/* Use when VMA is not part of the VMA tree and needs no locking */
> > +static inline void init_vm_flags(struct vm_area_struct *vma,
> > +				 unsigned long flags)
> 
> I'd suggest to make it vm_flags_init() etc.

Thinking more about it, it will be even clearer to name these vma_flags_xyz()
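
As a rough sketch of the pattern under review, using mock types as
stand-ins rather than the real kernel structures, the modifier
functions centralize the locking check:

```c
#include <assert.h>

/* Mock stand-ins for illustration only -- NOT the real kernel types. */
struct mm_struct { int write_locked; };
struct vm_area_struct { struct mm_struct *vm_mm; unsigned long vm_flags; };

static void mmap_assert_write_locked(struct mm_struct *mm)
{
    assert(mm->write_locked);   /* the locking rule, enforced centrally */
}

/* Use when the VMA is not yet in the VMA tree: no locking needed. */
static void init_vm_flags(struct vm_area_struct *vma, unsigned long flags)
{
    vma->vm_flags = flags;
}

/* Use for a reachable VMA: caller must hold mmap_lock for writing. */
static void set_vm_flags(struct vm_area_struct *vma, unsigned long flags)
{
    mmap_assert_write_locked(vma->vm_mm);
    vma->vm_flags |= flags;
}
```

Routing every vm_flags update through helpers like these is what lets
the assertion catch an update made without the exclusive mmap_lock.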

> Except that
> 
> Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
> 

--
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 15:39:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 15:39:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485087.752049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL4LU-0005mc-47; Thu, 26 Jan 2023 15:39:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485087.752049; Thu, 26 Jan 2023 15:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL4LU-0005mV-1U; Thu, 26 Jan 2023 15:39:24 +0000
Received: by outflank-mailman (input) for mailman id 485087;
 Thu, 26 Jan 2023 15:39:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HSUn=5X=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pL4LS-0005mP-Gi
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 15:39:22 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9992aeeb-9d8f-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 16:39:20 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8C16661880;
 Thu, 26 Jan 2023 15:39:19 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 944BAC4339B;
 Thu, 26 Jan 2023 15:39:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9992aeeb-9d8f-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674747559;
	bh=eb7A4/uVco8ZN4jcR8yIJnmo4s/u3G30hcK0FC8/PjA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=RA3AawPIpSn6Nq9YUayfGmaMDik7XQPttSV9giBwoaO1vOx+oe3d2BFSW6b40NnLN
	 /M11So5tp7e7D1WBpLfXRKt2B97OzMMkHTlnlXlkFOxjsv+VpGYTZWZALQDwr+WPbG
	 P8wdTusYR4/shaj1AbY29xQCrhmT2m5bMIBaSxC5qzY4DNTzf1T/ve/YSIxOxFrED3
	 ny7PFAlzY8B1ANmWrfR7yHIzC1oJvcuSV/3SGnsg4QFDoCyu8k8/TWitZ3ZRdilgX4
	 jVBdsdT3hx4f9VmJn81D5glbyVePvQgZlR/sJePzhgIj74khO43tUpWdTBqPFYrLUN
	 uHAgIeeUh29HQ==
Date: Thu, 26 Jan 2023 07:39:16 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Vikram Garhwal <vikram.garhwal@amd.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, qemu-devel@nongnu.org, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
    alex.bennee@linaro.org, Peter Maydell <peter.maydell@linaro.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    "open list:ARM TCG CPUs" <qemu-arm@nongnu.org>
Subject: Re: [QEMU][PATCH v4 09/10] hw/arm: introduce xenpvh machine
In-Reply-To: <d6bb030b-406a-5a07-f089-2382bdd46e3c@amd.com>
Message-ID: <alpine.DEB.2.22.394.2301260739100.1978264@ubuntu-linux-20-04-desktop>
References: <20230125085407.7144-1-vikram.garhwal@amd.com> <20230125085407.7144-10-vikram.garhwal@amd.com> <alpine.DEB.2.22.394.2301251410440.1978264@ubuntu-linux-20-04-desktop> <d6bb030b-406a-5a07-f089-2382bdd46e3c@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Jan 2023, Vikram Garhwal wrote:
> Hi Stefano,
> 
> On 1/25/23 2:20 PM, Stefano Stabellini wrote:
> > On Wed, 25 Jan 2023, Vikram Garhwal wrote:
> > > Add a new machine xenpvh which creates an IOREQ server to
> > > register/connect with Xen Hypervisor.
> > > 
> > > Optional: When CONFIG_TPM is enabled, it also creates a tpm-tis-device,
> > > adds a
> > > TPM emulator and connects to swtpm running on host machine via chardev
> > > socket
> > > and support TPM functionalities for a guest domain.
> > > 
> > > Extra command line for aarch64 xenpvh QEMU to connect to swtpm:
> > >      -chardev socket,id=chrtpm,path=/tmp/myvtpm2/swtpm-sock \
> > >      -tpmdev emulator,id=tpm0,chardev=chrtpm \
> > >      -machine tpm-base-addr=0x0c000000 \
> > > 
> > > swtpm implements a TPM software emulator(TPM 1.2 & TPM 2) built on libtpms
> > > and
> > > provides access to TPM functionality over socket, chardev and CUSE
> > > interface.
> > > Github repo: https://github.com/stefanberger/swtpm
> > > Example for starting swtpm on host machine:
> > >      mkdir /tmp/vtpm2
> > >      swtpm socket --tpmstate dir=/tmp/vtpm2 \
> > >      --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
> > > 
> > > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > ---
> > >   docs/system/arm/xenpvh.rst    |  34 +++++++
> > >   docs/system/target-arm.rst    |   1 +
> > >   hw/arm/meson.build            |   2 +
> > >   hw/arm/xen_arm.c              | 184 ++++++++++++++++++++++++++++++++++
> > >   include/hw/arm/xen_arch_hvm.h |   9 ++
> > >   include/hw/xen/arch_hvm.h     |   2 +
> > >   6 files changed, 232 insertions(+)
> > >   create mode 100644 docs/system/arm/xenpvh.rst
> > >   create mode 100644 hw/arm/xen_arm.c
> > >   create mode 100644 include/hw/arm/xen_arch_hvm.h
> > > 
> > > diff --git a/docs/system/arm/xenpvh.rst b/docs/system/arm/xenpvh.rst
> > > new file mode 100644
> > > index 0000000000..e1655c7ab8
> > > --- /dev/null
> > > +++ b/docs/system/arm/xenpvh.rst
> > > @@ -0,0 +1,34 @@
> > > +XENPVH (``xenpvh``)
> > > +=========================================
> > > +This machine creates an IOREQ server to register/connect with Xen
> > > Hypervisor.
> > > +
> > > +When TPM is enabled, this machine also creates a tpm-tis-device at a user
> > > input
> > > +tpm base address, adds a TPM emulator and connects to a swtpm application
> > > +running on host machine via chardev socket. This enables xenpvh to
> > > support TPM
> > > +functionalities for a guest domain.
> > > +
> > > +More information about TPM use and installing swtpm linux application can
> > > be
> > > +found at: docs/specs/tpm.rst.
> > > +
> > > +Example for starting swtpm on host machine:
> > > +.. code-block:: console
> > > +
> > > +    mkdir /tmp/vtpm2
> > > +    swtpm socket --tpmstate dir=/tmp/vtpm2 \
> > > +    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
> > > +
> > > +Sample QEMU xenpvh commands for running and connecting with Xen:
> > > +.. code-block:: console
> > > +
> > > +    qemu-system-aarch64 -xen-domid 1 \
> > > +    -chardev socket,id=libxl-cmd,path=qmp-libxl-1,server=on,wait=off \
> > > +    -mon chardev=libxl-cmd,mode=control \
> > > +    -chardev
> > > socket,id=libxenstat-cmd,path=qmp-libxenstat-1,server=on,wait=off \
> > > +    -mon chardev=libxenstat-cmd,mode=control \
> > > +    -xen-attach -name guest0 -vnc none -display none -nographic \
> > > +    -machine xenpvh -m 1301 \
> > > +    -chardev socket,id=chrtpm,path=tmp/vtpm2/swtpm-sock \
> > > +    -tpmdev emulator,id=tpm0,chardev=chrtpm -machine
> > > tpm-base-addr=0x0C000000
> > > +
> > > +In above QEMU command, last two lines are for connecting xenpvh QEMU to
> > > swtpm
> > > +via chardev socket.
> > > diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
> > > index 91ebc26c6d..af8d7c77d6 100644
> > > --- a/docs/system/target-arm.rst
> > > +++ b/docs/system/target-arm.rst
> > > @@ -106,6 +106,7 @@ undocumented; you can get a complete list by running
> > >      arm/stm32
> > >      arm/virt
> > >      arm/xlnx-versal-virt
> > > +   arm/xenpvh
> > >     Emulated CPU architecture support
> > >   =================================
> > > diff --git a/hw/arm/meson.build b/hw/arm/meson.build
> > > index b036045603..06bddbfbb8 100644
> > > --- a/hw/arm/meson.build
> > > +++ b/hw/arm/meson.build
> > > @@ -61,6 +61,8 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true:
> > > files('fsl-imx7.c', 'mcimx7d-sabre.
> > >   arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
> > >   arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c',
> > > 'mcimx6ul-evk.c'))
> > >   arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
> > > +arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
> > > +arm_ss.add_all(xen_ss)
> > >     softmmu_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true:
> > > files('smmu-common.c'))
> > >   softmmu_ss.add(when: 'CONFIG_EXYNOS4', if_true:
> > > files('exynos4_boards.c'))
> > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> > > new file mode 100644
> > > index 0000000000..12b19e3609
> > > --- /dev/null
> > > +++ b/hw/arm/xen_arm.c
> > > @@ -0,0 +1,184 @@
> > > +/*
> > > + * QEMU ARM Xen PV Machine
> >                     ^ PVH
> > 
> > 
> > > + *
> > > + * Permission is hereby granted, free of charge, to any person obtaining
> > > a copy
> > > + * of this software and associated documentation files (the "Software"),
> > > to deal
> > > + * in the Software without restriction, including without limitation the
> > > rights
> > > + * to use, copy, modify, merge, publish, distribute, sublicense, and/or
> > > sell
> > > + * copies of the Software, and to permit persons to whom the Software is
> > > + * furnished to do so, subject to the following conditions:
> > > + *
> > > + * The above copyright notice and this permission notice shall be
> > > included in
> > > + * all copies or substantial portions of the Software.
> > > + *
> > > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > > EXPRESS OR
> > > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > > MERCHANTABILITY,
> > > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
> > > SHALL
> > > + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
> > > OTHER
> > > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> > > ARISING FROM,
> > > + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> > > IN
> > > + * THE SOFTWARE.
> > > + */
> > > +
> > > +#include "qemu/osdep.h"
> > > +#include "qemu/error-report.h"
> > > +#include "qapi/qapi-commands-migration.h"
> > > +#include "qapi/visitor.h"
> > > +#include "hw/boards.h"
> > > +#include "hw/sysbus.h"
> > > +#include "sysemu/block-backend.h"
> > > +#include "sysemu/tpm_backend.h"
> > > +#include "sysemu/sysemu.h"
> > > +#include "hw/xen/xen-legacy-backend.h"
> > > +#include "hw/xen/xen-hvm-common.h"
> > > +#include "sysemu/tpm.h"
> > > +#include "hw/xen/arch_hvm.h"
> > > +
> > > +#define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
> > > +OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
> > > +
> > > +static MemoryListener xen_memory_listener = {
> > > +    .region_add = xen_region_add,
> > > +    .region_del = xen_region_del,
> > > +    .log_start = NULL,
> > > +    .log_stop = NULL,
> > > +    .log_sync = NULL,
> > > +    .log_global_start = NULL,
> > > +    .log_global_stop = NULL,
> > > +    .priority = 10,
> > > +};
> > > +
> > > +struct XenArmState {
> > > +    /*< private >*/
> > > +    MachineState parent;
> > > +
> > > +    XenIOState *state;
> > > +
> > > +    struct {
> > > +        uint64_t tpm_base_addr;
> > > +    } cfg;
> > > +};
> > > +
> > > +void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
> > > +{
> > > +    hw_error("Invalid ioreq type 0x%x\n", req->type);
> > > +
> > > +    return;
> > > +}
> > > +
> > > +void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
> > > +                         bool add)
> > > +{
> > > +}
> > > +
> > > +void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
> > > +{
> > > +}
> > > +
> > > +void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
> > > +{
> > > +}
> > > +
> > > +#ifdef CONFIG_TPM
> > > +static void xen_enable_tpm(XenArmState *xam)
> > > +{
> > > +    Error *errp = NULL;
> > > +    DeviceState *dev;
> > > +    SysBusDevice *busdev;
> > > +
> > > +    TPMBackend *be = qemu_find_tpm_be("tpm0");
> > > +    if (be == NULL) {
> > > +        DPRINTF("Couldn't find the backend for tpm0\n");
> > > +        return;
> > > +    }
> > > +    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
> > > +    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
> > > +    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
> > > +    busdev = SYS_BUS_DEVICE(dev);
> > > +    sysbus_realize_and_unref(busdev, &error_fatal);
> > > +    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
> > > +
> > > +    DPRINTF("Connected tpmdev at address 0x%lx\n", xam->cfg.tpm_base_addr);
> > > +}
> > > +#endif
> > > +
> > > +static void xen_arm_init(MachineState *machine)
> > > +{
> > > +    XenArmState *xam = XEN_ARM(machine);
> > > +
> > > +    xam->state =  g_new0(XenIOState, 1);
> > > +
> > > +    xen_register_ioreq(xam->state, machine->smp.cpus, xen_memory_listener);
> > > +
> > > +#ifdef CONFIG_TPM
> > > +    if (xam->cfg.tpm_base_addr) {
> > > +        xen_enable_tpm(xam);
> > > +    } else {
> > > +        DPRINTF("tpm-base-addr is not provided. TPM will not be enabled\n");
> > > +    }
> > I would remove the "else", we already have a DPRINTF at the end of
> > xen_enable_tpm.
> 
> This print is a bit different from the one in xen_enable_tpm. I added it
> because the user now needs to provide "tpm_base_addr=0x0C00_0000" on the
> command line. If no tpm_base_addr is given, the cfg.tpm_base_addr value is
> 0x0, and we don't want to create a TPM device at 0x0.
> 
> Perhaps instead of a debug print, I should print a warning here?

Definitely not a warning because it is totally OK to configure QEMU with
CONFIG_TPM but then not pass tpm_base_addr because you don't want to
provide one to a Xen VM. But I can see that a debug printf can be useful
for debugging so it is fine to keep it too.
 
 
> > > +#endif
> > > +
> > > +    return;
> > the return is unnecessary
> > 
> > 
> > > +}
> > > +
> > > +#ifdef CONFIG_TPM
> > > +static void xen_arm_get_tpm_base_addr(Object *obj, Visitor *v,
> > > +                                      const char *name, void *opaque,
> > > +                                      Error **errp)
> > > +{
> > > +    XenArmState *xam = XEN_ARM(obj);
> > > +    uint64_t value = xam->cfg.tpm_base_addr;
> > > +
> > > +    visit_type_uint64(v, name, &value, errp);
> > > +}
> > > +
> > > +static void xen_arm_set_tpm_base_addr(Object *obj, Visitor *v,
> > > +                                      const char *name, void *opaque,
> > > +                                      Error **errp)
> > > +{
> > > +    XenArmState *xam = XEN_ARM(obj);
> > > +    uint64_t value;
> > > +
> > > +    if (!visit_type_uint64(v, name, &value, errp)) {
> > > +        return;
> > > +    }
> > > +
> > > +    xam->cfg.tpm_base_addr = value;
> > > +}
> > > +#endif
> > > +
> > > +static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
> > > +{
> > > +
> > > +    MachineClass *mc = MACHINE_CLASS(oc);
> > > +    mc->desc = "Xen Para-virtualized PC";
> > > +    mc->init = xen_arm_init;
> > > +    mc->max_cpus = 1;
> > > +    mc->default_machine_opts = "accel=xen";
> > > +
> > > +#ifdef CONFIG_TPM
> > > +    object_class_property_add(oc, "tpm-base-addr", "uint64_t",
> > > +                              xen_arm_get_tpm_base_addr,
> > > +                              xen_arm_set_tpm_base_addr,
> > > +                              NULL, NULL);
> > > +    object_class_property_set_description(oc, "tpm-base-addr",
> > > +                                          "Set Base address for TPM device.");
> > > +
> > > +    machine_class_allow_dynamic_sysbus_dev(mc, TYPE_TPM_TIS_SYSBUS);
> > > +#endif
> > > +}
> > > +
> > > +static const TypeInfo xen_arm_machine_type = {
> > > +    .name = TYPE_XEN_ARM,
> > > +    .parent = TYPE_MACHINE,
> > > +    .class_init = xen_arm_machine_class_init,
> > > +    .instance_size = sizeof(XenArmState),
> > > +};
> > > +
> > > +static void xen_arm_machine_register_types(void)
> > > +{
> > > +    type_register_static(&xen_arm_machine_type);
> > > +}
> > > +
> > > +type_init(xen_arm_machine_register_types)
> > > diff --git a/include/hw/arm/xen_arch_hvm.h b/include/hw/arm/xen_arch_hvm.h
> > > new file mode 100644
> > > index 0000000000..8fd645e723
> > > --- /dev/null
> > > +++ b/include/hw/arm/xen_arch_hvm.h
> > > @@ -0,0 +1,9 @@
> > > +#ifndef HW_XEN_ARCH_ARM_HVM_H
> > > +#define HW_XEN_ARCH_ARM_HVM_H
> > > +
> > > +#include <xen/hvm/ioreq.h>
> > > +void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
> > > +void arch_xen_set_memory(XenIOState *state,
> > > +                         MemoryRegionSection *section,
> > > +                         bool add);
> > > +#endif
> > > diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
> > > index 26674648d8..c7c515220d 100644
> > > --- a/include/hw/xen/arch_hvm.h
> > > +++ b/include/hw/xen/arch_hvm.h
> > > @@ -1,3 +1,5 @@
> > >   #if defined(TARGET_I386) || defined(TARGET_X86_64)
> > >   #include "hw/i386/xen_arch_hvm.h"
> > > +#elif defined(TARGET_ARM) || defined(TARGET_ARM_64)
> > > +#include "hw/arm/xen_arch_hvm.h"
> > >   #endif



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 15:42:42 2023
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
 <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com> <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
 <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com> <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
 <6099e6fb-0a3e-c6da-2766-d61c2c3d1e96@suse.com> <CABfawh=1XUWbeRJJZQsYVLyZX-Ez8=D2YYCgBYvDGQemHeJkzA@mail.gmail.com>
 <cfffcf15-c2fa-6529-d1ff-a71a7571bfe2@suse.com>
In-Reply-To: <cfffcf15-c2fa-6529-d1ff-a71a7571bfe2@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Thu, 26 Jan 2023 10:41:49 -0500
Message-ID: <CABfawhm_b=MskQN_zZsuKz0FDtZzZNvBMa8bXtxxUZU9rXbUCA@mail.gmail.com>
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest areas
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>

On Thu, Jan 26, 2023 at 3:14 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.01.2023 16:34, Tamas K Lengyel wrote:
> > On Tue, Jan 24, 2023 at 6:19 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 23.01.2023 19:32, Tamas K Lengyel wrote:
> >>> On Mon, Jan 23, 2023 at 11:24 AM Jan Beulich <jbeulich@suse.com>
wrote:
> >>>> On 23.01.2023 17:09, Tamas K Lengyel wrote:
> >>>>> On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com>
wrote:
> >>>>>> --- a/xen/arch/x86/mm/mem_sharing.c
> >>>>>> +++ b/xen/arch/x86/mm/mem_sharing.c
> >>>>>> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
> >>>>>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
> >>>>>>  }
> >>>>>>
> >>>>>> +static int copy_guest_area(struct guest_area *cd_area,
> >>>>>> +                           const struct guest_area *d_area,
> >>>>>> +                           struct vcpu *cd_vcpu,
> >>>>>> +                           const struct domain *d)
> >>>>>> +{
> >>>>>> +    mfn_t d_mfn, cd_mfn;
> >>>>>> +
> >>>>>> +    if ( !d_area->pg )
> >>>>>> +        return 0;
> >>>>>> +
> >>>>>> +    d_mfn = page_to_mfn(d_area->pg);
> >>>>>> +
> >>>>>> +    /* Allocate & map a page for the area if it hasn't been
already.
> >>> */
> >>>>>> +    if ( !cd_area->pg )
> >>>>>> +    {
> >>>>>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> >>>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> >>>>>> +        p2m_type_t p2mt;
> >>>>>> +        p2m_access_t p2ma;
> >>>>>> +        unsigned int offset;
> >>>>>> +        int ret;
> >>>>>> +
> >>>>>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL,
> >>> NULL);
> >>>>>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> >>>>>> +        {
> >>>>>> +            struct page_info *pg =
> > alloc_domheap_page(cd_vcpu->domain,
> >>>>> 0);
> >>>>>> +
> >>>>>> +            if ( !pg )
> >>>>>> +                return -ENOMEM;
> >>>>>> +
> >>>>>> +            cd_mfn = page_to_mfn(pg);
> >>>>>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> >>>>>> +
> >>>>>> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K,
> >>>>> p2m_ram_rw,
> >>>>>> +                                 p2m->default_access, -1);
> >>>>>> +            if ( ret )
> >>>>>> +                return ret;
> >>>>>> +        }
> >>>>>> +        else if ( p2mt != p2m_ram_rw )
> >>>>>> +            return -EBUSY;
> >>>>>> +
> >>>>>> +        /*
> >>>>>> +         * Simply specify the entire range up to the end of the
> > page.
> >>>>> All the
> >>>>>> +         * function uses it for is a check for not crossing page
> >>>>> boundaries.
> >>>>>> +         */
> >>>>>> +        offset = PAGE_OFFSET(d_area->map);
> >>>>>> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
> >>>>>> +                             PAGE_SIZE - offset, cd_area, NULL);
> >>>>>> +        if ( ret )
> >>>>>> +            return ret;
> >>>>>> +    }
> >>>>>> +    else
> >>>>>> +        cd_mfn = page_to_mfn(cd_area->pg);
> >>>>>
> >>>>> Everything to this point seems to be non mem-sharing/forking
related.
> >>> Could
> >>>>> these live somewhere else? There must be some other place where
> >>> allocating
> >>>>> these areas happens already for non-fork VMs so it would make sense
to
> >>> just
> >>>>> refactor that code to be callable from here.
> >>>>
> >>>> It is the "copy" aspect which makes this mem-sharing (or really fork)
> >>>> specific. Plus in the end this is no different from what you have
> >>>> there right now for copying the vCPU info area. In the final patch
> >>>> that other code gets removed by re-using the code here.
> >>>
> >>> Yes, the copy part is fork-specific. Arguably if there was a way to do
> > the
> >>> allocation of the page for vcpu_info I would prefer that being
> > elsewhere,
> >>> but while the only requirement is allocate-page and copy from parent
> > I'm OK
> >>> with that logic being in here because it's really straight forward.
But
> > now
> >>> you also do extra sanity checks here which are harder to comprehend in
> > this
> >>> context alone.
> >>
> >> What sanity checks are you talking about (also below, where you claim
> >> map_guest_area() would be used only to sanity check)?
> >
> > Did I misread your comment above "All the function uses it for is a
check
> > for not crossing page boundaries"? That sounds to me like a simple
sanity
> > check, unclear why it matters though and why only for forks.
>
> The comment is about the function's use of the range it is being passed.
> It doesn't say in any way that the function is doing only sanity checking.
> If the comment wording is ambiguous or unclear, I'm happy to take
> improvement suggestions.

Yes, please do, it definitely was confusing while reviewing the patch.

Thanks,
Tamas



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 15:46:40 2023
Date: Thu, 26 Jan 2023 15:09:00 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>, akpm@linux-foundation.org,
	michel@lespinasse.org, jglisse@google.com, mhocko@suse.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
Message-ID: <Y9KXjLaFFUvqqdd4@casper.infradead.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-2-surenb@google.com>
 <Y9JFFYjfJf9uDijE@kernel.org>
 <Y9KTUw/04FmBVplw@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Y9KTUw/04FmBVplw@kernel.org>

On Thu, Jan 26, 2023 at 04:50:59PM +0200, Mike Rapoport wrote:
> On Thu, Jan 26, 2023 at 11:17:09AM +0200, Mike Rapoport wrote:
> > On Wed, Jan 25, 2023 at 12:38:46AM -0800, Suren Baghdasaryan wrote:
> > > +/* Use when VMA is not part of the VMA tree and needs no locking */
> > > +static inline void init_vm_flags(struct vm_area_struct *vma,
> > > +				 unsigned long flags)
> > 
> > I'd suggest making it vm_flags_init() etc.
> 
> Thinking more about it, it would be even clearer to name these vma_flags_xyz()

Perhaps vma_VERB_flags()?

vma_init_flags()
vma_reset_flags()
vma_set_flags()
vma_clear_flags()
vma_mod_flags()



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 15:59:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 15:59:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485104.752079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL4ep-0001F0-8f; Thu, 26 Jan 2023 15:59:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485104.752079; Thu, 26 Jan 2023 15:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL4ep-0001Et-5s; Thu, 26 Jan 2023 15:59:23 +0000
Received: by outflank-mailman (input) for mailman id 485104;
 Thu, 26 Jan 2023 15:59:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wwAt=5X=amd.com=stefano.stabellini@srs-se1.protection.inumbo.net>)
 id 1pL4en-0001En-Vs
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 15:59:22 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2086.outbound.protection.outlook.com [40.107.220.86])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 63e0e897-9d92-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 16:59:19 +0100 (CET)
Received: from MW4PR03CA0238.namprd03.prod.outlook.com (2603:10b6:303:b9::33)
 by DM4PR12MB6398.namprd12.prod.outlook.com (2603:10b6:8:b5::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Thu, 26 Jan
 2023 15:59:16 +0000
Received: from CO1NAM11FT071.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b9:cafe::e5) by MW4PR03CA0238.outlook.office365.com
 (2603:10b6:303:b9::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22 via Frontend
 Transport; Thu, 26 Jan 2023 15:59:16 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT071.mail.protection.outlook.com (10.13.175.56) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Thu, 26 Jan 2023 15:59:15 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 26 Jan
 2023 09:59:14 -0600
Received: from ubuntu-20.04.2-arm64.shared (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2375.34 via Frontend Transport; Thu, 26 Jan 2023 09:59:13 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63e0e897-9d92-11ed-a5d9-ddcf98b90cbd
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lJAy8u549xO7FSbiHmtv6cGSNfE40BzoVH8yZb3gqVHO/ZKyocfQEr7iXYeJjt4h/HUJmZwYuCvz/l51qjRqpgk3/MZD3EG/zcPTOAXr7wd3b6sXsIPbZyz/DNvN0fG2nemBfYNogMwkPvgVS7/XXMFzIBVzMX0yTiIKm57gXj17R/qu9QX36KPCs7Ikm2fPumM/Dnhk//cjqh9XsfyHEfpakO7J46BOXsmBQSlEq6EVGmsDDPFxTxNpirMzJ381Mt/9kQoMxH/lPV3n5V/qQjQ376uKXQjwl8K2TnEZ7bsst3C9I5pUtkL0yPKpNtB4s9Py1gWELJh/j8RL+IDYLg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=khjpEyDzuKs0srj1SOymkxfPi+62LoBUiSGbPAT6KQI=;
 b=lW2OlqXCJ7EArNoEcsRx8D0chrvDTPODK6+A6JFK/Qb2C598yr9FolJ51kKvZg2wJivZKfdeVV31M22boyXFI7fzfpwm8jrIc4KiS6su+rGTXXD2suZ0yezEnkMDe7N98ojbVN9k7G4lsrtGwTPZjXaNwB0qCm6XdfuinCeqnolPI4zs6CaZtOxTikrEvh8zqrFC5zoLF5AJtb9GwVt+1WALcP6HOHRxVGHIpAW5YWzxKfN7m9x1xdPqnJ5/li/Ro3IuOmkou9KtwHYfadDFxExWyomvGcrZTTf619d5JaAwDpHurGoihFXP/jM9MbXvIfg0sS9D4KnRbiX61eajCA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=khjpEyDzuKs0srj1SOymkxfPi+62LoBUiSGbPAT6KQI=;
 b=uBlhObuBp9F2zTi5hUty2J8LFx37/ShwVbCavTIxzW5a/1RcxNwBFJAlNdJyMKyqs0uvUcj2/RYSpfY8ALm8XBoAK2Ephs0+WYVpAUVTVbXoowbURG9FpTkpeKQ0NLSbHzFeZHKsYDocd8mSplRUGY8hcHY6pSzDv8oWH6ppy8k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Date: Thu, 26 Jan 2023 07:59:08 -0800
From: Stefano Stabellini <stefano.stabellini@amd.com>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, <george.dunlap@citrix.com>,
	<andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<Bertrand.Marquis@arm.com>, <julien@xen.org>, Stefano Stabellini
	<stefano.stabellini@amd.com>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
In-Reply-To: <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com>
Message-ID: <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop>
References: <20230125205735.2662514-1-sstabellini@kernel.org> <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT071:EE_|DM4PR12MB6398:EE_
X-MS-Office365-Filtering-Correlation-Id: c01d5663-9fa2-4427-b02d-08daffb64657
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1SewCxArtfHH8VvxTuh6pP47ffIUk7XYQye3VJqC4wKDgb56HNG77jDTA9NdcYjV4C4QpgdYVnacMSBZayoQMnM+KG3JtV9LA8pRuDySd/yu7mAOX/8z8qGyzu16RFtJXL5wSWZdDEtzJRVp5RhQjwtMXluwVG8Be09XFa6sgrKisaCpr08mq3LWOcMvOeD+N87J2+5B5wkhwIqsyoAHiMj9/X+Y/1IveIDevZYV78pI+qmnURqDZULU7wRx5oP3PwYLng3WWHNQrliucAqPTWVOrv+aU9wkcV7VP0XBT0xRzWelYHn2e2a0J9jmu65ehSXdl+o5u1Ogp1G3UNbcLsfCzZUf0UUMXJ+9DerwmF5t1lXzmMHY2yI4FRkIkDCkVtOcFCehWit8LY87XR2fii427n31oEVnQWU9NweTuT/Sc7RQj2TrFPYT8YoFPkGnz3C/g6/6wTVZVOQoA31KZPiGhaXwlhVBgFtkB9rp3gOndYwmxnss/4FJE14JtTgfribZR9jfb84XsVOPRodpuCaqJYud0TQHQVLwmG59mHobqcXy5Lft0aoStM0ww2AwcvtoCEl8pEBWJvYP/99nsw+2LijfGJM1S+zBEO3HaLKXbErSc10QE392xJvf/RImddv1OBLNylIoIoyn2c7aGe/jVFNNbS7j+XA1SvX/nYOLpu/eOD/pnjjDwCwnw+FwlUTvVuEcrs4/DIwyh8AhHW/viQmU0bz6Iga0GzQL35s=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(7916004)(4636009)(346002)(136003)(376002)(396003)(39860400002)(451199018)(36840700001)(40470700004)(46966006)(86362001)(426003)(336012)(316002)(47076005)(33716001)(54906003)(9686003)(82310400005)(6666004)(186003)(81166007)(53546011)(83380400001)(478600001)(26005)(44832011)(2906002)(70586007)(70206006)(8676002)(5660300002)(40460700003)(40480700001)(6916009)(356005)(82740400003)(8936002)(41300700001)(36860700001)(4326008)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 15:59:15.3910
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c01d5663-9fa2-4427-b02d-08daffb64657
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT071.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6398

On Thu, 26 Jan 2023, Jan Beulich wrote:
> On 25.01.2023 21:57, Stefano Stabellini wrote:
> > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > 
> > As agreed during the last MISRA C discussion, I am adding the following
> > MISRA C rules: 7.1, 7.3, 18.3.
> > 
> > I am also adding 13.1 and 18.2 that were "agreed pending an analysis on
> > the amount of violations".
> > 
> > In the case of 13.1 there are zero violations reported by cppcheck.
> > 
> > In the case of 18.2, there are zero violations reported by cppcheck
> > after deviating the linker symbols, as discussed.
> 
> I find this suspicious.

Hi Jan, you are right to be suspicious about 18.2 :-)  cppcheck is
clearly not doing a great job at finding violations. Here is the full
picture:

- cppcheck finds 3 violations, all obviously related to linker symbols,
specifically in common/version.c:xen_build_init and
xen/lib/ctors.c:init_constructors

- Coverity finds 9 violations; I am not sure which ones they are

- Eclair finds 56 violations total on x86. Eclair is always the strictest
  of the three tools and is flagging:
  - the usage of the guest_mode macro in x86/traps.c and other places
  - the usage of the NEED_OP/NEED_IP macros in common/lzo.c
  The remaining violations should number fewer than 10.


> See e.g. the ((pg) - frame_table) expressions both Arm
> and x86 have. frame_table is neither a linker-generated symbol, nor does
> it represent something that the compiler (or static analysis tools) would
> recognize as an "object". Still, the entire frame table of course
> effectively is an object (an array), yet there is no way for any tool to
> actually recognize the array dimension.

I used cppcheck in my original email because it is the only tool today
where I can add a deviation as an in-code comment, re-run the scan,
and see what happens (i.e. watch the number of violations go down).

However, also considering that Coverity reports fewer than 10 violations,
and that Eclair reports more but mostly due to only 2-3 macros, I think
18.2 should be manageable.


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 16:26:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 16:26:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485111.752089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL54v-0005co-KM; Thu, 26 Jan 2023 16:26:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485111.752089; Thu, 26 Jan 2023 16:26:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL54v-0005ch-H6; Thu, 26 Jan 2023 16:26:21 +0000
Received: by outflank-mailman (input) for mailman id 485111;
 Thu, 26 Jan 2023 16:25:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2gD3=5X=google.com=surenb@srs-se1.protection.inumbo.net>)
 id 1pL53v-0005Vr-Dn
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 16:25:19 +0000
Received: from mail-yw1-x1133.google.com (mail-yw1-x1133.google.com
 [2607:f8b0:4864:20::1133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 042798df-9d96-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 17:25:17 +0100 (CET)
Received: by mail-yw1-x1133.google.com with SMTP id
 00721157ae682-506609635cbso30187387b3.4
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 08:25:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 042798df-9d96-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=rUFKXit/Slxp5jNkRtm2qJfbudUdkwg/5ym9L9/2xFg=;
        b=BrNExWGHbV1NeR+vu2Js5zAqwDKTAmFhHgoWjYZ0a3qbH7rru8W3QViuclznskZkVo
         6Q5eqrGX7jMOOdvE9K9lsVmJpHX9roidNQoqd4ah6qpZ3z5AR/LzfumpWsF7qxr+L/L7
         2EeJAw9MATGkA5VBf2UwOc7KCg21F0CUspP8pGqPmL78PHbmYrJgHGcDXuiJf+tpyEq5
         mYd3qiJmdB/mmqbT25mkgF6e/9yHOLIF4ZmJU2qiUjSg09+a1L9BOQF70sP/z/t1hEtG
         N3QguNCw23qX8RL/9XVLJb/vAzwIwx19tcG6Myly1SJ+d6fdbBsCbrW+Wp9iSE9SLjFe
         r7Zw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=rUFKXit/Slxp5jNkRtm2qJfbudUdkwg/5ym9L9/2xFg=;
        b=ZeZtfZHUn6FQcHMxVT537QhXzjNvrPPB2Gwc5QjHQ35zIcHk4QKiTfg9n3hPrvHx9V
         o5H07m1V3iRqRNtN0S/ofwh1sknGyqJ/n6GPJ3Ab75vt5bi7DyOYYFPv6pQgWj3mppeS
         LGgnuEo1Cny5N7H5QMm7al3C4acV9K6kO8YGfH/eoojNGNs0/dY348CN/ZaqkxZtAyrc
         lRA4ky53tJzWXm0rlcc46j6TZTCBwkJn7JBjxUxLcNIZop3w9YxYP0gm+gLF3vhan5Ry
         w2lf6BQAq4EcYe5LC0Ve2cQ2dV7iRaV7a6Id/s3GSw91gHtTE2LZpieUcxl9mYlfRN8a
         ffOA==
X-Gm-Message-State: AFqh2kq8FUT2bgPznnDP5NuGBLcWXKnwZDBihgom1djQPdfrP0U+ggDs
	d/Y/WA6pKV/TKLDbH2npXHvfxBTLmTQB7GSyM7xcDA==
X-Google-Smtp-Source: AMrXdXvRVLeaIi85wIrJBS5zRkOyr5/BQ66PCe0y1aLe9hmWIu7jqHBlhyy2SYPoXbE9x4WotKo1j1aRHLVU6Hd1LsI=
X-Received: by 2002:a81:1b8b:0:b0:4ff:774b:7ffb with SMTP id
 b133-20020a811b8b000000b004ff774b7ffbmr3541685ywb.218.1674750315051; Thu, 26
 Jan 2023 08:25:15 -0800 (PST)
MIME-Version: 1.0
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-2-surenb@google.com>
 <Y9JFFYjfJf9uDijE@kernel.org> <Y9KTUw/04FmBVplw@kernel.org> <Y9KXjLaFFUvqqdd4@casper.infradead.org>
In-Reply-To: <Y9KXjLaFFUvqqdd4@casper.infradead.org>
From: Suren Baghdasaryan <surenb@google.com>
Date: Thu, 26 Jan 2023 08:25:03 -0800
Message-ID: <CAJuCfpHs4wvQpitiAYc+PQX3LnitF=wvm=zVX7CzMozzmnbcnw@mail.gmail.com>
Subject: Re: [PATCH v2 1/6] mm: introduce vma->vm_flags modifier functions
To: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>, akpm@linux-foundation.org, michel@lespinasse.org, 
	jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, 
	mgorman@techsingularity.net, dave@stgolabs.net, liam.howlett@oracle.com, 
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org, 
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, 
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, 
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, 
	peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, 
	joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, 
	tatashin@google.com, edumazet@google.com, gthelen@google.com, 
	gurua@google.com, arjunroy@google.com, soheil@google.com, 
	hughlynch@google.com, leewalsh@google.com, posk@google.com, will@kernel.org, 
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org, 
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, 
	dave.hansen@linux.intel.com, richard@nod.at, anton.ivanov@cambridgegreys.com, 
	johannes@sipsolutions.net, qianweili@huawei.com, wangzhou1@hisilicon.com, 
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org, 
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com, 
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de, 
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com, 
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com, 
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com, 
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org, 
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com, tfiga@chromium.org, 
	m.szyprowski@samsung.com, mchehab@kernel.org, dimitri.sivanich@hpe.com, 
	zhangfei.gao@linaro.org, jejb@linux.ibm.com, martin.petersen@oracle.com, 
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com, 
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de, 
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net, 
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, 
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev, 
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org, 
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, 
	tiwai@suse.com, haojian.zhuang@gmail.com, robert.jarzmik@free.fr, 
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, 
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, 
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org, 
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev, kvm@vger.kernel.org, 
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, 
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org, 
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev, 
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org, 
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org, 
	linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	xen-devel@lists.xenproject.org, linux-stm32@st-md-mailman.stormreply.com, 
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev, 
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org, 
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org, linux-aio@kvack.org, 
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org, 
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org, 
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org, bpf@vger.kernel.org, 
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com, 
	selinux@vger.kernel.org, alsa-devel@alsa-project.org, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Thu, Jan 26, 2023 at 7:09 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Jan 26, 2023 at 04:50:59PM +0200, Mike Rapoport wrote:
> > On Thu, Jan 26, 2023 at 11:17:09AM +0200, Mike Rapoport wrote:
> > > On Wed, Jan 25, 2023 at 12:38:46AM -0800, Suren Baghdasaryan wrote:
> > > > +/* Use when VMA is not part of the VMA tree and needs no locking */
> > > > +static inline void init_vm_flags(struct vm_area_struct *vma,
> > > > +                          unsigned long flags)
> > >
> > > I'd suggest making it vm_flags_init() etc.
> >
> > Thinking more about it, it would be even clearer to name these vma_flags_xyz()
>
> Perhaps vma_VERB_flags()?
>
> vma_init_flags()
> vma_reset_flags()
> vma_set_flags()
> vma_clear_flags()
> vma_mod_flags()

Due to excessive email bouncing I posted the v3 of this patchset using
the original per-VMA patchset's distribution list. That might have
dropped Mike from the recipients. Sorry about that, Mike; I'll add you to
my usual list of suspects :)
The v3 is here:
https://lore.kernel.org/all/20230125233554.153109-1-surenb@google.com/
and Andrew did suggest the same renames, so I'll be posting v4 with
those changes later today.
Thanks for the feedback!



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 16:29:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 16:29:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485117.752099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL58G-0006DI-2g; Thu, 26 Jan 2023 16:29:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485117.752099; Thu, 26 Jan 2023 16:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL58G-0006DB-04; Thu, 26 Jan 2023 16:29:48 +0000
Received: by outflank-mailman (input) for mailman id 485117;
 Thu, 26 Jan 2023 16:29:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pL58F-0006D3-Dq
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 16:29:47 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2054.outbound.protection.outlook.com [40.107.8.54])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a3ee31af-9d96-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 17:29:44 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8896.eurprd04.prod.outlook.com (2603:10a6:102:20f::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Thu, 26 Jan
 2023 16:29:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 16:29:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3ee31af-9d96-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BGv5P8Dak0c+5vWajkX1NtDIcNDgqk32ZBuapNcrrlX82wd0bBmFMBx3M7FEIoO3qIof3dCJaagMJth7PFtEpb0vQUP9dGGDeWJgAiSGedql+YkZathOIvWnQGkifqOOX2oaXi+/DhweQZrMuz9/s5zBTmQ6AhGMP6YEaRMTYGMxkqJ/3laZhwq4i+KzV69KCIhCxNlvKaDGp+/K9+eUZcWMmg6ETu/uHgyZ5qBxfh2Dro0fImxXCbBHXxCHVkDGqgYHLbSAdX75TNEFmfuP/QsXjRpjyTOgWDKvFkg8Phd8lBtgw0zrH6zI4i8188zpCtSFjbcuggDm3hSpMGyJjg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zNvLVdm6rkdShMT+NhHdgNyJY1Z/V0L0khdGqzMkUZg=;
 b=oaJgk8DMdKciK1JtyyEVws7NKaJXNfUTWxdGqKzyFRtfS9rNnfeO0xgN1jacHGIQG15aNaEldd+1Oo9qKpkZYVeO6zpNb++lnQlQL1zHIr8xssliLVeN+N5YuefbtY20kuG4BfWfQm1Ri1KPL9PAr46Y4DMN/vtC+JQfgkQZeVLMJ1f8Z8dG3BV7ezd8Kd4CazMfSR49reWbeuzPnsc2Toc8zQBW+hOoZXM7hFvQeqDXr8iK6t4TuG0lBJnZK5YZ1Fj4ee8Nr/d6APMe8AL1JoUuIJc0MicdqHnNI7k1fM3y/VrqRV9v5Lsp213ML3XATklThEPu8UDlF6nZW9X4EA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zNvLVdm6rkdShMT+NhHdgNyJY1Z/V0L0khdGqzMkUZg=;
 b=FWqxCSFbTr8LZ5iSGlEgk85YVxgaubjw6U9lbEaXTo1So7YMVexrxvF93h0C0FcGXcS++UVH+6cdycLlo1H382Pk90b4Vtoq+FdADm0cZzqSrgBNdZ9kcIKKvyY+9CmTzHfWb7uEIddlq6i9TWoEzFtvBmAE3d1+AlVE/zO8WQKhneGWfVcIp+Ov/lJ+EJFAoFdZPDhVuzJU2zbgnR7xFnp9jSRvWgfJwGG+SdxbiqgO6NmXHz+QIr7lj/V0Uf00DWk/r7ypB5wHavRG/2qHW2EOhe0Fl78kLYQZhjhn6pOmt8qxbipu+sfELU6kPXRyx9QTm4z1e8JhYZGQhbMziw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <21f1e368-5d44-b689-0bb7-164a53e5ffd7@suse.com>
Date: Thu, 26 Jan 2023 17:29:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 07/11] xen: add cache coloring allocator for domains
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Luca Miccio <lucmiccio@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-8-carlo.nonato@minervasys.tech>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230123154735.74832-8-carlo.nonato@minervasys.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0084.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
Date: Thu, 26 Jan 2023 16:29:41 +0000

On 23.01.2023 16:47, Carlo Nonato wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -299,6 +299,20 @@ can be maintained with the pv-shim mechanism.
>      cause Xen not to use Indirect Branch Tracking even when support is
>      available in hardware.
>  
> +### buddy-alloc-size (arm64)

I can't find where such a command line option would be processed.

> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -128,6 +128,9 @@ struct page_info
>  #else
>  #define PGC_static     0
>  #endif
> +/* Page is cache colored */
> +#define _PGC_colored      PG_shift(4)
> +#define PGC_colored       PG_mask(1, 4)

Is there a reason you don't follow the conditional approach we've taken
for PGC_static?

Thinking of which - what are the planned interactions with the static
allocator? If the two are exclusive of one another, I guess that would
need expressing somehow.

> --- a/xen/arch/arm/llc_coloring.c
> +++ b/xen/arch/arm/llc_coloring.c
> @@ -33,6 +33,8 @@ static paddr_t __ro_after_init addr_col_mask;
>  static unsigned int __ro_after_init dom0_colors[CONFIG_NR_LLC_COLORS];
>  static unsigned int __ro_after_init dom0_num_colors;
>  
> +#define addr_to_color(addr) (((addr) & addr_col_mask) >> PAGE_SHIFT)

You're shifting right by PAGE_SHIFT here just to ...

> @@ -299,6 +301,16 @@ unsigned int *llc_colors_from_str(const char *str, unsigned int *num_colors)
>      return colors;
>  }
>  
> +unsigned int page_to_llc_color(const struct page_info *pg)
> +{
> +    return addr_to_color(page_to_maddr(pg));

... undo the corresponding left shift in page_to_maddr(). Depending
on other uses of addr_col_mask it may be worthwhile to either change
that to or accompany it by a mask to operate on frame numbers.

> @@ -924,6 +929,13 @@ static struct page_info *get_free_buddy(unsigned int zone_lo,
>      }
>  }
>  
> +/* Initialise fields which have other uses for free pages. */
> +static void init_free_page_fields(struct page_info *pg)
> +{
> +    pg->u.inuse.type_info = PGT_TYPE_INFO_INITIALIZER;
> +    page_set_owner(pg, NULL);
> +}

To limit the size of the functional change, abstracting out this function
would best be done in a separate patch (which could then also go in ahead
of time, simplifying things slightly for you as well).

> @@ -1488,7 +1497,7 @@ static void free_heap_pages(
>              /* Merge with predecessor block? */
>              if ( !mfn_valid(page_to_mfn(predecessor)) ||
>                   !page_state_is(predecessor, free) ||
> -                 (predecessor->count_info & PGC_static) ||
> +                 (predecessor->count_info & (PGC_static | PGC_colored)) ||
>                   (PFN_ORDER(predecessor) != order) ||
>                   (page_to_nid(predecessor) != node) )
>                  break;
> @@ -1512,7 +1521,7 @@ static void free_heap_pages(
>              /* Merge with successor block? */
>              if ( !mfn_valid(page_to_mfn(successor)) ||
>                   !page_state_is(successor, free) ||
> -                 (successor->count_info & PGC_static) ||
> +                 (successor->count_info & (PGC_static | PGC_colored)) ||
>                   (PFN_ORDER(successor) != order) ||
>                   (page_to_nid(successor) != node) )
>                  break;

This, especially without being mentioned in the description (only in the
revision log), could likely also be split out (and then also be properly
justified).

> @@ -1928,6 +1937,182 @@ static unsigned long avail_heap_pages(
>      return free_pages;
>  }
>  
> +#ifdef CONFIG_LLC_COLORING
> +/*************************
> + * COLORED SIDE-ALLOCATOR
> + *
> + * Pages are grouped by LLC color in lists which are globally referred to as the
> + * color heap. Lists are populated in end_boot_allocator().
> + * After initialization there will be N lists where N is the number of
> + * available colors on the platform.
> + */
> +typedef struct page_list_head colored_pages_t;

To me this type rather hides information, so I think I would prefer if
you dropped it.

> +static colored_pages_t *__ro_after_init _color_heap;
> +static unsigned long *__ro_after_init free_colored_pages;
> +
> +/* Memory required for buddy allocator to work with colored one */
> +static unsigned long __initdata buddy_alloc_size =
> +    CONFIG_BUDDY_ALLOCATOR_SIZE << 20;

Please don't open-code MB().

> +#define color_heap(color) (&_color_heap[color])
> +
> +static bool is_free_colored_page(struct page_info *page)

const please (and wherever applicable throughout the series)

> +{
> +    return page && (page->count_info & PGC_state_free) &&
> +                   (page->count_info & PGC_colored);
> +}
> +
> +/*
> + * The {free|alloc}_color_heap_page overwrite pg->count_info, but they do it in
> + * the same way as the buddy allocator corresponding functions do:
> + * protecting the access with a critical section using heap_lock.
> + */

I think such a comment would only be useful if you did things differently,
even if just slightly. And indeed I think you do, e.g. by ORing in
PGC_colored below (albeit that's still similar to unprepare_staticmem_pages(),
so perhaps fine without further explanation). Differences are what may need
commenting on (such that the safety thereof can be judged upon).

> +static void free_color_heap_page(struct page_info *pg)
> +{
> +    unsigned int color = page_to_llc_color(pg), nr_colors = get_nr_llc_colors();
> +    unsigned long pdx = page_to_pdx(pg);
> +    colored_pages_t *head = color_heap(color);
> +    struct page_info *prev = pdx >= nr_colors ? pg - nr_colors : NULL;
> +    struct page_info *next = pdx + nr_colors < FRAMETABLE_NR ? pg + nr_colors
> +                                                             : NULL;

Are these two calculations safe? At least on x86 parts of frame_table[] may
not be populated, so de-referencing prev and/or next might fault.

> +    spin_lock(&heap_lock);
> +
> +    if ( is_free_colored_page(prev) )
> +        next = page_list_next(prev, head);
> +    else if ( !is_free_colored_page(next) )
> +    {
> +        /*
> +         * FIXME: linear search is slow, but also note that the frametable is
> +         * used to find free pages in the immediate neighborhood of pg in
> +         * constant time. When freeing contiguous pages, the insert position of
> +         * most of them is found without the linear search.
> +         */
> +        page_list_for_each( next, head )
> +        {
> +            if ( page_to_maddr(next) > page_to_maddr(pg) )
> +                break;
> +        }
> +    }
> +
> +    mark_page_free(pg, page_to_mfn(pg));
> +    pg->count_info |= PGC_colored;
> +    free_colored_pages[color]++;
> +    page_list_add_prev(pg, next, head);
> +
> +    spin_unlock(&heap_lock);
> +}

There's no scrubbing here at all, and no mention of the lack thereof in
the description.

> +static void __init init_color_heap_pages(struct page_info *pg,
> +                                         unsigned long nr_pages)
> +{
> +    unsigned int i;
> +
> +    if ( buddy_alloc_size )
> +    {
> +        unsigned long buddy_pages = min(PFN_DOWN(buddy_alloc_size), nr_pages);
> +
> +        init_heap_pages(pg, buddy_pages);
> +        nr_pages -= buddy_pages;
> +        buddy_alloc_size -= buddy_pages << PAGE_SHIFT;
> +        pg += buddy_pages;
> +    }

I think you want to bail here if nr_pages is now zero, not least to avoid
crashing ...

> +    if ( !_color_heap )
> +    {
> +        unsigned int nr_colors = get_nr_llc_colors();
> +
> +        _color_heap = xmalloc_array(colored_pages_t, nr_colors);
> +        BUG_ON(!_color_heap);
> +        free_colored_pages = xzalloc_array(unsigned long, nr_colors);
> +        BUG_ON(!free_colored_pages);

... here in case the amount that was freed was really tiny.

> +        for ( i = 0; i < nr_colors; i++ )
> +            INIT_PAGE_LIST_HEAD(color_heap(i));
> +    }
> +
> +    printk(XENLOG_DEBUG
> +           "Init color heap with %lu pages starting from: %#"PRIx64"\n",
> +           nr_pages, page_to_maddr(pg));
> +
> +    for ( i = 0; i < nr_pages; i++ )
> +        free_color_heap_page(&pg[i]);
> +}
> +
> +static void dump_color_heap(void)
> +{
> +    unsigned int color;
> +
> +    printk("Dumping color heap info\n");
> +    for ( color = 0; color < get_nr_llc_colors(); color++ )
> +        printk("Color heap[%u]: %lu pages\n", color, free_colored_pages[color]);

When there are many colors and most memory is used, you may produce a
lot of output here for just displaying zeros. May I suggest that you
log only non-zero values?

> +}
> +
> +#else /* !CONFIG_LLC_COLORING */
> +
> +static void free_color_heap_page(struct page_info *pg) {}
> +static void __init init_color_heap_pages(struct page_info *pg,
> +                                         unsigned long nr_pages) {}
> +static struct page_info *alloc_color_heap_page(unsigned int memflags,
> +                                               struct domain *d)
> +{
> +    return NULL;
> +}
> +static void dump_color_heap(void) {}

As said elsewhere (albeit for a slightly different reason): It may be
worthwhile to try to omit these stubs and instead expose the normal
code to the compiler unconditionally, relying on DCE. That'll reduce
the risk of people breaking the coloring code without noticing, when
build-testing only other configurations.

> @@ -1936,12 +2121,19 @@ void __init end_boot_allocator(void)
>      for ( i = 0; i < nr_bootmem_regions; i++ )
>      {
>          struct bootmem_region *r = &bootmem_region_list[i];
> -        if ( (r->s < r->e) &&
> -             (mfn_to_nid(_mfn(r->s)) == cpu_to_node(0)) )
> +        if ( r->s < r->e )
>          {
> -            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
> -            r->e = r->s;
> -            break;
> +            if ( llc_coloring_enabled )
> +            {
> +                init_color_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
> +                r->e = r->s;
> +            }
> +            else if ( mfn_to_nid(_mfn(r->s)) == cpu_to_node(0) )
> +            {
> +                init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
> +                r->e = r->s;
> +                break;
> +            }

I think the coloring part here deserves a comment, or else it can easily
look as if there was a missing "break" (or it was placed in the wrong
scope). I also think it might help to restructure your change a little,
both to reduce the diff and to keep indentation bounded:

  if ( r->s >= r->e )
    continue;

  if ( llc_coloring_enabled )
    ...

Also please take the opportunity to add the missing blank lines between
declaration and statements.

> @@ -2332,6 +2524,7 @@ int assign_pages(
>  {
>      int rc = 0;
>      unsigned int i;
> +    unsigned long allowed_flags = (PGC_extra | PGC_static | PGC_colored);

This is one of the few cases where I think "const" would be helpful even
on a not-pointed-to type. There's also not really any need for parentheses
here. As to the name, ...

> @@ -2349,7 +2542,7 @@ int assign_pages(
>  
>          for ( i = 0; i < nr; i++ )
>          {
> -            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_static)));
> +            ASSERT(!(pg[i].count_info & ~allowed_flags));

... while "allowed" may be fine for this use, it really isn't ...

> @@ -2408,8 +2601,8 @@ int assign_pages(
>          ASSERT(page_get_owner(&pg[i]) == NULL);
>          page_set_owner(&pg[i], d);
>          smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
> -        pg[i].count_info =
> -            (pg[i].count_info & (PGC_extra | PGC_static)) | PGC_allocated | 1;
> +        pg[i].count_info = (pg[i].count_info & allowed_flags) |
> +                           PGC_allocated | 1;

... here. Maybe "preserved_flags" (or just "preserved")?

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -299,6 +299,33 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
>      }
>      head->tail = page;
>  }
> +static inline void
> +_page_list_add(struct page_info *page, struct page_info *prev,
> +               struct page_info *next)
> +{
> +    page->list.prev = page_to_pdx(prev);
> +    page->list.next = page_to_pdx(next);
> +    prev->list.next = page_to_pdx(page);
> +    next->list.prev = page_to_pdx(page);
> +}
> +static inline void
> +page_list_add_prev(struct page_info *page, struct page_info *next,
> +                   struct page_list_head *head)
> +{
> +    struct page_info *prev;
> +
> +    if ( !next )
> +    {
> +        page_list_add_tail(page, head);
> +        return;
> +    }

!next is ambiguous in its meaning, so a comment towards the intended
behavior here would be helpful. It could be that the tail insertion is
necessary behavior, but it also could be that insertion anywhere would
actually be okay, and tail insertion is merely the variant you ended up
picking.

Then again ...

> +    prev = page_list_prev(next, head);
> +    if ( !prev )
> +        page_list_add(page, head);
> +    else
> +        _page_list_add(page, prev, next);
> +}
>  static inline bool_t
>  __page_list_del_head(struct page_info *page, struct page_list_head *head,
>                       struct page_info *next, struct page_info *prev)
> @@ -451,6 +478,12 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
>      list_add_tail(&page->list, head);
>  }
>  static inline void
> +page_list_add_prev(struct page_info *page, struct page_info *next,
> +                   struct page_list_head *head)
> +{
> +    list_add_tail(&page->list, &next->list);

... you don't care about !next here at all?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 16:32:41 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176143: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-vhd:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7c46948a6e9cf47ed03b0d489fde894ad46f1437
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Jan 2023 16:32:33 +0000

flight 176143 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176143/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit2     <job status>                 broken
 test-arm64-arm64-xl-seattle     <job status>                 broken
 test-arm64-arm64-xl-vhd         <job status>                 broken
 test-arm64-arm64-xl-xsm         <job status>                 broken
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot                 fail REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot               fail REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 173462
 test-arm64-arm64-xl-vhd       8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot       fail in 176135 REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd       5 host-install(5)          broken pass in 176135
 test-arm64-arm64-xl-seattle   5 host-install(5)          broken pass in 176135
 test-arm64-arm64-xl-xsm       5 host-install(5)          broken pass in 176135
 test-arm64-arm64-xl-credit2   5 host-install(5)          broken pass in 176135
 test-amd64-amd64-xl-xsm       8 xen-boot         fail in 176135 pass in 176143

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7c46948a6e9cf47ed03b0d489fde894ad46f1437
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  110 days
Failing since        173470  2022-10-08 06:21:34 Z  110 days  228 attempts
Testing same since   176135  2023-01-26 00:10:53 Z    0 days    2 attempts

------------------------------------------------------------
3442 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  broken  
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      broken  
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit2 broken
broken-job test-arm64-arm64-xl-seattle broken
broken-job test-arm64-arm64-xl-vhd broken
broken-job test-arm64-arm64-xl-xsm broken
broken-step test-arm64-arm64-xl-vhd host-install(5)
broken-step test-arm64-arm64-xl-seattle host-install(5)
broken-step test-arm64-arm64-xl-xsm host-install(5)
broken-step test-arm64-arm64-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 529026 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 16:35:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 16:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485132.752121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5DU-0008Qx-2q; Thu, 26 Jan 2023 16:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485132.752121; Thu, 26 Jan 2023 16:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5DT-0008QE-WF; Thu, 26 Jan 2023 16:35:12 +0000
Received: by outflank-mailman (input) for mailman id 485132;
 Thu, 26 Jan 2023 16:35:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q9FD=5X=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pL5DS-0008Fu-PJ
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 16:35:10 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 652f58c7-9d97-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 17:35:08 +0100 (CET)
Received: by mail-ej1-x62c.google.com with SMTP id hw16so6509860ejc.10
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 08:35:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 652f58c7-9d97-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=eY8MqaP/PKYZ/s1JBiUJ0LQ4pGCWSOlUG8pefKZmdXg=;
        b=g33B0RAz1kGKhaAH+77KhFDGg3hPTJCMoK2/EpzhzJo16+3Qh51ZzWazPD2EM7t2yg
         FfUGc78vBdJcCJaeUWS8OUnpkPcCSHJ4VmL0RPF8hSHX7DH/IQgdBrZrq2ilvipa5TPt
         Q7ky46onwxn2wJV5V45YvGo13jQTn0DAYTGRkOl0mxMb0rxDTlQYxsct89ALSnAbty69
         RmU/jPBB8h2A1Tcb2nOkkORIt52Gfg44DeZ+xw/cMfS7a3MjmI2Dc32Mh0YqV+5g2eiM
         bcyB0VczL7eU553xJDJHikdVzUOnC4lOFuOreCJ5CZttfImGhmhu9Nr1LTie8fpij4tO
         xprQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=eY8MqaP/PKYZ/s1JBiUJ0LQ4pGCWSOlUG8pefKZmdXg=;
        b=PZ8OniEA7wuYiiePqSgMjxS/RoP7RMZ8MhxNXgJPIW6aPIANwHMKeRVLFwWf/yELzn
         eKxh9tX92bQZCvoU3HY27ZRzm5dLw3X3tC6a2H4fZ3Q7fqYnJ+bOWZUp5e2zPAokgvp3
         YD+0zEiNgM4ZJJAK6mLrZ4z9iqcilmc+PtP22mDRMnc0NzeRcITz0ODbsfCCHZWyMgM6
         Z1JzjWBcSslzrBSVcalIqyn5kHFKKO3GRAgM7Cvm2il0QG94MGIBWuKHUBTDw6+IGyrv
         z0bJzxG2zUWS6rwBqyizJtNAto0sEq7LbXGhvxcCZjquxU06Y/q8oLWITaalktNHdANL
         /sKQ==
X-Gm-Message-State: AFqh2kof1ENVVczFPTLNlzdD7IImnfAWNmJny/nD9tDCb94V2yzSLPNq
	bwBdRfqX6JeK51LieQVgU+ZnQsdWtS4PMHA5qBpMWg==
X-Google-Smtp-Source: AMrXdXvWV9BcG0zVt1RjwAsW/HnA/RUgqErGWu7AP4OVUk0pQuS0bJbbBjjk4BVqtCfkM3O2ZN1/7T3M+FK6xkOY+d4=
X-Received: by 2002:a17:906:1ccb:b0:86a:7123:d366 with SMTP id
 i11-20020a1709061ccb00b0086a7123d366mr4827149ejh.300.1674750907847; Thu, 26
 Jan 2023 08:35:07 -0800 (PST)
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-6-carlo.nonato@minervasys.tech> <Y9KMio62otVChUcq@perard.uk.xensource.com>
In-Reply-To: <Y9KMio62otVChUcq@perard.uk.xensource.com>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Thu, 26 Jan 2023 17:34:56 +0100
Message-ID: <CAG+AhRUNuK-mCB5hZWEktXsLMWdM39odhLjGBw1jbgS_gqhBdA@mail.gmail.com>
Subject: Re: [PATCH v4 05/11] tools: add support for cache coloring configuration
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, 
	Juergen Gross <jgross@suse.com>, Marco Solieri <marco.solieri@minervasys.tech>
Content-Type: text/plain; charset="UTF-8"

Hi Anthony,

On Thu, Jan 26, 2023 at 3:22 PM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> On Mon, Jan 23, 2023 at 04:47:29PM +0100, Carlo Nonato wrote:
> > diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> > index e939d07157..064f54c349 100644
> > --- a/tools/libs/ctrl/xc_domain.c
> > +++ b/tools/libs/ctrl/xc_domain.c
> > @@ -28,6 +28,20 @@ int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
> >  {
> >      int err;
> >      DECLARE_DOMCTL;
> > +    DECLARE_HYPERCALL_BUFFER(uint32_t, llc_colors);
> > +
> > +    if ( config->num_llc_colors )
> > +    {
> > +        size_t bytes = sizeof(uint32_t) * config->num_llc_colors;
> > +
> > +        llc_colors = xc_hypercall_buffer_alloc(xch, llc_colors, bytes);
> > +        if ( llc_colors == NULL ) {
> > +            PERROR("Could not allocate LLC colors for xc_domain_create");
> > +            return -ENOMEM;
> > +        }
> > +        memcpy(llc_colors, config->llc_colors.p, bytes);
> > +        set_xen_guest_handle(config->llc_colors, llc_colors);
>
> I think these two lines look wrong. There is a double usage of
> config->llc_colors, to both store a user pointer and then to store a
> hypercall buffer. Also, accessing the llc_colors.p field is probably wrong.

> I guess the caller of xc_domain_create() (that is, libxl) will have to
> take care of the hypercall buffer. It is already filling the
> xen_domctl_createdomain struct that is passed to the hypercall, so it's
> probably fine for it to handle a hypercall buffer which is part of it.

This is what I did in v3 :) (https://marc.info/?l=xen-devel&m=166930291506578)
However, things will probably change again because of a new interface in Xen.
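
For reference, the ownership pattern Anthony describes - keep the caller's
pointer, bounce the array through a separate buffer for the hypercall, and
restore the caller's pointer before returning - can be sketched in plain C.
This is a minimal illustration only: the struct, malloc/free and the fake
hypercall stand in for xen_domctl_createdomain, xc_hypercall_buffer_alloc()
and xc_hypercall_buffer_free().

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for the colour fields of xen_domctl_createdomain. */
struct create_config {
    uint32_t num_llc_colors;
    uint32_t *llc_colors;               /* caller-owned array */
};

/* Stand-in for the hypercall: only reads the bounce buffer. */
static uint32_t fake_hypercall(const uint32_t *colors, uint32_t n)
{
    uint32_t sum = 0;
    for ( uint32_t i = 0; i < n; i++ )
        sum += colors[i];
    return sum;
}

static int create_domain(struct create_config *config)
{
    uint32_t *bounce = NULL;
    uint32_t *saved = config->llc_colors;   /* remember the user pointer */

    if ( config->num_llc_colors )
    {
        size_t bytes = sizeof(uint32_t) * config->num_llc_colors;

        bounce = malloc(bytes);             /* xc_hypercall_buffer_alloc() */
        if ( !bounce )
            return -1;
        memcpy(bounce, config->llc_colors, bytes);
        config->llc_colors = bounce;        /* hand the buffer to "Xen" */
    }

    (void)fake_hypercall(config->llc_colors, config->num_llc_colors);

    config->llc_colors = saved;             /* restore before returning */
    free(bounce);                           /* xc_hypercall_buffer_free() */
    return 0;
}
```

The point of the restore step is that the struct never leaks the bounce
buffer back to the caller, so there is no double usage of the field across
the API boundary.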

> What happens in the hypervisor when both num_llc_colors and llc_colors
> are set to 0 in the struct xen_domctl_createdomain? Is it fine? That is
> to figure out whether all users of xc_domain_create() need to be updated.

A default coloring configuration is generated if the array is null or its
length is 0, so there are no problems on the Xen side.
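
The fallback Carlo describes can be sketched as below. The helper and the
default policy (use every colour) are hypothetical; the real check lives in
Xen's cache-colouring code, this only shows why a null/zero-length request
is harmless.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical: the platform's full colour range, used as the default. */
#define MAX_COLORS 16

static uint32_t default_colors[MAX_COLORS];

/* Return the colour array to use for a new domain. */
static const uint32_t *pick_colors(const uint32_t *requested, uint32_t num,
                                   uint32_t *out_num)
{
    if ( requested == NULL || num == 0 )
    {
        /* No configuration supplied: fall back to all colours. */
        for ( uint32_t i = 0; i < MAX_COLORS; i++ )
            default_colors[i] = i;
        *out_num = MAX_COLORS;
        return default_colors;
    }

    *out_num = num;
    return requested;
}
```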

> Also, ocaml binding is "broken", or at least needs updating. This is due
> to the addition of llc_colors into xen_domctl_createdomain, the size
> been different than the expected size.
>
>
> > +    }
> >
> >      domctl.cmd = XEN_DOMCTL_createdomain;
> >      domctl.domain = *pdomid;
> > @@ -39,6 +53,9 @@ int xc_domain_create(xc_interface *xch, uint32_t *pdomid,
> >      *pdomid = (uint16_t)domctl.domain;
> >      *config = domctl.u.createdomain;
> >
> > +    if ( llc_colors )
> > +        xc_hypercall_buffer_free(xch, llc_colors);
> > +
> >      return 0;
> >  }
> >
> > diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
> > index beec3f6b6f..6d0c768241 100644
> > --- a/tools/libs/light/libxl_create.c
> > +++ b/tools/libs/light/libxl_create.c
> > @@ -638,6 +638,8 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
> >              .grant_opts = XEN_DOMCTL_GRANT_version(b_info->max_grant_version),
> >              .vmtrace_size = ROUNDUP(b_info->vmtrace_buf_kb << 10, XC_PAGE_SHIFT),
> >              .cpupool_id = info->poolid,
> > +            .num_llc_colors = b_info->num_llc_colors,
> > +            .llc_colors.p = b_info->llc_colors,
> >          };
> >
> >          if (info->type != LIBXL_DOMAIN_TYPE_PV) {
> > diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> > index 0cfad8508d..1f944ca6d7 100644
> > --- a/tools/libs/light/libxl_types.idl
> > +++ b/tools/libs/light/libxl_types.idl
> > @@ -562,6 +562,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
> >      ("ioports",          Array(libxl_ioport_range, "num_ioports")),
> >      ("irqs",             Array(uint32, "num_irqs")),
> >      ("iomem",            Array(libxl_iomem_range, "num_iomem")),
> > +    ("llc_colors",       Array(uint32, "num_llc_colors")),
>
> For this, you are going to need to add a LIBXL_HAVE_ macro in libxl.h.
> There are plenty of examples as well as an explanation.
> A good name, I guess, would be LIBXL_HAVE_BUILDINFO_LLC_COLORS along
> with a short comment.

Ok, thanks for the explanation.
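
For context, LIBXL_HAVE_* macros are plain compile-time feature-test
defines in libxl.h that applications probe before using a new field. A
sketch of what the suggested one might look like (the name is Anthony's
suggestion and not yet in the tree, so treat it as hypothetical):

```c
#include <assert.h>

/*
 * LIBXL_HAVE_BUILDINFO_LLC_COLORS
 *
 * If this is defined, libxl_domain_build_info contains the llc_colors /
 * num_llc_colors fields for LLC cache colouring.
 */
#define LIBXL_HAVE_BUILDINFO_LLC_COLORS 1

/* How an application would gate its use of the new field. */
static int supports_llc_colors(void)
{
#ifdef LIBXL_HAVE_BUILDINFO_LLC_COLORS
    return 1;   /* safe to populate b_info->llc_colors */
#else
    return 0;
#endif
}
```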

> >      ("claim_mode",        libxl_defbool),
> >      ("event_channels",   uint32),
> >      ("kernel",           string),
>
>
> Thanks,
>
> --
> Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 16:45:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 16:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485136.752132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5Ma-0001Tp-VY; Thu, 26 Jan 2023 16:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485136.752132; Thu, 26 Jan 2023 16:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5Ma-0001Ti-SK; Thu, 26 Jan 2023 16:44:36 +0000
Received: by outflank-mailman (input) for mailman id 485136;
 Thu, 26 Jan 2023 16:44:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pL5Ma-0001TM-4e
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 16:44:36 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2043.outbound.protection.outlook.com [40.107.6.43])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b689a550-9d98-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 17:44:34 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9075.eurprd04.prod.outlook.com (2603:10a6:102:229::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Thu, 26 Jan
 2023 16:44:32 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 16:44:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b689a550-9d98-11ed-a5d9-ddcf98b90cbd
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ojmv1VtyFUFgcDx2SEtOnLgMu/igI+C5JlLBttAPqmJKAx+SkhbS1JgL16P+r7iI3/4i1pvheghcIwFWEFWIMfB2VP6DafAdz7lbs742r/PycqmRwJs/iLNIifsM/SvDmcNVRIt2PHA5gMFZIeND9wu8a2nIPW5TMs3bu3Gtxbr+JZzgjllNZhh0Rc8A19KcLftyH0/EJxtghWrbAdDLW+ObMmVyJ6r3rBofatG//a4wt45lQkquo5evn0ZYGQLR90rJpRs0HuT9uRDoyr7OVOBMhc2GJwuGl6O7+l2wW7s9oUs/WCsE3F+Yb/uy8DD0loZz/AggKD2S/zUXYnmkzw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oemARvIFtOwswuNzV3njIwrpUKIuh/DvcovG4shLT0U=;
 b=X0i1Q+9N1clSLPg5SQ/sYeNrUn1z3+HBraOvE+U/tazc/9heJp+0Vw/nxGJLXDfIIcFBAI/YXILxFBfn3KrtFJKnumsFH+8HvvbJ8SLq77OSjrTP9b3E3hiH4UfGKsezGYzQZYtkFToH5eutQTij83buzhaSr/qcW150Newrv40NrQLQYIMz91UA+3Ku9fbGAH1FGCQRuz6//AOHU8UrZ2orE6sk8Co3yhe8aIXag+ovjc9Bc/6ZnOgcn9h/gq6wjO039dk26NfylDj8RWmwDbtbdLWhpJZ++Eu5gT9nwiIBSpa2vYicEb/iPSHR4gVjt8AUvK+YPiuHn/B43aROSw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oemARvIFtOwswuNzV3njIwrpUKIuh/DvcovG4shLT0U=;
 b=3x+o0haWNB/ZZ7Z94ubaoFSKpK6XmusWcoObXDlma5ZXH5wgK5KkLURH1xlDmlJLXCY+Y1uqB5X9QWXLTvuEn27LcwqZOLWKEq1H2urC7dauHKKoYPk0GHoWgV3youWlEFpJwft+W+xV3MGJAANjK065GuHgtaf0nLkxoxuIwsyri9YFj/dYvVV/gZhhmweSXLP00P3luUPm+iDfjz0pTRRk+Vo+YcbBIVOxS/piUy6rSApl9rb/dGm2rGOrjDtbUAmkMn9NF3krqUZzXFLmkLo1FeWRuTF/Su0ieoqGn3oOW8qJ1kpWek0xSi8JJuAfW8vS5IvAGnmcqrwGIEIu/w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
Date: Thu, 26 Jan 2023 17:44:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
Content-Language: en-US
To: Stefano Stabellini <stefano.stabellini@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, george.dunlap@citrix.com,
 andrew.cooper3@citrix.com, roger.pau@citrix.com, Bertrand.Marquis@arm.com,
 julien@xen.org, xen-devel@lists.xenproject.org
References: <20230125205735.2662514-1-sstabellini@kernel.org>
 <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com>
 <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0048.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9075:EE_
X-MS-Office365-Filtering-Correlation-Id: d7ec4fc6-b245-46e0-ab55-08daffbc9980
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ScWyUv9k/KrqIlapT0rT7P87HG90vu/KdMdy7FeXMXRAuYXdNFPmQ+cRK9qfOXVImbpS34fPKGjAYGGiBR21/EZN9UNwichtu1cREMVHugIZNM3uqWXlYV580L/ATGvbk9LvdJbELvIDd/Fw1Ps4tTSeDIZYqIs3BG5KPIBWGciCVIWTIRI7yDGzamDNtfv9+dO2f5WZ/DgSgNnA5PXARoGP//nXS42AF2DlnoYCm921GuHt/arfW3t9AVFKzjyiEzpCx0sJoVrPEcfCso7tClZBI94AI1wAl5J5kIvhAxyHzcI75KhU+IsGZsUgArJCKevbTLq7ppZW70WuXBxq3mubrSxTOEb4+weXZv+01nEL/m5vB0Il3dYKsqD8/83ecrB2j46ONMX4zmhzlpkdxmMcpAELcWZOYCMP/wmV6gJwLpqqS99YBAVsNbFYjyIXEVrsSWm4DYsMidsOb1pZGfHa5pZbGVIlAFCLdrQf1Rhftgg3uqd1DeOsI658NWh8n41VYbmyqYsigtSg4tiJbaSDSsxHEuX0KTX+cxjJfArNY0M+l50t+gDVtrfdylL1nBxAIw9vVPT1ulCYBM7ydVGn7pM4QfCSp8yt2x0/+OVc4yQ0EJI5SCc7rOjz3GfkVi3rGEyFIRFhGrBQvzNckM1YHulyKBmUNHdkhBc44/PEOVkHUgPgeeImfmg+H69FgtkXaA5G0IGejSsfzXSk+oZ7ZAIHlQEbhyQ/82Bc9xc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(346002)(396003)(136003)(39860400002)(366004)(376002)(451199018)(8676002)(31696002)(31686004)(316002)(6512007)(41300700001)(4326008)(86362001)(6916009)(66476007)(2906002)(66946007)(66556008)(6666004)(478600001)(6506007)(6486002)(8936002)(83380400001)(2616005)(36756003)(38100700002)(5660300002)(26005)(186003)(53546011)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VGM5ZFF0enZzR3dWSlZYcVBPZlFQU1R4NWM1NXpwc3FWYzdLQXJ6aldUdUhY?=
 =?utf-8?B?MXZkR3RNbERZaVQvZ1prR1NnWmZQeTRWWmNRUk5rNGlaLzZjN1dXOWFIK2Np?=
 =?utf-8?B?MGpFekc0ZTFHaVp1MEl3dm9jR3U5eVY4eWRnSzRnc2ZCdlhNbmlYdjhnazBa?=
 =?utf-8?B?WU9ZZDBKbDZrbkFhaDdPM0xvcEhZSFpvbGlXMTFTQXhqRWp0TDdJQitHbjg5?=
 =?utf-8?B?aStzUnBkbXg1b3N2di9TU08xL1Yxakd0bUJUcWtmaGVHUThhNnd3cDA5RUZO?=
 =?utf-8?B?MHhaZVFPa3c4dklaUnFFMlJabzRRN1c3VkVHc3RHTTg2WXpXRXFrRzhrZjZp?=
 =?utf-8?B?amFaTlpKQ01mdTdlVDZZQjdiclErTGFQNkUyNkkrUmJrekpRR284VHZaVlhp?=
 =?utf-8?B?K1VqQ3FkR21iYms5eTJJbENWeEdHRGFpM29rNHJHTTB4VFJiUk9tdkxwYnpX?=
 =?utf-8?B?dFU3cnNabUlUTWhCNUUwRWpLSDlXZWZGb0pMbnlqQlhXMlhzdUZQK01tTWpR?=
 =?utf-8?B?K2tZVnpTOEZiNXh0OUYxaWFzbkF1U2NHdGhJQmZGSTQra1F2a0ZHR1Y1YU5v?=
 =?utf-8?B?ZlpyOU9jZzdBOERzT25kM0crakxSV3FCN0FRRkRRWHZQQ0FHUm9RY2FRRFpC?=
 =?utf-8?B?TTlmUm0vVWxEckh4K2FTZ2d6WTl6Tno2cG9wNFZqT1dIQkk2cUNVWVpkNGV5?=
 =?utf-8?B?cHRSOUM0QklaRnkxa2lUM3dOSGtTVERET25TNVJkdmRDSU0xK3JNVFNGSTM3?=
 =?utf-8?B?U0dvOHJKVFg0cXlGNVZ3cUkwdnFqTWc3b1YwNHJmMFBwRFBKV1IxY3ZLMnVL?=
 =?utf-8?B?WTB3azg4aUgzOTVWQU1sSDJ3MHhyaFd5a2hreW1WbGFGWm9CWG92TUZzWndL?=
 =?utf-8?B?V3V5M2hZdFdmNElZTDBNNGVoSUNVVnQ4NzVIQzRuc0NuQ3Q4dklzM0VKODN5?=
 =?utf-8?B?bW1aeDcxckJjYjVJM3c1YnRsWUp2TmIyWVdlUVFvVUJwdEhralRDY0cwQUNS?=
 =?utf-8?B?OFd5WUlISW10N3VOTVJnVEtqL2RFL0tNZ0l5a2tmaVFWazdIa1hNTUZQc01w?=
 =?utf-8?B?cWR4akJoQTY5Yjc0SGQ3UiszMm1RRlMzd1N4Z3FoOWMzclZHWVM2Q0s3L2d5?=
 =?utf-8?B?ajBwdVF2U0RqeStib09OUWJzYjl3V2lBb1ZtNzkwVVh4RnB0dDU0NUtHa0pT?=
 =?utf-8?B?b1luSlFvMVJKSXRPYTBXOFZXMGc5cFk1MTNzZEhoYVZaZ2hJSXhzcUlVUVZZ?=
 =?utf-8?B?OC9IMFF1MDNGcmFkUVRKN05JV1dSbFN2LzJtRzVhd1Z5RE42QUZXTEUwQ1Nk?=
 =?utf-8?B?T2s3M0xpbHlRSG5FNERjYTZQbXp0a0hZN1hBY0lLbXp4aEN0OVVPODZFK2hr?=
 =?utf-8?B?T3hoaGkyYkR6djhMOWtoejdjZmlHNXZOTnNOM3VMZVFOeDJ2Q01EUkxsenN6?=
 =?utf-8?B?V3lWeXFJdGcxaEJZdFV5SVBNTnFsUStYMmFPNmhWaFYrUFRXaGxVa1RjRm9j?=
 =?utf-8?B?dkFOTkV4UHJJWW1DTXgxckh6U3pucmloRm00OTE2WXdBVjNZcFBQUVRwaTRK?=
 =?utf-8?B?KzBwS1k2dlNkcHdCZ1RmUUVFTWt1bTk0WktpL1J6VnZ0SUM4eG5WR1RzRGNL?=
 =?utf-8?B?bVhPYlVNdzA4dUloUWFYUzkzbkozTW0wN05NdXAwOWdMdm9TcUt3b0RQN0E0?=
 =?utf-8?B?bE1RZnBTNUVrazdBOTF3VFBXUEVYRVRmM1Y0ZUVZMk45VStkU0pVQlFJQ21a?=
 =?utf-8?B?MDgrMG44UFJhbC96TFBOVzZCZTdpK1loY05FTTFncThlWjdYdVMrUGRTZVRy?=
 =?utf-8?B?dFFOUHdhRUJwbGZJUTNCNlFUWHByVEhnQzhjWWF2Uk1STHdtOUdTMkkwSnUv?=
 =?utf-8?B?dndPM3BZUjhuWW5RRGsyYUt3RUxKYjVwMWsvdkc5YWRxWk90TFlEdWsxMXJG?=
 =?utf-8?B?ajFwb01HL3FVQjZ6S05yUDhSSGdEZ3lhNXdHRDhOZjNQcXl6NmNLd082ZFRS?=
 =?utf-8?B?bkxRUzF1Q2RYZUpRZGxnUjBUNUkxTGtCYUREaTkvb1pvQmg5c2huSWFmeFg1?=
 =?utf-8?B?blE3enhkVS8zWDJ5bzZCdElWajlSZGYrSHFxRkI1aWlQOEdXeDNaZWxlZWhh?=
 =?utf-8?Q?h3WhMhAdIu8uqUW0+4ZbTVHBH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d7ec4fc6-b245-46e0-ab55-08daffbc9980
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 16:44:32.3127
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: boO8fsNwJbpfzWjUWscDfyI5gg0VpSmYKiSg2dbM4nF0OvM6oUEjraH0aXsN0vng1iGhosstxiuT098GDYF6HQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9075

On 26.01.2023 16:59, Stefano Stabellini wrote:
> On Thu, 26 Jan 2023, Jan Beulich wrote:
>> On 25.01.2023 21:57, Stefano Stabellini wrote:
>>> From: Stefano Stabellini <stefano.stabellini@amd.com>
>>>
>>> As agreed during the last MISRA C discussion, I am adding the following
>>> MISRA C rules: 7.1, 7.3, 18.3.
>>>
>>> I am also adding 13.1 and 18.2 that were "agreed pending an analysis on
>>> the amount of violations".
>>>
>>> In the case of 13.1 there are zero violations reported by cppcheck.
>>>
>>> In the case of 18.2, there are zero violations reported by cppcheck
>>> after deviating the linker symbols, as discussed.
>>
>> I find this suspicious.
> 
> Hi Jan, you are right to be suspicious about 18.2 :-)  cppcheck is
> clearly not doing a great job at finding violations. Here is the full
> picture:
> 
> - cppcheck finds 3 violations, obviously related to linker symbols,
>   specifically common/version.c:xen_build_init and
>   xen/lib/ctors.c:init_constructors
> 
> - Coverity finds 9 violations, not sure which ones
> 
> - Eclair finds 56 total on x86. Eclair is always the strictest of the
>   three tools and is flagging:
>   - the usage of the guest_mode macro in x86/traps.c and other places
>   - the usage of the NEED_OP/NEED_IP macros in common/lzo.c
>   the remaining violations should be fewer than 10
> 
> 
>> See e.g. ((pg) - frame_table) expressions both Arm
>> and x86 have. frame_table is neither a linker-generated symbol, nor does
>> it represent something that the compiler (or static analysis tools) would
>> recognize as an "object". Still, the entire frame table of course
>> effectively is an object (array), yet there's no way for any tool to
>> actually recognize the array dimension.
> 
> I used cppcheck in my original email because it is the only tool today
> where I can add a deviation as an in-code comment, re-run the scan,
> and see what happens (i.e. watch the number of violations go down).
> 
> However, also considering that Coverity reports fewer than 10, and that
> Eclair reports more but only due to 2-3 macros, I think 18.2 should be
> manageable.

That's not the conclusion I would draw. If none of the three finds what
ought to be found, I'm not convinced this can be considered "manageable".
Subsequent tool improvements may change the picture quite unexpectedly.

Jan
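
To make the Rule 18.2 point concrete: the rule only permits subtracting
pointers that address elements of the same array object, which a tool can
verify for a declared array but not for something like frame_table, whose
extent is only established at runtime. A minimal sketch with illustrative
types (not Xen's real struct page_info):

```c
#include <assert.h>
#include <stddef.h>

struct page_info { int dummy; };

/* A declared array: the subtraction below is well-defined and visibly
 * 18.2-compliant, because both pointers provably lie in the same object. */
static struct page_info pages[8];

static ptrdiff_t page_index(const struct page_info *pg)
{
    /* Same idiom as Xen's (pg - frame_table).  Here a static analyser can
     * see the array bounds; in Xen, frame_table is mapped at runtime, so
     * no tool can prove the two pointers share one array object. */
    return pg - pages;
}
```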


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 16:48:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 16:48:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485144.752146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5QB-0002Kq-NC; Thu, 26 Jan 2023 16:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485144.752146; Thu, 26 Jan 2023 16:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5QB-0002Kj-Hh; Thu, 26 Jan 2023 16:48:19 +0000
Received: by outflank-mailman (input) for mailman id 485144;
 Thu, 26 Jan 2023 16:48:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=71dA=5X=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pL5QA-0002Jq-3z
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 16:48:18 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20605.outbound.protection.outlook.com
 [2a01:111:f400:7d00::605])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ac3f0bc-9d99-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 17:48:16 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7861.eurprd04.prod.outlook.com (2603:10a6:20b:2a9::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Thu, 26 Jan
 2023 16:48:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Thu, 26 Jan 2023
 16:48:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ac3f0bc-9d99-11ed-a5d9-ddcf98b90cbd
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LJ+kCTQo5WUHIzkpK8QPkc8DDNJ7Kn3eQpSpocAdQnN5yikYNfglkqbNDVTjc9Sls/7gEQiWwBGKHz6l+rsAp8ICIP3dLZVGgtGvSYwB02zxhyDhjBSZpMbe0bpvtWIJN8vnvuPbrqMvst3EXi/UF6PfryQhEKrVVsBD7rjDxoIT+w/pzcxYeU2ifUnOPR8JyM3jhy4SwnqImQ7QumDvUBaSj3fW+PP8kS1ZH1qikspbjN8ytUuIPyVQHskxR6naF5JbU/Arc9i/h4wHHNveZq7KYovtaTSM15vvHkJ+/UrnAh0Dwuu3+0FYM3FhZauI4nmnW7yfA6xPhtdeb/ffEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jPds969yM+ezwXn14kl8/MZxAMdHi6ULw6VbqlqSeKU=;
 b=LyhlGr3wmvTlKdAHGY3+gcUWU4zelstlRLMfFqKlEYIVVykt+pSqvPsaHGLzJVqQZu+hc2ymUnsfLpGurhgDFoA7JWS3XmbW8xM0e0/C2JegC7akDDmAfp7qp7fmVpKS+0eSSceeureCmwra4Vu82rJGrVnghuWMvPdFMEoK/hsD3syi46HttH6swN4662Ao1SbO1SgtQSKw4tOcfpH470cc+sCooJzlS6OwQZTHkpuojdvX7yqvaqNq/w6Ce0cuVHxS/SETbAQVV5+RE65D2hH93o4Zuuv2DbU4tsNwAgUJuzB3ioxObsdveKplWLIxhI6M5i3r9UNuPr5V5Zd80w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jPds969yM+ezwXn14kl8/MZxAMdHi6ULw6VbqlqSeKU=;
 b=qs/Ge3JjWEOjpNCGajVEc01JDNW/dLW/4zYmQOKRGWS8S8eLmM+LQlDDY9Kjp9h+fLQjVBbrw4kMr8B6zSIiciIWwYK0UYnxinbB7hXHGT++7FjDzpQ2PUrzCDOZ6EOBK4whWkV0VAl9YklyRi5pkNcBHm1L09iWePsqI4vVgQsMCw80er+nENErVYBXKQ7jbUbnuxC205pvjYgeBKx5ssJ62F9u5wig6a5mbIjfhVvx0BLfLa+50InC3K6gbc1Kjek2yUdqSXSpu5Arg886wKGD4B6thSP1cyVmwU9bbSZE+C2N1a6Pr69rygWWmAY0I4LaxpY2itxpDpcnYjJVzg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <151a70b0-fcb6-9b0f-7834-d2cc15b5d9b0@suse.com>
Date: Thu, 26 Jan 2023 17:48:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest
 areas
Content-Language: en-US
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
 <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com>
 <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
 <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com>
 <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
 <6099e6fb-0a3e-c6da-2766-d61c2c3d1e96@suse.com>
 <CABfawh=1XUWbeRJJZQsYVLyZX-Ez8=D2YYCgBYvDGQemHeJkzA@mail.gmail.com>
 <cfffcf15-c2fa-6529-d1ff-a71a7571bfe2@suse.com>
 <CABfawhm_b=MskQN_zZsuKz0FDtZzZNvBMa8bXtxxUZU9rXbUCA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CABfawhm_b=MskQN_zZsuKz0FDtZzZNvBMa8bXtxxUZU9rXbUCA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0173.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7861:EE_
X-MS-Office365-Filtering-Correlation-Id: 43662b3f-a9f0-4930-687e-08daffbd1d95
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 43662b3f-a9f0-4930-687e-08daffbd1d95
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 16:48:13.8141
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7861

On 26.01.2023 16:41, Tamas K Lengyel wrote:
> On Thu, Jan 26, 2023 at 3:14 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 25.01.2023 16:34, Tamas K Lengyel wrote:
>>> On Tue, Jan 24, 2023 at 6:19 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 23.01.2023 19:32, Tamas K Lengyel wrote:
>>>>> On Mon, Jan 23, 2023 at 11:24 AM Jan Beulich <jbeulich@suse.com>
> wrote:
>>>>>> On 23.01.2023 17:09, Tamas K Lengyel wrote:
>>>>>>> On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com>
> wrote:
>>>>>>>> --- a/xen/arch/x86/mm/mem_sharing.c
>>>>>>>> +++ b/xen/arch/x86/mm/mem_sharing.c
>>>>>>>> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
>>>>>>>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +static int copy_guest_area(struct guest_area *cd_area,
>>>>>>>> +                           const struct guest_area *d_area,
>>>>>>>> +                           struct vcpu *cd_vcpu,
>>>>>>>> +                           const struct domain *d)
>>>>>>>> +{
>>>>>>>> +    mfn_t d_mfn, cd_mfn;
>>>>>>>> +
>>>>>>>> +    if ( !d_area->pg )
>>>>>>>> +        return 0;
>>>>>>>> +
>>>>>>>> +    d_mfn = page_to_mfn(d_area->pg);
>>>>>>>> +
>>>>>>>> +    /* Allocate & map a page for the area if it hasn't been
> already.
>>>>> */
>>>>>>>> +    if ( !cd_area->pg )
>>>>>>>> +    {
>>>>>>>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
>>>>>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
>>>>>>>> +        p2m_type_t p2mt;
>>>>>>>> +        p2m_access_t p2ma;
>>>>>>>> +        unsigned int offset;
>>>>>>>> +        int ret;
>>>>>>>> +
>>>>>>>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL,
>>>>> NULL);
>>>>>>>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
>>>>>>>> +        {
>>>>>>>> +            struct page_info *pg =
>>> alloc_domheap_page(cd_vcpu->domain,
>>>>>>> 0);
>>>>>>>> +
>>>>>>>> +            if ( !pg )
>>>>>>>> +                return -ENOMEM;
>>>>>>>> +
>>>>>>>> +            cd_mfn = page_to_mfn(pg);
>>>>>>>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
>>>>>>>> +
>>>>>>>> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K,
>>>>>>> p2m_ram_rw,
>>>>>>>> +                                 p2m->default_access, -1);
>>>>>>>> +            if ( ret )
>>>>>>>> +                return ret;
>>>>>>>> +        }
>>>>>>>> +        else if ( p2mt != p2m_ram_rw )
>>>>>>>> +            return -EBUSY;
>>>>>>>> +
>>>>>>>> +        /*
>>>>>>>> +         * Simply specify the entire range up to the end of the
>>> page.
>>>>>>> All the
>>>>>>>> +         * function uses it for is a check for not crossing page
>>>>>>> boundaries.
>>>>>>>> +         */
>>>>>>>> +        offset = PAGE_OFFSET(d_area->map);
>>>>>>>> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
>>>>>>>> +                             PAGE_SIZE - offset, cd_area, NULL);
>>>>>>>> +        if ( ret )
>>>>>>>> +            return ret;
>>>>>>>> +    }
>>>>>>>> +    else
>>>>>>>> +        cd_mfn = page_to_mfn(cd_area->pg);
>>>>>>>
>>>>>>> Everything to this point seems to be non mem-sharing/forking
> related.
>>>>> Could
>>>>>>> these live somewhere else? There must be some other place where
>>>>> allocating
>>>>>>> these areas happens already for non-fork VMs so it would make sense
> to
>>>>> just
>>>>>>> refactor that code to be callable from here.
>>>>>>
>>>>>> It is the "copy" aspect which makes this mem-sharing (or really fork)
>>>>>> specific. Plus in the end this is no different from what you have
>>>>>> there right now for copying the vCPU info area. In the final patch
>>>>>> that other code gets removed by re-using the code here.
>>>>>
>>>>> Yes, the copy part is fork-specific. Arguably if there was a way to do
>>> the
>>>>> allocation of the page for vcpu_info I would prefer that being
>>> elsewhere,
>>>>> but while the only requirement is allocate-page and copy from parent
>>> I'm OK
>>>>> with that logic being in here because it's really straightforward.
> But
>>> now
>>>>> you also do extra sanity checks here which are harder to comprehend in
>>> this
>>>>> context alone.
>>>>
>>>> What sanity checks are you talking about (also below, where you claim
>>>> map_guest_area() would be used only to sanity check)?
>>>
>>> Did I misread your comment above "All the function uses it for is a
> check
>>> for not crossing page boundaries"? That sounds to me like a simple
> sanity
>>> check, unclear why it matters though and why only for forks.
>>
>> The comment is about the function's use of the range it is being passed.
>> It doesn't say in any way that the function is doing only sanity checking.
>> If the comment wording is ambiguous or unclear, I'm happy to take
>> improvement suggestions.
> 
> Yes, please do, it definitely was confusing while reviewing the patch.

I'm sorry, but what does "please do" mean when I asked for improvement
suggestions? I continue to think the comment is quite clear as is, so
if anything needs adjusting, I'd need to know pretty precisely what it
is that needs adding and/or re-writing. I can't, after all, guess what
your misunderstanding resulted from.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 17:13:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 17:13:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485152.752155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5o9-0005xM-Le; Thu, 26 Jan 2023 17:13:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485152.752155; Thu, 26 Jan 2023 17:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5o9-0005xF-Is; Thu, 26 Jan 2023 17:13:05 +0000
Received: by outflank-mailman (input) for mailman id 485152;
 Thu, 26 Jan 2023 17:13:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL5o7-0005x3-VS; Thu, 26 Jan 2023 17:13:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL5o7-0001KE-R4; Thu, 26 Jan 2023 17:13:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pL5o6-00050T-IS; Thu, 26 Jan 2023 17:13:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pL5o6-0002M3-Hi; Thu, 26 Jan 2023 17:13:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1+ogHmssp9YnsPMFTPk+o21fgmbfJ9r1jy+OCIqKqbc=; b=Wfq9oA4xaAE4ZqRp1c9VXTTR76
	5wprWHg0M1S7R5kW3JbtL369OrfhasMOkFzthge3edEomljKgDvtsnhB3j43aIB8w9OkAOKJpQjXY
	ov/OSLL/xxPx+0G+fEADwwPtAkkyxK0n+p3lJUKy10gBQNrHsxWK0EkcWaKK55A0jOXU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176151-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176151: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=1e454c2b5b1172e0fc7457e411ebaba61db8fc87
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Jan 2023 17:13:02 +0000

flight 176151 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176151/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  1e454c2b5b1172e0fc7457e411ebaba61db8fc87

Last test of basis   176146  2023-01-26 10:03:31 Z    0 days
Testing same since   176151  2023-01-26 14:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@cloud.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1e454c2b5b..10b80ee558  10b80ee5588e8928b266dea02a5e99d098bd227a -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 17:25:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 17:25:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485159.752165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5zh-0007e5-Qw; Thu, 26 Jan 2023 17:25:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485159.752165; Thu, 26 Jan 2023 17:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL5zh-0007dw-MV; Thu, 26 Jan 2023 17:25:01 +0000
Received: by outflank-mailman (input) for mailman id 485159;
 Thu, 26 Jan 2023 17:25:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5Fb=5X=tklengyel.com=bounce+e181d6.cd840-xen-devel=lists.xenproject.org@srs-se1.protection.inumbo.net>)
 id 1pL5zf-0007do-TN
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 17:25:00 +0000
Received: from rs227.mailgun.us (rs227.mailgun.us [209.61.151.227])
 by se1-gles-sth1.inumbo.com (Halon) with UTF8SMTPS
 id 5a9123ef-9d9e-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 18:24:57 +0100 (CET)
Received: from mail-yb1-f181.google.com (mail-yb1-f181.google.com
 [209.85.219.181]) by
 8b0d0d3a25e8 with SMTP id 63d2b76782f0767c3ece081d (version=TLS1.3,
 cipher=TLS_AES_128_GCM_SHA256); Thu, 26 Jan 2023 17:24:55 GMT
Received: by mail-yb1-f181.google.com with SMTP id e15so2827103ybn.10
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 09:24:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 5a9123ef-9d9e-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=tklengyel.com;
 q=dns/txt; s=mailo; t=1674753895; x=1674761095; h=Content-Type: Cc: To: To:
 Subject: Subject: Message-ID: Date: From: From: In-Reply-To: References:
 MIME-Version: Sender: Sender;
 bh=oWmW0uMkRjvQW4Jx9crNRfjROinoQ8/2XaMKFAATFnc=;
 b=IJb0E3a012sACQ9mtw29ajMWmEUEVZGZ3FkU0yeFXzbQDq32ylg8qX/MMw+wq/sx0g68YtF2JMVAWANfT78eZHYh0i+yBJPIkyRWGxgR+nlZatoqhvtoEX6Mm1uaslkMcixzKIOluuTUHk9LpWQ8x/T7EDgiRvsRS+OuiJd42G+nZLlkWsE5h3sySRLk2HujitBfL3WpaP7Plfjp9aWp2lShFthJUvrrv14pKQNe5q/ZjwXXPqWHzP/YXBB0yKubdtPFBp2n/JPE+N8+6d5Ioo3ul8Uqzic/NzLQTXiPCNmUPwaSRbWKIKZT/WUDQ+9TBL9vAbnDrK3/Zy6FTzvVmQ==
X-Mailgun-Sending-Ip: 209.61.151.227
X-Mailgun-Sid: WyIyYTNmOCIsInhlbi1kZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZyIsImNkODQwIl0=
Sender: tamas@tklengyel.com
X-Gm-Message-State: AO0yUKUNhsNHp010vQgiNUV2MaKmHrUr/ac8nos9EGVHQkJNTFE3nNYS
	ImsHd38G63eoQ33UQtxboK8qYRXljk2uL3Q2Kqw=
X-Google-Smtp-Source: AK7set8rESRTIvWHaUTBKPZsAsAOp52Y+oMzYBVC/fYc8FMvr27JSWFt+lI0Kaln3nk0sMbr2WVHaoTdxflDxuYaNxg=
X-Received: by 2002:a05:6902:1345:b0:80b:9496:2179 with SMTP id
 g5-20020a056902134500b0080b94962179mr993222ybu.31.1674753895273; Thu, 26 Jan
 2023 09:24:55 -0800 (PST)
MIME-Version: 1.0
References: <33cd2aba-73fc-6dfe-d0f2-f41883e7cdfa@suse.com>
 <dad36e4c-4529-6836-c50e-7c5febb8eea4@suse.com> <CABfawhmTe3Rxwo54gR5-4KGv=K0Ai7o9g6i=1nkb=XdES1CrcQ@mail.gmail.com>
 <a92b9714-5e29-146f-3b68-b44692c56de1@suse.com> <CABfawhkiaheQPJhtG7fupHcbfYPUy+BJgvbVoQ+FJUnev5bowQ@mail.gmail.com>
 <6099e6fb-0a3e-c6da-2766-d61c2c3d1e96@suse.com> <CABfawh=1XUWbeRJJZQsYVLyZX-Ez8=D2YYCgBYvDGQemHeJkzA@mail.gmail.com>
 <cfffcf15-c2fa-6529-d1ff-a71a7571bfe2@suse.com> <CABfawhm_b=MskQN_zZsuKz0FDtZzZNvBMa8bXtxxUZU9rXbUCA@mail.gmail.com>
 <151a70b0-fcb6-9b0f-7834-d2cc15b5d9b0@suse.com>
In-Reply-To: <151a70b0-fcb6-9b0f-7834-d2cc15b5d9b0@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Thu, 26 Jan 2023 12:24:18 -0500
X-Gmail-Original-Message-ID: <CABfawh=Q7b9NVEn7YXZ3Bgu+N1tY4_o-_HqhbVH0tn5GjXVpew@mail.gmail.com>
Message-ID: <CABfawh=Q7b9NVEn7YXZ3Bgu+N1tY4_o-_HqhbVH0tn5GjXVpew@mail.gmail.com>
Subject: Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest areas
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	Roger Pau Monné <roger.pau@citrix.com>
Content-Type: multipart/alternative; boundary="000000000000039b2705f32e07cd"

--000000000000039b2705f32e07cd
Content-Type: text/plain; charset="UTF-8"

On Thu, Jan 26, 2023 at 11:48 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 26.01.2023 16:41, Tamas K Lengyel wrote:
> > On Thu, Jan 26, 2023 at 3:14 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 25.01.2023 16:34, Tamas K Lengyel wrote:
> >>> On Tue, Jan 24, 2023 at 6:19 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>
> >>>> On 23.01.2023 19:32, Tamas K Lengyel wrote:
> >>>>> On Mon, Jan 23, 2023 at 11:24 AM Jan Beulich <jbeulich@suse.com>
> > wrote:
> >>>>>> On 23.01.2023 17:09, Tamas K Lengyel wrote:
> >>>>>>> On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@suse.com>
> > wrote:
> >>>>>>>> --- a/xen/arch/x86/mm/mem_sharing.c
> >>>>>>>> +++ b/xen/arch/x86/mm/mem_sharing.c
> >>>>>>>> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
> >>>>>>>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
> >>>>>>>>  }
> >>>>>>>>
> >>>>>>>> +static int copy_guest_area(struct guest_area *cd_area,
> >>>>>>>> +                           const struct guest_area *d_area,
> >>>>>>>> +                           struct vcpu *cd_vcpu,
> >>>>>>>> +                           const struct domain *d)
> >>>>>>>> +{
> >>>>>>>> +    mfn_t d_mfn, cd_mfn;
> >>>>>>>> +
> >>>>>>>> +    if ( !d_area->pg )
> >>>>>>>> +        return 0;
> >>>>>>>> +
> >>>>>>>> +    d_mfn = page_to_mfn(d_area->pg);
> >>>>>>>> +
> >>>>>>>> +    /* Allocate & map a page for the area if it hasn't been
> > already.
> >>>>> */
> >>>>>>>> +    if ( !cd_area->pg )
> >>>>>>>> +    {
> >>>>>>>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> >>>>>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> >>>>>>>> +        p2m_type_t p2mt;
> >>>>>>>> +        p2m_access_t p2ma;
> >>>>>>>> +        unsigned int offset;
> >>>>>>>> +        int ret;
> >>>>>>>> +
> >>>>>>>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL,
> >>>>> NULL);
> >>>>>>>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> >>>>>>>> +        {
> >>>>>>>> +            struct page_info *pg =
> >>> alloc_domheap_page(cd_vcpu->domain,
> >>>>>>> 0);
> >>>>>>>> +
> >>>>>>>> +            if ( !pg )
> >>>>>>>> +                return -ENOMEM;
> >>>>>>>> +
> >>>>>>>> +            cd_mfn = page_to_mfn(pg);
> >>>>>>>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> >>>>>>>> +
> >>>>>>>> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K,
> >>>>>>> p2m_ram_rw,
> >>>>>>>> +                                 p2m->default_access, -1);
> >>>>>>>> +            if ( ret )
> >>>>>>>> +                return ret;
> >>>>>>>> +        }
> >>>>>>>> +        else if ( p2mt != p2m_ram_rw )
> >>>>>>>> +            return -EBUSY;
> >>>>>>>> +
> >>>>>>>> +        /*
> >>>>>>>> +         * Simply specify the entire range up to the end of the
> >>> page.
> >>>>>>> All the
> >>>>>>>> +         * function uses it for is a check for not crossing page
> >>>>>>> boundaries.
> >>>>>>>> +         */
> >>>>>>>> +        offset = PAGE_OFFSET(d_area->map);
> >>>>>>>> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
> >>>>>>>> +                             PAGE_SIZE - offset, cd_area, NULL);
> >>>>>>>> +        if ( ret )
> >>>>>>>> +            return ret;
> >>>>>>>> +    }
> >>>>>>>> +    else
> >>>>>>>> +        cd_mfn = page_to_mfn(cd_area->pg);
> >>>>>>>
> >>>>>>> Everything to this point seems to be non mem-sharing/forking
> > related.
> >>>>> Could
> >>>>>>> these live somewhere else? There must be some other place where
> >>>>> allocating
> >>>>>>> these areas happens already for non-fork VMs so it would make sense
> > to
> >>>>> just
> >>>>>>> refactor that code to be callable from here.
> >>>>>>
> >>>>>> It is the "copy" aspect which makes this mem-sharing (or really fork)
> >>>>>> specific. Plus in the end this is no different from what you have
> >>>>>> there right now for copying the vCPU info area. In the final patch
> >>>>>> that other code gets removed by re-using the code here.
> >>>>>
> >>>>> Yes, the copy part is fork-specific. Arguably if there was a way to do
> >>> the
> >>>>> allocation of the page for vcpu_info I would prefer that being
> >>> elsewhere,
> >>>>> but while the only requirement is allocate-page and copy from parent
> >>> I'm OK
> >>>>> with that logic being in here because it's really straightforward.
> > But
> >>> now
> >>>>> you also do extra sanity checks here which are harder to comprehend in
> >>> this
> >>>>> context alone.
> >>>>
> >>>> What sanity checks are you talking about (also below, where you claim
> >>>> map_guest_area() would be used only to sanity check)?
> >>>
> >>> Did I misread your comment above "All the function uses it for is a
> > check
> >>> for not crossing page boundaries"? That sounds to me like a simple
> > sanity
> >>> check, unclear why it matters though and why only for forks.
> >>
> >> The comment is about the function's use of the range it is being passed.
> >> It doesn't say in any way that the function is doing only sanity checking.
> >> If the comment wording is ambiguous or unclear, I'm happy to take
> >> improvement suggestions.
> >
> > Yes, please do, it definitely was confusing while reviewing the patch.
>
> I'm sorry, but what does "please do" mean when I asked for improvement
> suggestions? I continue to think the comment is quite clear as is, so
> if anything needs adjusting, I'd need to know pretty precisely what it
> is that needs adding and/or re-writing. I can't, after all, guess what
> your misunderstanding resulted from.

I meant please do add the extra information you just spelled out in your
previous email. "Map the page into the guest. We pass the entire range to
map_guest_area so that it can check that no cross-page mapping is taking
place (in which case it will fail). If no such issue is present the page is
being mapped and made accessible to the guest."

If that's not what the function is doing, please explain clearly what it
will do.

Tamas
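[Archive editor's note: the disputed comment concerns map_guest_area() using
its size argument only to verify that the area does not cross a page
boundary. A minimal, hypothetical sketch of such a check follows; the
crosses_page_boundary() helper and the PAGE_* macros here are illustrative
stand-ins, not Xen's actual implementation.]

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)

/* Offset of a guest address within its page. */
#define PAGE_OFFSET(addr) ((uint64_t)(addr) & (PAGE_SIZE - 1))

/*
 * Hypothetical stand-in for the boundary check discussed above: an
 * area of 'size' bytes starting at guest address 'gaddr' is rejected
 * if it would spill into the next page.
 */
static bool crosses_page_boundary(uint64_t gaddr, uint64_t size)
{
    return PAGE_OFFSET(gaddr) + size > PAGE_SIZE;
}
```

With the patch's choice of arguments, gaddr = gfn_to_gaddr(gfn) + offset and
size = PAGE_SIZE - offset, the sum PAGE_OFFSET(gaddr) + size equals exactly
PAGE_SIZE, so the check always passes; the size is not otherwise used to
determine what gets mapped.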

r>&gt; &gt;&gt;&gt;&gt;&gt;&gt;<br>&gt; &gt;&gt;&gt;&gt;&gt;&gt; It is the =
&quot;copy&quot; aspect with makes this mem-sharing (or really fork)<br>&gt=
; &gt;&gt;&gt;&gt;&gt;&gt; specific. Plus in the end this is no different f=
rom what you have<br>&gt; &gt;&gt;&gt;&gt;&gt;&gt; there right now for copy=
ing the vCPU info area. In the final patch<br>&gt; &gt;&gt;&gt;&gt;&gt;&gt;=
 that other code gets removed by re-using the code here.<br>&gt; &gt;&gt;&g=
t;&gt;&gt;<br>&gt; &gt;&gt;&gt;&gt;&gt; Yes, the copy part is fork-specific=
. Arguably if there was a way to do<br>&gt; &gt;&gt;&gt; the<br>&gt; &gt;&g=
t;&gt;&gt;&gt; allocation of the page for vcpu_info I would prefer that bei=
ng<br>&gt; &gt;&gt;&gt; elsewhere,<br>&gt; &gt;&gt;&gt;&gt;&gt; but while t=
he only requirement is allocate-page and copy from parent<br>&gt; &gt;&gt;&=
gt; I&#39;m OK<br>&gt; &gt;&gt;&gt;&gt;&gt; with that logic being in here b=
ecause it&#39;s really straight forward.<br>&gt; &gt; But<br>&gt; &gt;&gt;&=
gt; now<br>&gt; &gt;&gt;&gt;&gt;&gt; you also do extra sanity checks here w=
hich are harder to comprehend in<br>&gt; &gt;&gt;&gt; this<br>&gt; &gt;&gt;=
&gt;&gt;&gt; context alone.<br>&gt; &gt;&gt;&gt;&gt;<br>&gt; &gt;&gt;&gt;&g=
t; What sanity checks are you talking about (also below, where you claim<br=
>&gt; &gt;&gt;&gt;&gt; map_guest_area() would be used only to sanity check)=
?<br>&gt; &gt;&gt;&gt;<br>&gt; &gt;&gt;&gt; Did I misread your comment abov=
e &quot;All the function uses it for is a<br>&gt; &gt; check<br>&gt; &gt;&g=
t;&gt; for not crossing page boundaries&quot;? That sounds to me like a sim=
ple<br>&gt; &gt; sanity<br>&gt; &gt;&gt;&gt; check, unclear why it matters =
though and why only for forks.<br>&gt; &gt;&gt;<br>&gt; &gt;&gt; The commen=
t is about the function&#39;s use of the range it is being passed.<br>&gt; =
&gt;&gt; It doesn&#39;t say in any way that the function is doing only sani=
ty checking.<br>&gt; &gt;&gt; If the comment wording is ambiguous or unclea=
r, I&#39;m happy to take<br>&gt; &gt;&gt; improvement suggestions.<br>&gt; =
&gt;<br>&gt; &gt; Yes, please do, it definitely was confusing while reviewi=
ng the patch.<br>&gt;<br>&gt; I&#39;m sorry, but what does &quot;please do&=
quot; mean when I asked for improvement<br>&gt; suggestions? I continue to =
think the comment is quite clear as is, so<br>&gt; if anything needs adjust=
ing, I&#39;d need to know pretty precisely what it<br>&gt; is that needs ad=
ding and/or re-writing. I can&#39;t, after all, guess what<br>&gt; your mis=
understanding resulted from.<br><div><br></div><div>I meant please do add t=
he extra information you just spelled out in your previous email. &quot;Map=
 the page into the guest. We pass the entire range to=20
map_guest_area so that it can check that no cross-page mapping is taking pl=
ace (in which case it will fail). If no such issue is present the page is b=
eing mapped and made accessible to the guest.&quot;<br></div><div><br></div=
><div>If that&#39;s not what the function is doing, please explain clearly =
what it will do.<br></div><div><br></div><div>Tamas<br></div></div>

--000000000000039b2705f32e07cd--


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 17:31:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 17:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485150.752175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL665-0000t4-LP; Thu, 26 Jan 2023 17:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485150.752175; Thu, 26 Jan 2023 17:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL665-0000sx-HP; Thu, 26 Jan 2023 17:31:37 +0000
Received: by outflank-mailman (input) for mailman id 485150;
 Thu, 26 Jan 2023 17:07:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KraK=5X=collabora.com=sebastian.reichel@srs-se1.protection.inumbo.net>)
 id 1pL5iz-00055K-Gf
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 17:07:45 +0000
Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f1989b4a-9d9b-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 18:07:42 +0100 (CET)
Received: from mercury (dyndsl-037-138-191-219.ewe-ip-backbone.de
 [37.138.191.219])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: sre)
 by madras.collabora.co.uk (Postfix) with ESMTPSA id 7FE346602E7E;
 Thu, 26 Jan 2023 17:07:41 +0000 (GMT)
Received: by mercury (Postfix, from userid 1000)
 id 8DAD710609C7; Thu, 26 Jan 2023 18:07:39 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1989b4a-9d9b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com;
	s=mail; t=1674752861;
	bh=TJwB9xWefUtenAJhobcnuA3/dr44yh/1qZH05naV3dA=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=HIAeawkSMl4wLK9atphFO2PYIuQtn0mXWrQTGJkp27Vvwen4RmNJXIKy0KOcMN2qu
	 MGji/BoqRh//5quIKkg+7Jg9a4rlT6zYnOxKnHRve8IRt32uZ0O2J9C6ByA7KSG9ni
	 ooI+f+xDzdj/4Xv/XqAW1ZMOqMbxlnaE69Ns4AaVsCb09DHvVD+xjXrJ7HkcJ65ZDs
	 0FKkR9qOOchTbFQKw+Lbe8GJsd/K96q/WWSPWtyPUY4G0yRJ/G34tsJUBI/vL5q95x
	 a6dIJ/I/0Glk/75yhzfiBv6PLPN/HUTuWOcS4vpdAGRdicde5HTb77Vj3shLRE2rZN
	 snAuQFm3yL5CQ==
Date: Thu, 26 Jan 2023 18:07:39 +0100
From: Sebastian Reichel <sebastian.reichel@collabora.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
	mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, peterz@infradead.org,
	ldufour@linux.ibm.com, paulmck@kernel.org, luto@kernel.org,
	songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
	dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
	kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	chenhuacai@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com,
	quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org,
	tomba@kernel.org, hjc@rock-chips.com, heiko@sntech.de,
	ray.huang@amd.com, kraxel@redhat.com, mcoquelin.stm32@gmail.com,
	alexandre.torgue@foss.st.com, tfiga@chromium.org,
	m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org,
	jejb@linux.ibm.com, martin.petersen@oracle.com,
	dgilbert@interlog.com, hdegoede@redhat.com, mst@redhat.com,
	jasowang@redhat.com, alex.williamson@redhat.com, deller@gmx.de,
	jayalk@intworks.biz, viro@zeniv.linux.org.uk, nico@fluxnic.net,
	xiang@kernel.org, chao@kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, miklos@szeredi.hu,
	mike.kravetz@oracle.com, muchun.song@linux.dev, bhe@redhat.com,
	andrii@kernel.org, yoshfuji@linux-ipv6.org, dsahern@kernel.org,
	kuba@kernel.org, pabeni@redhat.com, perex@perex.cz, tiwai@suse.com,
	haojian.zhuang@gmail.com, robert.jarzmik@free.fr,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-graphics-maintainer@vmware.com,
	linux-ia64@vger.kernel.org, linux-arch@vger.kernel.org,
	loongarch@lists.linux.dev, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	linux-um@lists.infradead.org, linux-acpi@vger.kernel.org,
	linux-crypto@vger.kernel.org, nvdimm@lists.linux.dev,
	dmaengine@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	linux-samsung-soc@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	linux-mediatek@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
	linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-accelerators@lists.ozlabs.org, sparclinux@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-staging@lists.linux.dev,
	target-devel@vger.kernel.org, linux-usb@vger.kernel.org,
	netdev@vger.kernel.org, linux-fbdev@vger.kernel.org,
	linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
	devel@lists.orangefs.org, kexec@lists.infradead.org,
	linux-xfs@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, kasan-dev@googlegroups.com,
	selinux@vger.kernel.org, alsa-devel@alsa-project.org,
	kernel-team@android.com
Subject: Re: [PATCH v2 3/6] mm: replace vma->vm_flags direct modifications
 with modifier calls
Message-ID: <20230126170739.mlka2jivn3mfstyf@mercury.elektranox.org>
References: <20230125083851.27759-1-surenb@google.com>
 <20230125083851.27759-4-surenb@google.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="qcyccrleajamxo75"
Content-Disposition: inline
In-Reply-To: <20230125083851.27759-4-surenb@google.com>


--qcyccrleajamxo75
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Hi,

On Wed, Jan 25, 2023 at 12:38:48AM -0800, Suren Baghdasaryan wrote:
> Replace direct modifications to vma->vm_flags with calls to modifier
> functions to be able to track flag changes and to keep vma locking
> correctness.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
> [...]
>  drivers/hsi/clients/cmt_speech.c                   |  2 +-
>  120 files changed, 188 insertions(+), 199 deletions(-)
> [...]
> diff --git a/drivers/hsi/clients/cmt_speech.c b/drivers/hsi/clients/cmt_speech.c
> index 8069f795c864..952a31e742a1 100644
> --- a/drivers/hsi/clients/cmt_speech.c
> +++ b/drivers/hsi/clients/cmt_speech.c
> @@ -1264,7 +1264,7 @@ static int cs_char_mmap(struct file *file, struct vm_area_struct *vma)
>  	if (vma_pages(vma) != 1)
>  		return -EINVAL;
>
> -	vma->vm_flags |= VM_IO | VM_DONTDUMP | VM_DONTEXPAND;
> +	set_vm_flags(vma, VM_IO | VM_DONTDUMP | VM_DONTEXPAND);
>  	vma->vm_ops = &cs_char_vm_ops;
>  	vma->vm_private_data = file->private_data;
>

Acked-by: Sebastian Reichel <sebastian.reichel@collabora.com>

-- Sebastian

--qcyccrleajamxo75
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE72YNB0Y/i3JqeVQT2O7X88g7+poFAmPSs1EACgkQ2O7X88g7
+pquLBAAkw9lw9lxNRCI6jvqLy98JsUBgSQigNB6Eh8JVWsySHMm1OszFCcvTpoc
vinC/VPMOa6JwEw5e9naXRF2UJahO+Cx+e5MYIKos3QyIUPfi0YM7Cv96h6+c4l/
NdcxLS8+9ElitTuA47UVgPSeZwzdZ1kU5VUV1X2fx+6aGA+dBfWVBgWDqU6AB0Sa
ehU4betso5Ypl26YEmLPHmY+8Xx2jXNwwBEgsHgO2/YjRn9YPDeMAqb4lWs99h0d
nUV1VqwTClRrExtNDvidHryknmyCIBpYt38gn0i9+uIf9mFoBmUDN+/zAdRguGBT
r1CQAwvRvHmEyGJ4dp1nijyt/PWxDBlCWytlmzXrK/rkeH8sQCRdCr9L83/d5DM0
iU98ehmbH9kx8rD4y0L91xmsnegNYNKSfAvz3EP4KYFOHjTw2SOCYoazPu3z62bN
d3HL+08LeZpm1XwVPydZqBd5UpBK8NaQYCJ3BjsLUefsSJE+SWzsnoYFnbUrL1X9
1XfU6LGtVvjCPUsjk7oqh5PjtRGQsdtUhSZJLwNzTeh4I0nSzL1pj8vRFZ7UTcV4
RmFYsjBbKhja2fC13eM4tKzfx53harnHVNuUPw2aoLKshpkQaOTUqWBnRXtbJZkb
dSRKObxfPlHVI+awnfN6owpXF86Owew2+XJcXILOPxaBk8PI/Ns=
=/0TB
-----END PGP SIGNATURE-----

--qcyccrleajamxo75--


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 17:49:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 17:49:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485170.752185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL6NM-0002qG-3C; Thu, 26 Jan 2023 17:49:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485170.752185; Thu, 26 Jan 2023 17:49:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL6NM-0002q9-0A; Thu, 26 Jan 2023 17:49:28 +0000
Received: by outflank-mailman (input) for mailman id 485170;
 Thu, 26 Jan 2023 17:49:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xamf=5X=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1pL6NK-0002q3-LP
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 17:49:26 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c575218e-9da1-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 18:49:25 +0100 (CET)
Received: by mail-wm1-x32f.google.com with SMTP id q8so1662603wmo.5
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 09:49:24 -0800 (PST)
Received: from [192.168.16.153] (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id
 iz17-20020a05600c555100b003db07420d14sm2065074wmb.39.2023.01.26.09.49.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 26 Jan 2023 09:49:23 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c575218e-9da1-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:in-reply-to:organization:content-language
         :references:to:subject:reply-to:user-agent:mime-version:date
         :message-id:from:from:to:cc:subject:date:message-id:reply-to;
        bh=OK8D8R+tAQUoFtCWzoU1n8SQdUbAKXNoKJvtArjQYWo=;
        b=fsrSpnrqMY978vETifGGB5jJhFgGa9z1kwcWK6T2nOCu+2w+3q003EfrS2S3nto/9O
         lxe3tYcU8G4mfiFeNDwyeOtZ6IpN6OGNe5mvX2oCYaolr2h2fKwfDb+bxz95DcEGZcgt
         baW9SuyDs+IEAs4Fczqco/bSDXCuXJ9zw8fBxPmw1fsMHyYS/OnDg/d4M0REeUbUPJvJ
         RQv6LwyWhdbAqU2ztQFsLdrlFMGtDLdvlO64WYfr1H45JdhGSnTHa0NWmRYX1GMcLOq5
         lgctRAthj0M2xFVSiOFnZZWIfgJNNw6o+vkIeVJ+9yWcTvOSRYMwMeFWYUlgDljKCXEG
         ExkA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:organization:content-language
         :references:to:subject:reply-to:user-agent:mime-version:date
         :message-id:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=OK8D8R+tAQUoFtCWzoU1n8SQdUbAKXNoKJvtArjQYWo=;
        b=tYpBlRY33xSJenDPb3hlqVO868Xecr7i7h0mMR0sPv4esnzAfEZE4bDq2wka3SvRNA
         tXAPRfavCTcFUenLb8Xe7Bnxi4Vp3wp7zGVzb6qmqCjd7/ew3Ne90m23dr/GyiebSWIw
         rRJ6cjkDpf6Z/ct+XyaqGYzRoZLYJvuEKjR8uZaV6w2O405ytmBzcG/Rua2oS+FTHJmi
         AA0BRHQ0UG0GpMtDdAkYzOHFPLWHQ7NshrIIxIRV7CkZlIQnZZsHjw6hyINyr0jQ7cgc
         5Z22hpgL6Y69IJEeWmyynbjF/HDUuji++f7I8pUi62EFHhD1dXd0qb14IE6KuBtKGrQk
         Wvgw==
X-Gm-Message-State: AFqh2kpD0EDjORh43OdWZ49kouQ9Z2bwyFH5NeCYH9lIhQv2lc3oaG8a
	ir5IEH8OOoxsToJA0z3tXKU=
X-Google-Smtp-Source: AMrXdXs5F4OCgvN3uEJ60IUaqKqHzXV14E3jUKorb3s+0tE+r6nmGLv0K7Ivj0ZpjpxRkHHOTPAUPQ==
X-Received: by 2002:a05:600c:3ac8:b0:3da:270b:ba6b with SMTP id d8-20020a05600c3ac800b003da270bba6bmr37704210wms.41.1674755364280;
        Thu, 26 Jan 2023 09:49:24 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <46ea1f4e-2061-d5f8-2b44-5d24a3a3e7ca@xen.org>
Date: Thu, 26 Jan 2023 17:49:22 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Reply-To: paul@xen.org
Subject: Re: [SeaBIOS PATCH] xen: require Xen info structure at 0x1000 to
 detect Xen
To: David Woodhouse <dwmw2@infradead.org>, seabios <seabios@seabios.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 qemu-devel <qemu-devel@nongnu.org>
References: <feef99dd2e1a5dce004d22baf07d716d6ea1344c.camel@infradead.org>
Content-Language: en-US
Organization: Xen Project
In-Reply-To: <feef99dd2e1a5dce004d22baf07d716d6ea1344c.camel@infradead.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20/01/2023 11:33, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> When running under Xen, hvmloader places a table at 0x1000 with the e820
> information and BIOS tables. If this isn't present, SeaBIOS will
> currently panic.
> 
> We now have support for running Xen guests natively in QEMU/KVM, which
> boots SeaBIOS directly instead of via hvmloader, and does not provide
> the same structure.
> 
> As it happens, this doesn't matter on first boot, because although we
> set PlatformRunningOn to PF_QEMU|PF_XEN, reading it back again still
> gives zero. Presumably because in true Xen, this is all already RAM. But
> in QEMU with a faithfully-emulated PAM config in the host bridge, it's
> still in ROM mode at this point so we don't see what we've just written.
> 
> On reboot, however, the region *is* set to RAM mode and we do see the
> updated value of PlatformRunningOn, do manage to remember that we've
> detected Xen in CPUID, and hit the panic.
> 
> It's not trivial to detect QEMU vs. real Xen at the time xen_preinit()
> runs, because it's so early. We can't even make a XENVER_extraversion
> hypercall to look for hints, because we haven't set up the hypercall
> page (and don't have an allocator to give us a page in which to do so).
> 
> So just make Xen detection contingent on the info structure being
> present. If it wasn't, we were going to panic anyway. That leaves us
> taking the standard QEMU init path for Xen guests in native QEMU,
> which is just fine.
> 
> Untested on actual Xen but ObviouslyCorrect™.

Works for me...

Tested-by: Paul Durrant <paul@xen.org>

> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   src/fw/xen.c | 45 ++++++++++++++++++++++++++++++++-------------
>   1 file changed, 32 insertions(+), 13 deletions(-)
> 

Also...

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Thu Jan 26 17:55:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 17:55:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485175.752195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL6Sk-0004Ha-NE; Thu, 26 Jan 2023 17:55:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485175.752195; Thu, 26 Jan 2023 17:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL6Sk-0004HT-KS; Thu, 26 Jan 2023 17:55:02 +0000
Received: by outflank-mailman (input) for mailman id 485175;
 Thu, 26 Jan 2023 17:55:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZWf=5X=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pL6Si-0004HL-T2
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 17:55:01 +0000
Received: from cross.elm.relay.mailchannels.net
 (cross.elm.relay.mailchannels.net [23.83.212.46])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8a7f377c-9da2-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 18:54:56 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id EF67C762218
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 17:54:54 +0000 (UTC)
Received: from pdx1-sub0-mail-a305.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id 8AADF762051
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 17:54:54 +0000 (UTC)
Received: from pdx1-sub0-mail-a305.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.120.227.169 (trex/6.7.1); Thu, 26 Jan 2023 17:54:54 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a305.dreamhost.com (Postfix) with ESMTPSA id 4P2pJF4RXCzSG
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 09:54:53 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e00cd by kmjvbox (DragonFly Mail Agent v0.12);
 Thu, 26 Jan 2023 09:54:51 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a7f377c-9da2-11ed-b8d1-410ff93cb8f0
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1674755694; a=rsa-sha256;
	cv=none;
	b=07rfH9T6IVlT2nXEgcpxUcr7SdvG8EQ5l7bhIi2VZhUCTEP/F8FK+HO7Rq09JATZuigxxG
	xoSftRnZ7wVHkUwlBmAYx0ooScCJBFTDbCFtHESzPwYHxrqqcfn2Xbk5oo++TkWjcyIE1K
	/KrWHgxoQ5cwqQiXVnQCUsfYZoCp8WGjhp24CavKyIiRvAQtMhbhRYX/7IMXJJTHOIPPJ3
	uXUhnCO+OE8dCiw6JN1p/8T+LjBpxxEjIH8DP5oSruLXta9E5B2pmgph33hIedpEGEB2tY
	btrwi5URFxYevq49mcJZNkpQeP0k33Mlcg4SYFBQRbi9zyemNrTdygfF7vu9eA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1674755694;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:dkim-signature;
	bh=jxmpzMRS8uv3Rx43UOYjHE4GiOK6D+gngqmat/OvKTQ=;
	b=AQfgmF5sW+ntihOWQ6Vke+AXGJrE91B+aBonYWQblB5AUrfZY8L8akUZ0Pw+lJCmdfRBTJ
	p8vga3lXnOGzwGw03UkzEPNrVLJIOp+O4WkKWf+qTxK+JrXGKVddfHIZqanvkbUFwmea2K
	HVMcPW7PUd/oH/je4vf/3fJGRoSvolRPrtR4KacAzoWHgge0+Sf3MesLtelyf9u7OF6ncx
	zmtspb9JliiaAgNI8ANi1OQOa3HB0cQN4cbkPAixnIYsIpsDaWN1QqZQoakA2pDLjSMrEB
	l+FZ4PE+thOL6lFkn9CAAVy2a1lhF0pIu67cYQwVM8NRSERkPEA9ujljTL7jJw==
ARC-Authentication-Results: i=1;
	rspamd-7b784c8cc8-jzzsr;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Ruddy-Ski: 1455bc5660740c05_1674755694805_2390760637
X-MC-Loop-Signature: 1674755694805:1234692241
X-MC-Ingress-Time: 1674755694805
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1674755693;
	bh=jxmpzMRS8uv3Rx43UOYjHE4GiOK6D+gngqmat/OvKTQ=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=A/gr4s6WbWSqnhtz9OVBbLJeoRkHj9sZom+OSFHjQGKW4pAb9y6xSnjhGQKahtLh9
	 jeW1jUeTiKtfb0B9I+zmwGu1eJuTZJUsMcholeKxYE8osojqKClUCB8LTY4wtzpUXS
	 2+IPXzITtOb/7JMRIlgf1GZldC6kQeQSv+OLwUgQ=
Date: Thu, 26 Jan 2023 09:54:51 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230126175451.GA1959@templeofstupid.com>
References: <20230125184506.GE1963@templeofstupid.com>
 <8a07d6f8-a07a-d435-deef-1366fad29a11@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8a07d6f8-a07a-d435-deef-1366fad29a11@suse.com>

On Thu, Jan 26, 2023 at 09:57:43AM +0100, Jan Beulich wrote:
> On 25.01.2023 19:45, Krister Johansen wrote:
> > v2:
> >   - Fix whitespace between comment and #defines (feedback from Jan Beulich)
> 
> Hmm, ...
> 
> > --- a/xen/include/public/arch-x86/cpuid.h
> > +++ b/xen/include/public/arch-x86/cpuid.h
> > @@ -72,6 +72,14 @@
> >   * Sub-leaf 2: EAX: host tsc frequency in kHz
> >   */
> >  
> > +#define XEN_CPUID_TSC_EMULATED               (1u << 0)
> > +#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
> > +#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
> > +#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
> > +#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
> > +#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
> > +#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)
> 
> ... while I'm fine with the leading blank line, what my earlier comment was
> about really are the two separate blocks of #define-s (the flag bits and the
> modes). I'll take care of this while committing; with the adjustment
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Sorry, I misunderstood, and thanks for being willing to fix this up
while committing.

-K


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 18:03:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 18:03:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485181.752205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL6aN-0005yh-Gw; Thu, 26 Jan 2023 18:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485181.752205; Thu, 26 Jan 2023 18:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL6aN-0005ya-EF; Thu, 26 Jan 2023 18:02:55 +0000
Received: by outflank-mailman (input) for mailman id 485181;
 Thu, 26 Jan 2023 18:02:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZWf=5X=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pL6aM-0005yU-Rg
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 18:02:54 +0000
Received: from cross.elm.relay.mailchannels.net
 (cross.elm.relay.mailchannels.net [23.83.212.46])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a60c1c6b-9da3-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 19:02:52 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id 427C27E2460
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 18:02:50 +0000 (UTC)
Received: from pdx1-sub0-mail-a305.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id C3ABB7E2513
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 18:02:49 +0000 (UTC)
Received: from pdx1-sub0-mail-a305.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.99.229.34 (trex/6.7.1); Thu, 26 Jan 2023 18:02:50 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a305.dreamhost.com (Postfix) with ESMTPSA id 4P2pTM0R0bzT5
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 10:02:46 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e00cd by kmjvbox (DragonFly Mail Agent v0.12);
 Thu, 26 Jan 2023 10:02:44 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a60c1c6b-9da3-11ed-a5d9-ddcf98b90cbd
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1674756169; a=rsa-sha256;
	cv=none;
	b=m+F75p7unMf/DGAEQAxbnR+zXDdWAgGhYW8HjvdjZ0Hod0DyabuhDUnsAzIWJqYgjIdznl
	IR5KCTZMj11Cju+QtbNGJzKmXZz7lC5HoUXaGuALdRG2FUxsnBkconashgGRvj10xZ5CI7
	4rzLSVNL2dnBBOQCzbSw2juI6/vKuWBqlb46BdKdTvNrtnmubwnZnT3unpmfj8CR2OF4Vh
	CZW1C/rumpE84+lTMPuC1NFqRetZYl9MchAWzwOdYLnGUb+M2f/YgFy6ctHcT/KuvDcpYi
	Z7faWG4MdbAtiz9x9iu/Tur2mH1+0C7Y/N+f0UCx9zyxsFxEeZiFYFoSAJSPAg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1674756169;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:dkim-signature;
	bh=0h75USw75PEGBO20cN1/ibS8ETsuoLrcwhoitcGBX6c=;
	b=BHR7BiDKdJ1SntEX1cBO0PfaNNxwDHLp+30G7VrOakbdq/KV69mJKur9Gu06BYj54kvVni
	wVGm9sfUbr41MeaZQ2COEr5AtUziL00m/nTWAP+4/HyTauMIBMDyYcG/OgwdDVjHfNL01O
	gSoQ6Is1W7kOfy5VwNXsu80RTzZ6DoQV0BYzgNSGHJJu+tlg3HxA4SPZytS7+fNicsOZ1D
	JnF9J5Osq4we+4MPT8RpA9sQAaB6v0M13/iAbGSxofDbUhD4wcpS0uWe1mKE4EBkff7vLu
	PNNmVUvpVfO5VeKFgSmIADxRMLMMpR4l+2Fut6WXhV06FsYp1QT0PdAs9LFcKQ==
ARC-Authentication-Results: i=1;
	rspamd-9476994bd-kbl4q;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Interest-Madly: 553898051b093dac_1674756170081_675623390
X-MC-Loop-Signature: 1674756170081:1438025192
X-MC-Ingress-Time: 1674756170081
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1674756167;
	bh=0h75USw75PEGBO20cN1/ibS8ETsuoLrcwhoitcGBX6c=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=llWecIXKg3DgeOW4ZnIcOdY2rwmIKWHjYEN9bbbv5l7NZVOs5scTZeOoDs3TeKaui
	 Ih1XnGvI5NqGYj3655ruFXf3VMGvGMbP1gbPqr3DUgws+vxEXnIFCWTV8qUUaogOwu
	 8rZDmzuTHwcr2wd/2gHO5RpG7CSTQpTElmCcDWnI=
Date: Thu, 26 Jan 2023 10:02:44 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Krister Johansen <kjlx@templeofstupid.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230126180244.GB1959@templeofstupid.com>
References: <20230125184506.GE1963@templeofstupid.com>
 <77576aab-93bf-5f6a-9b04-17eaf1d84ffb@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <77576aab-93bf-5f6a-9b04-17eaf1d84ffb@suse.com>

On Thu, Jan 26, 2023 at 10:57:01AM +0100, Jan Beulich wrote:
> On 25.01.2023 19:45, Krister Johansen wrote:
> > --- a/xen/include/public/arch-x86/cpuid.h
> > +++ b/xen/include/public/arch-x86/cpuid.h
> > @@ -72,6 +72,14 @@
> >   * Sub-leaf 2: EAX: host tsc frequency in kHz
> >   */
> >  
> > +#define XEN_CPUID_TSC_EMULATED               (1u << 0)
> > +#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
> > +#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
> > +#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
> > +#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
> > +#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
> > +#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)
> 
> Actually I think we'd better stick to the names found in asm/time.h
> (and then replace their uses, dropping the #define-s there). If you
> agree, I'd be happy to make the adjustment while committing.

Just to confirm, this would be moving these:

   #define TSC_MODE_DEFAULT          0
   #define TSC_MODE_ALWAYS_EMULATE   1
   #define TSC_MODE_NEVER_EMULATE    2
   
To cpuid.h?  I'm generally fine with this.  I don't see anything in
Linux that's using these names.  The only question I have is whether
we'd still want to prefix the names with XEN so that if they're pulled
into Linux it's clear that the define is Xen-specific?  E.g. something
like this perhaps?

   #define XEN_TSC_MODE_DEFAULT          0
   #define XEN_TSC_MODE_ALWAYS_EMULATE   1
   #define XEN_TSC_MODE_NEVER_EMULATE    2

That does increase the number of files we'd need to touch to make the
change, though. (And the other defines in that file all start with
XEN_CPUID).

Though, if you mean doing it this way:

   #define XEN_CPUID_TSC_MODE_DEFAULT          0
   #define XEN_CPUID_TSC_MODE_ALWAYS_EMULATE   1
   #define XEN_CPUID_TSC_MODE_NEVER_EMULATE    2
 
then no objection to that at all.  Apologies for overlooking the naming
overlap when I put this together the first time.

-K


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 18:30:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 18:30:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485189.752221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL70y-0001Qj-NV; Thu, 26 Jan 2023 18:30:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485189.752221; Thu, 26 Jan 2023 18:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL70y-0001Qc-KI; Thu, 26 Jan 2023 18:30:24 +0000
Received: by outflank-mailman (input) for mailman id 485189;
 Thu, 26 Jan 2023 18:30:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6/FI=5X=yahoo.com=hack3rcon@srs-se1.protection.inumbo.net>)
 id 1pL70x-0001QU-4I
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 18:30:23 +0000
Received: from sonic306-2.consmr.mail.bf2.yahoo.com
 (sonic306-2.consmr.mail.bf2.yahoo.com [74.6.132.41])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c38155a-9da7-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 19:30:20 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic306.consmr.mail.bf2.yahoo.com with HTTP; Thu, 26 Jan 2023 18:30:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c38155a-9da7-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674757818; bh=xGcwb2Lir9gHeBMb8n1Gla1tLvlag64G3DJCpjlw9w0=; h=Date:From:Reply-To:To:Subject:References:From:Subject:Reply-To; b=bDDVHk9XfOmYrva6H68sniGnBua3IoY3V6oN/HMadHDFn66Pq94tu25e7aMquqekqBW4+H55VmWFPNjqcEcHkzt53S7AfVKgqStzwD5LU1LwBu4AuOljK7P7dlJRgb2y6mfKIU791ZO30UUVZbYzRLGkLyDO7W9yB0zEnzHCO12196P7sjIw97l8NdEOGIB9aDOZ6WKC/XRTzAO3b3z/BZ3/6eyAVsECpGckkpWa4oLsEzVJ4JiEFJPHeZtvxVT5xB8p0jpeczYKLmkGK3dDEUhmFHrJEEBULtJCWnUuMS1NHyVlfLYoNI1w/XW0SDBxQ+np9lxDy5qP4flz5RPMAA==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1674757818; bh=sNeXBtdZp2cbMl7i8rfeKa+JLG2d67gzMFaCZF5pBdg=; h=X-Sonic-MF:Date:From:To:Subject:From:Subject; b=twJND2pGV1fo1WTO9gQKtSNKDP/9TghVpM7UGaTCPsTTif9OSYaRlhrM4BLRF5KpJpeFlTEC/EcDQe5N6s2qiI+PBl3EcapShPWO5XH12J8CEUlYXIgid27horq3w992qAp+l7hAeTW2OcoEPgYlBPAfzCTj0eQq34yachzMTGkrhrq1rWe0+tatJ0aTjz0uAZLbwz0R4eqqUKvT3M9bq10mFPVSCIpUHe2Knm5FXFYzNy9vrW1RyZr8wsiAl8j2YKg0xKg+BROC7tOn09Gsn1eHw86DKcyBAj5sG2/FpyJYjbFMrBvvBHdsykrWWVW52oNBtrX+c23H3OOhfxQ4IA==
X-YMail-OSG: 2zuozUkVM1kVEl9tFD7uBA7wo7Hu1XXSt3lG433MVo95xQ6moEivLXfUVVC2BYu
 zLpsGtxPsou.ilHSSTkGd69Cv3Fl_mMqugsuIARViFj5W3OJbMapJGBTSdUukUxR3Elak2Cl7Pf1
 bZjTN2TBaNzyXzj4ed6UFrO89b5hdUmPg7EP4rEUY2ap02Wr9iOiLT2388w_T81j9u9l2tMpz0VA
 Rh_IDdqFE0jbqMhp_yL7a6mftJX2ysP_f5dFSKL95LRuZOqzhgglSr7ZrGa2VuhQB.aqcF0I4jZN
 giaDrYGyDkCv7.gNBK1Jmxkh_L7xe_ON726Vz8TNL50.ZfkEihWOZwRhEcdUaIspHb0b_VC.R1kG
 MrRJP8Qpn6_iph8vvY_zBn0Da9aJjDbfw3kNyFqWDZZcBkiWEQEvRoShdQTnLmzaxJcHjXmiJpFp
 VpyzN6PfCddFD0guhKPjbZVIMKjFQTsf5kIB3gF6HKf.z63xQZJFWZDgaY8YCZu_rjG9_nxviXle
 bZw7fozKsU9KR1WWV8uh1COWYsPIlRoTZqgGYvRcdvcazsftxBfTqhkrI13B60g_1HBRZVo0L7pL
 lY49y268TJzkQCc4InBN5UyczgpebrmSw_BqoJi3qJ71KUkSybTHAa9haycNjHwZS9.cE5pV6Ugy
 hBy8kxPJ4dcBW_Qu8N.aME5sVG5rvyMmXAIpArzltEX2IJ9XzuXt3jEKQOoWkLD8rjJx6hUhpx4a
 FzYtE5cWiMG_5i4ybJoItGo3SoJKUzIn24uq9Af.d9JFaG9bwTOrn4H0BWc7DjJxZBqkcbHtyvzT
 1yJSI1xMLi2IXx7vu60470RpcAZfKdToo485vc7JvGJwsXKfhQ6jk9j5UW3CyuMszFyY1kDRsHEs
 fP8zXxDoCC.IYJ0MusApdHkgt_OirkfIvWX81qqjPxD8q0pf5Gy_ZfNR6qyZhMwzxSgTvYVLzkrH
 iSu05U86yN3kGsGmMVzD2WYN7TbdAk8TvmFjUPTwYD20RuVnptxavyYU7GQTxIPlLGjQakLUCh65
 a7LGQIrYwCxAn1ooIh1ynIJqkGcWEdRVkM5YdTWgHHMYQ9RvTTlt2Ga._d2Sn2E9Z.RkLT1Ce3uI
 MgRpvh72c1j2P3HvUXFuDGLvKrF.kukPYQIyMdVm4KtfK9sMAu0Ey_iQ_XAbvQNqIKXSANoYF5x_
 wsAciIFUpi7s1Eglxibl1IjburrFgVbTJ9Wd3ORI3B_oZCambzbQMmo6DVFkGTr40hcd_iVcLLls
 ELdBJcjq9bMT2Dt2zp9_FslrifNv6Yo6yASfAEki29yLmEyHuWwj7oQtOJCPOdT2AMTAsG9S1Uff
 yU4r1G28OvKZWK.onB9NdVMJ3M1qmxWokaRt_V5KRNrbrO96ysdumFYDUaH1N_UQffY1ivFVe1BN
 IQ69ZmuVn.VTVGbtg3AiQZf86gRZXF1iGoRMJLUNEg0GaTlykpXwZvuimVNDHIwPC_k2LGLF3Xz6
 .9tggycE4Cqy6fIDnAlICdqwOAUfojurIc7KcN.v2ujzUPOYbHOTESykr2y7C9evBZBHlyqBOWgC
 S5VWugYNcXDwf7NK1a33FSAaZTQYIp_oI9WdgHY6TS8sREnJ398RQ.BPasSi9.0LF05T26g0TFt1
 uiBzZBLA8f4jdukCHAbrGpx8Wv.aAxyZqXcqTPwT27vARgJgUA34uXE_nAn_WHgNLaBolYEHVmrQ
 oyVvLyusCXydvca7Z0Ek66mZJq3.mHlSGsM6jNZZcrlQDTCi7Nz6rQBo3H_TJ.X7tjxpJaDYQuV9
 nqp0oFccDmlYPJE8AyPjT3WSgcZVQHWgdgY0yfbsVXUc_So_ACPZjGqbpHuSMnHM7nGzKWnaA0s2
 jURLrWcmEaxgeQyh4wxIKbJOL0fHamEp0j9cTrfqoAtUiEKW5jX5MSpsqRQkrcbfLUZTQFOGEKaL
 Asz.UvrTZRxHqhIFbUGP5YtB.dciabm5I2a7yb6n1_kd0PYF4darkIAdvWP9uzPFqouPT8leebuh
 LgwNqWgLqZqiGKq0RzLJZ6YPHnt4QN8NG_ciP_e6YThPkzIB7sihFj9n58gho3n3RkiV2vAzcp7j
 PatRy6QZaxgYPkVcAA.Z8GIE0xF6lhaLufGavUprQQu6vs61yVPv_CCB2z9IP4ggYBI2fF.PxbLU
 7Paaf9awF_smgRC1iBTDcZ56KqZKRxjZCwbXpD9qJuIgmTOvfCodTku2sbw6K89RgaOS29BtYFM9
 PVwWFO.g5g7BE5AM9aXKKxOZa0SDnAvb_pPNZWXtd28KWfhXpLSVjGtGbar3MFFjtKMP5fD_dhiZ
 PeYUKo2heZwRcuuyCGrr0HjmkpCMMMUAjyYoBDwGp
X-Sonic-MF: <hack3rcon@yahoo.com>
Date: Thu, 26 Jan 2023 18:30:11 +0000 (UTC)
From: Jason Long <hack3rcon@yahoo.com>
Reply-To: Jason Long <hack3rcon@yahoo.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <907082531.1459240.1674757811189@mail.yahoo.com>
Subject: The re-architecture of Xen
MIME-Version: 1.0
Content-Type: multipart/alternative; 
	boundary="----=_Part_1459239_246279745.1674757811189"
References: <907082531.1459240.1674757811189.ref@mail.yahoo.com>
X-Mailer: WebService/1.1.21096 YahooMailAndroidMobile
Content-Length: 743

------=_Part_1459239_246279745.1674757811189
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello,

Doesn't KVM need re-architecting? For example, Dom0 consumes memory, and if its memory allocation is large, it wastes memory in the system.
Tnx.
------=_Part_1459239_246279745.1674757811189--


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 18:55:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 18:55:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485196.752231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL7Oi-0004Kl-RQ; Thu, 26 Jan 2023 18:54:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485196.752231; Thu, 26 Jan 2023 18:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL7Oi-0004Ke-MU; Thu, 26 Jan 2023 18:54:56 +0000
Received: by outflank-mailman (input) for mailman id 485196;
 Thu, 26 Jan 2023 18:54:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wwAt=5X=amd.com=stefano.stabellini@srs-se1.protection.inumbo.net>)
 id 1pL7Oh-0004KY-EA
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 18:54:55 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on2041.outbound.protection.outlook.com [40.107.236.41])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e9d06e1c-9daa-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 19:54:51 +0100 (CET)
Received: from BN9PR03CA0289.namprd03.prod.outlook.com (2603:10b6:408:f5::24)
 by CH3PR12MB7524.namprd12.prod.outlook.com (2603:10b6:610:146::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Thu, 26 Jan
 2023 18:54:46 +0000
Received: from BL02EPF000108EB.namprd05.prod.outlook.com
 (2603:10b6:408:f5:cafe::65) by BN9PR03CA0289.outlook.office365.com
 (2603:10b6:408:f5::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22 via Frontend
 Transport; Thu, 26 Jan 2023 18:54:45 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF000108EB.mail.protection.outlook.com (10.167.241.200) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.12 via Frontend Transport; Thu, 26 Jan 2023 18:54:44 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 26 Jan
 2023 12:54:43 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 26 Jan
 2023 10:54:43 -0800
Received: from ubuntu-20.04.2-arm64.shared (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2375.34 via Frontend Transport; Thu, 26 Jan 2023 12:54:42 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9d06e1c-9daa-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AbNrToVVlYJj+3qov2aM8Y6IRBnjJdbGDc2QVEGT5DRpx4uij/qkFC/MCUB8yFQk8awTTBDDh/eI23JlkHE/ypsabPBV1BmhU180i6qZE/4r6gL9CLMmNQoCeOH2YBsjyeR8kN1ZsHU+lgGqGcbcIxpOOpjNzQwK45gEkrVNXcJvoi0MFxuqnyv5ZDetqzfbMC09K1srl5gR/bwp6cTYXie8m/7j8IGb8LxFDKXYIujP38AVF4UdUn+dmHdgy4B3lISzp/FXXbXYNF1gfnMOqT3SG8mvsXb9EZFKPOVdV0tcyf+BykMFEEXT4SJIxCY5AoPB/+KMYEEop/DEVzSlsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jnhVOI7/C6P07l70YzzpuToHV64j2v2n4IaKG41cFJQ=;
 b=CHYNrjyWs40cG2fj9hSKhZmUp/T4BqPyIFUJ86AUIDGjg1z9HrQyUzRGnCuQss6tOwQ5gaqdrV2cVyX6kH+/eW0/AnR5yn8KhbCMaK8o+93daEr4dR8e8mtp62oeywkykfEh79wzAIpy0qfJMfVPPe5on7pn1gK5hLLv2WLAZX8p9E4NAhfmJqxWz5cc4Wj8/yIazaZGHM/4u5MKS6eByXMPlZNt+NFzebkVNCZ88H4F+kNu3Q5sA9yY2h8nSwOAaqNTX9UTDgEbsxPHWuQsB4h615PsejyOahECFWJQGBjRwTOclt9Epdm83iys9q6V/3cF96wcNuotU3/2pCvtAw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jnhVOI7/C6P07l70YzzpuToHV64j2v2n4IaKG41cFJQ=;
 b=1wPgeLYe3NsgO2WaN4viyvTs0ZgTFg4L5iAFPZDamyr4pxeQBMahpjqkAVbEjmijaousoMWtjap7wSQLIyp3H6bhFC+pWBSeN+e578oSN5i6zAdifr8nB5SM+tVcrupBAuTX3JvCRmrPUEjc1QeBzdlEWmkQTIbUoKQOChfWE1A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Date: Thu, 26 Jan 2023 10:54:42 -0800
From: Stefano Stabellini <stefano.stabellini@amd.com>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <stefano.stabellini@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, <george.dunlap@citrix.com>,
	<andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<Bertrand.Marquis@arm.com>, <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
In-Reply-To: <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
Message-ID: <alpine.DEB.2.22.394.2301261041440.1978264@ubuntu-linux-20-04-desktop>
References: <20230125205735.2662514-1-sstabellini@kernel.org> <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com> <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop> <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000108EB:EE_|CH3PR12MB7524:EE_
X-MS-Office365-Filtering-Correlation-Id: e8543448-d0bd-45c0-a264-08daffcec9f5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KbvNoquGxB/zqqRGnocSVGhkQaaujDHO25tFpkUHRenZ2s7vENBnN2BYTg3qHyKPaSshECEuSprtwkQlAWJ4x5EgSaPiLdBB+bkRZJJ6CFeqqFvMjlasOY7uQMySdcfUzLQ/U+NWgI+9Cu6qWMv1W3dU/y0ugj+omaS8npVLnnCX8UClFa0PzwX0URANz3I2LmNoaQaSPPJbNf0QvsD6oF0NzYx3Ni+gZSyuBCSLvZPDRsxfc0xOiuimT9QU7CR/0M1zrifX88w21osjJ9h2Jew+gB11DKWMocegMlpOr+I61HOohBOUPQ5tzBgo0IeelGlKd8bBRpTPqXvpiMQXP0POciecfTLnGyLmaQ/hZClDPOhO/tTCGlOE0/gKU7imCyaEtTpryljORkrW6WVz+NKnHl+KaLcXMoWZJpWNBeSVSrK8GPCYB6BaeR1d/c1GU8iioTu1CZk5+a96OBr5qjEo71zfVzxotm3cMiyntdtSpNPTreY1S/ntuZIadDWCVIamyxJzd7fOyNaE6zJpYc+DyBCgC7iEk8SPr0TgfsTT6nHUpSuSZgq5PeAD7cYH7F9wujIK8bTBSfAQ57mlM76eduT9/ea7z76AOZP9Kwp5BDIlBzjSEDuNl2NVjxM3iFsXlU+RGei1e95823+EOMQQC7mEMTSQcx9hyC8UOPgiPPeCma+GbvbwZ7zeL52gDW+ILOc5LDl3DF5TfJrq7FtG8Avl9cFVKI6ns9J/ZNQ=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(7916004)(39860400002)(396003)(376002)(136003)(346002)(451199018)(46966006)(36840700001)(40470700004)(6916009)(41300700001)(8936002)(5660300002)(44832011)(2906002)(33716001)(82310400005)(66899018)(40460700003)(40480700001)(356005)(8676002)(4326008)(54906003)(70586007)(478600001)(36860700001)(47076005)(81166007)(82740400003)(86362001)(70206006)(83380400001)(316002)(186003)(426003)(53546011)(336012)(26005)(9686003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jan 2023 18:54:44.2541
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e8543448-d0bd-45c0-a264-08daffcec9f5
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000108EB.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB7524

On Thu, 26 Jan 2023, Jan Beulich wrote:
> On 26.01.2023 16:59, Stefano Stabellini wrote:
> > On Thu, 26 Jan 2023, Jan Beulich wrote:
> >> On 25.01.2023 21:57, Stefano Stabellini wrote:
> >>> From: Stefano Stabellini <stefano.stabellini@amd.com>
> >>>
> >>> As agreed during the last MISRA C discussion, I am adding the following
> >>> MISRA C rules: 7.1, 7.3, 18.3.
> >>>
> >>> I am also adding 13.1 and 18.2 that were "agreed pending an analysis on
> >>> the amount of violations".
> >>>
> >>> In the case of 13.1 there are zero violations reported by cppcheck.
> >>>
> >>> In the case of 18.2, there are zero violations reported by cppcheck
> >>> after deviating the linker symbols, as discussed.
> >>
> >> I find this suspicious.
> > 
> > Hi Jan, you are right to be suspicious about 18.2 :-)  cppcheck is
> > clearly not doing a great job at finding violations. Here is the full
> > picture:
> > 
> > - cppcheck finds 3 violations, obviously related to linker symbols,
> > specifically in common/version.c:xen_build_init and
> > xen/lib/ctors.c:init_constructors
> > 
> > - Coverity finds 9 violations, not sure which ones
> > 
> > - Eclair finds 56 total on x86. Eclair is always the strictest of the
> >   three tools and is flagging:
> >   - the usage of the guest_mode macro in x86/traps.c and other places
> >   - the usage of the NEED_OP/NEED_IP macros in common/lzo.c
> >   the remaining violations should be less than 10
> > 
> > 
> >> See e.g. ((pg) - frame_table) expressions both Arm
> >> and x86 have. frame_table is neither a linker generated symbol, nor does
> >> it represent something that the compiler (or static analysis tools) would
> >> recognized as an "object". Still, the entire frame table of course
> >> effectively is an object (array), yet there's no way for any tool to
> >> actually recognize the array dimension.
> > 
> > I used cppcheck in my original email because it is the only tool today
> > where I can add a deviation as an in-code comment, re-run the scan,
> > and see what happens (the number of violations goes down).
> > 
> > However also considering that Coverity reports less than 10, and Eclair
> > reports more but due to only 2-3 macros, I think 18.2 should be
> > manageable.
> 
> That's not the conclusion I would draw. If none of the three finds what
> ought to be found, I'm not convinced this can be considered "manageable".
> Subsequent tool improvements may change the picture quite unexpectedly.

Keep in mind that there is always the possibility that a new version of
the tool will detect more violations. In that case, we'll have the
choice whether:
- to fix/deviate them
- not to upgrade the tool
- to skip checking this specific rule with the specific tool in question


So let me give a concrete example based on cppcheck, given that it is the
one we understand best from an integration point of view. Let's say
that tomorrow we introduce a gitlab-ci job that automatically scans
xen.git using cppcheck and docs/misra/rules.rst. The job fails when it
detects *new* violations. All is good for now.

Then one day we upgrade cppcheck to a new version. The new cppcheck
version detects way more violations. We have a few options:
- hold back the upgrade in gitlab-ci until we fix more violations or
  deviate them
- keep the rule in docs/misra/rules.rst but skip checking for the rule
  in gitlab-ci using the mechanism introduced by Luca, and upgrade
  cppcheck
- remove the rule from docs/misra/rules.rst, and upgrade cppcheck


We have ways to deal with this situation well. I think the decision
whether to add a rule to docs/misra/rules.rst should be primarily based
on whether the rule makes sense for Xen. And of course it also makes
sense to have a look at the number of violations because having a rule
in docs/misra/rules.rst that no tool can scan properly, or with
thousands of violations, is not useful.

Coming back to 18.2: it makes sense for Xen and the scanners today work
well with this rule, so I think we are fine.

(In retrospect we should have thought twice before adding 20.7 given all
the troubles that the rule is giving us with the scanners. We are
learning. There is still the option of removing 20.7; we can discuss it in
the next meeting.)


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 20:18:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 20:18:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485201.752241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL8hJ-0005o6-Q7; Thu, 26 Jan 2023 20:18:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485201.752241; Thu, 26 Jan 2023 20:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL8hJ-0005nz-NE; Thu, 26 Jan 2023 20:18:13 +0000
Received: by outflank-mailman (input) for mailman id 485201;
 Thu, 26 Jan 2023 20:18:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4vwE=5X=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pL8hI-0005nt-E2
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 20:18:12 +0000
Received: from ppsw-33.srv.uis.cam.ac.uk (ppsw-33.srv.uis.cam.ac.uk
 [131.111.8.133]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8a63b653-9db6-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 21:18:05 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:50998)
 by ppsw-33.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.137]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pL8h2-000Fom-SU (Exim 4.96) (return-path <amc96@srcf.net>);
 Thu, 26 Jan 2023 20:17:56 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 8DF111FB9D;
 Thu, 26 Jan 2023 20:17:56 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a63b653-9db6-11ed-b8d1-410ff93cb8f0
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <6742bf64-1d09-ee63-1aa4-1ecbef6b7c0a@srcf.net>
Date: Thu, 26 Jan 2023 20:17:56 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
 Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230125165308.22897-1-andrew.cooper3@citrix.com>
 <a5ba3821-37fc-c724-d015-6e9dc8cf65fd@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH] x86/shadow: Fix PV32 shadowing in !HVM builds
In-Reply-To: <a5ba3821-37fc-c724-d015-6e9dc8cf65fd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26/01/2023 8:50 am, Jan Beulich wrote:
> On 25.01.2023 17:53, Andrew Cooper wrote:
>> The OSSTest bisector identified an issue with c/s 1894049fa283 ("x86/shadow:
>> L2H shadow type is PV32-only") in !HVM builds.
>>
>> The bug is ultimately caused by sh_type_to_size[] not actually being specific
>> to HVM guests, and its position in shadow/hvm.c misled the reasoning.
>>
>> To fix the issue that OSSTest identified, SH_type_l2h_64_shadow must still
>> have the value 1 in any CONFIG_PV32 build.  But simply adjusting this leaves
>> us with misleading logic, and a reasonable chance of making a related error
>> again in the future.
>>
>> In hindsight, moving sh_type_to_size[] out of common.c in the first place was
>> a mistake.  Therefore, move sh_type_to_size[] back to living in common.c,
>> leaving a comment explaining why it happens to be inside an HVM conditional.
>>
>> This effectively reverts the second half of 4fec945409fc ("x86/shadow: adjust
>> and move sh_type_to_size[]") while retaining the other improvements from the
>> same changeset.
>>
>> While making this change, also adjust the sh_type_to_size[] declaration to
>> match its definition.
>>
>> Fixes: 4fec945409fc ("x86/shadow: adjust and move sh_type_to_size[]")
>> Fixes: 1894049fa283 ("x86/shadow: L2H shadow type is PV32-only")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Now it's time for me to ask: But why? Your interpretation of "HVM-only"
> is simply too restricted.

I appreciate that we have different opinions on the matter.

But the sequence of events speaks for itself.  Inadvertently, you
created this trap, and then fell straight into it.

> As said, "HVM-only" can have two meanings -
> build-time HVM-only and run time HVM-only.

So

obj-$(CONFIG_HVM) += hvm.c

means "this file, if and only if it is compiled in, contains logic
critical to the runtime functioning of PV guests", does it.

>  Code in hvm.c is also
> expecting to be entered for PV guests when HVM=y - see the several
> is_hvm_...(). They're all bogus though, and I have a patch pending to
> remove them. But that doesn't alter the principle. See e.g. audit_p2m(),
> which simply bails first thing when !paging_mode_translate(), or
> p2m_pod_active(), which bails first thing when !is_hvm_domain().

The fact that similar layering violations exist elsewhere doesn't mean
that this isn't one.  The fact that all experts in this area of code got
it wrong *is* the problem.

You in writing the patch and me in reviewing it made the assumption that
PV guests don't enter code in hvm.c *because* it is an entirely
reasonable assumption to make.

> Content of hvm.c (and other files which are built only when HVM=y, or
> more generally any other files which have a Kconfig build time
> dependency in their respective Makefile) simply has to be aware of this
> fact, and hence the data (array) in question is quite fine to live where
> it does.

There are two ways to stop this from happening.

Either make the assumption actually true, or stop people making the
assumption.

One of these can be done, and one cannot.


> I continue to view my 1st patch as the better approach. And in no case
> do I view the 1st Fixes: tag as appropriate.
>
> I guess we really need George or Tim to break ties here.

I have committed this with George's Ack.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 20:27:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 20:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485207.752251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL8qI-0007S5-Lt; Thu, 26 Jan 2023 20:27:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485207.752251; Thu, 26 Jan 2023 20:27:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL8qI-0007Ry-Ic; Thu, 26 Jan 2023 20:27:30 +0000
Received: by outflank-mailman (input) for mailman id 485207;
 Thu, 26 Jan 2023 20:27:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4vwE=5X=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pL8qH-0007Rs-S3
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 20:27:29 +0000
Received: from ppsw-42.srv.uis.cam.ac.uk (ppsw-42.srv.uis.cam.ac.uk
 [131.111.8.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d8b6b4eb-9db7-11ed-b8d1-410ff93cb8f0;
 Thu, 26 Jan 2023 21:27:26 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:49568)
 by ppsw-42.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.138]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pL8q4-000eKb-G5 (Exim 4.96) (return-path <amc96@srcf.net>);
 Thu, 26 Jan 2023 20:27:17 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id B374A1FA26;
 Thu, 26 Jan 2023 20:27:16 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8b6b4eb-9db7-11ed-b8d1-410ff93cb8f0
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <df86e0a6-1415-3fd3-5202-faff5edfc271@srcf.net>
Date: Thu, 26 Jan 2023 20:27:16 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
 <8ee98cc0-21d3-100a-ffcc-37cd466e7761@suse.com>
 <718f6fd0-cb96-6f72-87ff-7382582d89f9@srcf.net>
 <fa38f305-df29-4178-2279-17a084fdf2cd@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v3 1/4] x86/spec-ctrl: add logic to issue IBPB on exit to
 guest
In-Reply-To: <fa38f305-df29-4178-2279-17a084fdf2cd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26/01/2023 8:02 am, Jan Beulich wrote:
> On 25.01.2023 22:10, Andrew Cooper wrote:
>> On 25/01/2023 3:25 pm, Jan Beulich wrote:
>>> In order to be able to defer the context switch IBPB to the last
>>> possible point, add logic to the exit-to-guest paths to issue the
>>> barrier there, including the "IBPB doesn't flush the RSB/RAS"
>>> workaround. Since alternatives, for now at least, can't nest, emit JMP
>>> to skip past both constructs where both are needed. This may be more
>>> efficient anyway, as the sequence of NOPs is pretty long.
>> It is very uarch specific as to when a jump is less overhead than a line
>> of nops.
>>
>> In all CPUs liable to be running Xen, even unconditional jumps take up
>> branch prediction resource, because all branch prediction is pre-decode
>> these days, so branch locations/types/destinations all need deriving
>> from %rip and "history" alone.
>>
>> So whether a branch or a line of nops is better is a tradeoff between
>> how much competition there is for branch prediction resource, and how
>> efficiently the CPU can brute-force its way through a long line of nops.
>>
>> But a very interesting datapoint.  It turns out that AMD Zen4 CPUs
>> macrofuse adjacent nops, including longnops, because it reduces the
>> amount of execute/retire resources required.  And a lot of
>> kernel/hypervisor fastpaths have a lot of nops these days.
>>
>>
>> For us, the "can't nest" is singularly more important than any worry
>> about uarch behaviour.  We've frankly got much lower hanging fruit than
>> worrying about one branch vs a few nops.
>>
>>> LFENCEs are omitted - for HVM a VM entry is imminent, which already
>>> elsewhere we deem sufficiently serializing an event. For 32-bit PV
>>> we're going through IRET, which ought to be good enough as well. While
>>> 64-bit PV may use SYSRET, there are several more conditional branches
>>> there which are all unprotected.
>> Privilege changes are serialising-ish, and this behaviour has been
>> guaranteed moving forwards, although not documented coherently.
>>
>> CPL (well - privilege, which includes SMM, root/non-root, etc) is not
>> written speculatively.  So any logic which needs to modify privilege has
>> to block until it is known to be an architectural execution path.
>>
>> This gets us "lfence-like" or "dispatch serialising" behaviour, which is
>> also the reason why INT3 is our go-to speculation halting instruction. 
>> Microcode has to be entirely certain we are going to deliver an
>> interrupt/exception/etc before it can start reading the IDT/etc.
>>
>> Either way, we've been promised that all instructions like IRET,
>> SYS{CALL,RET,ENTER,EXIT}, VM{RUN,LAUNCH,RESUME} (and ERET{U,S} in the
>> future FRED world) do not, and shall continue to not, execute speculatively.
>>
>> Which in practice means we don't need to worry about Spectre-v1 attack
>> against codepaths which hit an exit-from-xen path, in terms of skipping
>> protections.
>>
>> We do need to be careful about memory accesses and potential double
>> dereferences, but all the data is on the top of the stack for XPTI
>> reasons.  About the only concern is v->arch.msrs->* in the HVM path, and
>> we're fine with the current layout (AFAICT).
>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> I have to admit that I'm not really certain about the placement of the
>>> IBPB wrt the MSR_SPEC_CTRL writes. For now I've simply used "opposite of
>>> entry".
>> It really doesn't matter.  They're independent operations that both need
>> doing, and are fully serialising so can't parallelise.
>>
>> But on this note, WRMSRNS and WRMSRLIST are on the horizon.  The CPUs
>> which implement these instructions are the ones which also ought not to
>> need any adjustments in the exit paths.  So I think it is specifically
>> not worth trying to make any effort to turn *these* WRMSRs into more
>> optimised forms.
>>
>> But WRMSRLIST was designed specifically for this kind of usecase
>> (actually, more for the main context switch path) where you can prepare
>> the list of MSRs in memory, including the ability to conditionally skip
>> certain entries by adjusting the index field.
>>
>>
>> It occurs to me, having written this out, that what we actually want
>> to do is have slightly custom not-quite-alternative blocks.  We have a
>> sequence of independent code blocks, and a small block at the end that
>> happens to contain an IRET.
>>
>> We could remove the nops at boot time if we treated it as one large
>> region, with the IRET at the end also able to have a variable position,
>> and assembled the "active" blocks tightly from the start.  Complications
>> would include adjusting the IRET extable entry, but this isn't
>> insurmountable.  Entrypoints are a bit more tricky but could be done by
>> packing from the back forward, and adjusting the entry position.
>>
>> Either way, something to ponder.  (It's also possible that it doesn't
>> make a measurable difference until we get to FRED, at which point we
>> have a set of fresh entry-points to write anyway, and a slight glimmer
>> of hope of not needing to pollute them with speculation workarounds...)
>>
>>> Since we're going to run out of SCF_* bits soon and since the new flag
>>> is meaningful only in struct cpu_info's spec_ctrl_flags, we could choose
>>> to widen that field to 16 bits right away and then use bit 8 (or higher)
>>> for the purpose here.
>> I really don't think it matters.  We've got plenty of room, and the
>> flexibility to shuffle, in both structures.  It's absolutely not worth
>> trying to introduce asymmetries to save 1 bit.
> Thanks for all the comments up to here. Just to clarify - I've not spotted
> anything there that would result in me being expected to take any action.

I'm tempted to suggest dropping the sentence about "might be more
efficient".  It's not necessary at all IMO, and it's probably not
correct if you were to compare an atom microserver vs a big Xeon.

For the lfences paragraph, I'd suggest rephrasing to something like this:

"As with all other conditional blocks on exit-to-guest paths, no
Spectre-v1 protections are necessary as execution will imminently be
hitting a serialising event."

The commentary on IRET/SYSRET and other jumps is adding confusion to a
matter which is known to be safe.

>
>>> --- a/xen/arch/x86/include/asm/current.h
>>> +++ b/xen/arch/x86/include/asm/current.h
>>> @@ -55,9 +55,13 @@ struct cpu_info {
>>>  
>>>      /* See asm/spec_ctrl_asm.h for usage. */
>>>      unsigned int shadow_spec_ctrl;
>>> +    /*
>>> +     * spec_ctrl_flags can be accessed as a 32-bit entity and hence needs
>>> +     * placing suitably.
>> I'd suggest "is accessed as a 32-bit entity, and wants aligning suitably" ?
> I've tried to choose the wording carefully: The 32-bit access is in an
> alternative block, so doesn't always come into play. Hence the "may",
> not "is". Alignment alone also isn't sufficient here (and mis-aligning
> isn't merely a performance problem) - the following three bytes also
> need to be valid to access in the first place. Hence "needs" and
> "placing", not "wants" and "aligning".

"can" is actually very ambiguous, and one valid interpretation of the
sentence is "spec_ctrl_flags is permitted to be accessed as a 32-bit
quantity, but isn't actually accessed in that way".

In this case, we mean "it *is* accessed as a 32-bit entity, in certain
cases".

So what about:

"spec_ctrl_flags is accessed as a 32-bit entity in some cases.  Place it
so it can be used as if it were an int."

?

>
>> If I've followed the logic correctly.  (I can't say I was specifically
>> aware that the bit test instructions didn't have byte forms, but I
>> suspect such instruction forms would be very very niche.)
> Yes, there not being byte forms of BT* is the sole reason here.
>
>>> --- a/xen/arch/x86/include/asm/spec_ctrl.h
>>> +++ b/xen/arch/x86/include/asm/spec_ctrl.h
>>> @@ -36,6 +36,8 @@
>>>  #define SCF_verw       (1 << 3)
>>>  #define SCF_ist_ibpb   (1 << 4)
>>>  #define SCF_entry_ibpb (1 << 5)
>>> +#define SCF_exit_ibpb_bit 6
>>> +#define SCF_exit_ibpb  (1 << SCF_exit_ibpb_bit)
>> One option to avoid the second define is to use ILOG2() with btrl.
> Specifically not. The assembler doesn't know the conditional operator,
> and the pre-processor won't collapse the expression resulting from
> expanding ilog2().

Oh that's dull.

I suspect we could construct equivalent logic with an .if/.else chain,
but I have no idea if the order of evaluation would be conducive to
doing so as part of evaluating an immediate operand.  We would
specifically not want something that ended up looking like `ilog2 const
"btrl $" ",%eax"`, even though I could see how to make that work.

It would be nice if we could make something suitable here, but if not we
can live with the second _bit constant.

>
>>> @@ -272,6 +293,14 @@
>>>  #define SPEC_CTRL_EXIT_TO_PV                                            \
>>>      ALTERNATIVE "",                                                     \
>>>          DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_PV;              \
>>> +    ALTERNATIVE __stringify(jmp PASTE(.Lscexitpv_done, __LINE__)),      \
>>> +        __stringify(DO_SPEC_CTRL_EXIT_IBPB                              \
>>> +                    disp=(PASTE(.Lscexitpv_done, __LINE__) -            \
>>> +                          PASTE(.Lscexitpv_rsb, __LINE__))),            \
>>> +        X86_FEATURE_IBPB_EXIT_PV;                                       \
>>> +PASTE(.Lscexitpv_rsb, __LINE__):                                        \
>>> +    ALTERNATIVE "", DO_OVERWRITE_RSB, X86_BUG_IBPB_NO_RET;              \
>>> +PASTE(.Lscexitpv_done, __LINE__):                                       \
>>>      DO_SPEC_CTRL_COND_VERW
>> What's wrong with the normal %= trick?
> We're in a C macro here which is then used in assembly code. %= only
> works in asm(), though. If we were in an assembler macro, I'd have
> used \@. Yet wrapping the whole thing in an assembler macro would, for
> my taste, hide too much information from the use sites (in particular
> the X86_{FEATURE,BUG}_* which are imo relevant to be visible there).
>
>>   The use of __LINE__ makes this
>> hard to subsequently livepatch, so I'd prefer to avoid it if possible.
> Yes, I was certainly aware this would be a concern. I couldn't think of
> a (forward looking) clean solution, though: Right now we have only one
> use per source file (the native and compat PV entry.S), so we could use
> a context-independent label name. But as you say above, for FRED we're
> likely to get new entry points, and they're likely better placed in the
> same files.

I was going to suggest using __COUNTER__ but I've just realised why that
won't work.

But on further consideration, this might be ok for livepatching, so long
as __LINE__ is only ever used with a local label name.  By the time it
has been compiled to a binary, the label name won't survive; only the
resulting displacement will.

I think we probably want to merge this with TEMP_NAME() (perhaps as
UNIQ_NAME(), as it will have to move elsewhere to become common with
this) to avoid proliferating our livepatching reasoning.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 20:43:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 20:43:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485217.752267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL95g-0001f7-5Q; Thu, 26 Jan 2023 20:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485217.752267; Thu, 26 Jan 2023 20:43:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL95g-0001f0-1z; Thu, 26 Jan 2023 20:43:24 +0000
Received: by outflank-mailman (input) for mailman id 485217;
 Thu, 26 Jan 2023 20:43:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4vwE=5X=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pL95f-0001eu-AT
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 20:43:23 +0000
Received: from ppsw-32.srv.uis.cam.ac.uk (ppsw-32.srv.uis.cam.ac.uk
 [131.111.8.132]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 129012a8-9dba-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 21:43:22 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:59324)
 by ppsw-32.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.136]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pL95d-000rTU-B9 (Exim 4.96) (return-path <amc96@srcf.net>);
 Thu, 26 Jan 2023 20:43:21 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 3BBFB1FC8B;
 Thu, 26 Jan 2023 20:43:21 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 129012a8-9dba-11ed-a5d9-ddcf98b90cbd
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <8b4c6aa8-cdc5-50a3-2170-f4af80fe1a26@srcf.net>
Date: Thu, 26 Jan 2023 20:43:21 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
 <23ea08db-3b64-5d1a-6743-19abb7bd6529@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v3 2/4] x86/spec-ctrl: defer context-switch IBPB until
 guest entry
In-Reply-To: <23ea08db-3b64-5d1a-6743-19abb7bd6529@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25/01/2023 3:26 pm, Jan Beulich wrote:
> In order to avoid clobbering Xen's own predictions, defer the barrier as
> much as possible.

I can't actually think of a case where this matters.  We've done a
reasonable amount of work to get rid of indirect branches, and rets were
already immaterial because of the reset_stack_and_jump().

What I'm trying to figure out is whether this ends up hurting us.  If
there was an indirect call we used reliably pre and post context switch,
that changed target based on the guest type, then we'd now retain and
use the bad prediction.

>  Merely mark the CPU as needing a barrier issued the
> next time we're exiting to guest context.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I couldn't find any sensible (central/unique) place where to move the
> comment which is being deleted alongside spec_ctrl_new_guest_context().

Given this, I'm wondering whether in patch 1, it might be better to name
the new bit SCF_new_guest_context.  Or new_pred_context (which, on
consideration, I think is better than the current name)?

This would have a knock-on effect on the feature names, which I think is
important because I think you've got a subtle bug in patch 3.

The comment really wants to stay, and it could simply move into
spec_ctrl_asm.h at that point.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 20:49:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 20:49:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485223.752276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL9Bf-0002Tt-PH; Thu, 26 Jan 2023 20:49:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485223.752276; Thu, 26 Jan 2023 20:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pL9Bf-0002Tm-Me; Thu, 26 Jan 2023 20:49:35 +0000
Received: by outflank-mailman (input) for mailman id 485223;
 Thu, 26 Jan 2023 20:49:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4vwE=5X=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pL9Be-0002Tg-FU
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 20:49:34 +0000
Received: from ppsw-43.srv.uis.cam.ac.uk (ppsw-43.srv.uis.cam.ac.uk
 [131.111.8.143]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef74a9d0-9dba-11ed-a5d9-ddcf98b90cbd;
 Thu, 26 Jan 2023 21:49:33 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:33972)
 by ppsw-43.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.139]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pL9Bc-000UNr-TI (Exim 4.96) (return-path <amc96@srcf.net>);
 Thu, 26 Jan 2023 20:49:32 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id D1B011FA26;
 Thu, 26 Jan 2023 20:49:31 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef74a9d0-9dba-11ed-a5d9-ddcf98b90cbd
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <d0a0960a-f110-c065-753d-9f000ca379a9@srcf.net>
Date: Thu, 26 Jan 2023 20:49:31 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
 <c39faba2-1ab6-71da-f748-1545aac8290b@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v3 3/4] x86: limit issuing of IBPB during context switch
In-Reply-To: <c39faba2-1ab6-71da-f748-1545aac8290b@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25/01/2023 3:26 pm, Jan Beulich wrote:
> When the outgoing vCPU had IBPB issued upon entering Xen there's no
> need for a 2nd barrier during context switch.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v3: Fold into series.
>
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2015,7 +2015,8 @@ void context_switch(struct vcpu *prev, s
>  
>          ctxt_switch_levelling(next);
>  
> -        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) )
> +        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) &&
> +             !(prevd->arch.spec_ctrl_flags & SCF_entry_ibpb) )
>          {
>              static DEFINE_PER_CPU(unsigned int, last);
>              unsigned int *last_id = &this_cpu(last);
>
>

The aforementioned naming change makes the (marginal) security hole here
more obvious.

When we use entry-IBPB to protect Xen, we only care about the branch
types in the BTB.  We don't flush the RSB when using the SMEP optimisation.

Therefore, entry-IBPB is not something which lets us safely skip
exit-new-pred-context.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 23:19:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 23:19:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485232.752296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLBVl-00038E-N5; Thu, 26 Jan 2023 23:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485232.752296; Thu, 26 Jan 2023 23:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLBVl-000387-KW; Thu, 26 Jan 2023 23:18:29 +0000
Received: by outflank-mailman (input) for mailman id 485232;
 Thu, 26 Jan 2023 23:18:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HSUn=5X=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLBVk-00037z-E9
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 23:18:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bb7ea3c8-9dcf-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 00:18:26 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2154961987;
 Thu, 26 Jan 2023 23:18:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C6D2FC433EF;
 Thu, 26 Jan 2023 23:18:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb7ea3c8-9dcf-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674775103;
	bh=oEPoflK9awJKSXqE5/QPLzbwK4CdwEB6BVinXmr75wU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=l/SD3o4PRRj7/YUs6CoVZkeh59hizanT7ua2uPxrddWSXxswztV0zSpzITIitGQFb
	 PIrBOuuoJ/fNc9ce+RkhAKKjFhGsAmoHEkdDjVqoduh3B0oc8PeAzeLQarxJwTT3WE
	 UMZSTVwJkns8Q/vO+Lhua4iUl2+gPug+wlDjqDXwudju3S/iJQq6PIofjSm4H7kJhF
	 jrtjdndRsmcAC50wRdp7L1z1T3vLNzbyHRr7qXflh21e/gDjnrTimsimbDi5NWNwZN
	 1lpLwsCEs6eEmGxtOg/X+MGhnzR8qWmsUdi9un57jLYe3Enk4dvHovrnD53nBd6ILf
	 6FBe2x2qPAbgQ==
Date: Thu, 26 Jan 2023 15:18:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Kevin Tian <kevin.tian@intel.com>, Stewart.Hildebrand@amd.com
Subject: Re: [RFC PATCH 01/10] xen: pci: add per-domain pci list lock
In-Reply-To: <20220831141040.13231-2-volodymyr_babchuk@epam.com>
Message-ID: <alpine.DEB.2.22.394.2301261435230.1978264@ubuntu-linux-20-04-desktop>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com> <20220831141040.13231-2-volodymyr_babchuk@epam.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
> domain->pdevs_lock protects access to domain->pdev_list.
> As such, it should be used when we are adding, removing or enumerating
> PCI devices assigned to a domain.
> 
> This enables more granular locking instead of one huge pcidevs_lock that
> locks the entire PCI subsystem. Please note that pcidevs_lock() is still
> used, we are going to remove it in subsequent patches.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

I reviewed the patch, and made sure to pay extra attention to:
- error paths
- missing locks
- lock ordering
- interruptions

Here is what I found:


1) iommu.c:reassign_device_ownership and pci_amd_iommu.c:reassign_device
Both functions do the following without holding any pdevs_lock:
list_move(&pdev->domain_list, &target->pdev_list);

It seems to me it would need pdevs_lock. Maybe we need to change
list_move into list_del (protected by the pdevs_lock of the old domain)
and list_add (protected by the pdevs_lock of the new domain).


2) has_arch_pdevs
has_arch_pdevs is implemented as list_empty and needs locking as well;
however, no domain->pdevs_lock is taken to protect has_arch_pdevs in
this patch. I think we need pdevs_lock around has_arch_pdevs.


Two more comments below about lock inversion and taking the same lock
twice



> ---
>  xen/common/domain.c                         |  1 +
>  xen/drivers/passthrough/amd/iommu_cmd.c     |  4 ++-
>  xen/drivers/passthrough/amd/pci_amd_iommu.c |  7 ++++-
>  xen/drivers/passthrough/pci.c               | 29 ++++++++++++++++++++-
>  xen/drivers/passthrough/vtd/iommu.c         |  9 +++++--
>  xen/drivers/vpci/header.c                   |  3 +++
>  xen/drivers/vpci/msi.c                      |  7 ++++-
>  xen/drivers/vpci/vpci.c                     |  4 +--
>  xen/include/xen/pci.h                       |  2 +-
>  xen/include/xen/sched.h                     |  1 +
>  10 files changed, 58 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 7062393e37..4611141b87 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -618,6 +618,7 @@ struct domain *domain_create(domid_t domid,
>  
>  #ifdef CONFIG_HAS_PCI
>      INIT_LIST_HEAD(&d->pdev_list);
> +    spin_lock_init(&d->pdevs_lock);
>  #endif
>  
>      /* All error paths can depend on the above setup. */
> diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
> index 40ddf366bb..47c45398d4 100644
> --- a/xen/drivers/passthrough/amd/iommu_cmd.c
> +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
> @@ -308,11 +308,12 @@ void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
>      flush_command_buffer(iommu, iommu_dev_iotlb_timeout);
>  }
>  
> -static void amd_iommu_flush_all_iotlbs(const struct domain *d, daddr_t daddr,
> +static void amd_iommu_flush_all_iotlbs(struct domain *d, daddr_t daddr,
>                                         unsigned int order)
>  {
>      struct pci_dev *pdev;
>  
> +    spin_lock(&d->pdevs_lock);
>      for_each_pdev( d, pdev )
>      {
>          u8 devfn = pdev->devfn;
> @@ -323,6 +324,7 @@ static void amd_iommu_flush_all_iotlbs(const struct domain *d, daddr_t daddr,
>          } while ( devfn != pdev->devfn &&
>                    PCI_SLOT(devfn) == PCI_SLOT(pdev->devfn) );
>      }
> +    spin_unlock(&d->pdevs_lock);
>  }
>  
>  /* Flush iommu cache after p2m changes. */
> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> index 4ba8e764b2..64c016491d 100644
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -96,20 +96,25 @@ static int __must_check allocate_domain_resources(struct domain *d)
>      return rc;
>  }
>  
> -static bool any_pdev_behind_iommu(const struct domain *d,
> +static bool any_pdev_behind_iommu(struct domain *d,
>                                    const struct pci_dev *exclude,
>                                    const struct amd_iommu *iommu)
>  {
>      const struct pci_dev *pdev;
>  
> +    spin_lock(&d->pdevs_lock);
>      for_each_pdev ( d, pdev )
>      {
>          if ( pdev == exclude )
>              continue;
>  
>          if ( find_iommu_for_device(pdev->seg, pdev->sbdf.bdf) == iommu )
> +	{
> +	    spin_unlock(&d->pdevs_lock);
>              return true;
> +	}

code style: tabs instead of spaces


>      }
> +    spin_unlock(&d->pdevs_lock);
>  
>      return false;
>  }
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index cdaf5c247f..4366f8f965 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -523,7 +523,9 @@ static void __init _pci_hide_device(struct pci_dev *pdev)
>      if ( pdev->domain )
>          return;
>      pdev->domain = dom_xen;
> +    spin_lock(&dom_xen->pdevs_lock);
>      list_add(&pdev->domain_list, &dom_xen->pdev_list);
> +    spin_unlock(&dom_xen->pdevs_lock);
>  }
>  
>  int __init pci_hide_device(unsigned int seg, unsigned int bus,
> @@ -595,7 +597,7 @@ struct pci_dev *pci_get_real_pdev(pci_sbdf_t sbdf)
>      return pdev;
>  }
>  
> -struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
> +struct pci_dev *pci_get_pdev(struct domain *d, pci_sbdf_t sbdf)
>  {
>      struct pci_dev *pdev;
>  
> @@ -620,9 +622,16 @@ struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
>                  return pdev;
>      }
>      else
> +    {
> +        spin_lock(&d->pdevs_lock);
>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
>              if ( pdev->sbdf.bdf == sbdf.bdf )
> +            {
> +                spin_unlock(&d->pdevs_lock);
>                  return pdev;
> +            }
> +        spin_unlock(&d->pdevs_lock);
> +    }
>  
>      return NULL;
>  }
> @@ -817,7 +826,9 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>      if ( !pdev->domain )
>      {
>          pdev->domain = hardware_domain;
> +        spin_lock(&hardware_domain->pdevs_lock);
>          list_add(&pdev->domain_list, &hardware_domain->pdev_list);
> +        spin_unlock(&hardware_domain->pdevs_lock);
>  
>          /*
>           * For devices not discovered by Xen during boot, add vPCI handlers
> @@ -827,7 +838,9 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>          if ( ret )
>          {
>              printk(XENLOG_ERR "Setup of vPCI failed: %d\n", ret);
> +            spin_lock(&pdev->domain->pdevs_lock);
>              list_del(&pdev->domain_list);
> +            spin_unlock(&pdev->domain->pdevs_lock);
>              pdev->domain = NULL;
>              goto out;
>          }
> @@ -835,7 +848,9 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>          if ( ret )
>          {
>              vpci_remove_device(pdev);
> +            spin_lock(&pdev->domain->pdevs_lock);
>              list_del(&pdev->domain_list);
> +            spin_unlock(&pdev->domain->pdevs_lock);
>              pdev->domain = NULL;
>              goto out;
>          }
> @@ -885,7 +900,11 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>              pci_cleanup_msi(pdev);
>              ret = iommu_remove_device(pdev);
>              if ( pdev->domain )
> +            {
> +                spin_lock(&pdev->domain->pdevs_lock);
>                  list_del(&pdev->domain_list);
> +                spin_unlock(&pdev->domain->pdevs_lock);
> +            }
>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
>              free_pdev(pseg, pdev);
>              break;
> @@ -967,12 +986,14 @@ int pci_release_devices(struct domain *d)
>          pcidevs_unlock();
>          return ret;
>      }
> +    spin_lock(&d->pdevs_lock);
>      list_for_each_entry_safe ( pdev, tmp, &d->pdev_list, domain_list )
>      {
>          bus = pdev->bus;
>          devfn = pdev->devfn;
>          ret = deassign_device(d, pdev->seg, bus, devfn) ?: ret;

This causes pdevs_lock to be taken twice: deassign_device also takes
pdevs_lock. We probably need to change all the spin_lock(&d->pdevs_lock)
calls into spin_lock_recursive.



>      }
> +    spin_unlock(&d->pdevs_lock);
>      pcidevs_unlock();
>  
>      return ret;
> @@ -1194,7 +1215,9 @@ static int __hwdom_init cf_check _setup_hwdom_pci_devices(
>              if ( !pdev->domain )
>              {
>                  pdev->domain = ctxt->d;
> +                spin_lock(&pdev->domain->pdevs_lock);
>                  list_add(&pdev->domain_list, &ctxt->d->pdev_list);
> +                spin_unlock(&pdev->domain->pdevs_lock);
>                  setup_one_hwdom_device(ctxt, pdev);
>              }
>              else if ( pdev->domain == dom_xen )
> @@ -1556,6 +1579,7 @@ static int iommu_get_device_group(
>          return group_id;
>  
>      pcidevs_lock();
> +    spin_lock(&d->pdevs_lock);
>      for_each_pdev( d, pdev )
>      {
>          unsigned int b = pdev->bus;
> @@ -1571,6 +1595,7 @@ static int iommu_get_device_group(
>          if ( sdev_id < 0 )
>          {
>              pcidevs_unlock();
> +            spin_unlock(&d->pdevs_lock);

lock inversion


>              return sdev_id;
>          }
>  
> @@ -1581,6 +1606,7 @@ static int iommu_get_device_group(
>              if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
>              {
>                  pcidevs_unlock();
> +                spin_unlock(&d->pdevs_lock);

lock inversion


>                  return -EFAULT;
>              }
>              i++;
> @@ -1588,6 +1614,7 @@ static int iommu_get_device_group(
>      }
>  
>      pcidevs_unlock();
> +    spin_unlock(&d->pdevs_lock);

lock inversion


>      return i;
>  }
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 62e143125d..fff1442265 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -183,12 +183,13 @@ static void cleanup_domid_map(domid_t domid, struct vtd_iommu *iommu)
>      }
>  }
>  
> -static bool any_pdev_behind_iommu(const struct domain *d,
> +static bool any_pdev_behind_iommu(struct domain *d,
>                                    const struct pci_dev *exclude,
>                                    const struct vtd_iommu *iommu)
>  {
>      const struct pci_dev *pdev;
>  
> +    spin_lock(&d->pdevs_lock);
>      for_each_pdev ( d, pdev )
>      {
>          const struct acpi_drhd_unit *drhd;
> @@ -198,8 +199,12 @@ static bool any_pdev_behind_iommu(const struct domain *d,
>  
>          drhd = acpi_find_matched_drhd_unit(pdev);
>          if ( drhd && drhd->iommu == iommu )
> +        {
> +            spin_unlock(&d->pdevs_lock);
>              return true;
> +        }
>      }
> +    spin_unlock(&d->pdevs_lock);
>  
>      return false;
>  }
> @@ -208,7 +213,7 @@ static bool any_pdev_behind_iommu(const struct domain *d,
>   * If no other devices under the same iommu owned by this domain,
>   * clear iommu in iommu_bitmap and clear domain_id in domid_bitmap.
>   */
> -static void check_cleanup_domid_map(const struct domain *d,
> +static void check_cleanup_domid_map(struct domain *d,
>                                      const struct pci_dev *exclude,
>                                      struct vtd_iommu *iommu)
>  {
> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> index a1c928a0d2..a59aa7ad0b 100644
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -267,6 +267,7 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>       * Check for overlaps with other BARs. Note that only BARs that are
>       * currently mapped (enabled) are checked for overlaps.
>       */
> +    spin_lock(&pdev->domain->pdevs_lock);
>      for_each_pdev ( pdev->domain, tmp )
>      {
>          if ( tmp == pdev )
> @@ -306,11 +307,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>                  printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
>                         start, end, rc);
>                  rangeset_destroy(mem);
> +                spin_unlock( &pdev->domain->pdevs_lock);
>                  return rc;
>              }
>          }
>      }
>  
> +    spin_unlock( &pdev->domain->pdevs_lock);
>      ASSERT(dev);
>  
>      if ( system_state < SYS_STATE_active )
> diff --git a/xen/drivers/vpci/msi.c b/xen/drivers/vpci/msi.c
> index 8f2b59e61a..8969c335b0 100644
> --- a/xen/drivers/vpci/msi.c
> +++ b/xen/drivers/vpci/msi.c
> @@ -265,7 +265,7 @@ REGISTER_VPCI_INIT(init_msi, VPCI_PRIORITY_LOW);
>  
>  void vpci_dump_msi(void)
>  {
> -    const struct domain *d;
> +    struct domain *d;
>  
>      rcu_read_lock(&domlist_read_lock);
>      for_each_domain ( d )
> @@ -277,6 +277,9 @@ void vpci_dump_msi(void)
>  
>          printk("vPCI MSI/MSI-X d%d\n", d->domain_id);
>  
> +        if ( !spin_trylock(&d->pdevs_lock) )
> +            continue;
> +
>          for_each_pdev ( d, pdev )
>          {
>              const struct vpci_msi *msi;
> @@ -326,6 +329,8 @@ void vpci_dump_msi(void)
>              spin_unlock(&pdev->vpci->lock);
>              process_pending_softirqs();
>          }
> +        spin_unlock(&d->pdevs_lock);
> +
>      }
>      rcu_read_unlock(&domlist_read_lock);
>  }
> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> index 3467c0de86..7d1f9fd438 100644
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -312,7 +312,7 @@ static uint32_t merge_result(uint32_t data, uint32_t new, unsigned int size,
>  
>  uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>  {
> -    const struct domain *d = current->domain;
> +    struct domain *d = current->domain;
>      const struct pci_dev *pdev;
>      const struct vpci_register *r;
>      unsigned int data_offset = 0;
> @@ -415,7 +415,7 @@ static void vpci_write_helper(const struct pci_dev *pdev,
>  void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
>                  uint32_t data)
>  {
> -    const struct domain *d = current->domain;
> +    struct domain *d = current->domain;
>      const struct pci_dev *pdev;
>      const struct vpci_register *r;
>      unsigned int data_offset = 0;
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index 5975ca2f30..19047b4b20 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -177,7 +177,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>  int pci_remove_device(u16 seg, u8 bus, u8 devfn);
>  int pci_ro_device(int seg, int bus, int devfn);
>  int pci_hide_device(unsigned int seg, unsigned int bus, unsigned int devfn);
> -struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf);
> +struct pci_dev *pci_get_pdev(struct domain *d, pci_sbdf_t sbdf);
>  struct pci_dev *pci_get_real_pdev(pci_sbdf_t sbdf);
>  void pci_check_disable_device(u16 seg, u8 bus, u8 devfn);
>  
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 1cf629e7ec..0775228ba9 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -457,6 +457,7 @@ struct domain
>  
>  #ifdef CONFIG_HAS_PCI
>      struct list_head pdev_list;
> +    spinlock_t pdevs_lock;

I think it would be better called "pdev_lock", but OK either way.


>  #endif
>  
>  #ifdef CONFIG_HAS_PASSTHROUGH
> -- 
> 2.36.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 23:41:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 23:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485237.752306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLBrU-0006kg-JB; Thu, 26 Jan 2023 23:40:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485237.752306; Thu, 26 Jan 2023 23:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLBrU-0006kZ-GE; Thu, 26 Jan 2023 23:40:56 +0000
Received: by outflank-mailman (input) for mailman id 485237;
 Thu, 26 Jan 2023 23:40:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HSUn=5X=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLBrT-0006kT-K9
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 23:40:55 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dfbc2637-9dd2-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 00:40:54 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id AE3F0B81ED5;
 Thu, 26 Jan 2023 23:40:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8114FC433EF;
 Thu, 26 Jan 2023 23:40:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfbc2637-9dd2-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674776452;
	bh=MwcrpHNjJOmzEcKQbB9lI4v77vX57G2nWxiUid79/vs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=So3E+Vni2AScsj3hwqrikOAemXJ89SU3gU9S78+9BTutugIa77wG2DWWgf17BHFkT
	 C0/nNQDlyUP667xkmkarejmvhwTSJnblya12XW2cAqp617d2SkZ2joaXBCiYM/J972
	 rpccpuPDA18n44C/cFRSKW1SR1MrkrB8Oz6epXz6jy9pULEFQmJAl7kdV0yfRDhN/u
	 Ea+rOdcKB9B1ACC+w5zp9kvZEutjuCjzKlP7SV2p3+2XiHxI51ZwyT782q+Qarnxys
	 Z90h+cXM78rfItuX2PGBDa4SX0Us2ehHL+KFiOjpZWxHpq7dDH8NppkC87oAKm9AD9
	 uQmwrZ+PuG5MA==
Date: Thu, 26 Jan 2023 15:40:50 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [RFC PATCH 02/10] xen: pci: add pci_seg->alldevs_lock
In-Reply-To: <20220831141040.13231-3-volodymyr_babchuk@epam.com>
Message-ID: <alpine.DEB.2.22.394.2301261524430.1978264@ubuntu-linux-20-04-desktop>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com> <20220831141040.13231-3-volodymyr_babchuk@epam.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
> This lock protects alldevs_list of struct pci_seg. As this, it should
> be used when we are adding, removing on enumerating PCI devices
> assigned to a PCI segment.
> 
> Radix tree that stores PCI segment has own locking mechanism, also
> pci_seg structures are only allocated and newer freed, so we need no
> additional locking to access pci_seg structures. But we need a lock
> that protects alldevs_list field.
> 
> This enables more granular locking instead of one huge pcidevs_lock
> that locks entire PCI subsystem.  Please note that pcidevs_lock() is
> still used, we are going to remove it in subsequent patches.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
>  xen/drivers/passthrough/pci.c | 20 +++++++++++++++++++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 4366f8f965..2dfa1c2875 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -38,6 +38,7 @@
>  
>  struct pci_seg {
>      struct list_head alldevs_list;
> +    spinlock_t alldevs_lock;
>      u16 nr;
>      unsigned long *ro_map;
>      /* bus2bridge_lock protects bus2bridge array */
> @@ -93,6 +94,7 @@ static struct pci_seg *alloc_pseg(u16 seg)
>      pseg->nr = seg;
>      INIT_LIST_HEAD(&pseg->alldevs_list);
>      spin_lock_init(&pseg->bus2bridge_lock);
> +    spin_lock_init(&pseg->alldevs_lock);
>  
>      if ( radix_tree_insert(&pci_segments, seg, pseg) )
>      {
> @@ -385,9 +387,13 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
>      unsigned int pos;
>      int rc;
>  
> +    spin_lock(&pseg->alldevs_lock);
>      list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>          if ( pdev->bus == bus && pdev->devfn == devfn )
> +        {
> +            spin_unlock(&pseg->alldevs_lock);
>              return pdev;
> +        }
>  
>      pdev = xzalloc(struct pci_dev);
>      if ( !pdev )

Here there is a missing spin_unlock on the error path


> @@ -404,10 +410,12 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
>      if ( rc )
>      {
>          xfree(pdev);
> +        spin_unlock(&pseg->alldevs_lock);
>          return NULL;
>      }
>  
>      list_add(&pdev->alldevs_list, &pseg->alldevs_list);
> +    spin_unlock(&pseg->alldevs_lock);
>  
>      /* update bus2bridge */
>      switch ( pdev->type = pdev_type(pseg->nr, bus, devfn) )
> @@ -611,15 +619,20 @@ struct pci_dev *pci_get_pdev(struct domain *d, pci_sbdf_t sbdf)
>       */
>      if ( !d || is_hardware_domain(d) )
>      {
> -        const struct pci_seg *pseg = get_pseg(sbdf.seg);
> +        struct pci_seg *pseg = get_pseg(sbdf.seg);
>  
>          if ( !pseg )
>              return NULL;
>  
> +        spin_lock(&pseg->alldevs_lock);
>          list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>              if ( pdev->sbdf.bdf == sbdf.bdf &&
>                   (!d || pdev->domain == d) )
> +            {
> +                spin_unlock(&pseg->alldevs_lock);
>                  return pdev;
> +            }
> +        spin_unlock(&pseg->alldevs_lock);
>      }
>      else
>      {
> @@ -893,6 +906,7 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>          return -ENODEV;
>  
>      pcidevs_lock();
> +    spin_lock(&pseg->alldevs_lock);
>      list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>          if ( pdev->bus == bus && pdev->devfn == devfn )
>          {
> @@ -907,10 +921,12 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>              }
>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
>              free_pdev(pseg, pdev);
> +            list_del(&pdev->alldevs_list);

use after free: free_pdev is freeing pdev

>              break;
>          }
>  
>      pcidevs_unlock();
> +    spin_unlock(&pseg->alldevs_lock);

lock inversion


>      return ret;
>  }
>  
> @@ -1363,6 +1379,7 @@ static int cf_check _dump_pci_devices(struct pci_seg *pseg, void *arg)
>  
>      printk("==== segment %04x ====\n", pseg->nr);
>  
> +    spin_lock(&pseg->alldevs_lock);
>      list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>      {
>          printk("%pp - ", &pdev->sbdf);
> @@ -1376,6 +1393,7 @@ static int cf_check _dump_pci_devices(struct pci_seg *pseg, void *arg)
>          pdev_dump_msi(pdev);
>          printk("\n");
>      }
> +    spin_unlock(&pseg->alldevs_lock);
>  
>      return 0;
>  }
> -- 
> 2.36.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 23:53:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 23:53:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485243.752316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLC3A-0008Vs-PP; Thu, 26 Jan 2023 23:53:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485243.752316; Thu, 26 Jan 2023 23:53:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLC3A-0008Vl-Md; Thu, 26 Jan 2023 23:53:00 +0000
Received: by outflank-mailman (input) for mailman id 485243;
 Thu, 26 Jan 2023 23:52:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xXdJ=5X=koconnor.net=kevin@srs-se1.protection.inumbo.net>)
 id 1pLC38-0008Vf-J9
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 23:52:58 +0000
Received: from mail-qt1-x830.google.com (mail-qt1-x830.google.com
 [2607:f8b0:4864:20::830])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8da7cf04-9dd4-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 00:52:56 +0100 (CET)
Received: by mail-qt1-x830.google.com with SMTP id jr19so2775298qtb.7
 for <xen-devel@lists.xenproject.org>; Thu, 26 Jan 2023 15:52:56 -0800 (PST)
Received: from localhost ([64.18.11.71]) by smtp.gmail.com with ESMTPSA id
 pj4-20020a05620a1d8400b0070648cf78bdsm1845526qkn.54.2023.01.26.15.52.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 26 Jan 2023 15:52:54 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8da7cf04-9dd4-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=koconnor.net; s=google;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=UroPNeyGolPSc4AeIUJptAUxE2BU/L0w0GU3C1PJzJc=;
        b=KVRXSZlcq9t+wUG/QENleViQ72Qmw5LwKgEqKoPBm/sOgrjGjIkRYDp2+ZPwyO3B8I
         M0tTrWHY/Ps1cZxrrNYNcy/pQP5RqkiBilyAOW93nlovnDDp6Zf+YBHAfbvNRiSoPmWw
         Z00f58fy/xSV3sSdiuQ5iindJwIPSpUzJ4Fuk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=UroPNeyGolPSc4AeIUJptAUxE2BU/L0w0GU3C1PJzJc=;
        b=K93ECvnM9PMTlHi/RSnLU89h1rNIiizFtf1Bn1hbPb7wHiP+9niowE88U0UI6Vjiml
         cVlxwFBwjj0MSk6oDM0s5jd6pqiOCtnYs4P+icALbpEALC5sSSW2vRg637BF/9ghZROm
         HPDRxhIsYVfOxFRMY7m0jgM6wyEkBI9b55OptbaBP0+NlppspktNvHeoSzrjFdUZAdHi
         ZiKe4XpOewCjer7c7Ca/WuzjvFLxiBvj+MxVb6YlwN0ZtYUgvj7Rj2+Wz+P1PyIBhSXg
         nsWLp8rD1rMVPqd9I3NHqY4oCEYx6sqw10MHyGOEgIvDDEWqbVSfuoho0n8OcgOT7qYF
         BEHQ==
X-Gm-Message-State: AFqh2kqGQWzBMkyecpXdw6Tb3A4HQfZrNy2rUu/XfiRswUXqifqNKGx7
	zNU5hy3vPjahdaI8adpWXLIIuw==
X-Google-Smtp-Source: AMrXdXsrfVzASVRgvjjBI+Fccg+qIz2lwVVPSChxGQajRz8mS4PHXRer1c82HYJIpg8zeOQLroq52A==
X-Received: by 2002:ac8:4a0a:0:b0:3b6:2d6d:3546 with SMTP id x10-20020ac84a0a000000b003b62d6d3546mr53029151qtq.64.1674777174879;
        Thu, 26 Jan 2023 15:52:54 -0800 (PST)
Date: Thu, 26 Jan 2023 18:52:53 -0500
From: Kevin O'Connor <kevin@koconnor.net>
To: David Woodhouse <dwmw2@infradead.org>
Cc: seabios <seabios@seabios.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	qemu-devel <qemu-devel@nongnu.org>, paul <paul@xen.org>
Subject: Re: [SeaBIOS] [SeaBIOS PATCH] xen: require Xen info structure at
 0x1000 to detect Xen
Message-ID: <Y9MSVYx4sN1dMRbn@morn>
References: <feef99dd2e1a5dce004d22baf07d716d6ea1344c.camel@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <feef99dd2e1a5dce004d22baf07d716d6ea1344c.camel@infradead.org>

On Fri, Jan 20, 2023 at 11:33:19AM +0000, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> When running under Xen, hvmloader places a table at 0x1000 with the e820
> information and BIOS tables. If this isn't present, SeaBIOS will 
> currently panic.
> 
> We now have support for running Xen guests natively in QEMU/KVM, which
> boots SeaBIOS directly instead of via hvmloader, and does not provide
> the same structure.
> 
> As it happens, this doesn't matter on first boot. because although we
> set PlatformRunningOn to PF_QEMU|PF_XEN, reading it back again still
> gives zero. Presumably because in true Xen, this is all already RAM. But
> in QEMU with a faithfully-emulated PAM config in the host bridge, it's
> still in ROM mode at this point so we don't see what we've just written.
> 
> On reboot, however, the region *is* set to RAM mode and we do see the
> updated value of PlatformRunningOn, do manage to remember that we've
> detected Xen in CPUID, and hit the panic.
> 
> It's not trivial to detect QEMU vs. real Xen at the time xen_preinit()
> runs, because it's so early. We can't even make a XENVER_extraversion
> hypercall to look for hints, because we haven't set up the hypercall
> page (and don't have an allocator to give us a page in which to do so).
> 
> So just make Xen detection contingent on the info structure being
> present. If it wasn't, we were going to panic anyway. That leaves us
> taking the standard QEMU init path for Xen guests in native QEMU,
> which is just fine.
> 
> Untested on actual Xen but ObviouslyCorrect™.

Thanks.  I don't have a way to test this, but it looks fine to me.
I'll give it a few more days to see if others have comments, and
otherwise look to commit it.

Cheers,
-Kevin


From xen-devel-bounces@lists.xenproject.org Thu Jan 26 23:56:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Jan 2023 23:56:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485248.752326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLC6S-0000tV-8V; Thu, 26 Jan 2023 23:56:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485248.752326; Thu, 26 Jan 2023 23:56:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLC6S-0000tO-5t; Thu, 26 Jan 2023 23:56:24 +0000
Received: by outflank-mailman (input) for mailman id 485248;
 Thu, 26 Jan 2023 23:56:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HSUn=5X=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLC6R-0000tE-A2
 for xen-devel@lists.xenproject.org; Thu, 26 Jan 2023 23:56:23 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07ba4d61-9dd5-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 00:56:20 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 34110619B6;
 Thu, 26 Jan 2023 23:56:18 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EC114C433EF;
 Thu, 26 Jan 2023 23:56:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07ba4d61-9dd5-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674777377;
	bh=DwfCsZqi11xOsBibNjExjGi09Wzh6P6UdIyV2qy6aZk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=RTKzyjZG3PuPEN19kpNkRjAbGF0P95/sJ4tQc7bokyg8MJHpNlj2zDA7+i1cjkEmV
	 tTxkkSPlwVjhHJTIxzzfnYyotTF+hAKMIWZryb5ktwwmgerTqgw4T91rbVYBGuTSKI
	 +Y/hXhey+SBJP1Z7M5qT9dO82TzLBjfu4MjwOoTtbojOpucc451NunQawLFxFyjLAy
	 7XVeiAi1G2fQB+9U50Q8m3l5yQ1ijAVtqk0Ko2kgQZVCKNFw/TAcGsCY1wgEAvd72j
	 UqF70clJCAEdZf+hVBZ/Q0fA3lobPjurxdSLMqoLuYOxRNolom7C5bjU1pMj47JxND
	 kCReDtBcIG8Xg==
Date: Thu, 26 Jan 2023 15:56:13 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    Paul Durrant <paul@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Kevin Tian <kevin.tian@intel.com>
Subject: Re: [RFC PATCH 03/10] xen: pci: introduce ats_list_lock
In-Reply-To: <20220831141040.13231-4-volodymyr_babchuk@epam.com>
Message-ID: <alpine.DEB.2.22.394.2301261541420.1978264@ubuntu-linux-20-04-desktop>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com> <20220831141040.13231-4-volodymyr_babchuk@epam.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
> The ATS subsystem has its own list of PCI devices. As we are going to
> remove the global pcidevs_lock() in favor of more granular locking, we
> need to ensure that this list is protected somehow. To do this, we add
> an additional lock to each IOMMU, as the list to be protected is also
> part of the IOMMU.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
>  xen/drivers/passthrough/amd/iommu.h         |  1 +
>  xen/drivers/passthrough/amd/iommu_detect.c  |  1 +
>  xen/drivers/passthrough/amd/pci_amd_iommu.c |  8 ++++++++
>  xen/drivers/passthrough/pci.c               |  1 +
>  xen/drivers/passthrough/vtd/iommu.c         | 11 +++++++++++
>  xen/drivers/passthrough/vtd/iommu.h         |  1 +
>  xen/drivers/passthrough/vtd/qinval.c        |  3 +++
>  xen/drivers/passthrough/vtd/x86/ats.c       |  3 +++
>  8 files changed, 29 insertions(+)
> 
> diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
> index 8bc3c35b1b..edd6eb52b3 100644
> --- a/xen/drivers/passthrough/amd/iommu.h
> +++ b/xen/drivers/passthrough/amd/iommu.h
> @@ -106,6 +106,7 @@ struct amd_iommu {
>      int enabled;
>  
>      struct list_head ats_devices;
> +    spinlock_t ats_list_lock;
>  };
>  
>  struct ivrs_unity_map {
> diff --git a/xen/drivers/passthrough/amd/iommu_detect.c b/xen/drivers/passthrough/amd/iommu_detect.c
> index 2317fa6a7d..1d6f4f2168 100644
> --- a/xen/drivers/passthrough/amd/iommu_detect.c
> +++ b/xen/drivers/passthrough/amd/iommu_detect.c
> @@ -160,6 +160,7 @@ int __init amd_iommu_detect_one_acpi(
>      }
>  
>      spin_lock_init(&iommu->lock);
> +    spin_lock_init(&iommu->ats_list_lock);
>      INIT_LIST_HEAD(&iommu->ats_devices);
>  
>      iommu->seg = ivhd_block->pci_segment_group;
> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> index 64c016491d..955f3af57a 100644
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -276,7 +276,11 @@ static int __must_check amd_iommu_setup_domain_device(
>           !pci_ats_enabled(iommu->seg, bus, pdev->devfn) )
>      {
>          if ( devfn == pdev->devfn )
> +	{
> +	    spin_lock(&iommu->ats_list_lock);
>              enable_ats_device(pdev, &iommu->ats_devices);
> +	    spin_unlock(&iommu->ats_list_lock);

Code style: hard tabs used for indentation; Xen style wants spaces here.


> +	}
>  
>          amd_iommu_flush_iotlb(devfn, pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
>      }
> @@ -416,7 +420,11 @@ static void amd_iommu_disable_domain_device(const struct domain *domain,
>  
>      if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
>           pci_ats_enabled(iommu->seg, bus, pdev->devfn) )
> +    {
> +	spin_lock(&iommu->ats_list_lock);
>          disable_ats_device(pdev);
> +	spin_unlock(&iommu->ats_list_lock);

Code style: hard tabs used for indentation; Xen style wants spaces here.


> +    }
>  
>      BUG_ON ( iommu->dev_table.buffer == NULL );
>      req_id = get_dma_requestor_id(iommu->seg, PCI_BDF(bus, devfn));
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 2dfa1c2875..b5db5498a1 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -1641,6 +1641,7 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev)
>  {
>      pcidevs_lock();
>  
> +    /* iommu->ats_list_lock is taken by the caller of this function */

This is a locking inversion. Everywhere else we take pcidevs_lock
first, then ats_list_lock. For instance, look at
xen/drivers/passthrough/pci.c:deassign_device, which is called with
pcidevs_lock held and then calls iommu_call(... reassign_device ...),
which ends up taking ats_list_lock.

This is the only exception. I think we need to move the
spin_lock(ats_list_lock) from qinval.c to here.
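For clarity, the required ordering can be sketched with plain pthread
mutexes (illustrative names only, not Xen's actual spinlock API):

```c
#include <pthread.h>

/* Hypothetical stand-ins for pcidevs_lock and iommu->ats_list_lock. */
static pthread_mutex_t pcidevs_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t ats_list_mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * The order used everywhere else in the series: pcidevs first,
 * ats_list second (as in deassign_device -> reassign_device ->
 * disable_ats_device).
 */
static int deassign_path(void)
{
    pthread_mutex_lock(&pcidevs_mutex);
    pthread_mutex_lock(&ats_list_mutex);    /* inner lock */
    /* ... unlink the device from the ATS list ... */
    pthread_mutex_unlock(&ats_list_mutex);
    pthread_mutex_unlock(&pcidevs_mutex);
    return 0;
}

/*
 * The patch's flush-timeout path takes ats_list in qinval.c and only
 * then pcidevs inside iommu_dev_iotlb_flush_timeout().  Run
 * concurrently with deassign_path(), each thread can hold one lock
 * while waiting for the other: a classic ABBA deadlock.  Taking
 * ats_list inside the function, after pcidevs, restores one global
 * order.
 */
```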



>      disable_ats_device(pdev);
>  
>      ASSERT(pdev->domain);
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index fff1442265..42661f22f4 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1281,6 +1281,7 @@ int __init iommu_alloc(struct acpi_drhd_unit *drhd)
>      spin_lock_init(&iommu->lock);
>      spin_lock_init(&iommu->register_lock);
>      spin_lock_init(&iommu->intremap.lock);
> +    spin_lock_init(&iommu->ats_list_lock);
>  
>      iommu->drhd = drhd;
>      drhd->iommu = iommu;
> @@ -1769,7 +1770,11 @@ static int domain_context_mapping(struct domain *domain, u8 devfn,
>          if ( ret > 0 )
>              ret = 0;
>          if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
> +        {
> +            spin_lock(&drhd->iommu->ats_list_lock);
>              enable_ats_device(pdev, &drhd->iommu->ats_devices);
> +            spin_unlock(&drhd->iommu->ats_list_lock);
> +        }
>  
>          break;
>  
> @@ -1977,7 +1982,11 @@ static const struct acpi_drhd_unit *domain_context_unmap(
>                     domain, &PCI_SBDF(seg, bus, devfn));
>          ret = domain_context_unmap_one(domain, iommu, bus, devfn);
>          if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
> +        {
> +            spin_lock(&iommu->ats_list_lock);
>              disable_ats_device(pdev);
> +            spin_unlock(&iommu->ats_list_lock);
> +        }
>  
>          break;
>  
> @@ -2374,7 +2383,9 @@ static int cf_check intel_iommu_enable_device(struct pci_dev *pdev)
>      if ( ret <= 0 )
>          return ret;
>  
> +    spin_lock(&drhd->iommu->ats_list_lock);
>      ret = enable_ats_device(pdev, &drhd->iommu->ats_devices);
> +    spin_unlock(&drhd->iommu->ats_list_lock);
>  
>      return ret >= 0 ? 0 : ret;
>  }
> diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
> index 78aa8a96f5..2a7a4c1b58 100644
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -506,6 +506,7 @@ struct vtd_iommu {
>      } flush;
>  
>      struct list_head ats_devices;
> +    spinlock_t ats_list_lock;
>      unsigned long *pseudo_domid_map; /* "pseudo" domain id bitmap */
>      unsigned long *domid_bitmap;  /* domain id bitmap */
>      domid_t *domid_map;           /* domain id mapping array */
> diff --git a/xen/drivers/passthrough/vtd/qinval.c b/xen/drivers/passthrough/vtd/qinval.c
> index 4f9ad136b9..6e876348db 100644
> --- a/xen/drivers/passthrough/vtd/qinval.c
> +++ b/xen/drivers/passthrough/vtd/qinval.c
> @@ -238,7 +238,10 @@ static int __must_check dev_invalidate_sync(struct vtd_iommu *iommu,
>          if ( d == NULL )
>              return rc;
>  
> +	spin_lock(&iommu->ats_list_lock);
>          iommu_dev_iotlb_flush_timeout(d, pdev);
> +	spin_unlock(&iommu->ats_list_lock);

Code style: hard tabs used for indentation; Xen style wants spaces here.


>          rcu_unlock_domain(d);
>      }
>      else if ( rc == -ETIMEDOUT )
> diff --git a/xen/drivers/passthrough/vtd/x86/ats.c b/xen/drivers/passthrough/vtd/x86/ats.c
> index 04d702b1d6..55e991183b 100644
> --- a/xen/drivers/passthrough/vtd/x86/ats.c
> +++ b/xen/drivers/passthrough/vtd/x86/ats.c
> @@ -117,6 +117,7 @@ int dev_invalidate_iotlb(struct vtd_iommu *iommu, u16 did,
>      if ( !ecap_dev_iotlb(iommu->ecap) )
>          return ret;
>  
> +    spin_lock(&iommu->ats_list_lock);
>      list_for_each_entry_safe( pdev, temp, &iommu->ats_devices, ats.list )
>      {
>          bool_t sbit;
> @@ -155,12 +156,14 @@ int dev_invalidate_iotlb(struct vtd_iommu *iommu, u16 did,
>              break;
>          default:
>              dprintk(XENLOG_WARNING VTDPREFIX, "invalid vt-d flush type\n");
> +	    spin_unlock(&iommu->ats_list_lock);

Code style: hard tabs used for indentation; Xen style wants spaces here.


>              return -EOPNOTSUPP;
>          }
>  
>          if ( !ret )
>              ret = rc;
>      }
> +    spin_unlock(&iommu->ats_list_lock);
>  
>      return ret;
>  }
> -- 
> 2.36.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 00:43:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 00:43:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485256.752342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLCpw-0007Vt-8M; Fri, 27 Jan 2023 00:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485256.752342; Fri, 27 Jan 2023 00:43:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLCpw-0007Vm-4q; Fri, 27 Jan 2023 00:43:24 +0000
Received: by outflank-mailman (input) for mailman id 485256;
 Fri, 27 Jan 2023 00:43:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BqQd=5Y=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLCpu-0007Vg-Tk
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 00:43:23 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 96f7eaf8-9ddb-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 01:43:18 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id EB694619E2;
 Fri, 27 Jan 2023 00:43:16 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C4543C433D2;
 Fri, 27 Jan 2023 00:43:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96f7eaf8-9ddb-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674780196;
	bh=0UZTKXk99UQAyBs9j4BYQuTWCYKjpinEkTmlxbyNXGE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=l2qjVZR1kqdb4PDpAToT6Cf6CWdaVgodrlgzetM8lObUKdm3/jcuzIH5axPpUGUKh
	 Qxi4qsfIq9rA9ibKCOhbS12c4BtNI1LWX3ifXEOEAZqaibx6ykYJ2+gbaXR+8mouMJ
	 DiVKg3KLCiKa73hZ40EZZdSNyHHNtNtjPffAtBZxKwEbNNl8GG8EsiZido19i0HXMT
	 HpFHRjMX8bw3r0yfe4IaHD3KlB+6EJa5+qVT/dhCFNoAOAkG2C8eFgv25SOFqPJ3H1
	 fFtrNJAly8of6y5IBSYlcmOo8b5vfeBrHLYo/QdeoufDsCe2i7t6iHaMc3JpRWCN2M
	 QBrVYVUQa+15A==
Date: Thu, 26 Jan 2023 16:43:13 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [RFC PATCH 05/10] xen: pci: introduce reference counting for
 pdev
In-Reply-To: <20220831141040.13231-6-volodymyr_babchuk@epam.com>
Message-ID: <alpine.DEB.2.22.394.2301261604370.1978264@ubuntu-linux-20-04-desktop>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com> <20220831141040.13231-6-volodymyr_babchuk@epam.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
> Prior to this change, the lifetime of pci_dev objects was protected by
> the global pcidevs_lock(). We are going to get rid of this lock, so we
> need some other mechanism to ensure that those objects will not
> disappear under the feet of code that accesses them. Reference counting
> is a good choice as it provides an easy-to-comprehend way to control
> object lifetime with better granularity than a global super lock.
> 
> This patch adds two new helper functions: pcidev_get() and
> pcidev_put(). pcidev_get() increases the reference counter, while
> pcidev_put() decreases it, destroying the object when the counter
> reaches zero.
> 
> pcidev_get() should be used only when you already have a valid pointer
> to the object or you are holding a lock that protects one of the
> lists (domain, pseg or ats) that store pci_dev structs.
> 
> pcidev_get() is rarely used directly, because there are already
> functions that provide a valid pointer to a pci_dev struct:
> pci_get_pdev() and pci_get_real_pdev(). They lock the appropriate
> list, find the needed object and increase its reference counter before
> returning it to the caller.
> 
> Naturally, pcidev_put() should be called after finishing working with a
> received object. This is the reason why this patch has so many
> pcidev_put()s and so few pcidev_get()s: existing calls to the
> pci_get_*() functions now increase the reference counter
> automatically, we just need to decrease it back when we are finished.
> 
> This patch removes the "const" qualifier from some pdev pointers
> because pcidev_put() technically alters the contents of the pci_dev
> structure.
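The get/put discipline described above can be modeled in a few lines of
C11; the struct layout, refcnt type and helper names below are
illustrative, not Xen's actual definitions:

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Minimal model of a refcounted pci_dev. */
struct pci_dev {
    atomic_int refcnt;
    /* ... the real structure carries much more state ... */
};

static struct pci_dev *pdev_alloc(void)
{
    struct pci_dev *pdev = calloc(1, sizeof(*pdev));

    if ( pdev )
        atomic_init(&pdev->refcnt, 1);   /* the list's own reference */
    return pdev;
}

static void pcidev_get(struct pci_dev *pdev)
{
    atomic_fetch_add(&pdev->refcnt, 1);
}

/* Returns 1 when the last reference is dropped and the object freed. */
static int pcidev_put(struct pci_dev *pdev)
{
    if ( atomic_fetch_sub(&pdev->refcnt, 1) == 1 )
    {
        free(pdev);
        return 1;
    }
    return 0;
}
```

A lookup helper such as pci_get_pdev() would call pcidev_get() under
the list lock before returning, and the caller balances it with one
pcidev_put() when done.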
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

Hard tabs everywhere in this patch; please convert to spaces per Xen coding style.


> ---
> 
> - Jan, can I add your Suggested-by tag?
> ---
>  xen/arch/x86/hvm/vmsi.c                  |   2 +-
>  xen/arch/x86/irq.c                       |   4 +
>  xen/arch/x86/msi.c                       |  41 ++++++-
>  xen/arch/x86/pci.c                       |   4 +-
>  xen/arch/x86/physdev.c                   |  17 ++-
>  xen/common/sysctl.c                      |   5 +-
>  xen/drivers/passthrough/amd/iommu_init.c |  12 ++-
>  xen/drivers/passthrough/amd/iommu_map.c  |   6 +-
>  xen/drivers/passthrough/pci.c            | 131 +++++++++++++++--------
>  xen/drivers/passthrough/vtd/quirks.c     |   2 +
>  xen/drivers/video/vga.c                  |  10 +-
>  xen/drivers/vpci/vpci.c                  |   6 +-
>  xen/include/xen/pci.h                    |  18 ++++
>  13 files changed, 201 insertions(+), 57 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
> index 75f92885dc..7fb1075673 100644
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -912,7 +912,7 @@ int vpci_msix_arch_print(const struct vpci_msix *msix)
>  
>              spin_unlock(&msix->pdev->vpci->lock);
>              process_pending_softirqs();
> -            /* NB: we assume that pdev cannot go away for an alive domain. */
> +
>              if ( !pdev->vpci || !spin_trylock(&pdev->vpci->lock) )
>                  return -EBUSY;
>              if ( pdev->vpci->msix != msix )
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index cd0c8a30a8..d8672a03e1 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2174,6 +2174,7 @@ int map_domain_pirq(
>                  msi->entry_nr = ret;
>                  ret = -ENFILE;
>              }
> +	    pcidev_put(pdev);

I think it would be better to move the pcidev_put to just after the
done: label, so all the error paths drop the reference in one place:


>              goto done;
>          }
>  
> @@ -2188,6 +2189,7 @@ int map_domain_pirq(
>              msi_desc->irq = -1;
>              msi_free_irq(msi_desc);
>              ret = -EBUSY;
> +	    pcidev_put(pdev);
>              goto done;
>          }
>  
> @@ -2272,10 +2274,12 @@ int map_domain_pirq(
>              }
>              msi_desc->irq = -1;
>              msi_free_irq(msi_desc);
> +	    pcidev_put(pdev);
>              goto done;
>          }
>  
>          set_domain_irq_pirq(d, irq, info);
> +	pcidev_put(pdev);
>          spin_unlock_irqrestore(&desc->lock, flags);
>      }
>      else
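The suggestion to drop the reference once after done: is the usual
single-exit cleanup pattern; a minimal sketch, with invented names and
a stub pcidev_put() standing in for the real helper:

```c
struct pci_dev { int dummy; };

static int put_calls;                 /* stub instrumentation */
static void pcidev_put(struct pci_dev *pdev)
{
    (void)pdev;
    put_calls++;
}

/*
 * Every failure path jumps to "done", so the reference taken by
 * pci_get_pdev() is dropped exactly once, instead of repeating
 * pcidev_put() before each goto as the patch currently does.
 */
static int map_example(struct pci_dev *pdev, int fail)
{
    int ret = 0;

    if ( fail )
    {
        ret = -1;                     /* e.g. -EBUSY in the real code */
        goto done;
    }

    /* ... the successful mapping work ... */

 done:
    pcidev_put(pdev);
    return ret;
}
```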
> diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
> index d0bf63df1d..bccaccb98b 100644
> --- a/xen/arch/x86/msi.c
> +++ b/xen/arch/x86/msi.c
> @@ -572,6 +572,10 @@ int msi_free_irq(struct msi_desc *entry)
>                          virt_to_fix((unsigned long)entry->mask_base));
>  
>      list_del(&entry->list);
> +
> +    /* Corresponds to pcidev_get() in msi[x]_capability_init()  */
> +    pcidev_put(entry->dev);
> +
>      xfree(entry);
>      return 0;
>  }
> @@ -644,6 +648,7 @@ static int msi_capability_init(struct pci_dev *dev,
>              entry[i].msi.mpos = mpos;
>          entry[i].msi.nvec = 0;
>          entry[i].dev = dev;
> +	pcidev_get(dev);
>      }
>      entry->msi.nvec = nvec;
>      entry->irq = irq;
> @@ -703,22 +708,36 @@ static u64 read_pci_mem_bar(u16 seg, u8 bus, u8 slot, u8 func, u8 bir, int vf)
>               !num_vf || !offset || (num_vf > 1 && !stride) ||
>               bir >= PCI_SRIOV_NUM_BARS ||
>               !pdev->vf_rlen[bir] )
> +        {
> +            if ( pdev )
> +                pcidev_put(pdev);
>              return 0;
> +        }
>          base = pos + PCI_SRIOV_BAR;
>          vf -= PCI_BDF(bus, slot, func) + offset;
>          if ( vf < 0 )
> +        {
> +            pcidev_put(pdev);
>              return 0;
> +        }
>          if ( stride )
>          {
>              if ( vf % stride )
> +            {
> +                pcidev_put(pdev);
>                  return 0;
> +            }
>              vf /= stride;
>          }
>          if ( vf >= num_vf )
> +        {
> +            pcidev_put(pdev);
>              return 0;
> +        }
>          BUILD_BUG_ON(ARRAY_SIZE(pdev->vf_rlen) != PCI_SRIOV_NUM_BARS);
>          disp = vf * pdev->vf_rlen[bir];
>          limit = PCI_SRIOV_NUM_BARS;
> +        pcidev_put(pdev);
>      }
>      else switch ( pci_conf_read8(PCI_SBDF(seg, bus, slot, func),
>                                   PCI_HEADER_TYPE) & 0x7f )
> @@ -925,6 +944,8 @@ static int msix_capability_init(struct pci_dev *dev,
>          entry->dev = dev;
>          entry->mask_base = base;
>  
> +	pcidev_get(dev);
> +
>          list_add_tail(&entry->list, &dev->msi_list);
>          *desc = entry;
>      }
> @@ -999,6 +1020,7 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
>  {
>      struct pci_dev *pdev;
>      struct msi_desc *old_desc;
> +    int ret;
>  
>      ASSERT(pcidevs_locked());
>      pdev = pci_get_pdev(NULL, msi->sbdf);
> @@ -1010,6 +1032,7 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
>      {
>          printk(XENLOG_ERR "irq %d already mapped to MSI on %pp\n",
>                 msi->irq, &pdev->sbdf);
> +	pcidev_put(pdev);
>          return -EEXIST;
>      }
>  
> @@ -1020,7 +1043,10 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
>          __pci_disable_msix(old_desc);
>      }
>  
> -    return msi_capability_init(pdev, msi->irq, desc, msi->entry_nr);
> +    ret = msi_capability_init(pdev, msi->irq, desc, msi->entry_nr);
> +    pcidev_put(pdev);
> +
> +    return ret;
>  }
>  
>  static void __pci_disable_msi(struct msi_desc *entry)
> @@ -1054,6 +1080,7 @@ static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
>  {
>      struct pci_dev *pdev;
>      struct msi_desc *old_desc;
> +    int ret;
>  
>      ASSERT(pcidevs_locked());
>      pdev = pci_get_pdev(NULL, msi->sbdf);
> @@ -1061,13 +1088,17 @@ static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
>          return -ENODEV;

A pcidev_put may be missing above: if pdev != NULL but pdev->msix == NULL we return -ENODEV without dropping the reference.


>      if ( msi->entry_nr >= pdev->msix->nr_entries )
> +    {
> +	pcidev_put(pdev);
>          return -EINVAL;
> +    }
>  
>      old_desc = find_msi_entry(pdev, msi->irq, PCI_CAP_ID_MSIX);
>      if ( old_desc )
>      {
>          printk(XENLOG_ERR "irq %d already mapped to MSI-X on %pp\n",
>                 msi->irq, &pdev->sbdf);
> +	pcidev_put(pdev);
>          return -EEXIST;
>      }
>  
> @@ -1078,7 +1109,11 @@ static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
>          __pci_disable_msi(old_desc);
>      }
>  
> -    return msix_capability_init(pdev, msi, desc);
> +    ret = msix_capability_init(pdev, msi, desc);
> +
> +    pcidev_put(pdev);
> +
> +    return ret;
>  }
>  
>  static void _pci_cleanup_msix(struct arch_msix *msix)
> @@ -1161,6 +1196,8 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool off)
>          rc = msix_capability_init(pdev, NULL, NULL);
>      pcidevs_unlock();
>  
> +    pcidev_put(pdev);
> +
>      return rc;
>  }
>  
> diff --git a/xen/arch/x86/pci.c b/xen/arch/x86/pci.c
> index 97b792e578..1d38f0df7c 100644
> --- a/xen/arch/x86/pci.c
> +++ b/xen/arch/x86/pci.c
> @@ -91,8 +91,10 @@ int pci_conf_write_intercept(unsigned int seg, unsigned int bdf,
>      pcidevs_lock();
>  
>      pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bdf));
> -    if ( pdev )
> +    if ( pdev ) {
>          rc = pci_msi_conf_write_intercept(pdev, reg, size, data);
> +	pcidev_put(pdev);
> +    }
>  
>      pcidevs_unlock();
>  
> diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
> index 2f1d955a96..96214a3d40 100644
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -533,7 +533,14 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          pcidevs_lock();
>          pdev = pci_get_pdev(NULL,
>                              PCI_SBDF(0, restore_msi.bus, restore_msi.devfn));
> -        ret = pdev ? pci_restore_msi_state(pdev) : -ENODEV;
> +        if ( pdev )
> +        {
> +            ret = pci_restore_msi_state(pdev);
> +            pcidev_put(pdev);
> +        }
> +        else
> +            ret = -ENODEV;
> +
>          pcidevs_unlock();
>          break;
>      }
> @@ -548,7 +555,13 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>          pcidevs_lock();
>          pdev = pci_get_pdev(NULL, PCI_SBDF(dev.seg, dev.bus, dev.devfn));
> -        ret = pdev ? pci_restore_msi_state(pdev) : -ENODEV;
> +        if ( pdev )
> +        {
> +            ret =  pci_restore_msi_state(pdev);
> +            pcidev_put(pdev);
> +        }
> +        else
> +            ret = -ENODEV;
>          pcidevs_unlock();
>          break;
>      }
> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 02505ab044..0feef94cd2 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -438,7 +438,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>          {
>              physdev_pci_device_t dev;
>              uint32_t node;
> -            const struct pci_dev *pdev;
> +            struct pci_dev *pdev;
>  
>              if ( copy_from_guest_offset(&dev, ti->devs, i, 1) )
>              {
> @@ -456,6 +456,9 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>                  node = pdev->node;
>              pcidevs_unlock();
>  
> +            if ( pdev )
> +                pcidev_put(pdev);
> +
>              if ( copy_to_guest_offset(ti->nodes, i, &node, 1) )
>              {
>                  ret = -EFAULT;
> diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
> index 1f14aaf49e..7c1713a602 100644
> --- a/xen/drivers/passthrough/amd/iommu_init.c
> +++ b/xen/drivers/passthrough/amd/iommu_init.c
> @@ -644,6 +644,7 @@ static void cf_check parse_ppr_log_entry(struct amd_iommu *iommu, u32 entry[])
>  
>      if ( pdev )
>          guest_iommu_add_ppr_log(pdev->domain, entry);
> +    pcidev_put(pdev);
>  }
>  
>  static void iommu_check_ppr_log(struct amd_iommu *iommu)
> @@ -747,6 +748,11 @@ static bool_t __init set_iommu_interrupt_handler(struct amd_iommu *iommu)
>      }
>  
>      pcidevs_lock();
> +    /*
> +     * XXX: it is unclear if this device can be removed. Right now
> +     * there is no code that clears msi.dev, so no one will decrease
> +     * refcount on it.
> +     */
>      iommu->msi.dev = pci_get_pdev(NULL, PCI_SBDF(iommu->seg, iommu->bdf));
>      pcidevs_unlock();
>      if ( !iommu->msi.dev )
> @@ -1272,7 +1278,7 @@ static int __init cf_check amd_iommu_setup_device_table(
>      {
>          if ( ivrs_mappings[bdf].valid )
>          {
> -            const struct pci_dev *pdev = NULL;
> +            struct pci_dev *pdev = NULL;
>  
>              /* add device table entry */
>              iommu_dte_add_device_entry(&dt[bdf], &ivrs_mappings[bdf]);
> @@ -1297,7 +1303,10 @@ static int __init cf_check amd_iommu_setup_device_table(
>                          pdev->msix ? pdev->msix->nr_entries
>                                     : pdev->msi_maxvec);
>                  if ( !ivrs_mappings[bdf].intremap_table )
> +		{
> +		    pcidev_put(pdev);
>                      return -ENOMEM;
> +		}
>  
>                  if ( pdev->phantom_stride )
>                  {
> @@ -1315,6 +1324,7 @@ static int __init cf_check amd_iommu_setup_device_table(
>                              ivrs_mappings[bdf].intremap_inuse;
>                      }
>                  }
> +		pcidev_put(pdev);
>              }
>  
>              amd_iommu_set_intremap_table(
> diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
> index 993bac6f88..9d621e3d36 100644
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -724,14 +724,18 @@ int cf_check amd_iommu_get_reserved_device_memory(
>          if ( !iommu )
>          {
>              /* May need to trigger the workaround in find_iommu_for_device(). */
> -            const struct pci_dev *pdev;
> +            struct pci_dev *pdev;
>  
>              pcidevs_lock();
>              pdev = pci_get_pdev(NULL, sbdf);
>              pcidevs_unlock();
>  
>              if ( pdev )
> +            {
>                  iommu = find_iommu_for_device(seg, bdf);
> +                /* XXX: Should we hold pdev reference till end of the loop? */
> +                pcidev_put(pdev);
> +            }
>              if ( !iommu )
>                  continue;
>          }
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index b5db5498a1..a6c6368769 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -403,6 +403,7 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
>      *((u8*) &pdev->bus) = bus;
>      *((u8*) &pdev->devfn) = devfn;
>      pdev->domain = NULL;
> +    refcnt_init(&pdev->refcnt);
>  
>      arch_pci_init_pdev(pdev);
>  
> @@ -499,33 +500,6 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
>      return pdev;
>  }
>  
> -static void free_pdev(struct pci_seg *pseg, struct pci_dev *pdev)
> -{
> -    /* update bus2bridge */
> -    switch ( pdev->type )
> -    {
> -        unsigned int sec_bus, sub_bus;
> -
> -        case DEV_TYPE_PCIe2PCI_BRIDGE:
> -        case DEV_TYPE_LEGACY_PCI_BRIDGE:
> -            sec_bus = pci_conf_read8(pdev->sbdf, PCI_SECONDARY_BUS);
> -            sub_bus = pci_conf_read8(pdev->sbdf, PCI_SUBORDINATE_BUS);
> -
> -            spin_lock(&pseg->bus2bridge_lock);
> -            for ( ; sec_bus <= sub_bus; sec_bus++ )
> -                pseg->bus2bridge[sec_bus] = pseg->bus2bridge[pdev->bus];
> -            spin_unlock(&pseg->bus2bridge_lock);
> -            break;
> -
> -        default:
> -            break;
> -    }
> -
> -    list_del(&pdev->alldevs_list);
> -    pdev_msi_deinit(pdev);
> -    xfree(pdev);
> -}
> -
>  static void __init _pci_hide_device(struct pci_dev *pdev)
>  {
>      if ( pdev->domain )
> @@ -596,10 +570,15 @@ struct pci_dev *pci_get_real_pdev(pci_sbdf_t sbdf)
>      {
>          if ( !(sbdf.devfn & stride) )
>              continue;

We also need a pcidev_put before continue


> +
>          sbdf.devfn &= ~stride;
> +        pcidev_put(pdev);
>          pdev = pci_get_pdev(NULL, sbdf);
>          if ( pdev && stride != pdev->phantom_stride )
> +        {
> +            pcidev_put(pdev);
>              pdev = NULL;
> +        }
>      }
>  
>      return pdev;
> @@ -629,6 +608,7 @@ struct pci_dev *pci_get_pdev(struct domain *d, pci_sbdf_t sbdf)
>              if ( pdev->sbdf.bdf == sbdf.bdf &&
>                   (!d || pdev->domain == d) )
>              {
> +                pcidev_get(pdev);
>                  spin_unlock(&pseg->alldevs_lock);
>                  return pdev;
>              }
> @@ -640,6 +620,7 @@ struct pci_dev *pci_get_pdev(struct domain *d, pci_sbdf_t sbdf)
>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
>              if ( pdev->sbdf.bdf == sbdf.bdf )
>              {
> +                pcidev_get(pdev);
>                  spin_unlock(&d->pdevs_lock);
>                  return pdev;
>              }
> @@ -754,7 +735,10 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>                              PCI_SBDF(seg, info->physfn.bus,
>                                       info->physfn.devfn));
>          if ( pdev )
> +        {
>              pf_is_extfn = pdev->info.is_extfn;
> +            pcidev_put(pdev);
> +        }
>          pcidevs_unlock();
>          if ( !pdev )
>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
> @@ -920,8 +904,9 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>                  spin_unlock(&pdev->domain->pdevs_lock);
>              }
>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
> -            free_pdev(pseg, pdev);
>              list_del(&pdev->alldevs_list);
> +            pdev_msi_deinit(pdev);
> +            pcidev_put(pdev);
>              break;
>          }
>  
> @@ -952,7 +937,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>      {
>          ret = iommu_quarantine_dev_init(pci_to_dev(pdev));
>          if ( ret )
> -           return ret;
> +            goto out;
>  
>          target = dom_io;
>      }
> @@ -982,6 +967,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>      pdev->fault.count = 0;
>  
>   out:
> +    pcidev_put(pdev);
>      if ( ret )
>          printk(XENLOG_G_ERR "%pd: deassign (%pp) failed (%d)\n",
>                 d, &PCI_SBDF(seg, bus, devfn), ret);
> @@ -1117,7 +1103,10 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
>              pdev->fault.count >>= 1;
>          pdev->fault.time = now;
>          if ( ++pdev->fault.count < PT_FAULT_THRESHOLD )
> +        {
> +            pcidev_put(pdev);
>              pdev = NULL;
> +        }
>      }
>      pcidevs_unlock();
>  
> @@ -1128,6 +1117,8 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
>       * control it for us. */
>      cword = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
>      pci_conf_write16(pdev->sbdf, PCI_COMMAND, cword & ~PCI_COMMAND_MASTER);
> +
> +    pcidev_put(pdev);
>  }
>  
>  /*
> @@ -1246,6 +1237,7 @@ static int __hwdom_init cf_check _setup_hwdom_pci_devices(
>                  printk(XENLOG_WARNING "Dom%d owning %pp?\n",
>                         pdev->domain->domain_id, &pdev->sbdf);
>  
> +            pcidev_put(pdev);
>              if ( iommu_verbose )
>              {
>                  pcidevs_unlock();
> @@ -1495,33 +1487,28 @@ static int iommu_remove_device(struct pci_dev *pdev)
>      return iommu_call(hd->platform_ops, remove_device, devfn, pci_to_dev(pdev));
>  }
>  
> -static int device_assigned(u16 seg, u8 bus, u8 devfn)
> +static int device_assigned(struct pci_dev *pdev)
>  {
> -    struct pci_dev *pdev;
>      int rc = 0;
>  
>      ASSERT(pcidevs_locked());
> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> -
> -    if ( !pdev )
> -        rc = -ENODEV;
>      /*
>       * If the device exists and it is not owned by either the hardware
>       * domain or dom_io then it must be assigned to a guest, or be
>       * hidden (owned by dom_xen).
>       */
> -    else if ( pdev->domain != hardware_domain &&
> -              pdev->domain != dom_io )
> +    if ( pdev->domain != hardware_domain &&
> +         pdev->domain != dom_io )
>          rc = -EBUSY;
>  
>      return rc;
>  }
>  
>  /* Caller should hold the pcidevs_lock */
> -static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> +static int assign_device(struct domain *d, struct pci_dev *pdev, u32 flag)
>  {
>      const struct domain_iommu *hd = dom_iommu(d);
> -    struct pci_dev *pdev;
> +    uint8_t devfn;
>      int rc = 0;
>  
>      if ( !is_iommu_enabled(d) )
> @@ -1532,10 +1519,11 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>  
>      /* device_assigned() should already have cleared the device for assignment */
>      ASSERT(pcidevs_locked());
> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
>      ASSERT(pdev && (pdev->domain == hardware_domain ||
>                      pdev->domain == dom_io));
>  
> +    devfn = pdev->devfn;
> +
>      /* Do not allow broken devices to be assigned to guests. */
>      rc = -EBADF;
>      if ( pdev->broken && d != hardware_domain && d != dom_io )
> @@ -1570,7 +1558,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>   done:
>      if ( rc )
>          printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
> -               d, &PCI_SBDF(seg, bus, devfn), rc);
> +               d, &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>      /* The device is assigned to dom_io so mark it as quarantined */
>      else if ( d == dom_io )
>          pdev->quarantine = true;
> @@ -1710,6 +1698,9 @@ int iommu_do_pci_domctl(
>          ASSERT(d);
>          /* fall through */
>      case XEN_DOMCTL_test_assign_device:
> +    {
> +        struct pci_dev *pdev;
> +
>          /* Don't support self-assignment of devices. */
>          if ( d == current->domain )
>          {
> @@ -1737,26 +1728,36 @@ int iommu_do_pci_domctl(
>          seg = machine_sbdf >> 16;
>          bus = PCI_BUS(machine_sbdf);
>          devfn = PCI_DEVFN(machine_sbdf);
> +        pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> +        if ( !pdev )
> +        {
> +            printk(XENLOG_G_INFO "%pp non-existent\n",
> +                   &PCI_SBDF(seg, bus, devfn));
> +            ret = -EINVAL;
> +            break;
> +        }
>  
>          pcidevs_lock();
> -        ret = device_assigned(seg, bus, devfn);
> +        ret = device_assigned(pdev);
>          if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
>          {
>              if ( ret )
>              {
> -                printk(XENLOG_G_INFO "%pp already assigned, or non-existent\n",
> +                printk(XENLOG_G_INFO "%pp already assigned\n",
>                         &PCI_SBDF(seg, bus, devfn));
>                  ret = -EINVAL;
>              }
>          }
>          else if ( !ret )
> -            ret = assign_device(d, seg, bus, devfn, flags);
> +            ret = assign_device(d, pdev, flags);
> +
> +        pcidev_put(pdev);
>          pcidevs_unlock();
>          if ( ret == -ERESTART )
>              ret = hypercall_create_continuation(__HYPERVISOR_domctl,
>                                                  "h", u_domctl);
>          break;
> -
> +    }
>      case XEN_DOMCTL_deassign_device:
>          /* Don't support self-deassignment of devices. */
>          if ( d == current->domain )
> @@ -1796,6 +1797,46 @@ int iommu_do_pci_domctl(
>      return ret;
>  }
>  
> +static void release_pdev(refcnt_t *refcnt)
> +{
> +    struct pci_dev *pdev = container_of(refcnt, struct pci_dev, refcnt);
> +    struct pci_seg *pseg = get_pseg(pdev->seg);
> +
> +    printk(XENLOG_DEBUG "PCI release device %pp\n", &pdev->sbdf);
> +
> +    /* update bus2bridge */
> +    switch ( pdev->type )
> +    {
> +        unsigned int sec_bus, sub_bus;
> +
> +        case DEV_TYPE_PCIe2PCI_BRIDGE:
> +        case DEV_TYPE_LEGACY_PCI_BRIDGE:
> +            sec_bus = pci_conf_read8(pdev->sbdf, PCI_SECONDARY_BUS);
> +            sub_bus = pci_conf_read8(pdev->sbdf, PCI_SUBORDINATE_BUS);
> +
> +            spin_lock(&pseg->bus2bridge_lock);
> +            for ( ; sec_bus <= sub_bus; sec_bus++ )
> +                pseg->bus2bridge[sec_bus] = pseg->bus2bridge[pdev->bus];
> +            spin_unlock(&pseg->bus2bridge_lock);
> +            break;
> +
> +        default:
> +            break;
> +    }
> +
> +    xfree(pdev);
> +}
> +
> +void pcidev_get(struct pci_dev *pdev)
> +{
> +    refcnt_get(&pdev->refcnt);
> +}
> +
> +void pcidev_put(struct pci_dev *pdev)
> +{
> +    refcnt_put(&pdev->refcnt, release_pdev);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
> index fcc8f73e8b..d240da0416 100644
> --- a/xen/drivers/passthrough/vtd/quirks.c
> +++ b/xen/drivers/passthrough/vtd/quirks.c
> @@ -429,6 +429,8 @@ static int __must_check map_me_phantom_function(struct domain *domain,
>          rc = domain_context_unmap_one(domain, drhd->iommu, 0,
>                                        PCI_DEVFN(dev, 7));
>  
> +    pcidev_put(pdev);
> +
>      return rc;
>  }
>  
> diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
> index 29a88e8241..1298f3a7b6 100644
> --- a/xen/drivers/video/vga.c
> +++ b/xen/drivers/video/vga.c
> @@ -114,7 +114,7 @@ void __init video_endboot(void)
>          for ( bus = 0; bus < 256; ++bus )
>              for ( devfn = 0; devfn < 256; ++devfn )
>              {
> -                const struct pci_dev *pdev;
> +                struct pci_dev *pdev;
>                  u8 b = bus, df = devfn, sb;
>  
>                  pcidevs_lock();
> @@ -126,7 +126,11 @@ void __init video_endboot(void)
>                                       PCI_CLASS_DEVICE) != 0x0300 ||
>                       !(pci_conf_read16(PCI_SBDF(0, bus, devfn), PCI_COMMAND) &
>                         (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) )
> +		{
> +		    if (pdev)
> +			pcidev_put(pdev);
>                      continue;
> +		}
>  
>                  while ( b )
>                  {
> @@ -144,7 +148,10 @@ void __init video_endboot(void)
>                              if ( pci_conf_read16(PCI_SBDF(0, b, df),
>                                                   PCI_BRIDGE_CONTROL) &
>                                   PCI_BRIDGE_CTL_VGA )
> +			    {
> +				pcidev_put(pdev);
>                                  continue;
> +			    }

This is wrong: this pcidev_put() sits inside the inner while loop
rather than the outer devfn loop, and it is unnecessary anyway given
the unconditional pcidev_put() added below.


>                              break;
>                          }
>                          break;
> @@ -157,6 +164,7 @@ void __init video_endboot(void)
>                             bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
>                      pci_hide_device(0, bus, devfn);
>                  }
> +		pcidev_put(pdev);
>              }
>      }
>  
> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> index 7d1f9fd438..59dc55f498 100644
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -313,7 +313,7 @@ static uint32_t merge_result(uint32_t data, uint32_t new, unsigned int size,
>  uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>  {
>      struct domain *d = current->domain;
> -    const struct pci_dev *pdev;
> +    struct pci_dev *pdev;
>      const struct vpci_register *r;
>      unsigned int data_offset = 0;
>      uint32_t data = ~(uint32_t)0;
> @@ -373,6 +373,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>          ASSERT(data_offset < size);
>      }
>      spin_unlock(&pdev->vpci->lock);
> +    pcidev_put(pdev);

I think there is a missing pcidev_put() earlier in vpci_read(), at the
early return:

if ( !pdev || !pdev->vpci )
    return ...

When pdev != NULL but pdev->vpci == NULL, the reference taken by
pci_get_pdev() is leaked.


>      if ( data_offset < size )
>      {
> @@ -416,7 +417,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
>                  uint32_t data)
>  {
>      struct domain *d = current->domain;
> -    const struct pci_dev *pdev;
> +    struct pci_dev *pdev;
>      const struct vpci_register *r;
>      unsigned int data_offset = 0;
>      const unsigned long *ro_map = pci_get_ro_map(sbdf.seg);
> @@ -478,6 +479,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
>          ASSERT(data_offset < size);
>      }
>      spin_unlock(&pdev->vpci->lock);
> +    pcidev_put(pdev);

Same here in vpci_write(): the early return above leaks the reference
when pdev != NULL but pdev->vpci == NULL.


>      if ( data_offset < size )
>          /* Tailing gap, write the remaining. */
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index 19047b4b20..e71a180ef3 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -13,6 +13,7 @@
>  #include <xen/irq.h>
>  #include <xen/pci_regs.h>
>  #include <xen/pfn.h>
> +#include <xen/refcnt.h>
>  #include <asm/device.h>
>  #include <asm/numa.h>
>  
> @@ -116,6 +117,9 @@ struct pci_dev {
>      /* Device misbehaving, prevent assigning it to guests. */
>      bool broken;
>  
> +    /* Reference counter */
> +    refcnt_t refcnt;
> +
>      enum pdev_type {
>          DEV_TYPE_PCI_UNKNOWN,
>          DEV_TYPE_PCIe_ENDPOINT,
> @@ -160,6 +164,14 @@ void pcidevs_lock(void);
>  void pcidevs_unlock(void);
>  bool_t __must_check pcidevs_locked(void);
>  
> +/*
> + * Acquire and release reference to the given device. Holding
> + * reference ensures that device will not disappear under feet, but
> + * does not guarantee that code has exclusive access to the device.
> + */
> +void pcidev_get(struct pci_dev *pdev);
> +void pcidev_put(struct pci_dev *pdev);
> +
>  bool_t pci_known_segment(u16 seg);
>  bool_t pci_device_detect(u16 seg, u8 bus, u8 dev, u8 func);
>  int scan_pci_devices(void);
> @@ -177,8 +189,14 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>  int pci_remove_device(u16 seg, u8 bus, u8 devfn);
>  int pci_ro_device(int seg, int bus, int devfn);
>  int pci_hide_device(unsigned int seg, unsigned int bus, unsigned int devfn);
> +
> +/*
> + * Next two functions will find a requested device and acquire
> + * reference to it. Use pcidev_put() to release the reference.
> + */
>  struct pci_dev *pci_get_pdev(struct domain *d, pci_sbdf_t sbdf);
>  struct pci_dev *pci_get_real_pdev(pci_sbdf_t sbdf);
> +
>  void pci_check_disable_device(u16 seg, u8 bus, u8 devfn);
>  
>  uint8_t pci_conf_read8(pci_sbdf_t sbdf, unsigned int reg);
> -- 
> 2.36.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 02:56:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 02:56:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485265.752358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLEtz-0004uy-Nj; Fri, 27 Jan 2023 02:55:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485265.752358; Fri, 27 Jan 2023 02:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLEtz-0004uq-Hq; Fri, 27 Jan 2023 02:55:43 +0000
Received: by outflank-mailman (input) for mailman id 485265;
 Fri, 27 Jan 2023 02:55:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLEty-0004uN-Kn; Fri, 27 Jan 2023 02:55:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLEty-0005qF-HS; Fri, 27 Jan 2023 02:55:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLEty-0004cD-5X; Fri, 27 Jan 2023 02:55:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLEty-0006vS-4t; Fri, 27 Jan 2023 02:55:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=5HpyWzWoJtiuW1SS6M635kjFwhznt4Hf7IdG1Enno7U=; b=aUscZZoivRPu3zXdGpXI74oIQo
	bagDcoTB8xSnBwJoScvKTrs9qkWLmRIjCy3S5SZ1qsPLNWDq62Uorm3/+a6a+StqqmwLaJIfp+CYC
	VG9jcRtb8JKXZdDk7c/lOvCMHCyvqNVwXuI+akLMZrZIgfHJoMcT4WUXpr1L871v0/Og=;
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-vhd
Message-Id: <E1pLEty-0006vS-4t@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 02:55:42 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-vhd
testid guest-localmigrate

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176228/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-vhd.guest-localmigrate.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-vhd.guest-localmigrate --summary-out=tmp/176228.bisection-summary --basis-template=175994 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-vhd guest-localmigrate
Searching for failure / basis pass:
 176140 fail [host=italia1] / 175994 [host=nocera1] 175987 [host=nocera0] 175734 [host=italia0] 175726 [host=elbling1] 175720 [host=elbling1] 175714 [host=fiano0] 175694 [host=huxelrebe0] 175671 [host=huxelrebe0] 175651 [host=huxelrebe0] 175635 [host=elbling0] 175624 [host=elbling0] 175612 [host=huxelrebe1] 175601 [host=huxelrebe1] 175592 ok.
Failure / basis pass flights: 176140 / 175592
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 671f50ffab3329c5497208da89620322b9721a77
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#1cf02b05b27c48775a25699e61b93b8\
 14b9ae042-625eb5e96dc96aa7fddef59a08edae215527f19c git://xenbits.xen.org/xen.git#671f50ffab3329c5497208da89620322b9721a77-3b760245f74ab2022b1aa4da842c4545228c2e83
Loaded 10003 nodes in revision graph
Searching for test results:
 175592 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 671f50ffab3329c5497208da89620322b9721a77
 175601 [host=huxelrebe1]
 175612 [host=huxelrebe1]
 175624 [host=elbling0]
 175635 [host=elbling0]
 175651 [host=huxelrebe0]
 175671 [host=huxelrebe0]
 175694 [host=huxelrebe0]
 175714 [host=fiano0]
 175720 [host=elbling1]
 175726 [host=elbling1]
 175734 [host=italia0]
 175834 []
 175861 []
 175890 []
 175907 []
 175931 []
 175956 []
 175965 [host=nocera0]
 175988 [host=nocera0]
 175989 [host=nocera0]
 175987 [host=nocera0]
 175994 [host=nocera1]
 176003 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
 176011 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176025 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176035 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176042 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176048 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176056 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176062 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176076 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176091 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176110 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 352c89f72ddb67b8d9d4e492203f8c77f85c8df1
 176121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176141 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 671f50ffab3329c5497208da89620322b9721a77
 176142 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176145 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176140 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176148 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c a1a618208bf53469f5e3eaa14202ba777d33f442
 176150 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 41dbbfb5966f2517916333d1885ee68018161f48
 176152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 321b1b5eb351a5836d26817d7db48052e623b411
 176153 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176155 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176221 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176226 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176227 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176228 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
Searching for interesting versions
 Result found: flight 175592 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f, results HASH(0x5633b9ed3808) HASH(0x5633b9ecee08) HASH(0x5633b9ece060) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96\
 dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x5633b9ea4ed8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 671f50ffab3329c5497208da89620322b9721a77, results HASH(0x5633b9eafb50) HASH(0x5633b9eb19d8) Result found: flight 176003 (fail), for basis failure (at ancestor ~1002)
 Repro found: flight 176141 (pass), for basis pass
 Repro found: flight 176142 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
No revisions left to test, checking graph state.
 Result found: flight 176153 (pass), for last pass
 Result found: flight 176155 (fail), for first failure
 Repro found: flight 176221 (pass), for last pass
 Repro found: flight 176226 (fail), for first failure
 Repro found: flight 176227 (pass), for last pass
 Repro found: flight 176228 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176228/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-vhd.guest-localmigrate.{dot,ps,png,html,svg}.
----------------------------------------
176228: tolerable ALL FAIL

flight 176228 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/176228/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-vhd       17 guest-localmigrate      fail baseline untested


jobs:
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 03:49:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 03:49:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485273.752370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLFkB-0002kS-Lm; Fri, 27 Jan 2023 03:49:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485273.752370; Fri, 27 Jan 2023 03:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLFkB-0002kL-JA; Fri, 27 Jan 2023 03:49:39 +0000
Received: by outflank-mailman (input) for mailman id 485273;
 Fri, 27 Jan 2023 03:49:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLFkA-0002k9-Ad; Fri, 27 Jan 2023 03:49:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLFkA-0006zs-4P; Fri, 27 Jan 2023 03:49:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLFk9-0007r2-NF; Fri, 27 Jan 2023 03:49:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLFk9-00021w-Mp; Fri, 27 Jan 2023 03:49:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nogSXhR7mPKXML4M+IZXrxXqnvV9OIgL8XeUCA9K+Y8=; b=I8b8N5wPxiyVxM5v4RhFcYPiM8
	+pnHqDKH/K4bSGjCIAZDzGxr2lixhrznXTU6hDLDDXo5gc8FuLfV+yO07TH998axWB5r3Aj7nG/Kf
	+j/GixgBwMRowmZTWlp1X5xO307da+cwU/Mo/r0YPPXJbUxFnO83fREpBgtL9455TjdM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176223-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176223: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-vhd:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:host-install:broken:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-cubietruck:capture-logs(22):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:capture-logs:broken:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:capture-logs(9):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:capture-logs(6):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7c46948a6e9cf47ed03b0d489fde894ad46f1437
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 03:49:37 +0000

flight 176223 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176223/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-libvirt-qcow2    <job status>                 broken
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 173462
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 173462
 test-arm64-arm64-xl-credit2     <job status>                 broken  in 176143
 test-arm64-arm64-xl-xsm         <job status>                 broken  in 176143
 test-arm64-arm64-xl-vhd         <job status>                 broken  in 176143
 test-arm64-arm64-xl-seattle     <job status>                 broken  in 176143
 test-arm64-arm64-xl-vhd       8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot         fail in 176143 REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot       fail in 176143 REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot      fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot       fail in 176143 REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot         fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot     fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot       fail in 176143 REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd      5 host-install(5) broken in 176143 pass in 176135
 test-arm64-arm64-xl-seattle  5 host-install(5) broken in 176143 pass in 176135
 test-arm64-arm64-xl-xsm      5 host-install(5) broken in 176143 pass in 176135
 test-arm64-arm64-xl-credit2  5 host-install(5) broken in 176143 pass in 176135
 test-armhf-armhf-xl-arndale   5 host-install(5)          broken pass in 176143
 test-armhf-armhf-xl-credit1   5 host-install(5)          broken pass in 176143
 test-armhf-armhf-libvirt      5 host-install(5)          broken pass in 176143
 test-armhf-armhf-xl           5 host-install(5)          broken pass in 176143
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)        broken pass in 176143
 test-armhf-armhf-examine      5 host-install             broken pass in 176143
 test-armhf-armhf-xl-credit2   5 host-install(5)          broken pass in 176143
 test-armhf-armhf-xl-cubietruck 22 capture-logs(22)       broken pass in 176143
 test-armhf-armhf-xl-vhd       5 host-install(5)          broken pass in 176143
 test-armhf-armhf-xl-multivcpu  5 host-install(5)         broken pass in 176143
 test-armhf-armhf-libvirt-raw  5 host-install(5)          broken pass in 176143
 test-armhf-armhf-examine      6 capture-logs             broken pass in 176143
 test-amd64-amd64-xl-xsm       8 xen-boot         fail in 176135 pass in 176223

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  6 capture-logs(6)     broken blocked in 173462
 test-armhf-armhf-xl-credit1   6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-libvirt      6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-arndale   6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-credit2   6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl           6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-rtds      9 capture-logs(9)       broken blocked in 173462
 test-armhf-armhf-xl-vhd       6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-multivcpu  6 capture-logs(6)      broken blocked in 173462
 test-armhf-armhf-libvirt-raw  6 capture-logs(6)       broken blocked in 173462
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 176143 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 176143 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                7c46948a6e9cf47ed03b0d489fde894ad46f1437
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  111 days
Failing since        173470  2022-10-08 06:21:34 Z  110 days  229 attempts
Testing same since   176135  2023-01-26 00:10:53 Z    1 days    3 attempts

------------------------------------------------------------
3442 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      broken  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step test-armhf-armhf-libvirt-qcow2 capture-logs(6)
broken-step test-armhf-armhf-xl-credit1 capture-logs(6)
broken-step test-armhf-armhf-libvirt capture-logs(6)
broken-step test-armhf-armhf-xl-arndale capture-logs(6)
broken-step test-armhf-armhf-xl-credit2 capture-logs(6)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-step test-armhf-armhf-xl-credit1 host-install(5)
broken-step test-armhf-armhf-libvirt host-install(5)
broken-step test-armhf-armhf-xl host-install(5)
broken-step test-armhf-armhf-xl capture-logs(6)
broken-step test-armhf-armhf-xl-rtds capture-logs(9)
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)
broken-step test-armhf-armhf-examine host-install
broken-step test-armhf-armhf-xl-credit2 host-install(5)
broken-step test-armhf-armhf-xl-cubietruck capture-logs(22)
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)
broken-step test-armhf-armhf-examine capture-logs
broken-step test-armhf-armhf-xl-vhd capture-logs(6)
broken-step test-armhf-armhf-xl-multivcpu capture-logs(6)
broken-step test-armhf-armhf-libvirt-raw capture-logs(6)
broken-job test-arm64-arm64-xl-credit2 broken
broken-job test-arm64-arm64-xl-xsm broken
broken-job test-arm64-arm64-xl-vhd broken
broken-job test-arm64-arm64-xl-seattle broken

Not pushing.

(No revision log; it would be 529026 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 04:11:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 04:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485284.752387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLG4U-0006sw-GL; Fri, 27 Jan 2023 04:10:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485284.752387; Fri, 27 Jan 2023 04:10:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLG4U-0006sp-Db; Fri, 27 Jan 2023 04:10:38 +0000
Received: by outflank-mailman (input) for mailman id 485284;
 Fri, 27 Jan 2023 04:10:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLG4T-0006sf-0v; Fri, 27 Jan 2023 04:10:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLG4T-0007Rz-08; Fri, 27 Jan 2023 04:10:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLG4S-0000o8-Om; Fri, 27 Jan 2023 04:10:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLG4S-0004lv-OL; Fri, 27 Jan 2023 04:10:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tojenUbH89imVLJ1Tfxy4zNhjtTPKFyyN11R1dIr28s=; b=RTkIPYUh25ScdNnsFKxcRDmri5
	1E1S2w8ZN/G25oIi5q5Hffdf/GSzrwa86Cv3by4TxgEJ2lSzFEaGQEHjLM1L323n6+k7ZgpQtWeWX
	UMsI+ntp9qLHkoTUbIy+OUWveiNz+pCNxwMyDqsIYsM9u7mWjonjIjoxr6qsqve+lYMU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176225-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176225: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ca573b86157e7fcd34cd44e79ebd10e89d8b8cc4
X-Osstest-Versions-That:
    ovmf=d0ff1cae3a1ab20ffd5a1d80658c38c113585651
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 04:10:36 +0000

flight 176225 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176225/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ca573b86157e7fcd34cd44e79ebd10e89d8b8cc4
baseline version:
 ovmf                 d0ff1cae3a1ab20ffd5a1d80658c38c113585651

Last test of basis   176144  2023-01-26 09:10:46 Z    0 days
Testing same since   176225  2023-01-26 22:14:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d0ff1cae3a..ca573b8615  ca573b86157e7fcd34cd44e79ebd10e89d8b8cc4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 05:08:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 05:08:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485291.752397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLGyf-0005By-QG; Fri, 27 Jan 2023 05:08:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485291.752397; Fri, 27 Jan 2023 05:08:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLGyf-0005Br-Nb; Fri, 27 Jan 2023 05:08:41 +0000
Received: by outflank-mailman (input) for mailman id 485291;
 Fri, 27 Jan 2023 05:08:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8w6d=5Y=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pLGye-0005Bj-Bu
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 05:08:40 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a6de638d-9e00-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 06:08:37 +0100 (CET)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 329725C0241;
 Fri, 27 Jan 2023 00:08:35 -0500 (EST)
Received: from mailfrontend2 ([10.202.2.163])
 by compute2.internal (MEProxy); Fri, 27 Jan 2023 00:08:35 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 27 Jan 2023 00:08:33 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6de638d-9e00-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1674796115; x=1674882515; bh=Qk3I70q4GCQbspH1CXoRlayOxU4vaCkRWWj
	ciJNXOHs=; b=UoI1Lf6cisrESiDEjHX0jyCFz8HgWlc8IKhQYQ+/WPdSsEM4cC8
	kF3SHkWuYNWFlYhJUkmy+3+F4NVe0PSvrlQVCBgqSLmujy87wDU11uoz7f9vA+Zq
	BgPvkkmjniYxbSNyjn0j+YMeG35bdm8fwcIlBYXHVpY6Q+8vhMfRCT+DSS6McSSR
	rELtCQSyZQCdUztYGurt9LLnEAG/ld3AlofLuQ075wO4s4AWa3toTjjpJdU/Hr1D
	Mcckf6XKvMj++M70lALneACzsr5r3NMKYMjWkAidLtqgs409oEUklIqP+hb1+6Gx
	CipEJTvgact2JfSknwRHFWhqvG9lsvQRAGw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:message-id:mime-version:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1674796115; x=1674882515; bh=Qk3I70q4GCQbs
	pH1CXoRlayOxU4vaCkRWWjciJNXOHs=; b=gj+/hk7a0WHa7zTzV03WZPwCizOzg
	QDX+gbUlW/S2rITQjs7FtoQ3atYZQsJqHn4fYVMNR6h06A900ADZIWBEjwPDw/y/
	nLPPcQ3rFiT9py/Osff/Jo4HTsxNpEI/p7LkUqcoj16ImtMA6ollzDuGsAxzrcXY
	ksJGfW5OF/nWhZuxLxtbQTIkqokRm2KnkilF2p0mx/4J/ZWCB4x8+OSi3Nei+Gwd
	vVC71HUDEMsJQND6YkhZZZORxzdcL7V3DLHamWc5Jo4djVfKzBzwWlY7w1cFi7ym
	F4zrc/Sn3hEN11CfoEf5Jfzcr6oE9nch643GqzCLxg/dMa3XzD4wGNFgA==
X-ME-Sender: <xms:UlzTY1S40NxBomxMWr_EMSJeuH09qck2smw95jB5inDukrCGBUrWZw>
    <xme:UlzTY-wP4JcwD_mtgMeChz-BTyK9ioLblRcgBASt-hAIC_M1ey0ptMBh1AdVAju-l
    SPOWQcRWTXUCg>
X-ME-Received: <xmr:UlzTY60npUVlmQ4NX5BsPnFjF669IKSCBLL2DQ3RjgWdMs9QoY5jIV3AaE6a6zQzZiZmD8rtzmHh2DE72DXMsjUXGdNUudurp7mIzICGHS-hXHFgIekm>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddvhedgkeduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeelkefh
    udelteelleelteetveeffeetffekteetjeehlefggeekleeghefhtdehvdenucevlhhush
    htvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhes
    ihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:U1zTY9C-VuehQ4MCv8wStqp2IO7KnalAuM0ULNuZIMxVDlQ-cADcAw>
    <xmx:U1zTY-jiYfOixCl5nYUd-PncczRo35ydL7wvz60db9raagXBYo9AGA>
    <xmx:U1zTYxqpUHFEKDLm3EFCEAauBLDm-O_6CeCpYz_y9uZAMb2xpp59Rw>
    <xmx:U1zTY_tX1zCEKcHpUlA_xoWzogiirisIhumeCFAPeqiN0A4ksSGN8Q>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [PATCH] hw/xen/xen_pt: fix uninitialized variable
Date: Fri, 27 Jan 2023 06:08:14 +0100
Message-Id: <20230127050815.4155276-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.37.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

xen_pt_config_reg_init() reads only as many bytes as the size of the
register being initialized. It uses xen_host_pci_get_{byte,word,long}
and casts its last argument to the expected pointer type. This means
that for registers smaller than 4 bytes the higher bits of 'val' are
left uninitialized. The function then fails if any of those higher
bits happen to be set.

Fix this by initializing 'val' with zero.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 hw/xen/xen_pt_config_init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index cde898b744..8b9b554352 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -1924,7 +1924,7 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
     if (reg->init) {
         uint32_t host_mask, size_mask;
         unsigned int offset;
-        uint32_t val;
+        uint32_t val = 0;
 
         /* initialize emulate register */
         rc = reg->init(s, reg_entry->reg,
-- 
2.37.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 05:54:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 05:54:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485299.752413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLHgx-0002lk-9v; Fri, 27 Jan 2023 05:54:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485299.752413; Fri, 27 Jan 2023 05:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLHgx-0002ld-5n; Fri, 27 Jan 2023 05:54:27 +0000
Received: by outflank-mailman (input) for mailman id 485299;
 Fri, 27 Jan 2023 05:54:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D8Jc=5Y=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pLHgv-0002lV-Km
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 05:54:25 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0cc1a9d8-9e07-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 06:54:24 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0E4AE21C04;
 Fri, 27 Jan 2023 05:54:23 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D9BEB134F5;
 Fri, 27 Jan 2023 05:54:22 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id wWPFMw5n02PENAAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 27 Jan 2023 05:54:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0cc1a9d8-9e07-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674798863; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=CNsH2eyRIqDjpt3u1MPsyOiSxETn2lLKo6InDVQ1ypo=;
	b=OlKlScu4SW4kHazrJhOsyQkfJD5Ac88Y+bKFjU5G/6rUMaKwzABVjNoMrbl5P7+1WSbIrh
	oVR225T1xbpLQYSlXIzVV7xdJ9eLVar2X7CPsPdPUH4egHvkSO5AjVINDHRBzoApzfPci0
	/6J8Z1NUU/h+EwLgWbYs/gV5ZhDAPZo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2] tools/helpers: don't log errors when trying to load PVH xenstore-stubdom
Date: Fri, 27 Jan 2023 06:54:21 +0100
Message-Id: <20230127055421.22945-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When loading a Xenstore stubdom, the loader doesn't know whether the
kernel to be loaded is a PVH or a PV one. So it tries to load it as a
PVH kernel first, and if this fails, it falls back to loading it as a
PV kernel.

This results in errors being logged whenever the stubdom is a PV kernel.

Suppress those errors by setting the minimum logging level to
"critical" while trying to load the kernel as PVH.

Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- retry PVH loading with logging if PV fails, too (Jan Beulich)
---
 tools/helpers/init-xenstore-domain.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 04e351ca29..4e2f8d4da5 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -31,6 +31,8 @@ static int memory;
 static int maxmem;
 static xen_pfn_t console_gfn;
 static xc_evtchn_port_or_error_t console_evtchn;
+static xentoollog_level minmsglevel = XTL_PROGRESS;
+static void *logger;
 
 static struct option options[] = {
     { "kernel", 1, NULL, 'k' },
@@ -141,19 +143,29 @@ static int build(xc_interface *xch)
         goto err;
     }
 
+    /* Try PVH first, suppress errors by setting min level high. */
     dom->container_type = XC_DOM_HVM_CONTAINER;
+    xtl_stdiostream_set_minlevel(logger, XTL_CRITICAL);
     rv = xc_dom_parse_image(dom);
+    xtl_stdiostream_set_minlevel(logger, minmsglevel);
     if ( rv )
     {
         dom->container_type = XC_DOM_PV_CONTAINER;
         rv = xc_dom_parse_image(dom);
         if ( rv )
         {
-            fprintf(stderr, "xc_dom_parse_image failed\n");
-            goto err;
+            /* Retry PVH, now with normal logging level. */
+            dom->container_type = XC_DOM_HVM_CONTAINER;
+            rv = xc_dom_parse_image(dom);
+            if ( rv )
+            {
+                fprintf(stderr, "xc_dom_parse_image failed\n");
+                goto err;
+            }
         }
     }
-    else
+
+    if ( dom->container_type == XC_DOM_HVM_CONTAINER )
     {
         config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
         config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
@@ -412,8 +424,6 @@ int main(int argc, char** argv)
     char buf[16], be_path[64], fe_path[64];
     int rv, fd;
     char *maxmem_str = NULL;
-    xentoollog_level minmsglevel = XTL_PROGRESS;
-    xentoollog_logger *logger = NULL;
 
     while ( (opt = getopt_long(argc, argv, "v", options, NULL)) != -1 )
     {
@@ -456,9 +466,7 @@ int main(int argc, char** argv)
         return 2;
     }
 
-    logger = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr,
-                                                               minmsglevel, 0);
-
+    logger = xtl_createlogger_stdiostream(stderr, minmsglevel, 0);
     xch = xc_interface_open(logger, logger, 0);
     if ( !xch )
     {
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 07:17:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 07:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485307.752429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLIyL-0004HU-Di; Fri, 27 Jan 2023 07:16:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485307.752429; Fri, 27 Jan 2023 07:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLIyL-0004HN-9J; Fri, 27 Jan 2023 07:16:29 +0000
Received: by outflank-mailman (input) for mailman id 485307;
 Fri, 27 Jan 2023 07:16:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLIyK-0004HH-5z
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 07:16:28 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7d00::61d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8178bdcd-9e12-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 08:16:24 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8608.eurprd04.prod.outlook.com (2603:10a6:102:21b::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Fri, 27 Jan
 2023 07:16:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 07:16:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8178bdcd-9e12-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jpw84VxHUTVE1n8f/imAZkiCH97/a4Uy2H+cxOWUOlzJSquVEgu/gYaQLuQmBsGM9FEnbLM1s9B9cILISWC9kjdce6CgX9+D3uIX6KlXMhC/PljQ3cayS8mnp0mn6BabIXegzD/7xsH1gnFQ6S271awKO2pSekauBktbadYImmITgKXK/gCmfcIECsim+FlPVTSW+xJhFsErQmc5ljuMyYPIR9kHBJAXVwq3KVklrCzH9tN92hpGiJ7aas06Dp3ScmnA6/HE1OihAWsTrFSCYmCP/522bMbAEK6tJUpcfCWdN9KxCcVRG2FQD9CccDBMIWLThA3Oy6gGaPzLdxMfeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=e6QnBkLBpdbM7c7K2c/W//gYzIe+gH0cr7Ap2wwwbNA=;
 b=KWuVO6842/mH3zfkRM0EJToWh4FZ7vV4+ffzlH0hw5+qxmji7aa9oTkFf78/SfKnxgV8PO4tI4oa+1g8LBje0WUqCdFzVY4aG4wETR8uehZp5dRRTXK84mmMjkmmKZsiffqfUk44JxU2WD45/jld9l0s5VL8gZikbscBDNRenTk026lIYCecyeu/0v51tYieiA1NQvJB8tay29MGePag5r5lvZEneqsRd8U4mRMYs3JyOsCp2UyYqgRSC7Le8TQmrkAbH8Z2Ub+a36Ii1RpII9v4aWDuCXlgLQkgbcqAJhrVfLalSsBFmnXcBgeta85Dc8pY1u/cJ2s+lKAo5or8IA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=e6QnBkLBpdbM7c7K2c/W//gYzIe+gH0cr7Ap2wwwbNA=;
 b=0+tvVQewipbAjocVMwrlJgQXwFkhx6HVnIrFR0bgkhT+SyqnXfpqcG0biYaSo5l0I4UYNYExUTylTBb3SDBjmnYQmp3KNOdfLvIfnQ1m62SaUfZN5qe4KQ0tnsnctzlaNKg7j0Q8lBDiHVF5OjcuKtgHx7fXIOdjqKVGGId/1WgAukjrT46Vkoyc7Vxc+4sm7UoxJ/ZoPIje9rtpWOOWxwcWsIDdYpm/c8McEKrPIjSxIrWRtEBkpRXya4ERWHLvBWhbAd7SGT3uzpvIQ1KBbj51WE0JA09p2tMiFWmfuIJuAmcrmwrwlxtbNhSWRYjwmx20v3kYmJaZmrreBJ3eVQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0c4eb4ba-4314-02c7-62d5-b08a3573fcc2@suse.com>
Date: Fri, 27 Jan 2023 08:16:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/x86: public: add TSC defines for cpuid leaf 4
Content-Language: en-US
To: Krister Johansen <kjlx@templeofstupid.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
References: <20230125184506.GE1963@templeofstupid.com>
 <77576aab-93bf-5f6a-9b04-17eaf1d84ffb@suse.com>
 <20230126180244.GB1959@templeofstupid.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230126180244.GB1959@templeofstupid.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0205.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8608:EE_
X-MS-Office365-Filtering-Correlation-Id: c283f1de-a238-47ce-f406-08db003663c7
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CzT2MWIeQOE9o901nB+nPz+Tm3/UYsXEZ88wFLwqVHhDmMagy2zWH+QlfKJxQ9vtdmgaDqLCx/qxKDZqKAiq8opBCXnvshemrSVFx/pJuXATz+VIGKlm6gwxEk5/iBs+kfteonwE/CAt1u7W/vrr1q+E1oysO4w1WoYTXm0vX3EPZ/m2+i3i06Ag7PAPx2fzR+0RePJJvfW3JWOf0qIuWGFIclWPb2S5QUWr0Mb5LKfyE7mLi45JnXBg2cgKYd9+gVQ/fMywQNgZ6qsaAHe/udXcausIFpXLLXDQryzhtRis0wDOPC5iVBEBau/FDBidJH+K5mO5ZArJzCiBcxmPsdE7CIZHQNaZc4MmbZEC7AZL2NhvfMF52GyWBrt/+r4/rTZe6KgAoIa2F6i1uhLfvHJ1loKWbi2zu0YWoTz77J3k47+e2QWM/jZwIxSRKkDN8XyCkFIHMiRmWfKTswsOHJVWPCNadJsXHEW9P9Pndb49pl/BpDEWmGsK4684ziekJJ25qp+9WRLg7Lvg5CpkP7OvsrL8eLG9rp3RKlY2Hxf288o8EnLsaoLLxQKxMKTsVUdjucKm//T9fE/YX4phacKjZgKgo+/wf7VB7wBttW5CykuyuyoXVvlehitmsgkdGkViPHfIb5NhBEr6ROhdWuPnp8O8X+CRTsm9ofiIdFhSRjuQ+HLGUP+khYjZ0OxD48vB4/KsdiuASmiI/bSUchqxSRU0xEs7CSYGOIX2bkc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(396003)(376002)(136003)(346002)(39860400002)(366004)(451199018)(31696002)(36756003)(54906003)(8676002)(6486002)(478600001)(316002)(6506007)(53546011)(66946007)(5660300002)(4326008)(2906002)(41300700001)(66476007)(6916009)(66556008)(8936002)(26005)(38100700002)(6512007)(186003)(2616005)(83380400001)(86362001)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?eFRQZjhTSHRoMTV3RXFBaWlqZk9oaitJSW1kaHAzV3pHa2ZWUWt5VDd0U3BR?=
 =?utf-8?B?K3cyK3c3clpramJhMUE3WmVPR3JmMG9lUm05VGRLcEpkR1g1Q2M4aDJKMFpV?=
 =?utf-8?B?OVcxY1BGMEhNSzRQTlJtSGxpV2E0VXpMbEQxdGZVUzVwTWQrM3lGMFpKMXov?=
 =?utf-8?B?cnNVZ3oweEZpMXpFZUNPUjA2M25hWWtRQ2hadm1UalVFT1dwQlJrSy9XMW8z?=
 =?utf-8?B?Si83L0lVVnhGcmI1bkNEY1FvMmZaWG1lckNra3ZTYzQyTUlLL3dJUW4yRGg1?=
 =?utf-8?B?UXpWWXJiWWE4TmFYVFVNbWNYaTFmaDlCZDloVGt4K2h4cWI4ODEwK2Vrcjd6?=
 =?utf-8?B?TzRKYURxUVRua3dRSUcrRjdid0tHaVJ4K2tnZzE5Q3V0OStNMFZYdktMQlE1?=
 =?utf-8?B?bW5GZTJsWFROdEl5ZnNONU1BQWQxTFlMUUJRclFJUTBMZ3BXNWhBQmxDaUlQ?=
 =?utf-8?B?bVJkbmxPU2tBTGEwbEJsd0FtcmMvT3lFT3VFWDFIZ1JkTjFIbEI5cHFCZm9U?=
 =?utf-8?B?bjhCRjNmVlJ5TXFpSVRtVEN4VjIxKzJOZ1RoYkRRb0o4UUl6YVZEekY1bVZr?=
 =?utf-8?B?WGFBRTE4S1NNWGFlbnBmTGU4Y2kvVmI5aUJKMW1PUTA0MUUvZTVTSFBNVS95?=
 =?utf-8?B?R21zSlpTdTNmdFRPdFpGN0txS3FaZEQra1d1dEhreXY1dldJZUtCeHpuOGNK?=
 =?utf-8?B?MmxWWnZ0eEJGWEZYdWs4V21rVE8vNXB3amNXU3R5eHBCandjOFo5SkJ5aFVt?=
 =?utf-8?B?V2g1YWF0dW1NdG85bzhBQVkvbllMV2FZZ0lEMnNRdnczdE1sZlFxODJ6Q2Jj?=
 =?utf-8?B?UFFYM2Q3R1VUTndOc21PdkVrSUowWndpYUNXYU5GMUw0Uk9wMTNqMVA2OTQ2?=
 =?utf-8?B?MU9rV0IweUdMQkJoREN5b1ozQnZ1bG40ZHl6cEppUWphd09GdDk3SW9XK3Ux?=
 =?utf-8?B?WWJwdUNLcm9EZzN3MXRYM0ZkcVlmY3JyYVBldjljOThFeDFETDk0UmtZbXhi?=
 =?utf-8?B?T21kNFMzN1I2QzRscjVTL2VoSHA2amFnbW9sWTFQZVE1amhJbEFDRnNlQ3Y1?=
 =?utf-8?B?Tk82aEM0TmxRajZNVk9kQ3haZWJ5NDZncE9CMGZ2bEJhcXROYmhEWW5UYTZn?=
 =?utf-8?B?MitVRUtOc1B4Z0E1YVBYRFAvRFVrRjQ2S3pIWkxTTDl4bit3MUxJeUxPZ2dR?=
 =?utf-8?B?bm1LU2FyNUxBdW92NzZ5KytReWJQbmFJeUV5aGsrNXFGSHQvL2hWNFpKVy9i?=
 =?utf-8?B?Rk1odWdLZ0NZS0drRWw1UzQ1M01vbVVBc3RJSmZPd0ptREw1WTdLK3NKZHZt?=
 =?utf-8?B?K1hSRWp0aGVvcGV5dHJFWUxERHAvekhmcmhUUE1GWWpzQjY4SVRuR3BZV2kw?=
 =?utf-8?B?c0JGRHVXc000YjJYY1A1TFl6WjF3MUxKd0M5V3Nzc2grY0g3Skc4anBWc2lK?=
 =?utf-8?B?RjdZQTZ4SGNFZGVraS9Zc2lNdi80V3FkSlVTc0hibGN1Qk9WVHRscm1ncGFL?=
 =?utf-8?B?YjFuUVh6YThzQm5Gd0FUQWVBU3cyWVBxTWNCVWQyd0hxeG1RUHg3WlJlaHE4?=
 =?utf-8?B?ckRsbXRBd1ROMmpTa2hLaXRpZmpVQ2hVOWJHWUlxWVB2QnhENHdsT2phVzha?=
 =?utf-8?B?em1KYjhxZnE2TGlveVpIL2NOV0hHb1UvVGQ0MXpuMHg3cXJYRm02WU41bTRr?=
 =?utf-8?B?VjFsTjRSVmc4LzNOUkpPNUFISU5yMkJwZ2VPamZ1c3RMOWY2N3QyYi81OXZF?=
 =?utf-8?B?Z3k4YVB0TkVXMVdMa3N3OGFKSlA2VU1HWWZjT0NrNFI4bzBWYjBTTmZ1bW1z?=
 =?utf-8?B?eTBiWEZHOE9TWVozUVN0OHVpUW1JN3ZDR0hDQlJLQ0V1R3lMSHlzWkJ5RzY2?=
 =?utf-8?B?a0NWQXl4MndjMTlkY3cwcXhpQWxQeGlwY05QNGM2OEszTjk1MzVkZ1RDTkNq?=
 =?utf-8?B?VCtyTmpyRXVZelJQQWdmemhYcTZxblBqNTlHaTZYSk1QSE1JM2V4djhTVHc4?=
 =?utf-8?B?N2pmVWpNeHpwNDdpcEx6VDZzR3llREpFdlNCOGFFYkVqRGRtaFNyckE3Mzgx?=
 =?utf-8?B?TytyQ1NldjV5Zm4zYVBlQUJMY3A1bW1kdlFJK09TOUQ5WDh5QzZ2L0hxaWpa?=
 =?utf-8?Q?kRruhBl35FDi6xKmcwPRVHeDR?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c283f1de-a238-47ce-f406-08db003663c7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 07:16:20.8151
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rdi+R+6vBr6BIgUmmRmGqktKWUo9rmzKerOkXuWjVGWwHVYtDDpwl72+MtP6B1PymyWWMon45faBRmQTaOdxkA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8608

On 26.01.2023 19:02, Krister Johansen wrote:
> On Thu, Jan 26, 2023 at 10:57:01AM +0100, Jan Beulich wrote:
>> On 25.01.2023 19:45, Krister Johansen wrote:
>>> --- a/xen/include/public/arch-x86/cpuid.h
>>> +++ b/xen/include/public/arch-x86/cpuid.h
>>> @@ -72,6 +72,14 @@
>>>   * Sub-leaf 2: EAX: host tsc frequency in kHz
>>>   */
>>>  
>>> +#define XEN_CPUID_TSC_EMULATED               (1u << 0)
>>> +#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
>>> +#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
>>> +#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
>>> +#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
>>> +#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
>>> +#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)
>>
>> Actually I think we'd better stick to the names found in asm/time.h
>> (and then replace their uses, dropping the #define-s there). If you
>> agree, I'd be happy to make the adjustment while committing.
> 
> Just to confirm, this would be moving these:
> 
>    #define TSC_MODE_DEFAULT          0
>    #define TSC_MODE_ALWAYS_EMULATE   1
>    #define TSC_MODE_NEVER_EMULATE    2
>    
> To cpuid.h?  I'm generally fine with this.  I don't see anything in
> Linux that's using these names.  The only question I have is whether
> we'd still want to prefix the names with XEN so that if they're pulled
> in to Linux it's clear that the define is Xen specific?  E.g. something
> like this perhaps?
> 
>    #define XEN_TSC_MODE_DEFAULT          0
>    #define XEN_TSC_MODE_ALWAYS_EMULATE   1
>    #define XEN_TSC_MODE_NEVER_EMULATE    2
> 
> That does increase the number of files we'd need to touch to make the
> change, though. (And the other defines in that file all start with
> XEN_CPUID).
> 
> Though, if you mean doing it this way:
> 
>    #define XEN_CPUID_TSC_MODE_DEFAULT          0
>    #define XEN_CPUID_TSC_MODE_ALWAYS_EMULATE   1
>    #define XEN_CPUID_TSC_MODE_NEVER_EMULATE    2
>  
> then no objection to that at all.  Apologies for overlooking the naming
> overlap when I put this together the first time.

Yes, it's the last variant you list that I was after. And I'd be okay with
leaving the dropping of the so-far private constants to a separate follow-on patch.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 07:33:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 07:33:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485313.752443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJEC-0006rN-Py; Fri, 27 Jan 2023 07:32:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485313.752443; Fri, 27 Jan 2023 07:32:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJEC-0006rG-NK; Fri, 27 Jan 2023 07:32:52 +0000
Received: by outflank-mailman (input) for mailman id 485313;
 Fri, 27 Jan 2023 07:32:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D8Jc=5Y=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pLJEB-0006pK-2K
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 07:32:51 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cb272f3c-9e14-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 08:32:47 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 008AD21DC8;
 Fri, 27 Jan 2023 07:32:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BD16D138E3;
 Fri, 27 Jan 2023 07:32:45 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 41rDLB1+02MzYAAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 27 Jan 2023 07:32:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb272f3c-9e14-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674804766; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=n76WnWUE/5YrtZqdKfCuwTkAbmeQBtDXu0yYOZuyFm0=;
	b=ue9gaEms7h+5KZTR5o6qE8naxYWtyoVA94IjgLmiULZBhqu3O5QbiMVwUYJ42PSaR1XOb+
	au4pv19pIbxXII5KM6kcR/1+nNo8/+74EA4kBSdqUdOCDMeCuZXtaDaz1Yiyo56fzB29+c
	mq5vcZ/t8nh4BX/huflxH3ksUppDcS8=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] Mini-OS: remove stale subdirs from Makefile
Date: Fri, 27 Jan 2023 08:32:44 +0100
Message-Id: <20230127073244.6883-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The SUBDIRS make variable has some stale entries; remove them.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index f3acdd2f..747c7c01 100644
--- a/Makefile
+++ b/Makefile
@@ -34,7 +34,7 @@ EXTRA_OBJS =
 TARGET := mini-os
 
 # Subdirectories common to mini-os
-SUBDIRS := lib xenbus console
+SUBDIRS := lib
 
 src-$(CONFIG_BLKFRONT) += blkfront.c
 src-$(CONFIG_CONSFRONT) += consfront.c
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 07:34:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 07:34:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485322.752457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJFB-0007X8-8b; Fri, 27 Jan 2023 07:33:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485322.752457; Fri, 27 Jan 2023 07:33:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJFB-0007X1-60; Fri, 27 Jan 2023 07:33:53 +0000
Received: by outflank-mailman (input) for mailman id 485322;
 Fri, 27 Jan 2023 07:33:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D8Jc=5Y=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pLJF9-0007JG-Ao
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 07:33:51 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f01b6503-9e14-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 08:33:49 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 322E321DA3;
 Fri, 27 Jan 2023 07:33:48 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 0495C138E3;
 Fri, 27 Jan 2023 07:33:47 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id HkVtO1t+02OkYAAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 27 Jan 2023 07:33:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f01b6503-9e14-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674804828; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=2F3jMA00rIuMoLB5a5BqPwacl19YT7UGy2nW1Pjamfc=;
	b=njRbIWfll6G2JpZj/M/OWVZX+cvo69nOjKOlQppL9T3HC1pvHTHcufdWAzj6KqMDjTH5CU
	3o1JsuQ5BrxuUzsbnerkUnCRLNX3y1HeP9Tbhr+6KMs57YvOXTepuNu78Zzi0cyTs9JCEg
	7lS2RB5EwFKwptg/jPFxb08bRzTJkjU=
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org,
	wl@xen.org,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] Mini-OS: move xenbus test code into test.c
Date: Fri, 27 Jan 2023 08:33:46 +0100
Message-Id: <20230127073346.6992-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The test code in xenbus.c can easily be moved into test.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 test.c   | 108 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 xenbus.c | 113 -------------------------------------------------------
 2 files changed, 106 insertions(+), 115 deletions(-)

diff --git a/test.c b/test.c
index 42a26661..465c54e8 100644
--- a/test.c
+++ b/test.c
@@ -44,6 +44,7 @@
 #include <fcntl.h>
 #include <xen/features.h>
 #include <xen/version.h>
+#include <xen/io/xs_wire.h>
 
 #ifdef CONFIG_XENBUS
 static unsigned int do_shutdown = 0;
@@ -52,11 +53,114 @@ static DECLARE_WAIT_QUEUE_HEAD(shutdown_queue);
 #endif
 
 #ifdef CONFIG_XENBUS
-void test_xenbus(void);
+/* Send a debug message to xenbus.  Can block. */
+static void xenbus_debug_msg(const char *msg)
+{
+    int len = strlen(msg);
+    struct write_req req[] = {
+        { "print", sizeof("print") },
+        { msg, len },
+        { "", 1 }};
+    struct xsd_sockmsg *reply;
+
+    reply = xenbus_msg_reply(XS_DEBUG, 0, req, ARRAY_SIZE(req));
+    printk("Got a reply, type %d, id %d, len %d.\n",
+           reply->type, reply->req_id, reply->len);
+}
+
+static void do_ls_test(const char *pre)
+{
+    char **dirs, *msg;
+    int x;
+
+    printk("ls %s...\n", pre);
+    msg = xenbus_ls(XBT_NIL, pre, &dirs);
+    if ( msg )
+    {
+        printk("Error in xenbus ls: %s\n", msg);
+        free(msg);
+        return;
+    }
+
+    for ( x = 0; dirs[x]; x++ )
+    {
+        printk("ls %s[%d] -> %s\n", pre, x, dirs[x]);
+        free(dirs[x]);
+    }
+
+    free(dirs);
+}
+
+static void do_read_test(const char *path)
+{
+    char *res, *msg;
+
+    printk("Read %s...\n", path);
+    msg = xenbus_read(XBT_NIL, path, &res);
+    if ( msg )
+    {
+        printk("Error in xenbus read: %s\n", msg);
+        free(msg);
+        return;
+    }
+    printk("Read %s -> %s.\n", path, res);
+    free(res);
+}
+
+static void do_write_test(const char *path, const char *val)
+{
+    char *msg;
+
+    printk("Write %s to %s...\n", val, path);
+    msg = xenbus_write(XBT_NIL, path, val);
+    if ( msg )
+    {
+        printk("Result %s\n", msg);
+        free(msg);
+    }
+    else
+        printk("Success.\n");
+}
+
+static void do_rm_test(const char *path)
+{
+    char *msg;
+
+    printk("rm %s...\n", path);
+    msg = xenbus_rm(XBT_NIL, path);
+    if ( msg )
+    {
+        printk("Result %s\n", msg);
+        free(msg);
+    }
+    else
+        printk("Success.\n");
+}
 
 static void xenbus_tester(void *p)
 {
-    test_xenbus();
+    printk("Doing xenbus test.\n");
+    xenbus_debug_msg("Testing xenbus...\n");
+
+    printk("Doing ls test.\n");
+    do_ls_test("device");
+    do_ls_test("device/vif");
+    do_ls_test("device/vif/0");
+
+    printk("Doing read test.\n");
+    do_read_test("device/vif/0/mac");
+    do_read_test("device/vif/0/backend");
+
+    printk("Doing write test.\n");
+    do_write_test("device/vif/0/flibble", "flobble");
+    do_read_test("device/vif/0/flibble");
+    do_write_test("device/vif/0/flibble", "widget");
+    do_read_test("device/vif/0/flibble");
+
+    printk("Doing rm test.\n");
+    do_rm_test("device/vif/0/flibble");
+    do_read_test("device/vif/0/flibble");
+    printk("(Should have said ENOENT)\n");
 }
 #endif
 
diff --git a/xenbus.c b/xenbus.c
index aa1fe7bf..81e9b65d 100644
--- a/xenbus.c
+++ b/xenbus.c
@@ -964,119 +964,6 @@ domid_t xenbus_get_self_id(void)
     return ret;
 }
 
-#ifdef CONFIG_TEST
-/* Send a debug message to xenbus.  Can block. */
-static void xenbus_debug_msg(const char *msg)
-{
-    int len = strlen(msg);
-    struct write_req req[] = {
-        { "print", sizeof("print") },
-        { msg, len },
-        { "", 1 }};
-    struct xsd_sockmsg *reply;
-
-    reply = xenbus_msg_reply(XS_DEBUG, 0, req, ARRAY_SIZE(req));
-    printk("Got a reply, type %d, id %d, len %d.\n",
-           reply->type, reply->req_id, reply->len);
-}
-
-static void do_ls_test(const char *pre)
-{
-    char **dirs, *msg;
-    int x;
-
-    printk("ls %s...\n", pre);
-    msg = xenbus_ls(XBT_NIL, pre, &dirs);
-    if ( msg )
-    {
-        printk("Error in xenbus ls: %s\n", msg);
-        free(msg);
-        return;
-    }
-
-    for ( x = 0; dirs[x]; x++ )
-    {
-        printk("ls %s[%d] -> %s\n", pre, x, dirs[x]);
-        free(dirs[x]);
-    }
-
-    free(dirs);
-}
-
-static void do_read_test(const char *path)
-{
-    char *res, *msg;
-
-    printk("Read %s...\n", path);
-    msg = xenbus_read(XBT_NIL, path, &res);
-    if ( msg )
-    {
-        printk("Error in xenbus read: %s\n", msg);
-        free(msg);
-        return;
-    }
-    printk("Read %s -> %s.\n", path, res);
-    free(res);
-}
-
-static void do_write_test(const char *path, const char *val)
-{
-    char *msg;
-
-    printk("Write %s to %s...\n", val, path);
-    msg = xenbus_write(XBT_NIL, path, val);
-    if ( msg )
-    {
-        printk("Result %s\n", msg);
-        free(msg);
-    }
-    else
-        printk("Success.\n");
-}
-
-static void do_rm_test(const char *path)
-{
-    char *msg;
-
-    printk("rm %s...\n", path);
-    msg = xenbus_rm(XBT_NIL, path);
-    if ( msg )
-    {
-        printk("Result %s\n", msg);
-        free(msg);
-    }
-    else
-        printk("Success.\n");
-}
-
-/* Simple testing thing */
-void test_xenbus(void)
-{
-    printk("Doing xenbus test.\n");
-    xenbus_debug_msg("Testing xenbus...\n");
-
-    printk("Doing ls test.\n");
-    do_ls_test("device");
-    do_ls_test("device/vif");
-    do_ls_test("device/vif/0");
-
-    printk("Doing read test.\n");
-    do_read_test("device/vif/0/mac");
-    do_read_test("device/vif/0/backend");
-
-    printk("Doing write test.\n");
-    do_write_test("device/vif/0/flibble", "flobble");
-    do_read_test("device/vif/0/flibble");
-    do_write_test("device/vif/0/flibble", "widget");
-    do_read_test("device/vif/0/flibble");
-
-    printk("Doing rm test.\n");
-    do_rm_test("device/vif/0/flibble");
-    do_read_test("device/vif/0/flibble");
-    printk("(Should have said ENOENT)\n");
-}
-#endif /* CONFIG_TEST */
-
 /*
  * Local variables:
  * mode: C
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 07:38:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 07:38:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485329.752469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJJY-0008QQ-Py; Fri, 27 Jan 2023 07:38:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485329.752469; Fri, 27 Jan 2023 07:38:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJJY-0008QJ-NM; Fri, 27 Jan 2023 07:38:24 +0000
Received: by outflank-mailman (input) for mailman id 485329;
 Fri, 27 Jan 2023 07:38:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLJJY-0008QD-6c
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 07:38:24 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2042.outbound.protection.outlook.com [40.107.8.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 931a776e-9e15-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 08:38:22 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9151.eurprd04.prod.outlook.com (2603:10a6:150:26::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.24; Fri, 27 Jan
 2023 07:38:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 07:38:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 931a776e-9e15-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GiVZTLmQ2r5w7tItkwTeeCvNU2Rkpg6tPZ/4aWKLfBlbdXHrFClYnqpAf123s1ewXt2owOXqKeu60sv4XJlgOTvttHXi45O7TJCyu7wQMwFVRBvPxejfB7/7Dy4omLjbQ4x0ngmDxZ7Q98tlNu4QaXdTof9fua1rqE0BmIY7QAePFUFVH3I/m3nLl74efwEflp5qrOl6fsqpRu1u1GP7ypQyOWIqMnU9uamQL/2bI1frxjRTAaIQH7PYqiHGVG7lay8PPkRF5Sb0cs2sMf2bI/nKKsvgzJ6W/frzwMctsY1xBSPLnAovEhKO/0jQ+ZQwBz4QxFt7GFo/nwsi6fTQWA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Z8t9bFBJRxiZyuwbPCM+0ENqAXQjTS1a8sn/vV2zYRw=;
 b=iae+yINI3JCYGQewehKXA3V5aZF++55ZYq5erY9cwpR145MHsfovJIAosfnxaO9sr/ODG4MASgsqa7tvgacKei628pcOflzsnd5lV3OJ5ciZ0rUSvuq7fUcG6TVXxJsjJVlNHNQwjLVv2JDOcqEvOoTd+KeMrBbBlwdNMY21Ik+8uoZl1C9CvhE9xklniLnFsT9pRYXVxenxoL0OaUv7wb3rOsSFZJKIegadekPSg/NotUxhimpWriR8CMjNDeC52ii1g3vPcBh06cwIxgut9cyo/CAXLtqoxh9rLqUWR89NTew5trs9kkG16m4UCxZTGEYDW1YTk8fARu8YyGOlxA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z8t9bFBJRxiZyuwbPCM+0ENqAXQjTS1a8sn/vV2zYRw=;
 b=mSMMc3yctLkyfavNSd7xEm/dTRyrmV42wRhi2WRpQ2ZtSLHCvsH1DvAI6izaX/EDkWeRtAJKHbBKQXYuXKqhqauKPNkUmgyULcT2YcLo6RK55lDUcrAzpQC8o3grBsle5ySC5iTI68jDsVs2joCNR3PgJyRFxRIANqvTeL73G6Vg3/l/U4wuC0uZ2lKfZ2lnQyE077fO0Hsn5OWnuCDZnksP9XdgMPmf6iG8ShUkUzwdLkQZqlK5nPKwbUVk6bu6NZBQZ3JgzfjEGVw5KMebM1dc9nJ+pvKlSgLVbWoqJ0afJZm0o0vSrJxb6i9rbaRpEDgSWeI6DHZmazYxE1vpEg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <db97da84-5f23-e7ed-119b-74aed02fb573@suse.com>
Date: Fri, 27 Jan 2023 08:38:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
Content-Language: en-US
To: Stefano Stabellini <stefano.stabellini@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, george.dunlap@citrix.com,
 andrew.cooper3@citrix.com, roger.pau@citrix.com, Bertrand.Marquis@arm.com,
 julien@xen.org, xen-devel@lists.xenproject.org
References: <20230125205735.2662514-1-sstabellini@kernel.org>
 <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com>
 <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop>
 <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
 <alpine.DEB.2.22.394.2301261041440.1978264@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2301261041440.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0143.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GV1PR04MB9151:EE_
X-MS-Office365-Filtering-Correlation-Id: 3de67a69-24f1-47a1-083b-08db003975c2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3de67a69-24f1-47a1-083b-08db003975c2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 07:38:19.3562
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: i19NGQ6oSGTeKNsiDct0srBsm/n0jU8Mpiee4y6HjIfD6Fg7DJym7QQ686W9vl+FHDZCqOxEsA8pMK95bOYr4Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR04MB9151

On 26.01.2023 19:54, Stefano Stabellini wrote:
> Coming back to 18.2: it makes sense for Xen, and the scanners today work
> well with this rule, so I think we are fine.

I disagree. Looking back at the sheet, it says "rule already followed by
the community in most cases" which I assume was based on there being
only very few violations that are presently reported. Now that we've found
the frame_table[] issue, I'm inclined to say that the statement was put
there by mistake (due to that oversight).

Furthermore, I notice that I had put a reference to 18.1 there. And indeed
I (continue to) think that the two can only sensibly be accepted together.
One might actually consider 18.3 in the same group (I have a similar note
there referring to 18.1), but luckily we appear not to have many issues
there, so accepting that ahead of time was hopefully okay.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 07:40:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 07:40:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485334.752480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJLm-0001Ld-79; Fri, 27 Jan 2023 07:40:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485334.752480; Fri, 27 Jan 2023 07:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJLm-0001LW-3i; Fri, 27 Jan 2023 07:40:42 +0000
Received: by outflank-mailman (input) for mailman id 485334;
 Fri, 27 Jan 2023 07:40:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLJLk-0001LM-MW
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 07:40:40 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2053.outbound.protection.outlook.com [40.107.7.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e479af7e-9e15-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 08:40:38 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8544.eurprd04.prod.outlook.com (2603:10a6:102:217::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 07:40:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 07:40:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e479af7e-9e15-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XbP7TRtSJjN8ysXwfq8aTe6cyoq19iP+wv55DnBWPXw3yy/k0arzhN+q+VQF/uQheFNVl5SNRPGvdHOuE2+N9iy7W67eRXNBvdWO2TuGi3H8MwLGUauq0hg8RJshpGMsxESa2tjPCFBy33Rj7pggCRhJXG8fLcLyCIoIbVNoUvud4YfqroyxqIE5t6DEP/CDHNP0CND0ayZl95pA3PI5mV8jqEiFKuM2QCAhqEJjq9xRD+mYsBOAdnxqdiVHD5NbsIpHbv4ATFZcA5f/XZm0Ze7qSQbDTvEJz9VXYkCGd3YBNhwrhhOb+P8bh17NVxw40l4iHX3GRj9sMFA26HpPFQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4WHPETdbjkGibGJtMUoJe2pcFPW4mC74H/kI77gqb8Q=;
 b=KpC5/ecM/+s/EbV9snWtoS4xaBjjDufaJZHtjUGQc5kIgwZXn6Xe0tYsgdWLEwx09JWTY4G8+viSM+MVAKqvOWa21k/wZPbpDTvbPL2wuGMJr5/17MO5+/Fx11glaQuRNE46n9E8Ie88AYV2w89PEwa4uQxQy9cqKHKSsPVdw+X1VXdj+mxkG7r5wA4YcU0tc23B2FFwPzsMar9cm48/VveSESbAy4nDzE9mCQHDqGqwrLbJgXLBfX50bAUS3F+ph8b2btfHAyq3jk+9MWpowjT1Bl4FxAYAPMBLqdNsxDGetrdWHrLl6JfktqrVXWk/+ozZ0yvGI0W57QryMO5gRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4WHPETdbjkGibGJtMUoJe2pcFPW4mC74H/kI77gqb8Q=;
 b=tp1FNNqiR+eS80pFkrsuA0Tdo19tWAMf3Zy+2JsstwYYNQ+tTqkpN6fqZubqxQBRSdAaFUCeX2kXUQuwvttGQeaPDUDU0ZTIr2qDEh1k1wCrlDRpigcaVpeeRV7kY9oYnRVTDzQedDd0WKXfr4YAHZjKq1cSzqctu9C1YJ09K58j1HsQ2Twrb+xz6/+1uWyFdMeMClq1RFQk0oX0nc0UEUL4LthcvKRTOTOFhGX0+hW+7/xZCwCkBWNUKEF1nXgbTuoX6/myXj1a61AYxDwe0iJ0hxiGpC1Y38CogRi6HebYSS67HdenCuJbc6tgQQf+/5448I4lhYJ7oi/joyL/Qg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9ea1c4b9-c02a-f189-f535-bf15d28f57ed@suse.com>
Date: Fri, 27 Jan 2023 08:40:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] tools/helpers: don't log errors when trying to load
 PVH xenstore-stubdom
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230127055421.22945-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230127055421.22945-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0135.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8544:EE_
X-MS-Office365-Filtering-Correlation-Id: cbd583c8-8d55-4122-472b-08db0039c6af
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cbd583c8-8d55-4122-472b-08db0039c6af
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 07:40:36.6130
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: unUFIODMSoDtjV601A5ZD16GAwkA/JFydhhLhSnkW9mAKxdIpoBMXwjAinuz/eYI8oSS9bF4h/AnZhnioHEarw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8544

On 27.01.2023 06:54, Juergen Gross wrote:
> When loading a Xenstore stubdom the loader doesn't know whether the
> to be loaded kernel is a PVH or a PV one. So it tries to load it as
> a PVH one first, and if this fails it loads it as a PV kernel.
> 
> This results in errors being logged in case the stubdom is a PV kernel.
> 
> Suppress those errors by setting the minimum logging level to
> "critical" while trying to load the kernel as PVH.
> 
> Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - retry PVH loading with logging if PV fails, too (Jan Beulich)

I'm sorry to be picky, but shouldn't this be reflected in the description?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 07:51:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 07:51:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485342.752493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJWF-000387-AW; Fri, 27 Jan 2023 07:51:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485342.752493; Fri, 27 Jan 2023 07:51:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJWF-000380-62; Fri, 27 Jan 2023 07:51:31 +0000
Received: by outflank-mailman (input) for mailman id 485342;
 Fri, 27 Jan 2023 07:51:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D8Jc=5Y=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pLJWD-00037u-T5
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 07:51:29 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 67cb96ca-9e17-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 08:51:28 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id CBF2F21D83;
 Fri, 27 Jan 2023 07:51:27 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id A7C28138E3;
 Fri, 27 Jan 2023 07:51:27 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id twLSJn+C02OGaAAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 27 Jan 2023 07:51:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67cb96ca-9e17-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674805887; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=empbKyW0EVxAMyn8UGPwYPl8YjmloVGQm4Ju7WfuG7Q=;
	b=llHbutsG9cVNpVeRLsOdAb1oihQQQheT5EGnAk1shkJnehrM4c5F4Tm9pofALKe8Q1/dBD
	V5L2ORsNX0irf23SUDZha5kCi/OR7ULfuaNGHcB2j2NQmkIDPj+YJY+ZusDCrfllKP+QaS
	4+t+NCw1Yperym0IN6vFKvQXRCtjI6s=
Message-ID: <9ab4f70d-9aff-cfb2-edde-cb79b9834c03@suse.com>
Date: Fri, 27 Jan 2023 08:51:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] tools/helpers: don't log errors when trying to load
 PVH xenstore-stubdom
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230127055421.22945-1-jgross@suse.com>
 <9ea1c4b9-c02a-f189-f535-bf15d28f57ed@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <9ea1c4b9-c02a-f189-f535-bf15d28f57ed@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------YjO6YdBSO2dQV0F6etaW132Z"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------YjO6YdBSO2dQV0F6etaW132Z
Content-Type: multipart/mixed; boundary="------------Mdr0V0B27dqO9at8iOxQMbL6";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
Message-ID: <9ab4f70d-9aff-cfb2-edde-cb79b9834c03@suse.com>
Subject: Re: [PATCH v2] tools/helpers: don't log errors when trying to load
 PVH xenstore-stubdom
References: <20230127055421.22945-1-jgross@suse.com>
 <9ea1c4b9-c02a-f189-f535-bf15d28f57ed@suse.com>
In-Reply-To: <9ea1c4b9-c02a-f189-f535-bf15d28f57ed@suse.com>

--------------Mdr0V0B27dqO9at8iOxQMbL6
Content-Type: multipart/mixed; boundary="------------QbR67bX0WLAdjGiT5MlQincT"

--------------QbR67bX0WLAdjGiT5MlQincT
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.01.23 08:40, Jan Beulich wrote:
> On 27.01.2023 06:54, Juergen Gross wrote:
>> When loading a Xenstore stubdom the loader doesn't know whether the
>> to be loaded kernel is a PVH or a PV one. So it tries to load it as
>> a PVH one first, and if this fails it is loading it as a PV kernel.
>>
>> This results in errors being logged in case the stubdom is a PV kernel.
>>
>> Suppress those errors by setting the minimum logging level to
>> "critical" while trying to load the kernel as PVH.
>>
>> Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - retry PVH loading with logging if PV fails, too (Jan Beulich)
>
> I'm sorry to be picky, but shouldn't this be reflected in the description?

I can add that, but I think looking at the patch itself it is rather
clear, especially with the added comments.

If you still think it should be added, my suggestion would be:

   In case PVH mode and PV mode loading fails, retry PVH mode loading
   without changing the log level in order to get the error messages
   logged.

I can resend, if you want me to, or I'd be fine with above addition
added while committing (if needed).


Juergen
--------------QbR67bX0WLAdjGiT5MlQincT--

--------------Mdr0V0B27dqO9at8iOxQMbL6--


--------------YjO6YdBSO2dQV0F6etaW132Z--


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 07:52:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 07:52:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485345.752503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJWk-0003aB-HL; Fri, 27 Jan 2023 07:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485345.752503; Fri, 27 Jan 2023 07:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJWk-0003a4-EP; Fri, 27 Jan 2023 07:52:02 +0000
Received: by outflank-mailman (input) for mailman id 485345;
 Fri, 27 Jan 2023 07:52:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLJWi-0003YP-Qs
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 07:52:00 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20626.outbound.protection.outlook.com
 [2a01:111:f400:fe16::626])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 79b35d7e-9e17-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 08:51:58 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7419.eurprd04.prod.outlook.com (2603:10a6:102:80::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Fri, 27 Jan
 2023 07:51:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 07:51:57 +0000
X-Inumbo-ID: 79b35d7e-9e17-11ed-b8d1-410ff93cb8f0
Message-ID: <0acfa717-8711-599f-4d29-d92a1466a1db@suse.com>
Date: Fri, 27 Jan 2023 08:51:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3 3/4] x86: limit issuing of IBPB during context switch
Content-Language: en-US
To: Andrew Cooper <amc96@srcf.net>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
 <c39faba2-1ab6-71da-f748-1545aac8290b@suse.com>
 <d0a0960a-f110-c065-753d-9f000ca379a9@srcf.net>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d0a0960a-f110-c065-753d-9f000ca379a9@srcf.net>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0149.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 26.01.2023 21:49, Andrew Cooper wrote:
> On 25/01/2023 3:26 pm, Jan Beulich wrote:
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -2015,7 +2015,8 @@ void context_switch(struct vcpu *prev, s
>>  
>>          ctxt_switch_levelling(next);
>>  
>> -        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) )
>> +        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) &&
>> +             !(prevd->arch.spec_ctrl_flags & SCF_entry_ibpb) )
>>          {
>>              static DEFINE_PER_CPU(unsigned int, last);
>>              unsigned int *last_id = &this_cpu(last);
>>
>>
> 
> The aforementioned naming change makes the (marginal) security hole here
> more obvious.
> 
> When we use entry-IBPB to protect Xen, we only care about the branch
> types in the BTB.  We don't flush the RSB when using the SMEP optimisation.
> 
> Therefore, entry-IBPB is not something which lets us safely skip
> exit-new-pred-context.

Yet what's to be my takeaway? You may be suggesting to drop the patch,
or you may be suggesting to tighten the condition. (My guess would be
the former.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 08:01:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 08:01:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485353.752512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJfy-0005t1-Nb; Fri, 27 Jan 2023 08:01:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485353.752512; Fri, 27 Jan 2023 08:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJfy-0005su-Kx; Fri, 27 Jan 2023 08:01:34 +0000
Received: by outflank-mailman (input) for mailman id 485353;
 Fri, 27 Jan 2023 08:01:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLJfx-0005so-OD
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 08:01:33 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2062d.outbound.protection.outlook.com
 [2a01:111:f400:fe13::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf53fcab-9e18-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 09:01:31 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9406.eurprd04.prod.outlook.com (2603:10a6:20b:4da::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Fri, 27 Jan
 2023 08:01:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 08:01:28 +0000
X-Inumbo-ID: cf53fcab-9e18-11ed-b8d1-410ff93cb8f0
Message-ID: <2579d8d6-acc1-8a03-098c-87c8247016f5@suse.com>
Date: Fri, 27 Jan 2023 09:01:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 01/10] xen: pci: add per-domain pci list lock
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Stewart.Hildebrand@amd.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com>
 <20220831141040.13231-2-volodymyr_babchuk@epam.com>
 <alpine.DEB.2.22.394.2301261435230.1978264@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2301261435230.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0165.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 27.01.2023 00:18, Stefano Stabellini wrote:
> On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
>> @@ -1571,6 +1595,7 @@ static int iommu_get_device_group(
>>          if ( sdev_id < 0 )
>>          {
>>              pcidevs_unlock();
>> +            spin_unlock(&d->pdevs_lock);
> 
> lock inversion
> 
> 
>>              return sdev_id;
>>          }
>>  
>> @@ -1581,6 +1606,7 @@ static int iommu_get_device_group(
>>              if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
>>              {
>>                  pcidevs_unlock();
>> +                spin_unlock(&d->pdevs_lock);
> 
> lock inversion
> 
> 
>>                  return -EFAULT;
>>              }
>>              i++;
>> @@ -1588,6 +1614,7 @@ static int iommu_get_device_group(
>>      }
>>  
>>      pcidevs_unlock();
>> +    spin_unlock(&d->pdevs_lock);
> 
> lock inversion

While from a cosmetic perspective I of course agree that releasing locks
would better be done in the opposite order of acquiring whenever possible,
I'd like to point out that lock release alone is never subject to "lock
order" issues. We do have a couple of cases (I think) where we actually
do so because otherwise respective code would end up uglier, or because
we want to limit locking regions as much as possible (I'm sorry, I don't
have an example to hand).

>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -457,6 +457,7 @@ struct domain
>>  
>>  #ifdef CONFIG_HAS_PCI
>>      struct list_head pdev_list;
>> +    spinlock_t pdevs_lock;
> 
> I think it would be better called "pdev_lock" but OK either way

I'm of two minds here: On one hand the lock is to guard all devices, so
plural looks applicable. Otoh the list head field uses singular as well.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 08:13:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 08:13:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485361.752522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJrR-0007dz-TY; Fri, 27 Jan 2023 08:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485361.752522; Fri, 27 Jan 2023 08:13:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLJrR-0007ds-Qg; Fri, 27 Jan 2023 08:13:25 +0000
Received: by outflank-mailman (input) for mailman id 485361;
 Fri, 27 Jan 2023 08:13:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLJrQ-0007dm-Ts
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 08:13:24 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2080.outbound.protection.outlook.com [40.107.7.80])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 77b73583-9e1a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 09:13:23 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7373.eurprd04.prod.outlook.com (2603:10a6:800:1ab::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Fri, 27 Jan
 2023 08:13:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 08:13:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77b73583-9e1a-11ed-a5d9-ddcf98b90cbd
Message-ID: <97112cd2-d16e-6c9d-7c3d-a3fe5ab76125@suse.com>
Date: Fri, 27 Jan 2023 09:13:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [RFC PATCH 03/10] xen: pci: introduce ats_list_lock
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com>
 <20220831141040.13231-4-volodymyr_babchuk@epam.com>
 <alpine.DEB.2.22.394.2301261541420.1978264@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2301261541420.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 27.01.2023 00:56, Stefano Stabellini wrote:
> On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -1641,6 +1641,7 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev)
>>  {
>>      pcidevs_lock();
>>  
>> +    /* iommu->ats_list_lock is taken by the caller of this function */
> 
> This is a locking inversion. In all other places we take pcidevs_lock
> first, then ats_list_lock lock. For instance look at
> xen/drivers/passthrough/pci.c:deassign_device that is called with
> pcidevs_locked and then calls iommu_call(... reassign_device ...) which
> ends up taking ats_list_lock.
> 
> This is the only exception. I think we need to move the
> spin_lock(ats_list_lock) from qinval.c to here.

The first question here is what the lock is meant to protect: just the list,
or also the ATS state change (i.e. serializing enable and disable against
one another)? In the latter case the lock also wants to be named differently.

The second question is who is to acquire the lock. Why isn't this done _in_
{en,dis}able_ats_device() themselves? That would then allow the locked range
to be reduced further, because at least the pci_find_ext_capability() call
and the final logging can occur without the lock held.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 09:13:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 09:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485367.752536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLKnK-0006V4-ED; Fri, 27 Jan 2023 09:13:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485367.752536; Fri, 27 Jan 2023 09:13:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLKnK-0006Ux-9z; Fri, 27 Jan 2023 09:13:14 +0000
Received: by outflank-mailman (input) for mailman id 485367;
 Fri, 27 Jan 2023 09:13:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLKnJ-0006Un-8G; Fri, 27 Jan 2023 09:13:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLKnJ-0007IQ-5N; Fri, 27 Jan 2023 09:13:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLKnI-0004qw-NZ; Fri, 27 Jan 2023 09:13:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLKnI-0003zd-N4; Fri, 27 Jan 2023 09:13:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176232-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176232: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0d129ef7c3a95d64f2f2cab4f8302318775f9933
X-Osstest-Versions-That:
    ovmf=ca573b86157e7fcd34cd44e79ebd10e89d8b8cc4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 09:13:12 +0000

flight 176232 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176232/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0d129ef7c3a95d64f2f2cab4f8302318775f9933
baseline version:
 ovmf                 ca573b86157e7fcd34cd44e79ebd10e89d8b8cc4

Last test of basis   176225  2023-01-26 22:14:24 Z    0 days
Testing same since   176232  2023-01-27 04:15:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dionna Glaze <dionnaglaze@google.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ca573b8615..0d129ef7c3  0d129ef7c3a95d64f2f2cab4f8302318775f9933 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 09:47:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 09:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485376.752559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLK8-0003T4-8v; Fri, 27 Jan 2023 09:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485376.752559; Fri, 27 Jan 2023 09:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLK8-0003Sx-5e; Fri, 27 Jan 2023 09:47:08 +0000
Received: by outflank-mailman (input) for mailman id 485376;
 Fri, 27 Jan 2023 09:47:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yrR2=5Y=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pLLK6-0003Dw-TY
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 09:47:06 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8f28a56d-9e27-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 10:47:06 +0100 (CET)
Received: by mail-ed1-x535.google.com with SMTP id g11so4197186eda.12
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 01:47:06 -0800 (PST)
Received: from uni.router.wind (adsl-40.109.242.227.tellas.gr.
 [109.242.227.40]) by smtp.googlemail.com with ESMTPSA id
 l23-20020a50d6d7000000b004a0b1d7e39csm2054861edj.51.2023.01.27.01.47.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 01:47:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f28a56d-9e27-11ed-a5d9-ddcf98b90cbd
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] x86/emulate: remove unused svm specific header
Date: Fri, 27 Jan 2023 11:46:55 +0200
Message-Id: <20230127094656.140120-2-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230127094656.140120-1-burzalodowa@gmail.com>
References: <20230127094656.140120-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fixes: 2191599bacb7 ("x86/emul: Simplfy emulation state setup")
Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

I'm not sure that a Fixes tag applies in this case. I've added it mostly
for reference.

 xen/arch/x86/hvm/emulate.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index cb221f70e8..95364deb19 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -24,7 +24,6 @@
 #include <asm/hvm/monitor.h>
 #include <asm/hvm/trace.h>
 #include <asm/hvm/support.h>
-#include <asm/hvm/svm/svm.h>
 #include <asm/iocap.h>
 #include <asm/vm_event.h>
 
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 09:47:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 09:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485377.752569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLKB-0003ju-GK; Fri, 27 Jan 2023 09:47:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485377.752569; Fri, 27 Jan 2023 09:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLKB-0003jh-CN; Fri, 27 Jan 2023 09:47:11 +0000
Received: by outflank-mailman (input) for mailman id 485377;
 Fri, 27 Jan 2023 09:47:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yrR2=5Y=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pLLKA-0003im-MW
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 09:47:10 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9079e61e-9e27-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 10:47:08 +0100 (CET)
Received: by mail-ed1-x533.google.com with SMTP id s3so4247289edd.4
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 01:47:08 -0800 (PST)
Received: from uni.router.wind (adsl-40.109.242.227.tellas.gr.
 [109.242.227.40]) by smtp.googlemail.com with ESMTPSA id
 l23-20020a50d6d7000000b004a0b1d7e39csm2054861edj.51.2023.01.27.01.47.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 01:47:07 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9079e61e-9e27-11ed-b8d1-410ff93cb8f0
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] x86/vpmu: remove unused svm and vmx specific headers
Date: Fri, 27 Jan 2023 11:46:56 +0200
Message-Id: <20230127094656.140120-3-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230127094656.140120-1-burzalodowa@gmail.com>
References: <20230127094656.140120-1-burzalodowa@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fixes: 8c20aca6751b ("x86/vPMU: invoke <vendor>_vpmu_initialise() through a hook as well")
Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
---

I'm not sure that a Fixes tag applies in this case. I've added it mostly
for reference.

 xen/arch/x86/cpu/vpmu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 64cdbfc48c..33e2fca8cd 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -31,10 +31,6 @@
 #include <asm/p2m.h>
 #include <asm/vpmu.h>
 #include <asm/hvm/support.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/hvm/svm/svm.h>
-#include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
 #include <irq_vectors.h>
 #include <public/pmu.h>
-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 09:47:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 09:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485375.752549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLK6-0003E9-1J; Fri, 27 Jan 2023 09:47:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485375.752549; Fri, 27 Jan 2023 09:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLK5-0003E2-Tw; Fri, 27 Jan 2023 09:47:05 +0000
Received: by outflank-mailman (input) for mailman id 485375;
 Fri, 27 Jan 2023 09:47:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yrR2=5Y=gmail.com=burzalodowa@srs-se1.protection.inumbo.net>)
 id 1pLLK4-0003Dw-Pt
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 09:47:04 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8d35e9a9-9e27-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 10:47:03 +0100 (CET)
Received: by mail-ed1-x52f.google.com with SMTP id y19so4231203edc.2
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 01:47:03 -0800 (PST)
Received: from uni.router.wind (adsl-40.109.242.227.tellas.gr.
 [109.242.227.40]) by smtp.googlemail.com with ESMTPSA id
 l23-20020a50d6d7000000b004a0b1d7e39csm2054861edj.51.2023.01.27.01.47.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 01:47:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d35e9a9-9e27-11ed-a5d9-ddcf98b90cbd
From: Xenia Ragiadakou <burzalodowa@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/2] Remove unused virtualization technology specific headers
Date: Fri, 27 Jan 2023 11:46:54 +0200
Message-Id: <20230127094656.140120-1-burzalodowa@gmail.com>
X-Mailer: git-send-email 2.37.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patches remove forgotten and now unused virtualization technology specific
headers from arch/x86/cpu/vpmu.c and arch/x86/hvm/emulate.c.

Xenia Ragiadakou (2):
  x86/emulate: remove unused svm specific header
  x86/vpmu: remove unused svm and vmx specific headers

 xen/arch/x86/cpu/vpmu.c    | 4 ----
 xen/arch/x86/hvm/emulate.c | 1 -
 2 files changed, 5 deletions(-)

-- 
2.37.2



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 09:54:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 09:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485390.752579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLQw-0005mr-6D; Fri, 27 Jan 2023 09:54:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485390.752579; Fri, 27 Jan 2023 09:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLQw-0005mk-3g; Fri, 27 Jan 2023 09:54:10 +0000
Received: by outflank-mailman (input) for mailman id 485390;
 Fri, 27 Jan 2023 09:54:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gttX=5Y=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pLLQu-0005me-Ut
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 09:54:08 +0000
Received: from ppsw-33.srv.uis.cam.ac.uk (ppsw-33.srv.uis.cam.ac.uk
 [131.111.8.133]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 89a39567-9e28-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 10:54:07 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:58384)
 by ppsw-33.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.137]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pLLQr-000Rmd-Rf (Exim 4.96) (return-path <amc96@srcf.net>);
 Fri, 27 Jan 2023 09:54:05 +0000
Received: from [192.168.1.10] (host-92-12-62-6.as13285.net [92.12.62.6])
 (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 57BA71FBD6;
 Fri, 27 Jan 2023 09:54:05 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89a39567-9e28-11ed-a5d9-ddcf98b90cbd
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <9718888d-96a9-09fb-fca9-f12fa76929f3@srcf.net>
Date: Fri, 27 Jan 2023 09:54:05 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Subject: Re: [PATCH 0/2] Remove unused virtualization technology specific
 headers
Content-Language: en-GB
To: Xenia Ragiadakou <burzalodowa@gmail.com>, xen-devel@lists.xenproject.org
Cc: Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230127094656.140120-1-burzalodowa@gmail.com>
From: Andrew Cooper <amc96@srcf.net>
In-Reply-To: <20230127094656.140120-1-burzalodowa@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 27/01/2023 9:46 am, Xenia Ragiadakou wrote:
> The patches remove forgotten and now unused virtualization technology specific
> headers from arch/x86/cpu/vpmu.c and arch/x86/hvm/emulate.c.
>
> Xenia Ragiadakou (2):
>   x86/emulate: remove unused svm specific header
>   x86/vpmu: remove unused svm and vmx specific headers
>
>  xen/arch/x86/cpu/vpmu.c    | 4 ----
>  xen/arch/x86/hvm/emulate.c | 1 -
>  2 files changed, 5 deletions(-)
>

Oh excellent.  Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

IMO the fixes tags are useful even if we won't want to backport them.
It's an easy pointer for people to follow if need be.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 09:57:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 09:57:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485398.752589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLTm-0006cL-Nk; Fri, 27 Jan 2023 09:57:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485398.752589; Fri, 27 Jan 2023 09:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLTm-0006cE-KW; Fri, 27 Jan 2023 09:57:06 +0000
Received: by outflank-mailman (input) for mailman id 485398;
 Fri, 27 Jan 2023 09:57:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLLTl-0006c4-84; Fri, 27 Jan 2023 09:57:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLLTl-0008Kq-2Z; Fri, 27 Jan 2023 09:57:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLLTk-0006Dq-Id; Fri, 27 Jan 2023 09:57:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLLTk-0002wt-I8; Fri, 27 Jan 2023 09:57:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NbrSL8Oeoo19Mw8mdWoB2+wcuYDIIzWG4vTNqZGKae8=; b=Jcs16JH9/HQ3jQpgjRE5Yc2bcF
	MyeV2F8CV3RCgBbo5WF7QHsAvad2hG/vcRFCrfgiPwlqDyTZtTTMq0tnV8qwzyX/LwnWpD0NiSLwU
	Sy6U0M9dVGURdIkOLyZLkm9rtxIX1wfSovqGUAjvWrtwYJPMdKPzi2iT6bgQ6OSs0Mhc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176224-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176224: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-4.17-testing:build-arm64-libvirt:<job status>:broken:regression
    xen-4.17-testing:build-arm64-pvops:<job status>:broken:regression
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:build-arm64-libvirt:host-install(4):broken:regression
    xen-4.17-testing:build-arm64-pvops:host-install(4):broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-ping-check-xen:fail:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 09:57:04 +0000

flight 176224 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176224/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt             <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 build-arm64-libvirt           4 host-install(4)        broken REGR. vs. 175447
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 175447
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 host-ping-check-xen fail REGR. vs. 175447
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   36 days
Testing same since   176224  2023-01-26 22:14:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          broken  
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-libvirt broken
broken-job build-arm64-pvops broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-step build-arm64-libvirt host-install(4)
broken-step build-arm64-pvops host-install(4)

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-dom
    id )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 disc
    ard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 disc
    ard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 disc
    ard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 disc
    ard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 disc
    ard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 disc
    ard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry has next pointing to the
    deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code takes a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)
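The iterator hazard described in the commit message above can be sketched in a few lines of C. This is a minimal, hypothetical illustration, not Xen's actual <xen/list.h> or xenstored code: the list is singly linked, the macro is simplified, and removal is modeled by a `removed` flag instead of free(). The point it demonstrates is the one the commit makes: a `_safe` loop caches the *next* pointer before the body runs, so it only protects against deleting the current entry; if the body deletes some other entry, the cached pointer can reference a removed node.

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; int id; int removed; };

/* _safe-style loop: caches pos->next in n before the body runs,
   so deleting pos itself is fine -- deleting anything else is not. */
#define for_each_safe(pos, n, head) \
    for ((pos) = (head); (pos) && ((n) = (pos)->next, 1); (pos) = (n))

/* Unlink victim from the list; mark instead of free() so the demo
   can detect the stale visit without undefined behaviour. */
static void list_remove(struct node *head, struct node *victim)
{
    for (struct node *p = head; p; p = p->next)
        if (p->next == victim) {
            p->next = victim->next;
            victim->removed = 1;   /* stand-in for free() */
            return;
        }
}

/* Iterate; while visiting node 1, remove its successor (a *different*
   entry than the current one). Count visits to removed nodes. */
int stale_visits(struct node *head)
{
    struct node *pos, *n;
    int stale = 0;

    for_each_safe(pos, n, head) {
        if (pos->removed)
            ++stale;               /* in real code: use-after-free */
        if (pos->id == 1)
            list_remove(head, pos->next);
    }
    return stale;
}
```

When the body removes node 2 while visiting node 1, `n` already points at node 2, so the next iteration lands on the removed entry; `stale_visits()` on a 3-node list returns 1. This mirrors why the reverted loop aborted: RELEASE/do_release destroyed a connection other than the current iterator, leaving the cached next pointer dangling.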


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 10:17:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 10:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485406.752602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLng-000168-G6; Fri, 27 Jan 2023 10:17:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485406.752602; Fri, 27 Jan 2023 10:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLng-000161-Cv; Fri, 27 Jan 2023 10:17:40 +0000
Received: by outflank-mailman (input) for mailman id 485406;
 Fri, 27 Jan 2023 10:17:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oNy1=5Y=minervasys.tech=carlo.nonato@srs-se1.protection.inumbo.net>)
 id 1pLLne-00015v-Qz
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 10:17:39 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d0992301-9e2b-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 11:17:34 +0100 (CET)
Received: by mail-ej1-x62d.google.com with SMTP id gs13so1659312ejc.0
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 02:17:34 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0992301-9e2b-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=minervasys-tech.20210112.gappssmtp.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=uCXzBVPg1G6s5/AvAdll+PAHsk5kTJigLRFtl/dlAT4=;
        b=FtrBXCp3i+7AZJSaZUXZ6udBZZSrCIMMM0H6jjTp9h1Jc5dTggBurnteypWPsL6EK5
         EwMxkGjn7ruWg7AhLHGM2R5aLZGKa4eYu1o+mmKA6hxdcG1AuDaoyDijN5MvAqhqXRLB
         Ipuzw4Fbt9PF0p/OeHpSDix+p7s0ikqGqeozjJ0d/8sBQAMxLKJ/ILYPB2tGm6smVVp6
         VPeh6RAJXoZtnNnvZlydIW0+U4CUn6UXQAnm88bbKKD6lV/Sbgx4o7EoSgL9nGwKgrmx
         1xkbXqFTIn07f7hCMauKwMuRzMtBTAUq5WGhMtRJC15gvJkKYSP35lRqdzflE7VlGlKP
         +fvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=uCXzBVPg1G6s5/AvAdll+PAHsk5kTJigLRFtl/dlAT4=;
        b=ApGFeSx2GACa5YCLvY4O67STFNPKQ+x7X8cIwz3/67IoMxP6FOmZPjmHa3hezN7kxP
         Tgw5/YTkSOo5r+TbQ1voETaZVeplefPdVUsFxYleyYFXspZhkfjwGEqpMMNAmvXOUruC
         3dOjWm8Y2SJ1YU2u5xB53vEG+TlX7lWBT/RRDODFPKTC7QnCNlyXJp+cDV4m68X3qF+w
         47EeqBYObMItdwH5KGix0BLHU98osbmnMKcpAF6lqW6dGqA8G8a45rJj0+DLYQ7+SOhg
         7EYvFtofk5Fs5m58yMKbNf+S6NcxhhDx+rRUrfC7Q1pYqWU0BWp/ES/8VcZ6TGp1bHQu
         U07Q==
X-Gm-Message-State: AFqh2kqgAjwEpUYJNLgq3nN9LSKSugSCLC7aR5ONz828hzqjorFWHfbJ
	sG1yD0VtiJzZD/I/7QmLjYZ5DUJQ3I+2rNZxSA6odQ==
X-Google-Smtp-Source: AMrXdXufg9iB1rD9C/1JZE0U56JkK2prqmggZzvg6J4MvcM7WwA1TqsmZu0b+XG4DBJcDnHCZ5eqX0EKvAkzplDvZe4=
X-Received: by 2002:a17:907:10d0:b0:84d:49c2:8701 with SMTP id
 rv16-20020a17090710d000b0084d49c28701mr5092702ejb.236.1674814653429; Fri, 27
 Jan 2023 02:17:33 -0800 (PST)
MIME-Version: 1.0
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-8-carlo.nonato@minervasys.tech> <21f1e368-5d44-b689-0bb7-164a53e5ffd7@suse.com>
In-Reply-To: <21f1e368-5d44-b689-0bb7-164a53e5ffd7@suse.com>
From: Carlo Nonato <carlo.nonato@minervasys.tech>
Date: Fri, 27 Jan 2023 11:17:22 +0100
Message-ID: <CAG+AhRXvKi4HmPoOL7cLToCgOPf3-evvcdvSzYGZ6fLc+mBvtQ@mail.gmail.com>
Subject: Re: [PATCH v4 07/11] xen: add cache coloring allocator for domains
To: Jan Beulich <jbeulich@suse.com>
Cc: Luca Miccio <lucmiccio@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, 
	Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

Hi Jan,

On Thu, Jan 26, 2023 at 5:29 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 23.01.2023 16:47, Carlo Nonato wrote:
> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -299,6 +299,20 @@ can be maintained with the pv-shim mechanism.
> >      cause Xen not to use Indirect Branch Tracking even when support is
> >      available in hardware.
> >
> > +### buddy-alloc-size (arm64)
>
> I can't find where such a command line option would be processed.

Another merge problem... sorry.

> > --- a/xen/arch/arm/include/asm/mm.h
> > +++ b/xen/arch/arm/include/asm/mm.h
> > @@ -128,6 +128,9 @@ struct page_info
> >  #else
> >  #define PGC_static     0
> >  #endif
> > +/* Page is cache colored */
> > +#define _PGC_colored      PG_shift(4)
> > +#define PGC_colored       PG_mask(1, 4)
>
> Is there a reason you don't follow the conditional approach we've taken
> for PGC_static?

Nope, fixed that.

> Thinking of which - what are the planned interactions with the static
> allocator? If the two are exclusive of one another, I guess that would
> need expressing somehow.
>
> > --- a/xen/arch/arm/llc_coloring.c
> > +++ b/xen/arch/arm/llc_coloring.c
> > @@ -33,6 +33,8 @@ static paddr_t __ro_after_init addr_col_mask;
> >  static unsigned int __ro_after_init dom0_colors[CONFIG_NR_LLC_COLORS];
> >  static unsigned int __ro_after_init dom0_num_colors;
> >
> > +#define addr_to_color(addr) (((addr) & addr_col_mask) >> PAGE_SHIFT)
>
> You're shifting right by PAGE_SHIFT here just to ...
>
> > @@ -299,6 +301,16 @@ unsigned int *llc_colors_from_str(const char *str, unsigned int *num_colors)
> >      return colors;
> >  }
> >
> > +unsigned int page_to_llc_color(const struct page_info *pg)
> > +{
> > +    return addr_to_color(page_to_maddr(pg));
>
> ... undo the corresponding left shift in page_to_maddr(). Depending
> on other uses of addr_col_mask it may be worthwhile to either change
> that to or accompany it by a mask to operate on frame numbers.

Yep, this would probably simplify things.

> > @@ -924,6 +929,13 @@ static struct page_info *get_free_buddy(unsigned int zone_lo,
> >      }
> >  }
> >
> > +/* Initialise fields which have other uses for free pages. */
> > +static void init_free_page_fields(struct page_info *pg)
> > +{
> > +    pg->u.inuse.type_info = PGT_TYPE_INFO_INITIALIZER;
> > +    page_set_owner(pg, NULL);
> > +}
>
> To limit the size of the functional change, abstracting out this function
> could helpfully be a separate patch (and could then also likely go in ahead
> of time, simplifying things slightly for you as well).
>
> > @@ -1488,7 +1497,7 @@ static void free_heap_pages(
> >              /* Merge with predecessor block? */
> >              if ( !mfn_valid(page_to_mfn(predecessor)) ||
> >                   !page_state_is(predecessor, free) ||
> > -                 (predecessor->count_info & PGC_static) ||
> > +                 (predecessor->count_info & (PGC_static | PGC_colored)) ||
> >                   (PFN_ORDER(predecessor) != order) ||
> >                   (page_to_nid(predecessor) != node) )
> >                  break;
> > @@ -1512,7 +1521,7 @@ static void free_heap_pages(
> >              /* Merge with successor block? */
> >              if ( !mfn_valid(page_to_mfn(successor)) ||
> >                   !page_state_is(successor, free) ||
> > -                 (successor->count_info & PGC_static) ||
> > +                 (successor->count_info & (PGC_static | PGC_colored)) ||
> >                   (PFN_ORDER(successor) != order) ||
> >                   (page_to_nid(successor) != node) )
> >                  break;
>
> This, especially without being mentioned in the description (only in the
> revision log), could likely also be split out (and then also be properly
> justified).

You mean making it a prereq patch, or putting it after this patch?
In the latter case it would amount to a fix for the previous patch, something
that is to be avoided, right?

> > @@ -1928,6 +1937,182 @@ static unsigned long avail_heap_pages(
> >      return free_pages;
> >  }
> >
> > +#ifdef CONFIG_LLC_COLORING
> > +/*************************
> > + * COLORED SIDE-ALLOCATOR
> > + *
> > + * Pages are grouped by LLC color in lists which are globally referred to as the
> > + * color heap. Lists are populated in end_boot_allocator().
> > + * After initialization there will be N lists where N is the number of
> > + * available colors on the platform.
> > + */
> > +typedef struct page_list_head colored_pages_t;
>
> To me this type rather hides information, so I think I would prefer if
> you dropped it.

Ok.

> > +static colored_pages_t *__ro_after_init _color_heap;
> > +static unsigned long *__ro_after_init free_colored_pages;
> > +
> > +/* Memory required for buddy allocator to work with colored one */
> > +static unsigned long __initdata buddy_alloc_size =
> > +    CONFIG_BUDDY_ALLOCATOR_SIZE << 20;
>
> Please don't open-code MB().
>
> > +#define color_heap(color) (&_color_heap[color])
> > +
> > +static bool is_free_colored_page(struct page_info *page)
>
> const please (and wherever applicable throughout the series)
>
> > +{
> > +    return page && (page->count_info & PGC_state_free) &&
> > +                   (page->count_info & PGC_colored);
> > +}
> > +
> > +/*
> > + * The {free|alloc}_color_heap_page overwrite pg->count_info, but they do it in
> > + * the same way as the buddy allocator corresponding functions do:
> > + * protecting the access with a critical section using heap_lock.
> > + */
>
> I think such a comment would only be useful if you did things differently,
> even if just slightly. And indeed I think you do, e.g. by ORing in
> PGC_colored below (albeit that's still similar to unprepare_staticmem_pages(),
> so perhaps fine without further explanation). Differences are what may need
> commenting on (such that the safety thereof can be judged upon).
>
> > +static void free_color_heap_page(struct page_info *pg)
> > +{
> > +    unsigned int color = page_to_llc_color(pg), nr_colors = get_nr_llc_colors();
> > +    unsigned long pdx = page_to_pdx(pg);
> > +    colored_pages_t *head = color_heap(color);
> > +    struct page_info *prev = pdx >= nr_colors ? pg - nr_colors : NULL;
> > +    struct page_info *next = pdx + nr_colors < FRAMETABLE_NR ? pg + nr_colors
> > +                                                             : NULL;
>
> Are these two calculations safe? At least on x86 parts of frame_table[] may
> not be populated, so de-referencing prev and/or next might fault.

I have to admit I've not taken this into consideration. I'll check that.

> > +    spin_lock(&heap_lock);
> > +
> > +    if ( is_free_colored_page(prev) )
> > +        next = page_list_next(prev, head);
> > +    else if ( !is_free_colored_page(next) )
> > +    {
> > +        /*
> > +         * FIXME: linear search is slow, but also note that the frametable is
> > +         * used to find free pages in the immediate neighborhood of pg in
> > +         * constant time. When freeing contiguous pages, the insert position of
> > +         * most of them is found without the linear search.
> > +         */
> > +        page_list_for_each( next, head )
> > +        {
> > +            if ( page_to_maddr(next) > page_to_maddr(pg) )
> > +                break;
> > +        }
> > +    }
> > +
> > +    mark_page_free(pg, page_to_mfn(pg));
> > +    pg->count_info |= PGC_colored;
> > +    free_colored_pages[color]++;
> > +    page_list_add_prev(pg, next, head);
> > +
> > +    spin_unlock(&heap_lock);
> > +}
>
> There's no scrubbing here at all, and no mention of the lack thereof in
> the description.

This comes from the very first version (which I'm not the author of).
Can you briefly explain what it is and whether I need it? Or can you give
me some pointers?

> > +static void __init init_color_heap_pages(struct page_info *pg,
> > +                                         unsigned long nr_pages)
> > +{
> > +    unsigned int i;
> > +
> > +    if ( buddy_alloc_size )
> > +    {
> > +        unsigned long buddy_pages = min(PFN_DOWN(buddy_alloc_size), nr_pages);
> > +
> > +        init_heap_pages(pg, buddy_pages);
> > +        nr_pages -= buddy_pages;
> > +        buddy_alloc_size -= buddy_pages << PAGE_SHIFT;
> > +        pg += buddy_pages;
> > +    }
>
> I think you want to bail here if nr_pages is now zero, not the least to avoid
> crashing ...
>
> > +    if ( !_color_heap )
> > +    {
> > +        unsigned int nr_colors = get_nr_llc_colors();
> > +
> > +        _color_heap = xmalloc_array(colored_pages_t, nr_colors);
> > +        BUG_ON(!_color_heap);
> > +        free_colored_pages = xzalloc_array(unsigned long, nr_colors);
> > +        BUG_ON(!free_colored_pages);
>
> ... here in case the amount that was freed was really tiny.

Here the array is allocated with nr_colors not nr_pages. nr_colors can't be 0.
And if nr_pages is 0 it just means that all pages went to the buddy. I see no
crash in this case.

> > +        for ( i = 0; i < nr_colors; i++ )
> > +            INIT_PAGE_LIST_HEAD(color_heap(i));
> > +    }
> > +
> > +    printk(XENLOG_DEBUG
> > +           "Init color heap with %lu pages starting from: %#"PRIx64"\n",
> > +           nr_pages, page_to_maddr(pg));
> > +
> > +    for ( i = 0; i < nr_pages; i++ )
> > +        free_color_heap_page(&pg[i]);
> > +}
> > +
> > +static void dump_color_heap(void)
> > +{
> > +    unsigned int color;
> > +
> > +    printk("Dumping color heap info\n");
> > +    for ( color = 0; color < get_nr_llc_colors(); color++ )
> > +        printk("Color heap[%u]: %lu pages\n", color, free_colored_pages[color]);
>
> When there are many colors and most memory is used, you may produce a
> lot of output here for just displaying zeros. May I suggest that you
> log only non-zero values?

Yep.

> > +}
> > +
> > +#else /* !CONFIG_LLC_COLORING */
> > +
> > +static void free_color_heap_page(struct page_info *pg) {}
> > +static void __init init_color_heap_pages(struct page_info *pg,
> > +                                         unsigned long nr_pages) {}
> > +static struct page_info *alloc_color_heap_page(unsigned int memflags,
> > +                                               struct domain *d)
> > +{
> > +    return NULL;
> > +}
> > +static void dump_color_heap(void) {}
>
> As said elsewhere (albeit for a slightly different reason): It may be
> worthwhile to try to omit these stubs and instead expose the normal
> code to the compiler unconditionally, relying on DCE. That'll reduce
> the risk of people breaking the coloring code without noticing, when
> build-testing only other configurations.

Yep.

> > @@ -1936,12 +2121,19 @@ void __init end_boot_allocator(void)
> >      for ( i = 0; i < nr_bootmem_regions; i++ )
> >      {
> >          struct bootmem_region *r = &bootmem_region_list[i];
> > -        if ( (r->s < r->e) &&
> > -             (mfn_to_nid(_mfn(r->s)) == cpu_to_node(0)) )
> > +        if ( r->s < r->e )
> >          {
> > -            init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
> > -            r->e = r->s;
> > -            break;
> > +            if ( llc_coloring_enabled )
> > +            {
> > +                init_color_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
> > +                r->e = r->s;
> > +            }
> > +            else if ( mfn_to_nid(_mfn(r->s)) == cpu_to_node(0) )
> > +            {
> > +                init_heap_pages(mfn_to_page(_mfn(r->s)), r->e - r->s);
> > +                r->e = r->s;
> > +                break;
> > +            }
>
> I think the coloring part here deserves a comment, or else it can easily
> look as if there was a missing "break" (or it was placed in the wrong
> scope). I also think it might help to restructure your change a little,
> both to reduce the diff and to keep indentation bounded:
>
>   if ( r->s >= r->e )
>     continue;
>
>   if ( llc_coloring_enabled )
>     ...
>
> Also please take the opportunity to add the missing blank lines between
> declaration and statements.
>
> > @@ -2332,6 +2524,7 @@ int assign_pages(
> >  {
> >      int rc = 0;
> >      unsigned int i;
> > +    unsigned long allowed_flags = (PGC_extra | PGC_static | PGC_colored);
>
> This is one of the few cases where I think "const" would be helpful even
> on a not-pointed-to type. There's also not really any need for parentheses
> here. As to the name, ...
>
> > @@ -2349,7 +2542,7 @@ int assign_pages(
> >
> >          for ( i = 0; i < nr; i++ )
> >          {
> > -            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_static)));
> > +            ASSERT(!(pg[i].count_info & ~allowed_flags));
>
> ... while "allowed" may be fine for this use, it really isn't ...
>
> > @@ -2408,8 +2601,8 @@ int assign_pages(
> >          ASSERT(page_get_owner(&pg[i]) == NULL);
> >          page_set_owner(&pg[i], d);
> >          smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
> > -        pg[i].count_info =
> > -            (pg[i].count_info & (PGC_extra | PGC_static)) | PGC_allocated | 1;
> > +        pg[i].count_info = (pg[i].count_info & allowed_flags) |
> > +                           PGC_allocated | 1;
>
> ... here. Maybe "preserved_flags" (or just "preserved")?
>
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -299,6 +299,33 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
> >      }
> >      head->tail = page;
> >  }
> > +static inline void
> > +_page_list_add(struct page_info *page, struct page_info *prev,
> > +               struct page_info *next)
> > +{
> > +    page->list.prev = page_to_pdx(prev);
> > +    page->list.next = page_to_pdx(next);
> > +    prev->list.next = page_to_pdx(page);
> > +    next->list.prev = page_to_pdx(page);
> > +}
> > +static inline void
> > +page_list_add_prev(struct page_info *page, struct page_info *next,
> > +                   struct page_list_head *head)
> > +{
> > +    struct page_info *prev;
> > +
> > +    if ( !next )
> > +    {
> > +        page_list_add_tail(page, head);
> > +        return;
> > +    }
>
> !next is ambiguous in its meaning, so a comment towards the intended
> behavior here would be helpful. It could be that the tail insertion is
> necessary behavior, but it also could be that insertion anywhere would
> actually be okay, and tail insertion is merely the variant you ended up
> picking.

This makes it possible to call the function with the next pointer that was
used to iterate over the list. If there's no next, we are at the end of the
list, so the page is added at the tail. The alternative would be to handle
that case outside this function.

> Then again ...
>
> > +    prev = page_list_prev(next, head);
> > +    if ( !prev )
> > +        page_list_add(page, head);
> > +    else
> > +        _page_list_add(page, prev, next);
> > +}
> >  static inline bool_t
> >  __page_list_del_head(struct page_info *page, struct page_list_head *head,
> >                       struct page_info *next, struct page_info *prev)
> > @@ -451,6 +478,12 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
> >      list_add_tail(&page->list, head);
> >  }
> >  static inline void
> > +page_list_add_prev(struct page_info *page, struct page_info *next,
> > +                   struct page_list_head *head)
> > +{
> > +    list_add_tail(&page->list, &next->list);
>
> ... you don't care about !next here at all?

I did it this way since *page is never checked for NULL either. Also, this
other type of list isn't NULL-terminated.

> Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 10:29:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 10:29:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485414.752615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLyV-0002uu-Lz; Fri, 27 Jan 2023 10:28:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485414.752615; Fri, 27 Jan 2023 10:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLLyV-0002un-J6; Fri, 27 Jan 2023 10:28:51 +0000
Received: by outflank-mailman (input) for mailman id 485414;
 Fri, 27 Jan 2023 10:28:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gttX=5Y=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pLLyT-0002uh-NX
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 10:28:49 +0000
Received: from ppsw-43.srv.uis.cam.ac.uk (ppsw-43.srv.uis.cam.ac.uk
 [131.111.8.143]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 61f18c8e-9e2d-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 11:28:47 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:40252)
 by ppsw-43.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.139]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pLLyQ-000spr-VZ (Exim 4.96) (return-path <amc96@srcf.net>);
 Fri, 27 Jan 2023 10:28:46 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 990701FBA7;
 Fri, 27 Jan 2023 10:28:46 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61f18c8e-9e2d-11ed-a5d9-ddcf98b90cbd
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <547fab47-d4b5-2c04-74d5-baffa10b9638@srcf.net>
Date: Fri, 27 Jan 2023 10:28:46 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230127055421.22945-1-jgross@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v2] tools/helpers: don't log errors when trying to load
 PVH xenstore-stubdom
In-Reply-To: <20230127055421.22945-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 27/01/2023 5:54 am, Juergen Gross wrote:
> When loading a Xenstore stubdom the loader doesn't know whether the
> kernel to be loaded is a PVH or a PV one. So it tries to load it as
> a PVH one first, and if this fails it loads it as a PV kernel.

Well, yes it does.

What might be missing is libxenguest's ability to parse the provided
kernel's ELF Notes ahead of trying to build the domain.

This is the same kind of poor design which has left us with a
dom0=pv|pvh cmdline option which must match the kernel the bootloader
gave us, if we want to not panic() on boot.

So while this might be an acceptable gross bodge in the short term, this...

>
> This results in errors being logged in case the stubdom is a PV kernel.
>
> Suppress those errors by setting the minimum logging level to
> "critical" while trying to load the kernel as PVH.
>
> Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - retry PVH loading with logging if PV fails, too (Jan Beulich)
> ---
>  tools/helpers/init-xenstore-domain.c | 24 ++++++++++++++++--------
>  1 file changed, 16 insertions(+), 8 deletions(-)
>
> diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
> index 04e351ca29..4e2f8d4da5 100644
> --- a/tools/helpers/init-xenstore-domain.c
> +++ b/tools/helpers/init-xenstore-domain.c
> @@ -31,6 +31,8 @@ static int memory;
>  static int maxmem;
>  static xen_pfn_t console_gfn;
>  static xc_evtchn_port_or_error_t console_evtchn;
> +static xentoollog_level minmsglevel = XTL_PROGRESS;
> +static void *logger;
>  
>  static struct option options[] = {
>      { "kernel", 1, NULL, 'k' },
> @@ -141,19 +143,29 @@ static int build(xc_interface *xch)
>          goto err;
>      }
>  
> +    /* Try PVH first, suppress errors by setting min level high. */

... needs to make the position clear.

/*  This is a bodge.  We can't currently inspect the kernel's ELF notes
ahead of attempting to construct a domain, so try PVH first, suppressing
errors by setting min level to high, and fall back to PV. */

~Andrew

>      dom->container_type = XC_DOM_HVM_CONTAINER;
> +    xtl_stdiostream_set_minlevel(logger, XTL_CRITICAL);
>      rv = xc_dom_parse_image(dom);
> +    xtl_stdiostream_set_minlevel(logger, minmsglevel);
>      if ( rv )
>      {
>          dom->container_type = XC_DOM_PV_CONTAINER;
>          rv = xc_dom_parse_image(dom);
>          if ( rv )
>          {
> -            fprintf(stderr, "xc_dom_parse_image failed\n");
> -            goto err;
> +            /* Retry PVH, now with normal logging level. */
> +            dom->container_type = XC_DOM_HVM_CONTAINER;
> +            rv = xc_dom_parse_image(dom);
> +            if ( rv )
> +            {
> +                fprintf(stderr, "xc_dom_parse_image failed\n");
> +                goto err;
> +            }
>          }
>      }
> -    else
> +
> +    if ( dom->container_type == XC_DOM_HVM_CONTAINER )
>      {
>          config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
>          config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
> @@ -412,8 +424,6 @@ int main(int argc, char** argv)
>      char buf[16], be_path[64], fe_path[64];
>      int rv, fd;
>      char *maxmem_str = NULL;
> -    xentoollog_level minmsglevel = XTL_PROGRESS;
> -    xentoollog_logger *logger = NULL;
>  
>      while ( (opt = getopt_long(argc, argv, "v", options, NULL)) != -1 )
>      {
> @@ -456,9 +466,7 @@ int main(int argc, char** argv)
>          return 2;
>      }
>  
> -    logger = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr,
> -                                                               minmsglevel, 0);
> -
> +    logger = xtl_createlogger_stdiostream(stderr, minmsglevel, 0);
>      xch = xc_interface_open(logger, logger, 0);
>      if ( !xch )
>      {



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 10:56:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 10:56:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485420.752625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMOQ-0006rL-Qj; Fri, 27 Jan 2023 10:55:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485420.752625; Fri, 27 Jan 2023 10:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMOQ-0006rE-Np; Fri, 27 Jan 2023 10:55:38 +0000
Received: by outflank-mailman (input) for mailman id 485420;
 Fri, 27 Jan 2023 10:55:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D8Jc=5Y=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pLMOP-0006r8-V2
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 10:55:38 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 205e9910-9e31-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 11:55:36 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D5F7A21EF1;
 Fri, 27 Jan 2023 10:55:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id ADA8F1336F;
 Fri, 27 Jan 2023 10:55:34 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id lbz9KKat02MfSwAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 27 Jan 2023 10:55:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 205e9910-9e31-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674816934; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Tik5Mo8RQiGkhGpFKdBUsc6igXWcZ6MXreo5CUGHB80=;
	b=gNp0m7tcvV9pR3KPb8A54JMoYoBGemPBy99j7pkkRPS10Ep1Q6BGoCkQG4vKDOZJ8ZB/Uu
	Z2fkn/ZfCjz5EweLx20hoUV4TBljpw9qNzgRxZ3tfGgr9yknE9fQedgMohBBHW6iUL1rRs
	a1mM627/u58lIoRLGdetP1NMjG0Xkmg=
Message-ID: <260bfbf4-8a6c-d3ea-a4e6-547a51d59ba1@suse.com>
Date: Fri, 27 Jan 2023 11:55:34 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: Andrew Cooper <amc96@srcf.net>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230127055421.22945-1-jgross@suse.com>
 <547fab47-d4b5-2c04-74d5-baffa10b9638@srcf.net>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2] tools/helpers: don't log errors when trying to load
 PVH xenstore-stubdom
In-Reply-To: <547fab47-d4b5-2c04-74d5-baffa10b9638@srcf.net>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------ehhHZFICYFi45f9zaxjsa7Ij"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------ehhHZFICYFi45f9zaxjsa7Ij
Content-Type: multipart/mixed; boundary="------------0Hf0Y5jeYYyQNk9QcLRhwTje";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Andrew Cooper <amc96@srcf.net>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <260bfbf4-8a6c-d3ea-a4e6-547a51d59ba1@suse.com>
Subject: Re: [PATCH v2] tools/helpers: don't log errors when trying to load
 PVH xenstore-stubdom
References: <20230127055421.22945-1-jgross@suse.com>
 <547fab47-d4b5-2c04-74d5-baffa10b9638@srcf.net>
In-Reply-To: <547fab47-d4b5-2c04-74d5-baffa10b9638@srcf.net>

--------------0Hf0Y5jeYYyQNk9QcLRhwTje
Content-Type: multipart/mixed; boundary="------------SBe6Q266FspEqNUdTEPl5Yy0"

--------------SBe6Q266FspEqNUdTEPl5Yy0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.01.23 11:28, Andrew Cooper wrote:
> On 27/01/2023 5:54 am, Juergen Gross wrote:
>> When loading a Xenstore stubdom the loader doesn't know whether the
>> to be loaded kernel is a PVH or a PV one. So it tries to load it as
>> a PVH one first, and if this fails it is loading it as a PV kernel.
>
> Well, yes it does.
>
> What might be missing is libxenguest's ability to parse the provided
> kernel's ELF Notes ahead of trying to build the domain.

Correct.

> This is the same kind of poor design which has left us with a
> dom0=pv|pvh cmdline option which must match the kernel the bootloader
> gave us, if we want to not panic() on boot.

Hmm, this is only partially true.

I agree that without any dom0 option given it would be nice if the
hypervisor could use the specified dom0 kernel as long as it is
supporting either PV or PVH mode.

OTOH nowadays it is easily possible to build a kernel being capable
to support both variants, in which case the hypervisor needs to
select which mode to use. This might need the help of the user in
case the non-default mode is wanted.

For xenstore-stubdom it is easier, as there is no reason to prefer
the PV mode over PVH (in fact I'm still working on Xenstore LU for
the PVH case, making the decision even easier).

>
> So while this might be an acceptable gross bodge in the short term, this...
>
>>
>> This results in errors being logged in case the stubdom is a PV kernel.
>>
>> Suppress those errors by setting the minimum logging level to
>> "critical" while trying to load the kernel as PVH.
>>
>> Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - retry PVH loading with logging if PV fails, too (Jan Beulich)
>> ---
>>   tools/helpers/init-xenstore-domain.c | 24 ++++++++++++++++--------
>>   1 file changed, 16 insertions(+), 8 deletions(-)
>>
>> diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
>> index 04e351ca29..4e2f8d4da5 100644
>> --- a/tools/helpers/init-xenstore-domain.c
>> +++ b/tools/helpers/init-xenstore-domain.c
>> @@ -31,6 +31,8 @@ static int memory;
>>   static int maxmem;
>>   static xen_pfn_t console_gfn;
>>   static xc_evtchn_port_or_error_t console_evtchn;
>> +static xentoollog_level minmsglevel = XTL_PROGRESS;
>> +static void *logger;
>>   
>>   static struct option options[] = {
>>       { "kernel", 1, NULL, 'k' },
>> @@ -141,19 +143,29 @@ static int build(xc_interface *xch)
>>           goto err;
>>       }
>>   
>> +    /* Try PVH first, suppress errors by setting min level high. */
>
> ... needs to make the position clear.
>
> /*  This is a bodge.  We can't currently inspect the kernel's ELF notes
> ahead of attempting to construct a domain, so try PVH first, suppressing
> errors by setting min level to high, and fall back to PV. */

No problem with me.


Juergen
--------------SBe6Q266FspEqNUdTEPl5Yy0
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------SBe6Q266FspEqNUdTEPl5Yy0--

--------------0Hf0Y5jeYYyQNk9QcLRhwTje--

--------------ehhHZFICYFi45f9zaxjsa7Ij
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPTraYFAwAAAAAACgkQsN6d1ii/Ey/v
aggAh2kAqx9HDiiHpaMwITwQTJEYLocJvaerKVHzTliHd3RKAlh2qGkiJ0uF39zP54Vc2YYgmuDj
HgXwoVLakYfgDw36cGvLx8sLDIc7ZiApOVWvIu+PsoZWAjcdszC0HMwqKj9Q1hnzloVr8X0cFnuF
4TPoHLnGTZgHGkDU8+jMDAupuOHxOvnAOj8W0Nd+WdlL7td53XXfPWEYSKgmdwwo6IrZBxMkIUZs
d33geklrdsobWyxr+Z30X6gFOdib56XCJzvxOdyZDwDewhvuZ2gqMGjrBuvnUT3biPdmwSXiyYR2
i7oYBMi4XpHQsmqUEKhjh9fn6GhfCfV+E/x8RTxm+A==
=8OgK
-----END PGP SIGNATURE-----

--------------ehhHZFICYFi45f9zaxjsa7Ij--


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:04:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:04:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485426.752639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMWp-0008Ob-NV; Fri, 27 Jan 2023 11:04:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485426.752639; Fri, 27 Jan 2023 11:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMWp-0008OU-Io; Fri, 27 Jan 2023 11:04:19 +0000
Received: by outflank-mailman (input) for mailman id 485426;
 Fri, 27 Jan 2023 11:04:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLMWo-0008OK-43; Fri, 27 Jan 2023 11:04:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLMWo-0001Oy-15; Fri, 27 Jan 2023 11:04:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLMWn-0008MZ-Ee; Fri, 27 Jan 2023 11:04:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLMWn-0006Lu-B6; Fri, 27 Jan 2023 11:04:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cuHw8GsZhNl7V/Nouoao+s9Oxlk4RPLsFJAA73QT7Do=; b=s0DWXK2FChRzG8SfN/quZVuRc5
	W9XrMLEbyNIUJahp+UtTeWY2lKEDiZKnyoZYqBy1XbJxcPTnck+MEaZ39cAq7F33L2wEn0RrbWc9T
	TfOpyqmFMvUBlAQW2UN91p55M2X03KUyQmhAs3gOsDC2QfbhJjqfYFonPr2Yj9A3Qaxc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176233-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 176233: trouble: blocked/broken/pass
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:<job status>:broken:regression
    libvirt:build-armhf-libvirt:<job status>:broken:regression
    libvirt:build-armhf-libvirt:host-install(4):broken:regression
    libvirt:build-arm64-libvirt:host-install(4):broken:regression
    libvirt:build-armhf-libvirt:syslog-server:running:regression
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:capture-logs:broken:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=2dde3840b1d50e79f6b8161820fff9fe62f613a9
X-Osstest-Versions-That:
    libvirt=95a278a84591b6a4cfa170eba31c8ec60e82f940
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 11:04:17 +0000

flight 176233 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176233/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt             <job status>                 broken
 build-armhf-libvirt             <job status>                 broken
 build-armhf-libvirt           4 host-install(4)        broken REGR. vs. 176139
 build-arm64-libvirt           4 host-install(4)        broken REGR. vs. 176139
 build-armhf-libvirt           3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           5 capture-logs          broken blocked in 176139
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              2dde3840b1d50e79f6b8161820fff9fe62f613a9
baseline version:
 libvirt              95a278a84591b6a4cfa170eba31c8ec60e82f940

Last test of basis   176139  2023-01-26 04:18:49 Z    1 days
Testing same since   176233  2023-01-27 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiri Denemark <jdenemar@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          broken  
 build-armhf-libvirt                                          broken  
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-libvirt broken
broken-job build-armhf-libvirt broken
broken-step build-armhf-libvirt host-install(4)
broken-step build-armhf-libvirt capture-logs
broken-step build-arm64-libvirt host-install(4)

Not pushing.

------------------------------------------------------------
commit 2dde3840b1d50e79f6b8161820fff9fe62f613a9
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix missing device in crypto-builtin XML
    
    Another forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit f3c9cbc36cc10775f6cefeb7e3de2f799dc74d70
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix watchdog parameters in crypto-builtin
    
    Forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit a2c5c5dad2275414e325ca79778fad2612d14470
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:34 2023 +0100

    news: Add information about iTCO watchdog changes
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 2fa92efe9b286ad064833cd2d8b907698e58e1cf
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:30 2023 +0100

    Document change to multiple watchdogs
    
    With the reasoning behind it.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 926594dcc82b40f483010cebe5addbf1d7f58b24
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 11:22:22 2023 +0100

    qemu: Add implicit watchdog for q35 machine types
    
    The iTCO watchdog is part of the q35 machine type since its inception,
    we just did not add it implicitly.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2137346
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d81a27b9815d68d85d2ddc9671649923ee5905d7
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 14:15:06 2023 +0100

    qemu: Enable iTCO watchdog by disabling its noreboot pin strap
    
    In order for the iTCO watchdog to be operational we must disable the
    noreboot pin strap in qemu.  This is the default starting from 8.0
    machine types, but desirable for older ones as well.  And we can safely
    do that since that is not guest-visible.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 5b80e93e42a1d89ee64420debd2b4b785a144c40
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:26:21 2023 +0100

    Add iTCO watchdog support
    
    Supported only with q35 machine types.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1c61bd718a9e311016da799a42dfae18f538385a
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Tue Nov 8 09:10:57 2022 +0100

    Support multiple watchdog devices
    
    This is already possible with qemu, and actually already happening with
    q35 machines and a specified watchdog since q35 already includes a
    watchdog we do not include in the XML.  In order to express such
    possibility, multiple watchdogs need to be supported.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit c5340d5420012412ea298f0102cc7f113e87d89b
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:28:52 2023 +0100

    qemuDomainAttachWatchdog: Avoid unnecessary nesting
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1cf7e6ec057a80f3c256d739a8228e04b7fb8862
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:25:06 2023 +0100

    remote: Drop useless cleanup in remoteDispatchNodeGet{CPU,Memory}Stats
    
    The function cannot fail once it starts populating
    ret->params.params_val[i].field.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d0f339170f35957e7541e5b20552d0007e150fbc
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:06:33 2023 +0100

    remote: Avoid leaking uri_out
    
    In case the API returned success and a NULL pointer in uri_out, we would
    leak the preallocated buffer used for storing the uri_out pointer.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 4849eb2220fb2171e88e014a8e63018d20a8de95
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 11:56:28 2023 +0100

    remote: Propagate error from virDomainGetSecurityLabelList via RPC
    
    The daemon side of this API has been broken ever since the API was
    introduced in 2012. Instead of sending the error from
    virDomainGetSecurityLabelList via RPC so that the client can see it, the
    dispatcher would just send a successful reply with return value set to
    -1 (and an empty array of labels). The client side would propagate this
    return value so the client can see the API failed, but the original
    error would be lost.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 0211e430a87a96db9a4e085e12f33caad9167653
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 13:19:31 2023 +0100

    remote: Initialize args variable
    
    Recently, in v9.0.0-7-gb2034bb04c we've dropped initialization of
    @args variable. The reasoning was that eventually, all members of
    the variable will be set. Well, this is not correct. For
    instance, in remoteConnectGetAllDomainStats() the
    args.doms.doms_val pointer is set iff @ndoms != 0. However,
    regardless of that, the pointer is then passed to VIR_FREE().
    
    Worse, the whole args is passed to
    xdr_remote_connect_get_all_domain_stats_args() which then calls
    xdr_array, which tests the (uninitialized) pointer against NULL.
    
    This effectively reverts b2034bb04c61c75ddbfbed46879d641b6f8ca8dc.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Martin Kletzander <mkletzan@redhat.com>

commit c3afde9211b550d3900edc5386ab121f5b39fd3e
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 11:56:10 2023 +0100

    qemu_domain: Don't unref NULL hash table in qemuDomainRefreshStatsSchema()
    
    The g_hash_table_unref() function does not accept NULL. Passing
    NULL results in a glib warning being triggered. Check whether the
    hash table is not NULL and unref it only then.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:04:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:04:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485431.752648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMXH-0000VJ-4c; Fri, 27 Jan 2023 11:04:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485431.752648; Fri, 27 Jan 2023 11:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMXH-0000VC-1h; Fri, 27 Jan 2023 11:04:47 +0000
Received: by outflank-mailman (input) for mailman id 485431;
 Fri, 27 Jan 2023 11:04:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLMXF-0000Ta-Np
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:04:45 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2081.outbound.protection.outlook.com [40.107.22.81])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65afcf75-9e32-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 12:04:42 +0100 (CET)
Received: from AS8PR04CA0068.eurprd04.prod.outlook.com (2603:10a6:20b:313::13)
 by DB9PR08MB9948.eurprd08.prod.outlook.com (2603:10a6:10:3d0::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 11:04:39 +0000
Received: from AM7EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:313:cafe::1b) by AS8PR04CA0068.outlook.office365.com
 (2603:10a6:20b:313::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22 via Frontend
 Transport; Fri, 27 Jan 2023 11:04:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT033.mail.protection.outlook.com (100.127.140.129) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.22 via Frontend Transport; Fri, 27 Jan 2023 11:04:39 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Fri, 27 Jan 2023 11:04:38 +0000
Received: from 35e18bb5e9c7.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 39D4614B-5018-4448-9B88-75CF38E143BA.1; 
 Fri, 27 Jan 2023 11:04:33 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 35e18bb5e9c7.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 11:04:33 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB9PR08MB6459.eurprd08.prod.outlook.com (2603:10a6:10:256::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 11:04:23 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 11:04:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65afcf75-9e32-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Yr8Ewi+aEYg+FppOqBJ5NN8pgGWxGVq5zqHmAceUmPk=;
 b=iGGNuFTmLJRBBOyciKBVMO2FlIV2DEFKoWC4ANn/ONb60+/fYP7BO0eijdt1R27CaC1QDMFs+CYCrrfTGUQkRQ+duNE4+gNRBm9tOCsWSKe5DCBu0IqqIDH6+PFTwu0DECDjP7GkXdyP0/fiBR5nKII6YUVIPidTApNaIqBFz1Q=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Uc31rkgnuo2O3lpb1Pun+1StVFCSG3V+6OwwlC0u2w8mYWRah0iqRJZg9w9N/wOOI1xFvtkCGeeDO94BuSHgN6sTb6W7tuCEtFCe59AXQ7S2fZVMB30xzwT/lLDLqKB3GLUI2Y8/gWL3FGfpS9oBkld0jSlddJ5oj/Y+CnBP/cPha5rdHY5Q0eQmb4GQXfewE8yV3fWV5SjxsshG3d6rdqPjizep8MefVL8kWWXaG7xHSZdiQH01T2jhOieCSDp6ThVRXuukRER4D3mnKYUimRjJ2iEx88orxsjf4ERFWehBauD1GwYVHWpuuT5+FNqETJc7BxVtqqgrWElzLZ8XXQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Yr8Ewi+aEYg+FppOqBJ5NN8pgGWxGVq5zqHmAceUmPk=;
 b=Q6Z4kT0pEdGYeiZFj+oC2BUDZ+FCwj13kpREfQHzPvAQy+xREmJG/1zk7TvIrJef3SCN7X0wzVkOK2tmOjf24UnlEU1rXVTLTY06piPpzvF8BmZ+8eSbIbd/0FjX/6DxBjtkpKntEfL0am84Kj1HNx31FRnTtlJtBmfnlAYeYYBPEv9ltoYXhkdSI5QujqJ5pKFON8GTkRgdnw8Z5pEF53R31e149W89AzJfywnZntZhwLAi19ZFoBkRsbV+GDXb1f3KpQ3Gfy7J0J04Ecs7X1vuoTljLPB19vL6lrhbkGfHYueAlloHjuiGlZpgTan/4XVHzPmFJXNz8IZVV0JbSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 1/3] xen/arm: Reduce redundant clear root pages when
 teardown p2m
Thread-Topic: [PATCH 1/3] xen/arm: Reduce redundant clear root pages when
 teardown p2m
Thread-Index: AQHZKU4erKVguEhJ4EOF/N1uL2C1/a6nFMuAgAbbyYCABDoEcA==
Date: Fri, 27 Jan 2023 11:04:23 +0000
Message-ID:
 <AS8PR08MB79917007B583442AB6B43F9892CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-2-Henry.Wang@arm.com>
 <36821aa0-4e88-57f7-3f8b-35ba0529fabf@amd.com>
 <59f4d24a-44cf-fa8f-bdac-2af036f2cd30@xen.org>
In-Reply-To: <59f4d24a-44cf-fa8f-bdac-2af036f2cd30@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 4FB410BE413FDF40973393DD219C82C1.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB9PR08MB6459:EE_|AM7EUR03FT033:EE_|DB9PR08MB9948:EE_
X-MS-Office365-Filtering-Correlation-Id: c09c80c6-7712-44ac-79a0-08db005648cb
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 dVWjZcOKmnoVyoBb8aeI513zDgrOik4o3+/G5pxwOt6XYvJ3IFsgBELlKQ5klfppgXuNAIxPB+Dy8Jhpxk9SjhFBOQeE/cDOJPvMPWtOZ8gkyP0YvcvsuB05NsLR5jU7IrgvH33wLI23Fv07FUkZDVHf7Eu6XoKj+c29oUt5dnAPzJaccVlA4XLE4Fd3Y1imM3QGtoZkw11l+Up6yhbDIoU2xL+GsRb/u0RVBEwBwx13CKS+fSCLb3gwOG5U+csY7aM9zMdd1uNoTyIYU6I2Lg9q+tSlBYkWPxcdAonHZtMpNcd01ehVTt4SCWLTc2QK1YtGEfsG6RB7fIXfRwLUq+URAsJvTIHK9sZEQ/Fwmibr8pKqWE4CuV4MYr7bl+OZe22ga8027P16mh1oPJQfFfw0B3VW2n0iVpzGyms0oxr3ZF9JtG/2sayajK5aElGpcKw9JATYumrQstVkyNRgIy4pWS074+Q7PVU2jdo39dLjBuJy2j8JgqrfSx4qkEijsladJ167vSGm7J8P1wF/jf3pDQFG46UL+UlFt/GsuTb3t9kKtMw1c7gwWQQ7EsTwz+8r5rt3p12lyXwvrnk3lXyaLvfZabLBUz/E2bdNErGMx7Oy4KNiAi2VbSlW7EVpJ5CXkJ3HiYvCKyq1c3IkIsSVg8Onx6QmSlad+sU6gDzR7qjpQd3U3hKYi7K/6TRrnbhbYh6iztzykfHQV45gJA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(376002)(396003)(366004)(346002)(136003)(451199018)(41300700001)(66476007)(66446008)(64756008)(66556008)(55016003)(83380400001)(86362001)(7696005)(316002)(38100700002)(38070700005)(122000001)(110136005)(54906003)(76116006)(8676002)(66946007)(71200400001)(6506007)(186003)(26005)(478600001)(4326008)(33656002)(9686003)(2906002)(8936002)(52536014)(5660300002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6459
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b83e45b0-09a2-477c-965f-08db00563f8d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fz2vwi1el5Gg4scoKG08nOgM9Y8a0UrrgcsSHNODDEIXv2XkmZMUY9dbm+sKL9sXszHxZXRuwv1MDEMqfWbtrMDYyLoxux2Kp5TN2iPsDKKBRSslQybAQJtbT9uAVNWsIIdqwZgPO0B9O6NZGvglmxTBHLxwJapS57LNB0I5arHT4d4i+BqcKWVMLQVZqgbIzFno7WfKuRN2zgjrsO2D7LVYCuMGDecdsaRrARUzrtaD0LwGOQiMjRzb9F/ZyrCVjrN7AioF9pXIzVX9C0eZqZTsJW+9cvBG8S1IK4tYvKbXoJHx7XvHKI+cfwCUb5v+r+aruqduBKCYQERmQDYXjadVPiPveDQuQtDx6G0jz4ukA57PaM0+8NDz9x/Nuie8IEQG55sIxb2oy1/RfPwAzazSeliwgFRLrhDn485H7OVuowHelJlrQNe2iASm+U8HhvwTTmmIDWYd1X/jzq4YICcNxf0gUbMNAUXCqmFaY7r6WlRxNbJ6gnSHQP+PO71f80Wl/2dSKtkuJfuK0C3hFMX9RVbarvpaK02MFceHPWodGw9AUHvFZruyyr1ONpkc/zGm5apoFEL55HatXQucgk1IhMdkppaigjiNcGIy6vN3oYf1apEo4Vzw2zp6nUguMZnX+fiJyMHTqfaGXKKm7wJE6ftP5kIcjmgwr/tTJJaFmQwjTX/6+sRIM/svfraq8LWTkTKatJp9QBN9N03QDg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(346002)(376002)(39860400002)(136003)(396003)(451199018)(40470700004)(46966006)(36840700001)(83380400001)(36860700001)(336012)(40460700003)(86362001)(82740400003)(110136005)(316002)(54906003)(5660300002)(47076005)(55016003)(81166007)(40480700001)(33656002)(356005)(82310400005)(7696005)(9686003)(107886003)(478600001)(2906002)(26005)(8676002)(4326008)(8936002)(186003)(70586007)(70206006)(6506007)(52536014)(41300700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:04:39.0914
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c09c80c6-7712-44ac-79a0-08db005648cb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9948

Hi Michal and Julien,

Apologies for the late response.

> -----Original Message-----
> >> +    PROGRESS(p2m_root):
> >> +        /*
> >> +         * We are about to free the intermediate page-tables, so clear the
> >> +         * root to prevent any walk to use them.
> > The comment from here...
> >> +         * The domain will not be scheduled anymore, so in theory we
> should
> >> +         * not need to flush the TLBs. Do it for safety purpose.
> >> +         * Note that all the devices have already been de-assigned. So we
> don't
> >> +         * need to flush the IOMMU TLB here.
> >> +         */
> > to here does not make a lot of sense in this place and should be moved to
> p2m_clear_root_pages
> > where a user can see the call to p2m_force_tlb_flush_sync.
> 
> +1

I will move this code comment block to the place that you suggested in v2.

> 
> > Apart from that:
> > Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Thanks to both of you. I will keep these two tags with the fix about the above
in-code comment position.

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:12:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:12:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485441.752660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMe3-0002Ex-TL; Fri, 27 Jan 2023 11:11:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485441.752660; Fri, 27 Jan 2023 11:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMe3-0002Eq-Qk; Fri, 27 Jan 2023 11:11:47 +0000
Received: by outflank-mailman (input) for mailman id 485441;
 Fri, 27 Jan 2023 11:11:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLMe2-0002Ei-NV
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:11:46 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2070.outbound.protection.outlook.com [40.107.241.70])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 612c6fa6-9e33-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:11:43 +0100 (CET)
Received: from AM6P192CA0059.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:82::36)
 by DU0PR08MB10367.eurprd08.prod.outlook.com (2603:10a6:10:409::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Fri, 27 Jan
 2023 11:11:39 +0000
Received: from AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:82:cafe::13) by AM6P192CA0059.outlook.office365.com
 (2603:10a6:209:82::36) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23 via Frontend
 Transport; Fri, 27 Jan 2023 11:11:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT034.mail.protection.outlook.com (100.127.140.87) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.22 via Frontend Transport; Fri, 27 Jan 2023 11:11:39 +0000
Received: ("Tessian outbound baf1b7a96f25:v132");
 Fri, 27 Jan 2023 11:11:38 +0000
Received: from d4e6941d166e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 438ED51A-D488-4F9C-BB60-304B9BB7803D.1; 
 Fri, 27 Jan 2023 11:11:29 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d4e6941d166e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 11:11:29 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by GV1PR08MB7313.eurprd08.prod.outlook.com (2603:10a6:150:1c::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.25; Fri, 27 Jan
 2023 11:11:25 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 11:11:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 612c6fa6-9e33-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LojGeEAsI1TEiePLXv4nJT33G4Xly4OKlP8v8NBl+P0=;
 b=DED2ZW+NsyEE8ULo+ghgxhMpaX2nE2u1ZYtK1nSRuKV+TzdHg0+gTq4kn1YjQ3bdnvESjephWfUtgR+SjP0FaO0c/mObLTPMv9PSPfA7op+eXVIB4xD7Dt9Dhwcdw9hKui5UY3cmsfVg2orLS8e6X1trkzSWsFovyb1yLLbml1s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WYd6K+IF7bxBGyoqJ7xq4jv4Dz/Pt5MNTqiL6PmFojAzkdbANbmqv7w1ypTsEjMmlq8gO37yGAkpQg94RFXzjwcpfADc52oRF3wTlrSVyZLHV+9UH4a0pTRWPiOsDQO3Qx50PGni2f87cdZ5g08Wkn50TNGtpodRy0v+/3F/Mrd+IkWpFODypJVAHsEvqFspgY3idqloAfUCICTsl9Rg3YrvZMXtP8vo3yMiOe1d1ajLcftWoCnWHjHGxZsC8YviWfRmrdBfghF+0waWpbSxNM4n+Aq0VnRh0E2HIMsVO81dJ309i4kBXjVNDxphLK2HY2EtlLyhHliVBzrxehVjWg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LojGeEAsI1TEiePLXv4nJT33G4Xly4OKlP8v8NBl+P0=;
 b=SF8e6KS/dIJpPfqyLF0TagvxXbYTJEC+87zeYJfIBq/KUmMUEPJtWs6xmlBxw/rMSCsio06zD8WVo0WhzGt1tJZwVgiIIT4fRxQRk8/P4xXcSpPv8rlPHRe/lVbORmWuSAoBaaJJNTxrdpMUHEkcxmZx7m8zEjxTZQnU+ss6WJZSYcapDreRsS4oufb4IUwc0jiLvdZPbUyrQQKMeG+tmLGhDq/gW+7Kt3rre0nAI+MaIEoQXTvWZ2Ie8U/NHG6VO7x/J2Oc37lviiHIVksZXHEXk6vaDYrkhysNhtPaaYGDl6Q3/i3B2KmrInPG1p0SNoqgb5Sx4jkGn+TGg5RYBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen
	<Wei.Chen@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Thread-Topic: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Thread-Index: AQHZKU4eIUricNo750ej72nDJ2giuq6nF8cAgAsT2vA=
Date: Fri, 27 Jan 2023 11:11:25 +0000
Message-ID:
 <AS8PR08MB79913487DBC6F434758EAE5A92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-3-Henry.Wang@arm.com>
 <b2822e36-0972-5c4b-90d9-aee6533824b2@amd.com>
In-Reply-To: <b2822e36-0972-5c4b-90d9-aee6533824b2@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 74C4A913DDA654468B36A9DE6BB03100.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|GV1PR08MB7313:EE_|AM7EUR03FT034:EE_|DU0PR08MB10367:EE_
X-MS-Office365-Filtering-Correlation-Id: a306298e-11e9-444f-8cca-08db0057432d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 W30xP7Ltqi3Y0n7CVjIcspcwJVhUN2dx8QWKepigo0iDluvbpcsND5+bUBfeB0D89eXXKZqrf2a3i2EpWVtM+VMqGKUWx2E0xsvc6T7rDEx5369v1KYLoi5+lUFCEjaEU3ahbJbfjElnfQ+QTUtM73P/2lNsiMWftGukCrpQ1VXSe5blOnS1ftk8cwzEV8hA1xjWleYPEA/pGHqsIgANfvW2wTua0GifRJFCYM61PiOs+4kQtuPMitT3x+A78xOBReN8sw2rA6M/R23rI4b0cEHrWWrhJC5V6ccgHP9MuIjzhjbxbpMv9ru3S3gXVXNsSM9ehELiWANDtcBPVLKVzz873QZ2+i13ac89tAMoBToHsNjXlHx55+hnbJcVyXGxuKxAY5bHrFgT7xd7ArVa1ptcM/Y2DpWIjsJZ/CmEs8oKFvgxhOyFVgQ07AyLjPjA5zXt+OW+vYvNxOuo2j7B91LzVAR7ZgJVDO01X/gRSWpPjSC4IplMJOQorRU7QcTEdt30Rova5KRj2q1JAzSmoOQv746xg+fBq+H/P/xDoPn6LyJ2YHx0szGnHoPopJ8aB6wi01qMHTaE92abOrgfTRmq4LjY5q1tff/RDRQW3DBR9yrAaYjtSizuk1Z9wIvlLuD4kl4soO7k5cCNmoPufhNotDH83hgLmqJtzy7T5femTTIq+WEhWhKk658NgyUJsoTGZfQP1FS0O4LH4/NLIRAHoyfo3lfJlCz9/Ksn2hibRsMpa3ipsyTki6mmgR29
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(396003)(39860400002)(136003)(366004)(376002)(346002)(451199018)(316002)(8936002)(52536014)(6506007)(53546011)(86362001)(71200400001)(478600001)(7696005)(38100700002)(186003)(122000001)(26005)(38070700005)(9686003)(54906003)(33656002)(110136005)(5660300002)(55016003)(2906002)(83380400001)(41300700001)(66476007)(64756008)(66446008)(4326008)(66556008)(8676002)(76116006)(66946007)(414714003)(473944003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7313
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9f532b16-e628-4ca5-2cf3-08db00573b15
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	dNgkRFcLS/Y875o3Y+8A+s8vmgOlag+61lGFYCz92itxExClo/f1jZzINY90+vxCQXaTC9C+rD/E64lGOT2mmQlBkPttXADewupJmyM371IUILcVBN2m/fuh06ClQDD+bs55e0gTZ+zA2IJ8Su48mQ7+3/CBsPkzliLhwThFujP5llWE9M9yKk3LY5KTZw9EKW0+dF0c4Mx5EoJo8bhcl2BeSA4si8IFhf6uNZ8r5UTrzvjXHw/sMG1RXqirqh9xi2RqiRRxf7tOgPxD543xYlnkoyRoxAEaBWnODTWpQu+bDYK5am6up2XGCkKnbvnn7L5XCSkQuliiD1jXILlE+iU+wVvzrriZhngi8vGhoNyVPu5d1CeM9sFKbGF0kBA7JTg21i5mdkChmEiAej0sh3gkLM2/fAvcwfX4d2ecXbRWmVbVlLiMXAy3yQkRI0G7pwBAt5ZPLy2ogBq6ZeSp0UoPQbChcmsJW0x4VEA5KJ4hnMKuUoDymaz6dJ2++7BTIifBgEKxeVorYU1dmFM3sng4VQoWpd0edeojQ5p8eM0RoPNuTiXwCV0iHyJiSC3v9Bi3JQXfeW/8B1dWUS2UUDR4NnKIfqf7vQKzIH7ZEW/8swE2NgPUbiXGC3sR6H24Sk3WUzHqADR4BnVks7pn0IUn6yetW/T877idMMfy31ohzOcUtKs0HsKuGrKOIQ4CNiRdX9rt5yJs0Ltqs3kgb7H+KkIf4+gztGl9OTEg0BSu4r0NJn2L0SyH1WMvRtBG
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(136003)(376002)(346002)(39860400002)(396003)(451199018)(40470700004)(36840700001)(46966006)(110136005)(41300700001)(107886003)(54906003)(478600001)(70206006)(316002)(7696005)(82310400005)(70586007)(40460700003)(33656002)(86362001)(336012)(9686003)(356005)(186003)(47076005)(26005)(83380400001)(8936002)(8676002)(40480700001)(4326008)(2906002)(81166007)(53546011)(5660300002)(6506007)(55016003)(82740400003)(52536014)(36860700001)(414714003)(473944003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:11:39.1489
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a306298e-11e9-444f-8cca-08db0057432d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB10367

Hi Michal,

> -----Original Message-----
> From: Michal Orzel <michal.orzel@amd.com>
> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
> the first access
> 
> Hi Henry,
> 
> On 16/01/2023 02:58, Henry Wang wrote:
> > Note that GICv2 changes introduced by this patch is not applied to the
> > "New vGIC" implementation, as the "New vGIC" is not used. Also since
> The fact that "New vGIC" is not used very often and its work is in-progress
> does not mean that we can implement changes resulting in a build failure
> when enabling CONFIG_NEW_VGIC, which is the case with your patch.
> So this needs to be fixed.

I will add the "New vGIC" changes in v2.

> 
> > @@ -153,6 +153,8 @@ struct vgic_dist {
> >      /* Base address for guest GIC */
> >      paddr_t dbase; /* Distributor base address */
> >      paddr_t cbase; /* CPU interface base address */
> > +    paddr_t csize; /* CPU interface size */
> > +    paddr_t vbase; /* virtual CPU interface base address */
> Could you swap them so that base address variables are grouped?

Sure, my original thought was grouping the CPU interface related fields but
since you prefer grouping the base address, I will follow your suggestion.

> 
> >  #ifdef CONFIG_GICV3
> >      /* GIC V3 addressing */
> >      /* List of contiguous occupied by the redistributors */
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 061c92acbd..d98f166050 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -1787,9 +1787,12 @@ static inline bool hpfar_is_valid(bool s1ptw,
> uint8_t fsc)
> >  }
> >
> >  /*
> > - * When using ACPI, most of the MMIO regions will be mapped on-
> demand
> > - * in stage-2 page tables for the hardware domain because Xen is not
> > - * able to know from the EFI memory map the MMIO regions.
> > + * Try to map the MMIO regions for some special cases:
> > + * 1. When using ACPI, most of the MMIO regions will be mapped on-
> demand
> > + *    in stage-2 page tables for the hardware domain because Xen is not
> > + *    able to know from the EFI memory map the MMIO regions.
> > + * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
> > + *    on the first access of the MMIO region.
> >   */
> >  static bool try_map_mmio(gfn_t gfn)
> >  {
> > @@ -1798,6 +1801,16 @@ static bool try_map_mmio(gfn_t gfn)
> >      /* For the hardware domain, all MMIOs are mapped with GFN == MFN
> */
> >      mfn_t mfn = _mfn(gfn_x(gfn));
> >
> > +    /*
> > +     * Map the GICv2 virtual cpu interface in the gic cpu interface
> NIT: in all the other places you use CPU (capital letters)

Oh good catch, thank you. I think this part is the same as the original in-code
comment, but since I am touching this part anyway, it would be good to
correct them.

> 
> > +     * region of the guest on the first access of the MMIO region.
> > +     */
> > +    if ( d->arch.vgic.version == GIC_V2 &&
> > +         gfn_x(gfn) == gfn_x(gaddr_to_gfn(d->arch.vgic.cbase)) )
> There is a macro gnf_eq that you can use to avoid opencoding it.

Thanks! I will fix in v2.

> 
> > +        return !map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
> At this point you can use just gfn instead of gaddr_to_gfn(d->arch.vgic.cbase)

Will fix in v2.

> 
> > +                                 d->arch.vgic.csize / PAGE_SIZE,
> > +                                 maddr_to_mfn(d->arch.vgic.vbase));
> > +
> >      /*
> >       * Device-Tree should already have everything mapped when building
> >       * the hardware domain.
> > diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> > index 0026cb4360..21e14a5a6f 100644
> > --- a/xen/arch/arm/vgic-v2.c
> > +++ b/xen/arch/arm/vgic-v2.c
> > @@ -644,10 +644,6 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
> >
> >  static int vgic_v2_domain_init(struct domain *d)
> >  {
> > -    int ret;
> > -    paddr_t csize;
> > -    paddr_t vbase;
> > -
> >      /*
> >       * The hardware domain and direct-mapped domain both get the
> hardware
> >       * address.
> > @@ -667,8 +663,8 @@ static int vgic_v2_domain_init(struct domain *d)
> >           * aligned to PAGE_SIZE.
> >           */
> >          d->arch.vgic.cbase = vgic_v2_hw.cbase;
> > -        csize = vgic_v2_hw.csize;
> > -        vbase = vgic_v2_hw.vbase;
> > +        d->arch.vgic.csize = vgic_v2_hw.csize;
> > +        d->arch.vgic.vbase = vgic_v2_hw.vbase;
> >      }
> >      else if ( is_domain_direct_mapped(d) )
> >      {
> > @@ -683,8 +679,8 @@ static int vgic_v2_domain_init(struct domain *d)
> >           */
> >          d->arch.vgic.dbase = vgic_v2_hw.dbase;
> >          d->arch.vgic.cbase = vgic_v2_hw.cbase;
> > -        csize = GUEST_GICC_SIZE;
> > -        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> > +        d->arch.vgic.csize = GUEST_GICC_SIZE;
> > +        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> >      }
> >      else
> >      {
> > @@ -697,19 +693,10 @@ static int vgic_v2_domain_init(struct domain *d)
> >          */
> >          BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
> >          d->arch.vgic.cbase = GUEST_GICC_BASE;
> > -        csize = GUEST_GICC_SIZE;
> > -        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> > +        d->arch.vgic.csize = GUEST_GICC_SIZE;
> > +        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> >      }
> >
> > -    /*
> > -     * Map the gic virtual cpu interface in the gic cpu interface
> > -     * region of the guest.
> > -     */
> > -    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
> > -                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
> > -    if ( ret )
> > -        return ret;
> > -
> Maybe adding some comment like "Mapping of the virtual CPU interface is
> deferred until first access"
> would be helpful.

Sure, I will add the comment in v2.

Kind regards,
Henry

> 
> ~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:15:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:15:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485448.752670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMhk-00037n-Hn; Fri, 27 Jan 2023 11:15:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485448.752670; Fri, 27 Jan 2023 11:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMhk-00037g-Eh; Fri, 27 Jan 2023 11:15:36 +0000
Received: by outflank-mailman (input) for mailman id 485448;
 Fri, 27 Jan 2023 11:15:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLMhj-00037V-BA
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:15:35 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ea36d1b5-9e33-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:15:33 +0100 (CET)
Received: by mail-wr1-x42d.google.com with SMTP id n7so4662194wrx.5
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 03:15:33 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 b11-20020adfd1cb000000b002bcaa47bf78sm4106257wrd.26.2023.01.27.03.15.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 03:15:31 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea36d1b5-9e33-11ed-b8d1-410ff93cb8f0
X-Received: by 2002:a05:6000:98d:b0:2a5:6244:329e with SMTP id by13-20020a056000098d00b002a56244329emr32912973wrb.40.1674818132029;
        Fri, 27 Jan 2023 03:15:32 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v6 0/2]  Basic early_printk and smoke test implementation
Date: Fri, 27 Jan 2023 13:15:22 +0200
Message-Id: <cover.1674816429.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- the minimal set of headers and the changes inside them.
- the SBI (RISC-V Supervisor Binary Interface) bits necessary for a basic
  early_printk implementation.
- the pieces needed to set up the stack.
- an early_printk() function that prints plain strings only.
- a RISC-V smoke test which checks that the "Hello from C env" message is
  present in smoke.serial

The patch series is rebased on top of the patch "include/types: move
stddef.h-kind types to common header" [1]

[1] https://lore.kernel.org/xen-devel/5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com/
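
For reference, the pass criterion of the smoke test in patch 2 can be sketched
as below. In the real script, QEMU's serial output is teed into smoke.serial;
the stand-in log written here is only so the check itself is runnable anywhere:

```shell
# Sketch of the smoke-test check (patch 2); the printf is a stand-in
# for QEMU's serial output, which the real script tees into smoke.serial.
printf 'Xen booting...\nHello from C env\n' > smoke.serial

if grep -q "Hello from C env" smoke.serial; then
    result=PASS
else
    result=FAIL
fi
echo "smoke test: ${result}"
```

The grep is deliberately the only pass/fail signal: if the stack setup or the
C entry point is broken, the banner never reaches the serial log.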

---
Changes in V6:
  - Remove patches that were merged upstream:
      * xen/include: Change <asm/types.h> to <xen/types.h>
      * xen/riscv: introduce asm/types.h header file
      * xen/riscv: introduce sbi call to putchar to console
  - Remove the __riscv_cmodel_medany check from early_printk.c
  - Rename the container in test.yaml for .qemu-riscv64.
---
Changes in V5:
  - Code style fixes
  - Remove size_t from asm/types after rebase on top of the patch
    "include/types: move stddef.h-kind types to common header". [1]

    All other types were brought back as they are used in <xen/types.h>
  - Update <xen/early_printk.h> after rebase on top of [1], as size_t was
    moved from <asm/types.h> to <xen/types.h>
  - Remove unneeded <xen/errno.h> from sbi.c
  - Change the #error message for the case where __riscv_cmodel_medany
    isn't defined
---
Changes in V4:
    - Patches "xen/riscv: introduce dummy asm/init.h" and "xen/riscv: introduce
      stack stuff" were removed from the patch series as they were merged
      separately into staging.
    - Remove "depends on RISCV*" from Kconfig.debug as Kconfig.debug is located
      in an arch-specific folder.
    - Fix code style.
    - Add "ifdef __riscv_cmodel_medany" to early_printk.c.
---
Changes in V3:
    - Most of "[PATCH v2 7/8] xen/riscv: print hello message from C env"
      was merged with [PATCH v2 3/6] xen/riscv: introduce stack stuff.
    - "[PATCH v2 7/8] xen/riscv: print hello message from C env" was
      merged with "[PATCH v2 6/8] xen/riscv: introduce early_printk basic
      stuff".
    - "[PATCH v2 5/8] xen/include: include <asm/types.h> in
      <xen/early_printk.h>" was removed as it has been already merged to
      mainline staging.
    - code style fixes.
---
Changes in V2:
    - Update the patches' commit messages according to the mailing
      list comments
    - Remove unneeded types in <asm/types.h>
    - Introduce a definition of STACK_SIZE
    - Order the files alphabetically in the Makefile
    - Add a license to early_printk.c
    - Add a RISCV_32 dependency to config EARLY_PRINTK in Kconfig.debug
    - Move the dockerfile changes to a separate config and send them as a
      separate patch to the mailing list
    - Update test.yaml to wire up the smoke test
---

Oleksii Kurochko (2):
  xen/riscv: introduce early_printk basic stuff
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 ++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 ++++++++++++++
 xen/arch/riscv/Kconfig.debug              |  5 ++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
 xen/arch/riscv/setup.c                    |  4 +++
 7 files changed, 95 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:15:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:15:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485449.752681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMhl-0003Mv-Pd; Fri, 27 Jan 2023 11:15:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485449.752681; Fri, 27 Jan 2023 11:15:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMhl-0003Mo-Mh; Fri, 27 Jan 2023 11:15:37 +0000
Received: by outflank-mailman (input) for mailman id 485449;
 Fri, 27 Jan 2023 11:15:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLMhk-00037V-4g
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:15:36 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eac0ac04-9e33-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:15:33 +0100 (CET)
Received: by mail-wr1-x434.google.com with SMTP id q10so4661539wrm.4
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 03:15:33 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 b11-20020adfd1cb000000b002bcaa47bf78sm4106257wrd.26.2023.01.27.03.15.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 03:15:32 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eac0ac04-9e33-11ed-b8d1-410ff93cb8f0
X-Received: by 2002:adf:e48e:0:b0:2bd:ed75:808c with SMTP id i14-20020adfe48e000000b002bded75808cmr44313463wrm.38.1674818133016;
        Fri, 27 Jan 2023 03:15:33 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v6 1/2] xen/riscv: introduce early_printk basic stuff
Date: Fri, 27 Jan 2023 13:15:23 +0200
Message-Id: <06c2c36bd68b2534c757dc4087476e855253680a.1674816429.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674816429.git.oleksii.kurochko@gmail.com>
References: <cover.1674816429.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Because printk() relies on a serial driver (like the ns16550 driver),
and drivers require working virtual memory (ioremap()), there is no
print functionality early in Xen boot.

This patch introduces the basic parts of early_printk functionality,
which are enough to print "Hello from C env".

Originally early_printk.{c,h} was introduced by Bobby Eshleman
(https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
but some functionality was changed: the early_printk() function differs
from the original because common code isn't being built yet, so there is
no vscnprintf().

This commit adds an early printk implementation built on the SBI putchar
call.

As sbi_console_putchar() is already planned for deprecation, it is used
only temporarily and will be removed or reworked once a real UART driver
is ready.
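
The putchar path described above can be sketched as follows. The ecall shape
(extension ID 0x01 in a7, character in a0) comes from the SBI v0.1 legacy
specification; the host-side buffer fallback and the exact names here are
illustrative stand-ins, not the sbi.c from this series:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative sketch, not the sbi.c from this series: the legacy SBI
 * console_putchar call (SBI v0.1, extension ID 0x01) is an ecall with
 * the character in a0 and the extension ID in a7.  The non-RISC-V
 * branch captures output in a buffer so the sketch runs on any host.
 */
static char sink[64];
static size_t sink_len;

static void sbi_console_putchar(int ch)
{
#ifdef __riscv
    register unsigned long a0 asm("a0") = (unsigned long)ch;
    register unsigned long a7 asm("a7") = 0x01; /* legacy console_putchar */
    asm volatile ( "ecall" : "+r" (a0) : "r" (a7) : "memory" );
#else
    if ( sink_len < sizeof(sink) - 1 )
        sink[sink_len++] = (char)ch;
#endif
}

/* Mirrors early_puts() from this patch: emit CR before each LF. */
static void early_puts(const char *s, size_t nr)
{
    while ( nr-- > 0 )
    {
        if ( *s == '\n' )
            sbi_console_putchar('\r');
        sbi_console_putchar(*s++);
    }
}
```

The '\r' emitted before each '\n' keeps serial terminals from stair-stepping
the output; on real hardware the SBI firmware (e.g. OpenSBI) performs the
actual UART write.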

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
---
Changes in V6:
    - Remove __riscv_cmodel_medany check from early_printk.c
---
Changes in V5:
    - Code style fixes
    - Change the #error message for the case where __riscv_cmodel_medany
      isn't defined
---
Changes in V4:
    - Remove "depends on RISCV*" from Kconfig.debug as it is located in an
      arch-specific folder, so the RISCV configs are enabled by default.
    - Add "ifdef __riscv_cmodel_medany" to be sure that PC-relative addressing
      is used, as the early_*() functions can be called from head.S with the
      MMU off and before relocation (if that is needed at all for RISC-V).
    - Fix code style
---
Changes in V3:
    - reorder headers in alphabetical order
    - merge changes related to start_xen() function from "[PATCH v2 7/8]
      xen/riscv: print hello message from C env" to this patch
    - remove unneeded parentheses in definition of STACK_SIZE
---
Changes in V2:
    - introduce STACK_SIZE define.
    - use consistent padding between instruction mnemonic and operand(s)
---
 xen/arch/riscv/Kconfig.debug              |  5 ++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
 xen/arch/riscv/setup.c                    |  4 +++
 5 files changed, 55 insertions(+)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..608c9ff832 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,5 @@
+config EARLY_PRINTK
+    bool "Enable early printk"
+    default DEBUG
+    help
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index fd916e1004..1a4f1a6015 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..b66a11f1bc
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/early_printk.h>
+#include <asm/sbi.h>
+
+/*
+ * TODO:
+ *   sbi_console_putchar is already planned for deprecation
+ *   so it should be reworked to use UART directly.
+ */
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if ( *s == '\n' )
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while ( *str )
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {}
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 13e24e2fe1..d09ffe1454 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,12 +1,16 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
 void __init noreturn start_xen(void)
 {
+    early_printk("Hello from C env\n");
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:15:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:15:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485450.752690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMhn-0003cf-13; Fri, 27 Jan 2023 11:15:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485450.752690; Fri, 27 Jan 2023 11:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMhm-0003cY-UM; Fri, 27 Jan 2023 11:15:38 +0000
Received: by outflank-mailman (input) for mailman id 485450;
 Fri, 27 Jan 2023 11:15:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLMhl-00037V-4j
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:15:37 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eb35ad3d-9e33-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:15:34 +0100 (CET)
Received: by mail-wr1-x434.google.com with SMTP id t18so4679807wro.1
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 03:15:34 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 b11-20020adfd1cb000000b002bcaa47bf78sm4106257wrd.26.2023.01.27.03.15.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 03:15:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb35ad3d-9e33-11ed-b8d1-410ff93cb8f0
X-Received: by 2002:a05:6000:154d:b0:2bf:d0b4:2ccf with SMTP id 13-20020a056000154d00b002bfd0b42ccfmr2946885wry.37.1674818133936;
        Fri, 27 Jan 2023 03:15:33 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v6 2/2] automation: add RISC-V smoke test
Date: Fri, 27 Jan 2023 13:15:24 +0200
Message-Id: <22314025ad141e44e4cf46c29875af86113e631a.1674816429.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674816429.git.oleksii.kurochko@gmail.com>
References: <cover.1674816429.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the 'Hello from C env' message is present in the log
file, to be sure that the stack is set up and the C part of early
printk is working.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V6:
 - Rename the container in test.yaml for .qemu-riscv64.
---
Changes in V5:
  - Nothing changed
---
Changes in V4:
  - Nothing changed
---
Changes in V3:
  - Nothing changed
  - All comments mentioned by Stefano on the Xen mailing list will be
    addressed in a separate patch outside this patch series.
---
 automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index afd80adfe1..d3c62e0995 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -54,6 +54,19 @@
   tags:
     - x86_64
 
+.qemu-riscv64:
+  extends: .test-jobs-common
+  variables:
+    CONTAINER: archlinux:current-riscv64
+    LOGFILE: qemu-smoke-riscv64.log
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - x86_64
+
 .yocto-test:
   extends: .test-jobs-common
   script:
@@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
   needs:
     - debian-unstable-clang-debug
 
+qemu-smoke-riscv64-gcc:
+  extends: .qemu-riscv64
+  script:
+    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - riscv64-cross-gcc
+
 # Yocto test jobs
 yocto-qemuarm64:
   extends: .yocto-test-arm64
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:15:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:15:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485451.752701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMhv-00041A-A3; Fri, 27 Jan 2023 11:15:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485451.752701; Fri, 27 Jan 2023 11:15:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMhv-000413-5l; Fri, 27 Jan 2023 11:15:47 +0000
Received: by outflank-mailman (input) for mailman id 485451;
 Fri, 27 Jan 2023 11:15:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLMhu-0003zK-0h
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:15:46 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20607.outbound.protection.outlook.com
 [2a01:111:f400:7d00::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f0fa60c8-9e33-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 12:15:44 +0100 (CET)
Received: from AM5PR04CA0004.eurprd04.prod.outlook.com (2603:10a6:206:1::17)
 by AM8PR08MB5729.eurprd08.prod.outlook.com (2603:10a6:20b:1de::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Fri, 27 Jan
 2023 11:15:42 +0000
Received: from AM7EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:1:cafe::c3) by AM5PR04CA0004.outlook.office365.com
 (2603:10a6:206:1::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23 via Frontend
 Transport; Fri, 27 Jan 2023 11:15:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT017.mail.protection.outlook.com (100.127.140.184) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.22 via Frontend Transport; Fri, 27 Jan 2023 11:15:42 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Fri, 27 Jan 2023 11:15:42 +0000
Received: from 4411fa24753b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D751F962-8F4F-436F-9CA2-BF6B06454AC5.1; 
 Fri, 27 Jan 2023 11:15:33 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4411fa24753b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 11:15:32 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM7PR08MB5399.eurprd08.prod.outlook.com (2603:10a6:20b:104::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 11:15:29 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 11:15:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0fa60c8-9e33-11ed-a5d9-ddcf98b90cbd
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Julien Grall <julien@xen.org>
Subject: RE: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and
 p2m_final_teardown()
Thread-Topic: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and
 p2m_final_teardown()
Thread-Index: AQHZKU4fKh9kG5Xa7kmVEqYgCVpcz66nH76AgAbPKgCABD7DYA==
Date: Fri, 27 Jan 2023 11:15:29 +0000
Message-ID:
 <AS8PR08MB7991A2641FCF28C39F0D2FD692CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-4-Henry.Wang@arm.com>
 <d9861060-22ba-5fce-eef6-a7f2ef01526a@amd.com>
 <25264dca-acf6-7ad1-e8a5-a1b893eab30d@xen.org>
In-Reply-To: <25264dca-acf6-7ad1-e8a5-a1b893eab30d@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 8F1ED35A6B17E7499665B327F6AE5C51.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5399
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2c1ce10b-e148-4648-0700-08db0057cc99
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:15:42.4281
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: db815b97-4c82-45ac-9f2e-08db0057d42c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5729

Hi Michal,

> -----Original Message-----
> >>
> >> -    BUG_ON(p2m_teardown(d, false));
> > Because you remove this,
> >>       ASSERT(page_list_empty(&p2m->pages));
> > you no longer need this assert, right?
> I think the ASSERT() is still useful as it at least shows that the pages
> should have been freed before the call to p2m_final_teardown().

I also prefer to keep this ASSERT(), for exactly the same reason as
Julien gives. I think having this ASSERT() will help us avoid
potential mistakes in the future.

May I please ask if you are happy with keeping this ASSERT(), so that I
can carry your Reviewed-by tag? Thanks!

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall
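The exchange above is about keeping the `ASSERT(page_list_empty(&p2m->pages))` check in `p2m_final_teardown()`. A minimal, hypothetical C sketch of the invariant being discussed — the page-list types here are simplified stand-ins, not Xen's real ones:

```c
#include <assert.h>

/* Simplified stand-in for Xen's page list -- hypothetical, not the real type. */
struct page_list { int count; };

static int page_list_empty(const struct page_list *l)
{
    return l->count == 0;
}

struct p2m_domain { struct page_list pages; };

/* p2m_teardown() is expected to return all pages first... */
static void p2m_teardown(struct p2m_domain *p2m)
{
    p2m->pages.count = 0;   /* all pages freed */
}

/* ...so p2m_final_teardown() can assert the list is empty, catching
 * any caller that skipped the earlier teardown step. */
static void p2m_final_teardown(struct p2m_domain *p2m)
{
    assert(page_list_empty(&p2m->pages));
}
```

Keeping the assertion documents the precondition and turns a missed teardown into an immediate failure in debug builds rather than a silent leak.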


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:21:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:21:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485467.752711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMmj-0006GZ-0a; Fri, 27 Jan 2023 11:20:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485467.752711; Fri, 27 Jan 2023 11:20:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMmi-0006GS-U2; Fri, 27 Jan 2023 11:20:44 +0000
Received: by outflank-mailman (input) for mailman id 485467;
 Fri, 27 Jan 2023 11:20:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLMmg-0006GL-LL
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:20:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLMmg-0001ro-8J; Fri, 27 Jan 2023 11:20:42 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=[192.168.15.151]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLMmg-0000dz-0b; Fri, 27 Jan 2023 11:20:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=O1W99WMEM74VQlNA+L0P4P31XxHJMhRxWouSq7umw+s=; b=YOCZ8yDar+7LqzQigsZNLn3TlE
	uc/PlG90Y6G+VJS9q+uOfavKVwNXQvD9nc7IXdOh5aGKNdeMVEyH6rdMe2iwExoPyp/uQh4GNPKgb
	hc1uoeUQvYjpmdzSc9KgUiFXAhl+lNmrsIj8/0ZuHsufVd8fiC+Ph7rmnVLhxFSsE/iw=;
Message-ID: <a729bf36-8c67-ccd4-c787-d62aaf7e24b2@xen.org>
Date: Fri, 27 Jan 2023 11:20:39 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, Michal Orzel <michal.orzel@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-3-Henry.Wang@arm.com>
 <b2822e36-0972-5c4b-90d9-aee6533824b2@amd.com>
 <AS8PR08MB79913487DBC6F434758EAE5A92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB79913487DBC6F434758EAE5A92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 27/01/2023 11:11, Henry Wang wrote:
>> -----Original Message-----
>> From: Michal Orzel <michal.orzel@amd.com>
>> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
>> the first access
>>
>> Hi Henry,
>>
>> On 16/01/2023 02:58, Henry Wang wrote:
>>> Note that the GICv2 changes introduced by this patch are not applied to
>>> the "New vGIC" implementation, as the "New vGIC" is not used. Also since
>> The fact that the "New vGIC" is not used very often and is a work in
>> progress does not mean that we can implement changes that result in a
>> build failure when CONFIG_NEW_VGIC is enabled, which is the case with
>> your patch. So this needs to be fixed.
> 
> I will add the "New vGIC" changes in v2.
> 
>>
>>> @@ -153,6 +153,8 @@ struct vgic_dist {
>>>       /* Base address for guest GIC */
>>>       paddr_t dbase; /* Distributor base address */
>>>       paddr_t cbase; /* CPU interface base address */
>>> +    paddr_t csize; /* CPU interface size */
>>> +    paddr_t vbase; /* virtual CPU interface base address */
>> Could you swap them so that base address variables are grouped?
> 
> Sure, my original thought was grouping the CPU interface related fields but
> since you prefer grouping the base address, I will follow your suggestion.

I would actually prefer your approach because it is easier to associate 
the size with the base.

An alternative would be to use a structure combining the base and size, 
which would make the association even clearer.

I don't have a strong opinion on either of the two approaches I suggested.

Cheers,

-- 
Julien Grall
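The structure-based alternative Julien floats could look like the following hypothetical sketch — the names, the `paddr_t` stand-in, and the field layout are illustrative, not the actual `vgic_dist` definition:

```c
#include <assert.h>

typedef unsigned long paddr_t;   /* stand-in for Xen's paddr_t */

/* Pair a region's base with its size so the association is explicit. */
struct vgic_region {
    paddr_t base;
    paddr_t size;
};

struct vgic_dist_sketch {
    paddr_t dbase;               /* Distributor base address */
    struct vgic_region cpuif;    /* CPU interface base + size */
    paddr_t vbase;               /* virtual CPU interface base address */
};
```

The trade-off is one extra level of naming when accessing the fields (`cpuif.base` rather than `cbase`), in exchange for making the base/size pairing impossible to split apart by later field reordering.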


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:27:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:27:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485474.752721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMsd-00076b-LV; Fri, 27 Jan 2023 11:26:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485474.752721; Fri, 27 Jan 2023 11:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMsd-00076U-Ic; Fri, 27 Jan 2023 11:26:51 +0000
Received: by outflank-mailman (input) for mailman id 485474;
 Fri, 27 Jan 2023 11:26:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLMsc-00076O-7V
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:26:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLMsb-0001y8-9I; Fri, 27 Jan 2023 11:26:49 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=[192.168.15.151]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLMsb-0000u0-3D; Fri, 27 Jan 2023 11:26:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=hFl5q6c+KORc9MJ6+SiJJd3W2y82G42bzfwPnw4ZJbc=; b=PnSVrYQzHzPWSr3n7CEfHGOhua
	A8Z6JsWtAPJaYJIDY38k1t5/Eg6lC97GL30Ico7Yxu4JQoMc5MraQwMnGWavGwVh0kSiDnd7+S0hV
	UA01SdrJ5YaEkGwyoTUdhCCsm6W7BkiOapnyan7/npeJzdExZfZ8nlgfuq05xsLxlhH0=;
Message-ID: <fbab23b9-663e-9516-5721-a92486686f84@xen.org>
Date: Fri, 27 Jan 2023 11:26:46 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v6 1/2] xen/riscv: introduce early_printk basic stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1674816429.git.oleksii.kurochko@gmail.com>
 <06c2c36bd68b2534c757dc4087476e855253680a.1674816429.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <06c2c36bd68b2534c757dc4087476e855253680a.1674816429.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 27/01/2023 11:15, Oleksii Kurochko wrote:
> Because printk() relies on a serial driver (like the ns16550 driver),
> and drivers require working virtual memory (ioremap()), there is no
> print functionality early in Xen boot.
> 
> This patch introduces the basics of early_printk functionality,
> which will be enough to print 'hello from C environment'.
> 
> Originally early_printk.{c,h} was introduced by Bobby Eshleman
> (https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
> but some functionality was changed:
> the early_printk() function differs from the original because common/
> is not built yet, so there is no vscnprintf().
> 
> This commit adds an early printk implementation built on the putc SBI call.
> 
> As sbi_console_putchar() is already planned for deprecation, it is
> used temporarily for now and will be removed or reworked once a real
> UART driver is ready.
> 
> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> ---
> Changes in V6:
>      - Remove __riscv_cmodel_medany check from early_printk.c

Why? I know Andrew believed this is wrong, but I replied with my 
understanding and saw no discussion afterwards explaining why I am 
incorrect.

I am not a maintainer of the code here, but I don't particularly 
appreciate comments being ignored. If there was any discussion on IRC, 
then please summarize it here.

Cheers,

-- 
Julien Grall
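The commit message under discussion describes an early_printk built on the SBI console-putchar call. A hypothetical, testable sketch of the character loop follows, with the SBI ecall replaced by a recording stub so it can run without RISC-V firmware (the real code would issue the legacy SBI_CONSOLE_PUTCHAR ecall instead; the CRLF handling is an assumption about serial-console conventions, not necessarily what the patch does):

```c
#include <assert.h>
#include <string.h>

/* Recording stub standing in for the SBI console-putchar ecall. */
static char out[64];
static int out_pos;

static void sbi_console_putchar(int c)
{
    out[out_pos++] = (char)c;
}

/* Early output loop: one character at a time, no vscnprintf needed. */
static void early_puts(const char *s)
{
    for ( ; *s != '\0'; s++ )
    {
        if ( *s == '\n' )
            sbi_console_putchar('\r');  /* serial consoles expect CRLF */
        sbi_console_putchar(*s);
    }
}
```

Because everything goes through a single putchar primitive, swapping in a real UART driver later only means replacing the low-level function, not the loop.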


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:31:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:31:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485479.752731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMwq-0008Vn-6K; Fri, 27 Jan 2023 11:31:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485479.752731; Fri, 27 Jan 2023 11:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLMwq-0008Vg-2j; Fri, 27 Jan 2023 11:31:12 +0000
Received: by outflank-mailman (input) for mailman id 485479;
 Fri, 27 Jan 2023 11:31:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLMwp-0008Va-5S
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:31:11 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20614.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::614])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16fa9ea8-9e36-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:31:07 +0100 (CET)
Received: from AM9P195CA0015.EURP195.PROD.OUTLOOK.COM (2603:10a6:20b:21f::20)
 by GV2PR08MB8415.eurprd08.prod.outlook.com (2603:10a6:150:ba::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 11:31:04 +0000
Received: from AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:21f:cafe::dd) by AM9P195CA0015.outlook.office365.com
 (2603:10a6:20b:21f::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.34 via Frontend
 Transport; Fri, 27 Jan 2023 11:31:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT025.mail.protection.outlook.com (100.127.140.199) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.22 via Frontend Transport; Fri, 27 Jan 2023 11:31:03 +0000
Received: ("Tessian outbound 0d7b2ab0f13d:v132");
 Fri, 27 Jan 2023 11:31:03 +0000
Received: from 70e776e21b2d.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E3F33F0F-204F-44A3-A24C-7C7C8E0E47A1.1; 
 Fri, 27 Jan 2023 11:30:53 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 70e776e21b2d.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 11:30:53 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB9PR08MB6617.eurprd08.prod.outlook.com (2603:10a6:10:261::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Fri, 27 Jan
 2023 11:30:50 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 11:30:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16fa9ea8-9e36-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TvrLCKSqW/H2c7Qr2QEItyxX61XTnTMZ1d6vVkbNZgQ=;
 b=35NL6ftIIdjmXUgHlQIRBRYzcZhfAU+CJYETpy7W059GoAv4KS7BMTm/IYxESSL+ysztr0mBNt39AkhjureyNKH8XG8nqiIZbP1N59qO0MxgOPueowdU6uSREJRGgv4Eg4mqSC2HUloSxfQalG8Wc0PevTdWRPyHSI8ze/Wz3BA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Thread-Topic: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Thread-Index: AQHZKU4eIUricNo750ej72nDJ2giuq6nF8cAgAsT2vCAAASHgIAAAULQ
Date: Fri, 27 Jan 2023 11:30:50 +0000
Message-ID:
 <AS8PR08MB799127D46D09BCFEF9A0392192CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-3-Henry.Wang@arm.com>
 <b2822e36-0972-5c4b-90d9-aee6533824b2@amd.com>
 <AS8PR08MB79913487DBC6F434758EAE5A92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <a729bf36-8c67-ccd4-c787-d62aaf7e24b2@xen.org>
In-Reply-To: <a729bf36-8c67-ccd4-c787-d62aaf7e24b2@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 42F48898BF258D41B62FCA0355764BCB.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB9PR08MB6617:EE_|AM7EUR03FT025:EE_|GV2PR08MB8415:EE_
X-MS-Office365-Filtering-Correlation-Id: 110071a6-f6a6-4376-b788-08db0059f946
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6617
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3f417748-75dc-4b1b-7077-08db0059f136
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:31:03.6682
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 110071a6-f6a6-4376-b788-08db0059f946
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8415

SGkgSnVsaWVuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IEp1bGll
biBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+IFN1YmplY3Q6IFJlOiBbUEFUQ0ggMi8zXSB4ZW4v
YXJtOiBEZWZlciBHSUN2MiBDUFUgaW50ZXJmYWNlIG1hcHBpbmcgdW50aWwNCj4gdGhlIGZpcnN0
IGFjY2Vzcw0KPiANCj4gSGksDQo+IA0KPiA+Pj4gQEAgLTE1Myw2ICsxNTMsOCBAQCBzdHJ1Y3Qg
dmdpY19kaXN0IHsNCj4gPj4+ICAgICAgIC8qIEJhc2UgYWRkcmVzcyBmb3IgZ3Vlc3QgR0lDICov
DQo+ID4+PiAgICAgICBwYWRkcl90IGRiYXNlOyAvKiBEaXN0cmlidXRvciBiYXNlIGFkZHJlc3Mg
Ki8NCj4gPj4+ICAgICAgIHBhZGRyX3QgY2Jhc2U7IC8qIENQVSBpbnRlcmZhY2UgYmFzZSBhZGRy
ZXNzICovDQo+ID4+PiArICAgIHBhZGRyX3QgY3NpemU7IC8qIENQVSBpbnRlcmZhY2Ugc2l6ZSAq
Lw0KPiA+Pj4gKyAgICBwYWRkcl90IHZiYXNlOyAvKiB2aXJ0dWFsIENQVSBpbnRlcmZhY2UgYmFz
ZSBhZGRyZXNzICovDQo+ID4+IENvdWxkIHlvdSBzd2FwIHRoZW0gc28gdGhhdCBiYXNlIGFkZHJl
c3MgdmFyaWFibGVzIGFyZSBncm91cGVkPw0KPiA+DQo+ID4gU3VyZSwgbXkgb3JpZ2luYWwgdGhv
dWdodCB3YXMgZ3JvdXBpbmcgdGhlIENQVSBpbnRlcmZhY2UgcmVsYXRlZCBmaWVsZHMgYnV0DQo+
ID4gc2luY2UgeW91IHByZWZlciBncm91cGluZyB0aGUgYmFzZSBhZGRyZXNzLCBJIHdpbGwgZm9s
bG93IHlvdXIgc3VnZ2VzdGlvbi4NCj4gDQo+IEkgd291bGQgYWN0dWFsbHkgcHJlZmVyIHlvdXIg
YXBwcm9hY2ggYmVjYXVzZSBpdCBpcyBlYXNpZXIgdG8gYXNzb2NpYXRlDQo+IHRoZSBzaXplIHdp
dGggdGhlIGJhc2UuDQo+IA0KPiBBbiBhbHRlcm5hdGl2ZSB3b3VsZCBiZSB0byB1c2UgYSBzdHJ1
Y3R1cmUgdG8gY29tYmluZSB0aGUgYmFzZS9zaXplLiBTbw0KPiBpdCBpcyBldmVuIGNsZWFyZXIg
dGhlIGFzc29jaWF0aW9uLg0KPiANCj4gSSBkb24ndCBoYXZlIGEgc3Ryb25nIG9waW5pb24gb24g
ZWl0aGVyIG9mIHRoZSB0d28gYXBwcm9hY2ggSSBzdWdnZXN0ZWQuDQoNCk1heWJlIHdlIGNhbiBk
byBzb21ldGhpbmcgbGlrZSB0aGlzOg0KYGBgDQpwYWRkcl90IGRiYXNlOyAvKiBEaXN0cmlidXRv
ciBiYXNlIGFkZHJlc3MgKi8NCnBhZGRyX3QgdmJhc2U7IC8qIHZpcnR1YWwgQ1BVIGludGVyZmFj
ZSBiYXNlIGFkZHJlc3MgKi8NCnBhZGRyX3QgY2Jhc2U7IC8qIENQVSBpbnRlcmZhY2UgYmFzZSBh
ZGRyZXNzICovDQpwYWRkcl90IGNzaXplOyAvKiBDUFUgaW50ZXJmYWNlIHNpemUgKi8gICAgDQpg
YGANCg0KU28gd2UgY2FuIGVuc3VyZSBib3RoICJiYXNlIGFkZHJlc3MgdmFyaWFibGVzIGFyZSBn
cm91cGVkIiBhbmQNCiJDUFUgaW50ZXJmYWNlIHZhcmlhYmxlcyBhcmUgZ3JvdXBlZCIuDQoNCklm
IHlvdSBkb24ndCBsaWtlIHRoaXMsIEkgd291bGQgcHJlZmVyIHRoZSB3YXkgSSBhbSBjdXJyZW50
bHkgZG9pbmcsIGFzDQpwZXJzb25hbGx5IEkgdGhpbmsgYW4gZXh0cmEgc3RydWN0dXJlIHdvdWxk
IHNsaWdodGx5IGJlIGFuIG92ZXJraWxsIDopDQoNClRoYW5rcy4NCg0KS2luZCByZWdhcmRzLA0K
SGVucnkNCg0KPiANCj4gQ2hlZXJzLA0KPiANCj4gLS0NCj4gSnVsaWVuIEdyYWxsDQo=


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:35:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:35:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485485.752741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN1A-0000rs-OK; Fri, 27 Jan 2023 11:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485485.752741; Fri, 27 Jan 2023 11:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN1A-0000rl-KH; Fri, 27 Jan 2023 11:35:40 +0000
Received: by outflank-mailman (input) for mailman id 485485;
 Fri, 27 Jan 2023 11:35:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLN19-0000rd-FA
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:35:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLN19-0002Ac-2b; Fri, 27 Jan 2023 11:35:39 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=[192.168.15.151]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLN18-0001FQ-Sy; Fri, 27 Jan 2023 11:35:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ONPcW5Rwkp1ZgIhf8zY8FmKLkRgXYF90Amoqzb6mXUw=; b=ViiRxEeY9GS0WQG5Dx1gjIrrTu
	d8QbVRErt9RtWyFuFxf72Y9YuU77jLrMNgax1j7vgtyFHe5hU5xAaZlTcgELoy8dB7ChLysg7qy6K
	kTRlVTB2c9sCUV08kPfpBFZHer8vG5drLZujaWCEhFRIFXAY9Z3Tc2trv8nL/BSGAyeI=;
Message-ID: <ed09cf44-cb7b-6713-6ea4-ac38e80b3549@xen.org>
Date: Fri, 27 Jan 2023 11:35:36 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, Michal Orzel <michal.orzel@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-3-Henry.Wang@arm.com>
 <b2822e36-0972-5c4b-90d9-aee6533824b2@amd.com>
 <AS8PR08MB79913487DBC6F434758EAE5A92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <a729bf36-8c67-ccd4-c787-d62aaf7e24b2@xen.org>
 <AS8PR08MB799127D46D09BCFEF9A0392192CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB799127D46D09BCFEF9A0392192CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 27/01/2023 11:30, Henry Wang wrote:
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
>> the first access
>>
>> Hi,
>>
>>>>> @@ -153,6 +153,8 @@ struct vgic_dist {
>>>>>        /* Base address for guest GIC */
>>>>>        paddr_t dbase; /* Distributor base address */
>>>>>        paddr_t cbase; /* CPU interface base address */
>>>>> +    paddr_t csize; /* CPU interface size */
>>>>> +    paddr_t vbase; /* virtual CPU interface base address */
>>>> Could you swap them so that base address variables are grouped?
>>>
>>> Sure, my original thought was grouping the CPU interface related fields but
>>> since you prefer grouping the base address, I will follow your suggestion.
>>
>> I would actually prefer your approach because it is easier to associate
>> the size with the base.
>>
>> An alternative would be to use a structure to combine the base/size. So
>> it is even clearer the association.
>>
>> I don't have a strong opinion on either of the two approach I suggested.
> 
> Maybe we can do something like this:
> ```
> paddr_t dbase; /* Distributor base address */
> paddr_t vbase; /* virtual CPU interface base address */
> paddr_t cbase; /* CPU interface base address */
> paddr_t csize; /* CPU interface size */
> ```
> 
> So we can ensure both "base address variables are grouped" and
> "CPU interface variables are grouped".
> 
> If you don't like this, I would prefer the way I am currently doing, as
> personally I think an extra structure would slightly be an overkill :)

This is really a matter of taste here. My preference is your initial 
approach because I find it strange to have the virtual CPU interface 
information in the middle of the physical one.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485490.752750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN4s-0001VI-6U; Fri, 27 Jan 2023 11:39:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485490.752750; Fri, 27 Jan 2023 11:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN4s-0001VB-3o; Fri, 27 Jan 2023 11:39:30 +0000
Received: by outflank-mailman (input) for mailman id 485490;
 Fri, 27 Jan 2023 11:39:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLN4q-0001Uz-C6
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:39:28 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 40ef7ac6-9e37-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 12:39:27 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id
 q10-20020a1cf30a000000b003db0edfdb74so5329006wmq.1
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 03:39:27 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 x10-20020a05600c420a00b003c6b70a4d69sm3919529wmh.42.2023.01.27.03.39.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 03:39:25 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40ef7ac6-9e37-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=93xHdobg99NHM1CBAealBqb3ezQMD/tc5tDFhGtXYqU=;
        b=YD+3oSg57fsqTmqLNx/QPsH0gMWuiIY7q+Ott6LU6odo0ENwcCR4SAAAJSy4o2JoUq
         O74X3TH3doIFvdu7n8B0e+uHZSeOEGNJXYpV6EHatl76RKUG0hoCxjbRGUTr6TwOyQlt
         KHIuNONWg5+zQKT2nXhBj64D+IlFK8OlVdoYxYfgVIhG51UR3nSFtWLSinvOyrDnCF24
         QVIJx8PM5kJ4iwOmfrC6Bfywn5IvrvqD/U0p/ku4ZqQYJ88zO7C3bwTyJ9jflKwJJUsV
         m1mLCsZcCyeRzaq4qWXr7VkY606x9bVq4wVWKOpZeLVLNOIsgmNVEqs44MTvYQ0qisae
         9eEQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=93xHdobg99NHM1CBAealBqb3ezQMD/tc5tDFhGtXYqU=;
        b=Kkn2g9oJIluNU+ZgwpTxhvIMP0gkXZfyfnlAOKnpS29Eqs/bi05zbbM9FdRWFhKqGI
         Do1OJgXGV/XYTLSzbPYu3MrJpA6u4mQ+pjERBNeCjYm4RukFLewloAapPAfcw8KZg6du
         SaLQ6d+IhqOGklwRRaHRlxSOTD9CXPa/21eZfOVyukuHKPRt4fG8snl34QdeWvONRF0l
         E20GWqm+Zkib2ZEZ9cbI5E8rvfINsgBJLb3CDI4WgrEihWOoIOmTlIFfeVBToWyQaDhP
         AUA3xwzYTZQPU8I1eC1vKtgct6hJzk5pdezzoXFjPyTvh8cWsku4dh8QWXJ4PZPFw3st
         HVng==
X-Gm-Message-State: AFqh2kpM2ZR2RJ1EtKuQ6dNht1VXOn9sRVY2bP/t4qqZRmD7IIeK1gnh
	b0hU0GgvR4/mJmAVtoW4vl78wkXeDbg=
X-Google-Smtp-Source: AMrXdXsIYjHN7H9ZA+TDIJLR8oocWmd64DZrSW8Hp8H3UKmsOpUzXNREi3DRKipbdGtWU5rE5QCm4w==
X-Received: by 2002:a05:600c:1e1d:b0:3cf:674a:aefe with SMTP id ay29-20020a05600c1e1d00b003cf674aaefemr38575760wmb.22.1674819566250;
        Fri, 27 Jan 2023 03:39:26 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v7 0/2] Basic early_printk and smoke test implementation
Date: Fri, 27 Jan 2023 13:39:13 +0200
Message-Id: <cover.1674819203.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- the minimal set of headers and changes inside them.
- SBI (RISC-V Supervisor Binary Interface) things necessary for basic
  early_printk implementation.
- things needed to set up the stack.
- early_printk() function to print only strings.
- RISC-V smoke test which checks that the "Hello from C env" message is
  present in the captured serial log (smoke.serial)

The patch series is rebased on top of the patch "include/types: move
stddef.h-kind types to common header" [1]

[1] https://lore.kernel.org/xen-devel/5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com/

---
Changes in V7:
  - Fix dependency for qemu-smoke-riscv64-gcc job
---
Changes in V6:
  - Remove patches that were merged to upstream:
      * xen/include: Change <asm/types.h> to <xen/types.h>
      * xen/riscv: introduce asm/types.h header file
      * xen/riscv: introduce sbi call to putchar to console
  - Remove __riscv_cmodel_medany check from early_printk.c
  - Rename container name in test.yaml for .qemu-riscv64.
---
Changes in V5:
  - Code style fixes
  - Remove size_t from asm/types after rebase on top of the patch
    "include/types: move stddef.h-kind types to common header". [1]

    All the other types were kept, as they are used in <xen/types.h>
  - Update <xen/early_printk.h> after rebase on top of [1] as size_t was moved from
    <asm/types.h> to <xen/types.h>
  - Remove unneeded <xen/errno.h> from sbi.c
  - Change the #error message for the case where __riscv_cmodel_medany isn't defined
---
Changes in V4:
    - Patches "xen/riscv: introduce dummy asm/init.h" and "xen/riscv: introduce
      stack stuff" were removed from the patch series as they were merged separately
      into staging.
    - Remove "depends on RISCV*" from Kconfig.debug as Kconfig.debug is located
      in arch specific folder.
    - fix code style.
    - Add "ifdef __riscv_cmodel_medany" to early_printk.c.  
---
Changes in V3:
    - Most of "[PATCH v2 7/8] xen/riscv: print hello message from C env"
      was merged with [PATCH v2 3/6] xen/riscv: introduce stack stuff.
    - "[PATCH v2 7/8] xen/riscv: print hello message from C env" was
      merged with "[PATCH v2 6/8] xen/riscv: introduce early_printk basic
      stuff".
    - "[PATCH v2 5/8] xen/include: include <asm/types.h> in
      <xen/early_printk.h>" was removed as it has been already merged to
      mainline staging.
    - code style fixes.
---
Changes in V2:
    - update patch commit messages according to the mailing list
      comments
    - Remove unneeded types in <asm/types.h>
    - Introduce definition of STACK_SIZE
    - order the files alphabetically in Makefile
    - Add license to early_printk.c
    - Add RISCV_32 dependency to config EARLY_PRINTK in Kconfig.debug
    - Move dockerfile changes to separate config and sent them as
      separate patch to mailing list.
    - Update test.yaml to wire up smoke test
---

Oleksii Kurochko (2):
  xen/riscv: introduce early_printk basic stuff
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 ++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 ++++++++++++++
 xen/arch/riscv/Kconfig.debug              |  5 ++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
 xen/arch/riscv/setup.c                    |  4 +++
 7 files changed, 95 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485491.752761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN4u-0001kp-Hz; Fri, 27 Jan 2023 11:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485491.752761; Fri, 27 Jan 2023 11:39:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN4u-0001ki-FE; Fri, 27 Jan 2023 11:39:32 +0000
Received: by outflank-mailman (input) for mailman id 485491;
 Fri, 27 Jan 2023 11:39:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLN4r-0001Uz-5Y
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:39:29 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 41f36b44-9e37-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 12:39:28 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id
 q10-20020a1cf30a000000b003db0edfdb74so5329050wmq.1
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 03:39:28 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 x10-20020a05600c420a00b003c6b70a4d69sm3919529wmh.42.2023.01.27.03.39.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 03:39:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41f36b44-9e37-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Uv27LIdYCOpQaEPTODxpqyn21y70RSmO8lj9poeyTD0=;
        b=Vdcw6Z3aVdiU3Sn0P/j/aSOsJ9zToHiDE2D9yOE3xe4E/80BGnJkcnR1iPVVpbFhBR
         XQ5GjH9sixS+9yUDeOJpSMRnNlYIE1tJq3JAGBVOr52DU2ViwVlZzeE65dPa6wiO0/V6
         HragHpODHCvSJfO9+xHA/DqsLlpnvYPshDVYmy3T2XuHURPSCNROp0FHvUuxhPZJGHk8
         4KC172vuszXLlC//m5UjMMPUue53kyYE+mOKmeqN7MCAU9A/ea2/VxApHF1GURxcfIIT
         BMYJ/l+LbO2GM9yGSFVGcKf21cGgC8ZdEKsWqgg/ySBzyCEbe3GqprxrBMZNY1/TQA65
         1tlA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Uv27LIdYCOpQaEPTODxpqyn21y70RSmO8lj9poeyTD0=;
        b=CfVPS/5Xe8+14XH6vv6N5XnY6AqDVZHBSSA/94sWDEnB1JuhVDyAWtW52mOvDa50gD
         ufKghsZf//XvEZA53vNQZG7SD3iMZswfzG6lLnai7usLsa89IwLP5GGmqsR/o6o2Olb6
         NvMGOliXS8lfFQPFDASUa/o/xS8srObt5Vnvd5KeVGJhhBQS4nMWBg7y9doUTAKRWNue
         cO0oEM1Ja7zYrc/e9otygxvwkXZmJs+1oPZ4QFjyioWOfABqjNsEb0jzw4/l6CG/G19F
         FX5zq9ucqS7V17YYv5zy5sg/CKCHaqsB003JFgfNLeOsOAdJvQs5i1hFyMURoVyNI7Kt
         z1pA==
X-Gm-Message-State: AO0yUKWndmJhhuIL6YMlCJgtYkwAV55OLETjxtgVUOJwkYmyNxF1QPVc
	ErfT8BaPXfnSY+byHzIK71S8vgTfLSw=
X-Google-Smtp-Source: AK7set9HjLKVcPhRNaIz/MVicozKNYe4geIJ1zxFzuF28gioPDQALBXCWh1g5BsZxvff99m08PHbVg==
X-Received: by 2002:a05:600c:3501:b0:3dc:19d1:3c1f with SMTP id h1-20020a05600c350100b003dc19d13c1fmr10629718wmq.30.1674819568009;
        Fri, 27 Jan 2023 03:39:28 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v7 2/2] automation: add RISC-V smoke test
Date: Fri, 27 Jan 2023 13:39:15 +0200
Message-Id: <e2d722a5f3fffc5708c1cc99efad63ab04d25ec3.1674819203.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674819203.git.oleksii.kurochko@gmail.com>
References: <cover.1674819203.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the message 'Hello from C env' is present in the
log file, to be sure that the stack is set up and the C part of
early printk is working.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V7:
 - Fix dependency for qemu-smoke-riscv64-gcc job
---
Changes in V6:
 - Rename container name in test.yaml for .qemu-riscv64.
---
Changes in V5:
  - Nothing changed
---
Changes in V4:
  - Nothing changed
---
Changes in V3:
  - Nothing changed
  - All the comments mentioned by Stefano on the Xen mailing list will
    be addressed in a separate patch outside this patch series.
---
 automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index afd80adfe1..4dbe1b8af7 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -54,6 +54,19 @@
   tags:
     - x86_64
 
+.qemu-riscv64:
+  extends: .test-jobs-common
+  variables:
+    CONTAINER: archlinux:current-riscv64
+    LOGFILE: qemu-smoke-riscv64.log
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - x86_64
+
 .yocto-test:
   extends: .test-jobs-common
   script:
@@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
   needs:
     - debian-unstable-clang-debug
 
+qemu-smoke-riscv64-gcc:
+  extends: .qemu-riscv64
+  script:
+    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - .gcc-riscv64-cross-build
+
 # Yocto test jobs
 yocto-qemuarm64:
   extends: .yocto-test-arm64
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485492.752766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN4u-0001nj-S5; Fri, 27 Jan 2023 11:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485492.752766; Fri, 27 Jan 2023 11:39:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN4u-0001nO-NV; Fri, 27 Jan 2023 11:39:32 +0000
Received: by outflank-mailman (input) for mailman id 485492;
 Fri, 27 Jan 2023 11:39:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLN4s-0001VA-EN
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:39:30 +0000
Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com
 [2a00:1450:4864:20::332])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 418b6290-9e37-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:39:28 +0100 (CET)
Received: by mail-wm1-x332.google.com with SMTP id l8so3239842wms.3
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 03:39:28 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 x10-20020a05600c420a00b003c6b70a4d69sm3919529wmh.42.2023.01.27.03.39.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 03:39:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 418b6290-9e37-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kD9WTDzKXvdc7hcj/tM3aNt9EYeu8WRQj4/RhYxhaQs=;
        b=fiuBhFv3Q1w6K1BLC1mZXHcPqytGRc5HF9W2Jov993P3ucMGjedlzzFnQzwpkgmY6B
         JoloqYCghreioF3HjMMSOOs/PomMn3N3IcHFswthvzvhqM00nFeKzGlOe/HQrO/E1I2p
         7tQG04+5e/5AHAZ3mHmduuuYs8fpJqZ8F6xlD+S5zIyZ7lqrIVwTlccFcmS56FZKCxoS
         7/eAKN5P9XOkHhkfjv4b893bRwZshkQtg9JSEHmFje4bJK9nrOKS8v3q0L7pfoEOK/f1
         CqnuvXtdNV6xJ+uXriY+Jt14AFoM1ggcbe0DLz7LATbGRGLFvimdYu5wEcs7SOmpDm5a
         T9FQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=kD9WTDzKXvdc7hcj/tM3aNt9EYeu8WRQj4/RhYxhaQs=;
        b=LqOLUtUeoFWMmcTiDz6jwcu3G/2/8iShR7VJKs0F1fTV/qoJ9wa0xE8vXMqRv4PyrZ
         shOeiX3cpEIbSWvv7Wge/hfwjW4hyVMznkCTZ0IxWlOdc7Truvh3p/nG2BTrVntgm4FO
         mi7Ysv9T6mnmbNMsmo1xlcpGDSxlRE0ypvDQ7c3Vztf+O6bvrnwRboMYHPw2gOiVqzyp
         Yc58m2J3zW04rNxCWrC11qr21TEIhfjJwul8ow7EBX3vrOXXy0SN0BqlX+J26mJ4VSbB
         ptdt5aHztzlKTp1phuWjfY1W2j4mV9jy0nYtD/eqGy+xDcDxBkgp0/8MV1zEgisGsD4M
         HMGg==
X-Gm-Message-State: AO0yUKXMnu6l1jG+LgRIDpDOB8UY5iNR7FOO2zIkBI+j/DT41j+u5MzQ
	vl5q+Rn4AXv6IukWCUdPpHhkI/t7CRg=
X-Google-Smtp-Source: AK7set/d68ZSvlXnJnOPBgabJIFxl+Gzsa1TZe8Nq5jnlvFxiIE9QQ6jIjoMC6/yY5gu/q6/f5B0ig==
X-Received: by 2002:a05:600c:19c8:b0:3dc:353c:8b44 with SMTP id u8-20020a05600c19c800b003dc353c8b44mr2106455wmq.5.1674819567139;
        Fri, 27 Jan 2023 03:39:27 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v7 1/2] xen/riscv: introduce early_printk basic stuff
Date: Fri, 27 Jan 2023 13:39:14 +0200
Message-Id: <06c2c36bd68b2534c757dc4087476e855253680a.1674819203.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674819203.git.oleksii.kurochko@gmail.com>
References: <cover.1674819203.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Because printk() relies on a serial driver (like the ns16550 driver),
and drivers require working virtual memory (ioremap()), there is no
print functionality early in Xen boot.

The patch introduces the basic parts of the early_printk
functionality, which are enough to print 'Hello from C env'.

Originally early_printk.{c,h} was introduced by Bobby Eshleman
(https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
but some functionality was changed:
the early_printk() function differs from the original because common
code isn't built yet, so vscnprintf() is not available.

This commit adds early printk implementation built on the putc SBI call.

As sbi_console_putchar() is already planned for deprecation, it is
used only temporarily and will be removed or reworked once a real
UART driver is ready.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
---
Changes in V7:
    - Nothing was changed
---
Changes in V6:
    - Remove __riscv_cmodel_medany check from early_printk.c
---
Changes in V5:
    - Code style fixes
    - Change the #error message for the case where
      __riscv_cmodel_medany isn't defined
---
Changes in V4:
    - Remove "depends on RISCV*" from Kconfig.debug as it is located in
      the arch-specific folder, so the RISCV configs are enabled by
      default.
    - Add "ifdef __riscv_cmodel_medany" to be sure that PC-relative
      addressing is used, as the early_*() functions can be called from
      head.S with the MMU off and before relocation (if relocation is
      needed at all for RISC-V)
    - fix code style
---
Changes in V3:
    - reorder headers in alphabetical order
    - merge changes related to start_xen() function from "[PATCH v2 7/8]
      xen/riscv: print hello message from C env" to this patch
    - remove unneeded parentheses in definition of STACK_SIZE
---
Changes in V2:
    - introduce STACK_SIZE define.
    - use consistent padding between instruction mnemonic and operand(s)
---
 xen/arch/riscv/Kconfig.debug              |  5 ++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
 xen/arch/riscv/setup.c                    |  4 +++
 5 files changed, 55 insertions(+)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..608c9ff832 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,5 @@
+config EARLY_PRINTK
+    bool "Enable early printk"
+    default DEBUG
+    help
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index fd916e1004..1a4f1a6015 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..b66a11f1bc
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/early_printk.h>
+#include <asm/sbi.h>
+
+/*
+ * TODO:
+ *   sbi_console_putchar is already planned for deprecation
+ *   so it should be reworked to use UART directly.
+ */
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if ( *s == '\n' )
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while ( *str )
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {}
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 13e24e2fe1..d09ffe1454 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,12 +1,16 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
 void __init noreturn start_xen(void)
 {
+    early_printk("Hello from C env\n");
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:39:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:39:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485498.752781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN5E-0002qV-3D; Fri, 27 Jan 2023 11:39:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485498.752781; Fri, 27 Jan 2023 11:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN5D-0002qH-W0; Fri, 27 Jan 2023 11:39:51 +0000
Received: by outflank-mailman (input) for mailman id 485498;
 Fri, 27 Jan 2023 11:39:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLN5C-0001VA-Lw
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:39:50 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2049.outbound.protection.outlook.com [40.107.15.49])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4dc76280-9e37-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:39:48 +0100 (CET)
Received: from DB3PR06CA0006.eurprd06.prod.outlook.com (2603:10a6:8:1::19) by
 AS2PR08MB10054.eurprd08.prod.outlook.com (2603:10a6:20b:649::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.17; Fri, 27 Jan
 2023 11:39:47 +0000
Received: from DBAEUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:1:cafe::db) by DB3PR06CA0006.outlook.office365.com
 (2603:10a6:8:1::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23 via Frontend
 Transport; Fri, 27 Jan 2023 11:39:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT062.mail.protection.outlook.com (100.127.142.64) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.25 via Frontend Transport; Fri, 27 Jan 2023 11:39:46 +0000
Received: ("Tessian outbound b1d3ffe56e73:v132");
 Fri, 27 Jan 2023 11:39:46 +0000
Received: from 4c3590cb03cd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 550DF01F-B657-453F-BFA4-711DF4424243.1; 
 Fri, 27 Jan 2023 11:39:37 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4c3590cb03cd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 11:39:37 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB8735.eurprd08.prod.outlook.com (2603:10a6:20b:563::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Fri, 27 Jan
 2023 11:39:33 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 11:39:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4dc76280-9e37-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RfodZ64Uf/SDdmV+PCiWIuFLK5VSWHLzT2tcYByaAcA=;
 b=9Cnv9c0MU15ix2qvV43cMlFs4loahaf25rQGdD4dHIN+fOsXm+pZuLG/G7sRJweDwSzQsHyJlvQD+rLIM7zZEBQVu6n6OYlgue3R6i16rp3tM5zjGjULj13yLvG4ALlDTFD7M0Ot1MhSTjlZHm5K1atG+fQEc6C4APOiJOFczng=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YLAnR+iAJjQiT6eOlqO8rnMu7G/He/p9abF9Rq7aGw6tPWkEo/4i3YeIEWO+NoCyzXur9wejkJM0FoRq740jNwCjHs4FGwEmza2s4w9pFKZMt0Akla2BQoBPkSz7wWTQ5JVA3QncxDgahJjmfbsRQk6MXwNzx15TNIhNqWNARyBQzXTKQrRfo+r7Wa/UftA/KwGRYOWd6QKXex0o0mSaF/qVqhEvT3dqPMsjgHySn2YXSlpI7Ova3f/WlgUIwKwF39q7K9dqGI+OT1ACbbqy03gP5ejDHIbQfiFp1cVm6zY8uClA8l1wGLKL5oL7Nv9uSk1NKIN9rzcmtL+zb8oGaA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RfodZ64Uf/SDdmV+PCiWIuFLK5VSWHLzT2tcYByaAcA=;
 b=QsFktrjytGQbsK5AcTooBTwbZtbuzz/Ael6sPgG6wFO007hqn4kIXwzr8258kK9yG3uEFQLgbvg+UNIlAMXG/N9TXVRSZd42ELYM++l0hb8GcaA20zyQh31xxSwpUGvwYfQfll0RGfUFOWWAhOUnUnAgiI1C8PpXFLKilwK8/y90toAPb05CyhPIvBxb/Tmhxk/M5oHsMM+MELsQcuy3QI7T5eYJXxE6eHHQ9dK102eLmyYnbg0H8DuiTQTQneviXOChiu2ETrgkLIwiBe77blFJcnNJkPRYmNJiLslKgbAFd2zCbNlpeOkeeS3Y107muz1l7/F5dDJN7t3mDtnjwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RfodZ64Uf/SDdmV+PCiWIuFLK5VSWHLzT2tcYByaAcA=;
 b=9Cnv9c0MU15ix2qvV43cMlFs4loahaf25rQGdD4dHIN+fOsXm+pZuLG/G7sRJweDwSzQsHyJlvQD+rLIM7zZEBQVu6n6OYlgue3R6i16rp3tM5zjGjULj13yLvG4ALlDTFD7M0Ot1MhSTjlZHm5K1atG+fQEc6C4APOiJOFczng=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Thread-Topic: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Thread-Index:
 AQHZKU4eIUricNo750ej72nDJ2giuq6nF8cAgAsT2vCAAASHgIAAAULQgAAC6wCAAAA1gA==
Date: Fri, 27 Jan 2023 11:39:32 +0000
Message-ID:
 <AS8PR08MB7991EA180B325C6E58B0157F92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-3-Henry.Wang@arm.com>
 <b2822e36-0972-5c4b-90d9-aee6533824b2@amd.com>
 <AS8PR08MB79913487DBC6F434758EAE5A92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <a729bf36-8c67-ccd4-c787-d62aaf7e24b2@xen.org>
 <AS8PR08MB799127D46D09BCFEF9A0392192CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <ed09cf44-cb7b-6713-6ea4-ac38e80b3549@xen.org>
In-Reply-To: <ed09cf44-cb7b-6713-6ea4-ac38e80b3549@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: C327ABF5DD689948BBD8543CCDEB539B.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB8735:EE_|DBAEUR03FT062:EE_|AS2PR08MB10054:EE_
X-MS-Office365-Filtering-Correlation-Id: e4481670-57ab-4e24-ab93-08db005b3113
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 5RWcS3AWyItnstCss+Ld+B0Lni75Ywvyn8rt7ziimcoWxNVvRsetNbr4CiDNTJSPU3vu7mkWH4e9lU6inAw6bcJ82cjoiY+fdJ5xkcgdducXr6lz05UIrh8PkXDumV8ZudC3MljE4NkuSGXnSAty+7dFrFQRCP1x08It6srDyXWHeLiDWmwb3YL5rL0eu4f6jKCDhU1AYuT8W2ChJlaoEOxkaMqr2wa3CAnkoh++2AmMrvwhcp0Jh0lA3s3HTds72SaL+qp4qkgxAYuxc6acOZ597Js+56g0AYRCKIed+BiJ3tumugo9fBSo2BPg+gfCdERKW1491F1H1E81IKpYb9LFAUJ1VeiCYBtmHrgjR9dfhCFiGNpO58K605BwhUpATogSk4ys6SIbtbzGB+iQTRbB4fA+nL4+5T2dFSskBjSYDiFW3cDT36DG07Wbd3mwe4CAgXrTGmIbpp8qwuegumn0Hys9W8gYPS28fVMHNCdfqd76vG/ZO4i1AeWJoHMa5N9DgRX71vg06Gzi0FrDfOQP5fDDB/WO3YLVxLCweft3T6Fm57Ug4muvbY1t7t/Cs7bGoLd4aFQqsVADiXw0SKU7DxJWA+Q4+Bg7reTvfHMf3N42jZc3wlnbVW5lIhbNCPYEdQ6/50wlc0b4/aKFNQBqFzM/eBuPH2CnDURyuVoilVOoFitbfvz4eOmlfsE7Lujn3y+k5HLI045I0LQfEg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(376002)(396003)(136003)(366004)(346002)(39860400002)(451199018)(2906002)(8936002)(5660300002)(8676002)(4326008)(41300700001)(64756008)(33656002)(66946007)(83380400001)(76116006)(66446008)(66476007)(66556008)(52536014)(9686003)(186003)(26005)(478600001)(316002)(6506007)(55016003)(38070700005)(86362001)(122000001)(71200400001)(54906003)(110136005)(38100700002)(7696005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8735
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6ed41414-f7b6-4984-8822-08db005b28d1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	g8qo3NLwlgI09UH+j61FAzQw/fyVoBusS2AiM1v4wPLSWgSAmbeI6d1FCidIBNg5hn9SSM3GFBxoF3aataRZQlzW44THNEE9ktOCCy21gfma6FQn+F7DAFA6NOq+ISja8Eyf+klTyDX+83zQpyidnhydI5mB49o19LtXK2rARsLkCMBoYqlg9X5GLQrvN5wKfAkF4OveD+KWb02PfzTkuWY5PNQcKhpDAf6Y3HE4rVf/Gd+YOIjwIW7eJsQd7d+NKHi59Nce5wZW4HQmJzdLQVkJo4fGD2YnpD4JT1QmNUbwnFYdkc/GkFVfrCjZjTLUsry4gercYvNuFbOuisqQZXsB+4dFwFCQpKuSVFdbVpIsAbiMqUl3VQk3odsPRQt2EwD3PXNOOTaq4AbMyJOOEqc6hVgHdlSLCuKGGiUIhtk9ILS9Q6oP7GuOWzUgkYFAOpwzpJ/c806ZMZjYG0EeiuMAEczkkX/nvpSXW7LJuUuilWoIpoCdBCC2QiI76wT270hp4hofeVHqDQr9K1Va49YGsKIvmYr2WckqrRCxF0W8by7LN+aPBAMJU/OG1EwePqrpil0mS46632WqvnuJ7VXzyWHqngbH5ggcObKKLhhMkWXVog8US8ZNAUC6OuJbx+pe9b07sGEq3RKdOQAe0muUsPL3tGeRh0RQTEAOasqyuVN3be99UgAMnOXMCp4RtW4DAG/BNWxg44uc4tnMPw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(346002)(136003)(376002)(396003)(451199018)(46966006)(40470700004)(36840700001)(9686003)(186003)(478600001)(107886003)(26005)(4326008)(70586007)(70206006)(7696005)(6506007)(8936002)(336012)(47076005)(52536014)(5660300002)(41300700001)(36860700001)(83380400001)(2906002)(8676002)(40480700001)(55016003)(33656002)(356005)(82310400005)(81166007)(82740400003)(110136005)(86362001)(54906003)(316002)(40460700003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:39:46.8453
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e4481670-57ab-4e24-ab93-08db005b3113
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB10054

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
> the first access
> >>>>> @@ -153,6 +153,8 @@ struct vgic_dist {
> >>>>>        /* Base address for guest GIC */
> >>>>>        paddr_t dbase; /* Distributor base address */
> >>>>>        paddr_t cbase; /* CPU interface base address */
> >>>>> +    paddr_t csize; /* CPU interface size */
> >>>>> +    paddr_t vbase; /* virtual CPU interface base address */
> >>>> Could you swap them so that base address variables are grouped?
> >>>
> >>> Sure, my original thought was grouping the CPU interface related fields
> but
> >>> since you prefer grouping the base address, I will follow your suggestion.
> >>
> >> I would actually prefer your approach because it is easier to associate
> >> the size with the base.
> >>
> >> An alternative would be to use a structure to combine the base/size. So
> >> it is even clearer the association.
> >>
> >> I don't have a strong opinion on either of the two approach I suggested.
> >
> > Maybe we can do something like this:
> > ```
> > paddr_t dbase; /* Distributor base address */
> > paddr_t vbase; /* virtual CPU interface base address */
> > paddr_t cbase; /* CPU interface base address */
> > paddr_t csize; /* CPU interface size */
> > ```
> >
> > So we can ensure both "base address variables are grouped" and
> > "CPU interface variables are grouped".
> >
> > If you don't like this, I would prefer the way I am currently doing, as
> > personally I think an extra structure would slightly be an overkill :)
> 
> This is really a matter of taste here.

Indeed,

> My preference is your initial
> approach because I find strange to have virtual CPU interface
> information the physical one.

then I will keep it as it is if there is no strong objection from Michal.

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall
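The thread on this patch weighs two layouts for the new struct vgic_dist fields (grouping by base address vs. by CPU interface) against a third option: a structure combining each base with its size. A minimal C sketch of that structure-based alternative, with hypothetical type and field names (this is not the code that was merged, and paddr_t is stubbed out for illustration):

```c
#include <stdint.h>

/* Stand-in for Xen's paddr_t; a 64-bit physical address on Arm. */
typedef uint64_t paddr_t;

/* Hypothetical sketch of Julien's "structure to combine the base/size"
 * suggestion -- field names are invented for this example. */
struct vgic_region {
    paddr_t base;
    paddr_t size;
};

struct vgic_dist_layout {
    struct vgic_region dist;  /* Distributor */
    struct vgic_region cpuif; /* physical CPU interface (cbase/csize) */
    paddr_t vbase;            /* virtual CPU interface base address */
};
```

Grouping base and size per region makes the association explicit at the cost of an extra type, which is the trade-off the thread ultimately settles by keeping the flat fields.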


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:41:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485510.752790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN6N-0004VU-DB; Fri, 27 Jan 2023 11:41:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485510.752790; Fri, 27 Jan 2023 11:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN6N-0004VN-AO; Fri, 27 Jan 2023 11:41:03 +0000
Received: by outflank-mailman (input) for mailman id 485510;
 Fri, 27 Jan 2023 11:41:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLN6L-0001VA-Pm
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:41:01 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0601.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7843a840-9e37-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:41:00 +0100 (CET)
Received: from DU2PR04CA0313.eurprd04.prod.outlook.com (2603:10a6:10:2b5::18)
 by AM8PR08MB6499.eurprd08.prod.outlook.com (2603:10a6:20b:317::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 11:40:57 +0000
Received: from DBAEUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b5:cafe::2b) by DU2PR04CA0313.outlook.office365.com
 (2603:10a6:10:2b5::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22 via Frontend
 Transport; Fri, 27 Jan 2023 11:40:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT004.mail.protection.outlook.com (100.127.142.103) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.21 via Frontend Transport; Fri, 27 Jan 2023 11:40:57 +0000
Received: ("Tessian outbound b1d3ffe56e73:v132");
 Fri, 27 Jan 2023 11:40:57 +0000
Received: from 4773e5d45a32.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 68CF4482-67CA-4F03-8E55-CF6E48E0EF45.1; 
 Fri, 27 Jan 2023 11:40:51 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4773e5d45a32.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 11:40:51 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM8PR08MB5585.eurprd08.prod.outlook.com (2603:10a6:20b:1c5::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 11:40:49 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 11:40:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7843a840-9e37-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YXV1oY9xm7h5NPYs046MBxQOeZR1Sy82smO3XqCajas=;
 b=zHAQCTL7vJl/+EyFoIi+bQxUwVFHuhC5AkV9y1Gi7cEAa997iVcFS0Hz9XWd1VeWeoZr8UAtmBo6CdlMe0wvn0Cu4SppXbuiEacGErtCktrmrAFN2EwfYQ5p82NcCQcHg2X8CYVHg49A27RY/vyr9PyOBeqA6BJyoNf9I7KnmuM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cyFmpT360gM87IlOUR7yspyNO53JgMrbAWfpec3aruWI3lNBbEzLnxkSYPvw0d0aa2lxJZ3MVC8iZI7lqi/uqwX/L/KcRFfUyjggywj9C7nVuRhYo7dTGoS3Dp6d5EoXc9C3GSEyVQqCwAXGd9Km3ZtrkU7VirmzHyNXHJ01Go+0FLaCmxz8v5u9dZgdf64U9c1oGt8d2ZvHPm0AA/tW+KUxGXrYs+ClOlm8SuqlUBYbKcrCWyQGIYJsGJKBbJCdhW0pJeZKqZGoopaSSpBgnFHfzCUS4/bpJSxJknc3FrIxXvHCmqQ7A+akRL5++SBqEfSfJjhGzqkwLEzDoaQNVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YXV1oY9xm7h5NPYs046MBxQOeZR1Sy82smO3XqCajas=;
 b=RHXZgNz3TS4GHu43u6LOeEhybixle94PLWAx4SDOHFWHgZ5QiE2y7mc0vPzfHQZCHCBfBrwaDN7pB2gj3FfkEBzMcYgozVeRbxKS1iDTgi9xyR6tkmslvpwT4AG3CKbDkrN3I+d8FRo09u6i3vLPG3QnLiUpe7pYtWMUF1LTrOi83hvjSblhThtiLgP0EB8ViVimTl3cpoAfW0ZiKl/AoPE07XAdTQtV9xteof7BBeK3muEFFKlGzslXKi4lExUK44MlqNSorXorl5izSSzQJmGt184hNPMz3i/i1mmCmz1T2JQxzj4/TSrjsHelxJiXefG3CykDogwrzHc+K6dRoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YXV1oY9xm7h5NPYs046MBxQOeZR1Sy82smO3XqCajas=;
 b=zHAQCTL7vJl/+EyFoIi+bQxUwVFHuhC5AkV9y1Gi7cEAa997iVcFS0Hz9XWd1VeWeoZr8UAtmBo6CdlMe0wvn0Cu4SppXbuiEacGErtCktrmrAFN2EwfYQ5p82NcCQcHg2X8CYVHg49A27RY/vyr9PyOBeqA6BJyoNf9I7KnmuM=
From: Henry Wang <Henry.Wang@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@eu.citrix.com>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
Subject: RE: [PATCH] Changelog: Add details about new features for SPR
Thread-Topic: [PATCH] Changelog: Add details about new features for SPR
Thread-Index: AQHZLRqZNJwiiBkqnku+FjPpsLRxRa6yKKGw
Date: Fri, 27 Jan 2023 11:40:49 +0000
Message-ID:
 <AS8PR08MB79918B0D0329A2B722B773EB92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230120220004.7456-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230120220004.7456-1-andrew.cooper3@citrix.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 4ADB29284D25B54B9F8BC3377AA29FC4.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AM8PR08MB5585:EE_|DBAEUR03FT004:EE_|AM8PR08MB6499:EE_
X-MS-Office365-Filtering-Correlation-Id: b48bea97-cbb8-440a-3b8c-08db005b5b4f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 X0c6cufnb+jk/da9qFU3D8FV7xCDen4jyM80RGi+rZ3d8APUM552FTivZZrHB9oLN2e9emqSJf6z9/gP8DHc7YrLwy0iLs5um2NF7dq80Itj49Ef7OGi5YpWSCbdSgI6dHfUL+A0PFcSYH8lbX7DDNCsXuB0CUNjpzsVkRNQjtMgVj3/xWH1/Hf2tBsQ8KhnHXkbDoRS2gvY2ylfWs3+7NHZ+Wrgty0DTGNhj9RcE62zLXVeBkJ6oc2OQjwMMHsrB9+vLBBSITvb85/dThwXxZS2TkXvQzb3yKOoWl2j7StvaP0CDGSrwbUlXNUDsNgrJBLHAfqFcif9eM5MYygpUSBD6MoUPZZ2LgPEo1YDV/YMQTUZmuuHoTPspUXijHDkdIgLbfCYUdX/EpWAu/6ZDwZB+g/qyCq55GFrDMcBtUYlm41adQd/qbvJbjw3uiKWup2+RIQbIhMhp7PxK31x+VSYj+z/zS/M3W7KYu2dic1GiG4cbwhuGMBoMxb6cM7bgbc6y15EMDt9JA1YQx7lrwu7wMaQFKp5sWeiQ3+hrejDWaelclniyAPcs9LDKvFiq0Ue9vG9P5kV8u3xGcu2I4/nFjIb4+EPGY7eoEaRkkrz2vQVucHc/0hFuNNns0PbDYWTJzkNjzpTKW96pFk2lveXCB+AcPQoaO9ZKp03KOZyJwWT24Vg4EmarSPymQnQq8gvriJcueDvPVtgu2lJzxIRrmBDzYqtGnUN2ElF7Sxa80PTzvQUPz6SjzJ3jU+r+dl8S/xFnQN93GOx2BcikRXNOF6Tlui2QwK1s6DlE2o=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(366004)(396003)(136003)(346002)(376002)(39860400002)(451199018)(71200400001)(41300700001)(7696005)(38100700002)(122000001)(54906003)(86362001)(4744005)(8936002)(110136005)(316002)(5660300002)(38070700005)(52536014)(33656002)(55016003)(2906002)(83380400001)(9686003)(26005)(186003)(8676002)(4326008)(6506007)(76116006)(478600001)(66556008)(66476007)(66946007)(66446008)(64756008)(59356011)(207903002)(219803003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5585
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c4889867-9b5c-46ae-c0b1-08db005b567f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0QXcvtzUWe3MOgS0fPiw7gt8Cp2+evBKQFY7okzYMXEGOVEzPmspJksxnLzuECzjfNN+MlpMMaPdV2U0bMlWdP9Uc1gmGbSYM+LQiDCsGQ3He5KRfnLt/PePRUvHbEmGBfz1JSZ3UUPkp/+BnO5Ex0p5EvUtPofCMhokEW2CIw2SX2INYjHDwJX30vJ9CDNrhG0RmKMjQ+RssO73nMX4AdkQ6BtlYBUB1kVWJSj5o/ekufC1c1RwZK1dEim/IJToTQFa4mR4rEEf1ocDJIlIZcLsTbgI+k2k0REuOle/KF9DfxJyXwAcQEjB6yIHfXsHWfihHsuWjms8xBVAOXzu0NNxom3cOq8DFCsOdoQTVI8bq8RPM7z6Uf4yKnY2r2k8rZncW56VE17MyUmAM48yG8dJ0rEYD2hTq4owj0RKHSYXzuGQOyq1E2AhH9D3VchjogsYvxlD14amsU4Gtx6/j8xZZnJ4oep+HN59a66lx0/BLIbiqwzrg0BbSM9VtE8tuIS4D4q608yTUXDaD1qdi74dptp1bKxKoIUsupxz1yncYWZKlc362QdG4XNqL8Di6Bd2WWUxu+xemZdD9M4xZiW4jn6TjdAR3kUjemaIAUXEMbKl9GLLe/SRKYVrftP8hoZBsl75Ny7/JlFi0Z/KQMlMG1qaddBNtYoKvlqPbbWkm27AbHJwMFWVZfbAvo0uXeh6oQOEAOJ7ZqcWngeUPxEiF08jbbnHKE9dvYxxMBxRjADaipM2oP1icjo1SLVPZG0eV1gT9zLpGLdb5aQheUFcOnjtQEdj4FlwI2IId0I=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(376002)(396003)(346002)(39860400002)(136003)(451199018)(46966006)(36840700001)(40470700004)(4326008)(41300700001)(70206006)(8676002)(70586007)(83380400001)(316002)(52536014)(8936002)(5660300002)(110136005)(55016003)(81166007)(40460700003)(356005)(4744005)(40480700001)(2906002)(33656002)(9686003)(36860700001)(54906003)(47076005)(86362001)(7696005)(478600001)(336012)(82740400003)(6506007)(186003)(82310400005)(26005)(59356011)(219803003)(207903002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:40:57.7000
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b48bea97-cbb8-440a-3b8c-08db005b5b4f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6499

Hi Andrew,

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Subject: [PATCH] Changelog: Add details about new features for SPR
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks for remembering this :)

Acked-by: Henry Wang <Henry.Wang@arm.com>

> ---
> A reminder to everyone, write the changelog as it happens, rather than
> scrambling to remember 8 months of development just as the release is
> happening.

I wonder if there is a way to automate this in our CI so we can avoid
forgetting this. But currently I am not really sure if the solution in my
mind is simple enough... I will try to keep this issue in my mind so that
probably I can come back with some solutions.

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:44:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485516.752801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN9B-0005Ez-UH; Fri, 27 Jan 2023 11:43:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485516.752801; Fri, 27 Jan 2023 11:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLN9B-0005Es-RX; Fri, 27 Jan 2023 11:43:57 +0000
Received: by outflank-mailman (input) for mailman id 485516;
 Fri, 27 Jan 2023 11:43:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLN9B-0005Ek-1T
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:43:57 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2072.outbound.protection.outlook.com [40.107.104.72])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e0ae37ec-9e37-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:43:55 +0100 (CET)
Received: from AS9PR04CA0046.eurprd04.prod.outlook.com (2603:10a6:20b:46a::26)
 by GV1PR08MB7876.eurprd08.prod.outlook.com (2603:10a6:150:5f::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Fri, 27 Jan
 2023 11:43:49 +0000
Received: from AM7EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:46a:cafe::36) by AS9PR04CA0046.outlook.office365.com
 (2603:10a6:20b:46a::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23 via Frontend
 Transport; Fri, 27 Jan 2023 11:43:49 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT014.mail.protection.outlook.com (100.127.140.163) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.22 via Frontend Transport; Fri, 27 Jan 2023 11:43:49 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Fri, 27 Jan 2023 11:43:48 +0000
Received: from 9fd30c872713.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 877A6A2B-C1D4-4776-AF54-18D5574531DE.1; 
 Fri, 27 Jan 2023 11:43:43 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9fd30c872713.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 11:43:43 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DBBPR08MB6297.eurprd08.prod.outlook.com (2603:10a6:10:20b::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Fri, 27 Jan
 2023 11:43:41 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 11:43:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0ae37ec-9e37-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0E9/pU3fuaw+nh0NzIeuODgm/QU38osue3b8VFLQ5Gs=;
 b=I2ZoBcmEzY/ZgDhEPfa/bWPvHI4JFVUZsIk8+6r9QVIvkvSbQ4EixOMOxPgU9iD6wZk3HeTpUMo76UsVGN2UAyKArzFm6be+uCgKmvnZdIRQOzZP4WE6yvBGlcevIGBSITv1+YlnJVOJ4EY3NH55JBKI5UHtOnGrh6owySYyRhM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SCh+pyQTm/dUr2N4L48rhAUSbA6+i8T7M6cCSKBGw2G7f4zMDn1MUAbKZJfPCLEd0dxuHWaRe8pOFf50WJdt1GtowEqeLuFnW6Z+qaU8+Akdk3BaVJGZ1IvYHAEHGWD62AjdJcLYT51lvMrXeIT4sPj5ektpUQ6mrjIMN6JX3x7EvgLjPtrDv6Lug0M54l/hlUJkJJzT1Dhnx0XGPYU46Z1ETI2ier6zt8UJhgY1+CjLKqB0YvoMsZ6mpstjb/Czcl0UaDHM80I5YxDPccqTbecoCMVAQVlrTUsiTWOUuYWNAX5LNL8P01VlWnJ2P+qPPnji39quyAPDwUWDnmObqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0E9/pU3fuaw+nh0NzIeuODgm/QU38osue3b8VFLQ5Gs=;
 b=ShWRHAJdkyevaHWHsGlSrPbUaUKYQQyZOLfqRIFpRKTT6jHwAkqnINMtjM2/I12Ax88f8E2u3WOp1K7HnniZZCaNvk4vKXEQSgsWuNlAtw6CPhg3oSPsgJefT2IhQd6wZoKyP8h9flF6Fz2SB0/YGzL0TkAFqjdjbzMWeviiVLgx0bf794FUE1Wqi1kjJhsGaqAJiT89Xx9ECZVtz1a8rTTkuZJjVIqWSXQUCjuXtYjtHWIi80v0B5Y+4IIiN1jfwqyNbgs6ZWSYqjg1rQ+kvgXNdNxCfed6QZB0VFuXek+aRXgmngrFDSQcj30oxmermGqJlhDN+CeSexiP9xuZjw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0E9/pU3fuaw+nh0NzIeuODgm/QU38osue3b8VFLQ5Gs=;
 b=I2ZoBcmEzY/ZgDhEPfa/bWPvHI4JFVUZsIk8+6r9QVIvkvSbQ4EixOMOxPgU9iD6wZk3HeTpUMo76UsVGN2UAyKArzFm6be+uCgKmvnZdIRQOzZP4WE6yvBGlcevIGBSITv1+YlnJVOJ4EY3NH55JBKI5UHtOnGrh6owySYyRhM=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Thread-Topic: [PATCH v2 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Thread-Index: AQHZD2qaLVIiQ3Z13EK3E/FV+D1Sxa6vQQGAgAMop3A=
Date: Fri, 27 Jan 2023 11:43:41 +0000
Message-ID:
 <AS8PR08MB79910E7BC6B095415CAE07FA92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221214031654.2815589-1-Henry.Wang@arm.com>
 <20221214031654.2815589-2-Henry.Wang@arm.com>
 <bb52731d-94b3-694b-8038-8c87dd986654@xen.org>
In-Reply-To: <bb52731d-94b3-694b-8038-8c87dd986654@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 5A1F901BEE41AB438AE2D6A6A80F8A78.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DBBPR08MB6297:EE_|AM7EUR03FT014:EE_|GV1PR08MB7876:EE_
X-MS-Office365-Filtering-Correlation-Id: 5b967224-a736-4519-ffc1-08db005bc176
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 rr01SAyVckYiQeTkk8FbcXon/f4Bg7seHJaOU4kc69hIFH/bCEO+1LNfTB/YmM1nCMcku4s/FKIniNftX1yNws9Upel3dq3BrbysXb9njBNusm4iWzUW2BhdTXbzGjW0kigvdjjNW6BXztkG/UKDYxIRmp5BBSVIQC9C+LKDCdFCuZlXqy+5e+cVFjcMWJmPPABGQThRlpPUJA/l9Kg/h+HVmLCbF7o0zZ8Sz7XvxIxqo15dZdaD+SI8mmXwSrFRr6BuyuF+RTu8rik2vKG62DbUv4dJrZ1kpdfBid2rDt7prW9iuGNVq6PaNVRXFAy49JhFF6HPiTnAPRpRlPjUA9Xpn6nkIbPjZsg1UAuPCnXqlCOybTT0YzFTeB7UF526ms4ahrHsCVgaXhQ+FVwAgi0K4ru0BEOk7zW6ug9U6axpHIhD+xMPT+MyARiKcLgT1d9RFTjO2WPq6S7xRfZjqJSleOT0lIRVN2zWE9SmbJz0JMwbitU/YlSTv0jv/OCrlEhNkacVKhOq4rl2zgO5uXaL6Lo0cW90fBLGp0zllzfRBzN5QOwSztpZn9JUsDwWnyOJ8Rr4FXGknhxiNiJAp4QIZnfK/5g8BoCABe3q71pBnzPWyHbaCBjdkY76a5YYem3CjIvAFagemmVwI1AU8rfwc3a5hKF4PYvoQR76+vzu7XrL6cx7Nxoklu/or0ZITGemZ5NTGQWQgJ0SaL8iRw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(396003)(376002)(346002)(136003)(39860400002)(366004)(451199018)(5660300002)(8936002)(2906002)(41300700001)(4326008)(66946007)(66476007)(66556008)(66446008)(8676002)(64756008)(54906003)(316002)(110136005)(186003)(9686003)(26005)(71200400001)(52536014)(76116006)(53546011)(33656002)(7696005)(6506007)(478600001)(83380400001)(122000001)(86362001)(38070700005)(38100700002)(55016003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6297
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d9a28e0e-7ca7-4582-316e-08db005bbd20
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	D87RSzcxQfO8dEAeUA9tF+BmvyHvk8pHg0haCmbJTci9P06Zfo/h7dc92NfyP940j5qmevijb/64jfkVK0QS10uDsBn3XeQmvaiy2R8F/zR37S3N3jAPTgiSg5HabjFTTJSOJCeoL7Zku/2YrYBFN6GpBAW05KMMFY0xwisxvO5YNvkMI7tOutTU8gCHB+DeeQ9c9StOKNEbVdYCc0mJ+GA98tzHFt1FEDuUVVRLS/qkCc1Yl2kHQ3Bx8+H7zZ0qmUtvisehiYVJhlKIzKuENicG5thorwHBjcr/smHQoQHgLCwidWStvDtW5QWg1oZk45MNopLBroW0cbzNlvpkxcKBeyTcfzR5lZcjOs7+VbHvVFtOBP0RrNnVJ4Dih7EfrrqUbew0GIyweVjLPa8z61BSIe0CwO4yIvDPZktSFCLCU7Ew28Ehj27VRZL2ipURGmUtAYMqpRoTO3/fpCpRmMQMG/i21Iqk/KTAR9No5KMqXVva8g7j1Ie3WnxadaO6LvcR/AqZhw9neiKguWgYxJ+xsWvuhImxbwnagSs8jsOdZcIa2AtzaJXCMzRM28l90YvVPs3RBeHty3dvK0CJg3ebqruVz2CbOjnLePDS6t1T8SIeNNtmYZy3twQsHBylN/K4zJxt7NjhwGzAfx17ldRIZaL78H8DoMavJGSdi/hjU51fn1BD10/RfHTbDYU5pBzougrToRVYdR34Qt2pzg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(346002)(39860400002)(376002)(396003)(136003)(451199018)(36840700001)(46966006)(40470700004)(478600001)(186003)(83380400001)(336012)(82310400005)(6506007)(33656002)(40460700003)(9686003)(316002)(81166007)(53546011)(55016003)(41300700001)(7696005)(8676002)(2906002)(5660300002)(36860700001)(8936002)(54906003)(26005)(4326008)(70586007)(70206006)(107886003)(86362001)(40480700001)(356005)(52536014)(47076005)(82740400003)(110136005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:43:49.0257
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b967224-a736-4519-ffc1-08db005bc176
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7876

SGkgSnVsaWVuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IFN1YmplY3Q6IFJl
OiBbUEFUQ0ggdjIgMS8zXSB4ZW4vYXJtOiBBZGQgbWVtb3J5IG92ZXJsYXAgY2hlY2sgZm9yDQo+
IGJvb3RpbmZvLnJlc2VydmVkX21lbQ0KPiANCj4gSGkgSGVucnksDQo+IA0KPiBPbiAxNC8xMi8y
MDIyIDAzOjE2LCBIZW5yeSBXYW5nIHdyb3RlOg0KPiA+DQo+ID4gK3N0YXRpYyBpbnQgX19pbml0
IG1lbWluZm9fb3ZlcmxhcF9jaGVjayhzdHJ1Y3QgbWVtaW5mbyAqbWVtaW5mbywNCj4gPiArICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhZGRyX3QgcmVnaW9uX3N0YXJ0
LA0KPiA+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFkZHJfdCBy
ZWdpb25fZW5kKQ0KPiANCj4gSSBhbSBzdGFydGluZyB0byBkaXNsaWtlIHRoZSB1c2Ugb2YgJ2Vu
ZCcgZm9yIGEgY291cGxlIG9mIHJlYXNvbnM6DQo+ICAgIDEpIEl0IG5ldmVyIGNsZWFyIHdoZXRo
ZXIgdGhpcyBpcyBpbmNsdXNpdmUgb3IgZXhjbHVzaXZlDQo+ICAgIDIpIFdoZW4gaXQgaXMgZXhj
bHVzaXZlLCB0aGlzIGRvZXNuJ3QgcHJvcGVybHkgd29yayBpZiB0aGUgcmVnaW9uDQo+IGZpbmlz
aCBhdCAoMl42NCAtIDEpIGFzICdlbmQnIHdvdWxkIGJlIDANCg0KR29vZCBwb2ludHMhDQoNCj4g
DQo+IEkgaGF2ZSBzdGFydGVkIHRvIGNsZWFuLXVwIHRoZSBBcm0gY29kZSB0byBhdm9pZCBhbGwg
dGhvc2UgaXNzdWVzLiBTbw0KPiBmb3IgbmV3IGNvZGUsIEkgd291bGQgcmF0aGVyIHByZWZlciBp
ZiB3ZSB1c2UgJ3N0YXJ0JyBhbmQgJ3NpemUnIHRvDQo+IGRlc2NyaWJlIGEgcmVnaW9uLg0KDQpT
byBJIHdpbGwgdXNlICdzdGFydCcgYW5kICdzaXplJyBpbiB2My4NCg0KPiANCj4gPiArLyoNCj4g
PiArICogR2l2ZW4gYW4gaW5wdXQgcGh5c2ljYWwgYWRkcmVzcyByYW5nZSwgY2hlY2sgaWYgdGhp
cyByYW5nZSBpcyBvdmVybGFwcGluZw0KPiA+ICsgKiB3aXRoIHRoZSBleGlzdGluZyByZXNlcnZl
ZCBtZW1vcnkgcmVnaW9ucyBkZWZpbmVkIGluIGJvb3RpbmZvLg0KPiA+ICsgKiBSZXR1cm4gMCBp
ZiB0aGUgaW5wdXQgcGh5c2ljYWwgYWRkcmVzcyByYW5nZSBpcyBub3Qgb3ZlcmxhcHBpbmcgd2l0
aCBhbnkNCj4gPiArICogZXhpc3RpbmcgcmVzZXJ2ZWQgbWVtb3J5IHJlZ2lvbnMsIG90aGVyd2lz
ZSAtRUlOVkFMLg0KPiA+ICsgKi8NCj4gPiAraW50IF9faW5pdCBjaGVja19yZXNlcnZlZF9yZWdp
b25zX292ZXJsYXAocGFkZHJfdCByZWdpb25fc3RhcnQsDQo+ID4gKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHBhZGRyX3QgcmVnaW9uX3NpemUpDQo+IA0KPiBOb25l
IG9mIHRoZSBjYWxsZXIgc2VlbXMgdG8gY2FyZSBhYm91dCB0aGUgcmV0dXJuIChvdGhlciB0aGFu
IGl0IGlzDQo+IGZhaWxpbmcgb3Igbm90KS4gU28gSSB3b3VsZCBwcmVmZXIgaWYgdGhpcyByZXR1
cm5zIGEgYm9vbGVhbiB0byBpbmRpY2F0ZQ0KPiB3aGV0aGVyIHRoZSBjaGVjayBwYXNzIG9yIG5v
dC4NCg0KU3VyZSwgd2lsbCBmaXggaW4gdjMuDQoNCktpbmQgcmVnYXJkcywNCkhlbnJ5DQo=


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:47:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:47:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485522.752811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNCt-00062g-Dp; Fri, 27 Jan 2023 11:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485522.752811; Fri, 27 Jan 2023 11:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNCt-00062Z-BC; Fri, 27 Jan 2023 11:47:47 +0000
Received: by outflank-mailman (input) for mailman id 485522;
 Fri, 27 Jan 2023 11:47:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLNCs-00062T-Kv
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:47:46 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::611])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6953e2b0-9e38-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:47:44 +0100 (CET)
Received: from AS9PR06CA0070.eurprd06.prod.outlook.com (2603:10a6:20b:464::21)
 by AS2PR08MB9834.eurprd08.prod.outlook.com (2603:10a6:20b:605::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.21; Fri, 27 Jan
 2023 11:47:42 +0000
Received: from AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:464:cafe::2b) by AS9PR06CA0070.outlook.office365.com
 (2603:10a6:20b:464::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23 via Frontend
 Transport; Fri, 27 Jan 2023 11:47:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT005.mail.protection.outlook.com (100.127.140.218) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.25 via Frontend Transport; Fri, 27 Jan 2023 11:47:41 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Fri, 27 Jan 2023 11:47:41 +0000
Received: from 830692fff205.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 16FD01B2-4282-411C-ABCD-C5AE7FF2B129.1; 
 Fri, 27 Jan 2023 11:47:35 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 830692fff205.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 11:47:35 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB9346.eurprd08.prod.outlook.com (2603:10a6:10:41e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Fri, 27 Jan
 2023 11:47:33 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 11:47:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6953e2b0-9e38-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LNfAuDjB9ohOLCAlbAU9zSwBfbCvIFOHixWRJVF6lto=;
 b=Au1dsnPM6sbdX8xwBi629NQKtwQ1qtDsF09/FvT6qfX4SwrPFYXCEL8BeJHwLf2/1bsbp5acaZ8y6Ju6mrac/q8kxBwwbqTMP35Ydjsr3SdkFlgEdD5TZIT2GfqlUlPA57DjZE2l6tr83EZ00qWL+NFnIkBiZnOP/Y46s8Cb7Pw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Xi/nvl8/5SNu/2quC1r5gsSSlmAhB5oKHDwYbl1m77KsxpiGfFvjfPIHTs+VnTRkfUIaJNsuTO0DYZW3t7RLcCt9PwE3v/waHfJ0CnQJyDmcJRMXc4U21l2ZeSkM+wGZz4KVsi8vt44K7UtLqMgSkY+kVlGvPwxOOYvmmNlyNUGuEkK7k7G0FVNavkc1VrcRKaTIl9uMYIF0er2xM1rsOmQ4zEfZbNA5FBfBrr7BujDiDbY9kMZTiy1p8Kp3EgIuu3AB/GDHd6Hq61UUMTSF9PQUAybFiqweHrVqvnLjNLrLsTEb/HTp/HhT/UZTndWsJIOwbMlTUwSfzzvwgcTu1Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LNfAuDjB9ohOLCAlbAU9zSwBfbCvIFOHixWRJVF6lto=;
 b=W4/8vj5tsY8jJjXvIq939L/OuvPJxf7pn23qUeV0HrEIKRRbVBtlfr/ISl7Qjr4HwtNkoJFbCmqM3uhp16tgj0f6ubMmtKT/SL87hi3B67nHuM+38YOxLM0TgDns5RetH82ZCmf0rLZCu04GYf/oAFNrQaQCfsBknue9mcCK3puuhWi/gUnprXLu/dKA54+FYTztUObtJ7K7H0PgGhVRz41n26ov4+GNa9SCAzyDzEG40+IfhMslqYbJHgedzNGX0oI2fblQ/Ey/bVO7UHyjF2axhE5rJK3esBZA45v9UHKiBAKILwtph0hw1LnTM6C0v2IrZ2q2BUPAkJawSXRtQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LNfAuDjB9ohOLCAlbAU9zSwBfbCvIFOHixWRJVF6lto=;
 b=Au1dsnPM6sbdX8xwBi629NQKtwQ1qtDsF09/FvT6qfX4SwrPFYXCEL8BeJHwLf2/1bsbp5acaZ8y6Ju6mrac/q8kxBwwbqTMP35Ydjsr3SdkFlgEdD5TZIT2GfqlUlPA57DjZE2l6tr83EZ00qWL+NFnIkBiZnOP/Y46s8Cb7Pw=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Thread-Topic: [PATCH v2 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Thread-Index: AQHZD2qaLVIiQ3Z13EK3E/FV+D1Sxa6vQakAgAMoxuA=
Date: Fri, 27 Jan 2023 11:47:33 +0000
Message-ID:
 <AS8PR08MB7991323841685D68FD648B7292CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20221214031654.2815589-1-Henry.Wang@arm.com>
 <20221214031654.2815589-2-Henry.Wang@arm.com>
 <e86d2b48-2da7-f21c-d191-85615a934c81@xen.org>
In-Reply-To: <e86d2b48-2da7-f21c-d191-85615a934c81@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 739D5CA5CAF57D40BBD8EA37F8273760.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB9346:EE_|AM7EUR03FT005:EE_|AS2PR08MB9834:EE_
X-MS-Office365-Filtering-Correlation-Id: b78d6bfe-688d-494f-6544-08db005c4bfb
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 VFHRTnzIAmtkeNVSmtOH7nff1OqWbxV1upD8lfouD/t/hEpTnQR08lrm3riNRJyMD5ZHqhv81+OdsdJpG9Cqu6VLNZTpuwtllH+8ERfDHo38dkQoSBN7Bt224DcZY4CwG+rvr/yg7e8c/ccPrexYAtsG+qAWlFb+GuMFIIwaE+2U3HlnChM+TCENidFDFc6s2eTTn1+HDPCoK1uhbfivkwZZ8FO6HdiVLxba7AavGlyqeYPjoG+xNdV4+BglvC2iCCLuxytKTFFHo5BtQPoJl48/crxR+PHYHIhVwh4KBkPbwzspXQLiAQvq6Q300d4lnVmDZ/BiZ+pMntEb1in1hLvRmlVp9epo60FCxUXxKVusa7Z+aguNRqo7RG0UCAJ+dWsMoM7G9w05eS2OFxZXMPa7JUmBN5Z1aUcww6MN22qjHCjoqOUpOqNlhY3mgcefM9qQfElA3g/ZGg3yretBL2eC/4uQ5Hm2x9pICd/O+GTH5Jrt9AUyP1f7XQ+u20Ci1PHn8q04hIaUdnOafV+K6dv5xDrMIDXJyr4Rz2cVSjDAayi6QjpDrgMtGkPHiMbKReLKp6Dy89WSD61xE1F5TMlWyUsAxoXzeTy9R+8+N9DXytR0ALE9TWkrPXq+1O2tjWAwtKdYN5qqC3gSsr0KTJvDOkaJJRleEo45OBwGipCmJ5K4l4MCr4OPRkacyv/9Sz7Czq5haRSo+Q5W4Jd6bA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(376002)(39860400002)(366004)(136003)(346002)(396003)(451199018)(8936002)(52536014)(41300700001)(5660300002)(122000001)(4326008)(86362001)(83380400001)(38070700005)(33656002)(66946007)(54906003)(7696005)(110136005)(64756008)(186003)(76116006)(66556008)(71200400001)(66476007)(38100700002)(316002)(66446008)(9686003)(55016003)(6506007)(478600001)(26005)(8676002)(4744005)(2906002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9346
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ae5fa694-e7ce-4805-077b-08db005c4707
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AEUDPI1YKR6msjuU21q90tzJAmsKxcx+49JeiY85u7Eh7R+1hrRUJjIjjAKT0bXU2OsmAq0ZX8P1E39G4QBIvczM8dFXC97/6AFB5lg2Q0PzD/tXRXBJMEJRU+SO0wor5dqpkwemEShMEqj7t6h1ce7qAWCY5yESKhH7fDijulsjjq7k9CihHfvKsUWRVMW+dHmNGa1e9vSYdY+OXoxwXlPtimd6zVuOGo9Ber2XI6r+G1pJKNcMcmt8n46/QJRknw9U9OL/V46rBabv4OxFcajnY+V92fmj9HnmnfvKoAnU+34ZuaBMB/CqNAWH801FUAtiQeMKvUE/hJIDBtuJ7//gELlcmcscSp4GgMam8lGCjL3OiR/Bj7LC1aw18VXotOxAXNOqLyM5tpnCP3TgE1f7pd0IT2/GBuurOUOnXTrbRkJ+Mcc93fka2YKUgeb6gyjhevaE/PdOVil7A3kIgVRnByHtUY36HuUMK1Sto8LPPk/D8J2S4q5KZPzs9LzsLD49bw2f2Pd7rLHiYBOok5LvTKwCpfho9HfZh1vea29Lqr15Qrz0/VtLS/yib64HKT/NQDXp69ZO2torjYXeIf9ViQLab0zH+IS8SgOF69V/TksEWvI5Bu4uV4WToH2FtSh5OIKa8Me2PevFTg8R4ia7FKKobPCLKigYyw9Q6gsf88iSI8/9LHLnDFDUnSLp1gxf83WTTUC1+LcwHSJrqA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(346002)(396003)(39860400002)(376002)(136003)(451199018)(46966006)(36840700001)(40470700004)(336012)(41300700001)(356005)(83380400001)(4744005)(8936002)(47076005)(5660300002)(52536014)(7696005)(81166007)(478600001)(86362001)(107886003)(36860700001)(40460700003)(33656002)(40480700001)(70206006)(4326008)(70586007)(55016003)(8676002)(2906002)(6506007)(26005)(186003)(54906003)(82740400003)(82310400005)(316002)(9686003)(110136005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:47:41.4190
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b78d6bfe-688d-494f-6544-08db005c4bfb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9834

SGkgSnVsaWVuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IFN1YmplY3Q6IFJl
OiBbUEFUQ0ggdjIgMS8zXSB4ZW4vYXJtOiBBZGQgbWVtb3J5IG92ZXJsYXAgY2hlY2sgZm9yDQo+
IGJvb3RpbmZvLnJlc2VydmVkX21lbQ0KPiA+ICsgICAgICAgICAgICBwcmludGsoIlJlZ2lvbiAl
IyJQUklwYWRkciIgLSAlIyJQUklwYWRkciIgb3ZlcmxhcHBpbmcgd2l0aA0KPiBiYW5rWyV1XSAl
IyJQUklwYWRkciIgLSAlIyJQUklwYWRkciJcbiIsDQo+IA0KPiBBRkFJQ1QsIGluIG1lc3NhZ2Vz
LCB0aGUgZW5kIHdvdWxkIGJlIGluY2x1c2l2ZS4gQnV0IGhlcmUuLi4NCj4gDQo+ID4gKyAgICAg
ICAgICAgICAgICAgICByZWdpb25fc3RhcnQsIHJlZ2lvbl9lbmQsIGksIGJhbmtfc3RhcnQsIGJh
bmtfZW5kKTsNCj4gDQo+IC4uLiBpdCB3b3VsZCBiZSBleGNsdXNpdmUuIEkgd291bGQgc3VnZ2Vz
dCB0byBwcmludCB1c2luZyB0aGUgZm9ybWF0DQo+IFtzdGFydCwgZW5kWyBvciBkZWNyZW1lbnQg
dGhlIHZhbHVlIGJ5IDEuDQoNCkFub3RoZXIgZ29vZCBwb2ludCwgdGhhbmtzISBJIHdpbGwgc3dp
dGNoIHRvIHRoZSAiW3N0YXJ0LCBlbmRdIiBmb3JtYXQNCmluIGFsbCAzIHBhdGNoZXMuDQoNCktp
bmQgcmVnYXJkcywNCkhlbnJ5DQoNCj4gDQo+IENoZWVycywNCj4gDQo+IC0tDQo+IEp1bGllbiBH
cmFsbA0K


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:49:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485526.752820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNED-0006bU-Nj; Fri, 27 Jan 2023 11:49:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485526.752820; Fri, 27 Jan 2023 11:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNED-0006bN-LA; Fri, 27 Jan 2023 11:49:09 +0000
Received: by outflank-mailman (input) for mailman id 485526;
 Fri, 27 Jan 2023 11:49:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLNEC-0006b8-3N
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:49:08 +0000
Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com
 [2a00:1450:4864:20::331])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a199ad5-9e38-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:49:06 +0100 (CET)
Received: by mail-wm1-x331.google.com with SMTP id
 fl11-20020a05600c0b8b00b003daf72fc844so5249904wmb.0
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 03:49:06 -0800 (PST)
Received: from [192.168.100.7] (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr.
 [90.112.199.53]) by smtp.gmail.com with ESMTPSA id
 m14-20020a5d6a0e000000b002bfd09f2ca6sm1965515wru.3.2023.01.27.03.49.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 03:49:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a199ad5-9e38-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=7AyoerhmqCeZ8pg+s7b8bghinbz1l+MavFXsve8+Lmo=;
        b=Zi6UfbzXY5r5pi95k3hTnHxVLqfDn8zkOVCMuIQJ8+T0KhMIptD9o8rW2WnOvnnru+
         ++Yfnp1n6kEgfYSfL9eWY6l3D0UhA8KqORc9HmXnYW+sF9+3GxAimpt7pnMRPdX5j98T
         yliSncIwksl/WwxHUuko40p77/amLETUdKfpzjGM+RZU1IwBilIN/0R2w+uqBkl/0w4x
         G+aRLfl3gWUEbgSMUbO1A7mziDJW/z2zU8DzkBjbRQbqMHr8X8x8QrLX7w3rKfVSI6/Y
         8efmXhGHxbNK8PAN+/8MCLEEX7dsLdu6lOTBVRXn5SXkoQKeATqtM7p69yi+3kteummB
         ZbQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=7AyoerhmqCeZ8pg+s7b8bghinbz1l+MavFXsve8+Lmo=;
        b=dVQWpw8j6cU5hlVogEBz0ogqCcHh/qGKaxbvGkuFxcwVqJsSAvt3Y8GsePO0Z5EpsE
         xOg5E7kN7aKuo0uhTAGFfvFSPH5Lw8zfI838t6AHt8xGXp/SdSgA4/escV/DirC2I0PU
         uvXwaO6NnCWFm+7l/8b/8EZF7lEQNt93cUG9/Ze7ptHuRxAcB535vgCSRZ/ouUEm1zO+
         sxUr2IxNtEGs+hXwX7+XNZwSRjLoB42CQHUEQIp0KnIZloouQ30BSBWI39l8DdZjXuBW
         +ZLzuJVTqx5gw07nTKqNb/AL1scfQJQoLWUzUkuAZQXnwWTqYLm+6A5rLQ9o5KrjNntu
         YIaQ==
X-Gm-Message-State: AFqh2kqU1YBqPwJksy2zk0a0hTlnRzCEuejVq0G9SNP/HWljOik63n4f
	g/iwNcxETZS+O3Qaw4UsV7s=
X-Google-Smtp-Source: AMrXdXt2zD+S0X1Vu5V/GxQiUgpWj4okTGy0/Gapl31SOMAdPMXlSMTInfI8FKWLzqtvZkn6PLzslg==
X-Received: by 2002:a05:600c:214:b0:3db:30f:bd72 with SMTP id 20-20020a05600c021400b003db030fbd72mr36948159wmi.8.1674820145489;
        Fri, 27 Jan 2023 03:49:05 -0800 (PST)
Message-ID: <b7070c68ce88fdd3a1a7b04400ca8c3366ddf416.camel@gmail.com>
Subject: Re: [PATCH v6 1/2] xen/riscv: introduce early_printk basic stuff
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, Bobby Eshleman
 <bobby.eshleman@gmail.com>
Date: Fri, 27 Jan 2023 13:49:04 +0200
In-Reply-To: <fbab23b9-663e-9516-5721-a92486686f84@xen.org>
References: <cover.1674816429.git.oleksii.kurochko@gmail.com>
	 <06c2c36bd68b2534c757dc4087476e855253680a.1674816429.git.oleksii.kurochko@gmail.com>
	 <fbab23b9-663e-9516-5721-a92486686f84@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-27 at 11:26 +0000, Julien Grall wrote:
> Hi,
>
> On 27/01/2023 11:15, Oleksii Kurochko wrote:
> > Because printk() relies on a serial driver (like the ns16550
> > driver), and drivers require working virtual memory (ioremap()),
> > there is no print functionality early in Xen boot.
> >
> > The patch introduces the basics of early_printk functionality,
> > which will be enough to print "hello from C environment".
> >=20
> > Originally early_printk.{c,h} was introduced by Bobby Eshleman
> > (https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
> > but some functionality was changed: the early_printk() function
> > differs from the original because common code is not built yet, so
> > there is no vscnprintf.
> >
> > This commit adds an early printk implementation built on the putc
> > SBI call.
> >
> > As sbi_console_putchar() is already planned for deprecation, it is
> > used only temporarily and will be removed or reworked once a real
> > UART driver is ready.
> > Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> > ---
> > Changes in V6:
> >      - Remove __riscv_cmodel_medany check from early_printk.c
>
> Why? I know Andrew believed this is wrong, but I replied back with my
> understanding and saw no discussion afterwards explaining why I am
> incorrect.
>
> I am not a maintainer of the code here, but I don't particularly
> appreciate comments being ignored. If there was any discussion on
> IRC, then please summarize it here.
Sorry, I should have mentioned that in the description of the patch
series.

There is no specific reason; the only reason I decided to remove the
check was that it wasn't present in the original Alistair/Bobby
patches, and in my experiments with those patches everything worked
fine (at least, with some additional patches on my side I was able to
run Dom0 with a console).

I pushed a new version (v7) of the patch series (as I forgot to update
the dependency for the CI job), so we probably have to reopen the
discussion, as Andrew hasn't answered for a long time.
>
> Cheers,
>
~ Oleksii
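[Editorial note: the early_printk approach discussed in this thread (no
vscnprintf this early in boot, so plain strings are emitted one
character at a time through the SBI console-putchar call) can be
sketched roughly as below. The sbi_console_putchar() body here is a
stand-in that appends to a buffer so the loop can be shown portably; in
the real patch it would trap into the SBI firmware via an ecall.]

```c
#include <assert.h>
#include <string.h>

/* Stand-in for sbi_console_putchar(): the real function would issue an
 * SBI ecall (legacy console-putchar extension); here output is merely
 * collected in a buffer for illustration. */
static char early_buf[128];
static size_t early_len;

static void sbi_console_putchar(int ch)
{
    if (early_len < sizeof(early_buf) - 1)
        early_buf[early_len++] = (char)ch;
}

/* No vscnprintf is available this early (common code is not built yet),
 * so early_printk() can only emit a plain string, character by character. */
static void early_printk(const char *s)
{
    while (*s != '\0')
        sbi_console_putchar(*s++);
}
```

As the patch description notes, this is a temporary bring-up aid: once a
real UART driver (and ioremap()) is available, the SBI-based path is
expected to be removed or reworked.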


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 11:53:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 11:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485533.752831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNI6-00084G-Ah; Fri, 27 Jan 2023 11:53:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485533.752831; Fri, 27 Jan 2023 11:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNI6-000849-7o; Fri, 27 Jan 2023 11:53:10 +0000
Received: by outflank-mailman (input) for mailman id 485533;
 Fri, 27 Jan 2023 11:53:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xsEI=5Y=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pLNI4-000843-D0
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 11:53:08 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2073.outbound.protection.outlook.com [40.107.220.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 285b5aed-9e39-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 12:53:06 +0100 (CET)
Received: from DS7PR03CA0122.namprd03.prod.outlook.com (2603:10b6:5:3b4::7) by
 SN7PR12MB8148.namprd12.prod.outlook.com (2603:10b6:806:351::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.25; Fri, 27 Jan
 2023 11:53:00 +0000
Received: from DM6NAM11FT050.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b4:cafe::ff) by DS7PR03CA0122.outlook.office365.com
 (2603:10b6:5:3b4::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22 via Frontend
 Transport; Fri, 27 Jan 2023 11:53:00 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT050.mail.protection.outlook.com (10.13.173.111) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.22 via Frontend Transport; Fri, 27 Jan 2023 11:53:00 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 27 Jan
 2023 05:52:59 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 27 Jan
 2023 03:52:59 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 27 Jan 2023 05:52:58 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 285b5aed-9e39-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HwCDcd1VN0gR3Qq+MnTto6puKhcWRCMuWK1WprBgmPD5tZDhWNRdxK5e6PGrSL7WvZDzehFznFVCskebYGJXuBz0fM+Xqn24ZSaOySyN2GLLydXWlthjUXt5lS0K5o4Ni1sqxnf+CWbc5J/6RTxVEQ7xJGV2b7W4Ng5VNYIlKjy3+Rw6YNQTN+EDfC43GvMMHKIgQwUVxNkKRO5scfse+ick0+RT977VYnYV8LvBIXugEB91ga0/Qr2zlWhbhpwwlqdAVnTCliWRlJMyNPCffdxbArAV6Xuh4Z90NKvP9hhcPXxY+nN/k23xIc8pXuUHGP/OgnYARYTvDiomHFOmqw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dhijC0uOxXO1WKKXskNj09v5rtoISG0vbj8f9f9JMAM=;
 b=m75QuJxAbkUasDJcAbSfnC9M5TBMd2enT6nkONGv/s+hCxnsG1v6r/B28GzWo3+JoI3PYifYYk5GVMrv33nVRzdFOaoW6AVoggJXFvLueejS5HblYfYpkd7zIeNSdCLVreHk3pBjRYFEcesqrjVN1YAuFafi5+OQ4b6ovS4wjYPFLGiFTpHfLZnzh2qgM+rMIuQ/kwlaI+S9EaMwpT3goICwq3BXO/7JLDpvew9mmBaFn4aWF0J/yZpy5XK7qIqgZ8ZuAsYEW2Kcf1lCjaFpJObK5pJRiLJJoa3RQIrtUSqKtZStqlBif2X98xN87LWHwzNxTD5yRRVZSBctc0/aUw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dhijC0uOxXO1WKKXskNj09v5rtoISG0vbj8f9f9JMAM=;
 b=IIOOYWNgd2IKckj6ptiEFxNjW0K8HnOddyKQJR0AqOoIIruSFumk9mULD1TApOMSmVdJYIifcfRAU96WER2OnKL/ggvl8Oi4W3RX/P5apxwrGdPwGce22RvmZvWo8TxF7fWUs3hvrdtQUamOXjwECP9NQhmY35VX6hxyfTGY4Ek=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <06cc44e6-9ffc-aef1-9d32-66c8d87fae9b@amd.com>
Date: Fri, 27 Jan 2023 12:52:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the
 first access
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-3-Henry.Wang@arm.com>
 <b2822e36-0972-5c4b-90d9-aee6533824b2@amd.com>
 <AS8PR08MB79913487DBC6F434758EAE5A92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <a729bf36-8c67-ccd4-c787-d62aaf7e24b2@xen.org>
 <AS8PR08MB799127D46D09BCFEF9A0392192CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <ed09cf44-cb7b-6713-6ea4-ac38e80b3549@xen.org>
 <AS8PR08MB7991EA180B325C6E58B0157F92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <AS8PR08MB7991EA180B325C6E58B0157F92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT050:EE_|SN7PR12MB8148:EE_
X-MS-Office365-Filtering-Correlation-Id: 4e564f51-9a5b-443b-54ce-08db005d0a39
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+pwpkY4hforKbllIYfc/bIK+ByQpFlA7fJ13PPDyYzq7fkuM96u2B+PY4Kxn3QroyPPQZ36r3W4NTWPjl5r0wP8CTWUmmqEXKYUv/cmd9hdlctXOzLCDHBGWAQXW3+aXQs4dsIOGXZXv4Qkg06AjsjV0G075RdNdUeLxb4TWFs77lxidkYE6JBM5f+Mr5AEvF4TEKtHJonSjSk+QwTGYNbEjuNUhNyTghu+w0fSH3zoHWJcKKUx+a1G5xIDMED1VLE4fhz/pbtXuOXs3p79YTmqFyQhe4wXoUhb9ZfuG5JwkVYpKHughhGt/f1Qw1DJhv5dQpWS+xmz5TStOFcXAecerEPtT8WtnSORRyEMLKEApSLbrGgTf6z8WVPqcdXcd+uoAsnpTbusHQNeGIDMYs1dq3tExkKbnl1k4M4XkIsiUqS+LTasbRaBhEBTF0pGaWhUNv245/gL0/g4Yd5XmnYS2TCB4YhHz/mnPQWfEyan7gJM/mVNeG4EQj94Qhe51T8f6kj2xg/vuPIhfxAlqKDOq6xPqm6/6QCKl1VTH5IAdpxACQkNqwgBcCx6oY6tDp9guI6wpLVqUV0TTOLkMx00YBQfJYbU+ZdeV1GvPbCks0J3CSYSMod8V3BjfpoRNh7kWKpxybPgQt72u6cPaGDhY+ApbQrVe8PMoFmYfkh2ZaC7vbo09hz3T5pzwJIvr1sB1zqLlfozFMOtjCIP8p8qbYn+Xx/5IXXRrwbE8qR3CLJZkz5heC+rXGdjmYwsvqVScKnORgxEjyFz9etZ8lA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(346002)(396003)(136003)(376002)(39860400002)(451199018)(40470700004)(46966006)(36840700001)(16576012)(41300700001)(356005)(36756003)(316002)(53546011)(8936002)(31686004)(186003)(26005)(8676002)(40460700003)(2906002)(40480700001)(70586007)(6666004)(70206006)(83380400001)(4326008)(5660300002)(47076005)(426003)(54906003)(478600001)(110136005)(44832011)(86362001)(2616005)(336012)(31696002)(36860700001)(82740400003)(82310400005)(81166007)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 11:53:00.5763
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e564f51-9a5b-443b-54ce-08db005d0a39
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT050.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB8148

Hi Henry,

On 27/01/2023 12:39, Henry Wang wrote:
> 
> 
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
>> the first access
>>>>>>> @@ -153,6 +153,8 @@ struct vgic_dist {
>>>>>>>        /* Base address for guest GIC */
>>>>>>>        paddr_t dbase; /* Distributor base address */
>>>>>>>        paddr_t cbase; /* CPU interface base address */
>>>>>>> +    paddr_t csize; /* CPU interface size */
>>>>>>> +    paddr_t vbase; /* virtual CPU interface base address */
>>>>>> Could you swap them so that base address variables are grouped?
>>>>>
>>>>> Sure, my original thought was grouping the CPU interface related fields
>> but
>>>>> since you prefer grouping the base address, I will follow your suggestion.
>>>>
>>>> I would actually prefer your approach because it is easier to associate
>>>> the size with the base.
>>>>
>>>> An alternative would be to use a structure to combine the base/size. So
>>>> it is even clearer the association.
>>>>
>>>> I don't have a strong opinion on either of the two approaches I suggested.
>>>
>>> Maybe we can do something like this:
>>> ```
>>> paddr_t dbase; /* Distributor base address */
>>> paddr_t vbase; /* virtual CPU interface base address */
>>> paddr_t cbase; /* CPU interface base address */
>>> paddr_t csize; /* CPU interface size */
>>> ```
>>>
>>> So we can ensure both "base address variables are grouped" and
>>> "CPU interface variables are grouped".
>>>
>>> If you don't like this, I would prefer the way I am currently doing it, as
>>> personally I think an extra structure would be slight overkill :)
>>
>> This is really a matter of taste here.
> 
> Indeed,
> 
>> My preference is your initial
>> approach because I find it strange to have the virtual CPU interface
>> information mixed with the physical one.
> 
> then I will keep it as it is if there is no strong objection from Michal.
There is none. It was just a suggestion.

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 12:10:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 12:10:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485544.752841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNYp-0002Zz-U9; Fri, 27 Jan 2023 12:10:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485544.752841; Fri, 27 Jan 2023 12:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNYp-0002Zs-Px; Fri, 27 Jan 2023 12:10:27 +0000
Received: by outflank-mailman (input) for mailman id 485544;
 Fri, 27 Jan 2023 12:10:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xsEI=5Y=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pLNYp-0002Zk-AR
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 12:10:27 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2050.outbound.protection.outlook.com [40.107.244.50])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 93c2aa5b-9e3b-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 13:10:25 +0100 (CET)
Received: from BN9PR03CA0692.namprd03.prod.outlook.com (2603:10b6:408:ef::7)
 by CY8PR12MB8241.namprd12.prod.outlook.com (2603:10b6:930:76::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Fri, 27 Jan
 2023 12:10:20 +0000
Received: from BN8NAM11FT028.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ef:cafe::3c) by BN9PR03CA0692.outlook.office365.com
 (2603:10b6:408:ef::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23 via Frontend
 Transport; Fri, 27 Jan 2023 12:10:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT028.mail.protection.outlook.com (10.13.176.225) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.25 via Frontend Transport; Fri, 27 Jan 2023 12:10:20 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 27 Jan
 2023 06:10:19 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 27 Jan
 2023 06:10:19 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 27 Jan 2023 06:10:18 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93c2aa5b-9e3b-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WzW9EMgzFxb3pNW3kZsYN/ooI4HwaldkAfbKAscEQqja6OrbAQBNYh+2M7y2vVusMakH0Kc0dqRePkvBxkJ0Olj2++4ddAb8LSPcNq1fw+2q3XYkkeNMAn0jYoB+2VJ0ZKgTIBNExs+sEMQn6Y+XzhkY7gooXfUJkwOnbre8xbi5DK45C8v3dQ58SAINfUOwzt8r4M1rfMDWz2VA76zTf1X6+bHPyxqj+itppXlzFdN9TyUjdeNRYxZ/Q/XseCP/eZ91LAyMbNCo7h/Ocnp22i2yohPjRgZxx1zB3O4qtUfwwwCjOony2jQLqWadrxWCHqCmy3kNx/CngqcUGomD1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KwET6Q0cQdmE55vZ5Tz3o/6vpgzks292IFq2syxY+R8=;
 b=d9tDrizP/ONrlfzyLFKSkuGdvP7RD7R/yBhh/rw/elMYZCHmU5ds9fS918V6VCKVsJz/RW7+YLewoq+oaim+B3GHSFaUU85sqiJ3Ymz8pQzAYJn+PSu/2ONuPxeX2UaLB4vitzDMfaVJD2IpjqiiPeux2VxQx16Q5/VCZ52GU7UQnaZdZOpKcopKXyeg0VQPdSXRMz/17rmN09BeubY0RA095VdkgNFd9KVd4/0gmT6w2B+Cch7HDhzMRIIC64948TdUBxHOF+fwZvebffyu3U2A1zfJdOXh6gA/h7grguQeOlIFM4jhSAW3BnJz4a8cqlbgj+d5/P3dXDMoMChU/g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KwET6Q0cQdmE55vZ5Tz3o/6vpgzks292IFq2syxY+R8=;
 b=NNC6KNdV+pxUemFyrHxuJx9EH3AJCJWqT9RnAA3qFYr1RuhrAfE0lUDBamknwfqmIO+TCycKkuorMapOwpX83txCNLLUOxWjETH/6Je+bsRNdGroNpVz5jWuWEF2m8OSzAYDecIQCAuuyOr17cbdqS4c293AU9fTIQztz/MUWlY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <b1e2c76f-4d93-647e-ae3e-f83724cbd1e0@amd.com>
Date: Fri, 27 Jan 2023 13:10:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and
 p2m_final_teardown()
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Julien Grall <julien@xen.org>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-4-Henry.Wang@arm.com>
 <d9861060-22ba-5fce-eef6-a7f2ef01526a@amd.com>
 <25264dca-acf6-7ad1-e8a5-a1b893eab30d@xen.org>
 <AS8PR08MB7991A2641FCF28C39F0D2FD692CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <AS8PR08MB7991A2641FCF28C39F0D2FD692CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT028:EE_|CY8PR12MB8241:EE_
X-MS-Office365-Filtering-Correlation-Id: 8e243913-bdb4-456e-7ab1-08db005f75d8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	vpG3MECHY0bX4VGysz3pwgraj4RpNWEunNeejwBHHLRTre22+jqEr22D8EdC56K7vuI717m9EGnBujXSZaJW1zUmQR1LyyaloitQfM6Khx8dqDldh3uggdkwMb3wZZVfN52IZjaswMQe9dQziDLUfxOHJd+dL88tge7NOfptwoLlrITLf5j3RjUD8vrULljVVFF9ibd+FhQ/vT4h+Qs5uv+l4dh+/HlCUg8tRyNAS5P0Zqi56hutPsOB6xmi8nS7LSPaeP+CaE7lE9FcZAUTBsXNYHpbVbc5Iz0RoRwSG3NfvtIM2XCAgcHgqIX1pEyzzd97Ly+dDNcrO3p1XMfOfgX/7GL2GLUAsTwomwpZ/6lW7FoQ9xSEyEKNEhkxJ90ugtd6xLrztTFiyN+g644+EVJ8v7kSwpc2M/FBV6zrDn9ap3RYo3yb2ZstP7Tmdwniq732CvBVvWZff/4OQbXHBhTizbMLHQJoHNI6sGAJodsnFknGF8FUva0nDYuHR/+FAlqYKHCV5A2xk66/M7FA8pDubNJrrsMMjkMPxWVQE7Q1/vxyAJhswrpV0GP/l+BnRkMB2bDSn9WRB4hhBiY8zhT05CZXxAGYcqFjCQkTF7BGj1JhUgZMl5nh3Gw2ZbfnMOVgAATJRIjG4nqHikWXrRwMmBwbLqQ82PlmqRz07UJMdi+MaHgDBiXMYNn7xIAFP01nQEmi0CI25hNnc+C3eoQTTa1liLv428MLHwn6vigfU7byd6s75uCBY5npCTtPLhGfbObcPvye0a77MxGQRQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(136003)(376002)(396003)(346002)(451199018)(46966006)(40470700004)(36840700001)(31686004)(47076005)(8676002)(40460700003)(31696002)(36756003)(81166007)(86362001)(26005)(36860700001)(82310400005)(426003)(186003)(83380400001)(2616005)(356005)(336012)(40480700001)(54906003)(5660300002)(44832011)(478600001)(4744005)(110136005)(16576012)(53546011)(316002)(82740400003)(70206006)(70586007)(8936002)(41300700001)(2906002)(4326008)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 12:10:20.1603
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8e243913-bdb4-456e-7ab1-08db005f75d8
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT028.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8241

Hi Henry,

On 27/01/2023 12:15, Henry Wang wrote:
> 
> 
> Hi Michal,
> 
>> -----Original Message-----
>>>>
>>>> -    BUG_ON(p2m_teardown(d, false));
>>> Because you remove this,
>>>>       ASSERT(page_list_empty(&p2m->pages));
>>> you no longer need this assert, right?
>> I think the ASSERT() is still useful as it at least shows that the pages
>> should have been freed before the call to p2m_final_teardown().
> 
> I think I also prefer to keep this ASSERT(), for exactly the same
> reason as Julien's answer. I think having this ASSERT() will help us
> avoid potential mistakes in the future.
> 
> May I ask whether you are happy with keeping this ASSERT(), so that I can
> carry your reviewed-by tag? Thanks!
Yes, you can :)

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 12:13:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 12:13:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485548.752851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNc6-00039u-CG; Fri, 27 Jan 2023 12:13:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485548.752851; Fri, 27 Jan 2023 12:13:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNc6-00039n-97; Fri, 27 Jan 2023 12:13:50 +0000
Received: by outflank-mailman (input) for mailman id 485548;
 Fri, 27 Jan 2023 12:13:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JKSU=5Y=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLNc5-00039M-AW
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 12:13:49 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20603.outbound.protection.outlook.com
 [2a01:111:f400:fe13::603])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0c75c16a-9e3c-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 13:13:46 +0100 (CET)
Received: from DB9PR05CA0014.eurprd05.prod.outlook.com (2603:10a6:10:1da::19)
 by AS8PR08MB9072.eurprd08.prod.outlook.com (2603:10a6:20b:5c0::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Fri, 27 Jan
 2023 12:13:44 +0000
Received: from DBAEUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1da:cafe::78) by DB9PR05CA0014.outlook.office365.com
 (2603:10a6:10:1da::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22 via Frontend
 Transport; Fri, 27 Jan 2023 12:13:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT064.mail.protection.outlook.com (100.127.143.3) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.17 via Frontend Transport; Fri, 27 Jan 2023 12:13:43 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Fri, 27 Jan 2023 12:13:43 +0000
Received: from 0bf3a71630ea.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5236DC4F-651A-4ADB-B12F-61ED2305E6A1.1; 
 Fri, 27 Jan 2023 12:13:32 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0bf3a71630ea.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Jan 2023 12:13:32 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PR3PR08MB5625.eurprd08.prod.outlook.com (2603:10a6:102:89::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 12:13:30 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Fri, 27 Jan 2023
 12:13:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c75c16a-9e3c-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wxm2NaW8a7M+SYSIA1DsK65dUhGmuldvYGg8PctsAig=;
 b=JClBxeRDcTQ82BNk83eqTcYlabi22bbAxiYdCRxlOE5cSRt8j1R7rC8AInNyNXz5+YF+YRYDt/hkSgFO3+cFipLDGghsMb54VQ4/TCG/Ki4WbgC0nW5guFOPEh5+oAz4gLgYEV2vUqbbuBFviRSmYLNhWds5V7zD6Me4AGeakbQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PHI66ulJ/oDBPhEdj3f86neOoZ4//KfaIgCiE+tGjc0J3RQHJ+HgMil1XJtxqLNhME1LFY3/m45BHu1ZOQUr3frME8sUx3Gdj+1JFbETqq6uiLATELgvzx5ZoUf5lgF/S+kDL51e6opP8hf0uWuXYmpA0dBscI5uTjwgW4jW9eaykIcp/Y8fqX6cHlstUmPRvNsaj3mhRcSxtAheJdj+604aORUKFb1XVB45vAfd7iqXiThuZ6TtJ9YZ8E5FldKdeXYheazjl0UknZykoLM/dinSL6Vgm33UqJG7Wg+xoDzyr1PYuiSr4owjOTeJIbQ2ViRb15gAqzUsAgnYU5il3A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wxm2NaW8a7M+SYSIA1DsK65dUhGmuldvYGg8PctsAig=;
 b=AtMj6M6Ob8mOca0IAmxI6GXKBBYwAQhOpXTnozY1MwAg+AHQSMlXTaCgrCvMyWBdWhNPQXKzHOZscvSoaYB1U+JofjgkHq+tYZXKE7y5PDTmtmd//fZDbczKC3hq2myFqEpzKGh4l/6Kn9HVubkiOHOd7ILDVknHqVxmj20Fu1d2kwlzxgGHXujI04G2dTusXio8D1q00wPx13AstkzuJw6weslg0dycdq75EkxW0Hc9C2Q2gPJVxRIHXe/NXlArn4xVd4ZR98n3FVtt7JlXT3iM9gNN6DliZYPKOMuCLNyhz7DsRRksGrcD2JUf9IwzodiendXiNDwFUfpgU2tznA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wxm2NaW8a7M+SYSIA1DsK65dUhGmuldvYGg8PctsAig=;
 b=JClBxeRDcTQ82BNk83eqTcYlabi22bbAxiYdCRxlOE5cSRt8j1R7rC8AInNyNXz5+YF+YRYDt/hkSgFO3+cFipLDGghsMb54VQ4/TCG/Ki4WbgC0nW5guFOPEh5+oAz4gLgYEV2vUqbbuBFviRSmYLNhWds5V7zD6Me4AGeakbQ=
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Julien Grall <julien@xen.org>
Subject: RE: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and
 p2m_final_teardown()
Thread-Topic: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and
 p2m_final_teardown()
Thread-Index: AQHZKU4fKh9kG5Xa7kmVEqYgCVpcz66nH76AgAbPKgCABD7DYIAAEFuAgAAAmkA=
Date: Fri, 27 Jan 2023 12:13:29 +0000
Message-ID:
 <AS8PR08MB799123266FEC2B60BDD3409592CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230116015820.1269387-1-Henry.Wang@arm.com>
 <20230116015820.1269387-4-Henry.Wang@arm.com>
 <d9861060-22ba-5fce-eef6-a7f2ef01526a@amd.com>
 <25264dca-acf6-7ad1-e8a5-a1b893eab30d@xen.org>
 <AS8PR08MB7991A2641FCF28C39F0D2FD692CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <b1e2c76f-4d93-647e-ae3e-f83724cbd1e0@amd.com>
In-Reply-To: <b1e2c76f-4d93-647e-ae3e-f83724cbd1e0@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 6C9A3BCDC87C604880D5EE023BDC60F7.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PR3PR08MB5625:EE_|DBAEUR03FT064:EE_|AS8PR08MB9072:EE_
X-MS-Office365-Filtering-Correlation-Id: d0a16ddd-d78f-4e64-703e-08db005fef3f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 7eAYfDp4+Y0JovXWGgXYQphOaPtpYDB3S7+PztV380u4ChAvb0rTK3f5WfZl7+R375XVcKbpz254KmYsyUV+bIig+KHDWpaSfbAMLFykqn5rBAGcH31TkI9rJ5wUAxnSY1GHYJwByW5NnJzOv/uSFYQjBdpnt5vhW5Ld7TxgRX/KXiQuNDqZr99Z/EA9JAYcTcOsEyadfl7+1pDOL+0Mh2hs2QZLHvmn2iu4jtma2K3BgVTC4EmrhonYZ2WhvbYl2AdBaejfi48nxiarYpBoXtvuTJi+zoBnPNUztTWtF3XVtr2u04A+tXjNgUK5uXcWzEDhK5yfeRej2g8nrzJAXCnLmlyKXZ+tjejbNe56FR1FjIc2Jh33OTCuHHJ9O8J5oLqY3hK8x+k9x85CconG6MeJ+Ci6KZsRXMpxbUcn1ckrmvgVGDkJarGAkg0d21FijF0L0x9ct+RJTXKMMt/edCKNFPMz9aCDII45Pq6of8X0P39Y9MmuNh0EG4ROGp+DnsurrGE4ShWlsAHSEB/6zgjWwEQKQA1rkwMFUUhELAeY07QfE7l17N6mDM43CIh5q01byuEJUUcbhh20efzIkyqnpobNroNPU8q/vZGSQK+dppTXoEd6qZ+imC/GZlEsg880BX1KH75ARnaNTUzJWQkm4Gzspcl2NnB8y2cTneJoUVVURhXhbfARR/8LiM/8uOclCx82rjM31uxtRYhckg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(366004)(346002)(39860400002)(376002)(396003)(136003)(451199018)(8936002)(2906002)(38100700002)(478600001)(33656002)(5660300002)(186003)(53546011)(7696005)(9686003)(26005)(55016003)(38070700005)(52536014)(86362001)(41300700001)(71200400001)(83380400001)(76116006)(4326008)(6506007)(66946007)(66446008)(64756008)(66556008)(66476007)(122000001)(8676002)(316002)(110136005)(54906003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5625
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0a00c396-32d5-482d-3743-08db005fe69c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	dZHJzKR/9orEYcw7WrnzylDiG7MH5VJDrF/3+vbgR5DQpOSznMw0D3BJ3wOWBI97ORFygr1nlsUosq9+ezPjMXldtP4kVtsa4FVdky9qnHkviELePdNj0i6uI79pI+8GHmlYc7kXJ7p+hKFckPrZa+B1n9jUbkTPGwI6kE4GZrUfzuruHKCLtbi7vOvsK5k42sr67XYiVDe8zALHQ24Dyohy8/8FE27qmFnAQ6EM82Kb31cfzzgFBJXcHpFsZ2nwdr0mshWczJUEZaQoC+Ic4GMyiFQAkI4Vx6oxtu+gJtdbF9XNtK8VdSKbYYyy0H9oEo4yvKUR+aEM50z8Mi3QSZnIDx2Az2c3JqKc+tARb1XzNz4GaXHBfWCSjvL25GDrUDNPYAT3JyrEUsRBi2fPZdUqVjn2Frlz1HkXsK6IytFwkznMHP4JaHTxNNkSs8BEAgQhbUY5VLljmFDFaBwZOp92cCrhnGiXe/xUbR/MONY9ZF9/+bdpIsOy+Sv1Zw/cN7WdFpQ8FGE/Fql8ZW4sQVpR51EkahcznrF6af84lZKrGdREPDeu6hw4saXTnEpNOL7IUK5aR8ZAUm9nnS+V9KF1i3GgYJ4QfrdSwzjvFCumel/IWIq6IPNwg8uNLyPz4SHWX1aS2pP3jzu+mNnQ7jhBhu3GX+IT9NuGfrD/cKcZ0i+8p50ALxuGSBiSCspBzAWg3zpGQdYKDIP2TWoWzw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(346002)(376002)(136003)(396003)(451199018)(46966006)(36840700001)(40470700004)(2906002)(5660300002)(52536014)(36860700001)(41300700001)(336012)(47076005)(8936002)(53546011)(478600001)(9686003)(186003)(26005)(316002)(4326008)(6506007)(82740400003)(70586007)(33656002)(356005)(83380400001)(81166007)(40480700001)(86362001)(7696005)(55016003)(40460700003)(8676002)(82310400005)(110136005)(54906003)(70206006);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 12:13:43.8706
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d0a16ddd-d78f-4e64-703e-08db005fef3f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9072

Hi Michal,

> -----Original Message-----
> From: Michal Orzel <michal.orzel@amd.com>
> Subject: Re: [PATCH 3/3] xen/arm: Clean-up in p2m_init() and
> p2m_final_teardown()
> 
> Hi Henry,
> 
> On 27/01/2023 12:15, Henry Wang wrote:
> >
> >
> > Hi Michal,
> >
> >> -----Original Message-----
> >>>>
> >>>> -    BUG_ON(p2m_teardown(d, false));
> >>> Because you remove this,
> >>>>       ASSERT(page_list_empty(&p2m->pages));
> >>> you no longer need this assert, right?
> >> I think the ASSERT() is still useful as it at least show that the pages
> >> should have been freed before the call to p2m_final_teardown().
> >
> > I think I also prefer to have this ASSERT(), because of the exactly same
> > reason as Julien's answer. I think having this ASSERT() will help us to
> > avoid potential mistakes in the future.
> >
> > May I please ask if you are happy with keeping this ASSERT() and I can
> > carry your reviewed-by tag? Thanks!
> Yes, you can :)

Thank you very much! And also thank you for your detailed review :)

Kind regards,
Henry

> 
> ~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 12:16:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 12:16:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485553.752861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNev-0003w5-Ql; Fri, 27 Jan 2023 12:16:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485553.752861; Fri, 27 Jan 2023 12:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNev-0003vy-No; Fri, 27 Jan 2023 12:16:45 +0000
Received: by outflank-mailman (input) for mailman id 485553;
 Fri, 27 Jan 2023 12:16:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLNeu-0003vo-Gc; Fri, 27 Jan 2023 12:16:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLNeu-0003RY-C9; Fri, 27 Jan 2023 12:16:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLNet-0003Fr-78; Fri, 27 Jan 2023 12:16:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLNet-00028v-6e; Fri, 27 Jan 2023 12:16:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rJ6nvwD6ugHoYAcIdmLugwLAAYAPkSRriS5wk2dFXkI=; b=C2L5DQxligltGZXa1WYv4zMxZE
	0TRpyFNfW509C7f8dgbJpVbOca3vqS/fQDPLkOlP31WrgdFLEh6Zry8NB9Q1K+xrKRXsov1tjOj2H
	idfw+aGTxEsE+sUJpL3pSrPOQxeAbnm+e+cB3gHayURUdxOgRVRK7vpt29h1I2SrXpy0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176230-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176230: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-vhd:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:build-armhf:syslog-server:running:regression
    linux-linus:test-arm64-arm64-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:host-install:broken:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-cubietruck:capture-logs(22):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:capture-logs:broken:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:build-armhf:capture-logs:broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:capture-logs(9):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:capture-logs(6):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7c46948a6e9cf47ed03b0d489fde894ad46f1437
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 12:16:43 +0000

flight 176230 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176230/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 173462
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 173462
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 173462
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 test-arm64-arm64-xl-credit2     <job status>                 broken  in 176143
 test-arm64-arm64-xl-seattle     <job status>                 broken  in 176143
 test-arm64-arm64-xl-vhd         <job status>                 broken  in 176143
 test-arm64-arm64-xl-xsm         <job status>                 broken  in 176143
 test-armhf-armhf-libvirt-qcow2    <job status>                broken in 176223
 test-armhf-armhf-xl-cubietruck    <job status>                broken in 176223
 test-armhf-armhf-xl-credit2     <job status>                 broken  in 176223
 test-armhf-armhf-libvirt        <job status>                 broken  in 176223
 test-armhf-armhf-xl-vhd         <job status>                 broken  in 176223
 test-armhf-armhf-xl-multivcpu    <job status>                 broken in 176223
 test-armhf-armhf-libvirt-raw    <job status>                 broken  in 176223
 test-armhf-armhf-xl-credit1     <job status>                 broken  in 176223
 test-armhf-armhf-xl             <job status>                 broken  in 176223
 test-armhf-armhf-xl-arndale     <job status>                 broken  in 176223
 test-armhf-armhf-xl-rtds        <job status>                 broken  in 176223
 test-arm64-arm64-xl-vhd       8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-xsm       8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-xl-credit2   8 xen-boot       fail in 176135 REGR. vs. 173462
 test-arm64-arm64-examine      8 reboot         fail in 176143 REGR. vs. 173462
 test-arm64-arm64-xl           8 xen-boot       fail in 176143 REGR. vs. 173462
 test-arm64-arm64-libvirt-xsm  8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot      fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot       fail in 176143 REGR. vs. 173462
 test-arm64-arm64-xl-credit1   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-arm64-arm64-libvirt-raw  8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot         fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot     fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot       fail in 176143 REGR. vs. 173462
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd      5 host-install(5) broken in 176143 pass in 176135
 test-arm64-arm64-xl-seattle  5 host-install(5) broken in 176143 pass in 176135
 test-arm64-arm64-xl-xsm      5 host-install(5) broken in 176143 pass in 176135
 test-arm64-arm64-xl-credit2  5 host-install(5) broken in 176143 pass in 176135
 test-armhf-armhf-xl-arndale  5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-xl-credit1  5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-libvirt     5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-xl          5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-libvirt-qcow2 5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-examine      5 host-install   broken in 176223 pass in 176143
 test-armhf-armhf-xl-credit2  5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-xl-cubietruck 22 capture-logs(22) broken in 176223 pass in 176143
 test-armhf-armhf-xl-vhd      5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-xl-multivcpu 5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-libvirt-raw 5 host-install(5) broken in 176223 pass in 176143
 test-armhf-armhf-examine      6 capture-logs   broken in 176223 pass in 176143
 test-amd64-amd64-xl-xsm       8 xen-boot         fail in 176135 pass in 176230
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 176223

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot       fail in 176223 REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 173462
 test-armhf-armhf-libvirt-qcow2 6 capture-logs(6) broken in 176223 blocked in 173462
 test-armhf-armhf-xl-credit1 6 capture-logs(6) broken in 176223 blocked in 173462
 test-armhf-armhf-libvirt  6 capture-logs(6) broken in 176223 blocked in 173462
 test-armhf-armhf-xl-arndale 6 capture-logs(6) broken in 176223 blocked in 173462
 test-armhf-armhf-xl-credit2 6 capture-logs(6) broken in 176223 blocked in 173462
 test-armhf-armhf-xl       6 capture-logs(6) broken in 176223 blocked in 173462
 test-armhf-armhf-xl-rtds  9 capture-logs(9) broken in 176223 blocked in 173462
 test-armhf-armhf-xl-vhd   6 capture-logs(6) broken in 176223 blocked in 173462
 test-armhf-armhf-xl-multivcpu 6 capture-logs(6) broken in 176223 blocked in 173462
 test-armhf-armhf-libvirt-raw 6 capture-logs(6) broken in 176223 blocked in 173462
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 176143 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 176143 never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail in 176223 never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail in 176223 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                7c46948a6e9cf47ed03b0d489fde894ad46f1437
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  111 days
Failing since        173470  2022-10-08 06:21:34 Z  111 days  230 attempts
Testing same since   176135  2023-01-26 00:10:53 Z    1 days    4 attempts

------------------------------------------------------------
3442 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-xl-vhd broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-arm64-arm64-xl-credit2 broken
broken-job test-arm64-arm64-xl-seattle broken
broken-job test-arm64-arm64-xl-vhd broken
broken-job test-arm64-arm64-xl-xsm broken

Not pushing.

(No revision log; it would be 529026 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 12:23:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 12:23:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485563.752873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNkd-0005Qh-L2; Fri, 27 Jan 2023 12:22:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485563.752873; Fri, 27 Jan 2023 12:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLNkd-0005Qa-IN; Fri, 27 Jan 2023 12:22:39 +0000
Received: by outflank-mailman (input) for mailman id 485563;
 Fri, 27 Jan 2023 12:22:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLNkc-0005QU-Be
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 12:22:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLNkb-0003Zl-D5; Fri, 27 Jan 2023 12:22:37 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLNkb-0003Cg-5e; Fri, 27 Jan 2023 12:22:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=NNYmZxh3/3DjcWnwmpnH91d0aTGSNw/T0EVG0RJbciI=; b=SwwX/r4/Hj6eOUsLuOCkcyABLf
	bdKD53k3RfDPjyJto7p4O/rNVMANDp81HK0ACAJqFdRIKK8R4GIir3sA/ZJ2L6I+wRP6W9T2ihZrO
	yT2WXh79lEUqPnxyJFfvWMjHSEpvKMBgOwvm/ZaP/3AR0xUXPvdcd/d0Ud7ef8x6OiMU=;
Message-ID: <599792fa-b08c-0b1e-10c1-0451519d9e4a@xen.org>
Date: Fri, 27 Jan 2023 12:22:34 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1674816429.git.oleksii.kurochko@gmail.com>
 <06c2c36bd68b2534c757dc4087476e855253680a.1674816429.git.oleksii.kurochko@gmail.com>
 <fbab23b9-663e-9516-5721-a92486686f84@xen.org>
 <b7070c68ce88fdd3a1a7b04400ca8c3366ddf416.camel@gmail.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v6 1/2] xen/riscv: introduce early_printk basic stuff
In-Reply-To: <b7070c68ce88fdd3a1a7b04400ca8c3366ddf416.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Oleksii,

On 27/01/2023 11:49, Oleksii wrote:
> On Fri, 2023-01-27 at 11:26 +0000, Julien Grall wrote:
>> Hi,
>>
>> On 27/01/2023 11:15, Oleksii Kurochko wrote:
>>> Because printk() relies on a serial driver (like the ns16550
>>> driver), and drivers require working virtual memory (ioremap()),
>>> there is no print functionality early in Xen boot.
>>>
>>> The patch introduces the basic stuff of early_printk functionality
>>> which will be enough to print 'hello from C environment'.
>>>
>>> Originally early_printk.{c,h} was introduced by Bobby Eshleman
>>> (
>>> https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d
>>> 1aab71384)
>>> but some functionality was changed:
>>> the early_printk() function was changed in comparison with the
>>> original, as common code isn't being built yet, so there is no
>>> vscnprintf.
>>>
>>> This commit adds early printk implementation built on the putc SBI
>>> call.
>>>
>>> As sbi_console_putchar() is already planned for deprecation,
>>> it is used only temporarily now and will be removed or reworked once
>>> a real UART driver is ready.
>>>
>>> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
>>> ---
>>> Changes in V6:
>>>       - Remove __riscv_cmodel_medany check from early_printk.c
>>
>> Why? I know Andrew believed this is wrong, but I replied back with my
>> understanding and saw no discussion afterwards explaining why I am
>> incorrect.
>>
>> I am not a maintainer of the code here, but I don't particularly
>> appreciate comments to be ignored. If there was any discussion on
>> IRC,
>> then please summarize them here.
> Sorry, I should have mentioned that in the description of the patch series.

I think this should be part of the changelog section in this patch, as 
this makes it easier to know what changes have been made.

> 
> There is no specific reason for the removal; the only reason I decided
> to remove the check was that it wasn't present in the original
> Alistair/Bobby patches, and based on my experiments with those patches
> everything worked fine. (At least with some additional patches from my
> side I was able to run Dom0 with a console.)

The lines you removed only confirm that the file was built with the 
given model; if it is incorrect, then the compilation will fail. There 
is no change in behavior expected past compilation.

So I am not quite sure what sort of experiment you did. Did you try to 
change the model used to build Xen?

Now, if you (or anyone else) can tell me that there will be no issues if 
the model is changed, then yes, I will agree that the check is unnecessary.

The alternative is to say that you are happy to accept the risk if the 
model is changed. If I were the maintainer, that would not be something 
I would agree with, because "wrong" unwritten assumptions are a pain to 
debug. So I much prefer a "wrong" written assumption that would tip me 
off that the author thought the code would not work otherwise.

It is quite easy to remove the check after the fact if it turns out to be unnecessary.

I am not the maintainer of the code, so if this is what they want then 
so be it. But it needs to be explicitly stated. So far, the Reviewed-by 
from Bobby was given with the check in place, so it would imply that he 
was happy with it...

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:32:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:32:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485571.752887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLOpm-0005Xz-P9; Fri, 27 Jan 2023 13:32:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485571.752887; Fri, 27 Jan 2023 13:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLOpm-0005Xs-Ku; Fri, 27 Jan 2023 13:32:02 +0000
Received: by outflank-mailman (input) for mailman id 485571;
 Fri, 27 Jan 2023 13:32:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLOpk-0005Xm-Uz
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:32:01 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2060b.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::60b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f866da8e-9e46-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 14:31:57 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8347.eurprd04.prod.outlook.com (2603:10a6:10:245::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 13:31:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 13:31:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f866da8e-9e46-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fZiZhqwosoYZEWLRLD8aoVJF3qUSpbC9qedAJ8R+hHyAcQAS0Zbb9o548HKARLOUDq2C9QRF7ANIBo0YJoDGd4yfPu0AWs9dU4B1GDBOQ3NFfddrk+0MmpR3+4XbRJm6r0jL3P3YOF9ECzbxp2UCi0p81IC1dQdZwWlwsd9CzRzmrG7uPL5ihN6grlE0WvAEMVmWWSYU2gU0TEekfqUcofzbk8cLc9XX3Qi9fpoN4qgbmqEDyMB9BI52Lum95fogJ/X0gPYhrX+pHH2D7LD6+NNXQb9v8keyPBHCVA+XQ3l/EYpoyu5Ojb59nVUwtw3/0P6DOES8/ohXPGInMwHmYw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=k1FFysP9RrthLav+DJU++YkJ6BUwe3JJ8HzNqAXWhM8=;
 b=b4BqureZo3+AOTS3IMLU9JsfjCcCeuuJi6thUbNHx2D5A3yn4Hi68zFPU+YWn71RlhcF8HYTnSaOd4exCi0vCRGUFFIW7vHGTc5c5Iv1rX5+hA6Y2ZhabdowlY8LPgjRpbAQJfB6rm0uqliw0D7o/xqsDuAN51ry/u/tCJuDmor9BxbQDnMMlilxIo8dgaJbCmkn6SFs0CeyX/vPTrhXZvECy960uLsY7BROh6e9iJnyhWPStI7ynqlLmAwSq6D7OYuXJDsdP8lvlYQnz4JrF2FRukhAFhtG2YuApmNHE0lQv1X67GQj16pGLXlm7azMQjs+cpG9hO/Lu0Uce4j/rQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k1FFysP9RrthLav+DJU++YkJ6BUwe3JJ8HzNqAXWhM8=;
 b=LGSlRphLLX3xqdO6w1NIHGnNmTnf0PbjgCz0Li7bomdhCDiw7T95QPJYcP5tYTGF/6Y+24pp5+h+jEO84xT8cUL6+JUioDVo4cCd1DlFzO73ZZSLPxMu+a1zo3SoPx7BSv4aeORj7/AMPG3wM5R+S5WYK8j4G2bL9IN1flE1a1ecUbL56Z12ZYGXIdzqcQQvTFNDAhKKDASoWvhVkEysoWycaSeJErOt5Qv+djVXSAF5R45G1h4TqS2CvUGIXrRJJj7lT1C0/Z1Mq5JADEGWbf5kblg3jRxD78LDQgwyjKmKsTMLKrSkvcUWshtOzKBzPEB7X8hMdLnOv32abToNrA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c843b031-52f7-056d-e8c0-75fe9c426343@suse.com>
Date: Fri, 27 Jan 2023 14:31:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4 07/11] xen: add cache coloring allocator for domains
Content-Language: en-US
To: Carlo Nonato <carlo.nonato@minervasys.tech>
Cc: Luca Miccio <lucmiccio@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Marco Solieri <marco.solieri@minervasys.tech>, xen-devel@lists.xenproject.org
References: <20230123154735.74832-1-carlo.nonato@minervasys.tech>
 <20230123154735.74832-8-carlo.nonato@minervasys.tech>
 <21f1e368-5d44-b689-0bb7-164a53e5ffd7@suse.com>
 <CAG+AhRXvKi4HmPoOL7cLToCgOPf3-evvcdvSzYGZ6fLc+mBvtQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAG+AhRXvKi4HmPoOL7cLToCgOPf3-evvcdvSzYGZ6fLc+mBvtQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0175.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8347:EE_
X-MS-Office365-Filtering-Correlation-Id: 2745055e-edf4-4e5d-6454-08db006adb75
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6fQASB5syyCLthh9GEAleSF028xCSFZVMJduQLDvKlEgoILm86Q5fKn7w7W3oVFw4eYEL/O5PBnqn3kaKAOYNoXbkeqsnr3qzR9/o1Yz7t1MM7W/vJOn4RKHNKnv1cZCj6hyHfHlKR/YdiRJFiO8UHk3OwAYKbanHGy/20iLC3i4gxnutBxk0SxLQrA01d8L7ZVbNDwVMFjhXDqSjpVJmQU4yl5CzpFgSXOWgwIEotnxhIXitwib1KpNbEvGGmP14fDsgROewW4KE6L6v4/NgmUUKWEHKkKU9eWu/UYhFeKTDrPsLQYUWbN48rwd1zbLy82Dw/6wEkACMThMQIpH+nNFWQubNjFDFMoSI9a69aqMpUFQTqnF3DHsxJS34Fv6p3rFSDAg+6OCWXYxE9I290Jp4lxoDWDpDk/IIK7OTy/3Z6ld8AbYT9IGaL0UHsAii7EAzqYnN/pscntekh6/a6FxkdfpI4912YOU5Oyzm+mfAh7WReYDHPwydEEoR0vCHCV3TAt1EjU92/YJD/ipVw/qN7rK6ykkF8+RiLxfjQ7IIH5FoGytzm+WZ42PbavXIZt8WZxpviUHKbSUc6DBk/aiztgZHwfqjio0m5hB3Z71fQI/6zyfJqcF7QEjzGSMt/ASpZkH/a48MHdYc9hXBqaS5IkVH8PSkCmQhpugKr3R8yPGihXy9Kt3bOywSl3TbBwDR7mdcZU4LO8d8ZS6M6c0BdlNdMHlQT2y3a5/uaQ=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(39860400002)(396003)(346002)(366004)(136003)(376002)(451199018)(2906002)(5660300002)(8936002)(41300700001)(7416002)(4326008)(6486002)(478600001)(26005)(186003)(8676002)(31686004)(6512007)(53546011)(6506007)(86362001)(2616005)(66899018)(66556008)(83380400001)(38100700002)(66476007)(6916009)(66946007)(31696002)(54906003)(316002)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SUlBRmphZm9ZcUV2RlZwN0FBcFhUQllVYWtoY0dLUWs4b1ZTMWZ5K0tuUGtF?=
 =?utf-8?B?bUFNcTczNjE5MjNSY29FM21mdVVLNmxCa3doQmVMY0tJcHZCSkFkdFdlSTVt?=
 =?utf-8?B?SDBVQkNtdjBocUFMMHp2Y2RzVFhacEhOU3AxQ0x2TU0wZS9qdXFjRkZGZUpT?=
 =?utf-8?B?TmlTWnJsQkZZd3dxeUVZdi9Mek10WmU5MWMyVmFuaUs1NW9raWthakdvK1d6?=
 =?utf-8?B?Tkc4eHhKRWgrUXJUdDdyYmRxMGRwU2pMTzhKaEg1R3pCWXRuR2FOQXZGVFdS?=
 =?utf-8?B?SGlUY2wzNTJlb01GT3RWak5nV2V1a0MxcUFKdUFpUjE5UW5ZUnpoRDIvekR0?=
 =?utf-8?B?Qlp0eVJaOWl5SWpQWUNGalRTQVgrMGVqbkZwcENmc0Vzb1pWM2tNWEF3VHFy?=
 =?utf-8?B?MDNNcmc0ZUpMNHZpV1dTa2g2Z3ZhWVJTcUZSTHZtdC9EYzc1L3hnZkRiNW1I?=
 =?utf-8?B?aU9JNUVJcmtVbytMVHYrd3NXbFdCN0hHSXg3a1d0MTZaRkJibVVwbmFTcm1k?=
 =?utf-8?B?M0p2UlQ0Wjd5ZEplWGNiajdpcllrZ24yZ25Uei85bVlVSjhrU1pyMG5PR2pa?=
 =?utf-8?B?dWVLZVE0MlNxQmk0elhWWnYvQ1piWEkzelA1ZXBpL0hlLzBnd2grc0JiUjdP?=
 =?utf-8?B?T1BPUXkyOUsrZEo2a0ZmTVI4bGVCOTdzbkhYa0FuVnFraG9xUmVzcGkyYmE2?=
 =?utf-8?B?d29PbG1yZTdBdXRPd3lERGtWVUhId1huSEU2dVJReExvd1cwRTh3TTUrSStY?=
 =?utf-8?B?b3JMVHUvZk4veGljZzdzZ1hPQ3V5QTBUY3VRdXBKQXZXenpVRVVQek5ObVhG?=
 =?utf-8?B?dldCQmdjSVh4NkFWNjhGUWxQMEptTHRLNThweGNBb1NFV0RxSWdIS01iUjht?=
 =?utf-8?B?YjZlUklvZG1nckplMUkyUnBhTDFZeDRSM0J4bFJSRjVtYUR5YzZjLzJUdDYr?=
 =?utf-8?B?UW00cVJyOHg5TGlzMDhlZnBhNk1XN2xkWWlSMFQ1M0wzeTQrZE5Oa0Z2UDNn?=
 =?utf-8?B?TnFjQ0ZUWEVCRytJblB1YzlGZDVPR1IwTnZLZktaWWlPa0FGSHlSUEdVSVp1?=
 =?utf-8?B?OEhCQmtNdmliSWtMVlhxZEd1ZThuemVsNUszT0dvUENSaENCTlVtMmR2NXZB?=
 =?utf-8?B?RjdEVGJmVGw1T2Y3R3lFdE9EeW5GakxWdE9nZVU4NTdaZlp2ak0vc2lxcmdn?=
 =?utf-8?B?WE5PdXI0M2Uxelc5TlBSZzBXMGo4cURBbGpUcGhMTDlSbTlaRjV3c09WL21i?=
 =?utf-8?B?QkMwM0ZOTGcvZ1JiM0RJRGFrWDFTZEllQm5TSHd0aitWTTU1Z3J2Nmd5SWhO?=
 =?utf-8?B?b000VzhMZkVJcnVkWlladDkweHlYQmlMN1k1cTd0cjVRdzFrZk1QWFNjWE9l?=
 =?utf-8?B?d3JPYkh2NTFkVHd0azh5c09ndThwR2hITmoyOUVYcUlqY2owSlFDT3hYQ1VU?=
 =?utf-8?B?UGVTQTJMTStER3QzOGpJL3lTY2hvM2Jab1JwSkk1aXRoV1lkbFFzb2xQdTN4?=
 =?utf-8?B?MzVJb1FMcUtXME9JZldxVHRPY0RqQWNRWHREVTM0dXJtaU9DNDA0aEM2dncy?=
 =?utf-8?B?NGMxY2F5RmcyZmN5T2o0TGp3YlR0TDdBMmFwMmdhUDY1dkltN0ZNeXVCM0ps?=
 =?utf-8?B?cjQzeHU3UERpdXVzejJwRU1NdG16Q2xGWGJiQ3JhenYvbXdRRURGeUttbVRM?=
 =?utf-8?B?L1U0c1JJenBUT2NjdlN5cGdUSDZ6amdKYTJQam9ZWUhWYjExM0RQbmRQNFRB?=
 =?utf-8?B?cGMzL0hhbFJJUmJYTWh4dEVmSVFnT1JXNStWMHBGZ2kxRUFFRnNIamZBZU9p?=
 =?utf-8?B?blBkTHJldzZ5ZW5sMUJiWjYvbk8wam0zNGtyNW41cmIwNHZBazlVZG5YVDRQ?=
 =?utf-8?B?VzVkdlY0WW9nWnNlSVIrOUFuMEFhcDlFREFuVTBJSGdrK2hPTFdmLzd0L1lB?=
 =?utf-8?B?cURLaVM2dUdNU1lGeFFnRzZibWprc1BnazRVMjAxWmV5K1doSXJrMjh3UENC?=
 =?utf-8?B?VEVkYVdQRi8zN2JOWXQrWWtUcWRpOGhmNTgxbDQxL3JZWWZ5MXFvNlRYS0lM?=
 =?utf-8?B?N1REbTB5SU0zangxVkpWVlFUc0dhaWl3SHB4V0hlemUyb3RURklQMHBJR3Vx?=
 =?utf-8?Q?7HLzD4dL268iepIK5fMJGKX+6?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2745055e-edf4-4e5d-6454-08db006adb75
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 13:31:55.2900
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3Xcr8u+HiRQVZsdxWs436lMiECN7wcC3iOJLIxRGvSIrvsTHQXhz8isH0HIoGlTIP+Rc4aUfrmvD5s+oaPg1Dg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8347

On 27.01.2023 11:17, Carlo Nonato wrote:
> On Thu, Jan 26, 2023 at 5:29 PM Jan Beulich <jbeulich@suse.com> wrote:
>> On 23.01.2023 16:47, Carlo Nonato wrote:
>>> --- a/xen/arch/arm/include/asm/mm.h
>>> +++ b/xen/arch/arm/include/asm/mm.h
>>> @@ -128,6 +128,9 @@ struct page_info
>>>  #else
>>>  #define PGC_static     0
>>>  #endif
>>> +/* Page is cache colored */
>>> +#define _PGC_colored      PG_shift(4)
>>> +#define PGC_colored       PG_mask(1, 4)
>>
>> Is there a reason you don't follow the conditional approach we've taken
>> for PGC_static?
> 
> Nope, fixed that.

And btw, if at all possible please avoid the #else part. page_alloc.c
already deals with that case, so it would be needed only if you have a
use of this constant somewhere else.

>>> @@ -1488,7 +1497,7 @@ static void free_heap_pages(
>>>              /* Merge with predecessor block? */
>>>              if ( !mfn_valid(page_to_mfn(predecessor)) ||
>>>                   !page_state_is(predecessor, free) ||
>>> -                 (predecessor->count_info & PGC_static) ||
>>> +                 (predecessor->count_info & (PGC_static | PGC_colored)) ||
>>>                   (PFN_ORDER(predecessor) != order) ||
>>>                   (page_to_nid(predecessor) != node) )
>>>                  break;
>>> @@ -1512,7 +1521,7 @@ static void free_heap_pages(
>>>              /* Merge with successor block? */
>>>              if ( !mfn_valid(page_to_mfn(successor)) ||
>>>                   !page_state_is(successor, free) ||
>>> -                 (successor->count_info & PGC_static) ||
>>> +                 (successor->count_info & (PGC_static | PGC_colored)) ||
>>>                   (PFN_ORDER(successor) != order) ||
>>>                   (page_to_nid(successor) != node) )
>>>                  break;
>>
>> This, especially without being mentioned in the description (only in the
>> revision log), could likely also be split out (and then also be properly
>> justified).
> 
> You mean to make it a prereq patch or putting it after this patch?

A prereq, for ...

> In the second case it would be like a fix for the previous patch, something
> that is to be avoided, right?

... precisely this reason. Elsewhere you make a constant from
PGC_extra | PGC_static | PGC_colored. Maybe that could become a file-scope
one, which you could then use here as well. Then the change wouldn't even
depend on you already having introduced PGC_colored (otherwise, just
having the stub #define earlier in the file would suffice for this to
compile fine, independent of the bulk of the changes).

(FTAOD PGC_extra would be meaningless / benign here, for only ever being
set on allocated pages.)

>>> +static void free_color_heap_page(struct page_info *pg)
>>> +{
>>> +    unsigned int color = page_to_llc_color(pg), nr_colors = get_nr_llc_colors();
>>> +    unsigned long pdx = page_to_pdx(pg);
>>> +    colored_pages_t *head = color_heap(color);
>>> +    struct page_info *prev = pdx >= nr_colors ? pg - nr_colors : NULL;
>>> +    struct page_info *next = pdx + nr_colors < FRAMETABLE_NR ? pg + nr_colors
>>> +                                                             : NULL;
>>
>> Are these two calculations safe? At least on x86 parts of frame_table[] may
>> not be populated, so de-referencing prev and/or next might fault.
> 
> I have to admit I've not taken this into consideration. I'll check that.
> 
>>> +    spin_lock(&heap_lock);
>>> +
>>> +    if ( is_free_colored_page(prev) )
>>> +        next = page_list_next(prev, head);
>>> +    else if ( !is_free_colored_page(next) )
>>> +    {
>>> +        /*
>>> +         * FIXME: linear search is slow, but also note that the frametable is
>>> +         * used to find free pages in the immediate neighborhood of pg in
>>> +         * constant time. When freeing contiguous pages, the insert position of
>>> +         * most of them is found without the linear search.
>>> +         */
>>> +        page_list_for_each( next, head )
>>> +        {
>>> +            if ( page_to_maddr(next) > page_to_maddr(pg) )
>>> +                break;
>>> +        }
>>> +    }
>>> +
>>> +    mark_page_free(pg, page_to_mfn(pg));
>>> +    pg->count_info |= PGC_colored;
>>> +    free_colored_pages[color]++;
>>> +    page_list_add_prev(pg, next, head);
>>> +
>>> +    spin_unlock(&heap_lock);
>>> +}
>>
>> There's no scrubbing here at all, and no mention of the lack thereof in
>> the description.
> 
> This comes from the very first version (which I'm not an author of).
> Can you explain to me briefly what is it and if I need it? Or can you give
> me pointers?

Did you look for occurrences of the word "scrub" elsewhere in the file?
You want to avoid pages holding information from one guest becoming
visible unchanged to another one.

>>> +static void __init init_color_heap_pages(struct page_info *pg,
>>> +                                         unsigned long nr_pages)
>>> +{
>>> +    unsigned int i;
>>> +
>>> +    if ( buddy_alloc_size )
>>> +    {
>>> +        unsigned long buddy_pages = min(PFN_DOWN(buddy_alloc_size), nr_pages);
>>> +
>>> +        init_heap_pages(pg, buddy_pages);
>>> +        nr_pages -= buddy_pages;
>>> +        buddy_alloc_size -= buddy_pages << PAGE_SHIFT;
>>> +        pg += buddy_pages;
>>> +    }
>>
>> I think you want to bail here if nr_pages is now zero, not the least to avoid
>> crashing ...
>>
>>> +    if ( !_color_heap )
>>> +    {
>>> +        unsigned int nr_colors = get_nr_llc_colors();
>>> +
>>> +        _color_heap = xmalloc_array(colored_pages_t, nr_colors);
>>> +        BUG_ON(!_color_heap);
>>> +        free_colored_pages = xzalloc_array(unsigned long, nr_colors);
>>> +        BUG_ON(!free_colored_pages);
>>
>> ... here in case the amount that was freed was really tiny.
> 
> Here the array is allocated with nr_colors, not nr_pages, and nr_colors
> can't be 0. If nr_pages is 0, it just means that all pages went to the
> buddy allocator. I see no crash in this case.

Aren't the two BUG_ON()s rather unclear crash causes? My comment wasn't
about nr_pages == 0 being an issue further down (I think I had convinced
myself that this case was handled correctly), but about the buddy
allocator still having too little memory to satisfy the two allocations
here (which you also don't need yet if there's no memory to go to the
colored allocator in the first place).

>>> --- a/xen/include/xen/mm.h
>>> +++ b/xen/include/xen/mm.h
>>> @@ -299,6 +299,33 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
>>>      }
>>>      head->tail = page;
>>>  }
>>> +static inline void
>>> +_page_list_add(struct page_info *page, struct page_info *prev,
>>> +               struct page_info *next)
>>> +{
>>> +    page->list.prev = page_to_pdx(prev);
>>> +    page->list.next = page_to_pdx(next);
>>> +    prev->list.next = page_to_pdx(page);
>>> +    next->list.prev = page_to_pdx(page);
>>> +}
>>> +static inline void
>>> +page_list_add_prev(struct page_info *page, struct page_info *next,
>>> +                   struct page_list_head *head)
>>> +{
>>> +    struct page_info *prev;
>>> +
>>> +    if ( !next )
>>> +    {
>>> +        page_list_add_tail(page, head);
>>> +        return;
>>> +    }
>>
>> !next is ambiguous in its meaning, so a comment towards the intended
>> behavior here would be helpful. It could be that the tail insertion is
>> necessary behavior, but it also could be that insertion anywhere would
>> actually be okay, and tail insertion is merely the variant you ended up
>> picking.
> 
> This makes it possible to call the function when next has been used to
> iterate over a list. If there's no next, we are at the end of the list,
> so add to the tail. The alternative is to handle that case outside this
> function.
> 
>> Then again ...
>>
>>> +    prev = page_list_prev(next, head);
>>> +    if ( !prev )
>>> +        page_list_add(page, head);
>>> +    else
>>> +        _page_list_add(page, prev, next);
>>> +}
>>>  static inline bool_t
>>>  __page_list_del_head(struct page_info *page, struct page_list_head *head,
>>>                       struct page_info *next, struct page_info *prev)
>>> @@ -451,6 +478,12 @@ page_list_add_tail(struct page_info *page, struct page_list_head *head)
>>>      list_add_tail(&page->list, head);
>>>  }
>>>  static inline void
>>> +page_list_add_prev(struct page_info *page, struct page_info *next,
>>> +                   struct page_list_head *head)
>>> +{
>>> +    list_add_tail(&page->list, &next->list);
>>
>> ... you don't care about !next here at all?
> 
> I did it this way since *page is never checked for NULL either. Also,
> this other type of list isn't NULL-terminated.

I see. In which case properly explaining the intended behavior and use
case in the earlier function should hopefully eliminate the need for
anything special here. I have to admit, though, that I consider this a
little fragile for a seemingly generic helper function; I wonder whether
the special case wouldn't better be handled in the caller instead.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:55:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:55:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485578.752900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPBt-0008Oy-JO; Fri, 27 Jan 2023 13:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485578.752900; Fri, 27 Jan 2023 13:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPBt-0008Or-GT; Fri, 27 Jan 2023 13:54:53 +0000
Received: by outflank-mailman (input) for mailman id 485578;
 Fri, 27 Jan 2023 13:54:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gttX=5Y=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pLPBs-0008Ol-ES
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:54:52 +0000
Received: from ppsw-43.srv.uis.cam.ac.uk (ppsw-43.srv.uis.cam.ac.uk
 [131.111.8.143]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 29f58a4d-9e4a-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 14:54:49 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:56294)
 by ppsw-43.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.139]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pLPBo-000y7V-Th (Exim 4.96) (return-path <amc96@srcf.net>);
 Fri, 27 Jan 2023 13:54:48 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 1555C1FBD6;
 Fri, 27 Jan 2023 13:54:48 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29f58a4d-9e4a-11ed-b8d1-410ff93cb8f0
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <aca42065-45b4-0e82-c0ba-b283243caaa9@srcf.net>
Date: Fri, 27 Jan 2023 13:54:46 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230127055421.22945-1-jgross@suse.com>
 <547fab47-d4b5-2c04-74d5-baffa10b9638@srcf.net>
 <260bfbf4-8a6c-d3ea-a4e6-547a51d59ba1@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v2] tools/helpers: don't log errors when trying to load
 PVH xenstore-stubdom
In-Reply-To: <260bfbf4-8a6c-d3ea-a4e6-547a51d59ba1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 27/01/2023 10:55 am, Juergen Gross wrote:
>
> For xenstore-stubdom it is easier, as there is no reason to prefer
> the PV mode over PVH (in fact I'm still working on Xenstore LU for
> the PVH case, making the decision even easier).

I fully support having both PV and PVH options available.  It helps
maintain architecturally "clean" interfaces, and not all decisions are
about performance.


But a PVH stub xenstored cannot outperform an equivalent PV stub
xenstored, for x86 architectural reasons.

PV stub xenstored doesn't do any of the usual things that cause PV
guests to be generally slow (different address spaces, user/kernel
context switching), and events/grants to PV guests are architecturally
more efficient than the HVM/PVH equivalents.

PV will never fully die, because we've got enough fastpaths where this
matters, but I do expect that we'll eventually get to a point where we
don't have any general-purpose OSes running as PV - only single-task
workloads such as stub xenstored.


The great thing about having both PV and PVH available is that people
will be able to see the concrete data confirming that PV is better,
without having to simply trust me on the matter.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:59:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485593.752960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGi-00026X-17; Fri, 27 Jan 2023 13:59:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485593.752960; Fri, 27 Jan 2023 13:59:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGh-000251-Sc; Fri, 27 Jan 2023 13:59:51 +0000
Received: by outflank-mailman (input) for mailman id 485593;
 Fri, 27 Jan 2023 13:59:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGg-0000nM-6C
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:50 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dcedcaa3-9e4a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 14:59:49 +0100 (CET)
Received: by mail-wr1-x430.google.com with SMTP id t18so5079831wro.1
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:49 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:47 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dcedcaa3-9e4a-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=6pRfpzK/BxGYd1R016SGWTULXiHCDLqhF4uHP1wp+tw=;
        b=ayMkROxaLs9ZATgMw1CBplX17OMRHxhJ1brLsK0riscFz5Ll6K3gowSE4q/z9jx4qE
         8pplVrtzVOGqd6XofmIuXU0ZK1Yet0NDJdbOYXiaA6O00ZUHbh95E5iuDcx+NNTcvabI
         P24zxXAr/OTT1/Gfz5nKIxhusBpBbNEbV8fLo1666EAS1f/OFVdxahPSjvb1wgZUlh5k
         i3dcHL/r0p5s6xKM2UXbARdsMx1MXWBJ4oQ41uWztwA+R+coxbAjXqPvkq1lj4b3jrlR
         WRkOl53na/Hy7iZhUv0JAgICTaTuNQdFVPRs9d7174K8xjClE2BtmJEckNzYSmeX4PMH
         sTIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=6pRfpzK/BxGYd1R016SGWTULXiHCDLqhF4uHP1wp+tw=;
        b=FiTRNeeBsbsUrmQ6hkB8BJwzzUXCBgcFJj+UZveXaaqRz06vhDJzJ6gZKnUmJZ53pR
         sUXDb9cwSx3mg+bQuBM/0RxOsgLMhCXTHhb47VMPPH01JhozjugxpVYzWBnkjlmvGXHR
         aKITQ99UaUDjGzjNSL09+G2Ch5KnNXq9Xri50UW6013vEmJ2zhOPV2KbbaPZzo/K3fAr
         DfxhVIbuCxEaXYtcKakRofikowWXTB3LCjQn5jQ/IG3CWO2gMo6HhS9380UmItJRx+w/
         FzGxZwlvKxUMvpcgqrAoA7z49LWXwoDBxGkWrv429IYi9wd8wOyLPNp1jmqGaH2f4gFl
         GTyQ==
X-Gm-Message-State: AO0yUKX1mdkLap+RwVLb0hDTAkihWoVRX5C318braBColF9hoOnZG+Fl
	mtzhoxcj5NcVgurkTuNMVTbs6Dlc2sA=
X-Google-Smtp-Source: AK7set9ngm/GcKE0N6yO6aMweyaQ/2209Uv/+ejskFBDLDkxvBlzhYxxHlh8VApbkaz6oxfCpP8Eyg==
X-Received: by 2002:a5d:58d0:0:b0:2bf:b26a:3404 with SMTP id o16-20020a5d58d0000000b002bfb26a3404mr10774311wrf.12.1674827988360;
        Fri, 27 Jan 2023 05:59:48 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 05/14] xen/riscv: introduce empty <asm/string.h>
Date: Fri, 27 Jan 2023 15:59:10 +0200
Message-Id: <19c64efc3c05f64de97cdc4a96919ee28844440b.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

An empty <asm/string.h> is required in order to include <xen/lib.h>.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - <asm/string.h> is a new empty header which is required to include
    <xen/lib.h>
---
 xen/arch/riscv/include/asm/string.h | 6 ++++++
 1 file changed, 6 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/string.h

diff --git a/xen/arch/riscv/include/asm/string.h b/xen/arch/riscv/include/asm/string.h
new file mode 100644
index 0000000000..a26ba8f5c6
--- /dev/null
+++ b/xen/arch/riscv/include/asm/string.h
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_STRING_H
+#define _ASM_RISCV_STRING_H
+
+#endif /* _ASM_RISCV_STRING_H */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:59:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485588.752910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGd-0000ne-4E; Fri, 27 Jan 2023 13:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485588.752910; Fri, 27 Jan 2023 13:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGd-0000nX-1M; Fri, 27 Jan 2023 13:59:47 +0000
Received: by outflank-mailman (input) for mailman id 485588;
 Fri, 27 Jan 2023 13:59:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGb-0000nM-DE
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:45 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id da0b65de-9e4a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 14:59:44 +0100 (CET)
Received: by mail-wr1-x433.google.com with SMTP id d14so5043679wrr.9
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:44 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da0b65de-9e4a-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=jAx0Mn8J44WfqcJprMgRa+KRzCfpCYZQMBUBNPmWHMc=;
        b=Dh8kanroU5lGejeoxm13ZdObW+mHxL4ASiLXDKXz83rGCndbXmma3cMH6iP/BCNp4U
         7moNydA++TOZ2SeXtZasQbz7SnOGOKupJMT4kAl1V8u1U2xM6AElMdU/EV+l1AJlM4m0
         X3obuKErQPJV8Dy1joozmRIJp8niaIJrCrTdXvM3CrlU21w88R5SRxDUTqYBKtNfgjKk
         Ss0032Vr/l7fc0fslRi3HL0GBfaNnrBbNTwQLSs5PP/ZmgJw07uGOas/7KCdqO2dqrVI
         8tL7gUHaKcfkJs/E20sb5A1mlgqVvDnCPcKZfhiAwJzFCYcc66diRmICBCCmd4UBbhEG
         GEnw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=jAx0Mn8J44WfqcJprMgRa+KRzCfpCYZQMBUBNPmWHMc=;
        b=CZkPYm8Fq1NLL9dkpFwsHMWueI19LQ37PA0qajURHB6r/DJfX5Bw11Csub0iW89vIR
         Ari7iHEnO1/hzlByjU9j9WD7aIU1G/6SvClkqgpuevJY7YQxoo6/e1U81vhKrISfNs9S
         A8ISm7+FQ6RgeisOyupz8boxjZEPzz048NZawebk0+ZdzLxUPyTdf3Ocr2dZwfa4VYjk
         qF/eEYm3ecp5aA/8FpIe98XcaQAwHFqHkLj14E2OcvbUM+WdB4cblarGLZQQ91NfC1hy
         8cHEkTxkxPkH4LwsuMKzRFr8/CIgJr+W1U5WMuJw6RV8Y5zf2BMB9kC1ysu4RN/oLZlW
         tmPg==
X-Gm-Message-State: AO0yUKUr31Aw/Wx+YKD1xe3hCBbU85rH81NGZHUOq9eUQ8rWvUdSH3rb
	xsk+q9WQPDQ4BQA1JHeUHDWflIcM2PE=
X-Google-Smtp-Source: AK7set9PC/KvhFFh9eyiGgoGgyU6s5oxi2Ehj2TQe37co4M0b1Rza/+MtECDMsjNvKBCD+Xhs2307A==
X-Received: by 2002:a5d:664f:0:b0:2bf:be0f:b016 with SMTP id f15-20020a5d664f000000b002bfbe0fb016mr8450627wrw.23.1674827983434;
        Fri, 27 Jan 2023 05:59:43 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 00/14] RISCV basic exception handling implementation
Date: Fri, 27 Jan 2023 15:59:05 +0200
Message-Id: <cover.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series is based on another one [Basic early_printk and smoke
test implementation] which hasn't been committed yet.

The patch series provides a basic implementation of exception handling.
It can only do basic things such as decode the cause of an exception,
save/restore registers, and execute the "wfi" instruction if an exception
cannot be handled.

To verify that exception handling works well, the macros from
<asm/bug.h> such as BUG/WARN/run_in_exception/assert_failed were
implemented.
The implementation uses the "ebreak" instruction and sets up bug frame
tables for each type of macro.
Register save/restore was also implemented, so that execution can return
and continue after WARN/run_in_exception.
Not all functionality of the macros was implemented, as some of them
require hard-panicking the system, which is not available now. Instead of
a hard panic the 'wfi' instruction is used, but this should definitely be
changed in the nearest future.
show_execution_state() and stack trace discovery were not implemented,
as they aren't necessary now.

---
Changes in V2:
  - take the latest riscv_encoding.h from OpenSBI, update it with Xen
    related changes, and update the commit message with "Origin:"
    tag and the commit message itself.
  - add "Origin:" tag to the commit message of the patch
    [xen/riscv: add <asm/csr.h> header].
  - Remove the patch [xen/riscv: add early_printk_hnum() function] as the
    functionality provided by the patch isn't used now.
  - Refactor process.h: move structure offset defines to asm-offsets.c,
    change register_t to unsigned long.
  - Refactor entry.S to use the offsets defined in asm-offsets.c.
  - Rename {__,}handle_exception to handle_trap() and do_trap() to be more
    consistent with RISC-V spec.
  - Merge the patch which introduces do_unexpected_trap() with the patch
    [xen/riscv: introduce exception handlers implementation].
  - Rename setup_trap_handler() to trap_init() and update correspondingly
    the patches in the patch series.
  - Refactor bug.h, remove bug_instr_t type from it.
  - Refactor decode_trap_cause() function to be more optimization-friendly.
  - Add two new empty headers: <cache.h> and <string.h> as they are needed to
    include <xen/lib.h> which provides ARRAY_SIZE and other macros.
  - Code style fixes.
---

Oleksii Kurochko (14):
  xen/riscv: add _zicsr to CFLAGS
  xen/riscv: add <asm/asm.h> header
  xen/riscv: add <asm/riscv_encoding.h> header
  xen/riscv: add <asm/csr.h> header
  xen/riscv: introduce empty <asm/string.h>
  xen/riscv: introduce empty <asm/cache.h>
  xen/riscv: introduce exception context
  xen/riscv: introduce exception handlers implementation
  xen/riscv: introduce decode_cause() stuff
  xen/riscv: mask all interrupts
  xen/riscv: introduce trap_init()
  xen/riscv: introduce an implementation of macros from <asm/bug.h>
  xen/riscv: test basic handling stuff
  automation: add smoke test to verify macros from bug.h

 automation/scripts/qemu-smoke-riscv64.sh    |   2 +-
 xen/arch/riscv/Makefile                     |   2 +
 xen/arch/riscv/arch.mk                      |   2 +-
 xen/arch/riscv/entry.S                      |  94 ++
 xen/arch/riscv/include/asm/asm.h            |  54 ++
 xen/arch/riscv/include/asm/bug.h            | 118 +++
 xen/arch/riscv/include/asm/cache.h          |   6 +
 xen/arch/riscv/include/asm/csr.h            |  84 ++
 xen/arch/riscv/include/asm/processor.h      |  82 ++
 xen/arch/riscv/include/asm/riscv_encoding.h | 927 ++++++++++++++++++++
 xen/arch/riscv/include/asm/string.h         |   6 +
 xen/arch/riscv/include/asm/traps.h          |  14 +
 xen/arch/riscv/riscv64/asm-offsets.c        |  53 ++
 xen/arch/riscv/riscv64/head.S               |   5 +
 xen/arch/riscv/setup.c                      |  21 +
 xen/arch/riscv/traps.c                      | 231 +++++
 xen/arch/riscv/xen.lds.S                    |  10 +
 17 files changed, 1709 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/riscv/entry.S
 create mode 100644 xen/arch/riscv/include/asm/asm.h
 create mode 100644 xen/arch/riscv/include/asm/bug.h
 create mode 100644 xen/arch/riscv/include/asm/cache.h
 create mode 100644 xen/arch/riscv/include/asm/csr.h
 create mode 100644 xen/arch/riscv/include/asm/processor.h
 create mode 100644 xen/arch/riscv/include/asm/riscv_encoding.h
 create mode 100644 xen/arch/riscv/include/asm/string.h
 create mode 100644 xen/arch/riscv/include/asm/traps.h
 create mode 100644 xen/arch/riscv/traps.c

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:59:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485589.752915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGd-0000r7-GN; Fri, 27 Jan 2023 13:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485589.752915; Fri, 27 Jan 2023 13:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGd-0000qw-9q; Fri, 27 Jan 2023 13:59:47 +0000
Received: by outflank-mailman (input) for mailman id 485589;
 Fri, 27 Jan 2023 13:59:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGc-0000nM-5W
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:46 +0000
Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com
 [2a00:1450:4864:20::435])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id da9747f9-9e4a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 14:59:45 +0100 (CET)
Received: by mail-wr1-x435.google.com with SMTP id r2so5045666wrv.7
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:45 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da9747f9-9e4a-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=smSUDHICnf15fQP+C9uW/E72KuO/jLzmp6nN3W3CB9I=;
        b=dadblgR1q6IpOd/TXAOyYr6zV+UO44OSHTRu9HcBRSAYqqx8znzLOQFLfVDCmzRDom
         eGyuXaL+Jx/q+d4+uLVPTn+wluxGZXCG9UeeuJ68ilv5SwvdyuzVX6cGZWEk4d3xcmSH
         WXYG96A3vobHVKMEsk3LlSrP9VJgKyNQTqIg5pOcAZUYoknyuZQzvs+/nSdUwYO0Xnlz
         TYpTLZM+l5+rNnmhDW6b5a6QEikx6WwD5P7K7rHHBPFrKUkQ9zk8ul3aM1bSFA+GxSsZ
         FQ5SxA1vE3xoIr4GnPm0MbVhOaezcWmBMg5vze6OzqJmL4g+Q54VHyfh/UyFlDFeOjDY
         3v0g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=smSUDHICnf15fQP+C9uW/E72KuO/jLzmp6nN3W3CB9I=;
        b=TFSrjpfiPKtEQ/z6EoQQuvtB6hk7a4JCqDeYyaS0pYdRYrCt3BrK/J5MxLe7LLtro3
         xQ62pvbLjACeGZw/YoI1fcSWKwL3zBCBhnMVe9Uw7NBuUGHA6b6HXGMzc5H//Wv0hIBC
         H1oJDOkGk+dhxf1tleRA/aMzF4VwM70iVJzVWkcdYLQDsazh/qZKstxeEaMfzF7jMXeu
         NXtZHxPKrZqQZglONglOGPdpmODpKEzOnOt9AV1SnmVUPPHELU/1VJafzgHy5HOAmZ48
         uHcCMKLA6I4bzZJTBFItrO5iLLxH+ORHH2BNQfr+8nK8Xx4Uo3kDRoGH7SGg00Bw5SNk
         piKQ==
X-Gm-Message-State: AO0yUKU6U9NaUSILkEwd1rL0J0S8MTdO9SCEF13qOiMfecxZmgfmdke4
	zfwRTtYoeF1TwsohaLOlJqUhS6ZpylI=
X-Google-Smtp-Source: AK7set8GzZy3ME0eU8wzMjCPiBHK7DckEg6zcX4yWcJO1ZuBodx9oX5jkJMzgMZlyzokyKbux07oZA==
X-Received: by 2002:a5d:4283:0:b0:2bf:d428:a768 with SMTP id k3-20020a5d4283000000b002bfd428a768mr2043010wrq.49.1674827984421;
        Fri, 27 Jan 2023 05:59:44 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 01/14] xen/riscv: add _zicsr to CFLAGS
Date: Fri, 27 Jan 2023 15:59:06 +0200
Message-Id: <971c400abf7f88a6be322db72481c075d3ceb233.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Working with some registers requires CSR access instructions, which are
part of the Zicsr extension.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - Nothing changed
---
 xen/arch/riscv/arch.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
index 012dc677c3..95b41d9f3e 100644
--- a/xen/arch/riscv/arch.mk
+++ b/xen/arch/riscv/arch.mk
@@ -10,7 +10,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
 # into the upper half _or_ the lower half of the address space.
 # -mcmodel=medlow would force Xen into the lower half.
 
-CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany
 
 # TODO: Drop override when more of the build is working
 override ALL_OBJS-y = arch/$(TARGET_ARCH)/built_in.o
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:59:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485594.752970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGj-0002PX-Dj; Fri, 27 Jan 2023 13:59:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485594.752970; Fri, 27 Jan 2023 13:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGj-0002OF-8z; Fri, 27 Jan 2023 13:59:53 +0000
Received: by outflank-mailman (input) for mailman id 485594;
 Fri, 27 Jan 2023 13:59:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGi-00023y-6B
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:52 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dd9bedf9-9e4a-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 14:59:50 +0100 (CET)
Received: by mail-wr1-x42d.google.com with SMTP id m14so4594038wrg.13
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:50 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd9bedf9-9e4a-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nZU75X7zN3E8l3wJnpARzbbD/gMXF2Tgi8DqO2CpH0I=;
        b=GHI2tm52iAwYlhN8x+9ynOzv3/ZRsq9sY0Dla1Uic5UCwS+KVTCEEUZnGVC0NO1goJ
         8Hr7gqsY7To896P9L4zYW25oNnmIPBnjFzqLTbGyAHwM3lFFMj2EzrnYRQKKIfb14Kgr
         /ha/NXFwh7FsLPByjKLb3re1KsTZ1vCiD9RMUMglieg+FXmkHk7QL9L1W3w8bj3ehzJ5
         yr1rMzDtmJHcXi8pJIu0qEdDtHHLesC8C1Y0bKNgXvxWurthhmt7FPPLH2K42L49MTdV
         euMpAfVPQUpc0RXMpIsLGkohMoW9p/39cGG8HEZncZhtuKy8Nd0SNSBOZBrCgTvqYWSk
         dZHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=nZU75X7zN3E8l3wJnpARzbbD/gMXF2Tgi8DqO2CpH0I=;
        b=24Kvp0TgxmyKSh8NlafCjAxKYeEfiNdH47lTYEduq4ZIN0/BNbHwCeuLaWOGF7sUaZ
         F6VSMBH+9Ca7zdLM6CN93PxUgGWg3lqWvFrecxq9zpyHlt7lcq9NMZiCOTyZwxyoxuzc
         tbNqSCdRq96gwDgXaIVE7rN2mKc8Kj3u9Sn73Y/V/vuYxCjxtETv3xOf8yVUYXCTQVDJ
         +F+dYp+MKBGmoQ4zmeYqdNvMbBL4DSacdRd1ReonSTHFjk1QyRCnVtOxtAXvukXtGRdm
         pXC8r8Krlajgj9ewaqG83iLKLgWA0BBj5CZVIPTqrq3ptLqa4IB0Z1aDN4DBAbp80X9r
         YbaQ==
X-Gm-Message-State: AFqh2kpvPCZyPvlQe9H0UHr7otTrnhnmMkyLon7dGM5uSy/vjuTHPx7l
	xM2s+AZeK7cUeg/oL6wgeCreW1EADp0=
X-Google-Smtp-Source: AMrXdXvvRg9Xkp5SUkn5oAnJJ++hi0NhAJhqfmq4Ydun6pzIeJaEBnccx6LMLEIg5f5Wtkc2og1zNg==
X-Received: by 2002:a5d:6582:0:b0:2be:5c14:eb74 with SMTP id q2-20020a5d6582000000b002be5c14eb74mr23111943wru.62.1674827989547;
        Fri, 27 Jan 2023 05:59:49 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 06/14] xen/riscv: introduce empty <asm/cache.h>
Date: Fri, 27 Jan 2023 15:59:11 +0200
Message-Id: <1c53e9784707482edf96d144d9ce36a4fc9d7ed5.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

An empty <asm/cache.h> is required in order to include <xen/lib.h>.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - <asm/cache.h> is a new empty header which is required to include
    <xen/lib.h>
---
 xen/arch/riscv/include/asm/cache.h | 6 ++++++
 1 file changed, 6 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/cache.h

diff --git a/xen/arch/riscv/include/asm/cache.h b/xen/arch/riscv/include/asm/cache.h
new file mode 100644
index 0000000000..69573eb051
--- /dev/null
+++ b/xen/arch/riscv/include/asm/cache.h
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#endif /* _ASM_RISCV_CACHE_H */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:59:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485590.752930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGe-0001IX-RE; Fri, 27 Jan 2023 13:59:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485590.752930; Fri, 27 Jan 2023 13:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGe-0001IO-O5; Fri, 27 Jan 2023 13:59:48 +0000
Received: by outflank-mailman (input) for mailman id 485590;
 Fri, 27 Jan 2023 13:59:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGd-0000nM-5b
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:47 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id db1c1821-9e4a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 14:59:46 +0100 (CET)
Received: by mail-wr1-x430.google.com with SMTP id b7so5070069wrt.3
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:46 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db1c1821-9e4a-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=M6CXWGrRmetWnTON0psYBk3rMNfFAijYRefln9RvY+M=;
        b=qIiooRD3Jx9D80j/dfTVYrMWYhVzIrmVBrhRw2guFhbEOTunfZgr9SMOXUejUh/vba
         AYagNIgLuYAkeiwq+zaydtQeZMdGJDMONquCLgGeSEMd/dTjQsbzXkqxgDy+cnq3YwYs
         OybGtUSQsLltQP0J+UnVFz3erk19XgoHXHV8293/DS5ZKUqkzrG/PgGR7OYnLIAEKmTl
         S2s7zLIW1Yk+B1PCVytvbgi6CWDXu7VRf0jvuHf5vqaCxazjFOISMFGFnwfGhUY+Xzhy
         Ev/j4n9bZWtUBvzts7U0CiwpZEKFmOTr64a+tTmum6k2jJdGENUdwItdR9gkWYfYJ4T6
         pIBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=M6CXWGrRmetWnTON0psYBk3rMNfFAijYRefln9RvY+M=;
        b=UhYyNvOLFlvNZ9mKDTuoHkmfrcTmibycUQN/8uhE8RNJRkxPqiH7AE4933n7H3XLhP
         dgL1CkEZalrdDnJW8e3YqSdWJhlgkAR6SELen0taKCzcdToJPBMouRsC+xudXrxe6Iod
         KP6VhoFGeo6F8f/VejDe1TH8L1sEQrUE0h8VRF7rCORZlNXl+9mi4LzPu67QZ2RHCuWt
         phRiP0CNcF54bP1HtfGJ91vI4d6JU7R1s2gkxWR703kuxA2hvU9TReB6Bk8fV/Rok+cr
         TBphZ/Hx26PLvg7/CHZui9I3XEoHL5KKn8II7JdX1jeNcnBvgoyl182qjzjxVEX8fcH/
         ph0w==
X-Gm-Message-State: AFqh2kpx5pz4mNikjs4Ga0FCkfa+FokcgtN2We+jyoWgt6wMWksdCZP0
	QoiFezSd8jkTj7VtZePqJEpdOFzmxjU=
X-Google-Smtp-Source: AMrXdXsGr8dR80xKZBB9zzklpx4B8zZKx38QJzLfxWlGgu+EzMuTpXN+Ar0T4t70kVLT1b6kouxzjQ==
X-Received: by 2002:adf:f2cb:0:b0:2be:4fbe:42e1 with SMTP id d11-20020adff2cb000000b002be4fbe42e1mr27629599wrp.5.1674827985409;
        Fri, 27 Jan 2023 05:59:45 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 02/14] xen/riscv: add <asm/asm.h> header
Date: Fri, 27 Jan 2023 15:59:07 +0200
Message-Id: <9a098db8e3fef97df987b2a7330333b51a21cb8c.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - Nothing changed
---
 xen/arch/riscv/include/asm/asm.h | 54 ++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/asm.h

diff --git a/xen/arch/riscv/include/asm/asm.h b/xen/arch/riscv/include/asm/asm.h
new file mode 100644
index 0000000000..6d426ecea7
--- /dev/null
+++ b/xen/arch/riscv/include/asm/asm.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: (GPL-2.0-only) */
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+
+#if __SIZEOF_POINTER__ == 8
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.dword
+#else
+#define RISCV_PTR		".dword"
+#endif
+#elif __SIZEOF_POINTER__ == 4
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.word
+#else
+#define RISCV_PTR		".word"
+#endif
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#if (__SIZEOF_INT__ == 4)
+#define RISCV_INT		__ASM_STR(.word)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define RISCV_SHORT		__ASM_STR(.half)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485591.752939 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGg-0001Yq-35; Fri, 27 Jan 2023 13:59:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485591.752939; Fri, 27 Jan 2023 13:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGg-0001Yc-0G; Fri, 27 Jan 2023 13:59:50 +0000
Received: by outflank-mailman (input) for mailman id 485591;
 Fri, 27 Jan 2023 13:59:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGe-0000nM-Mu
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:48 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc41cfcc-9e4a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 14:59:48 +0100 (CET)
Received: by mail-wm1-x32e.google.com with SMTP id fl24so3519043wmb.1
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:47 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc41cfcc-9e4a-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7CNvAgjXDN7DvNk7wXZV36nzw79XUYPxFgVXqj3cPmo=;
        b=gfRFhfLrC0gpo+JKFXYycl/QL6Y11jmYjSMdL5zkD5QpmzjJuEiYU5BAlkgNIU3D9G
         D+PSoBedxFd+gJO7dHkbFGw58idBgzvkizWdZGUSHkq84kauS4yII6aP2xz/xPIgr8Ve
         jBCd5XBn/wBcuvXY5vst5+Zq1852g4OYEBsGAoiMitVRBKbxbO0DI+cQhEjyyr0xDOgC
         jVmiNPUUvWm9I6zq0IygTAnOo+o5qhs1+BpvhVuKKmbZ1KDcToyWS/1Yuk9Tnnb+SI69
         yqYqlbBr3QsqzJPBNuvj3s1mgWz47vBNxKIpSDackKesjHss82bxyVzx7vyac/QuaCbl
         JYgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7CNvAgjXDN7DvNk7wXZV36nzw79XUYPxFgVXqj3cPmo=;
        b=Nkexcay9fkNMOHrCbDXMEnXql/W/5hBKtzHHatcdm0qp+ara1V1wlvFnDfl7ISBSfh
         jM/asFyoqgsm46SZYShrXDz5jzPY3/XgEoc8yXnoW4HTkl1QCSb/VxDCZBsn/EYyZeBi
         E2ru1kG90INxqdzRcZFkbuhv0Y+5zsli2e3y2AjHkqfUctsFrkxVCEYHS2uu4rI/v8Oi
         I1Exeb6kQuZtrj5Uwb4rANsYkmMv/78d+F6fJhUo+v3AvAHQ3qnr8zN9OJD9hLto4BUq
         yUkIvkaChYZUkb2r3dAV7L+XTUNUVEfX6JKYdDHcQ/8Lc6x2zPtqrHH9xxNIsnw0iY0b
         bq7w==
X-Gm-Message-State: AO0yUKUzNCXSdZZe0d9k8gfWEiDVv1gYnX+RuwDK59MZBHwPFVSu56lr
	wVKXhh/XpsUx2ueQodyV08BswIR+D0M=
X-Google-Smtp-Source: AK7set+bePZVpTJkP/anUZYoflLwL8Nzs21eqGjTRsfEcvnwXCGOOqOPTKCthpF8ezJCXmAU344z2A==
X-Received: by 2002:a7b:c7ce:0:b0:3dc:42d2:aeee with SMTP id z14-20020a7bc7ce000000b003dc42d2aeeemr643533wmk.25.1674827987321;
        Fri, 27 Jan 2023 05:59:47 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 04/14] xen/riscv: add <asm/csr.h> header
Date: Fri, 27 Jan 2023 15:59:09 +0200
Message-Id: <b26d981f189adad8af4560fcc10360da02df97a9.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The following changes were made in comparison with <asm/csr.h> from
Linux:
  * remove all defines, as they are already provided by riscv_encoding.h
  * keep only the csr_* macros

Origin: https://github.com/torvalds/linux.git 2475bf0250de
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - Minor refactoring as mentioned in the commit message: switch tabs to
    spaces and rework the formatting around __asm__ __volatile__.
  - Update the commit message and add "Origin:" tag.
---
 xen/arch/riscv/include/asm/csr.h | 84 ++++++++++++++++++++++++++++++++
 1 file changed, 84 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/csr.h

diff --git a/xen/arch/riscv/include/asm/csr.h b/xen/arch/riscv/include/asm/csr.h
new file mode 100644
index 0000000000..4275cf6515
--- /dev/null
+++ b/xen/arch/riscv/include/asm/csr.h
@@ -0,0 +1,84 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2015 Regents of the University of California
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <asm/asm.h>
+#include <xen/const.h>
+#include <asm/riscv_encoding.h>
+
+#ifndef __ASSEMBLY__
+
+#define csr_read(csr)                                           \
+({                                                              \
+    register unsigned long __v;                                 \
+    __asm__ __volatile__ (  "csrr %0, " __ASM_STR(csr)          \
+                            : "=r" (__v)                        \
+                            : : "memory" );                     \
+    __v;                                                        \
+})
+
+#define csr_write(csr, val)                                     \
+({                                                              \
+    unsigned long __v = (unsigned long)(val);                   \
+    __asm__ __volatile__ (  "csrw " __ASM_STR(csr) ", %0"       \
+                            : /* no outputs */                  \
+                            : "rK" (__v)                        \
+                            : "memory" );                       \
+})
+
+#define csr_swap(csr, val)                                      \
+({                                                              \
+    unsigned long __v = (unsigned long)(val);                   \
+    __asm__ __volatile__ (  "csrrw %0, " __ASM_STR(csr) ", %1"  \
+                            : "=r" (__v)                        \
+                            : "rK" (__v)                        \
+                            : "memory" );                       \
+    __v;                                                        \
+})
+
+#define csr_read_set(csr, val)                                  \
+({                                                              \
+    unsigned long __v = (unsigned long)(val);                   \
+    __asm__ __volatile__ (  "csrrs %0, " __ASM_STR(csr) ", %1"  \
+                            : "=r" (__v)                        \
+                            : "rK" (__v)                        \
+                            : "memory" );                       \
+    __v;                                                        \
+})
+
+#define csr_set(csr, val)                                       \
+({                                                              \
+    unsigned long __v = (unsigned long)(val);                   \
+    __asm__ __volatile__ (  "csrs " __ASM_STR(csr) ", %0"       \
+                            : /* no outputs */                  \
+                            : "rK" (__v)                        \
+                            : "memory" );                       \
+})
+
+#define csr_read_clear(csr, val)                                \
+({                                                              \
+    unsigned long __v = (unsigned long)(val);                   \
+    __asm__ __volatile__ (  "csrrc %0, " __ASM_STR(csr) ", %1"  \
+                            : "=r" (__v)                        \
+                            : "rK" (__v)                        \
+                            : "memory" );                       \
+    __v;                                                        \
+})
+
+#define csr_clear(csr, val)                                     \
+({                                                              \
+    unsigned long __v = (unsigned long)(val);                   \
+    __asm__ __volatile__ (  "csrc " __ASM_STR(csr) ", %0"       \
+                            : /* no outputs */                  \
+                            : "rK" (__v)                        \
+                            : "memory" );                       \
+})
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 13:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485592.752945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGg-0001cX-GN; Fri, 27 Jan 2023 13:59:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485592.752945; Fri, 27 Jan 2023 13:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGg-0001be-8D; Fri, 27 Jan 2023 13:59:50 +0000
Received: by outflank-mailman (input) for mailman id 485592;
 Fri, 27 Jan 2023 13:59:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGf-0000nM-8M
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:49 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dbdc2be8-9e4a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 14:59:47 +0100 (CET)
Received: by mail-wm1-x32f.google.com with SMTP id
 fl11-20020a05600c0b8b00b003daf72fc844so5486566wmb.0
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:47 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbdc2be8-9e4a-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sxi+b+46cItHD8ntKiCVrHJg0a6FCEn4zAzYd6pPfPY=;
        b=VFrGcymGl7DLfseraYaY03fC1LfF5/lZaYkt9qUklO5GZ67ghlqGB1sXmrZxl+ojwB
         WWelYZO8HC8ygreB4yBmXLkKJJMIcdJlcEbocFGU8tnpHgs4okIDl72RsPMCsVTWsaQZ
         5mQteKUeqLPKdCY70n44BzPkzh1RKeB3LOqhsbLn8CeHMcpq/IQHnCPLL4wWeZgnGd77
         VwBSILTfHpjWvYpdzF01OabFuW6csDXi+P8sr0/Q3Ci4VW4iA5EEYPbjFw8qn70ZrWa2
         578B44iz2eR8R0h+sxvM16hUDnAHpxZlY4exmoWkIfBGq5HzAV9znv1zcaKtE5bd+ZVq
         B3FQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=sxi+b+46cItHD8ntKiCVrHJg0a6FCEn4zAzYd6pPfPY=;
        b=kFRNsIva+dvy2lBctqpBJSo30+njqXK+evq2jHf02+9LWf2utVmugbW2KobK2x7d/V
         lcEJDSJxdQZI23tb/kGa1Cycdivlo2aMa2OnwAhJ5dUYG2UqlKglZw5r7lomsePG2aEz
         JddaRS0ih7mzvXz+lU0JUqdY5Om2wzdZIC6Dv6pjZ2KB2QKYS7mGEkaj8sXFL7Stenxo
         IIlkKNQ9uNXv2eTRmRr7uq9Tki4+FIUFvh6kOeBGvq9Hm0kPDJGgIC3FrlGwFVxJli1H
         VNBF9B6OaHUog7JxIhKK/wRCPYU0rdBA84VyMUlUkY6YPLiHuTgR7GwAKnfH4Ao8uCpk
         EPkw==
X-Gm-Message-State: AFqh2kps7t/oqyGSLDBJlLLR1MsVEsOFz3H+F5lYw+Csua2MDUOibmaD
	WHQ51peNIdrGwVGHrUnX8NxTXJKoY/Q=
X-Google-Smtp-Source: AMrXdXu1j/lOo07QIXzz29ohG3fYx/0APtUzyV5aMFQDFhlSEAfWDtF/sQHcqGsTNzqTVpkI06z70w==
X-Received: by 2002:a05:600c:c12:b0:3da:28a9:a900 with SMTP id fm18-20020a05600c0c1200b003da28a9a900mr37751797wmb.41.1674827986319;
        Fri, 27 Jan 2023 05:59:46 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 03/14] xen/riscv: add <asm/riscv_encoding.h> header
Date: Fri, 27 Jan 2023 15:59:08 +0200
Message-Id: <135f7fb8ac64007c0ea580e76630962dc1bd11c9.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The following changes were made in the Xen code base in comparison with
OpenSBI:
  * Remove "#include <sbi/sbi_const.h>" as most of its contents are
    already present in the Xen code base.
  * Add the _UL and _ULL macros as they were previously provided by
    <sbi/sbi_const.h>.
  * Add SATP32_MODE_SHIFT/SATP64_MODE_SHIFT/SATP_MODE_SHIFT as they will
    be used in riscv/mm.c.
  * Add CAUSE_IRQ_FLAG, which is going to be used inside the exception
    handler.
  * Change ulong to unsigned long in the REG_PTR(...) macro.
  * Change s32 to int32_t.

Originally authored by Anup Patel <anup.patel@wdc.com>

Origin: https://github.com/riscv-software-src/opensbi.git c45992cc2b12
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - Take the latest version of riscv_encoding.h from OpenSBI.
  - Update riscv_encoding.h with Xen related changes mentioned in the
    commit message.
  - Update the commit message and add the "Origin:" tag.
---
 xen/arch/riscv/include/asm/riscv_encoding.h | 927 ++++++++++++++++++++
 1 file changed, 927 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/riscv_encoding.h

diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
new file mode 100644
index 0000000000..43dd4f6981
--- /dev/null
+++ b/xen/arch/riscv/include/asm/riscv_encoding.h
@@ -0,0 +1,927 @@
+/*
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2019 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *   Anup Patel <anup.patel@wdc.com>
+ */
+
+#ifndef __RISCV_ENCODING_H__
+#define __RISCV_ENCODING_H__
+
+#define _UL(X) _AC(X, UL)
+#define _ULL(X) _AC(X, ULL)
+
+/* clang-format off */
+#define MSTATUS_SIE			_UL(0x00000002)
+#define MSTATUS_MIE			_UL(0x00000008)
+#define MSTATUS_SPIE_SHIFT		5
+#define MSTATUS_SPIE			(_UL(1) << MSTATUS_SPIE_SHIFT)
+#define MSTATUS_UBE			_UL(0x00000040)
+#define MSTATUS_MPIE			_UL(0x00000080)
+#define MSTATUS_SPP_SHIFT		8
+#define MSTATUS_SPP			(_UL(1) << MSTATUS_SPP_SHIFT)
+#define MSTATUS_MPP_SHIFT		11
+#define MSTATUS_MPP			(_UL(3) << MSTATUS_MPP_SHIFT)
+#define MSTATUS_FS			_UL(0x00006000)
+#define MSTATUS_XS			_UL(0x00018000)
+#define MSTATUS_VS			_UL(0x00000600)
+#define MSTATUS_MPRV			_UL(0x00020000)
+#define MSTATUS_SUM			_UL(0x00040000)
+#define MSTATUS_MXR			_UL(0x00080000)
+#define MSTATUS_TVM			_UL(0x00100000)
+#define MSTATUS_TW			_UL(0x00200000)
+#define MSTATUS_TSR			_UL(0x00400000)
+#define MSTATUS32_SD			_UL(0x80000000)
+#if __riscv_xlen == 64
+#define MSTATUS_UXL			_ULL(0x0000000300000000)
+#define MSTATUS_SXL			_ULL(0x0000000C00000000)
+#define MSTATUS_SBE			_ULL(0x0000001000000000)
+#define MSTATUS_MBE			_ULL(0x0000002000000000)
+#define MSTATUS_GVA			_ULL(0x0000004000000000)
+#define MSTATUS_GVA_SHIFT		38
+#define MSTATUS_MPV			_ULL(0x0000008000000000)
+#else
+#define MSTATUSH_SBE			_UL(0x00000010)
+#define MSTATUSH_MBE			_UL(0x00000020)
+#define MSTATUSH_GVA			_UL(0x00000040)
+#define MSTATUSH_GVA_SHIFT		6
+#define MSTATUSH_MPV			_UL(0x00000080)
+#endif
+#define MSTATUS32_SD			_UL(0x80000000)
+#define MSTATUS64_SD			_ULL(0x8000000000000000)
+
+#define SSTATUS_SIE			MSTATUS_SIE
+#define SSTATUS_SPIE_SHIFT		MSTATUS_SPIE_SHIFT
+#define SSTATUS_SPIE			MSTATUS_SPIE
+#define SSTATUS_SPP_SHIFT		MSTATUS_SPP_SHIFT
+#define SSTATUS_SPP			MSTATUS_SPP
+#define SSTATUS_FS			MSTATUS_FS
+#define SSTATUS_XS			MSTATUS_XS
+#define SSTATUS_VS			MSTATUS_VS
+#define SSTATUS_SUM			MSTATUS_SUM
+#define SSTATUS_MXR			MSTATUS_MXR
+#define SSTATUS32_SD			MSTATUS32_SD
+#define SSTATUS64_UXL			MSTATUS_UXL
+#define SSTATUS64_SD			MSTATUS64_SD
+
+#if __riscv_xlen == 64
+#define HSTATUS_VSXL			_UL(0x300000000)
+#define HSTATUS_VSXL_SHIFT		32
+#endif
+#define HSTATUS_VTSR			_UL(0x00400000)
+#define HSTATUS_VTW			_UL(0x00200000)
+#define HSTATUS_VTVM			_UL(0x00100000)
+#define HSTATUS_VGEIN			_UL(0x0003f000)
+#define HSTATUS_VGEIN_SHIFT		12
+#define HSTATUS_HU			_UL(0x00000200)
+#define HSTATUS_SPVP			_UL(0x00000100)
+#define HSTATUS_SPV			_UL(0x00000080)
+#define HSTATUS_GVA			_UL(0x00000040)
+#define HSTATUS_VSBE			_UL(0x00000020)
+
+#define IRQ_S_SOFT			1
+#define IRQ_VS_SOFT			2
+#define IRQ_M_SOFT			3
+#define IRQ_S_TIMER			5
+#define IRQ_VS_TIMER			6
+#define IRQ_M_TIMER			7
+#define IRQ_S_EXT			9
+#define IRQ_VS_EXT			10
+#define IRQ_M_EXT			11
+#define IRQ_S_GEXT			12
+#define IRQ_PMU_OVF			13
+
+#define MIP_SSIP			(_UL(1) << IRQ_S_SOFT)
+#define MIP_VSSIP			(_UL(1) << IRQ_VS_SOFT)
+#define MIP_MSIP			(_UL(1) << IRQ_M_SOFT)
+#define MIP_STIP			(_UL(1) << IRQ_S_TIMER)
+#define MIP_VSTIP			(_UL(1) << IRQ_VS_TIMER)
+#define MIP_MTIP			(_UL(1) << IRQ_M_TIMER)
+#define MIP_SEIP			(_UL(1) << IRQ_S_EXT)
+#define MIP_VSEIP			(_UL(1) << IRQ_VS_EXT)
+#define MIP_MEIP			(_UL(1) << IRQ_M_EXT)
+#define MIP_SGEIP			(_UL(1) << IRQ_S_GEXT)
+#define MIP_LCOFIP			(_UL(1) << IRQ_PMU_OVF)
+
+#define SIP_SSIP			MIP_SSIP
+#define SIP_STIP			MIP_STIP
+
+#define PRV_U				_UL(0)
+#define PRV_S				_UL(1)
+#define PRV_M				_UL(3)
+
+#define SATP32_MODE			_UL(0x80000000)
+#define SATP32_MODE_SHIFT		31
+#define SATP32_ASID			_UL(0x7FC00000)
+#define SATP32_PPN			_UL(0x003FFFFF)
+#define SATP64_MODE			_ULL(0xF000000000000000)
+#define SATP64_MODE_SHIFT		60
+#define SATP64_ASID			_ULL(0x0FFFF00000000000)
+#define SATP64_PPN			_ULL(0x00000FFFFFFFFFFF)
+
+#define SATP_MODE_OFF			_UL(0)
+#define SATP_MODE_SV32			_UL(1)
+#define SATP_MODE_SV39			_UL(8)
+#define SATP_MODE_SV48			_UL(9)
+#define SATP_MODE_SV57			_UL(10)
+#define SATP_MODE_SV64			_UL(11)
+
+#define HGATP_MODE_OFF			_UL(0)
+#define HGATP_MODE_SV32X4		_UL(1)
+#define HGATP_MODE_SV39X4		_UL(8)
+#define HGATP_MODE_SV48X4		_UL(9)
+
+#define HGATP32_MODE_SHIFT		31
+#define HGATP32_VMID_SHIFT		22
+#define HGATP32_VMID_MASK		_UL(0x1FC00000)
+#define HGATP32_PPN			_UL(0x003FFFFF)
+
+#define HGATP64_MODE_SHIFT		60
+#define HGATP64_VMID_SHIFT		44
+#define HGATP64_VMID_MASK		_ULL(0x03FFF00000000000)
+#define HGATP64_PPN			_ULL(0x00000FFFFFFFFFFF)
+
+#define PMP_R				_UL(0x01)
+#define PMP_W				_UL(0x02)
+#define PMP_X				_UL(0x04)
+#define PMP_A				_UL(0x18)
+#define PMP_A_TOR			_UL(0x08)
+#define PMP_A_NA4			_UL(0x10)
+#define PMP_A_NAPOT			_UL(0x18)
+#define PMP_L				_UL(0x80)
+
+#define PMP_SHIFT			2
+#define PMP_COUNT			64
+#if __riscv_xlen == 64
+#define PMP_ADDR_MASK			((_ULL(0x1) << 54) - 1)
+#else
+#define PMP_ADDR_MASK			_UL(0xFFFFFFFF)
+#endif
+
+#if __riscv_xlen == 64
+#define MSTATUS_SD			MSTATUS64_SD
+#define SSTATUS_SD			SSTATUS64_SD
+#define SATP_MODE			SATP64_MODE
+#define SATP_MODE_SHIFT			SATP64_MODE_SHIFT
+
+#define HGATP_PPN			HGATP64_PPN
+#define HGATP_VMID_SHIFT		HGATP64_VMID_SHIFT
+#define HGATP_VMID_MASK			HGATP64_VMID_MASK
+#define HGATP_MODE_SHIFT		HGATP64_MODE_SHIFT
+#else
+#define MSTATUS_SD			MSTATUS32_SD
+#define SSTATUS_SD			SSTATUS32_SD
+#define SATP_MODE			SATP32_MODE
+#define SATP_MODE_SHIFT			SATP32_MODE_SHIFT
+
+#define HGATP_PPN			HGATP32_PPN
+#define HGATP_VMID_SHIFT		HGATP32_VMID_SHIFT
+#define HGATP_VMID_MASK			HGATP32_VMID_MASK
+#define HGATP_MODE_SHIFT		HGATP32_MODE_SHIFT
+#endif
+
+#define TOPI_IID_SHIFT			16
+#define TOPI_IID_MASK			0xfff
+#define TOPI_IPRIO_MASK		0xff
+
+#if __riscv_xlen == 64
+#define MHPMEVENT_OF			(_UL(1) << 63)
+#define MHPMEVENT_MINH			(_UL(1) << 62)
+#define MHPMEVENT_SINH			(_UL(1) << 61)
+#define MHPMEVENT_UINH			(_UL(1) << 60)
+#define MHPMEVENT_VSINH			(_UL(1) << 59)
+#define MHPMEVENT_VUINH			(_UL(1) << 58)
+#else
+#define MHPMEVENTH_OF			(_ULL(1) << 31)
+#define MHPMEVENTH_MINH			(_ULL(1) << 30)
+#define MHPMEVENTH_SINH			(_ULL(1) << 29)
+#define MHPMEVENTH_UINH			(_ULL(1) << 28)
+#define MHPMEVENTH_VSINH		(_ULL(1) << 27)
+#define MHPMEVENTH_VUINH		(_ULL(1) << 26)
+
+#define MHPMEVENT_OF			(MHPMEVENTH_OF << 32)
+#define MHPMEVENT_MINH			(MHPMEVENTH_MINH << 32)
+#define MHPMEVENT_SINH			(MHPMEVENTH_SINH << 32)
+#define MHPMEVENT_UINH			(MHPMEVENTH_UINH << 32)
+#define MHPMEVENT_VSINH			(MHPMEVENTH_VSINH << 32)
+#define MHPMEVENT_VUINH			(MHPMEVENTH_VUINH << 32)
+
+#endif
+
+#define MHPMEVENT_SSCOF_MASK		_ULL(0xFFFF000000000000)
+
+#if __riscv_xlen > 32
+#define ENVCFG_STCE			(_ULL(1) << 63)
+#define ENVCFG_PBMTE			(_ULL(1) << 62)
+#else
+#define ENVCFGH_STCE			(_UL(1) << 31)
+#define ENVCFGH_PBMTE			(_UL(1) << 30)
+#endif
+#define ENVCFG_CBZE			(_UL(1) << 7)
+#define ENVCFG_CBCFE			(_UL(1) << 6)
+#define ENVCFG_CBIE_SHIFT		4
+#define ENVCFG_CBIE			(_UL(0x3) << ENVCFG_CBIE_SHIFT)
+#define ENVCFG_CBIE_ILL			_UL(0x0)
+#define ENVCFG_CBIE_FLUSH		_UL(0x1)
+#define ENVCFG_CBIE_INV			_UL(0x3)
+#define ENVCFG_FIOM			_UL(0x1)
+
+/* ===== User-level CSRs ===== */
+
+/* User Trap Setup (N-extension) */
+#define CSR_USTATUS			0x000
+#define CSR_UIE				0x004
+#define CSR_UTVEC			0x005
+
+/* User Trap Handling (N-extension) */
+#define CSR_USCRATCH			0x040
+#define CSR_UEPC			0x041
+#define CSR_UCAUSE			0x042
+#define CSR_UTVAL			0x043
+#define CSR_UIP				0x044
+
+/* User Floating-point CSRs */
+#define CSR_FFLAGS			0x001
+#define CSR_FRM				0x002
+#define CSR_FCSR			0x003
+
+/* User Counters/Timers */
+#define CSR_CYCLE			0xc00
+#define CSR_TIME			0xc01
+#define CSR_INSTRET			0xc02
+#define CSR_HPMCOUNTER3			0xc03
+#define CSR_HPMCOUNTER4			0xc04
+#define CSR_HPMCOUNTER5			0xc05
+#define CSR_HPMCOUNTER6			0xc06
+#define CSR_HPMCOUNTER7			0xc07
+#define CSR_HPMCOUNTER8			0xc08
+#define CSR_HPMCOUNTER9			0xc09
+#define CSR_HPMCOUNTER10		0xc0a
+#define CSR_HPMCOUNTER11		0xc0b
+#define CSR_HPMCOUNTER12		0xc0c
+#define CSR_HPMCOUNTER13		0xc0d
+#define CSR_HPMCOUNTER14		0xc0e
+#define CSR_HPMCOUNTER15		0xc0f
+#define CSR_HPMCOUNTER16		0xc10
+#define CSR_HPMCOUNTER17		0xc11
+#define CSR_HPMCOUNTER18		0xc12
+#define CSR_HPMCOUNTER19		0xc13
+#define CSR_HPMCOUNTER20		0xc14
+#define CSR_HPMCOUNTER21		0xc15
+#define CSR_HPMCOUNTER22		0xc16
+#define CSR_HPMCOUNTER23		0xc17
+#define CSR_HPMCOUNTER24		0xc18
+#define CSR_HPMCOUNTER25		0xc19
+#define CSR_HPMCOUNTER26		0xc1a
+#define CSR_HPMCOUNTER27		0xc1b
+#define CSR_HPMCOUNTER28		0xc1c
+#define CSR_HPMCOUNTER29		0xc1d
+#define CSR_HPMCOUNTER30		0xc1e
+#define CSR_HPMCOUNTER31		0xc1f
+#define CSR_CYCLEH			0xc80
+#define CSR_TIMEH			0xc81
+#define CSR_INSTRETH			0xc82
+#define CSR_HPMCOUNTER3H		0xc83
+#define CSR_HPMCOUNTER4H		0xc84
+#define CSR_HPMCOUNTER5H		0xc85
+#define CSR_HPMCOUNTER6H		0xc86
+#define CSR_HPMCOUNTER7H		0xc87
+#define CSR_HPMCOUNTER8H		0xc88
+#define CSR_HPMCOUNTER9H		0xc89
+#define CSR_HPMCOUNTER10H		0xc8a
+#define CSR_HPMCOUNTER11H		0xc8b
+#define CSR_HPMCOUNTER12H		0xc8c
+#define CSR_HPMCOUNTER13H		0xc8d
+#define CSR_HPMCOUNTER14H		0xc8e
+#define CSR_HPMCOUNTER15H		0xc8f
+#define CSR_HPMCOUNTER16H		0xc90
+#define CSR_HPMCOUNTER17H		0xc91
+#define CSR_HPMCOUNTER18H		0xc92
+#define CSR_HPMCOUNTER19H		0xc93
+#define CSR_HPMCOUNTER20H		0xc94
+#define CSR_HPMCOUNTER21H		0xc95
+#define CSR_HPMCOUNTER22H		0xc96
+#define CSR_HPMCOUNTER23H		0xc97
+#define CSR_HPMCOUNTER24H		0xc98
+#define CSR_HPMCOUNTER25H		0xc99
+#define CSR_HPMCOUNTER26H		0xc9a
+#define CSR_HPMCOUNTER27H		0xc9b
+#define CSR_HPMCOUNTER28H		0xc9c
+#define CSR_HPMCOUNTER29H		0xc9d
+#define CSR_HPMCOUNTER30H		0xc9e
+#define CSR_HPMCOUNTER31H		0xc9f
+
+/* ===== Supervisor-level CSRs ===== */
+
+/* Supervisor Trap Setup */
+#define CSR_SSTATUS			0x100
+#define CSR_SIE				0x104
+#define CSR_STVEC			0x105
+#define CSR_SCOUNTEREN			0x106
+
+/* Supervisor Configuration */
+#define CSR_SENVCFG			0x10a
+
+/* Supervisor Trap Handling */
+#define CSR_SSCRATCH			0x140
+#define CSR_SEPC			0x141
+#define CSR_SCAUSE			0x142
+#define CSR_STVAL			0x143
+#define CSR_SIP				0x144
+
+/* Sstc extension */
+#define CSR_STIMECMP			0x14D
+#define CSR_STIMECMPH			0x15D
+
+/* Supervisor Protection and Translation */
+#define CSR_SATP			0x180
+
+/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_SISELECT			0x150
+#define CSR_SIREG			0x151
+
+/* Supervisor-Level Interrupts (AIA) */
+#define CSR_STOPEI			0x15c
+#define CSR_STOPI			0xdb0
+
+/* Supervisor-Level High-Half CSRs (AIA) */
+#define CSR_SIEH			0x114
+#define CSR_SIPH			0x154
+
+/* Supervisor stateen CSRs */
+#define CSR_SSTATEEN0			0x10C
+#define CSR_SSTATEEN1			0x10D
+#define CSR_SSTATEEN2			0x10E
+#define CSR_SSTATEEN3			0x10F
+
+/* ===== Hypervisor-level CSRs ===== */
+
+/* Hypervisor Trap Setup (H-extension) */
+#define CSR_HSTATUS			0x600
+#define CSR_HEDELEG			0x602
+#define CSR_HIDELEG			0x603
+#define CSR_HIE				0x604
+#define CSR_HCOUNTEREN			0x606
+#define CSR_HGEIE			0x607
+
+/* Hypervisor Configuration */
+#define CSR_HENVCFG			0x60a
+#define CSR_HENVCFGH			0x61a
+
+/* Hypervisor Trap Handling (H-extension) */
+#define CSR_HTVAL			0x643
+#define CSR_HIP				0x644
+#define CSR_HVIP			0x645
+#define CSR_HTINST			0x64a
+#define CSR_HGEIP			0xe12
+
+/* Hypervisor Protection and Translation (H-extension) */
+#define CSR_HGATP			0x680
+
+/* Hypervisor Counter/Timer Virtualization Registers (H-extension) */
+#define CSR_HTIMEDELTA			0x605
+#define CSR_HTIMEDELTAH			0x615
+
+/* Virtual Supervisor Registers (H-extension) */
+#define CSR_VSSTATUS			0x200
+#define CSR_VSIE			0x204
+#define CSR_VSTVEC			0x205
+#define CSR_VSSCRATCH			0x240
+#define CSR_VSEPC			0x241
+#define CSR_VSCAUSE			0x242
+#define CSR_VSTVAL			0x243
+#define CSR_VSIP			0x244
+#define CSR_VSATP			0x280
+
+/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
+#define CSR_HVIEN			0x608
+#define CSR_HVICTL			0x609
+#define CSR_HVIPRIO1			0x646
+#define CSR_HVIPRIO2			0x647
+
+/* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA) */
+#define CSR_VSISELECT			0x250
+#define CSR_VSIREG			0x251
+
+/* VS-Level Interrupts (H-extension with AIA) */
+#define CSR_VSTOPEI			0x25c
+#define CSR_VSTOPI			0xeb0
+
+/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
+#define CSR_HIDELEGH			0x613
+#define CSR_HVIENH			0x618
+#define CSR_HVIPH			0x655
+#define CSR_HVIPRIO1H			0x656
+#define CSR_HVIPRIO2H			0x657
+#define CSR_VSIEH			0x214
+#define CSR_VSIPH			0x254
+
+/* Hypervisor stateen CSRs */
+#define CSR_HSTATEEN0			0x60C
+#define CSR_HSTATEEN0H			0x61C
+#define CSR_HSTATEEN1			0x60D
+#define CSR_HSTATEEN1H			0x61D
+#define CSR_HSTATEEN2			0x60E
+#define CSR_HSTATEEN2H			0x61E
+#define CSR_HSTATEEN3			0x60F
+#define CSR_HSTATEEN3H			0x61F
+
+/* ===== Machine-level CSRs ===== */
+
+/* Machine Information Registers */
+#define CSR_MVENDORID			0xf11
+#define CSR_MARCHID			0xf12
+#define CSR_MIMPID			0xf13
+#define CSR_MHARTID			0xf14
+
+/* Machine Trap Setup */
+#define CSR_MSTATUS			0x300
+#define CSR_MISA			0x301
+#define CSR_MEDELEG			0x302
+#define CSR_MIDELEG			0x303
+#define CSR_MIE				0x304
+#define CSR_MTVEC			0x305
+#define CSR_MCOUNTEREN			0x306
+#define CSR_MSTATUSH			0x310
+
+/* Machine Configuration */
+#define CSR_MENVCFG			0x30a
+#define CSR_MENVCFGH			0x31a
+
+/* Machine Trap Handling */
+#define CSR_MSCRATCH			0x340
+#define CSR_MEPC			0x341
+#define CSR_MCAUSE			0x342
+#define CSR_MTVAL			0x343
+#define CSR_MIP				0x344
+#define CSR_MTINST			0x34a
+#define CSR_MTVAL2			0x34b
+
+/* Machine Memory Protection */
+#define CSR_PMPCFG0			0x3a0
+#define CSR_PMPCFG1			0x3a1
+#define CSR_PMPCFG2			0x3a2
+#define CSR_PMPCFG3			0x3a3
+#define CSR_PMPCFG4			0x3a4
+#define CSR_PMPCFG5			0x3a5
+#define CSR_PMPCFG6			0x3a6
+#define CSR_PMPCFG7			0x3a7
+#define CSR_PMPCFG8			0x3a8
+#define CSR_PMPCFG9			0x3a9
+#define CSR_PMPCFG10			0x3aa
+#define CSR_PMPCFG11			0x3ab
+#define CSR_PMPCFG12			0x3ac
+#define CSR_PMPCFG13			0x3ad
+#define CSR_PMPCFG14			0x3ae
+#define CSR_PMPCFG15			0x3af
+#define CSR_PMPADDR0			0x3b0
+#define CSR_PMPADDR1			0x3b1
+#define CSR_PMPADDR2			0x3b2
+#define CSR_PMPADDR3			0x3b3
+#define CSR_PMPADDR4			0x3b4
+#define CSR_PMPADDR5			0x3b5
+#define CSR_PMPADDR6			0x3b6
+#define CSR_PMPADDR7			0x3b7
+#define CSR_PMPADDR8			0x3b8
+#define CSR_PMPADDR9			0x3b9
+#define CSR_PMPADDR10			0x3ba
+#define CSR_PMPADDR11			0x3bb
+#define CSR_PMPADDR12			0x3bc
+#define CSR_PMPADDR13			0x3bd
+#define CSR_PMPADDR14			0x3be
+#define CSR_PMPADDR15			0x3bf
+#define CSR_PMPADDR16			0x3c0
+#define CSR_PMPADDR17			0x3c1
+#define CSR_PMPADDR18			0x3c2
+#define CSR_PMPADDR19			0x3c3
+#define CSR_PMPADDR20			0x3c4
+#define CSR_PMPADDR21			0x3c5
+#define CSR_PMPADDR22			0x3c6
+#define CSR_PMPADDR23			0x3c7
+#define CSR_PMPADDR24			0x3c8
+#define CSR_PMPADDR25			0x3c9
+#define CSR_PMPADDR26			0x3ca
+#define CSR_PMPADDR27			0x3cb
+#define CSR_PMPADDR28			0x3cc
+#define CSR_PMPADDR29			0x3cd
+#define CSR_PMPADDR30			0x3ce
+#define CSR_PMPADDR31			0x3cf
+#define CSR_PMPADDR32			0x3d0
+#define CSR_PMPADDR33			0x3d1
+#define CSR_PMPADDR34			0x3d2
+#define CSR_PMPADDR35			0x3d3
+#define CSR_PMPADDR36			0x3d4
+#define CSR_PMPADDR37			0x3d5
+#define CSR_PMPADDR38			0x3d6
+#define CSR_PMPADDR39			0x3d7
+#define CSR_PMPADDR40			0x3d8
+#define CSR_PMPADDR41			0x3d9
+#define CSR_PMPADDR42			0x3da
+#define CSR_PMPADDR43			0x3db
+#define CSR_PMPADDR44			0x3dc
+#define CSR_PMPADDR45			0x3dd
+#define CSR_PMPADDR46			0x3de
+#define CSR_PMPADDR47			0x3df
+#define CSR_PMPADDR48			0x3e0
+#define CSR_PMPADDR49			0x3e1
+#define CSR_PMPADDR50			0x3e2
+#define CSR_PMPADDR51			0x3e3
+#define CSR_PMPADDR52			0x3e4
+#define CSR_PMPADDR53			0x3e5
+#define CSR_PMPADDR54			0x3e6
+#define CSR_PMPADDR55			0x3e7
+#define CSR_PMPADDR56			0x3e8
+#define CSR_PMPADDR57			0x3e9
+#define CSR_PMPADDR58			0x3ea
+#define CSR_PMPADDR59			0x3eb
+#define CSR_PMPADDR60			0x3ec
+#define CSR_PMPADDR61			0x3ed
+#define CSR_PMPADDR62			0x3ee
+#define CSR_PMPADDR63			0x3ef
+
+/* Machine Counters/Timers */
+#define CSR_MCYCLE			0xb00
+#define CSR_MINSTRET			0xb02
+#define CSR_MHPMCOUNTER3		0xb03
+#define CSR_MHPMCOUNTER4		0xb04
+#define CSR_MHPMCOUNTER5		0xb05
+#define CSR_MHPMCOUNTER6		0xb06
+#define CSR_MHPMCOUNTER7		0xb07
+#define CSR_MHPMCOUNTER8		0xb08
+#define CSR_MHPMCOUNTER9		0xb09
+#define CSR_MHPMCOUNTER10		0xb0a
+#define CSR_MHPMCOUNTER11		0xb0b
+#define CSR_MHPMCOUNTER12		0xb0c
+#define CSR_MHPMCOUNTER13		0xb0d
+#define CSR_MHPMCOUNTER14		0xb0e
+#define CSR_MHPMCOUNTER15		0xb0f
+#define CSR_MHPMCOUNTER16		0xb10
+#define CSR_MHPMCOUNTER17		0xb11
+#define CSR_MHPMCOUNTER18		0xb12
+#define CSR_MHPMCOUNTER19		0xb13
+#define CSR_MHPMCOUNTER20		0xb14
+#define CSR_MHPMCOUNTER21		0xb15
+#define CSR_MHPMCOUNTER22		0xb16
+#define CSR_MHPMCOUNTER23		0xb17
+#define CSR_MHPMCOUNTER24		0xb18
+#define CSR_MHPMCOUNTER25		0xb19
+#define CSR_MHPMCOUNTER26		0xb1a
+#define CSR_MHPMCOUNTER27		0xb1b
+#define CSR_MHPMCOUNTER28		0xb1c
+#define CSR_MHPMCOUNTER29		0xb1d
+#define CSR_MHPMCOUNTER30		0xb1e
+#define CSR_MHPMCOUNTER31		0xb1f
+#define CSR_MCYCLEH			0xb80
+#define CSR_MINSTRETH			0xb82
+#define CSR_MHPMCOUNTER3H		0xb83
+#define CSR_MHPMCOUNTER4H		0xb84
+#define CSR_MHPMCOUNTER5H		0xb85
+#define CSR_MHPMCOUNTER6H		0xb86
+#define CSR_MHPMCOUNTER7H		0xb87
+#define CSR_MHPMCOUNTER8H		0xb88
+#define CSR_MHPMCOUNTER9H		0xb89
+#define CSR_MHPMCOUNTER10H		0xb8a
+#define CSR_MHPMCOUNTER11H		0xb8b
+#define CSR_MHPMCOUNTER12H		0xb8c
+#define CSR_MHPMCOUNTER13H		0xb8d
+#define CSR_MHPMCOUNTER14H		0xb8e
+#define CSR_MHPMCOUNTER15H		0xb8f
+#define CSR_MHPMCOUNTER16H		0xb90
+#define CSR_MHPMCOUNTER17H		0xb91
+#define CSR_MHPMCOUNTER18H		0xb92
+#define CSR_MHPMCOUNTER19H		0xb93
+#define CSR_MHPMCOUNTER20H		0xb94
+#define CSR_MHPMCOUNTER21H		0xb95
+#define CSR_MHPMCOUNTER22H		0xb96
+#define CSR_MHPMCOUNTER23H		0xb97
+#define CSR_MHPMCOUNTER24H		0xb98
+#define CSR_MHPMCOUNTER25H		0xb99
+#define CSR_MHPMCOUNTER26H		0xb9a
+#define CSR_MHPMCOUNTER27H		0xb9b
+#define CSR_MHPMCOUNTER28H		0xb9c
+#define CSR_MHPMCOUNTER29H		0xb9d
+#define CSR_MHPMCOUNTER30H		0xb9e
+#define CSR_MHPMCOUNTER31H		0xb9f
+
+/* Machine Counter Setup */
+#define CSR_MCOUNTINHIBIT		0x320
+#define CSR_MHPMEVENT3			0x323
+#define CSR_MHPMEVENT4			0x324
+#define CSR_MHPMEVENT5			0x325
+#define CSR_MHPMEVENT6			0x326
+#define CSR_MHPMEVENT7			0x327
+#define CSR_MHPMEVENT8			0x328
+#define CSR_MHPMEVENT9			0x329
+#define CSR_MHPMEVENT10			0x32a
+#define CSR_MHPMEVENT11			0x32b
+#define CSR_MHPMEVENT12			0x32c
+#define CSR_MHPMEVENT13			0x32d
+#define CSR_MHPMEVENT14			0x32e
+#define CSR_MHPMEVENT15			0x32f
+#define CSR_MHPMEVENT16			0x330
+#define CSR_MHPMEVENT17			0x331
+#define CSR_MHPMEVENT18			0x332
+#define CSR_MHPMEVENT19			0x333
+#define CSR_MHPMEVENT20			0x334
+#define CSR_MHPMEVENT21			0x335
+#define CSR_MHPMEVENT22			0x336
+#define CSR_MHPMEVENT23			0x337
+#define CSR_MHPMEVENT24			0x338
+#define CSR_MHPMEVENT25			0x339
+#define CSR_MHPMEVENT26			0x33a
+#define CSR_MHPMEVENT27			0x33b
+#define CSR_MHPMEVENT28			0x33c
+#define CSR_MHPMEVENT29			0x33d
+#define CSR_MHPMEVENT30			0x33e
+#define CSR_MHPMEVENT31			0x33f
+
+/* For RV32 */
+#define CSR_MHPMEVENT3H			0x723
+#define CSR_MHPMEVENT4H			0x724
+#define CSR_MHPMEVENT5H			0x725
+#define CSR_MHPMEVENT6H			0x726
+#define CSR_MHPMEVENT7H			0x727
+#define CSR_MHPMEVENT8H			0x728
+#define CSR_MHPMEVENT9H			0x729
+#define CSR_MHPMEVENT10H		0x72a
+#define CSR_MHPMEVENT11H		0x72b
+#define CSR_MHPMEVENT12H		0x72c
+#define CSR_MHPMEVENT13H		0x72d
+#define CSR_MHPMEVENT14H		0x72e
+#define CSR_MHPMEVENT15H		0x72f
+#define CSR_MHPMEVENT16H		0x730
+#define CSR_MHPMEVENT17H		0x731
+#define CSR_MHPMEVENT18H		0x732
+#define CSR_MHPMEVENT19H		0x733
+#define CSR_MHPMEVENT20H		0x734
+#define CSR_MHPMEVENT21H		0x735
+#define CSR_MHPMEVENT22H		0x736
+#define CSR_MHPMEVENT23H		0x737
+#define CSR_MHPMEVENT24H		0x738
+#define CSR_MHPMEVENT25H		0x739
+#define CSR_MHPMEVENT26H		0x73a
+#define CSR_MHPMEVENT27H		0x73b
+#define CSR_MHPMEVENT28H		0x73c
+#define CSR_MHPMEVENT29H		0x73d
+#define CSR_MHPMEVENT30H		0x73e
+#define CSR_MHPMEVENT31H		0x73f
+
+/* Counter Overflow CSR */
+#define CSR_SCOUNTOVF			0xda0
+
+/* Debug/Trace Registers */
+#define CSR_TSELECT			0x7a0
+#define CSR_TDATA1			0x7a1
+#define CSR_TDATA2			0x7a2
+#define CSR_TDATA3			0x7a3
+
+/* Debug Mode Registers */
+#define CSR_DCSR			0x7b0
+#define CSR_DPC				0x7b1
+#define CSR_DSCRATCH0			0x7b2
+#define CSR_DSCRATCH1			0x7b3
+
+/* Machine-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_MISELECT			0x350
+#define CSR_MIREG			0x351
+
+/* Machine-Level Interrupts (AIA) */
+#define CSR_MTOPEI			0x35c
+#define CSR_MTOPI			0xfb0
+
+/* Virtual Interrupts for Supervisor Level (AIA) */
+#define CSR_MVIEN			0x308
+#define CSR_MVIP			0x309
+
+/* Smstateen extension registers */
+/* Machine stateen CSRs */
+#define CSR_MSTATEEN0			0x30C
+#define CSR_MSTATEEN0H			0x31C
+#define CSR_MSTATEEN1			0x30D
+#define CSR_MSTATEEN1H			0x31D
+#define CSR_MSTATEEN2			0x30E
+#define CSR_MSTATEEN2H			0x31E
+#define CSR_MSTATEEN3			0x30F
+#define CSR_MSTATEEN3H			0x31F
+
+/* Machine-Level High-Half CSRs (AIA) */
+#define CSR_MIDELEGH			0x313
+#define CSR_MIEH			0x314
+#define CSR_MVIENH			0x318
+#define CSR_MVIPH			0x319
+#define CSR_MIPH			0x354
+
+/* ===== Trap/Exception Causes ===== */
+
+/* Exception cause high bit - set when the trap is an interrupt */
+#define CAUSE_IRQ_FLAG			(_UL(1) << (__riscv_xlen - 1))
+
+#define CAUSE_MISALIGNED_FETCH		0x0
+#define CAUSE_FETCH_ACCESS		0x1
+#define CAUSE_ILLEGAL_INSTRUCTION	0x2
+#define CAUSE_BREAKPOINT		0x3
+#define CAUSE_MISALIGNED_LOAD		0x4
+#define CAUSE_LOAD_ACCESS		0x5
+#define CAUSE_MISALIGNED_STORE		0x6
+#define CAUSE_STORE_ACCESS		0x7
+#define CAUSE_USER_ECALL		0x8
+#define CAUSE_SUPERVISOR_ECALL		0x9
+#define CAUSE_VIRTUAL_SUPERVISOR_ECALL	0xa
+#define CAUSE_MACHINE_ECALL		0xb
+#define CAUSE_FETCH_PAGE_FAULT		0xc
+#define CAUSE_LOAD_PAGE_FAULT		0xd
+#define CAUSE_STORE_PAGE_FAULT		0xf
+#define CAUSE_FETCH_GUEST_PAGE_FAULT	0x14
+#define CAUSE_LOAD_GUEST_PAGE_FAULT	0x15
+#define CAUSE_VIRTUAL_INST_FAULT	0x16
+#define CAUSE_STORE_GUEST_PAGE_FAULT	0x17
+
+/* Common defines for all smstateen */
+#define SMSTATEEN_MAX_COUNT		4
+#define SMSTATEEN0_CS_SHIFT		0
+#define SMSTATEEN0_CS			(_ULL(1) << SMSTATEEN0_CS_SHIFT)
+#define SMSTATEEN0_FCSR_SHIFT		1
+#define SMSTATEEN0_FCSR			(_ULL(1) << SMSTATEEN0_FCSR_SHIFT)
+#define SMSTATEEN0_IMSIC_SHIFT		58
+#define SMSTATEEN0_IMSIC		(_ULL(1) << SMSTATEEN0_IMSIC_SHIFT)
+#define SMSTATEEN0_AIA_SHIFT		59
+#define SMSTATEEN0_AIA			(_ULL(1) << SMSTATEEN0_AIA_SHIFT)
+#define SMSTATEEN0_SVSLCT_SHIFT		60
+#define SMSTATEEN0_SVSLCT		(_ULL(1) << SMSTATEEN0_SVSLCT_SHIFT)
+#define SMSTATEEN0_HSENVCFG_SHIFT	62
+#define SMSTATEEN0_HSENVCFG		(_ULL(1) << SMSTATEEN0_HSENVCFG_SHIFT)
+#define SMSTATEEN_STATEN_SHIFT		63
+#define SMSTATEEN_STATEN		(_ULL(1) << SMSTATEEN_STATEN_SHIFT)
+
+/* ===== Instruction Encodings ===== */
+
+#define INSN_MATCH_LB			0x3
+#define INSN_MASK_LB			0x707f
+#define INSN_MATCH_LH			0x1003
+#define INSN_MASK_LH			0x707f
+#define INSN_MATCH_LW			0x2003
+#define INSN_MASK_LW			0x707f
+#define INSN_MATCH_LD			0x3003
+#define INSN_MASK_LD			0x707f
+#define INSN_MATCH_LBU			0x4003
+#define INSN_MASK_LBU			0x707f
+#define INSN_MATCH_LHU			0x5003
+#define INSN_MASK_LHU			0x707f
+#define INSN_MATCH_LWU			0x6003
+#define INSN_MASK_LWU			0x707f
+#define INSN_MATCH_SB			0x23
+#define INSN_MASK_SB			0x707f
+#define INSN_MATCH_SH			0x1023
+#define INSN_MASK_SH			0x707f
+#define INSN_MATCH_SW			0x2023
+#define INSN_MASK_SW			0x707f
+#define INSN_MATCH_SD			0x3023
+#define INSN_MASK_SD			0x707f
+
+#define INSN_MATCH_FLW			0x2007
+#define INSN_MASK_FLW			0x707f
+#define INSN_MATCH_FLD			0x3007
+#define INSN_MASK_FLD			0x707f
+#define INSN_MATCH_FLQ			0x4007
+#define INSN_MASK_FLQ			0x707f
+#define INSN_MATCH_FSW			0x2027
+#define INSN_MASK_FSW			0x707f
+#define INSN_MATCH_FSD			0x3027
+#define INSN_MASK_FSD			0x707f
+#define INSN_MATCH_FSQ			0x4027
+#define INSN_MASK_FSQ			0x707f
+
+#define INSN_MATCH_C_LD			0x6000
+#define INSN_MASK_C_LD			0xe003
+#define INSN_MATCH_C_SD			0xe000
+#define INSN_MASK_C_SD			0xe003
+#define INSN_MATCH_C_LW			0x4000
+#define INSN_MASK_C_LW			0xe003
+#define INSN_MATCH_C_SW			0xc000
+#define INSN_MASK_C_SW			0xe003
+#define INSN_MATCH_C_LDSP		0x6002
+#define INSN_MASK_C_LDSP		0xe003
+#define INSN_MATCH_C_SDSP		0xe002
+#define INSN_MASK_C_SDSP		0xe003
+#define INSN_MATCH_C_LWSP		0x4002
+#define INSN_MASK_C_LWSP		0xe003
+#define INSN_MATCH_C_SWSP		0xc002
+#define INSN_MASK_C_SWSP		0xe003
+
+#define INSN_MATCH_C_FLD		0x2000
+#define INSN_MASK_C_FLD			0xe003
+#define INSN_MATCH_C_FLW		0x6000
+#define INSN_MASK_C_FLW			0xe003
+#define INSN_MATCH_C_FSD		0xa000
+#define INSN_MASK_C_FSD			0xe003
+#define INSN_MATCH_C_FSW		0xe000
+#define INSN_MASK_C_FSW			0xe003
+#define INSN_MATCH_C_FLDSP		0x2002
+#define INSN_MASK_C_FLDSP		0xe003
+#define INSN_MATCH_C_FSDSP		0xa002
+#define INSN_MASK_C_FSDSP		0xe003
+#define INSN_MATCH_C_FLWSP		0x6002
+#define INSN_MASK_C_FLWSP		0xe003
+#define INSN_MATCH_C_FSWSP		0xe002
+#define INSN_MASK_C_FSWSP		0xe003
+
+#define INSN_MASK_WFI			0xffffff00
+#define INSN_MATCH_WFI			0x10500000
+
+#define INSN_MASK_FENCE_TSO		0xffffffff
+#define INSN_MATCH_FENCE_TSO		0x8330000f
+
+#if __riscv_xlen == 64
+
+/* 64-bit read for VS-stage address translation (RV64) */
+#define INSN_PSEUDO_VS_LOAD		0x00003000
+
+/* 64-bit write for VS-stage address translation (RV64) */
+#define INSN_PSEUDO_VS_STORE		0x00003020
+
+#elif __riscv_xlen == 32
+
+/* 32-bit read for VS-stage address translation (RV32) */
+#define INSN_PSEUDO_VS_LOAD		0x00002000
+
+/* 32-bit write for VS-stage address translation (RV32) */
+#define INSN_PSEUDO_VS_STORE		0x00002020
+
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define INSN_16BIT_MASK			0x3
+#define INSN_32BIT_MASK			0x1c
+
+#define INSN_IS_16BIT(insn)		\
+	(((insn) & INSN_16BIT_MASK) != INSN_16BIT_MASK)
+#define INSN_IS_32BIT(insn)		\
+	(((insn) & INSN_16BIT_MASK) == INSN_16BIT_MASK && \
+	 ((insn) & INSN_32BIT_MASK) != INSN_32BIT_MASK)
+
+#define INSN_LEN(insn)			(INSN_IS_16BIT(insn) ? 2 : 4)
+
+#if __riscv_xlen == 64
+#define LOG_REGBYTES			3
+#else
+#define LOG_REGBYTES			2
+#endif
+#define REGBYTES			(1 << LOG_REGBYTES)
+
+#define SH_RD				7
+#define SH_RS1				15
+#define SH_RS2				20
+#define SH_RS2C				2
+
+#define RV_X(x, s, n)			(((x) >> (s)) & ((1 << (n)) - 1))
+#define RVC_LW_IMM(x)			((RV_X(x, 6, 1) << 2) | \
+					 (RV_X(x, 10, 3) << 3) | \
+					 (RV_X(x, 5, 1) << 6))
+#define RVC_LD_IMM(x)			((RV_X(x, 10, 3) << 3) | \
+					 (RV_X(x, 5, 2) << 6))
+#define RVC_LWSP_IMM(x)			((RV_X(x, 4, 3) << 2) | \
+					 (RV_X(x, 12, 1) << 5) | \
+					 (RV_X(x, 2, 2) << 6))
+#define RVC_LDSP_IMM(x)			((RV_X(x, 5, 2) << 3) | \
+					 (RV_X(x, 12, 1) << 5) | \
+					 (RV_X(x, 2, 3) << 6))
+#define RVC_SWSP_IMM(x)			((RV_X(x, 9, 4) << 2) | \
+					 (RV_X(x, 7, 2) << 6))
+#define RVC_SDSP_IMM(x)			((RV_X(x, 10, 3) << 3) | \
+					 (RV_X(x, 7, 3) << 6))
+#define RVC_RS1S(insn)			(8 + RV_X(insn, SH_RD, 3))
+#define RVC_RS2S(insn)			(8 + RV_X(insn, SH_RS2C, 3))
+#define RVC_RS2(insn)			RV_X(insn, SH_RS2C, 5)
+
+#define SHIFT_RIGHT(x, y)		\
+	((y) < 0 ? ((x) << -(y)) : ((x) >> (y)))
+
+#define REG_MASK			\
+	((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES))
+
+#define REG_OFFSET(insn, pos)		\
+	(SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK)
+
+#define REG_PTR(insn, pos, regs)	\
+	(unsigned long *)((unsigned long)(regs) + REG_OFFSET(insn, pos))
+
+#define GET_RM(insn)			(((insn) >> 12) & 7)
+
+#define GET_RS1(insn, regs)		(*REG_PTR(insn, SH_RS1, regs))
+#define GET_RS2(insn, regs)		(*REG_PTR(insn, SH_RS2, regs))
+#define GET_RS1S(insn, regs)		(*REG_PTR(RVC_RS1S(insn), 0, regs))
+#define GET_RS2S(insn, regs)		(*REG_PTR(RVC_RS2S(insn), 0, regs))
+#define GET_RS2C(insn, regs)		(*REG_PTR(insn, SH_RS2C, regs))
+#define GET_SP(regs)			(*REG_PTR(2, 0, regs))
+#define SET_RD(insn, regs, val)		(*REG_PTR(insn, SH_RD, regs) = (val))
+#define IMM_I(insn)			((int32_t)(insn) >> 20)
+#define IMM_S(insn)			(((int32_t)(insn) >> 25 << 5) | \
+					 (int32_t)(((insn) >> 7) & 0x1f))
+#define MASK_FUNCT3			0x7000
+
+/* clang-format on */
+
+#endif
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:55 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v2 07/14] xen/riscv: introduce exception context
Date: Fri, 27 Jan 2023 15:59:12 +0200
Message-Id: <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the set of registers which should be saved to and
restored from the stack when an exception occurs, together with a set
of defines used during exception context saving/restoring.

The <asm/processor.h> header was originally introduced in Bobby's patch
series; it is partially re-used here with the following changes:
  - Move all RISCV_CPU_USER_REGS_* to asm/asm-offsets.c
  - Remove RISCV_CPU_USER_REGS_OFFSET & RISCV_CPU_USER_REGS_SIZE as they
    are no longer needed once RISCV_CPU_USER_REGS_* live in
    asm/asm-offsets.c
  - Remove RISCV_PCPUINFO_* as they aren't needed at the current stage
    of the RISC-V port
  - Replace register_t with unsigned long
  - Rename wait_for_interrupt to wfi

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - All the changes are now listed in the commit message.
  - Temporarily add a die() function to stop execution; it will be
    removed once panic() becomes available.
---
 xen/arch/riscv/include/asm/processor.h | 82 ++++++++++++++++++++++++++
 xen/arch/riscv/riscv64/asm-offsets.c   | 53 +++++++++++++++++
 2 files changed, 135 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/processor.h

diff --git a/xen/arch/riscv/include/asm/processor.h b/xen/arch/riscv/include/asm/processor.h
new file mode 100644
index 0000000000..4292de2efc
--- /dev/null
+++ b/xen/arch/riscv/include/asm/processor.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: MIT */
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@gmail.com>
+ * Copyright 2023 (C) Vates
+ *
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#ifndef __ASSEMBLY__
+
+/* On stack VCPU state */
+struct cpu_user_regs
+{
+    unsigned long zero;
+    unsigned long ra;
+    unsigned long sp;
+    unsigned long gp;
+    unsigned long tp;
+    unsigned long t0;
+    unsigned long t1;
+    unsigned long t2;
+    unsigned long s0;
+    unsigned long s1;
+    unsigned long a0;
+    unsigned long a1;
+    unsigned long a2;
+    unsigned long a3;
+    unsigned long a4;
+    unsigned long a5;
+    unsigned long a6;
+    unsigned long a7;
+    unsigned long s2;
+    unsigned long s3;
+    unsigned long s4;
+    unsigned long s5;
+    unsigned long s6;
+    unsigned long s7;
+    unsigned long s8;
+    unsigned long s9;
+    unsigned long s10;
+    unsigned long s11;
+    unsigned long t3;
+    unsigned long t4;
+    unsigned long t5;
+    unsigned long t6;
+    unsigned long sepc;
+    unsigned long sstatus;
+    /* pointer to previous stack_cpu_regs */
+    unsigned long pregs;
+};
+
+static inline void wfi(void)
+{
+    __asm__ __volatile__ ("wfi");
+}
+
+/*
+ * panic() isn't available at the moment, so an infinite loop is used
+ * temporarily.
+ * TODO: change it to panic()
+ */
+static inline void die(void)
+{
+    for ( ; ; )
+        wfi();
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
index e69de29bb2..d632b75c2a 100644
--- a/xen/arch/riscv/riscv64/asm-offsets.c
+++ b/xen/arch/riscv/riscv64/asm-offsets.c
@@ -0,0 +1,53 @@
+#define COMPILE_OFFSETS
+
+#include <asm/processor.h>
+#include <xen/types.h>
+
+#define DEFINE(_sym, _val)                                                 \
+    asm volatile ("\n.ascii\"==>#define " #_sym " %0 /* " #_val " */<==\"" \
+                  : : "i" (_val) )
+#define BLANK()                                                            \
+    asm volatile ( "\n.ascii\"==><==\"" : : )
+#define OFFSET(_sym, _str, _mem)                                           \
+    DEFINE(_sym, offsetof(_str, _mem));
+
+void asm_offsets(void)
+{
+    BLANK();
+    DEFINE(CPU_USER_REGS_SIZE, sizeof(struct cpu_user_regs));
+    OFFSET(CPU_USER_REGS_ZERO, struct cpu_user_regs, zero);
+    OFFSET(CPU_USER_REGS_RA, struct cpu_user_regs, ra);
+    OFFSET(CPU_USER_REGS_SP, struct cpu_user_regs, sp);
+    OFFSET(CPU_USER_REGS_GP, struct cpu_user_regs, gp);
+    OFFSET(CPU_USER_REGS_TP, struct cpu_user_regs, tp);
+    OFFSET(CPU_USER_REGS_T0, struct cpu_user_regs, t0);
+    OFFSET(CPU_USER_REGS_T1, struct cpu_user_regs, t1);
+    OFFSET(CPU_USER_REGS_T2, struct cpu_user_regs, t2);
+    OFFSET(CPU_USER_REGS_S0, struct cpu_user_regs, s0);
+    OFFSET(CPU_USER_REGS_S1, struct cpu_user_regs, s1);
+    OFFSET(CPU_USER_REGS_A0, struct cpu_user_regs, a0);
+    OFFSET(CPU_USER_REGS_A1, struct cpu_user_regs, a1);
+    OFFSET(CPU_USER_REGS_A2, struct cpu_user_regs, a2);
+    OFFSET(CPU_USER_REGS_A3, struct cpu_user_regs, a3);
+    OFFSET(CPU_USER_REGS_A4, struct cpu_user_regs, a4);
+    OFFSET(CPU_USER_REGS_A5, struct cpu_user_regs, a5);
+    OFFSET(CPU_USER_REGS_A6, struct cpu_user_regs, a6);
+    OFFSET(CPU_USER_REGS_A7, struct cpu_user_regs, a7);
+    OFFSET(CPU_USER_REGS_S2, struct cpu_user_regs, s2);
+    OFFSET(CPU_USER_REGS_S3, struct cpu_user_regs, s3);
+    OFFSET(CPU_USER_REGS_S4, struct cpu_user_regs, s4);
+    OFFSET(CPU_USER_REGS_S5, struct cpu_user_regs, s5);
+    OFFSET(CPU_USER_REGS_S6, struct cpu_user_regs, s6);
+    OFFSET(CPU_USER_REGS_S7, struct cpu_user_regs, s7);
+    OFFSET(CPU_USER_REGS_S8, struct cpu_user_regs, s8);
+    OFFSET(CPU_USER_REGS_S9, struct cpu_user_regs, s9);
+    OFFSET(CPU_USER_REGS_S10, struct cpu_user_regs, s10);
+    OFFSET(CPU_USER_REGS_S11, struct cpu_user_regs, s11);
+    OFFSET(CPU_USER_REGS_T3, struct cpu_user_regs, t3);
+    OFFSET(CPU_USER_REGS_T4, struct cpu_user_regs, t4);
+    OFFSET(CPU_USER_REGS_T5, struct cpu_user_regs, t5);
+    OFFSET(CPU_USER_REGS_T6, struct cpu_user_regs, t6);
+    OFFSET(CPU_USER_REGS_SEPC, struct cpu_user_regs, sepc);
+    OFFSET(CPU_USER_REGS_SSTATUS, struct cpu_user_regs, sstatus);
+    OFFSET(CPU_USER_REGS_PREGS, struct cpu_user_regs, pregs);
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:55 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 08/14] xen/riscv: introduce exception handlers implementation
Date: Fri, 27 Jan 2023 15:59:13 +0200
Message-Id: <9cc958411ef5e0a36bbfb056a71ce7232e6665ef.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces an implementation of the basic exception handlers:
- to save/restore context
- to handle the exception itself; for now the handler merely calls
  wait_for_interrupt, nothing more.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - Refactor entry.S to use the defines introduced in asm-offsets.c.
  - Rename {__,}handle_exception to handle_trap()/do_trap() to be more
    consistent with the RISC-V spec.
  - Wrap handle_trap() in ENTRY().
---
 xen/arch/riscv/Makefile            |  2 +
 xen/arch/riscv/entry.S             | 94 ++++++++++++++++++++++++++++++
 xen/arch/riscv/include/asm/traps.h | 13 +++++
 xen/arch/riscv/traps.c             | 13 +++++
 4 files changed, 122 insertions(+)
 create mode 100644 xen/arch/riscv/entry.S
 create mode 100644 xen/arch/riscv/include/asm/traps.h
 create mode 100644 xen/arch/riscv/traps.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 1a4f1a6015..443f6bf15f 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,7 +1,9 @@
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-y += entry.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
+obj-y += traps.o
 
 $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
diff --git a/xen/arch/riscv/entry.S b/xen/arch/riscv/entry.S
new file mode 100644
index 0000000000..0be543f8e0
--- /dev/null
+++ b/xen/arch/riscv/entry.S
@@ -0,0 +1,94 @@
+#include <asm/asm.h>
+#include <asm/asm-offsets.h>
+#include <asm/processor.h>
+#include <asm/riscv_encoding.h>
+#include <asm/traps.h>
+
+/* WIP: only works while interrupting Xen context */
+ENTRY(handle_trap)
+
+    /* Exceptions from xen */
+save_to_stack:
+        /* Save context to stack */
+        REG_S   sp, (CPU_USER_REGS_SP - CPU_USER_REGS_SIZE) (sp)
+        addi    sp, sp, -CPU_USER_REGS_SIZE
+        REG_S   t0, CPU_USER_REGS_T0(sp)
+
+        /* Save registers */
+        REG_S   ra, CPU_USER_REGS_RA(sp)
+        REG_S   gp, CPU_USER_REGS_GP(sp)
+        REG_S   t1, CPU_USER_REGS_T1(sp)
+        REG_S   t2, CPU_USER_REGS_T2(sp)
+        REG_S   s0, CPU_USER_REGS_S0(sp)
+        REG_S   s1, CPU_USER_REGS_S1(sp)
+        REG_S   a0, CPU_USER_REGS_A0(sp)
+        REG_S   a1, CPU_USER_REGS_A1(sp)
+        REG_S   a2, CPU_USER_REGS_A2(sp)
+        REG_S   a3, CPU_USER_REGS_A3(sp)
+        REG_S   a4, CPU_USER_REGS_A4(sp)
+        REG_S   a5, CPU_USER_REGS_A5(sp)
+        REG_S   a6, CPU_USER_REGS_A6(sp)
+        REG_S   a7, CPU_USER_REGS_A7(sp)
+        REG_S   s2, CPU_USER_REGS_S2(sp)
+        REG_S   s3, CPU_USER_REGS_S3(sp)
+        REG_S   s4, CPU_USER_REGS_S4(sp)
+        REG_S   s5, CPU_USER_REGS_S5(sp)
+        REG_S   s6, CPU_USER_REGS_S6(sp)
+        REG_S   s7, CPU_USER_REGS_S7(sp)
+        REG_S   s8, CPU_USER_REGS_S8(sp)
+        REG_S   s9, CPU_USER_REGS_S9(sp)
+        REG_S   s10,CPU_USER_REGS_S10(sp)
+        REG_S   s11,CPU_USER_REGS_S11(sp)
+        REG_S   t3, CPU_USER_REGS_T3(sp)
+        REG_S   t4, CPU_USER_REGS_T4(sp)
+        REG_S   t5, CPU_USER_REGS_T5(sp)
+        REG_S   t6, CPU_USER_REGS_T6(sp)
+        csrr    t0, CSR_SEPC
+        REG_S   t0, CPU_USER_REGS_SEPC(sp)
+        csrr    t0, CSR_SSTATUS
+        REG_S   t0, CPU_USER_REGS_SSTATUS(sp)
+
+        mv      a0, sp
+        jal     do_trap
+
+restore_registers:
+        /* Restore stack_cpu_regs */
+        REG_L   t0, CPU_USER_REGS_SEPC(sp)
+        csrw    CSR_SEPC, t0
+        REG_L   t0, CPU_USER_REGS_SSTATUS(sp)
+        csrw    CSR_SSTATUS, t0
+
+        REG_L   ra, CPU_USER_REGS_RA(sp)
+        REG_L   gp, CPU_USER_REGS_GP(sp)
+        REG_L   t0, CPU_USER_REGS_T0(sp)
+        REG_L   t1, CPU_USER_REGS_T1(sp)
+        REG_L   t2, CPU_USER_REGS_T2(sp)
+        REG_L   s0, CPU_USER_REGS_S0(sp)
+        REG_L   s1, CPU_USER_REGS_S1(sp)
+        REG_L   a0, CPU_USER_REGS_A0(sp)
+        REG_L   a1, CPU_USER_REGS_A1(sp)
+        REG_L   a2, CPU_USER_REGS_A2(sp)
+        REG_L   a3, CPU_USER_REGS_A3(sp)
+        REG_L   a4, CPU_USER_REGS_A4(sp)
+        REG_L   a5, CPU_USER_REGS_A5(sp)
+        REG_L   a6, CPU_USER_REGS_A6(sp)
+        REG_L   a7, CPU_USER_REGS_A7(sp)
+        REG_L   s2, CPU_USER_REGS_S2(sp)
+        REG_L   s3, CPU_USER_REGS_S3(sp)
+        REG_L   s4, CPU_USER_REGS_S4(sp)
+        REG_L   s5, CPU_USER_REGS_S5(sp)
+        REG_L   s6, CPU_USER_REGS_S6(sp)
+        REG_L   s7, CPU_USER_REGS_S7(sp)
+        REG_L   s8, CPU_USER_REGS_S8(sp)
+        REG_L   s9, CPU_USER_REGS_S9(sp)
+        REG_L   s10, CPU_USER_REGS_S10(sp)
+        REG_L   s11, CPU_USER_REGS_S11(sp)
+        REG_L   t3, CPU_USER_REGS_T3(sp)
+        REG_L   t4, CPU_USER_REGS_T4(sp)
+        REG_L   t5, CPU_USER_REGS_T5(sp)
+        REG_L   t6, CPU_USER_REGS_T6(sp)
+
+        /* Restore sp */
+        REG_L   sp, CPU_USER_REGS_SP(sp)
+
+        sret
diff --git a/xen/arch/riscv/include/asm/traps.h b/xen/arch/riscv/include/asm/traps.h
new file mode 100644
index 0000000000..f3fb6b25d1
--- /dev/null
+++ b/xen/arch/riscv/include/asm/traps.h
@@ -0,0 +1,13 @@
+#ifndef __ASM_TRAPS_H__
+#define __ASM_TRAPS_H__
+
+#include <asm/processor.h>
+
+#ifndef __ASSEMBLY__
+
+void do_trap(struct cpu_user_regs *cpu_regs);
+void handle_trap(void);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_TRAPS_H__ */
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
new file mode 100644
index 0000000000..ccd3593f5a
--- /dev/null
+++ b/xen/arch/riscv/traps.c
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2023 Vates
+ *
+ * RISC-V Trap handlers
+ */
+#include <asm/processor.h>
+#include <asm/traps.h>
+
+void do_trap(struct cpu_user_regs *cpu_regs)
+{
+    die();
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:55 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 09/14] xen/riscv: introduce decode_cause() stuff
Date: Fri, 27 Jan 2023 15:59:14 +0200
Message-Id: <5deb64dcb85d5ddef33c9a1adc9c50ddaadc5d5a.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the helpers needed to decode the cause of an
exception.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - Make decode_trap_cause() more optimization friendly.
  - Merge the patch which introduces do_unexpected_trap() into this one.
---
 xen/arch/riscv/traps.c | 85 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 84 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index ccd3593f5a..f2a1e1ffcf 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -4,10 +4,93 @@
  *
  * RISC-V Trap handlers
  */
+#include <asm/csr.h>
+#include <asm/early_printk.h>
 #include <asm/processor.h>
 #include <asm/traps.h>
+#include <xen/errno.h>
+#include <xen/lib.h>
 
-void do_trap(struct cpu_user_regs *cpu_regs)
+static const char *decode_trap_cause(unsigned long cause)
+{
+    static const char *const trap_causes[] = {
+        [CAUSE_MISALIGNED_FETCH] = "Instruction Address Misaligned",
+        [CAUSE_FETCH_ACCESS] = "Instruction Access Fault",
+        [CAUSE_ILLEGAL_INSTRUCTION] = "Illegal Instruction",
+        [CAUSE_BREAKPOINT] = "Breakpoint",
+        [CAUSE_MISALIGNED_LOAD] = "Load Address Misaligned",
+        [CAUSE_LOAD_ACCESS] = "Load Access Fault",
+        [CAUSE_MISALIGNED_STORE] = "Store/AMO Address Misaligned",
+        [CAUSE_STORE_ACCESS] = "Store/AMO Access Fault",
+        [CAUSE_USER_ECALL] = "Environment Call from U-Mode",
+        [CAUSE_SUPERVISOR_ECALL] = "Environment Call from S-Mode",
+        [CAUSE_MACHINE_ECALL] = "Environment Call from M-Mode",
+        [CAUSE_FETCH_PAGE_FAULT] = "Instruction Page Fault",
+        [CAUSE_LOAD_PAGE_FAULT] = "Load Page Fault",
+        [CAUSE_STORE_PAGE_FAULT] = "Store/AMO Page Fault",
+        [CAUSE_FETCH_GUEST_PAGE_FAULT] = "Instruction Guest Page Fault",
+        [CAUSE_LOAD_GUEST_PAGE_FAULT] = "Load Guest Page Fault",
+        [CAUSE_VIRTUAL_INST_FAULT] = "Virtualized Instruction Fault",
+        [CAUSE_STORE_GUEST_PAGE_FAULT] = "Guest Store/AMO Page Fault",
+    };
+
+    if ( cause < ARRAY_SIZE(trap_causes) && trap_causes[cause] )
+        return trap_causes[cause];
+    return "UNKNOWN";
+}
+
+const char *decode_reserved_interrupt_cause(unsigned long irq_cause)
+{
+    switch ( irq_cause )
+    {
+    case IRQ_M_SOFT:
+        return "M-mode Software Interrupt";
+    case IRQ_M_TIMER:
+        return "M-mode TIMER Interrupt";
+    case IRQ_M_EXT:
+        return "M-mode External Interrupt";
+    default:
+        return "UNKNOWN IRQ type";
+    }
+}
+
+const char *decode_interrupt_cause(unsigned long cause)
+{
+    unsigned long irq_cause = cause & ~CAUSE_IRQ_FLAG;
+
+    switch ( irq_cause )
+    {
+    case IRQ_S_SOFT:
+        return "Supervisor Software Interrupt";
+    case IRQ_S_TIMER:
+        return "Supervisor Timer Interrupt";
+    case IRQ_S_EXT:
+        return "Supervisor External Interrupt";
+    default:
+        return decode_reserved_interrupt_cause(irq_cause);
+    }
+}
+
+const char *decode_cause(unsigned long cause)
+{
+    if ( cause & CAUSE_IRQ_FLAG )
+        return decode_interrupt_cause(cause);
+
+    return decode_trap_cause(cause);
+}
+
+static void do_unexpected_trap(const struct cpu_user_regs *regs)
 {
+    unsigned long cause = csr_read(CSR_SCAUSE);
+
+    early_printk("Unhandled exception: ");
+    early_printk(decode_cause(cause));
+    early_printk("\n");
+
     die();
 }
+
+void do_trap(struct cpu_user_regs *cpu_regs)
+{
+    do_unexpected_trap(cpu_regs);
+}
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:57 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 10/14] xen/riscv: mask all interrupts
Date: Fri, 27 Jan 2023 15:59:15 +0200
Message-Id: <5ae68ebe2bfb4745c7ecbcac879aebe44a73aeac.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V2:
 - Add Reviewed-by to the commit message
---
 xen/arch/riscv/riscv64/head.S | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index d444dd8aad..ffd95f9f89 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,6 +1,11 @@
+#include <asm/riscv_encoding.h>
+
         .section .text.header, "ax", %progbits
 
 ENTRY(start)
+        /* Mask all interrupts */
+        csrw    CSR_SIE, zero
+
         la      sp, cpu0_boot_stack
         li      t0, STACK_SIZE
         add     sp, sp, t0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:59 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 11/14] xen/riscv: introduce trap_init()
Date: Fri, 27 Jan 2023 15:59:16 +0200
Message-Id: <1081e7e12f50227a4e15171129a468420b613273.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V2:
  - Rename setup_trap_handler() to trap_init().
  - Add Reviewed-by to the commit message.
---
 xen/arch/riscv/include/asm/traps.h | 1 +
 xen/arch/riscv/setup.c             | 4 ++++
 xen/arch/riscv/traps.c             | 7 +++++++
 3 files changed, 12 insertions(+)

diff --git a/xen/arch/riscv/include/asm/traps.h b/xen/arch/riscv/include/asm/traps.h
index f3fb6b25d1..f1879294ef 100644
--- a/xen/arch/riscv/include/asm/traps.h
+++ b/xen/arch/riscv/include/asm/traps.h
@@ -7,6 +7,7 @@
 
 void do_trap(struct cpu_user_regs *cpu_regs);
 void handle_trap(void);
+void trap_init(void);
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index d09ffe1454..c8513ca4f8 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,7 +1,9 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/csr.h>
 #include <asm/early_printk.h>
+#include <asm/traps.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -11,6 +13,8 @@ void __init noreturn start_xen(void)
 {
     early_printk("Hello from C env\n");
 
+    trap_init();
+
     for ( ;; )
         asm volatile ("wfi");
 
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index f2a1e1ffcf..31ed63e3c1 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -11,6 +11,13 @@
 #include <xen/errno.h>
 #include <xen/lib.h>
 
+void trap_init(void)
+{
+    unsigned long addr = (unsigned long)&handle_trap;
+
+    csr_write(CSR_STVEC, addr);
+}
+
 static const char *decode_trap_cause(unsigned long cause)
 {
     static const char *const trap_causes[] = {
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 13:59:59 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 12/14] xen/riscv: introduce an implementation of macros from <asm/bug.h>
Date: Fri, 27 Jan 2023 15:59:17 +0200
Message-Id: <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the macros BUG(), WARN(), run_in_exception_handler(),
and assert_failed().

The implementation uses the "ebreak" instruction in combination with
different bug frame tables (one per type) which contain useful
information.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes:
  - Remove __ in define namings
  - Update run_in_exception_handler() with
    register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
  - Remove bug_instr_t type and change its usage to uint32_t
---
 xen/arch/riscv/include/asm/bug.h | 118 ++++++++++++++++++++++++++++
 xen/arch/riscv/traps.c           | 128 +++++++++++++++++++++++++++++++
 xen/arch/riscv/xen.lds.S         |  10 +++
 3 files changed, 256 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/bug.h

diff --git a/xen/arch/riscv/include/asm/bug.h b/xen/arch/riscv/include/asm/bug.h
new file mode 100644
index 0000000000..4b15d8eba6
--- /dev/null
+++ b/xen/arch/riscv/include/asm/bug.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2021-2023 Vates
+ *
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#include <xen/stringify.h>
+#include <xen/types.h>
+
+#ifndef __ASSEMBLY__
+
+struct bug_frame {
+    signed int loc_disp;    /* Relative address of the bug location */
+    signed int file_disp;   /* Relative address of the filename */
+    signed int msg_disp;    /* Relative address of the predicate (for ASSERT) */
+    uint16_t line;          /* Line number */
+    uint32_t pad0:16;       /* Padding for 8-byte alignment */
+};
+
+#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
+#define bug_file(b) ((const void *)(b) + (b)->file_disp)
+#define bug_line(b) ((b)->line)
+#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
+
+#define BUGFRAME_run_fn 0
+#define BUGFRAME_warn   1
+#define BUGFRAME_bug    2
+#define BUGFRAME_assert 3
+
+#define BUGFRAME_NR     4
+
+#define INSN_LENGTH_MASK        _UL(0x3)
+#define INSN_LENGTH_32          _UL(0x3)
+
+#define BUG_INSN_32             _UL(0x00100073) /* ebreak */
+#define BUG_INSN_16             _UL(0x9002) /* c.ebreak */
+#define COMPRESSED_INSN_MASK    _UL(0xffff)
+
+#define GET_INSN_LENGTH(insn)                               \
+({                                                          \
+    unsigned long len;                                      \
+    len = (((insn) & INSN_LENGTH_MASK) == INSN_LENGTH_32) ? \
+        4UL : 2UL;                                          \
+    len;                                                    \
+})
+
+/* These are defined by the architecture */
+int is_valid_bugaddr(uint32_t insn);
+
+#define BUG_FN_REG t0
+
+/* Many versions of GCC don't support the asm %c parameter which would
+ * be preferable to this unpleasantness. We use mergeable string
+ * sections to avoid multiple copies of the string appearing in the
+ * Xen image. BUGFRAME_run_fn needs to be handled separately.
+ */
+#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
+    asm ("1:ebreak\n"                                                       \
+         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
+         "2:\t.asciz " __stringify(file) "\n"                               \
+         "3:\n"                                                             \
+         ".if " #has_msg "\n"                                               \
+         "\t.asciz " #msg "\n"                                              \
+         ".endif\n"                                                         \
+         ".popsection\n"                                                    \
+         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
+         "4:\n"                                                             \
+         ".p2align 2\n"                                                     \
+         ".long (1b - 4b)\n"                                                \
+         ".long (2b - 4b)\n"                                                \
+         ".long (3b - 4b)\n"                                                \
+         ".hword " __stringify(line) ", 0\n"                                \
+         ".popsection");                                                    \
+} while (0)
+
+/*
+ * GCC will not allow the use of "i" when PIE is enabled (Xen doesn't set
+ * the flag but instead relies on the default value from the compiler). So
+ * the easiest way to implement run_in_exception_handler() is to pass the
+ * function to be called in a fixed register.
+ */
+#define run_in_exception_handler(fn) do {                                   \
+    register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);                 \
+    asm ("1:ebreak\n"                                                       \
+         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","       \
+         "             \"a\", %%progbits\n"                                 \
+         "2:\n"                                                             \
+         ".p2align 2\n"                                                     \
+         ".long (1b - 2b)\n"                                                \
+         ".long 0, 0, 0\n"                                                  \
+         ".popsection" :: "r" (fn_) );                                      \
+} while (0)
+
+#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
+
+#define BUG() do {                                              \
+    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
+    unreachable();                                              \
+} while (0)
+
+#define assert_failed(msg) do {                                 \
+    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
+    unreachable();                                              \
+} while (0)
+
+extern const struct bug_frame __start_bug_frames[],
+                              __stop_bug_frames_0[],
+                              __stop_bug_frames_1[],
+                              __stop_bug_frames_2[],
+                              __stop_bug_frames_3[];
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index 31ed63e3c1..0afb8e4e42 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -4,6 +4,7 @@
  *
  * RISC-V Trap handlers
  */
+#include <asm/bug.h>
 #include <asm/csr.h>
 #include <asm/early_printk.h>
 #include <asm/processor.h>
@@ -97,7 +98,134 @@ static void do_unexpected_trap(const struct cpu_user_regs *regs)
     die();
 }
 
+void show_execution_state(const struct cpu_user_regs *regs)
+{
+    early_printk("implement show_execution_state(regs)\n");
+}
+
+int do_bug_frame(struct cpu_user_regs *regs, vaddr_t pc)
+{
+    struct bug_frame *start, *end;
+    struct bug_frame *bug = NULL;
+    unsigned int id = 0;
+    const char *filename, *predicate;
+    int lineno;
+
+    unsigned long bug_frames[] = {
+        (unsigned long)&__start_bug_frames[0],
+        (unsigned long)&__stop_bug_frames_0[0],
+        (unsigned long)&__stop_bug_frames_1[0],
+        (unsigned long)&__stop_bug_frames_2[0],
+        (unsigned long)&__stop_bug_frames_3[0],
+    };
+
+    for ( id = 0; id < BUGFRAME_NR; id++ )
+    {
+        start = (struct bug_frame *)bug_frames[id];
+        end = (struct bug_frame *)bug_frames[id + 1];
+
+        while ( start != end )
+        {
+            if ( (vaddr_t)bug_loc(start) == pc )
+            {
+                bug = start;
+                goto found;
+            }
+
+            start++;
+        }
+    }
+
+ found:
+    if ( bug == NULL )
+        return -ENOENT;
+
+    if ( id == BUGFRAME_run_fn )
+    {
+        void (*fn)(const struct cpu_user_regs *) = (void *)regs->BUG_FN_REG;
+
+        fn(regs);
+
+        goto end;
+    }
+
+    /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
+    filename = bug_file(bug);
+    lineno = bug_line(bug);
+
+    switch ( id )
+    {
+    case BUGFRAME_warn:
+        /*
+         * TODO: change early_printk's function to early_printk with format
+         *       when s(n)printf() will be added.
+         */
+        early_printk("Xen WARN at ");
+        early_printk(filename);
+        early_printk(":");
+        // early_printk_hnum(lineno);
+
+        show_execution_state(regs);
+
+        goto end;
+
+    case BUGFRAME_bug:
+         /*
+          * TODO: change early_printk's function to early_printk with format
+          *       when s(n)printf() will be added.
+          */
+        early_printk("Xen BUG at ");
+        early_printk(filename);
+        early_printk(":");
+        // early_printk_hnum(lineno);
+
+        show_execution_state(regs);
+        early_printk("change wait_for_interrupt to panic() when common is available\n");
+        die();
+
+    case BUGFRAME_assert:
+        /* ASSERT: decode the predicate string pointer. */
+        predicate = bug_msg(bug);
+
+        /*
+         * TODO: change early_printk's function to early_printk with format
+         *       when s(n)printf() will be added.
+         */
+        early_printk("Assertion \'");
+        early_printk(predicate);
+        early_printk("\' failed at ");
+        early_printk(filename);
+        early_printk(":");
+        // early_printk_hnum(lineno);
+
+        show_execution_state(regs);
+        early_printk("change wait_for_interrupt to panic() when common is available\n");
+        die();
+    }
+
+    return -EINVAL;
+
+ end:
+    regs->sepc += GET_INSN_LENGTH(*(uint32_t *)pc);
+
+    return 0;
+}
+
+int is_valid_bugaddr(uint32_t insn)
+{
+    if ( (insn & INSN_LENGTH_MASK) == INSN_LENGTH_32 )
+        return insn == BUG_INSN_32;
+    else
+        return (insn & COMPRESSED_INSN_MASK) == BUG_INSN_16;
+}
+
 void do_trap(struct cpu_user_regs *cpu_regs)
 {
+    register_t pc = cpu_regs->sepc;
+    uint32_t instr = *(uint32_t *)pc;
+
+    if ( is_valid_bugaddr(instr) && !do_bug_frame(cpu_regs, pc) )
+        return;
+
     do_unexpected_trap(cpu_regs);
 }
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index ca57cce75c..139e2d04cb 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -39,6 +39,16 @@ SECTIONS
     . = ALIGN(PAGE_SIZE);
     .rodata : {
         _srodata = .;          /* Read-only data */
+        /* Bug frames table */
+        __start_bug_frames = .;
+        *(.bug_frames.0)
+        __stop_bug_frames_0 = .;
+        *(.bug_frames.1)
+        __stop_bug_frames_1 = .;
+        *(.bug_frames.2)
+        __stop_bug_frames_2 = .;
+        *(.bug_frames.3)
+        __stop_bug_frames_3 = .;
         *(.rodata)
         *(.rodata.*)
         *(.data.rel.ro)
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:00:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 14:00:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485601.753027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGr-00048k-6h; Fri, 27 Jan 2023 14:00:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485601.753027; Fri, 27 Jan 2023 14:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPGq-00042z-IV; Fri, 27 Jan 2023 14:00:00 +0000
Received: by outflank-mailman (input) for mailman id 485601;
 Fri, 27 Jan 2023 13:59:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGn-0000nM-Ox
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:57 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e1820c02-9e4a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 14:59:56 +0100 (CET)
Received: by mail-wm1-x329.google.com with SMTP id
 l41-20020a05600c1d2900b003daf986faaeso3520823wms.3
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:56 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1820c02-9e4a-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=RDMQ8RB6nJcd+xIpijr05fcryAe7A9XA8ZLTs2OE2HE=;
        b=PugMAfdOadAlmmcj8qeXA+PcxuDOPjDKT5csS7cPXEkUocMxtv3AZsPa6djzMsBpV5
         UqqHUoFGee7QiEP+Vkl8oSJB6Ilm2Kf00SqS4QDU10v76sAlhHHOhxMdcmKr9iCCSJWt
         Lyfb3lYbr6AJlHMPCvjN/K0dsptHNTRFFCM9paNT9M1g1L2Zwc8V35dsVkl4eYAh0KDN
         1o0QfiDzycxjH6DHhlo7Kq+rhxarzXotgt2viBsS6lHYutrTCW5sEXBBoQnLZb6gI7+J
         wNX5HnhRGKJuA4817glxddGNrzXCLEY06muwYTdBIX06GuVozRM9ykVo7SkdJ63xYjQ9
         LhWA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=RDMQ8RB6nJcd+xIpijr05fcryAe7A9XA8ZLTs2OE2HE=;
        b=bn/tiaZEkYKqczsWUoTJM+zFEhDOCevSyjfynf2H8D5GPF/x7vTJgSMU7BVMISSZqO
         yjuIRDtfJAkz9FxlaiJy3ZNLJD71Y7cK+knPiAwhgyfLGmDwXmATIGxK0nrYHqXWGHGN
         nqCfiUuMhM/3vfitblZiDLai01bQJm912U/a3fGOgiGQqGIGxPArA/sK1/zQw+p0mW+g
         LAlOvaD9wSVv8k6znHwljDbq9w6EUo3UY6TuHXai6NxYmHXfe7zBCsNQ/9mdUiEqPCx9
         BYjFHGztkQMoHEap7auspCrFc/igwQAtAJ4GO+Yv/eoaAvfIJooLGy6NWxX5gq3OO4SN
         W9HQ==
X-Gm-Message-State: AFqh2kqVuE+jMqZ163F9Ey+i2uLJt3nO9jsiGSSSS/BW+UhSkbv7xQM0
	up3rX/TwYKj0+Pb9KjlHBIl9K1OZjVU=
X-Google-Smtp-Source: AMrXdXv+Sa7lY2kFw50zQcdLDMmYCqQJzqfDGIf5ZH/cKKOFISRt08tj/w3TtSDErq2tpNVgzBclaQ==
X-Received: by 2002:a7b:c5c4:0:b0:3d7:889:7496 with SMTP id n4-20020a7bc5c4000000b003d708897496mr38257070wmk.17.1674827996126;
        Fri, 27 Jan 2023 05:59:56 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v2 13/14] xen/riscv: test basic handling stuff
Date: Fri, 27 Jan 2023 15:59:18 +0200
Message-Id: <edd820f874e63328203356b3fbd6feacd742e231.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
  - Nothing changed
---
 xen/arch/riscv/setup.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index c8513ca4f8..bcff680fb5 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,6 +1,7 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/bug.h>
 #include <asm/csr.h>
 #include <asm/early_printk.h>
 #include <asm/traps.h>
@@ -9,12 +10,28 @@
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
+static void test_run_in_exception(struct cpu_user_regs *regs)
+{
+    early_printk("If you see this message, ");
+    early_printk("run_in_exception_handler is most likely working\n");
+}
+
+static void test_macros_from_bug_h(void)
+{
+    run_in_exception_handler(test_run_in_exception);
+    WARN();
+    early_printk("If you see this message, ");
+    early_printk("WARN is most likely working\n");
+}
+
 void __init noreturn start_xen(void)
 {
     early_printk("Hello from C env\n");
 
     trap_init();
 
+    test_macros_from_bug_h();
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:05:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 14:05:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485623.753053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPLs-0001JZ-S3; Fri, 27 Jan 2023 14:05:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485623.753053; Fri, 27 Jan 2023 14:05:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPLs-0001JS-OD; Fri, 27 Jan 2023 14:05:12 +0000
Received: by outflank-mailman (input) for mailman id 485623;
 Fri, 27 Jan 2023 14:05:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhsa=5Y=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pLPGp-0000nM-SK
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 13:59:59 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e207bc04-9e4a-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 14:59:57 +0100 (CET)
Received: by mail-wm1-x32f.google.com with SMTP id
 fl11-20020a05600c0b8b00b003daf72fc844so5486905wmb.0
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 05:59:57 -0800 (PST)
Received: from localhost.localdomain
 (lfbn-gre-1-240-53.w90-112.abo.wanadoo.fr. [90.112.199.53])
 by smtp.gmail.com with ESMTPSA id
 d3-20020adfe2c3000000b002bc7fcf08ddsm3971131wrj.103.2023.01.27.05.59.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Jan 2023 05:59:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e207bc04-9e4a-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=OqiiG/Dxt3kswno1Fem+rUYZyblwf9Uw3t4hy1o3WcQ=;
        b=h7ryYdfPsalVmQWY/BhcuEeCHfTyjoSIhX8dEzvpw6QgLyhCudRlHSfaKXubBK+rA+
         eB5LsLM/Sl+4sT8tyazwZAlYL/Xk+RV04zwUAyktBM7a/nqJTqc1QOQy3T+ChB2tvEFP
         DCYQK3yeM7XzbiEITTTiEgnS4fOLi4IUbOglAk0xaLo7fS69favF6rXrBs/FdqqPiiXk
         Qu0MCKJ6gOiTvkOwxy5si5IqHFQInv/evuyveuw2HQuenfSupcyHvq3GiLNberIWc4YD
         MqHxGhBCA5r4hLcNKQkOAeNSPU1w5mVg4zOlEm6NtWS3XdYfG2kKM/R7KWqMynTI4Lie
         dKDA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=OqiiG/Dxt3kswno1Fem+rUYZyblwf9Uw3t4hy1o3WcQ=;
        b=MZWGs/WBpt5QeuunkmUejUdb9Tfph5i1Ghxb5Ack+sYKhEpSL5nTr1jnfpEXCaI1qm
         NFHn5AAEdandMM5xIzs2tQzUfUdIlu6KmIcAm27uwJiNQ/raENv10Su4utpXSrlM/g23
         ngHr02BBgFgrMvpDu1yC7MhblH02pD/y8hCs9DbshsvrgCd6QwDDktn6UZm9Tw87LGZS
         2OVY3owZnJHHCgoCptPYBoYITpMlNrpIbfGCmToU7K3a94tz04RWofFfcJnI9Xy+Kq41
         8KUtzrOkGiv0RKzkFTMu0NzC9dJq+qEcgJSthdAKj0K2a2Umlb1tqcySULpkrQ5efBzV
         4RQw==
X-Gm-Message-State: AFqh2kr2p13ecy71qIRoWBqt8xX23yqzw9lBz1CHCLNf/dL4bpqkSEgE
	voXWNOhyf8bx1IGx1hq0AkuFF8MSb8M=
X-Google-Smtp-Source: AMrXdXsUmCoZys6I0gJyXVumu88pFsz6IizpYHHgKKEW5yJ2MNdxYnxZK6/nbnFYE/tejsYxnBnyOA==
X-Received: by 2002:a05:600c:4f10:b0:3d3:48f4:7a69 with SMTP id l16-20020a05600c4f1000b003d348f47a69mr40997118wmq.17.1674827997031;
        Fri, 27 Jan 2023 05:59:57 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 14/14] automation: add smoke test to verify macros from bug.h
Date: Fri, 27 Jan 2023 15:59:19 +0200
Message-Id: <ed819dc612fcadbd04b4b44b2c0560a77796793a.1674818705.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1674818705.git.oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V2:
 - Leave only the latest "grep ..."
---
 automation/scripts/qemu-smoke-riscv64.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
index e0f06360bc..02fc66be03 100755
--- a/automation/scripts/qemu-smoke-riscv64.sh
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -16,5 +16,5 @@ qemu-system-riscv64 \
     |& tee smoke.serial
 
 set -e
-(grep -q "Hello from C env" smoke.serial) || exit 1
+(grep -q "WARN is most likely working" smoke.serial) || exit 1
 exit 0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:11:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 14:11:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485674.753063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPRP-0002zV-Fy; Fri, 27 Jan 2023 14:10:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485674.753063; Fri, 27 Jan 2023 14:10:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPRP-0002zO-Bj; Fri, 27 Jan 2023 14:10:55 +0000
Received: by outflank-mailman (input) for mailman id 485674;
 Fri, 27 Jan 2023 14:10:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLPRO-0002zI-1N
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 14:10:54 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2064.outbound.protection.outlook.com [40.107.247.64])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 67db86c3-9e4c-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 15:10:51 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8629.eurprd04.prod.outlook.com (2603:10a6:10:2dc::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Fri, 27 Jan
 2023 14:10:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 14:10:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67db86c3-9e4c-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lP2FHtQ+ejRGZFfhRyXDXZVClVIudzpl6viE/RIe45uPy/N/t/uq7eVh3mGY4Apqx9frKeNgi1Apd65hoCngoghO9zqMQ7e/tITRzekPfWBhsQWNWz6UnP6et837DGfq2Jy6XpTpFJ7S+6LL9GGpXOYdH32xF9fvph91Cz8imm/nfzO+TrjGG+7fZhCctgnn4ydns/KICBIo6/3mtwk0muTBVVw9AgbCFMfqRoiuzjbZDKZQ6nQecck7EC6FpXJSC7AcmZKEj2QgOTCTKnQ3YqiR6Q1d+UO/ZMwowmjrqAEc4IfFp5/0Arn+2BEbs8SXT89/ug2YYrClNhAp0drJ5Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hXxGyIXVzWT9P3rWzXFHpgvtXdPourokcZtthskjvuE=;
 b=gBGAefrSKkamu0t8iOpVjnIpspBOHaJI0Vxc1H50Nu/ZfrKtz6oBc9UzV9RgE3cqgKkLI5VJVo2ffIpf2q6y22zP12cs/KMZ/Qu3gWalgAxxVB3P64YVqpsUstog42rjTe76wbWUefTvHG40jppqIlOIr7TTwOjvKKLU0hNDUlal3kmfxDmw1axvLUxbK7rcHpTwvd+8LuczxLPyMiKyOEbJJa4yxPEDI+bztTXW8NDh4WE18m0bKWP8MJ6Ya6iNx1owHttGdE3ogSIbC6SQmApV5ByV/kLXWiEd20LMYS2+jtmr6ApJ5hgyqok59OxCmvmzDcOMfhDmfcUC7OYYnA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hXxGyIXVzWT9P3rWzXFHpgvtXdPourokcZtthskjvuE=;
 b=ocpT0dCViRdX+KwLLhPpMfPH15eq8BEUJwTufCWSK9GUb4nC2JKs/oW/rWwGIKDhAbwqSh+PCWQVLZuNgQPA2xaDENqXFLWeE2+i9ociotp5xTPiRFPNHthMmFmT7i2qVfKMmI+Psq/M6Jv7Y4f0hrhTtu31u3a3bzb7F0gOmWyUeJcTUMC7wCPYtN3cO9EfRAH6S8tI59yEHGbWvbNSwKHKBFwlOlY2DgDraSur/63zfsG9rmzn0oLN8xrP1UFpEVIS/M1cBfrhOEuflUezmnaZ9BVfATHSyEZIUPokj31l164xoLWMrO2xd0nfceITCpW3tLJyuMG5a03uX9y9eQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7dbda8fa-d4f3-5101-2e8f-96b4b2ff790e@suse.com>
Date: Fri, 27 Jan 2023 15:10:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 04/14] xen/riscv: add <asm/csr.h> header
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <b26d981f189adad8af4560fcc10360da02df97a9.1674818705.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b26d981f189adad8af4560fcc10360da02df97a9.1674818705.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0023.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8629:EE_
X-MS-Office365-Filtering-Correlation-Id: 1730a452-c664-45c3-150a-08db00704ae3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IsVB/DwEUG3eSyqBTXFvsYBNBP5oBGJMLSBjjcnFBlZbHdDVbUxPnSvJ8RNkqT2oEPKJF1eWJ3OgUCp7I2yKaHsTOEQQHwUFK1RSMTD1msQzyMV9y2Nwd4jOUkpjoR5dRUTHJ0fyay2qDjKTGRr1I9ANW3r1NxTst6FMsyu3fHOfl0JVX7HFHI4oYBGe4yikxJclcu7ftbyZkILAxl7YiZg20zENzTOVXOgRmPj82FkRHfTnBHAO24jOnPP7B/o6uQfUKsbr65Rego3nk+Sy4M5a1d1y8smwDLntvt4GBtkoa+VfZ0W1p6I0ybarqbwSbKKF69EyPBUSbj0Jvlh0c5WvyzAdObsdS8qKEApoS6WxZZQilhYxzs4XLE/MmnZ8hury7B6Re5yL6v7MaDtj5/lCl9xDQUVNKGD4s7bmAK2ttUwDNJYZWXbK9FlixhpcL6z83pe3FTNIYHc68xOakpDkDNiMYJMH3U6vJeBLVku9Y0N4xbSxR5TNMWQBXT37+bLNputhyv58LgXvP+PfBj5N3RYsBPt9nxqszNfcH62/ptp7aUVNN5UKBZE3spt/y8ga+ZHbiKAJDX8Csscj6kLHc30oYK0eXlNhii46bOHRbQO0PW2TOg3S0x6YPM8HeNnkMd7Q+vOZ1qQk4kSZf/u+JIA3K67iPNo/O5yixLf0FbdnoTqv8dzfIiohEiWE2wyVdRNwtqLkboVTfSblRKvVXRXJAJYdhwCX/22q/hZpGLOqmjsJAd9PGy88Zc948qi+PgqDoRCPLjoVN6ywYw==
X-Forefront-Antispam-Report:

On 27.01.2023 14:59, Oleksii Kurochko wrote:
> The following changes were made in comparison with <asm/csr.h> from
> Linux:
>   * remove all defines as they are defined in riscv_encoding.h
>   * leave only csr_* macros
> 
> Origin: https://github.com/torvalds/linux.git 2475bf0250de

I'm sorry to be picky, but I think such references should be to the canonical
tree, which here, as I understand it, is the one at git.kernel.org.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:15:10 2023
Message-ID: <f5cd1bfb116bfcc86fc2848df7eead05cd1a24c0.camel@gmail.com>
Subject: Re: [PATCH v7 1/2] xen/riscv: introduce early_printk basic stuff
From: Oleksii <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org, Bob Eshleman <bobbyeshleman@gmail.com>, 
	Alistair Francis <alistair.francis@wdc.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>,  Stefano Stabellini <sstabellini@kernel.org>, Gianluca
 Guida <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,  Connor Davis
 <connojdavis@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>
Date: Fri, 27 Jan 2023 16:15:00 +0200
In-Reply-To: <06c2c36bd68b2534c757dc4087476e855253680a.1674819203.git.oleksii.kurochko@gmail.com>
References: <cover.1674819203.git.oleksii.kurochko@gmail.com>
	 <06c2c36bd68b2534c757dc4087476e855253680a.1674819203.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

Hi Alistair, Bobby and community,

I would like to ask for your help with the following check:
+/*
+ * early_*() can be called from head.S with the MMU off.
+ *
+ * The following requirements should be honoured for early_*() to
+ * work correctly:
+ *    It should use PC-relative addressing for accessing symbols.
+ *    To achieve that, GCC's cmodel=medany should be used.
+ */
+#ifndef __riscv_cmodel_medany
+#error "early_*() can be called from head.S with MMU-off"
+#endif

Please take a look at the following messages and help me decide whether
the check mentioned above should be in early_printk.c or not:
[1] https://lore.kernel.org/xen-devel/599792fa-b08c-0b1e-10c1-0451519d9e4a@xen.org/
[2] https://lore.kernel.org/xen-devel/0ec33871-96fa-bd9f-eb1b-eb37d3d7c982@xen.org/

Thanks in advance.

~ Oleksii

On Fri, 2023-01-27 at 13:39 +0200, Oleksii Kurochko wrote:
> Because printk() relies on a serial driver (like the ns16550 driver)
> and drivers require working virtual memory (ioremap()), there is no
> print functionality early in Xen boot.
>
> The patch introduces the basic stuff of early_printk functionality,
> which will be enough to print "hello from C environment".
>
> Originally early_printk.{c,h} was introduced by Bobby Eshleman
> (https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
> but some functionality was changed:
> early_printk() was changed in comparison with the original, as common
> code isn't being built yet, so there is no vscnprintf().
>
> This commit adds an early printk implementation built on the putc SBI
> call.
>
> As sbi_console_putchar() is already planned for deprecation, it is
> used only temporarily and will be removed or reworked once a real
> UART driver is ready.
>
> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> ---
> Changes in V7:
>     - Nothing was changed
> ---
> Changes in V6:
>     - Remove __riscv_cmodel_medany check from early_printk.c
> ---
> Changes in V5:
>     - Code style fixes
>     - Change the error message of #error in case __riscv_cmodel_medany
>       isn't defined
> ---
> Changes in V4:
>     - Remove "depends on RISCV*" from Kconfig.debug as it is located in
>       an arch-specific folder, so by default RISCV configs should be
>       enabled.
>     - Add "ifdef __riscv_cmodel_medany" to be sure that PC-relative
>       addressing is used, as early_*() functions can be called from
>       head.S with the MMU off and before relocation (if that would be
>       needed at all for RISC-V)
>     - fix code style
> ---
> Changes in V3:
>     - reorder headers in alphabetical order
>     - merge changes related to the start_xen() function from "[PATCH v2
>       7/8] xen/riscv: print hello message from C env" to this patch
>     - remove unneeded parentheses in the definition of STACK_SIZE
> ---
> Changes in V2:
>     - introduce STACK_SIZE define.
>     - use consistent padding between instruction mnemonic and operand(s)
> ---
>  xen/arch/riscv/Kconfig.debug              |  5 ++++
>  xen/arch/riscv/Makefile                   |  1 +
>  xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
>  xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
>  xen/arch/riscv/setup.c                    |  4 +++
>  5 files changed, 55 insertions(+)
>  create mode 100644 xen/arch/riscv/early_printk.c
>  create mode 100644 xen/arch/riscv/include/asm/early_printk.h
>
> diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
> index e69de29bb2..608c9ff832 100644
> --- a/xen/arch/riscv/Kconfig.debug
> +++ b/xen/arch/riscv/Kconfig.debug
> @@ -0,0 +1,5 @@
> +config EARLY_PRINTK
> +    bool "Enable early printk"
> +    default DEBUG
> +    help
> +      Enables early printk debug messages
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index fd916e1004..1a4f1a6015 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,3 +1,4 @@
> +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += sbi.o
>  obj-y += setup.o
> diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
> new file mode 100644
> index 0000000000..b66a11f1bc
> --- /dev/null
> +++ b/xen/arch/riscv/early_printk.c
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * RISC-V early printk using SBI
> + *
> + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> + */
> +#include <asm/early_printk.h>
> +#include <asm/sbi.h>
> +
> +/*
> + * TODO:
> + *   sbi_console_putchar is already planned for deprecation
> + *   so it should be reworked to use UART directly.
> +*/
> +void early_puts(const char *s, size_t nr)
> +{
> +    while ( nr-- > 0 )
> +    {
> +        if ( *s == '\n' )
> +            sbi_console_putchar('\r');
> +        sbi_console_putchar(*s);
> +        s++;
> +    }
> +}
> +
> +void early_printk(const char *str)
> +{
> +    while ( *str )
> +    {
> +        early_puts(str, 1);
> +        str++;
> +    }
> +}
> diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
> new file mode 100644
> index 0000000000..05106e160d
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/early_printk.h
> @@ -0,0 +1,12 @@
> +#ifndef __EARLY_PRINTK_H__
> +#define __EARLY_PRINTK_H__
> +
> +#include <xen/early_printk.h>
> +
> +#ifdef CONFIG_EARLY_PRINTK
> +void early_printk(const char *str);
> +#else
> +static inline void early_printk(const char *s) {};
> +#endif
> +
> +#endif /* __EARLY_PRINTK_H__ */
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index 13e24e2fe1..d09ffe1454 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -1,12 +1,16 @@
>  #include <xen/compile.h>
>  #include <xen/init.h>
>  
> +#include <asm/early_printk.h>
> +
>  /* Xen stack for bringing up the first CPU. */
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
>      __aligned(STACK_SIZE);
>  
>  void __init noreturn start_xen(void)
>  {
> +    early_printk("Hello from C env\n");
> +
>      for ( ;; )
>          asm volatile ("wfi");
>  



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:24:48 2023
Message-ID: <75328420-0fbd-92ae-40c7-9fee1c31c907@suse.com>
Date: Fri, 27 Jan 2023 15:24:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 27.01.2023 14:59, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/processor.h
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: MIT */
> +/******************************************************************************
> + *
> + * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
> + * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@gmail.com>
> + * Copyright 2023 (C) Vates
> + *
> + */
> +
> +#ifndef _ASM_RISCV_PROCESSOR_H
> +#define _ASM_RISCV_PROCESSOR_H
> +
> +#ifndef __ASSEMBLY__
> +
> +/* On stack VCPU state */
> +struct cpu_user_regs
> +{
> +    unsigned long zero;
> +    unsigned long ra;
> +    unsigned long sp;
> +    unsigned long gp;
> +    unsigned long tp;
> +    unsigned long t0;
> +    unsigned long t1;
> +    unsigned long t2;
> +    unsigned long s0;
> +    unsigned long s1;
> +    unsigned long a0;
> +    unsigned long a1;
> +    unsigned long a2;
> +    unsigned long a3;
> +    unsigned long a4;
> +    unsigned long a5;
> +    unsigned long a6;
> +    unsigned long a7;
> +    unsigned long s2;
> +    unsigned long s3;
> +    unsigned long s4;
> +    unsigned long s5;
> +    unsigned long s6;
> +    unsigned long s7;
> +    unsigned long s8;
> +    unsigned long s9;
> +    unsigned long s10;
> +    unsigned long s11;
> +    unsigned long t3;
> +    unsigned long t4;
> +    unsigned long t5;
> +    unsigned long t6;
> +    unsigned long sepc;
> +    unsigned long sstatus;
> +    /* pointer to previous stack_cpu_regs */
> +    unsigned long pregs;
> +};

Just to restate what I said on the earlier version: We have a struct of
this name in the public interface for x86. Besides the confusion about
re-using the name for something private, I'd still like to understand
what the public interface plans are. This is specifically because I
think it would be better to re-use suitable public interface structs
internally where possible. But that of course requires spelling out
such parts of the public interface first.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:34:57 2023
Message-ID: <0af565ba-646f-1540-0b0c-6a14e73ab5fc@suse.com>
Date: Fri, 27 Jan 2023 15:34:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0052.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8311:EE_
X-MS-Office365-Filtering-Correlation-Id: 23bbeb45-d255-4005-e1b2-08db00739437
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 23bbeb45-d255-4005-e1b2-08db00739437
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 14:34:21.2392
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: I6l6OioiwMYLJkPUZlMNJcaeBF03yt96GVtQukXYOKeQ8v+F9ckScPziUtL6UWlGv9APk93xowALA+L5/nu4vA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8311

On 27.01.2023 14:59, Oleksii Kurochko wrote:
> The patch introduces macros: BUG(), WARN(), run_in_exception(),
> assert_failed.
> 
> The implementation uses the "ebreak" instruction in combination with
> different bug frame tables (one for each type) which contain useful
> information.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes:
>   - Remove __ in define namings
>   - Update run_in_exception_handler() with
>     register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
>   - Remove bug_instr_t type and change it's usage to uint32_t

But that's not correct - as said before, you can't assume you can access
32 bits; there may be only a 16-bit insn at the end of a page, with nothing
mapped at the VA of the subsequent page. Even more ...

> + end:
> +    regs->sepc += GET_INSN_LENGTH(*(uint32_t *)pc);

... to obtain the insn length you don't even need to read 32 bits.
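The point can be sketched in C: read the low halfword first, and touch the upper halfword only when the length bits say the insn is 32 bits wide. This is a hypothetical illustration, not code from the patch; the macro names mirror the patch context, and the encodings used in the comments (0x9002 for c.ebreak, 0x00100073 for ebreak) are the standard RISC-V values.

```c
#include <assert.h>
#include <stdint.h>

#define INSN_LENGTH_MASK  0x3u
#define INSN_LENGTH_32    0x3u

/* Hypothetical sketch: fetch an instruction without reading past a
 * possible page boundary. A compressed (16-bit) insn may be the last
 * thing mapped on its page, so only the first halfword is read
 * unconditionally. */
static uint32_t read_insn(const uint16_t *pc)
{
    uint16_t low = pc[0];

    if ( (low & INSN_LENGTH_MASK) != INSN_LENGTH_32 )
        return low;                         /* 16-bit insn: done */

    /* Length bits say 32-bit: only now read the upper halfword. */
    return low | ((uint32_t)pc[1] << 16);
}
```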

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:38:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 14:38:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485707.753106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPsA-00081C-EX; Fri, 27 Jan 2023 14:38:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485707.753106; Fri, 27 Jan 2023 14:38:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPsA-000815-BK; Fri, 27 Jan 2023 14:38:34 +0000
Received: by outflank-mailman (input) for mailman id 485707;
 Fri, 27 Jan 2023 14:38:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N8iV=5Y=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pLPs8-00080z-Tc
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 14:38:32 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2062.outbound.protection.outlook.com [40.107.20.62])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 44f3c0eb-9e50-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 15:38:31 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7718.eurprd04.prod.outlook.com (2603:10a6:20b:29b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 14:38:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.022; Fri, 27 Jan 2023
 14:38:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44f3c0eb-9e50-11ed-a5d9-ddcf98b90cbd
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wt0I0hi+jjNKJMCjnzU6WZUofTVJTzzLxEopDiKZrrM2DsVUqID+pojOoDQJM0juCRrlrz7Cd9PDaTUYByIuOCl0PBGeGHNR0qhmoK4/j/XC4o6LYWQfbxekrzxe6pOLvpa1P+Gxu7k47Y5/b1NJp+TWU7DIj+d51fvN7PPS1bLC9+jArL4BlDK3fp9KlUWnMgMVUSkN4vVw4pmGwc90lyq25o5qUJZkvTXfWxyv7JGO0bChmfP8EGQAhM/h3eRNInYDPfN6yjrsTGfU9yEBX/PlCRLGZee98cm/GaWjGdqrRshxn2+Y2TAfV4Y+L9vltDV74HLelAG3bPE0c2O5yw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QvJmfmMFfgX9QdUHG4DTivp1ERMM0wHykiRs3uyy0gc=;
 b=EZGUJTrdisPLHg1xXNgL8yYP3TY3oaE2xJBVbIwMKbPxE7tXVmLP1hjgg0eJdnz7Z7o+udAZCyZ2nRgvkpwV2fN+KnTjL0uXxZJDysvLWfgMPiRrwvq4tIU1IIPDXss+aZtwOvsm37E6WtDM4SdaJOXbnqVjuQAavAKXSiCiTaj/a2ffk5ut0eRYa5oHmSOE1S7A3TG+RasK/XP9roQpgp+UXIldueGPbhIBhXlAzYkL6lfFF3kr4Q4UFLtbKoKThLWM8+FmKhiC6RapZaHzIQJ7U77rx/vfWg3W17u644scFr9Ier5X3DQsnSygzPxBrF9UvqfqJqdpI/ifAYkKHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QvJmfmMFfgX9QdUHG4DTivp1ERMM0wHykiRs3uyy0gc=;
 b=Io6yVOqrGYajrJOOhSsr4FSunkEwoueu3T5Cc/hU0YRZi72RZw8FQYVk6zCr0Mme3r/RP/bqELIrd0KAPSwX9xpZ/IElx9bU+bdMrx6YeFkIuwR05tjNt8qxcvlKuw3M/+c/kU3V8A9sDQctvJV5yj+sSozrG7Z5nY1J00smm7SXuH9zmzcwCXqW7EXCVoNRXDglYfiNNDWLNPoNobyF3JMS1KQs5atvtOw9k0zgPsiNVHUdLlyv8mfgkY9dDoaW0iiKmDAJ88Hy5W2Dt3ZTRb3CcO+yW3YuCAn6/3rdRumNejdsYXUIAVftDOsOqLV+T1rdj9KDx4WvMRM+EIjHkQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <13f10201-6d81-3ed3-96bd-15996d417855@suse.com>
Date: Fri, 27 Jan 2023 15:38:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0138.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7718:EE_
X-MS-Office365-Filtering-Correlation-Id: 59fb678b-4c43-4dc8-357c-08db0074285d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 59fb678b-4c43-4dc8-357c-08db0074285d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 14:38:29.8023
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WAmAtHvm9Kx5unO5lMN7LSNrFXv8ynkcL0F3ODD2S5bZGSkKcafuIAkGUmYmjQxbVpVp82Ikf/c1iVMnCrcEtw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7718

On 27.01.2023 14:59, Oleksii Kurochko wrote:
> +int is_valid_bugaddr(uint32_t insn)
> +{
> +    if ((insn & INSN_LENGTH_MASK) == INSN_LENGTH_32)
> +        return (insn == BUG_INSN_32);
> +    else
> +        return ((insn & COMPRESSED_INSN_MASK) == BUG_INSN_16);
> +}
> +
>  void do_trap(struct cpu_user_regs *cpu_regs)
>  {
> +    register_t pc = cpu_regs->sepc;
> +    uint32_t instr = *(uint32_t *)pc;
> +
> +    if (is_valid_bugaddr(instr))
> +        if (!do_bug_frame(cpu_regs, pc)) return;
> +
>      do_unexpected_trap(cpu_regs);
>  }

One more remark, style related: Even if some of the additions you're making
are temporary, it'll be better if you have everything in Xen style. That'll
reduce the risk of someone copying bad style from adjacent code, and it'll
also avoid people like me thinking whether to comment and request an
adjustment, or whether to assume that it's temporary code and will get
changed again anyway.
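For reference, here is a sketch of the quoted is_valid_bugaddr() hunk in Xen style (blanks immediately inside the if parentheses, no unnecessary else after return). The constant values are assumptions for illustration only: the patch defines its own macros; 0x00100073 is the standard ebreak encoding and 0x9002 is c.ebreak.

```c
#include <assert.h>
#include <stdint.h>

/* Values assumed for illustration; the patch has its own definitions. */
#define INSN_LENGTH_MASK      0x3u
#define INSN_LENGTH_32        0x3u
#define COMPRESSED_INSN_MASK  0xffffu
#define BUG_INSN_32           0x00100073u  /* ebreak */
#define BUG_INSN_16           0x9002u      /* c.ebreak */

int is_valid_bugaddr(uint32_t insn)
{
    if ( (insn & INSN_LENGTH_MASK) == INSN_LENGTH_32 )
        return insn == BUG_INSN_32;

    return (insn & COMPRESSED_INSN_MASK) == BUG_INSN_16;
}
```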

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:43:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 14:43:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485714.753116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPxD-000145-6D; Fri, 27 Jan 2023 14:43:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485714.753116; Fri, 27 Jan 2023 14:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLPxD-00013y-35; Fri, 27 Jan 2023 14:43:47 +0000
Received: by outflank-mailman (input) for mailman id 485714;
 Fri, 27 Jan 2023 14:43:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xsEI=5Y=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pLPxB-00013s-RO
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 14:43:45 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2044.outbound.protection.outlook.com [40.107.92.44])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fe4496d6-9e50-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 15:43:42 +0100 (CET)
Received: from MW4PR04CA0143.namprd04.prod.outlook.com (2603:10b6:303:84::28)
 by PH8PR12MB7351.namprd12.prod.outlook.com (2603:10b6:510:215::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Fri, 27 Jan
 2023 14:43:39 +0000
Received: from CO1NAM11FT079.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:84:cafe::97) by MW4PR04CA0143.outlook.office365.com
 (2603:10b6:303:84::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.20 via Frontend
 Transport; Fri, 27 Jan 2023 14:43:39 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT079.mail.protection.outlook.com (10.13.175.134) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.22 via Frontend Transport; Fri, 27 Jan 2023 14:43:39 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 27 Jan
 2023 08:43:38 -0600
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 27 Jan 2023 08:43:37 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe4496d6-9e50-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lpUW46YQGXG3tIRT9kvgnue4WJqubBAG4+fe0TlIu/xU70zXCAaRMM//Ab7jAnczrzjr6H2OKhtBVYUJNq5JHyll8mLQl+eMKaZkMr7zzxXSkLJTDq6m6905kDQsR19iTMqjaK2ellbTKl3/PwbYFUY4Nv9m4AItwQnsH5uOL1j8vEahX9TW1vtwyHhwCJDOppG9rY3wxjR3CB2pVv/udSYq+9krLKhdp4JZOHxiKR3T/D5gjf1tFj5VP3mF6arI+/41VEkdv1pH24hFCBNWUlfyWgJUnw65Dfx4zVF2yr/zSZ9cqxqz3xY5Y3bt505e4bL7INgt9VSA1zHbHEa/LA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Q7ge0+vYmN+Gld9693vJ9njAWboUUQHvcT6vUQfNbgo=;
 b=FS8N/ukHWjNUUL2uNjm8cjem6qIF9sS9s/Dz5bMoR1gc++P+Mhc57Vgq4FmgAvCa8A2M6WDuLxGTsExWw33LsseQ2TFEZe2qAiLTdiWeNTKmoiDoazcfvQ6i7x73gVKL0YLgHv8YnUIhb9+B5Z3iF9Qb7fSOgIVUgfX8+4Z5gHanja7dGGKfWLhM3t3RVOyuL9yi19Mv9Z44pyPT+CfFn6RtfC6YvO7UN0lBjDitSKMUnTqvoTrMqM05INogSrNtZkmLqic1w7N5zGgW+xoqaYIw/4EaVdqrc09O+scZ/5ixa64BbAx4WHtr7ZspToSe9ZrvoKAFqq9fFVSnfZj6YQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=gmail.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q7ge0+vYmN+Gld9693vJ9njAWboUUQHvcT6vUQfNbgo=;
 b=FoOg7DsQUrRouW8uLdvAFYzks9L0FiWkJaX6bF0QN8ObN2QiiT2/iO2Sl1wHHdLw+/rP5PWc8zZ0cU4aPdIv2fNhATTK2Hl+/zDma1HtFajjkeHfiQZeHzbWIFpr+yiaB8sfeq0ASEdfSuMerCKiEPKuxoNyukPN4uwtvVakWok=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <fe470f53-5cf5-138b-40e7-83c111ce225a@amd.com>
Date: Fri, 27 Jan 2023 15:43:36 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 14/14] automation: add smoke test to verify macros from
 bug.h
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	<xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Doug
 Goldstein <cardoe@cardoe.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <ed819dc612fcadbd04b4b44b2c0560a77796793a.1674818705.git.oleksii.kurochko@gmail.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <ed819dc612fcadbd04b4b44b2c0560a77796793a.1674818705.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT079:EE_|PH8PR12MB7351:EE_
X-MS-Office365-Filtering-Correlation-Id: 41db0996-34b9-4a36-5a8d-08db0074e122
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 14:43:39.4982
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 41db0996-34b9-4a36-5a8d-08db0074e122
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT079.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7351

Hi Oleksii,

On 27/01/2023 14:59, Oleksii Kurochko wrote:
> 
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V2:
>  - Leave only the latest "grep ..."
> ---
>  automation/scripts/qemu-smoke-riscv64.sh | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
> index e0f06360bc..02fc66be03 100755
> --- a/automation/scripts/qemu-smoke-riscv64.sh
> +++ b/automation/scripts/qemu-smoke-riscv64.sh
> @@ -16,5 +16,5 @@ qemu-system-riscv64 \
>      |& tee smoke.serial
> 
>  set -e
> -(grep -q "Hello from C env" smoke.serial) || exit 1
> +(grep -q "WARN is most likely working" smoke.serial) || exit 1
I think the commit msg is a bit misleading and should be changed.
FWICS, you are not *adding* any smoke test but instead modifying
the grep pattern to reflect the usage of WARN.

>  exit 0
> --
> 2.39.0
> 
> 
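The essence of the changed check can be exercised in isolation. In this sketch a canned log file stands in for the serial output the real script captures from qemu-system-riscv64 via tee; the marker string is the one from the hunk.

```shell
#!/bin/sh
# Stand-in for "qemu-system-riscv64 ... |& tee smoke.serial".
serial=$(mktemp)
printf '%s\n' "Hello from C env" "WARN is most likely working" > "$serial"

set -e
# The hunk replaces the old boot-banner grep with the WARN marker:
grep -q "WARN is most likely working" "$serial" || exit 1
echo "smoke test passed"
```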

~Michal


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 14:54:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 14:54:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485720.753129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLQ7Y-0002mw-6z; Fri, 27 Jan 2023 14:54:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485720.753129; Fri, 27 Jan 2023 14:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLQ7Y-0002mp-3v; Fri, 27 Jan 2023 14:54:28 +0000
Received: by outflank-mailman (input) for mailman id 485720;
 Fri, 27 Jan 2023 14:54:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLQ7X-0002mj-NZ
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 14:54:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLQ7X-0007NC-4D; Fri, 27 Jan 2023 14:54:27 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLQ7W-0000yI-V6; Fri, 27 Jan 2023 14:54:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=09u7c0dC1PQgPIcHx/A9P3MEkHG0v6TFFCoK2rui+fM=; b=sqHIPHI9+BZ/xNArFVvwAJ0UDv
	vjKwJ2RW1flYLTe+ri7W8wLawkVDYnmbKhV7yzA+Vy4MY/5KQlMoA9PYnuxy9C7w965BfMOZqiUDx
	E9JGa/MoL0l0/1UkdrqUr2yTUQs4QJQgNwmRGI6uKsFRN2d0YtJLLS/xTc6nw9blMhRs=;
Message-ID: <a8219b2d-a22d-63ac-5088-c33610310d6e@xen.org>
Date: Fri, 27 Jan 2023 14:54:24 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Oleksii,

On 27/01/2023 13:59, Oleksii Kurochko wrote:
> +static inline void wfi(void)
> +{
> +    __asm__ __volatile__ ("wfi");

I have stared at this line for a while and I am not quite sure I 
understand why we don't need to clobber the memory like we do on Arm.

FWIW, Linux is doing the same, so I guess this is correct. For Arm we 
also follow the Linux implementation.

But I am wondering whether we are just too strict on Arm, whether the 
RISC-V compiler offers a different guarantee, or whether you expect the 
caller to be responsible for preventing the compiler from doing harmful 
optimizations.

> +/*
> + * panic() isn't available at the moment so an infinite loop will be
> + * used temporarily.
> + * TODO: change it to panic()
> + */
> +static inline void die(void)
> +{
> +    for( ;; ) wfi();

Please move wfi() to a newline.

> +}

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 15:06:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 15:06:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485736.753138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLQJ5-0004ll-83; Fri, 27 Jan 2023 15:06:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485736.753138; Fri, 27 Jan 2023 15:06:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLQJ5-0004lc-5P; Fri, 27 Jan 2023 15:06:23 +0000
Received: by outflank-mailman (input) for mailman id 485736;
 Fri, 27 Jan 2023 15:06:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLQJ4-0004lS-Ez; Fri, 27 Jan 2023 15:06:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLQJ3-0007c4-PP; Fri, 27 Jan 2023 15:06:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLQJ3-0003pr-7l; Fri, 27 Jan 2023 15:06:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLQJ3-0004bq-71; Fri, 27 Jan 2023 15:06:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MOqVBiU7gHSfa9naHZ2q86gIy/N/jgCU1Y6QpUA7eN0=; b=gsopmnU/jAEhWqt++rk00OEPkJ
	V1cBGVM3BVCvVPed5gFtiloIjy/GKmg0LfWVuN//UbdXWh9whGyCf5X3O5tds+4+a+YuZoEw/aKX3
	SMW6b2yD1IdTxxSOksz+3YfxmQ6v2Haca3QocRXkUwxvBOv5zjIZWkYddtfvOrwySojw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176238-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176238: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.17-testing:build-arm64-pvops:<job status>:broken:regression
    xen-4.17-testing:build-arm64-xsm:<job status>:broken:regression
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:build-arm64-xsm:host-install(4):broken:regression
    xen-4.17-testing:build-arm64-pvops:host-install(4):broken:regression
    xen-4.17-testing:build-arm64-libvirt:<job status>:broken:regression
    xen-4.17-testing:build-arm64-libvirt:host-install(4):broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-reuse/final:broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-ping-check-xen:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 15:06:21 +0000

flight 176238 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176238/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 175447
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 175447
 build-arm64-libvirt             <job status>                 broken  in 176224
 build-arm64-libvirt        4 host-install(4) broken in 176224 REGR. vs. 175447
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail REGR. vs. 175447
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-ws16-amd64  5 host-install(5)   broken pass in 176224
 test-amd64-i386-qemut-rhel6hvm-intel 19 host-reuse/final broken pass in 176224
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 host-ping-check-xen fail in 176224 pass in 176238
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install          fail pass in 176224

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 176224 like 175447
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   36 days
Testing same since   176224  2023-01-26 22:14:43 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step test-amd64-i386-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-intel host-reuse/final
broken-job build-arm64-libvirt broken

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry has next pointing to the
    deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code takes a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 15:49:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 15:49:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485750.753155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLQyd-0001gs-Lt; Fri, 27 Jan 2023 15:49:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485750.753155; Fri, 27 Jan 2023 15:49:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLQyd-0001gl-IF; Fri, 27 Jan 2023 15:49:19 +0000
Received: by outflank-mailman (input) for mailman id 485750;
 Fri, 27 Jan 2023 15:49:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLQyc-0001ga-59; Fri, 27 Jan 2023 15:49:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLQyc-00009b-1w; Fri, 27 Jan 2023 15:49:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLQyb-00067y-Iv; Fri, 27 Jan 2023 15:49:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLQyb-0003HP-IS; Fri, 27 Jan 2023 15:49:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4No2tJ0x1SIbk8GDNASKdd1UoULzcfQWLYK8XgsTKoA=; b=PWrEzC+weUYm//zGJuVjL+2HkR
	VRTpiumgNuJYrx5ldn3FGyB0UKV6T7eNq308+bsdq/+sRYSAwRSCzGwpXRnLzW17YgFClCdnHcxDh
	g7gytxEkZUC26iCs0i2b04IywXxDfSOKGHI4pYluKaIrjPssFZc8k5RIOcsinQU7jSkA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176222-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176222: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-pair:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-pair:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-libvirt-pair:host-install/src_host(6):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-pair:host-install/src_host(6):broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    xen-unstable:build-armhf:syslog-server:running:regression
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf:capture-logs:broken:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 15:49:17 +0000

flight 176222 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176222/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair            <job status>                 broken
 build-arm64                     <job status>                 broken
 build-armhf                     <job status>                 broken
 test-amd64-amd64-libvirt-pair   <job status>                 broken
 test-amd64-amd64-libvirt-pair 6 host-install/src_host(6) broken REGR. vs. 175994
 build-arm64                   4 host-install(4)        broken REGR. vs. 175994
 test-amd64-i386-pair        6 host-install/src_host(6) broken REGR. vs. 175994
 build-armhf                   4 host-install(4)        broken REGR. vs. 175994
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 175994
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 175994
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175994
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    7 days
Failing since        176003  2023-01-20 17:40:27 Z    6 days   15 attempts
Testing same since   176222  2023-01-26 22:13:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  George Dunlap <george.dunlap@cloud.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-libvirt-pair                                broken  
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-pair broken
broken-job build-arm64 broken
broken-job build-armhf broken
broken-job test-amd64-amd64-libvirt-pair broken
broken-step test-amd64-amd64-libvirt-pair host-install/src_host(6)
broken-step build-arm64 host-install(4)
broken-step test-amd64-i386-pair host-install/src_host(6)
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

(No revision log; it would be 1225 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 15:59:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 15:59:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485757.753168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLR80-0003Rl-LY; Fri, 27 Jan 2023 15:59:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485757.753168; Fri, 27 Jan 2023 15:59:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLR80-0003Re-Ib; Fri, 27 Jan 2023 15:59:00 +0000
Received: by outflank-mailman (input) for mailman id 485757;
 Fri, 27 Jan 2023 15:51:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QlU0=5Y=3mdeb.com=michal.zygowski@srs-se1.protection.inumbo.net>)
 id 1pLR0O-00032F-UN
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 15:51:09 +0000
Received: from 19.mo581.mail-out.ovh.net (19.mo581.mail-out.ovh.net
 [178.33.251.118]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 68a9e726-9e5a-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 16:51:06 +0100 (CET)
Received: from director10.ghost.mail-out.ovh.net (unknown [10.108.1.59])
 by mo581.mail-out.ovh.net (Postfix) with ESMTP id 9F2C9201ED
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 15:51:05 +0000 (UTC)
Received: from ghost-submission-6684bf9d7b-jfnlm (unknown [10.110.208.180])
 by director10.ghost.mail-out.ovh.net (Postfix) with ESMTPS id C62BB200BA;
 Fri, 27 Jan 2023 15:46:59 +0000 (UTC)
Received: from 3mdeb.com ([37.59.142.107])
 by ghost-submission-6684bf9d7b-jfnlm with ESMTPSA
 id v1HxLPPx02OwXgEAfPC4jg
 (envelope-from <michal.zygowski@3mdeb.com>); Fri, 27 Jan 2023 15:46:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68a9e726-9e5a-11ed-b8d1-410ff93cb8f0
Authentication-Results:garm.ovh; auth=pass (GARM-107S001f2ac79bd-0ffd-4042-ba12-4a2658949fcd,
                    72E125D90F31E450B543EC411465CC29B9302547) smtp.auth=michal.zygowski@3mdeb.com
X-OVh-ClientIp:84.10.27.202
Message-ID: <6bd71ecd-5320-6162-375b-2bc9afc47802@3mdeb.com>
Date: Fri, 27 Jan 2023 16:46:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.4.2
From: =?UTF-8?B?TWljaGHFgiDFu3lnb3dza2k=?= <michal.zygowski@3mdeb.com>
To: trenchboot-devel@googlegroups.com
Cc: xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net
Content-Language: en-US
Organization: 3mdeb Sp. z o.o.
Subject: TrenchBoot Anti Evil Maid for Qubes OS
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------getUOkq6vc0OY7QVU0BpnWyf"
X-Ovh-Tracer-Id: 17434560058756450447
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: 0

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------getUOkq6vc0OY7QVU0BpnWyf
Content-Type: multipart/mixed; boundary="------------jgWIbDNKWczTcS0CWS6x4pNU";
 protected-headers="v1"
From: =?UTF-8?B?TWljaGHFgiDFu3lnb3dza2k=?= <michal.zygowski@3mdeb.com>
To: trenchboot-devel@googlegroups.com
Cc: xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net
Message-ID: <6bd71ecd-5320-6162-375b-2bc9afc47802@3mdeb.com>
Subject: TrenchBoot Anti Evil Maid for Qubes OS

--------------jgWIbDNKWczTcS0CWS6x4pNU
Content-Type: multipart/mixed; boundary="------------ssBMaubtu7nOxnjxZ9TKJnoX"

--------------ssBMaubtu7nOxnjxZ9TKJnoX
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Dear TrenchBoot Community,

As you may know, 3mdeb has been doing a Proof of Concept (PoC)
integration of TrenchBoot into Qubes OS to provide the Anti Evil Maid
(AEM) service in place of Trusted Boot (tboot), the former solution
available for Intel TXT launch. Our (3mdeb's) initial plan and
motivation have been explained on the Dasharo documentation page[1].

We (3mdeb) would like to continue the work to extend TrenchBoot support
in Qubes OS further. It will include AMD and Intel platforms with TPM
2.0 and both legacy and UEFI boot mode. A lot of time has passed since
the initial plan was devised, and the upstream work of implementing
Secure Launch support for the Linux kernel[2] has progressed as well.
Because of that, the initial plan for TrenchBoot AEM support for
Qubes OS has to be changed. Below I have briefly explained what is
needed to achieve the goal using the most recent protocol version
designed for the Linux kernel.

Project Plan
------------

The PoC phase for Intel platforms in legacy boot mode is approaching its
finish line and results shall be published soon. Most of the work
already done will still be applicable in the new TrenchBoot boot
protocol for both UEFI and legacy BIOS when it comes to Xen hypervisor
support for DRTM post-launch. The changes introduced in the upstream
patches for TrenchBoot mainly affect the DRTM pre-launch phase,
which will have to be rebased to the most recent version.

The plan is divided into multiple phases, each achieving a different
functionality of the Qubes OS Anti Evil Maid with TrenchBoot:

1. TPM 2.0 support in Qubes OS AEM (Intel hardware):

    - Implement support for the TPM 2.0 module in Xen
    - Implement support for the TPM 2.0 event log in Xen

      The two above are needed to handle the measurements and event log
      created and returned to the kernel (Xen) by the specific DRTM
      technology implemented in the silicon. Xen also needs to measure
      the Dom0 kernel and initrd prior to Dom0 construction to ensure
      the load-measure-execute principle.

    - Implement parallel CPU core bring-up for DRTM launch

      Currently the CPU cores are woken up in parallel, but later they
      are hacked to wait in a queue. If any interrupt came at that
      time, it could be a serious danger. It has to be fixed as soon as
      possible, as required by the Intel TXT specification.

    - Integrate the TPM 2.0 software stack into Qubes OS Dom0
    - Extend the AEM scripts to detect the TPM version on the platform
    - Extend the AEM scripts to use the appropriate software stack for
      TPM 2.0

      Currently, only TPM 1.2 is supported in the Qubes OS AEM service
      code. The 3 items above will ensure the necessary software for
      TPM 2.0 is available, and AEM scripts executed early from the
      initrd can detect which TPM family is present on the platform and
      use the appropriate software stack and functions. The TPM 1.2 and
      TPM 2.0 software stacks are not compatible, so the scripts
      themselves must use the proper API for the given TPM and its
      respective software stack.

    - Update the Qubes OS AEM documentation

      This phase is merely an expansion of the initial PoC to support
      TPM 2.0 in Qubes OS AEM with TrenchBoot. The next phases will
      focus on enabling UEFI boot mode support and aligning with the
      most recent TrenchBoot Secure Launch protocol.

    - Test the solution on Intel hardware with TPM 1.2 and 2.0 using
      legacy boot mode

2. Update to the newest TrenchBoot boot protocol

    - Rebase the code onto the most recent work implementing the Secure
      Launch protocol being upstreamed to Linux and GRUB

      Modifications introduced in GRUB and Xen for the AEM project will
      have to be aligned with the TrenchBoot code being upstreamed to
      GRUB and the Linux kernel. The main idea is to have a generic
      Secure Launch module exposed by the firmware/bootloader to the
      target operating system kernel, which can be called by the kernel
      to perform Secure Launch using DRTM. The advantage of such an
      approach is a lesser maintenance burden across the multiple
      projects providing kernels for operating systems (such as Xen and
      the Linux kernel), a standardized boot protocol, and centralized
      code in one place. Unfortunately, there is no documentation
      explaining the details of the most recent protocol, neither in
      the TrenchBoot documentation nor announced on the TrenchBoot
      mailing list. Given that, our proposal may not be exactly correct
      because of assumptions made by 3mdeb about the details of the
      most recent boot protocol.

    - Test the solution on Intel hardware with TPM 1.2 and TPM 2.0
      using legacy boot mode

3. AMD support for Qubes OS AEM with TrenchBoot

    - Clean up the Secure Kernel Loader (formerly LandingZone) package
      support for Qubes OS

      An initial integration of the Secure Kernel Loader (SKL, formerly
      LandingZone) for Qubes OS[3] was made by 3mdeb as a Proof of
      Concept for running DRTM in Qubes OS on AMD platforms[4]. This
      code, however, is very outdated, to the extent that the project
      name happened to change in the meantime. A lot of changes and
      improvements have been made to SKL, and Qubes OS should use the
      most recent version. The work here will include a clean-up of the
      existing Qubes OS package integration and an update to the most
      recent version of SKL.

    - TrenchBoot Secure Kernel Loader (SKL) improvements for AMD server
      CPUs with multiple nodes

      As SKL was mostly validated to work on consumer devices, it does
      not take into account AMD CPUs with multiple nodes. Each node can
      be treated as a separate processor with a full set of CPU
      registers. This implies additional work to be done by SKL, namely
      handling the DMA protection for SKL on each node. Currently the
      DMA protection is lifted when exiting SKL only on a single-node
      system. The work includes extending the SKL code to handle
      multi-node systems like servers.

    - Clean up the AMD DRTM (SKINIT) support
IGluIFRyZW5jaEJvb3QgR1JVQjINCg0KICAgICAgVGhlIFRyZW5jaEJvb3QgY29kZSBpbiBH
UlVCIHN1cHBvcnRpbmcgQU1EIGlzIHZlcnkgb2xkIGFzIHdlbGwgYW5kDQogICAgICBpcyBk
ZWZpbml0ZWx5IG5vdCBhbGlnbmVkIHdpdGggY3VycmVudCB3b3JrLiBNb3Jlb3ZlciBpdCBu
ZWVkcyBhDQogICAgICBjbGVhbnVwIG9mIG9ic29sZXRlIGNvZGUuDQoNCiAgICAtIFRlc3Qg
dGhlIHNvbHV0aW9uIG9uIEFNRCBhbmQgSW50ZWwgaGFyZHdhcmUgd2l0aCBUUE0gMi4wIGFu
ZCBUUE0NCiAgICAgIDEuMiB1c2luZyBsZWdhY3kgYm9vdCBtb2RlDQoNCjQuIFhlbiBVRUZJ
IGJvb3QgbW9kZSB3aXRoIERSVE0gc2lnbiBUcmVuY2hCb290Og0KDQogICAgLSBUcmVuY2hC
b290IHN1cHBvcnQgZm9yIFVFRkkgYm9vdCBtb2RlIGZvciBBTUQgaW4gR1JVQg0KDQogICAg
ICBEdXJpbmcgdGhlIFBvQyB3b3JrIG9uIGJvb3RpbmcgUXViZXMgT1Mgd2l0aCBEUlRNIHVz
aW5nDQogICAgICBUcmVuY2hCb290LCBVRUZJIGJvb3QgbW9kZSBkaWQgbm90IHdvcmsuIFVF
RkkgYWxzbyBoYXMgbm90IGJlZW4NCiAgICAgIHRlc3RlZCBleHRlbnNpdmVseSBzbyBhIG1p
c2ltcGxlbWVudGF0aW9uIGluIHNvbWUgcGxhY2VzIGlzDQogICAgICBwb3NzaWJsZS4gSXQg
aGFzIHRvIGJlIGFuYWx5emVkIGFuZCBmaXhlZC4NCg0KICAgIC0gVHJlbmNoQm9vdCBzdXBw
b3J0IGZvciBVRUZJIGJvb3QgbW9kZSBpbiBYZW4gZm9yIEludGVsDQogICAgLSBUcmVuY2hC
b290IHN1cHBvcnQgZm9yIFVFRkkgYm9vdCBtb2RlIGluIFhlbiBmb3IgQU1EDQoNCiAgICAg
IFRoZSBzY29wZSBvZiB0aGUgd29yayBmb3IgdGhlIHR3byBhYm92ZSBpdGVtcyB3aWxsIGlu
Y2x1ZGUNCiAgICAgIGltcGxlbWVudGluZyBzdXBwb3J0IGZvciB0aGUgbW9zdCByZWNlbnQg
VHJlbmNoQm9vdCBib290IHByb3RvY29sDQogICAgICBpbiB0aGUgWGVuIGh5cGVydmlzb3Iu
IFhlbiBtdXN0IGJlIGFibGUgdG8gZmluZCB0aGUgU2VjdXJlIExhdW5jaA0KICAgICAgbW9k
dWxlLCBmZWVkIGl0IHdpdGggbmVjZXNzYXJ5IGRhdGEgYW5kIGNhbGwgaXQgdG8gcGVyZm9y
bSBEUlRNDQogICAgICBsYXVuY2guDQoNCiAgICAtIFRlc3QgdGhlIHNvbHV0aW9uIG9uIEFN
RCBhbmQgSW50ZWwgaGFyZHdhcmUgd2l0aCBUUE0gMi4wIGFuZCBUUE0NCiAgICAgIDEuMiB1
c2luZyBsZWdhY3kgYW5kIFVFRkkgYm9vdCBtb2RlDQoNCkZ1dHVyZSBpbXByb3ZlbWVudHMN
Ci0tLS0tLS0tLS0tLS0tLS0tLS0NCg0KSW5pdGlhbGx5IHRoZXJlIHdhcyBTMyBzdXBwb3J0
IGluIHRoZSBzY29wZSBvZiB0aGUgd29yaywgaG93ZXZlciBhZnRlcg0KY29uc3VsdGluZyBv
dGhlciBUcmVuY2hCb290IERldmVsb3BlcnMgKHRoYW5rIHlvdSBEYW5pZWwgUC4gU21pdGgg
YW5kDQpBbmRyZXcgQ29vcGVyKSBhbmQgYSBsb25nZXIgY29uc2lkZXJhdGlvbiB3ZSAoM21k
ZWIpIGNhbWUgdG8gdGhlDQpmb2xsb3dpbmcgY29uY2x1c2lvbnM6DQoNCjEuIFBlcmZvcm1p
bmcgRFJUTSBsYXVuY2ggb24gUzMgcmVzdW1lIHNlZW1zIHRvIGJlIG91dCBvZiBzY29wZSBm
b3INCiAgICBjdXJyZW50IEFFTSBpbXBsZW1lbnRhdGlvbiAoQUVNIGRvZXMgbm90IHBsYXkg
YW55IHJvbGUgaW4gUXViZXMgT1MNCiAgICByZXN1bWUgcHJvY2VzcykuIEFFTSB0YWtlcyBh
ZHZhbnRhZ2Ugb2YgRFJUTSBvbmx5IGR1cmluZyBub3JtYWwgYm9vdC4NCjIuIERSVE0gbGF1
bmNoIG9mIFhlbiBoeXBlcnZpc29yIG9uIFMzIHJlc3VtZSBpcyBhIGNvbXBsZXggdG9waWMg
d2hpY2gNCiAgICBuZWVkcyB0aG9yb3VnaCBkZWJhdGUgb24gaG93IHRvIGFwcHJvYWNoIHRo
YXQgYW5kIGhvdyBpdCBzaG91bGQgYmUNCiAgICBpbXBsZW1lbnRlZC4gaXQgbWF5IGJlIGV2
ZW4gZWxpZ2libGUgZm9yIGEgc2VwYXJhdGUgcHJvamVjdC4NCjMuIERSVE0gb24gUzMgcmVz
dW1lIHdpbGwgbm90IGJlIGFibGUgdG8gZG8gdGhlIHNhbWUgbWVhc3VyZW1lbnRzIGFzIG9u
DQogICAgbm9ybWFsIGJvb3QuIFRoZSBYZW4gYW5kIERvbTAgaW1hZ2VzIGluIG1lbW9yeSB3
aWxsIGJlIGRpZmZlcmVudCB0aGFuDQogICAgb24gbm9ybWFsIGJvb3QuIFdoYXQgb25lIHdv
dWxkIGdldCBmcm9tIERSVE0gbGF1bmNoIGR1cmluZyBTMyByZXN1bWUNCiAgICBpcyBhIGZv
b3RwcmludCBvZiBjdXJyZW50IHN0YXRlIG9mIHRoZSBoeXBlcnZpc29yLiBJdCBoYXMgdG8g
YmUgdGFrZW4NCiAgICBpbiBhcHByb3ByaWF0ZSBwbGFjZSBhbmQgdGltZSB0byBtYWtlIHRo
ZSBtZWFzdXJlbWVudHMgYXMgY29uc2lzdGVudA0KICAgIGFuZCBtZWFuaW5nZnVsIGFzIHBv
c3NpYmxlLiBNb3Jlb3ZlciwgbWVhc3VyaW5nIHRoZSBmb290cHJpbnQgb2YNCiAgICBydW50
aW1lIGNvbXBvbmVudHMgaXMgbW9yZSBzdWl0YWJsZSBmb3IgYXR0ZXN0YXRpb24gcHVycG9z
ZXMgdGhhbg0KICAgIEFudGkgRXZpbCBNYWlkIHVzZSBjYXNlLg0KDQpEZXNwaXRlIG9mIHRo
ZSBhYm92ZSBjb25jbHVzaW9ucyBJIGhpZ2hseSBlbmNvdXJhZ2UgdG8gZGViYXRlIG9uIHRo
ZSANCmZvbGxvd2luZzoNCg0KLSBpZiBhbmQgaG93IFF1YmVzIE9TIEFFTSBjb3VsZCB1c2Ug
dGhlIFMzIHJlc3VtZSBEUlRNIG1lYXN1cmVtZW50cyBvZg0KICAgWGVuIChhbmQgRG9tMCkN
Ci0gd2hhdCBwYXJ0cyBvZiBYZW4gaHlwZXJ2aXNvciAoYW5kIHBvc3NpYmx5IERvbTApIHNo
b3VsZCBiZSBtZWFzdXJlZA0KICAgZHVyaW5nIFMzIHJlc3VtZSB3aXRoIERSVE0NCi0gd2hl
biB0aGUgbWVhc3VyZW1lbnQgb2YgWGVuIGh5cGVydmlzb3Igc2hvdWxkIGJlIG1hZGUgZHVy
aW5nIFMzIHJlc3VtZQ0KICAgd2l0aCBEUlRNDQotIGlmIHRoZXJlIGFyZSBhbnkgb3V0c3Rh
bmRpbmcgaXNzdWVzIG9yIGJsb2NrZXJzIHRvIGltcGxlbWVudCBEUlRNDQogICBsYXVuY2gg
b2YgWGVuIGh5cGVydmlzb3IgZHVyaW5nIFMzIHJlc3VtZQ0KDQpPbmUgaWRlYSB0byBwZXJm
b3JtIGEgRFJUTSBkdXJpbmcgUzMgcmVzdW1lIHdvdWxkIGJlIGFzIGZvbGxvd3M6DQoNCkNv
bGQgYm9vdDogR1JVQiAtPiBEUlRNIC0+IFhlbiAoaW5pdCB1cCB0byBEb20wIGNyZWF0aW9u
KSAtPiBEUlRNIC0+IFhlbiANCi0+IERvbTANClMzIHJlc3VtZTogWGVuIC0+IERSVE0gLT4g
WGVuICghKSAtPiBEb20wIHJlc3VtZQ0KDQooISkgLSBoZXJlIHdlIGhhdmUgdG8gdGVhY2gg
WGVuIGhvdyB0byByZXR1cm4gdG8gcHJldmlvdXMgc3RhdGUgd2l0aG91dA0KcmVzdGFydGlu
ZyBEb20wDQoNClN1Y2ggYXBwcm9hY2ggd291bGQgcmVzdWx0IGluIGlkZW50aWNhbCBtZWFz
dXJlbWVudHMgYWZ0ZXIgY29sZGJvb3QgYW5kIA0KUzMgcmVzdW1lIG9mIFhlbiBoeXBlcnZp
c29yLg0KDQpTdW1tYXJ5DQotLS0tLS0tDQoNClBsZWFzZSBrZWVwIGluIG1pbmQgdGhhdCAz
bWRlYiBoYXMgb25seSBhIHZhZ3VlIHVuZGVyc3RhbmRpbmcgb2YgdGhlDQpuZXdlc3QgVHJl
bmNoQm9vdCBib290IHByb3RvY29sIGFuZCBiZWNhdXNlIG9mIHRoYXQgdGhlIGRldGFpbHMN
CmV4cGxhaW5lZCBpbiB0aGlzIG1lc3NhZ2UgbWF5IG5vdCBiZSBhY2N1cmF0ZS4gSXQgd291
bGQgYmUgYXBwcmVjaWF0ZWQNCnRvIGhhdmUgYSBkcmFmdCBvZiB0aGUgc3BlY2lmaWNhdGlv
biBvciBkb2N1bWVudGF0aW9uIGV4cGxhaW5pbmcgdGhlDQpkZXRhaWxzIG9mIHRoZSBjdXJy
ZW50IGFwcHJvYWNoIG9mIHBlcmZvcm1pbmcgU2VjdXJlIExhdW5jaCB1c2luZw0KVHJlbmNo
Qm9vdC4NCg0KV2Uga2luZGx5IGFzayBmb3IgZmVlZGJhY2sgYW5kIGNvbW1lbnRzIGZyb20g
dGhlIGNvbW11bml0eSB3aXRoDQpjb25zdHJ1Y3RpdmUgY3JpdGlxdWUgb2Ygb3VyIHBsYW4u
DQoNCkJlc3QgcmVnYXJkcywNCjNtZGViIHRlYW0NCg0KWzFdIGh0dHBzOi8vZG9jcy5kYXNo
YXJvLmNvbS9wcm9qZWN0cy90cmVuY2hib290LWFlbS8NClsyXSBodHRwczovL2xrbWwub3Jn
L2xrbWwvMjAyMi8yLzE4LzY5OQ0KWzNdIGh0dHBzOi8vZ2l0aHViLmNvbS8zbWRlYi9xdWJl
cy1sYW5kaW5nLXpvbmUvdHJlZS9sel9zdXBwb3J0DQpbNF0gaHR0cHM6Ly95b3V0dS5iZS9y
TTB2Umk2cUFCRT90PTE0NjENCg0KDQo=
--------------ssBMaubtu7nOxnjxZ9TKJnoX
Content-Type: application/pgp-keys; name="OpenPGP_0x6B5BA214D21FCEB2.asc"
Content-Disposition: attachment; filename="OpenPGP_0x6B5BA214D21FCEB2.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable


--------------ssBMaubtu7nOxnjxZ9TKJnoX--

--------------jgWIbDNKWczTcS0CWS6x4pNU--

--------------getUOkq6vc0OY7QVU0BpnWyf
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"


--------------getUOkq6vc0OY7QVU0BpnWyf--


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 16:02:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 16:02:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485781.753177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLRB6-0005Ph-8N; Fri, 27 Jan 2023 16:02:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485781.753177; Fri, 27 Jan 2023 16:02:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLRB6-0005Pa-5K; Fri, 27 Jan 2023 16:02:12 +0000
Received: by outflank-mailman (input) for mailman id 485781;
 Fri, 27 Jan 2023 16:02:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLRB5-0005PU-5X
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 16:02:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLRB4-0000xD-8O; Fri, 27 Jan 2023 16:02:10 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLRB4-0003yN-2F; Fri, 27 Jan 2023 16:02:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=5jplpJY0zfhroJDOM6N0A2mqZtKNEP4nqIi9Adh6dG4=; b=A2I6eLbRurVzNWhuqmCfvYgdbl
	AM4uFLRCvVFif2Lwgw4iNsNIEwEVAfJp1Jsl0W7UCEYxtVMUuyj2reLZwz35v4jDMMbtvIJiGRdED
	+Iqdvt15GCbpnSukaerj+p5RUTejj8ERqJm1NyGMB/svbUJ1hB8C2gfZ8f3VLrlVQTB8=;
Message-ID: <73553bcf-9688-c187-a9cb-c12806484893@xen.org>
Date: Fri, 27 Jan 2023 16:02:08 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Oleksii,

On 27/01/2023 13:59, Oleksii Kurochko wrote:
> The patch introduces macros: BUG(), WARN(), run_in_exception(),
> assert_failed.
> 
> The implementation uses the "ebreak" instruction in combination with
> different bug frame tables (one for each type) which contain useful
> information.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes:
>    - Remove __ in define namings
>    - Update run_in_exception_handler() with
>      register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
>    - Remove bug_instr_t type and change its usage to uint32_t
> ---
>   xen/arch/riscv/include/asm/bug.h | 118 ++++++++++++++++++++++++++++
>   xen/arch/riscv/traps.c           | 128 +++++++++++++++++++++++++++++++
>   xen/arch/riscv/xen.lds.S         |  10 +++
>   3 files changed, 256 insertions(+)
>   create mode 100644 xen/arch/riscv/include/asm/bug.h
> 
> diff --git a/xen/arch/riscv/include/asm/bug.h b/xen/arch/riscv/include/asm/bug.h
> new file mode 100644
> index 0000000000..4b15d8eba6
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/bug.h
> @@ -0,0 +1,118 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2012 Regents of the University of California
> + * Copyright (C) 2021-2023 Vates

I have to question the two copyrights here given that the majority of 
the code seems to be taken from the arm implementation (see 
arch/arm/include/asm/bug.h).

With that said, we should consolidate the code rather than duplicating 
it on every architecture.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 16:18:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 16:18:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485791.753191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLRQ9-0007SV-Kt; Fri, 27 Jan 2023 16:17:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485791.753191; Fri, 27 Jan 2023 16:17:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLRQ9-0007SO-Hy; Fri, 27 Jan 2023 16:17:45 +0000
Received: by outflank-mailman (input) for mailman id 485791;
 Fri, 27 Jan 2023 16:17:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D8Jc=5Y=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pLRQ7-0007SE-Nc
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 16:17:43 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f93b420-9e5e-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 17:17:41 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 112DB220D7;
 Fri, 27 Jan 2023 16:17:41 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DE32E138E3;
 Fri, 27 Jan 2023 16:17:40 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id tGTFNCT502O5AgAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 27 Jan 2023 16:17:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f93b420-9e5e-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1674836261; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=fTn5+Q+SWqEwE2uyxIo/eODqZLSuycg85wgwNabMcXk=;
	b=HqKwHTjkanDLKW6KGWWyfh7x6eRJuWNf2BcSByRK6ZmBIkNQYoNbfZiw23hXCk6px++bKg
	zjYEUW3nBRencLpV2if1pW7NyChxKc3ExYCV8Yr8Mvg9iANf6NNwFzM/1YN/fjis1ETzrP
	r+XPpUYwCMpBe8OTwXwYRc59vPzOwIY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3] tools/helpers: don't log errors when trying to load PVH xenstore-stubdom
Date: Fri, 27 Jan 2023 17:17:39 +0100
Message-Id: <20230127161739.5596-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When loading a Xenstore stubdom the loader doesn't know whether the
to-be-loaded kernel is a PVH or a PV one. So it tries to load it as
a PVH one first, and if this fails it falls back to loading it as a
PV kernel.

This results in errors being logged in case the stubdom is a PV kernel.

Suppress those errors by setting the minimum logging level to
"critical" while trying to load the kernel as PVH.

In case both PVH mode and PV mode loading fail, retry PVH mode loading
without changing the log level, so that the error messages get
logged.

Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- retry PVH loading with logging if PV fails, too (Jan Beulich)
V3:
- expand commit message (Jan Beulich)
- expand comment (Andrew Cooper)
---
 tools/helpers/init-xenstore-domain.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 04e351ca29..85cc9e8381 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -31,6 +31,8 @@ static int memory;
 static int maxmem;
 static xen_pfn_t console_gfn;
 static xc_evtchn_port_or_error_t console_evtchn;
+static xentoollog_level minmsglevel = XTL_PROGRESS;
+static void *logger;
 
 static struct option options[] = {
     { "kernel", 1, NULL, 'k' },
@@ -141,19 +143,33 @@ static int build(xc_interface *xch)
         goto err;
     }
 
+    /*
+     * This is a bodge.  We can't currently inspect the kernel's ELF notes
+     * ahead of attempting to construct a domain, so try PVH first, suppressing
+     * errors by setting min level to high, and fall back to PV.
+     */
     dom->container_type = XC_DOM_HVM_CONTAINER;
+    xtl_stdiostream_set_minlevel(logger, XTL_CRITICAL);
     rv = xc_dom_parse_image(dom);
+    xtl_stdiostream_set_minlevel(logger, minmsglevel);
     if ( rv )
     {
         dom->container_type = XC_DOM_PV_CONTAINER;
         rv = xc_dom_parse_image(dom);
         if ( rv )
         {
-            fprintf(stderr, "xc_dom_parse_image failed\n");
-            goto err;
+            /* Retry PVH, now with normal logging level. */
+            dom->container_type = XC_DOM_HVM_CONTAINER;
+            rv = xc_dom_parse_image(dom);
+            if ( rv )
+            {
+                fprintf(stderr, "xc_dom_parse_image failed\n");
+                goto err;
+            }
         }
     }
-    else
+
+    if ( dom->container_type == XC_DOM_HVM_CONTAINER )
     {
         config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
         config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
@@ -412,8 +428,6 @@ int main(int argc, char** argv)
     char buf[16], be_path[64], fe_path[64];
     int rv, fd;
     char *maxmem_str = NULL;
-    xentoollog_level minmsglevel = XTL_PROGRESS;
-    xentoollog_logger *logger = NULL;
 
     while ( (opt = getopt_long(argc, argv, "v", options, NULL)) != -1 )
     {
@@ -456,9 +470,7 @@ int main(int argc, char** argv)
         return 2;
     }
 
-    logger = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr,
-                                                               minmsglevel, 0);
-
+    logger = xtl_createlogger_stdiostream(stderr, minmsglevel, 0);
     xch = xc_interface_open(logger, logger, 0);
     if ( !xch )
     {
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 16:40:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 16:40:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485803.753207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLRm3-0002ts-Hu; Fri, 27 Jan 2023 16:40:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485803.753207; Fri, 27 Jan 2023 16:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLRm3-0002tl-E8; Fri, 27 Jan 2023 16:40:23 +0000
Received: by outflank-mailman (input) for mailman id 485803;
 Fri, 27 Jan 2023 16:40:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLRm2-0002ta-0u; Fri, 27 Jan 2023 16:40:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLRm2-0001lI-09; Fri, 27 Jan 2023 16:40:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLRm1-0007rv-Qp; Fri, 27 Jan 2023 16:40:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLRm1-0003pt-QN; Fri, 27 Jan 2023 16:40:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1DJ+MqWONCsWjoHye0+Tzv5UrkoZCoznnWRE7J+0E/o=; b=TZBw9QGOBbEk5kzmLLhK05/89M
	FVgy6P/RApGVgiOhxwXOREIMKMfZIrrMQ2QvpcBo60FLnIb6DQfLJyzQH5GP5NJsbKDTHeSw3fxtf
	D+SGfWINVxZRWQ57zAFEn+n7Q2Y32bfWS9c3Fkg6csBI756j0Qu1rygVUn7j8mho9Srs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176248-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176248: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e7aac7fc137e247edad22f7ee53b9a1fba227397
X-Osstest-Versions-That:
    ovmf=0d129ef7c3a95d64f2f2cab4f8302318775f9933
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 16:40:21 +0000

flight 176248 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176248/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e7aac7fc137e247edad22f7ee53b9a1fba227397
baseline version:
 ovmf                 0d129ef7c3a95d64f2f2cab4f8302318775f9933

Last test of basis   176232  2023-01-27 04:15:54 Z    0 days
Testing same since   176248  2023-01-27 14:45:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kun Qin <kun.qin@microsoft.com>
  Rebecca Cran <quic_rcran@quicinc.com>
  Rebecca Cran <rebecca@quicinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0d129ef7c3..e7aac7fc13  e7aac7fc137e247edad22f7ee53b9a1fba227397 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 17:18:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 17:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485817.753217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLSMP-0007SV-C7; Fri, 27 Jan 2023 17:17:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485817.753217; Fri, 27 Jan 2023 17:17:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLSMP-0007SO-9K; Fri, 27 Jan 2023 17:17:57 +0000
Received: by outflank-mailman (input) for mailman id 485817;
 Fri, 27 Jan 2023 17:17:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gttX=5Y=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pLSMO-0007SI-1b
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 17:17:56 +0000
Received: from ppsw-43.srv.uis.cam.ac.uk (ppsw-43.srv.uis.cam.ac.uk
 [131.111.8.143]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88c04f5a-9e66-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 18:17:54 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:34326)
 by ppsw-43.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.139]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pLSML-000nZM-TQ (Exim 4.96) (return-path <amc96@srcf.net>);
 Fri, 27 Jan 2023 17:17:53 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 014CE1FA74;
 Fri, 27 Jan 2023 17:17:52 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88c04f5a-9e66-11ed-a5d9-ddcf98b90cbd
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <db17d550-bbf8-f645-8ac4-51e20007aafc@srcf.net>
Date: Fri, 27 Jan 2023 17:17:52 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Subject: Re: [PATCH v3] tools/helpers: don't log errors when trying to load
 PVH xenstore-stubdom
Content-Language: en-GB
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230127161739.5596-1-jgross@suse.com>
From: Andrew Cooper <amc96@srcf.net>
In-Reply-To: <20230127161739.5596-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27/01/2023 4:17 pm, Juergen Gross wrote:
> When loading a Xenstore stubdom the loader doesn't know whether the
> kernel to be loaded is a PVH or a PV one. So it tries to load it as
> a PVH kernel first, and if this fails it falls back to loading it as
> a PV kernel.
>
> This results in errors being logged in case the stubdom is a PV kernel.
>
> Suppress those errors by setting the minimum logging level to
> "critical" while trying to load the kernel as PVH.
>
> In case both PVH and PV mode loading fail, retry PVH mode loading
> without changing the log level, in order to get the error messages
> logged.
>
> Fixes: f89955449c5a ("tools/init-xenstore-domain: support xenstore pvh stubdom")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

although let's wait for OSStest to unblock before putting this in.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 17:47:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 17:47:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485824.753227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLSpD-0002yF-L6; Fri, 27 Jan 2023 17:47:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485824.753227; Fri, 27 Jan 2023 17:47:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLSpD-0002y8-IB; Fri, 27 Jan 2023 17:47:43 +0000
Received: by outflank-mailman (input) for mailman id 485824;
 Fri, 27 Jan 2023 17:47:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gttX=5Y=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pLSpC-0002y2-K1
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 17:47:42 +0000
Received: from ppsw-32.srv.uis.cam.ac.uk (ppsw-32.srv.uis.cam.ac.uk
 [131.111.8.132]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b03d4ddf-9e6a-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 18:47:39 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:45906)
 by ppsw-32.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.136]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pLSp7-000cJ7-Ay (Exim 4.96) (return-path <amc96@srcf.net>);
 Fri, 27 Jan 2023 17:47:37 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 3DF701FB65;
 Fri, 27 Jan 2023 17:47:37 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b03d4ddf-9e6a-11ed-b8d1-410ff93cb8f0
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <ef52740c-7352-84d3-248a-5aed6f076d6d@srcf.net>
Date: Fri, 27 Jan 2023 17:47:37 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <930254a6-d0c8-4910-982a-bfd227187240@suse.com>
 <c39faba2-1ab6-71da-f748-1545aac8290b@suse.com>
 <d0a0960a-f110-c065-753d-9f000ca379a9@srcf.net>
 <0acfa717-8711-599f-4d29-d92a1466a1db@suse.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH v3 3/4] x86: limit issuing of IBPB during context switch
In-Reply-To: <0acfa717-8711-599f-4d29-d92a1466a1db@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 27/01/2023 7:51 am, Jan Beulich wrote:
> On 26.01.2023 21:49, Andrew Cooper wrote:
>> On 25/01/2023 3:26 pm, Jan Beulich wrote:
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -2015,7 +2015,8 @@ void context_switch(struct vcpu *prev, s
>>>  
>>>          ctxt_switch_levelling(next);
>>>  
>>> -        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) )
>>> +        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) &&
>>> +             !(prevd->arch.spec_ctrl_flags & SCF_entry_ibpb) )
>>>          {
>>>              static DEFINE_PER_CPU(unsigned int, last);
>>>              unsigned int *last_id = &this_cpu(last);
>>>
>>>
>> The aforementioned naming change makes the (marginal) security hole here
>> more obvious.
>>
>> When we use entry-IBPB to protect Xen, we only care about the branch
>> types in the BTB.  We don't flush the RSB when using the SMEP optimisation.
>>
>> Therefore, entry-IBPB is not something which lets us safely skip
>> exit-new-pred-context.
> Yet what's to be my takeaway? You may be suggesting to drop the patch,
> or you may be suggesting to tighten the condition. (My guess would be
> the former.)

Well - the patch can't be committed in this form.

I haven't figured out if there is a nice way to solve this, so I'm
afraid I don't have a helpful answer to your question.  I think it is
reasonable to wait a bit and see if a solution comes to mind.

I'm not averse in principle to having this optimisation (well - a
working version of it), but honestly, it's marginal right now and it has
to be weighed against whatever extra complexity is required to not open
the security hole.


And FYI, this (general issue) was precisely why I ended up not trying to
do this rearranging in XSA-407/422.  I almost missed this security hole
and acked the patch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 18:05:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 18:05:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485831.753240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLT6M-0005rY-4A; Fri, 27 Jan 2023 18:05:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485831.753240; Fri, 27 Jan 2023 18:05:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLT6M-0005rR-1O; Fri, 27 Jan 2023 18:05:26 +0000
Received: by outflank-mailman (input) for mailman id 485831;
 Fri, 27 Jan 2023 18:05:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BqQd=5Y=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLT6K-0005rL-EG
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 18:05:24 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a16028d-9e6d-11ed-b8d1-410ff93cb8f0;
 Fri, 27 Jan 2023 19:05:22 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E210A618C5;
 Fri, 27 Jan 2023 18:05:20 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3C926C433D2;
 Fri, 27 Jan 2023 18:05:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a16028d-9e6d-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674842720;
	bh=FNk5O2nMDZRZ4nsVzbrBX9duBUtNbM5zNga9Ek+YZGg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ispgKXK1vVVFxa3AQ2E/5cMSUiDFBlEHjaype2LY7U49b4Lxm8eWafy0YWxqsLios
	 nymvWqppUK/Xo4LBuk8dvGF+onNyt9MFcey3m9EncbaNqHqM/SuTRP5mNNNFdt7LVj
	 j4FJhWWZIc2WJwjqiQV8pSdR1MS9mcUkDlziRxFlS0zDzBuDsoaZtpe5N5AeowhrHV
	 jS04+2JF9XKOKVlYEsAGifun5rrDG/lgTMB3nH4gUMD+L5WsIUDQCFH747xIgO58vs
	 HZs997JaajYhREC+ym7/UtZH/M8iuYcIcOUNhfVgyFQK0LlA080Gh0nN+DxdWa1//7
	 kpFq7gcQR8p3A==
Date: Fri, 27 Jan 2023 10:05:17 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] hw/xen/xen_pt: fix uninitialized variable
In-Reply-To: <20230127050815.4155276-1-marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2301271004530.1978264@ubuntu-linux-20-04-desktop>
References: <20230127050815.4155276-1-marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1316837341-1674842720=:1978264"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1316837341-1674842720=:1978264
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 27 Jan 2023, Marek Marczykowski-Górecki wrote:
> xen_pt_config_reg_init() reads only as many bytes as the size of the
> register that is being initialized. It uses
> xen_host_pci_get_{byte,word,long} and casts its last argument to the
> expected pointer type. This means that for smaller registers the
> higher bits of 'val' are not initialized, and the function then fails
> if any of those higher bits are set.
> 
> Fix this by initializing 'val' with zero.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  hw/xen/xen_pt_config_init.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
> index cde898b744..8b9b554352 100644
> --- a/hw/xen/xen_pt_config_init.c
> +++ b/hw/xen/xen_pt_config_init.c
> @@ -1924,7 +1924,7 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
>      if (reg->init) {
>          uint32_t host_mask, size_mask;
>          unsigned int offset;
> -        uint32_t val;
> +        uint32_t val = 0;
>  
>          /* initialize emulate register */
>          rc = reg->init(s, reg_entry->reg,
> -- 
> 2.37.3
> 
--8323329-1316837341-1674842720=:1978264--


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 18:14:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 18:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485837.753253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTEr-0007Kx-16; Fri, 27 Jan 2023 18:14:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485837.753253; Fri, 27 Jan 2023 18:14:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTEq-0007Kq-Uc; Fri, 27 Jan 2023 18:14:12 +0000
Received: by outflank-mailman (input) for mailman id 485837;
 Fri, 27 Jan 2023 18:14:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BqQd=5Y=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLTEp-0007Kk-8x
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 18:14:11 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 650f0f89-9e6e-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 19:14:10 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 7849FB80B18;
 Fri, 27 Jan 2023 18:14:09 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 078C2C433D2;
 Fri, 27 Jan 2023 18:14:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 650f0f89-9e6e-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674843248;
	bh=X99K51mQo/OW5UOP/69cHAe+79Te/Xzf01qxdjLqMWs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=R1Vdz7WNSyUL03ioLDdjZWsNPaw9vgbGRDcfqA6W52lVNvC5Bb6bGeIQNnQUUH6k/
	 gWDd6c3Y2uvE725Mpu3s/y2xB2/xLwIIevTgQdZmEbn5FgXutD+gcQKr3I8QCzDMQM
	 kQ8AzJIGOeiTl5xPPy7uUy+39tRwuYezLEXrsxmb1lvWRWNUHDiG4NdXIIv3FOjM7W
	 jsIyWUEYC9VFBSZqCkDCn1mub9DACb2cAfpkAa1FNsIFBAKWamh+ej9OhE7HGOOuNK
	 sv3rIt/VeBQNhOhMctqAY45JRm1eX5qgOjhgPMoyVhlMBHdSc6mBmmzPF8Wk4oalzh
	 N/10ST6WkZ6zg==
Date: Fri, 27 Jan 2023 10:14:05 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Gianluca Guida <gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>, 
    Alistair Francis <alistair.francis@wdc.com>
Subject: Re: [PATCH v7 2/2] automation: add RISC-V smoke test
In-Reply-To: <e2d722a5f3fffc5708c1cc99efad63ab04d25ec3.1674819203.git.oleksii.kurochko@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2301271013460.1978264@ubuntu-linux-20-04-desktop>
References: <cover.1674819203.git.oleksii.kurochko@gmail.com> <e2d722a5f3fffc5708c1cc99efad63ab04d25ec3.1674819203.git.oleksii.kurochko@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 27 Jan 2023, Oleksii Kurochko wrote:
> Add a check for the message 'Hello from C env' in the log file, to be
> sure that the stack is set up and the C part of early printk is
> working.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> ---
> Changes in V7:
>  - Fix dependency for the qemu-smoke-riscv64-gcc job
> ---
> Changes in V6:
>  - Rename container name in test.yaml for .qemu-riscv64.
> ---
> Changes in V5:
>   - Nothing changed
> ---
> Changes in V4:
>   - Nothing changed
> ---
> Changes in V3:
>   - Nothing changed
>   - All mentioned comments by Stefano in Xen mailing list will be
>     fixed as a separate patch out of this patch series.
> ---
>  automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
>  automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
>  2 files changed, 40 insertions(+)
>  create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index afd80adfe1..4dbe1b8af7 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -54,6 +54,19 @@
>    tags:
>      - x86_64
>  
> +.qemu-riscv64:
> +  extends: .test-jobs-common
> +  variables:
> +    CONTAINER: archlinux:current-riscv64
> +    LOGFILE: qemu-smoke-riscv64.log
> +  artifacts:
> +    paths:
> +      - smoke.serial
> +      - '*.log'
> +    when: always
> +  tags:
> +    - x86_64
> +
>  .yocto-test:
>    extends: .test-jobs-common
>    script:
> @@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
>    needs:
>      - debian-unstable-clang-debug
>  
> +qemu-smoke-riscv64-gcc:
> +  extends: .qemu-riscv64
> +  script:
> +    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - .gcc-riscv64-cross-build

This is wrong, I think it should be: archlinux-current-gcc-riscv64 ?


>  # Yocto test jobs
>  yocto-qemuarm64:
>    extends: .yocto-test-arm64
> diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
> new file mode 100755
> index 0000000000..e0f06360bc
> --- /dev/null
> +++ b/automation/scripts/qemu-smoke-riscv64.sh
> @@ -0,0 +1,20 @@
> +#!/bin/bash
> +
> +set -ex
> +
> +# Run the test
> +rm -f smoke.serial
> +set +e
> +
> +timeout -k 1 2 \
> +qemu-system-riscv64 \
> +    -M virt \
> +    -smp 1 \
> +    -nographic \
> +    -m 2g \
> +    -kernel binaries/xen \
> +    |& tee smoke.serial
> +
> +set -e
> +(grep -q "Hello from C env" smoke.serial) || exit 1
> +exit 0
> -- 
> 2.39.0
> 


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 18:33:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 18:33:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485848.753263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTWi-0001ix-Jx; Fri, 27 Jan 2023 18:32:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485848.753263; Fri, 27 Jan 2023 18:32:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTWi-0001iq-GT; Fri, 27 Jan 2023 18:32:40 +0000
Received: by outflank-mailman (input) for mailman id 485848;
 Fri, 27 Jan 2023 18:32:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLTWh-0001ig-4Y; Fri, 27 Jan 2023 18:32:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLTWh-0005BF-1N; Fri, 27 Jan 2023 18:32:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLTWg-00067n-Lg; Fri, 27 Jan 2023 18:32:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLTWg-0005iQ-L5; Fri, 27 Jan 2023 18:32:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LtAh0H56e0o7aMCjB1MtmyPTyLqCrERyzDyL2phwQro=; b=QWPvymcA33TRoqjb9VBxjNG0FI
	LTrAWeIkr8mB4bRfc+Nlf8uh+1u7IfKjkXas57Kz/EjVb2PgP6GiU0o79Y99Efhs/vkYP8ChzLRIQ
	wfHZ8a8QVDik8Lv+k3nmh/753lsExPiR5r5HiFEu22K9g6zaoqrx2jGHo3Uwktr7otK8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176245-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176245: regressions - trouble: broken/fail/pass/starved
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-vhd:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:host-install:broken:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-cubietruck:capture-logs(22):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:capture-logs:broken:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:host-install(5):broken:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:capture-logs(9):broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=7c46948a6e9cf47ed03b0d489fde894ad46f1437
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 18:32:38 +0000

flight 176245 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176245/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-libvirt-qcow2    <job status>                 broken
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 test-arm64-arm64-xl-vhd         <job status>                 broken  in 176143
 test-arm64-arm64-xl-xsm         <job status>                 broken  in 176143
 test-arm64-arm64-xl-seattle     <job status>                 broken  in 176143
 test-arm64-arm64-xl-credit2     <job status>                 broken  in 176143
 build-arm64-pvops               <job status>                 broken  in 176223
 build-arm64-xsm                 <job status>                 broken  in 176223
 build-arm64-pvops          4 host-install(4) broken in 176223 REGR. vs. 173462
 build-arm64-xsm            4 host-install(4) broken in 176223 REGR. vs. 173462
 test-arm64-arm64-xl-seattle   8 xen-boot       fail in 176135 REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot      fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot         fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot     fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot       fail in 176143 REGR. vs. 173462

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle  5 host-install(5) broken in 176143 pass in 176135
 test-arm64-arm64-xl-vhd      5 host-install(5) broken in 176143 pass in 176245
 test-arm64-arm64-xl-xsm      5 host-install(5) broken in 176143 pass in 176245
 test-arm64-arm64-xl-credit2  5 host-install(5) broken in 176143 pass in 176245
 test-armhf-armhf-xl-arndale   5 host-install(5)          broken pass in 176143
 test-armhf-armhf-xl-credit1   5 host-install(5)          broken pass in 176143
 test-armhf-armhf-libvirt      5 host-install(5)          broken pass in 176143
 test-armhf-armhf-xl           5 host-install(5)          broken pass in 176143
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)        broken pass in 176143
 test-armhf-armhf-examine      5 host-install             broken pass in 176143
 test-armhf-armhf-xl-credit2   5 host-install(5)          broken pass in 176143
 test-armhf-armhf-xl-cubietruck 22 capture-logs(22)       broken pass in 176143
 test-armhf-armhf-xl-vhd       5 host-install(5)          broken pass in 176143
 test-armhf-armhf-xl-multivcpu  5 host-install(5)         broken pass in 176143
 test-armhf-armhf-libvirt-raw  5 host-install(5)          broken pass in 176143
 test-armhf-armhf-examine      6 capture-logs             broken pass in 176143
 test-armhf-armhf-xl-rtds      5 host-install(5)          broken pass in 176223
 test-amd64-amd64-xl-xsm       8 xen-boot         fail in 176135 pass in 176245
 test-arm64-arm64-xl-vhd       8 xen-boot         fail in 176135 pass in 176245
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 176135 pass in 176245
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 176135 pass in 176245
 test-arm64-arm64-examine      8 reboot           fail in 176143 pass in 176245
 test-arm64-arm64-xl           8 xen-boot         fail in 176143 pass in 176245
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 176143 pass in 176245
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 176143 pass in 176245
 test-arm64-arm64-libvirt-raw  8 xen-boot         fail in 176143 pass in 176245
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 176223

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot       fail in 176143 REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 176223 n/a
 test-armhf-armhf-libvirt-qcow2  6 capture-logs(6)     broken blocked in 173462
 test-armhf-armhf-xl-credit1   6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-libvirt      6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-credit2   6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-arndale   6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-rtds      6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl           6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-vhd       6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-multivcpu  6 capture-logs(6)      broken blocked in 173462
 test-armhf-armhf-libvirt-raw  6 capture-logs(6)       broken blocked in 173462
 test-armhf-armhf-xl-rtds  9 capture-logs(9) broken in 176223 blocked in 173462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                7c46948a6e9cf47ed03b0d489fde894ad46f1437
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  111 days
Failing since        173470  2022-10-08 06:21:34 Z  111 days  231 attempts
Testing same since   176135  2023-01-26 00:10:53 Z    1 days    5 attempts

------------------------------------------------------------
3442 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      broken  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-step test-armhf-armhf-libvirt-qcow2 capture-logs(6)
broken-step test-armhf-armhf-xl-credit1 capture-logs(6)
broken-step test-armhf-armhf-libvirt capture-logs(6)
broken-step test-armhf-armhf-xl-credit2 capture-logs(6)
broken-step test-armhf-armhf-xl-arndale capture-logs(6)
broken-step test-armhf-armhf-xl-rtds capture-logs(6)
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)
broken-step test-armhf-armhf-xl-credit1 host-install(5)
broken-step test-armhf-armhf-libvirt host-install(5)
broken-step test-armhf-armhf-xl host-install(5)
broken-step test-armhf-armhf-xl capture-logs(6)
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)
broken-step test-armhf-armhf-examine host-install
broken-step test-armhf-armhf-xl-credit2 host-install(5)
broken-step test-armhf-armhf-xl-cubietruck capture-logs(22)
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)
broken-step test-armhf-armhf-examine capture-logs
broken-step test-armhf-armhf-xl-vhd capture-logs(6)
broken-step test-armhf-armhf-xl-multivcpu capture-logs(6)
broken-step test-armhf-armhf-libvirt-raw capture-logs(6)
broken-job test-arm64-arm64-xl-vhd broken
broken-job test-arm64-arm64-xl-xsm broken
broken-job test-arm64-arm64-xl-seattle broken
broken-job test-arm64-arm64-xl-credit2 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken

Not pushing.

(No revision log; it would be 529026 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 18:33:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 18:33:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485853.753273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTXX-0002F2-UW; Fri, 27 Jan 2023 18:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485853.753273; Fri, 27 Jan 2023 18:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTXX-0002Ev-RM; Fri, 27 Jan 2023 18:33:31 +0000
Received: by outflank-mailman (input) for mailman id 485853;
 Fri, 27 Jan 2023 18:33:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z/4q=5Y=amd.com=stefano.stabellini@srs-se1.protection.inumbo.net>)
 id 1pLTXW-0002Ef-Fs
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 18:33:30 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on2071.outbound.protection.outlook.com [40.107.95.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 16aa28f3-9e71-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 19:33:28 +0100 (CET)
Received: from BN9PR03CA0583.namprd03.prod.outlook.com (2603:10b6:408:10d::18)
 by DM6PR12MB4138.namprd12.prod.outlook.com (2603:10b6:5:220::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.25; Fri, 27 Jan
 2023 18:33:25 +0000
Received: from BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10d:cafe::ce) by BN9PR03CA0583.outlook.office365.com
 (2603:10b6:408:10d::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22 via Frontend
 Transport; Fri, 27 Jan 2023 18:33:24 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT017.mail.protection.outlook.com (10.13.177.93) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.25 via Frontend Transport; Fri, 27 Jan 2023 18:33:24 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 27 Jan
 2023 12:33:24 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 27 Jan
 2023 10:33:23 -0800
Received: from ubuntu-20.04.2-arm64.shared (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2375.34 via Frontend Transport; Fri, 27 Jan 2023 12:33:23 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16aa28f3-9e71-11ed-a5d9-ddcf98b90cbd
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VDpTCARloKEXYb1a1fNoeDgoyBwc2SLNXt5VgNHvEAx9oL0MOx1Gw55Qgcy57zbTP/thOu3Z38D4gfPTsR2wyTjI6l6sTmzpuxBSYoERHL0LV4qJ7Kq66BlZx8olK6guaOgMiCdOzYWh/E7bCPTNY7JMMafT12jNor12Lix0HuRZN9lVqKcvSFKBFsDeqLGX9BpxoBaQdqr20S2rQXr1PSmZih7yOe7s8Hq7SPHqbNfs8jSpBohsREVP6TrxfNpV0xfJjglqVPewo0IWnrApYRdedfsZHkna90a0i12fYaGmimodOXSJSnNp2Flp6CjmFsi9/dXipcpH6NYQISbXww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dUeS2AzeEXhqUIkTnSCzDW1sKcFG5DEArAfKABQ9hws=;
 b=MqwGRqLkJNXdmlUS/ryWYQJhTx4uuKSnU6Hb2gqmCcS+24WTYaj/iRUqn4A+hThkBJMZAs7p57qIYG4bwjvkWjLMdXtlF+/hSmfLPVajYsAPSbTCw3RAUz7+9ttaannEPAyBeeEadPW66f1xAV5piQqP+PVvSuLuMm03YZ3LhPq8fqEzpxwKDF392MejEok3ofncd7TT7ldQ87E3d5WJ+sqsb3vZAiMQiuF3FAb1yI+mpWC+BQ5QrQ51GdqOQS0+aYxg7ExClQt69Zegz1cTBdJfdUudAVg68gCZoyy1Bp10NPAolOIkuCAQldOYKxQ0BAnVs92pOHaCLxW7einXpw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dUeS2AzeEXhqUIkTnSCzDW1sKcFG5DEArAfKABQ9hws=;
 b=IJRZ8Y+TT60chdr/ti9uL3hX5BG9j/f/zwAbFBxNc1SfI1eunnWGAt6BXk/3luPXTZvUtYQmzwnt8tppuxZXWJHjaoiXfLQg7tirp8Qt867JbFiOlHtfRQ8ALgb/35m9R75rr6NxVlWVpxEQFbDPOe1p+fPUS8MCUDgWx6itwDU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Date: Fri, 27 Jan 2023 10:33:22 -0800
From: Stefano Stabellini <stefano.stabellini@amd.com>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <stefano.stabellini@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, <george.dunlap@citrix.com>,
	<andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<Bertrand.Marquis@arm.com>, <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
In-Reply-To: <db97da84-5f23-e7ed-119b-74aed02fb573@suse.com>
Message-ID: <alpine.DEB.2.22.394.2301271016360.1978264@ubuntu-linux-20-04-desktop>
References: <20230125205735.2662514-1-sstabellini@kernel.org> <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com> <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop> <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
 <alpine.DEB.2.22.394.2301261041440.1978264@ubuntu-linux-20-04-desktop> <db97da84-5f23-e7ed-119b-74aed02fb573@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT017:EE_|DM6PR12MB4138:EE_
X-MS-Office365-Filtering-Correlation-Id: 09be02f5-a381-4cef-bbe6-08db0094f99c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+H4ASlV/jbPqETH5kqgAHRzcDUStq5v2kqwk/FO8m/4H1+xWt1pPQpp7kSXvNS2z5kgrF11kXzGXV4ETI9CKnV8bCUgqhA0IeN040+49DdHqaVTTwaWZixu9UDuE/5E3iWP0MXZL4M1F2OhchMJiMEvFtKXab6FXzd+pUZHaG56neSlb09M9+H3txX51zzDMolW235PxlWGAIAe9/TnHDPtRwAt7U8tPho9VD/lIa0/nfTOjydF9YZFn0lAjgpkLNrrv7XV935Q87UaK39BC0ekwUHLRRjiJQrpGMLajVasEue21J1OYy38bH1+72V82oF+rkP+TW7pAPXtbbex7IpMzkuIWZjkPWs4l4d+Y2tGW2zInLS1sf0XWdKYRMpJDuXud3PfPYypad2gGQdhQHPX0WwXdySVopURd/qnMEnJu4EkLxM2RCkswp870y7KhJ/NlbTzXGm8EzTVmC1qqWb+IdRliebmeIMLO/DomoVYDNjCEmpeGoYiaSMkEHxb09DTyDqoQyoznHyIMSV4cs4DF+nnRpszmKL+dl9KkfQaXhpGoIrlfh6j4Pb8R5sRdlgJnKEBgFpLgrB/bHTbuZxdRHosXTnQbcJxyOAByXCIKGRU31VAlF9hcwzIdJRkoUmr6HDmgidFtWJs7zDrOTrMajLZdR8mAlIsceyzJH+AtDeYuixZZqdWKleyNx9Tj1nn/UcOVX8RotzCMeQDbEFmVgwuPTaIRWA04Jo+5SIY=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(7916004)(4636009)(346002)(376002)(39860400002)(136003)(396003)(451199018)(46966006)(40470700004)(36840700001)(336012)(426003)(83380400001)(356005)(186003)(81166007)(53546011)(82740400003)(36860700001)(9686003)(33716001)(82310400005)(2906002)(47076005)(26005)(40480700001)(40460700003)(54906003)(316002)(8676002)(478600001)(4326008)(5660300002)(44832011)(6916009)(41300700001)(70206006)(70586007)(8936002)(86362001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jan 2023 18:33:24.5702
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 09be02f5-a381-4cef-bbe6-08db0094f99c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4138

On Fri, 27 Jan 2023, Jan Beulich wrote:
> On 26.01.2023 19:54, Stefano Stabellini wrote:
> > Coming back to 18.2: it makes sense for Xen, and the scanners today work
> > well with this rule, so I think we are fine.
> 
> I disagree. 

OK. I'll resend this patch, removing 18.2. I'll mark it appropriately in
the sheet as well.


> Looking back at the sheet, it says "rule already followed by
> the community in most cases" which I assume was based on there being
> only very few violations that are presently reported. Now we've found
> the frame_table[] issue, I'm inclined to say that the statement was put
> there by mistake (due to that oversight).

cppcheck is unable to find these violations; we know cppcheck has
limitations and that's OK.

Eclair is excellent and finds violations (including the frame_table[]
issue you mentioned), but currently it doesn't read configs from xen.git
and we cannot run a test to see if adding a couple of deviations for 2
macros removes most of the violations. If we want to use Eclair as a
reference (which could be a good idea), then I think we need better
integration. I'll talk to Roberto and see if we can arrange something
better.

I am writing this on the assumption that if I could show that, for
example, adding 2 deviations reduces the Eclair violations to fewer than
10, then we could adopt the rule. Would that be acceptable to you, as a
process?


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 18:35:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 18:35:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485863.753283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTZm-00036J-EY; Fri, 27 Jan 2023 18:35:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485863.753283; Fri, 27 Jan 2023 18:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTZm-00036C-Bg; Fri, 27 Jan 2023 18:35:50 +0000
Received: by outflank-mailman (input) for mailman id 485863;
 Fri, 27 Jan 2023 18:35:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BqQd=5Y=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLTZl-000364-FL
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 18:35:49 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6ac31c6d-9e71-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 19:35:48 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 3533EB8214D;
 Fri, 27 Jan 2023 18:35:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A1002C433EF;
 Fri, 27 Jan 2023 18:35:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ac31c6d-9e71-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674844545;
	bh=hpHAcdSFOtCLhFqbX/JA+8llrM6/bNsw7JHhCmB7eMM=;
	h=From:To:Cc:Subject:Date:From;
	b=lhofPJvfW1aIU6810KInaKJVTLgTtzeQlPMzsthBXJy5CUj3Q6q6hdF5DGay1ypyO
	 exKMsMu9gZuxwzBXEgHtEQY2qAIAASln9TB+ufGL40dTlX7zf60rcPb63Cm+awgNd5
	 iE4o/RbJrZPcNmoqLhZ8Q0RzxOhbYXJPGdk21ERhRfG45unRggpWwa62tk2+CaME58
	 cMUjBtSQn+EnZlLKW9VwV8mUIxpOUwxotIJ5aKxbuPmDJo9B4jBxCFxuLk/eufItnd
	 ZaB62Hrh++ou18Y8oA37W141VTQaJkS7gOZNevkMTPsTQYbif59lZgWpLAmcZdzoM2
	 1WDVHoZ4W5EfQ==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	george.dunlap@citrix.com,
	andrew.cooper3@citrix.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	Bertrand.Marquis@arm.com,
	julien@xen.org,
	stefano.stabellini@amd.com
Subject: [PATCH v2] Add more rules to docs/misra/rules.rst
Date: Fri, 27 Jan 2023 10:35:41 -0800
Message-Id: <20230127183541.3013908-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

As agreed during the last MISRA C discussion, I am adding the following
MISRA C rules: 7.1, 7.3, 18.3.

I am also adding 13.1, which was "agreed pending an analysis on the amount
of violations". cppcheck reports zero violations.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
Changes in v2:
- remove 18.2
---
 docs/misra/rules.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
index dcceab9388..83f01462f7 100644
--- a/docs/misra/rules.rst
+++ b/docs/misra/rules.rst
@@ -138,6 +138,16 @@ existing codebase are work-in-progress.
      - Single-bit named bit fields shall not be of a signed type
      -
 
+   * - `Rule 7.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_07_01.c>`_
+     - Required
+     - Octal constants shall not be used
+     -
+
+   * - `Rule 7.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_07_03.c>`_
+     - Required
+     - The lowercase character l shall not be used in a literal suffix
+     -
+
    * - `Rule 8.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_01.c>`_
      - Required
      - Types shall be explicitly specified
@@ -200,6 +210,11 @@ existing codebase are work-in-progress.
        expression which has potential side effects
      -
 
+   * - `Rule 13.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_01_1.c>`_
+     - Required
+     - Initializer lists shall not contain persistent side effects
+     -
+
    * - `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
      - Required
      - A loop counter shall not have essentially floating type
@@ -227,6 +242,11 @@ existing codebase are work-in-progress.
        static keyword between the [ ]
      -
 
+   * - `Rule 18.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_03.c>`_
+     - Required
+     - The relational operators > >= < and <= shall not be applied to objects of pointer type except where they point into the same object
+     -
+
    * - `Rule 19.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_19_01.c>`_
      - Mandatory
      - An object shall not be assigned or copied to an overlapping
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 18:37:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 18:37:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485868.753293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTbD-0003fZ-OP; Fri, 27 Jan 2023 18:37:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485868.753293; Fri, 27 Jan 2023 18:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTbD-0003fS-Le; Fri, 27 Jan 2023 18:37:19 +0000
Received: by outflank-mailman (input) for mailman id 485868;
 Fri, 27 Jan 2023 18:37:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=in7c=5Y=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pLTbC-0003ey-FT
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 18:37:18 +0000
Received: from hedgehog.birch.relay.mailchannels.net
 (hedgehog.birch.relay.mailchannels.net [23.83.209.81])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e668548-9e71-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 19:37:16 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id 0CCE1821481
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 18:37:14 +0000 (UTC)
Received: from pdx1-sub0-mail-a305.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id 6B8838210A4
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 18:37:13 +0000 (UTC)
Received: from pdx1-sub0-mail-a305.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.109.196.237 (trex/6.7.1); Fri, 27 Jan 2023 18:37:13 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a305.dreamhost.com (Postfix) with ESMTPSA id 4P3RBc6yQ7zRW
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 10:37:12 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e00cb by kmjvbox (DragonFly Mail Agent v0.12);
 Fri, 27 Jan 2023 10:37:10 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e668548-9e71-11ed-a5d9-ddcf98b90cbd
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1674844633; a=rsa-sha256;
	cv=none;
	b=4LFIQX4S3tZOfDEp+xy33gVo7jaLUXDb6wet+ASLjAPV261Zy61B5oNRM5WO3f38Mv4Cwh
	kWCYXPe4G/NV2JsOZIKd2D/0QYiTpixH3dwMHJvS8Epo+CRktRJ5BIKedNQ/1lrWzhRZHz
	4B5SgdQrkx9t/bDlKK/BGDxY7glJeowCCNXBeVjxtCqfs+thsPdqIkVYAjeF3kCDCRmArl
	vAE/IhN9Y/6DOdnP9Ju7fHfYh5MJhh7sFen4muBlkc/+UP9+PuOukJ5qxueAgFaSWkrcp0
	DYj8fMag+5MC4fFE7aSdqYhzfl4Yi5zrLuJ+ReFl5qwKPQDXYAzITEdwFXK4qA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1674844633;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:dkim-signature;
	bh=vC3M/mfIsIhsrlC/2752bkkfLE5eFiFR7/YgAX2CWkQ=;
	b=yl6L32KYOt6vAiden3mXGMIuOW4QJGmIyvYbS4QACsve5R6toAG8wKPXp3JPI3LHF6t9Y3
	2O8Xt0n7CngQl3ckXltPxZ96u/0S+/BiKZhElUpX9d6A0VU9TQ1A9fxve7L5zZFQImeFv0
	jp8YOtj+kq66VKuR6QbG8PA6NzA7lZF/iD0TM3qjUMM+PC7Ep6bMEys300k6itNDTY9aFD
	MSl9v2LMjA+2MGPrT22/QilrhXUn0fDhA3tipI1PmuIt5/n7yY+oCyoRp45zLZp1CbktGO
	rIP0qlUy9fFwFLhC8mHcGa94YtAcD6vm7TSe0W85Auu1Uifq87XIxo7IHUrnhg==
ARC-Authentication-Results: i=1;
	rspamd-5fb8f68d88-ns49n;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Cellar-Desert: 69ec801c50ba458a_1674844633707_1631481390
X-MC-Loop-Signature: 1674844633707:3962874044
X-MC-Ingress-Time: 1674844633707
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1674844633;
	bh=vC3M/mfIsIhsrlC/2752bkkfLE5eFiFR7/YgAX2CWkQ=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=LQDyrquGMufpYTWZcxE/lTdolWwj4LkVZJr7xMOWeniv+6FbHxPDpGUL3frJDSNnm
	 3yD9p80QhCepSzi+Ft5o8pkZuF9AfftmEGiNxNFiS1sP45KPX7Bq6VZ7CJIcpZgIYh
	 H3z+JMWbZ8Wb2fEx5regjVx3GyxlQpBI56EoMqG0=
Date: Fri, 27 Jan 2023 10:37:10 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230127183710.GA1955@templeofstupid.com>
References: <20230125184506.GE1963@templeofstupid.com>
 <77576aab-93bf-5f6a-9b04-17eaf1d84ffb@suse.com>
 <20230126180244.GB1959@templeofstupid.com>
 <0c4eb4ba-4314-02c7-62d5-b08a3573fcc2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <0c4eb4ba-4314-02c7-62d5-b08a3573fcc2@suse.com>

On Fri, Jan 27, 2023 at 08:16:18AM +0100, Jan Beulich wrote:
> On 26.01.2023 19:02, Krister Johansen wrote:
> > On Thu, Jan 26, 2023 at 10:57:01AM +0100, Jan Beulich wrote:
> >> On 25.01.2023 19:45, Krister Johansen wrote:
> >>> --- a/xen/include/public/arch-x86/cpuid.h
> >>> +++ b/xen/include/public/arch-x86/cpuid.h
> >>> @@ -72,6 +72,14 @@
> >>>   * Sub-leaf 2: EAX: host tsc frequency in kHz
> >>>   */
> >>>  
> >>> +#define XEN_CPUID_TSC_EMULATED               (1u << 0)
> >>> +#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
> >>> +#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
> >>> +#define XEN_CPUID_TSC_MODE_DEFAULT           (0)
> >>> +#define XEN_CPUID_TSC_MODE_EMULATE           (1u)
> >>> +#define XEN_CPUID_TSC_MODE_NOEMULATE         (2u)
> >>> +#define XEN_CPUID_TSC_MODE_NOEMULATE_TSC_AUX (3u)
> >>
> >> Actually I think we'd better stick to the names found in asm/time.h
> >> (and then replace their uses, dropping the #define-s there). If you
> >> agree, I'd be happy to make the adjustment while committing.
> > 
> > Just to confirm, this would be moving these:
> > 
> >    #define TSC_MODE_DEFAULT          0
> >    #define TSC_MODE_ALWAYS_EMULATE   1
> >    #define TSC_MODE_NEVER_EMULATE    2
> >    
> > To cpuid.h?  I'm generally fine with this.  I don't see anything in
> > Linux that's using these names.  The only question I have is whether
> > we'd still want to prefix the names with XEN so that if they're pulled
> > in to Linux it's clear that the define is Xen specific?  E.g. something
> > like this perhaps?
> > 
> >    #define XEN_TSC_MODE_DEFAULT          0
> >    #define XEN_TSC_MODE_ALWAYS_EMULATE   1
> >    #define XEN_TSC_MODE_NEVER_EMULATE    2
> > 
> > That does increase the number of files we'd need to touch to make the
> > change, though. (And the other defines in that file all start with
> > XEN_CPUID).
> > 
> > Though, if you mean doing it this way:
> > 
> >    #define XEN_CPUID_TSC_MODE_DEFAULT          0
> >    #define XEN_CPUID_TSC_MODE_ALWAYS_EMULATE   1
> >    #define XEN_CPUID_TSC_MODE_NEVER_EMULATE    2
> >  
> > then no objection to that at all.  Apologies for overlooking the naming
> > overlap when I put this together the first time.
> 
> Yes, it's the last variant you list that I was after. And I'd be okay to
> leave dropping the so far private constants to a separate follow-on patch.

Ok, thanks. I'll send you a v3 that makes these changes, unless you've
already fixed this up and committed the v2.  In that case, feel free to
disregard.

-K


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 18:51:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 18:51:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485875.753303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTof-0006FM-V5; Fri, 27 Jan 2023 18:51:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485875.753303; Fri, 27 Jan 2023 18:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTof-0006FF-R3; Fri, 27 Jan 2023 18:51:13 +0000
Received: by outflank-mailman (input) for mailman id 485875;
 Fri, 27 Jan 2023 18:51:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=in7c=5Y=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pLToe-0006F9-9b
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 18:51:12 +0000
Received: from crocodile.elm.relay.mailchannels.net
 (crocodile.elm.relay.mailchannels.net [23.83.212.45])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8ef94deb-9e73-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 19:51:09 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id 03CC35C1203
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 18:51:07 +0000 (UTC)
Received: from pdx1-sub0-mail-a305.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id 955D55C1A79
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 18:51:06 +0000 (UTC)
Received: from pdx1-sub0-mail-a305.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.109.138.31 (trex/6.7.1); Fri, 27 Jan 2023 18:51:06 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a305.dreamhost.com (Postfix) with ESMTPSA id 4P3RVd36b7zT5
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 10:51:05 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e00cb by kmjvbox (DragonFly Mail Agent v0.12);
 Fri, 27 Jan 2023 10:51:03 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ef94deb-9e73-11ed-a5d9-ddcf98b90cbd
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1674845466; a=rsa-sha256;
	cv=none;
	b=l3uUapRDUt9TyoUvdX/uRZzR5zQCdxtd+SZfzjpNagftL1bVVJnR+36ruWHU0zVPzbz+QS
	YADlLButB1GDI1AqEupaOLyVcnhkvjWLTihuc+VLKUKLXVVN/9DRFgB03smi0Z5yjWx8YU
	LVxllGZupg7qgmEeOWrAQMJCp3tmLxqGVBzS1E3oJzv/baRq+keAcS0c4FCCbl6rAuNRBM
	Iw6qW/Y0iUwHU9Vzex3C2rKkLWmNGjFgpxDutgyA58/Dcinpb9/FII1qS/8Bc0y09v8fJ+
	axpBbnaIrmV/Xhixopn+msh3u9M5vD1APVJkXyw4r5BwIBXEcb7ck5y1n3mnLA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1674845466;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:dkim-signature;
	bh=/tENzN1fZmTiIr9IRlRDDiiliEJeTf+ZkNKhBh7jCCQ=;
	b=aDS+HgXS2Lc409JnGpQ7BqEj5j1ty7AwsvblABp0cIQVwFLpUY4TaUdLOsXqnhMKT2AYLh
	EWgL0AMckdlP/PUOcotkKWVVnid5JUXa7HYcjS9+b/lZfw9H7bezkRho6gORMOTJdUuFrW
	o/XxIw0bnb6wYCEI23QfHKEopXZId/zQp1fZZYBXzLz9IZy0dqDBkkQIygAhb76Z19lP8+
	xnwKNUPME3smRjbEjReDWB8W5cBZYWPWrbdg4BbUClHtVd+gBrviO0GbmyZmeBfLpg4bNb
	PW3x7Z5j8gsCcRcy3BPNs8LzPxL85Ci+oVZtJaP8dTi2V19yKl0Z3wiJSfCS9w==
ARC-Authentication-Results: i=1;
	rspamd-544f66f495-b824g;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Imminent-Whimsical: 183d5168448ab42e_1674845466854_2630497509
X-MC-Loop-Signature: 1674845466854:2979937404
X-MC-Ingress-Time: 1674845466854
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1674845465;
	bh=/tENzN1fZmTiIr9IRlRDDiiliEJeTf+ZkNKhBh7jCCQ=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=pCWLym6dy9sMeKjHPfNiKgthih4RtF6rtqAZl1H7vIiklQ7pDBgvor19ccYJin+SK
	 kdKlIdXMrJsFmaF1NVOYaWfwmUQJyND7Xr5v2cvYAXEIpbZ7qXqGt4JSOilqSTmF3u
	 b+2LnG8v/lDeFKpU7dAMS49qETdDmrKeQTriVv9k=
Date: Fri, 27 Jan 2023 10:51:03 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>
Subject: [PATCH v3] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230127185103.GB1955@templeofstupid.com>
References: <0c4eb4ba-4314-02c7-62d5-b08a3573fcc2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <0c4eb4ba-4314-02c7-62d5-b08a3573fcc2@suse.com>

Cpuid leaf 4 contains information about the state of the TSC: whether it is
emulated, its mode, and some additional details.  A commit that is queued
for Linux would like to use this to determine whether the TSC mode has been
set to 'no emulation' in order to decide which clocksource is more reliable.

Expose this information in the public API headers so that it can
subsequently be imported into Linux and used there.

Link: https://lore.kernel.org/xen-devel/eda8d9f2-3013-1b68-0df8-64d7f13ee35e@suse.com/
Link: https://lore.kernel.org/xen-devel/0835453d-9617-48d5-b2dc-77a2ac298bad@oracle.com/
Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
---
v3:
  - Additional formatting cleanups (feedback from Jan Beulich)
  - Ensure that TSC_MODE #defines match the names of those in time.h (feedback
    from Jan Beulich)
v2:
  - Fix whitespace between comment and #defines (feedback from Jan Beulich)
  - Add tsc mode 3: no emulate TSC_AUX (feedback from Jan Beulich)
---
 xen/include/public/arch-x86/cpuid.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index 7ecd16ae05..3fbd912dcd 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -72,6 +72,15 @@
  * Sub-leaf 2: EAX: host tsc frequency in kHz
  */
 
+#define XEN_CPUID_TSC_EMULATED               (1u << 0)
+#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
+#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
+
+#define XEN_CPUID_TSC_MODE_DEFAULT               (0)
+#define XEN_CPUID_TSC_MODE_ALWAYS_EMULATE        (1u)
+#define XEN_CPUID_TSC_MODE_NEVER_EMULATE         (2u)
+#define XEN_CPUID_TSC_MODE_NEVER_EMULATE_TSC_AUX (3u)
+
 /*
  * Leaf 5 (0x40000x04)
  * HVM-specific features
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 18:55:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 18:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485881.753313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTtA-00073M-Fs; Fri, 27 Jan 2023 18:55:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485881.753313; Fri, 27 Jan 2023 18:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLTtA-00073F-CV; Fri, 27 Jan 2023 18:55:52 +0000
Received: by outflank-mailman (input) for mailman id 485881;
 Fri, 27 Jan 2023 18:55:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLTt9-000739-Ib
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 18:55:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLTt9-0005jI-BT; Fri, 27 Jan 2023 18:55:51 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLTt9-0008VI-3T; Fri, 27 Jan 2023 18:55:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=Pp20tM9NtELwv5dXWmU95bkMysXiBUg791NET6iHyfY=; b=t3i9qm
	UXJSYfDdLu6L3v9dK044e4R2+jjG1ARSiC8SAeKhhNr3YaRlOQesOqD6iQlZnOL6UBOAzhNRFVGct
	Xv5MhsSFvoy5p61UeyYLgCEqKLycM5o31Ooe80Sk8g86rJWkdN+fD0Ph2CvKhVn1FCZtRx+Yid46R
	Pn5e6udcrh0=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/xenstored: hashtable: Constify the parameters of hashfn/eqfn
Date: Fri, 27 Jan 2023 18:55:46 +0000
Message-Id: <20230127185546.65760-1-julien@xen.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

The parameters of hashfn/eqfn should never be modified. So constify
them and propagate the const to the users.

Take the opportunity to fix some coding style issues in the modified
code.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/hashtable.c        | 16 ++++++++--------
 tools/xenstore/hashtable.h        | 10 +++++-----
 tools/xenstore/xenstored_core.c   |  8 ++++----
 tools/xenstore/xenstored_domain.c |  8 ++++----
 4 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index 30eb9f21d250..3d4466b59756 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -23,8 +23,8 @@ struct hashtable {
     unsigned int entrycount;
     unsigned int loadlimit;
     unsigned int primeindex;
-    unsigned int (*hashfn) (void *k);
-    int (*eqfn) (void *k1, void *k2);
+    unsigned int (*hashfn) (const void *k);
+    int (*eqfn) (const void *k1, const void *k2);
 };
 
 /*
@@ -53,8 +53,8 @@ indexFor(unsigned int tablelength, unsigned int hashvalue) {
 /*****************************************************************************/
 struct hashtable *
 create_hashtable(const void *ctx, unsigned int minsize,
-                 unsigned int (*hashf) (void*),
-                 int (*eqf) (void*,void*),
+                 unsigned int (*hashf) (const void *),
+                 int (*eqf) (const void *, const void *),
                  unsigned int flags)
 {
     struct hashtable *h;
@@ -92,7 +92,7 @@ err0:
 
 /*****************************************************************************/
 unsigned int
-hash(struct hashtable *h, void *k)
+hash(const struct hashtable *h, const void *k)
 {
     /* Aim to protect against poor hash functions by adding logic here
      * - logic taken from java 1.4 hashtable source */
@@ -151,7 +151,7 @@ hashtable_expand(struct hashtable *h)
 
 /*****************************************************************************/
 unsigned int
-hashtable_count(struct hashtable *h)
+hashtable_count(const struct hashtable *h)
 {
     return h->entrycount;
 }
@@ -188,7 +188,7 @@ hashtable_insert(struct hashtable *h, void *k, void *v)
 
 /*****************************************************************************/
 void * /* returns value associated with key */
-hashtable_search(struct hashtable *h, void *k)
+hashtable_search(const struct hashtable *h, const void *k)
 {
     struct entry *e;
     unsigned int hashvalue, index;
@@ -206,7 +206,7 @@ hashtable_search(struct hashtable *h, void *k)
 
 /*****************************************************************************/
 void
-hashtable_remove(struct hashtable *h, void *k)
+hashtable_remove(struct hashtable *h, const void *k)
 {
     /* TODO: consider compacting the table when the load factor drops enough,
      *       or provide a 'compact' method. */
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index 4e2823134eb3..cc0090f13378 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -24,8 +24,8 @@ struct hashtable;
 
 struct hashtable *
 create_hashtable(const void *ctx, unsigned int minsize,
-                 unsigned int (*hashfunction) (void*),
-                 int (*key_eq_fn) (void*,void*),
+                 unsigned int (*hashfunction) (const void *),
+                 int (*key_eq_fn) (const void *, const void *),
                  unsigned int flags
 );
 
@@ -61,7 +61,7 @@ hashtable_insert(struct hashtable *h, void *k, void *v);
  */
 
 void *
-hashtable_search(struct hashtable *h, void *k);
+hashtable_search(const struct hashtable *h, const void *k);
 
 /*****************************************************************************
  * hashtable_remove
@@ -72,7 +72,7 @@ hashtable_search(struct hashtable *h, void *k);
  */
 
 void
-hashtable_remove(struct hashtable *h, void *k);
+hashtable_remove(struct hashtable *h, const void *k);
 
 /*****************************************************************************
  * hashtable_count
@@ -82,7 +82,7 @@ hashtable_remove(struct hashtable *h, void *k);
  * @return      the number of items stored in the hashtable
  */
 unsigned int
-hashtable_count(struct hashtable *h);
+hashtable_count(const struct hashtable *h);
 
 /*****************************************************************************
  * hashtable_iterate
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 4f00e0cdc0cf..7348f935bc26 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2386,9 +2386,9 @@ void setup_structure(bool live_update)
 	}
 }
 
-static unsigned int hash_from_key_fn(void *k)
+static unsigned int hash_from_key_fn(const void *k)
 {
-	char *str = k;
+	const char *str = k;
 	unsigned int hash = 5381;
 	char c;
 
@@ -2399,9 +2399,9 @@ static unsigned int hash_from_key_fn(void *k)
 }
 
 
-static int keys_equal_fn(void *key1, void *key2)
+static int keys_equal_fn(const void *key1, const void *key2)
 {
-	return 0 == strcmp((char *)key1, (char *)key2);
+	return 0 == strcmp(key1, key2);
 }
 
 int remember_string(struct hashtable *hash, const char *str)
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 9ef41ede03ae..d7fc2fafc729 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -916,14 +916,14 @@ void dom0_init(void)
 	xenevtchn_notify(xce_handle, dom0->port);
 }
 
-static unsigned int domhash_fn(void *k)
+static unsigned int domhash_fn(const void *k)
 {
-	return *(unsigned int *)k;
+	return *(const unsigned int *)k;
 }
 
-static int domeq_fn(void *key1, void *key2)
+static int domeq_fn(const void *key1, const void *key2)
 {
-	return *(unsigned int *)key1 == *(unsigned int *)key2;
+	return *(const unsigned int *)key1 == *(const unsigned int *)key2;
 }
 
 void domain_init(int evtfd)
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:05:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:05:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485886.753324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLU2L-0000Jl-DA; Fri, 27 Jan 2023 19:05:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485886.753324; Fri, 27 Jan 2023 19:05:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLU2L-0000Je-8d; Fri, 27 Jan 2023 19:05:21 +0000
Received: by outflank-mailman (input) for mailman id 485886;
 Fri, 27 Jan 2023 19:05:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLU2K-0000JY-0i
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:05:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLU2J-0005vR-HY; Fri, 27 Jan 2023 19:05:19 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLU2J-0000WN-90; Fri, 27 Jan 2023 19:05:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=fTCKKhrIkTqrwRgYKyo0rDxZRUPeduOTNvM+XVhaz38=; b=DgxBbM
	mOUg6ujloGoy/HYwpEdEKwCZjdFnuBPi2KPaSj5d2f6JmrGXLsrnMgmKywdndo129tC7Xi8NbST+E
	T20kRz+dDCpOPQeh1RiytevD3az6D2HdbK9lgcQKEBX1FGcYzCqxXhu8YjG95X7EvgcnLCn1UAoGL
	ZCV0hPM4JAw=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/common: Constify the parameter of _spin_is_locked()
Date: Fri, 27 Jan 2023 19:05:16 +0000
Message-Id: <20230127190516.52994-1-julien@xen.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

The lock is not meant to be modified by _spin_is_locked(). So constify
it.

This is helpful to be able to assert the lock is taken when the
underlying structure is const.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/spinlock.c      | 2 +-
 xen/include/xen/spinlock.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 84996c3fbc1f..a15f0a2eb667 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -368,7 +368,7 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
     local_irq_restore(flags);
 }
 
-int _spin_is_locked(spinlock_t *lock)
+int _spin_is_locked(const spinlock_t *lock)
 {
     /*
      * Recursive locks may be locked by another CPU, yet we return
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 2fa6ba36548e..ca40c71c88f9 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -185,7 +185,7 @@ void _spin_unlock(spinlock_t *lock);
 void _spin_unlock_irq(spinlock_t *lock);
 void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags);
 
-int _spin_is_locked(spinlock_t *lock);
+int _spin_is_locked(const spinlock_t *lock);
 int _spin_trylock(spinlock_t *lock);
 void _spin_barrier(spinlock_t *lock);
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:20:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485902.753333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUGG-0002AS-LW; Fri, 27 Jan 2023 19:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485902.753333; Fri, 27 Jan 2023 19:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUGG-0002AL-IX; Fri, 27 Jan 2023 19:19:44 +0000
Received: by outflank-mailman (input) for mailman id 485902;
 Fri, 27 Jan 2023 19:19:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUGF-0002AD-GI
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:19:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUGE-0006KB-Ur; Fri, 27 Jan 2023 19:19:42 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUGE-00013e-Oq; Fri, 27 Jan 2023 19:19:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	References:Cc:To:From:Subject:MIME-Version:Date:Message-ID;
	bh=uHaXHL/FVNly5UZIdc3Ga9SPTYKmG02IgzTGGybur6I=; b=Yq4FK+kW1oOq9Rl6jhjyICqw8B
	g5k31CFRoq4rmZeEbQqcPF5Bsqkps4CDo3bU2CogYFdVsHA3r35IwQcgiIBp6C1MwCV2qkqpEZ2zz
	4LU1FHqNlSNxP3hHwdfDdSDWx1JKKD92pBflVKvAzk9Dlyh/uXjNJJhglTFkm9ATO6Bg=;
Message-ID: <d1240124-8dd5-74c5-381a-0ee0edb49c43@xen.org>
Date: Fri, 27 Jan 2023 19:19:40 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 10/14] xen/arm32: head: Widen the use of the temporary
 mapping
Content-Language: en-US
From: Julien Grall <julien@xen.org>
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-11-julien@xen.org>
 <0271e540-d3b0-fb9b-0f66-015abb45231c@amd.com>
 <5c18827c-ffc2-1c31-bd7c-812ca05c4bc3@xen.org>
In-Reply-To: <5c18827c-ffc2-1c31-bd7c-812ca05c4bc3@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 24/01/2023 19:43, Julien Grall wrote:
>>> -        /*
>>> -         * This will override the link to boot_second in 
>>> XEN_FIRST_SLOT.
>>> -         * The page-tables are not live yet. So no need to use
>>> -         * break-before-make.
>>> -         */
>>>           create_table_entry boot_pgtable, boot_second_id, r9, 1
>>>           create_table_entry boot_second_id, boot_third_id, r9, 2
>> Do we need to duplicate this if we just did the same in 
>> create_page_tables before branching to
>> use_temporary_mapping?
> 
> Hmmm... Possibly not. I will give a try and let you know.

I confirm this is not necessary. So I have removed the two lines.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:31:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:31:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485909.753345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUR8-0004fm-Lk; Fri, 27 Jan 2023 19:30:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485909.753345; Fri, 27 Jan 2023 19:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUR8-0004ff-JC; Fri, 27 Jan 2023 19:30:58 +0000
Received: by outflank-mailman (input) for mailman id 485909;
 Fri, 27 Jan 2023 19:30:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUR7-0004fZ-Bv
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:30:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUR6-0006YJ-Vx; Fri, 27 Jan 2023 19:30:56 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUR6-0001TA-PS; Fri, 27 Jan 2023 19:30:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=lPZItNN6eNhhUYlN0kWXTystKfOq9xQ6Lt3Jg7aTSL8=; b=NHtZXYwK43tc7fxQv9oQw+9CUH
	KLZlvy6XHkU3ycAA5tb/j33oWz8mTPyU8QYPJFzIMAvs3XSYNTtZuWvJdZy6evYXYgpjLxDpV22S3
	HifwDnjK+llYlD3N4/0WxC4Ns32f94f71vfXnCSLFuExlaLXA4L08N5EOxcOhLFC+vME=;
Message-ID: <5c1428a5-155f-b582-75b2-395f88086d15@xen.org>
Date: Fri, 27 Jan 2023 19:30:55 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 12/14] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-13-julien@xen.org>
 <ddbcb326-b158-daa4-e9d2-42c420983497@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <ddbcb326-b158-daa4-e9d2-42c420983497@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 16/01/2023 08:53, Michal Orzel wrote:
> On 13/01/2023 11:11, Julien Grall wrote:
>> +static void __init prepare_boot_identity_mapping(void)
>> +{
>> +    paddr_t id_addr = virt_to_maddr(_start);
>> +    lpae_t pte;
>> +    DECLARE_OFFSETS(id_offsets, id_addr);
>> +
>> +    /*
>> +     * We will be re-using the boot ID tables. They may not have been
>> +     * zeroed but they should be unlinked. So it is fine to use
>> +     * clear_page().
>> +     */
>> +    clear_page(boot_first_id);
>> +    clear_page(boot_second_id);
>> +    clear_page(boot_third_id);
>> +
>> +    if ( id_offsets[0] != 0 )
>> +        panic("Cannot handled ID mapping above 512GB\n");
> I might be lost but didn't we say before that we can load Xen in the first 2TB?
> Then, how does this check correspond to it?

I forgot to change the check after we decided to extend the reserved 
area from 512GB to 2TB. I will update it in the next version.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:40:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:40:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485914.753355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUZc-0005Xc-G8; Fri, 27 Jan 2023 19:39:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485914.753355; Fri, 27 Jan 2023 19:39:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUZc-0005XV-Da; Fri, 27 Jan 2023 19:39:44 +0000
Received: by outflank-mailman (input) for mailman id 485914;
 Fri, 27 Jan 2023 19:39:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUZb-0005XP-Qo
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:39:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUZb-0006hH-CZ; Fri, 27 Jan 2023 19:39:43 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUZb-0001nb-7D; Fri, 27 Jan 2023 19:39:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=mxU+Z0A/FyzUGWyvDzJ3yQKsr/satDe7LU9nvwy1YJg=; b=aEmlhlZYmr+ulss/GjI3KCVN7+
	Uf+0W54QWqCCwqEWA/OAKb8pkBTCSobuqEBvhGq3X6/CfsiTYPhHOZ48jSyKykC0hX2KZZkUFwg28
	I2VvUTdAtAqlBj/E25sRnA0jCVSzRVnpHgKarrN/9nG97YosQTkLCtEued27qrfLNVkI=;
Message-ID: <73b31683-1ffc-b708-467a-9ba628a1fd1d@xen.org>
Date: Fri, 27 Jan 2023 19:39:41 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 14/14] xen/arm64: smpboot: Directly switch to the
 runtime page-tables
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113101136.479-1-julien@xen.org>
 <20230113101136.479-15-julien@xen.org>
 <8A0AD684-FB21-46B3-A0C9-DE0BF67030D0@arm.com>
 <69C4635C-1C1D-4F00-813B-83DF9E6D825D@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <69C4635C-1C1D-4F00-813B-83DF9E6D825D@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16/01/2023 09:06, Luca Fancellu wrote:
> 
> Hi Julien,

Hi Luca,

>>
>> I’ve left the boards to test all night, so on Monday I will be 100% sure this serie
>> Is not introducing any issue.
> 
> The serie passed the overnight tests on neoverse board, raspberry pi 4, Juno board.

Thanks for testing!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:55:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485921.753368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUob-0008Hh-TV; Fri, 27 Jan 2023 19:55:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485921.753368; Fri, 27 Jan 2023 19:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUob-0008Ha-QQ; Fri, 27 Jan 2023 19:55:13 +0000
Received: by outflank-mailman (input) for mailman id 485921;
 Fri, 27 Jan 2023 19:55:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUoa-0008HU-Sk
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:55:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUoa-00077l-Ex; Fri, 27 Jan 2023 19:55:12 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUoa-0002YX-54; Fri, 27 Jan 2023 19:55:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=zl61osytHpssWg6N5x3XEIa5GQM4XDkKGDQRne3hbt0=; b=JHVV/J
	SBLTjDbXLw5LvH6y8CzusDSZ1zct6hMWPta6Bh8foqGNSZCa+kQOIeGUebliqpvWrEQsqAvobr7se
	hM2neWMSlddzHVifVHSGGYiwZ2dCxtEr+MwmdwlKpkdp+ZvNcOHpPUpEkG/iI48s7MrDcyP+i9T/k
	wRKxEfvKIVc=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 0/5] xen/arm: Don't switch TTBR while the MMU is on
Date: Fri, 27 Jan 2023 19:55:03 +0000
Message-Id: <20230127195508.2786-1-julien@xen.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Hi all,

Currently, Xen on Arm will switch TTBR whilst the MMU is on. This is
similar to replacing existing mappings with new ones. So we need to
follow a break-before-make sequence.

When switching the TTBR, we need to temporarily disable the MMU
before updating the TTBR. This means the page-tables must contain an
identity mapping.

The current memory layout is not very flexible and has a higher chance
to clash with the identity mapping.

On Arm64, we have plenty of unused virtual address space. Therefore, we
can simply reshuffle the layout to leave the first part of the virtual
address space empty.

On Arm32, the virtual address space is already quite full. Even if we
found space, it would be necessary to have a dynamic layout. So a
different approach will be necessary. The chosen one is to have
a temporary mapping that will be used to jump from the ID mapping
to the runtime mapping (or vice versa). The temporary mapping will
overlap with the domheap area, as the latter should not be in use when
switching the MMU on/off.

The Arm32 part is not yet addressed and will be handled in a follow-up
series.

After this series, most of Xen page-table code should be compliant
with the Arm Arm. The last two issues I am aware of are:
 - domheap: Mappings are replaced without using the Break-Before-Make
   approach.
 - The cache is not cleaned/invalidated when updating the page-tables
   with Data cache off (like during early boot).

The long term plan is to get rid of the boot_* page-tables and then
directly use the runtime pages. This means that, for cache coloring, we
will need to build the page-tables in the relocated Xen rather than the
current Xen.

For convenience, I pushed a branch with everything applied:

https://xenbits.xen.org/git-http/people/julieng/xen-unstable.git
branch boot-pt-rework-v5

Cheers,

Julien Grall (5):
  xen/arm32: head: Widen the use of the temporary mapping
  xen/arm64: Rework the memory layout
  xen/arm64: mm: Introduce helpers to prepare/enable/disable the
    identity mapping
  xen/arm64: mm: Rework switch_ttbr()
  xen/arm64: smpboot: Directly switch to the runtime page-tables

 xen/arch/arm/arm32/head.S           |  85 +++------------
 xen/arch/arm/arm32/smpboot.c        |   4 +
 xen/arch/arm/arm64/Makefile         |   1 +
 xen/arch/arm/arm64/head.S           |  82 +++++++-------
 xen/arch/arm/arm64/mm.c             | 161 ++++++++++++++++++++++++++++
 xen/arch/arm/arm64/smpboot.c        |  15 ++-
 xen/arch/arm/include/asm/arm32/mm.h |   4 +
 xen/arch/arm/include/asm/arm64/mm.h |  13 +++
 xen/arch/arm/include/asm/config.h   |  34 ++++--
 xen/arch/arm/include/asm/mm.h       |   2 +
 xen/arch/arm/include/asm/setup.h    |  11 ++
 xen/arch/arm/include/asm/smp.h      |   1 +
 xen/arch/arm/mm.c                   |  19 ++--
 xen/arch/arm/smpboot.c              |   1 +
 14 files changed, 305 insertions(+), 128 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mm.c

-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:55:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485923.753389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUof-0000Lp-BR; Fri, 27 Jan 2023 19:55:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485923.753389; Fri, 27 Jan 2023 19:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUof-0000Li-8S; Fri, 27 Jan 2023 19:55:17 +0000
Received: by outflank-mailman (input) for mailman id 485923;
 Fri, 27 Jan 2023 19:55:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUod-0000Bh-Cd
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:55:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUod-000787-1x; Fri, 27 Jan 2023 19:55:15 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUoc-0002YX-Pz; Fri, 27 Jan 2023 19:55:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=CzYvwBg1e2Stcnln9R9Sv6yl3KyIDrD+Aww1TKBlI6Q=; b=LJPExbYL83QNOP+mwSjwZxQc1Y
	wcfjFXOGLhHmm8B3+sg59K3wcp3D+xluGB6uXof1LigwsHRANpBeqriKpvfyJN7VZnvL63sPb7nDx
	/Tlqv3hWcwc6bbCNd+74SmTs+mIseQ81cc23ZTSX92OGjK0Zw1hnAEOCyefN+s3x/GRc=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Luca Fancellu <luca.fancellu@arm.com>
Subject: [PATCH v5 2/5] xen/arm64: Rework the memory layout
Date: Fri, 27 Jan 2023 19:55:05 +0000
Message-Id: <20230127195508.2786-3-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230127195508.2786-1-julien@xen.org>
References: <20230127195508.2786-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Xen is currently not fully compliant with the Arm Arm because it will
switch the TTBR with the MMU on.

In order to be compliant, we need to disable the MMU before
switching the TTBR. The implication is the page-tables should
contain an identity mapping of the code switching the TTBR.

In most cases we expect Xen to be loaded in low memory. I am aware
of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
To give us some slack, consider that Xen may be loaded in the first 2TB
of the physical address space.

The memory layout is reshuffled to keep the first four slots of the zeroeth
level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
tweak of the boot code because XEN_VIRT_START cannot be used as an
immediate.

This reshuffle will make it trivial to create a 1:1 mapping when Xen is
loaded below 2TB.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

---
    Changes in v5:
        - We are reserving 4 slots rather than 2.
        - Fix the addresses in the layout comment.
        - Fix the size of the region in the layout comment
        - Add Luca's tested-by (the reviewed-by was not added
          because of the changes requested by Michal)
        - Add Michal's reviewed-by

    Changes in v4:
        - Correct the documentation
        - The start address is 2TB, so slot0 is 4 not 2.

    Changes in v2:
        - Reword the commit message
        - Load Xen at 2TB + 2MB
        - Update the documentation to reflect the new layout
---
 xen/arch/arm/arm64/head.S         |  3 ++-
 xen/arch/arm/include/asm/config.h | 34 +++++++++++++++++++++----------
 xen/arch/arm/mm.c                 | 11 +++++-----
 3 files changed, 31 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 4a3f87117c83..663f5813b12e 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -607,7 +607,8 @@ create_page_tables:
          * need an additional 1:1 mapping, the virtual mapping will
          * suffice.
          */
-        cmp   x19, #XEN_VIRT_START
+        ldr   x0, =XEN_VIRT_START
+        cmp   x19, x0
         bne   1f
         ret
 1:
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 5df0e4c4959b..e388462c23d1 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -72,16 +72,13 @@
 #include <xen/page-size.h>
 
 /*
- * Common ARM32 and ARM64 layout:
+ * ARM32 layout:
  *   0  -   2M   Unmapped
  *   2M -   4M   Xen text, data, bss
  *   4M -   6M   Fixmap: special-purpose 4K mapping slots
  *   6M -  10M   Early boot mapping of FDT
  *   10M - 12M   Livepatch vmap (if compiled in)
  *
- * ARM32 layout:
- *   0  -  12M   <COMMON>
- *
  *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
  * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
  *                    space
@@ -90,14 +87,23 @@
  *   2G -   4G   Domheap: on-demand-mapped
  *
  * ARM64 layout:
- * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
- *   0  -  12M   <COMMON>
+ * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
+ *
+ *  Reserved to identity map Xen
+ *
+ * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
+ *  (Relative offsets)
+ *   0  -   2M   Unmapped
+ *   2M -   4M   Xen text, data, bss
+ *   4M -   6M   Fixmap: special-purpose 4K mapping slots
+ *   6M -  10M   Early boot mapping of FDT
+ *  10M -  12M   Livepatch vmap (if compiled in)
  *
  *   1G -   2G   VMAP: ioremap and early_ioremap
  *
  *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
  *
- * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
+ * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
  *  Unused
  *
  * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
@@ -107,7 +113,17 @@
  *  Unused
  */
 
+#ifdef CONFIG_ARM_32
 #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
+#else
+
+#define SLOT0_ENTRY_BITS  39
+#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
+#define SLOT0_ENTRY_SIZE  SLOT0(1)
+
+#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
+#endif
+
 #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
 
 #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
@@ -163,10 +179,6 @@
 
 #else /* ARM_64 */
 
-#define SLOT0_ENTRY_BITS  39
-#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
-#define SLOT0_ENTRY_SIZE  SLOT0(1)
-
 #define VMAP_VIRT_START  GB(1)
 #define VMAP_VIRT_SIZE   GB(1)
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index f758cad545fa..0b0edf28d57a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -153,7 +153,7 @@ static void __init __maybe_unused build_assertions(void)
 #endif
     /* Page table structure constraints */
 #ifdef CONFIG_ARM_64
-    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
+    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START) < 2);
 #endif
     BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
 #ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
@@ -496,10 +496,11 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     phys_offset = boot_phys_offset;
 
 #ifdef CONFIG_ARM_64
-    p = (void *) xen_pgtable;
-    p[0] = pte_of_xenaddr((uintptr_t)xen_first);
-    p[0].pt.table = 1;
-    p[0].pt.xn = 0;
+    pte = pte_of_xenaddr((uintptr_t)xen_first);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] = pte;
+
     p = (void *) xen_first;
 #else
     p = (void *) cpu0_pgtable;
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:55:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485922.753379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUod-00004w-4W; Fri, 27 Jan 2023 19:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485922.753379; Fri, 27 Jan 2023 19:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUod-0008WV-1K; Fri, 27 Jan 2023 19:55:15 +0000
Received: by outflank-mailman (input) for mailman id 485922;
 Fri, 27 Jan 2023 19:55:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUoc-0008KF-0g
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:55:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUob-00077v-Na; Fri, 27 Jan 2023 19:55:13 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUob-0002YX-Dr; Fri, 27 Jan 2023 19:55:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=0Wafxwr667wMVGYsY7VCWFAdY7fU0KHeCxw0wBvtoLk=; b=gRg5qVIsN8NtNq1+c7C2rfluzs
	P05so9Fdw4hyfuxerx18okCtt5facoq5/RkcrKhUTNxZoXb9Eu9JdKvCmfGSY0NBDeG0hVcN5iunk
	LuY+jRzxme8m/6dmTl5DiBjSt0UlwWNn+MRTL1P9UtU5iGDmAYqwxVkvVQ8LfTHLlCT4=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 1/5] xen/arm32: head: Widen the use of the temporary mapping
Date: Fri, 27 Jan 2023 19:55:04 +0000
Message-Id: <20230127195508.2786-2-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230127195508.2786-1-julien@xen.org>
References: <20230127195508.2786-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, the temporary mapping is only used when Xen's runtime
virtual region clashes with the physical region.

In follow-up patches, we will rework how secondary CPU bring-up works,
and it will be convenient to use the fixmap area for accessing the root
page-table (which is per-CPU).

Rework the code to use the temporary mapping whenever the Xen physical
address does not overlap with the temporary mapping area.

This also has the advantage of simplifying the logic to identity map
Xen.
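The slot comparison performed by create_page_tables can be sketched in C. This is an illustrative sketch only, not Xen code: it assumes the arm32 LPAE layout where each first-level entry covers 1GB, and the value of TEMPORARY_AREA_FIRST_SLOT below is a hypothetical stand-in for the real constant in Xen's headers.

```c
#include <stdint.h>

/* Illustrative sketch only -- not Xen code. On arm32 with LPAE, each
 * first-level page-table entry covers 1GB, so the first-level slot of a
 * physical address is simply its top bits above bit 30. */
static unsigned int first_slot(uint32_t paddr)
{
    return paddr >> 30; /* 1GB per first-level entry */
}

/* Stand-in value; the real TEMPORARY_AREA_FIRST_SLOT lives in Xen's
 * arm32 headers. */
#define TEMPORARY_AREA_FIRST_SLOT 1u

/* Mirrors the "cmp/bne use_temporary_mapping" decision: the temporary
 * mapping is used whenever Xen's load address does not fall in the
 * temporary area's first-level slot. */
static int needs_temporary_mapping(uint32_t load_paddr)
{
    return first_slot(load_paddr) != TEMPORARY_AREA_FIRST_SLOT;
}
```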

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Although this patch rewrites part of the previous patch, I decided to
keep them separate to ease review.

The "follow-up patches" are still in draft at the moment. I still haven't
found a way to split them nicely without requiring too much extra work
on the coloring side.

I have provided some medium-term goals in the cover letter.

    Changes in v5:
        - Fix typo in a comment
        - No need to link boot_{second, third}_id again if we need to
          create a temporary area.

    Changes in v3:
        - Resolve conflicts after switching from "ldr rX, <label>" to
          "mov_w rX, <label>" in a previous patch

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 85 +++++++--------------------------------
 1 file changed, 15 insertions(+), 70 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index df51550baa8a..93b0af114b0c 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -459,7 +459,6 @@ ENDPROC(cpu_init)
 create_page_tables:
         /* Prepare the page-tables for mapping Xen */
         mov_w r0, XEN_VIRT_START
-        create_table_entry boot_pgtable, boot_second, r0, 1
         create_table_entry boot_second, boot_third, r0, 2
 
         /* Setup boot_third: */
@@ -479,70 +478,37 @@ create_page_tables:
         cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
         blo   1b
 
-        /*
-         * If Xen is loaded at exactly XEN_VIRT_START then we don't
-         * need an additional 1:1 mapping, the virtual mapping will
-         * suffice.
-         */
-        cmp   r9, #XEN_VIRT_START
-        moveq pc, lr
-
         /*
          * Setup the 1:1 mapping so we can turn the MMU on. Note that
          * only the first page of Xen will be part of the 1:1 mapping.
-         *
-         * In all the cases, we will link boot_third_id. So create the
-         * mapping in advance.
          */
+        create_table_entry boot_pgtable, boot_second_id, r9, 1
+        create_table_entry boot_second_id, boot_third_id, r9, 2
         create_mapping_entry boot_third_id, r9, r9
 
         /*
-         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
-         * then the 1:1 mapping will use its own set of page-tables from
-         * the second level.
+         * Find the first slot used. If the slot is not the same
+         * as TEMPORARY_AREA_FIRST_SLOT, then we will want to switch
+         * to the temporary mapping before jumping to the runtime
+         * virtual mapping.
          */
         get_table_slot r1, r9, 1     /* r1 := first slot */
-        cmp   r1, #XEN_FIRST_SLOT
-        beq   1f
-        create_table_entry boot_pgtable, boot_second_id, r9, 1
-        b     link_from_second_id
-
-1:
-        /*
-         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
-         * 1:1 mapping will use its own set of page-tables from the
-         * third level.
-         */
-        get_table_slot r1, r9, 2     /* r1 := second slot */
-        cmp   r1, #XEN_SECOND_SLOT
-        beq   virtphys_clash
-        create_table_entry boot_second, boot_third_id, r9, 2
-        b     link_from_third_id
+        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
+        bne   use_temporary_mapping
 
-link_from_second_id:
-        create_table_entry boot_second_id, boot_third_id, r9, 2
-link_from_third_id:
-        /* Good news, we are not clashing with Xen virtual mapping */
+        mov_w r0, XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, 1
         mov   r12, #0                /* r12 := temporary mapping not created */
         mov   pc, lr
 
-virtphys_clash:
+use_temporary_mapping:
         /*
-         * The identity map clashes with boot_third. Link boot_first_id and
-         * map Xen to a temporary mapping. See switch_to_runtime_mapping
-         * for more details.
+         * The identity mapping is not using the first slot
+         * TEMPORARY_AREA_FIRST_SLOT. Create a temporary mapping.
+         * See switch_to_runtime_mapping for more details.
          */
-        PRINT("- Virt and Phys addresses clash  -\r\n")
         PRINT("- Create temporary mapping -\r\n")
 
-        /*
-         * This will override the link to boot_second in XEN_FIRST_SLOT.
-         * The page-tables are not live yet. So no need to use
-         * break-before-make.
-         */
-        create_table_entry boot_pgtable, boot_second_id, r9, 1
-        create_table_entry boot_second_id, boot_third_id, r9, 2
-
         /* Map boot_second (cover Xen mappings) to the temporary 1st slot */
         mov_w r0, TEMPORARY_XEN_VIRT_START
         create_table_entry boot_pgtable, boot_second, r0, 1
@@ -675,33 +641,12 @@ remove_identity_mapping:
         /* r2:r3 := invalid page-table entry */
         mov   r2, #0x0
         mov   r3, #0x0
-        /*
-         * Find the first slot used. Remove the entry for the first
-         * table if the slot is not XEN_FIRST_SLOT.
-         */
+        /* Find the first slot used and remove it */
         get_table_slot r1, r9, 1     /* r1 := first slot */
-        cmp   r1, #XEN_FIRST_SLOT
-        beq   1f
-        /* It is not in slot 0, remove the entry */
         mov_w r0, boot_pgtable       /* r0 := root table */
         lsl   r1, r1, #3             /* r1 := Slot offset */
         strd  r2, r3, [r0, r1]
-        b     identity_mapping_removed
-
-1:
-        /*
-         * Find the second slot used. Remove the entry for the first
-         * table if the slot is not XEN_SECOND_SLOT.
-         */
-        get_table_slot r1, r9, 2     /* r1 := second slot */
-        cmp   r1, #XEN_SECOND_SLOT
-        beq   identity_mapping_removed
-        /* It is not in slot 1, remove the entry */
-        mov_w r0, boot_second        /* r0 := second table */
-        lsl   r1, r1, #3             /* r1 := Slot offset */
-        strd  r2, r3, [r0, r1]
 
-identity_mapping_removed:
         flush_xen_tlb_local r0
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:55:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485924.753399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUog-0000cZ-O8; Fri, 27 Jan 2023 19:55:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485924.753399; Fri, 27 Jan 2023 19:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUog-0000cO-L3; Fri, 27 Jan 2023 19:55:18 +0000
Received: by outflank-mailman (input) for mailman id 485924;
 Fri, 27 Jan 2023 19:55:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUoe-0000LI-M8
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:55:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUoe-00079e-EG; Fri, 27 Jan 2023 19:55:16 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUoe-0002YX-2T; Fri, 27 Jan 2023 19:55:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Fh+yMOmVBaP2YZSsLc5E3abzzFZ9ZHkYos+ktvQIIPg=; b=AvD5B0SihsYU/wP/443KJeQauh
	XAGCdS6PLzk3F1vqqrVqrxH6EHE8cXswhMniMFs9tGbBdEwZ+2Y1XN1SVkmJhaREg5BIzY8IOIYgl
	+fTKmQhtQRKhX0FMUaPS2WXXCTSxF8DVi3UpeROS3s7fXRY1Qnr+sEmM9/X7iE6uKf3w=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 3/5] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping
Date: Fri, 27 Jan 2023 19:55:06 +0000
Message-Id: <20230127195508.2786-4-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230127195508.2786-1-julien@xen.org>
References: <20230127195508.2786-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

In follow-up patches we will need to have part of Xen identity mapped in
order to safely switch the TTBR.

On some platforms, the identity mapping may have to start at address 0.
If we always kept the identity region mapped, a NULL pointer dereference
would access a valid mapping.

It would be possible to relocate Xen to avoid clashing with address 0.
However, the identity mapping is only meant to be used in very limited
places. Therefore it is better to keep the identity region invalid for
most of the time.

Two new external helpers are introduced:
    - arch_setup_page_tables() will set up the page-tables so it is
      easy to create the mapping afterwards.
    - update_identity_mapping() will create/remove the identity mapping
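As a rough illustration of the level-0 coverage discussed in this patch (each level-0 slot covers 512GB with the 4KB granule, so the 4 reserved slots cover 2TB), here is a sketch of the per-level index computation that DECLARE_OFFSETS performs. This is not Xen code, just a minimal model of the arithmetic.

```c
#include <stdint.h>

/* Illustrative sketch only -- not Xen code. With the 4KB granule, each
 * translation-table level resolves 9 bits of the address: level 3
 * starts at bit 12, level 0 at bit 39. A level-0 entry therefore maps
 * 512GB, and 4 level-0 slots map 2TB. */
static unsigned int table_offset(uint64_t addr, unsigned int level)
{
    unsigned int shift = 12 + 9 * (3 - level); /* L3 -> 12 ... L0 -> 39 */
    return (unsigned int)((addr >> shift) & 0x1ff); /* 512 entries/table */
}
```

With this model, an identity-map address is rejected exactly when its level-0 index reaches the number of reserved slots (4 in v5 of this series).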

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v5:
        - The reserved area for the identity mapping is 2TB (so 4 slots)
          rather than 512GB.

    Changes in v4:
        - Fix typo in a comment
        - Clarify which page-tables are updated

    Changes in v2:
        - Remove the arm32 part
        - Use a different logic for the boot page tables and runtime
          one because Xen may be running in a different place.
---
 xen/arch/arm/arm64/Makefile         |   1 +
 xen/arch/arm/arm64/mm.c             | 130 ++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   4 +
 xen/arch/arm/include/asm/arm64/mm.h |  13 +++
 xen/arch/arm/include/asm/config.h   |   2 +
 xen/arch/arm/include/asm/setup.h    |  11 +++
 xen/arch/arm/mm.c                   |   6 +-
 7 files changed, 165 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mm.c

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 6d507da0d44d..28481393e98f 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -10,6 +10,7 @@ obj-y += entry.o
 obj-y += head.o
 obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += mm.o
 obj-y += smc.o
 obj-y += smpboot.o
 obj-y += traps.o
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
new file mode 100644
index 000000000000..f8e0887d25bc
--- /dev/null
+++ b/xen/arch/arm/arm64/mm.c
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <xen/init.h>
+#include <xen/mm.h>
+
+#include <asm/setup.h>
+
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
+static DEFINE_PAGE_TABLE(xen_first_id);
+static DEFINE_PAGE_TABLE(xen_second_id);
+static DEFINE_PAGE_TABLE(xen_third_id);
+
+/*
+ * The identity mapping may start at physical address 0. So we don't want
+ * to keep it mapped longer than necessary.
+ *
+ * When this is called, we are still using the boot_pgtable.
+ *
+ * We need to prepare the identity mapping for both the boot page tables
+ * and runtime page tables.
+ *
+ * The logic to create the entry is slightly different because Xen may
+ * be running at a different location at runtime.
+ */
+static void __init prepare_boot_identity_mapping(void)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    lpae_t pte;
+    DECLARE_OFFSETS(id_offsets, id_addr);
+
+    /*
+     * We will be re-using the boot ID tables. They may not have been
+     * zeroed but they should be unlinked. So it is fine to use
+     * clear_page().
+     */
+    clear_page(boot_first_id);
+    clear_page(boot_second_id);
+    clear_page(boot_third_id);
+
+    if ( id_offsets[0] != 0 )
+        panic("Cannot handle ID mapping above 512GB\n");
+
+    /* Link first ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_first_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_pgtable[id_offsets[0]], pte);
+
+    /* Link second ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_second_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_first_id[id_offsets[1]], pte);
+
+    /* Link third ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_third_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_second_id[id_offsets[2]], pte);
+
+    /* The mapping in the third table will be created at a later stage */
+}
+
+static void __init prepare_runtime_identity_mapping(void)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    lpae_t pte;
+    DECLARE_OFFSETS(id_offsets, id_addr);
+
+    if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
+        panic("Cannot handle ID mapping above 2TB\n");
+
+    /* Link first ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_first_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_pgtable[id_offsets[0]], pte);
+
+    /* Link second ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_second_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_first_id[id_offsets[1]], pte);
+
+    /* Link third ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_third_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_second_id[id_offsets[2]], pte);
+
+    /* The mapping in the third table will be created at a later stage */
+}
+
+void __init arch_setup_page_tables(void)
+{
+    prepare_boot_identity_mapping();
+    prepare_runtime_identity_mapping();
+}
+
+void update_identity_mapping(bool enable)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    int rc;
+
+    if ( enable )
+        rc = map_pages_to_xen(id_addr, maddr_to_mfn(id_addr), 1,
+                              PAGE_HYPERVISOR_RX);
+    else
+        rc = destroy_xen_mappings(id_addr, id_addr + PAGE_SIZE);
+
+    BUG_ON(rc);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 8bfc906e7178..856f2dbec4ad 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -18,6 +18,10 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 
 bool init_domheap_mappings(unsigned int cpu);
 
+static inline void arch_setup_page_tables(void)
+{
+}
+
 #endif /* __ARM_ARM32_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/arm64/mm.h b/xen/arch/arm/include/asm/arm64/mm.h
index aa2adac63189..e7059a36bf17 100644
--- a/xen/arch/arm/include/asm/arm64/mm.h
+++ b/xen/arch/arm/include/asm/arm64/mm.h
@@ -1,6 +1,8 @@
 #ifndef __ARM_ARM64_MM_H__
 #define __ARM_ARM64_MM_H__
 
+extern DEFINE_PAGE_TABLE(xen_pgtable);
+
 /*
  * On ARM64, all the RAM is currently direct mapped in Xen.
  * Hence return always true.
@@ -10,6 +12,17 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
     return true;
 }
 
+void arch_setup_page_tables(void);
+
+/*
+ * Enable/disable the identity mapping in the live page-tables (i.e.
+ * the one pointed to by TTBR_EL2).
+ *
+ * Note that nested calls (e.g. enable=true, enable=true) are not
+ * supported.
+ */
+void update_identity_mapping(bool enable);
+
 #endif /* __ARM_ARM64_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index e388462c23d1..f02733e40a87 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -179,6 +179,8 @@
 
 #else /* ARM_64 */
 
+#define IDENTITY_MAPPING_AREA_NR_L0  4
+
 #define VMAP_VIRT_START  GB(1)
 #define VMAP_VIRT_SIZE   GB(1)
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2be4..66b27f2b57c1 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -166,6 +166,17 @@ u32 device_tree_get_u32(const void *fdt, int node,
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
+extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
+
+#ifdef CONFIG_ARM_64
+extern DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+#endif
+extern DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+extern DEFINE_BOOT_PAGE_TABLE(boot_third_id);
+
+/* Find where Xen will be residing at runtime and return a PT entry */
+lpae_t pte_of_xenaddr(vaddr_t);
+
 extern const char __ro_after_init_start[], __ro_after_init_end[];
 
 struct init_info
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0b0edf28d57a..e95843d88f37 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -93,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
 
 #ifdef CONFIG_ARM_64
 #define HYP_PT_ROOT_LEVEL 0
-static DEFINE_PAGE_TABLE(xen_pgtable);
+DEFINE_PAGE_TABLE(xen_pgtable);
 static DEFINE_PAGE_TABLE(xen_first);
 #define THIS_CPU_PGTABLE xen_pgtable
 #else
@@ -388,7 +388,7 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
         invalidate_icache();
 }
 
-static inline lpae_t pte_of_xenaddr(vaddr_t va)
+lpae_t pte_of_xenaddr(vaddr_t va)
 {
     paddr_t ma = va + phys_offset;
 
@@ -495,6 +495,8 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 
     phys_offset = boot_phys_offset;
 
+    arch_setup_page_tables();
+
 #ifdef CONFIG_ARM_64
     pte = pte_of_xenaddr((uintptr_t)xen_first);
     pte.pt.table = 1;
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:55:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:55:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485925.753406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUoh-0000gK-82; Fri, 27 Jan 2023 19:55:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485925.753406; Fri, 27 Jan 2023 19:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUog-0000ff-UM; Fri, 27 Jan 2023 19:55:18 +0000
Received: by outflank-mailman (input) for mailman id 485925;
 Fri, 27 Jan 2023 19:55:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUog-0000YN-5I
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:55:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUof-0007AE-Q6; Fri, 27 Jan 2023 19:55:17 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUof-0002YX-I1; Fri, 27 Jan 2023 19:55:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=KLwWHrCkNnX0Y4hNRjBGRhOv14avVuiiGcO2JM2k4f8=; b=KbqDXkzXTz4W2wfcheZWAa3pY8
	HD7KU6EE1MKv7YYXG5jEVOtmtbvpdQfat4p4sRnhtR2rmvXAQERhWPDx1WAscbiKOGf89E00LFiPq
	PLXvsAc3jfS8DAfYYUM3TfjdjL/wzW+4ae9hpYZ+sPag+EoyOF5iI1tEy5eepQdjxBoY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Luca Fancellu <luca.fancellu@arm.com>
Subject: [PATCH v5 4/5] xen/arm64: mm: Rework switch_ttbr()
Date: Fri, 27 Jan 2023 19:55:07 +0000
Message-Id: <20230127195508.2786-5-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230127195508.2786-1-julien@xen.org>
References: <20230127195508.2786-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
still on.

Switching TTBR is like replacing existing mappings with new ones. So
we need to follow the break-before-make sequence.

In this case, it means the MMU needs to be switched off while the
TTBR is updated. In order to disable the MMU, we need to first
jump to an identity mapping.

Rename switch_ttbr() to switch_ttbr_id() and create a helper on
top to temporarily map the identity mapping and call switch_ttbr_id()
via the identity address.

switch_ttbr_id() is now reworked to temporarily turn off the MMU
before updating the TTBR.

We also need to make sure the helper switch_ttbr_id() is part of the
identity mapping. So move _end_boot past it.

The arm32 code will use a different approach. So this issue is for now
only resolved on arm64.
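The enable/switch/disable ordering described above can be modelled with a small toy program. This is not Xen code; it only captures the invariant that the identity mapping must be live while switch_ttbr_id() runs (with the MMU briefly off) and is torn down again afterwards.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model only -- not Xen code. */
static bool identity_mapped;
static uint64_t current_ttbr;

static void update_identity_mapping(bool enable)
{
    assert(identity_mapped != enable); /* nested calls are unsupported */
    identity_mapped = enable;
}

static void switch_ttbr_id(uint64_t ttbr)
{
    /* In Xen this runs from the 1:1 address: MMU off -> TLB flush ->
     * write TTBR -> MMU on. Here we only record the switch. */
    assert(identity_mapped);
    current_ttbr = ttbr;
}

static void switch_ttbr(uint64_t ttbr)
{
    update_identity_mapping(true);   /* make the 1:1 mapping live */
    switch_ttbr_id(ttbr);            /* called via the identity address */
    update_identity_mapping(false);  /* unmap it again */
}
```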

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>

---
    Changes in v5:
        - Add a newline in switch_ttbr()
        - Add Luca's reviewed-by and tested-by

    Changes in v4:
        - Don't modify setup_pagetables() as we don't handle arm32.
        - Move the clearing of the boot page tables in an earlier patch
        - Fix the numbering

    Changes in v2:
        - Remove the arm32 changes. This will be addressed differently
        - Re-instate the instruction cache flush. This is not strictly
          necessary but is kept for safety.
        - Use "dsb ish" rather than "dsb sy".


    TODO:
        * Handle the case where the runtime Xen is loaded at a different
          position for cache coloring. This will be dealt with separately.
---
 xen/arch/arm/arm64/head.S     | 50 +++++++++++++++++++++++------------
 xen/arch/arm/arm64/mm.c       | 31 ++++++++++++++++++++++
 xen/arch/arm/include/asm/mm.h |  2 ++
 xen/arch/arm/mm.c             |  2 --
 4 files changed, 66 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 663f5813b12e..5efd442b24af 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -816,30 +816,46 @@ ENDPROC(fail)
  * Switch TTBR
  *
  * x0    ttbr
- *
- * TODO: This code does not comply with break-before-make.
  */
-ENTRY(switch_ttbr)
-        dsb   sy                     /* Ensure the flushes happen before
-                                      * continuing */
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
+ENTRY(switch_ttbr_id)
+        /* 1) Ensure any previous read/write have completed */
+        dsb    ish
+        isb
+
+        /* 2) Turn off MMU */
+        mrs    x1, SCTLR_EL2
+        bic    x1, x1, #SCTLR_Axx_ELx_M
+        msr    SCTLR_EL2, x1
+        isb
+
+        /*
+         * 3) Flush the TLBs.
+         * See asm/arm64/flushtlb.h for the explanation of the sequence.
+         */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        /* 4) Update the TTBR */
+        msr   TTBR0_EL2, x0
         isb
 
-        msr    TTBR0_EL2, x0
+        /*
+         * 5) Flush I-cache
+         * This should not be necessary but it is kept for safety.
+         */
+        ic     iallu
+        isb
 
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
+        /* 6) Turn on the MMU */
+        mrs   x1, SCTLR_EL2
+        orr   x1, x1, #SCTLR_Axx_ELx_M  /* Enable MMU */
+        msr   SCTLR_EL2, x1
         isb
 
         ret
-ENDPROC(switch_ttbr)
+ENDPROC(switch_ttbr_id)
 
 #ifdef CONFIG_EARLY_PRINTK
 /*
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
index f8e0887d25bc..efcd5e70ecf6 100644
--- a/xen/arch/arm/arm64/mm.c
+++ b/xen/arch/arm/arm64/mm.c
@@ -120,6 +120,37 @@ void update_identity_mapping(bool enable)
     BUG_ON(rc);
 }
 
+extern void switch_ttbr_id(uint64_t ttbr);
+
+typedef void (switch_ttbr_fn)(uint64_t ttbr);
+
+void __init switch_ttbr(uint64_t ttbr)
+{
+    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
+    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;
+    lpae_t pte;
+
+    /* Enable the identity mapping in the boot page tables */
+    update_identity_mapping(true);
+
+    /* Enable the identity mapping in the runtime page tables */
+    pte = pte_of_xenaddr((vaddr_t)switch_ttbr_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    pte.pt.ro = 1;
+    write_pte(&xen_third_id[third_table_offset(id_addr)], pte);
+
+    /* Switch TTBR */
+    fn(ttbr);
+
+    /*
+     * Disable the identity mapping in the runtime page tables.
+     * Note it is not necessary to disable it in the boot page tables
+     * because they are not going to be used by this CPU anymore.
+     */
+    update_identity_mapping(false);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 23dec574eb31..4262165ce25e 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -207,6 +207,8 @@ extern unsigned long total_pages;
 extern void setup_pagetables(unsigned long boot_phys_offset);
 /* Map FDT in boot pagetable */
 extern void *early_fdt_map(paddr_t fdt_paddr);
+/* Switch to a new root page-tables */
+extern void switch_ttbr(uint64_t ttbr);
 /* Remove early mappings */
 extern void remove_early_mappings(void);
 /* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr to the
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index e95843d88f37..0b2d31cc5d6c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -476,8 +476,6 @@ static void xen_pt_enforce_wnx(void)
     flush_xen_tlb_local();
 }
 
-extern void switch_ttbr(uint64_t ttbr);
-
 /* Clear a translation table and clean & invalidate the cache */
 static void clear_table(void *table)
 {
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 19:55:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 19:55:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485926.753419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUoi-00017Q-GV; Fri, 27 Jan 2023 19:55:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485926.753419; Fri, 27 Jan 2023 19:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLUoi-00017D-AL; Fri, 27 Jan 2023 19:55:20 +0000
Received: by outflank-mailman (input) for mailman id 485926;
 Fri, 27 Jan 2023 19:55:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pLUoh-0000q2-Fk
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 19:55:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUoh-0007AZ-5u; Fri, 27 Jan 2023 19:55:19 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pLUog-0002YX-U9; Fri, 27 Jan 2023 19:55:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=d5jjOT63wfioO3UMwK1cYouxinkUbidctQUUqa9J0K0=; b=jiZp5qIHF/dYs5XunvFsr+RL9F
	vY9lfZR9r7dDEo50GWbetj/NFxLB9PHu5CYE4LGWgv1HKx3DklCmTN2NCXd1s6ym+1AsicOBQoXdU
	vuoGEACHZz4EEQ8WAB+QXmp5pIDt3ia8FlE+EXt+0C8r0+vpga3qolaqUC+NZN8my508=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Luca Fancellu <luca.fancellu@arm.com>
Subject: [PATCH v5 5/5] xen/arm64: smpboot: Directly switch to the runtime page-tables
Date: Fri, 27 Jan 2023 19:55:08 +0000
Message-Id: <20230127195508.2786-6-julien@xen.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230127195508.2786-1-julien@xen.org>
References: <20230127195508.2786-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Switching TTBR while the MMU is on is not safe. Now that the identity
mapping will not clash with the rest of the memory layout, we can avoid
creating temporary page-tables every time a CPU is brought up.

The arm32 code will use a different approach, so for now this issue is
only resolved on arm64.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>

---
    Changes in v5:
        - Add Luca's reviewed-by and tested-by tags.

    Changes in v4:
        - Somehow I forgot to send it in v3. So re-include it.

    Changes in v2:
        - Remove arm32 code
---
 xen/arch/arm/arm32/smpboot.c   |  4 ++++
 xen/arch/arm/arm64/head.S      | 29 +++++++++--------------------
 xen/arch/arm/arm64/smpboot.c   | 15 ++++++++++++++-
 xen/arch/arm/include/asm/smp.h |  1 +
 xen/arch/arm/smpboot.c         |  1 +
 5 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/arm32/smpboot.c b/xen/arch/arm/arm32/smpboot.c
index e7368665d50d..518e9f9c7e70 100644
--- a/xen/arch/arm/arm32/smpboot.c
+++ b/xen/arch/arm/arm32/smpboot.c
@@ -21,6 +21,10 @@ int arch_cpu_up(int cpu)
     return platform_cpu_up(cpu);
 }
 
+void arch_cpu_up_finish(void)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 5efd442b24af..a61b4d3c2738 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -308,6 +308,7 @@ real_start_efi:
         bl    check_cpu_mode
         bl    cpu_init
         bl    create_page_tables
+        load_paddr x0, boot_pgtable
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
@@ -365,29 +366,14 @@ GLOBAL(init_secondary)
 #endif
         bl    check_cpu_mode
         bl    cpu_init
-        bl    create_page_tables
+        load_paddr x0, init_ttbr
+        ldr   x0, [x0]
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
         ldr   x0, =secondary_switched
         br    x0
 secondary_switched:
-        /*
-         * Non-boot CPUs need to move on to the proper pagetables, which were
-         * setup in init_secondary_pagetables.
-         *
-         * XXX: This is not compliant with the Arm Arm.
-         */
-        ldr   x4, =init_ttbr         /* VA of TTBR0_EL2 stashed by CPU 0 */
-        ldr   x4, [x4]               /* Actual value */
-        dsb   sy
-        msr   TTBR0_EL2, x4
-        dsb   sy
-        isb
-        tlbi  alle2
-        dsb   sy                     /* Ensure completion of TLB flush */
-        isb
-
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
         ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
@@ -672,9 +658,13 @@ ENDPROC(create_page_tables)
  * mapping. In other word, the caller is responsible to switch to the runtime
  * mapping.
  *
- * Clobbers x0 - x3
+ * Inputs:
+ *   x0 : Physical address of the page tables.
+ *
+ * Clobbers x0 - x4
  */
 enable_mmu:
+        mov   x4, x0
         PRINT("- Turning on paging -\r\n")
 
         /*
@@ -685,8 +675,7 @@ enable_mmu:
         dsb   nsh
 
         /* Write Xen's PT's paddr into TTBR0_EL2 */
-        load_paddr x0, boot_pgtable
-        msr   TTBR0_EL2, x0
+        msr   TTBR0_EL2, x4
         isb
 
         mrs   x0, SCTLR_EL2
diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
index 694fbf67e62a..9637f424699e 100644
--- a/xen/arch/arm/arm64/smpboot.c
+++ b/xen/arch/arm/arm64/smpboot.c
@@ -106,10 +106,23 @@ int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
 
 int arch_cpu_up(int cpu)
 {
+    int rc;
+
     if ( !smp_enable_ops[cpu].prepare_cpu )
         return -ENODEV;
 
-    return smp_enable_ops[cpu].prepare_cpu(cpu);
+    update_identity_mapping(true);
+
+    rc = smp_enable_ops[cpu].prepare_cpu(cpu);
+    if ( rc )
+        update_identity_mapping(false);
+
+    return rc;
+}
+
+void arch_cpu_up_finish(void)
+{
+    update_identity_mapping(false);
 }
 
 /*
diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index 8133d5c29572..a37ca55bff2c 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -25,6 +25,7 @@ extern void noreturn stop_cpu(void);
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
 extern int arch_cpu_up(int cpu);
+extern void arch_cpu_up_finish(void);
 
 int cpu_up_send_sgi(int cpu);
 
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae2286906..4a89b3a8345b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -500,6 +500,7 @@ int __cpu_up(unsigned int cpu)
     init_data.cpuid = ~0;
     smp_up_cpu = MPIDR_INVALID;
     clean_dcache(smp_up_cpu);
+    arch_cpu_up_finish();
 
     if ( !cpu_online(cpu) )
     {
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 20:28:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 20:28:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485955.753428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLVKN-0006nJ-2a; Fri, 27 Jan 2023 20:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485955.753428; Fri, 27 Jan 2023 20:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLVKM-0006nC-W6; Fri, 27 Jan 2023 20:28:02 +0000
Received: by outflank-mailman (input) for mailman id 485955;
 Fri, 27 Jan 2023 20:28:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLVKL-0006n2-C7; Fri, 27 Jan 2023 20:28:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLVKL-000828-7U; Fri, 27 Jan 2023 20:28:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLVKK-00038W-Th; Fri, 27 Jan 2023 20:28:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLVKK-0006Dg-TF; Fri, 27 Jan 2023 20:28:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=AGD61jE/0SAPXM+EJxLmHdCaUDCNfdAABkLpXeO8BJ4=; b=ipkE+zE0G7z5dcXi6JEGh2IVO0
	swy/3Jw2L8WZQc+e2eWppFE1qclcbyfvk4e4AGtWFwwU/FKjZsr0ghv1GpHwwSiYNitMnOUVi6IoG
	iMY4oGp27IypitzdXOJ5Gp8J5hF2c+fQL7xWagBc2svj+84oIciSZDkllPk9jtLb6ReU=;
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-shadow
Message-Id: <E1pLVKK-0006Dg-TF@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 20:28:00 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-shadow
testid guest-localmigrate

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176257/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-shadow.guest-localmigrate.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-shadow.guest-localmigrate --summary-out=tmp/176257.bisection-summary --basis-template=175994 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-shadow guest-localmigrate
Searching for failure / basis pass:
 176140 fail [host=nobling0] / 175994 [host=pinot0] 175987 [host=nobling1] 175965 [host=albana1] 175734 [host=huxelrebe1] 175726 [host=debina1] 175720 [host=huxelrebe0] 175714 [host=debina0] 175694 [host=pinot1] 175671 [host=elbling1] 175651 [host=elbling0] 175635 [host=albana0] 175624 [host=nocera1] 175601 [host=nocera0] 175592 [host=fiano1] 175573 [host=nobling1] 175569 [host=albana1] 175562 ok.
Failure / basis pass flights: 176140 / 175562
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 7eef80e06ed2282bbcec3619d860c6aacb0515d8
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#1cf02b05b27c48775a25699e61b93b8\
 14b9ae042-625eb5e96dc96aa7fddef59a08edae215527f19c git://xenbits.xen.org/xen.git#7eef80e06ed2282bbcec3619d860c6aacb0515d8-3b760245f74ab2022b1aa4da842c4545228c2e83
Loaded 10003 nodes in revision graph
Searching for test results:
 175592 [host=fiano1]
 175601 [host=nocera0]
 175612 [host=nocera1]
 175624 [host=nocera1]
 175635 [host=albana0]
 175651 [host=elbling0]
 175671 [host=elbling1]
 175694 [host=pinot1]
 175714 [host=debina0]
 175720 [host=huxelrebe0]
 175726 [host=debina1]
 175734 [host=huxelrebe1]
 175834 []
 175861 []
 175890 []
 175907 []
 175931 []
 175956 []
 175965 [host=albana1]
 175987 [host=nobling1]
 175994 [host=pinot0]
 176003 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 89cc5d96a9d1fce81cf58b6814dac62a9e07fbee
 176011 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176025 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176035 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176042 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176048 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176056 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176062 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d60c20260c7e82fe5344d06c20d718e0cc03b8b
 176076 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176091 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c d60324d8af9404014cfcc37bba09e9facfd02fcf
 176110 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 352c89f72ddb67b8d9d4e492203f8c77f85c8df1
 176121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176140 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176229 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 7eef80e06ed2282bbcec3619d860c6aacb0515d8
 176231 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 3b760245f74ab2022b1aa4da842c4545228c2e83
 176234 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 3edca52ce736297d7fcf293860cd94ef62638052
 176235 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 38525f6f73f906699f77a1af86c16b4eaad48e04
 176236 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d4994ac79ed96550f8e8c9a682d468e83db4dfe
 176237 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 83d9679db057d5736c7b5a56db06bb6bb66c3914
 176239 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1d99732f2b092173d8600fa818aee3fa51046bb0
 176243 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 661489874e87c0f6e21ac298b039aab9379f6ee0
 176244 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
 176246 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176247 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176249 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176253 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176254 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 176255 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
 176257 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 1894049fa283308d5f90446370be1ade7afe8975
 175517 [host=huxelrebe0]
 175520 [host=italia0]
 175526 [host=pinot0]
 175534 [host=pinot0]
 175541 [host=italia1]
 175554 [host=fiano0]
 175562 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 7eef80e06ed2282bbcec3619d860c6aacb0515d8
 175569 [host=albana1]
 175573 [host=nobling1]
Searching for interesting versions
 Result found: flight 175562 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f, results HASH(0x562d9d2f4e88) HASH(0x562d9d2f80b8) HASH(0x562d9d2fbc48) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96\
 dc96aa7fddef59a08edae215527f19c f588e7b7cb70800533aaa8a2a9d7a4b32d10b363, results HASH(0x562d9d2efbf0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 3edca52ce736297d7fcf293860cd94ef62638052, results HASH(0x562d9d2d9998) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f\
 0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 661489874e87c0f6e21ac298b039aab9379f6ee0, results HASH(0x562d9d2ee9c8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 83d9679db057d5736c7b5a56db06bb6bb66c3914, results HASH(0x562d9d2e7e40) For basis failure, parent search stopping at c3038e718a19\
 fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 38525f6f73f906699f77a1af86c16b4eaad48e04, results HASH(0x562d9d2cb1b8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1cf02b05b27c48775a25699e61b93b814b9ae042 7eef80e06ed2282bbcec3619d860c6aacb0515d8, results HASH(0x562d9d2e0c0\
 0) HASH(0x562d9d2fee78) Result found: flight 176003 (fail), for basis failure (at ancestor ~1002)
 Repro found: flight 176229 (pass), for basis pass
 Repro found: flight 176231 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625eb5e96dc96aa7fddef59a08edae215527f19c 20279afd732371dd2534380d27aa6d1863d82d1f
No revisions left to test, checking graph state.
 Result found: flight 176247 (pass), for last pass
 Result found: flight 176249 (fail), for first failure
 Repro found: flight 176253 (pass), for last pass
 Repro found: flight 176254 (fail), for first failure
 Repro found: flight 176255 (pass), for last pass
 Repro found: flight 176257 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  1894049fa283308d5f90446370be1ade7afe8975
  Bug not present: 20279afd732371dd2534380d27aa6d1863d82d1f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/176257/


  commit 1894049fa283308d5f90446370be1ade7afe8975
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Fri Jan 20 09:17:33 2023 +0100
  
      x86/shadow: L2H shadow type is PV32-only
      
      Like for the various HVM-only types, save a little bit of code by suitably
      "masking" this type out when !PV32.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-shadow.guest-localmigrate.{dot,ps,png,html,svg}.
----------------------------------------
176257: tolerable ALL FAIL

flight 176257 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/176257/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-shadow    18 guest-localmigrate      fail baseline untested


jobs:
 test-amd64-i386-xl-shadow                                    fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jan 27 22:07:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 22:07:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485978.753442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLWry-0002Fk-6M; Fri, 27 Jan 2023 22:06:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485978.753442; Fri, 27 Jan 2023 22:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLWry-0002Fb-3n; Fri, 27 Jan 2023 22:06:50 +0000
Received: by outflank-mailman (input) for mailman id 485978;
 Fri, 27 Jan 2023 22:06:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z+bZ=5Y=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1pLWrv-0002FV-Qk
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 22:06:48 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df26c746-9e8e-11ed-a5d9-ddcf98b90cbd;
 Fri, 27 Jan 2023 23:06:45 +0100 (CET)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id E23A732009BC;
 Fri, 27 Jan 2023 17:06:36 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Fri, 27 Jan 2023 17:06:37 -0500
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 27 Jan 2023 17:06:35 -0500 (EST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df26c746-9e8e-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:date:date:from:from
	:in-reply-to:message-id:mime-version:reply-to:sender:subject
	:subject:to:to; s=fm3; t=1674857196; x=1674943596; bh=TPBEW+uosa
	jbirowMp+jxKXraQdttyikIIaXjbTbDB4=; b=vKA95+vD+TZtLXUoXaU9BBrozP
	4dCxrxxKxnN7h88FlBZpiC/SIfFMg39I3jkG75FgjsLjSGIUOl0NBwxfV26J9ieI
	je6jNQf7G80WtS3Fsnaki4lrXpKHDB3GNaibgMxCfJDe/k/SqMEshYrcn/1akesj
	g3IibY5pM5qtlLK2K5QpVy8MGjGY7Vs2/ukZvVHc0jM22faNk/cgWP9ipu/m2swx
	42aXf1CefxttCSOQncVXGua08O3/2+Xz187Aw9Qrkhck68z3FLzqCqOzgqV80J9Y
	f8JQkP7SkDmlE3tm1Fq92kd+3B5Gs4idA+Px5wFCXlNAp/RpSzP8SbbCZ/xg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:date:date:feedback-id
	:feedback-id:from:from:in-reply-to:message-id:mime-version
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=1674857196; x=
	1674943596; bh=TPBEW+uosajbirowMp+jxKXraQdttyikIIaXjbTbDB4=; b=p
	CiQBL4Vu7YaQHF1aDsbMqLYdl83dzb9YW8yfBilrgHK7hcPBgLg//yjY0L0ilSt7
	KKeCA0AXBO69bsIzyrCXAiG0OtyvmdzzIts1Ud15ccqf/R9ZMRRQZECpJFT46Oyf
	q2kzvJKbxoszPhSunHShb4e4HAZ9rJ8+0ECvYz5oBJmZC/JdoF3IxpxZ7afreLe+
	o+2/E7qCOfMmmENohkPe9bcKhCHddrqpemLyM1tpZzMtp8XTbIqDGt8RGAMiLLkD
	VJf84rPX3EL7FhLlvEZOR882TtMn1N316/VXxCIqOQKVTQxdfA12vkwPYZrYBLdS
	zYLQ38/6YIMHEZ7tQPWig==
X-ME-Sender: <xms:7ErUYyAPNPx49IYJnWPrsLDq4XVjUHa8ET9a0XGJt6Ao69qR4p-AbA>
    <xme:7ErUY8hMNyErpTSV1tJ0g8Fk6aRpPIe8J-u9GvxHUUTSwz3W7OeH8LlcCZgedoXdV
    sM6TOVHHMCB-nE>
X-ME-Received: <xmr:7ErUY1m5RwO7NeUbZ0tdz97692yS8UV7RCd_J_x2Ssko3pvmeb72VYbcEJU>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedruddviedgudehiecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfggtggusehgtderredttddvnecuhfhrohhmpeffvghmihcu
    ofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeeiiedvfeevvddtgfduvdekieekieeg
    heelkeetffefjeekfedvgffgtdejveeljeenucevlhhushhtvghrufhiiigvpedtnecurf
    grrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhgshhl
    rggsrdgtohhm
X-ME-Proxy: <xmx:7ErUYwxN_ybDKMGOqrOmjW8tc7WcDwjr67PFK-D-wIeOg8sn3rGLEA>
    <xmx:7ErUY3RaQKUDWP9ML3TSq898ACPqEbSC4r5zjwPT-yVxLgdZCiE81g>
    <xmx:7ErUY7ZXgakXuBI15ZRYJhpAe42DtW64TaYxtEAjRVG4rmA3fOSz_g>
    <xmx:7ErUY6NQfWFYgKxpLPfiH0cXgZNUMl4RFCpQI4wo8qtytlnNMTr8nQ>
Feedback-ID: iac594737:Fastmail
Date: Fri, 27 Jan 2023 17:06:29 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Xen developer discussion <xen-devel@lists.xenproject.org>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Regular file support in Linux blkback?
Message-ID: <Y9RK6cf2Hu9vQRqN@itl-email>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="JgBqpPnRKPLzKWek"
Content-Disposition: inline


--JgBqpPnRKPLzKWek
Content-Type: text/plain; protected-headers=v1; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 27 Jan 2023 17:06:29 -0500
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Xen developer discussion <xen-devel@lists.xenproject.org>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Regular file support in Linux blkback?

What would be involved in supporting regular files in Linux blkback?  Is
it just a matter of using the call_{read,write}_iter functions for read
and write, and punting the work that cannot be done asynchronously to a
thread pool?  Or is it more complex than that?
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--JgBqpPnRKPLzKWek
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmPUSukACgkQsoi1X/+c
IsEQIhAAj84mXFmQ8iojEvwh+xC9AfAEgY0m87RnVfj4sFqiOWPZ4QK9uMApgp/R
pTVf4sjcSqtPgNMygAwLdWtJA56U2bhnJDyWYYLTBGuwgsD6P+lKEKpOulF7qebM
Tl85A6LYLVjoOr+QkprMHQC52TdIjtOiygTaD/VoaYe+FdzsPJaHRei3a8K4YmCv
ooxcyJLNbYhpZi6bJNMyZG3CunEqK/QY/A57Rl7rYxEyils9o1uZhVF5fg+T9K6T
weC5AJoPjLM4BRDsEugxPPJAhViusyF7Nbaa5r5ePDUIDgDJzD0aBAyj188SK3B3
moAIhrJbOAhphBpWSu/7d3Kz9YyuUt6mhDiUGpBblxR/OapSJeF89vrXAefqkBci
TMTUN1v6qGwfdMVl/kJNxSPq8nGOp410DsH9fGkUboMzo3awLnOOQCm2yKOLq9oU
uyTyOzprQGzOujrF67rqEBEmHHojgcmSUug7Ndk8YsBuTSvs1FoM36Fwp7GN7IN0
U9Jxtdz+HTxTJLqey4DE9Cj8xvet11eFSKfVRkJx31x1CSpKc1img0bN3sF/xijj
U/ZLa44VHZzsK+Jk8PAoYbHF14DWiY0AfHMd5MXd8nmiUcFhAbfaXy5avyySEXtB
kxjmFXZ0WTt9lJeJO+a9iY4fzOlWqf4h9Vo3TJ0kcO2f7fh5+7s=
=XMPn
-----END PGP SIGNATURE-----

--JgBqpPnRKPLzKWek--

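[Editorial note appended to the archive: the pattern the question above asks about — completing what can be done asynchronously and punting blocking work to a pool of workers — can be sketched in userspace. This is only an illustration of the punt-to-thread-pool idea, not blkback or kernel code; all names below are hypothetical.]

```python
# Minimal userspace analogue of "punt blocking I/O to a thread pool":
# requests that cannot complete asynchronously are handed to worker
# threads, and completions are delivered back via futures.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def blocking_read(path, offset, length):
    # The blocking step a backend would defer to a worker thread,
    # analogous to a read that the file's ->read_iter cannot finish
    # without sleeping.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Create a backing file standing in for the regular file behind the
# virtual block device.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"0123456789" * 10)
    backing = f.name

pool = ThreadPoolExecutor(max_workers=4)

# Submit several read requests; none of them blocks the submitter.
futures = [pool.submit(blocking_read, backing, off, 10) for off in (0, 10, 50)]
results = [fut.result() for fut in futures]
print(results[0])  # b'0123456789'

pool.shutdown()
os.unlink(backing)
```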

From xen-devel-bounces@lists.xenproject.org Fri Jan 27 23:17:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 23:17:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.485996.753452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLXxO-0002Na-9T; Fri, 27 Jan 2023 23:16:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 485996.753452; Fri, 27 Jan 2023 23:16:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLXxO-0002NT-56; Fri, 27 Jan 2023 23:16:30 +0000
Received: by outflank-mailman (input) for mailman id 485996;
 Fri, 27 Jan 2023 23:16:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLXxN-0002NJ-B4; Fri, 27 Jan 2023 23:16:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLXxN-0003eE-8e; Fri, 27 Jan 2023 23:16:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLXxM-0004YV-PK; Fri, 27 Jan 2023 23:16:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLXxM-0003ba-Ot; Fri, 27 Jan 2023 23:16:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YiCulGQPseGfDigAp/VLRXRuI5muLmFRz2BWOVUsqIc=; b=OEQKgX02ldhePx5rkS7cUPQLim
	cZBkpsEwPo/QPyVYlESawOz5LUAII7rSiXU8dAIdQkZJoFfhZeSKmD+w9TzCSIzYOTN583is56xK/
	GSlT0HFLSBjxSUekk41G/o9xMOPbDbec6C+OkGgIZ5wbfeGNjevObN12xNa74RiTr7Ak=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176259-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 176259: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=bf1c4eb6cb52785cf539eb83752dfcecfe66c5d1
X-Osstest-Versions-That:
    xtf=c0f454c68329301447fd258e47824f7d402f19e9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Jan 2023 23:16:28 +0000

flight 176259 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176259/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  bf1c4eb6cb52785cf539eb83752dfcecfe66c5d1
baseline version:
 xtf                  c0f454c68329301447fd258e47824f7d402f19e9

Last test of basis   175715  2023-01-11 01:41:46 Z   16 days
Testing same since   176259  2023-01-27 21:12:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   c0f454c..bf1c4eb  bf1c4eb6cb52785cf539eb83752dfcecfe66c5d1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jan 27 23:57:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Jan 2023 23:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486008.753462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLYas-0007Ee-Fj; Fri, 27 Jan 2023 23:57:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486008.753462; Fri, 27 Jan 2023 23:57:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLYas-0007EX-BX; Fri, 27 Jan 2023 23:57:18 +0000
Received: by outflank-mailman (input) for mailman id 486008;
 Fri, 27 Jan 2023 23:57:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BMZP=5Y=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pLYar-0007EO-1f
 for xen-devel@lists.xenproject.org; Fri, 27 Jan 2023 23:57:17 +0000
Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com
 [2a00:1450:4864:20::333])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5287e869-9e9e-11ed-b8d1-410ff93cb8f0;
 Sat, 28 Jan 2023 00:57:14 +0100 (CET)
Received: by mail-wm1-x333.google.com with SMTP id k16so4565692wms.2
 for <xen-devel@lists.xenproject.org>; Fri, 27 Jan 2023 15:57:14 -0800 (PST)
Received: from [192.168.0.114] ([196.77.14.77])
 by smtp.gmail.com with ESMTPSA id
 bi5-20020a05600c3d8500b003db0bb81b6asm5975351wmb.1.2023.01.27.15.57.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 27 Jan 2023 15:57:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5287e869-9e9e-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=83C/sqLpYcRoe72UESTSQaaVCL+73bpf4QRcam6XkOQ=;
        b=KsUF4JL9cfOoJtRyznvNR/BwaPQD7NpThV9G5oCpMmQ14t+0dU95I0IO8bOWUH4ArQ
         boN2ct+k+O39oW9PF0tjs6rSRtDGbOIGu4QJvk7Iqmz9kSTblMBSvbSkcpqFGhq1yg6Y
         27WMuEjSGgRSPVcgfz6fH5LVZtwFSUQgx2hA7QAqwn8IjnT1cwZDLMoRXg4QIJsuhik0
         IVP2lq+OKg2UqCxALQqHNNsl+PdfDzh3CcV5s6h+ISZ6NDiZDmuPmoS8xRZuNMhnt567
         oPMXOIaC6EWp0q5Q7XfHSFyn3xU+pxCVUGtBzMkAQB4zIcJ+i87QcwLl4ijwM94suV5d
         xiUg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=83C/sqLpYcRoe72UESTSQaaVCL+73bpf4QRcam6XkOQ=;
        b=Fr/tt8IYEdSpzKN6AeTeWMr4YHD7weoN+I5cNh1h5OJQ9kbdpO4mzBtBE7vecXaNB7
         cVCmcAaPWBbYGP6WWRzNq/3kODNPhg8uE8ChMqLZBkLoqcv0svCYcrPCc7VYup3XhIap
         m6Zb1HXEe+JsuqZry2wHrxEJMcne1MvV0oScbAWAI1evOQtRAJF05a/P8z6dyk+sVaY0
         MF8Wkr3zRjkbncDxvI80iAK25y+bqavhaB/NWU0+ieUvkpIO7OIPxhP0sxPhH85FeP2A
         eYC/S+ASN5SLHmvNlpRpPP810qjWghdeMoE3DXmQB7htbjXS5Y/YsTip7+mbVfE3Lh79
         nq6Q==
X-Gm-Message-State: AFqh2krmDH3HXBmwsB0BLcxonc1L/jxmvKMQc/nXW3QyE0ieL6DjlbCR
	02KrUwBTHCdGm5mBMbH0dyf4RQ==
X-Google-Smtp-Source: AMrXdXtwL+SC1CEsaB+gZBcbt+ILUbPRKm89eURC+M+uo7XJxNmJawyVIaj/H+xHnHamksdM/k6fzg==
X-Received: by 2002:a05:600c:5025:b0:3db:14dc:8cff with SMTP id n37-20020a05600c502500b003db14dc8cffmr37244799wmr.33.1674863834222;
        Fri, 27 Jan 2023 15:57:14 -0800 (PST)
Message-ID: <cd7b2036-b203-5cd7-4efe-281dfaa0c186@linaro.org>
Date: Sat, 28 Jan 2023 00:57:12 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 7/7] hw/i386/pc: Initialize ram_memory variable directly
Content-Language: en-US
To: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org
Cc: Eduardo Habkost <eduardo@habkost.net>, Paolo Bonzini
 <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-trivial@nongnu.org,
 "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>
References: <20230127164718.98156-1-shentey@gmail.com>
 <20230127164718.98156-8-shentey@gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230127164718.98156-8-shentey@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27/1/23 17:47, Bernhard Beschow wrote:
> Going through pc_memory_init() seems quite complicated for a simple
> assignment.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>

> ---
>   include/hw/i386/pc.h | 1 -
>   hw/i386/pc.c         | 2 --
>   hw/i386/pc_piix.c    | 4 ++--
>   hw/i386/pc_q35.c     | 5 ++---
>   4 files changed, 4 insertions(+), 8 deletions(-)
> 
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index 88a120bc23..5331b9a5c5 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -163,7 +163,6 @@ void xen_load_linux(PCMachineState *pcms);
>   void pc_memory_init(PCMachineState *pcms,
>                       MemoryRegion *system_memory,
>                       MemoryRegion *rom_memory,
> -                    MemoryRegion **ram_memory,
>                       uint64_t pci_hole64_size);
>   uint64_t pc_pci_hole64_start(void);
>   DeviceState *pc_vga_init(ISABus *isa_bus, PCIBus *pci_bus);
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 6e592bd969..8898cc9961 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -936,7 +936,6 @@ static hwaddr pc_max_used_gpa(PCMachineState *pcms, uint64_t pci_hole64_size)
>   void pc_memory_init(PCMachineState *pcms,
>                       MemoryRegion *system_memory,
>                       MemoryRegion *rom_memory,
> -                    MemoryRegion **ram_memory,
>                       uint64_t pci_hole64_size)
>   {
>       int linux_boot, i;
> @@ -994,7 +993,6 @@ void pc_memory_init(PCMachineState *pcms,
>        * Split single memory region and use aliases to address portions of it,
>        * done for backwards compatibility with older qemus.
>        */
> -    *ram_memory = machine->ram;
>       ram_below_4g = g_malloc(sizeof(*ram_below_4g));
>       memory_region_init_alias(ram_below_4g, NULL, "ram-below-4g", machine->ram,
>                                0, x86ms->below_4g_mem_size);
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 5bde4533cc..00ba725656 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -143,6 +143,7 @@ static void pc_init1(MachineState *machine,
>       if (xen_enabled()) {
>           xen_hvm_init_pc(pcms, &ram_memory);
>       } else {
> +        ram_memory = machine->ram;
>           if (!pcms->max_ram_below_4g) {
>               pcms->max_ram_below_4g = 0xe0000000; /* default: 3.5G */
>           }
> @@ -205,8 +206,7 @@ static void pc_init1(MachineState *machine,
>   
>       /* allocate ram and load rom/bios */
>       if (!xen_enabled()) {
> -        pc_memory_init(pcms, system_memory,
> -                       rom_memory, &ram_memory, hole64_size);
> +        pc_memory_init(pcms, system_memory, rom_memory, hole64_size);
>       } else {
>           pc_system_flash_cleanup_unused(pcms);
>           if (machine->kernel_filename != NULL) {
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index b97979bebb..8559924eb4 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -128,7 +128,7 @@ static void pc_q35_init(MachineState *machine)
>       MemoryRegion *system_io = get_system_io();
>       MemoryRegion *pci_memory;
>       MemoryRegion *rom_memory;
> -    MemoryRegion *ram_memory;
> +    MemoryRegion *ram_memory = machine->ram;
>       GSIState *gsi_state;
>       ISABus *isa_bus;
>       int i;
> @@ -215,8 +215,7 @@ static void pc_q35_init(MachineState *machine)
>       }
>   
>       /* allocate ram and load rom/bios */
> -    pc_memory_init(pcms, system_memory, rom_memory, &ram_memory,
> -                   pci_hole64_size);
> +    pc_memory_init(pcms, system_memory, rom_memory, pci_hole64_size);
>   
>       object_property_add_child(OBJECT(machine), "q35", OBJECT(phb));
>       object_property_set_link(OBJECT(phb), MCH_HOST_PROP_RAM_MEM,



From xen-devel-bounces@lists.xenproject.org Sat Jan 28 00:39:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 00:39:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486017.753482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLZF6-0004tO-7v; Sat, 28 Jan 2023 00:38:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486017.753482; Sat, 28 Jan 2023 00:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLZF6-0004tH-56; Sat, 28 Jan 2023 00:38:52 +0000
Received: by outflank-mailman (input) for mailman id 486017;
 Sat, 28 Jan 2023 00:38:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k5Ro=5Z=csail.mit.edu=srivatsa@srs-se1.protection.inumbo.net>)
 id 1pLZF5-0004sT-Ns
 for xen-devel@lists.xenproject.org; Sat, 28 Jan 2023 00:38:51 +0000
Received: from outgoing2021.csail.mit.edu (outgoing2021.csail.mit.edu
 [128.30.2.78]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1faad50f-9ea4-11ed-b8d1-410ff93cb8f0;
 Sat, 28 Jan 2023 01:38:47 +0100 (CET)
Received: from [64.186.27.43] (helo=srivatsa-dev.eng.vmware.com)
 by outgoing2021.csail.mit.edu with esmtpa (Exim 4.95)
 (envelope-from <srivatsa@csail.mit.edu>) id 1pLZER-000elb-VP;
 Fri, 27 Jan 2023 19:38:12 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1faad50f-9ea4-11ed-b8d1-410ff93cb8f0
From: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
To: tglx@linutronix.de,
	mingo@redhat.com,
	x86@kernel.org,
	peterz@infradead.org,
	hpa@zytor.com
Cc: bp@alien8.de,
	dave.hansen@linux.intel.com,
	rafael.j.wysocki@intel.com,
	paulmck@kernel.org,
	jgross@suse.com,
	seanjc@google.com,
	thomas.lendacky@amd.com,
	linux-kernel@vger.kernel.org,
	imammedo@redhat.com,
	amakhalov@vmware.com,
	ganb@vmware.com,
	ankitja@vmware.com,
	bordoloih@vmware.com,
	keerthanak@vmware.com,
	blamoreaux@vmware.com,
	namit@vmware.com,
	wyes.karny@amd.com,
	lewis.carroll@amd.com,
	pv-drivers@vmware.com,
	virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	srivatsa@csail.mit.edu
Subject: [PATCH] x86/hotplug: Remove incorrect comment about mwait_play_dead()
Date: Fri, 27 Jan 2023 16:37:51 -0800
Message-Id: <20230128003751.141317-1-srivatsa@csail.mit.edu>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>

The comment that says mwait_play_dead() returns only on failure is a
bit misleading because mwait_play_dead() could actually return for
valid reasons (such as mwait not being supported by the platform) that
do not indicate a failure of the CPU offline operation. So, remove the
comment.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
---
 arch/x86/kernel/smpboot.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 55cad72715d9..9013bb28255a 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1833,7 +1833,7 @@ void native_play_dead(void)
 	play_dead_common();
 	tboot_shutdown(TB_SHUTDOWN_WFS);
 
-	mwait_play_dead();	/* Only returns on failure */
+	mwait_play_dead();
 	if (cpuidle_play_dead())
 		hlt_play_dead();
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sat Jan 28 00:39:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 00:39:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486014.753472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLZF0-0004d8-RG; Sat, 28 Jan 2023 00:38:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486014.753472; Sat, 28 Jan 2023 00:38:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLZF0-0004d1-O7; Sat, 28 Jan 2023 00:38:46 +0000
Received: by outflank-mailman (input) for mailman id 486014;
 Sat, 28 Jan 2023 00:38:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLZEy-0004cr-VT; Sat, 28 Jan 2023 00:38:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLZEy-0006J9-RQ; Sat, 28 Jan 2023 00:38:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLZEy-0000WQ-Id; Sat, 28 Jan 2023 00:38:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLZEy-0008Vr-II; Sat, 28 Jan 2023 00:38:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JjpDeQyT/yQZcGek5vWGm4qwtR5ZDN2Pz90vJ4GFFCY=; b=tEt3y2/jNj5eN2t2ybvu1QZIVb
	NOwG2NZDADHBeUIQFeYh/he7Zn5tJgJoqeb+GKkwgiCE6ZpSB31yLZePP+0fivkKOJUVdIGViebr5
	tHuLUdI4/uCG0KDCvFwMXIUpFCG8dJs2RxzQcCnuVswJytPM+fDdLPBiFs9sO9XpTkWM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176250-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176250: trouble: broken/fail/pass/starved
X-Osstest-Failures:
    xen-4.17-testing:test-armhf-armhf-libvirt:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-credit1:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-vhd:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-credit2:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-arndale:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:host-install(5):broken:regression
    xen-4.17-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.17-testing:build-arm64-xsm:<job status>:broken:regression
    xen-4.17-testing:build-arm64-pvops:<job status>:broken:regression
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:build-arm64-xsm:host-install(4):broken:regression
    xen-4.17-testing:build-arm64-pvops:host-install(4):broken:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-reuse/final:broken:heisenbug
    xen-4.17-testing:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-4.17-testing:test-armhf-armhf-xl-rtds:host-install(5):broken:allowable
    xen-4.17-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:capture-logs(6):broken:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Jan 2023 00:38:44 +0000

flight 176250 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176250/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-libvirt-qcow2    <job status>                 broken
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)      broken REGR. vs. 175447
 test-armhf-armhf-xl-credit1   5 host-install(5)        broken REGR. vs. 175447
 test-armhf-armhf-xl-vhd       5 host-install(5)        broken REGR. vs. 175447
 test-armhf-armhf-libvirt-raw  5 host-install(5)        broken REGR. vs. 175447
 test-armhf-armhf-libvirt      5 host-install(5)        broken REGR. vs. 175447
 test-armhf-armhf-xl-credit2   5 host-install(5)        broken REGR. vs. 175447
 test-armhf-armhf-xl-multivcpu  5 host-install(5)       broken REGR. vs. 175447
 test-armhf-armhf-xl-arndale   5 host-install(5)        broken REGR. vs. 175447
 test-armhf-armhf-xl           5 host-install(5)        broken REGR. vs. 175447
 test-armhf-armhf-xl-cubietruck  5 host-install(5)      broken REGR. vs. 175447
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>          broken in 176238
 build-arm64-xsm                 <job status>                 broken  in 176238
 build-arm64-pvops               <job status>                 broken  in 176238
 build-armhf                     <job status>                 broken  in 176238
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>           broken in 176238
 build-armhf                4 host-install(4) broken in 176238 REGR. vs. 175447
 build-arm64-xsm            4 host-install(4) broken in 176238 REGR. vs. 175447
 build-arm64-pvops          4 host-install(4) broken in 176238 REGR. vs. 175447
 build-armhf                   3 syslog-server                running in 176238

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken in 176238 pass in 176250
 test-amd64-i386-qemut-rhel6hvm-intel 19 host-reuse/final broken in 176238 pass in 176250
 test-amd64-i386-qemut-rhel6hvm-amd 7 xen-install fail in 176238 pass in 176250
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail in 176238 pass in 176250
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install          fail pass in 176238

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      5 host-install(5)        broken REGR. vs. 175447

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)         blocked in 176238 n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)          blocked in 176238 n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)         blocked in 176238 n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)           blocked in 176238 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 176238 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)           blocked in 176238 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 176238 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-libvirt      1 build-check(1)           blocked in 176238 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)           blocked in 176238 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-xl           1 build-check(1)           blocked in 176238 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 176238 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 176238 n/a
 build-armhf-libvirt           1 build-check(1)           blocked in 176238 n/a
 test-armhf-armhf-xl-credit1   6 capture-logs(6)       broken blocked in 175447
 test-armhf-armhf-libvirt-qcow2  6 capture-logs(6)     broken blocked in 175447
 test-armhf-armhf-libvirt-raw  6 capture-logs(6)       broken blocked in 175447
 test-armhf-armhf-libvirt      6 capture-logs(6)       broken blocked in 175447
 test-armhf-armhf-xl-credit2   6 capture-logs(6)       broken blocked in 175447
 test-armhf-armhf-xl-multivcpu  6 capture-logs(6)      broken blocked in 175447
 test-armhf-armhf-xl-vhd       6 capture-logs(6)       broken blocked in 175447
 test-armhf-armhf-xl-rtds      6 capture-logs(6)       broken blocked in 175447
 test-armhf-armhf-xl-arndale   6 capture-logs(6)       broken blocked in 175447
 test-armhf-armhf-xl           6 capture-logs(6)       broken blocked in 175447
 test-armhf-armhf-xl-cubietruck  6 capture-logs(6)     broken blocked in 175447
 build-armhf                  5 capture-logs broken in 176238 blocked in 175447
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   36 days
Testing same since   176224  2023-01-26 22:14:43 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      broken  
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)
broken-step test-armhf-armhf-xl-credit1 host-install(5)
broken-step test-armhf-armhf-xl-credit1 capture-logs(6)
broken-step test-armhf-armhf-libvirt-qcow2 capture-logs(6)
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)
broken-step test-armhf-armhf-libvirt-raw capture-logs(6)
broken-step test-armhf-armhf-libvirt host-install(5)
broken-step test-armhf-armhf-libvirt capture-logs(6)
broken-step test-armhf-armhf-xl-credit2 host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-step test-armhf-armhf-xl-credit2 capture-logs(6)
broken-step test-armhf-armhf-xl-rtds host-install(5)
broken-step test-armhf-armhf-xl-multivcpu capture-logs(6)
broken-step test-armhf-armhf-xl host-install(5)
broken-step test-armhf-armhf-xl-vhd capture-logs(6)
broken-step test-armhf-armhf-xl-rtds capture-logs(6)
broken-step test-armhf-armhf-xl-arndale capture-logs(6)
broken-step test-armhf-armhf-xl capture-logs(6)
broken-step test-armhf-armhf-xl-cubietruck host-install(5)
broken-step test-armhf-armhf-xl-cubietruck capture-logs(6)
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job build-arm64-xsm broken
broken-job build-arm64-pvops broken
broken-job build-armhf broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry_safe has next pointing
    to the deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code takes a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)
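
The list_for_each_entry_safe hazard described in the commit message above
can be shown with a standalone sketch. This is a toy intrusive list in the
style of Xen's list.h, not the actual xenstored code; the node layout and
helpers below are invented for the demonstration. The "safe" iterator
caches only the successor of the current entry, so deleting any OTHER
entry (as RELEASE/do_release does) leaves the cached pointer aimed at a
removed, and in xenstored's case freed, node:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy intrusive list node; "unlinked" stands in for "freed" so the demo
 * shows the hazard without real undefined behaviour. */
struct node {
    struct node *next, *prev;
    bool unlinked;
};

static void list_del(struct node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->unlinked = true;
}

/* Returns true if the "safe" iteration stepped onto an unlinked node. */
bool safe_iter_visits_deleted(void)
{
    struct node head = { 0 }, a = { 0 }, b = { 0 }, c = { 0 };
    head.next = &a; head.prev = &c;
    a.next = &b;    a.prev = &head;
    b.next = &c;    b.prev = &a;
    c.next = &head; c.prev = &b;

    bool visited_unlinked = false;
    /* Expansion of list_for_each_entry_safe: tmp caches cur's successor. */
    for (struct node *cur = head.next, *tmp = cur->next;
         cur != &head;
         cur = tmp, tmp = cur->next)
    {
        if (cur == &a)
            list_del(&b);            /* delete an entry other than cur;
                                        tmp already points at b */
        if (cur->unlinked)
            visited_unlinked = true; /* cur = tmp stepped onto the
                                        "freed" node b */
    }
    return visited_unlinked;
}
```

In this light, the revert's protection corresponds to taking a reference
on the cached next connection, so it stays valid even if it is dropped
from the list mid-iteration.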


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 00:56:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 00:56:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486029.753492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLZW7-0007yb-MT; Sat, 28 Jan 2023 00:56:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486029.753492; Sat, 28 Jan 2023 00:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLZW7-0007yU-Iq; Sat, 28 Jan 2023 00:56:27 +0000
Received: by outflank-mailman (input) for mailman id 486029;
 Sat, 28 Jan 2023 00:56:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LM4o=5Z=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLZW6-0007yO-G0
 for xen-devel@lists.xenproject.org; Sat, 28 Jan 2023 00:56:26 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 969eb3c2-9ea6-11ed-a5d9-ddcf98b90cbd;
 Sat, 28 Jan 2023 01:56:25 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 4AEDCB821C0;
 Sat, 28 Jan 2023 00:56:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 861EEC433D2;
 Sat, 28 Jan 2023 00:56:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 969eb3c2-9ea6-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674867383;
	bh=ko/2BoiEHAjn/12n/nbJy4SQj1VKto8eBKFuvRsHVtg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GyF2KsSZ7GqOjPO1q1cNZk+qCrJIeUyV2NYvKC1r9UdNXE1E6BnhzEPZAXsEHVde7
	 qgQ9C2N/jsJDXwyXIqLpVD9CljvfRnc6rC+zlWLT6McPzOExM2EV78mpRGdqquvv+o
	 6U6G1M/BtD+bvqMnJ2agJsakQ7GRlEPLRJffE0ejkF5DKENKNs8ACJufAxCAwxUdyc
	 or0Eyid1TDaKNdTlnD8lwledm1A+4X2pZC22LJj7Vdwf/OyLIVGqcRRdTSNBd/CeAz
	 CdOrlHX4rMYrvrxWzQVijVa8jZz29hw+uWipAxPelpypZV7q2X/vLqMpOjpbjRZmID
	 wdIttP371QKiA==
Date: Fri, 27 Jan 2023 16:56:17 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    Roger Pau Monné <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Paul Durrant <paul@xen.org>
Subject: Re: [RFC PATCH 07/10] xen: pci: add per-device locking
In-Reply-To: <20220831141040.13231-8-volodymyr_babchuk@epam.com>
Message-ID: <alpine.DEB.2.22.394.2301271615330.1978264@ubuntu-linux-20-04-desktop>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com> <20220831141040.13231-8-volodymyr_babchuk@epam.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
> Spinlock in struct pci_device will be used to protect access to device
> itself. Right now it is used mostly by MSI code.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

There are 2 instances of:

    BUG_ON(list_empty(&dev->msi_list));

in xen/arch/x86/msi.c:__pci_disable_msi and
xen/arch/x86/msi.c:__pci_disable_msix, which are not protected by
pcidev_lock. However, the list_empty check itself needs to be protected.
(pci_disable_msi can also be called from xen/arch/x86/irq.c, where it is
not surrounded by pcidev_lock.)

Given that they are BUG_ONs, I wonder if we could remove them instead of
adding locks there. It would make things simpler.
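
As a sketch of why the check belongs inside the lock (the list helpers
and fake_pdev below are stand-ins invented for this example; the patch's
pcidev_lock is modeled here by a pthread mutex): list_empty reads the
head's next pointer, and without the writers' lock it can interleave with
a concurrent list_add_tail/list_del and observe a stale or half-applied
update.

```c
#include <pthread.h>
#include <stdbool.h>

/* Minimal intrusive list, an invented stand-in for Xen's list.h. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static bool list_empty(const struct list_head *h) { return h->next == h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
    n->prev = h->prev;
    n->next = h;
    h->prev->next = n;
    h->prev = n;
}

static void list_del(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

/* Hypothetical per-device state mirroring pdev->msi_list in the patch;
 * the mutex plays the role of pcidev_lock. */
struct fake_pdev {
    pthread_mutex_t lock;
    struct list_head msi_list;
};

/* The point above: even a read-only emptiness test takes the same lock
 * as the writers, so it cannot interleave with their pointer updates. */
static bool msi_list_empty_locked(struct fake_pdev *dev)
{
    pthread_mutex_lock(&dev->lock);
    bool empty = list_empty(&dev->msi_list);
    pthread_mutex_unlock(&dev->lock);
    return empty;
}
```

A locked variant of the check would replace the bare
BUG_ON(list_empty(&dev->msi_list)) sites, though, as said above, simply
dropping the BUG_ONs may be the simpler option.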


> ---
>  xen/arch/x86/hvm/vmsi.c       |  6 +++++-
>  xen/arch/x86/msi.c            | 16 ++++++++++++++++
>  xen/drivers/passthrough/msi.c |  8 +++++++-
>  xen/drivers/passthrough/pci.c |  2 ++
>  xen/include/xen/pci.h         | 12 ++++++++++++
>  5 files changed, 42 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
> index 7fb1075673..c9e5f279c5 100644
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -203,10 +203,14 @@ static struct msi_desc *msixtbl_addr_to_desc(
>  
>      nr_entry = (addr - entry->gtable) / PCI_MSIX_ENTRY_SIZE;
>  
> +    pcidev_lock(entry->pdev);
>      list_for_each_entry( desc, &entry->pdev->msi_list, list )
>          if ( desc->msi_attrib.type == PCI_CAP_ID_MSIX &&
> -             desc->msi_attrib.entry_nr == nr_entry )
> +             desc->msi_attrib.entry_nr == nr_entry ) {
> +	    pcidev_unlock(entry->pdev);

code style


>              return desc;
> +	}
> +    pcidev_unlock(entry->pdev);
>  
>      return NULL;
>  }
> diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
> index bccaccb98b..6b62c4f452 100644
> --- a/xen/arch/x86/msi.c
> +++ b/xen/arch/x86/msi.c
> @@ -389,6 +389,7 @@ static bool msi_set_mask_bit(struct irq_desc *desc, bool host, bool guest)
>      default:
>          return 0;
>      }
> +

spurious change


>      entry->msi_attrib.host_masked = host;
>      entry->msi_attrib.guest_masked = guest;
>  
> @@ -585,12 +586,17 @@ static struct msi_desc *find_msi_entry(struct pci_dev *dev,
>  {
>      struct msi_desc *entry;
>  
> +    pcidev_lock(dev);
>      list_for_each_entry( entry, &dev->msi_list, list )
>      {
>          if ( entry->msi_attrib.type == cap_id &&
>               (irq == -1 || entry->irq == irq) )
> +	{
> +	    pcidev_unlock(dev);
>              return entry;
> +	}
>      }
> +    pcidev_unlock(dev);
>  
>      return NULL;
>  }
> @@ -661,7 +667,9 @@ static int msi_capability_init(struct pci_dev *dev,
>          maskbits |= ~(uint32_t)0 >> (32 - dev->msi_maxvec);
>          pci_conf_write32(dev->sbdf, mpos, maskbits);
>      }
> +    pcidev_lock(dev);
>      list_add_tail(&entry->list, &dev->msi_list);
> +    pcidev_unlock(dev);
>  
>      *desc = entry;
>      /* Restore the original MSI enabled bits  */
> @@ -946,7 +954,9 @@ static int msix_capability_init(struct pci_dev *dev,
>  
>  	pcidev_get(dev);
>  
> +	pcidev_lock(dev);
>          list_add_tail(&entry->list, &dev->msi_list);
> +	pcidev_unlock(dev);
>          *desc = entry;
>      }
>  
> @@ -1231,11 +1241,13 @@ static void msi_free_irqs(struct pci_dev* dev)
>  {
>      struct msi_desc *entry, *tmp;
>  
> +    pcidev_lock(dev);
>      list_for_each_entry_safe( entry, tmp, &dev->msi_list, list )
>      {
>          pci_disable_msi(entry);
>          msi_free_irq(entry);
>      }
> +    pcidev_unlock(dev);
>  }
>  
>  void pci_cleanup_msi(struct pci_dev *pdev)
> @@ -1354,6 +1366,7 @@ int pci_restore_msi_state(struct pci_dev *pdev)
>      if ( ret )
>          return ret;
>  
> +    pcidev_lock(pdev);
>      list_for_each_entry_safe( entry, tmp, &pdev->msi_list, list )
>      {
>          unsigned int i = 0, nr = 1;
> @@ -1371,6 +1384,7 @@ int pci_restore_msi_state(struct pci_dev *pdev)
>              dprintk(XENLOG_ERR, "Restore MSI for %pp entry %u not set?\n",
>                      &pdev->sbdf, i);
>              spin_unlock_irqrestore(&desc->lock, flags);
> +	    pcidev_unlock(pdev);
>              if ( type == PCI_CAP_ID_MSIX )
>                  pci_conf_write16(pdev->sbdf, msix_control_reg(pos),
>                                   control & ~PCI_MSIX_FLAGS_ENABLE);
> @@ -1393,6 +1407,7 @@ int pci_restore_msi_state(struct pci_dev *pdev)
>              if ( unlikely(!memory_decoded(pdev)) )
>              {
>                  spin_unlock_irqrestore(&desc->lock, flags);
> +		pcidev_unlock(pdev);
>                  pci_conf_write16(pdev->sbdf, msix_control_reg(pos),
>                                   control & ~PCI_MSIX_FLAGS_ENABLE);
>                  return -ENXIO;
> @@ -1438,6 +1453,7 @@ int pci_restore_msi_state(struct pci_dev *pdev)
>          pci_conf_write16(pdev->sbdf, msix_control_reg(pos),
>                           control | PCI_MSIX_FLAGS_ENABLE);
>  
> +    pcidev_unlock(pdev);
>      return 0;
>  }
>  
> diff --git a/xen/drivers/passthrough/msi.c b/xen/drivers/passthrough/msi.c
> index ce1a450f6f..98f4d2721a 100644
> --- a/xen/drivers/passthrough/msi.c
> +++ b/xen/drivers/passthrough/msi.c
> @@ -22,6 +22,7 @@ int pdev_msi_init(struct pci_dev *pdev)
>  {
>      unsigned int pos;
>  
> +    pcidev_lock(pdev);
>      INIT_LIST_HEAD(&pdev->msi_list);
>  
>      pos = pci_find_cap_offset(pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> @@ -41,7 +42,10 @@ int pdev_msi_init(struct pci_dev *pdev)
>          uint16_t ctrl;
>  
>          if ( !msix )
> -            return -ENOMEM;
> +        {
> +             pcidev_unlock(pdev);
> +             return -ENOMEM;
> +        }
>  
>          spin_lock_init(&msix->table_lock);
>  
> @@ -51,6 +55,8 @@ int pdev_msi_init(struct pci_dev *pdev)
>          pdev->msix = msix;
>      }
>  
> +    pcidev_unlock(pdev);
> +
>      return 0;
>  }
>  
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index c8da80b981..c83397211b 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -1383,7 +1383,9 @@ static int cf_check _dump_pci_devices(struct pci_seg *pseg, void *arg)
>              printk("%pd", pdev->domain);
>          printk(" - node %-3d refcnt %d", (pdev->node != NUMA_NO_NODE) ? pdev->node : -1,
>                 atomic_read(&pdev->refcnt));
> +        pcidev_lock(pdev);
>          pdev_dump_msi(pdev);
> +        pcidev_unlock(pdev);
>          printk("\n");
>      }
>      spin_unlock(&pseg->alldevs_lock);
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index e71a180ef3..d0a7339d84 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -106,6 +106,8 @@ struct pci_dev {
>      uint8_t msi_maxvec;
>      uint8_t phantom_stride;
>  
> +    /* Device lock */
> +    spinlock_t lock;
>      nodeid_t node; /* NUMA node */
>  
>      /* Device to be quarantined, don't automatically re-assign to dom0 */
> @@ -235,6 +237,16 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
>  void msixtbl_pt_unregister(struct domain *, struct pirq *);
>  void msixtbl_pt_cleanup(struct domain *d);
>  
> +static inline void pcidev_lock(struct pci_dev *pdev)
> +{
> +    spin_lock(&pdev->lock);
> +}
> +
> +static inline void pcidev_unlock(struct pci_dev *pdev)
> +{
> +    spin_unlock(&pdev->lock);
> +}
> +
>  #ifdef CONFIG_HVM
>  int arch_pci_clean_pirqs(struct domain *d);
>  #else
> -- 
> 2.36.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 01:32:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 01:32:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486036.753502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLa4j-0002dQ-HI; Sat, 28 Jan 2023 01:32:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486036.753502; Sat, 28 Jan 2023 01:32:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLa4j-0002dJ-C4; Sat, 28 Jan 2023 01:32:13 +0000
Received: by outflank-mailman (input) for mailman id 486036;
 Sat, 28 Jan 2023 01:32:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LM4o=5Z=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pLa4h-0002dD-II
 for xen-devel@lists.xenproject.org; Sat, 28 Jan 2023 01:32:11 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 94cf727f-9eab-11ed-a5d9-ddcf98b90cbd;
 Sat, 28 Jan 2023 02:32:09 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id B6959B82217;
 Sat, 28 Jan 2023 01:32:08 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BE843C433EF;
 Sat, 28 Jan 2023 01:32:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94cf727f-9eab-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1674869527;
	bh=EFWfFpfiH8r6XMH+rfQWbY8Chcx0m99d6kLSG+o81tA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=YsTd/UH47lBn3r1SFvB10ooSdiwOUnJRxwsPtfqnnxqizuhZyuZGHxW51FqlX/CGg
	 Iz5JR567+l0DdxZrfnNWPSnd1vw21ZEPPqHVbbNvvKvRvRhK14rRAuiJlpnlbnOsL9
	 sxQsF7oq9SKxG341+rdmV++jim/YXAoBT8c65//vcD7LK0bxXJlIM3eDpZIsu7mSuJ
	 b6N/yOQlk2Gkl+DQIlR0u6DgYt+mmA3JF6OnajIGDGBGPpepntIXVjUaIcpFBXePHE
	 CS09sTovWpVlttZI3uhmIXEzomZO4X6rPiS6ffSPtVWdlB2fn8RxtbcZvON5hSJLU+
	 1Bie/VhAbN14Q==
Date: Fri, 27 Jan 2023 17:32:04 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [RFC PATCH 08/10] xen: pci: remove pcidev_[un]lock[ed] calls
In-Reply-To: <20220831141040.13231-9-volodymyr_babchuk@epam.com>
Message-ID: <alpine.DEB.2.22.394.2301271717090.1978264@ubuntu-linux-20-04-desktop>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com> <20220831141040.13231-9-volodymyr_babchuk@epam.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
> As pci devices are refcounted now and all list that store them are
> protected by separate locks, we can safely drop global pcidevs_lock.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

Up until this patch, the series introduces:
- d->pdevs_lock to protect d->pdev_list
- pci_seg->alldevs_lock to protect pci_seg->alldevs_list
- iommu->ats_list_lock to protect iommu->ats_devices
- pdev refcounting to detect a pdev in-use and when to free it
- pdev->lock to protect pdev->msi_list

They cover a lot of ground.  Do they collectively cover everything
pcidevs_lock() was protecting?

deassign_device is not protected by pcidevs_lock anymore.
deassign_device accesses a number of pdev fields, including quarantine,
phantom_stride and fault.count.

deassign_device could run at the same time as assign_device, which sets
quarantine and other fields.

It looks like assign_device, deassign_device, and other functions
accessing/modifying pdev fields should be protected by pdev->lock.

In fact, I think it would be safer to make sure every place that
currently takes pcidevs_lock() takes pdev->lock instead (unless it is
already covered by d->pdevs_lock, pci_seg->alldevs_lock,
iommu->ats_list_lock, or another lock)?


> ---
>  xen/arch/x86/domctl.c                       |  8 ---
>  xen/arch/x86/hvm/vioapic.c                  |  2 -
>  xen/arch/x86/hvm/vmsi.c                     | 12 ----
>  xen/arch/x86/irq.c                          |  7 ---
>  xen/arch/x86/msi.c                          | 11 ----
>  xen/arch/x86/pci.c                          |  4 --
>  xen/arch/x86/physdev.c                      |  7 +--
>  xen/common/sysctl.c                         |  2 -
>  xen/drivers/char/ns16550.c                  |  4 --
>  xen/drivers/passthrough/amd/iommu_init.c    |  7 ---
>  xen/drivers/passthrough/amd/iommu_map.c     |  5 --
>  xen/drivers/passthrough/amd/pci_amd_iommu.c |  4 --
>  xen/drivers/passthrough/pci.c               | 63 +--------------------
>  xen/drivers/passthrough/vtd/iommu.c         |  8 ---
>  xen/drivers/video/vga.c                     |  2 -
>  15 files changed, 4 insertions(+), 142 deletions(-)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 020df615bd..9f4ca03385 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -537,11 +537,7 @@ long arch_do_domctl(
>  
>          ret = -ESRCH;
>          if ( is_iommu_enabled(d) )
> -        {
> -            pcidevs_lock();
>              ret = pt_irq_create_bind(d, bind);
> -            pcidevs_unlock();
> -        }
>          if ( ret < 0 )
>              printk(XENLOG_G_ERR "pt_irq_create_bind failed (%ld) for dom%d\n",
>                     ret, d->domain_id);
> @@ -566,11 +562,7 @@ long arch_do_domctl(
>              break;
>  
>          if ( is_iommu_enabled(d) )
> -        {
> -            pcidevs_lock();
>              ret = pt_irq_destroy_bind(d, bind);
> -            pcidevs_unlock();
> -        }
>          if ( ret < 0 )
>              printk(XENLOG_G_ERR "pt_irq_destroy_bind failed (%ld) for dom%d\n",
>                     ret, d->domain_id);
> diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
> index cb7f440160..aa4e7766a3 100644
> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -197,7 +197,6 @@ static int vioapic_hwdom_map_gsi(unsigned int gsi, unsigned int trig,
>          return ret;
>      }
>  
> -    pcidevs_lock();
>      ret = pt_irq_create_bind(currd, &pt_irq_bind);
>      if ( ret )
>      {
> @@ -207,7 +206,6 @@ static int vioapic_hwdom_map_gsi(unsigned int gsi, unsigned int trig,
>          unmap_domain_pirq(currd, pirq);
>          write_unlock(&currd->event_lock);
>      }
> -    pcidevs_unlock();
>  
>      return ret;
>  }
> diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
> index c9e5f279c5..344bbd646c 100644
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -470,7 +470,6 @@ int msixtbl_pt_register(struct domain *d, struct pirq *pirq, uint64_t gtable)
>      struct msixtbl_entry *entry, *new_entry;
>      int r = -EINVAL;
>  
> -    ASSERT(pcidevs_locked());
>      ASSERT(rw_is_write_locked(&d->event_lock));
>  
>      if ( !msixtbl_initialised(d) )
> @@ -540,7 +539,6 @@ void msixtbl_pt_unregister(struct domain *d, struct pirq *pirq)
>      struct pci_dev *pdev;
>      struct msixtbl_entry *entry;
>  
> -    ASSERT(pcidevs_locked());
>      ASSERT(rw_is_write_locked(&d->event_lock));
>  
>      if ( !msixtbl_initialised(d) )
> @@ -686,8 +684,6 @@ static int vpci_msi_update(const struct pci_dev *pdev, uint32_t data,
>  {
>      unsigned int i;
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( (address & MSI_ADDR_BASE_MASK) != MSI_ADDR_HEADER )
>      {
>          gdprintk(XENLOG_ERR, "%pp: PIRQ %u: unsupported address %lx\n",
> @@ -728,7 +724,6 @@ void vpci_msi_arch_update(struct vpci_msi *msi, const struct pci_dev *pdev)
>  
>      ASSERT(msi->arch.pirq != INVALID_PIRQ);
>  
> -    pcidevs_lock();
>      for ( i = 0; i < msi->vectors && msi->arch.bound; i++ )
>      {
>          struct xen_domctl_bind_pt_irq unbind = {
> @@ -747,7 +742,6 @@ void vpci_msi_arch_update(struct vpci_msi *msi, const struct pci_dev *pdev)
>  
>      msi->arch.bound = !vpci_msi_update(pdev, msi->data, msi->address,
>                                         msi->vectors, msi->arch.pirq, msi->mask);
> -    pcidevs_unlock();
>  }
>  
>  static int vpci_msi_enable(const struct pci_dev *pdev, unsigned int nr,
> @@ -785,10 +779,8 @@ int vpci_msi_arch_enable(struct vpci_msi *msi, const struct pci_dev *pdev,
>          return rc;
>      msi->arch.pirq = rc;
>  
> -    pcidevs_lock();
>      msi->arch.bound = !vpci_msi_update(pdev, msi->data, msi->address, vectors,
>                                         msi->arch.pirq, msi->mask);
> -    pcidevs_unlock();
>  
>      return 0;
>  }
> @@ -800,7 +792,6 @@ static void vpci_msi_disable(const struct pci_dev *pdev, int pirq,
>  
>      ASSERT(pirq != INVALID_PIRQ);
>  
> -    pcidevs_lock();
>      for ( i = 0; i < nr && bound; i++ )
>      {
>          struct xen_domctl_bind_pt_irq bind = {
> @@ -816,7 +807,6 @@ static void vpci_msi_disable(const struct pci_dev *pdev, int pirq,
>      write_lock(&pdev->domain->event_lock);
>      unmap_domain_pirq(pdev->domain, pirq);
>      write_unlock(&pdev->domain->event_lock);
> -    pcidevs_unlock();
>  }
>  
>  void vpci_msi_arch_disable(struct vpci_msi *msi, const struct pci_dev *pdev)
> @@ -863,7 +853,6 @@ int vpci_msix_arch_enable_entry(struct vpci_msix_entry *entry,
>  
>      entry->arch.pirq = rc;
>  
> -    pcidevs_lock();
>      rc = vpci_msi_update(pdev, entry->data, entry->addr, 1, entry->arch.pirq,
>                           entry->masked);
>      if ( rc )
> @@ -871,7 +860,6 @@ int vpci_msix_arch_enable_entry(struct vpci_msix_entry *entry,
>          vpci_msi_disable(pdev, entry->arch.pirq, 1, false);
>          entry->arch.pirq = INVALID_PIRQ;
>      }
> -    pcidevs_unlock();
>  
>      return rc;
>  }
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index d8672a03e1..6a08830a55 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2156,8 +2156,6 @@ int map_domain_pirq(
>          struct pci_dev *pdev;
>          unsigned int nr = 0;
>  
> -        ASSERT(pcidevs_locked());
> -
>          ret = -ENODEV;
>          if ( !cpu_has_apic )
>              goto done;
> @@ -2317,7 +2315,6 @@ int unmap_domain_pirq(struct domain *d, int pirq)
>      if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
>          return -EINVAL;
>  
> -    ASSERT(pcidevs_locked());
>      ASSERT(rw_is_write_locked(&d->event_lock));
>  
>      info = pirq_info(d, pirq);
> @@ -2423,7 +2420,6 @@ void free_domain_pirqs(struct domain *d)
>  {
>      int i;
>  
> -    pcidevs_lock();
>      write_lock(&d->event_lock);
>  
>      for ( i = 0; i < d->nr_pirqs; i++ )
> @@ -2431,7 +2427,6 @@ void free_domain_pirqs(struct domain *d)
>              unmap_domain_pirq(d, i);
>  
>      write_unlock(&d->event_lock);
> -    pcidevs_unlock();
>  }
>  
>  static void cf_check dump_irqs(unsigned char key)
> @@ -2911,7 +2906,6 @@ int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
>  
>      msi->irq = irq;
>  
> -    pcidevs_lock();
>      /* Verify or get pirq. */
>      write_lock(&d->event_lock);
>      pirq = allocate_pirq(d, index, *pirq_p, irq, type, &msi->entry_nr);
> @@ -2927,7 +2921,6 @@ int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
>  
>   done:
>      write_unlock(&d->event_lock);
> -    pcidevs_unlock();
>      if ( ret )
>      {
>          switch ( type )
> diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
> index 6b62c4f452..f04b90e235 100644
> --- a/xen/arch/x86/msi.c
> +++ b/xen/arch/x86/msi.c
> @@ -623,7 +623,6 @@ static int msi_capability_init(struct pci_dev *dev,
>      u8 slot = PCI_SLOT(dev->devfn);
>      u8 func = PCI_FUNC(dev->devfn);
>  
> -    ASSERT(pcidevs_locked());
>      pos = pci_find_cap_offset(seg, bus, slot, func, PCI_CAP_ID_MSI);
>      if ( !pos )
>          return -ENODEV;
> @@ -810,8 +809,6 @@ static int msix_capability_init(struct pci_dev *dev,
>      if ( !pos )
>          return -ENODEV;
>  
> -    ASSERT(pcidevs_locked());
> -
>      control = pci_conf_read16(dev->sbdf, msix_control_reg(pos));
>      /*
>       * Ensure MSI-X interrupts are masked during setup. Some devices require
> @@ -1032,7 +1029,6 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
>      struct msi_desc *old_desc;
>      int ret;
>  
> -    ASSERT(pcidevs_locked());
>      pdev = pci_get_pdev(NULL, msi->sbdf);
>      if ( !pdev )
>          return -ENODEV;
> @@ -1092,7 +1088,6 @@ static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
>      struct msi_desc *old_desc;
>      int ret;
>  
> -    ASSERT(pcidevs_locked());
>      pdev = pci_get_pdev(NULL, msi->sbdf);
>      if ( !pdev || !pdev->msix )
>          return -ENODEV;
> @@ -1191,7 +1186,6 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool off)
>      if ( !use_msi )
>          return 0;
>  
> -    pcidevs_lock();
>      pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
>      if ( !pdev )
>          rc = -ENODEV;
> @@ -1204,7 +1198,6 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool off)
>      }
>      else
>          rc = msix_capability_init(pdev, NULL, NULL);
> -    pcidevs_unlock();
>  
>      pcidev_put(pdev);
>  
> @@ -1217,8 +1210,6 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool off)
>   */
>  int pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
>  {
> -    ASSERT(pcidevs_locked());
> -
>      if ( !use_msi )
>          return -EPERM;
>  
> @@ -1355,8 +1346,6 @@ int pci_restore_msi_state(struct pci_dev *pdev)
>      unsigned int type = 0, pos = 0;
>      u16 control = 0;
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( !use_msi )
>          return -EOPNOTSUPP;
>  
> diff --git a/xen/arch/x86/pci.c b/xen/arch/x86/pci.c
> index 1d38f0df7c..4dcd6d96f3 100644
> --- a/xen/arch/x86/pci.c
> +++ b/xen/arch/x86/pci.c
> @@ -88,15 +88,11 @@ int pci_conf_write_intercept(unsigned int seg, unsigned int bdf,
>      if ( reg < 64 || reg >= 256 )
>          return 0;
>  
> -    pcidevs_lock();
> -
>      pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bdf));
>      if ( pdev ) {
>          rc = pci_msi_conf_write_intercept(pdev, reg, size, data);
>  	pcidev_put(pdev);
>      }
>  
> -    pcidevs_unlock();
> -
>      return rc;
>  }
> diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
> index 96214a3d40..a41366b609 100644
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -162,11 +162,9 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
>              goto free_domain;
>      }
>  
> -    pcidevs_lock();
>      write_lock(&d->event_lock);
>      ret = unmap_domain_pirq(d, pirq);
>      write_unlock(&d->event_lock);
> -    pcidevs_unlock();
>  
>   free_domain:
>      rcu_unlock_domain(d);
> @@ -530,7 +528,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( copy_from_guest(&restore_msi, arg, 1) != 0 )
>              break;
>  
> -        pcidevs_lock();
>          pdev = pci_get_pdev(NULL,
>                              PCI_SBDF(0, restore_msi.bus, restore_msi.devfn));
>          if ( pdev )
> @@ -541,7 +538,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          else
>              ret = -ENODEV;
>  
> -        pcidevs_unlock();
>          break;
>      }
>  
> @@ -553,7 +549,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( copy_from_guest(&dev, arg, 1) != 0 )
>              break;
>  
> -        pcidevs_lock();
>          pdev = pci_get_pdev(NULL, PCI_SBDF(dev.seg, dev.bus, dev.devfn));
>          if ( pdev )
>          {
> @@ -562,7 +557,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          }
>          else
>              ret = -ENODEV;
> -        pcidevs_unlock();
> +
>          break;
>      }
>  
> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 0feef94cd2..6bb8c5c295 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -446,7 +446,6 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>                  break;
>              }
>  
> -            pcidevs_lock();
>              pdev = pci_get_pdev(NULL, PCI_SBDF(dev.seg, dev.bus, dev.devfn));
>              if ( !pdev )
>                  node = XEN_INVALID_DEV;
> @@ -454,7 +453,6 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>                  node = XEN_INVALID_NODE_ID;
>              else
>                  node = pdev->node;
> -            pcidevs_unlock();
>  
>              if ( pdev )
>                  pcidev_put(pdev);
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 01a05c9aa8..66c10b18e5 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -445,8 +445,6 @@ static void __init cf_check ns16550_init_postirq(struct serial_port *port)
>              {
>                  struct msi_desc *msi_desc = NULL;
>  
> -                pcidevs_lock();
> -
>                  rc = pci_enable_msi(&msi, &msi_desc);
>                  if ( !rc )
>                  {
> @@ -460,8 +458,6 @@ static void __init cf_check ns16550_init_postirq(struct serial_port *port)
>                          pci_disable_msi(msi_desc);
>                  }
>  
> -                pcidevs_unlock();
> -
>                  if ( rc )
>                  {
>                      uart->irq = 0;
> diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
> index 7c1713a602..e42af65a40 100644
> --- a/xen/drivers/passthrough/amd/iommu_init.c
> +++ b/xen/drivers/passthrough/amd/iommu_init.c
> @@ -638,10 +638,7 @@ static void cf_check parse_ppr_log_entry(struct amd_iommu *iommu, u32 entry[])
>      uint16_t device_id = iommu_get_devid_from_cmd(entry[0]);
>      struct pci_dev *pdev;
>  
> -    pcidevs_lock();
>      pdev = pci_get_real_pdev(PCI_SBDF(iommu->seg, device_id));
> -    pcidevs_unlock();
> -
>      if ( pdev )
>          guest_iommu_add_ppr_log(pdev->domain, entry);
>      pcidev_put(pdev);
> @@ -747,14 +744,12 @@ static bool_t __init set_iommu_interrupt_handler(struct amd_iommu *iommu)
>          return 0;
>      }
>  
> -    pcidevs_lock();
>      /*
>       * XXX: it is unclear if this device can be removed. Right now
>       * there is no code that clears msi.dev, so no one will decrease
>       * refcount on it.
>       */
>      iommu->msi.dev = pci_get_pdev(NULL, PCI_SBDF(iommu->seg, iommu->bdf));
> -    pcidevs_unlock();
>      if ( !iommu->msi.dev )
>      {
>          AMD_IOMMU_WARN("no pdev for %pp\n",
> @@ -1289,9 +1284,7 @@ static int __init cf_check amd_iommu_setup_device_table(
>              {
>                  if ( !pci_init )
>                      continue;
> -                pcidevs_lock();
>                  pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bdf));
> -                pcidevs_unlock();
>              }
>  
>              if ( pdev && (pdev->msix || pdev->msi_maxvec) )
> diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
> index 9d621e3d36..d04aa37538 100644
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -726,9 +726,7 @@ int cf_check amd_iommu_get_reserved_device_memory(
>              /* May need to trigger the workaround in find_iommu_for_device(). */
>              struct pci_dev *pdev;
>  
> -            pcidevs_lock();
>              pdev = pci_get_pdev(NULL, sbdf);
> -            pcidevs_unlock();
>  
>              if ( pdev )
>              {
> @@ -848,7 +846,6 @@ int cf_check amd_iommu_quarantine_init(struct pci_dev *pdev, bool scratch_page)
>      const struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(pdev->seg);
>      int rc;
>  
> -    ASSERT(pcidevs_locked());
>      ASSERT(!hd->arch.amd.root_table);
>      ASSERT(page_list_empty(&hd->arch.pgtables.list));
>  
> @@ -903,8 +900,6 @@ void amd_iommu_quarantine_teardown(struct pci_dev *pdev)
>  {
>      struct domain_iommu *hd = dom_iommu(dom_io);
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( !pdev->arch.amd.root_table )
>          return;
>  
> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> index 955f3af57a..919e30129e 100644
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -268,8 +268,6 @@ static int __must_check amd_iommu_setup_domain_device(
>                      req_id, pdev->type, page_to_maddr(root_pg),
>                      domid, hd->arch.amd.paging_mode);
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
>           !ivrs_dev->block_ats &&
>           iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) &&
> @@ -416,8 +414,6 @@ static void amd_iommu_disable_domain_device(const struct domain *domain,
>      if ( QUARANTINE_SKIP(domain, pdev) )
>          return;
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
>           pci_ats_enabled(iommu->seg, bus, pdev->devfn) )
>      {
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index c83397211b..cc62a5aec4 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -517,7 +517,6 @@ int __init pci_hide_device(unsigned int seg, unsigned int bus,
>      struct pci_seg *pseg;
>      int rc = -ENOMEM;
>  
> -    pcidevs_lock();
>      pseg = alloc_pseg(seg);
>      if ( pseg )
>      {
> @@ -528,7 +527,6 @@ int __init pci_hide_device(unsigned int seg, unsigned int bus,
>              rc = 0;
>          }
>      }
> -    pcidevs_unlock();
>  
>      return rc;
>  }
> @@ -588,8 +586,6 @@ struct pci_dev *pci_get_pdev(struct domain *d, pci_sbdf_t sbdf)
>  {
>      struct pci_dev *pdev;
>  
> -    ASSERT(d || pcidevs_locked());
> -
>      /*
>       * The hardware domain owns the majority of the devices in the system.
>       * When there are multiple segments, traversing the per-segment list is
> @@ -730,7 +726,6 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>          pdev_type = "device";
>      else if ( info->is_virtfn )
>      {
> -        pcidevs_lock();
>          pdev = pci_get_pdev(NULL,
>                              PCI_SBDF(seg, info->physfn.bus,
>                                       info->physfn.devfn));
> @@ -739,7 +734,6 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>              pf_is_extfn = pdev->info.is_extfn;
>              pcidev_put(pdev);
>          }
> -        pcidevs_unlock();
>          if ( !pdev )
>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
>                             NULL, node);
> @@ -756,7 +750,6 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>  
>      ret = -ENOMEM;
>  
> -    pcidevs_lock();
>      pseg = alloc_pseg(seg);
>      if ( !pseg )
>          goto out;
> @@ -858,7 +851,6 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>      pci_enable_acs(pdev);
>  
>  out:
> -    pcidevs_unlock();
>      if ( !ret )
>      {
>          printk(XENLOG_DEBUG "PCI add %s %pp\n", pdev_type,  &pdev->sbdf);
> @@ -889,7 +881,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      if ( !pseg )
>          return -ENODEV;
>  
> -    pcidevs_lock();
>      spin_lock(&pseg->alldevs_lock);
>      list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>          if ( pdev->bus == bus && pdev->devfn == devfn )
> @@ -910,12 +901,10 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>              break;
>          }
>  
> -    pcidevs_unlock();
>      spin_unlock(&pseg->alldevs_lock);
>      return ret;
>  }
>  
> -/* Caller should hold the pcidevs_lock */
>  static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>                             uint8_t devfn)
>  {
> @@ -927,7 +916,6 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>      if ( !is_iommu_enabled(d) )
>          return -EINVAL;
>  
> -    ASSERT(pcidevs_locked());
>      pdev = pci_get_pdev(d, PCI_SBDF(seg, bus, devfn));
>      if ( !pdev )
>          return -ENODEV;
> @@ -981,13 +969,10 @@ int pci_release_devices(struct domain *d)
>      u8 bus, devfn;
>      int ret;
>  
> -    pcidevs_lock();
>      ret = arch_pci_clean_pirqs(d);
>      if ( ret )
> -    {
> -        pcidevs_unlock();
>          return ret;
> -    }
> +
>      spin_lock(&d->pdevs_lock);
>      list_for_each_entry_safe ( pdev, tmp, &d->pdev_list, domain_list )
>      {
> @@ -996,7 +981,6 @@ int pci_release_devices(struct domain *d)
>          ret = deassign_device(d, pdev->seg, bus, devfn) ?: ret;
>      }
>      spin_unlock(&d->pdevs_lock);
> -    pcidevs_unlock();
>  
>      return ret;
>  }
> @@ -1094,7 +1078,6 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
>      s_time_t now = NOW();
>      u16 cword;
>  
> -    pcidevs_lock();
>      pdev = pci_get_real_pdev(PCI_SBDF(seg, bus, devfn));
>      if ( pdev )
>      {
> @@ -1108,7 +1091,6 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
>              pdev = NULL;
>          }
>      }
> -    pcidevs_unlock();
>  
>      if ( !pdev )
>          return;
> @@ -1164,13 +1146,7 @@ static int __init cf_check _scan_pci_devices(struct pci_seg *pseg, void *arg)
>  
>  int __init scan_pci_devices(void)
>  {
> -    int ret;
> -
> -    pcidevs_lock();
> -    ret = pci_segments_iterate(_scan_pci_devices, NULL);
> -    pcidevs_unlock();
> -
> -    return ret;
> +    return pci_segments_iterate(_scan_pci_devices, NULL);
>  }
>  
>  struct setup_hwdom {
> @@ -1239,19 +1215,11 @@ static int __hwdom_init cf_check _setup_hwdom_pci_devices(
>  
>              pcidev_put(pdev);
>              if ( iommu_verbose )
> -            {
> -                pcidevs_unlock();
>                  process_pending_softirqs();
> -                pcidevs_lock();
> -            }
>          }
>  
>          if ( !iommu_verbose )
> -        {
> -            pcidevs_unlock();
>              process_pending_softirqs();
> -            pcidevs_lock();
> -        }
>      }
>  
>      return 0;
> @@ -1262,9 +1230,7 @@ void __hwdom_init setup_hwdom_pci_devices(
>  {
>      struct setup_hwdom ctxt = { .d = d, .handler = handler };
>  
> -    pcidevs_lock();
>      pci_segments_iterate(_setup_hwdom_pci_devices, &ctxt);
> -    pcidevs_unlock();
>  }
>  
>  /* APEI not supported on ARM yet. */
> @@ -1396,9 +1362,7 @@ static int cf_check _dump_pci_devices(struct pci_seg *pseg, void *arg)
>  static void cf_check dump_pci_devices(unsigned char ch)
>  {
>      printk("==== PCI devices ====\n");
> -    pcidevs_lock();
>      pci_segments_iterate(_dump_pci_devices, NULL);
> -    pcidevs_unlock();
>  }
>  
>  static int __init cf_check setup_dump_pcidevs(void)
> @@ -1417,8 +1381,6 @@ static int iommu_add_device(struct pci_dev *pdev)
>      if ( !pdev->domain )
>          return -EINVAL;
>  
> -    ASSERT(pcidevs_locked());
> -
>      hd = dom_iommu(pdev->domain);
>      if ( !is_iommu_enabled(pdev->domain) )
>          return 0;
> @@ -1446,8 +1408,6 @@ static int iommu_enable_device(struct pci_dev *pdev)
>      if ( !pdev->domain )
>          return -EINVAL;
>  
> -    ASSERT(pcidevs_locked());
> -
>      hd = dom_iommu(pdev->domain);
>      if ( !is_iommu_enabled(pdev->domain) ||
>           !hd->platform_ops->enable_device )
> @@ -1494,7 +1454,6 @@ static int device_assigned(struct pci_dev *pdev)
>  {
>      int rc = 0;
>  
> -    ASSERT(pcidevs_locked());
>      /*
>       * If the device exists and it is not owned by either the hardware
>       * domain or dom_io then it must be assigned to a guest, or be
> @@ -1507,7 +1466,6 @@ static int device_assigned(struct pci_dev *pdev)
>      return rc;
>  }
>  
> -/* Caller should hold the pcidevs_lock */
>  static int assign_device(struct domain *d, struct pci_dev *pdev, u32 flag)
>  {
>      const struct domain_iommu *hd = dom_iommu(d);
> @@ -1521,7 +1479,6 @@ static int assign_device(struct domain *d, struct pci_dev *pdev, u32 flag)
>          return -EXDEV;
>  
>      /* device_assigned() should already have cleared the device for assignment */
> -    ASSERT(pcidevs_locked());
>      ASSERT(pdev && (pdev->domain == hardware_domain ||
>                      pdev->domain == dom_io));
>  
> @@ -1587,7 +1544,6 @@ static int iommu_get_device_group(
>      if ( group_id < 0 )
>          return group_id;
>  
> -    pcidevs_lock();
>      spin_lock(&d->pdevs_lock);
>      for_each_pdev( d, pdev )
>      {
> @@ -1603,7 +1559,6 @@ static int iommu_get_device_group(
>          sdev_id = iommu_call(ops, get_device_group_id, seg, b, df);
>          if ( sdev_id < 0 )
>          {
> -            pcidevs_unlock();
>              spin_unlock(&d->pdevs_lock);
>              return sdev_id;
>          }
> @@ -1614,7 +1569,6 @@ static int iommu_get_device_group(
>  
>              if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
>              {
> -                pcidevs_unlock();
>                  spin_unlock(&d->pdevs_lock);
>                  return -EFAULT;
>              }
> @@ -1622,7 +1576,6 @@ static int iommu_get_device_group(
>          }
>      }
>  
> -    pcidevs_unlock();
>      spin_unlock(&d->pdevs_lock);
>  
>      return i;
> @@ -1630,17 +1583,12 @@ static int iommu_get_device_group(
>  
>  void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev)
>  {
> -    pcidevs_lock();
> -
>      /* iommu->ats_list_lock is taken by the caller of this function */
>      disable_ats_device(pdev);
>  
>      ASSERT(pdev->domain);
>      if ( d != pdev->domain )
> -    {
> -        pcidevs_unlock();
>          return;
> -    }
>  
>      pdev->broken = true;
>  
> @@ -1649,8 +1597,6 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev)
>                 d->domain_id, &pdev->sbdf);
>      if ( !is_hardware_domain(d) )
>          domain_crash(d);
> -
> -    pcidevs_unlock();
>  }
>  
>  int iommu_do_pci_domctl(
> @@ -1740,7 +1686,6 @@ int iommu_do_pci_domctl(
>              break;
>          }
>  
> -        pcidevs_lock();
>          ret = device_assigned(pdev);
>          if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
>          {
> @@ -1755,7 +1700,7 @@ int iommu_do_pci_domctl(
>              ret = assign_device(d, pdev, flags);
>  
>          pcidev_put(pdev);
> -        pcidevs_unlock();
> +
>          if ( ret == -ERESTART )
>              ret = hypercall_create_continuation(__HYPERVISOR_domctl,
>                                                  "h", u_domctl);
> @@ -1787,9 +1732,7 @@ int iommu_do_pci_domctl(
>          bus = PCI_BUS(machine_sbdf);
>          devfn = PCI_DEVFN(machine_sbdf);
>  
> -        pcidevs_lock();
>          ret = deassign_device(d, seg, bus, devfn);
> -        pcidevs_unlock();
>          break;
>  
>      default:
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 42661f22f4..87868188b7 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1490,7 +1490,6 @@ int domain_context_mapping_one(
>      if ( QUARANTINE_SKIP(domain, pgd_maddr) )
>          return 0;
>  
> -    ASSERT(pcidevs_locked());
>      spin_lock(&iommu->lock);
>      maddr = bus_to_context_maddr(iommu, bus);
>      context_entries = (struct context_entry *)map_vtd_domain_page(maddr);
> @@ -1711,8 +1710,6 @@ static int domain_context_mapping(struct domain *domain, u8 devfn,
>      if ( drhd && drhd->iommu->node != NUMA_NO_NODE )
>          dom_iommu(domain)->node = drhd->iommu->node;
>  
> -    ASSERT(pcidevs_locked());
> -
>      for_each_rmrr_device( rmrr, bdf, i )
>      {
>          if ( rmrr->segment != pdev->seg || bdf != pdev->sbdf.bdf )
> @@ -2072,8 +2069,6 @@ static void quarantine_teardown(struct pci_dev *pdev,
>  {
>      struct domain_iommu *hd = dom_iommu(dom_io);
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( !pdev->arch.vtd.pgd_maddr )
>          return;
>  
> @@ -2341,8 +2336,6 @@ static int cf_check intel_iommu_add_device(u8 devfn, struct pci_dev *pdev)
>      u16 bdf;
>      int ret, i;
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( !pdev->domain )
>          return -EINVAL;
>  
> @@ -3176,7 +3169,6 @@ static int cf_check intel_iommu_quarantine_init(struct pci_dev *pdev,
>      bool rmrr_found = false;
>      int rc;
>  
> -    ASSERT(pcidevs_locked());
>      ASSERT(!hd->arch.vtd.pgd_maddr);
>      ASSERT(page_list_empty(&hd->arch.pgtables.list));
>  
> diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
> index 1298f3a7b6..1f7c496114 100644
> --- a/xen/drivers/video/vga.c
> +++ b/xen/drivers/video/vga.c
> @@ -117,9 +117,7 @@ void __init video_endboot(void)
>                  struct pci_dev *pdev;
>                  u8 b = bus, df = devfn, sb;
>  
> -                pcidevs_lock();
>                  pdev = pci_get_pdev(NULL, PCI_SBDF(0, bus, devfn));
> -                pcidevs_unlock();
>  
>                  if ( !pdev ||
>                       pci_conf_read16(PCI_SBDF(0, bus, devfn),
> -- 
> 2.36.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 01:37:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 01:37:01 +0000
Date: Fri, 27 Jan 2023 17:36:47 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, 
    Kevin Tian <kevin.tian@intel.com>, Jan Beulich <jbeulich@suse.com>, 
    Paul Durrant <paul@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [RFC PATCH 09/10] [RFC only] xen: iommu: remove last pcidevs_lock()
 calls in iommu
In-Reply-To: <20220831141040.13231-10-volodymyr_babchuk@epam.com>
Message-ID: <alpine.DEB.2.22.394.2301271733240.1978264@ubuntu-linux-20-04-desktop>
References: <20220831141040.13231-1-volodymyr_babchuk@epam.com> <20220831141040.13231-10-volodymyr_babchuk@epam.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
> There are a number of cases where pcidevs_lock() is used to protect
> something that is not related to PCI devices per se.
> 
> Probably pcidevs_lock() in these places should be replaced with some
> other lock.
> 
> This patch is not intended to be merged and is present only to discuss
> this use of pcidevs_lock().
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

I wonder if we are being too ambitious: is it really necessary to get
rid of pcidevs_lock completely, rather than replacing all of its
occurrences with something else?

It would be a lot easier to simply replace the global pcidevs_lock with
a per-device pdev->lock. That alone would be a major improvement, and it
would be far easier to verify its correctness.

By contrast, this patch and the previous one together remove all
occurrences of pcidevs_lock without adding pdev->lock, which is
difficult to prove correct.


> ---
>  xen/drivers/passthrough/vtd/intremap.c | 2 --
>  xen/drivers/passthrough/vtd/iommu.c    | 5 -----
>  xen/drivers/passthrough/x86/iommu.c    | 5 -----
>  3 files changed, 12 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
> index 1512e4866b..44e3b72f91 100644
> --- a/xen/drivers/passthrough/vtd/intremap.c
> +++ b/xen/drivers/passthrough/vtd/intremap.c
> @@ -893,8 +893,6 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
>  
>      spin_unlock_irq(&desc->lock);
>  
> -    ASSERT(pcidevs_locked());
> -
>      return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
>  
>   unlock_out:
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 87868188b7..9d258d154d 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -127,8 +127,6 @@ static int context_set_domain_id(struct context_entry *context,
>  {
>      unsigned int i;
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( domid_mapping(iommu) )
>      {
>          unsigned int nr_dom = cap_ndoms(iommu->cap);
> @@ -1882,7 +1880,6 @@ int domain_context_unmap_one(
>      int iommu_domid, rc, ret;
>      bool_t flush_dev_iotlb;
>  
> -    ASSERT(pcidevs_locked());
>      spin_lock(&iommu->lock);
>  
>      maddr = bus_to_context_maddr(iommu, bus);
> @@ -2601,7 +2598,6 @@ static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
>      u16 bdf;
>      int ret, i;
>  
> -    pcidevs_lock();
>      for_each_rmrr_device ( rmrr, bdf, i )
>      {
>          /*
> @@ -2616,7 +2612,6 @@ static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
>              dprintk(XENLOG_ERR VTDPREFIX,
>                       "IOMMU: mapping reserved region failed\n");
>      }
> -    pcidevs_unlock();
>  }
>  
>  static struct iommu_state {
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index f671b0f2bb..4e94ad15df 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -207,7 +207,6 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
>      struct identity_map *map;
>      struct domain_iommu *hd = dom_iommu(d);
>  
> -    ASSERT(pcidevs_locked());
>      ASSERT(base < end);
>  
>      /*
> @@ -479,8 +478,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
>      static unsigned int start;
>      unsigned int idx = find_next_zero_bit(map, UINT16_MAX - DOMID_MASK, start);
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( idx >= UINT16_MAX - DOMID_MASK )
>          idx = find_first_zero_bit(map, UINT16_MAX - DOMID_MASK);
>      if ( idx >= UINT16_MAX - DOMID_MASK )
> @@ -495,8 +492,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
>  
>  void iommu_free_domid(domid_t domid, unsigned long *map)
>  {
> -    ASSERT(pcidevs_locked());
> -
>      if ( domid == DOMID_INVALID )
>          return;
>  
> -- 
> 2.36.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 03:35:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 03:35:26 +0000
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>
Subject: RE: [PATCH] xen/common: Constify the parameter of _spin_is_locked()
Thread-Topic: [PATCH] xen/common: Constify the parameter of _spin_is_locked()
Thread-Index: AQHZMoJm2PD2ghcl7Emq47YXKIuaKq6zEkOw
Date: Sat, 28 Jan 2023 03:34:29 +0000
Message-ID:
 <AS8PR08MB79916E7213108E3960B79AEA92CD9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230127190516.52994-1-julien@xen.org>
In-Reply-To: <20230127190516.52994-1-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> Subject: [PATCH] xen/common: Constify the parameter of _spin_is_locked()
>
> From: Julien Grall <jgrall@amazon.com>
>
> The lock is not meant to be modified by _spin_is_locked(). So constify
> it.
>
> This is helpful to be able to assert the locked is taken when the

Nit: s/locked/lock/
This definitely can be fixed on commit.

> underlying structure is const.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

I tried to test this patch on FVP (both Arm32 and Arm64) to be safe.
This patch is good (as expected of course :)), so:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>
Tested-by: Henry Wang <Henry.Wang@arm.com> #Arm

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 03:42:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 03:42:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486056.753531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLc6X-0002T0-JI; Sat, 28 Jan 2023 03:42:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486056.753531; Sat, 28 Jan 2023 03:42:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLc6X-0002St-GV; Sat, 28 Jan 2023 03:42:13 +0000
Received: by outflank-mailman (input) for mailman id 486056;
 Sat, 28 Jan 2023 03:42:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dFBb=5Z=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLc6V-0002Sn-RZ
 for xen-devel@lists.xenproject.org; Sat, 28 Jan 2023 03:42:11 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2057.outbound.protection.outlook.com [40.107.6.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bda8d805-9ebd-11ed-b8d1-410ff93cb8f0;
 Sat, 28 Jan 2023 04:42:09 +0100 (CET)
Received: from DB6PR07CA0073.eurprd07.prod.outlook.com (2603:10a6:6:2b::11) by
 PAXPR08MB6350.eurprd08.prod.outlook.com (2603:10a6:102:12c::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Sat, 28 Jan
 2023 03:42:07 +0000
Received: from DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2b:cafe::d) by DB6PR07CA0073.outlook.office365.com
 (2603:10a6:6:2b::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6064.13 via Frontend
 Transport; Sat, 28 Jan 2023 03:42:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT019.mail.protection.outlook.com (100.127.142.129) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.21 via Frontend Transport; Sat, 28 Jan 2023 03:42:06 +0000
Received: ("Tessian outbound 8038f0863a52:v132");
 Sat, 28 Jan 2023 03:42:06 +0000
Received: from de0917c2af9a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0216A9BB-6B29-4C63-A849-EA2666D0833E.1; 
 Sat, 28 Jan 2023 03:42:00 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id de0917c2af9a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 28 Jan 2023 03:42:00 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PR3PR08MB5786.eurprd08.prod.outlook.com (2603:10a6:102:85::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Sat, 28 Jan
 2023 03:41:59 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Sat, 28 Jan 2023
 03:41:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bda8d805-9ebd-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XGDSjEtfcXizPgazm0/AW6Wagym2+2R8sTT6ABlbzdU=;
 b=BRm+OaCGgLaUvw5bRNQbthxU9OsCBDUGlDdYgri0unmrlmKSRSEjO5xPraS0wLZTtrk6B2qyMEZFY3mxDP0MM72piRrwBPahyoRf129zHwd2g4SZeJwmi4hkrdNZo+Dvn9rzjp6jWbobF7mFnaCxxTBM9liF6jS6ddARp55HWJI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HdTS59d+QMaCfHvNCPVH3h0wKQdVgr1OESMSlkOrF7Qumlh4NEpfP+UL5CrK54EhCx3Bqqhy9J+eJTwvxDcBilEKVgSuVfVYHuLyenKUl+tOKjWLXWhH2SXb3KRei+ulSdAUiSACobEwG+YAho90CvDd1EYAyO0zQyXXH5J79XP9nV/O5hAvWKPf7QKMXktp5y7i1qGE3RMay29KNt8iL1RRDe93Hgbza7UrXXv2nNqZcDED83HndvpJWEpKIB5CVwU/0Mqhqtl5+kFQXFVxeifFPPP5izaf/qHV76UXTJdDGuh+ZP8oqH+kpSp59rSHfJmJOUyYFvGfgDP56Heqeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XGDSjEtfcXizPgazm0/AW6Wagym2+2R8sTT6ABlbzdU=;
 b=Hfb1lD/1Zg6EN50al9TagK1kVI43fkxJ7b5KhEJ2vNk6t3G4xqxidKjbSoSuBmSeM47I8SyDMW3sOstWxw7NUKpv7uniFeQk7/GX+LMcRZh2oB1gpYRHlNVbxAEtxt5TpAGndYzm2LWCQ51vWC6YOMCwWRcN5AO6+VWOowniMbU1ZUYFzfCXVa11rHGRbw/v9rZv1d7Xl/DqDkw+fENNNiSSf0/402vwNU1j4NKVsxs4wYNRbn3U0QRXLbcVQzSkpVuzKZP3y5fEEEk/4CKvzCy92iQ0JrBzpmqmqDn7VlXzaW0Sw5oXCD4By314vyG+qyjooshuO9aN9CKNhs8Spg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Anthony PERARD <anthony.perard@citrix.com>
Subject: RE: [PATCH] tools/xenstored: hashtable: Constify the parameters of
 hashfn/eqfn
Thread-Topic: [PATCH] tools/xenstored: hashtable: Constify the parameters of
 hashfn/eqfn
Thread-Index: AQHZMoEK6Qg1GivE6U2ZJONPV13z5a6zGZ7w
Date: Sat, 28 Jan 2023 03:41:59 +0000
Message-ID:
 <AS8PR08MB7991F1FCC828DBACE24605C192CD9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230127185546.65760-1-julien@xen.org>
In-Reply-To: <20230127185546.65760-1-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 07B9CA2BF1E8ED4EBA1AF7D7047076A4.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PR3PR08MB5786:EE_|DBAEUR03FT019:EE_|PAXPR08MB6350:EE_
X-MS-Office365-Filtering-Correlation-Id: eea51cb8-7ac7-4271-9c3f-08db00e1a0c0
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Rcm31ZHUKkA7C33UPcTY832hO9b0MpDK1dNl3GFmpB4D7yYnpgFiftJLa89hyKm1fxT/TYcZIu6z8l9FlDf6F9ECfA5x/InRO/moWRixizMfMQELxGq5MtiygaX8nbHhzUJnegMxaDI6KKGeUIsftj2tNpxWdIoejUvTvhcH6tXUHYoJQoouWtJEY5FM1R/LusAtCXakHXDuJC5r3K8Sp6/N4dlbjYTCuSVMmgHATXnGun1a1+PdEn/0reNYvqTJ/BU5HfmtRDJ9FVm2501o4wa2ZzaBIi9oEF9Va9sxI2YUSdT446FrapwVpFlwSs3NZEvj5aAlxU0cU6H7LSY3fsf2xu/cBG6yn4uzoY7YvAM9Xg3xnJBHmHhH6EwP1cXib4jPiCIeWTrKJiRh4JAaL/H8TyfN25d6yHNsqgSFC2ggXzRvLM9KIng9DzBUIN/ntSLBI/41YKF2CenAt0p4RBz8NQ1yqscsi/h+frVYNDU2EnpnyRequsoLs1R/0czGs42l78NxPzFy8GWFJgRt0pdHEZzYTw5unAbzDoiT+fKtuCUzjovXWtWdHR3eQJ6Pnuy24/T0cCMLAgAZ2j3fRy1BHA7Loi8Q/bn5DtpJBishtqoVROX3aNSYb4UB9S1o3gF6JnxWfZ4wgmdwuLovsYelPgPOHAjMUIqmR8spuRkoks6dHOIb27DJHIAnG6JJ
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(346002)(136003)(366004)(396003)(376002)(39860400002)(451199018)(83380400001)(5660300002)(316002)(52536014)(54906003)(8936002)(6506007)(71200400001)(122000001)(7696005)(86362001)(478600001)(26005)(38100700002)(186003)(38070700005)(110136005)(55016003)(2906002)(4744005)(9686003)(33656002)(66556008)(64756008)(66446008)(66476007)(8676002)(76116006)(66946007)(4326008)(41300700001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5786
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ffe88aa1-4eef-4c4b-ad5c-08db00e19c35
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Jhd9dG3OWVVqPjD2mfeMm57Fg0j8YYqeAm/kf//1t3ydE9NpvdrRAhp0RnjpHHYlTYLFwUYNH44/CZqF0CY3ELNZymDtO6eYJmkdHo+Frr0peK3MEnZgirpqJOAUyGyvtwlQ/7YB63742qbFy423p0rBfs9eRX5H3jw0bCxqMbeoNlNbh/gRfSTluooUXYm6YmoeQEB2rO7HNMI29DGk0qEEdCVyKLmqD7FMOqW+AjWE4XBldfpDadpztDm+a+3Fj+3e0nTkKG9SPjTrFEtK3BBeAUclZy6h7r803Y6fNEu+kbl6Eo+nm1+gy24UPkakLiz3zN3wFqVytEN7dhVAAN3f/c4fVsjigI5ufTwCdmjfW/1UEkNRpkntXvHfNkybmDLcPqXXWdZwvh11kw1PGl1Zma5jGoQKtrfA5fQtdp8T+A6JeNGTy1QskP8ZN895tzu2XuH2kvpVRALxNBVG1x8bp1PwStLPNRvzXVJRe1P2LfDNbgo3hpBWaLGvbpA8786A7hikP7fDpNZzpV6z9Ffd1K20WtyCOPNDU15/umHG2SSf98yWaT6Wc+hq3t11U2ZeS455Lngae1kcsZqJb9vGMpi8Xf5wNGZnbu4qWBlwkdmB7tS5eNdfylEiynQ1ENvx4HTxuPuAdAt1tciHtgJcBGWpHMQeo0HMImVt7lsldz/MjO8vEyhHPVgjDDnB
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(376002)(346002)(396003)(136003)(451199018)(40470700004)(46966006)(36840700001)(4744005)(52536014)(2906002)(47076005)(5660300002)(82310400005)(83380400001)(55016003)(336012)(40480700001)(356005)(81166007)(82740400003)(4326008)(110136005)(41300700001)(8676002)(8936002)(54906003)(36860700001)(33656002)(316002)(70206006)(70586007)(6506007)(9686003)(40460700003)(86362001)(26005)(186003)(7696005)(107886003)(478600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jan 2023 03:42:06.7676
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: eea51cb8-7ac7-4271-9c3f-08db00e1a0c0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6350

Hi Julien,

> -----Original Message-----
> Subject: [PATCH] tools/xenstored: hashtable: Constify the parameters of
> hashfn/eqfn
>
> From: Julien Grall <jgrall@amazon.com>
>
> The parameters of hashfn/eqfn should never be modified. So constify
> them and propagate the const to the users.
>
> Take the opportunity to solve some coding style issues around the
> code modified.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Tested the build:
Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 04:22:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 04:22:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486064.753542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLcjW-0007lF-M2; Sat, 28 Jan 2023 04:22:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486064.753542; Sat, 28 Jan 2023 04:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLcjW-0007l8-J5; Sat, 28 Jan 2023 04:22:30 +0000
Received: by outflank-mailman (input) for mailman id 486064;
 Sat, 28 Jan 2023 04:22:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dFBb=5Z=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLcjV-0007l2-B9
 for xen-devel@lists.xenproject.org; Sat, 28 Jan 2023 04:22:29 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2082.outbound.protection.outlook.com [40.107.14.82])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5e8f6d29-9ec3-11ed-a5d9-ddcf98b90cbd;
 Sat, 28 Jan 2023 05:22:27 +0100 (CET)
Received: from AM7PR04CA0025.eurprd04.prod.outlook.com (2603:10a6:20b:110::35)
 by GV2PR08MB8193.eurprd08.prod.outlook.com (2603:10a6:150:78::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Sat, 28 Jan
 2023 04:22:22 +0000
Received: from AM7EUR03FT021.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:110:cafe::82) by AM7PR04CA0025.outlook.office365.com
 (2603:10a6:20b:110::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23 via Frontend
 Transport; Sat, 28 Jan 2023 04:22:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT021.mail.protection.outlook.com (100.127.140.243) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.22 via Frontend Transport; Sat, 28 Jan 2023 04:22:21 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Sat, 28 Jan 2023 04:22:21 +0000
Received: from f430aefa61eb.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 10AD6028-5B4F-4279-BCD9-91E083529096.1; 
 Sat, 28 Jan 2023 04:22:15 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f430aefa61eb.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 28 Jan 2023 04:22:15 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAVPR08MB9138.eurprd08.prod.outlook.com (2603:10a6:102:30d::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6002.33; Sat, 28 Jan
 2023 04:22:02 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Sat, 28 Jan 2023
 04:22:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e8f6d29-9ec3-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5xAuOrLTh7mlsR9R63XbKpHBxxCyVMa9LcjLskT0hoM=;
 b=REUZpziHCMUVfV6WNws+ctyngkYl/LN9qqNEZIh/5AThYaUUZt2jOpCnoev3FL68WejKLNwQ78AKS08R7HSd3zJYwvVoM/bkVvVycenpUovGasaY85VDpcEQZmXXCkNAwR95SqPhNQr+Y8bDm+rPyn8ehxsYeaSCq7kDVBOAfkA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BT2/aulhOtPE4SXi1gaT0qZ8PFNwBrXr5eqWU8IA+F1FR4kX1B1cNns+Y8bQQHmVyLxXv4pRc5DZMKvbvfFYAN2uuEmeOdabvaajo6A6+M7lJS9LaxcbIBXR9aDh8exMQ5QycmIquPACAYrRN1rEsigEd6EHsrXrL8Rx6axj55MyQv7KJO7UUr20NGp25apzm9fAsaARtXUCF40f1X2ULZpywsfSrlXFljVVBP3vrTzdDNI3Mqi8fN/W8FMiV/ND78tv7Pls2AqFMyi3/PkAt/1tCf89dF1kln6/y7MiSsdX38tDi8TTXvO4VHE/MasREEa0O8zL9NSw0wTsNIoooQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5xAuOrLTh7mlsR9R63XbKpHBxxCyVMa9LcjLskT0hoM=;
 b=ebBVYd9I2+t1xNwtxGbi29zSyd+eaYbDpRrkihfVdDqbtePsRHrMPDATo77RyIqlqvyjhycZdYg9DyRw5d86N0MbHJOpNjVSlCzPfsb0pfVcDwiPLbeUe7uwTR3156cyFX9ydLxNCUGsKljaT/yG4B6m14Ko2xPWKFqioU4mG7xa7UaCQ4I7Br2tiDIbSxk1hrlTBK8kdRrO2cx0M6egDwdGXiDf2Y+0Kv53G0WTopMkGMNhNPmWRJJOQ8M1I2K7vCG7ijYi61zulmaddCvJJw4fjw99rSzbiaG2xX/535NCQZKwPpIbpuyeTNii/EYGviO6oA+b36Wmhz5RKa+NcA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v5 1/5] xen/arm32: head: Widen the use of the temporary
 mapping
Thread-Topic: [PATCH v5 1/5] xen/arm32: head: Widen the use of the temporary
 mapping
Thread-Index: AQHZMolUoPTZuw2UFUyOpejFjtxMfK6zFitQ
Date: Sat, 28 Jan 2023 04:22:02 +0000
Message-ID:
 <AS8PR08MB79912CB0263983B5551A946B92CD9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-2-julien@xen.org>
In-Reply-To: <20230127195508.2786-2-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: B9FFF2330491DA41B0C9B0C61B5904E6.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAVPR08MB9138:EE_|AM7EUR03FT021:EE_|GV2PR08MB8193:EE_
X-MS-Office365-Filtering-Correlation-Id: f687242e-b46e-493c-7a34-08db00e7402c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 8zbaKRGvAbEeEzeP2EMIRDkvW0/l7T9zreLavb9X6mvHSVOAAeb/PzphJ9Hf+YEXNPsSlTUD4DrTRvg17NZ74yCvAWa6HSeFI1UWfU+S+X7dq5+bi+OUNp/26XJL1RnTkchsWm+Lb9x6ILAaQ7YKSsAT1Ezavb4FTXbGxRTQxsNCLiD2rGY1bUw/q5AWfAFMw2qOG5dXU2+s/5e38ILq2f1ade0l/3pocH2TW80HxBk+WMNSM/syMWzb9E8dBe9RoTU1VeyfCb77V5IuhvC4QHWTSPOZ8gb3VRhT9YE8THbN7upSW6UcUAoA7dyNVUsJbETrdQ3y+HBqYefvOtulLTrkuMW1Zm9fNoIik8WmryxRDeE2cuDXebjtPn5a+4G0F37uR4OkxNDp/qKgUZ/S66SNa3GLKvKhJRqLbXneSN2Osnwra3LU94yKF1hPBSXlcjIrbgoGbJyiyDf4RdQ7xp0cGRlyvE2++tyfiAc06XeACEVkl46fTViHLb5E19WIdSx7A9ERE+FM5VNhPozoBrPU5LrdrL9B+uFgkhKB6iH/bMiZ+JkUmnpjNMWIrwg7jnU1QuUXvGNu916xOiATT60/j8b2+xrvUGm8KX15M2BsM7dHnZQMMnB+2GNcIGhrvEcOJeBRmbMGSLgOrDDmPU3VJNdjYU8uzzB7oLJwrFvXPSFu7JDTdbpYkef2pJKDM/u0mTXWQc8EDuxpcRxKD6x8u9OZOdNOfhe6N10Lm3M5Lwq9tobf6b0MQtaLC8k9
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(136003)(39860400002)(346002)(396003)(376002)(366004)(451199018)(186003)(6506007)(38070700005)(66446008)(7696005)(76116006)(64756008)(66946007)(8936002)(66556008)(4744005)(8676002)(66476007)(4326008)(2906002)(9686003)(33656002)(41300700001)(54906003)(71200400001)(110136005)(55016003)(122000001)(86362001)(316002)(478600001)(26005)(5660300002)(52536014)(83380400001)(38100700002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9138
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	434181c9-4b29-42b9-fd7c-08db00e734b5
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YvZX3ZKWc14+9hDAjLbOkY5Qy7eCcF9Dc4pQfZAAbzJ0eDWxfjZ6+o+vYMQh6yVzbV6oCUc74qOtH4T4gqgpS56FJ2yej4CK9Z6qhUdit5SpO+TzlE0CSYaUOZbBD5RUY5ibv3qab2xyXuegMgNNni4TmO+bFLq/6DkGCUFIGEw8XghYzmqsaxb8tQKbYXS9/Wu63VeAwSeJhTT6fStVqODrfX2p2xDebhXBsaGrZOH63hHp7NAB4vgNJHtbM1dpMhseqbnRLcOLonvzTVO5vjpEyO9oOGzmmB6pGSZOkaWGFP8X4Xz7xvNXsDrmVzs+d+qsGFl9YNJ00+KbsXHqElq0TJfFFex70f/nxVk2UF/hY7ZTtHM0wofaWO1V2s9HBZkp/IGynKT1QFp5K1FM+SrYIdJTXFpUtLIKWbAWhVwgDFb/L6BLFK/1gGERYx9+MX+mdnALzvV2a3WtfeOs8jE7PI1JKv1YxKU74LC6//Cg2WXwmJLxEbvBL1xw26VJMFiMx6K3ZX5SME/aKifNR9wpwXFdBBrq3VTRMGrhilZOOsKmLYGO9cn3IlIbz4UJSA1p3RhIBGd8wi5ulPBwMP5IglBPKAi47mVg5lOaWxynvo6RYLM3JJb1thDmNzrphYemYjGPE/ZtpSVmPGJKJHwI/7uNpnTsGByS4rfDZLdnlHNiTTHz74LdOa5FVMGAc/M0d9egx54CvGiTPJlAgQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(376002)(39860400002)(136003)(396003)(346002)(451199018)(40470700004)(46966006)(36840700001)(47076005)(8676002)(7696005)(33656002)(40460700003)(86362001)(81166007)(26005)(55016003)(36860700001)(83380400001)(9686003)(186003)(82310400005)(336012)(40480700001)(54906003)(356005)(41300700001)(70586007)(4744005)(110136005)(107886003)(478600001)(4326008)(6506007)(316002)(5660300002)(70206006)(8936002)(52536014)(82740400003)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jan 2023 04:22:21.6690
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f687242e-b46e-493c-7a34-08db00e7402c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8193

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v5 1/5] xen/arm32: head: Widen the use of the temporary
> mapping
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, the temporary mapping is only used when the virtual
> runtime region of Xen is clashing with the physical region.
>
> In follow-up patches, we will rework how secondary CPU bring-up works
> and it will be convenient to use the fixmap area for accessing
> the root page-table (it is per-cpu).
>
> Rework the code to use temporary mapping when the Xen physical address
> is not overlapping with the temporary mapping.
>
> This also has the advantage to simplify the logic to identity map
> Xen.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Tested on FVP in Arm32 mode, so:
Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
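The quoted patch description hinges on whether one address region clashes with another. As a hedged sketch (not Xen's actual code; the function name and ranges are illustrative), the half-open interval overlap test that kind of decision rests on looks like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: two half-open ranges [start, start + size)
 * overlap iff each starts before the other ends. This is the shape of
 * check implied by "the Xen physical address is not overlapping with
 * the temporary mapping". */
bool ranges_overlap(uint64_t a_start, uint64_t a_size,
                    uint64_t b_start, uint64_t b_size)
{
    return a_start < b_start + b_size && b_start < a_start + a_size;
}
```

With a check like this, the boot code can pick the temporary mapping whenever the physical region is clear of it, rather than only in the clashing case.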


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 04:28:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 04:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486070.753552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLcok-00008X-Af; Sat, 28 Jan 2023 04:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486070.753552; Sat, 28 Jan 2023 04:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLcok-00008Q-7h; Sat, 28 Jan 2023 04:27:54 +0000
Received: by outflank-mailman (input) for mailman id 486070;
 Sat, 28 Jan 2023 04:27:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLcoi-00008F-H7; Sat, 28 Jan 2023 04:27:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLcoi-0002lo-Cx; Sat, 28 Jan 2023 04:27:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLcoi-00042e-2T; Sat, 28 Jan 2023 04:27:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLcoi-0003Xq-0m; Sat, 28 Jan 2023 04:27:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fqTQ2XORURePxxwvocROiHV5LmJkISZ1XyYwr9eflnI=; b=qlr0fwVnYY/pNTy2KaNVUxseAs
	uLbCrKzchS8CuDxNJPOuUcoMvdgoW+TOoeLlmgF3cFXrmR2uq6HTryQkBnR0JI0TtZRk68Drc1JFy
	vVbTDfOQHPHMT+Agcy2jzNW0E664oq3SfjVIB5ZoTIyNfPY6vutvqG4bbd7aP9hJvUeY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176252-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176252: trouble: broken/fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-arndale:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-libvirt-raw:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl-cubietruck:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl-credit1:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl-credit2:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-examine:host-install:broken:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-examine:capture-logs:broken:regression
    xen-unstable:test-armhf-armhf-libvirt:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl-vhd:host-install(5):broken:regression
    xen-unstable:test-armhf-armhf-xl-rtds:host-install(5):broken:allowable
    xen-unstable:test-armhf-armhf-libvirt-raw:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:capture-logs(6):broken:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:capture-logs(6):broken:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Jan 2023 04:27:52 +0000

flight 176252 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176252/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-libvirt-qcow2  <job status>                 broken
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-cubietruck  <job status>                 broken
 test-armhf-armhf-xl-multivcpu   <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 test-armhf-armhf-xl-arndale   5 host-install(5)        broken REGR. vs. 175994
 test-armhf-armhf-xl           5 host-install(5)        broken REGR. vs. 175994
 test-armhf-armhf-libvirt-raw  5 host-install(5)        broken REGR. vs. 175994
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)      broken REGR. vs. 175994
 test-armhf-armhf-xl-cubietruck  5 host-install(5)      broken REGR. vs. 175994
 test-armhf-armhf-xl-credit1   5 host-install(5)        broken REGR. vs. 175994
 test-armhf-armhf-xl-credit2   5 host-install(5)        broken REGR. vs. 175994
 test-armhf-armhf-examine      5 host-install           broken REGR. vs. 175994
 test-armhf-armhf-xl-multivcpu  5 host-install(5)       broken REGR. vs. 175994
 test-armhf-armhf-examine      6 capture-logs           broken REGR. vs. 175994
 test-armhf-armhf-libvirt      5 host-install(5)        broken REGR. vs. 175994
 test-armhf-armhf-xl-vhd       5 host-install(5)        broken REGR. vs. 175994

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      5 host-install(5)        broken REGR. vs. 175994

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw  6 capture-logs(6)       broken blocked in 175994
 test-armhf-armhf-xl-credit2   6 capture-logs(6)       broken blocked in 175994
 test-armhf-armhf-libvirt-qcow2  6 capture-logs(6)     broken blocked in 175994
 test-armhf-armhf-xl           6 capture-logs(6)       broken blocked in 175994
 test-armhf-armhf-xl-multivcpu  6 capture-logs(6)      broken blocked in 175994
 test-armhf-armhf-xl-cubietruck  6 capture-logs(6)     broken blocked in 175994
 test-armhf-armhf-xl-rtds      6 capture-logs(6)       broken blocked in 175994
 test-armhf-armhf-xl-arndale   6 capture-logs(6)       broken blocked in 175994
 test-armhf-armhf-xl-vhd       6 capture-logs(6)       broken blocked in 175994
 test-armhf-armhf-xl-credit1   6 capture-logs(6)       broken blocked in 175994
 test-armhf-armhf-libvirt      6 capture-logs(6)       broken blocked in 175994
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    7 days
Failing since        176003  2023-01-20 17:40:27 Z    7 days   16 attempts
Testing same since   176222  2023-01-26 22:13:29 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  George Dunlap <george.dunlap@cloud.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      broken  
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-step test-armhf-armhf-libvirt-raw capture-logs(6)
broken-step test-armhf-armhf-xl-credit2 capture-logs(6)
broken-step test-armhf-armhf-libvirt-qcow2 capture-logs(6)
broken-step test-armhf-armhf-xl capture-logs(6)
broken-step test-armhf-armhf-xl-multivcpu capture-logs(6)
broken-step test-armhf-armhf-xl-cubietruck capture-logs(6)
broken-step test-armhf-armhf-xl-rtds capture-logs(6)
broken-step test-armhf-armhf-xl-arndale capture-logs(6)
broken-step test-armhf-armhf-xl-vhd capture-logs(6)
broken-step test-armhf-armhf-xl-credit1 capture-logs(6)
broken-step test-armhf-armhf-libvirt capture-logs(6)
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-step test-armhf-armhf-xl host-install(5)
broken-step test-armhf-armhf-libvirt-raw host-install(5)
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)
broken-step test-armhf-armhf-xl-cubietruck host-install(5)
broken-step test-armhf-armhf-xl-credit1 host-install(5)
broken-step test-armhf-armhf-xl-credit2 host-install(5)
broken-step test-armhf-armhf-examine host-install
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-examine capture-logs
broken-step test-armhf-armhf-libvirt host-install(5)
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)

Not pushing.

(No revision log; it would be 1225 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 04:55:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 04:55:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486080.753562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLdEi-0003sL-LL; Sat, 28 Jan 2023 04:54:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486080.753562; Sat, 28 Jan 2023 04:54:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLdEi-0003sE-Hg; Sat, 28 Jan 2023 04:54:44 +0000
Received: by outflank-mailman (input) for mailman id 486080;
 Sat, 28 Jan 2023 04:54:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dFBb=5Z=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pLdEg-0003s8-Rk
 for xen-devel@lists.xenproject.org; Sat, 28 Jan 2023 04:54:43 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2082.outbound.protection.outlook.com [40.107.15.82])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id de8600bd-9ec7-11ed-a5d9-ddcf98b90cbd;
 Sat, 28 Jan 2023 05:54:39 +0100 (CET)
Received: from DB6PR0601CA0010.eurprd06.prod.outlook.com (2603:10a6:4:7b::20)
 by PAWPR08MB9663.eurprd08.prod.outlook.com (2603:10a6:102:2e0::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.28; Sat, 28 Jan
 2023 04:54:36 +0000
Received: from DBAEUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:7b:cafe::2b) by DB6PR0601CA0010.outlook.office365.com
 (2603:10a6:4:7b::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23 via Frontend
 Transport; Sat, 28 Jan 2023 04:54:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT044.mail.protection.outlook.com (100.127.142.189) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.21 via Frontend Transport; Sat, 28 Jan 2023 04:54:35 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Sat, 28 Jan 2023 04:54:34 +0000
Received: from e7c29e8277fd.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 836F390F-E9AA-4110-B4DD-C86D929B1F01.1; 
 Sat, 28 Jan 2023 04:54:28 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e7c29e8277fd.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 28 Jan 2023 04:54:28 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAXPR08MB6558.eurprd08.prod.outlook.com (2603:10a6:102:151::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.25; Sat, 28 Jan
 2023 04:54:27 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.023; Sat, 28 Jan 2023
 04:54:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de8600bd-9ec7-11ed-a5d9-ddcf98b90cbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3JT/aJUB8U4AgaGOam6ZEMPMEVnAM6/l9M9Q8RKHRzw=;
 b=wbcLZF1L7hRZNDAy5Al+ZCIrGu2tP0KzG2+4N1SoNUedd/xVlGrY6e0p+W5PZKTrJjsVew/SmdobreqOvcAlGyFtcziG6txOMriq6WwFp/R/TsbNVPMneQA7oZuV5CCm4Dmqvu67lx5WOxm07py1W9FtiwCu3IrvKqnkfZJfvq8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QovIZRGzb8n3AVEeHbDozuQ5+S6KeS1spXUu+l19sBydMf+2LU3NUPn7WIZC53bJCmWUnLbN+vwJZFxOfQW8aTzYxfQtveHxv7H0lsjozp+yut7/1ghxBl6BFe4NiN2q6f+ODmvaeYoxO2Q0rMAgtovxDfw1OL8nWYuX7NoqUW9lA0EGQkrjmqjNCPPjXU7mZrF7S4Z8jeoYywMK8YJJ58S4wes94xAwGkXOb/dFMJ6lZgjtsI5ftrghMuzmGgtjeWhYtuHXVmCKpcjumzNSVgcfqJiDqkzc/NzQ8Vq9fOroPQLwqc4+B2eyHyndMPMpsUYSDKGl8GXrM5ZQnorYQw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3JT/aJUB8U4AgaGOam6ZEMPMEVnAM6/l9M9Q8RKHRzw=;
 b=B6is1xiqObPSjVd8MKUurnD9f/K5G+xic1ljINhDcws6Mo/oFPvgy/0wDMvD5PsRkg5Xt9niMv9U5Y93X6jJ02Y4S/xz9qrl55KEsFBI0aCHrIYoqdhPHgfVW0ICQDIc8lHIXpAxRfODvz5iVFY9ML09XeAy5jrovVHoyYfbmr4yVN85dCBs/Kc8y/Ga1kx4e7Tup5PsBLk8L3fPN20uRPPe3IagOxyCvwG3vBpQHgzC2THuNyvGo9z4yNmFPBWeHE70QmgPe0MmfUPtnDoPiMkccVv0ebRbCxET705p8mKQfwIHaERK9ayFxcYLyN/6phloca07wUOViEPDsAcCIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3JT/aJUB8U4AgaGOam6ZEMPMEVnAM6/l9M9Q8RKHRzw=;
 b=wbcLZF1L7hRZNDAy5Al+ZCIrGu2tP0KzG2+4N1SoNUedd/xVlGrY6e0p+W5PZKTrJjsVew/SmdobreqOvcAlGyFtcziG6txOMriq6WwFp/R/TsbNVPMneQA7oZuV5CCm4Dmqvu67lx5WOxm07py1W9FtiwCu3IrvKqnkfZJfvq8=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v5 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Topic: [PATCH v5 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Index: AQHZMolY4mue8S6qKE2ltVytEV95o66zFE0A
Date: Sat, 28 Jan 2023 04:54:25 +0000
Message-ID:
 <AS8PR08MB7991FBCB731CC548C98ECA2792CD9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-4-julien@xen.org>
In-Reply-To: <20230127195508.2786-4-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 5E3F718C6378674A847101709A516BA3.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAXPR08MB6558:EE_|DBAEUR03FT044:EE_|PAWPR08MB9663:EE_
X-MS-Office365-Filtering-Correlation-Id: d01c452d-b2fb-4069-a57f-08db00ebc092
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 r6Gf9fTXr8J/6vCP/NL7tHMcLznIaNhB81chsfL3rnt1l2w6ae1quNTwjpqHNi3VEogi0ollPa0i81RXxLHBf1b06vijgcbGYhUIsIEqP8ICShchoN0HB329U28rOWFWx7LuzGKbbc5vaaDWesViPkV6QRyQvjk6Z7w3kC3T0DOVn77c2zrU9vpj7psnO81tjzo/D4YGKEkUPwCqaW+oUrBlSDtffV4rtlTwDs5NOoR/gm8RaUMJIKFCqKoE5Nw1uIodAHhOeQHGqkgWEanaLDbUrblBhS3Sll8M65Dd6RqFsjD4854/SPiSVDdac2Nje6G76WUe5kQ4vbGDzhNefMUJ0je+tdqBqymHxyWt8m3gkebQOqkMrPNymtvTwr39UEqm+/pTr1kwuT27hJLQk9ExAHdEtj6X/qDrCmQ6a4XUnRz6VJlMpIph0I1rPrjeySHsJDZhAz8hKaypZkeQClnWzgh1utljXSJMwRzpZ2V5c98YiqWeLPm/dM0C7GH4nOCeacdyCX8eANNex0xG6fZygnlh0GHZGbceviESgJJ+GD1lf0YtS20Hb9i6OvEMLlDbFVJZf0ERXP1RfdzSb0CKo3bxVs9DK2ytYA6jIAQ8NNQuSa1o0Hl3mW8UaZtVAK6jHjGA34VeSSXPkrQhCc6RW90z07lRVuXyjhe7FxVYqFKDkZRjoBnVJSP/gWn31vH6NoYVknq+wuCOIlRm7w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(396003)(346002)(136003)(366004)(39860400002)(376002)(451199018)(66476007)(66556008)(64756008)(66946007)(76116006)(66446008)(41300700001)(83380400001)(4326008)(8676002)(52536014)(8936002)(316002)(122000001)(9686003)(33656002)(55016003)(2906002)(110136005)(54906003)(5660300002)(478600001)(86362001)(7696005)(71200400001)(6506007)(38070700005)(26005)(186003)(38100700002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6558
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ac73fc11-e0b8-49e9-ba8d-08db00ebbad1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	FOELtVZruyBzO0j7zUgxJtZGHUqxq+KbTQBHocGzFIxSrKj3/TnbVnb0e89oqVILappDbdy2Yv65VP0zDWEGAPX3t/jPJ2gIwO7FbqByLrVxjD+gEBxQ9rb+KwuTKy+gyAtglbnh3kzPhk9gfTBitarLTk3eFXXyKiumshynVlTXRfGXrXLjuOWty2H3pkMYfpw3TuvTkrASXcULLzBdoQcs5/lHBMZaY7D3T6gsOUsM5Hz9mnE/u9xdKX+ifL9efOJg2DWEIGq/D2IOLltnlo84dwcP7dSxFkcjMOWM7Q11Wtd1gXd10LgOW3CD5+H9vqEFR+altkm45rlTOTGD8zaEAv1vazsGqJ+sgjvgqfKZW9WDQBCCfUgdx+wm4eUoG4RrmHiUrGgkFEzH1w9ZhuRdM+c5ig04HwxAVELeNdBtdKFLa0yxSRdQKBitjvxGbYC23qY2jk8Hrs3UsupBHT8ycOdcnKuXIiRxkjpNTyLOkKiM4fsik/QqNWN1lCcFkHkbKYhE4v1FvjTbEReR0aHEuBijIFUziLT1q2qmGfBBi3NtWZSPrrOAwtbYkqm916S3FtwQ1vDHlr108AAdlMpk+KQ/cdyQmHDhS6hd2SBHBKPYXLeLRMZL0lTlbFunaj1WH743LISH/pEf3aI87W8XnousImx834dKqfK6b7cfZoMtuher2CQCWCN8pyRF7dUOBC7D9JMRudbeJl7IPA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(346002)(396003)(376002)(136003)(39860400002)(451199018)(40470700004)(36840700001)(46966006)(54906003)(110136005)(5660300002)(7696005)(40460700003)(86362001)(47076005)(336012)(2906002)(40480700001)(186003)(55016003)(33656002)(26005)(6506007)(107886003)(9686003)(478600001)(83380400001)(41300700001)(81166007)(4326008)(82310400005)(70206006)(70586007)(82740400003)(316002)(52536014)(356005)(8676002)(8936002)(36860700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jan 2023 04:54:35.1051
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d01c452d-b2fb-4069-a57f-08db00ebc092
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9663

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v5 3/5] xen/arm64: mm: Introduce helpers to
> prepare/enable/disable the identity mapping
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
> 
> On some platform, the identity mapping may have to start at 0. If we always
> keep the identity region mapped, NULL pointer dereference would lead to
> access to valid mapping.
> 
> It would be possible to relocate Xen to avoid clashing with address 0.
> However the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
> 
> Two new external helpers are introduced:
>     - arch_setup_page_tables() will setup the page-tables so it is
>       easy to create the mapping afterwards.
>     - update_identity_mapping() will create/remove the identity mapping
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>
With some nits below that can definitely be fixed on commit.

Tested on FVP in Arm64 mode, so:
Tested-by: Henry Wang <Henry.Wang@arm.com>

> +static void __init prepare_boot_identity_mapping(void)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    lpae_t pte;
> +    DECLARE_OFFSETS(id_offsets, id_addr);
> +
> +    /*
> +     * We will be re-using the boot ID tables. They may not have been
> +     * zeroed but they should be unlinked. So it is fine to use
> +     * clear_page().
> +     */
> +    clear_page(boot_first_id);
> +    clear_page(boot_second_id);
> +    clear_page(boot_third_id);
> +
> +    if ( id_offsets[0] != 0 )
> +        panic("Cannot handled ID mapping above 512GB\n");

Nit: s/handled/handle/

> +
> +static void __init prepare_runtime_identity_mapping(void)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    lpae_t pte;
> +    DECLARE_OFFSETS(id_offsets, id_addr);
> +
> +    if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
> +        panic("Cannot handled ID mapping above 512GB\n");

Same here.

> +void arch_setup_page_tables(void);
> +
> +/*
> + * Enable/disable the identity mapping in the live page-tables (i.e.
> + * the one pointer by TTBR_EL2).

Nit: I might be wrong but I think s/pointer/pointed/.

> + *
> + * Note that nested a call (e.g. enable=true, enable=true) is not

Nit: s/nested/nesting/ or "a nested call"?

> + * supported.
> + */
> +void update_identity_mapping(bool enable);

Kind regards,
Henry



From xen-devel-bounces@lists.xenproject.org Sat Jan 28 08:13:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 08:13:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486120.753589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLgKW-0004CF-9C; Sat, 28 Jan 2023 08:12:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486120.753589; Sat, 28 Jan 2023 08:12:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLgKW-0004C8-5R; Sat, 28 Jan 2023 08:12:56 +0000
Received: by outflank-mailman (input) for mailman id 486120;
 Sat, 28 Jan 2023 08:12:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLgKV-0004Bw-Dz; Sat, 28 Jan 2023 08:12:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLgKV-00009S-An; Sat, 28 Jan 2023 08:12:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLgKU-0005x2-Oy; Sat, 28 Jan 2023 08:12:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLgKU-0002Wt-OW; Sat, 28 Jan 2023 08:12:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ovSfkD7kLrNrB+jgsdj71fbK2W7cKSF7qtYlbe3u8QI=; b=c/RkA9hJBURDO3CirXcT+LASs6
	xE5pXKkSuotL6BMIrpO3Fg1xmm3gpsfJG6ghPiIzxk/ao3sB0av739OmdaWQ5cnjD+lBzASFCywzX
	kn7RkP0pWdonjp8zpVEYMUVOSb1iwZwuQGq+2mEac3QmMKWH4+9zgo7RGCvVhQvDGzGg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176256-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176256: regressions - trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:test-arm64-arm64-xl-vhd:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-xsm:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:build-armhf:syslog-server:running:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-install(5):broken:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:host-install:broken:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-cubietruck:capture-logs(22):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-vhd:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:capture-logs:broken:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:host-install(5):broken:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-raw:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-armhf:capture-logs:broken:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:capture-logs(9):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:capture-logs(6):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=7c46948a6e9cf47ed03b0d489fde894ad46f1437
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Jan 2023 08:12:54 +0000

flight 176256 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176256/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 test-arm64-arm64-xl-vhd         <job status>                 broken  in 176143
 test-arm64-arm64-xl-credit2     <job status>                 broken  in 176143
 test-arm64-arm64-xl-seattle     <job status>                 broken  in 176143
 test-arm64-arm64-xl-xsm         <job status>                 broken  in 176143
 build-arm64-xsm                 <job status>                 broken  in 176223
 build-arm64-pvops               <job status>                 broken  in 176223
 build-arm64-pvops          4 host-install(4) broken in 176223 REGR. vs. 173462
 build-arm64-xsm            4 host-install(4) broken in 176223 REGR. vs. 173462
 test-armhf-armhf-xl-vhd         <job status>                 broken  in 176245
 test-armhf-armhf-xl-arndale     <job status>                 broken  in 176245
 test-armhf-armhf-xl             <job status>                 broken  in 176245
 test-armhf-armhf-xl-credit1     <job status>                 broken  in 176245
 test-armhf-armhf-xl-cubietruck    <job status>                broken in 176245
 test-armhf-armhf-libvirt        <job status>                 broken  in 176245
 test-armhf-armhf-xl-multivcpu    <job status>                 broken in 176245
 test-armhf-armhf-xl-rtds        <job status>                 broken  in 176245
 test-armhf-armhf-libvirt-qcow2    <job status>                broken in 176245
 test-armhf-armhf-xl-credit2     <job status>                 broken  in 176245
 test-armhf-armhf-libvirt-raw    <job status>                 broken  in 176245
 test-arm64-arm64-xl-seattle   8 xen-boot       fail in 176135 REGR. vs. 173462
 test-armhf-armhf-xl-arndale   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-multivcpu  8 xen-boot      fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl           8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-vhd       8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-credit1   8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt-raw  8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt      8 xen-boot       fail in 176143 REGR. vs. 173462
 test-armhf-armhf-examine      8 reboot         fail in 176143 REGR. vs. 173462
 test-armhf-armhf-libvirt-qcow2  8 xen-boot     fail in 176143 REGR. vs. 173462
 test-armhf-armhf-xl-credit2   8 xen-boot       fail in 176143 REGR. vs. 173462
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle  5 host-install(5) broken in 176143 pass in 176135
 test-arm64-arm64-xl-vhd      5 host-install(5) broken in 176143 pass in 176256
 test-arm64-arm64-xl-xsm      5 host-install(5) broken in 176143 pass in 176256
 test-arm64-arm64-xl-credit2  5 host-install(5) broken in 176143 pass in 176256
 test-armhf-armhf-xl-arndale  5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-xl-credit1  5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-libvirt     5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-xl          5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-libvirt-qcow2 5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-examine      5 host-install   broken in 176245 pass in 176143
 test-armhf-armhf-xl-credit2  5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-xl-cubietruck 22 capture-logs(22) broken in 176245 pass in 176143
 test-armhf-armhf-xl-vhd      5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-xl-multivcpu 5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-libvirt-raw 5 host-install(5) broken in 176245 pass in 176143
 test-armhf-armhf-examine      6 capture-logs   broken in 176245 pass in 176143
 test-armhf-armhf-xl-rtds     5 host-install(5) broken in 176245 pass in 176223
 test-amd64-amd64-xl-xsm       8 xen-boot         fail in 176135 pass in 176256
 test-arm64-arm64-xl-vhd       8 xen-boot         fail in 176135 pass in 176256
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 176135 pass in 176256
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 176135 pass in 176256
 test-arm64-arm64-examine      8 reboot           fail in 176143 pass in 176256
 test-arm64-arm64-xl           8 xen-boot         fail in 176143 pass in 176256
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 176143 pass in 176256
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 176143 pass in 176256
 test-arm64-arm64-libvirt-raw  8 xen-boot         fail in 176143 pass in 176256
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 176245 pass in 176256
 test-amd64-amd64-xl-vhd      21 guest-start/debian.repeat  fail pass in 176245

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot       fail in 176143 REGR. vs. 173462

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 176223 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 176223 n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 173462
 test-armhf-armhf-xl-rtds  9 capture-logs(9) broken in 176223 blocked in 173462
 test-armhf-armhf-libvirt-qcow2 6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-xl-credit1 6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-libvirt  6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-xl-credit2 6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-xl-arndale 6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-xl-rtds  6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-xl       6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-xl-vhd   6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-xl-multivcpu 6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-libvirt-raw 6 capture-logs(6) broken in 176245 blocked in 173462
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail in 176245 never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail in 176245 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                7c46948a6e9cf47ed03b0d489fde894ad46f1437
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  112 days
Failing since        173470  2022-10-08 06:21:34 Z  112 days  232 attempts
Testing same since   176135  2023-01-26 00:10:53 Z    2 days    6 attempts

------------------------------------------------------------
3442 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-job test-armhf-armhf-xl-vhd broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-arm64-arm64-xl-vhd broken
broken-job test-arm64-arm64-xl-credit2 broken
broken-job test-arm64-arm64-xl-seattle broken
broken-job test-arm64-arm64-xl-xsm broken
broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-xl-vhd broken
broken-job build-arm64-xsm broken
broken-job build-arm64-pvops broken

Not pushing.

(No revision log; it would be 529026 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 14:29:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 14:29:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486158.753599 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLmC8-0004zL-Gw; Sat, 28 Jan 2023 14:28:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486158.753599; Sat, 28 Jan 2023 14:28:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLmC8-0004zE-Dm; Sat, 28 Jan 2023 14:28:40 +0000
Received: by outflank-mailman (input) for mailman id 486158;
 Sat, 28 Jan 2023 14:28:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLmC6-0004z4-Ul; Sat, 28 Jan 2023 14:28:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLmC6-0001ZE-QC; Sat, 28 Jan 2023 14:28:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLmC6-0005XJ-A2; Sat, 28 Jan 2023 14:28:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLmC6-0006eR-9I; Sat, 28 Jan 2023 14:28:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5Nq0XzKUbEmL8tMLjbKqVWdybwFexguCFun6CNjFUhk=; b=cY/T6JelSY09ZyR/SpqXSFpWUe
	I5juP3nxPiS/gYSF1eJNMZjnmoNLLGO2s7BESeTIEChM5cnFThI+GUifWrwctaTXHHQ9VY2u6wrVQ
	/hnuJbdmilLiZD61OoB20ct7kUw+ltGrOa1BUZlMdf7HcDKcz1C3m+aorSYK1QJIRHGg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176261-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176261: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-vhd:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-credit1:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-vhd:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-libvirt:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-credit2:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-arndale:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl:host-install(5):broken:regression
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:host-install(5):broken:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start.2:fail:heisenbug
    xen-4.17-testing:test-armhf-armhf-xl-rtds:host-install(5):broken:allowable
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:capture-logs(6):broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Jan 2023 14:28:38 +0000

flight 176261 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176261/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 test-armhf-armhf-libvirt        <job status>                 broken  in 176250
 test-armhf-armhf-xl-cubietruck    <job status>                broken in 176250
 test-armhf-armhf-xl-credit1     <job status>                 broken  in 176250
 test-armhf-armhf-xl-credit2     <job status>                 broken  in 176250
 test-armhf-armhf-xl             <job status>                 broken  in 176250
 test-armhf-armhf-xl-multivcpu    <job status>                 broken in 176250
 test-armhf-armhf-xl-arndale     <job status>                 broken  in 176250
 test-armhf-armhf-xl-rtds        <job status>                 broken  in 176250
 test-armhf-armhf-libvirt-raw    <job status>                 broken  in 176250
 test-armhf-armhf-xl-vhd         <job status>                 broken  in 176250
 test-armhf-armhf-libvirt-qcow2    <job status>                broken in 176250
 test-armhf-armhf-libvirt-qcow2 5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-xl-credit1 5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-xl-vhd    5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-libvirt-raw 5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-libvirt   5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-xl-credit2 5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-xl-multivcpu 5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-xl-arndale 5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-xl        5 host-install(5) broken in 176250 REGR. vs. 175447
 test-armhf-armhf-xl-cubietruck 5 host-install(5) broken in 176250 REGR. vs. 175447
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 176250 pass in 176261
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 21 guest-start.2 fail pass in 176250

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds   5 host-install(5) broken in 176250 REGR. vs. 175447

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-armhf-armhf-xl-credit1 6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-libvirt-qcow2 6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-libvirt-raw 6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-libvirt  6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-xl-credit2 6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-xl-multivcpu 6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-xl-vhd   6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-xl-rtds  6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-xl-arndale 6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-xl       6 capture-logs(6) broken in 176250 blocked in 175447
 test-armhf-armhf-xl-cubietruck 6 capture-logs(6) broken in 176250 blocked in 175447
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   37 days
Testing same since   176224  2023-01-26 22:14:43 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-xl-cubietruck broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-credit2 broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-arndale broken
broken-job test-armhf-armhf-xl-rtds broken
broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl-vhd broken
broken-job test-armhf-armhf-libvirt-qcow2 broken

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry has next pointing to the
    deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code took a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 16:35:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 16:35:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486173.753609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLoAJ-0003rA-Sf; Sat, 28 Jan 2023 16:34:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486173.753609; Sat, 28 Jan 2023 16:34:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLoAJ-0003r3-Pe; Sat, 28 Jan 2023 16:34:55 +0000
Received: by outflank-mailman (input) for mailman id 486173;
 Sat, 28 Jan 2023 16:34:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLoAH-0003qt-TS; Sat, 28 Jan 2023 16:34:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLoAH-0004mR-Mq; Sat, 28 Jan 2023 16:34:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLoAH-0002Q2-4f; Sat, 28 Jan 2023 16:34:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLoAH-0002Q0-4B; Sat, 28 Jan 2023 16:34:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hF0ilzwiacdyautKgSk5VBDQSEyGraj7d0HX8h0y8Og=; b=uPoHbN6qcRUP9l+I/hVEMogZ21
	8F7cNVFAd807fnf+dCDy1r4NmDVme0FnewpANYUBuza9554F3dYp/Kx4JvHIGDrrsR4t8sFlKrKLs
	9bdgo6Qs55pqzEkaAlzHIVCv084G0HkZ75y/Eial5VCn5pkdI6G5YHmd6Zf4UjVRkn9I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176262-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 176262: trouble: blocked/broken/pass
X-Osstest-Failures:
    libvirt:build-armhf:<job status>:broken:regression
    libvirt:build-armhf:host-install(4):broken:regression
    libvirt:build-armhf:syslog-server:running:regression
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:build-armhf:capture-logs:broken:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=9f8fba7501327a60f6adb279ea17f0e2276071be
X-Osstest-Versions-That:
    libvirt=95a278a84591b6a4cfa170eba31c8ec60e82f940
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Jan 2023 16:34:53 +0000

flight 176262 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176262/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176139
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176139
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              9f8fba7501327a60f6adb279ea17f0e2276071be
baseline version:
 libvirt              95a278a84591b6a4cfa170eba31c8ec60e82f940

Last test of basis   176139  2023-01-26 04:18:49 Z    2 days
Failing since        176233  2023-01-27 04:18:53 Z    1 days    2 attempts
Testing same since   176262  2023-01-28 04:20:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiri Denemark <jdenemar@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 9f8fba7501327a60f6adb279ea17f0e2276071be
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Thu Jan 26 16:12:00 2023 +0100

    remote: Fix version annotation for remoteDomainFDAssociate
    
    The API was added in libvirt 9.0.0.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Peter Krempa <pkrempa@redhat.com>

commit a0fbf1e25cd0f91bedf159bf7f0086f4b1aeafc2
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 16:48:50 2023 +0100

    rpc: Use struct zero initializer for args
    
    In a recent commit of v9.0.0-104-g0211e430a8 I've turned all args
    vars in src/remote/remote_driver.c to be initialized with {0}.
    What I've missed was the generated code.
    
    Do what we've done in v9.0.0-13-g1c656836e3 and init also args,
    not just ret.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit 2dde3840b1d50e79f6b8161820fff9fe62f613a9
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix missing device in crypto-builtin XML
    
    Another forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit f3c9cbc36cc10775f6cefeb7e3de2f799dc74d70
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix watchdog parameters in crypto-builtin
    
    Forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit a2c5c5dad2275414e325ca79778fad2612d14470
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:34 2023 +0100

    news: Add information about iTCO watchdog changes
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 2fa92efe9b286ad064833cd2d8b907698e58e1cf
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:30 2023 +0100

    Document change to multiple watchdogs
    
    With the reasoning behind it.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 926594dcc82b40f483010cebe5addbf1d7f58b24
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 11:22:22 2023 +0100

    qemu: Add implicit watchdog for q35 machine types
    
    The iTCO watchdog is part of the q35 machine type since its inception,
    we just did not add it implicitly.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2137346
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d81a27b9815d68d85d2ddc9671649923ee5905d7
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 14:15:06 2023 +0100

    qemu: Enable iTCO watchdog by disabling its noreboot pin strap
    
    In order for the iTCO watchdog to be operational we must disable the
    noreboot pin strap in qemu.  This is the default starting from 8.0
    machine types, but desirable for older ones as well.  And we can safely
    do that since that is not guest-visible.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 5b80e93e42a1d89ee64420debd2b4b785a144c40
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:26:21 2023 +0100

    Add iTCO watchdog support
    
    Supported only with q35 machine types.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1c61bd718a9e311016da799a42dfae18f538385a
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Tue Nov 8 09:10:57 2022 +0100

    Support multiple watchdog devices
    
    This is already possible with qemu, and actually already happening with
    q35 machines and a specified watchdog since q35 already includes a
    watchdog we do not include in the XML.  In order to express such
    possibility multiple watchdogs need to be supported.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit c5340d5420012412ea298f0102cc7f113e87d89b
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:28:52 2023 +0100

    qemuDomainAttachWatchdog: Avoid unnecessary nesting
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1cf7e6ec057a80f3c256d739a8228e04b7fb8862
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:25:06 2023 +0100

    remote: Drop useless cleanup in remoteDispatchNodeGet{CPU,Memory}Stats
    
    The function cannot fail once it starts populating
    ret->params.params_val[i].field.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d0f339170f35957e7541e5b20552d0007e150fbc
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:06:33 2023 +0100

    remote: Avoid leaking uri_out
    
    In case the API returned success and a NULL pointer in uri_out, we would
    leak the preallocated buffer used for storing the uri_out pointer.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 4849eb2220fb2171e88e014a8e63018d20a8de95
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 11:56:28 2023 +0100

    remote: Propagate error from virDomainGetSecurityLabelList via RPC
    
    The daemon side of this API has been broken ever since the API was
    introduced in 2012. Instead of sending the error from
    virDomainGetSecurityLabelList via RPC so that the client can see it, the
    dispatcher would just send a successful reply with return value set to
    -1 (and an empty array of labels). The client side would propagate this
    return value so the client can see the API failed, but the original
    error would be lost.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 0211e430a87a96db9a4e085e12f33caad9167653
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 13:19:31 2023 +0100

    remote: Initialize args variable
    
    Recently, in v9.0.0-7-gb2034bb04c we've dropped initialization of
    @args variable. The reasoning was that eventually, all members of
    the variable will be set. Well, this is not correct. For
    instance, in remoteConnectGetAllDomainStats() the
    args.doms.doms_val pointer is set iff @ndoms != 0. However,
    regardless of that, the pointer is then passed to VIR_FREE().
    
    Worse, the whole args is passed to
    xdr_remote_connect_get_all_domain_stats_args() which then calls
    xdr_array, which tests the (uninitialized) pointer against NULL.
    
    This effectively reverts b2034bb04c61c75ddbfbed46879d641b6f8ca8dc.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Martin Kletzander <mkletzan@redhat.com>

commit c3afde9211b550d3900edc5386ab121f5b39fd3e
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 11:56:10 2023 +0100

    qemu_domain: Don't unref NULL hash table in qemuDomainRefreshStatsSchema()
    
    The g_hash_table_unref() function does not accept NULL. Passing
    NULL results in a glib warning being triggered. Check whether the
    hash table is not NULL and unref it only then.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 20:00:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 20:00:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486199.753619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLrMf-0001ap-Dq; Sat, 28 Jan 2023 19:59:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486199.753619; Sat, 28 Jan 2023 19:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLrMf-0001ai-8t; Sat, 28 Jan 2023 19:59:53 +0000
Received: by outflank-mailman (input) for mailman id 486199;
 Sat, 28 Jan 2023 19:59:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLrMe-0001aY-9V; Sat, 28 Jan 2023 19:59:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLrMe-000123-5d; Sat, 28 Jan 2023 19:59:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLrMd-0002QT-Jw; Sat, 28 Jan 2023 19:59:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLrMd-0002QF-JR; Sat, 28 Jan 2023 19:59:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7bl993DRow1pyWdhtJMMcr7lEC+fikftIc8SIeUGdbA=; b=q1opj8W6HhOoJQz7NCdelgx/z6
	lqgURnYe9PpshAcb152EbRv9XSf4OyZxG+jIh3TYBZsWMTH1cjahSGR0MwY/StHWqWmNiaFPEB7ud
	eGmah5OFOn6SJcT5A67eoXfbuHD7how6i1RQCrAxQLro1y93fjrBKcvE3Jlhg8DrdXYo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176263-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176263: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-armhf:syslog-server:running:regression
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf:capture-logs:broken:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Jan 2023 19:59:51 +0000

flight 176263 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176263/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175994
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175994
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    8 days
Failing since        176003  2023-01-20 17:40:27 Z    8 days   17 attempts
Testing same since   176222  2023-01-26 22:13:29 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  George Dunlap <george.dunlap@cloud.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

(No revision log; it would be 1225 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jan 28 23:50:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Jan 2023 23:50:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486212.753629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLuww-0002an-Jq; Sat, 28 Jan 2023 23:49:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486212.753629; Sat, 28 Jan 2023 23:49:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLuww-0002ag-Gm; Sat, 28 Jan 2023 23:49:34 +0000
Received: by outflank-mailman (input) for mailman id 486212;
 Sat, 28 Jan 2023 23:49:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLuwv-0002aW-An; Sat, 28 Jan 2023 23:49:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLuwv-0006Ox-6v; Sat, 28 Jan 2023 23:49:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLuwu-0002xc-O0; Sat, 28 Jan 2023 23:49:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLuwu-0004Rs-NW; Sat, 28 Jan 2023 23:49:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ofBf16YmhoiCIpyBUSuuCMzbA7shJZ6xHa9KZGADyRg=; b=UzKAPOayL7md07d4oPjDq+zjhs
	LWzuirKvzdrRAvTVdGknQE1uXGhcWO9MkcQZZpYeIp2ICw+hNUHCsQbe9pghy/6XQeJivM2jRy1YR
	QxQ53n4OH9eVDuoj8jm9OETvWCtgVWk7r9w+C0PVel7HWjW/TVnDg2nj1FXlavvLTodI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176264-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176264: regressions - trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-start/debian.repeat:fail:regression
    linux-linus:build-armhf:syslog-server:running:regression
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-armhf:capture-logs:broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=5af6ce7049365952f7f023155234fe091693ead1
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Jan 2023 23:49:32 +0000

flight 176264 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176264/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 test-amd64-amd64-dom0pvh-xl-intel 22 guest-start/debian.repeat fail REGR. vs. 173462
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 173462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                5af6ce7049365952f7f023155234fe091693ead1
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  113 days
Failing since        173470  2022-10-08 06:21:34 Z  112 days  233 attempts
Testing same since   176264  2023-01-28 08:16:16 Z    0 days    1 attempts

------------------------------------------------------------
3466 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

(No revision log; it would be 533123 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 04:04:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 04:04:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486229.753639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLyup-0005hm-2Q; Sun, 29 Jan 2023 04:03:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486229.753639; Sun, 29 Jan 2023 04:03:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLyuo-0005he-SL; Sun, 29 Jan 2023 04:03:38 +0000
Received: by outflank-mailman (input) for mailman id 486229;
 Sun, 29 Jan 2023 04:03:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLyun-0005hR-Ni; Sun, 29 Jan 2023 04:03:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLyun-0003Hj-Ji; Sun, 29 Jan 2023 04:03:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLyun-0007Ab-1D; Sun, 29 Jan 2023 04:03:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLyun-00010x-0T; Sun, 29 Jan 2023 04:03:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0hrwjaWL1yLMG03rcPW01e65cxno8Hh9IWIL9i48djs=; b=gT1wcJgXMRXmnUXZQ2iDG+2KdT
	31QuL8Rx/wsAuUcwmn12i3vCtzoaLJi4bf2hER+RFlsKwPKAzBLP7B7uKCy0qeuPi2Yyfgd0mUHoK
	+xKnhT0gvRoOsS16PypDIONWhlgqfZkd1BaghilANyl7XoggKs/kPt2Z9+goqNS9SrvI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176266: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-arm64-pvops:kernel-build:fail:regression
    xen-unstable:build-armhf:syslog-server:running:regression
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf:capture-logs:broken:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Jan 2023 04:03:37 +0000

flight 176266 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176266/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175994
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 175994
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 176263

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175994
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 176263 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 176263 never pass
 test-arm64-arm64-xl          15 migrate-support-check fail in 176263 never pass
 test-arm64-arm64-xl          16 saverestore-support-check fail in 176263 never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check fail in 176263 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 176263 never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check fail in 176263 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 176263 never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check fail in 176263 never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check fail in 176263 never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check fail in 176263 never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check fail in 176263 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 176263 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 176263 never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check fail in 176263 never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check fail in 176263 never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 176263 n/a

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    8 days
Failing since        176003  2023-01-20 17:40:27 Z    8 days   18 attempts
Testing same since   176222  2023-01-26 22:13:29 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  George Dunlap <george.dunlap@cloud.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

(No revision log; it would be 1225 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 05:06:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 05:06:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486239.753648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLztA-00058A-NG; Sun, 29 Jan 2023 05:06:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486239.753648; Sun, 29 Jan 2023 05:06:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pLztA-000583-KK; Sun, 29 Jan 2023 05:06:00 +0000
Received: by outflank-mailman (input) for mailman id 486239;
 Sun, 29 Jan 2023 05:05:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLzt9-00057t-RW; Sun, 29 Jan 2023 05:05:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLzt9-00058t-K4; Sun, 29 Jan 2023 05:05:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pLzt9-0000VW-1o; Sun, 29 Jan 2023 05:05:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pLzt9-0006Kl-1L; Sun, 29 Jan 2023 05:05:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lsR4f1BeiYQs4CjpcniD/3iR8Y3IFlZKR930lDSdkn0=; b=TGD22RQBrL6lnCvazjF2Bluxge
	bl1qn7TzrTg0LN1YPDfhKK9f37RRO5A0g2n3hJf/8gz3q6iXfFeKbztpG3qxjL3V59gkw2nc/X0b/
	mmlTOa5cNkufzGyZkt0C9KN5nRA3nM0lIwgVWv/qYcrb9Naktq3vYLHIrS7pHTDbVHlQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176265-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176265: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start.2:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-ping-check-xen:fail:heisenbug
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Jan 2023 05:05:59 +0000

flight 176265 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176265/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 21 guest-start.2 fail in 176261 pass in 176265
 test-amd64-amd64-xl-rtds     10 host-ping-check-xen        fail pass in 176261

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   38 days
Testing same since   176224  2023-01-26 22:14:43 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry_safe has next pointing to
    the deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code takes a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 05:39:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 05:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486246.753658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM0PR-0000dE-9a; Sun, 29 Jan 2023 05:39:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486246.753658; Sun, 29 Jan 2023 05:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM0PR-0000d7-6S; Sun, 29 Jan 2023 05:39:21 +0000
Received: by outflank-mailman (input) for mailman id 486246;
 Sun, 29 Jan 2023 05:39:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hFYh=52=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pM0PQ-0000d1-1l
 for xen-devel@lists.xenproject.org; Sun, 29 Jan 2023 05:39:20 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2084.outbound.protection.outlook.com [40.107.21.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4544d95d-9f97-11ed-b8d1-410ff93cb8f0;
 Sun, 29 Jan 2023 06:39:17 +0100 (CET)
Received: from DU2PR04CA0194.eurprd04.prod.outlook.com (2603:10a6:10:28d::19)
 by GV1PR08MB7778.eurprd08.prod.outlook.com (2603:10a6:150:56::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.25; Sun, 29 Jan
 2023 05:39:13 +0000
Received: from DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28d:cafe::d7) by DU2PR04CA0194.outlook.office365.com
 (2603:10a6:10:28d::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33 via Frontend
 Transport; Sun, 29 Jan 2023 05:39:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT028.mail.protection.outlook.com (100.127.142.236) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.30 via Frontend Transport; Sun, 29 Jan 2023 05:39:12 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Sun, 29 Jan 2023 05:39:11 +0000
Received: from 6d33aa5ad35e.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0D9A5BE2-A010-446A-BEED-E06D20D6A0FE.1; 
 Sun, 29 Jan 2023 05:39:05 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6d33aa5ad35e.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sun, 29 Jan 2023 05:39:05 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by DB9PR08MB7627.eurprd08.prod.outlook.com (2603:10a6:10:30b::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Sun, 29 Jan
 2023 05:39:02 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%3]) with mapi id 15.20.6043.030; Sun, 29 Jan 2023
 05:39:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4544d95d-9f97-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2OcMZ91hUBqDTO5GgFqXDvrwtpeYKr7DWVKaUqUqvkE=;
 b=E+1EHupc7d/IAqyCYYamF7ttmNN1lZNd6nt6e7+4zbQ9j4kO7KcHKQ/+ZB43D1XWDP+1FyiywDMsifFrQImzLf9diY9NxS6eiyfd4fpe4SEUmiwYnjyjtHnANhnOoszJPOAH7RwRt1VMDjuBf5MCF58pQYrX46N8rW7oFXPZ8/A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CUon5c2cAQIbJKtl/QuNHdbjObS1mpCbO7n1iD2A95VMHqjRhthJDk9gn/y4tw+99i7YG5BzIazUnNImXwO2KzQeaJmw0iDg+qXbUMAS1lFLQTswHumxFaR2pz/guM6QEX3A/LPzhK8tK2dU+aGW/v8rif4VSe3KifCfG/hz29t6LsPKLpECy4Or5ph8PISl5zdJlo1kYkBQ9k6Y8jIghCmEHASHiU3x2vuiCQ2XYfPpmU63DXOxI9m85rqHY1bgqTUCpIuy9xovbWMcX+erSfJucD8hDg9i7LwPxIBn6nXJQYDn4NkQEA8N6GlRK28XVKQcj4JUd0wwoANCrfbnMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2OcMZ91hUBqDTO5GgFqXDvrwtpeYKr7DWVKaUqUqvkE=;
 b=NFbudj2eVOXvaKUrDHZLCFoCZijCnpb2ZL+F7PNA4HgtLa2di6OqaiTf8gvPpZLlcWIUy346LpL2kLywDLDek6plxG+ayI/lyQDKgnr5G0m/CFV3Qj2B8/KixX1XUKApASvhFO8bdVLBXAvc3QcgBEdzCaxusKhOGVO7Sq/LDrbuxweq20MOTB/y1MyNTBD+Zdby2ldAVr4vjd8ueweGIAMiU3A7T0ejWwNWE9qtFP+ExknQrMpv1R11+0dWhDXBn0tT+VOzFVzcFwTZwgWR2pnwxdt78SLzZKA+YjsT5qbYb8NHL6naBcbgoS119Ak1HbEDHdbVMBahiVN8eO8XBg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2OcMZ91hUBqDTO5GgFqXDvrwtpeYKr7DWVKaUqUqvkE=;
 b=E+1EHupc7d/IAqyCYYamF7ttmNN1lZNd6nt6e7+4zbQ9j4kO7KcHKQ/+ZB43D1XWDP+1FyiywDMsifFrQImzLf9diY9NxS6eiyfd4fpe4SEUmiwYnjyjtHnANhnOoszJPOAH7RwRt1VMDjuBf5MCF58pQYrX46N8rW7oFXPZ8/A=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Thread-Topic: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Thread-Index: AQHZJxAhTWb1zeTHm029bSgcZo/2Yq6l4H6AgA7nVJA=
Date: Sun, 29 Jan 2023 05:39:02 +0000
Message-ID:
 <AM0PR08MB453083B74DB1D00BDF469331F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
 <c30b4458-b5f6-f996-0c3c-220b18bfb356@xen.org>
In-Reply-To: <c30b4458-b5f6-f996-0c3c-220b18bfb356@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 109BBDD9E31E1B4DAF02D6B564F5F2E0.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|DB9PR08MB7627:EE_|DBAEUR03FT028:EE_|GV1PR08MB7778:EE_
X-MS-Office365-Filtering-Correlation-Id: 73cbe5c9-5115-467e-41ea-08db01bb268d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 +Q99jpXKqXvPTxQb1zxw//oYXofHsJDuHuxNCmtANi5EdqbSPw84/+pE7nYlZCulwIYVBYylFCq3baufkAXd+Ke1YSP5RCwMmtnfClueQHF1j33LJ1aOesWg2RRubD7tRDpgLQ2Q8dWCfwkuEeU0zn80id9wZXWqGIsNREF0dD391s/e1ynFf3YalOQ80chAF8vG9YeKysasdiHYuwR9Y83wTRomcLB+2pnfaFSuYhfjGLwNa7p1vsD35Qcp19EHbAD+veVhBIWWOAlboCBB0T5rQfea5El2a+r1VjT0M/OB/rErQiV3zjo7FbTCJrUjZ6lSGM9MMePM3w3J+5HEH51qb1wpVYUVhVK34Kot0IagTuWmjST4Bk+FJynT0Gwcgqo5xslPXUnnOowhwsUQJnC7TtUSiKOI9Bwf64qCci18LtTIKLTAoNKhpYiSKFpw5g3rbf7NJYNyINYjqk5pSC/jTD1CVy0h2qgJPLI+yyEey+Ocb8jmjSi1gRPSLjXaJC2smsmrmPXO2l0VHt58H/k+4Fx0A0kZWrEMFnl2403n2RHq/aEa3dQxmCs+fMx4/TbcJtrBqMc0EiGZx6vcHR8VbnzTgb5MzW7opuaah23rxNljq3v8EFvFL+X7WAO3Ej8vdm+cwiYUOpF0e0rM4ZQARyAHYAgoY3Bk7Knhhbkl0zIzJGYfBv1poHJqxUgckJeCfbUNzOcrCTkWS2s2Pw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB4530.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(376002)(396003)(39860400002)(136003)(366004)(346002)(451199018)(478600001)(38100700002)(4326008)(64756008)(66556008)(66946007)(66476007)(6506007)(66446008)(83380400001)(76116006)(54906003)(110136005)(8676002)(316002)(122000001)(7696005)(186003)(9686003)(26005)(53546011)(33656002)(5660300002)(55016003)(52536014)(41300700001)(38070700005)(86362001)(71200400001)(8936002)(2906002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7627
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9695860d-9cb2-4cd6-0273-08db01bb20c5
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Y4ZSgHGcZ2k1frR58iwqBQklL/dBovENkAjC2vAyh52Cah/zaAWFHVRSnPJiGKdarsBfs2/VyuPA6UodYox4XYnV5xGIzX8fSoDwURKmO334oKSMpHjHXXQYwxpS10J62gSDzijqmOmUqkA33LhcD8kPDr+5AYRIfkqnLQbhIiXQPM7RYTQc8ftlrC4ztJQVW/6nP2IXSQheb7q+jMDIOQvOPV9iskeV91gJUdaKpDgIxJmpwW8UEvlQya40liQvBlrInWwqNLvP32bpQelr0bCWyUnpcDuMv1X+FukHDgpcqU9154CzrGivnMF8DAR6EglEiLwPL9jz93FPlpU3yWc8MUP2fcOi+jRTIoP6kLsTpbOIrRhRAQKXPj8HKYF49igx6giSFnfhrXcgUlV59V+pmuP5Ab9tEkqJtKPaYyrJFqAv2k49KDtR9chCxAvT572ugQ9+e8flKrSncQyKN4fg6cImNm4ZD90hpzEiVgTE/xSeYlf8SRiCUIkZIR9ExuD9oRxXn4ppO6gnKAdbBeo9HkM7lPyr3iOrxaE4nUiCsXnmgEdYqZlDJDESHx7HAWQjL/QzNOyakmuHaY9j3K9XsjBjrD/yQn530966hcmtbbq/M6Gnited+Ux9rw6sqeMMk/4X5uRU2l9P8X685PKNnWKVMSc9M4g/w6Y+9ZQ=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(396003)(39860400002)(346002)(376002)(136003)(451199018)(46966006)(36840700001)(8936002)(336012)(70206006)(70586007)(47076005)(7696005)(8676002)(4326008)(53546011)(9686003)(26005)(186003)(33656002)(2906002)(82310400005)(86362001)(54906003)(36860700001)(52536014)(41300700001)(83380400001)(55016003)(478600001)(110136005)(316002)(6506007)(107886003)(81166007)(40480700001)(82740400003)(5660300002)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jan 2023 05:39:12.0360
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 73cbe5c9-5115-467e-41ea-08db01bb268d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7778

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, January 19, 2023 11:04 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU
> memory region map
>
> Hi Penny,
>

Hi Julien

Sorry for the late response, just come back from Chinese Spring Festival Holiday~

> On 13/01/2023 05:28, Penny Zheng wrote:
> > From: Penny Zheng <penny.zheng@arm.com>
> >
> > The start-of-day Xen MPU memory region layout shall be like as follows:
> >
> > xen_mpumap[0] : Xen text
> > xen_mpumap[1] : Xen read-only data
> > xen_mpumap[2] : Xen read-only after init data xen_mpumap[3] : Xen
> > read-write data xen_mpumap[4] : Xen BSS ......
> > xen_mpumap[max_xen_mpumap - 2]: Xen init data
> > xen_mpumap[max_xen_mpumap - 1]: Xen init text
>
> Can you explain why the init region should be at the end of the MPU?
>

As discussed in the v1 Serie, I'd like to put all transient MPU regions, like boot-only region,
at the end of the MPU.
Since they will get removed at the end of the boot, I am trying not to leave holes in the MPU
map by putting all transient MPU regions at rear.

> >
> > max_xen_mpumap refers to the number of regions supported by the EL2
> MPU.
> > The layout shall be compliant with what we describe in xen.lds.S, or
> > the codes need adjustment.
> >
> > As MMU system and MPU system have different functions to create the
> > boot MMU/MPU memory management data, instead of introducing extra
> > #ifdef in main code flow, we introduce a neutral name
> > prepare_early_mappings for both, and also to replace create_page_tables
> for MMU.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > ---
> >   xen/arch/arm/arm64/Makefile              |   2 +
> >   xen/arch/arm/arm64/head.S                |  17 +-
> >   xen/arch/arm/arm64/head_mmu.S            |   4 +-
> >   xen/arch/arm/arm64/head_mpu.S            | 323
> +++++++++++++++++++++++
> >   xen/arch/arm/include/asm/arm64/mpu.h     |  63 +++++
> >   xen/arch/arm/include/asm/arm64/sysregs.h |  49 ++++
> >   xen/arch/arm/mm_mpu.c                    |  48 ++++
> >   xen/arch/arm/xen.lds.S                   |   4 +
> >   8 files changed, 502 insertions(+), 8 deletions(-)
> >   create mode 100644 xen/arch/arm/arm64/head_mpu.S
> >   create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h
> >   create mode 100644 xen/arch/arm/mm_mpu.c
> >
> > +/*
> > + * Macro to create a new MPU memory region entry, which is a
> > +structure
> > + * of pr_t,  in \prmap.
> > + *
> > + * Inputs:
> > + * prmap:   mpu memory region map table symbol
> > + * sel:     region selector
> > + * prbar:   preserve value for PRBAR_EL2
> > + * prlar    preserve value for PRLAR_EL2
> > + *
> > + * Clobbers \tmp1, \tmp2
> > + *
> > + */
> > +.macro create_mpu_entry prmap, sel, prbar, prlar, tmp1, tmp2
> > +    mov   \tmp2, \sel
> > +    lsl   \tmp2, \tmp2, #MPU_ENTRY_SHIFT
> > +    adr_l \tmp1, \prmap
> > +    /* Write the first 8 bytes(prbar_t) of pr_t */
> > +    str   \prbar, [\tmp1, \tmp2]
> > +
> > +    add   \tmp2, \tmp2, #8
> > +    /* Write the last 8 bytes(prlar_t) of pr_t */
> > +    str   \prlar, [\tmp1, \tmp2]
>
> Any particular reason to not use 'stp'?
>
> Also, AFAICT, with data cache disabled. But at least on ARMv8-A, the cache is
> never really off. So don't need some cache maintainance?
>
> FAOD, I know the existing MMU code has the same issue. But I would rather
> prefer if the new code introduced is compliant to the Arm Arm.
>

True, `stp` is better and I will clean data cache to be compliant to the Arm Arm.
I write the following example to see if I catch what you suggested:
```
add \tmp1, \tmp1, \tmp2
stp \prbar, \prlar, [\tmp1]
dc cvau, \tmp1
isb
dsb sy
```

> > +.endm
> > +
> > +/*
> > + * Macro to store the maximum number of regions supported by the EL2
> > +MPU
> > + * in max_xen_mpumap, which is identified by MPUIR_EL2.
> > + *
> > + * Outputs:
> > + * nr_regions: preserve the maximum number of regions supported by
> > +the EL2 MPU
> > + *
> > + * Clobbers \tmp1
> > + * > + */
>
> Are you going to have multiple users? If not, then I would prefer if this is
> folded in the only caller.
>

Ok. I will fold since I think it is one-time reading thingy.

> > +.macro read_max_el2_regions, nr_regions, tmp1
> > +    load_paddr \tmp1, max_xen_mpumap
>
> I would rather prefer if we restrict the use of global while the MMU if off (see
> why above).
>

If we don't use global here, then after MPU enabled, we need to re-read MPUIR_EL2
to get the number of maximum EL2 regions.

Or I put data cache clean after accessing global, is it better?
```
str   \nr_regions, [\tmp1]
dc cvau, \tmp1
isb
dsb sy
```

> > +    mrs   \nr_regions, MPUIR_EL2
> > +    isb
>
> What's that isb for?
>
> > +    str   \nr_regions, [\tmp1]
> > +.endm
> > +
> > +/*
> > + * ENTRY to configure a EL2 MPU memory region
> > + * ARMv8-R AArch64 at most supports 255 MPU protection regions.
> > + * See section G1.3.18 of the reference manual for ARMv8-R AArch64,
> > + * PRBAR<n>_EL2 and PRLAR<n>_EL2 provides access to the EL2 MPU
> > +region
> > + * determined by the value of 'n' and PRSELR_EL2.REGION as
> > + * PRSELR_EL2.REGION<7:4>:n.(n = 0, 1, 2, ... , 15)
> > + * For example to access regions from 16 to 31 (0b10000 to 0b11111):
> > + * - Set PRSELR_EL2 to 0b1xxxx
> > + * - Region 16 configuration is accessible through PRBAR0_EL2 and
> > +PRLAR0_EL2
> > + * - Region 17 configuration is accessible through PRBAR1_EL2 and
> > +PRLAR1_EL2
> > + * - Region 18 configuration is accessible through PRBAR2_EL2 and
> > +PRLAR2_EL2
> > + * - ...
> > + * - Region 31 configuration is accessible through PRBAR15_EL2 and
> > +PRLAR15_EL2
> > + *
> > + * Inputs:
> > + * x27: region selector
> > + * x28: preserve value for PRBAR_EL2
> > + * x29: preserve value for PRLAR_EL2
> > + *
> > + */
> > +ENTRY(write_pr)
>
> AFAICT, this function would not be necessary if the index for the init sections
> were hardcoded.
>
> So I would like to understand why the index cannot be hardcoded.
>

The reason is that we are putting init sections at the *end* of the MPU map, and
the length of the whole MPU map is platform-specific. We read it from MPUIR_EL2.

> > +    msr   PRSELR_EL2, x27
> > +    dsb   sy
>
> [...]
>
> > diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S index
> > bc45ea2c65..79965a3c17 100644
> > --- a/xen/arch/arm/xen.lds.S
> > +++ b/xen/arch/arm/xen.lds.S
> > @@ -91,6 +91,8 @@ SECTIONS
> >         __ro_after_init_end = .;
> >     } : text
> >
> > +  . = ALIGN(PAGE_SIZE);
>
> Why do you need this ALIGN?
>

I need a symbol as the start of the data section, so I introduce
"__data_begin = .;".
If I use "__ro_after_init_end = .;" instead, I'm afraid in the future,
if someone introduces a new section after ro-after-init section, this part
also needs modification too.

When we define MPU regions for each section in xen.lds.S, we always treat these sections
page-aligned.
I checked each section in xen.lds.S, and ". = ALIGN(PAGE_SIZE);" is either added
at section head, or at the tail of the previous section, to make sure starting address symbol
page-aligned.

And if we don't put this ALIGN, if "__ro_after_init_end " symbol itself is not page-aligned,
the two adjacent sections will overlap in MPU.

> > +  __data_begin = .;
> >     .data.read_mostly : {
> >          /* Exception table */
> >          __start___ex_table = .;
> > @@ -157,7 +159,9 @@ SECTIONS
> >          *(.altinstr_replacement)
>
> I know you are not using alternative instructions yet. But, you should make
> sure they are included. So I think rather than introduce __init_data_begin,
> you want to use "_einitext" for the start of the "Init data" section.
>
> >     } :text
> >     . = ALIGN(PAGE_SIZE);
> > +
>
> Spurious?
>
> >     .init.d
YXRhIDogew0KPiA+ICsgICAgICAgX19pbml0X2RhdGFfYmVnaW4gPSAuOyAgICAgICAgICAgIC8q
IEluaXQgZGF0YSAqLw0KPiA+ICAgICAgICAgICooLmluaXQucm9kYXRhKQ0KPiA+ICAgICAgICAg
ICooLmluaXQucm9kYXRhLiopDQo+ID4NCj4gDQo+IENoZWVycywNCj4gDQo+IC0tDQo+IEp1bGll
biBHcmFsbA0KDQpDaGVlcnMsDQoNCi0tDQpQZW5ueSBaaGVuZw0K


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 06:18:01 2023
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
Date: Sun, 29 Jan 2023 06:17:16 +0000
Message-ID:
 <AM0PR08MB4530B7AF6EA406882974D528F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-14-Penny.Zheng@arm.com>
 <23f49916-dd2a-a956-1e6b-6dbb41a8817b@xen.org>
In-Reply-To: <23f49916-dd2a-a956-1e6b-6dbb41a8817b@xen.org>

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Wednesday, January 25, 2023 3:09 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
> setup_early_uart to map early UART
> 
> Hi Penny,

Hi Julien,

> 
> On 13/01/2023 05:28, Penny Zheng wrote:
> > In an MMU system, we map the UART in the fixmap (when earlyprintk is used).
> > However, in an MPU system, we map the UART with a transient MPU memory
> > region.
> >
> > So we introduce a new unified function setup_early_uart to replace the
> > previous setup_fixmap.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > ---
> >   xen/arch/arm/arm64/head.S               |  2 +-
> >   xen/arch/arm/arm64/head_mmu.S           |  4 +-
> >   xen/arch/arm/arm64/head_mpu.S           | 52 +++++++++++++++++++++++++
> >   xen/arch/arm/include/asm/early_printk.h |  1 +
> >   4 files changed, 56 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> > index 7f3f973468..a92883319d 100644
> > --- a/xen/arch/arm/arm64/head.S
> > +++ b/xen/arch/arm/arm64/head.S
> > @@ -272,7 +272,7 @@ primary_switched:
> >           * afterwards.
> >           */
> >           bl    remove_identity_mapping
> > -        bl    setup_fixmap
> > +        bl    setup_early_uart
> >   #ifdef CONFIG_EARLY_PRINTK
> >           /* Use a virtual address to access the UART. */
> >           ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
> > diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> > index b59c40495f..a19b7c873d 100644
> > --- a/xen/arch/arm/arm64/head_mmu.S
> > +++ b/xen/arch/arm/arm64/head_mmu.S
> > @@ -312,7 +312,7 @@ ENDPROC(remove_identity_mapping)
> >    *
> >    * Clobbers x0 - x3
> >    */
> > -ENTRY(setup_fixmap)
> > +ENTRY(setup_early_uart)
> 
> This function is doing more than enable the early UART. It also sets up the
> fixmap even when earlyprintk is not configured.

True, true.
I've thoroughly read the MMU implementation of setup_fixmap, and I'll try to
split it up.

> 
> I am not entirely sure what the name could be. Maybe this needs to be split
> further.
> 
> >   #ifdef CONFIG_EARLY_PRINTK
> >           /* Add UART to the fixmap table */
> >           ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
> > @@ -325,7 +325,7 @@ ENTRY(setup_fixmap)
> >           dsb   nshst
> >
> >           ret
> > -ENDPROC(setup_fixmap)
> > +ENDPROC(setup_early_uart)
> >
> >   /* Fail-stop */
> >   fail:   PRINT("- Boot failed -\r\n")
> > diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
> > index e2ac69b0cc..72d1e0863d 100644
> > --- a/xen/arch/arm/arm64/head_mpu.S
> > +++ b/xen/arch/arm/arm64/head_mpu.S
> > @@ -18,8 +18,10 @@
> >   #define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
> >   #define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
> >   #define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
> > +#define REGION_DEVICE_PRBAR     0x22    /* SH=10 AP=00 XN=10 */
> >
> >   #define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
> > +#define REGION_DEVICE_PRLAR     0x09    /* NS=0 ATTR=100 EN=1 */
> >
> >   /*
> >    * Macro to round up the section address to be PAGE_SIZE aligned
> > @@ -334,6 +336,56 @@ ENTRY(enable_mm)
> >       ret
> >   ENDPROC(enable_mm)
> >
> > +/*
> > + * Map the early UART with a new transient MPU memory region.
> > + *
> 
> Missing "Inputs: "
> 
> > + * x27: region selector
> > + * x28: prbar
> > + * x29: prlar
> > + *
> > + * Clobbers x0 - x4
> > + *
> > + */
> > +ENTRY(setup_early_uart)
> > +#ifdef CONFIG_EARLY_PRINTK
> > +    /* stack LR as write_pr will be called later like nested function */
> > +    mov   x3, lr
> > +
> > +    /*
> > +     * MPU region for early UART is a transient region, since it will be
> > +     * replaced by specific device memory layout when FDT gets parsed.
> 
> I would rather not mention "FDT" here because this code is independent of
> the firmware table used.
> 
> However, any reason to use a transient region rather than the one that will
> be used for the UART driver?
> 

We don't want to define an MPU region for each device driver. It would
exhaust MPU regions very quickly.
In commit "[PATCH v2 28/40] xen/mpu: map boot module section in MPU system",
a new FDT property `mpu,device-memory-section` will be introduced for users
to statically configure the whole system device memory with the least number
of memory regions in the Device Tree.
This section shall cover all devices that will be used in Xen, like `UART`,
`GIC`, etc.
For FVP_BaseR_AEMv8R, we have the following definition:
```
mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>;
```

> > +     */
> > +    load_paddr x0, next_transient_region_idx
> > +    ldr   x4, [x0]
> > +
> > +    ldr   x28, =CONFIG_EARLY_UART_BASE_ADDRESS
> > +    and   x28, x28, #MPU_REGION_MASK
> > +    mov   x1, #REGION_DEVICE_PRBAR
> > +    orr   x28, x28, x1
> 
> This needs some documentation to explain the logic. Maybe even a macro.
> 

Do you suggest that I explain how we compose the PRBAR_EL2 register?

> > +
> > +    ldr x29, =(CONFIG_EARLY_UART_BASE_ADDRESS + EARLY_UART_SIZE)
> > +    roundup_section x29
> 
> Does this mean we could give access to more than necessary? Shouldn't we
> instead prevent compilation if the size doesn't align with the section size?
> 

True, we could not treat the uart section like we do for the sections defined
in xen.lds.S.
CONFIG_EARLY_UART_BASE_ADDRESS and EARLY_UART_SIZE shall both be checked to
ensure they are aligned with PAGE_SIZE.

> > +    /* Limit address is inclusive */
> > +    sub   x29, x29, #1
> > +    and   x29, x29, #MPU_REGION_MASK
> > +    mov   x2, #REGION_DEVICE_PRLAR
> > +    orr   x29, x29, x2
> > +
> > +    mov   x27, x4
> 
> This needs some documentation like:
> 
> x27: region selector
> 
> See how we documented the existing helpers.
> 
> > +    bl    write_pr
> > +
> > +    /* Create a new entry in xen_mpumap for early UART */
> > +    create_mpu_entry xen_mpumap, x4, x28, x29, x1, x2
> > +
> > +    /* Update next_transient_region_idx */
> > +    sub   x4, x4, #1
> > +    str   x4, [x0]
> > +
> > +    mov   lr, x3
> > +    ret
> > +#endif
> > +ENDPROC(setup_early_uart)
> > +
> >   /*
> >    * Local variables:
> >    * mode: ASM
> > diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
> > index 44a230853f..d87623e6d5 100644
> > --- a/xen/arch/arm/include/asm/early_printk.h
> > +++ b/xen/arch/arm/include/asm/early_printk.h
> > @@ -22,6 +22,7 @@
> >    * for EARLY_UART_VIRTUAL_ADDRESS.
> >    */
> >   #define EARLY_UART_VIRTUAL_ADDRESS CONFIG_EARLY_UART_BASE_ADDRESS
> > +#define EARLY_UART_SIZE            0x1000
> 
> Shouldn't this be PAGE_SIZE? If not, how did you come up with the number?
> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 06:48:02 2023
From: Penny Zheng <Penny.Zheng@arm.com>
To: Ayan Kumar Halder <ayankuma@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	"Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Date: Sun, 29 Jan 2023 06:47:30 +0000
Message-ID:
 <AM0PR08MB453049ABE537CC36D9574CC4F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
 <54355320-2c3c-665f-32e2-90329586d98a@amd.com>
In-Reply-To: <54355320-2c3c-665f-32e2-90329586d98a@amd.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Ayan

> -----Original Message-----
> From: Ayan Kumar Halder <ayankuma@amd.com>
> Sent: Thursday, January 19, 2023 6:19 PM
> To: xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Penny Zheng
> <Penny.Zheng@arm.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
> Grall <julien@xen.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr_Babchuk@epam.com
> Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU
> memory region map
>
>
> On 13/01/2023 05:28, Penny Zheng wrote:
> > CAUTION: This message has originated from an External Source. Please use
> proper judgment and caution when opening attachments, clicking links, or
> responding to this email.
> >
> >
> > From: Penny Zheng <penny.zheng@arm.com>
> >
> > The start-of-day Xen MPU memory region layout shall be like as follows:
> >
> > xen_mpumap[0] : Xen text
> > xen_mpumap[1] : Xen read-only data
> > xen_mpumap[2] : Xen read-only after init data
> > xen_mpumap[3] : Xen read-write data
> > xen_mpumap[4] : Xen BSS
> > ......
> > xen_mpumap[max_xen_mpumap - 2]: Xen init data
> > xen_mpumap[max_xen_mpumap - 1]: Xen init text
> >
> > max_xen_mpumap refers to the number of regions supported by the EL2 MPU.
> > The layout shall be compliant with what we describe in xen.lds.S, or
> > the codes need adjustment.
> >
> > As MMU system and MPU system have different functions to create the
> > boot MMU/MPU memory management data, instead of introducing extra
> > #ifdef in main code flow, we introduce a neutral name
> > prepare_early_mappings for both, and also to replace create_page_tables
> > for MMU.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > ---
> >   xen/arch/arm/arm64/Makefile              |   2 +
> >   xen/arch/arm/arm64/head.S                |  17 +-
> >   xen/arch/arm/arm64/head_mmu.S            |   4 +-
> >   xen/arch/arm/arm64/head_mpu.S            | 323 ++++++++++++++++++++++++
> >   xen/arch/arm/include/asm/arm64/mpu.h     |  63 +++++
> >   xen/arch/arm/include/asm/arm64/sysregs.h |  49 ++++
> >   xen/arch/arm/mm_mpu.c                    |  48 ++++
> >   xen/arch/arm/xen.lds.S                   |   4 +
> >   8 files changed, 502 insertions(+), 8 deletions(-)
> >   create mode 100644 xen/arch/arm/arm64/head_mpu.S
> >   create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h
> >   create mode 100644 xen/arch/arm/mm_mpu.c
> >
> > diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> > index 22da2f54b5..438c9737ad 100644
> > --- a/xen/arch/arm/arm64/Makefile
> > +++ b/xen/arch/arm/arm64/Makefile
> > @@ -10,6 +10,8 @@ obj-y += entry.o
> >   obj-y += head.o
> >   ifneq ($(CONFIG_HAS_MPU),y)
> >   obj-y += head_mmu.o
> > +else
> > +obj-y += head_mpu.o
> >   endif
> >   obj-y += insn.o
> >   obj-$(CONFIG_LIVEPATCH) += livepatch.o
> > diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> > index 782bd1f94c..145e3d53dc 100644
> > --- a/xen/arch/arm/arm64/head.S
> > +++ b/xen/arch/arm/arm64/head.S
> > @@ -68,9 +68,9 @@
> >    *  x24 -
> >    *  x25 -
> >    *  x26 - skip_zero_bss (boot cpu only)
> > - *  x27 -
> > - *  x28 -
> > - *  x29 -
> > + *  x27 - region selector (mpu only)
> > + *  x28 - prbar (mpu only)
> > + *  x29 - prlar (mpu only)
> >    *  x30 - lr
> >    */
> >
> > @@ -82,7 +82,7 @@
> >    * ---------------------------
> >    *
> >    * The requirements are:
> > - *   MMU = off, D-cache = off, I-cache = on or off,
> > + *   MMU/MPU = off, D-cache = off, I-cache = on or off,
> >    *   x0 = physical address to the FDT blob.
> >    *
> >    * This must be the very first address in the loaded image.
> > @@ -252,7 +252,12 @@ real_start_efi:
> >
> >           bl    check_cpu_mode
> >           bl    cpu_init
> > -        bl    create_page_tables
> > +
> > +        /*
> > +         * Create boot memory management data, pagetable for MMU systems
> > +         * and memory regions for MPU systems.
> > +         */
> > +        bl    prepare_early_mappings
> >           bl    enable_mmu
> >
> >           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
> > @@ -310,7 +315,7 @@ GLOBAL(init_secondary)
> >   #endif
> >           bl    check_cpu_mode
> >           bl    cpu_init
> > -        bl    create_page_tables
> > +        bl    prepare_early_mappings
> >           bl    enable_mmu
> >
> >           /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
> > diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> > index 6ff13c751c..2346f755df 100644
> > --- a/xen/arch/arm/arm64/head_mmu.S
> > +++ b/xen/arch/arm/arm64/head_mmu.S
> > @@ -123,7 +123,7 @@
> >    *
> >    * Clobbers x0 - x4
> >    */
> > -ENTRY(create_page_tables)
> > +ENTRY(prepare_early_mappings)
> >           /* Prepare the page-tables for mapping Xen */
> >           ldr   x0, =XEN_VIRT_START
> >           create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3
> > @@ -208,7 +208,7 @@ virtphys_clash:
> >           /* Identity map clashes with boot_third, which we cannot handle yet */
> >           PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
> >           b     fail
> > -ENDPROC(create_page_tables)
> > +ENDPROC(prepare_early_mappings)
>
> NIT:- Can this renaming be done in a separate patch of its own (before this
> patch).
>

Yay, you're right. I'll put it in a different commit.

> So that this patch can be only about the new functionality introduced.
>
> >
> >   /*
> >    * Turn on the Data Cache and the MMU. The function will return on the 1:1
> > diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
> > new file mode 100644
> > index 0000000000..0b97ce4646
> > --- /dev/null
> > +++ b/xen/arch/arm/arm64/head_mpu.S
> > @@ -0,0 +1,323 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/*
> > + * Start-of-day code for an Armv8-R AArch64 MPU system.
> > + */
> > +
> > +#include <asm/arm64/mpu.h>
> > +#include <asm/early_printk.h>
> > +#include <asm/page.h>
> > +
> > +/*
> > + * One entry in Xen MPU memory region mapping table(xen_mpumap) is a structure
> > + * of pr_t, which is 16-bytes size, so the entry offset is the order of 4.
> > + */
> NIT :- It would be good to quote the Arm ARM section which can be referred
> to for the definitions.
> > +#define MPU_ENTRY_SHIFT         0x4
> > +
> > +#define REGION_SEL_MASK         0xf
> > +
> > +#define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
> > +#define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
> > +#define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
> > +
> > +#define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
> > +
> > +/*
> > + * Macro to round up the section address to be PAGE_SIZE aligned
> > + * Each section(e.g. .text, .data, etc) in xen.lds.S is page-aligned,
> > + * which is usually guarded with ". = ALIGN(PAGE_SIZE)" in the head,
> > + * or in the end
> > + */
> > +.macro roundup_section, xb
> > +        add   \xb, \xb, #(PAGE_SIZE-1)
> > +        and   \xb, \xb, #PAGE_MASK
> > +.endm
> > +
> > +/*
> > + * Macro to create a new MPU memory region entry, which is a structure
> > + * of pr_t,  in \prmap.
> > + *
> > + * Inputs:
> > + * prmap:   mpu memory region map table symbol
> > + * sel:     region selector
> > + * prbar:   preserve value for PRBAR_EL2
> > + * prlar    preserve value for PRLAR_EL2
> > + *
> > + * Clobbers \tmp1, \tmp2
> > + *
> > + */
> > +.macro create_mpu_entry prmap, sel, prbar, prlar, tmp1, tmp2
> > +    mov   \tmp2, \sel
> > +    lsl   \tmp2, \tmp2, #MPU_ENTRY_SHIFT
> > +    adr_l \tmp1, \prmap
> > +    /* Write the first 8 bytes(prbar_t) of pr_t */
> > +    str   \prbar, [\tmp1, \tmp2]
> > +
> > +    add   \tmp2, \tmp2, #8
> > +    /* Write the last 8 bytes(prlar_t) of pr_t */
> > +    str   \prlar, [\tmp1, \tmp2]
> > +.endm
> > +
> > +/*
> > + * Macro to store the maximum number of regions supported by the EL2 MPU
> > + * in max_xen_mpumap, which is identified by MPUIR_EL2.
> > + *
> > + * Outputs:
> > + * nr_regions: preserve the maximum number of regions supported by the EL2 MPU
> > + *
> > + * Clobbers \tmp1
> > + *
> > + */
> > +.macro read_max_el2_regions, nr_regions, tmp1
> > +    load_paddr \tmp1, max_xen_mpumap
> > +    mrs   \nr_regions, MPUIR_EL2
> > +    isb
> > +    str   \nr_regions, [\tmp1]
> > +.endm
> > +
> > +/*
> > + * Macro to prepare and set a MPU memory region
> > + *
> > + * Inputs:
> > + * base:        base address symbol (should be page-aligned)
> > + * limit:       limit address symbol
> > + * sel:         region selector
> > + * prbar:       store computed PRBAR_EL2 value
> > + * prlar:       store computed PRLAR_EL2 value
> > + * attr_prbar:  PRBAR_EL2-related memory attributes. If not specified it will be REGION_DATA_PRBAR
> > + * attr_prlar:  PRLAR_EL2-related memory attributes. If not specified it will be REGION_NORMAL_PRLAR
> > + *
> > + * Clobber \tmp1
> > + *
> > + */
> > +.macro prepare_xen_region, base, limit, sel, prbar, prlar, tmp1, attr_prbar=REGION_DATA_PRBAR, attr_prlar=REGION_NORMAL_PRLAR
> > +    /* Prepare value for PRBAR_EL2 reg and preserve it in \prbar.*/
> > +    load_paddr \prbar, \base
> > +    and   \prbar, \prbar, #MPU_REGION_MASK
> > +    mov   \tmp1, #\attr_prbar
> > +    orr   \prbar, \prbar, \tmp1
> > +
> > +    /* Prepare value for PRLAR_EL2 reg and preserve it in \prlar.*/
> > +    load_paddr \prlar, \limit
> > +    /* Round up limit address to be PAGE_SIZE aligned */
> > +    roundup_section \prlar
> > +    /* Limit address should be inclusive */
> > +    sub   \prlar, \prlar, #1
> > +    and   \prlar, \prlar, #MPU_REGION_MASK
> > +    mov   \tmp1, #\attr_prlar
> > +    orr   \prlar, \prlar, \tmp1
> > +
> > +    mov   x27, \sel
> > +    mov   x28, \prbar
> > +    mov   x29, \prlar
>
> Any reasons for using x27, x28, x29 to pass function parameters?
>
> https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst
> states x0..x7 should be used (Table 2, General-purpose registers and
> AAPCS64 usage).
>

These registers are documented and reserved in xen/arch/arm/arm64/head.S, like
how we reserve x26 to pass a function parameter in skip_zero_bss, see
```
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 782bd1f94c..145e3d53dc 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -68,9 +68,9 @@
  *  x24 -
  *  x25 -
  *  x26 - skip_zero_bss (boot cpu only)
- *  x27 -
- *  x28 -
- *  x29 -
+ *  x27 - region selector (mpu only)
+ *  x28 - prbar (mpu only)
+ *  x29 - prlar (mpu only)
  *  x30 - lr
  */
```
x0...x7 are already commonly used in xen/arch/arm/arm64/head.S, so it is difficult
for me to preserve them only for write_pr.

If we are using x0...x7 as function parameters, I need to push/pop them, adding
stack operations in write_pr, to avoid corruption.

> > +    /*
> > +     * x27, x28, x29 are special registers designed as
> > +     * inputs for function write_pr
> > +     */
> > +    bl    write_pr
> > +.endm
> > +
[...]
> > --
> > 2.25.1
> >
> NIT:- Would you consider splitting this patch, something like this :-
>
> 1. Renaming of the mmu function
>
> 2. Define sysregs, prlar_t, prbar_t and other hardware specific macros.
>
> 3. Define write_pr
>
> 4. The rest of the changes (ie prepare_early_mappings(), xen.lds.S, etc)
>

For 2, 3 and 4, it will break the rule of "Always define and introduce at the
first usage".
However, I know that this commit is very big ;/, so as long as maintainers are also
in favor of your splitting suggestion, I'm happy to do the split too~

> - Ayan


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 07:37:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 07:37:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486264.753689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM2Fn-0007Ue-Hw; Sun, 29 Jan 2023 07:37:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486264.753689; Sun, 29 Jan 2023 07:37:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM2Fn-0007UX-F8; Sun, 29 Jan 2023 07:37:31 +0000
Received: by outflank-mailman (input) for mailman id 486264;
 Sun, 29 Jan 2023 07:37:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pM2Fm-0007UR-A7
 for xen-devel@lists.xenproject.org; Sun, 29 Jan 2023 07:37:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pM2Fl-0000Iu-T5; Sun, 29 Jan 2023 07:37:29 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pM2Fl-000197-NG; Sun, 29 Jan 2023 07:37:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <7931e70f-3754-363c-28d8-5fde3198d70f@xen.org>
Date: Sun, 29 Jan 2023 07:37:27 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
 <c30b4458-b5f6-f996-0c3c-220b18bfb356@xen.org>
 <AM0PR08MB453083B74DB1D00BDF469331F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
In-Reply-To: <AM0PR08MB453083B74DB1D00BDF469331F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 29/01/2023 05:39, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Thursday, January 19, 2023 11:04 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU
>> memory region map
>>
>> Hi Penny,
>>
> 
> Hi Julien
> 
> Sorry for the late response, I just came back from Chinese Spring Festival Holiday~
>   
>> On 13/01/2023 05:28, Penny Zheng wrote:
>>> From: Penny Zheng <penny.zheng@arm.com>
>>>
>>> The start-of-day Xen MPU memory region layout shall be like as follows:
>>>
>>> xen_mpumap[0] : Xen text
>>> xen_mpumap[1] : Xen read-only data
>>> xen_mpumap[2] : Xen read-only after init data xen_mpumap[3] : Xen
>>> read-write data xen_mpumap[4] : Xen BSS ......
>>> xen_mpumap[max_xen_mpumap - 2]: Xen init data
>>> xen_mpumap[max_xen_mpumap - 1]: Xen init text
>>
>> Can you explain why the init region should be at the end of the MPU?
>>
> 
> As discussed in the v1 Serie, I'd like to put all transient MPU regions, like boot-only region,
> at the end of the MPU.

I vaguely recall the discussion but can't seem to find the thread. Do 
you have a link? (A summary in the patch would have been nice)

> Since they will get removed at the end of the boot, I am trying not to leave holes in the MPU
> map by putting all transient MPU regions at rear.

I understand the principle, but I am not convinced it is worth it because 
of the increased complexity in the assembly code.

What would be the problem with partially reshuffling the MPU once we have booted?

> 
>>>
>>> max_xen_mpumap refers to the number of regions supported by the EL2
>> MPU.
>>> The layout shall be compliant with what we describe in xen.lds.S, or
>>> the codes need adjustment.
>>>
>>> As MMU system and MPU system have different functions to create the
>>> boot MMU/MPU memory management data, instead of introducing extra
>>> #ifdef in main code flow, we introduce a neutral name
>>> prepare_early_mappings for both, and also to replace create_page_tables
>> for MMU.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>> ---
>>>    xen/arch/arm/arm64/Makefile              |   2 +
>>>    xen/arch/arm/arm64/head.S                |  17 +-
>>>    xen/arch/arm/arm64/head_mmu.S            |   4 +-
>>>    xen/arch/arm/arm64/head_mpu.S            | 323 +++++++++++++++++++++++++
>>>    xen/arch/arm/include/asm/arm64/mpu.h     |  63 +++++
>>>    xen/arch/arm/include/asm/arm64/sysregs.h |  49 ++++
>>>    xen/arch/arm/mm_mpu.c                    |  48 ++++
>>>    xen/arch/arm/xen.lds.S                   |   4 +
>>>    8 files changed, 502 insertions(+), 8 deletions(-)
>>>    create mode 100644 xen/arch/arm/arm64/head_mpu.S
>>>    create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h
>>>    create mode 100644 xen/arch/arm/mm_mpu.c
>>>
>>> +/*
>>> + * Macro to create a new MPU memory region entry, which is a
>>> +structure
>>> + * of pr_t,  in \prmap.
>>> + *
>>> + * Inputs:
>>> + * prmap:   mpu memory region map table symbol
>>> + * sel:     region selector
>>> + * prbar:   preserve value for PRBAR_EL2
>>> + * prlar    preserve value for PRLAR_EL2
>>> + *
>>> + * Clobbers \tmp1, \tmp2
>>> + *
>>> + */
>>> +.macro create_mpu_entry prmap, sel, prbar, prlar, tmp1, tmp2
>>> +    mov   \tmp2, \sel
>>> +    lsl   \tmp2, \tmp2, #MPU_ENTRY_SHIFT
>>> +    adr_l \tmp1, \prmap
>>> +    /* Write the first 8 bytes(prbar_t) of pr_t */
>>> +    str   \prbar, [\tmp1, \tmp2]
>>> +
>>> +    add   \tmp2, \tmp2, #8
>>> +    /* Write the last 8 bytes(prlar_t) of pr_t */
>>> +    str   \prlar, [\tmp1, \tmp2]
>>
>> Any particular reason to not use 'stp'?
>>
>> Also, AFAICT, this is done with the data cache disabled. But at least on
>> ARMv8-A, the cache is never really off. So don't we need some cache maintenance?
>>
>> FAOD, I know the existing MMU code has the same issue. But I would rather
>> prefer if the new code introduced is compliant to the Arm Arm.
>>
> 
> True, `stp` is better and I will clean data cache to be compliant to the Arm Arm.
> I write the following example to see if I catch what you suggested:
> ```
> add \tmp1, \tmp1, \tmp2
> stp \prbar, \prlar, [\tmp1]
> dc cvau, \tmp1

I think this wants to be invalidate rather than clean because the cache 
is off.

> isb
> dsb sy
> ```
> 
>>> +.endm
>>> +
>>> +/*
>>> + * Macro to store the maximum number of regions supported by the EL2
>>> +MPU
>>> + * in max_xen_mpumap, which is identified by MPUIR_EL2.
>>> + *
>>> + * Outputs:
>>> + * nr_regions: preserve the maximum number of regions supported by
>>> +the EL2 MPU
>>> + *
>>> + * Clobbers \tmp1
>>> + * > + */
>>
>> Are you going to have multiple users? If not, then I would prefer if this is
>> folded in the only caller.
>>
> 
> Ok. I will fold it in, since I think it is a one-time reading thingy.
> 
>>> +.macro read_max_el2_regions, nr_regions, tmp1
>>> +    load_paddr \tmp1, max_xen_mpumap
>>
>> I would rather prefer if we restrict the use of global while the MMU if off (see
>> why above).
>>
> 
> If we don't use a global here, then after the MPU is enabled, we need to re-read MPUIR_EL2
> to get the maximum number of EL2 regions.

Which, IMHO, is better than having to think about cache.

> 
> Or I put data cache clean after accessing global, is it better?
> ```
> str   \nr_regions, [\tmp1]
> dc cvau, \tmp1
> isb
> dsb sy
> ```
> 
>>> +    mrs   \nr_regions, MPUIR_EL2
>>> +    isb
>>
>> What's that isb for?
>>
>>> +    str   \nr_regions, [\tmp1]
>>> +.endm
>>> +
>>> +/*
>>> + * ENTRY to configure a EL2 MPU memory region
>>> + * ARMv8-R AArch64 at most supports 255 MPU protection regions.
>>> + * See section G1.3.18 of the reference manual for ARMv8-R AArch64,
>>> + * PRBAR<n>_EL2 and PRLAR<n>_EL2 provides access to the EL2 MPU
>>> +region
>>> + * determined by the value of 'n' and PRSELR_EL2.REGION as
>>> + * PRSELR_EL2.REGION<7:4>:n.(n = 0, 1, 2, ... , 15)
>>> + * For example to access regions from 16 to 31 (0b10000 to 0b11111):
>>> + * - Set PRSELR_EL2 to 0b1xxxx
>>> + * - Region 16 configuration is accessible through PRBAR0_EL2 and
>>> +PRLAR0_EL2
>>> + * - Region 17 configuration is accessible through PRBAR1_EL2 and
>>> +PRLAR1_EL2
>>> + * - Region 18 configuration is accessible through PRBAR2_EL2 and
>>> +PRLAR2_EL2
>>> + * - ...
>>> + * - Region 31 configuration is accessible through PRBAR15_EL2 and
>>> +PRLAR15_EL2
>>> + *
>>> + * Inputs:
>>> + * x27: region selector
>>> + * x28: preserve value for PRBAR_EL2
>>> + * x29: preserve value for PRLAR_EL2
>>> + *
>>> + */
>>> +ENTRY(write_pr)
>>
>> AFAICT, this function would not be necessary if the index for the init sections
>> were hardcoded.
>>
>> So I would like to understand why the index cannot be hardcoded.
>>
> 
> The reason is that we are putting init sections at the *end* of the MPU map, and
> the length of the whole MPU map is platform-specific. We read it from MPUIR_EL2.

Right, I got that bit from the code. What I would like to understand is 
why all the initial addresses cannot be hardcoded.

From a brief look, this would simplify the assembly code a lot.

>   
>>> +    msr   PRSELR_EL2, x27
>>> +    dsb   sy
>>
>> [...]
>>
>>> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
>>> index bc45ea2c65..79965a3c17 100644
>>> --- a/xen/arch/arm/xen.lds.S
>>> +++ b/xen/arch/arm/xen.lds.S
>>> @@ -91,6 +91,8 @@ SECTIONS
>>>          __ro_after_init_end = .;
>>>      } : text
>>>
>>> +  . = ALIGN(PAGE_SIZE);
>>
>> Why do you need this ALIGN?
>>
> 
> I need a symbol as the start of the data section, so I introduce
> "__data_begin = .;".
> If I use "__ro_after_init_end = .;" instead, I'm afraid that if someone
> introduces a new section after the ro-after-init section in the future,
> this part would need modification too.

I haven't suggested there is a problem with defining a new symbol. I am 
merely asking about the ALIGN.

> 
> When we define MPU regions for each section in xen.lds.S, we always treat these sections
> as page-aligned.
> I checked each section in xen.lds.S, and ". = ALIGN(PAGE_SIZE);" is added either
> at the head of a section or at the tail of the previous section, to make sure the
> starting address symbol is page-aligned.
> 
> And without this ALIGN, if the "__ro_after_init_end" symbol itself is not page-aligned,
> the two adjacent sections will overlap in the MPU.

__ro_after_init_end *has* to be page-aligned because the permissions are 
different from those of __data_begin.

If we were going to add a new section, then either it has the same 
permissions as .data.read.mostly and we would bundle them, or it doesn't 
and we would need an ALIGN.

But today, the extra ALIGN seems unnecessary (at least in the context 
of this patch).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 07:43:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 07:43:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486269.753698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM2LN-0000UF-4v; Sun, 29 Jan 2023 07:43:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486269.753698; Sun, 29 Jan 2023 07:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM2LN-0000U8-2M; Sun, 29 Jan 2023 07:43:17 +0000
Received: by outflank-mailman (input) for mailman id 486269;
 Sun, 29 Jan 2023 07:43:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pM2LL-0000U2-92
 for xen-devel@lists.xenproject.org; Sun, 29 Jan 2023 07:43:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pM2LK-0000PX-Ss; Sun, 29 Jan 2023 07:43:14 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pM2LK-0001L8-NJ; Sun, 29 Jan 2023 07:43:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=rf8PdB3U2aMf3op0qNt9E1hxzGSWbvx9c2BmOpQAWaM=; b=EHKOzXdMHGZhYyEQqDv8/jXSJZ
	7TM5tzzOMFt0uXswFGTtq1F+PqMkcUgecLDlbbggFMMl5lWdDIyh6qv/hhQ3tgL49rOBX0hfpiUiM
	6fYinSvLeMWeyjMpJiYFKs00MlX5+KlxnxFmnSnklyEtctBBpS8fQN1suiciZwaCfRzs=;
Message-ID: <33bddc11-ae1e-b467-32d7-647748d1c627@xen.org>
Date: Sun, 29 Jan 2023 07:43:13 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-14-Penny.Zheng@arm.com>
 <23f49916-dd2a-a956-1e6b-6dbb41a8817b@xen.org>
 <AM0PR08MB4530B7AF6EA406882974D528F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
In-Reply-To: <AM0PR08MB4530B7AF6EA406882974D528F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Penny,

On 29/01/2023 06:17, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Wednesday, January 25, 2023 3:09 AM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
>> setup_early_uart to map early UART
>>
>> Hi Penny,
> 
> Hi Julien,
> 
>>
>> On 13/01/2023 05:28, Penny Zheng wrote:
>>> In MMU system, we map the UART in the fixmap (when earlyprintk is used).
>>> However in MPU system, we map the UART with a transient MPU memory
>>> region.
>>>
>>> So we introduce a new unified function setup_early_uart to replace the
>>> previous setup_fixmap.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>> ---
>>>    xen/arch/arm/arm64/head.S               |  2 +-
>>>    xen/arch/arm/arm64/head_mmu.S           |  4 +-
>>>    xen/arch/arm/arm64/head_mpu.S           | 52 +++++++++++++++++++++++++
>>>    xen/arch/arm/include/asm/early_printk.h |  1 +
>>>    4 files changed, 56 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>> index 7f3f973468..a92883319d 100644
>>> --- a/xen/arch/arm/arm64/head.S
>>> +++ b/xen/arch/arm/arm64/head.S
>>> @@ -272,7 +272,7 @@ primary_switched:
>>>             * afterwards.
>>>             */
>>>            bl    remove_identity_mapping
>>> -        bl    setup_fixmap
>>> +        bl    setup_early_uart
>>>    #ifdef CONFIG_EARLY_PRINTK
>>>            /* Use a virtual address to access the UART. */
>>>            ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
>>> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
>>> index b59c40495f..a19b7c873d 100644
>>> --- a/xen/arch/arm/arm64/head_mmu.S
>>> +++ b/xen/arch/arm/arm64/head_mmu.S
>>> @@ -312,7 +312,7 @@ ENDPROC(remove_identity_mapping)
>>>     *
>>>     * Clobbers x0 - x3
>>>     */
>>> -ENTRY(setup_fixmap)
>>> +ENTRY(setup_early_uart)
>>
>> This function is doing more than enabling the early UART. It also sets up
>> the fixmap even when earlyprintk is not configured.
> 
> True, true.
> I've thoroughly read the MMU implementation of setup_fixmap, and I'll try to split
> it up.
> 
>>
>> I am not entirely sure what the name should be. Maybe this needs to be
>> split further.
>>
>>>    #ifdef CONFIG_EARLY_PRINTK
>>>            /* Add UART to the fixmap table */
>>>            ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
>>> @@ -325,7 +325,7 @@ ENTRY(setup_fixmap)
>>>            dsb   nshst
>>>
>>>            ret
>>> -ENDPROC(setup_fixmap)
>>> +ENDPROC(setup_early_uart)
>>>
>>>    /* Fail-stop */
>>>    fail:   PRINT("- Boot failed -\r\n")
>>> diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
>>> index e2ac69b0cc..72d1e0863d 100644
>>> --- a/xen/arch/arm/arm64/head_mpu.S
>>> +++ b/xen/arch/arm/arm64/head_mpu.S
>>> @@ -18,8 +18,10 @@
>>>    #define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
>>>    #define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
>>>    #define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
>>> +#define REGION_DEVICE_PRBAR     0x22    /* SH=10 AP=00 XN=10 */
>>>
>>>    #define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
>>> +#define REGION_DEVICE_PRLAR     0x09    /* NS=0 ATTR=100 EN=1 */
>>>
>>>    /*
>>>     * Macro to round up the section address to be PAGE_SIZE aligned
>>> @@ -334,6 +336,56 @@ ENTRY(enable_mm)
>>>        ret
>>>    ENDPROC(enable_mm)
>>>
>>> +/*
>>> + * Map the early UART with a new transient MPU memory region.
>>> + *
>>
>> Missing "Inputs: "
>>
>>> + * x27: region selector
>>> + * x28: prbar
>>> + * x29: prlar
>>> + *
>>> + * Clobbers x0 - x4
>>> + *
>>> + */
>>> +ENTRY(setup_early_uart)
>>> +#ifdef CONFIG_EARLY_PRINTK
>>> +    /* Save LR, as write_pr will be called later like a nested function */
>>> +    mov   x3, lr
>>> +
>>> +    /*
>>> +     * MPU region for early UART is a transient region, since it will be
>>> +     * replaced by specific device memory layout when FDT gets parsed.
>>
>> I would rather not mention "FDT" here because this code is independent to
>> the firmware table used.
>>
>> However, any reason to use a transient region rather than the one that will
>> be used for the UART driver?
>>
> 
> We don't want to define an MPU region for each device driver. It would exhaust
> MPU regions very quickly.

What is the usual size of an MPU?

However, even if you don't want to define one for every device, it still 
seems sensible to define a fixed temporary one for the early UART, as 
this would simplify the assembly code.


> In commit " [PATCH v2 28/40] xen/mpu: map boot module section in MPU system",

Did you mean patch #27?

> A new FDT property `mpu,device-memory-section` will be introduced for users to statically
> configure the whole system device memory with the smallest number of memory regions in the Device Tree.
> This section shall cover all devices that will be used in Xen, like `UART`, `GIC`, etc.
> For FVP_BaseR_AEMv8R, we have the following definition:
> ```
> mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>;
> ```
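[As an editorial aside: with two address cells and two size cells, the property above decodes to base 0x80000000 and size 0x7ffff000. A minimal sketch of combining such cell pairs, assuming the values have already been byte-swapped out of the DT blob; the function name is illustrative, not the actual Xen parser.]

```c
#include <stdint.h>

/* Combine a pair of 32-bit cells (high word first) into a 64-bit value. */
static uint64_t dt_read_cells2(const uint32_t *cells)
{
    return ((uint64_t)cells[0] << 32) | cells[1];
}
```

For <0x0 0x80000000 0x0 0x7ffff000>, dt_read_cells2(cells) yields the base 0x80000000 and dt_read_cells2(cells + 2) yields the size 0x7ffff000.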

I am a bit worried this will be a recipe for mistakes. Do you have an 
example where the MPU would be exhausted if we reserved some entries for 
each device (or for some of them)?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 10:56:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 10:56:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486325.753709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM5MF-0005iV-0c; Sun, 29 Jan 2023 10:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486325.753709; Sun, 29 Jan 2023 10:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM5ME-0005iO-UE; Sun, 29 Jan 2023 10:56:22 +0000
Received: by outflank-mailman (input) for mailman id 486325;
 Sun, 29 Jan 2023 10:56:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM5MD-0005iE-RH; Sun, 29 Jan 2023 10:56:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM5MD-0005lq-MH; Sun, 29 Jan 2023 10:56:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM5MD-0002ST-2p; Sun, 29 Jan 2023 10:56:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pM5MD-0004YH-2P; Sun, 29 Jan 2023 10:56:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vBvG5y52rHdI4Hewgu3sPPVWD60QeRfA83qu/ayL1VA=; b=5c8CGKG5pmSb8M/8d2GzsF09Gl
	GcMjXtgKNKQ4uENHwATgHq1PEGq4b4Bnh2DUrZkHXAOZn2emJmJImjaXe8vJ/cZP/5ays1M3hn9ic
	2r8hD43cke81fgliNFG4o93umQsV5mtquCVljKEF3Ubmq5f0GmrKoOUewcOXv4NjQHvA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176267-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176267: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:build-armhf:syslog-server:running:regression
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-armhf:capture-logs:broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=c96618275234ad03d44eafe9f8844305bb44fda4
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Jan 2023 10:56:21 +0000

flight 176267 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176267/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 173462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                c96618275234ad03d44eafe9f8844305bb44fda4
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  113 days
Failing since        173470  2022-10-08 06:21:34 Z  113 days  234 attempts
Testing same since   176267  2023-01-29 00:13:34 Z    0 days    1 attempts

------------------------------------------------------------
3466 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

(No revision log; it would be 533253 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 13:18:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 13:18:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486352.753719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM7ZB-0004Rj-8d; Sun, 29 Jan 2023 13:17:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486352.753719; Sun, 29 Jan 2023 13:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM7ZB-0004Rc-5z; Sun, 29 Jan 2023 13:17:53 +0000
Received: by outflank-mailman (input) for mailman id 486352;
 Sun, 29 Jan 2023 13:17:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM7ZA-0004RS-6x; Sun, 29 Jan 2023 13:17:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM7ZA-0000Un-31; Sun, 29 Jan 2023 13:17:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM7Z9-0001Hm-Jl; Sun, 29 Jan 2023 13:17:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pM7Z9-00023u-JJ; Sun, 29 Jan 2023 13:17:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KaaRI9YVWR4CRhV6plfCRqG4ztlUKlqmpBOD9kdRpqY=; b=S3Ky+wxvtogWElxOwtn7rkRUT6
	9Uwbj+wY7lne1VgvLkE+jb0EMNjurG7s3cngBHmrWxZ1mBid4/5fojjo0t8oEvtWiordYqT57jtcf
	FrGvyWHekbYh1k2VsKpY/lkj9FU0ZWy7ihiRua/U8drfcdCkcGmAmj4cKQOSb57m3rCI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176270-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176270: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start.2:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-ping-check-xen:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Jan 2023 13:17:51 +0000

flight 176270 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176270/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds      5 host-install(5)          broken pass in 176265
 test-amd64-i386-xl-qemut-ws16-amd64  5 host-install(5)   broken pass in 176265
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 21 guest-start.2 fail in 176261 pass in 176270
 test-amd64-amd64-xl-rtds  10 host-ping-check-xen fail in 176265 pass in 176261
 test-amd64-i386-libvirt-raw   7 xen-install                fail pass in 176265
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 176265

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 176265 like 175447
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 176265 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 176265 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 176265 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 176265 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 176265 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 176265 never pass
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 176265 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 176265 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 176265 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   38 days
Testing same since   176224  2023-01-26 22:14:43 Z    2 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-i386-xl-qemut-ws16-amd64 host-install(5)

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry_safe has next pointing to the
    deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code takes a reference on next which would
    prevent a use-after-free.
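    The pitfall described above can be sketched in a small stand-alone
    program.  This is NOT code from xenstored or the commit under test;
    the list helpers are hypothetical, simplified re-creations of the
    kernel-style list.h macros, and the "deleted" flag is a safe
    stand-in for freed memory so the failure mode is observable without
    an actual use-after-free:

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Simplified re-creations of kernel-style list helpers (illustration only). */
    struct list_head {
        struct list_head *next, *prev;
    };

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    static void list_del(struct list_head *e)
    {
        /* Unlink only; the entry's stale pointers model freed memory. */
        e->prev->next = e->next;
        e->next->prev = e->prev;
    }

    struct connection {
        int id;
        int deleted;            /* stand-in for "this entry was freed" */
        struct list_head list;
    };

    /* Returns 1 if the walk visits an entry that was already deleted. */
    static int walk_and_release_other(void)
    {
        struct connection a = { 1, 0 }, b = { 2, 0 }, c = { 3, 0 };
        struct list_head head;
        struct list_head *pos, *n;
        int touched_deleted = 0;

        head.next = &a.list;   head.prev = &c.list;
        a.list.next = &b.list; a.list.prev = &head;
        b.list.next = &c.list; b.list.prev = &a.list;
        c.list.next = &head;   c.list.prev = &b.list;

        /* Hand-expansion of list_for_each_entry_safe(): "n" is cached
         * BEFORE the body runs, so the traversal only tolerates
         * deletion of the current entry "pos", not of "n". */
        for (pos = head.next, n = pos->next; pos != &head;
             pos = n, n = pos->next) {
            struct connection *conn =
                container_of(pos, struct connection, list);

            if (conn->deleted)
                touched_deleted = 1;   /* would be a use-after-free */

            /* Like RELEASE/do_release: while visiting a, the body
             * deletes a DIFFERENT entry (b); n still points at b. */
            if (conn->id == 1) {
                list_del(&b.list);
                b.deleted = 1;
            }
        }
        return touched_deleted;
    }

    int main(void)
    {
        printf("walk touched a deleted entry: %d\n",
               walk_and_release_other());
        return 0;
    }
    ```

    The walk reports 1: after the body releases connection b, the
    cached next pointer still leads the next iteration straight into
    the deleted entry, which is exactly the failure the revert avoids.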
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 15:08:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 15:08:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486372.753729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM9HQ-0008VE-68; Sun, 29 Jan 2023 15:07:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486372.753729; Sun, 29 Jan 2023 15:07:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pM9HQ-0008V7-3O; Sun, 29 Jan 2023 15:07:40 +0000
Received: by outflank-mailman (input) for mailman id 486372;
 Sun, 29 Jan 2023 15:07:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM9HO-0008Ux-Vk; Sun, 29 Jan 2023 15:07:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM9HO-0002v3-Rc; Sun, 29 Jan 2023 15:07:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pM9HO-000622-Fr; Sun, 29 Jan 2023 15:07:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pM9HO-0002YF-FK; Sun, 29 Jan 2023 15:07:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7dkGIR+xHGNC3g0Vt8oTSev7zBTuZtXYzK9Or7Law74=; b=l+YIpopFgVMdOyov8KrpRAcnLk
	cJ+mt1YEqTl/pr71kfV+Jn581kkTz+A+zHY81Rndg4F4UIXn8gdEhgr/hiQSJ2UdbigLo0OzHEALE
	dmQqHe5hYsajk7uIQrhP/DJKejB39g+eNhDCwTwOkoOO7QM5EKkZiNI46oFDFFsKlSng=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176268-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176268: regressions - trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-pair:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-arm64-pvops:kernel-build:fail:regression
    xen-unstable:build-armhf:syslog-server:running:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf:capture-logs:broken:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Jan 2023 15:07:38 +0000

flight 176268 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176268/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-pair    <job status>                 broken
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-pvhv2-intel    <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175994
 build-arm64-pvops             6 kernel-build   fail in 176266 REGR. vs. 175994
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ovmf-amd64  5 host-install(5)   broken pass in 176266
 test-amd64-amd64-xl-pvhv2-intel  5 host-install(5)       broken pass in 176266
 test-amd64-i386-libvirt-pair  6 host-install/src_host(6) broken pass in 176266
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 176266 pass in 176268
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install    fail pass in 176266
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install          fail pass in 176266
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail pass in 176266

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 176266 n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175994
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    9 days
Failing since        176003  2023-01-20 17:40:27 Z    8 days   19 attempts
Testing same since   176222  2023-01-26 22:13:29 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  George Dunlap <george.dunlap@cloud.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 broken  
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-libvirt-pair broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job build-armhf broken
broken-step test-amd64-i386-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-xl-pvhv2-intel host-install(5)
broken-step test-amd64-i386-libvirt-pair host-install/src_host(6)
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

(No revision log; it would be 1225 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 17:49:20 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176269-mainreport@xen.org>
Subject: [libvirt test] 176269: trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-armhf:<job status>:broken:regression
    libvirt:build-armhf:host-install(4):broken:regression
    libvirt:build-armhf:syslog-server:running:regression
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:build-armhf:capture-logs:broken:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=9f8fba7501327a60f6adb279ea17f0e2276071be
X-Osstest-Versions-That:
    libvirt=95a278a84591b6a4cfa170eba31c8ec60e82f940
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Jan 2023 17:48:40 +0000

flight 176269 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176269/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176139
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 18 guest-start/debianhvm.repeat fail pass in 176262

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176139
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              9f8fba7501327a60f6adb279ea17f0e2276071be
baseline version:
 libvirt              95a278a84591b6a4cfa170eba31c8ec60e82f940

Last test of basis   176139  2023-01-26 04:18:49 Z    3 days
Failing since        176233  2023-01-27 04:18:53 Z    2 days    3 attempts
Testing same since   176262  2023-01-28 04:20:25 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiri Denemark <jdenemar@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 9f8fba7501327a60f6adb279ea17f0e2276071be
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Thu Jan 26 16:12:00 2023 +0100

    remote: Fix version annotation for remoteDomainFDAssociate
    
    The API was added in libvirt 9.0.0.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Peter Krempa <pkrempa@redhat.com>

commit a0fbf1e25cd0f91bedf159bf7f0086f4b1aeafc2
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 16:48:50 2023 +0100

    rpc: Use struct zero initializer for args
    
    In a recent commit, v9.0.0-104-g0211e430a8, I turned all args
    vars in src/remote/remote_driver.c to be initialized with {0}.
    What I missed was the generated code.
    
    Do what we've done in v9.0.0-13-g1c656836e3 and also initialize
    args, not just ret.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
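
    [Editor's note: a minimal sketch of the C {0} zero-initializer idiom
    this commit refers to. The demo_args struct and args_are_zeroed helper
    are hypothetical stand-ins for the generated remote_*_args types, not
    libvirt code.]

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical stand-in for a generated RPC args struct. */
    typedef struct {
        int flags;
        char *name;
        unsigned long long offset;
    } demo_args;

    /* Returns 1 if a {0}-initialized struct has every member zeroed.
     * The = {0} initializer sets the first member to 0 and value-
     * initializes all remaining members, so no stale stack data is
     * left in the struct before it is sent over the wire. */
    int args_are_zeroed(void)
    {
        demo_args args = {0};
        return args.flags == 0 && args.name == NULL && args.offset == 0;
    }

    int main(void)
    {
        assert(args_are_zeroed());
        return 0;
    }
    ```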

commit 2dde3840b1d50e79f6b8161820fff9fe62f613a9
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix missing device in crypto-builtin XML
    
    Another forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit f3c9cbc36cc10775f6cefeb7e3de2f799dc74d70
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix watchdog parameters in crypto-builtin
    
    Forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit a2c5c5dad2275414e325ca79778fad2612d14470
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:34 2023 +0100

    news: Add information about iTCO watchdog changes
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 2fa92efe9b286ad064833cd2d8b907698e58e1cf
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:30 2023 +0100

    Document change to multiple watchdogs
    
    With the reasoning behind it.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 926594dcc82b40f483010cebe5addbf1d7f58b24
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 11:22:22 2023 +0100

    qemu: Add implicit watchdog for q35 machine types
    
    The iTCO watchdog has been part of the q35 machine type since its
    inception; we just did not add it implicitly.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2137346
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d81a27b9815d68d85d2ddc9671649923ee5905d7
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 14:15:06 2023 +0100

    qemu: Enable iTCO watchdog by disabling its noreboot pin strap
    
    In order for the iTCO watchdog to be operational, we must disable the
    noreboot pin strap in qemu.  This is the default starting from the
    8.0 machine types, but it is desirable for older ones as well, and we
    can safely do that since the change is not guest-visible.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 5b80e93e42a1d89ee64420debd2b4b785a144c40
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:26:21 2023 +0100

    Add iTCO watchdog support
    
    Supported only with q35 machine types.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1c61bd718a9e311016da799a42dfae18f538385a
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Tue Nov 8 09:10:57 2022 +0100

    Support multiple watchdog devices
    
    This is already possible with qemu, and is actually already happening
    with q35 machines and a specified watchdog, since q35 already
    includes a watchdog that we do not include in the XML.  In order to
    express this possibility, multiple watchdogs need to be supported.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit c5340d5420012412ea298f0102cc7f113e87d89b
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:28:52 2023 +0100

    qemuDomainAttachWatchdog: Avoid unnecessary nesting
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1cf7e6ec057a80f3c256d739a8228e04b7fb8862
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:25:06 2023 +0100

    remote: Drop useless cleanup in remoteDispatchNodeGet{CPU,Memory}Stats
    
    The function cannot fail once it starts populating
    ret->params.params_val[i].field.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d0f339170f35957e7541e5b20552d0007e150fbc
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:06:33 2023 +0100

    remote: Avoid leaking uri_out
    
    In case the API returned success and a NULL pointer in uri_out, we would
    leak the preallocated buffer used for storing the uri_out pointer.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 4849eb2220fb2171e88e014a8e63018d20a8de95
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 11:56:28 2023 +0100

    remote: Propagate error from virDomainGetSecurityLabelList via RPC
    
    The daemon side of this API has been broken ever since the API was
    introduced in 2012. Instead of sending the error from
    virDomainGetSecurityLabelList via RPC so that the client can see it, the
    dispatcher would just send a successful reply with return value set to
    -1 (and an empty array of labels). The client side would propagate this
    return value so the client can see the API failed, but the original
    error would be lost.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 0211e430a87a96db9a4e085e12f33caad9167653
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 13:19:31 2023 +0100

    remote: Initialize args variable
    
    Recently, in v9.0.0-7-gb2034bb04c, we dropped the initialization
    of the @args variable. The reasoning was that eventually all
    members of the variable would be set. Well, this is not correct.
    For instance, in remoteConnectGetAllDomainStats() the
    args.doms.doms_val pointer is set only if @ndoms != 0. However,
    regardless of that, the pointer is then passed to VIR_FREE().
    
    Worse, the whole args is passed to
    xdr_remote_connect_get_all_domain_stats_args() which then calls
    xdr_array, which tests the (uninitialized) pointer against NULL.
    
    This effectively reverts b2034bb04c61c75ddbfbed46879d641b6f8ca8dc.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Martin Kletzander <mkletzan@redhat.com>

commit c3afde9211b550d3900edc5386ab121f5b39fd3e
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 11:56:10 2023 +0100

    qemu_domain: Don't unref NULL hash table in qemuDomainRefreshStatsSchema()
    
    The g_hash_table_unref() function does not accept NULL; passing
    NULL triggers a glib runtime warning. Check that the hash table
    is not NULL and only then unref it.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 18:50:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 18:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486409.753754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMCkc-0000Ce-KR; Sun, 29 Jan 2023 18:50:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486409.753754; Sun, 29 Jan 2023 18:50:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMCkc-0000C6-CU; Sun, 29 Jan 2023 18:50:02 +0000
Received: by outflank-mailman (input) for mailman id 486409;
 Sun, 29 Jan 2023 18:50:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=As8j=52=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1pMCkb-00007Y-1L
 for xen-devel@lists.xenproject.org; Sun, 29 Jan 2023 18:50:01 +0000
Received: from sonata.ens-lyon.org (sonata.ens-lyon.org [140.77.166.138])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b84e0226-a005-11ed-b8d1-410ff93cb8f0;
 Sun, 29 Jan 2023 19:49:56 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 4D6B2200FB;
 Sun, 29 Jan 2023 19:49:54 +0100 (CET)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 4AaagANKtKKu; Sun, 29 Jan 2023 19:49:54 +0100 (CET)
Received: from begin (lfbn-bor-1-1163-184.w92-158.abo.wanadoo.fr
 [92.158.138.184])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 0289A200F9;
 Sun, 29 Jan 2023 19:49:53 +0100 (CET)
Received: from samy by begin with local (Exim 4.96)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1pMCkT-000G6W-1i;
 Sun, 29 Jan 2023 19:49:53 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b84e0226-a005-11ed-b8d1-410ff93cb8f0
Date: Sun, 29 Jan 2023 19:49:53 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH] Mini-OS: move xenbus test code into test.c
Message-ID: <20230129184953.lcfdp23l6pp42c3l@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20230127073346.6992-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230127073346.6992-1-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Juergen Gross, on Fri, 27 Jan 2023 08:33:46 +0100, wrote:
> The test code in xenbus.c can easily be moved into test.c.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  test.c   | 108 +++++++++++++++++++++++++++++++++++++++++++++++++++-
>  xenbus.c | 113 -------------------------------------------------------
>  2 files changed, 106 insertions(+), 115 deletions(-)
> 
> diff --git a/test.c b/test.c
> index 42a26661..465c54e8 100644
> --- a/test.c
> +++ b/test.c
> @@ -44,6 +44,7 @@
>  #include <fcntl.h>
>  #include <xen/features.h>
>  #include <xen/version.h>
> +#include <xen/io/xs_wire.h>
>  
>  #ifdef CONFIG_XENBUS
>  static unsigned int do_shutdown = 0;
> @@ -52,11 +53,114 @@ static DECLARE_WAIT_QUEUE_HEAD(shutdown_queue);
>  #endif
>  
>  #ifdef CONFIG_XENBUS
> -void test_xenbus(void);
> +/* Send a debug message to xenbus.  Can block. */
> +static void xenbus_debug_msg(const char *msg)
> +{
> +    int len = strlen(msg);
> +    struct write_req req[] = {
> +        { "print", sizeof("print") },
> +        { msg, len },
> +        { "", 1 }};
> +    struct xsd_sockmsg *reply;
> +
> +    reply = xenbus_msg_reply(XS_DEBUG, 0, req, ARRAY_SIZE(req));
> +    printk("Got a reply, type %d, id %d, len %d.\n",
> +           reply->type, reply->req_id, reply->len);
> +}
> +
> +static void do_ls_test(const char *pre)
> +{
> +    char **dirs, *msg;
> +    int x;
> +
> +    printk("ls %s...\n", pre);
> +    msg = xenbus_ls(XBT_NIL, pre, &dirs);
> +    if ( msg )
> +    {
> +        printk("Error in xenbus ls: %s\n", msg);
> +        free(msg);
> +        return;
> +    }
> +
> +    for ( x = 0; dirs[x]; x++ )
> +    {
> +        printk("ls %s[%d] -> %s\n", pre, x, dirs[x]);
> +        free(dirs[x]);
> +    }
> +
> +    free(dirs);
> +}
> +
> +static void do_read_test(const char *path)
> +{
> +    char *res, *msg;
> +
> +    printk("Read %s...\n", path);
> +    msg = xenbus_read(XBT_NIL, path, &res);
> +    if ( msg )
> +    {
> +        printk("Error in xenbus read: %s\n", msg);
> +        free(msg);
> +        return;
> +    }
> +    printk("Read %s -> %s.\n", path, res);
> +    free(res);
> +}
> +
> +static void do_write_test(const char *path, const char *val)
> +{
> +    char *msg;
> +
> +    printk("Write %s to %s...\n", val, path);
> +    msg = xenbus_write(XBT_NIL, path, val);
> +    if ( msg )
> +    {
> +        printk("Result %s\n", msg);
> +        free(msg);
> +    }
> +    else
> +        printk("Success.\n");
> +}
> +
> +static void do_rm_test(const char *path)
> +{
> +    char *msg;
> +
> +    printk("rm %s...\n", path);
> +    msg = xenbus_rm(XBT_NIL, path);
> +    if ( msg )
> +    {
> +        printk("Result %s\n", msg);
> +        free(msg);
> +    }
> +    else
> +        printk("Success.\n");
> +}
>  
>  static void xenbus_tester(void *p)
>  {
> -    test_xenbus();
> +    printk("Doing xenbus test.\n");
> +    xenbus_debug_msg("Testing xenbus...\n");
> +
> +    printk("Doing ls test.\n");
> +    do_ls_test("device");
> +    do_ls_test("device/vif");
> +    do_ls_test("device/vif/0");
> +
> +    printk("Doing read test.\n");
> +    do_read_test("device/vif/0/mac");
> +    do_read_test("device/vif/0/backend");
> +
> +    printk("Doing write test.\n");
> +    do_write_test("device/vif/0/flibble", "flobble");
> +    do_read_test("device/vif/0/flibble");
> +    do_write_test("device/vif/0/flibble", "widget");
> +    do_read_test("device/vif/0/flibble");
> +
> +    printk("Doing rm test.\n");
> +    do_rm_test("device/vif/0/flibble");
> +    do_read_test("device/vif/0/flibble");
> +    printk("(Should have said ENOENT)\n");
>  }
>  #endif
>  
> diff --git a/xenbus.c b/xenbus.c
> index aa1fe7bf..81e9b65d 100644
> --- a/xenbus.c
> +++ b/xenbus.c
> @@ -964,119 +964,6 @@ domid_t xenbus_get_self_id(void)
>      return ret;
>  }
>  
> -#ifdef CONFIG_TEST
> -/* Send a debug message to xenbus.  Can block. */
> -static void xenbus_debug_msg(const char *msg)
> -{
> -    int len = strlen(msg);
> -    struct write_req req[] = {
> -        { "print", sizeof("print") },
> -        { msg, len },
> -        { "", 1 }};
> -    struct xsd_sockmsg *reply;
> -
> -    reply = xenbus_msg_reply(XS_DEBUG, 0, req, ARRAY_SIZE(req));
> -    printk("Got a reply, type %d, id %d, len %d.\n",
> -           reply->type, reply->req_id, reply->len);
> -}
> -
> -static void do_ls_test(const char *pre)
> -{
> -    char **dirs, *msg;
> -    int x;
> -
> -    printk("ls %s...\n", pre);
> -    msg = xenbus_ls(XBT_NIL, pre, &dirs);
> -    if ( msg )
> -    {
> -        printk("Error in xenbus ls: %s\n", msg);
> -        free(msg);
> -        return;
> -    }
> -
> -    for ( x = 0; dirs[x]; x++ )
> -    {
> -        printk("ls %s[%d] -> %s\n", pre, x, dirs[x]);
> -        free(dirs[x]);
> -    }
> -
> -    free(dirs);
> -}
> -
> -static void do_read_test(const char *path)
> -{
> -    char *res, *msg;
> -
> -    printk("Read %s...\n", path);
> -    msg = xenbus_read(XBT_NIL, path, &res);
> -    if ( msg )
> -    {
> -        printk("Error in xenbus read: %s\n", msg);
> -        free(msg);
> -        return;
> -    }
> -    printk("Read %s -> %s.\n", path, res);
> -    free(res);
> -}
> -
> -static void do_write_test(const char *path, const char *val)
> -{
> -    char *msg;
> -
> -    printk("Write %s to %s...\n", val, path);
> -    msg = xenbus_write(XBT_NIL, path, val);
> -    if ( msg )
> -    {
> -        printk("Result %s\n", msg);
> -        free(msg);
> -    }
> -    else
> -        printk("Success.\n");
> -}
> -
> -static void do_rm_test(const char *path)
> -{
> -    char *msg;
> -
> -    printk("rm %s...\n", path);
> -    msg = xenbus_rm(XBT_NIL, path);
> -    if ( msg )
> -    {
> -        printk("Result %s\n", msg);
> -        free(msg);
> -    }
> -    else
> -        printk("Success.\n");
> -}
> -
> -/* Simple testing thing */
> -void test_xenbus(void)
> -{
> -    printk("Doing xenbus test.\n");
> -    xenbus_debug_msg("Testing xenbus...\n");
> -
> -    printk("Doing ls test.\n");
> -    do_ls_test("device");
> -    do_ls_test("device/vif");
> -    do_ls_test("device/vif/0");
> -
> -    printk("Doing read test.\n");
> -    do_read_test("device/vif/0/mac");
> -    do_read_test("device/vif/0/backend");
> -
> -    printk("Doing write test.\n");
> -    do_write_test("device/vif/0/flibble", "flobble");
> -    do_read_test("device/vif/0/flibble");
> -    do_write_test("device/vif/0/flibble", "widget");
> -    do_read_test("device/vif/0/flibble");
> -
> -    printk("Doing rm test.\n");
> -    do_rm_test("device/vif/0/flibble");
> -    do_read_test("device/vif/0/flibble");
> -    printk("(Should have said ENOENT)\n");
> -}
> -#endif /* CONFIG_TEST */
> -
>  /*
>   * Local variables:
>   * mode: C
> -- 
> 2.35.3
> 

-- 
Samuel
---
For an independent, transparent and rigorous evaluation!
I support the Inria Evaluation Committee.


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 18:50:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 18:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486411.753767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMCkm-0001Hl-S2; Sun, 29 Jan 2023 18:50:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486411.753767; Sun, 29 Jan 2023 18:50:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMCkm-0001He-Oi; Sun, 29 Jan 2023 18:50:12 +0000
Received: by outflank-mailman (input) for mailman id 486411;
 Sun, 29 Jan 2023 18:50:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=As8j=52=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1pMCkl-00007Y-Jg
 for xen-devel@lists.xenproject.org; Sun, 29 Jan 2023 18:50:11 +0000
Received: from sonata.ens-lyon.org (sonata.ens-lyon.org [140.77.166.138])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c0103a7b-a005-11ed-b8d1-410ff93cb8f0;
 Sun, 29 Jan 2023 19:50:07 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 6C37B200FB;
 Sun, 29 Jan 2023 19:50:07 +0100 (CET)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 5T9GIAFM5FCn; Sun, 29 Jan 2023 19:50:06 +0100 (CET)
Received: from begin (lfbn-bor-1-1163-184.w92-158.abo.wanadoo.fr
 [92.158.138.184])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 642D4200F9;
 Sun, 29 Jan 2023 19:50:06 +0100 (CET)
Received: from samy by begin with local (Exim 4.96)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1pMCkg-000G6l-38;
 Sun, 29 Jan 2023 19:50:06 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0103a7b-a005-11ed-b8d1-410ff93cb8f0
Date: Sun, 29 Jan 2023 19:50:06 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	wl@xen.org
Subject: Re: [PATCH] Mini-OS: remove stale subdirs from Makefile
Message-ID: <20230129185006.p44xmmrlppiaayru@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org, wl@xen.org
References: <20230127073244.6883-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230127073244.6883-1-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Juergen Gross, on Fri, 27 Jan 2023 08:32:44 +0100, wrote:
> The SUBDIRS make variable has some stale entries, remove them.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/Makefile b/Makefile
> index f3acdd2f..747c7c01 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -34,7 +34,7 @@ EXTRA_OBJS =
>  TARGET := mini-os
>  
>  # Subdirectories common to mini-os
> -SUBDIRS := lib xenbus console
> +SUBDIRS := lib
>  
>  src-$(CONFIG_BLKFRONT) += blkfront.c
>  src-$(CONFIG_CONSFRONT) += consfront.c
> -- 
> 2.35.3
> 

-- 
Samuel
---
For an independent, transparent and rigorous evaluation!
I support the Inria Evaluation Committee.


From xen-devel-bounces@lists.xenproject.org Sun Jan 29 23:03:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Jan 2023 23:03:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486445.753777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMGhZ-0003ug-Nq; Sun, 29 Jan 2023 23:03:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486445.753777; Sun, 29 Jan 2023 23:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMGhZ-0003uZ-L1; Sun, 29 Jan 2023 23:03:09 +0000
Received: by outflank-mailman (input) for mailman id 486445;
 Sun, 29 Jan 2023 23:03:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMGhY-0003uP-2o; Sun, 29 Jan 2023 23:03:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMGhX-0005sF-MH; Sun, 29 Jan 2023 23:03:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMGhX-00081n-3I; Sun, 29 Jan 2023 23:03:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMGhX-0002b8-2r; Sun, 29 Jan 2023 23:03:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1T2tXTeVP9/5AgKnjM7D/rhJqQ5jAMqSCTQ08AfUfJc=; b=q1LXRnqdUDFwj5PD+OBKoO/llZ
	MFbzu8ZDHMFP5+a9hyaceZ4mEBpEGxqwtVuLdoQIA4u6+QdOPBA53IuPOTa7Ygfyz/7KNyEUO+50h
	nLDefupPOabUPs2U8q8tGN+mBOAQWsuNbIJucRKU9qRCJbczMZJBLFlJo0J3w+YUp7rY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176271-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176271: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:test-amd64-amd64-libvirt:<job status>:broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:build-armhf:syslog-server:running:regression
    linux-linus:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-armhf:capture-logs:broken:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=c96618275234ad03d44eafe9f8844305bb44fda4
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Jan 2023 23:03:07 +0000

flight 176271 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176271/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt      5 host-install(5)          broken pass in 176267
 test-amd64-amd64-qemuu-nested-intel  8 xen-boot            fail pass in 176267

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 173462
 test-amd64-amd64-libvirt    15 migrate-support-check fail in 176267 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                c96618275234ad03d44eafe9f8844305bb44fda4
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  114 days
Failing since        173470  2022-10-08 06:21:34 Z  113 days  235 attempts
Testing same since   176267  2023-01-29 00:13:34 Z    0 days    2 attempts

------------------------------------------------------------
3466 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     broken  
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-job test-amd64-amd64-libvirt broken
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-job build-armhf broken

Not pushing.

(No revision log; it would be 533253 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 00:38:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 00:38:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486459.753786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMIBj-0006FB-F5; Mon, 30 Jan 2023 00:38:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486459.753786; Mon, 30 Jan 2023 00:38:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMIBj-0006F4-CL; Mon, 30 Jan 2023 00:38:23 +0000
Received: by outflank-mailman (input) for mailman id 486459;
 Mon, 30 Jan 2023 00:38:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jy0V=53=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pMIBi-0006Ey-It
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 00:38:22 +0000
Received: from sonic313-20.consmr.mail.gq1.yahoo.com
 (sonic313-20.consmr.mail.gq1.yahoo.com [98.137.65.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 62a345e5-a036-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 01:38:18 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic313.consmr.mail.gq1.yahoo.com with HTTP; Mon, 30 Jan 2023 00:38:15 +0000
Received: by hermes--production-ne1-746bc6c6c4-b28lr (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 11cce946b55ddb623c33b98cbbbc1855; 
 Mon, 30 Jan 2023 00:38:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62a345e5-a036-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1675039095; bh=Z1YRfJ9MJzzbL2OT5tKmtYqA6tEeZV38Lqsc4gptOkU=; h=Date:Subject:From:To:Cc:References:In-Reply-To:From:Subject:Reply-To; b=NdgdCxF0fWZVVAXJ+rJz41cFjS0SOrQrlts3tXMdpa6lv+QvW3h/PWbD5ZgO0m5Ya4oa6VXA9AOfAMzEW+3e/dU0RVXw/zk3zvU+AYJaw9WtZfFeGxcsAkos/T/f9tYC/v+H8rzHgkHt29khg6S90M2igYXHaXA+pg7VW8X23ZJkRNyW5yLno1v6I7yoEAGUp2dO52sZcsOynVG13ieGo8H3CoRP3nqMyYqA3h/4CQhuGqZna0wnpA3Py/oHVyAyaWTy5ei3BRjxa2RHgJRN9+oG/TeIm4WOzyPmOKr83vWbWRXqdcbo65ZEIXPe2O1oZVIXy/PfOOUR79ec40LiZw==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1675039095; bh=Roxfhryd3dMDVMT8R0d19nWF/nLhCq/9FObQ2+MU/Q8=; h=X-Sonic-MF:Date:Subject:From:To:From:Subject; b=LQJaPe77AMWV27Mz715x5yQrCq0+qjCIiR0x0B7dv+mDl0IFc8WffYd/bDWzmnTS/eB8xYzpe2ozlwQd5Q4r6WKz/a5taPRhjTtTQGZmgOXIY3YAE1DmnX1l0CsvMRkHn0ZiQ75G0uhZ/TGWFjkXKa+1qkd1emk7xNB04eAA68nPwRaOMpFr7DqKqpV7H2o3caV+3yqo+ExX4uZnVhGcMCVaO5i1JnIqbxL/08anUNiVpGFZY48/2l6HKLESuQVkR6MgluivtMKX5zBHwr8X57QiPOYE8hhuiYZ2jsV6J3PR9u6XfpYIgXZt0dfJ4svDhft1wGLr2gah3tISC4wACw==
X-YMail-OSG: 5z.wZlwVM1mKgr02M6._W7BcirUxhI6bnZMsEsVl4RMQPFWvo.rRNaIstk4gqbX
 NbZcsw7dHOqz_CDBnEbTGJy4xwrsdjdLnDxDebTI4kdPSTH9vD.9oj8mkp1XirohHKYw5okuxL1w
 KYQUtLWKhKTQ_l8EzVWqJx7aUqiLIjlIVFDRqW7i2k3hybZLjlSIey7_4oAo4RTwwE4w_IAaNVxk
 UymJ7k.SgF5909ca6H3PRPWYf58pVmPalMv5db2Bal3SA3Ns7oEmKttc8jTmEN1h33qywEAGkh4t
 D0EHgm7JG.HiWBJaubNhfcLnUY5O8AK.co59IC6rLRcKjzxWa3GWuiBmGZMwFlVUnEXGwLbmfxCL
 p3hXcr6mOBTOg_Szt_lLHh93d748e3RXb2TVWC6tOHjNud.Gd92wF532rV6GRSq4W_UYpFfAt8hJ
 SdMn3ViJg0uOu5uF..La_0deNkZoFCcAQoqNi5DoSvBRLdCuYeIWnCdvE0Gz9yezv66EidGRzya2
 _dlmJpnItacoMS3NJCzukqlLH7LpZJq97LHRsxu07MaDEsgf5GYsta6hWY3v8ScdUKbYOk2WMP7h
 Kk49E29e8KcV1vXkAhNZneANf15wbrouD1uJp33QSa93IuQwXk_wsoCQhbBcO_DvYkRNWaLpzivk
 3WoEZiPxZ.4EsLk466jTy5Blq0GH_IEAnzCtzji.jl.XvX.kM8iVPbQ2Gf7ZHIcKKBlwVofz.JWU
 0YIwwFhKfsVc5ApSlGMx_JonuzgnR4lG71uH9K39XXAxkHRoYxDioYJZpkhNAi1WQFKkoWJ4pOu_
 2flv9XLpNYm8thk12FdFn6r7fwuc4MTZivABSbPyOFaKh50JXTPuSD1PBKbTQudtkbclNAi62KId
 qyJiKwC.j7jPae5l.NmjdjWix9riPWdonQN63t7yNiB89t9aLJww5d82Fkfe4yphZpbH9VT3YSLr
 29FBXu_iIFYk4DDtFTH1zOsRFR1hJofoxMDi22WjqRaQALT_ErX_.9NHW6S5m_BEuPPZjk6sdND3
 t2tNKlK.tuHcy350j4MZ.Grswp_fnJzCtuMMelUNHy712TLSuNJkthb2rh4d4SUWfmiCQlC3DFN5
 .03y4eX15p3NKCPQOcclRSmOSCgiBmBi3n4nMG.PPnY5FNZbGLXk6BIsPltXbfJx9MnIde90t1RV
 9nP6n5Z8zqxJq1j9c3Q6xTyLXzMdpbHoqd0pERTkHJg5SCMtB5LfBQLg2eUiqy2ARguKGFhOYkFu
 M36nzAjmJ3ymRPAHjOeEV2obyUpGimj_diaHxHDoWB0kHc4CuY5yvX11Dqh6jQFJUI0fetCEaJHG
 IqIrtMFEuqOXz7QBLMjqpjmQoEEqDdtcYtR6TCCkeKWmELaBMVmuAEjT3ClBgf_VIdDnetNnNFRS
 txg0ew5VNZNYMqXeoE4empQBlEvHw_chFS7sAe3lDou6wA2uTX7sB0IHJjtPE7S8wOCL1GdpgQjo
 uz3hPnUPP3WvxH1CG3AMqBnmdbR4rouDpS_OBACM1HFxCk3AwGGiSxV5q4WVlUXp5XUC_jt0_hEi
 QaRfJYiWzDphe4AsLngmZa8NGEJ9QLHNpjfT2yD4pdF758nZWq0qWDEHfMLI.Ps92QJuIdKpHDlO
 xs461BUzr6M2iUXhZ.ceCGyL2QPJwflfxHhzW866qDRFT90xppvJYrrePpN5eAN5_c9nISK1Ru0v
 JoQaufkNSlZnMbjx_j.qOfgFThdXELzAtxuUyA27kXK..del0T47St0dA8wmKRn7SJIHfNlZnxlM
 hkQOc3TKGNe3eXpRMkSIAk1gbfWjUkC3SmJEAMbSMz.gR30YZh8OXCMtzes3uoCCCpO3y9wZzUXv
 UJfw33PTWiL7sQHpjPJu9hhamvAZqIkmIlDIdV8.Eg.cMy8xOD4h.U001hL419eMxR8CR6xpEquR
 RLX1_r6O6MeGq.MXV8pFu3kS0TL2ydFKq0X0my79Vr85ZKHn.R4GyfK6cwETU2_.0zv6HcweJZ4o
 QHjDQ2x2axFBQ4wSO_hqwwMqmQv27JEqgZpop3tJnlAupUPPfADkYsdEIjg.24am7LDfDp7rXPpC
 qOOaKdMNOLW9QFuG15BWSIwGCmoWtBsInIR3GppZ1mU.fSt7dYgJW0Bqj9l_fhBt_clajoaCkefy
 aKuiHaKUBrTEQEwJRqFm.71wer6ToZot7.Gs0hLT9SSoqP7F3dUilzY_lVb9HRa_TakHe1omn6bf
 7pA8z.bY-
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <a2a927bd-a764-8676-68c9-4c53cb86af3e@aol.com>
Date: Sun, 29 Jan 2023 19:38:10 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Subject: Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default
 for igd-passthru
Content-Language: en-US
From: Chuck Zmudzinski <brchuckz@aol.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, qemu-devel@nongnu.org
References: <cover.1673300848.git.brchuckz.ref@aol.com>
 <cover.1673300848.git.brchuckz@aol.com>
 <Y9EUarVVWr223API@perard.uk.xensource.com>
 <de3a3992-8f56-086a-e19e-bac9233d4265@aol.com>
In-Reply-To: <de3a3992-8f56-086a-e19e-bac9233d4265@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21123 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 6387

On 1/25/23 6:19 PM, Chuck Zmudzinski wrote:
> On 1/25/2023 6:37 AM, Anthony PERARD wrote:
>> On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
>> > I call attention to the commit message of the first patch which points
>> > out that using the "pc" machine and adding the xen platform device on
>> > the qemu upstream command line is not functionally equivalent to using
>> > the "xenfv" machine which automatically adds the xen platform device
>> > earlier in the guest creation process. As a result, there is a noticeable
>> > reduction in the performance of the guest during startup with the "pc"
>> > machine type even if the xen platform device is added via the qemu
>> > command line options, although eventually both Linux and Windows guests
>> > perform equally well once the guest operating system is fully loaded.
>>
>> There shouldn't be a difference between "xenfv" machine or using the
>> "pc" machine while adding the "xen-platform" device, at least with
>> regards to access to disk or network.
>>
>> The first patch of the series is using the "pc" machine without any
>> "xen-platform" device, so we can't compare startup performance based on
>> that.
>>
>> > Specifically, startup time is longer and neither the grub vga drivers
>> > nor the windows vga drivers in early startup perform as well when the
>> > xen platform device is added via the qemu command line instead of being
>> > added immediately after the other emulated i440fx pci devices when the
>> > "xenfv" machine type is used.
>>
>> The "xen-platform" device is mostly a hint to a guest that it can use
>> pv-disk and pv-network devices. I don't think it would change anything
>> with regards to graphics.
>>
>> > For example, when using the "pc" machine, which adds the xen platform
>> > device using a command line option, the Linux guest could not display
>> > the grub boot menu at the native resolution of the monitor, but with the
>> > "xenfv" machine, the grub menu is displayed at the full 1920x1080
>> > native resolution of the monitor for testing. So improved startup
>> > performance is an advantage for the patch for qemu.
>>
>> I've just found out that when doing IGD passthrough, both machine
>> "xenfv" and "pc" are much more different than I thought ... :-(
>> pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
>> turn copies some information from the real host bridge.
>> I guess this new host bridge helps when the firmware sets up the graphics
>> for grub.

Yes, it is needed - see below for the very simple patch to Qemu
upstream that fixes it for the "pc" machine!

> 
> I am surprised it works at all with the "pc" machine, that is, without the
> TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE that is used in the "xenfv"
> machine. This only seems to affect the legacy grub vga driver and the legacy
> Windows vga driver during early boot. Still, I much prefer keeping the "xenfv"
> machine for Intel IGD to this workaround of patching libxl to use the "pc"
> machine.
> 
>>
>> > I also call attention to the last point of the commit message of the
>> > second patch and the comments for reviewers section of the second patch.
>> > This approach, as opposed to fixing this in qemu upstream, makes
>> > maintaining the code in libxl__build_device_model_args_new more
>> > difficult and therefore increases the chances of problems caused by
>> > coding errors and typos for users of libxl. So that is another advantage
>> > of the patch for qemu.
>>
>> We would just need to use a different approach in libxl when generating
>> the command line. We could probably avoid duplications.

I was thinking we could also either write a test that verifies the second
patch generates the correct Qemu command line, or else take the time to
verify the second patch's accuracy by hand before committing it.

>> I was hoping to
>> have patch series for libxl that would change the machine used to start
>> using "pc" instead of "xenfv" for all configurations, but based on the
>> point above (IGD specific change to "xenfv"), then I guess we can't
>> really do anything from libxl to fix IGD passthrough.
> 
> We could switch to the "pc" machine, but we would need to patch
> qemu also so the "pc" machine uses the special device the "xenfv"
> machine uses (TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE).
> ...

I just tested a very simple patch to upstream Qemu that fixes the
difference between the upstream "pc" machine and the upstream "xenfv"
machine:

--- a/hw/i386/pc_piix.c	2023-01-28 13:22:15.714595514 -0500
+++ b/hw/i386/pc_piix.c	2023-01-29 18:08:34.668491593 -0500
@@ -434,6 +434,8 @@
             compat(machine); \
         } \
         pc_init1(machine, TYPE_I440FX_PCI_HOST_BRIDGE, \
+                 xen_igd_gfx_pt_enabled() ? \
+                 TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE : \
                  TYPE_I440FX_PCI_DEVICE); \
     } \
     DEFINE_PC_MACHINE(suffix, name, pc_init_##suffix, optionfn)
----- snip -------

With this simple two-line patch to upstream Qemu, we can use the "pc"
machine without regressions such as the startup performance degradation
I observed before the "pc" machine was fixed for igd passthru!

The "pc" machine maintainers for upstream Qemu would need to accept this
small patch. They might prefer it to the other Qemu patch, which reserves
slot 2 for the upstream "xenfv" machine when the guest is configured for
igd passthru.

>>
>> ...
>>
>> So overall, unfortunately the "pc" machine in QEMU isn't suitable to do
>> IGD passthrough as the "xenfv" machine has already some workaround to
>> make IGD work and just need some more.

Well, the little patch to upstream Qemu shown above fixes the "pc"
machine with IGD, so this approach of patching libxl to use the "pc"
machine may be a viable fix for IGD passthrough.

>>
>> I've seen that the patch for QEMU is now reviewed, so I'll look at having
>> it merged soonish.
>>
>> Thanks,
>>
> 

I just added the information above to help you decide which approach to
use to improve support for the igd passthru feature with Xen and upstream
Qemu. I think my test of the small Qemu patch that fixes the "pc" machine
with igd passthru makes this patch to libxl a viable alternative to the
other upstream Qemu patch, which reserves slot 2 when using the "xenfv"
machine with igd passthru.

Thanks,

Chuck


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 01:58:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 01:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486464.753797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMJQV-00056F-Ao; Mon, 30 Jan 2023 01:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486464.753797; Mon, 30 Jan 2023 01:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMJQV-000567-50; Mon, 30 Jan 2023 01:57:43 +0000
Received: by outflank-mailman (input) for mailman id 486464;
 Mon, 30 Jan 2023 01:57:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMJQU-00055x-2j; Mon, 30 Jan 2023 01:57:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMJQT-0000Df-Vn; Mon, 30 Jan 2023 01:57:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMJQT-0006Hf-G0; Mon, 30 Jan 2023 01:57:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMJQT-0007Qs-Et; Mon, 30 Jan 2023 01:57:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LRqnRTQXQeZgFxrVfr14tv0zOH7UL/7V/vATsYsW4WE=; b=W9+9pH/oCFccPLCLKzV+0LRwce
	j29g9XysDazxUoN4fr/mEtm4PytGcNLqSPnWkCpRqPTpB50a1JgglGSDHJjNqTazYLgz5d2pFIkQK
	mZ+xh79H3t6N86YCRM6CmbsQI9nk+bK88iGyILQTl0PFw6dUZx6d4Scdmc9jYVY4N2CQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176272-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176272: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-pair:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-migrupgrade:host-install/dst_host(7):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start.2:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-ping-check-xen:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 01:57:41 +0000

flight 176272 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176272/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair            <job status>                 broken
 test-amd64-i386-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 build-armhf                     <job status>                 broken
 test-amd64-amd64-migrupgrade    <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds      5 host-install(5)          broken pass in 176265
 test-amd64-i386-xl-qemut-ws16-amd64  5 host-install(5)   broken pass in 176265
 test-amd64-i386-pair          7 host-install/dst_host(7) broken pass in 176270
 test-amd64-amd64-migrupgrade  7 host-install/dst_host(7) broken pass in 176270
 test-amd64-i386-xl-qemut-win7-amd64  5 host-install(5)   broken pass in 176270
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 21 guest-start.2 fail in 176261 pass in 176272
 test-amd64-amd64-xl-rtds  10 host-ping-check-xen fail in 176265 pass in 176261
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 176270 pass in 176272
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 176265

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 176265 like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 176270 like 175447
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2   3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl           3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   39 days
Testing same since   176224  2023-01-26 22:14:43 Z    3 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 broken  
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-pair broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job build-armhf broken
broken-job test-amd64-amd64-migrupgrade broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-step test-amd64-i386-pair host-install/dst_host(7)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-step test-amd64-amd64-migrupgrade host-install/dst_host(7)
broken-step test-amd64-i386-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-i386-xl-qemut-ws16-amd64 host-install(5)

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe against deletion of the current iterator, but
    RELEASE/do_release will delete some other entry in the connections
    list.  I think the observed abort happens because
    list_for_each_entry_safe has next pointing to the deleted connection,
    and that stale pointer is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also note that the old code takes a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:06:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486485.753837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLQg-0004hU-Cg; Mon, 30 Jan 2023 04:06:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486485.753837; Mon, 30 Jan 2023 04:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLQg-0004hE-9N; Mon, 30 Jan 2023 04:06:02 +0000
Received: by outflank-mailman (input) for mailman id 486485;
 Mon, 30 Jan 2023 04:06:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLQe-0003sz-Ak
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:06:00 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 6737e0d2-a053-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 05:05:59 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E11B21FB;
 Sun, 29 Jan 2023 20:06:40 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 786A93F64C;
 Sun, 29 Jan 2023 20:05:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6737e0d2-a053-11ed-9ec0-891035b88211
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 3/3] xen/arm: Extend the memory overlap check to include EfiACPIReclaimMemory
Date: Mon, 30 Jan 2023 12:05:35 +0800
Message-Id: <20230130040535.188236-4-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130040535.188236-1-Henry.Wang@arm.com>
References: <20230130040535.188236-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As with the static regions and boot modules, memory regions of
EfiACPIReclaimMemory type (recorded in bootinfo.acpi if CONFIG_ACPI is
enabled) should not overlap the memory regions in bootinfo.reserved_mem
and bootinfo.modules.

Therefore, this commit reuses `meminfo_overlap_check()` to further
extend the check in `check_reserved_regions_overlap()` so that memory
regions in bootinfo.acpi are also covered. If the extended
`check_reserved_regions_overlap()` detects an overlap, the
`meminfo_add_bank()` defined in `efi-boot.h` returns early.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v2 -> v3:
1. Rebase on top of patch #1 and #2.
2. Add Stefano's Reviewed-by tag.
v1 -> v2:
1. Rebase on top of patch #1 and #2.
---
 xen/arch/arm/efi/efi-boot.h | 10 ++++++++--
 xen/arch/arm/setup.c        |  6 ++++++
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index 223db0c4da..bb64925d70 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -161,13 +161,19 @@ static bool __init meminfo_add_bank(struct meminfo *mem,
                                     EFI_MEMORY_DESCRIPTOR *desc)
 {
     struct membank *bank;
+    paddr_t start = desc->PhysicalStart;
+    paddr_t size = desc->NumberOfPages * EFI_PAGE_SIZE;
 
     if ( mem->nr_banks >= NR_MEM_BANKS )
         return false;
+#ifdef CONFIG_ACPI
+    if ( check_reserved_regions_overlap(start, size) )
+        return false;
+#endif
 
     bank = &mem->bank[mem->nr_banks];
-    bank->start = desc->PhysicalStart;
-    bank->size = desc->NumberOfPages * EFI_PAGE_SIZE;
+    bank->start = start;
+    bank->size = size;
 
     mem->nr_banks++;
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 98df0baffa..7718cee672 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -342,6 +342,12 @@ bool __init check_reserved_regions_overlap(paddr_t region_start,
                                    region_start, region_size) )
         return true;
 
+#ifdef CONFIG_ACPI
+    /* Check if input region is overlapping with ACPI EfiACPIReclaimMemory */
+    if ( meminfo_overlap_check(&bootinfo.acpi, region_start, region_size) )
+        return true;
+#endif
+
     return false;
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:06:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486484.753826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLQc-0004Oq-4N; Mon, 30 Jan 2023 04:05:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486484.753826; Mon, 30 Jan 2023 04:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLQc-0004Oj-15; Mon, 30 Jan 2023 04:05:58 +0000
Received: by outflank-mailman (input) for mailman id 486484;
 Mon, 30 Jan 2023 04:05:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLQa-0003sz-SE
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:05:56 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 650c2efa-a053-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 05:05:55 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4ADC81FB;
 Sun, 29 Jan 2023 20:06:37 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id D44F53F64C;
 Sun, 29 Jan 2023 20:05:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 650c2efa-a053-11ed-9ec0-891035b88211
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 2/3] xen/arm: Extend the memory overlap check to include bootmodules
Date: Mon, 30 Jan 2023 12:05:34 +0800
Message-Id: <20230130040535.188236-3-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130040535.188236-1-Henry.Wang@arm.com>
References: <20230130040535.188236-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As with the static regions defined in bootinfo.reserved_mem, the
bootmodule regions defined in bootinfo.modules should not overlap
memory regions in either bootinfo.reserved_mem or bootinfo.modules.

Therefore, this commit introduces a helper, `bootmodules_overlap_check()`,
and uses it to extend the check in `check_reserved_regions_overlap()`
so that memory regions in bootinfo.modules are also covered.
`check_reserved_regions_overlap()` is then used in `add_boot_module()`
to return early if an overlap is detected.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. Use "[start, end]" format in printk error message.
2. Change the return type of helper functions to bool.
3. Use 'start' and 'size' in helper functions to describe a region.
v1 -> v2:
1. Split original `overlap_check()` to `bootmodules_overlap_check()`.
2. Rework commit message.
---
 xen/arch/arm/setup.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 636604aafa..98df0baffa 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -287,6 +287,32 @@ static bool __init meminfo_overlap_check(struct meminfo *meminfo,
     return false;
 }
 
+static bool __init bootmodules_overlap_check(struct bootmodules *bootmodules,
+                                             paddr_t region_start,
+                                             paddr_t region_size)
+{
+    paddr_t mod_start = INVALID_PADDR, mod_end = 0;
+    paddr_t region_end = region_start + region_size;
+    unsigned int i, mod_num = bootmodules->nr_mods;
+
+    for ( i = 0; i < mod_num; i++ )
+    {
+        mod_start = bootmodules->module[i].start;
+        mod_end = mod_start + bootmodules->module[i].size;
+
+        if ( region_end <= mod_start || region_start >= mod_end )
+            continue;
+        else
+        {
+            printk("Region: [%#"PRIpaddr", %#"PRIpaddr"] overlapping with mod[%u]: [%#"PRIpaddr", %#"PRIpaddr"]\n",
+                   region_start, region_end, i, mod_start, mod_end);
+            return true;
+        }
+    }
+
+    return false;
+}
+
 void __init fw_unreserved_regions(paddr_t s, paddr_t e,
                                   void (*cb)(paddr_t, paddr_t),
                                   unsigned int first)
@@ -311,6 +337,11 @@ bool __init check_reserved_regions_overlap(paddr_t region_start,
                                region_start, region_size) )
         return true;
 
+    /* Check if input region is overlapping with bootmodules */
+    if ( bootmodules_overlap_check(&bootinfo.modules,
+                                   region_start, region_size) )
+        return true;
+
     return false;
 }
 
@@ -328,6 +359,10 @@ struct bootmodule __init *add_boot_module(bootmodule_kind kind,
                boot_module_kind_as_string(kind), start, start + size);
         return NULL;
     }
+
+    if ( check_reserved_regions_overlap(start, size) )
+        return NULL;
+
     for ( i = 0 ; i < mods->nr_mods ; i++ )
     {
         mod = &mods->module[i];
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:06:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486483.753817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLQZ-00048d-RD; Mon, 30 Jan 2023 04:05:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486483.753817; Mon, 30 Jan 2023 04:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLQZ-00048W-OS; Mon, 30 Jan 2023 04:05:55 +0000
Received: by outflank-mailman (input) for mailman id 486483;
 Mon, 30 Jan 2023 04:05:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLQY-000485-MR
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:05:54 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 62d8f450-a053-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 05:05:52 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7DAAE2F4;
 Sun, 29 Jan 2023 20:06:33 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 14DDE3F64C;
 Sun, 29 Jan 2023 20:05:48 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62d8f450-a053-11ed-b8d1-410ff93cb8f0
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Wei Chen <wei.chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 1/3] xen/arm: Add memory overlap check for bootinfo.reserved_mem
Date: Mon, 30 Jan 2023 12:05:33 +0800
Message-Id: <20230130040535.188236-2-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130040535.188236-1-Henry.Wang@arm.com>
References: <20230130040535.188236-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As more and more types of static region are being added, and all of
these static regions are defined in bootinfo.reserved_mem, it is
necessary for Xen to check the reserved memory regions for overlaps:
such a check helps the user identify device tree misconfiguration at
an early stage of boot.

Currently we have 3 types of static region, namely
(1) static memory
(2) static heap
(3) static shared memory

(1) and (2) are parsed by the function `device_tree_get_meminfo()` and
(3) is parsed using its own logic. The parsed information for all of
these types is stored in `struct meminfo`.

Therefore, to unify the overlap checking logic for all of these types,
this commit first introduces a helper `meminfo_overlap_check()` and
a function `check_reserved_regions_overlap()` to check whether an input
physical address range overlaps with any of the existing memory regions
defined in bootinfo. It then uses `check_reserved_regions_overlap()`
in `device_tree_get_meminfo()` to do the overlap check for (1) and (2),
and replaces the original overlap check for (3) with
`check_reserved_regions_overlap()`.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. Use "[start, end]" format in printk error message.
2. Change the return type of helper functions to bool.
3. Use 'start' and 'size' in helper functions to describe a region.
v1 -> v2:
1. Split original `overlap_check()` to `meminfo_overlap_check()`.
2. Rework commit message.
---
 xen/arch/arm/bootfdt.c           | 13 +++++-----
 xen/arch/arm/include/asm/setup.h |  2 ++
 xen/arch/arm/setup.c             | 41 ++++++++++++++++++++++++++++++++
 3 files changed, 49 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 0085c28d74..e2f6c7324b 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -88,6 +88,9 @@ static int __init device_tree_get_meminfo(const void *fdt, int node,
     for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
     {
         device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+        if ( mem == &bootinfo.reserved_mem &&
+             check_reserved_regions_overlap(start, size) )
+            return -EINVAL;
         /* Some DT may describe empty bank, ignore them */
         if ( !size )
             continue;
@@ -482,7 +485,9 @@ static int __init process_shm_node(const void *fdt, int node,
                 return -EINVAL;
             }
 
-            if ( (end <= mem->bank[i].start) || (paddr >= bank_end) )
+            if ( check_reserved_regions_overlap(paddr, size) )
+                return -EINVAL;
+            else
             {
                 if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 )
                     continue;
@@ -493,12 +498,6 @@ static int __init process_shm_node(const void *fdt, int node,
                     return -EINVAL;
                 }
             }
-            else
-            {
-                printk("fdt: shared memory region overlap with an existing entry %#"PRIpaddr" - %#"PRIpaddr"\n",
-                        mem->bank[i].start, bank_end);
-                return -EINVAL;
-            }
         }
     }
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2b..f0592370ea 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -143,6 +143,8 @@ void fw_unreserved_regions(paddr_t s, paddr_t e,
 size_t boot_fdt_info(const void *fdt, paddr_t paddr);
 const char *boot_fdt_cmdline(const void *fdt);
 
+bool check_reserved_regions_overlap(paddr_t region_start, paddr_t region_size);
+
 struct bootmodule *add_boot_module(bootmodule_kind kind,
                                    paddr_t start, paddr_t size, bool domU);
 struct bootmodule *boot_module_find_by_kind(bootmodule_kind kind);
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90..636604aafa 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -261,6 +261,32 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
     cb(s, e);
 }
 
+static bool __init meminfo_overlap_check(struct meminfo *meminfo,
+                                         paddr_t region_start,
+                                         paddr_t region_size)
+{
+    paddr_t bank_start = INVALID_PADDR, bank_end = 0;
+    paddr_t region_end = region_start + region_size;
+    unsigned int i, bank_num = meminfo->nr_banks;
+
+    for ( i = 0; i < bank_num; i++ )
+    {
+        bank_start = meminfo->bank[i].start;
+        bank_end = bank_start + meminfo->bank[i].size;
+
+        if ( region_end <= bank_start || region_start >= bank_end )
+            continue;
+        else
+        {
+            printk("Region: [%#"PRIpaddr", %#"PRIpaddr"] overlapping with bank[%u]: [%#"PRIpaddr", %#"PRIpaddr"]\n",
+                   region_start, region_end, i, bank_start, bank_end);
+            return true;
+        }
+    }
+
+    return false;
+}
+
 void __init fw_unreserved_regions(paddr_t s, paddr_t e,
                                   void (*cb)(paddr_t, paddr_t),
                                   unsigned int first)
@@ -271,7 +297,22 @@ void __init fw_unreserved_regions(paddr_t s, paddr_t e,
         cb(s, e);
 }
 
+/*
+ * Given an input physical address range, check if this range is overlapping
+ * with the existing reserved memory regions defined in bootinfo.
+ * Return true if the input physical address range is overlapping with any
+ * existing reserved memory regions, otherwise false.
+ */
+bool __init check_reserved_regions_overlap(paddr_t region_start,
+                                           paddr_t region_size)
+{
+    /* Check if input region is overlapping with bootinfo.reserved_mem banks */
+    if ( meminfo_overlap_check(&bootinfo.reserved_mem,
+                               region_start, region_size) )
+        return true;
 
+    return false;
+}
 
 struct bootmodule __init *add_boot_module(bootmodule_kind kind,
                                           paddr_t start, paddr_t size,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:06:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486482.753807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLQU-0003tD-KF; Mon, 30 Jan 2023 04:05:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486482.753807; Mon, 30 Jan 2023 04:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLQU-0003t5-HF; Mon, 30 Jan 2023 04:05:50 +0000
Received: by outflank-mailman (input) for mailman id 486482;
 Mon, 30 Jan 2023 04:05:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLQT-0003sz-EH
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:05:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 5f35eae1-a053-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 05:05:47 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 693D91FB;
 Sun, 29 Jan 2023 20:06:27 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id F03DE3F64C;
 Sun, 29 Jan 2023 20:05:42 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f35eae1-a053-11ed-9ec0-891035b88211
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Wei Chen <wei.chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 0/3] Memory region overlap check in device tree
Date: Mon, 30 Jan 2023 12:05:32 +0800
Message-Id: <20230130040535.188236-1-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As more and more types of memory region are being defined in the
device tree, it is necessary for Xen to check these memory regions for
overlaps: such a check helps the user identify device tree
misconfiguration at an early stage of boot.

The first patch introduces the basic memory overlap check mechanism
and applies it to the memory regions in bootinfo.reserved_mem. The
following patches extend the overlap check to cover bootmodules and
EfiACPIReclaimMemory.

v2 -> v3:
1. Use "[start, end]" format in printk error message.
2. Change the return type of helper functions to bool.
3. Use 'start' and 'size' in helper functions to describe a region.

v1 -> v2:
- Split original `overlap_check()` to `meminfo_overlap_check()` and
  `bootmodules_overlap_check()`.
- Rework commit message.

Henry Wang (3):
  xen/arm: Add memory overlap check for bootinfo.reserved_mem
  xen/arm: Extend the memory overlap check to include bootmodules
  xen/arm: Extend the memory overlap check to include
    EfiACPIReclaimMemory

 xen/arch/arm/bootfdt.c           | 13 +++--
 xen/arch/arm/efi/efi-boot.h      | 10 +++-
 xen/arch/arm/include/asm/setup.h |  2 +
 xen/arch/arm/setup.c             | 82 ++++++++++++++++++++++++++++++++
 4 files changed, 98 insertions(+), 9 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:06:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:06:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486497.753847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLR6-0005iA-LS; Mon, 30 Jan 2023 04:06:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486497.753847; Mon, 30 Jan 2023 04:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLR6-0005i3-Id; Mon, 30 Jan 2023 04:06:28 +0000
Received: by outflank-mailman (input) for mailman id 486497;
 Mon, 30 Jan 2023 04:06:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLR6-000485-4d
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:06:28 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7728065c-a053-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 05:06:26 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 930A91FB;
 Sun, 29 Jan 2023 20:07:07 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 29E5D3F64C;
 Sun, 29 Jan 2023 20:06:22 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7728065c-a053-11ed-b8d1-410ff93cb8f0
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 0/4] P2M improvements for Arm
Date: Mon, 30 Jan 2023 12:06:10 +0800
Message-Id: <20230130040614.188296-1-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is some clean-up/improvement work that can be done in the
Arm P2M code, triggered by [1] and [2]. These issues were found during
the 4.17 code freeze, so they were not fixed at that time. Therefore,
do the follow-ups here.

Patch#1 addresses one comment in [1]. It was sent earlier and reviewed
once; the updated version, i.e. "[PATCH v2] xen/arm: Reduce redundant
clear root pages when teardown p2m", is picked into this series.

Patch#2 is a new patch based on the v1 comments; it is a prerequisite
for patch#3, since the deferral of the GICv2 CPU interface mapping
should also apply to the new vGIC.

Patches #3 and #4 address the comment in [2], following the discussion
of the two possible options.

[1] https://lore.kernel.org/xen-devel/a947e0b4-8f76-cea6-893f-abf30ff95e0d@xen.org/
[2] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/

v1 -> v2:
1. Move in-code comment for p2m_force_tlb_flush_sync() on top of
   p2m_clear_root_pages().
2. Add a new patch as patch #2.
3. Correct style in in-code comment in patch #3.
4. Avoid open-coding gfn_eq() and gaddr_to_gfn(d->arch.vgic.cbase).
5. Apply same changes for the new vGICv2 implementation, update the
   commit message accordingly.
6. Add in-code comment in old GICv2's vgic_v2_domain_init() and
   new GICv2's vgic_v2_map_resources() to mention the mapping of the
   virtual CPU interface is deferred until first access.
7. Add reviewed-by and acked-by tags accordingly.

Henry Wang (4):
  xen/arm: Reduce redundant clear root pages when teardown p2m
  xen/arm: Rename vgic_cpu_base and vgic_dist_base for new vGIC
  xen/arm: Defer GICv2 CPU interface mapping until the first access
  xen/arm: Clean-up in p2m_init() and p2m_final_teardown()

 xen/arch/arm/domain.c               |  8 ++++
 xen/arch/arm/include/asm/new_vgic.h | 10 +++--
 xen/arch/arm/include/asm/p2m.h      |  5 +--
 xen/arch/arm/include/asm/vgic.h     |  2 +
 xen/arch/arm/p2m.c                  | 60 ++++++++++-------------------
 xen/arch/arm/traps.c                | 18 +++++++--
 xen/arch/arm/vgic-v2.c              | 25 ++++--------
 xen/arch/arm/vgic/vgic-init.c       |  4 +-
 xen/arch/arm/vgic/vgic-v2.c         | 41 +++++++-------------
 9 files changed, 76 insertions(+), 97 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:06:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:06:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486500.753857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLRA-00062d-TP; Mon, 30 Jan 2023 04:06:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486500.753857; Mon, 30 Jan 2023 04:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLRA-00062W-QK; Mon, 30 Jan 2023 04:06:32 +0000
Received: by outflank-mailman (input) for mailman id 486500;
 Mon, 30 Jan 2023 04:06:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLR9-0003sz-GQ
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:06:31 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 79a542c7-a053-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 05:06:30 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BEE611FB;
 Sun, 29 Jan 2023 20:07:11 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id C6C793F64C;
 Sun, 29 Jan 2023 20:06:26 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79a542c7-a053-11ed-9ec0-891035b88211
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Wei Chen <wei.chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v2 1/4] xen/arm: Reduce redundant clear root pages when teardown p2m
Date: Mon, 30 Jan 2023 12:06:11 +0800
Message-Id: <20230130040614.188296-2-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130040614.188296-1-Henry.Wang@arm.com>
References: <20230130040614.188296-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, the p2m for a domain is torn down via two paths:
(1) The normal path when a domain is destroyed.
(2) arch_domain_destroy() in the failure path of domain creation.

When tearing down the p2m from (1), clearing and cleaning the root
only needs to be done once rather than on every call of p2m_teardown().
If the p2m teardown is from (2), clearing and cleaning the root
is unnecessary because the domain is never scheduled.

Therefore, this patch introduces a helper `p2m_clear_root_pages()` to
do the clearing and cleaning of the root, and moves this logic outside
of p2m_teardown(). With this movement, the `page_list_empty(&p2m->pages)`
check can be dropped.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
v1 -> v2:
1. Move in-code comment for p2m_force_tlb_flush_sync() on top of
   p2m_clear_root_pages().
2. Add Reviewed-by tag from Michal and Acked-by tag from Julien.
---
 xen/arch/arm/domain.c          |  8 +++++++
 xen/arch/arm/include/asm/p2m.h |  1 +
 xen/arch/arm/p2m.c             | 40 +++++++++++++++++-----------------
 3 files changed, 29 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 99577adb6c..b8a4a60570 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -959,6 +959,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m_root,
     PROG_p2m,
     PROG_p2m_pool,
     PROG_done,
@@ -1021,6 +1022,13 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_root):
+        /*
+         * We are about to free the intermediate page-tables, so clear the
+         * root to prevent any walk to use them.
+         */
+        p2m_clear_root_pages(&d->arch.p2m);
+
     PROGRESS(p2m):
         ret = p2m_teardown(d, true);
         if ( ret )
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 91df922e1c..bf5183e53a 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -281,6 +281,7 @@ int p2m_set_entry(struct p2m_domain *p2m,
 
 bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn);
 
+void p2m_clear_root_pages(struct p2m_domain *p2m);
 void p2m_invalidate_root(struct p2m_domain *p2m);
 
 /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 948f199d84..f1ccd7efbd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1314,6 +1314,26 @@ static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn)
     p2m->need_flush = true;
 }
 
+/*
+ * The domain will not be scheduled anymore, so in theory we should
+ * not need to flush the TLBs. Do it for safety purpose.
+ * Note that all the devices have already been de-assigned. So we don't
+ * need to flush the IOMMU TLB here.
+ */
+void p2m_clear_root_pages(struct p2m_domain *p2m)
+{
+    unsigned int i;
+
+    p2m_write_lock(p2m);
+
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    p2m_force_tlb_flush_sync(p2m);
+
+    p2m_write_unlock(p2m);
+}
+
 /*
  * Invalidate all entries in the root page-tables. This is
  * useful to get fault on entry and do an action.
@@ -1698,30 +1718,10 @@ int p2m_teardown(struct domain *d, bool allow_preemption)
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
     struct page_info *pg;
-    unsigned int i;
     int rc = 0;
 
-    if ( page_list_empty(&p2m->pages) )
-        return 0;
-
     p2m_write_lock(p2m);
 
-    /*
-     * We are about to free the intermediate page-tables, so clear the
-     * root to prevent any walk to use them.
-     */
-    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
-        clear_and_clean_page(p2m->root + i);
-
-    /*
-     * The domain will not be scheduled anymore, so in theory we should
-     * not need to flush the TLBs. Do it for safety purpose.
-     *
-     * Note that all the devices have already been de-assigned. So we don't
-     * need to flush the IOMMU TLB here.
-     */
-    p2m_force_tlb_flush_sync(p2m);
-
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
         p2m_free_page(p2m->domain, pg);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:06:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:06:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486501.753867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLRF-0006Nd-8p; Mon, 30 Jan 2023 04:06:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486501.753867; Mon, 30 Jan 2023 04:06:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLRF-0006NW-5x; Mon, 30 Jan 2023 04:06:37 +0000
Received: by outflank-mailman (input) for mailman id 486501;
 Mon, 30 Jan 2023 04:06:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLRE-000485-5L
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:06:36 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7bcf2d6f-a053-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 05:06:34 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 859E41FB;
 Sun, 29 Jan 2023 20:07:15 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1CAB23F64C;
 Sun, 29 Jan 2023 20:06:30 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bcf2d6f-a053-11ed-b8d1-410ff93cb8f0
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Wei Chen <wei.chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 2/4] xen/arm: Rename vgic_cpu_base and vgic_dist_base for new vGIC
Date: Mon, 30 Jan 2023 12:06:12 +0800
Message-Id: <20230130040614.188296-3-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130040614.188296-1-Henry.Wang@arm.com>
References: <20230130040614.188296-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In a follow-up patch in this series, the GICv2 CPU interface
mapping will be deferred until the first access in the stage-2
data abort trap handling code. Since the data abort trap handling
code is common to the current and the new vGIC implementations,
it is necessary to unify the variable names in struct vgic_dist
between these two implementations.

Therefore, this commit renames vgic_cpu_base and vgic_dist_base
in the new vGIC to cbase and dbase, so that the same code can be
used in the data abort trap handling code for both vGIC
implementations.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v1 -> v2:
1. New patch.
---
 xen/arch/arm/include/asm/new_vgic.h |  8 ++++----
 xen/arch/arm/vgic/vgic-init.c       |  4 ++--
 xen/arch/arm/vgic/vgic-v2.c         | 17 ++++++++---------
 3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/include/asm/new_vgic.h b/xen/arch/arm/include/asm/new_vgic.h
index b7fa9ab11a..18ed3f754a 100644
--- a/xen/arch/arm/include/asm/new_vgic.h
+++ b/xen/arch/arm/include/asm/new_vgic.h
@@ -115,11 +115,11 @@ struct vgic_dist {
     unsigned int        nr_spis;
 
     /* base addresses in guest physical address space: */
-    paddr_t             vgic_dist_base;     /* distributor */
+    paddr_t             dbase;     /* distributor */
     union
     {
         /* either a GICv2 CPU interface */
-        paddr_t         vgic_cpu_base;
+        paddr_t         cbase;
         /* or a number of GICv3 redistributor regions */
         struct
         {
@@ -188,12 +188,12 @@ struct vgic_cpu {
 
 static inline paddr_t vgic_cpu_base(const struct vgic_dist *vgic)
 {
-    return vgic->vgic_cpu_base;
+    return vgic->cbase;
 }
 
 static inline paddr_t vgic_dist_base(const struct vgic_dist *vgic)
 {
-    return vgic->vgic_dist_base;
+    return vgic->dbase;
 }
 
 #endif /* __ASM_ARM_NEW_VGIC_H */
diff --git a/xen/arch/arm/vgic/vgic-init.c b/xen/arch/arm/vgic/vgic-init.c
index 62ae553699..ea739d081e 100644
--- a/xen/arch/arm/vgic/vgic-init.c
+++ b/xen/arch/arm/vgic/vgic-init.c
@@ -112,8 +112,8 @@ int domain_vgic_register(struct domain *d, int *mmio_count)
         BUG();
     }
 
-    d->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
-    d->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
+    d->arch.vgic.dbase = VGIC_ADDR_UNDEF;
+    d->arch.vgic.cbase = VGIC_ADDR_UNDEF;
     d->arch.vgic.vgic_redist_base = VGIC_ADDR_UNDEF;
 
     return 0;
diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
index 1a99d3a8b4..07c8f8a005 100644
--- a/xen/arch/arm/vgic/vgic-v2.c
+++ b/xen/arch/arm/vgic/vgic-v2.c
@@ -272,7 +272,7 @@ int vgic_v2_map_resources(struct domain *d)
      */
     if ( is_hardware_domain(d) )
     {
-        d->arch.vgic.vgic_dist_base = gic_v2_hw_data.dbase;
+        d->arch.vgic.dbase = gic_v2_hw_data.dbase;
         /*
          * For the hardware domain, we always map the whole HW CPU
          * interface region in order to match the device tree (the "reg"
@@ -280,13 +280,13 @@ int vgic_v2_map_resources(struct domain *d)
          * Note that we assume the size of the CPU interface is always
          * aligned to PAGE_SIZE.
          */
-        d->arch.vgic.vgic_cpu_base = gic_v2_hw_data.cbase;
+        d->arch.vgic.cbase = gic_v2_hw_data.cbase;
         csize = gic_v2_hw_data.csize;
         vbase = gic_v2_hw_data.vbase;
     }
     else if ( is_domain_direct_mapped(d) )
     {
-        d->arch.vgic.vgic_dist_base = gic_v2_hw_data.dbase;
+        d->arch.vgic.dbase = gic_v2_hw_data.dbase;
         /*
          * For all the direct-mapped domain other than the hardware domain,
          * we only map a 8kB CPU interface but we make sure it is at a location
@@ -296,13 +296,13 @@ int vgic_v2_map_resources(struct domain *d)
          * address when the GIC is aliased to get a 8kB contiguous
          * region.
          */
-        d->arch.vgic.vgic_cpu_base = gic_v2_hw_data.cbase;
+        d->arch.vgic.cbase = gic_v2_hw_data.cbase;
         csize = GUEST_GICC_SIZE;
         vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
     else
     {
-        d->arch.vgic.vgic_dist_base = GUEST_GICD_BASE;
+        d->arch.vgic.dbase = GUEST_GICD_BASE;
         /*
          * The CPU interface exposed to the guest is always 8kB. We may
          * need to add an offset to the virtual CPU interface base
@@ -310,14 +310,13 @@ int vgic_v2_map_resources(struct domain *d)
          * region.
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
-        d->arch.vgic.vgic_cpu_base = GUEST_GICC_BASE;
+        d->arch.vgic.cbase = GUEST_GICC_BASE;
         csize = GUEST_GICC_SIZE;
         vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
 
 
-    ret = vgic_register_dist_iodev(d, gaddr_to_gfn(dist->vgic_dist_base),
-                                   VGIC_V2);
+    ret = vgic_register_dist_iodev(d, gaddr_to_gfn(dist->dbase), VGIC_V2);
     if ( ret )
     {
         gdprintk(XENLOG_ERR, "Unable to register VGIC MMIO regions\n");
@@ -328,7 +327,7 @@ int vgic_v2_map_resources(struct domain *d)
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
      */
-    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.vgic_cpu_base),
+    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
                            csize / PAGE_SIZE, maddr_to_mfn(vbase));
     if ( ret )
     {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:07:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:07:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486502.753877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLRJ-0006iG-Jn; Mon, 30 Jan 2023 04:06:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486502.753877; Mon, 30 Jan 2023 04:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLRJ-0006i4-Er; Mon, 30 Jan 2023 04:06:41 +0000
Received: by outflank-mailman (input) for mailman id 486502;
 Mon, 30 Jan 2023 04:06:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLRI-000485-1y
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:06:40 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7e008748-a053-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 05:06:37 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1B67F1FB;
 Sun, 29 Jan 2023 20:07:19 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A65FB3F64C;
 Sun, 29 Jan 2023 20:06:34 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e008748-a053-11ed-b8d1-410ff93cb8f0
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Wei Chen <wei.chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 3/4] xen/arm: Defer GICv2 CPU interface mapping until the first access
Date: Mon, 30 Jan 2023 12:06:13 +0800
Message-Id: <20230130040614.188296-4-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130040614.188296-1-Henry.Wang@arm.com>
References: <20230130040614.188296-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, the mapping of the GICv2 CPU interface is created in
arch_domain_create(). This complicates populating and freeing the
domain's P2M pages pool. For example, 16 P2M pages are populated by
default in p2m_init() to cope with the P2M mapping of the 8KB GICv2
CPU interface area, and these 16 pages then complicate the P2M
destruction in the failure path of arch_domain_create().

As per the discussion in [1], and similarly to the on-demand MMIO
mappings for the hardware domain when using ACPI, this patch defers
the GICv2 CPU interface mapping until the first MMIO access. This is
achieved by moving the GICv2 CPU interface mapping code from
vgic_v2_domain_init()/vgic_v2_map_resources() to the stage-2 data
abort trap handling code. The CPU interface size and the virtual CPU
interface base address, previously local variables of
vgic_v2_domain_init()/vgic_v2_map_resources(), are now saved in
`struct vgic_dist`.

Take the opportunity to unify the data accesses in the new vGIC's
vgic_v2_map_resources() by using the existing pointer to
struct vgic_dist.

Since the hardware domain (Dom0) has a P2M pool of unlimited size,
gicv2_map_hwdom_extra_mappings() is left untouched by this patch.

[1] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v1 -> v2:
1. Correct style in in-code comment.
2. Avoid open-coding gfn_eq() and gaddr_to_gfn(d->arch.vgic.cbase).
3. Apply same changes for the new vGICv2 implementation, update the
   commit message accordingly.
4. Add in-code comment in old GICv2's vgic_v2_domain_init() and
   new GICv2's vgic_v2_map_resources() to mention the mapping of the
   virtual CPU interface is deferred until first access.
---
 xen/arch/arm/include/asm/new_vgic.h |  2 ++
 xen/arch/arm/include/asm/vgic.h     |  2 ++
 xen/arch/arm/traps.c                | 18 +++++++++++---
 xen/arch/arm/vgic-v2.c              | 25 ++++++-------------
 xen/arch/arm/vgic/vgic-v2.c         | 38 ++++++++++-------------------
 5 files changed, 39 insertions(+), 46 deletions(-)

diff --git a/xen/arch/arm/include/asm/new_vgic.h b/xen/arch/arm/include/asm/new_vgic.h
index 18ed3f754a..1e76213893 100644
--- a/xen/arch/arm/include/asm/new_vgic.h
+++ b/xen/arch/arm/include/asm/new_vgic.h
@@ -127,6 +127,8 @@ struct vgic_dist {
             paddr_t     vgic_redist_free_offset;
         };
     };
+    paddr_t             csize; /* CPU interface size */
+    paddr_t             vbase; /* virtual CPU interface base address */
 
     /* distributor enabled */
     bool                enabled;
diff --git a/xen/arch/arm/include/asm/vgic.h b/xen/arch/arm/include/asm/vgic.h
index 3d44868039..328fd46d1b 100644
--- a/xen/arch/arm/include/asm/vgic.h
+++ b/xen/arch/arm/include/asm/vgic.h
@@ -153,6 +153,8 @@ struct vgic_dist {
     /* Base address for guest GIC */
     paddr_t dbase; /* Distributor base address */
     paddr_t cbase; /* CPU interface base address */
+    paddr_t csize; /* CPU interface size */
+    paddr_t vbase; /* Virtual CPU interface base address */
 #ifdef CONFIG_GICV3
     /* GIC V3 addressing */
     /* List of contiguous occupied by the redistributors */
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 061c92acbd..9dd703f661 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1787,9 +1787,12 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
 }
 
 /*
- * When using ACPI, most of the MMIO regions will be mapped on-demand
- * in stage-2 page tables for the hardware domain because Xen is not
- * able to know from the EFI memory map the MMIO regions.
+ * Try to map the MMIO regions for some special cases:
+ * 1. When using ACPI, most of the MMIO regions will be mapped on-demand
+ *    in stage-2 page tables for the hardware domain because Xen is not
+ *    able to know from the EFI memory map the MMIO regions.
+ * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
+ *    on the first access of the MMIO region.
  */
 static bool try_map_mmio(gfn_t gfn)
 {
@@ -1798,6 +1801,15 @@ static bool try_map_mmio(gfn_t gfn)
     /* For the hardware domain, all MMIOs are mapped with GFN == MFN */
     mfn_t mfn = _mfn(gfn_x(gfn));
 
+    /*
+     * Map the GICv2 virtual CPU interface in the GIC CPU interface
+     * region of the guest on the first access of the MMIO region.
+     */
+    if ( d->arch.vgic.version == GIC_V2 &&
+         gfn_eq(gfn, gaddr_to_gfn(d->arch.vgic.cbase)) )
+        return !map_mmio_regions(d, gfn, d->arch.vgic.csize / PAGE_SIZE,
+                                 maddr_to_mfn(d->arch.vgic.vbase));
+
     /*
      * Device-Tree should already have everything mapped when building
      * the hardware domain.
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 0026cb4360..0b083c33e6 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -644,10 +644,6 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
 
 static int vgic_v2_domain_init(struct domain *d)
 {
-    int ret;
-    paddr_t csize;
-    paddr_t vbase;
-
     /*
      * The hardware domain and direct-mapped domain both get the hardware
      * address.
@@ -667,8 +663,8 @@ static int vgic_v2_domain_init(struct domain *d)
          * aligned to PAGE_SIZE.
          */
         d->arch.vgic.cbase = vgic_v2_hw.cbase;
-        csize = vgic_v2_hw.csize;
-        vbase = vgic_v2_hw.vbase;
+        d->arch.vgic.csize = vgic_v2_hw.csize;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase;
     }
     else if ( is_domain_direct_mapped(d) )
     {
@@ -683,8 +679,8 @@ static int vgic_v2_domain_init(struct domain *d)
          */
         d->arch.vgic.dbase = vgic_v2_hw.dbase;
         d->arch.vgic.cbase = vgic_v2_hw.cbase;
-        csize = GUEST_GICC_SIZE;
-        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+        d->arch.vgic.csize = GUEST_GICC_SIZE;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
     else
     {
@@ -697,18 +693,11 @@ static int vgic_v2_domain_init(struct domain *d)
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
         d->arch.vgic.cbase = GUEST_GICC_BASE;
-        csize = GUEST_GICC_SIZE;
-        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+        d->arch.vgic.csize = GUEST_GICC_SIZE;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
 
-    /*
-     * Map the gic virtual cpu interface in the gic cpu interface
-     * region of the guest.
-     */
-    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
-                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
-    if ( ret )
-        return ret;
+    /* Mapping of the virtual CPU interface is deferred until first access */
 
     register_mmio_handler(d, &vgic_v2_distr_mmio_handler, d->arch.vgic.dbase,
                           PAGE_SIZE, NULL);
diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
index 07c8f8a005..1308948eec 100644
--- a/xen/arch/arm/vgic/vgic-v2.c
+++ b/xen/arch/arm/vgic/vgic-v2.c
@@ -258,8 +258,6 @@ void vgic_v2_enable(struct vcpu *vcpu)
 int vgic_v2_map_resources(struct domain *d)
 {
     struct vgic_dist *dist = &d->arch.vgic;
-    paddr_t csize;
-    paddr_t vbase;
     int ret;
 
     /*
@@ -272,7 +270,7 @@ int vgic_v2_map_resources(struct domain *d)
      */
     if ( is_hardware_domain(d) )
     {
-        d->arch.vgic.dbase = gic_v2_hw_data.dbase;
+        dist->dbase = gic_v2_hw_data.dbase;
         /*
          * For the hardware domain, we always map the whole HW CPU
          * interface region in order to match the device tree (the "reg"
@@ -280,13 +278,13 @@ int vgic_v2_map_resources(struct domain *d)
          * Note that we assume the size of the CPU interface is always
          * aligned to PAGE_SIZE.
          */
-        d->arch.vgic.cbase = gic_v2_hw_data.cbase;
-        csize = gic_v2_hw_data.csize;
-        vbase = gic_v2_hw_data.vbase;
+        dist->cbase = gic_v2_hw_data.cbase;
+        dist->csize = gic_v2_hw_data.csize;
+        dist->vbase = gic_v2_hw_data.vbase;
     }
     else if ( is_domain_direct_mapped(d) )
     {
-        d->arch.vgic.dbase = gic_v2_hw_data.dbase;
+        dist->dbase = gic_v2_hw_data.dbase;
         /*
          * For all the direct-mapped domain other than the hardware domain,
          * we only map a 8kB CPU interface but we make sure it is at a location
@@ -296,13 +294,13 @@ int vgic_v2_map_resources(struct domain *d)
          * address when the GIC is aliased to get a 8kB contiguous
          * region.
          */
-        d->arch.vgic.cbase = gic_v2_hw_data.cbase;
-        csize = GUEST_GICC_SIZE;
-        vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
+        dist->cbase = gic_v2_hw_data.cbase;
+        dist->csize = GUEST_GICC_SIZE;
+        dist->vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
     else
     {
-        d->arch.vgic.dbase = GUEST_GICD_BASE;
+        dist->dbase = GUEST_GICD_BASE;
         /*
          * The CPU interface exposed to the guest is always 8kB. We may
          * need to add an offset to the virtual CPU interface base
@@ -310,9 +308,9 @@ int vgic_v2_map_resources(struct domain *d)
          * region.
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
-        d->arch.vgic.cbase = GUEST_GICC_BASE;
-        csize = GUEST_GICC_SIZE;
-        vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
+        dist->cbase = GUEST_GICC_BASE;
+        dist->csize = GUEST_GICC_SIZE;
+        dist->vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
 
 
@@ -323,17 +321,7 @@ int vgic_v2_map_resources(struct domain *d)
         return ret;
     }
 
-    /*
-     * Map the gic virtual cpu interface in the gic cpu interface
-     * region of the guest.
-     */
-    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
-                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
-    if ( ret )
-    {
-        gdprintk(XENLOG_ERR, "Unable to remap VGIC CPU to VCPU\n");
-        return ret;
-    }
+    /* Mapping of the virtual CPU interface is deferred until first access */
 
     dist->ready = true;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:15:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:15:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486521.753887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLZa-0000yP-Gk; Mon, 30 Jan 2023 04:15:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486521.753887; Mon, 30 Jan 2023 04:15:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLZa-0000yG-C6; Mon, 30 Jan 2023 04:15:14 +0000
Received: by outflank-mailman (input) for mailman id 486521;
 Mon, 30 Jan 2023 04:15:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JFid=53=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMLRL-000485-I6
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:06:43 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 803d39b6-a053-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 05:06:41 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DDC1F1FB;
 Sun, 29 Jan 2023 20:07:22 -0800 (PST)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.24])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 331583F64C;
 Sun, 29 Jan 2023 20:06:37 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 803d39b6-a053-11ed-b8d1-410ff93cb8f0
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH v2 4/4] xen/arm: Clean-up in p2m_init() and p2m_final_teardown()
Date: Mon, 30 Jan 2023 12:06:14 +0800
Message-Id: <20230130040614.188296-5-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130040614.188296-1-Henry.Wang@arm.com>
References: <20230130040614.188296-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With the change in the previous patch, the initial 16 pages in the P2M
pool are no longer necessary. Drop them to simplify the code.

Also, the call to p2m_teardown() from p2m_final_teardown() is no
longer necessary, now that the GICv2 CPU interface mapping has been
moved out of arch_domain_create(). Drop the call and the in-code
comment mentioning it.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
I am not entirely sure if I should also drop the "TODO" on top of
p2m_set_entry(): although we are sure that no P2M pages are populated
at the domain_create() stage now, we cannot be sure that no one will
add more in the future... Any comments?
v1 -> v2:
1. Add Reviewed-by tag from Michal.
---
 xen/arch/arm/include/asm/p2m.h |  4 ----
 xen/arch/arm/p2m.c             | 20 +-------------------
 2 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index bf5183e53a..cf06d3cc21 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -200,10 +200,6 @@ int p2m_init(struct domain *d);
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
- *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
- *  free the P2M when failures happen in the domain creation with P2M pages
- *  already in use. In this case p2m_teardown() is called non-preemptively and
- *  p2m_teardown() will always return 0.
  */
 int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f1ccd7efbd..46406a9eb1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1750,13 +1750,9 @@ void p2m_final_teardown(struct domain *d)
     /*
      * No need to call relinquish_p2m_mapping() here because
      * p2m_final_teardown() is called either after domain_relinquish_resources()
-     * where relinquish_p2m_mapping() has been called, or from failure path of
-     * domain_create()/arch_domain_create() where mappings that require
-     * p2m_put_l3_page() should never be created. For the latter case, also see
-     * comment on top of the p2m_set_entry() for more info.
+     * where relinquish_p2m_mapping() has been called.
      */
 
-    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
 
     while ( p2m_teardown_allocation(d) == -ERESTART )
@@ -1827,20 +1823,6 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
-    /*
-     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
-     * when the domain is created. Considering the worst case for page
-     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
-     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
-     * the allocated 16 pages here would not be lost, hence populate these
-     * pages unconditionally.
-     */
-    spin_lock(&d->arch.paging.lock);
-    rc = p2m_set_allocation(d, 16, NULL);
-    spin_unlock(&d->arch.paging.lock);
-    if ( rc )
-        return rc;
-
     return 0;
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 04:35:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 04:35:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486530.753897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLtU-0003oq-47; Mon, 30 Jan 2023 04:35:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486530.753897; Mon, 30 Jan 2023 04:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMLtU-0003oj-1I; Mon, 30 Jan 2023 04:35:48 +0000
Received: by outflank-mailman (input) for mailman id 486530;
 Mon, 30 Jan 2023 04:35:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kEu1=53=gmail.com=christopher.w.clark@srs-se1.protection.inumbo.net>)
 id 1pMLtT-0003od-5q
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 04:35:47 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8f72b79c-a057-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 05:35:44 +0100 (CET)
Received: by mail-ej1-x62b.google.com with SMTP id ud5so28335106ejc.4
 for <xen-devel@lists.xenproject.org>; Sun, 29 Jan 2023 20:35:44 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f72b79c-a057-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=tcwDFwSyVUur/SOWJhNh6+iBub0EwUFatnyIjButEEM=;
        b=ZUQ8cXXQpDrpHL8h5de9AieDoYs0Wk23NjVgU7iyHC1Tt7Sc0MQxdgyAdeoVmNGC8z
         F9hv4sQME6QYvAQcREVONApeAIQWfABuMUpGBg5wBO/MkD+wxpbOklyzdPrdkQG1MX9c
         jcTVMSZ7ivouIShrp8CGq/QqvAFiQ9KrG03+v5B8IwekjapBQZ2w1plDeWWjtcU84uZQ
         OBZqNhBSTq3nank/9SXdWxu74AOJQXSB0nbom8ggALHnTOzouhNsVRJCajv87CUYGhIh
         uZGCQkEiNNzWR+hL6OYjm6J2Z35xg+ho+4UYYr8AauFYfKvIzu2nClXmXckVYnRc8VLS
         AaLg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=tcwDFwSyVUur/SOWJhNh6+iBub0EwUFatnyIjButEEM=;
        b=YU6lfzm38hYhKt1mQX9Vi/5ganJzGqee/kRrs1YJqPyRSl7pUD6aXDCPGq4xwDqKUF
         CNTrFZFa1Ota+qtlYjypgBvmbjtcaAiC+fb28gNcRI0Uqe6tjV+ntFbdIaYo4n8DiH7v
         p5RaSdN3DZ1ILzDz7e7IFAz3zqyN0jB8cU5NN/0F0/1vd64oyJkmC0yhedYsveIfMhGK
         5SC7Wh2IG2h4rCU8kG+UKnqWuVvuhEzuga4zEhCSClox2/Wx5hrHcAiwLD32mMqxNYPq
         9yLxgZNmQ99RR3Qlb1m3sqE1Xy8O6RdSg81+ZvGM05bjv0zZTNg6muvfi6Fxl7eCn4ZP
         +7Kg==
X-Gm-Message-State: AO0yUKVstH/ohK4i/iVwfn6UKEQWvSKkY0BVkKsPATDzMG1geBScRY5U
	irYhl/BQ4slAtWBNmGH3Qef+vPj2/FHV+iAePNI=
X-Google-Smtp-Source: AK7set8aYsCUQ5546trUxrG0rN/8vlXwXeaXb9JRpPOYqsnpQPKZ7STy8bOYBcFowGxXjCEVRZ6+n3OATKC9cqn5RQQ=
X-Received: by 2002:a17:906:5856:b0:883:b70b:c049 with SMTP id
 h22-20020a170906585600b00883b70bc049mr1351753ejs.8.1675053344305; Sun, 29 Jan
 2023 20:35:44 -0800 (PST)
MIME-Version: 1.0
References: <f9cd7b84-6f51-d797-cd2a-b9c9bc62b0f6@suse.com> <d03dc8b3-4c1f-2db0-4d97-944972dc6e06@suse.com>
In-Reply-To: <d03dc8b3-4c1f-2db0-4d97-944972dc6e06@suse.com>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Sun, 29 Jan 2023 20:35:32 -0800
Message-ID: <CACMJ4GbUjLczb9ru_QUERGaNCModspnqgGwAgCqUN+oZ_90NDA@mail.gmail.com>
Subject: Re: Ping: [PATCH] Argo: don't obtain excess page references
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="00000000000091357e05f373bfc1"

--00000000000091357e05f373bfc1
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 21, 2022 at 4:41 AM Jan Beulich <jbeulich@suse.com> wrote:

> On 11.10.2022 11:28, Jan Beulich wrote:
> > find_ring_mfn() already holds a page reference when trying to obtain a
> > writable type reference. We shouldn't make assumptions on the general
> > reference count limit being effectively "infinity". Obtain merely a type
> > ref, re-using the general ref by only dropping the previously acquired
> > one in the case of an error.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Ping?
>

Hi Jan,

Sorry it has taken me so long to review this patch, and thank you for
posting it. The points raised are helpful.

Wrt the patch - I can't ack it because:
the general ref that is already held is from the page owner, and it may
actually be foreign; so the second ref acquisition currently ensures
that it is a match for the owner of the ring. That needs addressing.

I am supportive of the points raised:
- review + limit ref counts taken
    - better to not need two general page refs
- a type ref rather than general may be sufficient to hold for the ring
lifetime?
- paging_mark_dirty at writes
- p2m log dirty would be better to be allowed than EAGAIN
- allowing mapping of foreign pages may have uses though likely also
challenging

I should let you know that my time available is extremely limited at the
moment, sorry.

Christopher




>
> > ---
> > I further question the log-dirty check there: The present P2M type of a
> > page doesn't really matter for writing to the page (plus it's stale by
> > the time it is looked at). Instead I think every write to such a page
> > needs to be accompanied by a call to paging_mark_dirty().
> >
> > --- a/xen/common/argo.c
> > +++ b/xen/common/argo.c
> > @@ -1429,10 +1429,11 @@ find_ring_mfn(struct domain *d, gfn_t gf
> >          ret = -EAGAIN;
> >  #endif
> >      else if ( (p2mt != p2m_ram_rw) ||
> > -              !get_page_and_type(page, d, PGT_writable_page) )
> > +              !get_page_type(page, PGT_writable_page) )
> >          ret = -EINVAL;
> >
> > -    put_page(page);
> > +    if ( unlikely(ret) )
> > +        put_page(page);
> >
> >      return ret;
> >  }
> >
>
>

--00000000000091357e05f373bfc1--


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 05:37:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 05:37:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486551.753906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMMqx-0002ZH-Ng; Mon, 30 Jan 2023 05:37:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486551.753906; Mon, 30 Jan 2023 05:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMMqx-0002ZA-Kz; Mon, 30 Jan 2023 05:37:15 +0000
Received: by outflank-mailman (input) for mailman id 486551;
 Mon, 30 Jan 2023 05:37:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMMqw-0002Z0-6H; Mon, 30 Jan 2023 05:37:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMMqv-0006oz-Rv; Mon, 30 Jan 2023 05:37:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMMqv-00089T-Cm; Mon, 30 Jan 2023 05:37:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMMqv-0007zn-AT; Mon, 30 Jan 2023 05:37:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=20+C+6n8OhmFPo4vcgDneMhmqPpHAAiryszUknUDkec=; b=guGJk2/oUWi9E7yHohiiPeWosx
	SjRD85S1ELHZcn9CxSva1ObN3dwnE74CMd9JsK6tKfWJfdLDIEIa3BCixm7ySKDyQybcgGflSrxjK
	XC3sXOYebDeMkYg/ID934oCaSP1dhDGaiKo/tHnHtdtTH+m3z0qx/97AAqN8NJiqnbio=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176273-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176273: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-i386-prev:<job status>:broken:regression
    xen-unstable:build-i386-pvops:<job status>:broken:regression
    xen-unstable:build-i386-prev:host-install(4):broken:regression
    xen-unstable:build-i386-pvops:host-install(4):broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:build-armhf:syslog-server:running:regression
    xen-unstable:test-amd64-i386-examine-bios:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine-uefi:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf:capture-logs:broken:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 05:37:13 +0000

flight 176273 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176273/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-i386-prev                 <job status>                 broken
 build-i386-pvops                <job status>                 broken
 build-i386-prev               4 host-install(4)        broken REGR. vs. 175994
 build-i386-pvops              4 host-install(4)        broken REGR. vs. 175994
 build-armhf                   4 host-install(4)        broken REGR. vs. 175994
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine-bios  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine-uefi  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175994
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z    9 days
Failing since        176003  2023-01-20 17:40:27 Z    9 days   20 attempts
Testing same since   176222  2023-01-26 22:13:29 Z    3 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  George Dunlap <george.dunlap@cloud.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              broken  
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-job build-i386-prev broken
broken-job build-i386-pvops broken
broken-step build-i386-prev host-install(4)
broken-step build-i386-pvops host-install(4)
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

(No revision log; it would be 1225 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 05:46:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 05:46:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486558.753917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMMzT-00048N-Jg; Mon, 30 Jan 2023 05:46:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486558.753917; Mon, 30 Jan 2023 05:46:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMMzT-00048G-GH; Mon, 30 Jan 2023 05:46:03 +0000
Received: by outflank-mailman (input) for mailman id 486558;
 Mon, 30 Jan 2023 05:46:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+jcy=53=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pMMzS-00048A-Da
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 05:46:02 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe02::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5f72acc3-a061-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 06:46:00 +0100 (CET)
Received: from DB6PR0601CA0007.eurprd06.prod.outlook.com (2603:10a6:4:7b::17)
 by VI1PR08MB10198.eurprd08.prod.outlook.com (2603:10a6:800:1be::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 05:45:54 +0000
Received: from DBAEUR03FT040.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:7b:cafe::45) by DB6PR0601CA0007.outlook.office365.com
 (2603:10a6:4:7b::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33 via Frontend
 Transport; Mon, 30 Jan 2023 05:45:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT040.mail.protection.outlook.com (100.127.142.157) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.36 via Frontend Transport; Mon, 30 Jan 2023 05:45:53 +0000
Received: ("Tessian outbound 6e565e48ed4a:v132");
 Mon, 30 Jan 2023 05:45:53 +0000
Received: from 42ada8b8b51f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E7163675-6408-4827-929E-3E9622C37DE2.1; 
 Mon, 30 Jan 2023 05:45:42 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 42ada8b8b51f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 30 Jan 2023 05:45:42 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by PA4PR08MB7411.eurprd08.prod.outlook.com (2603:10a6:102:2a3::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 05:45:34 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%3]) with mapi id 15.20.6043.033; Mon, 30 Jan 2023
 05:45:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f72acc3-a061-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9bIa68+B0WM57uemPmnO3JU2hBnHGsYWIKr+8pnWxsQ=;
 b=oV9mh/fteYHqtTsOCDhXqjjG/5fvrNOR4tTXo25BlvKAAXoVFrK+dpG/khdb4pKtLb9SVOnoXJaCa7wuXlYu0k4OnvoDedPzHHzjITai2uhYiZ7uk5Y1PtGSdldoZbEveAnAl8NKf0fjIjrIcccdPq7YkeZfSP5Dm7e/+cwI3N4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q5N4YcXn+AibQhaafPbx2p6ugxNESTPAt9fIseIwMmXoIf9MS8vBRj56HPYbLK7vzXCNuk99LLV8Q7EN91c7L45aoFLZzfvxTq3VcN2farVv7G3ypwq45DpOsfrtlJSCnYV2Ou3wsIHRosaQL68q0gF2qVjXNmRrCdew2sg1yv9iwmaPVQdVZeO838qq8YONH9GiHsTYQSzHX8lcxCpCK5Qw9wzudoZZSSPMgLRjgAgj/qVEHzYJBu1CpDVcMPUAbzCuDGQqPrc9282bemdbba1ItXug4pHfYrdxo4b5fyK/k8wdTeqkyg+o38BjS0xr4b3mtT0+kN6xDFxDsQ4pBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9bIa68+B0WM57uemPmnO3JU2hBnHGsYWIKr+8pnWxsQ=;
 b=lIK7y1+NPZWiPpi8JEh63eYsFKJvwSvnzSdsEzZ5syIqmsOrd0LxFuHVBYrnFEgqAGjLnjbwnURNMzkUop468WH/jY9aWzhxe9QltSAPIJNJ0o2OqU40Q7g/c8oJ3XsexZEzEqnEpc6UnzGPUwogubyGWs2aOM2DkIuo83sDTpBtR+5lEMBRNlyVTe/c6EL5qRpvRMKqj7wk9lcU5hwzAB8aQAQNkMy/Hqv3zCkVcMMSu91KjKM+VBdCiEwwH4ffezBwRSGfJ4+oMGqUYDwe0S0dtRA+Z3Iy3jkBNk3y87een07gRYUQnzO/pmxVrNhE6E3Rj/3jfL8zHmV9waPgUQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9bIa68+B0WM57uemPmnO3JU2hBnHGsYWIKr+8pnWxsQ=;
 b=oV9mh/fteYHqtTsOCDhXqjjG/5fvrNOR4tTXo25BlvKAAXoVFrK+dpG/khdb4pKtLb9SVOnoXJaCa7wuXlYu0k4OnvoDedPzHHzjITai2uhYiZ7uk5Y1PtGSdldoZbEveAnAl8NKf0fjIjrIcccdPq7YkeZfSP5Dm7e/+cwI3N4=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Thread-Topic: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Thread-Index: AQHZJxAhTWb1zeTHm029bSgcZo/2Yq6l4H6AgA7nVJCAAFMfgIAAHOxg
Date: Mon, 30 Jan 2023 05:45:34 +0000
Message-ID:
 <AM0PR08MB45308D5CD69EBB5FE85A4B88F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
 <c30b4458-b5f6-f996-0c3c-220b18bfb356@xen.org>
 <AM0PR08MB453083B74DB1D00BDF469331F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <7931e70f-3754-363c-28d8-5fde3198d70f@xen.org>
In-Reply-To: <7931e70f-3754-363c-28d8-5fde3198d70f@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: E2F7F722C4B67F4D9D50390D6EC92F9E.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|PA4PR08MB7411:EE_|DBAEUR03FT040:EE_|VI1PR08MB10198:EE_
X-MS-Office365-Filtering-Correlation-Id: 15dc7507-5f6c-478f-5635-08db0285405e
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 17CXw4t88mK/J3UewaHeIJEdESwtDdw9v5gpk0/c3cIjv9qu6qvqGnJvLe6PFBYRTth29c8ruV2lEZy4sxFirQVvW+Byt2R4MkhaUuFSWD4GD9BvRtapqK8WDcm6Rh/wz8A9D4ZhSN7GkgcQ8XH5C04YaoEKuuxmqJ5c2+C6zRa3+JZekfc9zuXEcpLQ7vhSKY8JXfzrMH346g4iq8ZcOW/94sGh+3N89O3OFY52lrxJKzeOia7coYUaCvR8+Ps3OSy3WR4ul8uLD4ktms0vlmfS9h98YvX53HnzqSfQ5Wk6tUjA+5Mw7W3rU2TZA0ppkgLoYzad8REYpeFV0xywPHOsTB38J5m4QfD1O+Qz5ZFdEbuvJEPuabV70xmt7B+zeinJvrkABwchesU8LPyCflbP9iufvubfXrpdAyXb8o0ERxadZUkxmN/6Gc7Zz6XhNuFjVmqrvvZMAUe24Win8F5iEtiT5BDkNsqmj5NUrdYyJMgiaVRwrI50KEn7tyh8hmCIJNJz0Qct4eTK44bAp2Ed3QiP32ElRX2gSMRlHMbLlFhLywtueVl59S9kZjnsTyxuNPXBgWxFmoaImd/XD5yscsew9DTAUkWcSkho65P5RyVKSm4SxGXQy3ElHPDxq/dhtX2q12HJSjzAUV5Dl39dS/bFzm+xSj5QiyLUDK77LMbloSWD5I6ML1OE/t3NO0xYwl+wPLpQt7Z6+Sp4r3/gjeey8cAtTyheF3gvGsc0ZhBB9DIm+iiypdJyq7VrsQJyjKGKsKrj5sd/oz3c5Q==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB4530.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(396003)(376002)(39860400002)(346002)(136003)(366004)(451199018)(83380400001)(122000001)(38070700005)(86362001)(33656002)(55016003)(38100700002)(110136005)(5660300002)(54906003)(2906002)(4326008)(76116006)(41300700001)(66946007)(64756008)(66446008)(66476007)(66556008)(52536014)(8936002)(316002)(8676002)(26005)(186003)(53546011)(6506007)(9686003)(478600001)(71200400001)(7696005)(966005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB7411
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	115056e8-697c-4f64-599d-08db028534f6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	clmTUN3Yv+xKKhmavVvK4Rc0/hjlamxWFT9n/3G66hhuVkmhYGaCqdvH4XqA7gyq816i/5+/z4ZSJfmn53PHIQgMpe8PJ3jzMusKHTTRqqW9UgBkLFptHrxNWAFJWWLK+F2X4ZlXZWtnparVD+euu5yhodAKZS6jn0ACAWBQ9zDelqrkDAkOHX5vYQWGPtzvkjDhPWfAHFSFyrNQjQ3hYDtBDIXK1HmvT6zLPdD0ZaJ88WEUTtEXKserN6qqCMg1m1Ew89cQ0lw4VDqILd/pRfJ8r2D8HYVnZ42ig8vaWFAhddGvmOy487bmK45Liytpn8+kNGg/5AAnAWZx+/bSSWB0HQ6l2mSGEQdL9/6T+zPj1Lse6Q1Dk3ZnOsKZWC4G2y+SVi8DZKrCBYIx8g8sANq7tcaY/IYKArVND+bziju0JI5I8K5k1AKg5cNhVUxcfbZMukApSfk5hbcts8+T9bZ0cqHUfeA7YyUQ73CYarHIUPWjtXgxyUMsG6UG6V/VHpyxOYimH669rNNbQ6ziSydcn3aLXyyEWK213Q8Z8oNkXWIEhX4pC7RUDridUVm1R0KBvDiSfwVmPKSZmg4V3W3LA9UxGvvJvWnIZX07GvkAQRqq4AdY/hfXCdYWStlvy24mufs4sILfjDbk1UGBA/jOCRTKpRb53cOa7KN8vclhEynKgg3U/hmaKha8DSyDIPnW5bsAigL+ak6iWsm+tFq8uYeIZfgfYLSKLThVloOaqdEt3sRTgTAwzvZhC9hQ
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(376002)(136003)(39850400004)(396003)(346002)(451199018)(46966006)(36840700001)(2906002)(8936002)(5660300002)(52536014)(41300700001)(86362001)(83380400001)(55016003)(53546011)(6506007)(26005)(70206006)(107886003)(4326008)(110136005)(70586007)(33656002)(8676002)(966005)(186003)(9686003)(478600001)(7696005)(47076005)(336012)(82310400005)(40480700001)(82740400003)(316002)(54906003)(356005)(36860700001)(81166007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 05:45:53.6731
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 15dc7507-5f6c-478f-5635-08db0285405e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB10198

PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBKdWxpZW4gR3JhbGwgPGp1bGll
bkB4ZW4ub3JnPg0KPiBTZW50OiBTdW5kYXksIEphbnVhcnkgMjksIDIwMjMgMzozNyBQTQ0KPiBU
bzogUGVubnkgWmhlbmcgPFBlbm55LlpoZW5nQGFybS5jb20+OyB4ZW4tZGV2ZWxAbGlzdHMueGVu
cHJvamVjdC5vcmcNCj4gQ2M6IFdlaSBDaGVuIDxXZWkuQ2hlbkBhcm0uY29tPjsgU3RlZmFubyBT
dGFiZWxsaW5pDQo+IDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPjsgQmVydHJhbmQgTWFycXVpcyA8
QmVydHJhbmQuTWFycXVpc0Bhcm0uY29tPjsNCj4gVm9sb2R5bXlyIEJhYmNodWsgPFZvbG9keW15
cl9CYWJjaHVrQGVwYW0uY29tPg0KPiBTdWJqZWN0OiBSZTogW1BBVENIIHYyIDExLzQwXSB4ZW4v
bXB1OiBidWlsZCB1cCBzdGFydC1vZi1kYXkgWGVuIE1QVQ0KPiBtZW1vcnkgcmVnaW9uIG1hcA0K
PiANCj4gSGkgUGVubnksDQo+IA0KDQpIaSBKdWxpZW4sDQoNCj4gT24gMjkvMDEvMjAyMyAwNToz
OSwgUGVubnkgWmhlbmcgd3JvdGU6DQo+ID4+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+
ID4+IEZyb206IEp1bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+ID4+IFNlbnQ6IFRodXJz
ZGF5LCBKYW51YXJ5IDE5LCAyMDIzIDExOjA0IFBNDQo+ID4+IFRvOiBQZW5ueSBaaGVuZyA8UGVu
bnkuWmhlbmdAYXJtLmNvbT47IHhlbi0NCj4gZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmcNCj4g
Pj4gQ2M6IFdlaSBDaGVuIDxXZWkuQ2hlbkBhcm0uY29tPjsgU3RlZmFubyBTdGFiZWxsaW5pDQo+
ID4+IDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPjsgQmVydHJhbmQgTWFycXVpcw0KPiA+PiA8QmVy
dHJhbmQuTWFycXVpc0Bhcm0uY29tPjsgVm9sb2R5bXlyIEJhYmNodWsNCj4gPj4gPFZvbG9keW15
cl9CYWJjaHVrQGVwYW0uY29tPg0KPiA+PiBTdWJqZWN0OiBSZTogW1BBVENIIHYyIDExLzQwXSB4
ZW4vbXB1OiBidWlsZCB1cCBzdGFydC1vZi1kYXkgWGVuIE1QVQ0KPiA+PiBtZW1vcnkgcmVnaW9u
IG1hcA0KPiA+Pg0KPiA+PiBIaSBQZW5ueSwNCj4gPj4NCj4gPg0KPiA+IEhpIEp1bGllbg0KPiA+
DQo+ID4gU29ycnkgZm9yIHRoZSBsYXRlIHJlc3BvbnNlLCBqdXN0IGNvbWUgYmFjayBmcm9tIENo
aW5lc2UgU3ByaW5nDQo+ID4gRmVzdGl2YWwgSG9saWRheX4NCj4gPg0KPiA+PiBPbiAxMy8wMS8y
MDIzIDA1OjI4LCBQZW5ueSBaaGVuZyB3cm90ZToNCj4gPj4+IEZyb206IFBlbm55IFpoZW5nIDxw
ZW5ueS56aGVuZ0Bhcm0uY29tPg0KPiA+Pj4NCj4gPj4+IFRoZSBzdGFydC1vZi1kYXkgWGVuIE1Q
VSBtZW1vcnkgcmVnaW9uIGxheW91dCBzaGFsbCBiZSBsaWtlIGFzIGZvbGxvd3M6DQo+ID4+Pg0K
PiA+Pj4geGVuX21wdW1hcFswXSA6IFhlbiB0ZXh0DQo+ID4+PiB4ZW5fbXB1bWFwWzFdIDogWGVu
IHJlYWQtb25seSBkYXRhDQo+ID4+PiB4ZW5fbXB1bWFwWzJdIDogWGVuIHJlYWQtb25seSBhZnRl
ciBpbml0IGRhdGEgeGVuX21wdW1hcFszXSA6IFhlbg0KPiA+Pj4gcmVhZC13cml0ZSBkYXRhIHhl
bl9tcHVtYXBbNF0gOiBYZW4gQlNTIC4uLi4uLg0KPiA+Pj4geGVuX21wdW1hcFttYXhfeGVuX21w
dW1hcCAtIDJdOiBYZW4gaW5pdCBkYXRhDQo+ID4+PiB4ZW5fbXB1bWFwW21heF94ZW5fbXB1bWFw
IC0gMV06IFhlbiBpbml0IHRleHQNCj4gPj4NCj4gPj4gQ2FuIHlvdSBleHBsYWluIHdoeSB0aGUg
aW5pdCByZWdpb24gc2hvdWxkIGJlIGF0IHRoZSBlbmQgb2YgdGhlIE1QVT8NCj4gPj4NCj4gPg0K
PiA+IEFzIGRpc2N1c3NlZCBpbiB0aGUgdjEgU2VyaWUsIEknZCBsaWtlIHRvIHB1dCBhbGwgdHJh
bnNpZW50IE1QVQ0KPiA+IHJlZ2lvbnMsIGxpa2UgYm9vdC1vbmx5IHJlZ2lvbiwgYXQgdGhlIGVu
ZCBvZiB0aGUgTVBVLg0KPiANCj4gSSB2YWd1ZWx5IHJlY2FsbCB0aGUgZGlzY3Vzc2lvbiBidXQg
Y2FuJ3Qgc2VlbSB0byBmaW5kIHRoZSB0aHJlYWQuIERvIHlvdSBoYXZlDQo+IGEgbGluaz8gKEEg
c3VtbWFyeSBpbiB0aGUgcGF0Y2ggd291bGQgaGF2ZSBiZWVuIG5pY2UpDQo+IA0KPiA+IFNpbmNl
IHRoZXkgd2lsbCBnZXQgcmVtb3ZlZCBhdCB0aGUgZW5kIG9mIHRoZSBib290LCBJIGFtIHRyeWlu
ZyBub3QgdG8NCj4gPiBsZWF2ZSBob2xlcyBpbiB0aGUgTVBVIG1hcCBieSBwdXR0aW5nIGFsbCB0
cmFuc2llbnQgTVBVIHJlZ2lvbnMgYXQgcmVhci4NCj4gDQo+IEkgdW5kZXJzdGFuZCB0aGUgcHJp
bmNpcGxlLCBidXQgSSBhbSBub3QgY29udmluY2VkIHRoaXMgaXMgd29ydGggaXQgYmVjYXVzZSBv
Zg0KPiB0aGUgaW5jcmVhc2UgY29tcGxleGl0eSBpbiB0aGUgYXNzZW1ibHkgY29kZS4NCj4gDQo+
IFdoYXQgd291bGQgYmUgdGhlIHByb2JsZW0gd2l0aCByZXNodWZmbGluZyBwYXJ0aWFsbHkgdGhl
IE1QVSBvbmNlIHdlDQo+IGJvb3RlZD8NCg0KIFRoZXJlIGFyZSB0aHJlZSB0eXBlcyBvZiBNUFUg
cmVnaW9ucyBkdXJpbmcgYm9vdC10aW1lOg0KMS4gRml4ZWQgTVBVIHJlZ2lvbg0KUmVnaW9ucyBs
aWtlIFhlbiB0ZXh0IHNlY3Rpb24sIFhlbiBoZWFwIHNlY3Rpb24sIGV0Yy4NCjIuIEJvb3Qtb25s
eSBNUFUgcmVnaW9uDQpSZWdpb25zIGxpa2UgWGVuIGluaXQgc2VjdGlvbnMsIGV0Yy4gSXQgd2ls
bCBiZSByZW1vdmVkIGF0IHRoZSBlbmQgb2YgYm9vdGluZy4NCjMuICAgUmVnaW9ucyBuZWVkIHN3
aXRjaGluZyBpbi9vdXQgZHVyaW5nIHZjcHUgY29udGV4dCBzd2l0Y2gNClJlZ2lvbiBsaWtlIHN5
c3RlbSBkZXZpY2UgbWVtb3J5IG1hcC4gDQpGb3IgZXhhbXBsZSwgZm9yIEZWUF9CYXNlUl9BRU12
OFIsIHdlIGhhdmUgWzB4ODAwMDAwMDAsIDB4ZmZmZmYwMDApIGFzDQp0aGUgd2hvbGUgc3lzdGVt
IGRldmljZSBtZW1vcnkgbWFwIGZvciBYZW4oaWRsZSB2Y3B1KSBpbiBFTDIsICB3aGVuDQpjb250
ZXh0IHN3aXRjaGluZyB0byBndWVzdCB2Y3B1LCBpdCBzaGFsbCBiZSByZXBsYWNlZCB3aXRoIGd1
ZXN0LXNwZWNpZmljDQpkZXZpY2UgbWFwcGluZywgbGlrZSB2Z2ljLCB2cGwwMTEsIHBhc3N0aHJv
dWdoIGRldmljZSwgZXRjLg0KDQpXZSBkb24ndCBoYXZlIHR3byBtYXBwaW5ncyBmb3IgZGlmZmVy
ZW50IHN0YWdlIHRyYW5zbGF0aW9ucyBpbiBNUFUsIGxpa2Ugd2UgaGFkIGluIE1NVS4NClhlbiBz
dGFnZSAxIEVMMiBtYXBwaW5nIGFuZCBzdGFnZSAyIG1hcHBpbmcgYXJlIGJvdGggc2hhcmluZyBv
bmUgTVBVIG1lbW9yeSBtYXBwaW5nKHhlbl9tcHVtYXApDQpTbyB0byBzYXZlIHRoZSB0cm91Ymxl
IG9mIGh1bnRpbmcgZG93biBlYWNoIHN3aXRjaGluZyByZWdpb25zIGluIHRpbWUtc2Vuc2l0aXZl
IGNvbnRleHQgc3dpdGNoLCB3ZQ0KbXVzdCByZS1vcmRlciB4ZW5fbXB1bWFwIHRvIGtlZXAgZml4
ZWQgcmVnaW9ucyBpbiB0aGUgZnJvbnQsIGFuZCBzd2l0Y2hpbmcgb25lcyBpbiB0aGUgaGVlbHMg
b2YgdGhlbS4NCg0KSW4gUGF0Y2ggU2VyaWUgdjEsIEkgd2FzIGFkZGluZyBNUFUgcmVnaW9ucyBp
biBzZXF1ZW5jZSwgIGFuZCBJIGludHJvZHVjZWQgYSBzZXQgb2YgYml0bWFwcyB0byByZWNvcmQg
dGhlIGxvY2F0aW9uIG9mDQpzYW1lIHR5cGUgcmVnaW9ucy4gQXQgdGhlIGVuZCBvZiBib290aW5n
LCBJIG5lZWQgdG8gKmRpc2FibGUqIE1QVSB0byBkbyB0aGUgcmVzaHVmZmxpbmcsIGFzIEkgY2Fu
J3QNCm1vdmUgcmVnaW9ucyBsaWtlIHhlbiBoZWFwIHdoaWxlIE1QVSBvbi4NCg0KQW5kIHdlIGRp
c2N1c3NlZCB0aGF0IGl0IGlzIHRvbyByaXNreSB0byBkaXNhYmxlIE1QVSwgYW5kIHlvdSBzdWdn
ZXN0ZWQgWzFdDQoiDQo+IFlvdSBzaG91bGQgbm90IG5lZWQgYW55IHJlb3JnIGlmIHlvdSBtYXAg
dGhlIGJvb3Qtb25seSBzZWN0aW9uIHRvd2FyZHMgaW4NCj4gdGhlIGhpZ2hlciBzbG90IGluZGV4
IChvciBqdXN0IGFmdGVyIHRoZSBmaXhlZCBvbmVzKS4NCiINCg0KTWF5YmUgaW4gYXNzZW1ibHks
IHdlIGtub3cgZXhhY3RseSBob3cgbWFueSBmaXhlZCByZWdpb25zIGFyZSwgYm9vdC1vbmx5IHJl
Z2lvbnMgYXJlLCBidXQgaW4gQyBjb2Rlcywgd2UgcGFyc2UgRkRUDQp0byBnZXQgc3RhdGljIGNv
bmZpZ3VyYXRpb24sIGxpa2Ugd2UgZG9uJ3Qga25vdyBob3cgbWFueSBmaXhlZCByZWdpb25zIGZv
ciB4ZW4gc3RhdGljIGhlYXAgaXMgZW5vdWdoLiANCkFwcHJveGltYXRpb24gaXMgbm90IHN1Z2dl
c3RlZCBpbiBNUFUgc3lzdGVtIHdpdGggbGltaXRlZCBNUFUgcmVnaW9ucywgc29tZSBwbGF0Zm9y
bSBtYXkgb25seSBoYXZlIDE2IE1QVSByZWdpb25zLA0KSU1ITywgaXQgaXMgbm90IHdvcnRoeSB3
YXN0aW5nIGluIGFwcHJveGltYXRpb24uIA0KDQpTbyBJIHRha2UgdGhlIHN1Z2dlc3Rpb24gb2Yg
cHV0dGluZyByZWdpb25zIGluIHRoZSBoaWdoZXIgc2xvdCBpbmRleC4gUHV0dGluZyBmaXhlZCBy
ZWdpb25zIGluIHRoZSBmcm9udCwgYW5kIHB1dHRpbmcNCmJvb3Qtb25seSBhbmQgc3dpdGNoaW5n
IG9uZXMgYXQgdGFpbC4gVGhlbiwgYXQgdGhlIGVuZCBvZiBib290aW5nLCB3aGVuIHdlIHJlb3Jn
IHRoZSBtcHUgbWFwcGluZywgd2UgcmVtb3ZlDQphbGwgYm9vdC1vbmx5IHJlZ2lvbnMsIGFuZCBm
b3Igc3dpdGNoaW5nIG9uZXMsIHdlIGRpc2FibGUtcmVsb2NhdGUoYWZ0ZXIgZml4ZWQgb25lcykt
ZW5hYmxlIHRoZW0uIFNwZWNpZmljIGNvZGVzIGluIFsyXS4NCg0KWzFdIGh0dHBzOi8vbGlzdHMu
eGVucHJvamVjdC5vcmcvYXJjaGl2ZXMvaHRtbC94ZW4tZGV2ZWwvMjAyMi0xMS9tc2cwMDQ1Ny5o
dG1sDQpbMl0gaHR0cHM6Ly9saXN0cy54ZW5wcm9qZWN0Lm9yZy9hcmNoaXZlcy9odG1sL3hlbi1k
ZXZlbC8yMDIzLTAxL21zZzAwNzk1Lmh0bWwNCg0KPiANClsuLi5dDQo+ID4+PiArLyoNCj4gPj4+
ICsgKiBFTlRSWSB0byBjb25maWd1cmUgYSBFTDIgTVBVIG1lbW9yeSByZWdpb24NCj4gPj4+ICsg
KiBBUk12OC1SIEFBcmNoNjQgYXQgbW9zdCBzdXBwb3J0cyAyNTUgTVBVIHByb3RlY3Rpb24gcmVn
aW9ucy4NCj4gPj4+ICsgKiBTZWUgc2VjdGlvbiBHMS4zLjE4IG9mIHRoZSByZWZlcmVuY2UgbWFu
dWFsIGZvciBBUk12OC1SIEFBcmNoNjQsDQo+ID4+PiArICogUFJCQVI8bj5fRUwyIGFuZCBQUkxB
UjxuPl9FTDIgcHJvdmlkZXMgYWNjZXNzIHRvIHRoZSBFTDIgTVBVDQo+ID4+PiArcmVnaW9uDQo+
ID4+PiArICogZGV0ZXJtaW5lZCBieSB0aGUgdmFsdWUgb2YgJ24nIGFuZCBQUlNFTFJfRUwyLlJF
R0lPTiBhcw0KPiA+Pj4gKyAqIFBSU0VMUl9FTDIuUkVHSU9OPDc6ND46bi4obiA9IDAsIDEsIDIs
IC4uLiAsIDE1KQ0KPiA+Pj4gKyAqIEZvciBleGFtcGxlIHRvIGFjY2VzcyByZWdpb25zIGZyb20g
MTYgdG8gMzEgKDBiMTAwMDAgdG8gMGIxMTExMSk6DQo+ID4+PiArICogLSBTZXQgUFJTRUxSX0VM
MiB0byAwYjF4eHh4DQo+ID4+PiArICogLSBSZWdpb24gMTYgY29uZmlndXJhdGlvbiBpcyBhY2Nl
c3NpYmxlIHRocm91Z2ggUFJCQVIwX0VMMiBhbmQNCj4gPj4+ICtQUkxBUjBfRUwyDQo+ID4+PiAr
ICogLSBSZWdpb24gMTcgY29uZmlndXJhdGlvbiBpcyBhY2Nlc3NpYmxlIHRocm91Z2ggUFJCQVIx
X0VMMiBhbmQNCj4gPj4+ICtQUkxBUjFfRUwyDQo+ID4+PiArICogLSBSZWdpb24gMTggY29uZmln
dXJhdGlvbiBpcyBhY2Nlc3NpYmxlIHRocm91Z2ggUFJCQVIyX0VMMiBhbmQNCj4gPj4+ICtQUkxB
UjJfRUwyDQo+ID4+PiArICogLSAuLi4NCj4gPj4+ICsgKiAtIFJlZ2lvbiAzMSBjb25maWd1cmF0
aW9uIGlzIGFjY2Vzc2libGUgdGhyb3VnaCBQUkJBUjE1X0VMMiBhbmQNCj4gPj4+ICtQUkxBUjE1
X0VMMg0KPiA+Pj4gKyAqDQo+ID4+PiArICogSW5wdXRzOg0KPiA+Pj4gKyAqIHgyNzogcmVnaW9u
IHNlbGVjdG9yDQo+ID4+PiArICogeDI4OiBwcmVzZXJ2ZSB2YWx1ZSBmb3IgUFJCQVJfRUwyDQo+
ID4+PiArICogeDI5OiBwcmVzZXJ2ZSB2YWx1ZSBmb3IgUFJMQVJfRUwyDQo+ID4+PiArICoNCj4g
Pj4+ICsgKi8NCj4gPj4+ICtFTlRSWSh3cml0ZV9wcikNCj4gPj4NCj4gPj4gQUZBSUNULCB0aGlz
IGZ1bmN0aW9uIHdvdWxkIG5vdCBiZSBuZWNlc3NhcnkgaWYgdGhlIGluZGV4IGZvciB0aGUNCj4g
Pj4gaW5pdCBzZWN0aW9ucyB3ZXJlIGhhcmRjb2RlZC4NCj4gPj4NCj4gPj4gU28gSSB3b3VsZCBs
aWtlIHRvIHVuZGVyc3RhbmQgd2h5IHRoZSBpbmRleCBjYW5ub3QgYmUgaGFyZGNvZGVkLg0KPiA+
Pg0KPiA+DQo+ID4gVGhlIHJlYXNvbiBpcyB0aGF0IHdlIGFyZSBwdXR0aW5nIGluaXQgc2VjdGlv
bnMgYXQgdGhlICplbmQqIG9mIHRoZQ0KPiA+IE1QVSBtYXAsIGFuZCB0aGUgbGVuZ3RoIG9mIHRo
ZSB3aG9sZSBNUFUgbWFwIGlzIHBsYXRmb3JtLXNwZWNpZmljLiBXZQ0KPiByZWFkIGl0IGZyb20g
TVBVSVJfRUwyLg0KPiANCj4gUmlnaHQsIEkgZ290IHRoYXQgYml0IGZyb20gdGhlIGNvZGUuIFdo
YXQgSSB3b3VsZCBsaWtlIHRvIHVuZGVyc3RhbmQgaXMgd2h5IGFsbA0KPiB0aGUgaW5pdGlhbCBh
ZGRyZXNzIGNhbm5vdCBiZSBoYXJkb2NvZGVkPw0KPiANCj4gIEZyb20gYSBicmllZiBsb29rLCB0
aGlzIHdvdWxkIHNpbXBsaWZ5IGEgbG90IHRoZSBhc3NlbWJseSBjb2RlLg0KPiANCg0KTGlrZSBJ
IHNhaWQgYmVmb3JlLCAgIm1hcCB0b3dhcmRzIGhpZ2hlciBzbG90IiwgaWYgaXQgaXMgbm90IHRo
ZSB0YWlsLCBpdCBpcyBoYXJkIHRvIGRlY2lkZSBhbm90aGVyDQpudW1iZXIgdG8gbWVldCBkaWZm
ZXJlbnQgcGxhdGZvcm1zIGFuZCB2YXJpb3VzIEZEVCBzdGF0aWMgY29uZmlndXJhdGlvbi4NCg0K
SWYgd2UsIGluIGFzc2VtYmx5LCBwdXQgZml4ZWQgcmVnaW9ucyBpbiBmcm9udCBhbmQgYm9vdC1v
bmx5IHJlZ2lvbnMgYWZ0ZXIsIHRoZW4sIHdoZW4gd2UNCmVudGVyIEMgd29ybGQsIHdlIGltbWVk
aWF0ZWx5IGRvIGEgc2ltcGxlIHJlc2h1ZmZsZSwgd2hpY2ggbWVhbnMgdGhhdCB3ZSBuZWVkIHRv
IHJlbG9jYXRlDQp0aGVzZSBpbml0IHNlY3Rpb25zIHRvIHRhaWwsIGl0IGlzIHdvcmthYmxlIG9u
bHkgd2hlbiBNUFUgaXMgZGlzYWJsZWQsIHVubGVzcyB3ZSdyZSBzdXJlIHRoYXQNCiJyZXNodWZm
bGluZyBwYXJ0IiBpcyBub3QgdXNpbmcgYW55IGluaXQgY29kZXMgb3IgZGF0YS4NCiAgDQo+ID4N
Cj4gPj4+ICsgICAgbXNyICAgUFJTRUxSX0VMMiwgeDI3DQo+ID4+PiArICAgIGRzYiAgIHN5DQo+
ID4+DQo+ID4+IFsuLi5dDQo+ID4+DQo+ID4+PiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3hl
bi5sZHMuUyBiL3hlbi9hcmNoL2FybS94ZW4ubGRzLlMgaW5kZXgNCj4gPj4+IGJjNDVlYTJjNjUu
Ljc5OTY1YTNjMTcgMTAwNjQ0DQo+ID4+PiAtLS0gYS94ZW4vYXJjaC9hcm0veGVuLmxkcy5TDQo+
ID4+PiArKysgYi94ZW4vYXJjaC9hcm0veGVuLmxkcy5TDQo+ID4+PiBAQCAtOTEsNiArOTEsOCBA
QCBTRUNUSU9OUw0KPiA+Pj4gICAgICAgICAgX19yb19hZnRlcl9pbml0X2VuZCA9IC47DQo+ID4+
PiAgICAgIH0gOiB0ZXh0DQo+ID4+Pg0KPiA+Pj4gKyAgLiA9IEFMSUdOKFBBR0VfU0laRSk7DQo+
ID4+DQo+ID4+IFdoeSBkbyB5b3UgbmVlZCB0aGlzIEFMSUdOPw0KPiA+Pg0KPiA+DQo+ID4gSSBu
ZWVkIGEgc3ltYm9sIGFzIHRoZSBzdGFydCBvZiB0aGUgZGF0YSBzZWN0aW9uLCBzbyBJIGludHJv
ZHVjZQ0KPiA+ICJfX2RhdGFfYmVnaW4gPSAuOyIuDQo+ID4gSWYgSSB1c2UgIl9fcm9fYWZ0ZXJf
aW5pdF9lbmQgPSAuOyIgaW5zdGVhZCwgSSdtIGFmcmFpZCBpbiB0aGUgZnV0dXJlLA0KPiA+IGlm
IHNvbWVvbmUgaW50cm9kdWNlcyBhIG5ldyBzZWN0aW9uIGFmdGVyIHJvLWFmdGVyLWluaXQgc2Vj
dGlvbiwgdGhpcw0KPiA+IHBhcnQgYWxzbyBuZWVkcyBtb2RpZmljYXRpb24gdG9vLg0KPiANCj4g
SSBoYXZlbid0IHN1Z2dlc3RlZCB0aGVyZSBpcyBhIHByb2JsZW0gdG8gZGVmaW5lIGEgbmV3IHN5
bWJvbC4gSSBhbSBtZXJlbHkNCj4gYXNraW5nIGFib3V0IHRoZSBhbGlnbi4NCj4gDQo+ID4NCj4g
PiBXaGVuIHdlIGRlZmluZSBNUFUgcmVnaW9ucyBmb3IgZWFjaCBzZWN0aW9uIGluIHhlbi5sZHMu
Uywgd2UgYWx3YXlzDQo+ID4gdHJlYXQgdGhlc2Ugc2VjdGlvbnMgcGFnZS1hbGlnbmVkLg0KPiA+
IEkgY2hlY2tlZCBlYWNoIHNlY3Rpb24gaW4geGVuLmxkcy5TLCBhbmQgIi4gPSBBTElHTihQQUdF
X1NJWkUpOyIgaXMNCj4gPiBlaXRoZXIgYWRkZWQgYXQgc2VjdGlvbiBoZWFkLCBvciBhdCB0aGUg
dGFpbCBvZiB0aGUgcHJldmlvdXMgc2VjdGlvbiwNCj4gPiB0byBtYWtlIHN1cmUgc3RhcnRpbmcg
YWRkcmVzcyBzeW1ib2wgcGFnZS1hbGlnbmVkLg0KPiA+DQo+ID4gQW5kIGlmIHdlIGRvbid0IHB1
dCB0aGlzIEFMSUdOLCBpZiAiX19yb19hZnRlcl9pbml0X2VuZCAiIHN5bWJvbA0KPiA+IGl0c2Vs
ZiBpcyBub3QgcGFnZS1hbGlnbmVkLCB0aGUgdHdvIGFkamFjZW50IHNlY3Rpb25zIHdpbGwgb3Zl
cmxhcCBpbiBNUFUuDQo+IA0KPiBfX3JvX2FmdGVyX2luaXRfZW5kICpoYXMqIHRvIGJlIHBhZ2Ug
YWxpZ25lZCBiZWNhdXNlIHRoZSBwZXJtaXNzaW9ucyBhcmUNCj4gZGlmZmVyZW50IHRoYW4gZm9y
IF9fZGF0YV9iZWdpbi4NCj4gDQo+IElmIHdlIHdlcmUgZ29pbmcgdG8gYWRkIGEgbmV3IHNlY3Rp
b24sIHRoZW4gZWl0aGVyIGl0IGhhcyB0aGUgc2FtZSBwZXJtaXNzaW9uDQo+IGFzIC5kYXRhLnJl
YWQubW9zdGx5IGFuZCB3ZSB3aWxsIGJ1bmRsZSB0aGVtIG9yIGl0IGRvZXNuJ3QgYW5kIHdlIHdv
dWxkDQo+IG5lZWQgYSAuYWxpZ24uDQo+IA0KPiBCdXQgdG9kYXksIHRoZSBleHRyYSAuQUxJR04g
c2VlbXMgdW5uZWNlc3NhcnkgKGF0IGxlYXN0IGluIHRoZSBjb250ZXh0IG9mIHRoaXMNCj4gcGF0
Y2gpLg0KPiANCg0KVW5kZXJzdG9vZCwgSSdsbCByZW1vdmUNCg0KPiBDaGVlcnMsDQo+IA0KPiAt
LQ0KPiBKdWxpZW4gR3JhbGwNCg0KQ2hlZXJzLA0KDQotLQ0KUGVubnkgWmhlbmcNCg0KLS0NClBl
bm55IFpoZW5nDQo=


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 06:15:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 06:15:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486566.753926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMNRp-00080v-1H; Mon, 30 Jan 2023 06:15:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486566.753926; Mon, 30 Jan 2023 06:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMNRo-00080o-Ul; Mon, 30 Jan 2023 06:15:20 +0000
Received: by outflank-mailman (input) for mailman id 486566;
 Mon, 30 Jan 2023 06:15:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMNRn-00080i-70
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 06:15:19 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 774a2611-a065-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 07:15:17 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E66731F749;
 Mon, 30 Jan 2023 06:15:15 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BB86113A06;
 Mon, 30 Jan 2023 06:15:15 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id DMdoLHNg12PiFgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 06:15:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 774a2611-a065-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675059315; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MJ/6r2WK59uWwST50/RBuTm+QFBBetMoB1hCizuxEMU=;
	b=FMQWmIXJx89Gq14G6q9apPSRdMJhfgt6gA3mWkw54rzJccMKXb7djeTSoTQdj5eZoULBt4
	sH4pcin3Yg/zK5W/+48ySOflXYfaErEPZyCw3n77h7quS6fUMoso5UzkbZnWBtv/t111+J
	XWQXDsftkIYoIA0aKvkr+EznzOPb934=
Message-ID: <78036f22-3b47-f95a-a438-cca012681bb9@suse.com>
Date: Mon, 30 Jan 2023 07:15:15 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] tools/xenstored: hashtable: Constify the parameters of
 hashfn/eqfn
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230127185546.65760-1-julien@xen.org>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230127185546.65760-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------vsYPRZpjdKCf31ja1V5zfHmV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------vsYPRZpjdKCf31ja1V5zfHmV
Content-Type: multipart/mixed; boundary="------------p2yrXRFknz6rKKnpJcPIsI2F";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <78036f22-3b47-f95a-a438-cca012681bb9@suse.com>
Subject: Re: [PATCH] tools/xenstored: hashtable: Constify the parameters of
 hashfn/eqfn
References: <20230127185546.65760-1-julien@xen.org>
In-Reply-To: <20230127185546.65760-1-julien@xen.org>

--------------p2yrXRFknz6rKKnpJcPIsI2F
Content-Type: multipart/mixed; boundary="------------xa0JDEGm405aUH6HeRjBpNvm"

--------------xa0JDEGm405aUH6HeRjBpNvm
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjcuMDEuMjMgMTk6NTUsIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCj4gDQo+IFRoZSBwYXJhbWV0ZXJzIG9mIGhh
c2hmbi9lcWZuIHNob3VsZCBuZXZlciBiZSBtb2RpZmllZC4gU28gY29uc3RpZnkNCj4gdGhl
bSBhbmQgcHJvcGFnYXRlIHRoZSBjb25zdCB0byB0aGUgdXNlcnMuDQo+IA0KPiBUYWtlIHRo
ZSBvcHBvcnR1bml0eSB0byBzb2x2ZSBzb21lIGNvZGluZyBzdHlsZSBpc3N1ZXMgYXJvdW5k
IHRoZQ0KPiBjb2RlIG1vZGlmaWVkLg0KPiANCj4gU2lnbmVkLW9mZi1ieTogSnVsaWVuIEdy
YWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCg0KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3Mg
PGpncm9zc0BzdXNlLmNvbT4NCg0KDQpKdWVyZ2VuDQoNCg==
--------------xa0JDEGm405aUH6HeRjBpNvm
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------xa0JDEGm405aUH6HeRjBpNvm--

--------------p2yrXRFknz6rKKnpJcPIsI2F--

--------------vsYPRZpjdKCf31ja1V5zfHmV
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPXYHMFAwAAAAAACgkQsN6d1ii/Ey8Q
Fwf/YX2qa1WyaN1VW3oXMP9YX8uvvl1u5iVdW/D2mIs/chuHZyQVrzmntlpmFKSCnZtKyODky3eU
ta48q9Z0t90+eOGHxUy62x6USiGXP8nw0y+kCNnRET8mE2y94LuzRpCcGFzV2nI1RdK+XzhbymBA
kcwgtps9+Sp3t+1pGHsQn2eVWI3bjBeAxYO8jgV2IFRaZWLT49AnjvT6zIqBR67T9jwD6TbVQMuF
MKrNBn5UdZH+KUW2if3bSHq08mmzvRD1tdsZjs4pjtiQ63dE/5D7gGYnQz7LZToW5YFeX4pTYAek
EFV9GTuJ+aTAIhX4N0fs/rZzhqqqs05tjEmC3YA7+g==
=66+w
-----END PGP SIGNATURE-----

--------------vsYPRZpjdKCf31ja1V5zfHmV--


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 06:25:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 06:25:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486572.753937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMNbD-00019z-Vy; Mon, 30 Jan 2023 06:25:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486572.753937; Mon, 30 Jan 2023 06:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMNbD-00019s-Sk; Mon, 30 Jan 2023 06:25:03 +0000
Received: by outflank-mailman (input) for mailman id 486572;
 Mon, 30 Jan 2023 06:25:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+jcy=53=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pMNbD-00019m-9a
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 06:25:03 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0605.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::605])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d32463c1-a066-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 07:25:01 +0100 (CET)
Received: from DU2PR04CA0239.eurprd04.prod.outlook.com (2603:10a6:10:2b1::34)
 by PAWPR08MB9994.eurprd08.prod.outlook.com (2603:10a6:102:35f::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.22; Mon, 30 Jan
 2023 06:24:58 +0000
Received: from DBAEUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b1:cafe::78) by DU2PR04CA0239.outlook.office365.com
 (2603:10a6:10:2b1::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36 via Frontend
 Transport; Mon, 30 Jan 2023 06:24:58 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT044.mail.protection.outlook.com (100.127.142.189) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.21 via Frontend Transport; Mon, 30 Jan 2023 06:24:58 +0000
Received: ("Tessian outbound 43b0faad5a68:v132");
 Mon, 30 Jan 2023 06:24:58 +0000
Received: from 111d9c8eb965.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B5972BF2-E1C1-4352-9F40-1231715CE2D3.1; 
 Mon, 30 Jan 2023 06:24:51 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 111d9c8eb965.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 30 Jan 2023 06:24:51 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by DB9PR08MB8388.eurprd08.prod.outlook.com (2603:10a6:10:3d7::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.30; Mon, 30 Jan
 2023 06:24:49 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%3]) with mapi id 15.20.6043.033; Mon, 30 Jan 2023
 06:24:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d32463c1-a066-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=owjUSjlT5aFyqV0noqTlySyN9fiC201po4t+nWU/EMA=;
 b=D0sFZsBL3K3EAl3in5iCLXLsMxlJYd4SJgFIynu3626se4d8supQAyl64f2FGgMNmMCU4w2lIJCAHup1amr5o6d7O68CjuKXsNfRPjF2Ol8ndK/a+waCgR3LwYHzspj1xZDLL6x5NPTfn4kft3VGCstBAlkWhE9TTDM+zqki604=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AQTKUbBn9wV6jlfZ1pMjJtQlwKhrJip7uLirqPPt2t4fzLli0K33GXfnswdt/fSDj2hGNX1J+xl3809YCEUF7DXpDhkha64IZCBOBLhtLlKCPbwJC5zKR3H+H/p6bFe4VeSesqpHrekBlspcYyLrOllLk0/KAULiIEN90xnsCAoUkGgZU9ocrB61AXAXvDOA6pVF5ZWKT836A7oWfMurbymi7zfh7LQQ2hquOqCBBJ8URSSi7t5HUmk64/XdomVV4S58itktoaoip+oJ4lFIx0S0qiE2Q9frkx0J6yDcKIo3eaZe2+RQyBGiXQllJA1ywEcimNjW+fd2DFegTJPmVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=owjUSjlT5aFyqV0noqTlySyN9fiC201po4t+nWU/EMA=;
 b=b1nxX4+hJ6gYc65K7yda4acvfAFuuOlB3KOKx4AGTopZ8SL6uSr3qs6nG30BOPGbpv2yPuxCuGaLSK8n0FlvRSguKWSha2RFE+HzEMBXzllU5dnsfniqmnamYe4gEHUutlfgKKg6GJOk8invAxL4w+BRLyavAVlbJcqdlyClrb3gg1FUY+voQdH3Jpk5WhhMzN831M5oeEAJgpRcOwdwaaPEQ37Td2GKaeryX/SWzNX3G41DabTDmblzyWK/LIArXK95v5jRddgk0s+cpB5MCFm7adqHHUTB326pQh64xfDpe/rYXOZZ6TWRoZPzWY1ykuq9OrgEOY1yfLkaIa1thw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=owjUSjlT5aFyqV0noqTlySyN9fiC201po4t+nWU/EMA=;
 b=D0sFZsBL3K3EAl3in5iCLXLsMxlJYd4SJgFIynu3626se4d8supQAyl64f2FGgMNmMCU4w2lIJCAHup1amr5o6d7O68CjuKXsNfRPjF2Ol8ndK/a+waCgR3LwYHzspj1xZDLL6x5NPTfn4kft3VGCstBAlkWhE9TTDM+zqki604=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
Thread-Topic: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
Thread-Index: AQHZJxAoM+nWeUC5lUC+09eIkdu2V66uAIMAgAb9FDCAAB73gIABdWjw
Date: Mon, 30 Jan 2023 06:24:48 +0000
Message-ID:
 <AM0PR08MB453026B268BA9FBEEE970090F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-14-Penny.Zheng@arm.com>
 <23f49916-dd2a-a956-1e6b-6dbb41a8817b@xen.org>
 <AM0PR08MB4530B7AF6EA406882974D528F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <33bddc11-ae1e-b467-32d7-647748d1c627@xen.org>
In-Reply-To: <33bddc11-ae1e-b467-32d7-647748d1c627@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 0719802EBC829F4D8F55FEAA89EA02F0.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|DB9PR08MB8388:EE_|DBAEUR03FT044:EE_|PAWPR08MB9994:EE_
X-MS-Office365-Filtering-Correlation-Id: be21ce6c-2ce8-4d87-422f-08db028ab5dc
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 5+XVcmx2yl4ET3f3DMsVnAWHv4fWA9qs5YOl5vFKekhnZK8+oUqYL/+7KAPdXWxaEefwTb3ozG63mXA2dcO5yu3bhWmj4Gti+UmHnpZE513j/B4vJs9ZQ0rMMtbF1lTdLU7vT0tWVVaqz+5LAwMqqJAXrYQRayV6SIdGTZzxfPC8YkmNNTQlE/bxtlbboq5DNIDMUdxdf+83oYqXt9jZwJSlDqKGZIb3Cla3tBVXEg0+ATNh9dZMA02kl3OhWj9j8o8P79RrEAwXhbxvR5xlQAqkvVfGRf8UkKD8S9kZ2wU7748Thqm5yPMM382WfBpNDVw5SV4mquefzbEdsge4sOVKZl4hm0IufMrIxgNH9BqHaC2u4y6KZtasaFD5ujLvvwO0v9EACgJf5xKVthNmUm1/yQv2Sftjh35idHnITvk3nfzU5GvOCOUN7VywL0KqwdkadcyMQ86XrBbrCZsKrXmVezzYfPOS3BXlE/UHQA03bQes2KsiVSu1bj6nUD7IkDVYIqLA8A8/CUXvzR9e8KB9JWDo4h90TTYkNAOLmBPtoBY5GjaRDcp+w78eDNjeNu5PTxzmDIxs1Ssoj1DZxpldmPosPD1FmcNGW2wga/eDuk5kiG9QkrTg/ReIA6joq0gfu06cU12ur9XfaIOOndVPArCEcH+9cNWGG+rD6cGbNv0D4DTH6d0NOBLiQrIhy2wDEOtoItHFod3xiSoWb9UrMqEtSpk7O08Qeq2LZc4=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB4530.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(396003)(39860400002)(346002)(376002)(136003)(366004)(451199018)(76116006)(66946007)(8676002)(4326008)(52536014)(8936002)(38100700002)(316002)(38070700005)(53546011)(6506007)(5660300002)(110136005)(55016003)(26005)(54906003)(33656002)(186003)(9686003)(66476007)(64756008)(41300700001)(66556008)(66446008)(122000001)(71200400001)(7696005)(83380400001)(478600001)(86362001)(2906002)(21314003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB8388
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3288efca-7028-48a6-a82d-08db028aaff3
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AWJ9h6ve7SxYM/0wZgSQca8oe0pT0e9wBoQMgxhEfe8yiTRu0Qtfydw03jDhjqDNbhqUqVIR8/AF23wrVQX44Cz5h/ICCLl8ZAQ43XrFT/m824pPmYlaQoGmWKa18cqGTFDrjv8HBTCz7oYr99fcxLFPW8y8rEhsC75x+hx9lOBDTJOoiD9cTjJotge9jylNnxv4Qptj+dPhq6ww098hO5BkPaByaBA3jD5Teihi2Nruj1Xnt+dKNQBIGFOPYomAQWOHUd9hPeguuQ7UP2Ftbx1wFknMS/mPCW3nnI9kTOkNhwEcRJX518DKdMYyyv+kaBaqEX/Mu3iEJ+10xtKd0R8S2A0w83t3RilweAxsFVvUOiBnoPiNx62+34drB/Z8+RhZqo+UBQT7ghP9/L23RTp/lB8AMcIELeKxh5rOU5ft0CJslVl7ZUagV0VTkhoEas2t836om2gWUakDssr3SoZ57L7RSYOgVq11q9xpdCVObUA+ayRlGGr6T0+CVjfCdgC1RgyOaXtnmkWwnFR3RWvcrgns59WaaH/J0RQ4cEgvn+PkhOU6Nk0oAfEsfJdUvxjqA+234CCHcOgNBtyFWuNd7hMK8/v+bgXxa0FxANxlcDpijXdgA7NycB1jwHHdio/vcXxXbb/ziQiZWCfMZdQpPYN7u6HCbRsOLF3tamLDLow2hPFYtE784VECbiLSBMXZbQU2tss6xnDBPaj6fhcNYc04KWjnVHvJbnVmeVI=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(346002)(136003)(396003)(376002)(39860400002)(451199018)(46966006)(36840700001)(40470700004)(82310400005)(8936002)(40460700003)(82740400003)(52536014)(6506007)(53546011)(54906003)(81166007)(33656002)(40480700001)(356005)(107886003)(9686003)(55016003)(26005)(86362001)(478600001)(2906002)(7696005)(186003)(316002)(110136005)(5660300002)(83380400001)(41300700001)(70586007)(8676002)(4326008)(70206006)(336012)(36860700001)(47076005)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 06:24:58.2754
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: be21ce6c-2ce8-4d87-422f-08db028ab5dc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9994

SGksIEp1bGllbg0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IEp1bGll
biBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+IFNlbnQ6IFN1bmRheSwgSmFudWFyeSAyOSwgMjAy
MyAzOjQzIFBNDQo+IFRvOiBQZW5ueSBaaGVuZyA8UGVubnkuWmhlbmdAYXJtLmNvbT47IHhlbi1k
ZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZw0KPiBDYzogV2VpIENoZW4gPFdlaS5DaGVuQGFybS5j
b20+OyBTdGVmYW5vIFN0YWJlbGxpbmkNCj4gPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+OyBCZXJ0
cmFuZCBNYXJxdWlzIDxCZXJ0cmFuZC5NYXJxdWlzQGFybS5jb20+Ow0KPiBWb2xvZHlteXIgQmFi
Y2h1ayA8Vm9sb2R5bXlyX0JhYmNodWtAZXBhbS5jb20+DQo+IFN1YmplY3Q6IFJlOiBbUEFUQ0gg
djIgMTMvNDBdIHhlbi9tcHU6IGludHJvZHVjZSB1bmlmaWVkIGZ1bmN0aW9uDQo+IHNldHVwX2Vh
cmx5X3VhcnQgdG8gbWFwIGVhcmx5IFVBUlQNCj4gDQo+IEhpIFBlbm55LA0KPiANCj4gT24gMjkv
MDEvMjAyMyAwNjoxNywgUGVubnkgWmhlbmcgd3JvdGU6DQo+ID4+IC0tLS0tT3JpZ2luYWwgTWVz
c2FnZS0tLS0tDQo+ID4+IEZyb206IEp1bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+ID4+
IFNlbnQ6IFdlZG5lc2RheSwgSmFudWFyeSAyNSwgMjAyMyAzOjA5IEFNDQo+ID4+IFRvOiBQZW5u
eSBaaGVuZyA8UGVubnkuWmhlbmdAYXJtLmNvbT47IHhlbi0NCj4gZGV2ZWxAbGlzdHMueGVucHJv
amVjdC5vcmcNCj4gPj4gQ2M6IFdlaSBDaGVuIDxXZWkuQ2hlbkBhcm0uY29tPjsgU3RlZmFubyBT
dGFiZWxsaW5pDQo+ID4+IDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPjsgQmVydHJhbmQgTWFycXVp
cw0KPiA+PiA8QmVydHJhbmQuTWFycXVpc0Bhcm0uY29tPjsgVm9sb2R5bXlyIEJhYmNodWsNCj4g
Pj4gPFZvbG9keW15cl9CYWJjaHVrQGVwYW0uY29tPg0KPiA+PiBTdWJqZWN0OiBSZTogW1BBVENI
IHYyIDEzLzQwXSB4ZW4vbXB1OiBpbnRyb2R1Y2UgdW5pZmllZCBmdW5jdGlvbg0KPiA+PiBzZXR1
cF9lYXJseV91YXJ0IHRvIG1hcCBlYXJseSBVQVJUDQo+ID4+DQo+ID4+IEhpIFBlbnksDQo+ID4N
Cj4gPiBIaSBKdWxpZW4sDQo+ID4NCj4gPj4NCj4gPj4gT24gMTMvMDEvMjAyMyAwNToyOCwgUGVu
bnkgWmhlbmcgd3JvdGU6DQo+ID4+PiBJbiBNTVUgc3lzdGVtLCB3ZSBtYXAgdGhlIFVBUlQgaW4g
dGhlIGZpeG1hcCAod2hlbiBlYXJseXByaW50ayBpcw0KPiB1c2VkKS4NCj4gPj4+IEhvd2V2ZXIg
aW4gTVBVIHN5c3RlbSwgd2UgbWFwIHRoZSBVQVJUIHdpdGggYSB0cmFuc2llbnQgTVBVDQo+IG1l
bW9yeQ0KPiA+Pj4gcmVnaW9uLg0KPiA+Pj4NCj4gPj4+IFNvIHdlIGludHJvZHVjZSBhIG5ldyB1
bmlmaWVkIGZ1bmN0aW9uIHNldHVwX2Vhcmx5X3VhcnQgdG8gcmVwbGFjZQ0KPiA+Pj4gdGhlIHBy
ZXZpb3VzIHNldHVwX2ZpeG1hcC4NCj4gPj4+DQo+ID4+PiBTaWduZWQtb2ZmLWJ5OiBQZW5ueSBa
aGVuZyA8cGVubnkuemhlbmdAYXJtLmNvbT4NCj4gPj4+IFNpZ25lZC1vZmYtYnk6IFdlaSBDaGVu
IDx3ZWkuY2hlbkBhcm0uY29tPg0KPiA+Pj4gLS0tDQo+ID4+PiAgICB4ZW4vYXJjaC9hcm0vYXJt
NjQvaGVhZC5TICAgICAgICAgICAgICAgfCAgMiArLQ0KPiA+Pj4gICAgeGVuL2FyY2gvYXJtL2Fy
bTY0L2hlYWRfbW11LlMgICAgICAgICAgIHwgIDQgKy0NCj4gPj4+ICAgIHhlbi9hcmNoL2FybS9h
cm02NC9oZWFkX21wdS5TICAgICAgICAgICB8IDUyDQo+ID4+ICsrKysrKysrKysrKysrKysrKysr
KysrKysNCj4gPj4+ICAgIHhlbi9hcmNoL2FybS9pbmNsdWRlL2FzbS9lYXJseV9wcmludGsuaCB8
ICAxICsNCj4gPj4+ICAgIDQgZmlsZXMgY2hhbmdlZCwgNTYgaW5zZXJ0aW9ucygrKSwgMyBkZWxl
dGlvbnMoLSkNCj4gPj4+DQo+ID4+PiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL2FybTY0L2hl
YWQuUyBiL3hlbi9hcmNoL2FybS9hcm02NC9oZWFkLlMNCj4gPj4+IGluZGV4IDdmM2Y5NzM0Njgu
LmE5Mjg4MzMxOWQgMTAwNjQ0DQo+ID4+PiAtLS0gYS94ZW4vYXJjaC9hcm0vYXJtNjQvaGVhZC5T
DQo+ID4+PiArKysgYi94ZW4vYXJjaC9hcm0vYXJtNjQvaGVhZC5TDQo+ID4+PiBAQCAtMjcyLDcg
KzI3Miw3IEBAIHByaW1hcnlfc3dpdGNoZWQ6DQo+ID4+PiAgICAgICAgICAgICAqIGFmdGVyd2Fy
ZHMuDQo+ID4+PiAgICAgICAgICAgICAqLw0KPiA+Pj4gICAgICAgICAgICBibCAgICByZW1vdmVf
aWRlbnRpdHlfbWFwcGluZw0KPiA+Pj4gLSAgICAgICAgYmwgICAgc2V0dXBfZml4bWFwDQo+ID4+
PiArICAgICAgICBibCAgICBzZXR1cF9lYXJseV91YXJ0DQo+ID4+PiAgICAjaWZkZWYgQ09ORklH
X0VBUkxZX1BSSU5USw0KPiA+Pj4gICAgICAgICAgICAvKiBVc2UgYSB2aXJ0dWFsIGFkZHJlc3Mg
dG8gYWNjZXNzIHRoZSBVQVJULiAqLw0KPiA+Pj4gICAgICAgICAgICBsZHIgICB4MjMsID1FQVJM
WV9VQVJUX1ZJUlRVQUxfQUREUkVTUw0KPiA+Pj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS9h
cm02NC9oZWFkX21tdS5TDQo+ID4+PiBiL3hlbi9hcmNoL2FybS9hcm02NC9oZWFkX21tdS5TIGlu
ZGV4IGI1OWM0MDQ5NWYuLmExOWI3Yzg3M2QNCj4gPj4gMTAwNjQ0DQo+ID4+PiAtLS0gYS94ZW4v
YXJjaC9hcm0vYXJtNjQvaGVhZF9tbXUuUw0KPiA+Pj4gKysrIGIveGVuL2FyY2gvYXJtL2FybTY0
L2hlYWRfbW11LlMNCj4gPj4+IEBAIC0zMTIsNyArMzEyLDcgQEAgRU5EUFJPQyhyZW1vdmVfaWRl
bnRpdHlfbWFwcGluZykNCj4gPj4+ICAgICAqDQo+ID4+PiAgICAgKiBDbG9iYmVycyB4MCAtIHgz
DQo+ID4+PiAgICAgKi8NCj4gPj4+IC1FTlRSWShzZXR1cF9maXhtYXApDQo+ID4+PiArRU5UUlko
c2V0dXBfZWFybHlfdWFydCkNCj4gPj4NCj4gPj4gVGhpcyBmdW5jdGlvbiBpcyBkb2luZyBtb3Jl
IHRoYW4gZW5hYmxlIHRoZSBlYXJseSBVQVJULiBJdCBhbHNvDQo+ID4+IHNldHVwcyB0aGUgZml4
bWFwIGV2ZW4gZWFybHlwcmludGsgaXMgbm90IGNvbmZpZ3VyZWQuDQo+ID4NCj4gPiBUcnVlLCB0
cnVlLg0KPiA+IEkndmUgdGhvcm91Z2hseSByZWFkIHRoZSBNTVUgaW1wbGVtZW50YXRpb24gb2Yg
c2V0dXBfZml4bWFwLCBhbmQgSSdsbA0KPiA+IHRyeSB0byBzcGxpdCBpdCB1cC4NCj4gPg0KPiA+
Pg0KPiA+PiBJIGFtIG5vdCBlbnRpcmVseSBzdXJlIHdoYXQgY291bGQgYmUgdGhlIG5hbWUuIE1h
eWJlIHRoaXMgbmVlZHMgdG8gYmUNCj4gPj4gc3BsaXQgZnVydGhlci4NCj4gPj4NCj4gPj4+ICAg
ICNpZmRlZiBDT05GSUdfRUFSTFlfUFJJTlRLDQo+ID4+PiAgICAgICAgICAgIC8qIEFkZCBVQVJU
IHRvIHRoZSBmaXhtYXAgdGFibGUgKi8NCj4gPj4+ICAgICAgICAgICAgbGRyICAgeDAsID1FQVJM
WV9VQVJUX1ZJUlRVQUxfQUREUkVTUw0KPiA+Pj4gQEAgLTMyNSw3ICszMjUsNyBAQCBFTlRSWShz
ZXR1cF9maXhtYXApDQo+ID4+PiAgICAgICAgICAgIGRzYiAgIG5zaHN0DQo+ID4+Pg0KPiA+Pj4g
ICAgICAgICAgICByZXQNCj4gPj4+IC1FTkRQUk9DKHNldHVwX2ZpeG1hcCkNCj4gPj4+ICtFTkRQ
Uk9DKHNldHVwX2Vhcmx5X3VhcnQpDQo+ID4+Pg0KPiA+Pj4gICAgLyogRmFpbC1zdG9wICovDQo+
ID4+PiAgICBmYWlsOiAgIFBSSU5UKCItIEJvb3QgZmFpbGVkIC1cclxuIikNCj4gPj4+IGRpZmYg
LS1naXQgYS94ZW4vYXJjaC9hcm0vYXJtNjQvaGVhZF9tcHUuUw0KPiA+Pj4gYi94ZW4vYXJjaC9h
cm0vYXJtNjQvaGVhZF9tcHUuUyBpbmRleCBlMmFjNjliMGNjLi43MmQxZTA4NjNkDQo+ID4+IDEw
MDY0NA0KPiA+Pj4gLS0tIGEveGVuL2FyY2gvYXJtL2FybTY0L2hlYWRfbXB1LlMNCj4gPj4+ICsr
KyBiL3hlbi9hcmNoL2FybS9hcm02NC9oZWFkX21wdS5TDQo+ID4+PiBAQCAtMTgsOCArMTgsMTAg
QEANCj4gPj4+ICAgICNkZWZpbmUgUkVHSU9OX1RFWFRfUFJCQVIgICAgICAgMHgzOCAgICAvKiBT
SD0xMSBBUD0xMCBYTj0wMCAqLw0KPiA+Pj4gICAgI2RlZmluZSBSRUdJT05fUk9fUFJCQVIgICAg
ICAgICAweDNBICAgIC8qIFNIPTExIEFQPTEwIFhOPTEwICovDQo+ID4+PiAgICAjZGVmaW5lIFJF
R0lPTl9EQVRBX1BSQkFSICAgICAgIDB4MzIgICAgLyogU0g9MTEgQVA9MDAgWE49MTAgKi8NCj4g
Pj4+ICsjZGVmaW5lIFJFR0lPTl9ERVZJQ0VfUFJCQVIgICAgIDB4MjIgICAgLyogU0g9MTAgQVA9
MDAgWE49MTAgKi8NCj4gPj4+DQo+ID4+PiAgICAjZGVmaW5lIFJFR0lPTl9OT1JNQUxfUFJMQVIg
ICAgIDB4MGYgICAgLyogTlM9MCBBVFRSPTExMSBFTj0xICovDQo+ID4+PiArI2RlZmluZSBSRUdJ
T05fREVWSUNFX1BSTEFSICAgICAweDA5ICAgIC8qIE5TPTAgQVRUUj0xMDAgRU49MSAqLw0KPiA+
Pj4NCj4gPj4+ICAgIC8qDQo+ID4+PiAgICAgKiBNYWNybyB0byByb3VuZCB1cCB0aGUgc2VjdGlv
biBhZGRyZXNzIHRvIGJlIFBBR0VfU0laRSBhbGlnbmVkDQo+ID4+PiBAQA0KPiA+Pj4gLTMzNCw2
ICszMzYsNTYgQEAgRU5UUlkoZW5hYmxlX21tKQ0KPiA+Pj4gICAgICAgIHJldA0KPiA+Pj4gICAg
RU5EUFJPQyhlbmFibGVfbW0pDQo+ID4+Pg0KPiA+Pj4gKy8qDQo+ID4+PiArICogTWFwIHRoZSBl
YXJseSBVQVJUIHdpdGggYSBuZXcgdHJhbnNpZW50IE1QVSBtZW1vcnkgcmVnaW9uLg0KPiA+Pj4g
KyAqDQo+ID4+DQo+ID4+IE1pc3NpbmcgIklucHV0czogIg0KPiA+Pg0KPiA+Pj4gKyAqIHgyNzog
cmVnaW9uIHNlbGVjdG9yDQo+ID4+PiArICogeDI4OiBwcmJhcg0KPiA+Pj4gKyAqIHgyOTogcHJs
YXINCj4gPj4+ICsgKg0KPiA+Pj4gKyAqIENsb2JiZXJzIHgwIC0geDQNCj4gPj4+ICsgKg0KPiA+
Pj4gKyAqLw0KPiA+Pj4gK0VOVFJZKHNldHVwX2Vhcmx5X3VhcnQpDQo+ID4+PiArI2lmZGVmIENP
TkZJR19FQVJMWV9QUklOVEsNCj4gPj4+ICsgICAgLyogc3RhY2sgTFIgYXMgd3JpdGVfcHIgd2ls
bCBiZSBjYWxsZWQgbGF0ZXIgbGlrZSBuZXN0ZWQgZnVuY3Rpb24gKi8NCj4gPj4+ICsgICAgbW92
ICAgeDMsIGxyDQo+ID4+PiArDQo+ID4+PiArICAgIC8qDQo+ID4+PiArICAgICAqIE1QVSByZWdp
b24gZm9yIGVhcmx5IFVBUlQgaXMgYSB0cmFuc2llbnQgcmVnaW9uLCBzaW5jZSBpdCB3aWxsIGJl
DQo+ID4+PiArICAgICAqIHJlcGxhY2VkIGJ5IHNwZWNpZmljIGRldmljZSBtZW1vcnkgbGF5b3V0
IHdoZW4gRkRUIGdldHMgcGFyc2VkLg0KPiA+Pg0KPiA+PiBJIHdvdWxkIHJhdGhlciBub3QgbWVu
dGlvbiAiRkRUIiBoZXJlIGJlY2F1c2UgdGhpcyBjb2RlIGlzDQo+ID4+IGluZGVwZW5kZW50IHRv
IHRoZSBmaXJtd2FyZSB0YWJsZSB1c2VkLg0KPiA+Pg0KPiA+PiBIb3dldmVyLCBhbnkgcmVhc29u
IHRvIHVzZSBhIHRyYW5zaWVudCByZWdpb24gcmF0aGVyIHRoYW4gdGhlIG9uZQ0KPiA+PiB0aGF0
IHdpbGwgYmUgdXNlZCBmb3IgdGhlIFVBUlQgZHJpdmVyPw0KPiA+Pg0KPiA+DQo+ID4gV2UgZG9u
4oCZdCB3YW50IHRvIGRlZmluZSBhIE1QVSByZWdpb24gZm9yIGVhY2ggZGV2aWNlIGRyaXZlci4g
SXQgd2lsbA0KPiA+IGV4aGF1c3QgTVBVIHJlZ2lvbnMgdmVyeSBxdWlja2x5Lg0KPiBXaGF0IHRo
ZSB1c3VhbCBzaXplIG9mIGFuIE1QVT8NCj4gDQo+IEhvd2V2ZXIsIGV2ZW4gaWYgeW91IGRvbid0
IHdhbnQgdG8gZGVmaW5lIG9uZSBmb3IgZXZlcnkgZGV2aWNlLCBpdCBzdGlsbCBzZWVtDQo+IHRv
IGJlIHNlbnNpYmxlIHRvIGRlZmluZSBhIGZpeGVkIHRlbXBvcmFyeSBvbmUgZm9yIHRoZSBlYXJs
eSBVQVJUIGFzIHRoaXMNCj4gd291bGQgc2ltcGxpZnkgdGhlIGFzc2VtYmx5IGNvZGUuDQo+IA0K
DQpXZSB3aWxsIGFkZCBmaXhlZCBNUFUgcmVnaW9ucyBmb3IgWGVuIHN0YXRpYyBoZWFwIGluIGZ1
bmN0aW9uIHNldHVwX21tLg0KSWYgd2UgcHV0IGVhcmx5IHVhcnQgcmVnaW9uIGluIGZyb250KGZp
eGVkIHJlZ2lvbiBwbGFjZSksIGl0IHdpbGwgbGVhdmUgaG9sZXMNCmxhdGVyIGFmdGVyIHJlbW92
aW5nIGl0Lg0KDQo+IA0KPiA+IEluIGNvbW1pdCAiIFtQQVRDSCB2MiAyOC80MF0geGVuL21wdTog
bWFwIGJvb3QgbW9kdWxlIHNlY3Rpb24gaW4gTVBVDQo+ID4gc3lzdGVtIiwNCj4gDQo+IERpZCB5
b3UgbWVhbiBwYXRjaCAjMjc/DQo+IA0KPiA+IEEgbmV3IEZEVCBwcm9wZXJ0eSBgbXB1LGRldmlj
ZS1tZW1vcnktc2VjdGlvbmAgd2lsbCBiZSBpbnRyb2R1Y2VkIGZvcg0KPiA+IHVzZXJzIHRvIHN0
YXRpY2FsbHkgY29uZmlndXJlIHRoZSB3aG9sZSBzeXN0ZW0gZGV2aWNlIG1lbW9yeSB3aXRoIHRo
ZQ0KPiBsZWFzdCBudW1iZXIgb2YgbWVtb3J5IHJlZ2lvbnMgaW4gRGV2aWNlIFRyZWUuDQo+ID4g
VGhpcyBzZWN0aW9uIHNoYWxsIGNvdmVyIGFsbCBkZXZpY2VzIHRoYXQgd2lsbCBiZSB1c2VkIGlu
IFhlbiwgbGlrZSBgVUFSVGAsDQo+IGBHSUNgLCBldGMuDQo+ID4gRm9yIEZWUF9CYXNlUl9BRU12
OFIsIHdlIGhhdmUgdGhlIGZvbGxvd2luZyBkZWZpbml0aW9uOg0KPiA+IGBgYA0KPiA+IG1wdSxk
ZXZpY2UtbWVtb3J5LXNlY3Rpb24gPSA8MHgwIDB4ODAwMDAwMDAgMHgwIDB4N2ZmZmYwMDA+OyBg
YGANCj4gDQo+IEkgYW0gYSBiaXQgd29ycnkgdGhpcyB3aWxsIGJlIGEgcmVjaXBlIGZvciBtaXN0
YWtlLiBEbyB5b3UgaGF2ZSBhbiBleGFtcGxlDQo+IHdoZXJlIHRoZSBNUFUgd2lsbCBiZSBleGhh
dXN0ZWQgaWYgd2UgcmVzZXJ2ZSBzb21lIGVudHJpZXMgZm9yIGVhY2ggZGV2aWNlDQo+IChvciBz
b21lKT8NCj4gDQoNClllcywgd2UgaGF2ZSBpbnRlcm5hbCBwbGF0Zm9ybSB3aGVyZSBNUFUgcmVn
aW9ucyBhcmUgb25seSAxNi4gSXQgd2lsbCBhbG1vc3QgZWF0IHVwDQphbGwgTVBVIHJlZ2lvbnMg
YmFzZWQgb24gY3VycmVudCBpbXBsZW1lbnRhdGlvbiwgd2hlbiBsYXVuY2hpbmcgdHdvIGd1ZXN0
cyBpbiBwbGF0Zm9ybS4NCg0KTGV0J3MgY2FsY3VsYXRlIHRoZSBtb3N0IHNpbXBsZSBzY2VuYXJp
bzoNClRoZSBmb2xsb3dpbmcgaXMgTVBVLXJlbGF0ZWQgc3RhdGljIGNvbmZpZ3VyYXRpb24gaW4g
ZGV2aWNlIHRyZWU6DQpgYGANCiAgICAgICAgbXB1LGJvb3QtbW9kdWxlLXNlY3Rpb24gPSA8MHgw
IDB4MTAwMDAwMDAgMHgwIDB4MTAwMDAwMDA+Ow0KICAgICAgICBtcHUsZ3Vlc3QtbWVtb3J5LXNl
Y3Rpb24gPSA8MHgwIDB4MjAwMDAwMDAgMHgwIDB4NDAwMDAwMDA+Ow0KICAgICAgICBtcHUsZGV2
aWNlLW1lbW9yeS1zZWN0aW9uID0gPDB4MCAweDgwMDAwMDAwIDB4MCAweDdmZmZmMDAwPjsNCiAg
ICAgICAgbXB1LHNoYXJlZC1tZW1vcnktc2VjdGlvbiA9IDwweDAgMHg3YTAwMDAwMCAweDAgMHgw
MjAwMDAwMD47DQoNCiAgICAgICAgeGVuLHN0YXRpYy1oZWFwID0gPDB4MCAweDYwMDAwMDAwIDB4
MCAweDFhMDAwMDAwPjsNCmBgYA0KQXQgdGhlIGVuZCBvZiB0aGUgYm9vdCwgYmVmb3JlIHJlc2h1
ZmZsaW5nLCB0aGUgTVBVIHJlZ2lvbiB1c2FnZSB3aWxsIGJlIGFzIGZvbGxvd3M6DQo3IChkZWZp
bmVkIGluIGFzc2VtYmx5KSArIEZEVChlYXJseV9mZHRfbWFwKSArIDUgKGF0IGxlYXN0IG9uZSBy
ZWdpb24gZm9yIGVhY2ggIm1wdSx4eHgtbWVtb3J5LXNlY3Rpb24iKS4NCg0KVGhhdCB3aWxsIGJl
IGFscmVhZHkgYXQgbGVhc3QgMTMgTVBVIHJlZ2lvbnMgO1wuDQoNCj4gQ2hlZXJzLA0KPiANCj4g
LS0NCj4gSnVsaWVuIEdyYWxsDQoNCkNoZWVycywNCg0KLS0NClBlbm55IFpoZW5nDQo=


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 06:37:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 06:37:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486579.753946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMNnI-00030L-6f; Mon, 30 Jan 2023 06:37:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486579.753946; Mon, 30 Jan 2023 06:37:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMNnI-00030E-3q; Mon, 30 Jan 2023 06:37:32 +0000
Received: by outflank-mailman (input) for mailman id 486579;
 Mon, 30 Jan 2023 06:37:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMNnH-000308-4W
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 06:37:31 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 90acb5c4-a068-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 07:37:28 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D5623211C5;
 Mon, 30 Jan 2023 06:37:27 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 97F8113A06;
 Mon, 30 Jan 2023 06:37:27 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id zyHQI6dl12MKHwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 06:37:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90acb5c4-a068-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675060647; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=dLwfZqKuhcJFgmp1664wEGpvEiDvYtty2a0eFpAy+rE=;
	b=QDg4rBrWSDdxymNPi7Wk1oRBwD/lRh62A0ygeTeiYWamCO6nYGf4tya1sQVdPEUeTcFMvu
	RdbG4IgN/D38PDRoRcbU10HnfHimDnWu1ZuLuM3+ADWB55d2uUnWcYo5zqFYosIuh0Henm
	+HB+7PbbPXFikCM07CvSWmoErRBDujw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/public: move xenstore related doc into 9pfs.h
Date: Mon, 30 Jan 2023 07:37:25 +0100
Message-Id: <20230130063725.22846-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Xenstore related documentation is currently to be found in
docs/misc/9pfs.pandoc instead of the related header file
xen/include/public/io/9pfs.h, as is the case for most other
paravirtualized device protocols.

There is a comment in the header pointing at the document, but the
given file name is wrong. Additionally such headers are meant to be
copied into consuming projects (Linux kernel, qemu, etc.), so pointing
at a doc file in the Xen git repository isn't really helpful for the
consumers of the header.

This situation is far from ideal, as evidenced by the fact that
neither qemu nor the Linux kernel implements the device attach/detach
protocol correctly. Additionally the documented Xenstore entries do
not match reality, as the "tag" Xenstore entry is on the frontend
side, not on the backend one.

Change that by moving the Xenstore related 9pfs documentation from
docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
the wrong Xenstore entry detail.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/9pfs.pandoc        | 151 --------------------------------
 xen/include/public/io/9pfs.h | 165 ++++++++++++++++++++++++++++++++++-
 2 files changed, 164 insertions(+), 152 deletions(-)

diff --git a/docs/misc/9pfs.pandoc b/docs/misc/9pfs.pandoc
index b034fb5fa6..00f6817a01 100644
--- a/docs/misc/9pfs.pandoc
+++ b/docs/misc/9pfs.pandoc
@@ -59,157 +59,6 @@ This document does not cover the 9pfs client/server design or
 implementation, only the transport for it.
 
 
-## Xenstore
-
-The frontend and the backend connect via xenstore to exchange
-information. The toolstack creates front and back nodes with state
-[XenbusStateInitialising]. The protocol node name is **9pfs**.
-
-Multiple rings are supported for each frontend and backend connection.
-
-### Backend XenBus Nodes
-
-Backend specific properties, written by the backend, read by the
-frontend:
-
-    versions
-         Values:         <string>
-
-         List of comma separated protocol versions supported by the backend.
-         For example "1,2,3". Currently the value is just "1", as there is
-         only one version. N.B.: this is the version of the Xen trasport
-         protocol, not the version of 9pfs supported by the server.
-
-    max-rings
-         Values:         <uint32_t>
-
-         The maximum supported number of rings per frontend.
-
-    max-ring-page-order
-         Values:         <uint32_t>
-
-         The maximum supported size of a memory allocation in units of
-         log2n(machine pages), e.g. 1 = 2 pages, 2 == 4 pages, etc. It
-         must be at least 1.
-
-Backend configuration nodes, written by the toolstack, read by the
-backend:
-
-    path
-         Values:         <string>
-
-         Host filesystem path to share.
-
-    tag
-         Values:         <string>
-
-         Alphanumeric tag that identifies the 9pfs share. The client needs
-         to know the tag to be able to mount it.
-
-    security-model
-         Values:         "none"
-
-         *none*: files are stored using the same credentials as they are
-                 created on the guest (no user ownership squash or remap)
-         Only "none" is supported in this version of the protocol.
-
-### Frontend XenBus Nodes
-
-    version
-         Values:         <string>
-
-         Protocol version, chosen among the ones supported by the backend
-         (see **versions** under [Backend XenBus Nodes]). Currently the
-         value must be "1".
-
-    num-rings
-         Values:         <uint32_t>
-
-         Number of rings. It needs to be lower or equal to max-rings.
-
-    event-channel-<num> (event-channel-0, event-channel-1, etc)
-         Values:         <uint32_t>
-
-         The identifier of the Xen event channel used to signal activity
-         in the ring buffer. One for each ring.
-
-    ring-ref<num> (ring-ref0, ring-ref1, etc)
-         Values:         <uint32_t>
-
-         The Xen grant reference granting permission for the backend to
-         map a page with information to setup a share ring. One for each
-         ring.
-
-### State Machine
-
-Initialization:
-
-    *Front*                               *Back*
-    XenbusStateInitialising               XenbusStateInitialising
-    - Query virtual device                - Query backend device
-      properties.                           identification data.
-    - Setup OS device instance.           - Publish backend features
-    - Allocate and initialize the           and transport parameters
-      request ring.                                      |
-    - Publish transport parameters                       |
-      that will be in effect during                      V
-      this connection.                            XenbusStateInitWait
-                 |
-                 |
-                 V
-       XenbusStateInitialised
-
-                                          - Query frontend transport parameters.
-                                          - Connect to the request ring and
-                                            event channel.
-                                                         |
-                                                         |
-                                                         V
-                                                 XenbusStateConnected
-
-     - Query backend device properties.
-     - Finalize OS virtual device
-       instance.
-                 |
-                 |
-                 V
-        XenbusStateConnected
-
-Once frontend and backend are connected, they have a shared page per
-ring, which are used to setup the rings, and an event channel per ring,
-which are used to send notifications.
-
-Shutdown:
-
-    *Front*                            *Back*
-    XenbusStateConnected               XenbusStateConnected
-                |
-                |
-                V
-       XenbusStateClosing
-
-                                       - Unmap grants
-                                       - Unbind evtchns
-                                                 |
-                                                 |
-                                                 V
-                                         XenbusStateClosing
-
-    - Unbind evtchns
-    - Free rings
-    - Free data structures
-               |
-               |
-               V
-       XenbusStateClosed
-
-                                       - Free remaining data structures
-                                                 |
-                                                 |
-                                                 V
-                                         XenbusStateClosed
-
-
 ## Ring Setup
 
 The shared page has the following layout:
diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
index ad26bd69eb..6b5d1d8ad9 100644
--- a/xen/include/public/io/9pfs.h
+++ b/xen/include/public/io/9pfs.h
@@ -14,9 +14,172 @@
 #include "ring.h"
 
 /*
- * See docs/misc/9pfs.markdown in xen.git for the full specification:
+ * See docs/misc/9pfs.pandoc in xen.git for the full specification:
  * https://xenbits.xen.org/docs/unstable/misc/9pfs.html
  */
+
+/*
+ ******************************************************************************
+ *                                  Xenstore
+ ******************************************************************************
+ *
+ * The frontend and the backend connect via xenstore to exchange
+ * information. The toolstack creates front and back nodes with state
+ * [XenbusStateInitialising]. The protocol node name is **9pfs**.
+ *
+ * Multiple rings are supported for each frontend and backend connection.
+ *
+ ******************************************************************************
+ *                            Backend XenBus Nodes
+ ******************************************************************************
+ *
+ * Backend specific properties, written by the backend, read by the
+ * frontend:
+ *
+ *    versions
+ *         Values:         <string>
+ *
 *         List of comma-separated protocol versions supported by the backend.
+ *         For example "1,2,3". Currently the value is just "1", as there is
 *         only one version. N.B.: this is the version of the Xen transport
+ *         protocol, not the version of 9pfs supported by the server.
+ *
+ *    max-rings
+ *         Values:         <uint32_t>
+ *
+ *         The maximum supported number of rings per frontend.
+ *
+ *    max-ring-page-order
+ *         Values:         <uint32_t>
+ *
+ *         The maximum supported size of a memory allocation in units of
 *         log2(machine pages), e.g. 1 == 2 pages, 2 == 4 pages, etc. It
+ *         must be at least 1.
+ *
+ * Backend configuration nodes, written by the toolstack, read by the
+ * backend:
+ *
+ *    path
+ *         Values:         <string>
+ *
+ *         Host filesystem path to share.
+ *
+ *    security-model
+ *         Values:         "none"
+ *
 *         *none*: files are stored with the same credentials as they have
 *                 in the guest (no user ownership squashing or remapping)
+ *         Only "none" is supported in this version of the protocol.
+ *
+ ******************************************************************************
+ *                            Frontend XenBus Nodes
+ ******************************************************************************
+ *
+ *    version
+ *         Values:         <string>
+ *
+ *         Protocol version, chosen among the ones supported by the backend
+ *         (see **versions** under [Backend XenBus Nodes]). Currently the
+ *         value must be "1".
+ *
+ *    num-rings
+ *         Values:         <uint32_t>
+ *
 *         Number of rings. It needs to be less than or equal to max-rings.
+ *
+ *    event-channel-<num> (event-channel-0, event-channel-1, etc)
+ *         Values:         <uint32_t>
+ *
+ *         The identifier of the Xen event channel used to signal activity
+ *         in the ring buffer. One for each ring.
+ *
+ *    ring-ref<num> (ring-ref0, ring-ref1, etc)
+ *         Values:         <uint32_t>
+ *
+ *         The Xen grant reference granting permission for the backend to
 *         map a page with information to set up a shared ring. One for each
+ *         ring.
+ *
+ *    tag
+ *         Values:         <string>
+ *
+ *         Alphanumeric tag that identifies the 9pfs share. The client needs
+ *         to know the tag to be able to mount it.
+ *
+ ******************************************************************************
+ *                              State Machine
+ ******************************************************************************
+ *
+ * Initialization:
+ *
+ *    *Front*                               *Back*
+ *    XenbusStateInitialising               XenbusStateInitialising
+ *    - Query virtual device                - Query backend device
+ *      properties.                           identification data.
+ *    - Setup OS device instance.           - Publish backend features
+ *    - Allocate and initialize the           and transport parameters
+ *      request ring.                                      |
+ *    - Publish transport parameters                       |
+ *      that will be in effect during                      V
+ *      this connection.                            XenbusStateInitWait
+ *                 |
+ *                 |
+ *                 V
+ *       XenbusStateInitialised
+ *
+ *                                          - Query frontend transport
+ *                                            parameters.
+ *                                          - Connect to the request ring and
+ *                                            event channel.
+ *                                                         |
+ *                                                         |
+ *                                                         V
+ *                                                 XenbusStateConnected
+ *
+ *     - Query backend device properties.
+ *     - Finalize OS virtual device
+ *       instance.
+ *                 |
+ *                 |
+ *                 V
+ *        XenbusStateConnected
+ *
 * Once frontend and backend are connected, they share one page per ring,
 * which is used to set up the ring, and one event channel per ring, which
 * is used to send notifications.
+ *
+ * Shutdown:
+ *
+ *    *Front*                            *Back*
+ *    XenbusStateConnected               XenbusStateConnected
+ *                |
+ *                |
+ *                V
+ *       XenbusStateClosing
+ *
+ *                                       - Unmap grants
+ *                                       - Unbind evtchns
+ *                                                 |
+ *                                                 |
+ *                                                 V
+ *                                         XenbusStateClosing
+ *
+ *    - Unbind evtchns
+ *    - Free rings
+ *    - Free data structures
+ *               |
+ *               |
+ *               V
+ *       XenbusStateClosed
+ *
+ *                                       - Free remaining data structures
+ *                                                 |
+ *                                                 |
+ *                                                 V
+ *                                         XenbusStateClosed
+ *
+ ******************************************************************************
+ */
 DEFINE_XEN_FLEX_RING_AND_INTF(xen_9pfs);
 
 #endif
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 07:33:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 07:33:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486583.753956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMOfc-0001KK-7O; Mon, 30 Jan 2023 07:33:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486583.753956; Mon, 30 Jan 2023 07:33:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMOfc-0001KD-4l; Mon, 30 Jan 2023 07:33:40 +0000
Received: by outflank-mailman (input) for mailman id 486583;
 Mon, 30 Jan 2023 07:33:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMOfa-0001K7-Ru
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 07:33:39 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20629.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 67ff93e6-a070-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 08:33:36 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8251.eurprd04.prod.outlook.com (2603:10a6:10:24f::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Mon, 30 Jan
 2023 07:33:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 07:33:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67ff93e6-a070-11ed-b8d1-410ff93cb8f0
Message-ID: <03ce9f48-191e-b1b5-a3b2-8b769aa8feeb@suse.com>
Date: Mon, 30 Jan 2023 08:33:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
Content-Language: en-US
To: Stefano Stabellini <stefano.stabellini@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, george.dunlap@citrix.com,
 andrew.cooper3@citrix.com, roger.pau@citrix.com, Bertrand.Marquis@arm.com,
 julien@xen.org, xen-devel@lists.xenproject.org
References: <20230125205735.2662514-1-sstabellini@kernel.org>
 <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com>
 <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop>
 <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
 <alpine.DEB.2.22.394.2301261041440.1978264@ubuntu-linux-20-04-desktop>
 <db97da84-5f23-e7ed-119b-74aed02fb573@suse.com>
 <alpine.DEB.2.22.394.2301271016360.1978264@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2301271016360.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0122.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8251:EE_
X-MS-Office365-Filtering-Correlation-Id: 45c19137-4344-4bdd-5736-08db02944a92
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 45c19137-4344-4bdd-5736-08db02944a92
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 07:33:33.3983
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Vt6Z3A0vO1csqWxT9Qd11ZPi35iWM3lgLIkNYyW5t861+hegZ1oXefL5tadGBOsQtAaWkno1SlMH0oHLUhMDQg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8251

On 27.01.2023 19:33, Stefano Stabellini wrote:
> On Fri, 27 Jan 2023, Jan Beulich wrote:
>> On 26.01.2023 19:54, Stefano Stabellini wrote:
>> Looking back at the sheet, it says "rule already followed by
>> the community in most cases" which I assume was based on there being
>> only very few violations that are presently reported. Now we've found
>> the frame_table[] issue, I'm inclined to say that the statement was put
>> there by mistake (due to that oversight).
> 
> cppcheck is unable to find violations; we know cppcheck has limitations
> and that's OK.
> 
> Eclair is excellent and finds violations (including the frame_table[]
> issue you mentioned), but currently it doesn't read configs from xen.git
> and we cannot run a test to see if adding a couple of deviations for 2
> macros removes most of the violations. If we want to use Eclair as a
> reference (could be a good idea) then I think we need a better
> integration. I'll talk to Roberto and see if we can arrange something
> better.
> 
> I am writing this with the assumption that if I could show that, as an
> example, adding 2 deviations reduces the Eclair violations down to less
> than 10, then we could adopt the rule. Do you think that would be
> acceptable in your opinion, as a process?

Hmm, to be quite honest: Not sure. Having noticed the oversight of the
frame_table[] issue makes me wonder how much else may be missed in this
same area (18.1, 18.2, and 18.3).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 07:36:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 07:36:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486589.753967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMOi9-00022Y-KJ; Mon, 30 Jan 2023 07:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486589.753967; Mon, 30 Jan 2023 07:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMOi9-00022R-HX; Mon, 30 Jan 2023 07:36:17 +0000
Received: by outflank-mailman (input) for mailman id 486589;
 Mon, 30 Jan 2023 07:36:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMOi8-00022L-54
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 07:36:16 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2089.outbound.protection.outlook.com [40.107.22.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6b7c394-a070-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 08:36:15 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8875.eurprd04.prod.outlook.com (2603:10a6:20b:40a::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 07:36:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 07:36:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6b7c394-a070-11ed-9ec0-891035b88211
Message-ID: <03cfb3a9-66f7-c376-c815-feba34afaf51@suse.com>
Date: Mon, 30 Jan 2023 08:36:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/public: move xenstore related doc into 9pfs.h
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230130063725.22846-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230130063725.22846-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0176.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8875:EE_
X-MS-Office365-Filtering-Correlation-Id: ac8a3b23-65d1-40d9-1730-08db0294a9e3
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ac8a3b23-65d1-40d9-1730-08db0294a9e3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 07:36:13.3110
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pdeAQmxFgSzSck+vvub9keYAEQhwtabXFXw2ojpmnx9Dh0pZ0A66gMOxigxcrRoGtAdlKKSFGDsrkuhPcTgLxQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8875

On 30.01.2023 07:37, Juergen Gross wrote:
> The Xenstore related documentation is currently to be found in
> docs/misc/9pfs.pandoc, instead of the related header file
> xen/include/public/io/9pfs.h like for most other paravirtualized
> device protocols.
> 
> There is a comment in the header pointing at the document, but the
> given file name is wrong. Additionally such headers are meant to be
> copied into consuming projects (Linux kernel, qemu, etc.), so pointing
> at a doc file in the Xen git repository isn't really helpful for the
> consumers of the header.
> 
> This situation is far from ideal, which is already being proved by the
> fact that neither qemu nor the Linux kernel are implementing the
> device attach/detach protocol correctly. Additionally the documented
> Xenstore entries are not matching the reality, as the "tag" Xenstore
> entry is on the frontend side, not on the backend one.
> 
> Change that by moving the Xenstore related 9pfs documentation from
> docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
> the wrong Xenstore entry detail.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  docs/misc/9pfs.pandoc        | 151 --------------------------------
>  xen/include/public/io/9pfs.h | 165 ++++++++++++++++++++++++++++++++++-
>  2 files changed, 164 insertions(+), 152 deletions(-)
> 
> diff --git a/docs/misc/9pfs.pandoc b/docs/misc/9pfs.pandoc
> index b034fb5fa6..00f6817a01 100644
> --- a/docs/misc/9pfs.pandoc
> +++ b/docs/misc/9pfs.pandoc
> @@ -59,157 +59,6 @@ This document does not cover the 9pfs client/server design or
>  implementation, only the transport for it.
>  
>  
> -## Xenstore

Maybe leave a reference here now pointing at the public header?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 07:47:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 07:47:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486597.753977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMOsV-0003jE-M2; Mon, 30 Jan 2023 07:46:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486597.753977; Mon, 30 Jan 2023 07:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMOsV-0003j7-JA; Mon, 30 Jan 2023 07:46:59 +0000
Received: by outflank-mailman (input) for mailman id 486597;
 Mon, 30 Jan 2023 07:46:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMOsT-0003j0-JK
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 07:46:57 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 430fec9f-a072-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 08:46:53 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A93E31F855;
 Mon, 30 Jan 2023 07:46:52 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6379813444;
 Mon, 30 Jan 2023 07:46:52 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id yj1jFux112OhOgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 07:46:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 430fec9f-a072-11ed-b8d1-410ff93cb8f0
Message-ID: <962bd261-74cc-e78d-be54-182e4b9457d8@suse.com>
Date: Mon, 30 Jan 2023 08:46:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230130063725.22846-1-jgross@suse.com>
 <03cfb3a9-66f7-c376-c815-feba34afaf51@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xen/public: move xenstore related doc into 9pfs.h
In-Reply-To: <03cfb3a9-66f7-c376-c815-feba34afaf51@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------BWOUTv0k8SDabJuUN1eDeCCP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------BWOUTv0k8SDabJuUN1eDeCCP
Content-Type: multipart/mixed; boundary="------------OBH3csnzWrCvYAGhXPsvQDvJ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Message-ID: <962bd261-74cc-e78d-be54-182e4b9457d8@suse.com>
Subject: Re: [PATCH] xen/public: move xenstore related doc into 9pfs.h
References: <20230130063725.22846-1-jgross@suse.com>
 <03cfb3a9-66f7-c376-c815-feba34afaf51@suse.com>
In-Reply-To: <03cfb3a9-66f7-c376-c815-feba34afaf51@suse.com>

--------------OBH3csnzWrCvYAGhXPsvQDvJ
Content-Type: multipart/mixed; boundary="------------tnIMrbXhoJ8ZtA5q1DC00ZsX"

--------------tnIMrbXhoJ8ZtA5q1DC00ZsX
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 30.01.23 08:36, Jan Beulich wrote:
> On 30.01.2023 07:37, Juergen Gross wrote:
>> The Xenstore related documentation is currently to be found in
>> docs/misc/9pfs.pandoc, instead of the related header file
>> xen/include/public/io/9pfs.h like for most other paravirtualized
>> device protocols.
>>
>> There is a comment in the header pointing at the document, but the
>> given file name is wrong. Additionally such headers are meant to be
>> copied into consuming projects (Linux kernel, qemu, etc.), so pointing
>> at a doc file in the Xen git repository isn't really helpful for the
>> consumers of the header.
>>
>> This situation is far from ideal, which is already being proved by the
>> fact that neither qemu nor the Linux kernel are implementing the
>> device attach/detach protocol correctly. Additionally the documented
>> Xenstore entries are not matching the reality, as the "tag" Xenstore
>> entry is on the frontend side, not on the backend one.
>>
>> Change that by moving the Xenstore related 9pfs documentation from
>> docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
>> the wrong Xenstore entry detail.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   docs/misc/9pfs.pandoc        | 151 --------------------------------
>>   xen/include/public/io/9pfs.h | 165 ++++++++++++++++++++++++++++++++++-
>>   2 files changed, 164 insertions(+), 152 deletions(-)
>>
>> diff --git a/docs/misc/9pfs.pandoc b/docs/misc/9pfs.pandoc
>> index b034fb5fa6..00f6817a01 100644
>> --- a/docs/misc/9pfs.pandoc
>> +++ b/docs/misc/9pfs.pandoc
>> @@ -59,157 +59,6 @@ This document does not cover the 9pfs client/server design or
>>   implementation, only the transport for it.
>>
>>
>> -## Xenstore
>
> Maybe leave a reference here now pointing at the public header?

Okay, would you be fine with:

   ## Configuration

   The frontend and backend are configured via Xenstore. Have a look at
   [header] for the detailed Xenstore entries and the connection protocol.


Juergen
--------------tnIMrbXhoJ8ZtA5q1DC00ZsX--

--------------OBH3csnzWrCvYAGhXPsvQDvJ--

--------------BWOUTv0k8SDabJuUN1eDeCCP--


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 07:55:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 07:55:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486602.753986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMP0H-0005BN-Ei; Mon, 30 Jan 2023 07:55:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486602.753986; Mon, 30 Jan 2023 07:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMP0H-0005BG-Bm; Mon, 30 Jan 2023 07:55:01 +0000
Received: by outflank-mailman (input) for mailman id 486602;
 Mon, 30 Jan 2023 07:55:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMP0H-0005BA-0i
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 07:55:01 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2049.outbound.protection.outlook.com [40.107.21.49])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 650bb10f-a073-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 08:54:59 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7515.eurprd04.prod.outlook.com (2603:10a6:10:202::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 07:54:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 07:54:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 650bb10f-a073-11ed-9ec0-891035b88211
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e48fcc4a-4ad8-d478-1e05-1528bdb35a5f@suse.com>
Date: Mon, 30 Jan 2023 08:54:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/public: move xenstore related doc into 9pfs.h
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230130063725.22846-1-jgross@suse.com>
 <03cfb3a9-66f7-c376-c815-feba34afaf51@suse.com>
 <962bd261-74cc-e78d-be54-182e4b9457d8@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <962bd261-74cc-e78d-be54-182e4b9457d8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0041.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7515:EE_
X-MS-Office365-Filtering-Correlation-Id: 0360371d-98d5-4b4e-0e5b-08db029747cd
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0360371d-98d5-4b4e-0e5b-08db029747cd
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 07:54:57.2713
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pBw3MWt2iCdbScHvKfdWhwJnDcWX9LOjjt73U4VOyuti982NiGXobm1moS6qQkQw7bazBG3bIryGqWlW7n6flw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7515

On 30.01.2023 08:46, Juergen Gross wrote:
> On 30.01.23 08:36, Jan Beulich wrote:
>> On 30.01.2023 07:37, Juergen Gross wrote:
>>> The Xenstore related documentation is currently to be found in
>>> docs/misc/9pfs.pandoc, instead of the related header file
>>> xen/include/public/io/9pfs.h like for most other paravirtualized
>>> device protocols.
>>>
>>> There is a comment in the header pointing at the document, but the
>>> given file name is wrong. Additionally such headers are meant to be
>>> copied into consuming projects (Linux kernel, qemu, etc.), so pointing
>>> at a doc file in the Xen git repository isn't really helpful for the
>>> consumers of the header.
>>>
>>> This situation is far from ideal, which is already being proved by the
>>> fact that neither qemu nor the Linux kernel are implementing the
>>> device attach/detach protocol correctly. Additionally the documented
>>> Xenstore entries are not matching the reality, as the "tag" Xenstore
>>> entry is on the frontend side, not on the backend one.
>>>
>>> Change that by moving the Xenstore related 9pfs documentation from
>>> docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
>>> the wrong Xenstore entry detail.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   docs/misc/9pfs.pandoc        | 151 --------------------------------
>>>   xen/include/public/io/9pfs.h | 165 ++++++++++++++++++++++++++++++++++-
>>>   2 files changed, 164 insertions(+), 152 deletions(-)
>>>
>>> diff --git a/docs/misc/9pfs.pandoc b/docs/misc/9pfs.pandoc
>>> index b034fb5fa6..00f6817a01 100644
>>> --- a/docs/misc/9pfs.pandoc
>>> +++ b/docs/misc/9pfs.pandoc
>>> @@ -59,157 +59,6 @@ This document does not cover the 9pfs client/server design or
>>>   implementation, only the transport for it.
>>>   
>>>   
>>> -## Xenstore
>>
>> Maybe leave a reference here now pointing at the public header?
> 
> Okay, would you be fine with:
> 
>    ## Configuration
> 
>    The frontend and backend are configured via Xenstore. Have a look at
>    [header] for the detailed Xenstore entries and the connection protocol.

Sure. (Personally I'd use "See [header] ...", but as so often a native speaker
might actually point out that one wouldn't word it like this.)

Jan
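
[Archive editor's note: the entry-placement point under discussion can be
modeled in a few lines. This is an illustrative sketch only: the directory
layout follows the usual Xen PV device convention, and the entry names other
than "tag" are assumptions for illustration, not taken from the patch.]

```python
# Sketch of the per-device Xenstore paths for a Xen PV device such as 9pfs.
# Entry names other than "tag" are illustrative assumptions.

def backend_dir(backend_domid: int, frontend_domid: int, devid: int) -> str:
    """Per-device directory read by the backend domain."""
    return f"/local/domain/{backend_domid}/backend/9pfs/{frontend_domid}/{devid}"

def frontend_dir(frontend_domid: int, devid: int) -> str:
    """Per-device directory read by the frontend domain."""
    return f"/local/domain/{frontend_domid}/device/9pfs/{devid}"

# The point of the fix: "tag" is a frontend entry, not a backend one.
frontend_entries = {"backend", "backend-id", "state", "version", "tag"}
backend_entries = {"frontend", "frontend-id", "state", "versions"}

print(frontend_dir(1, 0))  # -> /local/domain/1/device/9pfs/0
assert "tag" in frontend_entries and "tag" not in backend_entries
```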


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 08:03:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 08:03:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486612.753997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMP8d-0007Tl-Mx; Mon, 30 Jan 2023 08:03:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486612.753997; Mon, 30 Jan 2023 08:03:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMP8d-0007Te-Ju; Mon, 30 Jan 2023 08:03:39 +0000
Received: by outflank-mailman (input) for mailman id 486612;
 Mon, 30 Jan 2023 08:03:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMP8c-0007TR-1S
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 08:03:38 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2068.outbound.protection.outlook.com [40.107.247.68])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 98848321-a074-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 09:03:35 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7432.eurprd04.prod.outlook.com (2603:10a6:10:1a9::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Mon, 30 Jan
 2023 08:03:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 08:03:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98848321-a074-11ed-b8d1-410ff93cb8f0
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <44118a7a-3ef6-e984-babb-f5b75e5e53a2@suse.com>
Date: Mon, 30 Jan 2023 09:03:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: Ping: [PATCH] Argo: don't obtain excess page references
Content-Language: en-US
To: Christopher Clark <christopher.w.clark@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f9cd7b84-6f51-d797-cd2a-b9c9bc62b0f6@suse.com>
 <d03dc8b3-4c1f-2db0-4d97-944972dc6e06@suse.com>
 <CACMJ4GbUjLczb9ru_QUERGaNCModspnqgGwAgCqUN+oZ_90NDA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CACMJ4GbUjLczb9ru_QUERGaNCModspnqgGwAgCqUN+oZ_90NDA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0040.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7432:EE_
X-MS-Office365-Filtering-Correlation-Id: 5a96e581-e809-4256-4e19-08db02987bb4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
 =?utf-8?B?amhMNUg2UGUrVk5OVXNtd0hrU3Q2SlMyRWk2YVJvU2lWZ0dCZk0xazhCSjVw?=
 =?utf-8?B?QmZ1d0FHemg5a1NvaXJNYlZDYzBuaWpZbjlIQ2RjbzJjT1cvMURXMHFJYVRN?=
 =?utf-8?Q?qM+SfIPpqFpDENJsUV6mCTJYH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a96e581-e809-4256-4e19-08db02987bb4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 08:03:33.8786
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 06nX71JXq5o35c8QKwcGx07v7rmt0f1t30/NOlCAEStwYS+BdQdCUq0Q0rXUqx1EG0dxMmf+QzT3zyU7Nq7Z3w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7432

On 30.01.2023 05:35, Christopher Clark wrote:
> On Mon, Nov 21, 2022 at 4:41 AM Jan Beulich <jbeulich@suse.com> wrote:
> 
>> On 11.10.2022 11:28, Jan Beulich wrote:
>>> find_ring_mfn() already holds a page reference when trying to obtain a
>>> writable type reference. We shouldn't make assumptions on the general
>>> reference count limit being effectively "infinity". Obtain merely a type
>>> ref, re-using the general ref by only dropping the previously acquired
>>> one in the case of an error.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Ping?
> 
> Sorry it has taken me so long to review this patch, and thank you for
> posting it. The points raised are helpful.
> 
> Wrt the patch - I can't ack because:
> the general ref that is already held is from the page owner, and it may
> actually be foreign; so the second ref acquire is currently ensuring that
> it is a match for the owner of the ring. That needs addressing.

I'm afraid I may not understand your reply: Are you saying there's something
wrong with the change? Or are you saying there's something wrong that merely
becomes apparent due to the change? Or yet something else?
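For reference, the pattern the patch description outlines -- already holding one general page reference, then obtaining only a type reference and re-using the held general ref, dropping it only on error -- can be sketched with a self-contained toy model (invented names and a made-up refcount limit for illustration; this is not Xen's real struct page_info or its get_page()/get_page_type() API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model only: the limit stands in for the general reference
 * count limit, which must not be assumed to be "infinity". */
#define TOY_REF_LIMIT 4

struct toy_page {
    uint32_t refs;      /* general reference count (bounded) */
    uint32_t type_refs; /* writable-type reference count     */
};

static bool toy_get_page(struct toy_page *pg)
{
    if (pg->refs >= TOY_REF_LIMIT)  /* limit reached: fail */
        return false;
    pg->refs++;
    return true;
}

static void toy_put_page(struct toy_page *pg)
{
    pg->refs--;
}

static bool toy_get_type_ref(struct toy_page *pg, bool type_ok)
{
    if (!type_ok)
        return false;
    pg->type_refs++;
    return true;
}

/* Caller already holds one general ref from toy_get_page().  Take
 * merely a type ref, re-using that general ref; only on error is
 * the previously acquired general ref dropped. */
static bool toy_ring_ref(struct toy_page *pg, bool type_ok)
{
    if (!toy_get_type_ref(pg, type_ok)) {
        toy_put_page(pg);  /* error: drop the earlier general ref */
        return false;
    }
    return true;           /* success: refs unchanged, type_refs+1 */
}
```

On success the caller's original general reference continues to back the type reference, so no second general ref is consumed.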

> Am supportive of points raised:
> - review + limit ref counts taken
>     - better to not need two general page refs
> - a type ref rather than general may be sufficient to hold for the ring
> lifetime?
> - paging_mark_dirty at writes
> - p2m log dirty would be better to be allowed than EAGAIN

I can certainly extend the patch to a series, but that'll make sense only
if ...

> - allowing mapping of foreign pages may have uses though likely also
> challenging
> 
> I should let you know that my time available is extremely limited at the
> moment, sorry.

... it would then also be looked at within a reasonable amount of time. I'm
already sitting on way too many unreviewed patches.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 08:11:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 08:11:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486620.754007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPGC-0000kd-M0; Mon, 30 Jan 2023 08:11:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486620.754007; Mon, 30 Jan 2023 08:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPGC-0000kW-J9; Mon, 30 Jan 2023 08:11:28 +0000
Received: by outflank-mailman (input) for mailman id 486620;
 Mon, 30 Jan 2023 08:11:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMPGB-0000kQ-CG
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 08:11:27 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0601.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b04839fe-a075-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 09:11:25 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6882.eurprd04.prod.outlook.com (2603:10a6:208:184::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Mon, 30 Jan
 2023 08:11:22 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 08:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b04839fe-a075-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=inXnDK9gWOhxeL3W7EgTIcjJxHtdVmKr0K79BjATXFOFjVTilTI/Gnbpexol2Qg2HyNkCPoEZJLZCwfqKTn+UaRyKZKS5OOK/I775F+CXWbrXChv6Ybr8JrNz1vSawLJLpF8u+KeV7hbLyvItQ6JLcrOtPOxUh//L5HQDn0Fyqw5+Rv+xsXGx7ZyJhx5E0DH5uXBB7SjQ+94OJ31W4tnsrTEzU6KZmuME8dGbtsegdSWTflBRacb0vpNj2Z6jcuB6kZN0NcMmBM0n+znSJDNnBTdTU+8D3ATEs97pafLlHaNGM64EaNT2hB/S/l+sZAY2ZqOD2j/AYKHafP4vL3lEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zYG/MPUoOhkYDW4nIF++KsWNArgHFC3Xi/Ub/Kx07fA=;
 b=SHw2OjJkxzx4B33KyZDgvUOM4a4I02y/dWd0CiHs1CADDNWxf13VZwvBs6AXScByz76ENXB0NHjqMVOeFX2pmTuZtkWHIpdzqur1e5DexgL4lTgakBQAhcjs9Tiwxf1ECB9XJMR2iTuMPjFkV7aPM3x9gKbPJAAiBXP8uQN52cqEa2oe2YZ+6ecQSv3KlxG4uH7xK19Hqfifl5362esUfYv72yUqQ4ktf1iqu2KO8/ZfO2f8KOlmOAdymI8bKGS1bFvsAGjTEpwwm0opC1nWJeeB1Z6dR27G+9rlEfGLcFpjwJJNawhrncBvBz+8vAresVfYCHSalhi0I4Spa1I/tQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zYG/MPUoOhkYDW4nIF++KsWNArgHFC3Xi/Ub/Kx07fA=;
 b=JKdMGg8crSbZ6gPVF1K2Q15m3fAC31xzvIOzXtkcydLVYZ3vhqzLxrz6+usIZhsKf9Z0UVeyAPWJoMnjSu9Mngpw8Phf8atlsbsPzinsy3DWtvgd1kRz4MpQv6geJKDZnvuGPINsdR4sDW+/AMqZc2TvX35tQyUjmfrwDWmJMV5a+Wgy5S9FQ1rI44qjud9h53c8siMqli4Ivy6N+xYO+mbZy9XZQg14gUqGw5xsN1ZI5CjVVgNrBquDCJOGQcdMpM9c0Tdo7SiBsP4OEXuoFni2YlKqZ0JuoxL69OKAyL8/dd/ajG4zzc4h1UOZBokDIb/FiPkM0+zsOJAg7RqdWw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <eddf2879-1914-9401-d715-b711aaa7ed6c@suse.com>
Date: Mon, 30 Jan 2023 09:11:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/common: Constify the parameter of _spin_is_locked()
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230127190516.52994-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230127190516.52994-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0116.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6882:EE_
X-MS-Office365-Filtering-Correlation-Id: e00c7e18-d71c-4001-3737-08db029992dc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JrCCPy+j21OLnCkDAav8efZpdmmFXmkx3gArSB8uJaPPHKopvn0n8ySLQqVWEugp/ZSXJjL/mUOkx4h0a2yqlgpKtA0bC2Q3q23LEar+tELtsT/z304fVdRyZmUwxXuZv43Ub3PAReGft5rdDvgQGNMEj5dAPRK4C8lxfok3AXoXzqW3eIGrQV9iOoj/Q1NyAG0hcmh9sOQAYvbqQTwCi1rHRWmZyPCCmYk0TAAp4+14OACjfM4K7xtirBjj/I+VGl4OVF3e4DMeeuFbSQanPCTQKCfpJ7lUouFvst6prK30ndYuSJeCgPqoYJLT/E+D7yd7wxLSktwEamRgcZEg7sLxJtfGhnNGW9L+7As1EInf+x107qqanlgkTaUEFX5+a+Pd1QIzLqexWrFkJtcpp75PhfAo2+6x3TYlCe19SgYDQd8zxqlJZN6w7JuD/5nEUgRVaTFdJ82bUtsNnAGL1ioi1dNLcyLFKR4xMzmMU7c6a4JgdPaglu5GSHIAeifZ1BOZXMHTlFz+eAeVvYzJz82rj0cVtNMVA5IAaZO4vJ9hKS7jnxthPsatoCID60t5U83YtwpK7GDrO5Af4PSaT3yqTat98w7jIU0KRQQIH78UQZ6O6fTBjKyUH6WVihvbKzyJoASfLK/C5F8QvdNPsPARZ8njJhqSTfReqcuZIxUqLnJta6UAEOboEOLyGZ0Ir7hJwQMrSmHbqaTf/E4dkT5m6lzidI8K01JyZ1xAeI8=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(366004)(376002)(346002)(39860400002)(396003)(136003)(451199018)(31686004)(83380400001)(31696002)(86362001)(36756003)(66946007)(66556008)(66476007)(6916009)(8676002)(2906002)(4744005)(4326008)(41300700001)(6486002)(478600001)(316002)(54906003)(5660300002)(38100700002)(53546011)(6506007)(2616005)(26005)(6512007)(8936002)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bkhQQkpBcXVIVTlpbVNVSzAxcm4wWkZ2RldtMWFPQVFSOXZIYldyZStMY0t3?=
 =?utf-8?B?NkFSa3JFZERjUmxQWFE0QXhjV1dnM0JPRWowbEV5eFJLcm45WDI5VnpiTS9Y?=
 =?utf-8?B?Qm44VGxCWWh3OXhIVU9uRGw4SXhyMDAySjBXTFVoazI5WGQyTHhKa1ltZnNo?=
 =?utf-8?B?QStNamZ3UTJPZEVLemlrVWU0aHlIWDJNWENobTUzTHM5SEtJOEFJWGd0aUNq?=
 =?utf-8?B?ZGJMTU80M21HVzB4RXV0MVdoRUNtQlVzRnNuaGYxTzBLUlFiNTIyN0liTFlq?=
 =?utf-8?B?QS81SXlqNkZ3SmZVa0hOa1VMR2hZVjVsei9hRW9hTUZJa2dXeEo0aFhlRllL?=
 =?utf-8?B?THRsTWRQbDNXL2g1dEwwd3hsWUI5aEowYnM2VU9PSUVPZU04OHZmQVdHTy8v?=
 =?utf-8?B?dmd4dUdVQU00VHp1THBTeHArSEQ3Vzc3VXdBUlhWT2U2ckQwZkNWQy9jVVRs?=
 =?utf-8?B?eE96SU54UjlPVHdURE5QZjVBUHRyZnNHeXp4bjlrZFJRVElBei9HRngzU0Ux?=
 =?utf-8?B?MHBiakJ5ZWVzaEZKZU1tT1hZQWtWRzd4TXJGckt4NER1S1N6SzI1Y2ZoSVVP?=
 =?utf-8?B?SCtZdE50RU9FSllkUFo0eDg5TDJDc1FzUUxDblF4V21BS0REYyszRUxzdWpJ?=
 =?utf-8?B?WGNBbCtMYU1WRVFiMHFJcy9wRm9WS3NVOVBRbHg5c1d3SGx2TFZtYnU2L2lZ?=
 =?utf-8?B?V0ZrOWVHMVFkdlBMNnpoOU1OZW5xOXlYeVcvZG5DeHI3b0VxRXMvUkowR1VM?=
 =?utf-8?B?bEJQbzc1OUZZbjJLNklzUTNjK2xKQ0V0OUxSeGcvOXdFNFdsVWtuQmFPRzli?=
 =?utf-8?B?YmdwRGRMSDlFVGZEZlhEM01xUGNKOTdsOFhBdEl0Mmo3VStabjB3T1p4QnNG?=
 =?utf-8?B?MDg2WFNmOEZhRW0vZmhzN3RraGlUZHRBcGp6ZXllRUxSRzMySE5VMVF1RFE3?=
 =?utf-8?B?MEttbFZmMEFqR3NSRUtOZlVzUHNrVkliN1gyYkI1citOS3RxR1NXeW94eGxm?=
 =?utf-8?B?a0VYRDdlRG5mWm5rSWZkY2N0K2phVzVvaW53MmI3TWIxOWdOYThsSFh4MkN5?=
 =?utf-8?B?OXhUbVYydXloZjVUd25pVE12bUxFSkcrbVBxV0FhM2JKaXNaVzZydVpJNitz?=
 =?utf-8?B?VmxSaDltVGNBVmg2cW5HV0pkU3ZtalBnUW1LQi94RXg3bE1qRzdhZDhEQ2Rq?=
 =?utf-8?B?bHFTZ2ZDREQ4ZkVZVm9ZazZSZEVrcENUNEhoeG9udVpwcDNtYi9nODRHblly?=
 =?utf-8?B?RnhGdEdPUGVqU3o5RVhFMWhTelJ4M0U5dFJkeHJ6SGJJZUUyUUhpS3JEcFEv?=
 =?utf-8?B?R3ZsOUxnUW9tZXV6ZDd0K25va1FHUDhsaHhSbEdlZlNuejNJSmVWMndRSVFy?=
 =?utf-8?B?SUJNdzc1Qll3TWhvWDIwZHZld2F4WG9JbUJEdjcydFlPcGlWM2xyMWFLWEg0?=
 =?utf-8?B?Yy9UakpCTTFocko4QmZQb1hVL3dpMEFFRmt2WGVHelJCZ2NxUWJyVjNvck1U?=
 =?utf-8?B?L2k3SVhTZ01YZ2hhanVGekhMTkpwS2FPWlljTkNjWmRFMWhOOFlWRWlWeTJ6?=
 =?utf-8?B?YWhWelZZSFhnV3hsSHViZkFQdCtKaDNKcEJtaVpXZThCSFQ0TjRQNTZzNktZ?=
 =?utf-8?B?L1VSR2ZhWTlSeGhwbXRBeVdtNGkybmIyM3FTV0JmSmVhS3locjBaNVBnMm1L?=
 =?utf-8?B?a1A4eGVnVEFYVmV3MFpyc3RacU91S1hQanhhUG5UZGJQMGVQQmRTRHRKN2Zo?=
 =?utf-8?B?dTM4UUx2WHdyZDY0ZGthWHo0K01rVHgxRXVQNWMxa1JJdnpnNGl0M1RJMDY5?=
 =?utf-8?B?SndLb2tRVytiMGJ1VkVabytWZUJZZzQ3ZWVVOU1jWmN4MTJyQkQrNmR0blZj?=
 =?utf-8?B?bFhxWkIzRWoxZWgvakhkYXBiOXA2eDR5ckU5bGNKM1E3UTZlYlZaUGVqRGts?=
 =?utf-8?B?UmN4dkVmOUFSTEFvclBDRXgxL0oxYkpmVDNVc0c0ZXpvTFRMaHd5S2Jtb3dr?=
 =?utf-8?B?Um5BaG9ZaUk0SHA5TThHcGEwQlhCbnlBZ1h0RU1sTDRiRkVNeGhFc0N6aEk2?=
 =?utf-8?B?SGovK29iTkZwSUZja2FBaUtlM05LNENMWXV3N0kwajNDU3RWUjlVWGl4OHg4?=
 =?utf-8?Q?/4uJhdyMcCwjiS/btneKkA4Lv?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e00c7e18-d71c-4001-3737-08db029992dc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 08:11:22.2245
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YUAxSfeydHjJcS7zNGqly+IZBmv/O4U52zvQ1D9zvDnXWvQW1SzFaAY5EOPAp1+Blo9VTkTwQsB9RVLvJ2IeYA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6882

On 27.01.2023 20:05, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The lock is not meant to be modified by _spin_is_locked(). So constify
> it.
> 
> This is helpful to be able to assert the lock is taken when the
> underlying structure is const.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

But: Could I talk you into doing the same for _rw_is{,_write}_locked() for
consistency?
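The benefit named in the commit message can be illustrated with a minimal self-contained sketch (hypothetical stand-in types, not Xen's actual spinlock implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins; only the const-ness of the parameter
 * mirrors the patch under review. */
typedef struct { bool locked; } toy_spinlock_t;

static bool toy_spin_is_locked(const toy_spinlock_t *lock)
{
    return lock->locked;  /* read-only: the lock is not modified */
}

struct toy_ring { toy_spinlock_t lock; int head; };

/* With a const parameter, a function holding only a pointer-to-const
 * can still assert its locking precondition. */
static int toy_ring_head(const struct toy_ring *r)
{
    assert(toy_spin_is_locked(&r->lock));
    return r->head;
}
```

Without the const qualifier, toy_ring_head() would not compile, since `&r->lock` is a pointer to a const-qualified lock.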

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 08:17:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 08:17:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486627.754016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPLz-0001Vo-Ay; Mon, 30 Jan 2023 08:17:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486627.754016; Mon, 30 Jan 2023 08:17:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPLz-0001Vh-88; Mon, 30 Jan 2023 08:17:27 +0000
Received: by outflank-mailman (input) for mailman id 486627;
 Mon, 30 Jan 2023 08:17:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMPLy-0001Vb-7E
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 08:17:26 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86b96f31-a076-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 09:17:24 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 342EF2192D;
 Mon, 30 Jan 2023 08:17:24 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EFD2413A06;
 Mon, 30 Jan 2023 08:17:23 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id PKxTORN912MsSAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 08:17:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86b96f31-a076-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675066644; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=jopNSckvVZXWP7qJpysxPCfTgMLKADJgaADF0FTRvmI=;
	b=dj4xelLKkFj0QI/kuxk/Tn6zCytRVJG60V6s5+VKzThwKSv4lK5s0+R03KMC8l7KLsCWb3
	QQ85rurnAjSURhhlEEdYLZbGlTpANZX0rkX3ObL1i3EAvoP3FqfbQaWEQs244tbK5sAkUt
	wK0ETJUGhf4i0K1OsxIowSYT2dEAP08=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] xen/public: move xenstore related doc into 9pfs.h
Date: Mon, 30 Jan 2023 09:17:22 +0100
Message-Id: <20230130081722.29012-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Xenstore related documentation currently lives in
docs/misc/9pfs.pandoc rather than in the related header file
xen/include/public/io/9pfs.h, as is the case for most other
paravirtualized device protocols.

There is a comment in the header pointing at the document, but the
given file name is wrong. Additionally such headers are meant to be
copied into consuming projects (Linux kernel, qemu, etc.), so pointing
at a doc file in the Xen git repository isn't really helpful for the
consumers of the header.

This situation is far from ideal, as evidenced by the fact that
neither qemu nor the Linux kernel implements the device attach/detach
protocol correctly. Additionally the documented Xenstore entries do
not match reality: the "tag" Xenstore entry is on the frontend side,
not on the backend one.

Change that by moving the Xenstore related 9pfs documentation from
docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
the wrong Xenstore entry detail.
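The attach/detach handshake referred to above follows the XenBus state
machine being moved into the header. As a rough illustration, the legal
forward transitions for each side can be modeled like this (a
simplification with invented names, not Xen code; state names mirror
the XenbusState values):

```c
#include <assert.h>

/* Toy encoding of the XenBus handshake order documented for 9pfs. */
enum toy_state {
    TOY_Initialising, TOY_InitWait, TOY_Initialised,
    TOY_Connected, TOY_Closing, TOY_Closed
};

static enum toy_state toy_front_next(enum toy_state s)
{
    switch (s) {
    case TOY_Initialising: return TOY_Initialised; /* params published */
    case TOY_Initialised:  return TOY_Connected;   /* backend queried  */
    case TOY_Connected:    return TOY_Closing;     /* shutdown begins  */
    case TOY_Closing:      return TOY_Closed;      /* rings freed      */
    default:               return s;
    }
}

static enum toy_state toy_back_next(enum toy_state s)
{
    switch (s) {
    case TOY_Initialising: return TOY_InitWait;   /* features published */
    case TOY_InitWait:     return TOY_Connected;  /* ring+evtchn mapped */
    case TOY_Connected:    return TOY_Closing;    /* grants unmapped    */
    case TOY_Closing:      return TOY_Closed;
    default:               return s;
    }
}
```

Note the asymmetry: the backend waits in InitWait until the frontend has published its transport parameters, which is exactly the step the existing implementations get wrong per the commit message.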

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- add reference to header in the pandoc document (Jan Beulich)
---
 docs/misc/9pfs.pandoc        | 153 +-------------------------------
 xen/include/public/io/9pfs.h | 166 ++++++++++++++++++++++++++++++++++-
 2 files changed, 169 insertions(+), 150 deletions(-)

diff --git a/docs/misc/9pfs.pandoc b/docs/misc/9pfs.pandoc
index b034fb5fa6..5c82625040 100644
--- a/docs/misc/9pfs.pandoc
+++ b/docs/misc/9pfs.pandoc
@@ -59,155 +59,10 @@ This document does not cover the 9pfs client/server design or
 implementation, only the transport for it.
 
 
-## Xenstore
+## Configuration
 
-The frontend and the backend connect via xenstore to exchange
-information. The toolstack creates front and back nodes with state
-[XenbusStateInitialising]. The protocol node name is **9pfs**.
-
-Multiple rings are supported for each frontend and backend connection.
-
-### Backend XenBus Nodes
-
-Backend specific properties, written by the backend, read by the
-frontend:
-
-    versions
-         Values:         <string>
-
-         List of comma separated protocol versions supported by the backend.
-         For example "1,2,3". Currently the value is just "1", as there is
-         only one version. N.B.: this is the version of the Xen trasport
-         protocol, not the version of 9pfs supported by the server.
-
-    max-rings
-         Values:         <uint32_t>
-
-         The maximum supported number of rings per frontend.
-
-    max-ring-page-order
-         Values:         <uint32_t>
-
-         The maximum supported size of a memory allocation in units of
-         log2n(machine pages), e.g. 1 = 2 pages, 2 == 4 pages, etc. It
-         must be at least 1.
-
-Backend configuration nodes, written by the toolstack, read by the
-backend:
-
-    path
-         Values:         <string>
-
-         Host filesystem path to share.
-
-    tag
-         Values:         <string>
-
-         Alphanumeric tag that identifies the 9pfs share. The client needs
-         to know the tag to be able to mount it.
-
-    security-model
-         Values:         "none"
-
-         *none*: files are stored using the same credentials as they are
-                 created on the guest (no user ownership squash or remap)
-         Only "none" is supported in this version of the protocol.
-
-### Frontend XenBus Nodes
-
-    version
-         Values:         <string>
-
-         Protocol version, chosen among the ones supported by the backend
-         (see **versions** under [Backend XenBus Nodes]). Currently the
-         value must be "1".
-
-    num-rings
-         Values:         <uint32_t>
-
-         Number of rings. It needs to be lower or equal to max-rings.
-
-    event-channel-<num> (event-channel-0, event-channel-1, etc)
-         Values:         <uint32_t>
-
-         The identifier of the Xen event channel used to signal activity
-         in the ring buffer. One for each ring.
-
-    ring-ref<num> (ring-ref0, ring-ref1, etc)
-         Values:         <uint32_t>
-
-         The Xen grant reference granting permission for the backend to
-         map a page with information to setup a share ring. One for each
-         ring.
-
-### State Machine
-
-Initialization:
-
-    *Front*                               *Back*
-    XenbusStateInitialising               XenbusStateInitialising
-    - Query virtual device                - Query backend device
-      properties.                           identification data.
-    - Setup OS device instance.           - Publish backend features
-    - Allocate and initialize the           and transport parameters
-      request ring.                                      |
-    - Publish transport parameters                       |
-      that will be in effect during                      V
-      this connection.                            XenbusStateInitWait
-                 |
-                 |
-                 V
-       XenbusStateInitialised
-
-                                          - Query frontend transport parameters.
-                                          - Connect to the request ring and
-                                            event channel.
-                                                         |
-                                                         |
-                                                         V
-                                                 XenbusStateConnected
-
-     - Query backend device properties.
-     - Finalize OS virtual device
-       instance.
-                 |
-                 |
-                 V
-        XenbusStateConnected
-
-Once frontend and backend are connected, they have a shared page per
-ring, which are used to setup the rings, and an event channel per ring,
-which are used to send notifications.
-
-Shutdown:
-
-    *Front*                            *Back*
-    XenbusStateConnected               XenbusStateConnected
-                |
-                |
-                V
-       XenbusStateClosing
-
-                                       - Unmap grants
-                                       - Unbind evtchns
-                                                 |
-                                                 |
-                                                 V
-                                         XenbusStateClosing
-
-    - Unbind evtchns
-    - Free rings
-    - Free data structures
-               |
-               |
-               V
-       XenbusStateClosed
-
-                                       - Free remaining data structures
-                                                 |
-                                                 |
-                                                 V
-                                         XenbusStateClosed
+The frontend and backend are configured via Xenstore. See [header] for
+the detailed Xenstore entries and the connection protocol.
 
 
 ## Ring Setup
@@ -415,5 +270,5 @@ the *size* field of the 9pfs header.
 
 [paper]: https://www.usenix.org/legacy/event/usenix05/tech/freenix/full_papers/hensbergen/hensbergen.pdf
 [website]: https://github.com/chaos/diod/blob/master/protocol.md
-[XenbusStateInitialising]: https://xenbits.xen.org/docs/unstable/hypercall/x86_64/include,public,io,xenbus.h.html
+[header]: https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/io/9pfs.h;hb=HEAD
 [ring.h]: https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/io/ring.h;hb=HEAD
diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
index ad26bd69eb..c4a39b6ab2 100644
--- a/xen/include/public/io/9pfs.h
+++ b/xen/include/public/io/9pfs.h
@@ -14,9 +14,173 @@
 #include "ring.h"
 
 /*
- * See docs/misc/9pfs.markdown in xen.git for the full specification:
+ * See docs/misc/9pfs.pandoc in xen.git for the full specification:
  * https://xenbits.xen.org/docs/unstable/misc/9pfs.html
  */
+
+/*
+ ******************************************************************************
+ *                                  Xenstore
+ ******************************************************************************
+ *
+ * The frontend and the backend connect via xenstore to exchange
+ * information. The toolstack creates front and back nodes with state
+ * XenbusStateInitialising. The protocol node name is **9pfs**.
+ *
+ * Multiple rings are supported for each frontend and backend connection.
+ *
+ ******************************************************************************
+ *                            Backend XenBus Nodes
+ ******************************************************************************
+ *
+ * Backend specific properties, written by the backend, read by the
+ * frontend:
+ *
+ *    versions
+ *         Values:         <string>
+ *
+ *         List of comma separated protocol versions supported by the backend.
+ *         For example "1,2,3". Currently the value is just "1", as there is
+ *         only one version. N.B.: this is the version of the Xen transport
+ *         protocol, not the version of 9pfs supported by the server.
+ *
+ *    max-rings
+ *         Values:         <uint32_t>
+ *
+ *         The maximum supported number of rings per frontend.
+ *
+ *    max-ring-page-order
+ *         Values:         <uint32_t>
+ *
+ *         The maximum supported size of a memory allocation in units of
+ *         log2n(machine pages), e.g. 1 = 2 pages, 2 == 4 pages, etc. It
+ *         must be at least 1.
+ *
+ * Backend configuration nodes, written by the toolstack, read by the
+ * backend:
+ *
+ *    path
+ *         Values:         <string>
+ *
+ *         Host filesystem path to share.
+ *
+ *    security-model
+ *         Values:         "none"
+ *
+ *         *none*: files are stored using the same credentials as they are
+ *                 created on the guest (no user ownership squash or remap)
+ *         Only "none" is supported in this version of the protocol.
+ *
+ ******************************************************************************
+ *                            Frontend XenBus Nodes
+ ******************************************************************************
+ *
+ *    version
+ *         Values:         <string>
+ *
+ *         Protocol version, chosen among the ones supported by the backend
+ *         (see **versions** under [Backend XenBus Nodes]). Currently the
+ *         value must be "1".
+ *
+ *    num-rings
+ *         Values:         <uint32_t>
+ *
+ *         Number of rings. It needs to be lower or equal to max-rings.
+ *
+ *    event-channel-<num> (event-channel-0, event-channel-1, etc)
+ *         Values:         <uint32_t>
+ *
+ *         The identifier of the Xen event channel used to signal activity
+ *         in the ring buffer. One for each ring.
+ *
+ *    ring-ref<num> (ring-ref0, ring-ref1, etc)
+ *         Values:         <uint32_t>
+ *
+ *         The Xen grant reference granting permission for the backend to
+ *         map a page with information to setup a share ring. One for each
+ *         ring.
+ *
+ *    tag
+ *         Values:         <string>
+ *
+ *         Alphanumeric tag that identifies the 9pfs share. The client needs
+ *         to know the tag to be able to mount it.
+ *
+ ******************************************************************************
+ *                              State Machine
+ ******************************************************************************
+ *
+ * Initialization:
+ *
+ *    *Front*                               *Back*
+ *    XenbusStateInitialising               XenbusStateInitialising
+ *    - Query virtual device                - Query backend device
+ *      properties.                           identification data.
+ *    - Setup OS device instance.           - Publish backend features
+ *    - Allocate and initialize the           and transport parameters
+ *      request ring.                                      |
+ *    - Publish transport parameters                       |
+ *      that will be in effect during                      V
+ *      this connection.                            XenbusStateInitWait
+ *                 |
+ *                 |
+ *                 V
+ *       XenbusStateInitialised
+ *
+ *                                          - Query frontend transport
+ *                                            parameters.
+ *                                          - Connect to the request ring and
+ *                                            event channel.
+ *                                                         |
+ *                                                         |
+ *                                                         V
+ *                                                 XenbusStateConnected
+ *
+ *     - Query backend device properties.
+ *     - Finalize OS virtual device
+ *       instance.
+ *                 |
+ *                 |
+ *                 V
+ *        XenbusStateConnected
+ *
+ * Once frontend and backend are connected, they have a shared page per
+ * ring, which is used to set up the ring, and an event channel per ring,
+ * which is used to send notifications.
+ *
+ * Shutdown:
+ *
+ *    *Front*                            *Back*
+ *    XenbusStateConnected               XenbusStateConnected
+ *                |
+ *                |
+ *                V
+ *       XenbusStateClosing
+ *
+ *                                       - Unmap grants
+ *                                       - Unbind evtchns
+ *                                                 |
+ *                                                 |
+ *                                                 V
+ *                                         XenbusStateClosing
+ *
+ *    - Unbind evtchns
+ *    - Free rings
+ *    - Free data structures
+ *               |
+ *               |
+ *               V
+ *       XenbusStateClosed
+ *
+ *                                       - Free remaining data structures
+ *                                                 |
+ *                                                 |
+ *                                                 V
+ *                                         XenbusStateClosed
+ *
+ ******************************************************************************
+ */
+
 DEFINE_XEN_FLEX_RING_AND_INTF(xen_9pfs);
 
 #endif
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 08:22:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 08:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486632.754026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPQf-0002wy-Sf; Mon, 30 Jan 2023 08:22:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486632.754026; Mon, 30 Jan 2023 08:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPQf-0002wr-Q1; Mon, 30 Jan 2023 08:22:17 +0000
Received: by outflank-mailman (input) for mailman id 486632;
 Mon, 30 Jan 2023 08:22:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMPQe-0002we-Ep
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 08:22:16 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2075.outbound.protection.outlook.com [40.107.241.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 33e27235-a077-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 09:22:15 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7533.eurprd04.prod.outlook.com (2603:10a6:102:f1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Mon, 30 Jan
 2023 08:22:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 08:22:11 +0000
X-Inumbo-ID: 33e27235-a077-11ed-9ec0-891035b88211
Message-ID: <bfccbe22-e5e4-40d3-aded-639d812bfa08@suse.com>
Date: Mon, 30 Jan 2023 09:22:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3] xen/x86: public: add TSC defines for cpuid leaf 4
Content-Language: en-US
To: Krister Johansen <kjlx@templeofstupid.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
References: <0c4eb4ba-4314-02c7-62d5-b08a3573fcc2@suse.com>
 <20230127185103.GB1955@templeofstupid.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230127185103.GB1955@templeofstupid.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 27.01.2023 19:51, Krister Johansen wrote:
> --- a/xen/include/public/arch-x86/cpuid.h
> +++ b/xen/include/public/arch-x86/cpuid.h
> @@ -72,6 +72,15 @@
>   * Sub-leaf 2: EAX: host tsc frequency in kHz
>   */
>  
> +#define XEN_CPUID_TSC_EMULATED               (1u << 0)
> +#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
> +#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
> +
> +#define XEN_CPUID_TSC_MODE_DEFAULT               (0)
> +#define XEN_CPUID_TSC_MODE_ALWAYS_EMULATE        (1u)
> +#define XEN_CPUID_TSC_MODE_NEVER_EMULATE         (2u)
> +#define XEN_CPUID_TSC_MODE_NEVER_EMULATE_TSC_AUX (3u)

While perhaps it doesn't matter much with the mode no longer supported,
I'd prefer if here we used the original name (PVRDTSCP) as well.
Preferably with that adjustment (which once again I'd be happy to do
while committing, albeit I'd like to wait with that until osstest is in
a better mood again):
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 08:25:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 08:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486638.754037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPTt-0003kf-Fo; Mon, 30 Jan 2023 08:25:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486638.754037; Mon, 30 Jan 2023 08:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPTt-0003kY-Be; Mon, 30 Jan 2023 08:25:37 +0000
Received: by outflank-mailman (input) for mailman id 486638;
 Mon, 30 Jan 2023 08:25:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMPTr-0003kS-K1
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 08:25:35 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2057.outbound.protection.outlook.com [40.107.7.57])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aabfe6d4-a077-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 09:25:34 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6792.eurprd04.prod.outlook.com (2603:10a6:20b:dc::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Mon, 30 Jan
 2023 08:25:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 08:25:33 +0000
X-Inumbo-ID: aabfe6d4-a077-11ed-9ec0-891035b88211
Message-ID: <fb813a16-a1be-bfeb-60c0-02197c98adf7@suse.com>
Date: Mon, 30 Jan 2023 09:25:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] Add more rules to docs/misra/rules.rst
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: george.dunlap@citrix.com, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, Bertrand.Marquis@arm.com, julien@xen.org,
 stefano.stabellini@amd.com, xen-devel@lists.xenproject.org
References: <20230127183541.3013908-1-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230127183541.3013908-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 27.01.2023 19:35, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> As agreed during the last MISRA C discussion, I am adding the following
> MISRA C rules: 7.1, 7.3, 18.3.
> 
> I am also adding 13.1 that was "agreed pending an analysis on the amount
> of violations". There are zero violations reported by cppcheck.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jan 30 08:34:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 08:34:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486643.754047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPbn-0005CY-7v; Mon, 30 Jan 2023 08:33:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486643.754047; Mon, 30 Jan 2023 08:33:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMPbn-0005CR-5F; Mon, 30 Jan 2023 08:33:47 +0000
Received: by outflank-mailman (input) for mailman id 486643;
 Mon, 30 Jan 2023 08:33:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMPbm-0005CL-OQ
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 08:33:46 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ced8f7b4-a078-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 09:33:44 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4E16E21976;
 Mon, 30 Jan 2023 08:33:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 104EB13A06;
 Mon, 30 Jan 2023 08:33:44 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id B9v2AeiA12NBUAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 08:33:44 +0000
X-Inumbo-ID: ced8f7b4-a078-11ed-b8d1-410ff93cb8f0
Message-ID: <72145559-dc4b-eae9-de70-50429cef8939@suse.com>
Date: Mon, 30 Jan 2023 09:33:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2] xen/public: move xenstore related doc into 9pfs.h
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230130081722.29012-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230130081722.29012-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0AMu4URJvm4oxx7SfqBLSSZR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0AMu4URJvm4oxx7SfqBLSSZR
Content-Type: multipart/mixed; boundary="------------QJlSYRqFnF0nd1Fj4E45wypq";
 protected-headers="v1"

--------------QJlSYRqFnF0nd1Fj4E45wypq
Content-Type: multipart/mixed; boundary="------------2fCWHspyxWIQYc6JJkYVvf0n"

--------------2fCWHspyxWIQYc6JJkYVvf0n
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 30.01.23 09:17, Juergen Gross wrote:
> The Xenstore related documentation is currently to be found in
> docs/misc/9pfs.pandoc, instead of the related header file
> xen/include/public/io/9pfs.h like for most other paravirtualized
> device protocols.
> 
> There is a comment in the header pointing at the document, but the
> given file name is wrong. Additionally such headers are meant to be
> copied into consuming projects (Linux kernel, qemu, etc.), so pointing
> at a doc file in the Xen git repository isn't really helpful for the
> consumers of the header.
> 
> This situation is far from ideal, which is already being proved by the
> fact that neither qemu nor the Linux kernel are implementing the
> device attach/detach protocol correctly. Additionally the documented
> Xenstore entries are not matching the reality, as the "tag" Xenstore
> entry is on the frontend side, not on the backend one.
> 
> Change that by moving the Xenstore related 9pfs documentation from
> docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
> the wrong Xenstore entry detail.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - add reference to header in the pandoc document (Jan Beulich)

Oh, I just found another bug in the documentation of the connection
protocol while trying to fix the Linux frontend. I'll send V3 of the
patch soon.


Juergen
--------------2fCWHspyxWIQYc6JJkYVvf0n--

--------------QJlSYRqFnF0nd1Fj4E45wypq--

--------------0AMu4URJvm4oxx7SfqBLSSZR

--------------0AMu4URJvm4oxx7SfqBLSSZR--


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 08:59:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 08:59:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <3872442b-b9d0-4594-0869-d505aefa5aa5@amd.com>
Date: Mon, 30 Jan 2023 09:58:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v5 1/5] xen/arm32: head: Widen the use of the temporary
 mapping
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-2-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230127195508.2786-2-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Julien,

On 27/01/2023 20:55, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, the temporary mapping is only used when the virtual
> runtime region of Xen is clashing with the physical region.
> 
> In follow-up patches, we will rework how secondary CPU bring-up works
> and it will be convenient to use the fixmap area for accessing
> the root page-table (it is per-cpu).
> 
> Rework the code to use a temporary mapping when the Xen physical address
> is not overlapping with the temporary mapping.
> 
> This also has the advantage of simplifying the logic to identity map
> Xen.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> Even if this patch is rewriting part of the previous patch, I decided
> to keep them separated to help the review.
> 
> The "follow-up patches" are still in draft at the moment. I still haven't
> found a way to split them nicely without requiring too much extra work
> on the coloring side.
> 
> I have provided some medium-term goals in the cover letter.
> 
>     Changes in v5:
>         - Fix typo in a comment
>         - No need to link boot_{second, third}_id again if we need to
>           create a temporary area.
> 
>     Changes in v3:
>         - Resolve conflicts after switching from "ldr rX, <label>" to
>           "mov_w rX, <label>" in a previous patch
> 
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/arm32/head.S | 85 +++++++--------------------------------
>  1 file changed, 15 insertions(+), 70 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index df51550baa8a..93b0af114b0c 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
...
> @@ -675,33 +641,12 @@ remove_identity_mapping:
>          /* r2:r3 := invalid page-table entry */
>          mov   r2, #0x0
>          mov   r3, #0x0
> -        /*
> -         * Find the first slot used. Remove the entry for the first
> -         * table if the slot is not XEN_FIRST_SLOT.
> -         */
Could you please add an empty line here to improve readability?

> +        /* Find the first slot used and remove it */

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:00:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:00:46 +0000
Message-ID: <dc631f7a-67f3-64eb-fbaf-a39bbcd616b3@xen.org>
Date: Mon, 30 Jan 2023 09:00:36 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v5 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, michal.orzel@amd.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-4-julien@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230127195508.2786-4-julien@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 27/01/2023 19:55, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
> 
> On some platforms, the identity mapping may have to start at 0. If we
> always keep the identity region mapped, a NULL pointer dereference would
> lead to an access to a valid mapping.
> 
> It would be possible to relocate Xen to avoid clashing with address 0.
> However the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
> 
> Two new external helpers are introduced:
>      - arch_setup_page_tables() will setup the page-tables so it is
>        easy to create the mapping afterwards.
>      - update_identity_mapping() will create/remove the identity mapping
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>      Changes in v5:
>          - The reserved area for the identity mapping is 2TB (so 4 slots)
>            rather than 512GB.
> 
>      Changes in v4:
>          - Fix typo in a comment
>          - Clarify which page-tables are updated
> 
>      Changes in v2:
>          - Remove the arm32 part
>          - Use a different logic for the boot page tables and runtime
>            one because Xen may be running in a different place.
> ---
>   xen/arch/arm/arm64/Makefile         |   1 +
>   xen/arch/arm/arm64/mm.c             | 130 ++++++++++++++++++++++++++++
>   xen/arch/arm/include/asm/arm32/mm.h |   4 +
>   xen/arch/arm/include/asm/arm64/mm.h |  13 +++
>   xen/arch/arm/include/asm/config.h   |   2 +
>   xen/arch/arm/include/asm/setup.h    |  11 +++
>   xen/arch/arm/mm.c                   |   6 +-
>   7 files changed, 165 insertions(+), 2 deletions(-)
>   create mode 100644 xen/arch/arm/arm64/mm.c
> 
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 6d507da0d44d..28481393e98f 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -10,6 +10,7 @@ obj-y += entry.o
>   obj-y += head.o
>   obj-y += insn.o
>   obj-$(CONFIG_LIVEPATCH) += livepatch.o
> +obj-y += mm.o
>   obj-y += smc.o
>   obj-y += smpboot.o
>   obj-y += traps.o
> diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
> new file mode 100644
> index 000000000000..f8e0887d25bc
> --- /dev/null
> +++ b/xen/arch/arm/arm64/mm.c
> @@ -0,0 +1,130 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <xen/init.h>
> +#include <xen/mm.h>
> +
> +#include <asm/setup.h>
> +
> +/* Override macros from asm/page.h to make them work with mfn_t */
> +#undef virt_to_mfn
> +#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> +
> +static DEFINE_PAGE_TABLE(xen_first_id);
> +static DEFINE_PAGE_TABLE(xen_second_id);
> +static DEFINE_PAGE_TABLE(xen_third_id);
> +
> +/*
> + * The identity mapping may start at physical address 0. So we don't want
> + * to keep it mapped longer than necessary.
> + *
> + * When this is called, we are still using the boot_pgtable.
> + *
> + * We need to prepare the identity mapping for both the boot page tables
> + * and runtime page tables.
> + *
> + * The logic to create the entry is slightly different because Xen may
> + * be running at a different location at runtime.
> + */
> +static void __init prepare_boot_identity_mapping(void)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    lpae_t pte;
> +    DECLARE_OFFSETS(id_offsets, id_addr);
> +
> +    /*
> +     * We will be re-using the boot ID tables. They may not have been
> +     * zeroed but they should be unlinked. So it is fine to use
> +     * clear_page().
> +     */
> +    clear_page(boot_first_id);
> +    clear_page(boot_second_id);
> +    clear_page(boot_third_id);
> +
> +    if ( id_offsets[0] != 0 )
> +        panic("Cannot handled ID mapping above 512GB\n");

Hmmm... I forgot to update the two lines above and ...

> +
> +    /* Link first ID table */
> +    pte = mfn_to_xen_entry(virt_to_mfn(boot_first_id), MT_NORMAL);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&boot_pgtable[id_offsets[0]], pte);
> +
> +    /* Link second ID table */
> +    pte = mfn_to_xen_entry(virt_to_mfn(boot_second_id), MT_NORMAL);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&boot_first_id[id_offsets[1]], pte);
> +
> +    /* Link third ID table */
> +    pte = mfn_to_xen_entry(virt_to_mfn(boot_third_id), MT_NORMAL);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&boot_second_id[id_offsets[2]], pte);
> +
> +    /* The mapping in the third table will be created at a later stage */
> +}
> +
> +static void __init prepare_runtime_identity_mapping(void)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    lpae_t pte;
> +    DECLARE_OFFSETS(id_offsets, id_addr);
> +
> +    if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
> +        panic("Cannot handled ID mapping above 512GB\n");

... the error message here. I will send a new version of this patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:09:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:09:09 +0000
Message-ID: <af0bf576-373c-353e-b575-40f5bbde861f@amd.com>
Date: Mon, 30 Jan 2023 10:08:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v5 2/5] xen/arm64: Rework the memory layout
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-3-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230127195508.2786-3-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 09:08:51.8775
 (UTC)

Hi Julien,

On 27/01/2023 20:55, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Xen is currently not fully compliant with the Arm Arm because it will
> switch the TTBR with the MMU on.
> 
> In order to be compliant, we need to disable the MMU before
> switching the TTBR. The implication is that the page-tables should
> contain an identity mapping of the code switching the TTBR.
> 
> In most cases we expect Xen to be loaded in low memory. I am aware
> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
> To give us some slack, consider that Xen may be loaded in the first 2TB
> of the physical address space.
> 
> The memory layout is reshuffled to keep the first four slots of the zeroeth
> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
> tweak of the boot code because XEN_VIRT_START cannot be used as an
> immediate.
> 
> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
> loaded below 2TB.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> 
> ---
>     Changes in v5:
>         - We are reserving 4 slots rather than 2.
>         - Fix the addresses in the layout comment.
>         - Fix the size of the region in the layout comment
>         - Add Luca's tested-by (the reviewed-by was not added
>           because of the changes requested by Michal)
>         - Add Michal's reviewed-by
> 
>     Changes in v4:
>         - Correct the documentation
>         - The start address is 2TB, so slot0 is 4 not 2.
> 
>     Changes in v2:
>         - Reword the commit message
>         - Load Xen at 2TB + 2MB
>         - Update the documentation to reflect the new layout
> ---
>  xen/arch/arm/arm64/head.S         |  3 ++-
>  xen/arch/arm/include/asm/config.h | 34 +++++++++++++++++++++----------
>  xen/arch/arm/mm.c                 | 11 +++++-----
>  3 files changed, 31 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 4a3f87117c83..663f5813b12e 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -607,7 +607,8 @@ create_page_tables:
>           * need an additional 1:1 mapping, the virtual mapping will
>           * suffice.
>           */
> -        cmp   x19, #XEN_VIRT_START
> +        ldr   x0, =XEN_VIRT_START
> +        cmp   x19, x0
>          bne   1f
>          ret
>  1:
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 5df0e4c4959b..e388462c23d1 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -72,16 +72,13 @@
>  #include <xen/page-size.h>
> 
>  /*
> - * Common ARM32 and ARM64 layout:
> + * ARM32 layout:
>   *   0  -   2M   Unmapped
>   *   2M -   4M   Xen text, data, bss
>   *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>   *   6M -  10M   Early boot mapping of FDT
>   *   10M - 12M   Livepatch vmap (if compiled in)
>   *
> - * ARM32 layout:
> - *   0  -  12M   <COMMON>
> - *
>   *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>   * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>   *                    space
> @@ -90,14 +87,23 @@
>   *   2G -   4G   Domheap: on-demand-mapped
>   *
>   * ARM64 layout:
> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
> - *   0  -  12M   <COMMON>
> + * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
> + *
> + *  Reserved to identity map Xen
> + *
> + * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
> + *  (Relative offsets)
> + *   0  -   2M   Unmapped
> + *   2M -   4M   Xen text, data, bss
> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
> + *   6M -  10M   Early boot mapping of FDT
> + *  10M -  12M   Livepatch vmap (if compiled in)
>   *
>   *   1G -   2G   VMAP: ioremap and early_ioremap
>   *
>   *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
>   *
> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
> + * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
>   *  Unused
>   *
>   * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
> @@ -107,7 +113,17 @@
>   *  Unused
>   */
> 
> +#ifdef CONFIG_ARM_32
>  #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
> +#else
> +
> +#define SLOT0_ENTRY_BITS  39
> +#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
> +#define SLOT0_ENTRY_SIZE  SLOT0(1)
> +
> +#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
> +#endif
> +
>  #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
> 
>  #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
> @@ -163,10 +179,6 @@
> 
>  #else /* ARM_64 */
> 
> -#define SLOT0_ENTRY_BITS  39
> -#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
> -#define SLOT0_ENTRY_SIZE  SLOT0(1)
> -
>  #define VMAP_VIRT_START  GB(1)
>  #define VMAP_VIRT_SIZE   GB(1)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index f758cad545fa..0b0edf28d57a 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -153,7 +153,7 @@ static void __init __maybe_unused build_assertions(void)
>  #endif
>      /* Page table structure constraints */
>  #ifdef CONFIG_ARM_64
> -    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
> +    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START) < 2);
I haven't spotted this before but this should be < 4.
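To double-check the slot arithmetic, here is a quick standalone sketch. The constants are mirrored from the patch's asm/config.h changes; `zeroeth_table_offset` is assumed to behave like Xen's macro, i.e. to extract the L0 index from VA bits [47:39]:

```python
# Constants mirrored from the patch (asm/config.h, arm64 side).
SLOT0_ENTRY_BITS = 39

def slot0(slot):
    return slot << SLOT0_ENTRY_BITS

def mb(x):
    return x << 20

# Xen is now loaded at 2TB + 2MB, i.e. just inside L0 slot 4.
XEN_VIRT_START = slot0(4) + mb(2)

def zeroeth_table_offset(va):
    # Assumed equivalent of Xen's macro: L0 index = VA bits [47:39].
    return (va >> SLOT0_ENTRY_BITS) & 0x1FF

print(hex(XEN_VIRT_START))                   # 0x20000200000
print(zeroeth_table_offset(XEN_VIRT_START))  # 4

# Slots 0..3 are reserved for the identity map, so the build assertion
# has to reject any offset below 4 -- checking "< 2" would still allow
# Xen to land in the reserved slots 2 and 3.
assert not (zeroeth_table_offset(XEN_VIRT_START) < 4)
```

With these values the BUILD_BUG_ON condition `zeroeth_table_offset(XEN_VIRT_START) < 4` is false, so the build passes, while anything mapped into slots 0..3 would be caught.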

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:09:43 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3] xen/public: move xenstore related doc into 9pfs.h
Date: Mon, 30 Jan 2023 10:09:37 +0100
Message-Id: <20230130090937.31623-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Xenstore-related documentation is currently found in
docs/misc/9pfs.pandoc instead of in the related header file
xen/include/public/io/9pfs.h, where it lives for most other
paravirtualized device protocols.

There is a comment in the header pointing at the document, but the
given file name is wrong. Additionally, such headers are meant to be
copied into consuming projects (the Linux kernel, qemu, etc.), so
pointing at a doc file in the Xen git repository isn't really helpful
for the consumers of the header.

This situation is far from ideal, as is already proven by the fact
that neither qemu nor the Linux kernel implements the device
attach/detach protocol correctly. Additionally, the documented
Xenstore entries do not match reality, as the "tag" Xenstore entry is
on the frontend side, not on the backend one.

There is another bug in the connection sequence: the frontend needs to
wait for the backend to have published its features before being able
to allocate its rings and event-channels.
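The required ordering can be sketched as a toy handshake. This is only an illustration of the fixed connection sequence (frontend waits for the backend's published features before sizing its rings); all names and keys here are hypothetical stand-ins, not actual Linux/qemu identifiers:

```python
# Illustrative sketch: the frontend must read the backend's published
# transport parameters (versions, max-rings, max-ring-page-order)
# before allocating its rings and event channels. All names are
# hypothetical, not real Linux/qemu code.

xenstore = {}  # stand-in for the real Xenstore

def backend_publish_features():
    xenstore["backend/9pfs/versions"] = "1"
    xenstore["backend/9pfs/max-rings"] = 2
    xenstore["backend/9pfs/max-ring-page-order"] = 1
    xenstore["backend/9pfs/state"] = "InitWait"

def frontend_connect():
    # The fix: do not allocate rings until the backend has reached
    # InitWait, i.e. until its features are guaranteed to be present.
    assert xenstore.get("backend/9pfs/state") == "InitWait", \
        "frontend must not allocate rings before backend features exist"
    num_rings = min(1, xenstore["backend/9pfs/max-rings"])
    xenstore["frontend/9pfs/version"] = "1"
    xenstore["frontend/9pfs/num-rings"] = num_rings
    xenstore["frontend/9pfs/state"] = "Initialised"
    return num_rings

backend_publish_features()
rings = frontend_connect()
print(rings)  # 1
```

Running `frontend_connect()` before `backend_publish_features()` trips the assertion, which is exactly the ordering bug being fixed.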

Change that by moving the Xenstore related 9pfs documentation from
docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
the wrong Xenstore entry detail and the connection sequence.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- add reference to header in the pandoc document (Jan Beulich)
V3:
- fix flaw in the connection sequence
---
 docs/misc/9pfs.pandoc        | 153 +-----------------------------
 xen/include/public/io/9pfs.h | 178 ++++++++++++++++++++++++++++++++++-
 2 files changed, 181 insertions(+), 150 deletions(-)

diff --git a/docs/misc/9pfs.pandoc b/docs/misc/9pfs.pandoc
index b034fb5fa6..5c82625040 100644
--- a/docs/misc/9pfs.pandoc
+++ b/docs/misc/9pfs.pandoc
@@ -59,155 +59,10 @@ This document does not cover the 9pfs client/server design or
 implementation, only the transport for it.
 
 
-## Xenstore
+## Configuration
 
-The frontend and the backend connect via xenstore to exchange
-information. The toolstack creates front and back nodes with state
-[XenbusStateInitialising]. The protocol node name is **9pfs**.
-
-Multiple rings are supported for each frontend and backend connection.
-
-### Backend XenBus Nodes
-
-Backend specific properties, written by the backend, read by the
-frontend:
-
-    versions
-         Values:         <string>
-
-         List of comma separated protocol versions supported by the backend.
-         For example "1,2,3". Currently the value is just "1", as there is
-         only one version. N.B.: this is the version of the Xen trasport
-         protocol, not the version of 9pfs supported by the server.
-
-    max-rings
-         Values:         <uint32_t>
-
-         The maximum supported number of rings per frontend.
-
-    max-ring-page-order
-         Values:         <uint32_t>
-
-         The maximum supported size of a memory allocation in units of
-         log2n(machine pages), e.g. 1 = 2 pages, 2 == 4 pages, etc. It
-         must be at least 1.
-
-Backend configuration nodes, written by the toolstack, read by the
-backend:
-
-    path
-         Values:         <string>
-
-         Host filesystem path to share.
-
-    tag
-         Values:         <string>
-
-         Alphanumeric tag that identifies the 9pfs share. The client needs
-         to know the tag to be able to mount it.
-
-    security-model
-         Values:         "none"
-
-         *none*: files are stored using the same credentials as they are
-                 created on the guest (no user ownership squash or remap)
-         Only "none" is supported in this version of the protocol.
-
-### Frontend XenBus Nodes
-
-    version
-         Values:         <string>
-
-         Protocol version, chosen among the ones supported by the backend
-         (see **versions** under [Backend XenBus Nodes]). Currently the
-         value must be "1".
-
-    num-rings
-         Values:         <uint32_t>
-
-         Number of rings. It needs to be lower or equal to max-rings.
-
-    event-channel-<num> (event-channel-0, event-channel-1, etc)
-         Values:         <uint32_t>
-
-         The identifier of the Xen event channel used to signal activity
-         in the ring buffer. One for each ring.
-
-    ring-ref<num> (ring-ref0, ring-ref1, etc)
-         Values:         <uint32_t>
-
-         The Xen grant reference granting permission for the backend to
-         map a page with information to setup a share ring. One for each
-         ring.
-
-### State Machine
-
-Initialization:
-
-    *Front*                               *Back*
-    XenbusStateInitialising               XenbusStateInitialising
-    - Query virtual device                - Query backend device
-      properties.                           identification data.
-    - Setup OS device instance.           - Publish backend features
-    - Allocate and initialize the           and transport parameters
-      request ring.                                      |
-    - Publish transport parameters                       |
-      that will be in effect during                      V
-      this connection.                            XenbusStateInitWait
-                 |
-                 |
-                 V
-       XenbusStateInitialised
-
-                                          - Query frontend transport parameters.
-                                          - Connect to the request ring and
-                                            event channel.
-                                                         |
-                                                         |
-                                                         V
-                                                 XenbusStateConnected
-
-     - Query backend device properties.
-     - Finalize OS virtual device
-       instance.
-                 |
-                 |
-                 V
-        XenbusStateConnected
-
-Once frontend and backend are connected, they have a shared page per
-ring, which are used to setup the rings, and an event channel per ring,
-which are used to send notifications.
-
-Shutdown:
-
-    *Front*                            *Back*
-    XenbusStateConnected               XenbusStateConnected
-                |
-                |
-                V
-       XenbusStateClosing
-
-                                       - Unmap grants
-                                       - Unbind evtchns
-                                                 |
-                                                 |
-                                                 V
-                                         XenbusStateClosing
-
-    - Unbind evtchns
-    - Free rings
-    - Free data structures
-               |
-               |
-               V
-       XenbusStateClosed
-
-                                       - Free remaining data structures
-                                                 |
-                                                 |
-                                                 V
-                                         XenbusStateClosed
+The frontend and backend are configured via Xenstore. See [header] for
+the detailed Xenstore entries and the connection protocol.
 
 
 ## Ring Setup
@@ -415,5 +270,5 @@ the *size* field of the 9pfs header.
 
 [paper]: https://www.usenix.org/legacy/event/usenix05/tech/freenix/full_papers/hensbergen/hensbergen.pdf
 [website]: https://github.com/chaos/diod/blob/master/protocol.md
-[XenbusStateInitialising]: https://xenbits.xen.org/docs/unstable/hypercall/x86_64/include,public,io,xenbus.h.html
+[header]: https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/io/9pfs.h;hb=HEAD
 [ring.h]: https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/io/ring.h;hb=HEAD
diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
index ad26bd69eb..88c131820b 100644
--- a/xen/include/public/io/9pfs.h
+++ b/xen/include/public/io/9pfs.h
@@ -14,9 +14,185 @@
 #include "ring.h"
 
 /*
- * See docs/misc/9pfs.markdown in xen.git for the full specification:
+ * See docs/misc/9pfs.pandoc in xen.git for the full specification:
  * https://xenbits.xen.org/docs/unstable/misc/9pfs.html
  */
+
+/*
+ ******************************************************************************
+ *                                  Xenstore
+ ******************************************************************************
+ *
+ * The frontend and the backend connect via xenstore to exchange
+ * information. The toolstack creates front and back nodes with state
+ * XenbusStateInitialising. The protocol node name is **9pfs**.
+ *
+ * Multiple rings are supported for each frontend and backend connection.
+ *
+ ******************************************************************************
+ *                            Backend XenBus Nodes
+ ******************************************************************************
+ *
+ * Backend specific properties, written by the backend, read by the
+ * frontend:
+ *
+ *    versions
+ *         Values:         <string>
+ *
+ *         List of comma separated protocol versions supported by the backend.
+ *         For example "1,2,3". Currently the value is just "1", as there is
+ *         only one version. N.B.: this is the version of the Xen transport
+ *         protocol, not the version of 9pfs supported by the server.
+ *
+ *    max-rings
+ *         Values:         <uint32_t>
+ *
+ *         The maximum supported number of rings per frontend.
+ *
+ *    max-ring-page-order
+ *         Values:         <uint32_t>
+ *
+ *         The maximum supported size of a memory allocation in units of
+ *         log2n(machine pages), e.g. 1 = 2 pages, 2 == 4 pages, etc. It
+ *         must be at least 1.
+ *
+ * Backend configuration nodes, written by the toolstack, read by the
+ * backend:
+ *
+ *    path
+ *         Values:         <string>
+ *
+ *         Host filesystem path to share.
+ *
+ *    security-model
+ *         Values:         "none"
+ *
+ *         *none*: files are stored using the same credentials as they are
+ *                 created on the guest (no user ownership squash or remap)
+ *         Only "none" is supported in this version of the protocol.
+ *
+ ******************************************************************************
+ *                            Frontend XenBus Nodes
+ ******************************************************************************
+ *
+ *    version
+ *         Values:         <string>
+ *
+ *         Protocol version, chosen among the ones supported by the backend
+ *         (see **versions** under [Backend XenBus Nodes]). Currently the
+ *         value must be "1".
+ *
+ *    num-rings
+ *         Values:         <uint32_t>
+ *
+ *         Number of rings. It needs to be lower or equal to max-rings.
+ *
+ *    event-channel-<num> (event-channel-0, event-channel-1, etc)
+ *         Values:         <uint32_t>
+ *
+ *         The identifier of the Xen event channel used to signal activity
+ *         in the ring buffer. One for each ring.
+ *
+ *    ring-ref<num> (ring-ref0, ring-ref1, etc)
+ *         Values:         <uint32_t>
+ *
+ *         The Xen grant reference granting permission for the backend to
+ *         map a page with information to setup a share ring. One for each
+ *         ring.
+ *
+ *    tag
+ *         Values:         <string>
+ *
+ *         Alphanumeric tag that identifies the 9pfs share. The client needs
+ *         to know the tag to be able to mount it.
+ *
+ ******************************************************************************
+ *                              State Machine
+ ******************************************************************************
+ *
+ * Initialization:
+ *
+ *    *Front*                               *Back*
+ *    XenbusStateInitialising               XenbusStateInitialising
+ *                                          - Query backend device
+ *                                            identification data.
+ *                                          - Publish backend features
+ *                                            and transport parameters.
+ *                                                         |
+ *                                                         |
+ *                                                         V
+ *                                                  XenbusStateInitWait
+ *
+ *    - Query virtual device
+ *      properties.
+ *    - Query backend features and
+ *      transport parameters.
+ *    - Setup OS device instance.
+ *    - Allocate and initialize the
+ *      request ring(s) and
+ *      event-channel(s).
+ *    - Publish transport parameters
+ *      that will be in effect during
+ *      this connection.
+ *                 |
+ *                 |
+ *                 V
+ *       XenbusStateInitialised
+ *
+ *                                          - Query frontend transport
+ *                                            parameters.
+ *                                          - Connect to the request ring(s)
+ *                                            and event channel(s).
+ *                                                         |
+ *                                                         |
+ *                                                         V
+ *                                                 XenbusStateConnected
+ *
+ *     - Query backend device properties.
+ *     - Finalize OS virtual device
+ *       instance.
+ *                 |
+ *                 |
+ *                 V
+ *        XenbusStateConnected
+ *
+ * Once frontend and backend are connected, they have a shared page per
+ * ring, which are used to setup the rings, and an event channel per ring,
+ * which are used to send notifications.
+ *
+ * Shutdown:
+ *
+ *    *Front*                            *Back*
+ *    XenbusStateConnected               XenbusStateConnected
+ *                |
+ *                |
+ *                V
+ *       XenbusStateClosing
+ *
+ *                                       - Unmap grants
+ *                                       - Unbind evtchns
+ *                                                 |
+ *                                                 |
+ *                                                 V
+ *                                         XenbusStateClosing
+ *
+ *    - Unbind evtchns
+ *    - Free rings
+ *    - Free data structures
+ *               |
+ *               |
+ *               V
+ *       XenbusStateClosed
+ *
+ *                                       - Free remaining data structures
+ *                                                 |
+ *                                                 |
+ *                                                 V
+ *                                         XenbusStateClosed
+ *
+ ******************************************************************************
+ */
+
 DEFINE_XEN_FLEX_RING_AND_INTF(xen_9pfs);
 
 #endif
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:10:21 2023
Message-ID: <407b7c10-ad1f-b6d2-2976-2b297755b2b3@oderland.se>
Date: Mon, 30 Jan 2023 10:10:05 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:108.0) Gecko/20100101
 Thunderbird/108.0
Subject: Re: [PATCH 3/3] xen/acpi: upload power and performance related data
 from a PVH dom0
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org, x86@kernel.org
References: <20221121102113.41893-1-roger.pau@citrix.com>
 <20221121102113.41893-4-roger.pau@citrix.com>
From: Josef Johansson <josef@oderland.se>
In-Reply-To: <20221121102113.41893-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 11/21/22 11:21, Roger Pau Monne wrote:
> When running as a PVH dom0 the ACPI MADT is crafted by Xen in order to
> report the correct numbers of vCPUs that dom0 has, so the host MADT is
> not provided to dom0.  This creates issues when parsing the power and
> performance related data from ACPI dynamic tables, as the ACPI
> Processor UIDs found on the dynamic code are likely to not match the
> ones crafted by Xen in the dom0 MADT.
>
> Xen would rely on Linux having filled at least the power and
> performance related data of the vCPUs on the system, and would clone
> that information in order to setup the remaining pCPUs on the system
> if dom0 vCPUs < pCPUs.  However when running as PVH dom0 it's likely
> that none of dom0 CPUs will have the power and performance data
> filled, and hence the Xen ACPI Processor driver needs to fetch that
> information by itself.
>
> In order to do so correctly, introduce a new helper to fetch the _CST
> data without taking into account the system capabilities from the
> CPUID output, as the capabilities reported to dom0 in CPUID might be
> different from the ones on the host.
>
>

Hi Roger,

First of all, thanks for doing the grunt work here to clear up the
ACPI situation.

A bit of background: I'm trying to get an AMD APU (CPUID 0x17 (23)) to
work properly under Xen. It works fine otherwise, but something is
amiss under Xen.

Just to make sure that the patch is working as intended, what I found
when trying it out is that the first 8 CPUs have C-states, but out of
16 CPUs only 0, 2, 4, 6, 8, 10, 12 and 14 have P-states. Xen tells
Linux that there are 8 CPUs since it's running with nosmt.

Regards
- Josef

xen_acpi_processor: Max ACPI ID: 30
xen_acpi_processor: Uploading Xen processor PM info
xen_acpi_processor: ACPI CPU0 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU0 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU1 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU2 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU2 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU3 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU4 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU4 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU5 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU6 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU6 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU7 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU0 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU0 w/ PST:coord_type = 254 domain = 0
xen_acpi_processor: CPU with ACPI ID 1 is unavailable
xen_acpi_processor: ACPI CPU2 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU2 w/ PST:coord_type = 254 domain = 1
xen_acpi_processor: CPU with ACPI ID 3 is unavailable
xen_acpi_processor: ACPI CPU4 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU4 w/ PST:coord_type = 254 domain = 2
xen_acpi_processor: CPU with ACPI ID 5 is unavailable
xen_acpi_processor: ACPI CPU6 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU6 w/ PST:coord_type = 254 domain = 3
xen_acpi_processor: CPU with ACPI ID 7 is unavailable
xen_acpi_processor: ACPI CPU8 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU8 w/ PST:coord_type = 254 domain = 4
xen_acpi_processor: CPU with ACPI ID 9 is unavailable
xen_acpi_processor: ACPI CPU10 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU10 w/ PST:coord_type = 254 domain = 5
xen_acpi_processor: CPU with ACPI ID 11 is unavailable
xen_acpi_processor: ACPI CPU12 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU12 w/ PST:coord_type = 254 domain = 6
xen_acpi_processor: CPU with ACPI ID 13 is unavailable
xen_acpi_processor: ACPI CPU14 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU14 w/ PST:coord_type = 254 domain = 7
xen_acpi_processor: CPU with ACPI ID 15 is unavailable
xen_acpi_processor: ACPI CPU8 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU10 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU12 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU14 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS

As a bonus, here is the previous debug output from the same machine.

xen_acpi_processor: Max ACPI ID: 30
xen_acpi_processor: Uploading Xen processor PM info
xen_acpi_processor: ACPI CPU0 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU0 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU1 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU2 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU2 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU3 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU4 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU4 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU5 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU6 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU6 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU7 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU0 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU0 w/ PST:coord_type = 254 domain = 0
xen_acpi_processor: ACPI CPU1 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU1 w/ PST:coord_type = 254 domain = 0
xen_acpi_processor: ACPI CPU2 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU2 w/ PST:coord_type = 254 domain = 1
xen_acpi_processor: ACPI CPU3 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU3 w/ PST:coord_type = 254 domain = 1
xen_acpi_processor: ACPI CPU4 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU4 w/ PST:coord_type = 254 domain = 2
xen_acpi_processor: ACPI CPU5 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU5 w/ PST:coord_type = 254 domain = 2
xen_acpi_processor: ACPI CPU6 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU6 w/ PST:coord_type = 254 domain = 3
xen_acpi_processor: ACPI CPU7 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU7 w/ PST:coord_type = 254 domain = 3
xen_acpi_processor: ACPI CPU8 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU8 w/ PST:coord_type = 254 domain = 4
xen_acpi_processor: ACPI CPU9 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU9 w/ PST:coord_type = 254 domain = 4
xen_acpi_processor: ACPI CPU10 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU10 w/ PST:coord_type = 254 domain = 5
xen_acpi_processor: ACPI CPU11 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU11 w/ PST:coord_type = 254 domain = 5
xen_acpi_processor: ACPI CPU12 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU12 w/ PST:coord_type = 254 domain = 6
xen_acpi_processor: ACPI CPU13 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU13 w/ PST:coord_type = 254 domain = 6
xen_acpi_processor: ACPI CPU14 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU14 w/ PST:coord_type = 254 domain = 7
xen_acpi_processor: ACPI CPU15 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU15 w/ PST:coord_type = 254 domain = 7
xen_acpi_processor: ACPI CPU8 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU10 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU12 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS
xen_acpi_processor: ACPI CPU14 - P-states uploaded.
xen_acpi_processor:      *P0: 1700 MHz, 2071 mW, 0 uS
xen_acpi_processor:       P1: 1600 MHz, 1520 mW, 0 uS
xen_acpi_processor:       P2: 1400 MHz, 1277 mW, 0 uS


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:17:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:17:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486689.754107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQII-0004eW-VK; Mon, 30 Jan 2023 09:17:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486689.754107; Mon, 30 Jan 2023 09:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQII-0004eP-Se; Mon, 30 Jan 2023 09:17:42 +0000
Received: by outflank-mailman (input) for mailman id 486689;
 Mon, 30 Jan 2023 09:17:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6cZ7=53=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pMQIH-0004eJ-Ci
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 09:17:41 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2052.outbound.protection.outlook.com [40.107.223.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f05462a9-a07e-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 10:17:38 +0100 (CET)
Received: from BN9PR03CA0100.namprd03.prod.outlook.com (2603:10b6:408:fd::15)
 by PH8PR12MB6937.namprd12.prod.outlook.com (2603:10b6:510:1bc::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 09:17:31 +0000
Received: from BN8NAM11FT109.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fd:cafe::3e) by BN9PR03CA0100.outlook.office365.com
 (2603:10b6:408:fd::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36 via Frontend
 Transport; Mon, 30 Jan 2023 09:17:31 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT109.mail.protection.outlook.com (10.13.176.221) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.22 via Frontend Transport; Mon, 30 Jan 2023 09:17:31 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 30 Jan
 2023 03:17:30 -0600
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 30 Jan
 2023 01:17:30 -0800
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 30 Jan 2023 03:17:29 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f05462a9-a07e-11ed-b8d1-410ff93cb8f0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dvBc4O1+7jZxJy/zEhlVActk326zPB+J5uB0hmHB8Agy/I0Gt2OwYjzVR6CIP5fJ2jtszEoUXzCsZNNak4CYmSgRL/qT8Lu7ZK3MBirp5eiZ2ee6j3JtshqbPRV2lHU6B3zasCn/jE8KAAhWjB67hfNWycIJqgntknjnsAC6QaLwOQXbINgtnO655SOyO7HD24MFbv8WmBj4keVqAUHLFdsJ+Cz6slv5fpHku5IDjrhHQI2P5eIDryj4Xn4sq3uC6P3xim26xCIt8s86ie6QAe4vLOXTtCDpN39N5vGe7KOZRYTLtygQLpNxi5gwZnZHr4eDcvfzRQp/p7lN1DJZZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+DkzSkOY4mWSm4jeBZPfXwlxaZh4t6lCNw+L/pSG8Fs=;
 b=JosAiX3gHnACWDO9h083mBx38gb1lwhbyipjRS+ivx+Ztllu+QW7FKE4zORmDOCQgbVm7BAjF2HoB6xRSE4jDqYJ+DaL5WU62ao2MxmN3CUzbxaWARkMSUeRjsqFd3384dEoYnyTbULDY3DdNJDMRh+an69SP39EobAXeeBh4vvGHgXSiV2BM7RO0dCbU2v7pYq6fzbtzbmJ7caRMQmsYVVcnGp2X0uTx4MiIqED0k5Yf7bi/Z8eQUV1xA8M+MUOqWvb3flSE5FFOB9ica96O4n1SGOxCimX6KM7R0p4jCz9Rb2ryRXACH48Zwdm0EI4Z3CM0u+cOSHr0LGohZVXSQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+DkzSkOY4mWSm4jeBZPfXwlxaZh4t6lCNw+L/pSG8Fs=;
 b=cn7F7fUaiDI1PdeyZgEh8fmCH6YkgT6FxDtXYgaL/Vvg0T+pZfY2AG6eWupQl1o0rNgvNFMHmHZj3guOGFNW1V2TZLKtS8U/cm17HCV193rWCOL4ugWBySisAl+Qn1aaaTl4raN71OVQYPVXJ38bGIsHa+Lb7poqH8A+P9Gro5I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <0026b6b7-c3e7-3e5d-a4d8-1cc420de10f5@amd.com>
Date: Mon, 30 Jan 2023 10:17:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v5 4/5] xen/arm64: mm: Rework switch_ttbr()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-5-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230127195508.2786-5-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT109:EE_|PH8PR12MB6937:EE_
X-MS-Office365-Filtering-Correlation-Id: ec517c87-d7c2-4288-3644-08db02a2d0c3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	eJI8pSqtoFFcEVMMVavoHiI21jz9Sc2AntMdTjtF8JbVZSfEEMvI5ZX+QgNu0Ui6VJ1y25fYCp8TLlXGtNnvaRBvWNLhcdalHnfmaTK+ggA5LZbeucOcuj5P01ihlQm9KJXkG7uKUpCuYHdYleQ29zOrbs54mP+m7MC68IuyXBb/JCB2JUTnIyBN6/wS8Q/YPKxR4hryIDNX4mnBL73xT8AMc0E75ZXpjG8VW2k1WwJB3qStLPOZIW8CbYv9Bei37LPaMitA0jdsU1YjkXbaoxnpYfaT8CFFF1oaBr1mWW1DSKhFBbrvGFcQVi03D+z14g65u5BOzi+H25gDTMrXUaO5cRYY/me+1pxA3zcNxPw5ykG0wDVSyL0d5jEcAooqsKSO+i89gfd8JprSMXaTnO9Ri96YCSDSU9cG4l4g1kIBYaMrkLE0ygBXV5HFHJyksOrD3E5gK6E7Ng0UBis7ngv5ie0As8yCkTe9Lb2zbrEzScZtDuIaotXMiDwhr/w9djlm3c3C6h+KUbs4iDp4VdEUWf0JN2m62VDppSQ012X73VImABLXcxdgkV13drxw4ganNuHKH8yu/U8DB/Vi3OlEBnKubFz3j64VMUkQRXaIowFiwkR0x7n5vPbBbj22fPHtADojk1cj+tOx+VAsIas6PwhpA9mt74mXFEnRz+Okl7FtwaIcVJmqhCvxhsDPFvgLxTTlSPjcQW8VQ1io286YGTJyEAVX+TbRZaip41UnSSgFbUGR5RgcAxxpmNULO9xI4OpEVdR+erZ7MbGk0g==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(396003)(136003)(39860400002)(376002)(346002)(451199018)(36840700001)(46966006)(40470700004)(31686004)(70586007)(36756003)(70206006)(8676002)(4326008)(40460700003)(86362001)(31696002)(81166007)(356005)(186003)(478600001)(426003)(336012)(54906003)(83380400001)(53546011)(26005)(2616005)(41300700001)(82310400005)(8936002)(5660300002)(316002)(110136005)(16576012)(47076005)(44832011)(40480700001)(82740400003)(2906002)(36860700001)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 09:17:31.3000
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ec517c87-d7c2-4288-3644-08db02a2d0c3
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT109.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB6937

Hi Julien,

On 27/01/2023 20:55, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
> still on.
> 
> Switching TTBR is like replacing existing mappings with new ones. So
> we need to follow the break-before-make sequence.
> 
> In this case, it means the MMU needs to be switched off while the
> TTBR is updated. In order to disable the MMU, we need to first
> jump to an identity mapping.
> 
> Rename switch_ttbr() to switch_ttbr_id() and create a helper on
> top to temporarily map the identity mapping and call switch_ttbr()
> via the identity address.
> 
> switch_ttbr_id() is now reworked to temporarily turn off the MMU
> before updating the TTBR.
> 
> We also need to make sure the helper switch_ttbr() is part of the
> identity mapping. So move _end_boot past it.
> 
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
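
For context, the break-before-make TTBR switch described in the patch
boils down to roughly the following. This is an illustrative sketch, not
the actual Xen code: the system register names are real arm64 registers
(Xen runs at EL2), but the label and the exact ordering of maintenance
operations are assumptions for illustration only.

```asm
/* Illustrative only -- not the actual patch.  Break-before-make
 * requires the MMU to be off while the TTBR is replaced. */
switch_ttbr_id:
    /* x0 = physical address of the new page tables */
    mrs   x1, sctlr_el2
    bic   x1, x1, #1          /* clear SCTLR_EL2.M: MMU off */
    msr   sctlr_el2, x1
    isb

    msr   ttbr0_el2, x0       /* install the new TTBR */
    isb

    tlbi  alle2               /* discard stale EL2 translations */
    dsb   nsh
    isb

    orr   x1, x1, #1          /* set SCTLR_EL2.M: MMU back on */
    msr   sctlr_el2, x1
    isb
    ret
```

Because the MMU is off between the two SCTLR writes, the code executing
this sequence must itself be reachable through the identity map, which
is why the patch also moves _end_boot past the helper.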



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:18:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:18:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486694.754116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQJN-0005CD-9A; Mon, 30 Jan 2023 09:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486694.754116; Mon, 30 Jan 2023 09:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQJN-0005C6-6M; Mon, 30 Jan 2023 09:18:49 +0000
Received: by outflank-mailman (input) for mailman id 486694;
 Mon, 30 Jan 2023 09:18:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMQJL-0005Bw-Rc
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 09:18:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMQJL-00043O-JP; Mon, 30 Jan 2023 09:18:47 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMQJL-0000At-Bz; Mon, 30 Jan 2023 09:18:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=EvH2GSzh7wb+NC1+AolVd8zL7pfd6ev9Y3/k5Kszeq8=; b=Ajyp+Ko8ZDv1IxBifoFdBPj9gG
	5WjsCD6g0Aru30diFwTlJ3q2BSuEVwbigu7TF4Q58k+QQv8QhpY50heQsvakq6VCc2FOB/Qu0mw9J
	Z+evqpd607q6BAUhI/Uuc1G3GjdyQGLnEjQLmrpQF3EBH4RKD0cw93ggqi9C3Z1PHCaQ=;
Message-ID: <83ea4c36-4762-c06f-28bc-00deedb7efd3@xen.org>
Date: Mon, 30 Jan 2023 09:18:45 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3] xen/public: move xenstore related doc into 9pfs.h
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230130090937.31623-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230130090937.31623-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 30/01/2023 09:09, Juergen Gross wrote:
> The Xenstore related documentation is currently to be found in
> docs/misc/9pfs.pandoc, instead of the related header file
> xen/include/public/io/9pfs.h like for most other paravirtualized
> device protocols.
> 
> There is a comment in the header pointing at the document, but the
> given file name is wrong. Additionally such headers are meant to be
> copied into consuming projects (Linux kernel, qemu, etc.), so pointing
> at a doc file in the Xen git repository isn't really helpful for the
> consumers of the header.
> 
> This situation is far from ideal, which is already proved by the
> fact that neither qemu nor the Linux kernel implements the
> device attach/detach protocol correctly. Additionally the documented
> Xenstore entries don't match reality, as the "tag" Xenstore
> entry is on the frontend side, not on the backend one.
> 
> There is another bug in the connection sequence: the frontend needs to
> wait for the backend to have published its features before being able
> to allocate its rings and event-channels.
> 
> Change that by moving the Xenstore related 9pfs documentation from
> docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
> the wrong Xenstore entry detail and the connection sequence.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - add reference to header in the pandoc document (Jan Beulich)
> V3:
> - fix flaw in the connection sequence

Please don't do that in the same patch. This makes it a lot more
difficult to confirm that the doc movement was correct.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:21:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:21:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486698.754126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQMF-0006ai-Ms; Mon, 30 Jan 2023 09:21:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486698.754126; Mon, 30 Jan 2023 09:21:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQMF-0006ab-KI; Mon, 30 Jan 2023 09:21:47 +0000
Received: by outflank-mailman (input) for mailman id 486698;
 Mon, 30 Jan 2023 09:21:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OGSx=53=oderland.se=josef@srs-se1.protection.inumbo.net>)
 id 1pMQME-0006aT-0L
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 09:21:46 +0000
Received: from vsp04-out.oderland.com (vsp04-out.oderland.com
 [2a02:28f0::30:1]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8320259e-a07f-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 10:21:43 +0100 (CET)
Received: from office.oderland.com (office.oderland.com [91.201.60.5])
 by vsp-out.oderland.com (Halon) with ESMTPSA
 id 826dac8c-a07f-11ed-84da-3167612d0455;
 Mon, 30 Jan 2023 10:21:42 +0100 (CET)
Received: from 160.193-180-18.r.oderland.com ([193.180.18.160]:48568
 helo=[10.137.0.14])
 by office.oderland.com with esmtpsa  (TLS1.3) tls TLS_AES_128_GCM_SHA256
 (Exim 4.95) (envelope-from <josef@oderland.se>) id 1pMQMB-00AhxU-NW;
 Mon, 30 Jan 2023 10:21:42 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8320259e-a07f-11ed-b8d1-410ff93cb8f0
X-Scanned-Cookie: 0162612eaad5b0a05cb4a8d9b4dce14a4c5e9ffb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=oderland.se
	; s=default; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID:Sender:Reply-To:
	Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
	Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
	List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=0ed0B94xsqNqJd94Zwcm9kr+L8XMZvFguOtTUNMulaA=; b=Yfgkk28gf3fJvP5Ju08zAIRzJ5
	3MXCfJonVoX+et3HNf9QiBraYUR1WdaKnr/Vt31bLq2MpkSieV2rEtWh+cqTi6KJuBlzDsKz0WV1W
	wr8PtJiALInhKzd3U4DwZUrY5X4KhB4nReJ1jM9muQQMTQ/4f9p8MkoOi2s1F/732ff1Dq8bivKL5
	uVer+W7VH2ODWRR9aR5wnh1A1pWgMJfuwsOUHZbwKGovxnPGXkZOKk7ZYfR7Y4T0OXf364DZlnfxd
	B4eJyh+gpnyclQZZoOnt3uFTL2cbYszZNyqwRuuhpPiPAzeXl5rnQngv2+IrV0oIw1hHnsxQM4Qcy
	7MoED3aA==;
Message-ID: <952fdc14-a8e5-a59a-9c7d-af1adf361d77@oderland.se>
Date: Mon, 30 Jan 2023 10:21:42 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:108.0) Gecko/20100101
 Thunderbird/108.0
Subject: Re: [PATCH 1/3] acpi/processor: fix evaluating _PDC method when
 running as Xen dom0
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org, x86@kernel.org, linux-acpi@vger.kernel.org
References: <20221121102113.41893-1-roger.pau@citrix.com>
 <20221121102113.41893-2-roger.pau@citrix.com>
From: Josef Johansson <josef@oderland.se>
In-Reply-To: <20221121102113.41893-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
x-oderland-domain-valid: yes

On 11/21/22 11:21, Roger Pau Monne wrote:
> When running as a Xen dom0 the number of CPUs available to Linux can
> be different from the number of CPUs present on the system, but in
> order to properly fetch processor performance related data _PDC must
> be executed on all the physical CPUs online on the system.
>
> The current checks in processor_physically_present() result in some
> processor objects not getting their _PDC methods evaluated when Linux
> is running as Xen dom0.  Fix this by introducing a custom function to
> use when running as Xen dom0 in order to check whether a processor
> object matches a CPU that's online.
>
> Fixes: 5d554a7bb064 ('ACPI: processor: add internal processor_physically_present()')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>   arch/x86/include/asm/xen/hypervisor.h | 10 ++++++++++
>   arch/x86/xen/enlighten.c              | 27 +++++++++++++++++++++++++++
>   drivers/acpi/processor_pdc.c          | 11 +++++++++++
>   3 files changed, 48 insertions(+)
>
> diff --git a/arch/x86/include/asm/xen/hypervisor.h b/arch/x86/include/asm/xen/hypervisor.h
> index 16f548a661cf..b9f512138043 100644
> --- a/arch/x86/include/asm/xen/hypervisor.h
> +++ b/arch/x86/include/asm/xen/hypervisor.h
> @@ -61,4 +61,14 @@ void __init xen_pvh_init(struct boot_params *boot_params);
>   void __init mem_map_via_hcall(struct boot_params *boot_params_p);
>   #endif
>   
> +#ifdef CONFIG_XEN_DOM0
> +bool __init xen_processor_present(uint32_t acpi_id);
> +#else
> +static inline bool xen_processor_present(uint32_t acpi_id)
> +{
> +	BUG();
> +	return false;
> +}
> +#endif
> +
>   #endif /* _ASM_X86_XEN_HYPERVISOR_H */
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index b8db2148c07d..d4c44361a26c 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -346,3 +346,30 @@ void xen_arch_unregister_cpu(int num)
>   }
>   EXPORT_SYMBOL(xen_arch_unregister_cpu);
>   #endif
> +
> +#ifdef CONFIG_XEN_DOM0
> +bool __init xen_processor_present(uint32_t acpi_id)
> +{
> +	unsigned int i, maxid;
> +	struct xen_platform_op op = {
> +		.cmd = XENPF_get_cpuinfo,
> +		.interface_version = XENPF_INTERFACE_VERSION,
> +	};
> +	int ret = HYPERVISOR_platform_op(&op);
> +
> +	if (ret)
> +		return false;
> +
> +	maxid = op.u.pcpu_info.max_present;
> +	for (i = 0; i <= maxid; i++) {
> +		op.u.pcpu_info.xen_cpuid = i;
> +		ret = HYPERVISOR_platform_op(&op);
> +		if (ret)
> +			continue;
> +		if (op.u.pcpu_info.acpi_id == acpi_id)
> +			return op.u.pcpu_info.flags & XEN_PCPU_FLAGS_ONLINE;
> +	}
> +
> +	return false;
> +}
My compiler (default GCC on Fedora 32, compiling for Qubes) complains
loudly that the following was missing:

+}
+EXPORT_SYMBOL(xen_processor_present);

`ERROR: MODPOST xen_processor_present
[drivers/xen/xen-acpi-processor.ko] undefined!`

Same thing with xen_sanitize_pdc in the next patch.

+}
+EXPORT_SYMBOL(xen_sanitize_pdc);

Everything compiled fine after those changes.
> +#endif
> diff --git a/drivers/acpi/processor_pdc.c b/drivers/acpi/processor_pdc.c
> index 8c3f82c9fff3..18fb04523f93 100644
> --- a/drivers/acpi/processor_pdc.c
> +++ b/drivers/acpi/processor_pdc.c
> @@ -14,6 +14,8 @@
>   #include <linux/acpi.h>
>   #include <acpi/processor.h>
>   
> +#include <xen/xen.h>
> +
>   #include "internal.h"
>   
>   static bool __init processor_physically_present(acpi_handle handle)
> @@ -47,6 +49,15 @@ static bool __init processor_physically_present(acpi_handle handle)
>   		return false;
>   	}
>   
> +	if (xen_initial_domain())
> +		/*
> +		 * When running as a Xen dom0 the number of processors Linux
> +		 * sees can be different from the real number of processors on
> +		 * the system, and we still need to execute _PDC for all of
> +		 * them.
> +		 */
> +		return xen_processor_present(acpi_id);
> +
>   	type = (acpi_type == ACPI_TYPE_DEVICE) ? 1 : 0;
>   	cpuid = acpi_get_cpuid(handle, type, acpi_id);
>   



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:24:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:24:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486712.754137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQOL-0007Ex-74; Mon, 30 Jan 2023 09:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486712.754137; Mon, 30 Jan 2023 09:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQOL-0007Eq-44; Mon, 30 Jan 2023 09:23:57 +0000
Received: by outflank-mailman (input) for mailman id 486712;
 Mon, 30 Jan 2023 09:23:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMQOK-0007Ek-3R
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 09:23:56 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d0855678-a07f-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 10:23:54 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7858521A0B;
 Mon, 30 Jan 2023 09:23:53 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 38DF413444;
 Mon, 30 Jan 2023 09:23:53 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id AsJuDKmM12MqaQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 09:23:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0855678-a07f-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675070633; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=yD/ihHce94iSN3d5ir8ilsHhMIbvNXBZefgFxgj5Cpo=;
	b=V2Le3IegQRJ3SzoMc1+1fJzRMcKGDqImnB/aJcgHmAvDcSjrpqHgdislftYFgThlnBpVNw
	4tt2xKTTKd2AqG8xSfTcRjlmORmBaS7/v+ZDc513jspfAByl9IG4iIjYur0vex3q1/BfCv
	Qluy9QmlX7HyvwC0X//TANq5A/WEc1A=
Message-ID: <65757a42-f55b-55e9-4853-4854ecabbfca@suse.com>
Date: Mon, 30 Jan 2023 10:23:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v3] xen/public: move xenstore related doc into 9pfs.h
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230130090937.31623-1-jgross@suse.com>
 <83ea4c36-4762-c06f-28bc-00deedb7efd3@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <83ea4c36-4762-c06f-28bc-00deedb7efd3@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------550xXOwYHDISJOFsd5jC0yuu"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------550xXOwYHDISJOFsd5jC0yuu
Content-Type: multipart/mixed; boundary="------------pz8ODHep3umwCK8ktMeXu4eY";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <65757a42-f55b-55e9-4853-4854ecabbfca@suse.com>
Subject: Re: [PATCH v3] xen/public: move xenstore related doc into 9pfs.h
References: <20230130090937.31623-1-jgross@suse.com>
 <83ea4c36-4762-c06f-28bc-00deedb7efd3@xen.org>
In-Reply-To: <83ea4c36-4762-c06f-28bc-00deedb7efd3@xen.org>

--------------pz8ODHep3umwCK8ktMeXu4eY
Content-Type: multipart/mixed; boundary="------------T3mGt4uhRVlrmy0Mj8RQQX0E"

--------------T3mGt4uhRVlrmy0Mj8RQQX0E
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 30.01.23 10:18, Julien Grall wrote:
> Hi Juergen,
> 
> On 30/01/2023 09:09, Juergen Gross wrote:
>> The Xenstore related documentation is currently to be found in
>> docs/misc/9pfs.pandoc, instead of the related header file
>> xen/include/public/io/9pfs.h like for most other paravirtualized
>> device protocols.
>>
>> There is a comment in the header pointing at the document, but the
>> given file name is wrong. Additionally such headers are meant to be
>> copied into consuming projects (Linux kernel, qemu, etc.), so pointing
>> at a doc file in the Xen git repository isn't really helpful for the
>> consumers of the header.
>>
>> This situation is far from ideal, which is already being proved by the
>> fact that neither qemu nor the Linux kernel are implementing the
>> device attach/detach protocol correctly. Additionally the documented
>> Xenstore entries are not matching the reality, as the "tag" Xenstore
>> entry is on the frontend side, not on the backend one.
>>
>> There is another bug in the connection sequence: the frontend needs to
>> wait for the backend to have published its features before being able
>> to allocate its rings and event-channels.
>>
>> Change that by moving the Xenstore related 9pfs documentation from
>> docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
>> the wrong Xenstore entry detail and the connection sequence.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - add reference to header in the pandoc document (Jan Beulich)
>> V3:
>> - fix flaw in the connection sequence
> 
> Please don't do that in the same patch. This makes a lot more difficult to
> confirm the doc movement was correct.

You have to check it manually anyway, as the comment format is prefixing
" * " to every line.

Its not as if the changes would be THAT complicated, and the commit message
is quite clear WHAT is changed.

I can do the split, of course, if you really need that (in this case I'd
end up with 3 patches, one for the move, one for the "tag" entry, and one
for the fix of the connection sequence).


Juergen
--------------T3mGt4uhRVlrmy0Mj8RQQX0E--

--------------pz8ODHep3umwCK8ktMeXu4eY--

--------------550xXOwYHDISJOFsd5jC0yuu--


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:33:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:33:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486722.754147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQXL-0000Qt-2U; Mon, 30 Jan 2023 09:33:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486722.754147; Mon, 30 Jan 2023 09:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQXK-0000Qm-W1; Mon, 30 Jan 2023 09:33:14 +0000
Received: by outflank-mailman (input) for mailman id 486722;
 Mon, 30 Jan 2023 09:33:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VtRO=53=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pMQXK-0000Qg-82
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 09:33:14 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c5eaea0-a081-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 10:33:11 +0100 (CET)
Received: from AS8PR04CA0071.eurprd04.prod.outlook.com (2603:10a6:20b:313::16)
 by DB9PR08MB9561.eurprd08.prod.outlook.com (2603:10a6:10:452::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Mon, 30 Jan
 2023 09:33:03 +0000
Received: from AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:313:cafe::ba) by AS8PR04CA0071.outlook.office365.com
 (2603:10a6:20b:313::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36 via Frontend
 Transport; Mon, 30 Jan 2023 09:33:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT037.mail.protection.outlook.com (100.127.140.225) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.21 via Frontend Transport; Mon, 30 Jan 2023 09:33:03 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Mon, 30 Jan 2023 09:33:03 +0000
Received: from ffd0fb05013f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BF066034-C8E5-43B7-90A7-77406A2D7E9F.1; 
 Mon, 30 Jan 2023 09:32:53 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ffd0fb05013f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 30 Jan 2023 09:32:53 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAWPR08MB8840.eurprd08.prod.outlook.com (2603:10a6:102:339::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 09:32:48 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 09:32:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c5eaea0-a081-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dibFMqZvR7Rnflsae3OShO/SukIWjWcvhwJPpuUDoes=;
 b=GSAmXcY/EjAnxOcb2Bg6edkfvfgA0jodgnCmu2mOF2u8TY+YRcLkETfFz4hRuRTxACGIRXN996bzK40wLlEf8cL3LCJkYNtFf7upO/LSzYzI3OKhm2MkA4CGCdVwFNcbHzhxYWQMApSq/bs+aIkX8008dYTkIDW0q3nqGEj0a3U=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 577e688602667a73
X-CR-MTA-TID: 64aa7808
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <stefano.stabellini@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>, "roger.pau@citrix.com" <roger.pau@citrix.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, "julien@xen.org"
	<julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
Thread-Topic: [PATCH] Add more rules to docs/misra/rules.rst
Thread-Index:
 AQHZMP/Bwov1pLqo70y7UrD5jvL/DK6waZaAgAByqQCAAAyrgIAAJGIAgADVWICAALcHAIAD/qYAgAAhQ4A=
Date: Mon, 30 Jan 2023 09:32:48 +0000
Message-ID: <2733D8BE-CCA5-4322-BB9B-1D4318378525@arm.com>
References: <20230125205735.2662514-1-sstabellini@kernel.org>
 <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com>
 <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop>
 <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
 <alpine.DEB.2.22.394.2301261041440.1978264@ubuntu-linux-20-04-desktop>
 <db97da84-5f23-e7ed-119b-74aed02fb573@suse.com>
 <alpine.DEB.2.22.394.2301271016360.1978264@ubuntu-linux-20-04-desktop>
 <03ce9f48-191e-b1b5-a3b2-8b769aa8feeb@suse.com>
In-Reply-To: <03ce9f48-191e-b1b5-a3b2-8b769aa8feeb@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAWPR08MB8840:EE_|AM7EUR03FT037:EE_|DB9PR08MB9561:EE_
X-MS-Office365-Filtering-Correlation-Id: fc96b988-9b91-42f8-a404-08db02a4fc58
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <743C82DB9D84E3459A25006AFD85DE2C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB8840
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	182a65f5-dba7-43dd-11d9-08db02a4f35d
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 09:33:03.4148
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fc96b988-9b91-42f8-a404-08db02a4fc58
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9561



> On 30 Jan 2023, at 07:33, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 27.01.2023 19:33, Stefano Stabellini wrote:
>> On Fri, 27 Jan 2023, Jan Beulich wrote:
>>> On 26.01.2023 19:54, Stefano Stabellini wrote:
>>> Looking back at the sheet, it says "rule already followed by
>>> the community in most cases" which I assume was based on there being
>>> only very few violations that are presently reported. Now we've found
>>> the frame_table[] issue, I'm inclined to say that the statement was put
>>> there by mistake (due to that oversight).
>> 
>> cppcheck is unable to find violations; we know cppcheck has limitations
>> and that's OK.
>> 
>> Eclair is excellent and finds violations (including the frame_table[]
>> issue you mentioned), but currently it doesn't read configs from xen.git
>> and we cannot run a test to see if adding a couple of deviations for 2
>> macros removes most of the violations. If we want to use Eclair as a
>> reference (could be a good idea) then I think we need a better
>> integration. I'll talk to Roberto and see if we can arrange something
>> better.
>> 
>> I am writing this with the assumption that if I could show that, as an
>> example, adding 2 deviations reduces the Eclair violations down to less
>> than 10, then we could adopt the rule. Do you think that would be
>> acceptable in your opinion, as a process?
> 
> Hmm, to be quite honest: Not sure. Having noticed the oversight of the
> frame_table[] issue makes me wonder how much else may be missed in this
> same area (18.1, 18.2, and 18.3).

Hi Jan,

I think I recall the frame_table[] issue, but I was looking into the Eclair
reports to understand it better and I was unable to find it. Do you recall
where the tool was complaining about the 18.2 violation related to
frame_table[]? Any notes or links are appreciated; maybe we could speak with
Roberto to understand it better, because I checked with Coverity and I was
unable to link findings of 18.2 with the symbol frame_table[] (however, I
might be a bit lost in all the macros).

Thank you.

Cheers,
Luca


> 
> Jan




From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:37:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:37:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486727.754156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQb3-0001Bb-Hm; Mon, 30 Jan 2023 09:37:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486727.754156; Mon, 30 Jan 2023 09:37:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQb3-0001BU-FD; Mon, 30 Jan 2023 09:37:05 +0000
Received: by outflank-mailman (input) for mailman id 486727;
 Mon, 30 Jan 2023 09:37:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMQb1-0001BM-RZ
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 09:37:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMQb1-0004PL-Hx; Mon, 30 Jan 2023 09:37:03 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMQb1-0000r0-BP; Mon, 30 Jan 2023 09:37:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=zlHs4sQPioNGt/jm0p8NQqKlLxy5Zf7TJ4QtHn0s63U=; b=S/TEn2pWoWiXHOuxArnurzZTnR
	GA2R8avJ4yEn0ZzaYkAGRJjq17O8LF/CJ05kxsykVBUwuE3ngdYBLXIa7CTayXz5AbF4Q9K+Ig73d
	7f0p+p/16/J4Cb+Ueo35mRI2qk/QfYqSUUeCODKK2aYwEbX/k8/NZxbCyoYrx6qotMUc=;
Message-ID: <c09e6d5e-8474-3c53-6d40-35a9e41863c7@xen.org>
Date: Mon, 30 Jan 2023 09:37:01 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3] xen/public: move xenstore related doc into 9pfs.h
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230130090937.31623-1-jgross@suse.com>
 <83ea4c36-4762-c06f-28bc-00deedb7efd3@xen.org>
 <65757a42-f55b-55e9-4853-4854ecabbfca@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <65757a42-f55b-55e9-4853-4854ecabbfca@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 30/01/2023 09:23, Juergen Gross wrote:
> On 30.01.23 10:18, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 30/01/2023 09:09, Juergen Gross wrote:
>>> The Xenstore related documentation is currently to be found in
>>> docs/misc/9pfs.pandoc, instead of the related header file
>>> xen/include/public/io/9pfs.h like for most other paravirtualized
>>> device protocols.
>>>
>>> There is a comment in the header pointing at the document, but the
>>> given file name is wrong. Additionally such headers are meant to be
>>> copied into consuming projects (Linux kernel, qemu, etc.), so pointing
>>> at a doc file in the Xen git repository isn't really helpful for the
>>> consumers of the header.
>>>
>>> This situation is far from ideal, which is already being proved by the
>>> fact that neither qemu nor the Linux kernel are implementing the
>>> device attach/detach protocol correctly. Additionally the documented
>>> Xenstore entries are not matching the reality, as the "tag" Xenstore
>>> entry is on the frontend side, not on the backend one.
>>>
>>> There is another bug in the connection sequence: the frontend needs to
>>> wait for the backend to have published its features before being able
>>> to allocate its rings and event-channels.
>>>
>>> Change that by moving the Xenstore related 9pfs documentation from
>>> docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h while fixing
>>> the wrong Xenstore entry detail and the connection sequence.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V2:
>>> - add reference to header in the pandoc document (Jan Beulich)
>>> V3:
>>> - fix flaw in the connection sequence
>>
>> Please don't do that in the same patch. This makes it a lot more
>> difficult to confirm the doc movement was correct.
> 
> You have to check it manually anyway, as the comment format is prefixing
> " * " to every line.
> 
> It's not as if the changes are THAT complicated, and the commit message
> is quite clear about WHAT is changed.

You are breaking two of the conventional rules [1]:
   1. A patch should only do one logical thing
   2. Don't mix code clean-up and code changes

While there are a couple of cases where we tolerate it, I think doing
code clean-up and code movement at the same time should always be avoided.

> 
> I can do the split, of course, if you really need that (in this case I'd
> end up with 3 patches, one for the move, one for the "tag" entry, and one
> for the fix of the connection sequence).

That would be my preference: the simpler the patches are, the easier
they are to review.

Cheers,

[1] 
https://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches#Break_down_your_patches_appropriately

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:40:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:40:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486733.754167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQdk-0001qI-3Y; Mon, 30 Jan 2023 09:39:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486733.754167; Mon, 30 Jan 2023 09:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQdk-0001qB-0E; Mon, 30 Jan 2023 09:39:52 +0000
Received: by outflank-mailman (input) for mailman id 486733;
 Mon, 30 Jan 2023 09:39:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMQdi-0001q5-Hy
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 09:39:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMQdi-0004Ru-2X; Mon, 30 Jan 2023 09:39:50 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMQdh-0000zo-RR; Mon, 30 Jan 2023 09:39:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=v6TdHBz16Es+9YR2QApBuXa/DHFCu0kf9+5sfD+y4J8=; b=ULxjKcAwb7ehfBM+ImRinZlyVO
	lgf8EoW+98D8IoN3FH5SFHfO7nwYkGHsddgE3OqcLMGQ492PK79Yr1985P+pEdTcRx/n8rFDSpHwZ
	k1Hw6WdXh+1mZPkHTDO35KByHukils11oq3hEzO5SBn2SIQa6fNnPUwB83D/S8W3yBtw=;
Message-ID: <3c915633-ddb8-d1e4-f42e-064aaff168b2@xen.org>
Date: Mon, 30 Jan 2023 09:39:48 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
 <c30b4458-b5f6-f996-0c3c-220b18bfb356@xen.org>
 <AM0PR08MB453083B74DB1D00BDF469331F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <7931e70f-3754-363c-28d8-5fde3198d70f@xen.org>
 <AM0PR08MB45308D5CD69EBB5FE85A4B88F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB45308D5CD69EBB5FE85A4B88F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Penny,

On 30/01/2023 05:45, Penny Zheng wrote:
>  There are three types of MPU regions at boot time:
> 1. Fixed MPU regions
> Regions like the Xen text section, Xen heap section, etc.
> 2. Boot-only MPU regions
> Regions like the Xen init sections, etc. They will be removed at the end of boot.
> 3. Regions that need switching in/out during vcpu context switch
> Regions like the system device memory map.
> For example, for FVP_BaseR_AEMv8R, we have [0x80000000, 0xfffff000) as
> the whole system device memory map for Xen (idle vcpu) in EL2. When
> context switching to a guest vcpu, it shall be replaced with guest-specific
> device mappings, like vgic, vpl011, passthrough devices, etc.
> 
> We don't have two mappings for the different translation stages in the MPU, like we had in the MMU:
> Xen's stage 1 EL2 mapping and the stage 2 mapping share one MPU memory mapping (xen_mpumap).
> So to save the trouble of hunting down each switching region in the time-sensitive context switch, we
> must re-order xen_mpumap to keep the fixed regions in the front, with the switching ones on their heels.

From my understanding, hunting down each switching region would be a
matter of looping over a bitmap. There will be a slight increase in the
number of instructions executed, but I don't believe it will be noticeable.
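Looping over such a bitmap could look roughly like this (a minimal sketch with hypothetical names and bookkeeping; not the actual Xen code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical bookkeeping: one bit per MPU slot that must be
 * swapped on vcpu context switch (not the real Xen structures). */
static uint64_t switching_regions;

/* Visit every switching region by scanning the bitmap; returns the
 * number of regions visited. */
static unsigned int for_each_switching_region(void (*fn)(unsigned int slot))
{
    unsigned int count = 0;
    uint64_t map = switching_regions;

    while ( map )
    {
        unsigned int slot = (unsigned int)__builtin_ctzll(map); /* lowest set bit */

        if ( fn )
            fn(slot);

        map &= map - 1; /* clear that bit */
        count++;
    }

    return count;
}
```

The loop body is a couple of bit operations per region, which is the "slight increase in instructions" being weighed against reordering the whole map.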

> 
> In patch series v1, I was adding MPU regions in sequence, and I introduced a set of bitmaps to record the locations of
> same-type regions. At the end of boot, I needed to *disable* the MPU to do the reshuffling, as I can't
> move regions like the Xen heap while the MPU is on.
> 
> And we discussed that it is too risky to disable MPU, and you suggested [1]
> "
>> You should not need any reorg if you map the boot-only section towards in
>> the higher slot index (or just after the fixed ones).
> "

Right. Looking at the new code, I realize that this was probably a bad
idea because we are adding complexity to the assembly code.

> 
> Maybe in assembly we know exactly how many fixed and boot-only regions there are, but in C code we parse the FDT
> to get the static configuration, so we don't know, e.g., how many fixed regions are enough for the Xen static heap.
> Approximation is not advisable on an MPU system with a limited number of MPU regions; some platforms may only have 16,
> so IMHO it is not worth wasting any of them on approximation.

I haven't suggested using approximation anywhere here. I will answer
about the limited number of entries in the other thread.

> 
> So I took the suggestion of putting regions at the higher slot indexes: fixed regions in the front, and
> boot-only and switching ones at the tail. Then, at the end of boot, when we reorg the MPU mapping, we remove
> all boot-only regions, and for the switching ones, we disable-relocate(after the fixed ones)-enable them. Specific code in [2].
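The end-of-boot reorg described above can be sketched as a toy table compaction (hypothetical types and slot layout; not the code from the series):

```c
#include <assert.h>

#define MAX_REGIONS 16

/* Hypothetical region table: fixed entries at the front, boot-only
 * and switching entries behind them (not the real xen_mpumap). */
enum region_type { REGION_UNUSED, REGION_FIXED, REGION_BOOT, REGION_SWITCH };

static enum region_type xen_mpumap[MAX_REGIONS];

/* End-of-boot reorg: drop boot-only regions and compact the switching
 * ones so they sit right after the fixed block; returns the number of
 * live regions afterwards. */
static unsigned int reorg_mpumap(void)
{
    unsigned int i, next = 0;

    /* Fixed regions are already at the front; find the first slot after them. */
    while ( next < MAX_REGIONS && xen_mpumap[next] == REGION_FIXED )
        next++;

    for ( i = next; i < MAX_REGIONS; i++ )
    {
        if ( xen_mpumap[i] == REGION_BOOT )
            xen_mpumap[i] = REGION_UNUSED;
        else if ( xen_mpumap[i] == REGION_SWITCH )
        {
            /* The "disable-relocate-enable" step, reduced to a move. */
            if ( i != next )
            {
                xen_mpumap[next] = REGION_SWITCH;
                xen_mpumap[i] = REGION_UNUSED;
            }
            next++;
        }
    }

    return next;
}
```

After the reorg, the table holds the fixed block followed immediately by the switching block, so a context switch only needs to touch a contiguous tail of slots.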

From this discussion, it feels to me that you are trying to make the
code more complicated just to keep the split and save a few cycles (see
above).

I would suggest investigating the cost of "hunting down each section".
Depending on the result, we can discuss what the best approach is.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:40:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:40:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486737.754177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQeN-0003AA-DQ; Mon, 30 Jan 2023 09:40:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486737.754177; Mon, 30 Jan 2023 09:40:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQeN-0003A3-8t; Mon, 30 Jan 2023 09:40:31 +0000
Received: by outflank-mailman (input) for mailman id 486737;
 Mon, 30 Jan 2023 09:40:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMQeM-00039t-MD; Mon, 30 Jan 2023 09:40:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMQeM-0004US-LK; Mon, 30 Jan 2023 09:40:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMQeM-0002eH-9x; Mon, 30 Jan 2023 09:40:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMQeM-0005US-9P; Mon, 30 Jan 2023 09:40:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d5hdPdqMTYVgwV/HejBjilYGODWBk5Z6oPyEEGQ+2HQ=; b=pZleCPGjNoE6V8Lz76TJ6O8hgq
	wVbmAWYAe+y1qZhs5j9Wg68AONwDFhAAdWA3oM+jFLzs0dVCiQ684UmcKQ6PGIdzWpYLDwOD9GX+A
	ekud1ErWkWVktYlBuTfKBjGNq6bstfI3NcH6eQYJri7Wq8kwQPTz7GvUupHUMiKO71Zs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176278-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176278: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=bb1376254803bfdaa012c62f1cf6d6b26161cfe7
X-Osstest-Versions-That:
    ovmf=e7aac7fc137e247edad22f7ee53b9a1fba227397
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 09:40:30 +0000

flight 176278 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176278/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 bb1376254803bfdaa012c62f1cf6d6b26161cfe7
baseline version:
 ovmf                 e7aac7fc137e247edad22f7ee53b9a1fba227397

Last test of basis   176248  2023-01-27 14:45:33 Z    2 days
Testing same since   176278  2023-01-30 07:40:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Pierre Gondois <pierre.gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e7aac7fc13..bb13762548  bb1376254803bfdaa012c62f1cf6d6b26161cfe7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 09:57:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 09:57:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486753.754187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQuO-00055l-Ne; Mon, 30 Jan 2023 09:57:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486753.754187; Mon, 30 Jan 2023 09:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQuO-00055e-Kg; Mon, 30 Jan 2023 09:57:04 +0000
Received: by outflank-mailman (input) for mailman id 486753;
 Mon, 30 Jan 2023 09:57:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vILW=53=bounce.vates.fr=bounce-md_30504962.63d7946b.v1-e8351b53cc5b4ae388825a33c694b60f@srs-se1.protection.inumbo.net>)
 id 1pMQuM-00055W-D2
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 09:57:02 +0000
Received: from mail134-17.atl141.mandrillapp.com
 (mail134-17.atl141.mandrillapp.com [198.2.134.17])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 705bde08-a084-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 10:57:00 +0100 (CET)
Received: from pmta10.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail134-17.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4P53Vz2yvRzRKLfBF
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 09:56:59 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 e8351b53cc5b4ae388825a33c694b60f; Mon, 30 Jan 2023 09:56:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 705bde08-a084-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1675072619; x=1675130219; i=marc.ungeschikts@vates.fr;
	bh=wvk8CLtqhuNiAa/XgR/raZGyD9MDSLS1htgmz2HjxR4=;
	h=From:Subject:Message-Id:To:Feedback-ID:Date:MIME-Version:
	 Content-Type:CC:Date:Subject:From;
	b=UPGXFyhHrutZ/Z1ZlRfulCag6Kr5MzPPgZdPBETrhWSTSNpu9M83bMuVu22hPSmfO
	 kKQfXcwpp/e2VYbgMXpyEX2i7rc8cpnew8aNqPqLaBNgn2k6sc0KZQ4UMdpEGYLqTE
	 LlZnmK1L/p88vtDldqW6FWUtbDljX6foyFlztJ5E=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1675072619; h=From : 
 Subject : Message-Id : To : Date : MIME-Version : Content-Type : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=wvk8CLtqhuNiAa/XgR/raZGyD9MDSLS1htgmz2HjxR4=; 
 b=Id9D6VHcecrUQiC/V1DCdN7BR2L9rw9GA80EziZYGS4zK9mK4AK+qSP1jQTwHW7swogeuO
 Tar/BuBII/0zEunc1NbWHn3IZXGNXd4Ywn1zv3ANrFXJHWsBElsCt9Bx2ZaOzSiDWfs9PfCU
 YZnZel31Dc65uLFUH2Bw5wOW9pKas=
From: Marc Ungeschikts <marc.ungeschikts@vates.fr>
Subject: Weekly meeting - Xen Gitlab Issues Review
X-Bm-Draft-Refresh-Date: 1675072617542
X-Bm-Internal-Id: DE8B30B6-117C-4174-8780-9ABFF47FC58A#bluemind-4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2:1138650
X-Bm-Previous-Body: 37a2c48e760562d170c075e3336f7d408524748a
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 545ba458-3bed-4745-8d49-e993459f60be
X-Bm-Transport-Timestamp: 1675072617903
Message-Id: <ldimw71r.3hc2nsooowo00@vates.fr>
To: xen-devel@lists.xenproject.org
X-Report-Abuse: Please forward a copy of this message, including all headers, to abuse@mandrill.com
X-Report-Abuse: You can also report abuse here: http://mandrillapp.com/contact/abuse?id=30504962.e8351b53cc5b4ae388825a33c694b60f
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230130:md
Date: Mon, 30 Jan 2023 09:56:59 +0000
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="_av-6LUygm_hxK3KEwlLb3Dblg"

--_av-6LUygm_hxK3KEwlLb3Dblg
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Hi everybody, last Friday, during the Backlog Review meeting, we decided to schedule a weekly meeting every Friday at 14:00 (UTC) to talk about Xen Gitlab issues (https://gitlab.com/groups/xen-project/-/issues): discussion, grooming, triage, ...
Jitsi Room: https://meet.jit.si/XenIssuesReview
You are all welcome, especially developers and maintainers.

Marc Ungeschikts | Vates Project Manager
Mobile: 0613302401
XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com

VCDABIri+JZ6+OqdtBhuQrlrR+dYd4h7yLcEHSTd6GhVPUU6dIIdc32W6JB5
KhBQChiVcNkrUXTW0ntoIBD46+5YF/jBqlPOSAbgAT4qmExybiIQKSivwwUZ
/wSzUCtXA4euhU4VUCaWKTjA4mR1sK8xsL2eqhusynk9owPwogPTQWZ1vGPQ
CcDdIU7eFhg6NYOmKur5o7goq463YDhXx1gM/MIaa8fXIxp0/C9REs9HRX6r
NwThJWEAljiJ7xw0Cp00xG1TPwfFAHLpKQl/dI6YcL8tacPiwHEnJ4zPq8/4
OrxendSGhXHEEMEAmHudF8bQaopKMfgPfTl0Bkgf2TqdnpG+Jplzs863JTFj
WAcI6w6ppJ0SxHyGjHVqGL4qwiF4DZgD2YAGzF0lsOgwon/1oAjIfjC2nRxX
Uzg40jKMoX8sIicHpP5Xwt+9k3VeUT1oAcHgVoePMRSNFGZ6W1GA3/S47L11
GnRfaYqExmG5UtKR8gdlVGsj/4GloazLDAj6YbaxemTdODqkuxMG0M0MFqu2
SIwTtWfqBRAJrwUyzopVrl6HjRpRIEKv8wE2wKWrHgTJAPsEtfNo0/hFAC3A
Dd6esVEb2lssgzlSYHCgR4Uj+6oIXk6vMo5oBiji8XGQxLWqr6oqyaYjZxVR
ON5OiLW+0HpDMafgxI2FSIOSeppGbSkdgQw6NlGhbAxreMeaO9WMpNGZ0QAJ
AcctHnUVECpqEIfLaMT8XQeug69wUQvomoa9cGkfpRy0HgSo2gdeRY3apOkO
omIKNkoI4Djqj4mLQH6IMenmOg+FRHRfm1YXkiTN/nZHC1WDBtisLtLLr0Ta
a/9rCWYBA234LrUukVZd54x76L7qyaG+Gf+R1D0oCqAJB6LKRyVoPxysQanO
gXyJ0seGzZIgSO9MDzGjng0loxNuL46eGh1oOu1yIowBpV58IwA2H4ztuhje
ojhcifs2ryOIhqeaLOEcOquBqx3ECqbhR+BMzWhHKf6zVmaE2rHWUUuSCWDG
HWXtCZMDSU0BHX+N8EBWGwpNzdB6DFldWk2qFykdMgs80UEFPV4RElS6VnAL
o1d03EQa0MAc9YmpRcJraHxyBeGIEGZygQVbr5BAjl+XwWkd3mkcTd7rMFrL
VZ6yaN80KuSAdWuJggb+NCOdVLMpYmJASKikkqZEdDbD5DtAiKhUnV+7EcA1
LItsaBfd7N/h9AZxdHinvTsl9RPPtNvMReU+agY/jhQbRB8h7SW1PseQgYC/
QcYI4arJJ6+UcZADR5CRdlHaDv9AxWR1RcVi0XMPUl590KTm45MAY/WlE8Y1
Te0Jo5W0CbVsoF7xjy0Z1oWrl/IFaTDeB9ISiPRGQ3VM1vo+VMQ6Bgz3BelR
cU5OJQ17khIVgoZBXQ1j/vgp4jJMx60PAg0FopFXHSSHvX/BUfcTkJazovrr
ADeVeXwt1BdXguOTpChFKoG+u55+MnRgkmMrrIK5ObAAynVZaBKRUbfzJrKO
L0qRv8znzWYPqQ+qAk5d2u7FVq2rJ1h0ejXorNzU0SOdxUchfNr2k4T34zW3
oZyQ/3U379/9dDdEiNSIHnlBA189VVR1RgM8itPrgRLteQ30DTaEbNE5Edwe
0TAdTAe+HrqC76yfd6QsBMfoyHc+CBSqs7LUISAD9pT3bu1PuLAAhKLTVwlU
Oq69XUDyyriYWlIr6qDYVOcB8CezqqaFatDO7tQzEg2hijCLIyxLrFXW6VY1
NKtcjI7HnWY68IPa1uPDUYVXtUcjh4cy0qnTI+Ir/etRC1Dp+0Cn037hMZ3H
BWh6fudXoQ2DjZAgEABGtqFetSGQZPt1bkb9IeEaK7tX0IFFlL8ekEG/qpmv
apOOkAJfcELSqalS9QSOdm6pTuqaCUfspajcK+6EHVGfdR7yneE22SB8xL8y
qGJeH5PgpNsnhYoO0vDm9jmAMtunb4Oiso8XyThe7S/r5HlSH2ZVrPJRSwQx
KfV+f7qX7pTePljXoYwE1zqv1rYe22lBp+fOmDGKei7CVU/qaIOx+dfVwG1o
r1znqV+3cSMATMdaqEamtq72unb13evgWJL1U/97+Pf4VxVibz0uhmYkJUER
zAq6O8hg6/kHaGLrFKueWwHM/e46fagng8DrY6anGfaV4FDL8zA+dC6qwN+q
xpT8MchedXwKh39c0JNoF87ziMrxtcJ/WV/yoSFu4LQGsthmHjAy6by1PYfW
06FrH7MjrlsgtbxBuLpQ1YXa94Xi8xAe5sWLBsKuh7tC8Tr7dpCNkJ2O/Ban
58Z+D3R+GdF+itJVH4zqgEYoJ27WdTRCJ4sA5ekgXC9/iO3QU9wqzS6JjKFH
FCDRCxoFhAgvcXw0dZd0EpbYlxqwhjrjMrHrkyqbpdg7mzY1h+/xa1PqGNZF
zxOyKKbVxfLrUbKLdiHZHy7BVEkPntSHuwnB8g0QQ8ckGqrkarfyqeyLtAJN
sMNaXD2F0CAK7VagxKEj70LQI5RepwcwzGLtGrInq/bRc80jk/aoBPQs5I7O
CP6CmTfFrtPwwDmSTeN0fx7oH8fZPuP8HA75daSSfnok1mIEzScJl7yIcckV
Yt/6x6v9SIsf+RXV+s/Wi/inJu+yTvvoZKwOMGRI8Wh3HJcHStX/cYFr1Olt
nVsPlLjTg+AYvPQNEXt9rfBn1GGzolx+NsGDrqPNdj1yMdfrISRIXgF3wi0d
dQnv2EGbOYK0oANiE7/3+R8ElKwjOkYt6fzJ1rFFCAiunF0n8UH105x6u/2d
s6CueC8aZWLowSdqyy/R6aDGk/oeOvgdYNzp/5L48QOFTl2ydzud290hYIn1
UDbqdH5qoiblpU72j6ND5qHgS1Fs1D1LGEkhYgOLpLXD3GCQThoWSBHmRymq
laltPcjToi549PRbVAcpyhz0jiGfgUigglie5CKUixIbb1cQw6fI7qXnPYcU
NCUStU/lwxZSEr4aV44WMWnLrg5FA8iIdxfWMp20g5HnV3+PNfru7wVpGSbX
1HUrUpZVR9JT1qF9eHRR2k197eFsjIYi9NrllvhHNlxtwUG5V8dZcfWsfHzN
voSEfE8x1iZnhvOyZ9DWTlf7/QxnUjMIh4iIZL5JZ1dJEGgCOga3buO1Tm1V
PYnbSSevTTu15hIXJufV0QLS9ThxgVWv/tcEEF97nRLtRkxZS/8auuTMfhpC
F23Ub1ZTOWtnlDjqqOd7qIvKQ4aOT6niuD+F8Kb5Cpzw4z4/Wxlo3vg6pNpy
YbV0cMnFsHgbUqYundrX8TcdhD9BnZC3Al/X/1z9c23/dfWmq+M8qCLtZcNw
aBltgOgJwaRBH1YTL+vzeyQiqxdWt9ScpZOzMr8h8VLX/75BveU+HHX1lqEh
CFG4lpCG2p0ZMLE0vp7mJXVhx04IR2N9vY6XHD07YnpuRKSxigNXq8zX2xH+
vmrQFnkXk34vw4hSNDuCpkl2IOg5Gj1lgfE8A3PiekNB6Fk+VgJHTnVsMbSa
Fd+XlXq1PpYdibF1SalsH/qa5VOy+SLYS9HxFfWlNtUBs+p8BUyCDvRBZk9t
MSipvP8XRg7azQhJj1kyzLNg213PdTCeTlJTE72mlMZ77GzrkRXBjXb71ipd
55a2Glx6diuj9H1epWHCxJuqvUr4jzr1yKKES1xYftMeI4HzehSydmEette/
p41SVtWwGClRy9pe1lnzBMAyIrTJ0mGLIypX6Lok8ce4EDxdtr3LAgXaTBp6
4Kd1oq4d/2PKkFrRR11PAp7wykec4aGTqFVO611svYfQ9NTppzxIO6FMHDJk
/bwD8Se497gOfF3L0YYBWBBkipUwOuZetBuvDa5P1akzrII4uF21OLWptgZO
6TidysQmyAlcM+0X6lQQmhEIkGBtepZ963jC1tEZkinO9lkJPTvyulkarvtY
AkYshwUpH4nHV0B6qF3PFyQ9w9yE0jonpHOJeuzWH6G8nod4hd0cNbY//SPU
ivS52PKsqh/DhTZQhbdKLSMr9RzK0D4j2ZIMnNl6eNSTqdXpiT+CgEfpLHuT
avpEuuAFlKY6Y+kTSLBAfjSf9uY/R7DexlQNldie5ChkcdeM7+i72nY23H8D
uWZ05wi91FUAAAGEaUNDUElDQyBwcm9maWxlAAB4nH2RPUjDQBzFX1O1IhUH
C4o4ZKhOFqSKOGoVilAh1AqtOphc+iE0aUhSXBwF14KDH4tVBxdnXR1cBUHw
A8TRyUnRRUr8X1JoEePBcT/e3XvcvQOEeplpVsc4oOm2mU4mxGxuRQy9ogsD
iCCEuMwsY1aSUvAdX/cI8PUuxrP8z/05etW8xYCASDzDDNMmXiee2rQNzvvE
EVaSVeJz4jGTLkj8yHXF4zfORZcFnhkxM+k54gixWGxjpY1ZydSIJ4mjqqZT
vpD1WOW8xVkrV1nznvyF4by+vMR1msNIYgGLkCBCQRUbKMNGjFadFAtp2k/4
+Idcv0QuhVwbYOSYRwUaZNcP/ge/u7UKE3EvKZwAOl8c52MECO0CjZrjfB87
TuMECD4DV3rLX6kD05+k11pa9Ajo2wYurluasgdc7gCDT4Zsyq4UpCkUCsD7
GX1TDui/BXpWvd6a+zh9ADLUVeoGODgERouUvebz7u723v490+zvB3gHcqkl
oKXxAAAN/WlUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPD94cGFja2V0IGJl
Z2luPSLvu78iIGlkPSJXNU0wTXBDZWhpSHpyZVN6TlRjemtjOWQiPz4KPHg6
eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1Q
IENvcmUgNC40LjAtRXhpdjIiPgogPHJkZjpSREYgeG1sbnM6cmRmPSJodHRw
Oi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4KICA8
cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgeG1sbnM6eG1wTU09
Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iCiAgICB4bWxuczpz
dEV2dD0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL3NUeXBlL1Jlc291
cmNlRXZlbnQjIgogICAgeG1sbnM6ZGM9Imh0dHA6Ly9wdXJsLm9yZy9kYy9l
bGVtZW50cy8xLjEvIgogICAgeG1sbnM6R0lNUD0iaHR0cDovL3d3dy5naW1w
Lm9yZy94bXAvIgogICAgeG1sbnM6dGlmZj0iaHR0cDovL25zLmFkb2JlLmNv
bS90aWZmLzEuMC8iCiAgICB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5j
b20veGFwLzEuMC8iCiAgIHhtcE1NOkRvY3VtZW50SUQ9ImdpbXA6ZG9jaWQ6
Z2ltcDo5NmE3ZjI0MS1lMjNjLTRiMWEtOTdjZS1kNmU2NjliOTk4ZTIiCiAg
IHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6MGNlZmJjNjYtNjFiMy00ZDZk
LWExYzgtMTg5M2QwNWFjOTg5IgogICB4bXBNTTpPcmlnaW5hbERvY3VtZW50
SUQ9InhtcC5kaWQ6NDIyZDdlNTItOGE2Ny00NmExLWI5MjYtNTJiOGEzMGIx
OGIwIgogICBkYzpGb3JtYXQ9ImltYWdlL3BuZyIKICAgR0lNUDpBUEk9IjIu
MCIKICAgR0lNUDpQbGF0Zm9ybT0iTGludXgiCiAgIEdJTVA6VGltZVN0YW1w
PSIxNjU2MDE0ODk0NDU0Mjg5IgogICBHSU1QOlZlcnNpb249IjIuMTAuMzAi
CiAgIHRpZmY6T3JpZW50YXRpb249IjEiCiAgIHhtcDpDcmVhdG9yVG9vbD0i
R0lNUCAyLjEwIj4KICAgPHhtcE1NOkhpc3Rvcnk+CiAgICA8cmRmOlNlcT4K
ICAgICA8cmRmOmxpCiAgICAgIHN0RXZ0OmFjdGlvbj0ic2F2ZWQiCiAgICAg
IHN0RXZ0OmNoYW5nZWQ9Ii8iCiAgICAgIHN0RXZ0Omluc3RhbmNlSUQ9Inht
cC5paWQ6YTY0MGI4MmMtMDg0My00MjYwLTk3NmMtYTg1ZjA3MDc5ZjcwIgog
ICAgICBzdEV2dDpzb2Z0d2FyZUFnZW50PSJHaW1wIDIuMTAgKExpbnV4KSIK
ICAgICAgc3RFdnQ6d2hlbj0iMjAyMi0wNC0yOVQxMzoyMzo1NCswMjowMCIv
PgogICAgIDxyZGY6bGkKICAgICAgc3RFdnQ6YWN0aW9uPSJzYXZlZCIKICAg
ICAgc3RFdnQ6Y2hhbmdlZD0iLyIKICAgICAgc3RFdnQ6aW5zdGFuY2VJRD0i
eG1wLmlpZDozYTUyMDNkNS04NGRiLTQzNDMtOWZhYy03NjFmZDZmZmFhYjgi
CiAgICAgIHN0RXZ0OnNvZnR3YXJlQWdlbnQ9IkdpbXAgMi4xMCAoTGludXgp
IgogICAgICBzdEV2dDp3aGVuPSIyMDIyLTA2LTIzVDIyOjA4OjE0KzAyOjAw
Ii8+CiAgICA8L3JkZjpTZXE+CiAgIDwveG1wTU06SGlzdG9yeT4KICA8L3Jk
ZjpEZXNjcmlwdGlvbj4KIDwvcmRmOlJERj4KPC94OnhtcG1ldGE+CiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAg
ICAgICAgICAgICAgICAgICAKPD94cGFja2V0IGVuZD0idyI/PlmiVpAAAAAG
YktHRADwAKIAftw2PhcAAAAJcEhZcwAALiMAAC4jAXilP3YAAAAHdElNRQfm
BhcUCA56CWQaAAAgAElEQVR42ux9Z5gkV3X2e+6tqk4Td3d2epN2tUlhJRSQ
QEKABpFE/ggm2SAMxmBbtjA2/uwPJ2wwGNs8BBuDbTAyOdnYGIEtECMhhISQ
YIXianOc2Z3diR2r7j3fj3sqdO/M7Ehik9TneXq3p7u6uvrWe8894T3nEjry
mOWnT79UFdaUledrUoFPBADWAgzYRoio3rTR2CQfuW/UPuPgdu6M2GMX6gzB
I5MvAuqiN7+8N9/f1Uta94C5xzbCZQCWE/MSUqoLzAFZVrAccWRqNoomOTIj
1vIBT6uDNjTTptqYmtkxOnHR5ntqnVHtAPe4yAcA9aprX3O231N6ktJ6E0Ab
CLyWmc8gYFApBTADlsHWAswgZsAwYC3YMmAsODKwYVSHtXs4srvYmK22GT1g
a817p3cdvOein9871hntDnAfs4x95DeWB+XBl0S18MW1feMbibEUhD4AgAXA
DGYGCVhhOXkNNn7YBMDuISCOH5FpIjKH2JgDpt78cWOy+vWoEd1y/o/ujDp3
oAPcBcn0z79O/uKlJZ6aXG12bX0bwuYvAeiv7DgYmJkagWXAmMEWABxQs6CN
wcqMFKjt/xsHYjY2AbH73xgY2zC18CFTb36Cm9G3olrz0Dk/vqvZuTsd4M4q
tT3f61G54tNA9FoeP/wqs3NLCcagfmgS9f3jIM4CFSBmB073T6phGQDbDHjb
tW1sPlhQon3NUSBGZGEb4RbbjK63jehbY7f85L7LEHa0cAe4Tr4P0OUjt7xA
5XK/CqWv4kp1UfTQPUCjhqjWRGXHKLhpkNW2DsDynMkBVQCdALXNdKDY/s2Y
DWwtyLADa6yBI3kexQA2sI3oAW5G/9mcqn7irLt+uqsD2ycwcOt7vqOY9BqV
L/wx+cGLoPQAIkPhQ/eCx8cAZtT2HUHzyLSAVAZMNC/Hmje2d5E1F6zYuQCx
PE+ctVbTIQYtotgeNmDDgDUpkCMLhKZpm9HOqFr/a2v4c9GRmfDcrQ89YUNr
+gkJ2t3f7qEg9xKVy3+afP/ZINUFIjIj+2EO7AGshak00Dg04bReLC3gjac9
y1NuUwecqoW246ntfOCMCiH3lI5WK5qIFmtPX03Aucr3HvytpUsP//3IKHeA
+0QA7Y5vbaRC/t0IgveS5y0FiKAItlJFtO0hoFEDW0ZjbApmun70CZgyiJsN
gTHwSMwKyiA9A0aOj6UW0B6FYzpqUfSUUptI0ZUKmH7TpHrgk80p0zEVHqdS
ve/LSnV1PY3y+Q/C858KpRVIAUoDDITbt8Ls3ApSCrYRorL9oFu6E13q7AVi
yuCWW7VwxmEjMR1iBy0xGVqiD3GEIXXYkHHYkvdixy1yNm9sPnAzGjfN8PP1
w5U/O3fLA0c6wD3JwibSbK0GW5+NLbCNjA2bDQ6bhqPQchRZNpEtrjlrQctk
9YEv+6pQeBHlCx8nzyuz0gTScMBVsJUaGrffArIWpBTqByfRHJvODBCJkkyX
f7KZEUwiDXL9MTiRcdjaw2aZBEUK3NjObQ2XJeCNDDhiUGQcsCMLNE1omtE3
o0b4e8pXu9bdcfcTwnTwTrULMmFjEVuzni0vh4lK1kTEJiKOIsCaBlszzdZM
sAkPTz949xSHYQXM1Z4nXTbrclm99/NFlcu9SeVzH4BW3dmVGaI8w507wI0m
oBWstQgnKslSziCQ876S5Z8TwLZaDUycrPMxnlODFanZEL+usrpDOa1MBCiV
2gwxyBWBtQLYgKHSL2b4CniF76mSZf59APd2NO4Jlqg6vQTAZTCmLwYsoojY
RMaayMJEPhujOQVzncNwkk1z3IbRITbRyOJnvHAyNQ8+X9CFwrWUy70bnt8L
JaYBacTPbb2J6i03A40aSGuYWhO1fUdafKv5BooZIMqALLZtmVvBl9W0nIn5
xq+bWTSvbYvtGuPeiyMNJmM6GAbl9O3N6fp163545487wD1B0jh4gMjXl7C1
Z7GJFKLIMts9BGxja2esiRgmUjYMCxw2+zkKl9owLLOJijARWRMZNlEdUTRm
w+jhXF80mh/w30aF4l+S9rqgtQNt/CD3f3PvftTvuhPEBqQ1wqkqmmMzqe16
1GAd7Zy1BA1sm/0bZ9CQPncWRzZpkUkNcxa8DDbGxXqtgDR+L5S/TQpgHXgM
ojubk9W3nXnrnT/rmAonxK5tehzaIluj2Rpma7eTUncWVp8Vth06PrX5tgNs
7QMwkeJmY4BNuMZG0UpEUcma6EzYaD1pfR4F/luIkBPPqQ1qTqWaI0fAjRBQ
nNqnQkkEEZjIEWbiMxBnIJzGcuNvOEr7QlQy24y1EM+KTMRBUaqBKY6JMUgp
MDsnEZ4CIoBgwR5l7Q2AGTay5HcFTwl68h/ceeVT3r7m5h9v7wD3OItt1tlG
IWCMYhM12PK+nk2XhLMd23PB0xJdBuAAgAOj//VZn61ZbqNwdXFQvTS3uPBG
Isq1hKJa9KN7bmZmwGEIaAcAHXiIFDkPPrED2j5Oqf3bEpeNTYckOpaJAVNG
UyukSQvKfI2iVr3ODsCkyH0LE6Al2MYK7AGAkRMy2DAsA7rgP5eZ37v1souu
XX/7T490gHscpbj6rGjq53dU2BogipS19hF5x4MvfUMIYFf17n9eqoq5FytP
dR29zvNRhhKDYKMISkLa5Cn4/UUXVbAupcs0Wyx3lvPNYnsxZ4E9WzzXARNW
JgQh0bgOsCxuoYA30bAWBAX24vnrZoM1FirnQxf81+RQOnQj8M7nOnQ/rkSd
UhGFWvWgqVUiU6/6tl7pfcSx2js/PkC++jPy6KzUPIhVIKcevizDYED39jiz
MjJgE4EjA5UPECzpBnn66Ngrs3ATRCPDgpH+zVnGWBwBludHWcYUTx52c4FE
9SqAldPApMhpYqXccy1/a3ccKeVWC3m4cxEo5yld8N941vOe9pbHo8Y9pYDL
YWO/rVWnTK0amGqlPPrNzwcL/ez4v/+5B/BvkadelLC42jJaRyk+ZgTLlwF+
DhxGCXhhDXQxQH55H4LFJaiCB/KUA1qW6WWt8HIFfAkrjJNH1gim9vgZ0tBZ
elnueCKn6V14jEQLKwE2pWBVAGnKAFdWCN+DCrw+rxhct+s5l118/8aN1AHu
cZJFV1wd2Ub9XlurwNSqK2zYGFjoZ/0l3c9Xvn6HI7s4NDHzLIt5JjEAhtff
i/yTNsGGkQNvaMCRAUcRQIygv4TCikXIr+hHfkU/gmX9CAZ64C8qwesrQHcF
8IoBdM6HCjyQryWRHpNonOfPcTQgq30zThqJicCKkjgxqRSryGpgUqlJodwB
DrwO1GzZTTRfQ+W8c3Uh+N2uM5eUOjbu8XTSarVdNmruYWPWsok27f3sRw6u
fMN14Xyfmfzvv1iuct4fgdADy26pzWhVZLQftdu7xCiesxGILGp33gmERixG
9zkLgDwNnfOAonJLM1HCMcjGGBJeQRLGlZBW5GxPjgw4NOBGCNuMYKoNl/iw
6bUloeBsAE4ULospQYpdEoKNHJwlO5D7Lq3Er2RoBK82kfnOtisv+cK6m3/C
HeAeBym//E3R3s999CdsomUcmY0cRTsBbJnr+MOffZf2unJvIOJLnBtunbPC
FiS2KLNuATC1k2ICH8WLNsFbsgi1n26GnZ4Em2YmicCwrKEYYM/ZlaQAaOXA
SgJaUuJXCdJiGyFW8vH5LCdEdI4MTK0JU2nAzFQRzdQltSuJB2KXvYudNY41
NLtMGqwYxNYBWJGjRSoCPC+2PAK/K/eeqBF+F8BoB7jHS+vWa4c5MvewNU+x
xlyx7UPvPrLune+btZDQX1TaCMIbAeQSMncc0FcshYtWNJQAxrLEbV1CF2CQ
VsiduQrByuUIRw8hGhmFnZ4G12tAWAfCJtiGQDNyqWGtQUwg0oCWeC9ZsCzj
1OIJphExzsRsiQHyPZCn4ZXywJIesImBXIep1GBrTZhq08WaORMyY7lyFcfW
4siDcpPVMpSvwYEnsQhenesr/h6AP3g8APeUNdh3fPQ9RRs1n8/GnAFjdrAx
393w7g9VW0yEb/yxTznv3bpU+DOVCwDfB3nuAc8HPA+kPEDLQ2lAe+CWtK9y
z4lApNyQKPc/hyG4EYKbTQhXQlKwkctagUHW2cQEYX+5WDQoioCoCYQNcLPu
PgvKmDApo6w98ZFlidlmCFMPHZCnazCTVXBkMsWYklXLknUiC10K4PcUHYss
ioBmBNsItzanar+0+pu3/qwD3OMoD3/gXQNszP+BMUU25ic2Mnec875/TOqu
xr/yByu9ntLdqpgfoMAD+YGA1nP/ax+kNaB9uJSvgFd5gFIOwEqlXpBSmWVe
ZZIPqjUR0ZIZQyvN0XJS/ZDSFg240QDqVXB1GlyZBlcmgXpVQmwSTrOcOHDc
UuLj+Ak2isDNCNFkBeGRGdh6U0p9suAFYCzIU8gN9gChTZxNbhhj6s0PTNy7
/z2bfn5/2AHucZSH/vTajWzM89gazZG5ka19YNPf/SsDwOTX//B9urv4R5Tz
iQK/ReOS5wlgZ9G6ygMrBVKO2ug0MBBTHSlejikT8CeAWbUl0lqdojTjlbFr
M5oxKWM3wjlo1MEzU+DpCfDMBNCog8M6OGwCRiaAkfhwzMmNARoahNM1B+Bq
A7YRyXnTmrf8qkXuPKERu9nA1sKfR9Xmq1Z99aYtHRv3eCYlKpWtUKqPjbkC
xj7LGtMEsPXwP//mMqXptWBLR3NdnZYisgBZMImjJsTw2CYErLMZrQWRcl67
smCbAShJViqpfKC20FqWdBOHstLXKE4oAGAoB17tJgrl84CfA/UtdsCqVYDq
DOzMJLgyBcxMgK2rTI/TvkkGzdPwe4vwSjmYSgPRVBXRZBW21kiiDCyZwGxZ
kcrx+RSZK3a87BnbzvzPH5y2GbVTvnTnH370M/7NS590iKOwyCZayZFZ/mtn
rR9dvGnJy1TOfwV52leeLPOkkqWf4r/lQXFgNKMhCRLgByVJqzRcRq00B8qE
uxJiubx+lPZFi2kRg7m9Lo3bExGeD8oXobp6QaV+UM8iUL4AGGcvE7ddV3zN
noZXyMEr5qACDxxGYGuhSzmoQCe8oCTRYblP+frLH7pnW9gB7nGUj9+x2fzG
xecesM1oKYxZpjXO6FrV/Xqd99c7G1biq0olYak4ch+HqwAl2ah2AGa8/hha
TKm2nM26mqWSkaid7hh/dyuYqf08NJsmJ5DWYD8AFbug+5aAuvqE+hhJ6CtT
jZEkLBR0zoPXlQdpBXgaKtCZ0qPkWlc2p6r/+eH7dx7oAPc4yz/+5L7oN87f
uIejqKzz6ik67w353UFR5XxQFrixxo0B2wJeJGGjJKqfZNFUGxYpA+A58EuU
wSQfrXUFnDQLIz3W+URHfwFTWyElKVCQg+pdBOruA5QHshZswgT0JMmU2KzQ
+QC6qwjl+RLL5qzWJQDVD/1s6/90gHsC5BObH2q+/bz1I7m+4GkEfpYJDfnd
OajAl4ySSp0rSvP7FCcDkvwpzfL/LG5rsiRTK7E8k5WjozTzLDbwUeDN2MDy
eUosZEoATekMSTW254FK3VClXqhc0dn1Yf1oCgQBKshBFQsg7WK5HEWp4if0
vW5L/dP/1JiMOsA9AfIXb7lC5RcVXmItPzmsNhHVQ/jdeSjfazUXlErDWImN
mzUP6JEHVtiRZx3uCG053xatfrSKbbOFidqOEB1MbYVqbXZ3ch6tnT1c6gEK
3UDYAKIo07OBQYEPFQRuRfI8kOc57oS1IEU6WNXzgw/fs/207IzjnXYXnNNd
5HuXeqUcZkam0RyvYGr7QfSsL8Pv1s4OJAkjkcRBYR14LYNhj4Zqu8mb7dih
tDtPrAI5Trva1AyJqxYSnZkloGcKexTJNcQvKYl6xH4jAVaBlAWzAhQjtYA5
5T+wc7gckTwPpT1QsRt2/BB4bASoVR2NR2vAc+X35DOU2M5mega2Wiv6/T1P
N8w/auzewsrzTX7FWu4A9zgJG7uYPD7Xy2t0repFZf8UGocmMcWM7vXL4fd7
SdM5F+aKg/tGaCyUhMISZRnnYpXYiUoSCETp/6QkR6tS+1ecvZTdZVu1aYsT
lzEbIARxJba1UCIpS9Zjm8SQKWPJZvluBBbwimmwuAyUemCOHALGD4GUA2pM
u4yvxOvuhvG8HFTXUOWhzbfrXK5mrZ2sHdg1rTx/mrQ3HSxaajrA/QXJyJ++
kmy1cb7KeQEsw8/76DqjHzN7J9A4NAUbWXSftRK5xb2upFy0LZEFWwLYIOvX
J0kCBZCSrBepjEWhWioSQKJR4wSFOhrAiPmwMZOrxXrgpM6MbOoTAgAsuetw
vLS2OHEavyXWLcVChEjAy04L5wvQS5eDu3qByjQQ1Y6uygBDI09EXtkcPniW
GlxWBxETUZ1J1UipSnPy8GGl/f1Q6pBX7LId4D6WZMRMhWDNxV5vIUmRenkf
3WuWYGr7ITSPTGPyvl3oPXcNckt6QUYIKCZmTikwmTT5QNwKYEmUOWXHDszK
1X7F3j8RA5bSpEQM7qTVAWcAzWn4jdEaKFZZ8Aq0bEpddCnoWfxFHWvgNJjr
XvdSzjoDVCwBQQnUqMPOjIG4hpTX7swOCuu56P6fPOQtecFegAYYWAqgZIEu
ApYyaKNSQcU0ajtIeztJ6xqRsh3gPkKhyBDXG5tsvQnKec4UsAwv56F34zJM
bT2IxuFpTPxsG3rOX4vC4CLJiMXsKSN+i3GVtko7jaycxkvWZPk74RwoSstq
SIl2zfQHI5VoXmTrx5iOtpvjyshY8yYJihjAbrJRsrirWYMdaVxCZ173Ms9Z
EjAeSPvgyjioNikmkgb5AFvbT4e2e91nXbQFwJbK9vsVrOmDCcsglC3QA3Av
cXARSJ1FpLYx824AE0TEHeAuUKofvRHF33nuRlttQHcX0nZFiqE9B97pXR6q
e49g4qdbYM45E12ry0k4LE6Zoi3k77DHwlXgFFxKA2SkYLLdUROerEo5sQzl
eoslwITUoVHatSaxh9MHIcMFV7H1qqSDjqSiMwBuddg4S6dw3GMwCB7IKHFG
A6BrMeDlgMo4UJ8Rh832kfUWxectrT3XAjgC4Eh9ZPeDMKafQWUAZwC82ILP
I/JXEaktzLydiE5aKE2d6mCtj+wmU68pZu454/ChjWxoma00HLWPWUpiXIBd
eRo9Z5ZRWj0A2wgxde82TG7ZA9NoCgHFpMebTG9aeR3WZB629WEMEEUJQSb5
XKY0B2zSjjNx07qWjuUp26u9lCcFc1pjxkpJuY6W+jJKqJiucFI75yumZyr5
W7KJ8IQdJ6Ew5EugnqWgYj/geyBf++Tp/u/NEhPMl8+wuaUrDoP5frb2BxyF
P+dmI+Sw2cvWXAjgHGb2Ohq3TQ7f8i3f6+3LcxQus1FztQqC/nD8cDcA31Zq
sPUmKPBBSgBnlHOstEbP2mWA1qjsHMX0vdthGyF6zl4Dr5AXe9ZzlQJJVEol
/RJIcRKidT27MrRHZIokk9dj8nhWEzvSOif9QFKOr1DGEo3M1JYuTmzgWJGn
kQ1SGnEfBafHZdlvSWcwCNqNhfFSsyK2bQFQrH2nx4BGtGj1W1/o4Z9vmJW3
ECwpM4CZcGLsHhuFoQI/Ccw+PP9cUmoMrq9FB7h7PvN3XbqrZ9A2aitto7Ay
mpnMgygkoolo/15CFDEzw0xWoUt5p1mMY3WRNQ4QWqPnzGXQ+RymH96HmQd2
IqrU0XveOgS9Pc7WZRcPTcrPFYOUdpW6BJAWABpOQ2TZzBuzA6tSmWVfpd1p
SDmQsZgolh0IkwqJ1AZmUm3ZOk4dPpuaGe7w2AZWAkorZkWbzeuyFK02MWXj
wl0AaSiV6+49R2kA8xJu/L4l3Dw8+qC19gyydoli9qG9VTZsjio/sE9Y4G75
y+uKKpc/N6rW1kB7i0l7PkAVADtJe7t1d+/YzA1fOIejkEEa5vAU7KJuKK0d
aA07+qJQGYkIpWWLofM5TN63A7U9I4iqdfRdsBGFgcWAjp0rKdmKPW4BYtJu
KY7f8izghQLX6+BmE5TLg/J5cdxUxplL7WsYAbXKkH2Yjo7/zpZhE83rSnWk
T69SaTwXypUlJXRKlZCGkigC0kY87u8CYFA0hYVlUIPFg7Y+sntGWX/AMoM8
308Zd08g4N71yueRLuY93du1KZqaebIu2S4QMYFnCPgZaW+LLhQrpjIdBj2L
edfbn+trYwmKwI0Q0dgk/HzgMk1khP1lQWQcMJRCYVEP9JPPxpHNWxEensDh
2+5B30Vno7RyEPDS+C10ppkHuYJKluLHuEmH075WogiM6MgRmIlxhwat4JWX
Q/f1Sz1bHCZTqQdGkoY2nPYlUxkvjV34LO3B0BZPSIIMEus1ovFZzAatUnOA
/JaohQO1zoBYIO55HuuFZf4rW+/N22ZjKVtLyu3vNkpK8xMKuHe8aEhxZFay
MU+31foqMAwIh0F0P4Cfr/nNP6ke9aHIGBibcGjDg5PQfV1Aj4YiI0ATEDMl
jeGCUh5LLjkbE/fvQn3/IRz50WZE561H9/rV0PmcgNJhkiV+y1IUxsSiGeNq
WwJHDYSjo7DTk5msmkK0fx9UoQgKgrRwkjNxWbKpJo4N3JYoBGUa6FFbq9NM
nzIid26F1N5uy7ARuwhJkiaG16px43N52ir/2DCY3HxbzkbhJYrQBWay4P1k
ov358hlPHODePnSZ5mrjEpP3LwNRD1ueAeFuAPec+8FPjc8Zx52oTHOOmIzk
90Mg3D+OIJ+DVQpkLEAuO8baOltWu2Xa830sOm8tprqLmHloN6bueRjhZBW9
m9Yh6OlKa9BYpYWMkgJ2HRstQBqm0UQ4sh92ZsaxupQSLe+0oJ2cgF4y4Mrg
W2K/3NqNJsmypbxdjsNlSrWnupJgbxrG4zQC0RYqAynAatczV6d8iTRtnIJc
53I1v6s0r406fsf3ejiKLmRqrLVsFTGPKvDdhdVnVZ8wUYVbNp1P0Uz9WSrn
XUqEAMxjIHybiHZf8Kl/nzcuSFqFkF4DiBzXwEzMIDo0Bb/cLzHPuDRHHDUY
uUOA0ho9Zy6HXypgYvPDqO3ch2hyGr0XnIVieUmijuLtnYhbkw62XkNz3z7Y
atWV0iglpB6VEHpsZQZ60aKjQJlUWyT9wVRiw7JQL51il++nTOOR2Gywadw5
3RSFUs5DvKsPeTKRbBpliM0FnfkbDPJ0FRqzAvfQ/35Vkx9ssGFzkwL3Wdfg
abtS6q7i+vOnnlgJCLYXcZMutYhyxDwBa79JRHsu+Y/vHnPJMX1do/rIhE28
64jBHCHcNwZVykP36gSsTrM4jetuvnFaUGkUBxfBu/xJGN/8MMIjUzh822aE
561H95krHbdXmoiwZUdUIYat1dDYsxdcr2dirKKNlYDRWlfNa0wCPBaqolva
M8mNtgyaOy4u1HTZvcQkQIZUzi29+49uGKk0iL2M2REbHpyGzrImSRhNHL73
wURh7PviPyjy/EAFuaUcRReBaEDmzIxi3AOih7ufdPlJL/k5ocAdPnNjF4f2
AlguANrA8j0M7HnK8B0LspOoUpuC5SqMKba0V66FaO4YQbB+JXR3UZwXkwEs
UuaxmAFBdwGLLz4bkw/vQW33CCbufgDhVBW9Z62B310UXDGYCaYyjXDvPtha
VSqD40SAaOM4ARD3YoiipFVTmiLmNns21pay1EtLUbKZqg1QUrnAnNbJtdQx
cFsimAiSz209JGMmIILQHS0oFxze8Kf/YHZ/6oMead3PJhokpddzFC2XfOA4
gD0A7ut/+gsmTpUo1InVuIZ7mE0vrCIwmhzZQ0+/e/OCjXuaqVkGdhBjietF
ALDnNv2wk1U0tx9AsHY5dHdJbpLcPM7cPEq3dPLyAfrOWYOgtwuT9+9A5aHt
CCen0XvOWhTKSwBiRGMTiA4cADcbzg6OTYOEtG7FPnbFmSzZtaTal9itDnEo
DUga1aUZCE4IO6wy5TxCoYwrLDiJCLTt5scZaJKX2LwsHc2zh1A8gyOANU+b
ah07Pvae820YrlTAEhD1WgqhgEMgPMxEexpTM2OrXvvrpxRX94QClw1HZGHY
9ZLNwdCiR3YCZgAPwfClSHxsm9h4dmIGzW37E/DGFbGsKWmRy4pb9iRTWqFr
1VJ43UVM/HwrmiNjODI1g9KG1SiUcrCHD8vSLyaBirNaUspOSl6XaIcx7hED
VykJswmIFSXhsKOcNQGw40BQ6nMlnF+VDAOhrchSCA+kc+67rDOLkkN00sAM
1kaIqlWEU9Nh9cDoJjZRFwiaQREYuwG6j5U+gCiqD77iLaccpfHEa1zGEWbe
SeABZtLQ6sKb1569B0Q7rtz2wLFnNBNg7T2OdB0bacq11xQNbMen0XxoN4K1
K6D6ul2TEKSUAdKZ1rmWJdevke/rweJLzsXkfdtR33sQ48N3odKVQ/eKRfAK
OZehk33QXBIh5g64PgxxNIAj2Vw6abEYmwQqWfIpm2WLaZCU6ZZDShJznKmf
UxmHk9JO57GpwORqy5SW3mgaMBGsNa59aqOOaHoG0cw0bK3mtHdopqNqbYKN
GQeww4IeVkqNn3HN756SYD1pwH3W/q3R95etu5UtdRPzuYBdykq9jBTfdvP6
s++/cuuD0/N93uYDpnrzTmoaZw9qm4Z5JIwFTbDTNTTu3wlvxQD0kj5nOgSU
tP+MebbQygFJAyALbS26l/SARw+hVm+iPlWDma6juGoRCv1daYdyss4WTXqP
2dSMEIJO0kshBmQmRew0asbGNZlSdZJMWmJFxHTJjA17VK2RVPAqH6bRhKnX
YWo12FoFplqFqVRgmw0Qu4mqcwWofA4Atk/v3Ptl02ju3vjuD51WRZMnpQXT
95et6wFwKRRdTory0NQkrQ5Aq/tJ0z3PfOC+OQG8+wUXroGxPwZhAFq5BsaZ
Dt0JoUU7WqIq5qC6S9CLeqB6SlCFAijnWjOBXeWrrTZgp2Zgq1Vwo+k27as2
MbN/EmG1AeVr5Ad60LW8HzoXuMWBFjYAACAASURBVEgFpWwtQuqs6cUDCDZu
yPR3oJYGzNmWpMjasi2vZ96LqZMJ+SYO+QE2jGCqNZhKFabWAIcM2wydgxiG
ieZXngevuwdeVze8gtSoKQrZmo/lNz3393AayknrHfb9Zes0QMtBeCYpWgOP
fNKKyVMNeGoHaXU/ebSbfN0gX0Xkeeby4dt599UXLoYxnwXzCxzVT2VayTuw
ZvdNgFauylUKB0kraYCXlrFztl2SjhvguQbJ0yNTaIxX3X4RpQA9Zw4g6CpA
aS39aSk5DykFPbAUwYb1rYBUWQBniDbZluM0W6k8JfYxRxGiag1RtYpopoJo
esZ1kYwpFX4Oygsc1VFp6FIXgr5+B9hiwX2XS9O6lcHYKRs235rf9OyvdID7
aEG8Yv0qUnQ+NK0irfrJUyV4isjXNeWrA+TrfeTrEQq8qXxem77m9LXUDN/l
Uv8OmKwFQPHeCEQOpKJ9Y1AmGlll3lMZjSjAJa0S6mF1oobqwWmYegTyFEor
+lEc6IHO+SkoBZB66VLkNm5odb5UpkGJoqTJM0tTvJSfa2EjA9towDQaMLWG
06bVKrjZTDQ3KQXyPei8W/J1Vw+83n54XT3wu7rhlbqgfD9pPcpxv2B2HGTX
TMTuCyvTl5cuuHpPB7iPNc57xsZuaCqTokHy1HJ4qky+WqwCrSnQlgK/pgJv
qqjt2XkTvlUp5EiRUGMpaTCXhJPYiK8Ta2OVaGPKAjUBcgbEif1KIAWEDYPq
oRnUj1QArRD0FVEq9yHfV0z6OTAAtXgJ/HVrXWdEk5LWbRRJO/1I3pPWn6GB
DSPYZhMcNh1BPjMRSBFUEEB3d8Pr6oIuFaG7itCFAnQuD10swF+0FCqXd90k
kwaASJoAMrfv0G4BE30rWPf0F+M0lVOKjzu0e8s0gOmb1529FZbzsJyD5SJH
djmDliuOBi3z4kbOa2hShxRhpdIeyHcNL5QfPzTI90Gedg9p0USedtmpRLPS
0VNYgArpnRtHHXylUFgH1KbrmNz8AJpHKogqDVSKubRNLgOq+zD03jHJvkma
ldM0MmSXSHdeSlYAUgoUBPAX9cLr6YLuKsHv6oJXKkDl8/I7PChPqhxiTV7s
AgW5tEslZ6riJWyX7V/ivo/YRvgiTmM5JSsgrtz2IAOoyWPiBxddcICY7wYz
gaGbTP19pjlAxv4ye55m4wG+LLWc8lFVzJpSGgh8IPBBfgDlyw45nvTLFQ2X
dHSMCxez8VN57i8l5JcuwfjmB1DfvR/N8UrGPiZopUHVmpgqGqw1yNdQMglU
PgddLMArFqCLBac5i3nnNOVybS2ksi2lkFybc7oAeD5UroCE6xXveJlllVG6
fatLO1uwwQFrzXc6wD3O8oyfbs5uEGYAjO55yaWf5cPVl4LRFwOPfQ34ngOp
74F9B1Yb+KDAAwUBOPBhfR8UOACTp8WRa3XYjuqpgLR5niagf8MqTHOI6t5R
mHrkqt89heKKJShu2gAVBK4tlO9DBb5bCYKg1d4las2mxRuryH4kJCQZNrFp
Y1OWmKegC6WY/wG0tFHN0BaTymVytq17+4sqCKY7wD0Zxvmqvlt4qnYP18Nn
xuVg1Ixca/lGmEQZWMjcnN2hUWvA11JsmLFvdVqU2AoupFqZKFl2CznAW9mP
6lgFjYkabNOgcWQahcggN9CVmANJijjuRB6H0qAc9VGlapLjY1m1lANxhpRD
WkEVSm67AGtTk4dtS+untFNPTNIgEHicib9hGrXTupW+Pl0v/EN3bjfvuGDN
jD1SfSURWnrcU0t3xex+uEiNUes2HoGxbiMS49rNI4qAMBInKkpTuFLpy9ak
LfEBaE3Ideeh8x6iaghTa6IxOY2w3oDfU3ImQmLfZri0nCmOjCmU8ZoS794e
X2uyH7B7UeWL0F09LaZAAk7O7AecCarFf7O1/83G/Gth3TNrHeCeJHn7isFd
0Ux4lTnSWJX2MUhvcKJogKSbdwKGJGXMKcvMWnBoYBoGptqEqYsGR6YSlzOf
SzrDEPy8j6A37wiESiGcmELtwBgoF0DnA+d8MbcCMbPrTgIym6F5Z/cEjr8r
l4fX15+QZ+bKrbc4nXG9mbVTUa3x4fyZV9yB01y803rWDXTVTGQ+HI03nhzt
reeooKALChQoaF+BAg0VKChfupbDAFaWabaJA2aMgY0sTCOCqUWw9QgxQZs0
Qffk4C8qwe/JZ4g2Cqw4Q75R8JRCz7I+2CVLUBufRmP0CCZ+ch9qK5aia80K
FJb0O7ILK+ETuMoJZDgJ2XKflO/gbCFVKMLr6XPfHfdjiHuXcXsTXm7Rxszg
+sTUgfv++pPTNwLBc4Hm6XzvT/uNibe87PKBaNvYv/C+6kuTpVM8fPIJ5JEL
l+WU2w9BkhEMcrFUw8nOOGzRatNm9tBVnkawrAtBXzHT+Ty2mymJCZPvIzhn
A6irhOrBI5h+eA9sowlVLKCwfAA968+AVypKdYRqidei5bVMJk0pqHwe3pIB
UC7naJIUGwFtfX+pNfOWtEY1trnzhps/t+dX37+bBnr/F8AdVx3abk9bpXW6
A/djD+2t/s75a+pcbV6Fui0l4GW4ComIXQ6/bmBqBqZqYGoRTCWCrRlw0+0p
xnHD5vYVl1MurK00oUsBlG6rxM1uJmIMqKcHXm83gp4u5JcuggkNzEwVzcMT
qI2MAbLZCGXolmCb9iKLTZt4c5QggL94idulJ7aTk7BfVg1lgy+ZPdgIgDGf
ffBPPvYJnmluBGEdgJHrq+NjHeCeRHn3bz9vW2PHofU83rgw6cGZ2RAa3L64
tFYSUNZxi6P1LQ2eKXHa2Vj4pVwG3PHewKlzRYEP1dvlSDe+j+LSRdClAjgy
CCdnUN93EM3pGSjPh8r5LusnoE3sYDm3yufhDQxA5fMttiu1zJg0htvSjze2
+a3dwlH0tt0f++oWNCIPwBkAzrym1L/7+ur41Ol4zwmPE3n4FU87I9py6Aa7
p7qplSKY+aUxOVujdfsnTUmrT7e5dDYEhpYlmHyNwooeeKWglTij0nAZFYsI
LjgHqpDLxIYVTDNC/fAEZrbvQzRThcoFyJWXoLR6OfJL+jMpZve/7umGv3QA
FOTld7SxxmKnsn0Xobjw0iF3mg1fx6H5bG71M6KbBtaWALwcwFkAtgH4j6sO
bZ/sAPdk2rtXX/qc6KGxb/CRZgnZJnLZkFGchdJJ+vMo8EK4Dy3HxzFUBfiL
Cigs6ZIYcab+LGObBudthF68SEg86W5AIIIJI8zsOYjKrv1gY6ByAfLlJeje
sBp+VxHwPHh9vfCXLgX5Hlo2giDV+nd7x5tMlx0iGLb286bZvDa/5llJwuGm
gbWLAPwqgH4AwwBuvurQ9tMqrqsfT8C97qK1+8HW2Knm0xGxbp2eWV5C2y46
oKT0uyX+Ge+G094aiQFd8DO75WTf42QjaW9xb2a5To9TWiG/qAe5gX5YY2Gq
dTTHxlHbdxBMQLBsKfIryiBf4WhDti3cNVcozPVZ2myb0bX5Nc8ayR5xfXW8
dk2p/wiAcwGsArD7+ur4eAe4J0k+ev/u6LqL1z1oTbQaU81zwS196Y/aLK/1
ZWpxspKAvXQXz8b02TB0zoP2VGrnirOWJD9qDVBvN1QuSE8qAI7juDoXoDDQ
B18akphGA83KNMJaDdZa6MCHzgetAVmeJU47K355Dxvz9tzqoZ/O9vY1pf4p
AAUAawEMXFPqf+D66vhpo3UfV6ZCYu++9LLV4cNjX7L7ak+N98BLQlvZJTW2
eQkZjkImjKTJTW2NdGMRiYL5fXnk+gqOxK4yfcJUthqiD/65awHPS7ZsJZWp
cohplL4Pb3ARjCJMPrQDtV17HaGnvw/FNavQs2Et/O5uHL0vG6V2b8tmajTO
lt8WVutfL539gjlDXjcNrC0DeDWAATEXvtsB7skG7yuv2Bj+fOTf7Wj93KxS
TR2uzHNFGcpfZlhiuzUGL6UtRVWgUBjscjRDIbS3OmsK5GvotavgLR9ICeoZ
GiM8Dd3bhdwZg1DFHECupq16cByTP38IzUOHwcZCFQvo2rgePRvWwSvmQdpD
a1tzuVb39xQYf2Ii+4n82uccM8lw08Da5wO4AsAEgM9ddWj7wY6pcBLlnetX
HuFA/QhhdBnXTRktMfl5NtDLOnNx909u3x2SwJahfZWYC9QWz4VUHXC1Dirm
HTNMlnnSCrqrAH/5YgTLF4kD5sJqpBSCniK6Vq+Av6gfzAwzNY3qjl2o7NoD
YwzI86DjDQkJ2VDcNJj/xkT8kfy65y4oM3ZNqb8GYBOAbgAz15T6d11fHe8A
92TJR7bsxR+/4YrR5tjM3WDzZK5Gy1oC8nR0PDcFL9DCfYj9nbb9OthaeHk/
w1nIOGixNCOgUgOKeahCDrqvBL/ch2BpP3RXrpX3kLGFSRGC3m4Uly9FbnAA
qlBAc+wwarv2oLp3P5pTFfn+HKTb4gyAP42a9iOF9c9fcDr3mlL/DIDzAPTJ
VN12fXW82QHuSZS/veUBvPuFF+83im9GM7qYq9EKJC0PqSVJQUeBN8OyijNo
3KZ1DaA95TJpQGtbUM7MkjACGjUEq5YgWD0IHVdN2CzZBhnyDSeZOVIEv6uI
wuASlM5cBVXIIzx0BI3Rg6jvO4DKnv2I6o2KtfjjmYMTn7rjkl+qf+4RjNH1
1XFcU+o/B8ASAD6AB6+vjp/yXF3CE0S2XX3JoBmb+ajZPfNS27D5JGLQFv6C
JCCo3ZlL+hu4piIuSUHw8hqF/rw4ZM7WJU+KMD0FlfNAOSH55H34mzYgWLcK
qpRPS3B0WiKUlOTEpUUtSQkNgBDVm5jeuReVnXvRGB07MLNz7Lbp23Y9wJVw
Oxib4Xp9VUCoXXVwOx/DxtUA3iphMQbw6asObd/eAe4pJHtfdXlfuG/qOnOg
cq0day6xyoGXhRye7BopiYW4gXhL+liliQhogDyFfF8OftFH3OeBPAUEGsrP
RBmIQNrtA6xXLIW/cTW88mKokpSOJ7vmqKQaI67MyBJ6EgArbcJ68zu7v3Hz
p/b98TeZBoI1cBvsEYBJALsB7AVwEG4LqMmrDm1vtIG2COBiAFcByImp8Omr
Dm3f0QHuKSb7fv25+fDBg8/kidp77c7KpWziPRkgtEJhESaZMAdQViTWQxxW
EAWpCbrLR7Ck4BIGnk7Da0l1sTDHVCZ0FnjwBpdAryrDX1WG6iqkIE22hFIt
FcdJtQbpcfaCD8HPfyq35oUHbiqv82F4EYAyXCp3A4CSTLWq2L9VABV5biSG
uxjAoIAWcDvofOmqQ9vHOsA9FTXvm5+jzP6JMlea77K7Jt/Ik1E/DChJ+WbS
xUnzOdVmF6us2aDgDxagu4LEXEhitNn4bty4JC4N0uSqkQs5eKtXwD9zOVRP
t9sGy0tAmikvUg1o/x7ki+8C0W25DS9vSRjctHQtgeEDyANYDeAc+b+AlKWR
vffx/bcC6Buh6KdXjW7jDnBPYal84To19oUfPdfunXwnjzeeyhNRLyxad30k
auUraMwKXspp+INFqECnnIWYeKPRymeINXEyMTK1ZINL4K9eDjXQD10qgAp5
kO8beP6DCILPcan/44UNL1kwo+umgbU+gGUAlosp0ScaNo5mNwGMArjnqkPb
93YSEKeRHPi/r+xr3rXzRXam+Toeqz6PRxp+mmnLgFcBRzHPVFopoUoa/qI8
KNAtO0Qm5kJG06avq8z5UzYb5XPQ5QF4a1c9EKw740tU6vpa/uI33v+L+L03
DazNifZlEDWuOrjttCOUd4ArMva3b6bqj7cuhu892R6uvs3uPPICHqkFrj3M
HODNpoYVQB5BFTX8/pyzd7OtneL2ULEJojMApqwNTAxPR7Sof6saGPgEafVt
b9PZu3pfdF2zc5c6wJ1TRt/7RgoPVQLS+qxo3+E3m4dHrsZEY4Aj20PVyGtJ
YChKCTnCayCPoHIKXl8OKqdaTQTp8sjZFLMmKZVX0+QHk7S49261fPCz3hkr
v2PGJ2uLf+NvTOeudID7iOXI1/6yv/K9u59p9h6+gierm9A0a2HsGXy4UsRk
oxW8RIBPIA8gX8HrDaAK4lwRUq2qFVDMMbTeC0/vpEJ+K/UWf6RXL7958Tv+
fktn1DvA/YXKwY++Y3Hjvp0reaK6DL5eg2a4kWcaa2CilVxtLEY+txTVRhGa
FGmX+lWeangF7zAF3gQU7UHg7abA3wZjtqCQO6CXLdnvv/iZ+3oveUtHs3aA
e/xl/N//Sjfu2RZEe8Z8rtV8O1HRKOQ1pisq0azWAgD7pcBQITDkqwg9pVAt
H2gO/MEnw84odqQjHelIRzrSkY50pCMd6UhHOtKRjnSkIx3pSEc60pGOdKQj
HelIRzrSkY50pCMd6UhHOtKRjnSkIx3pSEc60pGOdKQjHelIRzry2OW4FksO
D5bVPN9hh0ZH+GScc3iwTMf47bzQaxseLB+PHsM82zXIdavHeu6h0RG7wN8W
f58H1xyvC67P2BSIDsqGwo/qPj5W8Y4jaPMAPgy3l1a7GACfA3DDIzpneVkJ
zB+D60Q4283+RwA3L+BUlwG4fB7w7gLwtQX8xhyATx+HcawA+CiAn7W9fg2A
5+GxNeSeGh4sv3dodGTXMe7dagBXA3gJgIvgGuel21YyjwO4C8B/DQ+WbwRw
YGh0pHnaA1ekBLery2ySf6TABfOzAfwKXOfsdpkG8EcL1Nh/LgCYS3YND5Zv
Gxod2X+M02n5fb/ocZyUidMO3Ivl+x4LcMcA/INMztnGZy3c5n1vgWuWN5cU
4BrpvQTAdgCfGB4sf2ZodOTQiQCuOo7nbgL4+jzvnz88WF7/CM/5ynlu2o1w
/V2PJRfB7XkwnywG8MIFXtPxaBiX3U16Ia//Is6N4cHyebJqvfsYoG2XtQDe
B+CTw4Pllac1cMWOuh/AfXMcshTA0x+B6bEcwCXzXPNXZbIcS64WYB5rpXjO
8GC58ERxdoYHy4MA/hLAcx6l7+MDeCmAjw4PlntOZ40LAPsA/HCO93oAXC52
4kLkcnEQZpOHAfzsWE6H3JxnIO3APZ/TejFcY+STMYbH02k+yjEV8+n5AP7P
PL/HOWXO/p7PdHougF+Rc56eNu7Q6EhleLD8QwCvFaC2y5MBrITbxXs+wHkA

--_av-LlkQpaO7ZPAZzAG0MLTLhw--

--_av-6LUygm_hxK3KEwlLb3Dblg--



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 10:00:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 10:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486759.754196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQxM-0006Y5-B4; Mon, 30 Jan 2023 10:00:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486759.754196; Mon, 30 Jan 2023 10:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMQxM-0006Xb-7L; Mon, 30 Jan 2023 10:00:08 +0000
Received: by outflank-mailman (input) for mailman id 486759;
 Mon, 30 Jan 2023 10:00:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMQxL-0006XV-Hn
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 10:00:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMQxL-00053B-5b; Mon, 30 Jan 2023 10:00:07 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMQxK-0001nq-WA; Mon, 30 Jan 2023 10:00:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Wr8K3C+IpcFRpTEyVuYXytd1rKbGbJmHB5zBno5xuPY=; b=n+GI0gJUKFZd8wI1oW8R/DIv5i
	xQnWmKOPubGkADjNdR8gah6Ht68/aVsHryiplz0qgS2q1nqR0EXJMxYNGVVkIyfFeEJQpXMQE6Pbx
	QBpeeZCXI8Jg+19zAplBwkuPEvRz0JyqG0IfNtp7A6yBqBml0hVwHehdQrWnHmbPNwJE=;
Message-ID: <49329992-3203-78a7-fc61-d6494e37705c@xen.org>
Date: Mon, 30 Jan 2023 10:00:05 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-14-Penny.Zheng@arm.com>
 <23f49916-dd2a-a956-1e6b-6dbb41a8817b@xen.org>
 <AM0PR08MB4530B7AF6EA406882974D528F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <33bddc11-ae1e-b467-32d7-647748d1c627@xen.org>
 <AM0PR08MB453026B268BA9FBEEE970090F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB453026B268BA9FBEEE970090F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 30/01/2023 06:24, Penny Zheng wrote:
> Hi, Julien

Hi Penny,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Sunday, January 29, 2023 3:43 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
>> setup_early_uart to map early UART
>>
>> Hi Penny,
>>
>> On 29/01/2023 06:17, Penny Zheng wrote:
>>>> -----Original Message-----
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: Wednesday, January 25, 2023 3:09 AM
>>>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-
>> devel@lists.xenproject.org
>>>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>>>> <sstabellini@kernel.org>; Bertrand Marquis
>>>> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
>>>> <Volodymyr_Babchuk@epam.com>
>>>> Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
>>>> setup_early_uart to map early UART
>>>>
>>>> Hi Penny,
>>>
>>> Hi Julien,
>>>
>>>>
>>>> On 13/01/2023 05:28, Penny Zheng wrote:
>>>>> In MMU system, we map the UART in the fixmap (when earlyprintk is
>> used).
>>>>> However in MPU system, we map the UART with a transient MPU
>> memory
>>>>> region.
>>>>>
>>>>> So we introduce a new unified function setup_early_uart to replace
>>>>> the previous setup_fixmap.
>>>>>
>>>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>>>> ---
>>>>>     xen/arch/arm/arm64/head.S               |  2 +-
>>>>>     xen/arch/arm/arm64/head_mmu.S           |  4 +-
>>>>>     xen/arch/arm/arm64/head_mpu.S           | 52
>>>> +++++++++++++++++++++++++
>>>>>     xen/arch/arm/include/asm/early_printk.h |  1 +
>>>>>     4 files changed, 56 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>>>> index 7f3f973468..a92883319d 100644
>>>>> --- a/xen/arch/arm/arm64/head.S
>>>>> +++ b/xen/arch/arm/arm64/head.S
>>>>> @@ -272,7 +272,7 @@ primary_switched:
>>>>>              * afterwards.
>>>>>              */
>>>>>             bl    remove_identity_mapping
>>>>> -        bl    setup_fixmap
>>>>> +        bl    setup_early_uart
>>>>>     #ifdef CONFIG_EARLY_PRINTK
>>>>>             /* Use a virtual address to access the UART. */
>>>>>             ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
>>>>> diff --git a/xen/arch/arm/arm64/head_mmu.S
>>>>> b/xen/arch/arm/arm64/head_mmu.S index b59c40495f..a19b7c873d
>>>> 100644
>>>>> --- a/xen/arch/arm/arm64/head_mmu.S
>>>>> +++ b/xen/arch/arm/arm64/head_mmu.S
>>>>> @@ -312,7 +312,7 @@ ENDPROC(remove_identity_mapping)
>>>>>      *
>>>>>      * Clobbers x0 - x3
>>>>>      */
>>>>> -ENTRY(setup_fixmap)
>>>>> +ENTRY(setup_early_uart)
>>>>
>>>> This function is doing more than enabling the early UART. It also
>>>> sets up the fixmap even when earlyprintk is not configured.
>>>
>>> True, true.
>>> I've thoroughly read the MMU implementation of setup_fixmap, and I'll
>>> try to split it up.
>>>
>>>>
>>>> I am not entirely sure what the name should be. Maybe this needs to
>>>> be split further.
>>>>
>>>>>     #ifdef CONFIG_EARLY_PRINTK
>>>>>             /* Add UART to the fixmap table */
>>>>>             ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
>>>>> @@ -325,7 +325,7 @@ ENTRY(setup_fixmap)
>>>>>             dsb   nshst
>>>>>
>>>>>             ret
>>>>> -ENDPROC(setup_fixmap)
>>>>> +ENDPROC(setup_early_uart)
>>>>>
>>>>>     /* Fail-stop */
>>>>>     fail:   PRINT("- Boot failed -\r\n")
>>>>> diff --git a/xen/arch/arm/arm64/head_mpu.S
>>>>> b/xen/arch/arm/arm64/head_mpu.S index e2ac69b0cc..72d1e0863d
>>>> 100644
>>>>> --- a/xen/arch/arm/arm64/head_mpu.S
>>>>> +++ b/xen/arch/arm/arm64/head_mpu.S
>>>>> @@ -18,8 +18,10 @@
>>>>>     #define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
>>>>>     #define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
>>>>>     #define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
>>>>> +#define REGION_DEVICE_PRBAR     0x22    /* SH=10 AP=00 XN=10 */
>>>>>
>>>>>     #define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
>>>>> +#define REGION_DEVICE_PRLAR     0x09    /* NS=0 ATTR=100 EN=1 */
>>>>>
>>>>>     /*
>>>>>      * Macro to round up the section address to be PAGE_SIZE aligned
>>>>> @@
>>>>> -334,6 +336,56 @@ ENTRY(enable_mm)
>>>>>         ret
>>>>>     ENDPROC(enable_mm)
>>>>>
>>>>> +/*
>>>>> + * Map the early UART with a new transient MPU memory region.
>>>>> + *
>>>>
>>>> Missing "Inputs: "
>>>>
>>>>> + * x27: region selector
>>>>> + * x28: prbar
>>>>> + * x29: prlar
>>>>> + *
>>>>> + * Clobbers x0 - x4
>>>>> + *
>>>>> + */
>>>>> +ENTRY(setup_early_uart)
>>>>> +#ifdef CONFIG_EARLY_PRINTK
>>>>> +    /* stack LR as write_pr will be called later like nested function */
>>>>> +    mov   x3, lr
>>>>> +
>>>>> +    /*
>>>>> +     * MPU region for early UART is a transient region, since it will be
>>>>> +     * replaced by specific device memory layout when FDT gets parsed.
>>>>
>>>> I would rather not mention "FDT" here because this code is
>>>> independent of the firmware table used.
>>>>
>>>> However, any reason to use a transient region rather than the one
>>>> that will be used for the UART driver?
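[As a side note on the quoted patch, the PRBAR constants defined above decompose consistently. A hedged sketch follows; the field offsets (SH in bits [5:4], AP in bits [3:2], XN in bits [1:0]) are inferred from the quoted comments, not taken from the Arm architecture manual:]

```c
#include <assert.h>

/* Compose a PRBAR attribute value from its fields. The bit offsets are
 * an assumption inferred from the quoted comments (SH=11 AP=10 XN=00
 * for 0x38, etc.), not from Arm documentation. */
static unsigned int prbar_attrs(unsigned int sh, unsigned int ap,
                                unsigned int xn)
{
    return (sh << 4) | (ap << 2) | xn;
}
```

[Plugging in the quoted field values reproduces each constant, e.g. prbar_attrs(0x2, 0x0, 0x2) yields 0x22, matching REGION_DEVICE_PRBAR.]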
>>>>
>>>
>>> We don’t want to define an MPU region for each device driver. It
>>> would exhaust MPU regions very quickly.
>> What is the usual size of an MPU?
>>
>> However, even if you don't want to define one for every device, it still seems
>> sensible to define a fixed temporary one for the early UART, as this
>> would simplify the assembly code.
>>
> 
> We will add fixed MPU regions for the Xen static heap in the function setup_mm.
> If we put the early UART region in front (in the fixed-region area), it will
> leave holes after removing it.

Why? The entry could be re-used to map the devices afterwards.

> 
>>
>>> In commit " [PATCH v2 28/40] xen/mpu: map boot module section in MPU
>>> system",
>>
>> Did you mean patch #27?
>>
>>> A new FDT property `mpu,device-memory-section` will be introduced for
>>> users to statically configure the whole system device memory with the
>>> least number of memory regions in Device Tree.
>>> This section shall cover all devices that will be used in Xen, like `UART`,
>>> `GIC`, etc.
>>> For FVP_BaseR_AEMv8R, we have the following definition:
>>> ```
>>> mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>; ```
>>
>> I am a bit worried this will be a recipe for mistakes. Do you have an example
>> where the MPU will be exhausted if we reserve some entries for each device
>> (or some)?
>>
> 
> Yes, we have an internal platform where there are only 16 MPU regions.

Is the internal platform in silicon (i.e. real) or a virtual platform?

>  It will almost eat up all MPU regions with the current implementation,
> when launching two guests on the platform.
> 
> Let's calculate the most simple scenario:
> The following is MPU-related static configuration in device tree:
> ```
>          mpu,boot-module-section = <0x0 0x10000000 0x0 0x10000000>;
>          mpu,guest-memory-section = <0x0 0x20000000 0x0 0x40000000>;
>          mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>;
>          mpu,shared-memory-section = <0x0 0x7a000000 0x0 0x02000000>;
> 
>          xen,static-heap = <0x0 0x60000000 0x0 0x1a000000>;
> ```
> At the end of the boot, before reshuffling, the MPU region usage will be as follows:
> 7 (defined in assembly) + FDT (early_fdt_map) + 5 (at least one region for each "mpu,xxx-memory-section").

Can you list the 7 sections? Do they include the init section?

> 
> That will already be at least 13 MPU regions ;\.
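[The quoted budget can be written out as a small sanity check; the counts come from the discussion above, and the identifiers below are hypothetical, not Xen symbols:]

```c
#include <assert.h>

/* Region budget from the discussion: 7 regions set up in assembly, one
 * for the FDT mapping (early_fdt_map), and at least one per static
 * section property (the four "mpu,xxx-section" nodes plus
 * xen,static-heap). All names here are illustrative only. */
enum {
    MPU_REGIONS_TOTAL = 16, /* the internal platform mentioned above */
    MPU_BOOT_ASM      = 7,
    MPU_FDT           = 1,
    MPU_SECTIONS      = 5,
};

static int mpu_regions_used(void)
{
    return MPU_BOOT_ASM + MPU_FDT + MPU_SECTIONS;
}
```

[On a 16-region MPU this leaves only three free regions before any guest is launched, which is the concern being raised.]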

The section I am most concerned about is mpu,device-memory-section,
because it would likely mean that all the devices will be mapped in Xen.
Is there any risk that a guest may use different memory attributes?

On the platform you are describing, which devices are expected to be
used by Xen?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 10:04:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 10:04:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486765.754207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR1b-0007ES-SR; Mon, 30 Jan 2023 10:04:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486765.754207; Mon, 30 Jan 2023 10:04:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR1b-0007EL-P8; Mon, 30 Jan 2023 10:04:31 +0000
Received: by outflank-mailman (input) for mailman id 486765;
 Mon, 30 Jan 2023 10:04:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMR1a-0007EF-28
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 10:04:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMR1W-00058N-6P; Mon, 30 Jan 2023 10:04:26 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMR1V-0001zd-WF; Mon, 30 Jan 2023 10:04:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=fHldoKu4fx42q8hpsPXR2zEIKhwDzVrtvicQqDQdKyY=; b=qsABEo5H6RSSdqW7taPfH5FauE
	y2EekoZJ+9GVR04hYNK7dSuD5neqWPUJvhMZ6hQ02S4I9N/5hKEILHjuvzfcGRz3JMYLtn7Pir29o
	vSTWhzY6mOCZiGmFDA6ZrHNlI/K1TLKgcbI0EPxZAxdiPxQ3vlMmJu+7kfakaLSP420k=;
Message-ID: <ead5b716-24d8-6fa1-77bc-cc0b3e81a493@xen.org>
Date: Mon, 30 Jan 2023 10:04:24 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/common: Constify the parameter of _spin_is_locked()
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230127190516.52994-1-julien@xen.org>
 <eddf2879-1914-9401-d715-b711aaa7ed6c@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <eddf2879-1914-9401-d715-b711aaa7ed6c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 30/01/2023 08:11, Jan Beulich wrote:
> On 27.01.2023 20:05, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The lock is not meant to be modified by _spin_is_locked(). So constify
>> it.
>>
>> This is helpful to be able to assert the lock is taken when the
>> underlying structure is const.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> But: Could I talk you into doing the same for _rw_is{,_write}_locked() for
> consistency?

Sure. Although, I would prefer to do it in a separate patch.
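[The benefit described in the quoted commit message can be illustrated with a minimal sketch; the lock type and the structure below are simplified, hypothetical stand-ins, not the actual Xen definitions:]

```c
#include <assert.h>

typedef struct {
    int raw; /* 0 = free, non-zero = held (simplified model) */
} spinlock_t;

/* The predicate only reads the lock state, so the parameter can be
 * const-qualified. */
static int spin_is_locked(const spinlock_t *lock)
{
    return lock->raw != 0;
}

/* Hypothetical structure protected by the lock. */
struct counter {
    spinlock_t lock;
    int value;
};

/* A reader taking a pointer-to-const can now assert the lock is held;
 * this would not compile if spin_is_locked() took a non-const pointer. */
static int counter_read(const struct counter *c)
{
    assert(spin_is_locked(&c->lock));
    return c->value;
}
```

[The same reasoning would apply to constifying _rw_is{,_write}_locked(), as suggested in the reply.]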

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 10:08:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 10:08:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486773.754217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR5L-0007zw-AY; Mon, 30 Jan 2023 10:08:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486773.754217; Mon, 30 Jan 2023 10:08:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR5L-0007zp-7p; Mon, 30 Jan 2023 10:08:23 +0000
Received: by outflank-mailman (input) for mailman id 486773;
 Mon, 30 Jan 2023 10:08:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMR5K-0007zj-5o
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 10:08:22 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 058c1f97-a086-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 11:08:19 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 75DF52198D;
 Mon, 30 Jan 2023 10:08:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 2E93513444;
 Mon, 30 Jan 2023 10:08:19 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id mTYCChOX12PBfgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 10:08:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 058c1f97-a086-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675073299; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=UvoK/zP3JM8dIh4F9eh+Issdx8xHzdJYq7B8CjxYg64=;
	b=fIo6wkWhVBRcOZIk5qiJwE6eSb6/sMlO02xAgL3o0FYCOMprY2In/G17omYZ6IOgwrqt0h
	FL9T7T2MX+eLWk4BjYx9wrZnI9/b5+Vqq+s0uHU/de1KD4H4HliviGTMcBb6FjdJIROPSE
	xIJVqNqiIrDSf8u849rPYlTYfN/NIXw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 0/3] xen/public move and fix 9pfs documentation
Date: Mon, 30 Jan 2023 11:08:10 +0100
Message-Id: <20230130100813.3298-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Changes in V4:
- patch split into 3 patches

Juergen Gross (3):
  xen/public: move xenstore related doc into 9pfs.h
  xen/public: fix 9pfs Xenstore entry documentation
  xen/public: fix 9pfs documentation of connection sequence

 docs/misc/9pfs.pandoc        | 153 +-----------------------------
 xen/include/public/io/9pfs.h | 178 ++++++++++++++++++++++++++++++++++-
 2 files changed, 181 insertions(+), 150 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 10:08:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 10:08:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486774.754227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR5Q-0008G5-IT; Mon, 30 Jan 2023 10:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486774.754227; Mon, 30 Jan 2023 10:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR5Q-0008Fy-FI; Mon, 30 Jan 2023 10:08:28 +0000
Received: by outflank-mailman (input) for mailman id 486774;
 Mon, 30 Jan 2023 10:08:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMR5O-0008FD-Qf
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 10:08:26 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 08c647aa-a086-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 11:08:25 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 17A9E1FE0D;
 Mon, 30 Jan 2023 10:08:25 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D02B313444;
 Mon, 30 Jan 2023 10:08:24 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Bs6PMRiX12PPfgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 10:08:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08c647aa-a086-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675073305; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1gpVOixSgSo8mxLsQVouLU//Xv8r+8OeyJOFQ+WAXrg=;
	b=DtFkOxy5eB0WNI82Kcsdi2fTpJFpxyfanJSn4pNfEPMHT/CN1DWkIMb4Le4YZ1VNpX9MHn
	tB6rO9Csna8jMzJfWqReQXaCeiAHKKPKZJ5wvTe7wZxiC6o+WfcIuc21y041psIlilvAAH
	p3WeBg57ymSI3NRclMBZ41adIlYXwtw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 1/3] xen/public: move xenstore related doc into 9pfs.h
Date: Mon, 30 Jan 2023 11:08:11 +0100
Message-Id: <20230130100813.3298-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230130100813.3298-1-jgross@suse.com>
References: <20230130100813.3298-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Xenstore-related documentation is currently to be found in
docs/misc/9pfs.pandoc instead of in the related header file
xen/include/public/io/9pfs.h, as is done for most other paravirtualized
device protocols.

There is a comment in the header pointing at the document, but the
given file name is wrong. Additionally such headers are meant to be
copied into consuming projects (Linux kernel, qemu, etc.), so pointing
at a doc file in the Xen git repository isn't really helpful for the
consumers of the header.

This situation is far from ideal, as is already proven by the fact that
neither qemu nor the Linux kernel implements the device attach/detach
protocol correctly.

Change that by moving the Xenstore related 9pfs documentation from
docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- add reference to header in the pandoc document (Jan Beulich)
V3:
- fix flaw in the connection sequence
V4:
- split patch (Julien Grall)
---
 docs/misc/9pfs.pandoc        | 153 +-------------------------------
 xen/include/public/io/9pfs.h | 166 ++++++++++++++++++++++++++++++++++-
 2 files changed, 169 insertions(+), 150 deletions(-)

diff --git a/docs/misc/9pfs.pandoc b/docs/misc/9pfs.pandoc
index b034fb5fa6..5c82625040 100644
--- a/docs/misc/9pfs.pandoc
+++ b/docs/misc/9pfs.pandoc
@@ -59,155 +59,10 @@ This document does not cover the 9pfs client/server design or
 implementation, only the transport for it.
 
 
-## Xenstore
+## Configuration
 
-The frontend and the backend connect via xenstore to exchange
-information. The toolstack creates front and back nodes with state
-[XenbusStateInitialising]. The protocol node name is **9pfs**.
-
-Multiple rings are supported for each frontend and backend connection.
-
-### Backend XenBus Nodes
-
-Backend specific properties, written by the backend, read by the
-frontend:
-
-    versions
-         Values:         <string>
-
-         List of comma separated protocol versions supported by the backend.
-         For example "1,2,3". Currently the value is just "1", as there is
-         only one version. N.B.: this is the version of the Xen trasport
-         protocol, not the version of 9pfs supported by the server.
-
-    max-rings
-         Values:         <uint32_t>
-
-         The maximum supported number of rings per frontend.
-
-    max-ring-page-order
-         Values:         <uint32_t>
-
-         The maximum supported size of a memory allocation in units of
-         log2n(machine pages), e.g. 1 = 2 pages, 2 == 4 pages, etc. It
-         must be at least 1.
-
-Backend configuration nodes, written by the toolstack, read by the
-backend:
-
-    path
-         Values:         <string>
-
-         Host filesystem path to share.
-
-    tag
-         Values:         <string>
-
-         Alphanumeric tag that identifies the 9pfs share. The client needs
-         to know the tag to be able to mount it.
-
-    security-model
-         Values:         "none"
-
-         *none*: files are stored using the same credentials as they are
-                 created on the guest (no user ownership squash or remap)
-         Only "none" is supported in this version of the protocol.
-
-### Frontend XenBus Nodes
-
-    version
-         Values:         <string>
-
-         Protocol version, chosen among the ones supported by the backend
-         (see **versions** under [Backend XenBus Nodes]). Currently the
-         value must be "1".
-
-    num-rings
-         Values:         <uint32_t>
-
-         Number of rings. It needs to be lower or equal to max-rings.
-
-    event-channel-<num> (event-channel-0, event-channel-1, etc)
-         Values:         <uint32_t>
-
-         The identifier of the Xen event channel used to signal activity
-         in the ring buffer. One for each ring.
-
-    ring-ref<num> (ring-ref0, ring-ref1, etc)
-         Values:         <uint32_t>
-
-         The Xen grant reference granting permission for the backend to
-         map a page with information to setup a share ring. One for each
-         ring.
-
-### State Machine
-
-Initialization:
-
-    *Front*                               *Back*
-    XenbusStateInitialising               XenbusStateInitialising
-    - Query virtual device                - Query backend device
-      properties.                           identification data.
-    - Setup OS device instance.           - Publish backend features
-    - Allocate and initialize the           and transport parameters
-      request ring.                                      |
-    - Publish transport parameters                       |
-      that will be in effect during                      V
-      this connection.                            XenbusStateInitWait
-                 |
-                 |
-                 V
-       XenbusStateInitialised
-
-                                          - Query frontend transport parameters.
-                                          - Connect to the request ring and
-                                            event channel.
-                                                         |
-                                                         |
-                                                         V
-                                                 XenbusStateConnected
-
-     - Query backend device properties.
-     - Finalize OS virtual device
-       instance.
-                 |
-                 |
-                 V
-        XenbusStateConnected
-
-Once frontend and backend are connected, they have a shared page per
-ring, which are used to setup the rings, and an event channel per ring,
-which are used to send notifications.
-
-Shutdown:
-
-    *Front*                            *Back*
-    XenbusStateConnected               XenbusStateConnected
-                |
-                |
-                V
-       XenbusStateClosing
-
-                                       - Unmap grants
-                                       - Unbind evtchns
-                                                 |
-                                                 |
-                                                 V
-                                         XenbusStateClosing
-
-    - Unbind evtchns
-    - Free rings
-    - Free data structures
-               |
-               |
-               V
-       XenbusStateClosed
-
-                                       - Free remaining data structures
-                                                 |
-                                                 |
-                                                 V
-                                         XenbusStateClosed
+The frontend and backend are configured via Xenstore. See [header] for
+the detailed Xenstore entries and the connection protocol.
 
 
 ## Ring Setup
@@ -415,5 +270,5 @@ the *size* field of the 9pfs header.
 
 [paper]: https://www.usenix.org/legacy/event/usenix05/tech/freenix/full_papers/hensbergen/hensbergen.pdf
 [website]: https://github.com/chaos/diod/blob/master/protocol.md
-[XenbusStateInitialising]: https://xenbits.xen.org/docs/unstable/hypercall/x86_64/include,public,io,xenbus.h.html
+[header]: https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/io/9pfs.h;hb=HEAD
 [ring.h]: https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/io/ring.h;hb=HEAD
diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
index ad26bd69eb..5dad0db869 100644
--- a/xen/include/public/io/9pfs.h
+++ b/xen/include/public/io/9pfs.h
@@ -14,9 +14,173 @@
 #include "ring.h"
 
 /*
- * See docs/misc/9pfs.markdown in xen.git for the full specification:
+ * See docs/misc/9pfs.pandoc in xen.git for the full specification:
  * https://xenbits.xen.org/docs/unstable/misc/9pfs.html
  */
+
+/*
+ ******************************************************************************
+ *                                  Xenstore
+ ******************************************************************************
+ *
+ * The frontend and the backend connect via xenstore to exchange
+ * information. The toolstack creates front and back nodes with state
+ * XenbusStateInitialising. The protocol node name is **9pfs**.
+ *
+ * Multiple rings are supported for each frontend and backend connection.
+ *
+ ******************************************************************************
+ *                            Backend XenBus Nodes
+ ******************************************************************************
+ *
+ * Backend specific properties, written by the backend, read by the
+ * frontend:
+ *
+ *    versions
+ *         Values:         <string>
+ *
+ *         List of comma separated protocol versions supported by the backend.
+ *         For example "1,2,3". Currently the value is just "1", as there is
+ *         only one version. N.B.: this is the version of the Xen transport
+ *         protocol, not the version of 9pfs supported by the server.
+ *
+ *    max-rings
+ *         Values:         <uint32_t>
+ *
+ *         The maximum supported number of rings per frontend.
+ *
+ *    max-ring-page-order
+ *         Values:         <uint32_t>
+ *
+ *         The maximum supported size of a memory allocation in units of
+ *         log2n(machine pages), e.g. 1 = 2 pages, 2 == 4 pages, etc. It
+ *         must be at least 1.
+ *
+ * Backend configuration nodes, written by the toolstack, read by the
+ * backend:
+ *
+ *    path
+ *         Values:         <string>
+ *
+ *         Host filesystem path to share.
+ *
+ *    tag
+ *         Values:         <string>
+ *
+ *         Alphanumeric tag that identifies the 9pfs share. The client needs
+ *         to know the tag to be able to mount it.
+ *
+ *    security-model
+ *         Values:         "none"
+ *
+ *         *none*: files are stored using the same credentials as they are
+ *                 created on the guest (no user ownership squash or remap)
+ *         Only "none" is supported in this version of the protocol.
+ *
+ ******************************************************************************
+ *                            Frontend XenBus Nodes
+ ******************************************************************************
+ *
+ *    version
+ *         Values:         <string>
+ *
+ *         Protocol version, chosen among the ones supported by the backend
+ *         (see **versions** under [Backend XenBus Nodes]). Currently the
+ *         value must be "1".
+ *
+ *    num-rings
+ *         Values:         <uint32_t>
+ *
+ *         Number of rings. It needs to be lower or equal to max-rings.
+ *
+ *    event-channel-<num> (event-channel-0, event-channel-1, etc)
+ *         Values:         <uint32_t>
+ *
+ *         The identifier of the Xen event channel used to signal activity
+ *         in the ring buffer. One for each ring.
+ *
+ *    ring-ref<num> (ring-ref0, ring-ref1, etc)
+ *         Values:         <uint32_t>
+ *
+ *         The Xen grant reference granting permission for the backend to
+ *         map a page with information to setup a share ring. One for each
+ *         ring.
+ *
+ ******************************************************************************
+ *                              State Machine
+ ******************************************************************************
+ *
+ * Initialization:
+ *
+ *    *Front*                               *Back*
+ *    XenbusStateInitialising               XenbusStateInitialising
+ *    - Query virtual device                - Query backend device
+ *      properties.                           identification data.
+ *    - Setup OS device instance.           - Publish backend features
+ *    - Allocate and initialize the           and transport parameters
+ *      request ring.                                      |
+ *    - Publish transport parameters                       |
+ *      that will be in effect during                      V
+ *      this connection.                            XenbusStateInitWait
+ *                 |
+ *                 |
+ *                 V
+ *       XenbusStateInitialised
+ *
+ *                                          - Query frontend transport
+ *                                            parameters.
+ *                                          - Connect to the request ring and
+ *                                            event channel.
+ *                                                         |
+ *                                                         |
+ *                                                         V
+ *                                                 XenbusStateConnected
+ *
+ *    - Query backend device properties.
+ *    - Finalize OS virtual device
+ *      instance.
+ *                |
+ *                |
+ *                V
+ *       XenbusStateConnected
+ *
+ * Once frontend and backend are connected, they have a shared page per
+ * ring, which are used to setup the rings, and an event channel per ring,
+ * which are used to send notifications.
+ *
+ * Shutdown:
+ *
+ *    *Front*                            *Back*
+ *    XenbusStateConnected               XenbusStateConnected
+ *                |
+ *                |
+ *                V
+ *       XenbusStateClosing
+ *
+ *                                       - Unmap grants
+ *                                       - Unbind evtchns
+ *                                                 |
+ *                                                 |
+ *                                                 V
+ *                                         XenbusStateClosing
+ *
+ *    - Unbind evtchns
+ *    - Free rings
+ *    - Free data structures
+ *               |
+ *               |
+ *               V
+ *       XenbusStateClosed
+ *
+ *                                       - Free remaining data structures
+ *                                                 |
+ *                                                 |
+ *                                                 V
+ *                                         XenbusStateClosed
+ *
+ ******************************************************************************
+ */
+
 DEFINE_XEN_FLEX_RING_AND_INTF(xen_9pfs);
 
 #endif
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 10:08:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 10:08:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486775.754237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR5V-00008m-Va; Mon, 30 Jan 2023 10:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486775.754237; Mon, 30 Jan 2023 10:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR5V-00008d-S8; Mon, 30 Jan 2023 10:08:33 +0000
Received: by outflank-mailman (input) for mailman id 486775;
 Mon, 30 Jan 2023 10:08:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMR5U-0007zj-FH
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 10:08:32 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0c0be557-a086-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 11:08:30 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8EEE22198D;
 Mon, 30 Jan 2023 10:08:30 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7408A13444;
 Mon, 30 Jan 2023 10:08:30 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id IpoWGx6X12PdfgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 10:08:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c0be557-a086-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675073310; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PIKqSOQIHwMFBZvx0nM7IWyLZBtpfrkRHMl2wjqjT3E=;
	b=ol8vD0zsG46kHnFOwZCRNRQuasrgV42YBlye71qNr6UzkpM02PFyq0c/D7ectsA54zRBGF
	tGXR8OUrTZkR1uB7KJ7EPSKas/Ug+x6jeA8g75dlCbaKXFlGTjSywUMqMiioyDrIUsN55m
	E9idgMYEh2fil+NZ6hNjd4ZgQjoTfAk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>
Subject: [PATCH v4 2/3] xen/public: fix 9pfs Xenstore entry documentation
Date: Mon, 30 Jan 2023 11:08:12 +0100
Message-Id: <20230130100813.3298-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230130100813.3298-1-jgross@suse.com>
References: <20230130100813.3298-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In xen/include/public/io/9pfs.h the documentation of the Xenstore
entries doesn't reflect reality: the "tag" Xenstore entry is on the
frontend side, not on the backend one.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- patch split off (Julien Grall)
---
 xen/include/public/io/9pfs.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
index 5dad0db869..617ad9afd7 100644
--- a/xen/include/public/io/9pfs.h
+++ b/xen/include/public/io/9pfs.h
@@ -64,12 +64,6 @@
  *
  *         Host filesystem path to share.
  *
- *    tag
- *         Values:         <string>
- *
- *         Alphanumeric tag that identifies the 9pfs share. The client needs
- *         to know the tag to be able to mount it.
- *
  *    security-model
  *         Values:         "none"
  *
@@ -106,6 +100,12 @@
  *         map a page with information to setup a share ring. One for each
  *         ring.
  *
+ *    tag
+ *         Values:         <string>
+ *
+ *         Alphanumeric tag that identifies the 9pfs share. The client needs
+ *         to know the tag to be able to mount it.
+ *
  ******************************************************************************
  *                              State Machine
  ******************************************************************************
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 10:08:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 10:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486776.754247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR5b-0000XD-7k; Mon, 30 Jan 2023 10:08:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486776.754247; Mon, 30 Jan 2023 10:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMR5b-0000W0-3q; Mon, 30 Jan 2023 10:08:39 +0000
Received: by outflank-mailman (input) for mailman id 486776;
 Mon, 30 Jan 2023 10:08:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMR5a-0007zj-1R
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 10:08:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f4fb7ba-a086-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 11:08:36 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 11A701F8C4;
 Mon, 30 Jan 2023 10:08:36 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EBD5513444;
 Mon, 30 Jan 2023 10:08:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 8jdJOCOX12PpfgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 10:08:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f4fb7ba-a086-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675073316; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l9hWyX1xxO2Hr5xWFMrMhIclyrLkWDLxq1LOLbj+I4I=;
	b=BxYB/DNEWRCHsmyS3EpSsivX0/aho+AoE2IVhSnhlc9tYl5U4poTocdzx/q3SuxMLnzIvs
	usgKW3GlIbwCbAhOsoUPdcRdJIFCyEQA2IXLTFelllHVSBbzqv33TAxmeTCQiWXpVV/9is
	EhsQRunnJe9xhGj2KTXe3Aq2mKjLZUQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>
Subject: [PATCH v4 3/3] xen/public: fix 9pfs documentation of connection sequence
Date: Mon, 30 Jan 2023 11:08:13 +0100
Message-Id: <20230130100813.3298-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230130100813.3298-1-jgross@suse.com>
References: <20230130100813.3298-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The documented connection sequence in xen/include/public/io/9pfs.h has
a bug: the frontend needs to wait for the backend to have published its
features before it can allocate its rings and event channels.

While correcting that, make it clear that there might be multiple
rings and event-channels by adding "(s)".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- patch split off (Julien Grall)
---
 xen/include/public/io/9pfs.h | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
index 617ad9afd7..a0ce82d0a8 100644
--- a/xen/include/public/io/9pfs.h
+++ b/xen/include/public/io/9pfs.h
@@ -114,14 +114,26 @@
  *
  *    *Front*                               *Back*
  *    XenbusStateInitialising               XenbusStateInitialising
- *    - Query virtual device                - Query backend device
- *      properties.                           identification data.
- *    - Setup OS device instance.           - Publish backend features
- *    - Allocate and initialize the           and transport parameters
- *      request ring.                                      |
- *    - Publish transport parameters                       |
- *      that will be in effect during                      V
- *      this connection.                            XenbusStateInitWait
+ *                                          - Query backend device
+ *                                            identification data.
+ *                                          - Publish backend features
+ *                                            and transport parameters.
+ *                                                         |
+ *                                                         |
+ *                                                         V
+ *                                                  XenbusStateInitWait
+ *
+ *    - Query virtual device
+ *      properties.
+ *    - Query backend features and
+ *      transport parameters.
+ *    - Setup OS device instance.
+ *    - Allocate and initialize the
+ *      request ring(s) and
+ *      event-channel(s).
+ *    - Publish transport parameters
+ *      that will be in effect during
+ *      this connection.
  *                 |
  *                 |
  *                 V
@@ -129,8 +141,8 @@
  *
  *                                          - Query frontend transport
  *                                            parameters.
- *                                          - Connect to the request ring and
- *                                            event channel.
+ *                                          - Connect to the request ring(s)
+ *                                            and event channel(s).
  *                                                         |
  *                                                         |
  *                                                         V
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 10:35:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 10:35:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486823.754257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRVQ-00056g-An; Mon, 30 Jan 2023 10:35:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486823.754257; Mon, 30 Jan 2023 10:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRVQ-00056Z-7N; Mon, 30 Jan 2023 10:35:20 +0000
Received: by outflank-mailman (input) for mailman id 486823;
 Mon, 30 Jan 2023 10:35:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yMo9=53=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1pMRVO-00056T-FX
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 10:35:18 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c9623db2-a089-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 11:35:16 +0100 (CET)
Received: by mail-ej1-x632.google.com with SMTP id k4so24876441eje.1
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 02:35:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9623db2-a089-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=4uHfr7it5a2a70vFI7R0+3gXMao07p+8Hh+OMmhXjhA=;
        b=C5np3YF7UMcb6nANt9JJo+7RycoDvgSYTH2c9vImEqSYoMQDnodlW6pwFr59CkdiBG
         bcO1NWrKY7Wp0wTDIcJdtHPneqtRX8VvmhLgusFzA7ukgIG5thL1+38Cna6ORlQwkJRZ
         hIwY50mWe8XSQ+WBGo6bQlKbkGILLHvLgTMjE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=4uHfr7it5a2a70vFI7R0+3gXMao07p+8Hh+OMmhXjhA=;
        b=5fOqyKZ24VSDMvmSzCZveCI0VunD0uxmtf3hdBXqIVpTXjKCE7CGUqdwrJi1r8LvRq
         2qFSs0UB2K8j1liboUTGTLU4whF9lYhoCJeK0L7HBT94r5zODmw9BUfh4FCV795syrAG
         cRiXFDR1Osuddh3pEwtKmQYb5LjTx3GGHM5dFJg6rwXUEE7+vvrr7iytm40C/8NHfCaG
         CdUFV8vcLLmr87dmq2NsDnZcwkfqkL5JPfM/rgfQl6mBjd4Xf9/q83PNOnhc9Y+ylkQt
         Tp/EO2s7yU8+ZTGPLzRqoIXzGPVEI6y1Fj33/4b6a6/htL3hoboT5QJFXBtHhyFkha3E
         IIRg==
X-Gm-Message-State: AO0yUKU+2UM0o5w/Ck7sT8qo6KH/pRJFek4NSibkX+dbK6Aj2OP8B7Eh
	ImQz2ww66JDhERWSEvPUxAkbZG7vJ97ASKCSajbKYDF3b75NBCIv
X-Google-Smtp-Source: AK7set9IGEmZwnlXJmvFaZCeWAUCV94pcydfUGwl1fSH7lRcf781zS0SsSMcBb4PcW7nuOPf78YWRQuxp8U2Kv6siCs=
X-Received: by 2002:a17:906:3993:b0:878:4cc7:ed23 with SMTP id
 h19-20020a170906399300b008784cc7ed23mr4449772eje.14.1675074916270; Mon, 30
 Jan 2023 02:35:16 -0800 (PST)
MIME-Version: 1.0
From: George Dunlap <george.dunlap@cloud.com>
Date: Mon, 30 Jan 2023 10:35:05 +0000
Message-ID: <CA+zSX=btwMhb5ah1Hcc3q-V2hGuo4nxhcGuKQvVRb8yLMfKaoQ@mail.gmail.com>
Subject: [ANNOUNCE] Call for agenda items for 2 February Community Call @ 1600 UTC
To: Xen-devel <xen-devel@lists.xenproject.org>, 
	Tamas K Lengyel <tamas.k.lengyel@gmail.com>, "intel-xen@intel.com" <intel-xen@intel.com>, 
	"daniel.kiper@oracle.com" <daniel.kiper@oracle.com>, Roger Pau Monne <roger.pau@citrix.com>, 
	Sergey Dyasli <sergey.dyasli@citrix.com>, 
	Christopher Clark <christopher.w.clark@gmail.com>, Rich Persaud <persaur@gmail.com>, 
	Kevin Pearson <kevin.pearson@ortmanconsulting.com>, Juergen Gross <jgross@suse.com>, 
	Paul Durrant <pdurrant@amazon.com>, "Ji, John" <john.ji@intel.com>, 
	"edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, 
	"robin.randhawa@arm.com" <robin.randhawa@arm.com>, Artem Mygaiev <Artem_Mygaiev@epam.com>, 
	Matt Spencer <Matt.Spencer@arm.com>, Stewart Hildebrand <Stewart.Hildebrand@amd.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith" <dpsmith@apertussolutions.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLRG91ZyBHb2xkc3RlaW4=?= <cardoe@cardoe.com>, 
	George Dunlap <george.dunlap@citrix.com>, David Woodhouse <dwmw@amazon.co.uk>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQW1pdCBTaGFo?= <amit@infradead.org>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLVmFyYWQgR2F1dGFt?= <varadgautam@gmail.com>, 
	Brian Woods <brian.woods@xilinx.com>, Robert Townley <rob.townley@gmail.com>, 
	Bobby Eshleman <bobby.eshleman@gmail.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQ29yZXkgTWlueWFyZA==?= <cminyard@mvista.com>, 
	Olivier Lambert <olivier.lambert@vates.fr>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Ash Wilding <ash.j.wilding@gmail.com>, Rahul Singh <Rahul.Singh@arm.com>, 
	=?UTF-8?Q?Piotr_Kr=C3=B3l?= <piotr.krol@3mdeb.com>, 
	Brendan Kerrigan <brendank310@gmail.com>, insurgo@riseup.net, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Scott Davis <scottwd@gmail.com>, 
	Ben Boyd <ben@exotanium.io>, Anthony PERARD <anthony.perard@citrix.com>, 
	Michal Orzel <michal.orzel@amd.com>, Marc Ungeschikts <marc.ungeschikts@vates.fr>, 
	Zhiming Shen <zshen@exotanium.io>, Xenia Ragiadakou <burzalodowa@gmail.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLSGVucnkgV2FuZw==?= <Henry.Wang@arm.com>, 
	Per Bilse <per.bilse@citrix.com>, Samuel Verschelde <stormi-xcp@ylix.fr>, 
	Andrei Semenov <andrei.semenov@vates.fr>, Yann Dirson <yann.dirson@vates.fr>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLTHVjYSBGYW5jZWxsdQ==?= <luca.fancellu@arm.com>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Content-Type: multipart/alternative; boundary="0000000000005b5c4005f378c558"

--0000000000005b5c4005f378c558
Content-Type: text/plain; charset="UTF-8"

Hi all,

The proposed agenda is in
https://cryptpad.fr/pad/#/2/pad/edit/cgGa9qrIcYsDwlFMii+tBJv+/ and you can
edit to add items.  Alternatively, you can reply to this mail directly.

Agenda items are appreciated a few days before the call: please put your
name beside any items you add if you edit the document.

Note the following administrative conventions for the call:
* Unless otherwise agreed in the previous meeting, the call is on the 1st
Thursday of each month at 1600 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a
provisional agenda

* To allow time to switch between meetings, we'll plan on starting the
agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time
to sort out technical difficulties &c

* If you want to be CC'ed, please add or remove yourself from the
sign-up sheet at
https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George


== Dial-in Information ==
## Meeting time
16:00 - 17:00 UTC
Further International meeting times:
<https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=4&day=2&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179>
<https://www.timeanddate.com/worldclock/meetingdetails.html?year=2021&month=04&day=8&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179>
https://www.timeanddate.com/worldclock/meetingdetails.html?year=2022&month=11&day=3&hour=16&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179


## Dial in details
Web: https://meet.jit.si/XenProjectCommunityCall

Dial-in info and pin can be found here:

https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall


--0000000000005b5c4005f378c558--


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 10:42:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 10:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486836.754267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRcb-0006Y0-41; Mon, 30 Jan 2023 10:42:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486836.754267; Mon, 30 Jan 2023 10:42:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRca-0006Xt-W8; Mon, 30 Jan 2023 10:42:44 +0000
Received: by outflank-mailman (input) for mailman id 486836;
 Mon, 30 Jan 2023 10:42:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AqBv=53=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pMRcZ-0006Xn-Oa
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 10:42:43 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2084.outbound.protection.outlook.com [40.107.7.84])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d34d7055-a08a-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 11:42:43 +0100 (CET)
Received: from DUZPR01CA0031.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:468::16) by AS4PR08MB7902.eurprd08.prod.outlook.com
 (2603:10a6:20b:51d::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 10:42:28 +0000
Received: from DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:468:cafe::d) by DUZPR01CA0031.outlook.office365.com
 (2603:10a6:10:468::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36 via Frontend
 Transport; Mon, 30 Jan 2023 10:42:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT063.mail.protection.outlook.com (100.127.142.255) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.17 via Frontend Transport; Mon, 30 Jan 2023 10:42:27 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Mon, 30 Jan 2023 10:42:27 +0000
Received: from 60b79171cdcc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6A77341F-F10E-4E94-89C3-DA7F8DEBBAF8.1; 
 Mon, 30 Jan 2023 10:42:16 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 60b79171cdcc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 30 Jan 2023 10:42:16 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS8PR08MB8133.eurprd08.prod.outlook.com (2603:10a6:20b:54e::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.23; Mon, 30 Jan
 2023 10:42:14 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::28a0:e0e8:1277:2a06]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::28a0:e0e8:1277:2a06%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 10:42:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d34d7055-a08a-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LaLvTfGY3yPCKPIkdgdR6515xyazaVC9XVEgVwtipb8=;
 b=7rOK3ya4kX5Pn+LzzNWz9hJ5iQxxmxZPAHsSG8o2nvJTCxrZhvyW13BW85ouM7j5RQH0Wwnib6xpjIcJHqBVqH5UiD6zdLxvuSu8EPBLcA8EeXYdVOeznGbRiWluUJJNbOGNuVkObqjf2IRU1hxnNBipxCjznMtMrvdXDId8Wa0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6a5222100630ab47
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SjQHVUoqR1SkNWqCxW5X0FvZ6DhU9j9s6JIU3WUaBIN388ax/IaKifRjN9ovc26KaIKt4DHmcAdyxEUV8b61mJYwF0AetXblnybDU1m6Z5A5YmF4OomdzCm15Q3P3F0GrT9qKTKNgqwTu+XNlGtC4OdYuLpabE2lO50oN+5ql4Gt6z2NC14Tys4XGcoLn1Hst2/yhcNb+b9+wmYfzEnZVBQb846YFeiQeCd0iMS2DWpcaPbxhHID5YAzvieSKEWTn8KBBuafw5PTZr5yz0Mcs5al2bjtWDeVbpH+C+iTVqxLglvLpOfhpCel2Dh3BqttFa7I50+ZDEYHTV2lukxmkg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LaLvTfGY3yPCKPIkdgdR6515xyazaVC9XVEgVwtipb8=;
 b=YGudpXjxGLBDJ1TApeWKgRIEuo7Gv+j3hhzLTLvea+B4Rp9+RS07LibeyGZbDt7Js3cHsdD7vziRSFK1+Hd6QSTaXVc2glSL7P4QPzN+Psgc6IvuhiZilW9KQOuCVhJHyj9jiCgBnn2sUaCSF1MIiw7zQwKn9f5F2vrQzvcqRA2yaw5ul1KKrRzoSGzJ9gj82MoqYjLV+UiNZO3cZWx6jVo4luCmhpFwGCW42XPzJa0INRiyfFfs8S3rdsZxNWFZWbDN/QWPjVCGYr3v4RWb7XhdvwLtT9/6EmZc8Uofb9B6PihACxHewloOD/2w+tzzsSY+xUw3ALeaerR6MUPIrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LaLvTfGY3yPCKPIkdgdR6515xyazaVC9XVEgVwtipb8=;
 b=7rOK3ya4kX5Pn+LzzNWz9hJ5iQxxmxZPAHsSG8o2nvJTCxrZhvyW13BW85ouM7j5RQH0Wwnib6xpjIcJHqBVqH5UiD6zdLxvuSu8EPBLcA8EeXYdVOeznGbRiWluUJJNbOGNuVkObqjf2IRU1hxnNBipxCjznMtMrvdXDId8Wa0=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Luca Fancellu
	<Luca.Fancellu@arm.com>, Michal Orzel <michal.orzel@amd.com>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 4/5] xen/arm64: mm: Rework switch_ttbr()
Thread-Topic: [PATCH v5 4/5] xen/arm64: mm: Rework switch_ttbr()
Thread-Index: AQHZMolOd87sSpDhTkiljIZJzDGtf662yeWA
Date: Mon, 30 Jan 2023 10:42:12 +0000
Message-ID: <1A67BC62-95DE-41F4-ABA5-D473B9D27D60@arm.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-5-julien@xen.org>
In-Reply-To: <20230127195508.2786-5-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AS8PR08MB8133:EE_|DBAEUR03FT063:EE_|AS4PR08MB7902:EE_
X-MS-Office365-Filtering-Correlation-Id: aface393-8025-41cf-526e-08db02aeae99
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 iyO/BnotuK0ZD/eEPiH1/FbTM5JjKgiolDncwOavxGeCu4zuN9gWSKBoOy2kir7iPL1kON2KrZxwYgh91Qbee4K1Dq6BUelsMw/7NOGAkdWYvFNBcwwOANSkaHIINh18A/JMm7FnH6ILnRCZI/EgraUC/HXToJ/J4VNtilcPPNz5RnRgbfuRG8bfRSCnGaZVu7AA7SKLAsZ+Seojd4pXUW7Z6PTnoS4ogO+eFdNaT9s0kVxWxMUyGxpxww9UAnlgmqsao2Wv170K43CLu6Lry6AYzxkj+8SNPm3w2v7qBeMFNCDehvwqf8EjnZzLyko8A1dP6zrDn0iP1ImwrBGn4Q22nVU/rJRC+gs8C1l1yB8UPdztqjchzP+ibLeZdG91jCpHQZXCqYwbmEaUPa5C4AjerLBrwWpQM03JGl5KJkDpcYPoKTZlaQac9kMG84MMvI78ZkA1CM0VVNX3LoEOJ82AxO08rIxb5T8ZJE71SLY5j0JP0No6bNyYc2Ceq8K0THJlzSHSAsK0yyG6Nwf0o6tnWlp3Mqk4670cXNtRzaKWEharPuh/DoKw9MMOsWteG7XtAnSh4ET6IxkNFhR1TzxQXRU5hO86XvBUGG+6DBWN5hIIYAP5HSeks6Bl+jcOn66ypDQBsauNNjZSTNR9M3a9m9e/Vw20obYG7U0kIpaIs4B/dPeq1cHaNr9ifU6Add/KQLAda2rCnBeuZGRI749ZRPBRNUshwiIrG+XuC2A=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(346002)(366004)(39860400002)(396003)(136003)(376002)(451199018)(66476007)(66556008)(6916009)(64756008)(66446008)(8676002)(41300700001)(4326008)(76116006)(83380400001)(91956017)(66946007)(316002)(8936002)(6506007)(36756003)(2906002)(6512007)(33656002)(71200400001)(54906003)(5660300002)(6486002)(478600001)(86362001)(38070700005)(186003)(26005)(122000001)(38100700002)(53546011)(2616005)(45980500001);DIR:OUT;SFP:1101;
Content-Type: multipart/alternative;
	boundary="_000_1A67BC6295DE41F4ABA5D473B9D27D60armcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8133
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7a49f224-0116-4b18-6cc9-08db02aea5a6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	HXrD61spGOgYg1arDi2Cbjgqo4XbtNYXYC7hrdpc2KvakrFzx8seAlCogozgUF7xAzDWeUrNe5WLG8OWuwVtdsPRZhofnNySmF8A3mVKNkroZz4oxpukxcUkihxbJ719iEEOiMD+EEYPNfV8x/qpOs6iuLYigxTEBuC4YoyrTstpDeRCw94NGFIRJANmigO++2ROgZlnorij/nRyczcfnPXsjxdIEotm7EyBCnBGIvatyiEGt8tH8W8sTfSlVd9yyfNnvsJxEc1jkjuFPsqQZvbNb2tH+JGsYj6H2mqnXPgXe4UnuO0YpVYuK8m00t66fL6MlaSEQj0cq7JWaVFAjRfpzAbA9Lbn8e1pq/5XB4MuarxUstRObIA+Lybam04SlGjbiUbD1K7Xtxf/n7hekc+zU4I/BBn0BDzZlenObvr6QgyjDX56LCG1yscBaemo7mqoNybHB+jMT5JMXTTP7vUCvpV6QQPQC/gWnCQuwTaa9S7H1HnS/ZPBv22XameMuLNtAyB2vNW6OVYFx2Y4kXshNhX1MjoYhsaEue8UXdJplDJU9WijrR4Zufk9NEAdZorec/vFpk2b3kICgt4wAPx+bRqlKGhGz7/8s7kptq5AcR+mwsZygMQYX/cy2b9l7gCSnAztpfA3pJIYxix7vFG1zmpwmxZ8w3Rbk6kJakg18LS14+dk/dhzLnx3K5S80YE7O3KaIzei+cCNvjAdhg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(396003)(39860400002)(346002)(376002)(136003)(451199018)(36840700001)(40470700004)(46966006)(82310400005)(36756003)(36860700001)(26005)(47076005)(6512007)(82740400003)(2616005)(186003)(5660300002)(40480700001)(2906002)(53546011)(356005)(81166007)(478600001)(45080400002)(6486002)(6506007)(70206006)(70586007)(8676002)(4326008)(86362001)(83380400001)(336012)(54906003)(107886003)(40460700003)(316002)(41300700001)(8936002)(6862004)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 10:42:27.9749
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: aface393-8025-41cf-526e-08db02aeae99
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7902

--_000_1A67BC6295DE41F4ABA5D473B9D27D60armcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi Julien,

On 27 Jan 2023, at 20:55, Julien Grall <julien@xen.org> wrote:

From: Julien Grall <jgrall@amazon.com>

At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
still on.

Switching TTBR is like replacing existing mappings with new ones. So
we need to follow the break-before-make sequence.

In this case, it means the MMU needs to be switched off while the
TTBR is updated. In order to disable the MMU, we need to first
jump to an identity mapping.

Rename switch_ttbr() to switch_ttbr_id() and create a helper on
top to temporarily map the identity mapping and call switch_ttbr()
via the identity address.

switch_ttbr_id() is now reworked to temporarily turn off the MMU
before updating the TTBR.

We also need to make sure the helper switch_ttbr() is part of the
identity mapping. So move _end_boot past it.

The arm32 code will use a different approach. So this issue is for now
only resolved on arm64.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand



--_000_1A67BC6295DE41F4ABA5D473B9D27D60armcom_--


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:02:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:02:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486881.754276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRv4-0000xt-PS; Mon, 30 Jan 2023 11:01:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486881.754276; Mon, 30 Jan 2023 11:01:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRv4-0000xm-Mx; Mon, 30 Jan 2023 11:01:50 +0000
Received: by outflank-mailman (input) for mailman id 486881;
 Mon, 30 Jan 2023 11:01:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VtRO=53=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pMRv3-0000xa-K1
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:01:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7c407a3f-a08d-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 12:01:45 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E929316F2;
 Mon, 30 Jan 2023 03:02:26 -0800 (PST)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B1FCF3F71E;
 Mon, 30 Jan 2023 03:01:43 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c407a3f-a08d-11ed-b8d1-410ff93cb8f0
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] Cppcheck MISRA analysis improvements
Date: Mon, 30 Jan 2023 11:01:30 +0000
Message-Id: <20230130110132.2774782-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series adds a way to skip the check for some rules that the Xen project
has agreed to follow. Cppcheck reports too many false positives on some
rules, so in a first phase it is easier to skip the check on them and let
the tool mature before enabling it on those specific rules.

The series also includes an improvement to the cppcheck report.

Luca Fancellu (2):
  xen/cppcheck: sort alphabetically cppcheck report entries
  xen/cppcheck: add parameter to skip given MISRA rules

 xen/scripts/xen_analysis/cppcheck_analysis.py |  8 +++--
 .../xen_analysis/cppcheck_report_utils.py     |  7 ++++
 xen/scripts/xen_analysis/settings.py          | 35 +++++++++++--------
 xen/tools/convert_misra_doc.py                | 28 ++++++++++-----
 4 files changed, 53 insertions(+), 25 deletions(-)

-- 
2.25.1
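[Editor's sketch] The rule-skipping this series introduces boils down to parsing a list of MISRA rule identifiers in number notation. A minimal illustration in plain Python follows; the helper name `parse_skip_rules` is invented here and is not part of the series' actual code:

```python
def parse_skip_rules(arg):
    """Parse a comma-separated MISRA rule list such as "1.1,1.3"
    into individual rule identifiers, ignoring surrounding
    whitespace and empty entries left by trailing commas."""
    return [rule.strip() for rule in arg.split(",") if rule.strip()]

# A rule whose identifier appears in the list would then be skipped.
skipped = parse_skip_rules("1.1, 1.3,")
```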



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:02:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:02:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486882.754282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRv5-00010c-1G; Mon, 30 Jan 2023 11:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486882.754282; Mon, 30 Jan 2023 11:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRv4-0000zr-TU; Mon, 30 Jan 2023 11:01:50 +0000
Received: by outflank-mailman (input) for mailman id 486882;
 Mon, 30 Jan 2023 11:01:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VtRO=53=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pMRv4-0000xa-1D
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:01:50 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 7d21635d-a08d-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 12:01:47 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7B02D16F3;
 Mon, 30 Jan 2023 03:02:28 -0800 (PST)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 268BC3F71E;
 Mon, 30 Jan 2023 03:01:45 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d21635d-a08d-11ed-b8d1-410ff93cb8f0
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH v2 1/2] xen/cppcheck: sort alphabetically cppcheck report entries
Date: Mon, 30 Jan 2023 11:01:31 +0000
Message-Id: <20230130110132.2774782-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130110132.2774782-1-luca.fancellu@arm.com>
References: <20230130110132.2774782-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Sort cppcheck report entries alphabetically when producing the text
report; this makes it easier to compare different reports and groups
together findings from the same file.

The sort uses two criteria: the first is the MISRA rule, the second
is the file name.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes in v2:
 - Sort with two criteria, first misra rule, second filename
   (Michal, Jan)
---
 xen/scripts/xen_analysis/cppcheck_report_utils.py | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
index 02440aefdfec..0b6cc72b9ac1 100644
--- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
+++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
@@ -104,6 +104,13 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
                 for path in strip_paths:
                     text_report_content[i] = text_report_content[i].replace(
                                                                 path + "/", "")
+                    # Split by : separator
+                    text_report_content[i] = text_report_content[i].split(":")
+            # sort alphabetically for second field (misra rule) and as second
+            # criteria for the first field (file name)
+            text_report_content.sort(key = lambda x: (x[1], x[0]))
+            # merge back with : separator
+            text_report_content = [":".join(x) for x in text_report_content]
             # Write the final text report
             outfile.writelines(text_report_content)
     except OSError as e:
-- 
2.25.1
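[Editor's sketch] The two-criteria sort in the hunk above can be exercised in isolation. The report entries below are invented for illustration (real cppcheck text-report lines also carry a line number), but the split/sort/join shape matches the patch:

```python
# Hypothetical report entries in "file:rule:message" form, mirroring
# the split on ":" done in cppcheck_report_utils.py.
report = [
    "xen/common/b.c:rule-8.4:declaration mismatch\n",
    "xen/arch/a.c:rule-2.1:unreachable code\n",
    "xen/arch/a.c:rule-8.4:declaration mismatch\n",
]
entries = [line.split(":") for line in report]
# Primary key: the rule (field 1); secondary key: the file (field 0),
# so findings are grouped per rule and, within a rule, per file.
entries.sort(key=lambda fields: (fields[1], fields[0]))
merged = [":".join(fields) for fields in entries]
```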



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:02:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:02:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486883.754297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRv6-0001S1-82; Mon, 30 Jan 2023 11:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486883.754297; Mon, 30 Jan 2023 11:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMRv6-0001Ru-5C; Mon, 30 Jan 2023 11:01:52 +0000
Received: by outflank-mailman (input) for mailman id 486883;
 Mon, 30 Jan 2023 11:01:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VtRO=53=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pMRv4-0000xg-Ss
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:01:51 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 7df2c5c9-a08d-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 12:01:49 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E0DDF16F8;
 Mon, 30 Jan 2023 03:02:29 -0800 (PST)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A9FC63F71E;
 Mon, 30 Jan 2023 03:01:46 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7df2c5c9-a08d-11ed-9ec0-891035b88211
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] xen/cppcheck: add parameter to skip given MISRA rules
Date: Mon, 30 Jan 2023 11:01:32 +0000
Message-Id: <20230130110132.2774782-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230130110132.2774782-1-luca.fancellu@arm.com>
References: <20230130110132.2774782-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a parameter to skip the given MISRA rules during the cppcheck
analysis. The rules are specified as a comma-separated list using
the MISRA number notation (e.g. 1.1,1.3,...).

Modify the convert_misra_doc.py script to take an extra parameter
giving a comma-separated list of MISRA rules to be skipped.
While there, fix some typos in the help and print functions.

Modify settings.py and cppcheck_analysis.py to add a new
parameter (--cppcheck-skip-rules) used to specify a list of
MISRA rules to be skipped during the cppcheck analysis.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in v2:
 - Add Ack-by (Stefano)
---
 xen/scripts/xen_analysis/cppcheck_analysis.py |  8 +++--
 xen/scripts/xen_analysis/settings.py          | 35 +++++++++++--------
 xen/tools/convert_misra_doc.py                | 28 ++++++++++-----
 3 files changed, 46 insertions(+), 25 deletions(-)

diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
index 0e952a169641..cc1f403d315e 100644
--- a/xen/scripts/xen_analysis/cppcheck_analysis.py
+++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
@@ -153,11 +153,15 @@ def generate_cppcheck_deps():
     if settings.cppcheck_misra:
         cppcheck_flags = cppcheck_flags + " --addon=cppcheck-misra.json"
 
+        skip_rules_arg = ""
+        if settings.cppcheck_skip_rules != "":
+            skip_rules_arg = "-s {}".format(settings.cppcheck_skip_rules)
+
         utils.invoke_command(
             "{}/convert_misra_doc.py -i {}/docs/misra/rules.rst"
-            " -o {}/cppcheck-misra.txt -j {}/cppcheck-misra.json"
+            " -o {}/cppcheck-misra.txt -j {}/cppcheck-misra.json {}"
                 .format(settings.tools_dir, settings.repo_dir,
-                        settings.outdir, settings.outdir),
+                        settings.outdir, settings.outdir, skip_rules_arg),
             False, CppcheckDepsPhaseError,
             "An error occured when running:\n{}"
         )
diff --git a/xen/scripts/xen_analysis/settings.py b/xen/scripts/xen_analysis/settings.py
index a8502e554e95..8c0d357fe0dc 100644
--- a/xen/scripts/xen_analysis/settings.py
+++ b/xen/scripts/xen_analysis/settings.py
@@ -24,6 +24,7 @@ cppcheck_binpath = "cppcheck"
 cppcheck_html = False
 cppcheck_htmlreport_binpath = "cppcheck-htmlreport"
 cppcheck_misra = False
+cppcheck_skip_rules = ""
 make_forward_args = ""
 outdir = xen_dir
 
@@ -53,20 +54,22 @@ Cppcheck report creation phase runs only when --run-cppcheck is passed to the
 script.
 
 Options:
-  --build-only          Run only the commands to build Xen with the optional
-                        make arguments passed to the script
-  --clean-only          Run only the commands to clean the analysis artifacts
-  --cppcheck-bin=       Path to the cppcheck binary (Default: {})
-  --cppcheck-html       Produce an additional HTML output report for Cppcheck
-  --cppcheck-html-bin=  Path to the cppcheck-html binary (Default: {})
-  --cppcheck-misra      Activate the Cppcheck MISRA analysis
-  --distclean           Clean analysis artifacts and reports
-  -h, --help            Print this help
-  --no-build            Skip the build Xen phase
-  --no-clean            Don\'t clean the analysis artifacts on exit
-  --run-coverity        Run the analysis for the Coverity tool
-  --run-cppcheck        Run the Cppcheck analysis tool on Xen
-  --run-eclair          Run the analysis for the Eclair tool
+  --build-only            Run only the commands to build Xen with the optional
+                          make arguments passed to the script
+  --clean-only            Run only the commands to clean the analysis artifacts
+  --cppcheck-bin=         Path to the cppcheck binary (Default: {})
+  --cppcheck-html         Produce an additional HTML output report for Cppcheck
+  --cppcheck-html-bin=    Path to the cppcheck-html binary (Default: {})
+  --cppcheck-misra        Activate the Cppcheck MISRA analysis
+  --cppcheck-skip-rules=  List of MISRA rules to be skipped, comma separated.
+                          (e.g. --cppcheck-skip-rules=1.1,20.7,8.4)
+  --distclean             Clean analysis artifacts and reports
+  -h, --help              Print this help
+  --no-build              Skip the build Xen phase
+  --no-clean              Don\'t clean the analysis artifacts on exit
+  --run-coverity          Run the analysis for the Coverity tool
+  --run-cppcheck          Run the Cppcheck analysis tool on Xen
+  --run-eclair            Run the analysis for the Eclair tool
 """
     print(msg.format(sys.argv[0], cppcheck_binpath,
                      cppcheck_htmlreport_binpath))
@@ -78,6 +81,7 @@ def parse_commandline(argv):
     global cppcheck_html
     global cppcheck_htmlreport_binpath
     global cppcheck_misra
+    global cppcheck_skip_rules
     global make_forward_args
     global outdir
     global step_get_make_vars
@@ -115,6 +119,9 @@ def parse_commandline(argv):
             cppcheck_htmlreport_binpath = args_with_content_regex.group(2)
         elif option == "--cppcheck-misra":
             cppcheck_misra = True
+        elif args_with_content_regex and \
+             args_with_content_regex.group(1) == "--cppcheck-skip-rules":
+            cppcheck_skip_rules = args_with_content_regex.group(2)
         elif option == "--distclean":
             target_distclean = True
         elif (option == "--help") or (option == "-h"):
diff --git a/xen/tools/convert_misra_doc.py b/xen/tools/convert_misra_doc.py
index 13074d8a2e91..8984ec625fa7 100755
--- a/xen/tools/convert_misra_doc.py
+++ b/xen/tools/convert_misra_doc.py
@@ -4,12 +4,14 @@
 This script is converting the misra documentation RST file into a text file
 that can be used as text-rules for cppcheck.
 Usage:
-    convert_misr_doc.py -i INPUT [-o OUTPUT] [-j JSON]
+    convert_misra_doc.py -i INPUT [-o OUTPUT] [-j JSON] [-s RULES,[...,RULES]]
 
     INPUT  - RST file containing the list of misra rules.
     OUTPUT - file to store the text output to be used by cppcheck.
              If not specified, the result will be printed to stdout.
     JSON   - cppcheck json file to be created (optional).
+    RULES  - list of rules to skip during the analysis, comma separated
+             (e.g. 1.1,1.2,1.3,...)
 """
 
 import sys, getopt, re
@@ -47,21 +49,25 @@ def main(argv):
     outfile = ''
     outstr = sys.stdout
     jsonfile = ''
+    force_skip = ''
 
     try:
-        opts, args = getopt.getopt(argv,"hi:o:j:",["input=","output=","json="])
+        opts, args = getopt.getopt(argv,"hi:o:j:s:",
+                                   ["input=","output=","json=","skip="])
     except getopt.GetoptError:
-        print('convert-misra.py -i <input> [-o <output>] [-j <json>')
+        print('convert-misra.py -i <input> [-o <output>] [-j <json>] [-s <rules>]')
         sys.exit(2)
     for opt, arg in opts:
         if opt == '-h':
-            print('convert-misra.py -i <input> [-o <output>] [-j <json>')
+            print('convert-misra.py -i <input> [-o <output>] [-j <json>] [-s <rules>]')
             print('  If output is not specified, print to stdout')
             sys.exit(1)
         elif opt in ("-i", "--input"):
             infile = arg
         elif opt in ("-o", "--output"):
             outfile = arg
+        elif opt in ("-s", "--skip"):
+            force_skip = arg
         elif opt in ("-j", "--json"):
             jsonfile = arg
 
@@ -169,14 +175,18 @@ def main(argv):
 
     skip_list = []
 
+    # Add rules to be skipped anyway
+    for r in force_skip.split(','):
+        skip_list.append(r)
+
     # Search for missing rules and add a dummy text with the rule number
     for i in misra_c2012_rules:
         for j in list(range(1,misra_c2012_rules[i]+1)):
-            if str(i) + '.' + str(j) not in rule_list:
-                outstr.write('Rule ' + str(i) + '.' + str(j) + '\n')
-                outstr.write('No description for rule ' + str(i) + '.' + str(j)
-                             + '\n')
-                skip_list.append(str(i) + '.' + str(j))
+            rule_str = str(i) + '.' + str(j)
+            if (rule_str not in rule_list) and (rule_str not in skip_list):
+                outstr.write('Rule ' + rule_str + '\n')
+                outstr.write('No description for rule ' + rule_str + '\n')
+                skip_list.append(rule_str)
 
     # Make cppcheck happy by starting the appendix
     outstr.write('Appendix B\n')
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:15:45 2023
Message-ID: <2a09c8648c6ed388997ab07aa6fcf78db6ba4b16.camel@gmail.com>
Subject: Re: [PATCH v2 14/14] automation: add smoke test to verify macros
 from bug.h
From: Oleksii <oleksii.kurochko@gmail.com>
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Doug
 Goldstein <cardoe@cardoe.com>
Date: Mon, 30 Jan 2023 13:15:27 +0200
In-Reply-To: <fe470f53-5cf5-138b-40e7-83c111ce225a@amd.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
	 <ed819dc612fcadbd04b4b44b2c0560a77796793a.1674818705.git.oleksii.kurochko@gmail.com>
	 <fe470f53-5cf5-138b-40e7-83c111ce225a@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-27 at 15:43 +0100, Michal Orzel wrote:
> Hi Oleksii,
> 
> On 27/01/2023 14:59, Oleksii Kurochko wrote:
> > 
> > 
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes in V2:
> >  - Leave only the latest "grep ..."
> > ---
> >  automation/scripts/qemu-smoke-riscv64.sh | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/automation/scripts/qemu-smoke-riscv64.sh
> > b/automation/scripts/qemu-smoke-riscv64.sh
> > index e0f06360bc..02fc66be03 100755
> > --- a/automation/scripts/qemu-smoke-riscv64.sh
> > +++ b/automation/scripts/qemu-smoke-riscv64.sh
> > @@ -16,5 +16,5 @@ qemu-system-riscv64 \
> >      |& tee smoke.serial
> > 
> >  set -e
> > -(grep -q "Hello from C env" smoke.serial) || exit 1
> > +(grep -q "WARN is most likely working" smoke.serial) || exit 1
> I think the commit msg is a bit misleading and should be changed.
> FWICS, you are not *adding* any smoke test but instead modifying
> the grep pattern to reflect the usage of WARN.
> 
Indeed, the commit message is incorrect, so it will be changed in the
new version of the patch series.

Thanks.
> >  exit 0
> > --
> > 2.39.0
> > 
> > 
> 
> ~Michal



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:23:29 2023
Message-ID: <b24bd369-680f-45ac-2ce6-d8cad582eecc@amd.com>
Date: Mon, 30 Jan 2023 12:22:54 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 1/2] xen/cppcheck: sort alphabetically cppcheck report
 entries
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, <xen-devel@lists.xenproject.org>
CC: <bertrand.marquis@arm.com>, <wei.chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230130110132.2774782-1-luca.fancellu@arm.com>
 <20230130110132.2774782-2-luca.fancellu@arm.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230130110132.2774782-2-luca.fancellu@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Luca,

On 30/01/2023 12:01, Luca Fancellu wrote:
> 
> 
> Sort cppcheck report entries alphabetically when producing the text
> report; this will make it easier to compare different reports and
> will group together findings from the same file.
> 
> The sort operation is performed with two criteria: the first is
> sorting by MISRA rule, the second is sorting by file.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes in v2:
>  - Sort with two criteria, first misra rule, second filename
>    (Michal, Jan)
> ---
> ---
>  xen/scripts/xen_analysis/cppcheck_report_utils.py | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> index 02440aefdfec..0b6cc72b9ac1 100644
> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> @@ -104,6 +104,13 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>                  for path in strip_paths:
>                      text_report_content[i] = text_report_content[i].replace(
>                                                                  path + "/", "")
> +                    # Split by : separator
> +                    text_report_content[i] = text_report_content[i].split(":")
This is where the for loop body ends, so it should be separated from the rest by an empty line.

> +            # sort alphabetically for second field (misra rule) and as second
The second field is not necessarily a "misra rule"; it is just an error id (e.g. unreadVariable).
However, this is just a Python script and we use cppcheck mostly for MISRA, so I do not object.
 
> +            # criteria for the first field (file name)
> +            text_report_content.sort(key = lambda x: (x[1], x[0]))
> +            # merge back with : separator
> +            text_report_content = [":".join(x) for x in text_report_content]
>              # Write the final text report
>              outfile.writelines(text_report_content)
>      except OSError as e:
> --
> 2.25.1
> 

With the first remark fixed (e.g. on commit),
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
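
For reference, the two-criteria sort discussed above can be sketched on a
toy report. The entry format used here ("file:id:message") is a simplified
assumption, not cppcheck's exact output template:

```python
# Hypothetical, simplified cppcheck text-report lines.
report = [
    "xen/b.c:misra-c2012-8.4:external linkage declaration\n",
    "xen/a.c:unreadVariable:variable is never used\n",
    "xen/a.c:misra-c2012-8.4:external linkage declaration\n",
]

# Split on ':' so individual fields can be compared, sort by the
# second field (error id) and then by the first (file name), and
# join the fields back, mirroring the quoted hunk.
rows = [line.split(":") for line in report]
rows.sort(key=lambda x: (x[1], x[0]))
report = [":".join(x) for x in rows]

# Entries for the same id are now grouped, files in order within a group.
print(report[0])  # xen/a.c:misra-c2012-8.4:external linkage declaration
```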


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:23:34 2023
Message-ID: <5069f8b672b107e83c151ce689f0586a89abd864.camel@gmail.com>
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of
 macros from <asm/bug.h>
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Mon, 30 Jan 2023 13:23:19 +0200
In-Reply-To: <0af565ba-646f-1540-0b0c-6a14e73ab5fc@suse.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
	 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
	 <0af565ba-646f-1540-0b0c-6a14e73ab5fc@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-27 at 15:34 +0100, Jan Beulich wrote:
> On 27.01.2023 14:59, Oleksii Kurochko wrote:
> > The patch introduces macros: BUG(), WARN(), run_in_exception(),
> > assert_failed.
> > 
> > The implementation uses "ebreak" instruction in combination with
> > different bug frame tables (for each type) which contain useful
> > information.
> >=20
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes:
> >   - Remove __ in define namings
> >   - Update run_in_exception_handler() with
> >     register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
> >   - Remove bug_instr_t type and change its usage to uint32_t
> 
> But that's not correct - as said before, you can't assume you can
> access 32 bits; there may be only a 16-bit insn at the end of a page,
> with nothing mapped to the VA of the subsequent page. Even more ...
> 
Agreed, it will be an issue if a 16-bit insn is at the end of a page.
The code is based on Linux
(https://elixir.bootlin.com/linux/latest/source/arch/riscv/kernel/traps.c#L152)
and it seems they might have the same issue.
> > + end:
> > +    regs->sepc += GET_INSN_LENGTH(*(uint32_t *)pc);
> 
> ... to obtain the insn length you don't even need to read 32 bits.
> 
It looks like you are right, so I'll change that in the new version of
the patch series.

> Jan
Oleksii
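
As an aside, the length determination Jan refers to can be sketched as
follows. This is only an illustrative Python model of the idea behind
GET_INSN_LENGTH (Xen's actual macro is C), assuming only 16- and 32-bit
encodings: the RISC-V ISA reserves the low two bits 0b11 for
non-compressed instructions, so the first halfword alone is enough.

```python
def insn_length(halfword):
    """Length in bytes of a RISC-V instruction, looking only at the
    first 16 bits: encodings whose low two bits are 0b11 are 32-bit
    (or longer, not modelled here); anything else is a 16-bit
    compressed instruction."""
    return 4 if (halfword & 0x3) == 0x3 else 2

print(insn_length(0x0001))  # c.nop (compressed) -> 2
print(insn_length(0x0073))  # low halfword of ecall -> 4
```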



Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT055.mail.protection.outlook.com (100.127.141.28) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.22 via Frontend Transport; Mon, 30 Jan 2023 11:26:33 +0000
Received: ("Tessian outbound 0d7b2ab0f13d:v132");
 Mon, 30 Jan 2023 11:26:32 +0000
Received: from a6c24b45df92.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0BACEF4A-3B1E-4AF9-9F6B-F7D159E5E0D9.1; 
 Mon, 30 Jan 2023 11:26:23 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a6c24b45df92.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 30 Jan 2023 11:26:23 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAVPR08MB8967.eurprd08.prod.outlook.com (2603:10a6:102:326::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 11:26:18 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::b14f:1c13:afa:4eda%3]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 11:26:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4ef521e-a090-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wr28CWpgDAy8PG03ixLv5XhCoOG/I/wArDIjzVqZcSw=;
 b=w5l8wfsceXLxfhot8BncafL8sLgiJmk5bKfT+UvwbeSrAkCFmWZPUk5gY5i6hwBUoH26JfwGZs2lufaYEgA62ZFjdYO7HzW7Wk1NnuD1RPHN/UscqJziw3ikC7CZk3Jfu2yEYVePp6i74IFUeMe/DDVED/21W4HPubDIkjj9+bY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9541616c216b6f61
X-CR-MTA-TID: 64aa7808
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/2] xen/cppcheck: sort alphabetically cppcheck report
 entries
Thread-Topic: [PATCH v2 1/2] xen/cppcheck: sort alphabetically cppcheck report
 entries
Thread-Index: AQHZNJpUJ5ClhO3EhEi0PsL49bNu7q620S8AgAAA5oA=
Date: Mon, 30 Jan 2023 11:26:18 +0000
Message-ID: <2339E25F-97A1-468D-ADB4-0DB6A8BCB230@arm.com>
References: <20230130110132.2774782-1-luca.fancellu@arm.com>
 <20230130110132.2774782-2-luca.fancellu@arm.com>
 <b24bd369-680f-45ac-2ce6-d8cad582eecc@amd.com>
In-Reply-To: <b24bd369-680f-45ac-2ce6-d8cad582eecc@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.300.101.1.3)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAVPR08MB8967:EE_|AM7EUR03FT055:EE_|AS8PR08MB9526:EE_
X-MS-Office365-Filtering-Correlation-Id: 33e8de92-3b05-4d51-2824-08db02b4d745
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <D56A48CC4AA3594BACEAC5632E6D77B1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB8967
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	431f40ce-edbe-45d9-b2e3-08db02b4ceb3
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 11:26:33.1434
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 33e8de92-3b05-4d51-2824-08db02b4d745
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9526


> On 30 Jan 2023, at 11:22, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> Hi Luca,
> 
> On 30/01/2023 12:01, Luca Fancellu wrote:
>> 
>> 
>> Sort alphabetically cppcheck report entries when producing the text
>> report, this will help comparing different reports and will group
>> together findings from the same file.
>> 
>> The sort operation is performed with two criteria, the first one is
>> sorting by misra rule, the second one is sorting by file.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> Changes in v2:
>> - Sort with two criteria, first misra rule, second filename
>>   (Michal, Jan)
>> ---
>> ---
>> xen/scripts/xen_analysis/cppcheck_report_utils.py | 7 +++++++
>> 1 file changed, 7 insertions(+)
>> 
>> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> index 02440aefdfec..0b6cc72b9ac1 100644
>> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> @@ -104,6 +104,13 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>>                 for path in strip_paths:
>>                     text_report_content[i] = text_report_content[i].replace(
>>                                                 path + "/", "")
>> +                    # Split by : separator
>> +                    text_report_content[i] = text_report_content[i].split(":")
> This is where the for loop body ends so it should be separated from the rest by an empty line.
> 
>> +            # sort alphabetically for second field (misra rule) and as second
> The second field is not necessary a "misra rule". It is just an error id (e.g. unreadVariable).
> However this is just a python script and we use cppcheck mostly for MISRA so I do not object.

Yes you are right, if the committer is willing to change it on commit I'll appreciate, otherwise I can
fix it and respin the patch.

> 
>> +            # criteria for the first field (file name)
>> +            text_report_content.sort(key = lambda x: (x[1], x[0]))
>> +            # merge back with : separator
>> +            text_report_content = [":".join(x) for x in text_report_content]
>>             # Write the final text report
>>             outfile.writelines(text_report_content)
>>     except OSError as e:
>> --
>> 2.25.1
>> 
> 
> With the first remark fixed (e.g. on commit),
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Thank you for your review

> 
> ~Michal
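The two-criteria sort under review is easy to demonstrate in isolation. A minimal sketch with invented report lines (not the actual Xen script, whose surrounding file I/O is omitted): splitting each "file:id:rest" line and sorting on (second field, first field) groups findings by checker id first, then by file name:

```python
# Invented cppcheck-style report lines of the form "file:id:rest".
report = [
    "xen/arch/riscv/traps.c:misra-c2012-8.4:warning",
    "xen/arch/arm/traps.c:misra-c2012-8.4:warning",
    "xen/arch/arm/mm.c:misra-c2012-2.1:style",
]
fields = [line.split(":") for line in report]
# First criterion: second field (checker id); second criterion: file name.
fields.sort(key=lambda x: (x[1], x[0]))
merged = [":".join(x) for x in fields]
# merged is now ordered: the 2.1 finding first, then the two 8.4
# findings sorted by file, so reports diff cleanly between runs.
```

Python's tuple comparison makes the two-key sort a one-liner, which is why the patch needs only the split/sort/join steps around the existing path stripping.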


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:35:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:35:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486922.754347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSRR-0008Dy-Q8; Mon, 30 Jan 2023 11:35:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486922.754347; Mon, 30 Jan 2023 11:35:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSRR-0008Dr-NQ; Mon, 30 Jan 2023 11:35:17 +0000
Received: by outflank-mailman (input) for mailman id 486922;
 Mon, 30 Jan 2023 11:35:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qdfF=53=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMSRQ-0008C3-DN
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:35:16 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29f6158c-a092-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 12:35:15 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id qw12so14915479ejc.2
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 03:35:14 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 r15-20020a1709067fcf00b0087fa83790d8sm4427334ejs.13.2023.01.30.03.35.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 30 Jan 2023 03:35:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29f6158c-a092-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=04m/QlYD5IP5yaIiFAfiA9arqesecfjmAPr2w53O6Rk=;
        b=eX4a3VUUwe9jt4YmxjSXpvkax5KfJYj5y8JB5a4eSLdZH9pfzo8yY8PG2GVBebq65O
         dEjCUzG94LC4+neBZsKF7/Wxjb0h3Z6o6uht66ErNyavC7NuTmUIYOZBBpshf/6IYqIX
         4uAq9jDMkm6lCvEPNGKgGsXQ0QxrL3uiiQLutWweO3+eP0tZvF2Ozt1Sm3WmVRtkscJi
         9H40HGcugyRG6Vd+WN6/zqbtRKi4PMenEQjiki0VVvhdrN5NAiHz6v674mren8caiSzs
         bT7sH8a//aykK5yLRJ2Bu0iT8KKN7IkKgxP4QK60pJQRslIzQVWKpZI9kJ9jo0Pmjgjb
         9UtA==
X-Gm-Message-State: AO0yUKUOVKWNX00LEyHSjggy2yk3UHE8DpS4xNTN0MN83qRh/gHdpGLz
	RHKK/+LKxZ2JwMkPlgPosbI=
X-Google-Smtp-Source: AK7set+mgM167L0+D51sv4So75bRjVBICY3zbXS7d9H0qk/9DDSKJ8Lx9iTMJltEZqG00/G0xiu5RA==
X-Received: by 2002:a17:907:7856:b0:878:4cdb:cb42 with SMTP id lb22-20020a170907785600b008784cdbcb42mr15673697ejc.60.1675078514510;
        Mon, 30 Jan 2023 03:35:14 -0800 (PST)
Message-ID: <2c4d490bde4f04f956e481257ddc7129c312abb6.camel@gmail.com>
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of
 macros from <asm/bug.h>
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>
Date: Mon, 30 Jan 2023 13:35:13 +0200
In-Reply-To: <73553bcf-9688-c187-a9cb-c12806484893@xen.org>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
	 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
	 <73553bcf-9688-c187-a9cb-c12806484893@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

Hi Julien,
On Fri, 2023-01-27 at 16:02 +0000, Julien Grall wrote:
> Hi Oleksii,
>
> On 27/01/2023 13:59, Oleksii Kurochko wrote:
> > The patch introduces macros: BUG(), WARN(), run_in_exception(),
> > assert_failed.
> >
> > The implementation uses the "ebreak" instruction in combination with
> > different bug frame tables (one for each type) which contain useful
> > information.
> >
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes:
> >   - Remove __ in define namings
> >   - Update run_in_exception_handler() with
> >     register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
> >   - Remove bug_instr_t type and change its usage to uint32_t
> > ---
> >  xen/arch/riscv/include/asm/bug.h | 118 ++++++++++++++++++++++++++++
> >  xen/arch/riscv/traps.c           | 128 +++++++++++++++++++++++++++++++
> >  xen/arch/riscv/xen.lds.S         |  10 +++
> >  3 files changed, 256 insertions(+)
> >  create mode 100644 xen/arch/riscv/include/asm/bug.h
> >
> > diff --git a/xen/arch/riscv/include/asm/bug.h
> > b/xen/arch/riscv/include/asm/bug.h
> > new file mode 100644
> > index 0000000000..4b15d8eba6
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/bug.h
> > @@ -0,0 +1,118 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright (C) 2012 Regents of the University of California
> > + * Copyright (C) 2021-2023 Vates
>
> I have to question the two copyrights here given that the majority of
> the code seems to be taken from the arm implementation (see
> arch/arm/include/asm/bug.h).
>
> With that said, we should consolidate the code rather than duplicating
> it on every architecture.
>
The copyrights should be removed. They were taken from the previous
implementation of bug.h for RISC-V, so I just forgot to remove them.

It looks like ARM and RISC-V should share a common bug.h, but I am not
sure of the best way to do that. Probably the code wants to be moved to
xen/include/bug.h, guarded by an ifdef for ARM && RISCV ...
But I am still not sure this is the best option, as we have a different
implementation for x86_64 at least.
> Cheers,
>
~ Oleksii



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:37:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486926.754357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSTD-0000UJ-5N; Mon, 30 Jan 2023 11:37:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486926.754357; Mon, 30 Jan 2023 11:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSTD-0000UC-2E; Mon, 30 Jan 2023 11:37:07 +0000
Received: by outflank-mailman (input) for mailman id 486926;
 Mon, 30 Jan 2023 11:37:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qdfF=53=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMSTB-0000TW-Pr
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:37:05 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6afa78dc-a092-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 12:37:04 +0100 (CET)
Received: by mail-ej1-x631.google.com with SMTP id m2so30362189ejb.8
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 03:37:04 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 lf16-20020a170907175000b008787134a939sm6777950ejc.18.2023.01.30.03.37.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 30 Jan 2023 03:37:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6afa78dc-a092-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=U4WUK8j1sqBukKnilpM/eMgOIFdvBjYmoxUwpd9tbWA=;
        b=d/5nANWUnEkRDuontsQ9lZExXIfaMDcs6xNlRlc8oRJWb6KuLu8+MOTj2jxnvvcOr8
         lkTM5NcyYR93mA7qXTAcBSbGh0dn27FQuV84v4B1Kd3TWiXLDy6qGMnQWXVxPbZDTGKS
         r2IwnaPPAF7+bpYl63AZor21CK7g/3wGRklq7XzJPlT+1UyDgxzdn/DxmPeMW/5XtSBU
         fAlAiz1rkYNddaIa8FsF7sfbnAqxJmRrUaUwct5bGoniAy7ACF4XsTx+YKqRLuYnpsV3
         N5YdmUnLx7ZLWJeCXL4RbgfWJNBqPCRveH3iJR4OX3UFIp+tFL/JSVp20GXWB8oDqynV
         WoAg==
X-Gm-Message-State: AFqh2kqdC3AlgrFKDFyBuUDhp/soq4dstqVmzq+e6U0WPKEBMIwtkRpt
	6ny7RNaHQHsRwol/RwXMqDU=
X-Google-Smtp-Source: AMrXdXvYji0hm7dgAtbOj9PKxCjt2tifZXGu92y62Qad0Ny/pmzkrR70ehXVA5E4UF4dkF+wkAlRHA==
X-Received: by 2002:a17:907:8c14:b0:84c:e9c4:5751 with SMTP id ta20-20020a1709078c1400b0084ce9c45751mr54156387ejc.74.1675078623652;
        Mon, 30 Jan 2023 03:37:03 -0800 (PST)
Message-ID: <7f9c94502d78d2c226d60b92a8f949407cfb1062.camel@gmail.com>
Subject: Re: [PATCH v2 04/14] xen/riscv: add <asm/csr.h> header
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>
Date: Mon, 30 Jan 2023 13:37:02 +0200
In-Reply-To: <7dbda8fa-d4f3-5101-2e8f-96b4b2ff790e@suse.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
	 <b26d981f189adad8af4560fcc10360da02df97a9.1674818705.git.oleksii.kurochko@gmail.com>
	 <7dbda8fa-d4f3-5101-2e8f-96b4b2ff790e@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-27 at 15:10 +0100, Jan Beulich wrote:
> On 27.01.2023 14:59, Oleksii Kurochko wrote:
> > The following changes were made in comparison with <asm/csr.h> from
> > Linux:
> >   * remove all defines as they are defined in riscv_encoding.h
> >   * leave only csr_* macros
> >
> > Origin: https://github.com/torvalds/linux.git 2475bf0250de
>
> I'm sorry to be picky, but I think such references should be to the
> canonical tree, which here aiui is the one at git.kernel.org.
>
Thanks for the clarification. I will change it then.

> Jan
~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:40:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:40:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486933.754367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSWA-0001xh-O3; Mon, 30 Jan 2023 11:40:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486933.754367; Mon, 30 Jan 2023 11:40:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSWA-0001xa-K1; Mon, 30 Jan 2023 11:40:10 +0000
Received: by outflank-mailman (input) for mailman id 486933;
 Mon, 30 Jan 2023 11:40:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qdfF=53=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMSW9-0001xP-Py
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:40:09 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d87cda99-a092-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 12:40:07 +0100 (CET)
Received: by mail-ej1-x62d.google.com with SMTP id mc11so8374488ejb.10
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 03:40:07 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 r15-20020a1709067fcf00b0087fa83790d8sm4433121ejs.13.2023.01.30.03.40.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 30 Jan 2023 03:40:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d87cda99-a092-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=BFVX98D8tqCmSDNnjJJ2ONKYAyOt2WgCH9QPXSnXVJs=;
        b=PybxUaskY0DhOArONfwG32AmbvEBOEb8D5/cHUHEFkpkcc0I83xb/7TE8uObr71hPg
         wemD7+zSYI2gpDh1oYHqG2cYMZrUQ7iAa9LvgWWzWOYdGR5XitNZBY6rFrm3VxQi2nk0
         vmOZL+hX5haJ5BGZwt5r0xzFnteEOfxjSKy0aO14QEROqPFAWBS+HYpOZrhLnu4VX5uZ
         o/vRffpk3cnVoBzsNcRcmV1m5qe1bHo6ZVY655hFkZN7A9v0mqm4mblza9PB+SZncORH
         SYWM8DyVJ8iXA2idYISWroWFD449aPCSYbNQUApjKX1RBBoddAMrvEk2XxXTOpcTH12L
         /SIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=BFVX98D8tqCmSDNnjJJ2ONKYAyOt2WgCH9QPXSnXVJs=;
        b=Aiv+0ZPKhUvcfXaCIUQ6rl+uAMKfDV/sVvUTfeCIHbitqziOzviFF0IR+BhasUtZWl
         oaKxP+QMy4ZviL1MWziqI72Lo/2MXzb7WbbE0Ot16iIo1RBmT1MFf1XncvuJQGcrz9UU
         rps2PONTEGiH0aVpQt9BE+Atp+NME3A/2f4VquTqGfGb1UMMow+V+99GNPfWhbNhK4mB
         1F1HbponDF2UKVrayjGpifchvbVj2TLBNDvaFU+avW22xT7tOUGt4cTuWetk+iJ+iqcI
         JNoMcNIMulqAptdAY/TDPjs56YrfK2Rxici7hBd7olhLGeynTk8M1QIvdr1QfZ5gAdYl
         kh+g==
X-Gm-Message-State: AO0yUKXBitX5/2LTx+FRfNCCmh3M2vL5hzGjdkLFpx21RH4smhdbXeml
	FWDJ7FJgAoVGokX2RVHBFSqKraBPse0=
X-Google-Smtp-Source: AK7set+qPXXnIknIcXX8uIFGMP76+3Vb1o3oautunRSYuFxqXR4DY4uS+BTzqtDYuv8046xTkE9vdw==
X-Received: by 2002:a17:907:7618:b0:888:5d34:dc79 with SMTP id jx24-20020a170907761800b008885d34dc79mr3723428ejc.40.1675078807412;
        Mon, 30 Jan 2023 03:40:07 -0800 (PST)
Message-ID: <27469e861d4777af42b84fb637b24ed47a187647.camel@gmail.com>
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, Bobby Eshleman
 <bobby.eshleman@gmail.com>
Date: Mon, 30 Jan 2023 13:40:05 +0200
In-Reply-To: <a8219b2d-a22d-63ac-5088-c33610310d6e@xen.org>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
	 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
	 <a8219b2d-a22d-63ac-5088-c33610310d6e@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37)
MIME-Version: 1.0

Hi Julien,

On Fri, 2023-01-27 at 14:54 +0000, Julien Grall wrote:
> Hi Oleksii,
> 
> On 27/01/2023 13:59, Oleksii Kurochko wrote:
> > +static inline void wfi(void)
> > +{
> > +    __asm__ __volatile__ ("wfi");
> 
> I have starred at this line for a while and I am not quite too sure
> to understand why we don't need to clobber the memory like we do on
> Arm.
> 
I don't have an answer; the code was based on Linux, so...
> FWIW, Linux is doing the same, so I guess this is correct. For Arm we
> also follow the Linux implementation.
> 
> But I am wondering whether we are just too strict on Arm, RISCv
> compiler offer a different guarantee, or you expect the user to be
> responsible to prevent the compiler to do harmful optimization.
> 
> > +/*
> > + * panic() isn't available at the moment so an infinite loop will
> > be
> > + * used temporarily.
> > + * TODO: change it to panic()
> > + */
> > +static inline void die(void)
> > +{
> > +    for( ;; ) wfi();
> 
> Please move wfi() to a newline.
Thanks.

I thought it was fine to put it on one line in this case, but I'll
move it to a new line.
> 
> > +}
> 
~Oleksii



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:49:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:49:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486937.754377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSey-0002p2-JW; Mon, 30 Jan 2023 11:49:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486937.754377; Mon, 30 Jan 2023 11:49:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSey-0002ov-Fh; Mon, 30 Jan 2023 11:49:16 +0000
Received: by outflank-mailman (input) for mailman id 486937;
 Mon, 30 Jan 2023 11:49:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZQwi=53=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pMSex-0002op-Nm
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:49:15 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1d94d574-a094-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 12:49:13 +0100 (CET)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E36E21FE4C;
 Mon, 30 Jan 2023 11:49:11 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 97E5513444;
 Mon, 30 Jan 2023 11:49:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Tuy8I7eu12MsLgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 30 Jan 2023 11:49:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d94d574-a094-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1675079351; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=9prjl4m2Rk4B/GytnuBQOODd9ZaELhm9GsrZsLQeA9o=;
	b=V9ceniBLf0wNyjL1WvYpGMR2kytNJULJiF54/AxX8WWBsliZNe91/s/DbTngX61twPdz/M
	fnQsNhdUhWYumdog3kUQXNxvH9bGMx+zUmkeH2ifHmO+nFp3oDTpGYJuo7hs8YoRzhHo3M
	e+3+/0rAeMkkXe/bBV9f1/VKrJ5z2hc=
Message-ID: <e1943855-729e-8bb2-4af2-138be84c0576@suse.com>
Date: Mon, 30 Jan 2023 12:49:11 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
 <73553bcf-9688-c187-a9cb-c12806484893@xen.org>
 <2c4d490bde4f04f956e481257ddc7129c312abb6.camel@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <2c4d490bde4f04f956e481257ddc7129c312abb6.camel@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------dF3i0ecysf9AJQWulpeQ63FA"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------dF3i0ecysf9AJQWulpeQ63FA
Content-Type: multipart/mixed; boundary="------------Fh3fGTKT3p6uwLn90hBgyw5R";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Oleksii <oleksii.kurochko@gmail.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
Message-ID: <e1943855-729e-8bb2-4af2-138be84c0576@suse.com>
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
 <73553bcf-9688-c187-a9cb-c12806484893@xen.org>
 <2c4d490bde4f04f956e481257ddc7129c312abb6.camel@gmail.com>
In-Reply-To: <2c4d490bde4f04f956e481257ddc7129c312abb6.camel@gmail.com>

--------------Fh3fGTKT3p6uwLn90hBgyw5R
Content-Type: multipart/mixed; boundary="------------uaeY0p00u1ACrQlFUO184YnO"

--------------uaeY0p00u1ACrQlFUO184YnO
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 30.01.23 12:35, Oleksii wrote:
> Hi Julien,
> On Fri, 2023-01-27 at 16:02 +0000, Julien Grall wrote:
>> Hi Oleksii,
>>
>> On 27/01/2023 13:59, Oleksii Kurochko wrote:
>>> The patch introduces macros: BUG(), WARN(), run_in_exception(),
>>> assert_failed.
>>>
>>> The implementation uses "ebreak" instruction in combination with
>>> diffrent bug frame tables (for each type) which contains useful
>>> information.
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>> Changes:
>>>     - Remove __ in define namings
>>>     - Update run_in_exception_handler() with
>>>       register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
>>>     - Remove bug_instr_t type and change it's usage to uint32_t
>>> ---
>>>   xen/arch/riscv/include/asm/bug.h | 118
>>> ++++++++++++++++++++++++++++
>>>   xen/arch/riscv/traps.c           | 128
>>> +++++++++++++++++++++++++++++++
>>>   xen/arch/riscv/xen.lds.S         |  10 +++
>>>   3 files changed, 256 insertions(+)
>>>   create mode 100644 xen/arch/riscv/include/asm/bug.h
>>>
>>> diff --git a/xen/arch/riscv/include/asm/bug.h
>>> b/xen/arch/riscv/include/asm/bug.h
>>> new file mode 100644
>>> index 0000000000..4b15d8eba6
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/bug.h
>>> @@ -0,0 +1,118 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (C) 2012 Regents of the University of California
>>> + * Copyright (C) 2021-2023 Vates
>>
>> I have to question the two copyrights here given that the majority of
>> the code seems to be taken from the arm implementation (see
>> arch/arm/include/asm/bug.h).
>>
>> With that said, we should consolidate the code rather than duplicating
>> it on every architecture.
>>
> Copyrights should be removed. They were taken from the previous
> implementation of bug.h for RISC-V so I just forgot to remove them.
> 
> It looks like we should have common bug.h for ARM and RISCV but I am
> not sure that I know how to do that better.
> Probably the code wants to be moved to xen/include/bug.h and using
> ifdef ARM && RISCV ...
> But still I am not sure that this is the best one option as at least we
> have different implementation for x86_64.

There are already a lot of duplicated #defines in the Arm and x86 asm/bug.h
files.

I'd create xen/include/xen/bug.h including asm/bug.h first and then adding
all the common stuff.

In case 2 archs are sharing some #define FOO you could #define FOO in the
asm/bug.h for the arch not using the common definition and do #ifndef FOO
in xen/include/xen/bug.h around the variant shared by the other archs.


Juergen
--------------uaeY0p00u1ACrQlFUO184YnO
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------uaeY0p00u1ACrQlFUO184YnO--

--------------Fh3fGTKT3p6uwLn90hBgyw5R--

--------------dF3i0ecysf9AJQWulpeQ63FA
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmPXrrcFAwAAAAAACgkQsN6d1ii/Ey/F
rwf/W8rz5wC0ViUsGPe7ppRiONsesYMT87n/Je70W7N9rNImTqj1uZ1JLz2KiSm/iOotNJ7uN++Z
ZZnYOYERa+552som3MARe8vFPRuRBRbtsmsVXZiFx9b4gFZBTWCdJPdajP2Kxo+59PMK6jE0de59
I056LEJX7oTTvirnhOXnwUHQ8X+RyGzzKkTjrAve9GO57k2Q347OGcPJ/Xq0vl4MfyTHf8X6hg3l
NLe6Cf9tY0UkJBXTJXrPHxPDUWikkHppVg9xz5kxzhFfUmGRMVRLzGJQSPBzY3dVfxLT0BPaMz0I
ZDiBNdS1ZylQ1JpzXNEN71NqDCU9WdqovyO2cH32Rw==
=lFZI
-----END PGP SIGNATURE-----

--------------dF3i0ecysf9AJQWulpeQ63FA--


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:54:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:54:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486941.754386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSjv-0004E4-4s; Mon, 30 Jan 2023 11:54:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486941.754386; Mon, 30 Jan 2023 11:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSjv-0004Dx-27; Mon, 30 Jan 2023 11:54:23 +0000
Received: by outflank-mailman (input) for mailman id 486941;
 Mon, 30 Jan 2023 11:54:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qdfF=53=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMSjt-0004Dr-VM
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:54:22 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d4fd9997-a094-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 12:54:20 +0100 (CET)
Received: by mail-ed1-x52d.google.com with SMTP id v13so10659124eda.11
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 03:54:20 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 eo14-20020a056402530e00b0049e08f781e3sm6691931edb.3.2023.01.30.03.54.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 30 Jan 2023 03:54:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4fd9997-a094-11ed-9ec0-891035b88211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=qSDmlF2OoJ5Q3AsZwDCdJTqMVugx7W+X3PB8E4z01XM=;
        b=QnfodsB2k2oSsLqV28vxcjdzPLACAXihEd1IaRzqLhf6Fo/RUeEg5At9AXhf7dTwCJ
         MSgzH32qBwGW7PpWqYWr8tbriRGkPuR6PBeOrqdYXDad32oiv0wfBlbTITtzol5Goa7e
         LDKGFXLAlePIEjqql91fdNbmwH1C+ZUx2RJccPJafXTrNuiQKEECjPUV8Vgk9yX1u2fU
         9Hvti6SHHGmQGFZYix4cVZEPToda/wyXLpbxwsBNECHQUhBf6njgTSNF81TSK0SIhIqI
         M9NtnKwUPvKNDzqOJVrnQMFtzjgbqTWbwsgpL7VddMAzCeDaik4CVSUApLbMhQPexpf8
         jhOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=qSDmlF2OoJ5Q3AsZwDCdJTqMVugx7W+X3PB8E4z01XM=;
        b=GGLKNtORhT5VqnvArZa0DGRfM188aB6DisXY0TNQ3YdQX5EXJoXoTqWM2l/ssiW9uc
         yuVwM8dp0zWKvXTcClVvoQELgwmereRtIDFEHKdqvqiwtIZvfB47bi8MySEp8Y0eKFu8
         PW1jeoQuRTcRWUCetu/va759Hhi0REpSaw4AyS3VfUKClHAY5A4xhM5f+fwZzKUaW/O1
         46j0eiCcZ90q2L1fs6HT1NUNYJQbnRI0nl/2YclXQ1xvWzjcNKGMUF9lkEQMkl6KkuKw
         kkJ4uhsqW85kxAqb2UGzholQt3bPp2h4KH3DTLZ5H4dzf9xalIs5UEUxLoqeAChJqzL2
         /PfQ==
X-Gm-Message-State: AFqh2koP0xnnL+c+0+bPYU1g85c4EBmlpfqdlR0RgzcwG5gX6hdF8RzB
	ClEKmXGznmq3jGHw0ZQ41nFWqf7SmwE=
X-Google-Smtp-Source: AMrXdXtxAle95+0/a/wgHd7z1KT/zqWqRIN401nvw7Y8v1Q0u7kILqm9Rb10v8FZX3sbjmtz2JJ3Aw==
X-Received: by 2002:a05:6402:4026:b0:49e:4786:a0e2 with SMTP id d38-20020a056402402600b0049e4786a0e2mr52987540eda.14.1675079660331;
        Mon, 30 Jan 2023 03:54:20 -0800 (PST)
Message-ID: <792bc4533d3604acd4321b4b15965adec22431a4.camel@gmail.com>
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, Bobby Eshleman
 <bobby.eshleman@gmail.com>,  xen-devel@lists.xenproject.org
Date: Mon, 30 Jan 2023 13:54:18 +0200
In-Reply-To: <75328420-0fbd-92ae-40c7-9fee1c31c907@suse.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
	 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
	 <75328420-0fbd-92ae-40c7-9fee1c31c907@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37)
MIME-Version: 1.0

Hi Jan,

On Fri, 2023-01-27 at 15:24 +0100, Jan Beulich wrote:
> On 27.01.2023 14:59, Oleksii Kurochko wrote:
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/processor.h
> > @@ -0,0 +1,82 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*****************************************************************
> > *************
> > + *
> > + * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
> > + * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@gmail.com>
> > + * Copyright 2023 (C) Vates
> > + *
> > + */
> > +
> > +#ifndef _ASM_RISCV_PROCESSOR_H
> > +#define _ASM_RISCV_PROCESSOR_H
> > +
> > +#ifndef __ASSEMBLY__
> > +
> > +/* On stack VCPU state */
> > +struct cpu_user_regs
> > +{
> > +    unsigned long zero;
> > +    unsigned long ra;
> > +    unsigned long sp;
> > +    unsigned long gp;
> > +    unsigned long tp;
> > +    unsigned long t0;
> > +    unsigned long t1;
> > +    unsigned long t2;
> > +    unsigned long s0;
> > +    unsigned long s1;
> > +    unsigned long a0;
> > +    unsigned long a1;
> > +    unsigned long a2;
> > +    unsigned long a3;
> > +    unsigned long a4;
> > +    unsigned long a5;
> > +    unsigned long a6;
> > +    unsigned long a7;
> > +    unsigned long s2;
> > +    unsigned long s3;
> > +    unsigned long s4;
> > +    unsigned long s5;
> > +    unsigned long s6;
> > +    unsigned long s7;
> > +    unsigned long s8;
> > +    unsigned long s9;
> > +    unsigned long s10;
> > +    unsigned long s11;
> > +    unsigned long t3;
> > +    unsigned long t4;
> > +    unsigned long t5;
> > +    unsigned long t6;
> > +    unsigned long sepc;
> > +    unsigned long sstatus;
> > +    /* pointer to previous stack_cpu_regs */
> > +    unsigned long pregs;
> > +};
> 
> Just to restate what I said on the earlier version: We have a struct
> of
> this name in the public interface for x86. Besides to confusion about
> re-using the name for something private, I'd still like to understand
> what the public interface plans are. This is specifically because I
> think it would be better to re-use suitable public interface structs
> internally where possible. But that of course requires spelling out
> such parts of the public interface first.
> 
I'm not sure I follow you here.
I grepped a bit and found that each architecture declares this
structure inside its arch-specific folder.

cpu_user_regs is mostly used to save/restore the current CPU state
during traps (exceptions/interrupts) and in context_switch(). Some
registers are also modified during construction of a domain.
Therefore I prefer to see the arch-specific register names here
instead of common ones.

> Jan
~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 11:55:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 11:55:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486945.754397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSkz-0004vJ-Dy; Mon, 30 Jan 2023 11:55:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486945.754397; Mon, 30 Jan 2023 11:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSkz-0004vC-B4; Mon, 30 Jan 2023 11:55:29 +0000
Received: by outflank-mailman (input) for mailman id 486945;
 Mon, 30 Jan 2023 11:55:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YhKs=53=citrix.com=prvs=387b7cf06=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1pMSky-0004uS-3i
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 11:55:28 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fab1cf55-a094-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 12:55:25 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fab1cf55-a094-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675079725;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=fCUwTbUb3AGJCOVD9paDwHq4D4mk04YKXr7Q+YvK7fk=;
  b=HU2LGeOHdSktZV+a6hlOtI/JpBW3Cefe0VGXjKhZFlVPlegeF5q/lS4i
   IY8qEa8VtFoPDSDbso7HzM4/qQTi3OourJtjClES9JMN8sPb0TAzBYqnf
   UX90le3JycnUSOJcjc7/IzB51ZgWZGyu/U3M+3tqTYtfQWLMNfBg04LwH
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93678977
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:1+TtD6AudGKu3xVW/z3jw5YqxClBgxIJ4kV8jS/XYbTApGwn1zYDm
 DFJC2iEaPjYNGv2ettwOYm+8x4GsMOAydJlQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtMpvlDs15K6p4GpD5gRnDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw2MYrKkhkx
 /Ykb3NSQTavmOTr2KK0Vbw57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTIIkzhuillz/zYjRDrFO9rqsr+WnDigd21dABNfKEIYLVFZQKwS50o
 ErtuEmnLwBGNuCk4jOl93iwj7HXuzv0Ddd6+LqQqacx3Qz7KnYoIAISfUu2p7++kEHWc8JSL
 QkY9zQjqYA29Ve3VZ/tUhugunmGsxUAHd1KHIUS6guA167V6AaxHXUfQ3hKb9lOiSMtbWV0j
 BnTxYqvXGEx9uTPEhpx64t4sxuQGXQ+BEUwbxNDDlQqxfX8ptwrnh3QG4ML/LGOsvX5HjT5w
 javpSc4hqkOgcNj65hX7WwrkBr3+MGXE1ddChH/Gzv8s1gnPNLNi5mAswCz0BpWEGqOorBtV
 lAgktPW0u0BBIrleMelELRUR+HBCxpo3VThbb9T83sJrW/FF52LJ9o4DNRCyKBBb645lcfBO
 hO7hO+ozMY70IGWRaF2eZmtLM8h0LLtE9/oPtiNMIUTOsEuLV/bpHAxDaJ144wKuBF8+ZzTx
 L/BKZr8ZZrkIfsPIMWKqxc1juZwm3FWKZL7TpHn1RW3uYdyl1bMIYrpxGCmN7hjhIvd+VW9z
 jqqH5fSo/mpeLGkM3a/HE96BQxiEEXX8riv9JIMJrfTeVM2cIzjYteIqY4cl0Vet/w9vo/1E
 ruVACe0FHKXaaX7FDi3
IronPort-HdrOrdr: A9a23:DpRMi6M9k0iBqsBcT0D155DYdb4zR+YMi2TDiHoddfUFSKalfp
 6V98jzjSWE8wr4WBkb6LO90dq7MAnhHP9OkMIs1NKZMDUO11HYS72KgbGC/9SkIVyHygc/79
 YrT0EdMqyXMbESt6+Tj2eF+pQbsaC6GcuT9IXjJgJWPGVXgtZbnmJE42igcnFedU1jP94UBZ
 Cc7s1Iq36LYnIMdPm2AXEDQqzqu8DLvIiOW29LOzcXrC21yR+44r/zFBaVmj0EVSlU/Lsk+W
 /Z1yTk+6SYte2hwBO07R6d030Woqqu9jJwPr3NtiEnEESutu9uXvUiZ1S2hkF1nAho0idurD
 CDmWZlAy050QKtQoj8m2qQ5+Cn6kdi15aq8y7nvZPuzPaJOw4SGo5Pg5lUfQDe7FdltNZg0L
 hT12bcrJZPCwjc9R6NkeQgeisa4nZcm0BS5tI7njhaS88TebVRpYsQ8AdcF4oBBjvz7MQiHP
 N1BM/R6f5KeRfCBkqp9lVH0ZipRDA+Dx2GSk8Ntoic1CVXhmlwyw8dyNYElnkN+ZohQ91P5v
 jCMK5viLZSJ/VmJJ5VFaMEW4+6G2bNSRXDPCabJknmDrgOPzbXp5v+8NwOlZSXkVwzvekPcb
 j6ISBlXDQJCjPT4OW1re12ziw=
X-IronPort-AV: E=Sophos;i="5.97,257,1669093200"; 
   d="scan'208";a="93678977"
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Sergey Dyasli <sergey.dyasli@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v3] x86/ucode/AMD: apply the patch early on every logical thread
Date: Mon, 30 Jan 2023 11:55:03 +0000
Message-ID: <20230130115503.19941-1-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The original issue was reported on AMD Bulldozer-based CPUs, where
ucode loading loses the LWP feature bit in order to gain the IBPB bit.
Disabling LWP is a per-SMT/CMT-core modification and needs to happen on
each sibling thread despite the shared microcode engine. Otherwise,
logical CPUs end up with different cpuid capabilities.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=216211

Guests running under Xen happen to be unaffected because the levelling
logic for the feature masking/override MSRs causes the LWP bit to fall
out, hiding the issue. The latest recommendation from AMD, after
discussing this bug, is to load ucode on every logical CPU.

In the Linux kernel, this issue was addressed by commit e7ad18d1169c
("x86/microcode/AMD: Apply the patch early on every logical thread").
Follow the same approach in Xen.

Introduce a SAME_UCODE match result and use it for early AMD ucode
loading. Take this opportunity to move opt_ucode_allow_same out of
compare_revisions() into the relevant callers, and adjust the warning
message accordingly. Intel's side is modified for consistency but sees
no functional change.

Late loading is still performed only on the first of the SMT/CMT
siblings, and only if a newer ucode revision has been provided (unless
the allow_same option is specified).

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v2 --> v3:
- Moved opt_ucode_allow_same out of compare_revisions() and updated
  the commit message
- Adjusted the warning message

v1 --> v2:
- Expanded the commit message with the levelling section
- Adjusted comment for OLD_UCODE

CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu/microcode/amd.c     | 11 ++++++++---
 xen/arch/x86/cpu/microcode/core.c    | 19 ++++++++++++++-----
 xen/arch/x86/cpu/microcode/intel.c   | 10 +++++++---
 xen/arch/x86/cpu/microcode/private.h |  3 ++-
 4 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index 4b097187a0..a9a5557835 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -176,8 +176,8 @@ static enum microcode_match_result compare_revisions(
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
-    if ( opt_ucode_allow_same && new_rev == old_rev )
-        return NEW_UCODE;
+    if ( new_rev == old_rev )
+        return SAME_UCODE;
 
     return OLD_UCODE;
 }
@@ -220,8 +220,13 @@ static int cf_check apply_microcode(const struct microcode_patch *patch)
     unsigned int cpu = smp_processor_id();
     struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);
     uint32_t rev, old_rev = sig->rev;
+    enum microcode_match_result result = microcode_fits(patch);
 
-    if ( microcode_fits(patch) != NEW_UCODE )
+    /*
+     * Allow application of the same revision to pick up SMT-specific changes
+     * even if the revision of the other SMT thread is already up-to-date.
+     */
+    if ( result != NEW_UCODE && result != SAME_UCODE )
         return -EINVAL;
 
     if ( check_final_patch_levels(sig) )
diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index d14754e222..912ef2c7be 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -612,13 +612,21 @@ static long cf_check microcode_update_helper(void *data)
      * that ucode revision.
      */
     spin_lock(&microcode_mutex);
-    if ( microcode_cache &&
-         alternative_call(ucode_ops.compare_patch,
-                          patch, microcode_cache) != NEW_UCODE )
+    if ( microcode_cache )
     {
+        enum microcode_match_result result;
+
+        result = alternative_call(ucode_ops.compare_patch, patch,
+                                                           microcode_cache);
         spin_unlock(&microcode_mutex);
-        printk(XENLOG_WARNING "microcode: couldn't find any newer revision "
-                              "in the provided blob!\n");
+
+        if ( result == NEW_UCODE ||
+             (opt_ucode_allow_same && result == SAME_UCODE) )
+            goto apply;
+
+        printk(XENLOG_WARNING "microcode: couldn't find any newer%s revision "
+                              "in the provided blob!\n", opt_ucode_allow_same ?
+                                                         " (or the same)" : "");
         microcode_free_patch(patch);
         ret = -ENOENT;
 
@@ -626,6 +634,7 @@ static long cf_check microcode_update_helper(void *data)
     }
     spin_unlock(&microcode_mutex);
 
+apply:
     cpumask_clear(&cpu_callin_map);
     atomic_set(&cpu_out, 0);
     atomic_set(&cpu_updated, 0);
diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index f7fec4b4ed..8d4d6574aa 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -232,8 +232,8 @@ static enum microcode_match_result compare_revisions(
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
-    if ( opt_ucode_allow_same && new_rev == old_rev )
-        return NEW_UCODE;
+    if ( new_rev == old_rev )
+        return SAME_UCODE;
 
     /*
      * Treat pre-production as always applicable - anyone using pre-production
@@ -290,8 +290,12 @@ static int cf_check apply_microcode(const struct microcode_patch *patch)
     unsigned int cpu = smp_processor_id();
     struct cpu_signature *sig = &this_cpu(cpu_sig);
     uint32_t rev, old_rev = sig->rev;
+    enum microcode_match_result result;
+
+    result = microcode_update_match(patch);
 
-    if ( microcode_update_match(patch) != NEW_UCODE )
+    if ( result != NEW_UCODE &&
+         !(opt_ucode_allow_same && result == SAME_UCODE) )
         return -EINVAL;
 
     wbinvd();
diff --git a/xen/arch/x86/cpu/microcode/private.h b/xen/arch/x86/cpu/microcode/private.h
index 73b095d5bf..626aeb4d08 100644
--- a/xen/arch/x86/cpu/microcode/private.h
+++ b/xen/arch/x86/cpu/microcode/private.h
@@ -6,7 +6,8 @@
 extern bool opt_ucode_allow_same;
 
 enum microcode_match_result {
-    OLD_UCODE, /* signature matched, but revision id is older or equal */
+    OLD_UCODE, /* signature matched, but revision id is older */
+    SAME_UCODE, /* signature matched, but revision id is the same */
     NEW_UCODE, /* signature matched, but revision id is newer */
     MIS_UCODE, /* signature mismatched */
 };
-- 
2.17.1
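[Editorial note: to make the revision-matching rules in the patch above easy to check in isolation, here is a standalone C sketch. The enum and compare_revisions() mirror the patch; the early_load_ok()/late_load_ok() helpers are invented names that condense the two apply paths (AMD early loading accepts NEW or SAME unconditionally, while late loading accepts SAME only with opt_ucode_allow_same).]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Match results, mirroring the enum introduced by this patch. */
enum microcode_match_result {
    OLD_UCODE,  /* signature matched, but revision id is older */
    SAME_UCODE, /* signature matched, but revision id is the same */
    NEW_UCODE,  /* signature matched, but revision id is newer */
    MIS_UCODE,  /* signature mismatched */
};

/* Runtime knob: allow re-applying the currently loaded revision. */
static bool opt_ucode_allow_same;

/* After the patch, this no longer consults opt_ucode_allow_same. */
static enum microcode_match_result compare_revisions(
    uint32_t old_rev, uint32_t new_rev)
{
    if ( new_rev > old_rev )
        return NEW_UCODE;
    if ( new_rev == old_rev )
        return SAME_UCODE;
    return OLD_UCODE;
}

/*
 * Early (boot-time) path on every logical CPU: the same revision is
 * accepted to pick up SMT-specific changes on sibling threads.
 */
static bool early_load_ok(enum microcode_match_result r)
{
    return r == NEW_UCODE || r == SAME_UCODE;
}

/* Late path: the same revision is accepted only when explicitly allowed. */
static bool late_load_ok(enum microcode_match_result r)
{
    return r == NEW_UCODE ||
           (opt_ucode_allow_same && r == SAME_UCODE);
}
```

The key asymmetry: compare_revisions() now reports SAME_UCODE unconditionally, and each caller decides whether "same" is good enough for its context.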



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 12:10:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 12:10:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486957.754407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSzL-0007el-T9; Mon, 30 Jan 2023 12:10:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486957.754407; Mon, 30 Jan 2023 12:10:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMSzL-0007ee-P6; Mon, 30 Jan 2023 12:10:19 +0000
Received: by outflank-mailman (input) for mailman id 486957;
 Mon, 30 Jan 2023 12:10:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMSzL-0007eU-6B; Mon, 30 Jan 2023 12:10:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMSzL-00082e-4H; Mon, 30 Jan 2023 12:10:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMSzK-0000Jd-Iw; Mon, 30 Jan 2023 12:10:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMSzK-0001BS-IR; Mon, 30 Jan 2023 12:10:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c2i8hkbuxuUbC+hZcjlAcOGkInAXrgyBEB/TmO59gNs=; b=wTwAUcJ1A5APpti3C2kS4O0uRO
	b8wHXNA7zwuvstLqI2cKOBbZJumX/J/k5izVUx+vHyeOZvGvYIf79kbguriwM0ghPx/TNflNVG+EI
	UKb8mV8eRsADFHjdazPAI1F1wuFN0seXF70bc0kEz37neWmjIKlU+NQQ4nB3U2AgL+3Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176274-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176274: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:build-armhf:syslog-server:running:regression
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-armhf:capture-logs:broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=6d796c50f84ca79f1722bb131799e5a5710c4700
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 12:10:18 +0000

flight 176274 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176274/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 173462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                6d796c50f84ca79f1722bb131799e5a5710c4700
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  114 days
Failing since        173470  2022-10-08 06:21:34 Z  114 days  236 attempts
Testing same since   176274  2023-01-29 23:13:33 Z    0 days    1 attempts

------------------------------------------------------------
3468 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

(No revision log; it would be 533719 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 12:14:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 12:14:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486965.754416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMT3Q-0008H2-DX; Mon, 30 Jan 2023 12:14:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486965.754416; Mon, 30 Jan 2023 12:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMT3Q-0008Gv-Aq; Mon, 30 Jan 2023 12:14:32 +0000
Received: by outflank-mailman (input) for mailman id 486965;
 Mon, 30 Jan 2023 12:14:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMT3P-0008Gp-6B
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 12:14:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMT3O-00086f-Vd; Mon, 30 Jan 2023 12:14:30 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMT3O-0007dt-Nh; Mon, 30 Jan 2023 12:14:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=OvXXU34ce5Mb0tZ0C8e9ecgmuP5zlLcUybBBTrKXRfk=; b=tbqjpoAVtSiXxUnghNatg7xnYD
	Eh0Sxq04sYQZvMbvt6K8n34H/CjDJXTjuo4kJYoqQaOOGE7mvOagqiu51SVPDe66mhETRg9MgH7ui
	b3G/rNkvrjtj36t2FjvJVTFN6fEtZ0h0CfBduV/0C1y8HQjmUyRfA7JV8WphOEq+SZHY=;
Message-ID: <2fc53558-ceb9-af6f-a349-961f0f17c83f@xen.org>
Date: Mon, 30 Jan 2023 12:14:28 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v4 1/3] xen/public: move xenstore related doc into 9pfs.h
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230130100813.3298-1-jgross@suse.com>
 <20230130100813.3298-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230130100813.3298-2-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 30/01/2023 10:08, Juergen Gross wrote:
> The Xenstore related documentation is currently to be found in
> docs/misc/9pfs.pandoc, instead of the related header file
> xen/include/public/io/9pfs.h like for most other paravirtualized
> device protocols.
> 
> There is a comment in the header pointing at the document, but the
> given file name is wrong. Additionally such headers are meant to be
> copied into consuming projects (Linux kernel, qemu, etc.), so pointing
> at a doc file in the Xen git repository isn't really helpful for the
> consumers of the header.
> 
> This situation is far from ideal, which is already being proved by the
> fact that neither qemu nor the Linux kernel are implementing the
> device attach/detach protocol correctly.
> 
> Change that by moving the Xenstore related 9pfs documentation from
> docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - add reference to header in the pandoc document (Jan Beulich)
> V3:
> - fix flaw in the connection sequence
> V4:
> - split patch (Julien Grall)

Thanks for the split. I have checked before and after and they look the 
same to me.

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 12:28:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 12:28:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486973.754427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMTGE-0001ii-Iq; Mon, 30 Jan 2023 12:27:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486973.754427; Mon, 30 Jan 2023 12:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMTGE-0001ib-G1; Mon, 30 Jan 2023 12:27:46 +0000
Received: by outflank-mailman (input) for mailman id 486973;
 Mon, 30 Jan 2023 12:27:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMTGD-0001iV-6o
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 12:27:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMTGC-0008Vz-DG; Mon, 30 Jan 2023 12:27:44 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMTGC-0008AF-5l; Mon, 30 Jan 2023 12:27:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=9C7l2Z6QScVG9X3ZnVeJW7+yG1RaH7+/zJpR8ilHISw=; b=tP3subzomB73HZ+uJEcQRy+rsU
	jUQkdjknGSXxX/L8dVs7zWG6utXVHU9oaJorA6QjAfvAx6Qw8lFFKlfuLveFAShthOP9eC2Wo+s8L
	M7uVSkB/6DoQSY0JLL9tPdGljRa+IFcZRfE3BTECPl1Tm1M9INyLJeWdnCu+hAh8DGR8=;
Message-ID: <5a53d16a-8842-457b-612a-a3623a3a98ed@xen.org>
Date: Mon, 30 Jan 2023 12:27:42 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: Proposal for consistent Kconfig usage by the hypervisor build
 system
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <d77c1a7a-9d15-491d-38fa-99739f20bebd@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d77c1a7a-9d15-491d-38fa-99739f20bebd@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

Apologies for the late reply.

On 12/01/2023 16:52, Jan Beulich wrote:
> (re-sending with REST on Cc, as requested at the community call)
> 
> At present we use a mix of Makefile and Kconfig driven capability checks for
> tool chain components involved in the building of the hypervisor.  What approach
> is used where is in some part a result of the relatively late introduction of
> Kconfig into the build system, but in other places also simply a result of
> different taste of different contributors.  Switching to a uniform model,
> however, has drawbacks as well:
>   - A uniformly Makefile based model is not in line with Linux, where Kconfig is
>     actually coming from (at least as far as we're concerned; there may be
>     earlier origins).  This model is also being disliked by some community
>     members.
>   - A uniformly Kconfig based model suffers from a weakness of Kconfig in that
>     dependent options are silently turned off when dependencies aren't met.  This
>     has the undesirable effect that a carefully crafted .config may be silently
>     converted to one with features turned off which were intended to be on.
>     While this could be deemed expected behavior when a dependency is also an
>     option which was selected by the person configuring the hypervisor, it
>     certainly can be surprising when the dependency is an auto-detected tool
>     chain capability.  Furthermore there's no automatic re-running of kconfig if
>     any part of the tool chain changed.  (Despite knowing of this in principle,
>     I've still been hit by this more than once in the past: If one rebuilds a
>     tree which wasn't touched for a while, and if some time has already passed
> since updating to the newer component, one may not immediately make the
>     connection.)
> 
> Therefore I'd like to propose that we use an intermediate model: Detected tool
> chain capabilities (and alike) may only be used to control optimization (i.e.
> including their use as dependencies for optimization controls) and to establish
> the defaults of options.  They may not be used to control functionality, i.e.
> they may in particular not be specified as a dependency of an option controlling
> functionality.  This way unless defaults were overridden things will build, and
> non-default settings will be honored (albeit potentially resulting in a build
> failure).
> 
> For example
> 
> config AS_VMX
> 	def_bool $(as-instr,vmcall)
> 
> would be okay (as long as we have fallback code to deal with the case of too
> old an assembler; raising the baseline there is a separate topic), but instead
> of what we have currently
> 
> config XEN_SHSTK
> 	bool "Supervisor Shadow Stacks"
> 	default HAS_AS_CET_SS
> 
> would be the way to go.

I think your intermediate model makes sense.

> 
> It was additionally suggested that, for a better user experience, unmet
> dependencies which are known to result in build failures (which at times may be
> hard to associate back with the original cause) would be re-checked by Makefile
> based logic, leading to an early build failure with a comprehensible error
> message.  Personally I'd prefer this to be just warnings (first and foremost to
> avoid failing the build just because of a broken or stale check), but I can see
> that they might be overlooked when there's a lot of other output. 

If we wanted the Makefile to check the available features, then I would 
prefer an early error rather than warning. That said...

> In any event
> we may want to try to figure an approach which would make sufficiently sure that
> Makefile and Kconfig checks don't go out of sync.

... this is indeed a concern. How incomprehensible would the error be if 
we don't check it in the Makefile?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 12:30:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 12:30:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486978.754437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMTIW-00032N-2J; Mon, 30 Jan 2023 12:30:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486978.754437; Mon, 30 Jan 2023 12:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMTIV-00032G-TB; Mon, 30 Jan 2023 12:30:07 +0000
Received: by outflank-mailman (input) for mailman id 486978;
 Mon, 30 Jan 2023 12:30:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMTIU-00032A-6f
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 12:30:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMTIT-00006r-RD; Mon, 30 Jan 2023 12:30:05 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMTIT-0008DS-Jl; Mon, 30 Jan 2023 12:30:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=G7+tPAKAB4bS/r8HAFALygiFB07by+WZrw9qhPOucd8=; b=SRumM8LqEaF+oKmXSme0NPiTJv
	sB0nZlzVYYJShXDODItf3ZYJ1XRzdJaRagymomONb7LdTGHlrPtxB035ae+v03VQYaUe9NlaOMINl
	2Fm6luOS81ABEYxbHmB0zTrlyoymkuk4n8icMdR/Fxs/FWKnIPo2Qrn7pm+Ozb/D0C8o=;
Message-ID: <cd184a58-426e-4249-c635-504509734262@xen.org>
Date: Mon, 30 Jan 2023 12:30:03 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v6] xen/arm: Probe the load/entry point address of an uImage
 correctly
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230125112131.19682-1-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301251302360.1978264@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2301251302360.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 25/01/2023 21:06, Stefano Stabellini wrote:
> On Wed, 25 Jan 2023, Ayan Kumar Halder wrote:
>> Currently, kernel_uimage_probe() does not read the load/entry point address
>> set in the uImage header. Thus, info->zimage.start is 0 (default value). This
>> causes kernel_zimage_place() to treat the binary (contained within uImage)
>> as position independent executable. Thus, it loads it at an incorrect
>> address.
>>
>> The correct approach would be to read "uimage.load" and set
>> info->zimage.start. This will ensure that the binary is loaded at the
>> correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
>> address).
>>
>> If the user provides a load address (ie "uimage.load") of 0x0, then the image is
>> treated as position independent executable. Xen can load such an image at
>> any address it considers appropriate. A position independent executable
>> cannot have a fixed entry point address.
>>
>> This behavior is applicable for both arm32 and arm64 platforms.
>>
>> Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
>> point address set in the uImage header. With this commit, Xen will use them.
>> This makes the behavior of Xen consistent with uboot for uimage headers.
>>
>> Users who want to use Xen with statically partitioned domains can provide
>> a non-zero load address and entry address for the dom0/domU kernel. It is
>> required that the load and entry address provided must be within the memory
>> region allocated by Xen.
>>
>> A deviation from uboot behaviour is that we consider load address == 0x0,
>> to denote that the image supports position independent execution. This
>> is to make the behavior consistent across uImage and zImage.
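The load-address convention described above boils down to something like the following (a simplified C model for illustration only; the struct and function names are invented and this is not the actual kernel_uimage_probe() code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Invented, simplified model of the uImage load-address convention:
 * start == 0 means "position independent, Xen picks the address". */
struct image_info {
    uint64_t start;
    uint64_t entry;
};

/* Returns false on the invalid combination: a PIE image (load == 0)
 * cannot have a predetermined entry point. */
bool apply_uimage_header(struct image_info *info,
                         uint64_t hdr_load, uint64_t hdr_ep)
{
    if (hdr_load == 0) {
        if (hdr_ep != 0)
            return false;       /* entry point meaningless for PIE */
        info->start = 0;        /* Xen chooses a suitable address */
        info->entry = 0;
        return true;
    }
    info->start = hdr_load;     /* load exactly where the header asks */
    info->entry = hdr_ep;       /* and start execution at the given entry */
    return true;
}
```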
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>>
>> Changes from v1 :-
>> 1. Added a check to ensure load address and entry address are the same.
>> 2. Considered load address == 0x0 as position independent execution.
>> 3. Ensured that the uImage header interpretation is consistent across
>> arm32 and arm64.
>>
>> v2 :-
>> 1. Mentioned the change in existing behavior in booting.txt.
>> 2. Updated booting.txt with a new section to document "Booting Guests".
>>
>> v3 :-
>> 1. Removed the constraint that the entry point should be same as the load
>> address. Thus, Xen uses both the load address and entry point to determine
>> where the image is to be copied and the start address.
>> 2. Updated documentation to denote that load address and start address
>> should be within the memory region allocated by Xen.
>> 3. Added constraint that user cannot provide entry point for a position
>> independent executable (PIE) image.
>>
>> v4 :-
>> 1. Explicitly mentioned the version in booting.txt from when the uImage
>> probing behavior has changed.
>> 2. Logged the requested load address and entry point parsed from the uImage
>> header.
>> 3. Some style issues.
>>
>> v5 :-
>> 1. Set info->zimage.text_offset = 0 in kernel_uimage_probe().
>> 2. Mention that if the kernel has a legacy image header on top of zImage/zImage64
>> header, then the attrbutes from legacy image header is used to determine the load
>> address, entry point, etc. Thus, zImage/zImage64 header is effectively ignored.
>>
>> This is true because Xen currently does not support recursive probing of kernel
>> headers ie if uImage header is probed, then Xen will not attempt to see if there
>> is an underlying zImage/zImage64 header.
>>
>>   docs/misc/arm/booting.txt         | 30 ++++++++++++++++
>>   xen/arch/arm/include/asm/kernel.h |  2 +-
>>   xen/arch/arm/kernel.c             | 58 +++++++++++++++++++++++++++++--
>>   3 files changed, 86 insertions(+), 4 deletions(-)
>>
>> diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
>> index 3e0c03e065..1837579aef 100644
>> --- a/docs/misc/arm/booting.txt
>> +++ b/docs/misc/arm/booting.txt
>> @@ -23,6 +23,32 @@ The exceptions to this on 32-bit ARM are as follows:
>>   
>>   There are no exception on 64-bit ARM.
>>   
>> +Booting Guests
>> +--------------
>> +
>> +Xen supports the legacy image header[3], zImage protocol for 32-bit
>> +ARM Linux[1] and Image protocol defined for ARM64[2].
>> +
>> +Until Xen 4.17, in case of legacy image protocol, Xen ignored the load
>> +address and entry point specified in the header. This has now changed.
>> +
>> +Now, it loads the image at the load address provided in the header.
>> +And the entry point is used as the kernel start address.
>> +
>> +A deviation from uboot is that Xen treats "load address == 0x0" as
>> +position independent execution (PIE). Thus, Xen will load such an image
>> +at an address it considers appropriate. Also, the user cannot specify the
>> +entry point of a PIE image since the start address cannot be
>> +predetermined.
>> +
>> +Users who want to use Xen with statically partitioned domains can provide
>> +a fixed non-zero load address and start address for the dom0/domU kernel.
>> +The load address and start address specified by the user in the header must
>> +be within the memory region allocated by Xen.
>> +
>> +Also, it is to be noted that if user provides the legacy image header on top of
>> +zImage or Image header, then Xen uses the attrbutes of legacy image header only
>                                               ^ attributes                    ^ remove only
> 
>> +to determine the load address, entry point, etc.
> 
> Also add:
> 
> """
> Known limitation: compressed kernels with a uboot header are not
> working.
> """

I am not entirely sure where you want this sentence to be added. So...

> 
> These few minor changes to the documentation can be done on commit:

... can you take care of committing it?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 13:27:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 13:27:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486987.754447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUBs-00013n-ES; Mon, 30 Jan 2023 13:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486987.754447; Mon, 30 Jan 2023 13:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUBs-00013g-AX; Mon, 30 Jan 2023 13:27:20 +0000
Received: by outflank-mailman (input) for mailman id 486987;
 Mon, 30 Jan 2023 13:27:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=65dI=53=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMUBq-00013X-5E
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 13:27:18 +0000
Received: from mail-vs1-xe2d.google.com (mail-vs1-xe2d.google.com
 [2607:f8b0:4864:20::e2d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cdf448d9-a0a1-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 14:27:13 +0100 (CET)
Received: by mail-vs1-xe2d.google.com with SMTP id a24so10334066vsl.2
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 05:27:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdf448d9-a0a1-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=agbH+6v2SExuB14vIduPjdZUjlX1wZOZ1Bw5ItVJ5b4=;
        b=bIXKBUP02iceG1guIX5N4Wj/4r5rnFF3JBIhYD+ZnvGc4VXqwSIDlwiEJQcLnKW/6R
         G6YUSjC3YfDS7QghwnaQ5XPswd43a0nsu39Po19PxmJB/4ctJBFFdRqnLd+Fi+kfjsRv
         NGwov6e+Wi7jSDtUu5uVyIjxnDeVbxBVRVeaY3amNFfAaCbKipXPGlw4zCfpCzDYZxDQ
         Q2dR9pu7WHwYBYF3Z+PnmkKEIAymm4vnSdQGHPTHbFCWCM5To6WJKbd2MtCyh6hgyiPa
         EHsxoGdyitQ+/Z3iDmLpPlxq6FvxKmqm1tiMidl3BhwjFZo6BSLBzwviJM6yehh3uP8U
         STLg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=agbH+6v2SExuB14vIduPjdZUjlX1wZOZ1Bw5ItVJ5b4=;
        b=bPXmlc01QH2qqGLalM7rIl3u5IS/vyzKvrA7kMMwlAWM9kOZ57X9BAqGkkeE+8bZY3
         9cXcuyR+CNMg1ZVILXwwSwCpRux0tSMGtSFaI8dzDg3Dg0DaDUJ7T29SEngmByK7y9jl
         Tzokdh3vnFSIPQq9qOk5fzKaaKKG/9fryyzdwGGo3F0U/8x59RJZ2eRDQX6jmvBnD2EF
         6B5Nhq4ABjjGNB7f62Kgi4paIAVFTGxqkt5mWiHnXXKKgVPD75/esb61JwJ+qVIHQxKT
         SLjrGHIo+y5FcoGvD3YDDAhxoa5MrF6Fn1EOycS5ihev8Exo1BFV7oFWpfqdH7LbPXJ9
         S6Dg==
X-Gm-Message-State: AFqh2kqCaxMDgFVenXjHBPgSSl/p0RpOjY28ov0f8D0XuO8Mwh5fjzff
	cXp5ra71n2MO9GrvIkrw1ZyzWzl4JEYc10EOxaACopVcG3M=
X-Google-Smtp-Source: AMrXdXvbACkNYSaynJM2vpibMdSwJyV5GYDa4t1EYLKPOY1AdHeSeMFXzGmg6SRA+BMVxsKxbP09XYv/VecxRU59VN0=
X-Received: by 2002:a05:6102:cd4:b0:3d0:c2e9:cb77 with SMTP id
 g20-20020a0561020cd400b003d0c2e9cb77mr6615300vst.54.1675085233465; Mon, 30
 Jan 2023 05:27:13 -0800 (PST)
MIME-Version: 1.0
References: <cover.1674818705.git.oleksii.kurochko@gmail.com> <b26d981f189adad8af4560fcc10360da02df97a9.1674818705.git.oleksii.kurochko@gmail.com>
In-Reply-To: <b26d981f189adad8af4560fcc10360da02df97a9.1674818705.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 30 Jan 2023 23:26:47 +1000
Message-ID: <CAKmqyKNjsxZPZDeZwbOaOAdS2F8H5U+imEdd1p9ro_J15gBw7g@mail.gmail.com>
Subject: Re: [PATCH v2 04/14] xen/riscv: add <asm/csr.h> header
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 28, 2023 at 12:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> The following changes were made in comparison with <asm/csr.h> from
> Linux:
>   * remove all defines as they are defined in riscv_encoding.h
>   * leave only csr_* macros
>
> Origin: https://github.com/torvalds/linux.git 2475bf0250de
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair
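As background for anyone new to these wrappers: csrrs/csrrc atomically return the old CSR value while setting/clearing the mask bits. A pure-C model of that semantic, with the hardware CSR replaced by a plain variable purely for illustration:

```c
/* Toy stand-in for a CSR: real CSRs are hardware registers reached via
 * the csrrs/csrrc instructions, not memory, and the real operations are
 * atomic with respect to traps. */
static unsigned long toy_csr;

/* Model of csr_read_set(): return the old value, then set the mask bits. */
static unsigned long toy_read_set(unsigned long mask)
{
    unsigned long old = toy_csr;
    toy_csr |= mask;
    return old;
}

/* Model of csr_read_clear(): return the old value, then clear the mask bits. */
static unsigned long toy_read_clear(unsigned long mask)
{
    unsigned long old = toy_csr;
    toy_csr &= ~mask;
    return old;
}
```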

> ---
> Changes in V2:
>   - Minor refactoring mentioned in the commit message, switch tabs to
>     spaces and refactor things around __asm__ __volatile__.
>   - Update the commit message and add "Origin:" tag.
> ---
>  xen/arch/riscv/include/asm/csr.h | 84 ++++++++++++++++++++++++++++++++
>  1 file changed, 84 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/csr.h
>
> diff --git a/xen/arch/riscv/include/asm/csr.h b/xen/arch/riscv/include/asm/csr.h
> new file mode 100644
> index 0000000000..4275cf6515
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/csr.h
> @@ -0,0 +1,84 @@
> +/*
> + * SPDX-License-Identifier: GPL-2.0-only
> + *
> + * Copyright (C) 2015 Regents of the University of California
> + */
> +
> +#ifndef _ASM_RISCV_CSR_H
> +#define _ASM_RISCV_CSR_H
> +
> +#include <asm/asm.h>
> +#include <xen/const.h>
> +#include <asm/riscv_encoding.h>
> +
> +#ifndef __ASSEMBLY__
> +
> +#define csr_read(csr)                                           \
> +({                                                              \
> +    register unsigned long __v;                                 \
> +    __asm__ __volatile__ (  "csrr %0, " __ASM_STR(csr)          \
> +                            : "=r" (__v)                        \
> +                            : : "memory" );                     \
> +    __v;                                                        \
> +})
> +
> +#define csr_write(csr, val)                                     \
> +({                                                              \
> +    unsigned long __v = (unsigned long)(val);                   \
> +    __asm__ __volatile__ (  "csrw " __ASM_STR(csr) ", %0"       \
> +                            : /* no outputs */                  \
> +                            : "rK" (__v)                        \
> +                            : "memory" );                       \
> +})
> +
> +#define csr_swap(csr, val)                                      \
> +({                                                              \
> +    unsigned long __v = (unsigned long)(val);                   \
> +    __asm__ __volatile__ (  "csrrw %0, " __ASM_STR(csr) ", %1"  \
> +                            : "=r" (__v)                        \
> +                            : "rK" (__v)                        \
> +                            : "memory" );                       \
> +    __v;                                                        \
> +})
> +
> +#define csr_read_set(csr, val)                                  \
> +({                                                              \
> +    unsigned long __v = (unsigned long)(val);                   \
> +    __asm__ __volatile__ (  "csrrs %0, " __ASM_STR(csr) ", %1"  \
> +                            : "=r" (__v)                        \
> +                            : "rK" (__v)                        \
> +                            : "memory" );                       \
> +    __v;                                                        \
> +})
> +
> +#define csr_set(csr, val)                                       \
> +({                                                              \
> +    unsigned long __v = (unsigned long)(val);                   \
> +    __asm__ __volatile__ (  "csrs " __ASM_STR(csr) ", %0"       \
> +                            : /* no outputs */                  \
> +                            : "rK" (__v)                        \
> +                            : "memory" );                       \
> +})
> +
> +#define csr_read_clear(csr, val)                                \
> +({                                                              \
> +    unsigned long __v = (unsigned long)(val);                   \
> +    __asm__ __volatile__ (  "csrrc %0, " __ASM_STR(csr) ", %1"  \
> +                            : "=r" (__v)                        \
> +                            : "rK" (__v)                        \
> +                            : "memory" );                       \
> +    __v;                                                        \
> +})
> +
> +#define csr_clear(csr, val)                                     \
> +({                                                              \
> +    unsigned long __v = (unsigned long)(val);                   \
> +    __asm__ __volatile__ (  "csrc " __ASM_STR(csr) ", %0"       \
> +                            : /*no outputs */                   \
> +                            : "rK" (__v)                        \
> +                            : "memory" );                       \
> +})
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* _ASM_RISCV_CSR_H */
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 13:29:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 13:29:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486991.754456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUEO-0001dy-Qk; Mon, 30 Jan 2023 13:29:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486991.754456; Mon, 30 Jan 2023 13:29:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUEO-0001dr-Ns; Mon, 30 Jan 2023 13:29:56 +0000
Received: by outflank-mailman (input) for mailman id 486991;
 Mon, 30 Jan 2023 13:29:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=65dI=53=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMUEN-0001dj-Tb
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 13:29:56 +0000
Received: from mail-vk1-xa2a.google.com (mail-vk1-xa2a.google.com
 [2607:f8b0:4864:20::a2a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c045733-a0a2-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 14:29:50 +0100 (CET)
Received: by mail-vk1-xa2a.google.com with SMTP id v81so5718358vkv.5
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 05:29:52 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c045733-a0a2-11ed-b8d1-410ff93cb8f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=VxOYPhhZnuY6MG0gYFTYyXlgkqDDtLZAJ+tZrvOPl6k=;
        b=TcipOae9bd13tNibyCunpmvaRTjw8n0oVZp/jSpPvcksuxH3nf/YBOgUl3F65qLqqq
         WJBudRv5VTVt3CBdicIYMhPhF2evMRDNhfkj0bvXE3xvTY6BPAwJVGLFisIYIyO9f61k
         gvj7vF/a4nXhXrEhPwrvwE0IVBJJTMNFOXQnusOcYMxeARQlTEM0uHIt8rMwyT6PLbTT
         xg3KBqZaTDvfSjc3xDvNrMSA6447rzo/SYYFFT7zQM6PpGca94TpGQJhAyyCgdcSvmj3
         1H748N8K85LBhGzXJJbqhFOSUTaMei0L8SQJDXK/+jMmKZ9gIf80NS8pt2AafriNwHlI
         foxQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=VxOYPhhZnuY6MG0gYFTYyXlgkqDDtLZAJ+tZrvOPl6k=;
        b=YCPyFEayDHatYMHuJxRgLE+WyQRuxUjiOIiTMBoHapjFjdl/NyFYOUTIiHkdY8Nwyb
         Sdz4bMNFeGrjB3gdSEm5qgD/UQxNZaFNkAjfjQqT7+ijdiU0+7mL4y4Km/gpPlqRVBXD
         /d7dkGF4GidWzU6lHpT+pkYZ1TdjLbfV5zQBY/LOY4CeKrNshKvkDaPFx7eCLRcBYWVa
         Drct4+PbgYFTKQlJHFJPMT6RaALWn+Ln4uytRf0SB9oDO45PRLXSeQW4eGOm5vKaXXCj
         X1eBdccrgMFYMdoiKkB9bMhfMrkKy6DUYTBMFS0nhgptNixYHh5nwnC6zzrq68YQJhF9
         onHA==
X-Gm-Message-State: AO0yUKXIKPyQ6PhvYWkyjhwVAQ34Ima+par9P8hriHokfZawzpGKVb2j
	eHT0Ouusa+qRbfQIXNOYNZp0jW24gmEYOCcR1HY=
X-Google-Smtp-Source: AK7set8I+Crc9KewtoYlYOhS4r7unDC9Lk1vnNLT+lz2yaFi1mnKitg3klFihQsxXDeBxxVbajNZk7zYD13KlBolAEA=
X-Received: by 2002:ac5:c98a:0:b0:3ea:3000:8627 with SMTP id
 e10-20020ac5c98a000000b003ea30008627mr562964vkm.7.1675085391137; Mon, 30 Jan
 2023 05:29:51 -0800 (PST)
MIME-Version: 1.0
References: <cover.1674818705.git.oleksii.kurochko@gmail.com> <135f7fb8ac64007c0ea580e76630962dc1bd11c9.1674818705.git.oleksii.kurochko@gmail.com>
In-Reply-To: <135f7fb8ac64007c0ea580e76630962dc1bd11c9.1674818705.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 30 Jan 2023 23:29:24 +1000
Message-ID: <CAKmqyKO0XXvB36KMpHH7GZMot2VgNVEE9apC_vX3SpOxirtNrQ@mail.gmail.com>
Subject: Re: [PATCH v2 03/14] xen/riscv: add <asm/riscv_encoding.h> header
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 28, 2023 at 12:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> The following changes were done in Xen code base in comparison with OpenSBI:
>   * Remove "#include <sbi/sbi_const.h>" as most of the stuff inside
>     it is present in Xen code base.
>   * Add macros _UL and _ULL as they were in <sbi/sbi_const.h> before
>   * Add SATP32_MODE_SHIFT/SATP64_MODE_SHIFT/SATP_MODE_SHIFT as they will
>     be used in riscv/mm.c
>   * Add CAUSE_IRQ_FLAG which is going to be used inside the exception
>     handler
>   * Change ulong to unsigned long in macros REG_PTR(...)
>   * Change s32 to int32_t
>
> Originally authored by Anup Patel <anup.patel@wdc.com>
>
> Origin: https://github.com/riscv-software-src/opensbi.git c45992cc2b12
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair
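As a usage note for constants like these: the mask/shift pairs exist so multi-bit fields can be extracted and composed. For instance the 2-bit MPP field, using the same values as in the patch (helper names invented here):

```c
/* Values as defined in the patch. */
#define MSTATUS_MPP_SHIFT 11
#define MSTATUS_MPP       (3UL << MSTATUS_MPP_SHIFT)

/* Extract the 2-bit previous-privilege (MPP) field from an mstatus value. */
static unsigned long mstatus_mpp(unsigned long mstatus)
{
    return (mstatus & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT;
}

/* Compose: set MPP to a given privilege level (0=U, 1=S, 3=M). */
static unsigned long mstatus_set_mpp(unsigned long mstatus, unsigned long pp)
{
    return (mstatus & ~MSTATUS_MPP) |
           ((pp << MSTATUS_MPP_SHIFT) & MSTATUS_MPP);
}
```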

> ---
> Changes in V2:
>   - Take the latest version of riscv_encoding.h from OpenSBI.
>   - Update riscv_encoding.h with Xen related changes mentioned in the
>     commit message.
>   - Update commit message and add "Origin:" tag
> ---
>  xen/arch/riscv/include/asm/riscv_encoding.h | 927 ++++++++++++++++++++
>  1 file changed, 927 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/riscv_encoding.h
>
> diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
> new file mode 100644
> index 0000000000..43dd4f6981
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/riscv_encoding.h
> @@ -0,0 +1,927 @@
> +/*
> + * SPDX-License-Identifier: BSD-2-Clause
> + *
> + * Copyright (c) 2019 Western Digital Corporation or its affiliates.
> + *
> + * Authors:
> + *   Anup Patel <anup.patel@wdc.com>
> + */
> +
> +#ifndef __RISCV_ENCODING_H__
> +#define __RISCV_ENCODING_H__
> +
> +#define _UL(X) _AC(X, UL)
> +#define _ULL(X) _AC(X, ULL)
> +
> +/* clang-format off */
> +#define MSTATUS_SIE                    _UL(0x00000002)
> +#define MSTATUS_MIE                    _UL(0x00000008)
> +#define MSTATUS_SPIE_SHIFT             5
> +#define MSTATUS_SPIE                   (_UL(1) << MSTATUS_SPIE_SHIFT)
> +#define MSTATUS_UBE                    _UL(0x00000040)
> +#define MSTATUS_MPIE                   _UL(0x00000080)
> +#define MSTATUS_SPP_SHIFT              8
> +#define MSTATUS_SPP                    (_UL(1) << MSTATUS_SPP_SHIFT)
> +#define MSTATUS_MPP_SHIFT              11
> +#define MSTATUS_MPP                    (_UL(3) << MSTATUS_MPP_SHIFT)
> +#define MSTATUS_FS                     _UL(0x00006000)
> +#define MSTATUS_XS                     _UL(0x00018000)
> +#define MSTATUS_VS                     _UL(0x00000600)
> +#define MSTATUS_MPRV                   _UL(0x00020000)
> +#define MSTATUS_SUM                    _UL(0x00040000)
> +#define MSTATUS_MXR                    _UL(0x00080000)
> +#define MSTATUS_TVM                    _UL(0x00100000)
> +#define MSTATUS_TW                     _UL(0x00200000)
> +#define MSTATUS_TSR                    _UL(0x00400000)
> +#define MSTATUS32_SD                   _UL(0x80000000)
> +#if __riscv_xlen == 64
> +#define MSTATUS_UXL                    _ULL(0x0000000300000000)
> +#define MSTATUS_SXL                    _ULL(0x0000000C00000000)
> +#define MSTATUS_SBE                    _ULL(0x0000001000000000)
> +#define MSTATUS_MBE                    _ULL(0x0000002000000000)
> +#define MSTATUS_GVA                    _ULL(0x0000004000000000)
> +#define MSTATUS_GVA_SHIFT              38
> +#define MSTATUS_MPV                    _ULL(0x0000008000000000)
> +#else
> +#define MSTATUSH_SBE                   _UL(0x00000010)
> +#define MSTATUSH_MBE                   _UL(0x00000020)
> +#define MSTATUSH_GVA                   _UL(0x00000040)
> +#define MSTATUSH_GVA_SHIFT             6
> +#define MSTATUSH_MPV                   _UL(0x00000080)
> +#endif
> +#define MSTATUS32_SD                   _UL(0x80000000)
> +#define MSTATUS64_SD                   _ULL(0x8000000000000000)
> +
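Inline note, not part of the patch: the SHIFT/mask pairs above are meant to be used together to extract multi-bit fields. A minimal self-contained sketch (the helper name `mstatus_prev_priv` is hypothetical, and the macro values are duplicated here without the `_UL()` wrapper so the snippet compiles standalone):

```c
#include <assert.h>

/* Values duplicated from the patch above for a self-contained example. */
#define MSTATUS_MPP_SHIFT 11
#define MSTATUS_MPP       (3UL << MSTATUS_MPP_SHIFT)
#define MSTATUS_SPP_SHIFT 8
#define MSTATUS_SPP       (1UL << MSTATUS_SPP_SHIFT)
#define PRV_U             0UL
#define PRV_S             1UL
#define PRV_M             3UL

/* Hypothetical helper: extract the previous privilege mode from mstatus.MPP
 * by masking the field and shifting it down to bit 0. */
static unsigned long mstatus_prev_priv(unsigned long mstatus)
{
    return (mstatus & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT;
}
```

The same mask-then-shift pattern applies to SPP, VGEIN, and the other multi-bit fields defined in this header.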
> +#define SSTATUS_SIE                    MSTATUS_SIE
> +#define SSTATUS_SPIE_SHIFT             MSTATUS_SPIE_SHIFT
> +#define SSTATUS_SPIE                   MSTATUS_SPIE
> +#define SSTATUS_SPP_SHIFT              MSTATUS_SPP_SHIFT
> +#define SSTATUS_SPP                    MSTATUS_SPP
> +#define SSTATUS_FS                     MSTATUS_FS
> +#define SSTATUS_XS                     MSTATUS_XS
> +#define SSTATUS_VS                     MSTATUS_VS
> +#define SSTATUS_SUM                    MSTATUS_SUM
> +#define SSTATUS_MXR                    MSTATUS_MXR
> +#define SSTATUS32_SD                   MSTATUS32_SD
> +#define SSTATUS64_UXL                  MSTATUS_UXL
> +#define SSTATUS64_SD                   MSTATUS64_SD
> +
> +#if __riscv_xlen == 64
> +#define HSTATUS_VSXL                   _UL(0x300000000)
> +#define HSTATUS_VSXL_SHIFT             32
> +#endif
> +#define HSTATUS_VTSR                   _UL(0x00400000)
> +#define HSTATUS_VTW                    _UL(0x00200000)
> +#define HSTATUS_VTVM                   _UL(0x00100000)
> +#define HSTATUS_VGEIN                  _UL(0x0003f000)
> +#define HSTATUS_VGEIN_SHIFT            12
> +#define HSTATUS_HU                     _UL(0x00000200)
> +#define HSTATUS_SPVP                   _UL(0x00000100)
> +#define HSTATUS_SPV                    _UL(0x00000080)
> +#define HSTATUS_GVA                    _UL(0x00000040)
> +#define HSTATUS_VSBE                   _UL(0x00000020)
> +
> +#define IRQ_S_SOFT                     1
> +#define IRQ_VS_SOFT                    2
> +#define IRQ_M_SOFT                     3
> +#define IRQ_S_TIMER                    5
> +#define IRQ_VS_TIMER                   6
> +#define IRQ_M_TIMER                    7
> +#define IRQ_S_EXT                      9
> +#define IRQ_VS_EXT                     10
> +#define IRQ_M_EXT                      11
> +#define IRQ_S_GEXT                     12
> +#define IRQ_PMU_OVF                    13
> +
> +#define MIP_SSIP                       (_UL(1) << IRQ_S_SOFT)
> +#define MIP_VSSIP                      (_UL(1) << IRQ_VS_SOFT)
> +#define MIP_MSIP                       (_UL(1) << IRQ_M_SOFT)
> +#define MIP_STIP                       (_UL(1) << IRQ_S_TIMER)
> +#define MIP_VSTIP                      (_UL(1) << IRQ_VS_TIMER)
> +#define MIP_MTIP                       (_UL(1) << IRQ_M_TIMER)
> +#define MIP_SEIP                       (_UL(1) << IRQ_S_EXT)
> +#define MIP_VSEIP                      (_UL(1) << IRQ_VS_EXT)
> +#define MIP_MEIP                       (_UL(1) << IRQ_M_EXT)
> +#define MIP_SGEIP                      (_UL(1) << IRQ_S_GEXT)
> +#define MIP_LCOFIP                     (_UL(1) << IRQ_PMU_OVF)
> +
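Inline note, not part of the patch: each MIP_* bit is derived directly from its IRQ_* number, so the two name spaces stay consistent by construction. A small sketch of testing a pending bit (the `stimer_pending` helper is hypothetical; values copied from the patch):

```c
#include <assert.h>

/* Values duplicated from the patch above. */
#define IRQ_S_TIMER 5
#define MIP_STIP    (1UL << IRQ_S_TIMER)

/* Hypothetical helper: check whether the supervisor timer interrupt
 * is pending in a snapshot of the sip CSR. */
static int stimer_pending(unsigned long sip)
{
    return (sip & MIP_STIP) != 0;
}
```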
> +#define SIP_SSIP                       MIP_SSIP
> +#define SIP_STIP                       MIP_STIP
> +
> +#define PRV_U                          _UL(0)
> +#define PRV_S                          _UL(1)
> +#define PRV_M                          _UL(3)
> +
> +#define SATP32_MODE                    _UL(0x80000000)
> +#define SATP32_MODE_SHIFT              31
> +#define SATP32_ASID                    _UL(0x7FC00000)
> +#define SATP32_PPN                     _UL(0x003FFFFF)
> +#define SATP64_MODE                    _ULL(0xF000000000000000)
> +#define SATP64_MODE_SHIFT              60
> +#define SATP64_ASID                    _ULL(0x0FFFF00000000000)
> +#define SATP64_PPN                     _ULL(0x00000FFFFFFFFFFF)
> +
> +#define SATP_MODE_OFF                  _UL(0)
> +#define SATP_MODE_SV32                 _UL(1)
> +#define SATP_MODE_SV39                 _UL(8)
> +#define SATP_MODE_SV48                 _UL(9)
> +#define SATP_MODE_SV57                 _UL(10)
> +#define SATP_MODE_SV64                 _UL(11)
> +
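Inline note, not part of the patch: the SATP64_* masks and SATP_MODE_* values compose into a full satp image, with the PPN field holding the root page-table's physical page number. A hedged sketch, assuming RV64, ASID 0, and a 4 KiB-aligned root table (the helper name is hypothetical):

```c
#include <assert.h>

/* Values duplicated from the patch above. */
#define SATP64_MODE_SHIFT 60
#define SATP_MODE_SV39    8ULL
#define SATP64_PPN        0x00000FFFFFFFFFFFULL

/* Hypothetical helper: build a 64-bit satp value enabling Sv39
 * translation rooted at the given physical address, ASID 0. */
static unsigned long long make_satp_sv39(unsigned long long root_pa)
{
    /* satp.PPN is the physical page number, i.e. the address >> 12. */
    return (SATP_MODE_SV39 << SATP64_MODE_SHIFT) |
           ((root_pa >> 12) & SATP64_PPN);
}
```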
> +#define HGATP_MODE_OFF                 _UL(0)
> +#define HGATP_MODE_SV32X4              _UL(1)
> +#define HGATP_MODE_SV39X4              _UL(8)
> +#define HGATP_MODE_SV48X4              _UL(9)
> +
> +#define HGATP32_MODE_SHIFT             31
> +#define HGATP32_VMID_SHIFT             22
> +#define HGATP32_VMID_MASK              _UL(0x1FC00000)
> +#define HGATP32_PPN                    _UL(0x003FFFFF)
> +
> +#define HGATP64_MODE_SHIFT             60
> +#define HGATP64_VMID_SHIFT             44
> +#define HGATP64_VMID_MASK              _ULL(0x03FFF00000000000)
> +#define HGATP64_PPN                    _ULL(0x00000FFFFFFFFFFF)
> +
> +#define PMP_R                          _UL(0x01)
> +#define PMP_W                          _UL(0x02)
> +#define PMP_X                          _UL(0x04)
> +#define PMP_A                          _UL(0x18)
> +#define PMP_A_TOR                      _UL(0x08)
> +#define PMP_A_NA4                      _UL(0x10)
> +#define PMP_A_NAPOT                    _UL(0x18)
> +#define PMP_L                          _UL(0x80)
> +
> +#define PMP_SHIFT                      2
> +#define PMP_COUNT                      64
> +#if __riscv_xlen == 64
> +#define PMP_ADDR_MASK                  ((_ULL(0x1) << 54) - 1)
> +#else
> +#define PMP_ADDR_MASK                  _UL(0xFFFFFFFF)
> +#endif
> +
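Inline note, not part of the patch: the PMP_A_NAPOT mode encodes the region size in the low bits of pmpaddr, with k trailing one-bits denoting a 2^(k+3)-byte region. A sketch of the encoding, assuming a power-of-two size of at least 8 bytes and a base aligned to that size (helper name hypothetical):

```c
#include <assert.h>

/* Values duplicated from the patch above. */
#define PMP_R       0x01UL
#define PMP_W       0x02UL
#define PMP_A_NAPOT 0x18UL

/* Hypothetical helper: encode the pmpaddr value for a NAPOT region.
 * pmpaddr holds base >> 2 with the size encoded as trailing ones. */
static unsigned long pmp_napot_addr(unsigned long base, unsigned long size)
{
    return (base >> 2) | ((size >> 3) - 1);
}
```

The matching pmpcfg byte would OR together the permission bits and the address-matching mode, e.g. `PMP_R | PMP_W | PMP_A_NAPOT`.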
> +#if __riscv_xlen == 64
> +#define MSTATUS_SD                     MSTATUS64_SD
> +#define SSTATUS_SD                     SSTATUS64_SD
> +#define SATP_MODE                      SATP64_MODE
> +#define SATP_MODE_SHIFT                        SATP64_MODE_SHIFT
> +
> +#define HGATP_PPN                      HGATP64_PPN
> +#define HGATP_VMID_SHIFT               HGATP64_VMID_SHIFT
> +#define HGATP_VMID_MASK                        HGATP64_VMID_MASK
> +#define HGATP_MODE_SHIFT               HGATP64_MODE_SHIFT
> +#else
> +#define MSTATUS_SD                     MSTATUS32_SD
> +#define SSTATUS_SD                     SSTATUS32_SD
> +#define SATP_MODE                      SATP32_MODE
> +#define SATP_MODE_SHIFT                        SATP32_MODE_SHIFT
> +
> +#define HGATP_PPN                      HGATP32_PPN
> +#define HGATP_VMID_SHIFT               HGATP32_VMID_SHIFT
> +#define HGATP_VMID_MASK                        HGATP32_VMID_MASK
> +#define HGATP_MODE_SHIFT               HGATP32_MODE_SHIFT
> +#endif
> +
> +#define TOPI_IID_SHIFT                 16
> +#define TOPI_IID_MASK                  0xfff
> +#define TOPI_IPRIO_MASK                0xff
> +
> +#if __riscv_xlen == 64
> +#define MHPMEVENT_OF                   (_UL(1) << 63)
> +#define MHPMEVENT_MINH                 (_UL(1) << 62)
> +#define MHPMEVENT_SINH                 (_UL(1) << 61)
> +#define MHPMEVENT_UINH                 (_UL(1) << 60)
> +#define MHPMEVENT_VSINH                        (_UL(1) << 59)
> +#define MHPMEVENT_VUINH                        (_UL(1) << 58)
> +#else
> +#define MHPMEVENTH_OF                  (_ULL(1) << 31)
> +#define MHPMEVENTH_MINH                        (_ULL(1) << 30)
> +#define MHPMEVENTH_SINH                        (_ULL(1) << 29)
> +#define MHPMEVENTH_UINH                        (_ULL(1) << 28)
> +#define MHPMEVENTH_VSINH               (_ULL(1) << 27)
> +#define MHPMEVENTH_VUINH               (_ULL(1) << 26)
> +
> +#define MHPMEVENT_OF                   (MHPMEVENTH_OF << 32)
> +#define MHPMEVENT_MINH                 (MHPMEVENTH_MINH << 32)
> +#define MHPMEVENT_SINH                 (MHPMEVENTH_SINH << 32)
> +#define MHPMEVENT_UINH                 (MHPMEVENTH_UINH << 32)
> +#define MHPMEVENT_VSINH                        (MHPMEVENTH_VSINH << 32)
> +#define MHPMEVENT_VUINH                        (MHPMEVENTH_VUINH << 32)
> +
> +#endif
> +
> +#define MHPMEVENT_SSCOF_MASK           _ULL(0xFFFF000000000000)
> +
> +#if __riscv_xlen > 32
> +#define ENVCFG_STCE                    (_ULL(1) << 63)
> +#define ENVCFG_PBMTE                   (_ULL(1) << 62)
> +#else
> +#define ENVCFGH_STCE                   (_UL(1) << 31)
> +#define ENVCFGH_PBMTE                  (_UL(1) << 30)
> +#endif
> +#define ENVCFG_CBZE                    (_UL(1) << 7)
> +#define ENVCFG_CBCFE                   (_UL(1) << 6)
> +#define ENVCFG_CBIE_SHIFT              4
> +#define ENVCFG_CBIE                    (_UL(0x3) << ENVCFG_CBIE_SHIFT)
> +#define ENVCFG_CBIE_ILL                        _UL(0x0)
> +#define ENVCFG_CBIE_FLUSH              _UL(0x1)
> +#define ENVCFG_CBIE_INV                        _UL(0x3)
> +#define ENVCFG_FIOM                    _UL(0x1)
> +
> +/* ===== User-level CSRs ===== */
> +
> +/* User Trap Setup (N-extension) */
> +#define CSR_USTATUS                    0x000
> +#define CSR_UIE                                0x004
> +#define CSR_UTVEC                      0x005
> +
> +/* User Trap Handling (N-extension) */
> +#define CSR_USCRATCH                   0x040
> +#define CSR_UEPC                       0x041
> +#define CSR_UCAUSE                     0x042
> +#define CSR_UTVAL                      0x043
> +#define CSR_UIP                                0x044
> +
> +/* User Floating-point CSRs */
> +#define CSR_FFLAGS                     0x001
> +#define CSR_FRM                                0x002
> +#define CSR_FCSR                       0x003
> +
> +/* User Counters/Timers */
> +#define CSR_CYCLE                      0xc00
> +#define CSR_TIME                       0xc01
> +#define CSR_INSTRET                    0xc02
> +#define CSR_HPMCOUNTER3                        0xc03
> +#define CSR_HPMCOUNTER4                        0xc04
> +#define CSR_HPMCOUNTER5                        0xc05
> +#define CSR_HPMCOUNTER6                        0xc06
> +#define CSR_HPMCOUNTER7                        0xc07
> +#define CSR_HPMCOUNTER8                        0xc08
> +#define CSR_HPMCOUNTER9                        0xc09
> +#define CSR_HPMCOUNTER10               0xc0a
> +#define CSR_HPMCOUNTER11               0xc0b
> +#define CSR_HPMCOUNTER12               0xc0c
> +#define CSR_HPMCOUNTER13               0xc0d
> +#define CSR_HPMCOUNTER14               0xc0e
> +#define CSR_HPMCOUNTER15               0xc0f
> +#define CSR_HPMCOUNTER16               0xc10
> +#define CSR_HPMCOUNTER17               0xc11
> +#define CSR_HPMCOUNTER18               0xc12
> +#define CSR_HPMCOUNTER19               0xc13
> +#define CSR_HPMCOUNTER20               0xc14
> +#define CSR_HPMCOUNTER21               0xc15
> +#define CSR_HPMCOUNTER22               0xc16
> +#define CSR_HPMCOUNTER23               0xc17
> +#define CSR_HPMCOUNTER24               0xc18
> +#define CSR_HPMCOUNTER25               0xc19
> +#define CSR_HPMCOUNTER26               0xc1a
> +#define CSR_HPMCOUNTER27               0xc1b
> +#define CSR_HPMCOUNTER28               0xc1c
> +#define CSR_HPMCOUNTER29               0xc1d
> +#define CSR_HPMCOUNTER30               0xc1e
> +#define CSR_HPMCOUNTER31               0xc1f
> +#define CSR_CYCLEH                     0xc80
> +#define CSR_TIMEH                      0xc81
> +#define CSR_INSTRETH                   0xc82
> +#define CSR_HPMCOUNTER3H               0xc83
> +#define CSR_HPMCOUNTER4H               0xc84
> +#define CSR_HPMCOUNTER5H               0xc85
> +#define CSR_HPMCOUNTER6H               0xc86
> +#define CSR_HPMCOUNTER7H               0xc87
> +#define CSR_HPMCOUNTER8H               0xc88
> +#define CSR_HPMCOUNTER9H               0xc89
> +#define CSR_HPMCOUNTER10H              0xc8a
> +#define CSR_HPMCOUNTER11H              0xc8b
> +#define CSR_HPMCOUNTER12H              0xc8c
> +#define CSR_HPMCOUNTER13H              0xc8d
> +#define CSR_HPMCOUNTER14H              0xc8e
> +#define CSR_HPMCOUNTER15H              0xc8f
> +#define CSR_HPMCOUNTER16H              0xc90
> +#define CSR_HPMCOUNTER17H              0xc91
> +#define CSR_HPMCOUNTER18H              0xc92
> +#define CSR_HPMCOUNTER19H              0xc93
> +#define CSR_HPMCOUNTER20H              0xc94
> +#define CSR_HPMCOUNTER21H              0xc95
> +#define CSR_HPMCOUNTER22H              0xc96
> +#define CSR_HPMCOUNTER23H              0xc97
> +#define CSR_HPMCOUNTER24H              0xc98
> +#define CSR_HPMCOUNTER25H              0xc99
> +#define CSR_HPMCOUNTER26H              0xc9a
> +#define CSR_HPMCOUNTER27H              0xc9b
> +#define CSR_HPMCOUNTER28H              0xc9c
> +#define CSR_HPMCOUNTER29H              0xc9d
> +#define CSR_HPMCOUNTER30H              0xc9e
> +#define CSR_HPMCOUNTER31H              0xc9f
> +
> +/* ===== Supervisor-level CSRs ===== */
> +
> +/* Supervisor Trap Setup */
> +#define CSR_SSTATUS                    0x100
> +#define CSR_SIE                                0x104
> +#define CSR_STVEC                      0x105
> +#define CSR_SCOUNTEREN                 0x106
> +
> +/* Supervisor Configuration */
> +#define CSR_SENVCFG                    0x10a
> +
> +/* Supervisor Trap Handling */
> +#define CSR_SSCRATCH                   0x140
> +#define CSR_SEPC                       0x141
> +#define CSR_SCAUSE                     0x142
> +#define CSR_STVAL                      0x143
> +#define CSR_SIP                                0x144
> +
> +/* Sstc extension */
> +#define CSR_STIMECMP                   0x14D
> +#define CSR_STIMECMPH                  0x15D
> +
> +/* Supervisor Protection and Translation */
> +#define CSR_SATP                       0x180
> +
> +/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
> +#define CSR_SISELECT                   0x150
> +#define CSR_SIREG                      0x151
> +
> +/* Supervisor-Level Interrupts (AIA) */
> +#define CSR_STOPEI                     0x15c
> +#define CSR_STOPI                      0xdb0
> +
> +/* Supervisor-Level High-Half CSRs (AIA) */
> +#define CSR_SIEH                       0x114
> +#define CSR_SIPH                       0x154
> +
> +/* Supervisor stateen CSRs */
> +#define CSR_SSTATEEN0                  0x10C
> +#define CSR_SSTATEEN1                  0x10D
> +#define CSR_SSTATEEN2                  0x10E
> +#define CSR_SSTATEEN3                  0x10F
> +
> +/* ===== Hypervisor-level CSRs ===== */
> +
> +/* Hypervisor Trap Setup (H-extension) */
> +#define CSR_HSTATUS                    0x600
> +#define CSR_HEDELEG                    0x602
> +#define CSR_HIDELEG                    0x603
> +#define CSR_HIE                                0x604
> +#define CSR_HCOUNTEREN                 0x606
> +#define CSR_HGEIE                      0x607
> +
> +/* Hypervisor Configuration */
> +#define CSR_HENVCFG                    0x60a
> +#define CSR_HENVCFGH                   0x61a
> +
> +/* Hypervisor Trap Handling (H-extension) */
> +#define CSR_HTVAL                      0x643
> +#define CSR_HIP                                0x644
> +#define CSR_HVIP                       0x645
> +#define CSR_HTINST                     0x64a
> +#define CSR_HGEIP                      0xe12
> +
> +/* Hypervisor Protection and Translation (H-extension) */
> +#define CSR_HGATP                      0x680
> +
> +/* Hypervisor Counter/Timer Virtualization Registers (H-extension) */
> +#define CSR_HTIMEDELTA                 0x605
> +#define CSR_HTIMEDELTAH                        0x615
> +
> +/* Virtual Supervisor Registers (H-extension) */
> +#define CSR_VSSTATUS                   0x200
> +#define CSR_VSIE                       0x204
> +#define CSR_VSTVEC                     0x205
> +#define CSR_VSSCRATCH                  0x240
> +#define CSR_VSEPC                      0x241
> +#define CSR_VSCAUSE                    0x242
> +#define CSR_VSTVAL                     0x243
> +#define CSR_VSIP                       0x244
> +#define CSR_VSATP                      0x280
> +
> +/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
> +#define CSR_HVIEN                      0x608
> +#define CSR_HVICTL                     0x609
> +#define CSR_HVIPRIO1                   0x646
> +#define CSR_HVIPRIO2                   0x647
> +
> +/* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA) */
> +#define CSR_VSISELECT                  0x250
> +#define CSR_VSIREG                     0x251
> +
> +/* VS-Level Interrupts (H-extension with AIA) */
> +#define CSR_VSTOPEI                    0x25c
> +#define CSR_VSTOPI                     0xeb0
> +
> +/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
> +#define CSR_HIDELEGH                   0x613
> +#define CSR_HVIENH                     0x618
> +#define CSR_HVIPH                      0x655
> +#define CSR_HVIPRIO1H                  0x656
> +#define CSR_HVIPRIO2H                  0x657
> +#define CSR_VSIEH                      0x214
> +#define CSR_VSIPH                      0x254
> +
> +/* Hypervisor stateen CSRs */
> +#define CSR_HSTATEEN0                  0x60C
> +#define CSR_HSTATEEN0H                 0x61C
> +#define CSR_HSTATEEN1                  0x60D
> +#define CSR_HSTATEEN1H                 0x61D
> +#define CSR_HSTATEEN2                  0x60E
> +#define CSR_HSTATEEN2H                 0x61E
> +#define CSR_HSTATEEN3                  0x60F
> +#define CSR_HSTATEEN3H                 0x61F
> +
> +/* ===== Machine-level CSRs ===== */
> +
> +/* Machine Information Registers */
> +#define CSR_MVENDORID                  0xf11
> +#define CSR_MARCHID                    0xf12
> +#define CSR_MIMPID                     0xf13
> +#define CSR_MHARTID                    0xf14
> +
> +/* Machine Trap Setup */
> +#define CSR_MSTATUS                    0x300
> +#define CSR_MISA                       0x301
> +#define CSR_MEDELEG                    0x302
> +#define CSR_MIDELEG                    0x303
> +#define CSR_MIE                                0x304
> +#define CSR_MTVEC                      0x305
> +#define CSR_MCOUNTEREN                 0x306
> +#define CSR_MSTATUSH                   0x310
> +
> +/* Machine Configuration */
> +#define CSR_MENVCFG                    0x30a
> +#define CSR_MENVCFGH                   0x31a
> +
> +/* Machine Trap Handling */
> +#define CSR_MSCRATCH                   0x340
> +#define CSR_MEPC                       0x341
> +#define CSR_MCAUSE                     0x342
> +#define CSR_MTVAL                      0x343
> +#define CSR_MIP                                0x344
> +#define CSR_MTINST                     0x34a
> +#define CSR_MTVAL2                     0x34b
> +
> +/* Machine Memory Protection */
> +#define CSR_PMPCFG0                    0x3a0
> +#define CSR_PMPCFG1                    0x3a1
> +#define CSR_PMPCFG2                    0x3a2
> +#define CSR_PMPCFG3                    0x3a3
> +#define CSR_PMPCFG4                    0x3a4
> +#define CSR_PMPCFG5                    0x3a5
> +#define CSR_PMPCFG6                    0x3a6
> +#define CSR_PMPCFG7                    0x3a7
> +#define CSR_PMPCFG8                    0x3a8
> +#define CSR_PMPCFG9                    0x3a9
> +#define CSR_PMPCFG10                   0x3aa
> +#define CSR_PMPCFG11                   0x3ab
> +#define CSR_PMPCFG12                   0x3ac
> +#define CSR_PMPCFG13                   0x3ad
> +#define CSR_PMPCFG14                   0x3ae
> +#define CSR_PMPCFG15                   0x3af
> +#define CSR_PMPADDR0                   0x3b0
> +#define CSR_PMPADDR1                   0x3b1
> +#define CSR_PMPADDR2                   0x3b2
> +#define CSR_PMPADDR3                   0x3b3
> +#define CSR_PMPADDR4                   0x3b4
> +#define CSR_PMPADDR5                   0x3b5
> +#define CSR_PMPADDR6                   0x3b6
> +#define CSR_PMPADDR7                   0x3b7
> +#define CSR_PMPADDR8                   0x3b8
> +#define CSR_PMPADDR9                   0x3b9
> +#define CSR_PMPADDR10                  0x3ba
> +#define CSR_PMPADDR11                  0x3bb
> +#define CSR_PMPADDR12                  0x3bc
> +#define CSR_PMPADDR13                  0x3bd
> +#define CSR_PMPADDR14                  0x3be
> +#define CSR_PMPADDR15                  0x3bf
> +#define CSR_PMPADDR16                  0x3c0
> +#define CSR_PMPADDR17                  0x3c1
> +#define CSR_PMPADDR18                  0x3c2
> +#define CSR_PMPADDR19                  0x3c3
> +#define CSR_PMPADDR20                  0x3c4
> +#define CSR_PMPADDR21                  0x3c5
> +#define CSR_PMPADDR22                  0x3c6
> +#define CSR_PMPADDR23                  0x3c7
> +#define CSR_PMPADDR24                  0x3c8
> +#define CSR_PMPADDR25                  0x3c9
> +#define CSR_PMPADDR26                  0x3ca
> +#define CSR_PMPADDR27                  0x3cb
> +#define CSR_PMPADDR28                  0x3cc
> +#define CSR_PMPADDR29                  0x3cd
> +#define CSR_PMPADDR30                  0x3ce
> +#define CSR_PMPADDR31                  0x3cf
> +#define CSR_PMPADDR32                  0x3d0
> +#define CSR_PMPADDR33                  0x3d1
> +#define CSR_PMPADDR34                  0x3d2
> +#define CSR_PMPADDR35                  0x3d3
> +#define CSR_PMPADDR36                  0x3d4
> +#define CSR_PMPADDR37                  0x3d5
> +#define CSR_PMPADDR38                  0x3d6
> +#define CSR_PMPADDR39                  0x3d7
> +#define CSR_PMPADDR40                  0x3d8
> +#define CSR_PMPADDR41                  0x3d9
> +#define CSR_PMPADDR42                  0x3da
> +#define CSR_PMPADDR43                  0x3db
> +#define CSR_PMPADDR44                  0x3dc
> +#define CSR_PMPADDR45                  0x3dd
> +#define CSR_PMPADDR46                  0x3de
> +#define CSR_PMPADDR47                  0x3df
> +#define CSR_PMPADDR48                  0x3e0
> +#define CSR_PMPADDR49                  0x3e1
> +#define CSR_PMPADDR50                  0x3e2
> +#define CSR_PMPADDR51                  0x3e3
> +#define CSR_PMPADDR52                  0x3e4
> +#define CSR_PMPADDR53                  0x3e5
> +#define CSR_PMPADDR54                  0x3e6
> +#define CSR_PMPADDR55                  0x3e7
> +#define CSR_PMPADDR56                  0x3e8
> +#define CSR_PMPADDR57                  0x3e9
> +#define CSR_PMPADDR58                  0x3ea
> +#define CSR_PMPADDR59                  0x3eb
> +#define CSR_PMPADDR60                  0x3ec
> +#define CSR_PMPADDR61                  0x3ed
> +#define CSR_PMPADDR62                  0x3ee
> +#define CSR_PMPADDR63                  0x3ef
> +
> +/* Machine Counters/Timers */
> +#define CSR_MCYCLE                     0xb00
> +#define CSR_MINSTRET                   0xb02
> +#define CSR_MHPMCOUNTER3               0xb03
> +#define CSR_MHPMCOUNTER4               0xb04
> +#define CSR_MHPMCOUNTER5               0xb05
> +#define CSR_MHPMCOUNTER6               0xb06
> +#define CSR_MHPMCOUNTER7               0xb07
> +#define CSR_MHPMCOUNTER8               0xb08
> +#define CSR_MHPMCOUNTER9               0xb09
> +#define CSR_MHPMCOUNTER10              0xb0a
> +#define CSR_MHPMCOUNTER11              0xb0b
> +#define CSR_MHPMCOUNTER12              0xb0c
> +#define CSR_MHPMCOUNTER13              0xb0d
> +#define CSR_MHPMCOUNTER14              0xb0e
> +#define CSR_MHPMCOUNTER15              0xb0f
> +#define CSR_MHPMCOUNTER16              0xb10
> +#define CSR_MHPMCOUNTER17              0xb11
> +#define CSR_MHPMCOUNTER18              0xb12
> +#define CSR_MHPMCOUNTER19              0xb13
> +#define CSR_MHPMCOUNTER20              0xb14
> +#define CSR_MHPMCOUNTER21              0xb15
> +#define CSR_MHPMCOUNTER22              0xb16
> +#define CSR_MHPMCOUNTER23              0xb17
> +#define CSR_MHPMCOUNTER24              0xb18
> +#define CSR_MHPMCOUNTER25              0xb19
> +#define CSR_MHPMCOUNTER26              0xb1a
> +#define CSR_MHPMCOUNTER27              0xb1b
> +#define CSR_MHPMCOUNTER28              0xb1c
> +#define CSR_MHPMCOUNTER29              0xb1d
> +#define CSR_MHPMCOUNTER30              0xb1e
> +#define CSR_MHPMCOUNTER31              0xb1f
> +#define CSR_MCYCLEH                    0xb80
> +#define CSR_MINSTRETH                  0xb82
> +#define CSR_MHPMCOUNTER3H              0xb83
> +#define CSR_MHPMCOUNTER4H              0xb84
> +#define CSR_MHPMCOUNTER5H              0xb85
> +#define CSR_MHPMCOUNTER6H              0xb86
> +#define CSR_MHPMCOUNTER7H              0xb87
> +#define CSR_MHPMCOUNTER8H              0xb88
> +#define CSR_MHPMCOUNTER9H              0xb89
> +#define CSR_MHPMCOUNTER10H             0xb8a
> +#define CSR_MHPMCOUNTER11H             0xb8b
> +#define CSR_MHPMCOUNTER12H             0xb8c
> +#define CSR_MHPMCOUNTER13H             0xb8d
> +#define CSR_MHPMCOUNTER14H             0xb8e
> +#define CSR_MHPMCOUNTER15H             0xb8f
> +#define CSR_MHPMCOUNTER16H             0xb90
> +#define CSR_MHPMCOUNTER17H             0xb91
> +#define CSR_MHPMCOUNTER18H             0xb92
> +#define CSR_MHPMCOUNTER19H             0xb93
> +#define CSR_MHPMCOUNTER20H             0xb94
> +#define CSR_MHPMCOUNTER21H             0xb95
> +#define CSR_MHPMCOUNTER22H             0xb96
> +#define CSR_MHPMCOUNTER23H             0xb97
> +#define CSR_MHPMCOUNTER24H             0xb98
> +#define CSR_MHPMCOUNTER25H             0xb99
> +#define CSR_MHPMCOUNTER26H             0xb9a
> +#define CSR_MHPMCOUNTER27H             0xb9b
> +#define CSR_MHPMCOUNTER28H             0xb9c
> +#define CSR_MHPMCOUNTER29H             0xb9d
> +#define CSR_MHPMCOUNTER30H             0xb9e
> +#define CSR_MHPMCOUNTER31H             0xb9f
> +
> +/* Machine Counter Setup */
> +#define CSR_MCOUNTINHIBIT              0x320
> +#define CSR_MHPMEVENT3                 0x323
> +#define CSR_MHPMEVENT4                 0x324
> +#define CSR_MHPMEVENT5                 0x325
> +#define CSR_MHPMEVENT6                 0x326
> +#define CSR_MHPMEVENT7                 0x327
> +#define CSR_MHPMEVENT8                 0x328
> +#define CSR_MHPMEVENT9                 0x329
> +#define CSR_MHPMEVENT10                        0x32a
> +#define CSR_MHPMEVENT11                        0x32b
> +#define CSR_MHPMEVENT12                        0x32c
> +#define CSR_MHPMEVENT13                        0x32d
> +#define CSR_MHPMEVENT14                        0x32e
> +#define CSR_MHPMEVENT15                        0x32f
> +#define CSR_MHPMEVENT16                        0x330
> +#define CSR_MHPMEVENT17                        0x331
> +#define CSR_MHPMEVENT18                        0x332
> +#define CSR_MHPMEVENT19                        0x333
> +#define CSR_MHPMEVENT20                        0x334
> +#define CSR_MHPMEVENT21                        0x335
> +#define CSR_MHPMEVENT22                        0x336
> +#define CSR_MHPMEVENT23                        0x337
> +#define CSR_MHPMEVENT24                        0x338
> +#define CSR_MHPMEVENT25                        0x339
> +#define CSR_MHPMEVENT26                        0x33a
> +#define CSR_MHPMEVENT27                        0x33b
> +#define CSR_MHPMEVENT28                        0x33c
> +#define CSR_MHPMEVENT29                        0x33d
> +#define CSR_MHPMEVENT30                        0x33e
> +#define CSR_MHPMEVENT31                        0x33f
> +
> +/* For RV32 */
> +#define CSR_MHPMEVENT3H                        0x723
> +#define CSR_MHPMEVENT4H                        0x724
> +#define CSR_MHPMEVENT5H                        0x725
> +#define CSR_MHPMEVENT6H                        0x726
> +#define CSR_MHPMEVENT7H                        0x727
> +#define CSR_MHPMEVENT8H                        0x728
> +#define CSR_MHPMEVENT9H                        0x729
> +#define CSR_MHPMEVENT10H               0x72a
> +#define CSR_MHPMEVENT11H               0x72b
> +#define CSR_MHPMEVENT12H               0x72c
> +#define CSR_MHPMEVENT13H               0x72d
> +#define CSR_MHPMEVENT14H               0x72e
> +#define CSR_MHPMEVENT15H               0x72f
> +#define CSR_MHPMEVENT16H               0x730
> +#define CSR_MHPMEVENT17H               0x731
> +#define CSR_MHPMEVENT18H               0x732
> +#define CSR_MHPMEVENT19H               0x733
> +#define CSR_MHPMEVENT20H               0x734
> +#define CSR_MHPMEVENT21H               0x735
> +#define CSR_MHPMEVENT22H               0x736
> +#define CSR_MHPMEVENT23H               0x737
> +#define CSR_MHPMEVENT24H               0x738
> +#define CSR_MHPMEVENT25H               0x739
> +#define CSR_MHPMEVENT26H               0x73a
> +#define CSR_MHPMEVENT27H               0x73b
> +#define CSR_MHPMEVENT28H               0x73c
> +#define CSR_MHPMEVENT29H               0x73d
> +#define CSR_MHPMEVENT30H               0x73e
> +#define CSR_MHPMEVENT31H               0x73f
> +
> +/* Counter Overflow CSR */
> +#define CSR_SCOUNTOVF                  0xda0
> +
> +/* Debug/Trace Registers */
> +#define CSR_TSELECT                    0x7a0
> +#define CSR_TDATA1                     0x7a1
> +#define CSR_TDATA2                     0x7a2
> +#define CSR_TDATA3                     0x7a3
> +
> +/* Debug Mode Registers */
> +#define CSR_DCSR                       0x7b0
> +#define CSR_DPC                                0x7b1
> +#define CSR_DSCRATCH0                  0x7b2
> +#define CSR_DSCRATCH1                  0x7b3
> +
> +/* Machine-Level Window to Indirectly Accessed Registers (AIA) */
> +#define CSR_MISELECT                   0x350
> +#define CSR_MIREG                      0x351
> +
> +/* Machine-Level Interrupts (AIA) */
> +#define CSR_MTOPEI                     0x35c
> +#define CSR_MTOPI                      0xfb0
> +
> +/* Virtual Interrupts for Supervisor Level (AIA) */
> +#define CSR_MVIEN                      0x308
> +#define CSR_MVIP                       0x309
> +
> +/* Smstateen extension registers */
> +/* Machine stateen CSRs */
> +#define CSR_MSTATEEN0                  0x30C
> +#define CSR_MSTATEEN0H                 0x31C
> +#define CSR_MSTATEEN1                  0x30D
> +#define CSR_MSTATEEN1H                 0x31D
> +#define CSR_MSTATEEN2                  0x30E
> +#define CSR_MSTATEEN2H                 0x31E
> +#define CSR_MSTATEEN3                  0x30F
> +#define CSR_MSTATEEN3H                 0x31F
> +
> +/* Machine-Level High-Half CSRs (AIA) */
> +#define CSR_MIDELEGH                   0x313
> +#define CSR_MIEH                       0x314
> +#define CSR_MVIENH                     0x318
> +#define CSR_MVIPH                      0x319
> +#define CSR_MIPH                       0x354
> +
> +/* ===== Trap/Exception Causes ===== */
> +
> +/* Exception cause high bit - is an interrupt if set */
> +#define CAUSE_IRQ_FLAG                 (_UL(1) << (__riscv_xlen - 1))
> +
> +#define CAUSE_MISALIGNED_FETCH         0x0
> +#define CAUSE_FETCH_ACCESS             0x1
> +#define CAUSE_ILLEGAL_INSTRUCTION      0x2
> +#define CAUSE_BREAKPOINT               0x3
> +#define CAUSE_MISALIGNED_LOAD          0x4
> +#define CAUSE_LOAD_ACCESS              0x5
> +#define CAUSE_MISALIGNED_STORE         0x6
> +#define CAUSE_STORE_ACCESS             0x7
> +#define CAUSE_USER_ECALL               0x8
> +#define CAUSE_SUPERVISOR_ECALL         0x9
> +#define CAUSE_VIRTUAL_SUPERVISOR_ECALL 0xa
> +#define CAUSE_MACHINE_ECALL            0xb
> +#define CAUSE_FETCH_PAGE_FAULT         0xc
> +#define CAUSE_LOAD_PAGE_FAULT          0xd
> +#define CAUSE_STORE_PAGE_FAULT         0xf
> +#define CAUSE_FETCH_GUEST_PAGE_FAULT   0x14
> +#define CAUSE_LOAD_GUEST_PAGE_FAULT    0x15
> +#define CAUSE_VIRTUAL_INST_FAULT       0x16
> +#define CAUSE_STORE_GUEST_PAGE_FAULT   0x17
> +
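Inline note, not part of the patch: a trap handler typically splits the cause CSR into the interrupt flag (the top bit, per CAUSE_IRQ_FLAG) and the cause code in the remaining bits. A minimal sketch, assuming RV64 so `unsigned long` is 64 bits (helper names hypothetical):

```c
#include <assert.h>

/* Values duplicated from the patch above, assuming __riscv_xlen == 64. */
#define CAUSE_IRQ_FLAG         (1UL << 63)
#define CAUSE_SUPERVISOR_ECALL 0x9UL
#define IRQ_S_TIMER            5UL

/* Hypothetical helpers: classify a raw scause value. */
static int scause_is_irq(unsigned long scause)
{
    return (scause & CAUSE_IRQ_FLAG) != 0;
}

static unsigned long scause_code(unsigned long scause)
{
    return scause & ~CAUSE_IRQ_FLAG;
}
```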
> +/* Common defines for all smstateen */
> +#define SMSTATEEN_MAX_COUNT            4
> +#define SMSTATEEN0_CS_SHIFT            0
> +#define SMSTATEEN0_CS                  (_ULL(1) << SMSTATEEN0_CS_SHIFT)
> +#define SMSTATEEN0_FCSR_SHIFT          1
> +#define SMSTATEEN0_FCSR                        (_ULL(1) << SMSTATEEN0_FCSR_SHIFT)
> +#define SMSTATEEN0_IMSIC_SHIFT         58
> +#define SMSTATEEN0_IMSIC               (_ULL(1) << SMSTATEEN0_IMSIC_SHIFT)
> +#define SMSTATEEN0_AIA_SHIFT           59
> +#define SMSTATEEN0_AIA                 (_ULL(1) << SMSTATEEN0_AIA_SHIFT)
> +#define SMSTATEEN0_SVSLCT_SHIFT        60
> +#define SMSTATEEN0_SVSLCT              (_ULL(1) << SMSTATEEN0_SVSLCT_SHIFT)
> +#define SMSTATEEN0_HSENVCFG_SHIFT      62
> +#define SMSTATEEN0_HSENVCFG            (_ULL(1) << SMSTATEEN0_HSENVCFG_SHIFT)
> +#define SMSTATEEN_STATEN_SHIFT         63
> +#define SMSTATEEN_STATEN               (_ULL(1) << SMSTATEEN_STATEN_SHIFT)
> +
> +/* ===== Instruction Encodings ===== */
> +
> +#define INSN_MATCH_LB                  0x3
> +#define INSN_MASK_LB                   0x707f
> +#define INSN_MATCH_LH                  0x1003
> +#define INSN_MASK_LH                   0x707f
> +#define INSN_MATCH_LW                  0x2003
> +#define INSN_MASK_LW                   0x707f
> +#define INSN_MATCH_LD                  0x3003
> +#define INSN_MASK_LD                   0x707f
> +#define INSN_MATCH_LBU                 0x4003
> +#define INSN_MASK_LBU                  0x707f
> +#define INSN_MATCH_LHU                 0x5003
> +#define INSN_MASK_LHU                  0x707f
> +#define INSN_MATCH_LWU                 0x6003
> +#define INSN_MASK_LWU                  0x707f
> +#define INSN_MATCH_SB                  0x23
> +#define INSN_MASK_SB                   0x707f
> +#define INSN_MATCH_SH                  0x1023
> +#define INSN_MASK_SH                   0x707f
> +#define INSN_MATCH_SW                  0x2023
> +#define INSN_MASK_SW                   0x707f
> +#define INSN_MATCH_SD                  0x3023
> +#define INSN_MASK_SD                   0x707f
> +
> +#define INSN_MATCH_FLW                 0x2007
> +#define INSN_MASK_FLW                  0x707f
> +#define INSN_MATCH_FLD                 0x3007
> +#define INSN_MASK_FLD                  0x707f
> +#define INSN_MATCH_FLQ                 0x4007
> +#define INSN_MASK_FLQ                  0x707f
> +#define INSN_MATCH_FSW                 0x2027
> +#define INSN_MASK_FSW                  0x707f
> +#define INSN_MATCH_FSD                 0x3027
> +#define INSN_MASK_FSD                  0x707f
> +#define INSN_MATCH_FSQ                 0x4027
> +#define INSN_MASK_FSQ                  0x707f
> +
> +#define INSN_MATCH_C_LD                0x6000
> +#define INSN_MASK_C_LD                 0xe003
> +#define INSN_MATCH_C_SD                0xe000
> +#define INSN_MASK_C_SD                 0xe003
> +#define INSN_MATCH_C_LW                0x4000
> +#define INSN_MASK_C_LW                 0xe003
> +#define INSN_MATCH_C_SW                0xc000
> +#define INSN_MASK_C_SW                 0xe003
> +#define INSN_MATCH_C_LDSP              0x6002
> +#define INSN_MASK_C_LDSP               0xe003
> +#define INSN_MATCH_C_SDSP              0xe002
> +#define INSN_MASK_C_SDSP               0xe003
> +#define INSN_MATCH_C_LWSP              0x4002
> +#define INSN_MASK_C_LWSP               0xe003
> +#define INSN_MATCH_C_SWSP              0xc002
> +#define INSN_MASK_C_SWSP               0xe003
> +
> +#define INSN_MATCH_C_FLD               0x2000
> +#define INSN_MASK_C_FLD                0xe003
> +#define INSN_MATCH_C_FLW               0x6000
> +#define INSN_MASK_C_FLW                0xe003
> +#define INSN_MATCH_C_FSD               0xa000
> +#define INSN_MASK_C_FSD                0xe003
> +#define INSN_MATCH_C_FSW               0xe000
> +#define INSN_MASK_C_FSW                0xe003
> +#define INSN_MATCH_C_FLDSP             0x2002
> +#define INSN_MASK_C_FLDSP              0xe003
> +#define INSN_MATCH_C_FSDSP             0xa002
> +#define INSN_MASK_C_FSDSP              0xe003
> +#define INSN_MATCH_C_FLWSP             0x6002
> +#define INSN_MASK_C_FLWSP              0xe003
> +#define INSN_MATCH_C_FSWSP             0xe002
> +#define INSN_MASK_C_FSWSP              0xe003
> +
> +#define INSN_MASK_WFI                  0xffffff00
> +#define INSN_MATCH_WFI                 0x10500000
> +
> +#define INSN_MASK_FENCE_TSO            0xffffffff
> +#define INSN_MATCH_FENCE_TSO           0x8330000f
> +
> +#if __riscv_xlen == 64
> +
> +/* 64-bit read for VS-stage address translation (RV64) */
> +#define INSN_PSEUDO_VS_LOAD            0x00003000
> +
> +/* 64-bit write for VS-stage address translation (RV64) */
> +#define INSN_PSEUDO_VS_STORE           0x00003020
> +
> +#elif __riscv_xlen == 32
> +
> +/* 32-bit read for VS-stage address translation (RV32) */
> +#define INSN_PSEUDO_VS_LOAD            0x00002000
> +
> +/* 32-bit write for VS-stage address translation (RV32) */
> +#define INSN_PSEUDO_VS_STORE           0x00002020
> +
> +#else
> +#error "Unexpected __riscv_xlen"
> +#endif
> +
> +#define INSN_16BIT_MASK                0x3
> +#define INSN_32BIT_MASK                0x1c
> +
> +#define INSN_IS_16BIT(insn)            \
> +       (((insn) & INSN_16BIT_MASK) != INSN_16BIT_MASK)
> +#define INSN_IS_32BIT(insn)            \
> +       (((insn) & INSN_16BIT_MASK) == INSN_16BIT_MASK && \
> +        ((insn) & INSN_32BIT_MASK) != INSN_32BIT_MASK)
> +
> +#define INSN_LEN(insn)                 (INSN_IS_16BIT(insn) ? 2 : 4)
> +
> +#if __riscv_xlen == 64
> +#define LOG_REGBYTES                   3
> +#else
> +#define LOG_REGBYTES                   2
> +#endif
> +#define REGBYTES                       (1 << LOG_REGBYTES)
> +
> +#define SH_RD                          7
> +#define SH_RS1                         15
> +#define SH_RS2                         20
> +#define SH_RS2C                        2
> +
> +#define RV_X(x, s, n)                  (((x) >> (s)) & ((1 << (n)) - 1))
> +#define RVC_LW_IMM(x)                  ((RV_X(x, 6, 1) << 2) | \
> +                                        (RV_X(x, 10, 3) << 3) | \
> +                                        (RV_X(x, 5, 1) << 6))
> +#define RVC_LD_IMM(x)                  ((RV_X(x, 10, 3) << 3) | \
> +                                        (RV_X(x, 5, 2) << 6))
> +#define RVC_LWSP_IMM(x)                ((RV_X(x, 4, 3) << 2) | \
> +                                        (RV_X(x, 12, 1) << 5) | \
> +                                        (RV_X(x, 2, 2) << 6))
> +#define RVC_LDSP_IMM(x)                ((RV_X(x, 5, 2) << 3) | \
> +                                        (RV_X(x, 12, 1) << 5) | \
> +                                        (RV_X(x, 2, 3) << 6))
> +#define RVC_SWSP_IMM(x)                ((RV_X(x, 9, 4) << 2) | \
> +                                        (RV_X(x, 7, 2) << 6))
> +#define RVC_SDSP_IMM(x)                ((RV_X(x, 10, 3) << 3) | \
> +                                        (RV_X(x, 7, 3) << 6))
> +#define RVC_RS1S(insn)                 (8 + RV_X(insn, SH_RD, 3))
> +#define RVC_RS2S(insn)                 (8 + RV_X(insn, SH_RS2C, 3))
> +#define RVC_RS2(insn)                  RV_X(insn, SH_RS2C, 5)
> +
> +#define SHIFT_RIGHT(x, y)              \
> +       ((y) < 0 ? ((x) << -(y)) : ((x) >> (y)))
> +
> +#define REG_MASK                       \
> +       ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES))
> +
> +#define REG_OFFSET(insn, pos)          \
> +       (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK)
> +
> +#define REG_PTR(insn, pos, regs)       \
> +       (unsigned long *)((unsigned long)(regs) + REG_OFFSET(insn, pos))
> +
> +#define GET_RM(insn)                   (((insn) >> 12) & 7)
> +
> +#define GET_RS1(insn, regs)            (*REG_PTR(insn, SH_RS1, regs))
> +#define GET_RS2(insn, regs)            (*REG_PTR(insn, SH_RS2, regs))
> +#define GET_RS1S(insn, regs)           (*REG_PTR(RVC_RS1S(insn), 0, regs))
> +#define GET_RS2S(insn, regs)           (*REG_PTR(RVC_RS2S(insn), 0, regs))
> +#define GET_RS2C(insn, regs)           (*REG_PTR(insn, SH_RS2C, regs))
> +#define GET_SP(regs)                   (*REG_PTR(2, 0, regs))
> +#define SET_RD(insn, regs, val)        (*REG_PTR(insn, SH_RD, regs) = (val))
> +#define IMM_I(insn)                    ((int32_t)(insn) >> 20)
> +#define IMM_S(insn)                    (((int32_t)(insn) >> 25 << 5) | \
> +                                        (int32_t)(((insn) >> 7) & 0x1f))
> +#define MASK_FUNCT3                    0x7000
> +
> +/* clang-format on */
> +
> +#endif
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 13:40:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 13:40:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.486999.754467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUOl-0004B9-Uj; Mon, 30 Jan 2023 13:40:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 486999.754467; Mon, 30 Jan 2023 13:40:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUOl-0004B2-S2; Mon, 30 Jan 2023 13:40:39 +0000
Received: by outflank-mailman (input) for mailman id 486999;
 Mon, 30 Jan 2023 13:40:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMUOl-0004Aw-0Z
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 13:40:39 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2087.outbound.protection.outlook.com [40.107.13.87])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac271747-a0a3-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 14:40:35 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8374.eurprd04.prod.outlook.com (2603:10a6:102:1bd::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Mon, 30 Jan
 2023 13:40:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 13:40:34 +0000
X-Inumbo-ID: ac271747-a0a3-11ed-b8d1-410ff93cb8f0
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <72c88d99-5ddb-f156-e768-83d1861016f4@suse.com>
Date: Mon, 30 Jan 2023 14:40:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Stefano Stabellini <stefano.stabellini@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, "julien@xen.org"
 <julien@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230125205735.2662514-1-sstabellini@kernel.org>
 <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com>
 <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop>
 <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
 <alpine.DEB.2.22.394.2301261041440.1978264@ubuntu-linux-20-04-desktop>
 <db97da84-5f23-e7ed-119b-74aed02fb573@suse.com>
 <alpine.DEB.2.22.394.2301271016360.1978264@ubuntu-linux-20-04-desktop>
 <03ce9f48-191e-b1b5-a3b2-8b769aa8feeb@suse.com>
 <2733D8BE-CCA5-4322-BB9B-1D4318378525@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2733D8BE-CCA5-4322-BB9B-1D4318378525@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0132.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 30.01.2023 10:32, Luca Fancellu wrote:
> 
> 
>> On 30 Jan 2023, at 07:33, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 27.01.2023 19:33, Stefano Stabellini wrote:
>>> On Fri, 27 Jan 2023, Jan Beulich wrote:
>>>> On 26.01.2023 19:54, Stefano Stabellini wrote:
>>>> Looking back at the sheet, it says "rule already followed by
>>>> the community in most cases" which I assume was based on there being
>>>> only very few violations that are presently reported. Now that we've
>>>> found the frame_table[] issue, I'm inclined to say that the statement
>>>> was put there by mistake (due to that oversight).
>>>
>>> cppcheck is unable to find violations; we know cppcheck has limitations
>>> and that's OK.
>>>
>>> Eclair is excellent and finds violations (including the frame_table[]
>>> issue you mentioned), but currently it doesn't read configs from xen.git
>>> and we cannot run a test to see if adding a couple of deviations for 2
>>> macros removes most of the violations. If we want to use Eclair as a
>>> reference (could be a good idea) then I think we need a better
>>> integration. I'll talk to Roberto and see if we can arrange something
>>> better.
>>>
>>> I am writing this with the assumption that if I could show that, as an
>>> example, adding 2 deviations reduces the Eclair violations down to less
>>> than 10, then we could adopt the rule. Would that be acceptable, in
>>> your opinion, as a process?
>>
>> Hmm, to be quite honest: Not sure. Having noticed the oversight of the
>> frame_table[] issue makes me wonder how much else may be missed in this
>> same area (18.1, 18.2, and 18.3).
> 
> Hi Jan,
> 
> I think I recall the frame_table[] issue, but when I looked into the Eclair reports
> to understand it better I was unable to find it. Do you recall where the tool was
> complaining about the 18.2 violation related to frame_table[]?

I think you mean to ask Stefano instead? I have no pointers into
what Eclair may or may not have reported at any point in time.

Jan

> Any notes or links are appreciated; maybe we could speak with Roberto to understand
> it better, because I checked with Coverity and I was unable to link any 18.2 findings
> with the symbol frame_table[] (however, I might be a bit lost in all the macros).
> 
> Thank you.
> 
> Cheers,
> Luca
> 
> 
>>
>> Jan
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 13:50:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 13:50:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487003.754477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUXw-0005oD-RZ; Mon, 30 Jan 2023 13:50:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487003.754477; Mon, 30 Jan 2023 13:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUXw-0005o6-Oj; Mon, 30 Jan 2023 13:50:08 +0000
Received: by outflank-mailman (input) for mailman id 487003;
 Mon, 30 Jan 2023 13:50:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMUXv-0005jQ-II
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 13:50:07 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2058.outbound.protection.outlook.com [40.107.6.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 00bd49ce-a0a5-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 14:50:06 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9283.eurprd04.prod.outlook.com (2603:10a6:10:36d::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.25; Mon, 30 Jan
 2023 13:50:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 13:50:03 +0000
X-Inumbo-ID: 00bd49ce-a0a5-11ed-9ec0-891035b88211
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bdb8c6a8-a677-9bef-c819-904bd944e6da@suse.com>
Date: Mon, 30 Jan 2023 14:50:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
 <75328420-0fbd-92ae-40c7-9fee1c31c907@suse.com>
 <792bc4533d3604acd4321b4b15965adec22431a4.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <792bc4533d3604acd4321b4b15965adec22431a4.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0197.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 30.01.2023 12:54, Oleksii wrote:
> Hi Jan,
> 
> On Fri, 2023-01-27 at 15:24 +0100, Jan Beulich wrote:
>> On 27.01.2023 14:59, Oleksii Kurochko wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/processor.h
>>> @@ -0,0 +1,82 @@
>>> +/* SPDX-License-Identifier: MIT */
>>> +/*****************************************************************
>>> *************
>>> + *
>>> + * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
>>> + * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@gmail.com>
>>> + * Copyright 2023 (C) Vates
>>> + *
>>> + */
>>> +
>>> +#ifndef _ASM_RISCV_PROCESSOR_H
>>> +#define _ASM_RISCV_PROCESSOR_H
>>> +
>>> +#ifndef __ASSEMBLY__
>>> +
>>> +/* On stack VCPU state */
>>> +struct cpu_user_regs
>>> +{
>>> +    unsigned long zero;
>>> +    unsigned long ra;
>>> +    unsigned long sp;
>>> +    unsigned long gp;
>>> +    unsigned long tp;
>>> +    unsigned long t0;
>>> +    unsigned long t1;
>>> +    unsigned long t2;
>>> +    unsigned long s0;
>>> +    unsigned long s1;
>>> +    unsigned long a0;
>>> +    unsigned long a1;
>>> +    unsigned long a2;
>>> +    unsigned long a3;
>>> +    unsigned long a4;
>>> +    unsigned long a5;
>>> +    unsigned long a6;
>>> +    unsigned long a7;
>>> +    unsigned long s2;
>>> +    unsigned long s3;
>>> +    unsigned long s4;
>>> +    unsigned long s5;
>>> +    unsigned long s6;
>>> +    unsigned long s7;
>>> +    unsigned long s8;
>>> +    unsigned long s9;
>>> +    unsigned long s10;
>>> +    unsigned long s11;
>>> +    unsigned long t3;
>>> +    unsigned long t4;
>>> +    unsigned long t5;
>>> +    unsigned long t6;
>>> +    unsigned long sepc;
>>> +    unsigned long sstatus;
>>> +    /* pointer to previous stack_cpu_regs */
>>> +    unsigned long pregs;
>>> +};
>>
>> Just to restate what I said on the earlier version: we have a struct of
>> this name in the public interface for x86. Besides the confusion about
>> re-using the name for something private, I'd still like to understand
>> what the public interface plans are. This is specifically because I
>> think it would be better to re-use suitable public interface structs
>> internally where possible. But that of course requires spelling out
>> such parts of the public interface first.
>>
> I am not sure that I get your point here.
> I grepped a little and found that each architecture declares this
> structure inside its arch-specific folder.
> 
> Mostly, cpu_user_regs is used to save/restore the current CPU state
> during traps (exceptions/interrupts) and in context_switch().

Arm effectively duplicates the public interface struct vcpu_guest_core_regs
and the internal struct cpu_user_regs (and this goes as far as also
duplicating the __DECL_REG() helper). Personally I find such duplication
odd, at first glance at least; maybe there is a specific reason for this
on Arm. But whether the public interface struct can be re-used can likely
only be known once it has been spelled out.

> Also, some registers are modified during construction of a domain.
> Therefore I prefer to see the arch-specific register names here instead
> of common ones.

I'm not sure what meaning of "common" you intend here. Surely register names
want to be arch-specific, and hence can't be "common" with other arches.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 13:54:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 13:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487008.754486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUc6-0006Pn-CC; Mon, 30 Jan 2023 13:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487008.754486; Mon, 30 Jan 2023 13:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMUc6-0006Pg-9K; Mon, 30 Jan 2023 13:54:26 +0000
Received: by outflank-mailman (input) for mailman id 487008;
 Mon, 30 Jan 2023 13:54:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nJND=53=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMUc4-0006Pa-8w
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 13:54:24 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2078.outbound.protection.outlook.com [40.107.241.78])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9938863a-a0a5-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 14:54:22 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8636.eurprd04.prod.outlook.com (2603:10a6:20b:43f::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Mon, 30 Jan
 2023 13:54:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.036; Mon, 30 Jan 2023
 13:54:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9938863a-a0a5-11ed-b8d1-410ff93cb8f0
Message-ID: <1d03a5bc-f02b-fdb0-e9b1-d6e9055cadf4@suse.com>
Date: Mon, 30 Jan 2023 14:54:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: Proposal for consistent Kconfig usage by the hypervisor build
 system
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d77c1a7a-9d15-491d-38fa-99739f20bebd@suse.com>
 <5a53d16a-8842-457b-612a-a3623a3a98ed@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5a53d16a-8842-457b-612a-a3623a3a98ed@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0109.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8636:EE_
X-MS-Office365-Filtering-Correlation-Id: 8f2230bd-cca1-4477-8aa1-08db02c97c88
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f2230bd-cca1-4477-8aa1-08db02c97c88
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jan 2023 13:54:20.5468
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TVWmEyt5HXc5A+LGMTjQaNEchnyyFl3zKn61cl06pQ5gnR5jZscp2fJNJB09w0aC7c/nD45wVckYXuycV8y/0A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8636

On 30.01.2023 13:27, Julien Grall wrote:
> On 12/01/2023 16:52, Jan Beulich wrote:
>> It was additionally suggested that, for a better user experience, unmet
>> dependencies which are known to result in build failures (which at times may be
>> hard to associate back with the original cause) would be re-checked by
>> Makefile-based logic, leading to an early build failure with a comprehensible error
>> message.  Personally I'd prefer this to be just warnings (first and foremost to
>> avoid failing the build just because of a broken or stale check), but I can see
>> that they might be overlooked when there's a lot of other output. 
> 
> If we wanted the Makefile to check the available features, then I would 
> prefer an early error rather than warning. That said...
> 
>> In any event
>> we may want to try to figure an approach which would make sufficiently sure that
>> Makefile and Kconfig checks don't go out of sync.
> 
> ... this is indeed a concern. How incomprehensible would the error be if
> we didn't check it in the Makefile?

That'll depend on the particular feature / functionality, and might range from
very obvious and easy to diagnose to very well obfuscated.
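
For concreteness, the Makefile-level re-check discussed above could take a shape like the following. This is a minimal hypothetical sketch, not Xen's actual build logic, and CONFIG_FOO / CONFIG_BAR are made-up symbols standing in for a real dependency pair:

```make
# Hypothetical early sanity check: warn (rather than fail) when a
# Kconfig dependency known to break the build later is unmet.
ifeq ($(CONFIG_FOO),y)
ifneq ($(CONFIG_BAR),y)
$(warning CONFIG_FOO=y without CONFIG_BAR=y; the build is likely to fail later)
endif
endif
```

Switching `$(warning ...)` to `$(error ...)` gives the early hard failure Julien prefers; either way the check duplicates the Kconfig dependency, which is exactly the keep-in-sync concern raised in the thread.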

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 14:51:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 14:51:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487025.754500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMVV6-0005Mo-Kp; Mon, 30 Jan 2023 14:51:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487025.754500; Mon, 30 Jan 2023 14:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMVV6-0005Mh-ID; Mon, 30 Jan 2023 14:51:16 +0000
Received: by outflank-mailman (input) for mailman id 487025;
 Mon, 30 Jan 2023 14:51:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SNTu=53=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pMVV5-0005Mb-GU
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 14:51:15 +0000
Received: from ppsw-42.srv.uis.cam.ac.uk (ppsw-42.srv.uis.cam.ac.uk
 [131.111.8.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 886e52e4-a0ad-11ed-b8d1-410ff93cb8f0;
 Mon, 30 Jan 2023 15:51:09 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:52714)
 by ppsw-42.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.138]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pMVUu-000RxM-EY (Exim 4.96) (return-path <amc96@srcf.net>);
 Mon, 30 Jan 2023 14:51:04 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 4EDAB1FBD6;
 Mon, 30 Jan 2023 14:51:04 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 886e52e4-a0ad-11ed-b8d1-410ff93cb8f0
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <9abcfc06-1401-cdb7-a1f1-670cd307a593@srcf.net>
Date: Mon, 30 Jan 2023 14:51:04 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Content-Language: en-GB
To: Henry Wang <Henry.Wang@arm.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Jan Beulich <JBeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
References: <20230120220004.7456-1-andrew.cooper3@citrix.com>
 <AS8PR08MB79918B0D0329A2B722B773EB92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Andrew Cooper <amc96@srcf.net>
Subject: Re: [PATCH] Changelog: Add details about new features for SPR
In-Reply-To: <AS8PR08MB79918B0D0329A2B722B773EB92CC9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 27/01/2023 11:40 am, Henry Wang wrote:
> Hi Andrew,
>
>> -----Original Message-----
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Subject: [PATCH] Changelog: Add details about new features for SPR
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Thanks for remembering this :)
>
> Acked-by: Henry Wang <Henry.Wang@arm.com>

Thanks.

I'll commit this once OSSTest is unblocked.

>
>> ---
>> A reminder to everyone, write the changelog as it happens, rather than
>> scrambling to remember 8 months of development just as the release is
>> happening.
> I wonder if there is a way to automate this in our CI so we can avoid
> forgetting it. But currently I am not sure the solution I have in mind
> is simple enough... I will keep this issue in mind and will try to
> come back with some solutions.

The automated version is `git log $PREV_RELEASE > changelog.log`, and
this is very deliberately not that.

It needs the maintainers / committers to keep "interesting user-visible
changes" in mind at some point after the patches have gone in, are
logically complete, and have been around long enough that major
catastrophes (i.e. those liable to incur a full revert) are likely to
have happened.

But I would like to stress: while it is the Release Maintainer's job to
make sure this gets done, it is not the Release Maintainer's job to write
it.  That is an unreasonable burden.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 15:06:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 15:06:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487048.754527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMVjv-0007YU-4c; Mon, 30 Jan 2023 15:06:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487048.754527; Mon, 30 Jan 2023 15:06:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMVjv-0007YN-1b; Mon, 30 Jan 2023 15:06:35 +0000
Received: by outflank-mailman (input) for mailman id 487048;
 Mon, 30 Jan 2023 15:06:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMVjt-0007YD-Cf; Mon, 30 Jan 2023 15:06:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMVjt-0003mJ-7m; Mon, 30 Jan 2023 15:06:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMVjs-0007gw-O8; Mon, 30 Jan 2023 15:06:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMVjs-0001dP-NT; Mon, 30 Jan 2023 15:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176275-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176275: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-migrupgrade:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-pair:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-migrupgrade:host-install/dst_host(7):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start.2:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-ping-check-xen:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-xsm:guest-localmigrate/x10:fail:heisenbug
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 15:06:32 +0000

flight 176275 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176275/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>             broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 build-armhf                     <job status>                 broken
 test-amd64-amd64-xl             <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 test-amd64-amd64-xl-rtds        <job status>                 broken  in 176270
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>           broken in 176270
 test-amd64-amd64-migrupgrade    <job status>                 broken  in 176272
 test-amd64-i386-pair            <job status>                 broken  in 176272
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 176272
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-ws16-amd64 5 host-install(5) broken in 176270 pass in 176275
 test-amd64-amd64-xl-rtds     5 host-install(5) broken in 176270 pass in 176275
 test-amd64-i386-pair 7 host-install/dst_host(7) broken in 176272 pass in 176275
 test-amd64-amd64-migrupgrade 7 host-install/dst_host(7) broken in 176272 pass in 176275
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 176272 pass in 176275
 test-amd64-amd64-xl           5 host-install(5)          broken pass in 176272
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken pass in 176272
 test-amd64-amd64-xl-qemuu-ovmf-amd64  5 host-install(5)  broken pass in 176272
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 21 guest-start.2 fail in 176261 pass in 176275
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 176270 pass in 176275
 test-amd64-amd64-xl-rtds     10 host-ping-check-xen        fail pass in 176261
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 176265
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot            fail pass in 176270
 test-amd64-amd64-xl-xsm      20 guest-localmigrate/x10     fail pass in 176272

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken like 175437
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 176270 like 175447
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1   3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl           3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate           starved in 176270 n/a
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   39 days
Testing same since   176224  2023-01-26 22:14:43 Z    3 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  broken  
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         broken  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job build-armhf broken
broken-job test-amd64-amd64-xl broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-step test-amd64-amd64-xl host-install(5)
broken-step test-amd64-i386-xl-qemuu-debianhvm-i386-xsm host-install(5)
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-ovmf-amd64 host-install(5)
broken-job test-amd64-amd64-xl-rtds broken
broken-job build-armhf broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job build-armhf broken
broken-job build-armhf broken
broken-job build-armhf broken
broken-job test-amd64-amd64-migrupgrade broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-i386-pair broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry has next pointing to the
    deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code takes a reference on next which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)
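
The hazard described in the commit message above can be sketched in a few lines: a "_safe" traversal caches the next pointer before the loop body runs, so it only protects against deleting the *current* entry. If the body deletes some other entry (as RELEASE/do_release can), the cached pointer may refer to freed memory. This is a minimal Python simulation with hypothetical names, not the actual xenstored code:

```python
# Minimal sketch of why list_for_each_entry_safe-style iteration is unsafe
# when the loop body deletes an entry other than the current one.
# Names (Conn, delete) are illustrative, not from xenstored.

class Conn:
    def __init__(self, name):
        self.name, self.next, self.freed = name, None, False

# Build the connections list: a -> b -> c
a, b, c = Conn("a"), Conn("b"), Conn("c")
a.next, b.next = b, c

def delete(prev, node):
    """Unlink `node` from the list and mark it freed."""
    prev.next = node.next
    node.freed = True

visited = []
cur = a
while cur is not None:
    nxt = cur.next            # the "_safe" variant caches next up front
    if cur.name == "a":
        delete(a, b)          # body deletes some OTHER entry (b), not cur
    visited.append((cur.name, cur.freed))
    cur = nxt                 # stale cached pointer: still refers to freed b

# The iteration steps onto the already-freed node; in C this is the
# use-after-free / double-free that tripped the talloc magic check.
assert ("b", True) in visited
```

Caching next only helps when the deleted entry is the one currently being visited; deleting any other entry requires restarting or re-reading the list, which is what the reverted-to code effectively does by taking a reference.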


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 15:10:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 15:10:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487059.754536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMVng-0000bb-Ou; Mon, 30 Jan 2023 15:10:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487059.754536; Mon, 30 Jan 2023 15:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMVng-0000bU-ME; Mon, 30 Jan 2023 15:10:28 +0000
Received: by outflank-mailman (input) for mailman id 487059;
 Mon, 30 Jan 2023 15:10:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SNTu=53=srcf.net=amc96@srs-se1.protection.inumbo.net>)
 id 1pMVnf-0000bM-3N
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 15:10:27 +0000
Received: from ppsw-33.srv.uis.cam.ac.uk (ppsw-33.srv.uis.cam.ac.uk
 [131.111.8.133]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3955c2e4-a0b0-11ed-9ec0-891035b88211;
 Mon, 30 Jan 2023 16:10:26 +0100 (CET)
Received: from hades.srcf.societies.cam.ac.uk ([131.111.179.67]:47388)
 by ppsw-33.srv.uis.cam.ac.uk (ppsw.cam.ac.uk [131.111.8.137]:25)
 with esmtps (TLS1.2:ECDHE-RSA-AES256-GCM-SHA384:256)
 id 1pMVnY-0009bd-SO (Exim 4.96) (return-path <amc96@srcf.net>);
 Mon, 30 Jan 2023 15:10:20 +0000
Received: from [10.80.2.8] (default-46-102-197-194.interdsl.co.uk
 [46.102.197.194]) (Authenticated sender: amc96)
 by hades.srcf.societies.cam.ac.uk (Postfix) with ESMTPSA id 88A0C1FBD6;
 Mon, 30 Jan 2023 15:10:20 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3955c2e4-a0b0-11ed-9ec0-891035b88211
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: https://help.uis.cam.ac.uk/email-scanner-virus
Message-ID: <4b454af5-925c-c95b-42a1-4e125265e3f4@srcf.net>
Date: Mon, 30 Jan 2023 15:10:20 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Subject: Re: [XEN PATCH v4 3/3] build: compat-xlat-header.py: optimisation to
 search for just '{' instead of [{}]
Content-Language: en-GB
To: Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230119152256.15832-1-anthony.perard@citrix.com>
 <20230119152256.15832-4-anthony.perard@citrix.com>
 <60df7795-8f0b-e0f2-a790-2e00c0d4db2a@citrix.com>
 <Y85hxvyTHa/nXZ9H@perard.uk.xensource.com>
From: Andrew Cooper <amc96@srcf.net>
In-Reply-To: <Y85hxvyTHa/nXZ9H@perard.uk.xensource.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 23/01/2023 10:30 am, Anthony PERARD wrote:
> On Fri, Jan 20, 2023 at 06:26:14PM +0000, Andrew Cooper wrote:
>> On 19/01/2023 3:22 pm, Anthony PERARD wrote:
>>> `fields` and `extrafields` always contain all the parts of a sub-struct, so
>>> when there is '}', there is always a '{' before it. Also, both are
>>> lists.
>>>
>>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>>> ---
>>>  xen/tools/compat-xlat-header.py | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/tools/compat-xlat-header.py b/xen/tools/compat-xlat-header.py
>>> index ae5c9f11c9..d0a864b68e 100644
>>> --- a/xen/tools/compat-xlat-header.py
>>> +++ b/xen/tools/compat-xlat-header.py
>>> @@ -105,7 +105,7 @@ def handle_field(prefix, name, id, type, fields):
>>>          else:
>>>              k = id.replace('.', '_')
>>>              print("%sXLAT_%s_HNDL_%s(_d_, _s_);" % (prefix, name, k), end='')
>>> -    elif not re_brackets.search(' '.join(fields)):
>>> +    elif not '{' in fields:
>>>          tag = ' '.join(fields)
>>>          tag = re.sub(r'\s*(struct|union)\s+(compat_)?(\w+)\s.*', '\\3', tag)
>>>          print(" \\")
>>> @@ -290,7 +290,7 @@ def build_body(name, tokens):
>>>      print(" \\\n} while (0)")
>>>  
>>>  def check_field(kind, name, field, extrafields):
>>> -    if not re_brackets.search(' '.join(extrafields)):
>>> +    if not '{' in extrafields:
>>>          print("; \\")
>>>          if len(extrafields) != 0:
>>>              for token in extrafields:
>> These are the only two users of re_brackets, aren't they?  In which case
>> you should drop the re.compile() too.
> Indeed, I missed that; we can drop re_brackets.

I've folded this deletion and queued the series for when OSSTest gets
unblocked.

~Andrew
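
The equivalence the patch relies on can be checked in isolation: since `fields`/`extrafields` are token lists in which a `}` never appears without a preceding `{`, a plain list-membership test for `'{'` gives the same answer as the old `[{}]` regex over the joined string. A small sketch with hypothetical token lists (not taken from the script's real parser output):

```python
import re

# Hypothetical token lists, assuming braces are standalone tokens as in
# the compat-xlat-header.py tokenizer.
fields_plain  = ["uint64_t", "mfn", ";"]
fields_nested = ["struct", "{", "uint32_t", "lo", ";", "}", "inner", ";"]

re_brackets = re.compile(r"[{}]")   # the pattern the patch removes

for fields in (fields_plain, fields_nested):
    old = bool(re_brackets.search(" ".join(fields)))  # old test: regex
    new = "{" in fields                               # new test: membership
    assert old == new
```

The membership test also avoids joining the token list into a string on every call, which is where the optimisation in the subject line comes from.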


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 16:28:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 16:28:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487069.754546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMX0I-0001GD-BN; Mon, 30 Jan 2023 16:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487069.754546; Mon, 30 Jan 2023 16:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMX0I-0001G6-8k; Mon, 30 Jan 2023 16:27:34 +0000
Received: by outflank-mailman (input) for mailman id 487069;
 Mon, 30 Jan 2023 16:27:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMX0H-0001Fw-05; Mon, 30 Jan 2023 16:27:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMX0G-0006CQ-RV; Mon, 30 Jan 2023 16:27:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMX0G-0001wm-Cb; Mon, 30 Jan 2023 16:27:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMX0G-0006lO-C2; Mon, 30 Jan 2023 16:27:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XBqWYK+2OeKht95q8+lEf3RZbknW/hO+ktmICoFNbRk=; b=4Sjljj84dnRnLSGkxEwKMDrEM8
	ZWzid6tOCBpqBzW7EnffwsX4HGDftA6U5dTOGz9Z82yqlkEoarFOmjSEMsEC/H6ARSzn3m6uvI7Z6
	czTI+Dmpjce2GFVLHK0dmOov0jHB2qlE+RrF4gvd+rh4zDVOliZVrgoKJjsRs/lb+O4Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176280-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176280: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4b384c21ad02fbf5eda25a1516cc72fa66b150f6
X-Osstest-Versions-That:
    ovmf=bb1376254803bfdaa012c62f1cf6d6b26161cfe7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 16:27:32 +0000

flight 176280 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176280/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4b384c21ad02fbf5eda25a1516cc72fa66b150f6
baseline version:
 ovmf                 bb1376254803bfdaa012c62f1cf6d6b26161cfe7

Last test of basis   176278  2023-01-30 07:40:53 Z    0 days
Testing same since   176280  2023-01-30 14:12:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dionna Glaze <dionnaglaze@google.com>
  Dionna Glaze via groups.io <dionnaglaze=google.com@groups.io>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   bb13762548..4b384c21ad  4b384c21ad02fbf5eda25a1516cc72fa66b150f6 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 17:33:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 17:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487089.754557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMY1K-0000sh-5N; Mon, 30 Jan 2023 17:32:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487089.754557; Mon, 30 Jan 2023 17:32:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMY1K-0000sZ-26; Mon, 30 Jan 2023 17:32:42 +0000
Received: by outflank-mailman (input) for mailman id 487089;
 Mon, 30 Jan 2023 17:32:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMY1I-0000sO-2q; Mon, 30 Jan 2023 17:32:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMY1I-0007et-0V; Mon, 30 Jan 2023 17:32:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMY1H-0005CL-GU; Mon, 30 Jan 2023 17:32:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMY1H-0001gb-G3; Mon, 30 Jan 2023 17:32:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IO7RyLmiHY5wS1XU9wNNYrR7sdfm62NQ5sPRaPRa9PE=; b=SgupzOhsCNyjScYS9IqS9vXPc/
	bSm8I3eq6gLeTDxijP9th3dCUt5Hbj0qpRAW6P6m2iavSMCLg27m1LNtn08ia1REyUAIl/9fZPAGW
	UrQ9Bs7NMJlA6cGtH4RCh2LR0jHDvFPlkFnelt8UCPAa+XeH4RI+qeatMjLkFkyDrcaU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176276-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 176276: trouble: blocked/broken/pass
X-Osstest-Failures:
    libvirt:build-armhf:<job status>:broken:regression
    libvirt:build-armhf:host-install(4):broken:regression
    libvirt:build-armhf:syslog-server:running:regression
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:build-armhf:capture-logs:broken:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=9f8fba7501327a60f6adb279ea17f0e2276071be
X-Osstest-Versions-That:
    libvirt=95a278a84591b6a4cfa170eba31c8ec60e82f940
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 17:32:39 +0000

flight 176276 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176276/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176139
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176139
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              9f8fba7501327a60f6adb279ea17f0e2276071be
baseline version:
 libvirt              95a278a84591b6a4cfa170eba31c8ec60e82f940

Last test of basis   176139  2023-01-26 04:18:49 Z    4 days
Failing since        176233  2023-01-27 04:18:53 Z    3 days    4 attempts
Testing same since   176262  2023-01-28 04:20:25 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiri Denemark <jdenemar@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf capture-logs
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 9f8fba7501327a60f6adb279ea17f0e2276071be
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Thu Jan 26 16:12:00 2023 +0100

    remote: Fix version annotation for remoteDomainFDAssociate
    
    The API was added in libvirt 9.0.0.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Peter Krempa <pkrempa@redhat.com>

commit a0fbf1e25cd0f91bedf159bf7f0086f4b1aeafc2
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 16:48:50 2023 +0100

    rpc: Use struct zero initializer for args
    
    In a recent commit of v9.0.0-104-g0211e430a8 I've turned all args
    vars in src/remote/remote_driver.c to be initialized with {0}.
    What I've missed was the generated code.
    
    Do what we've done in v9.0.0-13-g1c656836e3 and init also args,
    not just ret.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit 2dde3840b1d50e79f6b8161820fff9fe62f613a9
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix missing device in crypto-builtin XML
    
    Another forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit f3c9cbc36cc10775f6cefeb7e3de2f799dc74d70
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix watchdog parameters in crypto-builtin
    
    Forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit a2c5c5dad2275414e325ca79778fad2612d14470
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:34 2023 +0100

    news: Add information about iTCO watchdog changes
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 2fa92efe9b286ad064833cd2d8b907698e58e1cf
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:30 2023 +0100

    Document change to multiple watchdogs
    
    With the reasoning behind it.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 926594dcc82b40f483010cebe5addbf1d7f58b24
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 11:22:22 2023 +0100

    qemu: Add implicit watchdog for q35 machine types
    
    The iTCO watchdog has been part of the q35 machine type since its
    inception; we just did not add it implicitly.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2137346
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d81a27b9815d68d85d2ddc9671649923ee5905d7
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 14:15:06 2023 +0100

    qemu: Enable iTCO watchdog by disabling its noreboot pin strap
    
    In order for the iTCO watchdog to be operational we must disable the
    noreboot pin strap in qemu.  This is the default starting from 8.0
    machine types, but desirable for older ones as well.  And we can safely
    do that since that is not guest-visible.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 5b80e93e42a1d89ee64420debd2b4b785a144c40
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:26:21 2023 +0100

    Add iTCO watchdog support
    
    Supported only with q35 machine types.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1c61bd718a9e311016da799a42dfae18f538385a
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Tue Nov 8 09:10:57 2022 +0100

    Support multiple watchdog devices
    
    This is already possible with qemu, and actually already happening with
    q35 machines and a specified watchdog since q35 already includes a
    watchdog we do not include in the XML.  In order to express such a
    possibility, multiple watchdogs need to be supported.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit c5340d5420012412ea298f0102cc7f113e87d89b
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:28:52 2023 +0100

    qemuDomainAttachWatchdog: Avoid unnecessary nesting
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1cf7e6ec057a80f3c256d739a8228e04b7fb8862
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:25:06 2023 +0100

    remote: Drop useless cleanup in remoteDispatchNodeGet{CPU,Memory}Stats
    
    The function cannot fail once it starts populating
    ret->params.params_val[i].field.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d0f339170f35957e7541e5b20552d0007e150fbc
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:06:33 2023 +0100

    remote: Avoid leaking uri_out
    
    In case the API returned success and a NULL pointer in uri_out, we would
    leak the preallocated buffer used for storing the uri_out pointer.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 4849eb2220fb2171e88e014a8e63018d20a8de95
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 11:56:28 2023 +0100

    remote: Propagate error from virDomainGetSecurityLabelList via RPC
    
    The daemon side of this API has been broken ever since the API was
    introduced in 2012. Instead of sending the error from
    virDomainGetSecurityLabelList via RPC so that the client can see it, the
    dispatcher would just send a successful reply with return value set to
    -1 (and an empty array of labels). The client side would propagate this
    return value so the client can see the API failed, but the original
    error would be lost.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 0211e430a87a96db9a4e085e12f33caad9167653
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 13:19:31 2023 +0100

    remote: Initialize args variable
    
    Recently, in v9.0.0-7-gb2034bb04c we've dropped initialization of
    @args variable. The reasoning was that eventually, all members of
    the variable will be set. Well, this is not correct. For
    instance, in remoteConnectGetAllDomainStats() the
    args.doms.doms_val pointer is set iff @ndoms != 0. However,
    regardless of that, the pointer is then passed to VIR_FREE().
    
    Worse, the whole args is passed to
    xdr_remote_connect_get_all_domain_stats_args() which then calls
    xdr_array, which tests the (uninitialized) pointer against NULL.
    
    This effectively reverts b2034bb04c61c75ddbfbed46879d641b6f8ca8dc.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
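
    The bug class this commit fixes is easy to reproduce in plain C.
    A minimal, hypothetical sketch (the struct and function names below
    are illustrative, not libvirt's actual types):

    ```c
    #include <assert.h>
    #include <stdlib.h>

    /* Hypothetical args struct modeled on the remote driver pattern:
     * doms_val is only populated when there are domains to report. */
    struct stats_args {
        struct {
            unsigned int doms_len;
            char **doms_val;        /* only set when ndoms != 0 */
        } doms;
        unsigned int flags;
    };

    /* With "= {0}" every member, including doms_val, starts out
     * zero/NULL, so the unconditional free() at cleanup is safe
     * (free(NULL) is a no-op).  Without the initializer, doms_val
     * would be indeterminate on the ndoms == 0 path, and both the
     * free() and any NULL test inside the XDR marshalling code would
     * read an uninitialized pointer. */
    static int collect_stats(unsigned int ndoms)
    {
        struct stats_args args = {0};

        if (ndoms != 0) {
            args.doms.doms_val = calloc(ndoms, sizeof(char *));
            if (!args.doms.doms_val)
                return -1;
            args.doms.doms_len = ndoms;
        }

        /* ... marshal args over RPC here ... */

        free(args.doms.doms_val);   /* safe on both paths */
        return 0;
    }
    ```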

commit c3afde9211b550d3900edc5386ab121f5b39fd3e
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 11:56:10 2023 +0100

    qemu_domain: Don't unref NULL hash table in qemuDomainRefreshStatsSchema()
    
    The g_hash_table_unref() function does not accept NULL. Passing
    NULL results in a glib warning being triggered. Check whether the
    hash table is not NULL and unref it only then.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>
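
    The guarded-unref pattern applied by this commit can be sketched in
    plain C.  table_unref() below is a stand-in for g_hash_table_unref()
    (which, like many glib _unref functions, requires a non-NULL
    argument); it is illustrative only, not glib itself:

    ```c
    #include <assert.h>
    #include <stddef.h>

    static int unref_calls;

    /* Stand-in for g_hash_table_unref(): must not be handed NULL.
     * Real glib would emit a critical warning rather than assert. */
    static void table_unref(void *table)
    {
        assert(table != NULL);
        unref_calls++;
    }

    /* The fix: only drop the reference when the hash table was
     * actually created. */
    static void drop_schema(void *schema)
    {
        if (schema)
            table_unref(schema);
    }
    ```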


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 17:45:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 17:45:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487098.754567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYCr-0002cP-8O; Mon, 30 Jan 2023 17:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487098.754567; Mon, 30 Jan 2023 17:44:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYCr-0002cI-5O; Mon, 30 Jan 2023 17:44:37 +0000
Received: by outflank-mailman (input) for mailman id 487098;
 Mon, 30 Jan 2023 17:44:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5/WO=53=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pMYCg-0002cA-2I
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 17:44:36 +0000
Received: from black.elm.relay.mailchannels.net
 (black.elm.relay.mailchannels.net [23.83.212.19])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ad933b8d-a0c5-11ed-8ba2-5fe241e16ab0;
 Mon, 30 Jan 2023 18:44:02 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id 8FDFC88158C
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 17:43:58 +0000 (UTC)
Received: from pdx1-sub0-mail-a304.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id EFF7A882328
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 17:43:53 +0000 (UTC)
Received: from pdx1-sub0-mail-a304.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.103.24.61 (trex/6.7.1); Mon, 30 Jan 2023 17:43:54 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a304.dreamhost.com (Postfix) with ESMTPSA id 4P5Fsj1Ml6z2P
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 09:43:53 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e0034 by kmjvbox (DragonFly Mail Agent v0.12);
 Mon, 30 Jan 2023 09:43:50 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad933b8d-a0c5-11ed-8ba2-5fe241e16ab0
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1675100634; a=rsa-sha256;
	cv=none;
	b=A7g7aFtspdV/+GooRiJbYxTM3Vkh1VvJ/Ll9R9MdOueS2Zy2I9SFi0VM/hio+Q4+kMTLll
	L9+cmO3zFB5i+FUeMxdSvveGyKcWp2q6N2nDo3bkgHVWJtpJKd1SntTHqS+P7omfeTsbTa
	ztZJuBsjb6LkurG+Qyoe7CCdQeNSP+l8qJfJX8nwGZTFSOMRiTDSUzsvIXCm0fYmtN7djm
	93QHQesFPuvEU6P71GZHCZ83cDgdt0uTsbm8GbvrRzoIVgxfHqeFjTNaVC5/YJcSKCDsEx
	+HTHQx/mVa5JpthCwU32mB6djnG5YUWlIMpFUEAf8BY29LePjmi7xRY+7zfp2A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1675100634;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:dkim-signature;
	bh=I0x2MZ7pHTEySJdwNZX7i0oTJQoYVYraWHS7W6F0g8w=;
	b=Xg6hBNKIq9HRcS1bYCZHmHwnTK8+jPRJ37TbSx1A1EBmdpBR6SiWUe60bGdzjo3TcCPZq3
	IrwbIJurXPiGk4vSkwGaqaRNZ/iacJhE0QOLwpNNLOyUS2aYLDUsmQaZNHU/ju1moKsyTn
	dqFj9MleUhkmcniKwKXLL5q0NBW9YA3CBBj1jsF/SjGIzvEu+PHgiZ6gOWBlOCR8pzjKNc
	Su/8dN6KvXg5ycd32TuJPgtEgq42wbRfX3bAwjf3fZvExzfPacxxdiDBxeBQm12hAiGI1k
	d0q6dVHLoIx5LGJ5RhZOZObJLfwsYodOZEB6tsjzs3Fy9mhPj1ZKCp6Rjgernw==
ARC-Authentication-Results: i=1;
	rspamd-544f66f495-mkpz8;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Neutral
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Chemical-Stop: 651fe4aa059b81d3_1675100634391_747117249
X-MC-Loop-Signature: 1675100634390:1605748642
X-MC-Ingress-Time: 1675100634390
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1675100633;
	bh=I0x2MZ7pHTEySJdwNZX7i0oTJQoYVYraWHS7W6F0g8w=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=Xh28BsH9//NDU6RcT5oS22zFUueB3Pbl7FcUACWwhylISEakpnoupwmT3zngNrLME
	 rS1x5quAzH62jDpwrMLQ0ZnpRaGiqHsE0qWCK7r1WzOK2XzoKabPmrHwQvYJUEXCpH
	 yvWI+JhBMu1XSUWDJFD4saMYW8lqW8XYA9Y0osfE=
Date: Mon, 30 Jan 2023 09:43:50 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230130174350.GA2001@templeofstupid.com>
References: <0c4eb4ba-4314-02c7-62d5-b08a3573fcc2@suse.com>
 <20230127185103.GB1955@templeofstupid.com>
 <bfccbe22-e5e4-40d3-aded-639d812bfa08@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <bfccbe22-e5e4-40d3-aded-639d812bfa08@suse.com>

Hi Jan,

On Mon, Jan 30, 2023 at 09:22:11AM +0100, Jan Beulich wrote:
> On 27.01.2023 19:51, Krister Johansen wrote:
> > --- a/xen/include/public/arch-x86/cpuid.h
> > +++ b/xen/include/public/arch-x86/cpuid.h
> > @@ -72,6 +72,15 @@
> >   * Sub-leaf 2: EAX: host tsc frequency in kHz
> >   */
> >  
> > +#define XEN_CPUID_TSC_EMULATED               (1u << 0)
> > +#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
> > +#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
> > +
> > +#define XEN_CPUID_TSC_MODE_DEFAULT               (0)
> > +#define XEN_CPUID_TSC_MODE_ALWAYS_EMULATE        (1u)
> > +#define XEN_CPUID_TSC_MODE_NEVER_EMULATE         (2u)
> > +#define XEN_CPUID_TSC_MODE_NEVER_EMULATE_TSC_AUX (3u)
> 
> While perhaps it doesn't matter much with the mode no longer supported,
> I'd prefer if here we used the original name (PVRDTSCP) as well.
> Preferably with that adjustment (which once again I'd be happy to do
> while committing, albeit I'd like to wait with that until osstest is in
> a better mood again)
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks for the additional feedback.  I'll send you a v4 that makes the
requested modification.

-K


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 17:45:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 17:45:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487102.754577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYDT-0003EA-LT; Mon, 30 Jan 2023 17:45:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487102.754577; Mon, 30 Jan 2023 17:45:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYDT-0003Df-IA; Mon, 30 Jan 2023 17:45:15 +0000
Received: by outflank-mailman (input) for mailman id 487102;
 Mon, 30 Jan 2023 17:45:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5/WO=53=templeofstupid.com=kjlx@srs-se1.protection.inumbo.net>)
 id 1pMYDS-00031n-H9
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 17:45:14 +0000
Received: from bonobo.larch.relay.mailchannels.net
 (bonobo.larch.relay.mailchannels.net [23.83.213.22])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d462c1cd-a0c5-11ed-8ba2-5fe241e16ab0;
 Mon, 30 Jan 2023 18:45:07 +0100 (CET)
Received: from relay.mailchannels.net (localhost [127.0.0.1])
 by relay.mailchannels.net (Postfix) with ESMTP id 660A95C2652
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 17:45:04 +0000 (UTC)
Received: from pdx1-sub0-mail-a304.dreamhost.com (unknown [127.0.0.6])
 (Authenticated sender: dreamhost)
 by relay.mailchannels.net (Postfix) with ESMTPA id 20F7E5C22AC
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 17:45:03 +0000 (UTC)
Received: from pdx1-sub0-mail-a304.dreamhost.com (pop.dreamhost.com
 [64.90.62.162]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384)
 by 100.116.179.68 (trex/6.7.1); Mon, 30 Jan 2023 17:45:04 +0000
Received: from kmjvbox (c-76-102-200-71.hsd1.ca.comcast.net [76.102.200.71])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: kjlx@templeofstupid.com)
 by pdx1-sub0-mail-a304.dreamhost.com (Postfix) with ESMTPSA id 4P5Fv23cYtzS9
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 09:45:02 -0800 (PST)
Received: from johansen (uid 1000) (envelope-from kjlx@templeofstupid.com)
 id e0034 by kmjvbox (DragonFly Mail Agent v0.12);
 Mon, 30 Jan 2023 09:44:59 -0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d462c1cd-a0c5-11ed-8ba2-5fe241e16ab0
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
ARC-Seal: i=1; s=arc-2022; d=mailchannels.net; t=1675100703; a=rsa-sha256;
	cv=none;
	b=TrsAuql7YYd5WwaELsMFJyT20+05x1Vs+1TTMg+hWteoVNhAWBrrcFXxmL+JcXKac5FgI2
	KpfLmxPZyGt1r9SovVmK6TVncc8fcwngR93Jq5g7oWxCaT0jgVUQ+lAiJlzK2OvcO116ui
	fWJ7SFGTd5I7ql4Xp1QLT+MGE4hNSHfcubF7ibOmWo5JMUXazub/w2BByBFTGvsulJS6mX
	Yp9VpKbVtrMf1Ip/aW/vPk/JWJLxkzvRQOJIOD2YVTMOm9O+6XvP6QUo0fwLK49RtRbUw4
	TXp7h5xl/yX+WIoP+S/l0dDDmSNT/OZ8Hr0SFR+IT+bFp3zjnbU4Kp0DndtQUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
 d=mailchannels.net;
	s=arc-2022; t=1675100703;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:dkim-signature;
	bh=lzWbg7pH2Qdroe4uerh/jh+EVVJsKlEREiv+gIZXp/4=;
	b=E3tMINDvnoMbohYwJ2477XupHt22eo5PKXuXrCR80DCEOx6O3LtSpoKqYpU4NiSr/RvYAt
	r4cpur82sRhkJdBkIOFC/7Gi66WferqRwBCK3SM9C+Q8oICzA5CGBzIPGCgiMDnUeYAJbE
	V6mw+ksIgkXsy62qdb0SGeecvTs1vGQ7eyvAQyddDtewVg4mr+ChgDGZhn91JfEyBqrNCh
	jx7qO+h4ZCffvFpNK1LU3NCpDMNZhUlwfkgwaFgOa4TrU65P+nTr1yUEgdQAaKYTBzbroz
	7YetaIDnEiEr9W0pxX/bLIvHwQ2mvbgfn/tKvqfZtdgjLFoRi9YmRb79+6T80Q==
ARC-Authentication-Results: i=1;
	rspamd-5fb8f68d88-c76m6;
	auth=pass smtp.auth=dreamhost smtp.mailfrom=kjlx@templeofstupid.com
X-Sender-Id: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MC-Relay: Good
X-MailChannels-SenderId: dreamhost|x-authsender|kjlx@templeofstupid.com
X-MailChannels-Auth-Id: dreamhost
X-Chief-Abortive: 6c1bbaf9559e6c5a_1675100704225_1603156238
X-MC-Loop-Signature: 1675100704225:1628132625
X-MC-Ingress-Time: 1675100704225
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=templeofstupid.com;
	s=dreamhost; t=1675100702;
	bh=lzWbg7pH2Qdroe4uerh/jh+EVVJsKlEREiv+gIZXp/4=;
	h=Date:From:To:Cc:Subject:Content-Type;
	b=XZIPzc7tWyRIsJ5Nl1A+WZvwQPUww3beix2acoBlBry7erf54qAkT87ly6BsCQeI+
	 GvE9ZnFpO3pHv7GIDqIKL09jDoyZTXXm5ZT+4TBeWHe7m6LTGVB2d/E1IRTeQWKswE
	 rvLmm6awA85ljK3livi/ugFn2X7msalK1Od1WOYo=
Date: Mon, 30 Jan 2023 09:44:59 -0800
From: Krister Johansen <kjlx@templeofstupid.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Reaver <me@davidreaver.com>
Subject: [PATCH v4] xen/x86: public: add TSC defines for cpuid leaf 4
Message-ID: <20230130174459.GB2001@templeofstupid.com>
References: <bfccbe22-e5e4-40d3-aded-639d812bfa08@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <bfccbe22-e5e4-40d3-aded-639d812bfa08@suse.com>

Cpuid leaf 4 contains information about the state of the tsc, its mode,
and some additional details.  A commit that is queued for Linux would
like to use this to determine whether the tsc mode has been set to
'no emulation', in order to make some decisions about which clocksource
is more reliable.

Expose this information in the public API headers so that it can
subsequently be imported into Linux and used there.

Link: https://lore.kernel.org/xen-devel/eda8d9f2-3013-1b68-0df8-64d7f13ee35e@suse.com/
Link: https://lore.kernel.org/xen-devel/0835453d-9617-48d5-b2dc-77a2ac298bad@oracle.com/
Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
---
v4:
  - Rename TSC_MODE_NEVER_EMULATE_TSC_AUX to TSC_MODE_PVRDTSCP (feedback from
    Jan Beulich)
v3:
  - Additional formatting cleanups (feedback from Jan Beulich)
  - Ensure that TSC_MODE #defines match the names of those in time.h (feedback
    from Jan Beulich)
v2:
  - Fix whitespace between comment and #defines (feedback from Jan Beulich)
  - Add tsc mode 3: no emulate TSC_AUX (feedback from Jan Beulich)
---
 xen/include/public/arch-x86/cpuid.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
index 7ecd16ae05..9d02f86564 100644
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -72,6 +72,15 @@
  * Sub-leaf 2: EAX: host tsc frequency in kHz
  */
 
+#define XEN_CPUID_TSC_EMULATED               (1u << 0)
+#define XEN_CPUID_HOST_TSC_RELIABLE          (1u << 1)
+#define XEN_CPUID_RDTSCP_INSTR_AVAIL         (1u << 2)
+
+#define XEN_CPUID_TSC_MODE_DEFAULT               (0)
+#define XEN_CPUID_TSC_MODE_ALWAYS_EMULATE        (1u)
+#define XEN_CPUID_TSC_MODE_NEVER_EMULATE         (2u)
+#define XEN_CPUID_TSC_MODE_PVRDTSCP              (3u)
+
 /*
  * Leaf 5 (0x40000x04)
  * HVM-specific features
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:06:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:06:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487114.754586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYXf-0006Jw-Bg; Mon, 30 Jan 2023 18:06:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487114.754586; Mon, 30 Jan 2023 18:06:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYXf-0006Jp-8w; Mon, 30 Jan 2023 18:06:07 +0000
Received: by outflank-mailman (input) for mailman id 487114;
 Mon, 30 Jan 2023 18:06:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dpD8=53=citrix.com=prvs=3879b2cf9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMYXe-0006Jj-D9
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:06:06 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id beb6dcb9-a0c8-11ed-93eb-7b0ecb3c1525;
 Mon, 30 Jan 2023 19:06:00 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: beb6dcb9-a0c8-11ed-93eb-7b0ecb3c1525
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675101960;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=vad01AZYH/ejT2higFuRrEqS51p2PmLYpTQW+ca61OQ=;
  b=d6xp++JgTsk08/aK5FzKEFVM6lUQrrknx/2y1eLzpH4G3lI6DgK/9cMD
   5dr2wZ0gPvc4DoN1mqYQvGoIJxmX4ScCIu3TbKnbu/2WPc0F2Axq+Xp9o
   bNLpXmC8bO+CyP44RjWKOSIQ9bRVeYIh+Rp/aTT/P1Wi60CcKVcpzqS4/
   k=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 94308129
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:oVkuc6IKDAE5BBOwFE+R9pUlxSXFcZb7ZxGr2PjKsXjdYENSgjQBz
 2ZMDD3TM6zcN2P3f9B2ad6xoR9TsJ+AyN82SwJlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHv+kUrWs1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPcwP9TlK6q4mhA5wdmPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c57KGZvz
 vsWEwtQTRCurt25mZSrTvlj05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TbGZ4Nzh/C9
 woq+Uz0Jx0BF/m98AG1/0qjmc3zlACnUZArQejQGvlC3wTImz175ActfVm0u/6ikWalRslSb
 UcT/0IGpLA/7kWxQvHhXhezpziPuRt0c8VUO/037keK0KW8yxaUAC0IQyBMbPQitdQqXno62
 1mRhdTrCDdz9rqPRhqgGqy89G3of3JPdClbOHFCFFFeizX+nG0tph7mSfdYF6COtYDWGRzZ/
 D/Tijg6l7pG2KbnyJ6H1VzAhjutoL3AQQg0+hjbUwqZ0+9pWGK2T9f2sAaGtJ6sOK7cFwDc5
 yZcx6By+chUVfmweDqxrPLh9V1Dz9KMK3XijFFmBPHNHBz9qif4Lei8DNyTTXqF0/romxezO
 yc/WisLvve/2UdGiocqC79d8+xwkcDd+S3ND5g4lOZmbJlrbxOg9ypzf0OW1G2FuBFyzvxnY
 MjCIZr0XCly5UFbIN2eHrd17FPW7npmmTO7qW7TkHxLLoZylFbKEOxYYTNin8gy7b+eoRW9z
 jqsH5Li9vmra8WnOnO/2ddKfTg3wY0TWcieRzp/KrTSfWKL2QgJV5fs/F/WU9c8xPwMzbaVp
 xlQmCZwkTLCuJEOEi3SAlgLVV8ldcwXQa4TVcD0AWuV5g==
IronPort-HdrOrdr: A9a23:619wKKAJcb3nN1TlHelW55DYdb4zR+YMi2TDt3oddfWaSKylfq
 GV7ZAmPHrP4gr5N0tOpTntAse9qBDnhPtICOsqTNSftWDd0QPFEGgL1+DfKlbbak/DH4BmtJ
 uJc8JFeaDN5VoRt7eH3OFveexQv+Vu88qT9JnjJ28Gd3AMV0n5hT0JcTpyFCdNNW97LKt8Lr
 WwzOxdqQGtfHwGB/7LfEXsD4D41qT2fIuNW29/OyIa
X-IronPort-AV: E=Sophos;i="5.97,258,1669093200"; 
   d="scan'208";a="94308129"
Date: Mon, 30 Jan 2023 18:05:36 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: Re: [PATCH 1/6] tools/libxc: Move xc_version() out of xc_private.c
 into its own file
Message-ID: <Y9gG8O0QYEJJL3Id@perard.uk.xensource.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-2-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230117135336.11662-2-andrew.cooper3@citrix.com>

On Tue, Jan 17, 2023 at 01:53:31PM +0000, Andrew Cooper wrote:
> kexec-tools uses xc_version(), meaning that it is not a private API.  As we're
> going to extend the functionality substantially, move it to its own file.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:06:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:06:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487117.754597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYYE-0006oM-JT; Mon, 30 Jan 2023 18:06:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487117.754597; Mon, 30 Jan 2023 18:06:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYYE-0006oF-Gg; Mon, 30 Jan 2023 18:06:42 +0000
Received: by outflank-mailman (input) for mailman id 487117;
 Mon, 30 Jan 2023 18:06:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dpD8=53=citrix.com=prvs=3879b2cf9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMYYD-0006my-3W
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:06:41 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d72873d0-a0c8-11ed-93eb-7b0ecb3c1525;
 Mon, 30 Jan 2023 19:06:39 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d72873d0-a0c8-11ed-93eb-7b0ecb3c1525
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675101999;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=YNKQLtgZEG5dWxVtP2OQaAUcPcCNC2dDfECDbqrmXBA=;
  b=F3qQHKBxz0z4L5K4aA3aU7lGtHEcsL4IJbEc+mIUKs+A1sRu/tUG1jLE
   UWE31XlGCDTu/CJfCthLEcV2DYu4v4YdLSnvCMX/GEQgIqMeX/abF+6lN
   KKW1GdO/c4QlrZspqMtA2LZQIlBWRH1+XglVY9vJEG2Pw0rPFY428vV0u
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 94816167
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:FiTmoaxzsShjXMvpxG16t+cBxirEfRIJ4+MujC+fZmUNrF6WrkUFy
 TEeXW6FMv/ZZmfwKo8iOdi29x9T7MLdz4Q2TgE9ryAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTbaeYUidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UwHUMja4mtC5QRnPqgT5jcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KUd2q
 ucdEWsWVy2Si7icmKCKEMtijMt2eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+hgGX/dDtJ7kqYv6Mt70DYzRBr0airO93QEjCPbZQJzh/G/
 zyZl4j/KhsUO9W9yzaLyyKxhc7XgiDfXaIAPZTto5aGh3XMnzdOWXX6T2CTrfCnh2akVtlYK
 khS/TAhxYA77EGxR8PxdwG5qnWD+BUbXrJ4A+A8rQ2A1KfQywKYHXQfCC5MbsQ8s807TiBs0
 UWG9/vrCiZoq6a9Um+G+/GfqjbaETcRBX8PY2kDVwRt3jX4iNht1FSVFI8lSfPryISvQlkc3
 gxmsgAYv+oIiPdQzJyG7FydqWq+l8LrQAQ6s1C/sn2e0it1Y4usZoqN4Ffd7OpdIIvxcmRtr
 EToiODFsrlQUMjleDilBbxUQer3v6rt3Cj02wYHInU3y9i6F5dPl6h06So2GkpmO91sldTBM
 B6K4lM5CHO+0RKXgU5Lj2CZUZ9CIUvIT46NuhXogj1mP/BMmPevpn0GWKJp9zmFfLIQua8+I
 4yHVs2nEGwXD69qpBLvGbhAieZ0n3BinTKMLXwe8/hA+ePODEN5tJ9faAfeBgzHxPzsTPrpH
 yZ3aJLRlkQ3vBzWaSjL648DRW3m3lBiba0aX/d/L7bZSiI/QTFJNhMk6e95E2CTt/gPx7igE
 7DUchMw9WcTclWccF7SMysyNeqHsFQWhStTABHA9G2AgxALCbtDJo9GH3frVdHLLNBe8MM=
IronPort-HdrOrdr: A9a23:77ZirquTaLyRMkqvfp2etxwL7skCq4Aji2hC6mlwRA09TyXGra
 +TdaUguSMc1gx9ZJh5o6H8BEGBKUmskKKdkrNhQYtKPTOW81dASbsN0WKM+UyYJ8STzJ8/6U
 4kSdkFNDSSNykxsS+Z2njBLz9I+rDum8rI5ds2jU0dNj2CAJsQizuRfzzrdHGeMzM2YqbReq
 DshPZvln6FQzA6f867Dn4KU6zqoMDKrovvZVorFgMq8w6HiBKv8frfHwKD1hkTfjtTyfN6mF
 K13zDR1+GGibWW2xXc32jc49B/n8bg8MJKAIihm9UYMTLljyevfcBEV6eZtD44jemz4BIBkc
 XKoT0nI8NvgkmhM12dkF/I4U3NwTwu43jtxRuzmn34u/H0Qzo8Fo5omZ9ZWgGx0TtvgPhMlI
 Zwm06JvZteCh3N2A7n4cLTah1snk2o5VI/jO8oiWBFW4d2Us4RkWVfxjIULH4zJlO51GkVKp
 gqMCga3ocTTbquVQGbgoCo+q3qYp18JGbBfqFIgL3r79EfpgEG86Jf/r1Rol4wsKsnTZ9K/u
 LFNbktuo1vY6YtHPpALdZEeNCwDGPVRxLKLSa1GnTIUI86G1+lke+v3F0SjNvaIqDgCKFCw6
 jpQRdWs3U/dFnpDtDL1JpX8grVSGH4Rjj1zNpCjqIJzIEUaYCbRRFrcmpe5PeIsrEaGInWSv
 yzMJVZD7vqKnbvA59A20n7V4NJIXcTXcUJspJjMmj+6v7jO8nvrKjWYfzTLL3iHXItXX7+GG
 IKWHz2KN9b5k6mV3fkiFzaWm/reEb44ZVseZKqttQ72cwILMlBowIVgVO26oWCLiBDqLU/eA
 9kLLbugsqA1ByLFKbznhdU0zZmfzVoCe/bIgJ3TCcxQjPJTYo=
X-IronPort-AV: E=Sophos;i="5.97,258,1669093200"; 
   d="scan'208";a="94816167"
Date: Mon, 30 Jan 2023 18:06:28 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 2/6] tools: Introduce a non-truncating
 xc_xenver_extraversion()
Message-ID: <Y9gHJLTaeVKWgeS8@perard.uk.xensource.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-3-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230117135336.11662-3-andrew.cooper3@citrix.com>

On Tue, Jan 17, 2023 at 01:53:32PM +0000, Andrew Cooper wrote:
> ... which uses XENVER_extraversion2.
> 
> In order to do this sensibly, use manual hypercall buffer handling.  Not only
> does this avoid an extra bounce buffer (we need to strip the xen_varbuf_t
> header anyway), it's also shorter and easier to follow.
> 
> Update libxl and the ocaml stubs to match.  No API/ABI change in either.
> 
> With this change made, `xl info` can now correctly access a >15 char
> extraversion:
> 
>   # xl info xen_version
>   4.18-unstable+REALLY LONG EXTRAVERSION
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:07:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:07:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487122.754607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYYz-0007L0-SY; Mon, 30 Jan 2023 18:07:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487122.754607; Mon, 30 Jan 2023 18:07:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYYz-0007Kt-Pd; Mon, 30 Jan 2023 18:07:29 +0000
Received: by outflank-mailman (input) for mailman id 487122;
 Mon, 30 Jan 2023 18:07:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dpD8=53=citrix.com=prvs=3879b2cf9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMYYy-0007Kf-Hh
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:07:28 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f33f737e-a0c8-11ed-93eb-7b0ecb3c1525;
 Mon, 30 Jan 2023 19:07:26 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f33f737e-a0c8-11ed-93eb-7b0ecb3c1525
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675102046;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=QJxaharSK/Jo0woOff0VVvBI87ddnh6E/mwN+YlcpFk=;
  b=ZKIxwqJ00XCaVlBEgSlNV8859tDSjip/XgEvbMynvLCB6Vbg/kJX4ZA5
   K3TALbyp9QKWprDhSb6xTt+N8PtUh1+5FCcSpZX2xIghs96z+GOY0GaO9
   0umqGFPic1BIW75O1l7Y2UOcjsVwznVRLsIQ2DTRaDK7SZjKyn+JKe/FY
   E=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 97314012
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Gpr6zqgn+A/xDb21643p71APX161VhAKZh0ujC45NGQN5FlHY01je
 htvXGrTa6zcYjH2e9FyO9y29RgFvsXVyYVjS1Y/rHtkF34b9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmUpH1QMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWt0N8klgZmP6sT5QSGzyN94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQjIxESZBmOltvs+6KACbEy3u8YNtLCadZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XejtEqFWTtOwv7nLa1gBZ27nxKtvFPNeNQK25m27J+
 TmfozygWHn2MvSEwmqc1Vvyg9TMog3KVaQWD5+E5tNT1Qj7Kms7V0RNCArTTeOCol6zXZdTJ
 lIZ/gIqrLMu7wq7Q9/lRRq6rXWY+BkGVLJ4DOkS+AyLjK3O7G6xBGceSSVaQMc7r8JwTjsvv
 neFls3kLSZiu7qUTTSa7Lj8hTqqNDIcN2MqeS4ORgxD6N7myLzflTqWEIwlSvTsyISoR3epm
 WviQDUCa6s7tsUqyK+y8EH+2Qm8nduREFYe3R/Mdzfwhu9mX7KNa4ut4FndyP9PKoeFU1WM1
 EQ5d9iiAPMmVs/UynHUKAkZNPTwvqvebmWA6bJ6N8N5nwlB7UJPamy5DNtWAE5yevgJdjbyC
 KM4kVMAvcQDVJdGgEIeXm5QNyjI5fK7fTgGfqqOBjarXnSWXFLvwc2WTRTMt10BaWB1+U3FB
 b+VcNy3EVERArl9wTy9So81iOF0m3hnmjqDGciqkHxLNIZyg1bMGd843KamNLhlvMtoXi2Km
 zqgCyd640oGC7CvCsUm2YUSMUoLPRAG6WPe8qRqmhq4ClM+QgkJUqaBqY7NjqQ5x8y5YM+Up
 CDiMqKZoXKj7UD6xfKiMSk4MOq0DMsmxZ/5VAR1VWuVN7EYSd7HxM8im1EfJ9HLKMQLISZIc
 sQ4
IronPort-HdrOrdr: A9a23:76BNR6jYcd0zyARg3aVGCWWqs3BQXwd23DAbv31ZSRFFG/FwyP
 rAoB1L73PJYWgqNU3IwerwRZVpQRvnhPtICPoqTMuftWjdySCVxeRZg7cKrAeQYhEWmtQttp
 uINpIOcuEYbmIKx/oSgjPIa+rIqePvmMvD5IfjJjVWPHpXgspbnmNE43OgYytLrX59dP0E/f
 Snl6h6jgvlXU5SQtWwB3EDUeSGj9rXlKj+aRpDKw875BKIhTaI7qe/NxSDxB8RXx5G3L9nqA
 H+4kDEz5Tml8v+5g7X1mfV4ZgTsNz9yuFbDMjJptkJJi7qggOIYp0kf7GZpjg6rMym9V5vut
 jRpBULOdh19hrqDyyIiCqo/zOl/Ccl6nfkx1Pdq2Dku9bFSDUzDNcErZ5FczPCgnBQ8u1U4e
 Zu5Sa0ppBXBRTPkGDW/N7TTSxnkUKyvD4LjfMTtXpCSoETAYUh7LD3vXklUKvoLhiKqrzPI9
 MeSf00I8wmNW9yWkqp/VWHBubcGUjbUC32BHTq8fblrAS+1EoJsXfwgvZv0UsoxdYFUJ9D6P
 3DMqN00J9zbuJ+V9MkOM4xBfKtDGrDWBTNN3/XB2/GOuUoB1LhwqSHuYncwomRCcY1JV8J6c
 /8eUIdumgod030D8qSmJVN7xDWWW24GS/g08dE+vFCy8vBrZfQQFm+oWoV4rydiuRaBteeV+
 e4OZpQDfOmJWzyGZxR1wm7X5VJM3ERXMAcp95+Aju104r2A5yvsvaefOfYJbLrHzphUmTjAm
 EbVDy2IMlb9EikVnLxnRCUUXLwfU70+452DcHhjqEu4ZlIMpcJvhkeiFy/6M3OITpesrYudE
 87O7/jmrPTnxjCwY8J1RQaBvNwNDcn3Fy7aQI6meYjCTKFTYo+
X-IronPort-AV: E=Sophos;i="5.97,258,1669093200"; 
   d="scan'208";a="97314012"
Date: Mon, 30 Jan 2023 18:07:13 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 3/6] tools: Introduce a non-truncating
 xc_xenver_capabilities()
Message-ID: <Y9gHUZ0acOr17ESM@perard.uk.xensource.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-4-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230117135336.11662-4-andrew.cooper3@citrix.com>

On Tue, Jan 17, 2023 at 01:53:33PM +0000, Andrew Cooper wrote:
> Update libxl and the ocaml stubs to match.  No API/ABI change in either.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,


-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:08:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:08:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487126.754617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYZd-0007sg-6i; Mon, 30 Jan 2023 18:08:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487126.754617; Mon, 30 Jan 2023 18:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYZd-0007sZ-2a; Mon, 30 Jan 2023 18:08:09 +0000
Received: by outflank-mailman (input) for mailman id 487126;
 Mon, 30 Jan 2023 18:08:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dpD8=53=citrix.com=prvs=3879b2cf9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMYZc-0007o2-0J
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:08:08 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 09ed18af-a0c9-11ed-8ba2-5fe241e16ab0;
 Mon, 30 Jan 2023 19:08:05 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09ed18af-a0c9-11ed-8ba2-5fe241e16ab0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675102085;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=VEkJZWBAAtXC9BWq4tD82Ej5MvxPoFi+Ko2JT2G+O9I=;
  b=fS07MpDGRJImPDOjjSuAnECesluuFa1rd8LRfi8pWo2tsxVT4EHx4rlJ
   bCqF8PKnpBT1xd5ALH/L8H3RVg//haLnbbexxjrM4U/ONpCvOM8iF1V3u
   b8vHxwe4jlnEjs4aBmB/Bw136z6zEGciZkpqWg/OsZzSa3fRIsDMt9t75
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 94887947
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:wwyh0KBImoi//BVW/x/jw5YqxClBgxIJ4kV8jS/XYbTApD4r3jJRz
 TMXXm2CaPyPZDamfdByYY+/8E9XvZ6EyddqQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtMpvlDs15K6p4GpD5gRkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw6vgwJXhIz
 fwkazkOXBGk2OioxJPhRbw57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTIJs4gOevgGi5azBCoUiZjaE2/3LS3Ep6172F3N/9K4DaFZoEwhnwS
 mTuxVWhHhoBOtejlAGpzl6UgrDXjznBR9dHfFG/3qEz2wDCroAJMzUGWF3+rfSnh0qWX9NEN
 1dS6icotbI19kGgUp/6RRLQiGGAlg4RXZxXCeJSwAOC0K3P+C6CG3MJCDVGbbQOuMYoSBQw2
 1SOntevAiZg2JWcUX+H/62YhS+zMyMSa2QFYEc5oRAtuoe55ttp11SWE4glSfTu5jHoJd3u6
 yCU6wwngY0TsY0C1Je62g/NnTaN/JecG2bZ+T7rdm6i6wp4YqusaIqp9UXX4J58EWqJcrWSl
 CNawpbDtYjiGbnIzXXQG7tVQNlF8t7faFXhbUhT847NHthH01qqZshu7T53Py+F2e5UKGayM
 Cc/Ve68jaK/3UdGj4ctOOpd6Oxwl8AM8OgJsdiJBueimrArKGe6ENhGPCZ8JVzFnkk2ir0YM
 pyGa8uqBntyIf05k2fuHrhEgeNzl39WKYbvqXfTlkTP7FZjTCTNFedt3KWmMYjVE59oUC2Kq
 o0CZqNmOj1UUfHkYzm/zGLgBQliEJTPPriv85Y/XrfacmJb9JQJV6e5LUUJJ9Y0wMy4V47go
 hmAZ6Ov4AGm3iWeclTXMxiOqtrHBP5CkJ7yBgR0VX7A5pTpSdzHAHs3H3fvQYQayQ==
IronPort-HdrOrdr: A9a23:aEJNO6x/eKmQZN7QhQ9fKrPwLL1zdoMgy1knxilNoRw8SKKlfu
 SV7ZAmPH7P+VMssR4b9OxoVJPtfZqYz+8T3WBzB8bBYOCFgguVxehZhOOIqQEIWReOldK1vZ
 0QFZSWY+eQMbEVt6nH3DU=
X-IronPort-AV: E=Sophos;i="5.97,258,1669093200"; 
   d="scan'208";a="94887947"
Date: Mon, 30 Jan 2023 18:07:49 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 4/6] tools: Introduce a non-truncating
 xc_xenver_changeset()
Message-ID: <Y9gHdQerdxcPXZLh@perard.uk.xensource.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-5-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230117135336.11662-5-andrew.cooper3@citrix.com>

On Tue, Jan 17, 2023 at 01:53:34PM +0000, Andrew Cooper wrote:
> Update libxl and the ocaml stubs to match.  No API/ABI change in either.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,


-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:08:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:08:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487128.754627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYZw-0008M9-DE; Mon, 30 Jan 2023 18:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487128.754627; Mon, 30 Jan 2023 18:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYZw-0008M2-AL; Mon, 30 Jan 2023 18:08:28 +0000
Received: by outflank-mailman (input) for mailman id 487128;
 Mon, 30 Jan 2023 18:08:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dpD8=53=citrix.com=prvs=3879b2cf9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMYZv-0008Lc-EH
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:08:27 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 16bff37a-a0c9-11ed-93eb-7b0ecb3c1525;
 Mon, 30 Jan 2023 19:08:26 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16bff37a-a0c9-11ed-93eb-7b0ecb3c1525
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675102106;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=hY8ZpsY3akN3Q7iBsmMh+rNm83298kWao+j2lsTm8yo=;
  b=JWOLbV0JKuRQMyUhZQIGxaOk2RZ4WRYu/a5BziF9Oit6ni48j5V0YXGh
   bOQE48jcYJn0VVXghEiwAsSjHo5PBM8xtNegEWmUKs9QbaOyo7pIvTqGS
   j8xbgbWfdk5lTrBym8aPamOmg/ewV2ONFnberzV7yor3ewPK5/nywIMFM
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 97314179
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:R68ImKjk6n6H4DZyh4bZyNRaX161RxAKZh0ujC45NGQN5FlHY01je
 htvWmGPbPncYGehfoskYY6w9E0BsZ6HzYdnQFNt+yk1QiMb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmUpH1QMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWt0N8klgZmP6sT5QSGzyN94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQfCi4vZDfSptuN2bGSceM22u8jFcDSadZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XejtEqFWTtOwv7nLa1gBZ27nxKtvFPNeNQK25m27J+
 Tmfoz2mU3n2MvSgzxul3lKCg9Tm3huiXtoZK+ye5PpT1Qj7Kms7V0RNCArTTeOCol6zXZdTJ
 lIZ/gIqrLMu7wq7Q9/lRRq6rXWY+BkGVLJ4DOkS+AyLjK3O7G6xBGceSSVaQMc7r8JwTjsvv
 neAh97zDCZjmKGUQ3masLyTqFuP1TM9dDFYI3VeFE1cvoel+dto5v7Scjp9OKmXkP//PmDR+
 guTrwEFje9Pps4y3pzuqDgrnAmQjpTOSwc04CDeUWSk8h51aeaZWmC41bTIxa0eddjEFzFtq
 FBBwpHDt75WUflhgQTXGI0w8KeVC+Fp2dE2qXpmBNEf+juk4BZPlqgAsWgldC+F3ivpEAIFg
 XM/WysLv/e/31PwN8ebhr5d7Ox3pZUM7fy/CpjpgiNmO/CdjjOv8iB0flK31GvwikUqmqxXE
 c7FLpv0VClDWfg/nWXeqwIhPVgDn3BW+I8ubcqjk0TPPUS2OxZ5tovpwHPRN7tkvctoUS3e8
 spFNtvi9vmseLSWX8UjyqZKdQpiBSFiVfjLRzl/KrbrzvxORDtwVJc8ANoJJ+RYokiivryRp
 SnkAR4FkTISRxTvcG23V5yqU5u3Nb4XkJ7xFXVE0YqAs5T7XbuS0Q==
IronPort-HdrOrdr: A9a23:XgSVhaEfgLxr+IXrpLqE+ceALOsnbusQ8zAXPiFKJCC9F/by/f
 xG885rtiMc9wxhOk3I9ervBEDiex/hHPxOgbX5VI3KNDUO01HGEGgN1+rfKjTbakjDytI=
X-IronPort-AV: E=Sophos;i="5.97,258,1669093200"; 
   d="scan'208";a="97314179"
Date: Mon, 30 Jan 2023 18:08:15 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: Re: [PATCH 5/6] tools: Introduce a non-truncating xc_xenver_cmdline()
Message-ID: <Y9gHj4jtzRmJYsRN@perard.uk.xensource.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-6-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230117135336.11662-6-andrew.cooper3@citrix.com>

On Tue, Jan 17, 2023 at 01:53:35PM +0000, Andrew Cooper wrote:
> Update libxl to match.  No API/ABI change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:09:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:09:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487136.754636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYaj-0000cA-RP; Mon, 30 Jan 2023 18:09:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487136.754636; Mon, 30 Jan 2023 18:09:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYaj-0000c3-OF; Mon, 30 Jan 2023 18:09:17 +0000
Received: by outflank-mailman (input) for mailman id 487136;
 Mon, 30 Jan 2023 18:09:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dpD8=53=citrix.com=prvs=3879b2cf9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMYai-0000bo-LA
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:09:16 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 33138544-a0c9-11ed-8ba2-5fe241e16ab0;
 Mon, 30 Jan 2023 19:09:14 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33138544-a0c9-11ed-8ba2-5fe241e16ab0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675102154;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=xouqlqiYb7B9rW3YcR6qZFZRtnr9jXX8J8LIOIh3rbE=;
  b=FRDCoBen31wtHZ9Du5LvDM/Fz5Q1P+Ih/P6Vslh0+0Pdz1gkxe9vjeT1
   FUoJuntmCXTWYeSC+7KsMgySEBnQEieng554pbdIPfBXw0vzdaentIwzY
   CA9BqNMt+1XGUeP4xyG83f+4CxJ/T8ZNEuBkCIDL8PuyqKxwdyRFB6KxJ
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95283211
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:SFDY0aDvb2slChVW/w7jw5YqxClBgxIJ4kV8jS/XYbTApDkkhDxVy
 WUeXziPPPuLMTahLYggbI3ipkxTsceAx9MwQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtMpvlDs15K6p4GpD5gRkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw/7hKXWwU8
 uEhdw8hMDKawOvon5K7Vbw57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTIJs4gOevgGi5azBCoUiZjaE2/3LS3Ep6172F3N/9K4HQFZ4Pxx/wS
 mTuwSf+WTM7Ctijyzep1zWd2u/zlCLZcddHfFG/3qEz2wDCroAJMzUUWkG8uuKRkVOlVpRUL
 El80iM2oLI77kCDUtj3VBr+q3mB1jYDX/JAHut87xuCooLE7gDcCmUaQzppbN09qNRwVTEsz
 kWOnd7iGXpoqrL9dJ6G3u7K93XoY3FTdDJcI3ZeFmPp/uUPvqk20C+TQ4xkDZfqsfGrOyi3y
 m7VjCgh0uB7YdEw6423+lXOgjSJr5fPTxIo6gi/Yl9J/j+Vd6b+OdX2tAGzAeJoad/AEwLf5
 CRsd922trhmMH2bqMCarAzh9pmN7u3NDjDTiEUH83IJp2X0oC7LkWy9DVhDyKZV3iQsI2SBj
 Kz741k5CHpv0JyCMMdKj3qZUZhC8EQZPY2NugroRtRPeINtUwSM4TtjY0Wdt0i0zhdxyfhgY
 MfHKZfzZZr/NUiA5GPmL9rxLJdxnnxurY8tbc+TI+ubPUq2OyfOFOZt3KqmZeEl9qKUyDg5A
 P4GX/ZmPy53CbWkCgGOqN57ELz/BSRjbXwAg5ANJ7Hrz8sPMD1JNsI9Npt6Itc9xv8Ey76gE
 7PUchYw9WcTTEbvcW2iAk2Popu1NXqjhRrX5RARAGs=
IronPort-HdrOrdr: A9a23:0FfoiKs2e7jvxr5gB0Yi90Qc7skDZ9V00zEX/kB9WHVpm62j+v
 xG+c5xvyMc5wxhO03I5urwWpVoLUmzyXcX2+Us1NWZPDUO0VHARL2KhrGM/9SPIUzDH+dmpM
 JdT5Q=
X-IronPort-AV: E=Sophos;i="5.97,258,1669093200"; 
   d="scan'208";a="95283211"
Date: Mon, 30 Jan 2023 18:09:03 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>
Subject: Re: [PATCH 6/6] tools: Introduce a xc_xenver_buildid() wrapper
Message-ID: <Y9gHv3Ype4+x4r8L@perard.uk.xensource.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
 <20230117135336.11662-7-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230117135336.11662-7-andrew.cooper3@citrix.com>

On Tue, Jan 17, 2023 at 01:53:36PM +0000, Andrew Cooper wrote:
> ... which converts binary content to hex automatically.
> 
> Update libxl to match.  No API/ABI change.
> 
> This removes a latent bug for cases when the buildid is longer than 4092
> bytes.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,


-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:10:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:10:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487142.754647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYcD-00022d-4z; Mon, 30 Jan 2023 18:10:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487142.754647; Mon, 30 Jan 2023 18:10:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYcD-00022W-2A; Mon, 30 Jan 2023 18:10:49 +0000
Received: by outflank-mailman (input) for mailman id 487142;
 Mon, 30 Jan 2023 18:10:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dpD8=53=citrix.com=prvs=3879b2cf9=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMYcB-00022F-Eo
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:10:47 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68a2551d-a0c9-11ed-93eb-7b0ecb3c1525;
 Mon, 30 Jan 2023 19:10:45 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68a2551d-a0c9-11ed-93eb-7b0ecb3c1525
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675102246;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=b0f6lavD++iE+TA/+evuMB1/iYP5KiQ/PTS8XIwKCt4=;
  b=UhniO0SvN56Mfcqlenvhzkje4Ro0Cfz9xE8xckWezBWjZn/ShVL55m5L
   B9bR+qyRRdJyjxujr8VvzKfrcDqWG0/q9WG6IZs0m9S0EKDJnGB/a75QD
   0Ygky+8GpjicR2dmRr6x8myXt6YhljJPPL5VXyqpacZWcUW/qmeJxO0qj
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93750567
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:WLePWKmjZDQvLqn2Qa1NtGvo5gy3JkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIcUGmOaPiIMGqmLd9wYdy/8U5VvJ7dyIBiSVFrrX88FiMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icf3grHmeIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7auaVA8w5ARkPqgS5weGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 aZfcDtXMA++vP+n3J7kSPFhh504J9a+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglH2dSFYr1SE47I6+WHJwCR60aT3McqTcduPLSlQthfD+
 T+eojqmav0cHPiu8DGP8TWJvf7OwSfLdJMURbuzp8c/1TV/wURMUUZLBDNXu8KRhkegVvpFJ
 kcT+y5oqrI9nGSiVtTnVge0iGKFtBUbHdFXFoUS+AyLj6bZ/QudLmwFVSJaLswrstcsQj4n3
 UPPmMnmbRRtv6eSUmm17aqPoHW5Pi19BXAGTT8JS00C+daLnW0opkuRFJA5Svfz14CrX2iqm
 FhmsRTSmZ1JypYAjfukwGvaki6A+ZrRQw9s/Q7ICzfNAhxCWKapYImh6F7+5PlGLZqEQlTpg
 EXoi/Ry/8hVU8jTyXXlrPElWejwuq3baGG0bUtHRcFJyti7x5K0kWm8ChlaLVwhDMsLcCSBj
 KT76VIIv8870JdHgMZKj2ON5yYCl/OI+TfNDKq8gj9yjn9ZKWe6ENlGPxL44owUuBFEfVsDE
 Zmaa92wKn0RFL5qyjG7L89Ej+B2nnlhnDOPHcGkp/hC7VZ5TCfFIYrpzXPUNrxphE96iFq9H
 ylj2zuilEwEDbyWjtj/+o8PN1EaRUXX9rivw/G7gtWre1I8cEl4Uq+5/F/UU9A990ijvruSr
 y7Vt44x4AaXuEAr3i3RMys7Mei+AM8XQLBSFXVEAGtEEkMLOe6HhJrzvbNrFVX73ISPFcJJc
 sQ=
IronPort-HdrOrdr: A9a23:ClzYQK5kuajZjXsqBAPXwMbXdLJyesId70hD6qkRc3xom6mj/f
 xG88536faZslkssQgb6LK90cq7IE80i6Qf3WB5B97LYOCMggeVBbxPxa+n/hulIhbZ29J26M
 5bAstD4bPLY2RHsQ==
X-IronPort-AV: E=Sophos;i="5.97,258,1669093200"; 
   d="scan'208";a="93750567"
Date: Mon, 30 Jan 2023 18:10:26 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, "Juergen
 Gross" <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 0/6] tools: Switch to non-truncating XENVER_* ops
Message-ID: <Y9gIEpQ5G1inUDX7@perard.uk.xensource.com>
References: <20230117135336.11662-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230117135336.11662-1-andrew.cooper3@citrix.com>

On Tue, Jan 17, 2023 at 01:53:30PM +0000, Andrew Cooper wrote:
> This is the tools side of the Xen series posted previously.

There are also Python bindings using xc_version(); is that something we
want to update?

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:29:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487151.754656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYtY-0004Ap-KK; Mon, 30 Jan 2023 18:28:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487151.754656; Mon, 30 Jan 2023 18:28:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYtY-0004Ai-Ha; Mon, 30 Jan 2023 18:28:44 +0000
Received: by outflank-mailman (input) for mailman id 487151;
 Mon, 30 Jan 2023 18:28:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMYtX-0004Aa-5y
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:28:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMYtX-0000h3-41; Mon, 30 Jan 2023 18:28:43 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMYtW-0004Ps-S0; Mon, 30 Jan 2023 18:28:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=QHeyb39BAUbBq7kVi2Vj45hoLod0vCzIr6POB59arQY=; b=qYChNV
	uGSGzG8G0/IL4MYnug029eOK+NZO9TKW3l2K5Q0rWyepqUN1Idraz1S/e0rHlmSTnYaiNMENU9BF2
	5E9ES212yhB2gQx7sOf7WTb0PllYA8/ke3CrMc2VhbyxJzaaX9QCaLoUTJFweOrETOsDor4+KRv7H
	vwJ5AYfL6gg=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH] xen/common: rwlock: Constify the parameter of _rw_is{,_write}_locked()
Date: Mon, 30 Jan 2023 18:28:40 +0000
Message-Id: <20230130182840.86744-1-julien@xen.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

The lock is not meant to be modified by _rw_is{,_write}_locked(), so
constify it.

This makes it possible to assert that the lock is held even when the
underlying structure is const.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/xen/rwlock.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index b8d52a5aa939..e0d2b41c5c7e 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -149,7 +149,7 @@ static inline void _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
     local_irq_restore(flags);
 }
 
-static inline int _rw_is_locked(rwlock_t *lock)
+static inline int _rw_is_locked(const rwlock_t *lock)
 {
     return atomic_read(&lock->cnts);
 }
@@ -254,7 +254,7 @@ static inline void _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
     local_irq_restore(flags);
 }
 
-static inline int _rw_is_write_locked(rwlock_t *lock)
+static inline int _rw_is_write_locked(const rwlock_t *lock)
 {
     return (atomic_read(&lock->cnts) & _QW_WMASK) == _QW_LOCKED;
 }
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:29:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487152.754667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYtr-0004VU-SC; Mon, 30 Jan 2023 18:29:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487152.754667; Mon, 30 Jan 2023 18:29:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYtr-0004VN-P8; Mon, 30 Jan 2023 18:29:03 +0000
Received: by outflank-mailman (input) for mailman id 487152;
 Mon, 30 Jan 2023 18:29:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMYtq-0004V1-I9
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:29:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMYtq-0000hQ-Cp; Mon, 30 Jan 2023 18:29:02 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMYtq-0004Q6-48; Mon, 30 Jan 2023 18:29:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=QHeyb39BAUbBq7kVi2Vj45hoLod0vCzIr6POB59arQY=; b=5MW5wf
	IktPHTedVDpOTeYX5WQ+DysHeLljYLx4BjQFlHICsbg2pqmyHE3LrLuLekE5DTpdBrxkp+Tie8UWs
	YswpKmQPjvkr7VDY82wZiwde+rVPSfzl98xQGXJFaf1ptGKLr4gNZJ8XXhlj6YoCgRQm0FObAEw6E
	k6iFmibySDo=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/common: rwlock: Constify the parameter of _rw_is{,_write}_locked()
Date: Mon, 30 Jan 2023 18:28:58 +0000
Message-Id: <20230130182858.86886-1-julien@xen.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

The lock is not meant to be modified by _rw_is{,_write}_locked(), so
constify it.

This makes it possible to assert that the lock is held even when the
underlying structure is const.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/xen/rwlock.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index b8d52a5aa939..e0d2b41c5c7e 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -149,7 +149,7 @@ static inline void _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
     local_irq_restore(flags);
 }
 
-static inline int _rw_is_locked(rwlock_t *lock)
+static inline int _rw_is_locked(const rwlock_t *lock)
 {
     return atomic_read(&lock->cnts);
 }
@@ -254,7 +254,7 @@ static inline void _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
     local_irq_restore(flags);
 }
 
-static inline int _rw_is_write_locked(rwlock_t *lock)
+static inline int _rw_is_write_locked(const rwlock_t *lock)
 {
     return (atomic_read(&lock->cnts) & _QW_WMASK) == _QW_LOCKED;
 }
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:29:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:29:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487159.754677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYuN-00058t-52; Mon, 30 Jan 2023 18:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487159.754677; Mon, 30 Jan 2023 18:29:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYuN-00058k-1i; Mon, 30 Jan 2023 18:29:35 +0000
Received: by outflank-mailman (input) for mailman id 487159;
 Mon, 30 Jan 2023 18:29:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMYuM-00058c-Dp
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:29:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMYuM-0000iC-Cr; Mon, 30 Jan 2023 18:29:34 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMYuM-0004QW-7r; Mon, 30 Jan 2023 18:29:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=tzqDTVV8yfUrv9E4PbVKIfBZM3FKqaSLTNPG7sYwDNI=; b=n2GdjzizIy3/+kWvwYHUrllMT9
	C4BrrxMFcV6Nrb67tQJ9vdGVpefGNujkjmQb9/4ETL7Nqey392VPq2cCC49j5oWPEPqbIU0nRhmh9
	9zC95hnUQf2kCDmHo7nuKlatqM5yQ0bqoUNVmr9OZlMWJ+uAm7REukCLdq92yTQq+r+0=;
Message-ID: <865b2983-ca59-a135-e52d-8b33b5ef1c81@xen.org>
Date: Mon, 30 Jan 2023 18:29:32 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/common: rwlock: Constify the parameter of
 _rw_is{,_write}_locked()
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>
References: <20230130182840.86744-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230130182840.86744-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

I forgot to CC the maintainers. Please ignore this version.

Cheers,

On 30/01/2023 18:28, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The lock is not meant to be modified by _rw_is{,_write}_locked(), so
> constify it.
> 
> This makes it possible to assert that the lock is held even when the
> underlying structure is const.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>   xen/include/xen/rwlock.h | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
> index b8d52a5aa939..e0d2b41c5c7e 100644
> --- a/xen/include/xen/rwlock.h
> +++ b/xen/include/xen/rwlock.h
> @@ -149,7 +149,7 @@ static inline void _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
>       local_irq_restore(flags);
>   }
>   
> -static inline int _rw_is_locked(rwlock_t *lock)
> +static inline int _rw_is_locked(const rwlock_t *lock)
>   {
>       return atomic_read(&lock->cnts);
>   }
> @@ -254,7 +254,7 @@ static inline void _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
>       local_irq_restore(flags);
>   }
>   
> -static inline int _rw_is_write_locked(rwlock_t *lock)
> +static inline int _rw_is_write_locked(const rwlock_t *lock)
>   {
>       return (atomic_read(&lock->cnts) & _QW_WMASK) == _QW_LOCKED;
>   }

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:31:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:31:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487164.754687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYvo-0006cs-FS; Mon, 30 Jan 2023 18:31:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487164.754687; Mon, 30 Jan 2023 18:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMYvo-0006cj-CX; Mon, 30 Jan 2023 18:31:04 +0000
Received: by outflank-mailman (input) for mailman id 487164;
 Mon, 30 Jan 2023 18:31:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMYvn-0006cd-JX
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:31:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMYvn-0000lg-Ai; Mon, 30 Jan 2023 18:31:03 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMYvn-0004Rf-5e; Mon, 30 Jan 2023 18:31:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=uOs7yU0ltVCxvR2JglsK9WgzzaL16tYx6DpVFE59/fU=; b=uq+Ab13HD7rvh7/pBSrQLK9rop
	VWQQpdf7SOEb/OgjPGO54+Yop9E+8sUoN+J3+APgipoQdpZu/YoPm/GPG5i+Z/tYBZZYvr1SdKFUk
	qRkujVkpcYr6bbZGr06zsycfCXwMS7bIdEHYMawOvyvxY49osOq1+veBWCPuCYeL2jE8=;
Message-ID: <ca9eb6e5-a4dd-f27d-09e1-fba7c407e014@xen.org>
Date: Mon, 30 Jan 2023 18:31:01 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v5 1/5] xen/arm32: head: Widen the use of the temporary
 mapping
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-2-julien@xen.org>
 <3872442b-b9d0-4594-0869-d505aefa5aa5@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <3872442b-b9d0-4594-0869-d505aefa5aa5@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 30/01/2023 08:58, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> 
> On 27/01/2023 20:55, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, the temporary mapping is only used when the virtual
>> runtime region of Xen is clashing with the physical region.
>>
>> In follow-up patches, we will rework how secondary CPU bring-up works
>> and it will be convenient to use the fixmap area for accessing
>> the root page-table (it is per-cpu).
>>
>> Rework the code to use the temporary mapping when the Xen physical
>> address is not overlapping with the temporary mapping.
>>
>> This also has the advantage of simplifying the logic to identity map
>> Xen.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>
>> Even if this patch is rewriting part of the previous patch, I decided
>> to keep them separated to help the review.
>>
>> The "follow-up patches" are still in draft at the moment. I still haven't
>> found a way to split them nicely without requiring too much more work
>> on the coloring side.
>>
>> I have provided some medium-term goal in the cover letter.
>>
>>      Changes in v5:
>>          - Fix typo in a comment
>>          - No need to link boot_{second, third}_id again if we need to
>>            create a temporary area.
>>
>>      Changes in v3:
>>          - Resolve conflicts after switching from "ldr rX, <label>" to
>>            "mov_w rX, <label>" in a previous patch
>>
>>      Changes in v2:
>>          - Patch added
>> ---
>>   xen/arch/arm/arm32/head.S | 85 +++++++--------------------------------
>>   1 file changed, 15 insertions(+), 70 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>> index df51550baa8a..93b0af114b0c 100644
>> --- a/xen/arch/arm/arm32/head.S
>> +++ b/xen/arch/arm/arm32/head.S
> ...
>> @@ -675,33 +641,12 @@ remove_identity_mapping:
>>           /* r2:r3 := invalid page-table entry */
>>           mov   r2, #0x0
>>           mov   r3, #0x0
>> -        /*
>> -         * Find the first slot used. Remove the entry for the first
>> -         * table if the slot is not XEN_FIRST_SLOT.
>> -         */
> Could you please add an empty line here to improve readability?

Sure. I will do that on commit.

> 
>> +        /* Find the first slot used and remove it */
> 
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Thanks!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 18:36:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 18:36:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487170.754697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMZ0u-0007SB-1F; Mon, 30 Jan 2023 18:36:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487170.754697; Mon, 30 Jan 2023 18:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMZ0t-0007S4-Ud; Mon, 30 Jan 2023 18:36:19 +0000
Received: by outflank-mailman (input) for mailman id 487170;
 Mon, 30 Jan 2023 18:36:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMZ0s-0007Ry-0I
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 18:36:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMZ0r-0000rZ-Kx; Mon, 30 Jan 2023 18:36:17 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMZ0r-0004jz-EU; Mon, 30 Jan 2023 18:36:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=a2kQJNrsHRx03dlxBg4S4RGE4AGxSGg4a/xMYq/lqP8=; b=2w3Fq/YXv3MsooNsbTyC3JtLYu
	IbrPjujvy3/ItQGWAhTlUrgoU8Pn61s6+Bh1j4hG0b4YlvGiX+GPaocHrZxtBp3oUQnC/jxA94qYd
	gbf4OvQW+CFID+5koYpm3eekD01/eK97I+DfZ5rs7uSkQCkymy9GAwJ1br4+Aw75n4j8=;
Message-ID: <044a4899-661b-8a84-d949-dd14d1d32383@xen.org>
Date: Mon, 30 Jan 2023 18:36:15 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v5 2/5] xen/arm64: Rework the memory layout
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-3-julien@xen.org>
 <af0bf576-373c-353e-b575-40f5bbde861f@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <af0bf576-373c-353e-b575-40f5bbde861f@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 30/01/2023 09:08, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 27/01/2023 20:55, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Xen is currently not fully compliant with the Arm Arm because it will
>> switch the TTBR with the MMU on.
>>
>> In order to be compliant, we need to disable the MMU before
>> switching the TTBR. The implication is the page-tables should
>> contain an identity mapping of the code switching the TTBR.
>>
>> In most cases we expect Xen to be loaded in low memory. I am aware
>> of one platform (i.e. AMD Seattle) where memory starts above 512GB.
>> To give us some slack, consider that Xen may be loaded in the first 2TB
>> of the physical address space.
>>
>> The memory layout is reshuffled to keep the first four slots of the zeroeth
>> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
>> tweak of the boot code because XEN_VIRT_START cannot be used as an
>> immediate.
>>
>> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
>> loaded below 2TB.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
>> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>>
>> ---
>>      Changes in v5:
>>          - We are reserving 4 slots rather than 2.
>>          - Fix the addresses in the layout comment.
>>          - Fix the size of the region in the layout comment
>>          - Add Luca's tested-by (the reviewed-by was not added
>>            because of the changes requested by Michal)
>>          - Add Michal's reviewed-by
>>
>>      Changes in v4:
>>          - Correct the documentation
>>          - The start address is 2TB, so slot0 is 4 not 2.
>>
>>      Changes in v2:
>>          - Reword the commit message
>>          - Load Xen at 2TB + 2MB
>>          - Update the documentation to reflect the new layout
>> ---
>>   xen/arch/arm/arm64/head.S         |  3 ++-
>>   xen/arch/arm/include/asm/config.h | 34 +++++++++++++++++++++----------
>>   xen/arch/arm/mm.c                 | 11 +++++-----
>>   3 files changed, 31 insertions(+), 17 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index 4a3f87117c83..663f5813b12e 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -607,7 +607,8 @@ create_page_tables:
>>            * need an additional 1:1 mapping, the virtual mapping will
>>            * suffice.
>>            */
>> -        cmp   x19, #XEN_VIRT_START
>> +        ldr   x0, =XEN_VIRT_START
>> +        cmp   x19, x0
>>           bne   1f
>>           ret
>>   1:
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 5df0e4c4959b..e388462c23d1 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -72,16 +72,13 @@
>>   #include <xen/page-size.h>
>>
>>   /*
>> - * Common ARM32 and ARM64 layout:
>> + * ARM32 layout:
>>    *   0  -   2M   Unmapped
>>    *   2M -   4M   Xen text, data, bss
>>    *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>    *   6M -  10M   Early boot mapping of FDT
>>    *   10M - 12M   Livepatch vmap (if compiled in)
>>    *
>> - * ARM32 layout:
>> - *   0  -  12M   <COMMON>
>> - *
>>    *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>>    *                    space
>> @@ -90,14 +87,23 @@
>>    *   2G -   4G   Domheap: on-demand-mapped
>>    *
>>    * ARM64 layout:
>> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
>> - *   0  -  12M   <COMMON>
>> + * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
>> + *
>> + *  Reserved to identity map Xen
>> + *
>> + * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
>> + *  (Relative offsets)
>> + *   0  -   2M   Unmapped
>> + *   2M -   4M   Xen text, data, bss
>> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>> + *   6M -  10M   Early boot mapping of FDT
>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>    *
>>    *   1G -   2G   VMAP: ioremap and early_ioremap
>>    *
>>    *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
>>    *
>> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>> + * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
>>    *  Unused
>>    *
>>    * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
>> @@ -107,7 +113,17 @@
>>    *  Unused
>>    */
>>
>> +#ifdef CONFIG_ARM_32
>>   #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
>> +#else
>> +
>> +#define SLOT0_ENTRY_BITS  39
>> +#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
>> +#define SLOT0_ENTRY_SIZE  SLOT0(1)
>> +
>> +#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
>> +#endif
>> +
>>   #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
>>
>>   #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
>> @@ -163,10 +179,6 @@
>>
>>   #else /* ARM_64 */
>>
>> -#define SLOT0_ENTRY_BITS  39
>> -#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
>> -#define SLOT0_ENTRY_SIZE  SLOT0(1)
>> -
>>   #define VMAP_VIRT_START  GB(1)
>>   #define VMAP_VIRT_SIZE   GB(1)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index f758cad545fa..0b0edf28d57a 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -153,7 +153,7 @@ static void __init __maybe_unused build_assertions(void)
>>   #endif
>>       /* Page table structure constraints */
>>   #ifdef CONFIG_ARM_64
>> -    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
>> +    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START) < 2);
> I haven't spotted this before but this should be < 4.

Good catch! I am planning to handle it on commit unless there is more to 
fix in this patch.
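
For what it's worth, the slot arithmetic can be sanity-checked numerically. Below is a minimal sketch (not part of the patch) mirroring the SLOT0() macro and XEN_VIRT_START definition from the diff, confirming that the zeroeth-level offset of XEN_VIRT_START is 4, hence the `< 4` comparison:

```python
# Mirror of the SLOT0()/XEN_VIRT_START definitions from the arm64 patch.
SLOT0_ENTRY_BITS = 39           # each L0 slot covers 2^39 bytes (512GB)

def slot0(slot):
    """Virtual address where L0 slot 'slot' begins."""
    return slot << SLOT0_ENTRY_BITS

# Xen is now loaded at 2TB + 2MB, i.e. 2MB into L0 slot 4.
XEN_VIRT_START = slot0(4) + 2 * 1024 ** 2

# Equivalent of zeroeth_table_offset(): which L0 slot an address falls in.
zeroeth_offset = XEN_VIRT_START >> SLOT0_ENTRY_BITS

assert slot0(4) == 2 * 1024 ** 4        # slot 4 starts at exactly 2TB
assert zeroeth_offset == 4              # so BUILD_BUG_ON must check "< 4"
```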

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 19:00:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 19:00:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487178.754706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMZO3-0002lG-13; Mon, 30 Jan 2023 19:00:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487178.754706; Mon, 30 Jan 2023 19:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMZO2-0002l9-Uk; Mon, 30 Jan 2023 19:00:14 +0000
Received: by outflank-mailman (input) for mailman id 487178;
 Mon, 30 Jan 2023 19:00:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMZO1-0002l3-9c
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 19:00:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMZO0-0001Ri-E8; Mon, 30 Jan 2023 19:00:12 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMZO0-0005iH-8C; Mon, 30 Jan 2023 19:00:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=i4UteikAAiy/qR9ogw3voYDumzbXr6mpYdSZf/5RnbU=; b=NHukpuSq32Y3ROVxH8I/yz5mEi
	5fUYf4o5ZCavtNV4RcbvewWUJDzTjPEv7BR3/p+3O3sybyYds0Gd6FWOsTONKw6awX0MJOV8eBBnj
	7EHcJQp/1NYuYtrv1H0uBcCmOlm9ejrbPcbkdhE74TOqKGGv9RhbHmIilEHKKMlXimBM=;
Message-ID: <2edca838-2616-7434-62d7-a69dd9e00797@xen.org>
Date: Mon, 30 Jan 2023 19:00:10 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] x86: idle domains don't have a domain-page mapcache
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <139bab3a-fd3e-fec2-b7b4-f63dd9f9439a@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <139bab3a-fd3e-fec2-b7b4-f63dd9f9439a@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 05/01/2023 11:09, Jan Beulich wrote:
> First and foremost correct a comment implying the opposite. Then, to
> make things more clear PV-vs-HVM-wise, move the PV check earlier in the
> function, making it unnecessary for both callers to perform the check
> individually. Finally return NULL from the function when using the idle
> domain's page tables, allowing a dcache->inuse check to also become an
> assertion.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -28,8 +28,11 @@ static inline struct vcpu *mapcache_curr
>       /*
>        * When current isn't properly set up yet, this is equivalent to
>        * running in an idle vCPU (callers must check for NULL).
> +     *
> +     * Non-PV domains don't have any mapcache.  For idle domains (which
> +     * appear to be PV but also have no mapcache) see below.
>        */
> -    if ( !v )
> +    if ( !v || !is_pv_vcpu(v) )
>           return NULL;

I am trying to figure out the implications of this change for my patch 
[1]. Is this expected to go in beforehand?

If so, is the expectation that I will remove !is_pv_vcpu() and replace 
it with a new check...

>   
>       /*
> @@ -41,19 +44,22 @@ static inline struct vcpu *mapcache_curr
>           return NULL;

... right here?

if ( !is_pv_vcpu(v) )
   return v;


>   
>       /*
> -     * If guest_table is NULL, and we are running a paravirtualised guest,
> -     * then it means we are running on the idle domain's page table and must
> -     * therefore use its mapcache.
> +     * If guest_table is NULL for a PV domain (which includes IDLE), then it
> +     * means we are running on the idle domain's page tables and therefore
> +     * must not use any mapcache.
>        */
> -    if ( unlikely(pagetable_is_null(v->arch.guest_table)) && is_pv_vcpu(v) )
> +    if ( unlikely(pagetable_is_null(v->arch.guest_table)) )
>       {
>           /* If we really are idling, perform lazy context switch now. */
> -        if ( (v = idle_vcpu[smp_processor_id()]) == current )
> +        if ( idle_vcpu[smp_processor_id()] == current )
>               sync_local_execstate();
>           /* We must now be running on the idle page table. */
>           ASSERT(cr3_pa(read_cr3()) == __pa(idle_pg_table));
> +        return NULL;
>       }
>   
> +    ASSERT(!is_idle_vcpu(v));
> +
>       return v;
>   }
>   
> @@ -82,13 +88,12 @@ void *map_domain_page(mfn_t mfn)
>   #endif
>   
>       v = mapcache_current_vcpu();
> -    if ( !v || !is_pv_vcpu(v) )
> +    if ( !v )
>           return mfn_to_virt(mfn_x(mfn));
>   
>       dcache = &v->domain->arch.pv.mapcache;
>       vcache = &v->arch.pv.mapcache;
> -    if ( !dcache->inuse )
> -        return mfn_to_virt(mfn_x(mfn));
> +    ASSERT(dcache->inuse);
>   
>       perfc_incr(map_domain_page_count);
>   
> @@ -187,7 +192,7 @@ void unmap_domain_page(const void *ptr)
>       ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
>   
>       v = mapcache_current_vcpu();
> -    ASSERT(v && is_pv_vcpu(v));
> +    ASSERT(v);
>   
>       dcache = &v->domain->arch.pv.mapcache;
>       ASSERT(dcache->inuse);

Cheers,

[1] https://lore.kernel.org/xen-devel/20221216114853.8227-15-julien@xen.org

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 19:27:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 19:27:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487183.754716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMZo5-0005nX-4R; Mon, 30 Jan 2023 19:27:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487183.754716; Mon, 30 Jan 2023 19:27:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMZo5-0005nQ-1j; Mon, 30 Jan 2023 19:27:09 +0000
Received: by outflank-mailman (input) for mailman id 487183;
 Mon, 30 Jan 2023 19:27:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMZo4-0005nK-1w
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 19:27:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMZo2-00026y-A6; Mon, 30 Jan 2023 19:27:06 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.10.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMZo2-0006kB-4W; Mon, 30 Jan 2023 19:27:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	References:Cc:To:From:Subject:MIME-Version:Date:Message-ID;
	bh=AVVR1lIanMEjwybAaFOdqu5HWODPwL1xu9SWyfCOU28=; b=0MmMxLROU2F1zIvoLqjUkRO3Sk
	emld4FLrHmHL/gEQWmLg6aXNvDnYc3Eggg4O9snkSn3ZTRmSgssf0HiC4I6GeWiFJMlqHtYcbvKT+
	4gw0KS99FGfOJ2L+U5aF0L7O2iOgyVG3tPOO3nbjWT4srRHZx5DB+82MNsLOIPxm1KMo=;
Message-ID: <41de340c-b5ad-6c30-816f-1ce1ddc98069@xen.org>
Date: Mon, 30 Jan 2023 19:27:03 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 05/22] x86/srat: vmap the pages for acpi_slit
Content-Language: en-US
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-6-julien@xen.org>
 <ca02a313-0fa2-8041-3e8f-d467c3e99fb6@suse.com>
 <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
In-Reply-To: <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Jan,

On 23/12/2022 11:31, Julien Grall wrote:
> On 20/12/2022 15:30, Jan Beulich wrote:
>> On 16.12.2022 12:48, Julien Grall wrote:
>>> From: Hongyan Xia <hongyxia@amazon.com>
>>>
>>> This avoids the assumption that boot pages are in the direct map.
>>>
>>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> However, ...
>>
>>> --- a/xen/arch/x86/srat.c
>>> +++ b/xen/arch/x86/srat.c
>>> @@ -139,7 +139,8 @@ void __init acpi_numa_slit_init(struct 
>>> acpi_table_slit *slit)
>>>           return;
>>>       }
>>>       mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>> -    acpi_slit = mfn_to_virt(mfn_x(mfn));
>>> +    acpi_slit = vmap_contig_pages(mfn, PFN_UP(slit->header.length));
>>
>> ... with the increased use of vmap space the VA range used will need
>> growing. And that's perhaps better done ahead of time than late.
> 
> I will have a look at increasing the vmap.

I have started to look at this. The current size of VMAP is 64GB.

At least in the setup I have, I didn't see any particular issue with the 
existing size of the vmap. Looking through the history, the last time it 
was bumped was by one of your commits (see b0581b9214d2), but it is not 
clear what the setup was.

Given I don't have a setup where the VMAP is exhausted, it is not clear 
to me what an acceptable bump would be.

AFAICT, in PML4 slot 261, we still have 62GB reserved for future use. So 
I was thinking of adding an extra 32GB, which would bring the VMAP to 
96GB. This is just a number that doesn't use all the reserved space 
while keeping the increase a power of two.
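
As a quick sanity check on the numbers (taking the 62GB reserved figure from above at face value, not re-derived from the x86 layout):

```python
# Sanity-check the proposed VMAP bump discussed above.
GB = 1024 ** 3

current_vmap = 64 * GB      # current VMAP size
reserved_free = 62 * GB     # space still reserved in PML4 slot 261, per the mail
proposed_extra = 32 * GB    # proposed increase (a power of two: 2**35 bytes)

new_vmap = current_vmap + proposed_extra

assert proposed_extra <= reserved_free   # the bump fits in the reserved space
assert new_vmap == 96 * GB               # resulting VMAP size
assert proposed_extra & (proposed_extra - 1) == 0   # increase is a power of two
```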

Are you fine with that?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 19:39:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 19:39:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487188.754727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMZzn-0007Ua-5t; Mon, 30 Jan 2023 19:39:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487188.754727; Mon, 30 Jan 2023 19:39:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMZzn-0007UT-39; Mon, 30 Jan 2023 19:39:15 +0000
Received: by outflank-mailman (input) for mailman id 487188;
 Mon, 30 Jan 2023 19:39:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMZzm-0007UJ-2v; Mon, 30 Jan 2023 19:39:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMZzm-0002LW-1n; Mon, 30 Jan 2023 19:39:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMZzl-0002z2-IW; Mon, 30 Jan 2023 19:39:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMZzl-0004fS-I0; Mon, 30 Jan 2023 19:39:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/u7ylizqTY5u434PH6RaAZawvp80DlX5LzKchcNE81A=; b=5I+3WsilrNjFf4KiMirOC/au14
	vUm2jBGZ1LbhB1I3+Q2/HqUpRJvnB+mlTbrlACa7tWDnOggpLAz5M4UpV7hoczZcUy9rO1gdQVo1I
	POFP79DGNJORDVzVZoeRlL1ySbTbrCmCmZa8k/bP16ugW199NCAhadr5hdBjFo8XNK0s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176282-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176282: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c59230bce1c6973af4190b418971c1d008340cc4
X-Osstest-Versions-That:
    ovmf=4b384c21ad02fbf5eda25a1516cc72fa66b150f6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 19:39:13 +0000

flight 176282 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176282/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c59230bce1c6973af4190b418971c1d008340cc4
baseline version:
 ovmf                 4b384c21ad02fbf5eda25a1516cc72fa66b150f6

Last test of basis   176280  2023-01-30 14:12:12 Z    0 days
Testing same since   176282  2023-01-30 17:12:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bob Feng <bob.c.feng@intel.com>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4b384c21ad..c59230bce1  c59230bce1c6973af4190b418971c1d008340cc4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 20:55:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 20:55:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487198.754737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbBL-0008Q6-NS; Mon, 30 Jan 2023 20:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487198.754737; Mon, 30 Jan 2023 20:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbBL-0008Pn-J9; Mon, 30 Jan 2023 20:55:15 +0000
Received: by outflank-mailman (input) for mailman id 487198;
 Mon, 30 Jan 2023 20:55:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMbBK-0008PX-OM; Mon, 30 Jan 2023 20:55:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMbBK-0004Np-L5; Mon, 30 Jan 2023 20:55:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMbBK-0004pi-3g; Mon, 30 Jan 2023 20:55:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMbBK-0001Y5-3N; Mon, 30 Jan 2023 20:55:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eI7jFFhsYjZ/AFFVrr7T814J/L5tFjIipd7kvolF544=; b=OzGKTg8BJy5ZAG/UiJ9++qAvAP
	f23u0veG4KiMXPLLVkZC/o/4T33PZfk49nVlb7X7K+CUvF2NFuMVlvVTFT07SLTkr8wA3OIA9t5PX
	9L1b7VVnIbjgAWDjuGCSJyuCQNddWO4Bd+M3oe9q4/UmG8uQ5Njn8gYyvHCBtijaWxBs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176277-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176277: regressions - trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-unstable:test-amd64-i386-libvirt-pair:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:kernel-build:fail:regression
    xen-unstable:build-armhf:syslog-server:running:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-ping-check-native:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf:capture-logs:broken:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 20:55:14 +0000

flight 176277 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176277/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175994
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>           broken in 176268
 test-amd64-amd64-xl-pvhv2-intel    <job status>               broken in 176268
 test-amd64-i386-libvirt-pair    <job status>                 broken  in 176268
 build-arm64-pvops             6 kernel-build   fail in 176266 REGR. vs. 175994
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 176268 pass in 176277
 test-amd64-amd64-xl-pvhv2-intel 5 host-install(5) broken in 176268 pass in 176277
 test-amd64-i386-libvirt-pair 6 host-install/src_host(6) broken in 176268 pass in 176277
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail in 176268 pass in 176266
 test-amd64-i386-xl-qemut-debianhvm-amd64 7 xen-install fail in 176268 pass in 176277
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 176268 pass in 176277
 test-amd64-i386-libvirt-pair 10 xen-install/src_host       fail pass in 176266
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host       fail pass in 176266
 test-amd64-i386-xl-qemuu-ovmf-amd64 18 guest-localmigrate/x10 fail pass in 176266
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 host-ping-check-native fail pass in 176268
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 176268

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 176266 n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175994
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z   10 days
Failing since        176003  2023-01-20 17:40:27 Z   10 days   21 attempts
Testing same since   176222  2023-01-26 22:13:29 Z    3 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  George Dunlap <george.dunlap@cloud.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job test-amd64-i386-libvirt-pair broken

Not pushing.

(No revision log; it would be 1225 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 20:57:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 20:57:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487207.754746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbD9-0000kP-7M; Mon, 30 Jan 2023 20:57:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487207.754746; Mon, 30 Jan 2023 20:57:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbD9-0000kI-4R; Mon, 30 Jan 2023 20:57:07 +0000
Received: by outflank-mailman (input) for mailman id 487207;
 Mon, 30 Jan 2023 20:57:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fYol=53=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pMbD8-0000k3-1r
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 20:57:06 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a53a2d0e-a0e0-11ed-933c-83870f6b2ba8;
 Mon, 30 Jan 2023 21:57:04 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8EE6B61222;
 Mon, 30 Jan 2023 20:57:01 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 01901C433D2;
 Mon, 30 Jan 2023 20:56:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a53a2d0e-a0e0-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1675112221;
	bh=18YOYum/MX28t5WraK0HRv+6cpTf75pxWOESa3rDN7U=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gEPTjUlLKDgS4ilCPg/XHQVbcTPCKG8eMPYNzYsBMDKJgK2KhDVUvdzUzWl4Mo8gm
	 BuNWTJMG1AseSZSZ0wtsR+yzzDLy1m0XxaS/CQRFARJYt4bWPZvAOjNhrUKzw+nqf2
	 byZsTn42UuxkWxCHkX3l7frl0bEhS/4pHZ/ta6uMMZIH1uFGt7oKbOdApd8WXCfqtw
	 QXC8QNaM13wmvMJuJFIqOHanC/ujF+EGk27KFIGRBVP/9Qxdph2L8AIi525YrPBMxK
	 DNjLF1yhujjCBoj4fIqQpdNRufZgnpr6GgxF9ld5E4YpuEhghTZ0tq+9+Gxn62KHvf
	 Xw1djrgQLnYOQ==
Date: Mon, 30 Jan 2023 12:56:55 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
    Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
Subject: Re: [XEN v6] xen/arm: Probe the load/entry point address of an uImage
 correctly
In-Reply-To: <cd184a58-426e-4249-c635-504509734262@xen.org>
Message-ID: <alpine.DEB.2.22.394.2301301256470.132504@ubuntu-linux-20-04-desktop>
References: <20230125112131.19682-1-ayan.kumar.halder@amd.com> <alpine.DEB.2.22.394.2301251302360.1978264@ubuntu-linux-20-04-desktop> <cd184a58-426e-4249-c635-504509734262@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Jan 2023, Julien Grall wrote:
> Hi Stefano,
> 
> On 25/01/2023 21:06, Stefano Stabellini wrote:
> > On Wed, 25 Jan 2023, Ayan Kumar Halder wrote:
> > > Currently, kernel_uimage_probe() does not read the load/entry point
> > > address set in the uImage header. Thus, info->zimage.start is 0
> > > (default value). This causes kernel_zimage_place() to treat the binary
> > > (contained within uImage) as a position independent executable. Thus,
> > > it loads it at an incorrect address.
> > > 
> > > The correct approach would be to read "uimage.load" and set
> > > info->zimage.start. This will ensure that the binary is loaded at the
> > > correct address. Also, read "uimage.ep" and set info->entry (ie kernel
> > > entry
> > > address).
> > > 
> > > If user provides load address (ie "uimage.load") as 0x0, then the image is
> > > treated as position independent executable. Xen can load such an image at
> > > any address it considers appropriate. A position independent executable
> > > cannot have a fixed entry point address.
> > > 
> > > This behavior is applicable for both arm32 and arm64 platforms.
> > > 
> > > Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
> > > point address set in the uImage header. With this commit, Xen will use
> > > them.
> > > This makes the behavior of Xen consistent with uboot for uimage headers.
> > > 
> > > Users who want to use Xen with statically partitioned domains, can provide
> > > non zero load address and entry address for the dom0/domU kernel. It is
> > > required that the load and entry address provided must be within the
> > > memory
> > > region allocated by Xen.
> > > 
> > > A deviation from uboot behaviour is that we consider load address == 0x0,
> > > to denote that the image supports position independent execution. This
> > > is to make the behavior consistent across uImage and zImage.
> > > 
> > > Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> > > ---
> > > 
> > > Changes from v1 :-
> > > 1. Added a check to ensure load address and entry address are the same.
> > > 2. Considered load address == 0x0 as position independent execution.
> > > 3. Ensured that the uImage header interpretation is consistent across
> > > arm32 and arm64.
> > > 
> > > v2 :-
> > > 1. Mentioned the change in existing behavior in booting.txt.
> > > 2. Updated booting.txt with a new section to document "Booting Guests".
> > > 
> > > v3 :-
> > > 1. Removed the constraint that the entry point should be same as the load
> > > address. Thus, Xen uses both the load address and entry point to determine
> > > where the image is to be copied and the start address.
> > > 2. Updated documentation to denote that load address and start address
> > > should be within the memory region allocated by Xen.
> > > 3. Added constraint that user cannot provide entry point for a position
> > > independent executable (PIE) image.
> > > 
> > > v4 :-
> > > 1. Explicitly mentioned the version in booting.txt from when the uImage
> > > probing behavior has changed.
> > > 2. Logged the requested load address and entry point parsed from the
> > > uImage
> > > header.
> > > 3. Some style issues.
> > > 
> > > v5 :-
> > > 1. Set info->zimage.text_offset = 0 in kernel_uimage_probe().
> > > 2. Mention that if the kernel has a legacy image header on top of
> > > zImage/zImage64
> > > header, then the attributes from the legacy image header are used to
> > > determine the load address, entry point, etc. Thus, the
> > > zImage/zImage64 header is effectively ignored.
> > > 
> > > This is true because Xen currently does not support recursive probing of
> > > kernel
> > > headers ie if uImage header is probed, then Xen will not attempt to see if
> > > there
> > > is an underlying zImage/zImage64 header.
> > > 
> > >   docs/misc/arm/booting.txt         | 30 ++++++++++++++++
> > >   xen/arch/arm/include/asm/kernel.h |  2 +-
> > >   xen/arch/arm/kernel.c             | 58 +++++++++++++++++++++++++++++--
> > >   3 files changed, 86 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
> > > index 3e0c03e065..1837579aef 100644
> > > --- a/docs/misc/arm/booting.txt
> > > +++ b/docs/misc/arm/booting.txt
> > > @@ -23,6 +23,32 @@ The exceptions to this on 32-bit ARM are as follows:
> > >     There are no exception on 64-bit ARM.
> > >   +Booting Guests
> > > +--------------
> > > +
> > > +Xen supports the legacy image header[3], zImage protocol for 32-bit
> > > +ARM Linux[1] and Image protocol defined for ARM64[2].
> > > +
> > > +Until Xen 4.17, in case of legacy image protocol, Xen ignored the load
> > > +address and entry point specified in the header. This has now changed.
> > > +
> > > +Now, it loads the image at the load address provided in the header.
> > > +And the entry point is used as the kernel start address.
> > > +
> > > +A deviation from uboot is that, Xen treats "load address == 0x0" as
> > > +position independent execution (PIE). Thus, Xen will load such an image
> > > +at an address it considers appropriate. Also, user cannot specify the
> > > +entry point of a PIE image since the start address cannot be
> > > +predetermined.
> > > +
> > > +Users who want to use Xen with statically partitioned domains, can
> > > provide
> > > +the fixed non zero load address and start address for the dom0/domU
> > > kernel.
> > > +The load address and start address specified by the user in the header
> > > must
> > > +be within the memory region allocated by Xen.
> > > +
> > > +Also, it is to be noted that if user provides the legacy image header on
> > > top of
> > > +zImage or Image header, then Xen uses the attrbutes of legacy image
> > > header only
> >                                               ^ attributes
> > ^ remove only
> > 
> > > +to determine the load address, entry point, etc.
> > 
> > Also add:
> > 
> > """
> > Known limitation: compressed kernels with a u-boot header are not
> > working.
> > """
> 
> I am not entirely sure where you want this sentence to be added. So...
> 
> > 
> > These few minor changes to the documentation can be done on commit:
> 
> ... can you take care of committing it?

Certainly, done
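[Editorial note: the probing behaviour discussed in the thread above can be illustrated with a small self-contained sketch. This is not Xen's actual kernel_uimage_probe(); the struct and function names here (uimage_probe, probe_result) are invented for illustration. The field layout follows the well-known legacy u-boot image header (u-boot's include/image.h): all fields are big-endian, the load address lives at offset 16 and the entry point at offset 20, and, per the deviation described above, a load address of 0x0 marks the image as position independent.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Legacy u-boot image ("uImage") header; all fields are big-endian.
 * Layout mirrors u-boot's include/image.h. */
#define UIMAGE_MAGIC 0x27051956u

struct uimage_header {
    uint32_t magic;   /* ih_magic */
    uint32_t hcrc;    /* header CRC (ignored in this sketch) */
    uint32_t time;
    uint32_t size;    /* payload size */
    uint32_t load;    /* requested load address */
    uint32_t ep;      /* entry point */
    uint32_t dcrc;
    uint8_t  os, arch, type, comp;
    char     name[32];
};

static uint32_t be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Hypothetical probe result: where to place the image and where to
 * enter it. pie == 1 means the header carried load address 0x0, so the
 * loader may place the image anywhere, and no fixed entry point can be
 * honoured. */
struct probe_result {
    uint64_t load, ep;
    int pie;
};

/* Returns 0 on success, -1 if the buffer does not start with a legacy
 * uImage header. */
static int uimage_probe(const uint8_t *img, size_t len,
                        struct probe_result *out)
{
    if (len < sizeof(struct uimage_header))
        return -1;
    if (be32(img) != UIMAGE_MAGIC)
        return -1;

    out->load = be32(img + offsetof(struct uimage_header, load));
    out->ep   = be32(img + offsetof(struct uimage_header, ep));
    out->pie  = (out->load == 0); /* deviation from u-boot, see above */
    return 0;
}
```

As noted in the changelog above, probing is not recursive: once a legacy header is recognised, any zImage/zImage64 header underneath is not consulted.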


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 21:01:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 21:01:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487213.754756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbHX-0002It-Nk; Mon, 30 Jan 2023 21:01:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487213.754756; Mon, 30 Jan 2023 21:01:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbHX-0002Il-L4; Mon, 30 Jan 2023 21:01:39 +0000
Received: by outflank-mailman (input) for mailman id 487213;
 Mon, 30 Jan 2023 21:01:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fYol=53=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pMbHW-0002IZ-IT
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 21:01:38 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 47adf535-a0e1-11ed-b63b-5f92e7d2e73a;
 Mon, 30 Jan 2023 22:01:36 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9D5A26125D;
 Mon, 30 Jan 2023 21:01:34 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F2321C433D2;
 Mon, 30 Jan 2023 21:01:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47adf535-a0e1-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1675112494;
	bh=G93K1QW14FKMZfeko/uopY7uHtTyZpgky5yglGIn2WY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=O4/HGIvAkepIn+7gIRMzyKnf5KT/nNRno18/d/1JyxCp4Sf+7LRBupdLS1WgkgS2b
	 1aZIyo71GpvlTplzVLWgrwsaecup0G60ODBRWFb+vmUfM6WkHYHTE+JS3gel8t2MRP
	 Mzi8arwjndfqXUFdxLqbzQ3UlAYq0XKanwYAtmOod2gUdma55csLnWEP3NoRPPLxA3
	 MvkWUbjKuYwi+3xiikVlMRYG5iBdt5CVi8nLJB7V54RQdqxOMBzX+D+3Y92gYi2WGN
	 GydZseYS7vWnEYvy1S7EIsCCU+REUQnNbi6U1VYOcCjril5IuVaxLkbdwpb3s0Nvs2
	 E3+6TnXGge7kg==
Date: Mon, 30 Jan 2023 13:01:31 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Juergen Gross <jgross@suse.com>
cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] xen/public: move xenstore related doc into 9pfs.h
In-Reply-To: <65757a42-f55b-55e9-4853-4854ecabbfca@suse.com>
Message-ID: <alpine.DEB.2.22.394.2301301300210.132504@ubuntu-linux-20-04-desktop>
References: <20230130090937.31623-1-jgross@suse.com> <83ea4c36-4762-c06f-28bc-00deedb7efd3@xen.org> <65757a42-f55b-55e9-4853-4854ecabbfca@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Jan 2023, Juergen Gross wrote:
> On 30.01.23 10:18, Julien Grall wrote:
> > Hi Juergen,
> > 
> > On 30/01/2023 09:09, Juergen Gross wrote:
> > > The Xenstore-related documentation is currently found in
> > > docs/misc/9pfs.pandoc, instead of in the related header file
> > > xen/include/public/io/9pfs.h, as is the case for most other
> > > paravirtualized device protocols.
> > > 
> > > There is a comment in the header pointing at the document, but the
> > > given file name is wrong. Additionally, such headers are meant to be
> > > copied into consuming projects (the Linux kernel, qemu, etc.), so
> > > pointing at a doc file in the Xen git repository isn't really helpful
> > > for the consumers of the header.
> > > 
> > > This situation is far from ideal, as is already proven by the
> > > fact that neither qemu nor the Linux kernel implements the
> > > device attach/detach protocol correctly. Additionally, the documented
> > > Xenstore entries do not match reality, as the "tag" Xenstore
> > > entry is on the frontend side, not on the backend one.
> > > 
> > > There is another bug in the connection sequence: the frontend needs to
> > > wait for the backend to have published its features before being able
> > > to allocate its rings and event-channels.
> > > 
> > > Change that by moving the Xenstore-related 9pfs documentation from
> > > docs/misc/9pfs.pandoc into xen/include/public/io/9pfs.h, while fixing
> > > the wrong Xenstore entry detail and the connection sequence.
> > > 
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > ---
> > > V2:
> > > - add reference to header in the pandoc document (Jan Beulich)
> > > V3:
> > > - fix flaw in the connection sequence
> > 
> > Please don't do that in the same patch. It makes it a lot more difficult
> > to confirm the doc movement was correct.
> 
> You have to check it manually anyway, as the comment format prefixes
> " * " to every line.

Just to give an example: in the past I used vim to remove the " * "
prefix on every line, then used diff or wdiff to check that the contents
of the old and new files are identical.
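[Editor's note] That manual check can also be scripted. The sketch below is mine, not from the thread; it assumes the comment body uses a plain " * " prefix (with a bare " *" on otherwise empty lines), and the file names in the diff header are illustrative:

```python
import difflib

def strip_comment_prefix(lines):
    # Drop the leading " * " C block-comment prefix; a bare " *" line
    # becomes an empty line, anything else is kept as-is.
    out = []
    for line in lines:
        if line.startswith(" * "):
            out.append(line[3:])
        elif line.rstrip("\n") == " *":
            out.append("\n")
        else:
            out.append(line)
    return out

def doc_vs_comment_diff(doc_lines, comment_lines):
    # Unified diff between the original document and the de-prefixed
    # comment; an empty result means the move preserved the text verbatim.
    return list(difflib.unified_diff(doc_lines,
                                     strip_comment_prefix(comment_lines),
                                     fromfile="9pfs.pandoc",
                                     tofile="9pfs.h (comment)"))
```

An empty diff is the "content identical" result Stefano describes obtaining with diff/wdiff after stripping the prefix in vim.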


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 21:30:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 21:30:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487218.754767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbjH-0006If-T4; Mon, 30 Jan 2023 21:30:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487218.754767; Mon, 30 Jan 2023 21:30:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbjH-0006IY-OZ; Mon, 30 Jan 2023 21:30:19 +0000
Received: by outflank-mailman (input) for mailman id 487218;
 Mon, 30 Jan 2023 21:30:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fYol=53=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pMbjG-0006IS-Cb
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 21:30:18 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 48d5ef0d-a0e5-11ed-b63b-5f92e7d2e73a;
 Mon, 30 Jan 2023 22:30:15 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 759BF6127D;
 Mon, 30 Jan 2023 21:30:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DC5D6C433D2;
 Mon, 30 Jan 2023 21:30:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48d5ef0d-a0e5-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1675114213;
	bh=/4sIY4qlBNW9WG/JgZR8mA7eoWpFlZoOTC47lyDrwjo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hF/6tB3bF1q4s0xF/iM+o/TGQE5iSZ/ZbSsGNR3eL7CQsVB3FmM0xTVFvzk9tU1zP
	 qnxlHv4iYkYt8yVsIi1T+Vw4zD1wQ7ws8nQt/h3bMflXxjKYx3ITs6ZnURuWoQZmj3
	 8TGFKGA/ni7lJo3+jrX07c04FBck6lCr1ZYtOLOCjE980MkATyKizkksS9JKgtqm7E
	 EUr5KrnYFvCffulVidxsEspvAliP8juGtyNQEPOTyk4U1793D5KzGfUp1Mr1JGgRYt
	 QM4+NYRFWUsg4VSyqn3GuQNAf4n7u118TyX3CnUv5JUGDm0sB7zWJT8jw5K5uUwGdW
	 Uc1sWJuaZZQ3w==
Date: Mon, 30 Jan 2023 13:30:11 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Henry Wang <Henry.Wang@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Wei Chen <wei.chen@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
In-Reply-To: <20230130040535.188236-2-Henry.Wang@arm.com>
Message-ID: <alpine.DEB.2.22.394.2301301329550.132504@ubuntu-linux-20-04-desktop>
References: <20230130040535.188236-1-Henry.Wang@arm.com> <20230130040535.188236-2-Henry.Wang@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Jan 2023, Henry Wang wrote:
> As we have more and more types of static regions, all of which are
> defined in bootinfo.reserved_mem, it is necessary to add an overlap
> check for reserved memory regions in Xen, because such a check helps
> users identify misconfigurations in the device tree at an early stage
> of boot.
> 
> Currently we have 3 types of static regions, namely
> (1) static memory
> (2) static heap
> (3) static shared memory
> 
> (1) and (2) are parsed by the function `device_tree_get_meminfo()` and
> (3) is parsed using its own logic. All of the parsed information for
> these types is stored in `struct meminfo`.
> 
> Therefore, to unify the overlap checking logic for all of these types,
> this commit first introduces a helper `meminfo_overlap_check()` and
> a function `check_reserved_regions_overlap()` to check whether an input
> physical address range overlaps any of the existing memory regions
> defined in bootinfo. After that, use `check_reserved_regions_overlap()`
> in `device_tree_get_meminfo()` to do the overlap check of (1) and (2)
> and replace the original overlap check of (3) with
> `check_reserved_regions_overlap()`.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v2 -> v3:
> 1. Use "[start, end]" format in printk error message.
> 2. Change the return type of helper functions to bool.
> 3. Use 'start' and 'size' in helper functions to describe a region.
> v1 -> v2:
> 1. Split original `overlap_check()` to `meminfo_overlap_check()`.
> 2. Rework commit message.
> ---
>  xen/arch/arm/bootfdt.c           | 13 +++++-----
>  xen/arch/arm/include/asm/setup.h |  2 ++
>  xen/arch/arm/setup.c             | 41 ++++++++++++++++++++++++++++++++
>  3 files changed, 49 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 0085c28d74..e2f6c7324b 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -88,6 +88,9 @@ static int __init device_tree_get_meminfo(const void *fdt, int node,
>      for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
>      {
>          device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +        if ( mem == &bootinfo.reserved_mem &&
> +             check_reserved_regions_overlap(start, size) )
> +            return -EINVAL;
>          /* Some DT may describe empty bank, ignore them */
>          if ( !size )
>              continue;
> @@ -482,7 +485,9 @@ static int __init process_shm_node(const void *fdt, int node,
>                  return -EINVAL;
>              }
>  
> -            if ( (end <= mem->bank[i].start) || (paddr >= bank_end) )
> +            if ( check_reserved_regions_overlap(paddr, size) )
> +                return -EINVAL;
> +            else
>              {
>                  if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 )
>                      continue;
> @@ -493,12 +498,6 @@ static int __init process_shm_node(const void *fdt, int node,
>                      return -EINVAL;
>                  }
>              }
> -            else
> -            {
> -                printk("fdt: shared memory region overlap with an existing entry %#"PRIpaddr" - %#"PRIpaddr"\n",
> -                        mem->bank[i].start, bank_end);
> -                return -EINVAL;
> -            }
>          }
>      }
>  
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index a926f30a2b..f0592370ea 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -143,6 +143,8 @@ void fw_unreserved_regions(paddr_t s, paddr_t e,
>  size_t boot_fdt_info(const void *fdt, paddr_t paddr);
>  const char *boot_fdt_cmdline(const void *fdt);
>  
> +bool check_reserved_regions_overlap(paddr_t region_start, paddr_t region_size);
> +
>  struct bootmodule *add_boot_module(bootmodule_kind kind,
>                                     paddr_t start, paddr_t size, bool domU);
>  struct bootmodule *boot_module_find_by_kind(bootmodule_kind kind);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90..636604aafa 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -261,6 +261,32 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>      cb(s, e);
>  }
>  
> +static bool __init meminfo_overlap_check(struct meminfo *meminfo,
> +                                         paddr_t region_start,
> +                                         paddr_t region_size)
> +{
> +    paddr_t bank_start = INVALID_PADDR, bank_end = 0;
> +    paddr_t region_end = region_start + region_size;
> +    unsigned int i, bank_num = meminfo->nr_banks;
> +
> +    for ( i = 0; i < bank_num; i++ )
> +    {
> +        bank_start = meminfo->bank[i].start;
> +        bank_end = bank_start + meminfo->bank[i].size;
> +
> +        if ( region_end <= bank_start || region_start >= bank_end )
> +            continue;
> +        else
> +        {
> +            printk("Region: [%#"PRIpaddr", %#"PRIpaddr"] overlapping with bank[%u]: [%#"PRIpaddr", %#"PRIpaddr"]\n",
> +                   region_start, region_end, i, bank_start, bank_end);
> +            return true;
> +        }
> +    }
> +
> +    return false;
> +}
> +
>  void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>                                    void (*cb)(paddr_t, paddr_t),
>                                    unsigned int first)
> @@ -271,7 +297,22 @@ void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>          cb(s, e);
>  }
>  
> +/*
> + * Given an input physical address range, check if this range is overlapping
> + * with the existing reserved memory regions defined in bootinfo.
> + * Return true if the input physical address range is overlapping with any
> + * existing reserved memory regions, otherwise false.
> + */
> +bool __init check_reserved_regions_overlap(paddr_t region_start,
> +                                           paddr_t region_size)
> +{
> +    /* Check if input region is overlapping with bootinfo.reserved_mem banks */
> +    if ( meminfo_overlap_check(&bootinfo.reserved_mem,
> +                               region_start, region_size) )
> +        return true;
>  
> +    return false;
> +}
>  
>  struct bootmodule __init *add_boot_module(bootmodule_kind kind,
>                                            paddr_t start, paddr_t size,
> -- 
> 2.25.1
> 
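[Editor's note] The interval test at the heart of `meminfo_overlap_check()` above can be sketched standalone. The names below are mine, not Xen's, but the predicate mirrors the patch's `region_end <= bank_start || region_start >= bank_end` condition on half-open ranges:

```python
def regions_overlap(a_start, a_size, b_start, b_size):
    # Two half-open ranges [start, start+size) overlap unless one ends
    # at or before the point where the other begins -- the same test the
    # patch uses, negated.
    return not (a_start + a_size <= b_start or a_start >= b_start + b_size)

def overlaps_any(banks, start, size):
    # banks: iterable of (bank_start, bank_size) tuples, as in the
    # bootinfo.reserved_mem bank walk.
    return any(regions_overlap(start, size, bs, bz) for bs, bz in banks)
```

Note that with half-open ranges a region ending exactly where a bank starts (or starting exactly where it ends) does not count as an overlap, which is why back-to-back reserved regions remain legal.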


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 21:33:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 21:33:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487222.754777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbln-0006rY-9H; Mon, 30 Jan 2023 21:32:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487222.754777; Mon, 30 Jan 2023 21:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbln-0006rR-68; Mon, 30 Jan 2023 21:32:55 +0000
Received: by outflank-mailman (input) for mailman id 487222;
 Mon, 30 Jan 2023 21:32:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fYol=53=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pMblm-0006rL-Lw
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 21:32:54 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a6846ea0-a0e5-11ed-933c-83870f6b2ba8;
 Mon, 30 Jan 2023 22:32:53 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A080661222;
 Mon, 30 Jan 2023 21:32:51 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 233FBC433D2;
 Mon, 30 Jan 2023 21:32:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6846ea0-a0e5-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1675114371;
	bh=Lq0wV2zvfbmXGHnuUgmYgtrcdOasT7Ug7OK+c9UhsaA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XDZxHwItIASxvUtyQMIaaulAVQudqD62jdBhO00r3C/ohqOGRSZvUP4BqEEGqwZ3u
	 IBGARAf2gKnoKhLCso5eAZwnv04MQNmpxsq41GyWaBgLl/GN+9KD55bLmkpjwuWZVr
	 dkDu9KDPKGpTXQ6m/Vnj2+5D7nJtO1koW1eTkBWo9NrGe94LDj4zePxdrpsPrI7edV
	 XXSqxqMgvigsd1/KFrKzGVT+bClFKmS0PnZQwqYHr28WR6Hy3FT6NlfLfo/oiBetQQ
	 6MZ1k8ZsKQB6Cw0Xvbv07kIxJssAFKuM+RF5bFi10glbcZN81/WF5jBQ6xDiJJgRjy
	 7g6oSG1wlFVvg==
Date: Mon, 30 Jan 2023 13:32:48 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Henry Wang <Henry.Wang@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <wei.chen@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v3 2/3] xen/arm: Extend the memory overlap check to
 include bootmodules
In-Reply-To: <20230130040535.188236-3-Henry.Wang@arm.com>
Message-ID: <alpine.DEB.2.22.394.2301301332410.132504@ubuntu-linux-20-04-desktop>
References: <20230130040535.188236-1-Henry.Wang@arm.com> <20230130040535.188236-3-Henry.Wang@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Jan 2023, Henry Wang wrote:
> Similarly to the static regions defined in bootinfo.reserved_mem,
> the bootmodule regions defined in bootinfo.modules must also not
> overlap with memory regions in either bootinfo.reserved_mem
> or bootinfo.modules.
> 
> Therefore, this commit introduces a helper `bootmodules_overlap_check()`
> and uses this helper to extend the check in function
> `check_reserved_regions_overlap()` so that memory regions in
> bootinfo.modules are included. Use `check_reserved_regions_overlap()`
> in `add_boot_module()` to return early if any error occurs.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v2 -> v3:
> 1. Use "[start, end]" format in printk error message.
> 2. Change the return type of helper functions to bool.
> 3. Use 'start' and 'size' in helper functions to describe a region.
> v1 -> v2:
> 1. Split original `overlap_check()` to `bootmodules_overlap_check()`.
> 2. Rework commit message.
> ---
>  xen/arch/arm/setup.c | 35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 636604aafa..98df0baffa 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -287,6 +287,32 @@ static bool __init meminfo_overlap_check(struct meminfo *meminfo,
>      return false;
>  }
>  
> +static bool __init bootmodules_overlap_check(struct bootmodules *bootmodules,
> +                                             paddr_t region_start,
> +                                             paddr_t region_size)
> +{
> +    paddr_t mod_start = INVALID_PADDR, mod_end = 0;
> +    paddr_t region_end = region_start + region_size;
> +    unsigned int i, mod_num = bootmodules->nr_mods;
> +
> +    for ( i = 0; i < mod_num; i++ )
> +    {
> +        mod_start = bootmodules->module[i].start;
> +        mod_end = mod_start + bootmodules->module[i].size;
> +
> +        if ( region_end <= mod_start || region_start >= mod_end )
> +            continue;
> +        else
> +        {
> +            printk("Region: [%#"PRIpaddr", %#"PRIpaddr"] overlapping with mod[%u]: [%#"PRIpaddr", %#"PRIpaddr"]\n",
> +                   region_start, region_end, i, mod_start, mod_end);
> +            return true;
> +        }
> +    }
> +
> +    return false;
> +}
> +
>  void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>                                    void (*cb)(paddr_t, paddr_t),
>                                    unsigned int first)
> @@ -311,6 +337,11 @@ bool __init check_reserved_regions_overlap(paddr_t region_start,
>                                 region_start, region_size) )
>          return true;
>  
> +    /* Check if input region is overlapping with bootmodules */
> +    if ( bootmodules_overlap_check(&bootinfo.modules,
> +                                   region_start, region_size) )
> +        return true;
> +
>      return false;
>  }
>  
> @@ -328,6 +359,10 @@ struct bootmodule __init *add_boot_module(bootmodule_kind kind,
>                 boot_module_kind_as_string(kind), start, start + size);
>          return NULL;
>      }
> +
> +    if ( check_reserved_regions_overlap(start, size) )
> +        return NULL;
> +
>      for ( i = 0 ; i < mods->nr_mods ; i++ )
>      {
>          mod = &mods->module[i];
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 21:37:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 21:37:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487229.754787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbpt-0007kk-V5; Mon, 30 Jan 2023 21:37:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487229.754787; Mon, 30 Jan 2023 21:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMbpt-0007kd-SF; Mon, 30 Jan 2023 21:37:09 +0000
Received: by outflank-mailman (input) for mailman id 487229;
 Mon, 30 Jan 2023 21:37:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fYol=53=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pMbps-0007kX-7X
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 21:37:08 +0000
Received: from ams.source.kernel.org (ams.source.kernel.org
 [2604:1380:4601:e00::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3dbe4831-a0e6-11ed-b63b-5f92e7d2e73a;
 Mon, 30 Jan 2023 22:37:06 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by ams.source.kernel.org (Postfix) with ESMTPS id 31257B8169A;
 Mon, 30 Jan 2023 21:37:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 68269C433EF;
 Mon, 30 Jan 2023 21:37:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3dbe4831-a0e6-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1675114623;
	bh=hRBa5PIW9OZ2WRwVnEEib++9dEUJ+CLEaHzgYur0qq0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=p40UHOT1O6rqzKlyAuETNTS8KmzSnxW9P2rZAE0PGYKSJ4oNK5hvdSVKtqCla+r7k
	 GCvyQuj8ivW0Ye2zv75uOdFVQNHM8uVlcKi+/LYCCunh+LIgNaY8yPg2DW8Hr/8No9
	 UZ2A11TTEdeJ1eV/pzYItcVNQ4Pv/qGrzFLUstOWNi3Lb/ho8b/QQeMg0Xcdyx2ov4
	 d4+oOV8Miw33N7facVHnTFQD1gIPhjewZuTFoHSMZuj1GEJ4K+sJKybeTmVJMibmIB
	 LYaxsodO+R3tPl423yMEn5JViH+8+9iQM3MXtOBdZL5wrt+Rp90Cq7y2sG32tbE74F
	 XNu7szNcV6x5A==
Date: Mon, 30 Jan 2023 13:37:01 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Michal Orzel <michal.orzel@amd.com>
Subject: Re: [PATCH v2 1/2] xen/cppcheck: sort alphabetically cppcheck report
 entries
In-Reply-To: <20230130110132.2774782-2-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2301301336530.132504@ubuntu-linux-20-04-desktop>
References: <20230130110132.2774782-1-luca.fancellu@arm.com> <20230130110132.2774782-2-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1654406162-1675114623=:132504"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1654406162-1675114623=:132504
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 30 Jan 2023, Luca Fancellu wrote:
> Sort cppcheck report entries alphabetically when producing the text
> report; this helps when comparing different reports and groups
> together findings from the same file.
> 
> The sort is performed with two criteria: the first is the misra rule,
> the second is the file name.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v2:
>  - Sort with two criteria, first misra rule, second filename
>    (Michal, Jan)
> ---
> ---
>  xen/scripts/xen_analysis/cppcheck_report_utils.py | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> index 02440aefdfec..0b6cc72b9ac1 100644
> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> @@ -104,6 +104,13 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>                  for path in strip_paths:
>                      text_report_content[i] = text_report_content[i].replace(
>                                                                  path + "/", "")
> +                    # Split by : separator
> +                    text_report_content[i] = text_report_content[i].split(":")
> +            # sort alphabetically for second field (misra rule) and as second
> +            # criteria for the first field (file name)
> +            text_report_content.sort(key = lambda x: (x[1], x[0]))
> +            # merge back with : separator
> +            text_report_content = [":".join(x) for x in text_report_content]
>              # Write the final text report
>              outfile.writelines(text_report_content)
>      except OSError as e:
> -- 
> 2.25.1
> 
--8323329-1654406162-1675114623=:132504--
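[Editor's note] The two-key sort in the patch above can be illustrated standalone. The sketch below is mine; it assumes report lines of the form `file:rule:message`, matching the patch's comment that the second `:`-separated field is the misra rule and the first is the file name:

```python
def sort_report(lines):
    # Split each "file:rule:message" line on ":", sort by
    # (rule, file) -- rule is the primary key, file the tiebreaker --
    # then join the fields back with ":".
    rows = [line.split(":") for line in lines]
    rows.sort(key=lambda x: (x[1], x[0]))
    return [":".join(x) for x in rows]
```

Sorting on a tuple key gives the "second criteria" behaviour requested in review: entries with the same rule end up grouped and ordered by file.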


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 21:48:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 21:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487233.754797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMc0f-00011m-0X; Mon, 30 Jan 2023 21:48:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487233.754797; Mon, 30 Jan 2023 21:48:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMc0e-00011f-TJ; Mon, 30 Jan 2023 21:48:16 +0000
Received: by outflank-mailman (input) for mailman id 487233;
 Mon, 30 Jan 2023 21:48:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fYol=53=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pMc0d-00011X-91
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 21:48:15 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id caeb4caf-a0e7-11ed-b63b-5f92e7d2e73a;
 Mon, 30 Jan 2023 22:48:12 +0100 (CET)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A6CE361286;
 Mon, 30 Jan 2023 21:48:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C0A6CC433EF;
 Mon, 30 Jan 2023 21:48:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: caeb4caf-a0e7-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1675115291;
	bh=t8SOvwSgVvXIooxQg1/n38aMVlKvyvB3JZYGMkzGf5k=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MZn0QT+lbusKMr5aZNSO4AdDQP8n9hTRks2lLKRYynI503jR0zHHbXE6ezfgblokf
	 A5Y8XBnydp0VBoPs4Q8+PxYYAm8KGvJmZVHyKoZzfhuCOa79agVXQsrMFKzo9hNN8y
	 M0FMqoc5XET6qx4ze5fo2kgjTcwH6EuYUxcQ8ODYQiYU6N4VORObbdAhVKVCZQV3FV
	 nSNI3YnXbqUYiwp4KOVGdpKK1D+eDBbo4npN5VnjLAx/FVVtDfAhdJyhVJLf7hm1az
	 w97SH4jMAxu8JRrLUm0Hxg9PxGH034yF7vGk8crpNED1v/cHSjdBS08JTEa5Da8714
	 X+QoVoutqg5Yg==
Date: Mon, 30 Jan 2023 13:48:08 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <Luca.Fancellu@arm.com>
cc: Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <stefano.stabellini@amd.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    "george.dunlap@citrix.com" <george.dunlap@citrix.com>, 
    "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, 
    "roger.pau@citrix.com" <roger.pau@citrix.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "julien@xen.org" <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] Add more rules to docs/misra/rules.rst
In-Reply-To: <2733D8BE-CCA5-4322-BB9B-1D4318378525@arm.com>
Message-ID: <alpine.DEB.2.22.394.2301301339110.132504@ubuntu-linux-20-04-desktop>
References: <20230125205735.2662514-1-sstabellini@kernel.org> <9d536cec-726d-4a39-da36-ecc19d35d420@suse.com> <alpine.DEB.2.22.394.2301260749150.1978264@ubuntu-linux-20-04-desktop> <5a3ef92e-281f-e337-1a3e-aa4c6825d964@suse.com>
 <alpine.DEB.2.22.394.2301261041440.1978264@ubuntu-linux-20-04-desktop> <db97da84-5f23-e7ed-119b-74aed02fb573@suse.com> <alpine.DEB.2.22.394.2301271016360.1978264@ubuntu-linux-20-04-desktop> <03ce9f48-191e-b1b5-a3b2-8b769aa8feeb@suse.com>
 <2733D8BE-CCA5-4322-BB9B-1D4318378525@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 30 Jan 2023, Luca Fancellu wrote:
> > On 30 Jan 2023, at 07:33, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 27.01.2023 19:33, Stefano Stabellini wrote:
> >> On Fri, 27 Jan 2023, Jan Beulich wrote:
> >>> On 26.01.2023 19:54, Stefano Stabellini wrote:
> >>> Looking back at the sheet, it says "rule already followed by
> >>> the community in most cases" which I assume was based on there being
> >>> only very few violations that are presently reported. Now we've found
> >>> the frame_table[] issue, I'm inclined to say that the statement was put
> >>> there by mistake (due to that oversight).
> >> 
> >> cppcheck is unable to find violations; we know cppcheck has limitations
> >> and that's OK.
> >> 
> >> Eclair is excellent and finds violations (including the frame_table[]
> >> issue you mentioned), but currently it doesn't read configs from xen.git
> >> and we cannot run a test to see if adding a couple of deviations for 2
> >> macros removes most of the violations. If we want to use Eclair as a
> >> reference (could be a good idea) then I think we need a better
> >> integration. I'll talk to Roberto and see if we can arrange something
> >> better.
> >> 
> >> I am writing this with the assumption that if I could show that, as an
> >> example, adding 2 deviations reduces the Eclair violations down to less
> >> than 10, then we could adopt the rule. Do you think that would be
> >> acceptable, as a process?
> > 
> > Hmm, to be quite honest: Not sure. Having noticed the oversight of the
> > frame_table[] issue makes me wonder how much else may be missed in this
> > same area (18.1, 18.2, and 18.3).
> 
> Hi Jan,
> 
> I think I recall the frame_table[] issue, but I was looking into the Eclair reports to
> understand it better and I was unable to find it. Do you recall where the tool was
> complaining about 18.2 in relation to frame_table[]?
> Any notes or links are appreciated; maybe we could speak with Roberto to understand
> it better, because I checked with Coverity and was unable to link 18.2 findings with
> the symbol frame_table[] (however, I might be a bit lost in all the macros).

Eclair found the following violation, related to the frametable, in
arch/x86/mm.c:init_frametable_chunk:

   L229: memset(start, 0, end - start);

https://eclairit.com:8443/job/XEN/lastBuild/Target=X86_64,agent=docker1/eclair/type.-1600986843/moduleName.-84949685/packageName.590865259/fileName.1553859513/
https://eclairit.com:3787/fs/var/lib/jenkins/jobs/XEN/configurations/axis-Target/X86_64/axis-agent/docker1/builds/686/archive/ECLAIR/out/PROJECT.ecd;/sources/xen/arch/x86/mm.c.html#R46776_1

Almost all the other 18.2 violations are due to the "guest_mode" macro
and the NEED_IP/NEED_OP and HAVE_IP/HAVE_OP macros in lzo.c.


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 21:56:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 21:56:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487237.754806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMc7w-0002i9-N9; Mon, 30 Jan 2023 21:55:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487237.754806; Mon, 30 Jan 2023 21:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMc7w-0002i2-KU; Mon, 30 Jan 2023 21:55:48 +0000
Received: by outflank-mailman (input) for mailman id 487237;
 Mon, 30 Jan 2023 21:55:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMc7v-0002ht-AX
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 21:55:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMc7u-0005o9-Ub; Mon, 30 Jan 2023 21:55:46 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMc7u-0004xJ-PH; Mon, 30 Jan 2023 21:55:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=d2sUIbcctqT9JK2RiKQdZk0gPwMWfyGwnVHiKQaf/yg=; b=JKdOYZRdUc/fdYRRTUBgR4eqfu
	KhvDZzWB0htB6sEVLMxcUFupT6VWKPJ11QfJIqpBQsDUAiEsKEWQBN126/g3RToSck4APiZn5TBsp
	FbhgjlP4AyyBVRofaWG6gZCruAK+ZgmPLgTx+e23+m3EOd7A3OB3ELD79uBRM6pBK1Zo=;
Message-ID: <fca91d3c-5d8a-3f7e-419a-a4c5208273dc@xen.org>
Date: Mon, 30 Jan 2023 21:55:44 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Henry Wang <Henry.Wang@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <wei.chen@arm.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230130040535.188236-1-Henry.Wang@arm.com>
 <20230130040535.188236-2-Henry.Wang@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
In-Reply-To: <20230130040535.188236-2-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Henry,

On 30/01/2023 04:05, Henry Wang wrote:
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index a926f30a2b..f0592370ea 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -143,6 +143,8 @@ void fw_unreserved_regions(paddr_t s, paddr_t e,
>   size_t boot_fdt_info(const void *fdt, paddr_t paddr);
>   const char *boot_fdt_cmdline(const void *fdt);
>   
> +bool check_reserved_regions_overlap(paddr_t region_start, paddr_t region_size);
> +
>   struct bootmodule *add_boot_module(bootmodule_kind kind,
>                                      paddr_t start, paddr_t size, bool domU);
>   struct bootmodule *boot_module_find_by_kind(bootmodule_kind kind);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90..636604aafa 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -261,6 +261,32 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>       cb(s, e);
>   }
>   
> +static bool __init meminfo_overlap_check(struct meminfo *meminfo,
> +                                         paddr_t region_start,
> +                                         paddr_t region_size)

So the interface looks nicer but now...

> +{
> +    paddr_t bank_start = INVALID_PADDR, bank_end = 0;
> +    paddr_t region_end = region_start + region_size;
> +    unsigned int i, bank_num = meminfo->nr_banks;
> +
> +    for ( i = 0; i < bank_num; i++ )
> +    {
> +        bank_start = meminfo->bank[i].start;
> +        bank_end = bank_start + meminfo->bank[i].size;
> +
> +        if ( region_end <= bank_start || region_start >= bank_end )

... it clearly shows how this check would be wrong when either the bank 
or the region is at the end of the address space: it may report no 
overlap when there is one (e.g. when region_start + region_size wraps so 
that region_end < region_start).

That said, unless we rework 'bank', we would not properly solve the 
problem. But that's likely a bigger piece of work and not something I 
would request.

So for now, I would suggest adding a comment. Stefano, what do you think?

> +            continue;
> +        else
> +        {
> +            printk("Region: [%#"PRIpaddr", %#"PRIpaddr"] overlapping with bank[%u]: [%#"PRIpaddr", %#"PRIpaddr"]\n",

']' usually means inclusive. But here, 'end' is exclusive. So you want '['.

This could be fixed on commit.


BTW, the same comments apply to the second patch.

> +                   region_start, region_end, i, bank_start, bank_end);
> +            return true;
> +        }
> +    }
> +
> +    return false;
> +}
> +
>   void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>                                     void (*cb)(paddr_t, paddr_t),
>                                     unsigned int first)
> @@ -271,7 +297,22 @@ void __init fw_unreserved_regions(paddr_t s, paddr_t e,
>           cb(s, e);
>   }
>   
> +/*
> + * Given an input physical address range, check if this range is overlapping
> + * with the existing reserved memory regions defined in bootinfo.
> + * Return true if the input physical address range is overlapping with any
> + * existing reserved memory regions, otherwise false.
> + */
> +bool __init check_reserved_regions_overlap(paddr_t region_start,
> +                                           paddr_t region_size)
> +{
> +    /* Check if input region is overlapping with bootinfo.reserved_mem banks */
> +    if ( meminfo_overlap_check(&bootinfo.reserved_mem,
> +                               region_start, region_size) )
> +        return true;
>   
> +    return false;
> +}
>   
>   struct bootmodule __init *add_boot_module(bootmodule_kind kind,
>                                             paddr_t start, paddr_t size,

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 22:01:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 22:01:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487243.754816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcCv-00049i-BZ; Mon, 30 Jan 2023 22:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487243.754816; Mon, 30 Jan 2023 22:00:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcCv-00049b-8v; Mon, 30 Jan 2023 22:00:57 +0000
Received: by outflank-mailman (input) for mailman id 487243;
 Mon, 30 Jan 2023 22:00:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMcCu-00049V-L6
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 22:00:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMcCu-0005vY-4S; Mon, 30 Jan 2023 22:00:56 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMcCt-0005Gh-W1; Mon, 30 Jan 2023 22:00:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=jjcpx6AWEawApwDcid0vR0p+UgxMH54KD33PMXpnhuA=; b=b0P44Oo6aLOOw7UwsdYXCIADr4
	55JCJiR9b9814goJ5jcRYGa+4/8pokOtqmvoQqDkXsWmJ4GKO4fS6ufh0uLrZOzY2hQjmGIzIP/2f
	taOiQ0KQ4OsKvYqL6z3WQQ75NLpHwy3nAccPIoS/OvdEASU/FwQICv0ZiDSLsFXprcyM=;
Message-ID: <7f663e3a-1f1c-91ad-728a-ea29414e4ba7@xen.org>
Date: Mon, 30 Jan 2023 22:00:54 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 09/11] xen/arm: Introduce ARM_PA_32 to support 32 bit
 physical address
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-10-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230117174358.15344-10-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 17/01/2023 17:43, Ayan Kumar Halder wrote:
> diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
> index 083acbd151..f9595b9098 100644
> --- a/xen/arch/arm/include/asm/types.h
> +++ b/xen/arch/arm/include/asm/types.h
> @@ -37,9 +37,16 @@ typedef signed long long s64;
>   typedef unsigned long long u64;
>   typedef u32 vaddr_t;
>   #define PRIvaddr PRIx32
> +#if defined(CONFIG_ARM_PA_32)
> +typedef u32 paddr_t;
> +#define INVALID_PADDR (~0UL)
> +#define PADDR_SHIFT BITS_PER_LONG
You define PADDR_SHIFT but don't seem to use it. Is it intended?

> +#define PRIpaddr PRIx32
> +#else
>   typedef u64 paddr_t;
>   #define INVALID_PADDR (~0ULL)
>   #define PRIpaddr "016llx"
> +#endif
>   typedef u32 register_t;
>   #define PRIregister "08x"
>   #elif defined (CONFIG_ARM_64)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 22:11:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 22:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487248.754826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcNB-0005pr-Af; Mon, 30 Jan 2023 22:11:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487248.754826; Mon, 30 Jan 2023 22:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcNB-0005pk-7c; Mon, 30 Jan 2023 22:11:33 +0000
Received: by outflank-mailman (input) for mailman id 487248;
 Mon, 30 Jan 2023 22:11:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMcNA-0005pe-Bu
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 22:11:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMcN9-00066a-IM; Mon, 30 Jan 2023 22:11:31 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMcN9-0005tS-CI; Mon, 30 Jan 2023 22:11:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=W0g9OHm+SaI85wEZniYACedt3YZ8DzwJcSmRkBFvQuo=; b=3jdS8nomV2jVFDhv7C76GIuJBl
	tndVjl4Je7Hs9dYP46mf4cPl4vhVc9x2z/ksi02/24VNWBCqXH7XuKWhv72huyw+/H3mKhWEo3A0T
	VLpmkniEoG/JUILabwSeAPWfZYnf18GXawBfzZI0IDQL6E9ogoB+t2nYOXgSLsmR2g2M=;
Message-ID: <8c0bce0b-05bd-5f4b-7b66-f6668ad34899@xen.org>
Date: Mon, 30 Jan 2023 22:11:29 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
 <a8219b2d-a22d-63ac-5088-c33610310d6e@xen.org>
 <27469e861d4777af42b84fb637b24ed47a187647.camel@gmail.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
In-Reply-To: <27469e861d4777af42b84fb637b24ed47a187647.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 30/01/2023 11:40, Oleksii wrote:
> On Fri, 2023-01-27 at 14:54 +0000, Julien Grall wrote:
>> Hi Oleksii,
>>
>> On 27/01/2023 13:59, Oleksii Kurochko wrote:
>>> +static inline void wfi(void)
>>> +{
>>> +    __asm__ __volatile__ ("wfi");
>>
>> I have stared at this line for a while and I am not quite sure I
>> understand why we don't need to clobber the memory like we do on Arm.
>>
> I don't have an answer. The code was based on Linux so...

Hmmm ok. It would probably be wise to understand how code imported from 
Linux works so we don't end up introducing bugs when calling such functions.

 From your current use in this patch, I don't expect any issue. That may 
change for other uses.

>> FWIW, Linux is doing the same, so I guess this is correct. For Arm we
>> also follow the Linux implementation.
>>
>> But I am wondering whether we are just too strict on Arm, the RISC-V
>> compiler offers a different guarantee, or you expect the user to be
>> responsible for preventing the compiler from doing harmful optimizations.
>>
>>> +/*
>>> + * panic() isn't available at the moment so an infinite loop will
>>> be
>>> + * used temporarily.
>>> + * TODO: change it to panic()
>>> + */
>>> +static inline void die(void)
>>> +{
>>> +    for( ;; ) wfi();
>>
>> Please move wfi() to a newline.
> Thanks.
> 
> I thought that it was fine to put it on one line in this case but I'll
> move it to a new line. It's fine.

I am not aware of any place in Xen where we would combine the lines.
Also, you want a space after 'for'.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 22:17:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 22:17:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487252.754836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcTD-0006fI-W9; Mon, 30 Jan 2023 22:17:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487252.754836; Mon, 30 Jan 2023 22:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcTD-0006fB-TQ; Mon, 30 Jan 2023 22:17:47 +0000
Received: by outflank-mailman (input) for mailman id 487252;
 Mon, 30 Jan 2023 22:17:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMcTC-0006f1-90; Mon, 30 Jan 2023 22:17:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMcTC-0006Nt-6f; Mon, 30 Jan 2023 22:17:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMcTB-0007aW-Rh; Mon, 30 Jan 2023 22:17:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMcTB-0000yG-RJ; Mon, 30 Jan 2023 22:17:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FFG9K2dSPT8juZnLADJkmtpn1+i16HA6fHbs6zc5RQE=; b=mgfV5uwuyNd5FWa3/Xx/M55Nqb
	Hd5PylaGBwSmRiwxokeFkaOj1KTf07Nb+JgIzazWG6gBS8cmqY8mTNSk5mf+NcyNg4ziaA2EHQtwm
	GTVJdVZEwmR2hAUOCa3pFQKFaLD0XsdDYTYIuFUi24+k/18Idm9lH8ucz4fOxvoDFyoE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176283-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176283: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Jan 2023 22:17:45 +0000

flight 176283 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176283/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    4 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (default value). This
    causes kernel_zimage_place() to treat the binary (contained within uImage)
    as a position independent executable. Thus, it loads it at an incorrect
    address.
    
    The correct approach would be to read "uimage.load" and set
    info->zimage.start. This will ensure that the binary is loaded at the
    correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
    address).
    
    If user provides load address (ie "uimage.load") as 0x0, then the image is
    treated as position independent executable. Xen can load such an image at
    any address it considers appropriate. A position independent executable
    cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
    point address set in the uImage header. With this commit, Xen will use them.
    This makes the behavior of Xen consistent with uboot for uimage headers.
    
    Users who want to use Xen with statically partitioned domains, can provide
    non zero load address and entry address for the dom0/domU kernel. It is
    required that the load and entry address provided must be within the memory
    region allocated by Xen.
    
    A deviation from uboot behaviour is that we consider load address == 0x0,
    to denote that the image supports position independent execution. This
    is to make the behavior consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 22:28:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 22:28:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487262.754848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcde-0008Qc-6i; Mon, 30 Jan 2023 22:28:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487262.754848; Mon, 30 Jan 2023 22:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcde-0008QV-1a; Mon, 30 Jan 2023 22:28:34 +0000
Received: by outflank-mailman (input) for mailman id 487262;
 Mon, 30 Jan 2023 22:28:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMcdc-0008QP-M1
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 22:28:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMcdb-0006YX-Fa; Mon, 30 Jan 2023 22:28:31 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMcdb-0006RX-9V; Mon, 30 Jan 2023 22:28:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=EDF1QFuI4jxrtzBXSFtiSx9pKV3f0YCfWn7V8Syo8/s=; b=kEhBIhejaJFr7qxrljuUpxR7p8
	MxC5Lcq4rYbc4dmfLw0oDWsT2cbKBzLCS+Rh4rZx7JwNldERdgRRjjCu/b2iul9X2+kdLDlB2WC+W
	mcOr6dMUXfM7FERyqoZd1oJ8LK5HLqbfw8g+aPxcE6YKaZrlsryaAc0dMo7+5v6kPsQ4=;
Message-ID: <873d4754-0314-0022-96a9-e54ed78ac6ef@xen.org>
Date: Mon, 30 Jan 2023 22:28:29 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
 <73553bcf-9688-c187-a9cb-c12806484893@xen.org>
 <2c4d490bde4f04f956e481257ddc7129c312abb6.camel@gmail.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
In-Reply-To: <2c4d490bde4f04f956e481257ddc7129c312abb6.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Oleksii,

On 30/01/2023 11:35, Oleksii wrote:
> Hi Julien,
> On Fri, 2023-01-27 at 16:02 +0000, Julien Grall wrote:
>> Hi Oleksii,
>>
>> On 27/01/2023 13:59, Oleksii Kurochko wrote:
>>> The patch introduces macros: BUG(), WARN(), run_in_exception(),
>>> assert_failed.
>>>
>>> The implementation uses the "ebreak" instruction in combination with
>>> different bug frame tables (one for each type) which contain useful
>>> information.
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> ---
>>> Changes:
>>>     - Remove __ in define namings
>>>     - Update run_in_exception_handler() with
>>>       register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
>>>     - Remove bug_instr_t type and change its usage to uint32_t
>>> ---
>>>    xen/arch/riscv/include/asm/bug.h | 118
>>> ++++++++++++++++++++++++++++
>>>    xen/arch/riscv/traps.c           | 128
>>> +++++++++++++++++++++++++++++++
>>>    xen/arch/riscv/xen.lds.S         |  10 +++
>>>    3 files changed, 256 insertions(+)
>>>    create mode 100644 xen/arch/riscv/include/asm/bug.h
>>>
>>> diff --git a/xen/arch/riscv/include/asm/bug.h
>>> b/xen/arch/riscv/include/asm/bug.h
>>> new file mode 100644
>>> index 0000000000..4b15d8eba6
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/bug.h
>>> @@ -0,0 +1,118 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (C) 2012 Regents of the University of California
>>> + * Copyright (C) 2021-2023 Vates
>>
>> I have to question the two copyrights here given that the majority of
>> the code seems to be taken from the arm implementation (see
>> arch/arm/include/asm/bug.h).
>>
>> With that said, we should consolidate the code rather than
>> duplicating
>> it on every architecture.
>>
> Copyrights should be removed. They were taken from the previous
> implementation of bug.h for RISC-V so I just forgot to remove them.
> 
> It looks like we should have a common bug.h for Arm and RISC-V, but I am
> not sure how best to do that.
> Probably the code wants to be moved to xen/include/bug.h and guarded by
> ifdef ARM && RISCV ...

Or you could introduce CONFIG_BUG_GENERIC or similar, so it is easily
selectable by other architectures.
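A minimal sketch of that suggestion (hypothetical Kconfig fragment; only the
option name CONFIG_BUG_GENERIC comes from the suggestion above, everything
else is illustrative):

```kconfig
# Common option living alongside the shared implementation
config BUG_GENERIC
	bool

# An architecture that wants the shared bug.h simply selects it,
# e.g. in the (hypothetical) RISC-V Kconfig:
config RISCV
	def_bool y
	select BUG_GENERIC
```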

> But still I am not sure that this is the best option, as at least we
> have a different implementation for x86_64.

My main concern is the maintenance effort. For every bug, we would need 
to fix it in two places. The risk is we may forget to fix one architecture,
which is not an ideal situation.

So I think sharing the header between RISC-V and Arm (or x86, see below) 
is at least a must. We can do the third architecture at a leisurely pace.

One option would be to introduce asm-generic like Linux does (IIRC this 
was a suggestion from Andrew). This would also allow sharing code between 
two of the archs.

Also, from a brief look, the difference in implementation is mainly 
because on Arm we can't use %c (some versions of GCC didn't support it). 
Is this also the case on RISC-V? If not, you may want to consider using 
the x86 version.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jan 30 22:44:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Jan 2023 22:44:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487266.754856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcsn-0002Yi-Db; Mon, 30 Jan 2023 22:44:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487266.754856; Mon, 30 Jan 2023 22:44:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMcsn-0002Yb-B9; Mon, 30 Jan 2023 22:44:13 +0000
Received: by outflank-mailman (input) for mailman id 487266;
 Mon, 30 Jan 2023 22:44:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMcsl-0002YV-N8
 for xen-devel@lists.xenproject.org; Mon, 30 Jan 2023 22:44:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMcsh-0006nR-MG; Mon, 30 Jan 2023 22:44:07 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMcsh-0007DG-GO; Mon, 30 Jan 2023 22:44:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=mC1/+GCUZh5t3jhpg3lFZjwqR6jMNtu16L7JSPUjM60=; b=PnP6TRjX61yEHyoaqWhWeHAHH6
	IBnnY5qCpVmejxolRRcTb3L/cxEtEiC8IjAplgrunjwX/tjbZPeawW11o/YCIzZoOp/2sxkyzZ7ES
	XJ29s0ukc9Fi5y/wIija3x1CL1UvK7HD1qDGtZW2I4TpYNAhRCTIcrLU/6Epyp9+MeWU=;
Message-ID: <7fe303b6-9d01-783b-f24a-12605fe62e5f@xen.org>
Date: Mon, 30 Jan 2023 22:44:05 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Jan Beulich <jbeulich@suse.com>, Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
 <75328420-0fbd-92ae-40c7-9fee1c31c907@suse.com>
 <792bc4533d3604acd4321b4b15965adec22431a4.camel@gmail.com>
 <bdb8c6a8-a677-9bef-c819-904bd944e6da@suse.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
In-Reply-To: <bdb8c6a8-a677-9bef-c819-904bd944e6da@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Jan,

On 30/01/2023 13:50, Jan Beulich wrote:
> On 30.01.2023 12:54, Oleksii wrote:
>> Hi Jan,
>>
>> On Fri, 2023-01-27 at 15:24 +0100, Jan Beulich wrote:
>>> On 27.01.2023 14:59, Oleksii Kurochko wrote:
>>>> --- /dev/null
>>>> +++ b/xen/arch/riscv/include/asm/processor.h
>>>> @@ -0,0 +1,82 @@
>>>> +/* SPDX-License-Identifier: MIT */
>>>> +/*****************************************************************
>>>> *************
>>>> + *
>>>> + * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
>>>> + * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@gmail.com>
>>>> + * Copyright 2023 (C) Vates
>>>> + *
>>>> + */
>>>> +
>>>> +#ifndef _ASM_RISCV_PROCESSOR_H
>>>> +#define _ASM_RISCV_PROCESSOR_H
>>>> +
>>>> +#ifndef __ASSEMBLY__
>>>> +
>>>> +/* On stack VCPU state */
>>>> +struct cpu_user_regs
>>>> +{
>>>> +    unsigned long zero;
>>>> +    unsigned long ra;
>>>> +    unsigned long sp;
>>>> +    unsigned long gp;
>>>> +    unsigned long tp;
>>>> +    unsigned long t0;
>>>> +    unsigned long t1;
>>>> +    unsigned long t2;
>>>> +    unsigned long s0;
>>>> +    unsigned long s1;
>>>> +    unsigned long a0;
>>>> +    unsigned long a1;
>>>> +    unsigned long a2;
>>>> +    unsigned long a3;
>>>> +    unsigned long a4;
>>>> +    unsigned long a5;
>>>> +    unsigned long a6;
>>>> +    unsigned long a7;
>>>> +    unsigned long s2;
>>>> +    unsigned long s3;
>>>> +    unsigned long s4;
>>>> +    unsigned long s5;
>>>> +    unsigned long s6;
>>>> +    unsigned long s7;
>>>> +    unsigned long s8;
>>>> +    unsigned long s9;
>>>> +    unsigned long s10;
>>>> +    unsigned long s11;
>>>> +    unsigned long t3;
>>>> +    unsigned long t4;
>>>> +    unsigned long t5;
>>>> +    unsigned long t6;
>>>> +    unsigned long sepc;
>>>> +    unsigned long sstatus;
>>>> +    /* pointer to previous stack_cpu_regs */
>>>> +    unsigned long pregs;
>>>> +};
>>>
>>> Just to restate what I said on the earlier version: We have a struct
>>> of
>>> this name in the public interface for x86. Besides to confusion about
>>> re-using the name for something private, I'd still like to understand
>>> what the public interface plans are. This is specifically because I
>>> think it would be better to re-use suitable public interface structs
>>> internally where possible. But that of course requires spelling out
>>> such parts of the public interface first.
>>>
>> I am not sure that I get you here.
>> I greped a little bit and found out that each architecture declares
>> this structure inside arch-specific folder.
>>
>> Mostly the usage of the cpu_user_regs is to save/restore current state
>> of CPU during traps ( exceptions/interrupts ) and context_switch().
> 
> Arm effectively duplicates the public interface struct vcpu_guest_core_regs
> and the internal struct cpu_user_regs (and this goes as far as also
> duplicating the __DECL_REG() helper). Personally I find such duplication
> odd at the first glance at least; maybe there is a specific reason for this
> in Arm. But whether the public interface struct can be re-used can likely
> only be known once it was spelled out.

There is some forced padding, a different ordering, and some extra fields 
in the internal version to simplify the assembly code.

The rationale is explained in 1c38a1e937d3 ("xen: arm: separate guest 
user regs from internal guest state").

We also have a split between 32-bit and 64-bit to avoid doubling up the 
size in the former case.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 00:22:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 00:22:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487276.754867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMePW-0006e5-FA; Tue, 31 Jan 2023 00:22:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487276.754867; Tue, 31 Jan 2023 00:22:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMePW-0006dy-C2; Tue, 31 Jan 2023 00:22:06 +0000
Received: by outflank-mailman (input) for mailman id 487276;
 Tue, 31 Jan 2023 00:22:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DaM=54=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMePU-0006ds-OS
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 00:22:04 +0000
Received: from mail-vk1-xa32.google.com (mail-vk1-xa32.google.com
 [2607:f8b0:4864:20::a32])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4870b4b0-a0fd-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 01:22:02 +0100 (CET)
Received: by mail-vk1-xa32.google.com with SMTP id l20so6602939vkm.11
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 16:22:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4870b4b0-a0fd-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=lCKttsuDVf51XjHaovWCYjyPXGiNd+/TAWMpshQo7To=;
        b=YMlCMt1NgNAEV/lNHGZtwhlL+fz+rgRer7enoOXwV/cqUc5I3Bnt3obQKMg3ACv5wQ
         p/yrV/nCAYlo8qlY27gupISbaKkZnzjdv7ITYl/wkdk3P/w9iYLC4T6vnFPy44L7qUYc
         GIqLAfEoEqe1jEoorXxkEG0o1ShYpXhzqdu2IpnTGroHIt2bd957CSf4eO/wQ++T0CAM
         yvbuTvIldvEFNUMrXL1Uyg4olbYVd+gKb3opt5C4rFD9Sx8/6AunA2aiv7rTBjIcWcMb
         OppKcLgn0sM9AB0oAftUacFWoPhUARw2C0c1EiUcj0ZpZlfUg0nGv5GAsckQxzcoXB+C
         vaSQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=lCKttsuDVf51XjHaovWCYjyPXGiNd+/TAWMpshQo7To=;
        b=gaEAXpZCdiPdDiFXhrSgVM6g8/xwpAUzYQQMLSmNi4gGbzpz46Iikfinb+KRE3wWem
         BeN8xaimcsjmdgy01Xrf9BrFAa0Aimlmf5JreGrjY3dl0BG6NgbSjDxiLAifUIIqlX2r
         m3iGS1iBDox3fqYl1ZkBbL0R0iDa+hNDffzvSxg4B6wdBSNpqFd6LtaDSYz1NsN/En9O
         A/NTYxtAmIvYA2Nusj97N5hLlLpZnh+aOLrurLRWhILzFkYyyOWumSaRPWNi1gP6sgUo
         ZpN6NWKYWmGuoHHbge5Rn7EXnYt+53H2R3frNL1sl0CjPHoOhz/anDQ9AxF1oqfzr38a
         IyyQ==
X-Gm-Message-State: AFqh2krliFUUWipQt5y+Yg7Za+l+Rkw7J1mnEQS/HzbgRyhoeE8bY9yu
	D4/IkL3flBKld2LrYNH86PXYLMiYvJ9Xy2gAViA=
X-Google-Smtp-Source: AMrXdXuW/JSKWwpsCH5njcgJf5G3OnExFuB77ejREq7GzG9XkGmQeVTLJeiI9dNvSeOzu4VRcDIyZWTH52ZImNXMANs=
X-Received: by 2002:a1f:2c0c:0:b0:3e1:7e08:a117 with SMTP id
 s12-20020a1f2c0c000000b003e17e08a117mr6804924vks.34.1675124521576; Mon, 30
 Jan 2023 16:22:01 -0800 (PST)
MIME-Version: 1.0
References: <cover.1674818705.git.oleksii.kurochko@gmail.com> <971c400abf7f88a6be322db72481c075d3ceb233.1674818705.git.oleksii.kurochko@gmail.com>
In-Reply-To: <971c400abf7f88a6be322db72481c075d3ceb233.1674818705.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 31 Jan 2023 10:21:35 +1000
Message-ID: <CAKmqyKNSywyF8=KUTiKN12JL_Bst5if74h6mgek1aMYS1QpjeQ@mail.gmail.com>
Subject: Re: [PATCH v2 01/14] xen/riscv: add _zicsr to CFLAGS
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 28, 2023 at 12:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Working with some registers requires the csr instructions, which are
> part of Zicsr.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V2:
>   - Nothing changed
> ---
>  xen/arch/riscv/arch.mk | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
> index 012dc677c3..95b41d9f3e 100644
> --- a/xen/arch/riscv/arch.mk
> +++ b/xen/arch/riscv/arch.mk
> @@ -10,7 +10,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
>  # into the upper half _or_ the lower half of the address space.
>  # -mcmodel=medlow would force Xen into the lower half.
>
> -CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
> +CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany
>
>  # TODO: Drop override when more of the build is working
>  override ALL_OBJS-y = arch/$(TARGET_ARCH)/built_in.o
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 00:49:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 00:49:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487296.754904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMeqR-0002SD-6Q; Tue, 31 Jan 2023 00:49:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487296.754904; Tue, 31 Jan 2023 00:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMeqR-0002S6-2Y; Tue, 31 Jan 2023 00:49:55 +0000
Received: by outflank-mailman (input) for mailman id 487296;
 Tue, 31 Jan 2023 00:49:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DaM=54=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMeqP-0001Qz-DX
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 00:49:53 +0000
Received: from mail-vk1-xa30.google.com (mail-vk1-xa30.google.com
 [2607:f8b0:4864:20::a30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2b429fbb-a101-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 01:49:51 +0100 (CET)
Received: by mail-vk1-xa30.google.com with SMTP id l20so6626283vkm.11
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 16:49:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b429fbb-a101-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=PELe0C7W8pESvdkoWYjveyfeaJy0mj9fdSjjtua1pX4=;
        b=DzIGKZugqcW4K4Czi1TskR59zyooLT5XsWRNp7uNNOUfal7mCgX61bA8Ezp9xfEM6t
         h+HQM9ADbvrbMH0f+qltjGVB+ed8h6nKceoZwwkRmCodsP0zdwz8MCb8dYUevIf+mq3U
         Vei2scVbEoeZzSTOf9l803QAjxt7bmnw/9+wsmrphmoEO/MqeLInfgDt1HtJ3z0awrR4
         Z4SDsyjISE+6PqHE84ZU272TgqNYc/cbQAgOOVkm5WZt9aQQUqOFkQtuBT8AaMuWhXSI
         RJ3YdcbEXynYfdRvZ51DP7//QGbNvh2o6PTtzakAbYLED9/borgWmDh3YJN7vGjY7rVy
         GR0A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=PELe0C7W8pESvdkoWYjveyfeaJy0mj9fdSjjtua1pX4=;
        b=AVoxPLkDn4pbhTPLkvTsbaP7jCoMxOkXRW+yo1aFJDRUb2Io1/QXFuZ9AJKZZOBzSR
         n3g+qmHc4kg8XoLFH4GqWlyDgIYVoq9ErrePIgZjDAeYhrUZWMeaYpBAJmj1S2wI9wCw
         JTfG6Otgd/1ewcVp4znPzb6WDm/TUasxXercc1QT58HKnluT3YqKQIYGz4RzLk0KlwuR
         BAQmCLggnNy9/spqQjUPwJHqbaiSWjXGQSZ6kpHoQ4gbUVn1j04hnm8usP/HBX/zOMca
         De9CRmypZIwbvwB+fjyJMwiE0vjApDS+HAaY1+QqEVi5UgiLUvuWRuMNkPB7t45KY0jS
         gXrw==
X-Gm-Message-State: AO0yUKWRBxFH+tlzZUSH+HZwhd/ufe7BO/h7olXPR2F7wFqILIQ+yPHY
	1c7Y4ldpUziuLFtb5yEE58JhIjQL1CDZlDXYiZc=
X-Google-Smtp-Source: AK7set8ut3MFmooQ/16CmmO6+L6quSLvLobLcHFVGnpbvybRrbnk5MC320w/qSFM3Hnkpnr4mtt43BofShFsWMYoDJ0=
X-Received: by 2002:a1f:1c0b:0:b0:3ea:3dee:4545 with SMTP id
 c11-20020a1f1c0b000000b003ea3dee4545mr801979vkc.26.1675126190656; Mon, 30 Jan
 2023 16:49:50 -0800 (PST)
MIME-Version: 1.0
References: <cover.1674818705.git.oleksii.kurochko@gmail.com> <9a098db8e3fef97df987b2a7330333b51a21cb8c.1674818705.git.oleksii.kurochko@gmail.com>
In-Reply-To: <9a098db8e3fef97df987b2a7330333b51a21cb8c.1674818705.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 31 Jan 2023 10:49:24 +1000
Message-ID: <CAKmqyKO9VWtUza7EdgWDKq2BDEz9=+zmWrJwMcrakDTJ_APbjQ@mail.gmail.com>
Subject: Re: [PATCH v2 02/14] xen/riscv: add <asm/asm.h> header
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Jan 27, 2023 at 11:59 PM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V2:
>   - Nothing changed
> ---
>  xen/arch/riscv/include/asm/asm.h | 54 ++++++++++++++++++++++++++++++++
>  1 file changed, 54 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/asm.h
>
> diff --git a/xen/arch/riscv/include/asm/asm.h b/xen/arch/riscv/include/asm/asm.h
> new file mode 100644
> index 0000000000..6d426ecea7
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/asm.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: (GPL-2.0-only) */
> +/*
> + * Copyright (C) 2015 Regents of the University of California
> + */
> +
> +#ifndef _ASM_RISCV_ASM_H
> +#define _ASM_RISCV_ASM_H
> +
> +#ifdef __ASSEMBLY__
> +#define __ASM_STR(x)   x
> +#else
> +#define __ASM_STR(x)   #x
> +#endif
> +
> +#if __riscv_xlen == 64
> +#define __REG_SEL(a, b)        __ASM_STR(a)
> +#elif __riscv_xlen == 32
> +#define __REG_SEL(a, b)        __ASM_STR(b)
> +#else
> +#error "Unexpected __riscv_xlen"
> +#endif
> +
> +#define REG_L          __REG_SEL(ld, lw)
> +#define REG_S          __REG_SEL(sd, sw)
> +
> +#if __SIZEOF_POINTER__ == 8
> +#ifdef __ASSEMBLY__
> +#define RISCV_PTR              .dword
> +#else
> +#define RISCV_PTR              ".dword"
> +#endif
> +#elif __SIZEOF_POINTER__ == 4
> +#ifdef __ASSEMBLY__
> +#define RISCV_PTR              .word
> +#else
> +#define RISCV_PTR              ".word"
> +#endif
> +#else
> +#error "Unexpected __SIZEOF_POINTER__"
> +#endif
> +
> +#if (__SIZEOF_INT__ == 4)
> +#define RISCV_INT              __ASM_STR(.word)
> +#else
> +#error "Unexpected __SIZEOF_INT__"
> +#endif
> +
> +#if (__SIZEOF_SHORT__ == 2)
> +#define RISCV_SHORT            __ASM_STR(.half)
> +#else
> +#error "Unexpected __SIZEOF_SHORT__"
> +#endif
> +
> +#endif /* _ASM_RISCV_ASM_H */
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 00:49:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 00:49:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487294.754893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMeqP-0002D7-Ta; Tue, 31 Jan 2023 00:49:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487294.754893; Tue, 31 Jan 2023 00:49:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMeqP-0002D0-R6; Tue, 31 Jan 2023 00:49:53 +0000
Received: by outflank-mailman (input) for mailman id 487294;
 Tue, 31 Jan 2023 00:49:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMeqO-0002Ch-HX; Tue, 31 Jan 2023 00:49:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMeqO-0002GT-DN; Tue, 31 Jan 2023 00:49:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMeqN-0000o2-VL; Tue, 31 Jan 2023 00:49:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMeqN-0003P1-Uv; Tue, 31 Jan 2023 00:49:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p4jcPijkHlZyhQq3Im2M6eNCQ5dyW5WjmvS6pF2xxUk=; b=H/HdpNogvpjgTfKLPzRgrpqF95
	gNvaOxp0Qf9vrB9uCi+4Rb/qiOFNzUueVdvze3vdzwbveHQUSwZF4al8AQkpetHePrOWul7sRmx9G
	1zHtbTVrjSzuxMcccIVWhhEobtlLEHfrb2sQQcjzVhOzGyrRHeYxZ1vtGfl1T95LWbTM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176285-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176285: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 00:49:51 +0000

flight 176285 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176285/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    4 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (default value). This
    causes kernel_zimage_place() to treat the binary (contained within the
    uImage) as a position independent executable, and thus load it at an
    incorrect address.
    
    The correct approach would be to read "uimage.load" and set
    info->zimage.start. This will ensure that the binary is loaded at the
    correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
    address).
    
    If the user provides a load address (ie "uimage.load") of 0x0, then the
    image is treated as a position independent executable. Xen can load such an
    image at any address it considers appropriate. A position independent
    executable cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Earlier, for arm32 and arm64 platforms, Xen ignored the load and entry
    point addresses set in the uImage header. With this commit, Xen will use
    them, making Xen's behavior consistent with U-Boot for uImage headers.
    
    Users who want to use Xen with statically partitioned domains can provide
    non-zero load and entry addresses for the dom0/domU kernel. The load and
    entry addresses provided must be within the memory region allocated by
    Xen.
    
    A deviation from U-Boot behaviour is that we consider a load address of
    0x0 to denote that the image supports position independent execution. This
    is to make the behavior consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 00:50:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 00:50:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487302.754914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMeqy-00046K-LO; Tue, 31 Jan 2023 00:50:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487302.754914; Tue, 31 Jan 2023 00:50:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMeqy-00046D-Hm; Tue, 31 Jan 2023 00:50:28 +0000
Received: by outflank-mailman (input) for mailman id 487302;
 Tue, 31 Jan 2023 00:50:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DaM=54=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMeqx-0003t5-GH
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 00:50:27 +0000
Received: from mail-vs1-xe2c.google.com (mail-vs1-xe2c.google.com
 [2607:f8b0:4864:20::e2c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4044423b-a101-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 01:50:27 +0100 (CET)
Received: by mail-vs1-xe2c.google.com with SMTP id i188so14459545vsi.8
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 16:50:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4044423b-a101-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=oqC5k7Jxr08rLQX92K857Nue+507+QiY0epeO96EDeE=;
        b=Lv4uCyXBAGLzNjcgXuiCZNMBefr+ZyaWl4vZf/+IEH4erdAiFpwwkFmAOk1XIdZ1jr
         rtGZjHwG/4tmF7Ko8XNiO83wdyw2CaBZmaqSa3jHZ628jp+2FulcG5G/zdBXvW5CUY2f
         BOOQGDegstWBrXBNTlRZUroddL/kSsTWjuIkIv1PSykto+rGt/fbVeuNAFBoe784MWkQ
         SankiPq+3hcKMc4a/DpG2rIIbEZPnJXQKQ6ZWX3bVyH8AWLTMsLzXTv96s7BFTD240Iy
         Z7oVauc4aVNGJI9k+Xg8FJp1pwwhryHcraxPGfgf2F7NZN24IknASs6s1idOI16dc8f7
         b5rQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=oqC5k7Jxr08rLQX92K857Nue+507+QiY0epeO96EDeE=;
        b=UoNTi0fGUgq74qmPuzJfrLuk9/5bUQ1uMY0kXBk2fFKx6fZYskyaweJBoBLZorLkEs
         O6R5r0SoGRnugv3+ugzEGGRfJMm9yqbanOPmHtruFmwp9HdDx5kgvxwBXmtuekLxXbBj
         VNSb8olsa/jR2m1dytLo4PF/LZYkO3hlufV6unOjvcTk9YxyEIUAPQjqZJQh1PItK8wd
         yV/MCmV8pEtHYfFUv5nBDY6ucKlKIEQrgF5FkML7yOIVobn1ocpeednxd/kAnKNmOAw1
         ph6pemeX1ESlxLwrhi2xfnaOAMygX/cPxWh9rgN7s4ZX+NamMj7mrQcye9BVgaNJPKWE
         HkKA==
X-Gm-Message-State: AFqh2kp1wbyd72oXwuXGSPL7uv6a/dtv7bfu5BLQXh3VCGAKOgahKwoX
	iLUWVDhNUryjSVBOjsi3A0g2mSMqct4r717jOUs=
X-Google-Smtp-Source: AMrXdXtiTycoBa9qjmM1hpjye+pvUmKMTgvUuORYzVixHDZ7qfRXSQZE0ukaEYDmEuc7JVEvOD+HvHekaNjazyQd3xQ=
X-Received: by 2002:a05:6102:cd4:b0:3d0:c2e9:cb77 with SMTP id
 g20-20020a0561020cd400b003d0c2e9cb77mr6959006vst.54.1675126225968; Mon, 30
 Jan 2023 16:50:25 -0800 (PST)
MIME-Version: 1.0
References: <cover.1674818705.git.oleksii.kurochko@gmail.com> <19c64efc3c05f64de97cdc4a96919ee28844440b.1674818705.git.oleksii.kurochko@gmail.com>
In-Reply-To: <19c64efc3c05f64de97cdc4a96919ee28844440b.1674818705.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 31 Jan 2023 10:49:59 +1000
Message-ID: <CAKmqyKPpfGBJ+nLQrqras1DtPLxELQOyiKiFJq2JP63LjiEdow@mail.gmail.com>
Subject: Re: [PATCH v2 05/14] xen/riscv: introduce empty <asm/string.h>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 28, 2023 at 12:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> To include <xen/lib.h> <asm/string.h> is required
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V2:
>   - <asm/string.h> is a new empty header which is required to include
>     <xen/lib.h>
> ---
>  xen/arch/riscv/include/asm/string.h | 6 ++++++
>  1 file changed, 6 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/string.h
>
> diff --git a/xen/arch/riscv/include/asm/string.h b/xen/arch/riscv/include/asm/string.h
> new file mode 100644
> index 0000000000..a26ba8f5c6
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/string.h
> @@ -0,0 +1,6 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _ASM_RISCV_STRING_H
> +#define _ASM_RISCV_STRING_H
> +
> +#endif /* _ASM_RISCV_STRING_H */
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 00:51:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 00:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487307.754923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMerQ-0004e9-Ta; Tue, 31 Jan 2023 00:50:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487307.754923; Tue, 31 Jan 2023 00:50:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMerQ-0004e2-Qb; Tue, 31 Jan 2023 00:50:56 +0000
Received: by outflank-mailman (input) for mailman id 487307;
 Tue, 31 Jan 2023 00:50:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DaM=54=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMerQ-0003t5-9S
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 00:50:56 +0000
Received: from mail-vs1-xe2b.google.com (mail-vs1-xe2b.google.com
 [2607:f8b0:4864:20::e2b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5165e774-a101-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 01:50:55 +0100 (CET)
Received: by mail-vs1-xe2b.google.com with SMTP id e9so6651364vsj.3
 for <xen-devel@lists.xenproject.org>; Mon, 30 Jan 2023 16:50:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5165e774-a101-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=Gck6xgng8kFOr9KO6xp8nFknGs/yyh8Xj+812scWE+E=;
        b=O2FdhqiuG8n7TYa8O/JNOIsMWUJczqWtdPvY+8FLD/PTCrCvb/q2kQv06uEsUGWC8k
         qBaMzuUiPbCM21y3lpbDZaLVhQwMjj+yDfc5dNUH68BaybASwAUKvJiiEiFifpJBHu3s
         zO4uzvQ78MYavjQnzLHRJlGFJIMImlXGyydssvkr588O2j6zuMMnz3y3swCnqBfykt0J
         F4smr4FJ/LgAw3HvSJSn/JQCAvq5no3QI6Ns5OS20JyX41rwcd8QXVILUgR4U0JATCsA
         qP6yNxr4Uw+Pk792p7nYJz3FXHKDOlrNsmpiom4y/1Y4JmHctqwIPSnbaqOr2N+yOyJG
         6F0A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=Gck6xgng8kFOr9KO6xp8nFknGs/yyh8Xj+812scWE+E=;
        b=dxuzpdtcedB1RUgIDQffaQojlG1TuGlaPugEoo3OX0AVeugZZiWvz6yNTp3K0gtpDq
         xtpEzkVZR7EmZfBH152V5WX7LPECqO7Z1hBAKySGmPUBaivHyST5yp6TKgKPkCeBleJs
         QHm9SucM2YT8QN+g4lPc9ph4zZY0gDEDfaKES168uH4HVHvy8LhCzJ7EHugGTswi9tzS
         5YN/38xR9hutaEkUUM/Lg2vgv7gAFcqannrxLvlYNjwoTf+eU86/MmS3A5HsM9qj9j1g
         XNg6v4+1Cvw+aMM/vMARsBwf2wqyXsXkQCx2XR/j1oloaSKDZfeHfgM+Xm2VNpudEkgL
         XhnA==
X-Gm-Message-State: AO0yUKU6lY6ovFgvptdl1Gq2WZc5HBe8yjMSnA4pBorOHS3HmRZHXdKH
	LuzNH2m9dtFwf0jivr3Px7yD2EBLdPOTXEKX4/FL0bjowAOsiw==
X-Google-Smtp-Source: AK7set9D+ULchibZ29kfKVEfwGXbpK6kpig/xpI8jL+GMAt7BMa8ad7eT4EZW4+Au8BTb8M0Ky0MbriXZMINpLqyC9M=
X-Received: by 2002:a67:e184:0:b0:3eb:f205:2c08 with SMTP id
 e4-20020a67e184000000b003ebf2052c08mr2226792vsl.10.1675126254693; Mon, 30 Jan
 2023 16:50:54 -0800 (PST)
MIME-Version: 1.0
References: <cover.1674818705.git.oleksii.kurochko@gmail.com> <1c53e9784707482edf96d144d9ce36a4fc9d7ed5.1674818705.git.oleksii.kurochko@gmail.com>
In-Reply-To: <1c53e9784707482edf96d144d9ce36a4fc9d7ed5.1674818705.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 31 Jan 2023 10:50:28 +1000
Message-ID: <CAKmqyKMQ8_jzGviQ-sx+=L38ECfCkXbwqW6Vb04yY3GmGaVTWg@mail.gmail.com>
Subject: Re: [PATCH v2 06/14] xen/riscv: introduce empty <asm/cache.h>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 28, 2023 at 12:00 AM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> To include <xen/lib.h> <asm/cache.h> is required
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V2:
>   - <asm/cache.h> is a new empty header which is required to include
>     <xen/lib.h>
> ---
>  xen/arch/riscv/include/asm/cache.h | 6 ++++++
>  1 file changed, 6 insertions(+)
>  create mode 100644 xen/arch/riscv/include/asm/cache.h
>
> diff --git a/xen/arch/riscv/include/asm/cache.h b/xen/arch/riscv/include/asm/cache.h
> new file mode 100644
> index 0000000000..69573eb051
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/cache.h
> @@ -0,0 +1,6 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _ASM_RISCV_CACHE_H
> +#define _ASM_RISCV_CACHE_H
> +
> +#endif /* _ASM_RISCV_CACHE_H */
> --
> 2.39.0
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 02:26:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 02:26:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487330.754934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMgL0-0006lZ-SZ; Tue, 31 Jan 2023 02:25:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487330.754934; Tue, 31 Jan 2023 02:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMgL0-0006lR-Ne; Tue, 31 Jan 2023 02:25:34 +0000
Received: by outflank-mailman (input) for mailman id 487330;
 Tue, 31 Jan 2023 02:25:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WzvD=54=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMgKz-0006lL-Nz
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 02:25:34 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on2083.outbound.protection.outlook.com [40.107.14.83])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 882100f9-a10e-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 03:25:31 +0100 (CET)
Received: from DB8PR03CA0035.eurprd03.prod.outlook.com (2603:10a6:10:be::48)
 by AS2PR08MB9810.eurprd08.prod.outlook.com (2603:10a6:20b:605::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 02:25:28 +0000
Received: from DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::9f) by DB8PR03CA0035.outlook.office365.com
 (2603:10a6:10:be::48) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36 via Frontend
 Transport; Tue, 31 Jan 2023 02:25:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT019.mail.protection.outlook.com (100.127.142.129) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.21 via Frontend Transport; Tue, 31 Jan 2023 02:25:27 +0000
Received: ("Tessian outbound 333ca28169fa:v132");
 Tue, 31 Jan 2023 02:25:27 +0000
Received: from 89a828a51190.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0798C50E-9AAF-4664-9588-E96E3F461D0E.1; 
 Tue, 31 Jan 2023 02:25:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 89a828a51190.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 31 Jan 2023 02:25:22 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB9363.eurprd08.prod.outlook.com (2603:10a6:20b:5aa::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.31; Tue, 31 Jan
 2023 02:25:08 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.033; Tue, 31 Jan 2023
 02:25:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 882100f9-a10e-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UMDPrCse9C468YL8oipJD7KpwWoj4gt3JDyEH20o2P0=;
 b=zE7csT5RfJZtjgA1QsWR9sPdI28VkXBA9bfGLniZGNodRZ3S9S2amIAk39/j/D2NSWT/Vb8oe3HsBl/t9Lg6gzsibf9NbnMV6vVfyhzd2NzFWSg4IYKgXQJsfRIg+dY7YWODb1gL3DjffsPdzM7BCFzX1UWimjU8sJNXqSe7yPA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WHk6437HIuCACtTHeQvQ8RVbyANHr/vPILZT9cPKiWNEI9Lr5kYFMowxf8R+wZ9BfFrWaLIBwg/34F/e8oepQoofPL2YQilbyRvQpSfh4/V0F3q2TptmlCeQXrF05v4PHyV+fyTcCyMmO0YhPw3z2vLd58nv637hRV6xQ/h/N/1/WIxWoh0n5onNR8j8/27OBZcGqiuHpzHj34BJ0y20/3QJdEA32yrYScZB/6Yp5c/Z+2L9nulXfY/uAerqtVUbG2x/Ag33grIGl7Wk11QN5wj0pSpE4thGUp9Ms8ZOa59QueWXZ7CC/15DV9vu3jORLEkWzg55puGhcnVwL26hlg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UMDPrCse9C468YL8oipJD7KpwWoj4gt3JDyEH20o2P0=;
 b=dUeQJ6JcKreypcJ386SItCILYCaZa8AoefrdSnCDpdLnesWFiqOdi3MH3jAjmRNi70NlGoCTJCeNvanENO5MzwGe0ZKCYtQ8fmomlRqxif7DzgfdAI7WkPTP2HPu1UQfiEOvhyUyws+2zsx2vhdY5Os19gae2qK5fCEgkCcilGyINoQ9/3yTBAH59w9IoOuEs6IOSSD4HQ6p1yv3LtG6JgB5CwIhpH1/PiybeP7lZX3Oe6p943FiXKaMd41gMKxMuy3RfNkyC9Vmai1pFyEzaCQNVABzyxyL4AF8ZJLKAZHI1a407G9vuSVk0mWmtNoJSH3QV+nLJcF+IPq1SiodGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UMDPrCse9C468YL8oipJD7KpwWoj4gt3JDyEH20o2P0=;
 b=zE7csT5RfJZtjgA1QsWR9sPdI28VkXBA9bfGLniZGNodRZ3S9S2amIAk39/j/D2NSWT/Vb8oe3HsBl/t9Lg6gzsibf9NbnMV6vVfyhzd2NzFWSg4IYKgXQJsfRIg+dY7YWODb1gL3DjffsPdzM7BCFzX1UWimjU8sJNXqSe7yPA=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v3 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Thread-Topic: [PATCH v3 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Thread-Index: AQHZNGA1IwnnIsRmjEGECQ4jTGRpG663gnMAgAA6SvA=
Date: Tue, 31 Jan 2023 02:25:08 +0000
Message-ID:
 <AS8PR08MB79910D7E3C7F32D8CDCB851092D09@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230130040535.188236-1-Henry.Wang@arm.com>
 <20230130040535.188236-2-Henry.Wang@arm.com>
 <fca91d3c-5d8a-3f7e-419a-a4c5208273dc@xen.org>
In-Reply-To: <fca91d3c-5d8a-3f7e-419a-a4c5208273dc@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: E7B68B85109E8D44ADD76939F6ABD58C.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB9363:EE_|DBAEUR03FT019:EE_|AS2PR08MB9810:EE_
X-MS-Office365-Filtering-Correlation-Id: 5b943789-3b8a-4ba8-2231-08db03326acb
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ZmWIt/jHc8FPf3MEimBk3kzdRcbON6nRp+Ayd4/jvTMWAjDmkPyZI+sKUsprjLiAsfhbopcNHXlNOswtMNXSd/pD0dEj+NBBH71R+o5e+cCcG6ulIh5SujiREXCB3rUquxYbgfVgwi8wfOIN0pcXmpZLHX8fvH+MsZurbaW0j0xp4ssDmJlYOUD1ACP2m2VBmpADdGk2c4bnsO3OjPdhx2zlaMLS94ta3Lxs1HNxyEG7kXhzdi97qFcfIYdkjUSNuaZrPHjYV8H4zDRxyTVQ78yReBJo/KIFw7qNRJsZ3FhGyGl6b08YNMSqo3anQCU64jqWncVMIRDcwUF2jf0czGD3LOHp3YnO467l7gxpBiTCScki2JcHe6oxpSKet6KqXFkkTJZb+O0cvMyqprJl9T5PF0us7J9g/PI9lChikKaUUdig44irB373IPM9EHQ+WRm8ggsVnnNRQUd+X8zdhjWQxDeLytjtpWR3Gg4xh5Qv05gDDHW7PWrUgqpY0WFY4bVwzu+o2rs4bz8bfV+l0Cl9SZdZjKDdOxcDJLP8JxHctfhReNkY9CIpVtdizGgq8raQ2/6g8Vh1ZG2f5TYGIN1Q/L2UISLeewTBFDNJyJACJaVmsu7S5tESzMWxBWx/XIQoPu4AJ6YSYaqEE8pZTZxEUQPCLVeCdRIwr1oUBx7ou+osLRfLxdNAL4KUwIsfTmI39b7ykTh+Yiu55dWXItOdW7rhRP22syL94kry9z8=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(136003)(346002)(366004)(396003)(376002)(451199018)(33656002)(38070700005)(38100700002)(86362001)(122000001)(41300700001)(54906003)(2906002)(110136005)(478600001)(71200400001)(966005)(66556008)(8936002)(55016003)(4326008)(5660300002)(8676002)(66476007)(76116006)(316002)(66946007)(52536014)(83380400001)(26005)(66446008)(9686003)(186003)(6506007)(64756008)(7696005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9363
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f87cd6a3-a8a8-4406-39ed-08db03325f2d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gX6OjByirul196e8wKOhYl2CSOBVnSRt7vm4wrTqPQQERzE7M7gx+Cls7yZo/AA7GiBBUfBGxjeyFk6DvXaRv0+ryRL3Zys/JZ2nGIRu1LI186U2ZZAB/OfuEU7YSp+V2lD3QjPKzcDGqR0vYxChKKFSkNec2qxA6Px+addAdoIHAtXEfo39y70DdshKNzfc9YHCS2NTH50t3M8HoYEeybKSYAqtkEigqAZW3TO4OO7i28lCmQPo6bohuurQLt2jvOPQ+zb2Ud3NPo3XMWoa1LjVmIIzmJJAO6+OZh5leL7JlDuCrLUbKCJS5m1pXgf0wfA5OJjRq1e6YCQIbw7OdgjZUqjT0SNr+5NiYKmey7Rs8CFMFYTOYHhQtjp+flt/QRe7LeFj8LCJKlX1zV/e5dhVbRP3AWY24US22KXyKoL2Tg1q0J6LzVXo0DJRiJmEW4hIdZ9SLRZktli8W1AdkxRsTX6i+7Mw6tf03RbegdrJ8ojXRTVd9UrWY+23x3XTY+tjWXwHSYRNsI0GzlYzLkmTXdNrmsCM8fILmZptHmyXfPR2BGk0GEHRe0x62rgTWtSLF/5ygaEvTH8+2VRnBhpLYxIGrsos/nA7iKSWlAsud1OUSvZqwcZxz4NYnJgOmU3AY4QVsh+YW9uzXJn/EWLAFKeaFOC0GiO1CDknAVOMfhecrgozIQuR5tj4bGSXQpxSWM0q/FTzjDo6BJt1vvSRUnjQdbZ0S06beDq/R6Q=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(346002)(396003)(136003)(376002)(39860400002)(451199018)(40470700004)(46966006)(36840700001)(70586007)(33656002)(2906002)(40460700003)(5660300002)(316002)(107886003)(47076005)(40480700001)(356005)(52536014)(81166007)(82740400003)(70206006)(41300700001)(36860700001)(86362001)(8936002)(4326008)(83380400001)(82310400005)(55016003)(26005)(336012)(54906003)(9686003)(8676002)(110136005)(6506007)(7696005)(478600001)(966005)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 02:25:27.8002
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b943789-3b8a-4ba8-2231-08db03326acb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9810

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Subject: Re: [PATCH v3 1/3] xen/arm: Add memory overlap check for
> bootinfo.reserved_mem
> 
> Hi Henry,
> 
> > +{
> > +    paddr_t bank_start = INVALID_PADDR, bank_end = 0;
> > +    paddr_t region_end = region_start + region_size;
> > +    unsigned int i, bank_num = meminfo->nr_banks;
> > +
> > +    for ( i = 0; i < bank_num; i++ )
> > +    {
> > +        bank_start = meminfo->bank[i].start;
> > +        bank_end = bank_start + meminfo->bank[i].size;
> > +
> > +        if ( region_end <= bank_start || region_start >= bank_end )
> 
> ... it clearly shows how this check would be wrong when either the bank
> or the region is at the end of the address space. You may say it doesn't
> overlap when it could (e.g. when region_end < region_start).

Here do you mean if the region is at the end of the addr space,
"region_start + region_end" will overflow and cause
region_end < region_start? If so...

> 
> That said, unless we rework 'bank', we would not properly solve the
> problem. But that's likely a bigger piece of work and not something I
> would request.
> 
> So for now, I would suggest to add a comment. Stefano, what do you think?

...I am not really sure if simply adding a comment here would help,
because when the overflow happens, we are already doomed because
of the messed-up device tree.

Would adding a `BUG_ON(region_end < region_start)` make sense to you?

> 
> > +            continue;
> > +        else
> > +        {
> > +            printk("Region: [%#"PRIpaddr", %#"PRIpaddr"] overlapping with
> bank[%u]: [%#"PRIpaddr", %#"PRIpaddr"]\n",
> 
> ']' usually mean inclusive. But here, 'end' is exclusive. So you want '['.

Oh, now I understand the misunderstanding in our communication in v1:
I didn't know '[' means exclusive because I was educated to use ')' [1] so I
thought you meant inclusive. Sorry for this.

To keep consistency, may I use ')' here? Because I think this is the current
way in the code base, for example see:
xen/include/xen/numa.h L99: [*start, *end)
xen/drivers/passthrough/amd/iommu_acpi.c L177: overlap [%lx,%lx)

> 
> This could be fixed on commit.
> 
> BTW, the same comments applies for the second patch.

I will fix this patch and #2 in v4.

[1] https://en.wikipedia.org/wiki/Interval_(mathematics)#Including_or_excluding_endpoints

Kind regards,
Henry

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 03:19:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 03:19:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487335.754944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMhAv-0004h8-Nw; Tue, 31 Jan 2023 03:19:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487335.754944; Tue, 31 Jan 2023 03:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMhAv-0004h1-KY; Tue, 31 Jan 2023 03:19:13 +0000
Received: by outflank-mailman (input) for mailman id 487335;
 Tue, 31 Jan 2023 03:19:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMhAu-0004gr-OW; Tue, 31 Jan 2023 03:19:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMhAu-0004Wj-M3; Tue, 31 Jan 2023 03:19:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMhAu-00069N-5F; Tue, 31 Jan 2023 03:19:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMhAu-00074A-4C; Tue, 31 Jan 2023 03:19:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s0DtBWTVKa4aFA+ospzfWIz8aIiFkzRQwMihBRVOfIE=; b=TyFnsVhjZkApyewuOAFe+zW/iv
	eE9Lg3Up2+SmQAIKl7knTkO6Jqib80NlQSWLH5NmtoMswmQ3821BWWX+xUof8JtB61KpskmB+D5ea
	Iv55gvn9W1u5Sp4t3+833DpYuTkIttyZ3v16BBFyUmysz/7cT4ZuPdZN6y9VQN6FxZmo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176286-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176286: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 03:19:12 +0000

flight 176286 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176286/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    4 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (default value). This
    causes kernel_zimage_place() to treat the binary (contained within uImage)
    as a position independent executable, and so it loads it at an incorrect
    address.
    
    The correct approach is to read "uimage.load" and set
    info->zimage.start. This ensures that the binary is loaded at the
    correct address. Also, read "uimage.ep" and set info->entry (i.e. the
    kernel entry address).
    
    If the user provides a load address (i.e. "uimage.load") of 0x0, the image is
    treated as a position independent executable. Xen can load such an image at
    any address it considers appropriate. A position independent executable
    cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Earlier, for arm32 and arm64 platforms, Xen ignored the load and entry
    point addresses set in the uImage header. With this commit, Xen will use them.
    This makes the behavior of Xen consistent with U-Boot for uImage headers.
    
    Users who want to use Xen with statically partitioned domains can provide
    non-zero load and entry addresses for the dom0/domU kernel. The load and
    entry addresses provided must be within the memory region allocated by
    Xen.
    
    A deviation from U-Boot behaviour is that we consider load address == 0x0
    to denote that the image supports position independent execution. This
    is to make the behavior consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)
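The load/entry policy described in the commit message above can be sketched as follows. This is a minimal illustration only: the struct and function names (kinfo_sketch, probe_uimage_addresses) are hypothetical stand-ins, not the actual kernel_uimage_probe()/kernel_info definitions in Xen.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the relevant fields of Xen's kernel_info. */
struct kinfo_sketch {
    uint64_t zimage_start;    /* address the binary must be loaded at */
    uint64_t entry;           /* kernel entry point */
    int position_independent; /* Xen may place the image anywhere */
};

/* Apply the uImage header fields per the policy the commit describes:
 * a load address of 0x0 marks a position independent executable (no
 * fixed placement or entry point); anything else is honoured as-is,
 * matching U-Boot's handling of the header. */
static void probe_uimage_addresses(uint64_t uimage_load, uint64_t uimage_ep,
                                   struct kinfo_sketch *info)
{
    if (uimage_load == 0) {
        info->position_independent = 1;
        info->zimage_start = 0;
        info->entry = 0;
    } else {
        info->position_independent = 0;
        info->zimage_start = uimage_load;
        info->entry = uimage_ep;
    }
}
```

With this policy, a statically partitioned configuration simply sets both header fields to addresses inside the memory region Xen allocated for the domain.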


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 04:12:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 04:12:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487345.754954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMi00-0003RX-Po; Tue, 31 Jan 2023 04:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487345.754954; Tue, 31 Jan 2023 04:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMi00-0003RQ-MU; Tue, 31 Jan 2023 04:12:00 +0000
Received: by outflank-mailman (input) for mailman id 487345;
 Tue, 31 Jan 2023 04:11:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gnQn=54=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pMhzy-0003RK-Nl
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 04:11:59 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on061d.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 655f300e-a11d-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 05:11:55 +0100 (CET)
Received: from DB6P191CA0001.EURP191.PROD.OUTLOOK.COM (2603:10a6:6:28::11) by
 AS4PR08MB8048.eurprd08.prod.outlook.com (2603:10a6:20b:588::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 04:11:52 +0000
Received: from DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:28:cafe::31) by DB6P191CA0001.outlook.office365.com
 (2603:10a6:6:28::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36 via Frontend
 Transport; Tue, 31 Jan 2023 04:11:52 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT052.mail.protection.outlook.com (100.127.142.144) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.36 via Frontend Transport; Tue, 31 Jan 2023 04:11:51 +0000
Received: ("Tessian outbound baf1b7a96f25:v132");
 Tue, 31 Jan 2023 04:11:51 +0000
Received: from a39a1b8ec813.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A9BA2675-AF35-4371-9DEA-14C59964F59F.1; 
 Tue, 31 Jan 2023 04:11:41 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a39a1b8ec813.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 31 Jan 2023 04:11:41 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by AS8PR08MB9480.eurprd08.prod.outlook.com (2603:10a6:20b:61f::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 04:11:38 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%3]) with mapi id 15.20.6043.038; Tue, 31 Jan 2023
 04:11:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 655f300e-a11d-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QiB+/urnfzXe/eo3tHgQ/gRpy1d7SMVHvIeRb+JAkKg=;
 b=7SBKQ0iQ6aq1c/BLgcfp0OW5oMxkG3LayZqmtc8pcIkZc5aLsfX4dw3keYvqDMaywMZfxDmq2cD2NTfres5T2bRwHTR3OtB8RKMcawCmCGPBHAk+j9DQTWtzW21DwnvVYZNEiLJf62cAkeUpEyqdazU9k2SgXyUpHFiQOUqacmY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SyDKuKCPAQBu8xW2wJ0IgJPw+ZHKrzxYqzAzhl4gtUb0/wFvOVh4b1jdMMRbRaEZvzT9qErOvRArv8a3TDORxR5E88yiBEph8KTjkh4wziBrn37rZyJjF5mE3rglSJTbFT0b0uxU+BvLIHPEnnVs4lAWQtdO13CN9TLqgc9EGCz7kZ5C30Nusr5enE7IqzcTC+9LbiiqITagoyDOCyCycdxb9H5KwQtyKNo6aHBmIhkoyZHuXUnOuSthCGPlSyogTzAh0Mqye4dBgUoreCVhtyNN8yBoVlT36v7Ewqg4xNH2lrkeaxMTqYSyCOZI83MM/KuZC1DLOveIZ4SQUkwC1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QiB+/urnfzXe/eo3tHgQ/gRpy1d7SMVHvIeRb+JAkKg=;
 b=iOUSUKGQeRUrnhgtS1O7tcj9BbKcPFtFNWR0lfzXhN/PpHwJvdcmWw3fIDgxGjujR3kW8ktG+oG8KmFRnUhocD5Ya80Qtjc0DUWLmEDExqWvel1cGJZko6j499TsMgMc8iWVD1LSrL+YEln/L6MXcQ096TrI6liYbScH7tPzZyYYeHPI/FEo/IN/krJH5mrkKK82Pdbb015zLjx0OWuojJyVTNKiWgUxnFO74z3QTR0cNz75WuCRDPnwUBr7S0xluWyr+NpLs2tH29q2xw6yYLwHPMtD+3wjOOvqm2Qj59VRB+zGjESp+nB6+bVfeVOECc9LvMcWm6amaqzK3X7cIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QiB+/urnfzXe/eo3tHgQ/gRpy1d7SMVHvIeRb+JAkKg=;
 b=7SBKQ0iQ6aq1c/BLgcfp0OW5oMxkG3LayZqmtc8pcIkZc5aLsfX4dw3keYvqDMaywMZfxDmq2cD2NTfres5T2bRwHTR3OtB8RKMcawCmCGPBHAk+j9DQTWtzW21DwnvVYZNEiLJf62cAkeUpEyqdazU9k2SgXyUpHFiQOUqacmY=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "ayan.kumar.halder@xilinx.com"
	<ayan.kumar.halder@xilinx.com>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Thread-Topic: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Thread-Index:
 AQHZJxAhTWb1zeTHm029bSgcZo/2Yq6l4H6AgA7nVJCAAFMfgIAAHOxggAGXmACAARsywA==
Date: Tue, 31 Jan 2023 04:11:38 +0000
Message-ID:
 <AM0PR08MB45309F6DCB1B1E0975A741B7F7D09@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
 <c30b4458-b5f6-f996-0c3c-220b18bfb356@xen.org>
 <AM0PR08MB453083B74DB1D00BDF469331F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <7931e70f-3754-363c-28d8-5fde3198d70f@xen.org>
 <AM0PR08MB45308D5CD69EBB5FE85A4B88F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <3c915633-ddb8-d1e4-f42e-064aaff168b2@xen.org>
In-Reply-To: <3c915633-ddb8-d1e4-f42e-064aaff168b2@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-Mentions:
 Wei.Chen@arm.com,sstabellini@kernel.org,Bertrand.Marquis@arm.com,ayan.kumar.halder@xilinx.com
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: F2E2FC6CCAE42C449A4090FAEF29058B.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|AS8PR08MB9480:EE_|DBAEUR03FT052:EE_|AS4PR08MB8048:EE_
X-MS-Office365-Filtering-Correlation-Id: 07613e93-c352-4384-9f80-08db034147a8
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 O1eOdZwSAzmoeiVc8QoznTPS33c1W4o+GBxIVYmK/FzmfL09k1UwzQedewvjR0HCHH+6lKnISyUqT6zZvKfbmvYD6c4Or/kjoQuCDApj4xf2IE7jT7v4Ohpq0YeTxDtTzIasB+VE8VZZG6QFEtVzHbIoHbzXG1YqeISuTIZ2Z5i50+rXFDm2eDjdPHnn4KhuThdsME8aa557g9R7+P6CJdhG7bvOZT9AVnhC1yhmUVq37IwfStXaBq0UHRmo6cdY15COHldFJTvHgFV6vqn3F8lMmfW0WXRe8solS0FH58Op4fdOCznGSjt1ZqwMCMeKetBLLdHekmart+IdiTyLkIdOtefWEKQsJbgNscUhjqDBrLqnx0v+cu7q4s/X2936JnSZ1soUHhRUmwwFbzHFOXUb4xitvizB8MXh/SzBl+U6EqlYLqVsDkUVE0Dao0DFxA+wCyUw3h2gWVEmCQdGJCcSPqVPpiYn8GHsWp1eao/dj78ryvOIcypcSCd5aTevCNCezE/3rCBIDanxNRM7mLl6lhmg9KuN3inyS1izVIsvDTICVYewc7Rxrof5r14imfwUEHrMUSP0UsFrad4AeXX5fBhAY6rpV5kYoFMHT0dXP+5Sa9wB+o1WP1KkPUHis5WMbi38MBwEknDHW+6DWmhBjJ77n0vkIc0q63Vtye5SYxpLvxdGVTLfjWUCUHHbmF0rIyKpr28dnDdaKjrtZg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB4530.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(346002)(396003)(39860400002)(366004)(376002)(136003)(451199018)(4326008)(66946007)(76116006)(41300700001)(64756008)(66556008)(8676002)(66446008)(66476007)(33656002)(55016003)(8936002)(52536014)(316002)(5660300002)(110136005)(53546011)(83380400001)(71200400001)(2906002)(7696005)(186003)(9686003)(26005)(478600001)(6506007)(38070700005)(86362001)(122000001)(38100700002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9480
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	35e87b48-f51f-4c84-c8ee-08db03413fd4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	X50AutmX49K3NGTlGhY7cjVCNsEwlFW35lkXPX7i/Li9c8jfZqFir6ENyW5KJBhknS1V6ydwTXUxD7TOuqtKNkSxvCMl3vFP79KeZ1JLkF4dr1rOds4zlR1wzAMOk83o69d0+qphVJ4oJpz+Mvz08hXwx5Wc3pu4e5Qji16UMfpb1bur2ayWuee0LbONYhRBR3Sf08gClf1B6VU0NdSe3R60YenSAKdbyzL3ks1DzvOED4KmF8cTtNCY1N1PqzDLOugtxjIohBpwO2CZFyMZOTh9PrCdIqMXYTKiKgDoL6Jc6efsyskHsxtlBZbHM4lzKKsJAUDqwbZrnK/ezVbINiWGrSPtih7jtQIjpEAZ0ibtrUjG0cBZg2nNUsqJl8BWXvmk0QHqwhyvn3zlDS4pfAS99zyylIj0AN+HG1fPBIM+3IBnUEb+HOodGltRX3GQcThKQSrwPPO6+N1lIqIukTQoCxQ/My5fMzf+UZJ1mixNCZlK8nEc10Pp9Flxv2FZpeqLgpzOn/omLIfdCZCe4COUOMwh1gkD81oi6elMNdif25JPLo94sQQIzAQGmWoznC4bgu7aeCcqAIRrPWIu0uK9NuklUR+3aN1IKCnrVPKUgpfQC1zDq2u+eUJVqRWZlTC0/asVFafvClrMv+tcSx5VAOP9dX1jE3XgIpaTfT0iJZt+e3xQml8+lIl690c68TSc0GrQsa+MtG27qVPXCQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(346002)(39860400002)(136003)(396003)(376002)(451199018)(36840700001)(46966006)(40470700004)(5660300002)(55016003)(40480700001)(36860700001)(82310400005)(82740400003)(2906002)(33656002)(86362001)(40460700003)(478600001)(6506007)(53546011)(26005)(336012)(9686003)(7696005)(81166007)(356005)(110136005)(83380400001)(186003)(70206006)(70586007)(47076005)(107886003)(4326008)(8676002)(316002)(41300700001)(52536014)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 04:11:51.3047
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 07613e93-c352-4384-9f80-08db034147a8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8048

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Monday, January 30, 2023 5:40 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU
> memory region map
>
> Hi Penny,
>
> On 30/01/2023 05:45, Penny Zheng wrote:
> >   There are three types of MPU regions during boot-time:
> > 1. Fixed MPU region
> > Regions like Xen text section, Xen heap section, etc.
> > 2. Boot-only MPU region
> > Regions like Xen init sections, etc. It will be removed at the end of booting.
> > 3.   Regions need switching in/out during vcpu context switch
> > Region like system device memory map.
> > For example, for FVP_BaseR_AEMv8R, we have [0x80000000, 0xfffff000) as
> > the whole system device memory map for Xen(idle vcpu) in EL2,  when
> > context switching to guest vcpu, it shall be replaced with
> > guest-specific device mapping, like vgic, vpl011, passthrough device, etc.
> >
> > We don't have two mappings for different stage translations in MPU, like
> we had in MMU.
> > Xen stage 1 EL2 mapping and stage 2 mapping are both sharing one MPU
> > memory mapping(xen_mpumap) So to save the trouble of hunting down
> each
> > switching regions in time-sensitive context switch, we must re-order
> xen_mpumap to keep fixed regions in the front, and switching ones in the
> heels of them.
>
>  From my understanding, hunting down each switching regions would be a
> matter to loop over a bitmap. There will be a slight increase in the number
> of instructions executed, but I don't believe it will be noticeable.
>
> >
> > In Patch Serie v1, I was adding MPU regions in sequence,  and I
> > introduced a set of bitmaps to record the location of same type
> > regions. At the end of booting, I need to *disable* MPU to do the
> reshuffling, as I can't move regions like xen heap while MPU on.
> >
> > And we discussed that it is too risky to disable MPU, and you
> > suggested [1] "
> >> You should not need any reorg if you map the boot-only section
> >> towards in the higher slot index (or just after the fixed ones).
> > "
>
> Right, looking at the new code. I realize that this was probably a bad idea
> because we are adding complexity in the assembly code.
>
> >
> > Maybe in assembly, we know exactly how many fixed regions are,
> > boot-only regions are, but in C codes, we parse FDT to get static
> configuration, like we don't know how many fixed regions for xen static
> heap is enough.
> > Approximation is not suggested in MPU system with limited MPU regions,
> > some platform may only have 16 MPU regions, IMHO, it is not worthy
> wasting in approximation.
>
> I haven't suggested to use approximation anywhere here. I will answer
> about the limited number of entries in the other thread.
>
> >
> > So I take the suggestion of putting regions in the higher slot index.
> > Putting fixed regions in the front, and putting boot-only and
> > switching ones at tail. Then, at the end of booting, when we reorg the
> mpu mapping, we remove all boot-only regions, and for switching ones, we
> disable-relocate(after fixed ones)-enable them. Specific codes in [2].
>
>  From this discussion, it feels to me that you are trying to make the code
> more complicated just to keep the split and save a few cycles (see above).
>
> I would suggest to investigate the cost of "hunting down each section".
> Depending on the result, we can discuss what the best approach.
>

Correct me if I'm wrong, the complicated things in assembly you are worried about
is that we couldn't define the index for initial sections, no hardcoded to keep simple.
And function write_pr, ik, is really a big chunk of codes, however the logic is simple there,
just a bunch of "switch-cases".

If we are adding MPU regions in sequence as you suggested, while using bitmap at the
same time to record used entry.
TBH, this is how I designed at the very beginning internally. We found that if we don't
do reorg late-boot to keep fixed in front and switching ones after, each time when we
do vcpu context switch, not only we need to hunt down switching ones to disable,
while we add new switch-in regions, using bitmap to find free entry is saying that the
process is unpredictable. Uncertainty is what we want to avoid in Armv8-R architecture.

Hmmm, TBH, I really really like your suggestion to put boot-only/switching regions into
higher slot. It really saved a lot trouble in late-init reorg and also avoids disabling MPU
at the same time. The split is a simple and easy-to-understand construction compared
with bitmap too.

IMHO, reorg is really worth doing. We put all complicated things in boot-time to
make runtime context-switch simple and fast, even for a few cycles.
As the Armv8-R architecture profile from the beginning is designed to support use
cases that have a high sensitivity to deterministic execution. (e.g. Fuel Injection,
Brake control, Drive trains, Motor control etc).
However, when talking about architecture thing, I need more professional opinions,
@Wei Chen @Bertrand Marquis
Also, Will R52 implementation encounter the same issue. @ayan.kumar.halder@xilinx.com
@Stefano Stabellini

> Cheers,
>
> --
> Julien Grall

Cheers,

--
Penny Zheng
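The thread above weighs keeping a fixed-front/switching-tail split against scanning a bitmap for a free MPU region slot on every vcpu context switch. The bitmap scan under debate can be sketched minimally as follows; the 32-entry table size and the helper name are assumptions for illustration, not an Armv8-R limit or actual Xen code.

```c
#include <stdint.h>

/* Hypothetical MPU table size; the thread notes some platforms
 * may only have 16 regions. */
#define MPU_REGIONS 32

/*
 * Scan the used-entry bitmap for the lowest free MPU region slot.
 * The worst case is MPU_REGIONS iterations; the number of iterations
 * depends on the bitmap contents, which is the input-dependent (hence
 * "unpredictable") cost debated above for context-switch time.
 */
static int mpu_find_free_slot(uint32_t used_bitmap)
{
    for (int i = 0; i < MPU_REGIONS; i++)
        if (!(used_bitmap & (UINT32_C(1) << i)))
            return i;
    return -1; /* table full */
}
```

Keeping switching regions at a known contiguous range of slots instead makes the per-switch work a fixed loop bound, which is the determinism argument made in the reply.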


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 04:28:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 04:28:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487351.754964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMiG0-0005S6-5w; Tue, 31 Jan 2023 04:28:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487351.754964; Tue, 31 Jan 2023 04:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMiG0-0005Rz-1t; Tue, 31 Jan 2023 04:28:32 +0000
Received: by outflank-mailman (input) for mailman id 487351;
 Tue, 31 Jan 2023 04:28:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMiFy-0005Rp-3R; Tue, 31 Jan 2023 04:28:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMiFy-00069W-2M; Tue, 31 Jan 2023 04:28:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMiFx-0007qi-NF; Tue, 31 Jan 2023 04:28:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMiFx-00043J-Mk; Tue, 31 Jan 2023 04:28:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6a/uC0Xs1PyOJ7bbwQZvUqQs+NesQDMSespLCgrfkCM=; b=qp7iOXzEmTy+l70PLjT39aVgBs
	Q5Pj/rBA5jMgDczspDY0jqyOPHoUX/K0yNR1B+u5spp/irUz6Sh7XawIyh3RiySEGDCUG/A4ysGC6
	DCNW9TSPnTiRHWwG5NZdSrYgnZgnKRVTdM37kPDegyAby9DEHOZnnIVMCoRy/ZFSO+bw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176287-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176287: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=35091031329e741b25ed60ac51f4710d75d92310
X-Osstest-Versions-That:
    ovmf=c59230bce1c6973af4190b418971c1d008340cc4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 04:28:29 +0000

flight 176287 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176287/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 35091031329e741b25ed60ac51f4710d75d92310
baseline version:
 ovmf                 c59230bce1c6973af4190b418971c1d008340cc4

Last test of basis   176282  2023-01-30 17:12:20 Z    0 days
Testing same since   176287  2023-01-31 02:42:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chao Li <lichao@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c59230bce1..3509103132  35091031329e741b25ed60ac51f4710d75d92310 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 05:25:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 05:25:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487360.754974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMj9G-00050M-A0; Tue, 31 Jan 2023 05:25:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487360.754974; Tue, 31 Jan 2023 05:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMj9G-00050F-7A; Tue, 31 Jan 2023 05:25:38 +0000
Received: by outflank-mailman (input) for mailman id 487360;
 Tue, 31 Jan 2023 05:25:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMj9E-000505-QT; Tue, 31 Jan 2023 05:25:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMj9E-0007xW-Gj; Tue, 31 Jan 2023 05:25:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMj9E-0000o3-1S; Tue, 31 Jan 2023 05:25:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMj9E-0003hT-11; Tue, 31 Jan 2023 05:25:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7qZQ6X419w/J6mwvxHav2uZE84jPEeAKZ9IlZP/FEm4=; b=FsASXLv7l/9lACPg2Dv46iW257
	bkLbrLWy73APUMr2fuA1ckNFH5fyqnobNGD9wCXlXCgK1kG05IOYyIgKujU0v3H4snt5kM9rJcM1b
	g2tMhLlq4S58H8wECwgy0Tg5rYCP4A14Kqm1XbT2TpX7fNkHb9QGnYmxPPJCAANteVVs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176281-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176281: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl:<job status>:broken:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-amd64-xl:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start.2:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-ping-check-xen:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-xsm:guest-localmigrate/x10:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 05:25:36 +0000

flight 176281 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176281/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-rtds        <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>   broken in 176275
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>          broken in 176275
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>     broken in 176275
 test-amd64-amd64-xl             <job status>                 broken  in 176275
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl          5 host-install(5) broken in 176275 pass in 176281
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 176275 pass in 176281
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 176275 pass in 176281
 test-amd64-amd64-xl-rtds      5 host-install(5)          broken pass in 176275
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 21 guest-start.2 fail in 176261 pass in 176275
 test-amd64-amd64-xl-rtds  10 host-ping-check-xen fail in 176275 pass in 176261
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot  fail in 176275 pass in 176281
 test-amd64-amd64-xl-xsm 20 guest-localmigrate/x10 fail in 176275 pass in 176281
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 176275 pass in 176281
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot            fail pass in 176275
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 176275

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 176275 like 175437
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 176275 like 175447
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 176275 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 176275 n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   40 days
Testing same since   176224  2023-01-26 22:14:43 Z    4 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-rtds broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job test-amd64-amd64-xl broken

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry_safe has next pointing to
    the deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code takes a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 05:31:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 05:31:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487369.754984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMjFB-0006U6-3G; Tue, 31 Jan 2023 05:31:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487369.754984; Tue, 31 Jan 2023 05:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMjFA-0006Tz-WC; Tue, 31 Jan 2023 05:31:45 +0000
Received: by outflank-mailman (input) for mailman id 487369;
 Tue, 31 Jan 2023 05:31:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMjF9-0006Tp-JR; Tue, 31 Jan 2023 05:31:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMjF9-00083k-HE; Tue, 31 Jan 2023 05:31:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMjF8-0000xg-Im; Tue, 31 Jan 2023 05:31:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMjF8-0006rA-IL; Tue, 31 Jan 2023 05:31:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=K82hduVo/a6As+fLiIlHx8X5eswd0YDob37ptkUgDF4=; b=F6dRrnVK12nKcrAPvJyOqmtxYF
	R7u1xgAqe8zw3jThNo12tNpF/rUqYNqOiGX1dzh9FbJEIqk3u9dbg9JLfBJwkm7k87WXmqjadf8VD
	umsl1WzgoyC6DdwsYRBY3HVKKGyjUKYum2V88C2F5ii/q/GyiKSsEJAL/Er9lJDaZGdk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176279-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176279: trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:test-amd64-amd64-pygrub:<job status>:broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:build-armhf:syslog-server:running:regression
    linux-linus:test-amd64-amd64-pygrub:host-install(5):broken:heisenbug
    linux-linus:test-amd64-amd64-freebsd11-amd64:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-armhf:capture-logs:broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=6d796c50f84ca79f1722bb131799e5a5710c4700
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 05:31:42 +0000

flight 176279 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176279/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pygrub       5 host-install(5)          broken pass in 176274
 test-amd64-amd64-freebsd11-amd64  8 xen-boot               fail pass in 176274

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 173462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 176274 n/a

version targeted for testing:
 linux                6d796c50f84ca79f1722bb131799e5a5710c4700
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  115 days
Failing since        173470  2022-10-08 06:21:34 Z  114 days  237 attempts
Testing same since   176274  2023-01-29 23:13:33 Z    1 days    2 attempts

------------------------------------------------------------
3468 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-job test-amd64-amd64-pygrub broken
broken-step build-armhf capture-logs
broken-step build-armhf host-install(4)
broken-step test-amd64-amd64-pygrub host-install(5)

Not pushing.

(No revision log; it would be 533719 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 05:39:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 05:39:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487377.754994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMjMG-0007Mi-Sa; Tue, 31 Jan 2023 05:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487377.754994; Tue, 31 Jan 2023 05:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMjMG-0007Mb-OD; Tue, 31 Jan 2023 05:39:04 +0000
Received: by outflank-mailman (input) for mailman id 487377;
 Tue, 31 Jan 2023 05:39:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gnQn=54=arm.com=Penny.Zheng@srs-se1.protection.inumbo.net>)
 id 1pMjMF-0007MV-8d
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 05:39:03 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2060b.outbound.protection.outlook.com
 [2a01:111:f400:fe16::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8feca083-a129-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 06:39:00 +0100 (CET)
Received: from DB9PR01CA0007.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:1d8::12) by AS8PR08MB9148.eurprd08.prod.outlook.com
 (2603:10a6:20b:57f::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 05:38:50 +0000
Received: from DBAEUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1d8:cafe::34) by DB9PR01CA0007.outlook.office365.com
 (2603:10a6:10:1d8::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 05:38:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT016.mail.protection.outlook.com (100.127.142.204) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.21 via Frontend Transport; Tue, 31 Jan 2023 05:38:49 +0000
Received: ("Tessian outbound b1d3ffe56e73:v132");
 Tue, 31 Jan 2023 05:38:49 +0000
Received: from 720672c8728d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 ACC52A56-79AB-4005-B85D-2F2E48EFE858.1; 
 Tue, 31 Jan 2023 05:38:41 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 720672c8728d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 31 Jan 2023 05:38:41 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com (2603:10a6:208:13c::21)
 by DU2PR08MB10279.eurprd08.prod.outlook.com (2603:10a6:10:46e::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 05:38:36 +0000
Received: from AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab]) by AM0PR08MB4530.eurprd08.prod.outlook.com
 ([fe80::ee26:4b5e:4334:b7ab%3]) with mapi id 15.20.6043.038; Tue, 31 Jan 2023
 05:38:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8feca083-a129-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=klZ/rJ0IriTTIeI07ZLeECqG1SFHXuRi5UiSK2fzgFA=;
 b=nqd7pf3+CdpwwXx8C/LM1SNdTMCrylpfFxrLCdxUKniYbAMQS9h+OzEjdQrwnp9qayXJWqMzhKBf463hikN9JOGeEKrMZg6MVRPJssg1CkXB5SK5ROp+UGv5yqRsiaD8FGmQBWQIWaEJoqGco4JqSnOlJ+kS08h4nnGvf7DN7RM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aqvo2djadPCwf9NiYAb6LmHFWECGU3KiDdIO747FaTRr7TPOy/GtZMjG5b/P5aZ/PsuDgcFvfUwGGp77NZJl/pv1mKcZra2C3pVKxuLJsRfIbvUs0RCPN9TJfo97ySRefpYW68qFyjadjathABIlvr1WYejk3+GKiHCOz8eglp1Vl84LAD1vYudWCOpg6dRcKlC1fmkUY8DGytvprdbqsCeAEXuXOW18bzDzcTXXu+6U2AAyvIn8TSwryo3zIovP0G0IiOfBIDFKnaBoRcLCMSOY/6t44fv65yYQlRbRJW8C8O29i2b0jmCE9Bc90mVbDngj/YO+uHSHjllAxo07yw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=klZ/rJ0IriTTIeI07ZLeECqG1SFHXuRi5UiSK2fzgFA=;
 b=T5B7QjkHjtUWicS2n9Xal9fCu4Zj0MWyKzTeMfzu/EYfCfPfzCv9Ykm52tzS/TKSAZLEKXvnxSMxZh42WfG2UfkEmYbQFHi/A5LpziuVjhriQsnMwj45ptcwGqwqQ5FuRny2rBpxdXeY9BGpnMAyPWQeTQSx1yOFjmetsR4cxXBrL/JaaIq2Ldx2DxvF1r0PkIXvIiCNiCBeLbmIljZdlx/dUnXdDBdm4YCFEIxNOq8v0YkzEH3zFvkWH4tHDn/5dkfa1InFp2AccJt5NCukc+wkoBbMpmSa4v4KHvToRokCifYYoQKgusRj8K93rtaKXWKmtSL+VjOhJR7L/EKP0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=klZ/rJ0IriTTIeI07ZLeECqG1SFHXuRi5UiSK2fzgFA=;
 b=nqd7pf3+CdpwwXx8C/LM1SNdTMCrylpfFxrLCdxUKniYbAMQS9h+OzEjdQrwnp9qayXJWqMzhKBf463hikN9JOGeEKrMZg6MVRPJssg1CkXB5SK5ROp+UGv5yqRsiaD8FGmQBWQIWaEJoqGco4JqSnOlJ+kS08h4nnGvf7DN7RM=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
Thread-Topic: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
Thread-Index:
 AQHZJxAoM+nWeUC5lUC+09eIkdu2V66uAIMAgAb9FDCAAB73gIABdWjwgABDKoCAATGeAA==
Date: Tue, 31 Jan 2023 05:38:35 +0000
Message-ID:
 <AM0PR08MB45305D27CA8353162445AE1EF7D09@AM0PR08MB4530.eurprd08.prod.outlook.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-14-Penny.Zheng@arm.com>
 <23f49916-dd2a-a956-1e6b-6dbb41a8817b@xen.org>
 <AM0PR08MB4530B7AF6EA406882974D528F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <33bddc11-ae1e-b467-32d7-647748d1c627@xen.org>
 <AM0PR08MB453026B268BA9FBEEE970090F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <49329992-3203-78a7-fc61-d6494e37705c@xen.org>
In-Reply-To: <49329992-3203-78a7-fc61-d6494e37705c@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9327828047C15A4D9B35598F7EBB24CA.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB4530:EE_|DU2PR08MB10279:EE_|DBAEUR03FT016:EE_|AS8PR08MB9148:EE_
X-MS-Office365-Filtering-Correlation-Id: d5fc2399-92f4-4e90-a0d7-08db034d6e30
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 fowEuS06mtJewb6CgTXRyQJrVX+4jmhk6y9d4KDgAnPlIpObpdTdf6Ukkjtfec3+bMSxhKTrrpDHXBF1uryP2mACxIKdafd1JId/iFQkzkQYp2gFgc/2UCOWZplqgCSAbs9sBoMxF2IiQG6or9fjuKyIw+vBSit4LBPsF167v50dxXEBk8wvvlONDS/uJs2a/uCpkAiTjQrrR+HUVYu/BhHn/YOpM9OtmWP4ucE/n5yNiLP+4LrCD8Y5FttNlF0LgmHW8IJfjIOJ8hCxwhLJmo7AN3VgFdiucSO5TtWXOye+MG+WCW2sqUSIAxDCXD3+IyW9WpLLrnivWHA1fXs9vBqGAabyKZDN0n/Qq2cdP0xM/HeWnQ1Sz7zcnc1utcLBsTYsEGH4qDODhN9tgRlJYbYjDmw8FY9498nTXd3V065ipWLsoldhieL8qSwfX0Rhoe8ZDAB9CrM5KxotUZ4WaPoM6i0S0fnwlYeXEQTRep8DAmMWbIGvtwXT2hCpl7WCtNgLjs8rbmePgtfN4CRsz7OoKL9khz1DwHCH/bdHcY+MxBNuoRTqEDubhJ9z/g3Ta5/JyVigjxXfIxlIAlIKFpEk66KEVWCwa5dL3FmdUWoVCcB1OAchHYI+Dv7Szrc+A4Haj+D5m/Hqg6HS71BhW5nPZPgvPmfa5Rk8WIgk5jKizKWc2Z+wEgW90Edq6US7bs56dQJuKBu8PqZ7pcmiPFDQrEftUBaIemDSFXZvtMA=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB4530.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(346002)(376002)(136003)(366004)(396003)(451199018)(86362001)(66556008)(38070700005)(33656002)(38100700002)(122000001)(110136005)(52536014)(5660300002)(316002)(66476007)(8676002)(4326008)(54906003)(8936002)(76116006)(66946007)(66446008)(64756008)(2906002)(55016003)(30864003)(41300700001)(83380400001)(478600001)(7696005)(6506007)(26005)(71200400001)(9686003)(186003)(53546011)(21314003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB10279
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	aae59db9-8319-4c68-14b6-08db034d659b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KJ2O51tz8waBD6z8j0c7JPDesOpmhcJGg3glQD4/lBJI90nQNlhFP3Rdz47FR+muZwRa8coBE7mctieoaYkDTkG+HHwWvI+OUemtTwOnW73USWsMk65Lu8qqlK8uM7SgPbhU3l/1tcuYVyjXn0PCrxJ05s7R6SWSJ7e9/u/spaiVEH1+YRO2S6aK5X7hJWijaFeFHRLDYaXzNPZE2jPuqm0+ngCoPRWySqCy0k/xtftg/Krco0317bVVFYGyqJCjVcD9w85a+jfdrj+XlKTmx+JKLp36KUf1LFcm4YizmRjM3X07x3/bIVE+hCPBVYZ8OKHv074v/taZWFKB1pM5peFnZVk9owQu5xbFzvEewqyk29qMS6JwOS9JlzuoHNB/iaWHmQw2o1i5P6BXMQk5MrpVvD+oBDSNGhxkzyox4e3tfrwQ+hgBhGPcPxVaLIk9KO/gDW3CVqFMvc1DYW/3reWEk+oUhtJomxJaNeejHLCYbpZmi5im4sA3aTDIbP0lK2d+7+gRvoEZF9USg0sQX4k88J8nAgznk7wd66A3Fu/u6BA2JnMqMD6q3F+AjdKaXsfoMO/hgjog+ahkZGqqDukPvwDty2uOlaNAhJs4f92saMv4rN+06HfDvmt120ZReVXD6z0OKu2Id+Z+AFOZA4FNujxjhpdqRkZbo8oBh69EzOYStsVk58ez2ebJDJWGEzUZ9Dcp8r2ACrzaYSjiI8IjiKc3MwVIj8q8fOINQvg=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230025)(4636009)(396003)(136003)(376002)(39860400002)(346002)(451199018)(36840700001)(40470700004)(46966006)(8936002)(2906002)(4326008)(8676002)(336012)(478600001)(30864003)(40460700003)(33656002)(5660300002)(186003)(6506007)(53546011)(107886003)(26005)(9686003)(7696005)(52536014)(55016003)(40480700001)(86362001)(47076005)(41300700001)(82310400005)(356005)(81166007)(83380400001)(70206006)(70586007)(82740400003)(316002)(36860700001)(110136005)(54906003)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 05:38:49.8946
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d5fc2399-92f4-4e90-a0d7-08db034d6e30
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9148

SGkgSnVsaWVuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4NCj4gU2VudDogTW9uZGF5LCBKYW51YXJ5IDMwLCAyMDIz
IDY6MDAgUE0NCj4gVG86IFBlbm55IFpoZW5nIDxQZW5ueS5aaGVuZ0Bhcm0uY29tPjsgeGVuLWRl
dmVsQGxpc3RzLnhlbnByb2plY3Qub3JnDQo+IENjOiBXZWkgQ2hlbiA8V2VpLkNoZW5AYXJtLmNv
bT47IFN0ZWZhbm8gU3RhYmVsbGluaQ0KPiA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz47IEJlcnRy
YW5kIE1hcnF1aXMgPEJlcnRyYW5kLk1hcnF1aXNAYXJtLmNvbT47DQo+IFZvbG9keW15ciBCYWJj
aHVrIDxWb2xvZHlteXJfQmFiY2h1a0BlcGFtLmNvbT4NCj4gU3ViamVjdDogUmU6IFtQQVRDSCB2
MiAxMy80MF0geGVuL21wdTogaW50cm9kdWNlIHVuaWZpZWQgZnVuY3Rpb24NCj4gc2V0dXBfZWFy
bHlfdWFydCB0byBtYXAgZWFybHkgVUFSVA0KPiANCj4gDQo+IA0KPiBPbiAzMC8wMS8yMDIzIDA2
OjI0LCBQZW5ueSBaaGVuZyB3cm90ZToNCj4gPiBIaSwgSnVsaWVuDQo+IA0KPiBIaSBQZW5ueSwN
Cj4gDQo+ID4+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+ID4+IEZyb206IEp1bGllbiBH
cmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+ID4+IFNlbnQ6IFN1bmRheSwgSmFudWFyeSAyOSwgMjAy
MyAzOjQzIFBNDQo+ID4+IFRvOiBQZW5ueSBaaGVuZyA8UGVubnkuWmhlbmdAYXJtLmNvbT47IHhl
bi0NCj4gZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmcNCj4gPj4gQ2M6IFdlaSBDaGVuIDxXZWku
Q2hlbkBhcm0uY29tPjsgU3RlZmFubyBTdGFiZWxsaW5pDQo+ID4+IDxzc3RhYmVsbGluaUBrZXJu
ZWwub3JnPjsgQmVydHJhbmQgTWFycXVpcw0KPiA+PiA8QmVydHJhbmQuTWFycXVpc0Bhcm0uY29t
PjsgVm9sb2R5bXlyIEJhYmNodWsNCj4gPj4gPFZvbG9keW15cl9CYWJjaHVrQGVwYW0uY29tPg0K
PiA+PiBTdWJqZWN0OiBSZTogW1BBVENIIHYyIDEzLzQwXSB4ZW4vbXB1OiBpbnRyb2R1Y2UgdW5p
ZmllZCBmdW5jdGlvbg0KPiA+PiBzZXR1cF9lYXJseV91YXJ0IHRvIG1hcCBlYXJseSBVQVJUDQo+
ID4+DQo+ID4+IEhpIFBlbm55LA0KPiA+Pg0KPiA+PiBPbiAyOS8wMS8yMDIzIDA2OjE3LCBQZW5u
eSBaaGVuZyB3cm90ZToNCj4gPj4+PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiA+Pj4+
IEZyb206IEp1bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+ID4+Pj4gU2VudDogV2VkbmVz
ZGF5LCBKYW51YXJ5IDI1LCAyMDIzIDM6MDkgQU0NCj4gPj4+PiBUbzogUGVubnkgWmhlbmcgPFBl
bm55LlpoZW5nQGFybS5jb20+OyB4ZW4tDQo+ID4+IGRldmVsQGxpc3RzLnhlbnByb2plY3Qub3Jn
DQo+ID4+Pj4gQ2M6IFdlaSBDaGVuIDxXZWkuQ2hlbkBhcm0uY29tPjsgU3RlZmFubyBTdGFiZWxs
aW5pDQo+ID4+Pj4gPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+OyBCZXJ0cmFuZCBNYXJxdWlzDQo+
ID4+Pj4gPEJlcnRyYW5kLk1hcnF1aXNAYXJtLmNvbT47IFZvbG9keW15ciBCYWJjaHVrDQo+ID4+
Pj4gPFZvbG9keW15cl9CYWJjaHVrQGVwYW0uY29tPg0KPiA+Pj4+IFN1YmplY3Q6IFJlOiBbUEFU
Q0ggdjIgMTMvNDBdIHhlbi9tcHU6IGludHJvZHVjZSB1bmlmaWVkIGZ1bmN0aW9uDQo+ID4+Pj4g
c2V0dXBfZWFybHlfdWFydCB0byBtYXAgZWFybHkgVUFSVA0KPiA+Pj4+DQo+ID4+Pj4gSGkgUGVu
eSwNCj4gPj4+DQo+ID4+PiBIaSBKdWxpZW4sDQo+ID4+Pg0KPiA+Pj4+DQo+ID4+Pj4gT24gMTMv
MDEvMjAyMyAwNToyOCwgUGVubnkgWmhlbmcgd3JvdGU6DQo+ID4+Pj4+IEluIE1NVSBzeXN0ZW0s
IHdlIG1hcCB0aGUgVUFSVCBpbiB0aGUgZml4bWFwICh3aGVuIGVhcmx5cHJpbnRrIGlzDQo+ID4+
IHVzZWQpLg0KPiA+Pj4+PiBIb3dldmVyIGluIE1QVSBzeXN0ZW0sIHdlIG1hcCB0aGUgVUFSVCB3
aXRoIGEgdHJhbnNpZW50IE1QVQ0KPiA+PiBtZW1vcnkNCj4gPj4+Pj4gcmVnaW9uLg0KPiA+Pj4+
Pg0KPiA+Pj4+PiBTbyB3ZSBpbnRyb2R1Y2UgYSBuZXcgdW5pZmllZCBmdW5jdGlvbiBzZXR1cF9l
YXJseV91YXJ0IHRvIHJlcGxhY2UNCj4gPj4+Pj4gdGhlIHByZXZpb3VzIHNldHVwX2ZpeG1hcC4N
Cj4gPj4+Pj4NCj4gPj4+Pj4gU2lnbmVkLW9mZi1ieTogUGVubnkgWmhlbmcgPHBlbm55LnpoZW5n
QGFybS5jb20+DQo+ID4+Pj4+IFNpZ25lZC1vZmYtYnk6IFdlaSBDaGVuIDx3ZWkuY2hlbkBhcm0u
Y29tPg0KPiA+Pj4+PiAtLS0NCj4gPj4+Pj4gICAgIHhlbi9hcmNoL2FybS9hcm02NC9oZWFkLlMg
ICAgICAgICAgICAgICB8ICAyICstDQo+ID4+Pj4+ICAgICB4ZW4vYXJjaC9hcm0vYXJtNjQvaGVh
ZF9tbXUuUyAgICAgICAgICAgfCAgNCArLQ0KPiA+Pj4+PiAgICAgeGVuL2FyY2gvYXJtL2FybTY0
L2hlYWRfbXB1LlMgICAgICAgICAgIHwgNTINCj4gPj4+PiArKysrKysrKysrKysrKysrKysrKysr
KysrDQo+ID4+Pj4+ICAgICB4ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vZWFybHlfcHJpbnRrLmgg
fCAgMSArDQo+ID4+Pj4+ICAgICA0IGZpbGVzIGNoYW5nZWQsIDU2IGluc2VydGlvbnMoKyksIDMg
deletions(-)
> >>>>>
> >>>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> >>>>> index 7f3f973468..a92883319d 100644
> >>>>> --- a/xen/arch/arm/arm64/head.S
> >>>>> +++ b/xen/arch/arm/arm64/head.S
> >>>>> @@ -272,7 +272,7 @@ primary_switched:
> >>>>>              * afterwards.
> >>>>>              */
> >>>>>             bl    remove_identity_mapping
> >>>>> -        bl    setup_fixmap
> >>>>> +        bl    setup_early_uart
> >>>>>     #ifdef CONFIG_EARLY_PRINTK
> >>>>>             /* Use a virtual address to access the UART. */
> >>>>>             ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
> >>>>> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
> >>>>> index b59c40495f..a19b7c873d 100644
> >>>>> --- a/xen/arch/arm/arm64/head_mmu.S
> >>>>> +++ b/xen/arch/arm/arm64/head_mmu.S
> >>>>> @@ -312,7 +312,7 @@ ENDPROC(remove_identity_mapping)
> >>>>>      *
> >>>>>      * Clobbers x0 - x3
> >>>>>      */
> >>>>> -ENTRY(setup_fixmap)
> >>>>> +ENTRY(setup_early_uart)
> >>>>
> >>>> This function is doing more than enable the early UART. It also
> >>>> sets up the fixmap even when earlyprintk is not configured.
> >>>
> >>> True, true.
> >>> I've thoroughly read the MMU implementation of setup_fixmap, and
> >>> I'll try to split it up.
> >>>
> >>>>
> >>>> I am not entirely sure what could be the name. Maybe this needs to
> >>>> be split further.
> >>>>
> >>>>>     #ifdef CONFIG_EARLY_PRINTK
> >>>>>             /* Add UART to the fixmap table */
> >>>>>             ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
> >>>>> @@ -325,7 +325,7 @@ ENTRY(setup_fixmap)
> >>>>>             dsb   nshst
> >>>>>
> >>>>>             ret
> >>>>> -ENDPROC(setup_fixmap)
> >>>>> +ENDPROC(setup_early_uart)
> >>>>>
> >>>>>     /* Fail-stop */
> >>>>>     fail:   PRINT("- Boot failed -\r\n")
> >>>>> diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
> >>>>> index e2ac69b0cc..72d1e0863d 100644
> >>>>> --- a/xen/arch/arm/arm64/head_mpu.S
> >>>>> +++ b/xen/arch/arm/arm64/head_mpu.S
> >>>>> @@ -18,8 +18,10 @@
> >>>>>     #define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
> >>>>>     #define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
> >>>>>     #define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
> >>>>> +#define REGION_DEVICE_PRBAR     0x22    /* SH=10 AP=00 XN=10 */
> >>>>>
> >>>>>     #define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
> >>>>> +#define REGION_DEVICE_PRLAR     0x09    /* NS=0 ATTR=100 EN=1 */
> >>>>>
> >>>>>     /*
> >>>>>      * Macro to round up the section address to be PAGE_SIZE aligned
> >>>>> @@ -334,6 +336,56 @@ ENTRY(enable_mm)
> >>>>>         ret
> >>>>>     ENDPROC(enable_mm)
> >>>>>
> >>>>> +/*
> >>>>> + * Map the early UART with a new transient MPU memory region.
> >>>>> + *
> >>>>
> >>>> Missing "Inputs: "
> >>>>
> >>>>> + * x27: region selector
> >>>>> + * x28: prbar
> >>>>> + * x29: prlar
> >>>>> + *
> >>>>> + * Clobbers x0 - x4
> >>>>> + *
> >>>>> + */
> >>>>> +ENTRY(setup_early_uart)
> >>>>> +#ifdef CONFIG_EARLY_PRINTK
> >>>>> +    /* stack LR as write_pr will be called later like nested function */
> >>>>> +    mov   x3, lr
> >>>>> +
> >>>>> +    /*
> >>>>> +     * MPU region for early UART is a transient region, since it will be
> >>>>> +     * replaced by specific device memory layout when FDT gets parsed.
> >>>>
> >>>> I would rather not mention "FDT" here because this code is
> >>>> independent of the firmware table used.
> >>>>
> >>>> However, any reason to use a transient region rather than the one
> >>>> that will be used for the UART driver?
> >>>>
> >>>
> >>> We don't want to define an MPU region for each device driver. It will
> >>> exhaust MPU regions very quickly.
> >>
> >> What is the usual size of an MPU?
> >>
> >> However, even if you don't want to define one for every device, it
> >> still seems sensible to define a fixed temporary one for the
> >> early UART as this would simplify the assembly code.
> >>
> >
> > We will add fixed MPU regions for the Xen static heap in function setup_mm.
> > If we put the early uart region in front (fixed region place), it will
> > leave holes later after removing it.
>
> Why? The entry could be re-used to map the devices entry.
>
> >
> >>
> >>> In commit "[PATCH v2 28/40] xen/mpu: map boot module section in MPU
> >>> system",
> >>
> >> Did you mean patch #27?
> >>
> >>> A new FDT property `mpu,device-memory-section` will be introduced
> >>> for users to statically configure the whole system device memory
> >>> with the least number of memory regions in Device Tree.
> >>> This section shall cover all devices that will be used in Xen, like
> >>> `UART`, `GIC`, etc.
> >>> For FVP_BaseR_AEMv8R, we have the following definition:
> >>> ```
> >>> mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>;
> >>> ```
> >>
> >> I am a bit worried this will be a recipe for mistakes. Do you have an
> >> example where the MPU will be exhausted if we reserve some entries
> >> for each device (or some)?
> >>
> >
> > Yes, we have an internal platform where MPU regions are only 16.
>
> Internal as in silicon (e.g. real) or virtual platform?
>

Sorry, that we met this kind of platform is all I'm allowed to say.
Due to NDA, I couldn't tell more.

> > It will almost eat up
> > all MPU regions based on the current implementation, when launching two
> > guests on the platform.
> >
> > Let's calculate the most simple scenario:
> > The following is the MPU-related static configuration in device tree:
> > ```
> >          mpu,boot-module-section = <0x0 0x10000000 0x0 0x10000000>;
> >          mpu,guest-memory-section = <0x0 0x20000000 0x0 0x40000000>;
> >          mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>;
> >          mpu,shared-memory-section = <0x0 0x7a000000 0x0 0x02000000>;
> >
> >          xen,static-heap = <0x0 0x60000000 0x0 0x1a000000>;
> > ```
> > At the end of the boot, before reshuffling, the MPU region usage will be as follows:
> > 7 (defined in assembly) + FDT(early_fdt_map) + 5 (at least one region for each "mpu,xxx-memory-section").
>
> Can you list the 7 sections? Is it including the init section?
>

Yes, I'll draw the layout for you:
'''
Xen MPU Map before reorg:

xen_mpumap[0] : Xen text
xen_mpumap[1] : Xen read-only data
xen_mpumap[2] : Xen read-only after init data
xen_mpumap[3] : Xen read-write data
xen_mpumap[4] : Xen BSS
xen_mpumap[5] : Xen static heap
......
xen_mpumap[max_xen_mpumap - 7]: Static shared memory section
xen_mpumap[max_xen_mpumap - 6]: Boot Module memory section (kernel, initramfs, etc)
xen_mpumap[max_xen_mpumap - 5]: Device memory section
xen_mpumap[max_xen_mpumap - 4]: Guest memory section
xen_mpumap[max_xen_mpumap - 3]: Early FDT
xen_mpumap[max_xen_mpumap - 2]: Xen init data
xen_mpumap[max_xen_mpumap - 1]: Xen init text

At the end of boot, function init_done will do the reorg and boot-only region clean-up:

Xen MPU Map after reorg (idle vcpu):

xen_mpumap[0] : Xen text
xen_mpumap[1] : Xen read-only data
xen_mpumap[2] : Xen read-only after init data
xen_mpumap[3] : Xen read-write data
xen_mpumap[4] : Xen BSS
xen_mpumap[5] : Xen static heap
xen_mpumap[6] : Guest memory section
xen_mpumap[7] : Device memory section
xen_mpumap[8] : Static shared memory section

Xen MPU Map at runtime (guest vcpu):

xen_mpumap[0] : Xen text
xen_mpumap[1] : Xen read-only data
xen_mpumap[2] : Xen read-only after init data
xen_mpumap[3] : Xen read-write data
xen_mpumap[4] : Xen BSS
xen_mpumap[5] : Xen static heap
xen_mpumap[6] : Guest memory
xen_mpumap[7] : vGIC map
xen_mpumap[8] : vPL011 map
xen_mpumap[9] : Passthrough device map (UART, etc)
xen_mpumap[10] : Static shared memory section
'''

> >
> > That will be already at least 13 MPU regions ;\.
>
> The section I am the most concerned about is mpu,device-memory-section
> because it would likely mean that all the devices will be mapped in Xen.
> Is there any risk that the guest may use a different memory attribute?
>

Yes, in the current implementation, the per-domain vgic, vpl011, and passthrough device map
will be individually added into the per-domain P2M mapping; then when switching into a guest
vcpu from the xen idle vcpu, the device memory section will be replaced by the vgic, vpl011,
and passthrough device maps.

> On the platform you are describing, what are the devices you are expecting
> to be used by Xen?
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 05:52:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 05:52:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487383.755003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMjYk-0001VK-4k; Tue, 31 Jan 2023 05:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487383.755003; Tue, 31 Jan 2023 05:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMjYk-0001VC-2A; Tue, 31 Jan 2023 05:51:58 +0000
Received: by outflank-mailman (input) for mailman id 487383;
 Tue, 31 Jan 2023 05:51:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMjYj-0001V3-Oh; Tue, 31 Jan 2023 05:51:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMjYj-000070-Ku; Tue, 31 Jan 2023 05:51:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMjYj-0001a1-C4; Tue, 31 Jan 2023 05:51:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMjYj-0005bV-BL; Tue, 31 Jan 2023 05:51:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JjtkaNJg4l8JgqbhWgAOuRaDu49oF1azn4bJI5PdD6A=; b=56EWh797cWPgK+iAu8HXakmBgw
	G5rFrqu3KhBNR4Trap7mV9ptQWw0z0VuPcK7NayuYXosDKoxjWPrJF84ZYsiK/jJz+fl84vbaxTJE
	uro8tLFQfArZ3+0cxMu9WeRtJKU/FV3I6FoaVSIFGxqqPMdK9G65iFNCsBh1KTjdefRA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176288-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176288: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 05:51:57 +0000

flight 176288 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176288/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    4 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (default value). This
    causes kernel_zimage_place() to treat the binary (contained within uImage)
    as position independent executable. Thus, it loads it at an incorrect
    address.
    
    The correct approach would be to read "uimage.load" and set
    info->zimage.start. This will ensure that the binary is loaded at the
    correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
    address).
    
    If user provides load address (ie "uimage.load") as 0x0, then the image is
    treated as position independent executable. Xen can load such an image at
    any address it considers appropriate. A position independent executable
    cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
    point address set in the uImage header. With this commit, Xen will use them.
    This makes the behavior of Xen consistent with uboot for uimage headers.
    
    Users who want to use Xen with statically partitioned domains, can provide
    non zero load address and entry address for the dom0/domU kernel. It is
    required that the load and entry address provided must be within the memory
    region allocated by Xen.
    
    A deviation from uboot behaviour is that we consider load address == 0x0,
    to denote that the image supports position independent execution. This
    is to make the behavior consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 07:40:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 07:40:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487425.755031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMlFs-0007TF-7D; Tue, 31 Jan 2023 07:40:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487425.755031; Tue, 31 Jan 2023 07:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMlFs-0007T8-4T; Tue, 31 Jan 2023 07:40:36 +0000
Received: by outflank-mailman (input) for mailman id 487425;
 Tue, 31 Jan 2023 07:40:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMlFq-0007Sy-Px; Tue, 31 Jan 2023 07:40:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMlFq-0002io-Lw; Tue, 31 Jan 2023 07:40:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMlFq-0005ts-E2; Tue, 31 Jan 2023 07:40:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMlFq-00012k-Db; Tue, 31 Jan 2023 07:40:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EwNNDeGWhOPZKqryfxpDNRRd8NZIS9pfjKJ5tuLPnXQ=; b=iwLQccbfThBAHF4dd1N3+NDy+m
	grNIB0SyxmKwmoCqCUI8x6Qb0MGgC5kFrl7nKajxzLPksNtXRK1kCsswzO4+cCa07sR64WSjNGp02
	Baak+HS25GaS0iCURcDnaS+yAaP87H0gDQVkDGcL+jGW3WnJ9DE1flXPrUpo+w19V9fo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176292-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176292: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 07:40:34 +0000

flight 176292 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176292/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    4 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (default value). This
    causes kernel_zimage_place() to treat the binary (contained within uImage)
    as position independent executable. Thus, it loads it at an incorrect
    address.
    
    The correct approach would be to read "uimage.load" and set
    info->zimage.start. This will ensure that the binary is loaded at the
    correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
    address).
    
    If user provides load address (ie "uimage.load") as 0x0, then the image is
    treated as position independent executable. Xen can load such an image at
    any address it considers appropriate. A position independent executable
    cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
    point address set in the uImage header. With this commit, Xen will use them.
    This makes the behavior of Xen consistent with uboot for uimage headers.
    
    Users who want to use Xen with statically partitioned domains, can provide
    non zero load address and entry address for the dom0/domU kernel. It is
    required that the load and entry address provided must be within the memory
    region allocated by Xen.
    
    A deviation from uboot behaviour is that we consider load address == 0x0,
    to denote that the image supports position independent execution. This
    is to make the behavior consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 08:43:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 08:43:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487441.755042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMmE4-00072J-8a; Tue, 31 Jan 2023 08:42:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487441.755042; Tue, 31 Jan 2023 08:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMmE4-00072C-3P; Tue, 31 Jan 2023 08:42:48 +0000
Received: by outflank-mailman (input) for mailman id 487441;
 Tue, 31 Jan 2023 08:42:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nz19=54=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMmE2-000726-RN
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 08:42:47 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2058.outbound.protection.outlook.com [40.107.8.58])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ae54432-a143-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 09:42:45 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9467.eurprd04.prod.outlook.com (2603:10a6:10:35b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 08:42:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.038; Tue, 31 Jan 2023
 08:42:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ae54432-a143-11ed-933c-83870f6b2ba8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jz67N8T/SrEV1gQ7eGK/t2+x2W3qa+bx7VOp83KRsgopLHvVLd7K2ZfSwhZ8WTYGuZub9sRfIXh3VxeVV1xFiR005uonbVJtXLlpXck/vlS0g4iCXfuouIdtth4HP0p33NingqMzXpAI8nj2mNYRev80PIJK69eB6VXUPPlH1OU3VMSdsa+85GpzspRKsOgN/EfnlQ4DI0smglGp6mzKeAz3al5WGaGY6ONH6Y81BUb0xSBNH76ltyy7BT3RY2bw5TxF8hAWXKByBCFg5Cx8AmJwh4CtKFHeu0qfTZE9UYjYti+O8xYMs7jg/vf3ppNjJUun321f/THZl+t5kyHqfw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=BqJZpumkOVkB1VVPHvU9hNvqmPP8gt0FgzEtAdRZ6EM=;
 b=LKfB+Fh2ZEzzvjUeqGVkRyy2VW16xiSNpA6qfwvq5pnT9y5IXucR4SAhvs7R7ngIncNspHtEhHh9of7hI7HucGkNi6IZF8A56NVcWIvcTk9zotaKnT2zfVDJdmFnzUi7dw1MRqvlKr6F1Pt1tBdzUIeJ4g91GP7sW1pdjjYYvXXsThmifNWtZoONjRctxrTBFXag8Rp4Q9BUG8aWumlPjLjNq8hbdQLy7X+S50ACFA5EvT6QLQeAowlAbCwhk6Cn/m8vGfbYhYsOXUeohkdyG9M/96Um+pUNzpdFM4SuKdHY0eaCuDRe8q/MhNA/+4YVGTw4iQqFrPm8j0BzxRXxew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BqJZpumkOVkB1VVPHvU9hNvqmPP8gt0FgzEtAdRZ6EM=;
 b=1mNS6JxlBnLeIj0upLdKXCEnxb4fDMLNRZlAs2blUvq6xsSeNI0HOOe+PAoTMLQQSVZSklaFd89HC9nyCrgBlgGdLtAsieeZhg9O4EQwDMsar8EQ3p16b6DKczBc6DH53jy00t7QJGu8Ujpui8yV7Pt4bdbUvDdBZsHZgLWa7yUDFPSqUX6KEeTDP/inrj4QkMcpHXUx7jJCgi/IXoIRvdhcH25LJ7fZ2RKkKYCdBTUObXSUwABPDLTQZvCCxI0ODwxfkOGi5qir70MQtepZYJ7wf5gRZM6J8u88HbpewUkMu1FQ9iIQzkJIYuBTf978X8lDqNOEtM45QFESxf4NiA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <42feefa9-d969-4d13-e1d9-4b68635013d9@suse.com>
Date: Tue, 31 Jan 2023 09:42:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH v4] xen/x86: public: add TSC defines for cpuid leaf 4
Content-Language: en-US
To: Krister Johansen <kjlx@templeofstupid.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 David Reaver <me@davidreaver.com>, xen-devel@lists.xenproject.org
References: <bfccbe22-e5e4-40d3-aded-639d812bfa08@suse.com>
 <20230130174459.GB2001@templeofstupid.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230130174459.GB2001@templeofstupid.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 30.01.2023 18:44, Krister Johansen wrote:
> Cpuid leaf 4 contains information about the state of the tsc: its
> mode, plus some additional details.  A commit that is queued for
> Linux would like to use this to determine whether the tsc mode has
> been set to 'no emulation', in order to make some decisions about
> which clocksource is more reliable.
> 
> Expose this information in the public API headers so that the new
> defines can subsequently be imported into Linux and used there.
> 
> Link: https://lore.kernel.org/xen-devel/eda8d9f2-3013-1b68-0df8-64d7f13ee35e@suse.com/
> Link: https://lore.kernel.org/xen-devel/0835453d-9617-48d5-b2dc-77a2ac298bad@oracle.com/
> Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>

As said:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
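For reference, the leaf-4 EAX bits the patch describes can be decoded as below. This is an illustrative sketch only: the bit positions follow the layout documented for Xen's time leaf (sub-leaf 0) in the public cpuid header, but the macro names here are stand-ins; the authoritative identifiers are whatever the patch actually adds to the public headers.

```c
#include <stdint.h>

/*
 * Illustrative bit layout of EAX in Xen's cpuid "leaf 4" (0x40000x03),
 * sub-leaf 0.  These names are placeholders modeled on the public
 * header's description, not necessarily the ones the patch introduces.
 */
#define XEN_CPUID_TSC_EMULATED       (1u << 0)  /* tsc is emulated */
#define XEN_CPUID_HOST_TSC_RELIABLE  (1u << 1)  /* host tsc is safe */
#define XEN_CPUID_RDTSCP_AVAIL       (1u << 2)  /* RDTSCP available */

/*
 * A guest would obtain 'eax' via CPUID at <hypervisor base> + 3; this
 * helper only decodes an already-fetched value: tsc mode is
 * 'no emulation' iff the emulation bit is clear.
 */
static int tsc_mode_is_no_emulation(uint32_t eax)
{
    return !(eax & XEN_CPUID_TSC_EMULATED);
}
```

A kernel could use this check to prefer the raw TSC as clocksource when the hypervisor is not intercepting RDTSC.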


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:05:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 09:05:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487445.755051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMma6-0001kS-0l; Tue, 31 Jan 2023 09:05:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487445.755051; Tue, 31 Jan 2023 09:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMma5-0001kL-TU; Tue, 31 Jan 2023 09:05:33 +0000
Received: by outflank-mailman (input) for mailman id 487445;
 Tue, 31 Jan 2023 09:05:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nz19=54=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMma4-0001kF-G2
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 09:05:32 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2088.outbound.protection.outlook.com [40.107.13.88])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 69ad3ef1-a146-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 10:05:31 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8366.eurprd04.prod.outlook.com (2603:10a6:102:1be::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 09:05:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.038; Tue, 31 Jan 2023
 09:05:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69ad3ef1-a146-11ed-933c-83870f6b2ba8
Message-ID: <29687fb9-eeaa-4501-61aa-3a3e3cbbf156@suse.com>
Date: Tue, 31 Jan 2023 10:05:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] x86: idle domains don't have a domain-page mapcache
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <139bab3a-fd3e-fec2-b7b4-f63dd9f9439a@suse.com>
 <2edca838-2616-7434-62d7-a69dd9e00797@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2edca838-2616-7434-62d7-a69dd9e00797@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 30.01.2023 20:00, Julien Grall wrote:
> On 05/01/2023 11:09, Jan Beulich wrote:
>> --- a/xen/arch/x86/domain_page.c
>> +++ b/xen/arch/x86/domain_page.c
>> @@ -28,8 +28,11 @@ static inline struct vcpu *mapcache_curr
>>       /*
>>        * When current isn't properly set up yet, this is equivalent to
>>        * running in an idle vCPU (callers must check for NULL).
>> +     *
>> +     * Non-PV domains don't have any mapcache.  For idle domains (which
>> +     * appear to be PV but also have no mapcache) see below.
>>        */
>> -    if ( !v )
>> +    if ( !v || !is_pv_vcpu(v) )
>>           return NULL;
> 
> I am trying to figure out the implications of this change for my patch
> [1]. Is this expected to go in beforehand?

The change here may not be necessary at all with yours in place. The main
reason I sent this patch was to augment my comments on your patch. If
yours gets adjusted to account for the (perhaps just theoretical) issues
(if just theoretical, then description adjustments may be all that's
needed), then I may be able to drop this instead of seeing whether I can
convince myself that some or all of Andrew's requests can actually safely
be fulfilled.

Jan
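The guard being discussed can be sketched in isolation as follows. This is a simplified model, not the real `mapcache_current_vcpu()` from xen/arch/x86/domain_page.c: the struct and its fields are stand-ins, and the real code derives the PV/idle properties from the domain, not from per-vCPU flags.

```c
#include <stddef.h>

/* Stand-in for struct vcpu; the real checks use is_pv_vcpu() and the
 * domain's idle property. */
struct vcpu { int is_pv; int is_idle; };

static struct vcpu *mapcache_vcpu(struct vcpu *v)
{
    /* No current vCPU yet, or a non-PV domain: no mapcache at all. */
    if ( v == NULL || !v->is_pv )
        return NULL;
    /* Idle domains appear to be PV but also have no mapcache. */
    if ( v->is_idle )
        return NULL;
    return v;
}
```

The point of the patch is the middle clause: without it, a non-PV (or idle) vCPU would fall through to mapcache handling that assumes PV state.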


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:11:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 09:11:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487452.755061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMmg2-0003EA-Pu; Tue, 31 Jan 2023 09:11:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487452.755061; Tue, 31 Jan 2023 09:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMmg2-0003E3-MP; Tue, 31 Jan 2023 09:11:42 +0000
Received: by outflank-mailman (input) for mailman id 487452;
 Tue, 31 Jan 2023 09:11:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nz19=54=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMmg1-0003Dx-Qq
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 09:11:41 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20621.outbound.protection.outlook.com
 [2a01:111:f400:fe16::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 45a4f5ec-a147-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 10:11:40 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7458.eurprd04.prod.outlook.com (2603:10a6:20b:1c4::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 09:11:38 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.038; Tue, 31 Jan 2023
 09:11:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45a4f5ec-a147-11ed-933c-83870f6b2ba8
Message-ID: <d645ef07-f30b-ff6b-ffc0-7ef76da63285@suse.com>
Date: Tue, 31 Jan 2023 10:11:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH 05/22] x86/srat: vmap the pages for acpi_slit
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-6-julien@xen.org>
 <ca02a313-0fa2-8041-3e8f-d467c3e99fb6@suse.com>
 <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
 <41de340c-b5ad-6c30-816f-1ce1ddc98069@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <41de340c-b5ad-6c30-816f-1ce1ddc98069@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 30.01.2023 20:27, Julien Grall wrote:
> Hi Jan,
> 
> On 23/12/2022 11:31, Julien Grall wrote:
>> On 20/12/2022 15:30, Jan Beulich wrote:
>>> On 16.12.2022 12:48, Julien Grall wrote:
>>>> From: Hongyan Xia <hongyxia@amazon.com>
>>>>
>>>> This avoids the assumption that boot pages are in the direct map.
>>>>
>>>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> However, ...
>>>
>>>> --- a/xen/arch/x86/srat.c
>>>> +++ b/xen/arch/x86/srat.c
>>>> @@ -139,7 +139,8 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
>>>>           return;
>>>>       }
>>>>       mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>>> -    acpi_slit = mfn_to_virt(mfn_x(mfn));
>>>> +    acpi_slit = vmap_contig_pages(mfn, PFN_UP(slit->header.length));
>>>
>>> ... with the increased use of vmap space the VA range used will need
>>> growing. And that's perhaps better done ahead of time than late.
>>
>> I will have a look to increase the vmap().
> 
> I have started to look at this. The current size of VMAP is 64GB.
> 
> At least in the setup I have, I didn't see any particular issue with the
> existing size of the vmap. Looking through the history, the last time it
> was bumped was by one of your commits (see b0581b9214d2), but it is not
> clear what the setup was.
> 
> Given I don't have a setup where the VMAP is exhausted, it is not clear 
> to me what would be an acceptable bump.
> 
> AFAICT, in PML4 slot 261, we still have 62GB reserved for future use. So
> I was thinking of adding an extra 32GB, which would bring the VMAP to
> 96GB. This is just a number that doesn't use all the reserved space but
> is still a power of two.
> 
> Are you fine with that?

Hmm. Leaving aside that 96Gb isn't a power of two, my comment saying
"ahead of time" was under the (wrong, as it now looks) impression that
the goal of your series was to truly do away with the directmap. I was
therefore expecting a much larger bump in size, perhaps moving the
vmap area into space presently occupied by the directmap. IOW for the
time being, with no _significant_ increase of space consumption, we
may well be fine with the 64Gb range.

Jan
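The power-of-two aside is easy to verify with the usual bit trick (sizes treated as plain integers in GiB): 96 = 2^5 * 3, so it is not a power of two, while 64 and 128 are.

```c
#include <stdint.h>

/* A number is a power of two iff it is non-zero and clearing its
 * lowest set bit (n & (n - 1)) leaves nothing. */
static int is_power_of_two(uint64_t n)
{
    return n != 0 && (n & (n - 1)) == 0;
}
```

So the candidate bumps that keep the range a power of two would be 128GiB or beyond, which is in line with the suggestion of a much larger carve-out from the directmap space.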


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:15:12 2023
Message-ID: <1d63dd9a-77df-4fca-e35b-886ba19fb35d@suse.com>
Date: Tue, 31 Jan 2023 10:14:58 +0100
Subject: Re: [PATCH v2 01/14] xen/riscv: add _zicsr to CFLAGS
To: Alistair Francis <alistair23@gmail.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <971c400abf7f88a6be322db72481c075d3ceb233.1674818705.git.oleksii.kurochko@gmail.com>
 <CAKmqyKNSywyF8=KUTiKN12JL_Bst5if74h6mgek1aMYS1QpjeQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKmqyKNSywyF8=KUTiKN12JL_Bst5if74h6mgek1aMYS1QpjeQ@mail.gmail.com>

On 31.01.2023 01:21, Alistair Francis wrote:
> On Sat, Jan 28, 2023 at 12:00 AM Oleksii Kurochko
> <oleksii.kurochko@gmail.com> wrote:
>>
>> Working with some registers requires the csr instructions, which are
>> part of Zicsr.
>>
>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

There is an open aspect Andrew has pointed out on an earlier version.
I think it would be quite helpful if that could be settled one way or
another before this patch gets committed (which formally may now be
possible, depending on whether that open aspect is considered an
"open" objection, as per ./MAINTAINERS).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:19:34 2023
Message-ID: <ffa2c5e3-9dcb-eca1-fe3f-6ad4c7c83bae@xen.org>
Date: Tue, 31 Jan 2023 09:19:18 +0000
Subject: Re: [PATCH v3 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
To: Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230130040535.188236-1-Henry.Wang@arm.com>
 <20230130040535.188236-2-Henry.Wang@arm.com>
 <fca91d3c-5d8a-3f7e-419a-a4c5208273dc@xen.org>
 <AS8PR08MB79910D7E3C7F32D8CDCB851092D09@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB79910D7E3C7F32D8CDCB851092D09@AS8PR08MB7991.eurprd08.prod.outlook.com>



On 31/01/2023 02:25, Henry Wang wrote:
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Subject: Re: [PATCH v3 1/3] xen/arm: Add memory overlap check for
>> bootinfo.reserved_mem
>>
>> Hi Henry,
>>
>>> +{
>>> +    paddr_t bank_start = INVALID_PADDR, bank_end = 0;
>>> +    paddr_t region_end = region_start + region_size;
>>> +    unsigned int i, bank_num = meminfo->nr_banks;
>>> +
>>> +    for ( i = 0; i < bank_num; i++ )
>>> +    {
>>> +        bank_start = meminfo->bank[i].start;
>>> +        bank_end = bank_start + meminfo->bank[i].size;
>>> +
>>> +        if ( region_end <= bank_start || region_start >= bank_end )
>>
>> ... it clearly shows how this check would be wrong when either the bank
>> or the region is at the end of the address space. It may report no
>> overlap when there actually is one (e.g. when region_end < region_start).
> 
> Here do you mean that if the region is at the end of the address space,
> "region_start + region_size" will overflow and cause
> region_end < region_start? If so...

Yes.

> 
>>
>> That said, unless we rework 'bank', we would not properly solve the
>> problem. But that's likely a bigger piece of work and not something I
>> would request.
>>
>> So for now, I would suggest to add a comment. Stefano, what do you think?
> 
> ...I am not really sure if simply adding a comment here would help,
> because when the overflow happens, we are already doomed because
> of the messed-up device tree.

Not necessarily. This could happen if the region is right at the top of 
the address space (e.g. finishing at 2^64 - 1). As the 'end' is 
exclusive, it would then be equal to 0.

I think this is less likely for arm64, but this could happen for 32-bit 
Arm as we will allow the admin to reduce paddr_t from 64-bit to 32-bit.
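
[Editor's note: the failure mode described above can be reproduced with a small model, sketched in Python for illustration. paddr_t is modeled as 64-bit unsigned arithmetic; the function names are hypothetical, and the wrap-safe variant assumes non-zero sizes that don't themselves wrap past 0:]

```python
MASK64 = (1 << 64) - 1  # model paddr_t as 64-bit unsigned

def naive_no_overlap(r_start, r_size, b_start, b_size):
    """The check under discussion: exclusive ends, no wraparound handling."""
    r_end = (r_start + r_size) & MASK64
    b_end = (b_start + b_size) & MASK64
    return r_end <= b_start or r_start >= b_end

def inclusive_no_overlap(r_start, r_size, b_start, b_size):
    """A wrap-safe variant comparing inclusive 'last byte' addresses."""
    r_last = (r_start + r_size - 1) & MASK64
    b_last = (b_start + b_size - 1) & MASK64
    return r_last < b_start or b_last < r_start

# Region and bank both ending at 2^64: the exclusive ends wrap to 0 and
# the naive check wrongly reports "no overlap".
r = (0xFFFF_FFFF_FFFF_F000, 0x1000)
b = (0xFFFF_FFFF_FFFF_E000, 0x2000)
print(naive_no_overlap(*r, *b))      # True  (wrong: they do overlap)
print(inclusive_no_overlap(*r, *b))  # False (correct)
```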

> 
> Would adding a `BUG_ON(region_end < region_start)` make sense to you?

No, for the reason I stated above.

> 
>>
>>> +            continue;
>>> +        else
>>> +        {
>>> +            printk("Region: [%#"PRIpaddr", %#"PRIpaddr"] overlapping with
>> bank[%u]: [%#"PRIpaddr", %#"PRIpaddr"]\n",
>>
>> ']' usually means inclusive. But here, 'end' is exclusive. So you want '['.
> 
> Oh, now I understand the misunderstanding in our communication in v1:
> I didn't know '[' means exclusive because I was educated to use ')' [1] so I
> thought you meant inclusive. Sorry for this.

No worries. That might be only me using [ and ) interchangeably :).

> 
> To keep consistency, may I use ')' here? Because I think this is the current
> way in the code base, for example see:
> xen/include/xen/numa.h L99: [*start, *end)
> xen/drivers/passthrough/amd/iommu_acpi.c L177: overlap [%lx,%lx)

I am fine with that.

>> BTW, the same comments applies for the second patch.
> 
> I will fix this patch and #2 in v4.

I am happy to deal with it on commit if you want.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:23:06 2023
Message-ID: <852a9b82-bc82-37c3-75e7-83275ff6c577@amd.com>
Date: Tue, 31 Jan 2023 10:22:39 +0100
Subject: Re: [PATCH v5 2/5] xen/arm64: Rework the memory layout
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230127195508.2786-1-julien@xen.org>
 <20230127195508.2786-3-julien@xen.org>
 <af0bf576-373c-353e-b575-40f5bbde861f@amd.com>
 <044a4899-661b-8a84-d949-dd14d1d32383@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <044a4899-661b-8a84-d949-dd14d1d32383@xen.org>

Hi Julien,

On 30/01/2023 19:36, Julien Grall wrote:
> 
> 
> On 30/01/2023 09:08, Michal Orzel wrote:
>> Hi Julien,
> 
> Hi Michal,
> 
>> On 27/01/2023 20:55, Julien Grall wrote:
>>>
>>>
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Xen is currently not fully compliant with the Arm Arm because it will
>>> switch the TTBR with the MMU on.
>>>
>>> In order to be compliant, we need to disable the MMU before
>>> switching the TTBR. The implication is the page-tables should
>>> contain an identity mapping of the code switching the TTBR.
>>>
>>> In most cases we expect Xen to be loaded in low memory. I am aware
>>> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
>>> To give us some slack, consider that Xen may be loaded in the first 2TB
>>> of the physical address space.
>>>
>>> The memory layout is reshuffled to keep the first four slots of the zeroeth
>>> level free. Xen will now be loaded at (2TB + 2MB). This requires a slight
>>> tweak of the boot code because XEN_VIRT_START cannot be used as an
>>> immediate.
>>>
>>> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
>>> loaded below 2TB.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
>>> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>>>
>>> ---
>>>      Changes in v5:
>>>          - We are reserving 4 slots rather than 2.
>>>          - Fix the addresses in the layout comment.
>>>          - Fix the size of the region in the layout comment
>>>          - Add Luca's tested-by (the reviewed-by was not added
>>>            because of the changes requested by Michal)
>>>          - Add Michal's reviewed-by
>>>
>>>      Changes in v4:
>>>          - Correct the documentation
>>>          - The start address is 2TB, so slot0 is 4 not 2.
>>>
>>>      Changes in v2:
>>>          - Reword the commit message
>>>          - Load Xen at 2TB + 2MB
>>>          - Update the documentation to reflect the new layout
>>> ---
>>>   xen/arch/arm/arm64/head.S         |  3 ++-
>>>   xen/arch/arm/include/asm/config.h | 34 +++++++++++++++++++++----------
>>>   xen/arch/arm/mm.c                 | 11 +++++-----
>>>   3 files changed, 31 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>> index 4a3f87117c83..663f5813b12e 100644
>>> --- a/xen/arch/arm/arm64/head.S
>>> +++ b/xen/arch/arm/arm64/head.S
>>> @@ -607,7 +607,8 @@ create_page_tables:
>>>            * need an additional 1:1 mapping, the virtual mapping will
>>>            * suffice.
>>>            */
>>> -        cmp   x19, #XEN_VIRT_START
>>> +        ldr   x0, =XEN_VIRT_START
>>> +        cmp   x19, x0
>>>           bne   1f
>>>           ret
>>>   1:
>>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>>> index 5df0e4c4959b..e388462c23d1 100644
>>> --- a/xen/arch/arm/include/asm/config.h
>>> +++ b/xen/arch/arm/include/asm/config.h
>>> @@ -72,16 +72,13 @@
>>>   #include <xen/page-size.h>
>>>
>>>   /*
>>> - * Common ARM32 and ARM64 layout:
>>> + * ARM32 layout:
>>>    *   0  -   2M   Unmapped
>>>    *   2M -   4M   Xen text, data, bss
>>>    *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>>    *   6M -  10M   Early boot mapping of FDT
>>>    *   10M - 12M   Livepatch vmap (if compiled in)
>>>    *
>>> - * ARM32 layout:
>>> - *   0  -  12M   <COMMON>
>>> - *
>>>    *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>>>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>>>    *                    space
>>> @@ -90,14 +87,23 @@
>>>    *   2G -   4G   Domheap: on-demand-mapped
>>>    *
>>>    * ARM64 layout:
>>> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
>>> - *   0  -  12M   <COMMON>
>>> + * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
>>> + *
>>> + *  Reserved to identity map Xen
>>> + *
>>> + * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
>>> + *  (Relative offsets)
>>> + *   0  -   2M   Unmapped
>>> + *   2M -   4M   Xen text, data, bss
>>> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>> + *   6M -  10M   Early boot mapping of FDT
>>> + *  10M -  12M   Livepatch vmap (if compiled in)
>>>    *
>>>    *   1G -   2G   VMAP: ioremap and early_ioremap
>>>    *
>>>    *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
>>>    *
>>> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>>> + * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
>>>    *  Unused
>>>    *
>>>    * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
>>> @@ -107,7 +113,17 @@
>>>    *  Unused
>>>    */
>>>
>>> +#ifdef CONFIG_ARM_32
>>>   #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
>>> +#else
>>> +
>>> +#define SLOT0_ENTRY_BITS  39
>>> +#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
>>> +#define SLOT0_ENTRY_SIZE  SLOT0(1)
>>> +
>>> +#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
>>> +#endif
>>> +
>>>   #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
>>>
>>>   #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
>>> @@ -163,10 +179,6 @@
>>>
>>>   #else /* ARM_64 */
>>>
>>> -#define SLOT0_ENTRY_BITS  39
>>> -#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
>>> -#define SLOT0_ENTRY_SIZE  SLOT0(1)
>>> -
>>>   #define VMAP_VIRT_START  GB(1)
>>>   #define VMAP_VIRT_SIZE   GB(1)
>>>
>>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>>> index f758cad545fa..0b0edf28d57a 100644
>>> --- a/xen/arch/arm/mm.c
>>> +++ b/xen/arch/arm/mm.c
>>> @@ -153,7 +153,7 @@ static void __init __maybe_unused build_assertions(void)
>>>   #endif
>>>       /* Page table structure constraints */
>>>   #ifdef CONFIG_ARM_64
>>> -    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
>>> +    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START) < 2);
>> I haven't spotted this before but this should be < 4.
> 
> Good catch! I am planning to handle it on commit unless there is more to
> fix in this patch.
No more remarks from my side.

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:27:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 09:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487473.755101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMmvT-0006y3-Tk; Tue, 31 Jan 2023 09:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487473.755101; Tue, 31 Jan 2023 09:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMmvT-0006xw-Qz; Tue, 31 Jan 2023 09:27:39 +0000
Received: by outflank-mailman (input) for mailman id 487473;
 Tue, 31 Jan 2023 09:27:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMmvS-0006xq-Tg
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 09:27:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMmvS-0005g0-Fe; Tue, 31 Jan 2023 09:27:38 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.14.74]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMmvS-0002kH-9C; Tue, 31 Jan 2023 09:27:38 +0000
Message-ID: <509eafac-bbe7-ed18-ce11-3fede7ca691d@xen.org>
Date: Tue, 31 Jan 2023 09:27:36 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU memory
 region map
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "ayan.kumar.halder@xilinx.com" <ayan.kumar.halder@xilinx.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-12-Penny.Zheng@arm.com>
 <c30b4458-b5f6-f996-0c3c-220b18bfb356@xen.org>
 <AM0PR08MB453083B74DB1D00BDF469331F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <7931e70f-3754-363c-28d8-5fde3198d70f@xen.org>
 <AM0PR08MB45308D5CD69EBB5FE85A4B88F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <3c915633-ddb8-d1e4-f42e-064aaff168b2@xen.org>
 <AM0PR08MB45309F6DCB1B1E0975A741B7F7D09@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB45309F6DCB1B1E0975A741B7F7D09@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 31/01/2023 04:11, Penny Zheng wrote:
> Hi Julien
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Monday, January 30, 2023 5:40 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v2 11/40] xen/mpu: build up start-of-day Xen MPU
>> memory region map
>>
>> Hi Penny,
>>
>> On 30/01/2023 05:45, Penny Zheng wrote:
>>> There are three types of MPU regions at boot time:
>>> 1. Fixed MPU regions:
>>> regions like the Xen text section, Xen heap section, etc.
>>> 2. Boot-only MPU regions:
>>> regions like the Xen init sections, etc. They will be removed at the end
>>> of booting.
>>> 3. Regions that need switching in/out during a vcpu context switch:
>>> regions like the system device memory map.
>>> For example, on FVP_BaseR_AEMv8R we have [0x80000000, 0xfffff000) as the
>>> whole system device memory map for Xen (the idle vcpu) in EL2; when
>>> context switching to a guest vcpu, it shall be replaced with the
>>> guest-specific device mappings, like the vgic, vpl011, passthrough
>>> devices, etc.
>>>
>>> We don't have two mappings for different stage translations in the MPU,
>>> like we had in the MMU: the Xen stage 1 EL2 mapping and the stage 2
>>> mapping both share one MPU memory mapping (xen_mpumap). So, to save the
>>> trouble of hunting down each switching region during the time-sensitive
>>> context switch, we must re-order xen_mpumap to keep the fixed regions at
>>> the front, with the switching ones right behind them.
>>
>>  From my understanding, hunting down each switching region would be a
>> matter of looping over a bitmap. There will be a slight increase in the
>> number of instructions executed, but I don't believe it will be noticeable.
>>
>>>
>>> In patch series v1, I was adding MPU regions in sequence, and I
>>> introduced a set of bitmaps to record the locations of same-type
>>> regions. At the end of booting, I needed to *disable* the MPU to do the
>>> reshuffling, as I can't move regions like the Xen heap while the MPU is
>>> on.
>>>
>>> And we discussed that it is too risky to disable MPU, and you
>>> suggested [1] "
>>>> You should not need any reorg if you map the boot-only section
>>>> towards in the higher slot index (or just after the fixed ones).
>>> "
>>
>> Right, looking at the new code, I realize that this was probably a bad
>> idea because we are adding complexity to the assembly code.
>>
>>>
>>> Maybe in assembly we know exactly how many fixed and boot-only regions
>>> there are, but in C code we parse the FDT to get the static
>>> configuration, so we don't know how many fixed regions are enough for
>>> the Xen static heap.
>>> Approximation is not advisable on an MPU system with a limited number of
>>> MPU regions; some platforms may only have 16 MPU regions, and IMHO it is
>>> not worth wasting any of them on approximation.
>>
>> I haven't suggested using approximation anywhere here. I will answer
>> about the limited number of entries in the other thread.
>>
>>>
>>> So I took the suggestion of putting regions at the higher slot indexes:
>>> fixed regions at the front, with boot-only and switching ones at the
>>> tail. Then, at the end of booting, when we reorganize the MPU mapping,
>>> we remove all boot-only regions, and for the switching ones we disable,
>>> relocate (after the fixed ones), and re-enable them. Specific code is
>>> in [2].
>>
>>  From this discussion, it feels to me that you are trying to make the
>> code more complicated just to keep the split and save a few cycles (see
>> above).
>>
>> I would suggest investigating the cost of "hunting down each section".
>> Depending on the result, we can discuss what the best approach is.
>>
> 
> Correct me if I'm wrong: the complication in assembly you are worried
> about is that we couldn't define the indexes for the initial sections,
> i.e. no hardcoding to keep things simple.

Correct.

> And the function write_pr(), I know, is really a big chunk of code, but
> the logic there is simple, just a bunch of "switch-cases".

I agree that write_pr() is a bunch of switch-cases. But there is a lot of 
duplication in it, and the interface to use it is, IMHO, not intuitive.

> 
> Suppose we are adding MPU regions in sequence as you suggested, while
> using a bitmap at the same time to record the used entries. TBH, this is
> how I designed it internally at the very beginning. We found that if we
> don't reorganize at late boot to keep the fixed regions in front and the
> switching ones after, then on each vcpu context switch we not only have
> to hunt down the switching regions to disable, but also, when we add new
> switch-in regions, have to use the bitmap to find free entries, which
> makes the process unpredictable. Uncertainty is what we want to avoid on
> the Armv8-R architecture.

I don't understand why it would be unpredictable. For a given 
combination of platform/device-tree, the bitmap will always look the 
same. So the number of cycles/instructions will always be the same.

This is not very different from the case where you split the MPU in two 
because

> 
> Hmmm, TBH, I really really like your suggestion to put the boot-only and
> switching regions into the higher slots. It saved a lot of trouble in the
> late-init reorganization and also avoids disabling the MPU. The split is
> also a simple and easy-to-understand construction compared with the
> bitmap.

I would like to propose another split. I will reply to that in the 
thread where you provided the MPU layout.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:30:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 09:30:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487475.755111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMmyW-0008FE-CU; Tue, 31 Jan 2023 09:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487475.755111; Tue, 31 Jan 2023 09:30:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMmyW-0008F7-91; Tue, 31 Jan 2023 09:30:48 +0000
Received: by outflank-mailman (input) for mailman id 487475;
 Tue, 31 Jan 2023 09:30:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WzvD=54=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pMmyV-0008Ey-Dg
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 09:30:47 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on2053.outbound.protection.outlook.com [40.107.8.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ef5a4890-a149-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 10:30:44 +0100 (CET)
Received: from DB6PR07CA0074.eurprd07.prod.outlook.com (2603:10a6:6:2b::12) by
 AM0PR08MB5297.eurprd08.prod.outlook.com (2603:10a6:208:18a::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 09:30:42 +0000
Received: from DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2b:cafe::6f) by DB6PR07CA0074.outlook.office365.com
 (2603:10a6:6:2b::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6064.20 via Frontend
 Transport; Tue, 31 Jan 2023 09:30:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT038.mail.protection.outlook.com (100.127.143.23) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6043.36 via Frontend Transport; Tue, 31 Jan 2023 09:30:41 +0000
Received: ("Tessian outbound 3ad958cd7492:v132");
 Tue, 31 Jan 2023 09:30:41 +0000
Received: from 814e91673b6e.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9696D683-772A-4C99-A1CF-1E3E841C3F72.1; 
 Tue, 31 Jan 2023 09:30:35 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 814e91673b6e.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 31 Jan 2023 09:30:35 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB8393.eurprd08.prod.outlook.com (2603:10a6:20b:569::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 09:30:33 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::d1d7:166d:6c34:d19e%5]) with mapi id 15.20.6043.033; Tue, 31 Jan 2023
 09:30:33 +0000
X-Inumbo-ID: ef5a4890-a149-11ed-b63b-5f92e7d2e73a
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v3 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Thread-Topic: [PATCH v3 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Thread-Index: AQHZNGA1IwnnIsRmjEGECQ4jTGRpG663gnMAgAA6SvCAAISzAIAAAKDw
Date: Tue, 31 Jan 2023 09:30:33 +0000
Message-ID:
 <AS8PR08MB799131C4BA5A3C5A33A0CD8D92D09@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230130040535.188236-1-Henry.Wang@arm.com>
 <20230130040535.188236-2-Henry.Wang@arm.com>
 <fca91d3c-5d8a-3f7e-419a-a4c5208273dc@xen.org>
 <AS8PR08MB79910D7E3C7F32D8CDCB851092D09@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <ffa2c5e3-9dcb-eca1-fe3f-6ad4c7c83bae@xen.org>
In-Reply-To: <ffa2c5e3-9dcb-eca1-fe3f-6ad4c7c83bae@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Subject: Re: [PATCH v3 1/3] xen/arm: Add memory overlap check for
> bootinfo.reserved_mem
> > I will fix this patch and #2 in v4.
> 
> I am happy to deal with it on commit if you want.

Including adding the comment for both patches? This would be wonderful
and very nice of you to do that. But if your time is limited I am also more
than happy to respin the patch (probably even with Stefano's Reviewed-by
tag if he is ok with it) to reduce your burden. That said, if I need to respin the
patch, it would be good to get some hints about the wording of the comments
to avoid another v+1 just because of my inaccurate wording :) Thanks very
much!

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:34:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 09:34:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487480.755120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMn26-0000Pv-Rb; Tue, 31 Jan 2023 09:34:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487480.755120; Tue, 31 Jan 2023 09:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMn26-0000Po-Ov; Tue, 31 Jan 2023 09:34:30 +0000
Received: by outflank-mailman (input) for mailman id 487480;
 Tue, 31 Jan 2023 09:34:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nz19=54=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMn25-0000Pi-Ii
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 09:34:29 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2041.outbound.protection.outlook.com [40.107.241.41])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75146907-a14a-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 10:34:28 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8478.eurprd04.prod.outlook.com (2603:10a6:10:2c4::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.33; Tue, 31 Jan
 2023 09:34:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.038; Tue, 31 Jan 2023
 09:34:26 +0000
X-Inumbo-ID: 75146907-a14a-11ed-933c-83870f6b2ba8
Message-ID: <9dc52d71-4148-0c16-d153-3ebcd1a9c754@suse.com>
Date: Tue, 31 Jan 2023 10:34:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/common: rwlock: Constify the parameter of
 _rw_is{,_write}_locked()
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230130182858.86886-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230130182858.86886-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0059.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 09:34:26.5279
 (UTC)

On 30.01.2023 19:28, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The lock is not meant to be modified by _rw_is{,_write}_locked(). So
> constify it.
> 
> This is helpful to be able to assert if the lock is taken when the
> underlying structure is const.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
(maybe also Requested-by)

Thanks for doing this on top of the spinlock change.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:42:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 09:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487489.755131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMn9Q-0002Gx-O2; Tue, 31 Jan 2023 09:42:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487489.755131; Tue, 31 Jan 2023 09:42:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMn9Q-0002Gq-KC; Tue, 31 Jan 2023 09:42:04 +0000
Received: by outflank-mailman (input) for mailman id 487489;
 Tue, 31 Jan 2023 09:42:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMn9P-0002Gk-2z
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 09:42:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMn9O-0005yZ-OF; Tue, 31 Jan 2023 09:42:02 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.14.74]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMn9N-0003LQ-N3; Tue, 31 Jan 2023 09:42:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <14f9c89a-6eea-204e-cd1b-6bc1cca99716@xen.org>
Date: Tue, 31 Jan 2023 09:41:59 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
 setup_early_uart to map early UART
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230113052914.3845596-1-Penny.Zheng@arm.com>
 <20230113052914.3845596-14-Penny.Zheng@arm.com>
 <23f49916-dd2a-a956-1e6b-6dbb41a8817b@xen.org>
 <AM0PR08MB4530B7AF6EA406882974D528F7D29@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <33bddc11-ae1e-b467-32d7-647748d1c627@xen.org>
 <AM0PR08MB453026B268BA9FBEEE970090F7D39@AM0PR08MB4530.eurprd08.prod.outlook.com>
 <49329992-3203-78a7-fc61-d6494e37705c@xen.org>
 <AM0PR08MB45305D27CA8353162445AE1EF7D09@AM0PR08MB4530.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AM0PR08MB45305D27CA8353162445AE1EF7D09@AM0PR08MB4530.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Penny,

On 31/01/2023 05:38, Penny Zheng wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Monday, January 30, 2023 6:00 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
>> setup_early_uart to map early UART
>>
>>
>>
>> On 30/01/2023 06:24, Penny Zheng wrote:
>>> Hi, Julien
>>
>> Hi Penny,
>>
>>>> -----Original Message-----
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: Sunday, January 29, 2023 3:43 PM
>>>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>>>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>>>> <sstabellini@kernel.org>; Bertrand Marquis
>>>> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
>>>> <Volodymyr_Babchuk@epam.com>
>>>> Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
>>>> setup_early_uart to map early UART
>>>>
>>>> Hi Penny,
>>>>
>>>> On 29/01/2023 06:17, Penny Zheng wrote:
>>>>>> -----Original Message-----
>>>>>> From: Julien Grall <julien@xen.org>
>>>>>> Sent: Wednesday, January 25, 2023 3:09 AM
>>>>>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org
>>>>>> Cc: Wei Chen <Wei.Chen@arm.com>; Stefano Stabellini
>>>>>> <sstabellini@kernel.org>; Bertrand Marquis
>>>>>> <Bertrand.Marquis@arm.com>; Volodymyr Babchuk
>>>>>> <Volodymyr_Babchuk@epam.com>
>>>>>> Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
>>>>>> setup_early_uart to map early UART
>>>>>>
>>>>>> Hi Penny,
>>>>>
>>>>> Hi Julien,
>>>>>
>>>>>>
>>>>>> On 13/01/2023 05:28, Penny Zheng wrote:
>>>>>>> In MMU system, we map the UART in the fixmap (when earlyprintk is
>>>> used).
>>>>>>> However in MPU system, we map the UART with a transient MPU
>>>> memory
>>>>>>> region.
>>>>>>>
>>>>>>> So we introduce a new unified function setup_early_uart to replace
>>>>>>> the previous setup_fixmap.
>>>>>>>
>>>>>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>>>>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>>>>>> ---
>>>>>>>      xen/arch/arm/arm64/head.S               |  2 +-
>>>>>>>      xen/arch/arm/arm64/head_mmu.S           |  4 +-
>>>>>>>      xen/arch/arm/arm64/head_mpu.S           | 52 +++++++++++++++++++++++++
>>>>>>>      xen/arch/arm/include/asm/early_printk.h |  1 +
>>>>>>>      4 files changed, 56 insertions(+), 3 deletions(-)
>>>>>>>
>>>>>>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>>>>>>> index 7f3f973468..a92883319d 100644
>>>>>>> --- a/xen/arch/arm/arm64/head.S
>>>>>>> +++ b/xen/arch/arm/arm64/head.S
>>>>>>> @@ -272,7 +272,7 @@ primary_switched:
>>>>>>>               * afterwards.
>>>>>>>               */
>>>>>>>              bl    remove_identity_mapping
>>>>>>> -        bl    setup_fixmap
>>>>>>> +        bl    setup_early_uart
>>>>>>>      #ifdef CONFIG_EARLY_PRINTK
>>>>>>>              /* Use a virtual address to access the UART. */
>>>>>>>              ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
>>>>>>> diff --git a/xen/arch/arm/arm64/head_mmu.S b/xen/arch/arm/arm64/head_mmu.S
>>>>>>> index b59c40495f..a19b7c873d 100644
>>>>>>> --- a/xen/arch/arm/arm64/head_mmu.S
>>>>>>> +++ b/xen/arch/arm/arm64/head_mmu.S
>>>>>>> @@ -312,7 +312,7 @@ ENDPROC(remove_identity_mapping)
>>>>>>>       *
>>>>>>>       * Clobbers x0 - x3
>>>>>>>       */
>>>>>>> -ENTRY(setup_fixmap)
>>>>>>> +ENTRY(setup_early_uart)
>>>>>>
>>>>>> This function is doing more than enabling the early UART. It also
>>>>>> sets up the fixmap even when earlyprintk is not configured.
>>>>>
>>>>> True, true.
>>>>> I've thoroughly read the MMU implementation of setup_fixmap, and
>>>>> I'll try to split it up.
>>>>>
>>>>>>
>>>>>> I am not entirely sure what could be the name. Maybe this needs to
>>>>>> be split further.
>>>>>>
>>>>>>>      #ifdef CONFIG_EARLY_PRINTK
>>>>>>>              /* Add UART to the fixmap table */
>>>>>>>              ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
>>>>>>> @@ -325,7 +325,7 @@ ENTRY(setup_fixmap)
>>>>>>>              dsb   nshst
>>>>>>>
>>>>>>>              ret
>>>>>>> -ENDPROC(setup_fixmap)
>>>>>>> +ENDPROC(setup_early_uart)
>>>>>>>
>>>>>>>      /* Fail-stop */
>>>>>>>      fail:   PRINT("- Boot failed -\r\n")
>>>>>>> diff --git a/xen/arch/arm/arm64/head_mpu.S b/xen/arch/arm/arm64/head_mpu.S
>>>>>>> index e2ac69b0cc..72d1e0863d 100644
>>>>>>> --- a/xen/arch/arm/arm64/head_mpu.S
>>>>>>> +++ b/xen/arch/arm/arm64/head_mpu.S
>>>>>>> @@ -18,8 +18,10 @@
>>>>>>>      #define REGION_TEXT_PRBAR       0x38    /* SH=11 AP=10 XN=00 */
>>>>>>>      #define REGION_RO_PRBAR         0x3A    /* SH=11 AP=10 XN=10 */
>>>>>>>      #define REGION_DATA_PRBAR       0x32    /* SH=11 AP=00 XN=10 */
>>>>>>> +#define REGION_DEVICE_PRBAR     0x22    /* SH=10 AP=00 XN=10 */
>>>>>>>
>>>>>>>      #define REGION_NORMAL_PRLAR     0x0f    /* NS=0 ATTR=111 EN=1 */
>>>>>>> +#define REGION_DEVICE_PRLAR     0x09    /* NS=0 ATTR=100 EN=1 */
>>>>>>>
>>>>>>>      /*
>>>>>>>       * Macro to round up the section address to be PAGE_SIZE
>>>>>>> aligned @@
>>>>>>> -334,6 +336,56 @@ ENTRY(enable_mm)
>>>>>>>          ret
>>>>>>>      ENDPROC(enable_mm)
>>>>>>>
>>>>>>> +/*
>>>>>>> + * Map the early UART with a new transient MPU memory region.
>>>>>>> + *
>>>>>>
>>>>>> Missing "Inputs: "
>>>>>>
>>>>>>> + * x27: region selector
>>>>>>> + * x28: prbar
>>>>>>> + * x29: prlar
>>>>>>> + *
>>>>>>> + * Clobbers x0 - x4
>>>>>>> + *
>>>>>>> + */
>>>>>>> +ENTRY(setup_early_uart)
>>>>>>> +#ifdef CONFIG_EARLY_PRINTK
>>>>>>> +    /* stack LR as write_pr will be called later like nested function */
>>>>>>> +    mov   x3, lr
>>>>>>> +
>>>>>>> +    /*
>>>>>>> +     * MPU region for early UART is a transient region, since it will be
>>>>>>> +     * replaced by specific device memory layout when FDT gets parsed.
>>>>>>
>>>>>> I would rather not mention "FDT" here because this code is
>>>>>> independent of the firmware table used.
>>>>>>
>>>>>> However, any reason to use a transient region rather than the one
>>>>>> that will be used for the UART driver?
>>>>>>
>>>>>
>>>>> We don’t want to define an MPU region for each device driver. It will
>>>>> exhaust MPU regions very quickly.
>>>> What is the usual size of an MPU?
>>>>
>>>> However, even if you don't want to define one for every device, it
>>>> still seems sensible to define a fixed temporary one for the
>>>> early UART as this would simplify the assembly code.
>>>>
>>>
>>> We will add fixed MPU regions for the Xen static heap in setup_mm.
>>> If we put the early UART region at the front (among the fixed regions),
>>> it will leave holes after we remove it later.
>>
>> Why? The entry could be re-used to map the devices entry.
>>
>>>
>>>>
>>>>> In commit "[PATCH v2 28/40] xen/mpu: map boot module section in MPU
>>>>> system",
>>>>
>>>> Did you mean patch #27?
>>>>
>>>>> A new FDT property `mpu,device-memory-section` will be introduced
>>>>> for users to statically configure the whole system device memory
>>>>> with the
>>>> least number of memory regions in Device Tree.
>>>>> This section shall cover all devices that will be used in Xen, like
>>>>> `UART`, `GIC`, etc.
>>>>> For FVP_BaseR_AEMv8R, we have the following definition:
>>>>> ```
>>>>> mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>;
>>>>> ```
>>>>
>>>> I am a bit worried this will be a recipe for mistakes. Do you have an
>>>> example where the MPU will be exhausted if we reserve some entries
>>>> for each device (or some)?
>>>>
>>>
>>> Yes, we have internal platform where MPU regions are only 16.
>>
>> Internal is in silicon (e.g. real) or virtual platform?
>>
> 
> Sorry, all I'm allowed to say is that we have met this kind of platform.
> Due to the NDA, I can't tell you more.
> 
>>>   It will almost eat up
>>> all MPU regions with the current implementation when launching two
>>> guests on the platform.
>>>
>>> Let's calculate the most simple scenario:
>>> The following is MPU-related static configuration in device tree:
>>> ```
>>>           mpu,boot-module-section = <0x0 0x10000000 0x0 0x10000000>;
>>>           mpu,guest-memory-section = <0x0 0x20000000 0x0 0x40000000>;
>>>           mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>;
>>>           mpu,shared-memory-section = <0x0 0x7a000000 0x0 0x02000000>;
>>>
>>>           xen,static-heap = <0x0 0x60000000 0x0 0x1a000000>;
>>> ```
>>>
>>> At the end of the boot, before reshuffling, the MPU region usage will be
>>> as follows:
>>> 7 (defined in assembly) + FDT (early_fdt_map) + 5 (at least one region
>>> for each "mpu,xxx-memory-section").
>>
>> Can you list the 7 sections? Do they include the init sections?
>>
> 
> Yes, I'll draw the layout for you:

Thanks!

> '''
>   Xen MPU Map before reorg:
> 
> xen_mpumap[0] : Xen text
> xen_mpumap[1] : Xen read-only data
> xen_mpumap[2] : Xen read-only after init data
> xen_mpumap[3] : Xen read-write data
> xen_mpumap[4] : Xen BSS
> xen_mpumap[5] : Xen static heap
> ......
> xen_mpumap[max_xen_mpumap - 7]: Static shared memory section
> xen_mpumap[max_xen_mpumap - 6]: Boot Module memory section(kernel, initramfs, etc)
> xen_mpumap[max_xen_mpumap - 5]: Device memory section
> xen_mpumap[max_xen_mpumap - 4]: Guest memory section
> xen_mpumap[max_xen_mpumap - 3]: Early FDT
> xen_mpumap[max_xen_mpumap - 2]: Xen init data
> xen_mpumap[max_xen_mpumap - 1]: Xen init text
> 
> At the end of boot, the function init_done will do the reorg and the boot-only region clean-up:
> 
> Xen MPU Map after reorg(idle vcpu):
> 
> xen_mpumap[0] : Xen text
> xen_mpumap[1] : Xen read-only data
> xen_mpumap[2] : Xen read-only after init data

In theory 1 and 2 could be merged after boot. But I guess it might be 
complicated?

> xen_mpumap[3] : Xen read-write data
> xen_mpumap[4] : Xen BSS
> xen_mpumap[5] : Xen static heap
> xen_mpumap[6] : Guest memory section

Why do you need to map the "Guest memory section" for the idle vCPU?

> xen_mpumap[7] : Device memory section

I might be missing some context here. But why is this section not also 
mapped in the context of the guest vCPU?

For instance, how would you write to the serial console when the context 
is the guest vCPU?

> xen_mpumap[8] : Static shared memory section
> 
> Xen MPU Map on runtime(guest vcpu):
> 
> xen_mpumap[0] : Xen text
> xen_mpumap[1] : Xen read-only data
> xen_mpumap[2] : Xen read-only after init data
> xen_mpumap[3] : Xen read-write data
> xen_mpumap[4] : Xen BSS
> xen_mpumap[5] : Xen static heap
> xen_mpumap[6] : Guest memory
> xen_mpumap[7] : vGIC map
> xen_mpumap[8] : vPL011 map

I was expecting the PL011 to be fully emulated. So why is this necessary?

> xen_mpumap[9] : Passthrough device map(UART, etc)
> xen_mpumap[10] : Static shared memory section
> 
>>>
>>> That will be already at least 13 MPU regions ;\.
>>
>> The section I am the most concern of is mpu,device-memory-section
>> because it would likely mean that all the devices will be mapped in Xen.
>> Is there any risk that the guest may use different memory attribute?
>>
> 
> Yes, in the current implementation, the per-domain vgic, vpl011, and passthrough
> device maps will be individually added into the per-domain P2M mapping. Then,
> when switching from the Xen idle vcpu into a guest vcpu, the device memory
> section will be replaced by the vgic, vpl011, and passthrough device maps.

Per above, I am not entirely sure how you could remove the device memory 
section when using the guest vCPU.

Now about the layout between init and runtime. From the previous 
discussion, you said you didn't want the init section to be fixed because 
of the "Xen static heap" section.

Furthermore, you also mentioned that you didn't want a bitmap. So how 
about the following for the assembly part:

xen_mpumap[0] : Xen text
xen_mpumap[1] : Xen read-only data
xen_mpumap[2] : Xen read-only after init data
xen_mpumap[3] : Xen read-write data
xen_mpumap[4] : Xen BSS
xen_mpumap[5]: Early FDT
xen_mpumap[6]: Xen init data
xen_mpumap[7]: Xen init text
xen_mpumap[8]: Early UART (optional)

Then when you switch to C, you could have:

xen_mpumap[0] : Xen text
xen_mpumap[1] : Xen read-only data
xen_mpumap[2] : Xen read-only after init data
xen_mpumap[3] : Xen read-write data
xen_mpumap[4] : Xen BSS
xen_mpumap[5]: Early FDT
xen_mpumap[6]: Xen init data
xen_mpumap[7]: Xen init text

xen_mpumap[max_xen_mpumap - 4]: Device memory section
xen_mpumap[max_xen_mpumap - 3]: Guest memory section
xen_mpumap[max_xen_mpumap - 2]: Static shared memory section
xen_mpumap[max_xen_mpumap - 1] : Xen static heap

And at runtime, you would keep the "Xen static heap" right at the end of 
the MPU and keep the middle entries as the switchable one.

There would be no bitmap with this solution, and all the entries for the 
assembly code would be fixed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:51:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 09:51:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487494.755141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMnIH-000409-KA; Tue, 31 Jan 2023 09:51:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487494.755141; Tue, 31 Jan 2023 09:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMnIH-000402-HF; Tue, 31 Jan 2023 09:51:13 +0000
Received: by outflank-mailman (input) for mailman id 487494;
 Tue, 31 Jan 2023 09:51:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMnIG-0003zs-CU; Tue, 31 Jan 2023 09:51:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMnIG-0006Mj-9X; Tue, 31 Jan 2023 09:51:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMnIF-0004wD-Vm; Tue, 31 Jan 2023 09:51:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMnIF-00087c-VG; Tue, 31 Jan 2023 09:51:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176293-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176293: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 09:51:11 +0000

flight 176293 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176293/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt      5 host-install(5)          broken pass in 176292

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt    15 migrate-support-check fail in 176292 never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    4 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     broken  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-job test-amd64-amd64-libvirt broken
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (the default value).
    This causes kernel_zimage_place() to treat the binary (contained within the uImage)
    as position independent executable. Thus, it loads it at an incorrect
    address.
    
    The correct approach would be to read "uimage.load" and set
    info->zimage.start. This will ensure that the binary is loaded at the
    correct address. Also, read "uimage.ep" and set info->entry (ie kernel entry
    address).
    
    If the user provides the load address (ie "uimage.load") as 0x0, then the image is
    treated as position independent executable. Xen can load such an image at
    any address it considers appropriate. A position independent executable
    cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Earlier for arm32 and arm64 platforms, Xen was ignoring the load and entry
    point address set in the uImage header. With this commit, Xen will use them.
    This makes the behavior of Xen consistent with uboot for uimage headers.
    
    Users who want to use Xen with statically partitioned domains can provide a
    non-zero load address and entry address for the dom0/domU kernel. It is
    required that the load and entry address provided must be within the memory
    region allocated by Xen.
    
    A deviation from uboot behaviour is that we consider a load address of 0x0
    to denote that the image supports position independent execution. This
    is to make the behavior consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 09:59:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 09:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487502.755151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMnQE-0004jB-Eb; Tue, 31 Jan 2023 09:59:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487502.755151; Tue, 31 Jan 2023 09:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMnQE-0004j4-BH; Tue, 31 Jan 2023 09:59:26 +0000
Received: by outflank-mailman (input) for mailman id 487502;
 Tue, 31 Jan 2023 09:59:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMnQD-0004iy-JS
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 09:59:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMnQD-0006WY-6z; Tue, 31 Jan 2023 09:59:25 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.14.74]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMnQD-0003xm-1W; Tue, 31 Jan 2023 09:59:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <b8b0b07c-2b5f-f5df-1a95-e787f2cab25a@xen.org>
Date: Tue, 31 Jan 2023 09:59:22 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v3 1/3] xen/arm: Add memory overlap check for
 bootinfo.reserved_mem
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <Wei.Chen@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230130040535.188236-1-Henry.Wang@arm.com>
 <20230130040535.188236-2-Henry.Wang@arm.com>
 <fca91d3c-5d8a-3f7e-419a-a4c5208273dc@xen.org>
 <AS8PR08MB79910D7E3C7F32D8CDCB851092D09@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <ffa2c5e3-9dcb-eca1-fe3f-6ad4c7c83bae@xen.org>
 <AS8PR08MB799131C4BA5A3C5A33A0CD8D92D09@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB799131C4BA5A3C5A33A0CD8D92D09@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 31/01/2023 09:30, Henry Wang wrote:
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Subject: Re: [PATCH v3 1/3] xen/arm: Add memory overlap check for
>> bootinfo.reserved_mem
>>> I will fix this patch and #2 in v4.
>>
>> I am happy to deal with it on commit if you want.
> 
> Including adding the comment for both patches? That would be wonderful
> and very kind of you. But if your time is limited, I am also more than
> happy to respin the patch (probably even with Stefano's Reviewed-by
> tag, if he is ok with it) to reduce your burden. That said, if I do need
> to respin the patch, it would be good to get some hints about the wording
> of the comments, to avoid yet another v+1 just because of my inaccurate
> wording :)

Good idea. My suggestion would be:

TODO: '*_end' could be 0 if the bank/region is at the end of the 
physical address space. This is not handled for now, as it requires 
more rework.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 10:51:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 10:51:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487513.755161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoEc-0003ji-BW; Tue, 31 Jan 2023 10:51:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487513.755161; Tue, 31 Jan 2023 10:51:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoEc-0003jb-7t; Tue, 31 Jan 2023 10:51:30 +0000
Received: by outflank-mailman (input) for mailman id 487513;
 Tue, 31 Jan 2023 10:51:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dLqc=54=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pMoEa-0003jV-Bi
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 10:51:28 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32efa716-a155-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 11:51:23 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by SA0PR12MB7462.namprd12.prod.outlook.com (2603:10b6:806:24b::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 10:51:20 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%4]) with mapi id 15.20.6043.033; Tue, 31 Jan 2023
 10:51:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32efa716-a155-11ed-b63b-5f92e7d2e73a
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <82e2a6ae-b941-6fa6-dd00-c2c36bd5e079@amd.com>
Date: Tue, 31 Jan 2023 10:51:07 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 04/11] xen/arm: Typecast the DT values into paddr_t
To: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Cc: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-5-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191514240.731018@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2301191532370.731018@ubuntu-linux-20-04-desktop>
 <1a227f76-005d-0307-5161-2e8432171eb7@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <1a227f76-005d-0307-5161-2e8432171eb7@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P123CA0031.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:388::10) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|SA0PR12MB7462:EE_
X-MS-Office365-Filtering-Correlation-Id: b0d9585d-5d47-4109-9bf6-08db037915df
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b0d9585d-5d47-4109-9bf6-08db037915df
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 10:51:19.8615
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aQxSn6AjJCR/1vqEPyJoNMY/8Q1xgriGuqePmgJjR6xILJ2Hc3vB0T3DwjKgazm3YpkbrFFiCzY56F7KliuOpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB7462


On 20/01/2023 10:16, Julien Grall wrote:
> Hi,
Hi Julien/Stefano,
>
> I am answering to multiple e-mails at the same time.
>
> On 19/01/2023 23:34, Stefano Stabellini wrote:
>> On Thu, 19 Jan 2023, Stefano Stabellini wrote:
>>> On Tue, 17 Jan 2023, Ayan Kumar Halder wrote:
>>>> In future, we wish to support 32 bit physical address.
>>>> However, one can only read u64 values from the DT. Thus, we need to
>
> A cell in the DT is a 32-bit value, so you could read a 32-bit value 
> (#address-cells could be 1). It is just that our wrappers return 64-bit 
> values because that is the form we use most.
ok
>
>>>> typecast the values appropriately from u64 to paddr_t.
>
> C is perfectly able to truncate a 64-bit value to a 32-bit one, so this 
> on its own is not a very strong argument for why all of this is necessary.
I can remove this line from the commit message. However, I have a 
related point below...
>
>
>>>>
>>>> device_tree_get_reg() should now be able to return paddr_t. This is
>>>> invoked by various callers to get DT address and size.
>>>> Similarly, dt_read_number() is invoked as well to get DT address and
>>>> size. The return value is typecasted to paddr_t.
>>>> fdt_get_mem_rsv() can only accept u64 values. So, we provide a warpper
>
> Typo: s/warpper/wrapper/
ok
>
>>>> for this called fdt_get_mem_rsv_paddr() which will do the necessary
>>>> typecasting before invoking fdt_get_mem_rsv() and while returning.
>>>>
>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>>
>>> I know we discussed this before and you implemented exactly what we
>>> suggested, but now looking at this patch I think we should do the
>>> following:
>>>
>>> - also add a wrapper for dt_read_number, something like
>>>    dt_read_number_paddr that returns paddr
>
> "number" and "paddr" pretty much means the same. I think it would be 
> better to name it "dt_read_paddr".
ok
>
>>> - add a check for the top 32-bit being zero in all the wrappers
>>>    (dt_read_number_paddr, device_tree_get_reg, fdt_get_mem_rsv_paddr)
>>>    when paddr!=uint64_t. In case the top 32-bit are != zero I think we
>>>    should print an error
>>>
>>> Julien, I remember you were concerned about BUG_ON/panic/ASSERT and I
>>> agree with you there (especially considering Vikram's device tree
>>> overlay series). So here I am only suggesting to check truncation and
>>> printk a message, not panic. Would you be OK with that?
>
> Aside dt_read_number(), I would expect that most of the helper can 
> return an error. So if you want to check the truncation, then we 
> should propagate the error.
ok
>
>>>
>>> Last comment, maybe we could add fdt_get_mem_rsv_paddr to setup.h
>>> instead of introducing xen/arch/arm/include/asm/device_tree.h, given
>>> that we already have device tree definitions there
>>> (device_tree_get_reg). I am OK either way.
>>   Actually I noticed you also add dt_device_get_paddr to
>> xen/arch/arm/include/asm/device_tree.h. So it sounds like it is a good
>> idea to introduce xen/arch/arm/include/asm/device_tree.h, and we could
>> also move the declarations of device_tree_get_reg, device_tree_get_u32
>> there.
>
> None of the helpers you mention sound Arm-specific. So why should they 
> be moved to an arch-specific header?

The functions (fdt_get_mem_rsv_paddr(), device_tree_get_reg(), 
device_tree_get_u32()) are currently used by Arm specific code only.

So, I thought of implementing fdt_get_mem_rsv_paddr() in 
xen/arch/arm/include/asm/device_tree.h.

However, I see your point as well, so the alternative approaches would be:

Approach 1) Implement fdt_get_mem_rsv_paddr() in 
./xen/include/xen/libfdt/libfdt.h.

However, libfdt is imported from an external source, so I am not sure 
this is the best approach.

Approach 2) Rename fdt_get_mem_rsv_paddr() to dt_get_mem_rsv_paddr() and 
implement it in ./xen/include/xen/device_tree.h.

However, this would be odd, as it invokes fdt_get_mem_rsv(), for which 
we would have to include libfdt.h in device_tree.h.


So, I am biased towards having xen/arch/arm/include/asm/device_tree.h in 
which we can implement all the non-static fdt and dt functions (which 
are required by xen/arch/arm/*).

And then (as Stefano suggested), we can move the declarations of the 
following to xen/arch/arm/include/asm/device_tree.h:

device_tree_get_reg()

device_tree_get_u32()

device_tree_for_each_node()


Please let me know your thoughts.

>
> [...]
>
>>>> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
>>>> index 0085c28d74..f536a3f3ab 100644
>>>> --- a/xen/arch/arm/bootfdt.c
>>>> +++ b/xen/arch/arm/bootfdt.c
>>>> @@ -11,9 +11,9 @@
>>>>   #include <xen/efi.h>
>>>>   #include <xen/device_tree.h>
>>>>   #include <xen/lib.h>
>>>> -#include <xen/libfdt/libfdt.h>
>>>>   #include <xen/sort.h>
>>>>   #include <xsm/xsm.h>
>>>> +#include <asm/device_tree.h>
>>>>   #include <asm/setup.h>
>>>>     static bool __init device_tree_node_matches(const void *fdt, 
>>>> int node,
>>>> @@ -53,10 +53,15 @@ static bool __init 
>>>> device_tree_node_compatible(const void *fdt, int node,
>>>>   }
>>>>     void __init device_tree_get_reg(const __be32 **cell, u32 
>>>> address_cells,
>>>> -                                u32 size_cells, u64 *start, u64 
>>>> *size)
>>>> +                                u32 size_cells, paddr_t *start, 
>>>> paddr_t *size)
>>>>   {
>>>> -    *start = dt_next_cell(address_cells, cell);
>>>> -    *size = dt_next_cell(size_cells, cell);
>>>> +    /*
>>>> +     * dt_next_cell will return u64 whereas paddr_t may be u64 or 
>>>> u32. Thus, one
>>>> +     * needs to cast paddr_t to u32. Note that we do not check for 
>>>> truncation as
>>>> +     * it is user's responsibility to provide the correct values 
>>>> in the DT.
>>>> +     */
>>>> +    *start = (paddr_t) dt_next_cell(address_cells, cell);
>>>> +    *size = (paddr_t) dt_next_cell(size_cells, cell);
>
> There is no need for explicit cast here.

Should we keep it for the sake of documentation (to show that it casts 
u64 to paddr_t)?

- Ayan

>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 10:57:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 10:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487519.755170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoK4-0004ef-3n; Tue, 31 Jan 2023 10:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487519.755170; Tue, 31 Jan 2023 10:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoK4-0004eY-17; Tue, 31 Jan 2023 10:57:08 +0000
Received: by outflank-mailman (input) for mailman id 487519;
 Tue, 31 Jan 2023 10:57:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMoK2-0004eO-Uz; Tue, 31 Jan 2023 10:57:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMoK2-0007yl-PU; Tue, 31 Jan 2023 10:57:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMoK2-0000B5-6G; Tue, 31 Jan 2023 10:57:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMoK2-00059j-5j; Tue, 31 Jan 2023 10:57:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=k2Kmabe10/omstOFYGKD9d3CVSJl/CmWS6MaoSrSDbA=; b=05Al5qsRuARgewj3UClnfzLGOQ
	DNAENxATUt6iZZCcAbagJVRiBgUB2UQHttl4PNuczYyzYQVGNjdw9+biuK8QyQv0noxdcRpq8qQ0L
	lovpJi1RYYype3ou8tXOOi0qced+9XlT0u04XRwPpFcTNoT1/mdYMtJHrb4dJa5B081I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176290-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176290: trouble: blocked/broken/fail/pass/starved
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 10:57:06 +0000

flight 176290 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176290/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair            <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>                 broken
 test-amd64-i386-xl-qemut-win7-amd64    <job status>                 broken
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>                 broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 test-amd64-amd64-xl             <job status>                 broken  in 176275
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>          broken in 176275
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>     broken in 176275
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>   broken in 176275
 test-amd64-amd64-xl-rtds        <job status>                 broken  in 176281
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl          5 host-install(5) broken in 176275 pass in 176290
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 176275 pass in 176290
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 176275 pass in 176290
 test-amd64-amd64-xl-rtds     5 host-install(5) broken in 176281 pass in 176290
 test-amd64-i386-pair          7 host-install/dst_host(7) broken pass in 176281
 test-amd64-i386-xl-qemut-win7-amd64  5 host-install(5)   broken pass in 176281
 test-amd64-i386-qemut-rhel6hvm-intel  5 host-install(5)  broken pass in 176281
 test-amd64-i386-xl-qemut-ws16-amd64  5 host-install(5)   broken pass in 176281
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 21 guest-start.2 fail in 176261 pass in 176290
 test-amd64-amd64-xl-rtds  10 host-ping-check-xen fail in 176275 pass in 176261
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot  fail in 176275 pass in 176281
 test-amd64-amd64-xl-xsm 20 guest-localmigrate/x10 fail in 176275 pass in 176290
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 176275 pass in 176290
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot  fail in 176281 pass in 176275
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 176281 pass in 176290
 test-amd64-amd64-xl-rtds      8 xen-boot                   fail pass in 176275

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 176275 like 175437
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 176275 like 175447
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 176275 never pass
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 176281 like 175447
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 176281 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 176281 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 176275 n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   40 days
Testing same since   176224  2023-01-26 22:14:43 Z    4 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-pair broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-step test-amd64-i386-pair host-install/dst_host(7)
broken-step test-amd64-i386-xl-qemut-win7-amd64 host-install(5)
broken-step test-amd64-i386-qemut-rhel6hvm-intel host-install(5)
broken-step test-amd64-i386-xl-qemut-ws16-amd64 host-install(5)
broken-job test-amd64-amd64-xl-rtds broken
broken-job build-armhf broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-job build-armhf broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry_safe has next pointing to the
    deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also note that the old code took a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 11:18:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 11:18:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487533.755186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoeM-0007fH-7F; Tue, 31 Jan 2023 11:18:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487533.755186; Tue, 31 Jan 2023 11:18:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoeM-0007eo-2j; Tue, 31 Jan 2023 11:18:06 +0000
Received: by outflank-mailman (input) for mailman id 487533;
 Tue, 31 Jan 2023 11:18:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fq0=54=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMoeK-0007cS-Fy
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 11:18:04 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ed27a3eb-a158-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 12:18:03 +0100 (CET)
Received: by mail-ej1-x634.google.com with SMTP id p26so29346203ejx.13
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 03:18:02 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 ov9-20020a170906fc0900b0087bdae9a1ebsm6712058ejb.94.2023.01.31.03.18.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 31 Jan 2023 03:18:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed27a3eb-a158-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wfQOzmI0ND5xgeMj5qf0FcEgmhRpEir/hocEHXP8Mt0=;
        b=EgKt5niaErUnbLb2f5cbcqiFhXkxAiZ2cz6UQ9fy0T03umMKUitexfvDGpQQ2+Od2R
         9WuW6sMRAOl4gyW2P7LjdIYl5l4KM5i8+nxh8bevmTgWnu6bBDrU6pBXnYfbOiLLGeo0
         c+rc4mx1EKihzVpaoBwotEdwhb1eWH7b10dWZIBr/gjCaPq63maEgKfnO8HABpLn+VEf
         7Vr5utbNvN/avCRrM8bL4LiI9Kck5xR+3ZXvAVlwXjzFrLY7Lxnmoxpku5TcCCv9cFmx
         BvSYHen3IbmDbhC+xexm1hsExG//UBfk536PBMxjREg4Q5h7zyLHTGbih3Yx8dMA5nOC
         8sUg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=wfQOzmI0ND5xgeMj5qf0FcEgmhRpEir/hocEHXP8Mt0=;
        b=T+MyDzAIcEo4T73BIJ+qmYJtKCptuHuOVEbvafYWEVHMfhH+a8EskDhjasWzf6b43z
         HUl3b2iJpEDsh/+GQw6JL/aFwS1xKFEMWFFl8sX10mZp5L3TpO59mWFmVfFwVGM+Q0nS
         IwmzrptHRZKwKa/dQyIPVNfMXdXUa1uhzxgsGbQVz14rwbOeouG4xTi4MF1JUbWxxg3s
         caCDBY/M3a1Lxii+3tMZMcsde7cuA2KkPl6KvPvOPv8YHYqAvGLYHChqmizCtY8d4qbC
         li334xLy0NJu0zOXIyq3Z01Jm8nvOSx8msJfW3Q/pqH0Jja/oAAgjlna1iNWADSAtlBp
         HGDg==
X-Gm-Message-State: AO0yUKXBV0+OCzuQmptlmhNCx0/MGd1dG4pam00YhA8T+tQ4CGpjEj58
	lqfdVsVqgqtcLjYbd3I2JHzhumacnbc=
X-Google-Smtp-Source: AK7set8o8MRDq1CxDIX+Nyn8bi9Gl0kzET374ToTSKGGTCT1h+X/i98ET16sXDhVWHtlipfLUpDMDQ==
X-Received: by 2002:a17:906:f143:b0:882:1b70:8967 with SMTP id gw3-20020a170906f14300b008821b708967mr11083308ejb.35.1675163882054;
        Tue, 31 Jan 2023 03:18:02 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v8 2/2] automation: update RISC-V smoke test
Date: Tue, 31 Jan 2023 13:17:55 +0200
Message-Id: <2ff69540fffe67880bb13a468295c593d0db8604.1675163330.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1675163330.git.oleksii.kurochko@gmail.com>
References: <cover.1675163330.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a check that the message 'Hello from C env' is present in the log
file, to make sure that the stack is set up and the C part of early
printk is working.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V8:
 - Set "needs: archlinux-current-gcc-riscv64-debug" in test.yaml
   for the RISC-V job, as CONFIG_EARLY_PRINTK is available only when
   CONFIG_DEBUG is enabled.
---
Changes in V7:
 - Fix dependency for qemu-smoke-riscv64-gcc job
---
Changes in V6:
 - Rename the container in test.yaml for .qemu-riscv64.
---
Changes in V5:
  - Nothing changed
---
Changes in V4:
  - Nothing changed
---
Changes in V3:
  - Nothing changed
  - All comments raised by Stefano on the Xen mailing list will be
    addressed in a separate patch outside this series.
---
 automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index afd80adfe1..ce543ef5c0 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -54,6 +54,19 @@
   tags:
     - x86_64
 
+.qemu-riscv64:
+  extends: .test-jobs-common
+  variables:
+    CONTAINER: archlinux:current-riscv64
+    LOGFILE: qemu-smoke-riscv64.log
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - x86_64
+
 .yocto-test:
   extends: .test-jobs-common
   script:
@@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
   needs:
     - debian-unstable-clang-debug
 
+qemu-smoke-riscv64-gcc:
+  extends: .qemu-riscv64
+  script:
+    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - archlinux-current-gcc-riscv64-debug
+
 # Yocto test jobs
 yocto-qemuarm64:
   extends: .yocto-test-arm64
diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
new file mode 100755
index 0000000000..e0f06360bc
--- /dev/null
+++ b/automation/scripts/qemu-smoke-riscv64.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+set -ex
+
+# Run the test
+rm -f smoke.serial
+set +e
+
+timeout -k 1 2 \
+qemu-system-riscv64 \
+    -M virt \
+    -smp 1 \
+    -nographic \
+    -m 2g \
+    -kernel binaries/xen \
+    |& tee smoke.serial
+
+set -e
+(grep -q "Hello from C env" smoke.serial) || exit 1
+exit 0
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 11:18:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 11:18:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487532.755181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoeL-0007ck-U9; Tue, 31 Jan 2023 11:18:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487532.755181; Tue, 31 Jan 2023 11:18:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoeL-0007cd-R9; Tue, 31 Jan 2023 11:18:05 +0000
Received: by outflank-mailman (input) for mailman id 487532;
 Tue, 31 Jan 2023 11:18:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fq0=54=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMoeJ-0007cK-SS
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 11:18:03 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec182ecf-a158-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 12:18:01 +0100 (CET)
Received: by mail-ej1-x633.google.com with SMTP id qw12so24657339ejc.2
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 03:18:01 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 ov9-20020a170906fc0900b0087bdae9a1ebsm6712058ejb.94.2023.01.31.03.17.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 31 Jan 2023 03:17:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec182ecf-a158-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=GEm/QtivQA3wincGDxWw4an3Uc5tv8Gv005JVo7FJtA=;
        b=dVYFJyGIczdKXDLKHNNwI0E588m3OFaIT3tAJgrpzi9Igin28ui2RP7uTWIL33LRpK
         HU6LTHg5AvKPR+is88rSC7eAZKks2ncEWFp4iQ/YU974sGkvLCapydUXNwYXRZIS1lAK
         CTsZRLlvd4aZDCkD/aEf/m8+pXB06ERVfKoYGrQfhglTYHXM5rHRqitsEoMjmI21+aI+
         P1rGfe4TCuiD6hnmNw7xXTfK5rNin4XZ9gwxlRz1Dgw6E4SZGCqnEsskSk9T4LDMl2Mx
         Isi+V0+tdzpHNJatsbtmZ6VaaK2cAoaek2ukiaSX9ZmR0TJgQMAYsUzDv7DYSlGBr6dl
         Ojxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=GEm/QtivQA3wincGDxWw4an3Uc5tv8Gv005JVo7FJtA=;
        b=K0rAVD9FlSYWWp5gAdHaZUfCJQM7ElibS7w2tp70ZDreRKYx04sK7beW2zErHHvj2u
         rJ5f+O2Yi6W7TgwAwiJyxSWstmVBIEavwUZSBEvFgAk7Rgh22J0S3tB0VwI2bGK7w82c
         ooljuJjCRpeft/2OONXQp9jwQEt/IKwYtENA3IpWZZKk/M4XoGZ3uEqyJ1cMB4DUNZ2N
         tLvzMcGSfcIHBZeQOxZ4TDeQbxqMH4csYzVAdZEuPsgyjMB08VeXBZfnHKJHflmJh2rD
         dkMgn37uJBKBykYIZDvIXrTZoU1Rct0yNwByJl3XpktN+VgbZR4Tos66RNgrqUvYsL51
         Ugqw==
X-Gm-Message-State: AFqh2kqFf32eF/E/qgbnTqfJNwQ2qO3pPZ1VqLivZeCX6ZZvt/PjeCnQ
	yI2X86tOy4E79oSUq7abVfowDHWkIW8=
X-Google-Smtp-Source: AMrXdXu3PY9LdnLt1HtquurHr6+C0kxfF6ouffDQ9Sbft7KapmplR8oN8XHvBg3GJmGx5o9IT0gfSg==
X-Received: by 2002:a17:906:c283:b0:86a:833d:e7d8 with SMTP id r3-20020a170906c28300b0086a833de7d8mr54335180ejz.17.1675163880154;
        Tue, 31 Jan 2023 03:18:00 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v8 0/2] Basic early_printk and smoke test implementation
Date: Tue, 31 Jan 2023 13:17:53 +0200
Message-Id: <cover.1675163330.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
- the minimal set of headers and the changes inside them.
- the SBI (RISC-V Supervisor Binary Interface) support necessary for a
  basic early_printk implementation.
- the code needed to set up the stack.
- an early_printk() function that prints plain strings only.
- a RISC-V smoke test which checks that the "Hello from C env" message
  is present in serial.tmp
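
The pass/fail condition of that smoke test reduces to a grep over the
captured serial log. A minimal sketch of the check (the serial.tmp name is
taken from the description above; the canned log below stands in for a real
QEMU run):

```shell
# Stand-in for the serial output that a QEMU boot of Xen would produce.
printf 'Xen starting...\nHello from C env\n' > serial.tmp

# The smoke test passes iff the early_printk banner made it to the log.
if grep -q "Hello from C env" serial.tmp; then
    echo "smoke test: PASS"
else
    echo "smoke test: FAIL"
fi
```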

The patch series is rebased on top of the patch "include/types: move
stddef.h-kind types to common header" [1]

[1] https://lore.kernel.org/xen-devel/5a0a9e2a-c116-21b5-8081-db75fe4178d7@suse.com/

---
Changes in V8:
 - Set "needs: archlinux-current-gcc-riscv64-debug" in test.yaml
   for the RISCV job, as CONFIG_EARLY_PRINTK is available only when
   CONFIG_DEBUG is enabled.
---
Changes in V7:
 - Fix dependency for qemu-smoke-riscv64-gcc job
---
Changes in V6:
 - Rename container name in test.yaml for .qemu-riscv64.
---
Changes in V5:
  - Nothing changed
---
Changes in V4:
  - Nothing changed
---
Changes in V3:
  - Nothing changed
  - All comments mentioned by Stefano on the Xen mailing list will be
    addressed in a separate patch outside this patch series.
---

Oleksii Kurochko (2):
  xen/riscv: introduce early_printk basic stuff
  automation: add RISC-V smoke test

 automation/gitlab-ci/test.yaml            | 20 ++++++++++++++
 automation/scripts/qemu-smoke-riscv64.sh  | 20 ++++++++++++++
 xen/arch/riscv/Kconfig.debug              |  5 ++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
 xen/arch/riscv/setup.c                    |  4 +++
 7 files changed, 95 insertions(+)
 create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 11:18:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 11:18:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487534.755192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoeM-0007km-JZ; Tue, 31 Jan 2023 11:18:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487534.755192; Tue, 31 Jan 2023 11:18:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMoeM-0007k1-B0; Tue, 31 Jan 2023 11:18:06 +0000
Received: by outflank-mailman (input) for mailman id 487534;
 Tue, 31 Jan 2023 11:18:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fq0=54=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMoeK-0007cK-IS
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 11:18:04 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec985ac1-a158-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 12:18:01 +0100 (CET)
Received: by mail-ej1-x631.google.com with SMTP id gr7so15966529ejb.5
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 03:18:01 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 ov9-20020a170906fc0900b0087bdae9a1ebsm6712058ejb.94.2023.01.31.03.18.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 31 Jan 2023 03:18:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec985ac1-a158-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Y085K0nUfCuQzxisU9i9l8wFm3KxLDa0CMqvfqi8NAE=;
        b=KgQPuRwPRCeN2HgTmzgky93xrPust6oWo1CdKcO6O3S3DH5bp9E1lpcdSSfoGXzUB9
         w1+yqbvvFUgkkXhLVOhLD5Ol4iSHKjtTNObxWBXQ7eE2IJSb/rwK03hMZZwVpc4PVPRc
         Tse6Plnj1bun29WWsiEtmdWOKh30egGnBvGm+TGPqhQYdq+anYJTy6cYcD68TSnfGdjR
         hWd9pwwEhAEn/NABNkfN9+Au9Hlk73ArhYlSalz2AMLQV9r5tnyuhDxycd6Zmi8ZTinu
         bc76KBy64tODZTmMedvLllkNOWpnrYtSYaQbZZ8dzPizBiVcPT35bzJg2yACD08iEE9v
         JYhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Y085K0nUfCuQzxisU9i9l8wFm3KxLDa0CMqvfqi8NAE=;
        b=Le9EJ2GPkju6EPcvPEm2O8qxAVpAMRK14SmCyQGLrC6XPotlFGQCX6Ci0PLtpTi1w6
         4Os/mpqjWbu2zvVM3DHYTGutQlzoKe3ZqgDVhOgg9rm5ZnH+8nEODndVyCP5k3tt74DO
         dN48xrFpr9nWYmtnbutzPAyo4d9YQTUhL3nGC/+Oaa7mOE/+FzRS5jdRJjUfbXmEjePZ
         zWwAUp/ncds68j/4Uc4o1xifOq0ixCFeRbpd0Z/RIRkPZjcPUuYGXMxphxYJBkolGq3w
         pMXKf9E457gJZlkuE09FI/lsj0zUJJ8MZXAFY+tqaWjTcDJ1CK1Cwn1deG3XFafxz3hP
         AI0Q==
X-Gm-Message-State: AO0yUKWlkqRqjgeAfsF6QETP2EKwptcU38aC1NO+0kJh9LmmMADxCBYr
	/UXpYuMIEhUYFW3wo3lH0SzZruwA9UA=
X-Google-Smtp-Source: AK7set8/orIAz/ZORgXx9i6Y45+PTLUtH5yyIZPrFzjlr7xQpHGXRGM7Lcv1AvPg5mxd6chhhMBkhg==
X-Received: by 2002:a17:906:bccc:b0:870:3c70:8c8d with SMTP id lw12-20020a170906bccc00b008703c708c8dmr2893771ejb.17.1675163881159;
        Tue, 31 Jan 2023 03:18:01 -0800 (PST)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>,
	Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: [PATCH v8 1/2] xen/riscv: introduce early_printk basic stuff
Date: Tue, 31 Jan 2023 13:17:54 +0200
Message-Id: <06c2c36bd68b2534c757dc4087476e855253680a.1675163330.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <cover.1675163330.git.oleksii.kurochko@gmail.com>
References: <cover.1675163330.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Because printk() relies on a serial driver (such as the ns16550 driver),
and drivers require working virtual memory (ioremap()), there is no
print functionality early in Xen boot.

This patch introduces the basic parts of early_printk functionality,
which are enough to print 'hello from C environment'.

Originally early_printk.{c,h} were introduced by Bobby Eshleman
(https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
but some functionality was changed:
the early_printk() function differs from the original because common
code is not built yet, so there is no vscnprintf().

This commit adds an early printk implementation built on the SBI
putchar call.

As sbi_console_putchar() is already planned for deprecation, it is
used only temporarily and will be removed or reworked once a real
UART driver is ready.

Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
---
Changes in V8:
    - Nothing was changed
---
Changes in V7:
    - Nothing was changed
---
Changes in V6:
    - Remove __riscv_cmodel_medany check from early_printk.c
---
Changes in V5:
    - Code style fixes
    - Change the #error message for the case where __riscv_cmodel_medany
      isn't defined
---
Changes in V4:
    - Remove "depends on RISCV*" from Kconfig.debug: as it is located in an
      arch-specific folder, the RISCV configs should be enabled by default.
    - Add "ifdef __riscv_cmodel_medany" to be sure that PC-relative addressing
      is used, as early_*() functions can be called from head.S with the MMU
      off and before relocation (if that is needed at all for RISC-V)
    - Fix code style
---
Changes in V3:
    - reorder headers alphabetically
    - merge the changes related to the start_xen() function from "[PATCH v2
      7/8] xen/riscv: print hello message from C env" into this patch
    - remove unneeded parentheses in the definition of STACK_SIZE
---
Changes in V2:
    - introduce STACK_SIZE define.
    - use consistent padding between instruction mnemonic and operand(s)
---
 xen/arch/riscv/Kconfig.debug              |  5 ++++
 xen/arch/riscv/Makefile                   |  1 +
 xen/arch/riscv/early_printk.c             | 33 +++++++++++++++++++++++
 xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
 xen/arch/riscv/setup.c                    |  4 +++
 5 files changed, 55 insertions(+)
 create mode 100644 xen/arch/riscv/early_printk.c
 create mode 100644 xen/arch/riscv/include/asm/early_printk.h

diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
index e69de29bb2..608c9ff832 100644
--- a/xen/arch/riscv/Kconfig.debug
+++ b/xen/arch/riscv/Kconfig.debug
@@ -0,0 +1,5 @@
+config EARLY_PRINTK
+    bool "Enable early printk"
+    default DEBUG
+    help
+      Enables early printk debug messages
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index fd916e1004..1a4f1a6015 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
new file mode 100644
index 0000000000..b66a11f1bc
--- /dev/null
+++ b/xen/arch/riscv/early_printk.c
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V early printk using SBI
+ *
+ * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
+ */
+#include <asm/early_printk.h>
+#include <asm/sbi.h>
+
+/*
+ * TODO:
+ *   sbi_console_putchar is already planned for deprecation
+ *   so it should be reworked to use UART directly.
+ */
+void early_puts(const char *s, size_t nr)
+{
+    while ( nr-- > 0 )
+    {
+        if ( *s == '\n' )
+            sbi_console_putchar('\r');
+        sbi_console_putchar(*s);
+        s++;
+    }
+}
+
+void early_printk(const char *str)
+{
+    while ( *str )
+    {
+        early_puts(str, 1);
+        str++;
+    }
+}
diff --git a/xen/arch/riscv/include/asm/early_printk.h b/xen/arch/riscv/include/asm/early_printk.h
new file mode 100644
index 0000000000..05106e160d
--- /dev/null
+++ b/xen/arch/riscv/include/asm/early_printk.h
@@ -0,0 +1,12 @@
+#ifndef __EARLY_PRINTK_H__
+#define __EARLY_PRINTK_H__
+
+#include <xen/early_printk.h>
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_printk(const char *str);
+#else
+static inline void early_printk(const char *s) {}
+#endif
+
+#endif /* __EARLY_PRINTK_H__ */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 13e24e2fe1..d09ffe1454 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,12 +1,16 @@
 #include <xen/compile.h>
 #include <xen/init.h>
 
+#include <asm/early_printk.h>
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
 void __init noreturn start_xen(void)
 {
+    early_printk("Hello from C env\n");
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 11:21:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 11:21:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487544.755210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMohL-0001ek-Ue; Tue, 31 Jan 2023 11:21:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487544.755210; Tue, 31 Jan 2023 11:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMohL-0001ed-Rm; Tue, 31 Jan 2023 11:21:11 +0000
Received: by outflank-mailman (input) for mailman id 487544;
 Tue, 31 Jan 2023 11:21:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fq0=54=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMohK-0001eT-M9
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 11:21:10 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 580956ae-a159-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 12:21:02 +0100 (CET)
Received: by mail-ej1-x629.google.com with SMTP id k4so35108814eje.1
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 03:21:02 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 rh16-20020a17090720f000b0087329ff593fsm8283408ejb.144.2023.01.31.03.21.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 31 Jan 2023 03:21:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 580956ae-a159-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=1EU17L5Dv+ou0u3yO5I2dlzjouARcKMNMmRCwy0uBFE=;
        b=I1j7JHYX6G3zWQ2g6WpFnUJgdfNCGg6Dlox3OeMLJbcRAWv2ed+C3ztCLd5iK4PMB9
         znSBv9MUrpQupiFlqJwZ2+CyGN1MDNF9hWTvVrbz6QkSfKCQut3IR61uJot6Db3tRA6N
         w0jUGuXZg/26Y5xErRe05gM7dQ67elUtf+OEdOmiUgrSZakQgy8L5e0k0ruGGOBdaMcH
         1J4/yrt+WX9rXmwtJqY3nYJ0aBedk6FO1HUVOA6WBf5BDjX8pv2oOz0amWB8RjEB8+kF
         BMwOdPq/JhyMJJWGOq95XWFFWFWCD4adDW0UPmII2dqakO1cjNCXagD7z84oBoNdGsG1
         zHpg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=1EU17L5Dv+ou0u3yO5I2dlzjouARcKMNMmRCwy0uBFE=;
        b=YpL4W9FDfuWBCSoKu1NKWclzdKaBln/HpaHlgccE9R9R9YJ+cKZqoij+nfZgKkzN0u
         awZeW26vFDKtLrL6HA+Dx9yERZbjUTKl3GzdalYs65b4ol0h/rAQK4Mhtc1ciby4zZ4H
         1r2mY/Mpd76skza0/pxKxtyMR/movQlAvtb+L2eVaR1/gT1p+3ESk2zGUQpkNIaj8+mG
         aRvowEfTUneUX53axQNBIJ5UiTHCsQA8ixd3a3sM1HDI/8rfZNFrBS5Sh5HBr75/BRgo
         x3XoUIPYTQr5Q+8yPFvgLFmYYNwlsiE8sFu4VpPWM64+SHqpRIZx6T8iL5H+YDoUTGQ6
         qofA==
X-Gm-Message-State: AO0yUKUfHaeHkHjkbl2UyeJjOAT3HGL5olzg73y7k42CTE2e4r+b/UWZ
	a0EFzih38PIcP+YfG3jebdQ=
X-Google-Smtp-Source: AK7set+7i1iruwlpsuqr6KpszXv55af3TSVc6M9tdGBSSp+zMStA9mJyUhAB7FjTr3NcVW9KQkGHqQ==
X-Received: by 2002:a17:906:4793:b0:87b:db62:d659 with SMTP id cw19-20020a170906479300b0087bdb62d659mr18063417ejc.19.1675164061688;
        Tue, 31 Jan 2023 03:21:01 -0800 (PST)
Message-ID: <2d0bcfe525ff26f9487efc906b0db29e4ff7a6b3.camel@gmail.com>
Subject: Re: [PATCH v7 2/2] automation: add RISC-V smoke test
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Gianluca
 Guida <gianluca@rivosinc.com>, Doug Goldstein <cardoe@cardoe.com>, Alistair
 Francis <alistair.francis@wdc.com>
Date: Tue, 31 Jan 2023 13:21:00 +0200
In-Reply-To: <alpine.DEB.2.22.394.2301271013460.1978264@ubuntu-linux-20-04-desktop>
References: <cover.1674819203.git.oleksii.kurochko@gmail.com>
	 <e2d722a5f3fffc5708c1cc99efad63ab04d25ec3.1674819203.git.oleksii.kurochko@gmail.com>
	 <alpine.DEB.2.22.394.2301271013460.1978264@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Fri, 2023-01-27 at 10:14 -0800, Stefano Stabellini wrote:
> On Fri, 27 Jan 2023, Oleksii Kurochko wrote:
> > Add a check that the message 'Hello from C env' is present in the
> > log file, to be sure that the stack is set up and the C part of
> > early printk is working.
> >
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > ---
> > Changes in V7:
> >  - Fix dependency for qemu-smoke-riscv64-gcc job
> > ---
> > Changes in V6:
> >  - Rename container name in test.yaml for .qemu-riscv64.
> > ---
> > Changes in V5:
> >   - Nothing changed
> > ---
> > Changes in V4:
> >   - Nothing changed
> > ---
> > Changes in V3:
> >   - Nothing changed
> >   - All comments mentioned by Stefano on the Xen mailing list will be
> >     addressed in a separate patch outside this patch series.
> > ---
> >  automation/gitlab-ci/test.yaml           | 20 ++++++++++++++++++++
> >  automation/scripts/qemu-smoke-riscv64.sh | 20 ++++++++++++++++++++
> >  2 files changed, 40 insertions(+)
> >  create mode 100755 automation/scripts/qemu-smoke-riscv64.sh
> >
> > diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> > index afd80adfe1..4dbe1b8af7 100644
> > --- a/automation/gitlab-ci/test.yaml
> > +++ b/automation/gitlab-ci/test.yaml
> > @@ -54,6 +54,19 @@
> >    tags:
> >      - x86_64
> >
> > +.qemu-riscv64:
> > +  extends: .test-jobs-common
> > +  variables:
> > +    CONTAINER: archlinux:current-riscv64
> > +    LOGFILE: qemu-smoke-riscv64.log
> > +  artifacts:
> > +    paths:
> > +      - smoke.serial
> > +      - '*.log'
> > +    when: always
> > +  tags:
> > +    - x86_64
> > +
> >  .yocto-test:
> >    extends: .test-jobs-common
> >    script:
> > @@ -234,6 +247,13 @@ qemu-smoke-x86-64-clang-pvh:
> >    needs:
> >      - debian-unstable-clang-debug
> >
> > +qemu-smoke-riscv64-gcc:
> > +  extends: .qemu-riscv64
> > +  script:
> > +    - ./automation/scripts/qemu-smoke-riscv64.sh 2>&1 | tee ${LOGFILE}
> > +  needs:
> > +    - .gcc-riscv64-cross-build
> >
> This is wrong, I think it should be: archlinux-current-gcc-riscv64 ?
>
Thanks for noticing.

You are right.
I changed it to archlinux-current-gcc-riscv64-debug as
CONFIG_EARLY_PRINTK is available only when CONFIG_DEBUG is enabled.

Please look at the new version of the patch series:
https://lore.kernel.org/xen-devel/cover.1675163330.git.oleksii.kurochko@gmail.com/T/#t
>
> >  # Yocto test jobs
> >  yocto-qemuarm64:
> >    extends: .yocto-test-arm64
> > diff --git a/automation/scripts/qemu-smoke-riscv64.sh b/automation/scripts/qemu-smoke-riscv64.sh
> > new file mode 100755
> > index 0000000000..e0f06360bc
> > --- /dev/null
> > +++ b/automation/scripts/qemu-smoke-riscv64.sh
> > @@ -0,0 +1,20 @@
> > +#!/bin/bash
> > +
> > +set -ex
> > +
> > +# Run the test
> > +rm -f smoke.serial
> > +set +e
> > +
> > +timeout -k 1 2 \
> > +qemu-system-riscv64 \
> > +    -M virt \
> > +    -smp 1 \
> > +    -nographic \
> > +    -m 2g \
> > +    -kernel binaries/xen \
> > +    |& tee smoke.serial
> > +
> > +set -e
> > +(grep -q "Hello from C env" smoke.serial) || exit 1
> > +exit 0
> > --
> > 2.39.0
> >



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 11:43:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 11:43:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487556.755220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMp2L-0004aJ-MA; Tue, 31 Jan 2023 11:42:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487556.755220; Tue, 31 Jan 2023 11:42:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMp2L-0004aC-JS; Tue, 31 Jan 2023 11:42:53 +0000
Received: by outflank-mailman (input) for mailman id 487556;
 Tue, 31 Jan 2023 11:42:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMp2K-0004a6-5m
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 11:42:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMp2I-0000Ws-Sc; Tue, 31 Jan 2023 11:42:50 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.14.74]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMp2I-0008A1-KV; Tue, 31 Jan 2023 11:42:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=cYbddL3CURU9jdq9J7FCfPemIrbfq0pmftxjWKGQ/ZI=; b=wYvuRrvUUllwl+hlSoOYmu4Xai
	4RXlNO4qcCusshnaSe0roGVQ5Q4XrUMpxP8+2SR7JE59ruCiYhcCnkIqgNK/JnsvoUBQTndZfYy2W
	EWxnv/Bc4e2Jb2pzXHWbQV0YnSRlngoqXUaBCpKrSFGfvkA/qsZwSQSwi32ay+5v8NdE=;
Message-ID: <12f4a315-19dc-2462-7bbf-f02408b09081@xen.org>
Date: Tue, 31 Jan 2023 11:42:47 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v8 1/2] xen/riscv: introduce early_printk basic stuff
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1675163330.git.oleksii.kurochko@gmail.com>
 <06c2c36bd68b2534c757dc4087476e855253680a.1675163330.git.oleksii.kurochko@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <06c2c36bd68b2534c757dc4087476e855253680a.1675163330.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Oleksii,

On 31/01/2023 11:17, Oleksii Kurochko wrote:
> Because printk() relies on a serial driver (such as the ns16550 driver),
> and drivers require working virtual memory (ioremap()), there is no
> print functionality early in Xen boot.
>
> This patch introduces the basic parts of early_printk functionality,
> which are enough to print 'hello from C environment'.
>
> Originally early_printk.{c,h} were introduced by Bobby Eshleman
> (https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384)
> but some functionality was changed:
> the early_printk() function differs from the original because common
> code is not built yet, so there is no vscnprintf().
>
> This commit adds an early printk implementation built on the SBI
> putchar call.
>
> As sbi_console_putchar() is already planned for deprecation, it is
> used only temporarily and will be removed or reworked once a real
> UART driver is ready.
> 
> Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> ---
> Changes in V8:
>      - Nothing was changed
> ---
> Changes in V7:
>      - Nothing was changed
> ---
> Changes in V6:
>      - Remove __riscv_cmodel_medany check from early_printk.c

This discussion is still not resolved. I will echo Jan's point [1] and
expand on it. There is limited point in sending a new version with
small changes if the main open points are not addressed.

Can you please look at settling the open issues first, and then send a
new version?

Cheers,

[1] 1d63dd9a-77df-4fca-e35b-886ba19fb35d@suse.com

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 11:45:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 11:45:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487563.755231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMp4T-0005DV-6U; Tue, 31 Jan 2023 11:45:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487563.755231; Tue, 31 Jan 2023 11:45:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMp4T-0005DO-3f; Tue, 31 Jan 2023 11:45:05 +0000
Received: by outflank-mailman (input) for mailman id 487563;
 Tue, 31 Jan 2023 11:45:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DaM=54=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMp4Q-0005DG-Vi
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 11:45:03 +0000
Received: from mail-vk1-xa34.google.com (mail-vk1-xa34.google.com
 [2607:f8b0:4864:20::a34])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b1143f95-a15c-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 12:45:00 +0100 (CET)
Received: by mail-vk1-xa34.google.com with SMTP id 22so7242553vkn.2
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 03:45:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1143f95-a15c-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=GTnTC1/eN7LgIUNbmysm/3SunNlHlzL90igccppt9Mo=;
        b=Xz53LI/Zib9wjFUBACOJUXmbVxauT6CK1Bsoh6/U2igSbQ+bTnV1kLTgiO0OcHPFMH
         7XSnWxKTuMQUUZxoiT/CeIuCavB2rRAHaz1W4cBQmaQxscRDJ+67K00Fy7lhfdgnjtrs
         nmYSVSYT151L/dp4mhn16hDsqhYzqmHxvaZL04um2OQjfQRXc1VDuMQnU1JX1HcITbSb
         fMHIwSw7raQ4m7NBxcH/Vs/Ans33XezFRD+521Z/aecarFXIpaouG8kmvN6nCDnF0OR0
         Hq8LKG+jQjSVn1jMePJ7RwQAquiN5QYODcNjZi2XSwTJZiMpuB5Jdd2iTYqw06HTC9BU
         PIxQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=GTnTC1/eN7LgIUNbmysm/3SunNlHlzL90igccppt9Mo=;
        b=fkwnEsi/hv2hfnH+pd7TxQTaX5NufStLhhcycOsWL+yVVohxQFCwSZQbqQJKEusInO
         o7TzVB4BsS7xTL8Q8fxMN/0QUbn3Ozz+sNEyMJcdTkZj0vK8EIPiqRAYy3oLwFQoAeNE
         9WCesDR8LWYdjO+lQm5UcSiS4bjMlG7WMyJoY1Z/REYkBPj5vnS6e8h4cT46O6ZWdTx6
         dhsoJ098iyCRuV4eca6UNYz6VZag3B0hXlGqUgV8Iix0QspDAsjngzwNl/p36k+pKm+x
         2R0Z+ZJhGfwf0uXOiuPeZpPu6JgyOKXZ8yvc4dIVMpuyrSTB0IHBXr3tpQHazl6zBbVa
         eObA==
X-Gm-Message-State: AO0yUKXTtX08B8kvGsrIaWDHy1En7kH19M0QuKcZoTvfoKnLgiIXHOQS
	y2X0nGq0nOA5ScPmRI4vWIKidtv7xNokkgLIh/M=
X-Google-Smtp-Source: AK7set935Jshae+cSBfnOisrBca+6lWyXnqV+EaR3pM3ZxvMvgxLBjogAF5Ukio4TDYU23QB2L+3muhcLODa2X4ynYA=
X-Received: by 2002:ac5:c98a:0:b0:3ea:3000:8627 with SMTP id
 e10-20020ac5c98a000000b003ea30008627mr1142447vkm.7.1675165499602; Tue, 31 Jan
 2023 03:44:59 -0800 (PST)
MIME-Version: 1.0
References: <cover.1674819203.git.oleksii.kurochko@gmail.com>
 <06c2c36bd68b2534c757dc4087476e855253680a.1674819203.git.oleksii.kurochko@gmail.com>
 <f5cd1bfb116bfcc86fc2848df7eead05cd1a24c0.camel@gmail.com>
In-Reply-To: <f5cd1bfb116bfcc86fc2848df7eead05cd1a24c0.camel@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 31 Jan 2023 21:44:33 +1000
Message-ID: <CAKmqyKMGiDiPRZBekdKan=+YduSmkB2DoWo5btrtVQ8nS3KMAg@mail.gmail.com>
Subject: Re: [PATCH v7 1/2] xen/riscv: introduce early_printk basic stuff
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Bob Eshleman <bobbyeshleman@gmail.com>, 
	Alistair Francis <alistair.francis@wdc.com>, Julien Grall <julien@xen.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Connor Davis <connojdavis@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jan 28, 2023 at 12:15 AM Oleksii <oleksii.kurochko@gmail.com> wrote:
>
> Hi Alistair, Bobby and community,
>
> I would like to ask your help with the following check:
> +/*
> + * early_*() can be called from head.S with MMU-off.
> + *
> + * The following requirements should be honoured for early_*() to
> + * work correctly:
> + *    It should use PC-relative addressing for accessing symbols.
> + *    To achieve that GCC cmodel=medany should be used.
> + */
> +#ifndef __riscv_cmodel_medany
> +#error "early_*() can be called from head.S with MMU-off"
> +#endif

I have never seen a check like this before. I don't really understand
what it's looking for; if the linker were unable to call early_*(), I
would expect it to throw an error. I'm not sure what this is adding.

I think this is safe to remove.

Alistair

>
> Please take a look at the following messages and help me to decide if
> the check mentioned above should be in early_printk.c or not:
> [1]
> https://lore.kernel.org/xen-devel/599792fa-b08c-0b1e-10c1-0451519d9e4a@xen.org/
> [2]
> https://lore.kernel.org/xen-devel/0ec33871-96fa-bd9f-eb1b-eb37d3d7c982@xen.org/
>
> Thanks in advance.
>
> ~ Oleksii
>
> On Fri, 2023-01-27 at 13:39 +0200, Oleksii Kurochko wrote:
> > Because printk() relies on a serial driver (like the ns16550 driver)
> > and drivers require working virtual memory (ioremap()), there is no
> > print functionality early in Xen boot.
> >
> > The patch introduces the basics of early_printk functionality,
> > which is enough to print "hello from C environment".
> >
> > Originally early_printk.{c,h} was introduced by Bobby Eshleman
> > (
> > https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1a
> > ab71384)
> > but some functionality was changed:
> > the early_printk() function was changed in comparison with the
> > original, as common code isn't being built yet, so there is no
> > vscnprintf.
> >
> > This commit adds early printk implementation built on the putc SBI
> > call.
> >
> > As sbi_console_putchar() is already planned for deprecation,
> > it is used only temporarily now and will be removed or reworked
> > once a real UART driver is ready.
> >
> > Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> > ---
> > Changes in V7:
> >     - Nothing was changed
> > ---
> > Changes in V6:
> >     - Remove __riscv_cmodel_medany check from early_printk.c
> > ---
> > Changes in V5:
> >     - Code style fixes
> >     - Change the #error message for the case where
> >       __riscv_cmodel_medany isn't defined
> > ---
> > Changes in V4:
> >     - Remove "depends on RISCV*" from Kconfig.debug as it is located in
> >       an arch-specific folder, so by default RISCV configs should be
> >       enabled.
> >     - Add "ifdef __riscv_cmodel_medany" to be sure that PC-relative
> >       addressing is used, as early_*() functions can be called from
> >       head.S with the MMU off and before relocation (if that is needed
> >       at all for RISC-V)
> >     - fix code style
> > ---
> > Changes in V3:
> >     - reorder headers in alphabetical order
> >     - merge changes related to start_xen() function from "[PATCH v2
> > 7/8]
> >       xen/riscv: print hello message from C env" to this patch
> >     - remove unneeded parentheses in definition of STACK_SIZE
> > ---
> > Changes in V2:
> >     - introduce STACK_SIZE define.
> >     - use consistent padding between instruction mnemonic and
> > operand(s)
> > ---
> >  xen/arch/riscv/Kconfig.debug              |  5 ++++
> >  xen/arch/riscv/Makefile                   |  1 +
> >  xen/arch/riscv/early_printk.c             | 33
> > +++++++++++++++++++++++
> >  xen/arch/riscv/include/asm/early_printk.h | 12 +++++++++
> >  xen/arch/riscv/setup.c                    |  4 +++
> >  5 files changed, 55 insertions(+)
> >  create mode 100644 xen/arch/riscv/early_printk.c
> >  create mode 100644 xen/arch/riscv/include/asm/early_printk.h
> >
> > diff --git a/xen/arch/riscv/Kconfig.debug
> > b/xen/arch/riscv/Kconfig.debug
> > index e69de29bb2..608c9ff832 100644
> > --- a/xen/arch/riscv/Kconfig.debug
> > +++ b/xen/arch/riscv/Kconfig.debug
> > @@ -0,0 +1,5 @@
> > +config EARLY_PRINTK
> > +    bool "Enable early printk"
> > +    default DEBUG
> > +    help
> > +      Enables early printk debug messages
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index fd916e1004..1a4f1a6015 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,3 +1,4 @@
> > +obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> >  obj-$(CONFIG_RISCV_64) += riscv64/
> >  obj-y += sbi.o
> >  obj-y += setup.o
> > diff --git a/xen/arch/riscv/early_printk.c
> > b/xen/arch/riscv/early_printk.c
> > new file mode 100644
> > index 0000000000..b66a11f1bc
> > --- /dev/null
> > +++ b/xen/arch/riscv/early_printk.c
> > @@ -0,0 +1,33 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * RISC-V early printk using SBI
> > + *
> > + * Copyright (C) 2021 Bobby Eshleman <bobbyeshleman@gmail.com>
> > + */
> > +#include <asm/early_printk.h>
> > +#include <asm/sbi.h>
> > +
> > +/*
> > + * TODO:
> > + *   sbi_console_putchar is already planned for deprecation
> > + *   so it should be reworked to use UART directly.
> > +*/
> > +void early_puts(const char *s, size_t nr)
> > +{
> > +    while ( nr-- > 0 )
> > +    {
> > +        if ( *s == '\n' )
> > +            sbi_console_putchar('\r');
> > +        sbi_console_putchar(*s);
> > +        s++;
> > +    }
> > +}
> > +
> > +void early_printk(const char *str)
> > +{
> > +    while ( *str )
> > +    {
> > +        early_puts(str, 1);
> > +        str++;
> > +    }
> > +}
> > diff --git a/xen/arch/riscv/include/asm/early_printk.h
> > b/xen/arch/riscv/include/asm/early_printk.h
> > new file mode 100644
> > index 0000000000..05106e160d
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/early_printk.h
> > @@ -0,0 +1,12 @@
> > +#ifndef __EARLY_PRINTK_H__
> > +#define __EARLY_PRINTK_H__
> > +
> > +#include <xen/early_printk.h>
> > +
> > +#ifdef CONFIG_EARLY_PRINTK
> > +void early_printk(const char *str);
> > +#else
> > +static inline void early_printk(const char *s) {};
> > +#endif
> > +
> > +#endif /* __EARLY_PRINTK_H__ */
> > diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> > index 13e24e2fe1..d09ffe1454 100644
> > --- a/xen/arch/riscv/setup.c
> > +++ b/xen/arch/riscv/setup.c
> > @@ -1,12 +1,16 @@
> >  #include <xen/compile.h>
> >  #include <xen/init.h>
> >
> > +#include <asm/early_printk.h>
> > +
> >  /* Xen stack for bringing up the first CPU. */
> >  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
> >      __aligned(STACK_SIZE);
> >
> >  void __init noreturn start_xen(void)
> >  {
> > +    early_printk("Hello from C env\n");
> > +
> >      for ( ;; )
> >          asm volatile ("wfi");
> >
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 11:46:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 11:46:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487574.755241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMp5O-00060L-IR; Tue, 31 Jan 2023 11:46:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487574.755241; Tue, 31 Jan 2023 11:46:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMp5O-00060E-DK; Tue, 31 Jan 2023 11:46:02 +0000
Received: by outflank-mailman (input) for mailman id 487574;
 Tue, 31 Jan 2023 11:46:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMp5N-000604-Cd; Tue, 31 Jan 2023 11:46:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMp5N-0000bP-Bd; Tue, 31 Jan 2023 11:46:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMp5M-0001eq-TZ; Tue, 31 Jan 2023 11:46:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMp5M-00057y-T2; Tue, 31 Jan 2023 11:46:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2EJ+GBa0FadW7CIc2oKLwgHTPoxgOaEwvhZw3V5o63A=; b=4WYWau4d4cwJ4qplC2uQqQPcWd
	b1NVeVySOFn27aGR0N9XAxe/52ig/yzA4lJ4Zg2Hdgb1xw6ATmgGPFm3mNZJBp2i3k7o23pIC6aPD
	czF6tvuUxy86/03i2gMBgqbc/N0Q8Tam5NzAbC+wDbLa68yk2YtzHI1L2MRE3vYwAiv0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176294-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176294: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 11:46:00 +0000

flight 176294 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176294/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    4 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (the default value),
    which causes kernel_zimage_place() to treat the binary (contained within the
    uImage) as a position independent executable and load it at an incorrect
    address.
    
    The correct approach would be to read "uimage.load" and set
    info->zimage.start. This will ensure that the binary is loaded at the
    correct address. Also, read "uimage.ep" and set info->entry (i.e. the
    kernel entry address).
    
    If the user provides a load address (i.e. "uimage.load") of 0x0, then the
    image is treated as a position independent executable. Xen can load such an
    image at any address it considers appropriate. A position independent
    executable cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Earlier, for arm32 and arm64 platforms, Xen ignored the load and entry
    point addresses set in the uImage header. With this commit, Xen will use
    them, making its behavior consistent with uboot for uImage headers.
    
    Users who want to use Xen with statically partitioned domains can provide
    non-zero load and entry addresses for the dom0/domU kernel. The load and
    entry addresses provided must be within the memory region allocated by Xen.
    
    A deviation from the uboot behaviour is that we consider a load address of
    0x0 to denote that the image supports position independent execution. This
    makes the behaviour consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 11:49:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 11:49:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487580.755250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMp8r-0006ex-02; Tue, 31 Jan 2023 11:49:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487580.755250; Tue, 31 Jan 2023 11:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMp8q-0006eq-Td; Tue, 31 Jan 2023 11:49:36 +0000
Received: by outflank-mailman (input) for mailman id 487580;
 Tue, 31 Jan 2023 11:49:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DaM=54=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMp8p-0006ek-Dc
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 11:49:35 +0000
Received: from mail-vs1-xe36.google.com (mail-vs1-xe36.google.com
 [2607:f8b0:4864:20::e36])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 53583c01-a15d-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 12:49:33 +0100 (CET)
Received: by mail-vs1-xe36.google.com with SMTP id a24so13658195vsl.2
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 03:49:33 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53583c01-a15d-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=76fBGWc5LJRCSdEAxjHz2pXIwr0W3yh0xSvvo/l/+Zs=;
        b=b3lyt7dHGOeAhWN5ELVMnyAvZfFtyDcs2rcLWbBPLeG3fGqQ+JXOc+/eLXWL6y6aoQ
         uKzAOYIX0Un+2H1VJIJ9mSFrPShwnF3XAoRCL+KpDjNmAaXcoXFGdNPe79iv3lME+AEt
         h9Ubzmyz4eq89JCwW1DcSS5JpBuqhNzzjuzODFl2hRfP8KjQ5ne7oubbSkaV2VuuzgTA
         xImBJOdUmnMCLL1zg2SsEFD3AngXCd8TKF0e8mqy6LVFxfCqIzrRDbDKodft++cHtVD4
         AgKiXg5A/f1Zgu7r7xIQ5I/psY5A/kxufiHZUk1T0CWB3xOHihQZ1zi3qGqJjppd+dUm
         g59Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=76fBGWc5LJRCSdEAxjHz2pXIwr0W3yh0xSvvo/l/+Zs=;
        b=bVfO+NCTOSKiCzyW0KyKsTdzAbji2A7nGmyPCbu5c/u6PPV6OUlkUloYODaqXkYfNk
         4/TWVZhOYKARlG7L79Ch6B0orfOlGfgc6izVTWZi9dlYlG2GhC6+kRk2FBpCHqeWs0Ay
         kiYjLWLd0lx4OZ9xrG7fkAwqVotW5EIpaN9edtlnfUSrHL3Em8DBw5kRlIxSKIVCFcbt
         2gqswFdwxRxAsUXgfk9ukRFWsET++hA7vps+fPUmJr3jOVsvjqmbJYIRWF8xyio7SkaL
         oSnE8FEY8aQsMHQImpf3r/dlHArDz7jAyTI2fx7GBkAfqf6tLLycyo+xuzlkVhwqtrk3
         orHw==
X-Gm-Message-State: AO0yUKVHrdtlboImxQRd107kuJNXswU9F6+7fytYcRW0lKSNCe/zCeal
	C79n8FOTcevDQ1lkzDWj1hsVvSEQRmeJhdMlQ5w=
X-Google-Smtp-Source: AK7set+4J1JJDyOhUWTBkUQ+yFINmj91UHAL+mWR1PwgnyAPbwX9KzADnbLaAxDQJpvDvcVOIfoR3veqoDwQrUmjfMY=
X-Received: by 2002:a67:e101:0:b0:3f0:89e1:7c80 with SMTP id
 d1-20020a67e101000000b003f089e17c80mr1537104vsl.72.1675165771875; Tue, 31 Jan
 2023 03:49:31 -0800 (PST)
MIME-Version: 1.0
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
 <3617dc882193166580ae7e5d122447e924cab524.1674226563.git.oleksii.kurochko@gmail.com>
 <d5d9a305-3501-cbc4-1c8a-1a62bd08d588@citrix.com> <d3e2c18e443439d18f8ece31c9419e30a19be8c5.camel@gmail.com>
In-Reply-To: <d3e2c18e443439d18f8ece31c9419e30a19be8c5.camel@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 31 Jan 2023 21:49:05 +1000
Message-ID: <CAKmqyKOecoz91e-4-KZUdgT5HNhuwuN83tpFR+HmwkUPb2r3ew@mail.gmail.com>
Subject: Re: [PATCH v1 01/14] xen/riscv: add _zicsr to CFLAGS
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jan 23, 2023 at 8:43 PM Oleksii <oleksii.kurochko@gmail.com> wrote:
>
> On Fri, 2023-01-20 at 15:29 +0000, Andrew Cooper wrote:
> > On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> > > Work with some registers requires csr command which is part of
> > > Zicsr.
> > >
> > > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > > ---
> > >  xen/arch/riscv/arch.mk | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
> > > index 012dc677c3..95b41d9f3e 100644
> > > --- a/xen/arch/riscv/arch.mk
> > > +++ b/xen/arch/riscv/arch.mk
> > > @@ -10,7 +10,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
> > >  # into the upper half _or_ the lower half of the address space.
> > >  # -mcmodel=medlow would force Xen into the lower half.
> > >
> > > -CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
> > > +CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany
> >
> > Should we just go straight for G, rather than bumping it along every
> > time we make a tweak?
> >
> I didn't go straight for G as it represents the “IMAFDZicsr Zifencei”
> base and extensions, and thereby it would be necessary to add support
> for the FPU (at least {save,restore}_fp_state is required), but I am
> not sure that we need it in general.

That seems fair enough. I don't see a reason to restrict ourselves if
we aren't using something. Although we probably will hit a requirement
on G at some point anyway.

Alistair

>
> Another reason is that the Linux kernel introduces the _zicsr extension
> separately (but I am not sure that it can be considered a serious
> argument):
> https://elixir.bootlin.com/linux/latest/source/arch/riscv/Makefile#L58
> https://lore.kernel.org/all/20221024113000.891820486@linuxfoundation.org/
>
> > ~Andrew
> ~Oleksii
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 12:04:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 12:04:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487599.755261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpMT-00017P-Kb; Tue, 31 Jan 2023 12:03:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487599.755261; Tue, 31 Jan 2023 12:03:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpMT-00017H-Hx; Tue, 31 Jan 2023 12:03:41 +0000
Received: by outflank-mailman (input) for mailman id 487599;
 Tue, 31 Jan 2023 12:03:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMpMR-00017A-PR
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 12:03:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMpMQ-00017H-VS; Tue, 31 Jan 2023 12:03:38 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMpMQ-0000fT-Pq; Tue, 31 Jan 2023 12:03:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=u5nY5Kts/Uq5eSWU6q+8En3cC2yDTmdBxeciRaBP2tE=; b=bO7817/dhQFRwQvsH26wcxFXbg
	bBYA3G/iQjBAo80LOQyJGsHHO7o3ghkxWfOKLfjDp1DaRm0SIV0LHlO51oQY9/9/vl895c/xudjqs
	wQQ4/GT1cFE1q3FmI9KqQVlC3vHDNJ4HbuWhDr8PqJ3PZt0T2MKIssmAyNeDjyK0Wizk=;
Message-ID: <2f6a3b17-4e41-fe9a-1713-4942b3bd3585@xen.org>
Date: Tue, 31 Jan 2023 12:03:36 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v7 1/2] xen/riscv: introduce early_printk basic stuff
To: Alistair Francis <alistair23@gmail.com>,
 Oleksii <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Connor Davis
 <connojdavis@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1674819203.git.oleksii.kurochko@gmail.com>
 <06c2c36bd68b2534c757dc4087476e855253680a.1674819203.git.oleksii.kurochko@gmail.com>
 <f5cd1bfb116bfcc86fc2848df7eead05cd1a24c0.camel@gmail.com>
 <CAKmqyKMGiDiPRZBekdKan=+YduSmkB2DoWo5btrtVQ8nS3KMAg@mail.gmail.com>
Content-Language: en-US
From: Julien Grall <julien@xen.org>
In-Reply-To: <CAKmqyKMGiDiPRZBekdKan=+YduSmkB2DoWo5btrtVQ8nS3KMAg@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 31/01/2023 11:44, Alistair Francis wrote:
> On Sat, Jan 28, 2023 at 12:15 AM Oleksii <oleksii.kurochko@gmail.com> wrote:
>>
>> Hi Alistair, Bobby and community,
>>
>> I would like to ask your help with the following check:
>> +/*
>> + * early_*() can be called from head.S with MMU-off.
>> + *
>> + * The following requirements should be honoured for early_*() to
>> + * work correctly:
>> + *    It should use PC-relative addressing for accessing symbols.
>> + *    To achieve that GCC cmodel=medany should be used.
>> + */
>> +#ifndef __riscv_cmodel_medany
>> +#error "early_*() can be called from head.S with MMU-off"
>> +#endif
> 
> I have never seen a check like this before. 

The check is in the Linux code, see [3].

> I don't really understand
> what it's looking for; if the linker were unable to call early_*(), I
> would expect it to throw an error. I'm not sure what this is adding.

When the MMU is off during early boot, you want any C function called to
use PC-relative addresses rather than absolute addresses. This is because
the physical address may not match the virtual address.

 From my understanding, on RISC-V, the use of PC-relative addressing is
only guaranteed with medany. So if you were going to change the cmodel
(Andrew suggested you would), then early_*() may end up broken.

This check serves as documentation of the assumption and also helps the
developer notice any change in the code model and take the appropriate
action to remediate it.

> 
> I think this is safe to remove.
Based on what I wrote above, do you still think this is safe?

Cheers,

>> Please take a look at the following messages and help me to decide if
>> the check mentioned above should be in early_printk.c or not:
>> [1]
>> https://lore.kernel.org/xen-devel/599792fa-b08c-0b1e-10c1-0451519d9e4a@xen.org/
>> [2]
>> https://lore.kernel.org/xen-devel/0ec33871-96fa-bd9f-eb1b-eb37d3d7c982@xen.org/

[3] 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/riscv/mm/init.c


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 12:24:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 12:24:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487606.755270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpgG-000446-8n; Tue, 31 Jan 2023 12:24:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487606.755270; Tue, 31 Jan 2023 12:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpgG-00043z-5y; Tue, 31 Jan 2023 12:24:08 +0000
Received: by outflank-mailman (input) for mailman id 487606;
 Tue, 31 Jan 2023 12:24:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fq0=54=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMpgE-00043t-TK
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 12:24:06 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 265c11a3-a162-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 13:24:04 +0100 (CET)
Received: by mail-ed1-x52d.google.com with SMTP id z11so14239206ede.1
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 04:24:04 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 m13-20020aa7c2cd000000b00499b6b50419sm2477349edp.11.2023.01.31.04.24.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 31 Jan 2023 04:24:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 265c11a3-a162-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=2kltFoK2okjjXyyfDtDoDJKFwbDdK3iC7eRwue+UiHs=;
        b=aQX2xe0vlEtnVV8+9ASUtxPzSqFBnlpzRpJuFfM+WwOezaoX3NXedeEGBSBAER+iSm
         PKt3SW76EUjzoGhYQrUCswybLTILSONQd7wgeiaRr2SYGPSNzLdq1AeO+VDxvp1T0k/x
         k/HcMZhZ3+d5M4Awmobz+X8dRcl83lhoSjlH6mdFnCJxCSG6dIwqkpSBO3BDNawSAPwR
         G2oJufd0/RBwTg4jIEAwgWtfNy/hrL+QG1OHRhbQpv2ENKcB+jq6lpW69F1yqJonY+FI
         y+tGCHTUX+pMdVs0nI00d3IxxMUtLRQ3FVCs2rIn8EffUQHXNl3PLl0mUvquzIgiypUs
         yHrw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=2kltFoK2okjjXyyfDtDoDJKFwbDdK3iC7eRwue+UiHs=;
        b=WM32TX8mPGYBOoP7J9MyAahkGhDAhbubhIbxoDsjW6ATd1NJBSqszC+P9ubhz6324W
         V9+sH+otzNVRbUyDxVDQgyfmk59U66YD2c5hDxosjvGL77UJOXGRo1HA3S7EI+N/Bo9v
         7RAG9yeuGkHd3tSVoCV1zlIKm5vpW5AcAc5cNRkCP09F/8oT7Af9i9z5IpecfFcBay4A
         PMvovUcasJOnmECB0IZ9AiNa0NWGnaj+h7UfqH7++bO54w0gWVL/azEoQG7oUamTDJnm
         bA5qbmg9M7FtZDBIOrIIBBpjt7q0PBf0O7vFaRNthCcWOorT+0E76ONaC9tbC1wfhqZx
         Gmyg==
X-Gm-Message-State: AFqh2krTEhyz0TE2lN6UqREsYmhLDkQKP6h8JCZ4VwiBC5e1XnZCrEqy
	UGFa7QDtak7Zog0+941hYQQ=
X-Google-Smtp-Source: AMrXdXtrq8e1rmhBRv1KU7pAS4kqe6nLqWP+MmTtcWVfLTgoHulWqwSgLw/8GZMx+Me+QCXFEB28ww==
X-Received: by 2002:a50:c005:0:b0:49e:f062:99e6 with SMTP id r5-20020a50c005000000b0049ef06299e6mr41538084edb.28.1675167843575;
        Tue, 31 Jan 2023 04:24:03 -0800 (PST)
Message-ID: <72a7bdfc72161144df741e3a39f628874d88debd.camel@gmail.com>
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, Bobby Eshleman
 <bobby.eshleman@gmail.com>
Date: Tue, 31 Jan 2023 14:24:02 +0200
In-Reply-To: <8c0bce0b-05bd-5f4b-7b66-f6668ad34899@xen.org>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
	 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
	 <a8219b2d-a22d-63ac-5088-c33610310d6e@xen.org>
	 <27469e861d4777af42b84fb637b24ed47a187647.camel@gmail.com>
	 <8c0bce0b-05bd-5f4b-7b66-f6668ad34899@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

Hi Julien,

On Mon, 2023-01-30 at 22:11 +0000, Julien Grall wrote:
> Hi,
>
> On 30/01/2023 11:40, Oleksii wrote:
> > On Fri, 2023-01-27 at 14:54 +0000, Julien Grall wrote:
> > > Hi Oleksii,
> > >
> > > On 27/01/2023 13:59, Oleksii Kurochko wrote:
> > > > +static inline void wfi(void)
> > > > +{
> > > > +    __asm__ __volatile__ ("wfi");
> > >
> > > I have stared at this line for a while and I am not quite sure I
> > > understand why we don't need to clobber the memory like we do on
> > > Arm.
> > >
> > I don't have an answer. The code was based on Linux so...
>
> Hmmm, ok. It would probably be wise to understand how the code imported
> from Linux works, so we don't end up introducing bugs when calling such
> functions.
>
> From your current use in this patch, I don't expect any issue. That may
> change for the other uses.
>
Could you please share a link, or explain what kind of problems may occur
in the other use cases if we don't clobber memory around "wfi"?

As I understand it, the reason for clobbering memory is to prevent GCC
from keeping memory values cached in registers across the assembler
instruction and from optimizing loads/stores around it.
But this single instruction isn't expected to touch memory, so it should
be safe enough to stall the current hart (CPU) until an interrupt might
need servicing.

Anyway, we can change the code to:
    __asm__ __volatile__ ("wfi" ::: "memory");
to be sure that no problems will arise in the future.
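
As a portable illustration of what the "memory" clobber buys: the `wfi` mnemonic itself only assembles on RISC-V, so the sketch below uses an empty asm template as a stand-in, and all names (`wfi_like_barrier`, `wait_for_event`, `irq_pending`) are invented for the example:

```c
/* Stand-in for wfi(): the empty template emits no instruction, but the
 * "memory" clobber still forces GCC to flush pending stores before the
 * statement and to re-read memory after it, instead of reusing values
 * cached in registers.  On RISC-V the template would be "wfi". */
static inline void wfi_like_barrier(void)
{
    __asm__ __volatile__ ( "" ::: "memory" );
}

/* A flag an interrupt handler would set.  Note it is deliberately not
 * volatile: it is the clobber that makes the loop below re-load it from
 * memory on every iteration. */
static int irq_pending;

int wait_for_event(void)
{
    int spins = 0;

    while ( !irq_pending )
    {
        wfi_like_barrier();      /* stall point; memory re-read after */
        if ( ++spins > 3 )
            irq_pending = 1;     /* stand-in for an interrupt firing */
    }

    return spins;
}
```

Without the clobber (and without volatile), the compiler would be entitled to hoist the `irq_pending` load out of the loop and spin forever once it decided the flag cannot change.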

> > > FWIW, Linux is doing the same, so I guess this is correct. For Arm we
> > > also follow the Linux implementation.
> > >
> > > But I am wondering whether we are just too strict on Arm, the RISC-V
> > > compiler offers a different guarantee, or you expect the user to be
> > > responsible for preventing the compiler from doing harmful
> > > optimization.
> > >
> > > > +/*
> > > > + * panic() isn't available at the moment so an infinite loop will
> > > > + * be used temporarily.
> > > > + * TODO: change it to panic()
> > > > + */
> > > > +static inline void die(void)
> > > > +{
> > > > +    for( ;; ) wfi();
> > >
> > > Please move wfi() to a newline.
> > Thanks.
> >
> > I thought it was fine to put it on one line in this case, but I'll
> > move it to a new line.
>
> I am not aware of any place in Xen where we would combine the lines.
> Also, you want a space after 'for'.
>
> Cheers,
>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 12:30:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 12:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487612.755281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpmQ-0005lG-U5; Tue, 31 Jan 2023 12:30:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487612.755281; Tue, 31 Jan 2023 12:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpmQ-0005l9-RH; Tue, 31 Jan 2023 12:30:30 +0000
Received: by outflank-mailman (input) for mailman id 487612;
 Tue, 31 Jan 2023 12:30:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fq0=54=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMpmO-0005l3-SA
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 12:30:28 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0b16727f-a163-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 13:30:27 +0100 (CET)
Received: by mail-ej1-x636.google.com with SMTP id ud5so41136989ejc.4
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 04:30:27 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 l3-20020a056402230300b0049f29a7c0d6sm8328170eda.34.2023.01.31.04.30.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 31 Jan 2023 04:30:27 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b16727f-a163-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=pCmLGAI5vdHBS0eYH6qCqi6roe/ilfkfoE+bpcz9Jog=;
        b=fFVQI4lrSpdvK270rKm/WdXqSIbMcxBu1UYdrvsbnL85EKJ1Gyqersvi8rbTFOu9ML
         AV8Ivb/KYIyM42OmZnei5Vp4a+d/aEtZGS0olnpDIo54YAPQLDZnSCdMSQvUE5PfyYWG
         vFZk6xOHMi5NK0p9GvF3eNa9y8aRzr57zwIdO7LuZR0GvDKHW/D67bNTjuxLji1F9bz6
         yvr22NsU4qK54YOQw25hXn/VwUQ7Gx/4xBN3nMXa30o7dKyCm/sLZ+HeS3gLKYOxbeRE
         1VdDSSlO4o7ocAAbsmYIld41j55AmUI7Mx8BKDCdQfuw5Ctar6KKUDeQE4Or0ulqKKcC
         RP0Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=pCmLGAI5vdHBS0eYH6qCqi6roe/ilfkfoE+bpcz9Jog=;
        b=UZZTkGMSRU91lm30f4vvxc6h97oa+xTbzOI37mzE8yRI7UMXa3jSnWXRowj85jffkJ
         GsEHaecsJUVeT4FQdSwSSLjNqtcX6Z6fRU5G4nRy+Vj3g8JzD4fm4h+0rFbai+fgQf+v
         r5wwrs6a5w4lE54PRDz7ssUbJ7Q/do4zwjzWy34s6fbng4I4LHDL1E+jOJXY0A7GWvCD
         A4fq3rIkpwPUAtD92XP8WjnNPYs57JikCF+DTX93HXG1MTzKJdCxcS/Anb9BL13A0Mau
         QmtzL4LVI6J0DEcJ8v8jvYRa/Kqi5D5qQcgSmEtNkfxarsCwYqcuJ+2bHmw1eS+I7Vqq
         UpGA==
X-Gm-Message-State: AO0yUKWUpxbdGsLt+wQ/KIgxSsoa/ueYtd4HGd0U+fV7mRtmZiLnukTm
	0rlskiXLZQE0VSaeVQego6U=
X-Google-Smtp-Source: AK7set+V/QAdvwc377Dd9bPYcjGsDrMH/7bjzENkzEIziofx99wtg+61HlS/+oAlUhGP0gdhSeigIg==
X-Received: by 2002:a17:907:97d3:b0:889:33ca:70c6 with SMTP id js19-20020a17090797d300b0088933ca70c6mr7928894ejc.2.1675168227442;
        Tue, 31 Jan 2023 04:30:27 -0800 (PST)
Message-ID: <1bb7d9d3580311888aa97c8ad50aa93c09c46fce.camel@gmail.com>
Subject: Re: [PATCH v1 01/14] xen/riscv: add _zicsr to CFLAGS
From: Oleksii <oleksii.kurochko@gmail.com>
To: Alistair Francis <alistair23@gmail.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, 
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>,  Gianluca Guida
 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
 Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Date: Tue, 31 Jan 2023 14:30:26 +0200
In-Reply-To: <CAKmqyKOecoz91e-4-KZUdgT5HNhuwuN83tpFR+HmwkUPb2r3ew@mail.gmail.com>
References: <cover.1674226563.git.oleksii.kurochko@gmail.com>
	 <3617dc882193166580ae7e5d122447e924cab524.1674226563.git.oleksii.kurochko@gmail.com>
	 <d5d9a305-3501-cbc4-1c8a-1a62bd08d588@citrix.com>
	 <d3e2c18e443439d18f8ece31c9419e30a19be8c5.camel@gmail.com>
	 <CAKmqyKOecoz91e-4-KZUdgT5HNhuwuN83tpFR+HmwkUPb2r3ew@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Tue, 2023-01-31 at 21:49 +1000, Alistair Francis wrote:
> On Mon, Jan 23, 2023 at 8:43 PM Oleksii <oleksii.kurochko@gmail.com>
> wrote:
> >
> > On Fri, 2023-01-20 at 15:29 +0000, Andrew Cooper wrote:
> > > On 20/01/2023 2:59 pm, Oleksii Kurochko wrote:
> > > > Work with some registers requires csr command which is part of
> > > > Zicsr.
> > > >=20
> > > > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > > > ---
> > > >  xen/arch/riscv/arch.mk | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
> > > > index 012dc677c3..95b41d9f3e 100644
> > > > --- a/xen/arch/riscv/arch.mk
> > > > +++ b/xen/arch/riscv/arch.mk
> > > > @@ -10,7 +10,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)      := $(riscv-march-y)c
> > > >  # into the upper half _or_ the lower half of the address space.
> > > >  # -mcmodel=medlow would force Xen into the lower half.
> > > >
> > > > -CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
> > > > +CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany
> > >
> > > Should we just go straight for G, rather than bumping it along
> > > every
> > > time we make a tweak?
> > >
> > I didn't go straight for G because it represents the “IMAFDZicsr
> > Zifencei” base and extensions, so it would require adding FPU support
> > (at least {save,restore}_fp_state), but I am not sure we need that in
> > general.
>
> That seems fair enough. I don't see a reason to restrict ourselves if
> we aren't using something. Although we probably will hit a
> requirement
> on G at some point anyway.
>
Thanks for your notes; I will change it to G.
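
For reference, the change under discussion amounts to an arch.mk fragment roughly like the following sketch; the rv64ima base line is an assumption here, and the rest mirrors the quoted diff:

```make
# Compose the -march string from the enabled ISA extensions.  Newer
# binutils no longer treat Zicsr as implied by the base ISA, so csr*
# instructions only assemble if _zicsr is appended explicitly.
riscv-march-$(CONFIG_RISCV_64)    := rv64ima
riscv-march-$(CONFIG_RISCV_ISA_C) := $(riscv-march-y)c

CFLAGS += -march=$(riscv-march-y)_zicsr -mstrict-align -mcmodel=medany
```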

> Alistair
>
> >
> > Another reason is that the Linux kernel introduces the _zicsr extension
> > separately (but I am not sure it can be considered a serious argument):
> > https://elixir.bootlin.com/linux/latest/source/arch/riscv/Makefile#L58
> > https://lore.kernel.org/all/20221024113000.891820486@linuxfoundation.org/
> >
> > > ~Andrew
> > ~Oleksii
> >
> >



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 12:34:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 12:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487617.755291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpq7-0006NP-Ed; Tue, 31 Jan 2023 12:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487617.755291; Tue, 31 Jan 2023 12:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpq7-0006NI-Bi; Tue, 31 Jan 2023 12:34:19 +0000
Received: by outflank-mailman (input) for mailman id 487617;
 Tue, 31 Jan 2023 12:34:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fq0=54=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMpq6-0006NA-AQ
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 12:34:18 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 932ebe66-a163-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 13:34:16 +0100 (CET)
Received: by mail-ej1-x62d.google.com with SMTP id m2so40674863ejb.8
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 04:34:16 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 de3-20020a1709069bc300b0088be5f9843fsm1138776ejc.158.2023.01.31.04.34.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 31 Jan 2023 04:34:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 932ebe66-a163-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=rHkuxG+nSuICPPi7KodEiRosBEeuCix+OENZAvUeQVA=;
        b=g+T7mDATQy7IYOwckkooh1zbU7ylJdT2cfn9caLlxOejNK9YLb24nQ31v5ppki+1so
         0z/1p/LwIMz+Y2tUv0nA5+IIpRljM7HHn7nZ7/GOjva/+so/ADrHItjTBpoET+8cW1II
         ctfLA1m7TBvIknZ59F18Zk8YF65K1FrSVxzg77oPoXrZikXWGAjIXJhQCgfKzcg67UgB
         bZLxmPBitWwc4ff/TKxRP8kvYq+nnmzNF8r3Mc4nuj0ln2svLZl8p6uhCz+X5ux1DR8h
         jAVAyQw+qDVTnLdi7uBQqvnl2C+sOH4YcXO3oPCVuWTV6OlxN75R7Hxk6vr6VBVunC5M
         bWEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=rHkuxG+nSuICPPi7KodEiRosBEeuCix+OENZAvUeQVA=;
        b=cK6Fw9tCCMyr9L7atM+RQGDwrj+UlXUBUP3s4I2L7e0KW+1FCPVBQZDtemTbOBhi8H
         XJAY+LxEyyL0IrsONljonHDpGoRK4+m6kfpOg+hexU8mgXx5sTndbUeWBeOT6/rpOLlb
         zXZRqElCqs0Ob7V4dsWheSUTTsksuAHHlafACM4x2R0blPmJZ6AhVnhUsKEN1P2O/8WP
         odLDZxsGOhPBUKIxCpu3plmJxo+QjT6qS8DVkt45bLtr8pHx2KiyXd6zUn/J+sbUjpbA
         BLR/pRtkAb5MnxbRNWGSOaapfkczFx0Gf4Csov1/ArwYSRDFWcPB0Ja8jmPtxDPmBWs0
         bcrw==
X-Gm-Message-State: AO0yUKVAKsNuIyhUBy6o/s9mbnkPPrA7VBMBwb9hKAtZfN7wPxVsn2eD
	pTBDX9qChYHDT3LZtMDJiUI=
X-Google-Smtp-Source: AK7set/M5UHOxpF//t4LAM0ABaXpcT6VZDrCQxCfLpVwhTLYsVZPFhtB8NpFXkVENRmoqUnReQIz6A==
X-Received: by 2002:a17:906:6402:b0:87b:cefa:c99b with SMTP id d2-20020a170906640200b0087bcefac99bmr16160588ejm.48.1675168455771;
        Tue, 31 Jan 2023 04:34:15 -0800 (PST)
Message-ID: <fdbf044d79b23b33b75a684f5124fa218f826a32.camel@gmail.com>
Subject: Re: [PATCH v2 12/14] xen/riscv: introduce an implementation of
 macros from <asm/bug.h>
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>
Date: Tue, 31 Jan 2023 14:34:14 +0200
In-Reply-To: <873d4754-0314-0022-96a9-e54ed78ac6ef@xen.org>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
	 <06c06dde5ee635c6d1ebf66a8cff9e7e1f4fbf5c.1674818705.git.oleksii.kurochko@gmail.com>
	 <73553bcf-9688-c187-a9cb-c12806484893@xen.org>
	 <2c4d490bde4f04f956e481257ddc7129c312abb6.camel@gmail.com>
	 <873d4754-0314-0022-96a9-e54ed78ac6ef@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-01-30 at 22:28 +0000, Julien Grall wrote:
> Hi Oleksii,
>
> On 30/01/2023 11:35, Oleksii wrote:
> > Hi Julien,
> > On Fri, 2023-01-27 at 16:02 +0000, Julien Grall wrote:
> > > Hi Oleksii,
> > >
> > > On 27/01/2023 13:59, Oleksii Kurochko wrote:
> > > > The patch introduces macros: BUG(), WARN(), run_in_exception(),
> > > > assert_failed.
> > > >
> > > > The implementation uses the "ebreak" instruction in combination with
> > > > different bug frame tables (one per type) which contain useful
> > > > information.
> > > >
> > > > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > > > ---
> > > > Changes:
> > > >     - Remove __ in define namings
> > > >     - Update run_in_exception_handler() with
> > > >       register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
> > > >     - Remove bug_instr_t type and change its usage to uint32_t
> > > > ---
> > > >  xen/arch/riscv/include/asm/bug.h | 118 ++++++++++++++++++++++++++++
> > > >  xen/arch/riscv/traps.c           | 128 +++++++++++++++++++++++++++++++
> > > >  xen/arch/riscv/xen.lds.S         |  10 +++
> > > >  3 files changed, 256 insertions(+)
> > > >  create mode 100644 xen/arch/riscv/include/asm/bug.h
> > > >
> > > > diff --git a/xen/arch/riscv/include/asm/bug.h
> > > > b/xen/arch/riscv/include/asm/bug.h
> > > > new file mode 100644
> > > > index 0000000000..4b15d8eba6
> > > > --- /dev/null
> > > > +++ b/xen/arch/riscv/include/asm/bug.h
> > > > @@ -0,0 +1,118 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * Copyright (C) 2012 Regents of the University of California
> > > > + * Copyright (C) 2021-2023 Vates
> > >
> > > I have to question the two copyrights here given that the
> > > majority of
> > > the code seems to be taken from the arm implementation (see
> > > arch/arm/include/asm/bug.h).
> > >
> > > With that said, we should consolidate the code rather than
> > > duplicating
> > > it on every architecture.
> > >
> > The copyrights should be removed. They were taken from the previous
> > implementation of bug.h for RISC-V, so I just forgot to remove them.
> >
> > It looks like we should have a common bug.h for Arm and RISC-V, but I am
> > not sure how best to do that. Probably the code wants to be moved to
> > xen/include/bug.h, guarded by ifdefs for Arm and RISC-V...
>
> Or you could introduce CONFIG_BUG_GENERIC or similar, so it is easily
> selectable by other architectures.
>
> > But I am still not sure this is the best option, as we at least have a
> > different implementation for x86_64.
>
> My main concern is the maintenance effort. For every bug, we would need
> to fix it in two places. The risk is we may forget to fix one
> architecture. This is not an ideal situation.
>
> So I think sharing the header between RISC-V and Arm (or x86, see below)
> is at least a must. We can do the 3rd architecture at a leisurely pace.
>
> One option would be to introduce asm-generic like Linux does (IIRC this
> was a suggestion from Andrew). This would also allow sharing code between
> two of the archs.
>
> Also, from a brief look, the difference in implementation is mainly
> because on Arm we can't use %c (some versions of GCC didn't support it).
> Is this also the case on RISC-V? If not, you may want to consider using
> the x86 version.
>
No, it shouldn't be an issue for RISC-V; I'll double-check.
Anyway, bug.h should be shared between archs, so I am going to rework that
in this patch series and send the changes in the next version.
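
To make the mechanism concrete, here is a portable C sketch of the table-lookup idea behind these macros: the trap handler takes the faulting PC of the `ebreak` and searches a per-type bug-frame table for the matching record. The struct layout and all names here are illustrative, not Xen's actual ones:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative bug frame: records where a BUG()/WARN() site lives.
 * The real layout is emitted by the BUG() macros into dedicated
 * linker sections; this sketch only shows the lookup step. */
struct bug_frame {
    uintptr_t pc;        /* address of the ebreak instruction */
    const char *file;
    unsigned int line;
};

/* In Xen there would be one table per frame type (BUG, WARN, ASSERT,
 * run_in_exception); a single hand-filled table stands in here. */
static const struct bug_frame bug_table[] = {
    { 0x1000, "traps.c",  42 },
    { 0x2000, "setup.c", 108 },
};

/* On a trap caused by ebreak, the handler searches the table for the
 * faulting PC to recover file/line information. */
const struct bug_frame *find_bug_frame(uintptr_t pc)
{
    for ( size_t i = 0; i < sizeof(bug_table) / sizeof(bug_table[0]); i++ )
        if ( bug_table[i].pc == pc )
            return &bug_table[i];

    return NULL;    /* not one of our bug frames: a genuine breakpoint */
}
```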

Thanks.

~Oleksii
> Cheers,
>



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 12:37:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 12:37:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487622.755301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpsz-00077e-U5; Tue, 31 Jan 2023 12:37:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487622.755301; Tue, 31 Jan 2023 12:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpsz-00077X-R5; Tue, 31 Jan 2023 12:37:17 +0000
Received: by outflank-mailman (input) for mailman id 487622;
 Tue, 31 Jan 2023 12:37:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fq0=54=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pMpsy-00077M-Sv
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 12:37:16 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fda5c719-a163-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 13:37:14 +0100 (CET)
Received: by mail-ed1-x52f.google.com with SMTP id f7so7027140edw.5
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 04:37:14 -0800 (PST)
Received: from pc-879.home (83.29.147.144.ipv4.supernova.orange.pl.
 [83.29.147.144]) by smtp.gmail.com with ESMTPSA id
 g12-20020a056402114c00b004a216fa259esm6225345edw.60.2023.01.31.04.37.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 31 Jan 2023 04:37:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fda5c719-a163-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=pQteqOCyAwbu0M9jaRz8XXN1rQY3QJ0D8KKKbXHgsYY=;
        b=GsBKl0XLdwLmbwS2zltNkihJofVcRpe+mstb6uMVNe1iuUVZveUv0scDmFbqNNzQvm
         YF0L/RqIE94fSwvmi56daqSKguUZzZ+CNOdu7+4v3OJoEi4bSlcSRRsIKdWFOMKjVFq/
         Ih4FSg5LkWCWP0z105kDQRXziFOv8DCuj7IIkWqT9KBf1TzMuhvsqp9p9ecnBMezmBW0
         FeGqOL/wMWAn1DIsGIXOqYTqNSCJ9tvpeJugnDRwl0ZajAvWyaW9e7zTkGkMotcXABW9
         XUx7Lgy6YZEfWjVj3q8qmWFtB1EdkCafvl/mYECOIo3Qftgfu8QlCu3VSCDggR90SJsj
         jRpg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=pQteqOCyAwbu0M9jaRz8XXN1rQY3QJ0D8KKKbXHgsYY=;
        b=KQqUUuNIu6v5/vQrmtaJ6UAtQrIpQQI2s1y9BO0W0Rjh4uKVn0ZNTTu5C95KcOCvvu
         S1TJn6Lmu+NP4lfoNuy7GUCB3mEvZtEsZIdtWT4K7a7UmN8PQr7nXJ67h9hMjd12pC21
         NItyuyHQB6RB4qfGgGFDj0jeOTVBu48a+tYz5cQgwM7VLCIYyoMxL7I44R87w3ie+GxW
         fxH0LLwRMNfu1R2wCCvCVv0NLi6+Lql/CJlrX4uY393Jn0SETfHostN0TOQrQ1OXvsoo
         9ryUNESheSwjjKOlrD5yOi+JxoWXQxlaSUwq6CoBeSMcJXpD28h32ZVzA9TdiAGfax3n
         hqKA==
X-Gm-Message-State: AO0yUKUDcsebOKqCdk8wHKE121x/RCDkq7t6qkknzpCGC+gsb+o1A+T6
	c5Vrfm0c+KKkmaHzOUjoHVM=
X-Google-Smtp-Source: AK7set9nAjQwlEifEw/u78Qh5wLhEz5Cnq11+HNpony01MddJpfoNhlK5liZ9OQvbgIczskdaWIv1g==
X-Received: by 2002:a05:6402:22b9:b0:4a2:6f53:c417 with SMTP id cx25-20020a05640222b900b004a26f53c417mr1370199edb.32.1675168634511;
        Tue, 31 Jan 2023 04:37:14 -0800 (PST)
Message-ID: <5c6e413b606ae268f0a6a24d347d6dd994598d37.camel@gmail.com>
Subject: Re: [PATCH v8 1/2] xen/riscv: introduce early_printk basic stuff
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, Bobby Eshleman
 <bobby.eshleman@gmail.com>
Date: Tue, 31 Jan 2023 14:37:13 +0200
In-Reply-To: <12f4a315-19dc-2462-7bbf-f02408b09081@xen.org>
References: <cover.1675163330.git.oleksii.kurochko@gmail.com>
	 <06c2c36bd68b2534c757dc4087476e855253680a.1675163330.git.oleksii.kurochko@gmail.com>
	 <12f4a315-19dc-2462-7bbf-f02408b09081@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.3 (3.46.3-1.fc37) 
MIME-Version: 1.0

Hi Julien,

On Tue, 2023-01-31 at 11:42 +0000, Julien Grall wrote:
> Hi Oleksii,
>
> On 31/01/2023 11:17, Oleksii Kurochko wrote:
> > Because printk() relies on a serial driver (like the ns16550 driver),
> > and drivers require working virtual memory (ioremap()), there is no
> > print functionality early in Xen boot.
> >
> > The patch introduces the basic stuff of the early_printk functionality,
> > which will be enough to print "hello from C environment".
> >
> > Originally early_printk.{c,h} was introduced by Bobby Eshleman
> > (https://github.com/glg-rv/xen/commit/a3c9916bbdff7ad6030055bbee7e53d1aab71384),
> > but some functionality was changed: the early_printk() function differs
> > from the original because common code isn't being built yet, so there is
> > no vscnprintf().
> >
> > This commit adds an early printk implementation built on the putc SBI
> > call.
> >
> > As sbi_console_putchar() is already planned for deprecation, it is used
> > temporarily and will be removed or reworked once a real UART driver is
> > ready.
> >
> > Signed-off-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > Reviewed-by: Bobby Eshleman <bobby.eshleman@gmail.com>
> > ---
> > Changes in V8:
> >     - Nothing was changed
> > ---
> > Changes in V7:
> >     - Nothing was changed
> > ---
> > Changes in V6:
> >     - Remove __riscv_cmodel_medany check from early_printk.c
>
> This discussion is still not resolved. I will echo Jan's point [1] and
> expand on it. There is limited point in sending a new version with small
> changes if the main open points are not addressed.
>
> Can you please look at settling the open issues first and then send a
> new version?
Sure, I won't send a new patch series until the issues are resolved.

This patch series was sent to resolve a CI/CD issue which I missed after
the renaming of the RISC-V task in build.yaml.

~ Oleksii
>
> Cheers,
>
> [1] 1d63dd9a-77df-4fca-e35b-886ba19fb35d@suse.com
>
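
The putc-based scheme the quoted patch describes can be sketched in portable C as follows. The function pointer stands in for `sbi_console_putchar()` (which on RISC-V is an SBI ecall), and `early_puts()` is a hypothetical name for the string-walking loop; there is no vscnprintf() at this stage, so characters are emitted one at a time:

```c
/* Character sink: on RISC-V this would wrap the (deprecated)
 * sbi_console_putchar() SBI call.  It is a pluggable pointer here so
 * the sketch runs anywhere; it must be set before early_puts() is
 * called. */
static void (*early_putc)(char c);

/* Bookkeeping for the illustrative sink below. */
static int chars_emitted;
static char last_char;

static void test_sink(char c)
{
    chars_emitted++;
    last_char = c;
}

/* Minimal early console: no format-string support yet, so just walk
 * the string and emit one character at a time through the sink. */
void early_puts(const char *s)
{
    while ( *s != '\0' )
        early_putc(*s++);
}
```

A real implementation would point `early_putc` at the SBI wrapper during early boot and switch to the proper UART driver once ioremap() works.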



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 12:40:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 12:40:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487628.755311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpvQ-0007rR-DJ; Tue, 31 Jan 2023 12:39:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487628.755311; Tue, 31 Jan 2023 12:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMpvQ-0007rI-AR; Tue, 31 Jan 2023 12:39:48 +0000
Received: by outflank-mailman (input) for mailman id 487628;
 Tue, 31 Jan 2023 12:39:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMpvP-0007rC-6E
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 12:39:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMpvO-0001zI-Fh; Tue, 31 Jan 2023 12:39:46 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=[192.168.16.209]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMpvO-0002YK-8L; Tue, 31 Jan 2023 12:39:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=48mc04cykZf2e/+LVCMxoruKGZxhi/uPVfjPXSFXYT8=; b=lok78Vl/4V+u2clTmwd2PukJLZ
	hlaJSGztJZ1hi4OPO2PCqcOBsYAGVhp9wfDm65q94AtlB0DXkcctqEOMPRPUp7fBChiIb5mUHhryI
	4SC3AV0vC8Rdccbt2sujKEmw2ag8lR41BYU016ZJw7VyL/WRt6nQyGBr7PmlBekXgYC0=;
Message-ID: <a3e2a1d3-0e64-82af-53d0-8b25cd1b7580@xen.org>
Date: Tue, 31 Jan 2023 12:39:43 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH v2 07/14] xen/riscv: introduce exception context
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>
References: <cover.1674818705.git.oleksii.kurochko@gmail.com>
 <652289358975cf869e4bc0a6a70e3aba7bd2fbf6.1674818705.git.oleksii.kurochko@gmail.com>
 <a8219b2d-a22d-63ac-5088-c33610310d6e@xen.org>
 <27469e861d4777af42b84fb637b24ed47a187647.camel@gmail.com>
 <8c0bce0b-05bd-5f4b-7b66-f6668ad34899@xen.org>
 <72a7bdfc72161144df741e3a39f628874d88debd.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <72a7bdfc72161144df741e3a39f628874d88debd.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 31/01/2023 12:24, Oleksii wrote:
> Hi Julien,

Hi Oleksii,

> On Mon, 2023-01-30 at 22:11 +0000, Julien Grall wrote:
>> Hi,
>>
>> On 30/01/2023 11:40, Oleksii wrote:
>>> On Fri, 2023-01-27 at 14:54 +0000, Julien Grall wrote:
>>>> Hi Oleksii,
>>>>
>>>> On 27/01/2023 13:59, Oleksii Kurochko wrote:
>>>>> +static inline void wfi(void)
>>>>> +{
>>>>> +    __asm__ __volatile__ ("wfi");
>>>>
>>>> I have stared at this line for a while and I am not quite sure I
>>>> understand why we don't need to clobber the memory like we do on
>>>> Arm.
>>>>
>>> I don't have an answer. The code was based on Linux so...
>>
>> Hmmm ok. It would probably be wise to understand how code imported from
>> Linux works so we don't end up introducing bugs when calling such
>> functions.
>>
>>   From your current use in this patch, I don't expect any issue. That
>> may change for the other uses.
>>
> Could you please share a link, or explain what kind of problems may
> occur in the other use cases if we don't clobber the memory when
> using "wfi"?

I don't have a link and that's why I was asking the question here.

The concern I have is the following:

1)
    wfi();
    val = *addr;

2)
    *addr = val;
    wfi();


Is the compiler allowed to re-order the sequence so that '*addr' is read 
before 'wfi' or (in the second case) written after 'wfi'?

At the moment, I believe this is why we have the 'memory' clobber on 
Arm. However, I couldn't find any documentation stating that the 
compiler cannot do the re-ordering.

> 
> As I understand it, the reason for clobbering the memory is to stop GCC
> from keeping memory values cached in registers across the assembler
> instruction, and from optimizing stores/loads to memory.
> But the current instruction isn't expected to touch memory, so it
> should be safe enough to stall the current hart (CPU) until an
> interrupt might need servicing.
> 
> Anyway, we can change the code to:
>      __asm__ __volatile__ ("wfi" ::: "memory")
> in order to be sure that no problems will arise in the future.

As I wrote earlier, so far I haven't suggested changing any code. I am 
simply trying to understand how this is meant to work.

One action may be that we can remove the memory clobber on Arm.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 14:10:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 14:10:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487652.755322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMrKk-0002YO-9z; Tue, 31 Jan 2023 14:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487652.755322; Tue, 31 Jan 2023 14:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMrKk-0002Xb-3C; Tue, 31 Jan 2023 14:10:02 +0000
Received: by outflank-mailman (input) for mailman id 487652;
 Tue, 31 Jan 2023 14:10:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMrKj-0002RB-Dc; Tue, 31 Jan 2023 14:10:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMrKj-00045T-9D; Tue, 31 Jan 2023 14:10:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMrKi-0008Q3-RN; Tue, 31 Jan 2023 14:10:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMrKi-0005UI-Qs; Tue, 31 Jan 2023 14:10:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xAuReD2gq1log9qpAxZx5luecRP1U3kItRmzBzVeA98=; b=OuavYIqVD4zFV70n1KT0cNYYMF
	9qNuOXKtCiPjo2l9+tL9e/mHgEOKskIm8OrK1C6y/QhuNBKxkYmpWsUVh1mkbLyL3JdZhS23+oLWZ
	4vy3PfrJxCLVqjzVXyWJ/t3noR8In9+64FU9eAQL9DPYnJqrWPGzGnuZTzClCu7nO2SE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176284-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 176284: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:<job status>:broken:regression
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-install(4):broken:regression
    xen-unstable:test-amd64-i386-libvirt-pair:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:kernel-build:fail:regression
    xen-unstable:build-armhf:syslog-server:running:regression
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:host-install/src_host(6):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-examine:host-install:broken:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:host-ping-check-native:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start.2:fail:heisenbug
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:build-armhf:capture-logs:broken:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
X-Osstest-Versions-That:
    xen=f588e7b7cb70800533aaa8a2a9d7a4b32d10b363
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 14:10:00 +0000

flight 176284 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176284/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <job status>         broken
 test-amd64-i386-xl-qemuu-ovmf-amd64    <job status>                 broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <job status>               broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175994
 test-amd64-i386-libvirt-pair    <job status>                 broken  in 176268
 test-amd64-amd64-xl-pvhv2-intel    <job status>               broken in 176268
 build-arm64-pvops             6 kernel-build   fail in 176266 REGR. vs. 175994
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-intel 5 host-install(5) broken in 176268 pass in 176284
 test-amd64-i386-libvirt-pair 6 host-install/src_host(6) broken in 176268 pass in 176284
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 5 host-install(5) broken pass in 176277
 test-amd64-i386-xl-qemuu-ovmf-amd64  5 host-install(5)   broken pass in 176277
 test-amd64-amd64-xl-qemut-debianhvm-amd64 5 host-install(5) broken pass in 176277
 test-amd64-amd64-examine      5 host-install             broken pass in 176277
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail in 176268 pass in 176266
 test-amd64-i386-xl-qemut-debianhvm-amd64 7 xen-install fail in 176268 pass in 176284
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 176268 pass in 176284
 test-amd64-i386-xl-qemuu-ovmf-amd64 18 guest-localmigrate/x10 fail in 176277 pass in 176266
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 host-ping-check-native fail in 176277 pass in 176268
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail in 176277 pass in 176284
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail in 176277 pass in 176284
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 176277 pass in 176284
 test-amd64-amd64-libvirt-vhd 20 guest-start.2              fail pass in 176277

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 176266 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 176266 n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175994
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 176277 like 175987
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175987
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175994
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175994
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175994
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 175994
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175994
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 175994
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 176277 n/a

version targeted for testing:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a
baseline version:
 xen                  f588e7b7cb70800533aaa8a2a9d7a4b32d10b363

Last test of basis   175994  2023-01-20 08:38:32 Z   11 days
Failing since        176003  2023-01-20 17:40:27 Z   10 days   22 attempts
Testing same since   176222  2023-01-26 22:13:29 Z    4 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Bobby Eshleman <bobby.eshleman@gmail.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  George Dunlap <george.dunlap@cloud.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>
  Xenia Ragiadakou <burzalodowa@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    broken  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              broken  
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 broken
broken-job build-armhf broken
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow host-install(5)
broken-step test-amd64-i386-xl-qemuu-ovmf-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemut-debianhvm-amd64 host-install(5)
broken-step test-amd64-amd64-examine host-install
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-job build-armhf broken
broken-job test-amd64-i386-libvirt-pair broken
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 broken
broken-job build-armhf broken
broken-job test-amd64-amd64-xl-pvhv2-intel broken
broken-job build-armhf broken

Not pushing.

(No revision log; it would be 1225 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 14:53:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 14:53:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487667.755331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMs0b-00008G-LS; Tue, 31 Jan 2023 14:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487667.755331; Tue, 31 Jan 2023 14:53:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMs0b-000089-Ip; Tue, 31 Jan 2023 14:53:17 +0000
Received: by outflank-mailman (input) for mailman id 487667;
 Tue, 31 Jan 2023 14:53:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMs0a-00007z-45; Tue, 31 Jan 2023 14:53:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMs0Z-00058N-NN; Tue, 31 Jan 2023 14:53:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMs0Z-00025g-B3; Tue, 31 Jan 2023 14:53:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMs0Z-0000eX-Ad; Tue, 31 Jan 2023 14:53:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LCGcPl0Fsd1IX65/GYFfIajjGF6oEBDTywPmfIBRq6g=; b=YlEgnrS/ITaChS34lnmESz1Usp
	JB0rPEX2OPHpZ4OLsDgp5hNYBkPjpYMAoQO/Zrua9QnyyPy9iAN70Wn9pS201AWmfSiy3C0tsceNr
	sTdHyukhz2wOAB8IYW4VcsNnj78329jTPKjbXpOcVBA6X71nrApky8ZjoyVhe0dJ1688=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176296-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176296: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:host-install(5):broken:heisenbug
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 14:53:15 +0000

flight 176296 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176296/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-amd64-libvirt        <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt      5 host-install(5)          broken pass in 176294

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt    15 migrate-support-check fail in 176294 never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    5 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     broken  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-job test-amd64-amd64-libvirt broken
broken-step test-amd64-amd64-libvirt host-install(5)
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (default value).
    This causes kernel_zimage_place() to treat the binary (contained within the
    uImage) as a position independent executable, so it is loaded at an
    incorrect address.
    
    The correct approach is to read "uimage.load" and set info->zimage.start,
    which ensures that the binary is loaded at the correct address. Also, read
    "uimage.ep" and set info->entry (i.e. the kernel entry address).
    
    If the user provides a load address (i.e. "uimage.load") of 0x0, the image
    is treated as a position independent executable. Xen can load such an
    image at any address it considers appropriate. A position independent
    executable cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Previously, on both arm32 and arm64, Xen ignored the load and entry point
    addresses set in the uImage header. With this commit, Xen uses them, making
    its behaviour consistent with that of u-boot for uImage headers.
    
    Users who want to use Xen with statically partitioned domains can provide
    non-zero load and entry addresses for the dom0/domU kernel. The load and
    entry addresses provided must lie within the memory region allocated by
    Xen.
    
    A deviation from the u-boot behaviour is that a load address of 0x0 is
    taken to denote that the image supports position independent execution.
    This makes the behaviour consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)
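The placement rule described in the commit message above can be sketched as follows. This is an illustrative C fragment, not the actual Xen code: the names place_kernel and region_start are invented for this sketch, and the real logic lives in kernel_zimage_place() with more cases.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/*
 * Illustrative placement rule: a load address of 0x0 in the uImage header
 * marks the image as position independent, so Xen may pick any suitable
 * address; a non-zero load address is honoured as-is.
 */
static paddr_t place_kernel(paddr_t uimage_load, paddr_t region_start)
{
    if ( uimage_load == 0 )      /* PIE: Xen chooses the address */
        return region_start;     /* e.g. start of the allocated region */

    return uimage_load;          /* fixed load address from the header */
}
```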


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 15:14:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 15:14:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487703.755358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMsKs-0004AP-Lr; Tue, 31 Jan 2023 15:14:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487703.755358; Tue, 31 Jan 2023 15:14:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMsKs-0004AI-Iz; Tue, 31 Jan 2023 15:14:14 +0000
Received: by outflank-mailman (input) for mailman id 487703;
 Tue, 31 Jan 2023 15:14:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPLs=54=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pMsKq-0004A7-U1
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 15:14:13 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20615.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::615])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e8f91c62-a179-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 16:14:10 +0100 (CET)
Received: from BN9PR03CA0849.namprd03.prod.outlook.com (2603:10b6:408:13d::14)
 by MN2PR12MB4144.namprd12.prod.outlook.com (2603:10b6:208:15f::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 15:14:07 +0000
Received: from BN8NAM11FT063.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:13d:cafe::3f) by BN9PR03CA0849.outlook.office365.com
 (2603:10b6:408:13d::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 15:14:07 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT063.mail.protection.outlook.com (10.13.177.110) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6064.22 via Frontend Transport; Tue, 31 Jan 2023 15:14:06 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 09:14:06 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 07:14:06 -0800
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 31 Jan 2023 09:14:04 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8f91c62-a179-11ed-933c-83870f6b2ba8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UYii56VkzqS28+UZJPaiPTJ3BV7q/dBkvxecgbeT+maRwUc7TL66wOCBWEikEybUhLAVkSCNYRtBcUG4sh32RIMRyUH1tsDMMibRU+CKDNOqtJfCrMvLi7l6BMOuoqxrK6xj78OIwbXh5kQQsvTYep0JD2Dypod6FzSfhOK9gULZb5dzvhEVnWYqh8WVR/SsYMZFS4IVuHe9xcHs4Ovml5PD6dQf4c2+tskbqUJ3FJasLLKQ7fFiKl795328WcvjUtI/tzzMnBDsUvM9P1UmdDMZR62SaG5OzPaPXBesnb3W2brTfi3vtqJr6i1ZkybmwpO8dCZzEUayONj6K/QBxg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7N3qgUPJKYn/L7cjGhSZmyThGymjVPPS9OBhdt5y28Q=;
 b=Pg0sm5+MBd9Mdb3ajk9LGH2gEGHxSqwW9DPnertEJu5JDNo4DPE4pR+Bnwwx2jJ/SeoPTTGIVlxQwGjkDda6WOAoVyFk1ICjMFDK/tOFGSloXc6O/8FiTpynNn4/unnPWi2JirHUAJKTseFxo6sGYMLiZlemtOTPmtAEwjUEUMf0gNZ4MjT0pBQ5a7pfjitN9hgHIzJVVh07eig6MTaESmcQ55hmBEc4MiTCnYqwd8FwrzIg+E1MkWWwWiQgorx5/fNtDecxjxuznyJebYXXNK05243bweEZI/VzdwaJ5VaH+1jBs2kgEIf6xOIk7EwLtMkz7+YuOHn/Wo5aoQezOg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7N3qgUPJKYn/L7cjGhSZmyThGymjVPPS9OBhdt5y28Q=;
 b=paqseHn8+5tvjZdO6pMZUprMjIy19Ewb39F89qxSzy66L0I9oVYw083NNU9g+1f3nv6jzYnQ5kawlEUaZT+RmHkiIADKuYpmJkw2KltZfWqXc7gXL14Dngt5EcbwCrlck+syqu+ahEp79Zx3vi19bDR5or+fuPUa3X08hT6deRs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<ayankuma@amd.com>
Subject: [PATCH 0/2] xen/arm: Support compressed uImages
Date: Tue, 31 Jan 2023 16:13:52 +0100
Message-ID: <20230131151354.25943-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT063:EE_|MN2PR12MB4144:EE_
X-MS-Office365-Filtering-Correlation-Id: 0d53bf81-94d0-4f00-17a6-08db039dcbe4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OqGTT5C/VDwgnwpf0P706Dfxryohy3a1CEK1PSNo6+lmRtOF3aqwDVQjJow3WX6rjGOTPwfD/Mo3aVuGOpUm+0Dnqltl++4oXu2s/nnbMNGxLcaXRcKJLXbgkFPUf/hX4MBLpkXEWH3UrWolF3UtR/Jw0k0AfNPNvJbM//WWBTs6lpTTFEOaeqLjtc7UNQ9JvLqfk4hxzrsP0/LMdCIUwZcLwbGEsuPtqaWBpTePpH/sWOv/ThzoFB8ZKfZrAKrG0TaqfKJaAij9H5JzvAUzjHoTSN14FN28+ZTBdZwzY3CZlfZO10V1ZDBpNoa0z+fgS/9b1ITTNrdDdOR8ISv3wYnDhiMp001D6ImuSs/Gkvj33N3U6th+9Vw09/jIaO7LE0U7Oc8SncRhVANv33+O3Zmm3G8tmmMnlHVwbhFB6WiJycqLQshF4D8y8duji+tKcEJ/rcxS5hIXQGTnWiLMV/0GV060RQceEscxEg3tJs0rkh4RYkTIKviIEclArr4/Xwwa955jDwB7lxV8zkdqO5Ab2WGYZ3K5IiLcHfI7wcZw3qy++JBy2CcvN0RAl6q+cKzi79193s0nn1oM0ycZIculvb3IJ5yAULp2Rj/tahSaHiMJQK5agPIDrnIxuJ24lH4ws9kITnfS5XSE/5nI7bgfCGCPFqArza4bKDafY9obfNA4FYTDYPgvyPCj6/jei2J4hNCOkvlGerq0eJ7y2R8lReuzKLYgtw/82yWpbus=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(346002)(376002)(136003)(396003)(39860400002)(451199018)(36840700001)(40470700004)(46966006)(36860700001)(36756003)(40460700003)(82740400003)(316002)(5660300002)(54906003)(426003)(336012)(83380400001)(356005)(2616005)(47076005)(86362001)(40480700001)(81166007)(82310400005)(26005)(186003)(2906002)(478600001)(6916009)(4326008)(44832011)(70586007)(70206006)(1076003)(6666004)(8676002)(8936002)(41300700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 15:14:06.8060
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0d53bf81-94d0-4f00-17a6-08db039dcbe4
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT063.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4144

This series adds support for booting gzip compressed images with a u-boot
header. Currently, Xen does not support such images because it tries to
decompress the kernel before probing the uImage header.

The problem can be solved using 2 different approaches:
1) Split uImage probing into 2 stages. The first stage is called before
   decompression; it does the usual probing and correctly sets up the module
   start address and size by taking the uImage header size into account. The
   second stage is called after decompression to update
   zimage.{kernel_addr,len}.

2) Call the decompression function from within the uImage probing to avoid the
   split and to make the function self-contained. This way, the only case that
   falls through to probing other image types is when no u-boot header is
   detected.

This series takes the second approach, as it results in cleaner code.
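As a rough illustration of the second approach, the probing flow can be sketched like this. This is a simplified, self-contained C sketch with an abbreviated 8-byte header (the real uImage header is 64 bytes, and the Xen implementation differs in detail); probe_uimage, be32 and the return codes are illustrative only, with -2 standing in for -ENOENT.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define UIMAGE_MAGIC  0x27051956u
#define IH_COMP_NONE  0
#define IH_COMP_GZIP  1
#define HDR_SIZE      8   /* simplified: real uImage headers are 64 bytes */

/* Read a big-endian 32-bit value, as uImage header fields are big-endian. */
static uint32_t be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/*
 * Self-contained probe: returns 0 on success (setting *payload_off past the
 * header), -1 for an unsupported compression type, and -2 (standing in for
 * -ENOENT) when no u-boot header is present -- the only case that should
 * fall through to probing other image types.
 */
static int probe_uimage(const uint8_t *img, size_t len, size_t *payload_off)
{
    if ( len < HDR_SIZE || be32(img) != UIMAGE_MAGIC )
        return -2;                       /* no u-boot header */

    if ( img[4] != IH_COMP_NONE && img[4] != IH_COMP_GZIP )
        return -1;                       /* only gzip is handled */

    /* For a gzip payload, decompression would be invoked at this point. */
    *payload_off = HDR_SIZE;
    return 0;
}
```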

Michal Orzel (2):
  xen/arm: Move kernel_uimage_probe definition after kernel_decompress
  xen/arm: Add support for booting gzip compressed uImages

 docs/misc/arm/booting.txt |   3 -
 xen/arch/arm/kernel.c     | 197 +++++++++++++++++++++++---------------
 2 files changed, 118 insertions(+), 82 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 15:14:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 15:14:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487705.755382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMsKz-0004gO-7P; Tue, 31 Jan 2023 15:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487705.755382; Tue, 31 Jan 2023 15:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMsKz-0004gB-26; Tue, 31 Jan 2023 15:14:21 +0000
Received: by outflank-mailman (input) for mailman id 487705;
 Tue, 31 Jan 2023 15:14:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPLs=54=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pMsKx-0004eZ-82
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 15:14:19 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e89::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec478786-a179-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 16:14:16 +0100 (CET)
Received: from BN9PR03CA0593.namprd03.prod.outlook.com (2603:10b6:408:10d::28)
 by CY5PR12MB6323.namprd12.prod.outlook.com (2603:10b6:930:20::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 15:14:10 +0000
Received: from BN8NAM11FT090.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10d:cafe::fb) by BN9PR03CA0593.outlook.office365.com
 (2603:10b6:408:10d::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 15:14:10 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT090.mail.protection.outlook.com (10.13.177.105) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6064.22 via Frontend Transport; Tue, 31 Jan 2023 15:14:09 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 09:14:09 -0600
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 31 Jan 2023 09:14:08 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec478786-a179-11ed-b63b-5f92e7d2e73a
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W/XQYafhuPgq7Oph46gN87TD/SGXM7axcGm53RaU9hgENHW/aKejvzcZpvKPjnl4MJAHMhGAsF5mxgkeoO/8FV6gullTGBsFl16psu2/JBS0Btokruc4u5MBxRi+psO+gu3habfIcbkPyXyGasS/FPq+kFg80d48RUIYj+MIy1XQHGQKPD8bLCkgYrWHSSetxDJwmGNhKHfn38Pjgc+hOEqQB+nga2XyzdWDI8IP3lp/pe5be4RsQ2+DJWQmFnxx0T59suAeMG7mrNOskLjIBpThBxj8BxYF8TeUM99Q7mc+fwRNurGjrFLYOW+q7O5qYiWsn9T8O+BcZl6GUrjFPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JNe9aseWivHMHPuq28/4HWwkjR5Ct14+G2rm7fTg36Y=;
 b=WxTUEwyK9TJdWEZpas6+07qCCr3whDRrcda22U7kISvaZRWuUYedlzhsDi5LlGT+LAUYo+RppODQAzBO9RSOyU0TRPWF00LzWTXOv6VaUOYUU/SVzsWE3kY8CciB9YV5MxKlyUV+/nT6t/x2f880tipCGZpFEoHBX2TREdm9wukWDoLIdjR7jQSQZLfbCBnx5yn1iVmjed9ENSfN5X6YRkM1Cek6LlCZ4fRD/LF3w5s9EXLA8pc9dPNpstEH/ZCJDyxYrHYusB1HSOLrvno0rOzQUQtC32uAPB4kI53aucE07HGNcbQrEX+mMebhZit4AIGuhoB1DQnFvSgDUmDEnw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JNe9aseWivHMHPuq28/4HWwkjR5Ct14+G2rm7fTg36Y=;
 b=FKE04ibgvEz3a84ML11S4OAf5Xv5QPYGyOBePO3rJiIv9n2I0gZjtWu2xnZ/nIiPhs0hAXgdMnBVfcELGDb2VJRxm6kfJg7pIEAC55Tw5fOktDHUrrgZ1OiXFmwJP573TTEv00u6BxzDCLFRKPCODHZXgTwPS9AqPDV5qDIg5GM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<ayankuma@amd.com>
Subject: [PATCH 2/2] xen/arm: Add support for booting gzip compressed uImages
Date: Tue, 31 Jan 2023 16:13:54 +0100
Message-ID: <20230131151354.25943-3-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230131151354.25943-1-michal.orzel@amd.com>
References: <20230131151354.25943-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT090:EE_|CY5PR12MB6323:EE_
X-MS-Office365-Filtering-Correlation-Id: 34b32970-ae76-4a0c-534f-08db039dcdc7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	LOH9+Y8mvLBgKxyxWQe1YKdd3+tQ1Sjaccwtu7cH66guy4EUGE5y8IFP+b9JQ4wubreVpW+Usv+HEGfHptBVT11NlsZDidU5qsAcCeNFqti7aatjVayFDMOZwf/dlbNbqZGB3CxNXaDjlB6O1bSDQa5wzpLaguX5LieptCJDNiAR1/2Y1L9bC9FgvQNlJuSuZg2hm324kUfSj6BExF/0/L8tLUjqOGS5XsOqJEIQgA86qRmzcNRHkRGKc23JKLE279CKgx+NP9X61uhJ9KTYFS1gDqWYIdOBzW0uEbqTgKUIALfKqNBw/NlJXOQOip8ttGdqkfoys0iLZveOUcTiZBEWNTFyrSIMYttOoahujUwAApc18p5tsa1qQ04pONkn9kURaBi1AcIHaaPWskgbPqke9YemL2RVK0wQ3SGRm/Ge1BCD8dlReb5lDzTJX/M1Y0E+1bqR3qv+yBGP0yFnAdc1uyHmZwhzfJyjdvNIpJ6sXH8McVYIl4fvbHsXPglVXRHfzf1Hd9VEDLh7XV5pyzMAlY0HeI4rTmjiIS//r+tDc3yWikpiG5bdwZjYK33rJQlfymKj4BUfYSrN2NB75hWahoSSR1qWjw/YauMmFkrysugvDxQzwKIYUIXaK2NmaPPNfi51QopNUa0BvfszWLn915bP+B9qDp0OegOa9pilxf3GQGrk7bYDHswdKGFtXofFi0waHF7CkEdSY5D684LKXmqYyXvtPL7k44lpRPE=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(346002)(396003)(136003)(376002)(39860400002)(451199018)(46966006)(36840700001)(40470700004)(26005)(186003)(478600001)(6916009)(70206006)(8676002)(83380400001)(6666004)(1076003)(70586007)(316002)(54906003)(4326008)(36860700001)(5660300002)(41300700001)(44832011)(2616005)(82740400003)(8936002)(2906002)(356005)(86362001)(81166007)(82310400005)(336012)(47076005)(426003)(40460700003)(36756003)(40480700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 15:14:09.9729
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 34b32970-ae76-4a0c-534f-08db039dcdc7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT090.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY5PR12MB6323

At the moment, Xen does not support booting gzip compressed uImages.
This is because we try to decompress the kernel before probing the
u-boot header. This fails because the u-boot header always appears at
the top of the image, obscuring the gzip header.

Move the call to kernel_uimage_probe before kernel_decompress and make
the function self-contained by taking the following actions:
 - take a pointer to struct bootmodule as a parameter,
 - check the comp field of the u-boot header to determine the compression
   type,
 - in the case of a compressed image, modify the boot module start address
   and size by taking the header size into account, then call
   kernel_decompress,
 - set up zimage.{kernel_addr,len} accordingly,
 - return -ENOENT when no u-boot header is found, to distinguish this case
   from other return values and make it the only one that falls through to
   probing other image types.

This is done to avoid splitting the uImage probing into 2 stages (executed
before and after decompression), which would otherwise be necessary to
properly update the boot module start and size before decompression, and
zimage.{kernel_addr,len} afterwards.

Remove the limitation from the booting.txt documentation.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 docs/misc/arm/booting.txt |  3 ---
 xen/arch/arm/kernel.c     | 51 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 45 insertions(+), 9 deletions(-)

diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
index bd7bfe7f284a..02f7bb65ec6d 100644
--- a/docs/misc/arm/booting.txt
+++ b/docs/misc/arm/booting.txt
@@ -50,9 +50,6 @@ Also, it is to be noted that if user provides the legacy image header on
 top of zImage or Image header, then Xen uses the attributes of legacy
 image header to determine the load address, entry point, etc.
 
-Known limitation: compressed kernels with a uboot headers are not
-working.
-
 
 Firmware/bootloader requirements
 --------------------------------
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 068fbf88e492..ea5f9618169e 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -265,11 +265,14 @@ static __init int kernel_decompress(struct bootmodule *mod)
 #define IH_ARCH_ARM             2       /* ARM          */
 #define IH_ARCH_ARM64           22      /* ARM64        */
 
+/* uImage Compression Types */
+#define IH_COMP_GZIP            1
+
 /*
  * Check if the image is a uImage and setup kernel_info
  */
 static int __init kernel_uimage_probe(struct kernel_info *info,
-                                      paddr_t addr, paddr_t size)
+                                      struct bootmodule *mod)
 {
     struct {
         __be32 magic;   /* Image Header Magic Number */
@@ -287,6 +290,8 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
     } uimage;
 
     uint32_t len;
+    paddr_t addr = mod->start;
+    paddr_t size = mod->size;
 
     if ( size < sizeof(uimage) )
         return -EINVAL;
@@ -294,13 +299,21 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
     copy_from_paddr(&uimage, addr, sizeof(uimage));
 
     if ( be32_to_cpu(uimage.magic) != UIMAGE_MAGIC )
-        return -EINVAL;
+        return -ENOENT;
 
     len = be32_to_cpu(uimage.size);
 
     if ( len > size - sizeof(uimage) )
         return -EINVAL;
 
+    /* Only gzip compression is supported. */
+    if ( uimage.comp && uimage.comp != IH_COMP_GZIP )
+    {
+        printk(XENLOG_ERR
+               "Unsupported uImage compression type %"PRIu8"\n", uimage.comp);
+        return -EOPNOTSUPP;
+    }
+
     info->zimage.start = be32_to_cpu(uimage.load);
     info->entry = be32_to_cpu(uimage.ep);
 
@@ -330,8 +343,26 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
         return -EINVAL;
     }
 
-    info->zimage.kernel_addr = addr + sizeof(uimage);
-    info->zimage.len = len;
+    if ( uimage.comp )
+    {
+        int rc;
+
+        /* Prepare start and size for decompression. */
+        mod->start += sizeof(uimage);
+        mod->size -= sizeof(uimage);
+
+        rc = kernel_decompress(mod);
+        if ( rc )
+            return rc;
+
+        info->zimage.kernel_addr = mod->start;
+        info->zimage.len = mod->size;
+    }
+    else
+    {
+        info->zimage.kernel_addr = addr + sizeof(uimage);
+        info->zimage.len = len;
+    }
 
     info->load = kernel_zimage_load;
 
@@ -561,6 +592,16 @@ int __init kernel_probe(struct kernel_info *info,
         printk("Loading ramdisk from boot module @ %"PRIpaddr"\n",
                info->initrd_bootmodule->start);
 
+    /*
+     * uImage header always appears at the top of the image (even compressed),
+     * so it needs to be probed first. Note that in case of compressed uImage,
+     * kernel_decompress is called from kernel_uimage_probe, making it
+     * self-contained (i.e. we fall through only when no header is found).
+     */
+    rc = kernel_uimage_probe(info, mod);
+    if ( rc != -ENOENT )
+        return rc;
+
     /* if it is a gzip'ed image, 32bit or 64bit, uncompress it */
     rc = kernel_decompress(mod);
     if ( rc && rc != -EINVAL )
@@ -570,8 +611,6 @@ int __init kernel_probe(struct kernel_info *info,
     rc = kernel_zimage64_probe(info, mod->start, mod->size);
     if (rc < 0)
 #endif
-        rc = kernel_uimage_probe(info, mod->start, mod->size);
-    if (rc < 0)
         rc = kernel_zimage32_probe(info, mod->start, mod->size);
 
     return rc;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 15:14:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 15:14:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487704.755370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMsKt-0004PL-TO; Tue, 31 Jan 2023 15:14:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487704.755370; Tue, 31 Jan 2023 15:14:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMsKt-0004PE-QA; Tue, 31 Jan 2023 15:14:15 +0000
Received: by outflank-mailman (input) for mailman id 487704;
 Tue, 31 Jan 2023 15:14:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OPLs=54=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pMsKr-0004A7-Qq
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 15:14:13 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2060c.outbound.protection.outlook.com
 [2a01:111:f400:fe59::60c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea9867b8-a179-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 16:14:13 +0100 (CET)
Received: from BN0PR02CA0018.namprd02.prod.outlook.com (2603:10b6:408:e4::23)
 by SA1PR12MB6871.namprd12.prod.outlook.com (2603:10b6:806:25f::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 15:14:09 +0000
Received: from BN8NAM11FT065.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e4:cafe::3) by BN0PR02CA0018.outlook.office365.com
 (2603:10b6:408:e4::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 15:14:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT065.mail.protection.outlook.com (10.13.177.63) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6064.22 via Frontend Transport; Tue, 31 Jan 2023 15:14:08 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 09:14:07 -0600
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 31 Jan 2023 09:14:06 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea9867b8-a179-11ed-933c-83870f6b2ba8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eWVB5kY93w5hQ7p1s0ORVx6QjjlvZHxv/mebwjEP6vO7a3wUwB9P28mlNyZU6w1ITR2qhutm6ZwbTMpBO4gy4b3VXKXKtWdSGRGzVRDNI57e0rFVjkZgxpLQg+pNaXnPwc4sfSH7tfvN8cboniisFD16LBkleqP72yDEBgChoqR0MnFKDfuKGmXGYqedkHhtfsogKVqP/TXzCVsAotqP39YcGW07HZVecg9eiDIT1t6nim5ME+eK5bhgQP1fnfpC+tS7fE1GMYcw5thbqxHq0tJ1pPiJH6+g4/IUX9Wqp8KKw023/1IICW9Ly86+pn+9uGvtWZniDgcuGgUwh8sdLQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3PnjvfUKhMVAdYzSmtJaKp16nTOZivuM1huY1HnWKtg=;
 b=LkxqSfSgjCvwFgifLGhqYeuxvVtQDyIhxYw4yb6b3uHCypT8uDnuONuUCreofmYUc+xY3jaeVVpA/nt3nn/fDxCOQfM+oxNh/mjbGd/k+n5e58kAqS3ulXDE1IZFtyCpk69vn29fmKJ58g6ZFfdC1E5+H5js15GDyP3kyABb7kaPIVLbM3MzRaVSxMh5FSkyoBoXElkuvStG+Yp67+5HaDnNFz1UJZ0q44p8qjQILPWenQBkXYSDvRKvIPEFmRPxPK1pxzmhenUcXE7pB659FCw/6BeOPfpQQ2Lh2qNyiCmCZTxhVHJKPwX1rhQgOtmRp6uJU3hqhWnP2+wTmZrp7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3PnjvfUKhMVAdYzSmtJaKp16nTOZivuM1huY1HnWKtg=;
 b=m8Vebu3HKbEHlAi2S7BFHmqBCdnUqDV2d0qyInj8YVV9pMZA+vsVPSWnci97B9qyffFyIKVT3UgInbbdAnm985PekrHhURiPjwLtPtDnhShioojjxQ9twBkgYEtFyMzaHkW4voqTmxm3mJi8ryKjImCpMndXm+d9OQoH7mkKjhs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<ayankuma@amd.com>
Subject: [PATCH 1/2] xen/arm: Move kernel_uimage_probe definition after kernel_decompress
Date: Tue, 31 Jan 2023 16:13:53 +0100
Message-ID: <20230131151354.25943-2-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230131151354.25943-1-michal.orzel@amd.com>
References: <20230131151354.25943-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT065:EE_|SA1PR12MB6871:EE_
X-MS-Office365-Filtering-Correlation-Id: 27b4f85e-0882-4450-51f9-08db039dccca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 15:14:08.3148
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 27b4f85e-0882-4450-51f9-08db039dccca
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT065.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB6871

In a follow-up patch, we will call the kernel_decompress function from
within kernel_uimage_probe to support booting compressed images with a
u-boot header. Therefore, move the kernel_uimage_probe definition after
kernel_decompress so that the latter is visible to the former.

No functional change intended.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/kernel.c | 146 +++++++++++++++++++++---------------------
 1 file changed, 73 insertions(+), 73 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 36081e73f140..068fbf88e492 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -186,6 +186,79 @@ static void __init kernel_zimage_load(struct kernel_info *info)
     iounmap(kernel);
 }
 
+static __init uint32_t output_length(char *image, unsigned long image_len)
+{
+    return *(uint32_t *)&image[image_len - 4];
+}
+
+static __init int kernel_decompress(struct bootmodule *mod)
+{
+    char *output, *input;
+    char magic[2];
+    int rc;
+    unsigned int kernel_order_out;
+    paddr_t output_size;
+    struct page_info *pages;
+    mfn_t mfn;
+    int i;
+    paddr_t addr = mod->start;
+    paddr_t size = mod->size;
+
+    if ( size < 2 )
+        return -EINVAL;
+
+    copy_from_paddr(magic, addr, sizeof(magic));
+
+    /* only gzip is supported */
+    if ( !gzip_check(magic, size) )
+        return -EINVAL;
+
+    input = ioremap_cache(addr, size);
+    if ( input == NULL )
+        return -EFAULT;
+
+    output_size = output_length(input, size);
+    kernel_order_out = get_order_from_bytes(output_size);
+    pages = alloc_domheap_pages(NULL, kernel_order_out, 0);
+    if ( pages == NULL )
+    {
+        iounmap(input);
+        return -ENOMEM;
+    }
+    mfn = page_to_mfn(pages);
+    output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
+
+    rc = perform_gunzip(output, input, size);
+    clean_dcache_va_range(output, output_size);
+    iounmap(input);
+    vunmap(output);
+
+    if ( rc )
+    {
+        free_domheap_pages(pages, kernel_order_out);
+        return rc;
+    }
+
+    mod->start = page_to_maddr(pages);
+    mod->size = output_size;
+
+    /*
+     * Need to free pages after output_size here because they won't be
+     * freed by discard_initial_modules
+     */
+    i = PFN_UP(output_size);
+    for ( ; i < (1 << kernel_order_out); i++ )
+        free_domheap_page(pages + i);
+
+    /*
+     * Free the original kernel, update the pointers to the
+     * decompressed kernel
+     */
+    fw_unreserved_regions(addr, addr + size, init_domheap_pages, 0);
+
+    return 0;
+}
+
 /*
  * Uimage CPU Architecture Codes
  */
@@ -289,79 +362,6 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
     return 0;
 }
 
-static __init uint32_t output_length(char *image, unsigned long image_len)
-{
-    return *(uint32_t *)&image[image_len - 4];
-}
-
-static __init int kernel_decompress(struct bootmodule *mod)
-{
-    char *output, *input;
-    char magic[2];
-    int rc;
-    unsigned int kernel_order_out;
-    paddr_t output_size;
-    struct page_info *pages;
-    mfn_t mfn;
-    int i;
-    paddr_t addr = mod->start;
-    paddr_t size = mod->size;
-
-    if ( size < 2 )
-        return -EINVAL;
-
-    copy_from_paddr(magic, addr, sizeof(magic));
-
-    /* only gzip is supported */
-    if ( !gzip_check(magic, size) )
-        return -EINVAL;
-
-    input = ioremap_cache(addr, size);
-    if ( input == NULL )
-        return -EFAULT;
-
-    output_size = output_length(input, size);
-    kernel_order_out = get_order_from_bytes(output_size);
-    pages = alloc_domheap_pages(NULL, kernel_order_out, 0);
-    if ( pages == NULL )
-    {
-        iounmap(input);
-        return -ENOMEM;
-    }
-    mfn = page_to_mfn(pages);
-    output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
-
-    rc = perform_gunzip(output, input, size);
-    clean_dcache_va_range(output, output_size);
-    iounmap(input);
-    vunmap(output);
-
-    if ( rc )
-    {
-        free_domheap_pages(pages, kernel_order_out);
-        return rc;
-    }
-
-    mod->start = page_to_maddr(pages);
-    mod->size = output_size;
-
-    /*
-     * Need to free pages after output_size here because they won't be
-     * freed by discard_initial_modules
-     */
-    i = PFN_UP(output_size);
-    for ( ; i < (1 << kernel_order_out); i++ )
-        free_domheap_page(pages + i);
-
-    /*
-     * Free the original kernel, update the pointers to the
-     * decompressed kernel
-     */
-    fw_unreserved_regions(addr, addr + size, init_domheap_pages, 0);
-
-    return 0;
-}
-
 #ifdef CONFIG_ARM_64
 /*
  * Check if the image is a 64-bit Image.
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 15:43:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 15:43:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487720.755392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMsmX-0000z0-I8; Tue, 31 Jan 2023 15:42:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487720.755392; Tue, 31 Jan 2023 15:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMsmX-0000yt-FI; Tue, 31 Jan 2023 15:42:49 +0000
Received: by outflank-mailman (input) for mailman id 487720;
 Tue, 31 Jan 2023 15:42:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+3C0=54=citrix.com=prvs=388cd47ec=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMsmW-0000yn-6F
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 15:42:48 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e6d862ff-a17d-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 16:42:45 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6d862ff-a17d-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675179765;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=cFIPxvZZ0rwSP//GS3Rv8v2zkq9ONDq0Ii/XMdlr8oM=;
  b=YqxNn5US6hfiamVW04Wm8q0eCp2D+DLgZH6TgGzmmnIK80yAvXgXyoLs
   JvSwZ4jHrI8HHKpICErT/DOPPIdcPm7CrGtFYq6p8ndcZW6PH95H1xjH/
   MRODm7dfeZLQ2bIxx1dC7St/6PfFgz3bDkOwK94b5hON40lpKO2xjjkr9
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 94969644
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="94969644"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH] .gitignore: Only ignore hidden dependency files
Date: Tue, 31 Jan 2023 15:42:35 +0000
Message-ID: <20230131154235.19845-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The current pattern also ignores directories suffixed with ".d", like:
    tools/hotplug/*/rc.d
    tools/hotplug/*/init.d

Avoid this by only ignoring "hidden" files, i.e. those whose names
start with a dot.
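For illustration, the difference between the two patterns (the file name below is just an example; a gitignore pattern without a trailing slash matches both files and directories):

```gitignore
# Old pattern: matches any name ending in ".d", including the
# tools/hotplug/*/rc.d and tools/hotplug/*/init.d directories.
*.d

# New pattern: matches only dot-prefixed names, i.e. the hidden
# dependency files (e.g. .foo.o.d) that the build generates.
.*.d
```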

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 .gitignore | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/.gitignore b/.gitignore
index 880ba88c55..5b30d8fc36 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,12 +1,12 @@
 .hg
 .*.cmd
+.*.d
+.*.d2
 .*.tmp
 *.orig
 *~
 *.swp
 *.o
-*.d
-*.d2
 *.cppcheck.txt
 *.cppcheck.xml
 *.opic
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 15:57:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 15:57:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487725.755403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMt0n-000334-Ql; Tue, 31 Jan 2023 15:57:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487725.755403; Tue, 31 Jan 2023 15:57:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMt0n-00032x-Nj; Tue, 31 Jan 2023 15:57:33 +0000
Received: by outflank-mailman (input) for mailman id 487725;
 Tue, 31 Jan 2023 15:57:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMt0m-00032r-No
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 15:57:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMt0m-0006cZ-E7; Tue, 31 Jan 2023 15:57:32 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=[192.168.16.209]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMt0m-0002l5-8P; Tue, 31 Jan 2023 15:57:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=rPEg+q1CImV6M5e08dmVDYPpY7sP9dueIu/fKJCnZ2g=; b=YmUoXHJG23UKtB2naQ8TKFlO7e
	VyIjI5mx2bIlmhXsfTg/PSLNCZUWgotzTkm9Uwsk/CCBkXi9L6qUK89nXHvIqSGKKiH/9PLPAtT9c
	y4XU6c6Cw1WAbUU+5ujRMt0nyiFs2N0xWEJx351t7/pgLEBeSl9cia4E6whK7FFNoNG4=;
Message-ID: <d2185e4f-bf00-95a2-1c39-f91b9eddde99@xen.org>
Date: Tue, 31 Jan 2023 15:57:29 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [XEN v2 04/11] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230117174358.15344-1-ayan.kumar.halder@amd.com>
 <20230117174358.15344-5-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2301191514240.731018@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2301191532370.731018@ubuntu-linux-20-04-desktop>
 <1a227f76-005d-0307-5161-2e8432171eb7@xen.org>
 <82e2a6ae-b941-6fa6-dd00-c2c36bd5e079@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <82e2a6ae-b941-6fa6-dd00-c2c36bd5e079@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Ayan,

On 31/01/2023 10:51, Ayan Kumar Halder wrote:
> On 20/01/2023 10:16, Julien Grall wrote:
>>>> Last comment, maybe we could add fdt_get_mem_rsv_paddr to setup.h
>>>> instead of introducing xen/arch/arm/include/asm/device_tree.h, given
>>>> that we already have device tree definitions there
>>>> (device_tree_get_reg). I am OK either way.
>>>   Actually I noticed you also add dt_device_get_paddr to
>>> xen/arch/arm/include/asm/device_tree.h. So it sounds like it is a good
>>> idea to introduce xen/arch/arm/include/asm/device_tree.h, and we could
>>> also move the declarations of device_tree_get_reg, device_tree_get_u32
>>> there.
>>
>> None of the helpers you mention sound Arm-specific. So why should they 
>> be moved to arch-specific headers?
> 
> The functions (fdt_get_mem_rsv_paddr(), device_tree_get_reg(), 
> device_tree_get_u32()) are currently used by Arm specific code only.
> 
> So, I thought of implementing fdt_get_mem_rsv_paddr() in 
> xen/arch/arm/include/asm/device_tree.h.
> 
> However, I see your point as well. So the alternative approaches would be:
> 
> Approach 1) Implement fdt_get_mem_rsv_paddr() in 
> ./xen/include/xen/libfdt/libfdt.h.
> 
> However, libfdt is imported from external sources, so I am not sure if 
> this is the best approach.

One way would be to introduce "libfdt_xen.h", which would contain all 
the implementations specific to Xen.

> 
> Approach 2) Rename fdt_get_mem_rsv_paddr() to dt_get_mem_rsv_paddr() and 
> implement it in ./xen/include/xen/device_tree.h.
> 
> However, this would be an odd one, as it invokes fdt_get_mem_rsv(), for 
> which we would have to include libfdt.h in device_tree.h.

You could implement the functions in the device_tree.c but see below.

> 
> 
> So, I am biased towards having xen/arch/arm/include/asm/device_tree.h in 
> which we can implement all the non-static fdt and dt functions (which 
> are required by xen/arch/arm/*).
> 
> And then (as Stefano suggested), we can move the definitions of the 
> following to xen/arch/arm/include/asm/device_tree.h.
> 
> device_tree_get_reg()
> 
> device_tree_get_u32()
> 
> device_tree_for_each_node()

Well, none of them is truly arch-specific either. I could imagine 
RISC-V using them in the future if it also cares about checking for 
truncation.

I have a slight preference for introducing libfdt_xen.h over adding the 
implementation in device_tree.h, so we can keep the unflattened API 
(device_tree.h) separate from the flattened API (libfdt.h).

But this is not a strong preference between the two. That said, I would 
strongly argue against adding the helpers in asm/*.h because there is 
nothing Arm-specific about them.
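As a sketch of what such a libfdt_xen.h wrapper could look like (hypothetical: fdt_get_mem_rsv() is stubbed out here so the example is self-contained, paddr_t is assumed 32-bit to exercise the truncation check, and the error code is a stand-in):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t paddr_t;          /* assumption: 32-bit physical addresses */
#define FDT_ERR_BADVALUE 100       /* stand-in error code */

/* Stub standing in for libfdt's fdt_get_mem_rsv() for illustration:
 * pretend 'fdt' is simply a table of (address, size) u64 pairs. */
static int fdt_get_mem_rsv(const void *fdt, int n,
                           uint64_t *address, uint64_t *size)
{
    const uint64_t *rsv = fdt;

    *address = rsv[2 * n];
    *size    = rsv[2 * n + 1];
    return 0;
}

/* Xen wrapper: like fdt_get_mem_rsv(), but fails if either value would
 * be truncated when narrowed to paddr_t. */
static int fdt_get_mem_rsv_paddr(const void *fdt, int n,
                                 paddr_t *address, paddr_t *size)
{
    uint64_t addr64, size64;
    int rc = fdt_get_mem_rsv(fdt, n, &addr64, &size64);

    if ( rc )
        return rc;
    if ( addr64 != (paddr_t)addr64 || size64 != (paddr_t)size64 )
        return -FDT_ERR_BADVALUE;  /* value does not fit in paddr_t */

    *address = addr64;
    *size = size64;
    return 0;
}
```

The wrapper keeps the flattened-tree API surface (libfdt-style naming and return codes) while adding the truncation check discussed above.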

>>
>> [...]
>>
>>>>> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
>>>>> index 0085c28d74..f536a3f3ab 100644
>>>>> --- a/xen/arch/arm/bootfdt.c
>>>>> +++ b/xen/arch/arm/bootfdt.c
>>>>> @@ -11,9 +11,9 @@
>>>>>   #include <xen/efi.h>
>>>>>   #include <xen/device_tree.h>
>>>>>   #include <xen/lib.h>
>>>>> -#include <xen/libfdt/libfdt.h>
>>>>>   #include <xen/sort.h>
>>>>>   #include <xsm/xsm.h>
>>>>> +#include <asm/device_tree.h>
>>>>>   #include <asm/setup.h>
>>>>>     static bool __init device_tree_node_matches(const void *fdt, 
>>>>> int node,
>>>>> @@ -53,10 +53,15 @@ static bool __init 
>>>>> device_tree_node_compatible(const void *fdt, int node,
>>>>>   }
>>>>>     void __init device_tree_get_reg(const __be32 **cell, u32 
>>>>> address_cells,
>>>>> -                                u32 size_cells, u64 *start, u64 
>>>>> *size)
>>>>> +                                u32 size_cells, paddr_t *start, 
>>>>> paddr_t *size)
>>>>>   {
>>>>> -    *start = dt_next_cell(address_cells, cell);
>>>>> -    *size = dt_next_cell(size_cells, cell);
>>>>> +    /*
>>>>> +     * dt_next_cell will return u64 whereas paddr_t may be u64 or 
>>>>> u32. Thus, one
>>>>> +     * needs to cast paddr_t to u32. Note that we do not check for 
>>>>> truncation as
>>>>> +     * it is user's responsibility to provide the correct values 
>>>>> in the DT.
>>>>> +     */
>>>>> +    *start = (paddr_t) dt_next_cell(address_cells, cell);
>>>>> +    *size = (paddr_t) dt_next_cell(size_cells, cell);
>>
>> There is no need for explicit cast here.
> 
> Should we have it for the sake of documentation (that it casts u64 to 
> paddr_t)?

You already have a comment on top of dt_next_cell() to explain the 
typecast. So I would rather not add the explicit casts.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 16:17:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 16:17:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487734.755414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMtK1-0006NE-J7; Tue, 31 Jan 2023 16:17:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487734.755414; Tue, 31 Jan 2023 16:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMtK1-0006N7-FD; Tue, 31 Jan 2023 16:17:25 +0000
Received: by outflank-mailman (input) for mailman id 487734;
 Tue, 31 Jan 2023 16:17:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+3C0=54=citrix.com=prvs=388cd47ec=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMtJz-0006N1-Qe
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 16:17:23 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc140901-a182-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 17:17:21 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc140901-a182-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675181841;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=QT32KNaxwP5fKO6GMkHGbVdshYYYumzE0Z2WWk0aKBY=;
  b=Dakhg3jqdlNHZmIr5pORgRzteex1mOPXQtoZ0jRUJQuHcUVvEOhoRUet
   ysdzbajsdAuAZnZVHKuMmQRDogeLUaljataUJYbFaKGjHRdrqEG7rZIoD
   E1eZ8+Zu++J62AMkXhkUmxLIzDFGk1BBL3qg61PjoK6JlxW4daPVdNd/O
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 93912470
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:nzdU7qyHxxJy9S1mXvd6t+fqxirEfRIJ4+MujC+fZmUNrF6WrkUPx
 mMaX2CPbq7eZjb8fNp0PNnn8kNUusTUmN9kQFNsqiAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTbaeYUidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UwHUMja4mtC5QRnPqkT5jcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KTwSx
 NA+ERoSUgKgpOmam7SFe8R0r+12eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+hgGX/dDtJ7kqYv6Mt70DYzRBr0airO93QEjCPbZQOzx/C+
 j2el4j/Ki8GEfmmlijVyyqPwerdkXLBXNgYT5Tto5aGh3XMnzdOWXX6T2CTvv2RmkO4HdVFJ
 CQ8+CU0qrMp3Fe2VdS7VBq9yFaNphMGUsBcO/E74gqKjKHT5m6xCm0FUiRQLscrscIwSCAx/
 lCMltLtQzdotdW9WX+bs7uZsz62ESwUNnMZIz8JSxMf5Nvuq511iQjAJv5vFb+plNrCAjz1z
 jaHsDMWiq0aiIgA0KDTwLzcq2vy/N6TFFdzv1iJGDv/tWuVebJJeaT1tWn3y89qM7qLbXqKk
 CMCpfmz9MknWMTleDO2fM0BG7Sg5vCgOTLagEJyE5RJywlB60JPbqgLvmggeR4B3tIsPGawP
 RSN4V85CIp7ZiPCUENhX26m5y3GJ4DEHM+taP3bZ8EmjnNZJF7ep3EGiaJ9MgnQfKkQfUMXY
 87znSWEVyxy5UFbIN2eGY8gPUcDnHxW+I8qbcmTI+6b+bSffmWJbrwOLUGDaOs0hIvd/lqIq
 o4EaZHSk08AOAEbXsUw2ddDRW3m0FBhXcymwyCpXrHrzvVa9JEJVKaKnOJJl31NlKVJjObYl
 kxRqWcBoGcTcUbvcF3QAlg6MeOHYHqKhS5jVcDaFQryiidLjEfGxPt3SqbbipF9q7I9laQtF
 aNeEyhCa9wWIgn6F/0mRcGVhORfmN6D2Gpi4wLNjOADQqNd
IronPort-HdrOrdr: A9a23:+IVZNqhH/76touEDhZHpGdhuGnBQXtgji2hC6mlwRA09TySZ//
 rOoB0+726StN9xYgBFpTnuAsW9qB/nmqKdpLNhW4tKPzOW3VdATrsSjrcKqgeIc0aVm9K1l5
 0QEZSWYOeAdGSS5vyb3ODXKbgd/OU=
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="93912470"
Date: Tue, 31 Jan 2023 16:17:06 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Dmytro Semenets <dmitry.semenets@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Dmytro Semenets
	<dmytro_semenets@epam.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
Subject: Re: [RFC PATCH v3 00/10] PCID server
Message-ID: <Y9k/ApO7d6FGNAxX@perard.uk.xensource.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230115113111.1207605-1-dmitry.semenets@gmail.com>

On Sun, Jan 15, 2023 at 01:31:01PM +0200, Dmytro Semenets wrote:
> The pcid server is used if a domain has a passthrough PCI controller and we want
> to assign some device to another domain.
> The pcid server should be launched in the domain that owns the PCI controller, and
> it processes requests from other domains.
> The pcid server is needed if the domain which owns the PCI controller is not Domain-0.

Hi Dmytro,

Thanks for the patch.

I already asked this on the previous version[1], but could you write a new
binary for pcid? I still don't think libxl is appropriate for this kind
of tool; libxl is supposed to be used to manage a VM. Or could you
explain why it is implemented in xl/libxl?

[1] https://lore.kernel.org/xen-devel/YzMw8i7w7HyINjEp@perard.uk.xensource.com/

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 16:35:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 16:35:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487742.755425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMtaq-0000mV-W0; Tue, 31 Jan 2023 16:34:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487742.755425; Tue, 31 Jan 2023 16:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMtaq-0000mO-TL; Tue, 31 Jan 2023 16:34:48 +0000
Received: by outflank-mailman (input) for mailman id 487742;
 Tue, 31 Jan 2023 16:34:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+3C0=54=citrix.com=prvs=388cd47ec=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pMtap-0000mI-0F
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 16:34:47 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2aa028ac-a185-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 17:34:45 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2aa028ac-a185-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675182885;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=OI0aUjgIDP5iwsnoG0J9LhGm3fJcpKlqkuEsb9JqX44=;
  b=hs7IW1os3qTFabLSWPUt70x6NW6ClR/HdpvDBkqNTEW8yAlECJSYoFwW
   8m5IoWOAFuvmANyIRJI6B61ifbmwUxk9aILrdvj8WOduPx/05nl/xHUgJ
   DTUuOmrvNC0/HQHLbIXAU/81qH5TgbOLbSAW1irnvCRa7+hoy3ExDSJ+9
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 94981247
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:aiNk9aPqnFugflzvrR06l8FynXyQoLVcMsEvi/4bfWQNrUorhjxSx
 2YcWjqFbvyDMDP8e9hyPd7k/ExQ6JGGx4JhSwto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQA+KmU4YoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9Suv3rRC9H5qyo42tB5QVmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0rlyHWJr2
 PckESwAdjrTlvm82Z+xReY506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUo3WFJUMxBrHz
 o7A117nWEBKadPO8Bbb0y682O7kpiPSaLtHQdVU8dY12QbOlwT/EiY+WV66veOozFWzXt9ZJ
 lAP0iUrpKk2skesS7HVTxC+5XKJoBMYc95RCPEhrhGAzLLO5ASUDXRCSSROAPQqsd4qXzsdz
 VKMktXkGSdHvaWcTDSW8bL8hSy2ETgYKykFfyBsZQkK+d74u6kokwnCCN1kFcadidn4Gir5x
 TyQmyE4i68Ols4A16i9/lfvjiqlo97CSQtdzgzRV3m55xh4ZYeSY5Gr6FHd4PBDK66UVlCE+
 nMDnqC25fgDF5iXmASRQe8GG/ei4PPtDdHHqQcxRd97rW3roiP9O9kKu1mSOXuFLO5bfCPqR
 WLYhTpN6Yd5bGqxZ7ZaaY2+XpFCIbfbKfzpUfXdb9xra5d3dROa8CwGWXN8z1wBg2B3z/hhZ
 M7zndKESC9DVP85lGbeq/I1i+dD+8wo+Y/EqXkXJTyD2KHWWnOaQKxt3LCmPrFgt/PsTOk4H
 r9i2yq2J/d3CrSWjsr/q9R7wbU2wZ8TW/jLRzR/LLLrH+afMDhJ5wXt6b0gYZd5uK9ei/3F+
 HqwMmcBlgWi3CWcd1/SMio8AF8KYXqYhStrVRHAwH7ygyRzCWpRxPh3m2QLkUkPq7U4kK8co
 wgtcMScGPVfIgkrCBxEBaQRWLdKLUzx7SrXZnrNXdTKV8I4L+A/0oO+L1SHGehnJnbfiPbSV
 JX6iV2FGcBaHV45ZCsUAdr2p26MUbEmsLoadyP1zhN7Ii0ALKACx/TNs8IK
IronPort-HdrOrdr: A9a23:b7BPJK/gidJq61MWVlpuk+A0I+orL9Y04lQ7vn2ZESYlDfBxl6
 iV7ZMmPGzP+UgssRAb6KK90cy7Kk80mqQFmrU5ELe5VgzvuG+lN6tl4IeK+VGPJ8STzJ856U
 4kSdkDNDSSNykOsS+Z2njDLz9I+rDunc/J9ISurUuFDzsaFp2IhD0JbDpzZ3cGPDWucqBJba
 Z0iPA3wwaIRHIsZMy9K1w9NtKjmzUW/KiWFSLuASRM1ODEt0LY1FezKWnp4v4xaUI9/Ysf
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="94981247"
Date: Tue, 31 Jan 2023 16:34:37 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Dmytro Semenets <dmitry.semenets@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Dmytro Semenets
	<dmytro_semenets@epam.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Oleksandr Andrushchenko
	<oleksandr_andrushchenko@epam.com>, Anastasiia Lukianenko
	<anastasiia_lukianenko@epam.com>
Subject: Re: [RFC PATCH v3 03/10] tools/xl: Add pcid daemon to xl
Message-ID: <Y9lDHdnhW56AOEAX@perard.uk.xensource.com>
References: <20230115113111.1207605-1-dmitry.semenets@gmail.com>
 <20230115113111.1207605-4-dmitry.semenets@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230115113111.1207605-4-dmitry.semenets@gmail.com>

On Sun, Jan 15, 2023 at 01:31:04PM +0200, Dmytro Semenets wrote:
> diff --git a/tools/include/pcid.h b/tools/include/pcid.h
> new file mode 100644
> index 0000000000..6506b18d25
> --- /dev/null
> +++ b/tools/include/pcid.h

Please rename it "xen-pcid.h". We should try to use our own namespace
to avoid issues with another unrelated project also using "pcid.h".

Also, it would be a good idea to introduce this file and/or a protocol
description in a separate patch.

> @@ -0,0 +1,94 @@
> +/*
> +    Common definitions for Xen PCI client-server protocol.
> +    Copyright (C) 2021 EPAM Systems Inc.
> +
> +    This library is free software; you can redistribute it and/or
> +    modify it under the terms of the GNU Lesser General Public
> +    License as published by the Free Software Foundation; either
> +    version 2.1 of the License, or (at your option) any later version.
> +
> +    This library is distributed in the hope that it will be useful,
> +    but WITHOUT ANY WARRANTY; without even the implied warranty of
> +    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +    Lesser General Public License for more details.
> +
> +    You should have received a copy of the GNU Lesser General Public
> +    License along with this library; If not, see <http://www.gnu.org/licenses/>.

This licence boilerplate could be replaced by
    /* SPDX-License-Identifier: LGPL-2.1+ */
at the top of the file.

> +*/
> +
> +#ifndef PCID_H
> +#define PCID_H
> +
> +#define PCID_SRV_NAME           "pcid"
> +#define PCID_XS_TOKEN           "pcid-token"
> +
> +#define PCI_RECEIVE_BUFFER_SIZE 4096
> +#define PCI_MAX_SIZE_RX_BUF     MB(1)
> +
> +/*
> + *******************************************************************************
> + * Common request and response structures used by the pcid remote protocol are
> + * described below.
> + *******************************************************************************
> + * Request:
> + * +-------------+--------------+----------------------------------------------+
> + * | Field       | Type         | Comment                                      |
> + * +-------------+--------------+----------------------------------------------+
> + * | cmd         | string       | String identifying the command               |
> + * +-------------+--------------+----------------------------------------------+
> + *
> + * Response:
> + * +-------------+--------------+----------------------------------------------+
> + * | Field       | Type         | Comment                                      |
> + * +-------------+--------------+----------------------------------------------+
> + * | resp        | string       | Command string as in the request             |

Instead of sending back the command, you could use a new field "id" that
would be set by the client and sent back as-is by the server. A command
could be sent several times, but with an "id" the client can set a
different id each time and so be able to differentiate which commands
failed.

An "id" field is used by QEMU's QMP protocol, for example.

> + * +-------------+--------------+----------------------------------------------+
> + * | error       | string       | "okay", "failed"                             |
> + * +-------------+--------------+----------------------------------------------+
> + * | error_desc  | string       | Optional error description string            |
> + * +-------------+--------------+----------------------------------------------+
> + *
> + * Notes.
> + * 1. Every request and response must contain the above mandatory structures.
> + * 2. In case a bad packet or an unknown command is received by the server
> + * side, a valid reply with the corresponding error code must be sent to the
> + * client.
> + *
> + * Requests and responses, which require SBDF as part of their payload, must
> + * use the following convention for encoding SBDF value:
> + *
> + * pci_device object:
> + * +-------------+--------------+----------------------------------------------+
> + * | Field       | Type         | Comment                                      |
> + * +-------------+--------------+----------------------------------------------+
> + * | sbdf        | string       | SBDF string in form SSSS:BB:DD.F             |
> + * +-------------+--------------+----------------------------------------------+
> + */

It would be nice to have a better description of the protocol; it's not
even spelled out that JSON is being used.

Also, what are the possible commands, with their arguments?

> +
> +#define PCID_MSG_FIELD_CMD      "cmd"
> +
> +#define PCID_MSG_FIELD_RESP     "resp"
> +#define PCID_MSG_FIELD_ERR      "error"
> +#define PCID_MSG_FIELD_ERR_DESC "error_desc"
> +
> +/* pci_device object fields. */
> +#define PCID_MSG_FIELD_SBDF     "sbdf"
> +
> +#define PCID_MSG_ERR_OK         "okay"
> +#define PCID_MSG_ERR_FAILED     "failed"
> +#define PCID_MSG_ERR_NA         "NA"
> +
> +#define PCID_SBDF_FMT           "%04x:%02x:%02x.%01x"

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 16:51:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 16:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487748.755436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMtr1-0003h4-Bo; Tue, 31 Jan 2023 16:51:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487748.755436; Tue, 31 Jan 2023 16:51:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMtr1-0003gx-8g; Tue, 31 Jan 2023 16:51:31 +0000
Received: by outflank-mailman (input) for mailman id 487748;
 Tue, 31 Jan 2023 16:51:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nz19=54=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pMtr0-0003gp-7b
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 16:51:30 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2044.outbound.protection.outlook.com [40.107.104.44])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80e1c70a-a187-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 17:51:27 +0100 (CET)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7205.eurprd04.prod.outlook.com (2603:10a6:10:1b3::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 16:51:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::2991:58a4:e308:4389%7]) with mapi id 15.20.6043.038; Tue, 31 Jan 2023
 16:51:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80e1c70a-a187-11ed-b63b-5f92e7d2e73a
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P2ep+/ELOFpDyYX9mEJIPQMx4az626oq7ZmSeX4pPL541UqmQlRVIRQtthQP0Nb0ZPOAnf4wuNg64oWzsmqEFGpIX6jcUAWxPN7XENCxytBiF5wV52ARu9VsCxP/M15YZSb6Ngi52FRlAREiiS50jdVEoInaeqScqgD+J4VaVg4GgvdexPeoPkyuyGwZsYGS+pbpYKXHOAPMAOkF6sZSERLAgD6bj6zf9TlOtyjjIDbO3UVO402pe+I9p0sO/CbwAgq6QUvKcjpTcUvL2unGmYLFY6NVsqJJZ01bxI3x0NLZUxjZ7ztG9MpbvCVhROp9G2jFJBaQH1k2UmRqZUz6cw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EruNCHzPhzQ4V1UOEwvU5gmSguaQemeDD1dhPH6CeEE=;
 b=cuSBYwJnXVaJHaSfreQmjYG8ogc4gFGe3ItVado3XCg8P5crxea+0IqHO1/Eq7b/PpcKjZ8BXj5Dk3mzfTd3jcoerZbKpclZIsUNP50Vac315XGH5hYqJLZ/JhrFUA9kw4fV0Is4/ayw6Wez9KIl2dl7pymoo8iC1dQctKhmoUceQocn5pOu1W4BOvVXWyTpXkCmLO4TyLjvtYRAfOiC1ArfahqSlekhi/QRK0mvW7QCVTbgfuwhn7pHRIII9EgumXF1DEwwkG9aP2MDR//zTGeXNHxvnARv2o4xzUFbSc+bgZ3owMQDynik6TTXJjuX2FwMkWZwFvDOtARBsMdRcw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EruNCHzPhzQ4V1UOEwvU5gmSguaQemeDD1dhPH6CeEE=;
 b=X/WpDXpSk9eEEMTbqGp+CLQqqG9dv2QFROkVzKvixmq2s4fldsXM9U/1owIrwD1e8zcm7EEo4CbU3Q/QIeeb5FbBdZY5QIzmH1FyXZC7eve/IBBH/Z5QMThD9bde2zNpI/m0on0zZGElT5H05QEOhQCH53d2xX3he/5Ge2Cm0aJfZhzdZuV34+WFudrts6dPWN6dsz1tFguPw2dUvjCxoxmTBe8VniJtC9d3o7Kcyg131c2EfWcBFblBLKP6p9b1tqxYot9qFuqGAyjrO8qT5qeUqE2/0h672VYGq7o5vm1WGvn93dsCyUNWmasB7l+Na/AUkQSKX05NwggeDGYCAA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ea7dce71-17fd-467a-3f09-c07da48997a5@suse.com>
Date: Tue, 31 Jan 2023 17:51:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.6.1
Subject: Re: [XEN PATCH] .gitignore: Only ignore hidden dependency files
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230131154235.19845-1-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230131154235.19845-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0008.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7205:EE_
X-MS-Office365-Filtering-Correlation-Id: 84771236-7190-48d2-0940-08db03ab63da
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5NPLupF7H3jT2sYm8lQbj6fTHBLS5FdwPqvfwADVnHgcmApqmQCjX8Sipg1JQLbmVdADDMuFHFzZN0BdqcgV57g9B7Ijcs6/rb3UTob6IYRlX7F3a5tZESdW4M5ED+iK/jKZneWuB13PaT0ZJJ0lmm67Bn9Bl93GrYLFSExW/9caqnWuH26xXcIEW8NKv/kJlpH/lBEFmMwoUa93B59iF3hRJe3atcvzZsSGY87rb1CxMUwmTwke0UtkCaFvkQBBjebz3V3vjQrMdNNi0uobJ2cO/WxDk+PUzeQ+8EguFO1eb/O9bW9tPe/ouHuhaHYenS/EMwrFleOMoAMUw2DGjjUY+1zXwQBedkyhT7eZphoYhgliEsJO/AqmFv0rnHZr6PV5OcPnL2IY2ZXs/Dtcxet+YJnYM9YkwLUpeIf+J4PrwvyjXUG3XnNV3zGHC/0b0+SUEW4NwZ3WtLke4aDt0cBAXbeORbwk6BZ1b8jIwox+qttDxeA7dtflb6KJJcAtEdiG/yzRTZDlaSh6RH2dH4EOSZ3TuWZAEw6d+LQwv7/4GaLkB/H/52cC11zDLruEwAKCjgpgoV+9Bmao2luxYwIFxorfNjxjKnX10SR1mbxlz5D0wKBatrGx/bdeJz0Z+Vq+7jkDB5+kP+5GqZf6F+7Nq6M5BR4tSNokqg5fRKN/b1kKG8MrGyZpwDzAgse81K9VdhZ4msDXAofK8YKKziOd20djuxGStiLu5B7+y70=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230025)(39850400004)(136003)(376002)(396003)(366004)(346002)(451199018)(478600001)(6486002)(31696002)(86362001)(38100700002)(26005)(2616005)(186003)(6512007)(36756003)(53546011)(5660300002)(54906003)(66556008)(316002)(66476007)(66946007)(8936002)(8676002)(6916009)(4326008)(31686004)(2906002)(41300700001)(6506007)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VG9JaDJQTUlncVVyRlNrdGN3VmVUWWdMdlZESnhPdCtxZnoybm12bTBNR1Ft?=
 =?utf-8?B?VHBrNHh4cWZDSUowNS8zam1TUVNyaUJTZDJ5VUt4QTlndzVYRksrRVRsbHJa?=
 =?utf-8?B?bll6ZTZQTjZDRnFrYnc5MVdsVXFwYlhzYTVqTnIyQ1ZSbVRZRTNVQkpFbXkx?=
 =?utf-8?B?dTBOdm1KVTVqdTdRMXQ3OFpDdTQxdWdZK3VsVFpoWWt2TnoxcUpkb3hkS0hm?=
 =?utf-8?B?dEs2eCtvdUNBWk5hWVBuY1VjL3pwTmVVcnVmbFBQWlRwc2w1YjQ2MlRvZW5y?=
 =?utf-8?B?RUlwQmQ3OE5TdG9oV2lIV3hZTU04SlQzdjNReE4raTRnYWFVSHdkT1g1VkYv?=
 =?utf-8?B?aDVONTNNd3Eyd1ZXaTRmWHM3MUN2S1BMa1QraXYrL2Y1R2FhVERpMDBYT2JZ?=
 =?utf-8?B?NUlLSmE1Nk1vczVvNHVUU09BWHdoR2RIekh6cENBMUc1cXhLZDZOYVdqUVZV?=
 =?utf-8?B?SjVuYXZaT3NPSU1lK0FmeW5CSTFhK2cxRXNVNkdaT0VFMFRNNWtTVkliSGkz?=
 =?utf-8?B?SFpaaEJLS0MzZnh6SUlkUFh1czMyYzNTbkNYSk9CVVVyQUsxQURPZ25tQ2dV?=
 =?utf-8?B?KzhESTBBOUl2UVNhWFJGN1BVN2FaSWJ1M2F6K2lyd0NzSzZFVGpoYVdKUG1k?=
 =?utf-8?B?NlR0Vkg3SGNaSFA5cWZObGYzWG9Jb01tOWlBbmNLajlFdzdqMXgxNzkwVmd5?=
 =?utf-8?B?SHpmdGs1U0NKdXQyWG9LY2g5RlAwS0c2UkNuNWg4L01WWnZiQjVwc2VRMlBK?=
 =?utf-8?B?WEhZb1EzalhHY2xyTHlxby94aWpRSXlLbzdGVEFPRjRsWm1QaUNIdjh5QnBH?=
 =?utf-8?B?UEFZWTQwVW1jOU1pYk1LZ0M3cEZneFNneXNVYTJtZmRqMDZkNEVBM3BKWTM2?=
 =?utf-8?B?TnlVTFVPdzAraTBhbnozNVU2Wk9ZZ3RUenRocXo0NlpoUDNQYXIxWVlRei9j?=
 =?utf-8?B?cCtkU28vaVB1bnlPQU1BT2dKemxWL1Vod1dGWXpZQmhLb295a3VnZWlRZDN4?=
 =?utf-8?B?b1JMU0E4YXlUUkVvNGFBNVhYaU9DUW5mSmlwdXpBallYWlRaTkpubW5uT2d2?=
 =?utf-8?B?eWF2b2szNGN6Vm5ibk5qNGl3TXV2cUJvZTV0RS9maEFNN1BwWXlRT2VQUmlr?=
 =?utf-8?B?QVJhVjBVenJqSGhWRFZsNVdvU21tNnVDZ0ZmcDYrNzMrcGwzZWJ5MnNwUGRS?=
 =?utf-8?B?OHRsL1lsR2g5WnZNVzdGTklWR2I1TVBnTENwTDNZUVhSOVdMODFUQzk3RVJx?=
 =?utf-8?B?cXU1Q3g1S2FBMEtneDdQWFJIUTRQVnlzSzdOVW1WbkJJVXl3aXBIMGhLbmxM?=
 =?utf-8?B?QkFkbEthNCtXdHNqRHQ3T2UrZ3paRW1xY3BwRUhVWDdUUG1HQ0ErNEcyeDlv?=
 =?utf-8?B?NFJIQUtpd05NMmt0Rk1pcksyRzV1b2xQN01BYURSQjg5aEo4UTRwM3RESUoz?=
 =?utf-8?B?MmU1VTNsZVdFLzJMeWp1STZQMjVPV3ZLRFo3VnRJY21OQkdLNGF4TkFPVlNv?=
 =?utf-8?B?VUFHNXhhNTBUbUo5U09UdTZEZEtkbTBEM2dHUW5Udmw5YzFKNjAyNzlLUExw?=
 =?utf-8?B?Nk1tUnZqeFBGMkUySzJlZDdVWlJWT3M5aEFwZWUySE4valQ4SHlwNWRON1Q1?=
 =?utf-8?B?SXQ4V2M4VXdmMTdVSk1vWWRIbWY1TmtMRU44TmZrRUlUK3BCYy85VFp2dVlQ?=
 =?utf-8?B?eG5hNURDN01sTm5vV0lRM3ZFcmt1VEJwZ1MwRktudHBuRWNYZGN1MDhmbzA1?=
 =?utf-8?B?QWpoWE14ZFc3WVFMcSszd3R5YzE2aHB6b0lPaU44RGw5T2pCWlZhdGQ3TUE2?=
 =?utf-8?B?Zks2YTErdm1qaUJVYTV3a3M3Qk9IOFZYUEpwV0w2TDlJNkhtdkkyWFNpUUUy?=
 =?utf-8?B?NHR5RWNENEZZMHBaSlMxV1UzQ0E3V0F0REdic3pDbEMyK3pYdFQ3TFFQV2Jk?=
 =?utf-8?B?TlJ0UnVWWFMra2lBSmI1Tno0aWp1d1pSS3hjOEJ0b1gxMHNLZm13U0ozNmR4?=
 =?utf-8?B?SG1vTGsyWlFUS3NhNitmVU5TVml0V3UwUnArS2pJSFZucEdtcnlGR3BPdVBs?=
 =?utf-8?B?UzdENVVZTHpkM2xCMTFNeTRmc2YxSzI1M2tKaWpQa3cxbldhanBMQ2pycXRq?=
 =?utf-8?Q?IB7QhdcmzGDCvBwMxQJNt7sLI?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 84771236-7190-48d2-0940-08db03ab63da
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 16:51:25.4612
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mKNlEk1COGJ/qbl2kswi89/PVJRtV62q4+9exwyfmV/DxtijtQxso7VMNaN+bTGejghjKajr28L5Qe2xqu7hTA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7205

On 31.01.2023 16:42, Anthony PERARD wrote:
> The current pattern also ignores directories suffixed with ".d", like:
>     tools/hotplug/*/rc.d
>     tools/hotplug/*/init.d
> 
> Avoid this by only ignoring "hidden" files, whose names start with
> a dot.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
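The pattern change being described can be sketched as follows (a hypothetical hunk; the actual .gitignore diff is not shown in this excerpt):

```
# Before: also matches directories such as tools/hotplug/*/rc.d
*.d
# After: only matches hidden dependency files such as .xen.o.d
.*.d
```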

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Jan 31 16:54:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 16:54:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487754.755447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMttx-0004MK-Tz; Tue, 31 Jan 2023 16:54:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487754.755447; Tue, 31 Jan 2023 16:54:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMttx-0004MD-QV; Tue, 31 Jan 2023 16:54:33 +0000
Received: by outflank-mailman (input) for mailman id 487754;
 Tue, 31 Jan 2023 16:54:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMttx-0004M3-4y; Tue, 31 Jan 2023 16:54:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMttx-0008PK-0o; Tue, 31 Jan 2023 16:54:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMttw-0007U1-LU; Tue, 31 Jan 2023 16:54:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMttw-0007Q3-L3; Tue, 31 Jan 2023 16:54:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UoAE8aSTpcvk89FyrnXqEsO8z3/h8ml2JlZ8/79y0ww=; b=pshbwDkFFO9rE/6vbcVPP9/oWs
	UNIUhq4QKkr3tXa3xKqpyfDeBs/svOKA7xjIJOaUlq2rEr0t1uJAFm3gzQyfZr+bJdKoLjcLI2W8F
	uYm1y3y2KkXsiaCT5DBJldWXU8oa+IJKRUK1atNdrvCyVKLypArrG5u0+VKc1cqFOI2w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176295-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 176295: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-rtds:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<job status>:broken:regression
    xen-4.17-testing:build-armhf:<job status>:broken:regression
    xen-4.17-testing:build-armhf:host-install(4):broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<job status>:broken:regression
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-qemut-rhel6hvm-intel:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:<job status>:broken:regression
    xen-4.17-testing:test-amd64-i386-pair:<job status>:broken:regression
    xen-4.17-testing:build-armhf:syslog-server:running:regression
    xen-4.17-testing:test-amd64-amd64-xl:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-pair:host-install/dst_host(7):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-qemut-rhel6hvm-intel:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-install(5):broken:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start.2:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:host-ping-check-xen:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:xen-boot:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-xsm:guest-localmigrate/x10:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:xen-boot:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.17-testing:test-amd64-amd64-xl-rtds:xen-boot:fail:heisenbug
    xen-4.17-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.17-testing:build-armhf:capture-logs:broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:host-install(5):broken:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-seattle:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=2f8851c37f88e4eb4858e16626fcb2379db71a4f
X-Osstest-Versions-That:
    xen=c4972a4272690384b15d5706f2a833aed636895e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 16:54:32 +0000

flight 176295 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176295/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <job status>       broken
 test-amd64-i386-xl-qemut-ws16-amd64    <job status>                 broken
 test-amd64-amd64-xl-rtds        <job status>                 broken
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict    <job status>   broken
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 175447
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <job status>          broken in 176275
 test-amd64-amd64-xl             <job status>                 broken  in 176275
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <job status>   broken in 176275
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>     broken in 176275
 test-amd64-i386-qemut-rhel6hvm-intel    <job status>          broken in 176290
 test-amd64-i386-xl-qemut-win7-amd64    <job status>           broken in 176290
 test-amd64-i386-pair            <job status>                 broken  in 176290
 build-armhf                   3 syslog-server                running

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl          5 host-install(5) broken in 176275 pass in 176295
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken in 176275 pass in 176295
 test-amd64-amd64-xl-qemuu-ovmf-amd64 5 host-install(5) broken in 176275 pass in 176295
 test-amd64-i386-pair 7 host-install/dst_host(7) broken in 176290 pass in 176295
 test-amd64-i386-xl-qemut-win7-amd64 5 host-install(5) broken in 176290 pass in 176295
 test-amd64-i386-qemut-rhel6hvm-intel 5 host-install(5) broken in 176290 pass in 176295
 test-amd64-i386-xl-qemut-ws16-amd64  5 host-install(5)   broken pass in 176281
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 5 host-install(5) broken pass in 176290
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 5 host-install(5) broken pass in 176290
 test-amd64-amd64-xl-rtds      5 host-install(5)          broken pass in 176290
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 21 guest-start.2 fail in 176261 pass in 176295
 test-amd64-amd64-xl-rtds  10 host-ping-check-xen fail in 176275 pass in 176261
 test-amd64-i386-xl-qemut-win7-amd64  8 xen-boot  fail in 176275 pass in 176295
 test-amd64-amd64-xl-xsm 20 guest-localmigrate/x10 fail in 176275 pass in 176295
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 176275 pass in 176295
 test-amd64-i386-xl-qemut-ws16-amd64  8 xen-boot  fail in 176281 pass in 176275
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 176281 pass in 176295
 test-amd64-amd64-xl-rtds      8 xen-boot         fail in 176290 pass in 176275

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 175447
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 5 host-install(5) broken in 176275 like 175437
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop   fail in 176275 like 175447
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 176275 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 176275 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 176281 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 176281 never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 176290 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 175447
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 175447
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 175447
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 175447
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 175447
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle   3 hosts-allocate           starved in 176275 n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  2f8851c37f88e4eb4858e16626fcb2379db71a4f
baseline version:
 xen                  c4972a4272690384b15d5706f2a833aed636895e

Last test of basis   175447  2022-12-22 00:40:06 Z   40 days
Testing same since   176224  2023-01-26 22:14:43 Z    4 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            broken  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          broken  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        broken  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     broken  
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm broken
broken-job test-amd64-i386-xl-qemut-ws16-amd64 broken
broken-job test-amd64-amd64-xl-rtds broken
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict broken
broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs
broken-step test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm host-install(5)
broken-step test-amd64-i386-xl-qemut-ws16-amd64 host-install(5)
broken-step test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict host-install(5)
broken-step test-amd64-amd64-xl-rtds host-install(5)
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job test-amd64-i386-xl-qemut-win7-amd64 broken
broken-job test-amd64-i386-pair broken
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
broken-job test-amd64-amd64-xl broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm broken
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken

Not pushing.

------------------------------------------------------------
commit 2f8851c37f88e4eb4858e16626fcb2379db71a4f
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Jan 26 11:00:24 2023 +0100

    Revert "tools/xenstore: simplify loop handling connection I/O"
    
    I'm observing guest kexec trigger xenstored to abort on a double free.
    
    gdb output:
    Program received signal SIGABRT, Aborted.
    __pthread_kill_implementation (no_tid=0, signo=6, threadid=140645614258112) at ./nptl/pthread_kill.c:44
    44    ./nptl/pthread_kill.c: No such file or directory.
    (gdb) bt
        at ./nptl/pthread_kill.c:44
        at ./nptl/pthread_kill.c:78
        at ./nptl/pthread_kill.c:89
        at ../sysdeps/posix/raise.c:26
        at talloc.c:119
        ptr=ptr@entry=0x559fae724290) at talloc.c:232
        at xenstored_core.c:2945
    (gdb) frame 5
        at talloc.c:119
    119            TALLOC_ABORT("Bad talloc magic value - double free");
    (gdb) frame 7
        at xenstored_core.c:2945
    2945                talloc_increase_ref_count(conn);
    (gdb) p conn
    $1 = (struct connection *) 0x559fae724290
    
    Looking at a xenstore trace, we have:
    IN 0x559fae71f250 20230120 17:40:53 READ (/local/domain/3/image/device-model-domid )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    0      0  msec      10000 credit     1000000 reserve          0 discard
    wrl: dom    3      0  msec      10000 credit     1000000 reserve          0 discard
    OUT 0x559fae71f250 20230120 17:40:53 ERROR (ENOENT )
    wrl: dom    0      1  msec      10000 credit     1000000 reserve        100 discard
    wrl: dom    3      1  msec      10000 credit     1000000 reserve        100 discard
    IN 0x559fae71f250 20230120 17:40:53 RELEASE (3 )
    DESTROY watch 0x559fae73f630
    DESTROY watch 0x559fae75ddf0
    DESTROY watch 0x559fae75ec30
    DESTROY watch 0x559fae75ea60
    DESTROY watch 0x559fae732c00
    DESTROY watch 0x559fae72cea0
    DESTROY watch 0x559fae728fc0
    DESTROY watch 0x559fae729570
    DESTROY connection 0x559fae724290
    orphaned node /local/domain/3/device/suspend/event-channel deleted
    orphaned node /local/domain/3/device/vbd/51712 deleted
    orphaned node /local/domain/3/device/vkbd/0 deleted
    orphaned node /local/domain/3/device/vif/0 deleted
    orphaned node /local/domain/3/control/shutdown deleted
    orphaned node /local/domain/3/control/feature-poweroff deleted
    orphaned node /local/domain/3/control/feature-reboot deleted
    orphaned node /local/domain/3/control/feature-suspend deleted
    orphaned node /local/domain/3/control/feature-s3 deleted
    orphaned node /local/domain/3/control/feature-s4 deleted
    orphaned node /local/domain/3/control/sysrq deleted
    orphaned node /local/domain/3/data deleted
    orphaned node /local/domain/3/drivers deleted
    orphaned node /local/domain/3/feature deleted
    orphaned node /local/domain/3/attr deleted
    orphaned node /local/domain/3/error deleted
    orphaned node /local/domain/3/console/backend-id deleted
    
    and no further output.
    
    The trace shows that DESTROY was called for connection 0x559fae724290,
    but that is the same pointer (conn) main() was looping through from
    connections.  So it wasn't actually removed from the connections list?
    
    Reverting commit e8e6e42279a5 "tools/xenstore: simplify loop handling
    connection I/O" fixes the abort/double free.  I think the use of
    list_for_each_entry_safe is incorrect.  list_for_each_entry_safe makes
    traversal safe for deleting the current iterator, but RELEASE/do_release
    will delete some other entry in the connections list.  I think the
    observed abort is because list_for_each_entry_safe has next pointing to the
    deleted connection, and it is used in the subsequent iteration.
    
    Add a comment explaining the unsuitability of list_for_each_entry_safe.
    Also notice that the old code takes a reference on next, which would
    prevent a use-after-free.
    
    This reverts commit e8e6e42279a5723239c5c40ba4c7f579a979465d.
    
    This is XSA-425/CVE-2022-42330.
    
    Fixes: e8e6e42279a5 ("tools/xenstore: simplify loop handling connection I/O")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)
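The list_for_each_entry_safe pitfall described in the commit message above can be sketched with a toy singly linked list. All names here (struct node, demo_iterate) are invented for illustration, not the xenstored code: snapshotting next only protects against deleting the *current* entry, so deleting some other entry can leave the snapshot dangling.

```c
#include <stddef.h>

/* Toy model of the hazard -- not the xenstored code.  The "freed" flag
 * simulates free() so the dangling visit can be observed without real
 * undefined behaviour. */
struct node {
    struct node *next;
    int freed;
};

/* Walk a -> b -> c with the list_for_each_entry_safe pattern (snapshot
 * next, then run the body).  While visiting 'a', the body deletes 'b',
 * i.e. some *other* entry -- just as RELEASE/do_release can delete a
 * different connection.  Returns 1 if the iterator landed on the
 * "freed" node. */
int demo_iterate(void)
{
    struct node c = { NULL, 0 };
    struct node b = { &c, 0 };
    struct node a = { &b, 0 };

    int landed_on_freed = 0;
    struct node *cur, *nxt;

    for (cur = &a; cur != NULL; cur = nxt) {
        nxt = cur->next;              /* snapshot: while at 'a', nxt == &b */
        if (cur == &a) {
            a.next = b.next;          /* the body unlinks 'b' ...          */
            b.freed = 1;              /* ... and "frees" it                */
        }
        if (cur->freed)
            landed_on_freed = 1;      /* the stale snapshot was followed   */
    }
    return landed_on_freed;           /* 1: with a real free() this would
                                       * be a use-after-free */
}
```

Taking a reference on next, as the reverted code's predecessor did, keeps the snapshotted entry alive until the iteration moves past it, which is why the old loop did not exhibit the abort.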


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 17:31:28 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176298-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176298: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 17:31:12 +0000

flight 176298 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176298/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    5 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (the default
    value). This causes kernel_zimage_place() to treat the binary (contained
    within the uImage) as a position-independent executable, and it therefore
    loads it at an incorrect address.
    
    The correct approach is to read "uimage.load" and set
    info->zimage.start. This ensures that the binary is loaded at the
    correct address. Also, read "uimage.ep" and set info->entry (i.e. the
    kernel entry address).
    
    If the user provides a load address (i.e. "uimage.load") of 0x0, the image
    is treated as a position-independent executable. Xen can load such an
    image at any address it considers appropriate. A position-independent
    executable cannot have a fixed entry point address.
    
    This behavior applies to both arm32 and arm64 platforms.
    
    Earlier, for both arm32 and arm64 platforms, Xen ignored the load and
    entry point addresses set in the uImage header. With this commit, Xen
    will use them, making its handling of uImage headers consistent with
    U-Boot.
    
    Users who want to use Xen with statically partitioned domains can provide
    non-zero load and entry addresses for the dom0/domU kernel. The load and
    entry addresses provided must be within the memory region allocated by
    Xen.
    
    A deviation from U-Boot behaviour is that we consider a load address of
    0x0 to denote that the image supports position-independent execution.
    This is to make the behavior consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)
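
The probing behaviour described in the commit message above can be sketched as follows. This is an illustrative stand-in only: the struct layout and the names `kernel_info`, `uimage_probe`, `zimage_start` are hypothetical simplifications, not Xen's actual kernel_uimage_probe() implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified view of the kernel info Xen tracks. */
struct kernel_info {
    uint64_t zimage_start;       /* load address (0 for a PIE) */
    uint64_t entry;              /* kernel entry point */
    bool position_independent;
};

static void uimage_probe(struct kernel_info *info,
                         uint64_t uimage_load, uint64_t uimage_ep)
{
    if (uimage_load == 0) {
        /*
         * A load address of 0x0 denotes a position-independent
         * executable: Xen may place it anywhere it considers
         * appropriate, and a fixed entry point is meaningless.
         */
        info->position_independent = true;
        info->zimage_start = 0;
        info->entry = 0;
    } else {
        /* Honour the load/entry point addresses from the uImage header. */
        info->position_independent = false;
        info->zimage_start = uimage_load;
        info->entry = uimage_ep;
    }
}
```

The key point is the special-casing of load address 0x0, which is the documented deviation from U-Boot.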


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 17:38:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 17:38:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487772.755469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMuaa-0002H9-KE; Tue, 31 Jan 2023 17:38:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487772.755469; Tue, 31 Jan 2023 17:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMuaa-0002H2-HL; Tue, 31 Jan 2023 17:38:36 +0000
Received: by outflank-mailman (input) for mailman id 487772;
 Tue, 31 Jan 2023 17:38:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMuaa-0002Gs-0t; Tue, 31 Jan 2023 17:38:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMuaZ-0000uu-T0; Tue, 31 Jan 2023 17:38:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMuaZ-0002Jb-AD; Tue, 31 Jan 2023 17:38:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMuaZ-0003OD-9j; Tue, 31 Jan 2023 17:38:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BUoxrf31fX1OFAa8bhQ9m1m652eQERzWaQm9IrFM5jo=; b=40shYnzaEktxIfIL2A9Tyt81YF
	wnapgVKpJj+y9FlLXAgXdi3lap4GhKZytyQ/nl0u1bE42JOLGLzXiNcGz1K6yswVvYfLoddizjk2M
	Li/c2JkoH2dGxqh0weTmMlW9f8m0gONv1+O3I6osdOMFIGVaiRZ4IdV+kTOFMq0Pb4SI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176289-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 176289: trouble: blocked/broken/pass
X-Osstest-Failures:
    libvirt:build-armhf:<job status>:broken:regression
    libvirt:test-amd64-amd64-libvirt-xsm:<job status>:broken:regression
    libvirt:test-amd64-amd64-libvirt-xsm:host-install(5):broken:regression
    libvirt:build-armhf:host-install(4):broken:regression
    libvirt:build-armhf:syslog-server:running:regression
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:build-armhf:capture-logs:broken:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=648391f170ddbb0e92832d543a940bcc84fc2309
X-Osstest-Versions-That:
    libvirt=95a278a84591b6a4cfa170eba31c8ec60e82f940
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 17:38:35 +0000

flight 176289 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176289/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-amd64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-libvirt-xsm  5 host-install(5)        broken REGR. vs. 176139
 build-armhf                   4 host-install(4)        broken REGR. vs. 176139
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176139
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              648391f170ddbb0e92832d543a940bcc84fc2309
baseline version:
 libvirt              95a278a84591b6a4cfa170eba31c8ec60e82f940

Last test of basis   176139  2023-01-26 04:18:49 Z    5 days
Failing since        176233  2023-01-27 04:18:53 Z    4 days    5 attempts
Testing same since   176289  2023-01-31 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiri Denemark <jdenemar@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 broken  
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

broken-job build-armhf broken
broken-job test-amd64-amd64-libvirt-xsm broken
broken-step build-armhf capture-logs
broken-step test-amd64-amd64-libvirt-xsm host-install(5)
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit 648391f170ddbb0e92832d543a940bcc84fc2309
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Thu Jan 26 16:54:06 2023 +0100

    remote: Fix memory leak in remoteDomainMigrateFinish3*
    
    Theoretically, when remoteDomainMigrateFinish3* is called without a
    pointer for storing the migration cookie or its length (i.e. either
    cookieout == NULL or cookieoutlen == NULL), we would leak the freshly
    created virDomain object referenced by rv.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>
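
The leak-avoidance pattern this commit describes can be sketched in self-contained form. The `domain_new`/`domain_unref`/`migrate_finish` names below are illustrative stand-ins for libvirt's virDomain refcounting, not its actual API.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcounted object standing in for virDomain. */
struct domain { int refs; };

static struct domain *domain_new(void)
{
    struct domain *d = calloc(1, sizeof(*d));
    d->refs = 1;
    return d;
}

static void domain_unref(struct domain *d)
{
    if (d && --d->refs == 0)
        free(d);
}

/*
 * Sketch of the fixed pattern: when the caller supplies nowhere to
 * store the migration cookie, the early-return path must release the
 * freshly created domain object instead of leaking it.
 */
static struct domain *migrate_finish(char **cookieout, int *cookieoutlen)
{
    struct domain *rv = domain_new();

    if (cookieout == NULL || cookieoutlen == NULL) {
        domain_unref(rv);   /* the fix: do not leak rv */
        return NULL;
    }

    *cookieout = NULL;      /* real cookie handling elided */
    *cookieoutlen = 0;
    return rv;
}
```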

commit 6f3f6c0f763b9ffd8ef93eb124c88dd0b79138fc
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Mon Jan 30 10:55:22 2023 +0100

    virsh: Make domif-setlink work more than once
    
    In virsh, we have the convenient domif-setlink command, which is
    just a wrapper over virDomainUpdateDeviceFlags() and which allows
    setting the link state of a given guest NIC. It does so by fetching
    the corresponding <interface/> XML snippet and either putting <link
    state=''/> into it or, if the element already exists, setting the
    attribute to the desired value. The XML is then fed into the update
    API.
    
    There is, however, a small bug in detecting the pre-existence of
    the element and its attribute. The code looks at the "link"
    attribute, while in fact the attribute is called "state".
    
    Resolves: https://gitlab.com/libvirt/libvirt/-/issues/426
    Fixes: e575bf082ed4889280be07c986375f1ca15bb7ee
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>
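
The corrected detection can be illustrated with a toy model. The `link_elem` struct and `set_link_state` function below are hypothetical simplifications of libvirt's XML handling; the point is only that the attribute checked (and updated) is "state", not "link", so a second invocation updates the existing attribute instead of missing it.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy model of a <link .../> element carrying a single attribute. */
struct link_elem {
    char attr_name[16];
    char attr_value[16];
};

static void set_link_state(struct link_elem *e, const char *state)
{
    /* Look for the pre-existing attribute by its real name, "state". */
    if (strcmp(e->attr_name, "state") != 0)
        snprintf(e->attr_name, sizeof(e->attr_name), "state"); /* create it */
    /* Either way, set it to the desired value. */
    snprintf(e->attr_value, sizeof(e->attr_value), "%s", state);
}
```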

commit 9f8fba7501327a60f6adb279ea17f0e2276071be
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Thu Jan 26 16:12:00 2023 +0100

    remote: Fix version annotation for remoteDomainFDAssociate
    
    The API was added in libvirt 9.0.0.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Peter Krempa <pkrempa@redhat.com>

commit a0fbf1e25cd0f91bedf159bf7f0086f4b1aeafc2
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 16:48:50 2023 +0100

    rpc: Use struct zero initializer for args
    
    In a recent commit, v9.0.0-104-g0211e430a8, I turned all args
    vars in src/remote/remote_driver.c to be initialized with {0}.
    What I missed was the generated code.
    
    Do what we did in v9.0.0-13-g1c656836e3 and initialize args as
    well, not just ret.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
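
Why the `= {0}` initializer matters can be shown with a minimal sketch. The `call_args` struct and `do_call` function are hypothetical; the real generated RPC structs differ, but the hazard is the same: an unconditional cleanup path frees a pointer that only some code paths assign.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative args struct; real generated RPC structs differ. */
struct call_args {
    char *doms_val;
    int   doms_len;
};

/*
 * With "= {0}" every member is zero-initialized, so the unconditional
 * cleanup may safely pass doms_val to free() even on paths that never
 * assigned it (free(NULL) is a no-op). Without the initializer,
 * doms_val would be indeterminate when ndoms == 0.
 */
static int do_call(int ndoms)
{
    struct call_args args = {0};   /* the initializer the commit restores */
    int was_null;

    if (ndoms != 0)
        args.doms_val = calloc(ndoms, 1);

    was_null = (args.doms_val == NULL);
    free(args.doms_val);           /* safe on every path */
    return was_null;
}
```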

commit 2dde3840b1d50e79f6b8161820fff9fe62f613a9
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix missing device in crypto-builtin XML
    
    Another forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit f3c9cbc36cc10775f6cefeb7e3de2f799dc74d70
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Thu Jan 26 16:57:20 2023 +0100

    qemuxml2argvdata: Fix watchdog parameters in crypto-builtin
    
    Forgotten fix after a post-review rebase.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

commit a2c5c5dad2275414e325ca79778fad2612d14470
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:34 2023 +0100

    news: Add information about iTCO watchdog changes
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 2fa92efe9b286ad064833cd2d8b907698e58e1cf
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 18:22:30 2023 +0100

    Document change to multiple watchdogs
    
    With the reasoning behind it.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 926594dcc82b40f483010cebe5addbf1d7f58b24
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 11:22:22 2023 +0100

    qemu: Add implicit watchdog for q35 machine types
    
    The iTCO watchdog has been part of the q35 machine type since its
    inception; we just did not add it implicitly.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2137346
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d81a27b9815d68d85d2ddc9671649923ee5905d7
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 14:15:06 2023 +0100

    qemu: Enable iTCO watchdog by disabling its noreboot pin strap
    
    In order for the iTCO watchdog to be operational, we must disable the
    noreboot pin strap in qemu.  This is the default starting from the 8.0
    machine types, but is desirable for older ones as well.  We can safely
    do so since the change is not guest-visible.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 5b80e93e42a1d89ee64420debd2b4b785a144c40
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:26:21 2023 +0100

    Add iTCO watchdog support
    
    Supported only with q35 machine types.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1c61bd718a9e311016da799a42dfae18f538385a
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Tue Nov 8 09:10:57 2022 +0100

    Support multiple watchdog devices
    
    This is already possible with qemu, and is actually already happening
    with q35 machines and a specified watchdog, since q35 already includes
    a watchdog we do not include in the XML.  In order to express such a
    possibility, multiple watchdogs need to be supported.
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit c5340d5420012412ea298f0102cc7f113e87d89b
Author: Martin Kletzander <mkletzan@redhat.com>
Date:   Fri Jan 20 10:28:52 2023 +0100

    qemuDomainAttachWatchdog: Avoid unnecessary nesting
    
    Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 1cf7e6ec057a80f3c256d739a8228e04b7fb8862
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:25:06 2023 +0100

    remote: Drop useless cleanup in remoteDispatchNodeGet{CPU,Memory}Stats
    
    The function cannot fail once it starts populating
    ret->params.params_val[i].field.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit d0f339170f35957e7541e5b20552d0007e150fbc
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 15:06:33 2023 +0100

    remote: Avoid leaking uri_out
    
    In case the API returned success and a NULL pointer in uri_out, we would
    leak the preallocated buffer used for storing the uri_out pointer.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
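
The fixed dispatcher pattern can be sketched as below. `api_call` and `dispatch` are illustrative stand-ins, not libvirt functions; the point is that the holder preallocated for the returned URI pointer must be freed on every path, including success with a NULL URI.

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the underlying API: it can succeed while leaving the
 * returned URI NULL. */
static int api_call(char **uri)
{
    *uri = NULL;
    return 0;
}

static int dispatch(void)
{
    /* Preallocated holder for the uri_out pointer. */
    char **uri_out = calloc(1, sizeof(*uri_out));
    int rc = api_call(uri_out);

    if (rc == 0 && *uri_out != NULL) {
        /* ... serialize *uri_out into the RPC reply ... */
        free(*uri_out);
    }
    free(uri_out);   /* the fix: release the holder unconditionally */
    return rc;
}
```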

commit 4849eb2220fb2171e88e014a8e63018d20a8de95
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Wed Jan 25 11:56:28 2023 +0100

    remote: Propagate error from virDomainGetSecurityLabelList via RPC
    
    The daemon side of this API has been broken ever since the API was
    introduced in 2012. Instead of sending the error from
    virDomainGetSecurityLabelList via RPC so that the client can see it, the
    dispatcher would just send a successful reply with return value set to
    -1 (and an empty array of labels). The client side would propagate this
    return value so the client can see the API failed, but the original
    error would be lost.
    
    Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 0211e430a87a96db9a4e085e12f33caad9167653
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 13:19:31 2023 +0100

    remote: Initialize args variable
    
    Recently, in v9.0.0-7-gb2034bb04c we dropped initialization of the
    @args variable. The reasoning was that eventually all members of
    the variable would be set. Well, this is not correct. For
    instance, in remoteConnectGetAllDomainStats() the
    args.doms.doms_val pointer is set iff @ndoms != 0. However,
    regardless of that, the pointer is then passed to VIR_FREE().
    
    Worse, the whole args is passed to
    xdr_remote_connect_get_all_domain_stats_args() which then calls
    xdr_array, which tests the (uninitialized) pointer against NULL.
    
    This effectively reverts b2034bb04c61c75ddbfbed46879d641b6f8ca8dc.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Martin Kletzander <mkletzan@redhat.com>

commit c3afde9211b550d3900edc5386ab121f5b39fd3e
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Thu Jan 26 11:56:10 2023 +0100

    qemu_domain: Don't unref NULL hash table in qemuDomainRefreshStatsSchema()
    
    The g_hash_table_unref() function does not accept NULL; passing
    NULL triggers a glib warning. Check that the hash table is
    non-NULL and unref it only then.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>
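
The guard this commit adds is a one-liner. The sketch below stubs out the glib function so it is self-contained; `hash_table_unref` and `refresh_stats_schema` are illustrative names, and the stub asserts instead of emitting glib's runtime warning, but the contract (no NULL argument) is the same.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for g_hash_table_unref(), which must not be
 * handed NULL (real glib would emit a runtime warning). */
static int unref_calls;

static void hash_table_unref(void *tab)
{
    assert(tab != NULL);
    unref_calls++;
}

/* The fix: guard the unref with a NULL check. */
static void refresh_stats_schema(void *schema)
{
    if (schema)
        hash_table_unref(schema);
}
```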


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 19:03:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 19:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487790.755480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMvuk-0004kk-TO; Tue, 31 Jan 2023 19:03:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487790.755480; Tue, 31 Jan 2023 19:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMvuk-0004kd-Px; Tue, 31 Jan 2023 19:03:30 +0000
Received: by outflank-mailman (input) for mailman id 487790;
 Tue, 31 Jan 2023 19:03:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMvuk-0004kT-6G; Tue, 31 Jan 2023 19:03:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMvuk-0002tF-3O; Tue, 31 Jan 2023 19:03:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMvuj-00065A-KW; Tue, 31 Jan 2023 19:03:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMvuj-0007qe-K9; Tue, 31 Jan 2023 19:03:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oaWSjEEe7eVdwm7yOnxe7nbesol0E5P6yxyPmdIdobA=; b=3abSOmlgQqhVWzjI1Pwo90xAI+
	i4VdIK4ZdrnWgnvn54X6AKj72oFlw9nWqPgoaClER416VyuPqey1WVr5wD5YPwRwnXKIyNp2yArl8
	6ktUH4E+ZPHhYjfHEYyov623wMB6eFMiHW/3sqXZT3wsje8ekFHYJQ2d3ey+X9ewL7ko=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176299-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 176299: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2f2fd79fc4000a9ef89792677e85c99224e5a035
X-Osstest-Versions-That:
    ovmf=35091031329e741b25ed60ac51f4710d75d92310
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 19:03:29 +0000

flight 176299 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176299/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2f2fd79fc4000a9ef89792677e85c99224e5a035
baseline version:
 ovmf                 35091031329e741b25ed60ac51f4710d75d92310

Last test of basis   176287  2023-01-31 02:42:12 Z    0 days
Testing same since   176299  2023-01-31 15:11:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  KasimX Liu <kasimx.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3509103132..2f2fd79fc4  2f2fd79fc4000a9ef89792677e85c99224e5a035 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 19:06:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 19:06:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487801.755491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMvxl-0005d5-FJ; Tue, 31 Jan 2023 19:06:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487801.755491; Tue, 31 Jan 2023 19:06:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMvxl-0005cy-Cd; Tue, 31 Jan 2023 19:06:37 +0000
Received: by outflank-mailman (input) for mailman id 487801;
 Tue, 31 Jan 2023 19:06:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dLqc=54=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pMvxj-0005cr-Lz
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 19:06:35 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2060e.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::60e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5fe7def2-a19a-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 20:06:33 +0100 (CET)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MW3PR12MB4361.namprd12.prod.outlook.com (2603:10b6:303:5a::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 19:06:30 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::a3a7:87d9:60f1:7eb1%4]) with mapi id 15.20.6043.033; Tue, 31 Jan 2023
 19:06:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5fe7def2-a19a-11ed-933c-83870f6b2ba8
Message-ID: <03d7305b-00ea-481e-d097-fd5236bb03c5@amd.com>
Date: Tue, 31 Jan 2023 19:06:23 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 2/2] xen/arm: Add support for booting gzip compressed
 uImages
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230131151354.25943-1-michal.orzel@amd.com>
 <20230131151354.25943-3-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230131151354.25943-3-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 31/01/2023 15:13, Michal Orzel wrote:
> At the moment, Xen does not support booting gzip compressed uImages.
> This is because we are trying to decompress the kernel before probing
> the u-boot header. This leads to a failure because the header always appears
> at the top of the image, thereby obscuring the gzip header.
>
> Move the call to kernel_uimage_probe before kernel_decompress and make
> the function self-contained by taking the following actions:
>   - take a pointer to struct bootmodule as a parameter,
>   - check the comp field of a u-boot header to determine compression type,
>   - in case of compressed image, modify boot module start address and size
>     by taking the header size into account and call kernel_decompress,
>   - set up zimage.{kernel_addr,len} accordingly,
>   - return -ENOENT when no u-boot header is found, to distinguish it
>     from other return values and make it the only case that falls
>     through to probing other image types.
>
> This is done to avoid splitting the uImage probing into 2 stages (executed
> before and after decompression) which otherwise would be necessary to
> properly update boot module start and size before decompression and
> zimage.{kernel_addr,len} afterwards.
>
> Remove the limitation from the booting.txt documentation.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
LGTM, Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
>   docs/misc/arm/booting.txt |  3 ---
>   xen/arch/arm/kernel.c     | 51 ++++++++++++++++++++++++++++++++++-----
>   2 files changed, 45 insertions(+), 9 deletions(-)
>
> diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
> index bd7bfe7f284a..02f7bb65ec6d 100644
> --- a/docs/misc/arm/booting.txt
> +++ b/docs/misc/arm/booting.txt
> @@ -50,9 +50,6 @@ Also, it is to be noted that if user provides the legacy image header on
>   top of zImage or Image header, then Xen uses the attributes of legacy
>   image header to determine the load address, entry point, etc.
>   
> -Known limitation: compressed kernels with a uboot headers are not
> -working.
> -
>   
>   Firmware/bootloader requirements
>   --------------------------------
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 068fbf88e492..ea5f9618169e 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -265,11 +265,14 @@ static __init int kernel_decompress(struct bootmodule *mod)
>   #define IH_ARCH_ARM             2       /* ARM          */
>   #define IH_ARCH_ARM64           22      /* ARM64        */
>   
> +/* uImage Compression Types */
> +#define IH_COMP_GZIP            1
> +
>   /*
>    * Check if the image is a uImage and setup kernel_info
>    */
>   static int __init kernel_uimage_probe(struct kernel_info *info,
> -                                      paddr_t addr, paddr_t size)
> +                                      struct bootmodule *mod)
>   {
>       struct {
>           __be32 magic;   /* Image Header Magic Number */
> @@ -287,6 +290,8 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>       } uimage;
>   
>       uint32_t len;
> +    paddr_t addr = mod->start;
> +    paddr_t size = mod->size;
>   
>       if ( size < sizeof(uimage) )
>           return -EINVAL;
> @@ -294,13 +299,21 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>       copy_from_paddr(&uimage, addr, sizeof(uimage));
>   
>       if ( be32_to_cpu(uimage.magic) != UIMAGE_MAGIC )
> -        return -EINVAL;
> +        return -ENOENT;
>   
>       len = be32_to_cpu(uimage.size);
>   
>       if ( len > size - sizeof(uimage) )
>           return -EINVAL;
>   
> +    /* Only gzip compression is supported. */
> +    if ( uimage.comp && uimage.comp != IH_COMP_GZIP )
> +    {
> +        printk(XENLOG_ERR
> +               "Unsupported uImage compression type %"PRIu8"\n", uimage.comp);
> +        return -EOPNOTSUPP;
> +    }
> +
>       info->zimage.start = be32_to_cpu(uimage.load);
>       info->entry = be32_to_cpu(uimage.ep);
>   
> @@ -330,8 +343,26 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>           return -EINVAL;
>       }
>   
> -    info->zimage.kernel_addr = addr + sizeof(uimage);
> -    info->zimage.len = len;
> +    if ( uimage.comp )
> +    {
> +        int rc;
> +
> +        /* Prepare start and size for decompression. */
> +        mod->start += sizeof(uimage);
> +        mod->size -= sizeof(uimage);
> +
> +        rc = kernel_decompress(mod);
> +        if ( rc )
> +            return rc;
> +
> +        info->zimage.kernel_addr = mod->start;
> +        info->zimage.len = mod->size;
> +    }
> +    else
> +    {
> +        info->zimage.kernel_addr = addr + sizeof(uimage);
> +        info->zimage.len = len;
> +    }
>   
>       info->load = kernel_zimage_load;
>   
> @@ -561,6 +592,16 @@ int __init kernel_probe(struct kernel_info *info,
>           printk("Loading ramdisk from boot module @ %"PRIpaddr"\n",
>                  info->initrd_bootmodule->start);
>   
> +    /*
> +     * uImage header always appears at the top of the image (even compressed),
> +     * so it needs to be probed first. Note that in case of compressed uImage,
> +     * kernel_decompress is called from kernel_uimage_probe making the function
> +     * self-containing (i.e. fall through only in case of a header not found).
> +    */
> +    rc = kernel_uimage_probe(info, mod);
> +    if ( rc != -ENOENT )
> +        return rc;
> +
>       /* if it is a gzip'ed image, 32bit or 64bit, uncompress it */
>       rc = kernel_decompress(mod);
I think the disadvantage of this approach is that kernel_decompress() 
now needs to be called from two different places. However, I think it is 
still preferable to splitting the kernel_uimage_probe() function.

>       if ( rc && rc != -EINVAL )
> @@ -570,8 +611,6 @@ int __init kernel_probe(struct kernel_info *info,
>       rc = kernel_zimage64_probe(info, mod->start, mod->size);
>       if (rc < 0)
>   #endif
> -        rc = kernel_uimage_probe(info, mod->start, mod->size);
> -    if (rc < 0)
>           rc = kernel_zimage32_probe(info, mod->start, mod->size);
>   
>       return rc;
- Ayan


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 19:07:49 2023
Message-ID: <d5797a3a-b616-e034-1320-bf483f82fe07@amd.com>
Date: Tue, 31 Jan 2023 19:07:29 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] xen/arm: Move kernel_uimage_probe definition after
 kernel_decompress
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230131151354.25943-1-michal.orzel@amd.com>
 <20230131151354.25943-2-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230131151354.25943-2-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 31/01/2023 15:13, Michal Orzel wrote:
> In a follow-up patch, we will be calling the kernel_decompress function from
> within kernel_uimage_probe to support booting compressed images with
> u-boot header. Therefore, move the kernel_uimage_probe definition after
> kernel_decompress so that the latter is visible to the former.
>
> No functional change intended.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
>   xen/arch/arm/kernel.c | 146 +++++++++++++++++++++---------------------
>   1 file changed, 73 insertions(+), 73 deletions(-)
>
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 36081e73f140..068fbf88e492 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -186,6 +186,79 @@ static void __init kernel_zimage_load(struct kernel_info *info)
>       iounmap(kernel);
>   }
>   
> +static __init uint32_t output_length(char *image, unsigned long image_len)
> +{
> +    return *(uint32_t *)&image[image_len - 4];
> +}
> +
> +static __init int kernel_decompress(struct bootmodule *mod)
> +{
> +    char *output, *input;
> +    char magic[2];
> +    int rc;
> +    unsigned int kernel_order_out;
> +    paddr_t output_size;
> +    struct page_info *pages;
> +    mfn_t mfn;
> +    int i;
> +    paddr_t addr = mod->start;
> +    paddr_t size = mod->size;
> +
> +    if ( size < 2 )
> +        return -EINVAL;
> +
> +    copy_from_paddr(magic, addr, sizeof(magic));
> +
> +    /* only gzip is supported */
> +    if ( !gzip_check(magic, size) )
> +        return -EINVAL;
> +
> +    input = ioremap_cache(addr, size);
> +    if ( input == NULL )
> +        return -EFAULT;
> +
> +    output_size = output_length(input, size);
> +    kernel_order_out = get_order_from_bytes(output_size);
> +    pages = alloc_domheap_pages(NULL, kernel_order_out, 0);
> +    if ( pages == NULL )
> +    {
> +        iounmap(input);
> +        return -ENOMEM;
> +    }
> +    mfn = page_to_mfn(pages);
> +    output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
> +
> +    rc = perform_gunzip(output, input, size);
> +    clean_dcache_va_range(output, output_size);
> +    iounmap(input);
> +    vunmap(output);
> +
> +    if ( rc )
> +    {
> +        free_domheap_pages(pages, kernel_order_out);
> +        return rc;
> +    }
> +
> +    mod->start = page_to_maddr(pages);
> +    mod->size = output_size;
> +
> +    /*
> +     * Need to free pages after output_size here because they won't be
> +     * freed by discard_initial_modules
> +     */
> +    i = PFN_UP(output_size);
> +    for ( ; i < (1 << kernel_order_out); i++ )
> +        free_domheap_page(pages + i);
> +
> +    /*
> +     * Free the original kernel, update the pointers to the
> +     * decompressed kernel
> +     */
> +    fw_unreserved_regions(addr, addr + size, init_domheap_pages, 0);
> +
> +    return 0;
> +}
> +
>   /*
>    * Uimage CPU Architecture Codes
>    */
> @@ -289,79 +362,6 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>       return 0;
>   }
>   
> -static __init uint32_t output_length(char *image, unsigned long image_len)
> -{
> -    return *(uint32_t *)&image[image_len - 4];
> -}
> -
> -static __init int kernel_decompress(struct bootmodule *mod)
> -{
> -    char *output, *input;
> -    char magic[2];
> -    int rc;
> -    unsigned int kernel_order_out;
> -    paddr_t output_size;
> -    struct page_info *pages;
> -    mfn_t mfn;
> -    int i;
> -    paddr_t addr = mod->start;
> -    paddr_t size = mod->size;
> -
> -    if ( size < 2 )
> -        return -EINVAL;
> -
> -    copy_from_paddr(magic, addr, sizeof(magic));
> -
> -    /* only gzip is supported */
> -    if ( !gzip_check(magic, size) )
> -        return -EINVAL;
> -
> -    input = ioremap_cache(addr, size);
> -    if ( input == NULL )
> -        return -EFAULT;
> -
> -    output_size = output_length(input, size);
> -    kernel_order_out = get_order_from_bytes(output_size);
> -    pages = alloc_domheap_pages(NULL, kernel_order_out, 0);
> -    if ( pages == NULL )
> -    {
> -        iounmap(input);
> -        return -ENOMEM;
> -    }
> -    mfn = page_to_mfn(pages);
> -    output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
> -
> -    rc = perform_gunzip(output, input, size);
> -    clean_dcache_va_range(output, output_size);
> -    iounmap(input);
> -    vunmap(output);
> -
> -    if ( rc )
> -    {
> -        free_domheap_pages(pages, kernel_order_out);
> -        return rc;
> -    }
> -
> -    mod->start = page_to_maddr(pages);
> -    mod->size = output_size;
> -
> -    /*
> -     * Need to free pages after output_size here because they won't be
> -     * freed by discard_initial_modules
> -     */
> -    i = PFN_UP(output_size);
> -    for ( ; i < (1 << kernel_order_out); i++ )
> -        free_domheap_page(pages + i);
> -
> -    /*
> -     * Free the original kernel, update the pointers to the
> -     * decompressed kernel
> -     */
> -    fw_unreserved_regions(addr, addr + size, init_domheap_pages, 0);
> -
> -    return 0;
> -}
> -
>   #ifdef CONFIG_ARM_64
>   /*
>    * Check if the image is a 64-bit Image.


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 19:36:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 19:36:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487813.755513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMwQ2-0001sP-4g; Tue, 31 Jan 2023 19:35:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487813.755513; Tue, 31 Jan 2023 19:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMwQ2-0001sI-1i; Tue, 31 Jan 2023 19:35:50 +0000
Received: by outflank-mailman (input) for mailman id 487813;
 Tue, 31 Jan 2023 19:35:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8ra6=54=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pMwQ0-0001sC-I9
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 19:35:48 +0000
Received: from sonic312-24.consmr.mail.gq1.yahoo.com
 (sonic312-24.consmr.mail.gq1.yahoo.com [98.137.69.205])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 73953587-a19e-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 20:35:46 +0100 (CET)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic312.consmr.mail.gq1.yahoo.com with HTTP; Tue, 31 Jan 2023 19:35:42 +0000
Received: by hermes--production-bf1-57c96c66f6-thr8d (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 3630b60c4b8d5aee2b4f4bc210d4b88a; 
 Tue, 31 Jan 2023 19:35:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73953587-a19e-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=aol.com; s=a2048; t=1675193742; bh=TcT1YLmDGMvqANZil3o3s8Huix+Bg0YzehzcamGdlf8=; h=Date:Subject:From:To:Cc:References:In-Reply-To:From:Subject:Reply-To; b=Qi/bBEwZFhU/LxhIdJmnAmZTqAsychYutnrghCnmzIiCUaAwIfedPUTiHaR+cqz33nJrQjMzTv9ecilMKBR6CL9mFHCDmFpvY5kvQUG8JUL1rgAgYunepSyhxIQxuRNz5QCn73NLvAk8/py/U8D5sir9jozSloFGVNBRav6TsSWzbuXKoWq2oIXvmh8fyylXb/Xtzsj++9FrXWdLm4sLJVnwh69bNQlEUv63WlxZqMJV+q3S1vd8zWrV5LlkfgSiQ4pGCfj0eeEH6fmvHCpwyDHTJ6EMHtTNv+ztz/BlNOq0VaR3587LPj+zuKP6m6i/xPBG0pjPSg/mUzlb8n5U3Q==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1675193742; bh=Vqs+NHE+qz2hmhRgzHF2LDUclXDDCON5zo+WbbQiIPm=; h=X-Sonic-MF:Date:Subject:From:To:From:Subject; b=Alp83Kv0TJdYJg0PSlSLvaSklAbcUR67w+GHazmKWp6rFIr58qqjkDlU/f4lqZhgWrt7+3gagWP4kZyZFayEubtGl20wRp0AkFRdQwrQgEfeu53YIkTreEXHiIFs+xtwan1Yy3dbWzbPJN6EsBi22bOGNlY7GkivZ51GMvBO0P/miQalWOmTB5gBvqdtUl0p/ymrfDlaFF5YQoaMo4fuwtbNV32jpL2OijAQYDraB9Hvv7qMN14jYQTMIJZYfdDAwuczJ2qLCjwT7xXcF1Usy7DlrBoGoLNNCtZpxiqq9TP20fRH7tl1NT86J6e8nee6YK/etntdq73eZLjTQDb3Zw==
X-Sonic-MF: <brchuckz@aim.com>
Message-ID: <af419213-ba65-c9dc-9191-006d908339eb@aol.com>
Date: Tue, 31 Jan 2023 14:35:37 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.1
Subject: Re: [XEN PATCH v2 0/3] Configure qemu upstream correctly by default
 for igd-passthru
From: Chuck Zmudzinski <brchuckz@aol.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, qemu-devel@nongnu.org
References: <cover.1673300848.git.brchuckz.ref@aol.com>
 <cover.1673300848.git.brchuckz@aol.com>
 <Y9EUarVVWr223API@perard.uk.xensource.com>
 <de3a3992-8f56-086a-e19e-bac9233d4265@aol.com>
 <a2a927bd-a764-8676-68c9-4c53cb86af3e@aol.com>
Content-Language: en-US
In-Reply-To: <a2a927bd-a764-8676-68c9-4c53cb86af3e@aol.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailer: WebService/1.1.21123 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 8163

On 1/29/23 7:38 PM, Chuck Zmudzinski wrote:
> On 1/25/23 6:19 PM, Chuck Zmudzinski wrote:
>> On 1/25/2023 6:37 AM, Anthony PERARD wrote:
>>> On Tue, Jan 10, 2023 at 02:32:01AM -0500, Chuck Zmudzinski wrote:
>>> > I call attention to the commit message of the first patch which points
>>> > out that using the "pc" machine and adding the xen platform device on
>>> > the qemu upstream command line is not functionally equivalent to using
>>> > the "xenfv" machine which automatically adds the xen platform device
>>> > earlier in the guest creation process. As a result, there is a noticeable
>>> > reduction in the performance of the guest during startup with the "pc"
>>> > machine type even if the xen platform device is added via the qemu
>>> > command line options, although eventually both Linux and Windows guests
>>> > perform equally well once the guest operating system is fully loaded.
>>>
>>> There shouldn't be a difference between "xenfv" machine or using the
>>> "pc" machine while adding the "xen-platform" device, at least with
>>> regards to access to disk or network.
>>>
>>> The first patch of the series is using the "pc" machine without any
>>> "xen-platform" device, so we can't compare startup performance based on
>>> that.
>>>
>>> > Specifically, startup time is longer and neither the grub vga drivers
>>> > nor the windows vga drivers in early startup perform as well when the
>>> > xen platform device is added via the qemu command line instead of being
>>> > added immediately after the other emulated i440fx pci devices when the
>>> > "xenfv" machine type is used.
>>>
>>> The "xen-platform" device is mostly a hint to the guest that it can use
>>> pv-disk and pv-network devices. I don't think it would change anything
>>> with regards to graphics.
>>>
>>> > For example, when using the "pc" machine, which adds the xen platform
>>> > device using a command line option, the Linux guest could not display
>>> > the grub boot menu at the native resolution of the monitor, but with the
>>> > "xenfv" machine, the grub menu is displayed at the full 1920x1080
>>> > native resolution of the monitor used for testing. So improved startup
>>> > performance is an advantage of the patch for qemu.
>>>
>>> I've just found out that when doing IGD passthrough, the "xenfv" and
>>> "pc" machines are much more different than I thought ... :-(
>>> pc_xen_hvm_init_pci() in QEMU changes the pci-host device, which in
>>> turn copies some information from the real host bridge.
>>> I guess this new host bridge helps when the firmware sets up the
>>> graphics for grub.
> 
> Yes, it is needed - see below for the very simple patch to Qemu
> upstream that fixes it for the "pc" machine!
> 
>> 
>> I am surprised it works at all with the "pc" machine, that is, without the
>> TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE that is used in the "xenfv"
>> machine. This only seems to affect the legacy grub vga driver and the legacy
>> Windows vga driver during early boot. Still, I much prefer keeping the "xenfv"
>> machine for Intel IGD than this workaround of patching libxl to use the "pc"
>> machine.
>> 
>>>
>>> > I also call attention to the last point of the commit message of the
>>> > second patch and the comments for reviewers section of the second patch.
>>> > This approach, as opposed to fixing this in qemu upstream, makes
>>> > maintaining the code in libxl__build_device_model_args_new more
>>> > difficult and therefore increases the chances of problems caused by
>>> > coding errors and typos for users of libxl. So that is another advantage
>>> > of the patch for qemu.
>>>
>>> We would just need to use a different approach in libxl when generating
>>> the command line. We could probably avoid duplication.
> 
> I was thinking we could also either write a test to verify the correctness
> of the second patch to ensure it generates the correct Qemu command line
> or take the time to verify the second patch's accuracy before committing it.
> 
>>> I was hoping to
>>> have a patch series for libxl that would change the machine used to start
>>> using "pc" instead of "xenfv" for all configurations, but based on the
>>> point above (the IGD-specific change to "xenfv"), I guess we can't
>>> really do anything from libxl to fix IGD passthrough.
>> 
>> We could switch to the "pc" machine, but we would need to patch
>> qemu also so the "pc" machine uses the special device the "xenfv"
>> machine uses (TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE).
>> ...
> 
> I just tested a very simple patch to Qemu upstream to fix the
> difference between the upstream Qemu "pc" machine and the upstream
> Qemu "xenfv" machine:
> 
> --- a/hw/i386/pc_piix.c	2023-01-28 13:22:15.714595514 -0500
> +++ b/hw/i386/pc_piix.c	2023-01-29 18:08:34.668491593 -0500
> @@ -434,6 +434,8 @@
>              compat(machine); \
>          } \
>          pc_init1(machine, TYPE_I440FX_PCI_HOST_BRIDGE, \
> +                 xen_igd_gfx_pt_enabled() ? \
> +                 TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE : \
>                   TYPE_I440FX_PCI_DEVICE); \
>      } \
>      DEFINE_PC_MACHINE(suffix, name, pc_init_##suffix, optionfn)
> ----- snip -------
> 
> With this simple two-line patch to upstream Qemu, we can use the "pc"
> machine without any regressions such as the startup performance
> degradation I observed without this small patch to fix the "pc" machine
> with igd passthru!

Hi Anthony,

Actually, to implement the fix for the "pc" machine and IGD in Qemu
upstream without breaking builds for configurations such as --disable-xen,
the patch to Qemu needs to add four lines instead of two (still trivial!):


--- a/hw/i386/pc_piix.c	2023-01-29 18:05:15.714595514 -0500
+++ b/hw/i386/pc_piix.c	2023-01-29 18:08:34.668491593 -0500
@@ -434,6 +434,8 @@
             compat(machine); \
         } \
         pc_init1(machine, TYPE_I440FX_PCI_HOST_BRIDGE, \
+                 pc_xen_igd_gfx_pt_enabled() ? \
+                 TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE : \
                  TYPE_I440FX_PCI_DEVICE); \
     } \
     DEFINE_PC_MACHINE(suffix, name, pc_init_##suffix, optionfn)
--- a/include/sysemu/xen.h	2023-01-20 08:17:55.000000000 -0500
+++ b/include/sysemu/xen.h	2023-01-30 00:18:29.276886734 -0500
@@ -23,6 +23,7 @@
 extern bool xen_allowed;
 
 #define xen_enabled()           (xen_allowed)
+#define pc_xen_igd_gfx_pt_enabled()    xen_igd_gfx_pt_enabled()
 
 #ifndef CONFIG_USER_ONLY
 void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
@@ -33,6 +34,7 @@
 #else /* !CONFIG_XEN_IS_POSSIBLE */
 
 #define xen_enabled() 0
+#define pc_xen_igd_gfx_pt_enabled() 0
 #ifndef CONFIG_USER_ONLY
 static inline void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 {
------- snip -------

Would you support this patch if I formally submitted it to Qemu as a
replacement for the current, more complicated patch I proposed to
reserve slot 2 for the IGD?

Thanks,

Chuck

> 
> The "pc" machine maintainers for upstream Qemu would need to accept
> this small patch to Qemu upstream. They might prefer this to the
> other Qemu patch that reserves slot 2 for the Qemu upstream "xenfv"
> machine when the guest is configured for igd passthru.
> 
>>>
>>> ...
>>>
>>> So overall, unfortunately the "pc" machine in QEMU isn't suitable to do
>>> IGD passthrough as the "xenfv" machine has already some workaround to
>>> make IGD work and just need some more.
> 
> Well, the little patch to upstream Qemu shown above fixes the "pc" machine
> with IGD, so this approach of patching libxl to use the "pc" machine
> may be a viable fix for the IGD.
> 
>>>
>>> I've seen that the patch for QEMU is now reviewed, so I look at having
>>> it merged soonish.
>>>
>>> Thanks,
>>>
>> 
> 
> I just added the bit of information above to help you decide which
> approach to use to improve the support for the igd passthru feature
> with Xen and Qemu upstream. I think the test of the small patch to
> Qemu to fix the "pc" machine with igd passthru makes this patch to
> libxl a viable alternative to the other patch to Qemu upstream that
> reserves slot 2 when using the "xenfv" machine and igd passthru.
> 
> Thanks,
> 
> Chuck



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 19:55:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 19:55:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487819.755524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMwiH-0004VN-Lz; Tue, 31 Jan 2023 19:54:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487819.755524; Tue, 31 Jan 2023 19:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMwiH-0004VG-Hi; Tue, 31 Jan 2023 19:54:41 +0000
Received: by outflank-mailman (input) for mailman id 487819;
 Tue, 31 Jan 2023 19:54:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMwiH-0004VA-0u
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 19:54:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMwiG-00041t-No; Tue, 31 Jan 2023 19:54:40 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMwiG-0001gh-HN; Tue, 31 Jan 2023 19:54:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <9a759452-85b8-3533-705f-7076699ff350@xen.org>
Date: Tue, 31 Jan 2023 19:54:38 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 1/2] xen/arm: Move kernel_uimage_probe definition after
 kernel_decompress
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, ayankuma@amd.com
References: <20230131151354.25943-1-michal.orzel@amd.com>
 <20230131151354.25943-2-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230131151354.25943-2-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 31/01/2023 15:13, Michal Orzel wrote:
> In a follow-up patch, we will be calling kernel_decompress function from
> within kernel_uimage_probe to support booting compressed images with
> u-boot header. Therefore, move the kernel_uimage_probe definition after
> kernel_decompress so that the latter is visible to the former.
> 
> No functional change intended.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 20:20:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 20:20:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487830.755535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMx77-0000At-LL; Tue, 31 Jan 2023 20:20:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487830.755535; Tue, 31 Jan 2023 20:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMx77-0000Al-IK; Tue, 31 Jan 2023 20:20:21 +0000
Received: by outflank-mailman (input) for mailman id 487830;
 Tue, 31 Jan 2023 20:20:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMx76-0000Af-Ae
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 20:20:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMx75-0004j3-QQ; Tue, 31 Jan 2023 20:20:19 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMx75-0002uj-Kf; Tue, 31 Jan 2023 20:20:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <653d05c1-a860-052c-4ce2-55998c77db42@xen.org>
Date: Tue, 31 Jan 2023 20:20:17 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, ayankuma@amd.com
References: <20230131151354.25943-1-michal.orzel@amd.com>
 <20230131151354.25943-3-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 2/2] xen/arm: Add support for booting gzip compressed
 uImages
In-Reply-To: <20230131151354.25943-3-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 31/01/2023 15:13, Michal Orzel wrote:
> At the moment, Xen does not support booting gzip compressed uImages.
> This is because we are trying to decompress the kernel before probing
> the u-boot header. This leads to a failure as the header always appears
> at the top of the image (and therefore obscures the gzip header).
> 
> Move the call to kernel_uimage_probe before kernel_decompress and make
> the function self-contained by taking the following actions:
>   - take a pointer to struct bootmodule as a parameter,
>   - check the comp field of a u-boot header to determine compression type,
>   - in case of compressed image, modify boot module start address and size
>     by taking the header size into account and call kernel_decompress,
>   - set up zimage.{kernel_addr,len} accordingly,
>   - return -ENOENT in case of a u-boot header not found to distinguish it
>     amongst other return values and make it the only case for falling
>     through to try to probe other image types.
> 
> This is done to avoid splitting the uImage probing into 2 stages (executed
> before and after decompression) which otherwise would be necessary to
> properly update boot module start and size before decompression and
> zimage.{kernel_addr,len} afterwards.

AFAIU, it would be possible to have a zImage/Image header embedded in 
the uImage. So is there any reason to handle only a compressed binary?

> 
> Remove the limitation from the booting.txt documentation.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
>   docs/misc/arm/booting.txt |  3 ---
>   xen/arch/arm/kernel.c     | 51 ++++++++++++++++++++++++++++++++++-----
>   2 files changed, 45 insertions(+), 9 deletions(-)
> 
> diff --git a/docs/misc/arm/booting.txt b/docs/misc/arm/booting.txt
> index bd7bfe7f284a..02f7bb65ec6d 100644
> --- a/docs/misc/arm/booting.txt
> +++ b/docs/misc/arm/booting.txt
> @@ -50,9 +50,6 @@ Also, it is to be noted that if user provides the legacy image header on
>   top of zImage or Image header, then Xen uses the attributes of legacy
>   image header to determine the load address, entry point, etc.
>   
> -Known limitation: compressed kernels with a uboot headers are not
> -working.
> -
>   
>   Firmware/bootloader requirements
>   --------------------------------
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 068fbf88e492..ea5f9618169e 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -265,11 +265,14 @@ static __init int kernel_decompress(struct bootmodule *mod)
>   #define IH_ARCH_ARM             2       /* ARM          */
>   #define IH_ARCH_ARM64           22      /* ARM64        */
>   
> +/* uImage Compression Types */
> +#define IH_COMP_GZIP            1
> +
>   /*
>    * Check if the image is a uImage and setup kernel_info
>    */
>   static int __init kernel_uimage_probe(struct kernel_info *info,
> -                                      paddr_t addr, paddr_t size)
> +                                      struct bootmodule *mod)
>   {
>       struct {
>           __be32 magic;   /* Image Header Magic Number */
> @@ -287,6 +290,8 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>       } uimage;
>   
>       uint32_t len;
> +    paddr_t addr = mod->start;
> +    paddr_t size = mod->size;
>   
>       if ( size < sizeof(uimage) )
>           return -EINVAL;
> @@ -294,13 +299,21 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>       copy_from_paddr(&uimage, addr, sizeof(uimage));
>   
>       if ( be32_to_cpu(uimage.magic) != UIMAGE_MAGIC )
> -        return -EINVAL;
> +        return -ENOENT;
>   
>       len = be32_to_cpu(uimage.size);
>   
>       if ( len > size - sizeof(uimage) )
>           return -EINVAL;
>   
> +    /* Only gzip compression is supported. */
> +    if ( uimage.comp && uimage.comp != IH_COMP_GZIP )
> +    {
> +        printk(XENLOG_ERR
> +               "Unsupported uImage compression type %"PRIu8"\n", uimage.comp);
> +        return -EOPNOTSUPP;
> +    }
> +
>       info->zimage.start = be32_to_cpu(uimage.load);
>       info->entry = be32_to_cpu(uimage.ep);
>   
> @@ -330,8 +343,26 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>           return -EINVAL;
>       }
>   
> -    info->zimage.kernel_addr = addr + sizeof(uimage);
> -    info->zimage.len = len;
> +    if ( uimage.comp )
> +    {
> +        int rc;
> +
> +        /* Prepare start and size for decompression. */
> +        mod->start += sizeof(uimage);
> +        mod->size -= sizeof(uimage);

kernel_decompress() will free the compressed module once it is 
decompressed. By updating the region, the first page will not be 
freed (assuming start was previously page-aligned).

> +
> +        rc = kernel_decompress(mod);
> +        if ( rc )
> +            return rc;
> +
> +        info->zimage.kernel_addr = mod->start;
> +        info->zimage.len = mod->size;
> +    }
> +    else
> +    {
> +        info->zimage.kernel_addr = addr + sizeof(uimage);
> +        info->zimage.len = len;
> +    }
>   
>       info->load = kernel_zimage_load;
>   
> @@ -561,6 +592,16 @@ int __init kernel_probe(struct kernel_info *info,
>           printk("Loading ramdisk from boot module @ %"PRIpaddr"\n",
>                  info->initrd_bootmodule->start);
>   
> +    /*
> +     * uImage header always appears at the top of the image (even compressed),
> +     * so it needs to be probed first. Note that in case of a compressed uImage,
> +     * kernel_decompress is called from kernel_uimage_probe, making the function
> +     * self-contained (i.e. we fall through only when no header is found).
> +     */
> +    rc = kernel_uimage_probe(info, mod);
> +    if ( rc != -ENOENT )
> +        return rc;
> +
>       /* if it is a gzip'ed image, 32bit or 64bit, uncompress it */
>       rc = kernel_decompress(mod);
>       if ( rc && rc != -EINVAL )
> @@ -570,8 +611,6 @@ int __init kernel_probe(struct kernel_info *info,
>       rc = kernel_zimage64_probe(info, mod->start, mod->size);
>       if (rc < 0)
>   #endif
> -        rc = kernel_uimage_probe(info, mod->start, mod->size);
> -    if (rc < 0)
>           rc = kernel_zimage32_probe(info, mod->start, mod->size);
>   
>       return rc;
Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 20:37:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 20:37:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487835.755545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMxNr-0002DS-3s; Tue, 31 Jan 2023 20:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487835.755545; Tue, 31 Jan 2023 20:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMxNr-0002DL-1F; Tue, 31 Jan 2023 20:37:39 +0000
Received: by outflank-mailman (input) for mailman id 487835;
 Tue, 31 Jan 2023 20:37:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMxNp-0002DB-Vg; Tue, 31 Jan 2023 20:37:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMxNp-00052T-TM; Tue, 31 Jan 2023 20:37:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMxNp-0004CN-DM; Tue, 31 Jan 2023 20:37:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMxNp-0008QZ-Cs; Tue, 31 Jan 2023 20:37:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176301-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176301: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 20:37:37 +0000

flight 176301 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176301/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    5 days
Testing same since   176283  2023-01-30 21:02:20 Z    0 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (the default value),
    which causes kernel_zimage_place() to treat the binary (contained within the
    uImage) as a position independent executable and load it at an incorrect
    address.
    
    The correct approach is to read "uimage.load" and set info->zimage.start,
    ensuring that the binary is loaded at the correct address. Likewise, read
    "uimage.ep" and set info->entry (i.e. the kernel entry address).
    
    If the user provides a load address (i.e. "uimage.load") of 0x0, the image
    is treated as a position independent executable. Xen can load such an image
    at any address it considers appropriate; a position independent executable
    cannot have a fixed entry point address.
    
    This behaviour applies to both arm32 and arm64 platforms.
    
    Previously, on both arm32 and arm64 platforms, Xen ignored the load and
    entry point addresses set in the uImage header. With this commit, Xen uses
    them, making its behaviour consistent with U-Boot's handling of uImage
    headers.
    
    Users who want to use Xen with statically partitioned domains can provide
    non-zero load and entry addresses for the dom0/domU kernel. The load and
    entry addresses provided must lie within the memory region allocated by
    Xen.
    
    One deviation from U-Boot behaviour is that we take a load address of 0x0
    to denote that the image supports position independent execution. This
    makes the behaviour consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)
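The probing rule described in the commit message above can be sketched in plain C. This is an illustrative model only, not Xen's code: it assumes the legacy 64-byte big-endian uImage header layout (ih_load at offset 16, ih_ep at offset 20), and `uimage_probe`/`struct kernel_info` here are simplified stand-ins for Xen's actual kernel_uimage_probe() and kernel_info.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the relevant parts of Xen's struct kernel_info. */
struct kernel_info {
    uint64_t zimage_start;      /* stands in for info->zimage.start */
    uint64_t entry;             /* stands in for info->entry */
    bool position_independent;
};

/* uImage header fields are stored big-endian. */
static uint32_t be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Illustrative probe: honour ih_load/ih_ep, with load == 0 meaning PIE. */
static void uimage_probe(struct kernel_info *info, const uint8_t *hdr)
{
    uint32_t load = be32(hdr + 16);  /* ih_load: load address */
    uint32_t ep   = be32(hdr + 20);  /* ih_ep: entry point */

    if ( load == 0 )
    {
        /* Load address 0x0: image is position independent; Xen picks
         * the placement, and no fixed entry point can be honoured. */
        info->position_independent = true;
        return;
    }

    info->zimage_start = load;       /* fixed load address from header */
    info->entry = ep;                /* fixed entry point from header */
    info->position_independent = false;
}
```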


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:29:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487849.755604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC7-0001QU-Q9; Tue, 31 Jan 2023 21:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487849.755604; Tue, 31 Jan 2023 21:29:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC7-0001PN-LU; Tue, 31 Jan 2023 21:29:35 +0000
Received: by outflank-mailman (input) for mailman id 487849;
 Tue, 31 Jan 2023 21:29:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qwjI=54=citrix.com=prvs=3886215e8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pMyC5-0000Nm-Kb
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:29:33 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58685249-a1ae-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 22:29:31 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58685249-a1ae-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675200571;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=D6fomBGSmJOQ8kYSoARFYZfJ7J0cbqAWh1IAdPyYhsE=;
  b=JYEzherDJoTOePts40n4Aw3VyqbG07cKdW3/nIiXhQDQl4rtz2h72RNi
   PL75zCAHkKhjEVHCL41hn6+/1ve1Xfp2v5iOkk7UP2tD4wymOE25Z025/
   vQcXVrnvZa+PdpAgj8BOfli07zt620i73jBCXIWNkrrA5xeZQaVIQjE/c
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95499181
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:UaIn26Dp78IZrxVW/yjjw5YqxClBgxIJ4kV8jS/XYbTApG8ghmNWy
 TcbCG+EOvyDYmOgL4t1O4Ti9RsHuZ/Wm9M1QQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nNHuCnYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFtMpvlDs15K6p4GpD5wRlDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw0/5SWVMJ1
 +IhMTkAYjTY28uE8qOCY7w57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTILs4kP2lmT/UdDpApUjOjaE2/3LS3Ep6172F3N/9K4HWFJQMzh/wS
 mTu2zrFWkslF8Wl8BWZ8m6QivTwwy3UcddHfFG/3qEz2wDCroAJMzUGWF3+rfSnh0qWX9NEN
 1dS6icotbI19kGgUp/6RRLQiHKNoBM0QddbFOw+rgaXxcLpDx2xXzZeCGQbMZp/6ZFwHGZxv
 rOUoz/3LRV3leWnDlCDz66doD+WYnQ8H10TXAZRGGPp/OLfiI00ixvOSPNqH6i0ksD5FFnM/
 tyakMQtr+5N1JBWjs1X6XiC2mvx/caREmbZ8y2NBgqYAhVFiJlJjmBCwXzS9r5+IYmQVTFtV
 1BUypHFvIji4Xxg/RFhodnh/pnzv55p0xWG2zaD+qXNEBzzk0NPhagKvFlDyL5Ba67ogwPBb
 k7Joh9275ROJnasZqIfS9vvVJlznPC4TYm/DK+8gj9yjn9ZLV/vwc2TTRTIgzCFfLYEzsnTx
 qt3ge7zVC1HWMyLPRK9RvsH0K9D+8zN7Tq7eHwP9Dz+ieD2TCfMGd843K6mMrhRAFWs/F+Er
 L6y9qKil31ibQEJSnKOrdBIcg1WdyhT6FKfg5U/S9Nv6zFOQAkJY8I9C5t4E2C5t8y5Ttv1w
 0w=
IronPort-HdrOrdr: A9a23:6YTmlaFIAcvJGvc6pLqF8ZLXdLJyesId70hD6qkvc3Fom52j/f
 xGws5x6fatskdoZJkh8erhBEDyewKmyXcV2/hZAV7MZniDhILFFu9fBM7ZskTd8k7Fh6ZgPM
 VbAs9D4bTLZDAX4voSojPIderIq+P3k5xA8N2uqkuFOjsaCZ2IgT0ZNi+rVmlNACVWD5swE5
 SRouJBujqbYHwSKuirG3UfWODHhtvT0LbrewQPCRIL4BSHyWrA0s+xLzGomjMlFx9fy7Yr9m
 bI1yT/+6WYqvm+jjPMymPJ6JxSud35jv9OHtaFhMQ5IijlziyoeINicbufuy1dmpDl1H8a1P
 335zswNcV67H3cOkuvpwH25gXm2DEyr1f/1F6xmxLY0IDEbQN/L/AEqZNScxPf5UZllsp7yr
 h302WQsIcSJQ/cnR76+8PDW3hR5wWJSDsZ4KAuZk5kIMsjgYxq3M8iFYRuYdU99RfBmcEa+S
 9VfYThDbhtABenhjvizxNSKZSXLwkO91G9MwU/U4WuokRrtWE8wE0CyMMFmHAcsJo7Vplf/u
 zBdr9ljbdUU6YtHNZA7co6MLmK41b2MGfxGXPXJU6iGLAMOnrLpZKy6LIp5PuycJhNyJcpgp
 zOXF5RqGZ3IivVeLuz9YwO9gqITHS2XDzrxM0b759luqfkTL6uNSGYUlghn8apvv1aCMzGXP
 S4Po5QHpbYXBzTMJcM2xe7V4hZKHEYXsFQstEnW0iWqsaOMYHuvvyzSoehGFMsK0dVZorSOA
 pzYNGoHrQ+0qmCYA6HvCTs
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="95499181"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: [PATCH 4/7] tools/ocaml/evtchn: Misc cleanup
Date: Tue, 31 Jan 2023 21:29:10 +0000
Message-ID: <20230131212913.6199-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230131212913.6199-1-andrew.cooper3@citrix.com>
References: <20230131212913.6199-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

 * Remove local integers when all we're returning is Val_int() of another
   variable.  The CAMLlocal*() can't be optimised automatically, as it's
   registered with the GC.
 * Rename "virq_type" to "virq" and "_port" to "port".
 * In stub_eventchn_pending(), rename 'port' to 'rc', to be consistent with
   all other stubs that return xenevtchn_port_or_error_t.
 * In stub_eventchn_unmask(), check for rc == -1 to be consistent with all
   other stubs.

No practical change.
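The first bullet can be illustrated with a mock of OCaml's immediate-value encoding (a sketch following the tagging convention documented for caml/mlvalues.h; the macros below are re-implemented here for illustration, not taken from the real runtime): Val_int() yields a tagged immediate that the GC never relocates, so such a result can be returned directly without a CAMLlocal root.

```c
#include <stdint.h>

/* Mock of OCaml's value representation: immediates are (x << 1) | 1,
 * as in caml/mlvalues.h.  Assumption: this models, but is not, the
 * real runtime. */
typedef intptr_t value;

#define Val_int(x)  ((((value)(x)) << 1) | 1)
#define Int_val(v)  ((v) >> 1)
#define Is_long(v)  (((v) & 1) == 1)   /* immediate: never moved by GC */

/* Because Val_int(fd) is an immediate, no GC-registered local (i.e. no
 * CAMLlocal1(result)) is needed to hold it before returning. */
static value stub_fd_result(int fd)
{
    return Val_int(fd);
}
```

A CAMLlocal*() declaration, by contrast, registers the slot as a GC root, which is exactly why the compiler cannot silently optimise it away.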

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Török <edwin.torok@cloud.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/eventchn/xeneventchn_stubs.c | 45 +++++++++++----------------
 1 file changed, 18 insertions(+), 27 deletions(-)

diff --git a/tools/ocaml/libs/eventchn/xeneventchn_stubs.c b/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
index d7881ca95f98..34dcfed30275 100644
--- a/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
+++ b/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
@@ -98,17 +98,15 @@ CAMLprim value stub_eventchn_fdopen(value fdval)
 CAMLprim value stub_eventchn_fd(value xce_val)
 {
 	CAMLparam1(xce_val);
-	CAMLlocal1(result);
 	xenevtchn_handle *xce = xce_of_val(xce_val);
 	int fd;
 
+	/* Don't drop the GC lock.  This is a simple read out of memory */
 	fd = xenevtchn_fd(xce);
 	if (fd == -1)
 		caml_failwith("evtchn fd failed");
 
-	result = Val_int(fd);
-
-	CAMLreturn(result);
+	CAMLreturn(Val_int(fd));
 }
 
 CAMLprim value stub_eventchn_notify(value xce_val, value port)
@@ -131,37 +129,34 @@ CAMLprim value stub_eventchn_bind_interdomain(value xce_val, value domid,
                                               value remote_port)
 {
 	CAMLparam3(xce_val, domid, remote_port);
-	CAMLlocal1(port);
 	xenevtchn_handle *xce = xce_of_val(xce_val);
 	xenevtchn_port_or_error_t rc;
 
 	caml_enter_blocking_section();
-	rc = xenevtchn_bind_interdomain(xce, Int_val(domid), Int_val(remote_port));
+	rc = xenevtchn_bind_interdomain(xce, Int_val(domid),
+					Int_val(remote_port));
 	caml_leave_blocking_section();
 
 	if (rc == -1)
 		caml_failwith("evtchn bind_interdomain failed");
-	port = Val_int(rc);
 
-	CAMLreturn(port);
+	CAMLreturn(Val_int(rc));
 }
 
-CAMLprim value stub_eventchn_bind_virq(value xce_val, value virq_type)
+CAMLprim value stub_eventchn_bind_virq(value xce_val, value virq)
 {
-	CAMLparam2(xce_val, virq_type);
-	CAMLlocal1(port);
+	CAMLparam2(xce_val, virq);
 	xenevtchn_handle *xce = xce_of_val(xce_val);
 	xenevtchn_port_or_error_t rc;
 
 	caml_enter_blocking_section();
-	rc = xenevtchn_bind_virq(xce, Int_val(virq_type));
+	rc = xenevtchn_bind_virq(xce, Int_val(virq));
 	caml_leave_blocking_section();
 
 	if (rc == -1)
 		caml_failwith("evtchn bind_virq failed");
-	port = Val_int(rc);
 
-	CAMLreturn(port);
+	CAMLreturn(Val_int(rc));
 }
 
 CAMLprim value stub_eventchn_unbind(value xce_val, value port)
@@ -183,35 +178,31 @@ CAMLprim value stub_eventchn_unbind(value xce_val, value port)
 CAMLprim value stub_eventchn_pending(value xce_val)
 {
 	CAMLparam1(xce_val);
-	CAMLlocal1(result);
 	xenevtchn_handle *xce = xce_of_val(xce_val);
-	xenevtchn_port_or_error_t port;
+	xenevtchn_port_or_error_t rc;
 
 	caml_enter_blocking_section();
-	port = xenevtchn_pending(xce);
+	rc = xenevtchn_pending(xce);
 	caml_leave_blocking_section();
 
-	if (port == -1)
+	if (rc == -1)
 		caml_failwith("evtchn pending failed");
-	result = Val_int(port);
 
-	CAMLreturn(result);
+	CAMLreturn(Val_int(rc));
 }
 
-CAMLprim value stub_eventchn_unmask(value xce_val, value _port)
+CAMLprim value stub_eventchn_unmask(value xce_val, value port)
 {
-	CAMLparam2(xce_val, _port);
+	CAMLparam2(xce_val, port);
 	xenevtchn_handle *xce = xce_of_val(xce_val);
-	evtchn_port_t port;
 	int rc;
 
-	port = Int_val(_port);
-
 	caml_enter_blocking_section();
-	rc = xenevtchn_unmask(xce, port);
+	rc = xenevtchn_unmask(xce, Int_val(port));
 	caml_leave_blocking_section();
 
-	if (rc)
+	if (rc == -1)
 		caml_failwith("evtchn unmask failed");
+
 	CAMLreturn(Val_unit);
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:29:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487845.755562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC4-0000QS-E2; Tue, 31 Jan 2023 21:29:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487845.755562; Tue, 31 Jan 2023 21:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC4-0000Pl-9d; Tue, 31 Jan 2023 21:29:32 +0000
Received: by outflank-mailman (input) for mailman id 487845;
 Tue, 31 Jan 2023 21:29:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qwjI=54=citrix.com=prvs=3886215e8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pMyC2-0000Nb-NH
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:29:30 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55eb48d0-a1ae-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 22:29:28 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55eb48d0-a1ae-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675200568;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Epp3wFTJjIGliIjbWr7uyY/q1naoD6A5ExyqnrFK5rI=;
  b=UJUr2HlDbZcKqqNgMq7dcLaPSyT+kXj+eeYjDK03mMc1fylKIMJ91E1y
   TaCNb/sumgXKP+nVlPEe2xdRk53pUKRw01M9gA7M6QMpBLI+LyZCY6qQC
   eKpIF+bQGFDtYfCUrJFlVkh6B2hfxOLNL7ya53W/Kst+FxGF+K/Fs+6yL
   M=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95024398
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:V4+TD6+/ZY02RbRFWKniDrUDgX6TJUtcMsCJ2f8bNWPcYEJGY0x3m
 GRJCz2Gb/iCYGX3fIwgOYiw9UkA6JeByNRrGlNlpSs8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKucYHsZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kIw1BjOkGlA5AdmPKsS5AS2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDkkUy
 u0nARFdLSqioMmR4ZeHavlmoMcKeZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAr3/zaTBH7nmSorI6+TP7xw1tyrn9dtHSf7RmQO0Ewx7C+
 jmXrwwVBDlACIyBxheP60umj96I3gDhVLIfL4ano6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0V9NOHsUg5QqKy66S5ByWblXoVRYYNoZg7pVvA2V3i
 BnQxYiB6SFTXKO9E02MyZ61/XCIGA8+Ck4nWQ8URy0Gyoy2yG0stS7nQtFmGa+zq9T6HzDs3
 jyHxBQDa6UvYd0jjPviow2e6964jt2QF1NuuF2LNo6wxlkhDLNJcbBE/rQyARxoCI+CBmeMs
 3Ef8yR1xLBfVMrd/MBhrQhkIV1I2xpnGGeE6bKMN8N7n9hIx5JEVd443d2GDB01WvvogBewC
 KMphStf5YVIIFyhZrJtboS6BqwClPa/SI20DqiMM4AUPfCdkTNrGwk3NSatM53FyhBwwcnTx
 7/EGSpTMZrqIfs+l2fnLwvs+bQq2jo/1QvuqWPTlnyaPU6lTCfNE98taQLeBt3VGYvY+G05B
 f4DbZrVo/ieOcWiChTqHXk7dglWcyNkWMys+6S6tIere2JbJY3oMNeJqZtJRmCvt/09ejvgl
 p1lZnJl9Q==
IronPort-HdrOrdr: A9a23:9jBw2K20Bzub96R4VOQEwAqjBEgkLtp133Aq2lEZdPU0SKGlfg
 6V/MjztCWE7Ar5PUtLpTnuAsa9qB/nm6KdgrNhWItKPjOW21dARbsKheffKlXbcBEWndQtt5
 uIHZIeNDXxZ2IK8PoT4mODYqodKA/sytHWuQ/cpU0dMz2Dc8tbnmBE4p7wKDwMeOFBb6BJcq
 a01458iBeLX28YVci/DmltZZm4mzWa/KiWGCLvHnQcmXGzsQ8=
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="95024398"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Rob Hoes <Rob.Hoes@citrix.com>
Subject: [PATCH 3/7] tools/ocaml/evtchn: Don't reference Custom objects with the GC lock released
Date: Tue, 31 Jan 2023 21:29:09 +0000
Message-ID: <20230131212913.6199-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230131212913.6199-1-andrew.cooper3@citrix.com>
References: <20230131212913.6199-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Edwin Török <edwin.torok@cloud.com>

The modification to the _H() macro for OCaml 5 support introduced a subtle
bug.  From the manual:

  https://ocaml.org/manual/intfc.html#ss:parallel-execution-long-running-c-code

"After caml_release_runtime_system() was called and until
caml_acquire_runtime_system() is called, the C code must not access any OCaml
data, nor call any function of the run-time system, nor call back into OCaml
code."

Previously, the value was a naked C pointer, so dereferencing it wasn't
"accessing any OCaml data", but the fix to avoid naked C pointers added a
layer of indirection through an OCaml Custom object, meaning that the common
pattern of using _H() in a blocking section is unsafe.

In order to fix:

 * Drop the _H() macro and replace it with a static inline xce_of_val().
 * Opencode the assignment into Data_custom_val() in the two constructors.
 * Rename "value xce" parameters to "value xce_val" so we can consistently
   have "xenevtchn_handle *xce" on the stack, and obtain the pointer with the
   GC lock still held.

Fixes: 22d5affdf0ce ("tools/ocaml/evtchn: OCaml 5 support, fix potential resource leak")
Signed-off-by: Edwin Török <edwin.torok@cloud.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Török <edwin.torok@cloud.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/eventchn/xeneventchn_stubs.c | 60 ++++++++++++++++-----------
 1 file changed, 35 insertions(+), 25 deletions(-)

diff --git a/tools/ocaml/libs/eventchn/xeneventchn_stubs.c b/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
index aa8a69cc1ecb..d7881ca95f98 100644
--- a/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
+++ b/tools/ocaml/libs/eventchn/xeneventchn_stubs.c
@@ -33,11 +33,14 @@
 #include <caml/fail.h>
 #include <caml/signals.h>
 
-#define _H(__h) (*((xenevtchn_handle **)Data_custom_val(__h)))
+static inline xenevtchn_handle *xce_of_val(value v)
+{
+	return *(xenevtchn_handle **)Data_custom_val(v);
+}
 
 static void stub_evtchn_finalize(value v)
 {
-	xenevtchn_close(_H(v));
+	xenevtchn_close(xce_of_val(v));
 }
 
 static struct custom_operations xenevtchn_ops = {
@@ -68,7 +71,7 @@ CAMLprim value stub_eventchn_init(value cloexec)
 		caml_failwith("open failed");
 
 	result = caml_alloc_custom(&xenevtchn_ops, sizeof(xce), 0, 1);
-	_H(result) = xce;
+	*(xenevtchn_handle **)Data_custom_val(result) = xce;
 
 	CAMLreturn(result);
 }
@@ -87,18 +90,19 @@ CAMLprim value stub_eventchn_fdopen(value fdval)
 		caml_failwith("evtchn fdopen failed");
 
 	result = caml_alloc_custom(&xenevtchn_ops, sizeof(xce), 0, 1);
-	_H(result) = xce;
+	*(xenevtchn_handle **)Data_custom_val(result) = xce;
 
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_eventchn_fd(value xce)
+CAMLprim value stub_eventchn_fd(value xce_val)
 {
-	CAMLparam1(xce);
+	CAMLparam1(xce_val);
 	CAMLlocal1(result);
+	xenevtchn_handle *xce = xce_of_val(xce_val);
 	int fd;
 
-	fd = xenevtchn_fd(_H(xce));
+	fd = xenevtchn_fd(xce);
 	if (fd == -1)
 		caml_failwith("evtchn fd failed");
 
@@ -107,13 +111,14 @@ CAMLprim value stub_eventchn_fd(value xce)
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_eventchn_notify(value xce, value port)
+CAMLprim value stub_eventchn_notify(value xce_val, value port)
 {
-	CAMLparam2(xce, port);
+	CAMLparam2(xce_val, port);
+	xenevtchn_handle *xce = xce_of_val(xce_val);
 	int rc;
 
 	caml_enter_blocking_section();
-	rc = xenevtchn_notify(_H(xce), Int_val(port));
+	rc = xenevtchn_notify(xce, Int_val(port));
 	caml_leave_blocking_section();
 
 	if (rc == -1)
@@ -122,15 +127,16 @@ CAMLprim value stub_eventchn_notify(value xce, value port)
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_eventchn_bind_interdomain(value xce, value domid,
+CAMLprim value stub_eventchn_bind_interdomain(value xce_val, value domid,
                                               value remote_port)
 {
-	CAMLparam3(xce, domid, remote_port);
+	CAMLparam3(xce_val, domid, remote_port);
 	CAMLlocal1(port);
+	xenevtchn_handle *xce = xce_of_val(xce_val);
 	xenevtchn_port_or_error_t rc;
 
 	caml_enter_blocking_section();
-	rc = xenevtchn_bind_interdomain(_H(xce), Int_val(domid), Int_val(remote_port));
+	rc = xenevtchn_bind_interdomain(xce, Int_val(domid), Int_val(remote_port));
 	caml_leave_blocking_section();
 
 	if (rc == -1)
@@ -140,14 +146,15 @@ CAMLprim value stub_eventchn_bind_interdomain(value xce, value domid,
 	CAMLreturn(port);
 }
 
-CAMLprim value stub_eventchn_bind_virq(value xce, value virq_type)
+CAMLprim value stub_eventchn_bind_virq(value xce_val, value virq_type)
 {
-	CAMLparam2(xce, virq_type);
+	CAMLparam2(xce_val, virq_type);
 	CAMLlocal1(port);
+	xenevtchn_handle *xce = xce_of_val(xce_val);
 	xenevtchn_port_or_error_t rc;
 
 	caml_enter_blocking_section();
-	rc = xenevtchn_bind_virq(_H(xce), Int_val(virq_type));
+	rc = xenevtchn_bind_virq(xce, Int_val(virq_type));
 	caml_leave_blocking_section();
 
 	if (rc == -1)
@@ -157,13 +164,14 @@ CAMLprim value stub_eventchn_bind_virq(value xce, value virq_type)
 	CAMLreturn(port);
 }
 
-CAMLprim value stub_eventchn_unbind(value xce, value port)
+CAMLprim value stub_eventchn_unbind(value xce_val, value port)
 {
-	CAMLparam2(xce, port);
+	CAMLparam2(xce_val, port);
+	xenevtchn_handle *xce = xce_of_val(xce_val);
 	int rc;
 
 	caml_enter_blocking_section();
-	rc = xenevtchn_unbind(_H(xce), Int_val(port));
+	rc = xenevtchn_unbind(xce, Int_val(port));
 	caml_leave_blocking_section();
 
 	if (rc == -1)
@@ -172,14 +180,15 @@ CAMLprim value stub_eventchn_unbind(value xce, value port)
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_eventchn_pending(value xce)
+CAMLprim value stub_eventchn_pending(value xce_val)
 {
-	CAMLparam1(xce);
+	CAMLparam1(xce_val);
 	CAMLlocal1(result);
+	xenevtchn_handle *xce = xce_of_val(xce_val);
 	xenevtchn_port_or_error_t port;
 
 	caml_enter_blocking_section();
-	port = xenevtchn_pending(_H(xce));
+	port = xenevtchn_pending(xce);
 	caml_leave_blocking_section();
 
 	if (port == -1)
@@ -189,16 +198,17 @@ CAMLprim value stub_eventchn_pending(value xce)
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_eventchn_unmask(value xce, value _port)
+CAMLprim value stub_eventchn_unmask(value xce_val, value _port)
 {
-	CAMLparam2(xce, _port);
+	CAMLparam2(xce_val, _port);
+	xenevtchn_handle *xce = xce_of_val(xce_val);
 	evtchn_port_t port;
 	int rc;
 
 	port = Int_val(_port);
 
 	caml_enter_blocking_section();
-	rc = xenevtchn_unmask(_H(xce), port);
+	rc = xenevtchn_unmask(xce, port);
 	caml_leave_blocking_section();
 
 	if (rc)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:29:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487848.755601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC7-0001NM-EG; Tue, 31 Jan 2023 21:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487848.755601; Tue, 31 Jan 2023 21:29:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC7-0001NA-9y; Tue, 31 Jan 2023 21:29:35 +0000
Received: by outflank-mailman (input) for mailman id 487848;
 Tue, 31 Jan 2023 21:29:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qwjI=54=citrix.com=prvs=3886215e8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pMyC4-0000Nm-Ku
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:29:33 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5645ffb5-a1ae-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 22:29:27 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5645ffb5-a1ae-11ed-b63b-5f92e7d2e73a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675200567;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=tJ4tTGgnhHKyde+wF8qU4LPiC92eyP3hCl2+Pked3dE=;
  b=h0RIaL/CLB2RtKjhN19reXDuj+bnJZI4CkqtOSzxdvot/NfOX9gbYNeq
   LHjrL/ZM75xHvyaBL7giR6WQk9kQC8YRhZ2bELP3EHGJVE6KsZ7dzjKEM
   OhR+Rffut5fpnwEilSmol6HdB32rbB36TXdH0E+Z7d3jG1brKmKKcd9a1
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95097287
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:rLtUSq9AKfsAhmGvWTVPDrUDgX6TJUtcMsCJ2f8bNWPcYEJGY0x3x
 jAZXG3TbP2MNzajedp3Ydvn8UNS6JKEmoRgTQFsqyo8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKucYHsZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kIw1BjOkGlA5AdmPKsS5AS2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklz2
 acxGmAiciucrMvn3rPiE+RQlpkaeZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAr3/zaTBH7nmSorI6+TP7xw1tyrn9dtHSf7RmQO0Ewx7J+
 TiWoAwVBDlBKYyHxBG3wEmK3MPdlATpWqkIBIWRo6sCbFq7mTVIVUx+uUGAifWwlEOWQd9UL
 E0QvC00osAa5EGtC9XwQRC8iHqFpQIHHcpdFfUg7wOAwbaS5ByWblXoVRYYNoZg7pVvA2V3i
 BnQxYiB6SFTXKO9dF7G34XEgi+JJgM8fHEDPHYJcRtY2oy2yG0stS7nQtFmGa+zq9T6HzDs3
 jyHxBQDa6UvYd0jjPviow2e6964jt2QF1NuuF2LNo6wxlkhDLNJcbBE/rQyARxoCI+CBmeMs
 3Ef8yR1xLBfVMrd/MBhrQhkIV1I2xpnGGeE6bKMN8N7n9hIx5JEVd443d2GDB01WvvogBewC
 KMphStf5YVIIFyhZrJtboS6BqwClPa/SI20DqiMM4AUPfCdkTNrGwk3NSatM53FyhBwwcnTx
 7/EGSpTMZrqIfs+l2fnLwvs+bQq2jo/1QvuqWPTlnyaPU6lTCfNE98taQLeBt3VGYvY+G05B
 f4DbZrVo/ieOcWiChTqHXk7dglWcyNkWMys+6S6tIere2JbJY3oMNeJqZtJRmCvt/49ejvgl
 p1lZnJl9Q==
IronPort-HdrOrdr: A9a23:Mzq3j6hycse6p0DbZoBFeMqRMnBQXuUji2hC6mlwRA09TyVXrb
 HWoB17726NtN91YhsdcL+7Scy9qB/nhPxICMwqTNSftWrd2VdATrsSibcKqgeIc0bDH6xmtZ
 uIGJIOb+EYY2IK6/oSIzPVLz/j+rS6GWyT6ts2Bk0CcT1X
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="95097287"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, Rob
 Hoes <Rob.Hoes@citrix.com>
Subject: [PATCH 7/7] tools/ocaml/xc: Don't reference Custom objects with the GC lock released
Date: Tue, 31 Jan 2023 21:29:13 +0000
Message-ID: <20230131212913.6199-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230131212913.6199-1-andrew.cooper3@citrix.com>
References: <20230131212913.6199-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Edwin Török <edwin.torok@cloud.com>

The modification to the _H() macro for OCaml 5 support introduced a subtle
bug.  From the manual:

  https://ocaml.org/manual/intfc.html#ss:parallel-execution-long-running-c-code

"After caml_release_runtime_system() was called and until
caml_acquire_runtime_system() is called, the C code must not access any OCaml
data, nor call any function of the run-time system, nor call back into OCaml
code."

Previously, the value was a naked C pointer, so dereferencing it wasn't
"accessing any OCaml data", but the fix to avoid naked C pointers added a
layer of indirection through an OCaml Custom object, meaning that the common
pattern of using _H() in a blocking section is unsafe.

In order to fix:

 * Drop the _H() macro and replace it with a static inline xch_of_val().
 * Opencode the assignment into Data_custom_val() in the constructors.
 * Rename "value xch" parameters to "value xch_val" so we can consistently
   have "xc_interface *xch" on the stack, and obtain the pointer with the GC
   lock still held.
 * Drop the _D() macro while at it, because it's just pointless indirection.

Fixes: 8b3c06a3e545 ("tools/ocaml/xenctrl: OCaml 5 support, fix use-after-free")
Signed-off-by: Edwin Török <edwin.torok@cloud.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Török <edwin.torok@cloud.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 454 ++++++++++++++++++++----------------
 1 file changed, 251 insertions(+), 203 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index e5277f6f19a2..f9006c662382 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -37,9 +37,6 @@
 
 #include "mmap_stubs.h"
 
-#define _H(__h) (*((xc_interface **)Data_custom_val(__h)))
-#define _D(__d) ((uint32_t)Int_val(__d))
-
 #ifndef Val_none
 #define Val_none (Val_int(0))
 #endif
@@ -48,9 +45,18 @@
 #define Tag_some 0
 #endif
 
+static inline xc_interface *xch_of_val(value v)
+{
+	xc_interface *xch = *(xc_interface **)Data_custom_val(v);
+
+	return xch;
+}
+
 static void stub_xenctrl_finalize(value v)
 {
-	xc_interface_close(_H(v));
+	xc_interface *xch = xch_of_val(v);
+
+	xc_interface_close(xch);
 }
 
 static struct custom_operations xenctrl_ops = {
@@ -100,7 +106,7 @@ CAMLprim value stub_xc_interface_open(value unit)
 		failwith_xc(xch);
 
 	result = caml_alloc_custom(&xenctrl_ops, sizeof(xch), 0, 1);
-	_H(result) = xch;
+	*(xc_interface **)Data_custom_val(result) = xch;
 
 	CAMLreturn(result);
 }
@@ -187,10 +193,11 @@ static unsigned int ocaml_list_to_c_bitmap(value l)
 	return val;
 }
 
-CAMLprim value stub_xc_domain_create(value xch, value wanted_domid, value config)
+CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value config)
 {
-	CAMLparam3(xch, wanted_domid, config);
+	CAMLparam3(xch_val, wanted_domid, config);
 	CAMLlocal2(l, arch_domconfig);
+	xc_interface *xch = xch_of_val(xch_val);
 
 	/* Mnemonics for the named fields inside domctl_create_config */
 #define VAL_SSIDREF             Field(config, 0)
@@ -282,98 +289,104 @@ CAMLprim value stub_xc_domain_create(value xch, value wanted_domid, value config
 #undef VAL_SSIDREF
 
 	caml_enter_blocking_section();
-	result = xc_domain_create(_H(xch), &domid, &cfg);
+	result = xc_domain_create(xch, &domid, &cfg);
 	caml_leave_blocking_section();
 
 	if (result < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_int(domid));
 }
 
-CAMLprim value stub_xc_domain_max_vcpus(value xch, value domid,
+CAMLprim value stub_xc_domain_max_vcpus(value xch_val, value domid,
                                         value max_vcpus)
 {
-	CAMLparam3(xch, domid, max_vcpus);
+	CAMLparam3(xch_val, domid, max_vcpus);
+	xc_interface *xch = xch_of_val(xch_val);
 	int r;
 
-	r = xc_domain_max_vcpus(_H(xch), _D(domid), Int_val(max_vcpus));
+	r = xc_domain_max_vcpus(xch, Int_val(domid), Int_val(max_vcpus));
 	if (r)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
 
 
-value stub_xc_domain_sethandle(value xch, value domid, value handle)
+value stub_xc_domain_sethandle(value xch_val, value domid, value handle)
 {
-	CAMLparam3(xch, domid, handle);
+	CAMLparam3(xch_val, domid, handle);
+	xc_interface *xch = xch_of_val(xch_val);
 	xen_domain_handle_t h;
 	int i;
 
 	domain_handle_of_uuid_string(h, String_val(handle));
 
-	i = xc_domain_sethandle(_H(xch), _D(domid), h);
+	i = xc_domain_sethandle(xch, Int_val(domid), h);
 	if (i)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
 
-static value dom_op(value xch, value domid, int (*fn)(xc_interface *, uint32_t))
+static value dom_op(value xch_val, value domid,
+		    int (*fn)(xc_interface *, uint32_t))
 {
-	CAMLparam2(xch, domid);
+	CAMLparam2(xch_val, domid);
+	xc_interface *xch = xch_of_val(xch_val);
 	int result;
 
-	uint32_t c_domid = _D(domid);
+	uint32_t c_domid = Int_val(domid);
 
 	caml_enter_blocking_section();
-	result = fn(_H(xch), c_domid);
+	result = fn(xch, c_domid);
 	caml_leave_blocking_section();
         if (result)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_domain_pause(value xch, value domid)
+CAMLprim value stub_xc_domain_pause(value xch_val, value domid)
 {
-	return dom_op(xch, domid, xc_domain_pause);
+	return dom_op(xch_val, domid, xc_domain_pause);
 }
 
 
-CAMLprim value stub_xc_domain_unpause(value xch, value domid)
+CAMLprim value stub_xc_domain_unpause(value xch_val, value domid)
 {
-	return dom_op(xch, domid, xc_domain_unpause);
+	return dom_op(xch_val, domid, xc_domain_unpause);
 }
 
-CAMLprim value stub_xc_domain_destroy(value xch, value domid)
+CAMLprim value stub_xc_domain_destroy(value xch_val, value domid)
 {
-	return dom_op(xch, domid, xc_domain_destroy);
+	return dom_op(xch_val, domid, xc_domain_destroy);
 }
 
-CAMLprim value stub_xc_domain_resume_fast(value xch, value domid)
+CAMLprim value stub_xc_domain_resume_fast(value xch_val, value domid)
 {
-	CAMLparam2(xch, domid);
+	CAMLparam2(xch_val, domid);
+	xc_interface *xch = xch_of_val(xch_val);
 	int result;
 
-	uint32_t c_domid = _D(domid);
+	uint32_t c_domid = Int_val(domid);
 
 	caml_enter_blocking_section();
-	result = xc_domain_resume(_H(xch), c_domid, 1);
+	result = xc_domain_resume(xch, c_domid, 1);
 	caml_leave_blocking_section();
         if (result)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_domain_shutdown(value xch, value domid, value reason)
+CAMLprim value stub_xc_domain_shutdown(value xch_val, value domid, value reason)
 {
-	CAMLparam3(xch, domid, reason);
+	CAMLparam3(xch_val, domid, reason);
+	xc_interface *xch = xch_of_val(xch_val);
 	int ret;
 
-	ret = xc_domain_shutdown(_H(xch), _D(domid), Int_val(reason));
+	ret = xc_domain_shutdown(xch, Int_val(domid), Int_val(reason));
 	if (ret < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
@@ -433,10 +446,11 @@ static value alloc_domaininfo(xc_domaininfo_t * info)
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_xc_domain_getinfolist(value xch, value first_domain, value nb)
+CAMLprim value stub_xc_domain_getinfolist(value xch_val, value first_domain, value nb)
 {
-	CAMLparam3(xch, first_domain, nb);
+	CAMLparam3(xch_val, first_domain, nb);
 	CAMLlocal2(result, temp);
+	xc_interface *xch = xch_of_val(xch_val);
 	xc_domaininfo_t * info;
 	int i, ret, toalloc, retval;
 	unsigned int c_max_domains;
@@ -450,16 +464,16 @@ CAMLprim value stub_xc_domain_getinfolist(value xch, value first_domain, value n
 
 	result = temp = Val_emptylist;
 
-	c_first_domain = _D(first_domain);
+	c_first_domain = Int_val(first_domain);
 	c_max_domains = Int_val(nb);
 	caml_enter_blocking_section();
-	retval = xc_domain_getinfolist(_H(xch), c_first_domain,
+	retval = xc_domain_getinfolist(xch, c_first_domain,
 				       c_max_domains, info);
 	caml_leave_blocking_section();
 
 	if (retval < 0) {
 		free(info);
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	}
 	for (i = 0; i < retval; i++) {
 		result = caml_alloc_small(2, Tag_cons);
@@ -474,38 +488,39 @@ CAMLprim value stub_xc_domain_getinfolist(value xch, value first_domain, value n
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_xc_domain_getinfo(value xch, value domid)
+CAMLprim value stub_xc_domain_getinfo(value xch_val, value domid)
 {
-	CAMLparam2(xch, domid);
+	CAMLparam2(xch_val, domid);
 	CAMLlocal1(result);
+	xc_interface *xch = xch_of_val(xch_val);
 	xc_domaininfo_t info;
 	int ret;
 
-	ret = xc_domain_getinfolist(_H(xch), _D(domid), 1, &info);
+	ret = xc_domain_getinfolist(xch, Int_val(domid), 1, &info);
 	if (ret != 1)
-		failwith_xc(_H(xch));
-	if (info.domain != _D(domid))
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
+	if (info.domain != Int_val(domid))
+		failwith_xc(xch);
 
 	result = alloc_domaininfo(&info);
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_xc_vcpu_getinfo(value xch, value domid, value vcpu)
+CAMLprim value stub_xc_vcpu_getinfo(value xch_val, value domid, value vcpu)
 {
-	CAMLparam3(xch, domid, vcpu);
+	CAMLparam3(xch_val, domid, vcpu);
 	CAMLlocal1(result);
+	xc_interface *xch = xch_of_val(xch_val);
 	xc_vcpuinfo_t info;
 	int retval;
 
-	uint32_t c_domid = _D(domid);
+	uint32_t c_domid = Int_val(domid);
 	uint32_t c_vcpu = Int_val(vcpu);
 	caml_enter_blocking_section();
-	retval = xc_vcpu_getinfo(_H(xch), c_domid,
-	                         c_vcpu, &info);
+	retval = xc_vcpu_getinfo(xch, c_domid, c_vcpu, &info);
 	caml_leave_blocking_section();
 	if (retval < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	result = caml_alloc_tuple(5);
 	Store_field(result, 0, Val_bool(info.online));
@@ -517,17 +532,18 @@ CAMLprim value stub_xc_vcpu_getinfo(value xch, value domid, value vcpu)
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_xc_vcpu_context_get(value xch, value domid,
+CAMLprim value stub_xc_vcpu_context_get(value xch_val, value domid,
                                         value cpu)
 {
-	CAMLparam3(xch, domid, cpu);
+	CAMLparam3(xch_val, domid, cpu);
+	xc_interface *xch = xch_of_val(xch_val);
 	CAMLlocal1(context);
 	int ret;
 	vcpu_guest_context_any_t ctxt;
 
-	ret = xc_vcpu_getcontext(_H(xch), _D(domid), Int_val(cpu), &ctxt);
+	ret = xc_vcpu_getcontext(xch, Int_val(domid), Int_val(cpu), &ctxt);
 	if ( ret < 0 )
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	context = caml_alloc_string(sizeof(ctxt));
 	memcpy((char *) String_val(context), &ctxt.c, sizeof(ctxt.c));
@@ -535,10 +551,10 @@ CAMLprim value stub_xc_vcpu_context_get(value xch, value domid,
 	CAMLreturn(context);
 }
 
-static int get_cpumap_len(value xch, value cpumap)
+static int get_cpumap_len(xc_interface *xch, value cpumap)
 {
 	int ml_len = Wosize_val(cpumap);
-	int xc_len = xc_get_max_cpus(_H(xch));
+	int xc_len = xc_get_max_cpus(xch);
 
 	if (ml_len < xc_len)
 		return ml_len;
@@ -546,56 +562,58 @@ static int get_cpumap_len(value xch, value cpumap)
 		return xc_len;
 }
 
-CAMLprim value stub_xc_vcpu_setaffinity(value xch, value domid,
+CAMLprim value stub_xc_vcpu_setaffinity(value xch_val, value domid,
                                         value vcpu, value cpumap)
 {
-	CAMLparam4(xch, domid, vcpu, cpumap);
+	CAMLparam4(xch_val, domid, vcpu, cpumap);
+	xc_interface *xch = xch_of_val(xch_val);
 	int i, len = get_cpumap_len(xch, cpumap);
 	xc_cpumap_t c_cpumap;
 	int retval;
 
-	c_cpumap = xc_cpumap_alloc(_H(xch));
+	c_cpumap = xc_cpumap_alloc(xch);
 	if (c_cpumap == NULL)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	for (i=0; i<len; i++) {
 		if (Bool_val(Field(cpumap, i)))
 			c_cpumap[i/8] |= 1 << (i&7);
 	}
-	retval = xc_vcpu_setaffinity(_H(xch), _D(domid),
+	retval = xc_vcpu_setaffinity(xch, Int_val(domid),
 				     Int_val(vcpu),
 				     c_cpumap, NULL,
 				     XEN_VCPUAFFINITY_HARD);
 	free(c_cpumap);
 
 	if (retval < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_vcpu_getaffinity(value xch, value domid,
+CAMLprim value stub_xc_vcpu_getaffinity(value xch_val, value domid,
                                         value vcpu)
 {
-	CAMLparam3(xch, domid, vcpu);
+	CAMLparam3(xch_val, domid, vcpu);
 	CAMLlocal1(ret);
+	xc_interface *xch = xch_of_val(xch_val);
 	xc_cpumap_t c_cpumap;
-	int i, len = xc_get_max_cpus(_H(xch));
+	int i, len = xc_get_max_cpus(xch);
 	int retval;
 
 	if (len < 1)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
-	c_cpumap = xc_cpumap_alloc(_H(xch));
+	c_cpumap = xc_cpumap_alloc(xch);
 	if (c_cpumap == NULL)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
-	retval = xc_vcpu_getaffinity(_H(xch), _D(domid),
+	retval = xc_vcpu_getaffinity(xch, Int_val(domid),
 				     Int_val(vcpu),
 				     c_cpumap, NULL,
 				     XEN_VCPUAFFINITY_HARD);
 	if (retval < 0) {
 		free(c_cpumap);
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	}
 
 	ret = caml_alloc(len, 0);
@@ -612,63 +630,68 @@ CAMLprim value stub_xc_vcpu_getaffinity(value xch, value domid,
 	CAMLreturn(ret);
 }
 
-CAMLprim value stub_xc_sched_id(value xch)
+CAMLprim value stub_xc_sched_id(value xch_val)
 {
-	CAMLparam1(xch);
+	CAMLparam1(xch_val);
+	xc_interface *xch = xch_of_val(xch_val);
 	int sched_id;
 
-	if (xc_sched_id(_H(xch), &sched_id))
-		failwith_xc(_H(xch));
+	if (xc_sched_id(xch, &sched_id))
+		failwith_xc(xch);
+
 	CAMLreturn(Val_int(sched_id));
 }
 
-CAMLprim value stub_xc_evtchn_alloc_unbound(value xch,
+CAMLprim value stub_xc_evtchn_alloc_unbound(value xch_val,
                                             value local_domid,
                                             value remote_domid)
 {
-	CAMLparam3(xch, local_domid, remote_domid);
+	CAMLparam3(xch_val, local_domid, remote_domid);
+	xc_interface *xch = xch_of_val(xch_val);
 	int result;
 
-	uint32_t c_local_domid = _D(local_domid);
-	uint32_t c_remote_domid = _D(remote_domid);
+	uint32_t c_local_domid = Int_val(local_domid);
+	uint32_t c_remote_domid = Int_val(remote_domid);
 
 	caml_enter_blocking_section();
-	result = xc_evtchn_alloc_unbound(_H(xch), c_local_domid,
+	result = xc_evtchn_alloc_unbound(xch, c_local_domid,
 	                                     c_remote_domid);
 	caml_leave_blocking_section();
 
 	if (result < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_int(result));
 }
 
-CAMLprim value stub_xc_evtchn_reset(value xch, value domid)
+CAMLprim value stub_xc_evtchn_reset(value xch_val, value domid)
 {
-	CAMLparam2(xch, domid);
+	CAMLparam2(xch_val, domid);
+	xc_interface *xch = xch_of_val(xch_val);
 	int r;
 
-	r = xc_evtchn_reset(_H(xch), _D(domid));
+	r = xc_evtchn_reset(xch, Int_val(domid));
 	if (r < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_evtchn_status(value xch, value domid, value port)
+CAMLprim value stub_xc_evtchn_status(value xch_val, value domid, value port)
 {
-	CAMLparam3(xch, domid, port);
+	CAMLparam3(xch_val, domid, port);
 	CAMLlocal4(result, result_status, stat, interdomain);
+	xc_interface *xch = xch_of_val(xch_val);
 	xc_evtchn_status_t status = {
-		.dom = _D(domid),
+		.dom = Int_val(domid),
 		.port = Int_val(port),
 	};
 	int rc;
 
 	caml_enter_blocking_section();
-	rc = xc_evtchn_status(_H(xch), &status);
+	rc = xc_evtchn_status(xch, &status);
 	caml_leave_blocking_section();
 
 	if ( rc < 0 )
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	switch ( status.status )
 	{
@@ -716,7 +739,7 @@ CAMLprim value stub_xc_evtchn_status(value xch, value domid, value port)
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_xc_readconsolering(value xch)
+CAMLprim value stub_xc_readconsolering(value xch_val)
 {
 	/* Safe to use outside of blocking sections because of Ocaml GC lock. */
 	static unsigned int conring_size = 16384 + 1;
@@ -725,8 +748,9 @@ CAMLprim value stub_xc_readconsolering(value xch)
 	char *str = NULL, *ptr;
 	int ret;
 
-	CAMLparam1(xch);
+	CAMLparam1(xch_val);
 	CAMLlocal1(ring);
+	xc_interface *xch = xch_of_val(xch_val);
 
 	str = malloc(size);
 	if (!str)
@@ -734,12 +758,12 @@ CAMLprim value stub_xc_readconsolering(value xch)
 
 	/* Hopefully our conring_size guess is sufficient */
 	caml_enter_blocking_section();
-	ret = xc_readconsolering(_H(xch), str, &count, 0, 0, &index);
+	ret = xc_readconsolering(xch, str, &count, 0, 0, &index);
 	caml_leave_blocking_section();
 
 	if (ret < 0) {
 		free(str);
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	}
 
 	while (count == size && ret >= 0) {
@@ -755,7 +779,7 @@ CAMLprim value stub_xc_readconsolering(value xch)
 		count = size - count;
 
 		caml_enter_blocking_section();
-		ret = xc_readconsolering(_H(xch), str, &count, 0, 1, &index);
+		ret = xc_readconsolering(xch, str, &count, 0, 1, &index);
 		caml_leave_blocking_section();
 
 		count += str - ptr;
@@ -777,30 +801,32 @@ CAMLprim value stub_xc_readconsolering(value xch)
 	CAMLreturn(ring);
 }
 
-CAMLprim value stub_xc_send_debug_keys(value xch, value keys)
+CAMLprim value stub_xc_send_debug_keys(value xch_val, value keys)
 {
-	CAMLparam2(xch, keys);
+	CAMLparam2(xch_val, keys);
+	xc_interface *xch = xch_of_val(xch_val);
 	int r;
 
-	r = xc_send_debug_keys(_H(xch), String_val(keys));
+	r = xc_send_debug_keys(xch, String_val(keys));
 	if (r)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_physinfo(value xch)
+CAMLprim value stub_xc_physinfo(value xch_val)
 {
-	CAMLparam1(xch);
+	CAMLparam1(xch_val);
 	CAMLlocal4(physinfo, cap_list, arch_cap_flags, arch_cap_list);
+	xc_interface *xch = xch_of_val(xch_val);
 	xc_physinfo_t c_physinfo;
 	int r, arch_cap_flags_tag;
 
 	caml_enter_blocking_section();
-	r = xc_physinfo(_H(xch), &c_physinfo);
+	r = xc_physinfo(xch, &c_physinfo);
 	caml_leave_blocking_section();
 
 	if (r)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	/*
 	 * capabilities: physinfo_cap_flag list;
@@ -837,10 +863,11 @@ CAMLprim value stub_xc_physinfo(value xch)
 	CAMLreturn(physinfo);
 }
 
-CAMLprim value stub_xc_pcpu_info(value xch, value nr_cpus)
+CAMLprim value stub_xc_pcpu_info(value xch_val, value nr_cpus)
 {
-	CAMLparam2(xch, nr_cpus);
+	CAMLparam2(xch_val, nr_cpus);
 	CAMLlocal2(pcpus, v);
+	xc_interface *xch = xch_of_val(xch_val);
 	xc_cpuinfo_t *info;
 	int r, size;
 
@@ -852,12 +879,12 @@ CAMLprim value stub_xc_pcpu_info(value xch, value nr_cpus)
 		caml_raise_out_of_memory();
 
 	caml_enter_blocking_section();
-	r = xc_getcpuinfo(_H(xch), Int_val(nr_cpus), info, &size);
+	r = xc_getcpuinfo(xch, Int_val(nr_cpus), info, &size);
 	caml_leave_blocking_section();
 
 	if (r) {
 		free(info);
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	}
 
 	if (size > 0) {
@@ -873,79 +900,82 @@ CAMLprim value stub_xc_pcpu_info(value xch, value nr_cpus)
 	CAMLreturn(pcpus);
 }
 
-CAMLprim value stub_xc_domain_setmaxmem(value xch, value domid,
+CAMLprim value stub_xc_domain_setmaxmem(value xch_val, value domid,
                                         value max_memkb)
 {
-	CAMLparam3(xch, domid, max_memkb);
+	CAMLparam3(xch_val, domid, max_memkb);
+	xc_interface *xch = xch_of_val(xch_val);
 	int retval;
 
-	uint32_t c_domid = _D(domid);
+	uint32_t c_domid = Int_val(domid);
 	unsigned int c_max_memkb = Int64_val(max_memkb);
 	caml_enter_blocking_section();
-	retval = xc_domain_setmaxmem(_H(xch), c_domid,
-	                                 c_max_memkb);
+	retval = xc_domain_setmaxmem(xch, c_domid, c_max_memkb);
 	caml_leave_blocking_section();
 	if (retval)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_domain_set_memmap_limit(value xch, value domid,
+CAMLprim value stub_xc_domain_set_memmap_limit(value xch_val, value domid,
                                                value map_limitkb)
 {
-	CAMLparam3(xch, domid, map_limitkb);
+	CAMLparam3(xch_val, domid, map_limitkb);
+	xc_interface *xch = xch_of_val(xch_val);
 	unsigned long v;
 	int retval;
 
 	v = Int64_val(map_limitkb);
-	retval = xc_domain_set_memmap_limit(_H(xch), _D(domid), v);
+	retval = xc_domain_set_memmap_limit(xch, Int_val(domid), v);
 	if (retval)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_domain_memory_increase_reservation(value xch,
+CAMLprim value stub_xc_domain_memory_increase_reservation(value xch_val,
                                                           value domid,
                                                           value mem_kb)
 {
-	CAMLparam3(xch, domid, mem_kb);
+	CAMLparam3(xch_val, domid, mem_kb);
+	xc_interface *xch = xch_of_val(xch_val);
 	int retval;
 
 	unsigned long nr_extents = ((unsigned long)(Int64_val(mem_kb))) >> (XC_PAGE_SHIFT - 10);
 
-	uint32_t c_domid = _D(domid);
+	uint32_t c_domid = Int_val(domid);
 	caml_enter_blocking_section();
-	retval = xc_domain_increase_reservation_exact(_H(xch), c_domid,
+	retval = xc_domain_increase_reservation_exact(xch, c_domid,
 							  nr_extents, 0, 0, NULL);
 	caml_leave_blocking_section();
 
 	if (retval)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_version_version(value xch)
+CAMLprim value stub_xc_version_version(value xch_val)
 {
-	CAMLparam1(xch);
+	CAMLparam1(xch_val);
 	CAMLlocal1(result);
+	xc_interface *xch = xch_of_val(xch_val);
 	xen_extraversion_t extra;
 	long packed;
 	int retval;
 
 	caml_enter_blocking_section();
-	packed = xc_version(_H(xch), XENVER_version, NULL);
+	packed = xc_version(xch, XENVER_version, NULL);
 	caml_leave_blocking_section();
 
 	if (packed < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	caml_enter_blocking_section();
-	retval = xc_version(_H(xch), XENVER_extraversion, &extra);
+	retval = xc_version(xch, XENVER_extraversion, &extra);
 	caml_leave_blocking_section();
 
 	if (retval)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	result = caml_alloc_tuple(3);
 
@@ -957,19 +987,20 @@ CAMLprim value stub_xc_version_version(value xch)
 }
 
 
-CAMLprim value stub_xc_version_compile_info(value xch)
+CAMLprim value stub_xc_version_compile_info(value xch_val)
 {
-	CAMLparam1(xch);
+	CAMLparam1(xch_val);
 	CAMLlocal1(result);
+	xc_interface *xch = xch_of_val(xch_val);
 	xen_compile_info_t ci;
 	int retval;
 
 	caml_enter_blocking_section();
-	retval = xc_version(_H(xch), XENVER_compile_info, &ci);
+	retval = xc_version(xch, XENVER_compile_info, &ci);
 	caml_leave_blocking_section();
 
 	if (retval)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	result = caml_alloc_tuple(4);
 
@@ -982,35 +1013,36 @@ CAMLprim value stub_xc_version_compile_info(value xch)
 }
 
 
-static value xc_version_single_string(value xch, int code, void *info)
+static value xc_version_single_string(value xch_val, int code, void *info)
 {
-	CAMLparam1(xch);
+	CAMLparam1(xch_val);
+	xc_interface *xch = xch_of_val(xch_val);
 	int retval;
 
 	caml_enter_blocking_section();
-	retval = xc_version(_H(xch), code, info);
+	retval = xc_version(xch, code, info);
 	caml_leave_blocking_section();
 
 	if (retval)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(caml_copy_string((char *)info));
 }
 
 
-CAMLprim value stub_xc_version_changeset(value xch)
+CAMLprim value stub_xc_version_changeset(value xch_val)
 {
 	xen_changeset_info_t ci;
 
-	return xc_version_single_string(xch, XENVER_changeset, &ci);
+	return xc_version_single_string(xch_val, XENVER_changeset, &ci);
 }
 
 
-CAMLprim value stub_xc_version_capabilities(value xch)
+CAMLprim value stub_xc_version_capabilities(value xch_val)
 {
 	xen_capabilities_info_t ci;
 
-	return xc_version_single_string(xch, XENVER_capabilities, &ci);
+	return xc_version_single_string(xch_val, XENVER_capabilities, &ci);
 }
 
 
@@ -1022,11 +1054,12 @@ CAMLprim value stub_pages_to_kib(value pages)
 }
 
 
-CAMLprim value stub_map_foreign_range(value xch, value dom,
+CAMLprim value stub_map_foreign_range(value xch_val, value dom,
                                       value size, value mfn)
 {
-	CAMLparam4(xch, dom, size, mfn);
+	CAMLparam4(xch_val, dom, size, mfn);
 	CAMLlocal1(result);
+	xc_interface *xch = xch_of_val(xch_val);
 	struct mmap_interface *intf;
 	unsigned long c_mfn = Nativeint_val(mfn);
 	int len = Int_val(size);
@@ -1037,7 +1070,7 @@ CAMLprim value stub_map_foreign_range(value xch, value dom,
 			    Abstract_tag);
 
 	caml_enter_blocking_section();
-	ptr = xc_map_foreign_range(_H(xch), _D(dom), len,
+	ptr = xc_map_foreign_range(xch, Int_val(dom), len,
 				   PROT_READ|PROT_WRITE, c_mfn);
 	caml_leave_blocking_section();
 
@@ -1050,18 +1083,19 @@ CAMLprim value stub_map_foreign_range(value xch, value dom,
 	CAMLreturn(result);
 }
 
-CAMLprim value stub_sched_credit_domain_get(value xch, value domid)
+CAMLprim value stub_sched_credit_domain_get(value xch_val, value domid)
 {
-	CAMLparam2(xch, domid);
+	CAMLparam2(xch_val, domid);
 	CAMLlocal1(sdom);
+	xc_interface *xch = xch_of_val(xch_val);
 	struct xen_domctl_sched_credit c_sdom;
 	int ret;
 
 	caml_enter_blocking_section();
-	ret = xc_sched_credit_domain_get(_H(xch), _D(domid), &c_sdom);
+	ret = xc_sched_credit_domain_get(xch, Int_val(domid), &c_sdom);
 	caml_leave_blocking_section();
 	if (ret != 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	sdom = caml_alloc_tuple(2);
 	Store_field(sdom, 0, Val_int(c_sdom.weight));
@@ -1070,67 +1104,71 @@ CAMLprim value stub_sched_credit_domain_get(value xch, value domid)
 	CAMLreturn(sdom);
 }
 
-CAMLprim value stub_sched_credit_domain_set(value xch, value domid,
+CAMLprim value stub_sched_credit_domain_set(value xch_val, value domid,
                                             value sdom)
 {
-	CAMLparam3(xch, domid, sdom);
+	CAMLparam3(xch_val, domid, sdom);
+	xc_interface *xch = xch_of_val(xch_val);
 	struct xen_domctl_sched_credit c_sdom;
 	int ret;
 
 	c_sdom.weight = Int_val(Field(sdom, 0));
 	c_sdom.cap = Int_val(Field(sdom, 1));
 	caml_enter_blocking_section();
-	ret = xc_sched_credit_domain_set(_H(xch), _D(domid), &c_sdom);
+	ret = xc_sched_credit_domain_set(xch, Int_val(domid), &c_sdom);
 	caml_leave_blocking_section();
 	if (ret != 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_shadow_allocation_get(value xch, value domid)
+CAMLprim value stub_shadow_allocation_get(value xch_val, value domid)
 {
-	CAMLparam2(xch, domid);
+	CAMLparam2(xch_val, domid);
 	CAMLlocal1(mb);
+	xc_interface *xch = xch_of_val(xch_val);
 	unsigned int c_mb;
 	int ret;
 
 	caml_enter_blocking_section();
-	ret = xc_shadow_control(_H(xch), _D(domid),
+	ret = xc_shadow_control(xch, Int_val(domid),
 				XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION,
 				&c_mb, 0);
 	caml_leave_blocking_section();
 	if (ret != 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	mb = Val_int(c_mb);
 	CAMLreturn(mb);
 }
 
-CAMLprim value stub_shadow_allocation_set(value xch, value domid,
+CAMLprim value stub_shadow_allocation_set(value xch_val, value domid,
 					  value mb)
 {
-	CAMLparam3(xch, domid, mb);
+	CAMLparam3(xch_val, domid, mb);
+	xc_interface *xch = xch_of_val(xch_val);
 	unsigned int c_mb;
 	int ret;
 
 	c_mb = Int_val(mb);
 	caml_enter_blocking_section();
-	ret = xc_shadow_control(_H(xch), _D(domid),
+	ret = xc_shadow_control(xch, Int_val(domid),
 				XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
 				&c_mb, 0);
 	caml_leave_blocking_section();
 	if (ret != 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_domain_ioport_permission(value xch, value domid,
+CAMLprim value stub_xc_domain_ioport_permission(value xch_val, value domid,
 					       value start_port, value nr_ports,
 					       value allow)
 {
-	CAMLparam5(xch, domid, start_port, nr_ports, allow);
+	CAMLparam5(xch_val, domid, start_port, nr_ports, allow);
+	xc_interface *xch = xch_of_val(xch_val);
 	uint32_t c_start_port, c_nr_ports;
 	uint8_t c_allow;
 	int ret;
@@ -1139,19 +1177,20 @@ CAMLprim value stub_xc_domain_ioport_permission(value xch, value domid,
 	c_nr_ports = Int_val(nr_ports);
 	c_allow = Bool_val(allow);
 
-	ret = xc_domain_ioport_permission(_H(xch), _D(domid),
+	ret = xc_domain_ioport_permission(xch, Int_val(domid),
 					 c_start_port, c_nr_ports, c_allow);
 	if (ret < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_domain_iomem_permission(value xch, value domid,
+CAMLprim value stub_xc_domain_iomem_permission(value xch_val, value domid,
 					       value start_pfn, value nr_pfns,
 					       value allow)
 {
-	CAMLparam5(xch, domid, start_pfn, nr_pfns, allow);
+	CAMLparam5(xch_val, domid, start_pfn, nr_pfns, allow);
+	xc_interface *xch = xch_of_val(xch_val);
 	unsigned long c_start_pfn, c_nr_pfns;
 	uint8_t c_allow;
 	int ret;
@@ -1160,18 +1199,19 @@ CAMLprim value stub_xc_domain_iomem_permission(value xch, value domid,
 	c_nr_pfns = Nativeint_val(nr_pfns);
 	c_allow = Bool_val(allow);
 
-	ret = xc_domain_iomem_permission(_H(xch), _D(domid),
+	ret = xc_domain_iomem_permission(xch, Int_val(domid),
 					 c_start_pfn, c_nr_pfns, c_allow);
 	if (ret < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_domain_irq_permission(value xch, value domid,
+CAMLprim value stub_xc_domain_irq_permission(value xch_val, value domid,
 					     value pirq, value allow)
 {
-	CAMLparam4(xch, domid, pirq, allow);
+	CAMLparam4(xch_val, domid, pirq, allow);
+	xc_interface *xch = xch_of_val(xch_val);
 	uint32_t c_pirq;
 	bool c_allow;
 	int ret;
@@ -1179,41 +1219,44 @@ CAMLprim value stub_xc_domain_irq_permission(value xch, value domid,
 	c_pirq = Int_val(pirq);
 	c_allow = Bool_val(allow);
 
-	ret = xc_domain_irq_permission(_H(xch), _D(domid),
+	ret = xc_domain_irq_permission(xch, Int_val(domid),
 				       c_pirq, c_allow);
 	if (ret < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_hvm_param_get(value xch, value domid, value param)
+CAMLprim value stub_xc_hvm_param_get(value xch_val, value domid, value param)
 {
-	CAMLparam3(xch, domid, param);
+	CAMLparam3(xch_val, domid, param);
+	xc_interface *xch = xch_of_val(xch_val);
 	uint64_t val;
 	int ret;
 
 	caml_enter_blocking_section();
-	ret = xc_hvm_param_get(_H(xch), _D(domid), Int_val(param), &val);
+	ret = xc_hvm_param_get(xch, Int_val(domid), Int_val(param), &val);
 	caml_leave_blocking_section();
 
 	if ( ret )
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(caml_copy_int64(val));
 }
 
-CAMLprim value stub_xc_hvm_param_set(value xch, value domid, value param, value val)
+CAMLprim value stub_xc_hvm_param_set(value xch_val, value domid, value param, value val)
 {
-	CAMLparam4(xch, domid, param, val);
+	CAMLparam4(xch_val, domid, param, val);
+	xc_interface *xch = xch_of_val(xch_val);
+	uint64_t val64 = Int64_val(val);
 	int ret;
 
 	caml_enter_blocking_section();
-	ret = xc_hvm_param_set(_H(xch), _D(domid), Int_val(param), Int64_val(val));
+	ret = xc_hvm_param_set(xch, Int_val(domid), Int_val(param), val64);
 	caml_leave_blocking_section();
 
 	if ( ret )
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_unit);
 }
@@ -1226,9 +1269,10 @@ static uint32_t encode_sbdf(int domain, int bus, int dev, int func)
 		((uint32_t)func   &    0x7);
 }
 
-CAMLprim value stub_xc_domain_test_assign_device(value xch, value domid, value desc)
+CAMLprim value stub_xc_domain_test_assign_device(value xch_val, value domid, value desc)
 {
-	CAMLparam3(xch, domid, desc);
+	CAMLparam3(xch_val, domid, desc);
+	xc_interface *xch = xch_of_val(xch_val);
 	int ret;
 	int domain, bus, dev, func;
 	uint32_t sbdf;
@@ -1239,14 +1283,15 @@ CAMLprim value stub_xc_domain_test_assign_device(value xch, value domid, value d
 	func = Int_val(Field(desc, 3));
 	sbdf = encode_sbdf(domain, bus, dev, func);
 
-	ret = xc_test_assign_device(_H(xch), _D(domid), sbdf);
+	ret = xc_test_assign_device(xch, Int_val(domid), sbdf);
 
 	CAMLreturn(Val_bool(ret == 0));
 }
 
-CAMLprim value stub_xc_domain_assign_device(value xch, value domid, value desc)
+CAMLprim value stub_xc_domain_assign_device(value xch_val, value domid, value desc)
 {
-	CAMLparam3(xch, domid, desc);
+	CAMLparam3(xch_val, domid, desc);
+	xc_interface *xch = xch_of_val(xch_val);
 	int ret;
 	int domain, bus, dev, func;
 	uint32_t sbdf;
@@ -1257,17 +1302,18 @@ CAMLprim value stub_xc_domain_assign_device(value xch, value domid, value desc)
 	func = Int_val(Field(desc, 3));
 	sbdf = encode_sbdf(domain, bus, dev, func);
 
-	ret = xc_assign_device(_H(xch), _D(domid), sbdf,
+	ret = xc_assign_device(xch, Int_val(domid), sbdf,
 			       XEN_DOMCTL_DEV_RDM_RELAXED);
 
 	if (ret < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_domain_deassign_device(value xch, value domid, value desc)
+CAMLprim value stub_xc_domain_deassign_device(value xch_val, value domid, value desc)
 {
-	CAMLparam3(xch, domid, desc);
+	CAMLparam3(xch_val, domid, desc);
+	xc_interface *xch = xch_of_val(xch_val);
 	int ret;
 	int domain, bus, dev, func;
 	uint32_t sbdf;
@@ -1278,28 +1324,29 @@ CAMLprim value stub_xc_domain_deassign_device(value xch, value domid, value desc
 	func = Int_val(Field(desc, 3));
 	sbdf = encode_sbdf(domain, bus, dev, func);
 
-	ret = xc_deassign_device(_H(xch), _D(domid), sbdf);
+	ret = xc_deassign_device(xch, Int_val(domid), sbdf);
 
 	if (ret < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_xc_get_cpu_featureset(value xch, value idx)
+CAMLprim value stub_xc_get_cpu_featureset(value xch_val, value idx)
 {
-	CAMLparam2(xch, idx);
+	CAMLparam2(xch_val, idx);
 	CAMLlocal1(bitmap_val);
 #if defined(__i386__) || defined(__x86_64__)
+	xc_interface *xch = xch_of_val(xch_val);
 
 	/* Safe, because of the global ocaml lock. */
 	static uint32_t fs_len;
 
 	if (fs_len == 0)
 	{
-		int ret = xc_get_cpu_featureset(_H(xch), 0, &fs_len, NULL);
+		int ret = xc_get_cpu_featureset(xch, 0, &fs_len, NULL);
 
 		if (ret || (fs_len == 0))
-			failwith_xc(_H(xch));
+			failwith_xc(xch);
 	}
 
 	{
@@ -1307,10 +1354,10 @@ CAMLprim value stub_xc_get_cpu_featureset(value xch, value idx)
 		uint32_t fs[fs_len], len = fs_len;
 		unsigned int i;
 
-		int ret = xc_get_cpu_featureset(_H(xch), Int_val(idx), &len, fs);
+		int ret = xc_get_cpu_featureset(xch, Int_val(idx), &len, fs);
 
 		if (ret)
-			failwith_xc(_H(xch));
+			failwith_xc(xch);
 
 		bitmap_val = caml_alloc(len, 0);
 
@@ -1323,15 +1370,16 @@ CAMLprim value stub_xc_get_cpu_featureset(value xch, value idx)
 	CAMLreturn(bitmap_val);
 }
 
-CAMLprim value stub_xc_watchdog(value xch, value domid, value timeout)
+CAMLprim value stub_xc_watchdog(value xch_val, value domid, value timeout)
 {
-	CAMLparam3(xch, domid, timeout);
+	CAMLparam3(xch_val, domid, timeout);
+	xc_interface *xch = xch_of_val(xch_val);
 	int ret;
 	unsigned int c_timeout = Int32_val(timeout);
 
-	ret = xc_watchdog(_H(xch), _D(domid), c_timeout);
+	ret = xc_watchdog(xch, Int_val(domid), c_timeout);
 	if (ret < 0)
-		failwith_xc(_H(xch));
+		failwith_xc(xch);
 
 	CAMLreturn(Val_int(ret));
 }
-- 
2.11.0

From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:29:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487850.755613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC8-0001Xy-CO; Tue, 31 Jan 2023 21:29:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487850.755613; Tue, 31 Jan 2023 21:29:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC8-0001VA-1g; Tue, 31 Jan 2023 21:29:36 +0000
Received: by outflank-mailman (input) for mailman id 487850;
 Tue, 31 Jan 2023 21:29:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qwjI=54=citrix.com=prvs=3886215e8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pMyC5-0000Nb-Kl
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:29:33 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 599ac36b-a1ae-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 22:29:32 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 599ac36b-a1ae-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675200571;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=YZc3Qe4tQu/04xfxAwG/P7vQhM7bvhY61z2fO5lZ86M=;
  b=FotiXju3+4liJe4vHjhgOMTCs9TVfB+zVgxBhtX9ltBoKjhWldW57DsF
   jgDX8ZTyCF3y2Py+klAHfyo7CvrW1UwBAUlmPKhiKZ/dBWo/swI09+eW7
   gXS/5P7YIe+u3+XgvypS2cJw+GZJt1RIcTD3loiooBiPJ42kKPMbIxbeE
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95499183
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="95499183"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Rob Hoes <Rob.Hoes@citrix.com>
Subject: [PATCH 5/7] tools/ocaml/xc: Fix binding for xc_domain_assign_device()
Date: Tue, 31 Jan 2023 21:29:11 +0000
Message-ID: <20230131212913.6199-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230131212913.6199-1-andrew.cooper3@citrix.com>
References: <20230131212913.6199-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Edwin Török <edwin.torok@cloud.com>

The patch adding this binding was plain broken, and unreviewed.  It modified
the C stub to add a 4th parameter without an equivalent adjustment in the
Ocaml side of the bindings.

In 64bit builds, this causes us to dereference whatever dead value is in %rcx
when trying to interpret the rflags parameter.

This has gone unnoticed because Xapi doesn't use this binding (it has its
own).  Unbreak the binding by passing RDM_RELAXED unconditionally for now
(matching the libxl default behaviour).

Fixes: 9b34056cb4 ("tools: extend xc_assign_device() to support rdm reservation policy")
Signed-off-by: Edwin Török <edwin.torok@cloud.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Török <edwin.torok@cloud.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index fd1f306f0202..291663bb278a 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -1245,17 +1245,12 @@ CAMLprim value stub_xc_domain_test_assign_device(value xch, value domid, value d
 	CAMLreturn(Val_bool(ret == 0));
 }
 
-static int domain_assign_device_rdm_flag_table[] = {
-    XEN_DOMCTL_DEV_RDM_RELAXED,
-};
-
-CAMLprim value stub_xc_domain_assign_device(value xch, value domid, value desc,
-                                            value rflag)
+CAMLprim value stub_xc_domain_assign_device(value xch, value domid, value desc)
 {
-	CAMLparam4(xch, domid, desc, rflag);
+	CAMLparam3(xch, domid, desc);
 	int ret;
 	int domain, bus, dev, func;
-	uint32_t sbdf, flag;
+	uint32_t sbdf;
 
 	domain = Int_val(Field(desc, 0));
 	bus = Int_val(Field(desc, 1));
@@ -1263,10 +1258,8 @@ CAMLprim value stub_xc_domain_assign_device(value xch, value domid, value desc,
 	func = Int_val(Field(desc, 3));
 	sbdf = encode_sbdf(domain, bus, dev, func);
 
-	ret = Int_val(Field(rflag, 0));
-	flag = domain_assign_device_rdm_flag_table[ret];
-
-	ret = xc_assign_device(_H(xch), _D(domid), sbdf, flag);
+	ret = xc_assign_device(_H(xch), _D(domid), sbdf,
+			       XEN_DOMCTL_DEV_RDM_RELAXED);
 
 	if (ret < 0)
 		failwith_xc(_H(xch));
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:29:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487846.755566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC4-0000Xl-Mc; Tue, 31 Jan 2023 21:29:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487846.755566; Tue, 31 Jan 2023 21:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC4-0000VT-Iw; Tue, 31 Jan 2023 21:29:32 +0000
Received: by outflank-mailman (input) for mailman id 487846;
 Tue, 31 Jan 2023 21:29:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qwjI=54=citrix.com=prvs=3886215e8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pMyC3-0000Nb-Kh
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:29:31 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 58507d6a-a1ae-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 22:29:29 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58507d6a-a1ae-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675200569;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=9v262fZBQ7E8UCWSJDzUn+RbfirFU1mUCzn0CR2PL+s=;
  b=FZU6LUFluLsEPZGYeWsR0ve0BwgE7Md93X3quPkb9QRpWdqmn+ORviJQ
   rkyq61/amgAV9TGgWYpTI1tPI+tFpCoLc9ABZ4zPzITKzrQ15SS/Gvf8l
   d+MlYYytf2U4pTb3bowI5FjJR5SPl4oGMwc8TyHbPge3tK7y2poF8Cd5x
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95499177
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="95499177"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: [PATCH 0/7] tools/ocaml: Memory corruption fixes in bindings
Date: Tue, 31 Jan 2023 21:29:06 +0000
Message-ID: <20230131212913.6199-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

It turns out there have been some latent memory corruption bugs and other
errors in the bindings since they were first introduced.

These were discovered after realising that we'd introduced other memory
corruption bugs as part of the Ocaml 5 fixes, and, in the case of the evtchn
bindings, backported them as part of the oxenstored-lu fixes.

This series addresses all the memory corruption issues we're aware of that
can occur in an entirely well-formed program.

Deferred for now are the (hopefully latent) memory corruption errors which
happen due to bad parameter passing, and a substantial pile of related cleanup.

Andrew Cooper (3):
  tools/ocaml/libs: Allocate the correct amount of memory for Abstract_tag
  tools/ocaml/evtchn: Misc cleanup
  tools/ocaml/xc: Don't reference Abstract_Tag objects with the GC lock released

Edwin Török (4):
  tools/ocaml/libs: Don't declare stubs as taking void
  tools/ocaml/evtchn: Don't reference Custom objects with the GC lock released
  tools/ocaml/xc: Fix binding for xc_domain_assign_device()
  tools/ocaml/xc: Don't reference Custom objects with the GC lock released

 tools/ocaml/libs/eventchn/xeneventchn_stubs.c |  89 ++---
 tools/ocaml/libs/mmap/Makefile                |   2 +
 tools/ocaml/libs/mmap/xenmmap_stubs.c         |   6 +-
 tools/ocaml/libs/xb/xenbus_stubs.c            |   5 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c           | 494 ++++++++++++++------------
 5 files changed, 323 insertions(+), 273 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:29:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487851.755623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC9-0001lE-2r; Tue, 31 Jan 2023 21:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487851.755623; Tue, 31 Jan 2023 21:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC8-0001j6-PF; Tue, 31 Jan 2023 21:29:36 +0000
Received: by outflank-mailman (input) for mailman id 487851;
 Tue, 31 Jan 2023 21:29:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qwjI=54=citrix.com=prvs=3886215e8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pMyC6-0000Nb-Km
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:29:34 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5908c139-a1ae-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 22:29:32 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5908c139-a1ae-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675200572;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=DuMXyrr37uOetL3gBS1wPn+orSXwgbl8VJbcI3sGhvc=;
  b=Ualea35HX4y91C5TstnZbZvNo7xjd/wVtHCYkgPb/79wLexkIhn8FGS5
   8R4yewTlbnn0PswKqrQ/g7D0IHilaEizMxQ6vDJIjf3ws1B5d4bAblvpB
   wiH7KwSZcNP7JRuBueqnDJ+RSW7lczhNEY1DcYk9YJYgjhYTlQBZOoFgT
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95499184
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="95499184"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: [PATCH 6/7] tools/ocaml/xc: Don't reference Abstract_Tag objects with the GC lock released
Date: Tue, 31 Jan 2023 21:29:12 +0000
Message-ID: <20230131212913.6199-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230131212913.6199-1-andrew.cooper3@citrix.com>
References: <20230131212913.6199-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The intf->{addr,len} references in the xc_map_foreign_range() call are unsafe.
From the manual:

  https://ocaml.org/manual/intfc.html#ss:parallel-execution-long-running-c-code

"After caml_release_runtime_system() was called and until
caml_acquire_runtime_system() is called, the C code must not access any OCaml
data, nor call any function of the run-time system, nor call back into OCaml
code."

Beyond what the manual says, the intf pointer is (potentially) invalidated
by caml_enter_blocking_section() if another thread happens to perform garbage
collection at just the right (wrong) moment.

Rewrite the logic.  There's no need to stash data in the Ocaml object until
the success path at the very end.

Fixes: 8b7ce06a2d34 ("ocaml: Add XC bindings.")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Török <edwin.torok@cloud.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>

Note: the mmap stub has a similar pattern when constructing a mmap_interface,
but it's not actually unsafe because it doesn't drop the GC lock.

_H() is buggy too, but this patch needs backporting further than that fix.
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 291663bb278a..e5277f6f19a2 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -1028,26 +1028,25 @@ CAMLprim value stub_map_foreign_range(value xch, value dom,
 	CAMLparam4(xch, dom, size, mfn);
 	CAMLlocal1(result);
 	struct mmap_interface *intf;
-	uint32_t c_dom;
-	unsigned long c_mfn;
+	unsigned long c_mfn = Nativeint_val(mfn);
+	int len = Int_val(size);
+	void *ptr;
 
 	BUILD_BUG_ON((sizeof(struct mmap_interface) % sizeof(value)) != 0);
 	result = caml_alloc(Wsize_bsize(sizeof(struct mmap_interface)),
 			    Abstract_tag);
 
-	intf = (struct mmap_interface *) result;
-
-	intf->len = Int_val(size);
-
-	c_dom = _D(dom);
-	c_mfn = Nativeint_val(mfn);
 	caml_enter_blocking_section();
-	intf->addr = xc_map_foreign_range(_H(xch), c_dom,
-	                                  intf->len, PROT_READ|PROT_WRITE,
-	                                  c_mfn);
+	ptr = xc_map_foreign_range(_H(xch), _D(dom), len,
+				   PROT_READ|PROT_WRITE, c_mfn);
 	caml_leave_blocking_section();
-	if (!intf->addr)
+
+	if (!ptr)
 		caml_failwith("xc_map_foreign_range error");
+
+	intf = Data_abstract_val(result);
+	*intf = (struct mmap_interface){ ptr, len };
+
 	CAMLreturn(result);
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:29:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487844.755557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC4-0000Nz-6D; Tue, 31 Jan 2023 21:29:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487844.755557; Tue, 31 Jan 2023 21:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC4-0000Ns-23; Tue, 31 Jan 2023 21:29:32 +0000
Received: by outflank-mailman (input) for mailman id 487844;
 Tue, 31 Jan 2023 21:29:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qwjI=54=citrix.com=prvs=3886215e8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pMyC2-0000Nb-CI
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:29:30 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 54e58ec1-a1ae-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 22:29:28 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54e58ec1-a1ae-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675200568;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=jNU72Uu2tmrcqHFCNgIV8t/XfjcVgtHdUCV/DbhunZ0=;
  b=QMTAIaJm5uUkfYsHhHh2hOmT50uppvvn2vGnDBTiyGu9DAV8jDMUCRHI
   deD4VWRrMcJ8iF4eZBFHj1uUyUTGWch1DkfbvylTBb428J9UtTjolI0s9
   +3zUzq1u89Eb8QCyeAhb3jWz95EGclaGS6TXrvLTyzEadp/LYwBXR092n
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95499176
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="95499176"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Rob Hoes <Rob.Hoes@citrix.com>
Subject: [PATCH 1/7] tools/ocaml/libs: Don't declare stubs as taking void
Date: Tue, 31 Jan 2023 21:29:07 +0000
Message-ID: <20230131212913.6199-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230131212913.6199-1-andrew.cooper3@citrix.com>
References: <20230131212913.6199-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Edwin Török <edwin.torok@cloud.com>

There is no such thing as an Ocaml function (C stub or otherwise) taking no
parameters.  In the absence of any other parameters, unit is still passed.

This doesn't explode with any ABI we care about, but would malfunction for an
ABI environment such as stdcall.

Fixes: c3afd398ba7f ("ocaml: Add XS bindings.")
Fixes: 8b7ce06a2d34 ("ocaml: Add XC bindings.")
Signed-off-by: Edwin Török <edwin.torok@cloud.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Török <edwin.torok@cloud.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/xb/xenbus_stubs.c  | 5 ++---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 4 ++--
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/ocaml/libs/xb/xenbus_stubs.c b/tools/ocaml/libs/xb/xenbus_stubs.c
index 3065181a55e6..97116b07826a 100644
--- a/tools/ocaml/libs/xb/xenbus_stubs.c
+++ b/tools/ocaml/libs/xb/xenbus_stubs.c
@@ -30,10 +30,9 @@
 #include <xenctrl.h>
 #include <xen/io/xs_wire.h>
 
-CAMLprim value stub_header_size(void)
+CAMLprim value stub_header_size(value unit)
 {
-	CAMLparam0();
-	CAMLreturn(Val_int(sizeof(struct xsd_sockmsg)));
+	return Val_int(sizeof(struct xsd_sockmsg));
 }
 
 CAMLprim value stub_header_of_string(value s)
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 2fba9c5e94d6..728818445975 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -86,9 +86,9 @@ static void Noreturn failwith_xc(xc_interface *xch)
 	caml_raise_with_string(*caml_named_value("xc.error"), error_str);
 }
 
-CAMLprim value stub_xc_interface_open(void)
+CAMLprim value stub_xc_interface_open(value unit)
 {
-	CAMLparam0();
+	CAMLparam1(unit);
 	CAMLlocal1(result);
 	xc_interface *xch;
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:29:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:29:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487847.755590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC6-00015d-35; Tue, 31 Jan 2023 21:29:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487847.755590; Tue, 31 Jan 2023 21:29:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyC5-000159-Ty; Tue, 31 Jan 2023 21:29:33 +0000
Received: by outflank-mailman (input) for mailman id 487847;
 Tue, 31 Jan 2023 21:29:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qwjI=54=citrix.com=prvs=3886215e8=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pMyC4-0000Nb-Kh
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:29:32 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 58eef30c-a1ae-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 22:29:31 +0100 (CET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58eef30c-a1ae-11ed-933c-83870f6b2ba8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1675200570;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=h6tenPYxyZVuTbfcgU9nfxQU9sqSsuEBVjFHdCiT8ok=;
  b=ZqQqvOdX5KKbbBc3TH9V8JrKxTvwFWztDm1k7EmLlFMal2Qc2wD+TqMZ
   MZj+V/IaN4scW7KbBhxfC/5ayssWolx4KVrtyEtccb5tDt0lvvDBvtvEV
   N8h6tAFJtwyY9c0q7TKebPETQ/bQV7g9QRc9QqhlA2PoLcfTTsmXI2xWG
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 95499179
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.97,261,1669093200"; 
   d="scan'208";a="95499179"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: [PATCH 2/7] tools/ocaml/libs: Allocate the correct amount of memory for Abstract_tag
Date: Tue, 31 Jan 2023 21:29:08 +0000
Message-ID: <20230131212913.6199-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20230131212913.6199-1-andrew.cooper3@citrix.com>
References: <20230131212913.6199-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

caml_alloc() takes units of Wsize (word size), not bytes.  As a consequence,
we're allocating 4 or 8 times too much memory.

OCaml has a helper, Wsize_bsize(), but it truncates sizes which aren't an
exact multiple.  Use a BUILD_BUG_ON() to cover the potential for truncation,
as there's no rounding-up form of the helper.

Fixes: 8b7ce06a2d34 ("ocaml: Add XC bindings.")
Fixes: d3e649277a13 ("ocaml: add mmap bindings implementation.")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Christian Lindig <christian.lindig@citrix.com>
CC: David Scott <dave@recoil.org>
CC: Edwin Török <edwin.torok@cloud.com>
CC: Rob Hoes <Rob.Hoes@citrix.com>
---
 tools/ocaml/libs/mmap/Makefile        | 2 ++
 tools/ocaml/libs/mmap/xenmmap_stubs.c | 6 +++++-
 tools/ocaml/libs/xc/xenctrl_stubs.c   | 5 ++++-
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/tools/ocaml/libs/mmap/Makefile b/tools/ocaml/libs/mmap/Makefile
index a62153713571..855b8b2c9877 100644
--- a/tools/ocaml/libs/mmap/Makefile
+++ b/tools/ocaml/libs/mmap/Makefile
@@ -2,6 +2,8 @@ OCAML_TOPLEVEL=$(CURDIR)/../..
 XEN_ROOT=$(OCAML_TOPLEVEL)/../..
 include $(OCAML_TOPLEVEL)/common.make
 
+CFLAGS += $(CFLAGS_xeninclude)
+
 OBJS = xenmmap
 INTF = $(foreach obj, $(OBJS),$(obj).cmi)
 LIBS = xenmmap.cma xenmmap.cmxa
diff --git a/tools/ocaml/libs/mmap/xenmmap_stubs.c b/tools/ocaml/libs/mmap/xenmmap_stubs.c
index e03951d781bb..d623ad390e40 100644
--- a/tools/ocaml/libs/mmap/xenmmap_stubs.c
+++ b/tools/ocaml/libs/mmap/xenmmap_stubs.c
@@ -21,6 +21,8 @@
 #include <errno.h>
 #include "mmap_stubs.h"
 
+#include <xen-tools/libs.h>
+
 #include <caml/mlvalues.h>
 #include <caml/memory.h>
 #include <caml/alloc.h>
@@ -59,7 +61,9 @@ CAMLprim value stub_mmap_init(value fd, value pflag, value mflag,
 	default: caml_invalid_argument("maptype");
 	}
 
-	result = caml_alloc(sizeof(struct mmap_interface), Abstract_tag);
+	BUILD_BUG_ON((sizeof(struct mmap_interface) % sizeof(value)) != 0);
+	result = caml_alloc(Wsize_bsize(sizeof(struct mmap_interface)),
+			    Abstract_tag);
 
 	if (mmap_interface_init(Intf_val(result), Int_val(fd),
 	                        c_pflag, c_mflag,
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 728818445975..fd1f306f0202 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -1031,7 +1031,10 @@ CAMLprim value stub_map_foreign_range(value xch, value dom,
 	uint32_t c_dom;
 	unsigned long c_mfn;
 
-	result = caml_alloc(sizeof(struct mmap_interface), Abstract_tag);
+	BUILD_BUG_ON((sizeof(struct mmap_interface) % sizeof(value)) != 0);
+	result = caml_alloc(Wsize_bsize(sizeof(struct mmap_interface)),
+			    Abstract_tag);
+
 	intf = (struct mmap_interface *) result;
 
 	intf->len = Int_val(size);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:34:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:34:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487876.755644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyGv-0005lz-H4; Tue, 31 Jan 2023 21:34:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487876.755644; Tue, 31 Jan 2023 21:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyGv-0005ls-EW; Tue, 31 Jan 2023 21:34:33 +0000
Received: by outflank-mailman (input) for mailman id 487876;
 Tue, 31 Jan 2023 21:34:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMyGu-0005lm-MG
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:34:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMyGr-0006KB-O1; Tue, 31 Jan 2023 21:34:29 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMyGr-0005tz-Hv; Tue, 31 Jan 2023 21:34:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=zx0GRjSnpNH7/j7W/En1PLZ3/bIiWZCovpNh5+xXDMg=; b=JqKoyaf92H6NKfjFn5H3VwJYtS
	HoA/5sSSIJ4tOuBFnJ1mtphtHN+gAxUUcR0drt5YU22PqDg37xGNBtSu6w/8PItp6dDhk9zT14uJJ
	DbD3q1pEub9/oAbozMxZvdMRAkPw+jc1bE4KYAl+H1uuxP62CLp8b4CQXm7mZkJhKvi0=;
Message-ID: <c68840af-0173-1408-a9a9-ac5ebacee4e9@xen.org>
Date: Tue, 31 Jan 2023 21:34:28 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH] xen/common: rwlock: Constify the parameter of
 _rw_is{,_write}_locked()
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230130182858.86886-1-julien@xen.org>
 <9dc52d71-4148-0c16-d153-3ebcd1a9c754@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <9dc52d71-4148-0c16-d153-3ebcd1a9c754@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 31/01/2023 09:34, Jan Beulich wrote:
> On 30.01.2023 19:28, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The lock is not meant to be modified by _rw_is{,_write}_locked(). So
>> constify it.
>>
>> This is helpful to be able to assert if the lock is taken when the
>> underlying structure is const.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> (maybe also Requested-by)

I will add a requested-by while committing (waiting for a push before 
doing it).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 21:37:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 21:37:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487887.755656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyJO-0006cJ-Tx; Tue, 31 Jan 2023 21:37:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487887.755656; Tue, 31 Jan 2023 21:37:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMyJO-0006cC-RD; Tue, 31 Jan 2023 21:37:06 +0000
Received: by outflank-mailman (input) for mailman id 487887;
 Tue, 31 Jan 2023 21:37:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pMyJN-0006c4-If
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 21:37:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMyJL-0006OL-OV; Tue, 31 Jan 2023 21:37:03 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pMyJL-000617-Jn; Tue, 31 Jan 2023 21:37:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=FQ77lcTKP5qh6yHnnUUY7njqMx15MTXjgWNuwt5iTtk=; b=uBptGxq9/LkKdhyYiWx2GwtIKU
	K1SLf/IJxxQ+2QeCD9kwMOLKQyblwjtiIqdw9Y+DYNQiXHLfjOtbsXa07+6Gd7xMFoEwaYdjWlWQf
	NhU4geigxQdXvVmt5z/IXQDGowqybJjlhj7uTqTGqlnC8ds69TpUHUfyku5UZck7Td2o=;
Message-ID: <7a4a26a5-6a11-3893-b727-e3c8e68ceef6@xen.org>
Date: Tue, 31 Jan 2023 21:37:01 +0000
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.6.1
Subject: Re: [PATCH 05/22] x86/srat: vmap the pages for acpi_slit
To: Jan Beulich <jbeulich@suse.com>
Cc: Hongyan Xia <hongyxia@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>
References: <20221216114853.8227-1-julien@xen.org>
 <20221216114853.8227-6-julien@xen.org>
 <ca02a313-0fa2-8041-3e8f-d467c3e99fb6@suse.com>
 <965e3faa-472d-9a79-83ca-fef57cda81c5@xen.org>
 <41de340c-b5ad-6c30-816f-1ce1ddc98069@xen.org>
 <d645ef07-f30b-ff6b-ffc0-7ef76da63285@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d645ef07-f30b-ff6b-ffc0-7ef76da63285@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Jan,

On 31/01/2023 09:11, Jan Beulich wrote:
> On 30.01.2023 20:27, Julien Grall wrote:
>> Hi Jan,
>>
>> On 23/12/2022 11:31, Julien Grall wrote:
>>> On 20/12/2022 15:30, Jan Beulich wrote:
>>>> On 16.12.2022 12:48, Julien Grall wrote:
>>>>> From: Hongyan Xia <hongyxia@amazon.com>
>>>>>
>>>>> This avoids the assumption that boot pages are in the direct map.
>>>>>
>>>>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> However, ...
>>>>
>>>>> --- a/xen/arch/x86/srat.c
>>>>> +++ b/xen/arch/x86/srat.c
>>>>> @@ -139,7 +139,8 @@ void __init acpi_numa_slit_init(struct
>>>>> acpi_table_slit *slit)
>>>>>            return;
>>>>>        }
>>>>>        mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
>>>>> -    acpi_slit = mfn_to_virt(mfn_x(mfn));
>>>>> +    acpi_slit = vmap_contig_pages(mfn, PFN_UP(slit->header.length));
>>>>
>>>> ... with the increased use of vmap space the VA range used will need
>>>> growing. And that's perhaps better done ahead of time than late.
>>>
>>> I will have a look to increase the vmap().
>>
>> I have started to look at this. The current size of VMAP is 64GB.
>>
>> At least in the setup I have I didn't see any particular issue with the
>> existing size of the vmap. Looking through the history, the last time it
>> was bumped by one of your commit (see b0581b9214d2) but it is not clear
>> what was the setup.
>>
>> Given I don't have a setup where the VMAP is exhausted it is not clear
>> to me what would be an acceptable bump.
>>
>> AFAICT, in PML4 slot 261, we still have 62GB reserved for future. So I
>> was thinking to add an extra 32GB which would bring the VMAP to 96GB.
>> This is just a number that doesn't use all the reserved space but still
>> a power of two.
>>
>> Are you fine with that?
> 
> Hmm. Leaving aside that 96Gb isn't a power of two, my comment saying
> "ahead of time" was under the (wrong, as it now looks) impression that
> the goal of your series was to truly do away with the directmap.

Yes, the directmap is still present with this series. There is more 
work needed to completely remove the directmap (see the cover letter 
for some details) and I would prefer if this is kept separate from 
this series.


> I was
> therefore expecting a much larger bump in size, perhaps moving the
> vmap area into space presently occupied by the directmap. IOW for the
> time being, with no _significant_ increase of space consumption, we
> may well be fine with the 64Gb range.

OK. I will keep it in mind when working on completely removing the 
directmap.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487894.755667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzTu-000822-DX; Tue, 31 Jan 2023 22:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487894.755667; Tue, 31 Jan 2023 22:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzTu-00081v-Ay; Tue, 31 Jan 2023 22:52:02 +0000
Received: by outflank-mailman (input) for mailman id 487894;
 Tue, 31 Jan 2023 22:52:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMzTs-00081l-LO; Tue, 31 Jan 2023 22:52:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMzTs-0008C0-DG; Tue, 31 Jan 2023 22:52:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pMzTs-0000mp-14; Tue, 31 Jan 2023 22:52:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pMzTs-0004IR-0X; Tue, 31 Jan 2023 22:52:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Q/pTRR4OpKL9B3HJ3THLqFdHXb4/Ma2Dokn1CVg9gHA=; b=47HzUuXhG/OH25eqAqeLaXiWsG
	zodTQNRtmBgS0Z/aGqfD5dM4mhtITreIEeB2w7xCvGnZUcIujMz8apCFe1ePEVVPMblich6ydv8kI
	kH6Z4PyesRl9PKEUUu6I5gEOhVTC//csQ+E2jINbQGib8AVJpZsWhWqH6dIp5PBpnlxY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176302-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 176302: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-install(4):broken:regression
    xen-unstable-smoke:build-armhf:syslog-server:running:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:build-armhf:capture-logs:broken:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78e93e6e57c218eead498a664785f414bcb12460
X-Osstest-Versions-That:
    xen=10b80ee5588e8928b266dea02a5e99d098bd227a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 22:52:00 +0000

flight 176302 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176302/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   4 host-install(4)        broken REGR. vs. 176151
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 176151
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78e93e6e57c218eead498a664785f414bcb12460
baseline version:
 xen                  10b80ee5588e8928b266dea02a5e99d098bd227a

Last test of basis   176151  2023-01-26 14:00:29 Z    5 days
Testing same since   176283  2023-01-30 21:02:20 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf host-install(4)
broken-step build-armhf capture-logs

Not pushing.

------------------------------------------------------------
commit 78e93e6e57c218eead498a664785f414bcb12460
Author: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Date:   Wed Jan 25 11:21:31 2023 +0000

    xen/arm: Probe the load/entry point address of an uImage correctly
    
    Currently, kernel_uimage_probe() does not read the load/entry point address
    set in the uImage header. Thus, info->zimage.start is 0 (default value).
    This causes kernel_zimage_place() to treat the binary (contained within
    uImage) as a position-independent executable, so it loads it at an
    incorrect address.
    
    The correct approach is to read "uimage.load" and set info->zimage.start.
    This ensures that the binary is loaded at the correct address. Also, read
    "uimage.ep" and set info->entry (i.e. the kernel entry address).
    
    If the user provides a load address (i.e. "uimage.load") of 0x0, then the
    image is treated as a position-independent executable. Xen can load such
    an image at any address it considers appropriate. A position-independent
    executable cannot have a fixed entry point address.
    
    This behavior is applicable for both arm32 and arm64 platforms.
    
    Previously, on both arm32 and arm64 platforms, Xen ignored the load and
    entry point addresses set in the uImage header. With this commit, Xen will
    use them, making Xen's behaviour consistent with U-Boot's handling of
    uImage headers.
    
    Users who want to use Xen with statically partitioned domains can provide
    non-zero load and entry addresses for the dom0/domU kernel. The load and
    entry addresses provided must be within the memory region allocated by
    Xen.
    
    A deviation from U-Boot behaviour is that we consider a load address of
    0x0 to denote that the image supports position-independent execution.
    This makes the behaviour consistent across uImage and zImage.
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    [stefano: minor doc improvement]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487897.755678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzU3-0008Im-L2; Tue, 31 Jan 2023 22:52:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487897.755678; Tue, 31 Jan 2023 22:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzU3-0008If-I8; Tue, 31 Jan 2023 22:52:11 +0000
Received: by outflank-mailman (input) for mailman id 487897;
 Tue, 31 Jan 2023 22:52:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzU1-0008Hj-EW
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:09 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2061c.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::61c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e22638a7-a1b9-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 23:52:07 +0100 (CET)
Received: from DM6PR21CA0011.namprd21.prod.outlook.com (2603:10b6:5:174::21)
 by DS7PR12MB5768.namprd12.prod.outlook.com (2603:10b6:8:77::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 22:52:03 +0000
Received: from DM6NAM11FT029.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:174:cafe::8c) by DM6PR21CA0011.outlook.office365.com
 (2603:10b6:5:174::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6086.4 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:03 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT029.mail.protection.outlook.com (10.13.173.23) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.21 via Frontend Transport; Tue, 31 Jan 2023 22:52:02 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:02 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 14:52:02 -0800
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:01 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e22638a7-a1b9-11ed-933c-83870f6b2ba8
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HX0/VprMtKGAirOyOoDFvnPvkn84PZOJUgA+v8ssH3c=;
 b=ERv/aSuAc7EiynUP8a8gNFwokHUmc8l/3VQEWXoXiu8AY9+64VtaSoxkcLi9kYIGBZ+iFnynZzk4QdEKLGZrjqD46eJS/LAT0ELh53QfG4ZvOzYG3RqIQ4HWHRCKLEoO44v7O3mdbldD6kPzPY7X6/sQH1LTGR+lZF/9xjnMuh4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>
Subject: [QEMU][PATCH v5 00/10] Introduce xenpvh machine for arm architecture
Date: Tue, 31 Jan 2023 14:51:39 -0800
Message-ID: <20230131225149.14764-1-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT029:EE_|DS7PR12MB5768:EE_
X-MS-Office365-Filtering-Correlation-Id: baa3ead2-3303-42d1-c1fb-08db03ddc4f6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BQgl+Hw5J4q2pTObti6zMEXD8AmHejqQfcQwmEnyWCURKctLj/4AvB+voy8ae+qK3gEYUSB7v0pbEvDmsO0KoGaQCGfkzWEQKPILnMCrmuCHr8Bm+W0lL2W9LWJMArNo/OF1GRL2dY84sCKgsUxH6dl2nu7OLMWFdRF5m8+BRI4kerC35Cz+ZCFZon17HodfGO665odtOGceNyJk8q60FAmBFrzdcBiIWiDeFYgqGvWWnc57fYNh+Ka8WuDaVqVyuVx8BqPE25WRiDxlCQU6rZeQUTCthFKqrGK9TPapPo1yg9QbUxOGUPrRnAPdyoakCDEtZ4U7czwbXZX9fpsqgQ1NM1zeRixT7bq/R1IONBySJIa4mlIwkEIUFbbypwhs/GWy0zbOvH3pihxHb+ikIzW62bEK0GMBeiCeouqxpK8fVlfwWI3Pe5Nt2xdZZLSDh3BgmHzeZoLUwHgfD2ZJm6DsKPlgzecJ4zHlhLX+MVPVmJDvwxVwQk8iHjcXp+NnKKJ1ivSmm195P8leIscjsXIXsNb84gZFVgELlkg74Nala0nQRl+FZfuANP6VGKlVzMXQxlWiTB+H1tAn5Z92OZE4BWwshAlBCgV2/PxXycHNPT9GJ+iemwC3xpyYiC/HQHnYBLtrUuwtb49bfqtgKUVCAqmCobpPDhgWpQlmFYjdsXLnLy6mhPZ/O2WMiyi1SZaVc4awdGuerX9iXE9Hcu4TX9u1GEWt1VnKv2ZbHHc=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(376002)(136003)(346002)(39860400002)(396003)(451199018)(40470700004)(36840700001)(46966006)(6666004)(40460700003)(41300700001)(2616005)(336012)(1076003)(478600001)(186003)(26005)(54906003)(316002)(6916009)(8676002)(4326008)(83380400001)(70586007)(70206006)(426003)(36860700001)(8936002)(2906002)(36756003)(82740400003)(47076005)(356005)(5660300002)(86362001)(44832011)(40480700001)(81166007)(82310400005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:02.9281
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: baa3ead2-3303-42d1-c1fb-08db03ddc4f6
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT029.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB5768


Hi,
This series adds the xenpvh machine for aarch64. The motivation behind creating
the xenpvh machine with IOREQ and TPM support was to enable each guest on Xen
aarch64 to have its own unique, emulated TPM.

This series does the following:
    1. Moves common Xen functionality from hw/i386/xen to hw/xen/ so that it
       can be used for aarch64.
    2. Adds a minimal xenpvh Arm machine which creates an IOREQ server and
       supports TPM.

Also, checkpatch.pl fails for patches 03/12 and 06/12. These failures are due
to moving old code, which did not follow the QEMU coding style, to a new
location. No new code was added.

Regards,
Vikram

ChangeLog:
    v4->v5:
        Fixed three lines of code in xen_exit_notifier() that went missing
        during a rebase.
        Fixed the subject of patch 07/10.

    v3->v4:
        Removed the out-of-series 04/12 patch.

    v2->v3:
        1. Changed the machine name to xenpvh per Juergen's input.
        2. Added docs/system/xenpvh.rst documentation.
        3. Removed GUEST_TPM_BASE and added tpm_base_address as a property.
        4. Corrected CONFIG_TPM-related issues.
        5. Added a xen_register_backend() call to xen_register_ioreq().
        6. Applied Oleksandr's suggestion, i.e. removed the extra interface
           opening and used the accel=xen option.

    v1 -> v2:
    Merged patches 05 and 06.
    04/12: xen-hvm-common.c:
        1. Moved xen_be_init() and xen_be_register_common() from
           xen_register_ioreq() to xen_register_backend().
        2. Changed g_malloc to g_new and perror to error_setg_errno.
        3. Created a local subroutine function for Xen_IOREQ_register.
        4. Fixed build issues with the inclusion of xenstore.h.
        5. Fixed minor errors.
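
For context, a minimal sketch of how the new machine might be invoked with an
external swtpm-backed TPM. Only the machine name (xenpvh) and accel=xen come
from this series; the tpm-base-addr spelling and the socket path are
assumptions for illustration, not confirmed here:

```shell
# Hypothetical invocation; property spellings may differ from the final code.
qemu-system-aarch64 \
    -machine xenpvh,tpm-base-addr=0x0c000000 \
    -accel xen \
    -chardev socket,id=chrtpm,path=/tmp/swtpm/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm
```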

Stefano Stabellini (5):
  hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
  xen-hvm: reorganize xen-hvm and move common function to xen-hvm-common
  include/hw/xen/xen_common: return error from xen_create_ioreq_server
  hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration
    failure
  meson.build: do not set have_xen_pci_passthrough for aarch64 targets

Vikram Garhwal (5):
  hw/i386/xen/: move xen-mapcache.c to hw/xen/
  hw/i386/xen: rearrange xen_hvm_init_pc
  hw/xen/xen-hvm-common: Use g_new and error_report
  hw/arm: introduce xenpvh machine
  meson.build: enable xenpv machine build for ARM

 docs/system/arm/xenpvh.rst       |   34 +
 docs/system/target-arm.rst       |    1 +
 hw/arm/meson.build               |    2 +
 hw/arm/xen_arm.c                 |  182 +++++
 hw/i386/meson.build              |    1 +
 hw/i386/xen/meson.build          |    1 -
 hw/i386/xen/trace-events         |   19 -
 hw/i386/xen/xen-hvm.c            | 1078 +++---------------------------
 hw/xen/meson.build               |    7 +
 hw/xen/trace-events              |   19 +
 hw/xen/xen-hvm-common.c          |  893 +++++++++++++++++++++++++
 hw/{i386 => }/xen/xen-mapcache.c |    0
 include/hw/arm/xen_arch_hvm.h    |    9 +
 include/hw/i386/xen_arch_hvm.h   |   11 +
 include/hw/xen/arch_hvm.h        |    5 +
 include/hw/xen/xen-hvm-common.h  |   98 +++
 include/hw/xen/xen_common.h      |   13 +-
 meson.build                      |    4 +-
 18 files changed, 1364 insertions(+), 1013 deletions(-)
 create mode 100644 docs/system/arm/xenpvh.rst
 create mode 100644 hw/arm/xen_arm.c
 create mode 100644 hw/xen/xen-hvm-common.c
 rename hw/{i386 => }/xen/xen-mapcache.c (100%)
 create mode 100644 include/hw/arm/xen_arch_hvm.h
 create mode 100644 include/hw/i386/xen_arch_hvm.h
 create mode 100644 include/hw/xen/arch_hvm.h
 create mode 100644 include/hw/xen/xen-hvm-common.h

-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487898.755689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzU7-00008j-TM; Tue, 31 Jan 2023 22:52:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487898.755689; Tue, 31 Jan 2023 22:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzU7-00008c-QH; Tue, 31 Jan 2023 22:52:15 +0000
Received: by outflank-mailman (input) for mailman id 487898;
 Tue, 31 Jan 2023 22:52:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzU6-0008Hj-08
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:14 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20601.outbound.protection.outlook.com
 [2a01:111:f400:7e88::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6bd8c97-a1b9-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 23:52:13 +0100 (CET)
Received: from MW4PR04CA0121.namprd04.prod.outlook.com (2603:10b6:303:84::6)
 by IA1PR12MB6626.namprd12.prod.outlook.com (2603:10b6:208:3a2::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 22:52:09 +0000
Received: from CO1NAM11FT100.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:84:cafe::a1) by MW4PR04CA0121.outlook.office365.com
 (2603:10b6:303:84::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT100.mail.protection.outlook.com (10.13.175.133) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6064.22 via Frontend Transport; Tue, 31 Jan 2023 22:52:09 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:07 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:07 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:06 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6bd8c97-a1b9-11ed-933c-83870f6b2ba8
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, "Michael S. Tsirkin"
	<mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, "Paolo
 Bonzini" <pbonzini@redhat.com>, Richard Henderson
	<richard.henderson@linaro.org>, Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard
	<anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: [QEMU][PATCH v5 01/10] hw/i386/xen/: move xen-mapcache.c to hw/xen/
Date: Tue, 31 Jan 2023 14:51:40 -0800
Message-ID: <20230131225149.14764-2-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT100:EE_|IA1PR12MB6626:EE_
X-MS-Office365-Filtering-Correlation-Id: da900d49-00e6-4d0b-d87e-08db03ddc8a2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:09.0265
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: da900d49-00e6-4d0b-d87e-08db03ddc8a2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT100.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6626

xen-mapcache.c contains common functions which can be used for enabling Xen on
aarch64 with IOREQ handling. Move it from hw/i386/xen to hw/xen to make it
accessible to both aarch64 and x86.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 hw/i386/meson.build              | 1 +
 hw/i386/xen/meson.build          | 1 -
 hw/i386/xen/trace-events         | 5 -----
 hw/xen/meson.build               | 4 ++++
 hw/xen/trace-events              | 5 +++++
 hw/{i386 => }/xen/xen-mapcache.c | 0
 6 files changed, 10 insertions(+), 6 deletions(-)
 rename hw/{i386 => }/xen/xen-mapcache.c (100%)

diff --git a/hw/i386/meson.build b/hw/i386/meson.build
index 213e2e82b3..cfdbfdcbcb 100644
--- a/hw/i386/meson.build
+++ b/hw/i386/meson.build
@@ -33,5 +33,6 @@ subdir('kvm')
 subdir('xen')
 
 i386_ss.add_all(xenpv_ss)
+i386_ss.add_all(xen_ss)
 
 hw_arch += {'i386': i386_ss}
diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index be84130300..2fcc46e6ca 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -1,6 +1,5 @@
 i386_ss.add(when: 'CONFIG_XEN', if_true: files(
   'xen-hvm.c',
-  'xen-mapcache.c',
   'xen_apic.c',
   'xen_platform.c',
   'xen_pvdevice.c',
diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index 5d6be61090..a0c89d91c4 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -21,8 +21,3 @@ xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
 cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 
-# xen-mapcache.c
-xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
-xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
-xen_map_cache_return(void* ptr) "%p"
-
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index ae0ace3046..19d0637c46 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -22,3 +22,7 @@ else
 endif
 
 specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
+
+xen_ss = ss.source_set()
+
+xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index 3da3fd8348..2c8f238f42 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -41,3 +41,8 @@ xs_node_vprintf(char *path, char *value) "%s %s"
 xs_node_vscanf(char *path, char *value) "%s %s"
 xs_node_watch(char *path) "%s"
 xs_node_unwatch(char *path) "%s"
+
+# xen-mapcache.c
+xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
+xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
+xen_map_cache_return(void* ptr) "%p"
diff --git a/hw/i386/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
similarity index 100%
rename from hw/i386/xen/xen-mapcache.c
rename to hw/xen/xen-mapcache.c
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487899.755700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUA-0000R3-Ap; Tue, 31 Jan 2023 22:52:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487899.755700; Tue, 31 Jan 2023 22:52:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUA-0000Qq-6i; Tue, 31 Jan 2023 22:52:18 +0000
Received: by outflank-mailman (input) for mailman id 487899;
 Tue, 31 Jan 2023 22:52:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzU9-0000Ma-AA
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:17 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2088.outbound.protection.outlook.com [40.107.237.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e76a9f97-a1b9-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 23:52:14 +0100 (CET)
Received: from DS7PR06CA0047.namprd06.prod.outlook.com (2603:10b6:8:54::27) by
 DM4PR12MB6421.namprd12.prod.outlook.com (2603:10b6:8:b7::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6064.22; Tue, 31 Jan 2023 22:52:11 +0000
Received: from DM6NAM11FT046.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:54:cafe::5e) by DS7PR06CA0047.outlook.office365.com
 (2603:10b6:8:54::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:11 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT046.mail.protection.outlook.com (10.13.172.121) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.28 via Frontend Transport; Tue, 31 Jan 2023 22:52:11 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:10 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 14:52:10 -0800
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:09 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e76a9f97-a1b9-11ed-b63b-5f92e7d2e73a
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>
Subject: [QEMU][PATCH v5 02/10] hw/i386/xen: rearrange xen_hvm_init_pc
Date: Tue, 31 Jan 2023 14:51:41 -0800
Message-ID: <20230131225149.14764-3-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT046:EE_|DM4PR12MB6421:EE_
X-MS-Office365-Filtering-Correlation-Id: 1b25cff7-d654-4181-b0c2-08db03ddc9db
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:11.1549
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1b25cff7-d654-4181-b0c2-08db03ddc9db
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT046.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6421

In preparation for moving most of the xen-hvm code to an arch-neutral location,
move the non-IOREQ references to:
- xen_get_vmport_regs_pfn
- xen_suspend_notifier
- xen_wakeup_notifier
- xen_ram_init

towards the end of the xen_hvm_init_pc() function.

This keeps the common IOREQ functions in one place; they will be moved to a new
function in the next patch in order to make them common to both x86 and aarch64
machines.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/i386/xen/xen-hvm.c | 49 ++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index b9a6f7f538..1fba0e0ae1 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -1416,12 +1416,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
-
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
-
     /*
      * Register wake-up support in QMP query-current-machine API
      */
@@ -1432,23 +1426,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
         goto err;
     }
 
-    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
-    if (!rc) {
-        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
-            xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
-                                 1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
-            error_report("map shared vmport IO page returned error %d handle=%p",
-                         errno, xen_xc);
-            goto err;
-        }
-    } else if (rc != -ENOSYS) {
-        error_report("get vmport regs pfn returned error %d, rc=%d",
-                     errno, rc);
-        goto err;
-    }
-
     /* Note: cpus is empty at this point in init */
     state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
 
@@ -1486,7 +1463,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 #else
     xen_map_cache_init(NULL, state);
 #endif
-    xen_ram_init(pcms, ms->ram_size, ram_memory);
 
     qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
 
@@ -1513,6 +1489,31 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
+    state->suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&state->suspend);
+
+    state->wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&state->wakeup);
+
+    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
+    if (!rc) {
+        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
+        state->shared_vmport_page =
+            xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
+                                 1, &ioreq_pfn, NULL);
+        if (state->shared_vmport_page == NULL) {
+            error_report("map shared vmport IO page returned error %d handle=%p",
+                         errno, xen_xc);
+            goto err;
+        }
+    } else if (rc != -ENOSYS) {
+        error_report("get vmport regs pfn returned error %d, rc=%d",
+                     errno, rc);
+        goto err;
+    }
+
+    xen_ram_init(pcms, ms->ram_size, ram_memory);
+
     /* Disable ACPI build because Xen handles it */
     pcms->acpi_build_enabled = false;
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487900.755710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUE-0000lV-Ih; Tue, 31 Jan 2023 22:52:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487900.755710; Tue, 31 Jan 2023 22:52:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUE-0000lO-F1; Tue, 31 Jan 2023 22:52:22 +0000
Received: by outflank-mailman (input) for mailman id 487900;
 Tue, 31 Jan 2023 22:52:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzUC-0008Hj-La
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:20 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20606.outbound.protection.outlook.com
 [2a01:111:f400:7e89::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e9cbc5e5-a1b9-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 23:52:20 +0100 (CET)
Received: from MW4P222CA0002.NAMP222.PROD.OUTLOOK.COM (2603:10b6:303:114::7)
 by PH7PR12MB6857.namprd12.prod.outlook.com (2603:10b6:510:1af::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 22:52:15 +0000
Received: from CO1NAM11FT066.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:114:cafe::9a) by MW4P222CA0002.outlook.office365.com
 (2603:10b6:303:114::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:15 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT066.mail.protection.outlook.com (10.13.175.18) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Tue, 31 Jan 2023 22:52:13 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:12 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 14:52:12 -0800
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:11 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9cbc5e5-a1b9-11ed-933c-83870f6b2ba8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f68m9saXZEpw4fzGqfUGzhbFO+BjPRbUvRlW/N3EntBOdIqblOy4XCLeC50B6hqPHgeaBJSNYEoSSi7XxLlS0YcN0Ki9+sjbSt7kN9ILZRZ6k5UJo6ZcWJSlYWnJ9D1JiWno0Q1YroKZuJ/4Qz6K18037ijX1RkYbZDdL6np18Frqt6yAStUlHdI0pFrfhMKSoaBA7E6wuDYJYSteQvgkaPuU2uXPNaFRrl9qVCnyAh81MsrOfR0VhdALtA8PxlGdmv+TCfBMXW4ttRiNF9psGD/+jA8e+bTYuDLkXtvmgqJ/tw351epf2chqBpAIlxCTcvhaz/hvrEc//rAMnbxhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/pArBdHE6pupZI0iEmfLZok1fiWO0kN2Oo56dAzFs5Q=;
 b=KlkQUO3+U2dkJfZM0txeKTpAyvQyfZSDypXOKu6kwm9gh7O8I01QH7paDD58z7pcyueKs/18kMSrm71WZ8YH/Fvx60/Me/IjxqWXx2laQ8A0ZlcNKZsktoyinrASFGoyC6IV3+l5RUbxWRD4htgmuJW9DEB8kSBwoprg2YvrKzeRGBRWVa4r/AgYPtOR/4wMYVmMUYfs7hSvbCFAeh+tjhnMa6FHUZ4yQDp8Qh5jPQpxgf0G849H+l+xrGDqRtxKC1Ev7zrgm3XGfKy78Ywky76B+GJt80U6f4SGOlRgfyR1renc99WlxRVGfNpc+Sgg0vSyieywgcNYltJt9QfWHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/pArBdHE6pupZI0iEmfLZok1fiWO0kN2Oo56dAzFs5Q=;
 b=e6fijxWOMnF5eYEoC2iysRvmzZr0MHgH9Oi0rdouPtFDOpzfRHOKWvXN2W46HK6fcXvD8xeu4a1yWNm1I5i8m6ORyLKPHqZ/nBRVO2HzzAhJRNDVGtNjaEh94RMDnPooQuXwceeriZp0GG+ueKc5vIl+5P/NpxDHuYwcFV38DUY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>
Subject: [QEMU][PATCH v5 03/10] hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
Date: Tue, 31 Jan 2023 14:51:42 -0800
Message-ID: <20230131225149.14764-4-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT066:EE_|PH7PR12MB6857:EE_
X-MS-Office365-Filtering-Correlation-Id: c5a632be-6f78-46c0-07e8-08db03ddcb70
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+8XOw0n08GypojEth6Ap7WO6i77NEtgLru79/4jqcpj26NvFhBRHhWjDP8L+G7XTKM4l+MjFBFRCDyzIj3w4GEfh4pVOTfyidHLM7aWWM1QM4S4d4ySaO5vcQN3SKVRzwO34RaqXwT9/+PgFnAewJe2fP4WsOcFE2RU7Qx1mQkeOv8IxjILoA6Ucu7VqGxDUB7i9TiC3Xms6LPAqEo29OUKR/4NgxJpZmnmImZZofdk1KdWHXRldpp//PRShd8zFcF2EcwzRRqvIcFaoj62G+VvFp+B4AoM4f80f8uk+oNll6fZsB9bEuOsesUmRCdzGJJfO9chRmaRnzeXGg3HxfAY/6P4TTgbf5C154Et6RdmRfmYgAyrkYqJoUrwEkyegj97fv4CUej/ljSFZ/dgfeyi3dPY4ghAjAbepDXjZ+4fR4Qt+TXpacoJ0gkENbKIiI/oUPh+7ZS6nA7kmGzBVtdAEonuATu+DlzVKULPX07EaXTRm4TTFOvluC6lWXnN5GeBCnX0D0jNyCpZNxpe7NpwkDdMj91vrVYQ8yHgSazct5sxNOtYsAXVaVxCb3Ka8kplUiYL4dUdU2hsDW6vXMIoySvfX6QAYjlfAceWak5Fzzaz5us8mG2LY9MO2Ui/quem11AP/ib8AzD6ikmw20JNwNLz/3L1z118K+p42bKcvAw2yfuCL3foJafolPxaJ3b9Cqu2eh2t8WD7HLM5O6NO4pH4dy7gEuc5PPfvbGx8=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(346002)(376002)(136003)(396003)(39860400002)(451199018)(36840700001)(46966006)(40470700004)(44832011)(2906002)(36756003)(82310400005)(66574015)(426003)(2616005)(54906003)(83380400001)(336012)(47076005)(6666004)(316002)(26005)(186003)(478600001)(1076003)(40460700003)(70586007)(4326008)(70206006)(8676002)(6916009)(356005)(40480700001)(5660300002)(8936002)(7416002)(86362001)(41300700001)(81166007)(36860700001)(82740400003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:13.7267
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c5a632be-6f78-46c0-07e8-08db03ddcb70
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT066.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6857

From: Stefano Stabellini <stefano.stabellini@amd.com>

In preparation for moving most of the xen-hvm code to an arch-neutral location, move:
- shared_vmport_page
- log_for_dirtybit
- dirty_bitmap
- suspend
- wakeup

out of the XenIOState struct, as these are only used on x86, especially the ones
related to dirty logging.
The updated XenIOState can then be used on both aarch64 and x86.

Also, remove free_phys_offset as it was unused.
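
The refactor pattern above can be sketched in isolation: per-instance struct
members that only one architecture uses become file-scope statics in the
arch-specific file, and the struct keeps only arch-neutral members. This is a
minimal illustrative sketch with simplified stand-in types, not the actual
QEMU code; the names mirror the patch but everything else is hypothetical.

```c
#include <stdlib.h>

/* Arch-neutral state: no x86-only members remain in the shared struct. */
typedef struct XenIOState {
    int ioservid;
    void *shared_page;
} XenIOState;

/* x86-only state, now private to this (hypothetical) arch-specific file. */
static unsigned long *dirty_bitmap;
static const void *log_for_dirtybit;

/* Loosely mirrors xen_sync_dirty_bitmap(): allocate the bitmap lazily and
 * track only one range (keyed here by a placeholder pointer). */
static void start_dirty_tracking(size_t bitmap_size)
{
    if (log_for_dirtybit == NULL) {
        dirty_bitmap = calloc(bitmap_size, sizeof(unsigned long));
        log_for_dirtybit = &dirty_bitmap; /* placeholder "physmap" key */
    }
}

/* Loosely mirrors xen_log_stop(): drop the bitmap when tracking stops. */
static void stop_dirty_tracking(void)
{
    log_for_dirtybit = NULL;
    free(dirty_bitmap);
    dirty_bitmap = NULL;
}
```

Because the statics live in one translation unit, callers such as the memory
listener no longer need a XenIOState pointer just to reach the dirty-logging
state, which is what lets xen_log_stop() in the patch drop its container_of().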

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/i386/xen/xen-hvm.c | 58 ++++++++++++++++++++-----------------------
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 1fba0e0ae1..06c446e7be 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -73,6 +73,7 @@ struct shared_vmport_iopage {
 };
 typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
+static shared_vmport_iopage_t *shared_vmport_page;
 
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
 {
@@ -95,6 +96,11 @@ typedef struct XenPhysmap {
 } XenPhysmap;
 
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
+static const XenPhysmap *log_for_dirtybit;
+/* Buffer used by xen_sync_dirty_bitmap */
+static unsigned long *dirty_bitmap;
+static Notifier suspend;
+static Notifier wakeup;
 
 typedef struct XenPciDevice {
     PCIDevice *pci_dev;
@@ -105,7 +111,6 @@ typedef struct XenPciDevice {
 typedef struct XenIOState {
     ioservid_t ioservid;
     shared_iopage_t *shared_page;
-    shared_vmport_iopage_t *shared_vmport_page;
     buffered_iopage_t *buffered_io_page;
     xenforeignmemory_resource_handle *fres;
     QEMUTimer *buffered_io_timer;
@@ -125,14 +130,8 @@ typedef struct XenIOState {
     MemoryListener io_listener;
     QLIST_HEAD(, XenPciDevice) dev_list;
     DeviceListener device_listener;
-    hwaddr free_phys_offset;
-    const XenPhysmap *log_for_dirtybit;
-    /* Buffer used by xen_sync_dirty_bitmap */
-    unsigned long *dirty_bitmap;
 
     Notifier exit;
-    Notifier suspend;
-    Notifier wakeup;
 } XenIOState;
 
 /* Xen specific function for piix pci */
@@ -462,10 +461,10 @@ static int xen_remove_from_physmap(XenIOState *state,
     }
 
     QLIST_REMOVE(physmap, list);
-    if (state->log_for_dirtybit == physmap) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+    if (log_for_dirtybit == physmap) {
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
     }
     g_free(physmap);
 
@@ -626,16 +625,16 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
         return;
     }
 
-    if (state->log_for_dirtybit == NULL) {
-        state->log_for_dirtybit = physmap;
-        state->dirty_bitmap = g_new(unsigned long, bitmap_size);
-    } else if (state->log_for_dirtybit != physmap) {
+    if (log_for_dirtybit == NULL) {
+        log_for_dirtybit = physmap;
+        dirty_bitmap = g_new(unsigned long, bitmap_size);
+    } else if (log_for_dirtybit != physmap) {
         /* Only one range for dirty bitmap can be tracked. */
         return;
     }
 
     rc = xen_track_dirty_vram(xen_domid, start_addr >> TARGET_PAGE_BITS,
-                              npages, state->dirty_bitmap);
+                              npages, dirty_bitmap);
     if (rc < 0) {
 #ifndef ENODATA
 #define ENODATA  ENOENT
@@ -650,7 +649,7 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
     }
 
     for (i = 0; i < bitmap_size; i++) {
-        unsigned long map = state->dirty_bitmap[i];
+        unsigned long map = dirty_bitmap[i];
         while (map != 0) {
             j = ctzl(map);
             map &= ~(1ul << j);
@@ -676,12 +675,10 @@ static void xen_log_start(MemoryListener *listener,
 static void xen_log_stop(MemoryListener *listener, MemoryRegionSection *section,
                          int old, int new)
 {
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-
     if (old & ~new & (1 << DIRTY_MEMORY_VGA)) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
         /* Disable dirty bit tracking */
         xen_track_dirty_vram(xen_domid, 0, 0, NULL);
     }
@@ -1021,9 +1018,9 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
 {
     vmware_regs_t *vmport_regs;
 
-    assert(state->shared_vmport_page);
+    assert(shared_vmport_page);
     vmport_regs =
-        &state->shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
+        &shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
     QEMU_BUILD_BUG_ON(sizeof(*req) < sizeof(*vmport_regs));
 
     current_cpu = state->cpu_by_vcpu_id[state->send_vcpu];
@@ -1468,7 +1465,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 
     state->memory_listener = xen_memory_listener;
     memory_listener_register(&state->memory_listener, &address_space_memory);
-    state->log_for_dirtybit = NULL;
 
     state->io_listener = xen_io_listener;
     memory_listener_register(&state->io_listener, &address_space_io);
@@ -1489,19 +1485,19 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
+    suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&suspend);
 
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
+    wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&wakeup);
 
     rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
     if (!rc) {
         DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
+        shared_vmport_page =
             xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
                                  1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
+        if (shared_vmport_page == NULL) {
             error_report("map shared vmport IO page returned error %d handle=%p",
                          errno, xen_xc);
             goto err;
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487901.755722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUG-000158-R4; Tue, 31 Jan 2023 22:52:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487901.755722; Tue, 31 Jan 2023 22:52:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUG-000150-NC; Tue, 31 Jan 2023 22:52:24 +0000
Received: by outflank-mailman (input) for mailman id 487901;
 Tue, 31 Jan 2023 22:52:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzUF-0008Hj-1K
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:23 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20631.outbound.protection.outlook.com
 [2a01:111:f400:7e88::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eba51c86-a1b9-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 23:52:22 +0100 (CET)
Received: from DM6PR03CA0043.namprd03.prod.outlook.com (2603:10b6:5:100::20)
 by CH2PR12MB4924.namprd12.prod.outlook.com (2603:10b6:610:6b::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 22:52:18 +0000
Received: from DM6NAM11FT081.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:100:cafe::91) by DM6PR03CA0043.outlook.office365.com
 (2603:10b6:5:100::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:18 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT081.mail.protection.outlook.com (10.13.172.136) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.22 via Frontend Transport; Tue, 31 Jan 2023 22:52:17 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:17 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:17 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:16 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eba51c86-a1b9-11ed-933c-83870f6b2ba8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cX79zuo1zxiVfTrZpwDba+Frk8MttNAnD09x4mWSx1PcnTbctu8cdkvNFXLNZ0n0K9a94klLX31GKCPgrJC/BLue+Fy62SxKaBdNbrK7jWpMEE2DqgrO3nzBJ10x/n81PYCVqH+C45+XiySoEBKLzFs0jgLNcb3GO++zZtiSYE4oDgH6ZCCkxbgUzj3FKCV/X9eV2YoJ+JSvli/IfMvLMnUe6NBS/IAC1AqZDNbyIz4C1lTophh68doSqSX5izLHrFazZL7lIXNYBXNAeYulX2VfqXwhBCjp+hkJH2UeCOe7Ys3+f2//WUwpYsA71nNItyJ9YWBhJ/rH2YH0ivZaiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ML6cs7eQSjHyFoTbA1vwZOtdpcA9OgOwmiba5kQ3iTM=;
 b=eMC9FtGmgTvg6Q/jI/19HDsEIERtbYfXBlfabidZ1mqusIttzUbymxsR0u2CgDGv96Sne07w8wyTgSTaAlFt4wjwyqFuqlYKXQPsWszixYRiUYDQZuVBLqEo79Osht3kXuU2bkF1s1tIHaXXrDuOtCL5tS08VJpAYHdurSmJRkaFX30T0qWjhVzsBwn/Zln9KykTjAb1AkKsGR0nBobfbEO9ORJ4JM/vyV39gd5PYnpev1hsoFSi5s+xRwt5wgeWsgy4dJz1l48YYffjc71jK/kXHqV9esbUtchrBtavt2FjUcPNAaXEvP6ru2tYbuAquvx2kRdXHVTkOkrjideyNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ML6cs7eQSjHyFoTbA1vwZOtdpcA9OgOwmiba5kQ3iTM=;
 b=WXfhAZE0/zoFH2j+/xCGXxNanlwaMMclZheYLmdoYfJqAywNNOFzVRWKxiCjoOiKSjaFeC7h0IvGFHpCR+wFwLcfrom7HAewtJkjZXEOkqHg7AbxdsCjCOw2hdTnywIhXKrgQenzfcQRc/3NRo5KotjAno8butAaz2AT5eywGCs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v5 06/10] hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration failure
Date: Tue, 31 Jan 2023 14:51:45 -0800
Message-ID: <20230131225149.14764-7-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT081:EE_|CH2PR12MB4924:EE_
X-MS-Office365-Filtering-Correlation-Id: 72553e62-25a8-4d7a-4d3a-08db03ddcdf2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aDeS878/CJIXyQhsw3x3qt3R67/eDkg6cAHgF6/Cm/0mGhCkhQ45wZbamRZ2K0w82Xc3U/DTn67cSByi2xY6t9w1WzyqaycFXjQJYS3trgpO2l1aoWK2/cGnOSshD3drsWunKDGYHbQEFX1RpV9E1M1T6lxE9xCBd7NB4lv6Pj+24CX4xCU1M3llKbJ68sqVjZlzGbYUmk2A3T1ZZ+ux8pB7zXbuD5lNqjbAmqIddW2MZ9s6Qeo56nQQ8j+svpGDjww176CBmRZ897LlzDuKTeVlp5vyyGaCNxTl1NAbVggtk/fbVRg+0kZvpAvqxXbFn2gK+yht8bhtdClrNXhSdNUSk/1iqRtAH3w/lAV7JcxdK8J0Akku7xAs8iNBBWQ/pavaG023DljnXarlWECrP4+8cRkiZmWCRa5MPfA1YtG73xYU2FykLZTsx1r+009AVKzJ9BKVx7SLQoXqtU6VBzxtlFKc/WnSkX9CS0hvEjwMSOi/+Z1XHy9hA/oNzFwzYskn+Gbvzl+XCaKHyqpvxfukKb2aE4tDPMrDWb2ho3kTNpGFP7/nK8tY5LepkS/JppVqSIOXhFDnNAOs9e8N/509wqkP1SUXH6VBtuZpMZ5R1CKwCP62pMVQr0uUmagfXLKdEelwLkV6Rsts0HjPRTR71U76DLOHpgpygUvrfuhYfAWaEcySsR5BQJM2DepCtVwN7qp/dN8yibfbYTpkXDXrUt3+BN1Oy6gIVTUSsEw=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(346002)(396003)(376002)(136003)(451199018)(36840700001)(40470700004)(46966006)(6666004)(41300700001)(36756003)(6916009)(70206006)(54906003)(316002)(8676002)(82310400005)(4326008)(8936002)(5660300002)(70586007)(86362001)(356005)(81166007)(82740400003)(36860700001)(186003)(1076003)(26005)(336012)(426003)(40480700001)(2906002)(40460700003)(44832011)(478600001)(47076005)(83380400001)(2616005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:17.9984
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 72553e62-25a8-4d7a-4d3a-08db03ddcdf2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT081.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4924

From: Stefano Stabellini <stefano.stabellini@amd.com>

On ARM it is possible to have a functioning xenpv machine with only the
PV backends and no IOREQ server. If IOREQ server creation fails, continue
to the PV backend initialization instead of exiting.

Also, move the IOREQ registration and mapping subroutine into a new function,
xen_do_ioreq_register().
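
The resulting control flow can be sketched as follows. This is a hedged,
self-contained illustration with stub functions standing in for the real
xenevtchn/xenstore/ioreq calls (the `*_stub` names are invented for the
sketch): IOREQ setup becomes optional, while backend registration always runs.

```c
#include <stdio.h>

static int ioreq_registered;
static int backends_initialized;

/* Stand-in for xen_create_ioreq_server(); nonzero rc simulates failure,
 * e.g. an ARM xenpv machine with no IOREQ server available. */
static int create_ioreq_server_stub(int simulate_failure)
{
    return simulate_failure ? -1 : 0;
}

static void do_ioreq_register_stub(void) { ioreq_registered = 1; }
static void register_backends_stub(void) { backends_initialized = 1; }

/* Mirrors the reworked xen_register_ioreq(): on IOREQ server creation
 * failure, warn and skip registration, but still bring up PV backends. */
static void register_ioreq(int simulate_failure)
{
    ioreq_registered = 0;
    backends_initialized = 0;

    if (create_ioreq_server_stub(simulate_failure) == 0) {
        do_ioreq_register_stub();
    } else {
        fprintf(stderr, "xen: failed to create ioreq server\n");
    }

    register_backends_stub();
}
```

The key property is that the error path of IOREQ creation no longer calls
exit(); only failures in the event-channel or xenstore setup remain fatal.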

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 hw/xen/xen-hvm-common.c | 53 ++++++++++++++++++++++++++++-------------
 1 file changed, 36 insertions(+), 17 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index c2e1e08124..5e3c7b073f 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -781,25 +781,12 @@ err:
     exit(1);
 }
 
-void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
-                        MemoryListener xen_memory_listener)
+static void xen_do_ioreq_register(XenIOState *state,
+                                           unsigned int max_cpus,
+                                           MemoryListener xen_memory_listener)
 {
     int i, rc;
 
-    state->xce_handle = xenevtchn_open(NULL, 0);
-    if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
-        goto err;
-    }
-
-    state->xenstore = xs_daemon_open();
-    if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
-        goto err;
-    }
-
-    xen_create_ioreq_server(xen_domid, &state->ioservid);
-
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
 
@@ -863,12 +850,44 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
     QLIST_INIT(&state->dev_list);
     device_listener_register(&state->device_listener);
 
+    return;
+
+err:
+    error_report("xen hardware virtual machine initialisation failed");
+    exit(1);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener)
+{
+    int rc;
+
+    state->xce_handle = xenevtchn_open(NULL, 0);
+    if (state->xce_handle == NULL) {
+        perror("xen: event channel open");
+        goto err;
+    }
+
+    state->xenstore = xs_daemon_open();
+    if (state->xenstore == NULL) {
+        perror("xen: xenstore open");
+        goto err;
+    }
+
+    rc = xen_create_ioreq_server(xen_domid, &state->ioservid);
+    if (!rc) {
+        xen_do_ioreq_register(state, max_cpus, xen_memory_listener);
+    } else {
+        warn_report("xen: failed to create ioreq server");
+    }
+
     xen_bus_init();
 
     xen_register_backend(state);
 
     return;
+
 err:
-    error_report("xen hardware virtual machine initialisation failed");
+    error_report("xen hardware virtual machine backend registration failed");
     exit(1);
 }
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487902.755733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUI-0001P9-7I; Tue, 31 Jan 2023 22:52:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487902.755733; Tue, 31 Jan 2023 22:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUI-0001Ox-3s; Tue, 31 Jan 2023 22:52:26 +0000
Received: by outflank-mailman (input) for mailman id 487902;
 Tue, 31 Jan 2023 22:52:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzUG-0000Ma-F5
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:24 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ea5a87be-a1b9-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 23:52:20 +0100 (CET)
Received: from MW4P222CA0029.NAMP222.PROD.OUTLOOK.COM (2603:10b6:303:114::34)
 by PH8PR12MB6913.namprd12.prod.outlook.com (2603:10b6:510:1ca::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 22:52:17 +0000
Received: from CO1NAM11FT066.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:114:cafe::6d) by MW4P222CA0029.outlook.office365.com
 (2603:10b6:303:114::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:17 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT066.mail.protection.outlook.com (10.13.175.18) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.17 via Frontend Transport; Tue, 31 Jan 2023 22:52:16 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:15 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:15 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:15 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea5a87be-a1b9-11ed-b63b-5f92e7d2e73a
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T93iwMoZgy9Jf7xdUTsUCVfYa/D6Jpp0RXFRIlQSwkCyfc7s9hL1LJQgcoR4/I0ER+9zBSr8ZmZC6cC75vECbi47vsdPwEXGiEE7WJu4O/+z25AKLzGdttzJgwhgHtxTGviy2M12pyuLE+B43l8qjBOcBmKBeBW3VRAKRFa8d1kQZCoSBJWXtxb8S8m/wwrhu/QBFG3lW6xGQ9bNnngBgrtE4BXYL/gJFlQPi0Gv/pxY0QyRAQJ+KQwZjWNeJIU9e7QYDyI47u/+LRdOglQ7tS4FNDPS0Z9HzaP8G06LYjkVxkoHeOfhvEQOSfV1cQbsdcJMmta41NabIAEfBhiIzw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=G835PztH0JFR0+pwJr/tzQC6Id3/bG3j6rumsmwk9Us=;
 b=j02vR+MZE15i1yfU/QTGm/OBjwSG1Ici7XSnR9VoefRgytLEMyUiIdMHb4RAJrOCAHhkonIsFQOK+oI15UifbqQLyJ1Ir5P/FmS+8zIIvmKeLcFirUrjFT2ioy4zvFAVyqzjYIG1JxYnQcyCYo7j0f0tzuiJ7pNh78zeDvvZdBnZYlCbKe3Dwr2lvMTjfbzXV2F+DovtM8oFxmePnv3BpIaxAIbGg73QRT8ZT0ekxEBVT6f9ZomoYVNyve4cbII7kJ5O7Ncv7TGKlP0gZ0wLN0Fn/DduOPZsVkgDAgVLHbro/HG2F9ChG5miuTtS2HV/KmhXSb0eKrBHI1mvHM1crA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G835PztH0JFR0+pwJr/tzQC6Id3/bG3j6rumsmwk9Us=;
 b=vvhB+nQylkuBK5EK0LeDpUQH+WM/0vZ8QuPvhwY6uC+8S3P+PEIIBPdxMpX8qBsw3IRouMD3kIK09w5+v7yKCRemeRj3duCc6WN3sMLKyW0FSh5o7doe7WJsYqU8Jm0Z02qLyOR3JXiiOiJohnsgbNjMDogZjhmHB313kZjK2lU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v5 05/10] include/hw/xen/xen_common: return error from xen_create_ioreq_server
Date: Tue, 31 Jan 2023 14:51:44 -0800
Message-ID: <20230131225149.14764-6-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT066:EE_|PH8PR12MB6913:EE_
X-MS-Office365-Filtering-Correlation-Id: 83650632-186f-4e4d-bceb-08db03ddccf4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	z/DzTBG2KJu2lQZgNJo1jzl6BYALwZkz81aWsaHCo0wnAHFAHW5T4x4hO+W5StoejY7jvmfZcPHP+GJ0Ez2mNToJIMABJz4sPcD5/f7pFMXoArorGVeAzQZXVP4YdPihFx6qAIG0WQ4b2+98uCQadhHNcHol3+PTe5YcBkYdjEWpNGxfUed2ShISyVFvNDFQ9zY0iHwGOPo+NZ1YtBv+FtmFW48VSwI8rAiG4jOP8X+JTBC72GENzEAHGgZ8JQJHWPr1dhln6sPELrpl8hTp+3bDPswgtjdKznUVKwpr1pn0up/wJMU/vHObjHkSVqm/58Qoj1//FY2n/F4mqk8OF6ATqTiJhNRj2fPtKdmjzQaUebe+90ekXOxtss3mYG6IwbVZHKMrOYaR9VrcX/vCqG37XmteKI+NeorKot7izauKgottbL3DrODI8GsXptBxwnsAbqIfy/RQ73aUa+TncU8MEn0P3DC2YrilbYo82kANGnXGSJ7ATw09GTGAliKKVmNvkn8Lv1oVk46FViDBiXwcmW8hXs7lB/pblj5Npg7Af2h1jCgByOboxJhFE5T9EYJL1i0SdrR0pEpzlc0QXowv/FCUyDJgaUk/oNeScye89DvlO7WoWV+viayCUeXzJQqY//xnKiOnd2qjiCzVnYARYTbHf5sPXfVxd0FvRbaToA+E10H9Npo8MB4MARJO2mDxXN+Y3pCXOKTDVVcNhn0Kt4ggfPhx3STWBd/8CPs=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(346002)(39860400002)(376002)(396003)(136003)(451199018)(36840700001)(40470700004)(46966006)(6666004)(1076003)(6916009)(186003)(26005)(478600001)(8676002)(4326008)(70586007)(70206006)(336012)(2616005)(47076005)(41300700001)(426003)(83380400001)(2906002)(40460700003)(82740400003)(8936002)(86362001)(81166007)(36860700001)(36756003)(82310400005)(40480700001)(356005)(44832011)(316002)(5660300002)(54906003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:16.2741
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 83650632-186f-4e4d-bceb-08db03ddccf4
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT066.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB6913

From: Stefano Stabellini <stefano.stabellini@amd.com>

This is done to prepare for enabling xenpv support on the ARM architecture.
On ARM it is possible to have a functioning xenpv machine with only the
PV backends and no IOREQ server. If IOREQ server creation fails, continue
on to the PV backend initialization.
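A minimal sketch (not part of the patch) of the caller pattern this change enables: with the new int-returning signature, initialization can observe the failure and fall back to PV backends only. The stub body and the `init_machine()` helper below are illustrative, not QEMU's actual code.

```c
#include <stdio.h>
#include <assert.h>

typedef unsigned short ioservid_t;
typedef unsigned short domid_t;

/* Illustrative stub matching the new signature: the no-IOREQ-server
 * build of the inline helper now returns 0 instead of void. */
static int xen_create_ioreq_server(domid_t dom, ioservid_t *ioservid)
{
    (void)dom;
    *ioservid = 0;
    return 0; /* a failing real implementation would return a negative rc */
}

/* Caller pattern: on failure, skip the IOREQ server and carry on with
 * PV backend initialization only (the ARM xenpv case). */
static int init_machine(domid_t dom)
{
    ioservid_t ioservid;
    int rc = xen_create_ioreq_server(dom, &ioservid);

    if (rc < 0) {
        printf("no IOREQ server, continuing with PV backends only\n");
    }
    /* ... PV backend initialization proceeds either way ... */
    return 0;
}
```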

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 include/hw/xen/xen_common.h | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 9a13a756ae..9ec69582b3 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -467,9 +467,10 @@ static inline void xen_unmap_pcidev(domid_t dom,
 {
 }
 
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
+static inline int xen_create_ioreq_server(domid_t dom,
+                                          ioservid_t *ioservid)
 {
+    return 0;
 }
 
 static inline void xen_destroy_ioreq_server(domid_t dom,
@@ -600,8 +601,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
                                                   PCI_FUNC(pci_dev->devfn));
 }
 
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
+static inline int xen_create_ioreq_server(domid_t dom,
+                                          ioservid_t *ioservid)
 {
     int rc = xendevicemodel_create_ioreq_server(xen_dmod, dom,
                                                 HVM_IOREQSRV_BUFIOREQ_ATOMIC,
@@ -609,12 +610,14 @@ static inline void xen_create_ioreq_server(domid_t dom,
 
     if (rc == 0) {
         trace_xen_ioreq_server_create(*ioservid);
-        return;
+        return rc;
     }
 
     *ioservid = 0;
     use_default_ioreq_server = true;
     trace_xen_default_ioreq_server();
+
+    return rc;
 }
 
 static inline void xen_destroy_ioreq_server(domid_t dom,
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487903.755739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUI-0001Sk-NN; Tue, 31 Jan 2023 22:52:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487903.755739; Tue, 31 Jan 2023 22:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUI-0001S0-Dq; Tue, 31 Jan 2023 22:52:26 +0000
Received: by outflank-mailman (input) for mailman id 487903;
 Tue, 31 Jan 2023 22:52:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzUG-0008Hj-4t
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:25 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20630.outbound.protection.outlook.com
 [2a01:111:f400:7e88::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eaf9156e-a1b9-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 23:52:21 +0100 (CET)
Received: from DM6PR14CA0049.namprd14.prod.outlook.com (2603:10b6:5:18f::26)
 by PH8PR12MB6938.namprd12.prod.outlook.com (2603:10b6:510:1bd::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 22:52:16 +0000
Received: from DM6NAM11FT014.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:18f:cafe::ef) by DM6PR14CA0049.outlook.office365.com
 (2603:10b6:5:18f::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:16 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT014.mail.protection.outlook.com (10.13.173.132) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.22 via Frontend Transport; Tue, 31 Jan 2023 22:52:16 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:14 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:14 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:13 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eaf9156e-a1b9-11ed-933c-83870f6b2ba8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Fo4fbUp3BJa4p2Lm7JxvLmnLtZESIB/qUK6mBMMc8B4uVuyf3/TnOY/pCsrUWN/6q4LUjyWIZEUowynkO+xELgefn0UISj2I96NNsZyckuVlQB7nB2hDswGAKwty+pKoxZqmIXeuAoOaX2KOJbNNNLKDpNYSSRn7+GZBS43uMQp91zkO/LUMHnaeAkKa+Ch4HIgfFVEm5J1jnd7WBPsbJl+5/NnKxHtKHcp7svKSFPf2goEsipJjyu/5b7bzmjlHOmancD2TWokmIIbbu1h3eUZrtjQV48rfP+/D+E0jInyVvEZ5aBE3QlTdUJumHnSjB6cD+pp71gAK2O3I+NXnJw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nvF+mIbARuNRNwR1thWbjlm5A1nLdLjA/fz9phfNHnQ=;
 b=nAayO7AJqzwOn+ch4DEhWyb0tDFN5Xz6zyK8XXiMdKf3izwaX+UBNkiwkht8uzH8anNR1mUx+jUTrJhhNbBmY5YLuw03yZd/A66GhEyI1QlYVsuI5c2SgJS58JFigbCBMlMVwUPBVEy4xjTfrj3B+lqYY8HVUXFdwPqAzd7KHgyQq9Dzj1emVd2deS6e8KrfjpowttfEd2VrlY9KqD6E3n5n3aUyi8FVyhmfrPRcjPIOB8g7X9XFIfwEzn5Lu2hNO8WlFloboj7RO11oHwbZa6ZxC2+J/Vkc+FVModiEd5aWv3+6vQO7Ex6Ij/cA2V0tNUqOLrSd91WpdcLRDubTYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nvF+mIbARuNRNwR1thWbjlm5A1nLdLjA/fz9phfNHnQ=;
 b=Ecxk7bXQCUahjbaGGrfUgphaeXMS4kv9QbDn1hT2zI2JTugEYJDdsiZO2lWwVW5CSF+/Ov8kzb5l5wRNvZgS0RYmjzyipx/DF3FYkjRIYHGpPoJwCTBYn2KQPzxnhDOpkCGGtTZihaTsukYJuN4EGr40kbzdcV/Nuu0BuE9C7cY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>
Subject: [QEMU][PATCH v5 04/10] xen-hvm: reorganize xen-hvm and move common function to xen-hvm-common
Date: Tue, 31 Jan 2023 14:51:43 -0800
Message-ID: <20230131225149.14764-5-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT014:EE_|PH8PR12MB6938:EE_
X-MS-Office365-Filtering-Correlation-Id: 59f71f9b-a3e9-4e12-bde6-08db03ddccfb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/hmoigrFi/2wDo8rqWEd6r8kSnWoTFCQbYsmNaZO9TH0js6ijP7Xn4vkRB+J6GX4uCvNANqlrKXcc74i2zpMcOE5kzfDNY0UOXlReaEwTyNLZekmVSEoLwDYtDvyrsuvku0iWm1I0up+mnGjYuNVHDhBGkzUtqgHFvjlwBxAsH+swWo6A7yzbSTgxMIFpKZeDABMEK8bb8qw1k77JMoY0t+sWYOa7/o5u1mAb0Fjmt/Fkjim4UhoMVu1QWLC8x4L2DqKKOJegHx2dLLAmpAL5vEyqTF695pK8iHpi/WupLBK+OG7UwNe1/ephV9qZYs6K+SHs20p+kNS/Vi945nLYcaWqucHtDMGlU1+aJlj3xLwxZsqZ8An6duVj0Won1FtCo3l1c5zkT6n5n4qFbDZ7WBxdjlU8ZQZlTQoG3u0lI0RnxeQb3fVyBE9iQKQK6IL1y+7Gc82JAA5KHo41+TUwLfiVFkbKX2cCGst+Ir52P7ybg//Q1idqUVr6M/kBvbfBDi0Q6fEfJwcE/qXzUALMRmIsJ+fLtcbcWGHUEECClS9IBJFvapER44erQCouXKWCWAUOw22DB3yBngVKAXqPn3oJHSzu5AZYCCyEkeLk/x2BWI5cvnJg6iqpqGFTJKK+Lr8VPPZ3P7iZdpCORvAnuzvF3HgawTW/P6VtS7OEmBJqAT1lhd4hFcJLDUUPSwMQ3OYG+0rnkckWl97npLL4GkQSqPGEYf5EbJEf3yJbEI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(136003)(396003)(346002)(376002)(39860400002)(451199018)(40470700004)(36840700001)(46966006)(36756003)(40460700003)(86362001)(47076005)(426003)(336012)(83380400001)(82310400005)(316002)(82740400003)(54906003)(1076003)(6666004)(478600001)(26005)(2616005)(8936002)(5660300002)(186003)(40480700001)(356005)(30864003)(7416002)(81166007)(44832011)(4326008)(70206006)(70586007)(6916009)(36860700001)(8676002)(41300700001)(2906002)(36900700001)(559001)(579004);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:16.3825
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 59f71f9b-a3e9-4e12-bde6-08db03ddccfb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT014.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB6938

From: Stefano Stabellini <stefano.stabellini@amd.com>

This patch does the following:
1. Create arch_handle_ioreq() and arch_xen_set_memory(). This is done in
    preparation for moving most of the xen-hvm code to an arch-neutral
    location: move the x86-specific portion of xen_set_memory to
    arch_xen_set_memory, and move handle_vmport_ioreq to arch_handle_ioreq.

2. Pure code movement: move common functions to hw/xen/xen-hvm-common.c.
    Extract common functionality from hw/i386/xen/xen-hvm.c and move it to
    hw/xen/xen-hvm-common.c. These common functions are useful for creating
    an IOREQ server.

    xen_hvm_init_pc() contains the architecture-independent code for creating
    and mapping an IOREQ server, connecting memory and IO listeners,
    initializing a Xen bus and registering backends. Move this common Xen
    code to a new function, xen_register_ioreq(), which can be used by both
    x86 and ARM machines.

    Following functions are moved to hw/xen/xen-hvm-common.c:
        xen_vcpu_eport(), xen_vcpu_ioreq(), xen_ram_alloc(), xen_set_memory(),
        xen_region_add(), xen_region_del(), xen_io_add(), xen_io_del(),
        xen_device_realize(), xen_device_unrealize(),
        cpu_get_ioreq_from_shared_memory(), cpu_get_ioreq(), do_inp(),
        do_outp(), rw_phys_req_item(), read_phys_req_item(),
        write_phys_req_item(), cpu_ioreq_pio(), cpu_ioreq_move(),
        cpu_ioreq_config(), handle_ioreq(), handle_buffered_iopage(),
        handle_buffered_io(), cpu_handle_ioreq(), xen_main_loop_prepare(),
        xen_hvm_change_state_handler(), xen_exit_notifier(),
        xen_map_ioreq_server(), destroy_hvm_domain() and
        xen_shutdown_fatal_error()

3. Remove the static qualifier from the functions below:
    1. xen_region_add()
    2. xen_region_del()
    3. xen_io_add()
    4. xen_io_del()
    5. xen_device_realize()
    6. xen_device_unrealize()
    7. xen_hvm_change_state_handler()
    8. cpu_ioreq_pio()
    9. xen_exit_notifier()

4. Replace TARGET_PAGE_SIZE with XC_PAGE_SIZE to match the page size used by Xen.
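A hedged sketch (not the patch's exact code) of the dispatch pattern the arch split in item 1 enables: the common handle_ioreq() handles arch-neutral request types and defers anything else to a per-arch hook, so the x86-only cases (such as VMware-port requests) stay out of the shared file. The simplified `ioreq_t` and request-type constants below are illustrative; the real ones come from xen/hvm/ioreq.h.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative request types; the real values live in xen/hvm/ioreq.h. */
enum { IOREQ_TYPE_PIO, IOREQ_TYPE_COPY, IOREQ_TYPE_VMWARE_PORT };

typedef struct {
    int type;
    bool handled;
} ioreq_t;

/* Per-arch hook: the x86 version would handle VMware-port requests;
 * an ARM version could be a no-op. */
static void arch_handle_ioreq(ioreq_t *req)
{
    if (req->type == IOREQ_TYPE_VMWARE_PORT) {
        req->handled = true;
    }
}

/* Arch-neutral dispatcher, as moved into hw/xen/xen-hvm-common.c:
 * common cases are handled here, everything else is deferred. */
static void handle_ioreq(ioreq_t *req)
{
    switch (req->type) {
    case IOREQ_TYPE_PIO:
    case IOREQ_TYPE_COPY:
        req->handled = true;    /* common handling */
        break;
    default:
        arch_handle_ioreq(req); /* arch-specific cases live here */
        break;
    }
}
```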

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 hw/i386/xen/trace-events        |   14 -
 hw/i386/xen/xen-hvm.c           | 1019 ++-----------------------------
 hw/xen/meson.build              |    5 +-
 hw/xen/trace-events             |   14 +
 hw/xen/xen-hvm-common.c         |  874 ++++++++++++++++++++++++++
 include/hw/i386/xen_arch_hvm.h  |   11 +
 include/hw/xen/arch_hvm.h       |    3 +
 include/hw/xen/xen-hvm-common.h |   98 +++
 8 files changed, 1067 insertions(+), 971 deletions(-)
 create mode 100644 hw/xen/xen-hvm-common.c
 create mode 100644 include/hw/i386/xen_arch_hvm.h
 create mode 100644 include/hw/xen/arch_hvm.h
 create mode 100644 include/hw/xen/xen-hvm-common.h

diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index a0c89d91c4..5d0a8d6dcf 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -7,17 +7,3 @@ xen_platform_log(char *s) "xen platform: %s"
 xen_pv_mmio_read(uint64_t addr) "WARNING: read from Xen PV Device MMIO space (address 0x%"PRIx64")"
 xen_pv_mmio_write(uint64_t addr) "WARNING: write to Xen PV Device MMIO space (address 0x%"PRIx64")"
 
-# xen-hvm.c
-xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
-xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "0x%"PRIx64" size 0x%lx, log_dirty %i"
-handle_ioreq(void *req, uint32_t type, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p type=%d dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-handle_ioreq_read(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p read type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-handle_ioreq_write(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p write type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-cpu_ioreq_pio(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p pio dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-cpu_ioreq_pio_read_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio read reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
-cpu_ioreq_pio_write_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio write reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
-cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p copy dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
-cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 06c446e7be..4219308caf 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -10,43 +10,21 @@
 
 #include "qemu/osdep.h"
 #include "qemu/units.h"
+#include "qapi/error.h"
+#include "qapi/qapi-commands-migration.h"
+#include "trace.h"
 
-#include "cpu.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pci_host.h"
 #include "hw/i386/pc.h"
 #include "hw/irq.h"
-#include "hw/hw.h"
 #include "hw/i386/apic-msidef.h"
-#include "hw/xen/xen_common.h"
-#include "hw/xen/xen-legacy-backend.h"
-#include "hw/xen/xen-bus.h"
 #include "hw/xen/xen-x86.h"
-#include "qapi/error.h"
-#include "qapi/qapi-commands-migration.h"
-#include "qemu/error-report.h"
-#include "qemu/main-loop.h"
 #include "qemu/range.h"
-#include "sysemu/runstate.h"
-#include "sysemu/sysemu.h"
-#include "sysemu/xen.h"
-#include "sysemu/xen-mapcache.h"
-#include "trace.h"
 
-#include <xen/hvm/ioreq.h>
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/arch_hvm.h"
 #include <xen/hvm/e820.h>
 
-//#define DEBUG_XEN_HVM
-
-#ifdef DEBUG_XEN_HVM
-#define DPRINTF(fmt, ...) \
-    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
-#else
-#define DPRINTF(fmt, ...) \
-    do { } while (0)
-#endif
-
-static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi;
+static MemoryRegion ram_640k, ram_lo, ram_hi;
 static MemoryRegion *framebuffer;
 static bool xen_in_migration;
 
@@ -73,27 +51,8 @@ struct shared_vmport_iopage {
 };
 typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
-static shared_vmport_iopage_t *shared_vmport_page;
 
-static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
-{
-    return shared_page->vcpu_ioreq[i].vp_eport;
-}
-static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
-{
-    return &shared_page->vcpu_ioreq[vcpu];
-}
-
-#define BUFFER_IO_MAX_DELAY  100
-
-typedef struct XenPhysmap {
-    hwaddr start_addr;
-    ram_addr_t size;
-    const char *name;
-    hwaddr phys_offset;
-
-    QLIST_ENTRY(XenPhysmap) list;
-} XenPhysmap;
+static shared_vmport_iopage_t *shared_vmport_page;
 
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
 static const XenPhysmap *log_for_dirtybit;
@@ -102,38 +61,6 @@ static unsigned long *dirty_bitmap;
 static Notifier suspend;
 static Notifier wakeup;
 
-typedef struct XenPciDevice {
-    PCIDevice *pci_dev;
-    uint32_t sbdf;
-    QLIST_ENTRY(XenPciDevice) entry;
-} XenPciDevice;
-
-typedef struct XenIOState {
-    ioservid_t ioservid;
-    shared_iopage_t *shared_page;
-    buffered_iopage_t *buffered_io_page;
-    xenforeignmemory_resource_handle *fres;
-    QEMUTimer *buffered_io_timer;
-    CPUState **cpu_by_vcpu_id;
-    /* the evtchn port for polling the notification, */
-    evtchn_port_t *ioreq_local_port;
-    /* evtchn remote and local ports for buffered io */
-    evtchn_port_t bufioreq_remote_port;
-    evtchn_port_t bufioreq_local_port;
-    /* the evtchn fd for polling */
-    xenevtchn_handle *xce_handle;
-    /* which vcpu we are serving */
-    int send_vcpu;
-
-    struct xs_handle *xenstore;
-    MemoryListener memory_listener;
-    MemoryListener io_listener;
-    QLIST_HEAD(, XenPciDevice) dev_list;
-    DeviceListener device_listener;
-
-    Notifier exit;
-} XenIOState;
-
 /* Xen specific function for piix pci */
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
@@ -246,42 +173,6 @@ static void xen_ram_init(PCMachineState *pcms,
     }
 }
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
-                   Error **errp)
-{
-    unsigned long nr_pfn;
-    xen_pfn_t *pfn_list;
-    int i;
-
-    if (runstate_check(RUN_STATE_INMIGRATE)) {
-        /* RAM already populated in Xen */
-        fprintf(stderr, "%s: do not alloc "RAM_ADDR_FMT
-                " bytes of ram at "RAM_ADDR_FMT" when runstate is INMIGRATE\n",
-                __func__, size, ram_addr);
-        return;
-    }
-
-    if (mr == &ram_memory) {
-        return;
-    }
-
-    trace_xen_ram_alloc(ram_addr, size);
-
-    nr_pfn = size >> TARGET_PAGE_BITS;
-    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
-
-    for (i = 0; i < nr_pfn; i++) {
-        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
-    }
-
-    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
-        error_setg(errp, "xen: failed to populate ram at " RAM_ADDR_FMT,
-                   ram_addr);
-    }
-
-    g_free(pfn_list);
-}
-
 static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
 {
     XenPhysmap *physmap = NULL;
@@ -471,144 +362,6 @@ static int xen_remove_from_physmap(XenIOState *state,
     return 0;
 }
 
-static void xen_set_memory(struct MemoryListener *listener,
-                           MemoryRegionSection *section,
-                           bool add)
-{
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-    hwaddr start_addr = section->offset_within_address_space;
-    ram_addr_t size = int128_get64(section->size);
-    bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
-    hvmmem_type_t mem_type;
-
-    if (section->mr == &ram_memory) {
-        return;
-    } else {
-        if (add) {
-            xen_map_memory_section(xen_domid, state->ioservid,
-                                   section);
-        } else {
-            xen_unmap_memory_section(xen_domid, state->ioservid,
-                                     section);
-        }
-    }
-
-    if (!memory_region_is_ram(section->mr)) {
-        return;
-    }
-
-    if (log_dirty != add) {
-        return;
-    }
-
-    trace_xen_client_set_memory(start_addr, size, log_dirty);
-
-    start_addr &= TARGET_PAGE_MASK;
-    size = TARGET_PAGE_ALIGN(size);
-
-    if (add) {
-        if (!memory_region_is_rom(section->mr)) {
-            xen_add_to_physmap(state, start_addr, size,
-                               section->mr, section->offset_within_region);
-        } else {
-            mem_type = HVMMEM_ram_ro;
-            if (xen_set_mem_type(xen_domid, mem_type,
-                                 start_addr >> TARGET_PAGE_BITS,
-                                 size >> TARGET_PAGE_BITS)) {
-                DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
-                        start_addr);
-            }
-        }
-    } else {
-        if (xen_remove_from_physmap(state, start_addr, size) < 0) {
-            DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
-        }
-    }
-}
-
-static void xen_region_add(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    memory_region_ref(section->mr);
-    xen_set_memory(listener, section, true);
-}
-
-static void xen_region_del(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    xen_set_memory(listener, section, false);
-    memory_region_unref(section->mr);
-}
-
-static void xen_io_add(MemoryListener *listener,
-                       MemoryRegionSection *section)
-{
-    XenIOState *state = container_of(listener, XenIOState, io_listener);
-    MemoryRegion *mr = section->mr;
-
-    if (mr->ops == &unassigned_io_ops) {
-        return;
-    }
-
-    memory_region_ref(mr);
-
-    xen_map_io_section(xen_domid, state->ioservid, section);
-}
-
-static void xen_io_del(MemoryListener *listener,
-                       MemoryRegionSection *section)
-{
-    XenIOState *state = container_of(listener, XenIOState, io_listener);
-    MemoryRegion *mr = section->mr;
-
-    if (mr->ops == &unassigned_io_ops) {
-        return;
-    }
-
-    xen_unmap_io_section(xen_domid, state->ioservid, section);
-
-    memory_region_unref(mr);
-}
-
-static void xen_device_realize(DeviceListener *listener,
-                               DeviceState *dev)
-{
-    XenIOState *state = container_of(listener, XenIOState, device_listener);
-
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
-        PCIDevice *pci_dev = PCI_DEVICE(dev);
-        XenPciDevice *xendev = g_new(XenPciDevice, 1);
-
-        xendev->pci_dev = pci_dev;
-        xendev->sbdf = PCI_BUILD_BDF(pci_dev_bus_num(pci_dev),
-                                     pci_dev->devfn);
-        QLIST_INSERT_HEAD(&state->dev_list, xendev, entry);
-
-        xen_map_pcidev(xen_domid, state->ioservid, pci_dev);
-    }
-}
-
-static void xen_device_unrealize(DeviceListener *listener,
-                                 DeviceState *dev)
-{
-    XenIOState *state = container_of(listener, XenIOState, device_listener);
-
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
-        PCIDevice *pci_dev = PCI_DEVICE(dev);
-        XenPciDevice *xendev, *next;
-
-        xen_unmap_pcidev(xen_domid, state->ioservid, pci_dev);
-
-        QLIST_FOREACH_SAFE(xendev, &state->dev_list, entry, next) {
-            if (xendev->pci_dev == pci_dev) {
-                QLIST_REMOVE(xendev, entry);
-                g_free(xendev);
-                break;
-            }
-        }
-    }
-}
-
 static void xen_sync_dirty_bitmap(XenIOState *state,
                                   hwaddr start_addr,
                                   ram_addr_t size)
@@ -716,277 +469,6 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
-static MemoryListener xen_io_listener = {
-    .name = "xen-io",
-    .region_add = xen_io_add,
-    .region_del = xen_io_del,
-    .priority = 10,
-};
-
-static DeviceListener xen_device_listener = {
-    .realize = xen_device_realize,
-    .unrealize = xen_device_unrealize,
-};
-
-/* get the ioreq packets from share mem */
-static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
-{
-    ioreq_t *req = xen_vcpu_ioreq(state->shared_page, vcpu);
-
-    if (req->state != STATE_IOREQ_READY) {
-        DPRINTF("I/O request not ready: "
-                "%x, ptr: %x, port: %"PRIx64", "
-                "data: %"PRIx64", count: %u, size: %u\n",
-                req->state, req->data_is_ptr, req->addr,
-                req->data, req->count, req->size);
-        return NULL;
-    }
-
-    xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */
-
-    req->state = STATE_IOREQ_INPROCESS;
-    return req;
-}
-
-/* use poll to get the port notification */
-/* ioreq_vec--out,the */
-/* retval--the number of ioreq packet */
-static ioreq_t *cpu_get_ioreq(XenIOState *state)
-{
-    MachineState *ms = MACHINE(qdev_get_machine());
-    unsigned int max_cpus = ms->smp.max_cpus;
-    int i;
-    evtchn_port_t port;
-
-    port = xenevtchn_pending(state->xce_handle);
-    if (port == state->bufioreq_local_port) {
-        timer_mod(state->buffered_io_timer,
-                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-        return NULL;
-    }
-
-    if (port != -1) {
-        for (i = 0; i < max_cpus; i++) {
-            if (state->ioreq_local_port[i] == port) {
-                break;
-            }
-        }
-
-        if (i == max_cpus) {
-            hw_error("Fatal error while trying to get io event!\n");
-        }
-
-        /* unmask the wanted port again */
-        xenevtchn_unmask(state->xce_handle, port);
-
-        /* get the io packet from shared memory */
-        state->send_vcpu = i;
-        return cpu_get_ioreq_from_shared_memory(state, i);
-    }
-
-    /* read error or read nothing */
-    return NULL;
-}
-
-static uint32_t do_inp(uint32_t addr, unsigned long size)
-{
-    switch (size) {
-        case 1:
-            return cpu_inb(addr);
-        case 2:
-            return cpu_inw(addr);
-        case 4:
-            return cpu_inl(addr);
-        default:
-            hw_error("inp: bad size: %04x %lx", addr, size);
-    }
-}
-
-static void do_outp(uint32_t addr,
-        unsigned long size, uint32_t val)
-{
-    switch (size) {
-        case 1:
-            return cpu_outb(addr, val);
-        case 2:
-            return cpu_outw(addr, val);
-        case 4:
-            return cpu_outl(addr, val);
-        default:
-            hw_error("outp: bad size: %04x %lx", addr, size);
-    }
-}
-
-/*
- * Helper functions which read/write an object from/to physical guest
- * memory, as part of the implementation of an ioreq.
- *
- * Equivalent to
- *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
- *                          val, req->size, 0/1)
- * except without the integer overflow problems.
- */
-static void rw_phys_req_item(hwaddr addr,
-                             ioreq_t *req, uint32_t i, void *val, int rw)
-{
-    /* Do everything unsigned so overflow just results in a truncated result
-     * and accesses to undesired parts of guest memory, which is up
-     * to the guest */
-    hwaddr offset = (hwaddr)req->size * i;
-    if (req->df) {
-        addr -= offset;
-    } else {
-        addr += offset;
-    }
-    cpu_physical_memory_rw(addr, val, req->size, rw);
-}
-
-static inline void read_phys_req_item(hwaddr addr,
-                                      ioreq_t *req, uint32_t i, void *val)
-{
-    rw_phys_req_item(addr, req, i, val, 0);
-}
-static inline void write_phys_req_item(hwaddr addr,
-                                       ioreq_t *req, uint32_t i, void *val)
-{
-    rw_phys_req_item(addr, req, i, val, 1);
-}
-
-
-static void cpu_ioreq_pio(ioreq_t *req)
-{
-    uint32_t i;
-
-    trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
-                         req->data, req->count, req->size);
-
-    if (req->size > sizeof(uint32_t)) {
-        hw_error("PIO: bad size (%u)", req->size);
-    }
-
-    if (req->dir == IOREQ_READ) {
-        if (!req->data_is_ptr) {
-            req->data = do_inp(req->addr, req->size);
-            trace_cpu_ioreq_pio_read_reg(req, req->data, req->addr,
-                                         req->size);
-        } else {
-            uint32_t tmp;
-
-            for (i = 0; i < req->count; i++) {
-                tmp = do_inp(req->addr, req->size);
-                write_phys_req_item(req->data, req, i, &tmp);
-            }
-        }
-    } else if (req->dir == IOREQ_WRITE) {
-        if (!req->data_is_ptr) {
-            trace_cpu_ioreq_pio_write_reg(req, req->data, req->addr,
-                                          req->size);
-            do_outp(req->addr, req->size, req->data);
-        } else {
-            for (i = 0; i < req->count; i++) {
-                uint32_t tmp = 0;
-
-                read_phys_req_item(req->data, req, i, &tmp);
-                do_outp(req->addr, req->size, tmp);
-            }
-        }
-    }
-}
-
-static void cpu_ioreq_move(ioreq_t *req)
-{
-    uint32_t i;
-
-    trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
-                         req->data, req->count, req->size);
-
-    if (req->size > sizeof(req->data)) {
-        hw_error("MMIO: bad size (%u)", req->size);
-    }
-
-    if (!req->data_is_ptr) {
-        if (req->dir == IOREQ_READ) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->addr, req, i, &req->data);
-            }
-        } else if (req->dir == IOREQ_WRITE) {
-            for (i = 0; i < req->count; i++) {
-                write_phys_req_item(req->addr, req, i, &req->data);
-            }
-        }
-    } else {
-        uint64_t tmp;
-
-        if (req->dir == IOREQ_READ) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->addr, req, i, &tmp);
-                write_phys_req_item(req->data, req, i, &tmp);
-            }
-        } else if (req->dir == IOREQ_WRITE) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->data, req, i, &tmp);
-                write_phys_req_item(req->addr, req, i, &tmp);
-            }
-        }
-    }
-}
-
-static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
-{
-    uint32_t sbdf = req->addr >> 32;
-    uint32_t reg = req->addr;
-    XenPciDevice *xendev;
-
-    if (req->size != sizeof(uint8_t) && req->size != sizeof(uint16_t) &&
-        req->size != sizeof(uint32_t)) {
-        hw_error("PCI config access: bad size (%u)", req->size);
-    }
-
-    if (req->count != 1) {
-        hw_error("PCI config access: bad count (%u)", req->count);
-    }
-
-    QLIST_FOREACH(xendev, &state->dev_list, entry) {
-        if (xendev->sbdf != sbdf) {
-            continue;
-        }
-
-        if (!req->data_is_ptr) {
-            if (req->dir == IOREQ_READ) {
-                req->data = pci_host_config_read_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->size);
-                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
-                                            req->size, req->data);
-            } else if (req->dir == IOREQ_WRITE) {
-                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
-                                             req->size, req->data);
-                pci_host_config_write_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->data, req->size);
-            }
-        } else {
-            uint32_t tmp;
-
-            if (req->dir == IOREQ_READ) {
-                tmp = pci_host_config_read_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->size);
-                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
-                                            req->size, tmp);
-                write_phys_req_item(req->data, req, 0, &tmp);
-            } else if (req->dir == IOREQ_WRITE) {
-                read_phys_req_item(req->data, req, 0, &tmp);
-                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
-                                             req->size, tmp);
-                pci_host_config_write_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    tmp, req->size);
-            }
-        }
-    }
-}
-
 static void regs_to_cpu(vmware_regs_t *vmport_regs, ioreq_t *req)
 {
     X86CPU *cpu;
@@ -1030,226 +512,6 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
     current_cpu = NULL;
 }
 
-static void handle_ioreq(XenIOState *state, ioreq_t *req)
-{
-    trace_handle_ioreq(req, req->type, req->dir, req->df, req->data_is_ptr,
-                       req->addr, req->data, req->count, req->size);
-
-    if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
-            (req->size < sizeof (target_ulong))) {
-        req->data &= ((target_ulong) 1 << (8 * req->size)) - 1;
-    }
-
-    if (req->dir == IOREQ_WRITE)
-        trace_handle_ioreq_write(req, req->type, req->df, req->data_is_ptr,
-                                 req->addr, req->data, req->count, req->size);
-
-    switch (req->type) {
-        case IOREQ_TYPE_PIO:
-            cpu_ioreq_pio(req);
-            break;
-        case IOREQ_TYPE_COPY:
-            cpu_ioreq_move(req);
-            break;
-        case IOREQ_TYPE_VMWARE_PORT:
-            handle_vmport_ioreq(state, req);
-            break;
-        case IOREQ_TYPE_TIMEOFFSET:
-            break;
-        case IOREQ_TYPE_INVALIDATE:
-            xen_invalidate_map_cache();
-            break;
-        case IOREQ_TYPE_PCI_CONFIG:
-            cpu_ioreq_config(state, req);
-            break;
-        default:
-            hw_error("Invalid ioreq type 0x%x\n", req->type);
-    }
-    if (req->dir == IOREQ_READ) {
-        trace_handle_ioreq_read(req, req->type, req->df, req->data_is_ptr,
-                                req->addr, req->data, req->count, req->size);
-    }
-}
-
-static bool handle_buffered_iopage(XenIOState *state)
-{
-    buffered_iopage_t *buf_page = state->buffered_io_page;
-    buf_ioreq_t *buf_req = NULL;
-    bool handled_ioreq = false;
-    ioreq_t req;
-    int qw;
-
-    if (!buf_page) {
-        return 0;
-    }
-
-    memset(&req, 0x00, sizeof(req));
-    req.state = STATE_IOREQ_READY;
-    req.count = 1;
-    req.dir = IOREQ_WRITE;
-
-    for (;;) {
-        uint32_t rdptr = buf_page->read_pointer, wrptr;
-
-        xen_rmb();
-        wrptr = buf_page->write_pointer;
-        xen_rmb();
-        if (rdptr != buf_page->read_pointer) {
-            continue;
-        }
-        if (rdptr == wrptr) {
-            break;
-        }
-        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
-        req.size = 1U << buf_req->size;
-        req.addr = buf_req->addr;
-        req.data = buf_req->data;
-        req.type = buf_req->type;
-        xen_rmb();
-        qw = (req.size == 8);
-        if (qw) {
-            if (rdptr + 1 == wrptr) {
-                hw_error("Incomplete quad word buffered ioreq");
-            }
-            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
-                                           IOREQ_BUFFER_SLOT_NUM];
-            req.data |= ((uint64_t)buf_req->data) << 32;
-            xen_rmb();
-        }
-
-        handle_ioreq(state, &req);
-
-        /* Only req.data may get updated by handle_ioreq(), albeit even that
-         * should not happen as such data would never make it to the guest (we
-         * can only usefully see writes here after all).
-         */
-        assert(req.state == STATE_IOREQ_READY);
-        assert(req.count == 1);
-        assert(req.dir == IOREQ_WRITE);
-        assert(!req.data_is_ptr);
-
-        qatomic_add(&buf_page->read_pointer, qw + 1);
-        handled_ioreq = true;
-    }
-
-    return handled_ioreq;
-}
-
-static void handle_buffered_io(void *opaque)
-{
-    XenIOState *state = opaque;
-
-    if (handle_buffered_iopage(state)) {
-        timer_mod(state->buffered_io_timer,
-                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-    } else {
-        timer_del(state->buffered_io_timer);
-        xenevtchn_unmask(state->xce_handle, state->bufioreq_local_port);
-    }
-}
-
-static void cpu_handle_ioreq(void *opaque)
-{
-    XenIOState *state = opaque;
-    ioreq_t *req = cpu_get_ioreq(state);
-
-    handle_buffered_iopage(state);
-    if (req) {
-        ioreq_t copy = *req;
-
-        xen_rmb();
-        handle_ioreq(state, &copy);
-        req->data = copy.data;
-
-        if (req->state != STATE_IOREQ_INPROCESS) {
-            fprintf(stderr, "Badness in I/O request ... not in service?!: "
-                    "%x, ptr: %x, port: %"PRIx64", "
-                    "data: %"PRIx64", count: %u, size: %u, type: %u\n",
-                    req->state, req->data_is_ptr, req->addr,
-                    req->data, req->count, req->size, req->type);
-            destroy_hvm_domain(false);
-            return;
-        }
-
-        xen_wmb(); /* Update ioreq contents /then/ update state. */
-
-        /*
-         * We do this before we send the response so that the tools
-         * have the opportunity to pick up on the reset before the
-         * guest resumes and does a hlt with interrupts disabled which
-         * causes Xen to powerdown the domain.
-         */
-        if (runstate_is_running()) {
-            ShutdownCause request;
-
-            if (qemu_shutdown_requested_get()) {
-                destroy_hvm_domain(false);
-            }
-            request = qemu_reset_requested_get();
-            if (request) {
-                qemu_system_reset(request);
-                destroy_hvm_domain(true);
-            }
-        }
-
-        req->state = STATE_IORESP_READY;
-        xenevtchn_notify(state->xce_handle,
-                         state->ioreq_local_port[state->send_vcpu]);
-    }
-}
-
-static void xen_main_loop_prepare(XenIOState *state)
-{
-    int evtchn_fd = -1;
-
-    if (state->xce_handle != NULL) {
-        evtchn_fd = xenevtchn_fd(state->xce_handle);
-    }
-
-    state->buffered_io_timer = timer_new_ms(QEMU_CLOCK_REALTIME, handle_buffered_io,
-                                                 state);
-
-    if (evtchn_fd != -1) {
-        CPUState *cpu_state;
-
-        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
-        CPU_FOREACH(cpu_state) {
-            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
-                    __func__, cpu_state->cpu_index, cpu_state);
-            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
-        }
-        qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
-    }
-}
-
-
-static void xen_hvm_change_state_handler(void *opaque, bool running,
-                                         RunState rstate)
-{
-    XenIOState *state = opaque;
-
-    if (running) {
-        xen_main_loop_prepare(state);
-    }
-
-    xen_set_ioreq_server_state(xen_domid,
-                               state->ioservid,
-                               (rstate == RUN_STATE_RUNNING));
-}
-
-static void xen_exit_notifier(Notifier *n, void *data)
-{
-    XenIOState *state = container_of(n, XenIOState, exit);
-
-    xen_destroy_ioreq_server(xen_domid, state->ioservid);
-    if (state->fres != NULL) {
-        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
-    }
-
-    xenevtchn_close(state->xce_handle);
-    xs_daemon_close(state->xenstore);
-}
-
 #ifdef XEN_COMPAT_PHYSMAP
 static void xen_read_physmap(XenIOState *state)
 {
@@ -1309,178 +571,17 @@ static void xen_wakeup_notifier(Notifier *notifier, void *data)
     xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 0);
 }
 
-static int xen_map_ioreq_server(XenIOState *state)
-{
-    void *addr = NULL;
-    xen_pfn_t ioreq_pfn;
-    xen_pfn_t bufioreq_pfn;
-    evtchn_port_t bufioreq_evtchn;
-    int rc;
-
-    /*
-     * Attempt to map using the resource API and fall back to normal
-     * foreign mapping if this is not supported.
-     */
-    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
-    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
-    state->fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
-                                         XENMEM_resource_ioreq_server,
-                                         state->ioservid, 0, 2,
-                                         &addr,
-                                         PROT_READ | PROT_WRITE, 0);
-    if (state->fres != NULL) {
-        trace_xen_map_resource_ioreq(state->ioservid, addr);
-        state->buffered_io_page = addr;
-        state->shared_page = addr + TARGET_PAGE_SIZE;
-    } else if (errno != EOPNOTSUPP) {
-        error_report("failed to map ioreq server resources: error %d handle=%p",
-                     errno, xen_xc);
-        return -1;
-    }
-
-    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
-                                   (state->shared_page == NULL) ?
-                                   &ioreq_pfn : NULL,
-                                   (state->buffered_io_page == NULL) ?
-                                   &bufioreq_pfn : NULL,
-                                   &bufioreq_evtchn);
-    if (rc < 0) {
-        error_report("failed to get ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        return rc;
-    }
-
-    if (state->shared_page == NULL) {
-        DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
-
-        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                  PROT_READ | PROT_WRITE,
-                                                  1, &ioreq_pfn, NULL);
-        if (state->shared_page == NULL) {
-            error_report("map shared IO page returned error %d handle=%p",
-                         errno, xen_xc);
-        }
-    }
-
-    if (state->buffered_io_page == NULL) {
-        DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
-
-        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                       PROT_READ | PROT_WRITE,
-                                                       1, &bufioreq_pfn,
-                                                       NULL);
-        if (state->buffered_io_page == NULL) {
-            error_report("map buffered IO page returned error %d", errno);
-            return -1;
-        }
-    }
-
-    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
-        return -1;
-    }
-
-    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
-
-    state->bufioreq_remote_port = bufioreq_evtchn;
-
-    return 0;
-}
-
 void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 {
     MachineState *ms = MACHINE(pcms);
     unsigned int max_cpus = ms->smp.max_cpus;
-    int i, rc;
+    int rc;
     xen_pfn_t ioreq_pfn;
     XenIOState *state;
 
     state = g_new0(XenIOState, 1);
 
-    state->xce_handle = xenevtchn_open(NULL, 0);
-    if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
-        goto err;
-    }
-
-    state->xenstore = xs_daemon_open();
-    if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
-        goto err;
-    }
-
-    xen_create_ioreq_server(xen_domid, &state->ioservid);
-
-    state->exit.notify = xen_exit_notifier;
-    qemu_add_exit_notifier(&state->exit);
-
-    /*
-     * Register wake-up support in QMP query-current-machine API
-     */
-    qemu_register_wakeup_support();
-
-    rc = xen_map_ioreq_server(state);
-    if (rc < 0) {
-        goto err;
-    }
-
-    /* Note: cpus is empty at this point in init */
-    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
-
-    rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
-    if (rc < 0) {
-        error_report("failed to enable ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        goto err;
-    }
-
-    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
-
-    /* FIXME: how about if we overflow the page here? */
-    for (i = 0; i < max_cpus; i++) {
-        rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                        xen_vcpu_eport(state->shared_page, i));
-        if (rc == -1) {
-            error_report("shared evtchn %d bind error %d", i, errno);
-            goto err;
-        }
-        state->ioreq_local_port[i] = rc;
-    }
-
-    rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                    state->bufioreq_remote_port);
-    if (rc == -1) {
-        error_report("buffered evtchn bind error %d", errno);
-        goto err;
-    }
-    state->bufioreq_local_port = rc;
-
-    /* Init RAM management */
-#ifdef XEN_COMPAT_PHYSMAP
-    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
-#else
-    xen_map_cache_init(NULL, state);
-#endif
-
-    qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
-
-    state->memory_listener = xen_memory_listener;
-    memory_listener_register(&state->memory_listener, &address_space_memory);
-
-    state->io_listener = xen_io_listener;
-    memory_listener_register(&state->io_listener, &address_space_io);
-
-    state->device_listener = xen_device_listener;
-    QLIST_INIT(&state->dev_list);
-    device_listener_register(&state->device_listener);
-
-    xen_bus_init();
-
-    /* Initialize backend core & drivers */
-    if (xen_be_init() != 0) {
-        error_report("xen backend core setup failed");
-        goto err;
-    }
-    xen_be_register_common();
+    xen_register_ioreq(state, max_cpus, xen_memory_listener);
 
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
@@ -1520,59 +621,11 @@ err:
     exit(1);
 }
 
-void destroy_hvm_domain(bool reboot)
-{
-    xc_interface *xc_handle;
-    int sts;
-    int rc;
-
-    unsigned int reason = reboot ? SHUTDOWN_reboot : SHUTDOWN_poweroff;
-
-    if (xen_dmod) {
-        rc = xendevicemodel_shutdown(xen_dmod, xen_domid, reason);
-        if (!rc) {
-            return;
-        }
-        if (errno != ENOTTY /* old Xen */) {
-            perror("xendevicemodel_shutdown failed");
-        }
-        /* well, try the old thing then */
-    }
-
-    xc_handle = xc_interface_open(0, 0, 0);
-    if (xc_handle == NULL) {
-        fprintf(stderr, "Cannot acquire xenctrl handle\n");
-    } else {
-        sts = xc_domain_shutdown(xc_handle, xen_domid, reason);
-        if (sts != 0) {
-            fprintf(stderr, "xc_domain_shutdown failed to issue %s, "
-                    "sts %d, %s\n", reboot ? "reboot" : "poweroff",
-                    sts, strerror(errno));
-        } else {
-            fprintf(stderr, "Issued domain %d %s\n", xen_domid,
-                    reboot ? "reboot" : "poweroff");
-        }
-        xc_interface_close(xc_handle);
-    }
-}
-
 void xen_register_framebuffer(MemoryRegion *mr)
 {
     framebuffer = mr;
 }
 
-void xen_shutdown_fatal_error(const char *fmt, ...)
-{
-    va_list ap;
-
-    va_start(ap, fmt);
-    vfprintf(stderr, fmt, ap);
-    va_end(ap);
-    fprintf(stderr, "Will destroy the domain.\n");
-    /* destroy the domain */
-    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
-}
-
 void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 {
     if (unlikely(xen_in_migration)) {
@@ -1604,3 +657,57 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
         memory_global_dirty_log_stop(GLOBAL_DIRTY_MIGRATION);
     }
 }
+
+void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
+                                bool add)
+{
+    hwaddr start_addr = section->offset_within_address_space;
+    ram_addr_t size = int128_get64(section->size);
+    bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
+    hvmmem_type_t mem_type;
+
+    if (!memory_region_is_ram(section->mr)) {
+        return;
+    }
+
+    if (log_dirty != add) {
+        return;
+    }
+
+    trace_xen_client_set_memory(start_addr, size, log_dirty);
+
+    start_addr &= TARGET_PAGE_MASK;
+    size = TARGET_PAGE_ALIGN(size);
+
+    if (add) {
+        if (!memory_region_is_rom(section->mr)) {
+            xen_add_to_physmap(state, start_addr, size,
+                               section->mr, section->offset_within_region);
+        } else {
+            mem_type = HVMMEM_ram_ro;
+            if (xen_set_mem_type(xen_domid, mem_type,
+                                 start_addr >> TARGET_PAGE_BITS,
+                                 size >> TARGET_PAGE_BITS)) {
+                DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
+                        start_addr);
+            }
+        }
+    } else {
+        if (xen_remove_from_physmap(state, start_addr, size) < 0) {
+            DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
+        }
+    }
+}
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    switch (req->type) {
+    case IOREQ_TYPE_VMWARE_PORT:
+            handle_vmport_ioreq(state, req);
+        break;
+    default:
+        hw_error("Invalid ioreq type 0x%x\n", req->type);
+    }
+
+    return;
+}
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index 19d0637c46..008e036d63 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -25,4 +25,7 @@ specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
 
 xen_ss = ss.source_set()
 
-xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
+xen_ss.add(when: 'CONFIG_XEN', if_true: files(
+  'xen-mapcache.c',
+  'xen-hvm-common.c',
+))
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index 2c8f238f42..02ca1183da 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -42,6 +42,20 @@ xs_node_vscanf(char *path, char *value) "%s %s"
 xs_node_watch(char *path) "%s"
 xs_node_unwatch(char *path) "%s"
 
+# xen-hvm.c
+xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
+xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "0x%"PRIx64" size 0x%lx, log_dirty %i"
+handle_ioreq(void *req, uint32_t type, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p type=%d dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+handle_ioreq_read(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p read type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+handle_ioreq_write(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p write type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+cpu_ioreq_pio(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p pio dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+cpu_ioreq_pio_read_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio read reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
+cpu_ioreq_pio_write_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio write reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
+cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p copy dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
+cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
+cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
+
 # xen-mapcache.c
 xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
 xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
new file mode 100644
index 0000000000..c2e1e08124
--- /dev/null
+++ b/hw/xen/xen-hvm-common.c
@@ -0,0 +1,874 @@
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qapi/error.h"
+#include "trace.h"
+
+#include "hw/pci/pci_host.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/xen-legacy-backend.h"
+#include "hw/xen/xen-bus.h"
+#include "hw/boards.h"
+#include "hw/xen/arch_hvm.h"
+
+MemoryRegion ram_memory;
+
+void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
+                   Error **errp)
+{
+    unsigned long nr_pfn;
+    xen_pfn_t *pfn_list;
+    int i;
+
+    if (runstate_check(RUN_STATE_INMIGRATE)) {
+        /* RAM already populated in Xen */
+        fprintf(stderr, "%s: do not alloc "RAM_ADDR_FMT
+                " bytes of ram at "RAM_ADDR_FMT" when runstate is INMIGRATE\n",
+                __func__, size, ram_addr);
+        return;
+    }
+
+    if (mr == &ram_memory) {
+        return;
+    }
+
+    trace_xen_ram_alloc(ram_addr, size);
+
+    nr_pfn = size >> TARGET_PAGE_BITS;
+    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
+
+    for (i = 0; i < nr_pfn; i++) {
+        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
+    }
+
+    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
+        error_setg(errp, "xen: failed to populate ram at " RAM_ADDR_FMT,
+                   ram_addr);
+    }
+
+    g_free(pfn_list);
+}
+
+static void xen_set_memory(struct MemoryListener *listener,
+                           MemoryRegionSection *section,
+                           bool add)
+{
+    XenIOState *state = container_of(listener, XenIOState, memory_listener);
+
+    if (section->mr == &ram_memory) {
+        return;
+    } else {
+        if (add) {
+            xen_map_memory_section(xen_domid, state->ioservid,
+                                   section);
+        } else {
+            xen_unmap_memory_section(xen_domid, state->ioservid,
+                                     section);
+        }
+    }
+
+    arch_xen_set_memory(state, section, add);
+}
+
+void xen_region_add(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    memory_region_ref(section->mr);
+    xen_set_memory(listener, section, true);
+}
+
+void xen_region_del(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    xen_set_memory(listener, section, false);
+    memory_region_unref(section->mr);
+}
+
+void xen_io_add(MemoryListener *listener,
+                       MemoryRegionSection *section)
+{
+    XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops) {
+        return;
+    }
+
+    memory_region_ref(mr);
+
+    xen_map_io_section(xen_domid, state->ioservid, section);
+}
+
+void xen_io_del(MemoryListener *listener,
+                       MemoryRegionSection *section)
+{
+    XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops) {
+        return;
+    }
+
+    xen_unmap_io_section(xen_domid, state->ioservid, section);
+
+    memory_region_unref(mr);
+}
+
+void xen_device_realize(DeviceListener *listener,
+                               DeviceState *dev)
+{
+    XenIOState *state = container_of(listener, XenIOState, device_listener);
+
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
+        PCIDevice *pci_dev = PCI_DEVICE(dev);
+        XenPciDevice *xendev = g_new(XenPciDevice, 1);
+
+        xendev->pci_dev = pci_dev;
+        xendev->sbdf = PCI_BUILD_BDF(pci_dev_bus_num(pci_dev),
+                                     pci_dev->devfn);
+        QLIST_INSERT_HEAD(&state->dev_list, xendev, entry);
+
+        xen_map_pcidev(xen_domid, state->ioservid, pci_dev);
+    }
+}
+
+void xen_device_unrealize(DeviceListener *listener,
+                                 DeviceState *dev)
+{
+    XenIOState *state = container_of(listener, XenIOState, device_listener);
+
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
+        PCIDevice *pci_dev = PCI_DEVICE(dev);
+        XenPciDevice *xendev, *next;
+
+        xen_unmap_pcidev(xen_domid, state->ioservid, pci_dev);
+
+        QLIST_FOREACH_SAFE(xendev, &state->dev_list, entry, next) {
+            if (xendev->pci_dev == pci_dev) {
+                QLIST_REMOVE(xendev, entry);
+                g_free(xendev);
+                break;
+            }
+        }
+    }
+}
+
+MemoryListener xen_io_listener = {
+    .name = "xen-io",
+    .region_add = xen_io_add,
+    .region_del = xen_io_del,
+    .priority = 10,
+};
+
+DeviceListener xen_device_listener = {
+    .realize = xen_device_realize,
+    .unrealize = xen_device_unrealize,
+};
+
+/* Fetch the ioreq packet for the given vcpu from the shared-memory page. */
+static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
+{
+    ioreq_t *req = xen_vcpu_ioreq(state->shared_page, vcpu);
+
+    if (req->state != STATE_IOREQ_READY) {
+        DPRINTF("I/O request not ready: "
+                "%x, ptr: %x, port: %"PRIx64", "
+                "data: %"PRIx64", count: %u, size: %u\n",
+                req->state, req->data_is_ptr, req->addr,
+                req->data, req->count, req->size);
+        return NULL;
+    }
+
+    xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */
+
+    req->state = STATE_IOREQ_INPROCESS;
+    return req;
+}
+
+/*
+ * Poll the event channel for a pending port notification and, if the
+ * port belongs to a vcpu, return that vcpu's ioreq packet (or NULL if
+ * nothing is ready).
+ */
+static ioreq_t *cpu_get_ioreq(XenIOState *state)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+    unsigned int max_cpus = ms->smp.max_cpus;
+    int i;
+    evtchn_port_t port;
+
+    port = xenevtchn_pending(state->xce_handle);
+    if (port == state->bufioreq_local_port) {
+        timer_mod(state->buffered_io_timer,
+                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+        return NULL;
+    }
+
+    if (port != -1) {
+        for (i = 0; i < max_cpus; i++) {
+            if (state->ioreq_local_port[i] == port) {
+                break;
+            }
+        }
+
+        if (i == max_cpus) {
+            hw_error("Fatal error while trying to get io event!\n");
+        }
+
+        /* unmask the wanted port again */
+        xenevtchn_unmask(state->xce_handle, port);
+
+        /* get the io packet from shared memory */
+        state->send_vcpu = i;
+        return cpu_get_ioreq_from_shared_memory(state, i);
+    }
+
+    /* read error or read nothing */
+    return NULL;
+}
+
+static uint32_t do_inp(uint32_t addr, unsigned long size)
+{
+    switch (size) {
+        case 1:
+            return cpu_inb(addr);
+        case 2:
+            return cpu_inw(addr);
+        case 4:
+            return cpu_inl(addr);
+        default:
+            hw_error("inp: bad size: %04x %lx", addr, size);
+    }
+}
+
+static void do_outp(uint32_t addr,
+        unsigned long size, uint32_t val)
+{
+    switch (size) {
+        case 1:
+            return cpu_outb(addr, val);
+        case 2:
+            return cpu_outw(addr, val);
+        case 4:
+            return cpu_outl(addr, val);
+        default:
+            hw_error("outp: bad size: %04x %lx", addr, size);
+    }
+}
+
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(hwaddr addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
+{
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    hwaddr offset = (hwaddr)req->size * i;
+    if (req->df) {
+        addr -= offset;
+    } else {
+        addr += offset;
+    }
+    cpu_physical_memory_rw(addr, val, req->size, rw);
+}
+
+static inline void read_phys_req_item(hwaddr addr,
+                                      ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(hwaddr addr,
+                                       ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 1);
+}
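The overflow-avoidance trick described in the comment above can be shown in isolation. This is an illustrative sketch, not QEMU code: `hwaddr_t` and `req_item_addr` are stand-ins for QEMU's `hwaddr` and the address computation inside `rw_phys_req_item()`.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t hwaddr_t;   /* stands in for QEMU's hwaddr */

/*
 * Compute the address of item i the same way rw_phys_req_item() does:
 * widen size to the 64-bit address type *before* multiplying, so the
 * per-item offset cannot overflow a 32-bit intermediate, and let any
 * wraparound happen in well-defined unsigned arithmetic.
 */
static hwaddr_t req_item_addr(hwaddr_t addr, uint32_t size, uint32_t i, int df)
{
    hwaddr_t offset = (hwaddr_t)size * i;   /* 64-bit multiply */
    return df ? addr - offset : addr + offset;
}
```

With `df` clear the address walks forward by `size` per item; with `df` set it walks backward, matching the string-instruction direction flag.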
+
+
+void cpu_ioreq_pio(ioreq_t *req)
+{
+    uint32_t i;
+
+    trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
+                         req->data, req->count, req->size);
+
+    if (req->size > sizeof(uint32_t)) {
+        hw_error("PIO: bad size (%u)", req->size);
+    }
+
+    if (req->dir == IOREQ_READ) {
+        if (!req->data_is_ptr) {
+            req->data = do_inp(req->addr, req->size);
+            trace_cpu_ioreq_pio_read_reg(req, req->data, req->addr,
+                                         req->size);
+        } else {
+            uint32_t tmp;
+
+            for (i = 0; i < req->count; i++) {
+                tmp = do_inp(req->addr, req->size);
+                write_phys_req_item(req->data, req, i, &tmp);
+            }
+        }
+    } else if (req->dir == IOREQ_WRITE) {
+        if (!req->data_is_ptr) {
+            trace_cpu_ioreq_pio_write_reg(req, req->data, req->addr,
+                                          req->size);
+            do_outp(req->addr, req->size, req->data);
+        } else {
+            for (i = 0; i < req->count; i++) {
+                uint32_t tmp = 0;
+
+                read_phys_req_item(req->data, req, i, &tmp);
+                do_outp(req->addr, req->size, tmp);
+            }
+        }
+    }
+}
+
+static void cpu_ioreq_move(ioreq_t *req)
+{
+    uint32_t i;
+
+    trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
+                         req->data, req->count, req->size);
+
+    if (req->size > sizeof(req->data)) {
+        hw_error("MMIO: bad size (%u)", req->size);
+    }
+
+    if (!req->data_is_ptr) {
+        if (req->dir == IOREQ_READ) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->addr, req, i, &req->data);
+            }
+        } else if (req->dir == IOREQ_WRITE) {
+            for (i = 0; i < req->count; i++) {
+                write_phys_req_item(req->addr, req, i, &req->data);
+            }
+        }
+    } else {
+        uint64_t tmp;
+
+        if (req->dir == IOREQ_READ) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
+            }
+        } else if (req->dir == IOREQ_WRITE) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
+            }
+        }
+    }
+}
+
+static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
+{
+    uint32_t sbdf = req->addr >> 32;
+    uint32_t reg = req->addr;
+    XenPciDevice *xendev;
+
+    if (req->size != sizeof(uint8_t) && req->size != sizeof(uint16_t) &&
+        req->size != sizeof(uint32_t)) {
+        hw_error("PCI config access: bad size (%u)", req->size);
+    }
+
+    if (req->count != 1) {
+        hw_error("PCI config access: bad count (%u)", req->count);
+    }
+
+    QLIST_FOREACH(xendev, &state->dev_list, entry) {
+        if (xendev->sbdf != sbdf) {
+            continue;
+        }
+
+        if (!req->data_is_ptr) {
+            if (req->dir == IOREQ_READ) {
+                req->data = pci_host_config_read_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->size);
+                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
+                                            req->size, req->data);
+            } else if (req->dir == IOREQ_WRITE) {
+                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
+                                             req->size, req->data);
+                pci_host_config_write_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->data, req->size);
+            }
+        } else {
+            uint32_t tmp;
+
+            if (req->dir == IOREQ_READ) {
+                tmp = pci_host_config_read_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->size);
+                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
+                                            req->size, tmp);
+                write_phys_req_item(req->data, req, 0, &tmp);
+            } else if (req->dir == IOREQ_WRITE) {
+                read_phys_req_item(req->data, req, 0, &tmp);
+                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
+                                             req->size, tmp);
+                pci_host_config_write_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    tmp, req->size);
+            }
+        }
+    }
+}
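The address decoding used above can be sketched standalone. Assuming QEMU's `PCI_BUILD_BDF()` layout (bus in bits 15:8, devfn in bits 7:0), `cpu_ioreq_config()` matches the SBDF carried in the upper half of `req->addr` against the devices recorded at realize time; `BUILD_BDF` and `split_config_addr` below are illustrative names, not QEMU API.

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of QEMU's PCI_BUILD_BDF(): bus in bits 15:8, devfn in bits 7:0. */
#define BUILD_BDF(bus, devfn) (((bus) << 8) | (devfn))

/*
 * Split an ioreq config-space address the way cpu_ioreq_config() does:
 * the SBDF occupies the upper 32 bits of req->addr, the config-space
 * register offset the lower 32 bits.
 */
static void split_config_addr(uint64_t addr, uint32_t *sbdf, uint32_t *reg)
{
    *sbdf = (uint32_t)(addr >> 32);
    *reg  = (uint32_t)addr;
}
```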
+
+static void handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    trace_handle_ioreq(req, req->type, req->dir, req->df, req->data_is_ptr,
+                       req->addr, req->data, req->count, req->size);
+
+    if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
+            (req->size < sizeof (target_ulong))) {
+        req->data &= ((target_ulong) 1 << (8 * req->size)) - 1;
+    }
+
+    if (req->dir == IOREQ_WRITE)
+        trace_handle_ioreq_write(req, req->type, req->df, req->data_is_ptr,
+                                 req->addr, req->data, req->count, req->size);
+
+    switch (req->type) {
+        case IOREQ_TYPE_PIO:
+            cpu_ioreq_pio(req);
+            break;
+        case IOREQ_TYPE_COPY:
+            cpu_ioreq_move(req);
+            break;
+        case IOREQ_TYPE_TIMEOFFSET:
+            break;
+        case IOREQ_TYPE_INVALIDATE:
+            xen_invalidate_map_cache();
+            break;
+        case IOREQ_TYPE_PCI_CONFIG:
+            cpu_ioreq_config(state, req);
+            break;
+        default:
+            arch_handle_ioreq(state, req);
+    }
+    if (req->dir == IOREQ_READ) {
+        trace_handle_ioreq_read(req, req->type, req->df, req->data_is_ptr,
+                                req->addr, req->data, req->count, req->size);
+    }
+}
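The sub-word write masking performed at the top of `handle_ioreq()` can be demonstrated on its own. A minimal sketch; `mask_to_size` is an illustrative helper, valid for sizes below 8 bytes (shifting a 64-bit value by 64 would be undefined), which is exactly the `size < sizeof(target_ulong)` case guarded above.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Truncate a write value to its access width in bytes, as handle_ioreq()
 * does for non-pointer writes narrower than a machine word: build a mask
 * of (8 * size) low bits and discard everything above it.
 */
static uint64_t mask_to_size(uint64_t data, unsigned size)
{
    return data & ((1ULL << (8 * size)) - 1);
}
```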
+
+static bool handle_buffered_iopage(XenIOState *state)
+{
+    buffered_iopage_t *buf_page = state->buffered_io_page;
+    buf_ioreq_t *buf_req = NULL;
+    bool handled_ioreq = false;
+    ioreq_t req;
+    int qw;
+
+    if (!buf_page) {
+        return 0;
+    }
+
+    memset(&req, 0x00, sizeof(req));
+    req.state = STATE_IOREQ_READY;
+    req.count = 1;
+    req.dir = IOREQ_WRITE;
+
+    for (;;) {
+        uint32_t rdptr = buf_page->read_pointer, wrptr;
+
+        xen_rmb();
+        wrptr = buf_page->write_pointer;
+        xen_rmb();
+        if (rdptr != buf_page->read_pointer) {
+            continue;
+        }
+        if (rdptr == wrptr) {
+            break;
+        }
+        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
+        req.size = 1U << buf_req->size;
+        req.addr = buf_req->addr;
+        req.data = buf_req->data;
+        req.type = buf_req->type;
+        xen_rmb();
+        qw = (req.size == 8);
+        if (qw) {
+            if (rdptr + 1 == wrptr) {
+                hw_error("Incomplete quad word buffered ioreq");
+            }
+            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
+                                           IOREQ_BUFFER_SLOT_NUM];
+            req.data |= ((uint64_t)buf_req->data) << 32;
+            xen_rmb();
+        }
+
+        handle_ioreq(state, &req);
+
+        /* Only req.data may get updated by handle_ioreq(), albeit even that
+         * should not happen as such data would never make it to the guest (we
+         * can only usefully see writes here after all).
+         */
+        assert(req.state == STATE_IOREQ_READY);
+        assert(req.count == 1);
+        assert(req.dir == IOREQ_WRITE);
+        assert(!req.data_is_ptr);
+
+        qatomic_add(&buf_page->read_pointer, qw + 1);
+        handled_ioreq = true;
+    }
+
+    return handled_ioreq;
+}
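The slot accounting behind the `qw + 1` read-pointer advance above can be sketched in isolation. These helpers are illustrative only, not part of the Xen ABI: they show why a quad-word request occupies two ring slots and why the free-running pointers are compared with unsigned subtraction.

```c
#include <assert.h>
#include <stdint.h>

/*
 * An 8-byte (quad word) buffered request is published as two consecutive
 * ring slots (low half, then high half); anything else takes one slot.
 */
static uint32_t slots_used(uint32_t size)
{
    return size == 8 ? 2 : 1;
}

/*
 * All of a request's slots must be visible before it is consumed, i.e.
 * the write pointer must be at least slots_used() ahead of the read
 * pointer. Pointers are free-running, so unsigned subtraction handles
 * wraparound correctly.
 */
static int req_ready(uint32_t rdptr, uint32_t wrptr, uint32_t size)
{
    return wrptr - rdptr >= slots_used(size);
}
```

In `handle_buffered_iopage()` itself, a quad-word request whose second slot has not been published (`rdptr + 1 == wrptr`) is treated as a fatal protocol error rather than retried.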
+
+static void handle_buffered_io(void *opaque)
+{
+    XenIOState *state = opaque;
+
+    if (handle_buffered_iopage(state)) {
+        timer_mod(state->buffered_io_timer,
+                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+    } else {
+        timer_del(state->buffered_io_timer);
+        xenevtchn_unmask(state->xce_handle, state->bufioreq_local_port);
+    }
+}
+
+static void cpu_handle_ioreq(void *opaque)
+{
+    XenIOState *state = opaque;
+    ioreq_t *req = cpu_get_ioreq(state);
+
+    handle_buffered_iopage(state);
+    if (req) {
+        ioreq_t copy = *req;
+
+        xen_rmb();
+        handle_ioreq(state, &copy);
+        req->data = copy.data;
+
+        if (req->state != STATE_IOREQ_INPROCESS) {
+            fprintf(stderr, "Badness in I/O request ... not in service?!: "
+                    "%x, ptr: %x, port: %"PRIx64", "
+                    "data: %"PRIx64", count: %u, size: %u, type: %u\n",
+                    req->state, req->data_is_ptr, req->addr,
+                    req->data, req->count, req->size, req->type);
+            destroy_hvm_domain(false);
+            return;
+        }
+
+        xen_wmb(); /* Update ioreq contents /then/ update state. */
+
+        /*
+         * We do this before we send the response so that the tools
+         * have the opportunity to pick up on the reset before the
+         * guest resumes and does a hlt with interrupts disabled which
+         * causes Xen to powerdown the domain.
+         */
+        if (runstate_is_running()) {
+            ShutdownCause request;
+
+            if (qemu_shutdown_requested_get()) {
+                destroy_hvm_domain(false);
+            }
+            request = qemu_reset_requested_get();
+            if (request) {
+                qemu_system_reset(request);
+                destroy_hvm_domain(true);
+            }
+        }
+
+        req->state = STATE_IORESP_READY;
+        xenevtchn_notify(state->xce_handle,
+                         state->ioreq_local_port[state->send_vcpu]);
+    }
+}
+
+static void xen_main_loop_prepare(XenIOState *state)
+{
+    int evtchn_fd = -1;
+
+    if (state->xce_handle != NULL) {
+        evtchn_fd = xenevtchn_fd(state->xce_handle);
+    }
+
+    state->buffered_io_timer = timer_new_ms(QEMU_CLOCK_REALTIME, handle_buffered_io,
+                                                 state);
+
+    if (evtchn_fd != -1) {
+        CPUState *cpu_state;
+
+        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
+        CPU_FOREACH(cpu_state) {
+            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
+                    __func__, cpu_state->cpu_index, cpu_state);
+            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
+        }
+        qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
+    }
+}
+
+
+void xen_hvm_change_state_handler(void *opaque, bool running,
+                                         RunState rstate)
+{
+    XenIOState *state = opaque;
+
+    if (running) {
+        xen_main_loop_prepare(state);
+    }
+
+    xen_set_ioreq_server_state(xen_domid,
+                               state->ioservid,
+                               (rstate == RUN_STATE_RUNNING));
+}
+
+void xen_exit_notifier(Notifier *n, void *data)
+{
+    XenIOState *state = container_of(n, XenIOState, exit);
+
+    xen_destroy_ioreq_server(xen_domid, state->ioservid);
+    if (state->fres != NULL) {
+        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
+    }
+
+    xenevtchn_close(state->xce_handle);
+    xs_daemon_close(state->xenstore);
+}
+
+static int xen_map_ioreq_server(XenIOState *state)
+{
+    void *addr = NULL;
+    xen_pfn_t ioreq_pfn;
+    xen_pfn_t bufioreq_pfn;
+    evtchn_port_t bufioreq_evtchn;
+    int rc;
+
+    /*
+     * Attempt to map using the resource API and fall back to normal
+     * foreign mapping if this is not supported.
+     */
+    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
+    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
+    state->fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
+                                         XENMEM_resource_ioreq_server,
+                                         state->ioservid, 0, 2,
+                                         &addr,
+                                         PROT_READ | PROT_WRITE, 0);
+    if (state->fres != NULL) {
+        trace_xen_map_resource_ioreq(state->ioservid, addr);
+        state->buffered_io_page = addr;
+        state->shared_page = addr + XC_PAGE_SIZE;
+    } else if (errno != EOPNOTSUPP) {
+        error_report("failed to map ioreq server resources: error %d handle=%p",
+                     errno, xen_xc);
+        return -1;
+    }
+
+    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
+                                   (state->shared_page == NULL) ?
+                                   &ioreq_pfn : NULL,
+                                   (state->buffered_io_page == NULL) ?
+                                   &bufioreq_pfn : NULL,
+                                   &bufioreq_evtchn);
+    if (rc < 0) {
+        error_report("failed to get ioreq server info: error %d handle=%p",
+                     errno, xen_xc);
+        return rc;
+    }
+
+    if (state->shared_page == NULL) {
+        DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
+
+        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                  PROT_READ | PROT_WRITE,
+                                                  1, &ioreq_pfn, NULL);
+        if (state->shared_page == NULL) {
+            error_report("map shared IO page returned error %d handle=%p",
+                         errno, xen_xc);
+        }
+    }
+
+    if (state->buffered_io_page == NULL) {
+        DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
+
+        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                       PROT_READ | PROT_WRITE,
+                                                       1, &bufioreq_pfn,
+                                                       NULL);
+        if (state->buffered_io_page == NULL) {
+            error_report("map buffered IO page returned error %d", errno);
+            return -1;
+        }
+    }
+
+    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
+        return -1;
+    }
+
+    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
+
+    state->bufioreq_remote_port = bufioreq_evtchn;
+
+    return 0;
+}
+
+void destroy_hvm_domain(bool reboot)
+{
+    xc_interface *xc_handle;
+    int sts;
+    int rc;
+
+    unsigned int reason = reboot ? SHUTDOWN_reboot : SHUTDOWN_poweroff;
+
+    if (xen_dmod) {
+        rc = xendevicemodel_shutdown(xen_dmod, xen_domid, reason);
+        if (!rc) {
+            return;
+        }
+        if (errno != ENOTTY /* old Xen */) {
+            perror("xendevicemodel_shutdown failed");
+        }
+        /* well, try the old thing then */
+    }
+
+    xc_handle = xc_interface_open(0, 0, 0);
+    if (xc_handle == NULL) {
+        fprintf(stderr, "Cannot acquire xenctrl handle\n");
+    } else {
+        sts = xc_domain_shutdown(xc_handle, xen_domid, reason);
+        if (sts != 0) {
+            fprintf(stderr, "xc_domain_shutdown failed to issue %s, "
+                    "sts %d, %s\n", reboot ? "reboot" : "poweroff",
+                    sts, strerror(errno));
+        } else {
+            fprintf(stderr, "Issued domain %d %s\n", xen_domid,
+                    reboot ? "reboot" : "poweroff");
+        }
+        xc_interface_close(xc_handle);
+    }
+}
+
+void xen_shutdown_fatal_error(const char *fmt, ...)
+{
+    va_list ap;
+
+    va_start(ap, fmt);
+    vfprintf(stderr, fmt, ap);
+    va_end(ap);
+    fprintf(stderr, "Will destroy the domain.\n");
+    /* destroy the domain */
+    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
+}
+
+static void xen_register_backend(XenIOState *state)
+{
+    /* Initialize backend core & drivers */
+    if (xen_be_init() != 0) {
+        error_report("xen backend core setup failed");
+        goto err;
+    }
+    xen_be_register_common();
+
+    return;
+
+err:
+    error_report("xen hardware virtual machine backend registration failed");
+    exit(1);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener)
+{
+    int i, rc;
+
+    state->xce_handle = xenevtchn_open(NULL, 0);
+    if (state->xce_handle == NULL) {
+        perror("xen: event channel open");
+        goto err;
+    }
+
+    state->xenstore = xs_daemon_open();
+    if (state->xenstore == NULL) {
+        perror("xen: xenstore open");
+        goto err;
+    }
+
+    xen_create_ioreq_server(xen_domid, &state->ioservid);
+
+    state->exit.notify = xen_exit_notifier;
+    qemu_add_exit_notifier(&state->exit);
+
+    /*
+     * Register wake-up support in QMP query-current-machine API
+     */
+    qemu_register_wakeup_support();
+
+    rc = xen_map_ioreq_server(state);
+    if (rc < 0) {
+        goto err;
+    }
+
+    /* Note: cpus is empty at this point in init */
+    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+
+    rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
+    if (rc < 0) {
+        error_report("failed to enable ioreq server info: error %d handle=%p",
+                     errno, xen_xc);
+        goto err;
+    }
+
+    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
+
+    /* FIXME: how about if we overflow the page here? */
+    for (i = 0; i < max_cpus; i++) {
+        rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                        xen_vcpu_eport(state->shared_page, i));
+        if (rc == -1) {
+            error_report("shared evtchn %d bind error %d", i, errno);
+            goto err;
+        }
+        state->ioreq_local_port[i] = rc;
+    }
+
+    rc = xenevtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                    state->bufioreq_remote_port);
+    if (rc == -1) {
+        error_report("buffered evtchn bind error %d", errno);
+        goto err;
+    }
+    state->bufioreq_local_port = rc;
+
+    /* Init RAM management */
+#ifdef XEN_COMPAT_PHYSMAP
+    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
+#else
+    xen_map_cache_init(NULL, state);
+#endif
+
+    qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
+
+    state->memory_listener = xen_memory_listener;
+    memory_listener_register(&state->memory_listener, &address_space_memory);
+
+    state->io_listener = xen_io_listener;
+    memory_listener_register(&state->io_listener, &address_space_io);
+
+    state->device_listener = xen_device_listener;
+    QLIST_INIT(&state->dev_list);
+    device_listener_register(&state->device_listener);
+
+    xen_bus_init();
+
+    xen_register_backend(state);
+
+    return;
+err:
+    error_report("xen hardware virtual machine initialisation failed");
+    exit(1);
+}
diff --git a/include/hw/i386/xen_arch_hvm.h b/include/hw/i386/xen_arch_hvm.h
new file mode 100644
index 0000000000..1000f8f543
--- /dev/null
+++ b/include/hw/i386/xen_arch_hvm.h
@@ -0,0 +1,11 @@
+#ifndef HW_XEN_ARCH_I386_HVM_H
+#define HW_XEN_ARCH_I386_HVM_H
+
+#include <xen/hvm/ioreq.h>
+#include "hw/xen/xen-hvm-common.h"
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
+void arch_xen_set_memory(XenIOState *state,
+                         MemoryRegionSection *section,
+                         bool add);
+#endif
diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
new file mode 100644
index 0000000000..26674648d8
--- /dev/null
+++ b/include/hw/xen/arch_hvm.h
@@ -0,0 +1,3 @@
+#if defined(TARGET_I386) || defined(TARGET_X86_64)
+#include "hw/i386/xen_arch_hvm.h"
+#endif
diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
new file mode 100644
index 0000000000..2979f84ee2
--- /dev/null
+++ b/include/hw/xen/xen-hvm-common.h
@@ -0,0 +1,98 @@
+#ifndef HW_XEN_HVM_COMMON_H
+#define HW_XEN_HVM_COMMON_H
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+
+#include "cpu.h"
+#include "hw/pci/pci.h"
+#include "hw/hw.h"
+#include "hw/xen/xen_common.h"
+#include "sysemu/runstate.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
+#include "sysemu/xen-mapcache.h"
+
+#include <xen/hvm/ioreq.h>
+
+extern MemoryRegion ram_memory;
+extern MemoryListener xen_io_listener;
+extern DeviceListener xen_device_listener;
+
+//#define DEBUG_XEN_HVM
+
+#ifdef DEBUG_XEN_HVM
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
+{
+    return shared_page->vcpu_ioreq[i].vp_eport;
+}
+static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
+{
+    return &shared_page->vcpu_ioreq[vcpu];
+}
+
+#define BUFFER_IO_MAX_DELAY  100
+
+typedef struct XenPhysmap {
+    hwaddr start_addr;
+    ram_addr_t size;
+    const char *name;
+    hwaddr phys_offset;
+
+    QLIST_ENTRY(XenPhysmap) list;
+} XenPhysmap;
+
+typedef struct XenPciDevice {
+    PCIDevice *pci_dev;
+    uint32_t sbdf;
+    QLIST_ENTRY(XenPciDevice) entry;
+} XenPciDevice;
+
+typedef struct XenIOState {
+    ioservid_t ioservid;
+    shared_iopage_t *shared_page;
+    buffered_iopage_t *buffered_io_page;
+    xenforeignmemory_resource_handle *fres;
+    QEMUTimer *buffered_io_timer;
+    CPUState **cpu_by_vcpu_id;
+    /* per-vcpu evtchn ports for polling ioreq notifications */
+    evtchn_port_t *ioreq_local_port;
+    /* evtchn remote and local ports for buffered io */
+    evtchn_port_t bufioreq_remote_port;
+    evtchn_port_t bufioreq_local_port;
+    /* the evtchn fd for polling */
+    xenevtchn_handle *xce_handle;
+    /* which vcpu we are serving */
+    int send_vcpu;
+
+    struct xs_handle *xenstore;
+    MemoryListener memory_listener;
+    MemoryListener io_listener;
+    QLIST_HEAD(, XenPciDevice) dev_list;
+    DeviceListener device_listener;
+
+    Notifier exit;
+} XenIOState;
+
+void xen_exit_notifier(Notifier *n, void *data);
+
+void xen_region_add(MemoryListener *listener, MemoryRegionSection *section);
+void xen_region_del(MemoryListener *listener, MemoryRegionSection *section);
+void xen_io_add(MemoryListener *listener, MemoryRegionSection *section);
+void xen_io_del(MemoryListener *listener, MemoryRegionSection *section);
+void xen_device_realize(DeviceListener *listener, DeviceState *dev);
+void xen_device_unrealize(DeviceListener *listener, DeviceState *dev);
+
+void xen_hvm_change_state_handler(void *opaque, bool running, RunState rstate);
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener);
+
+void cpu_ioreq_pio(ioreq_t *req);
+#endif /* HW_XEN_HVM_COMMON_H */
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>
Subject: [QEMU][PATCH v5 07/10] hw/xen/xen-hvm-common: Use g_new and error_report
Date: Tue, 31 Jan 2023 14:51:46 -0800
Message-ID: <20230131225149.14764-8-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT106:EE_|PH0PR12MB5468:EE_
X-MS-Office365-Filtering-Correlation-Id: 32835289-f924-41c9-871a-08db03ddcf01
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lKmVv4sLAJnszWivO3MTW1loDbkWoZnh4P/MHWWXo49dAMsIuIlbeRqxpqq/mao/r+S25MToHuJtE9Ad7Z6Z0jkV93BSqmMfmbkqBKdyZ8dWBuW4uFcOZosSaNjP5I29Zk8OeMR3LOQwz0sMNfO2N1EMRkkSCVDtlhBV+xsa0Z8bKVC0nIXUJn33m4JFBmvn9k+ahHYv3CoKvYGIXC2LtjITMH3Yfny5y1vlPZRzL9yDqjauBitKwQTqKD6ZYSja7r3Pzm9gE2rp0n1x76vBUxGlTAQ7Ry1/lReEnbJh2SFsx8OtxJJFBNis/YX2X27E6HgGajfaTuku1fH8PV/+gwdgRciABFPQYHohX3lLd6Jo4AbVOQJD9hCIdKqejgGY2+Mg3WzjrmRlIEVwd0zGIcib94p1FrHilkC9ay5n/CmreMk+HU4lXhGCI89YvlCjYK33VmyviRDyXulHB6c9LzgTlLnQot8lA3KtSrtuGFnNroPGeirElW4lygLqoB0WCs0FOK8kkaQolxBx3+h6UDea0zXls1zEanr68cBa/sX++GrEOA724l5uAcluOSwtxqMVoufO82EkO0X+ZaK/bCn6z7si6oqV/hRsb+4GhM3Mc68CtwrmGQx/hMZ1x4ZgVMEXkBGD03rPHhTTkMHnBGVj7W6gxM4/Iuk4WSG+Zp6eiKaeojlgRwIkWTEsOIuACXHHGjO7KymbZhKCCRzPVQuBDdJgrAlY3li71wvjK84=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(39860400002)(376002)(346002)(396003)(136003)(451199018)(40470700004)(46966006)(36840700001)(356005)(81166007)(86362001)(36860700001)(82740400003)(70586007)(316002)(36756003)(70206006)(8936002)(5660300002)(41300700001)(54906003)(4326008)(6916009)(8676002)(82310400005)(2906002)(40480700001)(44832011)(336012)(426003)(40460700003)(83380400001)(2616005)(47076005)(478600001)(186003)(26005)(1076003)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:19.7134
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 32835289-f924-41c9-871a-08db03ddcf01
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT106.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5468

Replace g_malloc with g_new and perror with error_report.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 hw/xen/xen-hvm-common.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index 5e3c7b073f..077c8dae5b 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -34,7 +34,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
     trace_xen_ram_alloc(ram_addr, size);
 
     nr_pfn = size >> TARGET_PAGE_BITS;
-    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
+    pfn_list = g_new(xen_pfn_t, nr_pfn);
 
     for (i = 0; i < nr_pfn; i++) {
         pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
@@ -731,7 +731,7 @@ void destroy_hvm_domain(bool reboot)
             return;
         }
         if (errno != ENOTTY /* old Xen */) {
-            perror("xendevicemodel_shutdown failed");
+            error_report("xendevicemodel_shutdown failed with error %d", errno);
         }
         /* well, try the old thing then */
     }
@@ -801,7 +801,7 @@ static void xen_do_ioreq_register(XenIOState *state,
     }
 
     /* Note: cpus is empty at this point in init */
-    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
 
     rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
     if (rc < 0) {
@@ -810,7 +810,7 @@ static void xen_do_ioreq_register(XenIOState *state,
         goto err;
     }
 
-    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
+    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
 
     /* FIXME: how about if we overflow the page here? */
     for (i = 0; i < max_cpus; i++) {
@@ -864,13 +864,13 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
 
     state->xce_handle = xenevtchn_open(NULL, 0);
     if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
+        error_report("xen: event channel open failed with error %d", errno);
         goto err;
     }
 
     state->xenstore = xs_daemon_open();
     if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
+        error_report("xen: xenstore open failed with error %d", errno);
         goto err;
     }
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487907.755764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUL-0002HB-MJ; Tue, 31 Jan 2023 22:52:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487907.755764; Tue, 31 Jan 2023 22:52:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUL-0002Gk-H7; Tue, 31 Jan 2023 22:52:29 +0000
Received: by outflank-mailman (input) for mailman id 487907;
 Tue, 31 Jan 2023 22:52:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzUK-0008Hj-Hd
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:28 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2088.outbound.protection.outlook.com [40.107.244.88])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eec783b7-a1b9-11ed-933c-83870f6b2ba8;
 Tue, 31 Jan 2023 23:52:28 +0100 (CET)
Received: from DS7PR03CA0044.namprd03.prod.outlook.com (2603:10b6:5:3b5::19)
 by BY5PR12MB4306.namprd12.prod.outlook.com (2603:10b6:a03:206::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6064.22; Tue, 31 Jan
 2023 22:52:24 +0000
Received: from DM6NAM11FT101.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b5:cafe::8b) by DS7PR03CA0044.outlook.office365.com
 (2603:10b6:5:3b5::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:24 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT101.mail.protection.outlook.com (10.13.172.208) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.36 via Frontend Transport; Tue, 31 Jan 2023 22:52:23 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:23 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:23 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:22 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eec783b7-a1b9-11ed-933c-83870f6b2ba8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z5wBSUyeOZ8rQDdxXb23NmYmVJH4DF1HgPBAEzUdKxFAZVjaz7y0Um+OIdqY/9PU9wehBvLFN65Db6jwdIoxo4CEAYRfGeqJFLWXrwRzk7XXq6Ap865TqnuCa0CpjHbjqyPxxZxCt715XONUxbcNGUgCVP/kyZl521CKbWCHENVMUFyVXNzGbQE8Y9QsSOTl29G3owo9lIf/U6x+pPPqkAfsMOhxho8xccKA1FPlM4nr3BACQEFigjcj7B7/KgG2LpgQRF2afCnargFP+YJjNd9RVGcc0UB9N/mPw43KBSiCCH7S+EXpve07MTp3bPhJgn8CLX3P1ADdcKTuAyfMFw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5c1IzY0YQxuTPTSwy7UvS0mMbpPve8RkE4gupE88sNQ=;
 b=S1QUAKgpMq5V7y6K6fIXooUpM3nj/UyU7jWZMgNO6tAGBTQm1axO5OvmSeHcDokwk4YE+UxAn5bBjp80lfkYU6gLQ2GP+yV3o6A/kavZztKVhSfvDTRjOJo1d2vzuY6BYXVp/9j4HqgEZNMUnU8/zq+3vWpFOj7pT38o5lVVztv+NqvD/ThIAdqnw3tqjj/17De8Ncn18y2MqH/DJvluv/1f4m6FWL746jRi0xVZMtc9owSkF3AUlzEv60cWf8Kef88vF0Okk3iqPFeH8EzWkLIVuXbpVlruY54E8DNvkP+iba+plKiIzkLHZR635pwxKNWQdupppwETttCpCB0IKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5c1IzY0YQxuTPTSwy7UvS0mMbpPve8RkE4gupE88sNQ=;
 b=UMRryyixogad2mWzeOT0OucBvl14YgzhAImM1krSWAiO9iqxLEy432VQVmHAX6/jQZr2KFvMf+wZz17a+QdLtG9UXqCvE/iEZH80NOZKe9bV3g0AP1txPa3wMDLW4LZ6Jh6SiUs00zLim1WRNstcei5agNot7UKp7tYlfbkX7v8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Paolo Bonzini
	<pbonzini@redhat.com>, =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?=
	<marcandre.lureau@redhat.com>, =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?=
	<berrange@redhat.com>, Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [QEMU][PATCH v5 10/10] meson.build: enable xenpv machine build for ARM
Date: Tue, 31 Jan 2023 14:51:49 -0800
Message-ID: <20230131225149.14764-11-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT101:EE_|BY5PR12MB4306:EE_
X-MS-Office365-Filtering-Correlation-Id: 0ba9ab4a-a739-4ca2-e485-08db03ddd16d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CRHApirmURZUasY1yQQUrk3rGjD1IRLKcXSSZ99ty3v/koBoUmzhBeEIsOjwGTmpeJiF/5rh6fkCzT56jZGmPLRMNBpN72EDiDmCyKhKEvqtxvxmXc0Hu3Y3pDTR0Bsla8/O6TjDBDyyQonz/BSsa8WaCg1u1N6akFgPyjinzZIuc6jhiMx/LyTi7JtUks8AylZz/zSfSQid4kPpF5VcEJ0pz6L1b88DYZS0YE1QOEZx0cI0D9OVkgcGkODOF2ge2AAaVI+qCVtdu3tZ6mcMGttUyIeXYfRzckoQoENuMdLEHVycYVv2+cU8j2y62OdP3LdeKrMhe585lE4ziUviCTiMDLcH5bg9DKyVBva/unm7u0fIABKLS7zPHJZMyDckyAS5QXLOhm+54NhR8UjM8J10n7R+fV6Q2lJ3YpzUpoJgI27TSGGR138IlGMq0UMs8MalzETyOLdX7pdBMnPPVC5CZXOlt3NFe37PcHnmYRH5EIt1rUpsUpF9UXqtl/LTvuDTUHptZKjNFUPgNJvX8VUpK5/LCuDh+/IYM62+auTk6TrEwxj7C+gy0m1p7Lu2Cte/VVFCGZowpfH/QkWJRV9l7PkTii0R0F00XBAVEVWKqtHEDgrJvLfOdEwZ9kgnU0la/SE+HAKRAkyM/2MyMFxPqhxPByVigU0THoeP59dNuAcl+oMenwgYWF+956VCyLKiL8xOwwG1Z9PCoat2BUXmAP6ZyHlOkPFIKpEj4hM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(136003)(39860400002)(376002)(346002)(396003)(451199018)(36840700001)(46966006)(40470700004)(316002)(66574015)(426003)(356005)(54906003)(41300700001)(44832011)(2906002)(4744005)(36860700001)(36756003)(5660300002)(8936002)(81166007)(40480700001)(47076005)(83380400001)(82740400003)(40460700003)(6916009)(8676002)(4326008)(70586007)(70206006)(6666004)(2616005)(478600001)(26005)(186003)(86362001)(1076003)(82310400005)(336012)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:23.8571
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0ba9ab4a-a739-4ca2-e485-08db03ddd16d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT101.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4306

Add aarch64-softmmu to the CONFIG_XEN accelerator targets to support building
the xenpv machine for ARM.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meson.build b/meson.build
index 693802adb2..13c4ad1017 100644
--- a/meson.build
+++ b/meson.build
@@ -135,7 +135,7 @@ endif
 if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
   # i386 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {
-    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
+    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu', 'aarch64-softmmu'],
   }
 endif
 if cpu in ['x86', 'x86_64']
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487908.755776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUN-0002gB-4W; Tue, 31 Jan 2023 22:52:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487908.755776; Tue, 31 Jan 2023 22:52:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUM-0002el-Uj; Tue, 31 Jan 2023 22:52:30 +0000
Received: by outflank-mailman (input) for mailman id 487908;
 Tue, 31 Jan 2023 22:52:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzUL-0000Ma-U7
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:30 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on20616.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eedbcaa6-a1b9-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 23:52:28 +0100 (CET)
Received: from MW4PR03CA0019.namprd03.prod.outlook.com (2603:10b6:303:8f::24)
 by MW3PR12MB4554.namprd12.prod.outlook.com (2603:10b6:303:55::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38; Tue, 31 Jan
 2023 22:52:21 +0000
Received: from CO1NAM11FT106.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8f:cafe::bb) by MW4PR03CA0019.outlook.office365.com
 (2603:10b6:303:8f::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:21 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT106.mail.protection.outlook.com (10.13.175.44) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.22 via Frontend Transport; Tue, 31 Jan 2023 22:52:21 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:20 -0600
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:20 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:19 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eedbcaa6-a1b9-11ed-b63b-5f92e7d2e73a
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IStmBFltMf2hZUI7++YVbUMKDrvJ8QDdfxnmlerOXfCXWEufrK1jlTeV/FzE8Pc1fDaOVvmieomEKH0Ri4Wh6c7aj8W9tiOj+fx6qTjnD9InuOHt7joJDe3y89r78xUo5lkD7b/NdPy77OZJ6XCkluFg/GNdMTzo031QOy7cqbdSr1JOC7p7Fwf3xboeNaFvWDDI7Owledr36/SJXnT3hP5mXopVufF9asZ8t1b6yB5MeSrqgtIAe2EUZF2S3Hj55KhRFulwAzxPGEo7e5IFnTf1nNOtYtFGo++14zTEr6OZ03I8sZUPAceLmOGF864bZe/V2gci984a3REoWrgj3A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hmpuP41Ztn/lKlMcuw44+3sOiW4TYVQ+KI34gBksLz8=;
 b=iQjsdozJoalfyo+g4Ke0D0smBCYXLGopNwWsVHJ1863nKK6UAtKCAeUuJCkKiB+hZhRyP0KW9WSvQ6t4Yvx6iREjhnmTpwfzN8fE17YcA6dcHsPl6kDqLLyvMkge3luuNZqq7oM2yqLIf9rH0RVRKVARpa4MUnpNvWmz7wD07pLWwV7iYm+Hp4bEfmSFk9BtavBCYVp3AEmFKfad1icamzAoUqU9Pcl4biVvfdiRFEM2XvFJtUysQVR64QgYHXYSGZtjhLg4qAXGxfuTb9fD2ytpktucP95Pq7iXcnDw40/C/opT5QCsvFMhicbVb2+zkqbERiYar209ZvbTmLCBsg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hmpuP41Ztn/lKlMcuw44+3sOiW4TYVQ+KI34gBksLz8=;
 b=tsQQj7LxTOI3yrwo0I/9s5DcsHegIjL/0YipTyKm2RqOiG045x33xNLeveQtBGUzlt3f9NB6d5s7MdH3cwOvspGkSaH+QD5n4oGqe7l28WZH3llJZZInD2qmp31A4AVYVWugYY1/pmj98tCL6sIK/KctfkJA17dNZmTHHCpgzD8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Paolo Bonzini
	<pbonzini@redhat.com>, =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?=
	<marcandre.lureau@redhat.com>, =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?=
	<berrange@redhat.com>, Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [QEMU][PATCH v5 08/10] meson.build: do not set have_xen_pci_passthrough for aarch64 targets
Date: Tue, 31 Jan 2023 14:51:47 -0800
Message-ID: <20230131225149.14764-9-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT106:EE_|MW3PR12MB4554:EE_
X-MS-Office365-Filtering-Correlation-Id: ef99ddf4-e669-461a-50b8-08db03ddcff2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UVuXJyiAiNYd2meMUIwXSaLSy6HWTJJWqan/oB67squzUPDAP6kNLuO8pMhSMogs+WPezsQJ6mFtaYp3ChIKoHYIRWaimhu+w58LF3pXrSizqcFdUVi8r2ax24hjgE/QI2MhvS7rSHAQPiCmfbVrkwQuIcrs9zKYkHFG7SPJX/LL4SVaQDc2mzjb961miFzHpjI6gsPHUdaa/8prOwUBlfaMXhNZjvbwvt+W93cZE1CSBRIFHU2g2zQiAx8+WGTyTGlOqkcPaB0LnnjTAJPuBCL8YtE1I+TP5qsvN570MLFcLvnKvX5E3+6eVjL+TC3b9uTs/681Sjs8jyMd2Y2lTjvC0IOVDJtOaSoeCBDXPk0N0HRbiJzIx56EW4lfEMmsbwuzljChVYmH7E8OKe1UxALjvP/ab+zVPZwpYwrnsUUemiICZgbPgkq1Cyaju6AqNr9swnMp31ezjfXYkygsogyaMWDP5IYWbK55U2OGmxrs4xuf79Dgqw3xClvHs2/rUdQjM+jYiT6yj4Qn1tY+kgL82cAGtxyvTxO31seqjn5MtIkZCAmgnKj2XR46LZMB2F/FMdNZ+BGcQ9MDJC3CM8aGXvixfDs5gMGzZGYf5v/wps3PT21tOZXc4CNb1LxL4Oe3yW3fm7UMfm5DZCGwBooi+KnXuEik22kKbCq1UGOqdRevNAhpLxg1GbrtgMF0hcZAVv7ea0Ru0h+lSkVnDh9kAKn0QLETrsZNVW3+D2U=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230025)(4636009)(396003)(346002)(376002)(39860400002)(136003)(451199018)(40470700004)(36840700001)(46966006)(4744005)(2906002)(44832011)(36756003)(47076005)(83380400001)(82310400005)(426003)(54906003)(6666004)(40460700003)(1076003)(336012)(26005)(186003)(6916009)(478600001)(2616005)(8676002)(70586007)(70206006)(5660300002)(4326008)(8936002)(40480700001)(41300700001)(86362001)(356005)(81166007)(316002)(36860700001)(82740400003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 22:52:21.2914
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ef99ddf4-e669-461a-50b8-08db03ddcff2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT106.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW3PR12MB4554

From: Stefano Stabellini <stefano.stabellini@amd.com>

have_xen_pci_passthrough is only used for Xen x86 VMs.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 meson.build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/meson.build b/meson.build
index 6d3b665629..693802adb2 100644
--- a/meson.build
+++ b/meson.build
@@ -1471,6 +1471,8 @@ have_xen_pci_passthrough = get_option('xen_pci_passthrough') \
            error_message: 'Xen PCI passthrough requested but Xen not enabled') \
   .require(targetos == 'linux',
            error_message: 'Xen PCI passthrough not available on this platform') \
+  .require(cpu == 'x86'  or cpu == 'x86_64',
+           error_message: 'Xen PCI passthrough not available on this platform') \
   .allowed()
 
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 22:52:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 22:52:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487911.755787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUO-0002zN-NM; Tue, 31 Jan 2023 22:52:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487911.755787; Tue, 31 Jan 2023 22:52:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzUO-0002xs-Ea; Tue, 31 Jan 2023 22:52:32 +0000
Received: by outflank-mailman (input) for mailman id 487911;
 Tue, 31 Jan 2023 22:52:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=skZt=54=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pMzUM-0000Ma-UQ
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 22:52:31 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20622.outbound.protection.outlook.com
 [2a01:111:f400:fe59::622])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eea48eb0-a1b9-11ed-b63b-5f92e7d2e73a;
 Tue, 31 Jan 2023 23:52:28 +0100 (CET)
Received: from DM6PR14CA0051.namprd14.prod.outlook.com (2603:10b6:5:18f::28)
 by MW5PR12MB5649.namprd12.prod.outlook.com (2603:10b6:303:19d::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.36; Tue, 31 Jan
 2023 22:52:22 +0000
Received: from DM6NAM11FT014.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:18f:cafe::cf) by DM6PR14CA0051.outlook.office365.com
 (2603:10b6:5:18f::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6043.38 via Frontend
 Transport; Tue, 31 Jan 2023 22:52:22 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT014.mail.protection.outlook.com (10.13.173.132) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6043.22 via Frontend Transport; Tue, 31 Jan 2023 22:52:22 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 31 Jan
 2023 16:52:21 -0600
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 31 Jan 2023 16:52:20 -0600
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eea48eb0-a1b9-11ed-b63b-5f92e7d2e73a
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j+9zG6Jv9a6g6o63oBmmUBFIu97pUTPA5RNwQBBw7aOO/VEqkmYE9TMzMHz18iGe4ki8lb9GK25NKKELU22XW+vy4c8y9ovQxAHAvYbkPnfSlD0ccUEBFtVjpCkWfl9xunDOte/MKz69dU20Ogz0VY11QVitZToDkCT8E9j8cCc1N12AX525hn69rImkpDhugqQu0VSjMztErjG4SYt+9rnpIDqmJd3wA8GvXzBeYzYep5vaKgreY2DtBdgqnyqBRMHeosPi6CvE/TcVNvbcc7eJjYxxihFyJWfF12nOWZPn2pNxweTL1WIkE006UY7HJH4SuSLxgtfAIv+zYH1OtQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XOERh1QNXlFGYG44QdfuSA3Z2o8yCRpg/mYM5Bl16XE=;
 b=lXWAQPI1Jsghoqo+9ySH3MCP4PkJ9kFca4YKyGzp9XFoADXiN4ltiYQPlzf7x9ndYIgGtZBLMjiNkW0kmO4bLK8KbDMfAsAbWHa4MEpq/x6VAL57cmveo0NAs4ktDKgQUEH6P03dtV9aCgkALFsVMBM628aJzSc4La75Iy5rpjZ4DgDbWv1REA55DrtFlIKQ8Xq+eoppVAnu9nqliaX1FMDVnv9WS0NPVCZsZbndwnPmX1DafPgolMIlQ8otbS0zKlrYwfF9kvam5UmZzRlP/6Ey+8X5GOCBN/yeOhCXdIEoV//iffdKRdhsTQMj0AGs19QNIZcioNmYJ3ctZcsa9w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XOERh1QNXlFGYG44QdfuSA3Z2o8yCRpg/mYM5Bl16XE=;
 b=YSGPrVwBdJluoxsg1EFSQcY2Zo6XZlGtFueWsnXfxdkUYlOWtT4Cb5APiNytd5gFZLiMg1yZ7/26nohHk5hWfwu2Y9kCYegE+JFT99l51df1Xu4ug4EHa921PAWYlxo6MmV/VbcRxmAqnHRpicCPgBjI2yGiflPDcfXwpahfX2Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, <alex.bennee@linaro.org>, Peter Maydell
	<peter.maydell@linaro.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
	"open list:ARM TCG CPUs" <qemu-arm@nongnu.org>
Subject: [QEMU][PATCH v5 09/10] hw/arm: introduce xenpvh machine
Date: Tue, 31 Jan 2023 14:51:48 -0800
Message-ID: <20230131225149.14764-10-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230131225149.14764-1-vikram.garhwal@amd.com>
References: <20230131225149.14764-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

Add a new machine, xenpvh, which creates an IOREQ server to register/connect
with the Xen hypervisor.

Optionally, when CONFIG_TPM is enabled, it also creates a tpm-tis-device, adds
a TPM emulator, connects to swtpm running on the host machine via a chardev
socket, and thereby supports TPM functionality for a guest domain.

Extra command-line options for aarch64 xenpvh QEMU to connect to swtpm:
    -chardev socket,id=chrtpm,path=/tmp/vtpm2/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -machine tpm-base-addr=0x0c000000

swtpm is a software TPM emulator (TPM 1.2 and TPM 2.0) built on libtpms; it
provides access to TPM functionality over socket, chardev and CUSE interfaces.
GitHub repo: https://github.com/stefanberger/swtpm
Example of starting swtpm on the host machine:
    mkdir /tmp/vtpm2
    swtpm socket --tpmstate dir=/tmp/vtpm2 \
    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
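Taken together, the host-side swtpm setup and the QEMU-side options above can
be sketched as one shell fragment (a hedged illustration: the state directory,
socket path and base address are assumptions taken from the examples here, and
swtpm must already be installed on the host):

```shell
#!/bin/sh
# Illustrative sketch combining the swtpm setup and QEMU TPM options above.
TPMSTATE=/tmp/vtpm2
SOCK="$TPMSTATE/swtpm-sock"

# Host side: start the software TPM first (uncomment on a host with swtpm):
# mkdir -p "$TPMSTATE"
# swtpm socket --tpmstate dir="$TPMSTATE" --ctrl type=unixio,path="$SOCK" &

# QEMU side: the extra options that attach the emulated TPM to xenpvh.
TPM_OPTS="-chardev socket,id=chrtpm,path=$SOCK -tpmdev emulator,id=tpm0,chardev=chrtpm -machine tpm-base-addr=0x0c000000"

echo "$TPM_OPTS"
```

These options are appended to the normal xenpvh invocation shown in the new
docs/system/arm/xenpvh.rst file below.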

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 docs/system/arm/xenpvh.rst    |  34 +++++++
 docs/system/target-arm.rst    |   1 +
 hw/arm/meson.build            |   2 +
 hw/arm/xen_arm.c              | 182 ++++++++++++++++++++++++++++++++++
 include/hw/arm/xen_arch_hvm.h |   9 ++
 include/hw/xen/arch_hvm.h     |   2 +
 6 files changed, 230 insertions(+)
 create mode 100644 docs/system/arm/xenpvh.rst
 create mode 100644 hw/arm/xen_arm.c
 create mode 100644 include/hw/arm/xen_arch_hvm.h

diff --git a/docs/system/arm/xenpvh.rst b/docs/system/arm/xenpvh.rst
new file mode 100644
index 0000000000..e1655c7ab8
--- /dev/null
+++ b/docs/system/arm/xenpvh.rst
@@ -0,0 +1,34 @@
+XENPVH (``xenpvh``)
+=========================================
+This machine creates an IOREQ server to register/connect with the Xen Hypervisor.
+
+When TPM is enabled, this machine also creates a tpm-tis-device at a
+user-provided TPM base address, adds a TPM emulator and connects to a swtpm
+application running on the host machine via a chardev socket. This enables
+xenpvh to support TPM functionality for a guest domain.
+
+More information about TPM use and installing the swtpm Linux application can
+be found at: docs/specs/tpm.rst.
+
+Example for starting swtpm on host machine:
+.. code-block:: console
+
+    mkdir /tmp/vtpm2
+    swtpm socket --tpmstate dir=/tmp/vtpm2 \
+    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
+
+Sample QEMU xenpvh commands for running and connecting with Xen:
+.. code-block:: console
+
+    qemu-system-aarch64 -xen-domid 1 \
+    -chardev socket,id=libxl-cmd,path=qmp-libxl-1,server=on,wait=off \
+    -mon chardev=libxl-cmd,mode=control \
+    -chardev socket,id=libxenstat-cmd,path=qmp-libxenstat-1,server=on,wait=off \
+    -mon chardev=libxenstat-cmd,mode=control \
+    -xen-attach -name guest0 -vnc none -display none -nographic \
+    -machine xenpvh -m 1301 \
+    -chardev socket,id=chrtpm,path=/tmp/vtpm2/swtpm-sock \
+    -tpmdev emulator,id=tpm0,chardev=chrtpm -machine tpm-base-addr=0x0C000000
+
+In the above QEMU command, the last two lines connect the xenpvh QEMU instance
+to swtpm via a chardev socket.
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index 91ebc26c6d..af8d7c77d6 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -106,6 +106,7 @@ undocumented; you can get a complete list by running
    arm/stm32
    arm/virt
    arm/xlnx-versal-virt
+   arm/xenpvh
 
 Emulated CPU architecture support
 =================================
diff --git a/hw/arm/meson.build b/hw/arm/meson.build
index b036045603..06bddbfbb8 100644
--- a/hw/arm/meson.build
+++ b/hw/arm/meson.build
@@ -61,6 +61,8 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true: files('fsl-imx7.c', 'mcimx7d-sabre.
 arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
 arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
 arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
+arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
+arm_ss.add_all(xen_ss)
 
 softmmu_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmu-common.c'))
 softmmu_ss.add(when: 'CONFIG_EXYNOS4', if_true: files('exynos4_boards.c'))
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
new file mode 100644
index 0000000000..eaca65af37
--- /dev/null
+++ b/hw/arm/xen_arm.c
@@ -0,0 +1,182 @@
+/*
+ * QEMU ARM Xen PVH Machine
+ *
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/error-report.h"
+#include "qapi/qapi-commands-migration.h"
+#include "qapi/visitor.h"
+#include "hw/boards.h"
+#include "hw/sysbus.h"
+#include "sysemu/block-backend.h"
+#include "sysemu/tpm_backend.h"
+#include "sysemu/sysemu.h"
+#include "hw/xen/xen-legacy-backend.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "sysemu/tpm.h"
+#include "hw/xen/arch_hvm.h"
+
+#define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
+OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
+
+static MemoryListener xen_memory_listener = {
+    .region_add = xen_region_add,
+    .region_del = xen_region_del,
+    .log_start = NULL,
+    .log_stop = NULL,
+    .log_sync = NULL,
+    .log_global_start = NULL,
+    .log_global_stop = NULL,
+    .priority = 10,
+};
+
+struct XenArmState {
+    /*< private >*/
+    MachineState parent;
+
+    XenIOState *state;
+
+    struct {
+        uint64_t tpm_base_addr;
+    } cfg;
+};
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    hw_error("Invalid ioreq type 0x%x\n", req->type);
+
+    return;
+}
+
+void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
+                         bool add)
+{
+}
+
+void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+}
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+}
+
+#ifdef CONFIG_TPM
+static void xen_enable_tpm(XenArmState *xam)
+{
+    Error *errp = NULL;
+    DeviceState *dev;
+    SysBusDevice *busdev;
+
+    TPMBackend *be = qemu_find_tpm_be("tpm0");
+    if (be == NULL) {
+        DPRINTF("Couldn't find the backend for tpm0\n");
+        return;
+    }
+    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
+    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
+    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
+    busdev = SYS_BUS_DEVICE(dev);
+    sysbus_realize_and_unref(busdev, &error_fatal);
+    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
+
+    DPRINTF("Connected tpmdev at address 0x%" PRIx64 "\n", xam->cfg.tpm_base_addr);
+}
+#endif
+
+static void xen_arm_init(MachineState *machine)
+{
+    XenArmState *xam = XEN_ARM(machine);
+
+    xam->state = g_new0(XenIOState, 1);
+
+    xen_register_ioreq(xam->state, machine->smp.cpus, xen_memory_listener);
+
+#ifdef CONFIG_TPM
+    if (xam->cfg.tpm_base_addr) {
+        xen_enable_tpm(xam);
+    } else {
+        DPRINTF("tpm-base-addr is not provided. TPM will not be enabled\n");
+    }
+#endif
+}
+
+#ifdef CONFIG_TPM
+static void xen_arm_get_tpm_base_addr(Object *obj, Visitor *v,
+                                      const char *name, void *opaque,
+                                      Error **errp)
+{
+    XenArmState *xam = XEN_ARM(obj);
+    uint64_t value = xam->cfg.tpm_base_addr;
+
+    visit_type_uint64(v, name, &value, errp);
+}
+
+static void xen_arm_set_tpm_base_addr(Object *obj, Visitor *v,
+                                      const char *name, void *opaque,
+                                      Error **errp)
+{
+    XenArmState *xam = XEN_ARM(obj);
+    uint64_t value;
+
+    if (!visit_type_uint64(v, name, &value, errp)) {
+        return;
+    }
+
+    xam->cfg.tpm_base_addr = value;
+}
+#endif
+
+static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
+{
+
+    MachineClass *mc = MACHINE_CLASS(oc);
+    mc->desc = "Xen Para-virtualized PC";
+    mc->init = xen_arm_init;
+    mc->max_cpus = 1;
+    mc->default_machine_opts = "accel=xen";
+
+#ifdef CONFIG_TPM
+    object_class_property_add(oc, "tpm-base-addr", "uint64_t",
+                              xen_arm_get_tpm_base_addr,
+                              xen_arm_set_tpm_base_addr,
+                              NULL, NULL);
+    object_class_property_set_description(oc, "tpm-base-addr",
+                                          "Set Base address for TPM device.");
+
+    machine_class_allow_dynamic_sysbus_dev(mc, TYPE_TPM_TIS_SYSBUS);
+#endif
+}
+
+static const TypeInfo xen_arm_machine_type = {
+    .name = TYPE_XEN_ARM,
+    .parent = TYPE_MACHINE,
+    .class_init = xen_arm_machine_class_init,
+    .instance_size = sizeof(XenArmState),
+};
+
+static void xen_arm_machine_register_types(void)
+{
+    type_register_static(&xen_arm_machine_type);
+}
+
+type_init(xen_arm_machine_register_types)
diff --git a/include/hw/arm/xen_arch_hvm.h b/include/hw/arm/xen_arch_hvm.h
new file mode 100644
index 0000000000..8fd645e723
--- /dev/null
+++ b/include/hw/arm/xen_arch_hvm.h
@@ -0,0 +1,9 @@
+#ifndef HW_XEN_ARCH_ARM_HVM_H
+#define HW_XEN_ARCH_ARM_HVM_H
+
+#include <xen/hvm/ioreq.h>
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
+void arch_xen_set_memory(XenIOState *state,
+                         MemoryRegionSection *section,
+                         bool add);
+#endif
diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
index 26674648d8..c7c515220d 100644
--- a/include/hw/xen/arch_hvm.h
+++ b/include/hw/xen/arch_hvm.h
@@ -1,3 +1,5 @@
 #if defined(TARGET_I386) || defined(TARGET_X86_64)
 #include "hw/i386/xen_arch_hvm.h"
+#elif defined(TARGET_ARM) || defined(TARGET_AARCH64)
+#include "hw/arm/xen_arch_hvm.h"
 #endif
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Jan 31 23:18:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 23:18:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487962.755799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzt4-0000m7-UX; Tue, 31 Jan 2023 23:18:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487962.755799; Tue, 31 Jan 2023 23:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pMzt4-0000m0-Ra; Tue, 31 Jan 2023 23:18:02 +0000
Received: by outflank-mailman (input) for mailman id 487962;
 Tue, 31 Jan 2023 23:18:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+DaM=54=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pMzt3-0000lu-Vc
 for xen-devel@lists.xenproject.org; Tue, 31 Jan 2023 23:18:02 +0000
Received: from mail-vs1-xe34.google.com (mail-vs1-xe34.google.com
 [2607:f8b0:4864:20::e34])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7fe851a4-a1bd-11ed-b63b-5f92e7d2e73a;
 Wed, 01 Feb 2023 00:17:59 +0100 (CET)
Received: by mail-vs1-xe34.google.com with SMTP id e9so9957464vsj.3
 for <xen-devel@lists.xenproject.org>; Tue, 31 Jan 2023 15:17:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fe851a4-a1bd-11ed-b63b-5f92e7d2e73a
MIME-Version: 1.0
References: <cover.1674819203.git.oleksii.kurochko@gmail.com>
 <06c2c36bd68b2534c757dc4087476e855253680a.1674819203.git.oleksii.kurochko@gmail.com>
 <f5cd1bfb116bfcc86fc2848df7eead05cd1a24c0.camel@gmail.com>
 <CAKmqyKMGiDiPRZBekdKan=+YduSmkB2DoWo5btrtVQ8nS3KMAg@mail.gmail.com> <2f6a3b17-4e41-fe9a-1713-4942b3bd3585@xen.org>
In-Reply-To: <2f6a3b17-4e41-fe9a-1713-4942b3bd3585@xen.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 1 Feb 2023 09:17:31 +1000
Message-ID: <CAKmqyKNwH4-D3dKGQEsW_Zup4OT32C1RwaA7_Sey4fo_jOzFcA@mail.gmail.com>
Subject: Re: [PATCH v7 1/2] xen/riscv: introduce early_printk basic stuff
To: Julien Grall <julien@xen.org>
Cc: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, 
	Connor Davis <connojdavis@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 31, 2023 at 10:03 PM Julien Grall <julien@xen.org> wrote:
>
>
>
> On 31/01/2023 11:44, Alistair Francis wrote:
> > On Sat, Jan 28, 2023 at 12:15 AM Oleksii <oleksii.kurochko@gmail.com> wrote:
> >>
> >> Hi Alistair, Bobby and community,
> >>
> >> I would like to ask your help with the following check:
> >> +/*
> >> + * early_*() can be called from head.S with MMU-off.
> >> + *
> >> + * The following requirements should be honoured for early_*() to
> >> + * work correctly:
> >> + *    It should use PC-relative addressing for accessing symbols.
> >> + *    To achieve that GCC cmodel=medany should be used.
> >> + */
> >> +#ifndef __riscv_cmodel_medany
> >> +#error "early_*() can be called from head.S with MMU-off"
> >> +#endif
> >
> > I have never seen a check like this before.
>
> The check is in the Linux code, see [3].
>
> > I don't really understand
> > what it's looking for, if the linker is unable to call early_*() I
> > would expect it to throw an error. I'm not sure what this is adding.
>
> When the MMU is off during early boot, you want any C function called to
> use PC-relative address rather than absolute address. This is because
> the physical address may not match the virtual address.

Ah!

I forgot that Xen would be compiled for virtual addresses; I have
spent too much time running on systems without an MMU recently.

>
>  From my understanding, on RISC-V, the use of PC-relative addressing is
> only guaranteed with medany. So if you were going to change the cmodel
> (Andrew suggested you would), then early_*() may end up broken.
>
> This check serves as documentation of the assumption and also helps the
> developer notice any change in the model and take the appropriate action
> to remediate it.
>
> >
> > I think this is safe to remove.
> Based on what I wrote above, do you still think this is safe?

With that in mind it's probably worth leaving in then. Maybe the
comment should be updated to make it explicit why we want this check
(I find the current comment not very helpful).

Alistair

>
> Cheers,
>
> >> Please take a look at the following messages and help me to decide if
> >> the check mentioned above should be in early_printk.c or not:
> >> [1]
> >> https://lore.kernel.org/xen-devel/599792fa-b08c-0b1e-10c1-0451519d9e4a@xen.org/
> >> [2]
> >> https://lore.kernel.org/xen-devel/0ec33871-96fa-bd9f-eb1b-eb37d3d7c982@xen.org/
>
> [3]
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/riscv/mm/init.c
>
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jan 31 23:41:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 31 Jan 2023 23:41:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.487970.755810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pN0F2-0004ke-Po; Tue, 31 Jan 2023 23:40:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 487970.755810; Tue, 31 Jan 2023 23:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pN0F2-0004kX-Lo; Tue, 31 Jan 2023 23:40:44 +0000
Received: by outflank-mailman (input) for mailman id 487970;
 Tue, 31 Jan 2023 23:40:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pN0F0-0004kM-Qu; Tue, 31 Jan 2023 23:40:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pN0F0-0000px-Ne; Tue, 31 Jan 2023 23:40:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pN0F0-0002XJ-9L; Tue, 31 Jan 2023 23:40:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pN0F0-0001yE-8o; Tue, 31 Jan 2023 23:40:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-176291-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 176291: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:<job status>:broken:regression
    linux-linus:test-amd64-amd64-pygrub:<job status>:broken:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:host-install(5):broken:regression
    linux-linus:build-armhf:host-install(4):broken:regression
    linux-linus:test-amd64-amd64-pygrub:host-install(5):broken:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:build-armhf:syslog-server:running:regression
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-armhf:capture-logs:broken:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=22b8077d0fcec86c6ed0e0fce9f7e7e5a4c2d56a
X-Osstest-Versions-That:
    linux=9d84bb40bcb30a7fa16f33baa967aeb9953dda78
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 31 Jan 2023 23:40:42 +0000

flight 176291 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/176291/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-amd64-freebsd11-amd64    <job status>                 broken
 test-amd64-amd64-pygrub         <job status>                 broken
 test-amd64-amd64-freebsd11-amd64  5 host-install(5)    broken REGR. vs. 173462
 build-armhf                   4 host-install(4)        broken REGR. vs. 173462
 test-amd64-amd64-pygrub       5 host-install(5)        broken REGR. vs. 173462
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 173462
 build-armhf                   3 syslog-server                running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-armhf                   5 capture-logs          broken blocked in 173462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 173462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 173462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 173462
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                22b8077d0fcec86c6ed0e0fce9f7e7e5a4c2d56a
baseline version:
 linux                9d84bb40bcb30a7fa16f33baa967aeb9953dda78

Last test of basis   173462  2022-10-07 18:41:45 Z  116 days
Failing since        173470  2022-10-08 06:21:34 Z  115 days  238 attempts
Testing same since   176291  2023-01-31 05:35:02 Z    0 days    1 attempts

------------------------------------------------------------
3468 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             broken  
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      broken  
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-job test-amd64-amd64-freebsd11-amd64 broken
broken-job test-amd64-amd64-pygrub broken
broken-step build-armhf capture-logs
broken-step test-amd64-amd64-freebsd11-amd64 host-install(5)
broken-step build-armhf host-install(4)
broken-step test-amd64-amd64-pygrub host-install(5)

Not pushing.

(No revision log; it would be 533815 lines long.)